
Advanced Quantitative Economics with Python

Thomas J. Sargent and John Stachurski

July 10, 2020


Contents

I Tools and Techniques

1 Orthogonal Projections and Their Applications
2 Continuous State Markov Chains
3 Reverse Engineering a la Muth
4 Discrete State Dynamic Programming

II LQ Control

5 Information and Consumption Smoothing
6 Consumption Smoothing with Complete and Incomplete Markets
7 Tax Smoothing with Complete and Incomplete Markets
8 Robustness
9 Markov Jump Linear Quadratic Dynamic Programming
10 How to Pay for a War: Part 1
11 How to Pay for a War: Part 2
12 How to Pay for a War: Part 3
13 Optimal Taxation in an LQ Economy

III Multiple Agent Models

14 Robust Markov Perfect Equilibrium
15 Default Risk and Income Fluctuations
16 Globalization and Cycles
17 Coase’s Theory of the Firm

IV Dynamic Linear Economies

18 Recursive Models of Dynamic Linear Economies
19 Growth in Dynamic Linear Economies
20 Lucas Asset Pricing Using DLE
21 IRFs in Hall Models
22 Permanent Income Model using the DLE Class
23 Rosen Schooling Model
24 Cattle Cycles
25 Shock Non Invertibility

V Classic Linear Models

26 Von Neumann Growth Model (and a Generalization)

VI Time Series Models

27 Covariance Stationary Processes
28 Estimation of Spectra
29 Additive and Multiplicative Functionals
30 Classical Control with Linear Algebra
31 Classical Prediction and Filtering With Linear Algebra

VII Asset Pricing and Finance

32 Asset Pricing II: The Lucas Asset Pricing Model
33 Two Modifications of Mean-Variance Portfolio Theory

VIII Dynamic Programming Squared

34 Stackelberg Plans
35 Ramsey Plans, Time Inconsistency, Sustainable Plans
36 Optimal Taxation with State-Contingent Debt
37 Optimal Taxation without State-Contingent Debt
38 Fluctuating Interest Rates Deliver Fiscal Insurance
39 Fiscal Risk and Government Debt
40 Competitive Equilibria of a Model of Chang
41 Credible Government Policies in a Model of Chang
Part I

Tools and Techniques

Chapter 1

Orthogonal Projections and Their Applications

1.1 Contents

• Overview 1.2
• Key Definitions 1.3
• The Orthogonal Projection Theorem 1.4
• Orthonormal Basis 1.5
• Projection Using Matrix Algebra 1.6
• Least Squares Regression 1.7
• Orthogonalization and Decomposition 1.8
• Exercises 1.9
• Solutions 1.10

1.2 Overview

Orthogonal projection is a cornerstone of vector space methods, with many diverse applica-
tions.
These include, but are not limited to,
• Least squares projection, also known as linear regression
• Conditional expectations for multivariate normal (Gaussian) distributions
• Gram–Schmidt orthogonalization
• QR decomposition
• Orthogonal polynomials
• etc
In this lecture, we focus on
• key ideas
• least squares regression
We’ll require the following imports:

In [1]: import numpy as np


from scipy.linalg import qr


1.2.1 Further Reading

For background and foundational concepts, see our lecture on linear algebra.
For more proofs and greater theoretical detail, see A Primer in Econometric Theory.
For a complete set of proofs in a general setting, see, for example, [52].
For an advanced treatment of projection in the context of least squares prediction, see this
book chapter.

1.3 Key Definitions

Assume 𝑥, 𝑧 ∈ ℝ𝑛 .
Define ⟨𝑥, 𝑧⟩ = ∑𝑖 𝑥𝑖 𝑧𝑖 .
Recall ‖𝑥‖2 = ⟨𝑥, 𝑥⟩.
The law of cosines states that ⟨𝑥, 𝑧⟩ = ‖𝑥‖‖𝑧‖ cos(𝜃) where 𝜃 is the angle between the vectors
𝑥 and 𝑧.
When ⟨𝑥, 𝑧⟩ = 0, then cos(𝜃) = 0 and 𝑥 and 𝑧 are said to be orthogonal and we write 𝑥 ⟂ 𝑧.

For a linear subspace 𝑆 ⊂ ℝ𝑛, we call 𝑥 ∈ ℝ𝑛 orthogonal to 𝑆 if 𝑥 ⟂ 𝑧 for all 𝑧 ∈ 𝑆, and write 𝑥 ⟂ 𝑆.

The orthogonal complement of linear subspace 𝑆 ⊂ ℝ𝑛 is the set 𝑆 ⟂ ∶= {𝑥 ∈ ℝ𝑛 ∶ 𝑥 ⟂ 𝑆}.

𝑆 ⟂ is a linear subspace of ℝ𝑛
• To see this, fix 𝑥, 𝑦 ∈ 𝑆 ⟂ and 𝛼, 𝛽 ∈ ℝ.
• Observe that if 𝑧 ∈ 𝑆, then

⟨𝛼𝑥 + 𝛽𝑦, 𝑧⟩ = 𝛼⟨𝑥, 𝑧⟩ + 𝛽⟨𝑦, 𝑧⟩ = 𝛼 × 0 + 𝛽 × 0 = 0


• Hence 𝛼𝑥 + 𝛽𝑦 ∈ 𝑆 ⟂ , as was to be shown
A set of vectors {𝑥1 , … , 𝑥𝑘 } ⊂ ℝ𝑛 is called an orthogonal set if 𝑥𝑖 ⟂ 𝑥𝑗 whenever 𝑖 ≠ 𝑗.
If {𝑥1 , … , 𝑥𝑘 } is an orthogonal set, then the Pythagorean Law states that

‖𝑥1 + ⋯ + 𝑥𝑘 ‖2 = ‖𝑥1 ‖2 + ⋯ + ‖𝑥𝑘 ‖2

For example, when 𝑘 = 2, 𝑥1 ⟂ 𝑥2 implies

‖𝑥1 + 𝑥2 ‖2 = ⟨𝑥1 + 𝑥2 , 𝑥1 + 𝑥2 ⟩ = ⟨𝑥1 , 𝑥1 ⟩ + 2⟨𝑥2 , 𝑥1 ⟩ + ⟨𝑥2 , 𝑥2 ⟩ = ‖𝑥1 ‖2 + ‖𝑥2 ‖2
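Since NumPy is used throughout this lecture, we can check these identities numerically; the two vectors below are an arbitrary orthogonal pair in ℝ³ (an illustrative sketch):

import numpy as np

x1 = np.array([1.0, 0.0, 2.0])
x2 = np.array([0.0, 3.0, 0.0])   # ⟨x1, x2⟩ = 0, so x1 ⟂ x2
print(np.dot(x1, x2))            # 0.0
# Pythagorean Law: ‖x1 + x2‖² = ‖x1‖² + ‖x2‖²
print(np.linalg.norm(x1 + x2)**2)                      # 14.0
print(np.linalg.norm(x1)**2 + np.linalg.norm(x2)**2)   # 14.0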

1.3.1 Linear Independence vs Orthogonality

If 𝑋 ⊂ ℝ𝑛 is an orthogonal set and 0 ∉ 𝑋, then 𝑋 is linearly independent.


Proving this is a nice exercise.
While the converse is not true, a kind of partial converse holds, as we’ll see below.

1.4 The Orthogonal Projection Theorem

What vector within a linear subspace of ℝ𝑛 best approximates a given vector in ℝ𝑛 ?


The next theorem provides an answer to this question.
Theorem (OPT) Given 𝑦 ∈ ℝ𝑛 and linear subspace 𝑆 ⊂ ℝ𝑛 , there exists a unique solution to
the minimization problem

𝑦̂ ∶= argmin_{𝑧∈𝑆} ‖𝑦 − 𝑧‖

The minimizer 𝑦 ̂ is the unique vector in ℝ𝑛 that satisfies


• 𝑦̂ ∈ 𝑆
• 𝑦 − 𝑦̂ ⟂ 𝑆
The vector 𝑦 ̂ is called the orthogonal projection of 𝑦 onto 𝑆.
The next figure provides some intuition

1.4.1 Proof of Sufficiency

We’ll omit the full proof.


But we will prove sufficiency of the asserted conditions.
To this end, let 𝑦 ∈ ℝ𝑛 and let 𝑆 be a linear subspace of ℝ𝑛 .
Let 𝑦 ̂ be a vector in ℝ𝑛 such that 𝑦 ̂ ∈ 𝑆 and 𝑦 − 𝑦 ̂ ⟂ 𝑆.
Let 𝑧 be any other point in 𝑆 and use the fact that 𝑆 is a linear subspace to deduce

‖𝑦 − 𝑧‖² = ‖(𝑦 − 𝑦̂) + (𝑦̂ − 𝑧)‖² = ‖𝑦 − 𝑦̂‖² + ‖𝑦̂ − 𝑧‖²

(The cross term vanishes because 𝑦̂ − 𝑧 ∈ 𝑆 while 𝑦 − 𝑦̂ ⟂ 𝑆, so the Pythagorean Law applies.)

Hence ‖𝑦 − 𝑧‖ ≥ ‖𝑦 − 𝑦̂‖, which completes the proof.

1.4.2 Orthogonal Projection as a Mapping

For a linear space 𝑌 and a fixed linear subspace 𝑆, we have a functional relationship

𝑦 ∈ 𝑌 ↦ its orthogonal projection 𝑦 ̂ ∈ 𝑆

By the OPT, this is a well-defined mapping or operator from ℝ𝑛 to ℝ𝑛 .


In what follows we denote this operator by a matrix 𝑃
• 𝑃 𝑦 represents the projection 𝑦.̂
• This is sometimes expressed as 𝐸𝑆̂ 𝑦 = 𝑃 𝑦, where 𝐸̂ denotes a wide-sense expecta-
tions operator and the subscript 𝑆 indicates that we are projecting 𝑦 onto the linear
subspace 𝑆.

The operator 𝑃 is called the orthogonal projection mapping onto 𝑆.

It is immediate from the OPT that for any 𝑦 ∈ ℝ𝑛

1. 𝑃 𝑦 ∈ 𝑆 and
2. 𝑦 − 𝑃 𝑦 ⟂ 𝑆

From this, we can deduce additional useful properties, such as

1. ‖𝑦‖2 = ‖𝑃 𝑦‖2 + ‖𝑦 − 𝑃 𝑦‖2 and


2. ‖𝑃 𝑦‖ ≤ ‖𝑦‖

For example, to prove 1, observe that 𝑦 = 𝑃 𝑦 + 𝑦 − 𝑃 𝑦 and apply the Pythagorean law.

Orthogonal Complement

Let 𝑆 ⊂ ℝ𝑛 .
The orthogonal complement of 𝑆 is the linear subspace 𝑆 ⟂ that satisfies 𝑥1 ⟂ 𝑥2 for every
𝑥1 ∈ 𝑆 and 𝑥2 ∈ 𝑆 ⟂ .
Let 𝑌 be a linear space with linear subspace 𝑆 and its orthogonal complement 𝑆 ⟂ .
We write

𝑌 = 𝑆 ⊕ 𝑆⟂

to indicate that for every 𝑦 ∈ 𝑌 there is a unique 𝑥1 ∈ 𝑆 and a unique 𝑥2 ∈ 𝑆⟂ such that 𝑦 = 𝑥1 + 𝑥2.

Moreover, 𝑥1 = 𝐸𝑆̂ 𝑦 and 𝑥2 = 𝑦 − 𝐸𝑆̂ 𝑦.


This amounts to another version of the OPT:
Theorem. If 𝑆 is a linear subspace of ℝ𝑛 , 𝐸𝑆̂ 𝑦 = 𝑃 𝑦 and 𝐸𝑆̂ ⟂ 𝑦 = 𝑀 𝑦, then

𝑃 𝑦 ⟂ 𝑀𝑦 and 𝑦 = 𝑃 𝑦 + 𝑀𝑦 for all 𝑦 ∈ ℝ𝑛

The next figure illustrates

1.5 Orthonormal Basis

An orthogonal set of vectors 𝑂 ⊂ ℝ𝑛 is called an orthonormal set if ‖𝑢‖ = 1 for all 𝑢 ∈ 𝑂.


Let 𝑆 be a linear subspace of ℝ𝑛 and let 𝑂 ⊂ 𝑆.
If 𝑂 is orthonormal and span 𝑂 = 𝑆, then 𝑂 is called an orthonormal basis of 𝑆.
𝑂 is necessarily a basis of 𝑆 (being independent by orthogonality and the fact that no ele-
ment is the zero vector).
One example of an orthonormal set is the canonical basis {𝑒1 , … , 𝑒𝑛 } that forms an orthonor-
mal basis of ℝ𝑛 , where 𝑒𝑖 is the 𝑖 th unit vector.
If {𝑢1 , … , 𝑢𝑘 } is an orthonormal basis of linear subspace 𝑆, then

𝑥 = ∑_{𝑖=1}^{𝑘} ⟨𝑥, 𝑢𝑖⟩𝑢𝑖 for all 𝑥 ∈ 𝑆

To see this, observe that since 𝑥 ∈ span{𝑢1 , … , 𝑢𝑘 }, we can find scalars 𝛼1 , … , 𝛼𝑘 that verify

𝑥 = ∑_{𝑗=1}^{𝑘} 𝛼𝑗 𝑢𝑗 (1)

Taking the inner product with respect to 𝑢𝑖 gives

⟨𝑥, 𝑢𝑖⟩ = ∑_{𝑗=1}^{𝑘} 𝛼𝑗 ⟨𝑢𝑗, 𝑢𝑖⟩ = 𝛼𝑖

Combining this result with (1) verifies the claim.
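As a numerical illustration, take an orthonormal basis of ℝ² (an arbitrary rotation of the canonical basis) and reconstruct a vector from its coordinates (an illustrative sketch):

import numpy as np

θ = 0.7                                   # arbitrary rotation angle
u1 = np.array([np.cos(θ), np.sin(θ)])     # orthonormal basis of R^2
u2 = np.array([-np.sin(θ), np.cos(θ)])
x = np.array([2.0, -1.0])
# x equals the sum of its coordinates ⟨x, u_i⟩ times u_i
x_rebuilt = np.dot(x, u1) * u1 + np.dot(x, u2) * u2
print(np.allclose(x, x_rebuilt))          # True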

1.5.1 Projection onto an Orthonormal Basis

When we have an orthonormal basis for the subspace onto which we are projecting, computing the projection simplifies:
Theorem If {𝑢1 , … , 𝑢𝑘 } is an orthonormal basis for 𝑆, then

𝑃 𝑦 = ∑_{𝑖=1}^{𝑘} ⟨𝑦, 𝑢𝑖⟩𝑢𝑖, ∀ 𝑦 ∈ ℝ𝑛 (2)

Proof: Fix 𝑦 ∈ ℝ𝑛 and let 𝑃 𝑦 be defined as in (2).


Clearly, 𝑃 𝑦 ∈ 𝑆.
We claim that 𝑦 − 𝑃 𝑦 ⟂ 𝑆 also holds.
It suffices to show that 𝑦 − 𝑃 𝑦 ⟂ any basis vector 𝑢𝑖 (why?).
This is true because

⟨𝑦 − ∑_{𝑖=1}^{𝑘} ⟨𝑦, 𝑢𝑖⟩𝑢𝑖, 𝑢𝑗⟩ = ⟨𝑦, 𝑢𝑗⟩ − ∑_{𝑖=1}^{𝑘} ⟨𝑦, 𝑢𝑖⟩⟨𝑢𝑖, 𝑢𝑗⟩ = 0

1.6 Projection Using Matrix Algebra

Let 𝑆 be a linear subspace of ℝ𝑛 and let 𝑦 ∈ ℝ𝑛 .


We want to compute the matrix 𝑃 that verifies

𝐸𝑆̂ 𝑦 = 𝑃 𝑦

Evidently 𝑦 ↦ 𝑃 𝑦 is a linear function from ℝ𝑛 to ℝ𝑛.


This reference is useful https://en.wikipedia.org/wiki/Linear_map#Matrices.
Theorem. Let the columns of 𝑛 × 𝑘 matrix 𝑋 form a basis of 𝑆. Then

𝑃 = 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′

Proof: Given arbitrary 𝑦 ∈ ℝ𝑛 and 𝑃 = 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ , our claim is that



1. 𝑃 𝑦 ∈ 𝑆, and

2. 𝑦 − 𝑃 𝑦 ⟂ 𝑆

Claim 1 is true because

𝑃 𝑦 = 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ 𝑦 = 𝑋𝑎 when 𝑎 ∶= (𝑋 ′ 𝑋)−1 𝑋 ′ 𝑦

An expression of the form 𝑋𝑎 is precisely a linear combination of the columns of 𝑋, and


hence an element of 𝑆.
Claim 2 is equivalent to the statement

𝑦 − 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ 𝑦 ⟂ 𝑋𝑏 for all 𝑏 ∈ ℝ𝐾

This is true: If 𝑏 ∈ ℝ𝐾 , then

(𝑋𝑏)′ [𝑦 − 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ 𝑦] = 𝑏′ [𝑋 ′ 𝑦 − 𝑋 ′ 𝑦] = 0

The proof is now complete.
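Both claims are easy to verify numerically; here is a sketch with an arbitrary full-column-rank 𝑋 and arbitrary 𝑦:

import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 2.0, 4.0])
P = X @ np.linalg.inv(X.T @ X) @ X.T
# Claim 2: y - Py is orthogonal to every column of X
print(np.allclose(X.T @ (y - P @ y), 0))   # True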

1.6.1 Starting with the Basis

It is common in applications to start with 𝑛 × 𝑘 matrix 𝑋 with linearly independent columns


and let

𝑆 ∶= span 𝑋 ∶= span{col1 𝑋, … , col𝑘 𝑋}

Then the columns of 𝑋 form a basis of 𝑆.


From the preceding theorem, 𝑃 = 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ projects 𝑦 onto 𝑆.
In this context, 𝑃 is often called the projection matrix
• The matrix 𝑀 = 𝐼 − 𝑃 satisfies 𝑀 𝑦 = 𝐸𝑆̂ ⟂ 𝑦 and is sometimes called the annihilator
matrix.

1.6.2 The Orthonormal Case

Suppose that 𝑈 is 𝑛 × 𝑘 with orthonormal columns.


Let 𝑢𝑖 ∶= col𝑖 𝑈 (the 𝑖-th column of 𝑈) for each 𝑖, let 𝑆 ∶= span 𝑈 and let 𝑦 ∈ ℝ𝑛 .
We know that the projection of 𝑦 onto 𝑆 is

𝑃 𝑦 = 𝑈 (𝑈 ′ 𝑈 )−1 𝑈 ′ 𝑦

Since 𝑈 has orthonormal columns, we have 𝑈 ′ 𝑈 = 𝐼.


Hence

𝑃 𝑦 = 𝑈 𝑈 ′ 𝑦 = ∑_{𝑖=1}^{𝑘} ⟨𝑢𝑖, 𝑦⟩𝑢𝑖

We have recovered our earlier result about projecting onto the span of an orthonormal basis.

1.6.3 Application: Overdetermined Systems of Equations

Let 𝑦 ∈ ℝ𝑛 and let 𝑋 be 𝑛 × 𝑘 with linearly independent columns.


Given 𝑋 and 𝑦, we seek 𝑏 ∈ ℝ𝑘 satisfying the system of linear equations 𝑋𝑏 = 𝑦.
If 𝑛 > 𝑘 (more equations than unknowns), then the system is said to be overdetermined.
Intuitively, we may not be able to find a 𝑏 that satisfies all 𝑛 equations.
The best approach here is to
• Accept that an exact solution may not exist.
• Look instead for an approximate solution.
By approximate solution, we mean a 𝑏 ∈ ℝ𝑘 such that 𝑋𝑏 is as close to 𝑦 as possible.
The next theorem shows that the solution is well defined and unique.
The proof uses the OPT.
Theorem The unique minimizer of ‖𝑦 − 𝑋𝑏‖ over 𝑏 ∈ ℝ𝐾 is

𝛽 ̂ ∶= (𝑋 ′ 𝑋)−1 𝑋 ′ 𝑦

Proof: Note that

𝑋 𝛽 ̂ = 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ 𝑦 = 𝑃 𝑦

Since 𝑃 𝑦 is the orthogonal projection onto span(𝑋) we have

‖𝑦 − 𝑃 𝑦‖ ≤ ‖𝑦 − 𝑧‖ for any 𝑧 ∈ span(𝑋)

Because 𝑋𝑏 ∈ span(𝑋)

‖𝑦 − 𝑋 𝛽‖̂ ≤ ‖𝑦 − 𝑋𝑏‖ for any 𝑏 ∈ ℝ𝐾

This is what we aimed to show.
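In computations, the same 𝛽̂ is returned by standard least squares routines, so the explicit formula need not be evaluated directly. A quick check with illustrative data:

import numpy as np

X = np.array([[1.0, 2.0],
              [1.0, 5.0],
              [1.0, 7.0]])
y = np.array([1.0, 2.0, 2.5])
β_direct = np.linalg.inv(X.T @ X) @ X.T @ y
β_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(β_direct, β_lstsq))      # True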

1.7 Least Squares Regression

Let’s apply the theory of orthogonal projection to least squares regression.


This approach provides insights about many geometric properties of linear regression.
We treat only some examples.

1.7.1 Squared Risk Measures

Given pairs (𝑥, 𝑦) ∈ ℝ𝐾 × ℝ, consider choosing 𝑓 ∶ ℝ𝐾 → ℝ to minimize the risk

𝑅(𝑓) ∶= 𝔼 [(𝑦 − 𝑓(𝑥))2 ]

If probabilities and hence 𝔼 are unknown, we cannot solve this problem directly.
However, if a sample is available, we can estimate the risk with the empirical risk:

min_{𝑓∈ℱ} (1/𝑁) ∑_{𝑛=1}^{𝑁} (𝑦𝑛 − 𝑓(𝑥𝑛))²

Minimizing this expression is called empirical risk minimization.


The set ℱ is sometimes called the hypothesis space.
The theory of statistical learning tells us that to prevent overfitting we should take the set ℱ
to be relatively simple.
If we let ℱ be the class of linear functions and drop the constant 1/𝑁 (which does not affect the minimizer), the problem is

min_{𝑏∈ℝ𝐾} ∑_{𝑛=1}^{𝑁} (𝑦𝑛 − 𝑏′𝑥𝑛)²

This is the sample linear least squares problem.

1.7.2 Solution

Define the matrices

𝑦 ∶= (𝑦1, 𝑦2, …, 𝑦𝑁)′

and, for each 𝑛,

𝑥𝑛 ∶= (𝑥𝑛1, 𝑥𝑛2, …, 𝑥𝑛𝐾)′ = 𝑛-th observation on all regressors

Finally, let 𝑋 be the 𝑁 × 𝐾 matrix whose 𝑛-th row is 𝑥′𝑛, so that the (𝑛, 𝑘)-th element of 𝑋 is 𝑥𝑛𝑘.

We assume throughout that 𝑁 > 𝐾 and 𝑋 is full column rank.


If you work through the algebra, you will be able to verify that ‖𝑦 − 𝑋𝑏‖² = ∑_{𝑛=1}^{𝑁} (𝑦𝑛 − 𝑏′𝑥𝑛)².
Since monotone transforms don’t affect minimizers, we have

argmin_{𝑏∈ℝ𝐾} ∑_{𝑛=1}^{𝑁} (𝑦𝑛 − 𝑏′𝑥𝑛)² = argmin_{𝑏∈ℝ𝐾} ‖𝑦 − 𝑋𝑏‖

By our results about overdetermined linear systems of equations, the solution is

𝛽 ̂ ∶= (𝑋 ′ 𝑋)−1 𝑋 ′ 𝑦

Let 𝑃 and 𝑀 be the projection and annihilator associated with 𝑋:

𝑃 ∶= 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ and 𝑀 ∶= 𝐼 − 𝑃

The vector of fitted values is

𝑦 ̂ ∶= 𝑋 𝛽 ̂ = 𝑃 𝑦

The vector of residuals is

𝑢̂ ∶= 𝑦 − 𝑦 ̂ = 𝑦 − 𝑃 𝑦 = 𝑀 𝑦

Here are some more standard definitions:
• The total sum of squares is TSS ∶= ‖𝑦‖².
• The sum of squared residuals is SSR ∶= ‖𝑢̂‖².
• The explained sum of squares is ESS ∶= ‖𝑦̂‖².
These quantities satisfy the fundamental decomposition

TSS = ESS + SSR

We can prove this easily using the OPT.


From the OPT we have 𝑦 = 𝑦̂ + 𝑢̂ and 𝑢̂ ⟂ 𝑦̂.
Applying the Pythagorean law completes the proof.
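A numerical confirmation with random illustrative data:

import numpy as np

np.random.seed(42)
X = np.random.randn(50, 3)
y = np.random.randn(50)
P = X @ np.linalg.inv(X.T @ X) @ X.T
y_hat = P @ y                  # fitted values
u_hat = y - y_hat              # residuals
# TSS = ESS + SSR
print(np.allclose(y @ y, y_hat @ y_hat + u_hat @ u_hat))   # True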

1.8 Orthogonalization and Decomposition

Let’s return to the connection between linear independence and orthogonality touched on
above.
A result of much interest is a famous algorithm for constructing orthonormal sets from lin-
early independent sets.
The next section gives details.

1.8.1 Gram-Schmidt Orthogonalization

Theorem For each linearly independent set {𝑥1 , … , 𝑥𝑘 } ⊂ ℝ𝑛 , there exists an orthonormal
set {𝑢1 , … , 𝑢𝑘 } with

span{𝑥1 , … , 𝑥𝑖 } = span{𝑢1 , … , 𝑢𝑖 } for 𝑖 = 1, … , 𝑘

The Gram-Schmidt orthogonalization procedure constructs an orthonormal set {𝑢1, 𝑢2, …, 𝑢𝑘} with exactly this property.
One description of this procedure is as follows:

• For 𝑖 = 1, …, 𝑘, form 𝑆𝑖 ∶= span{𝑥1, …, 𝑥𝑖} and 𝑆𝑖⟂
• Set 𝑣1 = 𝑥1
• For 𝑖 ≥ 2, set 𝑣𝑖 ∶= 𝐸̂_{𝑆𝑖−1⟂} 𝑥𝑖 and 𝑢𝑖 ∶= 𝑣𝑖/‖𝑣𝑖‖

The sequence 𝑢1 , … , 𝑢𝑘 has the stated properties.


A Gram-Schmidt orthogonalization construction is a key idea behind the Kalman filter de-
scribed in A First Look at the Kalman filter.
In some exercises below, you are asked to implement this algorithm and test it using projec-
tion.

1.8.2 QR Decomposition

The following result uses the preceding algorithm to produce a useful decomposition.
Theorem If 𝑋 is 𝑛 × 𝑘 with linearly independent columns, then there exists a factorization
𝑋 = 𝑄𝑅 where
• 𝑅 is 𝑘 × 𝑘, upper triangular, and nonsingular
• 𝑄 is 𝑛 × 𝑘 with orthonormal columns
Proof sketch: Let
• 𝑥𝑗 ∶= col𝑗 (𝑋)
• {𝑢1 , … , 𝑢𝑘 } be orthonormal with the same span as {𝑥1 , … , 𝑥𝑘 } (to be constructed using
Gram–Schmidt)
• 𝑄 be formed from cols 𝑢𝑖
Since 𝑥𝑗 ∈ span{𝑢1 , … , 𝑢𝑗 }, we have

𝑥𝑗 = ∑_{𝑖=1}^{𝑗} ⟨𝑢𝑖, 𝑥𝑗⟩𝑢𝑖 for 𝑗 = 1, …, 𝑘

Some rearranging gives 𝑋 = 𝑄𝑅.

1.8.3 Linear Regression via QR Decomposition

For matrices 𝑋 and 𝑦 that overdetermine 𝛽 in the linear equation system 𝑦 = 𝑋𝛽, we found the least squares approximator 𝛽 ̂ = (𝑋 ′ 𝑋)−1 𝑋 ′ 𝑦.
Using the QR decomposition 𝑋 = 𝑄𝑅 gives

𝛽 ̂ = (𝑅′ 𝑄′ 𝑄𝑅)−1 𝑅′ 𝑄′ 𝑦
= (𝑅′ 𝑅)−1 𝑅′ 𝑄′ 𝑦
= 𝑅−1 (𝑅′ )−1 𝑅′ 𝑄′ 𝑦 = 𝑅−1 𝑄′ 𝑦

Numerical routines would in this case use the alternative form 𝑅𝛽 ̂ = 𝑄′ 𝑦 and back substitu-
tion.
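Here is a sketch of that numerically preferred route, using SciPy’s solve_triangular for the back substitution (the data are illustrative):

import numpy as np
from scipy.linalg import qr, solve_triangular

np.random.seed(0)
X = np.random.randn(50, 3)
y = np.random.randn(50)
Q, R = qr(X, mode='economic')
β_hat = solve_triangular(R, Q.T @ y)      # solve Rβ = Q'y, R upper triangular
# β_hat satisfies the normal equations X'(y - Xβ) = 0
print(np.allclose(X.T @ (y - X @ β_hat), 0))   # True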

1.9 Exercises

1.9.1 Exercise 1

Show that, for any linear subspace 𝑆 ⊂ ℝ𝑛 , 𝑆 ∩ 𝑆 ⟂ = {0}.

1.9.2 Exercise 2

Let 𝑃 = 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ and let 𝑀 = 𝐼 − 𝑃 . Show that 𝑃 and 𝑀 are both idempotent and
symmetric. Can you give any intuition as to why they should be idempotent?

1.9.3 Exercise 3

Using Gram-Schmidt orthogonalization, produce a linear projection of 𝑦 onto the column


space of 𝑋 and verify this using the projection matrix 𝑃 ∶= 𝑋(𝑋 ′ 𝑋)−1 𝑋 ′ and also using
QR decomposition, where:

𝑦 ∶= ⎛ 1 ⎞
     ⎜ 3 ⎟
     ⎝ −3 ⎠

and

𝑋 ∶= ⎛ 1  0 ⎞
     ⎜ 0 −6 ⎟
     ⎝ 2  2 ⎠

1.10 Solutions

1.10.1 Exercise 1

If 𝑥 ∈ 𝑆 and 𝑥 ∈ 𝑆⟂, then we have in particular that ⟨𝑥, 𝑥⟩ = 0. But then ‖𝑥‖² = 0, so 𝑥 = 0.

1.10.2 Exercise 2

Symmetry and idempotence of 𝑀 and 𝑃 can be established using standard rules for matrix
algebra. The intuition behind idempotence of 𝑀 and 𝑃 is that both are orthogonal projec-
tions. After a point is projected into a given subspace, applying the projection again makes
no difference. (A point inside the subspace is not shifted by orthogonal projection onto that
space because it is already the closest point in the subspace to itself.)
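A quick numerical check of idempotence and symmetry, using an arbitrary full-column-rank 𝑋:

import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, -6.0],
              [2.0, 2.0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(3) - P
print(np.allclose(P @ P, P), np.allclose(M @ M, M))   # idempotent
print(np.allclose(P, P.T), np.allclose(M, M.T))       # symmetric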

1.10.3 Exercise 3

Here’s a function that computes the orthonormal vectors using the GS algorithm given in the
lecture

In [2]: def gram_schmidt(X):


"""
Implements Gram-Schmidt orthogonalization.

Parameters
----------
X : an n x k array with linearly independent columns

Returns
-------
U : an n x k array with orthonormal columns

"""

# Set up
n, k = X.shape
U = np.empty((n, k))
I = np.eye(n)

# The first col of U is just the normalized first col of X


v1 = X[:,0]
U[:, 0] = v1 / np.sqrt(np.sum(v1 * v1))

for i in range(1, k):


# Set up
b = X[:, i] # The vector we're going to project
Z = X[:, 0:i] # First i columns of X

# Project onto the orthogonal complement of the col span of Z


M = I - Z @ np.linalg.inv(Z.T @ Z) @ Z.T
u = M @ b

# Normalize
U[:, i] = u / np.sqrt(np.sum(u * u))

return U

Here are the arrays we’ll work with

In [3]: y = [1, 3, -3]

X = [[1, 0],
[0, -6],
[2, 2]]

X, y = [np.asarray(z) for z in (X, y)]

First, let’s try projection of 𝑦 onto the column space of 𝑋 using the ordinary matrix expres-
sion:

In [4]: Py1 = X @ np.linalg.inv(X.T @ X) @ X.T @ y


Py1

Out[4]: array([-0.56521739, 3.26086957, -2.2173913 ])

Now let’s do the same using an orthonormal basis created from our gram_schmidt function

In [5]: U = gram_schmidt(X)
U

Out[5]: array([[ 0.4472136 , -0.13187609],


[ 0. , -0.98907071],
[ 0.89442719, 0.06593805]])

In [6]: Py2 = U @ U.T @ y


Py2

Out[6]: array([-0.56521739, 3.26086957, -2.2173913 ])

This is the same answer. So far so good. Finally, let’s try the same thing but with the basis
obtained via QR decomposition:

In [7]: Q, R = qr(X, mode='economic')


Q

Out[7]: array([[-0.4472136 , -0.13187609],


[-0. , -0.98907071],
[-0.89442719, 0.06593805]])

In [8]: Py3 = Q @ Q.T @ y


Py3

Out[8]: array([-0.56521739, 3.26086957, -2.2173913 ])

Again, we obtain the same answer.


Chapter 2

Continuous State Markov Chains

2.1 Contents

• Overview 2.2
• The Density Case 2.3
• Beyond Densities 2.4
• Stability 2.5
• Exercises 2.6
• Solutions 2.7
• Appendix 2.8
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

2.2 Overview

In a previous lecture, we learned about finite Markov chains, a relatively elementary class of
stochastic dynamic models.
The present lecture extends this analysis to continuous (i.e., uncountable) state Markov
chains.
Most stochastic dynamic models studied by economists either fit directly into this class or can
be represented as continuous state Markov chains after minor modifications.
In this lecture, our focus will be on continuous Markov models that
• evolve in discrete-time
• are often nonlinear
The fact that we accommodate nonlinear models here is significant, because linear stochastic
models have their own highly developed toolset, as we’ll see later on.
The question that interests us most is: Given a particular stochastic dynamic model, how will
the state of the system evolve over time?
In particular,
• What happens to the distribution of the state variables?
• Is there anything we can say about the “average behavior” of these variables?


• Is there a notion of “steady state” or “long-run equilibrium” that’s applicable to the


model?
– If so, how can we compute it?
Answering these questions will lead us to revisit many of the topics that occupied us in the
finite state case, such as simulation, distribution dynamics, stability, ergodicity, etc.

Note
For some people, the term “Markov chain” always refers to a process with a finite
or discrete state space. We follow the mainstream mathematical literature (e.g.,
[46]) in using the term to refer to any discrete time Markov process.

Let’s begin with some imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline
from scipy.stats import lognorm, beta
from quantecon import LAE
from scipy.stats import norm, gaussian_kde

2.3 The Density Case

You are probably aware that some distributions can be represented by densities and some
cannot.
(For example, distributions on the real numbers ℝ that put positive probability on individual
points have no density representation)
We are going to start our analysis by looking at Markov chains where the one-step transition
probabilities have density representations.
The benefit is that the density case offers a very direct parallel to the finite case in terms of
notation and intuition.
Once we’ve built some intuition we’ll cover the general case.

2.3.1 Definitions and Basic Properties

In our lecture on finite Markov chains, we studied discrete-time Markov chains that evolve on
a finite state space 𝑆.
In this setting, the dynamics of the model are described by a stochastic matrix — a nonnega-
tive square matrix 𝑃 = 𝑃 [𝑖, 𝑗] such that each row 𝑃 [𝑖, ⋅] sums to one.
The interpretation of 𝑃 is that 𝑃 [𝑖, 𝑗] represents the probability of transitioning from state 𝑖
to state 𝑗 in one unit of time.
In symbols,

ℙ{𝑋𝑡+1 = 𝑗 | 𝑋𝑡 = 𝑖} = 𝑃 [𝑖, 𝑗]

Equivalently,

• 𝑃 can be thought of as a family of distributions 𝑃 [𝑖, ⋅], one for each 𝑖 ∈ 𝑆


• 𝑃 [𝑖, ⋅] is the distribution of 𝑋𝑡+1 given 𝑋𝑡 = 𝑖
(As you probably recall, when using NumPy arrays, 𝑃 [𝑖, ⋅] is expressed as P[i,:])
In this section, we’ll allow 𝑆 to be a subset of ℝ, such as
• ℝ itself
• the positive reals (0, ∞)
• a bounded interval (𝑎, 𝑏)
The family of discrete distributions 𝑃 [𝑖, ⋅] will be replaced by a family of densities 𝑝(𝑥, ⋅), one
for each 𝑥 ∈ 𝑆.
Analogous to the finite state case, 𝑝(𝑥, ⋅) is to be understood as the distribution (density) of
𝑋𝑡+1 given 𝑋𝑡 = 𝑥.
More formally, a stochastic kernel on 𝑆 is a function 𝑝 ∶ 𝑆 × 𝑆 → ℝ with the property that

1. 𝑝(𝑥, 𝑦) ≥ 0 for all 𝑥, 𝑦 ∈ 𝑆

2. ∫ 𝑝(𝑥, 𝑦)𝑑𝑦 = 1 for all 𝑥 ∈ 𝑆

(Integrals are over the whole space unless otherwise specified)


For example, let 𝑆 = ℝ and consider the particular stochastic kernel 𝑝𝑤 defined by

𝑝𝑤(𝑥, 𝑦) ∶= (1/√(2𝜋)) exp{ −(𝑦 − 𝑥)²/2 } (1)

What kind of model does 𝑝𝑤 represent?


The answer is, the (normally distributed) random walk

𝑋𝑡+1 = 𝑋𝑡 + 𝜉𝑡+1 where {𝜉𝑡} ∼ IID 𝑁(0, 1) (2)

To see this, let’s find the stochastic kernel 𝑝 corresponding to (2).


Recall that 𝑝(𝑥, ⋅) represents the distribution of 𝑋𝑡+1 given 𝑋𝑡 = 𝑥.
Letting 𝑋𝑡 = 𝑥 in (2) and considering the distribution of 𝑋𝑡+1 , we see that 𝑝(𝑥, ⋅) = 𝑁 (𝑥, 1).
In other words, 𝑝 is exactly 𝑝𝑤 , as defined in (1).
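We can write 𝑝𝑤 in code and confirm that, for fixed 𝑥, it integrates to one over 𝑦 (an illustrative check using SciPy):

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def p_w(x, y):
    "Stochastic kernel (1) of the Gaussian random walk."
    return norm.pdf(y, loc=x)          # N(x, 1) density evaluated at y

# ∫ p_w(x, y) dy = 1 for any fixed x
print(quad(lambda y: p_w(0.5, y), -np.inf, np.inf)[0])   # ≈ 1.0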

2.3.2 Connection to Stochastic Difference Equations

In the previous section, we made the connection between stochastic difference equation (2)
and stochastic kernel (1).
In economics and time-series analysis we meet stochastic difference equations of all different
shapes and sizes.
It will be useful for us if we have some systematic methods for converting stochastic difference
equations into stochastic kernels.
To this end, consider the generic (scalar) stochastic difference equation given by

𝑋𝑡+1 = 𝜇(𝑋𝑡 ) + 𝜎(𝑋𝑡 ) 𝜉𝑡+1 (3)

Here we assume that


• {𝜉𝑡} ∼ IID 𝜙, where 𝜙 is a given density on ℝ
• 𝜇 and 𝜎 are given functions on 𝑆, with 𝜎(𝑥) > 0 for all 𝑥
Example 1: The random walk (2) is a special case of (3), with 𝜇(𝑥) = 𝑥 and 𝜎(𝑥) = 1.
Example 2: Consider the ARCH model

𝑋𝑡+1 = 𝛼𝑋𝑡 + 𝜎𝑡 𝜉𝑡+1,   𝜎𝑡² = 𝛽 + 𝛾𝑋𝑡²,   𝛽, 𝛾 > 0

Alternatively, we can write the model as

𝑋𝑡+1 = 𝛼𝑋𝑡 + (𝛽 + 𝛾𝑋𝑡²)^{1/2} 𝜉𝑡+1 (4)

This is a special case of (3) with 𝜇(𝑥) = 𝛼𝑥 and 𝜎(𝑥) = (𝛽 + 𝛾𝑥²)^{1/2}.
Example 3: With stochastic production and a constant savings rate, the one-sector neoclas-
sical growth model leads to a law of motion for capital per worker such as

𝑘𝑡+1 = 𝑠𝐴𝑡+1 𝑓(𝑘𝑡 ) + (1 − 𝛿)𝑘𝑡 (5)

Here
• 𝑠 is the rate of savings
• 𝐴𝑡+1 is a production shock
– The 𝑡 + 1 subscript indicates that 𝐴𝑡+1 is not visible at time 𝑡
• 𝛿 is a depreciation rate
• 𝑓 ∶ ℝ+ → ℝ+ is a production function satisfying 𝑓(𝑘) > 0 whenever 𝑘 > 0
(The fixed savings rate can be rationalized as the optimal policy for a particular set of tech-
nologies and preferences (see [43], section 3.1.2), although we omit the details here).
Equation (5) is a special case of (3) with 𝜇(𝑥) = (1 − 𝛿)𝑥 and 𝜎(𝑥) = 𝑠𝑓(𝑥).
Now let’s obtain the stochastic kernel corresponding to the generic model (3).
To find it, note first that if 𝑈 is a random variable with density 𝑓𝑈 , and 𝑉 = 𝑎 + 𝑏𝑈 for some
constants 𝑎, 𝑏 with 𝑏 > 0, then the density of 𝑉 is given by

𝑓𝑉(𝑣) = (1/𝑏) 𝑓𝑈((𝑣 − 𝑎)/𝑏) (6)

(The proof is below. For a multidimensional version see EDTC, theorem 8.1.3).
Taking (6) as given for the moment, we can obtain the stochastic kernel 𝑝 for (3) by recalling
that 𝑝(𝑥, ⋅) is the conditional density of 𝑋𝑡+1 given 𝑋𝑡 = 𝑥.
In the present case, this is equivalent to stating that 𝑝(𝑥, ⋅) is the density of 𝑌 ∶= 𝜇(𝑥) +
𝜎(𝑥) 𝜉𝑡+1 when 𝜉𝑡+1 ∼ 𝜙.
Hence, by (6),

𝑝(𝑥, 𝑦) = (1/𝜎(𝑥)) 𝜙((𝑦 − 𝜇(𝑥))/𝜎(𝑥)) (7)

For example, the growth model in (5) has stochastic kernel

𝑝(𝑥, 𝑦) = (1/(𝑠𝑓(𝑥))) 𝜙((𝑦 − (1 − 𝛿)𝑥)/(𝑠𝑓(𝑥))) (8)

where 𝜙 is the density of 𝐴𝑡+1 .


(Regarding the state space 𝑆 for this model, a natural choice is (0, ∞) — in which case
𝜎(𝑥) = 𝑠𝑓(𝑥) is strictly positive for all 𝑥, as required)
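As another instance of (7), here is a sketch of the stochastic kernel for the ARCH model (4), taking 𝜙 to be the standard normal density and using illustrative parameter values:

import numpy as np
from scipy.stats import norm

α, β, γ = 0.5, 1.0, 0.2        # illustrative parameters

def p_arch(x, y):
    "Kernel (7) for (4), with μ(x) = αx and σ(x) = (β + γx²)^{1/2}."
    s = np.sqrt(β + γ * x**2)
    return norm.pdf((y - α * x) / s) / s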

2.3.3 Distribution Dynamics

In this section of our lecture on finite Markov chains, we asked the following question: If

1. {𝑋𝑡 } is a Markov chain with stochastic matrix 𝑃


2. the distribution of 𝑋𝑡 is known to be 𝜓𝑡

then what is the distribution of 𝑋𝑡+1 ?


Letting 𝜓𝑡+1 denote the distribution of 𝑋𝑡+1 , the answer we gave was that

𝜓𝑡+1[𝑗] = ∑_{𝑖∈𝑆} 𝑃[𝑖, 𝑗] 𝜓𝑡[𝑖]

This intuitive equality states that the probability of being at 𝑗 tomorrow is the probability of
visiting 𝑖 today and then going on to 𝑗, summed over all possible 𝑖.
In the density case, we just replace the sum with an integral and probability mass functions
with densities, yielding

𝜓𝑡+1 (𝑦) = ∫ 𝑝(𝑥, 𝑦)𝜓𝑡 (𝑥) 𝑑𝑥, ∀𝑦 ∈ 𝑆 (9)

It is convenient to think of this updating process in terms of an operator.


(An operator is just a function, but the term is usually reserved for a function that sends
functions into functions)
Let 𝒟 be the set of all densities on 𝑆, and let 𝑃 be the operator from 𝒟 to itself that takes
density 𝜓 and sends it into new density 𝜓𝑃 , where the latter is defined by

(𝜓𝑃 )(𝑦) = ∫ 𝑝(𝑥, 𝑦)𝜓(𝑥)𝑑𝑥 (10)

This operator is usually called the Markov operator corresponding to 𝑝

Note
Unlike most operators, we write 𝑃 to the right of its argument, instead of to the
left (i.e., 𝜓𝑃 instead of 𝑃 𝜓). This is a common convention, with the intention be-
ing to maintain the parallel with the finite case — see here

With this notation, we can write (9) more succinctly as 𝜓𝑡+1 (𝑦) = (𝜓𝑡 𝑃 )(𝑦) for all 𝑦, or, drop-
ping the 𝑦 and letting “=” indicate equality of functions,

𝜓𝑡+1 = 𝜓𝑡 𝑃 (11)

Equation (11) tells us that if we specify a distribution for 𝜓0 , then the entire sequence of fu-
ture distributions can be obtained by iterating with 𝑃 .
It’s interesting to note that (11) is a deterministic difference equation.
Thus, by converting a stochastic difference equation such as (3) into a stochastic kernel 𝑝 and
hence an operator 𝑃 , we convert a stochastic difference equation into a deterministic one (al-
beit in a much higher dimensional space).
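The finite-state analogue of (11) helps fix ideas: there the update 𝜓𝑃 is an ordinary vector-matrix product. A minimal sketch:

import numpy as np

P = np.array([[0.9, 0.1],            # a 2-state stochastic matrix
              [0.4, 0.6]])
ψ = np.array([1.0, 0.0])             # initial distribution
for _ in range(3):                   # iterate ψ_{t+1} = ψ_t P
    ψ = ψ @ P
print(ψ)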

Note
Some people might be aware that discrete Markov chains are in fact a special case
of the continuous Markov chains we have just described. The reason is that proba-
bility mass functions are densities with respect to the counting measure.

2.3.4 Computation

To learn about the dynamics of a given process, it’s useful to compute and study the se-
quences of densities generated by the model.
One way to do this is to try to implement the iteration described by (10) and (11) using nu-
merical integration.
However, to produce 𝜓𝑃 from 𝜓 via (10), you would need to integrate at every 𝑦, and there is
a continuum of such 𝑦.
Another possibility is to discretize the model, but this introduces errors of unknown size.
A nicer alternative in the present setting is to combine simulation with an elegant estimator
called the look-ahead estimator.
Let’s go over the ideas with reference to the growth model discussed above, the dynamics of
which we repeat here for convenience:

𝑘𝑡+1 = 𝑠𝐴𝑡+1 𝑓(𝑘𝑡 ) + (1 − 𝛿)𝑘𝑡 (12)

Our aim is to compute the sequence {𝜓𝑡 } associated with this model and fixed initial condi-
tion 𝜓0 .
To approximate 𝜓𝑡 by simulation, recall that, by definition, 𝜓𝑡 is the density of 𝑘𝑡 given 𝑘0 ∼
𝜓0 .
If we wish to generate observations of this random variable, all we need to do is

1. draw 𝑘0 from the specified initial condition 𝜓0

2. draw the shocks 𝐴1 , … , 𝐴𝑡 from their specified density 𝜙

3. compute 𝑘𝑡 iteratively via (12)



If we repeat this 𝑛 times, we get 𝑛 independent observations 𝑘_𝑡^1, …, 𝑘_𝑡^𝑛.


With these draws in hand, the next step is to generate some kind of representation of their
distribution 𝜓𝑡 .
A naive approach would be to use a histogram, or perhaps a smoothed histogram using
SciPy’s gaussian_kde function.
However, in the present setting, there is a much better way to do this, based on the look-
ahead estimator.
With this estimator, to construct an estimate of 𝜓𝑡 , we actually generate 𝑛 observations of
𝑘𝑡−1 , rather than 𝑘𝑡 .
Now we take these 𝑛 observations 𝑘_{𝑡−1}^1, …, 𝑘_{𝑡−1}^𝑛 and form the estimate

𝜓_𝑡^𝑛(𝑦) = (1/𝑛) ∑_{𝑖=1}^{𝑛} 𝑝(𝑘_{𝑡−1}^𝑖, 𝑦) (13)

where 𝑝 is the growth model stochastic kernel in (8).


What is the justification for this slightly surprising estimator?
The idea is that, by the strong law of large numbers,

(1/𝑛) ∑_{𝑖=1}^{𝑛} 𝑝(𝑘_{𝑡−1}^𝑖, 𝑦) → 𝔼 𝑝(𝑘_{𝑡−1}^𝑖, 𝑦) = ∫ 𝑝(𝑥, 𝑦)𝜓_{𝑡−1}(𝑥) 𝑑𝑥 = 𝜓_𝑡(𝑦)

with probability one as 𝑛 → ∞.


Here the first equality is by the definition of 𝜓𝑡−1 , and the second is by (9).
We have just shown that our estimator 𝜓𝑡𝑛 (𝑦) in (13) converges almost surely to 𝜓𝑡 (𝑦), which
is just what we want to compute.
In fact, much stronger convergence results are true (see, for example, this paper).

2.3.5 Implementation

A class called LAE for estimating densities by this technique can be found in lae.py.
Given our use of the __call__ method, an instance of LAE acts as a callable object, which
is essentially a function that can store its own data (see this discussion).
This function returns the right-hand side of (13) using
• the data and stochastic kernel that it stores as its instance data
• the value 𝑦 as its argument
The function is vectorized, in the sense that if psi is such an instance and y is an array, then
the call psi(y) acts elementwise.
(This is the reason that we reshaped X and y inside the class — to make vectorization work)
Because the implementation is fully vectorized, it is about as efficient as it would be in C or
Fortran.
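To make the idea concrete, here is a minimal unvectorized version of the look-ahead estimator (13); the LAE class is a fast, vectorized implementation of the same computation:

import numpy as np

def look_ahead(p, draws, y):
    "Estimate ψ_t(y) as (1/n) Σ_i p(k^i_{t-1}, y) from draws of k_{t-1}."
    return np.mean([p(k, y) for k in draws])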

2.3.6 Example

The following code is an example of usage for the stochastic growth model described above

In [3]: # == Define parameters == #


s = 0.2
δ = 0.1
a_σ = 0.4 # A = exp(B) where B ~ N(0, a_σ)
α = 0.4 # We set f(k) = k**α
ψ_0 = beta(5, 5, scale=0.5) # Initial distribution
ϕ = lognorm(a_σ)

def p(x, y):


"""
Stochastic kernel for the growth model with Cobb-Douglas production.
Both x and y must be strictly positive.
"""
d = s * x**α
return ϕ.pdf((y - (1 - δ) * x) / d) / d

n = 10000 # Number of observations at each date t


T = 30 # Compute density of k_t at 1,...,T+1

# == Generate matrix s.t. t-th column is n observations of k_t == #


k = np.empty((n, T))
A = ϕ.rvs((n, T))
k[:, 0] = ψ_0.rvs(n) # Draw first column from initial distribution
for t in range(T-1):
k[:, t+1] = s * A[:, t] * k[:, t]**α + (1 - δ) * k[:, t]

# == Generate T instances of LAE using this data, one for each date t == #
laes = [LAE(p, k[:, t]) for t in range(T)]

# == Plot == #
fig, ax = plt.subplots()
ygrid = np.linspace(0.01, 4.0, 200)
greys = [str(g) for g in np.linspace(0.0, 0.8, T)]
greys.reverse()
for ψ, g in zip(laes, greys):
ax.plot(ygrid, ψ(ygrid), color=g, lw=2, alpha=0.6)
ax.set_xlabel('capital')
ax.set_title(f'Density of $k_1$ (lighter) to $k_T$ (darker) for $T={T}$')
plt.show()

The figure shows part of the density sequence {𝜓𝑡 }, with each density computed via the look-
ahead estimator.
Notice that the sequence of densities shown in the figure seems to be converging — more on
this in just a moment.
Another quick comment is that each of these distributions could be interpreted as a cross-
sectional distribution (recall this discussion).

2.4 Beyond Densities

Up until now, we have focused exclusively on continuous state Markov chains where all condi-
tional distributions 𝑝(𝑥, ⋅) are densities.
As discussed above, not all distributions can be represented as densities.
If the conditional distribution of 𝑋𝑡+1 given 𝑋𝑡 = 𝑥 cannot be represented as a density for
some 𝑥 ∈ 𝑆, then we need a slightly different theory.
The ultimate option is to switch from densities to probability measures, but not all readers
will be familiar with measure theory.
We can, however, construct a fairly general theory using distribution functions.

2.4.1 Example and Definitions

To illustrate the issues, recall that Hopenhayn and Rogerson [35] study a model of firm dy-
namics where individual firm productivity follows the exogenous process

𝑋𝑡+1 = 𝑎 + 𝜌𝑋𝑡 + 𝜉𝑡+1, where {𝜉𝑡} ∼ IID 𝑁(0, 𝜎²)

As is, this fits into the density case we treated above.


However, the authors wanted this process to take values in [0, 1], so they added boundaries at
the endpoints 0 and 1.
One way to write this is

𝑋𝑡+1 = ℎ(𝑎 + 𝜌𝑋𝑡 + 𝜉𝑡+1 ) where ℎ(𝑥) ∶= 𝑥 1{0 ≤ 𝑥 ≤ 1} + 1{𝑥 > 1}

If you think about it, you will see that for any given 𝑥 ∈ [0, 1], the conditional distribution of
𝑋𝑡+1 given 𝑋𝑡 = 𝑥 puts positive probability mass on 0 and 1.
Hence it cannot be represented as a density.
What we can do instead is use cumulative distribution functions (cdfs).
To this end, set

𝐺(𝑥, 𝑦) ∶= ℙ{ℎ(𝑎 + 𝜌𝑥 + 𝜉𝑡+1 ) ≤ 𝑦} (0 ≤ 𝑥, 𝑦 ≤ 1)

This family of cdfs 𝐺(𝑥, ⋅) plays a role analogous to the stochastic kernel in the density case.
The distribution dynamics in (9) are then replaced by

𝐹𝑡+1 (𝑦) = ∫ 𝐺(𝑥, 𝑦)𝐹𝑡 (𝑑𝑥) (14)

Here 𝐹𝑡 and 𝐹𝑡+1 are cdfs representing the distribution of the current state and next period
state.
The intuition behind (14) is essentially the same as for (9).
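For concreteness, here is a sketch of 𝐺 in code with illustrative parameter values. (For 0 ≤ 𝑦 < 1 the event {ℎ(𝑎 + 𝜌𝑥 + 𝜉) ≤ 𝑦} coincides with {𝑎 + 𝜌𝑥 + 𝜉 ≤ 𝑦}, while 𝐺(𝑥, 1) = 1.)

import numpy as np
from scipy.stats import norm

a, ρ, σ = 0.1, 0.9, 0.2        # illustrative parameters

def G(x, y):
    "cdf kernel P{h(a + ρx + ξ) <= y} for 0 <= y < 1, with ξ ~ N(0, σ²)."
    return norm.cdf(y, loc=a + ρ * x, scale=σ)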

2.4.2 Computation

If you wish to compute these cdfs, you cannot use the look-ahead estimator as before.
Indeed, you should not use any density estimator, since the objects you are estimating/com-
puting are not densities.
One good option is simulation as before, combined with the empirical distribution function.

2.5 Stability

In our lecture on finite Markov chains, we also studied stationarity, stability and ergodicity.
Here we will cover the same topics for the continuous case.
We will, however, treat only the density case (as in this section), where the stochastic kernel
is a family of densities.
The general case is relatively similar — references are given below.

2.5.1 Theoretical Results

Analogous to the finite case, given a stochastic kernel 𝑝 and corresponding Markov operator
as defined in (10), a density 𝜓∗ on 𝑆 is called stationary for 𝑃 if it is a fixed point of the op-
erator 𝑃 .
In other words,

𝜓∗ (𝑦) = ∫ 𝑝(𝑥, 𝑦)𝜓∗ (𝑥) 𝑑𝑥, ∀𝑦 ∈ 𝑆 (15)

As with the finite case, if 𝜓∗ is stationary for 𝑃 , and the distribution of 𝑋0 is 𝜓∗ , then, in
view of (11), 𝑋𝑡 will have this same distribution for all 𝑡.
Hence 𝜓∗ is the stochastic equivalent of a steady state.
In the finite case, we learned that at least one stationary distribution exists, although there
may be many.
When the state space is infinite, the situation is more complicated.
Even existence can fail very easily.
For example, the random walk model has no stationary density (see, e.g., EDTC, p. 210).
However, there are well-known conditions under which a stationary density 𝜓∗ exists.
With additional conditions, we can also get a unique stationary density (𝜓 ∈ 𝒟 and 𝜓 =
𝜓𝑃 ⟹ 𝜓 = 𝜓∗ ), and also global convergence in the sense that

∀ 𝜓 ∈ 𝒟, 𝜓𝑃 𝑡 → 𝜓∗ as 𝑡 → ∞ (16)

This combination of existence, uniqueness and global convergence in the sense of (16) is often
referred to as global stability.
Under very similar conditions, we get ergodicity, which means that

(1/𝑛) ∑_{𝑡=1}^{𝑛} ℎ(𝑋𝑡) → ∫ ℎ(𝑥)𝜓∗(𝑥)𝑑𝑥 as 𝑛 → ∞ (17)

for any (measurable) function ℎ ∶ 𝑆 → ℝ such that the right-hand side is finite.
Note that the convergence in (17) does not depend on the distribution (or value) of 𝑋0 .
This is actually very important for simulation — it means we can learn about 𝜓∗ (i.e., ap-
proximate the right-hand side of (17) via the left-hand side) without requiring any special
knowledge about what to do with 𝑋0 .
So what are these conditions we require to get global stability and ergodicity?
In essence, it must be the case that

1. Probability mass does not drift off to the “edges” of the state space.
2. Sufficient “mixing” obtains.

For one such set of conditions see theorem 8.2.14 of EDTC.


In addition

• [61] contains a classic (but slightly outdated) treatment of these topics.


• From the mathematical literature, [41] and [46] give outstanding in-depth treatments.
• Section 8.1.2 of EDTC provides detailed intuition, and section 8.3 gives additional refer-
ences.
• EDTC, section 11.3.4 provides a specific treatment for the growth model we considered
in this lecture.

2.5.2 An Example of Stability

As stated above, the growth model treated here is stable under mild conditions on the primi-
tives.
• See EDTC, section 11.3.4 for more details.
We can see this stability in action — in particular, the convergence in (16) — by simulating
the path of densities from various initial conditions.
Here is such a figure.

All sequences are converging towards the same limit, regardless of their initial condition.
The details regarding initial conditions and so on are given in this exercise, where you are
asked to replicate the figure.

2.5.3 Computing Stationary Densities

In the preceding figure, each sequence of densities is converging towards the unique stationary
density 𝜓∗ .
Even from this figure, we can get a fair idea what 𝜓∗ looks like, and where its mass is located.

However, there is a much more direct way to estimate the stationary density, and it involves
only a slight modification of the look-ahead estimator.
Let’s say that we have a model of the form (3) that is stable and ergodic.
Let 𝑝 be the corresponding stochastic kernel, as given in (7).
To approximate the stationary density 𝜓∗ , we can simply generate a long time-series
𝑋0 , 𝑋1 , … , 𝑋𝑛 and estimate 𝜓∗ via

𝜓∗𝑛(𝑦) = (1/𝑛) ∑_{𝑡=1}^{𝑛} 𝑝(𝑋𝑡, 𝑦) (18)

This is essentially the same as the look-ahead estimator (13), except that now the observa-
tions we generate are a single time-series, rather than a cross-section.
The justification for (18) is that, with probability one as 𝑛 → ∞,

(1/𝑛) ∑_{𝑡=1}^{𝑛} 𝑝(𝑋𝑡, 𝑦) → ∫ 𝑝(𝑥, 𝑦)𝜓∗(𝑥) 𝑑𝑥 = 𝜓∗(𝑦)

where the convergence is by (17) and the equality on the right is by (15).
The right-hand side is exactly what we want to compute.
On top of this asymptotic result, it turns out that the rate of convergence for the look-ahead
estimator is very good.
The first exercise helps illustrate this point.

2.6 Exercises

2.6.1 Exercise 1

Consider the simple threshold autoregressive model

𝑋𝑡+1 = 𝜃|𝑋𝑡| + (1 − 𝜃²)^{1/2} 𝜉𝑡+1 where {𝜉𝑡} ∼ IID 𝑁(0, 1) (19)

This is one of those rare nonlinear stochastic models where an analytical expression for the
stationary density is available.
In particular, provided that |𝜃| < 1, there is a unique stationary density 𝜓∗ given by

𝜓∗(𝑦) = 2 𝜙(𝑦) Φ( 𝜃𝑦 / (1 − 𝜃²)^{1/2} ) (20)

Here 𝜙 is the standard normal density and Φ is the standard normal cdf.
As an exercise, compute the look-ahead estimate of 𝜓∗ , as defined in (18), and compare it
with 𝜓∗ in (20) to see whether they are indeed close for large 𝑛.
In doing so, set 𝜃 = 0.8 and 𝑛 = 500.
The next figure shows the result of such a computation

The additional density (black line) is a nonparametric kernel density estimate, added to the
solution for illustration.
(You can try to replicate it before looking at the solution if you want to)
As you can see, the look-ahead estimator is a much tighter fit than the kernel density estima-
tor.
If you repeat the simulation you will see that this is consistently the case.

2.6.2 Exercise 2

Replicate the figure on global convergence shown above.


The densities come from the stochastic growth model treated at the start of the lecture.
Begin with the code found above.
Use the same parameters.
For the four initial distributions, use the shifted beta distributions

ψ_0 = beta(5, 5, scale=0.5, loc=i*2)

2.6.3 Exercise 3

A common way to compare distributions visually is with boxplots.


To illustrate, let’s generate three artificial data sets and compare them with a boxplot.
The three data sets we will use are:

{𝑋1 , … , 𝑋𝑛 } ∼ 𝐿𝑁 (0, 1), {𝑌1 , … , 𝑌𝑛 } ∼ 𝑁 (2, 1), and {𝑍1 , … , 𝑍𝑛 } ∼ 𝑁 (4, 1),

Here is the code and figure:

In [4]: n = 500
x = np.random.randn(n) # N(0, 1)
x = np.exp(x) # Map x to lognormal
y = np.random.randn(n) + 2.0 # N(2, 1)
z = np.random.randn(n) + 4.0 # N(4, 1)

fig, ax = plt.subplots(figsize=(10, 6.6))


ax.boxplot([x, y, z])
ax.set_xticks((1, 2, 3))
ax.set_ylim(-2, 14)
ax.set_xticklabels(('$X$', '$Y$', '$Z$'), fontsize=16)
plt.show()

Each data set is represented by a box, where the top and bottom of the box are the third and
first quartiles of the data, and the red line in the center is the median.
The boxes give some indication as to
• the location of probability mass for each sample
• whether the distribution is right-skewed (as is the lognormal distribution), etc
Now let’s put these ideas to use in a simulation.
Consider the threshold autoregressive model in (19).
We know that the distribution of 𝑋𝑡 will converge to (20) whenever |𝜃| < 1.
Let’s observe this convergence from different initial conditions using boxplots.
In particular, the exercise is to generate J boxplot figures, one for each initial condition 𝑋0 in

initial_conditions = np.linspace(8, 0, J)

For each 𝑋0 in this set,

1. Generate 𝑘 time-series of length 𝑛, each starting at 𝑋0 and obeying (19).

2. Create a boxplot representing 𝑛 distributions, where the 𝑡-th distribution shows the 𝑘
observations of 𝑋𝑡 .

Use 𝜃 = 0.9, 𝑛 = 20, 𝑘 = 5000, 𝐽 = 8

2.7 Solutions

2.7.1 Exercise 1

Look-ahead estimation of a TAR stationary density, where the TAR model is

𝑋𝑡+1 = 𝜃|𝑋𝑡| + (1 − 𝜃²)^{1/2} 𝜉𝑡+1

and 𝜉𝑡 ∼ 𝑁 (0, 1).


Try running at n = 10, 100, 1000, 10000 to get an idea of the speed of convergence

In [5]: ϕ = norm()
n = 500
θ = 0.8
# == Frequently used constants == #
d = np.sqrt(1 - θ**2)
δ = θ / d

def ψ_star(y):
"True stationary density of the TAR Model"
return 2 * norm.pdf(y) * norm.cdf(δ * y)

def p(x, y):


"Stochastic kernel for the TAR model."
return ϕ.pdf((y - θ * np.abs(x)) / d) / d

Z = ϕ.rvs(n)
X = np.empty(n)
X[0] = 0 # Initial condition (np.empty leaves it undefined otherwise)
for t in range(n-1):
    X[t+1] = θ * np.abs(X[t]) + d * Z[t]
ψ_est = LAE(p, X)
k_est = gaussian_kde(X)

fig, ax = plt.subplots(figsize=(10, 7))


ys = np.linspace(-3, 3, 200)
ax.plot(ys, ψ_star(ys), 'b-', lw=2, alpha=0.6, label='true')
ax.plot(ys, ψ_est(ys), 'g-', lw=2, alpha=0.6, label='look-ahead estimate')
ax.plot(ys, k_est(ys), 'k-', lw=2, alpha=0.6, label='kernel based estimate')
ax.legend(loc='upper left')
plt.show()

2.7.2 Exercise 2

Here’s one program that does the job

In [6]: # == Define parameters == #


s = 0.2
δ = 0.1
a_σ = 0.4 # A = exp(B) where B ~ N(0, a_σ)
α = 0.4 # f(k) = k**α

ϕ = lognorm(a_σ)

def p(x, y):


"Stochastic kernel, vectorized in x. Both x and y must be positive."
d = s * x**α
return ϕ.pdf((y - (1 - δ) * x) / d) / d

n = 1000 # Number of observations at each date t


T = 40 # Compute density of k_t at 1,...,T

fig, axes = plt.subplots(2, 2, figsize=(11, 8))


axes = axes.flatten()
xmax = 6.5

for i in range(4):
ax = axes[i]
ax.set_xlim(0, xmax)
ψ_0 = beta(5, 5, scale=0.5, loc=i*2) # Initial distribution

# == Generate matrix s.t. t-th column is n observations of k_t == #


k = np.empty((n, T))
A = ϕ.rvs((n, T))
k[:, 0] = ψ_0.rvs(n)
for t in range(T-1):
k[:, t+1] = s * A[:,t] * k[:, t]**α + (1 - δ) * k[:, t]

# == Generate T instances of lae using this data, one for each t == #


laes = [LAE(p, k[:, t]) for t in range(T)]

ygrid = np.linspace(0.01, xmax, 150)


greys = [str(g) for g in np.linspace(0.0, 0.8, T)]
greys.reverse()
for ψ, g in zip(laes, greys):
ax.plot(ygrid, ψ(ygrid), color=g, lw=2, alpha=0.6)
ax.set_xlabel('capital')
plt.show()

2.7.3 Exercise 3

Here’s a possible solution.


Note the way we use vectorized code to simulate the 𝑘 time series for one boxplot all at once

In [7]: n = 20
k = 5000
J = 6

θ = 0.9
d = np.sqrt(1 - θ**2)
δ = θ / d

fig, axes = plt.subplots(J, 1, figsize=(10, 4*J))


initial_conditions = np.linspace(8, 0, J)
X = np.empty((k, n))

for j in range(J):

axes[j].set_ylim(-4, 8)
axes[j].set_title(f'time series from t = {initial_conditions[j]}')

Z = np.random.randn(k, n)
X[:, 0] = initial_conditions[j]
for t in range(1, n):
X[:, t] = θ * np.abs(X[:, t-1]) + d * Z[:, t]
axes[j].boxplot(X)

plt.show()

2.8 Appendix

Here’s the proof of (6).


Let 𝐹𝑈 and 𝐹𝑉 be the cumulative distributions of 𝑈 and 𝑉 respectively.
By the definition of 𝑉 , we have 𝐹𝑉 (𝑣) = ℙ{𝑎 + 𝑏𝑈 ≤ 𝑣} = ℙ{𝑈 ≤ (𝑣 − 𝑎)/𝑏}.
In other words, 𝐹𝑉 (𝑣) = 𝐹𝑈 ((𝑣 − 𝑎)/𝑏).
Differentiating with respect to 𝑣 yields (6).
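A quick numerical confirmation of (6), taking 𝑈 ∼ 𝑁(0, 1), 𝑎 = 1 and 𝑏 = 2, so that 𝑉 ∼ 𝑁(1, 4) (illustrative values):

import numpy as np
from scipy.stats import norm

a, b = 1.0, 2.0
v = 0.3                                    # an arbitrary evaluation point
lhs = norm(loc=a, scale=b).pdf(v)          # density of V = a + bU
rhs = norm.pdf((v - a) / b) / b            # right-hand side of (6)
print(np.allclose(lhs, rhs))               # True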
Chapter 3

Reverse Engineering a la Muth

3.1 Contents

• Friedman (1956) and Muth (1960) 3.2


In addition to what’s in Anaconda, this lecture uses the quantecon library.

In [1]: !pip install --upgrade quantecon

We’ll also need the following imports:

In [2]: import matplotlib.pyplot as plt


%matplotlib inline
import numpy as np
import scipy.linalg as la

from quantecon import Kalman


from quantecon import LinearStateSpace
from scipy.stats import norm
np.set_printoptions(linewidth=120, precision=4, suppress=True)

This lecture uses the Kalman filter to reformulate John F. Muth’s first paper [48] about ratio-
nal expectations.
Muth used classical prediction methods to reverse engineer a stochastic process that renders
optimal Milton Friedman’s [21] “adaptive expectations” scheme.

3.2 Friedman (1956) and Muth (1960)

Milton Friedman [21] (1956) posited that consumers forecast their future disposable income
with the adaptive expectations scheme



𝑦∗_{𝑡+𝑖,𝑡} = 𝐾 ∑_{𝑗=0}^{∞} (1 − 𝐾)^𝑗 𝑦𝑡−𝑗 (1)


where 𝐾 ∈ (0, 1) and 𝑦∗_{𝑡+𝑖,𝑡} is a forecast of future 𝑦 over horizon 𝑖.
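Equation (1) is equivalent to the exponential smoothing recursion 𝑓𝑡 = 𝐾𝑦𝑡 + (1 − 𝐾)𝑓𝑡−1, where 𝑓𝑡 denotes the forecast formed at 𝑡. A minimal sketch with made-up data:

import numpy as np

K = 0.5                                  # illustrative smoothing parameter
y = np.array([10.0, 12.0, 11.0, 13.0])   # made-up income observations
f = y[0]             # initialize with first observation (truncated history)
for y_t in y[1:]:
    f = K * y_t + (1 - K) * f            # adaptive expectations update
print(f)                                 # forecast after the last observation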


Milton Friedman justified the exponential smoothing forecasting scheme (1) informally,
noting that it seemed a plausible way to use past income to forecast future income.
In his first paper about rational expectations, John F. Muth [48] reverse-engineered a univariate stochastic process {𝑦𝑡}_{𝑡=−∞}^{∞} for which Milton Friedman’s adaptive expectations scheme gives linear least squares forecasts of 𝑦𝑡+𝑖 for any horizon 𝑖.
Muth sought a setting and a sense in which Friedman’s forecasting scheme is optimal.
That is, Muth asked for what optimal forecasting question is Milton Friedman’s adaptive
expectation scheme the answer.
Muth (1960) used classical prediction methods based on lag-operators and 𝑧-transforms to
find the answer to his question.
Please see lectures Classical Control with Linear Algebra and Classical Filtering and Predic-
tion with Linear Algebra for an introduction to the classical tools that Muth used.
Rather than using those classical tools, in this lecture we apply the Kalman filter to express
the heart of Muth’s analysis concisely.
The lecture First Look at Kalman Filter describes the Kalman filter.
We’ll use limiting versions of the Kalman filter corresponding to what are called stationary
values in that lecture.

3.2.1 A Process for Which Adaptive Expectations are Optimal

Suppose that an observable 𝑦𝑡 is the sum of an unobserved random walk 𝑥𝑡 and an IID shock
𝜖2,𝑡 :

𝑥𝑡+1 = 𝑥𝑡 + 𝜎𝑥 𝜖1,𝑡+1
𝑦𝑡 = 𝑥𝑡 + 𝜎𝑦 𝜖2,𝑡      (2)

where

(𝜖1,𝑡+1, 𝜖2,𝑡)′ ∼ 𝒩(0, 𝐼)

is an IID process.
Note: A property of the state-space representation (2) is that in general neither 𝜖1,𝑡 nor 𝜖2,𝑡
is in the space spanned by square-summable linear combinations of 𝑦𝑡 , 𝑦𝑡−1 , ….
In general, (𝜖1,𝑡, 𝜖2,𝑡)′ has more information about future 𝑦𝑡+𝑗’s than is contained in 𝑦𝑡, 𝑦𝑡−1, ….
We can use the asymptotic or stationary values of the Kalman gain and the one-step-ahead
conditional state covariance matrix to compute a time-invariant innovations representation

𝑥̂𝑡+1 = 𝑥̂𝑡 + 𝐾𝑎𝑡
𝑦𝑡 = 𝑥̂𝑡 + 𝑎𝑡      (3)

where 𝑥𝑡̂ = 𝐸[𝑥𝑡 |𝑦𝑡−1 , 𝑦𝑡−2 , …] and 𝑎𝑡 = 𝑦𝑡 − 𝐸[𝑦𝑡 |𝑦𝑡−1 , 𝑦𝑡−2 , …].
Note: A key property about an innovations representation is that 𝑎𝑡 is in the space spanned
by square summable linear combinations of 𝑦𝑡 , 𝑦𝑡−1 , ….

For more ramifications of this property, see the lectures Shock Non-Invertibility and Recursive
Models of Dynamic Linear Economies.
Later we’ll stack these state-space systems (2) and (3) to display some classic findings of
Muth.
But first, let’s create an instance of the state-space system (2) then apply the quantecon
Kalman class, then uses it to construct the associated “innovations representation”

In [3]: # Make some parameter choices


# sigx/sigy are state noise std err and measurement noise std err
μ_0, σ_x, σ_y = 10, 1, 5

# Create a LinearStateSpace object


A, C, G, H = 1, σ_x, 1, σ_y
ss = LinearStateSpace(A, C, G, H, mu_0=μ_0)

# Set prior and initialize the Kalman type


x_hat_0, Σ_0 = 10, 1
kmuth = Kalman(ss, x_hat_0, Σ_0)

# Computes stationary values which we need for the innovation


# representation
S1, K1 = kmuth.stationary_values()

# Form innovation representation state-space


Ak, Ck, Gk, Hk = A, K1, G, 1

ssk = LinearStateSpace(Ak, Ck, Gk, Hk, mu_0=x_hat_0)

3.2.2 Some Useful State-Space Math

Now we want to map the time-invariant innovations representation (3) and the original state-
space system (2) into a convenient form for deducing the impulse responses from the original
shocks to the 𝑥𝑡 and 𝑥𝑡̂ .
Putting both of these representations into a single state-space system is yet another applica-
tion of the insight that “finding the state is an art”.
We’ll define a state vector and appropriate state-space matrices that allow us to represent
both systems in one fell swoop.
Note that

𝑎𝑡 = 𝑥𝑡 + 𝜎𝑦 𝜖2,𝑡 − 𝑥𝑡̂

so that

𝑥̂𝑡+1 = 𝑥̂𝑡 + 𝐾(𝑥𝑡 + 𝜎𝑦 𝜖2,𝑡 − 𝑥̂𝑡)
      = (1 − 𝐾)𝑥̂𝑡 + 𝐾𝑥𝑡 + 𝐾𝜎𝑦 𝜖2,𝑡

The stacked system


⎡ 𝑥𝑡+1  ⎤   ⎡ 1     0      0  ⎤ ⎡ 𝑥𝑡   ⎤   ⎡ 𝜎𝑥  0 ⎤
⎢ 𝑥̂𝑡+1  ⎥ = ⎢ 𝐾   1 − 𝐾   𝐾𝜎𝑦 ⎥ ⎢ 𝑥̂𝑡   ⎥ + ⎢ 0   0 ⎥ ⎡ 𝜖1,𝑡+1 ⎤
⎣ 𝜖2,𝑡+1 ⎦   ⎣ 0     0      0  ⎦ ⎣ 𝜖2,𝑡 ⎦   ⎣ 0   1 ⎦ ⎣ 𝜖2,𝑡+1 ⎦

⎡ 𝑦𝑡 ⎤   ⎡ 1   0   𝜎𝑦 ⎤ ⎡ 𝑥𝑡   ⎤
⎢    ⎥ = ⎢            ⎥ ⎢ 𝑥̂𝑡   ⎥
⎣ 𝑎𝑡 ⎦   ⎣ 1  −1   𝜎𝑦 ⎦ ⎣ 𝜖2,𝑡 ⎦

is a state-space system that tells us how the shocks (𝜖1,𝑡+1, 𝜖2,𝑡+1)′ affect the states 𝑥̂𝑡+1, 𝑥𝑡, the observable 𝑦𝑡, and the innovation 𝑎𝑡.
With this tool at our disposal, let’s form the composite system and simulate it

In [4]: # Create grand state-space for y_t, a_t as observed vars -- Use
# stacking trick above
Af = np.array([[ 1, 0, 0],
[K1, 1 - K1, K1 * σ_y],
[ 0, 0, 0]])
Cf = np.array([[σ_x, 0],
[ 0, K1 * σ_y],
[ 0, 1]])
Gf = np.array([[1, 0, σ_y],
[1, -1, σ_y]])

μ_true, μ_prior = 10, 10


μ_f = np.array([μ_true, μ_prior, 0]).reshape(3, 1)

# Create the state-space


ssf = LinearStateSpace(Af, Cf, Gf, mu_0=μ_f)

# Draw observations of y from the state-space model


N = 50
xf, yf = ssf.simulate(N)

print(f"Kalman gain = {K1}")


print(f"Conditional variance = {S1}")

Kalman gain = [[0.181]]


Conditional variance = [[5.5249]]

Now that we have simulated our joint system, we have 𝑥𝑡 , 𝑥𝑡̂ , and 𝑦𝑡 .
We can now investigate how these variables are related by plotting some key objects.

3.2.3 Estimates of Unobservables

First, let’s plot the hidden state 𝑥𝑡 and the filtered version 𝑥̂𝑡, which is the linear least squares projection of 𝑥𝑡 on the history 𝑦𝑡−1, 𝑦𝑡−2, …

In [5]: fig, ax = plt.subplots()


ax.plot(xf[0, :], label="$x_t$")

ax.plot(xf[1, :], label="Filtered $x_t$")


ax.legend()
ax.set_xlabel("Time")
ax.set_title(r"$x$ vs $\hat{x}$")
plt.show()

Note how 𝑥𝑡 and 𝑥𝑡̂ differ.


For Friedman, 𝑥𝑡̂ and not 𝑥𝑡 is the consumer’s idea about her/his permanent income.

3.2.4 Relation between Unobservable and Observable

Now let’s plot 𝑥𝑡 and 𝑦𝑡 .


Recall that 𝑦𝑡 is just 𝑥𝑡 plus white noise

In [6]: fig, ax = plt.subplots()


ax.plot(yf[0, :], label="y")
ax.plot(xf[0, :], label="x")
ax.legend()
ax.set_title(r"$x$ and $y$")
ax.set_xlabel("Time")
plt.show()

We see above that 𝑦 seems to look like white noise around the values of 𝑥.

3.2.5 Innovations

Recall that we wrote down the innovation representation that depended on 𝑎𝑡 . We now plot
the innovations {𝑎𝑡 }:

In [7]: fig, ax = plt.subplots()


ax.plot(yf[1, :], label="a")
ax.legend()
ax.set_title(r"Innovation $a_t$")
ax.set_xlabel("Time")
plt.show()

3.2.6 MA and AR Representations

Now we shall extract from the Kalman instance kmuth coefficients of


• a fundamental moving average representation that represents 𝑦𝑡 as a one-sided moving
sum of current and past 𝑎𝑡 s that are square summable linear combinations of 𝑦𝑡 , 𝑦𝑡−1 , ….
• a univariate autoregression representation that depicts the coefficients in a linear least
square projection of 𝑦𝑡 on the semi-infinite history 𝑦𝑡−1 , 𝑦𝑡−2 , ….
Then we’ll plot each of them

In [8]: # Kalman Methods for MA and VAR


coefs_ma = kmuth.stationary_coefficients(5, "ma")
coefs_var = kmuth.stationary_coefficients(5, "var")

# Coefficients come in a list of arrays, but we


# want to plot them and so need to stack into an array
coefs_ma_array = np.vstack(coefs_ma)
coefs_var_array = np.vstack(coefs_var)

fig, ax = plt.subplots(2)
ax[0].plot(coefs_ma_array, label="MA")
ax[0].legend()
ax[1].plot(coefs_var_array, label="VAR")
ax[1].legend()

plt.show()

The moving average coefficients in the top panel show tell-tale signs of 𝑦𝑡 being a process
whose first difference is a first-order moving average.
The autoregressive coefficients decline geometrically with decay rate (1 − 𝐾).
These are exactly the target outcomes that Muth (1960) aimed to reverse engineer.

In [9]: print(f'decay parameter 1 - K1 = {1 - K1}')

decay parameter 1 - K1 = [[0.819]]


Chapter 4

Discrete State Dynamic Programming

4.1 Contents

• Overview 4.2
• Discrete DPs 4.3
• Solving Discrete DPs 4.4
• Example: A Growth Model 4.5
• Exercises 4.6
• Solutions 4.7
• Appendix: Algorithms 4.8
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

4.2 Overview

In this lecture we discuss a family of dynamic programming problems with the following fea-
tures:

1. a discrete state space and discrete choices (actions)

2. an infinite horizon

3. discounted rewards

4. Markov state transitions

We call such problems discrete dynamic programs or discrete DPs.


Discrete DPs are the workhorses in much of modern quantitative economics, including
• monetary economics
• search and labor economics
• household savings and consumption theory
• investment theory


• asset pricing
• industrial organization, etc.
When a given model is not inherently discrete, it is common to replace it with a discretized
version in order to use discrete DP techniques.
This lecture covers
• the theory of dynamic programming in a discrete setting, plus examples and applica-
tions
• a powerful set of routines for solving discrete DPs from the QuantEcon code library
Let’s start with some imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline
import quantecon as qe
import scipy.sparse as sparse
from quantecon import compute_fixed_point
from quantecon.markov import DiscreteDP

4.2.1 How to Read this Lecture

We use dynamic programming in many applied lectures, such as


• The shortest path lecture
• The McCall search model lecture
The objective of this lecture is to provide a more systematic and theoretical treatment, in-
cluding algorithms and implementation, while focusing on the discrete case.

4.2.2 Code

Among other things, the QuantEcon code for solving discrete DPs offers


• a flexible, well-designed interface
• multiple solution methods, including value function and policy function iteration
• high-speed operations via carefully optimized JIT-compiled functions
• the ability to scale to large problems by minimizing vectorized operators and allowing
operations on sparse matrices
JIT compilation relies on Numba, which should work seamlessly if you are using Anaconda as
suggested.

4.2.3 References

For background reading on dynamic programming and additional applications, see, for exam-
ple,
• [43]
• [34], section 3.5
• [50]
• [61]
• [55]

• [47]
• EDTC, chapter 5

4.3 Discrete DPs

Loosely speaking, a discrete DP is a maximization problem with an objective function of the
form

$$\mathbb{E} \sum_{t=0}^{\infty} \beta^t r(s_t, a_t) \tag{1}$$

where
• 𝑠𝑡 is the state variable
• 𝑎𝑡 is the action
• 𝛽 is a discount factor
• 𝑟(𝑠𝑡 , 𝑎𝑡 ) is interpreted as a current reward when the state is 𝑠𝑡 and the action chosen is
𝑎𝑡
Each pair (𝑠𝑡 , 𝑎𝑡 ) pins down transition probabilities 𝑄(𝑠𝑡 , 𝑎𝑡 , 𝑠𝑡+1 ) for the next period state
𝑠𝑡+1 .
Thus, actions influence not only current rewards but also the future time path of the state.
The essence of dynamic programming problems is to trade off current rewards vs favorable
positioning of the future state (modulo randomness).
Examples:
• consuming today vs saving and accumulating assets
• accepting a job offer today vs seeking a better one in the future
• exercising an option now vs waiting

4.3.1 Policies

The most fruitful way to think about solutions to discrete DP problems is to compare poli-
cies.
In general, a policy is a randomized map from past actions and states to current action.
In the setting formalized below, it suffices to consider so-called stationary Markov policies,
which consider only the current state.
In particular, a stationary Markov policy is a map 𝜎 from states to actions
• 𝑎𝑡 = 𝜎(𝑠𝑡 ) indicates that 𝑎𝑡 is the action to be taken in state 𝑠𝑡
It is known that, for any arbitrary policy, there exists a stationary Markov policy that domi-
nates it at least weakly.
• See section 5.5 of [50] for discussion and proofs.
In what follows, stationary Markov policies are referred to simply as policies.
The aim is to find an optimal policy, in the sense of one that maximizes (1).
Let’s now step through these ideas more carefully.

4.3.2 Formal Definition

Formally, a discrete dynamic program consists of the following components:

1. A finite set of states 𝑆 = {0, … , 𝑛 − 1}.

2. A finite set of feasible actions 𝐴(𝑠) for each state 𝑠 ∈ 𝑆, and a corresponding set of
feasible state-action pairs.

SA ∶= {(𝑠, 𝑎) ∣ 𝑠 ∈ 𝑆, 𝑎 ∈ 𝐴(𝑠)}

3. A reward function 𝑟 ∶ SA → ℝ.

4. A transition probability function 𝑄 ∶ SA → Δ(𝑆), where Δ(𝑆) is the set of probability
distributions over 𝑆.

5. A discount factor 𝛽 ∈ [0, 1).

We also use the notation 𝐴 ∶= ⋃𝑠∈𝑆 𝐴(𝑠) = {0, … , 𝑚 − 1} and call this set the action space.
A policy is a function 𝜎 ∶ 𝑆 → 𝐴.
A policy is called feasible if it satisfies 𝜎(𝑠) ∈ 𝐴(𝑠) for all 𝑠 ∈ 𝑆.
Denote the set of all feasible policies by Σ.
If a decision-maker uses a policy 𝜎 ∈ Σ, then
• the current reward at time 𝑡 is 𝑟(𝑠𝑡 , 𝜎(𝑠𝑡 ))
• the probability that 𝑠𝑡+1 = 𝑠′ is 𝑄(𝑠𝑡 , 𝜎(𝑠𝑡 ), 𝑠′ )
For each 𝜎 ∈ Σ, define
• 𝑟𝜎 by 𝑟𝜎 (𝑠) ∶= 𝑟(𝑠, 𝜎(𝑠))
• 𝑄𝜎 by 𝑄𝜎 (𝑠, 𝑠′ ) ∶= 𝑄(𝑠, 𝜎(𝑠), 𝑠′ )
Notice that 𝑄𝜎 is a stochastic matrix on 𝑆.
It gives transition probabilities of the controlled chain when we follow policy 𝜎.
If we think of 𝑟𝜎 as a column vector, then so is 𝑄𝑡𝜎 𝑟𝜎 , and the 𝑠-th row of the latter has the
interpretation

(𝑄𝑡𝜎 𝑟𝜎 )(𝑠) = 𝔼[𝑟(𝑠𝑡 , 𝜎(𝑠𝑡 )) ∣ 𝑠0 = 𝑠] when {𝑠𝑡 } ∼ 𝑄𝜎 (2)

Comments
• {𝑠𝑡 } ∼ 𝑄𝜎 means that the state is generated by stochastic matrix 𝑄𝜎 .
• See this discussion on computing expectations of Markov chains for an explanation of
the expression in (2).
Notice that we’re not really distinguishing between functions from 𝑆 to ℝ and vectors in ℝ𝑛 .
This is natural because they are in one to one correspondence.

4.3.3 Value and Optimality

Let 𝑣𝜎 (𝑠) denote the discounted sum of expected reward flows from policy 𝜎 when the initial
state is 𝑠.
To calculate this quantity we pass the expectation through the sum in (1) and use (2) to get


$$v_\sigma(s) = \sum_{t=0}^{\infty} \beta^t (Q_\sigma^t r_\sigma)(s) \qquad (s \in S)$$

This function is called the policy value function for the policy 𝜎.
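
For concreteness, note that since $\beta \in [0, 1)$, the geometric series above sums to $v_\sigma = (I - \beta Q_\sigma)^{-1} r_\sigma$. Here is a minimal sketch of this closed-form calculation; the two-state policy objects Q_σ and r_σ are made-up numbers used purely for illustration.

import numpy as np

β = 0.95
Q_σ = np.array([[0.9, 0.1],        # Q_σ: a 2 x 2 stochastic matrix
                [0.2, 0.8]])
r_σ = np.array([1.0, 2.0])         # r_σ: rewards under the policy

# v_σ = Σ_t β^t Q_σ^t r_σ = (I - β Q_σ)^{-1} r_σ
v_σ = np.linalg.solve(np.eye(2) - β * Q_σ, r_σ)

# sanity check: v_σ solves v = r_σ + β Q_σ v
assert np.allclose(v_σ, r_σ + β * Q_σ @ v_σ)
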
The optimal value function, or simply value function, is the function 𝑣∗ ∶ 𝑆 → ℝ defined by

$$v^*(s) = \max_{\sigma \in \Sigma} v_\sigma(s) \qquad (s \in S)$$

(We can use max rather than sup here because the domain is a finite set)
A policy 𝜎 ∈ Σ is called optimal if 𝑣𝜎 (𝑠) = 𝑣∗ (𝑠) for all 𝑠 ∈ 𝑆.
Given any 𝑤 ∶ 𝑆 → ℝ, a policy 𝜎 ∈ Σ is called 𝑤-greedy if

$$\sigma(s) \in \operatorname*{arg\,max}_{a \in A(s)} \left\{ r(s, a) + \beta \sum_{s' \in S} w(s') Q(s, a, s') \right\} \qquad (s \in S)$$

As discussed in detail below, optimal policies are precisely those that are 𝑣∗ -greedy.
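
As a concrete illustration, here is a minimal sketch of computing a 𝑤-greedy policy when 𝑟 and 𝑄 are stored as dense NumPy arrays R (shape n × m, with -np.inf marking infeasible state-action pairs) and Q (shape n × m × n), the storage format used in the growth model example below; the arrays themselves are placeholders to be supplied by the user.

import numpy as np

def greedy_policy(R, Q, β, w):
    # action values: r(s, a) + β Σ_{s'} w(s') Q(s, a, s') for every pair (s, a)
    vals = R + β * (Q @ w)         # shape (n, m)
    # σ(s) picks a maximizing action; -np.inf rules out infeasible actions
    return vals.argmax(axis=1)

For example, applied to a candidate value function, greedy_policy returns the associated greedy policy as an array of action indices.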

4.3.4 Two Operators

It is useful to define the following operators:


• The Bellman operator $T : \mathbb{R}^S \to \mathbb{R}^S$ is defined by

$$(Tv)(s) = \max_{a \in A(s)} \left\{ r(s, a) + \beta \sum_{s' \in S} v(s') Q(s, a, s') \right\} \qquad (s \in S)$$

• For any policy function 𝜎 ∈ Σ, the operator $T_\sigma : \mathbb{R}^S \to \mathbb{R}^S$ is defined by

$$(T_\sigma v)(s) = r(s, \sigma(s)) + \beta \sum_{s' \in S} v(s') Q(s, \sigma(s), s') \qquad (s \in S)$$

This can be written more succinctly in operator notation as

𝑇𝜎 𝑣 = 𝑟𝜎 + 𝛽𝑄𝜎 𝑣

The two operators are both monotone


• 𝑣 ≤ 𝑤 implies 𝑇 𝑣 ≤ 𝑇 𝑤 pointwise on 𝑆, and similarly for 𝑇𝜎
They are also contraction mappings with modulus 𝛽
• ‖𝑇 𝑣 − 𝑇 𝑤‖ ≤ 𝛽‖𝑣 − 𝑤‖ and similarly for 𝑇𝜎 , where ‖⋅‖ is the max norm

For any policy 𝜎, its value 𝑣𝜎 is the unique fixed point of 𝑇𝜎 .


For proofs of these results and those in the next section, see, for example, EDTC, chapter 10.

4.3.5 The Bellman Equation and the Principle of Optimality

The main principle of the theory of dynamic programming is that


• the optimal value function 𝑣∗ is the unique solution to the Bellman equation

$$v(s) = \max_{a \in A(s)} \left\{ r(s, a) + \beta \sum_{s' \in S} v(s') Q(s, a, s') \right\} \qquad (s \in S)$$

or in other words, 𝑣∗ is the unique fixed point of 𝑇 , and


• 𝜎∗ is an optimal policy function if and only if it is 𝑣∗ -greedy
By the definition of greedy policies given above, this means that

$$\sigma^*(s) \in \operatorname*{arg\,max}_{a \in A(s)} \left\{ r(s, a) + \beta \sum_{s' \in S} v^*(s') Q(s, a, s') \right\} \qquad (s \in S)$$

4.4 Solving Discrete DPs

Now that the theory has been set out, let’s turn to solution methods.
The code for solving discrete DPs is available in ddp.py from the QuantEcon.py code library.
It implements the three most important solution methods for discrete dynamic programs,
namely
• value function iteration
• policy function iteration
• modified policy function iteration
Let’s briefly review these algorithms and their implementation.

4.4.1 Value Function Iteration

Perhaps the most familiar method for solving all manner of dynamic programs is value func-
tion iteration.
This algorithm uses the fact that the Bellman operator 𝑇 is a contraction mapping with fixed
point 𝑣∗ .
Hence, iterative application of 𝑇 to any initial function 𝑣0 ∶ 𝑆 → ℝ converges to 𝑣∗ .
The details of the algorithm can be found in the appendix.

4.4.2 Policy Function Iteration

This routine, also known as Howard’s policy improvement algorithm, exploits more closely the
particular structure of a discrete DP problem.

Each iteration consists of

1. A policy evaluation step that computes the value 𝑣𝜎 of a policy 𝜎 by solving the linear
equation 𝑣 = 𝑇𝜎 𝑣.

2. A policy improvement step that computes a 𝑣𝜎 -greedy policy.

In the current setting, policy iteration computes an exact optimal policy in finitely many iter-
ations.
• See theorem 10.2.6 of EDTC for a proof.
The details of the algorithm can be found in the appendix.

4.4.3 Modified Policy Function Iteration

Modified policy iteration replaces the policy evaluation step in policy iteration with “partial
policy evaluation”.
The latter computes an approximation to the value of a policy 𝜎 by iterating 𝑇𝜎 for a speci-
fied number of times.
This approach can be useful when the state space is very large and the linear system in the
policy evaluation step of policy iteration is correspondingly difficult to solve.
The details of the algorithm can be found in the appendix.

4.5 Example: A Growth Model

Let’s consider a simple consumption-saving model.


A single household either consumes or stores its own output of a single consumption good.
The household starts each period with current stock 𝑠.
Next, the household chooses a quantity 𝑎 to store and consumes 𝑐 = 𝑠 − 𝑎
• Storage is limited by a global upper bound 𝑀 .
• Flow utility is 𝑢(𝑐) = 𝑐𝛼 .
Output is drawn from a discrete uniform distribution on {0, … , 𝐵}.
The next period stock is therefore

𝑠′ = 𝑎 + 𝑈 where 𝑈 ∼ 𝑈 [0, … , 𝐵]

The discount factor is 𝛽 ∈ [0, 1).

4.5.1 Discrete DP Representation

We want to represent this model in the format of a discrete dynamic program.


To this end, we take
• the state variable to be the stock 𝑠

• the state space to be 𝑆 = {0, … , 𝑀 + 𝐵}


– hence 𝑛 = 𝑀 + 𝐵 + 1
• the action to be the storage quantity 𝑎
• the set of feasible actions at 𝑠 to be 𝐴(𝑠) = {0, … , min{𝑠, 𝑀 }}
– hence 𝐴 = {0, … , 𝑀 } and 𝑚 = 𝑀 + 1
• the reward function to be 𝑟(𝑠, 𝑎) = 𝑢(𝑠 − 𝑎)
• the transition probabilities to be

$$Q(s, a, s') := \begin{cases} \frac{1}{B+1} & \text{if } a \le s' \le a + B \\ 0 & \text{otherwise} \end{cases} \tag{3}$$

4.5.2 Defining a DiscreteDP Instance

This information will be used to create an instance of DiscreteDP by passing the following
information

1. An 𝑛 × 𝑚 reward array 𝑅.

2. An 𝑛 × 𝑚 × 𝑛 transition probability array 𝑄.

3. A discount factor 𝛽.

For 𝑅 we set 𝑅[𝑠, 𝑎] = 𝑢(𝑠 − 𝑎) if 𝑎 ≤ 𝑠 and −∞ otherwise.


For 𝑄 we follow the rule in (3).
Note:
• The feasibility constraint is embedded into 𝑅 by setting 𝑅[𝑠, 𝑎] = −∞ for 𝑎 ∉ 𝐴(𝑠).
• Probability distributions for (𝑠, 𝑎) with 𝑎 ∉ 𝐴(𝑠) can be arbitrary.
The following code sets up these objects for us

In [3]: class SimpleOG:

def __init__(self, B=10, M=5, α=0.5, β=0.9):


"""
Set up R, Q and β, the three elements that define an instance of
the DiscreteDP class.
"""

self.B, self.M, self.α, self.β = B, M, α, β


self.n = B + M + 1
self.m = M + 1

self.R = np.empty((self.n, self.m))


self.Q = np.zeros((self.n, self.m, self.n))

self.populate_Q()
self.populate_R()

def u(self, c):


return c**self.α

def populate_R(self):
"""
Populate the R matrix, with R[s, a] = -np.inf for infeasible
state-action pairs.
"""
for s in range(self.n):
for a in range(self.m):
self.R[s, a] = self.u(s - a) if a <= s else -np.inf

def populate_Q(self):
"""
Populate the Q matrix by setting

Q[s, a, s'] = 1 / (1 + B) if a <= s' <= a + B

and zero otherwise.


"""

for a in range(self.m):
self.Q[:, a, a:(a + self.B + 1)] = 1.0 / (self.B + 1)

Let’s run this code and create an instance of SimpleOG.

In [4]: g = SimpleOG() # Use default parameters

Instances of DiscreteDP are created using the signature DiscreteDP(R, Q, β).


Let’s create an instance using the objects stored in g

In [5]: ddp = qe.markov.DiscreteDP(g.R, g.Q, g.β)

Now that we have an instance ddp of DiscreteDP we can solve it as follows

In [6]: results = ddp.solve(method='policy_iteration')

Let’s see what we’ve got here

In [7]: dir(results)

Out[7]: ['max_iter', 'mc', 'method', 'num_iter', 'sigma', 'v']

(In IPython version 4.0 and above you can also type results. and hit the tab key)
The most important attributes are v, the value function, and σ, the optimal policy

In [8]: results.v

Out[8]: array([19.01740222, 20.01740222, 20.43161578, 20.74945302, 21.04078099,


21.30873018, 21.54479816, 21.76928181, 21.98270358, 22.18824323,
22.3845048 , 22.57807736, 22.76109127, 22.94376708, 23.11533996,
23.27761762])

In [9]: results.sigma

Out[9]: array([0, 0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 5, 5, 5])

Since we’ve used policy iteration, these results will be exact unless we hit the iteration bound
max_iter.
Let’s make sure this didn’t happen

In [10]: results.max_iter

Out[10]: 250

In [11]: results.num_iter

Out[11]: 3

Another interesting object is results.mc, which is the controlled chain defined by 𝑄𝜎∗ ,
where 𝜎∗ is the optimal policy.
In other words, it gives the dynamics of the state when the agent follows the optimal policy.
Since this object is an instance of MarkovChain from QuantEcon.py (see this lecture for more
discussion), we can easily simulate it, compute its stationary distribution and so on.

In [12]: results.mc.stationary_distributions

Out[12]: array([[0.01732187, 0.04121063, 0.05773956, 0.07426848, 0.08095823,


0.09090909, 0.09090909, 0.09090909, 0.09090909, 0.09090909,
0.09090909, 0.07358722, 0.04969846, 0.03316953, 0.01664061,
0.00995086]])

Here’s the same information in a bar graph
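
A bar chart of this stationary distribution can be produced along the following lines, a minimal sketch that reuses the results object computed above:

fig, ax = plt.subplots(figsize=(8, 5))
ψ_star = results.mc.stationary_distributions[0]
ax.bar(np.arange(len(ψ_star)), ψ_star, align='center', alpha=0.6)
ax.set_xlabel('state (stock level)')
ax.set_ylabel('stationary probability')
plt.show()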

What happens if the agent is more patient?

In [13]: ddp = qe.markov.DiscreteDP(g.R, g.Q, 0.99) # Increase β to 0.99


results = ddp.solve(method='policy_iteration')
results.mc.stationary_distributions

Out[13]: array([[0.00546913, 0.02321342, 0.03147788, 0.04800681, 0.05627127,


0.09090909, 0.09090909, 0.09090909, 0.09090909, 0.09090909,
0.09090909, 0.08543996, 0.06769567, 0.05943121, 0.04290228,
0.03463782]])

If we look at the bar graph, we can see the rightward shift in probability mass.

4.5.3 State-Action Pair Formulation

The DiscreteDP class, in fact, provides a second interface for setting up an instance.


One of the advantages of this alternative setup is that it permits the use of a sparse matrix
for Q.
(An example of using sparse matrices is given in the exercises below)
The call signature of the second formulation is DiscreteDP(R, Q, β, s_indices,
a_indices) where
• s_indices and a_indices are arrays of equal length L enumerating all feasible
state-action pairs
• R is an array of length L giving corresponding rewards
• Q is an L x n transition probability array
Here’s how we could set up these objects for the preceding example

In [14]: B, M, α, β = 10, 5, 0.5, 0.9


n = B + M + 1
m = M + 1

def u(c):
return c**α

s_indices = []
a_indices = []
Q = []
R = []

b = 1.0 / (B + 1)

for s in range(n):
for a in range(min(M, s) + 1): # All feasible a at this s
s_indices.append(s)
a_indices.append(a)
q = np.zeros(n)
q[a:(a + B + 1)] = b # b on these values, otherwise 0
Q.append(q)
R.append(u(s - a))

ddp = qe.markov.DiscreteDP(R, Q, β, s_indices, a_indices)

For larger problems, you might need to write this code more efficiently by vectorizing or using
Numba.
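
For instance, the loops above can be replaced by array operations. The following is a minimal sketch, specific to this model and reusing B, M, n, m, u, b, and β from the preceding cell, of a vectorized construction of the same objects:

s_grid, a_grid = np.arange(n), np.arange(m)

# a is feasible at s iff a <= min(s, M)
mask = a_grid[None, :] <= np.minimum(s_grid, M)[:, None]   # (n, m) boolean
s_indices, a_indices = np.where(mask)                      # row-major order,
L = len(s_indices)                                         # as in the loop

R = u(s_indices - a_indices)       # rewards at the feasible pairs

Q = np.zeros((L, n))
cols = a_indices[:, None] + np.arange(B + 1)[None, :]      # s' = a, ..., a + B
Q[np.arange(L)[:, None], cols] = b

ddp = qe.markov.DiscreteDP(R, Q, β, s_indices, a_indices)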

4.6 Exercises

In the stochastic optimal growth lecture from our introductory lecture series, we solve a
benchmark model that has an analytical solution.
The exercise is to replicate this solution using DiscreteDP.

4.7 Solutions

4.7.1 Setup

Details of the model can be found in the lecture on optimal growth.


We let 𝑓(𝑘) = 𝑘𝛼 with 𝛼 = 0.65, 𝑢(𝑐) = log 𝑐, and 𝛽 = 0.95

In [15]: α = 0.65
f = lambda k: k**α
u = np.log
β = 0.95

Here we want to solve a finite state version of the continuous state model above.
We discretize the state space into a grid of size grid_size=500, from $10^{-6}$ to grid_max=2

In [16]: grid_max = 2
grid_size = 500
grid = np.linspace(1e-6, grid_max, grid_size)

We choose the action to be the amount of capital to save for the next period (the state is the
capital stock at the beginning of the period).
Thus the state indices and the action indices are both 0, …, grid_size-1.
Action (indexed by) a is feasible at state (indexed by) s if and only if grid[a] <
f(grid[s]) (zero consumption is not allowed because of the log utility).
Thus the Bellman equation is:

$$v(k) = \max_{0 < k' < f(k)} \left\{ u(f(k) - k') + \beta v(k') \right\},$$

where 𝑘′ is the capital stock in the next period.


The transition probability array Q will be highly sparse (in fact it is degenerate as the model
is deterministic), so we formulate the problem with state-action pairs, to represent Q in scipy
sparse matrix format.
We first construct indices for state-action pairs:

In [17]: # Consumption matrix, with nonpositive consumption included


C = f(grid).reshape(grid_size, 1) - grid.reshape(1, grid_size)

# State-action indices
s_indices, a_indices = np.where(C > 0)

# Number of state-action pairs


L = len(s_indices)

print(L)
print(s_indices)
print(a_indices)

118841
[ 0 1 1 … 499 499 499]
[ 0 0 1 … 389 390 391]

Reward vector R (of length L):

In [18]: R = u(C[s_indices, a_indices])

(Degenerate) transition probability matrix Q (of shape (L, grid_size)), where we choose
the scipy.sparse.lil_matrix format, while any format will do (internally it will be converted to
the csr format):

In [19]: Q = sparse.lil_matrix((L, grid_size))


Q[np.arange(L), a_indices] = 1

(If you are familiar with the data structure of scipy.sparse.csr_matrix, the following is the
most efficient way to create the Q matrix in the current case)

In [20]: # data = np.ones(L)


# indptr = np.arange(L+1)
# Q = sparse.csr_matrix((data, a_indices, indptr), shape=(L, grid_size))

Discrete growth model:

In [21]: ddp = DiscreteDP(R, Q, β, s_indices, a_indices)

Notes
Here we intensively vectorized the operations on arrays to simplify the code.
As noted, however, vectorization is memory-intensive, and it can be prohibitively so for
grids of large size.

4.7.2 Solving the Model

Solve the dynamic optimization problem:

In [22]: res = ddp.solve(method='policy_iteration')


v, σ, num_iter = res.v, res.sigma, res.num_iter
num_iter

Out[22]: 10

Note that sigma contains the indices of the optimal capital stocks to save for the next pe-
riod. The following translates sigma to the corresponding consumption vector.

In [23]: # Optimal consumption in the discrete version


c = f(grid) - grid[σ]

# Exact solution of the continuous version


ab = α * β
c1 = (np.log(1 - ab) + np.log(ab) * ab / (1 - ab)) / (1 - β)
c2 = α / (1 - ab)

def v_star(k):
return c1 + c2 * np.log(k)

def c_star(k):
return (1 - ab) * k**α

Let us compare the solution of the discrete model with that of the original continuous model

In [24]: fig, ax = plt.subplots(1, 2, figsize=(14, 4))


ax[0].set_ylim(-40, -32)
ax[0].set_xlim(grid[0], grid[-1])
ax[1].set_xlim(grid[0], grid[-1])

lb0 = 'discrete value function'


ax[0].plot(grid, v, lw=2, alpha=0.6, label=lb0)

lb0 = 'continuous value function'


ax[0].plot(grid, v_star(grid), 'k-', lw=1.5, alpha=0.8, label=lb0)
ax[0].legend(loc='upper left')

lb1 = 'discrete optimal consumption'


ax[1].plot(grid, c, 'b-', lw=2, alpha=0.6, label=lb1)

lb1 = 'continuous optimal consumption'


ax[1].plot(grid, c_star(grid), 'k-', lw=1.5, alpha=0.8, label=lb1)
ax[1].legend(loc='upper left')
plt.show()

The outcomes appear very close to those of the continuous version.


Except for the “boundary” point, the value functions are very close:

In [25]: np.abs(v - v_star(grid)).max()

Out[25]: 121.49819147053378

In [26]: np.abs(v - v_star(grid))[1:].max()

Out[26]: 0.012681735127500815

The optimal consumption functions are close as well:

In [27]: np.abs(c - c_star(grid)).max()

Out[27]: 0.003826523100010082

In fact, the optimal consumption obtained in the discrete version is not really monotone, but
the decrements are quite small:

In [28]: diff = np.diff(c)


(diff >= 0).all()

Out[28]: False

In [29]: dec_ind = np.where(diff < 0)[0]


len(dec_ind)

Out[29]: 174

In [30]: np.abs(diff[dec_ind]).max()

Out[30]: 0.001961853339766839

The value function is monotone:

In [31]: (np.diff(v) > 0).all()

Out[31]: True

4.7.3 Comparison of the Solution Methods

Let us solve the problem with the other two methods.

Value Iteration

In [32]: ddp.epsilon = 1e-4


ddp.max_iter = 500
res1 = ddp.solve(method='value_iteration')
res1.num_iter

Out[32]: 294

In [33]: np.array_equal(σ, res1.sigma)

Out[33]: True

Modified Policy Iteration

In [34]: res2 = ddp.solve(method='modified_policy_iteration')


res2.num_iter

Out[34]: 16

In [35]: np.array_equal(σ, res2.sigma)

Out[35]: True

Speed Comparison

In [36]: %timeit ddp.solve(method='value_iteration')


%timeit ddp.solve(method='policy_iteration')
%timeit ddp.solve(method='modified_policy_iteration')

289 ms ± 7.66 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
24.8 ms ± 737 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
30.3 ms ± 564 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

As is often the case, policy iteration and modified policy iteration are much faster than value
iteration.

4.7.4 Replication of the Figures

Using DiscreteDP we replicate the figures shown in the lecture.



Convergence of Value Iteration

Let us first visualize the convergence of the value iteration algorithm as in the lecture, where
we use ddp.bellman_operator implemented as a method of DiscreteDP

In [37]: w = 5 * np.log(grid) - 25 # Initial condition


n = 35
fig, ax = plt.subplots(figsize=(8,5))
ax.set_ylim(-40, -20)
ax.set_xlim(np.min(grid), np.max(grid))
lb = 'initial condition'
ax.plot(grid, w, color=plt.cm.jet(0), lw=2, alpha=0.6, label=lb)
for i in range(n):
w = ddp.bellman_operator(w)
ax.plot(grid, w, color=plt.cm.jet(i / n), lw=2, alpha=0.6)
lb = 'true value function'
ax.plot(grid, v_star(grid), 'k-', lw=2, alpha=0.8, label=lb)
ax.legend(loc='upper left')

plt.show()

We next plot the consumption policies along with the value iteration

In [38]: w = 5 * u(grid) - 25 # Initial condition

fig, ax = plt.subplots(3, 1, figsize=(8, 10))


true_c = c_star(grid)

for i, n in enumerate((2, 4, 6)):


ax[i].set_ylim(0, 1)
ax[i].set_xlim(0, 2)
ax[i].set_yticks((0, 1))
ax[i].set_xticks((0, 2))

w = 5 * u(grid) - 25 # Initial condition


compute_fixed_point(ddp.bellman_operator, w, max_iter=n, print_skip=1)
σ = ddp.compute_greedy(w) # Policy indices
c_policy = f(grid) - grid[σ]

ax[i].plot(grid, c_policy, 'b-', lw=2, alpha=0.8,


label='approximate optimal consumption policy')
ax[i].plot(grid, true_c, 'k-', lw=2, alpha=0.8,
label='true optimal consumption policy')
ax[i].legend(loc='upper left')
ax[i].set_title(f'{n} value function iterations')
plt.show()

Iteration Distance Elapsed (seconds)


---------------------------------------------
1 5.518e+00 2.748e-03
2 4.070e+00 3.889e-03
Iteration Distance Elapsed (seconds)
---------------------------------------------
1 5.518e+00 9.768e-04
2 4.070e+00 1.758e-03
3 3.866e+00 2.988e-03
4 3.673e+00 4.024e-03
Iteration Distance Elapsed (seconds)
---------------------------------------------
1 5.518e+00 1.279e-03
2 4.070e+00 2.650e-03
3 3.866e+00 3.730e-03
4 3.673e+00 4.757e-03
5 3.489e+00 5.781e-03
6 3.315e+00 6.850e-03

/home/ubuntu/anaconda3/lib/python3.7/site-packages/quantecon/compute_fp.py:151:
RuntimeWarning: max_iter attained before convergence in compute_fixed_point
warnings.warn(_non_convergence_msg, RuntimeWarning)

Dynamics of the Capital Stock

Finally, let us work on Exercise 2, where we plot the trajectories of the capital stock for three
different discount factors, 0.9, 0.94, and 0.98, with initial condition 𝑘0 = 0.1.

In [39]: discount_factors = (0.9, 0.94, 0.98)


k_init = 0.1

# Search for the index corresponding to k_init


k_init_ind = np.searchsorted(grid, k_init)

sample_size = 25

fig, ax = plt.subplots(figsize=(8,5))
ax.set_xlabel("time")
ax.set_ylabel("capital")
ax.set_ylim(0.10, 0.30)

# Create a new instance, not to modify the one used above


ddp0 = DiscreteDP(R, Q, β, s_indices, a_indices)

for beta in discount_factors:


ddp0.beta = beta
res0 = ddp0.solve()
k_path_ind = res0.mc.simulate(init=k_init_ind, ts_length=sample_size)
k_path = grid[k_path_ind]
ax.plot(k_path, 'o-', lw=2, alpha=0.75, label=f'$\\beta = {beta}$')

ax.legend(loc='lower right')
plt.show()

4.8 Appendix: Algorithms

This appendix covers the details of the solution algorithms implemented for DiscreteDP.
We will make use of the following notions of approximate optimality:
• For 𝜀 > 0, 𝑣 is called an 𝜀-approximation of 𝑣∗ if ‖𝑣 − 𝑣∗ ‖ < 𝜀.
• A policy 𝜎 ∈ Σ is called 𝜀-optimal if 𝑣𝜎 is an 𝜀-approximation of 𝑣∗ .

4.8.1 Value Iteration

The DiscreteDP value iteration method implements value function iteration as follows

1. Choose any 𝑣0 ∈ ℝ𝑛 , and specify 𝜀 > 0; set 𝑖 = 0.

2. Compute 𝑣𝑖+1 = 𝑇 𝑣𝑖 .

3. If ‖𝑣𝑖+1 − 𝑣𝑖 ‖ < [(1 − 𝛽)/(2𝛽)]𝜀, then go to step 4; otherwise, set 𝑖 = 𝑖 + 1 and go to step
2.

4. Compute a 𝑣𝑖+1 -greedy policy 𝜎, and return 𝑣𝑖+1 and 𝜎.

Given 𝜀 > 0, the value iteration algorithm


• terminates in a finite number of iterations
• returns an 𝜀/2-approximation of the optimal value function and an 𝜀-optimal policy
function (unless iter_max is reached)
(While not explicit, in the actual implementation each algorithm is terminated if the number
of iterations reaches iter_max)
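
A minimal sketch of these steps, written in terms of the bellman_operator and compute_greedy methods of DiscreteDP used earlier in this lecture and with a hypothetical max_iter safeguard, might look as follows:

import numpy as np

def value_iteration(ddp, v_init, ε=1e-4, max_iter=10_000):
    # a sketch of steps 1 to 4 above for a DiscreteDP instance `ddp`
    v = np.asarray(v_init, dtype=float)
    tol = ε * (1 - ddp.beta) / (2 * ddp.beta)    # threshold used in step 3
    for _ in range(max_iter):
        v_new = ddp.bellman_operator(v)          # step 2
        if np.max(np.abs(v_new - v)) < tol:      # step 3
            break
        v = v_new
    σ = ddp.compute_greedy(v_new)                # step 4
    return v_new, σ

This is only a sketch; in practice ddp.solve(method='value_iteration') is the recommended entry point.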

4.8.2 Policy Iteration

The DiscreteDP policy iteration method runs as follows

1. Choose any 𝑣0 ∈ ℝ𝑛 and compute a 𝑣0 -greedy policy 𝜎0 ; set 𝑖 = 0.

2. Compute the value 𝑣𝜎𝑖 by solving the equation 𝑣 = 𝑇𝜎𝑖 𝑣.

3. Compute a 𝑣𝜎𝑖 -greedy policy 𝜎𝑖+1 ; let 𝜎𝑖+1 = 𝜎𝑖 if possible.

4. If 𝜎𝑖+1 = 𝜎𝑖 , then return 𝑣𝜎𝑖 and 𝜎𝑖+1 ; otherwise, set 𝑖 = 𝑖 + 1 and go to step 2.

The policy iteration algorithm terminates in a finite number of iterations.


It returns an optimal value function and an optimal policy function (unless iter_max is
reached).
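
A minimal sketch of these steps, assuming the evaluate_policy and compute_greedy methods of DiscreteDP, might look as follows (the tie-breaking rule in step 3 is glossed over here):

import numpy as np

def policy_iteration(ddp, v_init):
    σ = ddp.compute_greedy(v_init)       # step 1: a v_0-greedy policy
    while True:
        v = ddp.evaluate_policy(σ)       # step 2: solve v = T_σ v
        σ_new = ddp.compute_greedy(v)    # step 3: policy improvement
        if np.array_equal(σ_new, σ):     # step 4: stop when the policy repeats
            return v, σ
        σ = σ_new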

4.8.3 Modified Policy Iteration

The DiscreteDP modified policy iteration method runs as follows:

1. Choose any 𝑣0 ∈ ℝ𝑛 , and specify 𝜀 > 0 and 𝑘 ≥ 0; set 𝑖 = 0.

2. Compute a 𝑣𝑖 -greedy policy 𝜎𝑖+1 ; let 𝜎𝑖+1 = 𝜎𝑖 if possible (for 𝑖 ≥ 1).

3. Compute 𝑢 = 𝑇 𝑣𝑖 (= 𝑇𝜎𝑖+1 𝑣𝑖 ). If span(𝑢−𝑣𝑖 ) < [(1−𝛽)/𝛽]𝜀, then go to step 5; otherwise


go to step 4.

• Span is defined by span(𝑧) = max(𝑧) − min(𝑧).

4. Compute 𝑣𝑖+1 = (𝑇𝜎𝑖+1 )𝑘 𝑢 (= (𝑇𝜎𝑖+1 )𝑘+1 𝑣𝑖 ); set 𝑖 = 𝑖 + 1 and go to step 2.

5. Return 𝑣 = 𝑢 + [𝛽/(1 − 𝛽)][(min(𝑢 − 𝑣𝑖 ) + max(𝑢 − 𝑣𝑖 ))/2]𝟏 and 𝜎𝑖+1 , where 𝟏 is an 𝑛-vector of ones.

Given 𝜀 > 0, provided that 𝑣0 is such that 𝑇 𝑣0 ≥ 𝑣0 , the modified policy iteration algorithm
terminates in a finite number of iterations.
It returns an 𝜀/2-approximation of the optimal value function and an 𝜀-optimal policy func-
tion (unless iter_max is reached).
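
Here is a minimal self-contained sketch of these steps for dense arrays R (shape n × m) and Q (shape n × m × n), written independently of the DiscreteDP internals:

import numpy as np

def modified_policy_iteration(R, Q, β, v_init, ε=1e-4, k=20):
    n = R.shape[0]
    v = np.asarray(v_init, dtype=float)
    while True:
        vals = R + β * (Q @ v)                   # action values at v_i
        σ = vals.argmax(axis=1)                  # step 2: v_i-greedy policy
        u = vals.max(axis=1)                     # step 3: u = T v_i
        diff = u - v
        if diff.max() - diff.min() < (1 - β) / β * ε:
            break                                # go to step 5
        for _ in range(k):                       # step 4: iterate T_σ on u
            u = R[np.arange(n), σ] + β * Q[np.arange(n), σ] @ u
        v = u
    # step 5: add the correction term (a scalar, broadcast across states)
    return u + β / (1 - β) * (diff.min() + diff.max()) / 2, σ
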
See also the documentation for DiscreteDP.
Part II

LQ Control

Chapter 5

Information and Consumption Smoothing

5.1 Contents

• Overview 5.2
• Two Representations of the Same Nonfinancial Income Process 5.3
• State Space Representations 5.4
In addition to what’s in Anaconda, this lecture employs the following libraries:

In [1]: !pip install --upgrade quantecon

5.2 Overview

This lecture studies two consumers who have exactly the same nonfinancial income process
and who both conform to the linear-quadratic permanent income model of consumption
smoothing described in the quantecon lecture.
The two consumers have different information about future nonfinancial incomes.
One consumer each period receives news in the form of a shock that simultaneously affects
both today’s nonfinancial income and the present value of future nonfinancial incomes in a
particular way.
The other, less well informed, consumer each period receives a shock that equals the part of
today’s nonfinancial income that could not be forecast from all past values of nonfinancial
income.
Even though they receive exactly the same nonfinancial incomes each period, our two con-
sumers behave differently because they have different information about their future nonfi-
nancial incomes.
The second consumer receives less information about future nonfinancial incomes in a sense
that we shall make precise below.
This difference in their information sets manifests itself in their responding differently to what
they regard as time 𝑡 information shocks.
Thus, while they receive exactly the same histories of nonfinancial income, our two consumers

receive different shocks or news about their future nonfinancial incomes.


We compare behaviors of our two consumers as a way to learn about
• operating characteristics of the linear-quadratic permanent income model
• how the Kalman filter introduced in this lecture and/or the theory of optimal forecast-
ing introduced in this lecture embody lessons that can be applied to the news and
noise literature
• various ways of representing and computing optimal decision rules in the linear-
quadratic permanent income model
• a Ricardian equivalence outcome describing effects on optimal consumption of a tax
cut at time 𝑡 accompanied by a foreseen permanent increase in taxes that is just suffi-
cient to cover the interest payments used to service the risk-free government bonds that
are issued to finance the tax cut
• a simple application of alternative ways to factor a covariance generating function along
lines described in this lecture
This lecture can be regarded as an introduction to some of the invertibility issues that take
center stage in the analysis of fiscal foresight by Eric Leeper, Todd Walker, and Susan Yang
[? ].

5.3 Two Representations of the Same Nonfinancial Income Process

Where 𝛽 ∈ (0, 1), we study consequences of endowing a consumer with one of the two alterna-
tive representations for the change in the consumer’s nonfinancial income 𝑦𝑡+1 − 𝑦𝑡 .
The first representation, which we shall refer to as the original representation, is

𝑦𝑡+1 − 𝑦𝑡 = 𝜖𝑡+1 − 𝛽 −1 𝜖𝑡 (1)

where {𝜖𝑡 } is an i.i.d. normally distributed scalar process with means of zero and contempo-
raneous variances 𝜎𝜖2 .
This representation of the process is used by a consumer who at time 𝑡 knows both 𝑦𝑡 and the
original shock 𝜖𝑡 and can use both of them to forecast future 𝑦𝑡+𝑗 ’s.
Furthermore, as we’ll see below, representation (1) has the peculiar property that a positive
shock 𝜖𝑡+1 leaves the discounted present value of the consumer’s nonfinancial income at time 𝑡 + 1
unaltered.
The second representation of the same {𝑦𝑡 } process is

𝑦𝑡+1 − 𝑦𝑡 = 𝑎𝑡+1 − 𝛽𝑎𝑡 (2)

where {𝑎𝑡 } is another i.i.d. normally distributed scalar process, with means of zero and now
variances 𝜎𝑎2 .
The two i.i.d. shock variances are related by

𝜎𝑎2 = 𝛽 −2 𝜎𝜖2 > 𝜎𝜖2



so that the variance of the innovation exceeds the variance of the original shock by a multi-
plicative factor 𝛽 −2 .
The second representation is the innovations representation from Kalman filtering theory.
To see how this works, note that equating representations (1) and (2) for 𝑦𝑡+1 − 𝑦𝑡 implies
𝜖𝑡+1 − 𝛽 −1 𝜖𝑡 = 𝑎𝑡+1 − 𝛽𝑎𝑡 , which in turn implies

𝑎𝑡+1 = 𝛽𝑎𝑡 + 𝜖𝑡+1 − 𝛽 −1 𝜖𝑡 .

Solving this difference equation backwards for 𝑎𝑡+1 gives, after a few lines of algebra,


$$a_{t+1} = \epsilon_{t+1} + (\beta - \beta^{-1}) \sum_{j=0}^{\infty} \beta^j \epsilon_{t-j} \tag{3}$$

which we can also write as


$$a_{t+1} = \sum_{j=0}^{\infty} h_j \epsilon_{t+1-j} \equiv h(L)\epsilon_{t+1}$$

where $L$ is the one-period lag operator, $h(L) = \sum_{j=0}^{\infty} h_j L^j$, $I$ is the identity operator, and

$$h(L) = \frac{I - \beta^{-1} L}{I - \beta L}$$

Let 𝑔𝑗 ≡ 𝐸𝑧𝑡 𝑧𝑡−𝑗 be the 𝑗th autocovariance of the {𝑦𝑡 − 𝑦𝑡−1 } process.
Using calculations in the quantecon lecture, where $z \in C$ is a complex variable, the covariance
generating function $g(z) = \sum_{j=-\infty}^{\infty} g_j z^j$ of the $\{(y_t - y_{t-1})\}$ process equals

$$g(z) = \sigma_\epsilon^2 h(z) h(z^{-1}) = \beta^{-2} \sigma_\epsilon^2 > \sigma_\epsilon^2,$$

which confirms that {𝑎𝑡 } is a serially uncorrelated process with variance

$$\sigma_a^2 = \beta^{-2} \sigma_\epsilon^2.$$

To verify these claims, just notice that 𝑔(𝑧) = 𝛽 −2 𝜎𝜖2 implies that the coefficient 𝑔0 = 𝛽 −2 𝜎𝜖2
and that 𝑔𝑗 = 0 for 𝑗 ≠ 0.
Alternatively, if you are uncomfortable with covariance generating functions, note that we can
directly calculate 𝜎𝑎2 from formula (3) according to


$$\sigma_a^2 = \sigma_\epsilon^2 \left[ 1 + (\beta - \beta^{-1})^2 \sum_{j=0}^{\infty} \beta^{2j} \right] = \beta^{-2} \sigma_\epsilon^2.$$
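
The following sketch checks this calculation numerically, assuming 𝛽 = 0.95 (the value used in the computations later in this lecture) and 𝜎𝜖 = 1, and truncating the infinite sum at a large J:

import numpy as np

β, σϵ = 0.95, 1.0
J = 2000                                   # truncation point for the sum
j = np.arange(J)
σa2 = σϵ**2 * (1 + (β - β**(-1))**2 * np.sum(β**(2 * j)))
print(σa2, β**(-2) * σϵ**2)                # the two numbers nearly coincide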

5.3.1 Application of Kalman filter

We can also obtain representation (2) from representation (1) by using the Kalman filter.

Thus, from equations associated with the Kalman filter, it can be verified that the steady-
state Kalman gain 𝐾 = 𝛽 2 and the steady state conditional covariance Σ = 𝐸[(𝜖𝑡 −
𝜖𝑡̂ )2 |𝑦𝑡−1 , 𝑦𝑡−2 , …] = (1 − 𝛽 2 )𝜎𝜖2 .
In a little more detail, let 𝑧𝑡 = 𝑦𝑡 − 𝑦𝑡−1 and form the state-space representation

𝜖𝑡+1 = 0𝜖𝑡 + 𝜖𝑡+1


𝑧𝑡+1 = −𝛽 −1 𝜖𝑡 + 𝜖𝑡+1

and assume that 𝜎𝜖 = 1 for convenience.


Compute the steady-state Kalman filter for this system and let 𝐾 be the steady-state gain
and 𝑎𝑡+1 the one-step ahead innovation.
The innovations representation is

$$\begin{aligned} \hat{\epsilon}_{t+1} &= 0 \cdot \hat{\epsilon}_t + K a_{t+1} \\ z_{t+1} &= -\beta a_t + a_{t+1} \end{aligned}$$

By applying formulas for the steady-state Kalman filter, by hand we computed that 𝐾 =
𝛽 2 , 𝜎𝑎2 = 𝛽 −2 𝜎𝜖2 = 𝛽 −2 , and Σ = (1 − 𝛽 2 )𝜎𝜖2 .
We can also obtain these formulas via the classical filtering theory described in this lecture.

5.3.2 News Shocks and Less Informative Shocks

Representation (1) is cast in terms of a news shock 𝜖𝑡+1 that represents a shock to nonfinan-
cial income coming from taxes, transfers, and other random sources of income changes known
to a well-informed person having all sorts of information about the income process.
Representation (2) for the same income process is driven by shocks 𝑎𝑡 that contain less infor-
mation than the news shock 𝜖𝑡 .
Representation (2) is called the innovations representation for the {𝑦𝑡 − 𝑦𝑡−1 } process.
It is cast in terms of what time series statisticians call the innovation or fundamental
shock that emerges from applying the theory of optimally predicting nonfinancial income
based solely on the information contained solely in past levels of growth in nonfinancial in-
come.
Fundamental for the 𝑦𝑡 process means that the shock 𝑎𝑡 can be expressed as a square-
summable linear combination of 𝑦𝑡 , 𝑦𝑡−1 , ….
The shock 𝜖𝑡 is not fundamental and has more information about the future of the {𝑦𝑡 −
𝑦𝑡−1 } process than is contained in 𝑎𝑡 .
Representation (3) reveals the important fact that the original shock 𝜖𝑡 contains more in-
formation about future 𝑦’s than is contained in the semi-infinite history 𝑦𝑡 = [𝑦𝑡 , 𝑦𝑡−1 , …] of
current and past 𝑦’s.
Staring at representation (3) for 𝑎𝑡+1 shows that it consists both of new news 𝜖𝑡+1 and of
a long moving average $(\beta - \beta^{-1}) \sum_{j=0}^{\infty} \beta^j \epsilon_{t-j}$ of old news.
The better informed representation (1) asserts that a shock 𝜖𝑡 results in an impulse re-
sponse to nonfinancial income of 𝜖𝑡 times the sequence

1, 1 − 𝛽 −1 , 1 − 𝛽 −1 , …

so that a shock that increases nonfinancial income 𝑦𝑡 by 𝜖𝑡 at time 𝑡 is followed by an
increase in future 𝑦 of 𝜖𝑡 times 1 − 𝛽 −1 < 0 in all subsequent periods.
Because 1 − 𝛽 −1 < 0, this means that a positive shock of 𝜖𝑡 today raises income at time 𝑡 by
𝜖𝑡 and then decreases all future incomes by (𝛽 −1 − 1)𝜖𝑡 .
This pattern precisely describes the following mental experiment:
• The consumer receives a government transfer of 𝜖𝑡 at time 𝑡.
• The government finances the transfer by issuing a one-period bond on which it pays a
gross one-period risk-free interest rate equal to 𝛽 −1 .
• In each future period, the government rolls over the one-period bond and so continues
to borrow 𝜖𝑡 forever.
• The government imposes a lump-sum tax on the consumer in order to pay just the cur-
rent interest on the original bond and its successors created by the roll-over operation.
• In all future periods 𝑡 + 1, 𝑡 + 2, …, the government levies a lump-sum tax on the con-
sumer of 𝛽 −1 − 1 that is just enough to pay the interest on the bond.
The present value of the impulse response or moving average coefficients equals
$d_\epsilon(\beta) = \frac{0}{1 - \beta} = 0$, a fact that we’ll see again below.

Representation (2), i.e., the innovation representation, asserts that a shock 𝑎𝑡 results in an
impulse response to nonfinancial income of 𝑎𝑡 times

1, 1 − 𝛽, 1 − 𝛽, …

so that a shock that increases income 𝑦𝑡 by 𝑎𝑡 at time 𝑡 can be expected to be followed by an
increase in 𝑦𝑡+𝑗 of 𝑎𝑡 times 1 − 𝛽 > 0 in all future periods 𝑗 = 1, 2, ….
The present value of the impulse response or moving average coefficients for representation
(2) is $d_a(\beta) = \frac{1 - \beta^2}{1 - \beta} = 1 + \beta$, another fact that will be important below.
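
Both present values can be checked numerically. Here is a minimal sketch, again assuming 𝛽 = 0.95 and truncating the infinite tails of the two impulse response sequences:

import numpy as np

β, J = 0.95, 400
discounts = β ** np.arange(J)
irf_ϵ = np.r_[1.0, np.full(J - 1, 1 - β**(-1))]   # response of y to an ϵ shock
irf_a = np.r_[1.0, np.full(J - 1, 1 - β)]         # response of y to an a shock
print(discounts @ irf_ϵ)                          # ≈ 0
print(discounts @ irf_a)                          # ≈ 1 + β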

5.3.3 Representation of 𝜖𝑡 in Terms of Future 𝑦’s

Notice that representation (1), namely, 𝑦𝑡+1 − 𝑦𝑡 = −𝛽 −1 𝜖𝑡 + 𝜖𝑡+1 , implies the linear difference
equation

𝜖𝑡 = 𝛽𝜖𝑡+1 − 𝛽(𝑦𝑡+1 − 𝑦𝑡 ).

Solving forward we eventually obtain


𝜖𝑡 = 𝛽(𝑦𝑡 − (1 − 𝛽) ∑ 𝛽 𝑗 𝑦𝑡+𝑗+1 )
𝑗=0

This equation shows that 𝜖𝑡 equals 𝛽 times the one-step-backwards error in optimally backcasting
𝑦𝑡 based on the future $y_+^t \equiv [y_{t+1}, y_{t+2}, \ldots]$ via the optimal backcasting formula

$$E[y_t \mid y_+^t] = (1 - \beta) \sum_{j=0}^{\infty} \beta^j y_{t+j+1}$$

Thus, 𝜖𝑡 contains exact information about an important linear combination of future nonfi-
nancial income.

5.3.4 Representation in Terms of 𝑎𝑡 Shocks

Next notice that representation (2), namely, 𝑦𝑡+1 − 𝑦𝑡 = −𝛽𝑎𝑡 + 𝑎𝑡+1 implies the linear differ-
ence equation

𝑎𝑡+1 = 𝛽𝑎𝑡 + (𝑦𝑡+1 − 𝑦𝑡 )

Solving this equation backward establishes that the one-step-prediction error 𝑎𝑡+1 is


$$a_{t+1} = y_{t+1} - (1 - \beta) \sum_{j=0}^{\infty} \beta^j y_{t-j}$$

and where the information set is $y^t = [y_t, y_{t-1}, \ldots]$, the one-step-ahead optimal prediction is

$$E[y_{t+1} \mid y^t] = (1 - \beta) \sum_{j=0}^{\infty} \beta^j y_{t-j}$$

5.3.5 Permanent Income Consumption-Smoothing Model

When we compute optimal consumption-saving policies for the two representations using
formulas obtained with the difference equation approach described in the quantecon lecture,
we obtain:
for a consumer having the information assumed in the news representation (1):

𝑐𝑡+1 − 𝑐𝑡 = 0
𝑏𝑡+1 − 𝑏𝑡 = −𝛽 −1 𝜖𝑡

for a consumer having the more limited information associated with the innova-
tions representation (2):

𝑐𝑡+1 − 𝑐𝑡 = (1 − 𝛽 2 )𝑎𝑡+1
𝑏𝑡+1 − 𝑏𝑡 = −𝛽𝑎𝑡

These formulas agree with outcomes from the Python programs to be reported below using
state-space representations and dynamic programming.
Evidently the two consumers behave differently though they receive exactly the same histories
of nonfinancial income.
The consumer with information associated with representation (1) responds to each shock
𝜖𝑡+1 by leaving his consumption unaltered and saving all of 𝜖𝑡+1 in anticipation of the per-
manently increased taxes that he will bear to pay for the addition 𝜖𝑡+1 to his time 𝑡 + 1 nonfi-
nancial income.

The consumer with information associated with representation (2) responds to a shock 𝑎𝑡+1
by increasing his consumption by what he perceives to be the permanent part of the in-
crease in consumption and by increasing his saving by what he perceives to be the tempo-
rary part.
We can regard the first consumer as someone whose behavior sharply illustrates the behavior
assumed in a classic Ricardian equivalence experiment.

5.4 State Space Representations

We can cast our two representations in terms of the following two state space systems

$$
\begin{aligned}
\begin{bmatrix} y_{t+1} \\ \epsilon_{t+1} \end{bmatrix} &= \begin{bmatrix} 1 & -\beta^{-1} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} y_t \\ \epsilon_t \end{bmatrix} + \begin{bmatrix} \sigma_\epsilon \\ \sigma_\epsilon \end{bmatrix} v_{t+1} \\
y_t &= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} y_t \\ \epsilon_t \end{bmatrix}
\end{aligned}
$$

and

$$
\begin{aligned}
\begin{bmatrix} y_{t+1} \\ a_{t+1} \end{bmatrix} &= \begin{bmatrix} 1 & -\beta \\ 0 & 0 \end{bmatrix} \begin{bmatrix} y_t \\ a_t \end{bmatrix} + \begin{bmatrix} \sigma_a \\ \sigma_a \end{bmatrix} u_{t+1} \\
y_t &= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} y_t \\ a_t \end{bmatrix}
\end{aligned}
$$

where {𝑣𝑡 } and {𝑢𝑡 } are both i.i.d. sequences of univariate standardized normal random vari-
ables.
These two alternative income processes are ready to be used in the framework presented in
the section “Comparison with the Difference Equation Approach” in the quantecon lecture.
All the code that we shall use below is presented in that lecture.

5.4.1 Computations

We shall use Python to form both of the above two state-space representations, using the
following parameter values 𝜎𝜖 = 1, 𝜎𝑎 = 𝛽 −1 𝜎𝜖 = 𝛽 −1 where 𝛽 is the same value as the
discount factor in the household’s problem in the LQ savings problem in the lecture.
For these two representations, we use the code in the lecture to
• compute optimal decision rules for 𝑐𝑡 , 𝑏𝑡 for the two types of consumers associated with
our two representations of nonfinancial income
• use the value function objects 𝑃 , 𝑑 returned by the code to compute optimal values for
the two representations when evaluated at the initial condition $x_0 = \begin{bmatrix} 10 \\ 0 \end{bmatrix}$
for each representation.



• create instances of the LinearStateSpace class for the two representations of the {𝑦𝑡 }
process and use them to obtain impulse response functions of 𝑐𝑡 and 𝑏𝑡 to the respective
shocks 𝜖𝑡 and 𝑎𝑡 for the two representations.
• run simulations of {𝑦𝑡 , 𝑐𝑡 , 𝑏𝑡 } of length 𝑇 under both of the representations (later I’ll
give some more details about how we’ll run some special versions of these)
We want to solve the LQ problem:


$$\min \sum_{t=0}^{\infty} \beta^t (c_t - \gamma)^2$$

subject to the sequence of constraints

$$c_t + b_t = \frac{1}{1+r} b_{t+1} + y_t, \qquad t \ge 0$$

where 𝑦𝑡 follows one of the representations defined above.


Define the control as 𝑢𝑡 ≡ 𝑐𝑡 − 𝛾.
(For simplicity we can assume 𝛾 = 0 below because 𝛾 has no effect on the impulse response
functions that interest us.)
The state transition equations under our two representations for the nonfinancial income pro-
cess {𝑦𝑡 } can be written as

$$
\begin{bmatrix} y_{t+1} \\ \epsilon_{t+1} \\ b_{t+1} \end{bmatrix}
=
\underbrace{\begin{bmatrix} 1 & -\beta^{-1} & 0 \\ 0 & 0 & 0 \\ -(1+r) & 0 & 1+r \end{bmatrix}}_{\equiv A_1}
\begin{bmatrix} y_t \\ \epsilon_t \\ b_t \end{bmatrix}
+
\underbrace{\begin{bmatrix} 0 \\ 0 \\ 1+r \end{bmatrix}}_{\equiv B_1}
\left[ c_t \right]
+
\underbrace{\begin{bmatrix} \sigma_\epsilon \\ \sigma_\epsilon \\ 0 \end{bmatrix}}_{\equiv C_1}
\nu_{t+1},
$$

and

$$
\begin{bmatrix} y_{t+1} \\ a_{t+1} \\ b_{t+1} \end{bmatrix}
=
\underbrace{\begin{bmatrix} 1 & -\beta & 0 \\ 0 & 0 & 0 \\ -(1+r) & 0 & 1+r \end{bmatrix}}_{\equiv A_2}
\begin{bmatrix} y_t \\ a_t \\ b_t \end{bmatrix}
+
\underbrace{\begin{bmatrix} 0 \\ 0 \\ 1+r \end{bmatrix}}_{\equiv B_2}
\left[ c_t \right]
+
\underbrace{\begin{bmatrix} \sigma_a \\ \sigma_a \\ 0 \end{bmatrix}}_{\equiv C_2}
u_{t+1}.
$$

As usual, we start by importing packages.

In [2]: import numpy as np


import quantecon as qe
import matplotlib.pyplot as plt
%matplotlib inline

In [3]: # Set parameters


β, σϵ = 0.95, 1
σa = σϵ / β

R = 1 / β

# Payoff matrices are the same for two representations


RLQ = np.array([[0, 0, 0],
[0, 0, 0],
[0, 0, 1e-12]]) # put penalty on debt


QLQ = np.array([1.])

In [4]: # Original representation state transition matrices


ALQ1 = np.array([[1, -R, 0],
[0, 0, 0],
[-R, 0, R]])
BLQ1 = np.array([[0, 0, R]]).T
CLQ1 = np.array([[σϵ, σϵ, 0]]).T

# Construct and solve the LQ problem


LQ1 = qe.LQ(QLQ, RLQ, ALQ1, BLQ1, C=CLQ1, beta=β)
P1, F1, d1 = LQ1.stationary_values()

In [5]: # The optimal decision rule for c


-F1

Out[5]: array([[ 1. , -1. , -0.05]])

Evidently optimal consumption and debt decision rules for the consumer having news repre-
sentation (1) are

𝑐𝑡∗ = 𝑦𝑡 − 𝜖𝑡 − (1 − 𝛽) 𝑏𝑡 ,

𝑏𝑡+1 = 𝛽 −1 𝑐𝑡∗ + 𝛽 −1 𝑏𝑡 − 𝛽 −1 𝑦𝑡
= 𝛽 −1 𝑦𝑡 − 𝛽 −1 𝜖𝑡 − (𝛽 −1 − 1) 𝑏𝑡 + 𝛽 −1 𝑏𝑡 − 𝛽 −1 𝑦𝑡
= 𝑏𝑡 − 𝛽 −1 𝜖𝑡 .

In [6]: # Innovations representation


ALQ2 = np.array([[1, -β, 0],
[0, 0, 0],
[-R, 0, R]])
BLQ2 = np.array([[0, 0, R]]).T
CLQ2 = np.array([[σa, σa, 0]]).T

LQ2 = qe.LQ(QLQ, RLQ, ALQ2, BLQ2, C=CLQ2, beta=β)


P2, F2, d2 = LQ2.stationary_values()

In [7]: -F2

Out[7]: array([[ 1. , -0.9025, -0.05 ]])

For a consumer having access only to the information associated with the innovations repre-
sentation (2), the optimal decision rules are

𝑐𝑡∗ = 𝑦𝑡 − 𝛽 2 𝑎𝑡 − (1 − 𝛽) 𝑏𝑡 ,

𝑏𝑡+1 = 𝛽 −1 𝑐𝑡∗ + 𝛽 −1 𝑏𝑡 − 𝛽 −1 𝑦𝑡
= 𝛽 −1 𝑦𝑡 − 𝛽𝑎𝑡 − (𝛽 −1 − 1) 𝑏𝑡 + 𝛽 −1 𝑏𝑡 − 𝛽 −1 𝑦𝑡
= 𝑏𝑡 − 𝛽𝑎𝑡 .

Now we construct two Linear State Space models that emerge from using optimal policies
𝑢𝑡 = −𝐹 𝑥𝑡 for the control variable.

Take the original representation case as an example,

$$
\begin{bmatrix} y_{t+1} \\ \epsilon_{t+1} \\ b_{t+1} \end{bmatrix}
= (A_1 - B_1 F_1)
\begin{bmatrix} y_t \\ \epsilon_t \\ b_t \end{bmatrix}
+ C_1 \nu_{t+1}
$$

$$
\begin{bmatrix} c_t \\ b_t \end{bmatrix}
=
\begin{bmatrix} -F_1 \\ S_b \end{bmatrix}
\begin{bmatrix} y_t \\ \epsilon_t \\ b_t \end{bmatrix}
$$

To have the Linear State Space model of the innovations representation case, we can simply
replace the corresponding matrices.

In [8]: # Construct two Linear State Space models


Sb = np.array([0, 0, 1])

ABF1 = ALQ1 - BLQ1 @ F1


G1 = np.vstack([-F1, Sb])
LSS1 = qe.LinearStateSpace(ABF1, CLQ1, G1)

ABF2 = ALQ2 - BLQ2 @ F2


G2 = np.vstack([-F2, Sb])
LSS2 = qe.LinearStateSpace(ABF2, CLQ2, G2)

In the following we compute the impulse response functions of 𝑐𝑡 and 𝑏𝑡 .

In [9]: J = 5 # Number of coefficients that we want

x_res1, y_res1 = LSS1.impulse_response(j=J)


b_res1 = np.array([x_res1[i][2, 0] for i in range(J)])
c_res1 = np.array([y_res1[i][0, 0] for i in range(J)])

x_res2, y_res2 = LSS2.impulse_response(j=J)


b_res2 = np.array([x_res2[i][2, 0] for i in range(J)])
c_res2 = np.array([y_res2[i][0, 0] for i in range(J)])

In [10]: c_res1 / σϵ, b_res1 / σϵ

Out[10]: (array([1.99998906e-11, 1.89473992e-11, 1.78947690e-11, 1.68421388e-11,


1.57895086e-11]),
array([ 0. , -1.05263158, -1.05263158, -1.05263158, -1.05263158]))

In [11]: plt.title("original representation")


plt.plot(range(J), c_res1 / σϵ, label="c impulse response function")
plt.plot(range(J), b_res1 / σϵ, label="b impulse response function")
plt.legend()

Out[11]: <matplotlib.legend.Legend at 0x7f94f48ba2b0>



The above two impulse response functions show that when the consumer has the information
assumed in the original representation, his response to receiving a positive shock of 𝜖𝑡 is to
leave his consumption unchanged and to save the entire amount of his extra income and then
forever roll over the extra bonds that he holds.
To see this, notice that starting from next period on, his debt permanently decreases by 𝛽 −1 .
In [12]: c_res2 / σa, b_res2 / σa

Out[12]: (array([0.0975, 0.0975, 0.0975, 0.0975, 0.0975]),


array([ 0. , -0.95, -0.95, -0.95, -0.95]))

In [13]: plt.title("innovations representation")


plt.plot(range(J), c_res2 / σa, label="c impulse response function")
plt.plot(range(J), b_res2 / σa, label="b impulse response function")
plt.plot([0, J-1], [0, 0], '--', color='k')
plt.legend()

Out[13]: <matplotlib.legend.Legend at 0x7f94f47f0630>



The above impulse responses show that when the consumer has only the information that is
assumed to be available under the innovations representation for {𝑦𝑡 − 𝑦𝑡−1 }, he responds to a
positive 𝑎𝑡 by permanently increasing his consumption.
He accomplishes this by consuming a fraction (1 − 𝛽 2 ) of the increment 𝑎𝑡 to his nonfinancial
income and saving the rest in order to lower 𝑏𝑡+1 to finance the permanent increment in his
consumption.
The preceding computations confirm what we had derived earlier using paper and pencil.
Now let’s simulate some paths of consumption and debt for our two types of consumers while
always presenting both types with the same {𝑦𝑡 } path, constructed as described below.

In [14]: # Set time length for simulation


T = 100

In [15]: x1, y1 = LSS1.simulate(ts_length=T)


plt.plot(range(T), y1[0, :], label="c")
plt.plot(range(T), x1[2, :], label="b")
plt.plot(range(T), x1[0, :], label="y")
plt.title("original representation")
plt.legend()

Out[15]: <matplotlib.legend.Legend at 0x7f94f43a3dd8>



In [16]: x2, y2 = LSS2.simulate(ts_length=T)


plt.plot(range(T), y2[0, :], label="c")
plt.plot(range(T), x2[2, :], label="b")
plt.plot(range(T), x2[0, :], label="y")
plt.title("innovations representation")
plt.legend()

Out[16]: <matplotlib.legend.Legend at 0x7f94f42ffa58>



5.4.2 Simulating the Income Process and Two Associated Shock Processes

We now describe how we form a single {𝑦𝑡 }𝑇𝑡=0 realization that we will use to simulate the
two different decision rules associated with our two types of consumer.
We accomplish this in the following steps.

1. We form a {𝑦𝑡 , 𝜖𝑡 } realization by drawing a long simulation of $\{\epsilon_t\}_{t=0}^T$ , where 𝑇 is a big
integer, 𝜖𝑡 = 𝜎𝜖 𝑣𝑡 , 𝑣𝑡 is a standard normal scalar, 𝑦0 = 100, and

$$y_{t+1} - y_t = -\beta^{-1} \epsilon_t + \epsilon_{t+1}.$$

2. We take the same {𝑦𝑡 } realization generated in step 1 and form an innovation process
{𝑎𝑡 } from the formulas

$$
\begin{aligned}
a_0 &= 0 \\
a_t &= \sum_{j=0}^{t-1} \beta^j (y_{t-j} - y_{t-j-1}) + \beta^t a_0, \qquad t \ge 1
\end{aligned}
$$

3. We throw away the first 𝑆 observations and form the sample $\{y_t, \epsilon_t, a_t\}_{S+1}^T$ as the real-
ization that we’ll use in the following steps.

4. We use the step 3 realization to evaluate and simulate the decision rules for 𝑐𝑡 , 𝑏𝑡 that
Python has computed for us above.

The above steps implement the experiment of comparing decisions made by two consumers
having identical incomes at each date but at each date having different information about
their future incomes.

5.4.3 Calculating Innovations in Another Way

Here we use formula (3) above to compute 𝑎𝑡+1 as a function of the history 𝜖𝑡+1 , 𝜖𝑡 , 𝜖𝑡−1 , …
Thus, we compute

$$
\begin{aligned}
a_{t+1} &= \beta a_t + \epsilon_{t+1} - \beta^{-1} \epsilon_t \\
&= \beta (\beta a_{t-1} + \epsilon_t - \beta^{-1} \epsilon_{t-1}) + \epsilon_{t+1} - \beta^{-1} \epsilon_t \\
&= \beta^2 a_{t-1} + \beta (\epsilon_t - \beta^{-1} \epsilon_{t-1}) + \epsilon_{t+1} - \beta^{-1} \epsilon_t \\
&= \cdots \\
&= \beta^{t+1} a_0 + \sum_{j=0}^{t} \beta^j (\epsilon_{t+1-j} - \beta^{-1} \epsilon_{t-j}) \\
&= \beta^{t+1} a_0 + \epsilon_{t+1} + (\beta - \beta^{-1}) \sum_{j=0}^{t-1} \beta^j \epsilon_{t-j} - \beta^{t-1} \epsilon_0.
\end{aligned}
$$

We can verify that we recover the same {𝑎𝑡 } sequence computed earlier.
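
Here is a minimal sketch of such a verification. It reuses β and σϵ from above and compares the recursion $a_{t+1} = \beta a_t + \epsilon_{t+1} - \beta^{-1}\epsilon_t$ with the closed-form expression in the last line of the display, on a simulated $\{\epsilon_t\}$ path:

np.random.seed(42)
T, a0 = 200, 0.0
ϵ = σϵ * np.random.randn(T + 1)

# recursion: a_{t+1} = β a_t + ϵ_{t+1} - β^{-1} ϵ_t
a_rec = np.zeros(T + 1)
for t in range(T):
    a_rec[t + 1] = β * a_rec[t] + ϵ[t + 1] - ϵ[t] / β

# closed form from the last line of the display above
a_cf = np.zeros(T + 1)
for t in range(T):
    j = np.arange(t)                      # empty when t = 0
    a_cf[t + 1] = (β**(t + 1) * a0 + ϵ[t + 1]
                   + (β - β**(-1)) * np.sum(β**j * ϵ[t - j])
                   - β**(t - 1) * ϵ[0])

print(np.allclose(a_rec, a_cf))           # should print True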

5.4.4 Another Invertibility Issue

This quantecon lecture contains another example of a shock-invertibility issue that is endemic
to the LQ permanent income or consumption smoothing model.
The technical issue discussed there is ultimately the source of the shock-invertibility issues
discussed by Eric Leeper, Todd Walker, and Susan Yang [? ] in their analysis of fiscal fore-
sight.
Chapter 6

Consumption Smoothing with Complete and Incomplete Markets

6.1 Contents

• Overview 6.2
• Background 6.3
• Linear State Space Version of Complete Markets Model 6.4
• Model 1 (Complete Markets) 6.5
• Model 2 (One-Period Risk-Free Debt Only) 6.6
In addition to what’s in Anaconda, this lecture uses the library:

In [1]: !pip install --upgrade quantecon

6.2 Overview

This lecture describes two types of consumption-smoothing models.


• one is in the complete markets tradition of Kenneth Arrow (https://en.wikipedia.org/wiki/Kenneth_Arrow)
• the other is in the incomplete markets tradition of Hall [24]
Complete markets allow a consumer to buy or sell claims contingent on all possible states of
the world.
Incomplete markets allow a consumer to buy or sell only a limited set of securities, often only
a single risk-free security.
Hall [24] worked in an incomplete markets tradition by assuming that the only asset that can
be traded is a risk-free one period bond.
Hall assumed an exogenous stochastic process of nonfinancial income and an exogenous
and time-invariant gross interest rate on one period risk-free debt that equals 𝛽 −1 , where
𝛽 ∈ (0, 1) is also a consumer’s intertemporal discount factor.
This is equivalent to saying that it costs 𝛽 −1 of time 𝑡 consumption to buy one unit of con-
sumption at time 𝑡 + 1 for sure.
So 𝛽 −1 is the price of a one-period risk-free claim to consumption next period.


We maintain Hall’s assumption about the interest rate when we describe an incomplete mar-
kets version of our model.
In addition, we extend Hall’s assumption about the risk-free interest rate to its appropriate
counterpart when we create another model in which there are markets in a complete array of
one-period Arrow state-contingent securities.
We’ll consider two closely related but distinct alternative assumptions about the consumer’s
exogenous nonfinancial income process:
• that it is generated by a finite 𝑁 state Markov chain (setting 𝑁 = 2 most of the time in
this lecture)
• that it is described by a linear state space model with a continuous state vector in ℝ𝑛
driven by a Gaussian vector IID shock process
We’ll spend most of this lecture studying the finite-state Markov specification, but will begin
by studying the linear state space specification because it is so closely linked to earlier lec-
tures.
Let’s start with some imports:

In [2]: import numpy as np


import quantecon as qe
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.linalg as la

6.2.1 Relationship to Other Lectures

This lecture can be viewed as a follow-up to Optimal Savings II: LQ Techniques.

This lecture is also a prolegomenon to a lecture on tax-smoothing, Tax Smoothing with Com-
plete and Incomplete Markets.

6.3 Background

Outcomes in consumption-smoothing models emerge from two sources:


• a consumer who wants to maximize an intertemporal objective function that expresses
its preference for paths of consumption that are smooth in the sense of varying as little
as possible both across time and across realized Markov states
• opportunities that allow the consumer to transform an erratic nonfinancial income pro-
cess into a smoother consumption process by purchasing or selling one or more financial
securities
In the complete markets version, each period the consumer can buy or sell a complete set
of one-period ahead state-contingent securities whose payoffs depend on next period’s realiza-
tion of the Markov state.
• In the two-state Markov chain case, two such securities are traded each period.
• In an 𝑁 state Markov state version, 𝑁 such securities are traded each period.
• In a continuous state Markov state version, a continuum of such securities are traded
each period.

These state-contingent securities are commonly called Arrow securities, after Kenneth Arrow
https://en.wikipedia.org/wiki/Kenneth_Arrow
In the incomplete markets version, the consumer can buy and sell only one security each
period, a risk-free one-period bond with gross one-period return 𝛽 −1 .

6.4 Linear State Space Version of Complete Markets Model

Now we’ll study a complete markets model adapted to a setting with a continuous Markov
state like that in the first lecture on the permanent income model.
In that model, there are
• incomplete markets: the consumer can trade only a single risk-free one-period bond
bearing gross one-period risk-free interest rate equal to 𝛽 −1 .
• the consumer’s exogenous nonfinancial income is governed by a linear state space model
driven by Gaussian shocks, the kind of model studied in an earlier lecture about linear
state space models.
Let’s write down a complete markets counterpart of that model.
Suppose that nonfinancial income is governed by the state space system

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐶𝑤𝑡+1


𝑦𝑡 = 𝑆𝑦 𝑥𝑡

where 𝑥𝑡 is an 𝑛 × 1 vector and 𝑤𝑡+1 ∼ 𝑁 (0, 𝐼) is IID over time.


We want a natural counterpart of the Hall assumption that the one-period risk-free gross in-
terest rate is 𝛽 −1 .
Accordingly, we assume that prices of one-period ahead Arrow securities are described by the
pricing kernel

𝑞𝑡+1 (𝑥𝑡+1 | 𝑥𝑡 ) = 𝛽𝜙(𝑥𝑡+1 | 𝐴𝑥𝑡 , 𝐶𝐶 ′ ) (1)

where 𝜙(⋅ | 𝜇, Σ) is a multivariate Gaussian distribution with mean vector 𝜇 and covariance
matrix Σ.
With the pricing kernel 𝑞𝑡+1 (𝑥𝑡+1 | 𝑥𝑡 ) in hand, we can price a claim to time 𝑡 + 1 consumption that pays off when 𝑥𝑡+1 ∈ 𝐴:

$$\int_A q_{t+1}(x_{t+1} \mid x_t) \, dx_{t+1}$$

where 𝐴 is a subset of ℝ𝑛 .
The price ∫𝐴 𝑞𝑡+1 (𝑥𝑡+1 | 𝑥𝑡 )𝑑𝑥𝑡+1 of such a claim depends on state 𝑥𝑡 because the prices of the
𝑥𝑡+1 -contingent securities depend on 𝑥𝑡 through the pricing kernel 𝑞(𝑥𝑡+1 | 𝑥𝑡 ).
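To make the pricing formula concrete, here is a minimal Monte Carlo sketch (our own illustration, not part of the lecture's code) that prices a claim paying one unit of time 𝑡 + 1 consumption when the second component of 𝑥𝑡+1 exceeds a hypothetical threshold; the matrices match the example simulated later in this section.

import numpy as np

β = 0.95
A = np.array([[1., 0., 0.],
              [10., 0.9, 0.],
              [0., 1., 0.]])
C = np.array([[0.], [1.], [0.]])
x_t = np.array([1.0, 100.0, 100.0])

rng = np.random.default_rng(0)
w = rng.standard_normal((100_000, 1))
x_next = x_t @ A.T + w @ C.T       # draws from the N(A x_t, CC') density in (1)
event = x_next[:, 1] > 100.0       # a hypothetical event "x_{t+1} ∈ A"
price = β * event.mean()           # ≈ β ∫_A φ(x_{t+1} | A x_t, CC') dx_{t+1}
print(price)                       # ≈ β / 2 here, since A x_t lies at the median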
Let 𝑏(𝑥𝑡+1 ) be a vector of state-contingent debt due at 𝑡 + 1 as a function of the 𝑡 + 1 state
𝑥𝑡+1 .
Using the pricing kernel assumed in (1), the value at 𝑡 of 𝑏(𝑥𝑡+1 ) is evidently

𝛽 ∫ 𝑏(𝑥𝑡+1 )𝜙(𝑥𝑡+1 | 𝐴𝑥𝑡 , 𝐶𝐶 ′ )𝑑𝑥𝑡+1 = 𝛽𝔼𝑡 𝑏𝑡+1

In our complete markets setting, the consumer faces a sequence of budget constraints

𝑐𝑡 + 𝑏𝑡 = 𝑦𝑡 + 𝛽𝔼𝑡 𝑏𝑡+1 , 𝑡≥0

Please note that

$$\mathbb{E}_t b_{t+1} = \int b_{t+1}(x_{t+1}) \, \phi(x_{t+1} \mid A x_t, C C') \, dx_{t+1},$$

so that 𝛽𝔼𝑡 𝑏𝑡+1 is the value at time 𝑡 of the time 𝑡 + 1 state-contingent claims on time 𝑡 + 1 consumption issued by the consumer at time 𝑡.
We can solve the time 𝑡 budget constraint forward to obtain


$$b_t = \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j (y_{t+j} - c_{t+j})$$

We assume as before that the consumer cares about the expected value of


$$\sum_{t=0}^{\infty} \beta^t u(c_t), \qquad 0 < \beta < 1$$

In the incomplete markets version of the model, we assumed that 𝑢(𝑐𝑡 ) = −(𝑐𝑡 − 𝛾)2 , so that
the above utility functional became


$$-\sum_{t=0}^{\infty} \beta^t (c_t - \gamma)^2, \qquad 0 < \beta < 1$$

But in the complete markets version, it is tractable to assume a more general utility function
that satisfies 𝑢′ > 0 and 𝑢″ < 0.
The first-order conditions for the consumer’s problem with complete markets and our assump-
tion about Arrow securities prices are

𝑢′ (𝑐𝑡+1 ) = 𝑢′ (𝑐𝑡 ) for all 𝑡 ≥ 0

which implies 𝑐𝑡 = 𝑐̄ for some 𝑐̄.


So it follows that


$$b_t = \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j (y_{t+j} - \bar{c})$$

or

$$b_t = S_y (I - \beta A)^{-1} x_t - \frac{1}{1-\beta} \bar{c} \tag{2}$$

where 𝑐 ̄ satisfies

$$b_0 = S_y (I - \beta A)^{-1} x_0 - \frac{1}{1-\beta} \bar{c} \tag{3}$$

where 𝑏0 is an initial level of the consumer's debt due at time 𝑡 = 0, specified as a parameter of the problem.
Thus, in the complete markets version of the consumption-smoothing model, 𝑐𝑡 = 𝑐̄ ∀𝑡 ≥ 0 is
determined by (3) and the consumer’s debt is the fixed function of the state 𝑥𝑡 described by
(2).
Please recall that in the LQ permanent income model studied in first lecture on the perma-
nent income model, the state is 𝑥𝑡 , 𝑏𝑡 , where 𝑏𝑡 is a complicated function of past state vectors
𝑥𝑡−𝑗 .
Notice that in contrast to that incomplete markets model, at time 𝑡 the state vector is 𝑥𝑡
alone in our complete markets model.
Here’s an example that shows how in this setting the availability of insurance against fluctu-
ating nonfinancial income allows the consumer completely to smooth consumption across time
and across states of the world

In [3]: def complete_ss(β, b0, x0, A, C, S_y, T=12):


"""
Computes the path of consumption and debt for the previously described
complete markets model where exogenous income follows a linear
state space
"""
# Create a linear state space for simulation purposes
# This adds "b" as a state to the linear state space system
# so that setting the seed places shocks in same place for
# both the complete and incomplete markets economy
# Atilde = np.vstack([np.hstack([A, np.zeros((A.shape[0], 1))]),
# np.zeros((1, A.shape[1] + 1))])
# Ctilde = np.vstack([C, np.zeros((1, 1))])
# S_ytilde = np.hstack([S_y, np.zeros((1, 1))])

lss = qe.LinearStateSpace(A, C, S_y, mu_0=x0)

# Add extra state to initial condition


# x0 = np.hstack([x0, np.zeros(1)])

# Compute the (I - β * A)^{-1}


rm = la.inv(np.eye(A.shape[0]) - β * A)

# Constant level of consumption


cbar = (1 - β) * (S_y @ rm @ x0 - b0)
c_hist = np.ones(T) * cbar

# Debt
x_hist, y_hist = lss.simulate(T)
b_hist = np.squeeze(S_y @ rm @ x_hist - cbar / (1 - β))

return c_hist, b_hist, np.squeeze(y_hist), x_hist



# Define parameters
N_simul = 80
α, ρ1, ρ2 = 10.0, 0.9, 0.0
σ = 1.0

A = np.array([[1., 0., 0.],


[α, ρ1, ρ2],
[0., 1., 0.]])
C = np.array([[0.], [σ], [0.]])
S_y = np.array([[1, 1.0, 0.]])
β, b0 = 0.95, -10.0
x0 = np.array([1.0, α / (1 - ρ1), α / (1 - ρ1)])

# Do simulation for complete markets


s = np.random.randint(0, 10000)
np.random.seed(s) # Seeds get set the same for both economies
out = complete_ss(β, b0, x0, A, C, S_y, 80)
c_hist_com, b_hist_com, y_hist_com, x_hist_com = out

fig, ax = plt.subplots(1, 2, figsize=(15, 5))

# Consumption plots
ax[0].set_title('Consumption and income')
ax[0].plot(np.arange(N_simul), c_hist_com, label='consumption')
ax[0].plot(np.arange(N_simul), y_hist_com, label='income', alpha=.6, linestyle='--')

ax[0].legend()
ax[0].set_xlabel('Periods')
ax[0].set_ylim([80, 120])

# Debt plots
ax[1].set_title('Debt and income')
ax[1].plot(np.arange(N_simul), b_hist_com, label='debt')
ax[1].plot(np.arange(N_simul), y_hist_com, label='Income', alpha=.6, linestyle='--')

ax[1].legend()
ax[1].axhline(0, color='k')
ax[1].set_xlabel('Periods')

plt.show()

6.4.1 Interpretation of Graph

In the above graph, please note that:


• nonfinancial income fluctuates in a stationary manner.
• consumption is completely constant.
• the consumer’s debt fluctuates in a stationary manner; in fact, in this case, because
nonfinancial income is a first-order autoregressive process, the consumer’s debt is an
exact affine function (meaning linear plus a constant) of the consumer’s nonfinancial
income.

6.4.2 Incomplete Markets Version

The incomplete markets version of the model with nonfinancial income being governed by a
linear state space system is described in the first lecture on the permanent income model and
the followup lecture on the permanent income model.
In that incomplete markets setting, consumption follows a random walk and the consumer's debt follows a process with a unit root.

6.4.3 Finite State Markov Income Process

We now turn to a finite-state Markov version of the model in which the consumer’s nonfinan-
cial income is an exact function of a Markov state that takes one of 𝑁 values.
We’ll start with a setting in which in each version of our consumption-smoothing models,
nonfinancial income is governed by a two-state Markov chain (it’s easy to generalize this to
an 𝑁 state Markov chain).
In particular, the state 𝑠𝑡 ∈ {1, 2} follows a Markov chain with transition probability matrix

𝑃𝑖𝑗 = ℙ{𝑠𝑡+1 = 𝑗 | 𝑠𝑡 = 𝑖}

where ℙ means conditional probability.


Nonfinancial income {𝑦𝑡 } obeys

$$y_t = \begin{cases} \bar{y}_1 & \text{if } s_t = 1 \\ \bar{y}_2 & \text{if } s_t = 2 \end{cases}$$

A consumer wishes to maximize


$$\mathbb{E}\left[\sum_{t=0}^{\infty} \beta^t u(c_t)\right] \quad \text{where } u(c_t) = -(c_t - \gamma)^2 \text{ and } 0 < \beta < 1 \tag{4}$$

Here 𝛾 > 0 is a bliss level of consumption.

6.4.4 Market Structure

Our complete and incomplete markets models differ in how thoroughly the market struc-
ture allows a consumer to transfer resources across time and Markov states, there being more
transfer opportunities in the complete markets setting than in the incomplete markets setting.
Watch how these differences in opportunities affect
• how smooth consumption is across time and Markov states
• how the consumer chooses to make his levels of indebtedness behave over time and
across Markov states

6.5 Model 1 (Complete Markets)

At each date 𝑡 ≥ 0, the consumer trades a full array of one-period ahead Arrow securi-
ties.
We assume that prices of these securities are exogenous to the consumer.
Exogenous means that they are unaffected by the consumer’s decisions.
In Markov state 𝑠𝑡 at time 𝑡, one unit of consumption in state 𝑠𝑡+1 at time 𝑡 + 1 costs
𝑞(𝑠𝑡+1 | 𝑠𝑡 ) units of the time 𝑡 consumption good.
The prices 𝑞(𝑠𝑡+1 | 𝑠𝑡 ) are given and can be organized into a matrix 𝑄 with 𝑄𝑖𝑗 = 𝑞(𝑗|𝑖).
At time 𝑡 = 0, the consumer starts with an inherited level of debt due at time 0 of 𝑏0 units of
time 0 consumption goods.
The consumer’s budget constraint at 𝑡 ≥ 0 in Markov state 𝑠𝑡 is

$$c_t + b_t \leq y(s_t) + \sum_j q(j \mid s_t) \, b_{t+1}(j \mid s_t) \tag{5}$$

where 𝑏𝑡 is the consumer’s one-period debt that falls due at time 𝑡 and 𝑏𝑡+1 (𝑗 | 𝑠𝑡 ) are the con-
sumer’s time 𝑡 sales of the time 𝑡 + 1 consumption good in Markov state 𝑗.
These are
• when multiplied by 𝑞(𝑗 | 𝑠𝑡 ), a source of time 𝑡 revenues for the consumer
• a source of time 𝑡 + 1 expenditures for the consumer
A natural analog of Hall’s assumption that the one-period risk-free gross interest rate is 𝛽 −1
is

𝑞(𝑗 | 𝑖) = 𝛽𝑃𝑖𝑗 (6)

To understand how this is a natural analogue, observe that in state 𝑖 it costs ∑𝑗 𝑞(𝑗 | 𝑖) to purchase one unit of consumption next period for sure, i.e., no matter what state of the world occurs at 𝑡 + 1.
Hence the implied price of a risk-free claim on one unit of consumption next period is

$$\sum_j q(j \mid i) = \sum_j \beta P_{ij} = \beta$$

This confirms the sense in which (6) is a natural counterpart to Hall’s assumption that the
risk-free one-period gross interest rate is 𝑅 = 𝛽 −1 .
It is timely to recall that the gross one-period risk-free interest rate is the reciprocal of the price at time 𝑡 of a risk-free claim on one unit of consumption tomorrow.
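As a quick numerical sanity check (a sketch using the default transition matrix from the code below), each row of 𝑄 = 𝛽𝑃 sums to 𝛽, so the implied risk-free gross return is 𝛽 −1 in every Markov state:

import numpy as np

β = 0.96
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
Q = β * P                  # Arrow security prices q(j | i) = β P_ij
print(Q.sum(axis=1))       # each row sums to β = 0.96
print(1 / Q.sum(axis=1))   # the risk-free gross return β^{-1} in every state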

First-order necessary conditions for maximizing the consumer’s expected utility subject to the
sequence of budget constraints (5) are

$$\beta \frac{u'(c_{t+1})}{u'(c_t)} \, \mathbb{P}\{s_{t+1} \mid s_t\} = q(s_{t+1} \mid s_t) \quad \text{for all } s_t, s_{t+1}$$


or, under our assumption (6) about the values taken by Arrow security prices,

𝑐𝑡+1 = 𝑐𝑡 (7)

Thus, our consumer sets 𝑐𝑡 = 𝑐̄ for all 𝑡 ≥ 0 for some value 𝑐̄ that it is our job now to determine along with values for 𝑏𝑡+1 (𝑗 | 𝑠𝑡 = 𝑖) for 𝑖 = 1, 2 and 𝑗 = 1, 2.
We'll use a guess and verify method to determine these objects.
Guess: We’ll make the plausible guess that

𝑏𝑡+1 (𝑠𝑡+1 = 𝑗 | 𝑠𝑡 = 𝑖) = 𝑏(𝑗), 𝑖 = 1, 2; 𝑗 = 1, 2 (8)

so that the amount borrowed today depends only on tomorrow's Markov state. (Why is this a plausible guess?)
To determine 𝑐,̄ we shall deduce implications of the consumer’s budget constraints in each
Markov state today and our guess (8) about the consumer’s debt level choices.
For 𝑡 ≥ 1, these imply

$$\begin{aligned} \bar{c} + b(1) &= y(1) + q(1 \mid 1) b(1) + q(2 \mid 1) b(2) \\ \bar{c} + b(2) &= y(2) + q(1 \mid 2) b(1) + q(2 \mid 2) b(2) \end{aligned} \tag{9}$$

or

$$\begin{bmatrix} b(1) \\ b(2) \end{bmatrix} + \begin{bmatrix} \bar{c} \\ \bar{c} \end{bmatrix} = \begin{bmatrix} y(1) \\ y(2) \end{bmatrix} + \beta \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} b(1) \\ b(2) \end{bmatrix}$$

These are 2 equations in the 3 unknowns 𝑐̄, 𝑏(1), 𝑏(2).


To get a third equation, we assume that at time 𝑡 = 0, 𝑏0 is the debt due and that the Markov state is 𝑠0 = 1.
(We could instead have assumed that at time 𝑡 = 0 the Markov state 𝑠0 = 2, which would
affect our answer as we shall see)
Since we have assumed that 𝑠0 = 1, the budget constraint at time 𝑡 = 0 is

𝑐 ̄ + 𝑏0 = 𝑦(1) + 𝑞(1 | 1)𝑏(1) + 𝑞(2 | 1)𝑏(2) (10)

where 𝑏0 is the (exogenous) debt the consumer is assumed to bring into period 0.
If we substitute (10) into the first equation of (9) and rearrange, we discover that

𝑏(1) = 𝑏0 (11)

We can then use the second equation of (9) to deduce the restriction

𝑦(1) − 𝑦(2) + [𝑞(1 | 1) − 𝑞(1 | 2) − 1]𝑏0 + [𝑞(2 | 1) + 1 − 𝑞(2 | 2)]𝑏(2) = 0, (12)

an equation that we can solve for the unknown 𝑏(2).


Knowing 𝑏(1) and 𝑏(2), we can solve equation (10) for the constant level of consumption 𝑐̄.
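Before turning to the general-purpose code below, here is a sketch (using the default parameters that appear later) that solves equations (9) and (10) directly as three linear equations in the three unknowns 𝑐̄, 𝑏(1), 𝑏(2):

import numpy as np

β, b0 = 0.96, 3.0
y = np.array([2.0, 1.5])
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
Q = β * P

# rows: the time-0 constraint (10), then the two constraints in (9)
M = np.array([[1., -Q[0, 0],     -Q[0, 1]],
              [1., 1. - Q[0, 0], -Q[0, 1]],
              [1., -Q[1, 0],     1. - Q[1, 1]]])
rhs = np.array([y[0] - b0, y[0], y[1]])
c_bar, b1, b2 = np.linalg.solve(M, rhs)
print(c_bar, b1, b2)   # note that b(1) = b0, as equation (11) asserts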

6.5.1 Key Outcomes

The preceding calculations indicate that in the complete markets version of our model, we
obtain the following striking results:
• The consumer chooses to make consumption perfectly constant across time and across
Markov states.
• State-contingent debt purchases 𝑏𝑡+1 (𝑠𝑡+1 = 𝑗|𝑠𝑡 = 𝑖) depend only on 𝑗.
• If the initial Markov state is 𝑠0 = 𝑗 and initial consumer debt is 𝑏0 , then debt in Markov state 𝑗 satisfies 𝑏(𝑗) = 𝑏0 .
To summarize what we have achieved up to now, we have computed the constant level of consumption 𝑐̄ and indicated how that level depends on the underlying specifications of preferences, Arrow securities prices, the stochastic process of exogenous nonfinancial income, and the initial debt level 𝑏0 .
• The consumer’s debt neither accumulates, nor decumulates, nor drifts – instead, the
debt level each period is an exact function of the Markov state, so in the two-state
Markov case, it switches between two values.
• We have verified guess (8).
• When the state 𝑠𝑡 returns to the initial state 𝑠0 , debt returns to the initial debt level.
• Debt levels in all other states depend on virtually all remaining parameters of the
model.

6.5.2 Code

Here’s some code that, among other things, contains a function called consump-
tion_complete().
This function computes {𝑏(𝑖)}𝑁
𝑖=1 , 𝑐 ̄ as outcomes given a set of parameters for the general case
with 𝑁 Markov states under the assumption of complete markets

In [4]: class ConsumptionProblem:


"""
The data for a consumption problem, including some default values.
"""

def __init__(self,
β=.96,
y=[2, 1.5],
b0=3,
P=[[.8, .2],
[.4, .6]],
init=0):
"""
Parameters
----------

β : discount factor
y : list containing the two income levels
b0 : debt in period 0 (= initial state debt level)
P : 2x2 transition matrix
init : index of initial state s0
"""
self.β = β
self.y = np.asarray(y)
self.b0 = b0
self.P = np.asarray(P)
self.init = init

def simulate(self, N_simul=80, random_state=1):


"""
Parameters
----------

N_simul : number of periods for simulation


random_state : random state for simulating Markov chain
"""
# For the simulation define a quantecon MC class
mc = qe.MarkovChain(self.P)
s_path = mc.simulate(N_simul, init=self.init, random_state=random_state)

return s_path

def consumption_complete(cp):
"""
Computes endogenous values for the complete market case.

Parameters
----------

cp : instance of ConsumptionProblem

Returns
-------

c_bar : constant consumption


b : optimal debt in each state

associated with the price system

Q = β * P
"""
β, P, y, b0, init = cp.β, cp.P, cp.y, cp.b0, cp.init # Unpack

Q = β * P # assumed price system

# construct matrices of augmented equation system


n = P.shape[0] + 1

y_aug = np.empty((n, 1))


y_aug[0, 0] = y[init] - b0
y_aug[1:, 0] = y

Q_aug = np.zeros((n, n))


Q_aug[0, 1:] = Q[init, :]
Q_aug[1:, 1:] = Q

A = np.zeros((n, n))
A[:, 0] = 1
A[1:, 1:] = np.eye(n-1)

x = np.linalg.inv(A - Q_aug) @ y_aug

c_bar = x[0, 0]
b = x[1:, 0]

return c_bar, b

def consumption_incomplete(cp, s_path):


"""
Computes endogenous values for the incomplete market case.

Parameters
----------

cp : instance of ConsumptionProblem
s_path : the path of states
"""
β, P, y, b0 = cp.β, cp.P, cp.y, cp.b0 # Unpack

N_simul = len(s_path)

# Useful variables
n = len(y)
y.shape = (n, 1)
v = np.linalg.inv(np.eye(n) - β * P) @ y

# Store consumption and debt path


b_path, c_path = np.ones(N_simul+1), np.ones(N_simul)
b_path[0] = b0

# Optimal decisions from (17) and (18)


db = ((1 - β) * v - y) / β

for i, s in enumerate(s_path):
c_path[i] = (1 - β) * (v - b_path[i] * np.ones((n, 1)))[s, 0]
b_path[i + 1] = b_path[i] + db[s, 0]

return c_path, b_path[:-1], y[s_path]

Let’s test by checking that 𝑐 ̄ and 𝑏2 satisfy the budget constraint

In [5]: cp = ConsumptionProblem()
c_bar, b = consumption_complete(cp)
np.isclose(c_bar + b[1] - cp.y[1] - (cp.β * cp.P)[1, :] @ b, 0)

Out[5]: True

Below, we’ll take the outcomes produced by this code – in particular the implied consumption
and debt paths – and compare them with outcomes from an incomplete markets model in the
spirit of Hall [24]

6.6 Model 2 (One-Period Risk-Free Debt Only)

This is a version of the original models of Hall (1978) in which the consumer’s ability to sub-
stitute intertemporally is constrained by his ability to buy or sell only one security, a risk-free
one-period bond bearing a constant gross interest rate that equals 𝛽 −1 .
Given an initial debt 𝑏0 at time 0, the consumer faces a sequence of budget constraints

𝑐𝑡 + 𝑏𝑡 = 𝑦𝑡 + 𝛽𝑏𝑡+1 , 𝑡≥0

where 𝛽 is the price at time 𝑡 of a risk-free claim on one unit of consumption at time 𝑡 + 1.
First-order conditions for the consumer’s problem are

$$\sum_j u'(c_{t+1,j}) P_{ij} = u'(c_{t,i})$$

For our assumed quadratic utility function this implies

$$\sum_j c_{t+1,j} P_{ij} = c_{t,i} \tag{13}$$

which for our finite-state Markov setting is Hall’s (1978) conclusion that consumption follows
a random walk.
As we saw in our first lecture on the permanent income model, this leads to


$$b_t = \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j y_{t+j} - (1-\beta)^{-1} c_t \tag{14}$$

and


$$c_t = (1-\beta) \left[ \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j y_{t+j} - b_t \right] \tag{15}$$

Equation (15) expresses 𝑐𝑡 as a net interest rate factor 1 − 𝛽 times the sum of the expected present value of nonfinancial income $\mathbb{E}_t \sum_{j=0}^{\infty} \beta^j y_{t+j}$ and financial wealth −𝑏𝑡 .
Substituting (15) into the one-period budget constraint and rearranging leads to


$$b_{t+1} - b_t = \beta^{-1} \left[ (1-\beta) \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j y_{t+j} - y_t \right] \tag{16}$$


Now let’s calculate the key term 𝔼𝑡 ∑𝑗=0 𝛽 𝑗 𝑦𝑡+𝑗 in our finite Markov chain setting.

Define


$$v_t := \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j y_{t+j}$$

which in the spirit of dynamic programming we can write as a Bellman equation

𝑣𝑡 = 𝑦𝑡 + 𝛽𝔼𝑡 𝑣𝑡+1

In our two-state Markov chain setting, 𝑣𝑡 = 𝑣(1) when 𝑠𝑡 = 1 and 𝑣𝑡 = 𝑣(2) when 𝑠𝑡 = 2.
Therefore, we can write our Bellman equation as

𝑣(1) = 𝑦(1) + 𝛽𝑃11 𝑣(1) + 𝛽𝑃12 𝑣(2)


𝑣(2) = 𝑦(2) + 𝛽𝑃21 𝑣(1) + 𝛽𝑃22 𝑣(2)

or

$$\vec{v} = \vec{y} + \beta P \vec{v}$$

where $\vec{v} = \begin{bmatrix} v(1) \\ v(2) \end{bmatrix}$ and $\vec{y} = \begin{bmatrix} y(1) \\ y(2) \end{bmatrix}$.
We can also write the last expression as

$$\vec{v} = (I - \beta P)^{-1} \vec{y}$$

In our finite Markov chain setting, from expression (15), consumption at date 𝑡 when debt is
𝑏𝑡 and the Markov state today is 𝑠𝑡 = 𝑖 is evidently

$$c(b_t, i) = (1-\beta) \left( [(I - \beta P)^{-1} \vec{y}\,]_i - b_t \right) \tag{17}$$

and the increment to debt is

$$b_{t+1} - b_t = \beta^{-1} \left[ (1-\beta) v(i) - y(i) \right] \tag{18}$$
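As a numerical check (a sketch using the default two-state parameters that appear below), we can confirm both the drift formula (18) and the martingale property (13): debt drifts down in the high-income state, drifts up in the low-income state, and expected consumption next period equals consumption today.

import numpy as np

β = 0.96
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
y = np.array([2.0, 1.5])

v = np.linalg.inv(np.eye(2) - β * P) @ y   # expected PV of income in each state
db = ((1 - β) * v - y) / β                 # debt increments from (18)
print(db)                                  # negative in state 1 (high y), positive in state 2 (low y)

b_t, i = 3.0, 0                            # an arbitrary debt level and state
c_now = (1 - β) * (v[i] - b_t)             # consumption from (17)
c_next = (1 - β) * (v - (b_t + db[i]))     # next-period consumption in each state
print(np.isclose(P[i] @ c_next, c_now))    # martingale property (13): True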

6.6.1 Summary of Outcomes

In contrast to outcomes in the complete markets model, in the incomplete markets model
• consumption drifts over time as a random walk; the level of consumption at time 𝑡 de-
pends on the level of debt that the consumer brings into the period as well as the ex-
pected discounted present value of nonfinancial income at 𝑡.
• the consumer’s debt drifts upward over time in response to low realizations of nonfinan-
cial income and drifts downward over time in response to high realizations of nonfinan-
cial income.
• the drift over time in the consumer’s debt and the dependence of current consumption
on today’s debt level account for the drift over time in consumption.

6.6.2 The Incomplete Markets Model

The code above also contains a function called consumption_incomplete() that uses (17) and
(18) to
• simulate paths of 𝑦𝑡 , 𝑐𝑡 , 𝑏𝑡+1
• plot these against values of 𝑐,̄ 𝑏(𝑠1 ), 𝑏(𝑠2 ) found in a corresponding complete markets
economy
Let’s try this, using the same parameters in both complete and incomplete markets economies

In [6]: cp = ConsumptionProblem()
s_path = cp.simulate()
N_simul = len(s_path)

c_bar, debt_complete = consumption_complete(cp)

c_path, debt_path, y_path = consumption_incomplete(cp, s_path)

fig, ax = plt.subplots(1, 2, figsize=(15, 5))

ax[0].set_title('Consumption paths')
ax[0].plot(np.arange(N_simul), c_path, label='incomplete market')
ax[0].plot(np.arange(N_simul), c_bar * np.ones(N_simul),
label='complete market')
ax[0].plot(np.arange(N_simul), y_path, label='income', alpha=.6, ls='--')
ax[0].legend()
ax[0].set_xlabel('Periods')

ax[1].set_title('Debt paths')
ax[1].plot(np.arange(N_simul), debt_path, label='incomplete market')
ax[1].plot(np.arange(N_simul), debt_complete[s_path],
label='complete market')
ax[1].plot(np.arange(N_simul), y_path, label='income', alpha=.6, ls='--')
ax[1].legend()
ax[1].axhline(0, color='k', ls='--')
ax[1].set_xlabel('Periods')

plt.show()

In the graph on the left, for the same sample path of nonfinancial income 𝑦𝑡 , notice that

• consumption is constant when there are complete markets, but takes a random walk in
the incomplete markets version of the model.
• the consumer’s debt oscillates between two values that are functions of the Markov state
in the complete markets model, while the consumer’s debt drifts in a “unit root” fashion
in the incomplete markets economy.

6.6.3 A sequel

In tax smoothing with complete and incomplete markets, we reinterpret the mathematics and
Python code presented in this lecture in order to construct tax-smoothing models in the in-
complete markets tradition of Barro [7] as well as in the complete markets tradition of Lucas
and Stokey [45].
Chapter 7

Tax Smoothing with Complete and Incomplete Markets

7.1 Contents

• Overview 7.2
• Tax Smoothing with Complete Markets 7.3
• Returns on State-Contingent Debt 7.4
• More Finite Markov Chain Tax-Smoothing Examples 7.5
In addition to what’s in Anaconda, this lecture uses the library:

In [1]: !pip install --upgrade quantecon

7.2 Overview

This lecture describes tax-smoothing models that are counterparts to consumption-smoothing


models in Consumption Smoothing with Complete and Incomplete Markets.
• one is in the complete markets tradition of Lucas and Stokey [45].
• the other is in the incomplete markets tradition of Barro [7].
Complete markets allow a government to buy or sell claims contingent on all possible Markov
states.
Incomplete markets allow a government to buy or sell only a limited set of securities, often
only a single risk-free security.
Barro [7] worked in an incomplete markets tradition by assuming that the only asset that can
be traded is a risk-free one period bond.
In his consumption-smoothing model, Hall [24] had assumed an exogenous stochastic process
of nonfinancial income and an exogenous gross interest rate on one period risk-free debt that
equals 𝛽 −1 , where 𝛽 ∈ (0, 1) is also a consumer’s intertemporal discount factor.
Barro [7] made an analogous assumption about the risk-free interest rate in a tax-smoothing
model that turns out to have the same mathematical structure as Hall’s consumption-
smoothing model.
To get Barro’s model from Hall’s, all we have to do is to rename variables.


We maintain Hall’s and Barro’s assumption about the interest rate when we describe an in-
complete markets version of our model.
In addition, we extend their assumption about the interest rate to an appropriate counterpart
to create a “complete markets” model in the style of Lucas and Stokey [45].

7.2.1 Isomorphism between Consumption and Tax Smoothing

For each version of a consumption-smoothing model, a tax-smoothing counterpart can be ob-


tained simply by relabeling
• consumption as tax collections
• a consumer’s one-period utility function as a government’s one-period loss function from
collecting taxes that impose deadweight welfare losses
• a consumer’s nonfinancial income as a government’s purchases
• a consumer’s debt as a government’s assets
Thus, we can convert the consumption-smoothing models in lecture Consumption Smooth-
ing with Complete and Incomplete Markets into tax-smoothing models by setting 𝑐𝑡 = 𝑇𝑡 ,
𝑦𝑡 = 𝐺𝑡 , and −𝑏𝑡 = 𝑎𝑡 , where 𝑇𝑡 is total tax collections, {𝐺𝑡 } is an exogenous government ex-
penditures process, and 𝑎𝑡 is the government’s holdings of one-period risk-free bonds coming
maturing at the due at the beginning of time 𝑡.
For elaborations on this theme, please see Optimal Savings II: LQ Techniques and later parts
of this lecture.
We’ll spend most of this lecture studying acquire finite-state Markov specification, but will
also treat the linear state space specification.

Link to History

For those who love history, President Thomas Jefferson's Secretary of Treasury Albert Gallatin (1807) [23] seems to have prescribed policies that come from Barro's model [7].
Let’s start with some standard imports:

In [2]: import numpy as np


import quantecon as qe
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.linalg as la

To exploit the isomorphism between consumption-smoothing and tax-smoothing models, we


simply use code from Consumption Smoothing with Complete and Incomplete Markets.

7.2.2 Code

Among other things, this code contains a function called consumption_complete().


This function computes $\{b(i)\}_{i=1}^{N}$ and $\bar{c}$ as outcomes given a set of parameters for the general case with $N$ Markov states under the assumption of complete markets.

In [3]: class ConsumptionProblem:


"""
The data for a consumption problem, including some default values.


"""

def __init__(self,
β=.96,
y=[2, 1.5],
b0=3,
P=[[.8, .2],
[.4, .6]],
init=0):
"""
Parameters
----------

β : discount factor
y : list containing the two income levels
b0 : debt in period 0 (= initial state debt level)
P : 2x2 transition matrix
init : index of initial state s0
"""
self.β = β
self.y = np.asarray(y)
self.b0 = b0
self.P = np.asarray(P)
self.init = init

def simulate(self, N_simul=80, random_state=1):


"""
Parameters
----------

N_simul : number of periods for simulation


random_state : random state for simulating Markov chain
"""
# For the simulation define a quantecon MC class
mc = qe.MarkovChain(self.P)
s_path = mc.simulate(N_simul, init=self.init, random_state=random_state)

return s_path

def consumption_complete(cp):
"""
Computes endogenous values for the complete market case.

Parameters
----------

cp : instance of ConsumptionProblem

Returns
-------

c_bar : constant consumption


b : optimal debt in each state

associated with the price system



Q = β * P
"""
β, P, y, b0, init = cp.β, cp.P, cp.y, cp.b0, cp.init # Unpack

Q = β * P # assumed price system

# construct matrices of augmented equation system


n = P.shape[0] + 1

y_aug = np.empty((n, 1))


y_aug[0, 0] = y[init] - b0
y_aug[1:, 0] = y

Q_aug = np.zeros((n, n))


Q_aug[0, 1:] = Q[init, :]
Q_aug[1:, 1:] = Q

A = np.zeros((n, n))
A[:, 0] = 1
A[1:, 1:] = np.eye(n-1)

x = np.linalg.inv(A - Q_aug) @ y_aug

c_bar = x[0, 0]
b = x[1:, 0]

return c_bar, b

def consumption_incomplete(cp, s_path):


"""
Computes endogenous values for the incomplete market case.

Parameters
----------

cp : instance of ConsumptionProblem
s_path : the path of states
"""
β, P, y, b0 = cp.β, cp.P, cp.y, cp.b0 # Unpack

N_simul = len(s_path)

# Useful variables
n = len(y)
y.shape = (n, 1)
v = np.linalg.inv(np.eye(n) - β * P) @ y

# Store consumption and debt path


b_path, c_path = np.ones(N_simul+1), np.ones(N_simul)
b_path[0] = b0

# Optimal decisions from (17) and (18)


db = ((1 - β) * v - y) / β

for i, s in enumerate(s_path):
c_path[i] = (1 - β) * (v - b_path[i] * np.ones((n, 1)))[s, 0]
b_path[i + 1] = b_path[i] + db[s, 0]

return c_path, b_path[:-1], y[s_path]

7.2.3 Revisiting the consumption-smoothing model

The code above also contains a function called consumption_incomplete() that uses (17) and
(18) to
• simulate paths of 𝑦𝑡 , 𝑐𝑡 , 𝑏𝑡+1
• plot these against values of 𝑐,̄ 𝑏(𝑠1 ), 𝑏(𝑠2 ) found in a corresponding complete markets
economy
Let’s try this, using the same parameters in both complete and incomplete markets economies

In [4]: cp = ConsumptionProblem()
s_path = cp.simulate()
N_simul = len(s_path)

c_bar, debt_complete = consumption_complete(cp)

c_path, debt_path, y_path = consumption_incomplete(cp, s_path)

fig, ax = plt.subplots(1, 2, figsize=(15, 5))

ax[0].set_title('Consumption paths')
ax[0].plot(np.arange(N_simul), c_path, label='incomplete market')
ax[0].plot(np.arange(N_simul), c_bar * np.ones(N_simul), label='complete market')

ax[0].plot(np.arange(N_simul), y_path, label='income', alpha=.6, ls='--')


ax[0].legend()
ax[0].set_xlabel('Periods')

ax[1].set_title('Debt paths')
ax[1].plot(np.arange(N_simul), debt_path, label='incomplete market')
ax[1].plot(np.arange(N_simul), debt_complete[s_path], label='complete�
↪market')

ax[1].plot(np.arange(N_simul), y_path, label='income', alpha=.6, ls='--')


ax[1].legend()
ax[1].axhline(0, color='k', ls='--')
ax[1].set_xlabel('Periods')

plt.show()

In the graph on the left, for the same sample path of nonfinancial income 𝑦𝑡 , notice that
• consumption is constant when there are complete markets.
• consumption takes a random walk in the incomplete markets version of the model.
• the consumer’s debt oscillates between two values that are functions of the Markov state
in the complete markets model.
• the consumer’s debt drifts because it contains a unit root in the incomplete markets
economy.

Relabeling variables to create tax-smoothing models

As indicated above, we relabel variables to acquire tax-smoothing interpretations of the com-


plete markets and incomplete markets consumption-smoothing models.

In [5]: fig, ax = plt.subplots(1, 2, figsize=(15, 5))

ax[0].set_title('Tax collection paths')


ax[0].plot(np.arange(N_simul), c_path, label='incomplete market')
ax[0].plot(np.arange(N_simul), c_bar * np.ones(N_simul), label='complete market')

ax[0].plot(np.arange(N_simul), y_path, label='govt expenditures', alpha=.6, ls='--')

ax[0].legend()
ax[0].set_xlabel('Periods')
ax[0].set_ylim([1.4, 2.1])

ax[1].set_title('Government assets paths')


ax[1].plot(np.arange(N_simul), debt_path, label='incomplete market')
ax[1].plot(np.arange(N_simul), debt_complete[s_path], label='complete market')

ax[1].plot(np.arange(N_simul), y_path, label='govt expenditures', ls='--')


ax[1].legend()
ax[1].axhline(0, color='k', ls='--')
ax[1].set_xlabel('Periods')

plt.show()

7.3 Tax Smoothing with Complete Markets

It is instructive to focus on a simple tax-smoothing example with complete markets.


This example illustrates how, in a complete markets model like that of Lucas and Stokey [45],
the government purchases insurance from the private sector.
Payouts from the insurance it had purchased allows the government to avoid raising taxes
when emergencies make government expenditures surge.
We assume that government expenditures take one of two values 𝐺1 < 𝐺2 , where Markov
state 1 means “peace” and Markov state 2 means “war”.
The government budget constraint in Markov state 𝑖 is

$$T_i + b_i = G_i + \sum_j Q_{ij} b_j$$

where

𝑄𝑖𝑗 = 𝛽𝑃𝑖𝑗

is the price today of one unit of goods in Markov state 𝑗 tomorrow when the Markov state is 𝑖
today.
𝑏𝑖 is the government’s level of assets when it arrives in Markov state 𝑖.
That is, 𝑏𝑖 equals one-period state-contingent claims owed to the government that fall due at
time 𝑡 when the Markov state is 𝑖.
Thus, if 𝑏𝑖 < 0, the government owes −𝑏𝑖 when the economy arrives in Markov state 𝑖 at time 𝑡.
In our examples below, this happens when, in a previous war-time period, the government has sold an Arrow security paying off −𝑏𝑖 in peacetime Markov state 𝑖.
It can be enlightening to express the government’s budget constraint in Markov state 𝑖 as

$$T_i = G_i + \left( \sum_j Q_{ij} b_j - b_i \right)$$

in which the term (∑𝑗 𝑄𝑖𝑗 𝑏𝑗 − 𝑏𝑖 ) equals the net amount that the government spends to pur-
chase one-period Arrow securities that will pay off next period in Markov states 𝑗 = 1, … , 𝑁
after it has received payments 𝑏𝑖 this period.

7.4 Returns on State-Contingent Debt


Notice that $\sum_{j'=1}^{N} Q_{ij'} b(j')$ is the amount that the government spends in Markov state $i$ at time $t$ to purchase one-period state-contingent claims that will pay off in Markov state $j'$ at time $t+1$.
Then the ex post one-period gross return on the portfolio of government assets held from
state 𝑖 at time 𝑡 to state 𝑗 at time 𝑡 + 1 is

$$R(j \mid i) = \frac{b(j)}{\sum_{j'=1}^{N} Q_{ij'} b(j')}$$

The cumulative return earned from putting 1 unit of time 𝑡 goods into the government port-
folio of state-contingent securities at time 𝑡 and then rolling over the proceeds into the gov-
ernment portfolio each period thereafter is

𝑅𝑇 (𝑠𝑡+𝑇 , 𝑠𝑡+𝑇 −1 , … , 𝑠𝑡 ) ≡ 𝑅(𝑠𝑡+1 |𝑠𝑡 )𝑅(𝑠𝑡+2 |𝑠𝑡+1 ) ⋯ 𝑅(𝑠𝑡+𝑇 |𝑠𝑡+𝑇 −1 )

Here is some code that computes one-period and cumulative returns on the government port-
folio in the finite-state Markov version of our complete markets model.
Convention: In this code, when 𝑃𝑖𝑗 = 0, we arbitrarily set 𝑅(𝑗|𝑖) to be 0.

In [6]: def ex_post_gross_return(b, cp):


"""
calculate the ex post one-period gross return on the portfolio
of government assets, given b and Q.
"""
Q = cp.β * cp.P

values = Q @ b

n = len(b)
R = np.zeros((n, n))

for i in range(n):
ind = cp.P[i, :] != 0
R[i, ind] = b[ind] / values[i]

return R

def cumulative_return(s_path, R):


"""
compute cumulative return from holding 1 unit market portfolio
of government bonds, given some simulated state path.


"""
T = len(s_path)

RT_path = np.empty(T)
RT_path[0] = 1
RT_path[1:] = np.cumprod([R[s_path[t], s_path[t+1]] for t in range(T-1)])

return RT_path

7.4.1 An Example of Tax Smoothing

We’ll study a tax-smoothing model with two Markov states.


In Markov state 1, there is peace and government expenditures are low.
In Markov state 2, there is war and government expenditures are high.
We’ll compute optimal policies in both complete and incomplete markets settings.
Then we’ll feed in a particular assumed path of Markov states and study outcomes.
• We’ll assume that the initial Markov state is state 1, which means we start from a state
of peace.
• The government then experiences 3 time periods of war and comes back to peace again.
• The history of Markov states is therefore {𝑝𝑒𝑎𝑐𝑒, 𝑤𝑎𝑟, 𝑤𝑎𝑟, 𝑤𝑎𝑟, 𝑝𝑒𝑎𝑐𝑒}.
In addition, as indicated above, to simplify our example, we’ll set the government’s initial as-
set level to 1, so that 𝑏1 = 1.
Here’s code that itinitializes government assets to be unity in an initial peace time Markov
state.

In [7]: # Parameters
β = .96

# change notation y to g in the tax-smoothing example


g = [1, 2]
b0 = 1
P = np.array([[.8, .2],
[.4, .6]])

cp = ConsumptionProblem(β, g, b0, P)
Q = β * P

# change notation c_bar to T_bar in the tax-smoothing example


T_bar, b = consumption_complete(cp)
R = ex_post_gross_return(b, cp)
s_path = [0, 1, 1, 1, 0]
RT_path = cumulative_return(s_path, R)

print(f"P \n {P}")
print(f"Q \n {Q}")
print(f"Govt expenditures in peace and war = {g}")
print(f"Constant tax collections = {T_bar}")
print(f"Govt debts in two states = {-b}")

msg = """
Now let's check the government's budget constraint in peace and war.
Our assumptions imply that the government always purchases 0 units of the
Arrow peace security.
"""
print(msg)

AS1 = Q[0, :] @ b
# spending on Arrow securities in peace
# (not 0 here, since we have set b0 to 1)
print(f"Spending on Arrow security in peace = {AS1}")


AS2 = Q[1, :] @ b
print(f"Spending on Arrow security in war = {AS2}")

print("")
# tax collections minus debt levels
print("Government tax collections minus debt levels in peace and war")
TB1 = T_bar + b[0]
print(f"T+b in peace = {TB1}")
TB2 = T_bar + b[1]
print(f"T+b in war = {TB2}")

print("")
print("Total government spending in peace and war")
G1 = g[0] + AS1
G2 = g[1] + AS2
print(f"Peace = {G1}")
print(f"War = {G2}")

print("")
print("Let's see ex-post and ex-ante returns on Arrow securities")

Π = np.reciprocal(Q)
exret = Π
print(f"Ex-post returns to purchase of Arrow securities = \n {exret}")
exant = Π * P
print(f"Ex-ante returns to purchase of Arrow securities \n {exant}")

print("")
print("The Ex-post one-period gross return on the portfolio of government�
↪assets")

print(R)

print("")
print("The cumulative return earned from holding 1 unit market portfolio�
↪of government

bonds")
print(RT_path[-1])

P
[[0.8 0.2]
[0.4 0.6]]
Q
[[0.768 0.192]
[0.384 0.576]]
Govt expenditures in peace and war = [1, 2]
Constant tax collections = 1.2716883116883118
Govt debts in two states = [-1. -2.62337662]

Now let's check the government's budget constraint in peace and war.

Spending on Arrow security in peace = 1.2716883116883118


Spending on Arrow security in war = 1.895064935064935

Government tax collections minus debt levels in peace and war


T+b in peace = 2.2716883116883118
T+b in war = 3.895064935064935

Total government spending in peace and war


Peace = 2.2716883116883118
War = 3.895064935064935

Let's see ex-post and ex-ante returns on Arrow securities


Ex-post returns to purchase of Arrow securities =
[[1.30208333 5.20833333]
[2.60416667 1.73611111]]
Ex-ante returns to purchase of Arrow securities
[[1.04166667 1.04166667]
[1.04166667 1.04166667]]

The Ex-post one-period gross return on the portfolio of government assets


[[0.78635621 2.0629085 ]
[0.5276864 1.38432018]]

The cumulative return earned from holding 1 unit market portfolio of government bonds
2.0860704239993675

7.4.2 Explanation

In this example, the government always purchases 1 unit of the Arrow security that pays off in peacetime (Markov state 1).
And it purchases a higher amount of the security that pays off in war time (Markov state 2).
Thus, this is an example in which
• during peacetime, the government purchases insurance against the possibility that war
breaks out next period
• during wartime, the government purchases insurance against the possibility that war
continues another period
• so long as peace continues, the ex post return on insurance against war is low
• when war breaks out or continues, the ex post return on insurance against war is high
• given the history of states that we assumed, the value of one unit of the portfolio
of government assets eventually doubles in the end because of high returns during
wartime.
We recommend plugging the quantities computed above into the government budget con-
straints in the two Markov states and staring.
Exercise: try changing the Markov transition matrix so that

$$P = \begin{bmatrix} 1 & 0 \\ .2 & .8 \end{bmatrix}$$

Also, start the system in Markov state 2 (war) with initial government assets −10, so that the
government starts the war in debt and 𝑏2 = −10.
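Here is one way (a sketch reusing the classes defined above; recall that in this section's usage the b0 argument plays the role of the government's initial asset level) to set up this exercise:

P_new = np.array([[1., 0.],
                  [.2, .8]])
# start in war (init=1) with government assets of -10
cp_new = ConsumptionProblem(β=.96, y=[1, 2], b0=-10, P=P_new, init=1)
T_bar_new, b_new = consumption_complete(cp_new)
print(f"Constant tax collections = {T_bar_new}")
print(f"Govt debts in two states = {-b_new}")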

7.5 More Finite Markov Chain Tax-Smoothing Examples

To interpret some episodes in the fiscal history of the United States, we find it interesting to
study a few more examples.
We compute examples in an 𝑁 state Markov setting under both complete and incomplete
markets.
These examples differ in how the Markov state jumps between peace and war.
To wrap procedures for solving models, relabeling graphs so that we record government debt
rather than government assets, and displaying results, we construct a Python class.

In [8]: class TaxSmoothingExample:


"""
construct a tax-smoothing example by relabeling the consumption problem class.

"""
def __init__(self, g, P, b0, states, β=.96,
init=0, s_path=None, N_simul=80, random_state=1):

self.states = states # state names

# if the path of states is not specified


if s_path is None:
self.cp = ConsumptionProblem(β, g, b0, P, init=init)
self.s_path = self.cp.simulate(N_simul=N_simul, random_state=random_state)

# if the path of states is specified


else:
self.cp = ConsumptionProblem(β, g, b0, P, init=s_path[0])
self.s_path = s_path

# solve for complete market case


self.T_bar, self.b = consumption_complete(self.cp)
self.debt_value = - (β * P @ self.b).T

# solve for incomplete market case


self.T_path, self.asset_path, self.g_path = \
consumption_incomplete(self.cp, self.s_path)

# calculate returns on state-contingent debt


self.R = ex_post_gross_return(self.b, self.cp)
self.RT_path = cumulative_return(self.s_path, self.R)

def display(self):

# plot graphs
N = len(self.T_path)

plt.figure()
plt.title('Tax collection paths')
plt.plot(np.arange(N), self.T_path, label='incomplete market')
plt.plot(np.arange(N), self.T_bar * np.ones(N), label='complete market')
plt.plot(np.arange(N), self.g_path, label='govt expenditures', alpha=.6, ls='--')
plt.legend()
plt.xlabel('Periods')
plt.show()

plt.title('Government debt paths')


plt.plot(np.arange(N), -self.asset_path, label='incomplete market')
plt.plot(np.arange(N), -self.b[self.s_path], label='complete market')
plt.plot(np.arange(N), self.g_path, label='govt expenditures', ls='--')
plt.plot(np.arange(N), self.debt_value[self.s_path], label="value of debts today")
plt.legend()
plt.axhline(0, color='k', ls='--')
plt.xlabel('Periods')
plt.show()

fig, ax = plt.subplots()
ax.set_title('Cumulative return path (complete markets)')
line1 = ax.plot(np.arange(N), self.RT_path)[0]
c1 = line1.get_color()
ax.set_xlabel('Periods')
ax.set_ylabel('Cumulative return', color=c1)

ax_ = ax.twinx()
ax_._get_lines.prop_cycler = ax._get_lines.prop_cycler
line2 = ax_.plot(np.arange(N), self.g_path, ls='--')[0]
c2 = line2.get_color()
ax_.set_ylabel('Government expenditures', color=c2)

plt.show()

# plot detailed information


Q = self.cp.β * self.cp.P

print(f"P \n {self.cp.P}")
print(f"Q \n {Q}")
print(f"Govt expenditures in {', '.join(self.states)} = {self.cp.y.
↪ flatten()}")
print(f"Constant tax collections = {self.T_bar}")
print(f"Govt debt in {len(self.states)} states = {-self.b}")

print("")
print(f"Government tax collections minus debt levels in {',
'.join(self.states)}")
for i in range(len(self.states)):
TB = self.T_bar + self.b[i]
print(f" T+b in {self.states[i]} = {TB}")

print("")
print(f"Total government spending in {', '.join(self.states)}")
for i in range(len(self.states)):
G = self.cp.y[i, 0] + Q[i, :] @ self.b
print(f" {self.states[i]} = {G}")

print("")
print("Let's see ex-post and ex-ante returns on Arrow securities \n")

print(f"Ex-post returns to purchase of Arrow securities:")


for i in range(len(self.states)):
for j in range(len(self.states)):
if Q[i, j] != 0.:
print(f" π({self.states[j]}|{self.states[i]}) = {1/
↪ Q[i, j]}")

print("")
exant = 1 / self.cp.β
print(f"Ex-ante returns to purchase of Arrow securities = {exant}")

print("")
print("The Ex-post one-period gross return on the portfolio of�
↪ government
assets")
print(self.R)

print("")
print("The cumulative return earned from holding 1 unit market�
↪ portfolio of
government bonds")
print(self.RT_path[-1])

7.5.1 Parameters

In [9]: γ = .1
λ = .1
ϕ = .1
θ = .1
ψ = .1
g_L = .5
g_M = .8
g_H = 1.2
β = .96

7.5.2 Example 1

This example is designed to produce some stylized versions of tax, debt, and deficit paths fol-
lowed by the United States during and after the Civil War and also during and after World
War I.
We set the Markov chain to have three states

$$P = \begin{bmatrix} 1-\lambda & \lambda & 0 \\ 0 & 1-\phi & \phi \\ 0 & 0 & 1 \end{bmatrix}$$

where the government expenditure vector 𝑔 = [𝑔𝐿 𝑔𝐻 𝑔𝑀 ] where 𝑔𝐿 < 𝑔𝑀 < 𝑔𝐻 .



We set 𝑏0 = 1 and assume that the initial Markov state is state 1 so that the system starts off
in peace.
These parameters have government expenditure beginning at a low level, surging during the
war, then decreasing after the war to a level that exceeds its prewar level.
(This type of pattern occurred in the US Civil War and World War I experiences.)

In [10]: g_ex1 = [g_L, g_H, g_M]


P_ex1 = np.array([[1-λ, λ, 0],
[0, 1-ϕ, ϕ],
[0, 0, 1]])
b0_ex1 = 1
states_ex1 = ['peace', 'war', 'postwar']

In [11]: ts_ex1 = TaxSmoothingExample(g_ex1, P_ex1, b0_ex1, states_ex1, random_state=1)
         ts_ex1.display()

P
[[0.9 0.1 0. ]
[0. 0.9 0.1]
[0. 0. 1. ]]
Q
[[0.864 0.096 0. ]
[0. 0.864 0.096]
[0. 0. 0.96 ]]
Govt expenditures in peace, war, postwar = [0.5 1.2 0.8]
Constant tax collections = 0.7548096885813149
Govt debt in 3 states = [-1. -4.07093426 -1.12975779]

Government tax collections minus debt levels in peace, war, postwar


T+b in peace = 1.754809688581315
T+b in war = 4.825743944636677
T+b in postwar = 1.8845674740484433

Total government spending in peace, war, postwar


peace = 1.754809688581315
war = 4.825743944636677
postwar = 1.8845674740484433

Let's see ex-post and ex-ante returns on Arrow securities

Ex-post returns to purchase of Arrow securities:


π(peace|peace) = 1.1574074074074074
π(war|peace) = 10.416666666666666
π(war|war) = 1.1574074074074074
π(postwar|war) = 10.416666666666666
π(postwar|postwar) = 1.0416666666666667

Ex-ante returns to purchase of Arrow securities = 1.0416666666666667

The Ex-post one-period gross return on the portfolio of government assets


[[0.7969336 3.24426428 0. ]
[0. 1.12278592 0.31159337]
[0. 0. 1.04166667]]

The cumulative return earned from holding 1 unit market portfolio of government bonds
0.17908622141460423

In [12]: # The following shows the use of the wrapper class when a specific state path is given
         s_path = [0, 0, 1, 1, 2]
         ts_s_path = TaxSmoothingExample(g_ex1, P_ex1, b0_ex1, states_ex1, s_path=s_path)

ts_s_path.display()

P
[[0.9 0.1 0. ]
[0. 0.9 0.1]
[0. 0. 1. ]]
Q
[[0.864 0.096 0. ]
[0. 0.864 0.096]
[0. 0. 0.96 ]]
Govt expenditures in peace, war, postwar = [0.5 1.2 0.8]
Constant tax collections = 0.7548096885813149
Govt debt in 3 states = [-1. -4.07093426 -1.12975779]

Government tax collections minus debt levels in peace, war, postwar


T+b in peace = 1.754809688581315
T+b in war = 4.825743944636677
T+b in postwar = 1.8845674740484433

Total government spending in peace, war, postwar


peace = 1.754809688581315
war = 4.825743944636677
postwar = 1.8845674740484433

Let's see ex-post and ex-ante returns on Arrow securities

Ex-post returns to purchase of Arrow securities:


π(peace|peace) = 1.1574074074074074
π(war|peace) = 10.416666666666666
π(war|war) = 1.1574074074074074
π(postwar|war) = 10.416666666666666
π(postwar|postwar) = 1.0416666666666667

Ex-ante returns to purchase of Arrow securities = 1.0416666666666667

The Ex-post one-period gross return on the portfolio of government assets


[[0.7969336 3.24426428 0. ]
[0. 1.12278592 0.31159337]


[0. 0. 1.04166667]]

The cumulative return earned from holding 1 unit market portfolio of government bonds
0.9045311615620274

7.5.3 Example 2

This example captures a peace followed by a war, eventually followed by a permanent peace.
Here we set

$$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1-\gamma & \gamma \\ \phi & 0 & 1-\phi \end{bmatrix}$$

where the government expenditure vector 𝑔 = [𝑔𝐿 𝑔𝐿 𝑔𝐻 ] and where 𝑔𝐿 < 𝑔𝐻 .


We assume 𝑏0 = 1 and that the initial Markov state is state 2 so that the system starts off in
a temporary peace.

In [13]: g_ex2 = [g_L, g_L, g_H]


P_ex2 = np.array([[1, 0, 0],
[0, 1-γ, γ],
[ϕ, 0, 1-ϕ]])
b0_ex2 = 1
states_ex2 = ['peace', 'temporary peace', 'war']

In [14]: ts_ex2 = TaxSmoothingExample(g_ex2, P_ex2, b0_ex2, states_ex2, init=1, random_state=1)
         ts_ex2.display()
P
[[1. 0. 0. ]
[0. 0.9 0.1]


[0.1 0. 0.9]]
Q
[[0.96 0. 0. ]
[0. 0.864 0.096]
[0.096 0. 0.864]]
Govt expenditures in peace, temporary peace, war = [0.5 0.5 1.2]
Constant tax collections = 0.6053287197231834
Govt debt in 3 states = [ 2.63321799 -1. -2.51384083]

Government tax collections minus debt levels in peace, temporary peace, war
T+b in peace = -2.0278892733563976
T+b in temporary peace = 1.6053287197231834
T+b in war = 3.119169550173011

Total government spending in peace, temporary peace, war


peace = -2.0278892733563976
temporary peace = 1.6053287197231834
war = 3.119169550173011

Let's see ex-post and ex-ante returns on Arrow securities

Ex-post returns to purchase of Arrow securities:


π(peace|peace) = 1.0416666666666667
π(temporary peace|temporary peace) = 1.1574074074074074
π(war|temporary peace) = 10.416666666666666
π(peace|war) = 10.416666666666666
π(war|war) = 1.1574074074074074

Ex-ante returns to purchase of Arrow securities = 1.0416666666666667

The Ex-post one-period gross return on the portfolio of government assets


[[ 1.04166667 0. 0. ]
[ 0. 0.90470824 2.27429251]
[-1.37206116 0. 1.30985865]]

The cumulative return earned from holding 1 unit market portfolio of government bonds
-9.36899173259421

7.5.4 Example 3

This example features a situation in which one of the states is a war state with no hope of
peace next period, while another state is a war state with a positive probability of peace next
period.
The Markov chain is:

$$P = \begin{bmatrix} 1-\lambda & \lambda & 0 & 0 \\ 0 & 1-\phi & \phi & 0 \\ 0 & 0 & 1-\psi & \psi \\ \theta & 0 & 0 & 1-\theta \end{bmatrix}$$

with government expenditure levels for the four states being [𝑔𝐿 𝑔𝐿 𝑔𝐻 𝑔𝐻 ] where 𝑔𝐿 <
𝑔𝐻 .
We start with 𝑏0 = 1 and 𝑠0 = 1.

In [15]: g_ex3 = [g_L, g_L, g_H, g_H]


P_ex3 = np.array([[1-λ, λ, 0, 0],


[0, 1-ϕ, ϕ, 0],
[0, 0, 1-ψ, ψ],
[θ, 0, 0, 1-θ ]])
b0_ex3 = 1
states_ex3 = ['peace1', 'peace2', 'war1', 'war2']

In [16]: ts_ex3 = TaxSmoothingExample(g_ex3, P_ex3, b0_ex3, states_ex3, random_state=1)
         ts_ex3.display()
P
[[0.9 0.1 0. 0. ]
[0. 0.9 0.1 0. ]
[0. 0. 0.9 0.1]
[0.1 0. 0. 0.9]]
Q
[[0.864 0.096 0. 0. ]
[0. 0.864 0.096 0. ]
[0. 0. 0.864 0.096]
[0.096 0. 0. 0.864]]
Govt expenditures in peace1, peace2, war1, war2 = [0.5 0.5 1.2 1.2]
Constant tax collections = 0.6927944572748268
Govt debt in 4 states = [-1. -3.42494226 -6.86027714 -4.43533487]

Government tax collections minus debt levels in peace1, peace2, war1, war2
T+b in peace1 = 1.6927944572748268
T+b in peace2 = 4.117736720554273
T+b in war1 = 7.553071593533488
T+b in war2 = 5.128129330254042

Total government spending in peace1, peace2, war1, war2


peace1 = 1.6927944572748268
peace2 = 4.117736720554273
war1 = 7.553071593533487
war2 = 5.128129330254042

Let's see ex-post and ex-ante returns on Arrow securities

Ex-post returns to purchase of Arrow securities:


π(peace1|peace1) = 1.1574074074074074
π(peace2|peace1) = 10.416666666666666
π(peace2|peace2) = 1.1574074074074074
π(war1|peace2) = 10.416666666666666
π(war1|war1) = 1.1574074074074074
π(war2|war1) = 10.416666666666666
π(peace1|war2) = 10.416666666666666
π(war2|war2) = 1.1574074074074074

Ex-ante returns to purchase of Arrow securities = 1.0416666666666667

The Ex-post one-period gross return on the portfolio of government assets


[[0.83836741 2.87135998 0. 0. ]
[0. 0.94670854 1.89628977 0. ]
[0. 0. 1.07983627 0.69814023]
[0.2545741 0. 0. 1.1291214 ]]

The cumulative return earned from holding 1 unit market portfolio of government bonds
0.023714401788642227

7.5.5 Example 4

Here the Markov chain is:

$$P = \begin{bmatrix} 1-\lambda & \lambda & 0 & 0 & 0 \\ 0 & 1-\phi & \phi & 0 & 0 \\ 0 & 0 & 1-\psi & \psi & 0 \\ 0 & 0 & 0 & 1-\theta & \theta \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

with government expenditure levels for the five states being [𝑔𝐿 𝑔𝐿 𝑔𝐻 𝑔𝐻 𝑔𝐿 ] where 𝑔𝐿 < 𝑔𝐻 .
We assume that 𝑏0 = 1 and 𝑠0 = 1.
In [17]: g_ex4 = [g_L, g_L, g_H, g_H, g_L]


P_ex4 = np.array([[1-λ, λ, 0, 0, 0],
[0, 1-ϕ, ϕ, 0, 0],
[0, 0, 1-ψ, ψ, 0],
[0, 0, 0, 1-θ, θ],
[0, 0, 0, 0, 1]])
b0_ex4 = 1
states_ex4 = ['peace1', 'peace2', 'war1', 'war2', 'permanent peace']

In [18]: ts_ex4 = TaxSmoothingExample(g_ex4, P_ex4, b0_ex4, states_ex4, random_state=1)
         ts_ex4.display()
P
[[0.9 0.1 0. 0. 0. ]
[0. 0.9 0.1 0. 0. ]
[0. 0. 0.9 0.1 0. ]
[0. 0. 0. 0.9 0.1]
[0. 0. 0. 0. 1. ]]
Q
[[0.864 0.096 0. 0. 0. ]
[0. 0.864 0.096 0. 0. ]
[0. 0. 0.864 0.096 0. ]
[0. 0. 0. 0.864 0.096]
[0. 0. 0. 0. 0.96 ]]
Govt expenditures in peace1, peace2, war1, war2, permanent peace = [0.5 0.5 1.2 1.2
0.5]
Constant tax collections = 0.6349979047185739
Govt debt in 5 states = [-1. -2.82289484 -5.4053292 -1.77211121 3.37494762]

Government tax collections minus debt levels in peace1, peace2, war1, war2, permanent
peace
T+b in peace1 = 1.6349979047185739
T+b in peace2 = 3.457892745537051
T+b in war1 = 6.040327103363227
T+b in war2 = 2.4071091102836437
T+b in permanent peace = -2.7399497132457675

Total government spending in peace1, peace2, war1, war2, permanent peace


peace1 = 1.6349979047185739
peace2 = 3.457892745537051
war1 = 6.040327103363227
war2 = 2.4071091102836437
permanent peace = -2.739949713245768

Let's see ex-post and ex-ante returns on Arrow securities

Ex-post returns to purchase of Arrow securities:


π(peace1|peace1) = 1.1574074074074074
π(peace2|peace1) = 10.416666666666666
π(peace2|peace2) = 1.1574074074074074
π(war1|peace2) = 10.416666666666666
π(war1|war1) = 1.1574074074074074
π(war2|war1) = 10.416666666666666
π(war2|war2) = 1.1574074074074074
π(permanent peace|war2) = 10.416666666666666
π(permanent peace|permanent peace) = 1.0416666666666667

Ex-ante returns to purchase of Arrow securities = 1.0416666666666667

The Ex-post one-period gross return on the portfolio of government assets


[[ 0.8810589 2.48713661 0. 0. 0. ]
[ 0. 0.95436011 1.82742569 0. 0. ]
[ 0. 0. 1.11672808 0.36611394 0. ]
[ 0. 0. 0. 1.46806216 -2.79589276]
[ 0. 0. 0. 0. 1.04166667]]

The cumulative return earned from holding 1 unit market portfolio of government bonds
-11.132109773063561

7.5.6 Example 5

The example captures a case when the system follows a deterministic path from peace to war,
and back to peace again.
Since there is no randomness, outcomes in the complete markets setting should be the same as in the incomplete markets setting.
The Markov chain is:

$$P = \begin{bmatrix} 0&1&0&0&0&0&0 \\ 0&0&1&0&0&0&0 \\ 0&0&0&1&0&0&0 \\ 0&0&0&0&1&0&0 \\ 0&0&0&0&0&1&0 \\ 0&0&0&0&0&0&1 \\ 0&0&0&0&0&0&1 \end{bmatrix}$$

with government expenditure levels for the seven states being [𝑔𝐿 𝑔𝐿 𝑔𝐻 𝑔𝐻 𝑔𝐻 𝑔𝐻 𝑔𝐿 ] where 𝑔𝐿 < 𝑔𝐻 . Assume 𝑏0 = 1 and 𝑠0 = 1.

In [19]: g_ex5 = [g_L, g_L, g_H, g_H, g_H, g_H, g_L]


P_ex5 = np.array([[0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 1]])
b0_ex5 = 1
states_ex5 = ['peace1', 'peace2', 'war1', 'war2', 'war3', 'war4', 'permanent peace']

In [20]: ts_ex5 = TaxSmoothingExample(g_ex5, P_ex5, b0_ex5, states_ex5, N_simul=7, random_state=1)
         ts_ex5.display()
P
[[0 1 0 0 0 0 0]
[0 0 1 0 0 0 0]
[0 0 0 1 0 0 0]
[0 0 0 0 1 0 0]
[0 0 0 0 0 1 0]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 1]]
Q
[[0. 0.96 0. 0. 0. 0. 0. ]
[0. 0. 0.96 0. 0. 0. 0. ]
[0. 0. 0. 0.96 0. 0. 0. ]
[0. 0. 0. 0. 0.96 0. 0. ]
[0. 0. 0. 0. 0. 0.96 0. ]
[0. 0. 0. 0. 0. 0. 0.96]
[0. 0. 0. 0. 0. 0. 0.96]]
Govt expenditures in peace1, peace2, war1, war2, war3, permanent peace = [0.5 0.5 1.2
1.2 1.2 1.2 0.5]
Constant tax collections = 0.5571895472128001
Govt debt in 6 states = [-1. -1.10123911 -1.20669652 -0.58738132 0.05773868
0.72973868
1.42973868]

Government tax collections minus debt levels in peace1, peace2, war1, war2, war3,
permanent peace
T+b in peace1 = 1.5571895472128001
T+b in peace2 = 1.6584286588928003
T+b in war1 = 1.7638860668928003
T+b in war2 = 1.1445708668928005
T+b in war3 = 0.49945086689280016
T+b in permanent peace = -0.1725491331072

Total government spending in peace1, peace2, war1, war2, war3, permanent peace
peace1 = 1.5571895472128001
peace2 = 1.6584286588928001
war1 = 1.7638860668928005
war2 = 1.1445708668928
war3 = 0.4994508668927998
permanent peace = -0.17254913310719955

Let's see ex-post and ex-ante returns on Arrow securities

Ex-post returns to purchase of Arrow securities:


π(peace2|peace1) = 1.0416666666666667
π(war1|peace2) = 1.0416666666666667
π(war2|war1) = 1.0416666666666667
π(war3|war2) = 1.0416666666666667
π(permanent peace|war3) = 1.0416666666666667

Ex-ante returns to purchase of Arrow securities = 1.0416666666666667

The Ex-post one-period gross return on the portfolio of government assets


[[0. 1.04166667 0. 0. 0. 0.
0. ]
[0. 0. 1.04166667 0. 0. 0.
0. ]
[0. 0. 0. 1.04166667 0. 0.
0. ]
[0. 0. 0. 0. 1.04166667 0.
0. ]
[0. 0. 0. 0. 0. 1.04166667
0. ]
[0. 0. 0. 0. 0. 0.
1.04166667]
[0. 0. 0. 0. 0. 0.
1.04166667]]

The cumulative return earned from holding 1 unit market portfolio of government bonds
1.2775343959060064

7.5.7 Continuous-State Gaussian Model

To construct a tax-smoothing version of the complete markets consumption-smoothing model with a continuous state space that we presented in the lecture consumption smoothing with complete and incomplete markets, we simply relabel variables.
Thus, a government faces a sequence of budget constraints

𝑇𝑡 + 𝑏𝑡 = 𝑔𝑡 + 𝛽𝔼𝑡 𝑏𝑡+1 , 𝑡≥0

where 𝑇𝑡 is tax revenues, 𝑏𝑡 are receipts at 𝑡 from contingent claims that the government had
purchased at time 𝑡 − 1, and

𝛽𝔼𝑡 𝑏𝑡+1 ≡ ∫ 𝑞𝑡+1 (𝑥𝑡+1 |𝑥𝑡 )𝑏𝑡+1 (𝑥𝑡+1 )𝑑𝑥𝑡+1

is the value of time 𝑡 + 1 state-contingent claims purchased by the government at time 𝑡.


As above with the consumption-smoothing model, we can solve the time 𝑡 budget constraint
forward to obtain


$$b_t = \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j (g_{t+j} - T_{t+j})$$

which can be rearranged to become

$$\mathbb{E}_t \sum_{j=0}^{\infty} \beta^j g_{t+j} = b_t + \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j T_{t+j}$$

which states that the present value of government purchases equals the value of government
assets at 𝑡 plus the present value of tax receipts.
With these relabelings, examples presented in consumption smoothing with complete and in-
complete markets can be interpreted as tax-smoothing models.
Returns: In the continuous state version of our incomplete markets model, the ex post one-period gross rate of return on the government portfolio equals

$$R(x_{t+1} \mid x_t) = \frac{b(x_{t+1})}{\beta E[b(x_{t+1}) \mid x_t]}$$
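To make this formula concrete, here is a minimal numerical sketch, assuming a hypothetical two-point discretization of the continuous state and arbitrary placeholder values for the transition probabilities and the debt function $b(x)$; none of these numbers come from the lecture.

In [ ]: import numpy as np

        β = 0.96
        P = np.array([[0.8, 0.2],    # hypothetical transition probabilities
                      [0.3, 0.7]])
        b = np.array([1.0, 1.5])     # hypothetical values of b(x) in each state

        # R(x'|x) = b(x') / (β E[b(x')|x])
        Eb = P @ b                   # conditional expectation of next-period b
        R = b[np.newaxis, :] / (β * Eb[:, np.newaxis])
        print(R)                     # row i holds returns realized from state i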

Related Lectures

Throughout this lecture, we have taken one-period interest rates and Arrow security prices as
exogenous objects determined outside the model and specified them in ways designed to align
our models closely with the consumption smoothing model of Barro [7].

Other lectures make these objects endogenous and describe how a government optimally manipulates prices of government debt, albeit indirectly, via the effects that distorting taxes have on equilibrium prices and allocations.
In optimal taxation in an LQ economy and recursive optimal taxation, we study complete-
markets models in which the government recognizes that it can manipulate Arrow securities
prices.
Linear-quadratic versions of the Lucas-Stokey tax-smoothing model are described in Optimal
Taxation in an LQ Economy.
That lecture is a warm-up for the non-linear-quadratic model of tax smoothing described in
Optimal Taxation with State-Contingent Debt.
In both Optimal Taxation in an LQ Economy and Optimal Taxation with State-Contingent
Debt, the government recognizes that its decisions affect prices.
In optimal taxation with incomplete markets, we study an incomplete-markets model in
which the government also manipulates prices of government debt.
Chapter 8

Robustness

8.1 Contents

• Overview 8.2
• The Model 8.3
• Constructing More Robust Policies 8.4
• Robustness as Outcome of a Two-Person Zero-Sum Game 8.5
• The Stochastic Case 8.6
• Implementation 8.7
• Application 8.8
• Appendix 8.9
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

8.2 Overview

This lecture modifies a Bellman equation to express a decision-maker's doubts about transition dynamics.
His specification doubts make the decision-maker want a robust decision rule.
Robust means insensitive to misspecification of transition dynamics.
The decision-maker has a single approximating model.
He calls it approximating to acknowledge that he doesn’t completely trust it.
He fears that outcomes will actually be determined by another model that he cannot describe explicitly.
All that he knows is that the actual data-generating model is in some (uncountable) set of models that surrounds his approximating model.
He quantifies the discrepancy between his approximating model and the genuine data-generating model by using a quantity called entropy.
(We’ll explain what entropy means below)
He wants a decision rule that will work well enough no matter which of those other models actually governs outcomes.


This is what it means for his decision rule to be “robust to misspecification of an approximating model”.
This may sound like too much to ask for, but ….
… a secret weapon is available to design robust decision rules.
The secret weapon is max-min control theory.
A value-maximizing decision-maker enlists the aid of an (imaginary) value-minimizing model
chooser to construct bounds on the value attained by a given decision rule under different
models of the transition dynamics.
The original decision-maker uses those bounds to construct a decision rule with an assured
performance level, no matter which model actually governs outcomes.

Note

In reading this lecture, please don’t think that our decision-maker is paranoid
when he conducts a worst-case analysis. By designing a rule that works well
against a worst-case, his intention is to construct a rule that will work well across
a set of models.

Let’s start with some imports:

In [2]: import pandas as pd


import numpy as np
from scipy.linalg import eig
import matplotlib.pyplot as plt
%matplotlib inline
import quantecon as qe

8.2.1 Sets of Models Imply Sets Of Values

Our “robust” decision-maker wants to know how well a given rule will work when he does not
know a single transition law ….
… he wants to know sets of values that will be attained by a given decision rule 𝐹 under a set
of transition laws.
Ultimately, he wants to design a decision rule 𝐹 that shapes these sets of values in ways that he prefers.
With this in mind, consider the following graph, which relates to a particular decision problem to be explained below

The figure shows a value-entropy correspondence for a particular decision rule 𝐹 .


The shaded set is the graph of the correspondence, which maps entropy to a set of values associated with a set of models that surround the decision-maker's approximating model.
Here
• Value refers to a sum of discounted rewards obtained by applying the decision rule 𝐹
when the state starts at some fixed initial state 𝑥0 .
• Entropy is a non-negative number that measures the size of a set of models surrounding
the decision-maker’s approximating model.
– Entropy is zero when the set includes only the approximating model, indicating
that the decision-maker completely trusts the approximating model.
– Entropy is bigger, and the set of surrounding models is bigger, the less the
decision-maker trusts the approximating model.
The shaded region indicates that for all models having entropy less than or equal to the number on the horizontal axis, the value obtained will be somewhere within the indicated set of values.
Now let’s compare sets of values associated with two different decision rules, 𝐹𝑟 and 𝐹𝑏 .
In the next figure,
• The red set shows the value-entropy correspondence for decision rule 𝐹𝑟 .
• The blue set shows the value-entropy correspondence for decision rule 𝐹𝑏 .

The blue correspondence is skinnier than the red correspondence.


This conveys the sense in which the decision rule 𝐹𝑏 is more robust than the decision rule 𝐹𝑟
• more robust means that the set of values is less sensitive to increasing misspecification
as measured by entropy
Notice that the less robust rule 𝐹𝑟 promises higher values for small misspecifications (small
entropy).
(But it is more fragile in the sense that it is more sensitive to perturbations of the approximating model)
Below we’ll explain in detail how to construct these sets of values for a given 𝐹 , but for now
….
Here is a hint about the secret weapons we’ll use to construct these sets

• We’ll use some min problems to construct the lower bounds

• We’ll use some max problems to construct the upper bounds


We will also describe how to choose 𝐹 to shape the sets of values.
This will involve crafting a skinnier set at the cost of a lower level (at least for low values of
entropy).

8.2.2 Inspiring Video

If you want to understand more about why one serious quantitative researcher is interested in
this approach, we recommend Lars Peter Hansen’s Nobel lecture.

8.2.3 Other References

Our discussion in this lecture is based on


• [28]
• [26]

8.3 The Model

For simplicity, we present ideas in the context of a class of problems with linear transition
laws and quadratic objective functions.
To fit in with our earlier lecture on LQ control, we will treat loss minimization rather than
value maximization.
To begin, recall the infinite horizon LQ problem, where an agent chooses a sequence of controls {𝑢𝑡} to minimize


$$\sum_{t=0}^{\infty} \beta^t \left\{x_t' R x_t + u_t' Q u_t\right\} \tag{1}$$

subject to the linear law of motion

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵𝑢𝑡 + 𝐶𝑤𝑡+1 , 𝑡 = 0, 1, 2, … (2)

As before,
• 𝑥𝑡 is 𝑛 × 1, 𝐴 is 𝑛 × 𝑛
• 𝑢𝑡 is 𝑘 × 1, 𝐵 is 𝑛 × 𝑘
• 𝑤𝑡 is 𝑗 × 1, 𝐶 is 𝑛 × 𝑗
• 𝑅 is 𝑛 × 𝑛 and 𝑄 is 𝑘 × 𝑘
Here 𝑥𝑡 is the state, 𝑢𝑡 is the control, and 𝑤𝑡 is a shock vector.
For now, we take $\{w_t\} := \{w_t\}_{t=1}^{\infty}$ to be deterministic — a single fixed sequence.

We also allow for model uncertainty on the part of the agent solving this optimization problem.
In particular, the agent takes 𝑤𝑡 = 0 for all 𝑡 ≥ 0 as a benchmark model but admits the
possibility that this model might be wrong.
As a consequence, she also considers a set of alternative models expressed in terms of sequences {𝑤𝑡} that are “close” to the zero sequence.
She seeks a policy that will do well enough for a set of alternative models whose members are
pinned down by sequences {𝑤𝑡 }.
Soon we'll quantify the quality of a model specification in terms of the maximal size of the expression $\sum_{t=0}^{\infty} \beta^{t+1} w_{t+1}' w_{t+1}$.

8.4 Constructing More Robust Policies

If our agent takes {𝑤𝑡 } as a given deterministic sequence, then, drawing on intuition from
earlier lectures on dynamic programming, we can anticipate Bellman equations such as

$$J_{t-1}(x) = \min_u \left\{x' R x + u' Q u + \beta\, J_t(A x + B u + C w_t)\right\}$$

(Here 𝐽 depends on 𝑡 because the sequence {𝑤𝑡 } is not recursive)


Our tool for studying robustness is to construct a rule that works well even if an adverse sequence {𝑤𝑡} occurs.
In our framework, “adverse” means “loss increasing”.
As we’ll see, this will eventually lead us to construct the Bellman equation

$$J(x) = \min_u \max_w \left\{x' R x + u' Q u + \beta\, [J(A x + B u + C w) - \theta w' w]\right\} \tag{3}$$

Notice that we’ve added the penalty term −𝜃𝑤′ 𝑤.


Since $w' w = \|w\|^2$, this term becomes influential when $w$ moves away from the origin.
The penalty parameter 𝜃 controls how much we penalize the maximizing agent for “harming” the minimizing agent.
By raising 𝜃 more and more, we more and more limit the ability of the maximizing agent to distort outcomes relative to the approximating model.
So bigger 𝜃 is implicitly associated with smaller distortion sequences {𝑤𝑡 }.

8.4.1 Analyzing the Bellman Equation

So what does 𝐽 in (3) look like?


As with the ordinary LQ control model, 𝐽 takes the form 𝐽 (𝑥) = 𝑥′ 𝑃 𝑥 for some symmetric
positive definite matrix 𝑃 .
One of our main tasks will be to analyze and compute the matrix 𝑃 .
Related tasks will be to study associated feedback rules for 𝑢𝑡 and 𝑤𝑡+1 .
First, using matrix calculus, you will be able to verify that

$$\max_w \left\{(Ax + Bu + Cw)' P (Ax + Bu + Cw) - \theta w' w\right\} = (Ax + Bu)' \mathcal{D}(P)(Ax + Bu) \tag{4}$$

where

𝒟(𝑃 ) ∶= 𝑃 + 𝑃 𝐶(𝜃𝐼 − 𝐶 ′ 𝑃 𝐶)−1 𝐶 ′ 𝑃 (5)

and 𝐼 is a 𝑗 × 𝑗 identity matrix. Substituting this expression for the maximum into (3) yields

$$x' P x = \min_u \left\{x' R x + u' Q u + \beta\, (Ax + Bu)' \mathcal{D}(P)(Ax + Bu)\right\} \tag{6}$$

Using similar mathematics, the solution to this minimization problem is 𝑢 = −𝐹 𝑥 where


𝐹 ∶= (𝑄 + 𝛽𝐵′ 𝒟(𝑃 )𝐵)−1 𝛽𝐵′ 𝒟(𝑃 )𝐴.
Substituting this minimizer back into (6) and working through the algebra gives 𝑥′ 𝑃 𝑥 =
𝑥′ ℬ(𝒟(𝑃 ))𝑥 for all 𝑥, or, equivalently,

𝑃 = ℬ(𝒟(𝑃 ))

where 𝒟 is the operator defined in (5) and

ℬ(𝑃 ) ∶= 𝑅 − 𝛽 2 𝐴′ 𝑃 𝐵(𝑄 + 𝛽𝐵′ 𝑃 𝐵)−1 𝐵′ 𝑃 𝐴 + 𝛽𝐴′ 𝑃 𝐴

The operator ℬ is the standard (i.e., non-robust) LQ Bellman operator, and 𝑃 = ℬ(𝑃 ) is the
standard matrix Riccati equation coming from the Bellman equation — see this discussion.
Under some regularity conditions (see [26]), the operator ℬ ∘ 𝒟 has a unique positive definite
fixed point, which we denote below by 𝑃 ̂ .
A robust policy, indexed by 𝜃, is 𝑢 = −𝐹 ̂ 𝑥 where

𝐹 ̂ ∶= (𝑄 + 𝛽𝐵′ 𝒟(𝑃 ̂ )𝐵)−1 𝛽𝐵′ 𝒟(𝑃 ̂ )𝐴 (7)

We also define

𝐾̂ ∶= (𝜃𝐼 − 𝐶 ′ 𝑃 ̂ 𝐶)−1 𝐶 ′ 𝑃 ̂ (𝐴 − 𝐵𝐹 ̂ ) (8)

The interpretation of $\hat{K}$ is that $w_{t+1} = \hat{K} x_t$ on the worst-case path of $\{x_t\}$, in the sense that this vector is the maximizer of (4) evaluated at the fixed rule $u = -\hat{F} x$.
Note that 𝑃 ̂ , 𝐹 ̂ , 𝐾̂ are all determined by the primitives and 𝜃.
Note also that if 𝜃 is very large, then 𝒟 is approximately equal to the identity mapping.
Hence, when 𝜃 is large, 𝑃 ̂ and 𝐹 ̂ are approximately equal to their standard LQ values.
Furthermore, when 𝜃 is large, 𝐾̂ is approximately equal to zero.
Conversely, smaller 𝜃 is associated with greater fear of model misspecification and greater
concern for robustness.
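As a check on these formulas, the following sketch iterates $P \mapsto \mathcal{B}(\mathcal{D}(P))$ directly in NumPy and then forms $\hat{F}$ and $\hat{K}$ from (7) and (8). It is illustrative only, assuming generic conformable matrices and parameter values for which the iteration converges; the RBLQ class used later in this lecture performs the same computations robustly.

In [ ]: import numpy as np

        def D_op(P, C, θ):
            # 𝒟(P) = P + P C (θI - C'PC)^{-1} C'P, as in (5)
            j = C.shape[1]
            return P + P @ C @ np.linalg.solve(θ * np.eye(j) - C.T @ P @ C, C.T @ P)

        def B_op(P, A, B, Q, R, β):
            # ℬ(P) = R - β² A'PB (Q + βB'PB)^{-1} B'PA + βA'PA
            M = np.linalg.solve(Q + β * B.T @ P @ B, B.T @ P @ A)
            return R - β**2 * A.T @ P @ B @ M + β * A.T @ P @ A

        def robust_rule_sketch(A, B, C, Q, R, β, θ, tol=1e-9):
            P = R.copy()
            while True:                       # iterate on P = ℬ(𝒟(P))
                P_new = B_op(D_op(P, C, θ), A, B, Q, R, β)
                if np.max(np.abs(P_new - P)) < tol:
                    break
                P = P_new
            DP = D_op(P, C, θ)
            F = np.linalg.solve(Q + β * B.T @ DP @ B, β * B.T @ DP @ A)    # (7)
            j = C.shape[1]
            K = np.linalg.solve(θ * np.eye(j) - C.T @ P @ C,
                                C.T @ P @ (A - B @ F))                     # (8)
            return F, K, P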

8.5 Robustness as Outcome of a Two-Person Zero-Sum Game

What we have done above can be interpreted in terms of a two-person zero-sum game in
which 𝐹 ̂ , 𝐾̂ are Nash equilibrium objects.
Agent 1 is our original agent, who seeks to minimize loss in the LQ program while admitting
the possibility of misspecification.
Agent 2 is an imaginary malevolent player.
Agent 2’s malevolence helps the original agent to compute bounds on his value function
across a set of models.
We begin with agent 2’s problem.

8.5.1 Agent 2’s Problem

Agent 2

1. knows a fixed policy 𝐹 specifying the behavior of agent 1, in the sense that 𝑢𝑡 = −𝐹 𝑥𝑡
for all 𝑡

2. responds by choosing a shock sequence {𝑤𝑡 } from a set of paths sufficiently close to the
benchmark sequence {0, 0, 0, …}

A natural way to say “sufficiently close to the zero sequence” is to restrict the summed inner product $\sum_{t=1}^{\infty} w_t' w_t$ to be small.
However, to obtain a time-invariant recursive formulation, it turns out to be convenient to
restrict a discounted inner product


$$\sum_{t=1}^{\infty} \beta^t w_t' w_t \le \eta \tag{9}$$

Now let 𝐹 be a fixed policy, and let 𝐽𝐹 (𝑥0 , w) be the present-value cost of that policy given
sequence w ∶= {𝑤𝑡 } and initial condition 𝑥0 ∈ ℝ𝑛 .
Substituting −𝐹 𝑥𝑡 for 𝑢𝑡 in (1), this value can be written as


$$J_F(x_0, \mathbf{w}) := \sum_{t=0}^{\infty} \beta^t x_t' (R + F' Q F) x_t \tag{10}$$

where

𝑥𝑡+1 = (𝐴 − 𝐵𝐹 )𝑥𝑡 + 𝐶𝑤𝑡+1 (11)

and the initial condition 𝑥0 is as specified in the left side of (10).


Agent 2 chooses w to maximize agent 1’s loss 𝐽𝐹 (𝑥0 , w) subject to (9).
Using a Lagrangian formulation, we can express this problem as


$$\max_{\mathbf{w}} \sum_{t=0}^{\infty} \beta^t \left\{x_t' (R + F' Q F) x_t - \beta\theta \left(w_{t+1}' w_{t+1} - \eta\right)\right\}$$

where {𝑥𝑡} satisfies (11) and 𝜃 is a Lagrange multiplier on constraint (9).


For the moment, let's take 𝜃 as fixed, allowing us to drop the constant 𝛽𝜃𝜂 term in the objective function, and hence write the problem as


$$\max_{\mathbf{w}} \sum_{t=0}^{\infty} \beta^t \left\{x_t' (R + F' Q F) x_t - \beta\theta\, w_{t+1}' w_{t+1}\right\}$$

or, equivalently,


$$\min_{\mathbf{w}} \sum_{t=0}^{\infty} \beta^t \left\{-x_t' (R + F' Q F) x_t + \beta\theta\, w_{t+1}' w_{t+1}\right\} \tag{12}$$

subject to (11).
What’s striking about this optimization problem is that it is once again an LQ discounted
dynamic programming problem, with w = {𝑤𝑡 } as the sequence of controls.
The expression for the optimal policy can be found by applying the usual LQ formula (see
here).
We denote it by 𝐾(𝐹 , 𝜃), with the interpretation 𝑤𝑡+1 = 𝐾(𝐹 , 𝜃)𝑥𝑡 .
The remaining step for agent 2’s problem is to set 𝜃 to enforce the constraint (9), which can
be done by choosing 𝜃 = 𝜃𝜂 such that


$$\beta \sum_{t=0}^{\infty} \beta^t x_t' K(F, \theta_\eta)' K(F, \theta_\eta) x_t = \eta \tag{13}$$

Here 𝑥𝑡 is given by (11) — which in this case becomes 𝑥𝑡+1 = (𝐴 − 𝐵𝐹 + 𝐶𝐾(𝐹 , 𝜃))𝑥𝑡 .

8.5.2 Using Agent 2’s Problem to Construct Bounds on the Value Sets

The Lower Bound

Define the minimized object on the right side of problem (12) as 𝑅𝜃 (𝑥0 , 𝐹 ).
Because “minimizers minimize” we have

$$R_\theta(x_0, F) \le \sum_{t=0}^{\infty} \beta^t \left\{-x_t' (R + F' Q F) x_t\right\} + \beta\theta \sum_{t=0}^{\infty} \beta^t w_{t+1}' w_{t+1},$$

where 𝑥𝑡+1 = (𝐴 − 𝐵𝐹 + 𝐶𝐾(𝐹 , 𝜃))𝑥𝑡 and 𝑥0 is a given initial condition.


This inequality in turn implies the inequality


$$R_\theta(x_0, F) - \theta\,\mathrm{ent} \le \sum_{t=0}^{\infty} \beta^t \left\{-x_t' (R + F' Q F) x_t\right\} \tag{14}$$

where


$$\mathrm{ent} := \beta \sum_{t=0}^{\infty} \beta^t w_{t+1}' w_{t+1}$$

The left side of inequality (14) is a straight line with slope −𝜃.
Technically, it is a “separating hyperplane”.
At a particular value of entropy, the line is tangent to the lower bound of values as a function
of entropy.
In particular, the lower bound on the left side of (14) is attained when


$$\mathrm{ent} = \beta \sum_{t=0}^{\infty} \beta^t x_t' K(F, \theta)' K(F, \theta) x_t \tag{15}$$

To construct the lower bound on the set of values associated with all perturbations w satisfying the entropy constraint (9) at a given entropy level, we proceed as follows:

• For a given 𝜃, solve the minimization problem (12).

• Compute the minimizer 𝑅𝜃 (𝑥0 , 𝐹 ) and the associated entropy using (15).
• Compute the lower bound on the value function 𝑅𝜃 (𝑥0 , 𝐹 ) − 𝜃 ent and plot it against
ent.
• Repeat the preceding three steps for a range of values of 𝜃 to trace out the lower bound.

Note
This procedure sweeps out a set of separating hyperplanes indexed by different
values for the Lagrange multiplier 𝜃.

The Upper Bound

To construct an upper bound we use a very similar procedure.


We simply replace the minimization problem (12) with the maximization problem


$$V_{\tilde\theta}(x_0, F) = \max_{\mathbf{w}} \sum_{t=0}^{\infty} \beta^t \left\{-x_t' (R + F' Q F) x_t - \beta \tilde\theta\, w_{t+1}' w_{t+1}\right\} \tag{16}$$

where now $\tilde\theta > 0$ penalizes the choice of w with larger entropy.


(Notice that $\tilde\theta = -\theta$ in problem (12))
Because “maximizers maximize” we have

$$V_{\tilde\theta}(x_0, F) \ge \sum_{t=0}^{\infty} \beta^t \left\{-x_t' (R + F' Q F) x_t\right\} - \beta\tilde\theta \sum_{t=0}^{\infty} \beta^t w_{t+1}' w_{t+1}$$

which in turn implies the inequality


$$V_{\tilde\theta}(x_0, F) + \tilde\theta\,\mathrm{ent} \ge \sum_{t=0}^{\infty} \beta^t \left\{-x_t' (R + F' Q F) x_t\right\} \tag{17}$$

where


$$\mathrm{ent} \equiv \beta \sum_{t=0}^{\infty} \beta^t w_{t+1}' w_{t+1}$$

The left side of inequality (17) is a straight line with slope $\tilde\theta$.
The upper bound on the left side of (17) is attained when


$$\mathrm{ent} = \beta \sum_{t=0}^{\infty} \beta^t x_t' K(F, \tilde\theta)' K(F, \tilde\theta) x_t \tag{18}$$

To construct the upper bound on the set of values associated with all perturbations w with a given entropy, we proceed much as we did for the lower bound

• For a given $\tilde\theta$, solve the maximization problem (16).
• Compute the maximizer $V_{\tilde\theta}(x_0, F)$ and the associated entropy using (18).
• Compute the upper bound on the value function $V_{\tilde\theta}(x_0, F) + \tilde\theta\,\mathrm{ent}$ and plot it against ent.
• Repeat the preceding three steps for a range of values of $\tilde\theta$ to trace out the upper bound.

Reshaping the Set of Values

Now in the interest of reshaping these sets of values by choosing 𝐹, we turn to agent 1's problem.

8.5.3 Agent 1’s Problem

Now we turn to agent 1, who solves


$$\min_{\{u_t\}} \sum_{t=0}^{\infty} \beta^t \left\{x_t' R x_t + u_t' Q u_t - \beta\theta\, w_{t+1}' w_{t+1}\right\} \tag{19}$$

where {𝑤𝑡+1 } satisfies 𝑤𝑡+1 = 𝐾𝑥𝑡 .


In other words, agent 1 minimizes


$$\sum_{t=0}^{\infty} \beta^t \left\{x_t' (R - \beta\theta K' K) x_t + u_t' Q u_t\right\} \tag{20}$$

subject to

𝑥𝑡+1 = (𝐴 + 𝐶𝐾)𝑥𝑡 + 𝐵𝑢𝑡 (21)

Once again, the expression for the optimal policy can be found here — we denote it by 𝐹 ̃ .

8.5.4 Nash Equilibrium

Clearly, the 𝐹 ̃ we have obtained depends on 𝐾, which, in agent 2’s problem, depended on an
initial policy 𝐹 .
Holding all other parameters fixed, we can represent this relationship as a mapping Φ, where

𝐹 ̃ = Φ(𝐾(𝐹 , 𝜃))

The map 𝐹 ↦ Φ(𝐾(𝐹 , 𝜃)) corresponds to a situation in which

1. agent 1 uses an arbitrary initial policy 𝐹

2. agent 2 best responds to agent 1 by choosing 𝐾(𝐹 , 𝜃)

3. agent 1 best responds to agent 2 by choosing 𝐹 ̃ = Φ(𝐾(𝐹 , 𝜃))



As you may have already guessed, the robust policy 𝐹 ̂ defined in (7) is a fixed point of the
mapping Φ.
In particular, for any given 𝜃,

1. 𝐾(𝐹 ̂ , 𝜃) = 𝐾,̂ where 𝐾̂ is as given in (8)

2. Φ(𝐾)̂ = 𝐹 ̂

A sketch of the proof is given in the appendix.

8.6 The Stochastic Case

Now we turn to the stochastic case, where the sequence {𝑤𝑡 } is treated as an IID sequence of
random vectors.
In this setting, we suppose that our agent is uncertain about the conditional probability distribution of 𝑤𝑡+1.
The agent takes the standard normal distribution 𝑁(0, 𝐼) as the baseline conditional distribution, while admitting the possibility that other “nearby” distributions prevail.
These alternative conditional distributions of 𝑤𝑡+1 might depend nonlinearly on the history 𝑥𝑠, 𝑠 ≤ 𝑡.
To implement this idea, we need a notion of what it means for one distribution to be near
another one.
Here we adopt a very useful measure of closeness for distributions known as the relative entropy, or Kullback-Leibler divergence.
For densities 𝑝, 𝑞, the Kullback-Leibler divergence of 𝑞 from 𝑝 is defined as

$$D_{KL}(p, q) := \int \ln\left[\frac{p(x)}{q(x)}\right] p(x)\, dx$$
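Since the benchmark and worst-case distributions in this section turn out to be Gaussian, it is worth noting that $D_{KL}$ has a well-known closed form for two multivariate normals. Here is a small sketch evaluating it at hypothetical parameter values; the formula is standard and not specific to this lecture.

In [ ]: import numpy as np

        def kl_gaussian(μ_p, Σ_p, μ_q, Σ_q):
            # D_KL(p, q) for densities p = N(μ_p, Σ_p) and q = N(μ_q, Σ_q)
            n = μ_p.size
            Σq_inv = np.linalg.inv(Σ_q)
            diff = μ_q - μ_p
            return 0.5 * (np.trace(Σq_inv @ Σ_p) + diff @ Σq_inv @ diff - n
                          + np.log(np.linalg.det(Σ_q) / np.linalg.det(Σ_p)))

        # Divergence of a hypothetical shifted, rescaled normal from N(0, I)
        print(kl_gaussian(np.ones(2), 2 * np.eye(2), np.zeros(2), np.eye(2)))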

Using this notation, we replace (3) with the stochastic analog

$$J(x) = \min_u \max_{\psi \in \mathcal{P}} \left\{x' R x + u' Q u + \beta \left[\int J(Ax + Bu + Cw)\, \psi(dw) - \theta D_{KL}(\psi, \phi)\right]\right\} \tag{22}$$

Here 𝒫 represents the set of all densities on ℝ𝑛 and 𝜙 is the benchmark distribution 𝑁 (0, 𝐼).
The distribution 𝜓 is chosen as the least desirable conditional distribution in terms of next period outcomes, while taking into account the penalty term 𝜃𝐷𝐾𝐿(𝜓, 𝜙).
This penalty term plays a role analogous to the one played by the deterministic penalty 𝜃𝑤′ 𝑤
in (3), since it discourages large deviations from the benchmark.

8.6.1 Solving the Model

The maximization problem in (22) appears highly nontrivial — after all, we are maximizing
over an infinite dimensional space consisting of the entire set of densities.

However, it turns out that the solution is tractable, and in fact also falls within the class of
normal distributions.
First, we note that 𝐽 has the form 𝐽 (𝑥) = 𝑥′ 𝑃 𝑥 + 𝑑 for some positive definite matrix 𝑃 and
constant real number 𝑑.
Moreover, it turns out that if (𝐼 − 𝜃−1 𝐶 ′ 𝑃 𝐶)−1 is nonsingular, then

$$\max_{\psi \in \mathcal{P}} \left\{\int (Ax + Bu + Cw)' P (Ax + Bu + Cw)\, \psi(dw) - \theta D_{KL}(\psi, \phi)\right\} = (Ax + Bu)' \mathcal{D}(P)(Ax + Bu) + \kappa(\theta, P) \tag{23}$$

where

𝜅(𝜃, 𝑃 ) ∶= 𝜃 ln[det(𝐼 − 𝜃−1 𝐶 ′ 𝑃 𝐶)−1 ]

and the maximizer is the Gaussian distribution

𝜓 = 𝑁 ((𝜃𝐼 − 𝐶 ′ 𝑃 𝐶)−1 𝐶 ′ 𝑃 (𝐴𝑥 + 𝐵𝑢), (𝐼 − 𝜃−1 𝐶 ′ 𝑃 𝐶)−1 ) (24)

Substituting the expression for the maximum into Bellman equation (22) and using 𝐽 (𝑥) =
𝑥′ 𝑃 𝑥 + 𝑑 gives

$$x' P x + d = \min_u \left\{x' R x + u' Q u + \beta\, (Ax + Bu)' \mathcal{D}(P)(Ax + Bu) + \beta\, [d + \kappa(\theta, P)]\right\} \tag{25}$$

Since constant terms do not affect minimizers, the solution is the same as (6), leading to

𝑥′ 𝑃 𝑥 + 𝑑 = 𝑥′ ℬ(𝒟(𝑃 ))𝑥 + 𝛽 [𝑑 + 𝜅(𝜃, 𝑃 )]

To solve this Bellman equation, we take 𝑃 ̂ to be the positive definite fixed point of ℬ ∘ 𝒟.
In addition, we take 𝑑 ̂ as the real number solving 𝑑 = 𝛽 [𝑑 + 𝜅(𝜃, 𝑃 )], which is

$$\hat{d} := \frac{\beta}{1 - \beta}\, \kappa(\theta, P) \tag{26}$$

The robust policy in this stochastic case is the minimizer in (25), which is once again 𝑢 =
−𝐹 ̂ 𝑥 for 𝐹 ̂ given by (7).
Substituting the robust policy into (24) we obtain the worst-case shock distribution:

$$w_{t+1} \sim N\left(\hat{K} x_t,\; (I - \theta^{-1} C' \hat{P} C)^{-1}\right)$$

where 𝐾̂ is given by (8).


Note that the mean of the worst-case shock distribution is equal to the same worst-case 𝑤𝑡+1
as in the earlier deterministic setting.
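The next sketch assembles the worst-case mean and covariance just described for placeholder matrices; all the numerical values here are hypothetical and serve only to illustrate the formulas.

In [ ]: import numpy as np

        # Hypothetical primitives: n = 2 state variables, a scalar shock (j = 1)
        C = np.array([[0.3], [0.1]])
        P_hat = np.array([[2.0, 0.0],        # stand-in for P̂
                          [0.0, 1.0]])
        K_hat = np.array([[0.1, 0.05]])      # stand-in for K̂
        θ = 5.0

        x = np.array([[1.0], [0.5]])
        worst_mean = K_hat @ x                                       # K̂ x_t
        worst_cov = np.linalg.inv(np.eye(1) - C.T @ P_hat @ C / θ)   # (I - θ⁻¹C'P̂C)⁻¹
        print(worst_mean, worst_cov)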

8.6.2 Computing Other Quantities

Before turning to implementation, we briefly outline how to compute several other quantities
of interest.

Worst-Case Value of a Policy

One thing we will be interested in doing is holding a policy fixed and computing the discounted loss associated with that policy.
So let 𝐹 be a given policy and let 𝐽𝐹 (𝑥) be the associated loss, which, by analogy with (22),
satisfies

$$J_F(x) = \max_{\psi \in \mathcal{P}} \left\{x' (R + F' Q F) x + \beta \left[\int J_F((A - BF)x + Cw)\, \psi(dw) - \theta D_{KL}(\psi, \phi)\right]\right\}$$

Writing 𝐽𝐹 (𝑥) = 𝑥′ 𝑃𝐹 𝑥 + 𝑑𝐹 and applying the same argument used to derive (23) we get

𝑥′ 𝑃𝐹 𝑥 + 𝑑𝐹 = 𝑥′ (𝑅 + 𝐹 ′ 𝑄𝐹 )𝑥 + 𝛽 [𝑥′ (𝐴 − 𝐵𝐹 )′ 𝒟(𝑃𝐹 )(𝐴 − 𝐵𝐹 )𝑥 + 𝑑𝐹 + 𝜅(𝜃, 𝑃𝐹 )]

To solve this we take 𝑃𝐹 to be the fixed point

𝑃𝐹 = 𝑅 + 𝐹 ′ 𝑄𝐹 + 𝛽(𝐴 − 𝐵𝐹 )′ 𝒟(𝑃𝐹 )(𝐴 − 𝐵𝐹 )

and

$$d_F := \frac{\beta}{1 - \beta}\, \kappa(\theta, P_F) = \frac{\beta}{1 - \beta}\, \theta \ln\left[\det(I - \theta^{-1} C' P_F C)^{-1}\right] \tag{27}$$

If you skip ahead to the appendix, you will be able to verify that −𝑃𝐹 is the solution to the
Bellman equation in agent 2’s problem discussed above — we use this in our computations.
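Here is a minimal sketch of that computation, assuming the operator 𝒟 from (5) and generic conformable matrices; the evaluate_F method of the RBLQ class described next implements the same idea.

In [ ]: import numpy as np

        def evaluate_policy_sketch(F, A, B, C, Q, R, β, θ, tol=1e-10):
            # Iterate P_F = R + F'QF + β (A - BF)' 𝒟(P_F) (A - BF),
            # then form d_F as in (27)
            def D(P):
                j = C.shape[1]
                return P + P @ C @ np.linalg.solve(
                    θ * np.eye(j) - C.T @ P @ C, C.T @ P)

            ABF = A - B @ F
            RF = R + F.T @ Q @ F
            P = RF.copy()
            while True:
                P_new = RF + β * ABF.T @ D(P) @ ABF
                if np.max(np.abs(P_new - P)) < tol:
                    break
                P = P_new
            j = C.shape[1]
            d = β / (1 - β) * θ * np.log(np.linalg.det(
                np.linalg.inv(np.eye(j) - C.T @ P @ C / θ)))
            return P, d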

8.7 Implementation

The QuantEcon.py package provides a class called RBLQ for implementation of robust LQ
optimal control.
The code can be found on GitHub.
Here is a brief description of the methods of the class
• d_operator() and b_operator() implement 𝒟 and ℬ respectively
• robust_rule() and robust_rule_simple() both solve for the triple 𝐹 ̂ , 𝐾,̂ 𝑃 ̂ , as
described in equations (7) – (8) and the surrounding discussion
– robust_rule() is more efficient
– robust_rule_simple() is more transparent and easier to follow
• K_to_F() and F_to_K() solve the decision problems of agent 1 and agent 2 respec-
tively
• compute_deterministic_entropy() computes the left-hand side of (13)
• evaluate_F() computes the loss and entropy associated with a given policy — see
this discussion
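As a quick orientation before the application, here is a minimal usage sketch of the class with placeholder one-dimensional matrices; only the constructor and the methods listed above are used.

In [ ]: import numpy as np
        import quantecon as qe

        A, B, C = np.array([[1.0]]), np.array([[1.0]]), np.array([[0.5]])
        R, Q = np.array([[1.0]]), np.array([[1.0]])
        β, θ = 0.95, 10.0

        rblq = qe.robustlq.RBLQ(Q, R, A, B, C, β, θ)
        F_hat, K_hat, P_hat = rblq.robust_rule()          # F̂, K̂, P̂ from (7)-(8)
        K_F, P_F, d_F, O_F, o_F = rblq.evaluate_F(F_hat)  # loss and entropy objects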

8.8 Application

Let us consider a monopolist similar to this one, but now facing model uncertainty.

The inverse demand function is 𝑝𝑡 = 𝑎0 − 𝑎1 𝑦𝑡 + 𝑑𝑡 .


where

$$d_{t+1} = \rho\, d_t + \sigma_d\, w_{t+1}, \qquad \{w_t\} \overset{\text{IID}}{\sim} N(0, 1)$$

and all parameters are strictly positive.


The period return function for the monopolist is

$$r_t = p_t y_t - \gamma \frac{(y_{t+1} - y_t)^2}{2} - c y_t$$

Its objective is to maximize expected discounted profits, or, equivalently, to minimize $\mathbb{E} \sum_{t=0}^{\infty} \beta^t (-r_t)$.
To form a linear regulator problem, we take the state and control to be

$$x_t = \begin{bmatrix} 1 \\ y_t \\ d_t \end{bmatrix} \quad \text{and} \quad u_t = y_{t+1} - y_t$$

Setting 𝑏 ∶= (𝑎0 − 𝑐)/2 we define

$$R = -\begin{bmatrix} 0 & b & 0 \\ b & -a_1 & 1/2 \\ 0 & 1/2 & 0 \end{bmatrix} \quad \text{and} \quad Q = \gamma/2$$

For the transition matrices, we set

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \rho \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 0 \\ 0 \\ \sigma_d \end{bmatrix}$$

Our aim is to compute the value-entropy correspondences shown above.


The parameters are

𝑎0 = 100, 𝑎1 = 0.5, 𝜌 = 0.9, 𝜎𝑑 = 0.05, 𝛽 = 0.95, 𝑐 = 2, 𝛾 = 50.0

The standard normal distribution for 𝑤𝑡 is understood as the agent's baseline, with uncertainty parameterized by 𝜃.
We compute value-entropy correspondences for two policies

1. The no concern for robustness policy 𝐹0 , which is the ordinary LQ loss minimizer.

2. A “moderate” concern for robustness policy 𝐹𝑏 , with 𝜃 = 0.02.

The code for producing the graph shown above, with blue being for the robust policy, is as
follows

In [3]: # Model parameters

a_0 = 100
a_1 = 0.5
ρ = 0.9
σ_d = 0.05
β = 0.95
c = 2
γ = 50.0

θ = 0.002
ac = (a_0 - c) / 2.0

# Define LQ matrices

R = np.array([[0., ac, 0.],


[ac, -a_1, 0.5],
[0., 0.5, 0.]])

R = -R # For minimization
Q = γ / 2

A = np.array([[1., 0., 0.],


[0., 1., 0.],
[0., 0., ρ]])
B = np.array([[0.],
[1.],
[0.]])
C = np.array([[0.],
[0.],
[σ_d]])

# ----------------------------------------------------------------------- #
# Functions
# ----------------------------------------------------------------------- #

def evaluate_policy(θ, F):

"""
Given θ (scalar, dtype=float) and policy F (array_like), returns the
value associated with that policy under the worst case path for {w_t},
as well as the entropy level.
"""

rlq = qe.robustlq.RBLQ(Q, R, A, B, C, β, θ)
K_F, P_F, d_F, O_F, o_F = rlq.evaluate_F(F)
x0 = np.array([[1.], [0.], [0.]])
value = - x0.T @ P_F @ x0 - d_F
entropy = x0.T @ O_F @ x0 + o_F
return list(map(float, (value, entropy)))

def value_and_entropy(emax, F, bw, grid_size=1000):

"""
Compute the value function and entropy levels for a θ path
increasing until it reaches the specified target entropy value.

Parameters
==========
emax: scalar
The target entropy value

F: array_like
The policy function to be evaluated

bw: str
A string specifying whether the implied shock path follows best
or worst assumptions. The only acceptable values are 'best' and
'worst'.

Returns
=======
df: pd.DataFrame
A pandas DataFrame containing the value function and entropy
values up to the emax parameter. The columns are 'value' and
'entropy'.
"""

if bw == 'worst':
θs = 1 / np.linspace(1e-8, 1000, grid_size)
else:
θs = -1 / np.linspace(1e-8, 1000, grid_size)

df = pd.DataFrame(index=θs, columns=('value', 'entropy'))

for θ in θs:
df.loc[θ] = evaluate_policy(θ, F)
if df.loc[θ, 'entropy'] >= emax:
break

df = df.dropna(how='any')
return df

# ----------------------------------------------------------------------- #
# Main
# ----------------------------------------------------------------------- #

# Compute the optimal rule


optimal_lq = qe.lqcontrol.LQ(Q, R, A, B, C, beta=β)
Po, Fo, do = optimal_lq.stationary_values()

# Compute a robust rule given θ


baseline_robust = qe.robustlq.RBLQ(Q, R, A, B, C, β, θ)
Fb, Kb, Pb = baseline_robust.robust_rule()

# Check the positive definiteness of worst-case covariance matrix to


# ensure that θ exceeds the breakdown point
test_matrix = np.identity(Pb.shape[0]) - (C.T @ Pb @ C) / θ
eigenvals, eigenvecs = eig(test_matrix)
assert (eigenvals >= 0).all(), 'θ below breakdown point.'

emax = 1.6e6

optimal_best_case = value_and_entropy(emax, Fo, 'best')


robust_best_case = value_and_entropy(emax, Fb, 'best')
optimal_worst_case = value_and_entropy(emax, Fo, 'worst')
robust_worst_case = value_and_entropy(emax, Fb, 'worst')

fig, ax = plt.subplots()

ax.set_xlim(0, emax)
ax.set_ylabel("Value")
ax.set_xlabel("Entropy")
ax.grid()

for axis in 'x', 'y':


plt.ticklabel_format(style='sci', axis=axis, scilimits=(0, 0))

plot_args = {'lw': 2, 'alpha': 0.7}

colors = 'r', 'b'

df_pairs = ((optimal_best_case, optimal_worst_case),


(robust_best_case, robust_worst_case))

class Curve:

def __init__(self, x, y):


self.x, self.y = x, y

def __call__(self, z):


return np.interp(z, self.x, self.y)

for c, df_pair in zip(colors, df_pairs):


curves = []
for df in df_pair:
# Plot curves
x, y = df['entropy'], df['value']
x, y = (np.asarray(a, dtype='float') for a in (x, y))
egrid = np.linspace(0, emax, 100)
curve = Curve(x, y)
print(ax.plot(egrid, curve(egrid), color=c, **plot_args))
curves.append(curve)
# Color fill between curves
ax.fill_between(egrid,
curves[0](egrid),
curves[1](egrid),
color=c, alpha=0.1)

plt.show()

[<matplotlib.lines.Line2D object at 0x7facecd9a7b8>]


[<matplotlib.lines.Line2D object at 0x7facecd9ab38>]
[<matplotlib.lines.Line2D object at 0x7facecd9a6d8>]
[<matplotlib.lines.Line2D object at 0x7facecd3c550>]

Here’s another such figure, with 𝜃 = 0.002 instead of 0.02

Can you explain the different shape of the value-entropy correspondence for the robust policy?

8.9 Appendix

We sketch the proof only of the first claim in this section, which is that, for any given 𝜃, $K(\hat{F}, \theta) = \hat{K}$, where $\hat{K}$ is as given in (8).
This is the content of the next lemma.
Lemma. If 𝑃 ̂ is the fixed point of the map ℬ ∘ 𝒟 and 𝐹 ̂ is the robust policy as given in (7),
then

𝐾(𝐹 ̂ , 𝜃) = (𝜃𝐼 − 𝐶 ′ 𝑃 ̂ 𝐶)−1 𝐶 ′ 𝑃 ̂ (𝐴 − 𝐵𝐹 ̂ ) (28)

Proof: As a first step, observe that when 𝐹 = 𝐹 ̂ , the Bellman equation associated with the
LQ problem (11) – (12) is

𝑃 ̃ = −𝑅−𝐹 ̂ ′ 𝑄𝐹 ̂ −𝛽 2 (𝐴−𝐵𝐹 ̂ )′ 𝑃 ̃ 𝐶(𝛽𝜃𝐼 +𝛽𝐶 ′ 𝑃 ̃ 𝐶)−1 𝐶 ′ 𝑃 ̃ (𝐴−𝐵𝐹 ̂ )+𝛽(𝐴−𝐵𝐹 ̂ )′ 𝑃 ̃ (𝐴−𝐵𝐹 ̂ ) (29)

(revisit this discussion if you don’t know where (29) comes from) and the optimal policy is

𝑤𝑡+1 = −𝛽(𝛽𝜃𝐼 + 𝛽𝐶 ′ 𝑃 ̃ 𝐶)−1 𝐶 ′ 𝑃 ̃ (𝐴 − 𝐵𝐹 ̂ )𝑥𝑡

Suppose for a moment that −𝑃 ̂ solves the Bellman equation (29).


In this case, the policy becomes

𝑤𝑡+1 = (𝜃𝐼 − 𝐶 ′ 𝑃 ̂ 𝐶)−1 𝐶 ′ 𝑃 ̂ (𝐴 − 𝐵𝐹 ̂ )𝑥𝑡

which is exactly the claim in (28).


Hence it remains only to show that −𝑃 ̂ solves (29), or, in other words,

𝑃 ̂ = 𝑅 + 𝐹 ̂ ′ 𝑄𝐹 ̂ + 𝛽(𝐴 − 𝐵𝐹 ̂ )′ 𝑃 ̂ 𝐶(𝜃𝐼 − 𝐶 ′ 𝑃 ̂ 𝐶)−1 𝐶 ′ 𝑃 ̂ (𝐴 − 𝐵𝐹 ̂ ) + 𝛽(𝐴 − 𝐵𝐹 ̂ )′ 𝑃 ̂ (𝐴 − 𝐵𝐹 ̂ )

Using the definition of 𝒟, we can rewrite the right-hand side more simply as

𝑅 + 𝐹 ̂ ′ 𝑄𝐹 ̂ + 𝛽(𝐴 − 𝐵𝐹 ̂ )′ 𝒟(𝑃 ̂ )(𝐴 − 𝐵𝐹 ̂ )

Although it involves a substantial amount of algebra, it can be shown that the latter is just
𝑃̂ .
(Hint: Use the fact that 𝑃 ̂ = ℬ(𝒟(𝑃 ̂ )))
Chapter 9

Markov Jump Linear Quadratic


Dynamic Programming

9.1 Contents

• Overview 9.2
• Review of useful LQ dynamic programming formulas 9.3
• Linked Riccati equations for Markov LQ dynamic programming 9.4
• Applications 9.5
• Example 1 9.6
• Example 2 9.7
• More examples 9.8
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

9.2 Overview

This lecture describes Markov jump linear quadratic dynamic programming, an extension of the method described in the first LQ control lecture.
Markov jump linear quadratic dynamic programming is described and analyzed in [20] and
the references cited there.
The method has been applied to problems in macroeconomics and monetary economics by
[65] and [64].
The periodic models of seasonality described in chapter 14 of [31] are a special case of Markov
jump linear quadratic problems.
Markov jump linear quadratic dynamic programming combines advantages of
• the computational simplicity of linear quadratic dynamic programming, with
• the ability of finite state Markov chains to represent interesting patterns of random
variation.
The idea is to replace the constant matrices that define a linear quadratic dynamic programming problem with 𝑁 sets of matrices that are fixed functions of the state of an 𝑁 state Markov chain.


The state of the Markov chain together with the continuous 𝑛 × 1 state vector 𝑥𝑡 form the
state of the system.
For the class of infinite horizon problems being studied in this lecture, we obtain 𝑁 interrelated matrix Riccati equations that determine 𝑁 optimal value functions and 𝑁 linear decision rules.
One of these value functions and one of these decision rules apply in each of the 𝑁 Markov
states.
That is, when the Markov state is in state 𝑗, the value function and the decision rule for state
𝑗 prevails.

9.3 Review of useful LQ dynamic programming formulas

To begin, it is handy to have the following reminder in mind.


A linear quadratic dynamic programming problem consists of a scalar discount factor
𝛽 ∈ (0, 1), an 𝑛 × 1 state vector 𝑥𝑡 , an initial condition for 𝑥0 , a 𝑘 × 1 control vector 𝑢𝑡 , a 𝑝 × 1
random shock vector 𝑤𝑡+1 and the following two triples of matrices:
• A triple of matrices (𝑅, 𝑄, 𝑊 ) defining a loss function

𝑟(𝑥𝑡 , 𝑢𝑡 ) = 𝑥′𝑡 𝑅𝑥𝑡 + 𝑢′𝑡 𝑄𝑢𝑡 + 2𝑢′𝑡 𝑊 𝑥𝑡


• a triple of matrices (𝐴, 𝐵, 𝐶) defining a state-transition law

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵𝑢𝑡 + 𝐶𝑤𝑡+1

The problem is


$$-x_0' P x_0 - \rho = \min_{\{u_t\}_{t=0}^{\infty}} E \sum_{t=0}^{\infty} \beta^t r(x_t, u_t)$$

subject to the transition law for the state.


The optimal decision rule has the form

𝑢𝑡 = −𝐹 𝑥𝑡

and the optimal value function is of the form

− (𝑥′𝑡 𝑃 𝑥𝑡 + 𝜌)

where 𝑃 solves the algebraic matrix Riccati equation

$$P = R + \beta A' P A - (\beta B' P A + W)'(Q + \beta B' P B)^{-1}(\beta B' P A + W)$$

and the constant 𝜌 satisfies



𝜌 = 𝛽 (𝜌 + trace(𝑃 𝐶𝐶 ′ ))

and the matrix 𝐹 in the decision rule for 𝑢𝑡 satisfies

𝐹 = (𝑄 + 𝛽𝐵′ 𝑃 𝐵)−1 (𝛽(𝐵′ 𝑃 𝐴) + 𝑊 )
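As a refresher, the following sketch solves for $P$, $\rho$, and $F$ from these formulas by naive fixed-point iteration on the Riccati equation. It is an illustration only, assuming generic conformable matrices and convergence of the iteration; in practice the qe.LQ class does this work.

In [ ]: import numpy as np

        def solve_lq(A, B, C, R, Q, W, β, tol=1e-10):
            P = R.copy()
            while True:               # iterate on the Riccati equation above
                M = β * B.T @ P @ A + W
                P_new = (R + β * A.T @ P @ A
                         - M.T @ np.linalg.solve(Q + β * B.T @ P @ B, M))
                if np.max(np.abs(P_new - P)) < tol:
                    break
                P = P_new
            F = np.linalg.solve(Q + β * B.T @ P @ B, β * B.T @ P @ A + W)
            ρ = β * np.trace(P @ C @ C.T) / (1 - β)   # from ρ = β(ρ + trace(PCC'))
            return P, ρ, F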

With the preceding formulas in mind, we are ready to approach Markov Jump linear
quadratic dynamic programming.

9.4 Linked Riccati equations for Markov LQ dynamic programming

The key idea is to make the matrices 𝐴, 𝐵, 𝐶, 𝑅, 𝑄, 𝑊 fixed functions of a finite state 𝑠 that
is governed by an 𝑁 state Markov chain.
This makes decision rules depend on the Markov state, and so fluctuate through time in limited ways.
In particular, we use the following extension of a discrete-time linear quadratic dynamic programming problem.
We let 𝑠𝑡 ∈ [1, 2, … , 𝑁 ] be a time 𝑡 realization of an 𝑁 -state Markov chain with transition
matrix Π having typical element Π𝑖𝑗 .
Here 𝑖 denotes today and 𝑗 denotes tomorrow and

Π𝑖𝑗 = Prob(𝑠𝑡+1 = 𝑗|𝑠𝑡 = 𝑖)

We’ll switch between labeling today’s state as 𝑠𝑡 and 𝑖 and between labeling tomorrow’s state
as 𝑠𝑡+1 or 𝑗.
The decision-maker solves the minimization problem:


$$\min_{\{u_t\}_{t=0}^{\infty}} E \sum_{t=0}^{\infty} \beta^t r(x_t, s_t, u_t)$$

with

𝑟(𝑥𝑡 , 𝑠𝑡 , 𝑢𝑡 ) = −(𝑥′𝑡 𝑅𝑠𝑡 𝑥𝑡 + 𝑢′𝑡 𝑄𝑠𝑡 𝑢𝑡 + 2𝑢′𝑡 𝑊𝑠𝑡 𝑥𝑡 )

subject to linear laws of motion with matrices (𝐴, 𝐵, 𝐶) each possibly dependent on the
Markov-state-𝑠𝑡 :

𝑥𝑡+1 = 𝐴𝑠𝑡 𝑥𝑡 + 𝐵𝑠𝑡 𝑢𝑡 + 𝐶𝑠𝑡 𝑤𝑡+1

where {𝑤𝑡+1 } is an i.i.d. stochastic process with 𝑤𝑡+1 ∼ 𝑁 (0, 𝐼).


The optimal decision rule for this problem has the form

𝑢𝑡 = −𝐹𝑠𝑡 𝑥𝑡

and the optimal value functions are of the form

− (𝑥′𝑡 𝑃𝑠𝑡 𝑥𝑡 + 𝜌𝑠𝑡 )

or equivalently

−𝑥′𝑡 𝑃𝑖 𝑥𝑡 − 𝜌𝑖

The optimal value functions $-x' P_i x - \rho_i$ for $i = 1, \ldots, N$ satisfy the $N$ interrelated Bellman equations

$$-x' P_i x - \rho_i = \max_u -\left[x' R_i x + u' Q_i u + 2u' W_i x + \beta \sum_j \Pi_{ij} E\left((A_i x + B_i u + C_i w)' P_j (A_i x + B_i u + C_i w) + \rho_j\right)\right]$$

The matrices 𝑃𝑠𝑡 = 𝑃𝑖 and the scalars 𝜌𝑠𝑡 = 𝜌𝑖 , 𝑖 = 1, …, n satisfy the following stacked
system of algebraic matrix Riccati equations:

$$P_i = R_i + \beta \sum_j \Pi_{ij} A_i' P_j A_i - \sum_j \Pi_{ij} \left[(\beta B_i' P_j A_i + W_i)'(Q_i + \beta B_i' P_j B_i)^{-1}(\beta B_i' P_j A_i + W_i)\right]$$

$$\rho_i = \beta \sum_j \Pi_{ij} \left(\rho_j + \mathrm{trace}(P_j C_i C_i')\right)$$

and the 𝐹𝑖 in the optimal decision rules are

$$F_i = \left(Q_i + \beta \sum_j \Pi_{ij} B_i' P_j B_i\right)^{-1} \left(\beta \sum_j \Pi_{ij} B_i' P_j A_i + W_i\right)$$
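A minimal sketch of these fixed-point calculations appears below, assuming $W_i = 0$ and given lists of state-dependent matrices; it aggregates the conditional expectation $\sum_j \Pi_{ij} P_j$ first, matching the formula for $F_i$. The qe.LQMarkov class used in the applications below wraps this logic.

In [ ]: import numpy as np

        def solve_markov_lq(Π, As, Bs, Cs, Qs, Rs, β, tol=1e-10):
            # Iterate the N linked Riccati equations (with W_i = 0)
            N = len(As)
            Ps = [R.copy() for R in Rs]
            while True:
                Ps_new = []
                for i in range(N):
                    EP = sum(Π[i, j] * Ps[j] for j in range(N))  # E[P_{s_{t+1}} | s_t = i]
                    M = β * Bs[i].T @ EP @ As[i]
                    Ps_new.append(Rs[i] + β * As[i].T @ EP @ As[i]
                                  - M.T @ np.linalg.solve(
                                      Qs[i] + β * Bs[i].T @ EP @ Bs[i], M))
                err = max(np.max(np.abs(Pn - P)) for Pn, P in zip(Ps_new, Ps))
                Ps = Ps_new
                if err < tol:
                    break
            # Decision rules F_i and constants ρ_i
            Fs, c = [], np.empty(N)
            for i in range(N):
                EP = sum(Π[i, j] * Ps[j] for j in range(N))
                Fs.append(np.linalg.solve(Qs[i] + β * Bs[i].T @ EP @ Bs[i],
                                          β * Bs[i].T @ EP @ As[i]))
                c[i] = β * sum(Π[i, j] * np.trace(Ps[j] @ Cs[i] @ Cs[i].T)
                               for j in range(N))
            ρs = np.linalg.solve(np.eye(N) - β * Π, c)   # solves ρ = βΠρ + c
            return Ps, ρs, Fs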

9.5 Applications

We now describe some Python code and a few examples that put the code to work.
To begin, we import these Python modules

In [2]: import numpy as np


import quantecon as qe
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

%matplotlib inline

In [3]: # Set discount factor


β = 0.95

9.6 Example 1

This example is a version of a classic problem of optimally adjusting a variable 𝑘𝑡 to a target level in the face of costly adjustment.
This provides a model of gradual adjustment.
Given 𝑘0 , the objective function is


$$\max_{\{k_t\}_{t=1}^{\infty}} E_0 \sum_{t=0}^{\infty} \beta^t r(s_t, k_t)$$

where the one-period payoff function is

$$r(s_t, k_t) = f_{1,s_t} k_t - f_{2,s_t} k_t^2 - d_{s_t} (k_{t+1} - k_t)^2,$$

𝐸0 is a mathematical expectation conditioned on time 0 information 𝑥0, 𝑠0 and the transition law for the continuous state variable 𝑘𝑡 is

𝑘𝑡+1 − 𝑘𝑡 = 𝑢𝑡

We can think of 𝑘𝑡 as the decision-maker’s capital and 𝑢𝑡 as costs of adjusting the level of
capital.
We assume that 𝑓1 (𝑠𝑡 ) > 0, 𝑓2 (𝑠𝑡 ) > 0, and 𝑑 (𝑠𝑡 ) > 0.
Denote the state transition matrix for Markov state 𝑠𝑡 ∈ {1, 2} as Π:

Pr (𝑠𝑡+1 = 𝑗 ∣ 𝑠𝑡 = 𝑖) = Π𝑖𝑗

Let $x_t = \begin{bmatrix} k_t \\ 1 \end{bmatrix}$.
We can represent the one-period payoff function 𝑟 (𝑠𝑡 , 𝑘𝑡 ) and the state-transition law as

$$r(s_t, k_t) = f_{1,s_t} k_t - f_{2,s_t} k_t^2 - d_{s_t} u_t^2 = -x_t' \underbrace{\begin{bmatrix} f_{2,s_t} & -\frac{f_{1,s_t}}{2} \\ -\frac{f_{1,s_t}}{2} & 0 \end{bmatrix}}_{\equiv R(s_t)} x_t - \underbrace{d_{s_t}}_{\equiv Q(s_t)} u_t^2$$

$$x_{t+1} = \begin{bmatrix} k_{t+1} \\ 1 \end{bmatrix} = \underbrace{I_2}_{\equiv A(s_t)} x_t + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{\equiv B(s_t)} u_t$$

In [4]: def construct_arrays1(f1_vals=[1. ,1.],


f2_vals=[1., 1.],
d_vals=[1., 1.]):
"""
Construct matrices that map the problem described in example 1
into a Markov jump linear quadratic dynamic programming problem
"""

# Number of Markov states


m = len(f1_vals)
# Number of state and control variables
n, k = 2, 1

# Construct sets of matrices for each state


As = [np.eye(n) for i in range(m)]
Bs = [np.array([[1, 0]]).T for i in range(m)]

Rs = np.zeros((m, n, n))
Qs = np.zeros((m, k, k))

for i in range(m):
Rs[i, 0, 0] = f2_vals[i]
Rs[i, 1, 0] = - f1_vals[i] / 2
Rs[i, 0, 1] = - f1_vals[i] / 2

Qs[i, 0, 0] = d_vals[i]

Cs, Ns = None, None

# Compute the optimal k level of the payoff function in each state


k_star = np.empty(m)
for i in range(m):
k_star[i] = f1_vals[i] / (2 * f2_vals[i])

return Qs, Rs, Ns, As, Bs, Cs, k_star

The continuous part of the state 𝑥𝑡 consists of two variables, namely, 𝑘𝑡 and a constant term.

In [5]: state_vec1 = ["k", "constant term"]

We start with a Markov transition matrix that makes the Markov state be strictly periodic:

$$\Pi_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},$$

We set 𝑓1,𝑠𝑡 and 𝑓2,𝑠𝑡 to be independent of the Markov state 𝑠𝑡

𝑓1,1 = 𝑓1,2 = 1,

𝑓2,1 = 𝑓2,2 = 1

In contrast to 𝑓1,𝑠𝑡 and 𝑓2,𝑠𝑡 , we make the adjustment cost 𝑑𝑠𝑡 vary across Markov states 𝑠𝑡 .
We set the adjustment cost to be lower in Markov state 2

𝑑1 = 1, 𝑑2 = 0.5

The following code forms a Markov switching LQ problem and computes the optimal value
functions and optimal decision rules for each Markov state

In [6]: # Construct Markov transition matrix


Π1 = np.array([[0., 1.],
[1., 0.]])

In [7]: # Construct matrices


Qs, Rs, Ns, As, Bs, Cs, k_star = construct_arrays1(d_vals=[1., 0.5])

In [8]: # Construct a Markov Jump LQ problem


ex1_a = qe.LQMarkov(Π1, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)
# Solve for optimal value functions and decision rules
ex1_a.stationary_values();

Let’s look at the value function matrices and the decision rules for each Markov state

In [9]: # P(s)
ex1_a.Ps

Out[9]: array([[[ 1.56626026, -0.78313013],


[-0.78313013, -4.60843493]],

[[ 1.37424214, -0.68712107],
[-0.68712107, -4.65643947]]])

In [10]: # d(s) = 0, since there is no randomness


ex1_a.ds

Out[10]: array([0., 0.])

In [11]: # F(s)
ex1_a.Fs

Out[11]: array([[[ 0.56626026, -0.28313013]],

[[ 0.74848427, -0.37424214]]])

Now we’ll plot the decision rules and see if they make sense

In [12]: # Plot the optimal decision rules


k_grid = np.linspace(0., 1., 100)
# Optimal choice in state s1
u1_star = - ex1_a.Fs[0, 0, 1] - ex1_a.Fs[0, 0, 0] * k_grid
# Optimal choice in state s2
u2_star = - ex1_a.Fs[1, 0, 1] - ex1_a.Fs[1, 0, 0] * k_grid

fig, ax = plt.subplots()
ax.plot(k_grid, k_grid + u1_star, label="$\overline{s}_1$ (high)")
ax.plot(k_grid, k_grid + u2_star, label="$\overline{s}_2$ (low)")

# The optimal k*
ax.scatter([0.5, 0.5], [0.5, 0.5], marker="*")
ax.plot([k_star[0], k_star[0]], [0., 1.0], '--')

# 45 degree line

ax.plot([0., 1.], [0., 1.], '--', label="45 degree line")

ax.set_xlabel("$k_t$")
ax.set_ylabel("$k_{t+1}$")
ax.legend()
plt.show()

The above graph plots 𝑘𝑡+1 = 𝑘𝑡 + 𝑢𝑡 = 𝑘𝑡 − 𝐹 𝑥𝑡 as an affine (i.e., linear in 𝑘𝑡 plus a constant) function of 𝑘𝑡 for both Markov states 𝑠𝑡.
It also plots the 45 degree line.
Notice that the two 𝑠𝑡 -dependent closed loop functions that determine 𝑘𝑡+1 as functions of 𝑘𝑡
share the same rest point (also called a fixed point) at 𝑘𝑡 = 0.5.
Evidently, the optimal decision rule in Markov state 2, in which the adjustment cost is lower,
makes 𝑘𝑡+1 a flatter function of 𝑘𝑡 in Markov state 2.
This happens because when 𝑘𝑡 is not at its fixed point, |𝑢𝑡,2| > |𝑢𝑡,1|, so that the decision-maker adjusts toward the fixed point faster when the Markov state 𝑠𝑡 takes a value that makes it cheaper.

In [13]: # Compute time series


T = 20
x0 = np.array([[0., 1.]]).T
x_path = ex1_a.compute_sequence(x0, ts_length=T)[0]

fig, ax = plt.subplots()
ax.plot(range(T), x_path[0, :-1])
ax.set_xlabel("$t$")
ax.set_ylabel("$k_t$")
ax.set_title("Optimal path of $k_t$")
plt.show()

Now we’ll depart from the preceding transition matrix that made the Markov state be strictly
periodic.
We’ll begin with symmetric transition matrices of the form

$$\Pi_2 = \begin{bmatrix} 1-\lambda & \lambda \\ \lambda & 1-\lambda \end{bmatrix}.$$

In [14]: λ = 0.8 # high λ


Π2 = np.array([[1-λ, λ],
[λ, 1-λ]])

ex1_b = qe.LQMarkov(Π2, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)


ex1_b.stationary_values();
ex1_b.Fs

Out[14]: array([[[ 0.57291724, -0.28645862]],

[[ 0.74434525, -0.37217263]]])

In [15]: λ = 0.2 # low λ


Π2 = np.array([[1-λ, λ],
[λ, 1-λ]])

ex1_b = qe.LQMarkov(Π2, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)


ex1_b.stationary_values();
ex1_b.Fs

Out[15]: array([[[ 0.59533259, -0.2976663 ]],

[[ 0.72818728, -0.36409364]]])

We can plot optimal decision rules associated with different 𝜆 values.

In [16]: λ_vals = np.linspace(0., 1., 10)


F1 = np.empty((λ_vals.size, 2))
F2 = np.empty((λ_vals.size, 2))

for i, λ in enumerate(λ_vals):
Π2 = np.array([[1-λ, λ],
[λ, 1-λ]])

ex1_b = qe.LQMarkov(Π2, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)


ex1_b.stationary_values();
F1[i, :] = ex1_b.Fs[0, 0, :]
F2[i, :] = ex1_b.Fs[1, 0, :]

In [17]: for i, state_var in enumerate(state_vec1):


fig, ax = plt.subplots()
ax.plot(λ_vals, F1[:, i], label="$\overline{s}_1$", color="b")
ax.plot(λ_vals, F2[:, i], label="$\overline{s}_2$", color="r")

ax.set_xlabel("$\lambda$")
ax.set_ylabel("$F_{s_t}$")
ax.set_title(f"Coefficient on {state_var}")
ax.legend()
plt.show()

Notice how the decision rules’ constants and slopes behave as functions of 𝜆.
Evidently, as the Markov chain becomes more nearly periodic (i.e., as 𝜆 → 1), the dynamic
program adjusts capital faster in the low adjustment cost Markov state to take advantage of
what is only temporarily a more favorable time to invest.
Now let’s study situations in which the Markov transition matrix Π is asymmetric

$$\Pi_3 = \begin{bmatrix} 1-\lambda & \lambda \\ \delta & 1-\delta \end{bmatrix}.$$

In [18]: λ, δ = 0.8, 0.2


Π3 = np.array([[1-λ, λ],
[δ, 1-δ]])

ex1_b = qe.LQMarkov(Π3, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)


ex1_b.stationary_values();
ex1_b.Fs

Out[18]: array([[[ 0.57169781, -0.2858489 ]],

[[ 0.72749075, -0.36374537]]])

We can plot optimal decision rules for different 𝜆 and 𝛿 values.

In [19]: λ_vals = np.linspace(0., 1., 10)


δ_vals = np.linspace(0., 1., 10)

λ_grid = np.empty((λ_vals.size, δ_vals.size))


δ_grid = np.empty((λ_vals.size, δ_vals.size))

F1_grid = np.empty((λ_vals.size, δ_vals.size, len(state_vec1)))


F2_grid = np.empty((λ_vals.size, δ_vals.size, len(state_vec1)))

for i, λ in enumerate(λ_vals):
λ_grid[i, :] = λ
δ_grid[i, :] = δ_vals
for j, δ in enumerate(δ_vals):
Π3 = np.array([[1-λ, λ],
[δ, 1-δ]])

ex1_b = qe.LQMarkov(Π3, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)


ex1_b.stationary_values();
F1_grid[i, j, :] = ex1_b.Fs[0, 0, :]
F2_grid[i, j, :] = ex1_b.Fs[1, 0, :]

In [20]: for i, state_var in enumerate(state_vec1):


fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# high adjustment cost, blue surface
ax.plot_surface(λ_grid, δ_grid, F1_grid[:, :, i], color="b")
# low adjustment cost, red surface
ax.plot_surface(λ_grid, δ_grid, F2_grid[:, :, i], color="r")
ax.set_xlabel("$\lambda$")
ax.set_ylabel("$\delta$")
ax.set_zlabel("$F_{s_t}$")
ax.set_title(f"coefficient on {state_var}")
plt.show()

The following code defines a wrapper function that computes optimal decision rules for cases
with different Markov transition matrices

In [21]: def run(construct_func, vals_dict, state_vec):


"""
A Wrapper function that repeats the computation above
for different cases
"""

Qs, Rs, Ns, As, Bs, Cs, k_star = construct_func(**vals_dict)

# Symmetric Π
# Notice that pure periodic transition is a special case
# when λ=1
print("symmetric Π case:\n")
λ_vals = np.linspace(0., 1., 10)
F1 = np.empty((λ_vals.size, len(state_vec)))
F2 = np.empty((λ_vals.size, len(state_vec)))

for i, λ in enumerate(λ_vals):
Π2 = np.array([[1-λ, λ],
[λ, 1-λ]])

mplq = qe.LQMarkov(Π2, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)


mplq.stationary_values();
F1[i, :] = mplq.Fs[0, 0, :]
F2[i, :] = mplq.Fs[1, 0, :]

for i, state_var in enumerate(state_vec):


fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(λ_vals, F1[:, i], label="$\overline{s}_1$", color="b")
ax.plot(λ_vals, F2[:, i], label="$\overline{s}_2$", color="r")

ax.set_xlabel("$\lambda$")
ax.set_ylabel("$F(\overline{s}_t)$")

ax.set_title(f"coefficient on {state_var}")
ax.legend()
plt.show()

# Plot optimal k*_{s_t} and k that optimal policies are targeting


# only for example 1
if state_vec == ["k", "constant term"]:
fig = plt.figure()
ax = fig.add_subplot(111)
for i in range(2):
F = [F1, F2][i]
c = ["b", "r"][i]
ax.plot([0, 1], [k_star[i], k_star[i]], "--",
color=c, label="$k^*(\overline{s}_"+str(i+1)+")$")
ax.plot(λ_vals, - F[:, 1] / F[:, 0], color=c,
label="$k^{target}(\overline{s}_"+str(i+1)+")$")

# Plot a vertical line at λ=0.5


ax.plot([0.5, 0.5], [min(k_star), max(k_star)], "-.")

ax.set_xlabel("$\lambda$")
ax.set_ylabel("$k$")
ax.set_title("Optimal k levels and k targets")
ax.text(0.5, min(k_star)+(max(k_star)-min(k_star))/20, "$\lambda=0.5$")

ax.legend(bbox_to_anchor=(1., 1.))
plt.show()

# Asymmetric Π
print("asymmetric Π case:\n")
δ_vals = np.linspace(0., 1., 10)

λ_grid = np.empty((λ_vals.size, δ_vals.size))


δ_grid = np.empty((λ_vals.size, δ_vals.size))
F1_grid = np.empty((λ_vals.size, δ_vals.size, len(state_vec)))
F2_grid = np.empty((λ_vals.size, δ_vals.size, len(state_vec)))

for i, λ in enumerate(λ_vals):
λ_grid[i, :] = λ
δ_grid[i, :] = δ_vals
for j, δ in enumerate(δ_vals):
Π3 = np.array([[1-λ, λ],
[δ, 1-δ]])

mplq = qe.LQMarkov(Π3, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)


mplq.stationary_values();
F1_grid[i, j, :] = mplq.Fs[0, 0, :]
F2_grid[i, j, :] = mplq.Fs[1, 0, :]

for i, state_var in enumerate(state_vec):


fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(λ_grid, δ_grid, F1_grid[:, :, i], color="b")
ax.plot_surface(λ_grid, δ_grid, F2_grid[:, :, i], color="r")
ax.set_xlabel("$\lambda$")
ax.set_ylabel("$\delta$")
ax.set_zlabel("$F(\overline{s}_t)$")
ax.set_title(f"coefficient on {state_var}")

plt.show()

To illustrate the code with another example, we shall set 𝑓2,𝑠𝑡 and 𝑑𝑠𝑡 as constant functions and

𝑓1,1 = 0.5, 𝑓1,2 = 1

Thus, the sole role of the Markov jump state 𝑠𝑡 is to identify times in which capital is very
productive and other times in which it is less productive.
The example below reveals much about the structure of the optimum problem and optimal
policies.
Only $f_{1,s_t}$ varies with $s_t$.
So there are different $s_t$-dependent optimal static $k$ levels in different states, $k^*_{s_t} = \frac{f_{1,s_t}}{2 f_{2,s_t}}$, the values of $k$ that maximize the one-period payoff function in each state.
We denote a target $k$ level as $k^{target}_{s_t}$, the fixed point of the optimal policies in each state, given the value of $\lambda$.
We call $k^{target}_{s_t}$ a “target” because in each Markov state $s_t$, optimal policies are contraction mappings and will push $k_t$ towards the fixed point $k^{target}_{s_t}$.
When $\lambda \to 0$, each Markov state becomes close to an absorbing state and consequently $k^{target}_{s_t} \to k^*_{s_t}$.
But when $\lambda \to 1$, the Markov transition matrix becomes more nearly periodic, so the optimal decision rules target more at the optimal $k$ level in the other state in order to enjoy a higher expected payoff in the next period.
The switch happens at $\lambda = 0.5$, when both states are equally likely to be reached.
Below we plot an additional figure that shows optimal $k$ levels in the two Markov jump states and also how the targeted $k$ levels change as $\lambda$ changes.

In [22]: run(construct_arrays1, {"f1_vals":[0.5, 1.]}, state_vec1)

symmetric Π case:

asymmetric Π case:

Set 𝑓1,𝑠𝑡 and 𝑑𝑠𝑡 as constant functions and

𝑓2,1 = 0.5, 𝑓2,2 = 1

In [23]: run(construct_arrays1, {"f2_vals":[0.5, 1.]}, state_vec1)

symmetric Π case:

asymmetric Π case:

9.7 Example 2

We now add to the example 1 setup another state variable 𝑤𝑡 that follows the evolution law

𝑤𝑡+1 = 𝛼0 (𝑠𝑡 ) + 𝜌 (𝑠𝑡 ) 𝑤𝑡 + 𝜎 (𝑠𝑡 ) 𝜖𝑡+1 , 𝜖𝑡+1 ∼ 𝑁 (0, 1)
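Here is a short sketch that simulates this evolution law for hypothetical state-dependent coefficients, just to make the timing concrete: the coefficients indexed by $s_t$ govern the transition from $t$ to $t+1$. All the numbers are placeholders.

In [ ]: import numpy as np

        rng = np.random.default_rng(0)
        Π = np.array([[0.8, 0.2],                      # hypothetical transition matrix
                      [0.2, 0.8]])
        α0, ρ, σ = [1.0, 1.0], [0.9, 0.5], [1.0, 0.5]  # placeholder coefficients

        T, s, w = 20, 0, 0.0
        w_path = np.empty(T)
        for t in range(T):
            w = α0[s] + ρ[s] * w + σ[s] * rng.standard_normal()  # w_{t+1}
            w_path[t] = w
            s = rng.choice(2, p=Π[s])                            # draw s_{t+1}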

We think of 𝑤𝑡 as a rental rate or tax rate that the decision maker pays each period for 𝑘𝑡 .
To capture this idea, we add to the decision-maker’s one-period payoff function the product of
𝑤𝑡 and 𝑘𝑡
$$r(s_t, k_t, w_t) = f_{1,s_t} k_t - f_{2,s_t} k_t^2 - d_{s_t} (k_{t+1} - k_t)^2 - w_t k_t,$$

We now let the continuous part of the state at time $t$ be $x_t = \begin{bmatrix} k_t \\ 1 \\ w_t \end{bmatrix}$ and continue to set the control $u_t = k_{t+1} - k_t$.
We can write the one-period payoff function 𝑟 (𝑠𝑡 , 𝑘𝑡 , 𝑤𝑡 ) and the state-transition law as

$$r(s_t, k_t, w_t) = f_1(s_t) k_t - f_2(s_t) k_t^2 - d(s_t)(k_{t+1} - k_t)^2 - w_t k_t = -\left(x_t' \underbrace{\begin{bmatrix} f_2(s_t) & -\frac{f_1(s_t)}{2} & \frac{1}{2} \\ -\frac{f_1(s_t)}{2} & 0 & 0 \\ \frac{1}{2} & 0 & 0 \end{bmatrix}}_{\equiv R(s_t)} x_t + \underbrace{d(s_t)}_{\equiv Q(s_t)} u_t^2\right),$$

and

$$x_{t+1} = \begin{bmatrix} k_{t+1} \\ 1 \\ w_{t+1} \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \alpha_0(s_t) & \rho(s_t) \end{bmatrix}}_{\equiv A(s_t)} x_t + \underbrace{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}}_{\equiv B(s_t)} u_t + \underbrace{\begin{bmatrix} 0 \\ 0 \\ \sigma(s_t) \end{bmatrix}}_{\equiv C(s_t)} \epsilon_{t+1}$$

In [24]: def construct_arrays2(f1_vals=[1. ,1.],


f2_vals=[1., 1.],
d_vals=[1., 1.],
α0_vals=[1., 1.],
ρ_vals=[0.9, 0.9],
σ_vals=[1., 1.]):
"""
Construct matrices that maps the problem described in example 2
into a Markov jump linear quadratic dynamic programming problem.
"""

m = len(f1_vals)
n, k, j = 3, 1, 1

Rs = np.zeros((m, n, n))
Qs = np.zeros((m, k, k))
As = np.zeros((m, n, n))
Bs = np.zeros((m, n, k))
Cs = np.zeros((m, n, j))

for i in range(m):
Rs[i, 0, 0] = f2_vals[i]
Rs[i, 1, 0] = - f1_vals[i] / 2
Rs[i, 0, 1] = - f1_vals[i] / 2
Rs[i, 0, 2] = 1/2
Rs[i, 2, 0] = 1/2

Qs[i, 0, 0] = d_vals[i]

As[i, 0, 0] = 1

As[i, 1, 1] = 1
As[i, 2, 1] = α0_vals[i]
As[i, 2, 2] = ρ_vals[i]

Bs[i, :, :] = np.array([[1, 0, 0]]).T

Cs[i, :, :] = np.array([[0, 0, σ_vals[i]]]).T

Ns = None
k_star = None

return Qs, Rs, Ns, As, Bs, Cs, k_star

In [25]: state_vec2 = ["k", "constant term", "w"]

Only 𝑑𝑠𝑡 depends on 𝑠𝑡 .

In [26]: run(construct_arrays2, {"d_vals":[1., 0.5]}, state_vec2)

symmetric Π case:

asymmetric Π case:

Only 𝑓1,𝑠𝑡 depends on 𝑠𝑡 .

In [27]: run(construct_arrays2, {"f1_vals":[0.5, 1.]}, state_vec2)

symmetric Π case:

asymmetric Π case:

Only 𝑓2,𝑠𝑡 depends on 𝑠𝑡 .

In [28]: run(construct_arrays2, {"f2_vals":[0.5, 1.]}, state_vec2)

symmetric Π case:

asymmetric Π case:

Only 𝛼0 (𝑠𝑡 ) depends on 𝑠𝑡 .

In [29]: run(construct_arrays2, {"α0_vals":[0.5, 1.]}, state_vec2)

symmetric Π case:

asymmetric Π case:

Only 𝜌𝑠𝑡 depends on 𝑠𝑡 .

In [30]: run(construct_arrays2, {"ρ_vals":[0.5, 0.9]}, state_vec2)

symmetric Π case:

asymmetric Π case:

Only 𝜎𝑠𝑡 depends on 𝑠𝑡 .

In [31]: run(construct_arrays2, {"σ_vals":[0.5, 1.]}, state_vec2)

symmetric Π case:

asymmetric Π case:

9.8 More examples

The following lectures describe how Markov jump linear quadratic dynamic programming can be used to extend the [7] model of optimal tax-smoothing and government debt in several interesting directions

1. How to Pay for a War: Part 1

2. How to Pay for a War: Part 2

3. How to Pay for a War: Part 3


Chapter 10

How to Pay for a War: Part 1

10.1 Contents

• Reader’s Guide 10.2


• Public Finance Questions 10.3
• Barro (1979) Model 10.4
• Python Class to Solve Markov Jump Linear Quadratic Control Problems 10.5
• Barro Model with a Time-varying Interest Rate 10.6
In addition to what’s in Anaconda, this lecture will deploy quantecon:

In [1]: !pip install --upgrade quantecon

10.2 Reader’s Guide

Let’s start with some standard imports:

In [2]: import quantecon as qe


import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

This lecture uses the method of Markov jump linear quadratic dynamic programming
that is described in lecture Markov Jump LQ dynamic programming to extend the [7] model
of optimal tax-smoothing and government debt in a particular direction.
This lecture has two sequels that offer further extensions of the Barro model

1. How to Pay for a War: Part 2

2. How to Pay for a War: Part 3

The extensions are modified versions of his 1979 model later suggested by Barro (1999 [8],
2003 [9]).
Barro’s original 1979 [7] model is about a government that borrows and lends in order to
minimize an intertemporal measure of distortions caused by taxes.


Technical tractability induced Barro [7] to assume that


• the government trades only one-period risk-free debt, and
• the one-period risk-free interest rate is constant
By using Markov jump linear quadratic dynamic programming we can allow interest rates to
move over time in empirically interesting ways.
Also, by expanding the dimension of the state, we can add a maturity composition decision to
the government’s problem.
It is by doing these two things that we extend Barro’s 1979 [7] model along lines he suggested
in Barro (1999 [8], 2003 [9]).
Barro (1979) [7] assumed
• that a government faces an exogenous sequence of expenditures that it must finance
by a tax collection sequence whose expected present value equals the initial debt it owes
plus the expected present value of those expenditures.
• that the government wants to minimize the following measure of tax distortions:

$E_0 \sum_{t=0}^{\infty} \beta^t T_t^2$, where $T_t$ are total tax collections and $E_0$ is a mathematical expectation
conditioned on time 0 information.
• that the government trades only one asset, a risk-free one-period bond.
• that the gross interest rate on the one-period bond is constant and equal to 𝛽 −1 , the
reciprocal of the factor 𝛽 at which the government discounts future tax distortions.
Barro’s model can be mapped into a discounted linear quadratic dynamic programming prob-
lem.
Partly inspired by Barro (1999) [8] and Barro (2003) [9], our generalizations of Barro’s (1979)
[7] model assume
• that the government borrows or saves in the form of risk-free bonds of maturities
1, 2, … , 𝐻.
• that interest rates on those bonds are time-varying and in particular, governed by a
jointly stationary stochastic process.
Our generalizations are designed to fit within a generalization of an ordinary linear quadratic
dynamic programming problem in which matrices that define the quadratic objective function
and the state transition function are time-varying and stochastic.
This generalization, known as a Markov jump linear quadratic dynamic program, com-
bines
• the computational simplicity of linear quadratic dynamic programming, and
• the ability of finite state Markov chains to represent interesting patterns of random
variation.
We want the stochastic time variation in the matrices defining the dynamic programming
problem to represent variation over time in
• interest rates
• default rates
• roll over risks
As described in Markov Jump LQ dynamic programming, the idea underlying Markov jump
linear quadratic dynamic programming is to replace the constant matrices defining a
linear quadratic dynamic programming problem with matrices that are fixed functions
of an 𝑁 state Markov chain.

For infinite horizon problems, this leads to 𝑁 interrelated matrix Riccati equations that pin
down 𝑁 value functions and 𝑁 linear decision rules, applying to the 𝑁 Markov states.
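
To see the structure of these interrelated equations, here is a minimal sketch of the coupled Riccati iteration under our conventions (quadratic objective with cross term $2 u' N_s x$, decision rules $u = -F_s x$, constant terms omitted); the function riccati_iteration is our own illustration, not the quantecon implementation described below:

import numpy as np

def riccati_iteration(Π, As, Bs, Qs, Rs, Ns, β, tol=1e-10, max_iter=10_000):
    # One matrix Riccati equation per Markov state, coupled through Π
    n_states = Π.shape[0]
    n = As[0].shape[0]
    Ps = [np.zeros((n, n)) for _ in range(n_states)]
    for _ in range(max_iter):
        # Expected continuation matrix in state s: Σ_j Π[s, j] P_j
        EPs = [sum(Π[s, j] * Ps[j] for j in range(n_states))
               for s in range(n_states)]
        Fs, new_Ps = [], []
        for s in range(n_states):
            A, B, Q, R, N, EP = As[s], Bs[s], Qs[s], Rs[s], Ns[s], EPs[s]
            F = np.linalg.solve(Q + β * B.T @ EP @ B, N + β * B.T @ EP @ A)
            P = R + β * A.T @ EP @ A - (β * A.T @ EP @ B + N.T) @ F
            Fs.append(F)
            new_Ps.append(P)
        if max(np.max(np.abs(Pn - Po)) for Pn, Po in zip(new_Ps, Ps)) < tol:
            return new_Ps, Fs   # decision rule in state s is u = -F_s x
        Ps = new_Ps
    return Ps, Fs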

10.3 Public Finance Questions

Barro’s 1979 [7] model is designed to answer questions such as


• Should a government finance an exogenous surge in government expenditures by raising
taxes or borrowing?
• How does the answer to that first question depend on the exogenous stochastic process
for government expenditures, for example, on whether the surge in government expendi-
tures can be expected to be temporary or permanent?
Barro’s 1999 [8] and 2003 [9] models are designed to answer more fine-grained questions such
as
• What determines whether a government wants to issue short-term or long-term debt?
• How do roll-over risks affect that decision?
• How does the government’s long-short portfolio management decision depend on fea-
tures of the exogenous stochastic process for government expenditures?
Thus, both the simple and the more fine-grained versions of Barro’s models are ways of pre-
cisely formulating the classic issue of How to pay for a war.
This lecture describes:
• An application of Markov jump LQ dynamic programming to a model in which a gov-
ernment faces exogenous time-varying interest rates for issuing one-period risk-free debt.
A sequel to this lecture applies Markov jump LQ control to settings in which a government
issues risk-free debt of different maturities.

10.4 Barro (1979) Model

We begin by solving a version of the Barro (1979) [7] model by mapping it into the original
LQ framework.
As mentioned in this lecture, the Barro model is mathematically isomorphic with the LQ per-
manent income model.
Let 𝑇𝑡 denote tax collections, 𝛽 a discount factor, 𝑏𝑡,𝑡+1 time 𝑡 + 1 goods that the government
promises to pay at 𝑡, 𝐺𝑡 government purchases, 𝑝𝑡,𝑡+1 the number of time 𝑡 goods received per
time 𝑡 + 1 goods promised.
Evidently, 𝑝𝑡,𝑡+1 is inversely related to appropriate corresponding gross interest rates on gov-
ernment debt.
In the spirit of Barro (1979) [7], the stochastic process of government expenditures is exoge-
nous.
The government’s problem is to choose a plan for taxation and borrowing $\{b_{t+1}, T_t\}_{t=0}^{\infty}$ to
minimize

$$
E_0 \sum_{t=0}^{\infty} \beta^t T_t^2
$$

subject to the constraints

$$
T_t + p_{t,t+1} b_{t,t+1} = G_t + b_{t-1,t}
$$

$$
G_t = U_g z_t
$$

$$
z_{t+1} = A_{22} z_t + C_2 w_{t+1}
$$

where $w_{t+1} \sim N(0, I)$.


The variables 𝑇𝑡 , 𝑏𝑡,𝑡+1 are control variables chosen at 𝑡, while 𝑏𝑡−1,𝑡 is an endogenous state
variable inherited from the past at time 𝑡 and 𝑝𝑡,𝑡+1 is an exogenous state variable at time 𝑡.
To begin, we assume that 𝑝𝑡,𝑡+1 is constant (and equal to 𝛽)

• later we will extend the model to allow 𝑝𝑡,𝑡+1 to vary over time

To map into the LQ framework, we use

$$
x_t = \begin{bmatrix} b_{t-1,t} \\ z_t \end{bmatrix}
$$

as the state vector, and $u_t = b_{t,t+1}$ as the control variable.

Therefore, the $(A, B, C)$ matrices are defined by the state-transition law:

$$
x_{t+1} = \begin{bmatrix} 0 & 0 \\ 0 & A_{22} \end{bmatrix} x_t + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u_t + \begin{bmatrix} 0 \\ C_2 \end{bmatrix} w_{t+1}
$$

To find the appropriate (𝑅, 𝑄, 𝑊 ) matrices, we note that 𝐺𝑡 and 𝑏𝑡−1,𝑡 can be written as ap-
propriately defined functions of the current state:

𝐺𝑡 = 𝑆𝐺 𝑥𝑡 , 𝑏𝑡−1,𝑡 = 𝑆1 𝑥𝑡

If we define $M_t = -p_{t,t+1}$, and let $S = S_G + S_1$, then we can write taxation as a function of
the states and control using the government’s budget constraint:

$$
T_t = S x_t + M_t u_t
$$

It follows that the (𝑅, 𝑄, 𝑊 ) matrices are implicitly defined by:

$$
T_t^2 = x_t' S' S x_t + u_t' M_t' M_t u_t + 2 u_t' M_t' S x_t
$$

If we assume that 𝑝𝑡,𝑡+1 = 𝛽, then 𝑀𝑡 ≡ 𝑀 = −𝛽.


In this case, none of the LQ matrices are time varying, and we can use the original LQ frame-
work.
We will implement this constant interest-rate version first, assuming that 𝐺𝑡 follows an AR(1)
process:

$$
G_{t+1} = \bar{G} + \rho G_t + \sigma w_{t+1}
$$



To do this, we set $z_t = \begin{bmatrix} 1 \\ G_t \end{bmatrix}$, and consequently:

$$
A_{22} = \begin{bmatrix} 1 & 0 \\ \bar{G} & \rho \end{bmatrix}, \quad C_2 = \begin{bmatrix} 0 \\ \sigma \end{bmatrix}
$$

In [3]: # Model parameters


β, Gbar, ρ, σ = 0.95, 5, 0.8, 1

# Basic model matrices


A22 = np.array([[1, 0],
[Gbar, ρ],])

C2 = np.array([[0],
[σ]])

Ug = np.array([[0, 1]])

# LQ framework matrices
A_t = np.zeros((1, 3))
A_b = np.hstack((np.zeros((2, 1)), A22))
A = np.vstack((A_t, A_b))

B = np.zeros((3, 1))
B[0, 0] = 1

C = np.vstack((np.zeros((1, 1)), C2))

Sg = np.hstack((np.zeros((1, 1)), Ug))


S1 = np.zeros((1, 3))
S1[0, 0] = 1
S = S1 + Sg

M = np.array([[-β]])

R = S.T @ S
Q = M.T @ M
W = M.T @ S

# Small penalty on the debt required to implement the no-Ponzi scheme


R[0, 0] = R[0, 0] + 1e-9

We can now create an instance of LQ:

In [4]: LQBarro = qe.LQ(Q, R, A, B, C=C, N=W, beta=β)


P, F, d = LQBarro.stationary_values()
x0 = np.array([[100, 1, 25]])

We can see the isomorphism by noting that consumption is a martingale in the permanent
income model and that taxation is a martingale in Barro’s model.
We can check this using the 𝐹 matrix of the LQ model.
Because 𝑢𝑡 = −𝐹 𝑥𝑡 , we have

𝑇𝑡 = 𝑆𝑥𝑡 + 𝑀 𝑢𝑡 = (𝑆 − 𝑀 𝐹 )𝑥𝑡

and

𝑇𝑡+1 = (𝑆 − 𝑀 𝐹 )𝑥𝑡+1 = (𝑆 − 𝑀 𝐹 )(𝐴𝑥𝑡 + 𝐵𝑢𝑡 + 𝐶𝑤𝑡+1 ) = (𝑆 − 𝑀 𝐹 )((𝐴 − 𝐵𝐹 )𝑥𝑡 + 𝐶𝑤𝑡+1 )

Therefore, the mathematical expectation of 𝑇𝑡+1 conditional on time 𝑡 information is

𝐸𝑡 𝑇𝑡+1 = (𝑆 − 𝑀 𝐹 )(𝐴 − 𝐵𝐹 )𝑥𝑡

Consequently, taxation is a martingale (𝐸𝑡 𝑇𝑡+1 = 𝑇𝑡 ) if

(𝑆 − 𝑀 𝐹 )(𝐴 − 𝐵𝐹 ) = (𝑆 − 𝑀 𝐹 ),

which holds in this case:

In [5]: S - M @ F, (S - M @ F) @ (A - B @ F)

Out[5]: (array([[ 0.05000002, 19.79166502, 0.2083334 ]]),


array([[ 0.05000002, 19.79166504, 0.2083334 ]]))

This explains the fanning out of the conditional empirical distribution of taxation across time,
computed by simulating the Barro model a large number of times:

In [6]: T = 500
for i in range(250):
x, u, w = LQBarro.compute_sequence(x0, ts_length=T)
plt.plot(list(range(T+1)), ((S - M @ F) @ x)[0, :])
plt.xlabel('Time')
plt.ylabel('Taxation')
plt.show()

We can see a similar but smoother pattern if we plot government debt over time.

In [7]: T = 500
for i in range(250):
x, u, w = LQBarro.compute_sequence(x0, ts_length=T)
plt.plot(list(range(T+1)), x[0, :])
plt.xlabel('Time')
plt.ylabel('Govt Debt')
plt.show()

10.5 Python Class to Solve Markov Jump Linear Quadratic Control Problems

To implement the extension to the Barro model in which 𝑝𝑡,𝑡+1 varies over time, we must al-
low the M matrix to be time-varying.
Our 𝑄 and 𝑊 matrices must also vary over time.
We can solve such a model using the LQMarkov class that solves Markov jump linear
quadratic control problems as described above.
The code for the class can be viewed here.
The class takes lists of matrices that correspond to $N$ Markov states.
The value and policy functions are then found by iterating on a coupled system of matrix
Riccati difference equations.
Optimal 𝑃𝑠 , 𝐹𝑠 , 𝑑𝑠 are stored as attributes.

The class also contains a “method” for simulating the model.

10.6 Barro Model with a Time-varying Interest Rate

We can use the above class to implement a version of the Barro model with a time-varying
interest rate. The simplest way to extend the model is to allow the interest rate to take two
possible values. We set:

$$
p^1_{t,t+1} = \beta + 0.02 = 0.97
$$

$$
p^2_{t,t+1} = \beta - 0.017 = 0.933
$$

Thus, the first Markov state has a low interest rate, and the second Markov state has a high
interest rate.
We also need to specify a transition matrix for the Markov state.
We use:

$$
\Pi = \begin{bmatrix} 0.8 & 0.2 \\ 0.2 & 0.8 \end{bmatrix}
$$

(so each Markov state is persistent, and there is an equal chance of moving from one state to
the other)
The choice of parameters means that the unconditional expectation of 𝑝𝑡,𝑡+1 is 0.9515, higher
than 𝛽(= 0.95).
If we were to set 𝑝𝑡,𝑡+1 = 0.9515 in the version of the model with a constant interest rate,
government debt would explode.
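
We can verify the claim about the unconditional expectation directly: the stationary distribution of $\Pi$ puts weight $0.5$ on each state, so $E \, p_{t,t+1} = 0.5 \times 0.97 + 0.5 \times 0.933 = 0.9515$:

import numpy as np

Π = np.array([[0.8, 0.2],
              [0.2, 0.8]])
p_vals = np.array([0.97, 0.933])

# Stationary distribution: left eigenvector of Π associated with eigenvalue 1
eigvals, eigvecs = np.linalg.eig(Π.T)
stat = np.real(eigvecs[:, np.isclose(eigvals, 1)]).flatten()
stat /= stat.sum()

print(stat, stat @ p_vals)   # [0.5 0.5] 0.9515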

In [8]: # Create list of matrices that corresponds to each Markov state


Π = np.array([[0.8, 0.2],
[0.2, 0.8]])

As = [A, A]
Bs = [B, B]
Cs = [C, C]
Rs = [R, R]

M1 = np.array([[-β - 0.02]])
M2 = np.array([[-β + 0.017]])

Q1 = M1.T @ M1
Q2 = M2.T @ M2
Qs = [Q1, Q2]
W1 = M1.T @ S
W2 = M2.T @ S
Ws = [W1, W2]

# create Markov Jump LQ DP problem instance


lqm = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)
lqm.stationary_values();

The decision rules are now dependent on the Markov state:

In [9]: lqm.Fs[0]

Out[9]: array([[-0.98437712, 19.20516427, -0.8314215 ]])

In [10]: lqm.Fs[1]

Out[10]: array([[-1.01434301, 21.5847983 , -0.83851116]])

Simulating a large number of such economies over time reveals interesting dynamics.
Debt tends to stay low and stable but recurrently surges.

In [11]: T = 2000
x0 = np.array([[1000, 1, 25]])
for i in range(250):
x, u, w, s = lqm.compute_sequence(x0, ts_length=T)
plt.plot(list(range(T+1)), x[0, :])
plt.xlabel('Time')
plt.ylabel('Govt Debt')
plt.show()
Chapter 11

How to Pay for a War: Part 2

11.1 Contents

• An Application of Markov Jump Linear Quadratic Dynamic Programming 11.2
• Two example specifications 11.3
• One- and Two-period Bonds but No Restructuring 11.4
• Mapping into an LQ Markov Jump Problem 11.5
• Penalty on Different Issuance Across Maturities 11.6
• A Model with Restructuring 11.7
• Restructuring as a Markov Jump Linear Quadratic Control Problem 11.8
In addition to what’s in Anaconda, this lecture deploys the quantecon library:

In [1]: !pip install --upgrade quantecon

11.2 An Application of Markov Jump Linear Quadratic Dynamic Programming

This is a sequel to an earlier lecture.


We use a method introduced in lecture Markov Jump LQ dynamic programming to imple-
ment suggestions by Barro (1999 [8], 2003 [9]) for extending his classic 1979 model of tax
smoothing.
Barro’s 1979 [7] model is about a government that borrows and lends in order to help it mini-
mize an intertemporal measure of distortions caused by taxes.
Technically, Barro’s 1979 [7] model looks a lot like a consumption-smoothing model.
Our generalizations of his 1979 [7] model will also look like souped-up consumption-
smoothing models.
Wanting tractability induced Barro in 1979 [7] to assume that
• the government trades only one-period risk-free debt, and
• the one-period risk-free interest rate is constant
In our earlier lecture, we relaxed the second of these assumptions but not the first.
In particular, we used Markov jump linear quadratic dynamic programming to allow the
exogenous interest rate to vary over time.


In this lecture, we add a maturity composition decision to the government’s problem by ex-
panding the dimension of the state.
We assume
• that the government borrows or saves in the form of risk-free bonds of maturities
1, 2, … , 𝐻.
• that interest rates on those bonds are time-varying and in particular are governed by a
jointly stationary stochastic process.
Let’s start with some standard imports:

In [2]: import quantecon as qe


import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

11.3 Two example specifications

We’ll describe two possible specifications


• In one, each period the government issues zero-coupon bonds of one- and two-period
maturities and redeems them only when they mature – in this version, the maturity
structure of government debt at each date is partly inherited from the past.
• In the second, the government redesigns the maturity structure of the debt each period.

11.4 One- and Two-period Bonds but No Restructuring

Let 𝑇𝑡 denote tax collections, 𝛽 a discount factor, 𝑏𝑡,𝑡+1 time 𝑡 + 1 goods that the government
promises to pay at 𝑡, 𝑏𝑡,𝑡+2 time 𝑡 + 2 goods that the government promises to pay at time 𝑡,
𝐺𝑡 government purchases, 𝑝𝑡,𝑡+1 the number of time 𝑡 goods received per time 𝑡 + 1 goods
promised, and 𝑝𝑡,𝑡+2 the number of time 𝑡 goods received per time 𝑡 + 2 goods promised.
Evidently, 𝑝𝑡,𝑡+1 , 𝑝𝑡,𝑡+2 are inversely related to appropriate corresponding gross interest rates
on government debt.
In the spirit of Barro (1979) [7], government expenditures are governed by an exogenous
stochastic process.
Given initial conditions $b_{-2,0}, b_{-1,0}, z_0, i_0$, where $i_0$ is the initial Markov state, the government
chooses a contingency plan for $\{b_{t,t+1}, b_{t,t+2}, T_t\}_{t=0}^{\infty}$ to maximize

$$
- E_0 \sum_{t=0}^{\infty} \beta^t \left[ T_t^2 + c_1 (b_{t,t+1} - b_{t,t+2})^2 \right]
$$

subject to the constraints

$$
T_t = G_t + b_{t-2,t} + b_{t-1,t} - p_{t,t+2} b_{t,t+2} - p_{t,t+1} b_{t,t+1}
$$

$$
G_t = U_{g,s_t} z_t
$$

$$
z_{t+1} = A_{22,s_t} z_t + C_{2,s_t} w_{t+1}
$$

$$
\begin{bmatrix} p_{t,t+1} \\ p_{t,t+2} \\ U_{g,s_t} \\ A_{22,s_t} \\ C_{2,s_t} \end{bmatrix} \sim \text{functions of the Markov state with transition matrix } \Pi
$$

Here $w_{t+1} \sim N(0, I)$ and $\Pi_{ij}$ is the probability that the Markov state moves from state $i$ to
state $j$ in one period.
The variables 𝑇𝑡 , 𝑏𝑡,𝑡+1 , 𝑏𝑡,𝑡+2 are control variables chosen at 𝑡, while the variables 𝑏𝑡−1,𝑡 , 𝑏𝑡−2,𝑡
are endogenous state variables inherited from the past at time 𝑡 and 𝑝𝑡,𝑡+1 , 𝑝𝑡,𝑡+2 are exoge-
nous state variables at time 𝑡.
The parameter 𝑐1 imposes a penalty on the government’s issuing different quantities of one
and two-period debt.
This penalty deters the government from taking large “long-short” positions in debt of differ-
ent maturities. An example below will show this in action.
As well as extending the model to allow for a maturity decision for government debt, we can
also in principle allow the matrices 𝑈𝑔,𝑠𝑡 , 𝐴22,𝑠𝑡 , 𝐶2,𝑠𝑡 to depend on the Markov state 𝑠𝑡 .
Below, we will often adopt the convention that for matrices appearing in a linear state space,
𝐴𝑡 ≡ 𝐴𝑠𝑡 , 𝐶𝑡 ≡ 𝐶𝑠𝑡 and so on, so that dependence on 𝑡 is always intermediated through the
Markov state 𝑠𝑡 .

11.5 Mapping into an LQ Markov Jump Problem

First, define

$$
\hat{b}_t = b_{t-1,t} + b_{t-2,t},
$$

which is debt due at time 𝑡.


Then define the endogenous part of the state:

$$
\bar{b}_t = \begin{bmatrix} \hat{b}_t \\ b_{t-1,t+1} \end{bmatrix}
$$

and the complete state

$$
x_t = \begin{bmatrix} \bar{b}_t \\ z_t \end{bmatrix}
$$

and the control vector

$$
u_t = \begin{bmatrix} b_{t,t+1} \\ b_{t,t+2} \end{bmatrix}
$$

The endogenous part of state vector follows the law of motion:

$$
\begin{bmatrix} \hat{b}_{t+1} \\ b_{t,t+2} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \hat{b}_t \\ b_{t-1,t+1} \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} b_{t,t+1} \\ b_{t,t+2} \end{bmatrix}
$$

or

$$
\bar{b}_{t+1} = A_{11} \bar{b}_t + B_1 u_t
$$

Define the following functions of the state

$$
G_t = S_{G,t} x_t, \quad \hat{b}_t = S_1 x_t
$$

and

$$
M_t = \begin{bmatrix} -p_{t,t+1} & -p_{t,t+2} \end{bmatrix}
$$

where 𝑝𝑡,𝑡+1 is the discount on one period loans in the discrete Markov state at time 𝑡 and
𝑝𝑡,𝑡+2 is the discount on two-period loans in the discrete Markov state.
Define

𝑆𝑡 = 𝑆𝐺,𝑡 + 𝑆1

Note that in discrete Markov state 𝑖

𝑇𝑡 = 𝑀𝑡 𝑢𝑡 + 𝑆𝑡 𝑥𝑡

It follows that

𝑇𝑡2 = 𝑥′𝑡 𝑆𝑡′ 𝑆𝑡 𝑥𝑡 + 𝑢′𝑡 𝑀𝑡′ 𝑀𝑡 𝑢𝑡 + 2𝑢′𝑡 𝑀𝑡′ 𝑆𝑡 𝑥𝑡

or

𝑇𝑡2 = 𝑥′𝑡 𝑅𝑡 𝑥𝑡 + 𝑢′𝑡 𝑄𝑡 𝑢𝑡 + 2𝑢′𝑡 𝑊𝑡 𝑥𝑡

where

𝑅𝑡 = 𝑆𝑡′ 𝑆𝑡 , 𝑄𝑡 = 𝑀𝑡′ 𝑀𝑡 , 𝑊𝑡 = 𝑀𝑡′ 𝑆𝑡

Because the payoff function also includes the penalty parameter on issuing debt of different
maturities, we have:

𝑇𝑡2 + 𝑐1 (𝑏𝑡,𝑡+1 − 𝑏𝑡,𝑡+2 )2 = 𝑥′𝑡 𝑅𝑡 𝑥𝑡 + 𝑢′𝑡 𝑄𝑡 𝑢𝑡 + 2𝑢′𝑡 𝑊𝑡 𝑥𝑡 + 𝑐1 𝑢′𝑡 𝑄𝑐 𝑢𝑡

where $Q^c = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$. Therefore, the overall $Q$ matrix for the Markov jump LQ problem is:

$$
Q^c_t = Q_t + c_1 Q^c
$$
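
As a quick check that $Q^c$ encodes the penalty term, note that for any control vector:

$$
u_t' Q^c u_t = \begin{bmatrix} b_{t,t+1} & b_{t,t+2} \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} b_{t,t+1} \\ b_{t,t+2} \end{bmatrix} = (b_{t,t+1} - b_{t,t+2})^2
$$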

The law of motion of the state in all discrete Markov states 𝑖 is

𝑥𝑡+1 = 𝐴𝑡 𝑥𝑡 + 𝐵𝑢𝑡 + 𝐶𝑡 𝑤𝑡+1

where

$$
A_t = \begin{bmatrix} A_{11} & 0 \\ 0 & A_{22,t} \end{bmatrix}, \quad B = \begin{bmatrix} B_1 \\ 0 \end{bmatrix}, \quad C_t = \begin{bmatrix} 0 \\ C_{2,t} \end{bmatrix}
$$

Thus, in this problem all the matrices apart from 𝐵 may depend on the Markov state at time
𝑡.
As shown in the previous lecture, the LQMarkov class can solve Markov jump LQ problems
when provided with the 𝐴, 𝐵, 𝐶, 𝑅, 𝑄, 𝑊 matrices for each Markov state.
The function below maps the primitive matrices and parameters from the above two-period
model into the matrices that the LQMarkov class requires:

In [3]: def LQ_markov_mapping(A22, C2, Ug, p1, p2, c1=0):

"""
Function which takes A22, C2, Ug, p_{t, t+1}, p_{t, t+2} and penalty
parameter c1, and returns the required matrices for the LQMarkov
model: A, B, C, R, Q, W.
This version uses the condensed version of the endogenous state.
"""

# Make sure all matrices can be treated as 2D arrays


A22 = np.atleast_2d(A22)
C2 = np.atleast_2d(C2)
Ug = np.atleast_2d(Ug)
p1 = np.atleast_2d(p1)
p2 = np.atleast_2d(p2)

# Find the number of states (z) and shocks (w)


nz, nw = C2.shape

# Create A11, B1, S1, S2, Sg, S matrices


A11 = np.zeros((2, 2))
A11[0, 1] = 1

B1 = np.eye(2)

S1 = np.hstack((np.eye(1), np.zeros((1, nz+1))))


Sg = np.hstack((np.zeros((1, 2)), Ug))
S = S1 + Sg

# Create M matrix
M = np.hstack((-p1, -p2))

# Create A, B, C matrices
A_T = np.hstack((A11, np.zeros((2, nz))))
A_B = np.hstack((np.zeros((nz, 2)), A22))

A = np.vstack((A_T, A_B))

B = np.vstack((B1, np.zeros((nz, 2))))

C = np.vstack((np.zeros((2, nw)), C2))

# Create Q^c matrix


Qc = np.array([[1, -1], [-1, 1]])

# Create R, Q, W matrices

R = S.T @ S
Q = M.T @ M + c1 * Qc
W = M.T @ S

return A, B, C, R, Q, W

With the above function, we can proceed to solve the model in two steps:

1. Use LQ_markov_mapping to map 𝑈𝑔,𝑡 , 𝐴22,𝑡 , 𝐶2,𝑡 , 𝑝𝑡,𝑡+1 , 𝑝𝑡,𝑡+2 into the
𝐴, 𝐵, 𝐶, 𝑅, 𝑄, 𝑊 matrices for each of the 𝑛 Markov states.

2. Use the LQMarkov class to solve the resulting n-state Markov jump LQ problem.

11.6 Penalty on Different Issuance Across Maturities

To implement a simple example of the two-period model, we assume that 𝐺𝑡 follows an AR(1)
process:

$$
G_{t+1} = \bar{G} + \rho G_t + \sigma w_{t+1}
$$

To do this, we set $z_t = \begin{bmatrix} 1 \\ G_t \end{bmatrix}$, and consequently:

$$
A_{22} = \begin{bmatrix} 1 & 0 \\ \bar{G} & \rho \end{bmatrix}, \quad C_2 = \begin{bmatrix} 0 \\ \sigma \end{bmatrix}, \quad U_g = \begin{bmatrix} 0 & 1 \end{bmatrix}
$$

Therefore, in this example, 𝐴22 , 𝐶2 and 𝑈𝑔 are not time-varying.


We will assume that there are two Markov states, one with a flatter yield curve, and one with
a steeper yield curve. In state 1, prices are:

$$
p^1_{t,t+1} = \beta, \quad p^1_{t,t+2} = \beta^2 - 0.02
$$

and in state 2, prices are:

$$
p^2_{t,t+1} = \beta, \quad p^2_{t,t+2} = \beta^2 + 0.02
$$

We first solve the model with no penalty parameter on different issuance across maturities,
i.e. 𝑐1 = 0.
We also need to specify a transition matrix for the Markov state. We use:

$$
\Pi = \begin{bmatrix} 0.9 & 0.1 \\ 0.1 & 0.9 \end{bmatrix}
$$

Thus, each Markov state is persistent, and there is an equal chance of moving from one to the
other.

In [4]: # Model parameters


β, Gbar, ρ, σ, c1 = 0.95, 5, 0.8, 1, 0
p1, p2, p3, p4 = β, β**2 - 0.02, β, β**2 + 0.02

# Basic model matrices


A22 = np.array([[1, 0], [Gbar, ρ] ,])
C_2 = np.array([[0], [σ]])
Ug = np.array([[0, 1]])

A1, B1, C1, R1, Q1, W1 = LQ_markov_mapping(A22, C_2, Ug, p1, p2, c1)
A2, B2, C2, R2, Q2, W2 = LQ_markov_mapping(A22, C_2, Ug, p3, p4, c1)

# Small penalties on debt required to implement no-Ponzi scheme


R1[0, 0] = R1[0, 0] + 1e-9
R2[0, 0] = R2[0, 0] + 1e-9

# Construct lists of matrices correspond to each state


As = [A1, A2]
Bs = [B1, B2]
Cs = [C1, C2]
Rs = [R1, R2]
Qs = [Q1, Q2]
Ws = [W1, W2]

Π = np.array([[0.9, 0.1],
[0.1, 0.9]])

# Construct and solve the model using the LQMarkov class


lqm = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)
lqm.stationary_values()

# Simulate the model


x0 = np.array([[100, 50, 1, 10]])
x, u, w, t = lqm.compute_sequence(x0, ts_length=300)

# Plot of one and two-period debt issuance


fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(u[0, :])
ax1.set_title('One-period debt issuance')
ax1.set_xlabel('Time')
ax2.plot(u[1, :])
ax2.set_title('Two-period debt issuance')
ax2.set_xlabel('Time')
plt.show()

The above simulations show that when no penalty is imposed on different issuances across
maturities, the government has an incentive to take large “long-short” positions in debt of
different maturities.
To prevent such an outcome, we now set 𝑐1 = 0.01.
This penalty is enough to ensure that the government issues positive quantities of both one
and two-period debt:

In [5]: # Put small penalty on different issuance across maturities


c1 = 0.01

A1, B1, C1, R1, Q1, W1 = LQ_markov_mapping(A22, C_2, Ug, p1, p2, c1)
A2, B2, C2, R2, Q2, W2 = LQ_markov_mapping(A22, C_2, Ug, p3, p4, c1)

# Small penalties on debt required to implement no-Ponzi scheme


R1[0, 0] = R1[0, 0] + 1e-9
R2[0, 0] = R2[0, 0] + 1e-9

# Construct lists of matrices


As = [A1, A2]
Bs = [B1, B2]
Cs = [C1, C2]
Rs = [R1, R2]
Qs = [Q1, Q2]
Ws = [W1, W2]

# Construct and solve the model using the LQMarkov class


lqm2 = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)
lqm2.stationary_values()

# Simulate the model


x, u, w, t = lqm2.compute_sequence(x0, ts_length=300)

# Plot of one and two-period debt issuance


fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(u[0, :])
ax1.set_title('One-period debt issuance')
ax1.set_xlabel('Time')
ax2.plot(u[1, :])
ax2.set_title('Two-period debt issuance')
ax2.set_xlabel('Time')
plt.show()

11.7 A Model with Restructuring

This model alters two features of the previous model:

1. The maximum horizon of government debt is now extended to a general H periods.

2. The government is able to redesign the maturity structure of debt every period.

We impose a cost on adjusting issuance of each maturity by amending the payoff function to
become:

$$
T_t^2 + \sum_{j=0}^{H-1} c_2 \left( b^{t-1}_{t+j} - b^t_{t+j+1} \right)^2
$$

The government’s budget constraint is now:

$$
T_t + \sum_{j=1}^{H} p_{t,t+j} b^t_{t+j} = b^{t-1}_t + \sum_{j=1}^{H-1} p_{t,t+j} b^{t-1}_{t+j} + G_t
$$

To map this into the Markov Jump LQ framework, we define state and control variables.
Let:

$$
\bar{b}_t = \begin{bmatrix} b^{t-1}_t \\ b^{t-1}_{t+1} \\ \vdots \\ b^{t-1}_{t+H-1} \end{bmatrix}, \quad
u_t = \begin{bmatrix} b^t_{t+1} \\ b^t_{t+2} \\ \vdots \\ b^t_{t+H} \end{bmatrix}
$$

Thus, 𝑏̄𝑡 is the endogenous state (debt issued last period) and 𝑢𝑡 is the control (debt issued
today).
As before, we will also have the exogenous state 𝑧𝑡 , which determines government spending.
Therefore, the full state is:

$$
x_t = \begin{bmatrix} \bar{b}_t \\ z_t \end{bmatrix}
$$

We also define a vector 𝑝𝑡 that contains the time 𝑡 price of goods in period 𝑡 + 𝑗:

$$
p_t = \begin{bmatrix} p_{t,t+1} \\ p_{t,t+2} \\ \vdots \\ p_{t,t+H} \end{bmatrix}
$$

Finally, we define three useful matrices 𝑆𝑠 , 𝑆𝑥 , 𝑆𝑥̃ :

$$
\begin{bmatrix} p_{t,t+1} \\ p_{t,t+2} \\ \vdots \\ p_{t,t+H-1} \end{bmatrix} = S_s p_t
\quad \text{where} \quad
S_s = \begin{bmatrix}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{bmatrix}
$$

$$
\begin{bmatrix} b^{t-1}_{t+1} \\ b^{t-1}_{t+2} \\ \vdots \\ b^{t-1}_{t+H-1} \end{bmatrix} = S_x \bar{b}_t
\quad \text{where} \quad
S_x = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \\
0 & 0 & \cdots & 0 & 1
\end{bmatrix}
$$

$$
b^{t-1}_t = \tilde{S}_x \bar{b}_t \quad \text{where} \quad \tilde{S}_x = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix}
$$

In terms of dimensions, the first two matrices defined above are $(H - 1) \times H$.

The last is $1 \times H$.
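
As a concrete illustration for $H = 3$, here is a small sketch that builds these selector matrices and applies them to hypothetical price and debt vectors (the construction mirrors the one inside LQ_markov_mapping_restruct below):

import numpy as np

H = 3

# Selector matrices for H = 3, built exactly as defined above
Ss = np.hstack((np.eye(H - 1), np.zeros((H - 1, 1))))   # (H-1) x H
Sx = np.hstack((np.zeros((H - 1, 1)), np.eye(H - 1)))   # (H-1) x H
tSx = np.zeros((1, H))
tSx[0, 0] = 1                                           # 1 x H

p_t = np.array([0.97, 0.94, 0.91])    # hypothetical price vector
b_bar = np.array([10., 20., 30.])     # hypothetical inherited debt vector

print(Ss @ p_t)     # [0.97 0.94]  -- first H-1 prices
print(Sx @ b_bar)   # [20. 30.]    -- debts due after today
print(tSx @ b_bar)  # [10.]        -- debt due today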
We can now write the government’s budget constraint in matrix notation. Rearranging the
government budget constraint gives:

$$
T_t = b^{t-1}_t + \sum_{j=1}^{H-1} p_{t,t+j} b^{t-1}_{t+j} + G_t - \sum_{j=1}^{H} p_{t,t+j} b^t_{t+j}
$$

or

$$
T_t = \tilde{S}_x \bar{b}_t + (S_s p_t) \cdot (S_x \bar{b}_t) + U_g z_t - p_t \cdot u_t
$$

If we want to write this in terms of the full state, we have:

$$
T_t = \begin{bmatrix} (\tilde{S}_x + p_t' S_s' S_x) & U_g \end{bmatrix} x_t - p_t' u_t
$$

To simplify the notation, let $S_t = \begin{bmatrix} (\tilde{S}_x + p_t' S_s' S_x) & U_g \end{bmatrix}$.

Then

$$
T_t = S_t x_t - p_t' u_t
$$

Therefore

𝑇𝑡2 = 𝑥′𝑡 𝑅𝑡 𝑥𝑡 + 𝑢′𝑡 𝑄𝑡 𝑢𝑡 + 2𝑢′𝑡 𝑊𝑡 𝑥𝑡

where

𝑅𝑡 = 𝑆𝑡′ 𝑆𝑡 , 𝑄𝑡 = 𝑝𝑡 𝑝𝑡′ , 𝑊𝑡 = −𝑝𝑡 𝑆𝑡

where to economize on notation we adopt the convention that for the linear state matrices
$R_t \equiv R_{s_t}$, $Q_t \equiv Q_{s_t}$, $W_t \equiv W_{s_t}$ and so on.
We’ll continue to use this convention also for the linear state matrices 𝐴, 𝐵, 𝑊 and so on be-
low.
Because the payoff function also includes the penalty parameter for rescheduling, we have:

$$
T_t^2 + \sum_{j=0}^{H-1} c_2 \left( b^{t-1}_{t+j} - b^t_{t+j+1} \right)^2 = T_t^2 + c_2 (\bar{b}_t - u_t)'(\bar{b}_t - u_t)
$$

Because the complete state is 𝑥𝑡 and not 𝑏̄𝑡 , we rewrite this as:

𝑇𝑡2 + 𝑐2 (𝑆𝑐 𝑥𝑡 − 𝑢𝑡 )′ (𝑆𝑐 𝑥𝑡 − 𝑢𝑡 )

where 𝑆𝑐 = [𝐼 0]
Multiplying this out gives:

𝑇𝑡2 + 𝑐2 𝑥′𝑡 𝑆𝑐′ 𝑆𝑐 𝑥𝑡 − 2𝑐2 𝑢′𝑡 𝑆𝑐 𝑥𝑡 + 𝑐2 𝑢′𝑡 𝑢𝑡

Therefore, with the cost term, we must amend our 𝑅, 𝑄, 𝑊 matrices as follows:

𝑅𝑡𝑐 = 𝑅𝑡 + 𝑐2 𝑆𝑐′ 𝑆𝑐

𝑄𝑐𝑡 = 𝑄𝑡 + 𝑐2 𝐼

𝑊𝑡𝑐 = 𝑊𝑡 − 𝑐2 𝑆𝑐

To finish mapping into the Markov jump LQ setup, we need to construct the law of motion
for the full state.
This is simpler than in the previous setup, as we now have 𝑏̄𝑡+1 = 𝑢𝑡 .
Therefore:

$$
x_{t+1} \equiv \begin{bmatrix} \bar{b}_{t+1} \\ z_{t+1} \end{bmatrix} = A_t x_t + B u_t + C_t w_{t+1}
$$

where

$$
A_t = \begin{bmatrix} 0 & 0 \\ 0 & A_{22,t} \end{bmatrix}, \quad B = \begin{bmatrix} I \\ 0 \end{bmatrix}, \quad C_t = \begin{bmatrix} 0 \\ C_{2,t} \end{bmatrix}
$$

This completes the mapping into a Markov jump LQ problem.

11.8 Restructuring as a Markov Jump Linear Quadratic Control Problem

As with the previous model, we can use a function to map the primitives of the model with
restructuring into the matrices that the LQMarkov class requires:

In [6]: def LQ_markov_mapping_restruct(A22, C2, Ug, T, p_t, c=0):

"""
Function which takes A22, C2, T, p_t, c and returns the
required matrices for the LQMarkov model: A, B, C, R, Q, W
Note, p_t should be a T by 1 matrix
c is the rescheduling cost (a scalar)
This version uses the condensed version of the endogenous state
"""

# Make sure all matrices can be treated as 2D arrays


A22 = np.atleast_2d(A22)
C2 = np.atleast_2d(C2)
Ug = np.atleast_2d(Ug)
p_t = np.atleast_2d(p_t)

# Find the number of states (z) and shocks (w)


nz, nw = C2.shape

# Create Sx, tSx, Ss, S_t matrices (tSx stands for \tilde S_x)
Ss = np.hstack((np.eye(T-1), np.zeros((T-1, 1))))
Sx = np.hstack((np.zeros((T-1, 1)), np.eye(T-1)))
tSx = np.zeros((1, T))
tSx[0, 0] = 1

S_t = np.hstack((tSx + p_t.T @ Ss.T @ Sx, Ug))

# Create A, B, C matrices
A_T = np.hstack((np.zeros((T, T)), np.zeros((T, nz))))
A_B = np.hstack((np.zeros((nz, T)), A22))
A = np.vstack((A_T, A_B))

B = np.vstack((np.eye(T), np.zeros((nz, T))))


C = np.vstack((np.zeros((T, nw)), C2))

# Create cost matrix Sc


Sc = np.hstack((np.eye(T), np.zeros((T, nz))))

# Create R_t, Q_t, W_t matrices

R_c = S_t.T @ S_t + c * Sc.T @ Sc


Q_c = p_t @ p_t.T + c * np.eye(T)
W_c = -p_t @ S_t - c * Sc

return A, B, C, R_c, Q_c, W_c



11.8.1 Example with Restructuring

As an example of the model with restructuring, consider this model where 𝐻 = 3.


We will assume that there are two Markov states, one with a flatter yield curve, and one with
a steeper yield curve.
In state 1, prices are:

$$
p^1_{t,t+1} = 0.9695, \quad p^1_{t,t+2} = 0.902, \quad p^1_{t,t+3} = 0.8369
$$

and in state 2, prices are:

$$
p^2_{t,t+1} = 0.9295, \quad p^2_{t,t+2} = 0.902, \quad p^2_{t,t+3} = 0.8769
$$

We will assume the same transition matrix and $G_t$ process as above.

In [7]: # New model parameters


H = 3
p1 = np.array([[0.9695], [0.902], [0.8369]])
p2 = np.array([[0.9295], [0.902], [0.8769]])
Pi = np.array([[0.9, 0.1], [0.1, 0.9]])

# Put penalty on different issuance across maturities


c2 = 0.5

A1, B1, C1, R1, Q1, W1 = LQ_markov_mapping_restruct(A22, C_2, Ug, H, p1, c2)
A2, B2, C2, R2, Q2, W2 = LQ_markov_mapping_restruct(A22, C_2, Ug, H, p2, c2)

# Small penalties on debt required to implement no-Ponzi scheme


R1[0, 0] = R1[0, 0] + 1e-9
R1[1, 1] = R1[1, 1] + 1e-9
R1[2, 2] = R1[2, 2] + 1e-9
R2[0, 0] = R2[0, 0] + 1e-9
R2[1, 1] = R2[1, 1] + 1e-9
R2[2, 2] = R2[2, 2] + 1e-9

# Construct lists of matrices


As = [A1, A2]
Bs = [B1, B2]
Cs = [C1, C2]
Rs = [R1, R2]
Qs = [Q1, Q2]
Ws = [W1, W2]

# Construct and solve the model using the LQMarkov class


lqm3 = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)
lqm3.stationary_values()

x0 = np.array([[5000, 5000, 5000, 1, 10]])


x, u, w, t = lqm3.compute_sequence(x0, ts_length=300)

In [8]: # Plots of different maturities debt issuance

fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 4))


ax1.plot(u[0, :])

ax1.set_title('One-period debt issuance')


ax1.set_xlabel('Time')
ax2.plot(u[1, :])
ax2.set_title('Two-period debt issuance')
ax2.set_xlabel('Time')
ax3.plot(u[2, :])
ax3.set_title('Three-period debt issuance')
ax3.set_xlabel('Time')
ax4.plot(u[0, :] + u[1, :] + u[2, :])
ax4.set_title('Total debt issuance')
ax4.set_xlabel('Time')
plt.tight_layout()
plt.show()

In [9]: # Plot share of debt issuance that is short-term

fig, ax = plt.subplots()
ax.plot((u[0, :] / (u[0, :] + u[1, :] + u[2, :])))
ax.set_title('One-period debt issuance share')
ax.set_xlabel('Time')
plt.show()
Chapter 12

How to Pay for a War: Part 3

12.1 Contents

• Another Application of Markov Jump Linear Quadratic Dynamic Programming 12.2
• Roll-Over Risk 12.3
• A Dead End 12.4
• Better Representation of Roll-Over Risk 12.5
In addition to what’s in Anaconda, this lecture deploys the quantecon library:

In [1]: !pip install --upgrade quantecon

12.2 Another Application of Markov Jump Linear Quadratic Dynamic Programming

This is another sequel to an earlier lecture.


We again use a method introduced in lecture Markov Jump LQ dynamic programming to
implement some ideas of Barro (1999 [8], 2003 [9]) that extend his classic 1979 [7] model of
tax smoothing.
Barro’s 1979 [7] model is about a government that borrows and lends in order to help it mini-
mize an intertemporal measure of distortions caused by taxes.
Technically, Barro’s 1979 [7] model looks a lot like a consumption-smoothing model.
Our generalizations of his 1979 model will also look like souped-up consumption-smoothing
models.
In this lecture, we describe a tax-smoothing problem of a government that faces roll-over
risk.
Let’s start with some standard imports:

In [2]: import quantecon as qe


import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline


12.3 Roll-Over Risk

Let $T_t$ denote tax collections, $\beta$ a discount factor, $b_{t,t+1}$ time $t+1$ goods that the government
promises to pay at $t$, $G_t$ government purchases, $p^t_{t+1}$ the number of time $t$ goods received per
time $t+1$ goods promised.
The stochastic process of government expenditures is exogenous.
The government’s problem is to choose a plan for borrowing and tax collections $\{b_{t+1}, T_t\}_{t=0}^{\infty}$
to minimize

$$
E_0 \sum_{t=0}^{\infty} \beta^t T_t^2
$$

subject to the constraints

$$
T_t + p^t_{t+1} b_{t,t+1} = G_t + b_{t-1,t}
$$

$$
G_t = U_{g,t} z_t
$$

$$
z_{t+1} = A_{22,t} z_t + C_{2,t} w_{t+1}
$$

where $w_{t+1} \sim N(0, I)$. The variables $T_t, b_{t,t+1}$ are control variables chosen at $t$, while $b_{t-1,t}$ is
an endogenous state variable inherited from the past at time $t$ and $p^t_{t+1}$ is an exogenous state
variable at time $t$.
This is the same set-up as used in this lecture.
We will consider a situation in which the government faces “roll-over risk”.
Specifically, we shut down the government’s ability to borrow in one of the Markov states.

12.4 A Dead End


A first thought for how to implement this might be to allow $p^t_{t+1}$ to vary over time with:

$$
p^t_{t+1} = \beta
$$

in Markov state 1 and

$$
p^t_{t+1} = 0
$$

in Markov state 2.
Consequently, in the second Markov state, the government is unable to borrow, and the bud-
get constraint becomes 𝑇𝑡 = 𝐺𝑡 + 𝑏𝑡−1,𝑡 .
However, if this is the only adjustment we make in our linear-quadratic model, the govern-
ment will not set 𝑏𝑡,𝑡+1 = 0, which is the outcome we want to express roll-over risk in period
𝑡.

Instead, the government would have an incentive to set 𝑏𝑡,𝑡+1 to a large negative number in
state 2 – it would accumulate large amounts of assets to bring into period 𝑡 + 1 because that
is cheap (Our Riccati equations will discover this for us!).
Thus, we must represent “roll-over risk” some other way.

12.5 Better Representation of Roll-Over Risk

To force the government to set 𝑏𝑡,𝑡+1 = 0, we can instead extend the model to have four
Markov states:

1. Good today, good yesterday

2. Good today, bad yesterday

3. Bad today, good yesterday

4. Bad today, bad yesterday

where good is a state in which effectively the government can issue debt and bad is a state in
which effectively the government can’t issue debt.
We’ll explain what effectively means shortly.
We now set

$$
p^t_{t+1} = \beta
$$

in all states.
In addition – and this is important because it defines what we mean by effectively – we put a
large penalty on the 𝑏𝑡−1,𝑡 element of the state vector in states 2 and 4.
This will prevent the government from wishing to issue any debt in states 3 or 4 because it
would experience a large penalty from doing so in the next period.
The transition matrix for this formulation is:

$$
\Pi = \begin{bmatrix}
0.95 & 0 & 0.05 & 0 \\
0.95 & 0 & 0.05 & 0 \\
0 & 0.9 & 0 & 0.1 \\
0 & 0.9 & 0 & 0.1
\end{bmatrix}
$$

This transition matrix ensures that the Markov state cannot move, for example, from state 3
to state 1.
Because state 3 is “bad today”, the next period cannot have “good yesterday”.
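
A quick sanity check on this transition structure, using the state ordering above (the specific assertions are our own illustration):

import numpy as np

Π = np.array([[0.95, 0,   0.05, 0  ],
              [0.95, 0,   0.05, 0  ],
              [0,    0.9, 0,    0.1],
              [0,    0.9, 0,    0.1]])

assert np.allclose(Π.sum(axis=1), 1)  # rows are valid probability distributions
assert Π[2, 0] == 0 and Π[2, 2] == 0  # from state 3, "good yesterday" states are unreachable
assert Π[2, 1] == 0.9                 # "good today, bad yesterday" follows with prob 0.9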

In [3]: # Model parameters


β, Gbar, ρ, σ = 0.95, 5, 0.8, 1

# Basic model matrices


A22 = np.array([[1, 0], [Gbar, ρ], ])
C2 = np.array([[0], [σ]])

Ug = np.array([[0, 1]])

# LQ framework matrices
A_t = np.zeros((1, 3))
A_b = np.hstack((np.zeros((2, 1)), A22))
A = np.vstack((A_t, A_b))

B = np.zeros((3, 1))
B[0, 0] = 1

C = np.vstack((np.zeros((1, 1)), C2))

Sg = np.hstack((np.zeros((1, 1)), Ug))


S1 = np.zeros((1, 3))
S1[0, 0] = 1
S = S1 + Sg

R = S.T @ S

# Large penalty on debt in R2 to prevent borrowing in a bad state


R1 = np.copy(R)
R2 = np.copy(R)
R1[0, 0] = R[0, 0] + 1e-9
R2[0, 0] = R[0, 0] + 1e12

M = np.array([[-β]])
Q = M.T @ M
W = M.T @ S

Π = np.array([[0.95, 0, 0.05, 0],


[0.95, 0, 0.05, 0],
[0, 0.9, 0, 0.1],
[0, 0.9, 0, 0.1]])

# Construct lists of matrices that correspond to each state


As = [A, A, A, A]
Bs = [B, B, B, B]
Cs = [C, C, C, C]
Rs = [R1, R2, R1, R2]
Qs = [Q, Q, Q, Q]
Ws = [W, W, W, W]

lqm = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)


lqm.stationary_values();

This model is simulated below, using the same process for 𝐺𝑡 as in this lecture.
When $p^t_{t+1} = \beta$, government debt fluctuates around zero.
The spikes in the series for taxation show periods when the government is unable to access
financial markets: positive spikes occur when debt is positive, and the government must raise
taxes in the current period.
Negative spikes occur when the government has positive asset holdings.
An inability to use financial markets in the next period means that the government uses those
assets to lower taxation today.

In [4]: x0 = np.array([[0, 1, 25]])



T = 300
x, u, w, state = lqm.compute_sequence(x0, ts_length=T)

# Calculate taxation each period from the budget constraint
# and the Markov state
tax = np.zeros([T, 1])
for i in range(T):
tax[i, :] = S @ x[:, i] + M @ u[:, i]

# Plot of debt issuance and taxation


fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
ax1.plot(x[0, :])
ax1.set_title('One-period debt issuance')
ax1.set_xlabel('Time')
ax2.plot(tax)
ax2.set_title('Taxation')
ax2.set_xlabel('Time')
plt.show()

We can adjust the model so that, rather than having debt fluctuate around zero, the
government is a debtor in every period we allow it to borrow.

To accomplish this, we simply raise $p^t_{t+1}$ to $\beta + 0.02 = 0.97$.

In [5]: M = np.array([[-β - 0.02]])

Q = M.T @ M
W = M.T @ S

# Construct lists of matrices


As = [A, A, A, A]
Bs = [B, B, B, B]
Cs = [C, C, C, C]
Rs = [R1, R2, R1, R2]
Qs = [Q, Q, Q, Q]
Ws = [W, W, W, W]

lqm2 = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)


x, u, w, state = lqm2.compute_sequence(x0, ts_length=T)

# Calculate taxation each period from the budget constraint and the
# Markov state
tax = np.zeros([T, 1])
for i in range(T):

tax[i, :] = S @ x[:, i] + M @ u[:, i]

# Plot of debt issuance and taxation


fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
ax1.plot(x[0, :])
ax1.set_title('One-period debt issuance')
ax1.set_xlabel('Time')
ax2.plot(tax)
ax2.set_title('Taxation')
ax2.set_xlabel('Time')
plt.show()

With a lower interest rate, the government has an incentive to increase debt over time.
However, with “roll-over risk”, debt is recurrently reset to zero and taxes spike up.
Consequently, the government is wary of letting debt get too high, due to the high costs of a
“sudden stop”.
Chapter 13

Optimal Taxation in an LQ Economy

13.1 Contents

• Overview 13.2
• The Ramsey Problem 13.3
• Implementation 13.4
• Examples 13.5
• Exercises 13.6
• Solutions 13.7
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

13.2 Overview

In this lecture, we study optimal fiscal policy in a linear quadratic setting.


We modify a model of Robert Lucas and Nancy Stokey [45] so that convenient formulas for
solving linear-quadratic models can be applied.
The economy consists of a representative household and a benevolent government.
The government finances an exogenous stream of government purchases with state-contingent
loans and a linear tax on labor income.
A linear tax is sometimes called a flat-rate tax.
The household maximizes utility by choosing paths for consumption and labor, taking prices
and the government’s tax rate and borrowing plans as given.
Maximum attainable utility for the household depends on the government’s tax and borrow-
ing plans.
The Ramsey problem [51] is to choose tax and borrowing plans that maximize the household’s
welfare, taking the household’s optimizing behavior as given.
There is a large number of competitive equilibria indexed by different government fiscal
policies.
The Ramsey planner chooses the best competitive equilibrium.
We want to study the dynamics of tax rates, tax revenues, government debt under a Ramsey
plan.
Because the Lucas and Stokey model features state-contingent government debt, the govern-
ment debt dynamics differ substantially from those in a model of Robert Barro [7].
The treatment given here closely follows this manuscript, prepared by Thomas J. Sargent and
Francois R. Velde.
We cover only the key features of the problem in this lecture, leaving you to refer to that
source for additional results and intuition.
We’ll need the following imports:

In [2]: import sys


import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from numpy import sqrt, eye, zeros, cumsum
from numpy.random import randn
import scipy.linalg
from collections import namedtuple
from quantecon import nullspace, mc_sample_path, var_quadratic_sum

13.2.1 Model Features

• Linear quadratic (LQ) model


• Representative household
• Stochastic dynamic programming over an infinite horizon
• Distortionary taxation

13.3 The Ramsey Problem

We begin by outlining the key assumptions regarding technology, households and the govern-
ment sector.

13.3.1 Technology

Labor can be converted one-for-one into a single, non-storable consumption good.


In the usual spirit of the LQ model, the amount of labor supplied in each period is unre-
stricted.
This is unrealistic, but helpful when it comes to solving the model.
Realistic labor supply can be induced by suitable parameter values.

13.3.2 Households

Consider a representative household who chooses a path {ℓ𝑡 , 𝑐𝑡 } for labor and consumption to
maximize

$$
- \mathbb{E} \, \frac{1}{2} \sum_{t=0}^{\infty} \beta^t \left[ (c_t - b_t)^2 + \ell_t^2 \right] \tag{1}
$$

subject to the budget constraint


$$
\mathbb{E} \sum_{t=0}^{\infty} \beta^t p_t^0 \left[ d_t + (1 - \tau_t) \ell_t + s_t - c_t \right] = 0 \tag{2}
$$

Here
• 𝛽 is a discount factor in (0, 1).
• 𝑝𝑡0 is a scaled Arrow-Debreu price at time 0 of history contingent goods at time 𝑡 + 𝑗.
• 𝑏𝑡 is a stochastic preference parameter.
• 𝑑𝑡 is an endowment process.
• 𝜏𝑡 is a flat tax rate on labor income.
• 𝑠𝑡 is a promised time-𝑡 coupon payment on debt issued by the government.
The scaled Arrow-Debreu price 𝑝𝑡0 is related to the unscaled Arrow-Debreu price as follows.
If we let 𝜋𝑡0 (𝑥𝑡 ) denote the probability (density) of a history 𝑥𝑡 = [𝑥𝑡 , 𝑥𝑡−1 , … , 𝑥0 ] of the state
𝑥𝑡 , then the Arrow-Debreu time 0 price of a claim on one unit of consumption at date 𝑡, his-
tory 𝑥𝑡 would be

$$
\frac{\beta^t p_t^0}{\pi_t^0(x^t)}
$$

Thus, our scaled Arrow-Debreu price is the ordinary Arrow-Debreu price multiplied by the
discount factor 𝛽 𝑡 and divided by an appropriate probability.
The budget constraint (2) requires that the present value of consumption be restricted to
equal the present value of endowments, labor income and coupon payments on bond holdings.

13.3.3 Government

The government imposes a linear tax on labor income, fully committing to a stochastic path
of tax rates at time zero.
The government also issues state-contingent debt.
Given government tax and borrowing plans, we can construct a competitive equilibrium with
distorting government taxes.
Among all such competitive equilibria, the Ramsey plan is the one that maximizes the welfare
of the representative consumer.

13.3.4 Exogenous Variables

Endowments, government expenditure, the preference shock process 𝑏𝑡 , and promised coupon
payments on initial government debt 𝑠𝑡 are all exogenous, and given by
• 𝑑𝑡 = 𝑆𝑑 𝑥𝑡
• 𝑔𝑡 = 𝑆𝑔 𝑥𝑡
• 𝑏𝑡 = 𝑆𝑏 𝑥𝑡
• 𝑠𝑡 = 𝑆𝑠 𝑥𝑡
The matrices 𝑆𝑑 , 𝑆𝑔 , 𝑆𝑏 , 𝑆𝑠 are primitives and {𝑥𝑡 } is an exogenous stochastic process taking
values in ℝ𝑘 .
We consider two specifications for {𝑥𝑡 }.

1. Discrete case: {𝑥𝑡 } is a discrete state Markov chain with transition matrix 𝑃 .

2. VAR case: $\{x_t\}$ obeys $x_{t+1} = A x_t + C w_{t+1}$ where $\{w_t\}$ is independent zero-mean
Gaussian with identity covariance matrix.

13.3.5 Feasibility

The period-by-period feasibility restriction for this economy is

$$
c_t + g_t = d_t + \ell_t \tag{3}
$$

A labor-consumption process {ℓ𝑡 , 𝑐𝑡 } is called feasible if (3) holds for all 𝑡.

13.3.6 Government Budget Constraint

Where 𝑝𝑡0 is again a scaled Arrow-Debreu price, the time zero government budget constraint
is


$$
\mathbb{E} \sum_{t=0}^{\infty} \beta^t p_t^0 (s_t + g_t - \tau_t \ell_t) = 0 \tag{4}
$$

13.3.7 Equilibrium

An equilibrium is a feasible allocation {ℓ𝑡 , 𝑐𝑡 }, a sequence of prices {𝑝𝑡0 }, and a tax system
{𝜏𝑡 } such that

1. The allocation {ℓ𝑡 , 𝑐𝑡 } is optimal for the household given {𝑝𝑡0 } and {𝜏𝑡 }.

2. The government’s budget constraint (4) is satisfied.

The Ramsey problem is to choose the equilibrium {ℓ𝑡 , 𝑐𝑡 , 𝜏𝑡 , 𝑝𝑡0 } that maximizes the house-
hold’s welfare.
If {ℓ𝑡 , 𝑐𝑡 , 𝜏𝑡 , 𝑝𝑡0 } solves the Ramsey problem, then {𝜏𝑡 } is called the Ramsey plan.
The solution procedure we adopt is

1. Use the first-order conditions from the household problem to pin down prices and allo-
cations given {𝜏𝑡 }.

2. Use these expressions to rewrite the government budget constraint (4) in terms of ex-
ogenous variables and allocations.

3. Maximize the household’s objective function (1) subject to the constraint constructed in
step 2 and the feasibility constraint (3).

The solution to this maximization problem pins down all quantities of interest.

13.3.8 Solution

Step one is to obtain the first-order conditions for the household’s problem, taking taxes and
prices as given.
Letting 𝜇 be the Lagrange multiplier on (2), the first-order conditions are 𝑝𝑡0 = (𝑐𝑡 − 𝑏𝑡 )/𝜇 and
ℓ𝑡 = (𝑐𝑡 − 𝑏𝑡 )(1 − 𝜏𝑡 ).
Rearranging and normalizing at 𝜇 = 𝑏0 − 𝑐0 , we can write these conditions as

$$
p_t^0 = \frac{b_t - c_t}{b_0 - c_0} \quad \text{and} \quad \tau_t = 1 - \frac{\ell_t}{b_t - c_t} \tag{5}
$$

Substituting (5) into the government’s budget constraint (4) yields


$$
\mathbb{E} \sum_{t=0}^{\infty} \beta^t \left[ (b_t - c_t)(s_t + g_t - \ell_t) + \ell_t^2 \right] = 0 \tag{6}
$$

The Ramsey problem now amounts to maximizing (1) subject to (6) and (3).
The associated Lagrangian is


$$
\mathcal{L} = \mathbb{E} \sum_{t=0}^{\infty} \beta^t \left\{ -\frac{1}{2} \left[ (c_t - b_t)^2 + \ell_t^2 \right] + \lambda \left[ (b_t - c_t)(\ell_t - s_t - g_t) - \ell_t^2 \right] + \mu_t \left[ d_t + \ell_t - c_t - g_t \right] \right\} \tag{7}
$$

The first-order conditions associated with 𝑐𝑡 and ℓ𝑡 are

−(𝑐𝑡 − 𝑏𝑡 ) + 𝜆[−ℓ𝑡 + (𝑔𝑡 + 𝑠𝑡 )] = 𝜇𝑡

and

ℓ𝑡 − 𝜆[(𝑏𝑡 − 𝑐𝑡 ) − 2ℓ𝑡 ] = 𝜇𝑡

Combining these last two equalities with (3) and working through the algebra, one can show
that

$$
\ell_t = \bar{\ell}_t - \nu m_t \quad \text{and} \quad c_t = \bar{c}_t - \nu m_t \tag{8}
$$

where

• 𝜈 ∶= 𝜆/(1 + 2𝜆)
• ℓ𝑡̄ ∶= (𝑏𝑡 − 𝑑𝑡 + 𝑔𝑡 )/2
• 𝑐𝑡̄ ∶= (𝑏𝑡 + 𝑑𝑡 − 𝑔𝑡 )/2
• 𝑚𝑡 ∶= (𝑏𝑡 − 𝑑𝑡 − 𝑠𝑡 )/2
Apart from 𝜈, all of these quantities are expressed in terms of exogenous variables.
To solve for 𝜈, we can use the government’s budget constraint again.
The term inside the brackets in (6) is (𝑏𝑡 − 𝑐𝑡 )(𝑠𝑡 + 𝑔𝑡 ) − (𝑏𝑡 − 𝑐𝑡 )ℓ𝑡 + ℓ𝑡2 .
Using (8), the definitions above and the fact that $\bar{\ell} = b - \bar{c}$, this term can be rewritten as

$$
(b_t - \bar{c}_t)(g_t + s_t) + 2 m_t^2 (\nu^2 - \nu)
$$

Reinserting into (6), we get

$$
\mathbb{E} \left\{ \sum_{t=0}^{\infty} \beta^t (b_t - \bar{c}_t)(g_t + s_t) \right\} + (\nu^2 - \nu) \, \mathbb{E} \left\{ \sum_{t=0}^{\infty} \beta^t 2 m_t^2 \right\} = 0 \tag{9}
$$

Although it might not be clear yet, we are nearly there because:


• The two expectations terms in (9) can be solved for in terms of model primitives.
• This in turn allows us to solve for the Lagrange multiplier 𝜈.
• With 𝜈 in hand, we can go back and solve for the allocations via (8).
• Once we have the allocations, prices and the tax system can be derived from (5).

13.3.9 Computing the Quadratic Term

Let’s consider how to obtain the term 𝜈 in (9).


If we can compute the two expected geometric sums

$$
b_0 := \mathbb{E} \left\{ \sum_{t=0}^{\infty} \beta^t (b_t - \bar{c}_t)(g_t + s_t) \right\} \quad \text{and} \quad a_0 := \mathbb{E} \left\{ \sum_{t=0}^{\infty} \beta^t 2 m_t^2 \right\} \tag{10}
$$

then the problem reduces to solving

$$
b_0 + a_0 (\nu^2 - \nu) = 0
$$

for 𝜈.
Provided that 4𝑏0 < 𝑎0 , there is a unique solution 𝜈 ∈ (0, 1/2), and a unique corresponding
𝜆 > 0.
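
Concretely, applying the quadratic formula to $b_0 + a_0 (\nu^2 - \nu) = 0$ and selecting the smaller root gives

$$
\nu = \frac{a_0 - \sqrt{a_0^2 - 4 a_0 b_0}}{2 a_0}
$$

which lies in $(0, 1/2)$ precisely when the discriminant $a_0^2 - 4 a_0 b_0$ is positive; this is the expression the code below computes.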
Let’s work out how to compute mathematical expectations in (10).
For the first one, the random variable (𝑏𝑡 − 𝑐𝑡̄ )(𝑔𝑡 + 𝑠𝑡 ) inside the summation can be expressed
as

$$
\frac{1}{2} x_t' (S_b - S_d + S_g)' (S_g + S_s) x_t
$$

For the second expectation in (10), the random variable 2𝑚2𝑡 can be written as

$$
\frac{1}{2} x_t' (S_b - S_d - S_s)' (S_b - S_d - S_s) x_t
$$

It follows that both objects of interest are special cases of the expression


$$
q(x_0) = \mathbb{E} \sum_{t=0}^{\infty} \beta^t x_t' H x_t \tag{11}
$$

where 𝐻 is a matrix conformable to 𝑥𝑡 and 𝑥′𝑡 is the transpose of column vector 𝑥𝑡 .


Suppose first that {𝑥𝑡 } is the Gaussian VAR described above.
In this case, the formula for computing 𝑞(𝑥0 ) is known to be 𝑞(𝑥0 ) = 𝑥′0 𝑄𝑥0 + 𝑣, where
• 𝑄 is the solution to 𝑄 = 𝐻 + 𝛽𝐴′ 𝑄𝐴, and
• 𝑣 = trace (𝐶 ′ 𝑄𝐶)𝛽/(1 − 𝛽)
The first equation is known as a discrete Lyapunov equation and can be solved using this
function.
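
For instance, since $Q = H + \beta A' Q A$ can be rewritten in the standard form $Q = \tilde{A} Q \tilde{A}' + H$ with $\tilde{A} = \sqrt{\beta} A'$, SciPy’s solver applies directly; a minimal sketch with hypothetical $A$, $C$ and $H$:

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

β = 0.95
A = np.array([[0.8, 0.1],
              [0.0, 0.9]])   # hypothetical transition matrix
C = np.array([[0.1],
              [0.2]])        # hypothetical shock loading
H = np.eye(2)                # hypothetical quadratic form

# Q = H + β A' Q A  <=>  Q = (√β A') Q (√β A')' + H
Q = solve_discrete_lyapunov(np.sqrt(β) * A.T, H)
v = np.trace(C.T @ Q @ C) * β / (1 - β)

def q(x0):
    # Value of E Σ_t β^t x_t' H x_t starting from x_0
    return x0 @ Q @ x0 + v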

13.3.10 Finite State Markov Case

Next, suppose that {𝑥𝑡 } is the discrete Markov process described above.
Suppose further that each 𝑥𝑡 takes values in the state space {𝑥1 , … , 𝑥𝑁 } ⊂ ℝ𝑘 .
Let ℎ ∶ ℝ𝑘 → ℝ be a given function, and suppose that we wish to evaluate


$$
q(x_0) = \mathbb{E} \sum_{t=0}^{\infty} \beta^t h(x_t) \quad \text{given} \quad x_0 = x^j
$$

For example, in the discussion above, ℎ(𝑥𝑡 ) = 𝑥′𝑡 𝐻𝑥𝑡 .


It is legitimate to pass the expectation through the sum, leading to


$$
q(x_0) = \sum_{t=0}^{\infty} \beta^t (P^t h)[j] \tag{12}
$$

Here
• 𝑃 𝑡 is the 𝑡-th power of the transition matrix 𝑃 .
• ℎ is, with some abuse of notation, the vector (ℎ(𝑥1 ), … , ℎ(𝑥𝑁 )).
• (𝑃 𝑡 ℎ)[𝑗] indicates the 𝑗-th element of 𝑃 𝑡 ℎ.
It can be shown that (12) is in fact equal to the 𝑗-th element of the vector (𝐼 − 𝛽𝑃 )−1 ℎ.
This last fact is applied in the calculations below.
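
As a small illustration with a hypothetical two-state chain:

import numpy as np

β = 0.95
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # hypothetical transition matrix
h = np.array([1.0, 2.0])     # (h(x^1), h(x^2))

# q[j] = E[ Σ_t β^t h(x_t) | x_0 = x^j ] = ((I - βP)^{-1} h)[j]
q = np.linalg.solve(np.eye(2) - β * P, h)
print(q)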

13.3.11 Other Variables

We are interested in tracking several other variables besides the ones described above.
To prepare the way for this, we define

$$
p^t_{t+j} = \frac{b_{t+j} - c_{t+j}}{b_t - c_t}
$$

as the scaled Arrow-Debreu time 𝑡 price of a history contingent claim on one unit of con-
sumption at time 𝑡 + 𝑗.
These are prices that would prevail at time 𝑡 if markets were reopened at time 𝑡.
These prices are constituents of the present value of government obligations outstanding at
time 𝑡, which can be expressed as


$$
B_t := \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j p^t_{t+j} (\tau_{t+j} \ell_{t+j} - g_{t+j}) \tag{13}
$$

Using our expression for prices and the Ramsey plan, we can also write 𝐵𝑡 as

$$
B_t = \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j \frac{(b_{t+j} - c_{t+j})(\ell_{t+j} - g_{t+j}) - \ell_{t+j}^2}{b_t - c_t}
$$

This version is more convenient for computation.


Using the equation

$$
p^t_{t+j} = p^t_{t+1} p^{t+1}_{t+j}
$$

it is possible to verify that (13) implies that


$$
B_t = (\tau_t \ell_t - g_t) + \mathbb{E}_t \sum_{j=1}^{\infty} \beta^j p^t_{t+j} (\tau_{t+j} \ell_{t+j} - g_{t+j})
$$

and

$$
B_t = (\tau_t \ell_t - g_t) + \beta \mathbb{E}_t \, p^t_{t+1} B_{t+1} \tag{14}
$$

Define

$$
R_t^{-1} := \mathbb{E}_t \, \beta \, p^t_{t+1} \tag{15}
$$

𝑅𝑡 is the gross 1-period risk-free rate for loans between 𝑡 and 𝑡 + 1.

13.3.12 A Martingale

We now want to study the following two objects, namely,

𝜋𝑡+1 ∶= 𝐵𝑡+1 − 𝑅𝑡 [𝐵𝑡 − (𝜏𝑡 ℓ𝑡 − 𝑔𝑡 )]

and the cumulation of 𝜋𝑡



$$
\Pi_t := \sum_{s=0}^{t} \pi_s
$$

The term 𝜋𝑡+1 is the difference between two quantities:

• 𝐵𝑡+1 , the value of government debt at the start of period 𝑡 + 1.

• $R_t [B_t + g_t - \tau_t \ell_t]$, which is what the government would have owed at the beginning of
period 𝑡 + 1 if it had simply borrowed at the one-period risk-free rate rather than selling
state-contingent securities.
Thus, 𝜋𝑡+1 is the excess payout on the actual portfolio of state-contingent government debt
relative to an alternative portfolio sufficient to finance 𝐵𝑡 + 𝑔𝑡 − 𝜏𝑡 ℓ𝑡 and consisting entirely of
risk-free one-period bonds.
Use expressions (14) and (15) to obtain

$$
\pi_{t+1} = B_{t+1} - \frac{1}{\beta \mathbb{E}_t \, p^t_{t+1}} \left[ \beta \mathbb{E}_t \, p^t_{t+1} B_{t+1} \right]
$$

or

$$
\pi_{t+1} = B_{t+1} - \tilde{\mathbb{E}}_t B_{t+1} \tag{16}
$$

where 𝐸𝑡̃ is the conditional mathematical expectation taken with respect to a one-step tran-
sition density that has been formed by multiplying the original transition density with the
likelihood ratio

𝑡
𝑝𝑡+1
𝑚𝑡𝑡+1 = 𝑡
𝐸𝑡 𝑝𝑡+1

It follows from equation (16) that

$$
\tilde{\mathbb{E}}_t \pi_{t+1} = \tilde{\mathbb{E}}_t B_{t+1} - \tilde{\mathbb{E}}_t B_{t+1} = 0
$$

which asserts that {𝜋𝑡+1 } is a martingale difference sequence under the distorted probability
measure, and that {Π𝑡 } is a martingale under the distorted probability measure.
In the tax-smoothing model of Robert Barro [7], government debt is a random walk.
In the current model, government debt {𝐵𝑡 } is not a random walk, but the excess payoff
{Π𝑡 } on it is.

13.4 Implementation

The following code provides functions for

1. Solving for the Ramsey plan given a specification of the economy.

2. Simulating the dynamics of the major variables.



Description and clarifications are given below

In [3]: # Set up a namedtuple to store data on the model economy


Economy = namedtuple('economy',
('β', # Discount factor
'Sg', # Govt spending selector matrix
'Sd', # Exogenous endowment selector matrix
'Sb', # Utility parameter selector matrix
'Ss', # Coupon payments selector matrix
'discrete', # Discrete or continuous -- boolean
'proc')) # Stochastic process parameters

# Set up a namedtuple to store return values for compute_paths()


Path = namedtuple('path',
('g', # Govt spending
'd', # Endowment
'b', # Utility shift parameter
's', # Coupon payment on existing debt
'c', # Consumption
'l', # Labor
'p', # Price
'τ', # Tax rate
'rvn', # Revenue
'B', # Govt debt
'R', # Risk-free gross return
'π', # One-period risk-free interest rate
'Π', # Cumulative rate of return, adjusted
'ξ')) # Adjustment factor for Π

def compute_paths(T, econ):


"""
Compute simulated time paths for exogenous and endogenous variables.

Parameters
===========
T: int
Length of the simulation

econ: a namedtuple of type 'Economy', containing


β - Discount factor
Sg - Govt spending selector matrix
Sd - Exogenous endowment selector matrix
Sb - Utility parameter selector matrix
Ss - Coupon payments selector matrix
discrete - Discrete exogenous process (True or False)
proc - Stochastic process parameters

Returns
========
path: a namedtuple of type 'Path', containing
g - Govt spending
d - Endowment
b - Utility shift parameter
s - Coupon payment on existing debt
c - Consumption
l - Labor
p - Price

τ - Tax rate
rvn - Revenue
B - Govt debt
R - Risk-free gross return
π - One-period risk-free interest rate
Π - Cumulative rate of return, adjusted
ξ - Adjustment factor for Π

The corresponding values are flat numpy ndarrays.

"""

# Simplify names
β, Sg, Sd, Sb, Ss = econ.β, econ.Sg, econ.Sd, econ.Sb, econ.Ss

if econ.discrete:
P, x_vals = econ.proc
else:
A, C = econ.proc

# Simulate the exogenous process x


if econ.discrete:
state = mc_sample_path(P, init=0, sample_size=T)
x = x_vals[:, state]
else:
# Generate an initial condition x0 satisfying x0 = A x0
nx, nx = A.shape
x0 = nullspace((eye(nx) - A))
x0 = -x0 if (x0[nx-1] < 0) else x0
x0 = x0 / x0[nx-1]

# Generate a time series x of length T starting from x0


nx, nw = C.shape
x = zeros((nx, T))
w = randn(nw, T)
x[:, 0] = x0.T
for t in range(1, T):
x[:, t] = A @ x[:, t-1] + C @ w[:, t]

# Compute exogenous variable sequences


g, d, b, s = ((S @ x).flatten() for S in (Sg, Sd, Sb, Ss))

# Solve for Lagrange multiplier in the govt budget constraint


# In fact we solve for ν = lambda / (1 + 2*lambda). Here ν is the
# solution to a quadratic equation a(ν**2 - ν) + b = 0 where
# a and b are expected discounted sums of quadratic forms of the state.
Sm = Sb - Sd - Ss
# Compute a and b
if econ.discrete:
ns = P.shape[0]
F = scipy.linalg.inv(eye(ns) - β * P)
a0 = 0.5 * (F @ (x_vals.T @ Sm.T)**2)[0]
H = ((Sb - Sd + Sg) @ x_vals) * ((Sg - Ss) @ x_vals)
b0 = 0.5 * (F @ H.T)[0]
a0, b0 = float(a0), float(b0)
else:
H = Sm.T @ Sm
a0 = 0.5 * var_quadratic_sum(A, C, H, β, x0)

H = (Sb - Sd + Sg).T @ (Sg + Ss)


b0 = 0.5 * var_quadratic_sum(A, C, H, β, x0)

# Test that ν has a real solution before assigning


warning_msg = """
Hint: you probably set government spending too {}. Elect a {}
Congress and start over.
"""
disc = a0**2 - 4 * a0 * b0
if disc >= 0:
ν = 0.5 * (a0 - sqrt(disc)) / a0
else:
print("There is no Ramsey equilibrium for these parameters.")
print(warning_msg.format('high', 'Republican'))
sys.exit(0)

# Test that the Lagrange multiplier has the right sign


if ν * (0.5 - ν) < 0:
print("Negative multiplier on the government budget constraint.")
print(warning_msg.format('low', 'Democratic'))
sys.exit(0)

# Solve for the allocation given ν and x


Sc = 0.5 * (Sb + Sd - Sg - ν * Sm)
Sl = 0.5 * (Sb - Sd + Sg - ν * Sm)
c = (Sc @ x).flatten()
l = (Sl @ x).flatten()
p = ((Sb - Sc) @ x).flatten() # Price without normalization
τ = 1 - l / (b - c)
rvn = l * τ

# Compute remaining variables


if econ.discrete:
H = ((Sb - Sc) @ x_vals) * ((Sl - Sg) @ x_vals) - (Sl @ x_vals)**2
temp = (F @ H.T).flatten()
B = temp[state] / p
H = (P[state, :] @ x_vals.T @ (Sb - Sc).T).flatten()
R = p / (β * H)
temp = ((P[state, :] @ x_vals.T @ (Sb - Sc).T)).flatten()
ξ = p[1:] / temp[:T-1]
else:
H = Sl.T @ Sl - (Sb - Sc).T @ (Sl - Sg)
L = np.empty(T)
for t in range(T):
L[t] = var_quadratic_sum(A, C, H, β, x[:, t])
B = L / p
Rinv = (β * ((Sb - Sc) @ A @ x)).flatten() / p
R = 1 / Rinv
AF1 = (Sb - Sc) @ x[:, 1:]
AF2 = (Sb - Sc) @ A @ x[:, :T-1]
ξ = AF1 / AF2
ξ = ξ.flatten()

π = B[1:] - R[:T-1] * B[:T-1] - rvn[:T-1] + g[:T-1]


Π = cumsum(π * ξ)

# Prepare return values


path = Path(g=g, d=d, b=b, s=s, c=c, l=l, p=p,

τ=τ, rvn=rvn, B=B, R=R, π=π, Π=Π, ξ=ξ)

return path

def gen_fig_1(path):
"""
The parameter is the path namedtuple returned by compute_paths(). See
the docstring of that function for details.
"""

T = len(path.c)

# Prepare axes
num_rows, num_cols = 2, 2
fig, axes = plt.subplots(num_rows, num_cols, figsize=(14, 10))
plt.subplots_adjust(hspace=0.4)
for i in range(num_rows):
for j in range(num_cols):
axes[i, j].grid()
axes[i, j].set_xlabel('Time')
bbox = (0., 1.02, 1., .102)
legend_args = {'bbox_to_anchor': bbox, 'loc': 3, 'mode': 'expand'}
p_args = {'lw': 2, 'alpha': 0.7}

# Plot consumption, govt expenditure and revenue


ax = axes[0, 0]
ax.plot(path.rvn, label=r'$\tau_t \ell_t$', **p_args)
ax.plot(path.g, label='$g_t$', **p_args)
ax.plot(path.c, label='$c_t$', **p_args)
ax.legend(ncol=3, **legend_args)

# Plot govt expenditure and debt


ax = axes[0, 1]
ax.plot(list(range(1, T+1)), path.rvn, label=r'$\tau_t \ell_t$', **p_args)
ax.plot(list(range(1, T+1)), path.g, label='$g_t$', **p_args)
ax.plot(list(range(1, T)), path.B[1:T], label='$B_{t+1}$', **p_args)
ax.legend(ncol=3, **legend_args)

# Plot risk-free return


ax = axes[1, 0]
ax.plot(list(range(1, T+1)), path.R - 1, label='$R_t - 1$', **p_args)
ax.legend(ncol=1, **legend_args)

# Plot revenue, expenditure and risk free rate


ax = axes[1, 1]
ax.plot(list(range(1, T+1)), path.rvn, label=r'$\tau_t \ell_t$', **p_args)
ax.plot(list(range(1, T+1)), path.g, label='$g_t$', **p_args)
ax.plot(list(range(1, T)), path.π, label=r'$\pi_{t+1}$', **p_args)
ax.legend(ncol=3, **legend_args)

plt.show()

def gen_fig_2(path):

"""
The parameter is the path namedtuple returned by compute_paths(). See
the docstring of that function for details.
"""

T = len(path.c)

# Prepare axes
num_rows, num_cols = 2, 1
fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 10))
plt.subplots_adjust(hspace=0.5)
bbox = (0., 1.02, 1., .102)
legend_args = {'bbox_to_anchor': bbox, 'loc': 3, 'mode': 'expand'}
p_args = {'lw': 2, 'alpha': 0.7}

# Plot adjustment factor


ax = axes[0]
ax.plot(list(range(2, T+1)), path.ξ, label=r'$\xi_t$', **p_args)
ax.grid()
ax.set_xlabel('Time')
ax.legend(ncol=1, **legend_args)

# Plot adjusted cumulative return


ax = axes[1]
ax.plot(list(range(2, T+1)), path.Π, label=r'$\Pi_t$', **p_args)
ax.grid()
ax.set_xlabel('Time')
ax.legend(ncol=1, **legend_args)

plt.show()

13.4.1 Comments on the Code

The function var_quadratic_sum imported from quadsums is for computing the value of
(11) when the exogenous process {𝑥𝑡 } is of the VAR type described above.
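As a rough sketch of what such a function computes (illustrative only, not the quadsums implementation), the discounted quadratic sum q(x₀) = 𝔼 ∑_t β^t x_t' H x_t satisfies q(x) = x'Qx + constant, where Q solves Q = H + βA'QA:

import numpy as np

def var_quadratic_sum_sketch(A, C, H, β, x0, n_iter=2000):
    # Illustrative only: iterate on Q = H + β A'QA (assuming the
    # spectral radius condition for convergence holds), then add the
    # constant β tr(C'QC) / (1 - β) contributed by the shocks
    Q = np.zeros_like(H, dtype=float)
    for _ in range(n_iter):
        Q = H + β * (A.T @ Q @ A)
    constant = β * np.trace(C.T @ Q @ C) / (1 - β)
    return x0 @ Q @ x0 + constant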
Below the definition of the function, you will see definitions of two namedtuple objects,
Economy and Path.
The first is used to collect all the parameters and primitives of a given LQ economy, while the
second collects output of the computations.
In Python, a namedtuple is a popular data type from the collections module of the
standard library that replicates the functionality of a tuple, but also allows you to assign a
name to each tuple element.
These elements can then be referenced via dotted attribute notation — see for example the use of path in the functions gen_fig_1() and gen_fig_2().
The benefits of using namedtuples:
• Keeps content organized by meaning.
• Helps reduce the number of global variables.
Other than that, our code is long but relatively straightforward.
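As a quick, hypothetical illustration of the namedtuple pattern (the Bond type below is not part of the lecture code):

from collections import namedtuple

# A two-field namedtuple, just to show the syntax
Bond = namedtuple('Bond', ['price', 'maturity'])
b = Bond(price=0.95, maturity=1)
print(b.price)   # dotted attribute access, as with `path` above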

13.5 Examples

Let’s look at two examples of usage.

13.5.1 The Continuous Case

Our first example adopts the VAR specification described above.


Regarding the primitives, we set
• 𝛽 = 1/1.05
• 𝑏𝑡 = 2.135 and 𝑠𝑡 = 𝑑𝑡 = 0 for all 𝑡
Government spending evolves according to

𝑔𝑡+1 − 𝜇𝑔 = 𝜌(𝑔𝑡 − 𝜇𝑔 ) + 𝐶𝑔 𝑤𝑔,𝑡+1

with 𝜌 = 0.7, 𝜇𝑔 = 0.35 and 𝐶𝑔 = 𝜇𝑔 √(1 − 𝜌²)/10.


Here’s the code

In [4]: # == Parameters == #
β = 1 / 1.05
ρ, mg = .7, .35
A = eye(2)
A[0, :] = ρ, mg * (1-ρ)
C = np.zeros((2, 1))
C[0, 0] = np.sqrt(1 - ρ**2) * mg / 10
Sg = np.array((1, 0)).reshape(1, 2)
Sd = np.array((0, 0)).reshape(1, 2)
Sb = np.array((0, 2.135)).reshape(1, 2)
Ss = np.array((0, 0)).reshape(1, 2)

economy = Economy(β=β, Sg=Sg, Sd=Sd, Sb=Sb, Ss=Ss,
                  discrete=False, proc=(A, C))

T = 50
path = compute_paths(T, economy)
gen_fig_1(path)

The legends on the figures indicate the variables being tracked.


Most obvious from the figure is tax smoothing in the sense that tax revenue is much less variable than government expenditure.

In [5]: gen_fig_2(path)

See the original manuscript for comments and interpretation.

13.5.2 The Discrete Case

Our second example adopts a discrete Markov specification for the exogenous process

In [6]: # == Parameters == #
β = 1 / 1.05
P = np.array([[0.8, 0.2, 0.0],
[0.0, 0.5, 0.5],
[0.0, 0.0, 1.0]])

# Possible states of the world


# Each column is a state of the world. The rows are [g d b s 1]
x_vals = np.array([[0.5, 0.5, 0.25],
[0.0, 0.0, 0.0],
[2.2, 2.2, 2.2],
[0.0, 0.0, 0.0],
[1.0, 1.0, 1.0]])

Sg = np.array((1, 0, 0, 0, 0)).reshape(1, 5)
Sd = np.array((0, 1, 0, 0, 0)).reshape(1, 5)
Sb = np.array((0, 0, 1, 0, 0)).reshape(1, 5)
Ss = np.array((0, 0, 0, 1, 0)).reshape(1, 5)

economy = Economy(β=β, Sg=Sg, Sd=Sd, Sb=Sb, Ss=Ss,
                  discrete=True, proc=(P, x_vals))

T = 15
path = compute_paths(T, economy)
gen_fig_1(path)

The call gen_fig_2(path) generates

In [7]: gen_fig_2(path)

See the original manuscript for comments and interpretation.

13.6 Exercises

13.6.1 Exercise 1

Modify the VAR example given above, setting

𝑔𝑡+1 − 𝜇𝑔 = 𝜌(𝑔𝑡−3 − 𝜇𝑔 ) + 𝐶𝑔 𝑤𝑔,𝑡+1

with 𝜌 = 0.95 and 𝐶𝑔 = 0.7√(1 − 𝜌²).


Produce the corresponding figures.
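Hint: since government spending now responds to 𝑔𝑡−3, the state must be augmented with lags of 𝑔. The solution below stacks 𝑥𝑡 = (𝑔𝑡 , 𝑔𝑡−1 , 𝑔𝑡−2 , 𝑔𝑡−3 , 1)′ so that 𝜌 loads on the fourth element of the state.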

13.7 Solutions

13.7.1 Exercise 1

In [8]: # == Parameters == #
β = 1 / 1.05
ρ, mg = .95, .35
A = np.array([[0, 0, 0, ρ, mg*(1-ρ)],
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 1]])
C = np.zeros((5, 1))
C[0, 0] = np.sqrt(1 - ρ**2) * mg / 8
Sg = np.array((1, 0, 0, 0, 0)).reshape(1, 5)
Sd = np.array((0, 0, 0, 0, 0)).reshape(1, 5)
# Chosen st. (Sc + Sg) * x0 = 1
Sb = np.array((0, 0, 0, 0, 2.135)).reshape(1, 5)
Ss = np.array((0, 0, 0, 0, 0)).reshape(1, 5)

economy = Economy(β=β, Sg=Sg, Sd=Sd, Sb=Sb,
                  Ss=Ss, discrete=False, proc=(A, C))

T = 50
path = compute_paths(T, economy)

gen_fig_1(path)

In [9]: gen_fig_2(path)
Part III

Multiple Agent Models

Chapter 14

Robust Markov Perfect Equilibrium

14.1 Contents

• Overview 14.2
• Linear Markov Perfect Equilibria with Robust Agents 14.3
• Application 14.4
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

14.2 Overview

This lecture describes a Markov perfect equilibrium with robust agents.


We focus on special settings with
• two players
• quadratic payoff functions
• linear transition rules for the state vector
These specifications simplify calculations and allow us to give a simple example that illustrates basic forces.
This lecture is based on ideas described in chapter 15 of [26] and in Markov perfect equilibrium and Robustness.
Let’s start with some standard imports:

In [2]: import numpy as np


import quantecon as qe
from scipy.linalg import solve
import matplotlib.pyplot as plt
%matplotlib inline

14.2.1 Basic Setup

Decisions of two agents affect the motion of a state vector that appears as an argument of
payoff functions of both agents.


As described in Markov perfect equilibrium, when decision-makers have no concerns about the robustness of their decision rules to misspecifications of the state dynamics, a Markov perfect equilibrium can be computed via backward recursion on two sets of equations

• a pair of Bellman equations, one for each agent.

• a pair of equations that express linear decision rules for each agent as functions of that agent’s continuation value function as well as parameters of preferences and state transition matrices.
This lecture shows how a similar equilibrium concept and similar computational procedures
apply when we impute concerns about robustness to both decision-makers.
A Markov perfect equilibrium with robust agents will be characterized by

• a pair of Bellman equations, one for each agent.

• a pair of equations that express linear decision rules for each agent as functions of that agent’s continuation value function as well as parameters of preferences and state transition matrices.
• a pair of equations that express linear decision rules for worst-case shocks for each agent as functions of that agent’s continuation value function as well as parameters of preferences and state transition matrices.
Below, we’ll construct a robust firms version of the classic duopoly model with adjustment
costs analyzed in Markov perfect equilibrium.

14.3 Linear Markov Perfect Equilibria with Robust Agents

As we saw in Markov perfect equilibrium, the study of Markov perfect equilibria in dynamic
games with two players leads us to an interrelated pair of Bellman equations.
In linear quadratic dynamic games, these “stacked Bellman equations” become “stacked Riccati equations” with a tractable mathematical structure.

14.3.1 Modified Coupled Linear Regulator Problems

We consider a general linear quadratic regulator game with two players, each of whom fears
model misspecifications.
We often call the players agents.
The agents share a common baseline model for the transition dynamics of the state vector

• this is a counterpart of a ‘rational expectations’ assumption of shared beliefs

But now one or more agents doubt that the baseline model is correctly specified.
The agents express the possibility that their baseline specification is incorrect by adding a
contribution 𝐶𝑣𝑖𝑡 to the time 𝑡 transition law for the state

• 𝐶 is the usual volatility matrix that appears in stochastic versions of optimal linear regulator problems.

• 𝑣𝑖𝑡 is a possibly history-dependent vector of distortions to the dynamics of the state that agent 𝑖 uses to represent misspecification of the original model.
For convenience, we’ll start with a finite horizon formulation, where 𝑡0 is the initial date and
𝑡1 is the common terminal date.
Player 𝑖 takes a sequence {𝑢−𝑖𝑡 } as given and chooses a sequence {𝑢𝑖𝑡 } to minimize and {𝑣𝑖𝑡 }
to maximize

$$\sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0} \left\{ x_t' R_i x_t + u_{it}' Q_i u_{it} + u_{-it}' S_i u_{-it} + 2 x_t' W_i u_{it} + 2 u_{-it}' M_i u_{it} - \theta_i v_{it}' v_{it} \right\} \tag{1}$$

while thinking that the state evolves according to

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵1 𝑢1𝑡 + 𝐵2 𝑢2𝑡 + 𝐶𝑣𝑖𝑡 (2)

Here
• 𝑥𝑡 is an 𝑛 × 1 state vector, 𝑢𝑖𝑡 is a 𝑘𝑖 × 1 vector of controls for player 𝑖, and
• 𝑣𝑖𝑡 is an ℎ × 1 vector of distortions to the state dynamics that concern player 𝑖
• 𝑅𝑖 is 𝑛 × 𝑛
• 𝑆𝑖 is 𝑘−𝑖 × 𝑘−𝑖
• 𝑄𝑖 is 𝑘𝑖 × 𝑘𝑖
• 𝑊𝑖 is 𝑛 × 𝑘𝑖
• 𝑀𝑖 is 𝑘−𝑖 × 𝑘𝑖
• 𝐴 is 𝑛 × 𝑛
• 𝐵𝑖 is 𝑛 × 𝑘𝑖
• 𝐶 is 𝑛 × ℎ
• 𝜃𝑖 ∈ [𝜃𝑖 , +∞] is a scalar multiplier parameter of player 𝑖
If 𝜃𝑖 = +∞, player 𝑖 completely trusts the baseline model.
If 𝜃𝑖 < +∞, player 𝑖 suspects that some other unspecified model actually governs the transition dynamics.

The term 𝜃𝑖 𝑣𝑖𝑡′ 𝑣𝑖𝑡 is a time 𝑡 contribution to an entropy penalty that an (imaginary) loss-maximizing agent inside agent 𝑖’s mind charges for distorting the law of motion in a way that harms agent 𝑖.

• the imaginary loss-maximizing agent helps the loss-minimizing agent by helping him construct bounds on the behavior of his decision rule over a large set of alternative models of state transition dynamics.

14.3.2 Computing Equilibrium

We formulate a linear robust Markov perfect equilibrium as follows.


Player 𝑖 employs linear decision rules 𝑢𝑖𝑡 = −𝐹𝑖𝑡 𝑥𝑡 , where 𝐹𝑖𝑡 is a 𝑘𝑖 × 𝑛 matrix.
Player 𝑖’s malevolent alter ego employs decision rules 𝑣𝑖𝑡 = 𝐾𝑖𝑡 𝑥𝑡 where 𝐾𝑖𝑡 is an ℎ × 𝑛 matrix.
A robust Markov perfect equilibrium is a pair of sequences {𝐹1𝑡 , 𝐹2𝑡 } and a pair of sequences
{𝐾1𝑡 , 𝐾2𝑡 } over 𝑡 = 𝑡0 , … , 𝑡1 − 1 that satisfy

• {𝐹1𝑡 , 𝐾1𝑡 } solves player 1’s robust decision problem, taking {𝐹2𝑡 } as given, and
• {𝐹2𝑡 , 𝐾2𝑡 } solves player 2’s robust decision problem, taking {𝐹1𝑡 } as given.
If we substitute 𝑢2𝑡 = −𝐹2𝑡 𝑥𝑡 into (1) and (2), then player 1’s problem becomes
minimization-maximization of

$$\sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0} \left\{ x_t' \Pi_{1t} x_t + u_{1t}' Q_1 u_{1t} + 2 u_{1t}' \Gamma_{1t} x_t - \theta_1 v_{1t}' v_{1t} \right\} \tag{3}$$

subject to

𝑥𝑡+1 = Λ1𝑡 𝑥𝑡 + 𝐵1 𝑢1𝑡 + 𝐶𝑣1𝑡 (4)

where
• Λ𝑖𝑡 ∶= 𝐴 − 𝐵−𝑖 𝐹−𝑖𝑡
• Π𝑖𝑡 ∶= 𝑅𝑖 + 𝐹′−𝑖𝑡 𝑆𝑖 𝐹−𝑖𝑡
• Γ𝑖𝑡 ∶= 𝑊𝑖′ − 𝑀𝑖′ 𝐹−𝑖𝑡

This is an LQ robust dynamic programming problem of the type studied in the Robustness
lecture, which can be solved by working backward.
Maximization with respect to distortion 𝑣1𝑡 leads to the following version of the 𝒟 operator
from the Robustness lecture, namely

𝒟1 (𝑃 ) ∶= 𝑃 + 𝑃 𝐶(𝜃1 𝐼 − 𝐶 ′ 𝑃 𝐶)−1 𝐶 ′ 𝑃 (5)
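In code, applying this operator is a one-liner. The sketch below mirrors the inline computation used in nnash_robust further down (the function name D is ours):

import numpy as np

def D(P, C, θ):
    # 𝒟(P) = P + PC(θI - C'PC)^{-1}C'P, as in (5)
    I = np.eye(C.shape[1])
    return P + P @ C @ np.linalg.solve(θ * I - C.T @ P @ C, I) @ C.T @ P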

The matrix 𝐹1𝑡 in the policy rule 𝑢1𝑡 = −𝐹1𝑡 𝑥𝑡 that solves agent 1’s problem satisfies

𝐹1𝑡 = (𝑄1 + 𝛽𝐵1′ 𝒟1 (𝑃1𝑡+1 )𝐵1 )−1 (𝛽𝐵1′ 𝒟1 (𝑃1𝑡+1 )Λ1𝑡 + Γ1𝑡 ) (6)

where 𝑃1𝑡 solves the matrix Riccati difference equation

𝑃1𝑡 = Π1𝑡 − (𝛽𝐵1′ 𝒟1 (𝑃1𝑡+1 )Λ1𝑡 + Γ1𝑡 )′ (𝑄1 + 𝛽𝐵1′ 𝒟1 (𝑃1𝑡+1 )𝐵1 )−1 (𝛽𝐵1′ 𝒟1 (𝑃1𝑡+1 )Λ1𝑡 + Γ1𝑡 ) + 𝛽Λ′1𝑡 𝒟1 (𝑃1𝑡+1 )Λ1𝑡    (7)
Similarly, the policy that solves player 2’s problem is

𝐹2𝑡 = (𝑄2 + 𝛽𝐵2′ 𝒟2 (𝑃2𝑡+1 )𝐵2 )−1 (𝛽𝐵2′ 𝒟2 (𝑃2𝑡+1 )Λ2𝑡 + Γ2𝑡 ) (8)

where 𝑃2𝑡 solves

𝑃2𝑡 = Π2𝑡 − (𝛽𝐵2′ 𝒟2 (𝑃2𝑡+1 )Λ2𝑡 + Γ2𝑡 )′ (𝑄2 + 𝛽𝐵2′ 𝒟2 (𝑃2𝑡+1 )𝐵2 )−1 (𝛽𝐵2′ 𝒟2 (𝑃2𝑡+1 )Λ2𝑡 + Γ2𝑡 ) + 𝛽Λ′2𝑡 𝒟2 (𝑃2𝑡+1 )Λ2𝑡    (9)
Here in all cases 𝑡 = 𝑡0 , … , 𝑡1 − 1 and the terminal conditions are 𝑃𝑖𝑡1 = 0.
The solution procedure is to use equations (6), (7), (8), and (9), and “work backwards” from
time 𝑡1 − 1.
Since we’re working backwards, 𝑃1𝑡+1 and 𝑃2𝑡+1 are taken as given at each stage.

Moreover, since
• some terms on the right-hand side of (6) contain 𝐹2𝑡
• some terms on the right-hand side of (8) contain 𝐹1𝑡
we need to solve these 𝑘1 + 𝑘2 equations simultaneously.

14.3.3 Key Insight

As in Markov perfect equilibrium, a key insight here is that equations (6) and (8) are linear
in 𝐹1𝑡 and 𝐹2𝑡 .
After these equations have been solved, we can take 𝐹𝑖𝑡 and solve for 𝑃𝑖𝑡 in (7) and (9).
Notice how 𝑗’s control law 𝐹𝑗𝑡 is a function of {𝐹𝑖𝑠 , 𝑠 ≥ 𝑡, 𝑖 ≠ 𝑗}.
Thus, agent 𝑖’s choice of {𝐹𝑖𝑡 ; 𝑡 = 𝑡0 , … , 𝑡1 − 1} influences agent 𝑗’s choice of control laws.
However, in the Markov perfect equilibrium of this game, each agent is assumed to ignore the
influence that his choice exerts on the other agent’s choice.
After these equations have been solved, we can also deduce associated sequences of worst-case
shocks.

14.3.4 Worst-case Shocks

For agent 𝑖 the maximizing or worst-case shock 𝑣𝑖𝑡 is

𝑣𝑖𝑡 = 𝐾𝑖𝑡 𝑥𝑡

where

𝐾𝑖𝑡 = 𝜃𝑖−1 (𝐼 − 𝜃𝑖−1 𝐶 ′ 𝑃𝑖,𝑡+1 𝐶)−1 𝐶 ′ 𝑃𝑖,𝑡+1 (𝐴 − 𝐵1 𝐹1𝑡 − 𝐵2 𝐹2𝑡 )
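Since 𝜃𝑖−1 (𝐼 − 𝜃𝑖−1 𝐶′𝑃𝐶)−1 = (𝜃𝑖 𝐼 − 𝐶′𝑃𝐶)−1, the worst-case rule can be computed with a single linear solve. Here is a sketch (the name worst_case_K and the argument A_cl, standing for 𝐴 − 𝐵1 𝐹1𝑡 − 𝐵2 𝐹2𝑡, are ours):

import numpy as np

def worst_case_K(P, C, A_cl, θ):
    # K = (θ I - C'PC)^{-1} C'P A_cl
    I = np.eye(C.shape[1])
    return np.linalg.solve(θ * I - C.T @ P @ C, C.T @ P @ A_cl)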

14.3.5 Infinite Horizon

We often want to compute the solutions of such games for infinite horizons, in the hope that
the decision rules 𝐹𝑖𝑡 settle down to be time-invariant as 𝑡1 → +∞.
In practice, we usually fix 𝑡1 and compute the equilibrium of an infinite horizon game by driving 𝑡0 → −∞.
This is the approach we adopt in the next section.

14.3.6 Implementation

We use the function nnash_robust to compute a Markov perfect equilibrium of the infinite horizon linear quadratic dynamic game with robust planners in the manner described above.

14.4 Application

14.4.1 A Duopoly Model

Without concerns for robustness, the model is identical to the duopoly model from the
Markov perfect equilibrium lecture.
To begin, we briefly review the structure of that model.
Two firms are the only producers of a good the demand for which is governed by a linear inverse demand function

𝑝 = 𝑎0 − 𝑎1 (𝑞1 + 𝑞2 ) (10)

Here 𝑝 = 𝑝𝑡 is the price of the good, 𝑞𝑖 = 𝑞𝑖𝑡 is the output of firm 𝑖 = 1, 2 at time 𝑡 and
𝑎0 > 0, 𝑎1 > 0.
In (10) and what follows,
• the time subscript is suppressed when possible to simplify notation
• 𝑥̂ denotes a next period value of variable 𝑥
Each firm recognizes that its output affects total output and therefore the market price.
The one-period payoff function of firm 𝑖 is price times quantity minus adjustment costs:

𝜋𝑖 = 𝑝𝑞𝑖 − 𝛾(𝑞𝑖̂ − 𝑞𝑖 )2 , 𝛾 > 0, (11)

Substituting the inverse demand curve (10) into (11) lets us express the one-period payoff as

𝜋𝑖 (𝑞𝑖 , 𝑞−𝑖 , 𝑞𝑖̂ ) = 𝑎0 𝑞𝑖 − 𝑎1 𝑞𝑖2 − 𝑎1 𝑞𝑖 𝑞−𝑖 − 𝛾(𝑞𝑖̂ − 𝑞𝑖 )2 , (12)

where 𝑞−𝑖 denotes the output of the firm other than 𝑖.



The objective of the firm is to maximize $\sum_{t=0}^{\infty} \beta^t \pi_{it}$.
Firm 𝑖 chooses a decision rule that sets next period quantity 𝑞𝑖̂ as a function 𝑓𝑖 of the current
state (𝑞𝑖 , 𝑞−𝑖 ).
This completes our review of the duopoly model without concerns for robustness.
Now we activate robustness concerns of both firms.
To map a robust version of the duopoly model into coupled robust linear-quadratic dynamic
programming problems, we again define the state and controls as

$$x_t := \begin{bmatrix} 1 \\ q_{1t} \\ q_{2t} \end{bmatrix} \quad \text{and} \quad u_{it} := q_{i,t+1} - q_{it}, \quad i = 1, 2$$

If we write

𝑥′𝑡 𝑅𝑖 𝑥𝑡 + 𝑢′𝑖𝑡 𝑄𝑖 𝑢𝑖𝑡

where 𝑄1 = 𝑄2 = 𝛾,
$$R_1 := \begin{bmatrix} 0 & -\frac{a_0}{2} & 0 \\ -\frac{a_0}{2} & a_1 & \frac{a_1}{2} \\ 0 & \frac{a_1}{2} & 0 \end{bmatrix} \quad \text{and} \quad R_2 := \begin{bmatrix} 0 & 0 & -\frac{a_0}{2} \\ 0 & 0 & \frac{a_1}{2} \\ -\frac{a_0}{2} & \frac{a_1}{2} & a_1 \end{bmatrix}$$
then we recover the one-period payoffs (11) for the two firms in the duopoly model.
The law of motion for the state 𝑥𝑡 is 𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵1 𝑢1𝑡 + 𝐵2 𝑢2𝑡 where

$$A := \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad B_1 := \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad B_2 := \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$

A robust decision rule of firm 𝑖 will take the form 𝑢𝑖𝑡 = −𝐹𝑖 𝑥𝑡 , inducing the following closed-
loop system for the evolution of 𝑥 in the Markov perfect equilibrium:

𝑥𝑡+1 = (𝐴 − 𝐵1 𝐹1 − 𝐵2 𝐹2 )𝑥𝑡    (13)
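Iterating (13) forward from an initial state is straightforward. Here is a sketch (the function name simulate_closed_loop is ours; the lecture's own simulation of this law appears in the code below):

import numpy as np

def simulate_closed_loop(A, B1, B2, F1, F2, x0, T):
    # Iterate x_{t+1} = (A - B1 F1 - B2 F2) x_t forward from x0
    AF = A - B1 @ F1 - B2 @ F2
    x = np.empty((len(x0), T))
    x[:, 0] = x0
    for t in range(T - 1):
        x[:, t + 1] = AF @ x[:, t]
    return x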

14.4.2 Parameters and Solution

Consider the duopoly model with parameter values of:


• 𝑎0 = 10
• 𝑎1 = 2
• 𝛽 = 0.96
• 𝛾 = 12
From these, we computed the infinite horizon MPE without robustness using the code

In [3]: import numpy as np


import quantecon as qe

# Parameters
a0 = 10.0
a1 = 2.0
β = 0.96
γ = 12.0

# In LQ form
A = np.eye(3)
B1 = np.array([[0.], [1.], [0.]])
B2 = np.array([[0.], [0.], [1.]])

R1 = [[ 0., -a0 / 2, 0.],


[-a0 / 2., a1, a1 / 2.],
[ 0, a1 / 2., 0.]]

R2 = [[ 0., 0., -a0 / 2],


[ 0., 0., a1 / 2.],
[-a0 / 2, a1 / 2., a1]]

Q1 = Q2 = γ
S1 = S2 = W1 = W2 = M1 = M2 = 0.0

# Solve using QE's nnash function


F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1,
Q2, S1, S2, W1, W2, M1,
M2, beta=β)

# Display policies
print("Computed policies for firm 1 and firm 2:\n")
print(f"F1 = {F1}")
print(f"F2 = {F2}")
print("\n")

Computed policies for firm 1 and firm 2:

F1 = [[-0.66846615 0.29512482 0.07584666]]


F2 = [[-0.66846615 0.07584666 0.29512482]]

Markov Perfect Equilibrium with Robustness

We add robustness concerns to the Markov Perfect Equilibrium model by extending the function qe.nnash into a robustness version by adding the maximization operator 𝒟(𝑃) into the backward induction.
The MPE with robustness function is nnash_robust.
The function’s code is as follows

In [4]: def nnash_robust(A, C, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2,
θ1, θ2, beta=1.0, tol=1e-8, max_iter=1000):

"""
Compute the limit of a Nash linear quadratic dynamic game with
robustness concern.

In this problem, player i minimizes


.. math::
\sum_{t=0}^{\infty}
\left\{
x_t' r_i x_t + 2 x_t' w_i
u_{it} +u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}'
m_i u_{it}
\right\}
subject to the law of motion
.. math::
x_{it+1} = A x_t + b_1 u_{1t} + b_2 u_{2t} + C w_{it+1}
and a perceived control law :math:`u_j(t) = - f_j x_t` for the other
player.

The player i also concerns about the model misspecification,


and maximizes
.. math::
\sum_{t=0}^{\infty}
\left\{
\beta^{t+1} \theta_{i} w_{it+1}'w_{it+1}
\right\}

The solution computed in this routine is the :math:`f_i` and


:math:`P_i` of the associated double optimal linear regulator
problem.

Parameters
----------
A : scalar(float) or array_like(float)
Corresponds to the MPE equations, should be of size (n, n)
C : scalar(float) or array_like(float)
As above, size (n, c), c is the size of w
B1 : scalar(float) or array_like(float)
As above, size (n, k_1)
B2 : scalar(float) or array_like(float)
As above, size (n, k_2)
R1 : scalar(float) or array_like(float)
As above, size (n, n)
R2 : scalar(float) or array_like(float)
As above, size (n, n)
Q1 : scalar(float) or array_like(float)
As above, size (k_1, k_1)
Q2 : scalar(float) or array_like(float)
As above, size (k_2, k_2)
S1 : scalar(float) or array_like(float)
As above, size (k_1, k_1)
S2 : scalar(float) or array_like(float)
As above, size (k_2, k_2)
W1 : scalar(float) or array_like(float)
As above, size (n, k_1)
W2 : scalar(float) or array_like(float)
As above, size (n, k_2)
M1 : scalar(float) or array_like(float)
As above, size (k_2, k_1)
M2 : scalar(float) or array_like(float)
As above, size (k_1, k_2)
θ1 : scalar(float)
Robustness parameter of player 1
θ2 : scalar(float)
Robustness parameter of player 2
beta : scalar(float), optional(default=1.0)
Discount factor
tol : scalar(float), optional(default=1e-8)
This is the tolerance level for convergence
max_iter : scalar(int), optional(default=1000)
This is the maximum number of iterations allowed

Returns
-------
F1 : array_like, dtype=float, shape=(k_1, n)
Feedback law for agent 1
F2 : array_like, dtype=float, shape=(k_2, n)
Feedback law for agent 2
P1 : array_like, dtype=float, shape=(n, n)
The steady-state solution to the associated discrete matrix
Riccati equation for agent 1
P2 : array_like, dtype=float, shape=(n, n)
The steady-state solution to the associated discrete matrix
Riccati equation for agent 2

"""

# Unload parameters and make sure everything is a matrix


params = A, C, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2
params = map(np.asmatrix, params)
A, C, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2 = params

# Multiply A, B1, B2 by sqrt(beta) to enforce discounting
A, B1, B2 = [np.sqrt(beta) * x for x in (A, B1, B2)]

# Initial values
n = A.shape[0]
k_1 = B1.shape[1]
k_2 = B2.shape[1]

v1 = np.eye(k_1)
v2 = np.eye(k_2)
P1 = np.eye(n) * 1e-5
P2 = np.eye(n) * 1e-5
F1 = np.random.randn(k_1, n)
F2 = np.random.randn(k_2, n)

for it in range(max_iter):
# Update
F10 = F1
F20 = F2

I = np.eye(C.shape[1])

# D1(P1)
# Note: INV1 may not be solved if the matrix is singular
INV1 = solve(θ1 * I - C.T @ P1 @ C, I)
D1P1 = P1 + P1 @ C @ INV1 @ C.T @ P1

# D2(P2)
# Note: INV2 may not be solved if the matrix is singular
INV2 = solve(θ2 * I - C.T @ P2 @ C, I)
D2P2 = P2 + P2 @ C @ INV2 @ C.T @ P2

G2 = solve(Q2 + B2.T @ D2P2 @ B2, v2)


G1 = solve(Q1 + B1.T @ D1P1 @ B1, v1)
H2 = G2 @ B2.T @ D2P2
H1 = G1 @ B1.T @ D1P1

# Break up the computation of F1, F2


F1_left = v1 - (H1 @ B2 + G1 @ M1.T) @ (H2 @ B1 + G2 @ M2.T)
F1_right = H1 @ A + G1 @ W1.T - \
(H1 @ B2 + G1 @ M1.T) @ (H2 @ A + G2 @ W2.T)
F1 = solve(F1_left, F1_right)
F2 = H2 @ A + G2 @ W2.T - (H2 @ B1 + G2 @ M2.T) @ F1

Λ1 = A - B2 @ F2
Λ2 = A - B1 @ F1
Π1 = R1 + F2.T @ S1 @ F2
Π2 = R2 + F1.T @ S2 @ F1
Γ1 = W1.T - M1.T @ F2

Γ2 = W2.T - M2.T @ F1

# Compute P1 and P2
P1 = Π1 - (B1.T @ D1P1 @ Λ1 + Γ1).T @ F1 + \
Λ1.T @ D1P1 @ Λ1
P2 = Π2 - (B2.T @ D2P2 @ Λ2 + Γ2).T @ F2 + \
Λ2.T @ D2P2 @ Λ2

dd = np.max(np.abs(F10 - F1)) + np.max(np.abs(F20 - F2))

if dd < tol: # success!


break

else:
    raise ValueError(f'No convergence: Iteration limit of {max_iter} \
reached in nnash_robust')

return F1, F2, P1, P2

14.4.3 Some Details

Firm 𝑖 wants to minimize

$$\sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0} \left\{ x_t' R_i x_t + u_{it}' Q_i u_{it} + u_{-it}' S_i u_{-it} + 2 x_t' W_i u_{it} + 2 u_{-it}' M_i u_{it} \right\}$$

where

$$x_t := \begin{bmatrix} 1 \\ q_{1t} \\ q_{2t} \end{bmatrix} \quad \text{and} \quad u_{it} := q_{i,t+1} - q_{it}, \quad i = 1, 2$$

and

$$R_1 := \begin{bmatrix} 0 & -\frac{a_0}{2} & 0 \\ -\frac{a_0}{2} & a_1 & \frac{a_1}{2} \\ 0 & \frac{a_1}{2} & 0 \end{bmatrix}, \quad R_2 := \begin{bmatrix} 0 & 0 & -\frac{a_0}{2} \\ 0 & 0 & \frac{a_1}{2} \\ -\frac{a_0}{2} & \frac{a_1}{2} & a_1 \end{bmatrix}, \quad Q_1 = Q_2 = \gamma, \quad S_1 = S_2 = 0, \quad W_1 = W_2 = 0, \quad M_1 = M_2 = 0$$

The parameters of the duopoly model are:

• 𝑎0 = 10

• 𝑎1 = 2
• 𝛽 = 0.96
• 𝛾 = 12

In [5]: # Parameters
a0 = 10.0
a1 = 2.0
β = 0.96

γ = 12.0

# In LQ form
A = np.eye(3)
B1 = np.array([[0.], [1.], [0.]])
B2 = np.array([[0.], [0.], [1.]])

R1 = [[ 0., -a0 / 2, 0.],


[-a0 / 2., a1, a1 / 2.],
[ 0, a1 / 2., 0.]]

R2 = [[ 0., 0., -a0 / 2],


[ 0., 0., a1 / 2.],
[-a0 / 2, a1 / 2., a1]]

Q1 = Q2 = γ
S1 = S2 = W1 = W2 = M1 = M2 = 0.0

Consistency Check

We first conduct a comparison test to check if nnash_robust agrees with qe.nnash in the non-robustness case in which each 𝜃𝑖 ≈ +∞. (In the call below the volatility matrix is set to 𝐶 = 0, which shuts down the misspecification channel entirely, so the values passed for 𝜃𝑖 are immaterial.)

In [6]: # Solve using QE's nnash function


F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1,
Q2, S1, S2, W1, W2, M1,
M2, beta=β)

# Solve using nnash_robust


F1r, F2r, P1r, P2r = nnash_robust(A, np.zeros((3, 1)), B1, B2, R1, R2, Q1,
Q2, S1, S2, W1, W2, M1, M2, 1e-10,
1e-10, beta=β)

print('F1 and F1r should be the same: ', np.allclose(F1, F1r))
print('F2 and F2r should be the same: ', np.allclose(F2, F2r))
print('P1 and P1r should be the same: ', np.allclose(P1, P1r))
print('P2 and P2r should be the same: ', np.allclose(P2, P2r))

F1 and F1r should be the same: True


F2 and F2r should be the same: True
P1 and P1r should be the same: True
P2 and P2r should be the same: True

We can see that the results are consistent across the two functions.

Comparative Dynamics under Baseline Transition Dynamics

We want to compare the dynamics of price and output under the baseline MPE model with
those under the baseline model under the robust decision rules within the robust MPE.
This means that we simulate the state dynamics under the MPE equilibrium closed-loop
transition matrix

𝐴𝑜 = 𝐴 − 𝐵1 𝐹1 − 𝐵2 𝐹2

where 𝐹1 and 𝐹2 are the firms’ robust decision rules within the robust Markov perfect equilibrium

• by simulating under the baseline model transition dynamics and the robust MPE rules we are in effect assuming that at the end of the day firms’ concerns about misspecification of the baseline model do not materialize.

• a short way of saying this is that misspecification fears are all ‘just in the minds’ of the
firms.
• simulating under the baseline model is a common practice in the literature.
• note that some assumption about the model that actually governs the data has to be
made in order to create a simulation.
• later we will describe the (erroneous) beliefs of the two firms that justify their robust
decisions as best responses to transition laws that are distorted relative to the baseline
model.
After simulating 𝑥𝑡 under the baseline transition dynamics and robust decision rules 𝐹𝑖 , 𝑖 =
1, 2, we extract and plot industry output 𝑞𝑡 = 𝑞1𝑡 + 𝑞2𝑡 and price 𝑝𝑡 = 𝑎0 − 𝑎1 𝑞𝑡 .
Here we set the robustness and volatility matrix parameters as follows:
• 𝜃1 = 0.02
• 𝜃2 = 0.04
• 𝐶 = (0, 0.01, 0.01)′
Because we have set 𝜃1 < 𝜃2 < +∞ we know that

• both firms fear that the baseline specification of the state transition dynamics is incorrect.

• firm 1 fears misspecification more than firm 2.

In [7]: # Robustness parameters and matrix


C = np.asmatrix([[0], [0.01], [0.01]])
θ1 = 0.02
θ2 = 0.04
n = 20

# Solve using nnash_robust


F1r, F2r, P1r, P2r = nnash_robust(A, C, B1, B2, R1, R2, Q1,
Q2, S1, S2, W1, W2, M1, M2,
θ1, θ2, beta=β)

# MPE output and price


AF = A - B1 @ F1 - B2 @ F2
x = np.empty((3, n))
x[:, 0] = 1, 1, 1

for t in range(n - 1):


x[:, t + 1] = AF @ x[:, t]
q1 = x[1, :]
q2 = x[2, :]
q = q1 + q2 # Total output, MPE
p = a0 - a1 * q # Price, MPE

# RMPE output and price


AO = A - B1 @ F1r - B2 @ F2r
xr = np.empty((3, n))
xr[:, 0] = 1, 1, 1
for t in range(n - 1):
xr[:, t+1] = AO @ xr[:, t]
qr1 = xr[1, :]
qr2 = xr[2, :]
qr = qr1 + qr2 # Total output, RMPE
pr = a0 - a1 * qr # Price, RMPE

# RMPE heterogeneous beliefs output and price


I = np.eye(C.shape[1])
INV1 = solve(θ1 * I - C.T @ P1 @ C, I)
K1 = P1 @ C @ INV1 @ C.T @ P1 @ AO
AOCK1 = AO + C.T @ K1

INV2 = solve(θ2 * I - C.T @ P2 @ C, I)


K2 = P2 @ C @ INV2 @ C.T @ P2 @ AO
AOCK2 = AO + C.T @ K2
xrp1 = np.empty((3, n))
xrp2 = np.empty((3, n))
xrp1[:, 0] = 1, 1, 1
xrp2[:, 0] = 1, 1, 1
for t in range(n - 1):
xrp1[:, t + 1] = AOCK1 @ xrp1[:, t]
xrp2[:, t + 1] = AOCK2 @ xrp2[:, t]
qrp11 = xrp1[1, :]
qrp12 = xrp1[2, :]
qrp21 = xrp2[1, :]
qrp22 = xrp2[2, :]
qrp1 = qrp11 + qrp12 # Total output, RMPE from player 1's belief
qrp2 = qrp21 + qrp22 # Total output, RMPE from player 2's belief
prp1 = a0 - a1 * qrp1 # Price, RMPE from player 1's belief
prp2 = a0 - a1 * qrp2 # Price, RMPE from player 2's belief

The following code prepares graphs that compare market-wide output 𝑞1𝑡 + 𝑞2𝑡 and the price
of the good 𝑝𝑡 under equilibrium decision rules 𝐹𝑖 , 𝑖 = 1, 2 from an ordinary Markov perfect
equilibrium and the decision rules under a Markov perfect equilibrium with robust firms with
multiplier parameters 𝜃𝑖 , 𝑖 = 1, 2 set as described above.
Both industry output and price are under the transition dynamics associated with the base-
line model; only the decision rules 𝐹𝑖 differ across the two equilibrium objects presented.

In [8]: fig, axes = plt.subplots(2, 1, figsize=(9, 9))



ax = axes[0]
ax.plot(q, 'g-', lw=2, alpha=0.75, label='MPE output')
ax.plot(qr, 'm-', lw=2, alpha=0.75, label='RMPE output')
ax.set(ylabel="output", xlabel="time", ylim=(2, 4))
ax.legend(loc='upper left', frameon=0)

ax = axes[1]
ax.plot(p, 'g-', lw=2, alpha=0.75, label='MPE price')
ax.plot(pr, 'm-', lw=2, alpha=0.75, label='RMPE price')
ax.set(ylabel="price", xlabel="time")
ax.legend(loc='upper right', frameon=0)
plt.show()

Under the dynamics associated with the baseline model, the price path is higher with the
Markov perfect equilibrium robust decision rules than it is with decision rules for the ordinary
Markov perfect equilibrium.
So is the industry output path.
To dig a little beneath the forces driving these outcomes, we want to plot 𝑞1𝑡 and 𝑞2𝑡 in the Markov perfect equilibrium with robust firms and to compare them with corresponding objects in the Markov perfect equilibrium without robust firms

In [9]: fig, axes = plt.subplots(2, 1, figsize=(9, 9))

ax = axes[0]
ax.plot(q1, 'g-', lw=2, alpha=0.75, label='firm 1 MPE output')
ax.plot(qr1, 'b-', lw=2, alpha=0.75, label='firm 1 RMPE output')
ax.set(ylabel="output", xlabel="time", ylim=(1, 2))
ax.legend(loc='upper left', frameon=0)

ax = axes[1]
ax.plot(q2, 'g-', lw=2, alpha=0.75, label='firm 2 MPE output')
ax.plot(qr2, 'r-', lw=2, alpha=0.75, label='firm 2 RMPE output')
ax.set(ylabel="output", xlabel="time", ylim=(1, 2))
ax.legend(loc='upper left', frameon=0)
plt.show()

Evidently, firm 1’s output path is substantially lower when firms are robust firms while firm 2’s output path is virtually the same as it would be in an ordinary Markov perfect equilibrium with no robust firms.

Recall that we have set 𝜃1 = .02 and 𝜃2 = .04, so that firm 1 fears misspecification of the baseline model substantially more than does firm 2
• but also please notice that firm 2’s behavior in the Markov perfect equilibrium with robust firms responds to the decision rule 𝐹1 𝑥𝑡 employed by firm 1.
• thus it is something of a coincidence that its output is almost the same in the two equilibria.
Larger concerns about misspecification induce firm 1 to be more cautious than firm 2 in predicting market price and the output of the other firm.
To explore this, we study next how ex-post the two firms’ beliefs about state dynamics differ
in the Markov perfect equilibrium with robust firms.
(by ex-post we mean after extremization of each firm’s intertemporal objective)

Heterogeneous Beliefs

As before, let 𝐴𝑜 = 𝐴 − 𝐵1 𝐹1𝑟 − 𝐵2 𝐹2𝑟 , where in a robust MPE, 𝐹𝑖𝑟 is a robust decision rule for firm 𝑖.
Worst-case forecasts of 𝑥𝑡 starting from 𝑡 = 0 differ between the two firms.
This means that worst-case forecasts of industry output 𝑞1𝑡 + 𝑞2𝑡 and price 𝑝𝑡 also differ between the two firms.
To find these worst-case beliefs, we compute the following three “closed-loop” transition matrices

• 𝐴𝑜
• 𝐴𝑜 + 𝐶𝐾1
• 𝐴𝑜 + 𝐶𝐾2
We call the first transition law, namely, 𝐴𝑜 , the baseline transition under firms’ robust decision rules.
We call the second and third worst-case transitions under robust decision rules for firms 1 and 2.
From {𝑥𝑡 } paths generated by each of these transition laws, we pull off the associated price
and total output sequences.
The following code plots them

In [10]: print('Baseline Robust transition matrix AO is: \n', np.round(AO, 3))


print('Player 1\'s worst-case transition matrix AOCK1 is: \n', \
np.round(AOCK1, 3))
print('Player 2\'s worst-case transition matrix AOCK2 is: \n', \
np.round(AOCK2, 3))

Baseline Robust transition matrix AO is:


[[ 1. 0. 0. ]
[ 0.666 0.682 -0.074]
[ 0.671 -0.071 0.694]]
Player 1's worst-case transition matrix AOCK1 is:
[[ 0.998 0.002 0. ]
[ 0.664 0.685 -0.074]

[ 0.669 -0.069 0.694]]


Player 2's worst-case transition matrix AOCK2 is:
[[ 0.999 0. 0.001]
[ 0.665 0.683 -0.073]
[ 0.67 -0.071 0.695]]

In [11]: # == Plot == #
fig, axes = plt.subplots(2, 1, figsize=(9, 9))

ax = axes[0]
ax.plot(qrp1, 'b--', lw=2, alpha=0.75,
label='RMPE worst-case belief output player 1')
ax.plot(qrp2, 'r:', lw=2, alpha=0.75,
label='RMPE worst-case belief output player 2')
ax.plot(qr, 'm-', lw=2, alpha=0.75, label='RMPE output')
ax.set(ylabel="output", xlabel="time", ylim=(2, 4))
ax.legend(loc='upper left', frameon=0)

ax = axes[1]
ax.plot(prp1, 'b--', lw=2, alpha=0.75,
label='RMPE worst-case belief price player 1')
ax.plot(prp2, 'r:', lw=2, alpha=0.75,
label='RMPE worst-case belief price player 2')
ax.plot(pr, 'm-', lw=2, alpha=0.75, label='RMPE price')
ax.set(ylabel="price", xlabel="time")
ax.legend(loc='upper right', frameon=0)
plt.show()

We see from the above graph that under robustness concerns, player 1 and player 2 have heterogeneous beliefs about total output and the goods price even though they share the same baseline model and information

• firm 1 thinks that total output will be higher and price lower than does firm
2

• this leads firm 1 to produce less than firm 2


These beliefs justify (or rationalize) the Markov perfect equilibrium robust decision rules.
This means that the robust rules are the unique optimal rules (or best responses) to the indicated worst-case transition dynamics.
([26] discuss how this property of robust decision rules is connected to the concept of admissibility in Bayesian statistical decision theory)
Chapter 15

Default Risk and Income Fluctuations

15.1 Contents

• Overview 15.2
• Structure 15.3
• Equilibrium 15.4
• Computation 15.5
• Results 15.6
• Exercises 15.7
• Solutions 15.8
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

15.2 Overview

This lecture computes versions of Arellano’s [4] model of sovereign default.


The model describes interactions among default risk, output, and an equilibrium interest rate
that includes a premium for endogenous default risk.
The decision maker is a government of a small open economy that borrows from risk-neutral
foreign creditors.
The foreign lenders must be compensated for default risk.
The government borrows and lends abroad in order to smooth the consumption of its citizens.
The government repays its debt only if it wants to, but declining to pay has adverse consequences.
The interest rate on government debt adjusts in response to the state-dependent default probability chosen by the government.
The model yields outcomes that help interpret sovereign default experiences, including
• countercyclical interest rates on sovereign debt


• countercyclical trade balances


• high volatility of consumption relative to output
Notably, long recessions caused by bad draws in the income process increase the government’s
incentive to default.
This can lead to
• spikes in interest rates
• temporary losses of access to international credit markets
• large drops in output, consumption, and welfare
• large capital outflows during recessions
Such dynamics are consistent with experiences of many countries.
Let’s start with some imports:

In [2]: import matplotlib.pyplot as plt


import numpy as np
import quantecon as qe
import random

from numba import jit, jitclass, int64, float64

%matplotlib inline

15.3 Structure

In this section we describe the main features of the model.

15.3.1 Output, Consumption and Debt

A small open economy is endowed with an exogenous stochastically fluctuating potential output stream {𝑦𝑡 }.
Potential output is realized only in periods in which the government honors its sovereign
debt.
The output good can be traded or consumed.
The sequence {𝑦𝑡 } is described by a Markov process with stochastic density kernel 𝑝(𝑦, 𝑦′ ).
Households within the country are identical and rank stochastic consumption streams according to

$$\mathbb{E} \sum_{t=0}^{\infty} \beta^t u(c_t) \tag{1}$$

Here
• 0 < 𝛽 < 1 is a time discount factor
• 𝑢 is an increasing and strictly concave utility function
Consumption sequences enjoyed by households are affected by the government’s decision to
borrow or lend internationally.

The government is benevolent in the sense that its aim is to maximize (1).
The government is the only domestic actor with access to foreign credit.
Because households are averse to consumption fluctuations, the government will try to smooth consumption by borrowing from (and lending to) foreign creditors.

15.3.2 Asset Markets

The only credit instrument available to the government is a one-period bond traded in international credit markets.
The bond market has the following features
• The bond matures in one period and is not state contingent.
• A purchase of a bond with face value 𝐵′ is a claim to 𝐵′ units of the consumption good
next period.
• To purchase 𝐵′ next period costs 𝑞𝐵′ now or, what is equivalent,
• for selling −𝐵′ units of next period goods the seller earns −𝑞𝐵′ of today’s goods.
– If 𝐵′ < 0, then −𝑞𝐵′ units of the good are received in the current period, for a
promise to repay −𝐵′ units next period.
– There is an equilibrium price function 𝑞(𝐵′ , 𝑦) that makes 𝑞 depend on both 𝐵′
and 𝑦.
Earnings on the government portfolio are distributed (or, if negative, taxed) lump sum to
households.
When the government is not excluded from financial markets, the one-period national budget
constraint is

𝑐 = 𝑦 + 𝐵 − 𝑞(𝐵′ , 𝑦)𝐵′ (2)

Here and below, a prime denotes a next period value or a claim maturing next period.
To rule out Ponzi schemes, we also require that 𝐵 ≥ −𝑍 in every period.
• 𝑍 is chosen to be sufficiently large that the constraint never binds in equilibrium.
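As a concrete numerical illustration of (2), with hypothetical values:

y, B, B_prime, q = 1.0, -0.1, -0.12, 0.94   # hypothetical values
c = y + B - q * B_prime                      # 1.0 - 0.1 + 0.1128
print(c)   # 1.0128: issuing more debt (B' more negative) funds extra consumption today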

15.3.3 Financial Markets

Foreign creditors
• are risk neutral
• know the domestic output stochastic process {𝑦𝑡 } and observe 𝑦𝑡 , 𝑦𝑡−1 , … , at time 𝑡
• can borrow or lend without limit in an international credit market at a constant international interest rate 𝑟
• receive full payment if the government chooses to pay
• receive zero if the government defaults on its one-period debt due
When a government is expected to default next period with probability 𝛿, the expected value
of a promise to pay one unit of consumption next period is 1 − 𝛿.
Therefore, the discounted expected value of a promise to pay 𝐵 next period is

$$q = \frac{1 - \delta}{1 + r} \tag{3}$$
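For instance, with the value r = 0.017 used later in this lecture and an arbitrary illustrative default probability δ = 0.05:

r, δ = 0.017, 0.05           # δ is an illustrative value only
q = (1 - δ) / (1 + r)
print(q)                     # ≈ 0.934: default risk pushes the bond price below 1/(1+r)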

Next we turn to how the government in effect chooses the default probability 𝛿.

15.3.4 Government’s Decisions

At each point in time 𝑡, the government chooses between

1. defaulting

2. meeting its current obligations and purchasing or selling an optimal quantity of one-period sovereign debt

Defaulting means declining to repay all of its current obligations.


If the government defaults in the current period, then consumption equals current output.
But a sovereign default has two consequences:

1. Output immediately falls from 𝑦 to ℎ(𝑦), where 0 ≤ ℎ(𝑦) ≤ 𝑦.

• It returns to 𝑦 only after the country regains access to international credit markets.

2. The country loses access to foreign credit markets.

15.3.5 Reentering International Credit Market

While in a state of default, the economy regains access to foreign credit in each subsequent
period with probability 𝜃.

15.4 Equilibrium

Informally, an equilibrium is a sequence of interest rates on its sovereign debt, a stochastic sequence of government default decisions and an implied flow of household consumption such that

1. Consumption and assets satisfy the national budget constraint.

2. The government maximizes household utility taking into account

• the resource constraint


• the effect of its choices on the price of bonds
• consequences of defaulting now for future net output and future borrowing and lending
opportunities

3. The interest rate on the government’s debt includes a risk-premium sufficient to make foreign creditors expect on average to earn the constant risk-free international interest rate.

To express these ideas more precisely, consider first the choices of the government, which

1. enters a period with initial assets 𝐵, or what is the same thing, initial debt to be repaid
now of −𝐵

2. observes current output 𝑦, and

3. chooses either

(a) to default, or

(b) to pay −𝐵 and set next period’s debt due to −𝐵′

In a recursive formulation,
• state variables for the government comprise the pair (𝐵, 𝑦)
• 𝑣(𝐵, 𝑦) is the optimum value of the government’s problem when at the beginning of a
period it faces the choice of whether to honor or default
• 𝑣𝑐 (𝐵, 𝑦) is the value of choosing to pay obligations falling due
• 𝑣𝑑 (𝑦) is the value of choosing to default
𝑣𝑑 (𝑦) does not depend on 𝐵 because, when access to credit is eventually regained, net foreign
assets equal 0.
Expressed recursively, the value of defaulting is

𝑣𝑑 (𝑦) = 𝑢(ℎ(𝑦)) + 𝛽 ∫ {𝜃𝑣(0, 𝑦′ ) + (1 − 𝜃)𝑣𝑑 (𝑦′ )} 𝑝(𝑦, 𝑦′ )𝑑𝑦′

The value of paying is

$$v_c(B, y) = \max_{B' \geq -Z} \left\{ u(y - q(B', y) B' + B) + \beta \int v(B', y') p(y, y') \, dy' \right\}$$

The three value functions are linked by

𝑣(𝐵, 𝑦) = max{𝑣𝑐 (𝐵, 𝑦), 𝑣𝑑 (𝑦)}

The government chooses to default when

𝑣𝑐 (𝐵, 𝑦) < 𝑣𝑑 (𝑦)

and hence given 𝐵′ the probability of default next period is

𝛿(𝐵′ , 𝑦) ∶= ∫ 𝟙{𝑣𝑐 (𝐵′ , 𝑦′ ) < 𝑣𝑑 (𝑦′ )}𝑝(𝑦, 𝑦′ )𝑑𝑦′ (4)

Given zero profits for foreign creditors in equilibrium, we can combine (3) and (4) to pin
down the bond price function:

$$q(B', y) = \frac{1 - \delta(B', y)}{1 + r} \tag{5}$$
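In the discrete implementation below, the integral in (4) reduces to a matrix product over the income Markov chain, and (5) then follows directly; see the lines default_prob[:, :] = P @ default_states and q[:, :] = (1 - default_prob) / (1 + r) in the solve function.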

15.4.1 Definition of Equilibrium

An equilibrium is
• a pricing function 𝑞(𝐵′ , 𝑦),
• a triple of value functions (𝑣𝑐 (𝐵, 𝑦), 𝑣𝑑 (𝑦), 𝑣(𝐵, 𝑦)),
• a decision rule telling the government when to default and when to pay as a function of
the state (𝐵, 𝑦), and
• an asset accumulation rule that, conditional on choosing not to default, maps (𝐵, 𝑦) into
𝐵′
such that
• The three Bellman equations for (𝑣𝑐 (𝐵, 𝑦), 𝑣𝑑 (𝑦), 𝑣(𝐵, 𝑦)) are satisfied
• Given the price function 𝑞(𝐵′ , 𝑦), the default decision rule and the asset accumulation
decision rule attain the optimal value function 𝑣(𝐵, 𝑦), and
• The price function 𝑞(𝐵′ , 𝑦) satisfies equation (5)

15.5 Computation

Let’s now compute an equilibrium of Arellano’s model.


The equilibrium objects are the value function 𝑣(𝐵, 𝑦), the associated default decision rule,
and the pricing function 𝑞(𝐵′ , 𝑦).
We’ll use our code to replicate Arellano’s results.
After that we’ll perform some additional simulations.
We use a slightly modified version of the algorithm recommended by Arellano.
• The appendix to [4] recommends value function iteration until convergence, updating
the price, and then repeating.
• Instead, we update the bond price at every value function iteration step.
The second approach is faster and the two different procedures deliver very similar results.
Here is a more detailed description of our algorithm:

1. Guess a value function 𝑣(𝐵, 𝑦) and price function 𝑞(𝐵′ , 𝑦).

2. At each pair (𝐵, 𝑦),

• update the value of defaulting 𝑣𝑑 (𝑦).


• update the value of continuing 𝑣𝑐 (𝐵, 𝑦).

3. Update the value function 𝑣(𝐵, 𝑦), the default rule, the implied ex ante default probability, and the price function.

4. Check for convergence. If converged, stop; if not, go to step 2.

We use simple discretization on a grid of asset holdings and income levels.


The output process is discretized using Tauchen’s quadrature method.
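Concretely, mirroring the call used in the solutions below, the discretized chain can be built with quantecon:

import numpy as np
import quantecon as qe

mc = qe.markov.tauchen(0.945, 0.025, 0, 3, 21)   # ρ, η, constant, n_std, ny
ygrid, P = np.exp(mc.state_values), mc.P         # output grid and transition matrix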
As we have in other places, we will accelerate our code using Numba.

We start by defining the data structure that will help us compile the class (for more information on why we do this, see the lecture on numba).

In [3]: # Define the data information for the jitclass


arellano_data = [
('B', float64[:]), ('P', float64[:, :]), ('y', float64[:]),
('β', float64), ('γ', float64), ('r', float64),
('ρ', float64), ('η', float64), ('θ', float64),
('def_y', float64[:])
]

# Define utility function


@jit(nopython=True)
def u(c, γ):
return c**(1-γ)/(1-γ)

We then define our jitclass that will store various parameters and contain the code that
can apply the Bellman operators and determine the savings policy given prices and value
functions

In [4]: @jitclass(arellano_data)
class Arellano_Economy:
"""
Arellano 2008 deals with a small open economy whose government
invests in foreign assets in order to smooth the consumption of
domestic households. Domestic households receive a stochastic
path of income.

Parameters
----------
B : vector(float64)
A grid for bond holdings
P : matrix(float64)
The transition matrix for a country's output
y : vector(float64)
The possible output states
β : float
Time discounting parameter
γ : float
Risk-aversion parameter
r : float
International lending rate
ρ : float
Persistence in the income process
η : float
Standard deviation of the income process
θ : float
Probability of re-entering financial markets in each period
"""

def __init__(
self, B, P, y,
β=0.953, γ=2.0, r=0.017,
ρ=0.945, η=0.025, θ=0.282
):

# Save parameters
self.B, self.P, self.y = B, P, y
self.β, self.γ, self.r, = β, γ, r
self.ρ, self.η, self.θ = ρ, η, θ

# Output while in default is capped at 96.9% of mean output
self.def_y = np.minimum(0.969 * np.mean(y), y)

def bellman_default(self, iy, EVd, EV):


"""
The RHS of the Bellman equation when the country is in a
defaulted state on their debt
"""
# Unpack certain parameters for simplification
β, γ, θ = self.β, self.γ, self.θ

# Compute continuation value


zero_ind = len(self.B) // 2
cont_value = θ * EV[iy, zero_ind] + (1 - θ) * EVd[iy]

return u(self.def_y[iy], γ) + β*cont_value

def bellman_nondefault(self, iy, iB, q, EV, iB_tp1_star=-1):


"""
The RHS of the Bellman equation when the country is not in a
defaulted state on their debt
"""
# Unpack certain parameters for simplification
β, γ, θ = self.β, self.γ, self.θ
B, y = self.B, self.y

# Compute the RHS of Bellman equation


if iB_tp1_star < 0:
iB_tp1_star = self.compute_savings_policy(iy, iB, q, EV)
c = max(y[iy] - q[iy, iB_tp1_star]*B[iB_tp1_star] + B[iB], 1e-14)

return u(c, γ) + β*EV[iy, iB_tp1_star]

def compute_savings_policy(self, iy, iB, q, EV):


"""
Finds the debt/savings that maximizes the value function
for a particular state given prices and a value function
"""
# Unpack certain parameters for simplification
β, γ, θ = self.β, self.γ, self.θ
B, y = self.B, self.y

# Compute the RHS of Bellman equation


current_max = -1e14
iB_tp1_star = 0
for iB_tp1, B_tp1 in enumerate(B):
c = max(y[iy] - q[iy, iB_tp1]*B[iB_tp1] + B[iB], 1e-14)
m = u(c, γ) + β*EV[iy, iB_tp1]

if m > current_max:
iB_tp1_star = iB_tp1
current_max = m

return iB_tp1_star

We can now write a function that will use this class to compute the solution to our model

In [5]: @jit(nopython=True)
def solve(model, tol=1e-8, maxiter=10_000):
"""
Given an Arellano_Economy type, this function computes the optimal
policy and value functions
"""
# Unpack certain parameters for simplification
β, γ, r, θ = model.β, model.γ, model.r, model.θ
B = np.ascontiguousarray(model.B)
P, y = np.ascontiguousarray(model.P), np.ascontiguousarray(model.y)
nB, ny = B.size, y.size

# Allocate space
iBstar = np.zeros((ny, nB), int64)
default_prob = np.zeros((ny, nB))
default_states = np.zeros((ny, nB))
q = np.ones((ny, nB)) * 0.95
Vd = np.zeros(ny)
Vc, V, Vupd = np.zeros((ny, nB)), np.zeros((ny, nB)), np.zeros((ny, nB))

it = 0
dist = 10.0
while (it < maxiter) and (dist > tol):

# Compute expectations used for this iteration


EV = P@V
EVd = P@Vd

for iy in range(ny):
# Update value function for default state
Vd[iy] = model.bellman_default(iy, EVd, EV)

for iB in range(nB):
# Update value function for non-default state
iBstar[iy, iB] = model.compute_savings_policy(iy, iB, q, EV)
Vc[iy, iB] = model.bellman_nondefault(iy, iB, q, EV, iBstar[iy, iB])

# Once value functions are updated, can combine them to get


# the full value function
Vd_compat = np.reshape(np.repeat(Vd, nB), (ny, nB))
Vupd[:, :] = np.maximum(Vc, Vd_compat)

# Can also compute default states and update prices


default_states[:, :] = 1.0 * (Vd_compat > Vc)
default_prob[:, :] = P @ default_states
q[:, :] = (1 - default_prob) / (1 + r)

# Check tolerance etc...


dist = np.max(np.abs(Vupd - V))
V[:, :] = Vupd[:, :]
it += 1

return V, Vc, Vd, iBstar, default_prob, default_states, q



and, finally, we write a function that will allow us to simulate the economy once we have the
policy functions

In [6]: def simulate(model, T, default_states, iBstar, q, y_init=None, B_init=None):

"""
Simulates the Arellano 2008 model of sovereign debt

Parameters
----------
model: Arellano_Economy
An instance of the Arellano model with the corresponding parameters
T: integer
The number of periods that the model should be simulated
default_states: array(float64, 2)
A matrix of 0s and 1s that denotes whether the country was in
default on their debt in that period (default = 1)
iBstar: array(float64, 2)
A matrix which specifies the debt/savings level that a country holds
during a given state
q: array(float64, 2)
A matrix that specifies the price at which a country can borrow/save
for a given state
y_init: integer
Specifies which state the income process should start in
B_init: integer
Specifies which state the debt/savings state should start

Returns
-------
y_sim: array(float64, 1)
A simulation of the country's income
B_sim: array(float64, 1)
A simulation of the country's debt/savings
q_sim: array(float64, 1)
A simulation of the price required to have an extra unit of
consumption in the following period
default_sim: array(bool, 1)
A simulation of whether the country was in default or not
"""
# Find index i such that Bgrid[i] is approximately 0
zero_B_index = np.searchsorted(model.B, 0.0)

# Set initial conditions


in_default = False
max_y_default = 0.969 * np.mean(model.y)
if y_init is None:
y_init = np.searchsorted(model.y, model.y.mean())
if B_init is None:
B_init = zero_B_index

# Create Markov chain and simulate income process


mc = qe.MarkovChain(model.P, model.y)
y_sim_indices = mc.simulate_indices(T+1, init=y_init)

# Allocate memory for remaining outputs


Bi = B_init
B_sim = np.empty(T)

y_sim = np.empty(T)
q_sim = np.empty(T)
default_sim = np.empty(T, dtype=bool)

# Perform simulation
for t in range(T):
yi = y_sim_indices[t]

# Fill y/B for today


if not in_default:
y_sim[t] = model.y[yi]
else:
y_sim[t] = np.minimum(model.y[yi], max_y_default)
B_sim[t] = model.B[Bi]
default_sim[t] = in_default

# Check whether in default and branch depending on that state


if not in_default:
if default_states[yi, Bi] > 1e-4:
in_default=True
Bi_next = zero_B_index
else:
Bi_next = iBstar[yi, Bi]
else:
Bi_next = zero_B_index
if np.random.rand() < model.θ:
in_default=False

# Fill in states
q_sim[t] = q[yi, Bi_next]
Bi = Bi_next

return y_sim, B_sim, q_sim, default_sim

15.6 Results

Let’s start by trying to replicate the results obtained in [4].


In what follows, all results are computed using Arellano’s parameter values.
The values can be seen in the __init__ method of the Arellano_Economy shown above.
• For example, r=0.017 matches the average quarterly rate on a 5 year US treasury over
the period 1983–2001.
Details on how to compute the figures are reported as solutions to the exercises.
The first figure shows the bond price schedule and replicates Figure 3 of Arellano, where 𝑦𝐿 and 𝑦𝐻 are particular below average and above average values of output 𝑦.

• 𝑦𝐿 is 5% below the mean of the 𝑦 grid values


• 𝑦𝐻 is 5% above the mean of the 𝑦 grid values
The grid used to compute this figure was relatively coarse (ny, nB = 21, 251) in order to match Arellano’s findings.
Here’s the same relationships computed on a finer grid (ny, nB = 51, 551)

In either case, the figure shows that


• Higher levels of debt (larger −𝐵′ ) induce larger discounts on the face value, which correspond to higher interest rates.

• Lower income also causes more discounting, as foreign creditors anticipate greater likelihood of default.
The next figure plots value functions and replicates the right hand panel of Figure 4 of [4].

We can use the results of the computation to study the default probability 𝛿(𝐵′ , 𝑦) defined in (4).
The next plot shows these default probabilities over (𝐵′ , 𝑦) as a heat map.

As anticipated, the probability that the government chooses to default in the following period
increases with indebtedness and falls with income.
Next let’s run a time series simulation of {𝑦𝑡 }, {𝐵𝑡 } and 𝑞(𝐵𝑡+1 , 𝑦𝑡 ).

The grey vertical bars correspond to periods when the economy is excluded from financial
markets because of a past default.

One notable feature of the simulated data is the nonlinear response of interest rates.
Periods of relative stability are followed by sharp spikes in the discount rate on government
debt.

15.7 Exercises

15.7.1 Exercise 1

To the extent that you can, replicate the figures shown above
• Use the parameter values listed as defaults in the __init__ method of the
Arellano_Economy.
• The time series will of course vary depending on the shock draws.

15.8 Solutions

Compute the value function, policy and equilibrium prices

In [7]: β, γ, r = 0.953, 2.0, 0.017


ρ, η, θ = 0.945, 0.025, 0.282
ny = 21
nB = 251
Bgrid = np.linspace(-0.45, 0.45, nB)
mc = qe.markov.tauchen(ρ, η, 0, 3, ny)
ygrid, P = np.exp(mc.state_values), mc.P

ae = Arellano_Economy(
Bgrid, P, ygrid, β=β, γ=γ, r=r, ρ=ρ, η=η, θ=θ
)

In [8]: V, Vc, Vd, iBstar, default_prob, default_states, q = solve(ae)

Compute the bond price schedule as seen in figure 3 of Arellano (2008)

In [9]: # Create "Y High" and "Y Low" values as 5% devs from mean
high, low = np.mean(ae.y) * 1.05, np.mean(ae.y) * .95
iy_high, iy_low = (np.searchsorted(ae.y, x) for x in (high, low))

fig, ax = plt.subplots(figsize=(10, 6.5))


ax.set_title("Bond price schedule $q(y, B')$")

# Extract a suitable plot grid


x = []
q_low = []
q_high = []
for i in range(nB):
b = ae.B[i]
if -0.35 <= b <= 0: # To match fig 3 of Arellano
x.append(b)
q_low.append(q[iy_low, i])
q_high.append(q[iy_high, i])
ax.plot(x, q_high, label="$y_H$", lw=2, alpha=0.7)
ax.plot(x, q_low, label="$y_L$", lw=2, alpha=0.7)
ax.set_xlabel("$B'$")
ax.legend(loc='upper left', frameon=False)
plt.show()

Draw a plot of the value functions

In [10]: # Create "Y High" and "Y Low" values as 5% devs from mean
high, low = np.mean(ae.y) * 1.05, np.mean(ae.y) * .95
iy_high, iy_low = (np.searchsorted(ae.y, x) for x in (high, low))

fig, ax = plt.subplots(figsize=(10, 6.5))


ax.set_title("Value Functions")
ax.plot(ae.B, V[iy_high], label="$y_H$", lw=2, alpha=0.7)
ax.plot(ae.B, V[iy_low], label="$y_L$", lw=2, alpha=0.7)
ax.legend(loc='upper left')
ax.set(xlabel="$B$", ylabel="$V(y, B)$")
ax.set_xlim(ae.B.min(), ae.B.max())
plt.show()

Draw a heat map for default probability

In [11]: xx, yy = ae.B, ae.y


zz = default_prob

# Create figure
fig, ax = plt.subplots(figsize=(10, 6.5))
hm = ax.pcolormesh(xx, yy, zz)
cax = fig.add_axes([.92, .1, .02, .8])
fig.colorbar(hm, cax=cax)
ax.axis([xx.min(), 0.05, yy.min(), yy.max()])
ax.set(xlabel="$B'$", ylabel="$y$", title="Probability of Default")
plt.show()

Plot a time series of major variables simulated from the model

In [12]: T = 250

np.random.seed(42)
y_vec, B_vec, q_vec, default_vec = simulate(ae, T, default_states, iBstar, q)

# Pick up default start and end dates


start_end_pairs = []
i = 0
while i < len(default_vec):
if default_vec[i] == 0:
i += 1
else:
# If we get to here we're in default
start_default = i
while i < len(default_vec) and default_vec[i] == 1:
i += 1
end_default = i - 1
start_end_pairs.append((start_default, end_default))

plot_series = (y_vec, B_vec, q_vec)


titles = 'output', 'foreign assets', 'bond price'

fig, axes = plt.subplots(len(plot_series), 1, figsize=(10, 12))


fig.subplots_adjust(hspace=0.3)

for ax, series, title in zip(axes, plot_series, titles):


# Determine suitable y limits
s_max, s_min = max(series), min(series)
s_range = s_max - s_min
y_max = s_max + s_range * 0.1
y_min = s_min - s_range * 0.1

ax.set_ylim(y_min, y_max)
for pair in start_end_pairs:
ax.fill_between(pair, (y_min, y_min), (y_max, y_max),
color='k', alpha=0.3)
ax.grid()
ax.plot(range(T), series, lw=2, alpha=0.7)
ax.set(title=title, xlabel="time")

plt.show()
Chapter 16

Globalization and Cycles

16.1 Contents

• Overview 16.2
• Key Ideas 16.3
• Model 16.4
• Simulation 16.5
• Exercises 16.6
• Solutions 16.7

16.2 Overview

In this lecture, we review the paper Globalization and Synchronization of Innovation Cycles
by Kiminori Matsuyama, Laura Gardini and Iryna Sushko.
This model helps us understand several interesting stylized facts about the world economy.
One of these is synchronized business cycles across different countries.
Most existing models that generate synchronized business cycles do so by assumption, since
they tie output in each country to a common shock.
They also fail to explain certain features of the data, such as the fact that the degree of syn-
chronization tends to increase with trade ties.
By contrast, in the model we consider in this lecture, synchronization is both endogenous and
increasing with the extent of trade integration.
In particular, as trade costs fall and international competition increases, innovation incentives
become aligned and countries synchronize their innovation cycles.
Let’s start with some imports:

In [1]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from numba import jit, vectorize
from ipywidgets import interact


16.2.1 Background

The model builds on work by Judd [36], Deneckere and Judd [18] and Helpman and Krugman
[33] by developing a two-country model with trade and innovation.
On the technical side, the paper introduces the concept of coupled oscillators to economic
modeling.
As we will see, coupled oscillators arise endogenously within the model.
Below we review the model and replicate some of the results on synchronization of innovation
across countries.

16.3 Key Ideas

It is helpful to begin with an overview of the mechanism.

16.3.1 Innovation Cycles

As discussed above, two countries produce and trade with each other.
In each country, firms innovate, producing new varieties of goods and, in doing so, receiving
temporary monopoly power.
Imitators follow and, after one period of monopoly, what had previously been new varieties
now enter competitive production.
Firms have incentives to innovate and produce new goods when the mass of varieties of goods
currently in production is relatively low.
In addition, there are strategic complementarities in the timing of innovation.
Firms have incentives to innovate in the same period, so as to avoid competing with substi-
tutes that are competitively produced.
This leads to temporal clustering in innovations in each country.
After a burst of innovation, the mass of goods currently in production increases.
However, goods also become obsolete, so that not all survive from period to period.
This mechanism generates a cycle, where the mass of varieties increases through simultaneous
innovation and then falls through obsolescence.

16.3.2 Synchronization

In the absence of trade, the timing of innovation cycles in each country is decoupled.
This will be the case when trade costs are prohibitively high.
If trade costs fall, then goods produced in each country penetrate each other’s markets.
As illustrated below, this leads to synchronization of business cycles across the two countries.

16.4 Model

Let’s write down the model more formally.


(The treatment is relatively terse since full details can be found in the original paper)
Time is discrete with 𝑡 = 0, 1, ….
There are two countries indexed by 𝑗 or 𝑘.
In each country, a representative household inelastically supplies 𝐿𝑗 units of labor at wage
rate 𝑤𝑗,𝑡 .
Without loss of generality, it is assumed that 𝐿1 ≥ 𝐿2 .
Households consume a single nontradeable final good which is produced competitively.
Its production involves combining two types of tradeable intermediate inputs via

$$Y_{k,t} = C_{k,t} = \left( \frac{X^o_{k,t}}{1-\alpha} \right)^{1-\alpha} \left( \frac{X_{k,t}}{\alpha} \right)^{\alpha}$$
Here $X^o_{k,t}$ is a homogeneous input that can be produced from labor using a linear, one-for-one technology.
It is freely tradeable, competitively supplied, and homogeneous across countries.
By choosing the price of this good as numeraire and assuming both countries find it optimal
to always produce the homogeneous good, we can set 𝑤1,𝑡 = 𝑤2,𝑡 = 1.
The good 𝑋𝑘,𝑡 is a composite, built from many differentiated goods via

$$X_{k,t}^{1 - \frac{1}{\sigma}} = \int_{\Omega_t} \left[ x_{k,t}(\nu) \right]^{1 - \frac{1}{\sigma}} d\nu$$

Here 𝑥𝑘,𝑡 (𝜈) is the total amount of a differentiated good 𝜈 ∈ Ω𝑡 that is produced.
The parameter 𝜎 > 1 is the direct partial elasticity of substitution between a pair of varieties
and Ω𝑡 is the set of varieties available in period 𝑡.
We can split the varieties into those which are supplied competitively and those supplied monopolistically; that is, $\Omega_t = \Omega^c_t + \Omega^m_t$.

16.4.1 Prices

Demand for differentiated inputs is

$$x_{k,t}(\nu) = \left( \frac{p_{k,t}(\nu)}{P_{k,t}} \right)^{-\sigma} \frac{\alpha L_k}{P_{k,t}}$$

Here
• 𝑝𝑘,𝑡 (𝜈) is the price of the variety 𝜈 and
• 𝑃𝑘,𝑡 is the price index for differentiated inputs in 𝑘, defined by

$$\left[ P_{k,t} \right]^{1-\sigma} = \int_{\Omega_t} \left[ p_{k,t}(\nu) \right]^{1-\sigma} d\nu$$

The price of a variety also depends on the origin, 𝑗, and destination, 𝑘, of the goods because
shipping varieties between countries incurs an iceberg trade cost 𝜏𝑗,𝑘 .
Thus the effective price in country 𝑘 of a variety 𝜈 produced in country 𝑗 becomes 𝑝𝑘,𝑡 (𝜈) =
𝜏𝑗,𝑘 𝑝𝑗,𝑡 (𝜈).
Using these expressions, we can derive the total demand for each variety, which is

$$D_{j,t}(\nu) = \sum_k \tau_{j,k} \, x_{k,t}(\nu) = \alpha A_{j,t} \left( p_{j,t}(\nu) \right)^{-\sigma}$$

where

$$A_{j,t} := \sum_k \frac{\rho_{j,k} L_k}{(P_{k,t})^{1-\sigma}} \quad \text{and} \quad \rho_{j,k} = (\tau_{j,k})^{1-\sigma} \le 1$$

It is assumed that 𝜏1,1 = 𝜏2,2 = 1 and 𝜏1,2 = 𝜏2,1 = 𝜏 for some 𝜏 > 1, so that

$$\rho_{1,2} = \rho_{2,1} = \rho := \tau^{1-\sigma} < 1$$

The value 𝜌 ∈ [0, 1) is a proxy for the degree of globalization.


Producing one unit of each differentiated variety requires 𝜓 units of labor, so the marginal
cost is equal to 𝜓 for 𝜈 ∈ Ω𝑗,𝑡 .
Additionally, all competitive varieties will have the same price (because of equal marginal
cost), which means that, for all 𝜈 ∈ Ω𝑐 ,

$$p_{j,t}(\nu) = p^c_{j,t} := \psi \quad \text{and} \quad D_{j,t} = y^c_{j,t} := \alpha A_{j,t} \left( p^c_{j,t} \right)^{-\sigma}$$

Monopolists will have the same marked-up price, so, for all 𝜈 ∈ Ω𝑚 ,

$$p_{j,t}(\nu) = p^m_{j,t} := \frac{\psi}{1 - \frac{1}{\sigma}} \quad \text{and} \quad D_{j,t} = y^m_{j,t} := \alpha A_{j,t} \left( p^m_{j,t} \right)^{-\sigma}$$

Define

$$\theta := \frac{p^c_{j,t} \, y^c_{j,t}}{p^m_{j,t} \, y^m_{j,t}} = \left( 1 - \frac{1}{\sigma} \right)^{1-\sigma}$$
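(For example, $\sigma = 3$ implies $\theta = (2/3)^{-2} = 9/4 = 2.25$, close to the value $\theta = 2.5$ used in the simulations below.)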

Using the preceding definitions and some algebra, the price indices can now be rewritten as

$$\left( \frac{P_{k,t}}{\psi} \right)^{1-\sigma} = M_{k,t} + \rho M_{j,t} \quad \text{where} \quad M_{j,t} := N^c_{j,t} + \frac{N^m_{j,t}}{\theta}$$

The symbols $N^c_{j,t}$ and $N^m_{j,t}$ will denote the measures of $\Omega^c$ and $\Omega^m$ respectively.

16.4.2 New Varieties

To introduce a new variety, a firm must hire 𝑓 units of labor per variety in each country.
Monopolist profits must be less than or equal to zero in expectation, so

$$N^m_{j,t} \ge 0, \quad \pi^m_{j,t} := (p^m_{j,t} - \psi) y^m_{j,t} - f \le 0 \quad \text{and} \quad \pi^m_{j,t} N^m_{j,t} = 0$$

With further manipulations, this becomes

$$N^m_{j,t} = \theta(M_{j,t} - N^c_{j,t}) \ge 0, \quad \frac{1}{\sigma} \left[ \frac{\alpha L_j}{\theta(M_{j,t} + \rho M_{k,t})} + \frac{\alpha L_k}{\theta(M_{j,t} + M_{k,t}/\rho)} \right] \le f$$

16.4.3 Law of Motion

With $\delta$ denoting the exogenous probability that a variety survives from one period to the next (so that $1 - \delta$ is the probability of becoming obsolete), the dynamic equation for
the measure of firms becomes

$$N^c_{j,t+1} = \delta(N^c_{j,t} + N^m_{j,t}) = \delta \left( N^c_{j,t} + \theta(M_{j,t} - N^c_{j,t}) \right)$$

We will work with a normalized measure of varieties

$$n_{j,t} := \frac{\theta \sigma f N^c_{j,t}}{\alpha(L_1 + L_2)}, \quad i_{j,t} := \frac{\theta \sigma f N^m_{j,t}}{\alpha(L_1 + L_2)}, \quad m_{j,t} := \frac{\theta \sigma f M_{j,t}}{\alpha(L_1 + L_2)} = n_{j,t} + \frac{i_{j,t}}{\theta}$$

We also use $s_j := \frac{L_j}{L_1 + L_2}$, the share of labor employed in country $j$.
We can use these definitions and the preceding expressions to obtain a law of motion for
𝑛𝑡 ∶= (𝑛1,𝑡 , 𝑛2,𝑡 ).
In particular, given an initial condition $n_0 = (n_{1,0}, n_{2,0}) \in \mathbb{R}^2_+$, the equilibrium trajectory $\{n_t\}_{t=0}^{\infty} = \{(n_{1,t}, n_{2,t})\}_{t=0}^{\infty}$ is obtained by iterating on $n_{t+1} = F(n_t)$, where $F : \mathbb{R}^2_+ \to \mathbb{R}^2_+$ is given by

$$F(n_t) = \begin{cases}
\left( \delta(\theta s_1(\rho) + (1-\theta) n_{1,t}), \; \delta(\theta s_2(\rho) + (1-\theta) n_{2,t}) \right) & \text{for } n_t \in D_{LL} \\
\left( \delta n_{1,t}, \; \delta n_{2,t} \right) & \text{for } n_t \in D_{HH} \\
\left( \delta n_{1,t}, \; \delta(\theta h_2(n_{1,t}) + (1-\theta) n_{2,t}) \right) & \text{for } n_t \in D_{HL} \\
\left( \delta(\theta h_1(n_{2,t}) + (1-\theta) n_{1,t}), \; \delta n_{2,t} \right) & \text{for } n_t \in D_{LH}
\end{cases}$$

Here

$$D_{LL} := \{(n_1, n_2) \in \mathbb{R}^2_+ \mid n_j \le s_j(\rho)\}$$

$$D_{HH} := \{(n_1, n_2) \in \mathbb{R}^2_+ \mid n_j \ge h_j(n_k)\}$$

$$D_{HL} := \{(n_1, n_2) \in \mathbb{R}^2_+ \mid n_1 \ge s_1(\rho) \text{ and } n_2 \le h_2(n_1)\}$$

$$D_{LH} := \{(n_1, n_2) \in \mathbb{R}^2_+ \mid n_1 \le h_1(n_2) \text{ and } n_2 \ge s_2(\rho)\}$$

while

$$s_1(\rho) = 1 - s_2(\rho) = \min \left\{ \frac{s_1 - \rho s_2}{1 - \rho}, \; 1 \right\}$$

and ℎ𝑗 (𝑛𝑘 ) is defined implicitly by the equation

$$1 = \frac{s_j}{h_j(n_k) + \rho n_k} + \frac{s_k}{h_j(n_k) + n_k / \rho}$$

Rewriting the equation above gives us a quadratic equation in terms of ℎ𝑗 (𝑛𝑘 ).


Since we know $h_j(n_k) > 0$, we can solve the quadratic equation and return the positive root.
This gives us

$$h_j(n_k)^2 + \left( \left( \rho + \frac{1}{\rho} \right) n_k - s_j - s_k \right) h_j(n_k) + \left( n_k^2 - \frac{s_j n_k}{\rho} - s_k n_k \rho \right) = 0$$
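As a quick sanity check, the following sketch (with arbitrary illustrative values for $s_1$, $s_2$, $\rho$ and $n_k$, not taken from the paper) verifies that the positive root of this quadratic satisfies the implicit equation defining $h_j(n_k)$:

import numpy as np

# Arbitrary illustrative values: symmetric countries, moderate trade integration
s1, s2, ρ = 0.5, 0.5, 0.2
sj, sk, nk = s1, s2, 0.3

# Coefficients of h² + b h + c = 0, read off the quadratic above
b = (ρ + 1 / ρ) * nk - sj - sk
c = nk * nk - (sj * nk) / ρ - sk * ρ * nk

# Positive root of the quadratic
h = (-b + np.sqrt(b * b - 4 * c)) / 2

# Check the implicit equation 1 = sj / (h + ρ nk) + sk / (h + nk / ρ)
print(np.isclose(1.0, sj / (h + ρ * nk) + sk / (h + nk / ρ)))  # True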

16.5 Simulation

Let’s try simulating some of these trajectories.


We will focus in particular on whether or not innovation cycles synchronize across the two
countries.
As we will see, this depends on initial conditions.
For some parameterizations, synchronization will occur for “most” initial conditions, while for
others synchronization will be rare.
The computational burden of testing synchronization across many initial conditions is not
trivial.
In order to make our code fast, we will use just in time compiled functions that will get called
and handled by our class.
These are the @jit statements that you see below (review this lecture if you don’t recall how
to use JIT compilation).
Here’s the main body of code

In [2]: @jit(nopython=True)
def _hj(j, nk, s1, s2, θ, δ, ρ):
"""
If we expand the implicit function for h_j(n_k) then we find that
it is quadratic. We know that h_j(n_k) > 0 so we can get its
value by using the quadratic form
"""
# Find out whose h we are evaluating
if j == 1:
sj = s1
sk = s2
else:
sj = s2
sk = s1

# Coefficients on the quadratic a x^2 + b x + c = 0


a = 1.0
b = ((ρ + 1 / ρ) * nk - sj - sk)
c = (nk * nk - (sj * nk) / ρ - sk * ρ * nk)

# Positive solution of quadratic form


root = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

return root

@jit(nopython=True)
def DLL(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
"Determine whether (n1, n2) is in the set DLL"
return (n1 <= s1_ρ) and (n2 <= s2_ρ)

@jit(nopython=True)
def DHH(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
"Determine whether (n1, n2) is in the set DHH"
return (n1 >= _hj(1, n2, s1, s2, θ, δ, ρ)) and \
(n2 >= _hj(2, n1, s1, s2, θ, δ, ρ))

@jit(nopython=True)
def DHL(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
"Determine whether (n1, n2) is in the set DHL"
return (n1 >= s1_ρ) and (n2 <= _hj(2, n1, s1, s2, θ, δ, ρ))

@jit(nopython=True)
def DLH(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
"Determine whether (n1, n2) is in the set DLH"
return (n1 <= _hj(1, n2, s1, s2, θ, δ, ρ)) and (n2 >= s2_ρ)

@jit(nopython=True)
def one_step(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
"""
Takes a current value for (n_{1, t}, n_{2, t}) and returns the
values (n_{1, t+1}, n_{2, t+1}) according to the law of motion.
"""
# Depending on where we are, evaluate the right branch
if DLL(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
n1_tp1 = δ * (θ * s1_ρ + (1 - θ) * n1)
n2_tp1 = δ * (θ * s2_ρ + (1 - θ) * n2)
elif DHH(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
n1_tp1 = δ * n1
n2_tp1 = δ * n2
elif DHL(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
n1_tp1 = δ * n1
n2_tp1 = δ * (θ * _hj(2, n1, s1, s2, θ, δ, ρ) + (1 - θ) * n2)
elif DLH(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
n1_tp1 = δ * (θ * _hj(1, n2, s1, s2, θ, δ, ρ) + (1 - θ) * n1)
n2_tp1 = δ * n2

return n1_tp1, n2_tp1

@jit(nopython=True)
def n_generator(n1_0, n2_0, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
"""
Given an initial condition, continues to yield new values of
n1 and n2
"""
n1_t, n2_t = n1_0, n2_0
while True:
n1_tp1, n2_tp1 = one_step(n1_t, n2_t, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ)
yield (n1_tp1, n2_tp1)
n1_t, n2_t = n1_tp1, n2_tp1

@jit(nopython=True)
def _pers_till_sync(n1_0, n2_0, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ, maxiter, npers):
"""

Takes initial values and iterates forward to see whether


the histories eventually end up in sync.

If countries are symmetric then as soon as the two countries have the
same measure of firms then they will be synchronized -- However, if
they are not symmetric then it is possible they have the same measure
of firms but are not yet synchronized. To address this, we check whether
firms stay synchronized for `npers` periods with Euclidean norm

Parameters
----------
n1_0 : scalar(Float)
Initial normalized measure of firms in country one
n2_0 : scalar(Float)
Initial normalized measure of firms in country two
maxiter : scalar(Int)
Maximum number of periods to simulate
npers : scalar(Int)
Number of periods we would like the countries to have the
same measure for

Returns
-------
synchronized : scalar(Bool)
Did the two economies end up synchronized
pers_2_sync : scalar(Int)
The number of periods required until they synchronized
"""
# Initialize the status of synchronization
synchronized = False
pers_2_sync = maxiter
iters = 0

# Initialize generator
n_gen = n_generator(n1_0, n2_0, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ)

# Will use a counter to determine how many times in a row


# the firm measures are the same
nsync = 0

while (not synchronized) and (iters < maxiter):


# Increment the number of iterations and get next values
iters += 1
n1_t, n2_t = next(n_gen)

# Check whether same in this period


if abs(n1_t - n2_t) < 1e-8:
nsync += 1
# If not, then reset the nsync counter
else:
nsync = 0

# If we have been in sync for npers then stop and countries


# became synchronized nsync periods ago
if nsync > npers:
synchronized = True
pers_2_sync = iters - nsync

return synchronized, pers_2_sync

@jit(nopython=True)
def _create_attraction_basis(s1_ρ, s2_ρ, s1, s2, θ, δ, ρ,
maxiter, npers, npts):
# Create unit range with npts
synchronized, pers_2_sync = False, 0
unit_range = np.linspace(0.0, 1.0, npts)

# Allocate space to store time to sync


time_2_sync = np.empty((npts, npts))
# Iterate over initial conditions
for (i, n1_0) in enumerate(unit_range):
for (j, n2_0) in enumerate(unit_range):
synchronized, pers_2_sync = _pers_till_sync(n1_0, n2_0, s1_ρ,
s2_ρ, s1, s2, θ, δ,
ρ, maxiter, npers)
time_2_sync[i, j] = pers_2_sync

return time_2_sync

# == Now we define a class for the model == #

class MSGSync:
"""
The paper "Globalization and Synchronization of Innovation Cycles"�
↪presents

a two-country model with endogenous innovation cycles. Combines elements


from Deneckere Judd (1985) and Helpman Krugman (1985) to allow for a
model with trade that has firms who can introduce new varieties into
the economy.

We focus on being able to determine whether the two countries eventually


synchronize their innovation cycles. To do this, we only need a few
of the many parameters. In particular, we need the parameters listed
below

Parameters
----------
s1 : scalar(Float)
Amount of total labor in country 1 relative to total worldwide labor
θ : scalar(Float)
A measure of how much more of the competitive variety is used in
production of final goods
δ : scalar(Float)
Percentage of firms that are not exogenously destroyed every period
ρ : scalar(Float)
Measure of how expensive it is to trade between countries
"""
def __init__(self, s1=0.5, θ=2.5, δ=0.7, ρ=0.2):
# Store model parameters
self.s1, self.θ, self.δ, self.ρ = s1, θ, δ, ρ

# Store other cutoffs and parameters we use


self.s2 = 1 - s1
self.s1_ρ = self._calc_s1_ρ()
self.s2_ρ = 1 - self.s1_ρ

def _unpack_params(self):
return self.s1, self.s2, self.θ, self.δ, self.ρ

def _calc_s1_ρ(self):
# Unpack params
s1, s2, θ, δ, ρ = self._unpack_params()

# s_1(ρ) = min(val, 1)
val = (s1 - ρ * s2) / (1 - ρ)
return min(val, 1)

def simulate_n(self, n1_0, n2_0, T):


"""
Simulates the values of (n1, n2) for T periods

Parameters
----------
n1_0 : scalar(Float)
Initial normalized measure of firms in country one
n2_0 : scalar(Float)
Initial normalized measure of firms in country two
T : scalar(Int)
Number of periods to simulate

Returns
-------
n1 : Array(Float64, ndim=1)
A history of normalized measures of firms in country one
n2 : Array(Float64, ndim=1)
A history of normalized measures of firms in country two
"""
# Unpack parameters
s1, s2, θ, δ, ρ = self._unpack_params()
s1_ρ, s2_ρ = self.s1_ρ, self.s2_ρ

# Allocate space
n1 = np.empty(T)
n2 = np.empty(T)

# Create the generator


n1[0], n2[0] = n1_0, n2_0
n_gen = n_generator(n1_0, n2_0, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ)

# Simulate for T periods


for t in range(1, T):
# Get next values
n1_tp1, n2_tp1 = next(n_gen)

# Store in arrays
n1[t] = n1_tp1
n2[t] = n2_tp1

return n1, n2

def pers_till_sync(self, n1_0, n2_0, maxiter=500, npers=3):


"""
Takes initial values and iterates forward to see whether

the histories eventually end up in sync.

If countries are symmetric then as soon as the two countries have the
same measure of firms then they will be synchronized -- However, if
they are not symmetric then it is possible they have the same measure
of firms but are not yet synchronized. To address this, we check whether
firms stay synchronized for `npers` periods with Euclidean norm

Parameters
----------
n1_0 : scalar(Float)
Initial normalized measure of firms in country one
n2_0 : scalar(Float)
Initial normalized measure of firms in country two
maxiter : scalar(Int)
Maximum number of periods to simulate
npers : scalar(Int)
Number of periods we would like the countries to have the
same measure for

Returns
-------
synchronized : scalar(Bool)
Did the two economies end up synchronized
pers_2_sync : scalar(Int)
The number of periods required until they synchronized
"""
# Unpack parameters
s1, s2, θ, δ, ρ = self._unpack_params()
s1_ρ, s2_ρ = self.s1_ρ, self.s2_ρ

return _pers_till_sync(n1_0, n2_0, s1_ρ, s2_ρ,


s1, s2, θ, δ, ρ, maxiter, npers)

def create_attraction_basis(self, maxiter=250, npers=3, npts=50):


"""
Creates an attraction basis for values of n on [0, 1] X [0, 1]
with npts in each dimension
"""
# Unpack parameters
s1, s2, θ, δ, ρ = self._unpack_params()
s1_ρ, s2_ρ = self.s1_ρ, self.s2_ρ

ab = _create_attraction_basis(s1_ρ, s2_ρ, s1, s2, θ, δ,


ρ, maxiter, npers, npts)

return ab

16.5.1 Time Series of Firm Measures

We write a short function below that exploits the preceding code and plots two time series.
Each time series gives the dynamics for the two countries.
The time series share parameters but differ in their initial condition.
Here’s the function

In [3]: def plot_timeseries(n1_0, n2_0, s1=0.5, θ=2.5,


δ=0.7, ρ=0.2, ax=None, title=''):
"""
Plot a single time series with initial conditions
"""
if ax is None:
fig, ax = plt.subplots()

# Create the MSG Model and simulate with initial conditions


model = MSGSync(s1, θ, δ, ρ)
n1, n2 = model.simulate_n(n1_0, n2_0, 25)

ax.plot(np.arange(25), n1, label="$n_1$", lw=2)


ax.plot(np.arange(25), n2, label="$n_2$", lw=2)

ax.legend()
ax.set(title=title, ylim=(0.15, 0.8))

return ax

# Create figure
fig, ax = plt.subplots(2, 1, figsize=(10, 8))

plot_timeseries(0.15, 0.35, ax=ax[0], title='Not Synchronized')


plot_timeseries(0.4, 0.3, ax=ax[1], title='Synchronized')

fig.tight_layout()

plt.show()

In the first case, innovation in the two countries does not synchronize.
In the second case, different initial conditions are chosen, and the cycles become synchro-
nized.

16.5.2 Basin of Attraction

Next, let’s study the initial conditions that lead to synchronized cycles more systematically.
We generate time series from a large collection of different initial conditions and mark those
conditions with different colors according to whether synchronization occurs or not.
The next display shows exactly this for four different parameterizations (one for each subfig-
ure).

Dark colors indicate synchronization, while light colors indicate failure to synchronize.

As you can see, larger values of 𝜌 translate to more synchronization.


You are asked to replicate this figure in the exercises.
In the solution to the exercises, you’ll also find a figure with sliders, allowing you to experi-
ment with different parameters.
Here’s one snapshot from the interactive figure

16.6 Exercises

16.6.1 Exercise 1

Replicate the figure shown above by coloring initial conditions according to whether or not
synchronization occurs from those conditions.

16.7 Solutions

In [4]: def plot_attraction_basis(s1=0.5, θ=2.5, δ=0.7, ρ=0.2, npts=250, ax=None):


if ax is None:
fig, ax = plt.subplots()

# Create attraction basis


unitrange = np.linspace(0, 1, npts)
model = MSGSync(s1, θ, δ, ρ)
ab = model.create_attraction_basis(npts=npts)
cf = ax.pcolormesh(unitrange, unitrange, ab, cmap="viridis")

return ab, cf

fig = plt.figure(figsize=(14, 12))

# Left - Bottom - Width - Height


ax0 = fig.add_axes((0.05, 0.475, 0.38, 0.35), label="axes0")
ax1 = fig.add_axes((0.5, 0.475, 0.38, 0.35), label="axes1")
ax2 = fig.add_axes((0.05, 0.05, 0.38, 0.35), label="axes2")
ax3 = fig.add_axes((0.5, 0.05, 0.38, 0.35), label="axes3")

params = [[0.5, 2.5, 0.7, 0.2],


[0.5, 2.5, 0.7, 0.4],
[0.5, 2.5, 0.7, 0.6],
[0.5, 2.5, 0.7, 0.8]]

ab0, cf0 = plot_attraction_basis(*params[0], npts=500, ax=ax0)


ab1, cf1 = plot_attraction_basis(*params[1], npts=500, ax=ax1)
ab2, cf2 = plot_attraction_basis(*params[2], npts=500, ax=ax2)
ab3, cf3 = plot_attraction_basis(*params[3], npts=500, ax=ax3)

cbar_ax = fig.add_axes([0.9, 0.075, 0.03, 0.725])


plt.colorbar(cf0, cax=cbar_ax)

ax0.set_title(r"$s_1=0.5$, $\theta=2.5$, $\delta=0.7$, $\rho=0.2$",


fontsize=22)
ax1.set_title(r"$s_1=0.5$, $\theta=2.5$, $\delta=0.7$, $\rho=0.4$",
fontsize=22)
ax2.set_title(r"$s_1=0.5$, $\theta=2.5$, $\delta=0.7$, $\rho=0.6$",
fontsize=22)
ax3.set_title(r"$s_1=0.5$, $\theta=2.5$, $\delta=0.7$, $\rho=0.8$",
fontsize=22)

fig.suptitle("Synchronized versus Asynchronized 2-cycles",


x=0.475, y=0.915, size=26)
plt.show()

16.7.1 Interactive Version

Additionally, instead of just seeing four plots at once, we might want to be able to change $\rho$ manually and see how it affects the plot in real time. Below we use an interactive plot to do this.
Note, interactive plotting requires the ipywidgets module to be installed and enabled.

In [5]: def interact_attraction_basis(ρ=0.2, maxiter=250, npts=250):


# Create the figure and axis that we will plot on
fig, ax = plt.subplots(figsize=(12, 10))

# Create model and attraction basis


s1, θ, δ = 0.5, 2.5, 0.75
model = MSGSync(s1, θ, δ, ρ)
ab = model.create_attraction_basis(maxiter=maxiter, npts=npts)

# Color map with colormesh


unitrange = np.linspace(0, 1, npts)
cf = ax.pcolormesh(unitrange, unitrange, ab, cmap="viridis")
cbar_ax = fig.add_axes([0.95, 0.15, 0.05, 0.7])
plt.colorbar(cf, cax=cbar_ax)
plt.show()
return None

In [6]: fig = interact(interact_attraction_basis,


ρ=(0.0, 1.0, 0.05),
maxiter=(50, 5000, 50),
npts=(25, 750, 25))
Chapter 17

Coase’s Theory of the Firm

17.1 Contents

• Overview 17.2
• The Model 17.3
• Equilibrium 17.4
• Existence, Uniqueness and Computation of Equilibria 17.5
• Implementation 17.6
• Exercises 17.7
• Solutions 17.8

17.2 Overview

In 1937, Ronald Coase wrote a brilliant essay on the nature of the firm [16].
Coase was writing at a time when the Soviet Union was rising to become a significant indus-
trial power.
At the same time, many free-market economies were afflicted by a severe and painful depres-
sion.
This contrast led to an intensive debate on the relative merits of decentralized, price-based
allocation versus top-down planning.
In the midst of this debate, Coase made an important observation: even in free-market
economies, a great deal of top-down planning does in fact take place.
This is because firms form an integral part of free-market economies and, within firms, alloca-
tion is by planning.
In other words, free-market economies blend both planning (within firms) and decentralized
production coordinated by prices.
The question Coase asked is this: if prices and free markets are so efficient, then why do firms
even exist?
Couldn’t the associated within-firm planning be done more efficiently by the market?
We’ll use the following imports:

In [1]: import numpy as np


import matplotlib.pyplot as plt


%matplotlib inline
from scipy.optimize import fminbound
from interpolation import interp

17.2.1 Why Firms Exist

On top of asking a deep and fascinating question, Coase also supplied an illuminating answer:
firms exist because of transaction costs.
Here’s one example of a transaction cost:
Suppose agent A is considering setting up a small business and needs a web developer to con-
struct and help run an online store.
She can use the labor of agent B, a web developer, by writing up a freelance contract for
these tasks and agreeing on a suitable price.
But contracts like this can be time-consuming and difficult to verify
• How will agent A be able to specify exactly what she wants, to the finest detail, when
she herself isn’t sure how the business will evolve?
• And what if she isn’t familiar with web technology? How can she specify all the relevant
details?
• And, if things go badly, will failure to comply with the contract be verifiable in court?
In this situation, perhaps it will be easier to employ agent B under a simple labor contract.
The cost of this contract is far smaller because such contracts are simpler and more standard.
The basic agreement in a labor contract is: B will do what A asks him to do for the term of
the contract, in return for a given salary.
Making this agreement is much easier than trying to map every task out in advance in a con-
tract that will hold up in a court of law.
So agent A decides to hire agent B and a firm of nontrivial size appears, due to transaction
costs.

17.2.2 A Trade-Off

Actually, we haven’t yet come to the heart of Coase’s investigation.


The issue of why firms exist is a binary question: should firms have positive size or zero size?
A better and more general question is: what determines the size of firms?
The answer Coase came up with was that “a firm will tend to expand until the costs of or-
ganizing an extra transaction within the firm become equal to the costs of carrying out the
same transaction by means of an exchange on the open market…” ([16], p. 395).
But what are these internal and external costs?
In short, Coase envisaged a trade-off between
• transaction costs, which add to the expense of operating between firms, and
• diminishing returns to management, which adds to the expense of operating within
firms

We discussed an example of transaction costs above (contracts).


The other cost, diminishing returns to management, is a catch-all for the idea that big opera-
tions are increasingly costly to manage.
For example, you could think of management as a pyramid, so hiring more workers to im-
plement more tasks requires expansion of the pyramid, and hence labor costs grow at a rate
more than proportional to the range of tasks.
Diminishing returns to management makes in-house production expensive, favoring small
firms.

17.2.3 Summary

Here’s a summary of our discussion:


• Firms grow because transaction costs encourage them to take some operations in house.
• But as they get large, in-house operations become costly due to diminishing returns to
management.
• The size of firms is determined by balancing these effects, thereby equalizing the
marginal costs of each form of operation.

17.2.4 A Quantitative Interpretation

Coase's ideas were expressed verbally, without any mathematics.


In fact, his essay is a wonderful example of how far you can get with clear thinking and plain
English.
However, plain English is not good for quantitative analysis, so let’s bring some mathematical
and computation tools to bear.
In doing so we’ll add a bit more structure than Coase did, but this price will be worth pay-
ing.
Our exposition is based on [39].

17.3 The Model

The model we study involves production of a single unit of a final good.


Production requires a linearly ordered chain, requiring sequential completion of a large num-
ber of processing stages.
The stages are indexed by 𝑡 ∈ [0, 1], with 𝑡 = 0 indicating that no tasks have been undertaken
and 𝑡 = 1 indicating that the good is complete.

17.3.1 Subcontracting

The subcontracting scheme by which tasks are allocated across firms is illustrated in the fig-
ure below

In this example,
• Firm 1 receives a contract to sell one unit of the completed good to a final buyer.
• Firm 1 then forms a contract with firm 2 to purchase the partially completed good at
stage 𝑡1 , with the intention of implementing the remaining 1 − 𝑡1 tasks in-house (i.e.,
processing from stage 𝑡1 to stage 1).
• Firm 2 repeats this procedure, forming a contract with firm 3 to purchase the good at
stage 𝑡2 .
• Firm 3 decides to complete the chain, selecting 𝑡3 = 0.
At this point, production unfolds in the opposite direction (i.e., from upstream to down-
stream).
• Firm 3 completes processing stages from 𝑡3 = 0 up to 𝑡2 and transfers the good to firm
2.
• Firm 2 then processes from 𝑡2 up to 𝑡1 and transfers the good to firm 1,
• Firm 1 processes from 𝑡1 to 1 and delivers the completed good to the final buyer.
The length of the interval of stages (range of tasks) carried out by firm 𝑖 is denoted by ℓ𝑖 .

Each firm chooses only its upstream boundary, treating its downstream boundary as given.
The benefit of this formulation is that it implies a recursive structure for the decision problem
for each firm.
In choosing how many processing stages to subcontract, each successive firm faces essentially
the same decision problem as the firm above it in the chain, with the only difference being

that the decision space is a subinterval of the decision space for the firm above.
We will exploit this recursive structure in our study of equilibrium.

17.3.2 Costs

Recall that we are considering a trade-off between two types of costs.


Let’s discuss these costs and how we represent them mathematically.
Diminishing returns to management means rising costs per task when a firm expands
the range of productive activities coordinated by its managers.
We represent these ideas by taking the cost of carrying out ℓ tasks in-house to be 𝑐(ℓ), where
𝑐 is increasing and strictly convex.
Thus, the average cost per task rises with the range of tasks performed in-house.
We also assume that 𝑐 is continuously differentiable, with 𝑐(0) = 0 and 𝑐′ (0) > 0.
Transaction costs are represented as a wedge between the buyer’s and seller’s prices.
It matters little for us whether the transaction cost is borne by the buyer or the seller.
Here we assume that the cost is borne only by the buyer.
In particular, when two firms agree to a trade at face value 𝑣, the buyer’s total outlay is 𝛿𝑣,
where 𝛿 > 1.
The seller receives only 𝑣, and the difference is paid to agents outside the model.

17.4 Equilibrium

We assume that all firms are ex-ante identical and act as price takers.
As price takers, they face a price function 𝑝, which is a map from [0, 1] to ℝ+ , with 𝑝(𝑡) inter-
preted as the price of the good at processing stage 𝑡.
There is a countable infinity of firms indexed by 𝑖 and no barriers to entry.
The cost of supplying the initial input (the good processed up to stage zero) is set to zero for
simplicity.
Free entry and the infinite fringe of competitors rule out positive profits for incumbents, since
any incumbent could be replaced by a member of the competitive fringe filling the same role
in the production chain.
Profits are never negative in equilibrium because firms can freely exit.

17.4.1 Informal Definition of Equilibrium

An equilibrium in this setting is an allocation of firms and a price function such that

1. all active firms in the chain make zero profits, including suppliers of raw materials

2. no firm in the production chain has an incentive to deviate, and

3. no inactive firms can enter and extract positive profits



17.4.2 Formal Definition of Equilibrium

Let’s make this definition more formal.


(You might like to skip this section on first reading)
An allocation of firms is a nonnegative sequence {ℓ𝑖 }𝑖∈ℕ such that ℓ𝑖 = 0 for all sufficiently
large 𝑖.
Recalling the figures above,
• ℓ𝑖 represents the range of tasks implemented by the 𝑖-th firm
As a labeling convention, we assume that firms enter in order, with firm 1 being the furthest
downstream.
An allocation $\{\ell_i\}$ is called feasible if $\sum_{i \ge 1} \ell_i = 1$.
In a feasible allocation, the entire production process is completed by finitely many firms.
Given a feasible allocation, {ℓ𝑖 }, let {𝑡𝑖 } represent the corresponding transaction stages, de-
fined by

$$t_0 = s \quad \text{and} \quad t_i = t_{i-1} - \ell_i \tag{1}$$

In particular, 𝑡𝑖−1 is the downstream boundary of firm 𝑖 and 𝑡𝑖 is its upstream boundary.
As transaction costs are incurred only by the buyer, its profits are

$$\pi_i = p(t_{i-1}) - c(\ell_i) - \delta p(t_i) \tag{2}$$

Given a price function 𝑝 and a feasible allocation {ℓ𝑖 }, let


• {𝑡𝑖 } be the corresponding firm boundaries.
• {𝜋𝑖 } be corresponding profits, as defined in (2).
This price-allocation pair is called an equilibrium for the production chain if

1. 𝑝(0) = 0,
2. 𝜋𝑖 = 0 for all 𝑖, and
3. $p(s) - c(s - t) - \delta p(t) \le 0$ for any pair $s, t$ with $0 \le t \le s \le 1$.

The rationale behind these conditions was given in our informal definition of equilibrium
above.

17.5 Existence, Uniqueness and Computation of Equilibria

We have defined an equilibrium but does one exist? Is it unique? And, if so, how can we com-
pute it?

17.5.1 A Fixed Point Method

To address these questions, we introduce the operator 𝑇 mapping a nonnegative function 𝑝 on


[0, 1] to 𝑇 𝑝 via

$$Tp(s) = \min_{t \le s} \left\{ c(s - t) + \delta p(t) \right\} \quad \text{for all } s \in [0, 1]. \tag{3}$$

Here and below, the restriction 0 ≤ 𝑡 in the minimum is understood.


The operator 𝑇 is similar to a Bellman operator.
Under this analogy, 𝑝 corresponds to a value function and 𝛿 to a discount factor.
But 𝛿 > 1, so 𝑇 is not a contraction in any obvious metric, and in fact, 𝑇 𝑛 𝑝 diverges for
many choices of 𝑝.
Nevertheless, there exists a domain on which 𝑇 is well-behaved: the set of convex increasing
continuous functions 𝑝 ∶ [0, 1] → ℝ such that 𝑐′ (0)𝑠 ≤ 𝑝(𝑠) ≤ 𝑐(𝑠) for all 0 ≤ 𝑠 ≤ 1.
We denote this set of functions by 𝒫.
In [39] it is shown that the following statements are true:

1. 𝑇 maps 𝒫 into itself.

2. 𝑇 has a unique fixed point in 𝒫, denoted below by 𝑝∗ .

3. For all 𝑝 ∈ 𝒫 we have 𝑇 𝑘 𝑝 → 𝑝∗ uniformly as 𝑘 → ∞.

Now consider the choice function

$$t^*(s) := \text{the solution to } \min_{t \le s} \left\{ c(s - t) + \delta p^*(t) \right\} \tag{4}$$

By definition, 𝑡∗ (𝑠) is the cost-minimizing upstream boundary for a firm that is contracted to
deliver the good at stage 𝑠 and faces the price function 𝑝∗ .
Since 𝑝∗ lies in 𝒫 and since 𝑐 is strictly convex, it follows that the right-hand side of (4) is
continuous and strictly convex in 𝑡.
Hence the minimizer 𝑡∗ (𝑠) exists and is uniquely defined.
We can use 𝑡∗ to construct an equilibrium allocation as follows:
Recall that firm 1 sells the completed good at stage $s = 1$, so its optimal upstream boundary is $t^*(1)$.
Hence firm 2’s optimal upstream boundary is 𝑡∗ (𝑡∗ (1)).
Continuing in this way produces the sequence {𝑡∗𝑖 } defined by

$$t^*_0 = 1 \quad \text{and} \quad t^*_i = t^*(t^*_{i-1}) \tag{5}$$

The sequence ends when a firm chooses to complete all remaining tasks.
We label this firm (and hence the number of firms in the chain) as

$$n^* := \inf \{ i \in \mathbb{N} : t^*_i = 0 \} \tag{6}$$

The task allocation corresponding to (5) is given by $\ell^*_i := t^*_{i-1} - t^*_i$ for all $i$.
In [39] it is shown that

1. The value 𝑛∗ in (6) is well-defined and finite,

2. the allocation {ℓ𝑖∗ } is feasible, and

3. the price function $p^*$ and this allocation together form an equilibrium for the production chain.

While the proofs are too long to repeat here, much of the insight can be obtained by observ-
ing that, as a fixed point of 𝑇 , the equilibrium price function must satisfy

$$p^*(s) = \min_{t \le s} \left\{ c(s - t) + \delta p^*(t) \right\} \quad \text{for all } s \in [0, 1] \tag{7}$$

From this equation, it is clear that profits are zero for all incumbent firms.

17.5.2 Marginal Conditions

We can develop some additional insights on the behavior of firms by examining marginal con-
ditions associated with the equilibrium.
As a first step, let ℓ∗ (𝑠) ∶= 𝑠 − 𝑡∗ (𝑠).
This is the cost-minimizing range of in-house tasks for a firm with downstream boundary 𝑠.
In [39] it is shown that 𝑡∗ and ℓ∗ are increasing and continuous, while 𝑝∗ is continuously dif-
ferentiable at all 𝑠 ∈ (0, 1) with

$$(p^*)'(s) = c'(\ell^*(s)) \tag{8}$$

Equation (8) follows from 𝑝∗ (𝑠) = min𝑡≤𝑠 {𝑐(𝑠 − 𝑡) + 𝛿𝑝∗ (𝑡)} and the envelope theorem for
derivatives.
A related equation is the first order condition for $p^*(s) = \min_{t \le s} \{c(s - t) + \delta p^*(t)\}$, the minimization problem for a firm with downstream boundary $s$, which is

$$\delta (p^*)'(t^*(s)) = c'(s - t^*(s)) \tag{9}$$

This condition matches the marginal condition expressed verbally by Coase that we stated
above:

“A firm will tend to expand until the costs of organizing an extra transaction
within the firm become equal to the costs of carrying out the same transaction
by means of an exchange on the open market…”

Combining (8) and (9) and evaluating at 𝑠 = 𝑡𝑖 , we see that active firms that are adjacent
satisfy

$$\delta \, c'(\ell^*_{i+1}) = c'(\ell^*_i) \tag{10}$$

In other words, the marginal in-house cost per task at a given firm is equal to that of its upstream partner multiplied by the gross transaction cost.

This expression can be thought of as a Coase–Euler equation, which determines inter-firm


efficiency by indicating how two costly forms of coordination (markets and management) are
jointly minimized in equilibrium.
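To see what (10) implies quantitatively, suppose, as in the implementation below, that $c(\ell) = e^{10\ell} - 1$, so that $c'(\ell) = 10 e^{10\ell}$. Then (10) reduces to $\delta e^{10 \ell^*_{i+1}} = e^{10 \ell^*_i}$, or

$$\ell^*_i = \ell^*_{i+1} + \frac{\ln \delta}{10}$$

so that, in this special case, each firm implements $\ln(\delta)/10$ more tasks than its upstream neighbor and firm size grows linearly with downstreamness.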

17.6 Implementation

For most specifications of primitives, there is no closed-form solution for the equilibrium as
far as we are aware.
However, we know that we can compute the equilibrium corresponding to a given transaction
cost parameter 𝛿 and a cost function 𝑐 by applying the results stated above.
In particular, we can

1. fix initial condition 𝑝 ∈ 𝒫,

2. iterate with 𝑇 until 𝑇 𝑛 𝑝 has converged to 𝑝∗ , and

3. recover firm choices via the choice function (4)

At each iterate, we will use continuous piecewise linear interpolation of functions.


To begin, here’s a class to store primitives and a grid:

In [2]: class ProductionChain:

            def __init__(self,
                         n=1000,
                         delta=1.05,
                         c=lambda t: np.exp(10 * t) - 1):

                self.n, self.delta, self.c = n, delta, c
                self.grid = np.linspace(1e-04, 1, n)

Now let’s implement and iterate with 𝑇 until convergence.


Recalling that our initial condition must lie in 𝒫, we set 𝑝0 = 𝑐

In [3]: def compute_prices(pc, tol=1e-5, max_iter=5000):
            """
            Compute prices by iterating with T

            * pc is an instance of ProductionChain
            * The initial condition is p = c

            """
            delta, c, n, grid = pc.delta, pc.c, pc.n, pc.grid
            p = c(grid)  # Initial condition is c(s), as an array
            new_p = np.empty_like(p)
            error = tol + 1
            i = 0

            while error > tol and i < max_iter:
                for j, s in enumerate(grid):  # j, not i, so the iteration counter survives
                    Tp = lambda t: delta * interp(grid, p, t) + c(s - t)
                    new_p[j] = Tp(fminbound(Tp, 0, s))
                error = np.max(np.abs(p - new_p))
                p = new_p.copy()  # copy so that p and new_p remain distinct arrays
                i = i + 1

            if i < max_iter:
                print(f"Iteration converged in {i} steps")
            else:
                print(f"Warning: iteration hit upper bound {max_iter}")

            p_func = lambda x: interp(grid, p, x)

            return p_func

The next function computes the optimal choice of upstream boundary and the range of tasks implemented for a firm facing price function p_function with downstream boundary $s$.

In [4]: def optimal_choices(pc, p_function, s):
            """
            Takes p_function as the true function, minimizes on [0, s]

            Returns optimal upstream boundary t_star and optimal size of
            firm ell_star

            In fact, the algorithm minimizes on [-1, s] and then takes the
            max of the minimizer and zero. This gives better results
            close to zero

            """
            delta, c = pc.delta, pc.c
            f = lambda t: delta * p_function(t) + c(s - t)
            t_star = max(fminbound(f, -1, s), 0)
            ell_star = s - t_star
            return t_star, ell_star

The allocation of firms can be computed by recursively stepping through firms’ choices of
their respective upstream boundary, treating the previous firm’s upstream boundary as their
own downstream boundary.
In doing so, we start with firm 1, who has downstream boundary 𝑠 = 1.

In [5]: def compute_stages(pc, p_function):
            s = 1.0
            transaction_stages = [s]
            while s > 0:
                s, ell = optimal_choices(pc, p_function, s)
                transaction_stages.append(s)
            return np.array(transaction_stages)

Let’s try this at the default parameters.


The next figure shows the equilibrium price function, as well as the boundaries of firms as
vertical lines

In [6]: pc = ProductionChain()
        p_star = compute_prices(pc)
        transaction_stages = compute_stages(pc, p_star)

        fig, ax = plt.subplots()
        ax.plot(pc.grid, p_star(pc.grid))
        ax.set_xlim(0.0, 1.0)
        ax.set_ylim(0.0)
        for s in transaction_stages:
            ax.axvline(x=s, c="0.5")
        plt.show()

Iteration converged in 1000 steps

Here’s the function ℓ∗ , which shows how large a firm with downstream boundary 𝑠 chooses to
be

In [7]: ell_star = np.empty(pc.n)

        for i, s in enumerate(pc.grid):
            t, e = optimal_choices(pc, p_star, s)
            ell_star[i] = e

        fig, ax = plt.subplots()
        ax.plot(pc.grid, ell_star, label="$\ell^*$")
        ax.legend(fontsize=14)
        plt.show()

Note that downstream firms choose to be larger, a point we return to below.

17.7 Exercises

17.7.1 Exercise 1

The number of firms is endogenously determined by the primitives.


What do you think will happen in terms of the number of firms as 𝛿 increases? Why?
Check your intuition by computing the number of firms at delta in (1.01, 1.05, 1.1).

17.7.2 Exercise 2

The value added of firm 𝑖 is 𝑣𝑖 ∶= 𝑝∗ (𝑡𝑖−1 ) − 𝑝∗ (𝑡𝑖 ).


One of the interesting predictions of the model is that value added is increasing with down-
streamness, as are several other measures of firm size.
Can you give any intuition?
Try to verify this phenomenon (value added increasing with downstreamness) using the code
above.

17.8 Solutions

17.8.1 Exercise 1

In [8]: for delta in (1.01, 1.05, 1.1):
            pc = ProductionChain(delta=delta)
            p_star = compute_prices(pc)
            transaction_stages = compute_stages(pc, p_star)
            num_firms = len(transaction_stages)
            print(f"When delta={delta} there are {num_firms} firms")

Iteration converged in 1000 steps


When delta=1.01 there are 64 firms
Iteration converged in 1000 steps
When delta=1.05 there are 41 firms
Iteration converged in 1000 steps
When delta=1.1 there are 35 firms

17.8.2 Exercise 2

Firm size increases with downstreamness because 𝑝∗ , the equilibrium price function, is in-
creasing and strictly convex.
This means that, for a given producer, the marginal cost of the input purchased from the pro-
ducer just upstream from itself in the chain increases as we go further downstream.
Hence downstream firms choose to do more in house than upstream firms — and are therefore
larger.
The equilibrium price function is strictly convex due to both transaction costs and diminish-
ing returns to management.
One way to put this is that firms are prevented from completely mitigating the costs associ-
ated with diminishing returns to management — which induce convexity — by transaction
costs. This is because transaction costs force firms to have nontrivial size.
Here’s one way to compute and graph value added across firms

In [9]: pc = ProductionChain()
        p_star = compute_prices(pc)
        stages = compute_stages(pc, p_star)

        va = []
        for i in range(len(stages) - 1):
            va.append(p_star(stages[i]) - p_star(stages[i+1]))

        fig, ax = plt.subplots()
        ax.plot(va, label="value added by firm")
        ax.set_xticks((5, 25))
        ax.set_xticklabels(("downstream firms", "upstream firms"))
        plt.show()

Iteration converged in 1000 steps


Part IV

Dynamic Linear Economies

Chapter 18

Recursive Models of Dynamic


Linear Economies

18.1 Contents

• A Suite of Models 18.2


• Econometrics 18.3
• Dynamic Demand Curves and Canonical Household Technologies 18.4
• Gorman Aggregation and Engel Curves 18.5
• Partial Equilibrium 18.6
• Equilibrium Investment Under Uncertainty 18.7
• A Rosen-Topel Housing Model 18.8
• Cattle Cycles 18.9
• Models of Occupational Choice and Pay 18.10
• Permanent Income Models 18.11
• Gorman Heterogeneous Households 18.12
• Non-Gorman Heterogeneous Households 18.13

“Mathematics is the art of giving the same name to different things” – Henri
Poincaré

“Complete market economies are all alike” – Robert E. Lucas, Jr., (1989)

“Every partial equilibrium model can be reinterpreted as a general equilibrium


model.” – Anonymous

18.2 A Suite of Models

This lecture presents a class of linear-quadratic-Gaussian models of general economic equilib-


rium designed by Lars Peter Hansen and Thomas J. Sargent [31].
The class of models is implemented in a Python class DLE that is part of quantecon.
Subsequent lectures use the DLE class to implement various instances that have appeared in
the economics literature


1. Growth in Dynamic Linear Economies

2. Lucas Asset Pricing using DLE

3. IRFs in Hall Model

4. Permanent Income Using the DLE class

5. Rosen schooling model

6. Cattle cycles

7. Shock Non Invertibility

18.2.1 Overview of the Models

In saying that “complete markets are all alike”, Robert E. Lucas, Jr. was noting that all of
them have
• a commodity space.
• a space dual to the commodity space in which prices reside.
• endowments of resources.
• peoples’ preferences over goods.
• physical technologies for transforming resources into goods.
• random processes that govern shocks to technologies and preferences and associated in-
formation flows.
• a single budget constraint per person.
• the existence of a representative consumer even when there are many people in the
model.
• a concept of competitive equilibrium.
• theorems connecting competitive equilibrium allocations to allocations that would be
chosen by a benevolent social planner.
The models have no frictions such as …
• Enforcement difficulties
• Information asymmetries
• Other forms of transactions costs
• Externalities
The models extensively use the powerful ideas of
• Indexing commodities and their prices by time (John R. Hicks).
• Indexing commodities and their prices by chance (Kenneth Arrow).
Much of the imperialism of complete markets models comes from applying these two tricks.
The Hicks trick of indexing commodities by time is the idea that dynamics are a special
case of statics.
The Arrow trick of indexing commodities by chance is the idea that analysis of trade un-
der uncertainty is a special case of the analysis of trade under certainty.
The [31] class of models specify the commodity space, preferences, technologies, stochastic
shocks and information flows in ways that allow the models to be analyzed completely using
only the tools of linear time series models and linear-quadratic optimal control described in
the two lectures Linear State Space Models and Linear Quadratic Control.

There are costs and benefits associated with the simplifications and specializations needed to
make a particular model fit within the [31] class
• the costs are that linear-quadratic structures are sometimes too confining.
• benefits include computational speed, simplicity, and ability to analyze many model fea-
tures analytically or nearly analytically.
A variety of superficially different models are all instances of the [31] class of models
• Lucas asset pricing model
• Lucas-Prescott model of investment under uncertainty
• Asset pricing models with habit persistence
• Rosen-Topel equilibrium model of housing
• Rosen schooling models
• Rosen-Murphy-Scheinkman model of cattle cycles
• Hansen-Sargent-Tallarini model of robustness and asset pricing
• Many more …
The diversity of these models conceals an essential unity that illustrates the quotation by
Robert E. Lucas, Jr., with which we began this lecture.

18.2.2 Forecasting?

A consequence of a single budget constraint per person plus the Hicks-Arrow tricks is that
households and firms need not forecast.
But there exist equivalent structures called recursive competitive equilibria in which they
do appear to need to forecast.
In these structures, to forecast, households and firms use:
• equilibrium pricing functions, and
• knowledge of the Markov structure of the economy’s state vector.

18.2.3 Theory and Econometrics

For an application of the [31] class of models, the outcome of theorizing is a stochastic pro-
cess, i.e., a probability distribution over sequences of prices and quantities, indexed by param-
eters describing preferences, technologies, and information flows.
Another name for that object is a likelihood function, a key object of both frequentist and
Bayesian statistics.
There are two important uses of an equilibrium stochastic process or likelihood func-
tion.
The first is to solve the direct problem.
The direct problem takes as inputs values of the parameters that define preferences, tech-
nologies, and information flows and as an output characterizes or simulates random paths of
quantities and prices.
The second use of an equilibrium stochastic process or likelihood function is to solve the in-
verse problem.
The inverse problem takes as an input a time series sample of observations on a subset of
prices and quantities determined by the model and from them makes inferences about the

parameters that define the model’s preferences, technologies, and information flows.

18.2.4 More Details

A [31] economy consists of lists of matrices that describe peoples’ household technologies,
their preferences over consumption services, their production technologies, and their informa-
tion sets.
There are complete markets in history-contingent commodities.
Competitive equilibrium allocations and prices
• satisfy equations that are easy to write down and solve
• have representations that are convenient econometrically
Different example economies manifest themselves simply as different settings for various ma-
trices.
[31] use these tools:
• A theory of recursive dynamic competitive economies
• Linear optimal control theory
• Recursive methods for estimating and interpreting vector autoregressions
The models are flexible enough to express alternative senses of a representative household
• A single ‘stand-in’ household of the type used to good effect by Edward C. Prescott.
• Heterogeneous households satisfying conditions for Gorman aggregation into a represen-
tative household.
• Heterogeneous household technologies that violate conditions for Gorman aggregation
but are still susceptible to aggregation into a single representative household via ‘non-
Gorman’ or ‘mongrel’ aggregation’.
These three alternative types of aggregation have different consequences in terms of how
prices and allocations can be computed.
In particular, can prices and an aggregate allocation be computed before the equilibrium allo-
cation to individual heterogeneous households is computed?
• Answers are “Yes” for Gorman aggregation, “No” for non-Gorman aggregation.
In summary, the insights and practical benefits from economics to be introduced in this lec-
ture are
• Deeper understandings that come from recognizing common underlying structures.
• Speed and ease of computation that comes from unleashing a common suite of Python
programs.
We’ll use the following mathematical tools
• Stochastic Difference Equations (Linear).
• Duality: LQ Dynamic Programming and Linear Filtering are the same things mathe-
matically.
• The Spectral Factorization Identity (for understanding vector autoregressions and non-
Gorman aggregation).
So here is our roadmap.
We’ll describe sets of matrices that pin down

• Information
• Technologies
• Preferences
Then we’ll describe
• Equilibrium concept and computation
• Econometric representation and estimation

18.2.5 Stochastic Model of Information Flows and Outcomes

We’ll use stochastic linear difference equations to describe information flows and equilibrium
outcomes.
The sequence {𝑤𝑡 ∶ 𝑡 = 1, 2, …} is said to be a martingale difference sequence adapted to
{𝐽𝑡 ∶ 𝑡 = 0, 1, …} if 𝐸(𝑤𝑡+1 |𝐽𝑡 ) = 0 for 𝑡 = 0, 1, … .

The sequence $\{w_t : t = 1, 2, \ldots\}$ is said to be conditionally homoskedastic if $E(w_{t+1} w_{t+1}' \mid J_t) = I$ for $t = 0, 1, \ldots$.
We assume that the {𝑤𝑡 ∶ 𝑡 = 1, 2, …} process is conditionally homoskedastic.
Let {𝑥𝑡 ∶ 𝑡 = 1, 2, …} be a sequence of 𝑛-dimensional random vectors, i.e. an 𝑛-dimensional
stochastic process.
The process {𝑥𝑡 ∶ 𝑡 = 1, 2, …} is constructed recursively using an initial random vector 𝑥0 ∼
𝒩(𝑥0̂ , Σ0 ) and a time-invariant law of motion:

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐶𝑤𝑡+1

for 𝑡 = 0, 1, … where 𝐴 is an 𝑛 by 𝑛 matrix and 𝐶 is an 𝑛 by 𝑁 matrix.


Evidently, the distribution of 𝑥𝑡+1 conditional on 𝑥𝑡 is 𝒩(𝐴𝑥𝑡 , 𝐶𝐶 ′ ).

18.2.6 Information Sets

Let 𝐽0 be generated by 𝑥0 and 𝐽𝑡 be generated by 𝑥0 , 𝑤1 , … , 𝑤𝑡 , which means that 𝐽𝑡 consists


of the set of all measurable functions of {𝑥0 , 𝑤1 , … , 𝑤𝑡 }.

18.2.7 Prediction Theory

The optimal forecast of 𝑥𝑡+1 given current information is

𝐸(𝑥𝑡+1 ∣ 𝐽𝑡 ) = 𝐴𝑥𝑡

and the one-step-ahead forecast error is

𝑥𝑡+1 − 𝐸(𝑥𝑡+1 ∣ 𝐽𝑡 ) = 𝐶𝑤𝑡+1

The covariance matrix of 𝑥𝑡+1 conditioned on 𝐽𝑡 is

𝐸(𝑥𝑡+1 − 𝐸(𝑥𝑡+1 ∣ 𝐽𝑡 ))(𝑥𝑡+1 − 𝐸(𝑥𝑡+1 ∣ 𝐽𝑡 ))′ = 𝐶𝐶 ′



A nonrecursive expression for 𝑥𝑡 as a function of 𝑥0 , 𝑤1 , 𝑤2 , … , 𝑤𝑡 is

$$\begin{aligned}
x_t &= A x_{t-1} + C w_t \\
&= A^2 x_{t-2} + A C w_{t-1} + C w_t \\
&= \left[ \sum_{\tau=0}^{t-1} A^\tau C w_{t-\tau} \right] + A^t x_0
\end{aligned}$$
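As a quick numerical illustration, the following sketch (with hypothetical matrices $A$ and $C$ chosen purely for the example) confirms that this moving-average representation agrees with direct iteration of $x_{t+1} = A x_t + C w_{t+1}$:

import numpy as np

np.random.seed(1)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0],
              [0.5, 0.3]])
x0 = np.ones(2)
t = 5
w = np.random.randn(t, 2)   # w[s] plays the role of w_{s+1}

# Direct iteration of x_{t+1} = A x_t + C w_{t+1}
x = x0
for s in range(t):
    x = A @ x + C @ w[s]

# Moving-average form: x_t = Σ_{τ=0}^{t-1} A^τ C w_{t-τ} + A^t x_0
x_ma = np.linalg.matrix_power(A, t) @ x0
for τ in range(t):
    x_ma = x_ma + np.linalg.matrix_power(A, τ) @ C @ w[t - τ - 1]

print(np.allclose(x, x_ma))  # True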

Shift forward in time:

$$x_{t+j} = \sum_{s=0}^{j-1} A^s C w_{t+j-s} + A^j x_t$$

Projecting on the information set {𝑥0 , 𝑤𝑡 , 𝑤𝑡−1 , … , 𝑤1 } gives

𝐸𝑡 𝑥𝑡+𝑗 = 𝐴𝑗 𝑥𝑡

where 𝐸𝑡 (⋅) ≡ 𝐸[(⋅) ∣ 𝑥0 , 𝑤𝑡 , 𝑤𝑡−1 , … , 𝑤1 ] = 𝐸(⋅) ∣ 𝐽𝑡 , and 𝑥𝑡 is in 𝐽𝑡 .


It is useful to obtain the covariance matrix of the $j$-step-ahead prediction error $x_{t+j} - E_t x_{t+j} = \sum_{s=0}^{j-1} A^s C w_{t-s+j}$.
Evidently,

$$E_t (x_{t+j} - E_t x_{t+j})(x_{t+j} - E_t x_{t+j})' = \sum_{k=0}^{j-1} A^k C C' (A^k)' \equiv v_j$$

𝑣𝑗 can be calculated recursively via

$$v_1 = C C' \quad \text{and} \quad v_j = C C' + A v_{j-1} A', \quad j \ge 2$$
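Here is a minimal sketch of this recursion, reusing the same hypothetical $A$ and $C$ as in the previous code block:

import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0],
              [0.5, 0.3]])

def forecast_error_cov(A, C, j):
    "Compute v_j = Σ_{k=0}^{j-1} A^k C C' (A^k)' by the recursion above."
    v = C @ C.T                       # v_1 = CC'
    for _ in range(j - 1):
        v = C @ C.T + A @ v @ A.T     # v_j = CC' + A v_{j-1} A'
    return v

print(forecast_error_cov(A, C, 3))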

18.2.8 Orthogonal Decomposition

To decompose these covariances into parts attributable to the individual components of 𝑤𝑡 ,


we let 𝑖𝜏 be an 𝑁 -dimensional column vector of zeroes except in position 𝜏 , where there is a
one. Define a matrix 𝜐𝑗,𝜏

$$\upsilon_{j,\tau} = \sum_{k=0}^{j-1} A^k C i_\tau i_\tau' C' (A^k)'$$

Note that $\sum_{\tau=1}^{N} i_\tau i_\tau' = I$, so that we have

$$\sum_{\tau=1}^{N} \upsilon_{j,\tau} = \upsilon_j$$

Evidently, the matrices $\{\upsilon_{j,\tau}, \tau = 1, \ldots, N\}$ give an orthogonal decomposition of the covariance matrix of $j$-step-ahead prediction errors into the parts attributable to each of the components $\tau = 1, \ldots, N$.
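Continuing the hypothetical example above (the matrices A and C and the forecast_error_cov helper from the previous sketch), the decomposition can be verified by summing the parts $\upsilon_{j,\tau}$ over $\tau$:

N = C.shape[1]   # number of shock components
j = 3

total = np.zeros_like(forecast_error_cov(A, C, j))
for τ in range(N):
    i_τ = np.zeros((N, 1))
    i_τ[τ, 0] = 1.0
    total += forecast_error_cov(A, C @ i_τ, j)   # this term is υ_{j,τ}

# The υ_{j,τ} sum to v_j
print(np.allclose(total, forecast_error_cov(A, C, j)))  # True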

18.2.9 Taste and Technology Shocks

𝐸(𝑤𝑡 ∣ 𝐽𝑡−1 ) = 0 and 𝐸(𝑤𝑡 𝑤𝑡′ ∣ 𝐽𝑡−1 ) = 𝐼 for 𝑡 = 1, 2, …

𝑏𝑡 = 𝑈𝑏 𝑧𝑡 and 𝑑𝑡 = 𝑈𝑑 𝑧𝑡 ,

𝑈𝑏 and 𝑈𝑑 are matrices that select entries of 𝑧𝑡 . The law of motion for {𝑧𝑡 ∶ 𝑡 = 0, 1, …} is

𝑧𝑡+1 = 𝐴22 𝑧𝑡 + 𝐶2 𝑤𝑡+1 for 𝑡 = 0, 1, …

where 𝑧0 is a given initial condition. The eigenvalues of the matrix 𝐴22 have absolute values
that are less than or equal to one.
Thus, in summary, our model of information and shocks is

$$\begin{aligned}
z_{t+1} &= A_{22} z_t + C_2 w_{t+1} \\
b_t &= U_b z_t \\
d_t &= U_d z_t
\end{aligned}$$
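As a small illustration, the shock system can be simulated directly; all matrices below are hypothetical choices made for the example, not taken from any particular model:

import numpy as np

np.random.seed(0)
A22 = np.array([[1.0, 0.0],
                [0.0, 0.9]])    # eigenvalues 1.0 and 0.9: absolute values ≤ 1, as assumed
C2 = np.array([[0.0],
               [0.5]])
Ub = np.array([[30.0, 0.0]])    # b_t loads on the constant component of z_t
Ud = np.array([[5.0, 1.0]])     # d_t loads on both components

z = np.array([1.0, 0.0])        # z_0: first component is a constant
for t in range(3):
    w = np.random.randn(1)
    z = A22 @ z + C2 @ w        # z_{t+1} = A_22 z_t + C_2 w_{t+1}
    b, d = Ub @ z, Ud @ z       # b_t = U_b z_t,  d_t = U_d z_t
    print(f"t={t+1}: b={b[0]:.3f}, d={d[0]:.3f}")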

We can now briefly summarize other components of our economies, in particular


• Production technologies
• Household technologies
• Household preferences

18.2.10 Production Technology

Where $c_t$ is a vector of consumption rates, $k_t$ is a vector of physical capital goods, $g_t$ is a vector of intermediate production goods, and $d_t$ is a vector of technology shocks, the production technology is

$$\begin{aligned}
\Phi_c c_t + \Phi_g g_t + \Phi_i i_t &= \Gamma k_{t-1} + d_t \\
k_t &= \Delta_k k_{t-1} + \Theta_k i_t \\
g_t \cdot g_t &= \ell_t^2
\end{aligned}$$

Here Φ𝑐 , Φ𝑔 , Φ𝑖 , Γ, Δ𝑘 , Θ𝑘 are all matrices conformable to the vectors they multiply and ℓ𝑡 is a
disutility generating resource supplied by the household.
For technical reasons that facilitate computations, we make the following.
Assumption: [Φ𝑐 Φ𝑔 ] is nonsingular.

18.2.11 Household Technology

Households confront a technology that allows them to devote consumption goods to construct
a vector ℎ𝑡 of household capital goods and a vector 𝑠𝑡 of utility generating house services

𝑠𝑡 = Λℎ𝑡−1 + Π𝑐𝑡
ℎ𝑡 = Δℎ ℎ𝑡−1 + Θℎ 𝑐𝑡

where Λ, Π, Δℎ , Θℎ are matrices that pin down the household technology.


We make the following
Assumption: The absolute values of the eigenvalues of Δℎ are less than or equal to one.
Below, we’ll outline further assumptions that we shall occasionally impose.

18.2.12 Preferences

Where 𝑏𝑡 is a stochastic process of preference shocks that will play the role of demand
shifters, the representative household orders stochastic processes of consumption services 𝑠𝑡
according to


$$-\left(\frac{1}{2}\right) E \sum_{t=0}^{\infty} \beta^t \left[ (s_t - b_t) \cdot (s_t - b_t) + \ell_t^2 \right] \Big| J_0, \quad 0 < \beta < 1$$

We now proceed to give examples of production and household technologies that appear in various models in the literature.
First, we give examples of production technologies

Φ𝑐 𝑐𝑡 + Φ𝑔 𝑔𝑡 + Φ𝑖 𝑖𝑡 = Γ𝑘𝑡−1 + 𝑑𝑡

∣ 𝑔𝑡 ∣≤ ℓ𝑡

so we’ll be looking for specifications of the matrices Φ𝑐 , Φ𝑔 , Φ𝑖 , Γ, Δ𝑘 , Θ𝑘 that define them.

18.2.13 Endowment Economy

There is a single consumption good that cannot be stored over time.


In time period 𝑡, there is an endowment 𝑑𝑡 of this single good.
There is neither a capital stock, nor an intermediate good, nor a rate of investment.
So 𝑐𝑡 = 𝑑𝑡 .
To implement this specification, we can choose 𝐴22 , 𝐶2 , and 𝑈𝑑 to make 𝑑𝑡 follow any of a
variety of stochastic processes.
To satisfy our earlier rank assumption, we set:

𝑐𝑡 + 𝑖𝑡 = 𝑑1𝑡

𝑔𝑡 = 𝜙1 𝑖𝑡

where 𝜙1 is a small positive number.


To implement this version, we set Δ𝑘 = Θ𝑘 = 0 and

Φ_c = [1, 0]′,  Φ_i = [1, φ_1]′,  Φ_g = [0, −1]′,  Γ = [0, 0]′,  d_t = [d_{1t}, 0]′

We can use this specification to create a linear-quadratic version of Lucas’s (1978) asset pric-
ing model.

18.2.14 Single-Period Adjustment Costs

There is a single consumption good, a single intermediate good, and a single investment good.
The technology is described by

𝑐𝑡 = 𝛾𝑘𝑡−1 + 𝑑1𝑡 , 𝛾 > 0


𝜙1 𝑖𝑡 = 𝑔𝑡 + 𝑑2𝑡 , 𝜙1 > 0
ℓ𝑡2 = 𝑔𝑡2
𝑘𝑡 = 𝛿𝑘 𝑘𝑡−1 + 𝑖𝑡 , 0 < 𝛿𝑘 < 1

Set

Φ_c = [1, 0]′,  Φ_g = [0, −1]′,  Φ_i = [0, φ_1]′

Γ = [γ, 0]′,  Δ_k = δ_k,  Θ_k = 1

We set 𝐴22 , 𝐶2 and 𝑈𝑑 to make (𝑑1𝑡 , 𝑑2𝑡 )′ = 𝑑𝑡 follow a desired stochastic process.
Now we describe some examples of preferences, which as we have seen are ordered by


−(1/2) E ∑_{t=0}^{∞} β^t [(s_t − b_t) ⋅ (s_t − b_t) + (ℓ_t)^2] | J_0, 0 < β < 1

where household services are produced via the household technology

ℎ𝑡 = Δℎ ℎ𝑡−1 + Θℎ 𝑐𝑡

𝑠𝑡 = Λℎ𝑡−1 + Π𝑐𝑡

and we make
Assumption: The absolute values of the eigenvalues of Δℎ are less than or equal to one.
Later we shall introduce canonical household technologies that satisfy an ‘invertibility’ re-
quirement relating sequences {𝑠𝑡 } of services and {𝑐𝑡 } of consumption flows.
And we’ll describe how to obtain a canonical representation of a household technology from
one that is not canonical.
Here are some examples of household preferences.
Time Separable preferences

−(1/2) E ∑_{t=0}^{∞} β^t [(c_t − b_t)^2 + ℓ_t^2] | J_0, 0 < β < 1

Consumer Durables

ℎ𝑡 = 𝛿ℎ ℎ𝑡−1 + 𝑐𝑡 , 0 < 𝛿ℎ < 1

Services at 𝑡 are related to the stock of durables at the beginning of the period:

𝑠𝑡 = 𝜆ℎ𝑡−1 , 𝜆 > 0

Preferences are ordered by

−(1/2) E ∑_{t=0}^{∞} β^t [(λh_{t−1} − b_t)^2 + ℓ_t^2] | J_0

Set Δℎ = 𝛿ℎ , Θℎ = 1, Λ = 𝜆, Π = 0.
Habit Persistence

−(1/2) E ∑_{t=0}^{∞} β^t [(c_t − λ(1 − δ_h) ∑_{j=0}^{∞} δ_h^j c_{t−j−1} − b_t)^2 + ℓ_t^2] | J_0

0 < β < 1,  0 < δ_h < 1,  λ > 0



Here the effective bliss point b_t + λ(1 − δ_h) ∑_{j=0}^{∞} δ_h^j c_{t−j−1} shifts in response to a moving average of past consumption.
Initial Conditions
Preferences of this form require an initial condition for the geometric sum ∑_{j=0}^{∞} δ_h^j c_{t−j−1} that we specify as an initial condition for the ‘stock of household durables,’ h_{−1}.
Set

h_t = δ_h h_{t−1} + (1 − δ_h) c_t, 0 < δ_h < 1

h_t = (1 − δ_h) ∑_{j=0}^{t} δ_h^j c_{t−j} + δ_h^{t+1} h_{−1}

s_t = −λ h_{t−1} + c_t, λ > 0

To implement, set Λ = −𝜆, Π = 1, Δℎ = 𝛿ℎ , Θℎ = 1 − 𝛿ℎ .


Seasonal Habit Persistence

−(1/2) E ∑_{t=0}^{∞} β^t [(c_t − λ(1 − δ_h) ∑_{j=0}^{∞} δ_h^j c_{t−4j−4} − b_t)^2 + ℓ_t^2]

0 < β < 1,  0 < δ_h < 1,  λ > 0



Here the effective bliss point b_t + λ(1 − δ_h) ∑_{j=0}^{∞} δ_h^j c_{t−4j−4} shifts in response to a moving average of past consumptions of the same quarter.

To implement, set

ℎ̃ 𝑡 = 𝛿ℎ ℎ̃ 𝑡−4 + (1 − 𝛿ℎ )𝑐𝑡 , 0 < 𝛿ℎ < 1

This implies that

      ⎡ h̃_t     ⎤   ⎡ 0  0  0  δ_h ⎤ ⎡ h̃_{t−1} ⎤   ⎡ (1 − δ_h) ⎤
h_t = ⎢ h̃_{t−1} ⎥ = ⎢ 1  0  0   0  ⎥ ⎢ h̃_{t−2} ⎥ + ⎢     0     ⎥ c_t
      ⎢ h̃_{t−2} ⎥   ⎢ 0  1  0   0  ⎥ ⎢ h̃_{t−3} ⎥   ⎢     0     ⎥
      ⎣ h̃_{t−3} ⎦   ⎣ 0  0  1   0  ⎦ ⎣ h̃_{t−4} ⎦   ⎣     0     ⎦

with consumption services

s_t = −[0  0  0  λ] h_{t−1} + c_t,  λ > 0

Adjustment Costs.
Recall


−(1/2) E ∑_{t=0}^{∞} β^t [(c_t − b_{1t})^2 + λ^2 (c_t − c_{t−1})^2 + ℓ_t^2] | J_0

0 < β < 1,  λ > 0

To capture adjustment costs, set

h_t = c_t

s_t = [0, −λ]′ h_{t−1} + [1, λ]′ c_t

so that

𝑠1𝑡 = 𝑐𝑡

𝑠2𝑡 = 𝜆(𝑐𝑡 − 𝑐𝑡−1 )

We set the first component 𝑏1𝑡 of 𝑏𝑡 to capture the stochastic bliss process and set the second
component identically equal to zero.
Thus, we set Δℎ = 0, Θℎ = 1

Λ = [0, −λ]′,  Π = [1, λ]′

Multiple Consumption Goods



Λ = [0, 0]′ and Π = ⎡ π_1   0  ⎤
                    ⎣ π_2  π_3 ⎦

−(1/2) β^t (Πc_t − b_t)′(Πc_t − b_t)

𝑚𝑢𝑡 = −𝛽 𝑡 [Π′ Π 𝑐𝑡 − Π′ 𝑏𝑡 ]

𝑐𝑡 = −(Π′ Π)−1 𝛽 −𝑡 𝑚𝑢𝑡 + (Π′ Π)−1 Π′ 𝑏𝑡

This is called the Frisch demand function for consumption.


We can think of the vector 𝑚𝑢𝑡 as playing the role of prices, up to a common factor, for all
dates and states.
The scale factor is determined by the choice of numeraire.
Notions of substitutes and complements can be defined in terms of these Frisch demand
functions.
Two goods can be said to be substitutes if the cross-price effect is positive and to be com-
plements if this effect is negative.
Hence this classification is determined by the off-diagonal element of −(Π′ Π)−1 , which is
equal to 𝜋2 𝜋3 / det(Π′ Π).
If 𝜋2 and 𝜋3 have the same sign, the goods are substitutes.
If they have opposite signs, the goods are complements.
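A quick numerical check of this classification, using hypothetical values π_1 = 1, π_2 = 0.5, π_3 = 1:

import numpy as np

pi1, pi2, pi3 = 1.0, 0.5, 1.0            # hypothetical parameters
Pi = np.array([[pi1, 0.0],
               [pi2, pi3]])

cross = -np.linalg.inv(Pi.T @ Pi)[0, 1]  # cross-price effect
print(cross > 0)                         # True: same signs → substitutes
print(np.isclose(cross, pi2 * pi3 / np.linalg.det(Pi.T @ Pi)))  # True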
To summarize, our economic structure consists of the matrices that define the following com-
ponents:
Information and shocks

𝑧𝑡+1 = 𝐴22 𝑧𝑡 + 𝐶2 𝑤𝑡+1


𝑏𝑡 = 𝑈𝑏 𝑧𝑡
𝑑𝑡 = 𝑈 𝑑 𝑧𝑡

Production Technology

Φ𝑐 𝑐𝑡 + Φ𝑔 𝑔𝑡 + Φ𝑖 𝑖𝑡 = Γ𝑘𝑡−1 + 𝑑𝑡
𝑘𝑡 = Δ𝑘 𝑘𝑡−1 + Θ𝑘 𝑖𝑡
𝑔𝑡 ⋅ 𝑔𝑡 = ℓ𝑡2

Household Technology

𝑠𝑡 = Λℎ𝑡−1 + Π𝑐𝑡
ℎ𝑡 = Δℎ ℎ𝑡−1 + Θℎ 𝑐𝑡

Preferences


−(1/2) E ∑_{t=0}^{∞} β^t [(s_t − b_t) ⋅ (s_t − b_t) + ℓ_t^2] | J_0, 0 < β < 1

Next steps: we move on to discuss two closely connected concepts


• A Planning Problem or Optimal Resource Allocation Problem
• Competitive Equilibrium

18.2.15 Optimal Resource Allocation

Imagine a planner who chooses sequences {c_t, i_t, g_t}_{t=0}^{∞} to maximize

−(1/2) E ∑_{t=0}^{∞} β^t [(s_t − b_t) ⋅ (s_t − b_t) + g_t ⋅ g_t] | J_0

subject to the constraints

Φ𝑐 𝑐𝑡 + Φ𝑔 𝑔𝑡 + Φ𝑖 𝑖𝑡 = Γ𝑘𝑡−1 + 𝑑𝑡 ,
𝑘𝑡 = Δ𝑘 𝑘𝑡−1 + Θ𝑘 𝑖𝑡 ,
ℎ𝑡 = Δℎ ℎ𝑡−1 + Θℎ 𝑐𝑡 ,
𝑠𝑡 = Λℎ𝑡−1 + Π𝑐𝑡 ,
𝑧𝑡+1 = 𝐴22 𝑧𝑡 + 𝐶2 𝑤𝑡+1 , 𝑏𝑡 = 𝑈𝑏 𝑧𝑡 , and 𝑑𝑡 = 𝑈𝑑 𝑧𝑡

and initial conditions for ℎ−1 , 𝑘−1 , and 𝑧0 .


Throughout, we shall impose the following square summability conditions

E ∑_{t=0}^{∞} β^t h_t ⋅ h_t | J_0 < ∞  and  E ∑_{t=0}^{∞} β^t k_t ⋅ k_t | J_0 < ∞

Define:


L_0^2 = [{y_t} : y_t is a random variable in J_t and E ∑_{t=0}^{∞} β^t y_t^2 | J_0 < +∞]

Thus, we require that each component of ℎ𝑡 and each component of 𝑘𝑡 belong to 𝐿20 .
We shall compare and utilize two approaches to solving the planning problem
• Lagrangian formulation
• Dynamic programming

18.2.16 Lagrangian Formulation

Form the Lagrangian


ℒ = −E ∑_{t=0}^{∞} β^t [ (1/2)[(s_t − b_t) ⋅ (s_t − b_t) + g_t ⋅ g_t]
      + M_t^{d′} ⋅ (Φ_c c_t + Φ_g g_t + Φ_i i_t − Γk_{t−1} − d_t)
      + M_t^{k′} ⋅ (k_t − Δ_k k_{t−1} − Θ_k i_t)
      + M_t^{h′} ⋅ (h_t − Δ_h h_{t−1} − Θ_h c_t)
      + M_t^{s′} ⋅ (s_t − Λh_{t−1} − Πc_t) ] | J_0

The planner maximizes ℒ with respect to the quantities {c_t, i_t, g_t}_{t=0}^{∞} and minimizes with respect to the Lagrange multipliers M_t^d, M_t^k, M_t^h, M_t^s.
First-order necessary conditions for maximization with respect to 𝑐𝑡 , 𝑔𝑡 , ℎ𝑡 , 𝑖𝑡 , 𝑘𝑡 , and 𝑠𝑡 , re-
spectively, are:

−Φ_c′ M_t^d + Θ_h′ M_t^h + Π′ M_t^s = 0
−g_t − Φ_g′ M_t^d = 0
−M_t^h + βE(Δ_h′ M_{t+1}^h + Λ′ M_{t+1}^s) | J_t = 0
−Φ_i′ M_t^d + Θ_k′ M_t^k = 0
−M_t^k + βE(Δ_k′ M_{t+1}^k + Γ′ M_{t+1}^d) | J_t = 0
−s_t + b_t − M_t^s = 0

for 𝑡 = 0, 1, ….
In addition, we have the complementary slackness conditions (these recover the original tran-
sition equations) and also transversality conditions

lim_{t→∞} β^t E[M_t^{k′} k_t] | J_0 = 0
lim_{t→∞} β^t E[M_t^{h′} h_t] | J_0 = 0

The system formed by the FONCs and the transition equations can be handed over to
Python.
Python will solve the planning problem for fixed parameter values.
Here are the Python Ready Equations

−Φ_c′ M_t^d + Θ_h′ M_t^h + Π′ M_t^s = 0
−g_t − Φ_g′ M_t^d = 0
−M_t^h + βE(Δ_h′ M_{t+1}^h + Λ′ M_{t+1}^s) | J_t = 0
−Φ_i′ M_t^d + Θ_k′ M_t^k = 0
−M_t^k + βE(Δ_k′ M_{t+1}^k + Γ′ M_{t+1}^d) | J_t = 0
−s_t + b_t − M_t^s = 0
Φ_c c_t + Φ_g g_t + Φ_i i_t = Γk_{t−1} + d_t
k_t = Δ_k k_{t−1} + Θ_k i_t
h_t = Δ_h h_{t−1} + Θ_h c_t
s_t = Λh_{t−1} + Πc_t
z_{t+1} = A_{22} z_t + C_2 w_{t+1}, b_t = U_b z_t, and d_t = U_d z_t

The Lagrange multipliers or shadow prices satisfy

M_t^s = b_t − s_t

M_t^h = E[∑_{τ=1}^{∞} β^τ (Δ_h′)^{τ−1} Λ′ M_{t+τ}^s | J_t]

M_t^d = ⎡ Φ_c′ ⎤⁻¹ ⎡ Θ_h′ M_t^h + Π′ M_t^s ⎤
        ⎣ Φ_g′ ⎦   ⎣         −g_t          ⎦

M_t^k = E[∑_{τ=1}^{∞} β^τ (Δ_k′)^{τ−1} Γ′ M_{t+τ}^d | J_t]

M_t^i = Θ_k′ M_t^k

Although it is possible to use matrix operator methods to solve the above Python ready
equations, that is not the approach we’ll use.
Instead, we’ll use dynamic programming to get recursive representations for both quantities
and shadow prices.

18.2.17 Dynamic Programming

Dynamic Programming always starts with the word let.


Thus, let 𝑉 (𝑥0 ) be the optimal value function for the planning problem as a function of the
initial state vector 𝑥0 .
(Thus, in essence, dynamic programming amounts to an application of a guess-and-verify method in which we begin with a guess about the answer to the problem we want to solve. That is why we start with “let V(x_0) be the (value of the) answer to the problem,” then establish and verify a set of conditions that V(x_0) has to satisfy if indeed it is the answer.)
The optimal value function 𝑉 (𝑥) satisfies the Bellman equation

V(x_0) = max_{c_0, i_0, g_0} [−.5[(s_0 − b_0) ⋅ (s_0 − b_0) + g_0 ⋅ g_0] + βE V(x_1)]

subject to the linear constraints

Φ𝑐 𝑐0 + Φ𝑔 𝑔0 + Φ𝑖 𝑖0 = Γ𝑘−1 + 𝑑0 ,
𝑘0 = Δ𝑘 𝑘−1 + Θ𝑘 𝑖0 ,
ℎ0 = Δℎ ℎ−1 + Θℎ 𝑐0 ,
𝑠0 = Λℎ−1 + Π𝑐0 ,
𝑧1 = 𝐴22 𝑧0 + 𝐶2 𝑤1 , 𝑏0 = 𝑈𝑏 𝑧0 and 𝑑0 = 𝑈𝑑 𝑧0

Because this is a linear-quadratic dynamic programming problem, it turns out that the value
function has the form

𝑉 (𝑥) = 𝑥′ 𝑃 𝑥 + 𝜌

Thus, we want to solve an instance of the following linear-quadratic dynamic programming problem:
Choose a contingency plan for {x_{t+1}, u_t}_{t=0}^{∞} to maximize

−E ∑_{t=0}^{∞} β^t [x_t′ R x_t + u_t′ Q u_t + 2u_t′ W′ x_t], 0 < β < 1

subject to

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵𝑢𝑡 + 𝐶𝑤𝑡+1 , 𝑡 ≥ 0

where x_0 is given; x_t is an n × 1 vector of state variables, and u_t is a k × 1 vector of control variables.
We assume w_{t+1} is a martingale difference sequence with E w_t w_t′ = I, and that C is a matrix conformable to x and w.
The optimal value function 𝑉 (𝑥) satisfies the Bellman equation

V(x_t) = max_{u_t} {−(x_t′ R x_t + u_t′ Q u_t + 2u_t′ W x_t) + βE_t V(x_{t+1})}

where maximization is subject to

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵𝑢𝑡 + 𝐶𝑤𝑡+1 , 𝑡 ≥ 0

𝑉 (𝑥𝑡 ) = −𝑥′𝑡 𝑃 𝑥𝑡 − 𝜌

𝑃 satisfies

𝑃 = 𝑅 + 𝛽𝐴′ 𝑃 𝐴 − (𝛽𝐴′ 𝑃 𝐵 + 𝑊 )(𝑄 + 𝛽𝐵′ 𝑃 𝐵)−1 (𝛽𝐵′ 𝑃 𝐴 + 𝑊 ′ )

This equation in 𝑃 is called the algebraic matrix Riccati equation.


The optimal decision rule is 𝑢𝑡 = −𝐹 𝑥𝑡 , where

𝐹 = (𝑄 + 𝛽𝐵′ 𝑃 𝐵)−1 (𝛽𝐵′ 𝑃 𝐴 + 𝑊 ′ )

The optimum decision rule for 𝑢𝑡 is independent of the parameters 𝐶, and so of the noise
statistics.
Iterating on the Bellman operator leads to

V_{j+1}(x_t) = max_{u_t} {−(x_t′ R x_t + u_t′ Q u_t + 2u_t′ W x_t) + βE_t V_j(x_{t+1})}

V_j(x_t) = −x_t′ P_j x_t − ρ_j

where 𝑃𝑗 and 𝜌𝑗 satisfy the equations



𝑃𝑗+1 = 𝑅 + 𝛽𝐴′ 𝑃𝑗 𝐴 − (𝛽𝐴′ 𝑃𝑗 𝐵 + 𝑊 )(𝑄 + 𝛽𝐵′ 𝑃𝑗 𝐵)−1 (𝛽𝐵′ 𝑃𝑗 𝐴 + 𝑊 ′ )


𝜌𝑗+1 = 𝛽𝜌𝑗 + 𝛽 trace 𝑃𝑗 𝐶𝐶 ′
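As an illustration, here is a minimal sketch that iterates these two recursions to a fixed point for hypothetical scalar (A, B, C, R, Q, W); the DLE class relies on QuantEcon's LQ solver rather than this loop, so this is only a direct translation of the formulas.

import numpy as np

A, B, C = np.array([[0.95]]), np.array([[1.0]]), np.array([[0.5]])
R, Q, W = np.array([[1.0]]), np.array([[0.1]]), np.array([[0.0]])
beta = 0.95

P, rho = np.zeros_like(R), 0.0
for _ in range(2000):
    M = beta * A.T @ P @ B + W
    P_new = R + beta * A.T @ P @ A \
        - M @ np.linalg.inv(Q + beta * B.T @ P @ B) @ M.T
    rho = beta * rho + beta * np.trace(P @ C @ C.T)
    if np.max(np.abs(P_new - P)) < 1e-10:
        P = P_new
        break
    P = P_new

F = np.linalg.inv(Q + beta * B.T @ P @ B) @ (beta * B.T @ P @ A + W.T)
print(P, F)  # fixed point P and decision rule u_t = -F x_t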

We can now state the planning problem as a dynamic programming problem

max_{{u_t, x_{t+1}}} −E ∑_{t=0}^{∞} β^t [x_t′ R x_t + u_t′ Q u_t + 2u_t′ W′ x_t], 0 < β < 1

where maximization is subject to

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵𝑢𝑡 + 𝐶𝑤𝑡+1 , 𝑡 ≥ 0

x_t = ⎡ h_{t−1} ⎤ ,  u_t = i_t
      ⎢ k_{t−1} ⎥
      ⎣  z_t    ⎦

where

    ⎡ Δ_h   Θ_h U_c[Φ_c Φ_g]^{-1}Γ   Θ_h U_c[Φ_c Φ_g]^{-1}U_d ⎤
A = ⎢  0              Δ_k                       0             ⎥
    ⎣  0               0                      A_22            ⎦

    ⎡ −Θ_h U_c[Φ_c Φ_g]^{-1}Φ_i ⎤        ⎡  0  ⎤
B = ⎢           Θ_k             ⎥ ,  C = ⎢  0  ⎥
    ⎣            0              ⎦        ⎣ C_2 ⎦

[x_t′ u_t′] S [x_t; u_t] = [x_t′ u_t′] ⎡ R   W ⎤ ⎡ x_t ⎤
                                       ⎣ W′  Q ⎦ ⎣ u_t ⎦

S = (G′G + H′H)/2

H = [Λ ⋮ ΠU_c[Φ_c Φ_g]^{-1}Γ ⋮ ΠU_c[Φ_c Φ_g]^{-1}U_d − U_b ⋮ −ΠU_c[Φ_c Φ_g]^{-1}Φ_i]

G = U_g[Φ_c Φ_g]^{-1}[0 ⋮ Γ ⋮ U_d ⋮ −Φ_i].

Lagrange multipliers as gradient of value function


A useful fact is that Lagrange multipliers equal gradients of the planner’s value function

ℳ_t^k = M_k x_t and ℳ_t^h = M_h x_t, where

M_k = 2β[0 I 0] P A^o
M_h = 2β[I 0 0] P A^o

ℳ_t^s = M_s x_t where M_s = (S_b − S_s) and S_b = [0 0 U_b]

ℳ_t^d = M_d x_t where M_d = ⎡ Φ_c′ ⎤⁻¹ ⎡ Θ_h′ M_h + Π′ M_s ⎤
                            ⎣ Φ_g′ ⎦   ⎣       −S_g        ⎦

ℳ_t^c = M_c x_t where M_c = Θ_h′ M_h + Π′ M_s

ℳ_t^i = M_i x_t where M_i = Θ_k′ M_k

We will use this fact and these equations to compute competitive equilibrium prices.

18.2.18 Other mathematical infrastructure

Let’s start with describing the commodity space and pricing functional for our competi-
tive equilibrium.
For the commodity space, we use


L_0^2 = [{y_t} : y_t is a random variable in J_t and E ∑_{t=0}^{∞} β^t y_t^2 | J_0 < +∞]

For pricing functionals, we express values as inner products


π(c) = E ∑_{t=0}^{∞} β^t p_t^0 ⋅ c_t | J_0

where 𝑝𝑡0 belongs to 𝐿20 .


With these objects in our toolkit, we move on to state the problem of a Representative
Household in a competitive equilibrium.

18.2.19 Representative Household

The representative household owns an endowment process and initial stocks of h and k and chooses stochastic processes for {c_t, s_t, h_t, ℓ_t}_{t=0}^{∞}, each element of which is in L_0^2, to maximize


−(1/2) E_0 ∑_{t=0}^{∞} β^t [(s_t − b_t) ⋅ (s_t − b_t) + ℓ_t^2]

subject to

E ∑_{t=0}^{∞} β^t p_t^0 ⋅ c_t | J_0 = E ∑_{t=0}^{∞} β^t (w_t^0 ℓ_t + α_t^0 ⋅ d_t) | J_0 + v_0 ⋅ k_{−1}

𝑠𝑡 = Λℎ𝑡−1 + Π𝑐𝑡

ℎ𝑡 = Δℎ ℎ𝑡−1 + Θℎ 𝑐𝑡 , ℎ−1 , 𝑘−1 given

We now describe the problems faced by two types of firms called type I and type II.

18.2.20 Type I Firm

A type I firm rents capital, labor, and endowments and produces c_t, i_t.
It chooses stochastic processes for {c_t, i_t, k_t, ℓ_t, g_t, d_t}, each element of which is in L_0^2, to maximize

E_0 ∑_{t=0}^{∞} β^t (p_t^0 ⋅ c_t + q_t^0 ⋅ i_t − r_t^0 ⋅ k_{t−1} − w_t^0 ℓ_t − α_t^0 ⋅ d_t)

subject to

Φ𝑐 𝑐𝑡 + Φ𝑔 𝑔𝑡 + Φ𝑖 𝑖𝑡 = Γ𝑘𝑡−1 + 𝑑𝑡

− ℓ𝑡2 + 𝑔𝑡 ⋅ 𝑔𝑡 = 0

18.2.21 Type II Firm

A firm of type II acquires capital via investment and then rents stocks of capital to the 𝑐, 𝑖-
producing type I firm.
A type II firm is a price taker facing the vector 𝑣0 and the stochastic processes {𝑟𝑡0 , 𝑞𝑡0 }.
The firm chooses k_{−1} and stochastic processes for {k_t, i_t}_{t=0}^{∞} to maximize

E ∑_{t=0}^{∞} β^t (r_t^0 ⋅ k_{t−1} − q_t^0 ⋅ i_t) | J_0 − v_0 ⋅ k_{−1}
𝑡=0

subject to

𝑘𝑡 = Δ𝑘 𝑘𝑡−1 + Θ𝑘 𝑖𝑡

18.2.22 Competitive Equilibrium: Definition

We can now state the following.


Definition: A competitive equilibrium is a price system [v_0, {p_t^0, w_t^0, α_t^0, q_t^0, r_t^0}_{t=0}^{∞}] and an allocation {c_t, i_t, k_t, h_t, g_t, d_t}_{t=0}^{∞} that satisfy the following conditions:

• Each component of the price system and the allocation resides in the space
𝐿20 .

• Given the price system and given ℎ−1 , 𝑘−1 , the allocation solves the representative
household’s problem and the problems of the two types of firms.
Versions of the two classical welfare theorems prevail under our assumptions.
We exploit that fact in our algorithm for computing a competitive equilibrium.
Step 1: Solve the planning problem by using dynamic programming.

The allocation (i.e., quantities) that solve the planning problem are the competi-
tive equilibrium quantities.

Step 2: use the following formulas to compute the equilibrium price system

p_t^0 = [Π′ M_t^s + Θ_h′ M_t^h]/μ_0^w = M_t^c/μ_0^w

w_t^0 = |S_g x_t|/μ_0^w

r_t^0 = Γ′ M_t^d/μ_0^w

q_t^0 = Θ_k′ M_t^k/μ_0^w = M_t^i/μ_0^w

α_t^0 = M_t^d/μ_0^w

v_0 = Γ′ M_0^d/μ_0^w + Δ_k′ M_0^k/μ_0^w

Verification: With this price system, values can be assigned to the Lagrange multipliers for
each of our three classes of agents that cause all first-order necessary conditions to be satisfied
at these prices and at the quantities associated with the optimum of the planning problem.

18.2.23 Asset pricing

An important use of an equilibrium pricing system is to do asset pricing.


Thus, imagine that we are presented a dividend stream: {𝑦𝑡 } ∈ 𝐿20 and want to compute the
value of a perpetual claim to this stream.
To value this asset we simply take price times quantity and add to get an asset value:

a_0 = E ∑_{t=0}^{∞} β^t p_t^0 ⋅ y_t | J_0.
To compute a_0 we proceed as follows.
We let

y_t = U_a x_t

a_0 = E ∑_{t=0}^{∞} β^t x_t′ Z_a x_t | J_0

Z_a = U_a′ M_c/μ_0^w

We have the following convenient formulas:

a_0 = x_0′ μ_a x_0 + σ_a

μ_a = ∑_{τ=0}^{∞} β^τ (A^{o′})^τ Z_a A^{oτ}

σ_a = (β/(1 − β)) trace(Z_a ∑_{τ=0}^{∞} β^τ (A^o)^τ CC′ (A^{o′})^τ)
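Both infinite sums solve discrete Lyapunov equations — μ_a = Z_a + β A^{o′} μ_a A^o and S = CC′ + β A^o S A^{o′} — so one way to compute them is with scipy.linalg.solve_discrete_lyapunov. A minimal sketch, in which the matrices (A^o, C, Z_a) are hypothetical placeholders rather than outputs of any particular economy:

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

beta = 0.95
Ao = np.array([[0.9, 0.0],      # hypothetical closed-loop matrix
               [0.1, 0.5]])
C = np.array([[1.0],
              [0.0]])
Za = np.array([[0.2, 0.0],      # hypothetical Z_a
               [0.0, 0.1]])

sb = np.sqrt(beta)
mu_a = solve_discrete_lyapunov(sb * Ao.T, Za)   # Σ_τ β^τ (Ao')^τ Za Ao^τ
S = solve_discrete_lyapunov(sb * Ao, C @ C.T)   # Σ_τ β^τ Ao^τ CC' (Ao')^τ
sigma_a = beta / (1 - beta) * np.trace(Za @ S)
print(mu_a, sigma_a)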

18.2.24 Re-Opening Markets

We have assumed that all trading occurs once-and-for-all at time 𝑡 = 0.


If we were to re-open markets at some time t > 0 at the time-t wealth levels implicitly defined by time-0 trades, we would obtain the same equilibrium allocation (i.e., quantities) and the following time-t price system

L_t^2 = [{y_s}_{s=t}^{∞} : y_s is a random variable in J_s for s ≥ t and E ∑_{s=t}^{∞} β^{s−t} y_s^2 | J_t < +∞].

p_s^t = M_c x_s/[ē_j M_c x_t], s ≥ t

w_s^t = |S_g x_s|/[ē_j M_c x_t], s ≥ t

r_s^t = Γ′ M_d x_s/[ē_j M_c x_t], s ≥ t

q_s^t = M_i x_s/[ē_j M_c x_t], s ≥ t

α_s^t = M_d x_s/[ē_j M_c x_t], s ≥ t

v_t = [Γ′ M_d + Δ_k′ M_k] x_t/[ē_j M_c x_t]

18.3 Econometrics

Up to now, we have described how to solve the direct problem that maps model parameters
into an (equilibrium) stochastic process of prices and quantities.
Recall the inverse problem of inferring model parameters from a single realization of a time
series of some of the prices and quantities.
Another name for the inverse problem is econometrics.
An advantage of the [31] structure is that it comes with a self-contained theory of economet-
rics.
It is really just a tale of two state-space representations.

Here they are:


Original State-Space Representation:

𝑥𝑡+1 = 𝐴𝑜 𝑥𝑡 + 𝐶𝑤𝑡+1
𝑦𝑡 = 𝐺𝑥𝑡 + 𝑣𝑡

where 𝑣𝑡 is a martingale difference sequence of measurement errors that satisfies 𝐸𝑣𝑡 𝑣𝑡′ =
𝑅, 𝐸𝑤𝑡+1 𝑣𝑠′ = 0 for all 𝑡 + 1 ≥ 𝑠 and

𝑥0 ∼ 𝒩(𝑥0̂ , Σ0 )

Innovations Representation:

x̂_{t+1} = A^o x̂_t + K_t a_t
y_t = G x̂_t + a_t,

where 𝑎𝑡 = 𝑦𝑡 − 𝐸[𝑦𝑡 |𝑦𝑡−1 ], 𝐸𝑎𝑡 𝑎′𝑡 ≡ Ω𝑡 = 𝐺Σ𝑡 𝐺′ + 𝑅.


Compare numbers of shocks in the two representations:

• 𝑛𝑤 + 𝑛𝑦 versus 𝑛𝑦

Compare spaces spanned

• 𝐻(𝑦𝑡 ) ⊂ 𝐻(𝑤𝑡 , 𝑣𝑡 )

• 𝐻(𝑦𝑡 ) = 𝐻(𝑎𝑡 )
Kalman Filter:
Kalman gain:

𝐾𝑡 = 𝐴𝑜 Σ𝑡 𝐺′ (𝐺Σ𝑡 𝐺′ + 𝑅)−1

Riccati Difference Equation:

Σ𝑡+1 = 𝐴𝑜 Σ𝑡 𝐴𝑜′ + 𝐶𝐶 ′
− 𝐴𝑜 Σ𝑡 𝐺′ (𝐺Σ𝑡 𝐺′ + 𝑅)−1 𝐺Σ𝑡 𝐴𝑜′

Innovations Representation as Whitener


Whitening Filter:

a_t = y_t − G x̂_t
x̂_{t+1} = A^o x̂_t + K_t a_t

can be used recursively to construct a record of innovations {𝑎𝑡 }𝑇𝑡=0 from an (𝑥0̂ , Σ0 ) and a
record of observations {𝑦𝑡 }𝑇𝑡=0 .
Limiting Time-Invariant Innovations Representation

Σ = A^o ΣA^{o′} + CC′ − A^o ΣG′(GΣG′ + R)^{-1} GΣA^{o′}

K = A^o ΣG′(GΣG′ + R)^{-1}

x̂_{t+1} = A^o x̂_t + K a_t
y_t = G x̂_t + a_t

where 𝐸𝑎𝑡 𝑎′𝑡 ≡ Ω = 𝐺Σ𝐺′ + 𝑅.
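A direct way to obtain the time-invariant (Σ, K, Ω) is to iterate the Riccati difference equation to convergence; a minimal sketch, in which (A^o, C, G, R) are hypothetical stand-ins:

import numpy as np

Ao, C = np.array([[0.9]]), np.array([[0.5]])   # hypothetical system
G, R = np.array([[1.0]]), np.array([[0.25]])   # measurement equation

Sigma = np.eye(1)
for _ in range(5000):
    K = Ao @ Sigma @ G.T @ np.linalg.inv(G @ Sigma @ G.T + R)
    Sigma_new = Ao @ Sigma @ Ao.T + C @ C.T - K @ G @ Sigma @ Ao.T
    if np.max(np.abs(Sigma_new - Sigma)) < 1e-12:
        Sigma = Sigma_new
        break
    Sigma = Sigma_new

Omega = G @ Sigma @ G.T + R
print(Sigma, K, Omega)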

18.3.1 Factorization of Likelihood Function

Sample of observations {y_s}_{s=0}^{T} on an (n_y × 1) vector.

𝑓(𝑦𝑇 , 𝑦𝑇 −1 , … , 𝑦0 ) = 𝑓𝑇 (𝑦𝑇 |𝑦𝑇 −1 , … , 𝑦0 )𝑓𝑇 −1 (𝑦𝑇 −1 |𝑦𝑇 −2 , … , 𝑦0 ) ⋯ 𝑓1 (𝑦1 |𝑦0 )𝑓0 (𝑦0 )
= 𝑔𝑇 (𝑎𝑇 )𝑔𝑇 −1 (𝑎𝑇 −1 ) … 𝑔1 (𝑎1 )𝑓0 (𝑦0 ).

Gaussian Log-Likelihood:

−.5 ∑_{t=0}^{T} {n_y ln(2π) + ln|Ω_t| + a_t′ Ω_t^{-1} a_t}
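Combining the whitening filter with this factorization gives a sketch of the likelihood evaluation. The (A^o, G, K, Ω) values below are hypothetical time-invariant quantities like those from the previous sketch, and the data are simulated placeholders rather than actual observations.

import numpy as np

Ao, G = np.array([[0.9]]), np.array([[1.0]])
K, Omega = np.array([[0.65]]), np.array([[0.9]])   # hypothetical values

rng = np.random.default_rng(0)
T, ny = 200, 1
y = rng.standard_normal((T + 1, ny))   # placeholder observations

xhat, loglik = np.zeros((1, 1)), 0.0
Omega_inv, logdet = np.linalg.inv(Omega), np.log(np.linalg.det(Omega))
for t in range(T + 1):
    a = y[t].reshape(-1, 1) - G @ xhat      # innovation a_t = y_t - G x̂_t
    loglik += -0.5 * (ny * np.log(2 * np.pi) + logdet
                      + (a.T @ Omega_inv @ a).item())
    xhat = Ao @ xhat + K @ a                # whitening filter update
print(loglik)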

18.3.2 Covariance Generating Functions

Autocovariance: C_x(τ) = E x_t x_{t−τ}′.

Generating Function: S_x(z) = ∑_{τ=−∞}^{∞} C_x(τ) z^τ, z ∈ C.

18.3.3 Spectral Factorization Identity

Original state-space representation has too many shocks and implies:

𝑆𝑦 (𝑧) = 𝐺(𝑧𝐼 − 𝐴𝑜 )−1 𝐶𝐶 ′ (𝑧 −1 𝐼 − (𝐴𝑜 )′ )−1 𝐺′ + 𝑅

Innovations representation has as many shocks as dimension of 𝑦𝑡 and implies

𝑆𝑦 (𝑧) = [𝐺(𝑧𝐼 − 𝐴𝑜 )−1 𝐾 + 𝐼][𝐺Σ𝐺′ + 𝑅][𝐾 ′ (𝑧 −1 𝐼 − 𝐴𝑜′ )−1 𝐺′ + 𝐼]

Equating these two leads to:

𝐺(𝑧𝐼 − 𝐴𝑜 )−1 𝐶𝐶 ′ (𝑧 −1 𝐼 − 𝐴𝑜′ )−1 𝐺′ + 𝑅 =


[𝐺(𝑧𝐼 − 𝐴𝑜 )−1 𝐾 + 𝐼][𝐺Σ𝐺′ + 𝑅][𝐾 ′ (𝑧 −1 𝐼 − 𝐴𝑜′ )−1 𝐺′ + 𝐼].

Key Insight: The zeros of the polynomial det[𝐺(𝑧𝐼 −𝐴𝑜 )−1 𝐾+𝐼] all lie inside the unit circle,
which means that 𝑎𝑡 lies in the space spanned by square summable linear combinations of 𝑦𝑡 .

𝐻(𝑎𝑡 ) = 𝐻(𝑦𝑡 )

Key Property: Invertibility

18.3.4 Wold and Vector Autoregressive Representations

Let’s start with some lag operator arithmetic.


The lag operator 𝐿 and the inverse lag operator 𝐿−1 each map an infinite sequence into an
infinite sequence according to the transformation rules

𝐿𝑥𝑡 ≡ 𝑥𝑡−1

𝐿−1 𝑥𝑡 ≡ 𝑥𝑡+1

A Wold moving average representation for {𝑦𝑡 } is

𝑦𝑡 = [𝐺(𝐼 − 𝐴𝑜 𝐿)−1 𝐾𝐿 + 𝐼]𝑎𝑡

Applying the inverse of the operator on the right side and using

[𝐺(𝐼 − 𝐴𝑜 𝐿)−1 𝐾𝐿 + 𝐼]−1 = 𝐼 − 𝐺[𝐼 − (𝐴𝑜 − 𝐾𝐺)𝐿]−1 𝐾𝐿

gives the vector autoregressive representation


y_t = ∑_{j=1}^{∞} G(A^o − KG)^{j−1} K y_{t−j} + a_t
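The autoregressive coefficient matrices G(A^o − KG)^{j−1}K can be generated recursively; in the sketch below the (A^o, K, G) values are hypothetical.

import numpy as np

Ao, K, G = np.array([[0.9]]), np.array([[0.65]]), np.array([[1.0]])

def var_coefficients(Ao, K, G, J):
    """Return [G K, G(Ao - KG)K, ..., G(Ao - KG)^{J-1} K]."""
    Abar, M, coeffs = Ao - K @ G, np.eye(Ao.shape[0]), []
    for _ in range(J):
        coeffs.append(G @ M @ K)
        M = Abar @ M
    return coeffs

for j, cj in enumerate(var_coefficients(Ao, K, G, 5), start=1):
    print(j, cj)   # coefficient on y_{t-j}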

18.4 Dynamic Demand Curves and Canonical Household Technologies

18.4.1 Canonical Household Technologies

ℎ𝑡 = Δℎ ℎ𝑡−1 + Θℎ 𝑐𝑡
𝑠𝑡 = Λℎ𝑡−1 + Π𝑐𝑡
𝑏𝑡 = 𝑈𝑏 𝑧𝑡

Definition: A household service technology (Δ_h, Θ_h, Π, Λ, U_b) is said to be canonical if

• Π is nonsingular, and

• the absolute values of the eigenvalues of (Δ_h − Θ_hΠ^{-1}Λ) are strictly less than 1/√β.
Key invertibility property: A canonical household service technology maps a service pro-
cess {𝑠𝑡 } in 𝐿20 into a corresponding consumption process {𝑐𝑡 } for which the implied house-
hold capital stock process {ℎ𝑡 } is also in 𝐿20 .
An inverse household technology:

𝑐𝑡 = −Π−1 Λℎ𝑡−1 + Π−1 𝑠𝑡


ℎ𝑡 = (Δℎ − Θℎ Π−1 Λ)ℎ𝑡−1 + Θℎ Π−1 𝑠𝑡

The restriction on the eigenvalues of the matrix (Δℎ − Θℎ Π−1 Λ) keeps the household capital
stock {ℎ𝑡 } in 𝐿20 .
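Because the inverse technology is itself a linear recursion, recovering {c_t} from a given service process takes only a few lines; the scalar parameters below are hypothetical and chosen so the eigenvalue restriction holds.

import numpy as np

Lambda, Pi = np.array([[-0.5]]), np.array([[1.0]])       # Π nonsingular
Delta_h, Theta_h = np.array([[0.9]]), np.array([[0.1]])
Pi_inv = np.linalg.inv(Pi)
Abar = Delta_h - Theta_h @ Pi_inv @ Lambda
print(np.abs(np.linalg.eigvals(Abar)))   # 0.95: inside the required bound

s = np.ones((20, 1, 1))                  # a given service process {s_t}
h = np.zeros((1, 1))                     # h_{-1}
for t in range(20):
    c = -Pi_inv @ Lambda @ h + Pi_inv @ s[t]    # c_t from the inverse map
    h = Abar @ h + Theta_h @ Pi_inv @ s[t]      # h_t
print(c)   # consumption implied by the service path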

18.4.2 Dynamic Demand Functions



ρ_t^0 ≡ Π^{-1′}[p_t^0 − Θ_h′ E_t ∑_{τ=1}^{∞} β^τ (Δ_h′ − Λ′Π^{-1′}Θ_h′)^{τ−1} Λ′Π^{-1′} p_{t+τ}^0]

s_{i,t} = Λh_{i,t−1}
h_{i,t} = Δ_h h_{i,t−1}

where ℎ𝑖,−1 = ℎ−1 .


W_0 = E_0 ∑_{t=0}^{∞} β^t (w_t^0 ℓ_t + α_t^0 ⋅ d_t) + v_0 ⋅ k_{−1}

μ_0^w = [E_0 ∑_{t=0}^{∞} β^t ρ_t^0 ⋅ (b_t − s_{i,t}) − W_0] / [E_0 ∑_{t=0}^{∞} β^t ρ_t^0 ⋅ ρ_t^0]

c_t = −Π^{-1}Λh_{t−1} + Π^{-1}b_t − Π^{-1}μ_0^w E_t{Π′^{-1} − Π′^{-1}Θ_h′[I − (Δ_h′ − Λ′Π′^{-1}Θ_h′)βL^{-1}]^{-1}Λ′Π′^{-1}βL^{-1}}p_t^0
h_t = Δ_h h_{t−1} + Θ_h c_t

This system expresses consumption demands at date t as functions of: (i) time-t conditional expectations of future scaled Arrow-Debreu prices {p_{t+s}^0}_{s=0}^{∞}; (ii) the stochastic process for the household’s endowment {d_t} and preference shock {b_t}, as mediated through the multiplier μ_0^w and wealth W_0; and (iii) past values of consumption, as mediated through the state variable h_{t−1}.

18.5 Gorman Aggregation and Engel Curves

We shall explore how the dynamic demand schedule for consumption goods opens up the pos-
sibility of satisfying Gorman’s (1953) conditions for aggregation in a heterogeneous consumer
model.
The first equation of our demand system is an Engel curve for consumption that is linear in the marginal utility μ_0^w of individual wealth, with a coefficient on μ_0^w that depends only on prices.
The multiplier 𝜇𝑤
0 depends on wealth in an affine relationship, so that consumption is linear
in wealth.
In a model with multiple consumers who have the same household technologies (Δℎ , Θℎ , Λ, Π)
but possibly different preference shock processes and initial values of household capital stocks,
the coefficient on the marginal utility of wealth is the same for all consumers.

Gorman showed that when Engel curves satisfy this property, there exists a unique commu-
nity or aggregate preference ordering over aggregate consumption that is independent of the
distribution of wealth.

18.5.1 Re-Opened Markets



ρ_t^t ≡ Π^{-1′}[p_t^t − Θ_h′ E_t ∑_{τ=1}^{∞} β^τ (Δ_h′ − Λ′Π^{-1′}Θ_h′)^{τ−1} Λ′Π^{-1′} p_{t+τ}^t]

s_{i,t} = Λh_{i,t−1}
h_{i,t} = Δ_h h_{i,t−1},

where now h_{i,t−1} = h_{t−1}. Define time-t wealth W_t


W_t = E_t ∑_{j=0}^{∞} β^j (w_{t+j}^t ℓ_{t+j} + α_{t+j}^t ⋅ d_{t+j}) + v_t ⋅ k_{t−1}

μ_t^w = [E_t ∑_{j=0}^{∞} β^j ρ_{t+j}^t ⋅ (b_{t+j} − s_{i,t+j}) − W_t] / [E_t ∑_{j=0}^{∞} β^j ρ_{t+j}^t ⋅ ρ_{t+j}^t]

c_t = −Π^{-1}Λh_{t−1} + Π^{-1}b_t − Π^{-1}μ_t^w E_t{Π′^{-1} − Π′^{-1}Θ_h′[I − (Δ_h′ − Λ′Π′^{-1}Θ_h′)βL^{-1}]^{-1}Λ′Π′^{-1}βL^{-1}}p_t^t
h_t = Δ_h h_{t−1} + Θ_h c_t

18.5.2 Dynamic Demand

Define a time-t continuation of a sequence {z_t}_{t=0}^{∞} as the sequence {z_τ}_{τ=t}^{∞}. The demand system indicates that the time-t vector of demands for c_t is influenced by:
• Through the multiplier μ_t^w, the time-t continuation of the preference shock process {b_t} and the time-t continuation of {s_{i,t}}.
• The time t − 1 level of household durables h_{t−1}.
• Everything that affects the household’s time-t wealth, including its stock of physical capital k_{t−1} and its value v_t, the time-t continuation of the factor prices {w_t, α_t}, the household’s continuation endowment process, and the household’s continuation plan for {ℓ_t}.
• The time-t continuation of the vector of prices {p_t^t}.

18.5.3 Attaining a Canonical Household Technology

Apply the following version of a factorization identity:


[Π + β^{1/2}L^{-1}Λ(I − β^{1/2}L^{-1}Δ_h)^{-1}Θ_h]′ [Π + β^{1/2}LΛ(I − β^{1/2}LΔ_h)^{-1}Θ_h]
    = [Π̂ + β^{1/2}L^{-1}Λ̂(I − β^{1/2}L^{-1}Δ_h)^{-1}Θ_h]′ [Π̂ + β^{1/2}LΛ̂(I − β^{1/2}LΔ_h)^{-1}Θ_h]

The factorization identity guarantees that the [Λ̂, Π̂] representation satisfies both requirements for a canonical representation.

18.6 Partial Equilibrium

Now we’ll provide quick overviews of examples of economies that fit within our framework.
We provide details for a number of these examples in subsequent lectures:

1. Growth in Dynamic Linear Economies

2. Lucas Asset Pricing using DLE

3. IRFs in Hall Models

4. Permanent Income Using the DLE Class

5. Rosen Schooling Model

6. Cattle Cycles

7. Shock Non Invertibility

We’ll start with an example of a partial equilibrium in which we posit demand and supply curves.
Suppose that we want to capture the dynamic demand curve:

c_t = −Π^{-1}Λh_{t−1} + Π^{-1}b_t − Π^{-1}μ_0^w E_t{Π′^{-1} − Π′^{-1}Θ_h′[I − (Δ_h′ − Λ′Π′^{-1}Θ_h′)βL^{-1}]^{-1}Λ′Π′^{-1}βL^{-1}}p_t
h_t = Δ_h h_{t−1} + Θ_h c_t

From material described earlier in this lecture, we know how to reverse engineer preferences that generate this demand system.

• note how the demand equations are cast in terms of the matrices in our stan-
dard preference representation

Now let’s turn to supply.


A representative firm takes as given and beyond its control the stochastic process {p_t}_{t=0}^{∞}.

The firm sells its output 𝑐𝑡 in a competitive market each period.


Only spot markets convene at each date 𝑡 ≥ 0.
The firm also faces an exogenous process of cost disturbances 𝑑𝑡 .
The firm chooses stochastic processes {c_t, g_t, i_t, k_t}_{t=0}^{∞} to maximize


E_0 ∑_{t=0}^{∞} β^t {p_t ⋅ c_t − g_t ⋅ g_t/2}

subject to given 𝑘−1 and

Φ𝑐 𝑐𝑡 + Φ𝑖 𝑖𝑡 + Φ𝑔 𝑔𝑡 = Γ𝑘𝑡−1 + 𝑑𝑡
𝑘𝑡 = Δ𝑘 𝑘𝑡−1 + Θ𝑘 𝑖𝑡 .

18.7 Equilibrium Investment Under Uncertainty

A representative firm maximizes


E ∑_{t=0}^{∞} β^t {p_t c_t − g_t^2/2}

subject to the technology

𝑐𝑡 = 𝛾𝑘𝑡−1
𝑘𝑡 = 𝛿𝑘 𝑘𝑡−1 + 𝑖𝑡
𝑔𝑡 = 𝑓1 𝑖𝑡 + 𝑓2 𝑑𝑡

where 𝑑𝑡 is a cost shifter, 𝛾 > 0, and 𝑓1 > 0 is a cost parameter and 𝑓2 = 1. Demand is
governed by

𝑝 𝑡 = 𝛼 0 − 𝛼 1 𝑐𝑡 + 𝑢 𝑡

where 𝑢𝑡 is a demand shifter with mean zero and 𝛼0 , 𝛼1 are positive parameters.
Assume that 𝑢𝑡 , 𝑑𝑡 are uncorrelated first-order autoregressive processes.

18.8 A Rosen-Topel Housing Model

𝑅𝑡 = 𝑏𝑡 + 𝛼ℎ𝑡

p_t = E_t ∑_{τ=0}^{∞} (βδ_h)^τ R_{t+τ}

where h_t is the stock of housing at time t, R_t is the rental rate for housing, p_t is the price of new houses, and b_t is a demand shifter; α < 0 is a demand parameter, and δ_h is a depreciation factor for houses.
We cast this demand specification within our class of models by letting the stock of houses ℎ𝑡
evolve according to

ℎ𝑡 = 𝛿ℎ ℎ𝑡−1 + 𝑐𝑡 , 𝛿ℎ ∈ (0, 1)

where 𝑐𝑡 is the rate of production of new houses.


Houses produce services s_t according to s_t = λ̄h_t, or s_t = λh_{t−1} + πc_t, where λ = λ̄δ_h, π = λ̄.
We can take λ̄ρ_t^0 = R_t as the rental rate on housing at time t, measured in units of time-t consumption (housing).
We can take 𝜆𝜌
consumption (housing).
Demand for housing services is

𝑠𝑡 = 𝑏𝑡 − 𝜇0 𝜌𝑡0

where the price of new houses 𝑝𝑡 is related to 𝜌𝑡0 by 𝜌𝑡0 = 𝜋−1 [𝑝𝑡 − 𝛽𝛿ℎ 𝐸𝑡 𝑝𝑡+1 ].

18.9 Cattle Cycles

This model is due to Rosen, Murphy, and Scheinkman (1994). Let p_t be the price of freshly slaughtered beef, m_t the feeding cost of preparing an animal for slaughter, h̃_t the one-period holding cost for a mature animal, γ_1 h̃_t the one-period holding cost for a yearling, and γ_0 h̃_t the one-period holding cost for a calf.
The cost processes {ℎ̃ 𝑡 , 𝑚𝑡 }∞ ∞
𝑡=0 are exogenous, while the stochastic process {𝑝𝑡 }𝑡=0 is deter-
mined by a rational expectations equilibrium. Let 𝑥𝑡̃ be the breeding stock, and 𝑦𝑡̃ be the to-
tal stock of animals.
The law of motion for cattle stocks is

x̃_t = (1 − δ)x̃_{t−1} + g x̃_{t−3} − c_t

where c_t is a rate of slaughtering. The total head-count of cattle

ỹ_t = x̃_t + g x̃_{t−1} + g x̃_{t−2}

is the sum of adults, calves, and yearlings, respectively.


A representative farmer chooses {𝑐𝑡 , 𝑥𝑡̃ } to maximize


E_0 ∑_{t=0}^{∞} β^t {p_t c_t − h̃_t x̃_t − (γ_0 h̃_t)(g x̃_{t−1}) − (γ_1 h̃_t)(g x̃_{t−2}) − m_t c_t − Ψ(x̃_t, x̃_{t−1}, x̃_{t−2}, c_t)}

where

Ψ = (ψ_1/2) x̃_t^2 + (ψ_2/2) x̃_{t−1}^2 + (ψ_3/2) x̃_{t−2}^2 + (ψ_4/2) c_t^2
2 2 2 2

Demand is governed by

𝑐𝑡 = 𝛼0 − 𝛼1 𝑝𝑡 + 𝑑𝑡̃

where α_0 > 0, α_1 > 0, and {d̃_t}_{t=0}^{∞} is a stochastic process with mean zero representing a demand shifter.
For more details see Cattle cycles

18.10 Models of Occupational Choice and Pay

We’ll describe the following pair of schooling models that view education as a time-to-build
process:
• Rosen schooling model for engineers
• Two-occupation model

18.10.1 Market for Engineers

Ryoo and Rosen’s (2004) [56] model consists of the following equations:
first, a demand curve for engineers

𝑤𝑡 = −𝛼𝑑 𝑁𝑡 + 𝜖1𝑡 , 𝛼𝑑 > 0

second, a time-to-build structure of the education process

𝑁𝑡+𝑘 = 𝛿𝑁 𝑁𝑡+𝑘−1 + 𝑛𝑡 , 0 < 𝛿𝑁 < 1

third, a definition of the discounted present value of each new engineering student


v_t = β^k E_t ∑_{j=0}^{∞} (βδ_N)^j w_{t+k+j};

and fourth, a supply curve of new students driven by 𝑣𝑡

𝑛𝑡 = 𝛼𝑠 𝑣𝑡 + 𝜖2𝑡 , 𝛼𝑠 > 0

Here {𝜖1𝑡 , 𝜖2𝑡 } are stochastic processes of labor demand and supply shocks.

Definition: A partial equilibrium is a stochastic process {w_t, N_t, v_t, n_t}_{t=0}^{∞} satisfying these four equations, and initial conditions N_{−1}, n_{−s}, s = 1, …, k.
We sweep the time-to-build structure and the demand for engineers into the household technology and put the supply of new engineers into the technology for producing goods.

s_t = [λ_1  0  …  0] ⎡ h_{1,t−1}   ⎤ + 0 ⋅ c_t
                     ⎢ h_{2,t−1}   ⎥
                     ⎢     ⋮       ⎥
                     ⎣ h_{k+1,t−1} ⎦

⎡ h_{1,t}   ⎤   ⎡ δ_N  1  0  ⋯  0 ⎤ ⎡ h_{1,t−1}   ⎤   ⎡ 0 ⎤
⎢ h_{2,t}   ⎥   ⎢  0   0  1  ⋯  0 ⎥ ⎢ h_{2,t−1}   ⎥   ⎢ 0 ⎥
⎢    ⋮      ⎥ = ⎢  ⋮   ⋮  ⋮  ⋱  ⋮ ⎥ ⎢     ⋮       ⎥ + ⎢ ⋮ ⎥ c_t
⎢ h_{k,t}   ⎥   ⎢  0   ⋯  ⋯  0  1 ⎥ ⎢ h_{k,t−1}   ⎥   ⎢ 0 ⎥
⎣ h_{k+1,t} ⎦   ⎣  0   0  0  ⋯  0 ⎦ ⎣ h_{k+1,t−1} ⎦   ⎣ 1 ⎦

This specification sets Rosen’s 𝑁𝑡 = ℎ1𝑡−1 , 𝑛𝑡 = 𝑐𝑡 , ℎ𝜏+1,𝑡−1 = 𝑛𝑡−𝜏 , 𝜏 = 1, … , 𝑘, and uses the
home-produced service to capture the demand for labor. Here 𝜆1 embodies Rosen’s demand
parameter 𝛼𝑑 .
• The supply of new workers becomes our consumption.
• The dynamic demand curve becomes Rosen’s dynamic supply curve for new workers.
Remark: This has an Imai-Keane flavor.
For more details and Python code see Rosen schooling model.

18.10.2 Skilled and Unskilled Workers

First, a demand curve for labor

⎡ w_{ut} ⎤ = α_d ⎡ N_{ut} ⎤ + ε_{1t}
⎣ w_{st} ⎦       ⎣ N_{st} ⎦

where α_d is a (2 × 2) matrix of demand parameters and ε_{1t} is a vector of demand shifters;


second, time-to-train specifications for skilled and unskilled labor, respectively:

𝑁𝑠𝑡+𝑘 = 𝛿𝑁 𝑁𝑠𝑡+𝑘−1 + 𝑛𝑠𝑡


𝑁𝑢𝑡 = 𝛿𝑁 𝑁𝑢𝑡−1 + 𝑛𝑢𝑡 ;

where 𝑁𝑠𝑡 , 𝑁𝑢𝑡 are stocks of the two types of labor, and 𝑛𝑠𝑡 , 𝑛𝑢𝑡 are entry rates into the two
occupations.
third, definitions of discounted present values of new entrants to the skilled and unskilled oc-
cupations, respectively:


v_{st} = E_t β^k ∑_{j=0}^{∞} (βδ_N)^j w_{s,t+k+j}

v_{ut} = E_t ∑_{j=0}^{∞} (βδ_N)^j w_{u,t+j}

where 𝑤𝑢𝑡 , 𝑤𝑠𝑡 are wage rates for the two occupations; and fourth, supply curves for new en-
trants:

⎡ n_{st} ⎤ = α_s ⎡ v_{ut} ⎤ + ε_{2t}
⎣ n_{ut} ⎦       ⎣ v_{st} ⎦

Short Cut
As an alternative, Siow simply used the equalizing differences condition

𝑣𝑢𝑡 = 𝑣𝑠𝑡

18.11 Permanent Income Models

We’ll describe a class of permanent income models that feature


• Many consumption goods and services
• A single capital good with 𝑅𝛽 = 1

• The physical production technology

𝜙𝑐 ⋅ 𝑐𝑡 + 𝑖𝑡 = 𝛾𝑘𝑡−1 + 𝑒𝑡
𝑘𝑡 = 𝑘𝑡−1 + 𝑖𝑡

𝜙𝑖 𝑖 𝑡 − 𝑔 𝑡 = 0

Implication One:
Equality of Present Values of Moving Average Coefficients of c and e

k_{t−1} = β ∑_{j=0}^{∞} β^j (φ_c ⋅ c_{t+j} − e_{t+j}) ⇒

k_{t−1} = β ∑_{j=0}^{∞} β^j E(φ_c ⋅ c_{t+j} − e_{t+j}) | J_t ⇒

∑_{j=0}^{∞} β^j (φ_c)′ χ_j = ∑_{j=0}^{∞} β^j ε_j

where χ_j w_t is the response of c_{t+j} to w_t and ε_j w_t is the response of endowment e_{t+j} to w_t.


Implication Two:
Martingales

ℳ_t^k = E(ℳ_{t+1}^k | J_t)

ℳ_t^e = E(ℳ_{t+1}^e | J_t)

and

ℳ_t^c = (Φ_c)′ ℳ_t^d = φ_c ℳ_t^e

For more details see Permanent Income Using the DLE class
Testing Permanent Income Models:
We have two types of implications of permanent income models:
• Equality of present values of moving average coefficients.
• Martingale ℳ𝑘𝑡 .
These have been tested in work by Hansen, Sargent, and Roberts (1991) [57] and by Attana-
sio and Pavoni (2011) [6].

18.12 Gorman Heterogeneous Households

We now assume that there is a finite number of households, each with its own household tech-
nology and preferences over consumption services.
Household 𝑗 orders preferences over consumption processes according to


−(1/2) E ∑_{t=0}^{∞} β^t [(s_{jt} − b_{jt}) ⋅ (s_{jt} − b_{jt}) + ℓ_{jt}^2] | J_0

s_{jt} = Λ h_{j,t−1} + Π c_{jt}

h_{jt} = Δ_h h_{j,t−1} + Θ_h c_{jt}

and ℎ𝑗,−1 is given

𝑏𝑗𝑡 = 𝑈𝑏𝑗 𝑧𝑡

E ∑_{t=0}^{∞} β^t p_t^0 ⋅ c_{jt} | J_0 = E ∑_{t=0}^{∞} β^t (w_t^0 ℓ_{jt} + α_t^0 ⋅ d_{jt}) | J_0 + v_0 ⋅ k_{j,−1},

where 𝑘𝑗,−1 is given. The 𝑗th consumer owns an endowment process 𝑑𝑗𝑡 , governed by the
stochastic process 𝑑𝑗𝑡 = 𝑈𝑑𝑗 𝑧𝑡 .
We refer to this as a setting with Gorman heterogeneous households.
This specification confines heterogeneity among consumers to:
• differences in the preference processes {𝑏𝑗𝑡 }, represented by different selections of 𝑈𝑏𝑗
• differences in the endowment processes {𝑑𝑗𝑡 }, represented by different selections of 𝑈𝑑𝑗
• differences in ℎ𝑗,−1 and
• differences in 𝑘𝑗,−1
The matrices Λ, Π, Δℎ , Θℎ do not depend on 𝑗.
This makes everybody’s demand system have the form described earlier, with different μ_{0j}^w’s (reflecting different wealth levels) and different b_{jt} preference shock processes and initial conditions for household capital stocks.
Punchline: there exists a representative consumer.
We can use the representative consumer to compute a competitive equilibrium aggregate
allocation and price system.
With the equilibrium aggregate allocation and price system in hand, we can then compute
allocations to each household.
Computing Allocations to Individuals:
Set

ℓ_{jt} = (μ_{0j}^w/μ_{0a}^w) ℓ_{at}

Then solve the following equation for μ_{0j}^w:

μ_{0j}^w E_0 ∑_{t=0}^{∞} β^t {ρ_t^0 ⋅ ρ_t^0 + (w_t^0/μ_{0a}^w) ℓ_{at}} = E_0 ∑_{t=0}^{∞} β^t {ρ_t^0 ⋅ (b_{jt} − s_{jt}^i) − α_t^0 ⋅ d_{jt}} − v_0 k_{j,−1}

s_{jt} − b_{jt} = μ_{0j}^w ρ_t^0

c_{jt} = −Π^{-1}Λh_{j,t−1} + Π^{-1}s_{jt}

h_{jt} = (Δ_h − Θ_h Π^{-1}Λ)h_{j,t−1} + Θ_h Π^{-1}s_{jt}

Here h_{j,−1} is given.



18.13 Non-Gorman Heterogeneous Households

We now describe a less tractable type of heterogeneity across households that we dub Non-
Gorman heterogeneity.
Here is the specification:
Preferences and Household Technologies:


−(1/2) E ∑_{t=0}^{∞} β^t [(s_{it} − b_{it}) ⋅ (s_{it} − b_{it}) + ℓ_{it}^2] | J_0

𝑠𝑖𝑡 = Λ𝑖 ℎ𝑖𝑡−1 + Π𝑖 𝑐𝑖𝑡


ℎ𝑖𝑡 = Δℎ𝑖 ℎ𝑖𝑡−1 + Θℎ𝑖 𝑐𝑖𝑡 , 𝑖 = 1, 2.

𝑏𝑖𝑡 = 𝑈𝑏𝑖 𝑧𝑡

𝑧𝑡+1 = 𝐴22 𝑧𝑡 + 𝐶2 𝑤𝑡+1

Production Technology

Φ𝑐 (𝑐1𝑡 + 𝑐2𝑡 ) + Φ𝑔 𝑔𝑡 + Φ𝑖 𝑖𝑡 = Γ𝑘𝑡−1 + 𝑑1𝑡 + 𝑑2𝑡

𝑘𝑡 = Δ𝑘 𝑘𝑡−1 + Θ𝑘 𝑖𝑡

𝑔𝑡 ⋅ 𝑔𝑡 = ℓ𝑡2 , ℓ𝑡 = ℓ1𝑡 + ℓ2𝑡

𝑑𝑖𝑡 = 𝑈𝑑𝑖 𝑧𝑡 , 𝑖 = 1, 2

Pareto Problem:


−(λ/2) E_0 ∑_{t=0}^{∞} β^t [(s_{1t} − b_{1t}) ⋅ (s_{1t} − b_{1t}) + ℓ_{1t}^2]
− ((1 − λ)/2) E_0 ∑_{t=0}^{∞} β^t [(s_{2t} − b_{2t}) ⋅ (s_{2t} − b_{2t}) + ℓ_{2t}^2]

Mongrel Aggregation: Static


There is what we call a kind of mongrel aggregation in this setting.
We first describe the idea within a simple static setting in which a single consumer has a static inverse demand with implied preferences.
The consumer’s demand curve is

c_t = Π^{-1} b_t − μ_0 Π^{-1} Π^{-1′} p_t

An inverse demand curve is


p_t = μ_0^{-1} Π′ b_t − μ_0^{-1} Π′Π c_t

Integrating the marginal utility vector shows that preferences can be taken to be

(−2𝜇0 )−1 (Π𝑐𝑡 − 𝑏𝑡 ) ⋅ (Π𝑐𝑡 − 𝑏𝑡 )

Key Insight: Factor the inverse of a ‘covariance matrix’.


Now assume that there are two consumers, i = 1, 2, with demand curves

c_{it} = Π_i^{-1} b_{it} − μ_{0i} Π_i^{-1} Π_i^{-1′} p_t

c_{1t} + c_{2t} = (Π_1^{-1} b_{1t} + Π_2^{-1} b_{2t}) − (μ_{01} Π_1^{-1} Π_1^{-1′} + μ_{02} Π_2^{-1} Π_2^{-1′}) p_t

Setting c_{1t} + c_{2t} = c_t and solving for p_t gives

p_t = (μ_{01} Π_1^{-1} Π_1^{-1′} + μ_{02} Π_2^{-1} Π_2^{-1′})^{-1} (Π_1^{-1} b_{1t} + Π_2^{-1} b_{2t})
    − (μ_{01} Π_1^{-1} Π_1^{-1′} + μ_{02} Π_2^{-1} Π_2^{-1′})^{-1} c_t

Punchline: choose Π associated with the aggregate ordering to satisfy

μ_0^{-1} Π′Π = (μ_{01} Π_1^{-1} Π_1^{-1′} + μ_{02} Π_2^{-1} Π_2^{-1′})^{-1}

Dynamic Analogue:
We now describe how to extend mongrel aggregation to a dynamic setting.
The key comparison is
• Static: factor a covariance matrix-like object
• Dynamic: factor a spectral-density matrix-like object
Programming Problem for Dynamic Mongrel Aggregation:
Our strategy for deducing the mongrel preference ordering over 𝑐𝑡 = 𝑐1𝑡 + 𝑐2𝑡 is to solve the
programming problem: choose {𝑐1𝑡 , 𝑐2𝑡 } to maximize the criterion


−∑_{t=0}^{∞} β^t [λ(s_{1t} − b_{1t}) ⋅ (s_{1t} − b_{1t}) + (1 − λ)(s_{2t} − b_{2t}) ⋅ (s_{2t} − b_{2t})]

subject to

h_{jt} = Δ_{hj} h_{j,t−1} + Θ_{hj} c_{jt}, j = 1, 2

s_{jt} = Λ_j h_{j,t−1} + Π_j c_{jt}, j = 1, 2
c_{1t} + c_{2t} = c_t

subject to (ℎ1,−1 , ℎ2,−1 ) given and {𝑏1𝑡 }, {𝑏2𝑡 }, {𝑐𝑡 } being known and fixed sequences.
Substituting the {𝑐1𝑡 , 𝑐2𝑡 } sequences that solve this problem as functions of {𝑏1𝑡 , 𝑏2𝑡 , 𝑐𝑡 } into
the objective determines a mongrel preference ordering over {𝑐𝑡 } = {𝑐1𝑡 + 𝑐2𝑡 }.

In solving this problem, it is convenient to proceed by using Fourier transforms. For details,
please see [31] where they deploy a
Secret Weapon: Another application of the spectral factorization identity.
Concluding remark: The [31] class of models described in this lecture are all complete markets models. We have exploited the fact that complete markets models are all alike to define a class that, in the spirit of Henri Poincaré, gives the same name to different things.
Could we create such a class for incomplete markets models?
That would be nice, but before trying it would be wise to contemplate the remainder of a
statement by Robert E. Lucas, Jr., with which we began this lecture.

“Complete market economies are all alike but each incomplete market economy is
incomplete in its own individual way.” Robert E. Lucas, Jr., (1989)
Chapter 19

Growth in Dynamic Linear Economies

19.1 Contents

• Common Structure 19.2


• A Planning Problem 19.3
• Example Economies 19.4
This is another member of a suite of lectures that use the quantecon DLE class to instantiate
models within the [31] class of models described in detail in Recursive Models of Dynamic
Linear Economies.
In addition to what’s included in Anaconda, this lecture uses the quantecon library.

In [1]: !pip install --upgrade quantecon

This lecture describes several complete market economies having a common linear-quadratic-
Gaussian structure.
Three examples of such economies show how the DLE class can be used to compute equilibria
of such economies in Python and to illustrate how different versions of these economies can or
cannot generate sustained growth.
We require the following imports

In [2]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline
from quantecon import LQ, DLE

19.2 Common Structure

Our example economies have the following features


• Information flows are governed by an exogenous stochastic process 𝑧𝑡 that follows

𝑧𝑡+1 = 𝐴22 𝑧𝑡 + 𝐶2 𝑤𝑡+1


where 𝑤𝑡+1 is a martingale difference sequence.


• Preference shocks 𝑏𝑡 and technology shocks 𝑑𝑡 are linear functions of 𝑧𝑡

𝑏𝑡 = 𝑈𝑏 𝑧𝑡
𝑑𝑡 = 𝑈 𝑑 𝑧𝑡
• Consumption and physical investment goods are produced using the following technol-
ogy

Φ𝑐 𝑐𝑡 + Φ𝑔 𝑔𝑡 + Φ𝑖 𝑖𝑡 = Γ𝑘𝑡−1 + 𝑑𝑡
𝑘𝑡 = Δ𝑘 𝑘𝑡−1 + Θ𝑘 𝑖𝑡
g_t ⋅ g_t = l_t^2
where 𝑐𝑡 is a vector of consumption goods, 𝑔𝑡 is a vector of intermediate goods, 𝑖𝑡 is a
vector of investment goods, 𝑘𝑡 is a vector of physical capital goods, and 𝑙𝑡 is the amount
of labor supplied by the representative household.
• Preferences of a representative household are described by

−(1/2) 𝔼 ∑_{t=0}^{∞} β^t [(s_t − b_t) ⋅ (s_t − b_t) + l_t^2], 0 < β < 1
𝑠𝑡 = Λℎ𝑡−1 + Π𝑐𝑡
ℎ𝑡 = Δℎ ℎ𝑡−1 + Θℎ 𝑐𝑡
where 𝑠𝑡 is a vector of consumption services, and ℎ𝑡 is a vector of household capital
stocks.
Thus, an instance of this class of economies is described by the matrices

{𝐴22 , 𝐶2 , 𝑈𝑏 , 𝑈𝑑 , Φ𝑐 , Φ𝑔 , Φ𝑖 , Γ, Δ𝑘 , Θ𝑘 , Λ, Π, Δℎ , Θℎ }

and the scalar 𝛽.

19.3 A Planning Problem

The first welfare theorem asserts that a competitive equilibrium allocation solves the follow-
ing planning problem.
Choose {c_t, s_t, i_t, h_t, k_t, g_t}_{t=0}^{∞} to maximize

−(1/2) 𝔼 ∑_{t=0}^{∞} β^t [(s_t − b_t) ⋅ (s_t − b_t) + g_t ⋅ g_t]

subject to the linear constraints

Φ𝑐 𝑐𝑡 + Φ𝑔 𝑔𝑡 + Φ𝑖 𝑖𝑡 = Γ𝑘𝑡−1 + 𝑑𝑡

𝑘𝑡 = Δ𝑘 𝑘𝑡−1 + Θ𝑘 𝑖𝑡

ℎ𝑡 = Δℎ ℎ𝑡−1 + Θℎ 𝑐𝑡

𝑠𝑡 = Λℎ𝑡−1 + Π𝑐𝑡

and

𝑧𝑡+1 = 𝐴22 𝑧𝑡 + 𝐶2 𝑤𝑡+1

𝑏𝑡 = 𝑈𝑏 𝑧𝑡

𝑑𝑡 = 𝑈 𝑑 𝑧𝑡

The DLE class in Python maps this planning problem into a linear-quadratic dynamic pro-
gramming problem and then solves it by using QuantEcon’s LQ class.
(See Section 5.5 of Hansen & Sargent (2013) [31] for a full description of how to map these
economies into an LQ setting, and how to use the solution to the LQ problem to construct
the output matrices in order to simulate the economies)
The state for the LQ problem is

x_t = ⎡ h_{t−1} ⎤
      ⎢ k_{t−1} ⎥
      ⎣  z_t    ⎦

and the control variable is 𝑢𝑡 = 𝑖𝑡 .


Once the LQ problem has been solved, the law of motion for the state is

𝑥𝑡+1 = (𝐴 − 𝐵𝐹 )𝑥𝑡 + 𝐶𝑤𝑡+1

where the optimal control law is 𝑢𝑡 = −𝐹 𝑥𝑡 .


Letting 𝐴𝑜 = 𝐴 − 𝐵𝐹 we write this law of motion as

𝑥𝑡+1 = 𝐴𝑜 𝑥𝑡 + 𝐶𝑤𝑡+1

19.4 Example Economies

Each of the example economies shown here will share a number of components. In particular,
for each we will consider preferences of the form

−(1/2) 𝔼 ∑_{t=0}^{∞} β^t [(s_t − b_t)^2 + l_t^2], 0 < β < 1

𝑠𝑡 = 𝜆ℎ𝑡−1 + 𝜋𝑐𝑡

ℎ𝑡 = 𝛿ℎ ℎ𝑡−1 + 𝜃ℎ 𝑐𝑡

𝑏𝑡 = 𝑈𝑏 𝑧𝑡

Technology of the form

𝑐𝑡 + 𝑖𝑡 = 𝛾1 𝑘𝑡−1 + 𝑑1𝑡

𝑘𝑡 = 𝛿𝑘 𝑘𝑡−1 + 𝑖𝑡

𝑔𝑡 = 𝜙1 𝑖𝑡 , 𝜙1 > 0

[d_{1t}, 0]′ = U_d z_t

And information of the form

          ⎡ 1   0    0  ⎤       ⎡ 0  0 ⎤
z_{t+1} = ⎢ 0  0.8   0  ⎥ z_t + ⎢ 1  0 ⎥ w_{t+1}
          ⎣ 0   0   0.5 ⎦       ⎣ 0  1 ⎦

U_b = [30  0  0]

U_d = ⎡ 5  1  0 ⎤
      ⎣ 0  0  0 ⎦

We shall vary {𝜆, 𝜋, 𝛿ℎ , 𝜃ℎ , 𝛾1 , 𝛿𝑘 , 𝜙1 } and the initial state 𝑥0 across the three economies.

19.4.1 Example 1: Hall (1978)

First, we set parameters such that consumption follows a random walk. In particular, we set

λ = 0, π = 1, γ_1 = 0.1, φ_1 = 0.00001, δ_k = 0.95, β = 1/1.05

(In this economy δ_h and θ_h are arbitrary as household capital does not enter the equation for consumption services. We set them to values that will become useful in Example 3.)
It is worth noting that this choice of parameter values ensures that 𝛽(𝛾1 + 𝛿𝑘 ) = 1.
For simulations of this economy, we choose an initial condition of


𝑥0 = [ 5 150 1 0 0 ]

In [3]: # Parameter Matrices


γ_1 = 0.1
ϕ_1 = 1e-5

ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k = (np.array([[1], [0]]),


np.array([[0], [1]]),
np.array([[1], [-ϕ_1]]),
np.array([[γ_1], [0]]),
np.array([[.95]]),
np.array([[1]]))

β, l_λ, π_h, δ_h, θ_h = (np.array([[1 / 1.05]]),


np.array([[0]]),
np.array([[1]]),
np.array([[.9]]),
np.array([[1]]) - np.array([[.9]]))

a22, c2, ub, ud = (np.array([[1, 0, 0],


[0, 0.8, 0],
[0, 0, 0.5]]),
np.array([[0, 0],
[1, 0],
[0, 1]]),
np.array([[30, 0, 0]]),
np.array([[5, 1, 0],
[0, 0, 0]]))

# Initial condition
x0 = np.array([[5], [150], [1], [0], [0]])

info1 = (a22, c2, ub, ud)


tech1 = (ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k)
pref1 = (β, l_λ, π_h, δ_h, θ_h)

These parameter values are used to define an economy of the DLE class.

In [4]: econ1 = DLE(info1, tech1, pref1)

We can then simulate the economy for a chosen length of time, from our initial state vector
𝑥0

In [5]: econ1.compute_sequence(x0, ts_length=300)

The economy stores the simulated values for each variable. Below we plot consumption and
investment

In [6]: # This is the right panel of Fig 5.7.1 from p.105 of HS2013
plt.plot(econ1.c[0], label='Cons.')
plt.plot(econ1.i[0], label='Inv.')
plt.legend()
plt.show()

Inspection of the plot shows that the sample paths of consumption and investment drift in
ways that suggest that each has or nearly has a random walk or unit root component.
This is confirmed by checking the eigenvalues of 𝐴𝑜

In [7]: econ1.endo, econ1.exo

Out[7]: (array([0.9, 1. ]), array([1. , 0.8, 0.5]))

The endogenous eigenvalue that appears to be unity reflects the random walk character of
consumption in Hall’s model.
• Actually, the largest endogenous eigenvalue is very slightly below 1.
• This outcome comes from the small adjustment cost 𝜙1 .

In [8]: econ1.endo[1]

Out[8]: 0.9999999999904767

The fact that the largest endogenous eigenvalue is strictly less than unity in modulus means
that it is possible to compute the non-stochastic steady state of consumption, investment and
capital.

In [9]: econ1.compute_steadystate()
np.set_printoptions(precision=3, suppress=True)
print(econ1.css, econ1.iss, econ1.kss)

[[5.]] [[-0.]] [[-0.002]]

However, the near-unity endogenous eigenvalue means that these steady state values are of
little relevance.

19.4.2 Example 2: Altered Growth Condition

We generate our next economy by making two alterations to the parameters of Example 1.
• First, we raise 𝜙1 from 0.00001 to 1.
– This will lower the endogenous eigenvalue that is close to 1, causing the economy
to head more quickly to the vicinity of its non-stochastic steady-state.
• Second, we raise 𝛾1 from 0.1 to 0.15.
– This has the effect of raising the optimal steady-state value of capital.
We also start the economy off from an initial condition with a lower capital stock


𝑥0 = [ 5 20 1 0 0 ]

Therefore, we need to define the following new parameters

In [10]: γ2 = 0.15
γ22 = np.array([[γ2], [0]])

ϕ_12 = 1
ϕ_i2 = np.array([[1], [-ϕ_12]])

tech2 = (ϕ_c, ϕ_g, ϕ_i2, γ22, δ_k, θ_k)

x02 = np.array([[5], [20], [1], [0], [0]])

Creating the DLE class and then simulating gives the following plot for consumption and in-
vestment

In [11]: econ2 = DLE(info1, tech2, pref1)

econ2.compute_sequence(x02, ts_length=300)

plt.plot(econ2.c[0], label='Cons.')
plt.plot(econ2.i[0], label='Inv.')
plt.legend()
plt.show()

Simulating our new economy shows that consumption grows quickly in the early stages of the
sample.
However, it then settles down around the new non-stochastic steady-state level of consump-
tion of 17.5, which we find as follows

In [12]: econ2.compute_steadystate()
print(econ2.css, econ2.iss, econ2.kss)

[[17.5]] [[6.25]] [[125.]]

The economy converges faster to this level than in Example 1 because the largest endogenous
eigenvalue of 𝐴𝑜 is now significantly lower than 1.

In [13]: econ2.endo, econ2.exo

Out[13]: (array([0.9 , 0.952]), array([1. , 0.8, 0.5]))

19.4.3 Example 3: A Jones-Manuelli (1990) Economy

For our third economy, we choose parameter values with the aim of generating sustained
growth in consumption, investment and capital.
To do this, we set parameters so that Jones and Manuelli’s “growth condition” is just satis-
fied.
In our notation, just satisfying the growth condition is actually equivalent to setting 𝛽(𝛾1 +
𝛿𝑘 ) = 1, the condition that was necessary for consumption to be a random walk in Hall’s
model.
Thus, we lower 𝛾1 back to 0.1.

In our model, this is a necessary but not sufficient condition for growth.
To generate growth we set preference parameters to reflect habit persistence.
In particular, we set 𝜆 = −1, 𝛿ℎ = 0.9 and 𝜃ℎ = 1 − 𝛿ℎ = 0.1.
This makes preferences assume the form

−(1/2) 𝔼 ∑_{t=0}^{∞} β^t [(c_t − b_t − (1 − δ_h) ∑_{j=0}^{∞} δ_h^j c_{t−j−1})^2 + l_t^2]

These preferences reflect habit persistence
• the effective “bliss point” b_t + (1 − δ_h) ∑_{j=0}^{∞} δ_h^j c_{t−j−1} now shifts in response to a moving average of past consumption
Since 𝛿ℎ and 𝜃ℎ were defined earlier, the only change we need to make from the parameters of
Example 1 is to define the new value of 𝜆.

In [14]: l_λ2 = np.array([[-1]])


pref2 = (β, l_λ2, π_h, δ_h, θ_h)

In [15]: econ3 = DLE(info1, tech1, pref2)

We simulate this economy from the original state vector

In [16]: econ3.compute_sequence(x0, ts_length=300)

# This is the right panel of Fig 5.10.1 from p.110 of HS2013


plt.plot(econ3.c[0], label='Cons.')
plt.plot(econ3.i[0], label='Inv.')
plt.legend()
plt.show()

Thus, adding habit persistence to the Hall model of Example 1 is enough to generate sus-
tained growth in our economy.
The eigenvalues of 𝐴𝑜 in this new economy are

In [17]: econ3.endo, econ3.exo

Out[17]: (array([1.+0.j, 1.-0.j]), array([1. , 0.8, 0.5]))

We now have two unit endogenous eigenvalues. One stems from satisfying the growth condi-
tion (as in Example 1).
The other unit eigenvalue results from setting 𝜆 = −1.
To show the importance of both of these for generating growth, we consider the following ex-
periments.

19.4.4 Example 3.1: Varying Sensitivity

Next we raise 𝜆 to -0.7

In [18]: l_λ3 = np.array([[-0.7]])


pref3 = (β, l_λ3, π_h, δ_h, θ_h)

econ4 = DLE(info1, tech1, pref3)

econ4.compute_sequence(x0, ts_length=300)

plt.plot(econ4.c[0], label='Cons.')
plt.plot(econ4.i[0], label='Inv.')
plt.legend()
plt.show()

We no longer achieve sustained growth if 𝜆 is raised from -1 to -0.7.


This is related to the fact that one of the endogenous eigenvalues is now less than 1.

In [19]: econ4.endo, econ4.exo

Out[19]: (array([0.97, 1. ]), array([1. , 0.8, 0.5]))

19.4.5 Example 3.2: More Impatience

Next let’s lower 𝛽 to 0.94

In [20]: β_2 = np.array([[0.94]])


pref4 = (β_2, l_λ, π_h, δ_h, θ_h)

econ5 = DLE(info1, tech1, pref4)

econ5.compute_sequence(x0, ts_length=300)

plt.plot(econ5.c[0], label='Cons.')
plt.plot(econ5.i[0], label='Inv.')
plt.legend()
plt.show()

Growth also fails if we lower 𝛽, since we now have 𝛽(𝛾1 + 𝛿𝑘 ) < 1.


Consumption and investment explode downwards, as a lower value of 𝛽 causes the representa-
tive consumer to front-load consumption.
This explosive path shows up in the second endogenous eigenvalue now being larger than one.

In [21]: econ5.endo, econ5.exo

Out[21]: (array([0.9 , 1.013]), array([1. , 0.8, 0.5]))


Chapter 20

Lucas Asset Pricing Using DLE

20.1 Contents

• Asset Pricing Equations 20.2


• Asset Pricing Simulations 20.3
This is one of a suite of lectures that use the quantecon DLE class to instantiate models
within the [31] class of models described in detail in Recursive Models of Dynamic Linear
Economies.
In addition to what’s in Anaconda, this lecture uses the quantecon library

In [1]: !pip install --upgrade quantecon

This lecture uses the DLE class to price payout streams that are linear functions of the econ-
omy’s state vector, as well as risk-free assets that pay out one unit of the first consumption
good with certainty.
We assume basic knowledge of the class of economic environments that fall within the domain
of the DLE class.
Many details about the basic environment are contained in the lecture Growth in Dynamic
Linear Economies.
We’ll also need the following imports

In [2]: import numpy as np


import matplotlib.pyplot as plt
from quantecon import LQ
from quantecon import DLE
%matplotlib inline

We use a linear-quadratic version of an economy that Lucas (1978) [44] used to develop an
equilibrium theory of asset prices:
Preferences

−(1/2) 𝔼 ∑_{t=0}^{∞} β^t [(c_t − b_t)^2 + l_t^2] | J_0


𝑠𝑡 = 𝑐𝑡

𝑏𝑡 = 𝑈𝑏 𝑧𝑡

Technology

𝑐𝑡 = 𝑑1𝑡

𝑘𝑡 = 𝛿𝑘 𝑘𝑡−1 + 𝑖𝑡

𝑔𝑡 = 𝜙1 𝑖𝑡 , 𝜙1 > 0

[d_{1t}, 0]′ = U_d z_t

Information

          ⎡ 1   0    0  ⎤       ⎡ 0  0 ⎤
z_{t+1} = ⎢ 0  0.8   0  ⎥ z_t + ⎢ 1  0 ⎥ w_{t+1}
          ⎣ 0   0   0.5 ⎦       ⎣ 0  1 ⎦

𝑈𝑏 = [ 30 0 0 ]

U_d = ⎡ 5  1  0 ⎤
      ⎣ 0  0  0 ⎦


𝑥0 = [ 5 150 1 0 0 ]

20.2 Asset Pricing Equations

[31] show that the time t value of a permanent claim to a stream 𝑦𝑠 = 𝑈𝑎 𝑥𝑠 , 𝑠 ≥ 𝑡 is:

𝑎𝑡 = (𝑥′𝑡 𝜇𝑎 𝑥𝑡 + 𝜎𝑎 )/(𝑒1̄ 𝑀𝑐 𝑥𝑡 )

with



μ_a = ∑_{τ=0}^{∞} β^τ (A^{o′})^τ Z_a A^{oτ}

σ_a = (β/(1 − β)) trace(Z_a ∑_{τ=0}^{∞} β^τ (A^o)^τ CC′ (A^{o′})^τ)

where
Z_a = U_a M_c

The use of 𝑒1̄ indicates that the first consumption good is the numeraire.

20.3 Asset Pricing Simulations

In [3]: gam = 0
γ = np.array([[gam], [0]])
ϕ_c = np.array([[1], [0]])
ϕ_g = np.array([[0], [1]])
ϕ_1 = 1e-4
ϕ_i = np.array([[0], [-ϕ_1]])
δ_k = np.array([[.95]])
θ_k = np.array([[1]])
β = np.array([[1 / 1.05]])
ud = np.array([[5, 1, 0],
[0, 0, 0]])
a22 = np.array([[1, 0, 0],
[0, 0.8, 0],
[0, 0, 0.5]])
c2 = np.array([[0, 1, 0],
[0, 0, 1]]).T
l_λ = np.array([[0]])
π_h = np.array([[1]])
δ_h = np.array([[.9]])
θ_h = np.array([[1]]) - δ_h
ub = np.array([[30, 0, 0]])
x0 = np.array([[5, 150, 1, 0, 0]]).T

info1 = (a22, c2, ub, ud)


tech1 = (ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k)
pref1 = (β, l_λ, π_h, δ_h, θ_h)

In [4]: econ1 = DLE(info1, tech1, pref1)

After specifying a “Pay” matrix, we simulate the economy.


The particular choice of “Pay” used below means that we are pricing a perpetual claim on the
endowment process 𝑑1𝑡

In [5]: econ1.compute_sequence(x0, ts_length=100, Pay=np.array([econ1.Sd[0, :]]))

The graph below plots the price of this claim over time:

In [6]: ### Fig 7.12.1 from p.147 of HS2013


plt.plot(econ1.Pay_Price, label='Price of Tree')
plt.legend()
plt.show()

The next plot displays the realized gross rate of return on this “Lucas tree” as well as on a
risk-free one-period bond:

In [7]: ### Left panel of Fig 7.12.2 from p.148 of HS2013


plt.plot(econ1.Pay_Gross, label='Tree')
plt.plot(econ1.R1_Gross, label='Risk-Free')
plt.legend()
plt.show()

In [8]: np.corrcoef(econ1.Pay_Gross[1:, 0], econ1.R1_Gross[1:, 0])

Out[8]: array([[ 1. , -0.46946739],


[-0.46946739, 1. ]])

Above we have also calculated the correlation coefficient between these two returns.
To give an idea of how the term structure of interest rates moves in this economy, the next
plot displays the net rates of return on one-period and five-period risk-free bonds:

In [9]: ### Right panel of Fig 7.12.2 from p.148 of HS2013


plt.plot(econ1.R1_Net, label='One-Period')
plt.plot(econ1.R5_Net, label='Five-Period')
plt.legend()
plt.show()

From the above plot, we can see the tendency of the term structure to slope up when rates
are low and to slope down when rates are high.
Comparing it to the previous plot of the price of the “Lucas tree”, we can also see that net
rates of return are low when the price of the tree is high, and vice versa.
We now plot the realized gross rate of return on a “Lucas tree” as well as on a risk-free one-
period bond when the autoregressive parameter for the endowment process is reduced to 0.4:

In [10]: a22_2 = np.array([[1, 0, 0],


[0, 0.4, 0],
[0, 0, 0.5]])
info2 = (a22_2, c2, ub, ud)

econ2 = DLE(info2, tech1, pref1)


econ2.compute_sequence(x0, ts_length=100, Pay=np.array([econ2.Sd[0, :]]))

In [11]: ### Left panel of Fig 7.12.3 from p.148 of HS2013


plt.plot(econ2.Pay_Gross, label='Tree')
plt.plot(econ2.R1_Gross, label='Risk-Free')
plt.legend()
plt.show()

In [12]: np.corrcoef(econ2.Pay_Gross[1:, 0], econ2.R1_Gross[1:, 0])

Out[12]: array([[ 1. , -0.66478407],


[-0.66478407, 1. ]])

The correlation between these two gross rates is now more negative.
Next, we again plot the net rates of return on one-period and five-period risk-free bonds:

In [13]: ### Right panel of Fig 7.12.3 from p.148 of HS2013


plt.plot(econ2.R1_Net, label='One-Period')
plt.plot(econ2.R5_Net, label='Five-Period')
plt.legend()
plt.show()

We can see the tendency of the term structure to slope up when rates are low (and down
when rates are high) has been accentuated relative to the first instance of our economy.
Chapter 21

IRFs in Hall Models

21.1 Contents

• Example 1: Hall (1978) 21.2


• Example 2: Higher Adjustment Costs 21.3
• Example 3: Durable Consumption Goods 21.4
This is another member of a suite of lectures that use the quantecon DLE class to instantiate
models within the [31] class of models described in detail in Recursive Models of Dynamic
Linear Economies.
In addition to what’s in Anaconda, this lecture uses the quantecon library.

In [1]: !pip install --upgrade quantecon

We’ll make these imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline
from quantecon import LQ
from quantecon import DLE

This lecture shows how the DLE class can be used to create impulse response functions for
three related economies, starting from Hall (1978) [24].
Knowledge of the basic economic environment is assumed.
See the lecture “Growth in Dynamic Linear Economies” for more details.

21.2 Example 1: Hall (1978)

First, we set parameters to make consumption (almost) follow a random walk.


We set

λ = 0, π = 1, γ_1 = 0.1, φ_1 = 0.00001, δ_k = 0.95, β = 1/1.05


(In this example 𝛿ℎ and 𝜃ℎ are arbitrary as household capital does not enter the equation for
consumption services.
We set them to values that will become useful in Example 3)
It is worth noting that this choice of parameter values ensures that 𝛽(𝛾1 + 𝛿𝑘 ) = 1.
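A quick arithmetic check of this claim (our addition, not part of the original code):

print((1 / 1.05) * (0.1 + 0.95))  # β(γ_1 + δ_k) = 1.05 / 1.05 = 1.0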
For simulations of this economy, we choose an initial condition of:


𝑥0 = [ 5 150 1 0 0 ]

In [3]: γ_1 = 0.1


γ = np.array([[γ_1], [0]])
ϕ_c = np.array([[1], [0]])
ϕ_g = np.array([[0], [1]])
ϕ_1 = 1e-5
ϕ_i = np.array([[1], [-ϕ_1]])
δ_k = np.array([[.95]])
θ_k = np.array([[1]])
β = np.array([[1 / 1.05]])
l_λ = np.array([[0]])
π_h = np.array([[1]])
δ_h = np.array([[.9]])
θ_h = np.array([[1]])
a22 = np.array([[1, 0, 0],
[0, 0.8, 0],
[0, 0, 0.5]])
c2 = np.array([[0, 0],
[1, 0],
[0, 1]])
ud = np.array([[5, 1, 0],
[0, 0, 0]])
ub = np.array([[30, 0, 0]])
x0 = np.array([[5], [150], [1], [0], [0]])

info1 = (a22, c2, ub, ud)


tech1 = (ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k)
pref1 = (β, l_λ, π_h, δ_h, θ_h)

These parameter values are used to define an economy of the DLE class.
We can then simulate the economy for a chosen length of time, from our initial state vector
𝑥0 .
The economy stores the simulated values for each variable. Below we plot consumption and
investment:

In [4]: econ1 = DLE(info1, tech1, pref1)


econ1.compute_sequence(x0, ts_length=300)

# This is the right panel of Fig 5.7.1 from p.105 of HS2013


plt.plot(econ1.c[0], label='Cons.')
plt.plot(econ1.i[0], label='Inv.')
plt.legend()
plt.show()

The DLE class can be used to create impulse response functions for each of the endogenous
variables: {𝑐𝑡 , 𝑠𝑡 , ℎ𝑡 , 𝑖𝑡 , 𝑘𝑡 , 𝑔𝑡 }.
If no selector vector for the shock is specified, the default choice is to give IRFs to the first
shock in 𝑤𝑡+1 .
Below we plot the impulse response functions of investment and consumption to an endow-
ment innovation (the first shock) in the Hall model:

In [5]: econ1.irf(ts_length=40, shock=None)


# This is the left panel of Fig 5.7.1 from p.105 of HS2013
plt.plot(econ1.c_irf, label='Cons.')
plt.plot(econ1.i_irf, label='Inv.')
plt.legend()
plt.show()

It can be seen that the endowment shock has permanent effects on the level of both consump-
tion and investment, consistent with the endogenous unit eigenvalue in this economy.
Investment is much more responsive to the endowment shock at shorter time horizons.
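We can confirm the unit eigenvalue claim directly by inspecting the endogenous eigenvalues, via the same endo attribute of the DLE instance that we use for the second economy below:

print(econ1.endo)  # the largest endogenous eigenvalue should be (essentially) unity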

21.3 Example 2: Higher Adjustment Costs

We generate our next economy by making only one change to the parameters of Example 1:
we raise the parameter associated with the cost of adjusting capital, 𝜙1, from 0.00001 to 0.2.
This will lower the endogenous eigenvalue that is unity in Example 1 to a value slightly below
1.

In [6]: ϕ_12 = 0.2


ϕ_i2 = np.array([[1], [-ϕ_12]])
tech2 = (ϕ_c, ϕ_g, ϕ_i2, γ, δ_k, θ_k)

econ2 = DLE(info1, tech2, pref1)


econ2.compute_sequence(x0, ts_length = 300)

# This is the right panel of Fig 5.8.1 from p.106 of HS2013


plt.plot(econ2.c[0], label='Cons.')
plt.plot(econ2.i[0], label='Inv.')
plt.legend()
plt.show()

In [7]: econ2.irf(ts_length=40,shock=None)
# This is the left panel of Fig 5.8.1 from p.106 of HS2013
plt.plot(econ2.c_irf,label='Cons.')
plt.plot(econ2.i_irf,label='Inv.')
plt.legend()
plt.show()

In [8]: econ2.endo

Out[8]: array([0.9 , 0.99657126])

In [9]: econ2.compute_steadystate()
print(econ2.css, econ2.iss, econ2.kss)

[[5.]] [[1.38488458e-12]] [[2.76981205e-11]]

The first graph shows that there seems to be a downward trend in both consumption and in-
vestment.
This is a consequence of the decrease in the largest endogenous eigenvalue from unity in the
earlier economy, caused by the higher adjustment cost.
The present economy has a nonstochastic steady state value of 5 for consumption and 0 for
both capital and investment.
Because the largest endogenous eigenvalue is still close to 1, the economy heads only slowly
towards these mean values.
The impulse response functions now show that an endowment shock does not have a perma-
nent effect on the levels of either consumption or investment.

21.4 Example 3: Durable Consumption Goods

We generate our third economy by raising 𝜙1 further, to 1.0. We also raise the production
function parameter from 0.1 to 0.15 (which raises the non-stochastic steady state value of
capital above zero).
We also change the specification of preferences to make the consumption good durable.
Specifically, we allow for a single durable household good obeying:

ℎ𝑡 = 𝛿ℎ ℎ𝑡−1 + 𝑐𝑡 , 0 < 𝛿ℎ < 1

Services are related to the stock of durables at the beginning of the period:

𝑠𝑡 = 𝜆ℎ𝑡−1 , 𝜆 > 0

And preferences are ordered by:

$$-\frac{1}{2}\, \mathbb{E} \sum_{t=0}^{\infty} \beta^t \left[ (\lambda h_{t-1} - b_t)^2 + l_t^2 \right] \Big| J_0$$

To implement this, we set 𝜆 = 0.1 and 𝜋 = 0 (we have already set 𝜃ℎ = 1 and 𝛿ℎ = 0.9).
We start from an initial condition that makes consumption begin near its non-stochastic
steady state.

In [10]: ϕ_13 = 1
ϕ_i3 = np.array([[1], [-ϕ_13]])

γ_12 = 0.15
γ_2 = np.array([[γ_12], [0]])

l_λ2 = np.array([[0.1]])
π_h2 = np.array([[0]])

x01 = np.array([[150], [100], [1], [0], [0]])

tech3 = (ϕ_c, ϕ_g, ϕ_i3, γ_2, δ_k, θ_k)


pref2 = (β, l_λ2, π_h2, δ_h, θ_h)

econ3 = DLE(info1, tech3, pref2)


econ3.compute_sequence(x01, ts_length=300)

# This is the right panel of Fig 5.11.1 from p.111 of HS2013


plt.plot(econ3.c[0], label='Cons.')
plt.plot(econ3.i[0], label='Inv.')
plt.legend()
plt.show()

In contrast to Hall’s original model of Example 1, it is now investment that is much smoother
than consumption.
This illustrates how making consumption goods durable tends to undo the strong consump-
tion smoothing result that Hall obtained.

In [11]: econ3.irf(ts_length=40, shock=None)


# This is the left panel of Fig 5.11.1 from p.111 of HS2013
plt.plot(econ3.c_irf, label='Cons.')
plt.plot(econ3.i_irf, label='Inv.')
plt.legend()
plt.show()

The impulse response functions confirm that consumption is now much more responsive to an
endowment shock (and investment less so) than in Example 1.
As in Example 2, the endowment shock has permanent effects on neither variable.
Chapter 22

Permanent Income Model using the


DLE Class

22.1 Contents

• The Permanent Income Model 22.2


This lecture is part of a suite of lectures that use the quantecon DLE class to instantiate
models within the [31] class of models described in detail in Recursive Models of Dynamic
Linear Economies.
In addition to what’s included in Anaconda, this lecture uses the quantecon library.

In [1]: !pip install --upgrade quantecon

This lecture adds a third solution method for the linear-quadratic-Gaussian permanent in-
come model with 𝛽𝑅 = 1, complementing the other two solution methods described in
Optimal Savings I: The Permanent Income Model and Optimal Savings II: LQ Techniques
and this Jupyter notebook http://nbviewer.jupyter.org/github/QuantEcon/QuantEcon.notebooks/blob/master/permanent_income.ipynb.
The additional solution method uses the DLE class.
In this way, we map the permanent income model into the framework of Hansen & Sargent
(2013) “Recursive Models of Dynamic Linear Economies” [31].
We’ll also require the following imports

In [2]: import quantecon as qe


import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
from quantecon import DLE

np.set_printoptions(suppress=True, precision=4)

22.2 The Permanent Income Model

The LQ permanent income model is an example of a savings problem.


A consumer has preferences over consumption streams that are ordered by the utility func-
tional


$$E_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \tag{1}$$

where 𝐸𝑡 is the mathematical expectation conditioned on the consumer’s time 𝑡 information,


𝑐𝑡 is time 𝑡 consumption, 𝑢(𝑐) is a strictly concave one-period utility function, and 𝛽 ∈ (0, 1)
is a discount factor.
The LQ model gets its name partly from assuming that the utility function 𝑢 is quadratic:

$$u(c) = -.5\,(c - \gamma)^2$$

where 𝛾 > 0 is a bliss level of consumption.


The consumer maximizes the utility functional (1) by choosing a consumption, borrowing
plan $\{c_t, b_{t+1}\}_{t=0}^{\infty}$ subject to the sequence of budget constraints

$$c_t + b_t = R^{-1} b_{t+1} + y_t, \quad t \geq 0 \tag{2}$$

where 𝑦𝑡 is an exogenous stationary endowment process, 𝑅 is a constant gross risk-free inter-


est rate, 𝑏𝑡 is one-period risk-free debt maturing at 𝑡, and 𝑏0 is a given initial condition.
We shall assume that 𝑅−1 = 𝛽.
Equation (2) is linear.
We use another set of linear equations to model the endowment process.
In particular, we assume that the endowment process has the state-space representation

$$z_{t+1} = A_{22} z_t + C_2 w_{t+1}$$
$$y_t = U_y z_t \tag{3}$$

where 𝑤𝑡+1 is an IID process with mean zero and identity contemporaneous covariance ma-
trix, 𝐴22 is a stable matrix, its eigenvalues being strictly below unity in modulus, and 𝑈𝑦 is a
selection vector that identifies 𝑦 with a particular linear combination of the 𝑧𝑡 .
We impose the following condition on the consumption, borrowing plan:


$$E_0 \sum_{t=0}^{\infty} \beta^t b_t^2 < +\infty \tag{4}$$

This condition suffices to rule out Ponzi schemes.


(We impose this condition to rule out a borrow-more-and-more plan that would allow the
household to enjoy bliss consumption forever)
The state vector confronting the household at 𝑡 is

$$x_t = \begin{bmatrix} z_t \\ b_t \end{bmatrix}$$

where 𝑏𝑡 is its one-period debt falling due at the beginning of period 𝑡 and 𝑧𝑡 contains all
variables useful for forecasting its future endowment.
We assume that {𝑦𝑡 } follows a second order univariate autoregressive process:

𝑦𝑡+1 = 𝛼 + 𝜌1 𝑦𝑡 + 𝜌2 𝑦𝑡−1 + 𝜎𝑤𝑡+1

22.2.1 Solution with the DLE Class

One way of solving this model is to map the problem into the framework outlined in Section
4.8 of [31] by setting up our technology, information and preference matrices as follows:
Technology: $\phi_c = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\phi_g = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, $\phi_i = \begin{bmatrix} -1 \\ -0.00001 \end{bmatrix}$, $\Gamma = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$, $\Delta_k = 0$, $\Theta_k = R$.

Information: $A_{22} = \begin{bmatrix} 1 & 0 & 0 \\ \alpha & \rho_1 & \rho_2 \\ 0 & 1 & 0 \end{bmatrix}$, $C_2 = \begin{bmatrix} 0 \\ \sigma \\ 0 \end{bmatrix}$, $U_b = \begin{bmatrix} \gamma & 0 & 0 \end{bmatrix}$, $U_d = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$.
Preferences: Λ = 0, Π = 1, Δℎ = 0, Θℎ = 0.
We set parameters
𝛼 = 10, 𝛽 = 0.95, 𝜌1 = 0.9, 𝜌2 = 0, 𝜎 = 1
(The value of 𝛾 does not affect the optimal decision rule)
The chosen matrices mean that the household’s technology is:

$$c_t + k_{t-1} = i_t + y_t$$
$$\frac{k_t}{R} = i_t$$
$$l_t^2 = (0.00001)^2 \, i_t^2$$

Combining the first two of these gives the budget constraint of the permanent income model,
where 𝑘𝑡 = 𝑏𝑡+1 .
The third equation is a very small penalty on debt-accumulation to rule out Ponzi schemes.
We set up this instance of the DLE class below:

In [3]: α, β, ρ_1, ρ_2, σ = 10, 0.95, 0.9, 0, 1

γ = np.array([[-1], [0]])
ϕ_c = np.array([[1], [0]])
ϕ_g = np.array([[0], [1]])
ϕ_1 = 1e-5
ϕ_i = np.array([[-1], [-ϕ_1]])
δ_k = np.array([[0]])
θ_k = np.array([[1 / β]])
β = np.array([[β]])
l_λ = np.array([[0]])
π_h = np.array([[1]])

δ_h = np.array([[0]])
θ_h = np.array([[0]])

a22 = np.array([[1, 0, 0],


[α, ρ_1, ρ_2],
[0, 1, 0]])

c2 = np.array([[0], [σ], [0]])


ud = np.array([[0, 1, 0],
[0, 0, 0]])
ub = np.array([[100, 0, 0]])

x0 = np.array([[0], [0], [1], [0], [0]])

info1 = (a22, c2, ub, ud)


tech1 = (ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k)
pref1 = (β, l_λ, π_h, δ_h, θ_h)
econ1 = DLE(info1, tech1, pref1)

To check the solution of this model with that from the LQ problem, we select the 𝑆𝑐 matrix
from the DLE class.
The solution to the DLE economy has:

𝑐𝑡 = 𝑆𝑐 𝑥𝑡

In [4]: econ1.Sc

Out[4]: array([[ 0. , -0.05 , 65.5172, 0.3448, 0. ]])

The state vector in the DLE class is:

$$x_t = \begin{bmatrix} h_{t-1} \\ k_{t-1} \\ z_t \end{bmatrix}$$

where 𝑘𝑡−1 plays the role of 𝑏𝑡 in the permanent income model.


The state vector in the LQ problem is $\begin{bmatrix} z_t \\ b_t \end{bmatrix}$.
Consequently, the relevant elements of econ1.Sc coincide with those of −𝐹 obtained when we
apply other approaches to the same model in the lecture Optimal Savings II: LQ Techniques
and this Jupyter notebook http://nbviewer.jupyter.org/github/QuantEcon/QuantEcon.notebooks/blob/master/permanent_income.ipynb.
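As a further cross-check (our addition), with $\rho_2 = 0$ the standard $\beta R = 1$ permanent income consumption function $c_t = -(1-\beta) b_t + \bar{y} + \frac{1-\beta}{1-\beta\rho_1}(y_t - \bar{y})$, where $\bar{y} = \alpha/(1-\rho_1)$, reproduces these coefficients by hand:

β_, ρ_, α_ = 0.95, 0.9, 10
ȳ = α_ / (1 - ρ_)                   # unconditional mean income = 100
coef_b = -(1 - β_)                  # coefficient on b_t
coef_y = (1 - β_) / (1 - β_ * ρ_)   # coefficient on y_t
const = ȳ * (1 - coef_y)            # constant term
print(coef_b, const, coef_y)        # ≈ -0.05, 65.5172, 0.3448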
The plot below quickly replicates the first two figures of that lecture and that notebook to
confirm that the solutions are the same

In [5]: fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 5))

for i in range(25):
econ1.compute_sequence(x0, ts_length=150)
ax1.plot(econ1.c[0], c='g')

ax1.plot(econ1.d[0], c='b')
ax1.plot(econ1.c[0], label='Consumption', c='g')
ax1.plot(econ1.d[0], label='Income', c='b')
ax1.legend()

for i in range(25):
econ1.compute_sequence(x0, ts_length=150)
ax2.plot(econ1.k[0], color='r')
ax2.plot(econ1.k[0], label='Debt', c='r')
ax2.legend()
plt.show()
Chapter 23

Rosen Schooling Model

23.1 Contents

• A One-Occupation Model 23.2


• Mapping into HS2013 Framework 23.3
This lecture is yet another part of a suite of lectures that use the quantecon DLE class to in-
stantiate models within the [31] class of models described in detail in Recursive Models of
Dynamic Linear Economies.
In addition to what’s included in Anaconda, this lecture uses the quantecon library

In [1]: !pip install --upgrade quantecon

We’ll also need the following imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
from quantecon import LQ
from collections import namedtuple
from quantecon import DLE
from math import sqrt
%matplotlib inline

23.2 A One-Occupation Model

Ryoo and Rosen’s (2004) [56] partial equilibrium model determines


• a stock of “Engineers” 𝑁𝑡
• a number of new entrants in engineering school, 𝑛𝑡
• the wage rate of engineers, 𝑤𝑡
It takes k periods of schooling to become an engineer.
The model consists of the following equations:
• a demand curve for engineers:

𝑤𝑡 = −𝛼𝑑 𝑁𝑡 + 𝜖𝑑𝑡


• a time-to-build structure of the education process:

𝑁𝑡+𝑘 = 𝛿𝑁 𝑁𝑡+𝑘−1 + 𝑛𝑡
• a definition of the discounted present value of each new engineering student:


$$v_t = \beta^k\, \mathbb{E} \sum_{j=0}^{\infty} (\beta \delta_N)^j w_{t+k+j}$$

• a supply curve of new students driven by present value 𝑣𝑡 :

𝑛𝑡 = 𝛼𝑠 𝑣𝑡 + 𝜖𝑠𝑡

23.3 Mapping into HS2013 Framework

We represent this model in the [31] framework by


• sweeping the time-to-build structure and the demand for engineers into the household
technology, and
• putting the supply of engineers into the technology for producing goods

23.3.1 Preferences

$$\Pi = 0, \quad \Lambda = \begin{bmatrix} \alpha_d & 0 & \cdots & 0 \end{bmatrix}, \quad \Delta_h = \begin{bmatrix} \delta_N & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}, \quad \Theta_h = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$

where Λ is a k+1 x 1 matrix, Δℎ is a k+1 x k+1 matrix, and Θℎ is a k+1 x 1 matrix.


This specification sets 𝑁𝑡 = ℎ1𝑡−1 , 𝑛𝑡 = 𝑐𝑡 , ℎ𝜏+1,𝑡−1 = 𝑛𝑡−(𝑘−𝜏) for 𝜏 = 1, ..., 𝑘.
Below we set things up so that the number of years of education, k, can be varied.
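For concreteness, here is a small sketch (our addition) of that construction for k = 2, using the same recipe as the code later in this lecture:

k = 2
δ_n = np.array([[0.95]])
d1 = np.vstack((δ_n, np.zeros((k - 1, 1))))             # first column: δ_N, then zeros
δ_h = np.vstack((np.hstack((d1, np.eye(k))),            # shift register for past entrants
                 np.zeros((1, k + 1))))
θ_h = np.vstack((np.zeros((k, 1)), np.ones((1, 1))))    # new entrants load on the last state
print(δ_h)     # [[0.95 1. 0.] [0. 0. 1.] [0. 0. 0.]]
print(θ_h.T)   # [[0. 0. 1.]]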

23.3.2 Technology

To capture Ryoo and Rosen’s [56] supply curve, we use the physical technology:

𝑐𝑡 = 𝑖𝑡 + 𝑑1𝑡

𝜓1 𝑖𝑡 = 𝑔𝑡

where 𝜓1 is inversely proportional to 𝛼𝑠 .

23.3.3 Information

Because we want 𝑏𝑡 = 𝜖𝑑𝑡 and 𝑑1𝑡 = 𝜖𝑠𝑡 , we set



$$A_{22} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \rho_s & 0 \\ 0 & 0 & \rho_d \end{bmatrix}, \quad C_2 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad U_b = \begin{bmatrix} 30 & 0 & 1 \end{bmatrix}, \quad U_d = \begin{bmatrix} 10 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

where 𝜌𝑠 and 𝜌𝑑 describe the persistence of the supply and demand shocks

In [3]: Information = namedtuple('Information', ['a22', 'c2','ub','ud'])


Technology = namedtuple('Technology', ['ϕ_c', 'ϕ_g', 'ϕ_i', 'γ', 'δ_k', 'θ_k'])

Preferences = namedtuple('Preferences', ['β', 'l_λ', 'π_h', 'δ_h', 'θ_h'])

23.3.4 Effects of Changes in Education Technology and Demand

We now study how changing


• the number of years of education required to become an engineer and
• the slope of the demand curve
affects responses to demand shocks.
To begin, we set 𝑘 = 4 and 𝛼𝑑 = 0.1

In [4]: k = 4 # Number of periods of schooling required to become an engineer

β = np.array([[1 / 1.05]])
α_d = np.array([[0.1]])
α_s = 1
ε_1 = 1e-7
λ_1 = np.ones((1, k)) * ε_1
# Use of ε_1 is a trick to acquire detectability; see HS2013 p. 228, footnote 4
l_λ = np.hstack((α_d, λ_1))
π_h = np.array([[0]])

δ_n = np.array([[0.95]])
d1 = np.vstack((δ_n, np.zeros((k - 1, 1))))
d2 = np.hstack((d1, np.eye(k)))
δ_h = np.vstack((d2, np.zeros((1, k + 1))))

θ_h = np.vstack((np.zeros((k, 1)),


np.ones((1, 1))))

ψ_1 = 1 / α_s

ϕ_c = np.array([[1], [0]])


ϕ_g = np.array([[0], [-1]])
ϕ_i = np.array([[-1], [ψ_1]])
γ = np.array([[0], [0]])

δ_k = np.array([[0]])
θ_k = np.array([[0]])

ρ_s = 0.8
ρ_d = 0.8

a22 = np.array([[1, 0, 0],


[0, ρ_s, 0],

[0, 0, ρ_d]])

c2 = np.array([[0, 0], [10, 0], [0, 10]])


ub = np.array([[30, 0, 1]])
ud = np.array([[10, 1, 0], [0, 0, 0]])

info1 = Information(a22, c2, ub, ud)


tech1 = Technology(ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k)
pref1 = Preferences(β, l_λ, π_h, δ_h, θ_h)

econ1 = DLE(info1, tech1, pref1)

We create three other instances by:

1. Raising 𝛼𝑑 to 2

2. Raising k to 7

3. Raising k to 10

In [5]: α_d = np.array([[2]])


l_λ = np.hstack((α_d, λ_1))
pref2 = Preferences(β, l_λ, π_h, δ_h, θ_h)
econ2 = DLE(info1, tech1, pref2)

α_d = np.array([[0.1]])

k = 7
λ_1 = np.ones((1, k)) * ε_1
l_λ = np.hstack((α_d, λ_1))
d1 = np.vstack((δ_n, np.zeros((k - 1, 1))))
d2 = np.hstack((d1, np.eye(k)))
δ_h = np.vstack((d2, np.zeros((1, k+1))))
θ_h = np.vstack((np.zeros((k, 1)),
np.ones((1, 1))))

Pref3 = Preferences(β, l_λ, π_h, δ_h, θ_h)


econ3 = DLE(info1, tech1, Pref3)

k = 10
λ_1 = np.ones((1, k)) * ε_1
l_λ = np.hstack((α_d, λ_1))
d1 = np.vstack((δ_n, np.zeros((k - 1, 1))))
d2 = np.hstack((d1, np.eye(k)))
δ_h = np.vstack((d2, np.zeros((1, k + 1))))
θ_h = np.vstack((np.zeros((k, 1)),
np.ones((1, 1))))

pref4 = Preferences(β, l_λ, π_h, δ_h, θ_h)


econ4 = DLE(info1, tech1, pref4)

shock_demand = np.array([[0], [1]])

econ1.irf(ts_length=25, shock=shock_demand)
econ2.irf(ts_length=25, shock=shock_demand)
econ3.irf(ts_length=25, shock=shock_demand)
econ4.irf(ts_length=25, shock=shock_demand)

The first figure plots the impulse response of 𝑛𝑡 (on the left) and 𝑁𝑡 (on the right) to a posi-
tive demand shock, for 𝛼𝑑 = 0.1 and 𝛼𝑑 = 2.
When 𝛼𝑑 = 2, the number of new students 𝑛𝑡 rises initially, but the response then turns nega-
tive.
A positive demand shock raises wages, drawing new students into the profession.
However, these new students raise 𝑁𝑡 .
The higher is 𝛼𝑑 , the larger the effect of this rise in 𝑁𝑡 on wages.
This counteracts the demand shock’s positive effect on wages, reducing the number of new
students in subsequent periods.
Consequently, when 𝛼𝑑 is lower, the effect of a demand shock on 𝑁𝑡 is larger

In [6]: fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))


ax1.plot(econ1.c_irf,label='$\\alpha_d = 0.1$')
ax1.plot(econ2.c_irf,label='$\\alpha_d = 2$')
ax1.legend()
ax1.set_title('Response of $n_t$ to a demand shock')

ax2.plot(econ1.h_irf[:, 0], label='$\\alpha_d = 0.1$')


ax2.plot(econ2.h_irf[:, 0], label='$\\alpha_d = 2$')
ax2.legend()
ax2.set_title('Response of $N_t$ to a demand shock')
plt.show()

The next figure plots the impulse response of 𝑛𝑡 (on the left) and 𝑁𝑡 (on the right) to a posi-
tive demand shock, for 𝑘 = 4, 𝑘 = 7 and 𝑘 = 10 (with 𝛼𝑑 = 0.1)

In [7]: fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))


ax1.plot(econ1.c_irf, label='$k=4$')
ax1.plot(econ3.c_irf, label='$k=7$')
ax1.plot(econ4.c_irf, label='$k=10$')
ax1.legend()
ax1.set_title('Response of $n_t$ to a demand shock')

ax2.plot(econ1.h_irf[:,0], label='$k=4$')
ax2.plot(econ3.h_irf[:,0], label='$k=7$')
ax2.plot(econ4.h_irf[:,0], label='$k=10$')
ax2.legend()

ax2.set_title('Response of $N_t$ to a demand shock')


plt.show()

Both panels in the above figure show that raising k lowers the effect of a positive demand
shock on entry into the engineering profession.
Increasing the number of periods of schooling lowers the number of new students in response
to a demand shock.
This occurs because with longer required schooling, new students ultimately benefit less from
the impact of that shock on wages.
Chapter 24

Cattle Cycles

24.1 Contents

• The Model 24.2


• Mapping into HS2013 Framework 24.3
This is another member of a suite of lectures that use the quantecon DLE class to instantiate
models within the [31] class of models described in detail in Recursive Models of Dynamic
Linear Economies.
In addition to what’s in Anaconda, this lecture uses the quantecon library.

In [1]: !pip install --upgrade quantecon

This lecture uses the DLE class to construct instances of the “Cattle Cycles” model of Rosen,
Murphy and Scheinkman (1994) [53].
That paper constructs a rational expectations equilibrium model to understand sources of
recurrent cycles in US cattle stocks and prices.
We make the following imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
from quantecon import LQ
from collections import namedtuple
from quantecon import DLE
from math import sqrt
%matplotlib inline

24.2 The Model

The model features a static linear demand curve and a “time-to-grow” structure for cattle.
Let 𝑝𝑡 be the price of slaughtered beef, 𝑚𝑡 the cost of preparing an animal for slaughter, ℎ𝑡
the holding cost for a mature animal, 𝛾1 ℎ𝑡 the holding cost for a yearling, and 𝛾0 ℎ𝑡 the hold-
ing cost for a calf.
The cost processes $\{h_t, m_t\}_{t=0}^{\infty}$ are exogenous, while the price process $\{p_t\}_{t=0}^{\infty}$ is determined
within a rational expectations equilibrium.


Let 𝑥𝑡 be the breeding stock, and 𝑦𝑡 be the total stock of cattle.


The law of motion for the breeding stock is

𝑥𝑡 = (1 − 𝛿)𝑥𝑡−1 + 𝑔𝑥𝑡−3 − 𝑐𝑡

where 𝑔 < 1 is the number of calves that each member of the breeding stock has each year,
and 𝑐𝑡 is the number of cattle slaughtered.
The total headcount of cattle is

𝑦𝑡 = 𝑥𝑡 + 𝑔𝑥𝑡−1 + 𝑔𝑥𝑡−2

This equation states that the total number of cattle equals the sum of adults, calves and
yearlings, respectively.
A representative farmer chooses {𝑐𝑡 , 𝑥𝑡 } to maximize:


$$\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left\{ p_t c_t - h_t x_t - \gamma_0 h_t (g x_{t-1}) - \gamma_1 h_t (g x_{t-2}) - m_t c_t - \frac{\psi_1}{2} x_t^2 - \frac{\psi_2}{2} x_{t-1}^2 - \frac{\psi_3}{2} x_{t-3}^2 - \frac{\psi_4}{2} c_t^2 \right\}$$

subject to the law of motion for 𝑥𝑡 , taking as given the stochastic laws of motion for the ex-
ogenous processes, the equilibrium price process, and the initial state [𝑥−1 , 𝑥−2 , 𝑥−3 ].
Remark The 𝜓𝑗 parameters are very small quadratic costs that are included for technical
reasons to make well posed and well behaved the linear quadratic dynamic programming
problem solved by the fictitious planner who in effect chooses equilibrium quantities and
shadow prices.
Demand for beef is governed by $c_t = a_0 - a_1 p_t + \tilde{d}_t$ where $\tilde{d}_t$ is a stochastic process with
mean zero, representing a demand shifter.

24.3 Mapping into HS2013 Framework

24.3.1 Preferences
We set $\Lambda = 0$, $\Delta_h = 0$, $\Theta_h = 0$, $\Pi = a_1^{-\frac{1}{2}}$ and $b_t = \Pi \tilde{d}_t + \Pi a_0$.
With these settings, the FOC for the household’s problem becomes the demand curve of the
“Cattle Cycles” model.
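To see this, here is a sketch of the algebra (our addition): the household's first-order condition equates the marginal utility of services, $\Pi(b_t - \Pi c_t)$, to the price $p_t$; substituting $b_t = \Pi \tilde{d}_t + \Pi a_0$ and using $\Pi^2 = 1/a_1$ gives

$$\frac{a_0 + \tilde{d}_t}{a_1} - \frac{c_t}{a_1} = p_t \quad \Longrightarrow \quad c_t = a_0 - a_1 p_t + \tilde{d}_t$$

which is the demand curve above.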

24.3.2 Technology

To capture the law of motion for cattle, we set

$$\Delta_k = \begin{bmatrix} 1-\delta & 0 & g \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad \Theta_k = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$

(where 𝑖𝑡 = −𝑐𝑡 ).

To capture the production of cattle, we set

$$\Phi_c = \begin{bmatrix} 1 \\ f_1 \\ 0 \\ 0 \\ -f_7 \end{bmatrix}, \quad \Phi_g = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad \Phi_i = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad \Gamma = \begin{bmatrix} 0 & 0 & 0 \\ f_1(1-\delta) & 0 & g f_1 \\ f_3 & 0 & 0 \\ 0 & f_5 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

24.3.3 Information

We set

$$A_{22} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \rho_1 & 0 & 0 \\ 0 & 0 & \rho_2 & 0 \\ 0 & 0 & 0 & \rho_3 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 15 \end{bmatrix}, \quad U_b = \begin{bmatrix} \Pi a_0 & 0 & 0 & \Pi \end{bmatrix}, \quad U_d = \begin{bmatrix} 0 \\ f_2 U_h \\ f_4 U_h \\ f_6 U_h \\ f_8 U_m \end{bmatrix}$$

To map this into our class, we set $f_1^2 = \frac{\psi_1}{2}$, $f_3^2 = \frac{\psi_2}{2}$, $f_5^2 = \frac{\psi_3}{2}$, and $2f_1f_2 = 1$, $2f_3f_4 = \gamma_0 g$,
$2f_5f_6 = \gamma_1 g$.
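A quick check (our addition) that the parameter values used in the code below satisfy these product restrictions:

f1, f3, f5 = 0.001, 0.001, 0.001
γ0, γ1, g = 0.4, 0.7, 0.85
f2, f4, f6 = 1 / (2 * f1), γ0 * g / (2 * f3), γ1 * g / (2 * f5)
print(2 * f1 * f2, 2 * f3 * f4, 2 * f5 * f6)  # 1.0, γ0*g = 0.34, γ1*g = 0.595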

In [3]: # We define namedtuples in this way as it allows us to check, for example,


# what matrices are associated with a particular technology.

Information = namedtuple('Information', ['a22', 'c2', 'ub', 'ud'])


Technology = namedtuple('Technology', ['ϕ_c', 'ϕ_g', 'ϕ_i', 'γ', 'δ_k', 'θ_k'])

Preferences = namedtuple('Preferences', ['β', 'l_λ', 'π_h', 'δ_h', 'θ_h'])

We set parameters to those used by [53]

In [4]: β = np.array([[0.909]])
lλ = np.array([[0]])

a1 = 0.5
πh = np.array([[1 / (sqrt(a1))]])
δh = np.array([[0]])
θh = np.array([[0]])

δ = 0.1
g = 0.85
f1 = 0.001
f3 = 0.001
f5 = 0.001
f7 = 0.001

ϕc = np.array([[1], [f1], [0], [0], [-f7]])

ϕg = np.array([[0, 0, 0, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])

ϕi = np.array([[1], [0], [0], [0], [0]])

γ = np.array([[ 0, 0, 0],
[f1 * (1 - δ), 0, g * f1],
[ f3, 0, 0],
[ 0, f5, 0],
[ 0, 0, 0]])

δk = np.array([[1 - δ, 0, g],
[ 1, 0, 0],
[ 0, 1, 0]])

θk = np.array([[1], [0], [0]])

ρ1 = 0
ρ2 = 0
ρ3 = 0.6
a0 = 500
γ0 = 0.4
γ1 = 0.7
f2 = 1 / (2 * f1)
f4 = γ0 * g / (2 * f3)
f6 = γ1 * g / (2 * f5)
f8 = 1 / (2 * f7)

a22 = np.array([[1, 0, 0, 0],


[0, ρ1, 0, 0],
[0, 0, ρ2, 0],
[0, 0, 0, ρ3]])

c2 = np.array([[0, 0, 0],
[1, 0, 0],
[0, 1, 0],
[0, 0, 15]])

ub = np.array([[πh * a0, 0, 0, πh]])


uh = np.array([[50, 1, 0, 0]])
um = np.array([[100, 0, 1, 0]])
ud = np.vstack(([0, 0, 0, 0],
f2 * uh, f4 * uh, f6 * uh, f8 * um))

Notice that we have set 𝜌1 = 𝜌2 = 0, so ℎ𝑡 and 𝑚𝑡 consist of a constant and a white noise
component.
We set up the economy using tuples for information, technology and preference matrices be-
low.
We also construct two extra information matrices, corresponding to cases when 𝜌3 = 1 and
𝜌3 = 0 (as opposed to the baseline case of 𝜌3 = 0.6).

In [5]: info1 = Information(a22, c2, ub, ud)


tech1 = Technology(ϕc, ϕg, ϕi, γ, δk, θk)
pref1 = Preferences(β, lλ, πh, δh, θh)

ρ3_2 = 1
a22_2 = np.array([[1, 0, 0, 0],

[0, ρ1, 0, 0],


[0, 0, ρ2, 0],
[0, 0, 0, ρ3_2]])

info2 = Information(a22_2, c2, ub, ud)

ρ3_3 = 0
a22_3 = np.array([[1, 0, 0, 0],
[0, ρ1, 0, 0],
[0, 0, ρ2, 0],
[0, 0, 0, ρ3_3]])

info3 = Information(a22_3, c2, ub, ud)

# Example of how we can look at the matrices associated with a given namedtuple
info1.a22

Out[5]: array([[1. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0.6]])

In [6]: # Use tuples to define DLE class


econ1 = DLE(info1, tech1, pref1)
econ2 = DLE(info2, tech1, pref1)
econ3 = DLE(info3, tech1, pref1)

# Calculate steady-state in baseline case and use to set the initial condition
econ1.compute_steadystate(nnc=4)
x0 = econ1.zz

In [7]: econ1.compute_sequence(x0, ts_length=100)

[53] use the model to understand the sources of recurrent cycles in total cattle stocks.
Plotting 𝑦𝑡 for a simulation of their model shows its ability to generate cycles in quantities

In [8]: # Calculation of y_t


totalstock = econ1.k[0] + g * econ1.k[1] + g * econ1.k[2]
fig, ax = plt.subplots()
ax.plot(totalstock)
ax.set_xlim((-1, 100))
ax.set_title('Total number of cattle')
plt.show()

In their Figure 3, [53] plot the impulse response functions of consumption and the breeding
stock of cattle to the demand shock, 𝑑𝑡̃ , under the three different values of 𝜌3 .
We replicate their Figure 3 below

In [9]: shock_demand = np.array([[0], [0], [1]])

econ1.irf(ts_length=25, shock=shock_demand)
econ2.irf(ts_length=25, shock=shock_demand)
econ3.irf(ts_length=25, shock=shock_demand)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))


ax1.plot(econ1.c_irf, label='$\\rho=0.6$')
ax1.plot(econ2.c_irf, label='$\\rho=1$')
ax1.plot(econ3.c_irf, label='$\\rho=0$')
ax1.set_title('Consumption response to demand shock')
ax1.legend()

ax2.plot(econ1.k_irf[:, 0], label='$\\rho=0.6$')


ax2.plot(econ2.k_irf[:, 0], label='$\\rho=1$')
ax2.plot(econ3.k_irf[:, 0], label='$\\rho=0$')
ax2.set_title('Breeding stock response to demand shock')
ax2.legend()
plt.show()

The above figures show how consumption patterns differ markedly, depending on the persis-
tence of the demand shock:
• If it is purely transitory (𝜌3 = 0) then consumption rises immediately but is later re-
duced to build stocks up again.
• If it is permanent (𝜌3 = 1), then consumption falls immediately, in order to build up
stocks to satisfy the permanent rise in future demand.
In Figure 4 of their paper, [53] plot the response to a demand shock of the breeding stock and
the total stock, for 𝜌3 = 0 and 𝜌3 = 0.6.
We replicate their Figure 4 below

In [10]: total1_irf = econ1.k_irf[:, 0] + g * econ1.k_irf[:, 1] + g * econ1.k_irf[:, 2]

total3_irf = econ3.k_irf[:, 0] + g * econ3.k_irf[:, 1] + g * econ3.k_irf[:, 2]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))


ax1.plot(econ1.k_irf[:, 0], label='Breeding Stock')
ax1.plot(total1_irf, label='Total Stock')
ax1.set_title('$\\rho=0.6$')

ax2.plot(econ3.k_irf[:, 0], label='Breeding Stock')


ax2.plot(total3_irf, label='Total Stock')
ax2.set_title('$\\rho=0$')
plt.show()

The fact that 𝑦𝑡 is a weighted moving average of 𝑥𝑡 creates a hump-shaped response of the
total stock to demand shocks, contributing to the cyclicality seen in the first graph of this
lecture.
Chapter 25

Shock Non Invertibility

25.1 Contents

• Overview 25.2
• Model 25.3
• Code 25.4

25.2 Overview

This is another member of a suite of lectures that use the quantecon DLE class to instantiate
models within the [31] class of models described in detail in Recursive Models of Dynamic
Linear Economies.
In addition to what’s in Anaconda, this lecture uses the quantecon library.

In [1]: !pip install --upgrade quantecon

We’ll make these imports:

In [2]: import numpy as np


import quantecon as qe
import matplotlib.pyplot as plt
from quantecon import LQ
from quantecon import DLE
from math import sqrt
%matplotlib inline

This lecture can be viewed as introducing an early contribution to what is now often called a
news and noise issue.
In particular, it analyzes a shock-invertibility issue that is endemic within a class of perma-
nent income models.
Technically, the invertibility problem indicates a situation in which histories of the shocks in
an econometrician’s autoregressive or Wold moving average representation span a smaller in-
formation space than do the shocks that are seen by the agents inside the econometrician’s
model.


This situation sets the stage for an econometrician who is unaware of the problem and conse-
quently misinterprets shocks and likely responses to them.
A shock-invertibility issue that is technically close to the one studied here is discussed by Eric
Leeper, Todd Walker, and Susan Yang [?] in their analysis of fiscal foresight.
A distinct shock-invertibility issue is present in the special LQ consumption smoothing model
analyzed in a quantecon lecture.

25.3 Model

We consider the following modification of Robert Hall’s (1978) model [24] in which the en-
dowment process is the sum of two orthogonal autoregressive processes:
Preferences

1 ∞
− 𝔼 ∑ 𝛽 𝑡 [(𝑐𝑡 − 𝑏𝑡 )2 + 𝑙2𝑡 ]|𝐽0
2 𝑡=0

𝑠𝑡 = 𝑐𝑡

𝑏𝑡 = 𝑈𝑏 𝑧𝑡

Technology

𝑐𝑡 + 𝑖𝑡 = 𝛾𝑘𝑡−1 + 𝑑𝑡

𝑘𝑡 = 𝛿𝑘 𝑘𝑡−1 + 𝑖𝑡

𝑔𝑡 = 𝜙1 𝑖𝑡 , 𝜙1 > 0

𝑔𝑡 ⋅ 𝑔𝑡 = 𝑙2𝑡

Information

$$z_{t+1} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.9 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix} z_t + \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 4 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix} w_{t+1}$$

𝑈𝑏 = [ 30 0 0 0 0 0 ]

$$U_d = \begin{bmatrix} 5 & 1 & 1 & 0.8 & 0.6 & 0.4 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

The preference shock is constant at 30, while the endowment process is the sum of a constant
and two orthogonal processes.
Specifically:

𝑑𝑡 = 5 + 𝑑1𝑡 + 𝑑2𝑡

𝑑1𝑡 = 0.9𝑑1𝑡−1 + 𝑤1𝑡

𝑑2𝑡 = 4𝑤2𝑡 + 0.8(4𝑤2𝑡−1 ) + 0.6(4𝑤2𝑡−2 ) + 0.4(4𝑤2𝑡−3 )

𝑑1𝑡 is a first-order AR process, while 𝑑2𝑡 is a third-order pure moving average process.

In [3]: γ_1 = 0.05


γ = np.array([[γ_1], [0]])
ϕ_c = np.array([[1], [0]])
ϕ_g = np.array([[0], [1]])
ϕ_1 = 0.00001
ϕ_i = np.array([[1], [-ϕ_1]])
δ_k = np.array([[1]])
θ_k = np.array([[1]])
β = np.array([[1 / 1.05]])
l_λ = np.array([[0]])
π_h = np.array([[1]])
δ_h = np.array([[.9]])
θ_h = np.array([[1]]) - δ_h
ud = np.array([[5, 1, 1, 0.8, 0.6, 0.4],
[0, 0, 0, 0, 0, 0]])
a22 = np.zeros((6, 6))
# Chase's great trick: fancy indexing fills the nonzero entries of a22; row 1 carries
# the AR(1) coefficient for d1, while rows 3-5 form a shift register storing lags of 4*w2
a22[[0, 1, 3, 4, 5], [0, 1, 2, 3, 4]] = np.array([1.0, 0.9, 1.0, 1.0, 1.0])
c2 = np.zeros((6, 2))
c2[[1, 2], [0, 1]] = np.array([1.0, 4.0])
ub = np.array([[30, 0, 0, 0, 0, 0]])
x0 = np.array([[5], [150], [1], [0], [0], [0], [0], [0]])

info1 = (a22, c2, ub, ud)


tech1 = (ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k)
pref1 = (β, l_λ, π_h, δ_h, θ_h)

econ1 = DLE(info1, tech1, pref1)

We define the household’s net of interest deficit as 𝑐𝑡 − 𝑑𝑡 .


Hall’s model imposes “expected present-value budget balance” in the sense that


$$\mathbb{E} \sum_{j=0}^{\infty} \beta^j (c_{t+j} - d_{t+j}) \Big| J_t = \beta^{-1} k_{t-1} \quad \forall t$$

If we define the moving average representation of (𝑐𝑡 , 𝑐𝑡 − 𝑑𝑡 ) in terms of the 𝑤𝑡 s to be:

$$\begin{bmatrix} c_t \\ c_t - d_t \end{bmatrix} = \begin{bmatrix} \sigma_1(L) \\ \sigma_2(L) \end{bmatrix} w_t$$

then Hall’s model imposes the restriction 𝜎2 (𝛽) = [0 0].


The agent inside this model sees histories of both components of the endowment process 𝑑1𝑡
and 𝑑2𝑡 .
The econometrician has data on the history of the pair [𝑐𝑡 , 𝑑𝑡 ], but not directly on the history
of 𝑤𝑡 .
The econometrician obtains a Wold representation for the process [𝑐𝑡 , 𝑐𝑡 − 𝑑𝑡 ]:

$$\begin{bmatrix} c_t \\ c_t - d_t \end{bmatrix} = \begin{bmatrix} \sigma_1^*(L) \\ \sigma_2^*(L) \end{bmatrix} u_t$$

The Appendix of chapter 8 of [31] explains why the impulse response functions in the Wold
representation estimated by the econometrician do not resemble the impulse response func-
tions that depict the response of consumption and the deficit to innovations to agents’ infor-
mation.
Technically, 𝜎2 (𝛽) = [0 0] implies that the history of 𝑢𝑡 s spans a smaller linear space than
does the history of 𝑤𝑡 s.
This means that 𝑢𝑡 will typically be a distributed lag of 𝑤𝑡 that is not concentrated at zero
lag:


$$u_t = \sum_{j=0}^{\infty} \alpha_j w_{t-j}$$

Thus, the econometrician’s news 𝑢𝑡 potentially responds belatedly to agents’ news 𝑤𝑡 .

25.4 Code

We will construct Figures from Chapter 8 Appendix E of [31] to illustrate these ideas:

In [4]: # This is Fig 8.E.1 from p.188 of HS2013

econ1.irf(ts_length=40, shock=None)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))


ax1.plot(econ1.c_irf, label='Consumption')
ax1.plot(econ1.c_irf - econ1.d_irf[:,0].reshape(40,1), label='Deficit')
ax1.legend()
ax1.set_title('Response to $w_{1t}$')

shock2 = np.array([[0], [1]])


econ1.irf(ts_length=40, shock=shock2)

ax2.plot(econ1.c_irf, label='Consumption')
ax2.plot(econ1.c_irf - econ1.d_irf[:,0].reshape(40, 1), label='Deficit')
ax2.legend()
ax2.set_title('Response to $w_{2t}$')
plt.show()

The above figure displays the impulse response of consumption and the deficit to the endow-
ment innovations.
Consumption displays the characteristic “random walk” response with respect to each innova-
tion.
Each endowment innovation leads to a temporary surplus followed by a permanent net-of-
interest deficit.
The temporary surplus just offsets the permanent deficit in terms of expected present value.
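As a minimal numerical check (our addition) of the present-value restriction 𝜎2(𝛽) = [0 0], we can verify that the discounted sum of the deficit's impulse response is approximately zero, using a long horizon so that truncation error is negligible:

β_ = 1 / 1.05
econ1.irf(ts_length=400, shock=None)  # response to the first endowment shock
deficit_irf = (econ1.c_irf - econ1.d_irf[:, 0].reshape(400, 1)).flatten()
print(deficit_irf @ β_ ** np.arange(400))  # ≈ 0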

In [5]: G_HS = np.vstack([econ1.Sc, econ1.Sc-econ1.Sd[0, :].reshape(1, 8)])


H_HS = 1e-8 * np.eye(2) # Set very small so there is no measurement error
lss_hs = qe.LinearStateSpace(econ1.A0, econ1.C, G_HS, H_HS)

hs_kal = qe.Kalman(lss_hs)
w_lss = hs_kal.whitener_lss()
ma_coefs = hs_kal.stationary_coefficients(50, 'ma')

# This is Fig 8.E.2 from p.189 of HS2013

ma_coefs = ma_coefs
jj = 50
y1_w1 = np.empty(jj)
y2_w1 = np.empty(jj)
y1_w2 = np.empty(jj)
y2_w2 = np.empty(jj)

for t in range(jj):
y1_w1[t] = ma_coefs[t][0, 0]
y1_w2[t] = ma_coefs[t][0, 1]
y2_w1[t] = ma_coefs[t][1, 0]
y2_w2[t] = ma_coefs[t][1, 1]

# This scales the impulse responses to match those in the book


y1_w1 = sqrt(hs_kal.stationary_innovation_covar()[0, 0]) * y1_w1
y2_w1 = sqrt(hs_kal.stationary_innovation_covar()[0, 0]) * y2_w1
y1_w2 = sqrt(hs_kal.stationary_innovation_covar()[1, 1]) * y1_w2
y2_w2 = sqrt(hs_kal.stationary_innovation_covar()[1, 1]) * y2_w2

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))


ax1.plot(y1_w1, label='Consumption')
ax1.plot(y2_w1, label='Deficit')

ax1.legend()
ax1.set_title('Response to $u_{1t}$')

ax2.plot(y1_w2, label='Consumption')
ax2.plot(y2_w2, label='Deficit')
ax2.legend()
ax2.set_title('Response to $u_{2t}$')
plt.show()

The above figure displays the impulse response of consumption and the deficit to the innova-
tions in the econometrician’s Wold representation
• this is the object that would be recovered from a high order vector autoregression on
the econometrician’s observations.
Consumption responds only to the first innovation
• this is indicative of the Granger causality imposed on the [𝑐𝑡 , 𝑐𝑡 − 𝑑𝑡 ] process by Hall’s
model: consumption Granger causes 𝑐𝑡 − 𝑑𝑡 , with no reverse causality.

In [6]: # This is Fig 8.E.3 from p.189 of HS2013

jj = 20
irf_wlss = w_lss.impulse_response(jj)
ycoefs = irf_wlss[1]
# Pull out the shocks
a1_w1 = np.empty(jj)
a1_w2 = np.empty(jj)
a2_w1 = np.empty(jj)
a2_w2 = np.empty(jj)

for t in range(jj):
a1_w1[t] = ycoefs[t][0, 0]
a1_w2[t] = ycoefs[t][0, 1]
a2_w1[t] = ycoefs[t][1, 0]
a2_w2[t] = ycoefs[t][1, 1]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))


ax1.plot(a1_w1, label='Consumption innov.')
ax1.plot(a2_w1, label='Deficit innov.')
ax1.set_title('Response to $w_{1t}$')
ax1.legend()

ax2.plot(a1_w2, label='Consumption innov.')


ax2.plot(a2_w2, label='Deficit innov.')
ax2.legend()
ax2.set_title('Response to $w_{2t}$')
plt.show()

The above figure displays the impulse responses of 𝑢𝑡 to 𝑤𝑡 , as depicted in:


$$u_t = \sum_{j=0}^{\infty} \alpha_j w_{t-j}$$

While the responses of the innovations to consumption are concentrated at lag zero for both
components of 𝑤𝑡 , the responses of the innovations to (𝑐𝑡 −𝑑𝑡 ) are spread over time (especially
in response to 𝑤1𝑡 ).
Thus, the innovations to (𝑐𝑡 − 𝑑𝑡 ) as revealed by the vector autoregression depend on what the
economic agent views as “old news”.
Part V

Classic Linear Models

Chapter 26

Von Neumann Growth Model (and


a Generalization)

26.1 Contents

• Notation 26.2
• Model Ingredients and Assumptions 26.3
• Dynamic Interpretation 26.4
• Duality 26.5
• Interpretation as a Game Theoretic Problem (Two-player Zero-sum Game) 26.6
This notebook uses the class Neumann to calculate key objects of a linear growth model of
John von Neumann [67] that was generalized by Kemeny, Morgenstern and Thompson [38].
Objects of interest are the maximal expansion rate (𝛼), the interest factor (𝛽), and the opti-
mal intensities (𝑥) and prices (𝑝).
In addition to watching how the towering mind of John von Neumann formulated an equilib-
rium model of price and quantity vectors in balanced growth, this notebook shows how fruit-
fully to employ the following important tools:
• a zero-sum two-player game
• linear programming
• the Perron-Frobenius theorem
We’ll begin with some imports:

In [1]: import numpy as np


import matplotlib.pyplot as plt
from scipy.linalg import solve
from scipy.optimize import fsolve, linprog
from textwrap import dedent
%matplotlib inline

np.set_printoptions(precision=2)

The code below provides the Neumann class

In [2]: class Neumann(object):


"""
This class describes the Generalized von Neumann growth model as it was
discussed in Kemeny et al. (1956, ECTA) and Gale (1960, Chapter 9.5):

Let:
n ... number of goods
m ... number of activities
A ... input matrix is m-by-n
a_{i,j} - amount of good j consumed by activity i
B ... output matrix is m-by-n
b_{i,j} - amount of good j produced by activity i

x ... intensity vector (m-vector) with non-negative entries


x'B - the vector of goods produced
x'A - the vector of goods consumed
p ... price vector (n-vector) with non-negative entries
Bp - the revenue vector for every activity
Ap - the cost of each activity

Both A and B have non-negative entries. Moreover, we assume that


(1) Assumption I (every good which is consumed is also produced):
for all j, b_{.,j} > 0, i.e. at least one entry is strictly positive
(2) Assumption II (no free lunch):
for all i, a_{i,.} > 0, i.e. at least one entry is strictly positive

Parameters
----------
A : array_like or scalar(float)
Part of the state transition equation. It should be `n x n`
B : array_like or scalar(float)
Part of the state transition equation. It should be `n x k`
"""

def __init__(self, A, B):

self.A, self.B = list(map(self.convert, (A, B)))


self.m, self.n = self.A.shape

# Check if (A, B) satisfy the basic assumptions


assert self.A.shape == self.B.shape, 'The input and output matrices \
must have the same dimensions!'
assert (self.A >= 0).all() and (self.B >= 0).all(), 'The input and \
output matrices must have only non-negative entries!'

# (1) Check whether Assumption I is satisfied:


if (np.sum(B, 0) <= 0).any():
self.AI = False
else:
self.AI = True

# (2) Check whether Assumption II is satisfied:


if (np.sum(A, 1) <= 0).any():
self.AII = False
else:
self.AII = True

def __repr__(self):
return self.__str__()

def __str__(self):

me = """
Generalized von Neumann expanding model:
- number of goods : {n}
- number of activities : {m}

Assumptions:
- AI: every column of B has a positive entry : {AI}
- AII: every row of A has a positive entry : {AII}

"""
# Irreducible : {irr}
return dedent(me.format(n=self.n, m=self.m,
AI=self.AI, AII=self.AII))

def convert(self, x):


"""
Convert array_like objects (lists of lists, floats, etc.) into
well-formed 2D NumPy arrays
"""
return np.atleast_2d(np.asarray(x))

def bounds(self):
"""
Calculate the trivial upper and lower bounds for alpha (expansion rate)
and beta (interest factor). See the proof of Theorem 9.8 in Gale (1960)
"""

n, m = self.n, self.m
A, B = self.A, self.B

f = lambda α: ((B - α * A) @ np.ones((n, 1))).max()


g = lambda β: (np.ones((1, m)) @ (B - β * A)).min()

UB = np.asscalar(fsolve(f, 1)) # Upper bound for α, β


LB = np.asscalar(fsolve(g, 2)) # Lower bound for α, β

return LB, UB

def zerosum(self, γ, dual=False):


"""
Given gamma, calculate the value and optimal strategies of a
two-player zero-sum game given by the matrix

M(gamma) = B - gamma * A

Row player maximizing, column player minimizing

Zero-sum game as an LP (primal --> α)

max (0', 1) @ (x', v)


subject to

[-M', ones(n, 1)] @ (x', v)' <= 0


(x', v) @ (ones(m, 1), 0) = 1
(x', v) >= (0', -inf)

Zero-sum game as an LP (dual --> beta)

min (0', 1) @ (p', u)


subject to
[M, -ones(m, 1)] @ (p', u)' <= 0
(p', u) @ (ones(n, 1), 0) = 1
(p', u) >= (0', -inf)

Outputs:
--------
value: scalar
value of the zero-sum game

strategy: vector
if dual = False, it is the intensity vector,
if dual = True, it is the price vector
"""

A, B, n, m = self.A, self.B, self.n, self.m


M = B - γ * A

if dual == False:
# Solve the primal LP (for details see the description)
# (1) Define the problem for v as a maximization (linprog minimizes)
c = np.hstack([np.zeros(m), -1])

# (2) Add constraints :


# ... non-negativity constraints
bounds = tuple(m * [(0, None)] + [(None, None)])
# ... inequality constraints
A_iq = np.hstack([-M.T, np.ones((n, 1))])
b_iq = np.zeros((n, 1))
# ... normalization
A_eq = np.hstack([np.ones(m), 0]).reshape(1, m + 1)
b_eq = 1

res = linprog(c, A_ub=A_iq, b_ub=b_iq, A_eq=A_eq, b_eq=b_eq,


bounds=bounds, options=dict(bland=True, tol=1e-7))

else:
# Solve the dual LP (for details see the description)
# (1) Define the problem for v as a maximization (linprog minimizes)
c = np.hstack([np.zeros(n), 1])

# (2) Add constraints :


# ... non-negativity constraints
bounds = tuple(n * [(0, None)] + [(None, None)])
# ... inequality constraints
A_iq = np.hstack([M, -np.ones((m, 1))])
b_iq = np.zeros((m, 1))
# ... normalization
A_eq = np.hstack([np.ones(n), 0]).reshape(1, n + 1)

b_eq = 1

res = linprog(c, A_ub=A_iq, b_ub=b_iq, A_eq=A_eq, b_eq=b_eq,


bounds=bounds, options=dict(bland=True, tol=1e-7))

if res.status != 0:
print(res.message)

# Pull out the required quantities


value = res.x[-1]
strategy = res.x[:-1]

return value, strategy

def expansion(self, tol=1e-8, maxit=1000):


"""
The algorithm used here is described in Hamburger-Thompson-Weil
(1967, ECTA). It is based on a simple bisection argument and utilizes
the idea that for a given γ (= α or β), the matrix "M = B - γ * A"
defines a two-player zero-sum game, where the optimal strategies are
the (normalized) intensity and price vector.

Outputs:
--------
alpha: scalar
optimal expansion rate
"""

LB, UB = self.bounds()

for iter in range(maxit):

γ = (LB + UB) / 2
ZS = self.zerosum(γ=γ)
V = ZS[0] # value of the game with γ

if V >= 0:
LB = γ
else:
UB = γ

if abs(UB - LB) < tol:


γ = (UB + LB) / 2
x = self.zerosum(γ=γ)[1]
p = self.zerosum(γ=γ, dual=True)[1]
break

return γ, x, p

def interest(self, tol=1e-8, maxit=1000):


"""
The algorithm used here is described in Hamburger-Thompson-Weil
(1967, ECTA). It is based on a simple bisection argument and utilizes
the idea that for a given gamma (= alpha or beta),
the matrix "M = B - γ * A" defines a two-player zero-sum game,
where the optimal strategies are the (normalized) intensity and price
vector

Outputs:
--------
beta: scalar
optimal interest rate
"""

LB, UB = self.bounds()

for iter in range(maxit):


γ = (LB + UB) / 2
ZS = self.zerosum(γ=γ, dual=True)
V = ZS[0]

if V > 0:
LB = γ
else:
UB = γ

if abs(UB - LB) < tol:


γ = (UB + LB) / 2
p = self.zerosum(γ=γ, dual=True)[1]
x = self.zerosum(γ=γ)[1]
break

return γ, x, p

26.2 Notation

We use the following notation.


0 denotes a vector of zeros. We call an 𝑛-vector
• positive or 𝑥 ≫ 0 if 𝑥𝑖 > 0 for all 𝑖 = 1, 2, … , 𝑛
• non-negative or 𝑥 ≥ 0 if 𝑥𝑖 ≥ 0 for all 𝑖 = 1, 2, … , 𝑛
• semi-positive or 𝑥 > 0 if 𝑥 ≥ 0 and 𝑥 ≠ 0
For two conformable vectors 𝑥 and 𝑦, 𝑥 ≫ 𝑦, 𝑥 ≥ 𝑦 and 𝑥 > 𝑦 mean 𝑥 − 𝑦 ≫ 0, 𝑥 − 𝑦 ≥ 0, and
𝑥 − 𝑦 > 0.
By default, all vectors are column vectors, 𝑥𝑇 denotes the transpose of 𝑥 (i.e. a row vector).
Let 𝜄𝑛 denote a column vector composed of 𝑛 ones, i.e. 𝜄𝑛 = (1, 1, … , 1)𝑇 .
Let 𝑒𝑖 denote the vector (of arbitrary size) containing zeros except for the 𝑖 th position where
it is one.
We denote matrices by capital letters. For an arbitrary matrix 𝐴, 𝑎𝑖,𝑗 represents the entry in
its 𝑖 th row and 𝑗 th column.
𝑎⋅𝑗 and 𝑎𝑖⋅ denote the 𝑗 th column and 𝑖 th row of 𝐴, respectively.

26.3 Model Ingredients and Assumptions

A pair (𝐴, 𝐵) of 𝑚 × 𝑛 non-negative matrices defines an economy.


• 𝑚 is the number of activities (or sectors)
• 𝑛 is the number of goods (produced and/or used in the economy)

• 𝐴 is called the input matrix; 𝑎𝑖,𝑗 denotes the amount of good 𝑗 consumed by activity 𝑖
• 𝐵 is called the output matrix; 𝑏𝑖,𝑗 represents the amount of good 𝑗 produced by activity
𝑖
Two key assumptions restrict economy (𝐴, 𝐵):
• Assumption I: (every good which is consumed is also produced)

𝑏.,𝑗 > 0 ∀𝑗 = 1, 2, … , 𝑛
• Assumption II: (no free lunch)

𝑎𝑖,. > 0 ∀𝑖 = 1, 2, … , 𝑚

A semi-positive 𝑚-vector 𝑥 denotes the levels at which activities are operated (the intensity
vector).
Therefore,
• vector 𝑥𝑇 𝐴 gives the total amount of goods used in production
• vector 𝑥𝑇 𝐵 gives total outputs
An economy (𝐴, 𝐵) is said to be productive if there exists a non-negative intensity vector 𝑥 ≥
0 such that 𝑥𝑇 𝐵 > 𝑥𝑇 𝐴.
The semi-positive 𝑛-vector 𝑝 contains prices assigned to the 𝑛 goods.
The 𝑝 vector implies cost and revenue vectors
• the vector 𝐴𝑝 tells costs of the vector of activities
• the vector 𝐵𝑝 tells revenues from the vector of activities
A property of an input-output pair (𝐴, 𝐵) called irreducibility (or indecomposability) deter-
mines whether an economy can be decomposed into multiple ‘’sub-economies”.
Definition: Given an economy (𝐴, 𝐵), the set of goods 𝑆 ⊂ {1, 2, … , 𝑛} is called an indepen-
dent subset if it is possible to produce every good in 𝑆 without consuming any good outside
𝑆. Formally, the set 𝑆 is independent if ∃𝑇 ⊂ {1, 2, … , 𝑚} (subset of activities) such that
𝑎𝑖,𝑗 = 0, ∀𝑖 ∈ 𝑇 and 𝑗 ∈ 𝑆 𝑐 and for all 𝑗 ∈ 𝑆, ∃𝑖 ∈ 𝑇 , s.t. 𝑏𝑖,𝑗 > 0. The economy is irre-
ducible if there are no proper independent subsets.
We study two examples, both coming from Chapter 9.6 of Gale [22]

In [3]: # (1) Irreducible (A, B) example: α_0 = β_0


A1 = np.array([[0, 1, 0, 0],
[1, 0, 0, 1],
[0, 0, 1, 0]])

B1 = np.array([[1, 0, 0, 0],
[0, 0, 2, 0],
[0, 1, 0, 1]])

# (2) Reducible (A, B) example: β_0 < α_0


A2 = np.array([[0, 1, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0],
[0, 0, 1, 0, 0, 1],
[0, 0, 0, 0, 1, 0]])

B2 = np.array([[1, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 2, 0],
[0, 0, 0, 1, 0, 1]])

The following code sets up our first Neumann economy or Neumann instance

In [4]: n1 = Neumann(A1, B1)


n1

Out[4]:
Generalized von Neumann expanding model:
- number of goods : 4
- number of activities : 3

Assumptions:
- AI: every column of B has a positive entry : True
- AII: every row of A has a positive entry : True

In [5]: n2 = Neumann(A2, B2)


n2

Out[5]:
Generalized von Neumann expanding model:
- number of goods : 6
- number of activities : 5

Assumptions:
- AI: every column of B has a positive entry : True
- AII: every row of A has a positive entry : True

26.4 Dynamic Interpretation

Attach a time index 𝑡 to the preceding objects, regard an economy as a dynamic system, and
study sequences

{(𝐴𝑡 , 𝐵𝑡 )}𝑡≥0 , {𝑥𝑡 }𝑡≥0 , {𝑝𝑡 }𝑡≥0

An interesting special case holds the technology process constant and investigates the dynam-
ics of quantities and prices only.
Accordingly, in the rest of this notebook, we assume that (𝐴𝑡 , 𝐵𝑡 ) = (𝐴, 𝐵) for all 𝑡 ≥ 0.
A crucial element of the dynamic interpretation involves the timing of production.
We assume that production (consumption of inputs) takes place in period 𝑡, while the associ-
ated output materializes in period 𝑡 + 1, i.e. consumption of 𝑥𝑇𝑡 𝐴 in period 𝑡 results in 𝑥𝑇𝑡 𝐵
amounts of output in period 𝑡 + 1.
These timing conventions imply the following feasibility condition:

𝑥𝑇𝑡 𝐵 ≥ 𝑥𝑇𝑡+1 𝐴 ∀𝑡 ≥ 1

which asserts that no more goods can be used today than were produced yesterday.
Accordingly, 𝐴𝑝𝑡 tells the costs of production in period 𝑡 and 𝐵𝑝𝑡 tells revenues in period 𝑡 +
1.

26.4.1 Balanced Growth

We follow John von Neumann in studying “balanced growth”.


Let ./ denote an elementwise division of one vector by another and let 𝛼 > 0 be a scalar.
Then balanced growth is a situation in which

𝑥𝑡+1 ./𝑥𝑡 = 𝛼, ∀𝑡 ≥ 0

With balanced growth, the law of motion of 𝑥 is evidently 𝑥𝑡+1 = 𝛼𝑥𝑡 and so we can rewrite
the feasibility constraint as

𝑥𝑇𝑡 𝐵 ≥ 𝛼𝑥𝑇𝑡 𝐴 ∀𝑡

In the same spirit, define 𝛽 ∈ ℝ as the interest factor per unit of time.
We assume that it is always possible to earn a gross return equal to the constant interest fac-
tor 𝛽 by investing “outside the model”.
Under this assumption about outside investment opportunities, a no-arbitrage condition gives
rise to the following (no profit) restriction on the price sequence:

𝛽𝐴𝑝𝑡 ≥ 𝐵𝑝𝑡 ∀𝑡

This says that production cannot yield a return greater than that offered by the investment
opportunity (note that we compare values in period 𝑡 + 1).
The balanced growth assumption allows us to drop time subscripts and conduct an analysis
purely in terms of a time-invariant growth rate 𝛼 and interest factor 𝛽.

26.5 Duality

The following two problems are connected by a remarkable dual relationship between the
technological and valuation characteristics of the economy:
Definition: The technological expansion problem (TEP) for the economy (𝐴, 𝐵) is to find a
semi-positive 𝑚-vector 𝑥 > 0 and a number 𝛼 ∈ ℝ, s.t.

$$\max_{\alpha} \ \alpha \quad \text{s.t.} \quad x^T B \geq \alpha x^T A$$

Theorem 9.3 of David Gale’s book [22] asserts that if Assumptions I and II are both satisfied,
then a maximum value of 𝛼 exists and it is positive.

It is called the technological expansion rate and is denoted by 𝛼0 . The associated intensity
vector 𝑥0 is the optimal intensity vector.
Definition: The economical expansion problem (EEP) for (𝐴, 𝐵) is to find a semi-positive
𝑛-vector 𝑝 > 0 and a number 𝛽 ∈ ℝ, such that

$$\min_{\beta} \ \beta \quad \text{s.t.} \quad Bp \leq \beta A p$$

Assumptions I and II imply existence of a minimum value 𝛽0 > 0 called the economic expan-
sion rate.
The corresponding price vector 𝑝0 is the optimal price vector.
Evidently, the criterion functions in technological expansion problem and the economical ex-
pansion problem are both linearly homogeneous, so the optimality of 𝑥0 and 𝑝0 are defined
only up to a positive scale factor.
For simplicity (and to emphasize a close connection to zero-sum games), in the following, we
normalize both vectors 𝑥0 and 𝑝0 to have unit length.
A standard duality argument (see Lemma 9.4. in (Gale, 1960) [22]) implies that under As-
sumptions I and II, 𝛽0 ≤ 𝛼0 .
But in the other direction, that is 𝛽0 ≥ 𝛼0 , Assumptions I and II are not sufficient.
Nevertheless, von Neumann [67] proved the following remarkable “duality-type” result con-
necting TEP and EEP.
Theorem 1 (von Neumann): If the economy (𝐴, 𝐵) satisfies Assumptions I and II, then
there exists a set (𝛾 ∗ , 𝑥0 , 𝑝0 ), where 𝛾 ∗ ∈ [𝛽0 , 𝛼0 ] ⊂ ℝ, 𝑥0 > 0 is an 𝑚-vector, 𝑝0 > 0 is an
𝑛-vector and the following holds true

𝑥𝑇0 𝐵 ≥ 𝛾 ∗ 𝑥𝑇0 𝐴
𝐵𝑝0 ≤ 𝛾 ∗ 𝐴𝑝0
𝑥𝑇0 (𝐵 − 𝛾 ∗ 𝐴) 𝑝0 = 0

Proof (Sketch): Assumption I and II imply that there exist (𝛼0 , 𝑥0 ) and (𝛽0 , 𝑝0 )
solving the TEP and EEP, respectively. If 𝛾 ∗ > 𝛼0 , then by definition of 𝛼0 , there
cannot exist a semi-positive 𝑥 that satisfies 𝑥𝑇 𝐵 ≥ 𝛾 ∗ 𝑥𝑇 𝐴. Similarly, if 𝛾 ∗ < 𝛽0 ,
there is no semi-positive 𝑝 so that 𝐵𝑝 ≤ 𝛾 ∗ 𝐴𝑝. Let 𝛾 ∗ ∈ [𝛽0 , 𝛼0 ], then 𝑥𝑇0 𝐵 ≥
𝛼0 𝑥𝑇0 𝐴 ≥ 𝛾 ∗ 𝑥𝑇0 𝐴. Moreover, 𝐵𝑝0 ≤ 𝛽0 𝐴𝑝0 ≤ 𝛾 ∗ 𝐴𝑝0 . These two inequalities imply
𝑥0 (𝐵 − 𝛾 ∗ 𝐴) 𝑝0 = 0.

Here the constant 𝛾 ∗ is both expansion and interest factor (not necessarily optimal).
We have already encountered and discussed the first two inequalities that represent feasibility
and no-profit conditions.
Moreover, the equality compactly captures the requirements that if any good grows at a rate
larger than 𝛾 ∗ (i.e., if it is oversupplied), then its price must be zero; and that if any activity
provides negative profit, it must be unused.
Therefore, these expressions encode all equilibrium conditions and Theorem I essentially
states that under Assumptions I and II there always exists an equilibrium (𝛾 ∗ , 𝑥0 , 𝑝0 ) with
balanced growth.

Note that Theorem I is silent about uniqueness of the equilibrium. In fact, it does not rule
out (trivial) cases with 𝑥𝑇0 𝐵𝑝0 = 0 so that nothing of value is produced.
To exclude such uninteresting cases, Kemeny, Morgenstern and Thompson [38] add an extra
requirement

𝑥𝑇0 𝐵𝑝0 > 0

and call the resulting equilibria economic solutions.


They show that this extra condition does not affect the existence result, while it significantly
reduces the number of (relevant) solutions.

26.6 Interpretation as a Game Theoretic Problem (Two-player


Zero-sum Game)

To compute the equilibrium (𝛾 ∗ , 𝑥0 , 𝑝0 ), we follow the algorithm proposed by Hamburger,


Thompson and Weil (1967), building on the key insight that the equilibrium (with balanced
growth) can be considered as a solution of a particular two-player zero-sum game. First, we
introduce some notation.
Consider the 𝑚 × 𝑛 matrix 𝐶 as a payoff matrix, with the entries representing payoffs from
the minimizing column player to the maximizing row player, and assume that the players
can use mixed strategies:
• the row player chooses the 𝑚-vector 𝑥 > 0, s.t. $\iota_m^T x = 1$
• the column player chooses the 𝑛-vector 𝑝 > 0, s.t. $\iota_n^T p = 1$
Definition: The 𝑚 × 𝑛 matrix game 𝐶 has the solution (𝑥∗ , 𝑝∗ , 𝑉 (𝐶)) in mixed strategies, if

(𝑥∗ )𝑇 𝐶𝑒𝑗 ≥ 𝑉 (𝐶) ∀𝑗 ∈ {1, … , 𝑛} and (𝑒𝑖 )𝑇 𝐶𝑝∗ ≤ 𝑉 (𝐶) ∀𝑖 ∈ {1, … , 𝑚}

The number 𝑉 (𝐶) is called the value of the game.


From the above definition, it is clear that the value 𝑉 (𝐶) has two alternative interpretations:
• by playing the appropriate mixed strategy, the maximizing player can assure himself at
least 𝑉 (𝐶) (no matter what the column player chooses)
• by playing the appropriate mixed strategy, the minimizing player can make sure that the
maximizing player will not get more than 𝑉 (𝐶) (irrespective of the maximizing player's
choice)
From the famous theorem of Nash (1951), it follows that there always exists a mixed strategy
Nash equilibrium for any finite two-player zero-sum game.
Moreover, von Neumann’s Minmax Theorem [66] implies that

$$V(C) = \max_x \min_p \, x^T C p = \min_p \max_x \, x^T C p = (x^*)^T C p^*$$

26.6.1 Connection with Linear Programming (LP)

Finding Nash equilibria of a finite two-player zero-sum game can be formulated as a linear
programming problem.
434 CHAPTER 26. VON NEUMANN GROWTH MODEL (AND A GENERALIZATION)

To see this, we introduce the following notation:
• For a fixed 𝑥, let 𝑣 be the value of the minimization problem: $v \equiv \min_p x^T C p = \min_j x^T C e_j$
• For a fixed 𝑝, let 𝑢 be the value of the maximization problem: $u \equiv \max_x x^T C p = \max_i (e_i)^T C p$
Then the max-min problem (the game from the maximizing player’s point of view) can be
written as the primal LP

$$
\begin{aligned}
V(C) = \max_{x, v} \ & v \\
\text{s.t. } \ & v \, \iota_n^T \leq x^T C \\
& x \geq 0 \\
& \iota_m^T x = 1
\end{aligned}
$$

while the min-max problem (the game from the minimizing player’s point of view) is the dual
LP

$$
\begin{aligned}
V(C) = \min_{u, p} \ & u \\
\text{s.t. } \ & u \, \iota_m \geq C p \\
& p \geq 0 \\
& \iota_n^T p = 1
\end{aligned}
$$
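To make the connection concrete, here is a minimal sketch of how the primal LP can be solved with scipy.optimize.linprog. It is an illustration rather than the lecture's own code; the example payoff matrix and the name solve_matrix_game are ours.

import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(C):
    """Solve max_x min_p x'Cp as the primal LP above; returns (V(C), x*)."""
    m, n = C.shape
    # Decision vector is (x_1, ..., x_m, v); linprog minimizes, so use -v
    obj = np.zeros(m + 1)
    obj[-1] = -1
    # v - (x'C)_j <= 0 for every column j
    A_ub = np.hstack([-C.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Mixed strategy: x sums to one (v is left unconstrained here)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0
    bounds = [(0, None)] * m + [(None, None)]   # x >= 0, v free
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds)
    return res.x[-1], res.x[:-1]

# Matching pennies: the value is 0 and the optimal mixture is (1/2, 1/2)
C = np.array([[1.0, -1.0], [-1.0, 1.0]])
V, x_star = solve_matrix_game(C)
print(V, x_star)

The dual LP can be set up in the same way, and by LP duality both problems return the same value 𝑉 (𝐶).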

Hamburger, Thompson and Weil [25] view the input-output pair of the economy as payoff
matrices of two-player zero-sum games. Using this interpretation, they restate Assumptions I
and II as follows

𝑉 (−𝐴) < 0 and 𝑉 (𝐵) > 0

Proof (Sketch):
• (⇒) 𝑉 (𝐵) > 0 implies 𝑥𝑇0 𝐵 ≫ 0, where 𝑥0 is a maximizing vector. Since 𝐵 is non-negative, this requires that each column of 𝐵 has at least one positive entry, which is Assumption I.
• (⇐) From Assumption I and the fact that 𝑝 > 0, it follows that 𝐵𝑝 > 0. This implies that the maximizing player can always choose 𝑥 so that 𝑥𝑇 𝐵𝑝 > 0, that is, it must be the case that 𝑉 (𝐵) > 0.

In order to (re)state Theorem I in terms of a particular two-player zero-sum game, we define


the matrix for 𝛾 ∈ ℝ

𝑀 (𝛾) ≡ 𝐵 − 𝛾𝐴

For fixed 𝛾, treating 𝑀 (𝛾) as a matrix game, we can calculate the solution of the game
• If 𝛾 > 𝛼0 , then for all 𝑥 > 0, there ∃𝑗 ∈ {1, … , 𝑛}, s.t. [𝑥𝑇 𝑀 (𝛾)]𝑗 < 0 implying that
𝑉 (𝑀 (𝛾)) < 0.
• If 𝛾 < 𝛽0 , then for all 𝑝 > 0, there ∃𝑖 ∈ {1, … , 𝑚}, s.t. [𝑀 (𝛾)𝑝]𝑖 > 0 implying that
𝑉 (𝑀 (𝛾)) > 0.
• If 𝛾 ∈ {𝛽0 , 𝛼0 }, then (by Theorem I) the optimal intensity and price vectors 𝑥0 and 𝑝0
satisfy

𝑥𝑇0 𝑀 (𝛾) ≥ 0𝑇 and 𝑀 (𝛾)𝑝0 ≤ 0

That is, (𝑥0 , 𝑝0 , 0) is a solution of the game 𝑀 (𝛾) so that 𝑉 (𝑀 (𝛽0 )) = 𝑉 (𝑀 (𝛼0 )) = 0.

• If 𝛽0 < 𝛼0 and 𝛾 ∈ (𝛽0 , 𝛼0 ), then 𝑉 (𝑀 (𝛾)) = 0.


Moreover, if 𝑥′ is optimal for the maximizing player in 𝑀 (𝛾 ′ ) for 𝛾 ′ ∈ (𝛽0 , 𝛼0 ) and 𝑝″ is op-
timal for the minimizing player in 𝑀 (𝛾 ″ ) where 𝛾 ″ ∈ (𝛽0 , 𝛾 ′ ), then (𝑥′ , 𝑝″ , 0) is a solution for
𝑀 (𝛾), ∀𝛾 ∈ (𝛾 ″ , 𝛾 ′ ).

Proof (Sketch): If 𝑥′ is optimal for the maximizing player in game 𝑀 (𝛾 ′ ), then
(𝑥′ )𝑇 𝑀 (𝛾 ′ ) ≥ 0𝑇 , and so for all 𝛾 < 𝛾 ′

$$
(x')^T M(\gamma) = (x')^T M(\gamma') + (x')^T (\gamma' - \gamma) A \geq 0^T
$$

hence 𝑉 (𝑀 (𝛾)) ≥ 0. If 𝑝″ is optimal for the minimizing player in game 𝑀 (𝛾 ″ ), then 𝑀 (𝛾 ″ )𝑝″ ≤ 0,
and so for all 𝛾 > 𝛾 ″

$$
M(\gamma) p'' = M(\gamma'') p'' + (\gamma'' - \gamma) A p'' \leq 0
$$

hence 𝑉 (𝑀 (𝛾)) ≤ 0.
It is clear from the above argument that 𝛽0 , 𝛼0 are the minimal and maximal 𝛾 for which
𝑉 (𝑀 (𝛾)) = 0.
Moreover, Hamburger et al. [25] show that the function 𝛾 ↦ 𝑉 (𝑀 (𝛾)) is continuous and non-
increasing in 𝛾.
This suggests an algorithm to compute (𝛼0 , 𝑥0 ) and (𝛽0 , 𝑝0 ) for a given input-output pair
(𝐴, 𝐵).

26.6.2 Algorithm

Hamburger, Thompson and Weil [25] propose a simple bisection algorithm to find the mini-
mal and maximal roots (i.e. 𝛽0 and 𝛼0 ) of the function 𝛾 ↦ 𝑉 (𝑀 (𝛾)).

Step 1

First, notice that we can easily find trivial upper and lower bounds for 𝛼0 and 𝛽0 .
• TEP requires that 𝑥𝑇 (𝐵 − 𝛼𝐴) ≥ 0𝑇 and 𝑥 > 0, so if 𝛼 is so large that max𝑖 {[(𝐵 −
𝛼𝐴)𝜄𝑛 ]𝑖 } < 0, then TEP ceases to have a solution.
Accordingly, let UB be the 𝛼∗ that solves max𝑖 {[(𝐵 − 𝛼∗ 𝐴)𝜄𝑛 ]𝑖 } = 0.
• Similar to the upper bound, if 𝛽 is so low that min𝑗 {[𝜄𝑇𝑚 (𝐵 − 𝛽𝐴)]𝑗 } > 0, then the EEP
has no solution and so we can define LB as the 𝛽 ∗ that solves min𝑗 {[𝜄𝑇𝑚 (𝐵 − 𝛽 ∗ 𝐴)]𝑗 } = 0.
The bounds method calculates these trivial bounds for us

In [6]: n1.bounds()


Out[6]: (1.0, 2.0)

Step 2

Compute 𝛼0 and 𝛽0
• Finding 𝛼0
1. Fix $\gamma = \frac{UB + LB}{2}$ and compute the solution of the two-player zero-sum game associated with 𝑀 (𝛾). We can use either the primal or the dual LP problem.
2. If 𝑉 (𝑀 (𝛾)) ≥ 0, then set 𝐿𝐵 = 𝛾, otherwise let 𝑈 𝐵 = 𝛾.
3. Iterate on 1. and 2. until |𝑈 𝐵 − 𝐿𝐵| < 𝜖.
• Finding 𝛽0
1. Fix $\gamma = \frac{UB + LB}{2}$ and compute the solution of the two-player zero-sum game associated with 𝑀 (𝛾). We can use either the primal or the dual LP problem.
2. If 𝑉 (𝑀 (𝛾)) > 0, then set 𝐿𝐵 = 𝛾, otherwise let 𝑈 𝐵 = 𝛾.
3. Iterate on 1. and 2. until |𝑈 𝐵 − 𝐿𝐵| < 𝜖.
Existence: Since 𝑉 (𝑀 (𝐿𝐵)) > 0 and 𝑉 (𝑀 (𝑈 𝐵)) < 0 and 𝑉 (𝑀 (⋅)) is a continuous,
nonincreasing function, there is at least one 𝛾 ∈ [𝐿𝐵, 𝑈 𝐵], s.t. 𝑉 (𝑀 (𝛾)) = 0.
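Before calling the library methods, it may help to see the bisection loop itself. The following is a minimal sketch; game_value, which stands in for 𝛾 ↦ 𝑉 (𝑀 (𝛾)), is a hypothetical function (in the class used below, this role is played by n1.zerosum).

def bisect_alpha(game_value, LB, UB, tol=1e-8):
    """Bisection for α_0, the largest root of γ ↦ V(M(γ)),
    exploiting that this function is continuous and nonincreasing."""
    while UB - LB > tol:
        γ = (LB + UB) / 2
        if game_value(γ) >= 0:
            LB = γ    # α_0 lies weakly to the right of γ
        else:
            UB = γ    # α_0 lies strictly to the left of γ
    return (LB + UB) / 2

Replacing the weak inequality with a strict one turns this into the search for 𝛽0 , exactly as in the two variants listed above.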
The zerosum method calculates the value and optimal strategies associated with a given 𝛾.

In [7]: γ = 2

print(f'Value of the game with γ = {γ}')


print(n1.zerosum(γ=γ)[0])
print('Intensity vector (from the primal)')
print(n1.zerosum(γ=γ)[1])
print('Price vector (from the dual)')
print(n1.zerosum(γ=γ, dual=True)[1])

Value of the game with γ = 2


-0.24000000097850327
Intensity vector (from the primal)
[0.32 0.28 0.4 ]
Price vector (from the dual)
[4.00e-01 3.20e-01 2.80e-01 2.54e-10]


In [8]: numb_grid = 100


γ_grid = np.linspace(0.4, 2.1, numb_grid)

value_ex1_grid = np.asarray([n1.zerosum(γ=γ_grid[i])[0]
for i in range(numb_grid)])
value_ex2_grid = np.asarray([n2.zerosum(γ=γ_grid[i])[0]
for i in range(numb_grid)])

fig, axes = plt.subplots(1, 2, figsize=(14, 5), sharey=True)


fig.suptitle(r'The function $V(M(\gamma))$', fontsize=16)

for ax, grid, N, i in zip(axes, (value_ex1_grid, value_ex2_grid),
                          (n1, n2), (1, 2)):
ax.plot(γ_grid, grid)
ax.set(title=f'Example {i}', xlabel='$\gamma$')
ax.axhline(0, c='k', lw=1)
ax.axvline(N.bounds()[0], c='r', ls='--', label='lower bound')
ax.axvline(N.bounds()[1], c='g', ls='--', label='upper bound')

plt.show()


The expansion method implements the bisection algorithm for 𝛼0 (and uses the primal LP
problem for 𝑥0 )

In [9]: α_0, x, p = n1.expansion()


print(f'α_0 = {α_0}')
print(f'x_0 = {x}')
print(f'The corresponding p from the dual = {p}')

α_0 = 1.2599210478365421
x_0 = [0.33 0.26 0.41]
The corresponding p from the dual = [4.13e-01 3.27e-01 2.60e-01 1.82e-10]


The interest method implements the bisection algorithm for 𝛽0 (and uses the dual LP prob-
lem for 𝑝0 )

In [10]: β_0, x, p = n1.interest()


print(f'β_0 = {β_0}')
print(f'p_0 = {p}')
print(f'The corresponding x from the primal = {x}')


β_0 = 1.2599210478365421
p_0 = [4.13e-01 3.27e-01 2.60e-01 1.82e-10]
The corresponding x from the primal = [0.33 0.26 0.41]


Of course, when 𝛾 ∗ is unique, it is irrelevant which one of the two methods we use.
In particular, as will be shown below, in case of an irreducible (𝐴, 𝐵) (like in Example 1), the
maximal and minimal roots of 𝑉 (𝑀 (𝛾)) necessarily coincide implying a “full duality” result,
i.e. 𝛼0 = 𝛽0 = 𝛾 ∗ , and that the expansion (and interest) rate 𝛾 ∗ is unique.

26.6.3 Uniqueness and Irreducibility

As an illustration, compute first the maximal and minimal roots of 𝑉 (𝑀 (⋅)) for Example 2,
which displays a reducible input-output pair (𝐴, 𝐵)

In [11]: α_0, x, p = n2.expansion()


print(f'α_0 = {α_0}')
print(f'x_0 = {x}')
print(f'The corresponding p from the dual = {p}')


α_0 = 1.1343159647658467
x_0 = [1.67e-11 1.84e-11 3.24e-01 2.61e-01 4.15e-01]
The corresponding p from the dual = [5.04e-01 4.96e-01 2.97e-12 2.24e-12 3.08e-12
3.56e-12]


In [12]: β_0, x, p = n2.interest()


print(f'β_0 = {β_0}')
print(f'p_0 = {p}')
print(f'The corresponding x from the primal = {x}')


β_0 = 1.2579826759174466
p_0 = [5.11e-01 4.89e-01 2.73e-08 2.17e-08 1.88e-08 2.66e-09]
The corresponding x from the primal = [1.61e-09 1.65e-09 3.27e-01 2.60e-01 4.12e-01]


As we can see, with a reducible (𝐴, 𝐵), the roots found by the two bisection algorithms might
differ, so there can be multiple 𝛾 ∗ that make the value of the game with 𝑀 (𝛾 ∗ ) zero (see the
figure above).
Indeed, although the von Neumann theorem assures existence of the equilibrium, Assump-
tions I and II are not sufficient for uniqueness. Nonetheless, Kemeny et al. (1967) show that
there are at most finitely many economic solutions, meaning that there are only finitely many
𝛾 ∗ that satisfy 𝑉 (𝑀 (𝛾 ∗ )) = 0 and 𝑥𝑇0 𝐵𝑝0 > 0 and that for each such 𝛾𝑖∗ , there is a self-
sufficient part of the economy (a sub-economy) that in equilibrium can expand independently
with the expansion coefficient 𝛾𝑖∗ .
The following theorem (see Theorem 9.10. in Gale [22]) asserts that imposing irreducibility is
sufficient for uniqueness of (𝛾 ∗ , 𝑥0 , 𝑝0 ).
Theorem II: Consider the conditions of Theorem I. If the economy (𝐴, 𝐵) is irreducible,
then 𝛾 ∗ = 𝛼0 = 𝛽0 .

26.6.4 A Special Case

There is a special (𝐴, 𝐵) that allows us to simplify the solution method significantly by invok-
ing the powerful Perron-Frobenius theorem for non-negative matrices.
Definition: We call an economy simple if it satisfies
1. 𝑛 = 𝑚
2. Each activity produces exactly one good
3. Each good is produced by one and only one activity.
These assumptions imply that 𝐵 = 𝐼𝑛 , i.e., that 𝐵 can be written as an identity matrix (pos-
sibly after reshuffling its rows and columns).
The simple model has the following special property (Theorem 9.11. in Gale [22]): if 𝑥0 and
𝛼0 > 0 solve the TEP with (𝐴, 𝐼𝑛 ), then

$$
x_0^T = \alpha_0 x_0^T A \quad \Leftrightarrow \quad x_0^T A = \left( \frac{1}{\alpha_0} \right) x_0^T
$$

The latter shows that 1/𝛼0 is a positive eigenvalue of 𝐴 and 𝑥0 is the corresponding non-
negative left eigenvector.
The classical result of Perron and Frobenius implies that a non-negative matrix always has
a non-negative eigenvalue-eigenvector pair.
Moreover, if 𝐴 is irreducible, then the optimal intensity vector 𝑥0 is positive and unique up to
multiplication by a positive scalar.
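As a sketch of this shortcut (with an arbitrary illustrative matrix rather than one from the lecture), the Perron eigenpair can be read off directly with NumPy when 𝐵 = 𝐼𝑛 :

import numpy as np

A = np.array([[0.4, 0.1],
              [0.2, 0.5]])            # hypothetical irreducible input matrix

# x_0^T A = (1/α_0) x_0^T, so 1/α_0 is the Perron root of A
# and x_0 is the associated left eigenvector (an eigenvector of A.T)
eigvals, eigvecs = np.linalg.eig(A.T)
k = np.argmax(eigvals.real)           # the Perron root is the largest eigenvalue
x0 = np.abs(eigvecs[:, k].real)
x0 /= x0.sum()                        # normalize intensities to sum to one

α0 = 1 / eigvals[k].real
print(α0, x0)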
Suppose that 𝐴 is reducible with 𝑘 irreducible subsets 𝑆1 , … , 𝑆𝑘 . Let 𝐴𝑖 be the submatrix
corresponding to 𝑆𝑖 and let 𝛼𝑖 and 𝛽𝑖 be the associated expansion and interest factors, re-
spectively. Then we have

$$
\alpha_0 = \max_i \{\alpha_i\} \quad \text{and} \quad \beta_0 = \min_i \{\beta_i\}
$$
Part VI

Time Series Models

Chapter 27

Covariance Stationary Processes

27.1 Contents

• Overview 27.2
• Introduction 27.3
• Spectral Analysis 27.4
• Implementation 27.5
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

27.2 Overview

In this lecture we study covariance stationary linear stochastic processes, a class of models
routinely used to study economic and financial time series.
This class has the advantage of being

1. simple enough to be described by an elegant and comprehensive theory

2. relatively broad in terms of the kinds of dynamics it can represent

We consider these models in both the time and frequency domain.

27.2.1 ARMA Processes

We will focus much of our attention on linear covariance stationary models with a finite num-
ber of parameters.
In particular, we will study stationary ARMA processes, which form a cornerstone of the
standard theory of time series analysis.
Every ARMA process can be represented in linear state space form.
However, ARMA processes have some important structure that makes it valuable to study
them separately.


27.2.2 Spectral Analysis

Analysis in the frequency domain is also called spectral analysis.


In essence, spectral analysis provides an alternative representation of the autocovariance func-
tion of a covariance stationary process.
Having a second representation of this important object
• shines a light on the dynamics of the process in question
• allows for a simpler, more tractable representation in some important cases
The famous Fourier transform and its inverse are used to map between the two representa-
tions.

27.2.3 Other Reading

For supplementary reading, see


• [43], chapter 2
• [59], chapter 11
• John Cochrane’s notes on time series analysis, chapter 8
• [60], chapter 6
• [17], all
Let’s start with some imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline
import quantecon as qe

27.3 Introduction

Consider a sequence of random variables {𝑋𝑡 } indexed by 𝑡 ∈ ℤ and taking values in ℝ.


Thus, {𝑋𝑡 } begins in the infinite past and extends to the infinite future — a convenient and
standard assumption.
As in other fields, successful economic modeling typically assumes the existence of features
that are constant over time.
If these assumptions are correct, then each new observation 𝑋𝑡 , 𝑋𝑡+1 , … can provide addi-
tional information about the time-invariant features, allowing us to learn as data arrive.
For this reason, we will focus in what follows on processes that are stationary — or become so
after a transformation (see for example this lecture).

27.3.1 Definitions

A real-valued stochastic process {𝑋𝑡 } is called covariance stationary if

1. Its mean 𝜇 ∶= 𝔼𝑋𝑡 does not depend on 𝑡.



2. For all 𝑘 in ℤ, the 𝑘-th autocovariance 𝛾(𝑘) ∶= 𝔼(𝑋𝑡 − 𝜇)(𝑋𝑡+𝑘 − 𝜇) is finite and depends
only on 𝑘.

The function 𝛾 ∶ ℤ → ℝ is called the autocovariance function of the process.


Throughout this lecture, we will work exclusively with zero-mean (i.e., 𝜇 = 0) covariance
stationary processes.
The zero-mean assumption costs nothing in terms of generality since working with non-zero-
mean processes involves no more than adding a constant.

27.3.2 Example 1: White Noise

Perhaps the simplest class of covariance stationary processes is the class of white noise processes.
A process {𝜖𝑡 } is called a white noise process if

1. 𝔼𝜖𝑡 = 0

2. 𝛾(𝑘) = 𝜎2 1{𝑘 = 0} for some 𝜎 > 0

(Here 1{𝑘 = 0} is defined to be 1 if 𝑘 = 0 and zero otherwise)


White noise processes play the role of building blocks for processes with more complicated
dynamics.

27.3.3 Example 2: General Linear Processes

From the simple building block provided by white noise, we can construct a very flexible fam-
ily of covariance stationary processes — the general linear processes


$$
X_t = \sum_{j=0}^{\infty} \psi_j \epsilon_{t-j}, \qquad t \in \mathbb{Z} \tag{1}
$$

where
• {𝜖𝑡 } is white noise

• {𝜓𝑡 } is a square summable sequence in ℝ (that is, $\sum_{t=0}^{\infty} \psi_t^2 < \infty$)
The sequence {𝜓𝑡 } is often called a linear filter.
Equation (1) is said to present a moving average process or a moving average representa-
tion.
With some manipulations, it is possible to confirm that the autocovariance function for (1) is


$$
\gamma(k) = \sigma^2 \sum_{j=0}^{\infty} \psi_j \psi_{j+k} \tag{2}
$$

By the Cauchy-Schwarz inequality, one can show that the sum on the right-hand side of (2) is finite.
Evidently, 𝛾(𝑘) does not depend on 𝑡.
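As a quick numerical check of (2), here is a sketch with an arbitrary geometrically decaying filter: truncate the sum and compare it with the sample autocovariance of a long simulated path.

import numpy as np

np.random.seed(42)
ψ = 0.5 ** np.arange(50)              # an illustrative square summable filter
σ, n = 1.0, 200_000
ε = σ * np.random.randn(n + len(ψ) - 1)
X = np.convolve(ε, ψ, mode='valid')   # X_t = Σ_j ψ_j ε_{t-j}, length n

k = 3
γ_theory = σ**2 * np.sum(ψ[:-k] * ψ[k:])   # equation (2), truncated at j = 46
γ_sample = np.mean(X[:-k] * X[k:])         # sample analog (the process has mean zero)
print(γ_theory, γ_sample)                  # close for large n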

27.3.4 Wold Representation

Remarkably, the class of general linear processes goes a long way towards describing the en-
tire class of zero-mean covariance stationary processes.
In particular, Wold’s decomposition theorem states that every zero-mean covariance station-
ary process {𝑋𝑡 } can be written as


$$
X_t = \sum_{j=0}^{\infty} \psi_j \epsilon_{t-j} + \eta_t
$$

where
• {𝜖𝑡 } is white noise
• {𝜓𝑡 } is square summable
• 𝜓0 𝜖𝑡 is the one-step ahead prediction error in forecasting 𝑋𝑡 as a linear least-squares
function of the infinite history 𝑋𝑡−1 , 𝑋𝑡−2 , …
• 𝜂𝑡 can be expressed as a linear function of 𝑋𝑡−1 , 𝑋𝑡−2 , … and is perfectly predictable
over arbitrarily long horizons
For the method of constructing a Wold representation, intuition, and further discussion, see
[59], p. 286.

27.3.5 AR and MA

General linear processes are a very broad class of processes.


It often pays to specialize to those for which there exists a representation having only finitely
many parameters.
(Experience and theory combine to indicate that models with a relatively small number of
parameters typically perform better than larger models, especially for forecasting)
One very simple example of such a model is the first-order autoregressive or AR(1) process

𝑋𝑡 = 𝜙𝑋𝑡−1 + 𝜖𝑡 where |𝜙| < 1 and {𝜖𝑡 } is white noise (3)



By direct substitution, it is easy to verify that $X_t = \sum_{j=0}^{\infty} \phi^j \epsilon_{t-j}$.
Hence {𝑋𝑡 } is a general linear process.
Applying (2) to the previous expression for 𝑋𝑡 , we get the AR(1) autocovariance function

$$
\gamma(k) = \phi^k \frac{\sigma^2}{1 - \phi^2}, \qquad k = 0, 1, \ldots \tag{4}
$$

The next figure plots an example of this function for 𝜙 = 0.8 and 𝜙 = −0.8 with 𝜎 = 1.

In [3]: num_rows, num_cols = 2, 1


fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 8))
plt.subplots_adjust(hspace=0.4)

for i, ϕ in enumerate((0.8, -0.8)):


ax = axes[i]

times = list(range(16))
acov = [ϕ**k / (1 - ϕ**2) for k in times]
ax.plot(times, acov, 'bo-', alpha=0.6,
label=f'autocovariance, $\phi = {ϕ:.2}$')
ax.legend(loc='upper right')
ax.set(xlabel='time', xlim=(0, 15))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)
plt.show()

Another very simple process is the MA(1) process (here MA means “moving average”)

𝑋𝑡 = 𝜖𝑡 + 𝜃𝜖𝑡−1

You will be able to verify that

𝛾(0) = 𝜎2 (1 + 𝜃2 ), 𝛾(1) = 𝜎2 𝜃, and 𝛾(𝑘) = 0 ∀𝑘 > 1

The AR(1) can be generalized to an AR(𝑝) and likewise for the MA(1).
Putting all of this together, we get the

27.3.6 ARMA Processes

A stochastic process {𝑋𝑡 } is called an autoregressive moving average process, or ARMA(𝑝, 𝑞),
if it can be written as

𝑋𝑡 = 𝜙1 𝑋𝑡−1 + ⋯ + 𝜙𝑝 𝑋𝑡−𝑝 + 𝜖𝑡 + 𝜃1 𝜖𝑡−1 + ⋯ + 𝜃𝑞 𝜖𝑡−𝑞 (5)

where {𝜖𝑡 } is white noise.


An alternative notation for ARMA processes uses the lag operator 𝐿.
Def. Given arbitrary variable 𝑌𝑡 , let 𝐿𝑘 𝑌𝑡 ∶= 𝑌𝑡−𝑘 .
It turns out that
• lag operators facilitate succinct representations for linear stochastic processes
• algebraic manipulations that treat the lag operator as an ordinary scalar are legitimate
Using 𝐿, we can rewrite (5) as

𝐿0 𝑋𝑡 − 𝜙1 𝐿1 𝑋𝑡 − ⋯ − 𝜙𝑝 𝐿𝑝 𝑋𝑡 = 𝐿0 𝜖𝑡 + 𝜃1 𝐿1 𝜖𝑡 + ⋯ + 𝜃𝑞 𝐿𝑞 𝜖𝑡 (6)

If we let 𝜙(𝑧) and 𝜃(𝑧) be the polynomials

𝜙(𝑧) ∶= 1 − 𝜙1 𝑧 − ⋯ − 𝜙𝑝 𝑧𝑝 and 𝜃(𝑧) ∶= 1 + 𝜃1 𝑧 + ⋯ + 𝜃𝑞 𝑧𝑞 (7)

then (6) becomes

𝜙(𝐿)𝑋𝑡 = 𝜃(𝐿)𝜖𝑡 (8)

In what follows we always assume that the roots of the polynomial 𝜙(𝑧) lie outside the unit
circle in the complex plane.
This condition is sufficient to guarantee that the ARMA(𝑝, 𝑞) process is covariance stationary.
In fact, it implies that the process falls within the class of general linear processes described
above.
That is, given an ARMA(𝑝, 𝑞) process {𝑋𝑡 } satisfying the unit circle condition, there exists a
square summable sequence {𝜓𝑡 } with $X_t = \sum_{j=0}^{\infty} \psi_j \epsilon_{t-j}$ for all 𝑡.
The sequence {𝜓𝑡 } can be obtained by a recursive procedure outlined on page 79 of [17].
The function 𝑡 ↦ 𝜓𝑡 is often called the impulse response function.
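As a sketch, one can check the unit circle condition and recover the first few 𝜓𝑡 numerically; the ARMA(1,1) coefficients below are arbitrary, and the call to scipy.signal.dimpulse anticipates the implementation discussion later in this lecture.

import numpy as np
from scipy.signal import dimpulse

ϕ1, θ1 = 0.5, 0.3                  # X_t = 0.5 X_{t-1} + ε_t + 0.3 ε_{t-1}
ar_poly = [1, -ϕ1]                 # coefficients of φ(z)
ma_poly = [1, θ1]                  # coefficients of θ(z)

# Covariance stationarity: the roots of φ(z) must lie outside the unit circle
roots = np.roots(ar_poly[::-1])    # np.roots expects the highest power first
print(np.all(np.abs(roots) > 1))   # True: the single root is z = 2

# Impulse response ψ_0, ψ_1, ... of the ARMA process
times, ψ = dimpulse((ma_poly, ar_poly, 1), n=6)
print(ψ[0].flatten())              # [1.0, 0.8, 0.4, 0.2, 0.1, 0.05]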

27.4 Spectral Analysis

Autocovariance functions provide a great deal of information about covariance stationary pro-
cesses.
In fact, for zero-mean Gaussian processes, the autocovariance function characterizes the entire
joint distribution.
Even for non-Gaussian processes, it provides a significant amount of information.
It turns out that there is an alternative representation of the autocovariance function of a
covariance stationary process, called the spectral density.
At times, the spectral density is easier to derive, easier to manipulate, and provides additional
intuition.

27.4.1 Complex Numbers

Before discussing the spectral density, we invite you to recall the main properties of complex
numbers (or skip to the next section).
It can be helpful to remember that, in a formal sense, complex numbers are just points
(𝑥, 𝑦) ∈ ℝ2 endowed with a specific notion of multiplication.
When (𝑥, 𝑦) is regarded as a complex number, 𝑥 is called the real part and 𝑦 is called the
imaginary part.
The modulus or absolute value of a complex number 𝑧 = (𝑥, 𝑦) is just its Euclidean norm in
ℝ2 , but is usually written as |𝑧| instead of ‖𝑧‖.
The product of two complex numbers (𝑥, 𝑦) and (𝑢, 𝑣) is defined to be (𝑥𝑢−𝑣𝑦, 𝑥𝑣+𝑦𝑢), while
addition is standard pointwise vector addition.
When endowed with these notions of multiplication and addition, the set of complex numbers
forms a field — addition and multiplication play well together, just as they do in ℝ.
The complex number (𝑥, 𝑦) is often written as 𝑥 + 𝑖𝑦, where 𝑖 is called the imaginary unit and
is understood to obey 𝑖2 = −1.
The 𝑥 + 𝑖𝑦 notation provides an easy way to remember the definition of multiplication given
above, because, proceeding naively,

(𝑥 + 𝑖𝑦)(𝑢 + 𝑖𝑣) = 𝑥𝑢 − 𝑦𝑣 + 𝑖(𝑥𝑣 + 𝑦𝑢)

Converted back to our first notation, this becomes (𝑥𝑢 − 𝑣𝑦, 𝑥𝑣 + 𝑦𝑢) as promised.
Complex numbers can be represented in the polar form 𝑟𝑒𝑖𝜔 where

𝑟𝑒𝑖𝜔 ∶= 𝑟(cos(𝜔) + 𝑖 sin(𝜔)) = 𝑥 + 𝑖𝑦

where 𝑥 = 𝑟 cos(𝜔), 𝑦 = 𝑟 sin(𝜔), and 𝜔 = arctan(𝑦/𝑥) or tan(𝜔) = 𝑦/𝑥.
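These conventions map directly into Python, as this small sketch shows:

import numpy as np

z = 1 + 1j                     # the complex number (1, 1)
r, ω = abs(z), np.angle(z)     # modulus and angle
print(r, ω)                    # √2 and π/4
print(r * np.exp(1j * ω))      # the polar form r e^{iω} recovers 1 + 1j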

27.4.2 Spectral Densities

Let {𝑋𝑡 } be a covariance stationary process with autocovariance function 𝛾 satisfying


$\sum_k \gamma(k)^2 < \infty$.
The spectral density 𝑓 of {𝑋𝑡 } is defined as the discrete time Fourier transform of its autoco-
variance function 𝛾.

$$
f(\omega) := \sum_{k \in \mathbb{Z}} \gamma(k) e^{-i\omega k}, \qquad \omega \in \mathbb{R}
$$

(Some authors normalize the expression on the right by constants such as 1/𝜋 — the conven-
tion chosen makes little difference provided you are consistent).
Using the fact that 𝛾 is even, in the sense that 𝛾(𝑡) = 𝛾(−𝑡) for all 𝑡, we can show that

$$
f(\omega) = \gamma(0) + 2 \sum_{k \geq 1} \gamma(k) \cos(\omega k) \tag{9}
$$

It is not difficult to confirm that 𝑓 is



• real-valued
• even (𝑓(𝜔) = 𝑓(−𝜔) ), and
• 2𝜋-periodic, in the sense that 𝑓(2𝜋 + 𝜔) = 𝑓(𝜔) for all 𝜔
It follows that the values of 𝑓 on [0, 𝜋] determine the values of 𝑓 on all of ℝ — the proof is an
exercise.
For this reason, it is standard to plot the spectral density only on the interval [0, 𝜋].

27.4.3 Example 1: White Noise

Consider a white noise process {𝜖𝑡 } with standard deviation 𝜎.


It is easy to check that in this case 𝑓(𝜔) = 𝜎2 . So 𝑓 is a constant function.
As we will see, this can be interpreted as meaning that “all frequencies are equally present”.
(White light has this property when frequency refers to the visible spectrum, a connection
that provides the origins of the term “white noise”)

27.4.4 Example 2: AR and MA and ARMA

It is an exercise to show that the MA(1) process 𝑋𝑡 = 𝜃𝜖𝑡−1 + 𝜖𝑡 has a spectral density

𝑓(𝜔) = 𝜎2 (1 + 2𝜃 cos(𝜔) + 𝜃2 ) (10)

With a bit more effort, it’s possible to show (see, e.g., p. 261 of [59]) that the spectral density
of the AR(1) process 𝑋𝑡 = 𝜙𝑋𝑡−1 + 𝜖𝑡 is

$$
f(\omega) = \frac{\sigma^2}{1 - 2\phi \cos(\omega) + \phi^2} \tag{11}
$$

More generally, it can be shown that the spectral density of the ARMA process (5) is

$$
f(\omega) = \left| \frac{\theta(e^{i\omega})}{\phi(e^{i\omega})} \right|^2 \sigma^2 \tag{12}
$$

where
• 𝜎 is the standard deviation of the white noise process {𝜖𝑡 }.
• the polynomials 𝜙(⋅) and 𝜃(⋅) are as defined in (7).
The derivation of (12) uses the fact that convolutions become products under Fourier trans-
formations.
The proof is elegant and can be found in many places — see, for example, [59], chapter 11,
section 4.
It’s a nice exercise to verify that (10) and (11) are indeed special cases of (12).

27.4.5 Interpreting the Spectral Density

Plotting (11) reveals the shape of the spectral density for the AR(1) model when 𝜙 takes the
values 0.8 and -0.8 respectively.

In [4]: def ar1_sd(ϕ, ω):


return 1 / (1 - 2 * ϕ * np.cos(ω) + ϕ**2)

ωs = np.linspace(0, np.pi, 180)


num_rows, num_cols = 2, 1
fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 8))
plt.subplots_adjust(hspace=0.4)

# Spectral density for ϕ = 0.8 and ϕ = -0.8


for i, ϕ in enumerate((0.8, -0.8)):
ax = axes[i]
sd = ar1_sd(ϕ, ωs)
ax.plot(ωs, sd, 'b-', alpha=0.6, lw=2,
label=f'spectral density, $\phi = {ϕ:.2}$')
ax.legend(loc='upper center')
ax.set(xlabel='frequency', xlim=(0, np.pi))
plt.show()

These spectral densities correspond to the autocovariance functions for the AR(1) process
shown above.
Informally, we think of the spectral density as being large at those 𝜔 ∈ [0, 𝜋] at which the
autocovariance function seems approximately to exhibit big damped cycles.
To see the idea, let’s consider why, in the lower panel of the preceding figure, the spectral
density for the case 𝜙 = −0.8 is large at 𝜔 = 𝜋.
Recall that the spectral density can be expressed as

$$
f(\omega) = \gamma(0) + 2 \sum_{k \geq 1} \gamma(k) \cos(\omega k) = \gamma(0) + 2 \sum_{k \geq 1} (-0.8)^k \cos(\omega k) \tag{13}
$$

When we evaluate this at 𝜔 = 𝜋, we get a large number because cos(𝜋𝑘) is large and positive
when (−0.8)𝑘 is positive, and large in absolute value and negative when (−0.8)𝑘 is negative.
Hence the product is always large and positive, and hence the sum of the products on the
right-hand side of (13) is large.
These ideas are illustrated in the next figure, which has 𝑘 on the horizontal axis.

In [5]: ϕ = -0.8
times = list(range(16))
y1 = [ϕ**k / (1 - ϕ**2) for k in times]
y2 = [np.cos(np.pi * k) for k in times]
y3 = [a * b for a, b in zip(y1, y2)]

num_rows, num_cols = 3, 1
fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 8))
plt.subplots_adjust(hspace=0.25)

# Autocovariance when ϕ = -0.8


ax = axes[0]
ax.plot(times, y1, 'bo-', alpha=0.6, label='$\gamma(k)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), yticks=(-2, 0, 2))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)

# Cycles at frequency π
ax = axes[1]
ax.plot(times, y2, 'bo-', alpha=0.6, label='$\cos(\pi k)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), yticks=(-1, 0, 1))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)

# Product
ax = axes[2]
ax.stem(times, y3, label='$\gamma(k) \cos(\pi k)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), ylim=(-3, 3), yticks=(-1, 0, 1, 2, 3))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)
ax.set_xlabel("k")

plt.show()


On the other hand, if we evaluate 𝑓(𝜔) at 𝜔 = 𝜋/3, then the cycles are not matched, the
sequence 𝛾(𝑘) cos(𝜔𝑘) contains both positive and negative terms, and hence the sum of these
terms is much smaller.

In [6]: ϕ = -0.8
times = list(range(16))
y1 = [ϕ**k / (1 - ϕ**2) for k in times]
y2 = [np.cos(np.pi * k/3) for k in times]
y3 = [a * b for a, b in zip(y1, y2)]

num_rows, num_cols = 3, 1
fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 8))
plt.subplots_adjust(hspace=0.25)

# Autocovariance when phi = -0.8


ax = axes[0]
ax.plot(times, y1, 'bo-', alpha=0.6, label='$\gamma(k)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), yticks=(-2, 0, 2))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)

# Cycles at frequency π/3
ax = axes[1]
ax.plot(times, y2, 'bo-', alpha=0.6, label='$\cos(\pi k/3)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), yticks=(-1, 0, 1))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)

# Product
ax = axes[2]
ax.stem(times, y3, label='$\gamma(k) \cos(\pi k/3)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), ylim=(-3, 3), yticks=(-1, 0, 1, 2, 3))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)
ax.set_xlabel("$k$")

plt.show()


In summary, the spectral density is large at frequencies 𝜔 where the autocovariance function
exhibits damped cycles.

27.4.6 Inverting the Transformation

We have just seen that the spectral density is useful in the sense that it provides a frequency-
based perspective on the autocovariance structure of a covariance stationary process.

Another reason that the spectral density is useful is that it can be “inverted” to recover the
autocovariance function via the inverse Fourier transform.
In particular, for all 𝑘 ∈ ℤ, we have

$$
\gamma(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(\omega) e^{i\omega k} \, d\omega \tag{14}
$$

This is convenient in situations where the spectral density is easier to calculate and manipu-
late than the autocovariance function.
(For example, the expression (12) for the ARMA spectral density is much easier to work with
than the expression for the ARMA autocovariance)
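As a sketch of this convenience, one can recover the AR(1) autocovariance (4) by integrating the spectral density (11) as in (14); the parameter values below are arbitrary.

import numpy as np

ϕ, σ, k = 0.6, 1.0, 2
ω = np.linspace(-np.pi, np.pi, 100_001)
f = σ**2 / (1 - 2 * ϕ * np.cos(ω) + ϕ**2)      # spectral density (11)

# γ(k) = (1/2π) ∫ f(ω) e^{iωk} dω, via the trapezoidal rule
γ_k = np.trapz(f * np.exp(1j * ω * k), ω).real / (2 * np.pi)
print(γ_k)                                      # ≈ 0.5625
print(ϕ**k * σ**2 / (1 - ϕ**2))                 # equation (4) gives the same value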

27.4.7 Mathematical Theory

This section is loosely based on [59], p. 249-253, and included for those who
• would like a bit more insight into spectral densities
• and have at least some background in Hilbert space theory
Others should feel free to skip to the next section — none of this material is necessary to
progress to computation.
Recall that every separable Hilbert space 𝐻 has a countable orthonormal basis {ℎ𝑘 }.
The nice thing about such a basis is that every 𝑓 ∈ 𝐻 satisfies

$$
f = \sum_k \alpha_k h_k \quad \text{where} \quad \alpha_k := \langle f, h_k \rangle \tag{15}
$$

where ⟨⋅, ⋅⟩ denotes the inner product in 𝐻.


Thus, 𝑓 can be represented to any degree of precision by linearly combining basis vectors.
The scalar sequence 𝛼 = {𝛼𝑘 } is called the Fourier coefficients of 𝑓, and satisfies $\sum_k |\alpha_k|^2 < \infty$.
In other words, 𝛼 is in ℓ2 , the set of square summable sequences.
Consider an operator 𝑇 that maps 𝛼 ∈ ℓ2 into its expansion ∑𝑘 𝛼𝑘 ℎ𝑘 ∈ 𝐻.
The Fourier coefficients of 𝑇 𝛼 are just 𝛼 = {𝛼𝑘 }, as you can verify by confirming that
⟨𝑇 𝛼, ℎ𝑘 ⟩ = 𝛼𝑘 .
Using elementary results from Hilbert space theory, it can be shown that
• 𝑇 is one-to-one — if 𝛼 and 𝛽 are distinct in ℓ2 , then so are their expansions in 𝐻.
• 𝑇 is onto — if 𝑓 ∈ 𝐻 then its preimage in ℓ2 is the sequence 𝛼 given by 𝛼𝑘 = ⟨𝑓, ℎ𝑘 ⟩.
• 𝑇 is a linear isometry — in particular, ⟨𝛼, 𝛽⟩ = ⟨𝑇 𝛼, 𝑇 𝛽⟩.
Summarizing these results, we say that any separable Hilbert space is isometrically isomor-
phic to ℓ2 .
In essence, this says that each separable Hilbert space we consider is just a different way of
looking at the fundamental space ℓ2 .
With this in mind, let’s specialize to a setting where

• 𝛾 ∈ ℓ2 is the autocovariance function of a covariance stationary process, and 𝑓 is the spectral density.
• 𝐻 = 𝐿2 , where 𝐿2 is the set of square summable functions on the interval [−𝜋, 𝜋], with inner product $\langle g, h \rangle = \int_{-\pi}^{\pi} g(\omega) h(\omega) \, d\omega$.
• {ℎ𝑘 } = the orthonormal basis for 𝐿2 given by the set of trigonometric functions

$$
h_k(\omega) = \frac{e^{i\omega k}}{\sqrt{2\pi}}, \qquad k \in \mathbb{Z}, \quad \omega \in [-\pi, \pi]
$$

Using the definition of 𝑇 from above and the fact that 𝑓 is even, we now have

$$
T\gamma = \sum_{k \in \mathbb{Z}} \gamma(k) \frac{e^{i\omega k}}{\sqrt{2\pi}} = \frac{1}{\sqrt{2\pi}} f(\omega) \tag{16}
$$

In other words, apart from a scalar multiple, the spectral density is just a transformation of
𝛾 ∈ ℓ2 under a certain linear isometry — a different way to view 𝛾.
In particular, it is an expansion of the autocovariance function with respect to the trigono-
metric basis functions in 𝐿2 .
As discussed above, the Fourier coefficients of 𝑇 𝛾 are given by the sequence 𝛾, and, in partic-
ular, 𝛾(𝑘) = ⟨𝑇 𝛾, ℎ𝑘 ⟩.
Transforming this inner product into its integral expression and using (16) gives (14), justify-
ing our earlier expression for the inverse transform.

27.5 Implementation

Most code for working with covariance stationary models deals with ARMA models.
Python code for studying ARMA models can be found in the tsa submodule of statsmodels.
Since this code doesn't quite cover our needs — particularly vis-a-vis spectral analysis —
we've put together the module arma.py, which is part of the QuantEcon.py package.
The module provides functions for mapping ARMA(𝑝, 𝑞) models into their

1. impulse response function

2. simulated time series

3. autocovariance function

4. spectral density

27.5.1 Application

Let’s use this code to replicate the plots on pages 68–69 of [43].
Here are some functions to generate the plots

In [7]: def plot_impulse_response(arma, ax=None):


if ax is None:
ax = plt.gca()
yi = arma.impulse_response()
ax.stem(list(range(len(yi))), yi)
ax.set(xlim=(-0.5), ylim=(min(yi)-0.1, max(yi)+0.1),
title='Impulse response', xlabel='time', ylabel='response')
return ax

def plot_spectral_density(arma, ax=None):


if ax is None:
ax = plt.gca()
w, spect = arma.spectral_density(two_pi=False)
ax.semilogy(w, spect)
ax.set(xlim=(0, np.pi), ylim=(0, np.max(spect)),
title='Spectral density', xlabel='frequency', ylabel='spectrum')
return ax

def plot_autocovariance(arma, ax=None):


if ax is None:
ax = plt.gca()
acov = arma.autocovariance()
ax.stem(list(range(len(acov))), acov)
ax.set(xlim=(-0.5, len(acov) - 0.5), title='Autocovariance',
xlabel='time', ylabel='autocovariance')
return ax

def plot_simulation(arma, ax=None):


if ax is None:
ax = plt.gca()
x_out = arma.simulation()
ax.plot(x_out)
ax.set(title='Sample path', xlabel='time', ylabel='state space')
return ax

def quad_plot(arma):
"""
Plots the impulse response, spectral_density, autocovariance,
and one realization of the process.

"""
num_rows, num_cols = 2, 2
fig, axes = plt.subplots(num_rows, num_cols, figsize=(12, 8))
plot_functions = [plot_impulse_response,
plot_spectral_density,
plot_autocovariance,
plot_simulation]
for plot_func, ax in zip(plot_functions, axes.flatten()):
plot_func(arma, ax)
plt.tight_layout()
plt.show()

Now let’s call these functions to generate plots.


As a warmup, let's make sure things look right when we use the pure white noise model 𝑋𝑡 = 𝜖𝑡 .

In [8]: ϕ = 0.0
θ = 0.0

arma = qe.ARMA(ϕ, θ)
quad_plot(arma)


If we look carefully, things look good: the spectrum is the flat line at $10^0$ at the very top of
the spectrum graph, as it should be.
Also
• the variance equals $1 = \frac{1}{2\pi} \int_{-\pi}^{\pi} 1 \, d\omega$, as it should.
• the covariogram and impulse response look as they should.
• it is actually challenging to visualize a time series realization of white noise – a sequence of surprises – but this too looks pretty good.
To get some more examples, as our laboratory we’ll replicate quartets of graphs that [43] use
to teach “how to read spectral densities”.
Ljungqvist and Sargent's first model is 𝑋𝑡 = 1.3𝑋𝑡−1 − .7𝑋𝑡−2 + 𝜖𝑡

In [9]: ϕ = 1.3, -.7


θ = 0.0
arma = qe.ARMA(ϕ, θ)
quad_plot(arma)


Ljungqvist and Sargent’s second model is 𝑋𝑡 = .9𝑋𝑡−1 + 𝜖𝑡

In [10]: ϕ = 0.9
θ = -0.0

arma = qe.ARMA(ϕ, θ)
quad_plot(arma)


Ljungqvist and Sargent’s third model is 𝑋𝑡 = .8𝑋𝑡−4 + 𝜖𝑡

In [11]: ϕ = 0., 0., 0., .8


θ = -0.0
arma = qe.ARMA(ϕ, θ)
quad_plot(arma)


Ljungqvist and Sargent’s fourth model is 𝑋𝑡 = .98𝑋𝑡−1 + 𝜖𝑡 − .7𝜖𝑡−1

In [12]: ϕ = .98
θ = -0.7
arma = qe.ARMA(ϕ, θ)
quad_plot(arma)


27.5.2 Explanation

The call

arma = ARMA(ϕ, θ, σ)

creates an instance arma that represents the ARMA(𝑝, 𝑞) model

𝑋𝑡 = 𝜙1 𝑋𝑡−1 + ... + 𝜙𝑝 𝑋𝑡−𝑝 + 𝜖𝑡 + 𝜃1 𝜖𝑡−1 + ... + 𝜃𝑞 𝜖𝑡−𝑞

If ϕ and θ are arrays or sequences, then the interpretation will be


• ϕ holds the vector of parameters (𝜙1 , 𝜙2 , ..., 𝜙𝑝 ).
• θ holds the vector of parameters (𝜃1 , 𝜃2 , ..., 𝜃𝑞 ).
The parameter σ is always a scalar, the standard deviation of the white noise.
We also permit ϕ and θ to be scalars, in which case the model will be interpreted as

𝑋𝑡 = 𝜙𝑋𝑡−1 + 𝜖𝑡 + 𝜃𝜖𝑡−1

The two numerical packages most useful for working with ARMA models are scipy.signal
and numpy.fft.

The package scipy.signal expects the parameters to be passed into its functions in a
manner consistent with the alternative ARMA notation (8).
For example, the impulse response sequence {𝜓𝑡 } discussed above can be obtained using
scipy.signal.dimpulse, and the function call should be of the form

times, ψ = dimpulse((ma_poly, ar_poly, 1), n=impulse_length)

where ma_poly and ar_poly correspond to the polynomials in (7) — that is,
• ma_poly is the vector (1, 𝜃1 , 𝜃2 , … , 𝜃𝑞 )
• ar_poly is the vector (1, −𝜙1 , −𝜙2 , … , −𝜙𝑝 )
To this end, we also maintain the arrays ma_poly and ar_poly as instance data, with their
values computed automatically from the values of phi and theta supplied by the user.
If the user decides to change the value of either theta or phi ex-post by assignments such
as arma.phi = (0.5, 0.2) or arma.theta = (0, -0.1), then ma_poly and ar_poly
should update automatically to reflect these new parameters.
This is achieved in our implementation by using descriptors.

27.5.3 Computing the Autocovariance Function

As discussed above, for ARMA processes the spectral density has a simple representation that
is relatively easy to calculate.
Given this fact, the easiest way to obtain the autocovariance function is to recover it from the
spectral density via the inverse Fourier transform.
Here we use NumPy’s Fourier transform package np.fft, which wraps a standard Fortran-
based package called FFTPACK.
A look at the np.fft documentation shows that the inverse transform np.fft.ifft takes a given
sequence 𝐴0 , 𝐴1 , … , 𝐴𝑛−1 and returns the sequence 𝑎0 , 𝑎1 , … , 𝑎𝑛−1 defined by

$$
a_k = \frac{1}{n} \sum_{t=0}^{n-1} A_t e^{ik 2\pi t / n}
$$

Thus, if we set 𝐴𝑡 = 𝑓(𝜔𝑡 ), where 𝑓 is the spectral density and 𝜔𝑡 ∶= 2𝜋𝑡/𝑛, then

$$
a_k = \frac{1}{n} \sum_{t=0}^{n-1} f(\omega_t) e^{i \omega_t k} = \frac{1}{2\pi} \frac{2\pi}{n} \sum_{t=0}^{n-1} f(\omega_t) e^{i \omega_t k}, \qquad \omega_t := \frac{2\pi t}{n}
$$

For 𝑛 sufficiently large, we then have

$$
a_k \approx \frac{1}{2\pi} \int_0^{2\pi} f(\omega) e^{i\omega k} \, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(\omega) e^{i\omega k} \, d\omega
$$

(You can check the last equality)


In view of (14), we have now shown that, for 𝑛 sufficiently large, 𝑎𝑘 ≈ 𝛾(𝑘) — which is ex-
actly what we want to compute.
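Here is a sketch of that computation for the AR(1) spectral density (11), with illustrative parameter values:

import numpy as np

ϕ, σ, n = 0.6, 1.0, 2**10
ω = 2 * np.pi * np.arange(n) / n               # Fourier frequencies ω_t
f = σ**2 / (1 - 2 * ϕ * np.cos(ω) + ϕ**2)      # spectral density on the grid

γ = np.fft.ifft(f).real                        # a_k ≈ γ(k) for small k
print(γ[:3])                                   # ≈ [1.5625, 0.9375, 0.5625]
print([ϕ**k * σ**2 / (1 - ϕ**2) for k in range(3)])   # equation (4) values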
Chapter 28

Estimation of Spectra

28.1 Contents

• Overview 28.2
• Periodograms 28.3
• Smoothing 28.4
• Exercises 28.5
• Solutions 28.6
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

28.2 Overview

In a previous lecture, we covered some fundamental properties of covariance stationary linear


stochastic processes.
One objective for that lecture was to introduce spectral densities — a standard and very use-
ful technique for analyzing such processes.
In this lecture, we turn to the problem of estimating spectral densities and other related
quantities from data.
Estimates of the spectral density are computed using what is known as a periodogram —
which in turn is computed via the famous fast Fourier transform.
Once the basic technique has been explained, we will apply it to the analysis of several key
macroeconomic time series.
For supplementary reading, see [59] or [17].
Let’s start with some standard imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline
from quantecon import ARMA, periodogram, ar_periodogram


28.3 Periodograms

Recall that the spectral density 𝑓 of a covariance stationary process with autocovariance
function 𝛾 can be written

$$
f(\omega) = \gamma(0) + 2 \sum_{k \geq 1} \gamma(k) \cos(\omega k), \qquad \omega \in \mathbb{R}
$$

Now consider the problem of estimating the spectral density of a given time series, when 𝛾 is
unknown.
In particular, let 𝑋0 , … , 𝑋𝑛−1 be 𝑛 consecutive observations of a single time series that is as-
sumed to be covariance stationary.
The most common estimator of the spectral density of this process is the periodogram of
𝑋0 , … , 𝑋𝑛−1 , which is defined as

$$
I(\omega) := \frac{1}{n} \left| \sum_{t=0}^{n-1} X_t e^{it\omega} \right|^2, \qquad \omega \in \mathbb{R} \tag{1}
$$

(Recall that |𝑧| denotes the modulus of complex number 𝑧)


Alternatively, 𝐼(𝜔) can be expressed as

$$
I(\omega) = \frac{1}{n} \left\{ \left[ \sum_{t=0}^{n-1} X_t \cos(\omega t) \right]^2 + \left[ \sum_{t=0}^{n-1} X_t \sin(\omega t) \right]^2 \right\}
$$

It is straightforward to show that the function 𝐼 is even and 2𝜋-periodic (i.e., 𝐼(𝜔) = 𝐼(−𝜔)
and 𝐼(𝜔 + 2𝜋) = 𝐼(𝜔) for all 𝜔 ∈ ℝ).
From these two results, you will be able to verify that the values of 𝐼 on [0, 𝜋] determine the
values of 𝐼 on all of ℝ.
The next section helps to explain the connection between the periodogram and the spectral
density.

28.3.1 Interpretation

To interpret the periodogram, it is convenient to focus on its values at the Fourier frequencies

$$
\omega_j := \frac{2\pi j}{n}, \qquad j = 0, \ldots, n-1
$$

In what sense is 𝐼(𝜔𝑗 ) an estimate of 𝑓(𝜔𝑗 )?


The answer is straightforward, although it does involve some algebra.
With a bit of effort, one can show that for any integer 𝑗 > 0,

$$
\sum_{t=0}^{n-1} e^{it\omega_j} = \sum_{t=0}^{n-1} \exp \left\{ i 2\pi j \frac{t}{n} \right\} = 0
$$

Letting $\bar{X}$ denote the sample mean $n^{-1} \sum_{t=0}^{n-1} X_t$, we then have

$$
n I(\omega_j) = \left| \sum_{t=0}^{n-1} (X_t - \bar{X}) e^{it\omega_j} \right|^2 = \sum_{t=0}^{n-1} (X_t - \bar{X}) e^{it\omega_j} \sum_{r=0}^{n-1} (X_r - \bar{X}) e^{-ir\omega_j}
$$

By carefully working through the sums, one can transform this to

$$
n I(\omega_j) = \sum_{t=0}^{n-1} (X_t - \bar{X})^2 + 2 \sum_{k=1}^{n-1} \sum_{t=k}^{n-1} (X_t - \bar{X})(X_{t-k} - \bar{X}) \cos(\omega_j k)
$$

Now let

$$
\hat{\gamma}(k) := \frac{1}{n} \sum_{t=k}^{n-1} (X_t - \bar{X})(X_{t-k} - \bar{X}), \qquad k = 0, 1, \ldots, n-1
$$

This is the sample autocovariance function, the natural “plug-in estimator” of the autocovari-
ance function 𝛾.
(“Plug-in estimator” is an informal term for an estimator found by replacing expectations
with sample means)
With this notation, we can now write

$$
I(\omega_j) = \hat{\gamma}(0) + 2 \sum_{k=1}^{n-1} \hat{\gamma}(k) \cos(\omega_j k)
$$

Recalling our expression for 𝑓 given above, we see that 𝐼(𝜔𝑗 ) is just a sample analog of 𝑓(𝜔𝑗 ).
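This identity is exact and easy to verify numerically; the following sketch (with simulated data) compares the FFT-based periodogram with the sample autocovariance expression at a positive Fourier frequency.

import numpy as np

np.random.seed(1)
n = 128
X = np.random.randn(n)
X_bar = X.mean()

j = 5
ω_j = 2 * np.pi * j / n                        # a Fourier frequency

I_fft = (np.abs(np.fft.fft(X))**2 / n)[j]      # FFT-based periodogram value

# sample autocovariances γ̂(0), ..., γ̂(n-1), each with divisor n
γ_hat = [((X[k:] - X_bar) * (X[:n-k] - X_bar)).sum() / n for k in range(n)]
I_acov = γ_hat[0] + 2 * sum(γ_hat[k] * np.cos(ω_j * k) for k in range(1, n))

print(I_fft, I_acov)                           # the two values agree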

28.3.2 Calculation

Let’s now consider how to compute the periodogram as defined in (1).


There are already functions available that will do this for us — an example is
statsmodels.tsa.stattools.periodogram in the statsmodels package.
However, it is very simple to replicate their results, and this will give us a platform to make
useful extensions.
The most common way to calculate the periodogram is via the discrete Fourier transform,
which in turn is implemented through the fast Fourier transform algorithm.
In general, given a sequence 𝑎0 , … , 𝑎𝑛−1 , the discrete Fourier transform computes the se-
quence

$$
A_j := \sum_{t=0}^{n-1} a_t \exp \left\{ i 2\pi \frac{tj}{n} \right\}, \qquad j = 0, \ldots, n-1
$$

With numpy.fft.fft imported as fft and 𝑎0 , … , 𝑎𝑛−1 stored in NumPy array a, the func-
tion call fft(a) returns the values 𝐴0 , … , 𝐴𝑛−1 as a NumPy array.
It follows that when the data 𝑋0 , … , 𝑋𝑛−1 are stored in array X, the values 𝐼(𝜔𝑗 ) at the
Fourier frequencies, which are given by

$$
\frac{1}{n} \left| \sum_{t=0}^{n-1} X_t \exp \left\{ i 2\pi \frac{tj}{n} \right\} \right|^2, \qquad j = 0, \ldots, n-1
$$

can be computed by np.abs(fft(X))**2 / len(X).


Note: The NumPy function abs acts elementwise, and correctly handles complex numbers
(by computing their modulus, which is exactly what we need).
A function called periodogram that puts all this together can be found here.
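For reference, a minimal version (a sketch, not the library code linked above) takes only a few lines:

import numpy as np

def periodogram_simple(X):
    """Periodogram I(ω_j) at the Fourier frequencies ω_j = 2πj/n."""
    n = len(X)
    I = np.abs(np.fft.fft(X))**2 / n
    ω = 2 * np.pi * np.arange(n) / n
    # since I is even and 2π-periodic, the values on [0, π) suffice
    return ω[:n // 2], I[:n // 2]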
Let’s generate some data for this function using the ARMA class from QuantEcon.py (see the
lecture on linear processes for more details).
Here’s a code snippet that, once the preceding code has been run, generates data from the
process

𝑋𝑡 = 0.5𝑋𝑡−1 + 𝜖𝑡 − 0.8𝜖𝑡−2 (2)

where {𝜖𝑡 } is white noise with unit variance, and compares the periodogram to the actual
spectral density

In [3]: n = 40 # Data size


ϕ, θ = 0.5, (0, -0.8) # AR and MA parameters
lp = ARMA(ϕ, θ)
X = lp.simulation(ts_length=n)

fig, ax = plt.subplots()
x, y = periodogram(X)
ax.plot(x, y, 'b-', lw=2, alpha=0.5, label='periodogram')
x_sd, y_sd = lp.spectral_density(two_pi=False, res=120)
ax.plot(x_sd, y_sd, 'r-', lw=2, alpha=0.8, label='spectral density')
ax.legend()
plt.show()


This estimate looks rather disappointing, but the data size is only 40, so perhaps it’s not sur-
prising that the estimate is poor.
However, if we try again with n = 1200 the outcome is not much better

The periodogram is far too irregular relative to the underlying spectral density.
This brings us to our next topic.

28.4 Smoothing

There are two related issues here.


One is that, given the way the fast Fourier transform is implemented, the number of points 𝜔
at which 𝐼(𝜔) is estimated increases in line with the amount of data.
In other words, although we have more data, we are also using it to estimate more values.
A second issue is that densities of all types are fundamentally hard to estimate without para-
metric assumptions.
Typically, nonparametric estimation of densities requires some degree of smoothing.
The standard way that smoothing is applied to periodograms is by taking local averages.
In other words, the value 𝐼(𝜔𝑗 ) is replaced with a weighted average of the adjacent values

𝐼(𝜔𝑗−𝑝 ), 𝐼(𝜔𝑗−𝑝+1 ), … , 𝐼(𝜔𝑗 ), … , 𝐼(𝜔𝑗+𝑝 )

This weighted average can be written as

$$
I_S(\omega_j) := \sum_{\ell=-p}^{p} w(\ell) I(\omega_{j+\ell}) \tag{3}
$$

where the weights 𝑤(−𝑝), … , 𝑤(𝑝) are a sequence of 2𝑝 + 1 nonnegative values summing to
one.
In general, larger values of 𝑝 indicate more smoothing — more on this below.
The next figure shows the kind of sequence typically used.
Note the smaller weights towards the edges and larger weights in the center, so that more dis-
tant values from 𝐼(𝜔𝑗 ) have less weight than closer ones in the sum (3).
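Here is a sketch of this local averaging step built around np.convolve (the same primitive the library's smooth() function uses, as noted later in this section):

import numpy as np

def smooth_simple(I, w):
    """Replace each I(ω_j) by the weighted average in (3),
    where w is a nonnegative weight sequence of odd length."""
    w = np.asarray(w) / np.sum(w)            # normalize the weights to sum to one
    # mode='same' keeps the length; edges are implicitly zero-padded here,
    # a simplification relative to the library implementation
    return np.convolve(I, w, mode='same')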

In [4]: def hanning_window(M):


w = [0.5 - 0.5 * np.cos(2 * np.pi * n/(M-1)) for n in range(M)]
return w

window = hanning_window(25) / np.abs(sum(hanning_window(25)))


x = np.linspace(-12, 12, 25)
fig, ax = plt.subplots(figsize=(9, 7))
ax.plot(x, window)
ax.set_title("Hanning window")
ax.set_ylabel("Weights")
ax.set_xlabel("Position in sequence of weights")
plt.show()

28.4.1 Estimation with Smoothing

Our next step is to provide code that will not only estimate the periodogram but also provide
smoothing as required.
Such functions have been written in estspec.py and are available once you’ve installed Quan-
tEcon.py.
The GitHub listing displays three functions, smooth(), periodogram(),
ar_periodogram(). We will discuss the first two here and the third one below.
The periodogram() function returns a periodogram, optionally smoothed via the
smooth() function.
Regarding the smooth() function, since smoothing adds a nontrivial amount of computa-
tion, we have applied a fairly terse array-centric method based around np.convolve.
Readers are left either to explore or simply to use this code according to their interests.
The next three figures each show smoothed and unsmoothed periodograms, as well as the
population or “true” spectral density.
(The model is the same as before — see equation (2) — and there are 400 observations)

From the top figure to bottom, the window length is varied from small to large.

In looking at the figure, we can see that for this model and data size, the window length cho-
sen in the middle figure provides the best fit.
Relative to this value, the first window length provides insufficient smoothing, while the third
gives too much smoothing.
Of course in real estimation problems, the true spectral density is not visible and the choice
of appropriate smoothing will have to be made based on judgement/priors or some other the-
ory.

28.4.2 Pre-Filtering and Smoothing

In the code listing, we showed three functions from the file estspec.py.
The third function in the file (ar_periodogram()) adds a pre-processing step to peri-
odogram smoothing.

First, we describe the basic idea, and after that we give the code.
The essential idea is to

1. Transform the data in order to make estimation of the spectral density more efficient.
2. Compute the periodogram associated with the transformed data.
3. Reverse the effect of the transformation on the periodogram, so that it now estimates
the spectral density of the original process.

Step 1 is called pre-filtering or pre-whitening, while step 3 is called recoloring.


The first step is called pre-whitening because the transformation is usually designed to turn
the data into something closer to white noise.
Why would this be desirable in terms of spectral density estimation?
The reason is that we are smoothing our estimated periodogram based on estimated values at
nearby points — recall (3).
The underlying assumption that makes this a good idea is that the true spectral density is
relatively regular — the value of 𝐼(𝜔) is close to that of 𝐼(𝜔′ ) when 𝜔 is close to 𝜔′ .
This will not be true in all cases, but it is certainly true for white noise.
For white noise, 𝐼 is as regular as possible — it is a constant function.
In this case, values of 𝐼(𝜔′ ) at points 𝜔′ near to 𝜔 provide the maximum possible amount of
information about the value 𝐼(𝜔).
Another way to put this is that if 𝐼 is relatively constant, then we can use a large amount of
smoothing without introducing too much bias.

28.4.3 The AR(1) Setting

Let’s examine this idea more carefully in a particular setting — where the data are assumed
to be generated by an AR(1) process.
(More general ARMA settings can be handled using similar techniques to those described be-
low)
Suppose in particular that {𝑋𝑡 } is covariance stationary and AR(1), with

𝑋𝑡+1 = 𝜇 + 𝜙𝑋𝑡 + 𝜖𝑡+1 (4)

where 𝜇 and 𝜙 ∈ (−1, 1) are unknown parameters and {𝜖𝑡 } is white noise.
It follows that if we regress 𝑋𝑡+1 on 𝑋𝑡 and an intercept, the residuals will approximate white
noise.
Let
• 𝑔 be the spectral density of {𝜖𝑡 } — a constant function, as discussed above
• 𝐼0 be the periodogram estimated from the residuals — an estimate of 𝑔
• 𝑓 be the spectral density of {𝑋𝑡 } — the object we are trying to estimate
In view of an earlier result we obtained while discussing ARMA processes, $f$ and $g$ are related by

$$f(\omega) = \left| \frac{1}{1 - \phi e^{i\omega}} \right|^2 g(\omega) \qquad (5)$$

This suggests that the recoloring step, which constructs an estimate $I$ of $f$ from $I_0$, should set

$$I(\omega) = \left| \frac{1}{1 - \hat{\phi} e^{i\omega}} \right|^2 I_0(\omega)$$

where $\hat{\phi}$ is the OLS estimate of $\phi$.


The code for ar_periodogram() — the third function in estspec.py — does exactly
this. (See the code here).
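To make the three steps concrete, here is a minimal sketch of the recipe written independently of estspec.py; the helper name is hypothetical, and it skips the smoothing step that ar_periodogram() also performs:

import numpy as np

def ar_periodogram_sketch(x):
    # Step 1: pre-whiten -- regress x_{t+1} on x_t and a constant
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    b = np.linalg.lstsq(X, x[1:], rcond=None)[0]
    ϕ_hat = b[1]
    e = x[1:] - X @ b                      # residuals, approximately white noise

    # Step 2: periodogram of the residuals (unsmoothed here, for brevity)
    n = len(e)
    I0 = np.abs(np.fft.fft(e))**2 / n
    ω = 2 * np.pi * np.arange(n) / n       # Fourier frequencies

    # Step 3: recolor -- undo the AR(1) filter in the frequency domain
    I = I0 / np.abs(1 - ϕ_hat * np.exp(1j * ω))**2
    return ω[:n // 2], I[:n // 2]          # keep frequencies in [0, π)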
The next figure shows realizations of the two kinds of smoothed periodograms

1. “standard smoothed periodogram”, the ordinary smoothed periodogram, and

2. “AR smoothed periodogram”, the pre-whitened and recolored one generated by


ar_periodogram()

The periodograms are calculated from time series drawn from (4) with 𝜇 = 0 and 𝜙 = −0.9.
Each time series is of length 150.
The difference between the three subfigures is just randomness — each one uses a different

draw of the time series.

In all cases, periodograms are fit with the “hamming” window and window length of 65.
Overall, the fit of the AR smoothed periodogram is much better, in the sense of being closer
to the true spectral density.

28.5 Exercises

28.5.1 Exercise 1

Replicate this figure (modulo randomness).


The model is as in equation (2) and there are 400 observations.
For the smoothed periodogram, the window type is “hamming”.

28.5.2 Exercise 2

Replicate this figure (modulo randomness).


The model is as in equation (4), with 𝜇 = 0, 𝜙 = −0.9 and 150 observations in each time
series.
All periodograms are fit with the “hamming” window and window length of 65.

28.6 Solutions

28.6.1 Exercise 1

In [5]: ## Data
n = 400
ϕ = 0.5
θ = 0, -0.8
lp = ARMA(ϕ, θ)
X = lp.simulation(ts_length=n)

fig, ax = plt.subplots(3, 1, figsize=(10, 12))

for i, wl in enumerate((15, 55, 175)): # Window lengths

x, y = periodogram(X)
ax[i].plot(x, y, 'b-', lw=2, alpha=0.5, label='periodogram')

x_sd, y_sd = lp.spectral_density(two_pi=False, res=120)


ax[i].plot(x_sd, y_sd, 'r-', lw=2, alpha=0.8, label='spectral density')

x, y_smoothed = periodogram(X, window='hamming', window_len=wl)


ax[i].plot(x, y_smoothed, 'k-', lw=2, label='smoothed periodogram')

ax[i].legend()
ax[i].set_title(f'window length = {wl}')
plt.show()

28.6.2 Exercise 2

In [6]: lp = ARMA(-0.9)
wl = 65

fig, ax = plt.subplots(3, 1, figsize=(10,12))

for i in range(3):
X = lp.simulation(ts_length=150)
ax[i].set_xlim(0, np.pi)

x_sd, y_sd = lp.spectral_density(two_pi=False, res=180)



ax[i].semilogy(x_sd, y_sd, 'r-', lw=2, alpha=0.75,


label='spectral density')

x, y_smoothed = periodogram(X, window='hamming', window_len=wl)


ax[i].semilogy(x, y_smoothed, 'k-', lw=2, alpha=0.75,
label='standard smoothed periodogram')

x, y_ar = ar_periodogram(X, window='hamming', window_len=wl)


ax[i].semilogy(x, y_ar, 'b-', lw=2, alpha=0.75,
label='AR smoothed periodogram')

ax[i].legend(loc='upper left')
plt.show()
Chapter 29

Additive and Multiplicative Functionals

29.1 Contents

• Overview 29.2
• A Particular Additive Functional 29.3
• Dynamics 29.4
• Code 29.5
• More About the Multiplicative Martingale 29.6
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

29.2 Overview

Many economic time series display persistent growth that prevents them from being asymp-
totically stationary and ergodic.
For example, outputs, prices, and dividends typically display irregular but persistent growth.
Asymptotic stationarity and ergodicity are key assumptions needed to make it possible to
learn by applying statistical methods.
Are there ways to model time series that have persistent growth that still enable statistical
learning based on a law of large numbers for an asymptotically stationary and ergodic pro-
cess?
The answer provided by Hansen and Scheinkman [32] is yes.
They described two classes of time series models that accommodate growth.
They are

1. additive functionals that display random “arithmetic growth”


2. multiplicative functionals that display random “geometric growth”

These two classes of processes are closely connected.


If a process {𝑦𝑡 } is an additive functional and 𝜙𝑡 = exp(𝑦𝑡 ), then {𝜙𝑡 } is a multiplicative func-
tional.
Hansen and Sargent [30] (chs. 5 and 8) describe discrete time versions of additive and multi-
plicative functionals.
In this lecture, we describe both additive functionals and multiplicative functionals.
We also describe and compute decompositions of additive and multiplicative processes into
four components:

1. a constant

2. a trend component

3. an asymptotically stationary component

4. a martingale

We describe how to construct, simulate, and interpret these components.


More details about these concepts and algorithms can be found in Hansen and Sargent [30].
Let’s start with some imports:

In [2]: import numpy as np


import scipy as sp
import scipy.linalg as la
import quantecon as qe
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.stats import norm, lognorm

29.3 A Particular Additive Functional

Hansen and Sargent [30] describe a general class of additive functionals.


This lecture focuses on a subclass of these: a scalar process $\{y_t\}_{t=0}^{\infty}$ whose increments are driven by a Gaussian vector autoregression.
Our special additive functional displays interesting time series behavior while also being easy
to construct, simulate, and analyze by using linear state-space tools.
We construct our additive functional from two pieces, the first of which is a first-order vec-
tor autoregression (VAR)

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵𝑧𝑡+1 (1)

Here
• 𝑥𝑡 is an 𝑛 × 1 vector,
• 𝐴 is an 𝑛 × 𝑛 stable matrix (all eigenvalues lie within the open unit circle),
• 𝑧𝑡+1 ∼ 𝑁 (0, 𝐼) is an 𝑚 × 1 IID shock,
• 𝐵 is an 𝑛 × 𝑚 matrix, and
• 𝑥0 ∼ 𝑁 (𝜇0 , Σ0 ) is a random initial condition for 𝑥

The second piece is an equation that expresses increments of $\{y_t\}_{t=0}^{\infty}$ as linear functions of

• a scalar constant 𝜈,
• the vector 𝑥𝑡 , and
• the same Gaussian vector 𝑧𝑡+1 that appears in the VAR (1)
In particular,

𝑦𝑡+1 − 𝑦𝑡 = 𝜈 + 𝐷𝑥𝑡 + 𝐹 𝑧𝑡+1 (2)

Here 𝑦0 ∼ 𝑁 (𝜇𝑦0 , Σ𝑦0 ) is a random initial condition for 𝑦.


The nonstationary random process $\{y_t\}_{t=0}^{\infty}$ displays systematic but random arithmetic growth.
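Before building the state-space machinery below, here is a minimal direct simulation of (1)-(2), with hypothetical parameter values, just to fix ideas:

import numpy as np

# Hypothetical example: n = 2 states, one shock
A = np.array([[0.8, 0.1],
              [0.0, 0.5]])     # stable: eigenvalues inside the unit circle
B = np.array([[0.01], [0.005]])
D = np.array([[1.0, 0.0]])
F = np.array([[0.005]])
ν = 0.01

T = 200
x, y = np.zeros((2, 1)), 0.0   # start from x_0 = 0, y_0 = 0
y_path = np.empty(T)
for t in range(T):
    z = np.random.randn(1, 1)
    y += ν + (D @ x).item() + (F @ z).item()   # increment equation (2)
    x = A @ x + B @ z                          # VAR transition (1)
    y_path[t] = y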

29.3.1 Linear State-Space Representation

A convenient way to represent our additive functional is to use a linear state space system.
To do this, we set up state and observation vectors

$$\hat{x}_t = \begin{bmatrix} 1 \\ x_t \\ y_t \end{bmatrix} \quad \text{and} \quad \hat{y}_t = \begin{bmatrix} x_t \\ y_t \end{bmatrix}$$

Next we construct a linear system

$$\begin{bmatrix} 1 \\ x_{t+1} \\ y_{t+1} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & A & 0 \\ \nu & D & 1 \end{bmatrix} \begin{bmatrix} 1 \\ x_t \\ y_t \end{bmatrix} + \begin{bmatrix} 0 \\ B \\ F \end{bmatrix} z_{t+1}$$

$$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ x_t \\ y_t \end{bmatrix}$$

This can be written as

$$\hat{x}_{t+1} = \hat{A} \hat{x}_t + \hat{B} z_{t+1}$$
$$\hat{y}_t = \hat{D} \hat{x}_t$$

which is a standard linear state space system.


To study it, we could map it into an instance of LinearStateSpace from QuantEcon.py.
But here we will use a different set of code for simulation, for reasons described below.

29.4 Dynamics

Let’s run some simulations to build intuition.


In doing so we'll assume that $z_{t+1}$ is scalar and that $\tilde{x}_t$ follows a 4th-order scalar autoregression

$$\tilde{x}_{t+1} = \phi_1 \tilde{x}_t + \phi_2 \tilde{x}_{t-1} + \phi_3 \tilde{x}_{t-2} + \phi_4 \tilde{x}_{t-3} + \sigma z_{t+1} \qquad (3)$$

in which the zeros $z$ of the polynomial

$$\phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \phi_3 z^3 - \phi_4 z^4$$

are strictly greater than unity in absolute value.


(Being a zero of 𝜙(𝑧) means that 𝜙(𝑧) = 0)
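This condition is easy to check numerically for any candidate coefficients — the values below are the ones used in the simulation code later in the lecture:

import numpy as np

ϕ_1, ϕ_2, ϕ_3, ϕ_4 = 0.5, -0.2, 0, 0.5

# Coefficients of ϕ(z) = 1 - ϕ_1 z - ϕ_2 z² - ϕ_3 z³ - ϕ_4 z⁴,
# ordered from the highest power down, as np.roots expects
zeros = np.roots([-ϕ_4, -ϕ_3, -ϕ_2, -ϕ_1, 1])
print(np.abs(zeros))   # the condition holds if every modulus exceeds 1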
Let the increment in {𝑦𝑡 } obey

$$y_{t+1} - y_t = \nu + \tilde{x}_t + \sigma z_{t+1}$$

with an initial condition for 𝑦0 .


While (3) is not a first order system like (1), we know that it can be mapped into a first order
system.
• For an example of such a mapping, see this example.
In fact, this whole model can be mapped into the additive functional system definition in (1)
– (2) by appropriate selection of the matrices 𝐴, 𝐵, 𝐷, 𝐹 .
You can try writing these matrices down now as an exercise — correct expressions appear in
the code below.

29.4.1 Simulation

When simulating we embed our variables into a bigger system.


This system also constructs the components of the decompositions of 𝑦𝑡 and of exp(𝑦𝑡 ) pro-
posed by Hansen and Scheinkman [32].
All of these objects are computed using the code below

In [3]: class AMF_LSS_VAR:


"""
This class transforms an additive (multiplicative)
functional into a QuantEcon linear state space system.
"""

def __init__(self, A, B, D, F=None, ν=None):


# Unpack required elements
self.nx, self.nk = B.shape
self.A, self.B = A, B

# Checking the dimension of D (extended from the scalar case)


if len(D.shape) > 1 and D.shape[0] != 1:
self.nm = D.shape[0]
self.D = D
elif len(D.shape) > 1 and D.shape[0] == 1:
self.nm = 1
self.D = D
else:

self.nm = 1
self.D = np.expand_dims(D, 0)

# Create space for additive decomposition


self.add_decomp = None
self.mult_decomp = None

# Set F
if not np.any(F):
self.F = np.zeros((self.nk, 1))
else:
self.F = F

# Set ν
if not np.any(ν):
self.ν = np.zeros((self.nm, 1))
elif type(ν) == float:
self.ν = np.asarray([[ν]])
elif len(ν.shape) == 1:
self.ν = np.expand_dims(ν, 1)
else:
self.ν = ν

if self.ν.shape[0] != self.D.shape[0]:
raise ValueError("The dimension of ν is inconsistent with D!")

# Construct BIG state space representation


self.lss = self.construct_ss()

def construct_ss(self):
"""
This creates the state space representation that can be passed
into the quantecon LSS class.
"""
# Pull out useful info
nx, nk, nm = self.nx, self.nk, self.nm
A, B, D, F, ν = self.A, self.B, self.D, self.F, self.ν
if self.add_decomp:
ν, H, g = self.add_decomp
else:
ν, H, g = self.additive_decomp()

# Auxiliary blocks with 0's and 1's to fill out the lss matrices
nx0c = np.zeros((nx, 1))
nx0r = np.zeros(nx)
nx1 = np.ones(nx)
nk0 = np.zeros(nk)
ny0c = np.zeros((nm, 1))
ny0r = np.zeros(nm)
ny1m = np.eye(nm)
ny0m = np.zeros((nm, nm))
nyx0m = np.zeros_like(D)

# Build A matrix for LSS


# Order of states is: [1, t, xt, yt, mt]
A1 = np.hstack([1, 0, nx0r, ny0r, ny0r]) # Transition for 1
A2 = np.hstack([1, 1, nx0r, ny0r, ny0r]) # Transition for t
# Transition for x_{t+1}

A3 = np.hstack([nx0c, nx0c, A, nyx0m.T, nyx0m.T])


# Transition for y_{t+1}
A4 = np.hstack([ν, ny0c, D, ny1m, ny0m])
# Transition for m_{t+1}
A5 = np.hstack([ny0c, ny0c, nyx0m, ny0m, ny1m])
Abar = np.vstack([A1, A2, A3, A4, A5])

# Build B matrix for LSS


Bbar = np.vstack([nk0, nk0, B, F, H])

# Build G matrix for LSS


# Order of observation is: [xt, yt, mt, st, tt]
# Selector for x_{t}
G1 = np.hstack([nx0c, nx0c, np.eye(nx), nyx0m.T, nyx0m.T])
        G2 = np.hstack([ny0c, ny0c, nyx0m, ny1m, ny0m])     # Selector for y_{t}
# Selector for martingale
G3 = np.hstack([ny0c, ny0c, nyx0m, ny0m, ny1m])
        G4 = np.hstack([ny0c, ny0c, -g, ny0m, ny0m])        # Selector for stationary
        G5 = np.hstack([ny0c, ν, nyx0m, ny0m, ny0m])        # Selector for trend
Gbar = np.vstack([G1, G2, G3, G4, G5])

# Build H matrix for LSS


Hbar = np.zeros((Gbar.shape[0], nk))

# Build LSS type


x0 = np.hstack([1, 0, nx0r, ny0r, ny0r])
S0 = np.zeros((len(x0), len(x0)))
        lss = qe.lss.LinearStateSpace(Abar, Bbar, Gbar, Hbar, mu_0=x0, Sigma_0=S0)

return lss

def additive_decomp(self):
"""
Return values for the martingale decomposition
- ν : unconditional mean difference in Y
- H : coefficient for the (linear) martingale component (κ_a)
- g : coefficient for the stationary component g(x)
- Y_0 : it should be the function of X_0 (for now set it to 0.0)
"""
I = np.identity(self.nx)
A_res = la.solve(I - self.A, I)
g = self.D @ A_res
H = self.F + self.D @ A_res @ self.B

return self.ν, H, g

def multiplicative_decomp(self):
"""
Return values for the multiplicative decomposition (Example 5.4.4.)
- ν_tilde : eigenvalue
- H : vector for the Jensen term
"""
ν, H, g = self.additive_decomp()
ν_tilde = ν + (.5)*np.expand_dims(np.diag(H @ H.T), 1)

return ν_tilde, H, g

def loglikelihood_path(self, x, y):


A, B, D, F = self.A, self.B, self.D, self.F
k, T = y.shape
FF = F @ F.T
FFinv = la.inv(FF)
temp = y[:, 1:] - y[:, :-1] - D @ x[:, :-1]
obs = temp * FFinv * temp
obssum = np.cumsum(obs)
scalar = (np.log(la.det(FF)) + k*np.log(2*np.pi))*np.arange(1, T)

return -(.5)*(obssum + scalar)

def loglikelihood(self, x, y):


llh = self.loglikelihood_path(x, y)

return llh[-1]

Plotting

The code below adds some functions that generate plots for instances of the AMF_LSS_VAR
class.

In [4]: def plot_given_paths(amf, T, ypath, mpath, spath, tpath,


mbounds, sbounds, horline=0, show_trend=True):

# Allocate space
trange = np.arange(T)

# Create figure
fig, ax = plt.subplots(2, 2, sharey=True, figsize=(15, 8))

# Plot all paths together


ax[0, 0].plot(trange, ypath[0, :], label="$y_t$", color="k")
ax[0, 0].plot(trange, mpath[0, :], label="$m_t$", color="m")
ax[0, 0].plot(trange, spath[0, :], label="$s_t$", color="g")
if show_trend:
ax[0, 0].plot(trange, tpath[0, :], label="$t_t$", color="r")
ax[0, 0].axhline(horline, color="k", linestyle="-.")
ax[0, 0].set_title("One Path of All Variables")
ax[0, 0].legend(loc="upper left")

# Plot Martingale Component


ax[0, 1].plot(trange, mpath[0, :], "m")
ax[0, 1].plot(trange, mpath.T, alpha=0.45, color="m")
ub = mbounds[1, :]
lb = mbounds[0, :]

ax[0, 1].fill_between(trange, lb, ub, alpha=0.25, color="m")


ax[0, 1].set_title("Martingale Components for Many Paths")
ax[0, 1].axhline(horline, color="k", linestyle="-.")

# Plot Stationary Component


ax[1, 0].plot(spath[0, :], color="g")
ax[1, 0].plot(spath.T, alpha=0.25, color="g")

ub = sbounds[1, :]
lb = sbounds[0, :]
ax[1, 0].fill_between(trange, lb, ub, alpha=0.25, color="g")
ax[1, 0].axhline(horline, color="k", linestyle="-.")
ax[1, 0].set_title("Stationary Components for Many Paths")

# Plot Trend Component


if show_trend:
ax[1, 1].plot(tpath.T, color="r")
ax[1, 1].set_title("Trend Components for Many Paths")
ax[1, 1].axhline(horline, color="k", linestyle="-.")

return fig

def plot_additive(amf, T, npaths=25, show_trend=True):


"""
Plots for the additive decomposition.
Acts on an instance amf of the AMF_LSS_VAR class

"""
# Pull out right sizes so we know how to increment
nx, nk, nm = amf.nx, amf.nk, amf.nm

# Allocate space (nm is the number of additive functionals -


# we want npaths for each)
mpath = np.empty((nm*npaths, T))
mbounds = np.empty((nm*2, T))
spath = np.empty((nm*npaths, T))
sbounds = np.empty((nm*2, T))
tpath = np.empty((nm*npaths, T))
ypath = np.empty((nm*npaths, T))

# Simulate for as long as we wanted


moment_generator = amf.lss.moment_sequence()
# Pull out population moments
for t in range (T):
tmoms = next(moment_generator)
ymeans = tmoms[1]
yvar = tmoms[3]

# Lower and upper bounds - for each additive functional


for ii in range(nm):
li, ui = ii*2, (ii+1)*2
mscale = np.sqrt(yvar[nx+nm+ii, nx+nm+ii])
sscale = np.sqrt(yvar[nx+2*nm+ii, nx+2*nm+ii])
                if mscale == 0.0:
                    # Use a tiny std dev instead of zero to avoid a
                    # RuntimeWarning when evaluating the normal ppf
                    mscale = 1e-12
                if sscale == 0.0:
                    sscale = 1e-12

madd_dist = norm(ymeans[nx+nm+ii], mscale)


sadd_dist = norm(ymeans[nx+2*nm+ii], sscale)

mbounds[li:ui, t] = madd_dist.ppf([0.01, .99])


sbounds[li:ui, t] = sadd_dist.ppf([0.01, .99])

# Pull out paths



for n in range(npaths):
x, y = amf.lss.simulate(T)
for ii in range(nm):
ypath[npaths*ii+n, :] = y[nx+ii, :]
mpath[npaths*ii+n, :] = y[nx+nm + ii, :]
spath[npaths*ii+n, :] = y[nx+2*nm + ii, :]
tpath[npaths*ii+n, :] = y[nx+3*nm + ii, :]

add_figs = []

for ii in range(nm):
li, ui = npaths*(ii), npaths*(ii+1)
LI, UI = 2*(ii), 2*(ii+1)
add_figs.append(plot_given_paths(amf, T,
ypath[li:ui,:],
mpath[li:ui,:],
spath[li:ui,:],
tpath[li:ui,:],
mbounds[LI:UI,:],
sbounds[LI:UI,:],
show_trend=show_trend))

add_figs[ii].suptitle(f'Additive decomposition of $y_{ii+1}$',


fontsize=14)

return add_figs

def plot_multiplicative(amf, T, npaths=25, show_trend=True):


"""
Plots for the multiplicative decomposition

"""
# Pull out right sizes so we know how to increment
nx, nk, nm = amf.nx, amf.nk, amf.nm
# Matrices for the multiplicative decomposition
ν_tilde, H, g = amf.multiplicative_decomp()

# Allocate space (nm is the number of functionals -


# we want npaths for each)
mpath_mult = np.empty((nm*npaths, T))
mbounds_mult = np.empty((nm*2, T))
spath_mult = np.empty((nm*npaths, T))
sbounds_mult = np.empty((nm*2, T))
tpath_mult = np.empty((nm*npaths, T))
ypath_mult = np.empty((nm*npaths, T))

# Simulate for as long as we wanted


moment_generator = amf.lss.moment_sequence()
# Pull out population moments
for t in range(T):
tmoms = next(moment_generator)
ymeans = tmoms[1]
yvar = tmoms[3]

# Lower and upper bounds - for each multiplicative functional


for ii in range(nm):
li, ui = ii*2, (ii+1)*2

Mdist = lognorm(np.sqrt(yvar[nx+nm+ii, nx+nm+ii]).item(),


scale=np.exp(ymeans[nx+nm+ii] \
- t * (.5)
* np.expand_dims(
np.diag(H @ H.T),
1
)[ii]
).item()
)
Sdist = lognorm(np.sqrt(yvar[nx+2*nm+ii, nx+2*nm+ii]).item(),
scale = np.exp(-ymeans[nx+2*nm+ii]).item())
mbounds_mult[li:ui, t] = Mdist.ppf([.01, .99])
sbounds_mult[li:ui, t] = Sdist.ppf([.01, .99])

# Pull out paths


for n in range(npaths):
x, y = amf.lss.simulate(T)
for ii in range(nm):
ypath_mult[npaths*ii+n, :] = np.exp(y[nx+ii, :])
mpath_mult[npaths*ii+n, :] = np.exp(y[nx+nm + ii, :] \
- np.arange(T)*(.5)
* np.expand_dims(np.diag(H
@ H.T),
1)[ii]
)
spath_mult[npaths*ii+n, :] = 1/np.exp(-y[nx+2*nm + ii, :])
tpath_mult[npaths*ii+n, :] = np.exp(y[nx+3*nm + ii, :]
+ np.arange(T)*(.5)
* np.expand_dims(np.diag(H
@ H.T),
1)[ii]
)

mult_figs = []

for ii in range(nm):
li, ui = npaths*(ii), npaths*(ii+1)
LI, UI = 2*(ii), 2*(ii+1)

mult_figs.append(plot_given_paths(amf,T,
ypath_mult[li:ui,:],
mpath_mult[li:ui,:],
spath_mult[li:ui,:],
tpath_mult[li:ui,:],
mbounds_mult[LI:UI,:],
sbounds_mult[LI:UI,:],
1,
show_trend=show_trend))
mult_figs[ii].suptitle(f'Multiplicative decomposition of \
$y_{ii+1}$', fontsize=14)

return mult_figs

def plot_martingale_paths(amf, T, mpath, mbounds, horline=1, show_trend=False):
# Allocate space
trange = np.arange(T)

# Create figure
fig, ax = plt.subplots(1, 1, figsize=(10, 6))

# Plot Martingale Component


ub = mbounds[1, :]
lb = mbounds[0, :]
ax.fill_between(trange, lb, ub, color="#ffccff")
ax.axhline(horline, color="k", linestyle="-.")
ax.plot(trange, mpath.T, linewidth=0.25, color="#4c4c4c")

return fig

def plot_martingales(amf, T, npaths=25):

# Pull out right sizes so we know how to increment


nx, nk, nm = amf.nx, amf.nk, amf.nm
# Matrices for the multiplicative decomposition
ν_tilde, H, g = amf.multiplicative_decomp()

# Allocate space (nm is the number of functionals -


# we want npaths for each)
mpath_mult = np.empty((nm*npaths, T))
mbounds_mult = np.empty((nm*2, T))

# Simulate for as long as we wanted


moment_generator = amf.lss.moment_sequence()
# Pull out population moments
for t in range (T):
tmoms = next(moment_generator)
ymeans = tmoms[1]
yvar = tmoms[3]

# Lower and upper bounds - for each functional


for ii in range(nm):
li, ui = ii*2, (ii+1)*2
Mdist = lognorm(np.sqrt(yvar[nx+nm+ii, nx+nm+ii]).item(),
scale= np.exp(ymeans[nx+nm+ii] \
- t * (.5)
* np.expand_dims(
np.diag(H @ H.T),
1)[ii]

).item()
)
mbounds_mult[li:ui, t] = Mdist.ppf([.01, .99])

# Pull out paths


for n in range(npaths):
x, y = amf.lss.simulate(T)
for ii in range(nm):
mpath_mult[npaths*ii+n, :] = np.exp(y[nx+nm + ii, :] \
- np.arange(T) * (.5)
* np.expand_dims(np.diag(H
@ H.T),
1)[ii]
)

mart_figs = []

for ii in range(nm):
li, ui = npaths*(ii), npaths*(ii+1)
LI, UI = 2*(ii), 2*(ii+1)
mart_figs.append(plot_martingale_paths(amf, T, mpath_mult[li:ui, :],
mbounds_mult[LI:UI, :],
horline=1))
mart_figs[ii].suptitle(f'Martingale components for many paths of \
$y_{ii+1}$', fontsize=14)

return mart_figs

For now, we just plot 𝑦𝑡 and 𝑥𝑡 , postponing until later a description of exactly how we com-
pute them.

In [5]: ϕ_1, ϕ_2, ϕ_3, ϕ_4 = 0.5, -0.2, 0, 0.5


σ = 0.01
ν = 0.01 # Growth rate

# A matrix should be n x n
A = np.array([[ϕ_1, ϕ_2, ϕ_3, ϕ_4],
[ 1, 0, 0, 0],
[ 0, 1, 0, 0],
[ 0, 0, 1, 0]])

# B matrix should be n x k
B = np.array([[σ, 0, 0, 0]]).T

D = np.array([1, 0, 0, 0]) @ A
F = np.array([1, 0, 0, 0]) @ B

amf = AMF_LSS_VAR(A, B, D, F, ν=ν)

T = 150
x, y = amf.lss.simulate(T)

fig, ax = plt.subplots(2, 1, figsize=(10, 9))

ax[0].plot(np.arange(T), y[amf.nx, :], color='k')


ax[0].set_title('Path of $y_t$')
ax[1].plot(np.arange(T), y[0, :], color='g')
ax[1].axhline(0, color='k', linestyle='-.')
ax[1].set_title('Associated path of $x_t$')
plt.show()

Notice the irregular but persistent growth in 𝑦𝑡 .

29.4.2 Decomposition

Hansen and Sargent [30] describe how to construct a decomposition of an additive functional
into four parts:
• a constant inherited from initial values 𝑥0 and 𝑦0
• a linear trend
• a martingale
• an (asymptotically) stationary component
To attain this decomposition for the particular class of additive functionals defined by (1) and
(2), we first construct the matrices

𝐻 ∶= 𝐹 + 𝐷(𝐼 − 𝐴)−1 𝐵
𝑔 ∶= 𝐷(𝐼 − 𝐴)−1

Then the Hansen-Scheinkman [32] decomposition is

$$y_t = \underbrace{t \nu}_{\text{trend component}} + \underbrace{\sum_{j=1}^{t} H z_j}_{\text{martingale component}} - \underbrace{g x_t}_{\text{stationary component}} + \underbrace{g x_0 + y_0}_{\text{initial conditions}}$$

At this stage, you should pause and verify that 𝑦𝑡+1 − 𝑦𝑡 satisfies (2).
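As a numerical sanity check on the decomposition itself, the following sketch simulates a scalar version of (1)-(2) with hypothetical parameters and confirms that the components reassemble $y_t$ at every date:

import numpy as np

A, B, D, F, ν = 0.8, 0.001, 1.0, 0.01, 0.005   # hypothetical scalar example
H = F + D * B / (1 - A)                        # H := F + D(I - A)⁻¹B
g = D / (1 - A)                                # g := D(I - A)⁻¹

np.random.seed(0)
x = y = m = 0.0                                # x_0 = y_0 = 0 and m_0 = 0
for t in range(1, 51):
    z = np.random.randn()
    y += ν + D * x + F * z                     # increment equation (2)
    m += H * z                                 # martingale component
    x = A * x + B * z                          # VAR transition (1)
    # y_t = tν + m_t - g x_t + (g x_0 + y_0), and x_0 = y_0 = 0 here
    assert abs(y - (t * ν + m - g * x)) < 1e-12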
It is convenient for us to introduce the following notation:
• $\tau_t = \nu t$, a linear, deterministic trend
• $m_t = \sum_{j=1}^{t} H z_j$, a martingale with time $t+1$ increment $H z_{t+1}$
• $s_t = g x_t$, an (asymptotically) stationary component
We want to characterize and simulate components 𝜏𝑡 , 𝑚𝑡 , 𝑠𝑡 of the decomposition.
A convenient way to do this is to construct an appropriate instance of a linear state space
system by using LinearStateSpace from QuantEcon.py.
This will allow us to use the routines in LinearStateSpace to study dynamics.
To start, observe that, under the dynamics in (1) and (2) and with the definitions just given,

$$\begin{bmatrix} 1 \\ t+1 \\ x_{t+1} \\ y_{t+1} \\ m_{t+1} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & A & 0 & 0 \\ \nu & 0 & D & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ t \\ x_t \\ y_t \\ m_t \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ B \\ F \\ H \end{bmatrix} z_{t+1}$$

and

$$\begin{bmatrix} x_t \\ y_t \\ \tau_t \\ m_t \\ s_t \end{bmatrix} = \begin{bmatrix} 0 & 0 & I & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & \nu & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & -g & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ t \\ x_t \\ y_t \\ m_t \end{bmatrix}$$

With

$$\tilde{x}_t := \begin{bmatrix} 1 \\ t \\ x_t \\ y_t \\ m_t \end{bmatrix} \quad \text{and} \quad \tilde{y}_t := \begin{bmatrix} x_t \\ y_t \\ \tau_t \\ m_t \\ s_t \end{bmatrix}$$

we can write this as the linear state space system

$$\tilde{x}_{t+1} = \tilde{A} \tilde{x}_t + \tilde{B} z_{t+1}$$
$$\tilde{y}_t = \tilde{D} \tilde{x}_t$$

By picking out components of 𝑦𝑡̃ , we can track all variables of interest.



29.5 Code

The class AMF_LSS_VAR mentioned above does all that we want to study our additive func-
tional.
In fact, AMF_LSS_VAR does more because it allows us to study an associated multiplicative
functional as well.
(A hint that it does more is the name of the class – here AMF stands for “additive and mul-
tiplicative functional” – the code computes and displays objects associated with multiplicative
functionals too.)
Let’s use this code (embedded above) to explore the example process described above.
If you run the code that first simulated that example again and then the method call you will
generate (modulo randomness) the plot

In [6]: plot_additive(amf, T)
plt.show()

When we plot multiple realizations of a component in the 2nd, 3rd, and 4th panels, we also
plot the population 95% probability coverage sets computed using the LinearStateSpace class.
We have chosen to simulate many paths, all starting from the same non-random initial condi-
tions 𝑥0 , 𝑦0 (you can tell this from the shape of the 95% probability coverage shaded areas).
Notice tell-tale signs of these probability coverage shaded areas

• the purple one for the martingale component 𝑚𝑡 grows with 𝑡
• the green one for the stationary component 𝑠𝑡 converges to a constant band

29.5.1 Associated Multiplicative Functional

Where {𝑦𝑡 } is our additive functional, let 𝑀𝑡 = exp(𝑦𝑡 ).



As mentioned above, the process {𝑀𝑡 } is called a multiplicative functional.


Corresponding to the additive decomposition described above we have a multiplicative decom-
position of 𝑀𝑡

$$\frac{M_t}{M_0} = \exp(t \nu) \exp\left( \sum_{j=1}^{t} H \cdot z_j \right) \exp\left( D(I-A)^{-1} x_0 - D(I-A)^{-1} x_t \right)$$

or

$$\frac{M_t}{M_0} = \exp(\tilde{\nu} t) \left( \frac{\tilde{M}_t}{\tilde{M}_0} \right) \left( \frac{\tilde{e}(X_0)}{\tilde{e}(x_t)} \right)$$

where

$$\tilde{\nu} = \nu + \frac{H \cdot H}{2}, \qquad \tilde{M}_t = \exp\left( \sum_{j=1}^{t} \left( H \cdot z_j - \frac{H \cdot H}{2} \right) \right), \qquad \tilde{M}_0 = 1$$

and

$$\tilde{e}(x) = \exp[g(x)] = \exp[D(I-A)^{-1} x]$$

An instance of class AMF_LSS_VAR (above) includes this associated multiplicative functional


as an attribute.
Let’s plot this multiplicative functional for our example.
If you run the code that first simulated that example again and then the method call in the
cell below you’ll obtain the graph in the next cell.

In [7]: plot_multiplicative(amf, T)
plt.show()

As before, when we plotted multiple realizations of a component in the 2nd, 3rd, and 4th
panels, we also plotted population 95% confidence bands computed using the LinearStateS-
pace class.
Comparing this figure and the last also helps show how geometric growth differs from arith-
metic growth.
The top right panel of the above graph shows a panel of martingales associated with the
panel of 𝑀𝑡 = exp(𝑦𝑡 ) that we have generated for a limited horizon 𝑇 .
It is interesting to see how the martingale behaves as $T \to +\infty$.
Let’s see what happens when we set 𝑇 = 12000 instead of 150.

29.5.2 Peculiar Large Sample Property

Hansen and Sargent [30] (ch. 8) describe the following two properties of the martingale component $\tilde{M}_t$ of the multiplicative decomposition
• while $E_0 \tilde{M}_t = 1$ for all $t \geq 0$, nevertheless …
• as $t \to +\infty$, $\tilde{M}_t$ converges to zero almost surely
The first property follows from the fact that $\tilde{M}_t$ is a multiplicative martingale with initial condition $\tilde{M}_0 = 1$.
The second is a peculiar property noted and proved by Hansen and Sargent [30].
The following simulation of many paths of $\tilde{M}_t$ illustrates both properties

In [8]: np.random.seed(10021987)
plot_martingales(amf, 12000)
plt.show()

The dotted line in the above graph is the mean $E \tilde{M}_t = 1$ of the martingale.
It remains constant at unity, illustrating the first property.
The purple 95 percent frequency coverage interval collapses around zero, illustrating the sec-
ond property.

29.6 More About the Multiplicative Martingale

Let's drill down and study the probability distribution of the multiplicative martingale $\{\tilde{M}_t\}_{t=0}^{\infty}$ in more detail.
As we have seen, it has representation

$$\tilde{M}_t = \exp\left( \sum_{j=1}^{t} \left( H \cdot z_j - \frac{H \cdot H}{2} \right) \right), \qquad \tilde{M}_0 = 1$$

where $H = [F + D(I-A)^{-1} B]$.


It follows that $\log \tilde{M}_t \sim \mathcal{N}\left( -\frac{t H \cdot H}{2}, \; t H \cdot H \right)$ and that consequently $\tilde{M}_t$ is log normal.
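As a quick consistency check, the mean of a lognormal variable is $\exp(\mu + \sigma^2/2)$, so these parameters give $E \tilde{M}_t = \exp(-t H \cdot H / 2 + t H \cdot H / 2) = 1$ for every $t$ — exactly the martingale property. In code, with a hypothetical value for $H \cdot H$:

import numpy as np
from scipy.stats import lognorm

HH, t = 0.04, 100                                # hypothetical H·H and date
dist = lognorm(np.sqrt(t * HH), scale=np.exp(-t * HH / 2))
print(dist.mean())                               # 1.0, up to floating point error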

29.6.1 Simulating a Multiplicative Martingale Again

Next, we want a program to simulate the likelihood ratio process $\{\tilde{M}_t\}_{t=0}^{\infty}$.

In particular, we want to simulate 5000 sample paths of length $T$ for the case in which $x$ is a scalar and $[A, B, D, F] = [0.8, 0.001, 1.0, 0.01]$ and $\nu = 0.005$.
After accomplishing this, we want to display and study histograms of $\tilde{M}_T^i$ for various values of $T$.
Here is code that accomplishes these tasks.

29.6.2 Sample Paths

Let's write a program to simulate sample paths of $\{x_t, y_t\}_{t=0}^{\infty}$.

We’ll do this by formulating the additive functional as a linear state space model and putting
the LinearStateSpace class to work.

In [9]: class AMF_LSS_VAR:


"""
This class is written to transform a scalar additive functional
into a linear state space system.
"""
def __init__(self, A, B, D, F=0.0, ν=0.0):
# Unpack required elements
self.A, self.B, self.D, self.F, self.ν = A, B, D, F, ν

# Create space for additive decomposition


self.add_decomp = None
self.mult_decomp = None

# Construct BIG state space representation


self.lss = self.construct_ss()

def construct_ss(self):
"""
This creates the state space representation that can be passed
into the quantecon LSS class.
"""
# Pull out useful info
A, B, D, F, ν = self.A, self.B, self.D, self.F, self.ν
nx, nk, nm = 1, 1, 1
if self.add_decomp:
ν, H, g = self.add_decomp
else:
ν, H, g = self.additive_decomp()

# Build A matrix for LSS


# Order of states is: [1, t, xt, yt, mt]
A1 = np.hstack([1, 0, 0, 0, 0]) # Transition for 1
A2 = np.hstack([1, 1, 0, 0, 0]) # Transition for t
A3 = np.hstack([0, 0, A, 0, 0]) # Transition for x_{t+1}
A4 = np.hstack([ν, 0, D, 1, 0]) # Transition for y_{t+1}
A5 = np.hstack([0, 0, 0, 0, 1]) # Transition for m_{t+1}
Abar = np.vstack([A1, A2, A3, A4, A5])

# Build B matrix for LSS


Bbar = np.vstack([0, 0, B, F, H])

# Build G matrix for LSS


# Order of observation is: [xt, yt, mt, st, tt]
G1 = np.hstack([0, 0, 1, 0, 0]) # Selector for x_{t}
G2 = np.hstack([0, 0, 0, 1, 0]) # Selector for y_{t}
G3 = np.hstack([0, 0, 0, 0, 1]) # Selector for martingale
G4 = np.hstack([0, 0, -g, 0, 0]) # Selector for stationary
G5 = np.hstack([0, ν, 0, 0, 0]) # Selector for trend
Gbar = np.vstack([G1, G2, G3, G4, G5])

# Build H matrix for LSS


Hbar = np.zeros((1, 1))

# Build LSS type


x0 = np.hstack([1, 0, 0, 0, 0])
S0 = np.zeros((5, 5))
lss = qe.lss.LinearStateSpace(Abar, Bbar, Gbar, Hbar,
mu_0=x0, Sigma_0=S0)

return lss

def additive_decomp(self):
"""
Return values for the martingale decomposition (Proposition 4.3.3.)
- ν : unconditional mean difference in Y
            - H         : coefficient for the (linear) martingale component (kappa_a)
- g : coefficient for the stationary component g(x)
- Y_0 : it should be the function of X_0 (for now set it to 0.0)
"""

A_res = 1 / (1 - self.A)
g = self.D * A_res
H = self.F + self.D * A_res * self.B

return self.ν, H, g

def multiplicative_decomp(self):
"""
Return values for the multiplicative decomposition (Example 5.4.4.)
- ν_tilde : eigenvalue
- H : vector for the Jensen term
"""
ν, H, g = self.additive_decomp()
ν_tilde = ν + (.5) * H**2

return ν_tilde, H, g

def loglikelihood_path(self, x, y):


A, B, D, F = self.A, self.B, self.D, self.F
T = y.T.size
FF = F**2
FFinv = 1 / FF
temp = y[1:] - y[:-1] - D * x[:-1]
obs = temp * FFinv * temp
obssum = np.cumsum(obs)
scalar = (np.log(FF) + np.log(2 * np.pi)) * np.arange(1, T)

return (-0.5) * (obssum + scalar)

def loglikelihood(self, x, y):


llh = self.loglikelihood_path(x, y)

return llh[-1]

The heavy lifting is done inside the AMF_LSS_VAR class.


The following code adds some simple functions that make it straightforward to generate sam-
ple paths from an instance of AMF_LSS_VAR.

In [10]: def simulate_xy(amf, T):


"Simulate individual paths."
foo, bar = amf.lss.simulate(T)
x = bar[0, :]
y = bar[1, :]

return x, y

def simulate_paths(amf, T=150, I=5000):


"Simulate multiple independent paths."

# Allocate space
storeX = np.empty((I, T))
storeY = np.empty((I, T))

for i in range(I):
# Do specific simulation
x, y = simulate_xy(amf, T)

# Fill in our storage matrices


storeX[i, :] = x
storeY[i, :] = y

return storeX, storeY

def population_means(amf, T=150):


# Allocate Space
xmean = np.empty(T)
ymean = np.empty(T)

# Pull out moment generator


moment_generator = amf.lss.moment_sequence()

for tt in range (T):


tmoms = next(moment_generator)
ymeans = tmoms[1]
xmean[tt] = ymeans[0]
ymean[tt] = ymeans[1]

return xmean, ymean

Now that we have these functions in our toolkit, let’s apply them to run some simulations.

In [11]: def simulate_martingale_components(amf, T=1000, I=5000):


# Get the multiplicative decomposition
ν, H, g = amf.multiplicative_decomp()

# Allocate space
add_mart_comp = np.empty((I, T))

# Simulate and pull out additive martingale component


for i in range(I):
foo, bar = amf.lss.simulate(T)

# Martingale component is third component


add_mart_comp[i, :] = bar[2, :]

mul_mart_comp = np.exp(add_mart_comp - (np.arange(T) * H**2)/2)

return add_mart_comp, mul_mart_comp

# Build model
         amf_2 = AMF_LSS_VAR(0.8, 0.001, 1.0, 0.01, ν=.005)

amc, mmc = simulate_martingale_components(amf_2, 1000, 5000)

amcT = amc[:, -1]


mmcT = mmc[:, -1]

print("The (min, mean, max) of additive Martingale component in period T�


↪ is")
print(f"\t ({np.min(amcT)}, {np.mean(amcT)}, {np.max(amcT)})")

print("The (min, mean, max) of multiplicative Martingale component \


in period T is")
print(f"\t ({np.min(mmcT)}, {np.mean(mmcT)}, {np.max(mmcT)})")

The (min, mean, max) of additive Martingale component in period T is


(-1.8379907335579106, 0.011040789361757435, 1.4697384727035145)
The (min, mean, max) of multiplicative Martingale component in period T is
(0.14222026893384476, 1.006753060146832, 3.8858858377907133)

Let's plot the probability density functions for $\log \tilde{M}_t$ for $t = 10, 100, 500, 1000, 2500, 5000$, the values used in the code below.
Then let's use the plots to investigate how these densities evolve through time.
We will plot the densities of $\log \tilde{M}_t$ for different values of $t$.
Note: scipy.stats.lognorm expects you to pass the standard deviation of the underlying normal first ($\sqrt{t H \cdot H}$) and then the exponent of its mean as a keyword argument scale (scale=np.exp(-t * H2 / 2)).
• See the documentation here.
This is peculiar, so make sure you are careful in working with the log normal distribution.
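A small self-contained check of the parameterization (the values below are arbitrary): lognorm(s, scale=np.exp(μ)) matches exp(Z) with Z ∼ N(μ, s²).

import numpy as np
from scipy.stats import lognorm

μ, s = -0.5, 0.8                                 # arbitrary mean and std of log X
samples = np.exp(μ + s * np.random.randn(100_000))
dist = lognorm(s, scale=np.exp(μ))
print(samples.mean(), dist.mean())               # the two means should be close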
Here is some code that tackles these tasks

In [12]: def Mtilde_t_density(amf, t, xmin=1e-8, xmax=5.0, npts=5000):

# Pull out the multiplicative decomposition


νtilde, H, g = amf.multiplicative_decomp()
H2 = H*H

# The distribution
mdist = lognorm(np.sqrt(t*H2), scale=np.exp(-t*H2/2))
x = np.linspace(xmin, xmax, npts)
pdf = mdist.pdf(x)

return x, pdf

def logMtilde_t_density(amf, t, xmin=-15.0, xmax=15.0, npts=5000):

# Pull out the multiplicative decomposition


νtilde, H, g = amf.multiplicative_decomp()
H2 = H*H

# The distribution
lmdist = norm(-t*H2/2, np.sqrt(t*H2))
x = np.linspace(xmin, xmax, npts)
pdf = lmdist.pdf(x)

return x, pdf

times_to_plot = [10, 100, 500, 1000, 2500, 5000]


         dens_to_plot = map(lambda t: Mtilde_t_density(amf_2, t, xmin=1e-8, xmax=6.0),
                            times_to_plot)
ldens_to_plot = map(lambda t: logMtilde_t_density(amf_2, t, xmin=-10.0,
xmax=10.0), times_to_plot)

fig, ax = plt.subplots(3, 2, figsize=(14, 14))


ax = ax.flatten()

fig.suptitle(r"Densities of $\tilde{M}_t$", fontsize=18, y=1.02)


for (it, dens_t) in enumerate(dens_to_plot):
x, pdf = dens_t
ax[it].set_title(f"Density for time {times_to_plot[it]}")
ax[it].fill_between(x, np.zeros_like(pdf), pdf)

plt.tight_layout()
plt.show()

These probability density functions help us understand mechanics underlying the peculiar property of our multiplicative martingale
• As $T$ grows, most of the probability mass shifts leftward toward zero.
• For example, note that most mass is near 1 for $T = 10$ or $T = 100$ but most of it is near 0 for $T = 5000$.
• As $T$ grows, the tail of the density of $\tilde{M}_T$ lengthens toward the right.
• Enough mass moves toward the right tail to keep $E \tilde{M}_T = 1$ even as most mass in the distribution of $\tilde{M}_T$ collapses around 0.

29.6.3 Multiplicative Martingale as Likelihood Ratio Process

This lecture studies likelihood processes and likelihood ratio processes.


A likelihood ratio process is a multiplicative martingale with mean unity.
Likelihood ratio processes therefore exhibit the same peculiar large-sample property that appears naturally in this lecture.
Chapter 30

Classical Control with Linear Algebra
30.1 Contents

• Overview 30.2
• A Control Problem 30.3
• Finite Horizon Theory 30.4
• The Infinite Horizon Limit 30.5
• Undiscounted Problems 30.6
• Implementation 30.7
• Exercises 30.8

30.2 Overview

In an earlier lecture Linear Quadratic Dynamic Programming Problems, we have studied


how to solve a special class of dynamic optimization and prediction problems by applying the
method of dynamic programming. In this class of problems

• the objective function is quadratic in states and controls.

• the one-step transition function is linear.


• shocks are IID Gaussian or martingale differences.
In this lecture and a companion lecture Classical Filtering with Linear Algebra, we study the
classical theory of linear-quadratic (LQ) optimal control problems.
The classical approach does not use the two closely related methods – dynamic programming
and Kalman filtering – that we describe in other lectures, namely, Linear Quadratic Dynamic
Programming Problems and A First Look at the Kalman Filter.
Instead, they use either

• 𝑧-transform and lag operator methods, or

• matrix decompositions applied to linear systems of first-order conditions for optimum


problems.


In this lecture and the sequel Classical Filtering with Linear Algebra, we mostly rely on ele-
mentary linear algebra.
The main tool from linear algebra we’ll put to work here is LU decomposition.
We’ll begin with discrete horizon problems.
Then we’ll view infinite horizon problems as appropriate limits of these finite horizon prob-
lems.
Later, we will examine the close connection between LQ control and least-squares prediction
and filtering problems.
These classes of problems are connected in the sense that to solve each, essentially the same
mathematics is used.
Let’s start with some standard imports:

In [1]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline

30.2.1 References

Useful references include [68], [27], [49], [5], and [48].

30.3 A Control Problem

Let $L$ be the lag operator, so that, for a sequence $\{x_t\}$, we have $L x_t = x_{t-1}$.
More generally, let $L^k x_t = x_{t-k}$ with $L^0 x_t = x_t$ and

$$d(L) = d_0 + d_1 L + \ldots + d_m L^m$$

where 𝑑0 , 𝑑1 , … , 𝑑𝑚 is a given scalar sequence.
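As a concrete illustration, applying $d(L)$ to a sequence is a one-sided convolution; here is a minimal sketch with arbitrary coefficients:

import numpy as np

d = np.array([1.0, -0.6, 0.25])      # arbitrary d_0, d_1, d_2, so m = 2
y = np.random.randn(10)

# Entry t of the full convolution is d_0 y_t + d_1 y_{t-1} + d_2 y_{t-2};
# the first m entries would require pre-sample y's, so we drop them
dLy = np.convolve(y, d)[2:len(y)]    # d(L)y_t for t = 2, ..., 9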


Consider the discrete-time control problem

$$\max_{\{y_t\}} \; \lim_{N \to \infty} \sum_{t=0}^{N} \beta^t \left\{ a_t y_t - \frac{1}{2} h y_t^2 - \frac{1}{2} [d(L) y_t]^2 \right\} \qquad (1)$$

where
• $h$ is a positive parameter and $\beta \in (0, 1)$ is a discount factor.
• $\{a_t\}_{t \geq 0}$ is a sequence of exponential order less than $\beta^{-1/2}$, by which we mean $\lim_{t \to \infty} \beta^{t/2} a_t = 0$.
Maximization in (1) is subject to initial conditions for 𝑦−1 , 𝑦−2 … , 𝑦−𝑚 .
Maximization is over infinite sequences {𝑦𝑡 }𝑡≥0 .

30.3.1 Example

The formulation of the LQ problem given above is broad enough to encompass many useful
models.
As a simple illustration, recall that in LQ Control: Foundations we consider a monopolist fac-
ing stochastic demand shocks and adjustment costs.
Let’s consider a deterministic version of this problem, where the monopolist maximizes the
discounted sum


$$\sum_{t=0}^{\infty} \beta^t \pi_t$$

and

$$\pi_t = p_t q_t - c q_t - \gamma (q_{t+1} - q_t)^2 \quad \text{with} \quad p_t = \alpha_0 - \alpha_1 q_t + d_t$$

In this expression, 𝑞𝑡 is output, 𝑐 is average cost of production, and 𝑑𝑡 is a demand shock.


The term $\gamma (q_{t+1} - q_t)^2$ represents adjustment costs.
You will be able to confirm that the objective function can be rewritten as (1) when
• $a_t := \alpha_0 + d_t - c$
• $h := 2 \alpha_1$
• $d(L) := \sqrt{2\gamma}(I - L)$
Further examples of this problem for factor demand, economic growth, and government policy
problems are given in ch. IX of [59].

30.4 Finite Horizon Theory

We first study a finite 𝑁 version of the problem.


Later we will study an infinite horizon problem solution as a limiting version of a finite hori-
zon problem.
(This will require being careful because the limits as 𝑁 → ∞ of the necessary and sufficient
conditions for maximizing finite 𝑁 versions of (1) are not sufficient for maximizing (1))
We begin by

1. fixing 𝑁 > 𝑚,

2. differentiating the finite version of (1) with respect to 𝑦0 , 𝑦1 , … , 𝑦𝑁 , and

3. setting these derivatives to zero.

For 𝑡 = 0, … , 𝑁 − 𝑚 these first-order necessary conditions are the Euler equations.


For 𝑡 = 𝑁 − 𝑚 + 1, … , 𝑁 , the first-order conditions are a set of terminal conditions.
Consider the term

$$J = \sum_{t=0}^{N} \beta^t [d(L) y_t][d(L) y_t] = \sum_{t=0}^{N} \beta^t \left( d_0 y_t + d_1 y_{t-1} + \cdots + d_m y_{t-m} \right) \left( d_0 y_t + d_1 y_{t-1} + \cdots + d_m y_{t-m} \right)$$

Differentiating $J$ with respect to $y_t$ for $t = 0, 1, \ldots, N - m$ gives

$$\frac{\partial J}{\partial y_t} = 2 \beta^t d_0 d(L) y_t + 2 \beta^{t+1} d_1 d(L) y_{t+1} + \cdots + 2 \beta^{t+m} d_m d(L) y_{t+m} = 2 \beta^t \left( d_0 + d_1 \beta L^{-1} + d_2 \beta^2 L^{-2} + \cdots + d_m \beta^m L^{-m} \right) d(L) y_t$$

We can write this more succinctly as

$$\frac{\partial J}{\partial y_t} = 2 \beta^t d(\beta L^{-1}) d(L) y_t \qquad (2)$$

Differentiating $J$ with respect to $y_t$ for $t = N - m + 1, \ldots, N$ gives

$$\begin{aligned}
\frac{\partial J}{\partial y_N} &= 2 \beta^N d_0 d(L) y_N \\
\frac{\partial J}{\partial y_{N-1}} &= 2 \beta^{N-1} \left[ d_0 + \beta d_1 L^{-1} \right] d(L) y_{N-1} \\
&\vdots \\
\frac{\partial J}{\partial y_{N-m+1}} &= 2 \beta^{N-m+1} \left[ d_0 + \beta L^{-1} d_1 + \cdots + \beta^{m-1} L^{-m+1} d_{m-1} \right] d(L) y_{N-m+1}
\end{aligned} \qquad (3)$$

With these preliminaries under our belts, we are ready to differentiate (1).
Differentiating (1) with respect to 𝑦𝑡 for 𝑡 = 0, … , 𝑁 − 𝑚 gives the Euler equations

$$[h + d(\beta L^{-1}) d(L)] y_t = a_t, \qquad t = 0, 1, \ldots, N - m \qquad (4)$$

The system of equations (4) forms a 2 × 𝑚 order linear difference equation that must hold for
the values of 𝑡 indicated.
Differentiating (1) with respect to 𝑦𝑡 for 𝑡 = 𝑁 − 𝑚 + 1, … , 𝑁 gives the terminal conditions

$$\begin{aligned}
\beta^N \left( a_N - h y_N - d_0 d(L) y_N \right) &= 0 \\
\beta^{N-1} \left( a_{N-1} - h y_{N-1} - (d_0 + \beta d_1 L^{-1}) d(L) y_{N-1} \right) &= 0 \\
&\vdots \\
\beta^{N-m+1} \left( a_{N-m+1} - h y_{N-m+1} - (d_0 + \beta L^{-1} d_1 + \cdots + \beta^{m-1} L^{-m+1} d_{m-1}) d(L) y_{N-m+1} \right) &= 0
\end{aligned} \qquad (5)$$
In the finite 𝑁 problem, we want simultaneously to solve (4) subject to the 𝑚 initial condi-
tions 𝑦−1 , … , 𝑦−𝑚 and the 𝑚 terminal conditions (5).
These conditions uniquely pin down the solution of the finite 𝑁 problem.

That is, for the finite 𝑁 problem, conditions (4) and (5) are necessary and sufficient for a
maximum, by concavity of the objective function.
Next, we describe how to obtain the solution using matrix methods.

30.4.1 Matrix Methods

Let’s look at how linear algebra can be used to tackle and shed light on the finite horizon LQ
control problem.

A Single Lag Term

Let’s begin with the special case in which 𝑚 = 1.


We want to solve the system of $N + 1$ linear equations

$$[h + d(\beta L^{-1}) d(L)] y_t = a_t, \quad t = 0, 1, \ldots, N - 1$$
$$\beta^N [a_N - h y_N - d_0 d(L) y_N] = 0 \qquad (6)$$

where $d(L) = d_0 + d_1 L$.
These equations are to be solved for $y_0, y_1, \ldots, y_N$ as functions of $a_0, a_1, \ldots, a_N$ and $y_{-1}$.
Let

$$\phi(L) = \phi_0 + \phi_1 L + \beta \phi_1 L^{-1} = h + d(\beta L^{-1}) d(L) = (h + d_0^2 + \beta d_1^2) + d_1 d_0 L + d_1 d_0 \beta L^{-1}$$

Then we can represent (6) as the matrix equation

$$\begin{bmatrix}
(\phi_0 - \beta d_1^2) & \phi_1 & 0 & 0 & \cdots & \cdots & 0 \\
\beta \phi_1 & \phi_0 & \phi_1 & 0 & \cdots & \cdots & 0 \\
0 & \beta \phi_1 & \phi_0 & \phi_1 & \cdots & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & \cdots & \cdots & \cdots & \beta \phi_1 & \phi_0 & \phi_1 \\
0 & \cdots & \cdots & \cdots & 0 & \beta \phi_1 & \phi_0
\end{bmatrix}
\begin{bmatrix} y_N \\ y_{N-1} \\ y_{N-2} \\ \vdots \\ y_1 \\ y_0 \end{bmatrix}
=
\begin{bmatrix} a_N \\ a_{N-1} \\ a_{N-2} \\ \vdots \\ a_1 \\ a_0 - \phi_1 y_{-1} \end{bmatrix} \qquad (7)$$

or

$$W \bar{y} = \bar{a} \qquad (8)$$

Notice how we have chosen to arrange the 𝑦𝑡 ’s in reverse time order.


The matrix 𝑊 on the left side of (7) is “almost” a Toeplitz matrix (where each descending
diagonal is constant).
There are two sources of deviation from the form of a Toeplitz matrix

1. The first element differs from the remaining diagonal elements, reflecting the terminal
condition.

2. The sub-diagonal elements equal $\beta$ times the super-diagonal elements.



The solution of (8) can be expressed in the form

$$\bar{y} = W^{-1} \bar{a} \qquad (9)$$

which represents each element $y_t$ of $\bar{y}$ as a function of the entire vector $\bar{a}$.
That is, $y_t$ is a function of past, present, and future values of $a$'s, as well as of the initial condition $y_{-1}$.
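Here is a small numerical sketch of (7)-(9) for the $m = 1$ case, with arbitrary parameter values (the LQFilter class in the Implementation section handles the general problem):

import numpy as np

h, β, d0, d1, N = 1.0, 0.95, 1.0, -0.5, 8     # arbitrary example values
y_init = 0.0                                   # the initial condition y_{-1}
a = np.ones(N + 1)                             # forcing sequence a_0, ..., a_N

ϕ0 = h + d0**2 + β * d1**2
ϕ1 = d1 * d0

# Assemble W with the y's ordered in reverse time, as in (7)
W = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    W[i, i] = ϕ0
    if i > 0:
        W[i, i - 1] = β * ϕ1                   # sub-diagonal
    if i < N:
        W[i, i + 1] = ϕ1                       # super-diagonal
W[0, 0] = ϕ0 - β * d1**2                       # terminal condition: h + d0²

a_bar = a[::-1].copy()                         # ā = [a_N, ..., a_0]
a_bar[-1] -= ϕ1 * y_init                       # last equation includes -ϕ1 y_{-1}
y_bar = np.linalg.solve(W, a_bar)              # ȳ = [y_N, ..., y_0]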

An Alternative Representation

An alternative way to express the solution to (7) or (8) is in so-called feedback-


feedforward form.
The idea here is to find a solution expressing 𝑦𝑡 as a function of past 𝑦’s and current and fu-
ture 𝑎’s.
To achieve this solution, one can use an LU decomposition of 𝑊 .
There always exists a decomposition of 𝑊 of the form 𝑊 = 𝐿𝑈 where
• 𝐿 is an (𝑁 + 1) × (𝑁 + 1) lower triangular matrix.
• 𝑈 is an (𝑁 + 1) × (𝑁 + 1) upper triangular matrix.
The factorization can be normalized so that the diagonal elements of 𝑈 are unity.
Using the LU representation in (9), we obtain

$$U \bar{y} = L^{-1} \bar{a} \qquad (10)$$

Since $L^{-1}$ is lower triangular, this representation expresses $y_t$ as a function of
• lagged $y$'s (via the term $U \bar{y}$), and
• current and future $a$'s (via the term $L^{-1} \bar{a}$)
Because there are zeros everywhere in the matrix on the left of (7) except on the diagonal,
super-diagonal, and sub-diagonal, the 𝐿𝑈 decomposition takes
• 𝐿 to be zero except in the diagonal and the leading sub-diagonal.
• 𝑈 to be zero except on the diagonal and the super-diagonal.
Thus, (10) has the form

$$\begin{bmatrix}
1 & U_{12} & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & U_{23} & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & U_{34} & \cdots & 0 & 0 \\
0 & 0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1 & U_{N,N+1} \\
0 & 0 & 0 & 0 & \cdots & 0 & 1
\end{bmatrix}
\begin{bmatrix} y_N \\ y_{N-1} \\ y_{N-2} \\ y_{N-3} \\ \vdots \\ y_1 \\ y_0 \end{bmatrix}
=
\begin{bmatrix}
L^{-1}_{11} & 0 & 0 & \cdots & 0 \\
L^{-1}_{21} & L^{-1}_{22} & 0 & \cdots & 0 \\
L^{-1}_{31} & L^{-1}_{32} & L^{-1}_{33} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
L^{-1}_{N,1} & L^{-1}_{N,2} & L^{-1}_{N,3} & \cdots & 0 \\
L^{-1}_{N+1,1} & L^{-1}_{N+1,2} & L^{-1}_{N+1,3} & \cdots & L^{-1}_{N+1,N+1}
\end{bmatrix}
\begin{bmatrix} a_N \\ a_{N-1} \\ a_{N-2} \\ \vdots \\ a_1 \\ a_0 - \phi_1 y_{-1} \end{bmatrix}$$

where $L^{-1}_{ij}$ is the $(i, j)$ element of $L^{-1}$ and $U_{ij}$ is the $(i, j)$ element of $U$.
Note how the left side for a given 𝑡 involves 𝑦𝑡 and one lagged value 𝑦𝑡−1 while the right side
involves all future values of the forcing process 𝑎𝑡 , 𝑎𝑡+1 , … , 𝑎𝑁 .

Additional Lag Terms

We briefly indicate how this approach extends to the problem with 𝑚 > 1.
Assume that $\beta = 1$ and let $D_{m+1}$ be the $(m+1) \times (m+1)$ symmetric matrix whose elements are determined from the following formula:

$$D_{jk} = d_0 d_{k-j} + d_1 d_{k-j+1} + \ldots + d_{j-1} d_{k-1}, \qquad k \geq j$$

Let $I_{m+1}$ be the $(m+1) \times (m+1)$ identity matrix.
Let $\phi_j$ be the coefficients in the expansion $\phi(L) = h + d(L^{-1}) d(L)$.
Then the first order conditions (4) and (5) can be expressed as:

$$(D_{m+1} + h I_{m+1}) \begin{bmatrix} y_N \\ y_{N-1} \\ \vdots \\ y_{N-m} \end{bmatrix} = \begin{bmatrix} a_N \\ a_{N-1} \\ \vdots \\ a_{N-m} \end{bmatrix} + M \begin{bmatrix} y_{N-m-1} \\ y_{N-m-2} \\ \vdots \\ y_{N-2m} \end{bmatrix}$$

where $M$ is $(m+1) \times m$ and

$$M_{ij} = \begin{cases} D_{i-j,\, m+1} & \text{for } i > j \\ 0 & \text{for } i \leq j \end{cases}$$

The remaining first-order conditions are

$$\begin{aligned}
\phi_m y_{N-1} + \phi_{m-1} y_{N-2} + \ldots + \phi_0 y_{N-m-1} + \phi_1 y_{N-m-2} + \ldots + \phi_m y_{N-2m-1} &= a_{N-m-1} \\
\phi_m y_{N-2} + \phi_{m-1} y_{N-3} + \ldots + \phi_0 y_{N-m-2} + \phi_1 y_{N-m-3} + \ldots + \phi_m y_{N-2m-2} &= a_{N-m-2} \\
&\vdots \\
\phi_m y_{m+1} + \phi_{m-1} y_m + \ldots + \phi_0 y_1 + \phi_1 y_0 + \ldots + \phi_m y_{-m+1} &= a_1 \\
\phi_m y_m + \phi_{m-1} y_{m-1} + \phi_{m-2} y_{m-2} + \ldots + \phi_0 y_0 + \phi_1 y_{-1} + \ldots + \phi_m y_{-m} &= a_0
\end{aligned}$$

As before, we can express this equation as 𝑊 𝑦 ̄ = 𝑎.̄


The matrix on the left of this equation is “almost” Toeplitz, the exception being the leading
𝑚 × 𝑚 submatrix in the upper left-hand corner.

We can represent the solution in feedback-feedforward form by obtaining a decomposition $LU = W$ and obtain

$$U \bar{y} = L^{-1} \bar{a} \qquad (11)$$

$$\sum_{j=0}^{t} U_{-t+N+1,\, -t+N+j+1} \, y_{t-j} = \sum_{j=0}^{N-t} L^{-1}_{-t+N+1,\, -t+N+1-j} \, \bar{a}_{t+j}, \qquad t = 0, 1, \ldots, N$$

where $L^{-1}_{t,s}$ is the element in the $(t, s)$ position of $L^{-1}$, and similarly for $U$.

The left side of equation (11) is the “feedback” part of the optimal control law for 𝑦𝑡 , while
the right-hand side is the “feedforward” part.
We note that there is a different control law for each 𝑡.
Thus, in the finite horizon case, the optimal control law is time-dependent.
It is natural to suspect that as $N \to \infty$, (11) becomes equivalent to the solution of our infinite horizon problem, which below we shall show can be expressed as

$$c(L) y_t = c(\beta L^{-1})^{-1} a_t,$$

so that as $N \to \infty$ we expect that for each fixed $t$, $U_{t,t-j} \to c_j$ and $L^{-1}_{t,t+j}$ approaches the coefficient on $L^{-j}$ in the expansion of $c(\beta L^{-1})^{-1}$.
This suspicion is true under general conditions that we shall study later.
For now, we note that by creating the matrix $W$ for large $N$ and factoring it into the $LU$ form, good approximations to $c(L)$ and $c(\beta L^{-1})^{-1}$ can be obtained.
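Continuing the $m = 1$ sketch above, scipy's LU routine makes this experiment easy. One caveat: scipy.linalg.lu returns $W = PLU$ with a unit-diagonal $L$ and a permutation $P$, so we renormalize to the text's convention of a unit-diagonal $U$ (for this $W$ the pivoting is typically trivial):

import numpy as np
import scipy.linalg as la

P, L, U = la.lu(W)                   # W and a_bar from the previous sketch

dU = np.diag(U)
U_unit = U / dU[:, None]             # unit-diagonal U, as in the text
L_scaled = (P @ L) * dU[None, :]     # so that W = L_scaled @ U_unit

# Recover ȳ with one forward and one backward solve, as in (10)
y_bar_lu = np.linalg.solve(U_unit, np.linalg.solve(L_scaled, a_bar))
print(np.allclose(y_bar_lu, np.linalg.solve(W, a_bar)))   # True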

30.5 The Infinite Horizon Limit

For the infinite horizon problem, we propose to discover first-order necessary conditions by
taking the limits of (4) and (5) as 𝑁 → ∞.
This approach is valid, and the limits of (4) and (5) as 𝑁 approaches infinity are first-order
necessary conditions for a maximum.
However, for the infinite horizon problem with 𝛽 < 1, the limits of (4) and (5) are, in general,
not sufficient for a maximum.
That is, the limits of (5) do not provide enough information uniquely to determine the solu-
tion of the Euler equation (4) that maximizes (1).
As we shall see below, a side condition on the path of 𝑦𝑡 that together with (4) is sufficient
for an optimum is


∑ 𝛽 𝑡 ℎ𝑦𝑡2 < ∞ (12)
𝑡=0

All paths that satisfy the Euler equations, except the one that we shall select below, violate

this condition and, therefore, evidently lead to (much) lower values of (1) than does the opti-
mal path selected by the solution procedure below.
Consider the characteristic equation associated with the Euler equation

$$h + d(\beta z^{-1}) d(z) = 0 \qquad (13)$$

Notice that if $\tilde{z}$ is a root of equation (13), then so is $\beta \tilde{z}^{-1}$.
Thus, the roots of (13) come in “$\beta$-reciprocal” pairs.
Assume that the roots of (13) are distinct.
Let the roots be, in descending order according to their moduli, $z_1, z_2, \ldots, z_{2m}$.
From the reciprocal pairs property and the assumption of distinct roots, it follows that $|z_j| > \sqrt{\beta}$ for $j \leq m$ and $|z_j| < \sqrt{\beta}$ for $j > m$.
It also follows that $z_{2m-j} = \beta z_{j+1}^{-1}$, $j = 0, 1, \ldots, m-1$.
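These pairing properties are easy to confirm numerically in the $m = 1$ case, since multiplying (13) by $z$ gives the quadratic $d_0 d_1 z^2 + (h + d_0^2 + \beta d_1^2) z + \beta d_0 d_1 = 0$, whose two roots multiply to $\beta$ (the parameter values below are arbitrary):

import numpy as np

h, β, d0, d1 = 1.0, 0.95, 1.0, -0.5          # arbitrary m = 1 example

coeffs = [d0 * d1, h + d0**2 + β * d1**2, β * d0 * d1]
z1, z2 = np.roots(coeffs)
print(z1 * z2)   # equals β: each root is β times the reciprocal of the other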
Therefore, the characteristic polynomial on the left side of (13) can be expressed as

$$\begin{aligned}
h + d(\beta z^{-1}) d(z) &= z^{-m} z_0 (z - z_1) \cdots (z - z_m)(z - z_{m+1}) \cdots (z - z_{2m}) \\
&= z^{-m} z_0 (z - z_1)(z - z_2) \cdots (z - z_m)(z - \beta z_m^{-1}) \cdots (z - \beta z_2^{-1})(z - \beta z_1^{-1})
\end{aligned} \qquad (14)$$

where 𝑧0 is a constant.
In (14), we substitute $(z - z_j) = -z_j (1 - \frac{1}{z_j} z)$ and $(z - \beta z_j^{-1}) = z (1 - \frac{\beta}{z_j} z^{-1})$ for $j = 1, \ldots, m$ to get

$$h + d(\beta z^{-1}) d(z) = (-1)^m (z_0 z_1 \cdots z_m) \left(1 - \frac{1}{z_1} z\right) \cdots \left(1 - \frac{1}{z_m} z\right) \left(1 - \frac{\beta}{z_1} z^{-1}\right) \cdots \left(1 - \frac{\beta}{z_m} z^{-1}\right)$$

Now define $c(z) = \sum_{j=0}^{m} c_j z^j$ as

$$c(z) = \left[ (-1)^m z_0 z_1 \cdots z_m \right]^{1/2} \left(1 - \frac{z}{z_1}\right) \left(1 - \frac{z}{z_2}\right) \cdots \left(1 - \frac{z}{z_m}\right) \qquad (15)$$

Notice that (14) can be written

$$h + d(\beta z^{-1}) d(z) = c(\beta z^{-1}) c(z) \qquad (16)$$

It is useful to write (15) as

$$c(z) = c_0 (1 - \lambda_1 z) \cdots (1 - \lambda_m z) \qquad (17)$$

where

$$c_0 = \left[ (-1)^m z_0 z_1 \cdots z_m \right]^{1/2}; \qquad \lambda_j = \frac{1}{z_j}, \; j = 1, \ldots, m$$

Since $|z_j| > \sqrt{\beta}$ for $j = 1, \ldots, m$, it follows that $|\lambda_j| < 1/\sqrt{\beta}$ for $j = 1, \ldots, m$.

Using (17), we can express the factorization (16) as

$$h + d(\beta z^{-1}) d(z) = c_0^2 (1 - \lambda_1 z) \cdots (1 - \lambda_m z)(1 - \lambda_1 \beta z^{-1}) \cdots (1 - \lambda_m \beta z^{-1})$$

In sum, we have constructed a factorization (16) of the characteristic polynomial for the Euler equation in which the zeros of $c(z)$ exceed $\beta^{1/2}$ in modulus, and the zeros of $c(\beta z^{-1})$ are less than $\beta^{1/2}$ in modulus.
Using (16), we now write the Euler equation as

$$c(\beta L^{-1}) c(L) y_t = a_t$$

The unique solution of the Euler equation that satisfies condition (12) is

$$c(L) y_t = c(\beta L^{-1})^{-1} a_t \qquad (18)$$

This can be established by using an argument paralleling that in chapter IX of [59].


To exhibit the solution in a form paralleling that of [59], we use (17) to write (18) as

$$(1 - \lambda_1 L) \cdots (1 - \lambda_m L) y_t = \frac{c_0^{-2} a_t}{(1 - \beta \lambda_1 L^{-1}) \cdots (1 - \beta \lambda_m L^{-1})} \qquad (19)$$

Using partial fractions, we can write the characteristic polynomial on the right side of (19) as

$$\sum_{j=1}^{m} \frac{A_j}{1 - \lambda_j \beta L^{-1}} \qquad \text{where} \quad A_j := \frac{c_0^{-2}}{\prod_{i \neq j} \left(1 - \frac{\lambda_i}{\lambda_j}\right)}$$

Then (19) can be written

$$(1 - \lambda_1 L) \cdots (1 - \lambda_m L) y_t = \sum_{j=1}^{m} \frac{A_j}{1 - \lambda_j \beta L^{-1}} a_t$$

or

$$(1 - \lambda_1 L) \cdots (1 - \lambda_m L) y_t = \sum_{j=1}^{m} A_j \sum_{k=0}^{\infty} (\lambda_j \beta)^k a_{t+k} \qquad (20)$$

Equation (20) expresses the optimum sequence for 𝑦𝑡 in terms of 𝑚 lagged 𝑦’s, and 𝑚
weighted infinite geometric sums of future 𝑎𝑡 ’s.
Furthermore, (20) is the unique solution of the Euler equation that satisfies the initial condi-
tions and condition (12).
In effect, condition (12) compels us to solve the “unstable” roots of ℎ + 𝑑(𝛽𝑧 −1 )𝑑(𝑧) forward
(see [59]).
The step of factoring the polynomial $h + d(\beta z^{-1}) d(z)$ into $c(\beta z^{-1}) c(z)$, where the zeros of $c(z)$ all have modulus exceeding $\sqrt{\beta}$, is central to solving the problem.
We note two features of the solution (20)

• Since $|\lambda_j| < 1/\sqrt{\beta}$ for all $j$, it follows that $|\lambda_j \beta| < \sqrt{\beta}$.
• The assumption that $\{a_t\}$ is of exponential order less than $1/\sqrt{\beta}$ is sufficient to guarantee that the geometric sums of future $a_t$'s on the right side of (20) converge.
We immediately see that those sums will converge under the weaker condition that $\{a_t\}$ is of exponential order less than $\phi^{-1}$ where $\phi = \max \{\beta \lambda_i, i = 1, \ldots, m\}$.
Note that with $a_t$ identically zero, (20) implies that in general $|y_t|$ eventually grows exponentially at a rate given by $\max_i |\lambda_i|$.
The condition $\max_i |\lambda_i| < 1/\sqrt{\beta}$ guarantees that condition (12) is satisfied.
In fact, $\max_i |\lambda_i| < 1/\sqrt{\beta}$ is a necessary condition for (12) to hold.
Were (12) not satisfied, the objective function would diverge to −∞, implying that the 𝑦𝑡
path could not be optimal.
For example, with $a_t = 0$ for all $t$, it is easy to describe a naive (nonoptimal) policy for $\{y_t, t \geq 0\}$ that gives a finite value of (1).
We can simply let $y_t = 0$ for $t \geq 0$.
This policy involves at most $m$ nonzero values of $h y_t^2$ and $[d(L) y_t]^2$, and so yields a finite value of (1).
Therefore it is easy to dominate a path that violates (12).

30.6 Undiscounted Problems

It is worthwhile focusing on a special case of the LQ problems above: the undiscounted prob-
lem that emerges when 𝛽 = 1.
In this case, the Euler equation is

$$\left( h + d(L^{-1}) d(L) \right) y_t = a_t$$

The factorization of the characteristic polynomial (16) becomes

$$\left( h + d(z^{-1}) d(z) \right) = c(z^{-1}) c(z)$$

where

$$c(z) = c_0 (1 - \lambda_1 z) \cdots (1 - \lambda_m z)$$
$$c_0 = \left[ (-1)^m z_0 z_1 \cdots z_m \right]^{1/2}$$
$$|\lambda_j| < 1 \ \text{for} \ j = 1, \ldots, m$$
$$\lambda_j = \frac{1}{z_j} \ \text{for} \ j = 1, \ldots, m$$
$$z_0 = \text{constant}$$

The solution of the problem becomes

$$(1 - \lambda_1 L) \cdots (1 - \lambda_m L) y_t = \sum_{j=1}^{m} A_j \sum_{k=0}^{\infty} \lambda_j^k a_{t+k}$$

30.6.1 Transforming Discounted to Undiscounted Problem

Discounted problems can always be converted into undiscounted problems via a simple trans-
formation.
Consider problem (1) with 0 < 𝛽 < 1.
Define the transformed variables

$$\tilde{a}_t = \beta^{t/2} a_t, \qquad \tilde{y}_t = \beta^{t/2} y_t \qquad (21)$$

Then notice that $\beta^t [d(L) y_t]^2 = [\tilde{d}(L) \tilde{y}_t]^2$ with $\tilde{d}(L) = \sum_{j=0}^{m} \tilde{d}_j L^j$ and $\tilde{d}_j = \beta^{j/2} d_j$.
Then the original criterion function (1) is equivalent to

$$\lim_{N \to \infty} \sum_{t=0}^{N} \left\{ \tilde{a}_t \tilde{y}_t - \frac{1}{2} h \tilde{y}_t^2 - \frac{1}{2} [\tilde{d}(L) \tilde{y}_t]^2 \right\} \qquad (22)$$

which is to be maximized over sequences $\{\tilde{y}_t, t = 0, \ldots\}$ subject to $\tilde{y}_{-1}, \ldots, \tilde{y}_{-m}$ given and $\{\tilde{a}_t, t = 1, \ldots\}$ a known bounded sequence.
The Euler equation for this problem is $[h + \tilde{d}(L^{-1}) \tilde{d}(L)] \tilde{y}_t = \tilde{a}_t$.
The solution is

$$(1 - \tilde{\lambda}_1 L) \cdots (1 - \tilde{\lambda}_m L) \tilde{y}_t = \sum_{j=1}^{m} \tilde{A}_j \sum_{k=0}^{\infty} \tilde{\lambda}_j^k \tilde{a}_{t+k}$$

or

$$\tilde{y}_t = \tilde{f}_1 \tilde{y}_{t-1} + \cdots + \tilde{f}_m \tilde{y}_{t-m} + \sum_{j=1}^{m} \tilde{A}_j \sum_{k=0}^{\infty} \tilde{\lambda}_j^k \tilde{a}_{t+k} \qquad (23)$$

where $\tilde{c}(z^{-1}) \tilde{c}(z) = h + \tilde{d}(z^{-1}) \tilde{d}(z)$, and where

$$\left[ (-1)^m \tilde{z}_0 \tilde{z}_1 \cdots \tilde{z}_m \right]^{1/2} (1 - \tilde{\lambda}_1 z) \cdots (1 - \tilde{\lambda}_m z) = \tilde{c}(z), \qquad \text{where } |\tilde{\lambda}_j| < 1$$

We leave it to the reader to show that (23) implies the equivalent form of the solution

$$y_t = f_1 y_{t-1} + \cdots + f_m y_{t-m} + \sum_{j=1}^{m} A_j \sum_{k=0}^{\infty} (\lambda_j \beta)^k a_{t+k}$$

where

$$f_j = \tilde{f}_j \beta^{-j/2}, \qquad A_j = \tilde{A}_j, \qquad \lambda_j = \tilde{\lambda}_j \beta^{-1/2} \qquad (24)$$



The transformations (21) and the inverse formulas (24) allow us to solve a discounted prob-
lem by first solving a related undiscounted problem.
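For the $m = 1$ case, the following sketch checks the mapping (24) between the $\lambda$'s of the discounted problem and of its transformed undiscounted counterpart (parameters arbitrary):

import numpy as np

h, β = 1.0, 0.95
d0, d1 = 1.0, -0.5                                # arbitrary d(L) = d_0 + d_1 L

def lam(d0_, d1_, h_, β_):
    # λ = 1/z1, with z1 the larger-modulus root of z·(h + d(βz⁻¹)d(z)) = 0
    roots = np.roots([d0_ * d1_, h_ + d0_**2 + β_ * d1_**2, β_ * d0_ * d1_])
    z1 = roots[np.argmax(np.abs(roots))]
    return 1 / z1

λ_undisc = lam(d0, β**0.5 * d1, h, 1.0)           # undiscounted, with d̃_j = β^{j/2} d_j
λ_disc = lam(d0, d1, h, β)                        # original discounted problem
print(λ_disc, λ_undisc * β**(-0.5))               # the two agree, per (24)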

30.7 Implementation

Here’s the code that computes solutions to the LQ problem using the methods described
above.

In [2]: import numpy as np


import scipy.stats as spst
import scipy.linalg as la

class LQFilter:

def __init__(self, d, h, y_m, r=None, h_eps=None, β=None):


"""

Parameters
----------
d : list or numpy.array (1-D or a 2-D column vector)
The order of the coefficients: [d_0, d_1, ..., d_m]
h : scalar
Parameter of the objective function (corresponding to the
quadratic term)
y_m : list or numpy.array (1-D or a 2-D column vector)
Initial conditions for y
r : list or numpy.array (1-D or a 2-D column vector)
The order of the coefficients: [r_0, r_1, ..., r_k]
(optional, if not defined -> deterministic problem)
β : scalar
Discount factor (optional, default value is one)
"""

self.h = h
self.d = np.asarray(d)
self.m = self.d.shape[0] - 1

self.y_m = np.asarray(y_m)

if self.m == self.y_m.shape[0]:
self.y_m = self.y_m.reshape(self.m, 1)
else:
            raise ValueError(f"y_m must be of length m = {self.m:d}")

#---------------------------------------------
# Define the coefficients of upfront
#---------------------------------------------
ϕ = np.zeros(2 * self.m + 1)
for i in range(- self.m, self.m + 1):
ϕ[self.m - i] = np.sum(np.diag(self.d.reshape(self.m + 1, 1) \
@ self.d.reshape(1, self.m + 1),
k=-i
)
)
ϕ[self.m] = ϕ[self.m] + self.h
self.ϕ = ϕ

#-----------------------------------------------------
# If r is given calculate the vector _r
#-----------------------------------------------------
if r is None:
pass
else:
self.r = np.asarray(r)
self.k = self.r.shape[0] - 1
ϕ_r = np.zeros(2 * self.k + 1)
for i in range(- self.k, self.k + 1):
ϕ_r[self.k - i] = np.sum(np.diag(self.r.reshape(self.k + 1, 1)
                                 @ self.r.reshape(1, self.k + 1),
                                 k=-i))
if h_eps is None:
self.ϕ_r = ϕ_r
else:
ϕ_r[self.k] = ϕ_r[self.k] + h_eps
self.ϕ_r = ϕ_r

#-----------------------------------------------------
# If β is given, define the transformed variables
#-----------------------------------------------------
if β is None:
self.β = 1
else:
self.β = β
self.d = self.β**(np.arange(self.m + 1)/2) * self.d
self.y_m = self.y_m * (self.β**(- np.arange(1, self.m + 1)/2)) \
.reshape(self.m, 1)

def construct_W_and_Wm(self, N):


"""
This constructs the matrices W and W_m for a given number of periods N
"""

m = self.m
d = self.d

W = np.zeros((N + 1, N + 1))
W_m = np.zeros((N + 1, m))

#---------------------------------------
# Terminal conditions
#---------------------------------------

D_m1 = np.zeros((m + 1, m + 1))


M = np.zeros((m + 1, m))

# (1) Construct the D_{m+1} matrix using the formula

for j in range(m + 1):


for k in range(j, m + 1):
D_m1[j, k] = d[:j + 1] @ d[k - j: k + 1]

# Make the matrix symmetric


D_m1 = D_m1 + D_m1.T - np.diag(np.diag(D_m1))

# (2) Construct the M matrix using the entries of D_m1

for j in range(m):
for i in range(j + 1, m + 1):
M[i, j] = D_m1[i - j - 1, m]

#----------------------------------------------
# Euler equations for t = 0, 1, ..., N-(m+1)
#----------------------------------------------
ϕ = self.ϕ

W[:(m + 1), :(m + 1)] = D_m1 + self.h * np.eye(m + 1)


W[:(m + 1), (m + 1):(2 * m + 1)] = M

for i, row in enumerate(np.arange(m + 1, N + 1 - m)):


W[row, (i + 1):(2 * m + 2 + i)] = ϕ

for i in range(1, m + 1):


W[N - m + i, -(2 * m + 1 - i):] = ϕ[:-i]

for i in range(m):
W_m[N - i, :(m - i)] = ϕ[(m + 1 + i):]

return W, W_m

def roots_of_characteristic(self):
"""
This function calculates z_0 and the 2m roots of the characteristic
equation associated with the Euler equation (1.7)

Note:
------
numpy.poly1d(roots, True) defines a polynomial using its roots that can
be evaluated at any point. If x_1, x_2, ... , x_m are the roots then
p(x) = (x - x_1)(x - x_2)...(x - x_m)
"""
m = self.m
ϕ = self.ϕ

# Calculate the roots of the 2m-polynomial


roots = np.roots(ϕ)
# Sort the roots according to their length (in descending order)
roots_sorted = roots[np.argsort(abs(roots))[::-1]]

z_0 = ϕ.sum() / np.poly1d(roots, True)(1)


z_1_to_m = roots_sorted[:m]  # We need only those outside the unit circle

λ = 1 / z_1_to_m

return z_1_to_m, z_0, λ

def coeffs_of_c(self):
'''

This function computes the coefficients {c_j, j = 0, 1, ..., m} for


c(z) = sum_{j = 0}^{m} c_j z^j

Based on the expression (1.9). The order is


c_coeffs = [c_0, c_1, ..., c_{m-1}, c_m]
'''
z_1_to_m, z_0 = self.roots_of_characteristic()[:2]

c_0 = (z_0 * np.prod(z_1_to_m).real * (- 1)**self.m)**(.5)


c_coeffs = np.poly1d(z_1_to_m, True).c * z_0 / c_0

return c_coeffs[::-1]

def solution(self):
"""
This function calculates {λ_j, j=1,...,m} and {A_j, j=1,...,m}
of the expression (1.15)
"""
λ = self.roots_of_characteristic()[2]
c_0 = self.coeffs_of_c()[-1]

A = np.zeros(self.m, dtype=complex)
for j in range(self.m):
denom = 1 - λ/λ[j]
A[j] = c_0**(-2) / np.prod(denom[np.arange(self.m) != j])

return λ, A

def construct_V(self, N):


'''
This function constructs the covariance matrix for x^N (see section 6)
for a given period N
'''
V = np.zeros((N, N))
ϕ_r = self.ϕ_r

for i in range(N):
for j in range(N):
if abs(i-j) <= self.k:
V[i, j] = ϕ_r[self.k + abs(i-j)]

return V

def simulate_a(self, N):


"""
Assuming that the u's are normal, this method draws a random path
for x^N
"""
V = self.construct_V(N + 1)
d = spst.multivariate_normal(np.zeros(N + 1), V)

return d.rvs()

def predict(self, a_hist, t):


"""
This function implements the prediction formula discussed in section 6 (1.59)

It takes a realization for a^N, and the period in which the prediction is
formed

Output: E[abar | a_t, a_{t-1}, ..., a_1, a_0]


"""

N = np.asarray(a_hist).shape[0] - 1
a_hist = np.asarray(a_hist).reshape(N + 1, 1)
V = self.construct_V(N + 1)

aux_matrix = np.zeros((N + 1, N + 1))


aux_matrix[:(t + 1), :(t + 1)] = np.eye(t + 1)
L = la.cholesky(V).T
Ea_hist = la.inv(L) @ aux_matrix @ L @ a_hist

return Ea_hist

def optimal_y(self, a_hist, t=None):


"""
- if t is NOT given it takes a_hist (list or numpy.array) as a
deterministic a_t
- if t is given, it solves the combined control prediction problem
(section 7)(by default, t == None -> deterministic)

for a given sequence of a_t (either deterministic or a particular


realization), it calculates the optimal y_t sequence using the method
of the lecture

Note:
------
scipy.linalg.lu normalizes L, U so that L has unit diagonal elements
To make things consistent with the lecture, we need an auxiliary
diagonal matrix D which renormalizes L and U
"""

N = np.asarray(a_hist).shape[0] - 1
W, W_m = self.construct_W_and_Wm(N)

L, U = la.lu(W, permute_l=True)
D = np.diag(1 / np.diag(U))
U = D @ U
L = L @ np.diag(1 / np.diag(D))

J = np.fliplr(np.eye(N + 1))

if t is None: # If the problem is deterministic

a_hist = J @ np.asarray(a_hist).reshape(N + 1, 1)

#--------------------------------------------
# Transform the 'a' sequence if β is given
#--------------------------------------------
if self.β != 1:
a_hist = a_hist * (self.β**(np.arange(N + 1) / 2))[::-1] \
.reshape(N + 1, 1)

a_bar = a_hist - W_m @ self.y_m # a_bar from the lecture



Uy = np.linalg.solve(L, a_bar)  # U @ y_bar = L^{-1} a_bar


y_bar = np.linalg.solve(U, Uy)  # y_bar = U^{-1} L^{-1} a_bar

# Reverse the order of y_bar with the matrix J


J = np.fliplr(np.eye(N + self.m + 1))
# y_hist : concatenated y_m and y_bar
y_hist = J @ np.vstack([y_bar, self.y_m])

#--------------------------------------------
# Transform the optimal sequence back if β is given
#--------------------------------------------
if self.β != 1:
y_hist = y_hist * (self.β**(- np.arange(-self.m, N + 1)/2)) \
.reshape(N + 1 + self.m, 1)

return y_hist, L, U, y_bar

else: # If the problem is stochastic and we look at it

Ea_hist = self.predict(a_hist, t).reshape(N + 1, 1)


Ea_hist = J @ Ea_hist

a_bar = Ea_hist - W_m @ self.y_m # a_bar from the lecture


Uy = np.linalg.solve(L, a_bar)  # U @ y_bar = L^{-1} a_bar
y_bar = np.linalg.solve(U, Uy)  # y_bar = U^{-1} L^{-1} a_bar

# Reverse the order of y_bar with the matrix J


J = np.fliplr(np.eye(N + self.m + 1))
# y_hist : concatenated y_m and y_bar
y_hist = J @ np.vstack([y_bar, self.y_m])

return y_hist, L, U, y_bar

30.7.1 Example

In this application, we’ll have one lag, with

𝑑(𝐿)𝑦𝑡 = 𝛾(𝐼 − 𝐿)𝑦𝑡 = 𝛾(𝑦𝑡 − 𝑦𝑡−1 )

Suppose for the moment that 𝛾 = 0.


Then the intertemporal component of the LQ problem disappears, and the agent simply
wants to maximize 𝑎𝑡𝑦𝑡 − ℎ𝑦𝑡²/2 in each period.
This means that the agent chooses 𝑦𝑡 = 𝑎𝑡 /ℎ.
In the following we’ll set ℎ = 1, so that the agent just wants to track the {𝑎𝑡 } process.
However, as we increase 𝛾, the agent gives greater weight to a smooth time path.
Hence {𝑦𝑡 } evolves as a smoothed version of {𝑎𝑡 }.
The {𝑎𝑡 } sequence we’ll choose as a stationary cyclic process plus some white noise.
Here’s some code that generates a plot when 𝛾 = 0.8

In [3]: # Set seed and generate a_t sequence


np.random.seed(123)

n = 100
a_seq = np.sin(np.linspace(0, 5 * np.pi, n)) + 2 + 0.1 * np.random.randn(n)

def plot_simulation(γ=0.8, m=1, h=1, y_m=2):

d = γ * np.asarray([1, -1])
y_m = np.asarray(y_m).reshape(m, 1)

testlq = LQFilter(d, h, y_m)


y_hist, L, U, y = testlq.optimal_y(a_seq)
y = y[::-1] # Reverse y

# Plot simulation results

fig, ax = plt.subplots(figsize=(10, 6))


p_args = {'lw' : 2, 'alpha' : 0.6}
time = range(len(y))
ax.plot(time, a_seq / h, 'k-o', ms=4, lw=2, alpha=0.6, label='$a_t$')
ax.plot(time, y, 'b-o', ms=4, lw=2, alpha=0.6, label='$y_t$')
ax.set(title=rf'Dynamics with $\gamma = {γ}$',
xlabel='Time',
xlim=(0, max(time))
)
ax.legend()
ax.grid()
plt.show()

plot_simulation()

Here’s what happens when we change 𝛾 to 5.0

In [4]: plot_simulation(γ=5)

And here’s 𝛾 = 10

In [5]: plot_simulation(γ=10)

30.8 Exercises

30.8.1 Exercise 1

Consider solving a discounted version (𝛽 < 1) of problem (1), as follows.


Convert (1) to the undiscounted problem (22).
Let the solution of (22) in feedback form be

(1 − 𝜆̃1𝐿) ⋯ (1 − 𝜆̃𝑚𝐿)𝑦̃𝑡 = ∑_{𝑗=1}^{𝑚} 𝐴̃𝑗 ∑_{𝑘=0}^{∞} 𝜆̃𝑗^𝑘 𝑎̃𝑡+𝑘

or

𝑦̃𝑡 = 𝑓̃1𝑦̃𝑡−1 + ⋯ + 𝑓̃𝑚𝑦̃𝑡−𝑚 + ∑_{𝑗=1}^{𝑚} 𝐴̃𝑗 ∑_{𝑘=0}^{∞} 𝜆̃𝑗^𝑘 𝑎̃𝑡+𝑘    (25)

Here
• ℎ + 𝑑̃(𝑧⁻¹)𝑑̃(𝑧) = 𝑐̃(𝑧⁻¹)𝑐̃(𝑧)
• 𝑐̃(𝑧) = [(−1)^𝑚 𝑧̃0𝑧̃1 ⋯ 𝑧̃𝑚]^{1/2} (1 − 𝜆̃1𝑧) ⋯ (1 − 𝜆̃𝑚𝑧)
where the 𝑧̃𝑗 are the zeros of ℎ + 𝑑̃(𝑧⁻¹)𝑑̃(𝑧).

Prove that (25) implies that the solution for 𝑦𝑡 in feedback form is

𝑦𝑡 = 𝑓1𝑦𝑡−1 + … + 𝑓𝑚𝑦𝑡−𝑚 + ∑_{𝑗=1}^{𝑚} 𝐴𝑗 ∑_{𝑘=0}^{∞} 𝛽^𝑘𝜆𝑗^𝑘 𝑎𝑡+𝑘

where 𝑓𝑗 = 𝑓̃𝑗 𝛽^{−𝑗/2}, 𝐴𝑗 = 𝐴̃𝑗, and 𝜆𝑗 = 𝜆̃𝑗 𝛽^{−1/2}.

30.8.2 Exercise 2

Solve the optimal control problem, maximize

∑_{𝑡=0}^{2} {𝑎𝑡𝑦𝑡 − (1/2)[(1 − 2𝐿)𝑦𝑡]²}

subject to 𝑦−1 given, and {𝑎𝑡 } a known bounded sequence.


Express the solution in the “feedback form” (20), giving numerical values for the coefficients.
Make sure that the boundary conditions (5) are satisfied.
(Note: this problem differs from the problem in the text in one important way: instead of
ℎ > 0 in (1), ℎ = 0. This has an important influence on the solution.)

30.8.3 Exercise 3

Solve the infinite time-optimal control problem to maximize



lim_{𝑁→∞} ∑_{𝑡=0}^{𝑁} −(1/2)[(1 − 2𝐿)𝑦𝑡]²,

subject to 𝑦−1 given. Prove that the solution is

𝑦𝑡 = 2𝑦𝑡−1 = 2^{𝑡+1}𝑦−1    for 𝑡 > 0

30.8.4 Exercise 4

Solve the infinite time problem, to maximize

lim_{𝑁→∞} ∑_{𝑡=0}^{𝑁} {(.0000001) 𝑦𝑡² − (1/2)[(1 − 2𝐿)𝑦𝑡]²}

subject to 𝑦−1 given. Prove that the solution 𝑦𝑡 = 2𝑦𝑡−1 violates condition (12), and so is not
optimal.
Prove that the optimal solution is approximately 𝑦𝑡 = .5𝑦𝑡−1 .
Chapter 31

Classical Prediction and Filtering With Linear Algebra

31.1 Contents

• Overview 31.2
• Finite Dimensional Prediction 31.3
• Combined Finite Dimensional Control and Prediction 31.4
• Infinite Horizon Prediction and Filtering Problems 31.5
• Exercises 31.6

31.2 Overview

This is a sequel to the earlier lecture Classical Control with Linear Algebra.
That lecture used linear algebra – in particular, the LU decomposition – to formulate and
solve a class of linear-quadratic optimal control problems.
In this lecture, we’ll be using a closely related decomposition, the Cholesky decomposition, to
solve linear prediction and filtering problems.
We exploit the useful fact that there is an intimate connection between two superficially dif-
ferent classes of problems:
• deterministic linear-quadratic (LQ) optimal control problems
• linear least squares prediction and filtering problems
The first class of problems involves no randomness, while the second is all about randomness.
Nevertheless, essentially the same mathematics solves both types of problem.
This connection, which is often termed “duality,” is present whether one uses “classical” or
“recursive” solution procedures.
In fact, we saw duality at work earlier when we formulated control and prediction problems
recursively in lectures LQ dynamic programming problems, A first look at the Kalman filter,
and The permanent income model.
A useful consequence of duality is that
• With every LQ control problem, there is implicitly affiliated a linear least squares
prediction or filtering problem.
• With every linear least squares prediction or filtering problem there is implicitly
affiliated an LQ control problem.
An understanding of these connections has repeatedly proved useful in cracking interesting
applied problems.
For example, Sargent [59] [chs. IX, XIV] and Hansen and Sargent [27] formulated and solved
control and filtering problems using 𝑧-transform methods.
In this lecture, we begin to investigate these ideas by using mostly elementary linear algebra.
This is the main purpose and focus of the lecture.
However, after showing matrix algebra formulas, we’ll summarize classic infinite-horizon for-
mulas built on 𝑧-transform and lag operator methods.
And we’ll occasionally refer to some of these formulas from the infinite dimensional problems
as we present the finite time formulas and associated linear algebra.
We’ll start with the following standard import:

In [1]: import numpy as np

31.2.1 References

Useful references include [68], [27], [49], [5], and [48].

31.3 Finite Dimensional Prediction

Let (𝑥1, 𝑥2, … , 𝑥𝑇)′ = 𝑥 be a 𝑇 × 1 vector of random variables with mean 𝔼𝑥 = 0 and
covariance matrix 𝔼𝑥𝑥′ = 𝑉.
Here 𝑉 is a 𝑇 × 𝑇 positive definite matrix.
The 𝑖, 𝑗 component 𝐸𝑥𝑖 𝑥𝑗 of 𝑉 is the inner product between 𝑥𝑖 and 𝑥𝑗 .
We regard the random variables as being ordered in time so that 𝑥𝑡 is thought of as the value
of some economic variable at time 𝑡.
For example, 𝑥𝑡 could be generated by the random process described by the Wold represen-
tation presented in equation (16) in the section below on infinite dimensional prediction and
filtering.
In that case, 𝑉𝑖𝑗 is given by the coefficient on 𝑧^{|𝑖−𝑗|} in the expansion of
𝑔𝑥(𝑧) = 𝑑(𝑧)𝑑(𝑧⁻¹) + ℎ, which equals ℎ + ∑_{𝑘=0}^{∞} 𝑑𝑘𝑑𝑘+|𝑖−𝑗|.
We want to construct 𝑗 step ahead linear least squares predictors of the form

𝔼̂ [𝑥𝑇 |𝑥𝑇 −𝑗 , 𝑥𝑇 −𝑗+1 , … , 𝑥1 ]

where 𝔼̂ is the linear least squares projection operator.


(Sometimes 𝔼̂ is called the wide-sense expectations operator)
To find linear least squares predictors it is helpful first to construct a 𝑇 ×1 vector 𝜀 of random
variables that form an orthonormal basis for the vector of random variables 𝑥.

The key insight here comes from noting that because the covariance matrix 𝑉 of 𝑥 is
positive definite and symmetric, there exists a (Cholesky) decomposition of 𝑉 such that

𝑉 = 𝐿⁻¹(𝐿⁻¹)′

and

𝐿𝑉𝐿′ = 𝐼

where 𝐿 and 𝐿⁻¹ are both lower triangular.


Form the 𝑇 × 1 random vector 𝜀 = 𝐿𝑥.
The random vector 𝜀 is an orthonormal basis for 𝑥 because
• 𝐿 is nonsingular
• 𝔼 𝜀 𝜀′ = 𝐿𝔼𝑥𝑥′ 𝐿′ = 𝐼
• 𝑥 = 𝐿−1 𝜀
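A quick numerical check of these properties, with a made-up positive definite 𝑉 (all
numbers below are illustrative, not part of the original text):

import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((4, 4))
V = A @ A.T + 4 * np.eye(4)      # a symmetric positive definite "covariance matrix"

Li = np.linalg.cholesky(V)       # lower triangular L^{-1}, with V = L^{-1}(L^{-1})'
L = np.linalg.inv(Li)

print(np.allclose(L @ V @ L.T, np.eye(4)))   # E εε' = L V L' = I  -> True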
It is enlightening to write out and interpret the equations 𝐿𝑥 = 𝜀 and 𝐿−1 𝜀 = 𝑥.
First, we’ll write 𝐿𝑥 = 𝜀

𝐿11𝑥1 = 𝜀1
𝐿21𝑥1 + 𝐿22𝑥2 = 𝜀2
⋮                                     (1)
𝐿𝑇1𝑥1 + ⋯ + 𝐿𝑇𝑇𝑥𝑇 = 𝜀𝑇

or

∑_{𝑗=0}^{𝑡−1} 𝐿𝑡,𝑡−𝑗 𝑥𝑡−𝑗 = 𝜀𝑡 ,    𝑡 = 1, 2, … , 𝑇    (2)

Next, we write 𝐿−1 𝜀 = 𝑥

𝑥1 = 𝐿⁻¹_{1,1} 𝜀1
𝑥2 = 𝐿⁻¹_{2,2} 𝜀2 + 𝐿⁻¹_{2,1} 𝜀1
⋮                                     (3)
𝑥𝑇 = 𝐿⁻¹_{𝑇,𝑇} 𝜀𝑇 + 𝐿⁻¹_{𝑇,𝑇−1} 𝜀𝑇−1 + ⋯ + 𝐿⁻¹_{𝑇,1} 𝜀1

or

𝑥𝑡 = ∑_{𝑗=0}^{𝑡−1} 𝐿⁻¹_{𝑡,𝑡−𝑗} 𝜀𝑡−𝑗    (4)

where 𝐿⁻¹_{𝑖,𝑗} denotes the 𝑖, 𝑗 element of 𝐿⁻¹.

From (2), it follows that 𝜀𝑡 is in the linear subspace spanned by 𝑥𝑡 , 𝑥𝑡−1 , … , 𝑥1 .


From (4) it follows that 𝑥𝑡 is in the linear subspace spanned by 𝜀𝑡, 𝜀𝑡−1, … , 𝜀1.

Equation (2) forms a sequence of autoregressions that for 𝑡 = 1, … , 𝑇 express 𝑥𝑡 as linear
functions of 𝑥𝑠, 𝑠 = 1, … , 𝑡 − 1 and a random variable (𝐿𝑡,𝑡)⁻¹𝜀𝑡 that is orthogonal to each
component of 𝑥𝑠, 𝑠 = 1, … , 𝑡 − 1.
(Here (𝐿𝑡,𝑡)⁻¹ denotes the reciprocal of 𝐿𝑡,𝑡 while 𝐿⁻¹_{𝑡,𝑡} denotes the 𝑡, 𝑡 element of 𝐿⁻¹.)

The equivalence of the subspaces spanned by 𝜀𝑡, … , 𝜀1 and 𝑥𝑡, … , 𝑥1 means that for
𝑡 − 1 ≥ 𝑚 ≥ 1

𝔼̂[𝑥𝑡 ∣ 𝑥𝑡−𝑚, 𝑥𝑡−𝑚−1, … , 𝑥1] = 𝔼̂[𝑥𝑡 ∣ 𝜀𝑡−𝑚, 𝜀𝑡−𝑚−1, … , 𝜀1]    (5)

To proceed, it is useful to drill down and note that for 𝑡 − 1 ≥ 𝑚 ≥ 1 we can rewrite (4) in
the form of the moving average representation

𝑥𝑡 = ∑_{𝑗=0}^{𝑚−1} 𝐿⁻¹_{𝑡,𝑡−𝑗} 𝜀𝑡−𝑗 + ∑_{𝑗=𝑚}^{𝑡−1} 𝐿⁻¹_{𝑡,𝑡−𝑗} 𝜀𝑡−𝑗    (6)

Representation (6) is an orthogonal decomposition of 𝑥𝑡 into a part ∑_{𝑗=𝑚}^{𝑡−1} 𝐿⁻¹_{𝑡,𝑡−𝑗}𝜀𝑡−𝑗
that lies in the space spanned by [𝑥𝑡−𝑚, 𝑥𝑡−𝑚+1, … , 𝑥1] and an orthogonal component
∑_{𝑗=0}^{𝑚−1} 𝐿⁻¹_{𝑡,𝑡−𝑗}𝜀𝑡−𝑗 that does not lie in that space but instead in a linear space
known as its orthogonal complement.
It follows that

𝔼̂[𝑥𝑡 ∣ 𝑥𝑡−𝑚, 𝑥𝑡−𝑚−1, … , 𝑥1] = ∑_{𝑗=𝑚}^{𝑡−1} 𝐿⁻¹_{𝑡,𝑡−𝑗} 𝜀𝑡−𝑗

31.3.1 Implementation

Here’s the code that computes solutions to LQ control and filtering problems using the
methods described here and in the lecture Classical Control with Linear Algebra.

In [2]: import numpy as np


import scipy.stats as spst
import scipy.linalg as la

class LQFilter:

def __init__(self, d, h, y_m, r=None, h_eps=None, β=None):


"""

Parameters
----------
d : list or numpy.array (1-D or a 2-D column vector)
The order of the coefficients: [d_0, d_1, ..., d_m]
h : scalar
Parameter of the objective function (corresponding to the
quadratic term)
y_m : list or numpy.array (1-D or a 2-D column vector)
Initial conditions for y
r : list or numpy.array (1-D or a 2-D column vector)
The order of the coefficients: [r_0, r_1, ..., r_k]

(optional, if not defined -> deterministic problem)


β : scalar
Discount factor (optional, default value is one)
"""

self.h = h
self.d = np.asarray(d)
self.m = self.d.shape[0] - 1

self.y_m = np.asarray(y_m)

if self.m == self.y_m.shape[0]:
self.y_m = self.y_m.reshape(self.m, 1)
else:
raise ValueError(f"y_m must be of length m = {self.m:d}")

#---------------------------------------------
# Define the coefficients of ϕ upfront
#---------------------------------------------
ϕ = np.zeros(2 * self.m + 1)
for i in range(- self.m, self.m + 1):
ϕ[self.m - i] = np.sum(np.diag(self.d.reshape(self.m + 1, 1) \
@ self.d.reshape(1, self.m + 1),
k=-i
)
)
ϕ[self.m] = ϕ[self.m] + self.h
self.ϕ = ϕ

#-----------------------------------------------------
# If r is given calculate the vector _r
#-----------------------------------------------------
if r is None:
pass
else:
self.r = np.asarray(r)
self.k = self.r.shape[0] - 1
ϕ_r = np.zeros(2 * self.k + 1)
for i in range(- self.k, self.k + 1):
ϕ_r[self.k - i] = np.sum(np.diag(self.r.reshape(self.k + 1, 1)
                                 @ self.r.reshape(1, self.k + 1),
                                 k=-i))
if h_eps is None:
self.ϕ_r = ϕ_r
else:
ϕ_r[self.k] = ϕ_r[self.k] + h_eps
self.ϕ_r = ϕ_r

#-----------------------------------------------------
# If β is given, define the transformed variables
#-----------------------------------------------------
if β is None:
self.β = 1
else:
self.β = β

self.d = self.β**(np.arange(self.m + 1)/2) * self.d


self.y_m = self.y_m * (self.β**(- np.arange(1, self.m + 1)/2)) \
.reshape(self.m, 1)

def construct_W_and_Wm(self, N):


"""
This constructs the matrices W and W_m for a given number of periods N
"""

m = self.m
d = self.d

W = np.zeros((N + 1, N + 1))
W_m = np.zeros((N + 1, m))

#---------------------------------------
# Terminal conditions
#---------------------------------------

D_m1 = np.zeros((m + 1, m + 1))


M = np.zeros((m + 1, m))

# (1) Construct the D_{m+1} matrix using the formula

for j in range(m + 1):


for k in range(j, m + 1):
D_m1[j, k] = d[:j + 1] @ d[k - j: k + 1]

# Make the matrix symmetric


D_m1 = D_m1 + D_m1.T - np.diag(np.diag(D_m1))

# (2) Construct the M matrix using the entries of D_m1

for j in range(m):
for i in range(j + 1, m + 1):
M[i, j] = D_m1[i - j - 1, m]

#----------------------------------------------
# Euler equations for t = 0, 1, ..., N-(m+1)
#----------------------------------------------
ϕ = self.ϕ

W[:(m + 1), :(m + 1)] = D_m1 + self.h * np.eye(m + 1)


W[:(m + 1), (m + 1):(2 * m + 1)] = M

for i, row in enumerate(np.arange(m + 1, N + 1 - m)):


W[row, (i + 1):(2 * m + 2 + i)] = ϕ

for i in range(1, m + 1):


W[N - m + i, -(2 * m + 1 - i):] = ϕ[:-i]

for i in range(m):
W_m[N - i, :(m - i)] = ϕ[(m + 1 + i):]

return W, W_m

def roots_of_characteristic(self):
"""
This function calculates z_0 and the 2m roots of the characteristic
equation associated with the Euler equation (1.7)

Note:
------
numpy.poly1d(roots, True) defines a polynomial using its roots that can
be evaluated at any point. If x_1, x_2, ... , x_m are the roots then
p(x) = (x - x_1)(x - x_2)...(x - x_m)
"""
m = self.m
ϕ = self.ϕ

# Calculate the roots of the 2m-polynomial


roots = np.roots(ϕ)
# Sort the roots according to their length (in descending order)
roots_sorted = roots[np.argsort(abs(roots))[::-1]]

z_0 = ϕ.sum() / np.poly1d(roots, True)(1)


z_1_to_m = roots_sorted[:m]  # We need only those outside the unit circle

λ = 1 / z_1_to_m

return z_1_to_m, z_0, λ

def coeffs_of_c(self):
'''
This function computes the coefficients {c_j, j = 0, 1, ..., m} for
c(z) = sum_{j = 0}^{m} c_j z^j

Based on the expression (1.9). The order is


c_coeffs = [c_0, c_1, ..., c_{m-1}, c_m]
'''
z_1_to_m, z_0 = self.roots_of_characteristic()[:2]

c_0 = (z_0 * np.prod(z_1_to_m).real * (- 1)**self.m)**(.5)


c_coeffs = np.poly1d(z_1_to_m, True).c * z_0 / c_0

return c_coeffs[::-1]

def solution(self):
"""
This function calculates {λ_j, j=1,...,m} and {A_j, j=1,...,m}
of the expression (1.15)
"""
λ = self.roots_of_characteristic()[2]
c_0 = self.coeffs_of_c()[-1]

A = np.zeros(self.m, dtype=complex)
for j in range(self.m):
denom = 1 - λ/λ[j]
A[j] = c_0**(-2) / np.prod(denom[np.arange(self.m) != j])

return λ, A

def construct_V(self, N):


'''
This function constructs the covariance matrix for x^N (see section 6)
for a given period N
'''
V = np.zeros((N, N))
ϕ_r = self.ϕ_r

for i in range(N):
for j in range(N):
if abs(i-j) <= self.k:
V[i, j] = ϕ_r[self.k + abs(i-j)]

return V

def simulate_a(self, N):


"""
Assuming that the u's are normal, this method draws a random path
for x^N
"""
V = self.construct_V(N + 1)
d = spst.multivariate_normal(np.zeros(N + 1), V)

return d.rvs()

def predict(self, a_hist, t):


"""
This function implements the prediction formula discussed in section 6 (1.59)

It takes a realization for a^N, and the period in which the prediction is
formed

Output: E[abar | a_t, a_{t-1}, ..., a_1, a_0]


"""

N = np.asarray(a_hist).shape[0] - 1
a_hist = np.asarray(a_hist).reshape(N + 1, 1)
V = self.construct_V(N + 1)

aux_matrix = np.zeros((N + 1, N + 1))


aux_matrix[:(t + 1), :(t + 1)] = np.eye(t + 1)
L = la.cholesky(V).T
Ea_hist = la.inv(L) @ aux_matrix @ L @ a_hist

return Ea_hist

def optimal_y(self, a_hist, t=None):


"""
- if t is NOT given it takes a_hist (list or numpy.array) as a
deterministic a_t
- if t is given, it solves the combined control prediction problem
(section 7)(by default, t == None -> deterministic)

for a given sequence of a_t (either deterministic or a particular


realization), it calculates the optimal y_t sequence using the method
of the lecture

Note:
------
scipy.linalg.lu normalizes L, U so that L has unit diagonal elements
To make things consistent with the lecture, we need an auxiliary
diagonal matrix D which renormalizes L and U
"""

N = np.asarray(a_hist).shape[0] - 1
W, W_m = self.construct_W_and_Wm(N)

L, U = la.lu(W, permute_l=True)
D = np.diag(1 / np.diag(U))
U = D @ U
L = L @ np.diag(1 / np.diag(D))

J = np.fliplr(np.eye(N + 1))

if t is None: # If the problem is deterministic

a_hist = J @ np.asarray(a_hist).reshape(N + 1, 1)

#--------------------------------------------
# Transform the 'a' sequence if β is given
#--------------------------------------------
if self.β != 1:
a_hist = a_hist * (self.β**(np.arange(N + 1) / 2))[::-1] \
.reshape(N + 1, 1)

a_bar = a_hist - W_m @ self.y_m # a_bar from the lecture


Uy = np.linalg.solve(L, a_bar)  # U @ y_bar = L^{-1} a_bar
y_bar = np.linalg.solve(U, Uy)  # y_bar = U^{-1} L^{-1} a_bar

# Reverse the order of y_bar with the matrix J


J = np.fliplr(np.eye(N + self.m + 1))
# y_hist : concatenated y_m and y_bar
y_hist = J @ np.vstack([y_bar, self.y_m])

#--------------------------------------------
# Transform the optimal sequence back if β is given
#--------------------------------------------
if self.β != 1:
y_hist = y_hist * (self.β**(- np.arange(-self.m, N + 1)/2)) \
.reshape(N + 1 + self.m, 1)

return y_hist, L, U, y_bar

else: # If the problem is stochastic and we look at it

Ea_hist = self.predict(a_hist, t).reshape(N + 1, 1)


Ea_hist = J @ Ea_hist

a_bar = Ea_hist - W_m @ self.y_m # a_bar from the lecture


Uy = np.linalg.solve(L, a_bar)  # U @ y_bar = L^{-1} a_bar
y_bar = np.linalg.solve(U, Uy)  # y_bar = U^{-1} L^{-1} a_bar

# Reverse the order of y_bar with the matrix J


J = np.fliplr(np.eye(N + self.m + 1))
# y_hist : concatenated y_m and y_bar
y_hist = J @ np.vstack([y_bar, self.y_m])

return y_hist, L, U, y_bar

Let’s use this code to tackle two interesting examples.

31.3.2 Example 1

Consider a stochastic process with moving average representation

𝑥𝑡 = (1 − 2𝐿)𝜀𝑡

where 𝜀𝑡 is a serially uncorrelated random process with mean zero and variance unity.
If we were to use the tools associated with infinite dimensional prediction and filtering to be
described below, we would use the Wiener-Kolmogorov formula (21) to compute the linear
least squares forecasts 𝔼̂[𝑥𝑡+𝑗 ∣ 𝑥𝑡, 𝑥𝑡−1, …], for 𝑗 = 1, 2.
But we can do everything we want by instead using our finite dimensional tools and setting
𝑑 = 𝑟, generating an instance of LQFilter, then invoking pertinent methods of LQFilter.

In [3]: m = 1
y_m = np.asarray([.0]).reshape(m, 1)
d = np.asarray([1, -2])
r = np.asarray([1, -2])
h = 0.0
example = LQFilter(d, h, y_m, r=d)

The Wold representation is computed by example.coeffs_of_c().


Let’s check that it “flips roots” as required

In [4]: example.coeffs_of_c()

Out[4]: array([ 2., -1.])

In [5]: example.roots_of_characteristic()

Out[5]: (array([2.]), -2.0, array([0.5]))

Now let’s form the covariance matrix of a time series vector of length 𝑁 and put it in 𝑉 .
Then we’ll take a Cholesky decomposition of 𝑉 = 𝐿⁻¹(𝐿⁻¹)′ and use it to form the vector of
“moving average representations” 𝑥 = 𝐿−1 𝜀 and the vector of “autoregressive representations”
𝐿𝑥 = 𝜀.

In [6]: V = example.construct_V(N=5)
print(V)

[[ 5. -2. 0. 0. 0.]
[-2. 5. -2. 0. 0.]
[ 0. -2. 5. -2. 0.]
[ 0. 0. -2. 5. -2.]
[ 0. 0. 0. -2. 5.]]

Notice how the lower rows of the “moving average representations” are converging to the ap-
propriate infinite history Wold representation to be described below when we study infinite
horizon-prediction and filtering

In [7]: Li = np.linalg.cholesky(V)
print(Li)

[[ 2.23606798 0. 0. 0. 0. ]
[-0.89442719 2.04939015 0. 0. 0. ]
[ 0. -0.97590007 2.01186954 0. 0. ]
[ 0. 0. -0.99410024 2.00293902 0. ]
[ 0. 0. 0. -0.99853265 2.000733 ]]

Notice how the lower rows of the “autoregressive representations” are converging to the ap-
propriate infinite-history autoregressive representation to be described below when we study
infinite horizon-prediction and filtering

In [8]: L = np.linalg.inv(Li)
print(L)

[[0.4472136 0. 0. 0. 0. ]
[0.19518001 0.48795004 0. 0. 0. ]
[0.09467621 0.23669053 0.49705012 0. 0. ]
[0.04698977 0.11747443 0.2466963 0.49926632 0. ]
[0.02345182 0.05862954 0.12312203 0.24917554 0.49981682]]

31.3.3 Example 2

Consider a stochastic process 𝑋𝑡 with moving average representation

𝑋𝑡 = (1 − √2𝐿²)𝜀𝑡

where 𝜀𝑡 is a serially uncorrelated random process with mean zero and variance unity.
Let’s find a Wold moving average representation for 𝑋𝑡 that will prevail in the infinite-history
context to be studied in detail below.
To do this, we’ll use the Wiener-Kolmogorov formula (21) presented below to compute the
linear least squares forecasts 𝔼̂[𝑋𝑡+𝑗 ∣ 𝑋𝑡, 𝑋𝑡−1, …] for 𝑗 = 1, 2, 3.
We proceed in the same way as in example 1

In [9]: m = 2
y_m = np.asarray([.0, .0]).reshape(m, 1)
d = np.asarray([1, 0, -np.sqrt(2)])
r = np.asarray([1, 0, -np.sqrt(2)])
h = 0.0
example = LQFilter(d, h, y_m, r=d)
example.coeffs_of_c()

Out[9]: array([ 1.41421356, -0. , -1. ])



In [10]: example.roots_of_characteristic()

Out[10]: (array([ 1.18920712, -1.18920712]),


-1.4142135623731122,
array([ 0.84089642, -0.84089642]))

In [11]: V = example.construct_V(N=8)
print(V)

[[ 3. 0. -1.41421356 0. 0. 0.
0. 0. ]
[ 0. 3. 0. -1.41421356 0. 0.
0. 0. ]
[-1.41421356 0. 3. 0. -1.41421356 0.
0. 0. ]
[ 0. -1.41421356 0. 3. 0. -1.41421356
0. 0. ]
[ 0. 0. -1.41421356 0. 3. 0.
-1.41421356 0. ]
[ 0. 0. 0. -1.41421356 0. 3.
0. -1.41421356]
[ 0. 0. 0. 0. -1.41421356 0.
3. 0. ]
[ 0. 0. 0. 0. 0. -1.41421356
0. 3. ]]

In [12]: Li = np.linalg.cholesky(V)
print(Li[-3:, :])

[[ 0. 0. 0. -0.9258201 0. 1.46385011
0. 0. ]
[ 0. 0. 0. 0. -0.96609178 0.
1.43759058 0. ]
[ 0. 0. 0. 0. 0. -0.96609178
0. 1.43759058]]

In [13]: L = np.linalg.inv(Li)
print(L)

[[0.57735027 0. 0. 0. 0. 0.
0. 0. ]
[0. 0.57735027 0. 0. 0. 0.
0. 0. ]
[0.3086067 0. 0.65465367 0. 0. 0.
0. 0. ]
[0. 0.3086067 0. 0.65465367 0. 0.
0. 0. ]
[0.19518001 0. 0.41403934 0. 0.68313005 0.
0. 0. ]
[0. 0.19518001 0. 0.41403934 0. 0.68313005
0. 0. ]
[0.13116517 0. 0.27824334 0. 0.45907809 0.
0.69560834 0. ]
[0. 0.13116517 0. 0.27824334 0. 0.45907809
0. 0.69560834]]

31.3.4 Prediction

It immediately follows from the “orthogonality principle” of least squares (see [5] or [59]
[ch. X]) that

𝔼̂[𝑥𝑡 ∣ 𝑥𝑡−𝑚, 𝑥𝑡−𝑚+1, … , 𝑥1] = ∑_{𝑗=𝑚}^{𝑡−1} 𝐿⁻¹_{𝑡,𝑡−𝑗} 𝜀𝑡−𝑗
                            = [𝐿⁻¹_{𝑡,1}, 𝐿⁻¹_{𝑡,2}, … , 𝐿⁻¹_{𝑡,𝑡−𝑚}, 0, 0, … , 0] 𝐿𝑥    (7)

This can be interpreted as a finite-dimensional version of the Wiener-Kolmogorov 𝑚-step
ahead prediction formula.
We can use (7) to represent the linear least squares projection of the vector 𝑥 conditioned on
the first 𝑠 observations [𝑥𝑠 , 𝑥𝑠−1 … , 𝑥1 ].
We have

𝔼̂[𝑥 ∣ 𝑥𝑠, 𝑥𝑠−1, … , 𝑥1] = 𝐿⁻¹ [𝐼𝑠 0; 0 0_{(𝑡−𝑠)}] 𝐿𝑥    (8)

This formula will be convenient in representing the solution of control problems under uncer-
tainty.
Equation (4) can be recognized as a finite dimensional version of a moving average represen-
tation.
Equation (2) can be viewed as a finite dimension version of an autoregressive representation.
Notice that even if the 𝑥𝑡 process is covariance stationary, so that 𝑉 is such that 𝑉𝑖𝑗 depends
only on |𝑖 − 𝑗|, the coefficients in the moving average representation are time-dependent, there
being a different moving average for each 𝑡.
If 𝑥𝑡 is a covariance stationary process, the last row of 𝐿−1 converges to the coefficients in the
Wold moving average representation for {𝑥𝑡 } as 𝑇 → ∞.
Further, if 𝑥𝑡 is covariance stationary, for fixed 𝑘 and 𝑗 > 0, 𝐿⁻¹_{𝑇,𝑇−𝑗} converges to
𝐿⁻¹_{𝑇−𝑘,𝑇−𝑘−𝑗} as 𝑇 → ∞.
That is, the “bottom” rows of 𝐿−1 converge to each other and to the Wold moving average
coefficients as 𝑇 → ∞.
This last observation gives one simple and widely-used practical way of forming a finite 𝑇 ap-
proximation to a Wold moving average representation.

First, form the covariance matrix 𝔼𝑥𝑥′ = 𝑉 , then obtain the Cholesky decomposition 𝐿−1 𝐿−1
of 𝑉 , which can be accomplished quickly on a computer.
The last row of 𝐿−1 gives the approximate Wold moving average coefficients.
This method can readily be generalized to multivariate systems.
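Here is a minimal numerical sketch of formula (8) (the covariance matrix and draws below
are made up for illustration, not part of the original text):

import numpy as np

T, s = 5, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((T, T))
V = A @ A.T + T * np.eye(T)          # a positive definite covariance matrix

Li = np.linalg.cholesky(V)           # Li = L^{-1}, lower triangular, V = L^{-1}(L^{-1})'
L = np.linalg.inv(Li)

x = rng.multivariate_normal(np.zeros(T), V)

P = np.zeros((T, T))
P[:s, :s] = np.eye(s)                # the selector [[I_s, 0], [0, 0]]

x_hat = Li @ P @ L @ x               # E-hat[x | x_s, ..., x_1], per (8)

print(np.allclose(x_hat[:s], x[:s]))  # conditioned-on coordinates are reproduced -> True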

31.4 Combined Finite Dimensional Control and Prediction

Consider the finite-dimensional control problem, maximize



𝔼 ∑_{𝑡=0}^{𝑁} {𝑎𝑡𝑦𝑡 − (1/2)ℎ𝑦𝑡² − (1/2)[𝑑(𝐿)𝑦𝑡]²},    ℎ > 0

where 𝑑(𝐿) = 𝑑0 + 𝑑1𝐿 + … + 𝑑𝑚𝐿^𝑚, 𝐿 is the lag operator, 𝑎̄ = [𝑎𝑁, 𝑎𝑁−1, … , 𝑎1, 𝑎0]′ a random
vector with mean zero and 𝔼𝑎̄𝑎̄′ = 𝑉.
The variables 𝑦−1 , … , 𝑦−𝑚 are given.
Maximization is over choices of 𝑦0, 𝑦1, … , 𝑦𝑁, where 𝑦𝑡 is required to be a linear function of
{𝑦𝑡−𝑠−1, 𝑡 + 𝑚 − 1 ≥ 𝑠 ≥ 0; 𝑎𝑡−𝑠, 𝑡 ≥ 𝑠 ≥ 0}.
We saw in the lecture Classical Control with Linear Algebra that the solution of this problem
under certainty could be represented in the feedback-feedforward form

𝑈𝑦̄ = 𝐿⁻¹𝑎̄ + 𝐾 [𝑦−1 ⋯ 𝑦−𝑚]′

for some (𝑁 + 1) × 𝑚 matrix 𝐾.


Using a version of formula (7), we can express 𝔼̂[𝑎̄ ∣ 𝑎𝑠, 𝑎𝑠−1, … , 𝑎0] as

𝔼̂[𝑎̄ ∣ 𝑎𝑠, 𝑎𝑠−1, … , 𝑎0] = 𝑈̃⁻¹ [0 0; 0 𝐼(𝑠+1)] 𝑈̃𝑎̄

where 𝐼(𝑠+1) is the (𝑠 + 1) × (𝑠 + 1) identity matrix, and 𝑉 = 𝑈̃⁻¹(𝑈̃⁻¹)′, where 𝑈̃ is the
upper triangular Cholesky factor of the covariance matrix 𝑉.


(We have reversed the time axis in dating the 𝑎’s relative to earlier)
The time axis can be reversed in representation (8) by replacing 𝐿 with 𝐿ᵀ.
The optimal decision rule to use at time 0 ≤ 𝑡 ≤ 𝑁 is then given by the (𝑁 − 𝑡 + 1)th row of

𝑈𝑦̄ = 𝐿⁻¹𝑈̃⁻¹ [0 0; 0 𝐼(𝑡+1)] 𝑈̃𝑎̄ + 𝐾 [𝑦−1 ⋯ 𝑦−𝑚]′

31.5 Infinite Horizon Prediction and Filtering Problems

It is instructive to compare the finite-horizon formulas based on linear algebra decompositions


of finite-dimensional covariance matrices with classic formulas for infinite horizon and infinite
history prediction and control problems.
These classic infinite horizon formulas used the mathematics of 𝑧-transforms and lag opera-
tors.
We’ll meet interesting lag operator and 𝑧-transform counterparts to our finite horizon matrix
formulas.
We pose two related prediction and filtering problems.
We let 𝑌𝑡 be a univariate 𝑚th order moving average, covariance stationary stochastic process,

𝑌𝑡 = 𝑑(𝐿)𝑢𝑡    (9)

where 𝑑(𝐿) = ∑_{𝑗=0}^{𝑚} 𝑑𝑗𝐿^𝑗, and 𝑢𝑡 is a serially uncorrelated stationary random process
satisfying

𝔼𝑢𝑡 = 0
𝔼𝑢𝑡𝑢𝑠 = { 1 if 𝑡 = 𝑠,  0 otherwise }    (10)

We impose no conditions on the zeros of 𝑑(𝑧).


A second covariance stationary process is 𝑋𝑡 given by

𝑋𝑡 = 𝑌𝑡 + 𝜀𝑡 (11)

where 𝜀𝑡 is a serially uncorrelated stationary random process with 𝔼𝜀𝑡 = 0 and 𝔼𝜀𝑡 𝜀𝑠 = 0 for
all distinct 𝑡 and 𝑠.
We also assume that 𝔼𝜀𝑡 𝑢𝑠 = 0 for all 𝑡 and 𝑠.
The linear least squares prediction problem is to find the 𝐿² random variable 𝑋̂𝑡+𝑗
among linear combinations of {𝑋𝑡, 𝑋𝑡−1, …} that minimizes 𝔼(𝑋̂𝑡+𝑗 − 𝑋𝑡+𝑗)².
That is, the problem is to find a 𝛾𝑗(𝐿) = ∑_{𝑘=0}^{∞} 𝛾𝑗𝑘𝐿^𝑘 such that ∑_{𝑘=0}^{∞} |𝛾𝑗𝑘|² < ∞ and
𝔼[𝛾𝑗(𝐿)𝑋𝑡 − 𝑋𝑡+𝑗]² is minimized.
The linear least squares filtering problem is to find a 𝑏(𝐿) = ∑_{𝑗=0}^{∞} 𝑏𝑗𝐿^𝑗 such that
∑_{𝑗=0}^{∞} |𝑏𝑗|² < ∞ and 𝔼[𝑏(𝐿)𝑋𝑡 − 𝑌𝑡]² is minimized.
Interesting versions of these problems related to the permanent income theory were studied
by [48].

31.5.1 Problem Formulation

These problems are solved as follows.


The covariograms of 𝑌 and 𝑋 and their cross covariogram are, respectively,

𝐶𝑋(𝜏) = 𝔼𝑋𝑡𝑋𝑡−𝜏
𝐶𝑌(𝜏) = 𝔼𝑌𝑡𝑌𝑡−𝜏        𝜏 = 0, ±1, ±2, …    (12)
𝐶𝑌,𝑋(𝜏) = 𝔼𝑌𝑡𝑋𝑡−𝜏

The covariance and cross-covariance generating functions are defined as

𝑔𝑋(𝑧) = ∑_{𝜏=−∞}^{∞} 𝐶𝑋(𝜏)𝑧^𝜏
𝑔𝑌(𝑧) = ∑_{𝜏=−∞}^{∞} 𝐶𝑌(𝜏)𝑧^𝜏    (13)
𝑔𝑌𝑋(𝑧) = ∑_{𝜏=−∞}^{∞} 𝐶𝑌𝑋(𝜏)𝑧^𝜏

The generating functions can be computed by using the following facts.


Let 𝑣1𝑡 and 𝑣2𝑡 be two mutually and serially uncorrelated white noises with unit variances.
That is, 𝔼𝑣1𝑡² = 𝔼𝑣2𝑡² = 1, 𝔼𝑣1𝑡 = 𝔼𝑣2𝑡 = 0, 𝔼𝑣1𝑡𝑣2𝑠 = 0 for all 𝑡 and 𝑠, and
𝔼𝑣1𝑡𝑣1𝑡−𝑗 = 𝔼𝑣2𝑡𝑣2𝑡−𝑗 = 0 for all 𝑗 ≠ 0.
Let 𝑥𝑡 and 𝑦𝑡 be two random processes given by

𝑦𝑡 = 𝐴(𝐿)𝑣1𝑡 + 𝐵(𝐿)𝑣2𝑡
𝑥𝑡 = 𝐶(𝐿)𝑣1𝑡 + 𝐷(𝐿)𝑣2𝑡

Then, as shown for example in [59] [ch. XI], it is true that

𝑔𝑦(𝑧) = 𝐴(𝑧)𝐴(𝑧⁻¹) + 𝐵(𝑧)𝐵(𝑧⁻¹)
𝑔𝑥(𝑧) = 𝐶(𝑧)𝐶(𝑧⁻¹) + 𝐷(𝑧)𝐷(𝑧⁻¹)    (14)
𝑔𝑦𝑥(𝑧) = 𝐴(𝑧)𝐶(𝑧⁻¹) + 𝐵(𝑧)𝐷(𝑧⁻¹)
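Here is a quick symbolic check of (14) in a simple special case (assuming SymPy is
available; the processes are chosen purely for illustration): take 𝑦𝑡 = (1 + 𝐿)𝑣1𝑡 and
𝑥𝑡 = 𝑣1𝑡, so that 𝐴(𝑧) = 1 + 𝑧, 𝐵(𝑧) = 𝐷(𝑧) = 0, and 𝐶(𝑧) = 1.

import sympy as sp

z = sp.symbols('z')
A, C = 1 + z, sp.Integer(1)

g_y = sp.expand(A * A.subs(z, 1 / z))    # z + 2 + 1/z
g_yx = sp.expand(A * C.subs(z, 1 / z))   # z + 1

# Reading off coefficients of z^τ: C_y(0) = 2, C_y(±1) = 1 and C_yx(0) = C_yx(1) = 1,
# which matches direct computation of the covariances for these two processes
print(g_y, g_yx)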

Applying these formulas to (9) – (12), we have

𝑔𝑌(𝑧) = 𝑑(𝑧)𝑑(𝑧⁻¹)
𝑔𝑋(𝑧) = 𝑑(𝑧)𝑑(𝑧⁻¹) + ℎ    (15)
𝑔𝑌𝑋(𝑧) = 𝑑(𝑧)𝑑(𝑧⁻¹)

The key step in obtaining solutions to our problems is to factor the covariance generating
function 𝑔𝑋 (𝑧) of 𝑋.
The solutions of our problems are given by formulas due to Wiener and Kolmogorov.
These formulas utilize the Wold moving average representation of the 𝑋𝑡 process,

𝑋𝑡 = 𝑐(𝐿)𝜂𝑡    (16)

where 𝑐(𝐿) = ∑_{𝑗=0}^{𝑚} 𝑐𝑗𝐿^𝑗, with

𝑐0𝜂𝑡 = 𝑋𝑡 − 𝔼̂[𝑋𝑡|𝑋𝑡−1, 𝑋𝑡−2, …]    (17)

Here 𝔼̂ is the linear least squares projection operator.


Equation (17) is the condition that 𝑐0 𝜂𝑡 can be the one-step-ahead error in predicting 𝑋𝑡
from its own past values.
Condition (17) requires that 𝜂𝑡 lie in the closed linear space spanned by [𝑋𝑡 , 𝑋𝑡−1 , …].
This will be true if and only if the zeros of 𝑐(𝑧) do not lie inside the unit circle.
It is an implication of (17) that 𝜂𝑡 is a serially uncorrelated random process and that normal-
ization can be imposed so that 𝔼𝜂𝑡2 = 1.
Consequently, an implication of (16) is that the covariance generating function of 𝑋𝑡 can be
expressed as

𝑔𝑋 (𝑧) = 𝑐 (𝑧) 𝑐 (𝑧 −1 ) (18)



It remains to discuss how 𝑐(𝐿) is to be computed.


Combining (14) and (18) gives

𝑑(𝑧) 𝑑(𝑧 −1 ) + ℎ = 𝑐 (𝑧) 𝑐 (𝑧 −1 ) (19)

Therefore, we have already shown constructively how to factor the covariance generating
function 𝑔𝑋 (𝑧) = 𝑑(𝑧) 𝑑 (𝑧 −1 ) + ℎ.
We now introduce the annihilation operator:

[∑_{𝑗=−∞}^{∞} 𝑓𝑗𝐿^𝑗]₊ ≡ ∑_{𝑗=0}^{∞} 𝑓𝑗𝐿^𝑗    (20)

In words, [ ]+ means “ignore negative powers of 𝐿”.


We have defined the solution of the prediction problem as 𝔼̂[𝑋𝑡+𝑗|𝑋𝑡, 𝑋𝑡−1, …] = 𝛾𝑗(𝐿)𝑋𝑡.
Assuming that the roots of 𝑐(𝑧) = 0 all lie outside the unit circle, the Wiener-Kolmogorov
formula for 𝛾𝑗(𝐿) holds:

𝛾𝑗(𝐿) = [𝑐(𝐿)/𝐿^𝑗]₊ 𝑐(𝐿)⁻¹    (21)

We have defined the solution of the filtering problem as 𝔼̂[𝑌𝑡 ∣ 𝑋𝑡, 𝑋𝑡−1, …] = 𝑏(𝐿)𝑋𝑡.
The Wiener-Kolmogorov formula for 𝑏(𝐿) is

𝑏(𝐿) = [𝑔𝑌𝑋(𝐿)/𝑐(𝐿⁻¹)]₊ 𝑐(𝐿)⁻¹

or

𝑏(𝐿) = [𝑑(𝐿)𝑑(𝐿⁻¹)/𝑐(𝐿⁻¹)]₊ 𝑐(𝐿)⁻¹    (22)

Formulas (21) and (22) are discussed in detail in [69] and [59].
The interested reader can find several examples of the use of these formulas in economics
there. Some classic examples using these formulas are due to [48].
As an example of the usefulness of formula (22), we let 𝑋𝑡 be a stochastic process with Wold
moving average representation

𝑋𝑡 = 𝑐(𝐿)𝜂𝑡

where 𝔼𝜂𝑡² = 1, and 𝑐0𝜂𝑡 = 𝑋𝑡 − 𝔼̂[𝑋𝑡|𝑋𝑡−1, …], 𝑐(𝐿) = ∑_{𝑗=0}^{𝑚} 𝑐𝑗𝐿^𝑗.

Suppose that at time 𝑡, we wish to predict a geometric sum of future 𝑋’s, namely

𝑦𝑡 ≡ ∑_{𝑗=0}^{∞} 𝛿^𝑗𝑋𝑡+𝑗 = [1/(1 − 𝛿𝐿⁻¹)] 𝑋𝑡

given knowledge of 𝑋𝑡 , 𝑋𝑡−1 , ….


We shall use (22) to obtain the answer.
Using the standard formulas (14), we have that

𝑔𝑦𝑥(𝑧) = (1 − 𝛿𝑧⁻¹)⁻¹𝑐(𝑧)𝑐(𝑧⁻¹)
𝑔𝑥(𝑧) = 𝑐(𝑧)𝑐(𝑧⁻¹)

Then (22) becomes

𝑏(𝐿) = [𝑐(𝐿)/(1 − 𝛿𝐿⁻¹)]₊ 𝑐(𝐿)⁻¹    (23)

In order to evaluate the term in the annihilation operator, we use the following result from
[27].
Proposition Let
• 𝑔(𝑧) = ∑_{𝑗=0}^{∞} 𝑔𝑗𝑧^𝑗 where ∑_{𝑗=0}^{∞} |𝑔𝑗|² < +∞.
• ℎ(𝑧⁻¹) = (1 − 𝛿1𝑧⁻¹) … (1 − 𝛿𝑛𝑧⁻¹), where |𝛿𝑗| < 1, for 𝑗 = 1, … , 𝑛.
Then

[𝑔(𝑧)/ℎ(𝑧⁻¹)]₊ = 𝑔(𝑧)/ℎ(𝑧⁻¹) − ∑_{𝑗=1}^{𝑛} [𝛿𝑗𝑔(𝛿𝑗) / ∏_{𝑘=1, 𝑘≠𝑗}^{𝑛} (𝛿𝑗 − 𝛿𝑘)] (1/(𝑧 − 𝛿𝑗))    (24)

and, alternatively,

[𝑔(𝑧)/ℎ(𝑧⁻¹)]₊ = ∑_{𝑗=1}^{𝑛} 𝐵𝑗 [(𝑧𝑔(𝑧) − 𝛿𝑗𝑔(𝛿𝑗)) / (𝑧 − 𝛿𝑗)]    (25)

where 𝐵𝑗 = 1/∏_{𝑘=1, 𝑘≠𝑗}^{𝑛} (1 − 𝛿𝑘/𝛿𝑗).
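Before applying the proposition, here is a quick symbolic verification of (25) in the
case 𝑛 = 1 (assuming SymPy is available; 𝑔(𝑧) = 1 + 2𝑧 and 𝛿1 = 1/2 are illustrative
choices): both a direct annihilation and the right side of (25) give 2𝑧 + 2.

import sympy as sp

z, w = sp.symbols('z w')        # w stands for z^{-1}

g = 1 + 2 * z                   # illustrative g(z)
δ1 = sp.Rational(1, 2)          # |δ1| < 1

# Right side of (25) with n = 1 (so B_1 = 1, an empty product)
rhs = sp.simplify((z * g - δ1 * g.subs(z, δ1)) / (z - δ1))

# Direct computation of [g(z)/h(z^{-1})]_+ : expand as a Laurent series in w = z^{-1}
# and keep only terms with nonnegative powers of z, i.e. powers w^k with k <= 0
expr = g.subs(z, 1 / w) / (1 - δ1 * w)
series = sp.expand(sp.series(expr, w, 0, 8).removeO())
p = sp.Poly(sp.expand(series * w), w)    # multiply by w to clear the simple pole at w = 0
kept = sum(coef * w**(k - 1) for (k,), coef in p.terms() if k <= 1)
lhs = sp.simplify(kept.subs(w, 1 / z))

print(lhs, rhs, sp.simplify(lhs - rhs) == 0)   # 2*z + 2, 2*z + 2, True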

Applying formula (25) of the proposition to evaluating (23) with 𝑔(𝑧) = 𝑐(𝑧) and ℎ(𝑧 −1 ) =
1 − 𝛿𝑧 −1 gives

𝑏(𝐿) = [(𝐿𝑐(𝐿) − 𝛿𝑐(𝛿)) / (𝐿 − 𝛿)] 𝑐(𝐿)⁻¹

or

𝑏(𝐿) = [(1 − 𝛿𝑐(𝛿)𝐿⁻¹𝑐(𝐿)⁻¹) / (1 − 𝛿𝐿⁻¹)]

Thus, we have


𝔼̂[∑_{𝑗=0}^{∞} 𝛿^𝑗𝑋𝑡+𝑗 | 𝑋𝑡, 𝑋𝑡−1, …] = [(1 − 𝛿𝑐(𝛿)𝐿⁻¹𝑐(𝐿)⁻¹) / (1 − 𝛿𝐿⁻¹)] 𝑋𝑡    (26)

This formula is useful in solving stochastic versions of problem 1 of lecture Classical Control
with Linear Algebra in which the randomness emerges because {𝑎𝑡 } is a stochastic process.

The problem is to maximize

𝔼0 lim_{𝑁→∞} ∑_{𝑡=0}^{𝑁} 𝛽^𝑡 [𝑎𝑡𝑦𝑡 − (1/2)ℎ𝑦𝑡² − (1/2)[𝑑(𝐿)𝑦𝑡]²]    (27)

where 𝔼𝑡 is mathematical expectation conditioned on information known at 𝑡, and where {𝑎𝑡 }


is a covariance stationary stochastic process with Wold moving average representation

𝑎𝑡 = 𝑐(𝐿) 𝜂𝑡

where

𝑐(𝐿) = ∑_{𝑗=0}^{𝑛̃} 𝑐𝑗𝐿^𝑗

and

𝜂𝑡 = 𝑎𝑡 − 𝔼̂[𝑎𝑡|𝑎𝑡−1, …]

The problem is to maximize (27) with respect to a contingency plan expressing 𝑦𝑡 as a func-
tion of information known at 𝑡, which is assumed to be (𝑦𝑡−1 , 𝑦𝑡−2 , … , 𝑎𝑡 , 𝑎𝑡−1 , …).
The solution of this problem can be achieved in two steps.
First, ignoring the uncertainty, we can solve the problem assuming that {𝑎𝑡 } is a known se-
quence.
The solution is, from above,

𝑐(𝐿)𝑦𝑡 = 𝑐(𝛽𝐿⁻¹)⁻¹𝑎𝑡

or

(1 − 𝜆1𝐿) … (1 − 𝜆𝑚𝐿)𝑦𝑡 = ∑_{𝑗=1}^{𝑚} 𝐴𝑗 ∑_{𝑘=0}^{∞} (𝜆𝑗𝛽)^𝑘𝑎𝑡+𝑘    (28)

Second, the solution of the problem under uncertainty is obtained by replacing the terms on
the right-hand side of the above expressions with their linear least squares predictors.
Using (26) and (28), we have the following solution

(1 − 𝜆1𝐿) … (1 − 𝜆𝑚𝐿)𝑦𝑡 = ∑_{𝑗=1}^{𝑚} 𝐴𝑗 [(1 − 𝛽𝜆𝑗𝑐(𝛽𝜆𝑗)𝐿⁻¹𝑐(𝐿)⁻¹) / (1 − 𝛽𝜆𝑗𝐿⁻¹)] 𝑎𝑡

Blaschke factors
The following is a useful piece of mathematics underlying “root flipping”.
Let 𝜋(𝑧) = ∑_{𝑗=0}^{𝑚} 𝜋𝑗𝑧^𝑗 and let 𝑧1, … , 𝑧𝑘 be the zeros of 𝜋(𝑧) that are inside the unit circle,
𝑘 < 𝑚.
Then define

𝜃(𝑧) = 𝜋(𝑧) [(𝑧1𝑧 − 1)/(𝑧 − 𝑧1)] [(𝑧2𝑧 − 1)/(𝑧 − 𝑧2)] … [(𝑧𝑘𝑧 − 1)/(𝑧 − 𝑧𝑘)]

The term multiplying 𝜋(𝑧) is termed a “Blaschke factor”.


Then it can be proved directly that

𝜃(𝑧 −1 )𝜃(𝑧) = 𝜋(𝑧 −1 )𝜋(𝑧)

and that the zeros of 𝜃(𝑧) are not inside the unit circle.
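A small symbolic illustration of root flipping (assuming SymPy is available; 𝜋(𝑧) is an
illustrative choice): for 𝜋(𝑧) = 1 − 2𝑧, whose zero 𝑧1 = 1/2 lies inside the unit circle,
the Blaschke factor moves the zero to 𝑧 = 2 while preserving 𝜋(𝑧⁻¹)𝜋(𝑧).

import sympy as sp

z = sp.symbols('z')

π = 1 - 2 * z                  # zero at z1 = 1/2, inside the unit circle
z1 = sp.Rational(1, 2)

θ = sp.simplify(π * (z1 * z - 1) / (z - z1))
print(θ)                       # 2 - z: its zero z = 2 lies outside the unit circle

lhs = sp.simplify(θ.subs(z, 1 / z) * θ)
rhs = sp.simplify(π.subs(z, 1 / z) * π)
print(sp.simplify(lhs - rhs))  # 0, confirming θ(z^{-1})θ(z) = π(z^{-1})π(z)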

31.6 Exercises

31.6.1 Exercise 1

Let 𝑌𝑡 = (1 − 2𝐿)𝑢𝑡 where 𝑢𝑡 is a mean zero white noise with 𝔼𝑢𝑡² = 1. Let

𝑋𝑡 = 𝑌𝑡 + 𝜀𝑡

where 𝜀𝑡 is a serially uncorrelated white noise with 𝔼𝜀𝑡² = 9, and 𝔼𝜀𝑡𝑢𝑠 = 0 for all 𝑡 and 𝑠.
Find the Wold moving average representation for 𝑋𝑡 .
Find a formula for the 𝐴1𝑗’s in

𝔼̂[𝑋𝑡+1 ∣ 𝑋𝑡, 𝑋𝑡−1, …] = ∑_{𝑗=0}^{∞} 𝐴1𝑗𝑋𝑡−𝑗

Find a formula for the 𝐴2𝑗’s in

𝔼̂[𝑋𝑡+2 ∣ 𝑋𝑡, 𝑋𝑡−1, …] = ∑_{𝑗=0}^{∞} 𝐴2𝑗𝑋𝑡−𝑗

31.6.2 Exercise 2

Multivariable Prediction: Let 𝑌𝑡 be an (𝑛 × 1) vector stochastic process with moving average
representation

𝑌𝑡 = 𝐷(𝐿)𝑈𝑡

where 𝐷(𝐿) = ∑_{𝑗=0}^{𝑚} 𝐷𝑗𝐿^𝑗, 𝐷𝑗 an 𝑛 × 𝑛 matrix, 𝑈𝑡 an (𝑛 × 1) vector white noise with 𝔼𝑈𝑡 = 0
for all 𝑡, 𝔼𝑈𝑡𝑈𝑠′ = 0 for all 𝑠 ≠ 𝑡, and 𝔼𝑈𝑡𝑈𝑡′ = 𝐼 for all 𝑡.
Let 𝜀𝑡 be an 𝑛 × 1 vector white noise with mean 0 and contemporaneous covariance matrix 𝐻,
where 𝐻 is a positive definite matrix.
Let 𝑋𝑡 = 𝑌𝑡 + 𝜀𝑡 .
Define the covariograms as 𝐶𝑋(𝜏) = 𝔼𝑋𝑡𝑋𝑡−𝜏′, 𝐶𝑌(𝜏) = 𝔼𝑌𝑡𝑌𝑡−𝜏′, 𝐶𝑌𝑋(𝜏) = 𝔼𝑌𝑡𝑋𝑡−𝜏′.

Then define the matrix covariance generating function, as in (21), only interpret all the ob-
jects in (21) as matrices.
Show that the covariance generating functions are given by

𝑔𝑦 (𝑧) = 𝐷(𝑧)𝐷(𝑧 −1 )′
𝑔𝑋 (𝑧) = 𝐷(𝑧)𝐷(𝑧 −1 )′ + 𝐻
𝑔𝑌 𝑋 (𝑧) = 𝐷(𝑧)𝐷(𝑧 −1 )′

A factorization of 𝑔𝑋 (𝑧) can be found (see [54] or [69]) of the form

𝐷(𝑧)𝐷(𝑧⁻¹)′ + 𝐻 = 𝐶(𝑧)𝐶(𝑧⁻¹)′,    𝐶(𝑧) = ∑_{𝑗=0}^{𝑚} 𝐶𝑗𝑧^𝑗

where the zeros of |𝐶(𝑧)| do not lie inside the unit circle.
A vector Wold moving average representation of 𝑋𝑡 is then

𝑋𝑡 = 𝐶(𝐿)𝜂𝑡

where 𝜂𝑡 is an (𝑛 × 1) vector white noise that is “fundamental” for 𝑋𝑡 .


That is, 𝑋𝑡 − 𝔼̂ [𝑋𝑡 ∣ 𝑋𝑡−1 , 𝑋𝑡−2 …] = 𝐶0 𝜂𝑡 .
The optimum predictor of 𝑋𝑡+𝑗 is

𝔼̂[𝑋𝑡+𝑗 ∣ 𝑋𝑡, 𝑋𝑡−1, …] = [𝐶(𝐿)/𝐿^𝑗]₊ 𝜂𝑡

If 𝐶(𝐿) is invertible, i.e., if the zeros of det 𝐶(𝑧) lie strictly outside the unit circle, then this
formula can be written

𝔼̂[𝑋𝑡+𝑗 ∣ 𝑋𝑡, 𝑋𝑡−1, …] = [𝐶(𝐿)/𝐿^𝑗]₊ 𝐶(𝐿)⁻¹𝑋𝑡
Part VII

Asset Pricing and Finance

Chapter 32

Asset Pricing II: The Lucas Asset Pricing Model

32.1 Contents

• Overview 32.2
• The Lucas Model 32.3
• Exercises 32.4
• Solutions 32.5
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install interpolation

32.2 Overview

As stated in an earlier lecture, an asset is a claim on a stream of prospective payments.


What is the correct price to pay for such a claim?
The elegant asset pricing model of Lucas [44] attempts to answer this question in an equilib-
rium setting with risk-averse agents.
While we mentioned some consequences of Lucas’ model earlier, it is now time to work
through the model more carefully and try to understand where the fundamental asset pric-
ing equation comes from.
A side benefit of studying Lucas’ model is that it provides a beautiful illustration of model
building in general and equilibrium pricing in competitive models in particular.
Another difference to our first asset pricing lecture is that the state space and shock will be
continuous rather than discrete.
Let’s start with some imports:

In [2]: import numpy as np


from interpolation import interp
from numba import njit, prange
from scipy.stats import lognorm


import matplotlib.pyplot as plt


%matplotlib inline

32.3 The Lucas Model

Lucas studied a pure exchange economy with a representative consumer (or household), where
• Pure exchange means that all endowments are exogenous.
• Representative consumer means that either
– there is a single consumer (sometimes also referred to as a household), or
– all consumers have identical endowments and preferences
Either way, the assumption of a representative agent means that prices adjust to eradicate
desires to trade.
This makes it very easy to compute competitive equilibrium prices.

32.3.1 Basic Setup

Let’s review the setup.

Assets

There is a single “productive unit” that costlessly generates a sequence of consumption goods
{𝑦𝑡}_{𝑡=0}^{∞}.
Another way to view {𝑦𝑡}_{𝑡=0}^{∞} is as a consumption endowment for this economy.

We will assume that this endowment is Markovian, following the exogenous process

𝑦𝑡+1 = 𝐺(𝑦𝑡 , 𝜉𝑡+1 )

Here {𝜉𝑡 } is an IID shock sequence with known distribution 𝜙 and 𝑦𝑡 ≥ 0.


An asset is a claim on all or part of this endowment stream.
The consumption goods {𝑦𝑡}_{𝑡=0}^{∞} are nonstorable, so holding assets is the only way to
transfer wealth into the future.
For the purposes of intuition, it’s common to think of the productive unit as a “tree” that
produces fruit.
Based on this idea, a “Lucas tree” is a claim on the consumption endowment.

Consumers

A representative consumer ranks consumption streams {𝑐𝑡 } according to the time separable
utility functional


𝔼 ∑ 𝛽 𝑡 𝑢(𝑐𝑡 ) (1)
𝑡=0

Here

• 𝛽 ∈ (0, 1) is a fixed discount factor.


• 𝑢 is a strictly increasing, strictly concave, continuously differentiable period utility func-
tion.
• 𝔼 is a mathematical expectation.

32.3.2 Pricing a Lucas Tree

What is an appropriate price for a claim on the consumption endowment?


We’ll price an ex-dividend claim, meaning that
• the seller retains this period’s dividend
• the buyer pays 𝑝𝑡 today to purchase a claim on
– 𝑦𝑡+1 and
– the right to sell the claim tomorrow at price 𝑝𝑡+1
Since this is a competitive model, the first step is to pin down consumer behavior, taking
prices as given.
Next, we’ll impose equilibrium constraints and try to back out prices.
In the consumer problem, the consumer’s control variable is the share 𝜋𝑡 of the claim held in
each period.
Thus, the consumer problem is to maximize (1) subject to

𝑐𝑡 + 𝜋𝑡+1 𝑝𝑡 ≤ 𝜋𝑡 𝑦𝑡 + 𝜋𝑡 𝑝𝑡

along with 𝑐𝑡 ≥ 0 and 0 ≤ 𝜋𝑡 ≤ 1 at each 𝑡.


The decision to hold share 𝜋𝑡 is actually made at time 𝑡 − 1.
But this value is inherited as a state variable at time 𝑡, which explains the choice of subscript.

The Dynamic Program

We can write the consumer problem as a dynamic programming problem.


Our first observation is that prices depend on current information, and current information is
really just the endowment process up until the current period.
In fact, the endowment process is Markovian, so that the only relevant information is the cur-
rent state 𝑦 ∈ ℝ+ (dropping the time subscript).
This leads us to guess an equilibrium where price is a function 𝑝 of 𝑦.
Remarks on the solution method
• Since this is a competitive (read: price taking) model, the consumer will take this func-
tion 𝑝 as given.
• In this way, we determine consumer behavior given 𝑝 and then use equilibrium condi-
tions to recover 𝑝.
• This is the standard way to solve competitive equilibrium models.
Using the assumption that price is a given function 𝑝 of 𝑦, we write the value function and
constraint as

𝑣(𝜋, 𝑦) = max_{𝑐,𝜋′} {𝑢(𝑐) + 𝛽 ∫ 𝑣(𝜋′, 𝐺(𝑦, 𝑧))𝜙(𝑑𝑧)}

subject to

𝑐 + 𝜋′ 𝑝(𝑦) ≤ 𝜋𝑦 + 𝜋𝑝(𝑦) (2)

We can invoke the fact that utility is increasing to claim equality in (2) and hence eliminate
the constraint, obtaining

𝑣(𝜋, 𝑦) = max_{𝜋′} {𝑢[𝜋(𝑦 + 𝑝(𝑦)) − 𝜋′𝑝(𝑦)] + 𝛽 ∫ 𝑣(𝜋′, 𝐺(𝑦, 𝑧))𝜙(𝑑𝑧)}    (3)

The solution to this dynamic programming problem is an optimal policy expressing either 𝜋′
or 𝑐 as a function of the state (𝜋, 𝑦).
• Each one determines the other, since 𝑐(𝜋, 𝑦) = 𝜋(𝑦 + 𝑝(𝑦)) − 𝜋′ (𝜋, 𝑦)𝑝(𝑦)

Next Steps

What we need to do now is determine equilibrium prices.


It seems that to obtain these, we will have to

1. Solve this two-dimensional dynamic programming problem for the optimal policy.

2. Impose equilibrium constraints.

3. Solve out for the price function 𝑝(𝑦) directly.

However, as Lucas showed, there is a related but more straightforward way to do this.

Equilibrium Constraints

Since the consumption good is not storable, in equilibrium we must have 𝑐𝑡 = 𝑦𝑡 for all 𝑡.
In addition, since there is one representative consumer (alternatively, since all consumers are
identical), there should be no trade in equilibrium.
In particular, the representative consumer owns the whole tree in every period, so 𝜋𝑡 = 1 for
all 𝑡.
Prices must adjust to satisfy these two constraints.

The Equilibrium Price Function

Now observe that the first-order condition for (3) can be written as

𝑢′ (𝑐)𝑝(𝑦) = 𝛽 ∫ 𝑣1′ (𝜋′ , 𝐺(𝑦, 𝑧))𝜙(𝑑𝑧)

where 𝑣1′ is the derivative of 𝑣 with respect to its first argument.



To obtain 𝑣1′ we can simply differentiate the right-hand side of (3) with respect to 𝜋, yielding

𝑣1′ (𝜋, 𝑦) = 𝑢′ (𝑐)(𝑦 + 𝑝(𝑦))

Next, we impose the equilibrium constraints while combining the last two equations to get

𝑢′ [𝐺(𝑦, 𝑧)]
𝑝(𝑦) = 𝛽 ∫ [𝐺(𝑦, 𝑧) + 𝑝(𝐺(𝑦, 𝑧))]𝜙(𝑑𝑧) (4)
𝑢′ (𝑦)

In sequential rather than functional notation, we can also write this as

𝑝𝑡 = 𝔼𝑡 [𝛽 (𝑢′(𝑐𝑡+1)/𝑢′(𝑐𝑡)) (𝑦𝑡+1 + 𝑝𝑡+1)]    (5)

This is the famous consumption-based asset pricing equation.


Before discussing it further we want to solve out for prices.

32.3.3 Solving the Model

Equation (4) is a functional equation in the unknown function 𝑝.


The solution is an equilibrium price function 𝑝∗ .
Let’s look at how to obtain it.

Setting up the Problem

Instead of solving for it directly we’ll follow Lucas’ indirect approach, first setting

𝑓(𝑦) ∶= 𝑢′ (𝑦)𝑝(𝑦) (6)

so that (4) becomes

𝑓(𝑦) = ℎ(𝑦) + 𝛽 ∫ 𝑓[𝐺(𝑦, 𝑧)]𝜙(𝑑𝑧) (7)

Here ℎ(𝑦) ∶= 𝛽 ∫ 𝑢′ [𝐺(𝑦, 𝑧)]𝐺(𝑦, 𝑧)𝜙(𝑑𝑧) is a function that depends only on the primitives.
Equation (7) is a functional equation in 𝑓.
The plan is to solve out for 𝑓 and convert back to 𝑝 via (6).
To solve (7) we’ll use a standard method: convert it to a fixed point problem.
First, we introduce the operator 𝑇 mapping 𝑓 into 𝑇 𝑓 as defined by

(𝑇 𝑓)(𝑦) = ℎ(𝑦) + 𝛽 ∫ 𝑓[𝐺(𝑦, 𝑧)]𝜙(𝑑𝑧) (8)

In what follows, we refer to 𝑇 as the Lucas operator.


The reason we do this is that a solution to (7) now corresponds to a function 𝑓 ∗ satisfying
(𝑇 𝑓 ∗ )(𝑦) = 𝑓 ∗ (𝑦) for all 𝑦.

In other words, a solution is a fixed point of 𝑇 .


This means that we can use fixed point theory to obtain and compute the solution.

A Little Fixed Point Theory

Let 𝑐𝑏ℝ+ be the set of continuous bounded functions 𝑓 ∶ ℝ+ → ℝ+ .


We now show that

1. 𝑇 has exactly one fixed point 𝑓 ∗ in 𝑐𝑏ℝ+ .

2. For any 𝑓 ∈ 𝑐𝑏ℝ+ , the sequence 𝑇 𝑘 𝑓 converges uniformly to 𝑓 ∗ .

(Note: If you find the mathematics heavy going you can take 1–2 as given and skip to the
next section)
Recall the Banach contraction mapping theorem.
It tells us that the previous statements will be true if we can find an 𝛼 < 1 such that

‖𝑇 𝑓 − 𝑇 𝑔‖ ≤ 𝛼‖𝑓 − 𝑔‖, ∀ 𝑓, 𝑔 ∈ 𝑐𝑏ℝ+ (9)

Here ‖ℎ‖ ∶= sup_{𝑥∈ℝ₊} |ℎ(𝑥)|.

To see that (9) is valid, pick any 𝑓, 𝑔 ∈ 𝑐𝑏ℝ+ and any 𝑦 ∈ ℝ+ .


Observe that, since integrals get larger when absolute values are moved to the inside,

|𝑇 𝑓(𝑦) − 𝑇 𝑔(𝑦)| = ∣𝛽 ∫ 𝑓[𝐺(𝑦, 𝑧)]𝜙(𝑑𝑧) − 𝛽 ∫ 𝑔[𝐺(𝑦, 𝑧)]𝜙(𝑑𝑧)∣

≤ 𝛽 ∫ |𝑓[𝐺(𝑦, 𝑧)] − 𝑔[𝐺(𝑦, 𝑧)]| 𝜙(𝑑𝑧)

≤ 𝛽 ∫ ‖𝑓 − 𝑔‖𝜙(𝑑𝑧)

= 𝛽‖𝑓 − 𝑔‖

Since the right-hand side is an upper bound, taking the sup over all 𝑦 on the left-hand side
gives (9) with 𝛼 ∶= 𝛽.

32.3.4 Computation – An Example

The preceding discussion tells that we can compute 𝑓 ∗ by picking any arbitrary 𝑓 ∈ 𝑐𝑏ℝ+ and
then iterating with 𝑇 .
The equilibrium price function 𝑝∗ can then be recovered by 𝑝∗ (𝑦) = 𝑓 ∗ (𝑦)/𝑢′ (𝑦).
Let’s try this when ln 𝑦𝑡+1 = 𝛼 ln 𝑦𝑡 + 𝜎𝜖𝑡+1 where {𝜖𝑡 } is IID and standard normal.
Utility will take the isoelastic form 𝑢(𝑐) = 𝑐1−𝛾 /(1 − 𝛾), where 𝛾 > 0 is the coefficient of
relative risk aversion.
We will set up a LucasTree class to hold parameters of the model

In [3]: class LucasTree:


"""
Class to store parameters of the Lucas tree model.

"""

def __init__(self,
γ=2, # CRRA utility parameter
β=0.95, # Discount factor
α=0.90, # Correlation coefficient
σ=0.1, # Volatility coefficient
grid_size=100):

self.γ, self.β, self.α, self.σ = γ, β, α, σ

# Set the grid interval to contain most of the mass of the


# stationary distribution of the consumption endowment
ssd = self.σ / np.sqrt(1 - self.α**2)
grid_min, grid_max = np.exp(-4 * ssd), np.exp(4 * ssd)
self.grid = np.linspace(grid_min, grid_max, grid_size)
self.grid_size = grid_size

# Set up distribution for shocks


self.ϕ = lognorm(σ)
self.draws = self.ϕ.rvs(500)

self.h = np.empty(self.grid_size)
for i, y in enumerate(self.grid):
self.h[i] = β * np.mean((y**α * self.draws)**(1 - γ))

The following function takes an instance of the LucasTree and generates a jitted version of
the Lucas operator

In [4]: def operator_factory(tree, parallel_flag=True):

"""
Returns approximate Lucas operator, which computes and returns the
updated function Tf on the grid points.

tree is an instance of the LucasTree class

"""

grid, h = tree.grid, tree.h


α, β = tree.α, tree.β
z_vec = tree.draws

@njit(parallel=parallel_flag)
def T(f):
"""
The Lucas operator
"""

# Turn f into a function


Af = lambda x: interp(grid, f, x)

Tf = np.empty_like(f)

# Apply the T operator to f using Monte Carlo integration


for i in prange(len(grid)):
y = grid[i]
Tf[i] = h[i] + β * np.mean(Af(y**α * z_vec))

return Tf

return T

To solve the model, we write a function that iterates using the Lucas operator to find the
fixed point.

In [5]: def solve_model(tree, tol=1e-6, max_iter=500):


"""
Compute the equilibrium price function associated with Lucas
tree

* tree is an instance of LucasTree

"""
# Simplify notation
grid, grid_size = tree.grid, tree.grid_size
γ = tree.γ

T = operator_factory(tree)

i = 0
f = np.ones_like(grid) # Initial guess of f
error = tol + 1
while error > tol and i < max_iter:
Tf = T(f)
error = np.max(np.abs(Tf - f))
f = Tf
i += 1

price = f * grid**γ # Back out price vector

return price

Solving the model and plotting the resulting price function

In [6]: tree = LucasTree()


price_vals = solve_model(tree)

fig, ax = plt.subplots(figsize=(10, 6))


ax.plot(tree.grid, price_vals, label='$p*(y)$')
ax.set_xlabel('$y$')
ax.set_ylabel('price')
ax.legend()
plt.show()

We see that the price is increasing, even if we remove all serial correlation from the endow-
ment process.
The reason is that a larger current endowment reduces current marginal utility.
The price must therefore rise to induce the household to consume the entire endowment (and
hence satisfy the resource constraint).
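As a sanity check on the computation (a side calculation, not part of the original text):
with 𝛾 = 1, the log-utility limit of the isoelastic case, we have 𝑢′(𝑐)𝑐 = 1, so that
ℎ(𝑦) = 𝛽 and the fixed point is the constant 𝑓* = 𝛽/(1 − 𝛽), implying the exact solution
𝑝*(𝑦) = 𝛽𝑦/(1 − 𝛽). The code above only uses marginal utilities, so it handles this case
directly and the numerical solution should line up with the closed form:

# Closed-form check with γ = 1: p*(y) = β y / (1 - β)
tree = LucasTree(γ=1.0)
price_vals = solve_model(tree)
closed_form = tree.β * tree.grid / (1 - tree.β)

print(np.max(np.abs(price_vals - closed_form)))   # should be near zero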
What happens with a more patient consumer?
Here the orange line corresponds to the previous parameters and the green line is price when
𝛽 = 0.98.

We see that when consumers are more patient the asset becomes more valuable, and the price
of the Lucas tree shifts up.
Exercise 1 asks you to replicate this figure.

32.4 Exercises

32.4.1 Exercise 1

Replicate the figure to show how discount factors affect prices.

32.5 Solutions

32.5.1 Exercise 1

In [7]: fig, ax = plt.subplots(figsize=(10, 6))

for β in (.95, 0.98):


tree = LucasTree(β=β)
grid = tree.grid
price_vals = solve_model(tree)
label = rf'$\beta = {β}$'
ax.plot(grid, price_vals, lw=2, alpha=0.7, label=label)

ax.legend(loc='upper left')
ax.set(xlabel='$y$', ylabel='price', xlim=(min(grid), max(grid)))
plt.show()
Chapter 33

Two Modifications of Mean-Variance Portfolio Theory

33.1 Contents

• Overview 33.2
• Appendix 33.3

33.2 Overview

33.2.1 Remarks About Estimating Means and Variances

The famous Black-Litterman (1992) [11] portfolio choice model that we describe in this lec-
ture is motivated by the finding that with high or moderate frequency data, means are more
difficult to estimate than variances.
A model of robust portfolio choice that we’ll describe also begins from the same starting
point.
To begin, we’ll take for granted that means are more difficult to estimate than covariances and will focus on how Black and Litterman, on the one hand, and robust control theorists, on the other, would recommend modifying the mean-variance portfolio choice model to take that into account.
At the end of this lecture, we shall use some rates of convergence results and some simula-
tions to verify how means are more difficult to estimate than variances.
Among the ideas in play in this lecture will be
• Mean-variance portfolio theory
• Bayesian approaches to estimating linear regressions
• A risk-sensitivity operator and its connection to robust control theory
Let’s start with some imports:

In [1]: import numpy as np


import scipy as sp
import scipy.stats as stat
import matplotlib.pyplot as plt


%matplotlib inline
from ipywidgets import interact, FloatSlider

33.2.2 Adjusting Mean-variance Portfolio Choice Theory for Distrust of


Mean Excess Returns

This lecture describes two lines of thought that modify the classic mean-variance portfolio
choice model in ways designed to make its recommendations more plausible.
As we mentioned above, the two approaches build on a common and widespread hunch – that because it is much easier statistically to estimate covariances of excess returns than it is to estimate their means, it makes sense to contemplate the consequences of adjusting investors’ subjective beliefs about mean returns in order to render more sensible decisions.
Both of the adjustments that we describe are designed to confront a widely recognized em-
barrassment to mean-variance portfolio theory, namely, that it usually implies taking very
extreme long-short portfolio positions.

33.2.3 Mean-variance Portfolio Choice

A risk-free security earns one-period net return 𝑟𝑓 .


An 𝑛 × 1 vector of risky securities earns an 𝑛 × 1 vector 𝑟 ⃗ − 𝑟𝑓 1 of excess returns, where 1 is
an 𝑛 × 1 vector of ones.
The excess return vector is multivariate normal with mean 𝜇 and covariance matrix Σ, which
we express either as

𝑟 ⃗ − 𝑟𝑓 1 ∼ 𝒩(𝜇, Σ)

or

𝑟 ⃗ − 𝑟𝑓 1 = 𝜇 + 𝐶𝜖

where 𝜖 ∼ 𝒩(0, 𝐼) is an 𝑛 × 1 random vector.


Let 𝑤 be an 𝑛 × 1 vector of portfolio weights.
A portfolio with weights 𝑤 earns excess returns

𝑤′ (𝑟 ⃗ − 𝑟𝑓 1) ∼ 𝒩(𝑤′ 𝜇, 𝑤′ Σ𝑤)

The mean-variance portfolio choice problem is to choose 𝑤 to maximize

𝑈(𝜇, Σ; 𝑤) = 𝑤′𝜇 − (𝛿/2) 𝑤′Σ𝑤    (1)

where 𝛿 > 0 is a risk-aversion parameter. The first-order condition for maximizing (1) with
respect to the vector 𝑤 is

𝜇 = 𝛿Σ𝑤

which implies the following design of a risky portfolio:

𝑤 = (𝛿Σ)−1 𝜇 (2)

33.2.4 Estimating the Mean and Variance

The key inputs into the portfolio choice model (2) are
• estimates of the parameters 𝜇, Σ of the random excess return vector (𝑟⃗ − 𝑟𝑓1)
• the risk-aversion parameter 𝛿
A standard way of estimating 𝜇 is maximum-likelihood or least squares; that amounts to es-
timating 𝜇 by a sample mean of excess returns and estimating Σ by a sample covariance ma-
trix.

33.2.5 The Black-Litterman Starting Point

When estimates of 𝜇 and Σ from historical sample means and covariances have been com-
bined with reasonable values of the risk-aversion parameter 𝛿 to compute an optimal port-
folio from formula (2), a typical outcome has been 𝑤’s with extreme long and short posi-
tions.
A common reaction to these outcomes is that they are so unreasonable that a portfolio man-
ager cannot recommend them to a customer.

In [2]: np.random.seed(12)

N = 10 # Number of assets
T = 200 # Sample size

# random market portfolio (sum is normalized to 1)


w_m = np.random.rand(N)
w_m = w_m / (w_m.sum())

# True risk premia and variance of excess return (constructed


# so that the Sharpe ratio is 1)
μ = (np.random.randn(N) + 5) /100 # Mean excess return (risk premium)
S = np.random.randn(N, N) # Random matrix for the covariance matrix
V = S @ S.T # Turn the random matrix into symmetric psd
# Make sure that the Sharpe ratio is one
Σ = V * (w_m @ μ)**2 / (w_m @ V @ w_m)

# Risk aversion of market portfolio holder


δ = 1 / np.sqrt(w_m @ Σ @ w_m)

# Generate a sample of excess returns


excess_return = stat.multivariate_normal(μ, Σ)
sample = excess_return.rvs(T)

# Estimate μ and Σ
μ_est = sample.mean(0).reshape(N, 1)
Σ_est = np.cov(sample.T)

w = np.linalg.solve(δ * Σ_est, μ_est)



fig, ax = plt.subplots(figsize=(8, 5))


ax.set_title('Mean-variance portfolio weights recommendation \
and the market portfolio')
ax.plot(np.arange(N)+1, w, 'o', c='k', label='$w$ (mean-variance)')
ax.plot(np.arange(N)+1, w_m, 'o', c='r', label='$w_m$ (market portfolio)')
ax.vlines(np.arange(N)+1, 0, w, lw=1)
ax.vlines(np.arange(N)+1, 0, w_m, lw=1)
ax.axhline(0, c='k')
ax.axhline(-1, c='k', ls='--')
ax.axhline(1, c='k', ls='--')
ax.set_xlabel('Assets')
ax.xaxis.set_ticks(np.arange(1, N+1, 1))
plt.legend(numpoints=1, fontsize=11)
plt.show()

Black and Litterman responded to this situation in the following way:


• They continue to accept (2) as a good model for choosing an optimal portfolio 𝑤.
• They want to continue to allow the customer to express his or her risk tolerance by setting 𝛿.
• Leaving Σ at its maximum-likelihood value, they push 𝜇 away from its maximum-likelihood value in a way designed to make portfolio choices that are more plausible in terms of conforming to what most people actually do.
In particular, given Σ and a reasonable value of 𝛿, Black and Litterman reverse engineered a
vector 𝜇𝐵𝐿 of mean excess returns that makes the 𝑤 implied by formula (2) equal the actual
market portfolio 𝑤𝑚 , so that

𝑤𝑚 = (𝛿Σ)−1 𝜇𝐵𝐿

33.2.6 Details

Let’s define

𝑤𝑚′𝜇 ≡ (𝑟𝑚 − 𝑟𝑓)

as the (scalar) excess return on the market portfolio 𝑤𝑚.

Define

𝜎² = 𝑤𝑚′Σ𝑤𝑚

as the variance of the excess return on the market portfolio 𝑤𝑚.


Define

SR𝑚 = (𝑟𝑚 − 𝑟𝑓)/𝜎

as the Sharpe ratio on the market portfolio 𝑤𝑚.

Let 𝛿𝑚 be the value of the risk aversion parameter that induces an investor to hold the market portfolio in light of the optimal portfolio choice rule (2).
Evidently, portfolio rule (2) then implies that 𝑟𝑚 − 𝑟𝑓 = 𝛿𝑚𝜎², or

𝛿𝑚 = (𝑟𝑚 − 𝑟𝑓)/𝜎²

or

𝛿𝑚 = SR𝑚/𝜎

Following the Black-Litterman philosophy, our first step will be to back out a value of 𝛿𝑚 from
• an estimate of the Sharpe ratio, and
• our maximum likelihood estimate of 𝜎 drawn from our estimates of 𝑤𝑚 and Σ
The second key Black-Litterman step is then to use this value of 𝛿 together with the maximum likelihood estimate of Σ to deduce a 𝜇𝐵𝐿 that verifies portfolio rule (2) at the market portfolio 𝑤 = 𝑤𝑚

𝜇𝑚 = 𝛿𝑚 Σ 𝑤𝑚

The starting point of the Black-Litterman portfolio choice model is thus a pair (𝛿𝑚 , 𝜇𝑚 ) that
tells the customer to hold the market portfolio.

In [3]: # Observed mean excess market return
r_m = w_m @ μ_est

# Estimated variance of the market portfolio
σ_m = w_m @ Σ_est @ w_m

# Sharpe-ratio
sr_m = r_m / np.sqrt(σ_m)

# Risk aversion of market portfolio holder
d_m = r_m / σ_m

# Derive "view" which would induce the market portfolio
μ_m = (d_m * Σ_est @ w_m).reshape(N, 1)

fig, ax = plt.subplots(figsize=(8, 5))
ax.set_title(r'Difference between $\hat{\mu}$ (estimate) and '
             r'$\mu_{BL}$ (market implied)')
ax.plot(np.arange(N)+1, μ_est, 'o', c='k', label=r'$\hat{\mu}$')
ax.plot(np.arange(N)+1, μ_m, 'o', c='r', label=r'$\mu_{BL}$')
ax.vlines(np.arange(N) + 1, μ_m, μ_est, lw=1)
ax.axhline(0, c='k', ls='--')
ax.set_xlabel('Assets')
ax.xaxis.set_ticks(np.arange(1, N+1, 1))
plt.legend(numpoints=1)
plt.show()

33.2.7 Adding Views

Black and Litterman start with a baseline customer who asserts that he or she shares the
market’s views, which means that he or she believes that excess returns are governed by

𝑟 ⃗ − 𝑟𝑓 1 ∼ 𝒩(𝜇𝐵𝐿 , Σ) (3)

Black and Litterman would advise that customer to hold the market portfolio of risky securi-
ties.
Black and Litterman then imagine a consumer who would like to express a view that differs
from the market’s.
The consumer wants appropriately to mix his view with the market’s before using (2) to
choose a portfolio.
Suppose that the customer’s view is expressed by a hunch that rather than (3), excess returns
are governed by

𝑟⃗ − 𝑟𝑓1 ∼ 𝒩(𝜇̂, 𝜏Σ)

where 𝜏 > 0 is a scalar parameter that determines how the decision maker wants to mix his
view 𝜇̂ with the market’s view 𝜇BL .
Black and Litterman would then use a formula like the following one to mix the views 𝜇̂ and 𝜇𝐵𝐿

𝜇̃ = (Σ−1 + (𝜏Σ)−1)−1 (Σ−1𝜇𝐵𝐿 + (𝜏Σ)−1𝜇̂)    (4)

Black and Litterman would then advise the customer to hold the portfolio associated with
these views implied by rule (2):

𝑤̃ = (𝛿Σ)−1 𝜇̃

This portfolio 𝑤̃ will deviate from the portfolio 𝑤𝐵𝐿 in amounts that depend on the mixing
parameter 𝜏 .
If 𝜇̂ is the maximum likelihood estimator and 𝜏 is chosen to weight this view heavily, then the customer’s portfolio will involve big short-long positions.

In [4]: def black_litterman(λ, μ1, μ2, Σ1, Σ2):
    """
    This function calculates the Black-Litterman mixture
    mean excess return and covariance matrix
    """
    Σ1_inv = np.linalg.inv(Σ1)
    Σ2_inv = np.linalg.inv(Σ2)

    μ_tilde = np.linalg.solve(Σ1_inv + λ * Σ2_inv,
                              Σ1_inv @ μ1 + λ * Σ2_inv @ μ2)
    return μ_tilde

τ = 1
μ_tilde = black_litterman(1, μ_m, μ_est, Σ_est, τ * Σ_est)

# The Black-Litterman recommendation for the portfolio weights
w_tilde = np.linalg.solve(δ * Σ_est, μ_tilde)

τ_slider = FloatSlider(min=0.05, max=10, step=0.5, value=τ)

@interact(τ=τ_slider)
def BL_plot(τ):
    μ_tilde = black_litterman(1, μ_m, μ_est, Σ_est, τ * Σ_est)
    w_tilde = np.linalg.solve(δ * Σ_est, μ_tilde)

    fig, ax = plt.subplots(1, 2, figsize=(16, 6))
    ax[0].plot(np.arange(N)+1, μ_est, 'o', c='k',
               label=r'$\hat{\mu}$ (subj view)')
    ax[0].plot(np.arange(N)+1, μ_m, 'o', c='r',
               label=r'$\mu_{BL}$ (market)')
    ax[0].plot(np.arange(N)+1, μ_tilde, 'o', c='y',
               label=r'$\tilde{\mu}$ (mixture)')
    ax[0].vlines(np.arange(N)+1, μ_m, μ_est, lw=1)
    ax[0].axhline(0, c='k', ls='--')
    ax[0].set(xlim=(0, N+1), xlabel='Assets',
              title=r'Relationship between $\hat{\mu}$, '
                    r'$\mu_{BL}$ and $\tilde{\mu}$')
    ax[0].xaxis.set_ticks(np.arange(1, N+1, 1))
    ax[0].legend(numpoints=1)

    ax[1].set_title('Black-Litterman portfolio weight recommendation')
    ax[1].plot(np.arange(N)+1, w, 'o', c='k', label=r'$w$ (mean-variance)')
    ax[1].plot(np.arange(N)+1, w_m, 'o', c='r', label=r'$w_{m}$ (market, BL)')
    ax[1].plot(np.arange(N)+1, w_tilde, 'o', c='y',
               label=r'$\tilde{w}$ (mixture)')
    ax[1].vlines(np.arange(N)+1, 0, w, lw=1)
    ax[1].vlines(np.arange(N)+1, 0, w_m, lw=1)
    ax[1].axhline(0, c='k')
    ax[1].axhline(-1, c='k', ls='--')
    ax[1].axhline(1, c='k', ls='--')
    ax[1].set(xlim=(0, N+1), xlabel='Assets',
              title='Black-Litterman portfolio weight recommendation')
    ax[1].xaxis.set_ticks(np.arange(1, N+1, 1))
    ax[1].legend(numpoints=1)
    plt.show()

33.2.8 Bayes Interpretation of the Black-Litterman Recommendation

Consider the following Bayesian interpretation of the Black-Litterman recommendation.



The prior belief over the mean excess returns is consistent with the market portfolio and is
given by

𝜇 ∼ 𝒩(𝜇𝐵𝐿 , Σ)

Given a particular realization of the mean excess returns 𝜇 one observes the average excess
returns 𝜇̂ on the market according to the distribution

𝜇̂ ∣ 𝜇, Σ ∼ 𝒩(𝜇, 𝜏 Σ)

where 𝜏 is typically small capturing the idea that the variation in the mean is smaller than
the variation of the individual random variable.
Given the realized excess returns one should then update the prior over the mean excess re-
turns according to Bayes rule.
The corresponding posterior over mean excess returns is normally distributed with mean

(Σ−1 + (𝜏Σ)−1)−1 (Σ−1𝜇𝐵𝐿 + (𝜏Σ)−1𝜇̂)

The covariance matrix is

(Σ−1 + (𝜏 Σ)−1 )−1

Hence, the Black-Litterman recommendation is consistent with the Bayes update of the prior
over the mean excess returns in light of the realized average excess returns on the market.
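
As a quick numerical sanity check — our own sketch, not part of the lecture’s code, with hypothetical 3-asset numbers — we can verify that the posterior mean above reproduces the mixture formula (4):

import numpy as np

n, τ = 3, 0.5
Σ = np.eye(n) * 0.04                        # hypothetical covariance matrix
μ_BL = np.full((n, 1), 0.03)                # prior (market-implied) mean
μ_hat = np.array([[0.05], [0.01], [0.02]])  # observed average excess returns

Σ_inv = np.linalg.inv(Σ)
τΣ_inv = np.linalg.inv(τ * Σ)

# Posterior mean from the Bayes update described above
post_mean = np.linalg.solve(Σ_inv + τΣ_inv, Σ_inv @ μ_BL + τΣ_inv @ μ_hat)

# Mixture formula (4); with proportional covariances it is a weighted average
mix = (μ_BL + μ_hat / τ) / (1 + 1 / τ)

print(np.allclose(post_mean, mix))  # True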

33.2.9 Curve Decolletage

Consider two independent “competing” views on the excess market returns

𝑟𝑒⃗ ∼ 𝒩(𝜇𝐵𝐿 , Σ)

and

𝑟𝑒⃗ ∼ 𝒩(𝜇,̂ 𝜏 Σ)

A special feature of the multivariate normal random variable 𝑍 is that its density function depends only on the (Euclidean) length of its realization 𝑧.
Formally, let the 𝑘-dimensional random vector be

𝑍 ∼ 𝒩(𝜇, Σ)

then

𝑍̄ ≡ Σ−1/2(𝑍 − 𝜇) ∼ 𝒩(0, 𝐼)

and so the points where the density takes the same value can be described by the ellipse

𝑧̄ ⋅ 𝑧̄ = (𝑧 − 𝜇)′Σ−1(𝑧 − 𝜇) = 𝑑̄    (5)

where 𝑑̄ ∈ ℝ₊ denotes the (transformation of a) particular density value.

The curves defined by equation (5) can be labeled as iso-likelihood ellipses.

Remark: More generally there is a class of density functions that possesses this
feature, i.e.

∃𝑔 ∶ ℝ+ ↦ ℝ+ and 𝑐 ≥ 0, s.t. the density 𝑓 of 𝑍 has the form 𝑓(𝑧) = 𝑐𝑔(𝑧 ⋅ 𝑧)

This property is called spherical symmetry (see p. 81 in Leamer (1978) [42]).
In our specific example, we can use the pair (𝑑̄1, 𝑑̄2) as being two “likelihood” values for which the corresponding iso-likelihood ellipses in the excess return space are given by

(𝑟𝑒⃗ − 𝜇𝐵𝐿)′Σ−1(𝑟𝑒⃗ − 𝜇𝐵𝐿) = 𝑑̄1

(𝑟𝑒⃗ − 𝜇̂)′(𝜏Σ)−1(𝑟𝑒⃗ − 𝜇̂) = 𝑑̄2

Notice that for particular 𝑑̄1 and 𝑑̄2 values the two ellipses have a tangency point.
These tangency points, indexed by the pairs (𝑑̄1, 𝑑̄2), characterize points 𝑟𝑒⃗ from which there exists no deviation where one can increase the likelihood of one view without decreasing the likelihood of the other view.
The pairs (𝑑̄1, 𝑑̄2) for which there is such a point outline a curve in the excess return space.
This curve is reminiscent of the Pareto curve in an Edgeworth-box setting.
Dickey (1975) [19] calls it a curve decolletage.
Leamer (1978) [42] calls it an information contract curve and describes it by the following program: maximize the likelihood of one view, say the Black-Litterman recommendation, while keeping the likelihood of the other view at least at a prespecified constant 𝑑̄2

𝑑̄1(𝑑̄2) ≡ max_{𝑟𝑒⃗} (𝑟𝑒⃗ − 𝜇𝐵𝐿)′Σ−1(𝑟𝑒⃗ − 𝜇𝐵𝐿)

subject to (𝑟𝑒⃗ − 𝜇̂)′(𝜏Σ)−1(𝑟𝑒⃗ − 𝜇̂) ≥ 𝑑̄2

Denoting the multiplier on the constraint by 𝜆, the first-order condition is

2(𝑟𝑒⃗ − 𝜇𝐵𝐿)′Σ−1 + 2𝜆(𝑟𝑒⃗ − 𝜇̂)′(𝜏Σ)−1 = 0

which defines the information contract curve between 𝜇𝐵𝐿 and 𝜇̂

𝑟𝑒⃗ = (Σ−1 + 𝜆(𝜏Σ)−1)−1(Σ−1𝜇𝐵𝐿 + 𝜆(𝜏Σ)−1𝜇̂)    (6)

Note that if 𝜆 = 1, (6) is equivalent to (4) and it identifies one point on the information contract curve.
Furthermore, because 𝜆 is a function of the minimum likelihood 𝑑̄2 on the RHS of the constraint, by varying 𝑑̄2 (or 𝜆), we can trace out the whole curve as the figure below illustrates.

In [5]: np.random.seed(1987102)

N = 2  # Number of assets
T = 200  # Sample size
τ = 0.8

# Random market portfolio (sum is normalized to 1)
w_m = np.random.rand(N)
w_m = w_m / (w_m.sum())

μ = (np.random.randn(N) + 5) / 100
S = np.random.randn(N, N)
V = S @ S.T
Σ = V * (w_m @ μ)**2 / (w_m @ V @ w_m)

excess_return = stat.multivariate_normal(μ, Σ)
sample = excess_return.rvs(T)

μ_est = sample.mean(0).reshape(N, 1)
Σ_est = np.cov(sample.T)

σ_m = w_m @ Σ_est @ w_m
d_m = (w_m @ μ_est) / σ_m
μ_m = (d_m * Σ_est @ w_m).reshape(N, 1)

N_r1, N_r2 = 100, 100
r1 = np.linspace(-0.04, .1, N_r1)
r2 = np.linspace(-0.02, .15, N_r2)

λ_grid = np.linspace(.001, 20, 100)
curve = np.asarray([black_litterman(λ, μ_m, μ_est, Σ_est,
                                    τ * Σ_est).flatten() for λ in λ_grid])

λ_slider = FloatSlider(min=.1, max=7, step=.5, value=1)

@interact(λ=λ_slider)
def decolletage(λ):
    dist_r_BL = stat.multivariate_normal(μ_m.squeeze(), Σ_est)
    dist_r_hat = stat.multivariate_normal(μ_est.squeeze(), τ * Σ_est)

    X, Y = np.meshgrid(r1, r2)
    Z_BL = np.zeros((N_r1, N_r2))
    Z_hat = np.zeros((N_r1, N_r2))

    for i in range(N_r1):
        for j in range(N_r2):
            Z_BL[i, j] = dist_r_BL.pdf(np.hstack([X[i, j], Y[i, j]]))
            Z_hat[i, j] = dist_r_hat.pdf(np.hstack([X[i, j], Y[i, j]]))

    μ_tilde = black_litterman(λ, μ_m, μ_est, Σ_est, τ * Σ_est).flatten()

    fig, ax = plt.subplots(figsize=(10, 6))
    ax.contourf(X, Y, Z_hat, cmap='viridis', alpha=.4)
    ax.contourf(X, Y, Z_BL, cmap='viridis', alpha=.4)
    ax.contour(X, Y, Z_BL, [dist_r_BL.pdf(μ_tilde)], cmap='viridis', alpha=.9)
    ax.contour(X, Y, Z_hat, [dist_r_hat.pdf(μ_tilde)], cmap='viridis', alpha=.9)
    ax.scatter(μ_est[0], μ_est[1])
    ax.scatter(μ_m[0], μ_m[1])
    ax.scatter(μ_tilde[0], μ_tilde[1], c='k', s=20*3)
    ax.plot(curve[:, 0], curve[:, 1], c='k')
    ax.axhline(0, c='k', alpha=.8)
    ax.axvline(0, c='k', alpha=.8)
    ax.set_xlabel(r'Excess return on the first asset, $r_{e, 1}$')
    ax.set_ylabel(r'Excess return on the second asset, $r_{e, 2}$')
    ax.text(μ_est[0] + 0.003, μ_est[1], r'$\hat{\mu}$')
    ax.text(μ_m[0] + 0.003, μ_m[1] + 0.005, r'$\mu_{BL}$')
    plt.show()

Note that the curve connecting the two points 𝜇̂ and 𝜇𝐵𝐿 is a straight line, which reflects the fact that the covariance matrices of the two competing distributions (views) are proportional to each other.
To illustrate that this is not necessarily the case, consider another example using the same parameter values, except that the “second view” constituting the constraint has covariance matrix 𝜏𝐼 instead of 𝜏Σ.
This leads to the following figure, in which the curve connecting 𝜇̂ and 𝜇𝐵𝐿 is bent

In [6]: λ_grid = np.linspace(.001, 20000, 1000)
curve = np.asarray([black_litterman(λ, μ_m, μ_est, Σ_est,
                                    τ * np.eye(N)).flatten() for λ in λ_grid])

λ_slider = FloatSlider(min=5, max=1500, step=100, value=200)

@interact(λ=λ_slider)
def decolletage(λ):
    dist_r_BL = stat.multivariate_normal(μ_m.squeeze(), Σ_est)
    dist_r_hat = stat.multivariate_normal(μ_est.squeeze(), τ * np.eye(N))

    X, Y = np.meshgrid(r1, r2)
    Z_BL = np.zeros((N_r1, N_r2))
    Z_hat = np.zeros((N_r1, N_r2))

    for i in range(N_r1):
        for j in range(N_r2):
            Z_BL[i, j] = dist_r_BL.pdf(np.hstack([X[i, j], Y[i, j]]))
            Z_hat[i, j] = dist_r_hat.pdf(np.hstack([X[i, j], Y[i, j]]))

    μ_tilde = black_litterman(λ, μ_m, μ_est, Σ_est, τ * np.eye(N)).flatten()

    fig, ax = plt.subplots(figsize=(10, 6))
    ax.contourf(X, Y, Z_hat, cmap='viridis', alpha=.4)
    ax.contourf(X, Y, Z_BL, cmap='viridis', alpha=.4)
    ax.contour(X, Y, Z_BL, [dist_r_BL.pdf(μ_tilde)], cmap='viridis', alpha=.9)
    ax.contour(X, Y, Z_hat, [dist_r_hat.pdf(μ_tilde)], cmap='viridis', alpha=.9)
    ax.scatter(μ_est[0], μ_est[1])
    ax.scatter(μ_m[0], μ_m[1])
    ax.scatter(μ_tilde[0], μ_tilde[1], c='k', s=20*3)
    ax.plot(curve[:, 0], curve[:, 1], c='k')
    ax.axhline(0, c='k', alpha=.8)
    ax.axvline(0, c='k', alpha=.8)
    ax.set_xlabel(r'Excess return on the first asset, $r_{e, 1}$')
    ax.set_ylabel(r'Excess return on the second asset, $r_{e, 2}$')
    ax.text(μ_est[0] + 0.003, μ_est[1], r'$\hat{\mu}$')
    ax.text(μ_m[0] + 0.003, μ_m[1] + 0.005, r'$\mu_{BL}$')
    plt.show()

33.2.10 Black-Litterman Recommendation as Regularization

First, consider the OLS regression

min_𝛽 ‖𝑋𝛽 − 𝑦‖²

which yields the solution

𝛽̂𝑂𝐿𝑆 = (𝑋′𝑋)−1𝑋′𝑦

A common performance measure of estimators is the mean squared error (MSE).
An estimator is “good” if its MSE is relatively small. Suppose that 𝛽0 is the “true” value of the coefficient; then the MSE of the OLS estimator is

mse(𝛽̂𝑂𝐿𝑆, 𝛽0) := 𝔼‖𝛽̂𝑂𝐿𝑆 − 𝛽0‖² = 𝔼‖𝛽̂𝑂𝐿𝑆 − 𝔼𝛽̂𝑂𝐿𝑆‖² + ‖𝔼𝛽̂𝑂𝐿𝑆 − 𝛽0‖²

where the first term is the variance and the second term is the squared bias.

From this decomposition, one can see that in order for the MSE to be small, both the bias
and the variance terms must be small.
For example, consider the case when 𝑋 is a 𝑇-vector of ones (where 𝑇 is the sample size), so that 𝛽̂𝑂𝐿𝑆 is simply the sample average, while 𝛽0 ∈ ℝ is defined by the true mean of 𝑦.
In this example the MSE is

mse(𝛽̂𝑂𝐿𝑆, 𝛽0) = (1/𝑇²) 𝔼(∑_{𝑡=1}^{𝑇}(𝑦𝑡 − 𝛽0))² + 0

where the first term is the variance and the bias is zero.

However, because there is a trade-off between the estimator’s bias and variance, there are
cases when by permitting a small bias we can substantially reduce the variance so overall the
MSE gets smaller.
A typical scenario when this proves to be useful is when the number of coefficients to be estimated is large relative to the sample size.
In these cases, one approach to handling the bias-variance trade-off is the so-called Tikhonov regularization.
A general form with regularization matrix Γ can be written as

min_𝛽 {‖𝑋𝛽 − 𝑦‖² + ‖Γ(𝛽 − 𝛽̃)‖²}

which yields the solution

𝛽̂𝑅𝑒𝑔 = (𝑋′𝑋 + Γ′Γ)−1(𝑋′𝑦 + Γ′Γ𝛽̃)

Substituting the value of 𝛽̂𝑂𝐿𝑆 yields

𝛽̂𝑅𝑒𝑔 = (𝑋′𝑋 + Γ′Γ)−1(𝑋′𝑋𝛽̂𝑂𝐿𝑆 + Γ′Γ𝛽̃)

Often, the regularization matrix takes the form Γ = 𝜆𝐼 with 𝜆 > 0 and 𝛽̃ = 0.
Then the Tikhonov regularization is equivalent to what is called ridge regression in statistics.
To illustrate how this estimator addresses the bias-variance trade-off, we compute the MSE of the ridge estimator

mse(𝛽̂ridge, 𝛽0) = (1/(𝑇 + 𝜆)²) 𝔼(∑_{𝑡=1}^{𝑇}(𝑦𝑡 − 𝛽0))² + (𝜆/(𝑇 + 𝜆))² 𝛽0²

where the first term is the variance and the second term is the squared bias.
The ridge regression shrinks the coefficients of the estimated vector towards zero relative to the OLS estimates, thus reducing the variance term at the cost of introducing a “small” bias.
However, there is nothing special about the zero vector.
When 𝛽̃ ≠ 0, shrinkage occurs in the direction of 𝛽̃.
Now, we can give a regularization interpretation of the Black-Litterman portfolio recommendation.
To this end, first simplify the equation (4) characterizing the Black-Litterman recommendation

𝜇̃ = (Σ−1 + (𝜏Σ)−1)−1(Σ−1𝜇𝐵𝐿 + (𝜏Σ)−1𝜇̂)
  = (1 + 𝜏−1)−1ΣΣ−1(𝜇𝐵𝐿 + 𝜏−1𝜇̂)
  = (1 + 𝜏−1)−1(𝜇𝐵𝐿 + 𝜏−1𝜇̂)

In our case, 𝜇̂ is the estimated mean excess returns of securities. This could be written as a vector autoregression where
• 𝑦 is the stacked vector of observed excess returns of size (𝑁𝑇 × 1) – 𝑁 securities and 𝑇 observations
• 𝑋 = 𝑇−1(𝐼𝑁 ⊗ 𝜄𝑇), where 𝐼𝑁 is the identity matrix and 𝜄𝑇 is a column vector of ones
Correspondingly, the OLS regression of 𝑦 on 𝑋 would yield the mean excess returns as coefficients.

With Γ = √𝜏 𝑇−1(𝐼𝑁 ⊗ 𝜄𝑇), so that Γ′Γ = 𝜏𝑋′𝑋, we can write the regularized version of the mean excess return estimation

𝛽̂𝑅𝑒𝑔 = (𝑋′𝑋 + Γ′Γ)−1(𝑋′𝑋𝛽̂𝑂𝐿𝑆 + Γ′Γ𝛽̃)
     = (1 + 𝜏)−1𝑋′𝑋(𝑋′𝑋)−1(𝛽̂𝑂𝐿𝑆 + 𝜏𝛽̃)
     = (1 + 𝜏)−1(𝛽̂𝑂𝐿𝑆 + 𝜏𝛽̃)
     = (1 + 𝜏−1)−1(𝜏−1𝛽̂𝑂𝐿𝑆 + 𝛽̃)

Given that 𝛽̂𝑂𝐿𝑆 = 𝜇̂ and 𝛽̃ = 𝜇𝐵𝐿 in the Black-Litterman model, we have the following interpretation of the model’s recommendation.
The estimated (personal) view of the mean excess returns, 𝜇̂, that would lead to extreme short-long positions is “shrunk” towards the conservative market view, 𝜇𝐵𝐿, that leads to the more conservative market portfolio.
So the Black-Litterman procedure results in a recommendation that is a compromise between the conservative market portfolio and the more extreme portfolio that is implied by estimated “personal” views.
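
The shrinkage algebra above is easy to verify numerically. The following sketch is our own illustration, with a hypothetical data-generating process and the simplification 𝑋 = 𝐼𝑁 ⊗ 𝜄𝑇 and Γ = √𝜏 𝑋 (so that Γ′Γ = 𝜏𝑋′𝑋, which is all the derivation requires):

import numpy as np

np.random.seed(0)
N, T, τ = 4, 50, 0.25
y = 0.02 + 0.05 * np.random.randn(N * T, 1)   # stacked excess returns (hypothetical)

X = np.kron(np.eye(N), np.ones((T, 1)))       # OLS on X recovers per-asset sample means
β_ols = np.linalg.solve(X.T @ X, X.T @ y)

β_tilde = np.full((N, 1), 0.01)               # shrinkage target, playing the role of μ_BL
Γ = np.sqrt(τ) * X                            # so that Γ'Γ = τ X'X

β_reg = np.linalg.solve(X.T @ X + Γ.T @ Γ,
                        X.T @ y + Γ.T @ Γ @ β_tilde)

# Matches the closed form (1 + τ)^(-1) (β_ols + τ β_tilde)
print(np.allclose(β_reg, (β_ols + τ * β_tilde) / (1 + τ)))  # True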

33.2.11 Digression on A Robust Control Operator

The Black-Litterman approach is partly inspired by the econometric insight that it is easier
to estimate covariances of excess returns than the means.
That is what gave Black and Litterman license to adjust investors’ perception of mean excess
returns while not tampering with the covariance matrix of excess returns.
Robust control theory is another approach that also hinges on adjusting mean excess returns but not covariances.
Associated with a robust control problem is what Hansen and Sargent [29], [26] call a T oper-
ator.
Let’s define the T operator as it applies to the problem at hand.
Let 𝑥 be an 𝑛 × 1 Gaussian random vector with mean vector 𝜇 and covariance matrix Σ =
𝐶𝐶 ′ . This means that 𝑥 can be represented as

𝑥 = 𝜇 + 𝐶𝜖

where 𝜖 ∼ 𝒩(0, 𝐼).


Let 𝜙(𝜖) denote the associated standardized Gaussian density.
Let 𝑚(𝜖, 𝜇) be a likelihood ratio, meaning that it satisfies
• 𝑚(𝜖, 𝜇) > 0
• ∫ 𝑚(𝜖, 𝜇)𝜙(𝜖)𝑑𝜖 = 1
That is, 𝑚(𝜖, 𝜇) is a non-negative random variable with mean 1.
Multiplying 𝜙(𝜖) by the likelihood ratio 𝑚(𝜖, 𝜇) produces a distorted distribution for 𝜖,
namely

𝜙̃(𝜖) = 𝑚(𝜖, 𝜇)𝜙(𝜖)

The next concept that we need is the entropy of the distorted distribution 𝜙̃ with respect to 𝜙.
Entropy is defined as

ent = ∫ log 𝑚(𝜖, 𝜇)𝑚(𝜖, 𝜇)𝜙(𝜖)𝑑𝜖

or

ent = ∫ log 𝑚(𝜖, 𝜇) 𝜙̃(𝜖) 𝑑𝜖

That is, relative entropy is the expected value of the log likelihood ratio log 𝑚 where the expectation is taken with respect to the twisted density 𝜙̃.

Relative entropy is non-negative. It is a measure of the discrepancy between two probability distributions.
As such, it plays an important role in governing the behavior of statistical tests designed to
discriminate one probability distribution from another.
We are ready to define the T operator.
Let 𝑉 (𝑥) be a value function.
Define

T(𝑉(𝑥)) = min_{𝑚(𝜖,𝜇)} ∫ 𝑚(𝜖, 𝜇)[𝑉(𝜇 + 𝐶𝜖) + 𝜃 log 𝑚(𝜖, 𝜇)] 𝜙(𝜖) 𝑑𝜖
        = −𝜃 log ∫ exp(−𝑉(𝜇 + 𝐶𝜖)/𝜃) 𝜙(𝜖) 𝑑𝜖

This asserts that T is an indirect utility function for a minimization problem in which an evil agent chooses a distorted probability distribution 𝜙̃ to lower expected utility, subject to a penalty term that gets bigger the larger is relative entropy.
Here the penalty parameter

𝜃 ∈ [𝜃, +∞]

is a robustness parameter; when it is +∞, there is no scope for the minimizing agent to distort the distribution, so no robustness to alternative distributions is acquired. As 𝜃 is lowered, more robustness is achieved.
Note: The T operator is sometimes called a risk-sensitivity operator.
We shall apply T to the special case of a linear value function 𝑤′(𝑟⃗ − 𝑟𝑓1) where 𝑟⃗ − 𝑟𝑓1 ∼ 𝒩(𝜇, Σ) or 𝑟⃗ − 𝑟𝑓1 = 𝜇 + 𝐶𝜖 with 𝜖 ∼ 𝒩(0, 𝐼).
The associated worst-case distribution of 𝜖 is Gaussian with mean 𝑣 = −𝜃−1 𝐶 ′ 𝑤 and co-
variance matrix 𝐼 (When the value function is affine, the worst-case distribution distorts the
mean vector of 𝜖 but not the covariance matrix of 𝜖).
For utility function argument 𝑤′(𝑟⃗ − 𝑟𝑓1)

T(𝑤′(𝑟⃗ − 𝑟𝑓1)) = 𝑤′𝜇 + 𝜁 − (1/(2𝜃)) 𝑤′Σ𝑤

and entropy is

𝑣′𝑣/2 = (1/(2𝜃²)) 𝑤′𝐶𝐶′𝑤
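
These formulas are straightforward to check by Monte Carlo. The sketch below is our own check with made-up parameter values; it compares the closed form (the constant 𝜁 works out to zero in this linear-Gaussian case) with a direct simulation of −𝜃 log ∫ exp(−𝑉/𝜃)𝜙(𝜖)𝑑𝜖:

import numpy as np

np.random.seed(0)
n, θ = 3, 2.0
w = np.array([0.5, 0.3, 0.2])
μ = np.array([0.04, 0.02, 0.03])
C = np.array([[0.05, 0.00, 0.00],
              [0.01, 0.04, 0.00],
              [0.00, 0.02, 0.03]])
Σ = C @ C.T

# Monte Carlo evaluation of T(V) = -θ log E[exp(-V(μ + Cε)/θ)] for V(x) = w'x
ε = np.random.randn(1_000_000, n)
V = (μ + ε @ C.T) @ w
T_mc = -θ * np.log(np.mean(np.exp(-V / θ)))

# Closed form: w'μ - (1/(2θ)) w'Σw
T_closed = w @ μ - (w @ Σ @ w) / (2 * θ)

print(T_mc, T_closed)  # the two values should nearly coincide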

33.2.12 A Robust Mean-variance Portfolio Model

According to criterion (1), the mean-variance portfolio choice problem chooses 𝑤 to maximize

𝐸[𝑤′(𝑟⃗ − 𝑟𝑓1)] − (𝛿/2) var[𝑤′(𝑟⃗ − 𝑟𝑓1)]

which equals

𝑤′𝜇 − (𝛿/2) 𝑤′Σ𝑤

A robust decision maker can be modeled as replacing the mean return 𝐸[𝑤′(𝑟⃗ − 𝑟𝑓1)] with the risk-sensitive criterion

T[𝑤′(𝑟⃗ − 𝑟𝑓1)] = 𝑤′𝜇 − (1/(2𝜃)) 𝑤′Σ𝑤

that comes from replacing the mean 𝜇 of 𝑟⃗ − 𝑟𝑓1 with the worst-case mean

𝜇 − 𝜃−1Σ𝑤

Notice how the worst-case mean vector depends on the portfolio 𝑤.


The operator T is the indirect utility function that emerges from solving a problem in which
an agent who chooses probabilities does so in order to minimize the expected utility of a max-
imizing agent (in our case, the maximizing agent chooses portfolio weights 𝑤).
The robust version of the mean-variance portfolio choice problem is then to choose a portfolio 𝑤 that maximizes

T[𝑤′(𝑟⃗ − 𝑟𝑓1)] − (𝛿/2) 𝑤′Σ𝑤

or

𝑤′(𝜇 − 𝜃−1Σ𝑤) − (𝛿/2) 𝑤′Σ𝑤    (7)

The maximizer of (7) is

𝑤rob = (1/(𝛿 + 𝛾)) Σ−1𝜇

where 𝛾 ≡ 𝜃−1 is sometimes called the risk-sensitivity parameter.
An increase in the risk-sensitivity parameter 𝛾 shrinks the portfolio weights toward zero in the same way that an increase in risk aversion does.
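
To see the shrinkage at work, here is a small two-asset sketch (the numbers are ours and purely illustrative) comparing the standard mean-variance weights from (2) with the robust weights for a few values of 𝛾:

import numpy as np

δ = 2.0
μ = np.array([0.05, 0.03])
Σ = np.array([[0.04, 0.01],
              [0.01, 0.02]])

w_mv = np.linalg.solve(δ * Σ, μ)         # standard mean-variance weights, formula (2)

for γ in (0.0, 1.0, 5.0):                # γ = θ^(-1); γ = 0 recovers (2)
    w_rob = np.linalg.solve((δ + γ) * Σ, μ)
    print(γ, w_rob, w_rob / w_mv)        # weights scale down by the factor δ/(δ+γ)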

33.3 Appendix

We want to illustrate the “folk theorem” that with high or moderate frequency data, it is
more difficult to estimate means than variances.
In order to operationalize this statement, we take two analog estimators:
• sample average: 𝑋̄𝑁 = (1/𝑁) ∑_{𝑖=1}^{𝑁} 𝑋𝑖
• sample variance: 𝑆𝑁 = (1/(𝑁 − 1)) ∑_{𝑖=1}^{𝑁} (𝑋𝑖 − 𝑋̄𝑁)²
to estimate the unconditional mean and unconditional variance of the random variable 𝑋,
respectively.

To measure the “difficulty of estimation”, we use mean squared error (MSE), that is the aver-
age squared difference between the estimator and the true value.
Assuming that the process {𝑋𝑖} is ergodic, both analog estimators are known to converge to their true values as the sample size 𝑁 goes to infinity.
More precisely, for all 𝜀 > 0

lim_{𝑁→∞} 𝑃{|𝑋̄𝑁 − 𝔼𝑋| > 𝜀} = 0

and

lim_{𝑁→∞} 𝑃{|𝑆𝑁 − 𝕍𝑋| > 𝜀} = 0

A necessary condition for these convergence results is that the associated MSEs vanish as 𝑁
goes to infinity, or in other words,

MSE(𝑋̄ 𝑁 , 𝔼𝑋) = 𝑜(1) and MSE(𝑆𝑁 , 𝕍𝑋) = 𝑜(1)

Even if the MSEs converge to zero, the associated rates might be different. Looking at the limit of the relative MSE (as the sample size grows to infinity)

MSE(𝑆𝑁, 𝕍𝑋)/MSE(𝑋̄𝑁, 𝔼𝑋) = 𝑜(1)/𝑜(1) →_{𝑁→∞} 𝐵

can inform us about the relative (asymptotic) rates.


We will show that in general, with dependent data, the limit 𝐵 depends on the sampling fre-
quency.
In particular, we find that the rate of convergence of the variance estimator is less sensitive to
increased sampling frequency than the rate of convergence of the mean estimator.
Hence, we can expect the relative asymptotic rate, 𝐵, to get smaller with higher frequency
data, illustrating that “it is more difficult to estimate means than variances”.
That is, we need significantly more data to obtain a given precision of the mean estimate
than for our variance estimate.

33.3.1 A Special Case – IID Sample

We start our analysis with the benchmark case of IID data. Consider a sample of size 𝑁 gen-
erated by the following IID process,

𝑋𝑖 ∼ 𝒩(𝜇, 𝜎2 )

Taking 𝑋̄𝑁 to estimate the mean, the MSE is

MSE(𝑋̄𝑁, 𝜇) = 𝜎²/𝑁

Taking 𝑆𝑁 to estimate the variance, the MSE is

MSE(𝑆𝑁, 𝜎²) = 2𝜎⁴/(𝑁 − 1)

Both estimators are unbiased and hence the MSEs reflect the corresponding variances of the
estimators.
Furthermore, both MSEs are 𝑜(1) with a (multiplicative) factor of difference in their rates of convergence:

MSE(𝑆𝑁, 𝜎²)/MSE(𝑋̄𝑁, 𝜇) = 2𝜎²𝑁/(𝑁 − 1) →_{𝑁→∞} 2𝜎²

We are interested in how this (asymptotic) relative rate of convergence changes as increasing
sampling frequency puts dependence into the data.
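
Before adding dependence, a quick Monte Carlo sketch (ours, not part of the lecture’s code) confirms the IID formulas and the limiting ratio 2𝜎²:

import numpy as np

np.random.seed(0)
μ, σ, N, M = 0.0, 0.5, 200, 50_000     # M independent samples, each of size N

X = np.random.normal(μ, σ, size=(M, N))
mse_mean = np.mean((X.mean(axis=1) - μ)**2)
mse_var = np.mean((X.var(axis=1, ddof=1) - σ**2)**2)

print(mse_mean, σ**2 / N)              # ≈ σ²/N
print(mse_var, 2 * σ**4 / (N - 1))     # ≈ 2σ⁴/(N-1)
print(mse_var / mse_mean, 2 * σ**2)    # ratio ≈ 2σ²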

33.3.2 Dependence and Sampling Frequency

To investigate how sampling frequency affects relative rates of convergence, we assume that
the data are generated by a mean-reverting continuous time process of the form

𝑑𝑋𝑡 = −𝜅(𝑋𝑡 − 𝜇)𝑑𝑡 + 𝜎𝑑𝑊𝑡

where 𝜇 is the unconditional mean, 𝜅 > 0 is a persistence parameter, and {𝑊𝑡} is a standardized Brownian motion.
Observations arising from this system in particular discrete periods 𝒯(ℎ) ≡ {𝑛ℎ : 𝑛 ∈ ℤ} with ℎ > 0 can be described by the following process

𝑋𝑡+1 = (1 − exp(−𝜅ℎ))𝜇 + exp(−𝜅ℎ)𝑋𝑡 + 𝜖𝑡,ℎ

where

𝜖𝑡,ℎ ∼ 𝒩(0, Σℎ) with Σℎ = 𝜎²(1 − exp(−2𝜅ℎ))/(2𝜅)

We call ℎ the frequency parameter, whereas 𝑛 represents the number of lags between observa-
tions.
Hence, the effective distance between two observations 𝑋𝑡 and 𝑋𝑡+𝑛 in the discrete time nota-
tion is equal to ℎ ⋅ 𝑛 in terms of the underlying continuous time process.
Straightforward calculations show that the autocorrelation function for the stochastic process
{𝑋𝑡 }𝑡∈𝒯(ℎ) is

Γℎ (𝑛) ≡ corr(𝑋𝑡+ℎ𝑛 , 𝑋𝑡 ) = exp(−𝜅ℎ𝑛)

and the auto-covariance function is

𝛾ℎ(𝑛) ≡ cov(𝑋𝑡+ℎ𝑛, 𝑋𝑡) = exp(−𝜅ℎ𝑛)𝜎²/(2𝜅)

It follows that if 𝑛 = 0, the unconditional variance is given by 𝛾ℎ(0) = 𝜎²/(2𝜅), irrespective of the sampling frequency.
The following figure illustrates how the dependence between the observations is related to the
sampling frequency
• For any given ℎ, the autocorrelation converges to zero as we increase the distance – 𝑛–
between the observations. This represents the “weak dependence” of the 𝑋 process.
• Moreover, for a fixed lag length, 𝑛, the dependence vanishes as the sampling frequency
goes to infinity. In fact, letting ℎ go to ∞ gives back the case of IID data.

In [7]: μ = .0
κ = .1
σ = .5
var_uncond = σ**2 / (2 * κ)

n_grid = np.linspace(0, 40, 100)


autocorr_h1 = np.exp(-κ * n_grid * 1)
autocorr_h2 = np.exp(-κ * n_grid * 2)
autocorr_h5 = np.exp(-κ * n_grid * 5)
autocorr_h1000 = np.exp(-κ * n_grid * 1e8)

fig, ax = plt.subplots(figsize=(8, 4))


ax.plot(n_grid, autocorr_h1, label=r'$h=1$', c='darkblue', lw=2)
ax.plot(n_grid, autocorr_h2, label=r'$h=2$', c='darkred', lw=2)
ax.plot(n_grid, autocorr_h5, label=r'$h=5$', c='orange', lw=2)
ax.plot(n_grid, autocorr_h1000, label=r'"$h=\infty$"', c='darkgreen', lw=2)
ax.legend()
ax.grid()
ax.set(title=r'Autocorrelation functions, $\Gamma_h(n)$',
xlabel=r'Lags between observations, $n$')
plt.show()

33.3.3 Frequency and the Mean Estimator

Consider again the AR(1) process generated by discrete sampling with frequency ℎ. Assume
that we have a sample of size 𝑁 and we would like to estimate the unconditional mean – in
our case the true mean is 𝜇.
Again, the sample average is an unbiased estimator of the unconditional mean

𝔼[𝑋̄𝑁] = (1/𝑁) ∑_{𝑖=1}^{𝑁} 𝔼[𝑋𝑖] = 𝔼[𝑋0] = 𝜇

The variance of the sample mean is given by

𝕍(𝑋̄𝑁) = 𝕍((1/𝑁) ∑_{𝑖=1}^{𝑁} 𝑋𝑖)
       = (1/𝑁²) (∑_{𝑖=1}^{𝑁} 𝕍(𝑋𝑖) + 2 ∑_{𝑖=1}^{𝑁−1} ∑_{𝑠=𝑖+1}^{𝑁} cov(𝑋𝑖, 𝑋𝑠))
       = (1/𝑁²) (𝑁𝛾(0) + 2 ∑_{𝑖=1}^{𝑁−1} 𝑖 ⋅ 𝛾(ℎ ⋅ (𝑁 − 𝑖)))
       = (1/𝑁²) (𝑁 𝜎²/(2𝜅) + 2 ∑_{𝑖=1}^{𝑁−1} 𝑖 ⋅ exp(−𝜅ℎ(𝑁 − 𝑖)) 𝜎²/(2𝜅))

It is explicit in the above equation that time dependence in the data inflates the variance of
the mean estimator through the covariance terms. Moreover, as we can see, a higher sampling
frequency—smaller ℎ—makes all the covariance terms larger, everything else being fixed. This
implies a relatively slower rate of convergence of the sample average for high-frequency data.
Intuitively, the stronger dependence across observations for high-frequency data reduces the
“information content” of each observation relative to the IID case.
We can upper bound the variance term in the following way

𝕍(𝑋̄𝑁) = (1/𝑁²) (𝑁 𝜎²/(2𝜅) + 2 ∑_{𝑖=1}^{𝑁−1} 𝑖 ⋅ exp(−𝜅ℎ(𝑁 − 𝑖)) 𝜎²/(2𝜅))
       ≤ (𝜎²/(2𝜅𝑁)) (1 + 2 ∑_{𝑖=1}^{𝑁−1} exp(−𝜅ℎ𝑖))
       = (𝜎²/(2𝜅𝑁)) (1 + 2 (1 − exp(−𝜅ℎ)^{𝑁−1}) / (1 − exp(−𝜅ℎ)))

where 𝜎²/(2𝜅𝑁) is the variance of the IID case.

Asymptotically, the exp(−𝜅ℎ)^{𝑁−1} term vanishes and the dependence in the data inflates the benchmark IID variance by a factor of

(1 + 2/(1 − exp(−𝜅ℎ)))

This long run factor is larger the higher is the frequency (the smaller is ℎ).
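
A couple of lines of Python make the point concrete; this small sketch (ours) uses the value 𝜅 = 0.1 that appears in the simulations below:

import numpy as np

κ = 0.1  # persistence parameter, as in the simulations below

for h in (0.1, 0.5, 1.0, 5.0, 20.0):
    factor = 1 + 2 / (1 - np.exp(-κ * h))
    print(f"h = {h:5.1f}: variance inflation factor ≈ {factor:8.2f}")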

Therefore, we expect the asymptotic relative MSEs, 𝐵, to change with time-dependent data.
We just saw that the mean estimator’s rate is roughly changing by a factor of

(1 + 2/(1 − exp(−𝜅ℎ)))

Unfortunately, the variance estimator’s MSE is harder to derive.
Nonetheless, we can approximate it by using (large sample) simulations, thus getting an idea about how the asymptotic relative MSE changes with the sampling frequency ℎ relative to the IID case that we compute in closed form.

In [8]: def sample_generator(h, N, M):
    ϕ = (1 - np.exp(-κ * h)) * μ
    ρ = np.exp(-κ * h)
    s = σ**2 * (1 - np.exp(-2 * κ * h)) / (2 * κ)

    mean_uncond = μ
    std_uncond = np.sqrt(σ**2 / (2 * κ))

    ε_path = stat.norm(0, np.sqrt(s)).rvs((M, N))

    y_path = np.zeros((M, N + 1))
    y_path[:, 0] = stat.norm(mean_uncond, std_uncond).rvs(M)

    for i in range(N):
        y_path[:, i + 1] = ϕ + ρ * y_path[:, i] + ε_path[:, i]

    return y_path

In [9]: # Generate large sample for different frequencies
N_app, M_app = 1000, 30000  # Sample size, number of simulations
h_grid = np.linspace(.1, 80, 30)

var_est_store = []
mean_est_store = []
labels = []

for h in h_grid:
    labels.append(h)
    sample = sample_generator(h, N_app, M_app)
    mean_est_store.append(np.mean(sample, 1))
    var_est_store.append(np.var(sample, 1))

var_est_store = np.array(var_est_store)
mean_est_store = np.array(mean_est_store)

# Save mse of estimators
mse_mean = np.var(mean_est_store, 1) + (np.mean(mean_est_store, 1) - μ)**2
mse_var = np.var(var_est_store, 1) \
          + (np.mean(var_est_store, 1) - var_uncond)**2

benchmark_rate = 2 * var_uncond  # IID case

# Relative MSE for large samples
rate_h = mse_var / mse_mean

fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(h_grid, rate_h, c='darkblue', lw=2,
        label=r'large sample relative MSE, $B(h)$')
ax.axhline(benchmark_rate, c='k', ls='--', label=r'IID benchmark')
ax.set_title('Relative MSE for large samples as a function of sampling '
             'frequency \n MSE($S_N$) relative to MSE($\\bar X_N$)')
ax.set_xlabel('Sampling frequency, $h$')
ax.legend()
plt.show()

The above figure illustrates the relationship between the asymptotic relative MSEs and the
sampling frequency
• We can see that with low-frequency data – large values of ℎ – the ratio of asymptotic
rates approaches the IID case.
• As ℎ gets smaller – the higher the frequency – the relative performance of the variance
estimator is better in the sense that the ratio of asymptotic rates gets smaller. That
is, as the time dependence gets more pronounced, the rate of convergence of the mean
estimator’s MSE deteriorates more than that of the variance estimator.
Part VIII

Dynamic Programming Squared

Chapter 34

Stackelberg Plans

34.1 Contents

• Overview 34.2
• Duopoly 34.3
• The Stackelberg Problem 34.4
• Stackelberg Plan 34.5
• Recursive Representation of Stackelberg Plan 34.6
• Computing the Stackelberg Plan 34.7
• Exhibiting Time Inconsistency of Stackelberg Plan 34.8
• Recursive Formulation of the Follower’s Problem 34.9
• Markov Perfect Equilibrium 34.10
• MPE vs. Stackelberg 34.11
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

34.2 Overview

This notebook formulates and computes a plan that a Stackelberg leader uses to manip-
ulate forward-looking decisions of a Stackelberg follower that depend on continuation se-
quences of decisions made once and for all by the Stackelberg leader at time 0.
To facilitate computation and interpretation, we formulate things in a context that allows us
to apply linear optimal dynamic programming.
From the beginning, we carry along a linear-quadratic model of duopoly in which firms face
adjustment costs that make them want to forecast actions of other firms that influence future
prices.
Let’s start with some standard imports:

In [2]: import numpy as np


import numpy.linalg as la
import quantecon as qe
from quantecon import LQ
import matplotlib.pyplot as plt
%matplotlib inline


34.3 Duopoly

Time is discrete and is indexed by 𝑡 = 0, 1, ….


Two firms produce a single good whose demand is governed by the linear inverse demand
curve

𝑝𝑡 = 𝑎0 − 𝑎1 (𝑞1𝑡 + 𝑞2𝑡 )

where 𝑞𝑖𝑡 is output of firm 𝑖 at time 𝑡 and 𝑎0 and 𝑎1 are both positive.
𝑞10 , 𝑞20 are given numbers that serve as initial conditions at time 0.
By incurring a cost of change

𝛾𝑣1𝑡² (more generally, 𝛾𝑣𝑖𝑡²)

where 𝛾 > 0, firm 𝑖 can change its output according to

𝑞𝑖𝑡+1 = 𝑞𝑖𝑡 + 𝑣𝑖𝑡

Firm 𝑖’s profits at time 𝑡 equal

𝜋𝑖𝑡 = 𝑝𝑡𝑞𝑖𝑡 − 𝛾𝑣𝑖𝑡²

Firm 𝑖 wants to maximize the present value of its profits

∑_{𝑡=0}^{∞} 𝛽^𝑡 𝜋𝑖𝑡

where 𝛽 ∈ (0, 1) is a time discount factor.

34.3.1 Stackelberg Leader and Follower

Each firm 𝑖 = 1, 2 chooses a sequence 𝑞𝑖⃗ ≡ {𝑞𝑖𝑡+1}_{𝑡=0}^{∞} once and for all at time 0.

We let firm 2 be a Stackelberg leader and firm 1 be a Stackelberg follower.


The leader firm 2 goes first and chooses {𝑞2𝑡+1}_{𝑡=0}^{∞} once and for all at time 0.
Knowing that firm 2 has chosen {𝑞2𝑡+1}_{𝑡=0}^{∞}, the follower firm 1 goes second and chooses {𝑞1𝑡+1}_{𝑡=0}^{∞} once and for all at time 0.
In choosing 𝑞2⃗ , firm 2 takes into account that firm 1 will base its choice of 𝑞1⃗ on firm 2’s
choice of 𝑞2⃗ .

34.3.2 Abstract Statement of the Leader’s and Follower’s Problems

We can express firm 1’s problem as

max_{𝑞1⃗} Π1(𝑞1⃗; 𝑞2⃗)

where the appearance of 𝑞2⃗ behind the semi-colon indicates that it is taken as given.
Firm 1’s problem induces the best response mapping

𝑞1⃗ = 𝐵(𝑞2⃗ )

(Here 𝐵 maps a sequence into a sequence.)

The Stackelberg leader’s problem is

max_{𝑞2⃗} Π2(𝐵(𝑞2⃗), 𝑞2⃗)

whose maximizer is a sequence 𝑞2⃗ that depends on the initial conditions 𝑞10 , 𝑞20 and the pa-
rameters of the model 𝑎0 , 𝑎1 , 𝛾.
This formulation captures key features of the model
• Both firms make once-and-for-all choices at time 0.
• This is true even though both firms are choosing sequences of quantities that are in-
dexed by time.
• The Stackelberg leader chooses first within time 0, knowing that the Stackelberg fol-
lower will choose second within time 0.
While our abstract formulation reveals the timing protocol and equilibrium concept well, it
obscures details that must be addressed when we want to compute and interpret a Stackel-
berg plan and the follower’s best response to it.
To gain insights about these things, we study them in more detail.

34.3.3 Firms’ Problems

Firm 1 acts as if firm 2’s sequence {𝑞2𝑡+1}_{𝑡=0}^{∞} is given and beyond its control.
Firm 2 knows that firm 1 chooses second and takes this into account in choosing {𝑞2𝑡+1}_{𝑡=0}^{∞}.
In the spirit of working backward, we study firm 1’s problem first, taking {𝑞2𝑡+1}_{𝑡=0}^{∞} as given.
We can formulate firm 1’s optimum problem in terms of the Lagrangian

𝐿 = ∑_{𝑡=0}^{∞} 𝛽^𝑡 {𝑎0𝑞1𝑡 − 𝑎1𝑞1𝑡² − 𝑎1𝑞1𝑡𝑞2𝑡 − 𝛾𝑣1𝑡² + 𝜆𝑡[𝑞1𝑡 + 𝑣1𝑡 − 𝑞1𝑡+1]}

Firm 1 seeks a maximum with respect to {𝑞1𝑡+1, 𝑣1𝑡}_{𝑡=0}^{∞} and a minimum with respect to {𝜆𝑡}_{𝑡=0}^{∞}.
We approach this problem using methods described in Ljungqvist and Sargent RMT5 chapter
2, appendix A and Macroeconomic Theory, 2nd edition, chapter IX.
First-order conditions for this problem are

∂𝐿/∂𝑞1𝑡 = 𝑎0 − 2𝑎1𝑞1𝑡 − 𝑎1𝑞2𝑡 + 𝜆𝑡 − 𝛽−1𝜆𝑡−1 = 0,   𝑡 ≥ 1
∂𝐿/∂𝑣1𝑡 = −2𝛾𝑣1𝑡 + 𝜆𝑡 = 0,   𝑡 ≥ 0

These first-order conditions and the constraint 𝑞1𝑡+1 = 𝑞1𝑡 + 𝑣1𝑡 can be rearranged to take the form

𝑣1𝑡 = 𝛽𝑣1𝑡+1 + (𝛽𝑎0/2𝛾) − (𝛽𝑎1/𝛾)𝑞1𝑡+1 − (𝛽𝑎1/2𝛾)𝑞2𝑡+1
𝑞1𝑡+1 = 𝑞1𝑡 + 𝑣1𝑡

We can substitute the second equation into the first equation to obtain

(𝑞1𝑡+1 − 𝑞1𝑡) = 𝛽(𝑞1𝑡+2 − 𝑞1𝑡+1) + 𝑐0 − 𝑐1𝑞1𝑡+1 − 𝑐2𝑞2𝑡+1

where 𝑐0 = 𝛽𝑎0/2𝛾, 𝑐1 = 𝛽𝑎1/𝛾, 𝑐2 = 𝛽𝑎1/2𝛾.

This equation can in turn be rearranged to become the second-order difference equation

−𝑞1𝑡 + (1 + 𝛽 + 𝑐1)𝑞1𝑡+1 − 𝛽𝑞1𝑡+2 = 𝑐0 − 𝑐2𝑞2𝑡+1    (1)

Equation (1) is a second-order difference equation in the sequence 𝑞1⃗ whose solution we want.
It satisfies two boundary conditions:
• an initial condition for 𝑞1,0, which is given
• a terminal condition requiring that lim_{𝑇→+∞} 𝛽^𝑇 𝑞1𝑡² < +∞
Using the lag operators described in chapter IX of Macroeconomic Theory, Second edition (1987), difference equation (1) can be written as

𝛽(1 − ((1 + 𝛽 + 𝑐1)/𝛽)𝐿 + 𝛽−1𝐿²)𝑞1𝑡+2 = −𝑐0 + 𝑐2𝑞2𝑡+1

The polynomial in the lag operator on the left side can be factored as

(1 − ((1 + 𝛽 + 𝑐1)/𝛽)𝐿 + 𝛽−1𝐿²) = (1 − 𝛿1𝐿)(1 − 𝛿2𝐿)    (2)

where 0 < 𝛿1 < 1 < 1/√𝛽 < 𝛿2.

Because 𝛿2 > 1/√𝛽, the operator (1 − 𝛿2𝐿) contributes an unstable component if solved backwards but a stable component if solved forwards.
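
The roots 𝛿1, 𝛿2 are easy to compute numerically. A minimal sketch (ours), using the parameter values adopted in the computations later in this lecture, confirms the ordering 0 < 𝛿1 < 1 < 1/√𝛽 < 𝛿2:

import numpy as np

# Parameter values matching those used later in this lecture
a0, a1, β, γ = 10, 2, 0.96, 120
c1 = β * a1 / γ

# δ1 and δ2 solve z² - ((1 + β + c1)/β) z + 1/β = 0, by matching
# coefficients in the factorization (2)
δ1, δ2 = np.sort(np.roots([1, -(1 + β + c1) / β, 1 / β]))

print(δ1, 1 / np.sqrt(β), δ2)  # ≈ 0.897 < 1.021 < 1.161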
Mechanically, write

(1 − 𝛿2𝐿) = −𝛿2𝐿(1 − 𝛿2−1𝐿−1)

and compute the following inverse operator

[−𝛿2𝐿(1 − 𝛿2−1𝐿−1)]−1 = −𝛿2−1(1 − 𝛿2−1𝐿−1)−1𝐿−1

Operating on both sides of equation (2) with 𝛽−1 times this inverse operator gives the follower’s decision rule for setting 𝑞1𝑡+1 in the feedback-feedforward form

𝑞1𝑡+1 = 𝛿1𝑞1𝑡 − 𝑐0𝛿2−1𝛽−1(1 − 𝛿2−1)−1 + 𝑐2𝛿2−1𝛽−1 ∑_{𝑗=0}^{∞} 𝛿2−𝑗 𝑞2𝑡+𝑗+1,   𝑡 ≥ 0    (3)

The problem of the Stackelberg leader firm 2 is to choose the sequence {𝑞2𝑡+1}_{𝑡=0}^{∞} to maximize its discounted profits

∑_{𝑡=0}^{∞} 𝛽^𝑡 {(𝑎0 − 𝑎1(𝑞1𝑡 + 𝑞2𝑡))𝑞2𝑡 − 𝛾(𝑞2𝑡+1 − 𝑞2𝑡)²}

subject to the sequence of constraints (3) for 𝑡 ≥ 0.

We can put a sequence {𝜃𝑡}_{𝑡=0}^{∞} of Lagrange multipliers on the sequence of equations (3) and formulate the following Lagrangian for the Stackelberg leader firm 2’s problem

𝐿̃ = ∑_{𝑡=0}^{∞} 𝛽^𝑡 {(𝑎0 − 𝑎1(𝑞1𝑡 + 𝑞2𝑡))𝑞2𝑡 − 𝛾(𝑞2𝑡+1 − 𝑞2𝑡)²}
   + ∑_{𝑡=0}^{∞} 𝛽^𝑡 𝜃𝑡 {𝛿1𝑞1𝑡 − 𝑐0𝛿2−1𝛽−1(1 − 𝛿2−1)−1 + 𝑐2𝛿2−1𝛽−1 ∑_{𝑗=0}^{∞} 𝛿2−𝑗 𝑞2𝑡+𝑗+1 − 𝑞1𝑡+1}    (4)

subject to initial conditions for 𝑞1𝑡, 𝑞2𝑡 at 𝑡 = 0.


Comments: We have formulated the Stackelberg problem in a space of sequences.
The max-min problem associated with Lagrangian (4) is unpleasant because the time 𝑡 component of firm 1’s payoff function depends on the entire future of its choices of {𝑞1𝑡+𝑗}_{𝑗=0}^{∞}.

This renders a direct attack on the problem cumbersome.


Therefore, below, we will formulate the Stackelberg leader’s problem recursively.
We’ll put our little duopoly model into a broader class of models with the same conceptual
structure.

34.4 The Stackelberg Problem

We formulate a class of linear-quadratic Stackelberg leader-follower problems of which our


duopoly model is an instance.
We use the optimal linear regulator (a.k.a. the linear-quadratic dynamic programming prob-
lem described in LQ Dynamic Programming problems) to represent a Stackelberg leader’s
problem recursively.
Let 𝑧𝑡 be an 𝑛𝑧 × 1 vector of natural state variables.
Let 𝑥𝑡 be an 𝑛𝑥 × 1 vector of endogenous forward-looking variables that are physically free to
jump at 𝑡.
In our duopoly example 𝑥𝑡 = 𝑣1𝑡 , the time 𝑡 decision of the Stackelberg follower.
Let 𝑢𝑡 be a vector of decisions chosen by the Stackelberg leader at 𝑡.
The 𝑧𝑡 vector is inherited physically from the past.
But 𝑥𝑡 is a decision made by the Stackelberg follower at time 𝑡 that is the follower’s best re-
sponse to the choice of an entire sequence of decisions made by the Stackelberg leader at time
𝑡 = 0.
Let

𝑦𝑡 = [𝑧𝑡; 𝑥𝑡]

Represent the Stackelberg leader’s one-period loss function as

𝑟(𝑦, 𝑢) = 𝑦′ 𝑅𝑦 + 𝑢′ 𝑄𝑢

Subject to an initial condition for 𝑧0, but not for 𝑥0, the Stackelberg leader wants to maximize

−∑_{𝑡=0}^{∞} 𝛽^𝑡 𝑟(𝑦𝑡, 𝑢𝑡)    (5)

The Stackelberg leader faces the model

⎡ 𝐼    0  ⎤ ⎡ 𝑧𝑡+1 ⎤   ⎡ 𝐴̂11  𝐴̂12 ⎤ ⎡ 𝑧𝑡 ⎤
⎣ 𝐺21 𝐺22 ⎦ ⎣ 𝑥𝑡+1 ⎦ = ⎣ 𝐴̂21  𝐴̂22 ⎦ ⎣ 𝑥𝑡 ⎦ + 𝐵̂𝑢𝑡    (6)

We assume that the matrix [𝐼 0; 𝐺21 𝐺22] on the left side of equation (6) is invertible, so that we can multiply both sides by its inverse to obtain

⎡ 𝑧𝑡+1 ⎤   ⎡ 𝐴11  𝐴12 ⎤ ⎡ 𝑧𝑡 ⎤
⎣ 𝑥𝑡+1 ⎦ = ⎣ 𝐴21  𝐴22 ⎦ ⎣ 𝑥𝑡 ⎦ + 𝐵𝑢𝑡    (7)

or

𝑦𝑡+1 = 𝐴𝑦𝑡 + 𝐵𝑢𝑡    (8)

34.4.1 Interpretation of the Second Block of Equations

The Stackelberg follower’s best response mapping is summarized by the second block of equa-
tions of (7).
In particular, these equations are the first-order conditions of the Stackelberg follower’s opti-
mization problem (i.e., its Euler equations).
These Euler equations summarize the forward-looking aspect of the follower’s behavior and
express how its time 𝑡 decision depends on the leader’s actions at times 𝑠 ≥ 𝑡.
When combined with a stability condition to be imposed below, the Euler equations summa-
rize the follower’s best response to the sequence of actions by the leader.
The Stackelberg leader maximizes (5) by choosing sequences {𝑢𝑡 , 𝑥𝑡 , 𝑧𝑡+1 }∞
𝑡=0 subject to (8)
and an initial condition for 𝑧0 .
Note that we have an initial condition for 𝑧0 but not for 𝑥0 .
𝑥0 is among the variables to be chosen at time 0 by the Stackelberg leader.
The Stackelberg leader uses its understanding of the responses restricted by (8) to manipulate
the follower’s decisions.

34.4.2 More Mechanical Details

For any vector 𝑎𝑡, define 𝑎𝑡⃗ = [𝑎𝑡, 𝑎𝑡+1, …].
Define a feasible set of (𝑦1⃗, 𝑢0⃗) sequences

Ω(𝑦0) = {(𝑦1⃗, 𝑢0⃗) : 𝑦𝑡+1 = 𝐴𝑦𝑡 + 𝐵𝑢𝑡, ∀𝑡 ≥ 0}

Please remember that the follower’s Euler equation is embedded in the system of dynamic
equations 𝑦𝑡+1 = 𝐴𝑦𝑡 + 𝐵𝑢𝑡 .
Note that in the definition of Ω(𝑦0 ), 𝑦0 is taken as given.
Although it is taken as given in Ω(𝑦0 ), eventually, the 𝑥0 component of 𝑦0 will be chosen by
the Stackelberg leader.

34.4.3 Two Subproblems

Once again we use backward induction.
We express the Stackelberg problem in terms of two subproblems.
Subproblem 1 is solved by a continuation Stackelberg leader at each date 𝑡 ≥ 0.
Subproblem 2 is solved by the Stackelberg leader at 𝑡 = 0.
The two subproblems are designed
• to respect the protocol in which the follower chooses 𝑞1⃗ after seeing 𝑞2⃗ chosen by the leader
• to make the leader choose 𝑞2⃗ while respecting that 𝑞1⃗ will be the follower’s best response to 𝑞2⃗
• to represent the leader’s problem recursively by artfully choosing the state variables confronting the leader and the control variables available to it

Subproblem 1

𝑣(𝑦0) = max_{(𝑦1⃗,𝑢0⃗)∈Ω(𝑦0)} −∑_{𝑡=0}^{∞} 𝛽^𝑡 𝑟(𝑦𝑡, 𝑢𝑡)

Subproblem 2

𝑤(𝑧0) = max_{𝑥0} 𝑣(𝑦0)

Subproblem 1 takes the vector of forward-looking variables 𝑥0 as given.


Subproblem 2 optimizes over 𝑥0 .
The value function 𝑤(𝑧0 ) tells the value of the Stackelberg plan as a function of the vector of
natural state variables at time 0, 𝑧0 .

34.4.4 Two Bellman Equations

We now describe Bellman equations for 𝑣(𝑦) and 𝑤(𝑧0 ).



Subproblem 1

The value function 𝑣(𝑦) in subproblem 1 satisfies the Bellman equation

𝑣(𝑦) = max_{𝑢,𝑦*} {−𝑟(𝑦, 𝑢) + 𝛽𝑣(𝑦*)}    (9)

where the maximization is subject to

𝑦* = 𝐴𝑦 + 𝐵𝑢

and 𝑦* denotes next period’s value.


Substituting 𝑣(𝑦) = −𝑦′𝑃𝑦 into Bellman equation (9) gives

−𝑦′𝑃𝑦 = max_{𝑢,𝑦*} {−𝑦′𝑅𝑦 − 𝑢′𝑄𝑢 − 𝛽𝑦*′𝑃𝑦*}

which, as in the lecture on the linear regulator, gives rise to the algebraic matrix Riccati equation

𝑃 = 𝑅 + 𝛽𝐴′𝑃𝐴 − 𝛽²𝐴′𝑃𝐵(𝑄 + 𝛽𝐵′𝑃𝐵)−1𝐵′𝑃𝐴

and the optimal decision rule coefficient vector

𝐹 = 𝛽(𝑄 + 𝛽𝐵′𝑃𝐵)−1𝐵′𝑃𝐴

where the optimal decision rule is

𝑢𝑡 = −𝐹𝑦𝑡
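
As a check on intuition, the Riccati equation can be solved by simple fixed-point iteration. The helper below is our own minimal sketch; the quantecon LQ class used later in this lecture implements a more robust doubling algorithm and is what the actual computations rely on.

import numpy as np

def solve_riccati(A, B, Q, R, β, tol=1e-10, max_iter=10_000):
    """
    Iterate P ← R + βA'PA − β²A'PB(Q + βB'PB)⁻¹B'PA to a fixed point,
    then back out the decision rule coefficients F = β(Q + βB'PB)⁻¹B'PA.
    """
    P = np.zeros_like(R, dtype=float)
    for _ in range(max_iter):
        BPB = Q + β * B.T @ P @ B
        P_new = R + β * A.T @ P @ A \
                - β**2 * A.T @ P @ B @ np.linalg.solve(BPB, B.T @ P @ A)
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    F = β * np.linalg.solve(Q + β * B.T @ P @ B, B.T @ P @ A)
    return P, F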

Subproblem 2

We find an optimal 𝑥0 by equating to zero the gradient of 𝑣(𝑦0 ) with respect to 𝑥0 :

−2𝑃21 𝑧0 − 2𝑃22 𝑥0 = 0,

which implies that

𝑥0 = −𝑃22−1𝑃21𝑧0

34.5 Stackelberg Plan

Now let’s map our duopoly model into the above setup.
We will formulate a state space system

𝑦𝑡 = [𝑧𝑡; 𝑥𝑡]

where in this instance 𝑥𝑡 = 𝑣1𝑡 , the time 𝑡 decision of the follower firm 1.

34.5.1 Calculations to Prepare Duopoly Model

Now we’ll proceed to cast our duopoly model within the framework of the more general
linear-quadratic structure described above.
That will allow us to compute a Stackelberg plan simply by enlisting a Riccati equation to
solve a linear-quadratic dynamic program.
As emphasized above, firm 1 acts as if firm 2’s decisions {𝑞2𝑡+1, 𝑣2𝑡}_{𝑡=0}^{∞} are given and beyond its control.

34.5.2 Firm 1’s Problem

We again formulate firm 1’s optimum problem in terms of the Lagrangian

𝐿 = ∑_{𝑡=0}^{∞} 𝛽^𝑡 {𝑎0𝑞1𝑡 − 𝑎1𝑞1𝑡² − 𝑎1𝑞1𝑡𝑞2𝑡 − 𝛾𝑣1𝑡² + 𝜆𝑡[𝑞1𝑡 + 𝑣1𝑡 − 𝑞1𝑡+1]}

Firm 1 seeks a maximum with respect to {𝑞1𝑡+1, 𝑣1𝑡}_{𝑡=0}^{∞} and a minimum with respect to {𝜆𝑡}_{𝑡=0}^{∞}.
First-order conditions for this problem are

∂𝐿/∂𝑞1𝑡 = 𝑎0 − 2𝑎1𝑞1𝑡 − 𝑎1𝑞2𝑡 + 𝜆𝑡 − 𝛽−1𝜆𝑡−1 = 0,   𝑡 ≥ 1
∂𝐿/∂𝑣1𝑡 = −2𝛾𝑣1𝑡 + 𝜆𝑡 = 0,   𝑡 ≥ 0

These first-order conditions and the constraint 𝑞1𝑡+1 = 𝑞1𝑡 + 𝑣1𝑡 can be rearranged to take the form

𝑣1𝑡 = 𝛽𝑣1𝑡+1 + (𝛽𝑎0/2𝛾) − (𝛽𝑎1/𝛾)𝑞1𝑡+1 − (𝛽𝑎1/2𝛾)𝑞2𝑡+1
𝑞1𝑡+1 = 𝑞1𝑡 + 𝑣1𝑡

We use these two equations as components of the following linear system that confronts a Stackelberg continuation leader at time 𝑡

⎡ 1         0          0         0 ⎤ ⎡   1    ⎤   ⎡ 1 0 0 0 ⎤ ⎡  1   ⎤   ⎡ 0 ⎤
⎢ 0         1          0         0 ⎥ ⎢ 𝑞2𝑡+1 ⎥ = ⎢ 0 1 0 0 ⎥ ⎢ 𝑞2𝑡 ⎥ + ⎢ 1 ⎥ 𝑣2𝑡
⎢ 0         0          1         0 ⎥ ⎢ 𝑞1𝑡+1 ⎥   ⎢ 0 0 1 1 ⎥ ⎢ 𝑞1𝑡 ⎥   ⎢ 0 ⎥
⎣ 𝛽𝑎0/2𝛾  −𝛽𝑎1/2𝛾  −𝛽𝑎1/𝛾   𝛽 ⎦ ⎣ 𝑣1𝑡+1 ⎦   ⎣ 0 0 0 1 ⎦ ⎣ 𝑣1𝑡 ⎦   ⎣ 0 ⎦

Time 𝑡 revenues of firm 2 are 𝜋2𝑡 = 𝑎0𝑞2𝑡 − 𝑎1𝑞2𝑡² − 𝑎1𝑞1𝑡𝑞2𝑡, which evidently equal

           ⎡  1  ⎤′ ⎡ 0      𝑎0/2    0    ⎤ ⎡  1  ⎤
𝑧𝑡′𝑅1𝑧𝑡 ≡ ⎢ 𝑞2𝑡 ⎥  ⎢ 𝑎0/2  −𝑎1    −𝑎1/2 ⎥ ⎢ 𝑞2𝑡 ⎥
           ⎣ 𝑞1𝑡 ⎦  ⎣ 0     −𝑎1/2    0    ⎦ ⎣ 𝑞1𝑡 ⎦
If we set 𝑄 = 𝛾, then firm 2’s period 𝑡 profits can then be written

𝑦𝑡′𝑅𝑦𝑡 − 𝑄𝑣2𝑡²

where

𝑦𝑡 = [𝑧𝑡; 𝑥𝑡]

with 𝑥𝑡 = 𝑣1𝑡 and

𝑅 = ⎡ 𝑅1  0 ⎤
    ⎣ 0   0 ⎦

We’ll report results of implementing this code soon.


But first, we want to represent the Stackelberg leader’s optimal choices recursively.
It is important to do this for several reasons:
• properly to interpret a representation of the Stackelberg leader’s choice as a sequence of
history-dependent functions
• to formulate a recursive version of the follower’s choice problem
First, let’s get a recursive representation of the Stackelberg leader’s choice of 𝑞2⃗ for our
duopoly model.

34.6 Recursive Representation of Stackelberg Plan

In order to attain an appropriate representation of the Stackelberg leader’s history-dependent plan, we will employ what amounts to a version of the Big K, little k device often used in macroeconomics by distinguishing 𝑧𝑡, which depends partly on decisions 𝑥𝑡 of the followers, from another vector 𝑧̌𝑡, which does not.
We will use 𝑧̌𝑡 and its history 𝑧̌^𝑡 = [𝑧̌𝑡, 𝑧̌𝑡−1, …, 𝑧̌0] to describe the sequence of the Stackelberg leader’s decisions that the Stackelberg follower takes as given.
Thus, we let 𝑦̌𝑡′ = [𝑧̌𝑡′  𝑥̌𝑡′] with initial condition 𝑧̌0 = 𝑧0 given.
That we distinguish 𝑧̌𝑡 from 𝑧𝑡 is part and parcel of the Big K, little k device in this instance.

We have demonstrated that a Stackelberg plan for {𝑢𝑡}_{𝑡=0}^{∞} has a recursive representation

𝑥̌0 = −𝑃22−1𝑃21𝑧0
𝑢𝑡 = −𝐹𝑦̌𝑡,   𝑡 ≥ 0
𝑦̌𝑡+1 = (𝐴 − 𝐵𝐹)𝑦̌𝑡,   𝑡 ≥ 0

From this representation, we can deduce the sequence of functions 𝜎 = {𝜎𝑡(𝑧̌^𝑡)}_{𝑡=0}^{∞} that comprise a Stackelberg plan.
For convenience, let 𝐴̌ ≡ 𝐴 − 𝐵𝐹 and partition 𝐴̌ conformably to the partition 𝑦𝑡 = [𝑧̌𝑡; 𝑥̌𝑡] as

⎡ 𝐴̌11  𝐴̌12 ⎤
⎣ 𝐴̌21  𝐴̌22 ⎦

Let 𝐻0⁰ ≡ −𝑃22−1𝑃21 so that 𝑥̌0 = 𝐻0⁰𝑧̌0.
Then iterations on 𝑦̌𝑡+1 = 𝐴̌𝑦̌𝑡 starting from initial condition 𝑦̌0 = [𝑧̌0; 𝐻0⁰𝑧̌0] imply that for 𝑡 ≥ 1

𝑥𝑡 = ∑_{𝑗=1}^{𝑡} 𝐻𝑗^𝑡 𝑧̌𝑡−𝑗

where

𝐻1^𝑡 = 𝐴̌21
𝐻2^𝑡 = 𝐴̌22𝐴̌21
⋮
𝐻𝑡−1^𝑡 = 𝐴̌22^{𝑡−2}𝐴̌21
𝐻𝑡^𝑡 = 𝐴̌22^{𝑡−1}(𝐴̌21 + 𝐴̌22𝐻0⁰)

An optimal decision rule for the Stackelberg leader’s choice of 𝑢𝑡 is

𝑢𝑡 = −𝐹𝑦̌𝑡 ≡ −[𝐹𝑧  𝐹𝑥] [𝑧̌𝑡; 𝑥𝑡]

or

𝑢𝑡 = −𝐹𝑧𝑧̌𝑡 − 𝐹𝑥 ∑_{𝑗=1}^{𝑡} 𝐻𝑗^𝑡 𝑧̌𝑡−𝑗 = 𝜎𝑡(𝑧̌^𝑡)    (10)

Representation (10) confirms that whenever 𝐹𝑥 ≠ 0, the typical situation, the time 𝑡 component 𝜎𝑡 of a Stackelberg plan is history-dependent, meaning that the Stackelberg leader’s choice 𝑢𝑡 depends not just on 𝑧̌𝑡 but on components of 𝑧̌^{𝑡−1}.
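
As an illustration of how the 𝐻 matrices could be built in code, here is a minimal sketch (ours); it presumes the matrices A, B, F, P computed in the section “Computing the Stackelberg Plan” below, with nz = 3 natural state variables in the duopoly example, and the helper name is hypothetical.

import numpy as np

def H_matrices(A, B, F, P, t, nz=3):
    """
    Return {H^t_j : j = 1, ..., t} from the recursions above, where
    the first nz entries of the state are the natural state variables.
    """
    Ā = A - B @ F
    Ā21, Ā22 = Ā[nz:, :nz], Ā[nz:, nz:]
    H00 = -np.linalg.solve(P[nz:, nz:], P[nz:, :nz])

    H = {1: Ā21}
    for j in range(2, t):                      # H^t_j = Ā22^(j-1) Ā21
        H[j] = np.linalg.matrix_power(Ā22, j - 1) @ Ā21
    # H^t_t has the extra term involving H^0_0
    H[t] = np.linalg.matrix_power(Ā22, t - 1) @ (Ā21 + Ā22 @ H00)
    return H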

34.6.1 Comments and Interpretations

After all, at the end of the day, it will turn out that because we set 𝑧̌0 = 𝑧0, it will be true that 𝑧𝑡 = 𝑧̌𝑡 for all 𝑡 ≥ 0.
Then why did we distinguish 𝑧̌𝑡 from 𝑧𝑡?
The answer is that if we want to present to the Stackelberg follower a history-dependent representation of the Stackelberg leader’s sequence 𝑞2⃗, we must use representation (10) cast in terms of the history 𝑧̌^𝑡 and not a corresponding representation cast in terms of 𝑧^𝑡.

34.6.2 Dynamic Programming and Time Consistency of the Follower’s Problem

Given the sequence 𝑞2⃗ chosen by the Stackelberg leader in our duopoly model, it turns out
that the Stackelberg follower’s problem is recursive in the natural state variables that con-
front a follower at any time 𝑡 ≥ 0.
This means that the follower’s plan is time consistent.

To verify these claims, we’ll formulate a recursive version of a follower’s problem that builds
on our recursive representation of the Stackelberg leader’s plan and our use of the Big K,
little k idea.

34.6.3 Recursive Formulation of a Follower’s Problem

We now use what amounts to another “Big 𝐾, little 𝑘” trick (see rational expectations equi-
librium) to formulate a recursive version of a follower’s problem cast in terms of an ordinary
Bellman equation.
Firm 1, the follower, faces {𝑞2𝑡}_{𝑡=0}^{∞} as a given quantity sequence chosen by the leader and believes that its output price at 𝑡 satisfies

𝑝𝑡 = 𝑎0 − 𝑎1(𝑞1𝑡 + 𝑞2𝑡),   𝑡 ≥ 0

Our challenge is to represent {𝑞2𝑡}_{𝑡=0}^{∞} as a given sequence.

To do so, recall that under the Stackelberg plan, firm 2 sets output according to the 𝑞2𝑡 component of

𝑦𝑡+1 = [1; 𝑞2𝑡; 𝑞1𝑡; 𝑥𝑡]

which is governed by

𝑦𝑡+1 = (𝐴 − 𝐵𝐹)𝑦𝑡

To obtain a recursive representation of a {𝑞2𝑡} sequence that is exogenous to firm 1, we define a state 𝑦̃𝑡

𝑦̃𝑡 = [1; 𝑞2𝑡; 𝑞̃1𝑡; 𝑥̃𝑡]

that evolves according to

𝑦̃𝑡+1 = (𝐴 − 𝐵𝐹)𝑦̃𝑡

subject to the initial condition 𝑞̃10 = 𝑞10 and 𝑥̃0 = 𝑥0, where 𝑥0 = −𝑃22−1𝑃21𝑧0 as stated above.
Firm 1’s state vector is

𝑋𝑡 = [𝑦̃𝑡; 𝑞1𝑡]

It follows that the follower firm 1 faces law of motion

⎡ 𝑦̃𝑡+1  ⎤   ⎡ 𝐴 − 𝐵𝐹   0 ⎤ ⎡ 𝑦̃𝑡  ⎤   ⎡ 0 ⎤
⎣ 𝑞1𝑡+1 ⎦ = ⎣ 0         1 ⎦ ⎣ 𝑞1𝑡 ⎦ + ⎣ 1 ⎦ 𝑥𝑡    (11)

This specification assures that from the point of view of firm 1, 𝑞2𝑡 is an exogenous process.
Here
• 𝑞̃1𝑡, 𝑥̃𝑡 play the role of Big K
• 𝑞1𝑡, 𝑥𝑡 play the role of little k
The time 𝑡 component of firm 1’s objective is

              ⎡  1  ⎤′ ⎡ 0      0      0  0   𝑎0/2 ⎤ ⎡  1  ⎤
              ⎢ 𝑞2𝑡 ⎥  ⎢ 0      0      0  0  −𝑎1/2 ⎥ ⎢ 𝑞2𝑡 ⎥
𝑋𝑡′𝑅̃𝑋𝑡 − 𝑄̃𝑥𝑡² = ⎢ 𝑞̃1𝑡 ⎥  ⎢ 0      0      0  0    0   ⎥ ⎢ 𝑞̃1𝑡 ⎥ − 𝛾𝑥𝑡²
              ⎢ 𝑥̃𝑡  ⎥  ⎢ 0      0      0  0    0   ⎥ ⎢ 𝑥̃𝑡  ⎥
              ⎣ 𝑞1𝑡 ⎦  ⎣ 𝑎0/2  −𝑎1/2  0  0  −𝑎1   ⎦ ⎣ 𝑞1𝑡 ⎦

with 𝑄̃ = 𝛾.

Firm 1’s optimal decision rule is

𝑥𝑡 = −𝐹̃𝑋𝑡

and its state evolves according to

𝑋̃𝑡+1 = (𝐴̃ − 𝐵̃𝐹̃)𝑋𝑡

under its optimal decision rule.


Later we shall compute 𝐹̃ and verify that when we set

𝑋0 = [1; 𝑞20; 𝑞10; 𝑥0; 𝑞10]

we recover

𝑥0 = −𝐹̃𝑋̃0

which will verify that we have properly set up a recursive representation of the follower’s
problem facing the Stackelberg leader’s 𝑞2⃗ .

34.6.4 Time Consistency of Follower’s Plan

Since the follower can solve its problem using dynamic programming, its problem is recursive in what for it are the natural state variables, namely

[1; 𝑞2𝑡; 𝑞̃10; 𝑥̃0]

It follows that the follower’s plan is time consistent.

34.7 Computing the Stackelberg Plan

Here is our code to compute a Stackelberg plan via a linear-quadratic dynamic program as
outlined above

In [3]: # Parameters
a0 = 10
a1 = 2
β = 0.96
γ = 120
n = 300
tol0 = 1e-8
tol1 = 1e-16
tol2 = 1e-2

βs = np.ones(n)
βs[1:] = β
βs = βs.cumprod()

In [4]: # In LQ form
Alhs = np.eye(4)

# Euler equation coefficients


Alhs[3, :] = β * a0 / (2 * γ), -β * a1 / (2 * γ), -β * a1 / γ, β

Arhs = np.eye(4)
Arhs[2, 3] = 1

Alhsinv = la.inv(Alhs)

A = Alhsinv @ Arhs

B = Alhsinv @ np.array([[0, 1, 0, 0]]).T

R = np.array([[0, -a0 / 2, 0, 0],


[-a0 / 2, a1, a1 / 2, 0],
[0, a1 / 2, 0, 0],
[0, 0, 0, 0]])

Q = np.array([[γ]])

# Solve using QE's LQ class


# LQ solves minimization problems, which is why the signs of R and Q were changed

lq = LQ(Q, R, A, B, beta=β)
P, F, d = lq.stationary_values(method='doubling')

P22 = P[3:, 3:]


P21 = P[3:, :3]
P22inv = la.inv(P22)
H_0_0 = -P22inv @ P21

# Simulate forward

π_leader = np.zeros(n)

z0 = np.array([[1, 1, 1]]).T
x0 = H_0_0 @ z0
y0 = np.vstack((z0, x0))

yt, ut = lq.compute_sequence(y0, ts_length=n)[:2]

π_matrix = (R + F.T @ Q @ F)

for t in range(n):
π_leader[t] = -(yt[:, t].T @ π_matrix @ yt[:, t])

# Display policies
print("Computed policy for Stackelberg leader\n")
print(f"F = {F}")

Computed policy for Stackelberg leader

F = [[-1.58004454 0.29461313 0.67480938 6.53970594]]

34.7.1 Implied Time Series for Price and Quantities

The following code plots the price and quantities

In [5]: q_leader = yt[1, :-1]


q_follower = yt[2, :-1]
q = q_leader + q_follower # Total output, Stackelberg
p = a0 - a1 * q # Price, Stackelberg

fig, ax = plt.subplots(figsize=(9, 5.8))


ax.plot(range(n), q_leader, 'b-', lw=2, label='leader output')
ax.plot(range(n), q_follower, 'r-', lw=2, label='follower output')
ax.plot(range(n), p, 'g-', lw=2, label='price')
ax.set_title('Output and prices, Stackelberg duopoly')
ax.legend(frameon=False)
ax.set_xlabel('t')
plt.show()

34.7.2 Value of Stackelberg Leader

We’ll compute the present value earned by the Stackelberg leader.

We’ll compute it in two ways; they give nearly identical answers, a useful check on coding and thinking.

In [6]: v_leader_forward = np.sum(βs * π_leader)


v_leader_direct = -yt[:, 0].T @ P @ yt[:, 0]

# Display values
print("Computed values for the Stackelberg leader at t=0:\n")
print(f"v_leader_forward(forward sim) = {v_leader_forward:.4f}")
print(f"v_leader_direct (direct) = {v_leader_direct:.4f}")

Computed values for the Stackelberg leader at t=0:

v_leader_forward(forward sim) = 150.0316


v_leader_direct (direct) = 150.0324

In [7]: # Manually checks whether P is approximately a fixed point


P_next = (R + F.T @ Q @ F + β * (A - B @ F).T @ P @ (A - B @ F))
(P - P_next < tol0).all()

Out[7]: True

In [8]: # Manually checks whether two different ways of computing the


# value function give approximately the same answer

v_expanded = -((y0.T @ R @ y0 + ut[:, 0].T @ Q @ ut[:, 0] +


β * (y0.T @ (A - B @ F).T @ P @ (A - B @ F) @ y0)))
(v_leader_direct - v_expanded < tol0)[0, 0]

Out[8]: True

34.8 Exhibiting Time Inconsistency of Stackelberg Plan

In the code below we compare two values

• the continuation value $-y_t' P y_t$ earned by a continuation Stackelberg leader who inherits state $y_t$ at $t$
• the value of a reborn Stackelberg leader who inherits state $z_t$ at $t$ and sets $x_t = -P_{22}^{-1} P_{21} z_t$

The difference between these two values is a tell-tale sign of the time inconsistency of the Stackelberg plan

In [9]: # Compute value function over time with a reset at time t


vt_leader = np.zeros(n)
vt_reset_leader = np.empty_like(vt_leader)

yt_reset = yt.copy()
yt_reset[-1, :] = (H_0_0 @ yt[:3, :])

for t in range(n):
vt_leader[t] = -yt[:, t].T @ P @ yt[:, t]
vt_reset_leader[t] = -yt_reset[:, t].T @ P @ yt_reset[:, t]

In [10]: fig, axes = plt.subplots(3, 1, figsize=(10, 7))

axes[0].plot(range(n+1), (- F @ yt).flatten(), 'bo',


label='Stackelberg leader', ms=2)
axes[0].plot(range(n+1), (- F @ yt_reset).flatten(), 'ro',
label='continuation leader at t', ms=2)
axes[0].set(title=r'Leader control variable $u_{t}$', xlabel='t')
axes[0].legend()

axes[1].plot(range(n+1), yt[3, :], 'bo', ms=2)


axes[1].plot(range(n+1), yt_reset[3, :], 'ro', ms=2)
axes[1].set(title=r'Follower control variable $x_{t}$', xlabel='t')

axes[2].plot(range(n), vt_leader, 'bo', ms=2)


axes[2].plot(range(n), vt_reset_leader, 'ro', ms=2)
axes[2].set(title=r'Leader value function $v(y_{t})$', xlabel='t')

plt.tight_layout()
plt.show()

34.9 Recursive Formulation of the Follower’s Problem

We now formulate and compute the recursive version of the follower's problem.
We check that the recursive Big $K$, little $k$ formulation of the follower's problem produces the same output path $\vec q_1$ that we computed when we solved the Stackelberg problem.

In [11]: A_tilde = np.eye(5)


A_tilde[:4, :4] = A - B @ F

R_tilde = np.array([[0, 0, 0, 0, -a0 / 2],


[0, 0, 0, 0, a1 / 2],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[-a0 / 2, a1 / 2, 0, 0, a1]])

Q_tilde = Q
B_tilde = np.array([[0, 0, 0, 0, 1]]).T

lq_tilde = LQ(Q_tilde, R_tilde, A_tilde, B_tilde, beta=β)


P_tilde, F_tilde, d_tilde = lq_tilde.stationary_values(method='doubling')

y0_tilde = np.vstack((y0, y0[2]))


yt_tilde = lq_tilde.compute_sequence(y0_tilde, ts_length=n)[0]

In [12]: # Checks that the recursive formulation of the follower's problem gives
# the same solution as the original Stackelberg problem
fig, ax = plt.subplots()

ax.plot(yt_tilde[4], 'r', label="q_tilde")


ax.plot(yt_tilde[2], 'b', label="q")
ax.legend()
plt.show()

Note: Variables with _tilde are obtained from solving the follower's problem – those without are from the Stackelberg problem

In [13]: # Maximum absolute difference in quantities over time between


# the first and second solution methods
np.max(np.abs(yt_tilde[4] - yt_tilde[2]))

Out[13]: 1.1102230246251565e-15

In [14]: # x0 == x0_tilde
yt[:, 0][-1] - (yt_tilde[:, 1] - yt_tilde[:, 0])[-1] < tol0

Out[14]: True

34.9.1 Explanation of Alignment

If we inspect the coefficients in the decision rule $-\tilde F$, we can spot the reason that the follower chooses to set $x_t = \tilde x_t$ when it sets $x_t = -\tilde F X_t$ in the recursive formulation of the follower's problem.
Can you spot what features of $\tilde F$ imply this?
Hint: remember the components of $X_t$

In [15]: # Policy function in the follower's problem


F_tilde.round(4)

Out[15]: array([[-0. , 0. , -0.1032, -1. , 0.1032]])

In [16]: # Value function in the Stackelberg problem


P

Out[16]: array([[ 963.54083615, -194.60534465, -511.62197962, -5258.22585724],


[ -194.60534465, 37.3535753 , 81.97712513, 784.76471234],
[ -511.62197962, 81.97712513, 247.34333344, 2517.05126111],
[-5258.22585724, 784.76471234, 2517.05126111, 25556.16504097]])

In [17]: # Value function in the follower's problem


P_tilde

Out[17]: array([[-1.81991134e+01, 2.58003020e+00, 1.56048755e+01,


1.51229815e+02, -5.00000000e+00],
[ 2.58003020e+00, -9.69465925e-01, -5.26007958e+00,
-5.09764310e+01, 1.00000000e+00],
[ 1.56048755e+01, -5.26007958e+00, -3.22759027e+01,
-3.12791908e+02, -1.23823802e+01],
[ 1.51229815e+02, -5.09764310e+01, -3.12791908e+02,
-3.03132584e+03, -1.20000000e+02],
[-5.00000000e+00, 1.00000000e+00, -1.23823802e+01,
-1.20000000e+02, 1.43823802e+01]])

In [18]: # Manually check that P is an approximate fixed point


(P - ((R + F.T @ Q @ F) + β * (A - B @ F).T @ P @ (A - B @ F)) < tol0).all()

Out[18]: True

In [19]: # Compute `P_guess` using `F_tilde_star`


F_tilde_star = -np.array([[0, 0, 0, 1, 0]])
P_guess = np.zeros((5, 5))

for i in range(1000):
P_guess = ((R_tilde + F_tilde_star.T @ Q @ F_tilde_star) +
β * (A_tilde - B_tilde @ F_tilde_star).T @ P_guess
@ (A_tilde - B_tilde @ F_tilde_star))

In [20]: # Value function in the follower's problem


-(y0_tilde.T @ P_tilde @ y0_tilde)[0, 0]

Out[20]: 112.65590740578043

In [21]: # Value function with `P_guess`


-(y0_tilde.T @ P_guess @ y0_tilde)[0, 0]

Out[21]: 112.65590740578051

In [22]: # Compute policy using policy iteration algorithm


F_iter = (β * la.inv(Q + β * B_tilde.T @ P_guess @ B_tilde)
@ B_tilde.T @ P_guess @ A_tilde)

for i in range(100):
# Compute P_iter
P_iter = np.zeros((5, 5))
for j in range(1000):
P_iter = ((R_tilde + F_iter.T @ Q @ F_iter) + β
* (A_tilde - B_tilde @ F_iter).T @ P_iter
@ (A_tilde - B_tilde @ F_iter))

# Update F_iter
F_iter = (β * la.inv(Q + β * B_tilde.T @ P_iter @ B_tilde)
@ B_tilde.T @ P_iter @ A_tilde)

dist_vec = (P_iter - ((R_tilde + F_iter.T @ Q @ F_iter)


+ β * (A_tilde - B_tilde @ F_iter).T @ P_iter
@ (A_tilde - B_tilde @ F_iter)))

if np.max(np.abs(dist_vec)) < 1e-8:


dist_vec2 = (F_iter - (β * la.inv(Q + β * B_tilde.T @ P_iter @ B_tilde)
@ B_tilde.T @ P_iter @ A_tilde))

if np.max(np.abs(dist_vec2)) < 1e-8:


F_iter
else:
print("The policy didn't converge: try increasing the number of \
outer loop iterations")
else:
print("`P_iter` didn't converge: try increasing the number of inner \
loop iterations")

In [23]: # Simulate the system using `F_tilde_star` and check that it gives the
# same result as the original solution

yt_tilde_star = np.zeros((n, 5))


yt_tilde_star[0, :] = y0_tilde.flatten()

for t in range(n-1):
yt_tilde_star[t+1, :] = (A_tilde - B_tilde @ F_tilde_star) \
@ yt_tilde_star[t, :]

fig, ax = plt.subplots()
ax.plot(yt_tilde_star[:, 4], 'r', label="q_tilde")
ax.plot(yt_tilde[2], 'b', label="q")
ax.legend()
plt.show()

In [24]: # Maximum absolute difference


np.max(np.abs(yt_tilde_star[:, 4] - yt_tilde[2, :-1]))

Out[24]: 0.0

34.10 Markov Perfect Equilibrium

The state vector is

$$
z_t = \begin{bmatrix} 1 \\ q_{2t} \\ q_{1t} \end{bmatrix}
$$

and the state transition dynamics are

𝑧𝑡+1 = 𝐴𝑧𝑡 + 𝐵1 𝑣1𝑡 + 𝐵2 𝑣2𝑡

where $A$ is a $3 \times 3$ identity matrix and

$$
B_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \qquad
B_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
$$

The Markov perfect decision rules are

𝑣1𝑡 = −𝐹1 𝑧𝑡 , 𝑣2𝑡 = −𝐹2 𝑧𝑡



and in the Markov perfect equilibrium, the state evolves according to

𝑧𝑡+1 = (𝐴 − 𝐵1 𝐹1 − 𝐵2 𝐹2 )𝑧𝑡

In [25]: # In LQ form
A = np.eye(3)
B1 = np.array([[0], [0], [1]])
B2 = np.array([[0], [1], [0]])

R1 = np.array([[0, 0, -a0 / 2],


[0, 0, a1 / 2],
[-a0 / 2, a1 / 2, a1]])

R2 = np.array([[0, -a0 / 2, 0],


[-a0 / 2, a1, a1 / 2],
[0, a1 / 2, 0]])

Q1 = Q2 = γ
S1 = S2 = W1 = W2 = M1 = M2 = 0.0

# Solve using QE's nnash function


F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1,
Q2, S1, S2, W1, W2, M1,
M2, beta=β, tol=tol1)

# Simulate forward
AF = A - B1 @ F1 - B2 @ F2
z = np.empty((3, n))
z[:, 0] = 1, 1, 1
for t in range(n-1):
z[:, t+1] = AF @ z[:, t]

# Display policies
print("Computed policies for firm 1 and firm 2:\n")
print(f"F1 = {F1}")
print(f"F2 = {F2}")

Computed policies for firm 1 and firm 2:

F1 = [[-0.22701363 0.03129874 0.09447113]]


F2 = [[-0.22701363 0.09447113 0.03129874]]

In [26]: q1 = z[1, :]
q2 = z[2, :]
q = q1 + q2 # Total output, MPE
p = a0 - a1 * q # Price, MPE

fig, ax = plt.subplots(figsize=(9, 5.8))


ax.plot(range(n), q, 'b-', lw=2, label='total output')
ax.plot(range(n), p, 'g-', lw=2, label='price')
ax.set_title('Output and prices, duopoly MPE')
ax.legend(frameon=False)
ax.set_xlabel('t')
plt.show()

In [27]: # Computes the maximum difference between the quantities of the two firms

np.max(np.abs(q1 - q2))

Out[27]: 7.327471962526033e-15

In [28]: # Compute values


u1 = (- F1 @ z).flatten()
u2 = (- F2 @ z).flatten()

π_1 = p * q1 - γ * (u1) ** 2
π_2 = p * q2 - γ * (u2) ** 2

v1_forward = np.sum(βs * π_1)


v2_forward = np.sum(βs * π_2)

v1_direct = (- z[:, 0].T @ P1 @ z[:, 0])


v2_direct = (- z[:, 0].T @ P2 @ z[:, 0])

# Display values
print("Computed values for firm 1 and firm 2:\n")
print(f"v1(forward sim) = {v1_forward:.4f}; v1 (direct) = {v1_direct:.4f}")
print(f"v2 (forward sim) = {v2_forward:.4f}; v2 (direct) = {v2_direct:.
↪ 4f}")

Computed values for firm 1 and firm 2:

v1(forward sim) = 133.3303; v1 (direct) = 133.3296


v2 (forward sim) = 133.3303; v2 (direct) = 133.3296

In [29]: # Sanity check


Λ1 = A - B2 @ F2
lq1 = qe.LQ(Q1, R1, Λ1, B1, beta=β)
P1_ih, F1_ih, d = lq1.stationary_values()

v2_direct_alt = - z[:, 0].T @ lq1.P @ z[:, 0] + lq1.d

(np.abs(v2_direct - v2_direct_alt) < tol2).all()

Out[29]: True

34.11 MPE vs. Stackelberg

In [30]: vt_MPE = np.zeros(n)


vt_follower = np.zeros(n)

for t in range(n):
vt_MPE[t] = -z[:, t].T @ P1 @ z[:, t]
vt_follower[t] = -yt_tilde[:, t].T @ P_tilde @ yt_tilde[:, t]

fig, ax = plt.subplots()
ax.plot(vt_MPE, 'b', label='MPE')
ax.plot(vt_leader, 'r', label='Stackelberg leader')
ax.plot(vt_follower, 'g', label='Stackelberg follower')
ax.set_title(r'MPE vs. Stackelberg Value Function')
ax.set_xlabel('t')
ax.legend(loc=(1.05, 0))
plt.show()

In [31]: # Display values


print("Computed values:\n")
print(f"vt_leader(y0) = {vt_leader[0]:.4f}")
print(f"vt_follower(y0) = {vt_follower[0]:.4f}")
print(f"vt_MPE(y0) = {vt_MPE[0]:.4f}")

Computed values:

vt_leader(y0) = 150.0324
vt_follower(y0) = 112.6559
vt_MPE(y0) = 133.3296

In [32]: # Compute the difference in total value between the Stackelberg and the MPE
vt_leader[0] + vt_follower[0] - 2 * vt_MPE[0]

Out[32]: -3.970942562088169
Chapter 35

Ramsey Plans, Time Inconsistency, Sustainable Plans

35.1 Contents

• Overview 35.2
• The Model 35.3
• Structure 35.4
• Intertemporal Influences 35.5
• Four Models of Government Policy 35.6
• A Ramsey Planner 35.7
• A Constrained-to-a-Constant-Growth-Rate Ramsey Government 35.8
• Markov Perfect Governments 35.9
• Equilibrium Outcomes for Three Models of Government Policy Making 35.10
• A Fourth Model of Government Decision Making 35.11
• Sustainable or Credible Plan 35.12
• Whose Credible Plan is it? 35.13
• Comparison of Equilibrium Values 35.14
• Note on Dynamic Programming Squared 35.15
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

35.2 Overview

This lecture describes a linear-quadratic version of a model that Guillermo Calvo [13] used to
illustrate the time inconsistency of optimal government plans.
Like Chang [14], we use the model as a laboratory in which to explore the consequences of
different timing protocols for government decision making.
The model focuses attention on intertemporal tradeoffs between
• welfare benefits that anticipated deflation generates by increasing a representative
agent’s liquidity as measured by his or her real money balances, and
• costs associated with distorting taxes that must be used to withdraw money from the
economy in order to generate anticipated deflation


The model features


• rational expectations
• costly government actions at all dates 𝑡 ≥ 1 that increase household utilities at dates
before 𝑡
• two Bellman equations, one that expresses the private sector's expectation of future inflation as a function of current and future government actions, another that describes
the value function of a Ramsey planner
A theme of this lecture is that timing protocols affect outcomes.
We’ll use ideas from papers by Cagan [12], Calvo [13], Stokey [62], [63], Chari and Kehoe [15],
Chang [14], and Abreu [1] as well as from chapter 19 of [43].
In addition, we’ll use ideas from linear-quadratic dynamic programming described in Linear
Quadratic Control as applied to Ramsey problems in Stackelberg problems.
In particular, we have specified the model in a way that allows us to use linear-quadratic
dynamic programming to compute an optimal government plan under a timing protocol in
which a government chooses an infinite sequence of money supply growth rates once and for
all at time 0.
We’ll start with some imports:

In [2]: import numpy as np


from quantecon import LQ
import matplotlib.pyplot as plt
%matplotlib inline

35.3 The Model

There is no uncertainty.
Let:
• 𝑝𝑡 be the log of the price level
• 𝑚𝑡 be the log of nominal money balances
• 𝜃𝑡 = 𝑝𝑡+1 − 𝑝𝑡 be the net rate of inflation between 𝑡 and 𝑡 + 1
• 𝜇𝑡 = 𝑚𝑡+1 − 𝑚𝑡 be the net rate of growth of nominal balances
The demand for real balances is governed by a perfect foresight version of the Cagan [12] demand function:

𝑚𝑡 − 𝑝𝑡 = −𝛼(𝑝𝑡+1 − 𝑝𝑡 ) , 𝛼 > 0 (1)

for 𝑡 ≥ 0.
Equation (1) asserts that the demand for real balances is inversely related to the public's expected rate of inflation, which here equals the actual rate of inflation.
(When there is no uncertainty, an assumption of rational expectations simplifies to perfect foresight).
(See [58] for a rational expectations version of the model when there is uncertainty)
Subtracting the demand function at time 𝑡 from the demand function at 𝑡 + 1 gives:

𝜇𝑡 − 𝜃𝑡 = −𝛼𝜃𝑡+1 + 𝛼𝜃𝑡

or

$$
\theta_t = \frac{\alpha}{1+\alpha} \theta_{t+1} + \frac{1}{1+\alpha} \mu_t \tag{2}
$$

Because $\alpha > 0$, $0 < \frac{\alpha}{1+\alpha} < 1$.
Definition: For a scalar $x_t$, let $L^2$ be the space of sequences $\{x_t\}_{t=0}^\infty$ satisfying

$$
\sum_{t=0}^{\infty} x_t^2 < +\infty
$$

We say that a sequence that belongs to $L^2$ is square summable.

When we assume that the sequence $\vec\mu = \{\mu_t\}_{t=0}^\infty$ is square summable and we require that the sequence $\vec\theta = \{\theta_t\}_{t=0}^\infty$ is square summable, the linear difference equation (2) can be solved forward to get:

$$
\theta_t = \frac{1}{1+\alpha} \sum_{j=0}^{\infty} \left( \frac{\alpha}{1+\alpha} \right)^j \mu_{t+j} \tag{3}
$$

Insight: In the spirit of Chang [14], note that equations (1) and (3) show that $\theta_t$ intermediates how choices of $\mu_{t+j}, \ j = 0, 1, \ldots$ impinge on time $t$ real balances $m_t - p_t = -\alpha \theta_t$.
We shall use this insight to help us simplify and analyze government policy problems.
That future rates of money creation influence earlier rates of inflation creates optimal government policy problems in which timing protocols matter.
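Here is a minimal sketch of the mapping in equation (3), with a hypothetical value of $\alpha$ and a made-up money growth path; we truncate the infinite sum, which is harmless when $\vec\mu$ settles down to a constant:

import numpy as np

α = 1.0                     # hypothetical parameter value
μ = np.full(500, 0.05)      # money growth path, constant after period 10
μ[:10] = 0.10

weights = (α / (1 + α)) ** np.arange(len(μ))

# θ_t from equation (3), truncating the tail of the infinite sum
θ = np.array([(1 / (1 + α)) * μ[t:] @ weights[:len(μ) - t]
              for t in range(len(μ))])

print(θ[:3])  # future money growth is already reflected in today's θ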
We can rewrite the model as:

$$
\begin{bmatrix} 1 \\ \theta_{t+1} \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ 0 & \frac{1+\alpha}{\alpha} \end{bmatrix}
\begin{bmatrix} 1 \\ \theta_t \end{bmatrix} +
\begin{bmatrix} 0 \\ -\frac{1}{\alpha} \end{bmatrix} \mu_t
$$

or

𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐵𝜇𝑡 (4)

We write the model in the state-space form (4) even though 𝜃0 is to be determined and so is
not an initial condition as it ordinarily would be in the state-space model described in Linear
Quadratic Control.
We write the model in the form (4) because we want to apply an approach described in
Stackelberg problems.
Assume that a representative household’s utility of real balances at time 𝑡 is:

𝑎2
𝑈 (𝑚𝑡 − 𝑝𝑡 ) = 𝑎0 + 𝑎1 (𝑚𝑡 − 𝑝𝑡 ) − (𝑚𝑡 − 𝑝𝑡 )2 , 𝑎0 > 0, 𝑎1 > 0, 𝑎2 > 0 (5)
2
𝑎1
The “bliss level” of real balances is then 𝑎2 .

The money demand function (1) and the utility function (5) imply that the utility-maximizing or bliss level of real balances is attained when:

$$
\theta_t = \theta^* = - \frac{a_1}{a_2 \alpha}
$$

Below, we introduce the discount factor 𝛽 ∈ (0, 1) that a representative household and a
benevolent government both use to discount future utilities.
(If we set parameters so that $\theta^* = \log(\beta)$, then we can regard a recommendation to set $\theta_t = \theta^*$ as a “poor man’s Friedman rule” that attains Milton Friedman’s optimal quantity of money).
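A one-line numerical check of this “poor man’s Friedman rule” parameterization, using the parameter values that the ChangLQ example below adopts:

import numpy as np

α, a1, a2 = 1, 0.5, 3      # values used in the ChangLQ example below

θ_star = -a1 / (a2 * α)    # bliss inflation rate
β = np.exp(θ_star)         # choose β so that θ* = log(β)

print(θ_star, β)           # ≈ -0.1667 and ≈ 0.8465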
Via equation (3), a government plan $\vec\mu = \{\mu_t\}_{t=0}^\infty$ leads to an equilibrium sequence of inflation outcomes $\vec\theta = \{\theta_t\}_{t=0}^\infty$.
We assume that social costs $\frac{c}{2} \mu_t^2$ are incurred at $t$ when the government changes the stock of nominal money balances at rate $\mu_t$.
Therefore, the one-period welfare function of a benevolent government is:


$$
-s(\theta_t, \mu_t) \equiv -r(x_t, \mu_t) =
\begin{bmatrix} 1 \\ \theta_t \end{bmatrix}'
\begin{bmatrix} a_0 & -\frac{a_1 \alpha}{2} \\ -\frac{a_1 \alpha}{2} & -\frac{a_2 \alpha^2}{2} \end{bmatrix}
\begin{bmatrix} 1 \\ \theta_t \end{bmatrix}
- \frac{c}{2} \mu_t^2 = -x_t' R x_t - Q \mu_t^2 \tag{6}
$$

Household welfare is summarized by:

$$
v_0 = - \sum_{t=0}^{\infty} \beta^t r(x_t, \mu_t) = - \sum_{t=0}^{\infty} \beta^t s(\theta_t, \mu_t) \tag{7}
$$

We can represent the dependence of 𝑣0 on (𝜃,⃗ 𝜇)⃗ recursively via the linear difference equation

𝑣𝑡 = −𝑠(𝜃𝑡 , 𝜇𝑡 ) + 𝛽𝑣𝑡+1 (8)

35.4 Structure

The following structure is induced by private agents’ behavior as summarized by the demand
function for money (1) that leads to equation (3) that tells how future settings of 𝜇 affect the
current value of 𝜃.
Equation (3) maps a policy sequence of money growth rates $\vec\mu = \{\mu_t\}_{t=0}^\infty \in L^2$ into an inflation sequence $\vec\theta = \{\theta_t\}_{t=0}^\infty \in L^2$.
These, in turn, induce a discounted value to a government sequence $\vec v = \{v_t\}_{t=0}^\infty \in L^2$ that satisfies the recursion

𝑣𝑡 = −𝑠(𝜃𝑡 , 𝜇𝑡 ) + 𝛽𝑣𝑡+1

where we have called 𝑠(𝜃𝑡 , 𝜇𝑡 ) = 𝑟(𝑥𝑡 , 𝜇𝑡 ) as above.


Thus, we have a triple of sequences $\vec\mu, \vec\theta, \vec v$ associated with a $\vec\mu \in L^2$.
At this point 𝜇⃗ ∈ 𝐿2 is an arbitrary exogenous policy.

To make 𝜇⃗ endogenous, we require a theory of government decisions.
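Before introducing such a theory, here is a sketch (not part of the lecture's code) of the mapping $\vec\mu \mapsto (\vec\theta, \vec v)$ for an arbitrary exogenous policy: compute $\vec\theta$ from equation (3) as above, then evaluate the recursion backward from an approximate terminal value, which is accurate when $\vec\mu$ is eventually constant:

import numpy as np

α, a0, a1, a2, c = 1, 1, 0.5, 3, 2     # illustrative parameters, as below
β = np.exp(-a1 / (α * a2))

T = 500
μ = np.full(T, 0.05)                   # an arbitrary exogenous policy

w = (α / (1 + α)) ** np.arange(T)
θ = np.array([(1 / (1 + α)) * μ[t:] @ w[:T - t] for t in range(T)])

def s(θt, μt):
    """Negative of one-period welfare, as in equation (6)."""
    return -(a0 + a1 * (-α * θt) - a2 / 2 * (-α * θt)**2 - c / 2 * μt**2)

# v_t = -s(θ_t, μ_t) + β v_{t+1}, iterated backward from a constant tail
v = np.zeros(T + 1)
v[T] = -s(θ[-1], μ[-1]) / (1 - β)
for t in reversed(range(T)):
    v[t] = -s(θ[t], μ[t]) + β * v[t + 1]

print(v[0])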

35.5 Intertemporal Influences

Criterion function (7) and the constraint system (4) exhibit the following structure:
• Setting $\mu_t \neq 0$ imposes costs $\frac{c}{2} \mu_t^2$ at time $t$ and at no other times; but
• The money growth rate 𝜇𝑡 affects the representative household’s one-period utilities at
all dates 𝑠 = 0, 1, … , 𝑡.
That settings of 𝜇 at one date affect household utilities at earlier dates sets the stage for the
emergence of a time-inconsistent optimal government plan under a Ramsey (also called a
Stackelberg) timing protocol.
We’ll study outcomes under a Ramsey timing protocol below.
But we’ll also study the consequences of other timing protocols.

35.6 Four Models of Government Policy

We consider four models of policymakers that differ in


• what a policymaker is allowed to choose, either a sequence 𝜇⃗ or just a single period 𝜇𝑡 .
• when a policymaker chooses, either at time 0 or at times 𝑡 ≥ 0.
• what a policymaker assumes about how its choice of 𝜇𝑡 affects private agents’ expecta-
tions about earlier and later inflation rates.
In two of our models, a single policymaker chooses a sequence $\{\mu_t\}_{t=0}^\infty$ once and for all, taking into account how $\mu_t$ affects household one-period utilities at dates $s = 0, 1, \ldots, t-1$
• these two models thus employ a Ramsey or Stackelberg timing protocol.
In two other models, there is a sequence of policymakers, each of whom sets 𝜇𝑡 at one 𝑡 only
• Each such policymaker ignores effects that its choice of 𝜇𝑡 has on household one-period
utilities at dates 𝑠 = 0, 1, … , 𝑡 − 1.
The four models differ with respect to timing protocols, constraints on government choices,
and government policymakers’ beliefs about how their decisions affect private agents’ beliefs
about future government decisions.
The models are
• A single Ramsey planner chooses a sequence {𝜇𝑡 }∞ 𝑡=0 once and for all at time 0.
• A single Ramsey planner chooses a sequence {𝜇𝑡 }∞ 𝑡=0 once and for all at time 0 subject
to the constraint that 𝜇𝑡 = 𝜇 for all 𝑡 ≥ 0.
• A sequence of separate policymakers chooses 𝜇𝑡 for 𝑡 = 0, 1, 2, …
– a time 𝑡 policymaker chooses 𝜇𝑡 only and forecasts that future government deci-
sions are unaffected by its choice.
• A sequence of separate policymakers chooses 𝜇𝑡 for 𝑡 = 0, 1, 2, …
– a time 𝑡 policymaker chooses only 𝜇𝑡 but believes that its choice of 𝜇𝑡 shapes pri-
vate agents’ beliefs about future rates of money creation and inflation, and through
them, future government actions.

35.7 A Ramsey Planner

First, we consider a Ramsey planner that chooses $\{\mu_t, \theta_t\}_{t=0}^\infty$ to maximize (7) subject to the law of motion (4).
We can split this problem into two stages, as in Stackelberg problems and [43] Chapter 19.
In the first stage, we take the initial inflation rate 𝜃0 as given, and then solve the resulting
LQ dynamic programming problem.
In the second stage, we maximize over the initial inflation rate 𝜃0 .
Define a feasible set of $(\vec x_1, \vec \mu_0)$ sequences:

$$
\Omega(x_0) = \left\{ (\vec x_1, \vec \mu_0) : x_{t+1} = A x_t + B \mu_t, \ \forall t \geq 0 \right\}
$$

35.7.1 Subproblem 1

The value function

$$
J(x_0) = \max_{(\vec x_1, \vec\mu_0) \in \Omega(x_0)} \ - \sum_{t=0}^{\infty} \beta^t r(x_t, \mu_t)
$$

satisfies the Bellman equation

$$
J(x) = \max_{\mu, x'} \left\{ -r(x, \mu) + \beta J(x') \right\}
$$

subject to:

$$
x' = A x + B \mu
$$

As in Stackelberg problems, we map this problem into a linear-quadratic control problem and
then carefully use the optimal value function associated with it.
Guessing that $J(x) = -x' P x$ and substituting into the Bellman equation gives rise to the algebraic matrix Riccati equation:

$$
P = R + \beta A' P A - \beta^2 A' P B (Q + \beta B' P B)^{-1} B' P A
$$

and the optimal decision rule

$$
\mu_t = -F x_t
$$

where

$$
F = \beta (Q + \beta B' P B)^{-1} B' P A
$$

The QuantEcon LQ class solves for 𝐹 and 𝑃 given inputs 𝑄, 𝑅, 𝐴, 𝐵, and 𝛽.
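For instance, a minimal sketch of subproblem 1 with the LQ class, using the illustrative parameter values adopted later in this lecture (the ChangLQ class below packages the same steps; the last line anticipates subproblem 2):

import numpy as np
from quantecon import LQ

α, a0, a1, a2, c = 1, 1, 0.5, 3, 2
β = np.exp(-a1 / (α * a2))

# Matrices from (4) and (6); signs flipped because LQ minimizes
R = -np.array([[a0,          -a1 * α / 2],
               [-a1 * α / 2, -a2 * α**2 / 2]])
Q = np.array([[c / 2]])
A = np.array([[1, 0], [0, (1 + α) / α]])
B = np.array([[0], [-1 / α]])

lq = LQ(Q, R, A, B, beta=β)
P, F, d = lq.stationary_values()

θ0_star = -P[0, 1] / P[1, 1]   # subproblem 2, derived next
print(F, θ0_star)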



35.7.2 Subproblem 2

The value of the Ramsey problem is

$$
V = \max_{x_0} J(x_0)
$$

The value function

$$
J(x_0) = - \begin{bmatrix} 1 & \theta_0 \end{bmatrix}
\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}
\begin{bmatrix} 1 \\ \theta_0 \end{bmatrix}
= -P_{11} - 2 P_{21} \theta_0 - P_{22} \theta_0^2
$$

Maximizing this with respect to $\theta_0$ yields the FOC:

$$
-2 P_{21} - 2 P_{22} \theta_0 = 0
$$

which implies

$$
\theta_0^* = - \frac{P_{21}}{P_{22}}
$$

35.7.3 Representation of Ramsey Plan

The preceding calculations indicate that we can represent a Ramsey plan 𝜇⃗ recursively with
the following system created in the spirit of Chang [14]:

𝜃0 = 𝜃0∗
𝜇𝑡 = 𝑏0 + 𝑏1 𝜃𝑡 (9)
𝜃𝑡+1 = 𝑑0 + 𝑑1 𝜃𝑡

To interpret this system, think of the sequence $\{\theta_t\}_{t=0}^\infty$ as a sequence of synthetic promised inflation rates that are just computational devices for generating a sequence $\vec\mu$ of money growth rates that are to be substituted into equation (3) to form actual rates of inflation.
It can be verified that if we substitute a plan $\vec\mu = \{\mu_t\}_{t=0}^\infty$ that satisfies these equations into equation (3), we obtain the same sequence $\vec\theta$ generated by the system (9).
(Here an application of the Big 𝐾, little 𝑘 trick could once again be enlightening)
Thus, our construction of a Ramsey plan guarantees that promised inflation equals actual
inflation.
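Continuing the sketch above (reusing P, F, A, B and θ0_star from that snippet), the coefficients $b_0, b_1, d_0, d_1$ in (9) can be read off the decision rule and the closed-loop matrix:

# μ_t = -F x_t with x_t = (1, θ_t)', so b0 = -F[0, 0] and b1 = -F[0, 1]
b0, b1 = -F[0, 0], -F[0, 1]

# θ_{t+1} is the second component of x_{t+1} = (A - B F) x_t
AF = A - B @ F
d0, d1 = AF[1, 0], AF[1, 1]

# Iterate the system (9) forward from θ0*
θ = θ0_star
for t in range(5):
    μ = b0 + b1 * θ
    print(f"t={t}: θ={θ:.4f}, μ={μ:.4f}")
    θ = d0 + d1 * θ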

Multiple roles of 𝜃𝑡

The inflation rate 𝜃𝑡 that appears in the system (9) and equation (3) plays three roles simul-
taneously:
• In equation (3), 𝜃𝑡 is the actual rate of inflation between 𝑡 and 𝑡 + 1.
• In equation (2) and (3), 𝜃𝑡 is also the public’s expected rate of inflation between 𝑡 and
𝑡 + 1.
• In system (9), 𝜃𝑡 is a promised rate of inflation chosen by the Ramsey planner at time 0.

35.7.4 Time Inconsistency

As discussed in Stackelberg problems and Optimal taxation with state-contingent debt, a con-
tinuation Ramsey plan is not a Ramsey plan.
This is a concise way of characterizing the time inconsistency of a Ramsey plan.
The time inconsistency of a Ramsey plan has motivated other models of government decision
making that alter either
• the timing protocol and/or
• assumptions about how government decision makers think their decisions affect private
agents’ beliefs about future government decisions

35.8 A Constrained-to-a-Constant-Growth-Rate Ramsey Government

We now consider the following peculiar model of optimal government behavior.

We have created this model in order to highlight an aspect of an optimal government policy associated with its time inconsistency, namely, the feature that optimal settings of the policy instrument vary over time.
Instead of allowing the Ramsey government to choose different settings of its instrument at different moments, we now assume that at time 0 a Ramsey government once and for all chooses a constant sequence $\mu_t = \check\mu$ for all $t \geq 0$ to maximize

$$
U(-\alpha \check\mu) - \frac{c}{2} \check\mu^2
$$

Here we have imposed the perfect foresight outcome implied by equation (2) that $\theta_t = \check\mu$ when the government chooses a constant $\mu$ for all $t \geq 0$.
With the quadratic form (5) for the utility function $U$, the maximizing $\check\mu$ is

$$
\check\mu = - \frac{\alpha a_1}{\alpha^2 a_2 + c}
$$

Summary: We have introduced the constrained-to-a-constant $\mu$ government in order to highlight time-variation of $\mu_t$ as a telltale sign of time inconsistency of a Ramsey plan.

35.9 Markov Perfect Governments

We now change the timing protocol by considering a sequence of government policymakers, the time $t$ representative of which chooses $\mu_t$ and expects all future governments to set $\mu_{t+j} = \bar\mu$.
This assumption mirrors an assumption made in a different setting, Markov Perfect Equilibrium.
Further, a government policymaker at 𝑡 believes that 𝜇̄ is unaffected by its choice of 𝜇𝑡 .
The time 𝑡 rate of inflation is then:

$$
\theta_t = \frac{\alpha}{1+\alpha} \bar\mu + \frac{1}{1+\alpha} \mu_t
$$

The time 𝑡 government policymaker then chooses 𝜇𝑡 to maximize:

$$
W = U(-\alpha \theta_t) - \frac{c}{2} \mu_t^2 + \beta V(\bar\mu)
$$

where $V(\bar\mu)$ is the time 0 value $v_0$ of recursion (8) under a money supply growth rate that is forever constant at $\bar\mu$.
Substituting for $U$ and $\theta_t$ gives:

$$
W = a_0 + a_1 \left( -\frac{\alpha^2}{1+\alpha} \bar\mu - \frac{\alpha}{1+\alpha} \mu_t \right)
- \frac{a_2}{2} \left( -\frac{\alpha^2}{1+\alpha} \bar\mu - \frac{\alpha}{1+\alpha} \mu_t \right)^2
- \frac{c}{2} \mu_t^2 + \beta V(\bar\mu)
$$

The first-order necessary condition for 𝜇𝑡 is then:

$$
- \frac{\alpha}{1+\alpha} a_1 - a_2 \left( -\frac{\alpha^2}{1+\alpha} \bar\mu - \frac{\alpha}{1+\alpha} \mu_t \right) \left( -\frac{\alpha}{1+\alpha} \right) - c \mu_t = 0
$$

Rearranging we get:

$$
\mu_t = \frac{-a_1}{\frac{1+\alpha}{\alpha} c + \frac{\alpha}{1+\alpha} a_2}
- \frac{\alpha^2 a_2}{\left[ \frac{1+\alpha}{\alpha} c + \frac{\alpha}{1+\alpha} a_2 \right] (1+\alpha)} \, \bar\mu
$$

A Markov Perfect Equilibrium (MPE) outcome sets $\mu_t = \bar\mu$:

$$
\mu_t = \bar\mu = \frac{-a_1}{\frac{1+\alpha}{\alpha} c + \frac{\alpha}{1+\alpha} a_2 + \frac{\alpha^2}{1+\alpha} a_2}
$$

In light of results presented in the previous section, this can be simplified to:

$$
\bar\mu = - \frac{\alpha a_1}{\alpha^2 a_2 + (1+\alpha) c}
$$
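A quick numerical comparison of the two constant policies, with the illustrative parameters used below; because the MPE denominator carries $(1+\alpha)c$ rather than $c$, money growth is closer to zero in the MPE than under the constrained Ramsey plan:

α, a1, a2, c = 1, 0.5, 3, 2    # illustrative values, as below

μ_check = -α * a1 / (α**2 * a2 + c)              # constrained Ramsey
μ_bar = -α * a1 / (α**2 * a2 + (1 + α) * c)      # Markov perfect

print(μ_check, μ_bar)          # -0.1 and ≈ -0.0714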

35.10 Equilibrium Outcomes for Three Models of Government


Policy Making

Below we compute sequences $\{\theta_t, \mu_t\}$ under a Ramsey plan and compare these with the constant levels of $\theta$ and $\mu$ in a) a Markov Perfect Equilibrium, and b) a Ramsey plan in which the planner is restricted to choose $\mu_t = \check\mu$ for all $t \geq 0$.
We denote the Ramsey sequence as $\theta^R, \mu^R$ and the MPE values as $\theta^{MPE}, \mu^{MPE}$.
The bliss level of inflation is denoted by $\theta^*$.
First, we will create a class ChangLQ that solves the models and stores their values

In [3]: class ChangLQ:


"""
Class to solve LQ Chang model

"""
def __init__(self, α, α0, α1, α2, c, T=1000, θ_n=200):

# Record parameters
self.α, self.α0, self.α1 = α, α0, α1
self.α2, self.c, self.T, self.θ_n = α2, c, T, θ_n

# Create β using "Poor Man's Friedman Rule"


self.β = np.exp(-α1 / (α * α2))

# Solve the Ramsey Problem #

# LQ Matrices
R = -np.array([[α0, -α1 * α / 2],
[-α1 * α/2, -α2 * α**2 / 2]])
Q = -np.array([[-c / 2]])
A = np.array([[1, 0], [0, (1 + α) / α]])
B = np.array([[0], [-1 / α]])

# Solve LQ Problem (Subproblem 1)


lq = LQ(Q, R, A, B, beta=self.β)
self.P, self.F, self.d = lq.stationary_values()

# Solve Subproblem 2
self.θ_R = -self.P[0, 1] / self.P[1, 1]

# Find bliss level of θ


self.θ_B = np.log(self.β)

# Solve the Markov Perfect Equilibrium


self.μ_MPE = -α1 / ((1 + α) / α * c + α / (1 + α)
* α2 + α**2 / (1 + α) * α2)
self.θ_MPE = self.μ_MPE
self.μ_check = -α * α1 / (α2 * α**2 + c)

# Calculate value under MPE and Check economy


self.J_MPE = (α0 + α1 * (-α * self.μ_MPE) - α2 / 2
              * (-α * self.μ_MPE)**2 - c / 2 * self.μ_MPE**2) / (1 - self.β)
self.J_check = (α0 + α1 * (-α * self.μ_check) - α2 / 2
                * (-α * self.μ_check)**2 - c / 2 * self.μ_check**2) \
               / (1 - self.β)

# Simulate Ramsey plan for large number of periods


θ_series = np.vstack((np.ones((1, T)), np.zeros((1, T))))
μ_series = np.zeros(T)
J_series = np.zeros(T)
θ_series[1, 0] = self.θ_R
μ_series[0] = -self.F.dot(θ_series[:, 0])
J_series[0] = -θ_series[:, 0] @ self.P @ θ_series[:, 0].T
for i in range(1, T):
θ_series[:, i] = (A - B @ self.F) @ θ_series[:, i-1]
μ_series[i] = -self.F @ θ_series[:, i]
J_series[i] = -θ_series[:, i] @ self.P @ θ_series[:, i].T

self.J_series = J_series
self.μ_series = μ_series
self.θ_series = θ_series

# Find the range of θ in Ramsey plan


θ_LB = min(θ_series[1, :])
θ_LB = min(θ_LB, self.θ_B)
θ_UB = max(θ_series[1, :])
θ_UB = max(θ_UB, self.θ_MPE)
θ_range = θ_UB - θ_LB
self.θ_LB = θ_LB - 0.05 * θ_range
self.θ_UB = θ_UB + 0.05 * θ_range
self.θ_range = θ_range

# Find value function and policy functions over range of θ


θ_space = np.linspace(self.θ_LB, self.θ_UB, 200)
J_space = np.zeros(200)
check_space = np.zeros(200)
μ_space = np.zeros(200)
θ_prime = np.zeros(200)
for i in range(200):
J_space[i] = - np.array((1, θ_space[i])) \
@ self.P @ np.array((1, θ_space[i])).T
μ_space[i] = - self.F @ np.array((1, θ_space[i]))
x_prime = (A - B @ self.F) @ np.array((1, θ_space[i]))
θ_prime[i] = x_prime[1]
check_space[i] = (α0 + α1 * (-α * θ_space[i]) -
α2/2 * (-α * θ_space[i])**2 - c/2 * θ_space[i]**2) / (1 - self.β)

J_LB = min(J_space)
J_UB = max(J_space)
J_range = J_UB - J_LB
self.J_LB = J_LB - 0.05 * J_range
self.J_UB = J_UB + 0.05 * J_range
self.J_range = J_range
self.J_space = J_space
self.θ_space = θ_space
self.μ_space = μ_space
self.θ_prime = θ_prime
self.check_space = check_space

We will create an instance of ChangLQ with the following parameters

In [4]: clq = ChangLQ(α=1, α0=1, α1=0.5, α2=3, c=2)


clq.β

Out[4]: 0.8464817248906141

The following code generates a figure that plots the value function from the Ramsey Planner's problem, which is maximized at $\theta_0^R$.
The figure also shows the limiting value $\theta_\infty^R$ to which the inflation rate $\theta_t$ converges under the Ramsey plan and compares it to the MPE value and the bliss value.

In [5]: def plot_value_function(clq):


"""
Method to plot the value function over the relevant range of θ

Here clq is an instance of ChangLQ



"""
fig, ax = plt.subplots()

ax.set_xlim([clq.θ_LB, clq.θ_UB])
ax.set_ylim([clq.J_LB, clq.J_UB])

# Plot value function


ax.plot(clq.θ_space, clq.J_space, lw=2)
plt.xlabel(r"$\theta$", fontsize=18)
plt.ylabel(r"$J(\theta)$", fontsize=18)

t1 = clq.θ_space[np.argmax(clq.J_space)]
tR = clq.θ_series[1, -1]
θ_points = [t1, tR, clq.θ_B, clq.θ_MPE]
labels = [r"$\theta_0^R$", r"$\theta_\infty^R$",
r"$\theta^*$", r"$\theta^{MPE}$"]

# Add points for θs


for θ, label in zip(θ_points, labels):
ax.scatter(θ, clq.J_LB + 0.02 * clq.J_range, 60, 'black', 'v')
ax.annotate(label,
xy=(θ, clq.J_LB + 0.01 * clq.J_range),
xytext=(θ - 0.01 * clq.θ_range,
clq.J_LB + 0.08 * clq.J_range),
fontsize=18)
plt.tight_layout()
plt.show()

plot_value_function(clq)

The next code generates a figure that plots the value function from the Ramsey Planner’s

problem as well as that for a Ramsey planner that must choose a constant 𝜇 (that in turn
equals an implied constant 𝜃).

In [6]: def compare_ramsey_check(clq):


"""
Method to compare values of Ramsey and Check

Here clq is an instance of ChangLQ


"""
fig, ax = plt.subplots()
check_min = min(clq.check_space)
check_max = max(clq.check_space)
check_range = check_max - check_min
check_LB = check_min - 0.05 * check_range
check_UB = check_max + 0.05 * check_range
ax.set_xlim([clq.θ_LB, clq.θ_UB])
ax.set_ylim([check_LB, check_UB])
ax.plot(clq.θ_space, clq.J_space, lw=2, label=r"$J(\theta)$")

plt.xlabel(r"$\theta$", fontsize=18)
ax.plot(clq.θ_space, clq.check_space,
lw=2, label=r"$V^\check(\theta)$")
plt.legend(fontsize=14, loc='upper left')

θ_points = [clq.θ_space[np.argmax(clq.J_space)],
clq.μ_check]
labels = [r"$\theta_0^R$", r"$\theta^\check$"]

for θ, label in zip(θ_points, labels):


ax.scatter(θ, check_LB + 0.02 * check_range, 60, 'k', 'v')
ax.annotate(label,
xy=(θ, check_LB + 0.01 * check_range),
xytext=(θ - 0.02 * check_range,
check_LB + 0.08 * check_range),
fontsize=18)
plt.tight_layout()
plt.show()

compare_ramsey_check(clq)

The next code generates figures that plot the policy functions for a continuation Ramsey
planner.
The left figure shows the choice of 𝜃′ chosen by a continuation Ramsey planner who inherits
𝜃.
The right figure plots a continuation Ramsey planner’s choice of 𝜇 as a function of an inher-
ited 𝜃.

In [7]: def plot_policy_functions(clq):


"""
Method to plot the policy functions over the relevant range of θ

Here clq is an instance of ChangLQ


"""
fig, axes = plt.subplots(1, 2, figsize=(12, 4))

labels = [r"$\theta_0^R$", r"$\theta_\infty^R$"]

ax = axes[0]
ax.set_ylim([clq.θ_LB, clq.θ_UB])
ax.plot(clq.θ_space, clq.θ_prime,
label=r"$\theta'(\theta)$", lw=2)
x = np.linspace(clq.θ_LB, clq.θ_UB, 5)
ax.plot(x, x, 'k--', lw=2, alpha=0.7)
ax.set_ylabel(r"$\theta'$", fontsize=18)

θ_points = [clq.θ_space[np.argmax(clq.J_space)],
clq.θ_series[1, -1]]

for θ, label in zip(θ_points, labels):


ax.scatter(θ, clq.θ_LB + 0.02 * clq.θ_range, 60, 'k', 'v')
ax.annotate(label,
xy=(θ, clq.θ_LB + 0.01 * clq.θ_range),


xytext=(θ - 0.02 * clq.θ_range,
clq.θ_LB + 0.08 * clq.θ_range),
fontsize=18)

ax = axes[1]
μ_min = min(clq.μ_space)
μ_max = max(clq.μ_space)
μ_range = μ_max - μ_min
ax.set_ylim([μ_min - 0.05 * μ_range, μ_max + 0.05 * μ_range])
ax.plot(clq.θ_space, clq.μ_space, lw=2)
ax.set_ylabel(r"$\mu(\theta)$", fontsize=18)

for ax in axes:
ax.set_xlabel(r"$\theta$", fontsize=18)
ax.set_xlim([clq.θ_LB, clq.θ_UB])

for θ, label in zip(θ_points, labels):


ax.scatter(θ, μ_min - 0.03 * μ_range, 60, 'black', 'v')
ax.annotate(label, xy=(θ, μ_min - 0.03 * μ_range),
xytext=(θ - 0.02 * clq.θ_range,
μ_min + 0.03 * μ_range),
fontsize=18)
plt.tight_layout()
plt.show()

plot_policy_functions(clq)

The following code generates a figure that plots sequences of $\mu$ and $\theta$ in the Ramsey plan and compares these to the constant levels in an MPE and in a Ramsey plan with a government restricted to set $\mu_t$ to a constant for all $t$.

In [8]: def plot_ramsey_MPE(clq, T=15):


"""
Method to plot Ramsey plan against Markov Perfect Equilibrium

Here clq is an instance of ChangLQ


"""
fig, axes = plt.subplots(1, 2, figsize=(12, 4))

plots = [clq.θ_series[1, 0:T], clq.μ_series[0:T]]


MPEs = [clq.θ_MPE, clq.μ_MPE]
labels = [r"\theta", r"\mu"]

axes[0].hlines(clq.θ_B, 0, T-1, 'r', label=r"$\theta^*$")

for ax, plot, MPE, label in zip(axes, plots, MPEs, labels):


ax.plot(plot, label=r"$" + label + "^R$")
ax.hlines(MPE, 0, T-1, 'orange', label=r"$" + label + "^{MPE}$")
ax.hlines(clq.μ_check, 0, T, 'g', label=r"$" + label + "^\check$")
ax.set_xlabel(r"$t$", fontsize=16)
ax.set_ylabel(r"$" + label + "_t$", fontsize=18)
ax.legend(loc='upper right')

plt.tight_layout()
plt.show()

plot_ramsey_MPE(clq)

35.10.1 Time Inconsistency of Ramsey Plan

The variation over time in 𝜇⃗ chosen by the Ramsey planner is a symptom of time inconsis-
tency.
• The Ramsey planner reaps immediate benefits from promising lower inflation later to be
achieved by costly distorting taxes.
• These benefits are intermediated by reductions in expected inflation that precede the
reductions in money creation rates that rationalize them, as indicated by equation (3).
• A government authority offered the opportunity to ignore effects on past utilities and to
reoptimize at date 𝑡 ≥ 1 would, if allowed, want to deviate from a Ramsey plan.
Note: A modified Ramsey plan constructed under the restriction that 𝜇𝑡 must be constant
over time is time consistent (see 𝜇̌ and 𝜃 ̌ in the above graphs).

35.10.2 Meaning of Time Inconsistency

In settings in which governments actually choose sequentially, many economists regard a time inconsistent plan as implausible because of the incentives to deviate that occur along the plan.
A way to summarize this defect in a Ramsey plan is to say that it is not credible because incentives for policymakers to deviate from it endure.
For that reason, the Markov perfect equilibrium concept attracts many economists.

• A Markov perfect equilibrium plan is constructed to ensure that government policymakers who choose sequentially do not want to deviate from it.
The no incentive to deviate from the plan property is what makes the Markov perfect equilib-
rium concept attractive.

35.10.3 Ramsey Plans Strike Back

Research by Abreu [1], Chari and Kehoe [15], and Stokey [62], [63] discovered conditions under which a Ramsey plan can be rescued from the complaint that it is not credible.
They accomplished this by expanding the description of a plan to include expectations about
adverse consequences of deviating from it that can serve to deter deviations.
We turn to such theories of sustainable plans next.

35.11 A Fourth Model of Government Decision Making

This is a model in which


• The government chooses {𝜇𝑡 }∞ 𝑡=0 not once and for all at 𝑡 = 0 but chooses to set 𝜇𝑡 at
time 𝑡, not before.
• private agents’ forecasts of {𝜇𝑡+𝑗+1 , 𝜃𝑡+𝑗+1 }∞
𝑗=0 respond to whether the government at 𝑡
confirms or disappoints their forecasts of 𝜇𝑡 brought into period 𝑡 from period 𝑡 − 1.
• the government at each time 𝑡 understands how private agents’ forecasts will respond to
its choice of 𝜇𝑡 .
• at each 𝑡, the government chooses 𝜇𝑡 to maximize a continuation discounted utility of a
representative household.

35.11.1 A Theory of Government Decision Making

𝜇⃗ is chosen by a sequence of government decision makers, one for each 𝑡 ≥ 0.


We assume the following within-period and between-period timing protocol for each 𝑡 ≥ 0:
• at time $t-1$, private agents expect that the government will set $\mu_t = \tilde\mu_t$, and more generally that it will set $\mu_{t+j} = \tilde\mu_{t+j}$ for all $j \geq 0$.
• The forecasts $\{\tilde\mu_{t+j}\}_{j \geq 0}$ determine a $\theta_t = \tilde\theta_t$ and an associated log of real balances $m_t - p_t = -\alpha \tilde\theta_t$ at $t$.
• Given those expectations and an associated $\theta_t = \tilde\theta_t$, at $t$ a government is free to set $\mu_t \in \mathbb{R}$.
• If the government at $t$ confirms private agents' expectations by setting $\mu_t = \tilde\mu_t$ at time $t$, private agents expect the continuation government policy $\{\tilde\mu_{t+j+1}\}_{j=0}^\infty$ and therefore bring expectation $\tilde\theta_{t+1}$ into period $t+1$.
• If the government at $t$ disappoints private agents by setting $\mu_t \neq \tilde\mu_t$, private agents expect $\{\mu_j^A\}_{j=0}^\infty$ as the continuation policy for $t+1$, i.e., $\{\mu_{t+j+1}\} = \{\mu_j^A\}_{j=0}^\infty$, and therefore expect an associated $\theta_0^A$ for $t+1$. Here $\vec\mu^A = \{\mu_j^A\}_{j=0}^\infty$ is an alternative government plan to be described below.

35.11.2 Temptation to Deviate from Plan

The government’s one-period return function 𝑠(𝜃, 𝜇) described in equation (6) above has the
property that for all 𝜃

−𝑠(𝜃, 0) ≥ −𝑠(𝜃, 𝜇)

This inequality implies that whenever the policy calls for the government to set $\mu \neq 0$, the government could raise its one-period payoff by setting $\mu = 0$.
Disappointing private sector expectations in that way would increase the government's current payoff but would have adverse consequences for subsequent government payoffs because the private sector would alter its expectations about future settings of $\mu$.
The temporary gain constitutes the government's temptation to deviate from a plan.
If the government at $t$ is to resist the temptation to raise its current payoff, it is only because it forecasts adverse consequences that its setting of $\mu_t$ would bring for continuation government payoffs via alterations in the private sector's expectations.
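A small numerical illustration of this temptation (a sketch, with the same illustrative parameters as above): for any $\theta$ and any $\mu \neq 0$, setting $\mu = 0$ raises the one-period return $-s(\theta, \mu)$ of equation (6):

import numpy as np

α, a0, a1, a2, c = 1, 1, 0.5, 3, 2

def minus_s(θ, μ):
    """One-period government return -s(θ, μ) from equation (6)."""
    return a0 + a1 * (-α * θ) - a2 / 2 * (-α * θ)**2 - c / 2 * μ**2

θ_grid = np.linspace(-0.5, 0.5, 11)
print(np.all(minus_s(θ_grid, 0.0) >= minus_s(θ_grid, -0.1)))  # True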

35.12 Sustainable or Credible Plan

We call a plan $\vec\mu$ sustainable or credible if at each $t \geq 0$ the government chooses to confirm private agents' prior expectation of its setting for $\mu_t$.
The government will choose to confirm prior expectations only if the long-term loss from disappointing private sector expectations – coming from the government's understanding of the way the private sector adjusts its expectations in response to having its prior expectations at $t$ disappointed – outweighs the short-term gain from disappointing those expectations.
The theory of sustainable or credible plans assumes throughout that private sector expectations about what future governments will do are based on the assumption that governments at times $t \geq 0$ always act to maximize the continuation discounted utilities that describe those governments' purposes.
This aspect of the theory means that credible plans always come in pairs:
• a credible (continuation) plan to be followed if the government at 𝑡 confirms private
sector expectations
• a credible plan to be followed if the government at 𝑡 disappoints private sector expec-
tations
That credible plans come in pairs threatens to bring an explosion of plans to keep track of
• each credible plan itself consists of two credible plans
• therefore, the number of plans underlying one plan is unbounded
But Dilip Abreu showed how to render manageable the number of plans that must be kept
track of.
The key is an object called a self-enforcing plan.

35.12.1 Abreu’s Self-Enforcing Plan

A plan $\vec\mu^A$ (here the superscript $A$ is for Abreu) is said to be self-enforcing if
• the consequence of disappointing private agents' expectations at time $j$ is to restart plan $\vec\mu^A$ at time $j+1$
• the consequence of restarting the plan is sufficiently adverse that it forever deters all deviations from the plan
More precisely, a government plan $\vec\mu^A$ with equilibrium inflation sequence $\vec\theta^A$ is self-enforcing if

$$
v_j^A = -s(\theta_j^A, \mu_j^A) + \beta v_{j+1}^A
\geq -s(\theta_j^A, 0) + \beta v_0^A \equiv v_j^{A,D}, \quad j \geq 0 \tag{10}
$$

(Here it is useful to recall that setting 𝜇 = 0 is the maximizing choice for the government’s
one-period return function)
The first line tells the consequences of confirming private agents' expectations by following the plan, while the second line tells the consequences of disappointing private agents' expectations by deviating from the plan.
A consequence of the inequality stated in the definition is that a self-enforcing plan is credible.
Self-enforcing plans can be used to construct other credible plans, including ones with better
values.
Thus, where $\vec v^A$ is the value associated with a self-enforcing plan $\vec\mu^A$, a sufficient condition for another plan $\vec\mu$ associated with inflation $\vec\theta$ and value $\vec v$ to be credible is that

$$
v_j = -s(\theta_j, \mu_j) + \beta v_{j+1}
\geq -s(\theta_j, 0) + \beta v_0^A \quad \forall j \geq 0 \tag{11}
$$

For this condition to be satisfied it is necessary and sufficient that

$$
-s(\theta_j, 0) - (-s(\theta_j, \mu_j)) < \beta (v_{j+1} - v_0^A)
$$

The left side of the above inequality is the government’s gain from deviating from the plan,
while the right side is the government’s loss from deviating from the plan.
A government never wants to deviate from a credible plan.
Abreu taught us that a key step in constructing a credible plan is first constructing a self-enforcing plan that has a low time 0 value.
The idea is to use the self-enforcing plan as a continuation plan whenever the government’s
choice at time 𝑡 fails to confirm private agents’ expectation.
We shall use a construction featured in Abreu ([1]) to construct a self-enforcing plan with low
time 0 value.

35.12.2 Abreu Carrot-Stick Plan

Abreu ([1]) invented a way to create a self-enforcing plan with a low initial value.
Imitating his idea, we can construct a self-enforcing plan 𝜇⃗ with a low time 0 value to the
government by insisting that future government decision makers set 𝜇𝑡 to a value yielding

low one-period utilities to the household for a long time, after which government decisions
thereafter yield high one-period utilities.
• Low one-period utilities early are a stick
• High one-period utilities later are a carrot
Consider a candidate plan $\vec\mu^A$ that sets $\mu_t^A = \bar\mu$ (a high positive number) for $T_A$ periods, and then reverts to the Ramsey plan.
Denote this sequence by $\{\mu_t^A\}_{t=0}^\infty$.

The sequence of inflation rates implied by this plan, $\{\theta_t^A\}_{t=0}^\infty$, can be calculated using:

$$
\theta_t^A = \frac{1}{1+\alpha} \sum_{j=0}^{\infty} \left( \frac{\alpha}{1+\alpha} \right)^j \mu_{t+j}^A
$$

The value of $\{\theta_t^A, \mu_t^A\}_{t=0}^\infty$ at time 0 is

$$
v_0^A = - \sum_{t=0}^{T_A - 1} \beta^t s(\theta_t^A, \mu_t^A) + \beta^{T_A} J(\theta_0^R)
$$

For an appropriate 𝑇𝐴 , this plan can be verified to be self-enforcing and therefore credible.

35.12.3 Example of Self-Enforcing Plan

The following example implements an Abreu stick-and-carrot plan.


The government sets $\mu_t^A = 0.1$ for $t = 0, 1, \ldots, 9$ and then starts the Ramsey plan.

We have computed outcomes for this plan.

For this plan, we plot the $\theta^A, \mu^A$ sequences as well as the implied $v^A$ sequence.
Notice that because the government sets money supply growth high for 10 periods, inflation
starts high.
Inflation gradually declines because people expect the government to lower the money growth rate after period 10.
From the 10th period onwards, the inflation rate $\theta_t^A$ associated with this Abreu plan starts the Ramsey plan from its beginning, i.e., $\theta_{t+10}^A = \theta_t^R \ \forall t \geq 0$.

In [9]: def abreu_plan(clq, T=1000, T_A=10, μ_bar=0.1, T_Plot=20):

# Append Ramsey μ series to stick μ series


clq.μ_A = np.append(np.ones(T_A) * μ_bar, clq.μ_series[:-T_A])

# Calculate implied stick θ series


clq.θ_A = np.zeros(T)
discount = np.zeros(T)
for t in range(T):
discount[t] = (clq.α / (1 + clq.α))**t
for t in range(T):
length = clq.μ_A[t:].shape[0]
clq.θ_A[t] = 1 / (clq.α + 1) * sum(clq.μ_A[t:] * discount[0:length])

# Calculate utility of stick plan


U_A = np.zeros(T)
for t in range(T):
U_A[t] = clq.β**t * (clq.α0 + clq.α1 * (-clq.θ_A[t])
- clq.α2 / 2 * (-clq.θ_A[t])**2 - clq.c * clq.μ_A[t]**2)

clq.V_A = np.zeros(T)
for t in range(T):
clq.V_A[t] = sum(U_A[t:] / clq.β**t)

# Make sure Abreu plan is self-enforcing


clq.V_dev = np.zeros(T_Plot)
for t in range(T_Plot):
clq.V_dev[t] = (clq.α0 + clq.α1 * (-clq.θ_A[t])
- clq.α2 / 2 * (-clq.θ_A[t])**2) \
+ clq.β * clq.V_A[0]

fig, axes = plt.subplots(3, 1, figsize=(8, 12))

axes[2].plot(clq.V_dev[0:T_Plot], label="$V^{A, D}_t$", c="orange")

plots = [clq.θ_A, clq.μ_A, clq.V_A]


labels = [r"$\theta_t^A$", r"$\mu_t^A$", r"$V^A_t$"]

for plot, ax, label in zip(plots, axes, labels):


ax.plot(plot[0:T_Plot], label=label)
ax.set(xlabel="$t$", ylabel=label)
ax.legend()

plt.tight_layout()
plt.show()

abreu_plan(clq)

To confirm that the plan $\vec\mu^A$ is self-enforcing, we plot an object that we call $V_t^{A,D}$, defined in the key inequality in the second line of equation (10) above.
$V_t^{A,D}$ is the value at $t$ of deviating from the self-enforcing plan $\vec\mu^A$ by setting $\mu_t = 0$ and then restarting the plan at $v_0^A$ at $t+1$:

$$
v_t^{A,D} = -s(\theta_t, 0) + \beta v_0^A
$$

In the above graph $v_t^A > v_t^{A,D}$, which confirms that $\vec\mu^A$ is a self-enforcing plan.
We can also verify numerically the inequalities required for $\vec\mu^A$ to be self-enforcing, as follows

In [10]: np.all(clq.V_A[0:20] > clq.V_dev[0:20])

Out[10]: True

Given that plan $\vec\mu^A$ is self-enforcing, we can check that the Ramsey plan $\vec\mu^R$ is credible by verifying that:

$$
v_t^R \geq -s(\theta_t^R, 0) + \beta v_0^A, \quad \forall t \geq 0
$$

In [11]: def check_ramsey(clq, T=1000):


# Make sure Ramsey plan is sustainable
R_dev = np.zeros(T)
for t in range(T):
R_dev[t] = (clq.α0 + clq.α1 * (-clq.θ_series[1, t])
- clq.α2 / 2 * (-clq.θ_series[1, t])**2) \
+ clq.β * clq.V_A[0]

return np.all(clq.J_series > R_dev)

check_ramsey(clq)

Out[11]: True

35.12.4 Recursive Representation of a Sustainable Plan

We can represent a sustainable plan recursively by taking the continuation value 𝑣𝑡 as a state
variable.
We form the following 3-tuple of functions:

𝜇𝑡̂ = 𝜈𝜇 (𝑣𝑡 )
𝜃𝑡 = 𝜈𝜃 (𝑣𝑡 ) (12)
𝑣𝑡+1 = 𝜈𝑣 (𝑣𝑡 , 𝜇𝑡 )

In addition to these equations, we need an initial value 𝑣0 to characterize a sustainable plan.


The first equation of (12) tells the recommended value of 𝜇𝑡̂ as a function of the promised
value 𝑣𝑡 .
The second equation of (12) tells the inflation rate as a function of 𝑣𝑡 .
The third equation of (12) updates the continuation value in a way that depends on whether the government at $t$ confirms private agents' expectations by setting $\mu_t$ equal to the recommended value $\hat\mu_t$, or whether it disappoints those expectations.
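A sketch of the third function $\nu_v$, written directly from recursion (8): confirming expectations inverts the recursion, while deviating resets the continuation value to the self-enforcing value $v_0^A$. The function and its signature are hypothetical, not part of the lecture's code:

def ν_v(v_t, μ_t, μ_hat, θ_t, v0_A, s, β):
    """
    Update the promised value in a recursive sustainable plan.

    If the government confirms expectations (μ_t == μ_hat), invert
    v_t = -s(θ_t, μ_t) + β v_{t+1}; otherwise reset to v_0^A.
    """
    if μ_t == μ_hat:
        return (v_t + s(θ_t, μ_t)) / β
    else:
        return v0_A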

35.13 Whose Credible Plan is it?

A credible government plan 𝜇⃗ plays multiple roles.


• It is a sequence of actions chosen by the government.
• It is a sequence of private agents’ forecasts of government actions.
Thus, $\vec\mu$ is both a government policy and a collection of private agents' forecasts of government policy.
Does the government choose policy actions or does it simply confirm prior private sector forecasts of those actions?
An argument in favor of the government chooses interpretation comes from noting that the
theory of credible plans builds in a theory that the government each period chooses the action
that it wants.
An argument in favor of the simply confirm interpretation is gathered from staring at the key
inequality (11) that defines a credible policy.

35.14 Comparison of Equilibrium Values

We have computed plans for


• an ordinary (unrestricted) Ramsey planner who chooses a sequence $\{\mu_t\}_{t=0}^\infty$ at time 0
• a Ramsey planner restricted to choose a constant 𝜇 for all 𝑡 ≥ 0
• a Markov perfect sequence of governments
Below we compare equilibrium time zero values for these three.
We confirm that the value delivered by the unrestricted Ramsey planner exceeds the value
delivered by the restricted Ramsey planner which in turn exceeds the value delivered by the
Markov perfect sequence of governments.

In [12]: clq.J_series[0]

Out[12]: 6.67918822960449

In [13]: clq.J_check

Out[13]: 6.676729524674898

In [14]: clq.J_MPE

Out[14]: 6.663435886995107

We have also computed credible plans for a government or sequence of governments that
choose sequentially.
These include
• a self-enforcing plan that gives a low initial value 𝑣0 .
• a better plan – possibly one that attains values associated with Ramsey plan – that is
not self-enforcing.

35.15 Note on Dynamic Programming Squared

The theory deployed in this lecture is an application of what we nickname dynamic programming squared.
The nickname refers to the fact that a value satisfying one Bellman equation is itself an argument in a second Bellman equation.
Thus, our models have involved two Bellman equations:
• equation (2) expresses how $\theta_t$ depends on $\mu_t$ and $\theta_{t+1}$
• equation (8) expresses how value $v_t$ depends on $(\mu_t, \theta_t)$ and $v_{t+1}$
A value 𝜃 from one Bellman equation appears as an argument of a second Bellman equation
for another value 𝑣.
Chapter 36

Optimal Taxation with State-Contingent Debt

36.1 Contents

• Overview 36.2
• A Competitive Equilibrium with Distorting Taxes 36.3
• Recursive Formulation of the Ramsey Problem 36.4
• Examples 36.5
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

36.2 Overview

This lecture describes a celebrated model of optimal fiscal policy by Robert E. Lucas, Jr., and
Nancy Stokey [45].
The model revisits classic issues about how to pay for a war.
Here a war means a more or less temporary surge in an exogenous government expenditure
process.
The model features
• a government that must finance an exogenous stream of government expenditures with
either
– a flat rate tax on labor, or
– purchases and sales from a full array of Arrow state-contingent securities
• a representative household that values consumption and leisure
• a linear production function mapping labor into a single good
• a Ramsey planner who at time 𝑡 = 0 chooses a plan for taxes and trades of Arrow secu-
rities for all 𝑡 ≥ 0
After first presenting the model in a space of sequences, we shall represent it recursively
in terms of two Bellman equations formulated along lines that we encountered in Dynamic
Stackelberg models.


As in Dynamic Stackelberg models, to apply dynamic programming we shall define the state
vector artfully.
In particular, we shall include forward-looking variables that summarize optimal responses of
private agents to a Ramsey plan.
See Optimal taxation for analysis within a linear-quadratic setting.
Let’s start with some standard imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline

36.3 A Competitive Equilibrium with Distorting Taxes

For $t \geq 0$, a history $s^t = [s_t, s_{t-1}, \ldots, s_0]$ of an exogenous state $s_t$ has joint probability density $\pi_t(s^t)$.
We begin by assuming that government purchases $g_t(s^t)$ at time $t \geq 0$ depend on $s^t$.
Let 𝑐𝑡 (𝑠𝑡 ), ℓ𝑡 (𝑠𝑡 ), and 𝑛𝑡 (𝑠𝑡 ) denote consumption, leisure, and labor supply, respectively, at
history 𝑠𝑡 and date 𝑡.
A representative household is endowed with one unit of time that can be divided between
leisure ℓ𝑡 and labor 𝑛𝑡 :

𝑛𝑡 (𝑠𝑡 ) + ℓ𝑡 (𝑠𝑡 ) = 1 (1)

Output equals 𝑛𝑡 (𝑠𝑡 ) and can be divided between 𝑐𝑡 (𝑠𝑡 ) and 𝑔𝑡 (𝑠𝑡 )

𝑐𝑡 (𝑠𝑡 ) + 𝑔𝑡 (𝑠𝑡 ) = 𝑛𝑡 (𝑠𝑡 ) (2)

A representative household's preferences over $\{c_t(s^t), \ell_t(s^t)\}_{t=0}^\infty$ are ordered by

$$
\sum_{t=0}^{\infty} \sum_{s^t} \beta^t \pi_t(s^t) u[c_t(s^t), \ell_t(s^t)] \tag{3}
$$

where the utility function $u$ is increasing, strictly concave, and three times continuously differentiable in both arguments.
The technology pins down a pre-tax wage rate to unity for all 𝑡, 𝑠𝑡 .
The government imposes a flat-rate tax 𝜏𝑡 (𝑠𝑡 ) on labor income at time 𝑡, history 𝑠𝑡 .
There are complete markets in one-period Arrow securities.
One unit of an Arrow security issued at time 𝑡 at history 𝑠𝑡 and promising to pay one unit of
time 𝑡 + 1 consumption in state 𝑠𝑡+1 costs 𝑝𝑡+1 (𝑠𝑡+1 |𝑠𝑡 ).
The government issues one-period Arrow securities each period.
The government has a sequence of budget constraints whose time $t \geq 0$ component is

$$g_t(s^t) = \tau_t(s^t) n_t(s^t) + \sum_{s_{t+1}} p_{t+1}(s_{t+1}|s^t)\, b_{t+1}(s_{t+1}|s^t) - b_t(s_t|s^{t-1}) \tag{4}$$
36.3. A COMPETITIVE EQUILIBRIUM WITH DISTORTING TAXES 639

where
• 𝑝𝑡+1 (𝑠𝑡+1 |𝑠𝑡 ) is a competitive equilibrium price of one unit of consumption at date 𝑡 + 1
in state 𝑠𝑡+1 at date 𝑡 and history 𝑠𝑡 .
• 𝑏𝑡 (𝑠𝑡 |𝑠𝑡−1 ) is government debt falling due at time 𝑡, history 𝑠𝑡 .
Government debt 𝑏0 (𝑠0 ) is an exogenous initial condition.
The representative household has a sequence of budget constraints whose time $t \geq 0$ component is

$$c_t(s^t) + \sum_{s_{t+1}} p_{t+1}(s_{t+1}|s^t)\, b_{t+1}(s_{t+1}|s^t) = [1 - \tau_t(s^t)]\, n_t(s^t) + b_t(s_t|s^{t-1}) \quad \forall t \geq 0 \tag{5}$$

A government policy is an exogenous sequence $\{g(s_t)\}_{t=0}^{\infty}$, a tax rate sequence $\{\tau_t(s^t)\}_{t=0}^{\infty}$, and a government debt sequence $\{b_{t+1}(s^{t+1})\}_{t=0}^{\infty}$.

A feasible allocation is a consumption-labor supply plan $\{c_t(s^t), n_t(s^t)\}_{t=0}^{\infty}$ that satisfies (2) at all $t, s^t$.

A price system is a sequence of Arrow security prices $\{p_{t+1}(s_{t+1}|s^t)\}_{t=0}^{\infty}$.

The household faces the price system as a price-taker and takes the government policy as
given.
The household chooses $\{c_t(s^t), \ell_t(s^t)\}_{t=0}^{\infty}$ to maximize (3) subject to (5) and (1) for all $t, s^t$.

A competitive equilibrium with distorting taxes is a feasible allocation, a price system,


and a government policy such that
• Given the price system and the government policy, the allocation solves the household’s
optimization problem.
• Given the allocation, government policy, and price system, the government’s budget
constraint is satisfied for all 𝑡, 𝑠𝑡 .
Note: There are many competitive equilibria with distorting taxes.
They are indexed by different government policies.
The Ramsey problem or optimal taxation problem is to choose a competitive equilib-
rium with distorting taxes that maximizes (3).

36.3.1 Arrow-Debreu Version of Price System

We find it convenient sometimes to work with the Arrow-Debreu price system that is implied
by a sequence of Arrow securities prices.
Let $q_t^0(s^t)$ be the price at time 0, measured in time 0 consumption goods, of one unit of consumption at time $t$, history $s^t$.

The following recursion relates Arrow-Debreu prices $\{q_t^0(s^t)\}_{t=0}^{\infty}$ to Arrow securities prices $\{p_{t+1}(s_{t+1}|s^t)\}_{t=0}^{\infty}$:

$$q_{t+1}^0(s^{t+1}) = p_{t+1}(s_{t+1}|s^t)\, q_t^0(s^t), \qquad q_0^0(s^0) = 1 \tag{6}$$

Arrow-Debreu prices are useful when we want to compress a sequence of budget constraints
into a single intertemporal budget constraint, as we shall find it convenient to do below.
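For concreteness, here is a minimal sketch (not part of the original lecture's code) of recursion (6) along a single realized history; the array of one-period Arrow prices is a hypothetical input chosen only for illustration.

import numpy as np

# A sketch of recursion (6): q^0_{t+1} = p_{t+1} * q^0_t, starting from q^0_0 = 1.
# `arrow_prices[t]` stands in for p_{t+1}(s_{t+1} | s^t) along one realized history.
arrow_prices = np.array([0.90, 0.85, 0.95])   # hypothetical one-period Arrow prices

q = np.empty(len(arrow_prices) + 1)
q[0] = 1.0                         # q^0_0(s^0) = 1
for t, p in enumerate(arrow_prices):
    q[t + 1] = p * q[t]            # equation (6)

print(q)   # date-0 prices of time t = 0, 1, 2, 3 consumption along this history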

36.3.2 Primal Approach

We apply a popular approach to solving a Ramsey problem, called the primal approach.
The idea is to use first-order conditions for household optimization to eliminate taxes and
prices in favor of quantities, then pose an optimization problem cast entirely in terms of
quantities.
After Ramsey quantities have been found, taxes and prices can then be unwound from the
allocation.
The primal approach uses four steps:

1. Obtain first-order conditions of the household's problem and solve them for $\{q_t^0(s^t), \tau_t(s^t)\}_{t=0}^{\infty}$ as functions of the allocation $\{c_t(s^t), n_t(s^t)\}_{t=0}^{\infty}$.

2. Substitute these expressions for taxes and prices in terms of the allocation into the household's present-value budget constraint.

• This intertemporal constraint involves only the allocation and is regarded as an implementability constraint.

3. Find the allocation that maximizes the utility of the representative household (3) subject to the feasibility constraints (1) and (2) and the implementability condition derived in step 2.

• This optimal allocation is called the Ramsey allocation.

4. Use the Ramsey allocation together with the formulas from step 1 to find taxes and prices.

36.3.3 The Implementability Constraint

By sequential substitution of one one-period budget constraint (5) into another, we can obtain the household's present-value budget constraint:

$$\sum_{t=0}^{\infty} \sum_{s^t} q_t^0(s^t) c_t(s^t) = \sum_{t=0}^{\infty} \sum_{s^t} q_t^0(s^t) [1 - \tau_t(s^t)] n_t(s^t) + b_0 \tag{7}$$

$\{q_t^0(s^t)\}_{t=1}^{\infty}$ can be interpreted as a time 0 Arrow-Debreu price system.

To approach the Ramsey problem, we study the household’s optimization problem.


First-order conditions for the household's problem for $\ell_t(s^t)$ and $b_t(s_{t+1}|s^t)$, respectively, imply

$$(1 - \tau_t(s^t)) = \frac{u_\ell(s^t)}{u_c(s^t)} \tag{8}$$

and

$$p_{t+1}(s_{t+1}|s^t) = \beta \pi(s_{t+1}|s^t) \left( \frac{u_c(s^{t+1})}{u_c(s^t)} \right) \tag{9}$$

where 𝜋(𝑠𝑡+1 |𝑠𝑡 ) is the probability distribution of 𝑠𝑡+1 conditional on history 𝑠𝑡 .


Equation (9) implies that the Arrow-Debreu price system satisfies

$$q_t^0(s^t) = \beta^t \pi_t(s^t) \frac{u_c(s^t)}{u_c(s^0)} \tag{10}$$

Using the first-order conditions (8) and (9) to eliminate taxes and prices from (7), we derive the implementability condition

$$\sum_{t=0}^{\infty} \sum_{s^t} \beta^t \pi_t(s^t) [u_c(s^t) c_t(s^t) - u_\ell(s^t) n_t(s^t)] - u_c(s^0) b_0 = 0 \tag{11}$$

The Ramsey problem is to choose a feasible allocation that maximizes

$$\sum_{t=0}^{\infty} \sum_{s^t} \beta^t \pi_t(s^t)\, u[c_t(s^t), 1 - n_t(s^t)] \tag{12}$$

subject to (11).

36.3.4 Solution Details

First, define a "pseudo utility function"

$$V[c_t(s^t), n_t(s^t), \Phi] = u[c_t(s^t), 1 - n_t(s^t)] + \Phi \left[ u_c(s^t) c_t(s^t) - u_\ell(s^t) n_t(s^t) \right] \tag{13}$$

where $\Phi$ is a Lagrange multiplier on the implementability condition (11).


Next form the Lagrangian

$$J = \sum_{t=0}^{\infty} \sum_{s^t} \beta^t \pi_t(s^t) \left\{ V[c_t(s^t), n_t(s^t), \Phi] + \theta_t(s^t)\, [n_t(s^t) - c_t(s^t) - g_t(s^t)] \right\} - \Phi u_c(0) b_0 \tag{14}$$

where $\{\theta_t(s^t); \forall s^t\}_{t \geq 0}$ is a sequence of Lagrange multipliers on the feasibility conditions (2).
Given an initial government debt 𝑏0 , we want to maximize 𝐽 with respect to
{𝑐𝑡 (𝑠𝑡 ), 𝑛𝑡 (𝑠𝑡 ); ∀𝑠𝑡 }𝑡≥0 and to minimize with respect to {𝜃(𝑠𝑡 ); ∀𝑠𝑡 }𝑡≥0 .
The first-order conditions for the Ramsey problem for periods $t \geq 1$ and $t = 0$, respectively, are

$$\begin{aligned}
c_t(s^t)&: \ (1 + \Phi) u_c(s^t) + \Phi \left[ u_{cc}(s^t) c_t(s^t) - u_{\ell c}(s^t) n_t(s^t) \right] - \theta_t(s^t) = 0, \quad t \geq 1 \\
n_t(s^t)&: \ -(1 + \Phi) u_\ell(s^t) - \Phi \left[ u_{c\ell}(s^t) c_t(s^t) - u_{\ell\ell}(s^t) n_t(s^t) \right] + \theta_t(s^t) = 0, \quad t \geq 1
\end{aligned} \tag{15}$$

and

$$\begin{aligned}
c_0(s^0, b_0)&: \ (1 + \Phi) u_c(s^0, b_0) + \Phi \left[ u_{cc}(s^0, b_0) c_0(s^0, b_0) - u_{\ell c}(s^0, b_0) n_0(s^0, b_0) \right] - \theta_0(s^0, b_0) \\
&\qquad - \Phi u_{cc}(s^0, b_0) b_0 = 0 \\
n_0(s^0, b_0)&: \ -(1 + \Phi) u_\ell(s^0, b_0) - \Phi \left[ u_{c\ell}(s^0, b_0) c_0(s^0, b_0) - u_{\ell\ell}(s^0, b_0) n_0(s^0, b_0) \right] + \theta_0(s^0, b_0) \\
&\qquad + \Phi u_{c\ell}(s^0, b_0) b_0 = 0
\end{aligned} \tag{16}$$
Please note how these first-order conditions differ between 𝑡 = 0 and 𝑡 ≥ 1.
It is instructive to use first-order conditions (15) for 𝑡 ≥ 1 to eliminate the multipliers 𝜃𝑡 (𝑠𝑡 ).
For convenience, we suppress the time subscript and the index $s^t$ and obtain

$$\begin{aligned}
(1 + \Phi) u_c(c, 1 - c - g) &+ \Phi [c\, u_{cc}(c, 1 - c - g) - (c + g) u_{\ell c}(c, 1 - c - g)] \\
= (1 + \Phi) u_\ell(c, 1 - c - g) &+ \Phi [c\, u_{c\ell}(c, 1 - c - g) - (c + g) u_{\ell\ell}(c, 1 - c - g)]
\end{aligned} \tag{17}$$

where we have imposed conditions (1) and (2).


Equation (17) is one equation that can be solved to express the unknown 𝑐 as a function of
the exogenous variable 𝑔.
We also know that time $t = 0$ quantities $c_0$ and $n_0$ satisfy

$$\begin{aligned}
(1 + \Phi) u_c(c, 1 - c - g) &+ \Phi [c\, u_{cc}(c, 1 - c - g) - (c + g) u_{\ell c}(c, 1 - c - g)] \\
= (1 + \Phi) u_\ell(c, 1 - c - g) &+ \Phi [c\, u_{c\ell}(c, 1 - c - g) - (c + g) u_{\ell\ell}(c, 1 - c - g)] + \Phi (u_{cc} - u_{c\ell}) b_0
\end{aligned} \tag{18}$$
Notice that a counterpart to 𝑏0 does not appear in (17), so 𝑐 does not depend on it for 𝑡 ≥ 1.
But things are different for time 𝑡 = 0.
An analogous argument for the 𝑡 = 0 equations (16) leads to one equation that can be solved
for 𝑐0 as a function of the pair (𝑔(𝑠0 ), 𝑏0 ).
These outcomes mean that the following statement would be true even when government pur-
chases are history-dependent functions 𝑔𝑡 (𝑠𝑡 ) of the history of 𝑠𝑡 .
Proposition: If government purchases are equal after two histories $s^t$ and $\tilde{s}^\tau$ for $t, \tau \geq 0$, i.e., if

$$g_t(s^t) = g_\tau(\tilde{s}^\tau) = g$$

then it follows from (17) that the Ramsey choices of consumption and leisure, $(c_t(s^t), \ell_t(s^t))$ and $(c_\tau(\tilde{s}^\tau), \ell_\tau(\tilde{s}^\tau))$, are identical.
The proposition asserts that the optimal allocation is a function of the currently realized
quantity of government purchases 𝑔 only and does not depend on the specific history that
preceded that realization of 𝑔.

36.3.5 The Ramsey Allocation for a Given Multiplier

Temporarily take Φ as given.


We shall compute 𝑐0 (𝑠0 , 𝑏0 ) and 𝑛0 (𝑠0 , 𝑏0 ) from the first-order conditions (16).

Evidently, for 𝑡 ≥ 1, 𝑐 and 𝑛 depend on the time 𝑡 realization of 𝑔 only.


But for 𝑡 = 0, 𝑐 and 𝑛 depend on both 𝑔0 and the government’s initial debt 𝑏0 .
Thus, while 𝑏0 influences 𝑐0 and 𝑛0 , there appears no analogous variable 𝑏𝑡 that influences 𝑐𝑡
and 𝑛𝑡 for 𝑡 ≥ 1.
The absence of 𝑏𝑡 as a determinant of the Ramsey allocation for 𝑡 ≥ 1 and its presence for
𝑡 = 0 is a symptom of the time-inconsistency of a Ramsey plan.
Φ has to take a value that assures that the household and the government’s budget con-
straints are both satisfied at a candidate Ramsey allocation and price system associated with
that Φ.

36.3.6 Further Specialization

At this point, it is useful to specialize the model in the following ways.


We assume that 𝑠 is governed by a finite state Markov chain with states 𝑠 ∈ [1, … , 𝑆] and
transition matrix Π, where

Π(𝑠′ |𝑠) = Prob(𝑠𝑡+1 = 𝑠′ |𝑠𝑡 = 𝑠)

Also, assume that government purchases 𝑔 are an exact time-invariant function 𝑔(𝑠) of 𝑠.
We maintain these assumptions throughout the remainder of this lecture.

36.3.7 Determining the Multiplier

We complete the Ramsey plan by computing the Lagrange multiplier Φ on the implementabil-
ity constraint (11).
Government budget balance restricts Φ via the following line of reasoning.
The household's first-order conditions imply

$$(1 - \tau_t(s^t)) = \frac{u_l(s^t)}{u_c(s^t)} \tag{19}$$

and the implied one-period Arrow securities prices

$$p_{t+1}(s_{t+1}|s^t) = \beta \Pi(s_{t+1}|s^t) \frac{u_c(s^{t+1})}{u_c(s^t)} \tag{20}$$

Substituting from (19), (20), and the feasibility condition (2) into the recursive version (5) of the household budget constraint gives

$$u_c(s^t)[n_t(s^t) - g_t(s^t)] + \beta \sum_{s_{t+1}} \Pi(s_{t+1}|s^t) u_c(s^{t+1})\, b_{t+1}(s_{t+1}|s^t) = u_l(s^t) n_t(s^t) + u_c(s^t)\, b_t(s_t|s^{t-1}) \tag{21}$$

Define $x_t(s^t) = u_c(s^t) b_t(s_t|s^{t-1})$.



Notice that 𝑥𝑡 (𝑠𝑡 ) appears on the right side of (21) while 𝛽 times the conditional expectation
of 𝑥𝑡+1 (𝑠𝑡+1 ) appears on the left side.
Hence the equation shares much of the structure of a simple asset pricing equation with 𝑥𝑡
being analogous to the price of the asset at time 𝑡.
We learned earlier that for a Ramsey allocation 𝑐𝑡 (𝑠𝑡 ), 𝑛𝑡 (𝑠𝑡 ) and 𝑏𝑡 (𝑠𝑡 |𝑠𝑡−1 ), and therefore
also 𝑥𝑡 (𝑠𝑡 ), are each functions of 𝑠𝑡 only, being independent of the history 𝑠𝑡−1 for 𝑡 ≥ 1.
That means that we can express equation (21) as

$$u_c(s)[n(s) - g(s)] + \beta \sum_{s'} \Pi(s'|s) x'(s') = u_l(s) n(s) + x(s) \tag{22}$$

where 𝑠′ denotes a next period value of 𝑠 and 𝑥′ (𝑠′ ) denotes a next period value of 𝑥.
Equation (22) is easy to solve for 𝑥(𝑠) for 𝑠 = 1, … , 𝑆.
If we let $\vec{n}, \vec{g}, \vec{x}$ denote $S \times 1$ vectors whose $i$th elements are the respective $n$, $g$, and $x$ values when $s = i$, and let $\Pi$ be the transition matrix for the Markov state $s$, then we can express (22) as the matrix equation

$$\vec{u}_c(\vec{n} - \vec{g}) + \beta \Pi \vec{x} = \vec{u}_l \vec{n} + \vec{x} \tag{23}$$

This is a system of $S$ linear equations in the $S \times 1$ vector $\vec{x}$, whose solution is

$$\vec{x} = (I - \beta \Pi)^{-1} [\vec{u}_c(\vec{n} - \vec{g}) - \vec{u}_l \vec{n}] \tag{24}$$

In these equations, by $\vec{u}_c \vec{n}$, for example, we mean element-by-element multiplication of the two vectors.
After solving for $\vec{x}$, we can find $b(s_t|s^{t-1})$ in Markov state $s_t = s$ from $b(s) = \frac{x(s)}{u_c(s)}$ or the matrix equation

$$\vec{b} = \frac{\vec{x}}{\vec{u}_c} \tag{25}$$

where division here means an element-by-element division of the respective components of the $S \times 1$ vectors $\vec{x}$ and $\vec{u}_c$.
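As a check on this linear algebra, here is a minimal sketch of (24) and (25) in NumPy; the primitives below are hypothetical stand-ins for objects computed from a candidate $\Phi$.

import numpy as np

β = 0.9
Π = np.array([[0.8, 0.2],      # hypothetical transition matrix
              [0.3, 0.7]])
uc = np.array([1.2, 1.0])      # hypothetical marginal utilities u_c(s)
ul = np.array([0.8, 0.9])      # hypothetical marginal utilities u_l(s)
n = np.array([0.55, 0.60])     # hypothetical labor supplies n(s)
g = np.array([0.10, 0.20])     # government purchases g(s)

# Equation (24): x = (I - βΠ)^{-1} [u_c(n - g) - u_l n], products elementwise
x = np.linalg.solve(np.eye(2) - β * Π, uc * (n - g) - ul * n)

# Equation (25): b = x / u_c, division elementwise
b = x / uc
print(x, b)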
Here is a computational algorithm:

1. Start with a guess for the value for $\Phi$, then use the first-order conditions and the feasibility conditions to compute $c(s_t), n(s_t)$ for $s \in [1, \ldots, S]$ and $c_0(s_0, b_0)$ and $n_0(s_0, b_0)$, given $\Phi$.

• these are $2(S + 1)$ equations in $2(S + 1)$ unknowns.

2. Solve the $S$ equations (24) for the $S$ elements of $\vec{x}$.

• these depend on $\Phi$.

3. Find a $\Phi$ that satisfies

$$u_{c,0} b_0 = u_{c,0}(n_0 - g_0) - u_{l,0} n_0 + \beta \sum_{s=1}^{S} \Pi(s|s_0) x(s) \tag{26}$$

by gradually raising $\Phi$ if the left side of (26) exceeds the right side and lowering $\Phi$ if the left side is less than the right side.

4. After computing a Ramsey allocation, recover the flat tax rate on labor from (8) and the implied one-period Arrow securities prices from (9).

In summary, when $g_t$ is a time-invariant function of a Markov state $s_t$, a Ramsey plan can be constructed by solving $3S + 3$ equations in $S$ components each of $\vec{c}$, $\vec{n}$, and $\vec{x}$ together with $n_0$, $c_0$, and $\Phi$.
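To show the shape of the fixed-point step 3, here is a hedged sketch of the bracketing search on $\Phi$; `allocation_given_phi` is a hypothetical helper standing in for steps 1 and 2, and this is not the lecture's own implementation, which appears in the SequentialAllocation class below.

# A sketch of step 3: adjust Φ until the time 0 budget residual implied by (26) is zero.
# `allocation_given_phi` is a hypothetical function returning (uc0, ul0, n0, g0, x),
# i.e. time 0 marginal utilities and quantities plus the vector x from (24), given Φ.

def budget_residual(Φ, b0, s0, allocation_given_phi, β, Π):
    uc0, ul0, n0, g0, x = allocation_given_phi(Φ, b0, s0)
    lhs = uc0 * b0                                      # left side of (26)
    rhs = uc0 * (n0 - g0) - ul0 * n0 + β * Π[s0] @ x    # right side of (26)
    return lhs - rhs

def solve_phi(b0, s0, allocation_given_phi, β, Π, lo=0.0, hi=1.0, tol=1e-8):
    # Raise Φ when the left side of (26) exceeds the right side, lower it otherwise,
    # assuming the residual changes sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if budget_residual(mid, b0, s0, allocation_given_phi, β, Π) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2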

36.3.8 Time Inconsistency

Let $\{\tau_t(s^t)\}_{t=0}^{\infty}, \{b_{t+1}(s_{t+1}|s^t)\}_{t=0}^{\infty}$ be a time 0, state $s_0$ Ramsey plan.

Then $\{\tau_j(s^j)\}_{j=t}^{\infty}, \{b_{j+1}(s_{j+1}|s^j)\}_{j=t}^{\infty}$ is a time $t$, history $s^t$ continuation of a time 0, state $s_0$ Ramsey plan.
Ramsey plan.
A time 𝑡, history 𝑠𝑡 Ramsey plan is a Ramsey plan that starts from initial conditions
𝑠𝑡 , 𝑏𝑡 (𝑠𝑡 |𝑠𝑡−1 ).
A time 𝑡, history 𝑠𝑡 continuation of a time 0, state 0 Ramsey plan is not a time 𝑡, history 𝑠𝑡
Ramsey plan.
This means that a Ramsey plan is not time consistent.
Another way to say the same thing is that a Ramsey plan is time inconsistent.
The reason is that a continuation Ramsey plan takes 𝑢𝑐𝑡 𝑏𝑡 (𝑠𝑡 |𝑠𝑡−1 ) as given, not 𝑏𝑡 (𝑠𝑡 |𝑠𝑡−1 ).
We shall discuss this more below.

36.3.9 Specification with CRRA Utility

In our calculations below and in a subsequent lecture based on an extension of the Lucas-
Stokey model by Aiyagari, Marcet, Sargent, and Seppälä (2002) [3], we shall modify the one-
period utility function assumed above.
(We adopted the preceding utility specification because it was the one used in the original paper [45].)
We will modify their specification by instead assuming that the representative agent has util-
ity function

$$u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}$$

where 𝜎 > 0, 𝛾 > 0.


We continue to assume that

𝑐𝑡 + 𝑔𝑡 = 𝑛𝑡

We eliminate leisure from the model.


We also eliminate Lucas and Stokey’s restriction that ℓ𝑡 + 𝑛𝑡 ≤ 1.
We replace these two things with the assumption that labor $n_t \in [0, +\infty)$.
With these adjustments, the analysis of Lucas and Stokey prevails once we make the following
replacements

$$\begin{aligned}
u_\ell(c, \ell) &\sim -u_n(c, n) \\
u_c(c, \ell) &\sim u_c(c, n) \\
u_{\ell\ell}(c, \ell) &\sim u_{nn}(c, n) \\
u_{cc}(c, \ell) &\sim u_{cc}(c, n) \\
u_{c\ell}(c, \ell) &\sim 0
\end{aligned}$$

With these understandings, equations (17) and (18) simplify in the case of the CRRA utility
function.
They become

$$(1 + \Phi)[u_c(c) + u_n(c + g)] + \Phi[c\, u_{cc}(c) + (c + g) u_{nn}(c + g)] = 0 \tag{27}$$

and

$$(1 + \Phi)[u_c(c_0) + u_n(c_0 + g_0)] + \Phi[c_0 u_{cc}(c_0) + (c_0 + g_0) u_{nn}(c_0 + g_0)] - \Phi u_{cc}(c_0) b_0 = 0 \tag{28}$$

In equation (27), it is understood that 𝑐 and 𝑔 are each functions of the Markov state 𝑠.
In addition, the time $t = 0$ budget constraint is satisfied at $c_0$ and initial government debt $b_0$:

$$b_0 + g_0 = \tau_0(c_0 + g_0) + \frac{\bar{b}}{R_0} \tag{29}$$

where 𝑅0 is the gross interest rate for the Markov state 𝑠0 that is assumed to prevail at time
𝑡 = 0 and 𝜏0 is the time 𝑡 = 0 tax rate.
In equation (29), it is understood that

$$\tau_0 = 1 - \frac{u_{l,0}}{u_{c,0}} \qquad \text{and} \qquad R_0^{-1} = \beta \sum_{s=1}^{S} \Pi(s|s_0) \frac{u_c(s)}{u_{c,0}}$$
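In code these are one-liners; the sketch below uses hypothetical placeholder values for the marginal utilities and matches the way `RHist` is computed in the simulation code further on.

import numpy as np

β = 0.9
Π = np.array([[0.8, 0.2],   # hypothetical transition matrix
              [0.3, 0.7]])
uc = np.array([1.2, 1.0])   # hypothetical u_c(s), s = 1, ..., S
uc0, ul0 = 1.1, 0.8         # hypothetical time 0 marginal utilities
s0 = 0

τ0 = 1 - ul0 / uc0               # time 0 labor tax rate
R0 = uc0 / (β * Π[s0] @ uc)      # gross one-period risk-free interest rate
print(τ0, R0)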

36.3.10 Sequence Implementation

The above steps are implemented in a class called SequentialAllocation

In [3]: import numpy as np


from scipy.optimize import root
from quantecon import MarkovChain

class SequentialAllocation:

'''
Class that takes CESutility or BGPutility object as input returns
planner's allocation as a function of the multiplier on the
implementability constraint μ.
'''

def __init__(self, model):

# Initialize from model object attributes


self.β, self.π, self.G = model.β, model.π, model.G
self.mc, self.Θ = MarkovChain(self.π), model.Θ
self.S = len(model.π) # Number of states
self.model = model

# Find the first best allocation


self.find_first_best()

def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Un = model.Uc, model.Un

def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])

res = root(res, 0.5 * np.ones(2 * S))

if not res.success:
raise Exception('Could not find first best')

self.cFB = res.x[:S]
self.nFB = res.x[S:]

# Multiplier on the resource constraint


self.ΞFB = Uc(self.cFB, self.nFB)
self.zFB = np.hstack([self.cFB, self.nFB, self.ΞFB])

def time1_allocation(self, μ):


'''
Computes optimal allocation for time t >= 1 for a given μ
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

def FOC(z):
c = z[:S]
n = z[S:2 * S]
Ξ = z[2 * S:]

# FOC of c
return np.hstack([Uc(c, n) - μ * (Ucc(c, n) * c + Uc(c, n)) - Ξ,
Un(c, n) - μ * (Unn(c, n) * n + Un(c, n)) \
+ Θ * Ξ, # FOC of n
Θ * n - c - G])

# Find the root of the first-order condition


res = root(FOC, self.zFB)
if not res.success:
raise Exception('Could not find LS allocation.')
z = res.x
c, n, Ξ = z[:S], z[S:2 * S], z[2 * S:]

# Compute x
I = Uc(c, n) * c + Un(c, n) * n
x = np.linalg.solve(np.eye(S) - self.β * self.π, I)

return c, n, x, Ξ

def time0_allocation(self, B_, s_0):


'''
Finds the optimal allocation given initial government debt B_ and
state s_0
'''
model, π, Θ, G, β = self.model, self.π, self.Θ, self.G, self.β
Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

# First order conditions of planner's problem


def FOC(z):
μ, c, n, Ξ = z
xprime = self.time1_allocation(μ)[2]
return np.hstack([Uc(c, n) * (c - B_) + Un(c, n) * n + β * π[s_0]
@ xprime,
Uc(c, n) - μ * (Ucc(c, n)
* (c - B_) + Uc(c, n)) - Ξ,
Un(c, n) - μ * (Unn(c, n) * n
+ Un(c, n)) + Θ[s_0] * Ξ,
(Θ * n - c - G)[s_0]])

# Find root
res = root(FOC, np.array(
[0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]]))
if not res.success:
raise Exception('Could not find time 0 LS allocation.')

return res.x

def time1_value(self, μ):


'''
Find the value associated with multiplier μ
'''
c, n, x, Ξ = self.time1_allocation(μ)
U = self.model.U(c, n)
V = np.linalg.solve(np.eye(self.S) - self.β * self.π, U)
return c, n, x, V

def Τ(self, c, n):


'''

Computes Τ given c, n
'''
model = self.model
Uc, Un = model.Uc(c, n), model.Un(c, n)

return 1 + Un / (self.Θ * Uc)

def simulate(self, B_, s_0, T, sHist=None):


'''
Simulates planners policies for T periods
'''
model, π, β = self.model, self.π, self.β
Uc = model.Uc

if sHist is None:
sHist = self.mc.simulate(T, s_0)

cHist, nHist, Bhist, ΤHist, μHist = np.zeros((5, T))


RHist = np.zeros(T - 1)

# Time 0
μ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0)
ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
μHist[0] = μ

# Time 1 onward
for t in range(1, T):
c, n, x, Ξ = self.time1_allocation(μ)
Τ = self.Τ(c, n)
u_c = Uc(c, n)
s = sHist[t]
Eu_c = π[sHist[t - 1]] @ u_c
cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x[s] / u_c[s], Τ[s]
RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c)
μHist[t] = μ

return np.array([cHist, nHist, Bhist, ΤHist, sHist, μHist, RHist])

36.4 Recursive Formulation of the Ramsey Problem

$x_t(s^t) = u_c(s^t) b_t(s_t|s^{t-1})$ in equation (21) appears to be a purely "forward-looking" variable.
But $x_t(s^t)$ is also a natural candidate for a state variable in a recursive formulation of the Ramsey problem.

36.4.1 Intertemporal Delegation

To express a Ramsey plan recursively, we imagine that a time 0 Ramsey planner is followed
by a sequence of continuation Ramsey planners at times 𝑡 = 1, 2, ….
A "continuation Ramsey planner" at times $t \geq 1$ has a different objective function and faces different constraints and state variables than does the Ramsey planner at time $t = 0$.

A key step in representing a Ramsey plan recursively is to regard the marginal utility scaled
government debts 𝑥𝑡 (𝑠𝑡 ) = 𝑢𝑐 (𝑠𝑡 )𝑏𝑡 (𝑠𝑡 |𝑠𝑡−1 ) as predetermined quantities that continuation
Ramsey planners at times 𝑡 ≥ 1 are obligated to attain.
Continuation Ramsey planners do this by choosing continuation policies that induce the rep-
resentative household to make choices that imply that 𝑢𝑐 (𝑠𝑡 )𝑏𝑡 (𝑠𝑡 |𝑠𝑡−1 ) = 𝑥𝑡 (𝑠𝑡 ).
A time 𝑡 ≥ 1 continuation Ramsey planner faces 𝑥𝑡 , 𝑠𝑡 as state variables.
A time 𝑡 ≥ 1 continuation Ramsey planner delivers 𝑥𝑡 by choosing a suitable 𝑛𝑡 , 𝑐𝑡 pair and
a list of 𝑠𝑡+1 -contingent continuation quantities 𝑥𝑡+1 to bequeath to a time 𝑡 + 1 continuation
Ramsey planner.
While a time 𝑡 ≥ 1 continuation Ramsey planner faces 𝑥𝑡 , 𝑠𝑡 as state variables, the time 0
Ramsey planner faces 𝑏0 , not 𝑥0 , as a state variable.
Furthermore, the Ramsey planner cares about (𝑐0 (𝑠0 ), ℓ0 (𝑠0 )), while continuation Ramsey
planners do not.
The time 0 Ramsey planner hands a state-contingent function that makes $x_1$ a function of $s_1$ to a time 1 continuation Ramsey planner.
These lines of delegated authorities and responsibilities across time express the continuation
Ramsey planners’ obligations to implement their parts of the original Ramsey plan, designed
once-and-for-all at time 0.

36.4.2 Two Bellman Equations

After 𝑠𝑡 has been realized at time 𝑡 ≥ 1, the state variables confronting the time 𝑡 continua-
tion Ramsey planner are (𝑥𝑡 , 𝑠𝑡 ).
• Let 𝑉 (𝑥, 𝑠) be the value of a continuation Ramsey plan at 𝑥𝑡 = 𝑥, 𝑠𝑡 = 𝑠 for 𝑡 ≥ 1.
• Let 𝑊 (𝑏, 𝑠) be the value of a Ramsey plan at time 0 at 𝑏0 = 𝑏 and 𝑠0 = 𝑠.
We work backward by presenting a Bellman equation for 𝑉 (𝑥, 𝑠) first, then a Bellman equa-
tion for 𝑊 (𝑏, 𝑠).

36.4.3 The Continuation Ramsey Problem

The Bellman equation for a time $t \geq 1$ continuation Ramsey planner is

$$V(x, s) = \max_{n, \{x'(s')\}} u(n - g(s), 1 - n) + \beta \sum_{s' \in S} \Pi(s'|s) V(x', s') \tag{30}$$

where maximization over $n$ and the $S$ elements of $x'(s')$ is subject to the single implementability constraint for $t \geq 1$:

$$x = u_c(n - g(s)) - u_l n + \beta \sum_{s' \in S} \Pi(s'|s) x'(s') \tag{31}$$

Here 𝑢𝑐 and 𝑢𝑙 are today’s values of the marginal utilities.


For each given value of 𝑥, 𝑠, the continuation Ramsey planner chooses 𝑛 and 𝑥′ (𝑠′ ) for each
𝑠′ ∈ 𝑆.

Associated with a value function $V(x, s)$ that solves Bellman equation (30) are $S + 1$ time-invariant policy functions

$$\begin{aligned}
n_t &= f(x_t, s_t), \quad t \geq 1 \\
x_{t+1}(s_{t+1}) &= h(s_{t+1}; x_t, s_t), \quad s_{t+1} \in S,\ t \geq 1
\end{aligned} \tag{32}$$

36.4.4 The Ramsey Problem

The Bellman equation for the time 0 Ramsey planner is

$$W(b_0, s_0) = \max_{n_0, \{x'(s_1)\}} u(n_0 - g_0, 1 - n_0) + \beta \sum_{s_1 \in S} \Pi(s_1|s_0) V(x'(s_1), s_1) \tag{33}$$

where maximization over $n_0$ and the $S$ elements of $x'(s_1)$ is subject to the time 0 implementability constraint

$$u_{c,0} b_0 = u_{c,0}(n_0 - g_0) - u_{l,0} n_0 + \beta \sum_{s_1 \in S} \Pi(s_1|s_0) x'(s_1) \tag{34}$$

coming from restriction (26).


Associated with a value function $W(b_0, s_0)$ that solves Bellman equation (33) are $S + 1$ time 0 policy functions

$$\begin{aligned}
n_0 &= f_0(b_0, s_0) \\
x_1(s_1) &= h_0(s_1; b_0, s_0)
\end{aligned} \tag{35}$$

Notice the appearance of state variables (𝑏0 , 𝑠0 ) in the time 0 policy functions for the Ramsey
planner as compared to (𝑥𝑡 , 𝑠𝑡 ) in the policy functions (32) for the time 𝑡 ≥ 1 continuation
Ramsey planners.
The value function $V(x_t, s_t)$ of the time $t$ continuation Ramsey planner equals $E_t \sum_{\tau=t}^{\infty} \beta^{\tau-t} u(c_\tau, l_\tau)$, where the consumption and leisure processes are evaluated along the original time 0 Ramsey plan.

36.4.5 First-Order Conditions

Attach a Lagrange multiplier Φ1 (𝑥, 𝑠) to constraint (31) and a Lagrange multiplier Φ0 to con-
straint (26).
Time 𝑡 ≥ 1: the first-order conditions for the time 𝑡 ≥ 1 constrained maximization problem on
the right side of the continuation Ramsey planner’s Bellman equation (30) are

$$\beta \Pi(s'|s) V_x(x', s') - \beta \Pi(s'|s) \Phi_1 = 0 \tag{36}$$

for 𝑥′ (𝑠′ ) and

$$(1 + \Phi_1)(u_c - u_l) + \Phi_1 \left[ n(u_{ll} - u_{lc}) + (n - g(s))(u_{cc} - u_{lc}) \right] = 0 \tag{37}$$

for 𝑛.

Given Φ1 , equation (37) is one equation to be solved for 𝑛 as a function of 𝑠 (or of 𝑔(𝑠)).
Equation (36) implies $V_x(x', s') = \Phi_1$, while an envelope condition is $V_x(x, s) = \Phi_1$, so it follows that

$$V_x(x', s') = V_x(x, s) = \Phi_1(x, s) \tag{38}$$

Time 𝑡 = 0: For the time 0 problem on the right side of the Ramsey planner’s Bellman equa-
tion (33), first-order conditions are

$$V_x(x(s_1), s_1) = \Phi_0 \tag{39}$$

for 𝑥(𝑠1 ), 𝑠1 ∈ 𝑆, and

$$\begin{aligned}
(1 + \Phi_0)(u_{c,0} - u_{n,0}) &+ \Phi_0 \left[ n_0(u_{ll,0} - u_{lc,0}) + (n_0 - g(s_0))(u_{cc,0} - u_{cl,0}) \right] \\
&- \Phi_0 (u_{cc,0} - u_{cl,0}) b_0 = 0
\end{aligned} \tag{40}$$

Notice similarities and differences between the first-order conditions for 𝑡 ≥ 1 and for 𝑡 = 0.
An additional term is present in (40) except in three special cases
• 𝑏0 = 0, or
• 𝑢𝑐 is constant (i.e., preferences are quasi-linear in consumption), or
• initial government assets are sufficiently large to finance all government purchases with
interest earnings from those assets so that Φ0 = 0
Except in these special cases, the allocation and the labor tax rate as functions of 𝑠𝑡 differ
between dates 𝑡 = 0 and subsequent dates 𝑡 ≥ 1.
Naturally, the first-order conditions in this recursive formulation of the Ramsey problem
agree with the first-order conditions derived when we first formulated the Ramsey plan in the
space of sequences.

36.4.6 State Variable Degeneracy

Equations (39) and (40) imply that Φ0 = Φ1 and that

$$V_x(x_t, s_t) = \Phi_0 \tag{41}$$

for all 𝑡 ≥ 1.
When 𝑉 is concave in 𝑥, this implies state-variable degeneracy along a Ramsey plan in the
sense that for 𝑡 ≥ 1, 𝑥𝑡 will be a time-invariant function of 𝑠𝑡 .
Given Φ0 , this function mapping 𝑠𝑡 into 𝑥𝑡 can be expressed as a vector 𝑥⃗ that solves equa-
tion (34) for 𝑛 and 𝑐 as functions of 𝑔 that are associated with Φ = Φ0 .

36.4.7 Manifestations of Time Inconsistency

While the marginal utility adjusted level of government debt 𝑥𝑡 is a key state variable for the
continuation Ramsey planners at 𝑡 ≥ 1, it is not a state variable at time 0.

The time 0 Ramsey planner faces 𝑏0 , not 𝑥0 = 𝑢𝑐,0 𝑏0 , as a state variable.


The discrepancy in state variables faced by the time 0 Ramsey planner and the time 𝑡 ≥ 1
continuation Ramsey planners captures the differing obligations and incentives faced by the
time 0 Ramsey planner and the time 𝑡 ≥ 1 continuation Ramsey planners.
• The time 0 Ramsey planner is obligated to honor government debt 𝑏0 measured in time
0 consumption goods.
• The time 0 Ramsey planner can manipulate the value of government debt as measured
by 𝑢𝑐,0 𝑏0 .
• In contrast, time 𝑡 ≥ 1 continuation Ramsey planners are obligated not to alter values
of debt, as measured by 𝑢𝑐,𝑡 𝑏𝑡 , that they inherit from a preceding Ramsey planner or
continuation Ramsey planner.
When government expenditures 𝑔𝑡 are a time-invariant function of a Markov state 𝑠𝑡 , a Ram-
sey plan and associated Ramsey allocation feature marginal utilities of consumption 𝑢𝑐 (𝑠𝑡 )
that, given Φ, for 𝑡 ≥ 1 depend only on 𝑠𝑡 , but that for 𝑡 = 0 depend on 𝑏0 as well.
This means that 𝑢𝑐 (𝑠𝑡 ) will be a time-invariant function of 𝑠𝑡 for 𝑡 ≥ 1, but except when 𝑏0 =
0, a different function for 𝑡 = 0.
This in turn means that prices of one-period Arrow securities 𝑝𝑡+1 (𝑠𝑡+1 |𝑠𝑡 ) = 𝑝(𝑠𝑡+1 |𝑠𝑡 ) will
be the same time-invariant functions of (𝑠𝑡+1 , 𝑠𝑡 ) for 𝑡 ≥ 1, but a different function 𝑝0 (𝑠1 |𝑠0 )
for 𝑡 = 0, except when 𝑏0 = 0.
The differences between these time 0 and time 𝑡 ≥ 1 objects reflect the Ramsey planner’s
incentive to manipulate Arrow security prices and, through them, the value of initial govern-
ment debt 𝑏0 .

36.4.8 Recursive Implementation

The above steps are implemented in a class called RecursiveAllocation

In [4]: import numpy as np


from scipy.interpolate import UnivariateSpline
from scipy.optimize import fmin_slsqp
from quantecon import MarkovChain
from scipy.optimize import root

class RecursiveAllocation:

'''
Compute the planner's allocation by solving Bellman
equation.
'''

def __init__(self, model, μgrid):

self.β, self.π, self.G = model.β, model.π, model.G


self.mc, self.S = MarkovChain(self.π), len(model.π) # Number of states
self.Θ, self.model, self.μgrid = model.Θ, model, μgrid

# Find the first best allocation


self.solve_time1_bellman()

self.T.time_0 = True # Bellman equation now solves time 0 problem

def solve_time1_bellman(self):
'''
Solve the time 1 Bellman equation for calibration model and initial
grid μgrid0
'''
model, μgrid0 = self.model, self.μgrid
S = len(model.π)

# First get initial fit


pp = SequentialAllocation(model)
c, n, x, V = map(np.vstack, zip(*map(lambda μ: pp.time1_value(μ), μgrid0)))

Vf, cf, nf, xprimef = {}, {}, {}, {}


for s in range(2):
ind = np.argsort(x[:, s]) # Sort x
# Sort arrays according to x
c, n, x, V = c[ind], n[ind], x[ind], V[ind]
cf[s] = UnivariateSpline(x[:, s], c[:, s])
nf[s] = UnivariateSpline(x[:, s], n[:, s])
Vf[s] = UnivariateSpline(x[:, s], V[:, s])
for sprime in range(S):
xprimef[s, sprime] = UnivariateSpline(x[:, s], x[:, s])
policies = [cf, nf, xprimef]

# Create xgrid
xbar = [x.min(0).max(), x.max(0).min()]
xgrid = np.linspace(xbar[0], xbar[1], len(μgrid0))
self.xgrid = xgrid

# Now iterate on bellman equation


T = BellmanEquation(model, xgrid, policies)
diff = 1
while diff > 1e-7:
PF = T(Vf)
Vfnew, policies = self.fit_policy_function(PF)
diff = 0
for s in range(S):
diff = max(diff, np.abs(
(Vf[s](xgrid) - Vfnew[s](xgrid)) / Vf[s](xgrid)).max())
Vf = Vfnew

# Store value function policies and Bellman Equations


self.Vf = Vf
self.policies = policies
self.T = T

def fit_policy_function(self, PF):


'''
Fits the policy functions PF using the points xgrid using
UnivariateSpline
'''
xgrid, S = self.xgrid, self.S

Vf, cf, nf, xprimef = {}, {}, {}, {}


for s in range(S):
PFvec = np.vstack(tuple(map(lambda x: PF(x, s), xgrid)))
Vf[s] = UnivariateSpline(xgrid, PFvec[:, 0], s=0)
cf[s] = UnivariateSpline(xgrid, PFvec[:, 1], s=0, k=1)
nf[s] = UnivariateSpline(xgrid, PFvec[:, 2], s=0, k=1)
for sprime in range(S):
xprimef[s, sprime] = UnivariateSpline(
xgrid, PFvec[:, 3 + sprime], s=0, k=1)

return Vf, [cf, nf, xprimef]

def Τ(self, c, n):


'''
Computes Τ given c, n
'''
model = self.model
Uc, Un = model.Uc(c, n), model.Un(c, n)

return 1 + Un / (self.Θ * Uc)

def time0_allocation(self, B_, s0):


'''
Finds the optimal allocation given initial government debt B_ and
state s_0
'''
PF = self.T(self.Vf)
z0 = PF(B_, s0)
c0, n0, xprime0 = z0[1], z0[2], z0[3:]
return c0, n0, xprime0

def simulate(self, B_, s_0, T, sHist=None):


'''
Simulates Ramsey plan for T periods
'''
model, π = self.model, self.π
Uc = model.Uc
cf, nf, xprimef = self.policies

if sHist is None:
sHist = self.mc.simulate(T, s_0)

cHist, nHist, Bhist, ΤHist, μHist = np.zeros((5, T))


RHist = np.zeros(T - 1)

# Time 0
cHist[0], nHist[0], xprime = self.time0_allocation(B_, s_0)
ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
μHist[0] = 0

# Time 1 onward
for t in range(1, T):
s, x = sHist[t], xprime[sHist[t]]
c, n, xprime = np.empty(self.S), nf[s](x), np.empty(self.S)

for shat in range(self.S):


c[shat] = cf[shat](x)
for sprime in range(self.S):
xprime[sprime] = xprimef[s, sprime](x)

Τ = self.Τ(c, n)[s]
u_c = Uc(c, n)
Eu_c = π[sHist[t - 1]] @ u_c
μHist[t] = self.Vf[s](x, 1)

RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (self.β * Eu_c)

cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n, x / u_c[s], Τ

return np.array([cHist, nHist, Bhist, ΤHist, sHist, μHist, RHist])

class BellmanEquation:

'''
Bellman equation for the continuation of the Lucas-Stokey Problem
'''

def __init__(self, model, xgrid, policies0):

self.β, self.π, self.G = model.β, model.π, model.G


self.S = len(model.π) # Number of states
self.Θ, self.model = model.Θ, model

self.xbar = [min(xgrid), max(xgrid)]


self.time_0 = False

self.z0 = {}
cf, nf, xprimef = policies0
for s in range(self.S):
for x in xgrid:
xprime0 = np.empty(self.S)
for sprime in range(self.S):
xprime0[sprime] = xprimef[s, sprime](x)
self.z0[x, s] = np.hstack([cf[s](x), nf[s](x), xprime0])

self.find_first_best()

def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G

def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])

res = root(res, 0.5 * np.ones(2 * S))


if not res.success:

raise Exception('Could not find first best')

self.cFB = res.x[:S]
self.nFB = res.x[S:]
IFB = Uc(self.cFB, self.nFB) * self.cFB + Un(self.cFB, self.nFB) * self.nFB
self.xFB = np.linalg.solve(np.eye(S) - self.β * self.π, IFB)
self.zFB = {}

for s in range(S):
self.zFB[s] = np.hstack([self.cFB[s], self.nFB[s], self.xFB])

def __call__(self, Vf):


'''
Given continuation value function, next period return value function,
this period return T(V) and optimal policies
'''
if not self.time_0:
def PF(x, s): return self.get_policies_time1(x, s, Vf)
else:
def PF(B_, s0): return self.get_policies_time0(B_, s0, Vf)
return PF

def get_policies_time1(self, x, s, Vf):


'''
Finds the optimal policies
'''
model, β, Θ, = self.model, self.β, self.Θ,
G, S, π = self.G, self.S, self.π
U, Uc, Un = model.U, model.Uc, model.Un

def objf(z):
c, n, xprime = z[0], z[1], z[2:]
Vprime = np.empty(S)
for sprime in range(S):
Vprime[sprime] = Vf[sprime](xprime[sprime])

return -(U(c, n) + β * π[s] @ Vprime)

def cons(z):
c, n, xprime = z[0], z[1], z[2:]
return np.hstack([x - Uc(c, n) * c - Un(c, n) * n - β * π[s]
@ xprime,
(Θ * n - c - G)[s]])

out, fx, _, imode, smode = fmin_slsqp(objf,
                                      self.z0[x, s],
                                      f_eqcons=cons,
                                      bounds=[(0, 100), (0, 100)]
                                      + [self.xbar] * S,
                                      full_output=True,
                                      iprint=0,
                                      acc=1e-10)

if imode > 0:
raise Exception(smode)

self.z0[x, s] = out
return np.hstack([-fx, out])

def get_policies_time0(self, B_, s0, Vf):


'''
Finds the optimal policies
'''
model, β, Θ, = self.model, self.β, self.Θ,
G, S, π = self.G, self.S, self.π
U, Uc, Un = model.U, model.Uc, model.Un

def objf(z):
c, n, xprime = z[0], z[1], z[2:]
Vprime = np.empty(S)
for sprime in range(S):
Vprime[sprime] = Vf[sprime](xprime[sprime])

return -(U(c, n) + β * π[s0] @ Vprime)

def cons(z):
c, n, xprime = z[0], z[1], z[2:]
return np.hstack([-Uc(c, n) * (c - B_) - Un(c, n) * n - β * π[s0]
@ xprime,
(Θ * n - c - G)[s0]])

out, fx, _, imode, smode = fmin_slsqp(objf, self.zFB[s0],
                                      f_eqcons=cons,
                                      bounds=[(0, 100), (0, 100)]
                                      + [self.xbar] * S,
                                      full_output=True, iprint=0,
                                      acc=1e-10)

if imode > 0:
raise Exception(smode)

return np.hstack([-fx, out])

36.5 Examples

36.5.1 Anticipated One-Period War

This example illustrates in a simple setting how a Ramsey planner manages risk.
Government expenditures are known for sure in all periods except one
• For 𝑡 < 3 and 𝑡 > 3 we assume that 𝑔𝑡 = 𝑔𝑙 = 0.1.
• At 𝑡 = 3 a war occurs with probability 0.5.
– If there is war, 𝑔3 = 𝑔ℎ = 0.2
– If there is no war 𝑔3 = 𝑔𝑙 = 0.1
We define the components of the state vector as the following six (𝑡, 𝑔) pairs:
(0, 𝑔𝑙 ), (1, 𝑔𝑙 ), (2, 𝑔𝑙 ), (3, 𝑔𝑙 ), (3, 𝑔ℎ ), (𝑡 ≥ 4, 𝑔𝑙 ).
We think of these 6 states as corresponding to 𝑠 = 1, 2, 3, 4, 5, 6.

The transition matrix is

$$\Pi = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.5 & 0.5 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$

Government expenditures at each state are

$$g = \begin{pmatrix} 0.1 \\ 0.1 \\ 0.1 \\ 0.1 \\ 0.2 \\ 0.1 \end{pmatrix}$$

We assume that the representative agent has utility function

$$u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}$$

and set 𝜎 = 2, 𝛾 = 2, and the discount factor 𝛽 = 0.9.


Note: For convenience in terms of matching our code, we have expressed utility as a function
of 𝑛 rather than leisure 𝑙.
This utility function is implemented in the class CRRAutility

In [5]: import numpy as np

class CRRAutility:

def __init__(self,
β=0.9,
σ=2,
γ=2,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):

self.β, self.σ, self.γ = β, σ, γ


self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

# Utility function
def U(self, c, n):
σ = self.σ
if σ == 1.:
U = np.log(c)
else:
U = (c**(1 - σ) - 1) / (1 - σ)
return U - n**(1 + self.γ) / (1 + self.γ)

# Derivatives of utility function


def Uc(self, c, n):
return c**(-self.σ)

def Ucc(self, c, n):


return -self.σ * c**(-self.σ - 1)

def Un(self, c, n):


return -n**self.γ

def Unn(self, c, n):


return -self.γ * n**(self.γ - 1)

We set initial government debt 𝑏0 = 1.


We can now plot the Ramsey tax under both realizations of time 𝑡 = 3 government expendi-
tures
• black when 𝑔3 = .1, and
• red when 𝑔3 = .2

In [6]: time_π = np.array([[0, 1, 0, 0, 0, 0],


[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0.5, 0.5, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1]])

time_G = np.array([0.1, 0.1, 0.1, 0.2, 0.1, 0.1])


# Θ can in principle be random
time_Θ = np.ones(6)
time_example = CRRAutility(π=time_π, G=time_G, Θ=time_Θ)

# Solve sequential problem


time_allocation = SequentialAllocation(time_example)
sHist_h = np.array([0, 1, 2, 3, 5, 5, 5])
sHist_l = np.array([0, 1, 2, 4, 5, 5, 5])
sim_seq_h = time_allocation.simulate(1, 0, 7, sHist_h)
sim_seq_l = time_allocation.simulate(1, 0, 7, sHist_l)

# Government spending paths


sim_seq_l[4] = time_example.G[sHist_l]
sim_seq_h[4] = time_example.G[sHist_h]

# Output paths
sim_seq_l[5] = time_example.Θ[sHist_l] * sim_seq_l[1]
sim_seq_h[5] = time_example.Θ[sHist_h] * sim_seq_h[1]

fig, axes = plt.subplots(3, 2, figsize=(14, 10))


titles = ['Consumption', 'Labor Supply', 'Government Debt',
'Tax Rate', 'Government Spending', 'Output']

for ax, title, sim_l, sim_h in zip(axes.flatten(),


titles, sim_seq_l, sim_seq_h):
ax.set(title=title)
ax.plot(sim_l, '-ok', sim_h, '-or', alpha=0.7)
ax.grid()

plt.tight_layout()
plt.show()

Tax smoothing
• the tax rate is constant for all 𝑡 ≥ 1
– For 𝑡 ≥ 1, 𝑡 ≠ 3, this is a consequence of 𝑔𝑡 being the same at all those dates.
– For 𝑡 = 3, it is a consequence of the special one-period utility function that we
have assumed.
– Under other one-period utility functions, the time 𝑡 = 3 tax rate could be either
higher or lower than for dates 𝑡 ≥ 1, 𝑡 ≠ 3.
• the tax rate is the same at 𝑡 = 3 for both the high 𝑔𝑡 outcome and the low 𝑔𝑡 outcome
We have assumed that at 𝑡 = 0, the government owes positive debt 𝑏0 .
It sets the time 𝑡 = 0 tax rate partly with an eye to reducing the value 𝑢𝑐,0 𝑏0 of 𝑏0 .
It does this by increasing consumption at time 𝑡 = 0 relative to consumption in later periods.
This has the consequence of lowering the time $t = 0$ value of the gross interest rate for risk-free loans between periods $t$ and $t + 1$, which equals

$$R_t = \frac{u_{c,t}}{\beta \mathbb{E}_t[u_{c,t+1}]}$$

A tax policy that makes time $t = 0$ consumption higher than time $t = 1$ consumption evidently decreases the one-period risk-free interest rate $R_t$ at $t = 0$.
Lowering the time $t = 0$ risk-free interest rate makes time $t = 0$ consumption goods cheaper relative to consumption goods at later dates, thereby lowering the value $u_{c,0} b_0$ of initial government debt $b_0$.
We see this in a figure below that plots the time path for the risk-free interest rate under
both realizations of the time 𝑡 = 3 government expenditure shock.
The following plot illustrates how the government lowers the interest rate at time 0 by raising
consumption

In [7]: fix, ax = plt.subplots(figsize=(8, 5))


ax.set_title('Gross Interest Rate')
ax.plot(sim_seq_l[-1], '-ok', sim_seq_h[-1], '-or', alpha=0.7)
ax.grid()
plt.show()

36.5.2 Government Saving

At time 𝑡 = 0 the government evidently dissaves since 𝑏1 > 𝑏0 .

• This is a consequence of it setting a lower tax rate at 𝑡 = 0, implying more


consumption at 𝑡 = 0.

At time 𝑡 = 1, the government evidently saves since it has set the tax rate sufficiently high to
allow it to set 𝑏2 < 𝑏1 .

• Its motive for doing this is that it anticipates a likely war at 𝑡 = 3.

At time 𝑡 = 2 the government trades state-contingent Arrow securities to hedge against war
at 𝑡 = 3.

• It purchases a security that pays off when 𝑔3 = 𝑔ℎ .

• It sells a security that pays off when 𝑔3 = 𝑔𝑙 .


• These purchases are designed in such a way that regardless of whether or not there is a
war at 𝑡 = 3, the government will begin period 𝑡 = 4 with the same government debt.
• The time 𝑡 = 4 debt level can be serviced with revenues from the constant tax rate set
at times 𝑡 ≥ 1.
At times 𝑡 ≥ 4 the government rolls over its debt, knowing that the tax rate is set at a level
that raises enough revenue to pay for government purchases and interest payments on its
debt.

36.5.3 Time 0 Manipulation of Interest Rate

We have seen that when 𝑏0 > 0, the Ramsey plan sets the time 𝑡 = 0 tax rate partly with
an eye toward lowering a risk-free interest rate for one-period loans between times 𝑡 = 0 and
𝑡 = 1.
By lowering this interest rate, the plan makes time 𝑡 = 0 goods cheap relative to consumption
goods at later times.
By doing this, it lowers the value of time 𝑡 = 0 debt that it has inherited and must finance.

36.5.4 Time 0 and Time-Inconsistency

In the preceding example, the Ramsey tax rate at time 0 differs from its value at time 1.
To explore what is going on here, let’s simplify things by removing the possibility of war at
time 𝑡 = 3.
The Ramsey problem then includes no randomness because 𝑔𝑡 = 𝑔𝑙 for all 𝑡.
The figure below plots the Ramsey tax rates and gross interest rates at time 𝑡 = 0 and time
𝑡 ≥ 1 as functions of the initial government debt (using the sequential allocation solution and
a CRRA utility function defined above)

In [8]: tax_sequence = SequentialAllocation(CRRAutility(G=0.15,


π=np.ones((1, 1)),
Θ=np.ones(1)))

n = 100
tax_policy = np.empty((n, 2))
interest_rate = np.empty((n, 2))
gov_debt = np.linspace(-1.5, 1, n)

for i in range(n):
tax_policy[i] = tax_sequence.simulate(gov_debt[i], 0, 2)[3]
interest_rate[i] = tax_sequence.simulate(gov_debt[i], 0, 3)[-1]

fig, axes = plt.subplots(2, 1, figsize=(10,8), sharex=True)


titles = ['Tax Rate', 'Gross Interest Rate']

for ax, title, plot in zip(axes, titles, [tax_policy, interest_rate]):


ax.plot(gov_debt, plot[:, 0], gov_debt, plot[:, 1], lw=2)
ax.set(title=title, xlim=(min(gov_debt), max(gov_debt)))
ax.grid()

axes[0].legend(('Time $t=0$', 'Time $t \geq 1$'))


axes[1].set_xlabel('Initial Government Debt')

fig.tight_layout()
plt.show()

The figure indicates that if the government enters with positive debt, it sets a tax rate at 𝑡 =
0 that is less than all later tax rates.
By setting a lower tax rate at 𝑡 = 0, the government raises consumption, which reduces the
value 𝑢𝑐,0 𝑏0 of its initial debt.
It does this by increasing 𝑐0 and thereby lowering 𝑢𝑐,0 .
Conversely, if 𝑏0 < 0, the Ramsey planner sets the tax rate at 𝑡 = 0 higher than in subsequent
periods.
A side effect of lowering time 𝑡 = 0 consumption is that it lowers the one-period interest rate
at time 𝑡 = 0 below that of subsequent periods.
There are only two values of initial government debt at which the tax rate is constant for all
𝑡 ≥ 0.
The first is 𝑏0 = 0

• Here the government can’t use the 𝑡 = 0 tax rate to alter the value of the
initial debt.

The second occurs when the government enters with sufficiently large assets that the Ramsey
planner can achieve first best and sets 𝜏𝑡 = 0 for all 𝑡.
It is only for these two values of initial government debt that the Ramsey plan is time-
consistent.
Another way of saying this is that, except for these two values of initial government debt, a
continuation of a Ramsey plan is not a Ramsey plan.
To illustrate this, consider a Ramsey planner who starts with an initial government debt $b_1$ associated with one of the Ramsey plans computed above.
Call $\tau_1^R$ the time $t = 0$ tax rate chosen by the Ramsey planner confronting this value for initial government debt.
The figure below shows both the tax rate at time 1 chosen by our original Ramsey planner
and what a new Ramsey planner would choose for its time 𝑡 = 0 tax rate

In [9]: tax_sequence = SequentialAllocation(CRRAutility(G=0.15,


π=np.ones((1, 1)),
Θ=np.ones(1)))

n = 100
tax_policy = np.empty((n, 2))
τ_reset = np.empty((n, 2))
gov_debt = np.linspace(-1.5, 1, n)

for i in range(n):
tax_policy[i] = tax_sequence.simulate(gov_debt[i], 0, 2)[3]
τ_reset[i] = tax_sequence.simulate(gov_debt[i], 0, 1)[3]

fig, ax = plt.subplots(figsize=(10, 6))


ax.plot(gov_debt, tax_policy[:, 1], gov_debt, τ_reset, lw=2)
ax.set(xlabel='Initial Government Debt', title='Tax Rate',
xlim=(min(gov_debt), max(gov_debt)))
ax.legend((r'$\tau_1$', r'$\tau_1^R$'))
ax.grid()

fig.tight_layout()
plt.show()

The tax rates in the figure are equal for only two values of initial government debt.

36.5.5 Tax Smoothing and non-CRRA Preferences

The complete tax smoothing for 𝑡 ≥ 1 in the preceding example is a consequence of our hav-
ing assumed CRRA preferences.
To see what is driving this outcome, we begin by noting that the Ramsey tax rate for 𝑡 ≥ 1
is a time-invariant function 𝜏 (Φ, 𝑔) of the Lagrange multiplier on the implementability con-
straint and government expenditures.
For CRRA preferences, we can exploit the relations $U_{cc} c = -\sigma U_c$ and $U_{nn} n = \gamma U_n$ to derive

$$\frac{(1 + (1 - \sigma)\Phi) U_c}{(1 + (1 - \gamma)\Phi) U_n} = 1$$

from the first-order conditions.


This equation immediately implies that the tax rate is constant.
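A quick numerical check of this claim (a sketch with an arbitrary illustrative multiplier $\Phi = 0.05$): solve (27) for $c$ at several values of $g$ and report $\tau = 1 + U_n/U_c$; the printed tax rates coincide.

import numpy as np
from scipy.optimize import brentq

σ, γ, Φ = 2, 2, 0.05   # Φ is an arbitrary illustrative value

def foc(c, g):
    # Equation (27) with CRRA utility and n = c + g
    n = c + g
    Uc, Un = c**(-σ), -n**γ
    Ucc, Unn = -σ * c**(-σ - 1), -γ * n**(γ - 1)
    return (1 + Φ) * (Uc + Un) + Φ * (c * Ucc + n * Unn)

for g in (0.1, 0.15, 0.2):
    c = brentq(foc, 1e-6, 2.0, args=(g,))
    n = c + g
    τ = 1 + (-n**γ) / c**(-σ)
    print(f"g = {g:.2f}: c = {c:.4f}, τ = {τ:.6f}")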
For other preferences, the tax rate may not be constant.
For example, let the period utility function be

𝑢(𝑐, 𝑛) = log(𝑐) + 0.69 log(1 − 𝑛)

We will create a new class LogUtility to represent this utility function

In [10]: import numpy as np

class LogUtility:

def __init__(self,

β=0.9,
ψ=0.69,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):

self.β, self.ψ, self.π = β, ψ, π


self.G, self.Θ, self.transfers = G, Θ, transfers

# Utility function
def U(self, c, n):
return np.log(c) + self.ψ * np.log(1 - n)

# Derivatives of utility function


def Uc(self, c, n):
return 1 / c

def Ucc(self, c, n):


return -c**(-2)

def Un(self, c, n):


return -self.ψ / (1 - n)

def Unn(self, c, n):


return -self.ψ / (1 - n)**2

Also, suppose that 𝑔𝑡 follows a two-state IID process with equal probabilities attached to 𝑔𝑙
and 𝑔ℎ .
To compute the tax rate, we will use both the sequential and recursive approaches described
above.
The figure below plots a sample path of the Ramsey tax rate

In [11]: log_example = LogUtility()


# Solve sequential problem
seq_log = SequentialAllocation(log_example)

# Initialize grid for value function iteration and solve


μ_grid = np.linspace(-0.6, 0.0, 200)
# Solve recursive problem
bel_log = RecursiveAllocation(log_example, μ_grid)

T = 20
sHist = np.array([0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 0, 0, 0, 1,
1, 1, 1, 1, 1, 0])

# Simulate
sim_seq = seq_log.simulate(0.5, 0, T, sHist)
sim_bel = bel_log.simulate(0.5, 0, T, sHist)

# Government spending paths


sim_seq[4] = log_example.G[sHist]
sim_bel[4] = log_example.G[sHist]

# Output paths

sim_seq[5] = log_example.Θ[sHist] * sim_seq[1]


sim_bel[5] = log_example.Θ[sHist] * sim_bel[1]

fig, axes = plt.subplots(3, 2, figsize=(14, 10))


titles = ['Consumption', 'Labor Supply', 'Government Debt',
'Tax Rate', 'Government Spending', 'Output']

for ax, title, sim_s, sim_b in zip(axes.flatten(), titles, sim_seq, sim_bel):
ax.plot(sim_s, '-ob', sim_b, '-xk', alpha=0.7)
ax.set(title=title)
ax.grid()

axes.flatten()[0].legend(('Sequential', 'Recursive'))
fig.tight_layout()
plt.show()

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:18:
RuntimeWarning: divide by zero encountered in log
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:22:
RuntimeWarning: divide by zero encountered in double_scalars

As should be expected, the recursive and sequential solutions produce almost identical alloca-
tions.
Unlike outcomes with CRRA preferences, the tax rate is not perfectly smoothed.
Instead, the government raises the tax rate when 𝑔𝑡 is high.

36.5.6 Further Comments

A related lecture describes an extension of the Lucas-Stokey model by Aiyagari, Marcet, Sar-
gent, and Seppälä (2002) [3].
In the AMSS economy, only a risk-free bond is traded.
That lecture compares the recursive representation of the Lucas-Stokey model presented in
this lecture with one for an AMSS economy.
By comparing these recursive formulations, we shall glean a sense in which the dimension of the state is lower in the Lucas-Stokey model.
Accompanying that difference in dimension will be different dynamics of government debt.
Chapter 37

Optimal Taxation without State-Contingent Debt


37.1 Contents

• Overview 37.2
• Competitive Equilibrium with Distorting Taxes 37.3
• Recursive Version of AMSS Model 37.4
• Examples 37.5
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

37.2 Overview

Let’s start with following imports:

In [2]: import numpy as np


import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import root, fmin_slsqp
from scipy.interpolate import UnivariateSpline
from quantecon import MarkovChain

In an earlier lecture, we described a model of optimal taxation with state-contingent debt due
to Robert E. Lucas, Jr., and Nancy Stokey [45].
Aiyagari, Marcet, Sargent, and Seppälä [3] (hereafter, AMSS) studied optimal taxation in a
model without state-contingent debt.
In this lecture, we
• describe assumptions and equilibrium concepts
• solve the model
• implement the model numerically
• conduct some policy experiments
• compare outcomes with those in a corresponding complete-markets model


We begin with an introduction to the model.

37.3 Competitive Equilibrium with Distorting Taxes

Many but not all features of the economy are identical to those of the Lucas-Stokey economy.
Let’s start with things that are identical.
For 𝑡 ≥ 0, a history of the state is represented by 𝑠𝑡 = [𝑠𝑡 , 𝑠𝑡−1 , … , 𝑠0 ].
Government purchases 𝑔(𝑠) are an exact time-invariant function of 𝑠.
Let 𝑐𝑡 (𝑠𝑡 ), ℓ𝑡 (𝑠𝑡 ), and 𝑛𝑡 (𝑠𝑡 ) denote consumption, leisure, and labor supply, respectively, at
history 𝑠𝑡 at time 𝑡.
Each period a representative household is endowed with one unit of time that can be divided
between leisure ℓ𝑡 and labor 𝑛𝑡 :

𝑛𝑡 (𝑠𝑡 ) + ℓ𝑡 (𝑠𝑡 ) = 1 (1)

Output equals 𝑛𝑡 (𝑠𝑡 ) and can be divided between consumption 𝑐𝑡 (𝑠𝑡 ) and 𝑔(𝑠𝑡 )

𝑐𝑡 (𝑠𝑡 ) + 𝑔(𝑠𝑡 ) = 𝑛𝑡 (𝑠𝑡 ) (2)

Output is not storable.


The technology pins down a pre-tax wage rate to unity for all 𝑡, 𝑠𝑡 .
A representative household's preferences over $\{c_t(s^t), \ell_t(s^t)\}_{t=0}^{\infty}$ are ordered by

$$\sum_{t=0}^{\infty} \sum_{s^t} \beta^t \pi_t(s^t)\, u[c_t(s^t), \ell_t(s^t)] \tag{3}$$

where
• 𝜋𝑡 (𝑠𝑡 ) is a joint probability distribution over the sequence 𝑠𝑡 , and
• the utility function 𝑢 is increasing, strictly concave, and three times continuously differ-
entiable in both arguments.
The government imposes a flat rate tax 𝜏𝑡 (𝑠𝑡 ) on labor income at time 𝑡, history 𝑠𝑡 .
Lucas and Stokey assumed that there are complete markets in one-period Arrow securities;
also see smoothing models.
It is at this point that AMSS [3] modify the Lucas and Stokey economy.
AMSS allow the government to issue only one-period risk-free debt each period.
Ruling out complete markets in this way is a step in the direction of making total tax collec-
tions behave more like that prescribed in [7] than they do in [45].

37.3.1 Risk-free One-Period Debt Only

In period 𝑡 and history 𝑠𝑡 , let



• 𝑏𝑡+1 (𝑠𝑡 ) be the amount of the time 𝑡 + 1 consumption good that at time 𝑡 the govern-
ment promised to pay
• 𝑅𝑡 (𝑠𝑡 ) be the gross interest rate on risk-free one-period debt between periods 𝑡 and 𝑡 + 1
• 𝑇𝑡 (𝑠𝑡 ) be a non-negative lump-sum transfer to the representative household
That 𝑏𝑡+1 (𝑠𝑡 ) is the same for all realizations of 𝑠𝑡+1 captures its risk-free character.
The market value at time 𝑡 of government debt maturing at time 𝑡 + 1 equals 𝑏𝑡+1 (𝑠𝑡 ) divided
by 𝑅𝑡 (𝑠𝑡 ).
The government's budget constraint in period $t$ at history $s^t$ is

$$\begin{aligned}
b_t(s^{t-1}) &= \tau_t^n(s^t) n_t(s^t) - g_t(s^t) - T_t(s^t) + \frac{b_{t+1}(s^t)}{R_t(s^t)} \\
&\equiv z(s^t) + \frac{b_{t+1}(s^t)}{R_t(s^t)},
\end{aligned} \tag{4}$$

where 𝑧(𝑠𝑡 ) is the net-of-interest government surplus.


To rule out Ponzi schemes, we assume that the government is subject to a natural debt
limit (to be discussed in a forthcoming lecture).
The consumption Euler equation for a representative household able to trade only one-period risk-free debt with one-period gross interest rate $R_t(s^t)$ is

$$\frac{1}{R_t(s^t)} = \sum_{s^{t+1}|s^t} \beta \pi_{t+1}(s^{t+1}|s^t) \frac{u_c(s^{t+1})}{u_c(s^t)}$$

Substituting this expression into the government's budget constraint (4) yields:

$$b_t(s^{t-1}) = z(s^t) + \beta \sum_{s^{t+1}|s^t} \pi_{t+1}(s^{t+1}|s^t) \frac{u_c(s^{t+1})}{u_c(s^t)}\, b_{t+1}(s^t) \tag{5}$$

Components of 𝑧(𝑠𝑡 ) on the right side depend on 𝑠𝑡 , but the left side is required to depend on
𝑠𝑡−1 only.
This is what it means for one-period government debt to be risk-free.
Therefore, the sum on the right side of equation (5) also has to depend only on 𝑠𝑡−1 .
This requirement will give rise to measurability constraints on the Ramsey allocation to
be discussed soon.
If we replace $b_{t+1}(s^t)$ on the right side of equation (5) by the right side of next period's budget constraint (associated with a particular realization $s_{t+1}$) we get

$$b_t(s^{t-1}) = z(s^t) + \sum_{s^{t+1}|s^t} \beta \pi_{t+1}(s^{t+1}|s^t) \frac{u_c(s^{t+1})}{u_c(s^t)} \left[ z(s^{t+1}) + \frac{b_{t+2}(s^{t+1})}{R_{t+1}(s^{t+1})} \right]$$
𝑢𝑐 (𝑠𝑡 ) 𝑅𝑡+1 (𝑠𝑡+1 )

After making similar repeated substitutions for all future occurrences of government indebtedness, and by invoking the natural debt limit, we arrive at:

$$b_t(s^{t-1}) = \sum_{j=0}^{\infty} \sum_{s^{t+j}|s^t} \beta^j \pi_{t+j}(s^{t+j}|s^t) \frac{u_c(s^{t+j})}{u_c(s^t)}\, z(s^{t+j}) \tag{6}$$

Now let’s
• substitute the resource constraint into the net-of-interest government surplus, and
• use the household’s first-order condition 1 − 𝜏𝑡𝑛 (𝑠𝑡 ) = 𝑢ℓ (𝑠𝑡 )/𝑢𝑐 (𝑠𝑡 ) to eliminate the labor
tax rate
so that we can express the net-of-interest government surplus $z(s^t)$ as

$$z(s^t) = \left[1 - \frac{u_\ell(s^t)}{u_c(s^t)}\right] [c_t(s^t) + g_t(s^t)] - g_t(s^t) - T_t(s^t)\,. \tag{7}$$
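In code, (7) is a single elementwise line; all arrays below are hypothetical placeholders for objects implied by an allocation.

import numpy as np

c = np.array([0.35, 0.30])     # consumption c(s) (hypothetical)
g = np.array([0.10, 0.20])     # government purchases g(s)
T = np.zeros(2)                # lump-sum transfers T(s)
uc = np.array([1.2, 1.0])      # marginal utility u_c(s) (hypothetical)
ul = np.array([0.8, 0.9])      # marginal utility u_l(s) (hypothetical)

z = (1 - ul / uc) * (c + g) - g - T    # equation (7), state by state
print(z)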

If we substitute the appropriate versions of the right side of (7) for 𝑧(𝑠𝑡+𝑗 ) into equation (6),
we obtain a sequence of implementability constraints on a Ramsey allocation in an AMSS
economy.
Expression (6) at time $t = 0$ and initial state $s^0$ was also an implementability constraint on a Ramsey allocation in a Lucas-Stokey economy:

$$b_0(s^{-1}) = \mathbb{E}_0 \sum_{j=0}^{\infty} \beta^j \frac{u_c(s^j)}{u_c(s^0)}\, z(s^j) \tag{8}$$

Indeed, it was the only implementability constraint there.


But now we also have a large number of additional implementability constraints

$$b_t(s^{t-1}) = \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j \frac{u_c(s^{t+j})}{u_c(s^t)}\, z(s^{t+j}) \tag{9}$$

Equation (9) must hold for each 𝑠𝑡 for each 𝑡 ≥ 1.

37.3.2 Comparison with Lucas-Stokey Economy

The expression on the right side of (9) in the Lucas-Stokey (1983) economy would equal the
present value of a continuation stream of government surpluses evaluated at what would be
competitive equilibrium Arrow-Debreu prices at date 𝑡.
In the Lucas-Stokey economy, that present value is measurable with respect to 𝑠𝑡 .
In the AMSS economy, the restriction that government debt be risk-free imposes that that
same present value must be measurable with respect to 𝑠𝑡−1 .
In a language used in the literature on incomplete markets models, it can be said that the
AMSS model requires that at each (𝑡, 𝑠𝑡 ) what would be the present value of continuation
government surpluses in the Lucas-Stokey model must belong to the marketable subspace
of the AMSS model.

37.3.3 Ramsey Problem Without State-contingent Debt

After we have substituted the resource constraint into the utility function, we can express the Ramsey problem as being to choose an allocation that solves

$$\max_{\{c_t(s^t), b_{t+1}(s^t)\}} \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t u\left(c_t(s^t), 1 - c_t(s^t) - g_t(s^t)\right)$$

where the maximization is subject to


𝑢𝑐 (𝑠𝑗 )
𝔼0 ∑ 𝛽 𝑗 𝑧(𝑠𝑗 ) ≥ 𝑏0 (𝑠−1 ) (10)
𝑗=0
𝑢𝑐 (𝑠0 )

and


$$
\mathbb{E}_t \sum_{j=0}^{\infty} \beta^j\, \frac{u_c(s^{t+j})}{u_c(s^t)}\, z(s^{t+j}) = b_t(s^{t-1}) \quad \forall\, s^t \tag{11}
$$

given 𝑏0 (𝑠−1 ).

Lagrangian Formulation

Let 𝛾0 (𝑠0 ) be a non-negative Lagrange multiplier on constraint (10).


As in the Lucas-Stokey economy, this multiplier is strictly positive when the government must
resort to distortionary taxation; otherwise it equals zero.
A consequence of the assumption that there are no markets in state-contingent securities and that a market exists only in a risk-free security is that we have to attach a stochastic process $\{\gamma_t(s^t)\}_{t=1}^{\infty}$ of Lagrange multipliers to the implementability constraints (11).

Depending on how the constraints bind, these multipliers can be positive or negative:

$$
\gamma_t(s^t) \;\geq\; (\leq)\; 0 \quad \text{if the constraint binds in the direction} \quad \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j\, \frac{u_c(s^{t+j})}{u_c(s^t)}\, z(s^{t+j}) \;\geq\; (\leq)\; b_t(s^{t-1})
$$

A negative multiplier 𝛾𝑡 (𝑠𝑡 ) < 0 means that if we could relax constraint (11), we would like to
increase the beginning-of-period indebtedness for that particular realization of history 𝑠𝑡 .
That would let us reduce the beginning-of-period indebtedness for some other history [2].
These features flow from the fact that the government cannot use state-contingent debt and
therefore cannot allocate its indebtedness efficiently across future states.

37.3.4 Some Calculations

It is helpful to apply two transformations to the Lagrangian.


Multiply constraint (10) by 𝑢𝑐 (𝑠0 ) and the constraints (11) by 𝛽 𝑡 𝑢𝑐 (𝑠𝑡 ).
Then a Lagrangian for the Ramsey problem can be represented as


$$
\begin{aligned}
J = \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \bigg\{ & u\big(c_t(s^t),\, 1 - c_t(s^t) - g_t(s^t)\big) \\
& + \gamma_t(s^t) \Big[\, \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j u_c(s^{t+j})\, z(s^{t+j}) - u_c(s^t)\, b_t(s^{t-1}) \Big] \bigg\} \\
= \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \bigg\{ & u\big(c_t(s^t),\, 1 - c_t(s^t) - g_t(s^t)\big) \\
& + \Psi_t(s^t)\, u_c(s^t)\, z(s^t) - \gamma_t(s^t)\, u_c(s^t)\, b_t(s^{t-1}) \bigg\}
\end{aligned} \tag{12}
$$

where

Ψ𝑡 (𝑠𝑡 ) = Ψ𝑡−1 (𝑠𝑡−1 ) + 𝛾𝑡 (𝑠𝑡 ) and Ψ−1 (𝑠−1 ) = 0 (13)

In (12), the second equality uses the law of iterated expectations and Abel’s summation for-
mula (also called summation by parts, see this page).
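Concretely, a sketch of that rearrangement: applying the law of iterated expectations and then collecting, for each date $\tau$, all multipliers attached to terms dated $\tau = t + j$ gives

$$
\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \gamma_t(s^t)\, \mathbb{E}_t \sum_{j=0}^{\infty} \beta^j u_c(s^{t+j})\, z(s^{t+j})
= \mathbb{E}_0 \sum_{\tau=0}^{\infty} \beta^{\tau} \Big[ \sum_{t=0}^{\tau} \gamma_t(s^t) \Big] u_c(s^{\tau})\, z(s^{\tau})
= \mathbb{E}_0 \sum_{\tau=0}^{\infty} \beta^{\tau}\, \Psi_{\tau}(s^{\tau})\, u_c(s^{\tau})\, z(s^{\tau}),
$$

which is how the cumulative multipliers $\Psi_{\tau}(s^{\tau})$ of (13) arise.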
First-order conditions with respect to 𝑐𝑡 (𝑠𝑡 ) can be expressed as

$$
\begin{aligned}
u_c(s^t) - u_\ell(s^t) &+ \Psi_t(s^t) \big\{ [u_{cc}(s^t) - u_{c\ell}(s^t)]\, z(s^t) + u_c(s^t)\, z_c(s^t) \big\} \\
&- \gamma_t(s^t)\, [u_{cc}(s^t) - u_{c\ell}(s^t)]\, b_t(s^{t-1}) = 0
\end{aligned} \tag{14}
$$

and with respect to 𝑏𝑡 (𝑠𝑡 ) as

𝔼𝑡 [𝛾𝑡+1 (𝑠𝑡+1 ) 𝑢𝑐 (𝑠𝑡+1 )] = 0 (15)
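Combining (15) with recursion (13) delivers, as a sketch of what will reappear in the recursive formulation below,

$$
\mathbb{E}_t \big[ \Psi_{t+1}(s^{t+1})\, u_c(s^{t+1}) \big] = \Psi_t(s^t)\, \mathbb{E}_t \big[ u_c(s^{t+1}) \big],
$$

so that $\Psi_t(s^t)$ behaves as a martingale under transition probabilities twisted by marginal utility.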

If we substitute 𝑧(𝑠𝑡 ) from (7) and its derivative 𝑧𝑐 (𝑠𝑡 ) into the first-order condition (14), we
find two differences from the corresponding condition for the optimal allocation in a Lucas-
Stokey economy with state-contingent government debt.

1. The term involving 𝑏𝑡 (𝑠𝑡−1 ) in the first-order condition (14) does not appear in the corresponding expression for the Lucas-Stokey economy.

• This term reflects the constraint that beginning-of-period government indebtedness must be the same across all realizations of next period's state, a constraint that would not be present if government debt could be state contingent.

2. The Lagrange multiplier Ψ𝑡 (𝑠𝑡 ) in the first-order condition (14) may change over time in response to realizations of the state, while the multiplier Φ in the Lucas-Stokey economy is time-invariant.

We need some code from an earlier lecture on optimal taxation with state-contingent debt, namely, its sequential allocation implementation:

In [3]: import numpy as np


from scipy.optimize import root
from quantecon import MarkovChain

class SequentialAllocation:

'''
Class that takes CESutility or BGPutility object as input returns
planner's allocation as a function of the multiplier on the
implementability constraint μ.
'''

def __init__(self, model):

# Initialize from model object attributes


self.β, self.π, self.G = model.β, model.π, model.G
self.mc, self.Θ = MarkovChain(self.π), model.Θ
self.S = len(model.π) # Number of states
self.model = model

# Find the first best allocation


self.find_first_best()

def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Un = model.Uc, model.Un

def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])

res = root(res, 0.5 * np.ones(2 * S))

if not res.success:
raise Exception('Could not find first best')

self.cFB = res.x[:S]
self.nFB = res.x[S:]

# Multiplier on the resource constraint


self.ΞFB = Uc(self.cFB, self.nFB)
self.zFB = np.hstack([self.cFB, self.nFB, self.ΞFB])

def time1_allocation(self, μ):


'''
Computes optimal allocation for time t >= 1 for a given μ
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

def FOC(z):
c = z[:S]
n = z[S:2 * S]
Ξ = z[2 * S:]
# FOC of c
return np.hstack([Uc(c, n) - μ * (Ucc(c, n) * c + Uc(c, n)) - Ξ,

Un(c, n) - μ * (Unn(c, n) * n + Un(c, n)) \
+ Θ * Ξ, # FOC of n
Θ * n - c - G])

# Find the root of the first-order condition


res = root(FOC, self.zFB)
if not res.success:
raise Exception('Could not find LS allocation.')
z = res.x
c, n, Ξ = z[:S], z[S:2 * S], z[2 * S:]

# Compute x
I = Uc(c, n) * c + Un(c, n) * n
x = np.linalg.solve(np.eye(S) - self.β * self.π, I)

return c, n, x, Ξ

def time0_allocation(self, B_, s_0):


'''
Finds the optimal allocation given initial government debt B_ and
state s_0
'''
model, π, Θ, G, β = self.model, self.π, self.Θ, self.G, self.β
Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

# First order conditions of planner's problem


def FOC(z):
μ, c, n, Ξ = z
xprime = self.time1_allocation(μ)[2]
return np.hstack([Uc(c, n) * (c - B_) + Un(c, n) * n + β * π[s_0]
@ xprime,
Uc(c, n) - μ * (Ucc(c, n)
* (c - B_) + Uc(c, n)) - Ξ,
Un(c, n) - μ * (Unn(c, n) * n
+ Un(c, n)) + Θ[s_0] * Ξ,
(Θ * n - c - G)[s_0]])

# Find root
res = root(FOC, np.array(
[0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]]))
if not res.success:
raise Exception('Could not find time 0 LS allocation.')

return res.x

def time1_value(self, μ):


'''
Find the value associated with multiplier μ
'''
c, n, x, Ξ = self.time1_allocation(μ)
U = self.model.U(c, n)
V = np.linalg.solve(np.eye(self.S) - self.β * self.π, U)
return c, n, x, V

def Τ(self, c, n):


'''
Computes Τ given c, n
'''

model = self.model
Uc, Un = model.Uc(c, n), model.Un(c, n)

return 1 + Un / (self.Θ * Uc)

def simulate(self, B_, s_0, T, sHist=None):


'''
Simulates planners policies for T periods
'''
model, π, β = self.model, self.π, self.β
Uc = model.Uc

if sHist is None:
sHist = self.mc.simulate(T, s_0)

cHist, nHist, Bhist, ΤHist, μHist = np.zeros((5, T))


RHist = np.zeros(T - 1)

# Time 0
μ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0)
ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
μHist[0] = μ

# Time 1 onward
for t in range(1, T):
c, n, x, Ξ = self.time1_allocation(μ)
Τ = self.Τ(c, n)
u_c = Uc(c, n)
s = sHist[t]
Eu_c = π[sHist[t - 1]] @ u_c
cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x[s] / u_c[s], Τ[s]
RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c)
μHist[t] = μ

return np.array([cHist, nHist, Bhist, ΤHist, sHist, μHist, RHist])

To analyze the AMSS model, we find it useful to adopt a recursive formulation using tech-
niques like those in our lectures on dynamic Stackelberg models and optimal taxation with
state-contingent debt.

37.4 Recursive Version of AMSS Model

We now describe a recursive formulation of the AMSS economy.


We have noted that from the point of view of the Ramsey planner, the restriction to one-
period risk-free securities
• leaves intact the single implementability constraint on allocations (8) from the Lucas-
Stokey economy, but
• adds measurability constraints (6) on functions of tails of allocations at each time and
history
We now explore how these constraints alter Bellman equations for a time 0 Ramsey planner

and for time 𝑡 ≥ 1, history 𝑠𝑡 continuation Ramsey planners.

37.4.1 Recasting State Variables

In the AMSS setting, the government faces a sequence of budget constraints

𝜏𝑡 (𝑠𝑡 )𝑛𝑡 (𝑠𝑡 ) + 𝑇𝑡 (𝑠𝑡 ) + 𝑏𝑡+1 (𝑠𝑡 )/𝑅𝑡 (𝑠𝑡 ) = 𝑔𝑡 + 𝑏𝑡 (𝑠𝑡−1 )

where 𝑅𝑡 (𝑠𝑡 ) is the gross risk-free rate of interest between 𝑡 and 𝑡 + 1 at history 𝑠𝑡 and 𝑇𝑡 (𝑠𝑡 )
are non-negative transfers.
Throughout this lecture, we shall set transfers to zero (for some issues about the limiting
behavior of debt, this makes a possibly important difference from AMSS [3], who restricted
transfers to be non-negative).
In this case, the household faces a sequence of budget constraints

𝑏𝑡 (𝑠𝑡−1 ) + (1 − 𝜏𝑡 (𝑠𝑡 ))𝑛𝑡 (𝑠𝑡 ) = 𝑐𝑡 (𝑠𝑡 ) + 𝑏𝑡+1 (𝑠𝑡 )/𝑅𝑡 (𝑠𝑡 ) (16)

The household’s first-order conditions are 𝑢𝑐,𝑡 = 𝛽𝑅𝑡 𝔼𝑡 𝑢𝑐,𝑡+1 and (1 − 𝜏𝑡 )𝑢𝑐,𝑡 = 𝑢𝑙,𝑡 .
Using these to eliminate 𝑅𝑡 and 𝜏𝑡 from budget constraint (16) gives

$$
b_t(s^{t-1}) + \frac{u_{l,t}(s^t)}{u_{c,t}(s^t)}\, n_t(s^t) = c_t(s^t) + \frac{\beta\, (\mathbb{E}_t u_{c,t+1})\, b_{t+1}(s^t)}{u_{c,t}(s^t)} \tag{17}
$$
or

𝑢𝑐,𝑡 (𝑠𝑡 )𝑏𝑡 (𝑠𝑡−1 ) + 𝑢𝑙,𝑡 (𝑠𝑡 )𝑛𝑡 (𝑠𝑡 ) = 𝑢𝑐,𝑡 (𝑠𝑡 )𝑐𝑡 (𝑠𝑡 ) + 𝛽(𝔼𝑡 𝑢𝑐,𝑡+1 )𝑏𝑡+1 (𝑠𝑡 ) (18)

Now define

$$
x_t \equiv \beta\, b_{t+1}(s^t)\, \mathbb{E}_t u_{c,t+1} = u_{c,t}(s^t)\, \frac{b_{t+1}(s^t)}{R_t(s^t)} \tag{19}
$$

and represent the household’s budget constraint at time 𝑡, history 𝑠𝑡 as

$$
\frac{u_{c,t}\, x_{t-1}}{\beta\, \mathbb{E}_{t-1} u_{c,t}} = u_{c,t}\, c_t - u_{l,t}\, n_t + x_t \tag{20}
$$

for 𝑡 ≥ 1.
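To see where (20) comes from, note that definition (19), dated $t-1$, implies

$$
b_t(s^{t-1}) = \frac{x_{t-1}}{\beta\, \mathbb{E}_{t-1} u_{c,t}};
$$

substituting this into the left side of (18) and using (19) again on the right side delivers (20).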

37.4.2 Measurability Constraints

Write equation (18) as

$$
b_t(s^{t-1}) = c_t(s^t) - \frac{u_{l,t}(s^t)}{u_{c,t}(s^t)}\, n_t(s^t) + \frac{\beta\, (\mathbb{E}_t u_{c,t+1})\, b_{t+1}(s^t)}{u_{c,t}(s^t)} \tag{21}
$$

The right side of equation (21) expresses the time 𝑡 value of government debt in terms of a
linear combination of terms whose individual components are measurable with respect to 𝑠𝑡 .

The sum of terms on the right side of equation (21) must equal 𝑏𝑡 (𝑠𝑡−1 ).
That implies that it has to be measurable with respect to 𝑠𝑡−1 .
Equations (21) are the measurability constraints that the AMSS model adds to the single time 0 implementability constraint imposed in the Lucas-Stokey model.

37.4.3 Two Bellman Equations

Let Π(𝑠|𝑠− ) be a Markov transition matrix whose entries tell probabilities of moving from
state 𝑠− to state 𝑠 in one period.
Let
• 𝑉 (𝑥− , 𝑠− ) be the continuation value of a continuation Ramsey plan at 𝑥𝑡−1 = 𝑥− , 𝑠𝑡−1 =
𝑠− for 𝑡 ≥ 1
• 𝑊 (𝑏, 𝑠) be the value of the Ramsey plan at time 0 at 𝑏0 = 𝑏 and 𝑠0 = 𝑠
We distinguish between two types of planners:
For 𝑡 ≥ 1, the value function for a continuation Ramsey planner satisfies the Bellman
equation

$$
V(x_-, s_-) = \max_{\{n(s),\, x(s)\}} \sum_{s} \Pi(s|s_-) \big[ u(n(s) - g(s),\, 1 - n(s)) + \beta V(x(s), s) \big] \tag{22}
$$

subject to the following collection of implementability constraints, one for each 𝑠 ∈ 𝑆:

$$
\frac{u_c(s)\, x_-}{\beta \sum_{\tilde{s}} \Pi(\tilde{s}|s_-)\, u_c(\tilde{s})} = u_c(s)(n(s) - g(s)) - u_l(s)\, n(s) + x(s) \tag{23}
$$

A continuation Ramsey planner at 𝑡 ≥ 1 takes (𝑥𝑡−1 , 𝑠𝑡−1 ) = (𝑥− , 𝑠− ) as given and before 𝑠 is
realized chooses (𝑛𝑡 (𝑠𝑡 ), 𝑥𝑡 (𝑠𝑡 )) = (𝑛(𝑠), 𝑥(𝑠)) for 𝑠 ∈ 𝑆.
The Ramsey planner takes (𝑏0 , 𝑠0 ) as given and chooses (𝑛0 , 𝑥0 ).
The value function 𝑊 (𝑏0 , 𝑠0 ) for the time 𝑡 = 0 Ramsey planner satisfies the Bellman equa-
tion

$$
W(b_0, s_0) = \max_{n_0,\, x_0}\; u(n_0 - g_0,\, 1 - n_0) + \beta V(x_0, s_0) \tag{24}
$$

where maximization is subject to

𝑢𝑐,0 𝑏0 = 𝑢𝑐,0 (𝑛0 − 𝑔0 ) − 𝑢𝑙,0 𝑛0 + 𝑥0 (25)

37.4.4 Martingale Supersedes State-Variable Degeneracy

Let 𝜇(𝑠|𝑠− )Π(𝑠|𝑠− ) be a Lagrange multiplier on the constraint (23) for state 𝑠.
After forming an appropriate Lagrangian, we find that the continuation Ramsey planner’s
first-order condition with respect to 𝑥(𝑠) is

𝛽𝑉𝑥 (𝑥(𝑠), 𝑠) = 𝜇(𝑠|𝑠− ) (26)



Applying the envelope theorem to Bellman equation (22) gives

$$
V_x(x_-, s_-) = \sum_{s} \Pi(s|s_-)\, \mu(s|s_-)\, \frac{u_c(s)}{\beta \sum_{\tilde{s}} \Pi(\tilde{s}|s_-)\, u_c(\tilde{s})} \tag{27}
$$

Equations (26) and (27) imply that

$$
V_x(x_-, s_-) = \sum_{s} \left( \Pi(s|s_-)\, \frac{u_c(s)}{\sum_{\tilde{s}} \Pi(\tilde{s}|s_-)\, u_c(\tilde{s})} \right) V_x(x(s), s) \tag{28}
$$

Equation (28) states that 𝑉𝑥 (𝑥, 𝑠) is a risk-adjusted martingale.


Saying that 𝑉𝑥 (𝑥, 𝑠) is a risk-adjusted martingale means that 𝑉𝑥 (𝑥, 𝑠) is a martingale with re-
spect to the probability distribution over 𝑠𝑡 sequences that are generated by the twisted tran-
sition probability matrix:

$$
\check{\Pi}(s|s_-) \equiv \Pi(s|s_-)\, \frac{u_c(s)}{\sum_{\tilde{s}} \Pi(\tilde{s}|s_-)\, u_c(\tilde{s})}
$$

Exercise: Please verify that $\check{\Pi}(s|s_-)$ is a valid Markov transition density, i.e., that its elements are all non-negative and that for each $s_-$, the sum over $s$ equals unity.
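Here is a minimal numerical sketch of that verification; the two-state matrix Π and the marginal utilities 𝑢𝑐 (𝑠) below are illustrative values, not objects computed elsewhere in this lecture.

import numpy as np

# Illustrative two-state transition matrix and marginal utilities
Π = np.array([[0.6, 0.4],
              [0.3, 0.7]])
u_c = np.array([1.2, 0.8])

# Twisted matrix: reweight each row of Π by u_c(s) and divide by the
# row's conditional expected marginal utility Σ Π(s̃|s_-) u_c(s̃)
Π_check = Π * u_c / (Π @ u_c)[:, np.newaxis]

print(np.all(Π_check >= 0))                 # entries are non-negative
print(np.allclose(Π_check.sum(axis=1), 1))  # each row sums to unity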

37.4.5 Absence of State Variable Degeneracy

Along a Ramsey plan, the state variable 𝑥𝑡 = 𝑥𝑡 (𝑠𝑡 , 𝑏0 ) becomes a function of the history 𝑠𝑡
and initial government debt 𝑏0 .
In the Lucas-Stokey model, we found that
• a counterpart to 𝑉𝑥 (𝑥, 𝑠) is time-invariant and equal to the Lagrange multiplier on the
Lucas-Stokey implementability constraint
• time invariance of 𝑉𝑥 (𝑥, 𝑠) is the source of a key feature of the Lucas-Stokey model,
namely, state variable degeneracy (i.e., 𝑥𝑡 is an exact function of 𝑠𝑡 )
That 𝑉𝑥 (𝑥, 𝑠) varies over time according to a twisted martingale means that there is no state-
variable degeneracy in the AMSS model.
In the AMSS model, both 𝑥 and 𝑠 are needed to describe the state.
This property of the AMSS model transmits a twisted martingale component to consumption,
employment, and the tax rate.

37.4.6 Digression on Non-negative Transfers

Throughout this lecture, we have imposed that transfers 𝑇𝑡 = 0.


AMSS [3] instead imposed a nonnegativity constraint 𝑇𝑡 ≥ 0 on transfers.
They also considered a special case of quasi-linear preferences, 𝑢(𝑐, 𝑙) = 𝑐 + 𝐻(𝑙).
In this case, 𝑉𝑥 (𝑥, 𝑠) ≤ 0 is a non-positive martingale.
By the martingale convergence theorem 𝑉𝑥 (𝑥, 𝑠) converges almost surely.

Furthermore, when the Markov chain Π(𝑠|𝑠− ) and the government expenditure function 𝑔(𝑠)
are such that 𝑔𝑡 is perpetually random, 𝑉𝑥 (𝑥, 𝑠) almost surely converges to zero.
For quasi-linear preferences, the first-order condition with respect to 𝑛(𝑠) becomes

(1 − 𝜇(𝑠|𝑠− ))(1 − 𝑢𝑙 (𝑠)) + 𝜇(𝑠|𝑠− )𝑛(𝑠)𝑢𝑙𝑙 (𝑠) = 0

When 𝜇(𝑠|𝑠− ) = 𝛽𝑉𝑥 (𝑥(𝑠), 𝑠) converges to zero, in the limit 𝑢𝑙 (𝑠) = 1 = 𝑢𝑐 (𝑠), so that
𝜏 (𝑥(𝑠), 𝑠) = 0.
Thus, in the limit, if 𝑔𝑡 is perpetually random, the government accumulates sufficient assets
to finance all expenditures from earnings on those assets, returning any excess revenues to the
household as non-negative lump-sum transfers.

37.4.7 Code

The recursive formulation is implemented as follows

In [4]: import numpy as np


from scipy.optimize import fmin_slsqp
from scipy.optimize import root
from quantecon import MarkovChain

class RecursiveAllocationAMSS:

def __init__(self, model, μgrid, tol_diff=1e-4, tol=1e-4):

self.β, self.π, self.G = model.β, model.π, model.G


self.mc, self.S = MarkovChain(self.π), len(model.π)  # Number of states
self.Θ, self.model, self.μgrid = model.Θ, model, μgrid
self.tol_diff, self.tol = tol_diff, tol

# Find the first best allocation


self.solve_time1_bellman()
self.T.time_0 = True # Bellman equation now solves time 0 problem

def solve_time1_bellman(self):
'''
Solve the time 1 Bellman equation for calibration model and
initial grid μgrid0
'''
model, μgrid0 = self.model, self.μgrid
π = model.π
S = len(model.π)

# First get initial fit from Lucas Stokey solution.


# Need to change things to be ex ante
pp = SequentialAllocation(model)
interp = interpolator_factory(2, None)

def incomplete_allocation(μ_, s_):


c, n, x, V = pp.time1_value(μ_)
return c, n, π[s_] @ x, π[s_] @ V
cf, nf, xgrid, Vf, xprimef = [], [], [], [], []

for s_ in range(S):
c, n, x, V = zip(*map(lambda μ: incomplete_allocation(μ, s_), μgrid0))
c, n = np.vstack(c).T, np.vstack(n).T
x, V = np.hstack(x), np.hstack(V)
xprimes = np.vstack([x] * S)
cf.append(interp(x, c))
nf.append(interp(x, n))
Vf.append(interp(x, V))
xgrid.append(x)
xprimef.append(interp(x, xprimes))
cf, nf, xprimef = fun_vstack(cf), fun_vstack(nf), fun_vstack(xprimef)
Vf = fun_hstack(Vf)
policies = [cf, nf, xprimef]

# Create xgrid
x = np.vstack(xgrid).T
xbar = [x.min(0).max(), x.max(0).min()]
xgrid = np.linspace(xbar[0], xbar[1], len(μgrid0))
self.xgrid = xgrid

# Now iterate on Bellman equation


T = BellmanEquation(model, xgrid, policies, tol=self.tol)
diff = 1
while diff > self.tol_diff:
PF = T(Vf)

Vfnew, policies = self.fit_policy_function(PF)


diff = np.abs((Vf(xgrid) - Vfnew(xgrid)) / Vf(xgrid)).max()

print(diff)
Vf = Vfnew

# Store value function policies and Bellman Equations


self.Vf = Vf
self.policies = policies
self.T = T

def fit_policy_function(self, PF):


'''
Fits the policy functions
'''
S, xgrid = len(self.π), self.xgrid
interp = interpolator_factory(3, 0)
cf, nf, xprimef, Tf, Vf = [], [], [], [], []
for s_ in range(S):
PFvec = np.vstack([PF(x, s_) for x in self.xgrid]).T
Vf.append(interp(xgrid, PFvec[0, :]))
cf.append(interp(xgrid, PFvec[1:1 + S]))
nf.append(interp(xgrid, PFvec[1 + S:1 + 2 * S]))
xprimef.append(interp(xgrid, PFvec[1 + 2 * S:1 + 3 * S]))
Tf.append(interp(xgrid, PFvec[1 + 3 * S:]))
policies = fun_vstack(cf), fun_vstack(
nf), fun_vstack(xprimef), fun_vstack(Tf)
Vf = fun_hstack(Vf)
return Vf, policies

def Τ(self, c, n):



'''
Computes Τ given c and n
'''
model = self.model
Uc, Un = model.Uc(c, n), model.Un(c, n)

return 1 + Un / (self.Θ * Uc)

def time0_allocation(self, B_, s0):


'''
Finds the optimal allocation given initial government debt B_ and
state s_0
'''
PF = self.T(self.Vf)
z0 = PF(B_, s0)
c0, n0, xprime0, T0 = z0[1:]
return c0, n0, xprime0, T0

def simulate(self, B_, s_0, T, sHist=None):


'''
Simulates planners policies for T periods
'''
model, π = self.model, self.π
Uc = model.Uc
cf, nf, xprimef, Tf = self.policies

if sHist is None:
sHist = simulate_markov(π, s_0, T)

cHist, nHist, Bhist, xHist, ΤHist, THist, μHist = np.zeros((7, T))


# Time 0
cHist[0], nHist[0], xHist[0], THist[0] = self.time0_allocation(B_, s_0)
ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
μHist[0] = self.Vf[s_0](xHist[0])

# Time 1 onward
for t in range(1, T):
s_, x, s = sHist[t - 1], xHist[t - 1], sHist[t]
c, n, xprime, T = cf[s_, :](x), nf[s_, :](
x), xprimef[s_, :](x), Tf[s_, :](x)

Τ = self.Τ(c, n)[s]
u_c = Uc(c, n)
Eu_c = π[s_, :] @ u_c

μHist[t] = self.Vf[s](xprime[s])

cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x / Eu_c, Τ


xHist[t], THist[t] = xprime[s], T[s]
return np.array([cHist, nHist, Bhist, ΤHist, THist, μHist, sHist, xHist])

class BellmanEquation:
'''
Bellman equation for the continuation of the Lucas-Stokey Problem

'''

def __init__(self, model, xgrid, policies0, tol, maxiter=1000):

self.β, self.π, self.G = model.β, model.π, model.G


self.S = len(model.π) # Number of states
self.Θ, self.model, self.tol = model.Θ, model, tol
self.maxiter = maxiter

self.xbar = [min(xgrid), max(xgrid)]


self.time_0 = False

self.z0 = {}
cf, nf, xprimef = policies0

for s_ in range(self.S):
for x in xgrid:
self.z0[x, s_] = np.hstack([cf[s_, :](x),
nf[s_, :](x),
xprimef[s_, :](x),
np.zeros(self.S)])

self.find_first_best()

def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G

def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])

res = root(res, 0.5 * np.ones(2 * S))


if not res.success:
raise Exception('Could not find first best')

self.cFB = res.x[:S]
self.nFB = res.x[S:]
IFB = Uc(self.cFB, self.nFB) * self.cFB + \
Un(self.cFB, self.nFB) * self.nFB

self.xFB = np.linalg.solve(np.eye(S) - self.β * self.π, IFB)

self.zFB = {}
for s in range(S):
self.zFB[s] = np.hstack(
[self.cFB[s], self.nFB[s], self.π[s] @ self.xFB, 0.])

def __call__(self, Vf):


'''
Given the continuation value function for next period, return this
period's value function T(V) and the optimal policies.
'''

if not self.time_0:
def PF(x, s): return self.get_policies_time1(x, s, Vf)
else:
def PF(B_, s0): return self.get_policies_time0(B_, s0, Vf)
return PF

def get_policies_time1(self, x, s_, Vf):


'''
Finds the optimal policies
'''
model, β, Θ, G, S, π = self.model, self.β, self.Θ, self.G, self.S, self.π
U, Uc, Un = model.U, model.Uc, model.Un

def objf(z):
c, n, xprime = z[:S], z[S:2 * S], z[2 * S:3 * S]

Vprime = np.empty(S)
for s in range(S):
Vprime[s] = Vf[s](xprime[s])

return -π[s_] @ (U(c, n) + β * Vprime)

def cons(z):
c, n, xprime, T = z[:S], z[S:2 * S], z[2 * S:3 * S], z[3 * S:]
u_c = Uc(c, n)
Eu_c = π[s_] @ u_c
return np.hstack([
x * u_c / Eu_c - u_c * (c - T) - Un(c, n) * n - β * xprime,
Θ * n - c - G])

if model.transfers:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 100.)] * S
else:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 0.)] * S
out, fx, _, imode, smode = fmin_slsqp(objf, self.z0[x, s_],
f_eqcons=cons, bounds=bounds,
full_output=True, iprint=0,
acc=self.tol, iter=self.maxiter)

if imode > 0:
raise Exception(smode)

self.z0[x, s_] = out


return np.hstack([-fx, out])

def get_policies_time0(self, B_, s0, Vf):


'''
Finds the optimal policies
'''
model, β, Θ, G = self.model, self.β, self.Θ, self.G
U, Uc, Un = model.U, model.Uc, model.Un

def objf(z):
c, n, xprime = z[:-1]

return -(U(c, n) + β * Vf[s0](xprime))

def cons(z):
c, n, xprime, T = z
return np.hstack([
-Uc(c, n) * (c - B_ - T) - Un(c, n) * n - β * xprime,
(Θ * n - c - G)[s0]])

if model.transfers:
bounds = [(0., 100), (0., 100), self.xbar, (0., 100.)]
else:
bounds = [(0., 100), (0., 100), self.xbar, (0., 0.)]
out, fx, _, imode, smode = fmin_slsqp(objf, self.zFB[s0], f_eqcons=cons,
                                      bounds=bounds, full_output=True,
                                      iprint=0)

if imode > 0:
raise Exception(smode)

return np.hstack([-fx, out])

37.5 Examples

We now turn to some examples.


We will first build some useful functions for solving the model

In [5]: import numpy as np


from scipy.interpolate import UnivariateSpline

class interpolate_wrapper:

def __init__(self, F):


self.F = F

def __getitem__(self, index):


return interpolate_wrapper(np.asarray(self.F[index]))

def reshape(self, *args):


self.F = self.F.reshape(*args)
return self

def transpose(self):
self.F = self.F.transpose()

def __len__(self):
return len(self.F)

def __call__(self, xvec):


x = np.atleast_1d(xvec)
shape = self.F.shape
if len(x) == 1:
fhat = np.hstack([f(x) for f in self.F.flatten()])
return fhat.reshape(shape)

else:
fhat = np.vstack([f(x) for f in self.F.flatten()])
return fhat.reshape(np.hstack((shape, len(x))))

class interpolator_factory:

def __init__(self, k, s):


self.k, self.s = k, s

def __call__(self, xgrid, Fs):


shape, m = Fs.shape[:-1], Fs.shape[-1]
Fs = Fs.reshape((-1, m))
F = []
xgrid = np.sort(xgrid) # Sort xgrid
for Fhat in Fs:
F.append(UnivariateSpline(xgrid, Fhat, k=self.k, s=self.s))
return interpolate_wrapper(np.array(F).reshape(shape))

def fun_vstack(fun_list):

Fs = [IW.F for IW in fun_list]


return interpolate_wrapper(np.vstack(Fs))

def fun_hstack(fun_list):

Fs = [IW.F for IW in fun_list]


return interpolate_wrapper(np.hstack(Fs))

def simulate_markov(π, s_0, T):

sHist = np.empty(T, dtype=int)


sHist[0] = s_0
S = len(π)
for t in range(1, T):
sHist[t] = np.random.choice(np.arange(S), p=π[sHist[t - 1]])

return sHist
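As a quick illustration of these helpers with arbitrary demonstration values, we can fit a spline on a toy grid and simulate a short Markov history:

# Fit a cubic smoothing spline on a toy grid and evaluate it
xgrid_demo = np.linspace(0., 1., 10)
F_demo = np.sin(xgrid_demo)[np.newaxis, :]
f_demo = interpolator_factory(3, 0)(xgrid_demo, F_demo)
print(f_demo(np.array([0.25, 0.5])))

# Simulate 10 periods of a two-state IID chain starting in state 0
π_demo = 0.5 * np.ones((2, 2))
print(simulate_markov(π_demo, 0, 10))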

37.5.1 Anticipated One-Period War

In our lecture on optimal taxation with state contingent debt we studied how the government
manages uncertainty in a simple setting.
As in that lecture, we assume the one-period utility function

$$
u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}
$$

Note
For convenience in matching our computer code, we have expressed utility as a
function of 𝑛 rather than leisure 𝑙.

We consider the same government expenditure process studied in the lecture on optimal taxa-
tion with state contingent debt.
Government expenditures are known for sure in all periods except one.
• For 𝑡 < 3 or 𝑡 > 3 we assume that 𝑔𝑡 = 𝑔𝑙 = 0.1.
• At 𝑡 = 3 a war occurs with probability 0.5.
– If there is war, 𝑔3 = 𝑔ℎ = 0.2.
– If there is no war 𝑔3 = 𝑔𝑙 = 0.1.
A useful trick is to define components of the state vector as the following six (𝑡, 𝑔) pairs:

(0, 𝑔𝑙 ), (1, 𝑔𝑙 ), (2, 𝑔𝑙 ), (3, 𝑔𝑙 ), (3, 𝑔ℎ ), (𝑡 ≥ 4, 𝑔𝑙 )

We think of these 6 states as corresponding to 𝑠 = 1, 2, 3, 4, 5, 6.


The transition matrix is

$$
P = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.5 & 0.5 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
$$

The government expenditure at each state is

$$
g = \begin{pmatrix} 0.1 \\ 0.1 \\ 0.1 \\ 0.1 \\ 0.2 \\ 0.1 \end{pmatrix}
$$
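As a sanity check, the following sketch builds 𝑃 and 𝑔 exactly as displayed above and verifies that each row of 𝑃 is a probability distribution. (Note that the code cell below orders the two time-3 states differently, placing 𝑔ℎ in the fourth slot.)

import numpy as np

P = np.array([[0, 1, 0, 0,   0,   0],
              [0, 0, 1, 0,   0,   0],
              [0, 0, 0, 0.5, 0.5, 0],
              [0, 0, 0, 0,   0,   1],
              [0, 0, 0, 0,   0,   1],
              [0, 0, 0, 0,   0,   1]])
g = np.array([0.1, 0.1, 0.1, 0.1, 0.2, 0.1])

assert np.allclose(P.sum(axis=1), 1)  # each row is a distribution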

We assume the same utility parameters as in the Lucas-Stokey economy.


This utility function is implemented in the following class.

In [6]: import numpy as np

class CRRAutility:

def __init__(self,
β=0.9,
σ=2,
γ=2,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):

self.β, self.σ, self.γ = β, σ, γ


self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

# Utility function
def U(self, c, n):
σ = self.σ
if σ == 1.:
U = np.log(c)
else:
U = (c**(1 - σ) - 1) / (1 - σ)
return U - n**(1 + self.γ) / (1 + self.γ)

# Derivatives of utility function


def Uc(self, c, n):
return c**(-self.σ)

def Ucc(self, c, n):


return -self.σ * c**(-self.σ - 1)

def Un(self, c, n):


return -n**self.γ

def Unn(self, c, n):


return -self.γ * n**(self.γ - 1)
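As a quick consistency check on the hand-coded derivatives (an illustrative test with arbitrary values, not part of the original lecture code), we can compare Uc against a finite-difference derivative of U:

util_check = CRRAutility()
c, n = np.array([0.5, 0.6]), np.array([0.7, 0.8])
eps = 1e-6

# Central finite-difference approximation to ∂U/∂c should match Uc
fd_Uc = (util_check.U(c + eps, n) - util_check.U(c - eps, n)) / (2 * eps)
print(np.allclose(fd_Uc, util_check.Uc(c, n), atol=1e-6))  # expect True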

The following figure plots the Ramsey plan under both complete and incomplete markets for
both possible realizations of the state at time 𝑡 = 3.
Optimal policies when the government has access to state contingent debt are represented by
black lines, while the optimal policies when there is only a risk-free bond are in red.
Paths with circles are histories in which there is peace, while those with triangles denote war.

In [7]: # Initialize μgrid for value function iteration


μ_grid = np.linspace(-0.7, 0.01, 200)

time_example = CRRAutility()

time_example.π = np.array([[0, 1, 0, 0, 0, 0],


[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0.5, 0.5, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1]])

time_example.G = np.array([0.1, 0.1, 0.1, 0.2, 0.1, 0.1])


time_example.Θ = np.ones(6) # Θ can in principle be random

time_example.transfers = True # Government can use transfers


# Solve sequential problem
time_sequential = SequentialAllocation(time_example)
# Solve recursive problem
time_bellman = RecursiveAllocationAMSS(time_example, μ_grid)

sHist_h = np.array([0, 1, 2, 3, 5, 5, 5])


sHist_l = np.array([0, 1, 2, 4, 5, 5, 5])

sim_seq_h = time_sequential.simulate(1, 0, 7, sHist_h)


sim_bel_h = time_bellman.simulate(1, 0, 7, sHist_h)
sim_seq_l = time_sequential.simulate(1, 0, 7, sHist_l)
sim_bel_l = time_bellman.simulate(1, 0, 7, sHist_l)

# Government spending paths


sim_seq_l[4] = time_example.G[sHist_l]
sim_seq_h[4] = time_example.G[sHist_h]
sim_bel_l[4] = time_example.G[sHist_l]
sim_bel_h[4] = time_example.G[sHist_h]

# Output paths
sim_seq_l[5] = time_example.Θ[sHist_l] * sim_seq_l[1]
sim_seq_h[5] = time_example.Θ[sHist_h] * sim_seq_h[1]
sim_bel_l[5] = time_example.Θ[sHist_l] * sim_bel_l[1]
sim_bel_h[5] = time_example.Θ[sHist_h] * sim_bel_h[1]

fig, axes = plt.subplots(3, 2, figsize=(14, 10))


titles = ['Consumption', 'Labor Supply', 'Government Debt',
'Tax Rate', 'Government Spending', 'Output']

for ax, title, sim_l, sim_h, bel_l, bel_h in zip(axes.flatten(), titles,


sim_seq_l, sim_seq_h,
sim_bel_l, sim_bel_h):
ax.plot(sim_l, '-ok', sim_h, '-^k', bel_l, '-or', bel_h, '-^r', alpha=0.7)

ax.set(title=title)
ax.grid()

plt.tight_layout()
plt.show()

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:24:
RuntimeWarning: divide by zero encountered in reciprocal
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:29:
RuntimeWarning: divide by zero encountered in power
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:235:
RuntimeWarning: invalid value encountered in true_divide
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:228:
RuntimeWarning: invalid value encountered in matmul
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:233:
RuntimeWarning: invalid value encountered in matmul
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:235:
RuntimeWarning: invalid value encountered in multiply

0.6029333236643755
0.11899588239403049
0.09881553212225772
0.08354106892508192
0.07149555120835548
0.06173036758118132
0.05366019901394205
0.04689112026451663
0.04115178347560931
0.036240012965927396
0.032006237992696515
0.028368464481562206
0.025192689677184087
0.022405843880195616
0.01994774715614924
0.017777614158738117
0.01586311426476452

0.014157556340393418
0.012655688350303772
0.011323561508356405
0.010134342587404501
0.009067133049314944
0.008133363039380094
0.007289176565901135
0.006541414713738157
0.005872916742002829
0.005262680193064001
0.0047307749771207785
0.00425304528362447
0.003818501528167009
0.0034264405600953744
0.003079364780532014
0.002768326786546087
0.002490427866931677
0.002240592066624134
0.0020186948255381727
0.001817134273040178
0.001636402035539666
0.0014731339707420147
0.0013228186455305523
0.0011905279885160533
0.00124153679699236
0.0009619064545164963
0.000866106560101833
0.0007801798498127538
0.0007044038334509719
0.0010093490580442629
0.0007333374007962044
0.0008323024383068417
0.00046553960479602885
0.000419343630648423
0.0006110525605884945
0.0003393644339027041
0.00030505082851731526
0.0002748939327310508
0.0002466101258104514
0.00022217612526700695
0.00020017376735678401
0.00018111714263865545
0.00016358937979053516
0.00014736943218961575
0.00013236625616948046
0.00011853760872608077
0.00010958655326222953
9.594155330329376e-05

How a Ramsey planner responds to war depends on the structure of the asset market.
If it is able to trade state-contingent debt, then at time 𝑡 = 2
• the government purchases an Arrow security that pays off when 𝑔3 = 𝑔ℎ
• the government sells an Arrow security that pays off when 𝑔3 = 𝑔𝑙
• These purchases are designed in such a way that regardless of whether or not there is a
war at 𝑡 = 3, the government will begin period 𝑡 = 4 with the same government debt
This pattern facilitates smoothing tax rates across states.
The government without state contingent debt cannot do this.
Instead, it must enter time 𝑡 = 3 with the same level of debt falling due whether there is
peace or war at 𝑡 = 3.
It responds to this constraint by smoothing tax rates across time.
To finance a war it raises taxes and issues more debt.
To service the additional debt burden, it raises taxes in all future periods.
The absence of state contingent debt leads to an important difference in the optimal tax pol-
icy.
When the Ramsey planner has access to state contingent debt, the optimal tax policy is his-
tory independent
• the tax rate is a function of the current level of government spending only, given the
Lagrange multiplier on the implementability constraint
Without state contingent debt, the optimal tax rate is history dependent.
• A war at time 𝑡 = 3 causes a permanent increase in the tax rate.

Perpetual War Alert

History dependence occurs more dramatically in a case in which the government perpetually
faces the prospect of war.
This case was studied in the final example of the lecture on optimal taxation with state-
contingent debt.
There, each period the government faces a constant probability, 0.5, of war.
In addition, this example features the following preferences

𝑢(𝑐, 𝑛) = log(𝑐) + 0.69 log(1 − 𝑛)

Accordingly, we re-define our utility function.

In [8]: import numpy as np

class LogUtility:

def __init__(self,
β=0.9,
ψ=0.69,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):

self.β, self.ψ, self.π = β, ψ, π


self.G, self.Θ, self.transfers = G, Θ, transfers

# Utility function
def U(self, c, n):
return np.log(c) + self.ψ * np.log(1 - n)

# Derivatives of utility function


def Uc(self, c, n):
return 1 / c

def Ucc(self, c, n):


return -c**(-2)

def Un(self, c, n):


return -self.ψ / (1 - n)

def Unn(self, c, n):


return -self.ψ / (1 - n)**2

With these preferences, Ramsey tax rates will vary even in the Lucas-Stokey model with
state-contingent debt.
The figure below plots optimal tax policies for both the economy with state contingent debt
(circles) and the economy with only a risk-free bond (triangles).

In [9]: log_example = LogUtility()


log_example.transfers = True # Government can use transfers

log_sequential = SequentialAllocation(log_example)  # Solve sequential problem
log_bellman = RecursiveAllocationAMSS(log_example, μ_grid)

T = 20
sHist = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
0, 0, 0, 1, 1, 1, 1, 1, 1, 0])

# Simulate
sim_seq = log_sequential.simulate(0.5, 0, T, sHist)
sim_bel = log_bellman.simulate(0.5, 0, T, sHist)

titles = ['Consumption', 'Labor Supply', 'Government Debt',


'Tax Rate', 'Government Spending', 'Output']

# Government spending paths


sim_seq[4] = log_example.G[sHist]
sim_bel[4] = log_example.G[sHist]

# Output paths
sim_seq[5] = log_example.Θ[sHist] * sim_seq[1]
sim_bel[5] = log_example.Θ[sHist] * sim_bel[1]

fig, axes = plt.subplots(3, 2, figsize=(14, 10))

for ax, title, seq, bel in zip(axes.flatten(), titles, sim_seq, sim_bel):


ax.plot(seq, '-ok', bel, '-^b')
ax.set(title=title)
ax.grid()

axes[0, 0].legend(('Complete Markets', 'Incomplete Markets'))


plt.tight_layout()
plt.show()

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:18:
RuntimeWarning: invalid value encountered in log
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:18:
RuntimeWarning: divide by zero encountered in log
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:22:
RuntimeWarning: divide by zero encountered in true_divide
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:235:
RuntimeWarning: invalid value encountered in true_divide
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:235:
RuntimeWarning: invalid value encountered in multiply

0.09444436241416772
0.05938738476066831
0.009418765545751336
0.008379498688112458
0.0074624121825786136
0.006647816793826905
0.005931361486407361
0.005294448194342734
0.004725395436488051
0.004222277512512572
0.0037757366251940845
0.003374617862213814
0.0030173865247510546

0.00269993016386269
0.0024177509321289497
0.002162259165674643
0.0019376222404837299
0.0017354510332485583
0.0015551292898353367
0.0013916748352019216
0.0012464994865252006
0.0011179309581667092
0.0010013269972379381
0.0008961738614742138
0.0008040179956305549
0.0007206699569846133
0.0006461939268522426
0.000579422922424242
0.0005197690502640719
0.000465555835815599
0.00041739390175425607
0.000374127633271821
0.000335566869251414
0.0003008456324990823
0.00026990338277804
0.0002418389290012913
0.0002170504253178516
0.00019449275351929968
0.00017454878793347393
0.00015642743688768114
0.00014040012936887282
0.00012815686131192587
0.0001876785080261638
0.00025576608113993276
9.084112500810741e-05

When the government experiences a prolonged period of peace, it is able to reduce govern-
ment debt and set permanently lower tax rates.
However, the government finances a long war by borrowing and raising taxes.
This results in a drift away from policies with state contingent debt that depends on the his-
tory of shocks.
This is even more evident in the following figure that plots the evolution of the two policies
over 200 periods.

In [10]: T = 200 # Set T to 200 periods


sim_seq_long = log_sequential.simulate(0.5, 0, T)
sHist_long = sim_seq_long[-3]
sim_bel_long = log_bellman.simulate(0.5, 0, T, sHist_long)

titles = ['Consumption', 'Labor Supply', 'Government Debt',


'Tax Rate', 'Government Spending', 'Output']

# Government spending paths


sim_seq_long[4] = log_example.G[sHist_long]
sim_bel_long[4] = log_example.G[sHist_long]

# Output paths
sim_seq_long[5] = log_example.Θ[sHist_long] * sim_seq_long[1]
sim_bel_long[5] = log_example.Θ[sHist_long] * sim_bel_long[1]

fig, axes = plt.subplots(3, 2, figsize=(14, 10))

for ax, title, seq, bel in zip(axes.flatten(), titles, sim_seq_long,
                               sim_bel_long):
ax.plot(seq, '-k', bel, '-.b', alpha=0.5)
ax.set(title=title)
ax.grid()

axes[0, 0].legend(('Complete Markets','Incomplete Markets'))


plt.tight_layout()
plt.show()

Footnotes
[1] In an allocation that solves the Ramsey problem and that levies distorting taxes on labor,
why would the government ever want to hand revenues back to the private sector? It would
not in an economy with state-contingent debt, since any such allocation could be improved by
lowering distortionary taxes rather than handing out lump-sum transfers. But, without state-
contingent debt there can be circumstances when a government would like to make lump-sum
transfers to the private sector.
[2] From the first-order conditions for the Ramsey problem, there exists another realization $\tilde{s}^t$ with the same history up until the previous period, i.e., $\tilde{s}^{t-1} = s^{t-1}$, but where the multiplier on constraint (11) takes a positive value, so that $\gamma_t(\tilde{s}^t) > 0$.
Chapter 38

Fluctuating Interest Rates Deliver Fiscal Insurance

38.1 Contents

• Overview 38.2
• Forces at Work 38.3
• Logical Flow of Lecture 38.4
• Example Economy 38.5
• Reverse Engineering Strategy 38.6
• Code for Reverse Engineering 38.7
• Short Simulation for Reverse-engineered Initial Debt 38.8
• Long Simulation 38.9
• BEGS Approximations of Limiting Debt and Convergence Rate 38.10
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

38.2 Overview

This lecture extends our investigations of how optimal policies for levying a flat-rate tax on
labor income and issuing government debt depend on whether there are complete markets for
debt.
A Ramsey allocation and Ramsey policy in the AMSS [3] model described in optimal taxation
without state-contingent debt generally differs from a Ramsey allocation and Ramsey policy
in the Lucas-Stokey [45] model described in optimal taxation with state-contingent debt.
This is because the implementability restriction that a competitive equilibrium with a distort-
ing tax imposes on allocations in the Lucas-Stokey model is just one among a set of imple-
mentability conditions imposed in the AMSS model.
These additional constraints require that time 𝑡 components of a Ramsey allocation for the
AMSS model be measurable with respect to time 𝑡 − 1 information.
The measurability constraints imposed by the AMSS model are inherited from the restriction
that only one-period risk-free bonds can be traded.


Differences between the Ramsey allocations in the two models indicate that at least some
of the measurability constraints of the AMSS model of optimal taxation without state-
contingent debt are violated at the Ramsey allocation of a corresponding [45] model with
state-contingent debt.
Another way to say this is that differences between the Ramsey allocations of the two models
indicate that some of the measurability constraints of the AMSS model are violated at the
Ramsey allocation of the Lucas-Stokey model.
Nonzero Lagrange multipliers on those constraints make the Ramsey allocation for the AMSS
model differ from the Ramsey allocation for the Lucas-Stokey model.
This lecture studies a special AMSS model in which
• The exogenous state variable 𝑠𝑡 is governed by a finite-state Markov chain.
• With an arbitrary budget-feasible initial level of government debt, the measurability
constraints
– bind for many periods, but ….
– eventually, they stop binding evermore, so ….
– in the tail of the Ramsey plan, the Lagrange multipliers 𝛾𝑡 (𝑠𝑡 ) on the AMSS imple-
mentability constraints (8) converge to zero.
• After the implementability constraints (8) no longer bind in the tail of the AMSS Ram-
sey plan
– history dependence of the AMSS state variable 𝑥𝑡 vanishes and 𝑥𝑡 becomes a time-
invariant function of the Markov state 𝑠𝑡 .
– the par value of government debt becomes constant over time so that 𝑏𝑡+1 (𝑠𝑡 ) =
𝑏̄ for 𝑡 ≥ 𝑇 for a sufficiently large 𝑇 .
– 𝑏̄ < 0, so that the tail of the Ramsey plan instructs the government always to make
a constant par value of risk-free one-period loans to the private sector.
– the one-period gross interest rate 𝑅𝑡 (𝑠𝑡 ) on risk-free debt converges to a time-
invariant function of the Markov state 𝑠𝑡 .
• For a particular 𝑏0 < 0 (i.e., a positive level of initial government loans to the private
sector), the measurability constraints never bind.
• In this special case
– the par value 𝑏𝑡+1 (𝑠𝑡 ) = 𝑏̄ of government debt at time 𝑡 and Markov state 𝑠𝑡 is
constant across time and states, but ….
– the market value $\bar{b}/R_t(s_t)$ of government debt at time 𝑡 varies as a time-invariant
function of the Markov state 𝑠𝑡 .
– fluctuations in the interest rate make gross earnings on government debt $R_t(s_t)\,\bar{b}$
fully insure the gross-of-gross-interest-payments government budget against fluctuations
in government expenditures.
– the state variable 𝑥 in a recursive representation of a Ramsey plan is a time-
invariant function of the Markov state for 𝑡 ≥ 0.
• In this special case, the Ramsey allocation in the AMSS model agrees with that in a
[45] model in which the same amount of state-contingent debt falls due in all states to-
morrow
– it is a situation in which the Ramsey planner loses nothing from not being able to
purchase state-contingent debt and being restricted to exchange only risk-free debt.
• This outcome emerges only when we initialize government debt at a particular 𝑏0 < 0.
In a nutshell, the reason for this striking outcome is that at a particular level of risk-free gov-
ernment assets, fluctuations in the one-period risk-free interest rate provide the government
with complete insurance against stochastically varying government expenditures.

Let’s start with some imports:

In [2]: import matplotlib.pyplot as plt


%matplotlib inline
from scipy.optimize import fsolve, fmin

38.3 Forces at Work

The forces driving asymptotic outcomes here are examples of dynamics present in a more general class of incomplete markets models analyzed in [10] (BEGS).
BEGS provide conditions under which government debt under a Ramsey plan converges to an
invariant distribution.
BEGS construct approximations to that asymptotically invariant distribution of government
debt under a Ramsey plan.
BEGS also compute an approximation to a Ramsey plan’s rate of convergence to that limit-
ing invariant distribution.
We shall use the BEGS approximating limiting distribution and the approximating rate of
convergence to help interpret outcomes here.
For a long time, the Ramsey plan puts a nontrivial martingale-like component into the par
value of government debt as part of the way that the Ramsey plan imperfectly smooths dis-
tortions from the labor tax rate across time and Markov states.
But BEGS show that binding implementability constraints slowly push government debt in a direction designed to let the government use fluctuations in the equilibrium interest rate, rather than fluctuations in par values of debt, to insure against shocks to government expenditures.
• This is a weak (but unrelenting) force that, starting from an initial debt level, for a
long time is dominated by the stochastic martingale-like component of debt dynam-
ics that the Ramsey planner uses to facilitate imperfect tax-smoothing across time and
states.
• This weak force slowly drives the par value of government assets to a constant level
at which the government can completely insure against government expenditure shocks
while shutting down the stochastic component of debt dynamics.
• At that point, the tail of the par value of government debt becomes a trivial martingale:
it is constant over time.

38.4 Logical Flow of Lecture

We present ideas in the following order


• We describe a two-state AMSS economy and generate a long simulation starting from a
positive initial government debt.
• We observe that in a long simulation starting from positive government debt, the par
value of government debt eventually converges to a constant 𝑏̄.
• In fact, the par value of government debt converges to the same constant level 𝑏̄ for al-
ternative realizations of the Markov government expenditure process and for alternative
settings of initial government debt 𝑏0 .

• We reverse engineer a particular value of initial government debt 𝑏0 (it turns out to be
negative) for which the continuation debt moves to 𝑏̄ immediately.
• We note that for this particular initial debt 𝑏0 , the Ramsey allocations for the AMSS
economy and the Lucas-Stokey model are identical
– we verify that the LS Ramsey planner chooses to purchase identical claims to
time 𝑡 + 1 consumption for all Markov states tomorrow for each Markov state to-
day.
• We compute the BEGS approximations to check how accurately they describe the dy-
namics of the long-simulation.

38.4.1 Equations from Lucas-Stokey (1983) Model

Although we are studying an AMSS [3] economy, a Lucas-Stokey [45] economy plays an im-
portant role in the reverse-engineering calculation to be described below.
For that reason, it is helpful to have readily available some key equations underlying a Ram-
sey plan for the Lucas-Stokey economy.
Recall first-order conditions for a Ramsey allocation for the Lucas-Stokey economy.
For 𝑡 ≥ 1, these take the form

$$
\begin{aligned}
(1 + \Phi)\, u_c(c, 1 - c - g) &+ \Phi \big[ c\, u_{cc}(c, 1 - c - g) - (c + g)\, u_{\ell c}(c, 1 - c - g) \big] \\
= (1 + \Phi)\, u_\ell(c, 1 - c - g) &+ \Phi \big[ c\, u_{c\ell}(c, 1 - c - g) - (c + g)\, u_{\ell\ell}(c, 1 - c - g) \big]
\end{aligned} \tag{1}
$$

There is one such equation for each value of the Markov state 𝑠𝑡 .
In addition, given an initial Markov state, the time 𝑡 = 0 quantities 𝑐0 and 𝑏0 satisfy

$$
\begin{aligned}
(1 + \Phi)\, u_c(c, 1 - c - g) &+ \Phi \big[ c\, u_{cc}(c, 1 - c - g) - (c + g)\, u_{\ell c}(c, 1 - c - g) \big] \\
= (1 + \Phi)\, u_\ell(c, 1 - c - g) &+ \Phi \big[ c\, u_{c\ell}(c, 1 - c - g) - (c + g)\, u_{\ell\ell}(c, 1 - c - g) \big] + \Phi\, (u_{cc} - u_{c\ell})\, b_0
\end{aligned} \tag{2}
$$
In addition, the time 𝑡 = 0 budget constraint is satisfied at 𝑐0 and initial government debt 𝑏0 :

$$
b_0 + g_0 = \tau_0 (c_0 + g_0) + \frac{\bar{b}}{R_0} \tag{3}
$$

where 𝑅0 is the gross interest rate for the Markov state 𝑠0 that is assumed to prevail at time
𝑡 = 0 and 𝜏0 is the time 𝑡 = 0 tax rate.
In equation (3), it is understood that

$$
\tau_0 = 1 - \frac{u_{l,0}}{u_{c,0}}, \qquad
R_0^{-1} = \beta \sum_{s=1}^{S} \Pi(s|s_0)\, \frac{u_c(s)}{u_{c,0}}
$$
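For concreteness, here is a small sketch that evaluates these two formulas at illustrative, not calibrated, values:

import numpy as np

β = 0.9
Π_s0 = np.array([0.5, 0.5])     # transition probabilities Π(s|s_0)
u_c = np.array([4.0, 3.0])      # marginal utilities u_c(s), s = 1, ..., S
u_c0, u_l0 = 4.0, 1.0           # time 0 marginal utilities

τ0 = 1 - u_l0 / u_c0
R0 = 1 / (β * Π_s0 @ (u_c / u_c0))
print(τ0, R0)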

It is useful to transform some of the above equations to forms that are more natural for ana-
lyzing the case of a CRRA utility specification that we shall use in our example economies.

38.4.2 Specification with CRRA Utility

As in lectures optimal taxation without state-contingent debt and optimal taxation with
state-contingent debt, we assume that the representative agent has utility function

$$
u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}
$$

and set 𝜎 = 2, 𝛾 = 2, and the discount factor 𝛽 = 0.9.


We eliminate leisure from the model and continue to assume that

𝑐𝑡 + 𝑔𝑡 = 𝑛𝑡

The analysis of Lucas and Stokey prevails once we make the following replacements

$$
\begin{aligned}
u_\ell(c, \ell) &\sim -u_n(c, n) \\
u_c(c, \ell) &\sim u_c(c, n) \\
u_{\ell\ell}(c, \ell) &\sim u_{nn}(c, n) \\
u_{cc}(c, \ell) &\sim u_{cc}(c, n) \\
u_{c\ell}(c, \ell) &\sim 0
\end{aligned}
$$

With these understandings, equations (1) and (2) simplify in the case of the CRRA utility
function.
They become

(1 + Φ)[𝑢𝑐 (𝑐) + 𝑢𝑛 (𝑐 + 𝑔)] + Φ[𝑐𝑢𝑐𝑐 (𝑐) + (𝑐 + 𝑔)𝑢𝑛𝑛 (𝑐 + 𝑔)] = 0 (4)

and

(1 + Φ)[𝑢𝑐 (𝑐0 ) + 𝑢𝑛 (𝑐0 + 𝑔0 )] + Φ[𝑐0 𝑢𝑐𝑐 (𝑐0 ) + (𝑐0 + 𝑔0 )𝑢𝑛𝑛 (𝑐0 + 𝑔0 )] − Φ𝑢𝑐𝑐 (𝑐0 )𝑏0 = 0 (5)

In equation (4), it is understood that 𝑐 and 𝑔 are each functions of the Markov state 𝑠.
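As an illustration, equation (4) can be solved state by state for 𝑐 with a root finder: under the CRRA specification with 𝑛 = 𝑐 + 𝑔 it reduces to $(1+\Phi)[c^{-\sigma} - (c+g)^{\gamma}] - \Phi[\sigma c^{-\sigma} + \gamma (c+g)^{\gamma}] = 0$. The value of Φ below is an arbitrary placeholder, not a multiplier computed elsewhere in this lecture.

import numpy as np
from scipy.optimize import root

σ, γ, Φ = 2., 2., 0.1            # Φ is illustrative only
g = np.array([0.1, 0.2])

def foc(c):
    n = c + g
    return (1 + Φ) * (c**(-σ) - n**γ) - Φ * (σ * c**(-σ) + γ * n**γ)

c = root(foc, 0.5 * np.ones(2)).x
print(c)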
The CRRA utility function is represented in the following class.

In [3]: import numpy as np

class CRRAutility:

def __init__(self,
β=0.9,
σ=2,
γ=2,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):

self.β, self.σ, self.γ = β, σ, γ


self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

# Utility function
def U(self, c, n):
σ = self.σ
if σ == 1.:
U = np.log(c)
else:
U = (c**(1 - σ) - 1) / (1 - σ)
return U - n**(1 + self.γ) / (1 + self.γ)

# Derivatives of utility function


def Uc(self, c, n):
return c**(-self.σ)

def Ucc(self, c, n):


return -self.σ * c**(-self.σ - 1)

def Un(self, c, n):


return -n**self.γ

def Unn(self, c, n):


return -self.γ * n**(self.γ - 1)

38.5 Example Economy

We set the following parameter values.


The Markov state 𝑠𝑡 takes two values, namely, 0, 1.
The initial Markov state is 0.
The Markov transition matrix is .5 times a 2 × 2 matrix of ones, so the 𝑠𝑡 process is IID.
Government expenditures 𝑔(𝑠) equal .1 in Markov state 0 and .2 in Markov state 1.
We set preference parameters as follows:

𝛽 = .9
𝜎=2
𝛾=2

Here are several classes that do most of the work for us.
The code is mostly taken or adapted from the earlier lectures optimal taxation without state-
contingent debt and optimal taxation with state-contingent debt.

In [4]: import numpy as np


from scipy.optimize import root
from quantecon import MarkovChain

class SequentialAllocation:

'''
Class that takes CESutility or BGPutility object as input returns
planner's allocation as a function of the multiplier on the
implementability constraint μ.
'''

def __init__(self, model):

# Initialize from model object attributes


self.β, self.π, self.G = model.β, model.π, model.G
self.mc, self.Θ = MarkovChain(self.π), model.Θ
self.S = len(model.π) # Number of states
self.model = model

# Find the first best allocation


self.find_first_best()

def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Un = model.Uc, model.Un

def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])

res = root(res, 0.5 * np.ones(2 * S))

if not res.success:
raise Exception('Could not find first best')

self.cFB = res.x[:S]
self.nFB = res.x[S:]

# Multiplier on the resource constraint


self.ΞFB = Uc(self.cFB, self.nFB)
self.zFB = np.hstack([self.cFB, self.nFB, self.ΞFB])

def time1_allocation(self, μ):


'''
Computes optimal allocation for time t >= 1 for a given μ
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

def FOC(z):
c = z[:S]
n = z[S:2 * S]
Ξ = z[2 * S:]
# FOC of c
return np.hstack([Uc(c, n) - μ * (Ucc(c, n) * c + Uc(c, n)) - Ξ,
Un(c, n) - μ * (Unn(c, n) * n + Un(c, n)) \
+ Θ * Ξ, # FOC of n

Θ * n - c - G])

# Find the root of the first-order condition


res = root(FOC, self.zFB)
if not res.success:
raise Exception('Could not find LS allocation.')
z = res.x
c, n, Ξ = z[:S], z[S:2 * S], z[2 * S:]

# Compute x
I = Uc(c, n) * c + Un(c, n) * n
x = np.linalg.solve(np.eye(S) - self.β * self.π, I)

return c, n, x, Ξ

def time0_allocation(self, B_, s_0):


'''
Finds the optimal allocation given initial government debt B_ and
state s_0
'''
model, π, Θ, G, β = self.model, self.π, self.Θ, self.G, self.β
Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

# First order conditions of planner's problem


def FOC(z):
μ, c, n, Ξ = z
xprime = self.time1_allocation(μ)[2]
return np.hstack([Uc(c, n) * (c - B_) + Un(c, n) * n + β * π[s_0]
@ xprime,
Uc(c, n) - μ * (Ucc(c, n)
* (c - B_) + Uc(c, n)) - Ξ,
Un(c, n) - μ * (Unn(c, n) * n
+ Un(c, n)) + Θ[s_0] * Ξ,
(Θ * n - c - G)[s_0]])

# Find root
res = root(FOC, np.array(
[0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]]))
if not res.success:
raise Exception('Could not find time 0 LS allocation.')

return res.x

def time1_value(self, μ):


'''
Find the value associated with multiplier μ
'''
c, n, x, Ξ = self.time1_allocation(μ)
U = self.model.U(c, n)
V = np.linalg.solve(np.eye(self.S) - self.β * self.π, U)
return c, n, x, V

def Τ(self, c, n):


'''
Computes Τ given c, n
'''
model = self.model
Uc, Un = model.Uc(c, n), model.Un(c, n)

return 1 + Un / (self.Θ * Uc)

def simulate(self, B_, s_0, T, sHist=None):


'''
Simulates planners policies for T periods
'''
model, π, β = self.model, self.π, self.β
Uc = model.Uc

if sHist is None:
sHist = self.mc.simulate(T, s_0)

cHist, nHist, Bhist, ΤHist, μHist = np.zeros((5, T))


RHist = np.zeros(T - 1)

# Time 0
μ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0)
ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
μHist[0] = μ

# Time 1 onward
for t in range(1, T):
c, n, x, Ξ = self.time1_allocation(μ)
Τ = self.Τ(c, n)
u_c = Uc(c, n)
s = sHist[t]
Eu_c = π[sHist[t - 1]] @ u_c
cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x[s] / u_c[s], Τ[s]
RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c)
μHist[t] = μ

return np.array([cHist, nHist, Bhist, ΤHist, sHist, μHist, RHist])

In [5]: import numpy as np


from scipy.optimize import fmin_slsqp
from scipy.optimize import root
from quantecon import MarkovChain

class RecursiveAllocationAMSS:

def __init__(self, model, μgrid, tol_diff=1e-4, tol=1e-4):

self.β, self.π, self.G = model.β, model.π, model.G


self.mc, self.S = MarkovChain(self.π), len(model.π)  # Number of states
self.Θ, self.model, self.μgrid = model.Θ, model, μgrid
self.tol_diff, self.tol = tol_diff, tol

# Find the first best allocation


self.solve_time1_bellman()
self.T.time_0 = True # Bellman equation now solves time 0 problem

def solve_time1_bellman(self):

'''
Solve the time 1 Bellman equation for calibration model and
initial grid μgrid0
'''
model, μgrid0 = self.model, self.μgrid
π = model.π
S = len(model.π)

# First get initial fit from Lucas Stokey solution.


# Need to change things to be ex ante
pp = SequentialAllocation(model)
interp = interpolator_factory(2, None)

def incomplete_allocation(μ_, s_):


c, n, x, V = pp.time1_value(μ_)
return c, n, π[s_] @ x, π[s_] @ V
cf, nf, xgrid, Vf, xprimef = [], [], [], [], []
for s_ in range(S):
c, n, x, V = zip(*map(lambda μ: incomplete_allocation(μ, s_), μgrid0))
c, n = np.vstack(c).T, np.vstack(n).T
x, V = np.hstack(x), np.hstack(V)
xprimes = np.vstack([x] * S)
cf.append(interp(x, c))
nf.append(interp(x, n))
Vf.append(interp(x, V))
xgrid.append(x)
xprimef.append(interp(x, xprimes))
cf, nf, xprimef = fun_vstack(cf), fun_vstack(nf), fun_vstack(xprimef)
Vf = fun_hstack(Vf)
policies = [cf, nf, xprimef]

# Create xgrid
x = np.vstack(xgrid).T
xbar = [x.min(0).max(), x.max(0).min()]
xgrid = np.linspace(xbar[0], xbar[1], len(μgrid0))
self.xgrid = xgrid

# Now iterate on Bellman equation


T = BellmanEquation(model, xgrid, policies, tol=self.tol)
diff = 1
while diff > self.tol_diff:
PF = T(Vf)

Vfnew, policies = self.fit_policy_function(PF)


diff = np.abs((Vf(xgrid) - Vfnew(xgrid)) / Vf(xgrid)).max()

print(diff)
Vf = Vfnew

# Store value function policies and Bellman Equations


self.Vf = Vf
self.policies = policies
self.T = T

def fit_policy_function(self, PF):


'''
Fits the policy functions

'''
S, xgrid = len(self.π), self.xgrid
interp = interpolator_factory(3, 0)
cf, nf, xprimef, Tf, Vf = [], [], [], [], []
for s_ in range(S):
PFvec = np.vstack([PF(x, s_) for x in self.xgrid]).T
Vf.append(interp(xgrid, PFvec[0, :]))
cf.append(interp(xgrid, PFvec[1:1 + S]))
nf.append(interp(xgrid, PFvec[1 + S:1 + 2 * S]))
xprimef.append(interp(xgrid, PFvec[1 + 2 * S:1 + 3 * S]))
Tf.append(interp(xgrid, PFvec[1 + 3 * S:]))
policies = fun_vstack(cf), fun_vstack(
nf), fun_vstack(xprimef), fun_vstack(Tf)
Vf = fun_hstack(Vf)
return Vf, policies

def Τ(self, c, n):


'''
Computes Τ given c and n
'''
model = self.model
Uc, Un = model.Uc(c, n), model.Un(c, n)

return 1 + Un / (self.Θ * Uc)

def time0_allocation(self, B_, s0):


'''
Finds the optimal allocation given initial government debt B_ and
state s_0
'''
PF = self.T(self.Vf)
z0 = PF(B_, s0)
c0, n0, xprime0, T0 = z0[1:]
return c0, n0, xprime0, T0

def simulate(self, B_, s_0, T, sHist=None):


'''
Simulates planners policies for T periods
'''
model, π = self.model, self.π
Uc = model.Uc
cf, nf, xprimef, Tf = self.policies

if sHist is None:
sHist = simulate_markov(π, s_0, T)

cHist, nHist, Bhist, xHist, ΤHist, THist, μHist = np.zeros((7, T))


# Time 0
cHist[0], nHist[0], xHist[0], THist[0] = self.time0_allocation(B_, s_0)
ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
μHist[0] = self.Vf[s_0](xHist[0])

# Time 1 onward
for t in range(1, T):
s_, x, s = sHist[t - 1], xHist[t - 1], sHist[t]
c, n, xprime, T = cf[s_, :](x), nf[s_, :](x), xprimef[s_, :](x), Tf[s_, :](x)

Τ = self.Τ(c, n)[s]
u_c = Uc(c, n)
Eu_c = π[s_, :] @ u_c

μHist[t] = self.Vf[s](xprime[s])

cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x / Eu_c, Τ


xHist[t], THist[t] = xprime[s], T[s]
return np.array([cHist, nHist, Bhist, ΤHist, THist, μHist, sHist, xHist])

class BellmanEquation:
'''
Bellman equation for the continuation of the Lucas-Stokey Problem
'''

def __init__(self, model, xgrid, policies0, tol, maxiter=1000):

self.β, self.π, self.G = model.β, model.π, model.G


self.S = len(model.π) # Number of states
self.Θ, self.model, self.tol = model.Θ, model, tol
self.maxiter = maxiter

self.xbar = [min(xgrid), max(xgrid)]


self.time_0 = False

self.z0 = {}
cf, nf, xprimef = policies0

for s_ in range(self.S):
for x in xgrid:
self.z0[x, s_] = np.hstack([cf[s_, :](x),
nf[s_, :](x),
xprimef[s_, :](x),
np.zeros(self.S)])

self.find_first_best()

def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G

def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])

res = root(res, 0.5 * np.ones(2 * S))


if not res.success:
raise Exception('Could not find first best')

self.cFB = res.x[:S]

self.nFB = res.x[S:]
IFB = Uc(self.cFB, self.nFB) * self.cFB + \
Un(self.cFB, self.nFB) * self.nFB

self.xFB = np.linalg.solve(np.eye(S) - self.β * self.π, IFB)

self.zFB = {}
for s in range(S):
self.zFB[s] = np.hstack(
[self.cFB[s], self.nFB[s], self.π[s] @ self.xFB, 0.])

def __call__(self, Vf):


'''
Given continuation value function next period, return value function
this period T(V) and optimal policies
'''
if not self.time_0:
def PF(x, s): return self.get_policies_time1(x, s, Vf)
else:
def PF(B_, s0): return self.get_policies_time0(B_, s0, Vf)
return PF

def get_policies_time1(self, x, s_, Vf):


'''
Finds the optimal policies
'''
model, β, Θ, G, S, π = self.model, self.β, self.Θ, self.G, self.S, self.π
U, Uc, Un = model.U, model.Uc, model.Un

def objf(z):
c, n, xprime = z[:S], z[S:2 * S], z[2 * S:3 * S]

Vprime = np.empty(S)
for s in range(S):
Vprime[s] = Vf[s](xprime[s])

return -π[s_] @ (U(c, n) + β * Vprime)

def cons(z):
c, n, xprime, T = z[:S], z[S:2 * S], z[2 * S:3 * S], z[3 * S:]
u_c = Uc(c, n)
Eu_c = π[s_] @ u_c
return np.hstack([
x * u_c / Eu_c - u_c * (c - T) - Un(c, n) * n - β * xprime,
Θ * n - c - G])

if model.transfers:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 100.)] * S
else:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 0.)] * S
out, fx, _, imode, smode = fmin_slsqp(objf, self.z0[x, s_],
f_eqcons=cons, bounds=bounds,
full_output=True, iprint=0,
acc=self.tol, iter=self.maxiter)

if imode > 0:
raise Exception(smode)

self.z0[x, s_] = out


return np.hstack([-fx, out])

def get_policies_time0(self, B_, s0, Vf):


'''
Finds the optimal policies
'''
model, β, Θ, G = self.model, self.β, self.Θ, self.G
U, Uc, Un = model.U, model.Uc, model.Un

def objf(z):
c, n, xprime = z[:-1]

return -(U(c, n) + β * Vf[s0](xprime))

def cons(z):
c, n, xprime, T = z
return np.hstack([
-Uc(c, n) * (c - B_ - T) - Un(c, n) * n - β * xprime,
(Θ * n - c - G)[s0]])

if model.transfers:
bounds = [(0., 100), (0., 100), self.xbar, (0., 100.)]
else:
bounds = [(0., 100), (0., 100), self.xbar, (0., 0.)]
out, fx, _, imode, smode = fmin_slsqp(objf, self.zFB[s0], f_eqcons=cons,
bounds=bounds, full_output=True,
iprint=0)

if imode > 0:
raise Exception(smode)

return np.hstack([-fx, out])

In [6]: import numpy as np


from scipy.interpolate import UnivariateSpline

class interpolate_wrapper:

def __init__(self, F):


self.F = F

def __getitem__(self, index):


return interpolate_wrapper(np.asarray(self.F[index]))

def reshape(self, *args):


self.F = self.F.reshape(*args)
return self

def transpose(self):
self.F = self.F.transpose()

def __len__(self):
return len(self.F)

def __call__(self, xvec):


x = np.atleast_1d(xvec)
shape = self.F.shape
if len(x) == 1:
fhat = np.hstack([f(x) for f in self.F.flatten()])
return fhat.reshape(shape)
else:
fhat = np.vstack([f(x) for f in self.F.flatten()])
return fhat.reshape(np.hstack((shape, len(x))))

class interpolator_factory:

def __init__(self, k, s):


self.k, self.s = k, s

def __call__(self, xgrid, Fs):


shape, m = Fs.shape[:-1], Fs.shape[-1]
Fs = Fs.reshape((-1, m))
F = []
xgrid = np.sort(xgrid) # Sort xgrid
for Fhat in Fs:
F.append(UnivariateSpline(xgrid, Fhat, k=self.k, s=self.s))
return interpolate_wrapper(np.array(F).reshape(shape))

def fun_vstack(fun_list):

Fs = [IW.F for IW in fun_list]


return interpolate_wrapper(np.vstack(Fs))

def fun_hstack(fun_list):

Fs = [IW.F for IW in fun_list]


return interpolate_wrapper(np.hstack(Fs))

def simulate_markov(π, s_0, T):

sHist = np.empty(T, dtype=int)


sHist[0] = s_0
S = len(π)
for t in range(1, T):
sHist[t] = np.random.choice(np.arange(S), p=π[sHist[t - 1]])

return sHist
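
To fix ideas, here is a small usage sketch of two of these utilities; the transition matrix and the grid below are made up purely for illustration:

# A hypothetical symmetric two-state chain, just to exercise simulate_markov
π_demo = np.array([[0.9, 0.1],
                   [0.1, 0.9]])
print(simulate_markov(π_demo, 0, 10))  # a length-10 array of 0s and 1s

# Fit cubic splines to values of x**2 on a grid and evaluate them off-grid
interp_demo = interpolator_factory(3, 0)
xgrid_demo = np.linspace(0, 1, 11)
f_demo = interp_demo(xgrid_demo, xgrid_demo[np.newaxis, :]**2)
print(f_demo(np.array([0.25, 0.5])))  # approximately [[0.0625 0.25]]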

38.6 Reverse Engineering Strategy

We can reverse engineer a value 𝑏0 of initial debt due that renders the AMSS measurability
constraints not binding from time 𝑡 = 0 onward.

We accomplish this by recognizing that if the AMSS measurability constraints never bind, then the AMSS allocation and Ramsey plan are equivalent to those for a Lucas-Stokey economy in which, for each period 𝑡 ≥ 0, the government promises to pay the same state-contingent amount 𝑏̄ in each state tomorrow.
This insight tells us to find a 𝑏0 and other fundamentals for the Lucas-Stokey [45] model that
make the Ramsey planner want to borrow the same value 𝑏̄ next period for all states and all
dates.
We accomplish this by using various equations for the Lucas-Stokey [45] model presented in
optimal taxation with state-contingent debt.
We use the following steps.
Step 1: Pick an initial Φ.
Step 2: Given that Φ, jointly solve two versions of equation (4) for 𝑐(𝑠), 𝑠 = 1, 2 associated
with the two values for 𝑔(𝑠), 𝑠 = 1, 2.
Step 3: Solve the following equation for 𝑥⃗

$$\vec{x} = (I - \beta\Pi)^{-1}\left[\vec{u}_c(\vec{n} - \vec{g}) - \vec{u}_l \vec{n}\right] \qquad (6)$$

Step 4: After solving for 𝑥,⃗ we can find 𝑏(𝑠𝑡 |𝑠𝑡−1 ) in Markov state 𝑠𝑡 = 𝑠 from 𝑏(𝑠) = 𝑥(𝑠)/𝑢𝑐 (𝑠) or
the matrix equation

$$\vec{b} = \frac{\vec{x}}{\vec{u}_c} \qquad (7)$$

Step 5: Compute 𝐽 (Φ) = (𝑏(1) − 𝑏(2))2 .


Step 6: Put steps 2 through 5 in a function minimizer and find a Φ that minimizes 𝐽 (Φ).
Step 7: At the value of Φ and the value of 𝑏̄ that emerged from step 6, solve equations (5)
and (3) jointly for 𝑐0 , 𝑏0 .

38.7 Code for Reverse Engineering

Here is code to do the calculations for us.

In [7]: u = CRRAutility()

def min_Φ(Φ):

g1, g2 = u.G # Government spending in s=0 and s=1

# Solve Φ(c)
def equations(unknowns, Φ):
c1, c2 = unknowns
# First argument of .Uc and second argument of .Un are redundant

# Set up simultaneous equations


eq = lambda c, g: (1 + Φ) * (u.Uc(c, 1) - -u.Un(1, c + g)) + \
Φ * ((c + g) * u.Unn(1, c + g) + c * u.Ucc(c, 1))

# Return equation evaluated at s=1 and s=2


return np.array([eq(c1, g1), eq(c2, g2)]).flatten()

global c1 # Update c1 globally


global c2 # Update c2 globally

c1, c2 = fsolve(equations, np.ones(2), args=(Φ))

uc = u.Uc(np.array([c1, c2]), 1) # uc(n - g)


# ul(s) * n(s), where ul(n) = -un(c + g) and n = c + g
ul = -u.Un(1, np.array([c1 + g1, c2 + g2])) * [c1 + g1, c2 + g2]
# Solve for x
x = np.linalg.solve(np.eye((2)) - u.β * u.π, uc * [c1, c2] - ul)

global b # Update b globally


b = x / uc
loss = (b[0] - b[1])**2

return loss

Φ_star = fmin(min_Φ, .1, ftol=1e-14)

Optimization terminated successfully.
Current function value: 0.000000
Iterations: 24
Function evaluations: 48

To recover and print out 𝑏̄

In [8]: b_bar = b[0]


b_bar

Out[8]: -1.0757576567504166

To complete the reverse engineering exercise by jointly determining 𝑐0 , 𝑏0 , we set up a function that returns two simultaneous equations.

In [9]: def solve_cb(unknowns, Φ, b_bar, s=1):

c0, b0 = unknowns

g0 = u.G[s-1]

R_0 = u.β * u.π[s] @ [u.Uc(c1, 1) / u.Uc(c0, 1), u.Uc(c2, 1) / u.Uc(c0, 1)]
R_0 = 1 / R_0

τ_0 = 1 + u.Un(1, c0 + g0) / u.Uc(c0, 1)

eq1 = τ_0 * (c0 + g0) + b_bar / R_0 - b0 - g0


eq2 = (1 + Φ) * (u.Uc(c0, 1) + u.Un(1, c0 + g0)) \
+ Φ * (c0 * u.Ucc(c0, 1) + (c0 + g0) * u.Unn(1, c0 + g0)) \
- Φ * u.Ucc(c0, 1) * b0

return np.array([eq1, eq2], dtype='float64')



To solve the equations for 𝑐0 , 𝑏0 , we use SciPy’s fsolve function

In [10]: c0, b0 = fsolve(solve_cb, np.array([1., -1.], dtype='float64'),
args=(Φ_star, b[0], 1), xtol=1.0e-12)
c0, b0

Out[10]: (0.9344994030900681, -1.0386984075517638)

Thus, we have reverse engineered an initial 𝑏0 = −1.038698407551764 that ought to render the AMSS measurability constraints slack.

38.8 Short Simulation for Reverse-Engineered Initial Debt

The following graph shows simulations of outcomes for both a Lucas-Stokey economy and for
an AMSS economy starting from initial government debt equal to 𝑏0 = −1.038698407551764.
These graphs report outcomes for both the Lucas-Stokey economy with complete markets and
the AMSS economy with one-period risk-free debt only.

In [11]: μ_grid = np.linspace(-0.09, 0.1, 100)

log_example = CRRAutility()

log_example.transfers = True  # Government can use transfers
log_sequential = SequentialAllocation(log_example)  # Solve sequential problem

log_bellman = RecursiveAllocationAMSS(log_example, μ_grid,
tol_diff=1e-10, tol=1e-12)

T = 20
sHist = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
0, 0, 0, 1, 1, 1, 1, 1, 1, 0])

sim_seq = log_sequential.simulate(-1.03869841, 0, T, sHist)


sim_bel = log_bellman.simulate(-1.03869841, 0, T, sHist)

titles = ['Consumption', 'Labor Supply', 'Government Debt',
'Tax Rate', 'Government Spending', 'Output']

# Government spending paths


sim_seq[4] = log_example.G[sHist]
sim_bel[4] = log_example.G[sHist]

# Output paths
sim_seq[5] = log_example.Θ[sHist] * sim_seq[1]
sim_bel[5] = log_example.Θ[sHist] * sim_bel[1]

fig, axes = plt.subplots(3, 2, figsize=(14, 10))

for ax, title, seq, bel in zip(axes.flatten(), titles, sim_seq, sim_bel):
ax.plot(seq, '-ok', bel, '-^b')
ax.set(title=title)
ax.grid()

axes[0, 0].legend(('Complete Markets', 'Incomplete Markets'))


plt.tight_layout()
plt.show()

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:24:
RuntimeWarning: divide by zero encountered in reciprocal
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:29:
RuntimeWarning: divide by zero encountered in power
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:235:
RuntimeWarning: invalid value encountered in true_divide
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:235:
RuntimeWarning: invalid value encountered in multiply

0.04094445433234758
0.0016732111459339745
0.0014846748487546482
0.001313772137599195
0.0011814037134986897
0.0010559653362837158
0.000944666164618918
0.0008463807322943287
0.000756045378088178
0.0006756001035988462
0.0006041528458906972
0.0005396004512409242
0.00048207169116453786
0.00043082732110620906
0.00038481851369246
0.000343835217568062
0.00030724369371399235
0.00027450091482315945
0.0002453177340412433
0.00021923324305267807
0.00019593539447118835
0.0001751430351430761
0.00015655939835776934
0.00013996737140588777
0.00012514457833844074
0.00011190070778883692
0.00010007020224514272
8.949728533990885e-05
8.004975220692396e-05
7.160590590101371e-05
6.405836568393554e-05
5.73116243162071e-05
5.127968193826674e-05
4.588652975207788e-05
4.106387898563657e-05
3.675099365124273e-05
3.2893618376286996e-05
2.944328931223214e-05
2.6356787971151164e-05
2.359548413231486e-05
2.1124903956953046e-05
1.891424711298374e-05
1.6936003233347247e-05
1.5165596596169234e-05
1.3581066972912192e-05
1.2162792581986366e-05

1.0893236146181244e-05
9.756722931832945e-06
8.739241409324743e-06
7.828263804653913e-06
7.0125911850068424e-06
6.282205801318511e-06
5.62815236583749e-06
5.042417987425145e-06
4.517838223630651e-06
4.048002237180867e-06
3.6271748763777328e-06
3.250224118941765e-06
2.9125602167042994e-06
2.610073041495014e-06
2.339085580071665e-06
2.0963063869322354e-06
1.8787901179549775e-06
1.6838998667960153e-06
1.5092748641321522e-06
1.3528010669409543e-06
1.2125872138711638e-06
1.0869380807717548e-06
9.743372900056795e-07
8.734263728953424e-07
7.829878128713227e-07
7.019327102136164e-07
6.292854887723012e-07
5.641704103964213e-07
5.05805980826381e-07
4.534906606766279e-07
4.065963117262076e-07
3.645593901929526e-07
3.268761722879906e-07
2.930942757357439e-07
2.628095472934143e-07
2.3565911598562037e-07
2.1131781401175827e-07
1.8949465050450541e-07
1.6992856583370274e-07
1.523858167378468e-07
1.3665661547374008e-07
1.225532301127692e-07
1.0990775357155176e-07
9.856851260491467e-08
8.84008342886199e-08
7.928330281081605e-08
7.110736563127017e-08
6.377565665571852e-08
5.720079605197245e-08
5.130458272754692e-08
4.6016888702596e-08
4.127515483242469e-08
3.702257247069187e-08
3.320865120240692e-08
2.9788048716958826e-08
2.6720158225836068e-08
2.396860968686211e-08
2.1500639421803905e-08
1.928708646845264e-08
1.7301651279556726e-08
1.5520794543062754e-08
1.3923433290954976e-08
1.2490620491873631e-08

1.1205398842857526e-08
1.0052543817376429e-08
9.01840911183742e-09
8.090757140720007e-09
7.258610741512252e-09
6.512129740354109e-09
5.842483345173673e-09
5.241757135254373e-09
4.7028515320266936e-09
4.21939678573495e-09
3.785682913321146e-09
3.3965877239178474e-09
3.047516492385927e-09
2.7343477176468456e-09
2.4534585050772063e-09
2.201332357134711e-09
1.975175303777501e-09
1.7722973381771031e-09
1.5902346344748702e-09
1.426911056019362e-09
1.2803676421479776e-09
1.148883261270849e-09
1.0309146320457128e-09
9.250646823766203e-10
8.3009315437256e-10
7.44877140017638e-10
6.684161228736477e-10
5.998081604009871e-10
5.382481454439933e-10
4.830093779533069e-10
4.334435649873374e-10
3.889679556881507e-10
3.4905856106637615e-10
3.1324650311641123e-10
2.8111123460099345e-10
2.522746280941152e-10
2.2639802050263124e-10
2.0317699751577044e-10
1.8233965508845813e-10
1.6364086233563025e-10
1.4685965375699284e-10
1.318020108642606e-10
1.1828799715591823e-10
1.0616010299861455e-10
9.52773346696413e-11

The Ramsey allocations and Ramsey outcomes are identical for the Lucas-Stokey and AMSS
economies.
This outcome confirms the success of our reverse-engineering exercises.
Notice how for 𝑡 ≥ 1, the tax rate is constant, as is the par value of government debt.
However, output and labor supply are both nontrivial time-invariant functions of the Markov
state.
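
We can check these claims directly from the simulated series. The following diagnostic sketch assumes the arrays sim_bel and sHist from the cell above are still in memory:

# Spreads of the tax rate and the par value of debt for t >= 1;
# both should be zero up to numerical tolerance
print(np.ptp(sim_bel[3][1:]), np.ptp(sim_bel[2][1:]))

# Labor supply for t >= 1 should depend only on the current Markov state,
# so its spread within each state should also be (numerically) zero
for state in (0, 1):
    print(state, np.ptp(sim_bel[1][1:][sHist[1:] == state]))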

38.9 Long Simulation

The following graph shows the par value of government debt and the flat rate tax on labor
income for a long simulation for our sample economy.
For the same realization of a government expenditure path, the graph reports outcomes for
two economies
• the gray lines are for the Lucas-Stokey economy with complete markets
• the blue lines are for the AMSS economy with risk-free one-period debt only
For both economies, initial government debt due at time 0 is 𝑏0 = .5.
For the Lucas-Stokey complete markets economy, the government debt plotted is 𝑏𝑡+1 (𝑠𝑡+1 ).
• Notice that this is a time-invariant function of the Markov state from the beginning.
For the AMSS incomplete markets economy, the government debt plotted is 𝑏𝑡+1 (𝑠𝑡 ).
• Notice that this is a martingale-like random process that eventually seems to converge
to a constant 𝑏̄ ≈ −1.07.

• Notice that the limiting value 𝑏̄ < 0 so that asymptotically the government makes a
constant level of risk-free loans to the public.
• In the simulation displayed, as well as in other simulations we have run, the par value of government debt converges to about −1.07 after between 1400 and 2000 periods.
For the AMSS incomplete markets economy, the marginal tax rate on labor income 𝜏𝑡 converges to a constant
• labor supply and output each converge to time-invariant functions of the Markov state

In [12]: T = 2000 # Set T to 2000 periods

sim_seq_long = log_sequential.simulate(0.5, 0, T)
sHist_long = sim_seq_long[-3]
sim_bel_long = log_bellman.simulate(0.5, 0, T, sHist_long)

titles = ['Government Debt', 'Tax Rate']

fig, axes = plt.subplots(2, 1, figsize=(14, 10))

for ax, title, id in zip(axes.flatten(), titles, [2, 3]):


ax.plot(sim_seq_long[id], '-k', sim_bel_long[id], '-.b', alpha=0.5)
ax.set(title=title)
ax.grid()

axes[0].legend(('Complete Markets', 'Incomplete Markets'))


plt.tight_layout()
plt.show()
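
As a rough numerical check on the convergence claim, we can compare the average par value of debt over the tail of the incomplete-markets simulation with 𝑏̄. This is only a diagnostic sketch; it assumes the arrays from the cell above and that the series has settled down by the end of the sample:

# Tail average of the par value of government debt in the AMSS economy;
# should be close to b_bar ≈ -1.07 if the series has converged
print(sim_bel_long[2][-100:].mean())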

38.9.1 Remarks about Long Simulation

As remarked above, after 𝑏𝑡+1 (𝑠𝑡 ) has converged to a constant, the measurability constraints
in the AMSS model cease to bind
• the associated Lagrange multipliers on those implementability constraints converge to
zero
This leads us to seek an initial value of government debt 𝑏0 that renders the measurability constraints slack from time 𝑡 = 0 onward
• a tell-tale sign of this situation is that the Ramsey planner in a corresponding Lucas-Stokey economy would instruct the government to issue a constant level of government debt 𝑏𝑡+1 (𝑠𝑡+1 ) across the two Markov states
We now describe how to find such an initial level of government debt.

38.10 BEGS Approximations of Limiting Debt and Convergence Rate

It is useful to link the outcome of our reverse engineering exercise to limiting approximations
constructed by [10].
[10] used a slightly different notation to represent a generalization of the AMSS model.
We’ll introduce a version of their notation so that readers can quickly relate notation that
appears in their key formulas to the notation that we have used.
BEGS work with objects 𝐵𝑡 , ℬ𝑡 , ℛ𝑡 , 𝒳𝑡 that are related to our notation by

$$
\begin{aligned}
\mathcal{R}_t &= R_{t-1} \frac{u_{c,t}}{u_{c,t-1}} = \frac{u_{c,t}}{\beta E_{t-1} u_{c,t}} \\
B_t &= \frac{b_{t+1}(s^t)}{R_t(s^t)} \\
b_t(s^{t-1}) &= \mathcal{R}_{t-1} B_{t-1} \\
\mathcal{B}_t &= u_{c,t} B_t = (\beta E_t u_{c,t+1}) b_{t+1}(s^t) \\
\mathcal{X}_t &= u_{c,t} [g_t - \tau_t n_t]
\end{aligned}
$$

In terms of their notation, equation (44) of [10] expresses the time 𝑡, state 𝑠 government budget constraint as

$$\mathcal{B}(s) = \mathcal{R}_\tau(s, s_-)\mathcal{B}_- + \mathcal{X}_\tau(s) \qquad (8)$$

where the dependence on 𝜏 is to remind us that these objects depend on the tax rate and 𝑠−
is last period’s Markov state.
BEGS interpret random variations in the right side of (8) as a measure of fiscal risk composed of
• interest-rate-driven fluctuations in time 𝑡 effective payments due on the government
portfolio, namely, ℛ𝜏 (𝑠, 𝑠− )ℬ− , and
• fluctuations in the effective government deficit 𝒳𝑡

38.10.1 Asymptotic Mean

BEGS give conditions under which the ergodic mean of ℬ𝑡 is

$$\mathcal{B}^* = -\frac{\operatorname{cov}^\infty(\mathcal{R}, \mathcal{X})}{\operatorname{var}^\infty(\mathcal{R})} \qquad (9)$$

where the superscript ∞ denotes a moment taken with respect to an ergodic distribution.
Formula (9) presents ℬ∗ as a regression coefficient of 𝒳𝑡 on ℛ𝑡 in the ergodic distribution.
This regression coefficient emerges as the minimizer for a variance-minimization problem:

ℬ∗ = argminℬ var(ℛℬ + 𝒳) (10)

The minimand in criterion (10) is the measure of fiscal risk associated with a given tax-debt
policy that appears on the right side of equation (8).
Expressing formula (9) in terms of our notation tells us that 𝑏̄ should approximately equal

$$\hat{b} = \frac{\mathcal{B}^*}{\beta E_t u_{c,t+1}} \qquad (11)$$

38.10.2 Rate of Convergence

BEGS also derive the following approximation to the rate of convergence to ℬ∗ from an arbi-
trary initial condition.

$$\frac{E_t(\mathcal{B}_{t+1} - \mathcal{B}^*)}{\mathcal{B}_t - \mathcal{B}^*} \approx \frac{1}{1 + \beta^2 \operatorname{var}(\mathcal{R})} \qquad (12)$$

(See the equation above equation (47) in [10])
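
To get a feel for formula (12), here is a tiny illustrative calculation of the implied per-period reversion factor; the value of var(ℛ) below is hypothetical, and the actual value for our economy is computed in the next subsection:

β_demo = 0.9        # the discount factor used in this lecture's economy
varR_demo = 0.003   # hypothetical var(R), purely for illustration
print(1 / (1 + β_demo**2 * varR_demo))  # close to 1, i.e., slow mean reversion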

38.10.3 Formulas and Code Details

For our example, we describe some code that we use to compute the steady state mean and
the rate of convergence to it.
The values of 𝜋(𝑠) are 0.5, 0.5.
We can then construct 𝒳(𝑠), ℛ(𝑠), 𝑢𝑐 (𝑠) for our two states using the definitions above.
We can then construct 𝛽𝐸𝑡−1 𝑢𝑐 = 𝛽 ∑𝑠 𝑢𝑐 (𝑠)𝜋(𝑠), cov(ℛ(𝑠), 𝒳(𝑠)) and var(ℛ(𝑠)) to be
plugged into formula (11).
We also want to compute var(𝒳).
To compute the variances and covariance, we use the following standard formulas.
Temporarily let 𝑥(𝑠), 𝑠 = 1, 2 be an arbitrary random variable.
Then we define

$$
\begin{aligned}
\mu_x &= \sum_s x(s)\pi(s) \\
\operatorname{var}(x) &= \left(\sum_s x(s)^2 \pi(s)\right) - \mu_x^2 \\
\operatorname{cov}(x, y) &= \left(\sum_s x(s)y(s)\pi(s)\right) - \mu_x \mu_y
\end{aligned}
$$

After we compute these moments, we compute the BEGS approximation to the asymptotic
mean 𝑏̂ in formula (11).
After that, we move on to compute ℬ∗ in formula (9).
We'll also evaluate the BEGS criterion (10) at the limiting value ℬ∗

$$J(\mathcal{B}^*) = \operatorname{var}(\mathcal{R})\,(\mathcal{B}^*)^2 + 2\mathcal{B}^* \operatorname{cov}(\mathcal{R}, \mathcal{X}) + \operatorname{var}(\mathcal{X}) \qquad (13)$$

Here are some functions that we’ll use to compute key objects that we want

In [13]: def mean(x):


'''Returns mean for x given initial state'''
x = np.array(x)
return x @ u.π[s]

def variance(x):
x = np.array(x)
return x**2 @ u.π[s] - mean(x)**2

def covariance(x, y):


x, y = np.array(x), np.array(y)
return x * y @ u.π[s] - mean(x) * mean(y)

Now let’s form the two random variables ℛ, 𝒳 appearing in the BEGS approximating formu-
las

In [14]: u = CRRAutility()

s = 0
c = [0.940580824225584, 0.8943592757759343] # Vector for c
g = u.G # Vector for g
n = c + g # Labor supply
τ = lambda s: 1 + u.Un(1, n[s]) / u.Uc(c[s], 1)

R_s = lambda s: u.Uc(c[s], n[s]) / (u.β * (u.Uc(c[0], n[0]) * u.π[0, 0]
+ u.Uc(c[1], n[1]) * u.π[1, 0]))
X_s = lambda s: u.Uc(c[s], n[s]) * (g[s] - τ(s) * n[s])

R = [R_s(0), R_s(1)]
X = [X_s(0), X_s(1)]

print(f"R, X = {R}, {X}")

R, X = [1.055169547122964, 1.1670526750992583], [0.06357685646224803,
0.19251010100512958]

Now let’s compute the ingredient of the approximating limit and the approximating rate of
convergence

In [15]: bstar = -covariance(R, X) / variance(R)


div = u.β * (u.Uc(c[0], n[0]) * u.π[s, 0] + u.Uc(c[1], n[1]) * u.π[s, 1])
bhat = bstar / div
bhat

Out[15]: -1.0757585378303758

Print out 𝑏̂ and 𝑏̄

In [16]: bhat, b_bar

Out[16]: (-1.0757585378303758, -1.0757576567504166)

So we have

In [17]: bhat - b_bar

Out[17]: -8.810799592140484e-07

These outcomes show that 𝑏̂ does a remarkably good job of approximating 𝑏̄.
Next, let’s compute the BEGS fiscal criterion that 𝑏̂ is minimizing

In [18]: Jmin = variance(R) * bstar**2 + 2 * bstar * covariance(R, X) + variance(X)


Jmin

Out[18]: -9.020562075079397e-17

This is machine zero, a verification that 𝑏̂ succeeds in minimizing the nonnegative fiscal cost
criterion 𝐽 (ℬ∗ ) defined in BEGS and in equation (13) above.
Let’s push our luck and compute the mean reversion speed in the formula above equation
(47) in [10].

In [19]: den2 = 1 + (u.β**2) * variance(R)


speedrever = 1/den2
print(f'Mean reversion speed = {speedrever}')

Mean reversion speed = 0.9974715478249827

Now let’s compute the implied meantime to get to within 0.01 of the limit

In [20]: ttime = np.log(.01) / np.log(speedrever)


print(f"Time to get within .01 of limit = {ttime}")

Time to get within .01 of limit = 1819.0360880098472

The slow rate of convergence and the implied time to get within 0.01 of the limiting value do a good job of approximating our long simulation above.
Chapter 39

Fiscal Risk and Government Debt

39.1 Contents

• Overview 39.2
• The Economy 39.3
• Long Simulation 39.4
• Asymptotic Mean and Rate of Convergence 39.5
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install --upgrade quantecon

39.2 Overview

This lecture studies government debt in an AMSS economy [3] of the type described in Optimal Taxation without State-Contingent Debt.
We study the behavior of government debt as time 𝑡 → +∞.
We use these techniques

• simulations

• a regression coefficient from the tail of a long simulation that allows us to verify that
the asymptotic mean of government debt solves a fiscal-risk minimization problem
• an approximation to the mean of an ergodic distribution of government debt
• an approximation to the rate of convergence to an ergodic distribution of government
debt
We apply tools applicable to more general incomplete markets economies that are presented
on pages 648 - 650 in section III.D of [10] (BEGS).
We study an [3] economy with three Markov states driving government expenditures.

• In a previous lecture, we showed that with only two Markov states, it is possible that eventually endogenous interest rate fluctuations support complete markets allocations and Ramsey outcomes.

• The presence of three states prevents the full spanning that eventually prevails in the
two-state example featured in Fiscal Insurance via Fluctuating Interest Rates.
The lack of full spanning means that the ergodic distribution of the par value of government
debt is nontrivial, in contrast to the situation in Fiscal Insurance via Fluctuating Interest
Rates where the ergodic distribution of the par value is concentrated on one point.
Nevertheless, [10] (BEGS) establish, for general settings that include ours, that the Ramsey planner steers government assets to a level that comes as close as possible to providing full spanning in a precise sense defined by BEGS that we describe below.
We use code constructed in a previous lecture.
Warning: Key equations in [10] section III.D carry typos that we correct below.
Let’s start with some imports:

In [2]: import matplotlib.pyplot as plt


%matplotlib inline
from scipy.optimize import minimize

39.3 The Economy

As in Optimal Taxation without State-Contingent Debt and Optimal Taxation with State-Contingent Debt, we assume that the representative agent has utility function

$$u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}$$

We work directly with labor supply instead of leisure.


We assume that

𝑐𝑡 + 𝑔𝑡 = 𝑛𝑡

The Markov state 𝑠𝑡 takes three values, namely, 0, 1, 2.


The initial Markov state is 0.
The Markov transition matrix is (1/3)𝐼 where 𝐼 is a 3 × 3 identity matrix, so the 𝑠𝑡 process is
IID.
Government expenditures 𝑔(𝑠) equal .1 in Markov state 0, .2 in Markov state 1, and .3 in
Markov state 2.
We set preference parameters

𝛽 = .9
𝜎=2
𝛾=2

The following Python code sets up the economy

In [3]: import numpy as np



class CRRAutility:

def __init__(self,
β=0.9,
σ=2,
γ=2,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):

self.β, self.σ, self.γ = β, σ, γ


self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

# Utility function
def U(self, c, n):
σ = self.σ
if σ == 1.:
U = np.log(c)
else:
U = (c**(1 - σ) - 1) / (1 - σ)
return U - n**(1 + self.γ) / (1 + self.γ)

# Derivatives of utility function


def Uc(self, c, n):
return c**(-self.σ)

def Ucc(self, c, n):


return -self.σ * c**(-self.σ - 1)

def Un(self, c, n):


return -n**self.γ

def Unn(self, c, n):


return -self.γ * n**(self.γ - 1)

39.3.1 First and Second Moments

We’ll want first and second moments of some key random variables below.
The following code computes these moments; the code is recycled from Fiscal Insurance via
Fluctuating Interest Rates.

In [4]: def mean(x, s):


'''Returns mean for x given initial state'''
x = np.array(x)
return x @ u.π[s]

def variance(x, s):


x = np.array(x)
return x**2 @ u.π[s] - mean(x, s)**2

def covariance(x, y, s):


x, y = np.array(x), np.array(y)
return x * y @ u.π[s] - mean(x, s) * mean(y, s)

39.4 Long Simulation

To generate a long simulation we use the following code.


We begin by showing the code that we used in earlier lectures on the AMSS model.
Here it is

In [5]: import numpy as np


from scipy.optimize import root
from quantecon import MarkovChain

class SequentialAllocation:

'''
Class that takes a CESutility or BGPutility object as input and returns
the planner's allocation as a function of the multiplier on the
implementability constraint μ.
'''

def __init__(self, model):

# Initialize from model object attributes


self.β, self.π, self.G = model.β, model.π, model.G
self.mc, self.Θ = MarkovChain(self.π), model.Θ
self.S = len(model.π) # Number of states
self.model = model

# Find the first best allocation


self.find_first_best()

def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Un = model.Uc, model.Un

def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])

res = root(res, 0.5 * np.ones(2 * S))

if not res.success:
raise Exception('Could not find first best')

self.cFB = res.x[:S]
self.nFB = res.x[S:]

# Multiplier on the resource constraint


self.ΞFB = Uc(self.cFB, self.nFB)
self.zFB = np.hstack([self.cFB, self.nFB, self.ΞFB])

def time1_allocation(self, μ):



'''
Computes optimal allocation for time t >= 1 for a given μ
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

def FOC(z):
c = z[:S]
n = z[S:2 * S]
Ξ = z[2 * S:]
# FOC of c
return np.hstack([Uc(c, n) - μ * (Ucc(c, n) * c + Uc(c, n)) - Ξ,
Un(c, n) - μ * (Unn(c, n) * n + Un(c, n)) \
+ Θ * Ξ, # FOC of n
Θ * n - c - G])

# Find the root of the first-order condition


res = root(FOC, self.zFB)
if not res.success:
raise Exception('Could not find LS allocation.')
z = res.x
c, n, Ξ = z[:S], z[S:2 * S], z[2 * S:]

# Compute x
I = Uc(c, n) * c + Un(c, n) * n
x = np.linalg.solve(np.eye(S) - self.β * self.π, I)

return c, n, x, Ξ

def time0_allocation(self, B_, s_0):


'''
Finds the optimal allocation given initial government debt B_ and
state s_0
'''
model, π, Θ, G, β = self.model, self.π, self.Θ, self.G, self.β
Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

# First order conditions of planner's problem


def FOC(z):
μ, c, n, Ξ = z
xprime = self.time1_allocation(μ)[2]
return np.hstack([Uc(c, n) * (c - B_) + Un(c, n) * n + β * π[s_0]
@ xprime,
Uc(c, n) - μ * (Ucc(c, n)
* (c - B_) + Uc(c, n)) - Ξ,
Un(c, n) - μ * (Unn(c, n) * n
+ Un(c, n)) + Θ[s_0] * Ξ,
(Θ * n - c - G)[s_0]])

# Find root
res = root(FOC, np.array(
[0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]]))
if not res.success:
raise Exception('Could not find time 0 LS allocation.')

return res.x

def time1_value(self, μ):


'''
Find the value associated with multiplier μ
'''
c, n, x, Ξ = self.time1_allocation(μ)
U = self.model.U(c, n)
V = np.linalg.solve(np.eye(self.S) - self.β * self.π, U)
return c, n, x, V

def Τ(self, c, n):


'''
Computes Τ given c, n
'''
model = self.model
Uc, Un = model.Uc(c, n), model.Un(c, n)

return 1 + Un / (self.Θ * Uc)

def simulate(self, B_, s_0, T, sHist=None):


'''
Simulates planners policies for T periods
'''
model, π, β = self.model, self.π, self.β
Uc = model.Uc

if sHist is None:
sHist = self.mc.simulate(T, s_0)

cHist, nHist, Bhist, ΤHist, μHist = np.zeros((5, T))


RHist = np.zeros(T - 1)

# Time 0
μ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0)
ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
μHist[0] = μ

# Time 1 onward
for t in range(1, T):
c, n, x, Ξ = self.time1_allocation(μ)
Τ = self.Τ(c, n)
u_c = Uc(c, n)
s = sHist[t]
Eu_c = π[sHist[t - 1]] @ u_c
cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x[s] / u_c[s], Τ[s]
RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c)
μHist[t] = μ

return np.array([cHist, nHist, Bhist, ΤHist, sHist, μHist, RHist])

In [6]: import numpy as np


from scipy.optimize import fmin_slsqp
from scipy.optimize import root
from quantecon import MarkovChain

class RecursiveAllocationAMSS:

def __init__(self, model, μgrid, tol_diff=1e-4, tol=1e-4):

self.β, self.π, self.G = model.β, model.π, model.G


self.mc, self.S = MarkovChain(self.π), len(model.π)  # Number of states
self.Θ, self.model, self.μgrid = model.Θ, model, μgrid
self.tol_diff, self.tol = tol_diff, tol

# Find the first best allocation


self.solve_time1_bellman()
self.T.time_0 = True # Bellman equation now solves time 0 problem

def solve_time1_bellman(self):
'''
Solve the time 1 Bellman equation for calibration model and
initial grid μgrid0
'''
model, μgrid0 = self.model, self.μgrid
π = model.π
S = len(model.π)

# First get initial fit from Lucas Stokey solution.


# Need to change things to be ex ante
pp = SequentialAllocation(model)
interp = interpolator_factory(2, None)

def incomplete_allocation(μ_, s_):


c, n, x, V = pp.time1_value(μ_)
return c, n, π[s_] @ x, π[s_] @ V
cf, nf, xgrid, Vf, xprimef = [], [], [], [], []
for s_ in range(S):
c, n, x, V = zip(*map(lambda μ: incomplete_allocation(μ, s_), μgrid0))
c, n = np.vstack(c).T, np.vstack(n).T
x, V = np.hstack(x), np.hstack(V)
xprimes = np.vstack([x] * S)
cf.append(interp(x, c))
nf.append(interp(x, n))
Vf.append(interp(x, V))
xgrid.append(x)
xprimef.append(interp(x, xprimes))
cf, nf, xprimef = fun_vstack(cf), fun_vstack(nf), fun_vstack(xprimef)
Vf = fun_hstack(Vf)
policies = [cf, nf, xprimef]

# Create xgrid
x = np.vstack(xgrid).T
xbar = [x.min(0).max(), x.max(0).min()]
xgrid = np.linspace(xbar[0], xbar[1], len(μgrid0))
self.xgrid = xgrid

# Now iterate on Bellman equation


T = BellmanEquation(model, xgrid, policies, tol=self.tol)
diff = 1
while diff > self.tol_diff:
PF = T(Vf)

Vfnew, policies = self.fit_policy_function(PF)


diff = np.abs((Vf(xgrid) - Vfnew(xgrid)) / Vf(xgrid)).max()

print(diff)
Vf = Vfnew

# Store value function policies and Bellman Equations


self.Vf = Vf
self.policies = policies
self.T = T

def fit_policy_function(self, PF):


'''
Fits the policy functions
'''
S, xgrid = len(self.π), self.xgrid
interp = interpolator_factory(3, 0)
cf, nf, xprimef, Tf, Vf = [], [], [], [], []
for s_ in range(S):
PFvec = np.vstack([PF(x, s_) for x in self.xgrid]).T
Vf.append(interp(xgrid, PFvec[0, :]))
cf.append(interp(xgrid, PFvec[1:1 + S]))
nf.append(interp(xgrid, PFvec[1 + S:1 + 2 * S]))
xprimef.append(interp(xgrid, PFvec[1 + 2 * S:1 + 3 * S]))
Tf.append(interp(xgrid, PFvec[1 + 3 * S:]))
policies = fun_vstack(cf), fun_vstack(
nf), fun_vstack(xprimef), fun_vstack(Tf)
Vf = fun_hstack(Vf)
return Vf, policies

def Τ(self, c, n):


'''
Computes Τ given c and n
'''
model = self.model
Uc, Un = model.Uc(c, n), model.Un(c, n)

return 1 + Un / (self.Θ * Uc)

def time0_allocation(self, B_, s0):


'''
Finds the optimal allocation given initial government debt B_ and
state s_0
'''
PF = self.T(self.Vf)
z0 = PF(B_, s0)
c0, n0, xprime0, T0 = z0[1:]
return c0, n0, xprime0, T0

def simulate(self, B_, s_0, T, sHist=None):


'''
Simulates planners policies for T periods
'''
model, π = self.model, self.π
Uc = model.Uc
cf, nf, xprimef, Tf = self.policies

if sHist is None:
sHist = simulate_markov(π, s_0, T)

cHist, nHist, Bhist, xHist, ΤHist, THist, μHist = np.zeros((7, T))


# Time 0
cHist[0], nHist[0], xHist[0], THist[0] = self.time0_allocation(B_, s_0)
ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
μHist[0] = self.Vf[s_0](xHist[0])

# Time 1 onward
for t in range(1, T):
s_, x, s = sHist[t - 1], xHist[t - 1], sHist[t]
c, n, xprime, T = cf[s_, :](x), nf[s_, :](
x), xprimef[s_, :](x), Tf[s_, :](x)

Τ = self.Τ(c, n)[s]
u_c = Uc(c, n)
Eu_c = π[s_, :] @ u_c

μHist[t] = self.Vf[s](xprime[s])

cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x / Eu_c, Τ


xHist[t], THist[t] = xprime[s], T[s]
return np.array([cHist, nHist, Bhist, ΤHist, THist, μHist, sHist, xHist])

class BellmanEquation:
'''
Bellman equation for the continuation of the Lucas-Stokey Problem
'''

def __init__(self, model, xgrid, policies0, tol, maxiter=1000):

self.β, self.π, self.G = model.β, model.π, model.G


self.S = len(model.π) # Number of states
self.Θ, self.model, self.tol = model.Θ, model, tol
self.maxiter = maxiter

self.xbar = [min(xgrid), max(xgrid)]


self.time_0 = False

self.z0 = {}
cf, nf, xprimef = policies0

for s_ in range(self.S):
for x in xgrid:
self.z0[x, s_] = np.hstack([cf[s_, :](x),
nf[s_, :](x),
xprimef[s_, :](x),
np.zeros(self.S)])

self.find_first_best()

def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G

def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])

res = root(res, 0.5 * np.ones(2 * S))


if not res.success:
raise Exception('Could not find first best')

self.cFB = res.x[:S]
self.nFB = res.x[S:]
IFB = Uc(self.cFB, self.nFB) * self.cFB + \
Un(self.cFB, self.nFB) * self.nFB

self.xFB = np.linalg.solve(np.eye(S) - self.β * self.π, IFB)

self.zFB = {}
for s in range(S):
self.zFB[s] = np.hstack(
[self.cFB[s], self.nFB[s], self.π[s] @ self.xFB, 0.])

def __call__(self, Vf):


'''
Given continuation value function next period, return value function
this period T(V) and optimal policies
'''
if not self.time_0:
def PF(x, s): return self.get_policies_time1(x, s, Vf)
else:
def PF(B_, s0): return self.get_policies_time0(B_, s0, Vf)
return PF

def get_policies_time1(self, x, s_, Vf):


'''
Finds the optimal policies
'''
model, β, Θ, G, S, π = self.model, self.β, self.Θ, self.G, self.S, self.π
U, Uc, Un = model.U, model.Uc, model.Un

def objf(z):
c, n, xprime = z[:S], z[S:2 * S], z[2 * S:3 * S]

Vprime = np.empty(S)
for s in range(S):
Vprime[s] = Vf[s](xprime[s])

return -π[s_] @ (U(c, n) + β * Vprime)

def cons(z):
c, n, xprime, T = z[:S], z[S:2 * S], z[2 * S:3 * S], z[3 * S:]
u_c = Uc(c, n)

Eu_c = π[s_] @ u_c


return np.hstack([
x * u_c / Eu_c - u_c * (c - T) - Un(c, n) * n - β * xprime,
Θ * n - c - G])

if model.transfers:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 100.)] * S
else:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 0.)] * S
out, fx, _, imode, smode = fmin_slsqp(objf, self.z0[x, s_],
f_eqcons=cons, bounds=bounds,
full_output=True, iprint=0,
acc=self.tol, iter=self.maxiter)

if imode > 0:
raise Exception(smode)

self.z0[x, s_] = out


return np.hstack([-fx, out])

def get_policies_time0(self, B_, s0, Vf):


'''
Finds the optimal policies
'''
model, β, Θ, G = self.model, self.β, self.Θ, self.G
U, Uc, Un = model.U, model.Uc, model.Un

def objf(z):
c, n, xprime = z[:-1]

return -(U(c, n) + β * Vf[s0](xprime))

def cons(z):
c, n, xprime, T = z
return np.hstack([
-Uc(c, n) * (c - B_ - T) - Un(c, n) * n - β * xprime,
(Θ * n - c - G)[s0]])

if model.transfers:
bounds = [(0., 100), (0., 100), self.xbar, (0., 100.)]
else:
bounds = [(0., 100), (0., 100), self.xbar, (0., 0.)]
out, fx, _, imode, smode = fmin_slsqp(objf, self.zFB[s0], f_eqcons=cons,
bounds=bounds, full_output=True,
iprint=0)

if imode > 0:
raise Exception(smode)

return np.hstack([-fx, out])

In [7]: import numpy as np


from scipy.interpolate import UnivariateSpline

class interpolate_wrapper:

def __init__(self, F):


self.F = F

def __getitem__(self, index):


return interpolate_wrapper(np.asarray(self.F[index]))

def reshape(self, *args):


self.F = self.F.reshape(*args)
return self

def transpose(self):
self.F = self.F.transpose()

def __len__(self):
return len(self.F)

def __call__(self, xvec):


x = np.atleast_1d(xvec)
shape = self.F.shape
if len(x) == 1:
fhat = np.hstack([f(x) for f in self.F.flatten()])
return fhat.reshape(shape)
else:
fhat = np.vstack([f(x) for f in self.F.flatten()])
return fhat.reshape(np.hstack((shape, len(x))))

class interpolator_factory:

def __init__(self, k, s):


self.k, self.s = k, s

def __call__(self, xgrid, Fs):


shape, m = Fs.shape[:-1], Fs.shape[-1]
Fs = Fs.reshape((-1, m))
F = []
xgrid = np.sort(xgrid) # Sort xgrid
for Fhat in Fs:
F.append(UnivariateSpline(xgrid, Fhat, k=self.k, s=self.s))
return interpolate_wrapper(np.array(F).reshape(shape))

def fun_vstack(fun_list):

Fs = [IW.F for IW in fun_list]


return interpolate_wrapper(np.vstack(Fs))

def fun_hstack(fun_list):

Fs = [IW.F for IW in fun_list]


return interpolate_wrapper(np.hstack(Fs))

def simulate_markov(π, s_0, T):



sHist = np.empty(T, dtype=int)


sHist[0] = s_0
S = len(π)
for t in range(1, T):
sHist[t] = np.random.choice(np.arange(S), p=π[sHist[t - 1]])

return sHist

Next, we show the code that we use to generate a very long simulation starting from initial government debt equal to .5.
Here is a graph of a long simulation of 102000 periods.

In [8]: μ_grid = np.linspace(-0.09, 0.1, 100)

log_example = CRRAutility(π=(1 / 3) * np.ones((3, 3)),
G=np.array([0.1, 0.2, .3]),
Θ=np.ones(3))

log_example.transfers = True # Government can use transfers


log_sequential = SequentialAllocation(log_example)  # Solve sequential problem

log_bellman = RecursiveAllocationAMSS(log_example, μ_grid,
tol=1e-12, tol_diff=1e-10)

T = 102000 # Set T to 102000 periods

sim_seq_long = log_sequential.simulate(0.5, 0, T)
sHist_long = sim_seq_long[-3]
sim_bel_long = log_bellman.simulate(0.5, 0, T, sHist_long)

titles = ['Government Debt', 'Tax Rate']

fig, axes = plt.subplots(2, 1, figsize=(10, 8))

for ax, title, id in zip(axes.flatten(), titles, [2, 3]):


ax.plot(sim_seq_long[id], '-k', sim_bel_long[id], '-.b', alpha=0.5)
ax.set(title=title)
ax.grid()

axes[0].legend(('Complete Markets', 'Incomplete Markets'))


plt.tight_layout()
plt.show()

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:24:
RuntimeWarning: divide by zero encountered in reciprocal
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:29:
RuntimeWarning: divide by zero encountered in power
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:235:
RuntimeWarning: invalid value encountered in true_divide
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:235:
RuntimeWarning: invalid value encountered in multiply

0.03826635338764243
0.0015144378246624802

0.0013387575049821254
0.0011833202397257799
0.001060030711610997
0.0009506620325655877
0.0008518776516784105
0.0007625857030772584
0.0006819563061621297
0.0006094002926999313
0.0005443007356557494
0.0004859950034451949
0.0004338395935867951
0.0003872273086547321
0.0003455954121662256
0.0003084287064416754
0.00027525901874807506
0.0002456631291795933
0.00021925988530410837
0.00019570695818365896
0.00017469751639734096
0.000155956971312117
0.00013923987966255844
0.00012432704762412874
0.00011102285953813124
9.915283208005873e-05
8.856139177418549e-05
7.910986486727555e-05
7.067466534256756e-05
6.314566738342215e-05
5.642474601179214e-05
5.04244714242905e-05
4.5066942140161024e-05
4.0282743553840194e-05
3.6010019182229436e-05
3.219364288469241e-05
2.878448158458779e-05
2.5738738746335907e-05
2.3017369637244783e-05
2.0585562674979178e-05
1.8412273677249495e-05
1.6470097041015085e-05
1.4734148538314439e-05
1.3182214237264232e-05
1.179465468972776e-05
1.0553942826236891e-05
9.444436182046656e-06
8.452171072609061e-06
7.564681571028991e-06
6.770836663017954e-06
6.060699059757928e-06
5.425387660617435e-06
4.856977646150699e-06
4.3483827830083455e-06
3.893275618067105e-06
3.4860039396650155e-06
3.121510956906862e-06
2.795283246242997e-06
2.503284908479676e-06
2.241904705545173e-06
2.0079210562767113e-06
1.79844647903239e-06
1.6109046568696263e-06
1.4429885479761846e-06
1.2926353781273653e-06

1.158001328946089e-06
1.0374366554173127e-06
9.29464798683436e-07
8.327658257014739e-07
7.461586888316676e-07
6.68585740139964e-07
5.991020356617505e-07
5.368605529407855e-07
4.811045502353569e-07
4.3115414103227755e-07
3.8640486859543906e-07
3.463127364199561e-07
3.1039144302497795e-07
2.7820596962682417e-07
2.49366486805813e-07
2.2352411409818838e-07
2.0036654012977202e-07
1.7961397402831138e-07
1.6101601177333915e-07
1.4434848502328258e-07
1.2941016279753393e-07
1.1602136587348267e-07
1.040209227293687e-07
9.32644858136413e-08
8.362276395815416e-08
7.497997366643375e-08
6.723235660011802e-08
6.028697859409553e-08
5.406057383526237e-08
4.847854202510787e-08
4.347405382162158e-08
3.8987205778995045e-08
3.4964652040140875e-08
3.1357501640346974e-08
2.812337635781068e-08
2.522339224827695e-08
2.2623003639032997e-08
2.0291195649165156e-08
1.8200168778793543e-08
1.6325011122562583e-08
1.4643392858101812e-08
1.3135299716088654e-08
1.1782791355167972e-08
1.056978526054371e-08
9.481866630357504e-09
8.506111410597114e-09
7.630935781926152e-09
6.845958485714195e-09
6.141856807690933e-09
5.510288304907757e-09
4.943766125568843e-09
4.4357383440222926e-09
3.979767125967659e-09
3.570867405350532e-09
3.2040370238586946e-09
2.87494303938065e-09
2.5797022179751765e-09
2.3148261887360843e-09
2.077185645788396e-09
1.863977339735376e-09
1.6726840954563806e-09
1.5010504818563051e-09
1.3470529026965274e-09

1.2088760411873413e-09
1.084893023854817e-09
9.736423300387176e-10
8.738157042491851e-10
7.842378551195234e-10
7.038548326703386e-10
6.3172225060306e-10
5.669910290750272e-10
5.089024356658802e-10
4.567708837243395e-10
4.0998718673664576e-10
3.6800135667275587e-10
3.303205893789562e-10
2.9650322075385925e-10
2.66151443165352e-10
2.389105306017189e-10
2.144620199979866e-10
1.925172025019214e-10
1.7282099772086056e-10
1.5514312043964932e-10
1.3927420637418602e-10
1.2503092617705906e-10
1.1224653227593303e-10
1.0076977408457583e-10
9.046861727966286e-11

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:30:
UserWarning: Creating legend with loc="best" can be slow with large amounts of data.
/home/ubuntu/anaconda3/lib/python3.7/site-packages/IPython/core/pylabtools.py:128:
UserWarning: Creating legend with loc="best" can be slow with large amounts of data.
fig.canvas.print_figure(bytes_io, **kw)

The long simulation apparently indicates eventual convergence to an ergodic distribution.



It takes about 1000 periods to reach the ergodic distribution – an outcome that is forecast by
approximations to rates of convergence that appear in [10] and that we discuss in a previous
lecture.
We discard the first 2000 observations of the simulation and construct the histogram of the par value of government debt.
We obtain the following graph for the histogram of the last 100,000 observations on the par
value of government debt.

The black vertical line denotes the sample mean for the last 100,000 observations included in the histogram; the green vertical line denotes the value of ℬ∗ /𝐸𝑢𝑐 computed from the sample (presumably from the ergodic distribution), where ℬ∗ is the regression coefficient described below; the red vertical line denotes an approximation by [10] to the mean of the ergodic distribution that can be precomputed before sampling from the ergodic distribution, as described below.
Before moving on to discuss the histogram and the vertical lines approximating the ergodic
mean of government debt in more detail, the following graphs show government debt and
taxes early in the simulation, for periods 1-100 and 101 to 200 respectively.

In [9]: titles = ['Government Debt', 'Tax Rate']

fig, axes = plt.subplots(4, 1, figsize=(10, 15))

for i, id in enumerate([2, 3]):


axes[i].plot(sim_seq_long[id][:99], '-k', sim_bel_long[id][:99],
'-.b', alpha=0.5)
axes[i+2].plot(range(100, 199), sim_seq_long[id][100:199], '-k',
range(100, 199), sim_bel_long[id][100:199], '-.b',
alpha=0.5)
axes[i].set(title=titles[i])

axes[i+2].set(title=titles[i])
axes[i].grid()
axes[i+2].grid()

axes[0].legend(('Complete Markets', 'Incomplete Markets'))


plt.tight_layout()
plt.show()

For the short samples early in our simulated sample of 102,000 observations, fluctuations in government debt and the tax rate conceal the weak but inexorable force that the Ramsey planner exerts on both series, driving them toward ergodic distributions far from these early observations

• early observations are more influenced by the initial value of the par value of
government debt than by the ergodic mean of the par value of government
debt

• much later observations are more influenced by the ergodic mean and are independent
of the initial value of the par value of government debt

39.5 Asymptotic Mean and Rate of Convergence

We apply the results of [10] to interpret

• the mean of the ergodic distribution of government debt

• the rate of convergence to the ergodic distribution from an arbitrary initial government
debt
We begin by computing objects required by the theory of section III.D of [10].
As in Fiscal Insurance via Fluctuating Interest Rates, we recall that [10] used a particular
notation to represent what we can regard as a generalization of the AMSS model.
We introduce some of the [10] notation so that readers can quickly relate notation that appears in their key formulas to the notation that we have used in previous lectures here and here.
BEGS work with objects 𝐵𝑡 , ℬ𝑡 , ℛ𝑡 , 𝒳𝑡 that are related to notation that we used in earlier
lectures by

$$
\begin{aligned}
\mathcal{R}_t &= R_{t-1} \frac{u_{c,t}}{u_{c,t-1}} = \frac{u_{c,t}}{\beta E_{t-1} u_{c,t}} \\
B_t &= \frac{b_{t+1}(s^t)}{R_t(s^t)} \\
b_t(s^{t-1}) &= \mathcal{R}_{t-1} B_{t-1} \\
\mathcal{B}_t &= u_{c,t} B_t = (\beta E_t u_{c,t+1}) b_{t+1}(s^t) \\
\mathcal{X}_t &= u_{c,t} [g_t - \tau_t n_t]
\end{aligned}
$$

[10] call 𝒳𝑡 the effective government deficit, and ℬ𝑡 the effective government debt.
Equation (44) of [10] expresses the time 𝑡 state 𝑠 government budget constraint as

$$\mathcal{B}(s) = \mathcal{R}_\tau(s, s_-)\mathcal{B}_- + \mathcal{X}_\tau(s) \qquad (1)$$

where the dependence on 𝜏 is to remind us that these objects depend on the tax rate; 𝑠− is
last period’s Markov state.
BEGS interpret random variations in the right side of (1) as fiscal risks generated by
• interest-rate-driven fluctuations in time 𝑡 effective payments due on the government
portfolio, namely, ℛ𝜏 (𝑠, 𝑠− )ℬ− , and
• fluctuations in the effective government deficit 𝒳𝑡

39.5.1 Asymptotic Mean

BEGS give conditions under which the ergodic mean of ℬ𝑡 approximately satisfies the equa-
tion

$$\mathcal{B}^* = -\frac{\operatorname{cov}^\infty(\mathcal{R}_t, \mathcal{X}_t)}{\operatorname{var}^\infty(\mathcal{R}_t)} \qquad (2)$$

where the superscript ∞ denotes a moment taken with respect to an ergodic distribution.
Formula (2) represents ℬ∗ as a regression coefficient of 𝒳𝑡 on ℛ𝑡 in the ergodic distribution.
Regression coefficient ℬ∗ solves a variance-minimization problem:

ℬ∗ = argminℬ var∞ (ℛℬ + 𝒳) (3)

The minimand in criterion (3) measures fiscal risk associated with a given tax-debt policy
that appears on the right side of equation (1).
Expressing formula (2) in terms of our notation tells us that the ergodic mean of the par
value 𝑏 of government debt in the AMSS model should approximately equal

$$\hat{b} = \frac{\mathcal{B}^*}{\beta E(E_t u_{c,t+1})} = \frac{\mathcal{B}^*}{\beta E(u_{c,t+1})} \qquad (4)$$

where mathematical expectations are taken with respect to the ergodic distribution.

39.5.2 Rate of Convergence

BEGS also derive the following approximation to the rate of convergence to ℬ∗ from an arbi-
trary initial condition.

$$\frac{E_t(\mathcal{B}_{t+1} - \mathcal{B}^*)}{\mathcal{B}_t - \mathcal{B}^*} \approx \frac{1}{1 + \beta^2 \operatorname{var}^\infty(\mathcal{R})} \qquad (5)$$

(See the equation above equation (47) in [10])

39.5.3 More Advanced Material

The remainder of this lecture is about technical material based on formulas from [10].
The topic is interpreting and extending formula (3) for the ergodic mean ℬ∗ .

39.5.4 Chicken and Egg

Attributes of the ergodic distribution for ℬ𝑡 appear on the right side of formula (3) for the
ergodic mean ℬ∗ .
Thus, formula (3) is not useful for estimating the mean of the ergodic distribution in advance of actually computing the ergodic distribution

• we need to know the ergodic distribution to compute the right side of for-
mula (3)

So the primary use of equation (3) is that it confirms that the ergodic distribution solves a fiscal-risk minimization problem.
As an example, notice how we used the formula for the mean of ℬ in the ergodic distribution
of the special AMSS economy in Fiscal Insurance via Fluctuating Interest Rates

• first we computed the ergodic distribution using a reverse-engineering construction

• then we verified that ℬ agrees with the mean of that distribution

39.5.5 Approximating the Ergodic Mean

[10] propose an approximation to ℬ∗ that can be computed without first knowing the ergodic
distribution.
To construct the BEGS approximation to ℬ∗ , we just follow steps set forth on pages 648 - 650
of section III.D of [10]
• notation in BEGS might be confusing at first sight, so it is important to stare and digest before computing
• there are also some sign errors in the [10] text that we’ll want to correct
Here is a step-by-step description of the [10] approximation procedure.

39.5.6 Step by Step

Step 1: For a given 𝜏 we compute a vector of values 𝑐𝜏 (𝑠), 𝑠 = 1, 2, … , 𝑆 that satisfy

$$(1 - \tau)c_\tau(s)^{-\sigma} - (c_\tau(s) + g(s))^{\gamma} = 0$$

This is a nonlinear equation to be solved for 𝑐𝜏 (𝑠), 𝑠 = 1, … , 𝑆.


𝑆 = 3 in our case, but we’ll write code for a general integer 𝑆.
Typo alert: Please note that there is a sign error in equation (42) of [10] – it should be a
minus rather than a plus in the middle.

• We have made the appropriate correction in the above equation.

Step 2: Knowing 𝑐𝜏 (𝑠), 𝑠 = 1, … , 𝑆 for a given 𝜏 , we want to compute the random variables

$$\mathcal{R}_\tau(s) = \frac{c_\tau(s)^{-\sigma}}{\beta \sum_{s'=1}^{S} c_\tau(s')^{-\sigma} \pi(s')}$$

and

$$\mathcal{X}_\tau(s) = (c_\tau(s) + g(s))^{1+\gamma} - c_\tau(s)^{1-\sigma}$$

each for 𝑠 = 1, … , 𝑆.

BEGS call ℛ𝜏 (𝑠) the effective return on risk-free debt and they call 𝒳𝜏 (𝑠) the effective
government deficit.
Step 3: With the preceding objects in hand, for a given ℬ, we seek a 𝜏 that satisfies

$$\mathcal{B} = -\frac{\beta}{1-\beta} E\mathcal{X}_\tau \equiv -\frac{\beta}{1-\beta} \sum_s \mathcal{X}_\tau(s)\pi(s)$$

This equation says that at a constant discount factor 𝛽, equivalent government debt ℬ equals
the present value of the mean effective government surplus.
Typo alert: there is a sign error in equation (46) of [10] –the left side should be multiplied
by −1.

• We have made this correction in the above equation.

For a given ℬ, let a 𝜏 that solves the above equation be called 𝜏 (ℬ).
We’ll use a Python root solver to finds a 𝜏 that this equation for a given ℬ.
We’ll use this function to induce a function 𝜏 (ℬ).
Step 4: With a Python program that computes 𝜏 (ℬ) in hand, next we write a Python func-
tion to compute the random variable.

$$J(\mathcal{B})(s) = \mathcal{R}_{\tau(\mathcal{B})}(s)\,\mathcal{B} + \mathcal{X}_{\tau(\mathcal{B})}(s), \qquad s = 1, \ldots, S$$

Step 5: Now that we have a machine to compute the random variable 𝐽 (ℬ)(𝑠), 𝑠 = 1, … , 𝑆,
via a composition of Python functions, we can use the population variance function that we
defined in the code above to construct a function var(𝐽 (ℬ)).
We put var(𝐽 (ℬ)) into a function minimizer and compute

ℬ∗ = argminℬ var(𝐽 (ℬ))

Step 6: Next we take the minimizer ℬ∗ and the Python functions for computing means and
variances and compute

$$\text{rate} = \frac{1}{1 + \beta^2 \operatorname{var}(\mathcal{R}_{\tau(\mathcal{B}^*)})}$$

Ultimate outputs of this string of calculations are two scalars

(ℬ∗ , rate)

Step 7: Compute the divisor

𝑑𝑖𝑣 = 𝛽𝐸𝑢𝑐,𝑡+1

and then compute the mean of the par value of government debt in the AMSS model

$$\hat{b} = \frac{\mathcal{B}^*}{div}$$

In the two-Markov-state AMSS economy in Fiscal Insurance via Fluctuating Interest Rates,
𝐸𝑡 𝑢𝑐,𝑡+1 = 𝐸𝑢𝑐,𝑡+1 in the ergodic distribution and we have confirmed that this formula very
accurately describes a constant par value of government debt that

• supports full fiscal insurance via fluctuating interest rates, and

• is the limit of government debt as 𝑡 → +∞


In the three-Markov-state economy of this lecture, the par value of government debt fluctuates in a history-dependent way even asymptotically.
In this economy, 𝑏̂ given by the above formula approximates the mean of the ergodic distribution of the par value of government debt

• this is the red vertical line plotted in the histogram of the last 100,000 observations of our simulation of the par value of government debt plotted above

• the approximation is fairly accurate but not perfect


• so while the approximation circumvents the chicken and egg problem surrounding the much better approximation associated with the green vertical line, it does so by enlarging the approximation error

39.5.7 Execution

Now let’s move on to compute things step by step.

Step 1

In [10]: u = CRRAutility(π=(1 / 3) * np.ones((3, 3)),
                         G=np.array([0.1, 0.2, .3]),
                         Θ=np.ones(3))

         τ = 0.05  # Initial guess of τ (to display calcs along the way)

         S = len(u.G)  # Number of states

         def solve_c(c, τ, u):
             return (1 - τ) * c**(-u.σ) - (c + u.G)**u.γ

         # .x returns the solution array from root
         c = root(solve_c, np.ones(S), args=(τ, u)).x
         c

Out[10]: array([0.93852387, 0.89231015, 0.84858872])

In [11]: root(solve_c, np.ones(S), args=(τ, u))

Out[11]:     fjac: array([[-0.99990816, -0.00495351, -0.01261467],
                          [-0.00515633,  0.99985715,  0.01609659],
                          [-0.01253313, -0.01616015,  0.99979086]])
              fun: array([ 5.61814373e-10, -4.76900741e-10,  1.17474919e-11])
          message: 'The solution converged.'
             nfev: 11
              qtf: array([1.55568331e-08, 1.28322481e-08, 7.89913426e-11])
                r: array([ 4.26943131,  0.08684775, -0.06300593, -4.71278821,
                          -0.0743338 , -5.50778548])
           status: 1
          success: True
                x: array([0.93852387, 0.89231015, 0.84858872])

Step 2

In [12]: n = c + u.G # Compute labor supply

39.5.8 Note about Code

Remember that in our code 𝜋 is a 3 × 3 transition matrix.


But because we are studying an IID case, 𝜋 has identical rows and we only need to compute
objects for one row of 𝜋.
This explains why at some places below we set 𝑠 = 0 just to pick off the first row of 𝜋 in the
calculations.
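Because each row of 𝜋 is identical in the IID case, an expectation conditional on any state equals the unconditional expectation. A quick check of this, using the objects u and c computed above:

# Every row of π is the same, so conditioning on s is irrelevant
E_uc = c**(-u.σ) @ u.π[0]                  # expectation using row 0
print(np.allclose(c**(-u.σ) @ u.π, E_uc))  # True: identical for every row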

39.5.9 Code

First, let’s compute ℛ and 𝒳 according to our formulas

In [13]: def compute_R_X(τ, u, s):
             c = root(solve_c, np.ones(S), args=(τ, u)).x  # Solve for vector of c's
             div = u.β * (u.Uc(c[0], n[0]) * u.π[s, 0] \
                          + u.Uc(c[1], n[1]) * u.π[s, 1] \
                          + u.Uc(c[2], n[2]) * u.π[s, 2])
             R = c**(-u.σ) / div
             X = (c + u.G)**(1 + u.γ) - c**(1 - u.σ)
             return R, X

In [14]: c**(-u.σ) @ u.π

Out[14]: array([1.25997521, 1.25997521, 1.25997521])

In [15]: u.π

Out[15]: array([[0.33333333, 0.33333333, 0.33333333],


[0.33333333, 0.33333333, 0.33333333],
[0.33333333, 0.33333333, 0.33333333]])

We only want unconditional expectations because we are in an IID case.


So we’ll set 𝑠 = 0 and just pick off expectations associated with the first row of 𝜋

In [16]: s = 0

R, X = compute_R_X(τ, u, s)

Let’s look at the random variables ℛ, 𝒳

In [17]: R

Out[17]: array([1.00116313, 1.10755123, 1.22461897])

In [18]: mean(R, s)

Out[18]: 1.1111111111111112

In [19]: X

Out[19]: array([0.05457803, 0.18259396, 0.33685546])

In [20]: mean(X, s)

Out[20]: 0.19134248445303795

In [21]: X @ u.π

Out[21]: array([0.19134248, 0.19134248, 0.19134248])
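The helpers mean and variance called in these cells were defined earlier in the lecture. For reference, a minimal reconstruction consistent with the calls and outputs above (an assumption, not a verbatim copy of the earlier code) is:

def mean(x, s):
    # Expectation of the random variable x conditional on state s
    return x @ u.π[s]

def variance(x, s):
    # Population variance of x conditional on state s
    return (x - mean(x, s))**2 @ u.π[s]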

Step 3

In [22]: def solve_τ(τ, B, u, s):
             R, X = compute_R_X(τ, u, s)
             return ((u.β - 1) / u.β) * B - X @ u.π[s]

Note that 𝐵 is a scalar.


Let’s try out our method computing 𝜏

In [23]: s = 0
B = 1.0

τ = root(solve_τ, .1, args=(B, u, s)).x[0]  # Very sensitive to initial value
τ

Out[23]: 0.2740159773695818

In the above cell, B is fixed at 1 and 𝜏 is computed as a function of B.
Note that 0.1 is the initial value for 𝜏 in the root-finding algorithm.
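Because the root finder is sensitive to the initial value, it can be worth scanning a few starting points and checking the convergence flag. A minimal sketch, with illustrative guesses:

# Scan several illustrative initial guesses for τ and report the results
for τ0 in (0.05, 0.1, 0.3, 0.5):
    sol = root(solve_τ, τ0, args=(B, u, s))
    print(f"guess {τ0}: τ = {sol.x[0]:.6f}, converged = {sol.success}")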

Step 4

In [24]: def min_J(B, u, s):
             # Very sensitive to initial value of τ
             τ = root(solve_τ, .5, args=(B, u, s)).x[0]
             R, X = compute_R_X(τ, u, s)
             return variance(R * B + X, s)

In [25]: min_J(B, u, s)

Out[25]: 0.035564405653720765

Step 6

In [26]: B_star = minimize(min_J, .5, args=(u, s)).x[0]
         B_star

Out[26]: -1.199482032053344

Step 7

In [27]: n = c + u.G  # Compute labor supply

In [28]: div = u.β * (u.Uc(c[0], n[0]) * u.π[s, 0] \
                      + u.Uc(c[1], n[1]) * u.π[s, 1] \
                      + u.Uc(c[2], n[2]) * u.π[s, 2])

In [29]: B_hat = B_star / div
         B_hat

Out[29]: -1.057765110954647

In [30]: τ_star = root(solve_τ, 0.05, args=(B_star, u, s)).x[0]
         τ_star

Out[30]: 0.09572926599432369

In [31]: R_star, X_star = compute_R_X(τ_star, u, s)
         R_star, X_star

Out[31]: (array([0.9998398 , 1.10746593, 1.22602761]),


array([0.00202709, 0.1246474 , 0.27315286]))

In [32]: rate = 1 / (1 + u.β**2 * variance(R_star, s))
         rate

Out[32]: 0.9931353429089931

In [33]: root(solve_c, np.ones(S), args=(τ_star, u)).x

Out[33]: array([0.92643817, 0.88027114, 0.83662633])


Chapter 40

Competitive Equilibria of a Model of Chang

40.1 Contents

• Overview 40.2
• Setting 40.3
• Competitive Equilibrium 40.4
• Inventory of Objects in Play 40.5
• Analysis 40.6
• Calculating all Promise-Value Pairs in CE 40.7
• Solving a Continuation Ramsey Planner’s Bellman Equation 40.8
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install polytope

40.2 Overview

This lecture describes how Chang [14] analyzed competitive equilibria and a best competi-
tive equilibrium called a Ramsey plan.
He did this by
• characterizing a competitive equilibrium recursively in a way also employed in the dy-
namic Stackelberg problems and Calvo model lectures to pose Stackelberg problems in
linear economies, and then
• appropriately adapting an argument of Abreu, Pearce, and Stacchetti [2] to describe key
features of the set of competitive equilibria
Roberto Chang [14] chose a model of Calvo [13] as a simple structure that conveys ideas that
apply more broadly.
A textbook version of Chang’s model appears in chapter 25 of [43].
This lecture and Credible Government Policies in Chang Model can be viewed as more so-
phisticated and complete treatments of the topics discussed in Ramsey plans, time inconsis-
tency, sustainable plans.


Both this lecture and Credible Government Policies in Chang Model make extensive use of an
idea to which we apply the nickname dynamic programming squared.
In dynamic programming squared problems there are typically two interrelated Bellman equa-
tions
• A Bellman equation for a set of agents or followers with value or value function 𝑣𝑎 .
• A Bellman equation for a principal or Ramsey planner or Stackelberg leader with value
or value function 𝑣𝑝 in which 𝑣𝑎 appears as an argument.
We encountered problems with this structure in dynamic Stackelberg problems, optimal taxa-
tion with state-contingent debt, and other lectures.
We’ll start with some standard imports:

In [2]: import numpy as np
        import polytope
        import quantecon as qe
        import matplotlib.pyplot as plt
        %matplotlib inline

`polytope` failed to import `cvxopt.glpk`.
will use `scipy.optimize.linprog`

40.2.1 The Setting

First, we introduce some notation.

For a sequence of scalars 𝑧⃗ ≡ {𝑧𝑡}_{𝑡=0}^{∞}, let 𝑧⃗^𝑡 = (𝑧0, … , 𝑧𝑡) and 𝑧⃗𝑡 = (𝑧𝑡, 𝑧𝑡+1, …).

An infinitely lived representative agent and an infinitely lived government exist at dates 𝑡 =
0, 1, ….
The objects in play are
• an initial quantity 𝑀−1 of nominal money holdings
• a sequence of inverse money growth rates ℎ⃗ and an associated sequence of nominal
money holdings 𝑀⃗
• a sequence of values of money 𝑞 ⃗
• a sequence of real money holdings 𝑚⃗
• a sequence of total tax collections 𝑥⃗
• a sequence of per capita rates of consumption 𝑐 ⃗
• a sequence of per capita incomes 𝑦 ⃗
A benevolent government chooses sequences (𝑀⃗ , ℎ,⃗ 𝑥)⃗ subject to a sequence of budget con-
straints and other constraints imposed by competitive equilibrium.
Given tax collection and price of money sequences, a representative household chooses se-
quences (𝑐,⃗ 𝑚)
⃗ of consumption and real balances.
In competitive equilibrium, the price of money sequence 𝑞 ⃗ clears markets, thereby reconciling
decisions of the government and the representative household.
Chang adopts a version of a model that [13] designed to exhibit time-inconsistency of a Ram-
sey policy in a simple and transparent setting.
By influencing the representative household’s expectations, government actions at time 𝑡 af-
fect components of household utilities for periods 𝑠 before 𝑡.

When setting a path for monetary expansion rates, the government takes into account how
the household’s anticipations of the government’s future actions affect the household’s current
decisions.
The ultimate source of time inconsistency is that a time 0 Ramsey planner takes these effects
into account in designing a plan of government actions for 𝑡 ≥ 0.

40.3 Setting

40.3.1 The Household’s Problem

A representative household faces a nonnegative value of money sequence 𝑞 ⃗ and sequences 𝑦,⃗ 𝑥⃗
of income and total tax collections, respectively.
The household chooses nonnegative sequences 𝑐,⃗ 𝑀⃗ of consumption and nominal balances,
respectively, to maximize


∑_{𝑡=0}^{∞} 𝛽^𝑡 [𝑢(𝑐𝑡) + 𝑣(𝑞𝑡𝑀𝑡)]     (1)

subject to

𝑞𝑡 𝑀𝑡 ≤ 𝑦𝑡 + 𝑞𝑡 𝑀𝑡−1 − 𝑐𝑡 − 𝑥𝑡 (2)

and

𝑞𝑡 𝑀𝑡 ≤ 𝑚̄ (3)

Here 𝑞𝑡 is the reciprocal of the price level at 𝑡, which we can also call the value of money.
Chang [14] assumes that
• 𝑢 ∶ ℝ+ → ℝ is twice continuously differentiable, strictly concave, and strictly increasing;
• 𝑣 ∶ ℝ+ → ℝ is twice continuously differentiable and strictly concave;
• lim_{𝑐→0} 𝑢′(𝑐) = lim_{𝑚→0} 𝑣′(𝑚) = +∞;
• there is a finite level 𝑚 = 𝑚𝑓 such that 𝑣′ (𝑚𝑓 ) = 0
The household carries real balances out of a period equal to 𝑚𝑡 = 𝑞𝑡 𝑀𝑡 .
Inequality (2) is the household’s time 𝑡 budget constraint.
It tells how real balances 𝑞𝑡 𝑀𝑡 carried out of period 𝑡 depend on income, consumption, taxes,
and real balances 𝑞𝑡 𝑀𝑡−1 carried into the period.
Equation (3) imposes an exogenous upper bound 𝑚̄ on the household’s choice of real bal-
ances, where 𝑚̄ ≥ 𝑚𝑓 .

40.3.2 Government

The government chooses a sequence of inverse money growth rates with time 𝑡 component
ℎ𝑡 ≡ 𝑀𝑡−1/𝑀𝑡 ∈ Π ≡ [𝜋̲, 𝜋̄], where 0 < 𝜋̲ < 1 < 1/𝛽 ≤ 𝜋̄.

The government faces a sequence of budget constraints with time 𝑡 component



−𝑥𝑡 = 𝑞𝑡 (𝑀𝑡 − 𝑀𝑡−1 )

which by using the definitions of 𝑚𝑡 and ℎ𝑡 can also be expressed as

−𝑥𝑡 = 𝑚𝑡 (1 − ℎ𝑡 ) (4)

The restrictions 𝑚𝑡 ∈ [0, 𝑚̄] and ℎ𝑡 ∈ Π evidently imply that 𝑥𝑡 ∈ 𝑋 ≡ [(𝜋̲ − 1)𝑚̄, (𝜋̄ − 1)𝑚̄].
We define the set 𝐸 ≡ [0, 𝑚]̄ × Π × 𝑋, so that we require that (𝑚, ℎ, 𝑥) ∈ 𝐸.
To represent the idea that taxes are distorting, Chang makes the following assumption about
outcomes for per capita output:

𝑦𝑡 = 𝑓(𝑥𝑡 ), (5)

where 𝑓 ∶ ℝ → ℝ satisfies 𝑓(𝑥) > 0, is twice continuously differentiable, 𝑓 ″ (𝑥) < 0, and
𝑓(𝑥) = 𝑓(−𝑥) for all 𝑥 ∈ ℝ, so that subsidies and taxes are equally distorting.
Calvo’s and Chang’s purpose is not to model the causes of tax distortions in any detail but
simply to summarize the outcome of those distortions via the function 𝑓(𝑥).
A key part of the specification is that tax distortions are increasing in the absolute value of
tax revenues.
Ramsey plan: A Ramsey plan is a competitive equilibrium that maximizes (1).
Within-period timing of decisions is as follows:
• first, the government chooses ℎ𝑡 and 𝑥𝑡 ;
• then given 𝑞 ⃗ and its expectations about future values of 𝑥 and 𝑦’s, the household
chooses 𝑀𝑡 and therefore 𝑚𝑡 because 𝑚𝑡 = 𝑞𝑡 𝑀𝑡 ;
• then output 𝑦𝑡 = 𝑓(𝑥𝑡 ) is realized;
• finally 𝑐𝑡 = 𝑦𝑡
This within-period timing confronts the government with choices framed by how the private
sector wants to respond when the government takes time 𝑡 actions that differ from what the
private sector had expected.
This consideration will be important in lecture credible government policies when we study
credible government policies.
The model is designed to focus on the intertemporal trade-offs between the welfare benefits
of deflation and the welfare costs associated with the high tax collections required to retire
money at a rate that delivers deflation.
A benevolent time 0 government can promote utility generating increases in real balances
only by imposing sufficiently large distorting tax collections.
To promote the welfare increasing effects of high real balances, the government wants to in-
duce gradual deflation.

40.3.3 Household’s Problem

Given 𝑀−1 and {𝑞𝑡}_{𝑡=0}^{∞}, the household’s problem is

ℒ = max_{𝑐⃗,𝑀⃗} min_{𝜆⃗,𝜇⃗} ∑_{𝑡=0}^{∞} 𝛽^𝑡 {𝑢(𝑐𝑡) + 𝑣(𝑀𝑡𝑞𝑡) + 𝜆𝑡 [𝑦𝑡 − 𝑐𝑡 − 𝑥𝑡 + 𝑞𝑡𝑀𝑡−1 − 𝑞𝑡𝑀𝑡] + 𝜇𝑡 [𝑚̄ − 𝑞𝑡𝑀𝑡]}

First-order conditions with respect to 𝑐𝑡 and 𝑀𝑡 , respectively, are

𝑢′ (𝑐𝑡 ) = 𝜆𝑡
𝑞𝑡 [𝑢′ (𝑐𝑡 ) − 𝑣′ (𝑀𝑡 𝑞𝑡 )] ≤ 𝛽𝑢′ (𝑐𝑡+1 )𝑞𝑡+1 , = if 𝑀𝑡 𝑞𝑡 < 𝑚̄

The last equation expresses Karush-Kuhn-Tucker complementary slackness conditions (see


here).
These insist that the inequality is an equality at an interior solution for 𝑀𝑡 .
Using ℎ𝑡 = 𝑀𝑡−1/𝑀𝑡 and 𝑞𝑡 = 𝑚𝑡/𝑀𝑡 in these first-order conditions and rearranging implies

𝑚𝑡 [𝑢′ (𝑐𝑡 ) − 𝑣′ (𝑚𝑡 )] ≤ 𝛽𝑢′ (𝑓(𝑥𝑡+1 ))𝑚𝑡+1 ℎ𝑡+1 , = if 𝑚𝑡 < 𝑚̄ (6)

Define the following key variable

𝜃𝑡+1 ≡ 𝑢′ (𝑓(𝑥𝑡+1 ))𝑚𝑡+1 ℎ𝑡+1 (7)

This is real money balances at time 𝑡 + 1 measured in units of marginal utility, which Chang
refers to as ‘the marginal utility of real balances’.
From the standpoint of the household at time 𝑡, equation (7) shows that 𝜃𝑡+1 intermediates
the influences of (𝑥⃗𝑡+1, 𝑚⃗𝑡+1) on the household’s choice of real balances 𝑚𝑡.
By “intermediates” we mean that the future paths (𝑥⃗𝑡+1, 𝑚⃗𝑡+1) influence 𝑚𝑡 entirely through
their effects on the scalar 𝜃𝑡+1.
The observation that the one dimensional promised marginal utility of real balances 𝜃𝑡+1
functions in this way is an important step in constructing a class of competitive equilibria
that have a recursive representation.
A closely related observation pervaded the analysis of Stackelberg plans in lecture dynamic
Stackelberg problems.

40.4 Competitive Equilibrium

Definition:
• A government policy is a pair of sequences (ℎ,⃗ 𝑥)⃗ where ℎ𝑡 ∈ Π ∀𝑡 ≥ 0.
• A price system is a nonnegative value of money sequence 𝑞.⃗
• An allocation is a triple of nonnegative sequences (𝑐⃗, 𝑚⃗, 𝑦⃗).

It is required that time 𝑡 components (𝑚𝑡 , 𝑥𝑡 , ℎ𝑡 ) ∈ 𝐸.
Definition:
Given 𝑀−1 , a government policy (ℎ,⃗ 𝑥),
⃗ price system 𝑞,⃗ and allocation (𝑐,⃗ 𝑚,⃗ 𝑦)⃗ are said to be
a competitive equilibrium if
• 𝑚𝑡 = 𝑞𝑡 𝑀𝑡 and 𝑦𝑡 = 𝑓(𝑥𝑡 ).

• The government budget constraint is satisfied.


• Given 𝑞,⃗ 𝑥,⃗ 𝑦,⃗ (𝑐,⃗ 𝑚)
⃗ solves the household’s problem.

40.5 Inventory of Objects in Play

Chang constructs the following objects

1. A set Ω of initial marginal utilities of money 𝜃0


• Let Ω denote the set of initial promised marginal utilities of money 𝜃0 associated with
competitive equilibria.
• Chang exploits the fact that a competitive equilibrium consists of a first period outcome
(ℎ0 , 𝑚0 , 𝑥0 ) and a continuation competitive equilibrium with marginal utility of money
𝜃1 ∈ Ω.

2. Competitive equilibria that have a recursive representation


• A competitive equilibrium with a recursive representation consists of an initial 𝜃0 and
a four-tuple of functions (ℎ, 𝑚, 𝑥, Ψ) mapping 𝜃 into this period’s (ℎ, 𝑚, 𝑥) and next pe-
riod’s 𝜃, respectively.
• A competitive equilibrium can be represented recursively by iterating on

ℎ𝑡 = ℎ(𝜃𝑡)
𝑚𝑡 = 𝑚(𝜃𝑡)
𝑥𝑡 = 𝑥(𝜃𝑡)          (8)
𝜃𝑡+1 = Ψ(𝜃𝑡)

starting from 𝜃0 (a minimal simulation sketch of this recursion appears after this list).
The range and domain of Ψ(⋅) are both Ω.

3. A recursive representation of a Ramsey plan


• A recursive representation of a Ramsey plan is a recursive competitive equilibrium
𝜃0, (ℎ, 𝑚, 𝑥, Ψ) that, among all recursive competitive equilibria, maximizes
∑_{𝑡=0}^{∞} 𝛽^𝑡 [𝑢(𝑐𝑡) + 𝑣(𝑞𝑡𝑀𝑡)].
• The Ramsey planner chooses 𝜃0 , (ℎ, 𝑚, 𝑥, Ψ) from among the set of recursive competi-
tive equilibria at time 0.
• Iterations on the function Ψ determine subsequent 𝜃𝑡 ’s that summarize the aspects of
the continuation competitive equilibria that influence the household’s decisions.
• At time 0, the Ramsey planner commits to this implied sequence {𝜃𝑡 }∞ 𝑡=0 and therefore
to an associated sequence of continuation competitive equilibria.

4. A characterization of time-inconsistency of a Ramsey plan


• Imagine that after a ‘revolution’ at time 𝑡 ≥ 1, a new Ramsey planner is given the op-
portunity to ignore history and solve a brand new Ramsey plan.
• This new planner would want to reset the 𝜃𝑡 associated with the original Ramsey plan
to 𝜃0 .
• The incentive to reinitialize 𝜃𝑡 associated with this revolution experiment indicates the
time-inconsistency of the Ramsey plan.
• By resetting 𝜃 to 𝜃0 , the new planner avoids the costs at time 𝑡 that the original Ram-
sey planner must pay to reap the beneficial effects that the original Ramsey plan for
𝑠 ≥ 𝑡 had achieved via its influence on the household’s decisions for 𝑠 = 0, … , 𝑡 − 1.
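To fix ideas about the recursion (8) in item 2, here is a minimal simulation sketch; the callables h_func, m_func, x_func, and Ψ are hypothetical stand-ins for the policy functions of a recursive competitive equilibrium:

# Hypothetical sketch: iterate (8) forward from an initial θ0
def simulate_recursive_ce(θ0, h_func, m_func, x_func, Ψ, T=30):
    θ, path = θ0, []
    for t in range(T):
        path.append((h_func(θ), m_func(θ), x_func(θ), θ))
        θ = Ψ(θ)  # update the promised marginal utility of real balances
    return path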

40.6 Analysis

A competitive equilibrium is a triple of sequences (𝑚,⃗ 𝑥,⃗ ℎ)⃗ ∈ 𝐸 ∞ that satisfies (2), (3), and
(6).
Chang works with a set of competitive equilibria defined as follows.
Definition: 𝐶𝐸 = {(𝑚,⃗ 𝑥,⃗ ℎ)⃗ ∈ 𝐸 ∞ such that (2), (3), and (6) are satisfied }.
𝐶𝐸 is not empty because there exists a competitive equilibrium with ℎ𝑡 = 1 for all 𝑡 ≥ 1,
namely, an equilibrium with a constant money supply and constant price level.
Chang establishes that 𝐶𝐸 is also compact.
Chang makes the following key observation that combines ideas of Abreu, Pearce, and Stac-
chetti [2] with insights of Kydland and Prescott [40].
Proposition: The continuation of a competitive equilibrium is a competitive equilibrium.
That is, (𝑚,⃗ 𝑥,⃗ ℎ)⃗ ∈ 𝐶𝐸 implies that (𝑚⃗ 𝑡 , 𝑥𝑡⃗ , ℎ⃗ 𝑡 ) ∈ 𝐶𝐸 ∀ 𝑡 ≥ 1.
(Lecture dynamic Stackelberg problems also used a version of this insight)
We can now state that a Ramsey problem is to


max_{(𝑚⃗,𝑥⃗,ℎ⃗)∈𝐸^∞} ∑_{𝑡=0}^{∞} 𝛽^𝑡 [𝑢(𝑐𝑡) + 𝑣(𝑚𝑡)]

subject to restrictions (2), (3), and (6).


Evidently, associated with any competitive equilibrium (𝑚0 , 𝑥0 ) is an implied value of 𝜃0 =
𝑢′ (𝑓(𝑥0 ))(𝑚0 + 𝑥0 ).
To bring out a recursive structure inherent in the Ramsey problem, Chang defines the set

Ω = {𝜃 ∈ ℝ such that 𝜃 = 𝑢′ (𝑓(𝑥0 ))(𝑚0 + 𝑥0 ) for some (𝑚,⃗ 𝑥,⃗ ℎ)⃗ ∈ 𝐶𝐸}

Equation (6) inherits from the household’s Euler equation for money holdings the prop-
erty that the value of 𝑚0 consistent with the representative household’s choices depends on
(ℎ⃗ 1 , 𝑚⃗ 1 ).
This dependence is captured in the definition above by making Ω be the set of first period
values of 𝜃0 satisfying 𝜃0 = 𝑢′(𝑓(𝑥0))(𝑚0 + 𝑥0) for first period component (𝑚0, ℎ0) of
competitive equilibrium sequences (𝑚⃗, 𝑥⃗, ℎ⃗).
Chang establishes that Ω is a nonempty and compact subset of ℝ+ .
Next Chang advances:
Definition: Γ(𝜃) = {(𝑚,⃗ 𝑥,⃗ ℎ)⃗ ∈ 𝐶𝐸|𝜃 = 𝑢′ (𝑓(𝑥0 ))(𝑚0 + 𝑥0 )}.
Thus, Γ(𝜃) is the set of competitive equilibrium sequences (𝑚,⃗ 𝑥,⃗ ℎ)⃗ whose first period compo-
nents (𝑚0 , ℎ0 ) deliver the prescribed value 𝜃 for first period marginal utility.
If we knew the sets Ω, Γ(𝜃), we could use the following two-step procedure to find at least the
value of the Ramsey outcome to the representative household

1. Find the indirect value function 𝑤(𝜃) defined as

𝑤(𝜃) = max_{(𝑚⃗,𝑥⃗,ℎ⃗)∈Γ(𝜃)} ∑_{𝑡=0}^{∞} 𝛽^𝑡 [𝑢(𝑓(𝑥𝑡)) + 𝑣(𝑚𝑡)]

2. Compute the value of the Ramsey outcome by solving max_{𝜃∈Ω} 𝑤(𝜃).

Thus, Chang states the following
Proposition:
𝑤(𝜃) satisfies the Bellman equation

𝑤(𝜃) = max_{𝑥,𝑚,ℎ,𝜃′} {𝑢(𝑓(𝑥)) + 𝑣(𝑚) + 𝛽𝑤(𝜃′)}     (9)

where maximization is subject to

(𝑚, 𝑥, ℎ) ∈ 𝐸 and 𝜃′ ∈ Ω (10)

and

𝜃 = 𝑢′ (𝑓(𝑥))(𝑚 + 𝑥) (11)

and

−𝑥 = 𝑚(1 − ℎ) (12)

and

𝑚 ⋅ [𝑢′ (𝑓(𝑥)) − 𝑣′ (𝑚)] ≤ 𝛽𝜃′ , = if 𝑚 < 𝑚̄ (13)

Before we use this proposition to recover a recursive representation of the Ramsey plan, note
that the proposition relies on knowing the set Ω.
To find Ω, Chang uses the insights of Kydland and Prescott [40] together with a method
based on the Abreu, Pearce, and Stacchetti [2] iteration to convergence on an operator 𝐵 that
maps continuation values into values.
We want an operator that maps a continuation 𝜃 into a current 𝜃.
Chang lets 𝑄 be a nonempty, bounded subset of ℝ.
Elements of the set 𝑄 are taken to be candidate values for continuation marginal utilities.
Chang defines an operator

𝐵(𝑄) = {𝜃 ∈ ℝ ∶ there is (𝑚, 𝑥, ℎ, 𝜃′) ∈ 𝐸 × 𝑄 such that (11), (12), and (13) hold}


Thus, 𝐵(𝑄) is the set of first period 𝜃’s attainable with (𝑚, 𝑥, ℎ) ∈ 𝐸 and some 𝜃′ ∈ 𝑄.
Proposition:

1. 𝑄 ⊂ 𝐵(𝑄) implies 𝐵(𝑄) ⊂ Ω (‘self-generation’).

2. Ω = 𝐵(Ω) (‘factorization’).

The proposition characterizes Ω as the largest fixed point of 𝐵.


It is easy to establish that 𝐵(𝑄) is a monotone operator.
This property allows Chang to compute Ω as the limit of iterations on 𝐵 provided that itera-
tions begin from a sufficiently large initial set.
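In outline, this computation can be organized as iteration to a largest fixed point over a grid of candidate 𝜃 values. A hedged sketch, in which in_B is a hypothetical membership test built from restrictions (11), (12), and (13):

# Schematic iteration on the monotone operator B over a grid of θ values.
# `in_B(θ, Q)` is a hypothetical predicate: True when some (m, x, h) ∈ E
# and θ' ∈ Q satisfy (11), (12), and (13) for this θ.
def iterate_B(θ_grid, in_B, max_iters=100):
    Q = set(θ_grid)                        # large initial set Q0
    for _ in range(max_iters):
        Q_next = {θ for θ in Q if in_B(θ, Q)}
        if Q_next == Q:                    # largest fixed point reached
            return Q
        Q = Q_next
    return Q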

40.6.1 Some Useful Notation

Let ℎ⃗ 𝑡 = (ℎ0 , ℎ1 , … , ℎ𝑡 ) denote a history of inverse money creation rates with time 𝑡 compo-
nent ℎ𝑡 ∈ Π.
A government strategy 𝜎 = {𝜎𝑡}_{𝑡=0}^{∞} is a 𝜎0 ∈ Π and, for 𝑡 ≥ 1, a sequence of functions 𝜎𝑡 ∶ Π^{𝑡−1} → Π.
Chang restricts the government’s choice of strategies to the following space:

𝐶𝐸𝜋 = {ℎ⃗ ∈ Π∞ ∶ there is some (𝑚,⃗ 𝑥)⃗ such that (𝑚,⃗ 𝑥,⃗ ℎ)⃗ ∈ 𝐶𝐸}

In words, 𝐶𝐸𝜋 is the set of money growth sequences consistent with the existence of competi-
tive equilibria.
Chang observes that 𝐶𝐸𝜋 is nonempty and compact.
Definition: 𝜎 is said to be admissible if for all 𝑡 ≥ 1 and after any history ℎ⃗ 𝑡−1 , the continua-
tion ℎ⃗ 𝑡 implied by 𝜎 belongs to 𝐶𝐸𝜋 .
Admissibility of 𝜎 means that anticipated policy choices associated with 𝜎 are consistent with
the existence of competitive equilibria after each possible subsequent history.
After any history ℎ⃗ 𝑡−1 , admissibility restricts the government’s choice in period 𝑡 to the set

𝐶𝐸𝜋0 = {ℎ ∈ Π ∶ there is ℎ⃗ ∈ 𝐶𝐸𝜋 with ℎ = ℎ0 }

In words, 𝐶𝐸𝜋0 is the set of all first period money growth rates ℎ = ℎ0 , each of which is con-
sistent with the existence of a sequence of money growth rates ℎ⃗ starting from ℎ0 in the ini-
tial period and for which a competitive equilibrium exists.
Remark: 𝐶𝐸𝜋0 = {ℎ ∈ Π ∶ there is (𝑚, 𝜃′) ∈ [0, 𝑚̄] × Ω such that 𝑚[𝑢′(𝑓((ℎ − 1)𝑚)) − 𝑣′(𝑚)] ≤ 𝛽𝜃′, with equality if 𝑚 < 𝑚̄}.
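The Remark suggests a rough numerical membership test for 𝐶𝐸𝜋0. Below is a hedged sketch using the functional forms adopted later in this lecture; the grid size and the bounds on Ω (taken from the 𝛽 = 0.8 computation in the final section) are illustrative assumptions:

import numpy as np

# Functional forms used later in this lecture (v' as in the ChangModel class)
u_p = lambda c: 1 / c                        # u'(c) for u(c) = log(c)
v_p = lambda m, mbar: 0.5/500 * (mbar*m - 0.5*m**2)**(-0.5) * (mbar - m)
f = lambda x: 180 - (0.4 * x)**2

def in_CE_pi0(h, β, mbar, θ_min, θ_max, n=500):
    # Rough grid test of whether h belongs to CE_π^0
    for m in np.linspace(1e-6, mbar, n):
        lhs = m * (u_p(f((h - 1) * m)) - v_p(m, mbar))
        if m < mbar and β * θ_min <= lhs <= β * θ_max:
            return True      # equality attainable for some θ' ∈ Ω
        if m == mbar and lhs <= β * θ_max:
            return True      # only the inequality is required at m̄
    return False

# Example with the Ω bounds for β = 0.8 reported in the final section
print(in_CE_pi0(1.0, β=0.8, mbar=30, θ_min=0.0395, θ_max=0.2193))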
Definition: An allocation rule is a sequence of functions 𝛼⃗ = {𝛼𝑡}_{𝑡=0}^{∞} such that 𝛼𝑡 ∶ Π^𝑡 → [0, 𝑚̄] × 𝑋.
Thus, the time 𝑡 component of 𝛼𝑡 (ℎ𝑡 ) is a pair of functions (𝑚𝑡 (ℎ𝑡 ), 𝑥𝑡 (ℎ𝑡 )).
Definition: Given an admissible government strategy 𝜎, an allocation rule 𝛼 is called com-
petitive if given any history ℎ⃗ 𝑡−1 and ℎ𝑡 ∈ 𝐶𝐸𝜋0 , the continuations of 𝜎 and 𝛼 after (ℎ⃗ 𝑡−1 , ℎ𝑡 )
induce a competitive equilibrium sequence.

40.6.2 Another Operator

At this point it is convenient to introduce another operator that can be used to compute a
Ramsey plan.
For computing a Ramsey plan, this operator is wasteful because it works with a state vector
that is bigger than necessary.
We introduce this operator because it helps to prepare the way for Chang’s operator called
𝐷̃(𝑍) that we shall describe in lecture credible government policies.
It is also useful because a fixed point of the operator to be defined here provides a good guess
for an initial set from which to initiate iterations on Chang’s set-to-set operator 𝐷̃(𝑍) to be
described in lecture credible government policies.
Let 𝑆 be the set of all pairs (𝑤, 𝜃) of competitive equilibrium values and associated initial
marginal utilities.
Let 𝑊 be a bounded set of values in ℝ.
Let 𝑍 be a nonempty subset of 𝑊 × Ω.
Think of using pairs (𝑤′, 𝜃′) drawn from 𝑍 as candidate continuation (𝑤, 𝜃) pairs.
Define the operator

𝐷(𝑍) = {(𝑤, 𝜃) ∶ there is ℎ ∈ 𝐶𝐸𝜋0
and a four-tuple (𝑚(ℎ), 𝑥(ℎ), 𝑤′(ℎ), 𝜃′(ℎ)) ∈ [0, 𝑚̄] × 𝑋 × 𝑍
such that

𝑤 = 𝑢(𝑓(𝑥(ℎ))) + 𝑣(𝑚(ℎ)) + 𝛽𝑤′(ℎ)     (14)

𝜃 = 𝑢′(𝑓(𝑥(ℎ)))(𝑚(ℎ) + 𝑥(ℎ))     (15)

𝑥(ℎ) = 𝑚(ℎ)(ℎ − 1)     (16)

𝑚(ℎ)(𝑢′(𝑓(𝑥(ℎ))) − 𝑣′(𝑚(ℎ))) ≤ 𝛽𝜃′(ℎ), with equality if 𝑚(ℎ) < 𝑚̄}     (17)

It is possible to establish the following propositions.
Proposition:

1. If 𝑍 ⊂ 𝐷(𝑍), then 𝐷(𝑍) ⊂ 𝑆 (‘self-generation’).

2. 𝑆 = 𝐷(𝑆) (‘factorization’).

Proposition:

1. Monotonicity of 𝐷: 𝑍 ⊂ 𝑍 ′ implies 𝐷(𝑍) ⊂ 𝐷(𝑍 ′ ).

2. 𝑍 compact implies that 𝐷(𝑍) is compact.

It can be shown that 𝑆 is compact and that therefore there exists a (𝑤, 𝜃) pair within this set
that attains the highest possible value 𝑤.
This (𝑤, 𝜃) pair is associated with a Ramsey plan.
Further, we can compute 𝑆 by iterating to convergence on 𝐷 provided that one begins with a
sufficiently large initial set 𝑆0 .
As a very useful by-product, the algorithm that finds the largest fixed point 𝑆 = 𝐷(𝑆) also
produces the Ramsey plan, its value 𝑤, and the associated competitive equilibrium.

40.7 Calculating all Promise-Value Pairs in CE

Above we have defined the 𝐷(𝑍) operator as:

𝐷(𝑍) = {(𝑤, 𝜃) ∶ ∃ℎ ∈ 𝐶𝐸𝜋0 and (𝑚(ℎ), 𝑥(ℎ), 𝑤′ (ℎ), 𝜃′ (ℎ)) ∈ [0, 𝑚]̄ × 𝑋 × 𝑍

such that

𝑤 = 𝑢(𝑓(𝑥(ℎ))) + 𝑣(𝑚(ℎ)) + 𝛽𝑤′ (ℎ)

𝜃 = 𝑢′ (𝑓(𝑥(ℎ)))(𝑚(ℎ) + 𝑥(ℎ))

𝑥(ℎ) = 𝑚(ℎ)(ℎ − 1)

𝑚(ℎ)(𝑢′(𝑓(𝑥(ℎ))) − 𝑣′(𝑚(ℎ))) ≤ 𝛽𝜃′(ℎ) (with equality if 𝑚(ℎ) < 𝑚̄)}

We noted that the set 𝑆 can be found by iterating to convergence on 𝐷, provided that we
start with a sufficiently large initial set 𝑆0 .
Our implementation builds on ideas in this notebook.
To find 𝑆 we use a numerical algorithm called the outer hyperplane approximation algorithm.
It was invented by Judd, Yeltekin, Conklin [37].
This algorithm constructs the smallest convex set that contains the fixed point of the 𝐷(𝑆)
operator.
Given that we are finding the smallest convex set that contains 𝑆, we can represent it on a
computer as the intersection of a finite number of half-spaces.
Let 𝐻 be a set of subgradients, and 𝐶 be a set of hyperplane levels.

We approximate 𝑆 by:

𝑆̃ = {(𝑤, 𝜃) | 𝐻 ⋅ (𝑤, 𝜃) ≤ 𝐶}

A key feature of this algorithm is that we discretize the action space, i.e., we create a grid of
possible values for 𝑚 and ℎ (note that 𝑥 is implied by 𝑚 and ℎ). This discretization simplifies
computation of 𝑆̃ by allowing us to find it by solving a sequence of linear programs.
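A set stored this way supports a one-line membership test: a point belongs to 𝑆̃ exactly when it satisfies every half-space inequality. A minimal sketch, with an illustrative 𝐻 and 𝐶 describing the unit square:

import numpy as np

# S̃ = {z : H @ z <= C}; here H and C describe the unit square [0, 1] x [0, 1]
H = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])  # subgradients
C = np.array([1., 0., 1., 0.])                            # hyperplane levels

def in_S_tilde(z):
    return np.all(H @ z <= C)

print(in_S_tilde(np.array([0.5, 0.5])))   # True: inside the square
print(in_S_tilde(np.array([1.5, 0.5])))   # False: violates the first face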
The outer hyperplane approximation algorithm proceeds as follows:

1. Initialize subgradients, 𝐻, and hyperplane levels, 𝐶0.

2. Given a set of subgradients, 𝐻, and hyperplane levels, 𝐶𝑡, for each subgradient ℎ𝑖 ∈ 𝐻:

• Solve a linear program (described below) for each action in the action space.
• Find the maximum and update the corresponding hyperplane level, 𝐶𝑖,𝑡+1.

3. If |𝐶𝑡+1 − 𝐶𝑡| > 𝜖, return to 2.

Step 1 simply creates a large initial set 𝑆0 .


Given some set 𝑆𝑡 , Step 2 then constructs the set 𝑆𝑡+1 = 𝐷(𝑆𝑡 ). The linear program in
Step 2 is designed to construct a set 𝑆𝑡+1 that is as large as possible while satisfying the con-
straints of the 𝐷(𝑆) operator.
To do this, for each subgradient ℎ𝑖 , and for each point in the action space (𝑚𝑗 , ℎ𝑗 ), we solve
the following problem:

max_{[𝑤′,𝜃′]} ℎ𝑖 ⋅ (𝑤, 𝜃)

subject to

𝐻 ⋅ (𝑤′ , 𝜃′ ) ≤ 𝐶𝑡

𝑤 = 𝑢(𝑓(𝑥𝑗 )) + 𝑣(𝑚𝑗 ) + 𝛽𝑤′

𝜃 = 𝑢′ (𝑓(𝑥𝑗 ))(𝑚𝑗 + 𝑥𝑗 )

𝑥𝑗 = 𝑚𝑗 (ℎ𝑗 − 1)

𝑚𝑗(𝑢′(𝑓(𝑥𝑗)) − 𝑣′(𝑚𝑗)) ≤ 𝛽𝜃′ (with equality if 𝑚𝑗 < 𝑚̄)

This problem maximizes the hyperplane level for a given set of actions.
The second part of Step 2 then finds the maximum possible hyperplane level across the action
space.
The algorithm constructs a sequence of progressively smaller sets 𝑆𝑡+1 ⊂ 𝑆𝑡 ⊂ 𝑆𝑡−1 ⋯ ⊂ 𝑆0 .

Step 3 ends the algorithm when the difference between these sets is small enough.
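To make one of these linear programs concrete, here is a hedged, self-contained sketch for a single subgradient ℎ𝑖 = (1, 0) and a single action with 𝑚𝑗 < 𝑚̄, so the Euler restriction holds with equality. All numbers are illustrative, not values from the model:

import numpy as np
from scipy.optimize import linprog

# Decision vector z = (w', θ').  Keep (w', θ') inside the current set,
# H @ z <= C_t, and impose βθ' = euler_j (the equality case).
H = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])  # subgradients
C_t = np.array([10., 10., 1., 0.])                        # current levels
β, euler_j = 0.8, 0.05

# With θ pinned down by the action, maximizing h_i · (w, θ) for
# h_i = (1, 0) reduces to maximizing w = const + βw'
c = np.array([-β, 0.])
res = linprog(c, A_ub=H, b_ub=C_t,
              A_eq=np.array([[0., -β]]), b_eq=np.array([-euler_j]),
              bounds=[(None, None), (None, None)])
print(res.x)   # optimal continuation pair (w', θ')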
We have created a Python class that solves the model assuming the following functional
forms:

𝑢(𝑐) = log(𝑐)

𝑣(𝑚) = (1/500)(𝑚̄𝑚 − 0.5𝑚^2)^{0.5}

𝑓(𝑥) = 180 − (0.4𝑥)^2

The remaining parameters {𝛽, 𝑚̄, ℎ̲, ℎ̄} are then variables to be specified for an instance of the
Chang class.
Below we use the class to solve the model and plot the resulting equilibrium set, once with
𝛽 = 0.3 and once with 𝛽 = 0.8.
(Here we have set the number of subgradients to 10 in order to speed up the code for now -
we can increase accuracy by increasing the number of subgradients)

In [3]: """
Provides a class called ChangModel to solve different
parameterizations of the Chang (1998) model.
"""

import numpy as np
import quantecon as qe
import time

from scipy.spatial import ConvexHull


from scipy.optimize import linprog, minimize, minimize_scalar
from scipy.interpolate import UnivariateSpline
import numpy.polynomial.chebyshev as cheb

class ChangModel:
"""
Class to solve for the competitive and sustainable sets in the Chang�
↪(1998)

model, for different parameterizations.


"""

def __init__(self, β, mbar, h_min, h_max, n_h, n_m, N_g):


# Record parameters
self.β, self.mbar, self.h_min, self.h_max = β, mbar, h_min, h_max
self.n_h, self.n_m, self.N_g = n_h, n_m, N_g

# Create other parameters


self.m_min = 1e-9
self.m_max = self.mbar
self.N_a = self.n_h*self.n_m

# Utility and production functions


uc = lambda c: np.log(c)

uc_p = lambda c: 1/c


v = lambda m: 1/500 * (mbar * m - 0.5 * m**2)**0.5
v_p = lambda m: 0.5/500 * (mbar * m - 0.5 * m**2)**(-0.5) * (mbar - m)
u = lambda h, m: uc(f(h, m)) + v(m)

def f(h, m):


x = m * (h - 1)
f = 180 - (0.4 * x)**2
return f

def θ(h, m):


x = m * (h - 1)
θ = uc_p(f(h, m)) * (m + x)
return θ

# Create set of possible action combinations, A


A1 = np.linspace(h_min, h_max, n_h).reshape(n_h, 1)
A2 = np.linspace(self.m_min, self.m_max, n_m).reshape(n_m, 1)
self.A = np.concatenate((np.kron(np.ones((n_m, 1)), A1),
np.kron(A2, np.ones((n_h, 1)))), axis=1)

# Pre-compute utility and output vectors


self.euler_vec = -np.multiply(self.A[:, 1], \
uc_p(f(self.A[:, 0], self.A[:, 1])) - v_p(self.A[:, 1]))
self.u_vec = u(self.A[:, 0], self.A[:, 1])
self.Θ_vec = θ(self.A[:, 0], self.A[:, 1])
self.f_vec = f(self.A[:, 0], self.A[:, 1])
self.bell_vec = np.multiply(uc_p(f(self.A[:, 0],
self.A[:, 1])),
np.multiply(self.A[:, 1],
(self.A[:, 0] - 1))) \
+ np.multiply(self.A[:, 1],
v_p(self.A[:, 1]))

# Find extrema of (w, θ) space for initial guess of equilibrium sets


p_vec = np.zeros(self.N_a)
w_vec = np.zeros(self.N_a)
for i in range(self.N_a):
p_vec[i] = self.Θ_vec[i]
w_vec[i] = self.u_vec[i]/(1 - β)

w_space = np.array([min(w_vec[~np.isinf(w_vec)]),
max(w_vec[~np.isinf(w_vec)])])
p_space = np.array([0, max(p_vec[~np.isinf(w_vec)])])
self.p_space = p_space

# Set up hyperplane levels and gradients for iterations


def SG_H_V(N, w_space, p_space):
"""
This function initializes the subgradients, hyperplane levels,
and extreme points of the value set by choosing an appropriate
origin and radius. It is based on a similar function in QuantEcon's Games.jl
"""

# First, create a unit circle. Want points placed on [0, 2π]


inc = 2 * np.pi / N

degrees = np.arange(0, 2 * np.pi, inc)

# Points on circle
H = np.zeros((N, 2))
for i in range(N):
x = degrees[i]
H[i, 0] = np.cos(x)
H[i, 1] = np.sin(x)

# Then calculate origin and radius


o = np.array([np.mean(w_space), np.mean(p_space)])
r1 = max((max(w_space) - o[0])**2, (o[0] - min(w_space))**2)
r2 = max((max(p_space) - o[1])**2, (o[1] - min(p_space))**2)
r = np.sqrt(r1 + r2)

# Now calculate vertices


Z = np.zeros((2, N))
for i in range(N):
Z[0, i] = o[0] + r*H.T[0, i]
Z[1, i] = o[1] + r*H.T[1, i]

# Corresponding hyperplane levels


C = np.zeros(N)
for i in range(N):
C[i] = np.dot(Z[:, i], H[i, :])

return C, H, Z

C, self.H, Z = SG_H_V(N_g, w_space, p_space)


C = C.reshape(N_g, 1)
self.c0_c, self.c0_s, self.c1_c, self.c1_s = np.copy(C), np.copy(C), \
    np.copy(C), np.copy(C)
self.z0_s, self.z0_c, self.z1_s, self.z1_c = np.copy(Z), np.copy(Z), \
    np.copy(Z), np.copy(Z)

self.w_bnds_s, self.w_bnds_c = (w_space[0], w_space[1]), \


(w_space[0], w_space[1])
self.p_bnds_s, self.p_bnds_c = (p_space[0], p_space[1]), \
(p_space[0], p_space[1])

# Create dictionaries to save equilibrium set for each iteration


self.c_dic_s, self.c_dic_c = {}, {}
self.c_dic_s[0], self.c_dic_c[0] = self.c0_s, self.c0_c

def solve_worst_spe(self):
"""
Method to solve for BR(Z). See p.449 of Chang (1998)
"""

p_vec = np.full(self.N_a, np.nan)


c = [1, 0]

# Pre-compute constraints
aineq_mbar = np.vstack((self.H, np.array([0, -self.β])))
bineq_mbar = np.vstack((self.c0_s, 0))

aineq = self.H
bineq = self.c0_s
aeq = [[0, -self.β]]

for j in range(self.N_a):
# Only try if consumption is possible
if self.f_vec[j] > 0:
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_mbar[-1] = self.euler_vec[j]
res = linprog(c, A_ub=aineq_mbar, b_ub=bineq_mbar,
bounds=(self.w_bnds_s, self.p_bnds_s))
else:
beq = self.euler_vec[j]
res = linprog(c, A_ub=aineq, b_ub=bineq, A_eq=aeq, b_eq=beq,
              bounds=(self.w_bnds_s, self.p_bnds_s))
if res.status == 0:
p_vec[j] = self.u_vec[j] + self.β * res.x[0]

# Max over h and min over other variables (see Chang (1998) p.449)
self.br_z = np.nanmax(np.nanmin(p_vec.reshape(self.n_m, self.n_h), 0))

def solve_subgradient(self):
"""
Method to solve for E(Z). See p.449 of Chang (1998)
"""

# Pre-compute constraints
aineq_C_mbar = np.vstack((self.H, np.array([0, -self.β])))
bineq_C_mbar = np.vstack((self.c0_c, 0))

aineq_C = self.H
bineq_C = self.c0_c
aeq_C = [[0, -self.β]]

aineq_S_mbar = np.vstack((np.vstack((self.H, np.array([0, -self.β]))),
                          np.array([-self.β, 0])))
bineq_S_mbar = np.vstack((self.c0_s, np.zeros((2, 1))))

aineq_S = np.vstack((self.H, np.array([-self.β, 0])))


bineq_S = np.vstack((self.c0_s, 0))
aeq_S = [[0, -self.β]]

# Update maximal hyperplane level


for i in range(self.N_g):
c_a1a2_c, t_a1a2_c = np.full(self.N_a, -np.inf), \
np.zeros((self.N_a, 2))
c_a1a2_s, t_a1a2_s = np.full(self.N_a, -np.inf), \
np.zeros((self.N_a, 2))

c = [-self.H[i, 0], -self.H[i, 1]]

for j in range(self.N_a):
# Only try if consumption is possible
if self.f_vec[j] > 0:

# COMPETITIVE EQUILIBRIA
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_C_mbar[-1] = self.euler_vec[j]
res = linprog(c, A_ub=aineq_C_mbar, b_ub=bineq_C_mbar,
bounds=(self.w_bnds_c, self.p_bnds_c))
# If m < mbar, use equality constraint
else:
beq_C = self.euler_vec[j]
res = linprog(c, A_ub=aineq_C, b_ub=bineq_C, A_eq=aeq_C,
              b_eq=beq_C, bounds=(self.w_bnds_c,
                                  self.p_bnds_c))
if res.status == 0:
c_a1a2_c[j] = self.H[i, 0] * (self.u_vec[j] \
+ self.β * res.x[0]) + self.H[i, 1] * self.Θ_vec[j]
t_a1a2_c[j] = res.x

# SUSTAINABLE EQUILIBRIA
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_S_mbar[-2] = self.euler_vec[j]
bineq_S_mbar[-1] = self.u_vec[j] - self.br_z
res = linprog(c, A_ub=aineq_S_mbar, b_ub=bineq_S_mbar,
bounds=(self.w_bnds_s, self.p_bnds_s))
# If m < mbar, use equality constraint
else:
bineq_S[-1] = self.u_vec[j] - self.br_z
beq_S = self.euler_vec[j]
res = linprog(c, A_ub=aineq_S, b_ub=bineq_S, A_eq=aeq_S,
              b_eq=beq_S, bounds=(self.w_bnds_s,
                                  self.p_bnds_s))
if res.status == 0:
c_a1a2_s[j] = self.H[i, 0] * (self.u_vec[j] \
+ self.β*res.x[0]) + self.H[i, 1] * self.Θ_vec[j]
t_a1a2_s[j] = res.x

idx_c = np.where(c_a1a2_c == max(c_a1a2_c))[0][0]


self.z1_c[:, i] = np.array([self.u_vec[idx_c]
+ self.β * t_a1a2_c[idx_c, 0],
self.Θ_vec[idx_c]])

idx_s = np.where(c_a1a2_s == max(c_a1a2_s))[0][0]


self.z1_s[:, i] = np.array([self.u_vec[idx_s]
+ self.β * t_a1a2_s[idx_s, 0],
self.Θ_vec[idx_s]])

for i in range(self.N_g):
self.c1_c[i] = np.dot(self.z1_c[:, i], self.H[i, :])
self.c1_s[i] = np.dot(self.z1_s[:, i], self.H[i, :])

def solve_sustainable(self, tol=1e-5, max_iter=250):


"""
Method to solve for the competitive and sustainable equilibrium sets.
"""

t = time.time()
diff = tol + 1
iters = 0

print('### --------------- ###')


print('Solving Chang Model Using Outer Hyperplane Approximation')
print('### --------------- ### \n')

print('Maximum difference when updating hyperplane levels:')

while diff > tol and iters < max_iter:


iters = iters + 1
self.solve_worst_spe()
self.solve_subgradient()
diff = max(np.maximum(abs(self.c0_c - self.c1_c),
abs(self.c0_s - self.c1_s)))
print(diff)

# Update hyperplane levels


self.c0_c, self.c0_s = np.copy(self.c1_c), np.copy(self.c1_s)

# Update bounds for w and θ


wmin_c, wmax_c = np.min(self.z1_c, axis=1)[0], \
np.max(self.z1_c, axis=1)[0]
pmin_c, pmax_c = np.min(self.z1_c, axis=1)[1], \
np.max(self.z1_c, axis=1)[1]

wmin_s, wmax_s = np.min(self.z1_s, axis=1)[0], \


np.max(self.z1_s, axis=1)[0]
pmin_S, pmax_S = np.min(self.z1_s, axis=1)[1], \
np.max(self.z1_s, axis=1)[1]

self.w_bnds_s, self.w_bnds_c = (wmin_s, wmax_s), (wmin_c, wmax_c)


self.p_bnds_s, self.p_bnds_c = (pmin_S, pmax_S), (pmin_c, pmax_c)

# Save iteration
self.c_dic_c[iters], self.c_dic_s[iters] = np.copy(self.c1_c), \
np.copy(self.c1_s)
self.iters = iters

elapsed = time.time() - t
print('Convergence achieved after {} iterations and {} \
seconds'.format(iters, round(elapsed, 2)))

def solve_bellman(self, θ_min, θ_max, order, disp=False, tol=1e-7,
                  maxiters=100):
"""
Continuous Method to solve the Bellman equation in section 25.3
"""
mbar = self.mbar

# Utility and production functions


uc = lambda c: np.log(c)
uc_p = lambda c: 1 / c
v = lambda m: 1 / 500 * (mbar * m - 0.5 * m**2)**0.5
v_p = lambda m: 0.5/500 * (mbar*m - 0.5 * m**2)**(-0.5) * (mbar - m)
u = lambda h, m: uc(f(h, m)) + v(m)

def f(h, m):


x = m * (h - 1)
f = 180 - (0.4 * x)**2
return f

def θ(h, m):


x = m * (h - 1)
θ = uc_p(f(h, m)) * (m + x)
return θ

# Bounds for Maximization


lb1 = np.array([self.h_min, 0, θ_min])
ub1 = np.array([self.h_max, self.mbar - 1e-5, θ_max])
lb2 = np.array([self.h_min, θ_min])
ub2 = np.array([self.h_max, θ_max])

# Initialize Value Function coefficients


# Calculate roots of Chebyshev polynomial
k = np.linspace(order, 1, order)
roots = np.cos((2 * k - 1) * np.pi / (2 * order))
# Scale to approximation space
s = θ_min + (roots - -1) / 2 * (θ_max - θ_min)
# Create a basis matrix
Φ = cheb.chebvander(roots, order - 1)
c = np.zeros(Φ.shape[0])

# Function to minimize and constraints


def p_fun(x):
scale = -1 + 2 * (x[2] - θ_min)/(θ_max - θ_min)
p_fun = - (u(x[0], x[1]) \
+ self.β * np.dot(cheb.chebvander(scale, order - 1), c))
return p_fun

def p_fun2(x):
scale = -1 + 2*(x[1] - θ_min)/(θ_max - θ_min)
p_fun = - (u(x[0],mbar) \
+ self.β * np.dot(cheb.chebvander(scale, order - 1), c))
return p_fun

cons1 = ({'type': 'eq', 'fun': lambda x: uc_p(f(x[0], x[1])) * x[1]


* (x[0] - 1) + v_p(x[1]) * x[1] + self.β * x[2] - θ},
{'type': 'eq', 'fun': lambda x: uc_p(f(x[0], x[1]))
* x[0] * x[1] - θ})
cons2 = ({'type': 'ineq', 'fun': lambda x: uc_p(f(x[0], mbar)) * mbar
* (x[0] - 1) + v_p(mbar) * mbar + self.β * x[1] - θ},
{'type': 'eq', 'fun': lambda x: uc_p(f(x[0], mbar))
* x[0] * mbar - θ})

bnds1 = np.concatenate([lb1.reshape(3, 1), ub1.reshape(3, 1)], axis=1)
bnds2 = np.concatenate([lb2.reshape(2, 1), ub2.reshape(2, 1)], axis=1)

# Bellman Iterations
diff = 1
iters = 1

while diff > tol:



# 1. Maximization, given value function guess


p_iter1 = np.zeros(order)
for i in range(order):
θ = s[i]
res = minimize(p_fun,
lb1 + (ub1-lb1) / 2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success == True:
p_iter1[i] = -p_fun(res.x)
res = minimize(p_fun2,
lb2 + (ub2-lb2) / 2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res.x) > p_iter1[i] and res.success == True:
p_iter1[i] = -p_fun2(res.x)

# 2. Bellman updating of Value Function coefficients


c1 = np.linalg.solve(Φ, p_iter1)
# 3. Compute distance and update
diff = np.linalg.norm(c - c1)
if bool(disp == True):
print(diff)
c = np.copy(c1)
iters = iters + 1
if iters > maxiters:
print('Convergence failed after {} iterations'.format(maxiters))
break

self.θ_grid = s
self.p_iter = p_iter1
self.Φ = Φ
self.c = c
print('Convergence achieved after {} iterations'.format(iters))

# Check residuals
θ_grid_fine = np.linspace(θ_min, θ_max, 100)
resid_grid = np.zeros(100)
p_grid = np.zeros(100)
θ_prime_grid = np.zeros(100)
m_grid = np.zeros(100)
h_grid = np.zeros(100)
for i in range(100):
θ = θ_grid_fine[i]
res = minimize(p_fun,
lb1 + (ub1-lb1) / 2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success == True:
p = -p_fun(res.x)
p_grid[i] = p

θ_prime_grid[i] = res.x[2]
h_grid[i] = res.x[0]
m_grid[i] = res.x[1]
res = minimize(p_fun2,
lb2 + (ub2-lb2)/2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res.x) > p and res.success == True:
p = -p_fun2(res.x)
p_grid[i] = p
θ_prime_grid[i] = res.x[1]
h_grid[i] = res.x[0]
m_grid[i] = self.mbar
scale = -1 + 2 * (θ - θ_min)/(θ_max - θ_min)
resid_grid[i] = np.dot(cheb.chebvander(scale, order-1), c) - p

self.resid_grid = resid_grid
self.θ_grid_fine = θ_grid_fine
self.θ_prime_grid = θ_prime_grid
self.m_grid = m_grid
self.h_grid = h_grid
self.p_grid = p_grid
self.x_grid = m_grid * (h_grid - 1)

# Simulate
θ_series = np.zeros(31)
m_series = np.zeros(30)
h_series = np.zeros(30)

# Find initial θ
def ValFun(x):
scale = -1 + 2*(x - θ_min)/(θ_max - θ_min)
p_fun = np.dot(cheb.chebvander(scale, order - 1), c)
return -p_fun

res = minimize(ValFun,
(θ_min + θ_max)/2,
bounds=[(θ_min, θ_max)])
θ_series[0] = res.x

# Simulate
for i in range(30):
θ = θ_series[i]
res = minimize(p_fun,
lb1 + (ub1-lb1)/2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success == True:
p = -p_fun(res.x)
h_series[i] = res.x[0]
m_series[i] = res.x[1]
θ_series[i+1] = res.x[2]
res2 = minimize(p_fun2,
lb2 + (ub2-lb2)/2,

method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res2.x) > p and res2.success == True:
h_series[i] = res2.x[0]
m_series[i] = self.mbar
θ_series[i+1] = res2.x[1]

self.θ_series = θ_series
self.m_series = m_series
self.h_series = h_series
self.x_series = m_series * (h_series - 1)

In [4]: ch1 = ChangModel(β=0.3, mbar=30, h_min=0.9, h_max=2, n_h=8, n_m=35, N_g=10)
        ch1.solve_sustainable()

### --------------- ###
Solving Chang Model Using Outer Hyperplane Approximation
### --------------- ###

Maximum difference when updating hyperplane levels:


[1.9168]
[0.66782]
[0.49235]
[0.32412]
[0.19022]


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-d19a06b35f4c> in <module>
      1 ch1 = ChangModel(β=0.3, mbar=30, h_min=0.9, h_max=2, n_h=8, n_m=35, N_g=10)
----> 2 ch1.solve_sustainable()

<ipython-input-3-04bea48ab06f> in solve_sustainable(self, tol, max_iter)
    269             iters = iters + 1
    270             self.solve_worst_spe()
--> 271             self.solve_subgradient()
    272             diff = max(np.maximum(abs(self.c0_c - self.c1_c),
    273                                   abs(self.c0_s - self.c1_s)))

<ipython-input-3-04bea48ab06f> in solve_subgradient(self)
    231                         res = linprog(c, A_ub=aineq_S, b_ub=bineq_S, A_eq=aeq_S,
    232                                       b_eq=beq_S, bounds=(self.w_bnds_s,
--> 233                                                           self.p_bnds_s))
    234                     if res.status == 0:
    235                         c_a1a2_s[j] = self.H[i, 0] * (self.u_vec[j] \

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog.py in linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method, callback, options, x0)
    567                                               complete, status,
    568                                               message, tol,
--> 569                                               iteration, disp)
    570
    571     sol = {

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog_util.py in _postprocess(x, postsolve_args, complete, status, message, tol, iteration, disp)
   1477     status, message = _check_result(
   1478         x, fun, status, slack, con,
-> 1479         lb, ub, tol, message
   1480     )
   1481

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog_util.py in _check_result(x, fun, status, slack, con, lb, ub, tol, message)
   1392     # nearly basic feasible solution. Postsolving can make the solution
   1393     # basic, however, this solution is NOT optimal
-> 1394     raise ValueError(message)
   1395
   1396     return status, message

ValueError: The algorithm terminated successfully and determined that the problem is infeasible.

In [5]: def plot_competitive(ChangModel):
            """
            Method that only plots competitive equilibrium set
            """
            poly_C = polytope.Polytope(ChangModel.H, ChangModel.c1_c)
            ext_C = polytope.extreme(poly_C)

            fig, ax = plt.subplots(figsize=(7, 5))

            ax.set_xlabel('w', fontsize=16)
            ax.set_ylabel(r"$\theta$", fontsize=18)

            ax.fill(ext_C[:, 0], ext_C[:, 1], 'r', zorder=0)

            ChangModel.min_theta = min(ext_C[:, 1])
            ChangModel.max_theta = max(ext_C[:, 1])

            # Add point showing Ramsey Plan
            idx_Ramsey = np.where(ext_C[:, 0] == max(ext_C[:, 0]))[0][0]
            R = ext_C[idx_Ramsey, :]
            ax.scatter(R[0], R[1], 150, 'black', 'o', zorder=1)
            w_min = min(ext_C[:, 0])

            # Label Ramsey Plan slightly to the right of the point
            ax.annotate("R", xy=(R[0], R[1]),
                        xytext=(R[0] + 0.03 * (R[0] - w_min), R[1]),
                        fontsize=18)

            plt.tight_layout()
            plt.show()

        plot_competitive(ch1)

In [6]: ch2 = ChangModel(β=0.8, mbar=30, h_min=0.9, h_max=1/0.8,
                         n_h=8, n_m=35, N_g=10)
        ch2.solve_sustainable()

### --------------- ###
Solving Chang Model Using Outer Hyperplane Approximation
### --------------- ###

Maximum difference when updating hyperplane levels:


[0.06369]
[0.02476]
[0.02153]
[0.01915]
[0.01795]
[0.01642]
[0.01507]
[0.01284]
[0.01106]
[0.00694]
[0.0085]
[0.00781]
[0.00433]
[0.00492]
[0.00303]
[0.00182]


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-6-1970f7c91f36> in <module>
      1 ch2 = ChangModel(β=0.8, mbar=30, h_min=0.9, h_max=1/0.8,
      2                  n_h=8, n_m=35, N_g=10)
----> 3 ch2.solve_sustainable()

<ipython-input-3-04bea48ab06f> in solve_sustainable(self, tol, max_iter)
    269             iters = iters + 1
    270             self.solve_worst_spe()
--> 271             self.solve_subgradient()
    272             diff = max(np.maximum(abs(self.c0_c - self.c1_c),
    273                                   abs(self.c0_s - self.c1_s)))

<ipython-input-3-04bea48ab06f> in solve_subgradient(self)
    231                         res = linprog(c, A_ub=aineq_S, b_ub=bineq_S, A_eq=aeq_S,
    232                                       b_eq=beq_S, bounds=(self.w_bnds_s,
--> 233                                                           self.p_bnds_s))
    234                     if res.status == 0:
    235                         c_a1a2_s[j] = self.H[i, 0] * (self.u_vec[j] \

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog.py in linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method, callback, options, x0)
    567                                               complete, status,
    568                                               message, tol,
--> 569                                               iteration, disp)
    570
    571     sol = {

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog_util.py in _postprocess(x, postsolve_args, complete, status, message, tol, iteration, disp)
   1477     status, message = _check_result(
   1478         x, fun, status, slack, con,
-> 1479         lb, ub, tol, message
   1480     )
   1481

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog_util.py in _check_result(x, fun, status, slack, con, lb, ub, tol, message)
   1392     # nearly basic feasible solution. Postsolving can make the solution
   1393     # basic, however, this solution is NOT optimal
-> 1394     raise ValueError(message)
   1395
   1396     return status, message

ValueError: The algorithm terminated successfully and determined that the problem is infeasible.

In [7]: plot_competitive(ch2)

40.8 Solving a Continuation Ramsey Planner’s Bellman Equation

In this section we solve the Bellman equation confronting a continuation Ramsey planner.
The construction of a Ramsey plan is decomposed into two subproblems in Ramsey plans,
time inconsistency, sustainable plans and dynamic Stackelberg problems.
• Subproblem 1 is faced by a sequence of continuation Ramsey planners at 𝑡 ≥ 1.
• Subproblem 2 is faced by a Ramsey planner at 𝑡 = 0.
The problem is:

𝐽(𝜃) = max_{𝑚,𝑥,ℎ,𝜃′} {𝑢(𝑓(𝑥)) + 𝑣(𝑚) + 𝛽𝐽(𝜃′)}

subject to:

𝜃 ≤ 𝑢′ (𝑓(𝑥))𝑥 + 𝑣′ (𝑚)𝑚 + 𝛽𝜃′

𝜃 = 𝑢′ (𝑓(𝑥))(𝑚 + 𝑥)

𝑥 = 𝑚(ℎ − 1)

(𝑚, 𝑥, ℎ) ∈ 𝐸

𝜃′ ∈ Ω

To solve this Bellman equation, we must know the set Ω.


We have solved the Bellman equation for the two sets of parameter values for which we com-
puted the equilibrium value sets above.
Hence for these parameter configurations, we know the bounds of Ω.
The two sets of parameters differ only in the level of 𝛽.
From the figures earlier in this lecture, we know that when 𝛽 = 0.3, Ω = [0.0088, 0.0499], and
when 𝛽 = 0.8, Ω = [0.0395, 0.2193].
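The solve_bellman method approximates 𝐽(𝜃) with a Chebyshev polynomial basis on these intervals. The essential detail is the affine map of 𝜃 from [𝜃_min, 𝜃_max] into [−1, 1] before evaluating the basis; a minimal sketch with illustrative coefficients:

import numpy as np
import numpy.polynomial.chebyshev as cheb

def J_approx(θ, c, θ_min, θ_max):
    # Map θ into [-1, 1] and evaluate the Chebyshev basis, as in the
    # p_fun/ValFun helpers inside solve_bellman
    scale = -1 + 2 * (θ - θ_min) / (θ_max - θ_min)
    return cheb.chebvander(scale, len(c) - 1) @ c

c = np.array([1.0, 0.5, 0.1])   # illustrative coefficient vector
print(J_approx(0.03, c, θ_min=0.01, θ_max=0.0499))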

In [8]: ch1 = ChangModel(β=0.3, mbar=30, h_min=0.99, h_max=1/0.3,
                         n_h=8, n_m=35, N_g=50)
        ch2 = ChangModel(β=0.8, mbar=30, h_min=0.1, h_max=1/0.8,
                         n_h=20, n_m=50, N_g=50)

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:33:
RuntimeWarning: invalid value encountered in log

In [9]: ch1.solve_bellman(θ_min=0.01, θ_max=0.0499, order=30, tol=1e-6)
        ch2.solve_bellman(θ_min=0.045, θ_max=0.15, order=30, tol=1e-6)

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:309:
RuntimeWarning: invalid value encountered in log

Convergence achieved after 15 iterations


Convergence achieved after 72 iterations

First, a quick check that our approximations of the value functions are good.
We do this by calculating the residuals between iterates on the value function on a fine grid:

In [10]: max(abs(ch1.resid_grid)), max(abs(ch2.resid_grid))

Out[10]: (6.463131040135295e-06, 6.875693472352395e-07)

The value functions plotted below trace out the right edges of the sets of equilibrium values
plotted above

In [11]: fig, axes = plt.subplots(1, 2, figsize=(12, 4))

for ax, model in zip(axes, (ch1, ch2)):


ax.plot(model.θ_grid, model.p_iter)
ax.set(xlabel=r"$\theta$",
ylabel=r"$J(\theta)$",
title=rf"$\beta = {model.β}$")

plt.show()

The next figure plots the optimal policy functions; values of 𝜃′ , 𝑚, 𝑥, ℎ for each value of the
state 𝜃:

In [12]: for model in (ch1, ch2):

fig, axes = plt.subplots(2, 2, figsize=(12, 6), sharex=True)


fig.suptitle(rf"$\beta = {model.β}$", fontsize=16)

plots = [model.θ_prime_grid, model.m_grid,


model.h_grid, model.x_grid]
labels = [r"$\theta'$", "$m$", "$h$", "$x$"]

for ax, plot, label in zip(axes.flatten(), plots, labels):


ax.plot(model.θ_grid_fine, plot)
ax.set_xlabel(r"$\theta$", fontsize=14)
ax.set_ylabel(label, fontsize=14)

plt.show()

With the first set of parameter values, the value of 𝜃′ chosen by the Ramsey planner quickly
hits the upper limit of Ω.
But with the second set of parameters it converges to a value in the interior of the set.
Consequently, the choice of 𝜃̄ is clearly important with the first set of parameter values.
One way of seeing this is plotting 𝜃′(𝜃) for each set of parameters.
With the first set of parameter values, this function does not intersect the 45-degree line until
𝜃̄, whereas in the second set of parameter values, it intersects in the interior.

In [13]: fig, axes = plt.subplots(1, 2, figsize=(12, 4))

         for ax, model in zip(axes, (ch1, ch2)):
             ax.plot(model.θ_grid_fine, model.θ_prime_grid,
                     label=r"$\theta'(\theta)$")
             ax.plot(model.θ_grid_fine, model.θ_grid_fine, label=r"$\theta$")
             ax.set(xlabel=r"$\theta$", title=rf"$\beta = {model.β}$")

         axes[0].legend()
         plt.show()

Subproblem 2 is equivalent to the planner choosing the initial value of 𝜃 (i.e. the value which
maximizes the value function).
From this starting point, we can then trace out the paths for {𝜃𝑡, 𝑚𝑡, ℎ𝑡, 𝑥𝑡}_{𝑡=0}^{∞} that
support this equilibrium.
These are shown below for both sets of parameters

In [14]: for model in (ch1, ch2):
             fig, axes = plt.subplots(2, 2, figsize=(12, 6))
             fig.suptitle(rf"$\beta = {model.β}$")

             plots = [model.θ_series, model.m_series, model.h_series,
                      model.x_series]
             labels = [r"$\theta$", "$m$", "$h$", "$x$"]

             for ax, plot, label in zip(axes.flatten(), plots, labels):
                 ax.plot(plot)
                 ax.set(xlabel='t', ylabel=label)

             plt.show()

40.8.1 Next Steps

In Credible Government Policies in Chang Model we shall find a subset of competitive equi-
libria that are sustainable in the sense that a sequence of government administrations that
chooses sequentially, rather than once and for all at time 0 will choose to implement them.
In the process of constructing them, we shall construct another, smaller set of competitive
equilibria.
Chapter 41

Credible Government Policies in a Model of Chang

41.1 Contents

• Overview 41.2
• The Setting 41.3
• Calculating the Set of Sustainable Promise-Value Pairs 41.4
In addition to what’s in Anaconda, this lecture will need the following libraries:

In [1]: !pip install polytope

41.2 Overview

Some of the material in this lecture and competitive equilibria in the Chang model can be
viewed as more sophisticated and complete treatments of the topics discussed in Ramsey
plans, time inconsistency, sustainable plans.
This lecture assumes almost the same economic environment analyzed in competitive equilib-
ria in the Chang model.
The only change – and it is a substantial one – is the timing protocol for making government
decisions.
In competitive equilibria in the Chang model, a Ramsey planner chose a comprehensive gov-
ernment policy once-and-for-all at time 0.
Now in this lecture, there is no time 0 Ramsey planner.
Instead there is a sequence of government decision-makers, one for each 𝑡.
The time 𝑡 government decision-maker choose time 𝑡 government actions after forecasting
what future governments will do.
We use the notion of a sustainable plan proposed in [15], also referred to as a credible public
policy in [62].
Technically, this lecture starts where lecture competitive equilibria in the Chang model on
Ramsey plans within the Chang [14] model stopped.


That lecture presents recursive representations of competitive equilibria and a Ramsey plan for
a version of a model of Calvo [13] that Chang used to analyze and illustrate these concepts.
We used two operators to characterize competitive equilibria and a Ramsey plan, respectively.
In this lecture, we define a credible public policy or sustainable plan.
Starting from a large enough initial set 𝑍0, we use iterations on Chang’s set-to-set operator
𝐷̃(𝑍) to compute a set of values associated with sustainable plans.
Chang’s operator 𝐷̃(𝑍) is closely connected with the operator 𝐷(𝑍) introduced in lecture
competitive equilibria in the Chang model.
• 𝐷̃(𝑍) incorporates all of the restrictions imposed in constructing the operator 𝐷(𝑍),
but ….
• It adds some additional restrictions
– these additional restrictions incorporate the idea that a plan must be sustainable.
– sustainable means that the government wants to implement it at all times after all
histories.
Let’s start with some standard imports:

In [2]: import numpy as np
        import quantecon as qe
        import polytope
        import matplotlib.pyplot as plt
        %matplotlib inline

`polytope` failed to import `cvxopt.glpk`.
will use `scipy.optimize.linprog`

41.3 The Setting

We begin by reviewing the set up deployed in competitive equilibria in the Chang model.
Chang’s model, adopted from Calvo, is designed to focus on the intertemporal trade-offs be-
tween the welfare benefits of deflation and the welfare costs associated with the high tax col-
lections required to retire money at a rate that delivers deflation.
A benevolent time 0 government can promote utility generating increases in real balances
only by imposing an infinite sequence of sufficiently large distorting tax collections.
To promote the welfare increasing effects of high real balances, the government wants to in-
duce gradual deflation.
We start by reviewing notation.
For a sequence of scalars 𝑧⃗ ≡ {𝑧𝑡}_{𝑡=0}^{∞}, let 𝑧⃗^𝑡 = (𝑧0, … , 𝑧𝑡) and 𝑧⃗𝑡 = (𝑧𝑡, 𝑧𝑡+1, …).

An infinitely lived representative agent and an infinitely lived government exist at dates 𝑡 =
0, 1, ….
The objects in play are
• an initial quantity 𝑀−1 of nominal money holdings
• a sequence of inverse money growth rates ℎ⃗ and an associated sequence of nominal
money holdings 𝑀⃗

• a sequence of values of money 𝑞 ⃗


• a sequence of real money holdings 𝑚⃗
• a sequence of total tax collections 𝑥⃗
• a sequence of per capita rates of consumption 𝑐 ⃗
• a sequence of per capita incomes 𝑦 ⃗
A benevolent government chooses sequences (𝑀⃗ , ℎ,⃗ 𝑥)⃗ subject to a sequence of budget con-
straints and other constraints imposed by competitive equilibrium.
Given tax collection and price of money sequences, a representative household chooses se-
quences (𝑐,⃗ 𝑚)
⃗ of consumption and real balances.
In competitive equilibrium, the price of money sequence 𝑞 ⃗ clears markets, thereby reconciling
decisions of the government and the representative household.

41.3.1 The Household’s Problem

A representative household faces a nonnegative value of money sequence 𝑞 ⃗ and sequences 𝑦,⃗ 𝑥⃗
of income and total tax collections, respectively.
The household chooses nonnegative sequences 𝑐,⃗ 𝑀⃗ of consumption and nominal balances,
respectively, to maximize


∑_{𝑡=0}^{∞} 𝛽^𝑡 [𝑢(𝑐𝑡) + 𝑣(𝑞𝑡𝑀𝑡)]     (1)

subject to

𝑞𝑡 𝑀𝑡 ≤ 𝑦𝑡 + 𝑞𝑡 𝑀𝑡−1 − 𝑐𝑡 − 𝑥𝑡 (2)

and

𝑞𝑡 𝑀𝑡 ≤ 𝑚̄ (3)

Here 𝑞𝑡 is the reciprocal of the price level at 𝑡, also known as the value of money.
Chang [14] assumes that
• 𝑢 ∶ ℝ+ → ℝ is twice continuously differentiable, strictly concave, and strictly increasing;
• 𝑣 ∶ ℝ+ → ℝ is twice continuously differentiable and strictly concave;
• lim𝑐→0 𝑢′ (𝑐) = lim𝑚→0 𝑣′ (𝑚) = +∞;
• there is a finite level 𝑚 = 𝑚𝑓 such that 𝑣′ (𝑚𝑓 ) = 0.
Real balances carried out of a period equal 𝑚𝑡 = 𝑞𝑡 𝑀𝑡 .
Inequality (2) is the household’s time 𝑡 budget constraint.
It tells how real balances 𝑞𝑡 𝑀𝑡 carried out of period 𝑡 depend on income, consumption, taxes,
and real balances 𝑞𝑡 𝑀𝑡−1 carried into the period.
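As a purely numerical illustration of (2), the following minimal sketch evaluates the right side of the budget constraint; all values here are hypothetical and chosen only for illustration.

    # Hypothetical values, for illustration only (not from the calibration below)
    y_t, c_t, x_t = 180.0, 180.0, 3.0   # income, consumption, taxes
    q_t, M_tm1 = 0.01, 1000.0           # value of money, nominal balances carried in

    m_in = q_t * M_tm1                  # real balances carried into period t
    m_out_max = y_t + m_in - c_t - x_t  # right side of (2): upper bound on q_t * M_t
    print(m_out_max)                    # 7.0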
Equation (3) imposes an exogenous upper bound 𝑚̄ on the choice of real balances, where 𝑚̄ ≥
𝑚𝑓 .

41.3.2 Government

The government chooses a sequence of inverse money growth rates with time 𝑡 component ℎ𝑡 ≡ 𝑀𝑡−1 /𝑀𝑡 ∈ Π ≡ [𝜋, 𝜋̄], where 0 < 𝜋 < 1 < 1/𝛽 ≤ 𝜋̄.

The government faces a sequence of budget constraints with time 𝑡 component

−𝑥𝑡 = 𝑞𝑡 (𝑀𝑡 − 𝑀𝑡−1 )

which, by using the definitions of 𝑚𝑡 and ℎ𝑡 , can also be expressed as

−𝑥𝑡 = 𝑚𝑡 (1 − ℎ𝑡 ) (4)

The restrictions 𝑚𝑡 ∈ [0, 𝑚̄] and ℎ𝑡 ∈ Π evidently imply that 𝑥𝑡 ∈ 𝑋 ≡ [(𝜋 − 1)𝑚̄, (𝜋̄ − 1)𝑚̄].
We define the set 𝐸 ≡ [0, 𝑚̄] × Π × 𝑋, so that we require that (𝑚, ℎ, 𝑥) ∈ 𝐸.
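For example, with the parameter values used later in this lecture (𝑚̄ = 30 and Π = [0.9, 2]), a short sketch recovers the bounds of 𝑋:

    # Bounds of X implied by m in [0, mbar] and h in Π; the numbers match
    # the parameterization used later in this lecture
    mbar, π_low, π_high = 30, 0.9, 2.0
    X = ((π_low - 1) * mbar, (π_high - 1) * mbar)
    print(X)  # approximately (-3.0, 30.0): subsidies up to 3, taxes up to 30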
To represent the idea that taxes are distorting, Chang makes the following assumption about
outcomes for per capita output:

𝑦𝑡 = 𝑓(𝑥𝑡 ) (5)

where 𝑓 ∶ ℝ → ℝ satisfies 𝑓(𝑥) > 0, is twice continuously differentiable, 𝑓 ″ (𝑥) < 0, and
𝑓(𝑥) = 𝑓(−𝑥) for all 𝑥 ∈ ℝ, so that subsidies and taxes are equally distorting.
The purpose is not to model the causes of tax distortions in any detail but simply to summa-
rize the outcome of those distortions via the function 𝑓(𝑥).
A key part of the specification is that tax distortions are increasing in the absolute value of
tax revenues.
The government chooses a competitive equilibrium that maximizes (1).

41.3.3 Within-period Timing Protocol

For the results in this lecture, the timing of actions within a period is important because of
the incentives that it activates.
Chang assumed the following within-period timing of decisions:
• first, the government chooses ℎ𝑡 and 𝑥𝑡 ;
• then, given 𝑞⃗ and its expectations about future values of 𝑥 and 𝑦, the household chooses 𝑀𝑡 and therefore 𝑚𝑡 because 𝑚𝑡 = 𝑞𝑡 𝑀𝑡 ;
• then output 𝑦𝑡 = 𝑓(𝑥𝑡 ) is realized;
• finally 𝑐𝑡 = 𝑦𝑡 .
This within-period timing confronts the government with choices framed by how the private
sector wants to respond when the government takes time 𝑡 actions that differ from what the
private sector had expected.
This timing will shape the incentives confronting the government at each history that are to
be incorporated in the construction of the 𝐷̃ operator below.

41.3.4 Household’s Problem

Given 𝑀−1 and {𝑞𝑡 }∞ 𝑡=0 , the household's problem is

ℒ = max𝑐⃗, 𝑀⃗ min𝜆⃗, 𝜇⃗ ∑∞ 𝑡=0 𝛽 𝑡 {𝑢(𝑐𝑡 ) + 𝑣(𝑀𝑡 𝑞𝑡 ) + 𝜆𝑡 [𝑦𝑡 − 𝑐𝑡 − 𝑥𝑡 + 𝑞𝑡 𝑀𝑡−1 − 𝑞𝑡 𝑀𝑡 ] + 𝜇𝑡 [𝑚̄ − 𝑞𝑡 𝑀𝑡 ]}

First-order conditions with respect to 𝑐𝑡 and 𝑀𝑡 , respectively, are

𝑢′ (𝑐𝑡 ) = 𝜆𝑡
𝑞𝑡 [𝑢′ (𝑐𝑡 ) − 𝑣′ (𝑀𝑡 𝑞𝑡 )] ≤ 𝛽𝑢′ (𝑐𝑡+1 )𝑞𝑡+1 , = if 𝑀𝑡 𝑞𝑡 < 𝑚̄

Using ℎ𝑡 = 𝑀𝑡−1 /𝑀𝑡 and 𝑞𝑡 = 𝑚𝑡 /𝑀𝑡 in these first-order conditions and rearranging implies

𝑚𝑡 [𝑢′ (𝑐𝑡 ) − 𝑣′ (𝑚𝑡 )] ≤ 𝛽𝑢′ (𝑓(𝑥𝑡+1 ))𝑚𝑡+1 ℎ𝑡+1 , = if 𝑚𝑡 < 𝑚̄    (6)

Define the following key variable

𝜃𝑡+1 ≡ 𝑢′ (𝑓(𝑥𝑡+1 ))𝑚𝑡+1 ℎ𝑡+1 (7)

This is real money balances at time 𝑡 + 1 measured in units of marginal utility, which Chang
refers to as ‘the marginal utility of real balances’.
From the standpoint of the household at time 𝑡, equation (7) shows that 𝜃𝑡+1 intermediates the influences of (𝑥⃗𝑡+1 , 𝑚⃗ 𝑡+1 ) on the household's choice of real balances 𝑚𝑡 .
By "intermediates" we mean that the future paths (𝑥⃗𝑡+1 , 𝑚⃗ 𝑡+1 ) influence 𝑚𝑡 entirely through their effects on the scalar 𝜃𝑡+1 .
The observation that the one dimensional promised marginal utility of real balances 𝜃𝑡+1
functions in this way is an important step in constructing a class of competitive equilibria
that have a recursive representation.
A closely related observation pervaded the analysis of Stackelberg plans in dynamic Stackelberg problems and the Calvo model.
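To make (6) and (7) concrete, here is a small sketch that evaluates 𝜃𝑡+1 and both sides of the Euler condition at a hypothetical candidate point, borrowing the functional forms adopted later in this lecture; the (𝑚, ℎ) values are purely illustrative.

    mbar, β = 30, 0.8
    uc_p = lambda c: 1 / c              # u'(c) for u(c) = log(c)
    v_p = lambda m: 0.5 / 500 * (mbar * m - 0.5 * m**2)**(-0.5) * (mbar - m)
    f = lambda x: 180 - (0.4 * x)**2

    m_t, h_t = 10.0, 1.0                # hypothetical time-t outcome
    m_tp1, h_tp1 = 10.0, 1.0            # hypothetical time-(t+1) outcome
    x_t, x_tp1 = m_t * (h_t - 1), m_tp1 * (h_tp1 - 1)

    θ_tp1 = uc_p(f(x_tp1)) * m_tp1 * h_tp1   # equation (7)
    lhs = m_t * (uc_p(f(x_t)) - v_p(m_t))    # left side of (6)
    print(lhs, β * θ_tp1)               # (6) requires lhs <= β * θ_{t+1},
                                        # with equality when m_t < mbar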

41.3.5 Competitive Equilibrium

Definition:
• A government policy is a pair of sequences (ℎ⃗, 𝑥⃗) where ℎ𝑡 ∈ Π ∀𝑡 ≥ 0.
• A price system is a non-negative value of money sequence 𝑞⃗.
• An allocation is a triple of non-negative sequences (𝑐⃗, 𝑚⃗, 𝑦⃗).

It is required that time 𝑡 components (𝑚𝑡 , 𝑥𝑡 , ℎ𝑡 ) ∈ 𝐸.
Definition:
Given 𝑀−1 , a government policy (ℎ⃗, 𝑥⃗), price system 𝑞⃗, and allocation (𝑐⃗, 𝑚⃗, 𝑦⃗) are said to be a competitive equilibrium if
• 𝑚𝑡 = 𝑞𝑡 𝑀𝑡 and 𝑦𝑡 = 𝑓(𝑥𝑡 ).

• The government budget constraint is satisfied.
• Given 𝑞⃗, 𝑥⃗, 𝑦⃗, (𝑐⃗, 𝑚⃗) solves the household's problem.

41.3.6 A Credible Government Policy

Chang works with


A credible government policy with a recursive representation
• Here there is no time 0 Ramsey planner.
• Instead there is a sequence of governments, one for each 𝑡, that choose time 𝑡 government actions after forecasting what future governments will do.

• Let 𝑤 = ∑∞ 𝑡=0 𝛽 𝑡 [𝑢(𝑐𝑡 ) + 𝑣(𝑞𝑡 𝑀𝑡 )] be a value associated with a particular competitive equilibrium.
• A recursive representation of a credible government policy is a pair of initial conditions
(𝑤0 , 𝜃0 ) and a five-tuple of functions

ℎ(𝑤𝑡 , 𝜃𝑡 ), 𝑚(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 ), 𝑥(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 ), 𝜒(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 ), Ψ(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )


mapping 𝑤𝑡 , 𝜃𝑡 and in some cases ℎ𝑡 into ℎ̂ 𝑡 , 𝑚𝑡 , 𝑥𝑡 , 𝑤𝑡+1 , and 𝜃𝑡+1 , respectively.
• Starting from an initial condition (𝑤0 , 𝜃0 ), a credible government policy can be constructed by iterating on these functions in the following order that respects the within-period timing:
ℎ̂ 𝑡 = ℎ(𝑤𝑡 , 𝜃𝑡 )
𝑚𝑡 = 𝑚(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )
𝑥𝑡 = 𝑥(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 ) (8)
𝑤𝑡+1 = 𝜒(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )
𝜃𝑡+1 = Ψ(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )

• Here it is to be understood that ℎ̂ 𝑡 is the action that the government policy instructs the government to take, while ℎ𝑡 , possibly not equal to ℎ̂ 𝑡 , is some other action that the government is free to take at time 𝑡.
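To fix ideas, here is a minimal sketch of iterating on these five functions to generate an outcome path; the callables h_fn, m_fn, x_fn, χ_fn, Ψ_fn are hypothetical stand-ins for the functions ℎ, 𝑚, 𝑥, 𝜒, Ψ above (the algorithm below computes sets of values rather than such closed-form policy functions).

    def iterate_plan(w0, θ0, h_fn, m_fn, x_fn, χ_fn, Ψ_fn, T=30):
        # Generate a T-period outcome path by iterating on (8)
        w, θ = w0, θ0
        path = []
        for t in range(T):
            h_hat = h_fn(w, θ)                     # prescribed government action
            h = h_hat                              # on the equilibrium path, h_t = ĥ_t
            m, x = m_fn(h, w, θ), x_fn(h, w, θ)
            path.append((h, m, x, w, θ))
            w, θ = χ_fn(h, w, θ), Ψ_fn(h, w, θ)    # continuation pair (w_{t+1}, θ_{t+1})
        return path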
The plan is credible if it is in the time 𝑡 government’s interest to execute it.
Credibility requires that the plan be such that for all possible choices of ℎ𝑡 that are consistent
with competitive equilibria,

𝑢(𝑓(𝑥(ℎ̂ 𝑡 , 𝑤𝑡 , 𝜃𝑡 ))) + 𝑣(𝑚(ℎ̂ 𝑡 , 𝑤𝑡 , 𝜃𝑡 )) + 𝛽𝜒(ℎ̂ 𝑡 , 𝑤𝑡 , 𝜃𝑡 )


≥ 𝑢(𝑓(𝑥(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 ))) + 𝑣(𝑚(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )) + 𝛽𝜒(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )

so that at each instance and circumstance of choice, a government attains a weakly higher lifetime utility with continuation value 𝑤𝑡+1 = 𝜒(ℎ̂ 𝑡 , 𝑤𝑡 , 𝜃𝑡 ) by adhering to the plan and confirming the associated time 𝑡 action ℎ̂ 𝑡 that the public had expected earlier.
Please note the subtle change in arguments of the functions used to represent a competitive
equilibrium and a Ramsey plan, on the one hand, and a credible government plan, on the
other hand.
The extra arguments appearing in the functions used to represent a credible plan come from
allowing the government to contemplate disappointing the private sector’s expectation about
its time 𝑡 choice ℎ̂ 𝑡 .

A credible plan induces the government to confirm the private sector’s expectation.
The recursive representation of the plan uses the evolution of continuation values to deter the
government from wanting to disappoint the private sector’s expectations.
Technically, a Ramsey plan and a credible plan both incorporate history dependence.
For a Ramsey plan, this is encoded in the dynamics of the state variable 𝜃𝑡 , a promised
marginal utility that the Ramsey plan delivers to the private sector.
For a credible government plan, the two-dimensional state vector (𝑤𝑡 , 𝜃𝑡 ) encodes history dependence.

41.3.7 Sustainable Plans

A government strategy 𝜎 and an allocation rule 𝛼 are said to constitute a sustainable plan (SP) if:

1. 𝜎 is admissible.
2. Given 𝜎, 𝛼 is competitive.
3. After any history ℎ⃗ 𝑡−1 , the continuation of 𝜎 is optimal for the government; i.e., the sequence ℎ⃗ 𝑡 induced by 𝜎 after ℎ⃗ 𝑡−1 maximizes over 𝐶𝐸𝜋 given 𝛼.

Given any history ℎ⃗ 𝑡−1 , the continuation of a sustainable plan is a sustainable plan.
Let Θ = {(𝑚⃗, 𝑥⃗, ℎ⃗) ∈ 𝐶𝐸 ∶ there is an SP whose outcome is (𝑚⃗, 𝑥⃗, ℎ⃗)}.

Sustainable outcomes are elements of Θ.


Now consider the space

𝑆 = {(𝑤, 𝜃) ∶ there is a sustainable outcome (𝑚⃗, 𝑥⃗, ℎ⃗) ∈ Θ with value
𝑤 = ∑∞ 𝑡=0 𝛽 𝑡 [𝑢(𝑓(𝑥𝑡 )) + 𝑣(𝑚𝑡 )] and such that 𝑢′ (𝑓(𝑥0 ))(𝑚0 + 𝑥0 ) = 𝜃}

The space 𝑆 is a compact subset of 𝑊 × Ω, where 𝑊 = [𝑤, 𝑤̄] is the space of values associated with sustainable plans. Here 𝑤 and 𝑤̄ are finite bounds on the set of values.
Because there is at least one sustainable plan, 𝑆 is nonempty.
Now recall the within-period timing protocol, which we can depict (ℎ, 𝑥) → 𝑚 = 𝑞𝑀 → 𝑦 = 𝑐.
With this timing protocol in mind, the time 0 component of an SP has the following components:

1. A period 0 action ℎ̂ ∈ Π that the public expects the government to take, together with subsequent within-period consequences 𝑚(ℎ̂), 𝑥(ℎ̂) when the government acts as expected.
2. For any first-period action ℎ ≠ ℎ̂ with ℎ ∈ 𝐶𝐸𝜋0 , a pair of within-period consequences 𝑚(ℎ), 𝑥(ℎ) when the government does not act as the public had expected.

3. For every ℎ ∈ Π, a pair (𝑤′ (ℎ), 𝜃′ (ℎ)) ∈ 𝑆 to carry into next period.

These components must be such that it is optimal for the government to choose ℎ̂ as expected; and for every possible ℎ ∈ Π, the government budget constraint and the household's Euler equation must hold with continuation 𝜃 being 𝜃′ (ℎ).
Given the timing protocol within the model, the representative household’s response to a
government deviation to ℎ ≠ ℎ̂ from a prescribed ℎ̂ consists of a first-period action 𝑚(ℎ)
and associated subsequent actions, together with future equilibrium prices, captured by
(𝑤′ (ℎ), 𝜃′ (ℎ)).
At this point, Chang introduces an idea in the spirit of Abreu, Pearce, and Stacchetti [2].
Let 𝑍 be a nonempty subset of 𝑊 × Ω.
Think of using pairs (𝑤′ , 𝜃′ ) drawn from 𝑍 as candidate (continuation value, promised marginal utility) pairs.
Define the following operator:

𝐷̃(𝑍) = {(𝑤, 𝜃) ∶ there is ℎ̂ ∈ 𝐶𝐸𝜋0 and for each ℎ ∈ 𝐶𝐸𝜋0
a four-tuple (𝑚(ℎ), 𝑥(ℎ), 𝑤′ (ℎ), 𝜃′ (ℎ)) ∈ [0, 𝑚̄] × 𝑋 × 𝑍    (9)

such that

𝑤 = 𝑢(𝑓(𝑥(ℎ̂))) + 𝑣(𝑚(ℎ̂)) + 𝛽𝑤′ (ℎ̂)    (10)

𝜃 = 𝑢′ (𝑓(𝑥(ℎ̂)))(𝑚(ℎ̂) + 𝑥(ℎ̂))    (11)

and for all ℎ ∈ 𝐶𝐸𝜋0

𝑤 ≥ 𝑢(𝑓(𝑥(ℎ))) + 𝑣(𝑚(ℎ)) + 𝛽𝑤′ (ℎ)    (12)

𝑥(ℎ) = 𝑚(ℎ)(ℎ − 1)    (13)

and

𝑚(ℎ)(𝑢′ (𝑓(𝑥(ℎ))) − 𝑣′ (𝑚(ℎ))) ≤ 𝛽𝜃′ (ℎ), with equality if 𝑚(ℎ) < 𝑚̄}    (14)

This operator adds the key incentive constraint to the conditions that had defined the earlier
𝐷(𝑍) operator defined in competitive equilibria in the Chang model.
Condition (12) requires that the plan deter the government from wanting to take one-shot
deviations when candidate continuation values are drawn from 𝑍.
Proposition:

1. If 𝑍 ⊂ 𝐷̃(𝑍), then 𝐷̃(𝑍) ⊂ 𝑆 ('self-generation').
2. 𝑆 = 𝐷̃(𝑆) ('factorization').

Proposition:

1. Monotonicity of 𝐷̃: 𝑍 ⊂ 𝑍′ implies 𝐷̃(𝑍) ⊂ 𝐷̃(𝑍′ ).
2. 𝑍 compact implies that 𝐷̃(𝑍) is compact.

Chang establishes that 𝑆 is compact and that therefore there exists a highest value SP and a
lowest value SP.
Further, the preceding structure allows Chang to compute 𝑆 by iterating to convergence on 𝐷̃
provided that one begins with a sufficiently large initial set 𝑍0 .
This structure delivers the following recursive representation of a sustainable outcome:

1. choose an initial (𝑤0 , 𝜃0 ) ∈ 𝑆;

2. generate a sustainable outcome recursively by iterating on (8), which we repeat here for
convenience:

ℎ̂ 𝑡 = ℎ(𝑤𝑡 , 𝜃𝑡 )
𝑚𝑡 = 𝑚(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )
𝑥𝑡 = 𝑥(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )
𝑤𝑡+1 = 𝜒(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )
𝜃𝑡+1 = Ψ(ℎ𝑡 , 𝑤𝑡 , 𝜃𝑡 )

41.4 Calculating the Set of Sustainable Promise-Value Pairs

Above we defined the 𝐷̃(𝑍) operator as (9).
Chang (1998) provides a method for dealing with the final three constraints.
These incentive constraints ensure that the government wants to choose ℎ̂ as the private sec-
tor had expected it to.
Chang’s simplification starts from the idea that, when considering whether or not to confirm
the private sector’s expectation, the government only needs to consider the payoff of the best
possible deviation.
Equally, to provide incentives to the government, we only need to consider the harshest possi-
ble punishment.
Let ℎ denote some possible deviation. Chang defines:

𝑃 (ℎ; 𝑍) = min 𝑢(𝑓(𝑥)) + 𝑣(𝑚) + 𝛽𝑤′

where the minimization is subject to

𝑥 = 𝑚(ℎ − 1)

𝑚(𝑢′ (𝑓(𝑥)) − 𝑣′ (𝑚)) ≤ 𝛽𝜃′ (with equality if 𝑚 < 𝑚̄)

(𝑚, 𝑥, 𝑤′ , 𝜃′ ) ∈ [0, 𝑚̄] × 𝑋 × 𝑍

For a given deviation ℎ, this problem finds the worst possible sustainable value.
We then define:

𝐵𝑅(𝑍) = max 𝑃 (ℎ; 𝑍) subject to ℎ ∈ 𝐶𝐸𝜋0

𝐵𝑅(𝑍) is the value of the government’s most tempting deviation.


With this in hand, we can define a new operator 𝐸(𝑍) that is equivalent to the 𝐷̃(𝑍) operator but simpler to implement:

𝐸(𝑍) = {(𝑤, 𝜃) ∶ ∃ℎ ∈ 𝐶𝐸𝜋0 and (𝑚(ℎ), 𝑥(ℎ), 𝑤′ (ℎ), 𝜃′ (ℎ)) ∈ [0, 𝑚̄] × 𝑋 × 𝑍

such that

𝑤 = 𝑢(𝑓(𝑥(ℎ))) + 𝑣(𝑚(ℎ)) + 𝛽𝑤′ (ℎ)

𝜃 = 𝑢′ (𝑓(𝑥(ℎ)))(𝑚(ℎ) + 𝑥(ℎ))

𝑥(ℎ) = 𝑚(ℎ)(ℎ − 1)

𝑚(ℎ)(𝑢′ (𝑓(𝑥(ℎ))) − 𝑣′ (𝑚(ℎ))) ≤ 𝛽𝜃′ (ℎ) (with equality if 𝑚(ℎ) < 𝑚̄)

and

𝑤 ≥ 𝐵𝑅(𝑍)}

Aside from the final incentive constraint, this is the same as the operator in competitive equi-
libria in the Chang model.
Consequently, to implement this operator we just need to add one step to our outer hyperplane approximation algorithm:

1. Initialize subgradients, 𝐻, and hyperplane levels, 𝐶0 .

2. Given a set of subgradients, 𝐻, and hyperplane levels, 𝐶𝑡 , calculate 𝐵𝑅(𝑆𝑡 ).

3. Given 𝐻, 𝐶𝑡 , and 𝐵𝑅(𝑆𝑡 ), for each subgradient ℎ𝑖 ∈ 𝐻:

• Solve a linear program (described below) for each action in the action space.
• Find the maximum and update the corresponding hyperplane level, 𝐶𝑖,𝑡+1 .

4. If |𝐶𝑡+1 − 𝐶𝑡 | > 𝜖, return to 2.



Step 1 simply creates a large initial set 𝑆0 .


Given some set 𝑆𝑡 , Step 2 then constructs the value 𝐵𝑅(𝑆𝑡 ).
To do this, we solve the following problem for each point in the action space (𝑚𝑗 , ℎ𝑗 ):

min[𝑤′ ,𝜃′ ] 𝑢(𝑓(𝑥𝑗 )) + 𝑣(𝑚𝑗 ) + 𝛽𝑤′

subject to

𝐻 ⋅ (𝑤′ , 𝜃′ ) ≤ 𝐶𝑡

𝑥𝑗 = 𝑚𝑗 (ℎ𝑗 − 1)

𝑚𝑗 (𝑢′ (𝑓(𝑥𝑗 )) − 𝑣′ (𝑚𝑗 )) ≤ 𝛽𝜃′ (= if 𝑚𝑗 < 𝑚̄)

This gives us a matrix of possible values, corresponding to each point in the action space.
To find 𝐵𝑅(𝑍), we minimize over the 𝑚 dimension and maximize over the ℎ dimension.
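This reduction is exactly what the solve_worst_spe method below implements with np.nanmin and np.nanmax. A toy example with hypothetical values, where rows index 𝑚, columns index ℎ, and NaN marks an infeasible action pair:

    import numpy as np

    p_vec = np.array([[1.0, 2.0, np.nan],
                      [0.5, 3.0, 4.0]])         # hypothetical values, shape (n_m, n_h)
    br_z = np.nanmax(np.nanmin(p_vec, axis=0))  # min over m, then max over h
    print(br_z)                                 # 4.0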
Step 3 then constructs the set 𝑆𝑡+1 = 𝐸(𝑆𝑡 ). The linear program in Step 3 is designed to
construct a set 𝑆𝑡+1 that is as large as possible while satisfying the constraints of the 𝐸(𝑆)
operator.
To do this, for each subgradient ℎ𝑖 , and for each point in the action space (𝑚𝑗 , ℎ𝑗 ), we solve
the following problem:

max[𝑤′ ,𝜃′ ] ℎ𝑖 ⋅ (𝑤, 𝜃)

subject to

𝐻 ⋅ (𝑤′ , 𝜃′ ) ≤ 𝐶𝑡

𝑤 = 𝑢(𝑓(𝑥𝑗 )) + 𝑣(𝑚𝑗 ) + 𝛽𝑤′

𝜃 = 𝑢′ (𝑓(𝑥𝑗 ))(𝑚𝑗 + 𝑥𝑗 )

𝑥𝑗 = 𝑚𝑗 (ℎ𝑗 − 1)

𝑚𝑗 (𝑢′ (𝑓(𝑥𝑗 )) − 𝑣′ (𝑚𝑗 )) ≤ 𝛽𝜃′ (= if 𝑚𝑗 < 𝑚̄)

𝑤 ≥ 𝐵𝑅(𝑍)

This problem maximizes the hyperplane level for a given set of actions.

The second part of Step 3 then finds the maximum possible hyperplane level across the action
space.
The algorithm constructs a sequence of progressively smaller sets 𝑆𝑡+1 ⊂ 𝑆𝑡 ⊂ 𝑆𝑡−1 ⋯ ⊂ 𝑆0 .
Step 4 ends the algorithm when the difference between these sets is small enough.
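Stripped of bookkeeping, the loop implemented in the solve_sustainable method of the class below looks roughly like this sketch; the attribute and method names (solve_worst_spe, solve_subgradient, c0_c, c1_c, c0_s, c1_s) are the ones that class uses.

    import numpy as np

    def solve_outer(model, tol=1e-5, max_iter=250):
        # Iterate hyperplane-level updates until the largest change is small
        diff, iters = tol + 1, 0
        while diff > tol and iters < max_iter:
            iters += 1
            model.solve_worst_spe()       # Step 2: compute BR(S_t)
            model.solve_subgradient()     # Step 3: update each hyperplane level
            diff = max(np.maximum(abs(model.c0_c - model.c1_c),
                                  abs(model.c0_s - model.c1_s)))
            model.c0_c, model.c0_s = np.copy(model.c1_c), np.copy(model.c1_s)
        return iters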
We have created a Python class that solves the model assuming the following functional
forms:

𝑢(𝑐) = log(𝑐)

𝑣(𝑚) = (1/500)(𝑚̄𝑚 − 0.5𝑚²)^0.5

𝑓(𝑥) = 180 − (0.4𝑥)²

The remaining parameters {𝛽, 𝑚̄, ℎ, ℎ̄} are then variables to be specified for an instance of the Chang class.
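As a quick sanity check on these forms, note that 𝑣′ (𝑚) is proportional to (𝑚̄ − 𝑚), so the satiation level 𝑚𝑓 at which 𝑣′ (𝑚𝑓 ) = 0 coincides with 𝑚̄ here, consistent with the earlier requirement 𝑚̄ ≥ 𝑚𝑓:

    mbar = 30
    v_p = lambda m: 0.5 / 500 * (mbar * m - 0.5 * m**2)**(-0.5) * (mbar - m)
    print(v_p(29.99))          # approximately 0: v' vanishes as m approaches mbar
    print(180 - (0.4 * 0)**2)  # f(0) = 180: output when tax collections are zero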
Below we use the class to solve the model and plot the resulting equilibrium set, once with
𝛽 = 0.3 and once with 𝛽 = 0.8. We also plot the (larger) competitive equilibrium sets, which
we described in competitive equilibria in the Chang model.
(We have set the number of subgradients to 10 in order to speed up the code for now. We can increase accuracy by increasing the number of subgradients.)
The following code computes sustainable plans:

In [3]: """
Provides a class called ChangModel to solve different
parameterizations of the Chang (1998) model.
"""

import numpy as np
import quantecon as qe
import time

from scipy.spatial import ConvexHull


from scipy.optimize import linprog, minimize, minimize_scalar
from scipy.interpolate import UnivariateSpline
import numpy.polynomial.chebyshev as cheb

class ChangModel:
"""
Class to solve for the competitive and sustainable sets in the
Chang (1998) model, for different parameterizations.
"""

def __init__(self, β, mbar, h_min, h_max, n_h, n_m, N_g):


# Record parameters
self.β, self.mbar, self.h_min, self.h_max = β, mbar, h_min, h_max
self.n_h, self.n_m, self.N_g = n_h, n_m, N_g

# Create other parameters


self.m_min = 1e-9
self.m_max = self.mbar
self.N_a = self.n_h*self.n_m

# Utility and production functions


uc = lambda c: np.log(c)
uc_p = lambda c: 1/c
v = lambda m: 1/500 * (mbar * m - 0.5 * m**2)**0.5
v_p = lambda m: 0.5/500 * (mbar * m - 0.5 * m**2)**(-0.5) * (mbar - m)
u = lambda h, m: uc(f(h, m)) + v(m)

def f(h, m):


x = m * (h - 1)
f = 180 - (0.4 * x)**2
return f

def θ(h, m):


x = m * (h - 1)
θ = uc_p(f(h, m)) * (m + x)
return θ

# Create set of possible action combinations, A


A1 = np.linspace(h_min, h_max, n_h).reshape(n_h, 1)
A2 = np.linspace(self.m_min, self.m_max, n_m).reshape(n_m, 1)
self.A = np.concatenate((np.kron(np.ones((n_m, 1)), A1),
np.kron(A2, np.ones((n_h, 1)))), axis=1)

# Pre-compute utility and output vectors


self.euler_vec = -np.multiply(self.A[:, 1], \
uc_p(f(self.A[:, 0], self.A[:, 1])) - v_p(self.A[:, 1]))
self.u_vec = u(self.A[:, 0], self.A[:, 1])
self.Θ_vec = θ(self.A[:, 0], self.A[:, 1])
self.f_vec = f(self.A[:, 0], self.A[:, 1])
self.bell_vec = np.multiply(uc_p(f(self.A[:, 0],
self.A[:, 1])),
np.multiply(self.A[:, 1],
(self.A[:, 0] - 1))) \
+ np.multiply(self.A[:, 1],
v_p(self.A[:, 1]))

# Find extrema of (w, θ) space for initial guess of equilibrium sets


p_vec = np.zeros(self.N_a)
w_vec = np.zeros(self.N_a)
for i in range(self.N_a):
p_vec[i] = self.Θ_vec[i]
w_vec[i] = self.u_vec[i]/(1 - β)

w_space = np.array([min(w_vec[~np.isinf(w_vec)]),
max(w_vec[~np.isinf(w_vec)])])
p_space = np.array([0, max(p_vec[~np.isinf(w_vec)])])
self.p_space = p_space

# Set up hyperplane levels and gradients for iterations


def SG_H_V(N, w_space, p_space):
"""
This function initializes the subgradients, hyperplane levels,
and extreme points of the value set by choosing an appropriate
origin and radius. It is based on a similar function in
QuantEcon's Games.jl
"""

# First, create a unit circle. Want points placed on [0, 2π]


inc = 2 * np.pi / N
degrees = np.arange(0, 2 * np.pi, inc)

# Points on circle
H = np.zeros((N, 2))
for i in range(N):
x = degrees[i]
H[i, 0] = np.cos(x)
H[i, 1] = np.sin(x)

# Then calculate origin and radius


o = np.array([np.mean(w_space), np.mean(p_space)])
r1 = max((max(w_space) - o[0])**2, (o[0] - min(w_space))**2)
r2 = max((max(p_space) - o[1])**2, (o[1] - min(p_space))**2)
r = np.sqrt(r1 + r2)

# Now calculate vertices


Z = np.zeros((2, N))
for i in range(N):
Z[0, i] = o[0] + r*H.T[0, i]
Z[1, i] = o[1] + r*H.T[1, i]

# Corresponding hyperplane levels


C = np.zeros(N)
for i in range(N):
C[i] = np.dot(Z[:, i], H[i, :])

return C, H, Z

C, self.H, Z = SG_H_V(N_g, w_space, p_space)


C = C.reshape(N_g, 1)
self.c0_c, self.c0_s, self.c1_c, self.c1_s = np.copy(C), np.copy(C), \
    np.copy(C), np.copy(C)
self.z0_s, self.z0_c, self.z1_s, self.z1_c = np.copy(Z), np.copy(Z), \
    np.copy(Z), np.copy(Z)

self.w_bnds_s, self.w_bnds_c = (w_space[0], w_space[1]), \


(w_space[0], w_space[1])
self.p_bnds_s, self.p_bnds_c = (p_space[0], p_space[1]), \
(p_space[0], p_space[1])

# Create dictionaries to save equilibrium set for each iteration


self.c_dic_s, self.c_dic_c = {}, {}
self.c_dic_s[0], self.c_dic_c[0] = self.c0_s, self.c0_c

def solve_worst_spe(self):
"""
Method to solve for BR(Z). See p.449 of Chang (1998)
"""

p_vec = np.full(self.N_a, np.nan)


c = [1, 0]

# Pre-compute constraints
aineq_mbar = np.vstack((self.H, np.array([0, -self.β])))
bineq_mbar = np.vstack((self.c0_s, 0))

aineq = self.H
bineq = self.c0_s
aeq = [[0, -self.β]]

for j in range(self.N_a):
# Only try if consumption is possible
if self.f_vec[j] > 0:
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_mbar[-1] = self.euler_vec[j]
res = linprog(c, A_ub=aineq_mbar, b_ub=bineq_mbar,
bounds=(self.w_bnds_s, self.p_bnds_s))
else:
beq = self.euler_vec[j]
res = linprog(c, A_ub=aineq, b_ub=bineq, A_eq=aeq, b_eq=beq,
              bounds=(self.w_bnds_s, self.p_bnds_s))
if res.status == 0:
p_vec[j] = self.u_vec[j] + self.β * res.x[0]

# Max over h and min over other variables (see Chang (1998) p.449)
self.br_z = np.nanmax(np.nanmin(p_vec.reshape(self.n_m, self.n_h), 0))

def solve_subgradient(self):
"""
Method to solve for E(Z). See p.449 of Chang (1998)
"""

# Pre-compute constraints
aineq_C_mbar = np.vstack((self.H, np.array([0, -self.β])))
bineq_C_mbar = np.vstack((self.c0_c, 0))

aineq_C = self.H
bineq_C = self.c0_c
aeq_C = [[0, -self.β]]

aineq_S_mbar = np.vstack((np.vstack((self.H, np.array([0, -self.β]))),
                          np.array([-self.β, 0])))
bineq_S_mbar = np.vstack((self.c0_s, np.zeros((2, 1))))

aineq_S = np.vstack((self.H, np.array([-self.β, 0])))


bineq_S = np.vstack((self.c0_s, 0))
aeq_S = [[0, -self.β]]

# Update maximal hyperplane level


for i in range(self.N_g):
c_a1a2_c, t_a1a2_c = np.full(self.N_a, -np.inf), \
np.zeros((self.N_a, 2))
c_a1a2_s, t_a1a2_s = np.full(self.N_a, -np.inf), \
np.zeros((self.N_a, 2))

c = [-self.H[i, 0], -self.H[i, 1]]

for j in range(self.N_a):
# Only try if consumption is possible
if self.f_vec[j] > 0:

# COMPETITIVE EQUILIBRIA
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_C_mbar[-1] = self.euler_vec[j]
res = linprog(c, A_ub=aineq_C_mbar, b_ub=bineq_C_mbar,
bounds=(self.w_bnds_c, self.p_bnds_c))
# If m < mbar, use equality constraint
else:
beq_C = self.euler_vec[j]
res = linprog(c, A_ub=aineq_C, b_ub=bineq_C, A_eq=aeq_C,
              b_eq=beq_C, bounds=(self.w_bnds_c, self.p_bnds_c))
if res.status == 0:
c_a1a2_c[j] = self.H[i, 0] * (self.u_vec[j] \
+ self.β * res.x[0]) + self.H[i, 1] * self.Θ_vec[j]
t_a1a2_c[j] = res.x

# SUSTAINABLE EQUILIBRIA
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_S_mbar[-2] = self.euler_vec[j]
bineq_S_mbar[-1] = self.u_vec[j] - self.br_z
res = linprog(c, A_ub=aineq_S_mbar, b_ub=bineq_S_mbar,
bounds=(self.w_bnds_s, self.p_bnds_s))
# If m < mbar, use equality constraint
else:
bineq_S[-1] = self.u_vec[j] - self.br_z
beq_S = self.euler_vec[j]
res = linprog(c, A_ub=aineq_S, b_ub=bineq_S, A_eq=aeq_S,
              b_eq=beq_S, bounds=(self.w_bnds_s, self.p_bnds_s))
if res.status == 0:
c_a1a2_s[j] = self.H[i, 0] * (self.u_vec[j] \
+ self.β*res.x[0]) + self.H[i, 1] * self.Θ_vec[j]
t_a1a2_s[j] = res.x

idx_c = np.where(c_a1a2_c == max(c_a1a2_c))[0][0]


self.z1_c[:, i] = np.array([self.u_vec[idx_c]
+ self.β * t_a1a2_c[idx_c, 0],
self.Θ_vec[idx_c]])

idx_s = np.where(c_a1a2_s == max(c_a1a2_s))[0][0]


self.z1_s[:, i] = np.array([self.u_vec[idx_s]
+ self.β * t_a1a2_s[idx_s, 0],
self.Θ_vec[idx_s]])

for i in range(self.N_g):
self.c1_c[i] = np.dot(self.z1_c[:, i], self.H[i, :])

self.c1_s[i] = np.dot(self.z1_s[:, i], self.H[i, :])

def solve_sustainable(self, tol=1e-5, max_iter=250):


"""
Method to solve for the competitive and sustainable equilibrium sets.
"""

t = time.time()
diff = tol + 1
iters = 0

print('### --------------- ###')


print('Solving Chang Model Using Outer Hyperplane Approximation')
print('### --------------- ### \n')

print('Maximum difference when updating hyperplane levels:')

while diff > tol and iters < max_iter:


iters = iters + 1
self.solve_worst_spe()
self.solve_subgradient()
diff = max(np.maximum(abs(self.c0_c - self.c1_c),
abs(self.c0_s - self.c1_s)))
print(diff)

# Update hyperplane levels


self.c0_c, self.c0_s = np.copy(self.c1_c), np.copy(self.c1_s)

# Update bounds for w and θ


wmin_c, wmax_c = np.min(self.z1_c, axis=1)[0], \
np.max(self.z1_c, axis=1)[0]
pmin_c, pmax_c = np.min(self.z1_c, axis=1)[1], \
np.max(self.z1_c, axis=1)[1]

wmin_s, wmax_s = np.min(self.z1_s, axis=1)[0], \


np.max(self.z1_s, axis=1)[0]
pmin_S, pmax_S = np.min(self.z1_s, axis=1)[1], \
np.max(self.z1_s, axis=1)[1]

self.w_bnds_s, self.w_bnds_c = (wmin_s, wmax_s), (wmin_c, wmax_c)


self.p_bnds_s, self.p_bnds_c = (pmin_S, pmax_S), (pmin_c, pmax_c)

# Save iteration
self.c_dic_c[iters], self.c_dic_s[iters] = np.copy(self.c1_c), \
np.copy(self.c1_s)
self.iters = iters

elapsed = time.time() - t
print('Convergence achieved after {} iterations and {} \
seconds'.format(iters, round(elapsed, 2)))

def solve_bellman(self, θ_min, θ_max, order, disp=False, tol=1e-7,
                  maxiters=100):
"""
Continuous Method to solve the Bellman equation in section 25.3
"""
mbar = self.mbar

# Utility and production functions


uc = lambda c: np.log(c)
uc_p = lambda c: 1 / c
v = lambda m: 1 / 500 * (mbar * m - 0.5 * m**2)**0.5
v_p = lambda m: 0.5/500 * (mbar*m - 0.5 * m**2)**(-0.5) * (mbar - m)
u = lambda h, m: uc(f(h, m)) + v(m)

def f(h, m):


x = m * (h - 1)
f = 180 - (0.4 * x)**2
return f

def θ(h, m):


x = m * (h - 1)
θ = uc_p(f(h, m)) * (m + x)
return θ

# Bounds for Maximization


lb1 = np.array([self.h_min, 0, θ_min])
ub1 = np.array([self.h_max, self.mbar - 1e-5, θ_max])
lb2 = np.array([self.h_min, θ_min])
ub2 = np.array([self.h_max, θ_max])

# Initialize Value Function coefficients


# Calculate roots of Chebyshev polynomial
k = np.linspace(order, 1, order)
roots = np.cos((2 * k - 1) * np.pi / (2 * order))
# Scale to approximation space
s = θ_min + (roots - -1) / 2 * (θ_max - θ_min)
# Create a basis matrix
Φ = cheb.chebvander(roots, order - 1)
c = np.zeros(Φ.shape[0])

# Function to minimize and constraints


def p_fun(x):
scale = -1 + 2 * (x[2] - θ_min)/(θ_max - θ_min)
p_fun = - (u(x[0], x[1]) \
+ self.β * np.dot(cheb.chebvander(scale, order - 1), c))
return p_fun

def p_fun2(x):
scale = -1 + 2*(x[1] - θ_min)/(θ_max - θ_min)
p_fun = - (u(x[0],mbar) \
+ self.β * np.dot(cheb.chebvander(scale, order - 1), c))
return p_fun

cons1 = ({'type': 'eq', 'fun': lambda x: uc_p(f(x[0], x[1])) * x[1]


* (x[0] - 1) + v_p(x[1]) * x[1] + self.β * x[2] - θ},
{'type': 'eq', 'fun': lambda x: uc_p(f(x[0], x[1]))
* x[0] * x[1] - θ})
cons2 = ({'type': 'ineq', 'fun': lambda x: uc_p(f(x[0], mbar)) * mbar
* (x[0] - 1) + v_p(mbar) * mbar + self.β * x[1] - θ},
{'type': 'eq', 'fun': lambda x: uc_p(f(x[0], mbar))
* x[0] * mbar - θ})

bnds1 = np.concatenate([lb1.reshape(3, 1), ub1.reshape(3, 1)], axis=1)
bnds2 = np.concatenate([lb2.reshape(2, 1), ub2.reshape(2, 1)], axis=1)

# Bellman Iterations
diff = 1
iters = 1

while diff > tol:


# 1. Maximization, given value function guess
p_iter1 = np.zeros(order)
for i in range(order):
θ = s[i]
res = minimize(p_fun,
lb1 + (ub1-lb1) / 2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success == True:
p_iter1[i] = -p_fun(res.x)
res = minimize(p_fun2,
lb2 + (ub2-lb2) / 2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res.x) > p_iter1[i] and res.success == True:
p_iter1[i] = -p_fun2(res.x)

# 2. Bellman updating of Value Function coefficients


c1 = np.linalg.solve(Φ, p_iter1)
# 3. Compute distance and update
diff = np.linalg.norm(c - c1)
if bool(disp == True):
print(diff)
c = np.copy(c1)
iters = iters + 1
if iters > maxiters:
print('Convergence failed after {} iterations'.format(maxiters))
break

self.θ_grid = s
self.p_iter = p_iter1
self.Φ = Φ
self.c = c
print('Convergence achieved after {} iterations'.format(iters))

# Check residuals
θ_grid_fine = np.linspace(θ_min, θ_max, 100)
resid_grid = np.zeros(100)
p_grid = np.zeros(100)
θ_prime_grid = np.zeros(100)
m_grid = np.zeros(100)
h_grid = np.zeros(100)
for i in range(100):
θ = θ_grid_fine[i]
res = minimize(p_fun,

lb1 + (ub1-lb1) / 2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success == True:
p = -p_fun(res.x)
p_grid[i] = p
θ_prime_grid[i] = res.x[2]
h_grid[i] = res.x[0]
m_grid[i] = res.x[1]
res = minimize(p_fun2,
lb2 + (ub2-lb2)/2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res.x) > p and res.success == True:
p = -p_fun2(res.x)
p_grid[i] = p
θ_prime_grid[i] = res.x[1]
h_grid[i] = res.x[0]
m_grid[i] = self.mbar
scale = -1 + 2 * (θ - θ_min)/(θ_max - θ_min)
resid_grid[i] = np.dot(cheb.chebvander(scale, order-1), c) - p

self.resid_grid = resid_grid
self.θ_grid_fine = θ_grid_fine
self.θ_prime_grid = θ_prime_grid
self.m_grid = m_grid
self.h_grid = h_grid
self.p_grid = p_grid
self.x_grid = m_grid * (h_grid - 1)

# Simulate
θ_series = np.zeros(31)
m_series = np.zeros(30)
h_series = np.zeros(30)

# Find initial θ
def ValFun(x):
scale = -1 + 2*(x - θ_min)/(θ_max - θ_min)
p_fun = np.dot(cheb.chebvander(scale, order - 1), c)
return -p_fun

res = minimize(ValFun,
(θ_min + θ_max)/2,
bounds=[(θ_min, θ_max)])
θ_series[0] = res.x

# Simulate
for i in range(30):
θ = θ_series[i]
res = minimize(p_fun,
lb1 + (ub1-lb1)/2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,

tol=1e-10)
if res.success == True:
p = -p_fun(res.x)
h_series[i] = res.x[0]
m_series[i] = res.x[1]
θ_series[i+1] = res.x[2]
res2 = minimize(p_fun2,
lb2 + (ub2-lb2)/2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res2.x) > p and res2.success == True:
h_series[i] = res2.x[0]
m_series[i] = self.mbar
θ_series[i+1] = res2.x[1]

self.θ_series = θ_series
self.m_series = m_series
self.h_series = h_series
self.x_series = m_series * (h_series - 1)

41.4.1 Comparison of Sets

The set of (𝑤, 𝜃) pairs associated with sustainable plans is smaller than the set of (𝑤, 𝜃) pairs associated with competitive equilibria, since the additional constraints associated with sustainability must also be satisfied.
Let’s compute two examples, one with a low 𝛽, another with a higher 𝛽

In [4]: ch1 = ChangModel(β=0.3, mbar=30, h_min=0.9, h_max=2, n_h=8, n_m=35, N_g=10)

In [5]: ch1.solve_sustainable()

### --------------- ###


Solving Chang Model Using Outer Hyperplane Approximation
### --------------- ###

Maximum difference when updating hyperplane levels:


[1.9168]
[0.66782]
[0.49235]
[0.32412]
[0.19022]


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-ce0f3c9d3306> in <module>
----> 1 ch1.solve_sustainable()

<ipython-input-3-04bea48ab06f> in solve_sustainable(self, tol, max_iter)
    269                 iters = iters + 1
    270                 self.solve_worst_spe()
--> 271                 self.solve_subgradient()
    272                 diff = max(np.maximum(abs(self.c0_c - self.c1_c),
    273                                       abs(self.c0_s - self.c1_s)))

<ipython-input-3-04bea48ab06f> in solve_subgradient(self)
    231                         res = linprog(c, A_ub=aineq_S, b_ub=bineq_S, A_eq=aeq_S,
    232                                       b_eq=beq_S, bounds=(self.w_bnds_s, \
--> 233                                                           self.p_bnds_s))
    234                         if res.status == 0:
    235                             c_a1a2_s[j] = self.H[i, 0] * (self.u_vec[j] \

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog.py in linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method, callback, options, x0)
    567                                            complete, status,
    568                                            message, tol,
--> 569                                            iteration, disp)
    570
    571     sol = {

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog_util.py in _postprocess(x, postsolve_args, complete, status, message, tol, iteration, disp)
   1477     status, message = _check_result(
   1478         x, fun, status, slack, con,
-> 1479         lb, ub, tol, message
   1480     )
   1481

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog_util.py in _check_result(x, fun, status, slack, con, lb, ub, tol, message)
   1392     # nearly basic feasible solution. Postsolving can make the solution
   1393     # basic, however, this solution is NOT optimal
-> 1394     raise ValueError(message)
   1395
   1396     return status, message

ValueError: The algorithm terminated successfully and determined that the problem is infeasible.

The following plot shows both the set of 𝑤, 𝜃 pairs associated with competitive equilibria (in
red) and the smaller set of 𝑤, 𝜃 pairs associated with sustainable plans (in blue).

In [6]: def plot_equilibria(ChangModel):


"""
Method to plot both equilibrium sets
"""
fig, ax = plt.subplots(figsize=(7, 5))

ax.set_xlabel('w', fontsize=16)
ax.set_ylabel(r"$\theta$", fontsize=18)

poly_S = polytope.Polytope(ChangModel.H, ChangModel.c1_s)


poly_C = polytope.Polytope(ChangModel.H, ChangModel.c1_c)
ext_C = polytope.extreme(poly_C)
ext_S = polytope.extreme(poly_S)

ax.fill(ext_C[:, 0], ext_C[:, 1], 'r', zorder=-1)


ax.fill(ext_S[:, 0], ext_S[:, 1], 'b', zorder=0)

# Add point showing Ramsey Plan


idx_Ramsey = np.where(ext_C[:, 0] == max(ext_C[:, 0]))[0][0]
R = ext_C[idx_Ramsey, :]
ax.scatter(R[0], R[1], 150, 'black', 'o', zorder=1)
w_min = min(ext_C[:, 0])

# Label Ramsey Plan slightly to the right of the point


ax.annotate("R", xy=(R[0], R[1]),
xytext=(R[0] + 0.03 * (R[0] - w_min),
R[1]), fontsize=18)

plt.tight_layout()
plt.show()

plot_equilibria(ch1)

Evidently, the Ramsey plan, denoted by the 𝑅, is not sustainable.


Let’s raise the discount factor and recompute the sets

In [7]: ch2 = ChangModel(β=0.8, mbar=30, h_min=0.9, h_max=1/0.8,


n_h=8, n_m=35, N_g=10)

In [8]: ch2.solve_sustainable()

### --------------- ###


Solving Chang Model Using Outer Hyperplane Approximation
### --------------- ###

Maximum difference when updating hyperplane levels:


[0.06369]
[0.02476]
[0.02153]
[0.01915]
[0.01795]
[0.01642]
[0.01507]
[0.01284]
[0.01106]
[0.00694]
[0.0085]
[0.00781]
[0.00433]
[0.00492]
[0.00303]
[0.00182]


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-b1776dca964b> in <module>
----> 1 ch2.solve_sustainable()

<ipython-input-3-04bea48ab06f> in solve_sustainable(self, tol, max_iter)
    269                 iters = iters + 1
    270                 self.solve_worst_spe()
--> 271                 self.solve_subgradient()
    272                 diff = max(np.maximum(abs(self.c0_c - self.c1_c),
    273                                       abs(self.c0_s - self.c1_s)))

<ipython-input-3-04bea48ab06f> in solve_subgradient(self)
    231                         res = linprog(c, A_ub=aineq_S, b_ub=bineq_S, A_eq=aeq_S,
    232                                       b_eq=beq_S, bounds=(self.w_bnds_s, \
--> 233                                                           self.p_bnds_s))
    234                         if res.status == 0:
    235                             c_a1a2_s[j] = self.H[i, 0] * (self.u_vec[j] \

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog.py in linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method, callback, options, x0)
    567                                            complete, status,
    568                                            message, tol,
--> 569                                            iteration, disp)
    570
    571     sol = {

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog_util.py in _postprocess(x, postsolve_args, complete, status, message, tol, iteration, disp)
   1477     status, message = _check_result(
   1478         x, fun, status, slack, con,
-> 1479         lb, ub, tol, message
   1480     )
   1481

~/anaconda3/lib/python3.7/site-packages/scipy/optimize/_linprog_util.py in _check_result(x, fun, status, slack, con, lb, ub, tol, message)
   1392     # nearly basic feasible solution. Postsolving can make the solution
   1393     # basic, however, this solution is NOT optimal
-> 1394     raise ValueError(message)
   1395
   1396     return status, message

ValueError: The algorithm terminated successfully and determined that the problem is infeasible.

Let’s plot both sets

In [9]: plot_equilibria(ch2)

Evidently, the Ramsey plan is now sustainable.


Bibliography

[1] Dilip Abreu. On the theory of infinitely repeated games with discounting. Econometrica,
56:383–396, 1988.

[2] Dilip Abreu, David Pearce, and Ennio Stacchetti. Toward a theory of discounted re-
peated games with imperfect monitoring. Econometrica, 58(5):1041–1063, September
1990.

[3] S Rao Aiyagari, Albert Marcet, Thomas J Sargent, and Juha Seppälä. Optimal taxation
without state-contingent debt. Journal of Political Economy, 110(6):1220–1254, 2002.

[4] Cristina Arellano. Default risk and income fluctuations in emerging economies. The
American Economic Review, pages 690–712, 2008.

[5] Athanasios Papoulis and S Unnikrishna Pillai. Probability, Random Variables, and
Stochastic Processes. McGraw-Hill, 1991.

[6] Orazio P Attanasio and Nicola Pavoni. Risk sharing in private information models with
asset accumulation: Explaining the excess smoothness of consumption. Econometrica,
79(4):1027–1068, 2011.

[7] Robert J Barro. On the Determination of the Public Debt. Journal of Political Economy,
87(5):940–971, 1979.

[8] Robert J Barro. Determinants of democracy. Journal of Political Economy, 107(S6):S158–S183, 1999.

[9] Robert J Barro and Rachel McCleary. Religion and economic growth. Technical report,
National Bureau of Economic Research, 2003.

[10] Anmol Bhandari, David Evans, Mikhail Golosov, and Thomas J. Sargent. Fiscal Policy
and Debt Management with Incomplete Markets. The Quarterly Journal of Economics,
132(2):617–663, 2017.

[11] Fischer Black and Robert Litterman. Global portfolio optimization. Financial analysts
journal, 48(5):28–43, 1992.

[12] Philip Cagan. The monetary dynamics of hyperinflation. In Milton Friedman, editor,
Studies in the Quantity Theory of Money, pages 25–117. University of Chicago Press,
Chicago, 1956.

[13] Guillermo A. Calvo. On the time consistency of optimal policy in a monetary economy.
Econometrica, 46(6):1411–1428, 1978.

[14] Roberto Chang. Credible monetary policy in an infinite horizon model: Recursive ap-
proaches. Journal of Economic Theory, 81(2):431–461, 1998.


[15] Varadarajan V Chari and Patrick J Kehoe. Sustainable plans. Journal of Political Econ-
omy, pages 783–802, 1990.

[16] Ronald Harry Coase. The nature of the firm. Economica, 4(16):386–405, 1937.

[17] J. D. Cryer and K-S. Chan. Time Series Analysis. Springer, 2nd edition, 2008.

[18] Raymond J Deneckere and Kenneth L Judd. Cyclical and chaotic behavior in a dynamic
equilibrium model, with implications for fiscal policy. Cycles and chaos in economic equi-
librium, pages 308–329, 1992.

[19] J Dickey. Bayesian alternatives to the f-test and least-squares estimate in the normal
linear model. In S.E. Fienberg and A. Zellner, editors, Studies in Bayesian econometrics
and statistics, pages 515–554. North-Holland, Amsterdam, 1975.

[20] JBR Do Val, JC Geromel, and OLV Costa. Solutions for the linear-quadratic control
problem of markov jump linear systems. Journal of Optimization Theory and Applica-
tions, 103(2):283–311, 1999.

[21] M. Friedman. A Theory of the Consumption Function. Princeton University Press, 1956.

[22] David Gale. The theory of linear economic models. University of Chicago press, 1989.

[23] Albert Gallatin. Report on the finances, November 1807. In Reports of the Secretary
of the Treasury of the United States, Vol 1. Government Printing Office, Washington, DC,
1837.

[24] Robert E Hall. Stochastic Implications of the Life Cycle-Permanent Income Hypothesis:
Theory and Evidence. Journal of Political Economy, 86(6):971–987, 1978.

[25] Michael J Hamburger, Gerald L Thompson, and Roman L Weil. Computation of expan-
sion rates for the generalized von neumann model of an expanding economy. Economet-
rica, Journal of the Econometric Society, pages 542–547, 1967.

[26] L P Hansen and T J Sargent. Robustness. Princeton University Press, 2008.

[27] Lars Peter Hansen and Thomas J Sargent. Formulating and estimating dynamic linear
rational expectations models. Journal of Economic Dynamics and control, 2:7–46, 1980.

[28] Lars Peter Hansen and Thomas J Sargent. Wanting robustness in macroeconomics.
Manuscript, Department of Economics, Stanford University., 4, 2000.

[29] Lars Peter Hansen and Thomas J. Sargent. Robust control and model uncertainty.
American Economic Review, 91(2):60–66, 2001.

[30] Lars Peter Hansen and Thomas J Sargent. Robustness. Princeton university press, 2008.

[31] Lars Peter Hansen and Thomas J. Sargent. Recursive Linear Models of Dynamic Eco-
nomics. Princeton University Press, Princeton, New Jersey, 2013.

[32] Lars Peter Hansen and José A Scheinkman. Long-term risk: An operator approach.
Econometrica, 77(1):177–234, 2009.

[33] Elhanan Helpman and Paul Krugman. Market structure and international trade. MIT
Press Cambridge, 1985.

[34] O Hernandez-Lerma and J B Lasserre. Discrete-Time Markov Control Processes: Basic


Optimality Criteria. Number Vol 1 in Applications of Mathematics Stochastic Modelling
and Applied Probability. Springer, 1996.

[35] Hugo A Hopenhayn and Richard Rogerson. Job Turnover and Policy Evaluation: A Gen-
eral Equilibrium Analysis. Journal of Political Economy, 101(5):915–938, 1993.

[36] Kenneth L Judd. On the performance of patents. Econometrica, pages 567–585, 1985.

[37] Kenneth L. Judd, Sevin Yeltekin, and James Conklin. Computing Supergame Equilibria.
Econometrica, 71(4):1239–1254, 07 2003.

[38] John G Kemeny, Oskar Morgenstern, and Gerald L Thompson. A generalization of the
von neumann model of an expanding economy. Econometrica, Journal of the Economet-
ric Society, pages 115–135, 1956.

[39] Tomoo Kikuchi, Kazuo Nishimura, and John Stachurski. Span of control, transaction
costs, and the structure of production chains. Theoretical Economics, 13(2):729–760,
2018.

[40] Finn E Kydland and Edward C Prescott. Dynamic optimal taxation, rational expecta-
tions and optimal control. Journal of Economic Dynamics and Control, 2:79–91, 1980.

[41] A Lasota and M C MacKey. Chaos, Fractals, and Noise: Stochastic Aspects of Dynam-
ics. Applied Mathematical Sciences. Springer-Verlag, 1994.

[42] Edward E Leamer. Specification searches: Ad hoc inference with nonexperimental data,
volume 53. John Wiley & Sons Incorporated, 1978.

[43] L Ljungqvist and T J Sargent. Recursive Macroeconomic Theory. MIT Press, 4 edition,
2018.

[44] Robert E Lucas, Jr. Asset prices in an exchange economy. Econometrica: Journal of the
Econometric Society, 46(6):1429–1445, 1978.

[45] Robert E Lucas, Jr. and Nancy L Stokey. Optimal Fiscal and Monetary Policy in an
Economy without Capital. Journal of Monetary Economics, 12(3):55–93, 1983.

[46] S P Meyn and R L Tweedie. Markov Chains and Stochastic Stability. Cambridge Univer-
sity Press, 2009.

[47] Mario J Miranda and P L Fackler. Applied Computational Economics and Finance. Cam-
bridge: MIT Press, 2002.

[48] John F Muth. Optimal properties of exponentially weighted forecasts. Journal of the
american statistical association, 55(290):299–306, 1960.

[49] Sophocles J Orfanidis. Optimum Signal Processing: An Introduction. McGraw Hill Pub-
lishing, New York, New York, 1988.

[50] Martin L Puterman. Markov decision processes: discrete stochastic dynamic program-
ming. John Wiley & Sons, 2005.

[51] F. P. Ramsey. A Contribution to the theory of taxation. Economic Journal, 37(145):47–


61, 1927.

[52] Steven Roman. Advanced linear algebra, volume 3. Springer, 2005.

[53] Sherwin Rosen, Kevin M Murphy, and Jose A Scheinkman. Cattle cycles. Journal of
Political Economy, 102(3):468–492, 1994.

[54] Y. A. Rozanov. Stationary Random Processes. Holden-Day, San Francisco, 1967.



[55] John Rust. Numerical dynamic programming in economics. Handbook of computational


economics, 1:619–729, 1996.

[56] Jaewoo Ryoo and Sherwin Rosen. The engineering labor market. Journal of political
economy, 112(S1):S110–S140, 2004.

[57] Thomas Sargent, Lars Peter Hansen, and Will Roberts. Observable implications of
present value budget balance. In Rational Expectations Econometrics. Westview Press,
1991.

[58] Thomas J Sargent. The Demand for Money During Hyperinflations under Rational Ex-
pectations: I. International Economic Review, 18(1):59–82, February 1977.

[59] Thomas J Sargent. Macroeconomic Theory. Academic Press, New York, 2nd edition,
1987.

[60] A N Shiriaev. Probability. Graduate Texts in Mathematics. Springer, 2nd edition, 1995.

[61] N L Stokey, R E Lucas, and E C Prescott. Recursive Methods in Economic Dynamics.


Harvard University Press, 1989.

[62] Nancy L Stokey. Reputation and time consistency. The American Economic Review,
pages 134–139, 1989.

[63] Nancy L. Stokey. Credible public policy. Journal of Economic Dynamics and Control,
15(4):627–656, October 1991.

[64] Lars E.O. Svensson and Noah Williams. Optimal Monetary Policy under Uncertainty in
DSGE Models: A Markov Jump-Linear-Quadratic Approach. In Klaus Schmidt-Hebbel and
Carl E. Walsh, editors, Monetary Policy under Uncertainty and Learning, volume 13 of
Central Banking, Analysis, and Economic Policies Book Series, chapter 3, pages 077–114.
Central Bank of Chile, March 2009.

[65] Lars EO Svensson, Noah Williams, et al. Optimal monetary policy under uncertainty:
A markov jump-linear-quadratic approach. Federal Reserve Bank of St. Louis Review,
90(4):275–293, 2008.

[66] John von Neumann. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen,
100(1):295–320, 1928.

[67] John von Neumann. Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung
des Brouwerschen Fixpunktsatzes. In Erge. Math. Kolloq., volume 8, pages 73–83, 1937.

[68] Peter Whittle. Prediction and regulation by linear least-square methods. English Univ.
Press, 1963.

[69] Peter Whittle. Prediction and Regulation by Linear Least Squares Methods. University of
Minnesota Press, Minneapolis, Minnesota, 2nd edition, 1983.
