
Back in Time. Fast.
Accelerated Time Iterations.

Pablo Winant
ESCP Business School, École Polytechnique, CREST
[email protected]

15/03/2025
Abstract
We present an acceleration method to find an approximate solution for a wide class of nonlinear rational expectations models characterized by first-order conditions. This method works for any model that can be solved using time iterations, and its convergence is quadratic, leading to very sizable computational gains.
We illustrate these gains on three benchmark examples.

keywords: Time Iterations, Acceleration, Quadratic Convergence, Newton-Raphson, Iterative Solvers

1. Introduction
We present a simple acceleration method to produce the solution of a wide class of rational expectations models characterized by first-order conditions1.

This paper is motivated by the growing interest in the economics profession in investigating the robustness of nonlinear medium-scale rational equilibrium models, when the economy can be drawn away from the steady-state or when there are important nonlinearities such as an occasionally binding constraint or a zero lower bound (for instance Fernández-Villaverde et al. (2015), Coeurdacier, Rey, and Winant (2020)).

In general, these models do not derive from the problem of one single optimizing agent, hence methods based on Bellman equations, like value iteration or Howard policy iterations (see Rust (1996) for a review), cannot be used. Instead we intend to solve any model which can be characterized by a set of transition and equilibrium equations. The latter include Euler equations, asset prices, or other nonlinear relations.

This generic class of models is typically solved by the time-iteration algorithm (Coleman (1990), Deaton and Laroque (1992)). Time iteration, sometimes referred to as Coleman iteration, consists in finding the optimal decision rule today, knowing the decision rule tomorrow. Starting from an initial guess, and applying the operator many times, i.e. going back in time, yields the solution to the model.
Many methods have been proposed to accelerate the solution for this family of problems. Among them, parameterized expectations (Haan and Marcet (1990)), the endogenous grid point method (Carroll (2006)), or the precomputation of expectations (Judd et al. (2017)) reduce the cost of each iteration but only work when the first-order conditions can be recast in a specific form. By contrast, our method only requires the asymptotic convergence of the time iterations, a desirable property of any well-defined model.

The literature has also discussed several ways to increase the precision of the approximate decision rules or to reduce the number of parameters. Splines, Chebyshev, and complete polynomials are discussed in Judd (1998), Smolyak polynomials in Judd et al. (2014). All those linear interpolation schemes are compatible with our algorithm. Other attempts to reduce the dimensionality of rational expectations models have resorted to adaptive grids (Brumm and Scheidegger (2017)), ergodic sets (Maliar, Maliar, and Judd (2011)) or neural networks (Maliar, Maliar, and Winant (2021), Azinovic, Gaegauf, and Scheidegger (2022)).

1 Part of the research was conducted while the author was at the Bank of England. The author would like to thank Basile Grassi, Michel Juillard, Michael Kumhof, Lilia Maliar, Serguei Maliar, Riccardo Masolo, Wanda Mimra, Michael Reiter and John Stachurski for valuable help or feedback at different stages of this project.

This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=5186532
The method we propose accelerates each time iteration by using its derivative operator to produce an estimate of the fixed point2. While the procedure produces exactly the same solution as the common time-iteration algorithm, it converges at a quadratic rate instead of a geometric one and completes much faster in practical applications.
One key contribution of this paper resides in showing how to construct an explicit representation of the derivative operator. We also show that a slight modification of it can be used to directly solve the model as a single nonlinear system. Since this linear operator is not represented as a matrix, we need to resort to iterative methods to solve any linear system that involves it. Many iterative solvers have been developed for this task, including GMRES (Saad and Schultz (1986)), but in our experiments we show that a very simple recursive calculation performs almost as well, with a much lower memory footprint.
Section 2 introduces the general form of the models that we solve. Section 3 sketches out our main algorithm, while Section 4 describes an efficient way to represent the model Jacobians that are needed to implement it. Section 5 discusses several options to invert the Jacobian and draws an explicit connection with iterative solvers. Section 6 presents a closely related Newton-Raphson method. Section 7 compares and explains computation times on three different models, which are detailed in Appendix A. Section 8 concludes.

2. General Setup
Consider a state space 𝒮 ⊂ ℝ^{n_s} and an unbounded space of continuous controls 𝒳 = ℝ^{n_x}.

In any state 𝑠 ∈ 𝒮, the optimal controls 𝑥 ∈ 𝒳 are characterized by


𝔼[𝑓(𝑠, 𝑥, 𝑠′ , 𝑥′ )] = 0 (1)
where 𝑓 represents a set of 𝑛_𝑥 equations, where future states 𝑠′ are drawn from distribution 𝜏^𝑚(𝑠, 𝑥) and where future controls 𝑥′ are taken according to an initially unknown decision rule 𝜑(𝑠′) = 𝑥′.
The specification (𝒮, 𝒳, 𝑓, 𝜏^𝑚) can represent models with both discrete and continuous states, depending on the definition of 𝒮. It contains a wide class of economic models, the main limitation being that we require controls to be continuously valued, so as to be naturally characterized by the Euler equation (Equation 1).

2.1. Discretization
To solve the model approximately, we discretize the state-space into a finite vector of 𝑁 states 𝒔⃗ = (𝒔⃗_𝑛)_{𝑛∈⟦1,𝑁⟧} ∈ 𝒮^𝑁. To a vector of corresponding controls 𝒙⃗ = (𝒙⃗_𝑛)_{𝑛∈⟦1,𝑁⟧} ∈ 𝒳^𝑁 we associate an interpolation method 𝒥 in order to define a decision rule over the whole state-space: 𝑠 ∈ 𝒮 ↦ 𝑥 = 𝒥(𝑠; 𝒙⃗) ∈ 𝒳. We assume that 𝒥 is linear in the data 𝒙⃗ to be interpolated3.
We also approximate the transition distribution 𝜏^𝑚(𝑠, 𝑥) by a finite distribution 𝜏(𝑠, 𝑥), materialized by enumerating the 𝐾 future states (𝑠′_𝑖(𝑠, 𝑥))_{𝑖∈⟦1,𝐾⟧} occurring with probabilities (𝑤_𝑖(𝑠, 𝑥))_{𝑖∈⟦1,𝐾⟧}.

2 In some sense our algorithm is to the Euler equation what Howard policy iteration is to Bellman problems. Indeed a Howard improvement consists in a Bellman update (our time iteration) followed by a policy evaluation (our acceleration). We know since Puterman and Brumelle (1979) that the latter step can equivalently be expressed as a function of the (sub-)derivative of the Bellman update. Like our method, Howard policy iteration is thus an accelerated fixed-point algorithm. In particular it also converges quadratically.

This leads to the computable residual function

𝐹(𝑠, 𝑥; 𝒙⃖) = ∑_{(𝑤,𝑠′)∈𝜏(𝑠,𝑥)} 𝑤 𝑓(𝑠, 𝑥, 𝑠′, 𝒥(𝑠′; 𝒙⃖))   (2)

where 𝒙⃖ represents the decision rule tomorrow.


Finally, for any vector of controls today 𝒙⃗ we define the vector of residuals as

ev
𝑭 (𝒙,⃗ 𝒙)⃖ = (𝐹 (𝒔𝑛⃗ , 𝒙⃗ 𝑛 ; 𝒙))
⃖ 𝑛∈⟦1,𝑁⟧ (3)

where we leave implicit the dependence of 𝑭 on 𝒔 ⃗ since the latter remains constant. Equation 3
represents the discretized model to be solved numerically.

rr
Time invariant solution 𝒙 ∈ 𝒳𝑁 to the discretized model must satisfy:
𝑮(𝒙) = 𝑭 (𝒙, 𝒙) = 0 (4)
3. Accelerated Time Iteration
We now outline two simple algorithms to solve any model that can be represented as in Equation 3.

3.1. Time Iteration


Let's define the time-iteration operator as the function 𝒯 : 𝒳^𝑁 → 𝒳^𝑁 such that, for any given 𝒙⃗,

𝐹(𝒯(𝒙⃗), 𝒙⃗) = 0   (5)

The time-iteration algorithm consists in computing the iterates

𝒙⃗_{𝑘+1} = 𝒯(𝒙⃗_𝑘)   (6)

starting from an initial guess 𝒙⃗_0.
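To fix ideas, the iteration in Equation 6 can be sketched in a few lines. The paper's implementation is in Julia; the following Python/NumPy toy (our illustration, not the paper's code) uses a linear contraction 𝒯(𝒙) = 𝑨𝒙 + 𝒃 as a stand-in for the time-iteration operator:

```python
import numpy as np

# Toy stand-in for the time-iteration operator: a linear contraction
# T(x) = A x + b, where A has spectral radius 0.7 < 1.
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
b = np.array([1.0, 2.0])

def T(x):
    return A @ x + b

def time_iteration(T, x0, tol=1e-10, maxit=10_000):
    """Iterate x_{k+1} = T(x_k) until successive iterates are close (Equation 6)."""
    x = x0
    for k in range(maxit):
        x_new = T(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k + 1
        x = x_new
    raise RuntimeError("time iteration did not converge")

x_star, n_iter = time_iteration(T, np.zeros(2))
# The fixed point of this toy operator solves (I - A) x = b.
assert np.allclose(x_star, np.linalg.solve(np.eye(2) - A, b))
```

On this toy problem the error decays geometrically at rate 0.7, exactly the behavior described below.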


Convergence is guaranteed if 𝒙⃗_0 is close enough to 𝒙 under the condition

𝜌(𝒯′(𝒙)) < 1   (7)

where 𝒯′(𝒙) is the Fréchet derivative of 𝒯 at 𝒙 and the function 𝜌(⋅) denotes the spectral radius. Convergence is asymptotically geometric, with a decay rate faster than 𝜌(𝒯′) + 𝜀 for any 𝜀 > 04.

3.2. Accelerated Time Iteration


Building on 𝒯 and 𝒯′, we propose a simple acceleration method.
Assuming one time-iteration step 𝒙̃_{𝑘+1} = 𝒯(𝒙⃗_𝑘) has been computed, we can build a linear approximation of the time operator 𝒯 around 𝒙⃗_𝑘 as

𝒯_𝑘(𝒙) = 𝒙̃_{𝑘+1} + 𝒯′(𝒙⃗_𝑘)(𝒙 − 𝒙⃗_𝑘)   (8)

Then a fixed point of 𝒯 can be approximated by solving 𝒯_𝑘(𝒙⃗) = 𝒙⃗, which yields a new guess:

𝒙⃗_{𝑘+1} = 𝒜(𝒙̃_{𝑘+1}, 𝒙⃗_𝑘) = 𝒙⃗_𝑘 + (𝐼 − 𝒯′(𝒙⃗_𝑘))⁻¹(𝒙̃_{𝑘+1} − 𝒙⃗_𝑘)   (9)

3 Commonly used methods interpolate data linearly. This includes splines of any order and all spectral polynomial methods (complete, Chebyshev, Smolyak).
4 Section B provides formal convergence speed estimates for the algorithms outlined in this paper.

This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=5186532

Note that this expression is well defined since, by continuity, 𝜌(𝒯′(𝒙⃗_𝑘)) < 1 close enough to the solution.
The accelerated time-iteration algorithm consists in computing the iterates:

𝒙⃗_{𝑘+1} = 𝒜(𝒯(𝒙⃗_𝑘), 𝒙⃗_𝑘)   (10)

The sequence of accelerated time iterations also converges to 𝒙 when the initial guess 𝒙⃗_0 is close enough to the solution.
Furthermore, the convergence is quadratic.5
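The acceleration step in Equation 9 is easy to state in code. The sketch below (our Python illustration, not the paper's Julia implementation) applies it to a small nonlinear contraction with a known derivative; this is exactly a Newton scheme on 𝒯(𝒙) − 𝒙 = 0:

```python
import numpy as np

# Toy nonlinear operator whose derivative has spectral radius at most 0.5.
def T(x):
    return 0.5 * np.cos(x)

def T_prime(x):
    return np.diag(-0.5 * np.sin(x))   # Jacobian of T at x

def accelerated_time_iteration(T, T_prime, x0, tol=1e-12, maxit=50):
    """x_{k+1} = x_k + (I - T'(x_k))^{-1} (T(x_k) - x_k), Equations 9-10."""
    x = x0
    for k in range(maxit):
        step = np.linalg.solve(np.eye(len(x)) - T_prime(x), T(x) - x)
        x = x + step
        if np.max(np.abs(step)) < tol:
            return x, k + 1
    raise RuntimeError("accelerated iteration did not converge")

x_star, n_acc = accelerated_time_iteration(T, T_prime, np.zeros(3))
assert np.allclose(T(x_star), x_star)   # x_star is a fixed point of T
```

On this toy operator the accelerated scheme reaches machine precision in a handful of iterations, against dozens for plain fixed-point iteration.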

4. Computing the derivatives


The two algorithms we have described in Section 3 require the calculation of the operators 𝒯(𝒙⃗_𝑘) and 𝒯′(𝒙⃗_𝑘). In this section we describe how they can be obtained from a suitable representation of the derivatives of 𝐹 with respect to its first (resp. second) argument, 𝐹_𝐴′ (resp. 𝐹_𝐵′).
First, 𝒯(𝒙⃗_𝑘) can be obtained by applying a Newton solver to the function 𝑢 → 𝐹(𝑢, 𝒙⃗_𝑘) that uses the derivative 𝐹_𝐴′.
Then, once a solution 𝒙⃗_{𝑘+1} = 𝒯(𝒙⃗_𝑘) is found that satisfies 𝐹(𝒯(𝒙⃗_𝑘), 𝒙⃗_𝑘) = 0, we can differentiate Equation 5

𝐹_𝐴′(𝒙⃗_{𝑘+1}, 𝒙⃗_𝑘)𝒯′(𝒙⃗_𝑘) + 𝐹_𝐵′(𝒙⃗_{𝑘+1}, 𝒙⃗_𝑘) = 0   (11)

to get

𝒯′(𝒙⃗_𝑘) = −𝐹_𝐴′(𝒙⃗_{𝑘+1}, 𝒙⃗_𝑘)⁻¹ 𝐹_𝐵′(𝒙⃗_{𝑘+1}, 𝒙⃗_𝑘)   (12)

In the next subsections we show how to compute the operators 𝐹_𝐴′(⋅,⋅) and 𝐹_𝐵′(⋅,⋅) and how to combine them to compute Equation 12.

4.1. Computing 𝐹𝐴′


From the definition of 𝑭 in Equation 3, it is apparent that the 𝑛-th component of 𝑭 only depends on the 𝑛-th component of 𝒙⃗.
Hence 𝑭_𝐴′ is a matrix with a block-diagonal structure where the diagonal elements are given by:

𝑭_𝐴′(𝒙⃗, 𝒙⃖)_{𝑛,𝑛} = 𝐹_𝑥′(𝒔⃗_𝑛, 𝒙⃗_𝑛; 𝒙⃖) = ∑_{(𝑤,𝑠′)∈𝜏(𝒔⃗_𝑛,𝒙⃗_𝑛)} 𝑤 𝑓_𝑥′(𝒔⃗_𝑛, 𝒙⃗_𝑛, 𝑠′, 𝒥(𝑠′; 𝒙⃖))   (13)

5 This results from observing that Equation 10 produces the same iterates as a Newton-Raphson scheme applied to 𝒯(𝒙) − 𝒙.

Figure 1: Structure of the Jacobian for the RBC model

4.2. Computing 𝐹_𝐵′.𝒖⃗

As shown in Figure 1, the matrix representation of 𝐹_𝐵′ does not have a very simple structure. It is however possible to compute 𝐹_𝐵′.𝒖⃗ efficiently for many vectors 𝒖⃗.
Since the function 𝒥 used to interpolate future controls as 𝒥(𝑠; 𝒙⃖) is linear in 𝒙⃖, we can write, for any 𝒖⃗,

𝒥′_𝒙⃖(𝑠; 𝒙⃖).𝒖⃗ = 𝒥(𝑠; 𝒖⃗)   (14)

This insight allows us to differentiate Equation 2 w.r.t. 𝒙⃖ as follows:

𝐹_𝐵′(𝑠, 𝑥; 𝒙⃖).𝒖⃗ = ∑_{(𝑤,𝑠′)∈𝜏(𝑠,𝑥)} 𝑤 𝑓_{𝑥′}′(𝑠, 𝑥, 𝑠′, 𝒥(𝑠′; 𝒙⃖)) 𝒥(𝑠′; 𝒖⃗)   (15)

Finally, we get the expression:

𝑭_𝐵′(𝒙⃗, 𝒙⃖).𝒖⃗ = (𝐹_𝐵′(𝒔⃗_𝑛, 𝒙⃗_𝑛; 𝒙⃖).𝒖⃗)_{𝑛∈⟦1,𝑁⟧}   (16)
4.3. Computing 𝐿.𝒖⃗

We now want to compute the vector

𝐿.𝒖⃗ = −𝐹_𝐴′(𝒙⃗, 𝒙⃖)⁻¹ 𝐹_𝐵′(𝒙⃗, 𝒙⃖).𝒖⃗ = (−𝐹_𝐴′(𝒔⃗_𝑛, 𝒙⃗_𝑛; 𝒙⃖)⁻¹ 𝐹_𝐵′(𝒔⃗_𝑛, 𝒙⃗_𝑛; 𝒙⃖).𝒖⃗)_{𝑛∈⟦1,𝑁⟧}   (17)

which defines the linear operator 𝐿(𝒙⃗, 𝒙⃖). The minus sign ensures that 𝐿 coincides with the derivative 𝒯′ of Equation 12.
Because 𝐹_𝐴′ has a block-diagonal structure, we can compute, for any grid point 𝒔⃗_𝑛 and choice 𝒙⃗_𝑛:

−𝐹_𝐴′(𝒔⃗_𝑛, 𝒙⃗_𝑛; 𝒙⃖)⁻¹ 𝐹_𝐵′(𝒔⃗_𝑛, 𝒙⃗_𝑛; 𝒙⃖).𝒖⃗ = ∑_{(𝑤,𝑠′)∈𝜏(𝒔⃗_𝑛,𝒙⃗_𝑛)} 𝐷(𝒔⃗_𝑛, 𝑠′) 𝒥(𝑠′; 𝒖⃗)   (18)

where 𝐷(𝒔⃗_𝑛, 𝑠′) = −𝑤 𝑓_𝑥′(𝒔⃗_𝑛, 𝒙⃗_𝑛, 𝑠′, 𝑥′)⁻¹ 𝑓_{𝑥′}′(𝒔⃗_𝑛, 𝒙⃗_𝑛, 𝑠′, 𝑥′).

For the algorithms we consider, repeated evaluations of 𝐿.𝒖⃗ at various values of 𝒖⃗ take most of the time. It is then profitable to find ways to increase their efficiency.
In that spirit, we can precompute all the terms appearing in Equation 18, that is

(𝑠′_{𝑛,𝑘}, 𝐷(𝒔⃗_𝑛, 𝑠′_{𝑛,𝑘}))_{𝑛∈⟦1,𝑁⟧, 𝑘∈⟦1,𝐾⟧}   (19)

and store the result in two matrices

𝑆 = (𝑠′_{𝑛,𝑘})_{𝑛∈⟦1,𝑁⟧, 𝑘∈⟦1,𝐾⟧}   (20)

𝐷 = (𝐷(𝒔⃗_𝑛, 𝑠′_{𝑛,𝑘}))_{𝑛∈⟦1,𝑁⟧, 𝑘∈⟦1,𝐾⟧}   (21)

whose elements are 𝑛_𝑠-vectors and 𝑛_𝑥 × 𝑛_𝑥 matrices respectively.
To apply 𝐿 to a vector 𝒖⃗ we then compute, component by component:

(𝐿.𝒖⃗)_𝑛 = ∑_𝑘 𝐷_{𝑛,𝑘} 𝒥(𝑆_{𝑛,𝑘}; 𝒖⃗)   (22)

Note that the linear operator 𝐿(𝒙⃗, 𝒙⃖) can be computed at any two 𝒙⃗, 𝒙⃖. As a special case we have 𝒯′(𝒙⃗) = 𝐿(𝒯(𝒙⃗), 𝒙⃗), which is what is required by the accelerated time iterations. We will show in Section 6 that we can also use 𝐿(⋅, ⋅) to implement the Newton-Raphson algorithm.
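The precomputed representation can be sketched as follows. In this toy Python example (ours, not the paper's Julia code; scalar controls, 𝑛_𝑥 = 1, random stand-ins for 𝑆 and 𝐷, and piecewise-linear interpolation via np.interp), the matrix-free evaluation of Equation 22 is checked against the dense matrix assembled column by column:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 3                         # grid points, future nodes per grid point
grid = np.linspace(0.0, 1.0, N)      # state grid used for interpolation
S = rng.uniform(0.0, 1.0, (N, K))    # precomputed future states S_{n,k}  (Equation 20)
D = rng.uniform(-0.1, 0.1, (N, K))   # precomputed terms D_{n,k}; scalars since n_x = 1  (Equation 21)

def interp(s, u):
    """Piecewise-linear interpolation J(s; u): linear in the data u."""
    return np.interp(s, grid, u)

def apply_L(u):
    """Matrix-free evaluation (L.u)_n = sum_k D_{n,k} J(S_{n,k}; u)  (Equation 22)."""
    return np.array([sum(D[n, k] * interp(S[n, k], u) for k in range(K))
                     for n in range(N)])

# Consistency check: assemble the dense matrix of L column by column
# and compare with the matrix-free product.
L_dense = np.column_stack([apply_L(e) for e in np.eye(N)])
u = rng.normal(size=N)
assert np.allclose(apply_L(u), L_dense @ u)
```

The point of the matrix-free form is that nothing of size 𝑁 × 𝑁 ever needs to be stored: only 𝑆, 𝐷 and the vectors being multiplied.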

5. Inverting (𝐼 − 𝐿)
To implement the acceleration step in Equation 9 we need to compute terms of the form (𝐼 − 𝐿)⁻¹.𝒖⃗.

5.1. Jacobian Matrix
A natural approach would be to assemble (𝐼 − 𝐿) as a (sparse) matrix and use a (sparse) linear algebra solver to get the solution.6
In general the complexity of matrix inversion is greater than 𝑁², and since the Jacobian we consider doesn't exhibit a simple structure, matrix inversion quickly becomes very expensive computationally.

5.2. (Truncated) Neumann Series

A simple solution consists in using the Neumann identity:

(𝐼 − 𝐿)⁻¹.𝒖⃗ = (𝐼 + 𝐿 + 𝐿² + …)𝒖⃗   (23)

Recall that, close to the solution, the spectral radius of 𝒯′, hence of 𝐿, is strictly smaller than one for any well-defined model. For this reason, the infinite sum in Equation 23 converges.
In practice, it can be computed by setting 𝒗⃗_0 = 𝒖⃗ and then computing recursively the terms

𝒗⃗_{𝑝+1} = 𝒖⃗ + 𝐿.𝒗⃗_𝑝   (24)

until |𝒗⃗_{𝑝+1} − 𝒗⃗_𝑝| < 𝜂 for a given 𝜂 > 0.
To speed up the calculations, we can limit the number of terms to a predetermined number 𝑄. This often leads to faster overall convergence than solving the linear system to full precision. We refer to this variant as the optimistic Neumann sum.
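The recursion in Equation 24 is straightforward to implement. Below is a small Python sketch (ours, not the paper's code), with a random contraction standing in for 𝐿; the full sum is checked against a direct solve of (𝐼 − 𝐿)𝒗⃗ = 𝒖⃗:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
L = 0.3 * rng.normal(size=(N, N)) / np.sqrt(N)   # random contraction standing in for L
u = rng.normal(size=N)

def neumann_solve(apply_L, u, eta=1e-12, Q=None, maxit=10_000):
    """Approximate (I - L)^{-1} u through v_{p+1} = u + L v_p  (Equation 24).
    Passing a finite Q truncates the sum: the 'optimistic' variant."""
    v = u.copy()
    for p in range(maxit if Q is None else Q):
        v_new = u + apply_L(v)
        if np.max(np.abs(v_new - v)) < eta:
            return v_new
        v = v_new
    return v

v_full = neumann_solve(lambda v: L @ v, u)
assert np.allclose(v_full, np.linalg.solve(np.eye(N) - L, u))
```

Note that only matrix-vector products with 𝐿 are needed, so the memory footprint is a handful of length-𝑁 vectors.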
5.3. Krylov Projection / GMRES

Instead of the optimistic truncation heuristic, we can consider the sum

𝒘⃗_𝑄 = 𝑐_0𝒖⃗ + 𝑐_1𝐿.𝒖⃗ + … + 𝑐_𝑄𝐿^𝑄.𝒖⃗   (25)

defined for scalar coefficients 𝑐 = (𝑐_0, …, 𝑐_𝑄), and look for the coefficients that minimize

‖(𝐼 − 𝐿)𝒘⃗_𝑄 − 𝒖⃗‖   (26)

When ‖⋅‖ is a Hilbert norm associated with a scalar product defined over 𝒳, the result of the minimization problem is precisely the orthogonal projection of 𝒖⃗ over the Krylov subspace spanned by iterates of 𝐿:

𝒦(𝒖⃗, 𝐿, 𝑄) = span(𝒖⃗, 𝐿𝒖⃗, …, 𝐿^𝑄𝒖⃗)   (27)

This relates to the famous GMRES algorithm, which can be used to solve the linear system (𝐼 − 𝐿)𝒗⃗ = 𝒖⃗ in 𝒗⃗.
This algorithm works by solving the same minimization problem as in Equation 26, on the Krylov subspace 𝒦(𝒖⃗ − (𝐼 − 𝐿)𝒖⃗_0, 𝐼 − 𝐿, 𝑄) for any given initial guess 𝒖⃗_0. The minimization is done recursively for all 𝑄′ ≤ 𝑄 and is made numerically stable by computing an orthonormal basis for the Krylov subspace.
For the choice 𝒖⃗_0 = 0 this Krylov space exactly corresponds to 𝒦(𝒖⃗, 𝐿, 𝑄), so that in theory the GMRES algorithm produces the same result as the minimization problem from Equation 26 for any given 𝑄.
Several libraries implement a GMRES function, providing us with a more precise estimate of the inverse than the Neumann sum. It comes at the cost of higher computational complexity and memory footprint, since all basis vectors must be computed and stored. In practice, this limits the value of 𝑄 that can be used.7

6 For the model FI described below, solving Equation 23 with a sparse Jacobian takes 40s vs 253ms for our fastest method.
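A minimal version of the procedure just described, Arnoldi orthonormalization followed by a small least-squares problem, fits in a few lines. This Python sketch is our illustration (the paper itself uses a library GMRES); it only accesses the operator through matrix-vector products:

```python
import numpy as np

def gmres_solve(apply_A, b, Q=30):
    """Minimal GMRES sketch, starting from x0 = 0: build an orthonormal
    Arnoldi basis of the Krylov subspace spanned by b, A b, ..., A^{Q-1} b,
    then minimize ||A w - b|| over that subspace by least squares."""
    n = len(b)
    V = np.zeros((n, Q + 1))            # orthonormal basis vectors
    H = np.zeros((Q + 1, Q))            # Hessenberg matrix: A V[:, :Q] = V H
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for q in range(Q):
        w = apply_A(V[:, q])
        for j in range(q + 1):          # modified Gram-Schmidt orthogonalization
            H[j, q] = V[:, j] @ w
            w = w - H[j, q] * V[:, j]
        H[q + 1, q] = np.linalg.norm(w)
        if H[q + 1, q] < 1e-14:         # lucky breakdown: subspace is invariant
            break
        V[:, q + 1] = w / H[q + 1, q]
    # Least-squares problem in the small basis: min_y || H y - beta e1 ||.
    e1 = np.zeros(Q + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return V[:, :Q] @ y

rng = np.random.default_rng(2)
N = 40
L = 0.3 * rng.normal(size=(N, N)) / np.sqrt(N)   # contraction standing in for L
u = rng.normal(size=N)
v = gmres_solve(lambda x: x - L @ x, u)          # solve (I - L) v = u matrix-free
assert np.allclose(v, np.linalg.solve(np.eye(N) - L, u), atol=1e-6)
```

The memory cost visible here, storing all 𝑄 + 1 basis vectors 𝑉, is exactly the footprint issue mentioned above.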
6. Newton-Raphson Method
There is a more direct alternative to the (accelerated) time-iteration algorithm exposed in Section 3. It consists in solving directly the equation 𝑮(𝒙⃗) = 0 using a nonlinear solution algorithm. Since we know how to compute all derivatives of the system, a natural candidate is the Newton-Raphson scheme.
Starting with an initial guess 𝒙⃗_𝑘, we can compute the residuals 𝒓⃗_𝑘 = 𝑮(𝒙⃗_𝑘). The Newton formula provides the new estimate:

𝒙⃗_{𝑘+1} = 𝒙⃗_𝑘 − 𝑮′(𝒙⃗_𝑘)⁻¹𝑮(𝒙⃗_𝑘)   (28)

where 𝜹⃗_𝑘 = 𝑮′(𝒙⃗_𝑘)⁻¹𝑮(𝒙⃗_𝑘) is the Newton step. From Equation 4 we know:

𝑮′(𝒙⃗_𝑘) = 𝑭_𝐴′(𝒙⃗_𝑘, 𝒙⃗_𝑘) + 𝑭_𝐵′(𝒙⃗_𝑘, 𝒙⃗_𝑘)   (29)

and we can therefore write

𝑮′(𝒙⃗_𝑘)⁻¹ = (𝐼 − 𝐿(𝒙⃗_𝑘, 𝒙⃗_𝑘))⁻¹ 𝑭_𝐴′(𝒙⃗_𝑘, 𝒙⃗_𝑘)⁻¹   (30)

7 We use the gmres function from the Krylov.jl library. In our applications, we don't find any computational advantage in restarting iterations (replacing 𝒖⃗ by the solution at order 𝑄) as long as memory is sufficient to contain the full basis.

according to the definition of 𝐿 in Section 4.
The Newton step can then be computed efficiently as:

𝜹⃗_𝑘 = (𝐼 − 𝐿(𝒙⃗_𝑘, 𝒙⃗_𝑘))⁻¹ 𝑭_𝐴′(𝒙⃗_𝑘, 𝒙⃗_𝑘)⁻¹ 𝒓⃗_𝑘   (31)

When the initial guess is close enough to the solution, the Newton algorithm converges at a quadratic rate. In particular, 𝐿(𝒙⃗_𝑘, 𝒙⃗_𝑘) is close to 𝒯′(𝒙) and has a spectral radius smaller than 1, implying that the infinite sum in Equation 23 converges and that all inversion methods from Section 5 can be used.
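The factorized Newton step can be checked numerically. In the Python sketch below (ours), random matrices stand in for the blocks 𝑭_𝐴′ and 𝑭_𝐵′, and the step from Equation 31, with 𝐿 = −𝑭_𝐴′⁻¹𝑭_𝐵′ as in Equation 12, is compared with a direct inversion of 𝑮′ = 𝑭_𝐴′ + 𝑭_𝐵′:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 25
# Random stand-ins for the two derivative blocks: F_A well conditioned,
# F_B small enough that L = -F_A^{-1} F_B is a contraction.
F_A = np.eye(N) + 0.1 * rng.normal(size=(N, N))
F_B = 0.2 * rng.normal(size=(N, N)) / np.sqrt(N)
r = rng.normal(size=N)

L = -np.linalg.solve(F_A, F_B)       # L = -F_A^{-1} F_B, as in Equation 12

# Newton step two ways: direct inversion of G' = F_A + F_B (Equations 28-29)
# versus the factorized form (I - L)^{-1} F_A^{-1} r (Equations 30-31).
delta_direct = np.linalg.solve(F_A + F_B, r)
delta_factored = np.linalg.solve(np.eye(N) - L, np.linalg.solve(F_A, r))
assert np.allclose(delta_direct, delta_factored)
```

The algebra behind the check: 𝐼 − 𝐿 = 𝑭_𝐴′⁻¹(𝑭_𝐴′ + 𝑭_𝐵′), so (𝐼 − 𝐿)⁻¹𝑭_𝐴′⁻¹ = (𝑭_𝐴′ + 𝑭_𝐵′)⁻¹, i.e. Equation 30.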

                         CS       RBC     FI
N. of continuous states  1        2       3
N. of grid points        3 × 50   50²     9 × 50³
Initial Iterations       0        25      5
Interpolation Type       Linear   Cubic   Cubic
N. of CPUs               1        8       8

Table 1: Summary of the three models

7. Timings
We benchmark the methods we have described on three simple models (cf. Table 1) that can be solved using the time-iteration method. The first model is a simple infinite-horizon consumption-savings model with a borrowing constraint (CS). The second one is a standard Real Business Cycle model (RBC). The third is a two-country neoclassical economy taken from Coeurdacier, Rey, and Winant (2020), with Epstein-Zin preferences, capital adjustment costs and a riskless bond (FI). Appendix A provides the precise description of all models, as well as the procedure to discretize them in order to comply with the specification from Section 2.1.
Each model has a very simple initial guess for the controls. When that guess is too far from the solution, the accelerated time iterations and/or the Newton-Raphson algorithm can fail. For this reason, we perform a fixed number of initial time-iteration steps, and use the result as a refined guess to benchmark all algorithms against each other. The number of initial iterations is reported with other model statistics in Table 1.
All algorithms are coded using the Julia language and run on an Intel Core i7-9700 processor. They all use the same elementary functions so that we can reliably comment on algorithmic gains.8
Table 2 reports the running time for the unmodified time-iteration algorithm (Section 3.1), for accelerated time iteration (Section 3.2) and Newton-Raphson (Section 6)9. For the latter two, we compare three methods to invert the Jacobian: by computing the Neumann sum until convergence (Full), by truncating this sum at 𝑄 = 50 terms (Optimistic), or by using the GMRES algorithm (GMRES).

8 The implementation extensively uses Julia's ability to differentiate any code and just-in-time compile the result. Since the language allows us to define arrays of arbitrary types and efficient iterators, the memory layout and the model's computational representation closely mimic the objects described in the algorithmic section. Code is available on request and will be on GitHub when/if the paper is published.
9 All algorithms have the same termination criterion: they terminate when Equation 4 is solved up to accuracy 𝜀 = 10⁻⁸ for the supremum norm.

                             CS        RBC      FI
Time Iteration               121ms     1230ms   56.4min
Accelerated Time Iteration
  Full                       66.1ms    682ms    10.1min
  Optimistic                 32.9ms    246ms    5.88min
  GMRES                      28.4ms    214ms    4.8min
Newton-Raphson
  Full                       85.5ms    644ms    9.72min
  Optimistic                 34.2ms    202ms    4.20min
  GMRES                      28.2ms    164ms    3.53min

Table 2: Timing Comparison

                      𝐹      𝐹_𝐴′   𝐹_𝐵′   𝐿.𝒖
Time Iteration        1590   463    -      -
A.T.I. (full)         33     16     5      1500
A.T.I. (optimistic)   45     22     8      400
A.T.I. (gmres)        33     15     8      246

Table 3: Number of Function Calls for Model FI

With the Neumann Jacobian-inversion method, both Accelerated Time Iterations and Newton-Raphson provide significant performance gains over the reference Time Iteration algorithm. We measure speedups of around ×2, ×4 and ×5 for the CS, RBC and FI models respectively. Similar timings between these two methods echo the fact that Accelerated Time Iteration is essentially a Newton scheme applied to find the fixed point of the time-iteration operator.
Table 3 shows a breakdown of the main function calls for the FI model. Performance gains stem from a drastic reduction in the number of model evaluations10. In the Full accelerated algorithm, 𝐹 (resp. 𝐹_𝐴′) is evaluated 33 times (resp. 16) instead of 1590 times (resp. 463 times) for the non-accelerated version. Instead, there are repeated calls to the cheaper linear operator 𝐿 to approximate the actual time iterations.
Limiting the number of evaluations of 𝐿, at the cost of a more imprecise Jacobian inversion, also clearly improves running times, with another ×2 speedup for the Optimistic variant. Surprisingly, we get similar performance from the more theoretically satisfying GMRES algorithm, even though the latter calls 𝐿 less often and produces a more accurate Jacobian inversion.11

8. Conclusion
The life of a computational economist is plagued by a very annoying problem: a realistic nonlinear rational expectations model can take a long time to solve with the commonly used time-iteration algorithm.
In this paper we have proposed a method to accelerate convergence with a scheme that is reminiscent of the Howard improvement method, whereby policy improvement steps (here the time iterations) alternate with policy evaluation steps (here based on the gradient of the time iterations).
We have shown how to implement it efficiently by computing explicitly all derivative operators and discussed several ways to invert the needed Jacobians.

Practical results are encouraging: quadratic convergence leads to very sizable gains. In the biggest model we consider, the fastest method offers a ×10 speedup over the benchmark algorithm. In fact, in line with the theory, our algorithm performs similarly to a global Newton-Raphson algorithm applied to the model equations.

10 Detailed timing results, with a breakdown between important functions and information on memory footprint, can be found in Section C.
11 This surprising result might just result from the higher memory footprint of the library we use for GMRES compared to our simple implementation of Neumann sums.
One should be careful not to overgeneralize the timings we present. While we have been careful to make a fair comparison between the various algorithms, we can't guarantee our implementation is the fastest possible. In fact, there are almost surely performance gains that can be obtained by improving memory allocation and code vectorization. A nice feature of our family of algorithms is that they spend most of their time inverting an abstract linear operator that has a simple memory representation. To provide even higher speed-ups, one can thus focus all efforts on improving the efficiency of this particular operation, for instance by using GPU acceleration.
Another limitation is that we have focused on asymptotic speed, assuming the initial guess was already good enough for all algorithms. But one could also be interested in knowing which algorithms converge for any reasonably uninformed initial guess. An easy upgrade would consist in implementing a line search so that the acceleration step is always improving.12
Finally, the computational representation of the model derivatives can be used for other purposes. For instance, in ongoing work, we use it to study the sensitivity of the solution to the model parameters, as well as to study the spectral properties of the time-iteration algorithm.

12 This can be done by computing 𝜹⃗_{𝑘+1} = (𝐼 − 𝒯′(𝒙⃗_𝑘))⁻¹(𝒙̃_{𝑘+1} − 𝒙⃗_𝑘) in Equation 9 and choosing 𝜆 so that 𝒙⃗_{𝑘+1} = 𝒙⃗_𝑘 + 𝜆𝜹⃗_{𝑘+1} minimizes ‖𝑮(𝒙⃗_{𝑘+1})‖.
Bibliography

Azinovic, Marlon, Luca Gaegauf, and Simon Scheidegger. 2022. “Deep Equilibrium Nets”. International Economic Review 63 (4): 1471–1525. https://doi.org/10.1111/iere.12575.
Brumm, Johannes, and Simon Scheidegger. 2017. “Using Adaptive Sparse Grids to Solve High-Dimensional Dynamic Models”. Rochester, NY. May 22, 2017. https://doi.org/10.2139/ssrn.2349281.
Carroll, Christopher D. 2006. “The Method of Endogenous Gridpoints for Solving Dynamic Stochastic Optimization Problems”. Economics Letters 91 (3): 312–20. https://doi.org/10.1016/j.econlet.2005.09.013.
Coeurdacier, Nicolas, Hélène Rey, and Pablo Winant. 2020. “Financial Integration and Growth in a Risky World”. Journal of Monetary Economics 112 (June): 1–21. https://doi.org/10.1016/j.jmoneco.2019.01.022.
Coleman, Wilbur John. 1990. “Solving the Stochastic Growth Model by Policy-Function Iteration”. Journal of Business & Economic Statistics 8 (1): 27–29. https://doi.org/10.2307/1391745.
Deaton, Angus, and Guy Laroque. 1992. “On the Behaviour of Commodity Prices”. The Review of Economic Studies 59 (1): 1–23. https://doi.org/10.2307/2297923.
Fernández-Villaverde, Jesús, Grey Gordon, Pablo Guerrón-Quintana, and Juan F. Rubio-Ramírez. 2015. “Nonlinear Adventures at the Zero Lower Bound”. Journal of Economic Dynamics and Control 57 (August): 182–204. https://doi.org/10.1016/j.jedc.2015.05.014.
Haan, Wouter J. den, and Albert Marcet. 1990. “Solving the Stochastic Growth Model by Parameterizing Expectations”. Journal of Business & Economic Statistics 8 (1): 31–34. https://doi.org/10.2307/1391746.
Judd, Kenneth L. 1998. Numerical Methods in Economics. Vol. 1. MIT Press Books. The MIT Press. https://ideas.repec.org/b/mtp/titles/0262100711.html.
Judd, Kenneth L., Lilia Maliar, Serguei Maliar, and Inna Tsener. 2017. “How to Solve Dynamic Stochastic Models Computing Expectations Just Once”. Quantitative Economics 8 (3): 851–93. https://doi.org/10.3982/QE329.
Judd, Kenneth L., Lilia Maliar, Serguei Maliar, and Rafael Valero. 2014. “Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain”. Journal of Economic Dynamics and Control 44 (July): 92–123. https://doi.org/10.1016/j.jedc.2014.03.003.
Maliar, Lilia, Serguei Maliar, and Pablo Winant. 2021. “Deep Learning for Solving Dynamic Economic Models”. Journal of Monetary Economics 122 (September): 76–101. https://doi.org/10.1016/j.jmoneco.2021.07.004.
Maliar, Serguei, Lilia Maliar, and Kenneth Judd. 2011. “Solving the Multi-Country Real Business Cycle Model Using Ergodic Set Methods”. Journal of Economic Dynamics and Control 35 (2): 207–28. https://doi.org/10.1016/j.jedc.2010.09.014.
Puterman, Martin L., and Shelby L. Brumelle. 1979. “On the Convergence of Policy Iteration in Stationary Dynamic Programming”. Mathematics of Operations Research 4 (1): 60–69. https://www.jstor.org/stable/3689239.
Rouwenhorst, K. Geert. 1995. “Asset Pricing Implications of Equilibrium Business Cycle Models”. In Frontiers of Business Cycle Research, edited by Thomas F. Cooley. Princeton University Press. https://doi.org/10.2307/j.ctv14163jx.16.
Rust, John. 1996. “Chapter 14: Numerical Dynamic Programming in Economics”. Handbook of Computational Economics. Elsevier. https://doi.org/10.1016/S1574-0021(96)01016-7.
Saad, Youcef, and Martin H. Schultz. 1986. “GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems”. SIAM Journal on Scientific and Statistical Computing 7 (3): 856–69. https://doi.org/10.1137/0907058.

A. Models
This appendix provides a detailed description of the three models used in the main text. This should be enough to replicate the results without any ambiguity. For economic justifications we refer the readers to the source papers.

1.1. Consumption Savings (CS)

The specification of the consumption-savings model is the same as in Maliar, Maliar, and Winant (2021). The calibration is provided in Table 4.
The vector of states is 𝑠_𝑡 = (𝑦_𝑡, 𝑤_𝑡) where 𝑦_𝑡 is log-income and 𝑤_𝑡 is available income.
The vector of controls to be determined in any state is 𝑥_𝑡 = (𝑐_𝑡, ℎ_𝑡) where 𝑐_𝑡 denotes consumption and ℎ_𝑡 is a budget multiplier. It is strictly positive if 𝑐_𝑡 = 𝑤_𝑡 and zero when 𝑐_𝑡 < 𝑤_𝑡.

Parameter                          Value
Time-discount                      𝛽 = 0.96
Risk-aversion                      𝛾 = 4.0
Risk-free interest rate            𝑟 = 1.02
Income Process (stdev)             𝜎 = 0.1
Income Process (autocorrelation)   𝜌 = 0.0

Table 4: Calibration (CS)

1.1.1. State transitions


Income 𝑦𝑡 follows an exogenous AR1 process with autocorrelation 𝜌 and standard deviation 𝜎.
Available income evolves according to the rule:

𝑤𝑡 = exp(𝑦𝑡 ) + (𝑤𝑡−1 − 𝑐𝑡−1 )𝑟 (32)
1.1.2. Optimality conditions
Optimal consumption is determined by the Euler condition

𝛽𝔼𝑡[(𝑐𝑡+1/𝑐𝑡)^(−𝛾) 𝑟] − 1 + ℎ𝑡 = 0 (33)

with multiplier ℎ𝑡 satisfying the condition

FB(ℎ𝑡, 𝑤𝑡 − 𝑐𝑡) = 0 (34)


where FB(𝑎, 𝑏) = 𝑎 + 𝑏 − √(𝑎² + 𝑏²) is the Fischer-Burmeister function.
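As an illustration, here is a minimal Python sketch (not part of the paper's code) of the Fischer-Burmeister function and the complementarity it encodes between the multiplier ℎ𝑡 and the slack 𝑤𝑡 − 𝑐𝑡:

```python
import math

def fb(a, b):
    # Fischer-Burmeister function: FB(a, b) = a + b - sqrt(a^2 + b^2)
    return a + b - math.sqrt(a * a + b * b)

# FB(a, b) = 0 exactly when a >= 0, b >= 0 and a * b = 0, which is why
# this single smooth equation can replace the Kuhn-Tucker conditions:
print(fb(0.0, 1.3))   # zero multiplier, positive slack: condition holds
print(fb(2.0, 0.0))   # positive multiplier, binding constraint: holds
print(fb(1.0, 1.0))   # both strictly positive: FB > 0, condition violated
```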

1.1.3. Approximation
The process (𝑦𝑡) is discretized as a 3-state Markov chain, using Rouwenhorst's algorithm (Rouwenhorst 1995). This yields an exogenous state-space 𝒮^exo = (𝒔⃗𝑛)_(𝑛=1:𝑁^exo) with 𝑁^exo = 3 discrete exogenous values and the corresponding transition matrix 𝑃 ∈ ℝ^(𝑁^exo × 𝑁^exo).
The endogenous state-space is 𝒮^endo = [0.5, 4]. It is discretized with 𝑁^endo = 200 linearly spaced points.
Finally the discretized state-space of the model is defined as the cartesian product of both exogenous and endogenous state-spaces 𝒔⃗ = 𝒔⃗^exo × 𝒔⃗^endo = (𝒔⃗𝑝^exo, 𝒔⃗𝑞^endo)_(𝑝=1:𝑁^exo, 𝑞=1:𝑁^endo). With linear indexing we can also write it as (𝒔⃗𝑛)_(𝑛=1:𝑁) where 𝑁 = 𝑁^exo 𝑁^endo = 600 is the number of grid points.
The initial guess 𝒙⃗⁰ = (𝒙⃗𝑛⁰)_(𝑛=1:𝑁) = ((𝑐𝑛, ℎ𝑛))_(𝑛=1:𝑁) on the grid satisfies the relation

𝑐𝑛 = min(0.95𝑤𝑛, 1.0 + 0.04(𝑤𝑛 − 1.0)),
ℎ𝑛 = 0.05 (35)

Future controls between grid points are interpolated using linear interpolation.
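For reference, the Rouwenhorst construction used above can be sketched in a few lines of self-contained Python (a plain transcription of the standard algorithm, not the paper's implementation):

```python
def rouwenhorst(N, rho, sigma):
    """Discretize the AR1 y' = rho * y + eps, eps ~ N(0, sigma^2),
    as an N-state Markov chain (Rouwenhorst 1995), N >= 2."""
    p = (1.0 + rho) / 2.0
    # recursive construction of the transition matrix, starting from N = 2
    P = [[p, 1.0 - p], [1.0 - p, p]]
    for n in range(3, N + 1):
        Pn = [[0.0] * n for _ in range(n)]
        for i in range(n - 1):
            for j in range(n - 1):
                Pn[i][j]         += p * P[i][j]
                Pn[i][j + 1]     += (1.0 - p) * P[i][j]
                Pn[i + 1][j]     += (1.0 - p) * P[i][j]
                Pn[i + 1][j + 1] += p * P[i][j]
        # interior rows are double-counted and must be halved
        for i in range(1, n - 1):
            Pn[i] = [x / 2.0 for x in Pn[i]]
        P = Pn
    # evenly spaced nodes spanning +/- sqrt(N-1) unconditional stdevs
    sigma_y = sigma / (1.0 - rho ** 2) ** 0.5
    psi = sigma_y * (N - 1) ** 0.5
    nodes = [-psi + 2.0 * psi * i / (N - 1) for i in range(N)]
    return nodes, P

# CS calibration: 3 states, rho = 0.0, sigma = 0.1
nodes, P = rouwenhorst(3, 0.0, 0.1)
```

With ρ = 0 every row of P reduces to the binomial weights (0.25, 0.5, 0.25), so the chain is i.i.d., as expected.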

1.2. Real Business Cycle (RBC)

The model is taken straight from Dynare’s example files.
The vector of states is 𝑠𝑡 = (𝑘𝑡, 𝑎𝑡) where 𝑘𝑡 is the capital level and 𝑎𝑡 is the productivity level.
We treat all other variables as controls even though some of them could be more efficiently computed in closed form.
The vector of controls is thus:
• 𝑖𝑡: investment
• 𝑛𝑡: labor level
• 𝑤𝑡: wage
• 𝑐𝑡: consumption
• 𝑟𝑘,𝑡: return on capital
The only exogenous shock is the productivity innovation 𝑧𝑡. It is i.i.d. and follows a normal law with standard deviation 𝜎𝑧.
The parameters are given in Table 5 and the steady-state values in Table 6.

Parameter                    Value
Time discount                𝛽 = 0.99
Capital depreciation         𝛿 = 0.025
Capital share                𝛼 = 0.33
E.I.S. (1/𝜎)                 𝜎 = 2.0
Labor preference weight      𝜒 = 𝑤/(𝑐^𝜎 𝑛^𝜂)
Frisch elasticity            𝜂 = 1.0
Productivity (autocorr.)     𝜌 = 0.8
Productivity (stdev.)        𝜎𝑧 = 0.0012

Table 5: Calibration (RBC)

1.2.1. Equations
The transitions of the states are given by:

𝑘𝑡 = (1 − 𝛿)𝑘𝑡−1 + 𝑖𝑡−1
𝑎𝑡 = 𝜌𝑎𝑡−1 + 𝑧𝑡 (36)

Optimality conditions are:

1 = 𝔼𝑡[𝛽(𝑐𝑡/𝑐𝑡+1)^𝜎 (1 − 𝛿 + 𝑟𝑘,𝑡+1)]
𝑤𝑡 = 𝜒𝑛𝑡^𝜂 𝑐𝑡^𝜎
𝑐𝑡 = exp(𝑎𝑡)𝑘𝑡^𝛼 𝑛𝑡^(1−𝛼) − 𝑖𝑡
𝑟𝑘,𝑡 = 𝛼 exp(𝑎𝑡)(𝑛𝑡/𝑘𝑡)^(1−𝛼)
𝑤𝑡 = (1 − 𝛼) exp(𝑎𝑡)(𝑘𝑡/𝑛𝑡)^𝛼 (37)

Variable        Value
Labor           𝑛 = 0.33
Productivity    𝑧 = 1.0
Capital         𝑘 = 𝑛 / (𝑟𝑘/𝛼)^(1/(1−𝛼))
Investment      𝑖 = 𝛿𝑘
Consumption     𝑐 = 𝑧𝑘^𝛼 𝑛^(1−𝛼) − 𝑖

Table 6: Steady-state Values

1.2.2. Discretization
The state-space is defined as 𝒮 = [0.5𝑘, 2𝑘] × [−0.1, 0.1]. It is then discretized as a cartesian grid 𝒔⃗ = (𝒔⃗𝑛)_(𝑛=1:𝑁) using 50 points in each dimension. We have 𝑁 = 2500 points.
The initial guess on the grid is equal to steady-state values 𝒙⃗ = (𝒙⃗𝑛)_(𝑛=1:𝑁) = ((𝑖, 𝑛, 𝑤, 𝑐, 𝑟𝑘))_(𝑛=1:𝑁).
We discretize the exogenous shock 𝑧 using 𝐾 = 7 Gauss-Hermite nodes (𝑧𝑘)_(𝑘=1:𝐾) with associated weights (𝑤𝑘)_(𝑘=1:𝐾).
For a given state 𝒔⃗𝑛 and a corresponding control, the transition function 𝜏 is then:

𝜏(𝒔⃗𝑛, 𝒙⃗𝑛) = (𝑤𝑘, 𝑔(𝒔⃗𝑛, 𝒙⃗𝑛, 𝑧𝑘))_(𝑘=1:𝐾) (38)

Future controls are approximated using a cubic spline approximation.
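The Gauss-Hermite rule for the normal shock can be sketched as follows (a generic Python illustration using numpy; the function name is ours, not from the paper's code):

```python
import numpy as np

# Gauss-Hermite quadrature approximates  ∫ e^{-x^2} f(x) dx ≈ Σ om_k f(x_k).
# For z ~ N(0, sigma^2), the change of variable z = sqrt(2) * sigma * x gives
# E[f(z)] ≈ Σ (om_k / sqrt(pi)) f(sqrt(2) * sigma * x_k).
def gauss_hermite_normal(K, sigma):
    x, om = np.polynomial.hermite.hermgauss(K)
    z = np.sqrt(2.0) * sigma * x          # quadrature nodes for the shock
    w = om / np.sqrt(np.pi)               # weights summing to one
    return z, w

# K = 7 nodes as in the RBC discretization above
z, w = gauss_hermite_normal(7, 0.0012)
```

The weights sum to one, and moments of the shock up to degree 2K − 1 are integrated exactly.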



1.3. Financial Integration (FI)

The model is taken from Coeurdacier, Rey, and Winant (2020).

1.3.1. Definitions
The vector of states contains 5 variables 𝑠𝑡 = (𝐴1,𝑡, 𝐴2,𝑡, 𝑘1,𝑡, 𝑘2,𝑡, 𝑏𝑓,𝑡):
• vector of exogenous states 𝑠𝑡^exo = (𝐴1,𝑡, 𝐴2,𝑡)
‣ 𝐴1,𝑡, 𝐴2,𝑡: productivity levels in both countries
• vector of endogenous states 𝑠𝑡^endo = (𝑘1,𝑡, 𝑘2,𝑡, 𝑏𝑓,𝑡)
‣ 𝑘1,𝑡, 𝑘2,𝑡: capital levels in each country
‣ 𝑏𝑓,𝑡: net position of country 1

The vector of controls contains 8 variables 𝑥𝑡 = (𝑏′𝑓,𝑡, 𝑝𝑓,𝑡, 𝑖1,𝑡, 𝑖2,𝑡, 𝑤̃1,𝑡, 𝑤̃2,𝑡, 𝑤1,𝑡, 𝑤2,𝑡) where:
• 𝑏′𝑓,𝑡 is the net position chosen in period 𝑡 for period 𝑡 + 1
• 𝑝𝑓,𝑡 is the price of a two-period bond yielding in the next period
• 𝑖1,𝑡 and 𝑖2,𝑡 are investment in each country
• 𝑤1,𝑡 and 𝑤2,𝑡 are the value functions of each country
• 𝑤̃1,𝑡 and 𝑤̃2,𝑡 are expectation functions appearing in the Epstein-Zin definition of value13
Parameters and steady-state values are reported in Table 7 and Table 8.

Parameter                 Value
Time-discount             𝛽 = 0.96
Capital-share             𝜃 = 0.3
Depreciation rate         𝛿 = 0.08
Risk aversion             𝛾 = 4.0
1/EIS                     𝜓 = 4.0
Risk adjustment (1)       𝜉 = 0.2
Risk adjustment (2)       𝑎2 = 𝛿^𝜉
Risk adjustment (3)       𝑎1 = 𝛿 − (𝑎2/(1−𝜉)) 𝛿^(1−𝜉)
Tfp persistence           𝜌𝐴 = 0.9
Tfp volatility            𝜎𝐴1 = 0.025

Table 7: Calibration (FI)

Steady-state              Value (country 𝑗 = 1)
Debt position             𝑏′𝑓 = 𝑏𝑓 = 0
Price of bond             𝑝𝑓 = 𝛽
Investment                𝑖1 = ((1/𝛽 − (1−𝛿))/𝜃)^(1/(𝜃−1)) 𝛿
Value                     𝑤1 = 𝑐1
Expected value            𝑤̃1 = 𝑐1
Productivity              𝐴1 = 0
Capital                   𝑘1 = 𝑖1/𝛿
Production                𝑦1 = 𝑘1^𝜃
Consumption               𝑐1 = 𝑦1 − 𝛿𝑘1
Capital adjustment        Φ1 = 𝑎1 + (𝑎2/(1−𝜉))(𝑖1/𝑘1)^(1−𝜉)
Capital adjustment (diff) Φ′1 = 𝑎2(𝑖1/𝑘1)^(−𝜉)

Table 8: Steady-state (FI)


As a function of states and controls at date 𝑡, we define several auxiliary variables at the same date. They can be directly calculated and substituted in all other equations. We report them in Table 9.

13 The Epstein-Zin value in country 𝑖 satisfies the recursive equation 𝑤𝑖,𝑡 = ((1 − 𝛽)(𝑐𝑖,𝑡)^(1−𝜓) + 𝛽(𝑤̃𝑖,𝑡)^(1−𝜓))^(1/(1−𝜓)) where 𝑤̃𝑖,𝑡 = (𝔼𝑡[(𝑤𝑖,𝑡+1)^(1−𝛾)])^(1/(1−𝛾)).

                                  Country 1                                      Country 2
Production                        𝑦1,𝑡 = (𝑘1,𝑡)^𝜃 exp(𝐴1,𝑡)                      𝑦2,𝑡 = (𝑘2,𝑡)^𝜃 exp(𝐴2,𝑡)
Budget constraint                 𝑐1,𝑡 = 𝑦1,𝑡 − 𝑖1,𝑡 + (𝑏𝑓,𝑡 − 𝑝𝑓,𝑡 𝑏′𝑓,𝑡)        𝑐2,𝑡 = 𝑦2,𝑡 − 𝑖2,𝑡 − (𝑏𝑓,𝑡 − 𝑝𝑓,𝑡 𝑏′𝑓,𝑡)
Investment friction               Φ1,𝑡 = 𝑎1 + (𝑎2/(1−𝜉))(𝑖1,𝑡/𝑘1,𝑡)^(1−𝜉)        Φ2,𝑡 = 𝑎1 + (𝑎2/(1−𝜉))(𝑖2,𝑡/𝑘2,𝑡)^(1−𝜉)
Investment friction (derivative)  Φ′1,𝑡 = 𝑎2(𝑖1,𝑡/𝑘1,𝑡)^(−𝜉)                     Φ′2,𝑡 = 𝑎2(𝑖2,𝑡/𝑘2,𝑡)^(−𝜉)

Table 9: Auxiliary Variables (FI)

1.3.2. Transition of the states
Exogenous states 𝐴1,𝑡 and 𝐴2,𝑡 both follow an AR1 with autocorrelation 𝜌𝐴. They have conditional standard deviations 𝜎𝐴,1 = 0.025 and 𝜎𝐴,2 = 0.05 respectively.

Endogenous states follow the transitions:

𝑘1,𝑡 = ((1 − 𝛿) + Φ1,𝑡−1)𝑘1,𝑡−1
𝑘2,𝑡 = ((1 − 𝛿) + Φ2,𝑡−1)𝑘2,𝑡−1 (39)
𝑏𝑓,𝑡 = 𝑏′𝑓,𝑡−1

This defines a function 𝑔 such that 𝑠𝑡^endo = 𝑔(𝑠𝑡−1^exo, 𝑠𝑡−1^endo, 𝑠𝑡^exo).

1.3.3. Optimality conditions

The optimality conditions are given by a function 𝑓 such that 𝔼𝑡[𝑓(𝑠𝑡, 𝑥𝑡, 𝑠𝑡+1, 𝑥𝑡+1)] = 0, where the components of the residual function 𝑓, denoted by (𝑓1, 𝑓2, 𝑓3, 𝑓4, 𝑓5, 𝑓6, 𝑓7, 𝑓8), are:


𝑓1 = 𝛽(𝑤1,𝑡+1/𝑤̃1,𝑡)^(𝜓−𝛾) (𝑐1,𝑡+1/𝑐1,𝑡)^(−𝜓) − 𝛽(𝑤2,𝑡+1/𝑤̃2,𝑡)^(𝜓−𝛾) (𝑐2,𝑡+1/𝑐2,𝑡)^(−𝜓)

𝑓2 = (𝛽/2)(𝑤1,𝑡+1/𝑤̃1,𝑡)^(𝜓−𝛾) (𝑐1,𝑡+1/𝑐1,𝑡)^(−𝜓) + (𝛽/2)(𝑤2,𝑡+1/𝑤̃2,𝑡)^(𝜓−𝛾) (𝑐2,𝑡+1/𝑐2,𝑡)^(−𝜓) − 𝑝𝑓,𝑡

𝑓3 = 𝛽(𝑤1,𝑡+1/𝑤̃1,𝑡)^(𝜓−𝛾) (𝑐1,𝑡+1/𝑐1,𝑡)^(−𝜓) (𝜃(𝑦1,𝑡+1/𝑘1,𝑡+1)Φ′1,𝑡 + (Φ′1,𝑡/Φ′1,𝑡+1)((1 − 𝛿) + Φ1,𝑡+1 − (𝑖1,𝑡+1/𝑘1,𝑡+1)Φ′1,𝑡+1)) − 1

𝑓4 = 𝛽(𝑤2,𝑡+1/𝑤̃2,𝑡)^(𝜓−𝛾) (𝑐2,𝑡+1/𝑐2,𝑡)^(−𝜓) (𝜃(𝑦2,𝑡+1/𝑘2,𝑡+1)Φ′2,𝑡 + (Φ′2,𝑡/Φ′2,𝑡+1)((1 − 𝛿) + Φ2,𝑡+1 − (𝑖2,𝑡+1/𝑘2,𝑡+1)Φ′2,𝑡+1)) − 1

𝑓5 = (𝑤1,𝑡+1/𝑤̃1,𝑡)^(1−𝛾) − 1

𝑓6 = (𝑤2,𝑡+1/𝑤̃2,𝑡)^(1−𝛾) − 1 (40)

𝑓7 = 𝛽 + (1 − 𝛽)(𝑐1,𝑡/𝑤̃1,𝑡)^(1−𝜓) − (𝑤1,𝑡/𝑤̃1,𝑡)^(1−𝜓)

𝑓8 = 𝛽 + (1 − 𝛽)(𝑐2,𝑡/𝑤̃2,𝑡)^(1−𝜓) − (𝑤2,𝑡/𝑤̃2,𝑡)^(1−𝜓)

1.3.4. Discretization

The two independent exogenous AR1s 𝐴1,𝑡 and 𝐴2,𝑡 are both discretized as 3-state Markov chains, using Rouwenhorst's algorithm. This yields an exogenous state-space 𝒮^exo = (𝒔⃗𝑛^exo)_(𝑛=1:𝑁^exo) with 𝑁^exo = 9 discrete exogenous values and the corresponding transition matrix 𝑃 ∈ ℝ^(𝑁^exo × 𝑁^exo).
The endogenous state-space is defined as 𝒮^endo = [0.5𝑘1, 2𝑘1] × [0.5𝑘1, 2𝑘1] × [−5, 5]. It is discretized as a cartesian grid using 50 points in each dimension. The resulting grid 𝒔⃗^endo = (𝒔⃗𝑛^endo)_(𝑛=1:𝑁^endo) thus has 𝑁^endo = 125000 points.
Finally the discretized state-space of the model is defined as the cartesian product of both exogenous and endogenous state-spaces 𝒔⃗ = 𝒔⃗^exo × 𝒔⃗^endo = (𝒔⃗𝑝^exo, 𝒔⃗𝑞^endo)_(𝑝=1:𝑁^exo, 𝑞=1:𝑁^endo). With linear indexing we can also write it as (𝒔⃗𝑛)_(𝑛=1:𝑁). It contains 𝑁 = 𝑁^exo 𝑁^endo = 1125000 points.
For a given grid point 𝒔⃗𝑛 = (𝒔⃗𝑝^exo, 𝒔⃗𝑞^endo) and a vector of controls 𝒙⃗, the function 𝜏 returns 𝑁^exo values:

𝜏(𝒔⃗, 𝒙⃗) = (𝑤𝑘, 𝑔(𝒔⃗𝑝^exo, 𝒔⃗𝑞^endo, 𝒔⃗𝑘))_(𝑘=1:𝐾) with weights 𝑤𝑘 = 𝑃𝑝𝑘^exo (41)

The initial guess on the grid is equal to steady-state values for all grid points 𝒔⃗𝑛, except for the new financial position, which we set equal to the old one on the grid, denoted here by 𝑏𝑓,𝑛:

𝒙⃗ = (𝒙⃗𝑛)_(𝑛=1:𝑁) = ((𝑏𝑓,𝑛, 𝑝𝑓, 𝑖1, 𝑖2, 𝑤̃1, 𝑤̃2, 𝑤1, 𝑤2))_(𝑛=1:𝑁) (42)

Future controls are approximated using a cubic spline approximation.



B Proofs of the convergence speed

For a vector 𝑥 = (𝑥1, …, 𝑥𝑛) ∈ ℝ𝑛 we define the p-norm as |𝑥|𝑝 = (|𝑥1|^𝑝 + … + |𝑥𝑛|^𝑝)^(1/𝑝). We denote the subordinate norm associated to |.|𝑝 by ‖.‖𝑝. For a linear application 𝐿 : ℝ𝑛 → ℝ𝑛 it is defined as

‖𝐿‖𝑝 = sup_(𝑥≠0) |𝐿.𝑥|𝑝 / |𝑥|𝑝 (43)

and for a multilinear application 𝑀 : ℝ𝑛 × ℝ𝑛 → ℝ𝑛 mapping (𝑥, 𝑦) to 𝑀.[𝑥, 𝑦] we have:

‖𝑀‖𝑝 = sup_(𝑥≠0, 𝑦≠0) |𝑀.[𝑥, 𝑦]|𝑝 / (|𝑥|𝑝 |𝑦|𝑝) (44)

For any linear application 𝐿 we also denote by 𝜌(𝐿) the spectral radius of 𝐿, i.e. the absolute value of the largest complex eigenvalue 𝜆 such that there exists a nonzero vector 𝑥 satisfying 𝐿𝑥 = 𝜆𝑥. For any 𝑝 we have 𝜌(𝐿) ≤ ‖𝐿‖𝑝. More precisely, the Gelfand formula states that 𝜌(𝐿) = lim_(𝑘→∞) ‖𝐿^𝑘‖𝑝^(1/𝑘). As a result, for any 𝜀 > 0 there exists a norm ‖.‖ such that ‖𝐿‖ ≤ 𝜌(𝐿) + 𝜀.
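Both facts are easy to check numerically. The following Python illustration (our own, with a random matrix) verifies that ‖𝐿^𝑘‖^(1/𝑘) is sandwiched between 𝜌(𝐿) and ‖𝐿‖:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 0.3 * rng.normal(size=(5, 5))

rho = max(abs(np.linalg.eigvals(L)))   # spectral radius rho(L)
norm2 = np.linalg.norm(L, 2)           # subordinate 2-norm ||L||_2

# rho(L) <= ||L||_p holds for any subordinate norm
assert rho <= norm2 + 1e-12

# Gelfand: ||L^k||^(1/k) converges to rho(L) as k grows; by
# submultiplicativity it always lies between rho(L) and ||L||
gelfand = np.linalg.norm(np.linalg.matrix_power(L, 50), 2) ** (1.0 / 50)
assert rho - 1e-9 <= gelfand <= norm2 + 1e-9
```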
Consider an application 𝑇 : ℝ𝑛 → ℝ𝑛, for instance the time-iteration algorithm for the discretized model described in Section 2.1. Assume that 𝑇 is continuously differentiable and that there exists a fixed point 𝑥̄ ∈ ℝ𝑛 such that 𝑇(𝑥̄) = 𝑥̄. Denote the spectral radius at the fixed point by 𝜆 = 𝜌(𝑇′(𝑥̄)) and assume 𝜆 < 1.
Under those assumptions we characterize the convergence speed of the algorithms exposed in Section 3.2 and Section 6. These proofs are provided here, for reference, as they follow from standard results in real analysis.
2.1. Time iteration
Let us choose 𝜀 > 0 such that 𝜆 + 𝜀 < 1, and a norm |.|𝑝 such that ‖𝑇′(𝑥̄)‖𝑝 < 𝜆 + 𝜀/2. Since 𝑇′ is continuous, there exists 𝜂 > 0 such that for any 𝑥 satisfying |𝑥 − 𝑥̄|𝑝 < 𝜂 we have ‖𝑇′(𝑥) − 𝑇′(𝑥̄)‖𝑝 < 𝜀/2. It implies that for any such 𝑥, ‖𝑇′(𝑥)‖𝑝 < 𝜆 + 𝜀.
Take 𝑥, 𝑦 ∈ ℝ𝑛 such that |𝑥 − 𝑥̄|𝑝 < 𝜂 and |𝑦 − 𝑥̄|𝑝 < 𝜂. We can write

𝑇(𝑥) = 𝑇(𝑦) + ∫₀¹ 𝑇′(𝑦 + 𝑢(𝑥 − 𝑦)).(𝑥 − 𝑦) d𝑢 (45)

from which we get:

|𝑇(𝑥) − 𝑇(𝑦)|𝑝 ≤ ∫₀¹ ‖𝑇′(𝑦 + 𝑢(𝑥 − 𝑦))‖𝑝 |𝑥 − 𝑦|𝑝 d𝑢 ≤ (𝜆 + 𝜀) |𝑥 − 𝑦|𝑝 (46)

Given 𝑥0 such that |𝑥0 − 𝑥̄|𝑝 < 𝜂, construct for 𝑘 > 0 the iterates 𝑥𝑘 = 𝑇(𝑥𝑘−1). Applying Equation 46 to 𝑥 = 𝑥𝑘−1 and 𝑦 = 𝑥̄ yields |𝑥𝑘 − 𝑥̄|𝑝 = |𝑇(𝑥𝑘−1) − 𝑇(𝑥̄)|𝑝 ≤ (𝜆 + 𝜀)|𝑥𝑘−1 − 𝑥̄|𝑝, so that, by induction, all iterates remain in the ball of radius 𝜂 around 𝑥̄ and satisfy

|𝑥𝑘 − 𝑥̄|𝑝 ≤ (𝜆 + 𝜀)^𝑘 |𝑥0 − 𝑥̄|𝑝 → 0 when 𝑘 → ∞ (47)

This establishes that convergence is geometric with decay 𝜆 + 𝜀 for any 𝜀 > 0. Since all norms are equivalent in finite dimension, this is true for any norm, not just the 𝑝-norm.
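This geometric rate is easy to observe numerically. A quick Python check on the scalar contraction 𝑇 = cos (an illustration unrelated to the model): successive error ratios approach |𝑇′(𝑥̄)| = sin(𝑥̄).

```python
import math

# Fixed-point iteration x_{k+1} = T(x_k) for a smooth contraction:
# the error decays geometrically at rate |T'(xbar)|.
T = math.cos                      # fixed point xbar ≈ 0.739 (Dottie number)
x = 1.0
for _ in range(200):
    x = T(x)
xbar = x                          # converged fixed point

errs = []
x = 1.0
for _ in range(10):
    x = T(x)
    errs.append(abs(x - xbar))
ratios = [errs[k + 1] / errs[k] for k in range(8)]
# the ratios approach |T'(xbar)| = sin(xbar) ≈ 0.674
print(ratios[-1], math.sin(xbar))
```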

2.2. Newton-Raphson
Let us consider 𝐺 : ℝ𝑛 → ℝ𝑛. Assume 𝐺 is twice continuously differentiable. Assume there exists 𝑥̄ ∈ ℝ𝑛 such that 𝐺(𝑥̄) = 0 and that 𝐺′(𝑥̄) is invertible. By continuity of 𝐺′, the map 𝑥 ↦ (𝐺′(𝑥))−1 is then well defined and continuously differentiable in a neighborhood of 𝑥̄.
We consider the function 𝑁 : ℝ𝑛 → ℝ𝑛:

𝑁(𝑥) = 𝑥 − (𝐺′(𝑥))−1 𝐺(𝑥) (48)

and for a given initial guess 𝑥0 and 𝑘 ≥ 1 define the Newton iterates as 𝑥𝑘 = 𝑁(𝑥𝑘−1) = 𝑁^𝑘(𝑥0). We prove below that the sequence (𝑥𝑘)𝑘 converges to 𝑥̄ if 𝑥0 is close enough to 𝑥̄.
It is easy to check that 𝑁(𝑥̄) = 𝑥̄ and that 𝑁 is continuously differentiable. To compute the derivative of 𝑁 we write:

𝐺′(𝑥)(𝑁(𝑥) − 𝑥) + 𝐺(𝑥) = 0 (49)

which we differentiate to get, for any 𝑢 ∈ ℝ𝑛,

𝐺″(𝑥).[(𝑁(𝑥) − 𝑥), 𝑢] + 𝐺′(𝑥).(𝑁′(𝑥).𝑢 − 𝑢) + 𝐺′(𝑥).𝑢 = 0 (50)

and

𝑁′(𝑥).𝑢 = 𝑢 − (𝐺′(𝑥))−1 (𝐺′(𝑥).𝑢 + 𝐺″(𝑥).[(𝑁(𝑥) − 𝑥), 𝑢]) = −(𝐺′(𝑥))−1 𝐺″(𝑥).[(𝑁(𝑥) − 𝑥), 𝑢] (51)
Since (𝐺′(𝑥))−1 𝐺″(𝑥) is continuous in 𝑥, its p-norm is locally bounded by a positive real number 𝐶1. From Equation 51, we get in a neighborhood of 𝑥̄:

‖𝑁′(𝑥)‖𝑝 ≤ 𝐶1 |𝑁(𝑥) − 𝑥|𝑝 (52)

In particular, this implies that 𝜌(𝑁′(𝑥̄)) = 0 and that the Newton iterates converge to 𝑥̄ when 𝑥0 is close enough to 𝑥̄.

Since 𝑁′ is locally Lipschitz around 𝑥̄ (𝐺 being twice continuously differentiable), there exists 𝐶2 such that, close to 𝑥̄,

|𝑥𝑘+1 − 𝑥𝑘|𝑝 = |𝑁(𝑥𝑘) − 𝑁(𝑥𝑘−1)|𝑝 ≤ ‖𝑁′(𝑥𝑘−1)‖𝑝 |𝑥𝑘 − 𝑥𝑘−1|𝑝 + 𝐶2 (|𝑥𝑘 − 𝑥𝑘−1|𝑝)² ≤ 𝐶1 (|𝑥𝑘 − 𝑥𝑘−1|𝑝)² + 𝐶2 (|𝑥𝑘 − 𝑥𝑘−1|𝑝)² (53)

where the second inequality uses Equation 52 and 𝑁(𝑥𝑘−1) − 𝑥𝑘−1 = 𝑥𝑘 − 𝑥𝑘−1. Taking 𝐶 = 𝐶1 + 𝐶2 we thus have |𝑥𝑘+1 − 𝑥𝑘|𝑝 ≤ 𝐶 (|𝑥𝑘 − 𝑥𝑘−1|𝑝)². This establishes that the convergence is quadratic in a neighborhood of 𝑥̄.
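The quadratic rate is easy to visualize on a scalar example (a generic Python illustration, with 𝐺(𝑥) = 𝑥² − 2; not the paper's code):

```python
import math

def newton_step(x):
    # Newton iteration on G(x) = x^2 - 2, i.e. N(x) = x - G(x)/G'(x)
    return x - (x * x - 2.0) / (2.0 * x)

x, root = 2.0, math.sqrt(2.0)
errors = []
for _ in range(5):
    x = newton_step(x)
    errors.append(abs(x - root))

# the number of correct digits roughly doubles at every step:
# |e_{k+1}| <= C |e_k|^2 with C close to 1/(2*sqrt(2))
print(errors)
```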

2.3. Accelerated Time Iteration

Let us define the accelerated time-iteration operator as:

𝐴(𝑥) = 𝑥 + (𝐼 − 𝑇′(𝑥))−1 (𝑇(𝑥) − 𝑥) (54)

Assuming 𝑇 is twice continuously differentiable, let us define 𝐺𝑇(𝑥) = 𝑇(𝑥) − 𝑥. Then 𝐺𝑇 is twice continuously differentiable too and we have 𝐺𝑇′(𝑥) = 𝑇′(𝑥) − 𝐼.
Since 𝜌(𝑇′(𝑥̄)) < 1, the application 𝐺𝑇′(𝑥̄) is invertible, and by continuity so is 𝐺𝑇′(𝑥) in a neighborhood of 𝑥̄. The equation

𝐴(𝑥) = 𝑥 − (𝐺𝑇′(𝑥))−1 𝐺𝑇(𝑥) (55)

makes it obvious that the application 𝐴 is actually a Newton step applied to 𝐺𝑇.
We can then use the result from the last subsection to conclude that the iterates of 𝐴 converge to 𝑥̄ at a quadratic rate.
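A minimal numerical sketch of the operator 𝐴 (our own illustration on a toy contraction of ℝ², with a finite-difference Jacobian standing in for the derivative operator 𝑇′; the detailed timings invert 𝐼 − 𝑇′ with a Neumann sum or GMRES, for which a small dense solve stands in here):

```python
import numpy as np

def T(x):
    # a simple smooth contraction on R^2 (illustrative, not the model operator)
    return np.array([0.5 * np.tanh(x[0]) + 0.1 * x[1],
                     0.2 * x[0] ** 2 + 0.3 * x[1] + 0.1])

def jacobian(f, x, h=1e-7):
    # forward finite differences, column by column
    n = len(x)
    J = np.empty((n, n))
    fx = f(x)
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - fx) / h
    return J

def accelerated_step(x):
    # A(x) = x + (I - T'(x))^{-1} (T(x) - x): a Newton step on G(x) = T(x) - x
    Tp = jacobian(T, x)
    return x + np.linalg.solve(np.eye(len(x)) - Tp, T(x) - x)

x = np.zeros(2)
for _ in range(6):
    x = accelerated_step(x)
residual = np.linalg.norm(T(x) - x)
print(x, residual)
```

A handful of accelerated steps drives the residual 𝑇(𝑥) − 𝑥 to numerical zero, whereas plain iteration of 𝑇 would only shrink it geometrically.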

C Detailed Timings

3.1. Model CS
3.1.1. Time Iterations
──────────────────────────────────────────────────────────────────────────
Time Allocations

─────────────────────── ────────────────────────
Tot / % measured: 121ms / 99.3% 12.3KiB / 7.8%

Section ncalls time %tot avg alloc %tot avg


──────────────────────────────────────────────────────────────────────────
solve for T 116 106ms 88.3% 913μs 976B 100.0% 8.41B
eval F 513 61.2ms 51.0% 119μs 0.00B 0.0% 0.00B

compute J_1 198 40.8ms 34.0% 206μs 0.00B 0.0% 0.00B
eval F 117 13.9ms 11.6% 119μs 0.00B 0.0% 0.00B
fit φ 117 124μs 0.1% 1.06μs 0.00B 0.0% 0.00B
──────────────────────────────────────────────────────────────────────────

3.1.2. Accelerated Time Iterations

──────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 66.1ms / 99.8% 62.0KiB / 3.7%
Section ncalls time %tot avg alloc %tot avg
──────────────────────────────────────────────────────────────────────────────
invert (I-T') 5 51.2ms 77.7% 10.2ms 1.31KiB 57.9% 269B
neumann sum 5 51.2ms 77.7% 10.2ms 672B 29.0% 134B
eval L 731 48.4ms 73.4% 66.3μs 0.00B 0.0% 0.00B
solve for T 5 11.4ms 17.3% 2.28ms 976B 42.1% 195B
eval F 54 6.30ms 9.6% 117μs 0.00B 0.0% 0.00B
compute J_1 23 4.66ms 7.1% 203μs 0.00B 0.0% 0.00B
compute J_2 5 1.48ms 2.2% 296μs 0.00B 0.0% 0.00B

compute J_1 5 1.01ms 1.5% 201μs 0.00B 0.0% 0.00B


eval F 6 700μs 1.1% 117μs 0.00B 0.0% 0.00B
compute J_1 \ J_2 5 141μs 0.2% 28.1μs 0.00B 0.0% 0.00B
fit φ 6 8.50μs 0.0% 1.42μs 0.00B 0.0% 0.00B
──────────────────────────────────────────────────────────────────────────────

3.1.3. Accelerated Time Iterations (optimistic)


──────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────

Tot / % measured: 32.9ms / 99.6% 62.0KiB / 3.7%

Section ncalls time %tot avg alloc %tot avg


──────────────────────────────────────────────────────────────────────────────
invert (I-T') 5 17.7ms 54.2% 3.55ms 1.31KiB 57.9% 269B
neumann sum 5 17.7ms 54.2% 3.55ms 672B 29.0% 134B

eval L 250 16.8ms 51.3% 67.1μs 0.00B 0.0% 0.00B


solve for T 5 11.5ms 35.2% 2.31ms 976B 42.1% 195B
eval F 54 6.38ms 19.5% 118μs 0.00B 0.0% 0.00B
compute J_1 23 4.65ms 14.2% 202μs 0.00B 0.0% 0.00B
compute J_2 5 1.53ms 4.7% 305μs 0.00B 0.0% 0.00B

compute J_1 5 1.02ms 3.1% 204μs 0.00B 0.0% 0.00B


eval F 6 764μs 2.3% 127μs 0.00B 0.0% 0.00B
compute J_1 \ J_2 5 136μs 0.4% 27.3μs 0.00B 0.0% 0.00B
fit φ 6 7.68μs 0.0% 1.28μs 0.00B 0.0% 0.00B
──────────────────────────────────────────────────────────────────────────────

3.1.4. Accelerated Time Iterations (GMRES)

──────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 28.4ms / 99.5% 4.75MiB / 98.8%

Section ncalls time %tot avg alloc %tot avg

──────────────────────────────────────────────────────────────────────────────
invert (I-T') 5 13.1ms 46.4% 2.63ms 4.69MiB 100.0% 960KiB
GMRES 5 13.1ms 46.3% 2.62ms 4.63MiB 98.8% 949KiB
solve for T 5 11.8ms 41.6% 2.35ms 2.02KiB 0.0% 413B
eval F 54 6.52ms 23.0% 121μs 0.00B 0.0% 0.00B
compute J_1 23 4.79ms 16.9% 208μs 0.00B 0.0% 0.00B
compute J_2 5 1.48ms 5.2% 297μs 0.00B 0.0% 0.00B

compute J_1 5 1.02ms 3.6% 204μs 0.00B 0.0% 0.00B
eval F 6 743μs 2.6% 124μs 0.00B 0.0% 0.00B
compute J_1 \ J_2 5 139μs 0.5% 27.7μs 0.00B 0.0% 0.00B
fit φ 6 7.50μs 0.0% 1.25μs 0.00B 0.0% 0.00B
──────────────────────────────────────────────────────────────────────────────

3.1.5. Newton-Raphson
─────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 85.5ms / 99.8% 12.4KiB / 23.1%
Section ncalls time %tot avg alloc %tot avg
─────────────────────────────────────────────────────────────────────────────
invert Jacobian 9 79.3ms 92.9% 8.81ms 1.31KiB 45.9% 149B
neumann sum 9 79.3ms 92.9% 8.81ms 672B 23.0% 74.7B
eval L 1.12k 75.0ms 87.8% 66.9μs 0.00B 0.0% 0.00B
compute Jacobian 9 4.84ms 5.7% 538μs 1.55KiB 54.1% 176B
compute J_2 9 2.65ms 3.1% 295μs 0.00B 0.0% 0.00B
compute J_1 9 1.84ms 2.2% 205μs 0.00B 0.0% 0.00B

compute J\T 9 250μs 0.3% 27.7μs 0.00B 0.0% 0.00B


compute -T 9 25.4μs 0.0% 2.82μs 0.00B 0.0% 0.00B
compute F() 10 1.18ms 1.4% 118μs 0.00B 0.0% 0.00B
fit φ 10 14.3μs 0.0% 1.43μs 0.00B 0.0% 0.00B
─────────────────────────────────────────────────────────────────────────────

3.1.6. Newton-Raphson (optimistic)


─────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────

Tot / % measured: 34.2ms / 99.6% 12.4KiB / 23.1%

Section ncalls time %tot avg alloc %tot avg


─────────────────────────────────────────────────────────────────────────────
invert Jacobian 8 28.7ms 84.2% 3.59ms 1.31KiB 45.9% 168B
neumann sum 8 28.7ms 84.2% 3.59ms 672B 23.0% 84.0B

eval L 400 27.1ms 79.6% 67.8μs 0.00B 0.0% 0.00B


compute Jacobian 8 4.30ms 12.6% 537μs 1.55KiB 54.1% 198B
compute J_2 8 2.36ms 6.9% 295μs 0.00B 0.0% 0.00B
compute J_1 8 1.64ms 4.8% 205μs 0.00B 0.0% 0.00B
compute J\T 8 215μs 0.6% 26.9μs 0.00B 0.0% 0.00B

compute -T 8 23.3μs 0.1% 2.92μs 0.00B 0.0% 0.00B


compute F() 9 1.08ms 3.2% 120μs 0.00B 0.0% 0.00B
fit φ 9 9.89μs 0.0% 1.10μs 0.00B 0.0% 0.00B
─────────────────────────────────────────────────────────────────────────────

3.1.7. Newton-Raphson (GMRES)

─────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 28.2ms / 99.5% 8.14MiB / 99.9%

Section ncalls time %tot avg alloc %tot avg

─────────────────────────────────────────────────────────────────────────────
invert Jacobian 9 22.0ms 78.4% 2.45ms 8.12MiB 100.0% 924KiB
GMRES 9 22.0ms 78.2% 2.44ms 8.02MiB 98.7% 913KiB
compute Jacobian 9 4.87ms 17.3% 541μs 1.55KiB 0.0% 176B
compute J_2 9 2.64ms 9.4% 293μs 0.00B 0.0% 0.00B
compute J_1 9 1.88ms 6.7% 209μs 0.00B 0.0% 0.00B
compute J\T 9 256μs 0.9% 28.5μs 0.00B 0.0% 0.00B

compute -T 9 23.6μs 0.1% 2.62μs 0.00B 0.0% 0.00B
compute F() 10 1.18ms 4.2% 118μs 0.00B 0.0% 0.00B
fit φ 10 10.7μs 0.0% 1.07μs 0.00B 0.0% 0.00B
─────────────────────────────────────────────────────────────────────────────


3.2. Model RBC

3.2.1. Time Iterations
──────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 1.23s / 99.3% 11.9MiB / 99.9%

Section ncalls time %tot avg alloc %tot avg
──────────────────────────────────────────────────────────────────────────
solve for T 154 1.10s 90.0% 7.15ms 9.33MiB 78.4% 62.1KiB
compute J_1 217 559ms 45.7% 2.57ms 2.94MiB 24.7% 13.9KiB
eval F 588 450ms 36.8% 765μs 6.39MiB 53.7% 11.1KiB
eval F 155 116ms 9.5% 750μs 1.68MiB 14.1% 11.1KiB
fit φ 155 6.48ms 0.5% 41.8μs 906KiB 7.4% 5.84KiB

──────────────────────────────────────────────────────────────────────────

3.2.2. Accelerated Time Iterations


──────────────────────────────────────────────────────────────────────────────
Time Allocations

─────────────────────── ────────────────────────
Tot / % measured: 682ms / 99.8% 10.8MiB / 95.5%

Section ncalls time %tot avg alloc %tot avg


──────────────────────────────────────────────────────────────────────────────
invert (I-T') 5 585ms 86.0% 117ms 9.68MiB 93.7% 1.94MiB
neumann sum 5 585ms 86.0% 117ms 9.68MiB 93.7% 1.94MiB
eval L 951 546ms 80.2% 574μs 9.68MiB 93.7% 10.4KiB
solve for T 5 49.0ms 7.2% 9.80ms 418KiB 4.0% 83.6KiB
compute J_1 10 25.3ms 3.7% 2.53ms 139KiB 1.3% 13.9KiB
eval F 25 19.5ms 2.9% 781μs 278KiB 2.6% 11.1KiB
compute J_1 \ J_2 5 15.5ms 2.3% 3.10ms 0.00B 0.0% 0.00B
compute J_2 5 13.3ms 2.0% 2.65ms 77.2KiB 0.7% 15.4KiB
compute J_1 5 12.8ms 1.9% 2.56ms 69.5KiB 0.7% 13.9KiB

eval F 6 4.56ms 0.7% 760μs 66.8KiB 0.6% 11.1KiB


fit φ 6 352μs 0.1% 58.7μs 35.1KiB 0.3% 5.84KiB
──────────────────────────────────────────────────────────────────────────────

3.2.3. Accelerated Time Iterations (optimistic)



──────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 246ms / 99.4% 3.65MiB / 86.6%

Section ncalls time %tot avg alloc %tot avg


──────────────────────────────────────────────────────────────────────────────
invert (I-T') 5 154ms 62.8% 30.7ms 2.55MiB 80.5% 521KiB
neumann sum 5 154ms 62.8% 30.7ms 2.55MiB 80.5% 521KiB
eval L 250 143ms 58.6% 573μs 2.54MiB 80.5% 10.4KiB
solve for T 5 44.6ms 18.2% 8.91ms 382KiB 11.8% 76.4KiB

compute J_1 9 23.4ms 9.5% 2.60ms 125KiB 3.9% 13.9KiB


eval F 23 17.4ms 7.1% 755μs 256KiB 7.9% 11.1KiB
compute J_1 \ J_2 5 15.5ms 6.3% 3.10ms 0.00B 0.0% 0.00B
compute J_2 5 13.3ms 5.4% 2.65ms 77.2KiB 2.4% 15.4KiB
compute J_1 5 12.8ms 5.2% 2.56ms 69.5KiB 2.1% 13.9KiB
eval F 6 4.57ms 1.9% 761μs 66.8KiB 2.1% 11.1KiB

fit φ 6 334μs 0.1% 55.7μs 35.1KiB 1.1% 5.84KiB


──────────────────────────────────────────────────────────────────────────────

3.2.4. Accelerated Time Iterations (GMRES)

──────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 214ms / 99.3% 37.7MiB / 98.7%

Section ncalls time %tot avg alloc %tot avg

──────────────────────────────────────────────────────────────────────────────
invert (I-T') 5 117ms 54.9% 23.4ms 36.6MiB 98.3% 7.32MiB
GMRES 5 117ms 54.9% 23.3ms 36.1MiB 96.9% 7.22MiB
solve for T 5 49.1ms 23.1% 9.82ms 418KiB 1.1% 83.7KiB
compute J_1 10 25.3ms 11.9% 2.53ms 139KiB 0.4% 13.9KiB
eval F 25 19.9ms 9.3% 794μs 278KiB 0.7% 11.1KiB
compute J_1 \ J_2 5 15.8ms 7.4% 3.16ms 0.00B 0.0% 0.00B

compute J_2 5 13.3ms 6.2% 2.65ms 77.2KiB 0.2% 15.4KiB
compute J_1 5 12.8ms 6.0% 2.56ms 69.5KiB 0.2% 13.9KiB
eval F 6 4.57ms 2.1% 762μs 66.8KiB 0.2% 11.1KiB
fit φ 6 291μs 0.1% 48.5μs 35.1KiB 0.1% 5.84KiB
──────────────────────────────────────────────────────────────────────────────

3.2.5. Newton-Raphson
─────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 644ms / 99.9% 10.1MiB / 99.9%
Section ncalls time %tot avg alloc %tot avg
─────────────────────────────────────────────────────────────────────────────
invert Jacobian 5 594ms 92.3% 119ms 9.89MiB 97.6% 1.98MiB
neumann sum 5 594ms 92.3% 119ms 9.89MiB 97.6% 1.98MiB
eval L 972 554ms 86.1% 570μs 9.89MiB 97.6% 10.4KiB
compute Jacobian 5 43.9ms 6.8% 8.78ms 148KiB 1.4% 29.6KiB
compute J\T 5 15.5ms 2.4% 3.09ms 0.00B 0.0% 0.00B
compute J_2 5 13.4ms 2.1% 2.67ms 77.2KiB 0.7% 15.4KiB

compute J_1 5 12.7ms 2.0% 2.54ms 69.5KiB 0.7% 13.9KiB


compute -T 5 864μs 0.1% 173μs 0.00B 0.0% 0.00B
compute F() 6 5.10ms 0.8% 850μs 66.8KiB 0.6% 11.1KiB
fit φ 6 326μs 0.1% 54.4μs 35.1KiB 0.3% 5.84KiB
─────────────────────────────────────────────────────────────────────────────

3.2.6. Newton-Raphson (optimistic)


─────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────

Tot / % measured: 202ms / 99.8% 2.80MiB / 99.7%

Section ncalls time %tot avg alloc %tot avg


─────────────────────────────────────────────────────────────────────────────
invert Jacobian 5 153ms 75.8% 30.6ms 2.55MiB 91.2% 521KiB
neumann sum 5 153ms 75.8% 30.6ms 2.55MiB 91.2% 521KiB

eval L 250 143ms 70.7% 571μs 2.54MiB 91.2% 10.4KiB


compute Jacobian 5 43.9ms 21.8% 8.78ms 148KiB 5.2% 29.6KiB
compute J\T 5 15.6ms 7.7% 3.11ms 0.00B 0.0% 0.00B
compute J_2 5 13.2ms 6.6% 2.65ms 77.2KiB 2.7% 15.4KiB
compute J_1 5 12.7ms 6.3% 2.54ms 69.5KiB 2.4% 13.9KiB

compute -T 5 883μs 0.4% 177μs 0.00B 0.0% 0.00B


compute F() 6 4.53ms 2.2% 756μs 66.8KiB 2.3% 11.1KiB
fit φ 6 310μs 0.2% 51.7μs 35.1KiB 1.2% 5.84KiB
─────────────────────────────────────────────────────────────────────────────

3.2.7. Newton-Raphson (GMRES)

─────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 164ms / 99.7% 36.9MiB / 100.0%

Section ncalls time %tot avg alloc %tot avg

─────────────────────────────────────────────────────────────────────────────
invert Jacobian 5 115ms 69.9% 22.9ms 36.7MiB 99.3% 7.34MiB
GMRES 5 115ms 69.8% 22.9ms 36.2MiB 98.0% 7.24MiB
compute Jacobian 5 44.1ms 26.9% 8.82ms 148KiB 0.4% 29.6KiB
compute J\T 5 15.4ms 9.4% 3.09ms 0.00B 0.0% 0.00B
compute J_2 5 13.3ms 8.1% 2.66ms 77.2KiB 0.2% 15.4KiB
compute J_1 5 12.8ms 7.8% 2.56ms 69.5KiB 0.2% 13.9KiB

compute -T 5 1.01ms 0.6% 203μs 0.00B 0.0% 0.00B
compute F() 6 4.90ms 3.0% 816μs 66.8KiB 0.2% 11.1KiB
fit φ 6 335μs 0.2% 55.9μs 35.1KiB 0.1% 5.84KiB
─────────────────────────────────────────────────────────────────────────────


3.3. Model FI

3.3.1. Time Iterations
──────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 3388s / 99.5% 153MiB / 97.6%

Section ncalls time %tot avg alloc %tot avg
──────────────────────────────────────────────────────────────────────────
solve for T 330 3005s 89.1% 9.11s 127MiB 85.5% 395KiB
compute J_1 463 1399s 41.5% 3.02s 45.3MiB 30.4% 100KiB
eval F 1.26k 1344s 39.9% 1.07s 82.1MiB 55.1% 67.0KiB
eval F 331 361s 10.7% 1.09s 21.6MiB 14.5% 67.0KiB
fit φ 331 5.53s 0.2% 16.7ms 0.00B 0.0% 0.00B

──────────────────────────────────────────────────────────────────────────

3.3.2. Accelerated Time Iterations


──────────────────────────────────────────────────────────────────────────────
Time Allocations

─────────────────────── ────────────────────────
Tot / % measured: 608s / 99.4% 407MiB / 15.6%

Section ncalls time %tot avg alloc %tot avg


──────────────────────────────────────────────────────────────────────────────
invert (I-T') 5 462s 76.6% 92.5s 59.3MiB 93.2% 11.9MiB
neumann sum 5 462s 76.6% 92.5s 59.3MiB 93.2% 11.9MiB
eval L 1.50k 405s 67.1% 271ms 59.3MiB 93.2% 40.6KiB
solve for T 5 67.3s 11.1% 13.5s 2.84MiB 4.5% 582KiB
compute J_1 11 32.9s 5.4% 2.99s 1.08MiB 1.7% 100KiB
eval F 27 28.3s 4.7% 1.05s 1.77MiB 2.8% 67.0KiB
compute J_1 \ J_2 5 38.5s 6.4% 7.71s 0.00B 0.0% 0.00B
compute J_1 5 15.1s 2.5% 3.02s 500KiB 0.8% 100KiB
compute J_2 5 14.2s 2.4% 2.85s 601KiB 0.9% 120KiB

eval F 6 6.25s 1.0% 1.04s 402KiB 0.6% 67.0KiB


fit φ 6 104ms 0.0% 17.3ms 0.00B 0.0% 0.00B
──────────────────────────────────────────────────────────────────────────────

3.3.3. Accelerated Time Iterations (optimistic)



──────────────────────────────────────────────────────────────────────────────
Time Allocations
─────────────────────── ────────────────────────
Tot / % measured: 353s / 98.3% 568MiB / 3.2%

Section ncalls time %tot avg alloc %tot avg


──────────────────────────────────────────────────────────────────────────────
invert (I-T') 8 134s 38.6% 16.8s 12.3MiB 67.1% 1.53MiB
neumann sum 8 134s 38.6% 16.8s 12.3MiB 67.1% 1.53MiB
eval L 400 119s 34.2% 297ms 12.3MiB 67.0% 31.4KiB
solve for T 8 92.3s 26.6% 11.5s 3.72MiB 20.3% 477KiB

compute J_1 14 43.6s 12.5% 3.11s 1.37MiB 7.5% 100KiB


eval F 36 40.7s 11.7% 1.13s 2.35MiB 12.9% 67.0KiB
compute J_1 \ J_2 8 61.7s 17.8% 7.72s 0.00B 0.0% 0.00B
compute J_1 8 24.9s 7.2% 3.12s 801KiB 4.3% 100KiB
compute J_2 8 23.8s 6.9% 2.98s 962KiB 5.1% 120KiB
eval F 9 10.3s 3.0% 1.15s 603KiB 3.2% 67.0KiB

fit φ 9 146ms 0.0% 16.2ms 0.00B 0.0% 0.00B


──────────────────────────────────────────────────────────────────────────────

3.3.4. Accelerated Time Iterations (GMRES)

──────────────────────────────────────────────────────────────────────────────
                                          Time                   Allocations
                                 ───────────────────────  ────────────────────────
        Tot / % measured:             290s /  98.7%          34.4GiB /  99.0%

Section                  ncalls     time    %tot     avg     alloc    %tot      avg
──────────────────────────────────────────────────────────────────────────────
invert (I-T')                 5     148s   51.8%   29.7s   34.0GiB  100.0%  6.80GiB
  GMRES                       5     148s   51.7%   29.6s   33.7GiB   99.0%  6.73GiB
solve for T                   5    64.8s   22.6%   13.0s   2.84MiB    0.0%   582KiB
  compute J_1                11    31.8s   11.1%   2.89s   1.08MiB    0.0%   100KiB
  eval F                     27    27.2s    9.5%   1.01s   1.77MiB    0.0%  67.0KiB
compute J_1 \ J_2             5    38.5s   13.4%   7.69s     0.00B    0.0%    0.00B
  compute J_1                 5    14.8s    5.2%   2.96s    500KiB    0.0%   100KiB
  compute J_2                 5    14.0s    4.9%   2.79s    601KiB    0.0%   120KiB
eval F                        6    6.12s    2.1%   1.02s    402KiB    0.0%  67.0KiB
fit φ                         6   95.8ms    0.0%  16.0ms     0.00B    0.0%    0.00B
──────────────────────────────────────────────────────────────────────────────
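The GMRES variant replaces the Neumann sum by a Krylov solve of the same matrix-free system (I - T')x = r, cutting wall time (148s vs 462s for the inversion) at the cost of the GiB-scale allocations visible in the table. An illustrative sketch with SciPy's matrix-free interface, using our own stand-in operator rather than the paper's:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n = 50
L = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # stand-in contraction
r = rng.standard_normal(n)

# GMRES only needs matrix-vector products v -> (I - L) v, so the
# per-iteration cost is again one operator application.
A = LinearOperator((n, n), matvec=lambda v: v - L @ v)
x, info = gmres(A, r)
assert info == 0  # converged
assert np.linalg.norm((np.eye(n) - L) @ x - r) < 1e-4 * np.linalg.norm(r)
```

The memory cost comes from GMRES storing the Krylov basis, which is why the `eval L` row disappears from the table in exchange for large allocations inside the `GMRES` section.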

3.3.5. Newton-Raphson
─────────────────────────────────────────────────────────────────────────────
                                          Time                   Allocations
                                 ───────────────────────  ────────────────────────
        Tot / % measured:             583s / 100.0%          49.8MiB /  99.9%

Section                  ncalls     time    %tot     avg     alloc    %tot      avg
─────────────────────────────────────────────────────────────────────────────
invert Jacobian               5     501s   86.1%    100s   48.3MiB   97.0%  9.66MiB
  neumann sum                 5     501s   86.1%    100s   48.3MiB   97.0%  9.66MiB
    eval L                1.57k     441s   75.7%   280ms   48.3MiB   97.0%  31.4KiB
compute Jacobian              5    74.4s   12.8%   14.9s   1.08MiB    2.2%   221KiB
  compute J\T                 5    38.5s    6.6%   7.71s     0.00B    0.0%    0.00B
    compute J_1               5    15.8s    2.7%   3.16s    500KiB    1.0%   100KiB
    compute J_2               5    14.6s    2.5%   2.93s    601KiB    1.2%   120KiB
  compute -T                  5    3.15s    0.5%   630ms     0.00B    0.0%    0.00B
compute F()                   6    6.63s    1.1%   1.10s    402KiB    0.8%  67.0KiB
fit φ                         6    102ms    0.0%  17.0ms     0.00B    0.0%    0.00B
─────────────────────────────────────────────────────────────────────────────
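Each Newton-Raphson iteration in the table forms the full Jacobian once ("compute Jacobian") and then applies its inverse to the residual ("invert Jacobian") before updating the decision rule. The generic scheme, as a sketch rather than the paper's implementation:

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, maxit=50):
    # Generic Newton-Raphson: x <- x - J(x)^{-1} F(x). In the profiled
    # runs the two expensive steps are forming J ("compute Jacobian")
    # and applying J^{-1} ("invert Jacobian").
    x = x0.copy()
    for _ in range(maxit):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x -= np.linalg.solve(J(x), fx)
    return x

# Toy system: solve x^2 = 2 component-wise.
F = lambda x: x**2 - 2.0
J = lambda x: np.diag(2.0 * x)
x = newton(F, J, np.ones(3))
assert np.allclose(x, np.sqrt(2.0))
```

The tables above make the trade-off concrete: the step direction is exact, but inverting the Jacobian (501s here, via a Neumann sum) dominates the total runtime.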

3.3.6. Newton-Raphson (optimistic)


─────────────────────────────────────────────────────────────────────────────
                                          Time                   Allocations
                                 ───────────────────────  ────────────────────────
        Tot / % measured:             252s /  99.8%          14.6MiB /  99.8%

Section                  ncalls     time    %tot     avg     alloc    %tot      avg
─────────────────────────────────────────────────────────────────────────────
invert Jacobian               8     123s   49.0%   15.4s   12.3MiB   84.1%  1.53MiB
  neumann sum                 8     123s   49.0%   15.4s   12.3MiB   84.1%  1.53MiB
    eval L                  400     108s   43.0%   270ms   12.3MiB   84.1%  31.4KiB
compute Jacobian              8     118s   47.0%   14.8s   1.72MiB   11.8%   221KiB
  compute J\T                 8    61.3s   24.4%   7.66s     0.00B    0.0%    0.00B
    compute J_1               8    24.8s    9.8%   3.09s    801KiB    5.4%   100KiB
    compute J_2               8    23.6s    9.4%   2.95s    962KiB    6.4%   120KiB
  compute -T                  8    5.05s    2.0%   631ms     0.00B    0.0%    0.00B
compute F()                   9    9.75s    3.9%   1.08s    603KiB    4.0%  67.0KiB
fit φ                         9    147ms    0.1%  16.3ms     0.00B    0.0%    0.00B
─────────────────────────────────────────────────────────────────────────────

3.3.7. Newton-Raphson (GMRES)

─────────────────────────────────────────────────────────────────────────────
                                          Time                   Allocations
                                 ───────────────────────  ────────────────────────
        Tot / % measured:             212s /  99.8%          31.5GiB / 100.0%

Section                  ncalls     time    %tot     avg     alloc    %tot      avg
─────────────────────────────────────────────────────────────────────────────
invert Jacobian               5     134s   63.3%   26.8s   31.5GiB  100.0%  6.29GiB
  GMRES                       5     134s   63.2%   26.8s   31.1GiB   98.9%  6.22GiB
compute Jacobian              5    71.6s   33.8%   14.3s   1.08MiB    0.0%   221KiB
  compute J\T                 5    38.3s   18.0%   7.65s     0.00B    0.0%    0.00B
    compute J_1               5    14.4s    6.8%   2.88s    500KiB    0.0%   100KiB
    compute J_2               5    13.5s    6.4%   2.71s    601KiB    0.0%   120KiB
  compute -T                  5    3.13s    1.5%   626ms     0.00B    0.0%    0.00B
compute F()                   6    6.24s    2.9%   1.04s    402KiB    0.0%  67.0KiB
fit φ                         6   95.0ms    0.0%  15.8ms     0.00B    0.0%    0.00B
─────────────────────────────────────────────────────────────────────────────
