
CALCULATING AND USING SECOND ORDER ACCURATE SOLUTIONS OF DISCRETE TIME DYNAMIC EQUILIBRIUM MODELS

JINILL KIM, SUNGHYUN KIM, ERNST SCHAUMBURG, AND CHRISTOPHER A. SIMS
ABSTRACT. We describe an algorithm for calculating second order approximations to the
solutions to nonlinear stochastic rational expectations models. The paper also explains
methods for using such an approximate solution to generate forecasts, simulated time paths
for the model, and evaluations of expected welfare differences across different versions
of a model. The paper gives conditions for local validity of the approximation that allow
for disturbance distributions with unbounded support and allow for non-stationarity of the
solution process.
Date: February 3, 2005.
Discussions, and in some cases exchange of code testing results, with Fabrice Collard, Kenneth L. Judd, Robert Kollmann, Stephanie Schmitt-Grohé, and Martin Uribe have been useful to us. Kollmann has contributed to the Matlab code that implements the paper's algorithm. © 2005 by Jinill Kim, Sunghyun Kim, Ernst Schaumburg and Christopher Sims. This material may be reproduced for educational and research purposes so long as the copies are not sold, even to recover costs, the document is not altered, and this copyright notice is included in the copies.

1. INTRODUCTION

It is now widely understood how to obtain first-order accurate approximations to the solution to a dynamic, stochastic general equilibrium model (DSGE model). Such solutions are fairly easy to construct and useful for a wide variety of purposes. They can under some conditions be accurate enough to be a basis for fitting the models to data, for example.
However, first-order accuracy is not always enough. This is true in particular for comparing welfare across policies that do not have first-order effects on the model's deterministic steady state, for example. It is also true for attempts to study asset pricing in the context of DSGE models.
It is possible to assume directly that nonlinearities are themselves small in certain dimensions as a justification for use of first-order approximations in these contexts; Woodford (2002) is an example of making the necessary auxiliary assumptions explicit. But the usual reliance on local approximation being generally locally accurate does not apply to these contexts.
It is therefore of some interest to have an algorithm available that will produce second-order accurate approximations to the solutions to DSGEs from a straightforward second-order expansion of the model's equilibrium equations, and this is an active area of recent research.
Kenneth Judd pioneered this field by using perturbation methods in solving various types of economic models.[1] Jin and Judd (2002) describe how to compute approximations of arbitrary order in discrete-time rational expectations models. They aim at providing a complete set of regularity conditions justifying the local approximations, and they discuss methods for checking the validity of the approximations. Others also have studied perturbation methods of higher than first order, including Collard and Juillard (2000), Anderson and Levin (2002), and Schmitt-Grohé and Uribe (2002).
Kim and Kim (2003a) and Sutherland (2002) have developed a bias correction method
that produces the same results as the second order perturbation method for certain welfare
calculations, while requiring less computational effort than a full perturbation solution.
Several papers have applied the second-order perturbation method to dynamic general equilibrium models. Kim and Kim (2003b) used the second-order solution method to analyze welfare effects of tax policies in a two-country framework. In particular, they calculate the optimal degree of response for various tax rates to TFP shocks faced by each country. Welfare gains of tax policies are measured by conditional welfare changes from the benchmark case. Kollmann (2002) has analyzed the welfare effects of monetary policies in open economies using the software that has been developed along with this paper, and Bergin and Tchakarov (2002) have used it to examine the welfare effects of exchange rate risk.
[1] Judd (1998). For continuous time models see Gaspar and Judd (1997) as well.
This paper describes the algorithm for computing a second order approximation and shows how to apply it to calculating forecasts and impulse responses in dynamic models and to evaluating welfare in DSGE models. It points out some necessary regularity conditions for application of the method and discusses the sense in which the approximate solutions are locally accurate.

While much of the paper parallels others in this rapidly growing literature, this paper makes some new contributions. The rest of the literature in most cases begins from a formulation of the problem in which a partition of variables in the model into "states" and "controls" or "co-states" is assumed known. While in smaller models such a partition is often obvious, in larger models it can be unclear how to partition the variables into states and controls. The Matlab program gensys.m, implementing the approach described in Sims (2001), accepts model specifications that do not partition the variable list into predetermined and non-predetermined variables; instead it partitions disturbances into predetermined and non-predetermined categories. This approach is more natural in systems derived from equilibrium models, in which equation disturbances often fall neatly into these categories. In such models translating the list of predetermined disturbances into a corresponding list of predetermined variables (or, where necessary, new predetermined variables that are linear combinations of the original model's variables) may not be easy. This paper extends that approach to second-order approximations.[2]
The "state-free" approach of gensys.m has the disadvantage that its output, while completely characterizing the dynamics in terms of the original variables, includes only its own artificial decomposition into states and co-states, which may be opaque. For some purposes it is important to have an intuitively appealing decomposition into states and co-states. We discuss how to do this, with the aid of another program, gstate.m, that uses the output of gensys.m or gensys2.m to test proposed state vectors and to provide guidance as to what a valid state vector must look like.

[2] King and Watson (1998) and Klein (2000) describe solution algorithms that handle essentially the same class of models as Sims (2001), but presume that the list of predetermined variables is given.
Where the sense in which accuracy of local expansions is claimed has been made explicit in the literature, it has for the most part (Jin and Judd, 2002, most prominently) focused on accuracy of the function mapping state variables to co-states. It has also tended to assert as regularity conditions almost-sure boundedness of stochastic disturbances and stationarity of the dynamic model being studied. These assumptions allow strong claims to be made about approximation accuracy, but they are disquieting for most DSGE modeling applications. Models with unit roots, or even mild explosiveness, are not uncommon in macroeconomics, and models with near-unit roots are the rule. Often disturbance distributions with unbounded support seem more realistic than any particular truncation to bounded support. If perturbation methods break down, or are at the edge of their domain of applicability, for such models, they might seem to be unattractive for many of the models to which they have in fact been applied.

In this paper we argue that boundedness of shocks and stationarity of the model are not essential to the validity of perturbation methods. For their main applications so far, perturbation methods can be shown to produce results that are in a natural sense locally accurate, without the invocation of the dubious stationarity and boundedness assumptions.

There is little explicit discussion in the literature of how to use higher order perturbation approximations in constructing simulations, forecasts, and welfare evaluations. We show that some apparently obvious approaches to these tasks in fact result in an accumulation of "garbage" high-order terms that can make accuracy deteriorate. We lay out an algorithm that always produces stationary second-order accurate dynamics whenever the first-order dynamics are stable.

The Matlab and R code that was built along with this paper is available at http://eco-072399b.princeton.edu/yftp/gensys2/, where the current version of this paper will also be found.
2. THE GENERAL FORM OF THE MODEL

We suppose a model that takes the form

(1) $\underset{n\times 1}{K}\big(\underset{n\times 1}{w_t},\,w_{t-1},\,\underset{m\times 1}{\sigma\varepsilon_t}\big) + \Pi\,\underset{p\times 1}{\eta_t} = 0,$

where $E_t\varepsilon_{t+1}=0$ and $E_t\eta_{t+1}=0$. The equations hold for $t=0,\ldots,\infty$, as does the $E_t\eta_{t+1}=0$ condition. The disturbances $\varepsilon_t$ are exogenously given, while $\eta_t$ is determined as a function of $\varepsilon$ when the model is solved, if the solution exists and is unique. Note that because there is no assumption at all about $\eta_0$, it is a free vector that is likely to make certain linear combinations of the equations tautological at the initial date.

The scale factor $\sigma$ is introduced to allow us to shrink the distribution of $\sigma\varepsilon_t$ toward zero as we seek a domain of validity for our local approximation. The distribution of $\varepsilon_t$ itself is assumed to be constant across time $t$ and invariant to changes in $\sigma$, so that in particular it has a fixed covariance matrix $\Sigma$.
The equation system could be written equivalently as

(2) $Q_1K(w_t, w_{t-1}, \sigma\varepsilon_t) = 0$

(3) $E_t\big[Q_2K(w_{t+1}, w_t, \sigma\varepsilon_{t+1})\big] = 0,$

where $Q_1$ is any matrix such that $Q_1\Pi = 0$ and $[Q_1',\,Q_2']$ is a full rank square matrix. The forward-shift of the expectational block reflects the absence of any restriction on $\eta_0$.
It is of course easy to accommodate models with more, but finitely many, lags in this general framework by lengthening the $w$ vector to include lags. Equilibrium models often come in exactly the form (2)-(3), with the (3) part emerging from first-order conditions. Some models are usually written with multiperiod or lagged expectations in the behavioral equations. If, say, $E_{t-1}[f(w_t)]$ initially appears as an argument of $K$, we can define a new variable $f^*_t = E_t[f(w_{t+1})]$ and add this definitional equation to the expectational block (3) of the system, thereby putting the system in the canonical form. If, say, $E_{t-2}[w_{jt}]$ appears in the system, we can define $w^*_{jt} = E_{t-1}[w_{j,t+1}]$ and add this definitional equation to the system. Then, since $E_{t-2}[w_{jt}] = E_{t-2}[w^*_{j,t-1}]$, we will have made all the expectations in the system one-step-ahead. It may then be necessary (if $E_{t-2}w_{jt}$ enters with those absolute time subscripts, rather than as, say, $E_t[w_{j,t+2}]$) to extend the state vector in the usual way to make the system first-order. By combining and repeating these maneuvers, any system involving a finite number of lags and expectations over finite-length horizons can be cast into the form (2)-(3) or (therefore) (1).
We assume that the solution will imply that $w_t$ remains always on a stable manifold, defined by $H(w_t, \sigma) = 0$ and satisfying

(4) $\big\{\underset{n_u\times 1}{H}(w_t, \sigma) = 0,\ H(w_{t+1}, \sigma) = 0 \text{ a.s. and } Q_1K(w_{t+1}, w_t, \sigma\varepsilon_{t+1}) = 0 \text{ a.s.}\big\} \;\Longrightarrow\; E_t\big[Q_2K(w_{t+1}, w_t, \sigma\varepsilon_{t+1})\big] = 0.$

We consider expansion of the system about a deterministic steady state $\bar w$, i.e. a point satisfying $K(\bar w, \bar w, 0) = 0$. We do not need to assume the steady state is unique, so the situation arising in unit root models, where there is a continuum of steady states, is not ruled out.
We also assume that the nonlinear system (1) is formulated in such a way that its first-order expansion characterizes the first-order behavior of the deterministic solution. That is, we assume that solving the first-order expansion of (1) about $\bar w$,

(5) $K_1\,dw_t = -K_2\,dw_{t-1} - K_3\,\sigma\varepsilon_t + \Pi\eta_t,$

as a linear system results in a unique stable saddle path in the neighborhood of the deterministic steady state. If so, this saddle path characterizes the first-order behavior of the system. We assume further that $H_1(\bar w, 0)$ is of full row rank, so that the first-order character of the saddle path is determined by the first-order expansion of $H$.[3]

[3] This assumption on $H$ is not restrictive so long as there is a continuous, differentiable saddle manifold. However there are models (some asset pricing models, for example) in which the first-order approximation does not deliver determinacy, but higher-order terms do. The algorithms suggested here cannot handle models of this type.
The system (1) has the second-order Taylor expansion about $\bar w$

(6) $K_{1ij}\,dw_{jt} = -K_{2ij}\,dw_{j,t-1} - K_{3ij}\,\sigma\varepsilon_{jt} + \Pi_{ij}\eta_{jt} - \tfrac{1}{2}\big(K_{11ijk}\,dw_{jt}\,dw_{kt} + 2K_{12ijk}\,dw_{jt}\,dw_{k,t-1} + 2K_{13ijk}\,dw_{jt}\,\sigma\varepsilon_{kt} + K_{22ijk}\,dw_{j,t-1}\,dw_{k,t-1} + 2K_{23ijk}\,dw_{j,t-1}\,\sigma\varepsilon_{kt} + K_{33ijk}\,\sigma^2\varepsilon_{jt}\varepsilon_{kt}\big),$

where we have resorted to tensor notation. That is, we are using the notation that

(7) $A_{ijk}B_{mnjq} = C_{ikmnq} \iff c_{ikmnq} = \sum_j a_{ijk}b_{mnjq},$

where $a$, $b$, $c$ in this expression refer to individual elements of multidimensional arrays, while $A$, $B$, $C$ refer to the arrays themselves. As a special case, for example, ordinary matrix multiplication is $AB = A_{ij}B_{jk}$ and the usual matrix expression $A'BA$ becomes $A_{ji}B_{jk}A_{km}$. Note that we are distinguishing the array $K_{mij}$ of first derivatives from the array $K_{mnijk}$ of second derivatives only by the number of indexing subscripts the two arrays have.
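As a concrete numerical illustration of this notation (not part of the code distributed with the paper, and with arbitrary array sizes), a contraction such as $K_{11ijk}\,dw_j\,dw_k$ can be evaluated in Matlab by flattening the last two dimensions of the array and multiplying by a Kronecker product; the same flattening reappears below in the definitions of $M^*$ and $F^*_{11}$.

```matlab
% Evaluate q_i = K11_{ijk} dw_j dw_k by flattening the 3-d array into an n x n^2
% matrix and contracting with kron(dw,dw); check the result against explicit loops.
n   = 3;
K11 = randn(n, n, n);              % an arbitrary array standing in for second derivatives
dw  = randn(n, 1);
K11flat = reshape(K11, n, n*n);    % stack the 2nd and 3rd dimensions into columns
q = K11flat * kron(dw, dw);        % q(i) = sum_{j,k} K11(i,j,k) dw(j) dw(k)
q_loop = zeros(n, 1);
for j = 1:n
  for k = 1:n
    q_loop = q_loop + K11(:, j, k) * dw(j) * dw(k);
  end
end
disp(max(abs(q - q_loop)))         % zero up to rounding error
```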
3. REGULARITY CONDITIONS

Because we are taking first and second derivatives and because we are expanding about the steady state $\bar w$, it is clear that we require existence of first and second derivatives of $K$ at $\bar w$. We have also directly assumed that the first order behavior of $K$ near $\bar w$ determines $H(\cdot, 0)$. In order to make our local expansion in $dw$, $\varepsilon$, and $\sigma$ work, we will need that $H(w, \sigma)$ is continuous and twice-differentiable in both its arguments.

It may seem that these are all standard assumptions on the degree of differentiability of the system near $\bar w$. Consider what emerges, though, when we split the system into expectational and non-expectational components as in (2)-(3). If we replace (3) with its second-order expansion and take some expectations explicitly, we arrive at

(8) $E_t\Big[Q_2\Big(K_{1ij}\,dw_{j,t+1} + K_{2ij}\,dw_{jt} + \tfrac{1}{2}\big(K_{11ijk}\,dw_{j,t+1}\,dw_{k,t+1} + 2K_{12ijk}\,dw_{j,t+1}\,dw_{k,t} + K_{22ijk}\,dw_{j,t}\,dw_{k,t} + K_{33ijk}\,\Sigma_{jk}\,\sigma^2\big)\Big)\Big] = 0,$

and find ourselves needing to assert that $\varepsilon_t$ has finite second moments, which is not a local property. That is, if $\varepsilon_t$ does not have second moments, shrinking $\sigma$ will not make $\sigma\varepsilon_t$ have finite second moments. The same point applies to (3) in its original nonlinear form. If it is to be differentiable in $w_t$ and $\sigma$, we will in general need to impose restrictions on the distribution of $\varepsilon_t$. Jin and Judd (2002) have an example of a model in which some apparently natural choices of a distribution for $\varepsilon_t$ imply that $E_t[Q_2K(w_{t+1}, w_t, \sigma\varepsilon_{t+1})]$ is discontinuous in $\sigma$ at $\sigma = 0$, even though $K$ has plenty of derivatives at the steady state.
4. SOLUTION METHOD

The solution we are looking for can be written in the form

(9) $w_t = F^*(w_{t-1}, \sigma\varepsilon_t, \sigma).$

Because we know the saddle manifold characterized by $H$ exists and that $H_1(\bar w, \sigma)$ has full row rank $n_u$, we can use $H$ to express $n_u$ linear combinations of $w$'s in terms of the remaining $n_s = n - n_u$. Let the $n_s$ linear combinations of $w$'s chosen as explanatory variables in this relation be

(10) $y_t = \underset{n_s\times n}{\Gamma}\,w_t.$

Then the solution (9) can be expressed equivalently, in a neighborhood of $\bar w$, as

(11) $y_t = \Gamma F^*(w_{t-1}, \sigma\varepsilon_t, \sigma) = F(y_{t-1}, x_{t-1}, \sigma\varepsilon_t, \sigma)$

(12) $\underset{n_u\times 1}{x_t} = h(y_t, \sigma),$

where (12) is just the solved version of the $H = 0$ equation that characterizes the stable manifold. Here of course $x$, like $y$, is a linear combination of $w$'s.

The appearance of $x_{t-1}$ in (11) may seem redundant, since along the solution path we will have $x_t = h(y_t, \sigma)$, but at the initial date the lagged $w$ vector might not satisfy this restriction. This is likely in a growth model with multiple types of capital, for example, where there may be optimal proportions of capital of different types, but no physical requirement that the initial endowments are in these proportions.[4]
The solution method for linear rational expectations systems described in Sims (2001) begins by applying linear transformations to the list of variables and to the equation system to produce an upper triangular block recursive system. In the transformed system, the unstable roots of the system are all associated with the lower right block, $\eta_t$ does not appear in the upper set of equations in the system,[5] and the upper part of the equation system is normalized to have the identity as the coefficient matrix on current values of the upper part of the transformed variable matrix. In other words, by applying to the equation system the same sequence of linear operations as applied in the earlier paper to a linear system,[6] we can transform (6) to

(13) $dy_{it} = G_{1ij}\,dx_{jt} + G_{2ij}\,dv_{j,t-1} + G_{3ij}\,\sigma\varepsilon_{jt} + \tfrac{1}{2}\big(G_{11ijk}\,dv_{jt}\,dv_{kt} + 2G_{12ijk}\,dv_{jt}\,dv_{k,t-1} + 2G_{13ijk}\,dv_{jt}\,\sigma\varepsilon_{kt} + G_{22ijk}\,dv_{j,t-1}\,dv_{k,t-1} + 2G_{23ijk}\,dv_{j,t-1}\,\sigma\varepsilon_{kt} + G_{33ijk}\,\sigma^2\varepsilon_{jt}\varepsilon_{kt}\big)$

(14) $J_{1ij}\,dx_{jt} = J_{2ij}\,dx_{j,t-1} + J_{3ij}\,\sigma\varepsilon_{jt} + \eta^*_{it} + \tfrac{1}{2}\big(J_{11ijk}\,dv_{jt}\,dv_{kt} + 2J_{12ijk}\,dv_{jt}\,dv_{k,t-1} + 2J_{13ijk}\,dv_{jt}\,\sigma\varepsilon_{kt} + J_{22ijk}\,dv_{j,t-1}\,dv_{k,t-1} + 2J_{23ijk}\,dv_{j,t-1}\,\sigma\varepsilon_{kt} + J_{33ijk}\,\sigma^2\varepsilon_{jt}\varepsilon_{kt}\big),$

where $v_t = (y_t'\ \ x_t')'$, i.e. the $y$ and $x$ vectors stacked up, and $\eta^*_t$ collects the transformed expectational errors.

[4] See section 5 below for further discussion of this point.

[5] It may not be possible in fact to eliminate $\eta_t$ from the upper part of the system. When it is not, the solution is not unique. The programs signal the non-uniqueness and deliver one solution, in which the $\eta$'s are set to zero in the upper block of this system.

[6] and implemented in the R function gensys.R and the Matlab function gensys.m
Now the $y$ and $x$ introduced above may seem to have no connection to the $y$ and $x$ in terms of which we wrote the solution (11)-(12). But that solution has second-order expansion

(15) $dy_{it} = F_{1ij}\,dv_{j,t-1} + F_{2ij}\,\sigma\varepsilon_{jt} + F_{3i}\,\sigma^2 + \tfrac{1}{2}\big(F_{11ijk}\,dv_{j,t-1}\,dv_{k,t-1} + 2F_{12ijk}\,dv_{j,t-1}\,\sigma\varepsilon_{kt} + F_{22ijk}\,\sigma^2\varepsilon_{jt}\varepsilon_{kt}\big)$

(16) $dx_{it} = \tfrac{1}{2}M_{11ijk}\,dy_{jt}\,dy_{kt} + M_{2i}\,\sigma^2.$

Of course if $x$ were chosen as an arbitrary linear combination of $w$'s, there would in general be a first-order term in $dy_t$ on the right-hand side of (16). However, we can always move such terms to the left-hand side and then redefine $x$ to include them. We will now proceed to show that the $dy$ and $dx$ in (15)-(16) are indeed those in (13)-(14), and that indeed we can construct the coefficient matrices in the former from knowledge of the coefficient matrices in the latter.
The terms in $\sigma$ in (15)-(16) deserve discussion. As can be seen from (8), the appearance of expectations operators in our system makes it depend on the distribution of $\varepsilon$, not just on realized values of $\varepsilon$. But there is only one term in (8) that is first-order in $dw_{t+1}$. All the other terms are second-order, or depend on $dw_t$ or $\sigma^2$, not $\sigma$. Therefore if there were a component of $Q_2K_1\,dw_{t+1}$ that depended on $\sigma$ (rather than $\sigma^2$), that term could not be zero as the equation requires. Hence we can be sure that there is no term linear in $\sigma$ in the second order expansion of (2)-(3), and thus none in (15)-(16). This also rules out any term of the form $\sigma\varepsilon_{t+1}$, since such a term could enter only through the cross products $dw_{t+1}\,\sigma\varepsilon_{t+1}$ or through the $dw_{t+1}\,dw_{t+1}$ terms, and without a first-order term in $\sigma$ in $dw_{t+1}$, these cross products can generate no $\sigma\varepsilon_{t+1}$ terms.
Observe that $dx_t$ in (13)-(14) must be zero to first order (except for $t = -1$), because otherwise there would be an explosive component in the first order part of the solution, contradicting the stability assumption. Therefore, $F_1$ is exactly $G_2$ from (13). Clearly also $F_2 = G_3$. Therefore we have a complete first-order solution for $dy$ and $dx$ in hand:

(17) $dy_t \doteq F_1\,dv_{t-1} + F_2\,\sigma\varepsilon_t$

(18) $dx_t \doteq 0.$
We find the second order terms in the following steps. First shift (14) forward in time by one (so that the left-hand side is $dx_{t+1}$) and substitute the right-hand side of (16), shifted forward in time by 1, for the $dx_{t+1}$ on the left. Then substitute the right-hand side of (17), shifted forward by 1, for all occurrences of $dy_{t+1}$ in the resulting system. Finally apply the $E_t$ operator to the result. In doing this, we are dropping all the second order terms in the solution for $dy$ and $dx$ when these terms themselves occur in second order terms. This makes sense because cross products involving terms higher than first order are third order or higher, and thus do not contribute to the second order expansion. Note that this means that, since $dx$ is zero to first order, in (13)-(14) all the second-order terms in $dv$ can be written in terms of $dy$ alone. We will abuse notation by using the same $G$ and $J$ labels for the smaller second-order coefficient matrices that apply to $dy$ alone that we use in (13)-(14) for the second order terms involving the full $v$ vector. In this way we arrive at

(19) $J_{1ij}\Big[\tfrac{1}{2}\big(M_{11jk\ell}\,F_{1kr}\,F_{1\ell s}\,dy_{rt}\,dy_{st} + M_{11jk\ell}\,F_{2kr}\,F_{2\ell s}\,\Sigma_{rs}\,\sigma^2\big) + M_{2j}\,\sigma^2\Big] = J_{2ij}\Big[\tfrac{1}{2}M_{11jk\ell}\,dy_{kt}\,dy_{\ell t} + M_{2j}\,\sigma^2\Big] + \tfrac{1}{2}\Big[J_{11ijk}\big(F_{1jr}\,F_{1ks}\,dy_{rt}\,dy_{st} + F_{2jr}\,F_{2ks}\,\Sigma_{rs}\,\sigma^2\big) + 2J_{12ijk}\,F_{1jr}\,dy_{rt}\,dy_{kt} + 2J_{13ijk}\,F_{2jr}\,\Sigma_{rk}\,\sigma^2 + J_{22ijk}\,dy_{jt}\,dy_{kt} + J_{33ijk}\,\Sigma_{jk}\,\sigma^2\Big],$

where we have set $\mathrm{Var}(\varepsilon_t) = \Sigma$.
For this equation to hold for all $dy$ and $\sigma^2$ values, we must match coefficients on common terms. Therefore, looking at the $dy_t \otimes dy_t$ terms, we conclude that

(20) $J_{1ij}\,M_{11jk\ell}\,F_{1kr}\,F_{1\ell s} = J_{2ij}\,M_{11jrs} + J_{11ijk}\,F_{1jr}\,F_{1ks} + 2J_{12ijs}\,F_{1jr} + J_{22irs}.$
This is a linear equation, and every element of it is known except for $M_{11}$. The transformations that produced the block-recursive system with ordered roots guarantee that $J_2$, an ordinary $n_u\times n_u$ matrix, has all its eigenvalues above the critical stability value. It is therefore invertible, and we can multiply (20) through on the left by $J_2^{-1}$, to get a system in the form

(21) $A\,M^*\,(F_1 \otimes F_1) = M^* + B.$

In this equation, $M^*$ is the ordinary $n_u\times n_s^2$ matrix obtained by stacking up the second and third dimensions of $M_{11}$, $A = J_2^{-1}J_1$, and $B$ is everything else in the equation that doesn't depend on $M^*$. If the dividing line we have specified between stable and unstable roots is $1+\delta$, then our construction of the block-recursive system has guaranteed that $J_2^{-1}J_1$ has all its eigenvalues $\le 1/(1+\delta)$, while at the same time it is a condition on the solution that all the eigenvalues of $F_1$ be $< 1+\delta$. To guarantee that a second-order solution exists, we require that the largest eigenvalue of $F_1\otimes F_1$, which is the square of the largest eigenvalue of $F_1$, be less than the inverse of the largest eigenvalue of $A = J_2^{-1}J_1$. If $\delta = 0$ this condition is automatically satisfied. Otherwise, there is an extra condition that was not required for finding a solution to the linear system: the smallest unstable root must exceed the square of the largest stable root.

Assuming this condition holds, (21) has the form of a discrete Lyapunov or Sylvester equation that is guaranteed to have a solution. Because of the special structure of $F_1\otimes F_1$, it would be very inefficient to solve this system with standard packages (like Matlab's lyap.m), but it is easy to exploit the special structure with a doubling algorithm to obtain an efficient solution for $M^*$. That is, one notes that (21) implies

(22) $M^* = -\sum_{i=0}^{\infty} A^i\,B\,(F_1^i \otimes F_1^i)$

and defines the $j$th approximation to $M^*$ as

(23) $M^*_j = -\sum_{i=0}^{j-1} A^i\,B\,(F_1^i \otimes F_1^i).$
Then the recursion, starting with $M^*_1 = -B$,

(24) $M^*_{2j} = M^*_j + A^j\,M^*_j\,(F_1^j \otimes F_1^j)$

quickly converges, unless the eigenvalue condition is extremely close to the boundary.
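The following Matlab function is a minimal sketch of this doubling recursion (it is not the gensys2.m code distributed with the paper, and for clarity it forms $F_1\otimes F_1$ explicitly rather than exploiting the Kronecker structure term by term):

```matlab
% Doubling iteration (23)-(24) for the solution M* of A M* (F1 kron F1) = M* + B,
% assuming the eigenvalue condition discussed above holds so the sum (22) converges.
function Mstar = doubling_sylvester(A, B, F1, tol)
  if nargin < 4, tol = 1e-12; end
  Mstar = -B;                          % M*_1
  Apow  = A;                           % A^j
  Kpow  = kron(F1, F1);                % F1^j kron F1^j
  while true
    Mnew = Mstar + Apow*Mstar*Kpow;    % M*_{2j}, equation (24)
    if max(abs(Mnew(:) - Mstar(:))) < tol*max(1, max(abs(Mnew(:))))
      Mstar = Mnew; return
    end
    Mstar = Mnew;
    Apow  = Apow*Apow;                 % square the powers: j -> 2j
    Kpow  = Kpow*Kpow;
  end
end
```

Under the eigenvalue condition the correction term shrinks geometrically in the number of doublings, so only a few iterations are typically needed.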
With $M_{11}$ in hand, it is easy to see from (19) that we can obtain a solution for $M_2$ by matching coefficients on $\sigma^2$. The only slightly demanding calculation is a required inversion of $J_2 - J_1$. But since $J_2^{-1}J_1$ has all its eigenvalues less than one, this $J_2 - J_1$ is guaranteed to be nonsingular.

The next step is to use (16) to substitute for the first-order term in $dx_t$ on the right of (13) and to use (17)-(18) to substitute for all occurrences of $dy_t$ and $dx_t$ in second-order terms on the right in the resulting equation. This produces an equation with $dy_t$ on the left, and first and second-order terms in $dy_{t-1}$ and $\varepsilon_t$ and terms in $\sigma^2$ on the right. With $M_{11}$ and $M_2$ in hand, it turns out that it is only a matter of bookkeeping to read off the values of $F_{12}$, $F_{22}$, and $F_3$ by matching them to the collected coefficients in this equation.[7]

[7] This bookkeeping is not trivial to program, but it is probably best for those who need to program it to consult the program, rather than take up space here with the bookkeeping.
5. ANALYZING THE STATE REPRESENTATION

The gensys.m and gensys.R programs produce as output, among other things, a first-order expansion of (9), as

(25) $dw_t \doteq F^*_1\,dw_{t-1} + F^*_2\,\sigma\varepsilon_t.$

To find a conventional state-space representation of such a system, we can form a singular value decomposition

(26) $\big[F^*_1\ \ F^*_2\big] = \begin{bmatrix} U & V \end{bmatrix}\begin{bmatrix} D & 0\\ 0 & 0 \end{bmatrix}\begin{bmatrix} R_1' & R_2'\\ S_1' & S_2' \end{bmatrix},$

where $[U\ V]$ and $[R\ S]$ are orthonormal matrices and $D$ is diagonal. Any state vector $z_t$ that has the property that $w_t$ is determined by $z_t$ in this system will have to be of the form $z_t = U'w_t$. The only way $w_{t-1}$ affects current $w_t$ is via $R_1'w_{t-1}$. While $R_1'$ can have the same row rank as $U$, it can also have less, so that a smaller state vector summarizes the past than is needed to characterize the current situation. Also, the rank of $F^*_1$ can be below the number of non-zero singular values in $D$. In this case it may be possible to find a $z_t$ that, after the system has run a few periods, summarizes the past and/or characterizes the current situation yet is of lower dimension than the rank of $D$.
The program gstate.m takes as input $F^*_1$ and $F^*_2$, together with an optional candidate matrix of coefficients $\phi$ that might form a state vector as $z_t = \phi w_t$. The program checks whether $\phi$ lies in the row space of $U'$ or of $R_1'$ and returns $U'$ and $R_1'$ for further analysis.

Once a state/co-state representation of the form $v_t = Tw_t$ has been settled on, where $T$ is non-singular and the $v = (y'\ x')'$ vector is partitioned into state and costate, it is straightforward to convert a first or second-order approximate solution from one co-ordinate system into the other.
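A minimal sketch of this kind of row-space check (not the actual gstate.m, whose interface and output differ) might look as follows in Matlab, taking $F^*_1$, $F^*_2$ and a candidate coefficient matrix $\phi$ and testing whether $\phi$'s rows lie in the row space of $U'$ from (26):

```matlab
function [inspan, Uprime] = check_state_candidate(F1s, F2s, phi, tol)
% F1s = F*_1, F2s = F*_2 from the first-order solution; phi is a candidate
% matrix of coefficients for a proposed state vector z_t = phi * w_t.
  if nargin < 4, tol = 1e-9; end
  [U, D, ~] = svd([F1s, F2s]);
  r      = sum(diag(D) > tol*D(1,1));   % numerical rank of [F*_1  F*_2]
  Ur     = U(:, 1:r);                   % orthonormal basis, z_t = Ur'*w_t
  Uprime = Ur';
  resid  = phi - (phi*Ur)*Ur';          % projection residual of phi's rows
  inspan = max(abs(resid(:))) < tol;    % true if phi is a valid state selection
end
```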
6. THE LOCAL ACCURACY OF THE APPROXIMATION

Once we have a second-order accurate approximation to the dynamics, in the form (15)-(16), we can make a claim to local accuracy of the following form:

(27) $dv_{t+1} = \hat F(dv_t, \sigma\varepsilon_{t+1}, \sigma) + o_p\big(\lVert(dv_t, \sigma)\rVert^2\big),$

where $o_p$ means "order in probability" and $\hat F$ is the second-order approximation to the dynamics. That is, the error in the approximation is claimed to converge in probability to zero, at a more rapid rate than $\lVert(dv_t, \sigma)\rVert^2$, when $\lVert(dv_t, \sigma)\rVert^2$ goes to zero. This rate is the weakest kind of claim that can be made for a Taylor expansion. If we are willing to claim that third derivatives exist at the deterministic steady state, then we can replace the error term with $O_p\big(\lVert(dv_t, \sigma)\rVert^3\big)$. This claim does not depend on strict boundedness of the support of the distribution of $\varepsilon_t$, because we are only claiming our local accuracy with a certain (high) probability. Whatever the distribution of $\varepsilon_t$, $\sigma\varepsilon_t$ converges in probability to zero as $\sigma \to 0$, allowing us to make this claim. Of course this is all dependent on the underlying assumption that the original nonlinear model has dynamics differentiable of sufficiently high order in the neighborhood of the deterministic steady state, and on the existence and continuity of the expectations that occur in the statement of the model.
This "one-step-ahead" local accuracy in probability claim obviously can be extended to a corresponding claim to accuracy n-steps-ahead for any finite n. We have made no appeal to stationarity of the system in making these claims. Of course the size of the n for which accuracy remains good at a given level of $\sigma$ will in general be smaller for systems that are not stationary. But the qualitative nature of the accuracy claim is no different for non-stationary systems.

This type of finite-time-span, accuracy-in-probability claim is exactly what is appropriate for purposes of fitting a model to data (which always cover a finite time span) or for purposes of simulating the model from given initial conditions over a finite span of time. It is also exactly appropriate for the correct calculation of expected welfare, when welfare is constructed as a discounted sum of period utilities. The discounting means that accuracy of the approximation is unimportant after some time horizon in the future.
It goes without saying that no theoretical result about local or asymptotic global accuracy for approximate solutions can prove that in a particular model, with particular shock variances, one method or another is more accurate than another or accurate enough for some specific purpose. The emphasis by Jin and Judd (2002) on checking model validity is therefore appropriate. There is no uniquely best measure of solution accuracy, but by now a variety of stringent checks have been proposed. The methods that have been applied most widely (but not widely enough) are based on evaluating the conditional expectation in (3) at a collection of values of the lagged state vector. There are important practical questions as to how to select the collection of state vector values at which one evaluates the expectation and as to what metric to use in measuring the vector of deviations from the theoretical zero values for these expectations. Jin and Judd suggest deterministically fixing a collection of state variable values and a set of relative error metrics for the expectational errors, based on economic interpretation of the model being solved. den Haan and Marcet (1994) suggest another approach, in which the state variable values are generated stochastically via simulation and the metric for evaluation of expectational errors is based on statistical detectability of the errors in a sample of relevant length. Each of these approaches has pitfalls, but is worth consideration.
Though there is no widely understood alternative to this Euler equation residual family of accuracy checks at this point, there is probably room for further work in this area. For many purposes, the most relevant measure of accuracy is the accuracy of the solution's approximation to the mapping from $w_{t-1}$, $\varepsilon_t$, and $\sigma$ to $w_t$ corresponding to (9). This is not measured directly by the size of the Euler equation errors, but no more direct measure of the accuracy of this mapping is at this point commonly computed. The usual approaches to assessing numerical accuracy in nonlinear models could be applied here. For example, one could sample from solutions that are close to equivalent by the Euler equation accuracy criterion and describe the degree to which the solution function varies while Euler equation error stays within the convergence criteria.
7. FORECASTING AND SIMULATION

Forecasts $s$ steps ahead, $E_t[dw_{t+s}]$ and $\mathrm{Var}_t[dw_{t+s}]$, are the building blocks for the calculation of impulse response functions as well as welfare.

We build the forecasts from the second-order accurate dynamic model given by (15)-(16), modified here to reflect our assumption that the initial conditions satisfy the equations of the model and that therefore $dx_{t-1} = 0$ to first order. We abuse notation by using the same $F$'s here, for the pieces of the original $F$ matrices corresponding to $dy$'s, as we did for the original $F$ matrices in (15)-(16) that corresponded to the full $dv = [dy,\,dx]$ vector:

(28) $dy_t \doteq F_{1j}\,dy_{j,t-1} + F_{2j}\,\sigma\varepsilon_{j,t} + F_3\,\sigma^2 + \tfrac{1}{2}F_{11jk}\,dy_{j,t-1}\,dy_{k,t-1} + F_{12jk}\,dy_{j,t-1}\,\sigma\varepsilon_{k,t} + \tfrac{1}{2}F_{22jk}\,\sigma^2\varepsilon_{j,t}\varepsilon_{k,t}$

(29) $dx_t \doteq \tfrac{1}{2}M_{11jk}\,dy_{j,t}\,dy_{k,t} + M_2\,\sigma^2.$

We would then like to calculate, to second order accuracy, $E_t[dy_{t+s}]$ and $\mathrm{Var}_t[y_{t+s}]$.
To begin with, note that, since the conditional mean of $dy_{t+s}$ is of second order, the variance terms $\Omega_s \equiv \mathrm{Var}_t(y_{t+s})$ are correct to second order accuracy when computed from the first-order terms in the expansion (28) alone, and that, to second-order accuracy, $\mathrm{Var}_t(x_{t+s}) = 0$ since $dx_t$ itself is of second order.

For $s = 1$, it is easy to see from (28)-(29) that we have

(30) $d\bar y_{t+1} = E_t[dy_{t+1}] \doteq F_{1j}\,dy_{j,t} + F_3\,\sigma^2 + \tfrac{1}{2}F_{11jk}\,dy_{jt}\,dy_{kt} + \tfrac{\sigma^2}{2}F_{22jk}\,\Sigma_{jk}$

(31) $d\bar x_{t+1} = E_t[dx_{t+1}] \doteq \tfrac{1}{2}M_{11jk}\big(d\bar y_{j,t+1}\,d\bar y_{k,t+1} + \Omega_{1,jk}\big) + M_2\,\sigma^2.$
The expression in (31) for determining $E_t[dx_{t+1}]$ from the conditional mean and variance of $dy_{t+1}$ works equally well for determining $E_t[dx_{t+s}]$ from the conditional mean and variance of $dy_{t+s}$ for $s > 1$. The straightforward approach to determining $d\bar y_{t+s}$ and $d\bar x_{t+s}$ is to apply (30) recursively, computing $d\bar y_{t+s}$ from $d\bar y_{t+s-1}$ and $\Omega_{s-1}$, etc. This procedure is in fact second-order accurate, but it introduces higher order terms into the expansion. For example, since $d\bar y_{t+1}$ contains quadratic terms in $dy_t$, and (30) makes $d\bar y_{t+2}$ quadratic in $d\bar y_{t+1}$, in a simple recursive computation $d\bar y_{t+2}$ becomes quartic in $dy_t$. These extra high-order terms do not in general increase accuracy of the approximation, as they do not correspond to higher order coefficients in a Taylor series expansion of the true dynamic system, and in practice often lead to explosive time paths for $d\bar y_{t+s}$.
To see what goes wrong, consider the simple univariate model

$y_t = \rho y_{t-1} + \alpha y_{t-1}^2 + \varepsilon_t,$

where $|\rho| < 1$ and $\alpha > 0$. Though this model is locally stable about the deterministic steady state $y = 0$, it has a second steady state, at $(1-\rho)/\alpha$. If $y$ exceeds the other steady state, it will tend to diverge. This is likely to be a generic problem with quadratic expansions: they will have extra steady states not present in the original model, and some of these steady states are likely to mark transitions to unstable behavior.
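A small numerical experiment, with hypothetical parameter values, makes the point: with $\rho = 0.9$ and $\alpha = 0.1$ the quadratic law of motion has steady states at $0$ and $(1-\rho)/\alpha = 1$, and deterministic paths started just below and just above the second steady state behave very differently.

```matlab
% Iterate the deterministic part of y_t = rho*y_{t-1} + alpha*y_{t-1}^2 from two
% starting points straddling the second steady state (1-rho)/alpha = 1.
rho = 0.9; alpha = 0.1; T = 50;
for y0 = [0.9, 1.1]
  y = y0;
  for t = 1:T
    y = rho*y + alpha*y^2;
  end
  fprintf('y0 = %.1f  ->  y after %d periods = %.3g\n', y0, T, y);
end
% The path from 0.9 decays toward 0; the path from 1.1 diverges (overflows to Inf).
```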
Since the unique local dynamics are stable in a neighborhood of the steady state, it will be desirable to choose amongst the second order accurate expansions one that implies stability. Deriving sufficient conditions on the support of $\varepsilon_t$ to guarantee non-explosiveness under the iterative scheme (30)-(31) is in general a non-trivial task, and therefore it is useful to have available an algorithm which generates non-explosive forecasts and simulations without imposing explicit conditions on the support of $\varepsilon_t$. The mere fact that the generated forecasts are stable of course does not imply superior accuracy in general, especially when shocks are not bounded. However, stationarity will in general imply that, for a given neighborhood $U$ of the steady state and a given time horizon $T$, we can restrict $\sigma$ in such a way as to make the probability of leaving $U$ in time $T$ arbitrarily small.

Obtaining a stable solution based on (30) can be achieved by pruning out the extraneous high-order terms in each iteration, by computing the projections of the second order terms based on a first-order expansion, $d\tilde y_{t+1}$, of $E_t[dy_{t+1}]$, as follows:

(32) $d\bar y_{t+s} \doteq F_{1j}\,d\bar y_{j,t+s-1} + F_3\,\sigma^2 + \tfrac{1}{2}F_{11jk}\big(d\tilde y_{j,t+s-1}\,d\tilde y_{k,t+s-1} + \Omega_{s-1,jk}\big) + \tfrac{\sigma^2}{2}F_{22jk}\,\Sigma_{jk}$

(33) $d\bar x_{t+s} \doteq \tfrac{1}{2}M_{11jk}\big(d\tilde y_{j,t+s}\,d\tilde y_{k,t+s} + \Omega_{s,jk}\big) + M_2\,\sigma^2$

(34) $d\tilde y_{t+s} \doteq F_{1j}\,d\tilde y_{j,t+s-1}$

(35) $\Omega_{ij,s} = \sigma^2 F_{2ik}\,\Sigma_{k\ell}\,F_{2j\ell} + F_{1ik}\,\Omega_{k\ell,s-1}\,F_{1j\ell}.$

Using these equations recursively results in a $d\bar y_{t+s}$ series which, by construction, is quadratic in $dy_t$ for all $s$. Furthermore, when the eigenvalues of $F_1$ are less than one in absolute value, the first order accurate solution $d\tilde y_{t+s}$ is stable and hence so is the squared process $d\tilde y_{j,t+s}\,d\tilde y_{k,t+s}$. It follows that $d\bar y_{t+s}$ must be stable as well.[8] Note that the $F_{12}$ component of the second order expansion (the coefficients of the interactions between $dy_{t-1}$ and $\varepsilon_t$) does not enter this recursion at all.

[8] The same matrix eigenvalue conditions are at issue here as in section 4's discussion of existence of the solution to (21).
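The following Matlab function is a minimal sketch of the pruned forecast recursion (32)-(35) (it is not the code distributed with the paper; the argument list and array shapes are assumptions made for the illustration):

```matlab
% Pruned s-step-ahead forecasts of dy and dx, equations (32)-(35).
% F1 (ns x ns), F2 (ns x m), F3 (ns x 1): first-order and sigma^2 coefficients;
% F11 (ns x ns x ns), F22 (ns x m x m), M11 (nu x ns x ns), M2 (nu x 1): second-order
% arrays; Sigma = Var(eps); sigma the scale factor; dy0 the current dy_t; S the horizon.
function [dybar, dxbar, Omega] = pruned_forecast(F1, F2, F3, F11, F22, M11, M2, Sigma, sigma, dy0, S)
  ns    = size(F1, 1);
  F11f  = reshape(F11, ns, ns*ns);
  F22f  = reshape(F22, ns, []);
  M11f  = reshape(M11, size(M11, 1), ns*ns);
  Om    = zeros(ns);                       % Omega_0
  dytil = dy0;                             % first-order conditional mean, (34)
  dyb   = dy0;
  dybar = zeros(ns, S);  Omega = zeros(ns, ns, S);
  for s = 1:S
    dyb = F1*dyb + F3*sigma^2 ...
        + 0.5*F11f*(kron(dytil, dytil) + Om(:)) ...   % quadratic terms use the
        + 0.5*sigma^2*F22f*Sigma(:);                  % first-order mean, (32)
    Om    = sigma^2*(F2*Sigma*F2') + F1*Om*F1';       % Omega_s, (35)
    dytil = F1*dytil;                                 % (34)
    dybar(:, s)    = dyb;
    Omega(:, :, s) = Om;
  end
  dxbar = 0.5*M11f*(kron(dytil, dytil) + Om(:)) + M2*sigma^2;   % (33) at horizon S
end
```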
The same issues arise if the aim is to generate simulated time paths, rather than simply conditional expectations and variances of future variables. For this purpose, we can introduce the notation $dy^{(1)}_{t+s}$ and $dy^{(2)}_{t+s}$ for first and second order accurate simulated time paths, respectively. A recursive, non-explosive, pruned simulation scheme is then given by

(36) $dy^{(2)}_{t+s} \doteq F_{1j}\,dy^{(2)}_{j,t+s-1} + F_{2j}\,\sigma\varepsilon_{j,t+s} + F_3\,\sigma^2 + \tfrac{1}{2}F_{11jk}\,dy^{(1)}_{j,t+s-1}\,dy^{(1)}_{k,t+s-1} + F_{12jk}\,dy^{(1)}_{j,t+s-1}\,\sigma\varepsilon_{k,t+s} + \tfrac{\sigma^2}{2}F_{22jk}\,\varepsilon_{j,t+s}\,\varepsilon_{k,t+s}$

(37) $dx^{(2)}_{t+s} \doteq \tfrac{1}{2}M_{11jk}\,dy^{(1)}_{j,t+s}\,dy^{(1)}_{k,t+s} + M_2\,\sigma^2$

(38) $dy^{(1)}_{t+s} \doteq F_{1j}\,dy^{(1)}_{j,t+s-1} + F_{2j}\,\sigma\varepsilon_{j,t+s},$

where the $F_{12}$ terms that could be ignored in forming conditional expectations have necessarily returned for generation of accurate simulations. By preventing buildup of spurious higher-order terms, we make stability of the simulation over a long time path more likely, while at the same time preserving second-order accuracy of the mapping from initial variable values $y_t$, $x_t$, shocks $\varepsilon_{t+1},\ldots,\varepsilon_{t+s}$, and $\sigma$ to the simulated values $y^{(2)}_{t+1},\ldots,y^{(2)}_{t+s}$.
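A minimal Matlab sketch of one step of this pruned simulation (again with assumed array shapes, and with the second-order arrays already flattened as in the forecast sketch above, e.g. F11f = reshape(F11, ns, [])) is:

```matlab
% One period of the pruned simulation (36)-(38).  e is the period-(t+s) shock draw.
function [dy2, dy1, dx2] = pruned_sim_step(dy2, dy1, e, F1, F2, F3, F11f, F12f, F22f, M11f, M2, sigma)
  dy2 = F1*dy2 + sigma*F2*e + F3*sigma^2 ...
      + 0.5*F11f*kron(dy1, dy1) ...            % quadratic terms use the first-order path
      + sigma*F12f*kron(e, dy1) ...            % dy-eps interactions, needed in simulation
      + 0.5*sigma^2*F22f*kron(e, e);           % (36)
  dy1 = F1*dy1 + sigma*F2*e;                   % (38)
  dx2 = 0.5*M11f*kron(dy1, dy1) + M2*sigma^2;  % (37), using the updated first-order path
end
```

Iterating this step from $dy^{(2)}_t = dy^{(1)}_t = dy_t$ and adding back the steady state recovers a simulated path; the auxiliary first-order path is what keeps spurious higher-order terms from accumulating.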
It can help in understanding these recursions to append the vector $dy^{(1)} \otimes dy^{(1)}$ to $dy^{(2)}$ and use matrix notation:

(39) $\begin{bmatrix} dy^{(2)}_{t+1} \\ \big(dy^{(1)}_{t+1} \otimes dy^{(1)}_{t+1}\big) \end{bmatrix} = \Phi_1 \begin{bmatrix} dy^{(2)}_{t} \\ \big(dy^{(1)}_{t} \otimes dy^{(1)}_{t}\big) \end{bmatrix} + \Phi_2\,\sigma^2 + \psi_{t+1}$

with

(40) $\Phi_1 = \begin{bmatrix} F_1 & \tfrac{1}{2}\big(F^*_{11}\big) \\ 0 & \big(F_1 \otimes F_1\big) \end{bmatrix}$

(41) $\Phi_2 = \begin{bmatrix} F_3 + \tfrac{1}{2}F_{22jk}\,\Sigma_{jk} \\ \big(F_2 \otimes F_2\big)\operatorname{vec}(\Sigma) \end{bmatrix}$

(42) $\psi_t = \begin{bmatrix} F_2\,\sigma\varepsilon_t + F_{12jk}\,\sigma\varepsilon_{k,t}\,dy^{(1)}_{j,t-1} + \tfrac{\sigma^2}{2}F_{22jk}\big(\varepsilon_{jt}\varepsilon_{kt} - \Sigma_{jk}\big) \\ \sigma^2\Big[\big(F_2\varepsilon_t\big) \otimes \big(F_2\varepsilon_t\big) - \big(F_2 \otimes F_2\big)\operatorname{vec}(\Sigma)\Big] \end{bmatrix}.$

The $F^*_{11}$ in the definition of $\Phi_1$ (40) is a matrix with number of rows equal to the length of $y$ and with the second and third dimensions of the $F_{11}$ array vectorized into rows, so it is an $n_s \times n_s^2$ matrix. Note that $\Phi_1$ is upper block triangular and is stable exactly when the eigenvalues of $F_1$ are less than one in absolute value. Note also that, to second order accuracy,

$\mathrm{Var}(\psi_t) = \begin{bmatrix} \sigma^2 F_2\,\Sigma\,F_2' & 0 \\ 0 & 0 \end{bmatrix}.$

Calculations of conditional and unconditional first and second moments can therefore be carried out using (39) as if it were an ordinary first order VAR. This can be an aid to understanding, or to computation in small models, though for larger systems it is likely to be important for computational efficiency to take account of the special structure of the $\Phi_j$ matrices in (39).
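For example, the companion matrices (40)-(41) and the implied unconditional mean of $dy$ can be assembled directly from the $F$ arrays; the Matlab function below is a minimal sketch under the same shape assumptions as the earlier fragments (it is not the code distributed with the paper):

```matlab
% Build Phi1, Phi2 of (40)-(41) and compute the unconditional mean of dy implied by
% (39), treating it as a first-order VAR (valid when F1 is stable).
function [Phi1, Phi2, Edy] = companion_moments(F1, F2, F3, F11, F22, Sigma, sigma)
  ns   = size(F1, 1);
  F11f = reshape(F11, ns, ns*ns);                 % F*_11: last two dimensions flattened
  F22f = reshape(F22, ns, []);
  Phi1 = [F1,               0.5*F11f;
          zeros(ns*ns, ns), kron(F1, F1)];        % (40)
  Phi2 = [F3 + 0.5*F22f*Sigma(:);
          kron(F2, F2)*Sigma(:)];                 % (41)
  zbar = (eye(ns + ns*ns) - Phi1) \ (Phi2*sigma^2);   % unconditional mean of the stack
  Edy  = zbar(1:ns);                              % E[dy] to second order
end
```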
Finite-time-span, accuracy-in-probability claims will not justify estimating unconditional expectations of any functions of variables in the model via simulation. To make the effects of initial conditions die away, such simulations must cover long spans of time. If the second-order approximation is non-stationary, expectations calculated from simulations of it will of course not converge. If the true nonlinear model is non-stationary, then the true unconditional expectations will in general not exist, even though it is possible that the local second-order approximation is stationary, so again in this case it will not be possible to estimate unconditional expectations from simulated paths.

When both the true nonlinear model and the second order approximate model are stationary and ergodic, and the true unconditional expectation in question is a twice-differentiable function of $\sigma$ in the neighborhood of $\sigma = 0$, then it is possible to estimate the expectation from long simulations of the approximate model, with the estimates accurate locally in $\sigma$ in the usual sense. This is true even though it may be (e.g. because of unbounded support of $\varepsilon_t$) that with probability one the path of the model repeatedly enters regions where the local approximation is inaccurate. This is possible because as $\sigma \to 0$ the fraction of time spent in these regions goes to zero, for both the true and the approximate model.

However it will most often be preferable to estimate an expectation by using the second-order approximation analytically, expanding the function whose expectation is being taken as a Taylor series by the methods described in this section.
8. WELFARE

One can easily produce cases where the second-order approximation is necessary to get an accurate evaluation of certain aspects of the model. Utility-based welfare calculation is one case. For example, calculating welfare effects of various monetary and fiscal policies or welfare effects of changes in economic environment such as financial market structure should include second-order or even higher-order terms in order to get an accurate measure. Kim and Kim (2003a) present an example of how inaccurate the linearized solution can be in calculating welfare using a two-country model. Using the linearized solution, welfare of autarky can appear to be higher than that of complete markets, solely because of the inaccuracy of the linearization method. Another application in which second-order approximation is important is examination of asset price behavior in DSGEs. Linearized solutions will imply equal expected returns on all assets. Second order solutions will generate correct risk premia, though generally to analyze time variation in risk premia will require higher than second-order accuracy.
Equation (39) makes it relatively straightforward to see how to carry out a second-order accurate welfare calculation. Welfare is defined as a discounted sum of expected utility. Let the period utility function be given by $u: \mathbb{R}^{n_s} \to \mathbb{R}$.[9] Then the utility conditional on an initial distribution of $y_0$ with mean and variance $(\mu, \Omega)$ is
(43) $U(\mu, \Omega) = E_0\Big[\sum_{t=0}^{\infty}\beta^t u(y_t)\Big] \doteq \frac{u(\bar y)}{1-\beta} + E_0\Big[\sum_{t=0}^{\infty}\beta^t\Big(\nabla u(\bar y)\,dy^{(2)}_t + \tfrac{1}{2}\operatorname{vec}\big(\nabla^2 u(\bar y)\big)'\big(dy^{(1)}_t\otimes dy^{(1)}_t\big)\Big)\Big]$

(44) $U(\mu, \Omega) = \frac{u(\bar y)}{1-\beta} + \Big[\nabla u(\bar y)\quad \tfrac{1}{2}\operatorname{vec}\big(\nabla^2 u(\bar y)\big)'\Big]\big[I - \beta\Phi_1\big]^{-1}\left(\begin{bmatrix}\mu\\ \operatorname{vec}(\Omega + \mu\mu')\end{bmatrix} + (1-\beta)^{-1}\beta\,\Phi_2\,\sigma^2\right)$

[9] Of course often in growth models utility is a function of consumption, which is not a conventional state variable. To use the formulation we develop here, then, consumption (an $x$ variable) has to be replaced by the corresponding component of $h(y, \sigma)$. Also, because we work entirely in terms of $y$, we are not covering the case where the initial distribution of $w$ does not lie on the saddle path. The methods we describe here can be expanded to cover this case and to allow $x$ to enter $u$, at the cost of some increase in the burden of notation.
If we are interested only in unconditional expected $u$, we can arrive at the correct formula by multiplying (44) through by $1-\beta$ and taking the limit as $\beta \to 1$, giving us

(45) $E[u(y_t)] = u(\bar y) + \Big[\nabla u(\bar y)\quad \tfrac{1}{2}\operatorname{vec}\big(\nabla^2 u(\bar y)\big)'\Big]\big(I - \Phi_1\big)^{-1}\Phi_2\,\sigma^2.$
Note that in (44) we make no use, explicitly or implicitly, of $F_{12}$. Also note that though the matrix $I - \beta\Phi_1$ appears in the formula inverted, the utility calculation only requires

$\Big[\nabla u(\bar y)\quad \tfrac{1}{2}\operatorname{vec}\big(\nabla^2 u(\bar y)\big)'\Big]\big(I - \beta\Phi_1\big)^{-1},$

whose computation is only an equation-solving problem, not a full inversion;[10] furthermore, this part of the computation does not need to be repeated as $\mu$ and $\Omega$ are varied. Finally, note that (45) uses only $(I - \Phi_1)^{-1}\Phi_2\,\sigma^2$, regardless of the form of $u$. This is again an equation-solving problem. So if we are interested only in unconditional expectations, even in unconditional expectations of many different functions $u$, the computation of a full second-order correction may be much simpler than calculation of the full second-order expansion of the dynamics.

[10] Though for an $n\times n$ matrix $A$ both solving $Ax = b$ for $x$ and computing $A^{-1}$ are $O(n^3)$ operations, the latter is substantially more time consuming. In Matlab inversion takes roughly twice the time.
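A minimal Matlab sketch of the conditional welfare formula (44) as written above, using a linear solve in place of the explicit inverse (assumed inputs; not the code distributed with the paper):

```matlab
% Conditional welfare (44).  ubar = u(ybar); gradu = gradient of u at ybar (1 x ns row);
% hessu = Hessian of u at ybar (ns x ns); beta = discount factor; Phi1, Phi2 from
% (40)-(41); mu, Om = mean and variance of dy_0; sigma = scale of the shocks.
function W = welfare_conditional(ubar, gradu, hessu, beta, Phi1, Phi2, mu, Om, sigma)
  c  = [gradu, 0.5*hessu(:)'];                   % row vector multiplying the stacked state
  cA = c / (eye(size(Phi1)) - beta*Phi1);        % solves x*(I - beta*Phi1) = c, no inv()
  z0 = [mu; reshape(Om + mu*mu', [], 1)];        % E_0 of the stacked vector at t = 0
  W  = ubar/(1 - beta) + cA*(z0 + beta/(1 - beta)*Phi2*sigma^2);   % (44)
end
```

Since cA does not depend on mu or Om, it can be computed once and reused as the initial distribution is varied, which is the point made above.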
It is these simplifications, applied to particular models, that are the insights provided by the papers that have put forward bias-correction methods for making second-order accurate expected welfare computations in DSGE models (Kim and Kim, 2003a; Sutherland, 2002).

We should note that there is a situation in which second-order accurate evaluations of welfare can avoid entirely the need for a second-order expansion of the model solution. If $\nabla u(\bar y) = 0$, as would be true if the deterministic steady-state sets $y$ to the value that maximizes $u(y)$, then only the lower blocks of $\Phi_1$ and $\Phi_2$ enter the solution, as can be seen from (44) or (45). As can be seen from (40) and (42), these blocks contain $F_1$ and $F_2$ only, not any terms from the second-order solution. Of course in most problems with discounting, even an optimal solution will not maximize static welfare $u(y)$ in the steady state, so this result will not apply. Also, even where the solution has been computed to maximize static period welfare $u$, the result depends on having a second order expansion of $u$ in terms of the state vector $y$. When the problem has been formulated (as in usual growth models) with a non-state variable (e.g. consumption) appearing in the utility function, the second-order expansion of the utility function in terms of $y$ may require use of the second-order solution for $x$ as a function of $y$.[11]
8.1. Conditional vs. Unconditional welfare. From the discussion in the preceding section it is apparent that evaluating expected welfare based on unconditional $E[u(y)]$ is a more straightforward task than evaluating the conditional expectation of discounted expected utility at a given date.[12] It is therefore not surprising that many existing papers have used unconditional welfare for evaluating policies. Examples include Clarida, Galí, and Gertler (1999), Rotemberg and Woodford (1997, 1999), Sutherland (2002) and Kollmann (2002).

[11] Rotemberg and Woodford (1997) is an example of a context where use of the first-order solution for welfare analysis is justified by special regularity conditions. The paper evaluated welfare using the unconditional expectation of period utility. Regularity conditions required to justify use of the first-order solution in the paper's model include an assumption that some other policy change perfectly offsets second-order effects of monetary policy on the mean level of output and an assumption that monetary policy is the only source of inefficient fluctuations in prices.

[12] Woodford (2002) discusses the differences between unconditional and conditional welfare in calculating welfare effects of monetary policies.
There are strong objections in principle to use of the unconditional welfare criterion. We know that it takes time for the economy to move from one steady state to another, and unconditional welfare neglects the welfare effects during the transitional period. It is therefore generally not in fact optimal, in problems with discounting, to use policies that maximize the unconditional expectation of one-period welfare. This is not a new point (it is the same point as the non-optimality of driving the rate of return to zero in a growth model) and it has been recognized in the DSGE literature in, e.g., Kim and Kim (2003b) and Woodford (2002).
Because unconditional welfare can often be computed easily, using the bias correction shortcut, it is important to note that using unconditional welfare can give nonsensical results. Kim and Kim (2003a) construct a two-country DSGE model and compute risk-sharing gains from autarky to the complete-markets economy using a second-order approximation method. Welfare is defined as conditional welfare, and the results show that there are positive welfare gains from moving from autarky to the complete-markets economy. But the unconditional welfare measure can for certain parameter values produce the paradoxical result that autarky generates a higher level of welfare than the complete-markets economy.
The use of conditional welfare does not imply that results necessarily are tied to some particular initial state. One can condition on a distribution of values for the initial state. The critical point is that when comparing two policies or equilibria one should use the same distribution for the initial state for each. When there is no time-inconsistency problem the optimal policy will have the property that no matter what initial distribution is specified for the state, it will produce a higher conditional expectation of welfare than any other policy. However, when comparing a collection of policies that are not optimal, one may find that rankings of policies vary with the assumed distribution of the initial state.

When there is a time-inconsistency problem, the optimal policy generally depends on the initial conditions, even if we restrict attention to policy rules that are a fixed mapping from state to actions. Using a conditional expectation as the welfare measure does not avoid this problem. One attempt to get around this issue is the suggestion in, e.g., Giannoni and Woodford (2002) that policy should follow the rule that would prevail under commitment in the limit as the initial conditions recede into the past. This "timeless perspective" policy can be implemented by treating the Lagrange multipliers on private sector Euler equations as states, and then maximizing conditional expected discounted utility.[13]
9. CONCLUSION

Use of perturbation methods to improve analysis of DSGE models is still in its early stages. Programs that automate computations for models higher than second order are just beginning to emerge. Methods of dealing with the kinds of singularities that show up in economic models (for example the indeterminacy of asset allocations in standard portfolio problems when variances are zero) are still not widely understood. And we have only begun to get a feel for where these methods are useful and what their limitations are. Real progress is being made, however, in an atmosphere that is both competitive enough to be stimulating and cooperative enough that researchers located around the world are benefiting from each other's insights.

[13] Note that though the timeless perspective policy is a useful benchmark, it cannot resolve the fundamental problem of time inconsistency. A policy-maker who can make believable commitments will not want to choose the timeless-perspective solution, while one that cannot make believable commitments cannot implement the timeless-perspective solution.
REFERENCES

ANDERSON, G., AND A. LEVIN (2002): "A user-friendly, computationally-efficient algorithm for obtaining higher-order approximations of non-linear rational expectations models," Discussion paper, Board of Governors of the Federal Reserve.

BERGIN, P. R., AND I. TCHAKAROV (2002): "Does Exchange Rate Risk Matter for Welfare? A Quantitative Investigation," Discussion paper, University of California at Davis, http://www.econ.ucdavis.edu/faculty/bergin/.

CLARIDA, R., J. GALÍ, AND M. GERTLER (1999): "The science of monetary policy: A new Keynesian perspective," Journal of Economic Literature, 37, 1661-1707.

COLLARD, F., AND M. JUILLARD (2000): "Perturbation Methods for Rational Expectations Models," Discussion paper, CEPREMAP, Paris, [email protected].

DEN HAAN, W. J., AND A. MARCET (1994): "Accuracy in Simulations," The Review of Economic Studies, 61(1), 3-17.

GASPAR, J., AND K. L. JUDD (1997): "Solving large-scale rational expectations models," Macroeconomic Dynamics, 1, 44-75.

GIANNONI, M. P., AND M. WOODFORD (2002): "Optimal Interest-Rate Rules: I. General Theory," Discussion paper, Columbia University and Princeton University, http://www.princeton.edu/woodford.

JIN, H., AND K. L. JUDD (2002): "Perturbation Methods for General Dynamic Stochastic Models," Discussion paper, Stanford University, [email protected].

JUDD, K. L. (1998): Numerical Methods in Economics. MIT Press, Cambridge, Mass.

KIM, J., AND S. KIM (2003a): "Spurious Welfare Reversals in International Business Cycle Models," Journal of International Economics, 60, 471-500.

KIM, J., AND S. KIM (2003b): "Welfare Effects of Tax Policy in Open Economies: Stabilization and Cooperation," Discussion paper, University of Virginia and Tufts University, http://www.tufts.edu/skim20.

KING, R. G., AND M. WATSON (1998): "The Solution of Singular Linear Difference Systems Under Rational Expectations," International Economic Review, 39(4), 1015-1026.

KLEIN, P. (2000): "Using the generalized Schur form to solve a multivariate linear rational expectations model," Journal of Economic Dynamics and Control, 24(10), 1405-1423.

KOLLMANN, R. (2002): "Monetary Policy Rules in the Open Economy: Effects on Welfare and Business Cycles," Journal of Monetary Economics, 49, 989-1015, http://www1.wiwi.uni-bonn.de/users/rkollmann/www/.

ROTEMBERG, J. J., AND M. WOODFORD (1997): "An Optimization-Based Econometric Framework for the Evaluation of Monetary Policy," NBER Macro Annual, 12, 297-345.

ROTEMBERG, J. J., AND M. WOODFORD (1999): "Interest rate rules in an estimated sticky-price model," in Monetary Policy Rules, ed. by J. B. Taylor. University of Chicago Press, Chicago.

SCHMITT-GROHÉ, S., AND M. URIBE (2002): "Solving dynamic general equilibrium models using a second-order approximation to the policy function," Discussion paper, Rutgers University and University of Pennsylvania.

SIMS, C. A. (2001): "Solving Linear Rational Expectations Models," Computational Economics, 20(1-2), 1-20, http://www.princeton.edu/sims/.

SUTHERLAND, A. (2002): "A simple second-order solution method for dynamic general equilibrium models," Discussion paper, University of St. Andrews, http://www.st-andrews.ac.uk/ajs10/home.html.

WOODFORD, M. (2002): "Inflation Stabilization and Welfare," Contributions to Macroeconomics, 2(1), Article 1.

FEDERAL RESERVE BOARD, TUFTS UNIVERSITY, NORTHWESTERN UNIVERSITY, PRINCETON UNIVERSITY

E-mail address: [email protected]