
Likelihoods For DSGE Models

Frank Schorfheide
University of Pennsylvania, CEPR, NBER

June 19, 2013


Computing the DSGE Model Likelihood
• DSGE model solution leads to a (possibly nonlinear) state-space
model.

• We use a so-called filter to evaluate the likelihood function

• State-space representation of the linearized DSGE model:

  yt = Ψ0 (θ) + Ψ1 (θ)t + Ψ2 (θ)st (+ ut )   (measurement)
  st = Φ1 (θ)st−1 + Φε (θ)εt                 (state transition)

Frank Schorfheide Likelihoods For DSGE Models


Filtering - General Idea
• Likelihood function:

  p(Y1:T |θ) = ∏_{t=1}^{T} p(yt |Y1:t−1 , θ)

• A filter generates a sequence of conditional distributions st |Y1:t (“it removes the noise ut from the data yt to extract the signal st ”).
• Iterations:
  • Initialization at time t − 1: p(st−1 |Y1:t−1 , θ)
  • Forecasting t given t − 1:
    1. Transition equation:
       p(st |Y1:t−1 , θ) = ∫ p(st |st−1 , Y1:t−1 , θ) p(st−1 |Y1:t−1 , θ) dst−1
    2. Measurement equation:
       p(yt |Y1:t−1 , θ) = ∫ p(yt |st , Y1:t−1 , θ) p(st |Y1:t−1 , θ) dst
  • Updating with Bayes theorem, once yt becomes available:

    p(st |Y1:t , θ) = p(st |yt , Y1:t−1 , θ) = p(yt |st , Y1:t−1 , θ) p(st |Y1:t−1 , θ) / p(yt |Y1:t−1 , θ)



Kalman Filter (Linear+Gaussian) and
Particle Filter (Fully Nonlinear)

• If the DSGE model is log-linearized and the errors are Gaussian, then
the Kalman filter can be used to construct the likelihood function.

• We will review the Kalman filter; many textbook treatments are


available.

• Alternatively, one can compute the likelihood by sequential
numerical integration, the approach used for DSGE models that have
been solved nonlinearly. The resulting algorithm is called the particle
filter or sequential Monte Carlo filter:
• Early contributions: Gordon, Salmond, and Smith (1993) and
Kitagawa (1996)
• Stochastic volatility models: Pitt and Shephard (1999) and Kim,
Shephard and Chib (1998).
• DSGE models: Fernández-Villaverde and Rubio-Ramírez (2007)



Kalman Filter
• We’ll consider the state-space model

  yt = Ψst + ut
  st = Φst−1 + εt

  where εt ∼ iidN(0, Σ) and ut ∼ iidN(0, H).

• Initialization:
  1. If st is stationary, we can initialize the filter with the unconditional
     distribution of st . Its covariance matrix solves

     E[st st′ ] = ΦE[st st′ ]Φ′ + Σ;

  2. otherwise, one could assume that st = 0 for t = −τ , or treat s0 as a
     parameter.

• In a linear Gaussian state-space model all distributions are Gaussian.
  Thus, the Kalman filter only tracks means and covariance matrices.
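For the stationary case, the unconditional covariance can be obtained by solving the discrete Lyapunov equation E[st st′] = ΦE[st st′]Φ′ + Σ numerically. A minimal sketch in Python, with illustrative values of Φ and Σ (not taken from the slides):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative parameter values for a 2-dimensional stationary state vector.
Phi = np.array([[0.8, 0.1],
                [0.0, 0.5]])
Sigma = np.array([[0.5, 0.0],
                  [0.0, 0.3]])

# Solve P = Phi P Phi' + Sigma, the fixed-point equation for E[s_t s_t'].
P0 = solve_discrete_lyapunov(Phi, Sigma)

# Check the fixed-point property and symmetry.
assert np.allclose(P0, Phi @ P0 @ Phi.T + Sigma)
```

The `solve_discrete_lyapunov` routine returns the fixed point directly; iterating P ← ΦPΦ′ + Σ to convergence would work as well.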



Kalman Filter – Initialization
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• Write E[s0 ] = ŝ0|0 and V[s0 ] = P0|0 .

• Prior distribution for initial state s0 : s0 ∼ N (ŝ0|0 , P0|0 ).



Kalman Filter – Forecasting
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• At (t − 1)+ , that is, after observing yt−1 , the belief about the state
vector has the form st−1 |Y1:t−1 ∼ N (ŝt−1|t−1 , Pt−1|t−1 ).

• “Posterior” from period t − 1 turns into a prior for (t − 1)+ .

• Since st−1 and εt are independent multivariate normal random
  variables, it follows that

  st |Y1:t−1 ∼ N (ŝt|t−1 , Pt|t−1 )

  where

  ŝt|t−1 = Φŝt−1|t−1
  Pt|t−1 = ΦPt−1|t−1 Φ′ + Σ



Kalman Filter – Forecasting and Likelihood Function
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• The conditional distribution of yt |st , Y1:t−1 is of the form


yt |st , Y1:t−1 ∼ N (Ψst , H)
• Since st |Y1:t−1 ∼ N (ŝt|t−1 , Pt|t−1 ), we can deduce that the
marginal distribution of yt conditional on Y1:t−1 is of the form
yt |Y1:t−1 ∼ N (ŷt|t−1 , Ft|t−1 )
where ŷt|t−1 = Ψŝt|t−1 and Ft|t−1 = ΨPt|t−1 Ψ′ + H.

• Likelihood function:

  p(Y1:T |Ψ, Φ, Σ, H)
    = (2π)^{−nT/2} ( ∏_{t=1}^{T} |Ft|t−1 | )^{−1/2}
      × exp{ −(1/2) ∑_{t=1}^{T} (yt − ŷt|t−1 )′ Ft|t−1⁻¹ (yt − ŷt|t−1 ) }
Kalman Filter – Updating
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• To obtain the posterior distribution of st |yt , Y1:t−1 , note that

  st = ŝt|t−1 + (st − ŝt|t−1 )
  yt = ŷt|t−1 + Ψ(st − ŝt|t−1 ) + ut

• and that the joint distribution of st and yt is given by

  [ st ]             ( [ ŝt|t−1 ]   [ Pt|t−1    Pt|t−1 Ψ′ ] )
  [ yt ] | Y1:t−1 ∼ N( [ ŷt|t−1 ] , [ ΨPt|t−1   Ft|t−1   ] )



Kalman Filter – Updating
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• Applying Bayes theorem, i.e., calculating a conditional distribution
  from the joint distribution above, yields

  st |yt , Y1:t−1 ∼ N (ŝt|t , Pt|t )

  where

  ŝt|t = ŝt|t−1 + Pt|t−1 Ψ′ Ft|t−1⁻¹ (yt − ŷt|t−1 )
  Pt|t = Pt|t−1 − Pt|t−1 Ψ′ Ft|t−1⁻¹ ΨPt|t−1

• The conditional mean and covariance ŷt|t−1 and Ft|t−1 were given
  above.

• This completes one iteration of the algorithm. The posterior st |Y1:t


is the prior for the next iteration.
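The forecasting and updating steps, together with the likelihood increments from the forecast distribution of yt , fit in a short loop. A sketch in NumPy; the function name and interface are my own, not from the slides:

```python
import numpy as np

def kalman_filter(y, Psi, Phi, Sigma, H, s0, P0):
    """Evaluate the log likelihood of y_t = Psi s_t + u_t, s_t = Phi s_{t-1} + eps_t,
    with eps_t ~ N(0, Sigma), u_t ~ N(0, H), and prior s_0 ~ N(s0, P0)."""
    n = y.shape[1]
    s, P = s0, P0
    loglik = 0.0
    for t in range(y.shape[0]):
        # Forecasting: s_{t|t-1} and P_{t|t-1}
        s = Phi @ s
        P = Phi @ P @ Phi.T + Sigma
        # Forecast distribution of y_t: N(yhat, F)
        yhat = Psi @ s
        F = Psi @ P @ Psi.T + H
        nu = y[t] - yhat                      # one-step-ahead forecast error
        Finv = np.linalg.inv(F)
        _, logdetF = np.linalg.slogdet(F)
        loglik += -0.5 * (n * np.log(2 * np.pi) + logdetF + nu @ Finv @ nu)
        # Updating: condition on y_t via the Kalman gain
        K = P @ Psi.T @ Finv
        s = s + K @ nu
        P = P - K @ Psi @ P
    return loglik
```

At the end of each loop pass, (s, P) holds the posterior mean and covariance ŝt|t , Pt|t , which serve as the prior for the next pass.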



Particle Filter
• We consider the same state-space model as before:

  yt = Ψst + ut
  st = Φst−1 + εt

  where E[εt εt′ ] = Σ and E[ut ut′ ] = H.

• But now we will use Monte Carlo integration to average out the
latent states.

• This procedure can be easily generalized to nonlinear/non-Gaussian


state space models.



Particle Filters - Initialization
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• Suppose we start out by drawing N particles from an initial
  distribution p(s0 ), which might correspond to the unconditional
  distribution associated with the state transition equation: {s0^(i)}_{i=1}^{N}.

• By induction, in period t we start with particles {st−1^(i)}_{i=1}^{N} which
  approximate p(st−1 |Y1:t−1 ).



Particle Filters - Forecasting
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• Draw one-step-ahead forecasted particles {s̃t^(i)}_{i=1}^{N} from p(st |Y1:t−1 )
  as follows: for each st−1^(i) draw an εt^(i) ∼ N (0, Σ) and let

  s̃t^(i) = Φst−1^(i) + εt^(i)

• This simulation step approximates

  p(st |Y1:t−1 ) = ∫ p(st |st−1 ) p(st−1 |Y1:t−1 ) dst−1 ≈ (1/N) ∑_{i=1}^{N} p(st |st−1^(i))



Particle Filter - Likelihood Function
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• The distribution yt |Y1:t−1 is approximated by

  p(yt |Y1:t−1 ) ≈ (1/N) ∑_{i=1}^{N} p(yt |s̃t^(i)).

• Define the un-normalized weights

  π̃t^(i) = (2π)^{−n/2} |H|^{−1/2} exp{ −(1/2) (yt − Ψs̃t^(i))′ H⁻¹ (yt − Ψs̃t^(i)) }

• The log-likelihood function can be approximated as follows:

  ln p(Y1:T |Ψ, Φ, Σ, H) = ∑_{t=1}^{T} ln p(yt |Y1:t−1 , Ψ, Φ, Σ, H)
                         ≈ ∑_{t=1}^{T} ln( (1/N) ∑_{i=1}^{N} π̃t^(i) ).



Particle Filters - Updating
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• The goal is to approximate

  p(st |Y1:t ) = p(yt |st ) p(st |Y1:t−1 ) / p(yt |Y1:t−1 ).

• Note that for each particle s̃t^(i) we have:

  p(yt |s̃t^(i)):      equals π̃t^(i)
  p(s̃t^(i)|Y1:t−1 ):  point mass 1/N
  p(yt |Y1:t−1 ):     approximated by (1/N) ∑_{j=1}^{N} π̃t^(j)

• Thus, we assign to each particle s̃t^(i) the normalized weight

  πt^(i) = π̃t^(i) / ∑_{j=1}^{N} π̃t^(j).



Particle Filters - Effective Number of Particles and
Resampling
yt = Ψst + ut , st = Φst−1 + εt where εt ∼ N(0, Σ) and ut ∼ N(0, H).

• At every step t we can calculate an estimate of the effective number
  of particles as follows:

  N̂eff = 1 / ∑_{i=1}^{N} (πt^(i))²

• We can generate a new set of particles {st^(i)}_{i=1}^{N} by re-sampling with
  replacement N times from the approximate discrete representation of
  p(st |Y1:t , θ) given by {(s̃t^(j), πt^(j))}_{j=1}^{N}, so that

  Pr{ st^(i) = s̃t^(j) } = πt^(j),  j = 1, . . . , N, for each i.

• The resulting sample is in fact an iid sample from the discretized
  density of p(st |Y1:t ), and hence is equally weighted.

• This completes the iteration of the filter.
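The forecasting, weighting, and resampling steps above can be combined into a bootstrap particle filter. A sketch in NumPy that resamples every period, as on the slides; the function name and interface are my own:

```python
import numpy as np

def bootstrap_pf(y, Psi, Phi, Sigma, H, s0_draws, rng):
    """Particle-filter estimate of the log likelihood of
    y_t = Psi s_t + u_t, s_t = Phi s_{t-1} + eps_t."""
    N, ns = s0_draws.shape
    n = y.shape[1]
    Hinv = np.linalg.inv(H)
    log_norm = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(H)[1])
    chol_Sigma = np.linalg.cholesky(Sigma)
    s = s0_draws.copy()
    loglik = 0.0
    ess = []
    for t in range(y.shape[0]):
        # Forecasting: propagate every particle through the state transition.
        s = s @ Phi.T + rng.standard_normal((N, ns)) @ chol_Sigma.T
        # Un-normalized weights pi~_t^(i) = p(y_t | s~_t^(i)).
        nu = y[t] - s @ Psi.T
        logw = log_norm - 0.5 * np.einsum('ij,jk,ik->i', nu, Hinv, nu)
        w = np.exp(logw)
        loglik += np.log(w.mean())
        # Normalized weights and the effective number of particles.
        pi = w / w.sum()
        ess.append(1.0 / np.sum(pi ** 2))
        # Resampling with replacement: Pr(s_t^(i) = s~_t^(j)) = pi_t^(j).
        s = s[rng.choice(N, size=N, p=pi)]
    return loglik, np.array(ess)
```

Because the particles are resampled to equal weights every period, the likelihood increment is simply the mean of the un-normalized weights.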


Illustration
Consider the linear Gaussian state-space model

  yt = Γ + Ψst + ut
  st = Φst−1 + εt

where E[ut ut′ ] = H and E[εt εt′ ] = Σ. Let the dimension of yt be 2 × 1
and st be 1 × 1.

• Fix parameters at

  Γ = [0.7, 0.8]′ ,  Ψ = [1, 0.65]′ ,  H = diag(0.47², 0.62²),
  Φ = 1,  Σ = 0.75²

• 50 observations
• Kalman Filter delivers the exact likelihood.
• Particle Filter with 10, 100, 1000 particles.
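A sketch of the data-generating step for this experiment (the seed is an arbitrary choice, so the draws will not reproduce the exact RMSE and likelihood numbers reported in the tables):

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed

T = 50
Gamma = np.array([0.7, 0.8])
Psi = np.array([1.0, 0.65])
H_sd = np.array([0.47, 0.62])    # measurement-error standard deviations
Phi, Sigma_sd = 1.0, 0.75        # random-walk state with innovation sd 0.75

s = np.zeros(T)
y = np.zeros((T, 2))
s_lag = 0.0
for t in range(T):
    s[t] = Phi * s_lag + Sigma_sd * rng.standard_normal()
    y[t] = Gamma + Psi * s[t] + H_sd * rng.standard_normal(2)
    s_lag = s[t]
```

Note that Φ = 1 makes the state a random walk, so the filters here cannot be initialized from an unconditional distribution; s0 = 0 is used instead.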



State Estimates

Figure: Kalman Filter Figure: Particle Filter (N = 10)

RMSE

KF N = 10 N = 100 N = 1000
0.37 0.41 0.37 0.36



State Estimates

Figure: Particle Filter (N = 100) Figure: Particle Filter (N = 1000)

RMSE

KF N = 10 N = 100 N = 1000
0.37 0.41 0.37 0.36



Likelihood Estimates
Likelihood estimates based on the particle filter.

Figure: # of Particles = 10 (KF vs. PF, N = 10)   Figure: # of Particles = 100 (KF vs. PF, N = 100)
[plots omitted]

Log-likelihood

KF N = 10 N = 100 N = 1000
-99.85 -99.52 -99.61 -99.87



Likelihood Estimates
Likelihood estimates based on the particle filter.

Figure: # of Particles = 100 (KF vs. PF, N = 100)   Figure: # of Particles = 1000 (KF vs. PF, N = 1000)
[plots omitted]

Log-likelihood

KF N = 10 N = 100 N = 1000
-99.85 -99.52 -99.61 -99.87



Effective Number of Particles

Figure: Effective Number of Particles (PF with N = 10, 100, 1000 over t = 1, . . . , 50)
[plot omitted]

Mean

N = 10 N = 100 N = 1000
5.39 51.43 511.54



(Sequential) Importance Sampling

• A particle filter is a sequential importance sampler, which also


involves some resampling steps.

• To understand the particle filter, it is important to understand


importance sampling.



Importance Sampling
• Let π(θ) = f (θ)/Z . Monte Carlo approximation of

  Eπ [h(θ)] = ∫ h(θ) π(θ) dθ = (1/Z) ∫ h(θ) w (θ) g (θ) dθ,

  where

  w (θ) = f (θ)/g (θ).

• Define

  h̄N = [ (1/N) ∑_{i=1}^{N} h(θ^i ) w (θ^i ) ] / [ (1/N) ∑_{i=1}^{N} w (θ^i ) ],

  where the “particles” θ^i are drawn from the distribution with
  density g (·).

• Notation: Eg and Eπ denote expectations under g (·) and π(·),
  respectively.
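A minimal sketch of the self-normalized estimator h̄N , with target π = N(0, 1), proposal g = N(0, 2²), and h(θ) = θ², so the true value Eπ[h] = 1 is known; all of these choices are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N = 100_000
theta = rng.normal(0.0, 2.0, size=N)        # particles drawn from g = N(0, 4)
# Importance weights w = f/g; here f is the standardized target, so Z = 1.
w = stats.norm.pdf(theta, 0.0, 1.0) / stats.norm.pdf(theta, 0.0, 2.0)

# Self-normalized estimate of E_pi[theta^2]; the true value is 1.
h_bar = np.sum(theta**2 * w) / np.sum(w)
```

With Z = 1 the denominator is not strictly needed, but the self-normalized form is exactly the h̄N defined above and works when Z is unknown.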



Importance Sampling: Consistency
• Note that

  Eg [h f /g ] = ∫ h Z π dθ = Z Eπ [h].

• Likewise,

  Eg [f /g ] = ∫ Z π dθ = Z Eπ [1] = Z .

• Provided that

  Eg [ |h f /g | ] < ∞ and Eg [ |f /g | ] < ∞,

  we can deduce from the Strong Law of Large Numbers and the
  Continuous Mapping Theorem that

  h̄N → Eπ [h] almost surely.



Back to the Particle Filter - A Refined Updating Step
• Consider

  p(st |Y1:t , st−1^(i)) = p(yt |st ) p(st |st−1^(i)) / p(yt |Y1:t−1 , st−1^(i))

• Suppose we want to compute

  ∫ f (st ) p(st |Y1:t , st−1^(i)) dst
    = [ ∫ f (st ) p(yt |st ) p(st |st−1^(i)) dst ] / [ ∫ p(yt |st ) p(st |st−1^(i)) dst ]

• Recall the idea of the importance sampler: we generate draws from some
  normalized density g (st ) and approximate

  ∫ f (st ) p(st |Y1:t , st−1^(i)) dst
    ≈ [ (1/J) ∑_{j=1}^{J} f (st^(j)) p(yt |st^(j)) p(st^(j)|st−1^(i)) / g (st^(j)) ]
      / [ (1/J) ∑_{j=1}^{J} p(yt |st^(j)) p(st^(j)|st−1^(i)) / g (st^(j)) ]



Particle Filters - A Refined Updating Step
• Optimal yet often infeasible choice:

  g (st ) = p(st |Y1:t , st−1^(i)),

  which leads to the constant importance weight

  π̃t^(j) = ∫ p(yt |st ) p(st |st−1^(i)) dst .

