Lecture 2: Likelihoods for DSGE Models
Frank Schorfheide
University of Pennsylvania, CEPR, NBER
• If the DSGE model is log-linearized and the errors are Gaussian, then
the Kalman filter can be used to construct the likelihood function.
• Initialization: if st is stationary, we can initialize the filter with the unconditional distribution of st , i.e., s0 ∼ N(0, P0|0 ), where the covariance matrix P0|0 solves P0|0 = ΦP0|0 Φ′ + Σ, that is, vec(P0|0 ) = (I − Φ ⊗ Φ)⁻¹ vec(Σ) (see the sketch after this list).
• At (t − 1)+ , that is, after observing yt−1 , the belief about the state
vector has the form st−1 |Y1:t−1 ∼ N (ŝt−1|t−1 , Pt−1|t−1 ).
• Forecasting: st |Y1:t−1 ∼ N(ŝt|t−1 , Pt|t−1 ), where
  ŝt|t−1 = Φŝt−1|t−1 ,  Pt|t−1 = ΦPt−1|t−1 Φ′ + Σ.
• Likelihood Function:
  p(Y1:T |Ψ, Φ, Σ, H) = (2π)^(−nT/2) ( ∏ t=1..T |Ft|t−1 | )^(−1/2)
      × exp{ −(1/2) ∑ t=1..T (yt − ŷt|t−1 )′ Ft|t−1⁻¹ (yt − ŷt|t−1 ) }
Kalman Filter – Updating
yt = Ψst + ut ,  st = Φst−1 + εt ,  where εt ∼ N(0, Σ) and ut ∼ N(0, H).
• The joint distribution of st and yt conditional on Y1:t−1 is normal:
  (st , yt ) | Y1:t−1 ∼ N( (ŝt|t−1 , ŷt|t−1 ) , [ Pt|t−1 , Pt|t−1 Ψ′ ; ΨPt|t−1 , Ft|t−1 ] ),
  with ŷt|t−1 = Ψŝt|t−1 and Ft|t−1 = ΨPt|t−1 Ψ′ + H.
• Updating: st |Y1:t ∼ N(ŝt|t , Pt|t ), where
  ŝt|t = ŝt|t−1 + Pt|t−1 Ψ′ Ft|t−1⁻¹ (yt − ŷt|t−1 )
  Pt|t = Pt|t−1 − Pt|t−1 Ψ′ Ft|t−1⁻¹ ΨPt|t−1
• The conditional mean and variance ŷt|t−1 and Ft|t−1 were given
above.
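To make the recursion concrete, here is a minimal Python sketch of Kalman-filter likelihood evaluation for the linear Gaussian model above; the function name, argument layout, and the optional (s0, P0) override are our own conventions, not the slides':

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def kalman_loglik(Y, Psi, Phi, Sigma, H, s0=None, P0=None):
    """Log-likelihood of Y (a T x n array) via the Kalman filter."""
    ns = Phi.shape[0]
    s = np.zeros(ns) if s0 is None else s0.copy()                          # s_{0|0}
    P = solve_discrete_lyapunov(Phi, Sigma) if P0 is None else P0.copy()   # P_{0|0}
    loglik = 0.0
    for y in Y:
        # Forecasting: s_{t|t-1}, P_{t|t-1}, yhat_{t|t-1}, F_{t|t-1}
        s = Phi @ s
        P = Phi @ P @ Phi.T + Sigma
        yhat = Psi @ s
        F = Psi @ P @ Psi.T + H
        # Likelihood increment from the prediction-error decomposition
        nu = y - yhat
        _, logdetF = np.linalg.slogdet(F)
        loglik -= 0.5 * (len(y) * np.log(2 * np.pi) + logdetF
                         + nu @ np.linalg.solve(F, nu))
        # Updating: s_{t|t}, P_{t|t}
        K = P @ Psi.T @ np.linalg.inv(F)          # Kalman gain
        s = s + K @ nu
        P = P - K @ Psi @ P
    return loglik
```

For a stationary state equation, omitting s0 and P0 reproduces the initialization from the unconditional distribution discussed earlier.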
Particle Filter
• But now we will use Monte Carlo integration to average out the latent states.
• By induction, in period t we start with particles {st−1^(i)} i=1..N which approximate p(st−1 |Y1:t−1 ).
• Draw one-step-ahead forecasted particles {s̃t^(i)} i=1..N from p(st |Y1:t−1 ) as follows: for each st−1^(i) draw an εt^(i) ∼ N(0, Σ) and let
  s̃t^(i) = Φst−1^(i) + εt^(i) .
• This works because
  p(st |Y1:t−1 ) = ∫ p(st |st−1 ) p(st−1 |Y1:t−1 ) dst−1 ≈ (1/N) ∑ i=1..N p(st |st−1^(i)) .
N
• We can generate a new set of particles sti i=1 by re-sampling with
replacement N times from an approximate discrete representation of
N
p (st |Y1:t t, θ) given by s̃ti , πti i=1 so that
Pr sti = s̃ti = πti , i = 1, . . . , N.
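Putting the propagation, weighting, and re-sampling steps together, here is a minimal bootstrap particle filter sketch; the function name and array layout are ours, and the likelihood approximation p(yt |Y1:t−1 ) ≈ (1/N) ∑ i p(yt |s̃t^(i)) is accumulated along the way:

```python
import numpy as np

def bootstrap_pf_loglik(Y, Psi, Phi, Sigma, H, particles, rng):
    """Bootstrap PF log-likelihood; particles is an (N, ns) initial cloud."""
    particles = particles.copy()
    N = particles.shape[0]
    Hinv = np.linalg.inv(H)
    _, logdetH = np.linalg.slogdet(H)
    loglik = 0.0
    for y in Y:
        # Propagate: s~_t^(i) = Phi s_{t-1}^(i) + eps_t^(i), eps ~ N(0, Sigma)
        eps = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=N)
        tilde = particles @ Phi.T + eps
        # Incremental log-weights: log p(y_t | s~_t^(i)), with u_t ~ N(0, H)
        nu = y - tilde @ Psi.T
        logw = -0.5 * (len(y) * np.log(2 * np.pi) + logdetH
                       + np.einsum('ij,jk,ik->i', nu, Hinv, nu))
        # Likelihood increment: p(y_t | Y_{1:t-1}) ~= (1/N) sum_i exp(logw_i)
        m = logw.max()
        loglik += m + np.log(np.mean(np.exp(logw - m)))
        # Normalized weights pi_t^(i), then multinomial resampling
        pi = np.exp(logw - m)
        pi /= pi.sum()
        particles = tilde[rng.choice(N, size=N, p=pi)]
    return loglik
```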
• Consider the state-space model
  yt = Γ + Ψst + ut ,  st = Φst−1 + εt .
• Fix parameters at
  Γ = (0.7, 0.8)′ ,  Ψ = (1, 0.65)′ ,  H = diag(0.47², 0.62²) ,  Φ = 1 ,  Σ = 0.75² .
• 50 observations
• Kalman Filter delivers the exact likelihood.
• Particle Filter with 10, 100, 1000 particles.
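A sketch of this experiment, assuming the kalman_loglik and bootstrap_pf_loglik functions from the earlier snippets; because Φ = 1 makes the state a unit root, we condition on a known initial state s0 = 0 (our assumption, not stated on the slides) rather than a stationary distribution, and we filter yt − Γ since the earlier sketches omit the intercept:

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma = np.array([0.7, 0.8])
Psi = np.array([[1.0], [0.65]])
H = np.diag([0.47**2, 0.62**2])
Phi = np.array([[1.0]])
Sigma = np.array([[0.75**2]])

# Simulate T = 50 observations starting from s_0 = 0 (our assumption)
T = 50
s = np.zeros(1)
Y = np.empty((T, 2))
for t in range(T):
    s = Phi @ s + rng.multivariate_normal(np.zeros(1), Sigma)
    Y[t] = Gamma + Psi @ s + rng.multivariate_normal(np.zeros(2), H)

# Exact likelihood vs. particle filter approximations
ll_kf = kalman_loglik(Y - Gamma, Psi, Phi, Sigma, H,
                      s0=np.zeros(1), P0=np.zeros((1, 1)))
for N in (10, 100, 1000):
    ll_pf = bootstrap_pf_loglik(Y - Gamma, Psi, Phi, Sigma, H,
                                np.zeros((N, 1)), rng)
    print(N, ll_kf, ll_pf)
```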
RMSE
         KF      N = 10    N = 100    N = 1000
         0.37    0.41      0.37       0.36
[Figure: log-likelihood increments, t = 10, . . . , 50 (vertical axis −8 to −1); left panel: KF vs. PF (N = 10); right panel: KF vs. PF (N = 100)]
Log-likelihood
         KF        N = 10     N = 100    N = 1000
         -99.85    -99.52     -99.61     -99.87
[Figure: log-likelihood increments, t = 10, . . . , 50 (vertical axis −8 to −1); left panel: KF vs. PF (N = 100); right panel: KF vs. PF (N = 1000)]
[Figure: time series over t = 5, . . . , 50 (vertical axis 0 to 600); means by particle-filter size reported below]
Mean
         N = 10    N = 100    N = 1000
         5.39      51.43      511.54
Importance Sampling
• Likewise,
  Eg [f /g ] = ∫ Zπ dθ = Z Eπ [1] = Z .
• Provided that the support of g contains the support of π and the relevant moments are finite, we can deduce from the Strong Law of Large Numbers and the Continuous Mapping Theorem that
  h̄N → Eπ [h]  a.s.
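A minimal numerical check of these identities, with all specifics ours: target π = N(1, 0.5²), normalizing constant Z = 3 (so f = Zπ), proposal g = N(0, 2²), and h(θ) = θ, so Eπ [h] = 1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

Z_true = 3.0
f = lambda th: Z_true * stats.norm.pdf(th, loc=1.0, scale=0.5)  # unnormalized target
g = stats.norm(loc=0.0, scale=2.0)                              # proposal
h = lambda th: th

theta = g.rvs(size=100_000, random_state=rng)   # draws from g
w = f(theta) / g.pdf(theta)                     # importance weights f/g

Z_hat = w.mean()                                # E_g[f/g] -> Z = 3 (SLLN)
h_bar = np.sum(w * h(theta)) / np.sum(w)        # self-normalized estimator -> E_pi[h] = 1
print(Z_hat, h_bar)
```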