Bayesian Analysis of Dynamic Stochastic General Equilibrium Models
Frank Schorfheide
University of Pennsylvania, CEPR, NBER
$$p(\theta|Y) = \frac{p(Y|\theta)\, p(\theta)}{\int p(Y|\theta)\, p(\theta)\, d\theta} \quad \text{(Bayes Theorem)}$$
• In practical work, we use algorithms to generate draws from the posterior distribution $p(\theta|Y)$, say, $\{\theta^{(s)}\}_{s=1}^{n_{sim}}$.
• Once we have these draws we can
  • compute point estimates, credible intervals, or plot histograms/densities of posterior distributions;
  • compute decisions that minimize posterior expected loss.
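As a minimal sketch of these summaries, the snippet below uses stand-in draws simulated from a normal distribution (in practice they would come from an MCMC sampler, not a known distribution):

```python
import numpy as np

# Stand-in posterior draws; in practice these are MCMC output, not
# samples from a known normal distribution.
rng = np.random.default_rng(0)
draws = rng.normal(loc=0.95, scale=0.02, size=10_000)

post_mean = draws.mean()                      # point estimate under quadratic loss
post_median = np.median(draws)                # point estimate under absolute-error loss
ci_lo, ci_hi = np.percentile(draws, [5, 95])  # equal-tail 90% credible interval

print(post_mean, ci_lo, ci_hi)
```

The same one-liners apply to any transformation of the draws, e.g. impulse responses $f(\theta^{(s)})$.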
Frank Schorfheide Bayesian Analysis of DSGE Models
VAR vs. DSGE

                       VAR                              DSGE Model
Representation         y_t = B y_{t-1} + u_t,           y_t = Ψ(θ) s_t,
                       u_t ∼ N(0, Σ)                    s_t = Φ_1(θ) s_{t-1} + Φ_ε(θ) ε_t,
                                                        ε_t ∼ N(0, I).
                                                        Need to solve the DSGE model
                                                        numerically to obtain
                                                        Ψ(θ), Φ_1(θ), Φ_ε(θ).
Likelihood fcn         Available in analytical form     Has to be evaluated numerically:
p(Y|θ)                                                  - Kalman filter (linear models)
                                                        - particle filter (nonlinear models)
Prior p(θ)             e.g., Minnesota prior            See empirical applications
Posterior p(θ|Y)       Analytical characterization      No analytical characterization
Sampling from          - Direct sampling                - Metropolis-Hastings algorithm
posterior:             - Gibbs sampling                 - (other methods)
{θ^(s)}_{s=1}^{n_sim}
Reporting              Use the draws {θ^(s)}_{s=1}^{n_sim} to compute means, std. deviations,
                       percentiles, histograms, density estimates, etc.
Impulse responses,     Convert the draws {θ^(s)}_{s=1}^{n_sim} into IRFs {f(θ^(s))}_{s=1}^{n_sim};
etc.                   compute means, percentiles, etc. from {f(θ^(s))}_{s=1}^{n_sim}.
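The Metropolis-Hastings entry can be illustrated with a minimal random-walk sampler; the Gaussian log-density below is a toy stand-in for an actual DSGE likelihood-times-prior:

```python
import numpy as np

def rw_metropolis(log_post, theta0, scale, n_sim, seed=0):
    """Random-walk Metropolis sampler for a scalar parameter (sketch)."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_sim)
    theta, lp = theta0, log_post(theta0)
    for s in range(n_sim):
        prop = theta + scale * rng.standard_normal()
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[s] = theta
    return draws

# Toy target: standard normal "posterior" instead of a DSGE posterior kernel
draws = rw_metropolis(lambda th: -0.5 * th**2, theta0=0.0, scale=1.0, n_sim=20_000)
print(draws.mean(), draws.std())
```

For a DSGE model, `log_post` would evaluate the log likelihood (Kalman or particle filter) plus the log prior at θ.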
Household
• Preferences:
$$\mathbb{E}_t \sum_{s=0}^{\infty} \beta^{t+s} \left[ \ln C_{t+s} - \frac{(H_{t+s}/B_{t+s})^{1+1/\nu}}{1+1/\nu} \right] \quad (1)$$
• Budget constraint:
$$C_t + I_t \le W_t H_t + R_t K_t.$$
• Capital accumulation:
$$K_{t+1} = (1-\delta) K_t + I_t. \quad (2)$$
• First-order conditions:
$$\frac{1}{C_t} = \beta \mathbb{E}_t\!\left[ \frac{1}{C_{t+1}} \big(R_{t+1} + (1-\delta)\big) \right] \quad \text{and} \quad \frac{1}{C_t} W_t = \frac{1}{B_t} \left( \frac{H_t}{B_t} \right)^{1/\nu}. \quad (3)$$
• Technology:
$$Y_t = (A_t H_t)^{\alpha} K_t^{1-\alpha}, \quad (4)$$
$$W_t = \alpha \frac{Y_t}{H_t}, \qquad R_t = (1-\alpha) \frac{Y_t}{K_t}. \quad (5)$$
• Market clearing:
$$Y_t = C_t + I_t. \quad (6)$$
• Exogenous shock processes:
$$\ln A_t = \ln A_0 + (\ln \gamma)\, t + \ln \tilde{A}_t, \qquad \ln \tilde{A}_t = \rho_a \ln \tilde{A}_{t-1} + \sigma_a \epsilon_{a,t}, \quad (7)$$
$$\ln B_t = \rho_b \ln B_{t-1} + \sigma_b \epsilon_{b,t}, \quad (8)$$
with initial conditions $\ln \tilde{A}_{-\tau} = 0$ and $\ln B_{-\tau} = 0$.
• Detrended variables:
$$\tilde{Y}_t = \frac{Y_t}{A_t}, \quad \tilde{C}_t = \frac{C_t}{A_t}, \quad \tilde{I}_t = \frac{I_t}{A_t}, \quad \tilde{K}_{t+1} = \frac{K_{t+1}}{A_t}, \quad \tilde{W}_t = \frac{W_t}{A_t}. \quad (9)$$
" # 1/ν
1 1 −at+1 1 f 1 Ht
= βE e (Rt+1 + (1 − δ)) , Wt = (10)
Ct
e C
et+1 Ct
e Bt Bt
Y
et Yet
W
ft = α , Rt = (1 − α) e at
Ht Kt
e
1−α
Y
et = Htα K et e −at , Y
et = C
et + eIt , K et e −at + eIt .
et+1 = (1 − δ)K
At
at = ln = ln γ + (ρa − 1) ln A
e t−1 + σa a,t . (11)
At−1
• Steady state:
$$R_* = \frac{\gamma}{\beta} - (1-\delta), \qquad \frac{\tilde{K}_*}{\tilde{Y}_*} = \frac{(1-\alpha)\gamma}{R_*}, \qquad \frac{\tilde{I}_*}{\tilde{Y}_*} = \left(1 - \frac{1-\delta}{\gamma}\right)\frac{\tilde{K}_*}{\tilde{Y}_*}. \quad (12)$$
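The steady-state ratios in (12) can be computed directly; the parameter values below are illustrative, not calibrated values from the lecture:

```python
# Steady-state ratios from equation (12); parameter values are illustrative.
# In this model alpha is the labor share, so 1 - alpha is the capital share.
def steady_state(alpha, beta, gamma, delta):
    R_star = gamma / beta - (1.0 - delta)              # rental rate of capital
    KY = (1.0 - alpha) * gamma / R_star                # K~*/Y~* ratio
    IY = (1.0 - (1.0 - delta) / gamma) * KY            # I~*/Y~* ratio
    return R_star, KY, IY

R_star, KY, IY = steady_state(alpha=0.66, beta=0.99, gamma=1.005, delta=0.025)
print(R_star, KY, IY)
```

Note that the steady state depends on θ only through (α, β, γ, δ), which is one reason these parameters are often fixed or tightly restricted by prior information.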
• Parameter vector:
$$\theta = [\alpha, \beta, \gamma, \delta, \nu, \rho_a, \sigma_a, \rho_b, \sigma_b]'. \quad (13)$$
• Log-linearized equilibrium conditions:
$$\hat{C}_t = \mathbb{E}_t\!\left[ \hat{C}_{t+1} + \hat{a}_{t+1} - \frac{R_*}{R_* + (1-\delta)} \hat{R}_{t+1} \right] \quad (14)$$
$$\hat{H}_t = \nu \hat{W}_t - \nu \hat{C}_t + (1+\nu)\hat{B}_t, \qquad \hat{W}_t = \hat{Y}_t - \hat{H}_t,$$
$$\hat{R}_t = \hat{Y}_t - \hat{K}_t + \hat{a}_t, \qquad \hat{K}_{t+1} = \frac{1-\delta}{\gamma}\hat{K}_t + \frac{\tilde{I}_*}{\tilde{K}_*}\hat{I}_t - \frac{1-\delta}{\gamma}\hat{a}_t,$$
$$\hat{Y}_t = \frac{\tilde{C}_*}{\tilde{Y}_*}\hat{C}_t + \frac{\tilde{I}_*}{\tilde{Y}_*}\hat{I}_t, \qquad \hat{Y}_t = \alpha\hat{H}_t + (1-\alpha)\hat{K}_t - (1-\alpha)\hat{a}_t,$$
$$\hat{A}_t = \rho_a \hat{A}_{t-1} + \sigma_a \epsilon_{a,t}, \qquad \hat{a}_t = \hat{A}_t - \hat{A}_{t-1}, \qquad \hat{B}_t = \rho_b \hat{B}_{t-1} + \sigma_b \epsilon_{b,t}.$$
Log-linearization of $f(x)$:
1. write $f(x) = f(e^z)$, where $z = \ln x$;
2. take a first-order Taylor expansion around $z_* = \ln x_*$: $f(e^z) \approx f(x_*) + x_* f'(x_*)(z - z_*)$;
3. interpret $\hat{x} = z - z_* = \ln(x/x_*)$ as the percentage deviation of $x$ from its steady state $x_*$.
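A quick numerical check of this recipe for $f(x) = x^{\alpha}$, where the expansion gives $f(x_* e^{\hat{x}}) \approx f(x_*)(1 + \alpha \hat{x})$; the values of α, $x_*$, and $\hat{x}$ are illustrative:

```python
import math

# Log-linearization check for f(x) = x**alpha:
# with x = x_star * exp(xhat), first order in xhat gives
# f(x) ~ x_star**alpha * (1 + alpha * xhat).
alpha, x_star, xhat = 0.66, 2.0, 0.01   # illustrative values; xhat = 1% deviation
x = x_star * math.exp(xhat)

exact = x**alpha
approx = x_star**alpha * (1.0 + alpha * xhat)

print(exact, approx, abs(exact - approx))
```

For a 1% deviation the approximation error is second order in $\hat{x}$, i.e., on the order of $10^{-4}$ or smaller.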
• Write the solution as
$$y_t = f(y_{t-1}, \sigma u_t).$$
• First-order expansion in $\sigma$:
$$\sigma y_t = f_y^{(1)} \sigma y_{t-1} + f_u^{(1)} \sigma u_t + o(\sigma).$$
• Deduce that $y_t = f_y^{(1)} y_{t-1} + f_u^{(1)} u_t$.
• Notice that:
  • (log)linear solutions lead to linear state-space models;
  • nonlinear solutions lead to nonlinear state-space models.
• We will now review some of the details for linear solution techniques and discuss multiplicity issues.
• Canonical linear rational-expectations (LRE) form:
$$\Gamma_0(\theta)\, s_t = \Gamma_1(\theta)\, s_{t-1} + \Psi(\theta)\, \epsilon_t + \Pi(\theta)\, \eta_t,$$
where
• $s_t$ is a vector of model variables, $\epsilon_t$ is a vector of exogenous shocks,
• $\eta_t$ is a vector of RE errors with elements $\eta_t^x = \hat{x}_t - \mathbb{E}_{t-1}[\hat{x}_t]$, and
• $s_t$ contains (among others) the conditional expectation terms $\mathbb{E}_t[\tilde{x}_{t+1}]$.
• Solution methods for LREs: Blanchard and Kahn (1980), King and
Watson (1998), Uhlig (1999), Anderson (2000), Klein (2000),
Christiano (2002), Sims (2002).
• Typically, the solution in terms of $s_t$ is of the form
$$s_t = \Phi_1(\theta)\, s_{t-1} + \Phi_\epsilon(\theta)\, \epsilon_t.$$
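Given a solution of this form together with an observation matrix Ψ, impulse responses are powers of Φ₁ applied to the shock loading; the matrices below are illustrative stand-ins, not output of an actual DSGE solver:

```python
import numpy as np

# IRF sketch for a solved linear model s_t = Phi1 s_{t-1} + Phi_eps eps_t,
# y_t = Psi s_t. Matrices are illustrative, not from a solved DSGE model.
Phi1 = np.array([[0.9, 0.1],
                 [0.0, 0.5]])
Phi_eps = np.array([[1.0],
                    [1.0]])
Psi = np.array([[1.0, 0.0]])

def irf(Phi1, Phi_eps, Psi, horizons=20):
    """Response of y_{t+h} to a unit shock in eps_t, h = 0..horizons-1."""
    s = Phi_eps.copy()
    out = []
    for _ in range(horizons):
        out.append((Psi @ s).ravel().copy())
        s = Phi1 @ s           # propagate the state one period forward
    return np.array(out)

responses = irf(Phi1, Phi_eps, Psi)
print(responses[:3])
```

Applying `irf` to each posterior draw θ⁽ˢ⁾ produces the set $\{f(\theta^{(s)})\}$ whose pointwise percentiles give posterior bands for the IRFs.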
• Example:
$$y_t = \frac{1}{\theta} \mathbb{E}_t[y_{t+1}] + \epsilon_t,$$
where $\epsilon_t \sim iid(0,1)$ and $\theta \in \Theta = [0,2]$.
• Introduce the conditional expectation $\xi_t = \mathbb{E}_t[y_{t+1}]$ and the forecast error $\eta_t = y_t - \xi_{t-1}$.
• For $\theta > 1$, the unique stable solution is
$$\xi_t = 0, \quad \eta_t = \epsilon_t, \quad y_t = \epsilon_t.$$
• For $\theta \le 1$, the forecast error is no longer uniquely determined: any
$$\eta_t = \tilde{M} \epsilon_t + \zeta_t,$$
where $\zeta_t$ is a sunspot shock, is consistent with stability. Setting $\zeta_t = 0$,
$$y_t - \theta y_{t-1} = \tilde{M} \epsilon_t - \theta \epsilon_{t-1}.$$
• Or
$$y_t = \epsilon_t + (\tilde{M} - 1)(1 - \theta L)^{-1} \epsilon_t.$$
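The equivalence of the AR and MA representations can be verified numerically; the values of θ and $\tilde{M}$ below are illustrative (with θ ≤ 1, i.e., in the indeterminacy region):

```python
import numpy as np

# Check (with illustrative theta, Mtil) that the AR form
#   y_t = theta*y_{t-1} + Mtil*eps_t - theta*eps_{t-1}
# matches the MA form
#   y_t = eps_t + (Mtil - 1) * sum_{j=0..t} theta**j * eps_{t-j}.
rng = np.random.default_rng(1)
theta, Mtil, T = 0.8, 1.5, 50
eps = rng.standard_normal(T)

# AR recursion, initialized at y_0 = Mtil * eps_0
y_ar = np.zeros(T)
y_ar[0] = Mtil * eps[0]
for t in range(1, T):
    y_ar[t] = theta * y_ar[t - 1] + Mtil * eps[t] - theta * eps[t - 1]

# Truncated MA representation (exact here, since eps starts at t = 0)
y_ma = np.array([
    eps[t] + (Mtil - 1.0) * sum(theta**j * eps[t - j] for j in range(t + 1))
    for t in range(T)
])

print(np.max(np.abs(y_ar - y_ma)))
```

With $\tilde{M} = 1$ the solution collapses to $y_t = \epsilon_t$, the same observables as the determinate case.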
• Let the $i$th element of $w_t$ be $w_{i,t}$ and denote the $i$th rows of $J^{-1}\Pi^*$ and $J^{-1}\Psi^*$ by $[J^{-1}\Pi^*]_{i.}$ and $[J^{-1}\Psi^*]_{i.}$, respectively.
• Let $I_x(\theta)$ be the index set of the unstable eigenvalues (the complement of the stable set). Let $\Psi^J_x$ and $\Pi^J_x$ be the matrices composed of the row vectors $[J^{-1}\Psi^*]_{i.}$ and $[J^{-1}\Pi^*]_{i.}$ that correspond to unstable eigenvalues, i.e., $i \in I_x(\theta)$.
• Stability condition:
$$\Psi^J_x \epsilon_t + \Pi^J_x \eta_t = 0 \quad \text{for all } t.$$
• The forecast errors satisfying the stability condition take the form
$$\eta_t = \eta_1 \epsilon_t + \eta_2 \zeta_t.$$
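One way to construct such an $\eta_1$ and $\eta_2$ is via the pseudo-inverse and null space of $\Pi^J_x$; the 1×1 and 1×2 matrices below are made-up stand-ins (one unstable root, two expectational errors, so the solution is indeterminate):

```python
import numpy as np

# Sketch: solve the stability condition  Psi_x eps_t + Pi_x eta_t = 0  for eta_t.
# Psi_x, Pi_x are illustrative stand-ins for the rows of J^{-1}Psi* and J^{-1}Pi*
# associated with unstable eigenvalues.
Psi_x = np.array([[1.0]])        # 1 unstable root, 1 structural shock
Pi_x = np.array([[2.0, 1.0]])    # 2 expectational errors -> indeterminacy

# Particular solution eta_1, so that Pi_x @ eta_1 = -Psi_x
eta_1 = -np.linalg.pinv(Pi_x) @ Psi_x

# Sunspot directions eta_2: orthonormal basis for the null space of Pi_x
_, s, Vt = np.linalg.svd(Pi_x)
rank = int(np.sum(s > 1e-12))
eta_2 = Vt[rank:].T

print(eta_1.ravel(), eta_2.ravel())
```

When the number of expectational errors equals the number of unstable roots and $\Pi^J_x$ is invertible, the null space is empty and $\eta_t = \eta_1 \epsilon_t$ is unique (determinacy).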