
Frank Schorfheide, University of Pennsylvania: Bayesian Methods 1

Bayesian Analysis of DSGE Models

Frank Schorfheide
Department of Economics, University of Pennsylvania

Bayesian Analysis

• Ingredients of Bayesian Analysis:

– Likelihood function L(θ|Y^T) = p(Y^T|θ)

– Prior density p(θ)

– Marginal data density p(Y^T) = ∫ p(Y^T|θ) p(θ) dθ

• Bayes Theorem:
p(θ|Y^T) = L(θ|Y^T) p(θ) / p(Y^T)    (1)
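As an illustration, (1) can be evaluated numerically on a parameter grid. The sketch below is not from the slides: it assumes a toy model y_t ∼ N(θ, 1) with prior θ ∼ N(0, 1), and normalizes the likelihood-times-prior kernel so that it integrates to one.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)          # simulated sample Y^T (toy data)

theta = np.linspace(-3, 3, 601)            # grid over the parameter space
log_lik = np.array([-0.5 * np.sum((y - t) ** 2) for t in theta])
log_prior = -0.5 * theta ** 2              # theta ~ N(0, 1)

# p(theta|Y^T) is proportional to L(theta|Y^T) p(theta); normalize on the grid
log_post = log_lik + log_prior
post = np.exp(log_post - log_post.max())
dtheta = theta[1] - theta[0]
post /= post.sum() * dtheta                # grid approximation of p(Y^T) absorbs the constant

post_mean = (theta * post).sum() * dtheta  # posterior mean of theta
```

With a conjugate normal prior the grid answer can be checked against the closed-form posterior mean n·ȳ/(n+1), which is what makes this a useful sanity check.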

Specifying Priors

• Where do priors come from?

– We all have them: introspection

– Pre-sample estimates, e.g., Lubik and Schorfheide (2005a,b).

– Micro-estimates, e.g., Chang, Gomes, and Schorfheide (2002).

• Sanity check:

– Implications of observables under prior?

– Implications for parameter transformations?

• Sensitivity Analysis: how robust are conclusions to choice of prior?



Example

• Consider the following two models:

M1 : y_t = (1/α) IE_t[y_{t+1}] + ρ y_{t−1} + u_t,   u_t = ε_t ∼ iid(0, σ²).   (2)

and
M2 : y_t = (1/α) IE_t[y_{t+1}] + u_t,   u_t = ρ u_{t−1} + ε_t,   ε_t ∼ iid(0, σ²).   (3)

• Under the “backward-looking” specification the equilibrium law of motion becomes

M1 : y_t = (1/2)(α − √(α² − 4ρα)) y_{t−1} + (2α / (α + √(α² − 4ρα))) ε_t,   (4)

• whereas under the model M2

M2 : y_t = ρ y_{t−1} + (1 / (1 − ρ/α)) ε_t.   (5)
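The equivalence can be checked numerically. The sketch below (parameter values are illustrative assumptions, not from the slides) computes the AR(1) reduced forms (4) and (5) and confirms that a suitably reparameterized M2 reproduces the M1 law of motion exactly.

```python
import math

def reduced_form_m1(alpha, rho, sigma):
    """AR(1) law of motion implied by M1, eq. (4): y_t = lam*y_{t-1} + c*eps_t."""
    disc = math.sqrt(alpha ** 2 - 4 * rho * alpha)
    lam = 0.5 * (alpha - disc)        # stable root of lam^2 - alpha*lam + alpha*rho = 0
    c = 2 * alpha / (alpha + disc)    # impact coefficient on eps_t
    return lam, c * sigma

def reduced_form_m2(alpha, rho, sigma):
    """AR(1) law of motion implied by M2, eq. (5)."""
    return rho, sigma / (1 - rho / alpha)

# M1 at illustrative values (alpha, rho, sigma) = (2.0, 0.3, 1.0) ...
lam, s1 = reduced_form_m1(2.0, 0.3, 1.0)
# ... is reproduced exactly by an M2 with rho' = lam and sigma' rescaled,
# so the two models imply the same AR(1) reduced form for y_t.
rho2, s2 = reduced_form_m2(2.0, lam, (1 - lam / 2.0) * s1)
```

Since both models collapse to an unrestricted AR(1) with a free innovation standard deviation, any reduced form reachable under M1 is reachable under M2, which is the observational-equivalence point made on the next slide.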

Example

• Models M1 and M2 are observationally equivalent.

• The model with the ‘backward-looking’ component is distinguishable from the purely ‘forward-looking’ specification only under a strong a priori restriction on the exogenous component, namely ρ = 0.

• Although M1 and M2 will generate identical reduced-form forecasts, the effect of changes in α on the law of motion of y_t is different in the two specifications.



Table 1: Prior Distributions

Name   Domain   Density     Prior 1               Prior 2
                            Para (1)   Para (2)   Para (1)   Para (2)
α      IR+      Gamma       2.00       0.10       2.00       0.10
ρ      [0, 1)   Beta        0.50       0.05       0.73       0.10
σ      IR+      InvGamma    1.00       4.00       1.00       4.00

Notes: Para (1) and Para (2) list the means and the standard deviations for Beta, Gamma, and Normal distributions; the upper and lower bound of the support for the Uniform distribution; s and ν for the Inverse Gamma distribution, where p_IG(σ|ν, s) ∝ σ^(−ν−1) e^(−νs²/(2σ²)). The effective prior is truncated at the boundary of the determinacy region.
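To sample from these priors, the (mean, std) pairs must be mapped into standard shape parameters. The moment-matching conversions below are a sketch under my parameterization assumptions (Gamma in shape/scale form, Beta in (a, b) form); the slide itself does not spell them out.

```python
import numpy as np

def gamma_params(mean, std):
    # Gamma(k, theta): mean = k*theta, var = k*theta^2
    return (mean / std) ** 2, std ** 2 / mean

def beta_params(mean, std):
    # Beta(a, b): mean = a/(a+b), var = a*b / ((a+b)^2 (a+b+1))
    nu = mean * (1 - mean) / std ** 2 - 1
    return mean * nu, (1 - mean) * nu

rng = np.random.default_rng(0)
k, scale = gamma_params(2.00, 0.10)   # prior for alpha (both prior columns)
a, b = beta_params(0.50, 0.05)        # Prior 1 for rho

alpha_draws = rng.gamma(k, scale, size=100_000)
rho_draws = rng.beta(a, b, size=100_000)
```

Draws like these feed directly into the prior predictive checks shown in Figure 1 ("implications of observables under prior").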

Figure 1: Example - Predictive Distributions of Sample Moments



Figure 2: Example - Parameter Draws from Model M2, Prior 1



Figure 3: Example - Parameter Draws from Model M2, Prior 2



Posterior Computations

• Bayes Theorem:
p(θ|Y) = L(θ|Y) p(θ) / ∫ L(θ|Y) p(θ) dθ

• Posterior moments
IE[h(θ)|Y] = ∫ h(θ) L(θ|Y) p(θ) dθ / ∫ L(θ|Y) p(θ) dθ

• Use Markov Chain Monte Carlo Techniques...



Bayesian Computations, continued

• For DSGE models: use the Random Walk Metropolis Algorithm, e.g., Schorfheide (2000), Otrok (2001), or the Importance Sampler as in DeJong, Ingram, and Whiteman (2000).



Random-Walk Metropolis (RWM) Algorithm

1. Use a numerical optimization routine to maximize ln L(θ|Y) + ln p(θ). Denote the posterior mode by θ̃.

2. Let Σ̃ be the inverse of the Hessian computed at the posterior mode θ̃.

3. Draw θ^(0) from N(θ̃, c₀² Σ̃).

4. For s = 1, . . . , n_sim, draw ϑ from the proposal distribution N(θ^(s−1), c² Σ̃). The jump from θ^(s−1) is accepted (θ^(s) = ϑ) with probability min{1, r(θ^(s−1), ϑ|Y)} and rejected (θ^(s) = θ^(s−1)) otherwise. Here

r(θ^(s−1), ϑ|Y) = [L(ϑ|Y) p(ϑ)] / [L(θ^(s−1)|Y) p(θ^(s−1))].

5. Approximate the posterior expected value of a function h(θ) by (1/n_sim) Σ_{s=1}^{n_sim} h(θ^(s)).
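A minimal sketch of steps 3–5. The DSGE posterior kernel is replaced by a toy standard-normal log posterior and c²Σ̃ is chosen by hand; both are stand-ins, since the slides' likelihood is model-specific.

```python
import numpy as np

def rwm(log_post, theta0, c2_sigma, n_sim, rng):
    """Scalar random-walk Metropolis: returns n_sim draws from log_post."""
    draws = np.empty(n_sim)
    theta, lp = theta0, log_post(theta0)
    for s in range(n_sim):
        prop = rng.normal(theta, np.sqrt(c2_sigma))   # proposal N(theta^(s-1), c^2 Sigma)
        lp_prop = log_post(prop)
        # accept with probability min{1, r}; r is evaluated in logs for stability
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[s] = theta                              # rejected jumps repeat theta^(s-1)
    return draws

rng = np.random.default_rng(1)
# toy stand-in for ln L(theta|Y) + ln p(theta): a standard normal kernel
draws = rwm(lambda t: -0.5 * t ** 2, 0.0, 0.5 ** 2 * 1.0, 50_000, rng)
post_mean = draws.mean()   # step 5 with h(theta) = theta
```

For the toy target the chain's mean and standard deviation should settle near 0 and 1, which is the kind of convergence check the recursive-means figures below perform for the DSGE posterior.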

Importance Sampling (IS) Algorithm

1. Use a numerical optimization routine to maximize ln L(θ|Y) + ln p(θ). Denote the posterior mode by θ̃.

2. Let Σ̃ be the inverse of the Hessian computed at the posterior mode θ̃.

3. Let q(θ) be the density of a multivariate t-distribution with mean θ̃, covariance matrix c² Σ̃, and ν degrees of freedom.

4. For s = 1, . . . , n_sim, generate draws θ^(s) from q(θ).


5. Compute w̃_s = L(θ^(s)|Y) p(θ^(s)) / q(θ^(s)) and w_s = w̃_s / Σ_{s=1}^{n_sim} w̃_s.
6. Approximate the posterior expected value of a function h(θ) by Σ_{s=1}^{n_sim} w_s h(θ^(s)).

(insert figures)

Figure 1: Prior and Posterior

Notes: Output gap rule specification M1, Data Set 1-M1. The panels depict 200 draws from prior and posterior distributions. Intersections of solid lines signify posterior mode values.

Figure 2: Draws from Multiple Chains

Notes: Output gap rule specification M1, Data Set 1-M1. Panels (1,1) and (1,2): contours of posterior density at the mode as function of τ and ψ2. Panels (2,1) to (3,2): 200 draws from four Markov chains generated by the Metropolis Algorithm. Intersections of solid lines signify posterior mode values.

Figure 3: Recursive Means from Multiple Chains

Notes: Output gap rule specification M1, Data Set 1-M1. Each line corresponds to recursive means (as a function of the number of draws) calculated from one of the four Markov chains generated by the Metropolis Algorithm.

Figure 4: Metropolis Algorithm versus Importance Sampling

Notes: Output gap rule specification M1, Data Set 1-M1. Panels depict posterior modes (solid), recursively computed 95% bands for posterior means based on the Metropolis Algorithm (dotted) and the Importance Sampler (dashed).

Figure 5: Draws from Multiple Chains

Notes: Output growth rule specification M2, Data Set 1-M2. Panels (1,1) and (1,2): contours of posterior density at "low" and "high" mode as function of τ and ψ2. Panels (2,1) to (3,2): 200 draws from four Markov chains generated by the Metropolis Algorithm. Intersections of solid lines signify "low" (left panels) and "high" (right panels) posterior mode values.

Figure 6: Recursive Means from Multiple Chains

Notes: Output growth rule specification M2, Data Set 1-M2. Each line corresponds to recursive means (as a function of the number of draws) calculated from one of the four Markov chains generated by the Metropolis Algorithm.

Empirical Literature (I)

• Early MLE: Altug (1989), McGrattan (1994), Leeper and Sims (1994), and Kim (2000).

• Bayesian calibration: Canova (1994), DeJong, Ingram, and Whiteman (1996), and Geweke (1999b).

• Early Bayesians: Landon-Lane (1998), DeJong, Ingram, and Whiteman (2000), Schorfheide (2000), and Otrok (2001).

• Real models: DeJong and Ingram (2001), Chang, Gomes, and Schorfheide (2002), Chang and Schorfheide (2003), Fernández-Villaverde and Rubio-Ramírez (2004a).


Empirical Literature (II)

• New Keynesian DSGE's: Rabanal and Rubio-Ramírez (2003, 2005), Lubik and Schorfheide (2004), Schorfheide (2005), Canova (2004), Galí and Rabanal (2004), Smets and Wouters (2003, 2005), Laforte (2004), Onatski and Williams (2004), and Levin, Onatski, Williams, and Williams (2005).

• SOE models: Lubik and Schorfheide (2003), Del Negro (2003), Justiniano and Preston (2004), Adolfson, Laséen, Lindé, and Villani (2004).

• Multi-country models: Lubik and Schorfheide (2005), Rabanal and Tuesta (2005), and de Walque and Wouters (2004).

• ...

Further Extensions

• DSGE model estimation in the presence of indeterminacy: Lubik and Schorfheide (American Economic Review, 2004).

• DSGE model with regime switches (inflation target) in the monetary policy rule: Schorfheide (Review of Economic Dynamics, 2005).

• DSGE model embedded in a factor model: Boivin and Giannoni (2005), "DSGE Models in a Data-rich Environment," Manuscript, Columbia University.

• DSGE models with heteroskedastic shocks: Justiniano and Primiceri (2005), "The Time-varying Volatility of Macroeconomic Fluctuations," Manuscript, Northwestern University and Board of Governors.
