Slides: Bayesian estimation
Overview
Maximum Likelihood
A very useful tool: the Kalman filter
Estimating DSGEs
Maximum Likelihood & DSGEs
formulating the likelihood
Singularity when #shocks < number of observables
Bayesian estimation
Tools:
Metropolis-Hastings
Theory:
$$y_t = a_0 + a_1 x_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2)$$
$x_t$: exogenous

ML estimator
$$\max_{a_0, a_1, \sigma} \prod_{t=1}^{T} p(\varepsilon_t)$$
where
$$\varepsilon_t = y_t - a_0 - a_1 x_t$$
$$p(\varepsilon_t) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\varepsilon_t^2}{2\sigma^2}\right)$$
ML estimator
$$\max_{a_0, a_1, \sigma} \prod_{t=1}^{T} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y_t - a_0 - a_1 x_t)^2}{2\sigma^2}\right)$$
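Numerically this is a three-parameter optimization of the log-likelihood; a minimal Python sketch on simulated data (the data-generating values and all names are illustrative, not from the slides):

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y, x):
    a0, a1, log_sigma = params
    sigma2 = np.exp(2 * log_sigma)        # estimate sigma in logs to keep it positive
    eps = y - a0 - a1 * x                 # implied residuals
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + eps**2 / sigma2)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + 0.3 * rng.normal(size=200)   # true (a0, a1, sigma) = (1.0, 0.5, 0.3)

res = minimize(neg_loglik, x0=np.zeros(3), args=(y, x))
a0_hat, a1_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
```

Maximizing the likelihood is done by minimizing its negative; the log transform of $\sigma$ is a standard trick to keep the optimization unconstrained.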
Rudolf E. Kalman
Kalman filter
Linear projection
Linear projection with orthogonal regressors
Kalman filter
The slides for the Kalman filter are based on Ljungqvist and Sargent's textbook.
Linear projection
$$\hat E[y \mid x] = a + Bx$$
$$\hat E[y \mid x] = \mu_y + \Sigma_{yx} \Sigma_{xx}^{-1} (x - \mu_x)$$
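A quick numerical check of the moment formula, with the $\mu$'s and $\Sigma$'s estimated from simulated data (everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 2))
y = x @ np.array([0.5, -0.2]) + 0.1 * rng.normal(size=1000)

V = np.cov(np.column_stack([y, x]), rowvar=False)   # joint covariance of (y, x)
Sigma_yx, Sigma_xx = V[0, 1:], V[1:, 1:]
B = Sigma_yx @ np.linalg.inv(Sigma_xx)              # slope of the projection
a = y.mean() - B @ x.mean(axis=0)                   # intercept: mu_y - B mu_x
# B is close to [0.5, -0.2]: the projection recovers the least-squares coefficients
```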
True model:
$$y = Bx + Dz + \varepsilon$$
$$Ex = Ez = E\varepsilon = 0, \quad E[\varepsilon \mid x, z] = 0, \quad E[z \mid x] \neq 0$$
Comments:
Least-squares estimate $\neq B$
Projection:
$$\hat E[y \mid x] = Bx + D \hat E[z \mid x] \neq Bx$$
Message:
you don't have to worry about the properties of the error term.
$$\hat E[y \mid x] = \mu_y + \Sigma_{yx} \Sigma_{xx}^{-1} (x - \mu_x)$$
With orthogonal regressors, $\Sigma_{xx}$ is block diagonal:
$$= \mu_y + \begin{bmatrix} \Sigma_{yx_1} & \Sigma_{yx_2} \end{bmatrix} \begin{bmatrix} \Sigma_{x_1 x_1}^{-1} & 0 \\ 0 & \Sigma_{x_2 x_2}^{-1} \end{bmatrix} (x - \mu_x)$$
$$= \mu_y + \Sigma_{yx_1} \Sigma_{x_1 x_1}^{-1} (x_1 - \mu_{x_1}) + \Sigma_{yx_2} \Sigma_{x_2 x_2}^{-1} (x_2 - \mu_{x_2})$$
Thus
$$\hat E[y \mid x] = \hat E[y \mid x_1] + \hat E[y \mid x_2] - \mu_y \tag{1}$$
Objective
Let
$$\tilde y_t = y_t - \hat E[y_t \mid y_{t-1}, y_{t-2}, \ldots, y_1, x_1]$$
$$\tilde Y^t = \{\tilde y_t, \tilde y_{t-1}, \ldots, \tilde y_1\}$$
Then
$$\hat E\left[y_{t+1} \mid Y_t, x_1\right] = \hat E\left[y_{t+1} \mid \tilde Y^t, x_1\right] = C \, \hat E\left[x_{t+1} \mid \tilde Y^t, x_1\right] \tag{2}$$
Some algebra
Similar to the definition of $\tilde y_t$, let
$$\tilde x_{t+1} = x_{t+1} - \hat E[x_{t+1} \mid \tilde y_t, \tilde y_{t-1}, \ldots, \tilde y_1, x_1] = x_{t+1} - \hat E_t x_{t+1}$$
Let $\Sigma_{\tilde x_t} = E \tilde x_t \tilde x_t'$. Then
$$\mathrm{cov}(x_{t+1}, \tilde y_t) = A \Sigma_{\tilde x_t} C' + G V_3$$
$$\mathrm{cov}(\tilde y_t, \tilde y_t) = C \Sigma_{\tilde x_t} C' + V_2$$
and
$$\hat E[x_{t+1} \mid \tilde y_t] = E x_{t+1} + \mathrm{cov}(x_{t+1}, \tilde y_t) \left[\mathrm{cov}(\tilde y_t, \tilde y_t)\right]^{-1} \tilde y_t = E x_{t+1} + \left(A \Sigma_{\tilde x_t} C' + G V_3\right) \left(C \Sigma_{\tilde x_t} C' + V_2\right)^{-1} \tilde y_t \tag{4}$$
where
$$K_t = \left(A \Sigma_{\tilde x_t} C' + G V_3\right) \left(C \Sigma_{\tilde x_t} C' + V_2\right)^{-1}$$
From
$$y_{t+1} = C x_{t+1} + w_{2,t+1}$$
we get
$$\hat E[y_{t+1} \mid Y_t, x_1] = C \hat E_t x_{t+1}$$
Thus
$$\tilde y_{t+1} = y_{t+1} - C \hat E_t x_{t+1}$$
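Collecting the pieces, the recursion can be coded directly. A minimal numpy sketch, assuming the state-space form $x_{t+1} = A x_t + G w_{1,t+1}$, $y_t = C x_t + w_{2,t}$ with $E w_1 w_1' = V_1$, $E w_2 w_2' = V_2$, $E w_{1,t+1} w_{2,t}' = V_3$; the covariance update line is our completion of the algebra above, and the initial $\hat E[x_1]$ and $\Sigma_{\tilde x_1}$ must be supplied (e.g. the unconditional moments):

```python
import numpy as np

def kalman_filter(y, A, C, G, V1, V2, V3, x_hat, Sigma):
    """Run the innovations recursion; return the innovations and their covariances."""
    innovations, inn_covs = [], []
    for y_t in y:
        y_tilde = y_t - C @ x_hat                             # innovation y~_t
        S = C @ Sigma @ C.T + V2                              # cov(y~_t, y~_t)
        K = (A @ Sigma @ C.T + G @ V3) @ np.linalg.inv(S)     # Kalman gain K_t
        x_hat = A @ x_hat + K @ y_tilde                       # E^_t x_{t+1}
        Sigma = A @ Sigma @ A.T + G @ V1 @ G.T - K @ S @ K.T  # cov of x~_{t+1}
        innovations.append(y_tilde)
        inn_covs.append(S)
    return innovations, inn_covs
```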
Forget the Kalman filter for now; we will not use it for a while.
What is next?
Specify the neoclassical model that will be used as an example
Specify the linearized version
Specify the estimation problem
Maximum Likelihood estimation
Explain why the Kalman filter is useful
Bayesian estimation
MCMC, a necessary tool to do Bayesian estimation
$$c_t^{-\nu} = E_t\left[\beta c_{t+1}^{-\nu} \left(\alpha z_{t+1} k_t^{\alpha-1} + 1 - \delta\right)\right]$$
$$c_t + k_t = z_t k_{t-1}^{\alpha} + (1-\delta) k_{t-1}$$
$$z_t = (1-\rho) + \rho z_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2)$$
$$\theta = \{\alpha, \beta, \nu, \delta, \rho, \sigma\}$$
Policy functions
Earlier: $y_t = a_0 + a_1 x_t + \varepsilon_t$, $\varepsilon_t \sim N(0, \sigma^2)$
Now:
$$k_t = g(k_{t-1}, z_t; \theta)$$
$$c_t = h(k_{t-1}, z_t; \theta)$$
$$z_t = (1-\rho) + \rho z_{t-1} + \varepsilon_t$$
Policy functions
Problems:
functional form of policy functions not known
they are nonlinear

Steady state
no uncertainty $\Longrightarrow$ no $E_t[\cdot]$ in equations
no transition $\Longrightarrow$ $z_t = z_{t-1}$ and $c_t = c_{t+1}$
$z = (1-\rho) + \rho z \Longrightarrow z = 1$
$c^{-\nu} = \beta c^{-\nu} \left(\alpha k^{\alpha-1} + 1 - \delta\right) \Longrightarrow k = \left[\dfrac{\alpha}{1/\beta - (1-\delta)}\right]^{1/(1-\alpha)}$
budget constraint $\Longrightarrow c = k^{\alpha} - \delta k$
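These formulas map one-for-one into code; a minimal check with an illustrative calibration (the parameter values are ours, not the slides'):

```python
# Steady state of the neoclassical model; parameter values are illustrative
alpha, beta, delta = 0.36, 0.99, 0.025

z_ss = 1.0                                                    # z = 1 from the AR(1)
k_ss = (alpha / (1 / beta - (1 - delta)))**(1 / (1 - alpha))  # from the Euler equation
c_ss = k_ss**alpha - delta * k_ss                             # from the budget constraint
```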
Back to FOCs
The FOCs can be written as
$$E_t\left[F(\hat k_{t-1}, \hat k_t, \hat k_{t+1}, \hat z_t, \hat z_{t+1}; \theta)\right] = 0$$
where
$$\hat k_t = k_t - k, \qquad \hat z_t = z_t - z$$
$$E_t\left[F(\hat k_{t-1}, \hat k_t, \hat k_{t+1}, \hat z_t, \hat z_{t+1}; \theta)\right] = 0$$
$$\Longrightarrow E_t\left[\hat k_{t+1} + \gamma_1 \hat k_t + \gamma_2 \hat k_{t-1} + \gamma_3 \hat z_t + \gamma_4 \hat z_{t+1}\right] = 0$$
$$\Longrightarrow E_t\left[\hat k_{t+1} + \gamma_1 \hat k_t + \gamma_2 \hat k_{t-1} + \tilde\gamma_3 \hat z_t\right] = 0, \quad \text{where } \tilde\gamma_3 = \gamma_3 + \rho \gamma_4$$
$$\hat k_t = a_{k,k} \hat k_{t-1} + a_{k,z} \hat z_t$$
Concavity implies that only one solution for $a_{k,k}$ is less than 1.
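Substituting the guess into the linearized equation and matching coefficients on $\hat k_{t-1}$ and $\hat z_t$ gives a quadratic for $a_{k,k}$ and then a linear equation for $a_{k,z}$; a sketch, where the mapping from $\theta$ to the $\gamma$'s is model-specific and omitted, and the $a_{k,z}$ formula is our own algebra under $E_t \hat z_{t+1} = \rho \hat z_t$:

```python
import numpy as np

def solve_linearized(gamma1, gamma2, gamma3_tilde, rho):
    """Method of undetermined coefficients for k^_t = a_kk k^_{t-1} + a_kz z^_t."""
    roots = np.roots([1.0, gamma1, gamma2])       # a^2 + gamma1*a + gamma2 = 0
    a_kk = roots[np.abs(roots) < 1][0].real       # keep the unique stable root
    a_kz = -gamma3_tilde / (a_kk + gamma1 + rho)  # from matching the z^_t terms
    return a_kk, a_kz
```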
Linearized solution
Estimation problem
Given data for capital, $\{k_t\}_{t=0}^{T}$, estimate the set of coefficients
$$\theta = [\alpha, \beta, \nu, \delta, \rho, \sigma, z_0]$$
No data on productivity, $z_t$.
If you had data on $z_t$ $\Longrightarrow$ Likelihood $= 0$ for sure.
More on this below.
Basic idea:
Given a value for $\theta$ and given the data set, $Y_T$, you can calculate the implied values for $\varepsilon_t$:
$$z_t = b_0 + b_1 k_{t-1} + b_2 k_t$$
$$\varepsilon_t = z_t - (1-\rho) - \rho z_{t-1}$$
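Backing out the shocks is then mechanical once the $b$'s (functions of $\theta$, via the linearized solution) are in hand; a sketch with the coefficients passed in directly:

```python
import numpy as np

def implied_shocks(k, b0, b1, b2, rho):
    """Invert the policy function for z_t, then back out the epsilons."""
    z = b0 + b1 * k[:-1] + b2 * k[1:]        # z_t from (k_{t-1}, k_t)
    eps = z[1:] - (1 - rho) - rho * z[:-1]   # eps_t = z_t - (1-rho) - rho z_{t-1}
    return eps
```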
General rule: cast the solution in the state-space form used by the Kalman filter.
Example
$$x_t' = [k_{t-1} - k, \; z_t - z], \qquad w_{1,t+1} = \varepsilon_{t+1}$$
$$y_t' = [k_{t-1} - k, \; c_t - c]$$
Example
$$\begin{bmatrix} k_t - k \\ z_{t+1} - z \end{bmatrix} = \begin{bmatrix} a_{k,k} & a_{k,z} \\ 0 & \rho \end{bmatrix} \begin{bmatrix} k_{t-1} - k \\ z_t - z \end{bmatrix} + \begin{bmatrix} 0 \\ \varepsilon_{t+1} \end{bmatrix}$$
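In code, the example maps into the Kalman-filter notation as follows; the numerical values and the linearized consumption coefficients `a_ck`, `a_cz` are illustrative placeholders, and $V_2 = 0$ because the model has no measurement error, which is exactly where the one-shock/two-observables singularity shows up:

```python
import numpy as np

# Illustrative values; in practice these come from the solution step above
a_kk, a_kz, rho, sigma = 0.95, 0.5, 0.9, 0.01
a_ck, a_cz = 0.6, 0.4        # hypothetical coefficients of the linearized c-policy

# State x_t = [k_{t-1} - k, z_t - z]'
A = np.array([[a_kk, a_kz],
              [0.0,  rho]])
G = np.array([[0.0],
              [1.0]])         # eps_{t+1} enters only the z-equation
V1 = np.array([[sigma**2]])

# Observables y_t = [k_{t-1} - k, c_t - c]'
C = np.array([[1.0,  0.0],
              [a_ck, a_cz]])
V2 = np.zeros((2, 2))         # no measurement error: 2 observables, 1 shock
```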
Log-Likelihood
$$\ln L(Y_T \mid \theta) = -\frac{1}{2}\left( n_x \ln(2\pi) + \ln\left|\hat\Sigma_{x_0}\right| + \hat x_0' \hat\Sigma_{x_0}^{-1} \hat x_0 \right) - \frac{1}{2}\left( T n_y \ln(2\pi) + \sum_{t=1}^{T}\left[ \ln\left|\hat\Sigma_{\tilde y_t}\right| + \tilde y_t' \hat\Sigma_{\tilde y_t}^{-1} \tilde y_t \right] \right)$$
$n_y$: dimension of $\tilde y_t$
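Given the innovations $\tilde y_t$ and covariances $\hat\Sigma_{\tilde y_t}$ from a Kalman-filter pass such as the sketch above, the sum is mechanical; the $\hat x_0$ term is analogous and omitted here:

```python
import numpy as np

def gaussian_loglik(innovations, inn_covs):
    """Prediction-error decomposition of the Gaussian log-likelihood."""
    ll = 0.0
    for y_tilde, S in zip(innovations, inn_covs):
        sign, logdet = np.linalg.slogdet(S)            # numerically stable log|S|
        ll -= 0.5 * (y_tilde.size * np.log(2 * np.pi)
                     + logdet
                     + y_tilde @ np.linalg.solve(S, y_tilde))
    return ll
```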
$$\tilde y_1 = y_1 - \hat E[y_1 \mid x_1] = y_1 - C x_1$$
$$\hat E_t x_{t+1} = A \hat E_{t-1} x_t + K_t \tilde y_t$$
where
$$K_t = \left(A \Sigma_{\tilde x_t} C' + G V_3\right) \left(C \Sigma_{\tilde x_t} C' + V_2\right)^{-1}$$
Calculate
$$\tilde y_2 = y_2 - \hat E[y_2 \mid y_1, x_1] = y_2 - C \hat E[x_2 \mid y_1, x_1]$$
etc.
Bayesian Estimation
Posterior
$$P(Y_T, \theta) = P(\theta \mid Y_T) P(Y_T)$$
Posterior
From this we can get Bayes' rule:
$$P(\theta \mid Y_T) = \frac{L(Y_T \mid \theta) P(\theta)}{P(Y_T)}$$
Therefore we focus on
$$L(Y_T \mid \theta) P(\theta) \propto \frac{L(Y_T \mid \theta) P(\theta)}{P(Y_T)} = P(\theta \mid Y_T)$$
Examples (objects of interest take the form $E[g(\theta) \mid Y_T]$):
to calculate the mean of $\theta$, let $g(\theta) = \theta$
to calculate the probability that $\theta \in \Theta^*$, let $g(\theta) = 1$ if $\theta \in \Theta^*$ and $g(\theta) = 0$ otherwise
to calculate the posterior for the $j$th element of $\theta$, let $g(\theta) = \theta_j$
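Once a sampler delivers draws $\{\theta^i\}$ from the posterior (see the Metropolis-Hastings sketch below), each choice of $g$ becomes a sample average; a minimal illustration with names of our choosing:

```python
import numpy as np

# draws: (N, dim) array of posterior draws of theta
def posterior_mean(draws):
    return draws.mean(axis=0)          # g(theta) = theta

def prob_in_set(draws, indicator):
    return indicator(draws).mean()     # g(theta) = 1 if theta in the set, else 0

# e.g. posterior probability that the first element exceeds 0.9:
# prob_in_set(draws, lambda th: th[:, 0] > 0.9)
```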
Idea:
travel through the state space of $\theta$
weigh the outcomes appropriately
Properties of $f(\cdot)$
$$q(\theta^{i+1} = \theta^{*} \mid \theta^{i}) = \min\left\{1, \; \frac{P(\theta^{*} \mid Y_T)}{P(\theta^{i} \mid Y_T)}\right\}$$
$P(\theta^{*} \mid Y_T) \geq P(\theta^{i} \mid Y_T) \Longrightarrow$ always include the candidate as the new element
$P(\theta^{*} \mid Y_T) < P(\theta^{i} \mid Y_T) \Longrightarrow$ not always included; the lower $P(\theta^{*} \mid Y_T)$, the lower the chance it is included
Metropolis-Hastings
$$q(\theta^{i+1} = \theta^{*} \mid \theta^{i}) = \min\left\{1, \; \frac{P(\theta^{*} \mid Y_T) / f(\theta^{*} \mid \theta^{i}, \Sigma_f)}{P(\theta^{i} \mid Y_T) / f(\theta^{i} \mid \theta^{*}, \Sigma_f)}\right\}$$
Random-walk sampler: $\theta^{*} = \theta^{i} + \epsilon$ with $E[\epsilon] = 0$
Independence sampler: $f(\theta^{*} \mid \theta^{i}, \Sigma_f) = f(\theta^{*} \mid \Sigma_f)$
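A minimal random-walk Metropolis sketch, assuming a `log_posterior` function that returns $\ln L(Y_T \mid \theta) + \ln P(\theta)$ and a scalar tuning parameter `scale`; with a symmetric candidate density the $f$-ratio cancels and the acceptance rule reduces to the first formula above:

```python
import numpy as np

def random_walk_mh(log_posterior, theta0, n_draws, scale, seed=0):
    """Random-walk Metropolis: theta* = theta_i + eps, accepted with
    probability min{1, P(theta*|Y_T)/P(theta_i|Y_T)} (computed in logs)."""
    rng = np.random.default_rng(seed)
    draws = [np.asarray(theta0, dtype=float)]
    lp = log_posterior(draws[0])
    for _ in range(n_draws):
        cand = draws[-1] + scale * rng.normal(size=draws[0].size)
        lp_cand = log_posterior(cand)
        if np.log(rng.uniform()) < lp_cand - lp:   # accept the candidate
            draws.append(cand)
            lp = lp_cand
        else:                                      # reject: repeat the current draw
            draws.append(draws[-1].copy())
    return np.array(draws)
```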
Sequential estimation: use the posterior from the first block of observables as the prior for the second,
$$F_1(\theta) = L(Y_{1T} \mid \theta) P_1(\theta)$$
$$F_2(\theta) = L(Y_{2T} \mid \theta) P_2(\theta)$$
Final answer:
$$F_2(\theta) = L(Y_{2T} \mid \theta) P_2(\theta) = L(Y_{2T} \mid \theta) L(Y_{1T} \mid \theta) P_1(\theta)$$
Obviously: this coincides with the full joint posterior only if the two blocks of observables are independent given $\theta$.
Problems of approach:
Procedure avoids singularity problem by not considering joint implications of two observables
Procedure misses some structural shock/misspecification
Key question:
Is this worse than adding bogus shocks?
Uninformative prior
$$P(\theta) = \frac{1}{b-a} \quad \text{if } \theta \in [a, b]$$
$$P(\ln\theta) = \frac{1}{\ln b - \ln a} \quad \text{if } \ln\theta \in [\ln a, \ln b]$$
$$P(\theta) = \frac{1}{\theta(1-\theta)}$$
Gibbs sampler
References
Roberts, G.O., and J.S. Rosenthal (2004), "General state space Markov chains and MCMC algorithms," Probability Surveys 1, 20-71.