Econometrics (EM2008/EM2Q05)
Lecture 6
Autocorrelation
Irene Mammi
outline

▶ autocorrelation
▶ autocorrelated disturbances
▶ testing for autocorrelated disturbances
▶ estimation with autocorrelated disturbances
▶ autoregressive conditional heteroskedasticity

References:
▶ Johnston, J. and J. DiNardo (1997), Econometric Methods, 4th Edition, McGraw-Hill, New York, Chapter 6.
autocorrelated disturbances

▶ heteroskedasticity affects the elements on the principal diagonal of var(u), but it is still assumed that E(u_t u_{t+s}) = 0 for all t and s ≠ 0
▶ when the errors are autocorrelated, this assumption no longer holds
▶ the pairwise autocovariances are
\[
\gamma_s = E(u_t u_{t+s}), \qquad s = 0, \pm 1, \pm 2, \ldots
\]
▶ when s = 0 we have
\[
\gamma_0 = E(u_t^2) = \sigma_u^2
\]
▶ the autocorrelation coefficient at lag s is
\[
\rho_s = \frac{\operatorname{cov}(u_t, u_{t+s})}{\sqrt{\operatorname{var}(u_t)\,\operatorname{var}(u_{t+s})}}
\]
▶ under homoskedasticity, this reduces to
\[
\rho_s = \frac{\gamma_s}{\gamma_0}, \qquad s = 0, \pm 1, \pm 2, \ldots
\]
autocorrelated disturbances (cont.)

\[
\operatorname{var}(u) =
\begin{pmatrix}
\gamma_0 & \gamma_1 & \cdots & \gamma_{n-1} \\
\gamma_1 & \gamma_0 & \cdots & \gamma_{n-2} \\
\vdots & \vdots & \ddots & \vdots \\
\gamma_{n-1} & \gamma_{n-2} & \cdots & \gamma_0
\end{pmatrix}
= \sigma_u^2
\begin{pmatrix}
1 & \rho_1 & \cdots & \rho_{n-1} \\
\rho_1 & 1 & \cdots & \rho_{n-2} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{n-1} & \rho_{n-2} & \cdots & 1
\end{pmatrix}
\]
autoregressive and moving average schemes

▶ a first-order autoregressive AR(1) process is defined as
\[
u_t = \varphi u_{t-1} + e_t, \qquad |\varphi| < 1
\]
▶ its variance is
\[
\operatorname{var}(u_t) = \sigma_u^2 = \frac{\sigma_e^2}{1 - \varphi^2}
\]
autoregressive and moving average schemes (cont.)

▶ the autocorrelation coefficients are
\[
\rho_s = \varphi^s, \qquad s = 0, 1, 2, \ldots
\]
▶ the AR(1) process has the moving average representation
\[
u_t = e_t + \varphi e_{t-1} + \varphi^2 e_{t-2} + \cdots
\]
▶ the disturbance variance matrix is
\[
\operatorname{var}(u) = \sigma_u^2
\begin{pmatrix}
1 & \varphi & \cdots & \varphi^{n-1} \\
\varphi & 1 & \cdots & \varphi^{n-2} \\
\vdots & \vdots & \ddots & \vdots \\
\varphi^{n-1} & \varphi^{n-2} & \cdots & 1
\end{pmatrix}
\]
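As a quick numerical check of ρ_s = ϕ^s and of the variance formula, here is a minimal simulation sketch (assuming numpy is available; the sample size, ϕ, and σ_e are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi, sigma_e = 100_000, 0.6, 1.0   # illustrative values

# simulate u_t = phi * u_{t-1} + e_t
e = rng.normal(0.0, sigma_e, n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = phi * u[t - 1] + e[t]

# sample autocorrelation at lag s vs. the theoretical value phi**s
for s in range(1, 5):
    r_s = np.corrcoef(u[:-s], u[s:])[0, 1]
    print(f"lag {s}: sample {r_s:.3f}  theory {phi**s:.3f}")

# sample variance vs. sigma_e^2 / (1 - phi^2)
print(f"var(u): sample {u.var():.3f}  theory {sigma_e**2 / (1 - phi**2):.3f}")
```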
autoregressive and moving average schemes (cont.)

▶ now we need to estimate only k + 2 parameters: a feasible GLS procedure exists
▶ a first-order moving average MA(1) process is defined as
\[
u_t = e_t + \theta e_{t-1}
\]
with
\[
\sigma_u^2 = \sigma_e^2 (1 + \theta^2), \qquad \rho_1 = \frac{\theta}{1 + \theta^2}, \qquad \rho_i = 0 \quad i = 2, 3, \ldots
\]
(the sketch below checks these moments by simulation)
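A minimal simulation check of the MA(1) moments, again assuming numpy (θ and the sample size are illustrative; σ_e is normalized to 1 so that var(u) = 1 + θ²):

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta = 100_000, 0.5            # illustrative values

e = rng.normal(size=n + 1)
u = e[1:] + theta * e[:-1]         # u_t = e_t + theta * e_{t-1}

print(f"rho_1: sample {np.corrcoef(u[:-1], u[1:])[0, 1]:.3f}  "
      f"theory {theta / (1 + theta**2):.3f}")
print(f"rho_2: sample {np.corrcoef(u[:-2], u[2:])[0, 1]:.3f}  theory 0")
print(f"var(u): sample {u.var():.3f}  theory {1 + theta**2:.3f}")
```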
OLS and autocorrelated errors

▶ OLS with nonstochastic X and autocorrelated errors is unbiased and consistent but inefficient; the usual inference procedures are invalid
▶ if lags of y appear in X, we have more serious problems
▶ assume, for example,
\[
y_t = \beta y_{t-1} + u_t, \qquad |\beta| < 1
\]
\[
u_t = \varphi u_{t-1} + e_t, \qquad |\varphi| < 1
\]
where
\[
E(e) = 0 \quad \text{and} \quad E(ee') = \sigma_e^2 I
\]
▶ estimating β by OLS gives
\[
b = \frac{\sum y_t y_{t-1}}{\sum y_{t-1}^2} = \beta + \frac{\sum y_{t-1} u_t}{\sum y_{t-1}^2}
\]
thus,
\[
\operatorname{plim} b = \beta + \frac{\operatorname{plim} \frac{1}{n} \sum y_{t-1} u_t}{\operatorname{plim} \frac{1}{n} \sum y_{t-1}^2}
\]
OLS and autocorrelated errors (cont.)

▶ the numerator probability limit is nonzero:
\[
\operatorname{plim} \frac{1}{n} \sum y_{t-1} u_t = \varphi \sigma_u^2 + \beta \varphi^2 \sigma_u^2 + \beta^2 \varphi^3 \sigma_u^2 + \cdots = \frac{\varphi \sigma_u^2}{1 - \beta \varphi}
\]
so OLS is inconsistent for β whenever ϕ ≠ 0 (see the simulation sketch below)
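A short simulation sketch of this inconsistency (numpy assumed; β, ϕ, and n are illustrative values): even with a very large sample, the OLS slope in the lagged-dependent-variable model does not converge to β when the errors are AR(1).

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, phi = 200_000, 0.5, 0.6   # illustrative values

# generate u_t = phi*u_{t-1} + e_t and y_t = beta*y_{t-1} + u_t
e = rng.normal(size=n)
u = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    u[t] = phi * u[t - 1] + e[t]
    y[t] = beta * y[t - 1] + u[t]

# OLS of y_t on y_{t-1}
b = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
print(f"true beta = {beta},  OLS b = {b:.3f}  (does not converge to beta)")
```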
testing for autocorrelation

▶ suppose that in the model y = Xβ + u one suspects that the error follows an AR(1) process:
\[
u_t = \varphi u_{t-1} + e_t
\]
▶ the hypotheses are
\[
H_0: \varphi = 0 \qquad H_1: \varphi \neq 0
\]
▶ the disturbances are unobservable, so any test must be based on the OLS residuals, which implies that, also under the null, the OLS residuals will display some autocorrelation
testing for autocorrelation (cont.)

Durbin-Watson test

▶ the Durbin-Watson test is not applicable when the regressors include lagged values of the dependent variable, as in
\[
y_t = \beta_1 y_{t-1} + \cdots + \beta_r y_{t-r} + \beta_{r+1} x_{1t} + \cdots + \beta_{r+s} x_{st} + u_t
\]
with
\[
u_t = \varphi u_{t-1} + e_t, \qquad |\varphi| < 1 \quad \text{and} \quad e \sim N(0, \sigma_e^2 I)
\]
▶ for this case Durbin proposed the h statistic
testing for autocorrelation (cont.)

▶ Durbin's basic result is that under the null, H0: ϕ = 0, the statistic is
\[
h = \hat{\varphi} \sqrt{\frac{n}{1 - n \cdot \operatorname{var}(b_1)}} \;\overset{a}{\sim}\; N(0, 1)
\]
where
▶ n = sample size
▶ var(b_1) = estimated sampling variance of the coefficient of y_{t-1} in the OLS fit of the complete model
▶ $\hat{\varphi} = \sum_{t=2}^{n} e_t e_{t-1} \big/ \sum_{t=2}^{n} e_{t-1}^2$, the estimate of ϕ from the regression of e_t on e_{t-1}, the e's in turn being the residuals from the OLS regression of the complete model
(a computational sketch follows below)
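A sketch of the h computation using statsmodels (assumed installed; the data-generating values are arbitrary illustrations). Note that h is only computable when n·var(b_1) < 1, hence the guard in the code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500                                   # illustrative sample size
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                     # illustrative DGP with a lagged y
    y[t] = 0.5 * y[t - 1] + 1.0 * x[t] + rng.normal()

# OLS of the complete model: y_t on a constant, y_{t-1}, and x_t
X = sm.add_constant(np.column_stack([y[:-1], x[1:]]))
fit = sm.OLS(y[1:], X).fit()
e = fit.resid

# phi_hat from the regression of e_t on e_{t-1} (no constant)
phi_hat = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])

# Durbin's h; var_b1 is the estimated variance of the y_{t-1} coefficient
var_b1 = fit.cov_params()[1, 1]
m = len(e)
if m * var_b1 < 1:                        # h is undefined otherwise
    h = phi_hat * np.sqrt(m / (1 - m * var_b1))
    print(f"Durbin h = {h:.3f}  (compare with N(0,1) critical values)")
```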
testing for autocorrelation (cont.)
I an asymptotically equivalent procedure is the following
1. estimate the model by OLS and obtain the residual e’s
2. estimate the OLS regression of
et on et −1 , yt −1 , . . . , yt −r , x1t , . . . , xst
ut = ϕ1 ut −1 + ϕ2 ut −2 + · · · + ϕp ut −p + et
H0 : ϕ 1 = ϕ 2 = · · · = ϕ p = 0
16 / 30
testing for autocorrelation (cont.)

Breusch-Godfrey test

▶ consider the model
\[
y_t = \beta_1 + \beta_2 x_t + u_t
\]
with
\[
u_t = \beta_3 u_{t-1} + e_t
\]
where it is assumed that |β_3| < 1 and that the e's are i.i.d. N(0, σ_e²)
▶ substituting the second equation into the first one, we have
\[
y_t = \beta_1 (1 - \beta_3) + \beta_2 x_t + \beta_3 y_{t-1} - \beta_2 \beta_3 x_{t-1} + e_t
\]
▶ we want to test H0: β_3 = 0
▶ the test is obtained in two steps:
1. apply OLS to the model y_t = β_1 + β_2 x_t + u_t to obtain the residuals e_t
testing for autocorrelation (cont.)
et −1 to find R 2
2. regress et on 1 xt
I under H0 , nR 2 is asymptotically χ2 (1)
I the second (auxiliary) regression is exactly the regression of the
Durbin procedure
I the procedure extends to testing for higher order autocorrelation:
simply add further-lagged OLS residuals to the second regression
I the Breusch-Godfrey test also tests against the alternative hypothesis
of an MA(p) process for the error
I the Durbin and Breusch-Godfrey procedures are asymptotically
equivalent
18 / 30
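statsmodels packages this auxiliary-regression test directly; a minimal sketch (statsmodels assumed installed, data simulated purely for illustration):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)

# simulate AR(1) errors so the test has something to detect
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

fit = sm.OLS(y, sm.add_constant(x)).fit()

# LM version: nR^2 from the auxiliary regression, chi2(p) under H0
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(fit, nlags=1)
print(f"BG LM statistic = {lm_stat:.2f}, p-value = {lm_pval:.4f}")
```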
testing for autocorrelation (cont.)

Box-Pierce-Ljung statistic

▶ the Box-Pierce statistic is $Q = n \sum_{j=1}^{p} r_j^2$, refined by Ljung and Box as
\[
Q' = n(n+2) \sum_{j=1}^{p} \frac{r_j^2}{n - j}
\]
where
\[
r_j = \frac{\sum_{t=j+1}^{n} e_t e_{t-j}}{\sum_{t=1}^{n} e_t^2}
\]
▶ the limiting distribution of Q was derived under the assumption that the residuals come from an AR (or ARMA) scheme fitted to some variable y
▶ under the hypothesis of zero autocorrelations for the residuals, Q will have an asymptotic χ² distribution, with degrees of freedom equal to p minus the number of parameters estimated in fitting the ARMA model (see the sketch below)
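statsmodels provides both statistics through acorr_ljungbox; a minimal sketch (statsmodels assumed; the white-noise series stands in for residuals from any fitted model):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)

# white-noise "residuals": Q should be insignificant at all lags
e = rng.normal(size=500)

# boxpierce=True returns both the Box-Pierce and Ljung-Box versions
lb = acorr_ljungbox(e, lags=[5, 10], boxpierce=True)
print(lb)   # columns: lb_stat / lb_pvalue and bp_stat / bp_pvalue
```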
estimation of models with autocorrelated disturbances

▶ one possibility is to consider a joint specification of the relationship, y = Xβ + u, and an associated autocorrelation structure
▶ a caution first: detected autocorrelation may simply signal misspecified dynamics; suppose the true model is
\[
y_t = \gamma_1 + \gamma_2 x_t + \gamma_3 x_{t-1} + \gamma_4 y_{t-1} + e_t
\]
where the errors are white noise, but the estimated model only includes x_t: not surprisingly, significant autocorrelation is detected
estimation of models with autocorrelated disturbances (cont.)

▶ suppose now that, to account for this autocorrelation, we specify
\[
y_t = \beta_1 + \beta_2 x_t + u_t \quad \text{and} \quad u_t = \varphi u_{t-1} + e_t
\]
▶ substituting out u_t gives
\[
y_t = \beta_1 (1 - \varphi) + \beta_2 x_t - \varphi \beta_2 x_{t-1} + \varphi y_{t-1} + e_t
\]
which is the general dynamic model above subject to the nonlinear common-factor restriction
\[
\gamma_3 + \gamma_2 \gamma_4 = 0
\]

GLS estimation
estimation of models with autocorrelated disturbances (cont.)

▶ for AR(1) disturbances,
\[
\operatorname{var}(u) = \sigma_u^2
\begin{pmatrix}
1 & \varphi & \cdots & \varphi^{n-1} \\
\varphi & 1 & \cdots & \varphi^{n-2} \\
\vdots & \vdots & \ddots & \vdots \\
\varphi^{n-1} & \varphi^{n-2} & \cdots & 1
\end{pmatrix}
= \frac{\sigma_e^2}{1 - \varphi^2}
\begin{pmatrix}
1 & \varphi & \cdots & \varphi^{n-1} \\
\varphi & 1 & \cdots & \varphi^{n-2} \\
\vdots & \vdots & \ddots & \vdots \\
\varphi^{n-1} & \varphi^{n-2} & \cdots & 1
\end{pmatrix}
= \sigma_e^2 \Omega
\]
where
\[
\Omega = \frac{1}{1 - \varphi^2}
\begin{pmatrix}
1 & \varphi & \cdots & \varphi^{n-1} \\
\varphi & 1 & \cdots & \varphi^{n-2} \\
\vdots & \vdots & \ddots & \vdots \\
\varphi^{n-1} & \varphi^{n-2} & \cdots & 1
\end{pmatrix}
\]
estimation of models with autocorrelated disturbances (cont.)

▶ the inverse matrix is
\[
\Omega^{-1} =
\begin{pmatrix}
1 & -\varphi & 0 & \cdots & 0 & 0 & 0 \\
-\varphi & 1 + \varphi^2 & -\varphi & \cdots & 0 & 0 & 0 \\
0 & -\varphi & 1 + \varphi^2 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & -\varphi & 1 + \varphi^2 & -\varphi \\
0 & 0 & 0 & \cdots & 0 & -\varphi & 1
\end{pmatrix}
\]
▶ it satisfies Ω⁻¹ = P′P, where P is the (Prais-Winsten) transformation matrix whose first row is $(\sqrt{1 - \varphi^2}, 0, \ldots, 0)$ and whose remaining rows apply the quasi-difference $y_t - \varphi y_{t-1}$; GLS is then OLS on the transformed data Py, PX (see the sketch below)
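A minimal sketch of feasible GLS via this transformation (numpy and statsmodels assumed; ϕ is estimated from OLS residuals, and the simulated data are illustrative). statsmodels' GLSAR offers a packaged iterative alternative, shown on the last line.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, phi_true = 500, 0.7
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = phi_true * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

X = sm.add_constant(x)

# step 1: OLS, then estimate phi from the residuals
e = sm.OLS(y, X).fit().resid
phi = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])

# step 2: Prais-Winsten transform, i.e. premultiply by P
c = np.sqrt(1 - phi**2)
y_star = np.concatenate([[c * y[0]], y[1:] - phi * y[:-1]])
X_star = np.vstack([c * X[0], X[1:] - phi * X[:-1]])

# step 3: OLS on the transformed data = feasible GLS
fgls = sm.OLS(y_star, X_star).fit()
print(fgls.params)          # should be close to (1, 2)

# packaged alternative: iterated Cochrane-Orcutt via GLSAR
print(sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10).params)
```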
autoregressive conditional heteroskedasticity (ARCH)

▶ Engle suggested that heteroskedasticity might also occur in a time series framework
▶ in exchange rate and stock market returns, large and small errors tend to occur in clusters
▶ Engle formulated the idea that the recent past might give information about the conditional disturbance variance, and postulated the relation
\[
\sigma_t^2 = \alpha_0 + \alpha_1 u_{t-1}^2 + \cdots + \alpha_p u_{t-p}^2
\]
autoregressive conditional heteroskedasticity (ARCH) (cont.)

testing for ARCH

▶ the obvious test is implied by σ_t² = α_0 + α_1 u²_{t-1} + ⋯ + α_p u²_{t-p}:
1. fit y to X by OLS and obtain the residuals {e_t}
2. compute the OLS regression
\[
e_t^2 = \hat{\alpha}_0 + \hat{\alpha}_1 e_{t-1}^2 + \cdots + \hat{\alpha}_p e_{t-p}^2 + \text{error}
\]
3. test the joint significance of $\hat{\alpha}_1, \ldots, \hat{\alpha}_p$
▶ if these coefficients are significantly different from zero, the assumption of conditionally homoskedastic disturbances is rejected in favor of ARCH disturbances (see the sketch below)
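statsmodels packages this LM test as het_arch; a minimal sketch (statsmodels assumed; the ARCH(1) series is simulated purely for illustration):

```python
import numpy as np
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(7)
n = 1000

# simulate ARCH(1) errors: sigma_t^2 = 0.5 + 0.5 * u_{t-1}^2
u = np.zeros(n)
for t in range(1, n):
    u[t] = rng.normal() * np.sqrt(0.5 + 0.5 * u[t - 1]**2)

# LM statistic = n R^2 from the regression of u_t^2 on its own lags
lm_stat, lm_pval, f_stat, f_pval = het_arch(u, nlags=1)
print(f"ARCH LM = {lm_stat:.2f}, p-value = {lm_pval:.4f}")
```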
estimation under ARCH

▶ to estimate the regression in stage 2, suitable restrictions on the α parameters need to be imposed, e.g. α_0 > 0 and α_i ≥ 0, so that the conditional variance is positive
autoregressive conditional heteroskedasticity (ARCH) (cont.)

▶ a less restrictive specification is the GARCH(p, q) model, which adds lags of the conditional variance itself:
\[
\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i u_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2
\]
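For estimation, the third-party arch package (not part of statsmodels; assumed installed, e.g. via pip install arch) fits GARCH by maximum likelihood; a minimal sketch on a placeholder series:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(8)
returns = rng.normal(size=1000)   # placeholder series; use real returns data

# GARCH(1,1): sigma_t^2 = omega + alpha * u_{t-1}^2 + beta * sigma_{t-1}^2
am = arch_model(returns, vol="GARCH", p=1, q=1, mean="Constant")
res = am.fit(disp="off")
print(res.params)                 # mu, omega, alpha[1], beta[1]
```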