Economics 536 Introduction To Specification Testing in Dynamic Econometric Models
Lecture 7
Consider the linear model

y_i = x_i′β + u_i,

estimated by ordinary least squares, but when the errors are autocorrelated, for example, u ∼ N(0, Ω), then the efficient (GLS) estimator is

β̌ = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y,

which amounts to transforming the data by Ω^(−1/2) before applying least squares. This, in effect, restores the model to the original iid structure and thereby achieves optimality. DiNardo and Johnston give an example, a standard
one, showing that the precision of β̌ can be considerably greater than that
of β̂, even for modest amounts of autocorrelation.
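A small Monte Carlo sketch of this precision comparison under AR(1) errors (all numerical choices here are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps, rho = 100, 500, 0.8
x = rng.normal(size=n)                       # fixed regressor across replications
X = np.column_stack([np.ones(n), x])

# AR(1) error covariance: Omega[i, j] = rho^|i-j| / (1 - rho^2)
i = np.arange(n)
Omega = rho ** np.abs(i[:, None] - i[None, :]) / (1 - rho ** 2)
L = np.linalg.cholesky(Omega)
P = np.linalg.inv(L)                         # P @ u has identity covariance

ols = np.empty(reps)
gls = np.empty(reps)
for r in range(reps):
    u = L @ rng.normal(size=n)               # u ~ N(0, Omega)
    y = 1.0 + 2.0 * x + u
    ols[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]          # OLS slope
    gls[r] = np.linalg.lstsq(P @ X, P @ y, rcond=None)[0][1]  # GLS slope
```

Both estimators are unbiased here, but the sampling spread of the GLS slope is noticeably smaller, which is the precision gain referred to above.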
One can estimate directly the OLS covariance matrix using a variant of the proposal of Newey and West (1987)

Ω̂ = Γ̂_0 + Σ_{j=1}^{p} (1 − j/(p+1)) (Γ̂_j + Γ̂_j′)

where

Γ̂_j = n⁻¹ Σ_{t=j+1}^{n} û_t û_{t−j} x_t x′_{t−j}.
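A minimal sketch of this estimator in Python (the function name and use of numpy are my own; the OLS residuals û stand in for the unobserved errors):

```python
import numpy as np

def newey_west(X, u, p):
    """HAC estimate of Omega with Bartlett weights (1 - j/(p+1)).

    X : (n, k) regressor matrix, u : (n,) OLS residuals, p : truncation lag.
    """
    n, _ = X.shape
    Xu = X * u[:, None]                    # row t is u_t * x_t'
    omega = Xu.T @ Xu / n                  # Gamma_0
    for j in range(1, p + 1):
        # Gamma_j = n^-1 sum_{t=j+1}^{n} u_t u_{t-j} x_t x_{t-j}'
        gamma_j = Xu[j:].T @ Xu[:-j] / n
        omega += (1 - j / (p + 1)) * (gamma_j + gamma_j.T)
    return omega
```

The Bartlett weights guarantee that Ω̂ is positive semi-definite; the sandwich covariance estimate for β̂ is then (X′X)⁻¹ n Ω̂ (X′X)⁻¹.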
Consider now the dynamic model

y_t = β_0 y_{t−1} + β_1 x_t + u_t

where

u_t = ρu_{t−1} + ε_t,

with {ε_t} assumed to be iid N(0, σ²). Consistent estimation of β requires orthogonality of u_t and the “explanatory variables” (y_{t−1}, x_t), but note that when ρ ≠ 0, u_t is correlated with y_{t−1}, so OLS is inconsistent. The Breusch-Godfrey test addresses this by testing
Ho : ρ1 = ρ2 = · · · = ρs = 0
in the potential autocorrelation model

u_t = ρ_1 u_{t−1} + ⋯ + ρ_s u_{t−s} + ε_t.

The test statistic is T_n = nR², where R² is taken from the auxiliary regression of the OLS residuals û_t on the original regressors (y_{t−1}, x_t) and the lagged residuals û_{t−1}, …, û_{t−s}.
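A minimal Python sketch of this statistic (the function name is mine; padding the initial lagged residuals with zeros is one common convention):

```python
import numpy as np

def breusch_godfrey(y, X, s):
    """LM test for AR(s) errors: T_n = n * R^2 from regressing the OLS
    residuals on the original regressors and s of their own lags."""
    n = len(y)
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals
    # lagged residuals, initial values padded with zeros
    lags = np.column_stack([np.r_[np.zeros(j), u[:-j]] for j in range(1, s + 1)])
    Z = np.hstack([X, lags])
    fit = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    r2 = 1 - np.sum((u - fit) ** 2) / np.sum((u - u.mean()) ** 2)
    return n * r2
```

Under H0 the statistic is approximately χ²_s; large values indicate autocorrelated errors.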
Digression on R2 asymptotics
The connection between R2 and F is an important aspect of trying to
interpret nR2 as a reasonable test statistic. For a general linear hypothesis,
say Rβ = r, in the regression setting, let Sω and SΩ denote the restricted
and unrestricted sums of squared residuals, and
F = ((S_ω − S_Ω)/q) / (S_Ω/(n − p))
Under H0, S_Ω/(n − p) and S_ω/n both consistently estimate σ², and for the auxiliary regression R² = (S_ω − S_Ω)/S_ω, so that nR² ≈ (S_ω − S_Ω)/σ̂² = qF, and therefore nR² is approximately equal to the numerator χ²_q of the F statistic. Now to make the connection between χ²_q and F we need only note that χ²_q/q ∼ F_{q,∞}. Alternatively, we may observe that under H0, R² → 0, so

((n − p)/q) · (R²/(1 − R²)) ≈ (n/q) R².
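The algebra above is easy to verify numerically; a small Python check for the case where the restricted model contains only an intercept (the data and parameter choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 100, 3                            # q slope restrictions under test
X = np.column_stack([np.ones(n), rng.normal(size=(n, q))])
y = X @ np.array([1.0, 0.5, 0.0, -0.2]) + rng.normal(size=n)
p = q + 1                                # parameters including the intercept

res = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
S_Omega = res @ res                      # unrestricted sum of squares
S_omega = np.sum((y - y.mean()) ** 2)    # restricted: intercept only

F = ((S_omega - S_Omega) / q) / (S_Omega / (n - p))
R2 = 1 - S_Omega / S_omega
F_from_R2 = (n - p) / q * R2 / (1 - R2)  # identical to F by algebra
```

The two expressions for F agree exactly; the nR² form only replaces the finite-sample denominators with their common probability limit.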
The next obvious question is: what do we do if the Breusch-Godfrey test
rejects H0 ? There are two general approaches which I will describe briefly.
Nonlinear Models
Dynamic models of the type we have been discussing can always be
written in the form
y_t = D(L)x_t + u_t

where D(L) = B(L)/A(L) is called a rational lag polynomial. As an example, take the simple model considered above; multiplying through by (1 − ρL) to whiten the errors gives

y_t = (β_0 + ρ)y_{t−1} − ρβ_0 y_{t−2} + β_1 x_t − ρβ_1 x_{t−1} + ε_t.
Since this version of the model has a nice (iid) error structure, it can
be consistently estimated by ordinary least squares. Note, however, that we
now have 4 parameters not 3, as in the original formulation of the model.
Since these 4 parameters are simple functions of the original 3 parameters,
we can impose the implied constraints and estimate the model by nonlinear
least squares.
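A sketch of this constrained nonlinear least squares estimation, using scipy's least_squares on simulated data (the variable names, starting values, and simulation design are all my own):

```python
import numpy as np
from scipy.optimize import least_squares

# simulate y_t = b0*y_{t-1} + b1*x_t + u_t with AR(1) errors u_t = rho*u_{t-1} + e_t
rng = np.random.default_rng(3)
b0, b1, rho = 0.5, 1.0, 0.6
n = 500
x = rng.normal(size=n)
u = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + rng.normal()
    y[t] = b0 * y[t - 1] + b1 * x[t] + u[t]

def resid(theta):
    """Residuals of the quasi-differenced model:
    eps_t = y_t - (B0+R)y_{t-1} + R*B0*y_{t-2} - B1*x_t + R*B1*x_{t-1},
    i.e. 4 reduced-form coefficients constrained to 3 free parameters."""
    B0, B1, R = theta
    t = np.arange(2, n)
    return (y[t] - (B0 + R) * y[t - 1] + R * B0 * y[t - 2]
            - B1 * x[t] + R * B1 * x[t - 1])

fit = least_squares(resid, x0=[0.3, 0.8, 0.4])
```

Minimizing over (β_0, β_1, ρ) directly imposes the cross-coefficient restrictions, rather than estimating the four reduced-form coefficients freely.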
A related approach, which we may also illustrate with this simple model,
is to write the original model in the form
y_t = Σ_{j=0}^{∞} δ_j x_{t−j} + v_t.
This form may appear impractical since we have an infinite number of δj ’s,
but as we have seen in the second lecture these δ’s may be expressed in
terms of a finite number of α’s and β’s. Harvey (1989) discusses several
versions of this in some simple models and the resulting nonlinear least
squares estimation strategy.
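For instance, in the simple model y_t = β_0 y_{t−1} + β_1 x_t + u_t we have D(L) = β_1/(1 − β_0 L), so δ_j = β_1 β_0^j; a tiny sketch (function name mine):

```python
import numpy as np

def rational_lag_weights(beta0, beta1, m):
    """First m+1 weights of D(L) = beta1 / (1 - beta0*L): delta_j = beta1 * beta0**j."""
    return beta1 * beta0 ** np.arange(m + 1)
```

Truncating at a modest m yields a finite distributed-lag regression whose coefficients are constrained functions of (β_0, β_1), which is what the nonlinear least squares strategy exploits.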
Instrumental Variable Estimation
An alternative strategy for estimating models of this type relies on in-
strumental variables. Recall that our fundamental problem, the bias result-
ing from autocorrelation in dynamic models, was attributed to the lack of
orthogonality between errors and lagged endogenous variables. An obvious
strategy for dealing with this problem is to identify suitable instrumental
variables (IV’s) which have the properties: (i) correlation with the offending explanatory variable, here y_{t−1}, and (ii) orthogonality to the errors. Lagged values of the exogenous variable, like x_{t−1}, are natural candidates.
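As a sketch of this strategy, a minimal two-stage least squares (2SLS) implementation using x_{t−1} as an instrument for y_{t−1} in the dynamic model above (function names and simulated data are my own):

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: project X on instruments Z, then regress y
    on the fitted values."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]    # second stage

# simulate y_t = 0.5*y_{t-1} + x_t + u_t with AR(1) errors, rho = 0.6
rng = np.random.default_rng(4)
n = 4000
x = rng.normal(size=n)
u = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + x[t] + u[t]

t = np.arange(1, n)
X = np.column_stack([y[t - 1], x[t]])    # regressors (y_{t-1} endogenous)
Z = np.column_stack([x[t - 1], x[t]])    # instruments: lagged exogenous x
ols = np.linalg.lstsq(X, y[t], rcond=None)[0]
iv = tsls(y[t], X, Z)
```

In this design OLS is noticeably biased for the coefficient on y_{t−1}, while the IV estimate is consistent; x_{t−1} is a strong instrument here because it enters the equation for y_{t−1} directly.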
ARCH in Brief
Frequently, we observe (particularly in financial data) time-varying heteroscedasticity. In the early 80’s Engle coined the term autoregressive conditional heteroscedasticity (ARCH) to refer to models in which

u_t = h_t ε_t,   h_t² = α_0 + α_1 u²_{t−1} + ⋯ + α_p u²_{t−p}.
These are simple examples of a broad class of nonlinear time series models.
In the ARCH model we have unconditional expectations,
1.) E u_t = E h_t ε_t = 0
2.) V u_t = α_0/A(1), where A(L) = 1 − α_1 L − ⋯ − α_p L^p
As long as the lag polynomial A(L) is stable, i.e., if the roots of A(z) = 0
lie outside the unit circle, then we get a gradual oscillation of ht around the
unconditional variance. In integrated ARCH, i.e., roots on the unit circle,
we get long swings away from the initial ht .
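A short simulation makes the h_t dynamics concrete; a sketch of ARCH(1) with u_t = h_t ε_t and h_t² = α_0 + α_1 u²_{t−1}, for the stable case α_1 < 1 (names and parameter values mine):

```python
import numpy as np

def simulate_arch1(alpha0, alpha1, n, seed=0):
    """Simulate ARCH(1): u_t = h_t * eps_t, h_t^2 = alpha0 + alpha1 * u_{t-1}^2.
    Assumes alpha1 < 1 so the unconditional variance alpha0/(1 - alpha1) exists."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=n)
    u = np.zeros(n)
    h2 = np.full(n, alpha0 / (1 - alpha1))   # start at the unconditional variance
    u[0] = np.sqrt(h2[0]) * eps[0]
    for t in range(1, n):
        h2[t] = alpha0 + alpha1 * u[t - 1] ** 2
        u[t] = np.sqrt(h2[t]) * eps[t]
    return u, h2
```

With α_1 < 1 the simulated h_t² oscillates around α_0/(1 − α_1); setting α_1 = 1 (the integrated case, with the initialization adjusted) instead produces the long swings described above.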
Testing for ARCH
Naive LM tests can be implemented just like LM-AR tests: regress û²_t on {û²_{t−1}, …, û²_{t−q}} and x_t, compute nR² and compare to χ²_q, or compute nR²/q and compare to F_{q,n−p}. Note that in this form the test may be regarded as a joint test
for ARCH and heteroscedasticity of the form usually tested by the Breusch-
Pagan, and related tests. One could obviously consider refining the hypoth-
esis under consideration in light of the results obtained for this expanded
version of the test.
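A sketch of this LM test in Python (function name mine; the x_t terms are omitted here, so this version targets ARCH effects alone):

```python
import numpy as np

def arch_lm(u, q):
    """Engle-style LM test: regress u_t^2 on a constant and q of its own lags;
    return n * R^2, approximately chi-squared with q df under H0."""
    u2 = u ** 2
    n = len(u2) - q
    Y = u2[q:]
    Z = np.column_stack([np.ones(n)] +
                        [u2[q - j:n + q - j] for j in range(1, q + 1)])
    fit = Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]
    r2 = 1 - np.sum((Y - fit) ** 2) / np.sum((Y - Y.mean()) ** 2)
    return n * r2
```

Adding the x_t columns to Z recovers the joint ARCH/heteroscedasticity version described in the text.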
The LM Principle
Let ℓ(θ̃) denote the log likelihood evaluated at the MLE under H0: θ ∈ Θ_0 ⊂ Θ_1; we are interested in testing H0 against H1: θ ∈ Θ_1. One way to do this is to ask how ℓ changes as we move the restricted estimator θ̃ ∈ Θ_0 toward the unrestricted θ̂ ∈ Θ_1. To explore this fully we would need to say something about nonlinear optimization, but this would take us too far from the main topic. Suffice it to say that the LM test is based on the magnitude of the gradient of ℓ at θ̃ ∈ Θ_0 in the direction of θ̂ ∈ Θ_1; in its standard form the statistic is ∇ℓ(θ̃)′ I(θ̃)⁻¹ ∇ℓ(θ̃), which is asymptotically χ²_q under H0, with q the number of restrictions. Again, there are
questions to be answered about what to do when ARCH effects are found
to be present in the model. Joint estimation of ARCH and regression shift
effects is the preferred solution when it is computationally feasible. Various
iterative solutions are obviously available as alternatives.