
University of Illinois Department of Economics

Fall 2016 Roger Koenker

Economics 536
Lecture 7

Introduction to Specification Testing in Dynamic Econometric Models

In this lecture I want to briefly describe some techniques for evaluating dynamic econometric models like the models for gasoline demand you
have been estimating. Until now, we have implicitly assumed that these
models satisfied the classical assumptions of the Gaussian linear model. In
particular, we have assumed that the error sequences {ut } were iid and ap-
proximately Gaussian, thus justifying the application of elementary least
squares methods of estimation.

Testing for Autocorrelation


We might begin by recalling some basic facts about autocorrelation. In
classical regression with fixed regressors,

y_i = x_i′β + u_i

we know that if the vector u = (u_i) ∼ N(0, σ²I), then

β̂ = (X′X)⁻¹X′y ∼ N(β, σ²(X′X)⁻¹)

but when the errors are autocorrelated, for example, u ∼ N(0, Ω), then

β̂ ∼ N(β, (X′X)⁻¹X′ΩX(X′X)⁻¹)

and therefore the conventional estimates of standard errors from ordinary least squares regression may badly misrepresent the true precision of β̂. Of course, in this case it is preferable to use the generalized least squares estimator

β̌ = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y ∼ N(β, (X′Ω⁻¹X)⁻¹)

which, in effect, restores the model to the original iid structure and thereby
achieves optimality. DiNardo and Johnston give a standard example showing that the precision of β̌ can be considerably greater than that
of β̂, even for modest amounts of autocorrelation.
One can estimate directly the OLS covariance matrix using a variant of the proposal of Newey and West (1987),

Ω̂ = Γ̂_0 + Σ_{j=1}^{p} (1 − j/(p+1))(Γ̂_j + Γ̂_j′)

where

Γ̂_j = n⁻¹ Σ_{t=j+1}^{n} û_t û_{t−j} x_t x_{t−j}′

and p denotes the truncation lag of the estimator.
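As a concrete illustration, here is a minimal numpy sketch of this estimator. The function name is mine, and the final sandwich form for the covariance of β̂ is the standard (X′X)⁻¹(nΩ̂)(X′X)⁻¹ construction rather than anything stated in the text:

```python
import numpy as np

def newey_west_cov(X, u, p):
    """Newey-West (HAC) covariance for the OLS estimator.

    Builds Omega_hat = Gamma_0 + sum_{j=1}^p (1 - j/(p+1)) (Gamma_j + Gamma_j')
    with Gamma_j = n^{-1} sum_{t=j+1}^n u_t u_{t-j} x_t x_{t-j}',
    then returns the sandwich (X'X)^{-1} (n Omega_hat) (X'X)^{-1}.
    """
    n, _ = X.shape
    xu = X * u[:, None]                      # row t is u_t * x_t'
    def gamma(j):
        return xu[j:].T @ xu[: n - j] / n    # Gamma_j
    omega = gamma(0)
    for j in range(1, p + 1):
        g = gamma(j)
        omega += (1 - j / (p + 1)) * (g + g.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ (n * omega) @ XtX_inv
```

With p = 0 this collapses to the familiar White heteroscedasticity-consistent estimator.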

When dynamic models are specified with lagged endogenous explanatory variables, autocorrelation raises more serious problems. Even in the simplest examples, there are bias as well as efficiency costs to ignoring the autocorrelation. To see this, consider the model

y_t = β_0 y_{t−1} + β_1 x_t + u_t

where

u_t = ρu_{t−1} + ε_t,
with {ε_t} assumed to be iid N(0, σ²). Consistent estimation of β requires orthogonality of u_t and the "explanatory variables" (y_{t−1}, x_t), but note that

E y_{t−1} u_t = E(β_0 y_{t−2} + β_1 x_{t−1} + u_{t−1})(ρu_{t−1} + ε_t)
             = ρσ_u² + β_0 ρ E(y_{t−2} u_{t−1}),

where σ_u² = E u_t² denotes the variance of the u_t. But, by stationarity, a concept introduced in the next lecture, E y_{t−1} u_t = E y_{t−2} u_{t−1}, so

E y_{t−1} u_t = ρσ_u²/(1 − β_0 ρ).
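A quick simulation (all names and parameter values mine) illustrates that this correlation translates into a bias in the OLS estimate of β_0 that does not vanish as n grows:

```python
import numpy as np

# Simulate y_t = 0.5*y_{t-1} + 1.0*x_t + u_t with AR(1) errors
# u_t = 0.5*u_{t-1} + eps_t, then estimate (beta0, beta1) by OLS.
rng = np.random.default_rng(0)
n, beta0, beta1, rho = 20000, 0.5, 1.0, 0.5
x = rng.normal(size=n)
eps = rng.normal(size=n)
u = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
    y[t] = beta0 * y[t - 1] + beta1 * x[t] + u[t]

# OLS regression of y_t on (y_{t-1}, x_t)
X = np.column_stack([y[:-1], x[1:]])
bhat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
# bhat[0] settles well above the true beta0 = 0.5, because E[y_{t-1} u_t] > 0,
# while bhat[1] remains close to beta1 = 1.0.
```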
Not surprisingly, given the serious consequences of this bias effect, there is a large literature on testing for autocorrelation in this context. The most straightforward approach is that of Breusch and Godfrey, which derives from work of Durbin. While the details are usually reserved for Ec575, the Breusch and Godfrey test is easily described.
Let û_t = y_t − x_t′β̂, t = 1, …, n denote the residuals from a least squares regression in which the vector x_t may include lagged endogenous variables.
Suppose that we wish to test the hypothesis

H₀: ρ_1 = ρ_2 = ⋯ = ρ_s = 0

in the potential autocorrelation model

u_t = ρ_1 u_{t−1} + ⋯ + ρ_s u_{t−s} + ε_t.

Consider the auxiliary regression equation

û_t = ρ_1 û_{t−1} + ⋯ + ρ_s û_{t−s} + x_t′γ + v_t

and the associated test statistic

T_n = nR²

based on the conventional R² of the auxiliary regression. Under H₀, T_n ∼ χ²_s, providing a rather general testing strategy for this important class of models. Note that it is crucial that the variables x_t appear in the auxiliary regression, even though in some superficially similar circumstances this is found to be unnecessary.
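A minimal numpy implementation of this procedure might look as follows. The function name is mine, and pre-sample residuals are set to zero, which is one common convention:

```python
import numpy as np

def breusch_godfrey(y, X, s):
    """Breusch-Godfrey LM test for AR(s) autocorrelation.

    Regress the OLS residuals on s of their own lags plus the original
    regressors X, and return T_n = n * R^2, which is asymptotically
    chi-squared with s degrees of freedom under H0 of no autocorrelation.
    """
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    # lagged residuals, with pre-sample values set to zero
    lags = np.column_stack([np.r_[np.zeros(j), u[:n - j]] for j in range(1, s + 1)])
    Z = np.column_stack([lags, X])
    g = np.linalg.lstsq(Z, u, rcond=None)[0]
    v = u - Z @ g
    r2 = 1 - (v @ v) / ((u - u.mean()) @ (u - u.mean()))
    return n * r2
```

The returned statistic is then compared with the upper tail of the χ²_s distribution.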

Digression on R² asymptotics
The connection between R² and F is an important aspect of trying to interpret nR² as a reasonable test statistic. For a general linear hypothesis, say Rβ = r, in the regression setting, let S_ω and S_Ω denote the restricted and unrestricted sums of squared residuals, and
F = [(S_ω − S_Ω)/q] / [S_Ω/(n − p)]

and we can define an R² of ω relative to Ω as

R² = 1 − S_Ω/S_ω,

thus

[(n − p)/q] · [R²/(1 − R²)] = [(S_ω − S_Ω)/q] / [S_Ω/(n − p)] = F.
Under the null hypothesis we have suggested that nR² can be used as an asymptotically valid test statistic which has χ²_q behavior under the null, where q = rank(R). What is the connection of this to the foregoing algebra, which relates the finite sample value of R² to the appropriate F statistic for testing H₀? Note first that under H₀, S_Ω/(n − p) → σ² and S_ω/(n − p) → σ², so for n large,

nR² = (S_ω − S_Ω)/(S_ω/n) ≈ (S_ω − S_Ω)/(S_Ω/n)

and therefore nR² is approximately equal to the numerator χ²_q of the F statistic. Now to make the connection between χ²_q and F we need only note that χ²_q/q ∼ F_{q,∞}. Alternatively, we may observe that under H₀, R² → 0, so

[(n − p)/q] · [R²/(1 − R²)] ≈ (n/q) R².
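The approximation is easy to check numerically. In this toy example (all names and values mine) the hypothesis sets the last two coefficients to zero, so q = 2:

```python
import numpy as np

# Compare n*R^2 (restricted vs. unrestricted fit) with q*F for large n.
rng = np.random.default_rng(2)
n, q = 2000, 2
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + rng.normal(size=n)                  # H0 true: x1, x2 have zero coefficients
X_r = np.ones((n, 1))                         # restricted model: intercept only
X_u = np.column_stack([np.ones(n), x1, x2])   # unrestricted model, p = 3

def ssr(X):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ b
    return r @ r

S_w, S_O = ssr(X_r), ssr(X_u)
F = ((S_w - S_O) / q) / (S_O / (n - 3))
R2 = 1 - S_O / S_w
# n*R2 and q*F agree closely when n is large and R2 is small
```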
The next obvious question is: what do we do if the Breusch-Godfrey test
rejects H0 ? There are two general approaches which I will describe briefly.

Nonlinear Models
Dynamic models of the type we have been discussing can always be
written in the form
y_t = D(L)x_t + u_t
where D(L) = B(L)/A(L) is called a rational lag polynomial. As an exam-
ple, take the simple model

(∗)  y_t = αy_{t−1} + βx_t + u_t

with u_t = ρu_{t−1} + ε_t and {ε_t} iid N(0, σ²). Subtracting ρy_{t−1} = ραy_{t−2} + ρβx_{t−1} + ρu_{t−1} from (∗) we have

y_t = (ρ + α)y_{t−1} − ραy_{t−2} + βx_t − βρx_{t−1} + ε_t.

Since this version of the model has a nice (iid) error structure, it can
be consistently estimated by ordinary least squares. Note, however, that we
now have 4 parameters not 3, as in the original formulation of the model.
Since these 4 parameters are simple functions of the original 3 parameters,
we can impose the implied constraints and estimate the model by nonlinear
least squares.
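One way to impose the constraints exploits the fact that for fixed ρ the transformed model is linear in (α, β), so nonlinear least squares reduces to a one-dimensional grid search over ρ. This is the Hildreth-Lu variant of the idea, not necessarily the algorithm intended in the text, and all names below are mine:

```python
import numpy as np

def fit_ar1_dynamic(y, x, rho_grid=np.linspace(-0.95, 0.95, 191)):
    """NLS for y_t = a*y_{t-1} + b*x_t + u_t with AR(1) errors u_t.

    For each candidate rho, the quasi-differenced model
        y_t - rho*y_{t-1} = a*(y_{t-1} - rho*y_{t-2}) + b*(x_t - rho*x_{t-1}) + e_t
    is linear in (a, b), so we solve it by OLS and keep the rho with the
    smallest sum of squared residuals.
    """
    best = None
    for rho in rho_grid:
        ys = y[2:] - rho * y[1:-1]
        Z = np.column_stack([y[1:-1] - rho * y[:-2], x[2:] - rho * x[1:-1]])
        ab = np.linalg.lstsq(Z, ys, rcond=None)[0]
        ssr = ((ys - Z @ ab) ** 2).sum()
        if best is None or ssr < best[0]:
            best = (ssr, ab[0], ab[1], rho)
    _, a, b, rho = best
    return a, b, rho
```

The grid search respects the constraint that the four reduced-form coefficients are functions of only (α, β, ρ).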
A related approach, which we may also illustrate with this simple model,
is to write the original model in the form

y_t = Σ_{j=0}^{∞} δ_j x_{t−j} + v_t.

This form may appear impractical since we have an infinite number of δ_j's, but as we have seen in the second lecture these δ's may be expressed in
terms of a finite number of α’s and β’s. Harvey (1989) discusses several
versions of this in some simple models and the resulting nonlinear least
squares estimation strategy.

Instrumental Variable Estimation
An alternative strategy for estimating models of this type relies on instrumental variables. Recall that our fundamental problem, the bias resulting from autocorrelation in dynamic models, was attributed to the lack of orthogonality between errors and lagged endogenous variables. An obvious strategy for dealing with this problem is to identify suitable instrumental variables (IV's) which have the properties:

• orthogonality with u, i.e., EZ′u = 0;

• mutual association with X, i.e., EX̂′X positive definite, where X̂ = Z(Z′Z)⁻¹Z′X is the projection of X onto the column space of Z.

The immediate question is: where would such variables, Z, come from? In the case of dynamic models of the type we have been discussing it is quite straightforward to answer this question: they may be chosen to be lagged exogenous variables. But this answer changes a worry that there might be too few IV's into a new worry that there might be too many. This question does not have a good analytical answer and remains a question of active research concern. Nevertheless, empirical research has suggested some rules of thumb which can be used in applications. Generally, we would suggest that the number of IV's be kept to some modest number above the number of existing explanatory variables, say, p ≤ q ≤ 2p, where p is the parametric dimension of the original model and q is the number of IV's. We will return to this question later in the course.
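A sketch of the resulting two-stage least squares estimator (the function name and the particular instrument choice are mine), with x_t and two of its lags serving as instruments for (y_{t−1}, x_t):

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: project X on the instruments Z, then
    regress y on the fitted values Xhat = Z (Z'Z)^{-1} Z' X."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]
```

Since x is exogenous, its lags are uncorrelated with u_t yet correlated with y_{t−1}, which is exactly what the two bullet points above require of Z.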

ARCH in Brief
Frequently we observe, particularly in financial data, time-varying heteroscedasticity. In the early 80's Engle coined the term autoregressive conditional heteroscedasticity (ARCH) to refer to models in which

V(u_t | past) = h_t = α_0 + α_1 u_{t−1}² + ⋯ + α_p u_{t−p}²

In subsequent work this has been generalized in several directions, notably to GARCH(1,1),

V(u_t | past) = h_t = α_0 + α_1 u_{t−1}² + γ_1 h_{t−1}

These are simple examples of a broad class of nonlinear time series models. Writing u_t = √(h_t) ε_t with {ε_t} iid, in the ARCH model we have the unconditional expectations
1.) Eu_t = E√(h_t) ε_t = 0
2.) V u_t = α_0/A(1), where A(L) = 1 − α_1 L − ⋯ − α_p L^p

As long as the lag polynomial A(L) is stable, i.e., if the roots of A(z) = 0
lie outside the unit circle, then we get a gradual oscillation of ht around the
unconditional variance. In integrated ARCH, i.e., roots on the unit circle,
we get long swings away from the initial ht .
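A small simulation (parameter values mine) confirming the unconditional variance formula V u_t = α_0/(1 − α_1) in the ARCH(1) case:

```python
import numpy as np

# Simulate an ARCH(1) process u_t = sqrt(h_t)*eps_t with
# h_t = a0 + a1*u_{t-1}^2 and check that the sample variance is near
# the unconditional value a0/(1 - a1).
rng = np.random.default_rng(5)
a0, a1, n = 1.0, 0.5, 200000
u = np.zeros(n)
h = np.zeros(n)
h[0] = a0 / (1 - a1)                 # start at the unconditional variance
u[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, n):
    h[t] = a0 + a1 * u[t - 1] ** 2
    u[t] = np.sqrt(h[t]) * rng.normal()
# u has mean near zero and variance near a0/(1 - a1) = 2
```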
Testing for ARCH
Naive LM tests can be implemented just like LM-AR tests: regress û_t² on {û_{t−1}², …, û_{t−q}²} and x_t, compute nR², and compare to χ²_q (or compare nR²/q to F_{q,n−p}). Note that in this form the test may be regarded as a joint test for ARCH and heteroscedasticity of the form usually tested by the Breusch-Pagan and related tests. One could obviously consider refining the hypothesis under consideration in light of the results obtained for this expanded version of the test.
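A sketch of the squared-residual LM regression (omitting the x_t terms, so this is the pure ARCH version rather than the joint test described above; the function name is mine):

```python
import numpy as np

def arch_lm(u, q):
    """LM test for ARCH(q): regress u_t^2 on q lags of itself plus a
    constant, and return n * R^2, which is asymptotically chi-squared
    with q degrees of freedom under the null of no ARCH."""
    u2 = u ** 2
    n = len(u2) - q
    Y = u2[q:]
    Z = np.column_stack([np.ones(n)] +
                        [u2[q - j: len(u2) - j] for j in range(1, q + 1)])
    g = np.linalg.lstsq(Z, Y, rcond=None)[0]
    v = Y - Z @ g
    r2 = 1 - (v @ v) / ((Y - Y.mean()) @ (Y - Y.mean()))
    return n * r2
```

In a regression setting one would apply this to the OLS residuals û_t and, per the text, include x_t among the auxiliary regressors.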
The LM Principle
Let ℓ(θ̃) denote the log likelihood evaluated at the MLE under H₀: θ ∈ Θ₀ ⊂ Θ₁; we are interested in testing H₀ versus H₁: θ ∈ Θ₁. One way to do this is to ask how ℓ changes as we move the restricted estimator θ̃ ∈ Θ₀ toward the unrestricted θ̂ ∈ Θ₁. To explore this we would need to say something about nonlinear optimization, but that would take us too far from the main topic. Suffice it to say that the LM test is based on the magnitude of the gradient of ℓ at θ̃ ∈ Θ₀ in the direction of θ̂ ∈ Θ₁. Again, there are questions to be answered about what to do when ARCH effects are found to be present in the model. Joint estimation of ARCH and regression shift effects is the preferred solution when it is computationally feasible. Various iterative solutions are obviously available as alternatives.
