ARDL
If the regression model includes not only the current but also the
lagged (past) values of the explanatory variables (the X's), it is
called a distributed-lag model.
Alt and Tinbergen suggest that one may proceed sequentially:
first regress Yt on Xt; then regress Yt on Xt and Xt−1;
then regress Yt on Xt, Xt−1, and Xt−2; and so on.
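The sequential (Alt–Tinbergen) procedure can be sketched as follows; the data-generating process and all numbers here are made up purely for illustration, and plain OLS via numpy stands in for a regression package.

```python
# A minimal sketch of the Alt-Tinbergen sequential procedure: add one more
# lag of X at each step and re-estimate by OLS.  Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.8 * x                    # current effect of X_t
y[1:] += 0.4 * x[:-1]                # one-period lag X_{t-1}
y[2:] += 0.2 * x[:-2]                # two-period lag X_{t-2}
y += rng.normal(scale=0.5, size=n)

def ols(yv, X):
    """OLS coefficients of yv on X, with an intercept prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    return beta

# Step 1: regress Y_t on X_t; step 2: add X_{t-1}; step 3: add X_{t-2}; ...
for p in range(4):
    lags = np.column_stack([x[p - k : n - k] for k in range(p + 1)])
    beta = ols(y[p:], lags)
    print(f"lags 0..{p}: coefficients {np.round(beta, 2)}")
```

In practice one stops adding lags when the additional lag coefficients become statistically insignificant or change sign erratically, which is one of the drawbacks Koyck's approach is designed to avoid.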
Assuming that the β's are all of the same sign, Koyck assumes that
they decline geometrically: βk = β0 λ^k, k = 0, 1, ..., where λ
(0 < λ < 1) is the rate of decline, or decay, of the distributed lag.
As a result of the Koyck transformation, the distributed-lag model
reduces to an autoregressive model:
Yt = α(1 − λ) + β0 Xt + λYt−1 + vt, where vt = (ut − λut−1).
The median lag is the time required for the first half, or 50 percent,
of the total change in Y following a unit sustained change in X. For
the Koyck model, the median lag is −log 2 / log λ.
Thus, the higher the value of λ, the lower the speed of adjustment; the
lower the value of λ, the greater the speed of adjustment.
The mean lag is defined as Σk k·βk / Σk βk, which is simply the
weighted average of all the lags involved, with the respective β
coefficients serving as weights. In short, it is a lag-weighted average
of time. For the Koyck model, the mean lag is λ/(1 − λ).
From the preceding discussion it is clear that the median and mean
lags serve as summary measures of the speed with which Y responds to
X.
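These two summary measures for the Koyck model can be computed directly from λ; the values of λ below are chosen only for illustration.

```python
# For the Koyck model: median lag = -log(2)/log(lambda),
#                      mean lag   = lambda / (1 - lambda).
import math

def koyck_median_lag(lam):
    """Time for 50% of the total change in Y (Koyck model)."""
    return -math.log(2) / math.log(lam)

def koyck_mean_lag(lam):
    """Beta-weighted average lag (Koyck model)."""
    return lam / (1 - lam)

for lam in (0.2, 0.5, 0.8):
    print(lam, round(koyck_median_lag(lam), 3), round(koyck_mean_lag(lam), 3))
# At lambda = 0.5 both the median and the mean lag equal 1.0.
```

Note how both lags grow with λ, confirming that a high λ means slow adjustment of Y to X.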
Estimation of Autoregressive Models
Since Yt−1 appears in the Koyck model as an explanatory variable, it is
bound to be correlated with vt = (ut − λut−1). This is why OLS cannot
be applied to the Koyck or adaptive expectations model: the stochastic
explanatory variable Yt−1 is correlated with the error term vt, so the
OLS estimators are biased and inconsistent.
Suppose that we find a proxy for Yt−1 that is highly correlated with Yt−1 but is uncorrelated with
vt, where vt is the error term appearing in the Koyck or adaptive expectations model. Such a
proxy is called an instrumental variable (IV).
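A common choice of instrument for Yt−1 in this setting is the lagged Xt−1 (Liviatan's suggestion). The sketch below implements two-stage least squares (2SLS) by hand on simulated Koyck data; the parameter values and simulation design are purely illustrative.

```python
# Hedged sketch of IV (2SLS) estimation of the Koyck model
# Y_t = alpha(1-lam) + b0*X_t + lam*Y_{t-1} + v_t,  v_t = u_t - lam*u_{t-1},
# using X_{t-1} as the instrument for Y_{t-1}.  Simulated data only.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
u = rng.normal(scale=0.3, size=n)
lam, b0, alpha = 0.6, 1.0, 0.5
y = np.zeros(n)
for t in range(1, n):
    # MA(1) error v_t makes Y_{t-1} correlated with v_t, so OLS is inconsistent
    y[t] = alpha * (1 - lam) + b0 * x[t] + lam * y[t - 1] + u[t] - lam * u[t - 1]

Y, X, Ylag, Xlag = y[2:], x[2:], y[1:-1], x[1:-1]
Z = np.column_stack([np.ones(n - 2), X, Xlag])   # instruments: 1, X_t, X_{t-1}
W = np.column_stack([np.ones(n - 2), X, Ylag])   # regressors:  1, X_t, Y_{t-1}

# Stage 1: project the troublesome regressor Y_{t-1} on the instruments
Ylag_hat = Z @ np.linalg.lstsq(Z, Ylag, rcond=None)[0]
# Stage 2: OLS with the projection replacing Y_{t-1}
W_hat = np.column_stack([np.ones(n - 2), X, Ylag_hat])
beta_iv, *_ = np.linalg.lstsq(W_hat, Y, rcond=None)
print("IV estimates (const, beta0, lambda):", np.round(beta_iv, 2))
```

The IV estimate of λ should be close to the true value 0.6, whereas straight OLS on the same data would be biased because of the Y{t−1}–vt correlation.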
Sargan has developed a statistic, dubbed SARG, to test the validity of the instruments used in
instrumental variable(s) (IV) estimation.
Sargan has shown that the SARG statistic asymptotically follows the χ2 distribution with (s − q)
degrees of freedom, where s is the number of instruments (i.e., the variables in W) and q is the
number of regressors in the original equation.
In the SARG statistic, n is the number of observations and k is the number of coefficients in the
original regression equation.
Decision rule: DO NOT REJECT the null (statistically insignificant SARG) = the instruments in W are valid.
REJECT the null (statistically significant SARG) = at least one of the instruments in W is invalid.
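One common form of a Sargan-type overidentification test, sketched below, regresses the IV residuals on the full instrument set W and compares n·R² with the χ2(s − q) critical value. The data, the instruments, and the use of n·R² (rather than some other asymptotically equivalent form) are assumptions for illustration only.

```python
# Hedged sketch of a Sargan-type test: with s = 3 instruments and q = 2
# regressors, the statistic is compared with chi2 on s - q = 1 df.
import numpy as np

rng = np.random.default_rng(2)
n = 400
z1, z2 = rng.normal(size=n), rng.normal(size=n)   # valid instruments
u = rng.normal(size=n)
x = z1 + z2 + 0.5 * u + rng.normal(size=n)        # endogenous regressor
y = 1.0 + 0.5 * x + u

W = np.column_stack([np.ones(n), z1, z2])         # s = 3 instruments
Q = np.column_stack([np.ones(n), x])              # q = 2 regressors

# 2SLS residuals from the original equation
x_hat = W @ np.linalg.lstsq(W, x, rcond=None)[0]
b = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), y, rcond=None)[0]
resid = y - Q @ b

# Auxiliary regression of the residuals on all instruments
fitted = W @ np.linalg.lstsq(W, resid, rcond=None)[0]
r2 = 1 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
sarg = n * r2                                     # ~ chi2(s - q) under the null
print(f"SARG = {sarg:.3f}; 5% chi2(1) critical value = 3.841")
print("reject null" if sarg > 3.841 else "do not reject null")
```

Since z1 and z2 are valid by construction here, the statistic should typically fall below the critical value, i.e., we do not reject instrument validity.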
Detecting Autocorrelation in Autoregressive Models: Durbin h Test
In the Koyck and adaptive expectations models, vt is serially correlated even if ut is serially
independent.
The question, then, is: how does one know whether there is serial correlation in the error term
appearing in an autoregressive model?
3. Since the test is a large-sample test, its application in small samples is not strictly justified, as
shown by Inder and Kiviet. It has been suggested that the Breusch–Godfrey
(BG) test, also known as the Lagrange multiplier test, is statistically more powerful not only in
large samples but also in finite, or small, samples.
For a large sample, Durbin has shown that,
under the null hypothesis that ρ (the coefficient of first-order serial correlation) = 0,
the h statistic, h = ρ̂ √[n / (1 − n·var(α̂2))], follows the standard normal distribution,
where n is the sample size, var(α̂2) is the estimated variance of the coefficient on the lagged
Yt−1, and ρ̂ is an estimate of ρ, which may be approximated as ρ̂ ≈ 1 − d/2, with d the
Durbin–Watson statistic.
Recall that the probability that a standard normal variate exceeds the value of ±3 is
extremely small.
In the present example, our conclusion, then, is that there is (positive) autocorrelation.
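The Durbin h computation can be sketched directly from regression output; the values of d, n, and var(α̂2) below are made up for illustration, not taken from the example in the text.

```python
# Durbin h statistic: h = rho_hat * sqrt(n / (1 - n*var(a2))),
# with rho_hat approximated by 1 - d/2 (d = Durbin-Watson statistic) and
# var(a2) the estimated variance of the coefficient on Y_{t-1}.
import math

def durbin_h(d, n, var_a2):
    """Durbin h; undefined (complex) when n*var_a2 >= 1, so require < 1."""
    if n * var_a2 >= 1:
        raise ValueError("h cannot be computed when n*var(a2) >= 1")
    rho_hat = 1 - d / 2
    return rho_hat * math.sqrt(n / (1 - n * var_a2))

h = durbin_h(d=1.5, n=100, var_a2=0.004)
print(round(h, 3))   # compare with +-1.96 at the 5% level
```

With these illustrative numbers h exceeds 1.96, so one would reject the null of no first-order serial correlation; note that when n·var(α̂2) ≥ 1 the statistic cannot be computed at all, a known practical limitation of the test.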
But in regressions involving time series data, the situation may be somewhat different because
time does not run backward. That is, if event A happens before event B, then it is possible that
A is causing B. However, it is not possible that B is causing A. In other words, events in the past
can cause events to happen today. Future events cannot.
where it is assumed that the disturbances u1t and u2t are uncorrelated.
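A Granger-causality check of whether X helps predict Y can be sketched as an F test comparing the restricted regression (lags of Y only) with the unrestricted one (lags of Y plus lags of X). The simulated series, in which X does Granger-cause Y, and the lag length of 2 are assumptions for illustration.

```python
# Hedged sketch of a Granger-causality F test with p = 2 lags.
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 2                        # sample size, lag length
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(p, n):
    # X Granger-causes Y by construction (coefficient 0.5 on X_{t-1})
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.normal(scale=0.5)

def lag_matrix(s, p, n):
    """Columns s_{t-1}, ..., s_{t-p} aligned with t = p, ..., n-1."""
    return np.column_stack([s[p - k : n - k] for k in range(1, p + 1)])

Y = y[p:]
ones = np.ones(n - p)
restricted = np.column_stack([ones, lag_matrix(y, p, n)])
unrestricted = np.column_stack([restricted, lag_matrix(x, p, n)])

def rss(Yv, X):
    beta, *_ = np.linalg.lstsq(X, Yv, rcond=None)
    e = Yv - X @ beta
    return e @ e

rss_r, rss_u = rss(Y, restricted), rss(Y, unrestricted)
df = n - p - unrestricted.shape[1]
F = ((rss_r - rss_u) / p) / (rss_u / df)     # p restrictions tested
print(f"F = {F:.2f}")  # compare with the F(p, df) critical value (about 3.0 at 5%)
```

A large F rejects the null that the lagged X's jointly have zero coefficients, i.e., X Granger-causes Y; running the test in the other direction (swap x and y) checks whether Y Granger-causes X.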
See Examples 17.12–17.14.