ARDL

The document discusses autoregressive and distributed-lag models in regression analysis, highlighting the significance of lagged values of explanatory and dependent variables. It explains the reasons for lags in economics, methods for estimating these models, and the implications of serial correlation in error terms. Additionally, it introduces the Granger causality test to determine the causal relationships between time series variables.

Autoregressive and Distributed-Lag Models

If the regression model includes not only the current but also the
lagged (past) values of the explanatory variables (the X's), it is
called a distributed-lag model.

If the model includes one or more lagged values of the dependent
variable among its explanatory variables, it is called an
autoregressive model.

Autoregressive models are also known as dynamic models, since they
portray the time path of the dependent variable in relation to its
past value(s).

The Role of Lag in Economics


In economics the dependence of a variable Y on another variable X is
rarely instantaneous. Very often, Y responds to X with a lapse of
time; such a lapse of time is called a lag.
See Examples 17.1-17.6 (Section 17.1, Gujarati).

Reasons for Lags


 Psychological reasons. As a result of the force of habit, people
do not change their consumption habits immediately following a
price decrease or an income increase, perhaps because the process
of change may involve some immediate disutility.

 Technological reasons. Suppose the price of capital relative to
labor declines, making substitution of capital for labor
economically feasible. Of course, the addition of capital takes time
(the gestation period). Moreover, if the drop in price is expected
to be temporary, firms may not rush to substitute capital for
labor, especially if they expect that after the temporary drop the
price of capital may increase beyond its previous level.

 Institutional reasons. These reasons also contribute to lags. For
example, contractual obligations may prevent firms from switching
from one source of labor or raw material to another.

A model in which we have not defined the length of the lag is called
an infinite (lag) model; in a finite (lag) distributed-lag model the
length of the lag k is specified.

Ad Hoc Estimation of Distributed-Lag Models

Since the explanatory variable Xt is assumed to be nonstochastic (or
at least uncorrelated with the disturbance term ut), Xt−1, Xt−2, and
so on, are nonstochastic, too. Therefore, in principle, ordinary
least squares (OLS) can be applied to the equation.

Alt and Tinbergen suggest that one may proceed sequentially: first
regress Yt on Xt, then regress Yt on Xt and Xt−1, then regress Yt on
Xt, Xt−1, and Xt−2, and so on.

This sequential procedure stops when the regression coefficients of
the lagged variables start becoming statistically insignificant
and/or the coefficient of at least one of the variables changes
sign from positive to negative or vice versa.
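The sequential procedure above can be sketched as follows. This is a minimal illustration on simulated data; the series, the lag cap of 4, and the 5 percent significance rule are assumptions for the example, not from the text.

```python
import numpy as np

def ols_tstats(y, X):
    """Plain OLS fit; returns coefficients and their t-statistics."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, beta / np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
# Simulated "true" model: Yt depends on Xt and Xt−1 only
y = 1.0 + 0.8 * x + 0.4 * np.concatenate([[0.0], x[:-1]]) \
    + rng.normal(scale=0.5, size=n)

chosen = None
prev_signs = None
for k in range(5):  # regress Yt on Xt, then add Xt−1, Xt−2, ...
    cols = [x[k - j : n - j] for j in range(k + 1)]  # [Xt, ..., Xt−k]
    X = np.column_stack([np.ones(n - k)] + cols)
    beta, t = ols_tstats(y[k:], X)
    signs = np.sign(beta[1:])
    flipped = prev_signs is not None and np.any(signs[:-1] != prev_signs)
    # Stop when the newest lag is insignificant or a sign flips
    if k > 0 and (abs(t[-1]) < 1.96 or flipped):
        chosen = k - 1  # keep the previous specification
        break
    prev_signs = signs
print("chosen lag length:", chosen)
```

With this data-generating process the procedure typically stops after the second lag of X becomes insignificant, keeping one lag.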

Koyck Approach to Distributed-Lag Models


Suppose we start with the infinite lag distributed-lag model

Yt = α + β0 Xt + β1 Xt−1 + β2 Xt−2 + · · · + ut

Assuming that the β's are all of the same sign, Koyck assumes that
they decline geometrically, as follows:

βk = β0 λ^k,   k = 0, 1, . . .

This postulates that each successive β coefficient is numerically
less than each preceding one, implying that as one goes back into
the distant past, the effect of that lag on Yt becomes progressively
smaller, a quite plausible assumption.

where λ, such that 0 < λ < 1, is known as the rate of decline, or
decay, of the distributed lag and where 1 − λ is known as the speed
of adjustment.
(1) By assuming nonnegative values for λ, Koyck rules out the β's
from changing sign;
(2) by assuming λ < 1, he gives lesser weight to the distant β's
than to the current ones;
(3) Koyck ensures that the sum of the β's, which gives the long-run
multiplier, is finite:

Σ βk = β0/(1 − λ)

As a result of βk = β0 λ^k, the infinite lag model may be written as

Yt = α + β0 Xt + β0λ Xt−1 + β0λ² Xt−2 + · · · + ut

As it stands, the model is still not amenable to easy estimation,
since a large (literally infinite) number of parameters remain to be
estimated and the parameter λ enters in a highly nonlinear form.
Strictly speaking, the method of linear (in the parameters)
regression analysis cannot be applied to such a model.

Koyck transformation

But now Koyck suggests an ingenious way out. He lags the model by
one period to obtain

Yt−1 = α + β0 Xt−1 + β0λ Xt−2 + β0λ² Xt−3 + · · · + ut−1

He then multiplies it by λ to obtain

λYt−1 = λα + β0λ Xt−1 + β0λ² Xt−2 + β0λ³ Xt−3 + · · · + λut−1

Subtracting this from the original model, Koyck gets

Yt − λYt−1 = α(1 − λ) + β0 Xt + (ut − λut−1)

or, rearranging, with vt = (ut − λut−1),

Yt = α(1 − λ) + β0 Xt + λYt−1 + vt
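The transformation can be checked numerically. In this sketch the parameter values (α = 2, β0 = 0.5, λ = 0.6) are assumed, the infinite lag is truncated at 60 terms, and ut is set to zero so the identity holds exactly up to truncation error.

```python
import numpy as np

alpha, beta0, lam = 2.0, 0.5, 0.6   # assumed illustrative parameters
rng = np.random.default_rng(1)
x = rng.normal(size=400)

K = 60  # truncate the infinite lag distribution; λ**60 is negligible
y = np.array([alpha + sum(beta0 * lam**k * x[t - k] for k in range(K))
              for t in range(K, len(x))])
xt = x[K:]

# Koyck transformation: Yt − λYt−1 = α(1 − λ) + β0 Xt  (ut = 0 here)
lhs = y[1:] - lam * y[:-1]
rhs = alpha * (1 - lam) + beta0 * xt[1:]
max_err = np.max(np.abs(lhs - rhs))

# Long-run multiplier: Σ βk = β0/(1 − λ) = 0.5/0.4 = 1.25
long_run = sum(beta0 * lam**k for k in range(K))
print(max_err, long_run)
```

The maximum discrepancy is on the order of the truncation error (≈ λ^60), and the truncated sum of the β's reproduces the long-run multiplier β0/(1 − λ).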
The Median Lag

The median lag is the time required for the first half, or 50 percent,
of the total change in Y following a unit sustained change in X. For
the Koyck model the median lag is

−log 2 / log λ

Thus, if λ = 0.2 the median lag is about 0.4307, but if λ = 0.8 the
median lag is about 3.1063. Verbally, in the former case 50 percent
of the total change in Y is accomplished in less than half a period,
whereas in the latter case it takes more than 3 periods to accomplish
the 50 percent change.

In short, the higher the value of λ, the lower the speed of
adjustment; the lower the value of λ, the greater the speed of
adjustment.
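The two quoted values can be reproduced directly from the Koyck median-lag formula −log 2 / log λ:

```python
import math

def koyck_median_lag(lam):
    """Periods needed for 50% of the total change in Y (Koyck model)."""
    return -math.log(2) / math.log(lam)

for lam in (0.2, 0.8):
    print(lam, round(koyck_median_lag(lam), 4))  # ≈ 0.4307 and ≈ 3.1063
```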

The Mean Lag

Provided all βk are positive, the mean, or average, lag is defined as

Mean lag = Σ k βk / Σ βk

which is simply the weighted average of all the lags involved, with
the respective β coefficients serving as weights. In short, it is a
lag-weighted average of time.

For the Koyck model the mean lag is

λ/(1 − λ)

Thus, if λ = 1/2, the mean lag is 1.

From the preceding discussion it is clear that the median and mean
lags serve as summary measures of the speed with which Y responds to
X.
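The mean-lag formula can be verified against the defining weighted average Σ k βk / Σ βk, using the Koyck weights βk = β0 λ^k (λ = 0.5 and β0 = 1 are assumed; the infinite sum is truncated):

```python
lam, beta0, K = 0.5, 1.0, 500  # K truncates the infinite sum
betas = [beta0 * lam**k for k in range(K)]
mean_lag = sum(k * b for k, b in enumerate(betas)) / sum(betas)
print(mean_lag)  # ≈ λ/(1 − λ) = 1.0 for λ = 1/2
```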
Estimation of Autoregressive Models

Under classical least-squares theory, we assume that the original
disturbance term ut satisfies all the classical assumptions, such as
E(ut) = 0, var(ut) = σ² (homoscedasticity), and cov(ut, ut+s) = 0 for
s ≠ 0 (no autocorrelation).

In the autoregressive model, however, the error term
vt = (ut − λut−1) may not inherit all these properties.

cov(vt, vt−1) = E(vt vt−1) − E(vt)E(vt−1)

E(vt) = E(vt−1) = 0, but E(vt vt−1) = −λσ² (see derivation later).

Therefore, cov(vt, vt−1) = E(vt vt−1) − E(vt)E(vt−1) = −λσ² − 0 = −λσ²

And since Yt−1 appears in the Koyck model as an explanatory variable,
it is bound to be correlated with vt (see derivation later).

If an explanatory variable in a regression model is correlated with
the stochastic disturbance term, the OLS estimators are not only
biased but also inconsistent; that is, even if the sample size is
increased indefinitely, the estimators do not approximate their true
population values. Therefore, estimation of the Koyck model by the
usual OLS procedure may yield seriously misleading results.

There are therefore two main conclusions:

1) The error term in the Koyck model suffers from serial correlation.
2) The lagged dependent variable Yt−1 is correlated with the error
term vt.
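The first conclusion is easy to check by simulation: even when ut is serially independent, vt = ut − λut−1 has first-order autocovariance −λσ². The values λ = 0.6 and σ = 1 are assumed for illustration.

```python
import numpy as np

lam, sigma, n = 0.6, 1.0, 1_000_000
rng = np.random.default_rng(42)
u = rng.normal(scale=sigma, size=n)  # serially independent ut
v = u[1:] - lam * u[:-1]             # vt = ut − λut−1
autocov = np.mean(v[1:] * v[:-1])    # E(vt) = 0, so this estimates cov
print(autocov)  # ≈ −λσ² = −0.6
```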
The Method of Instrumental Variables (IV)

The reason why OLS cannot be applied to the Koyck or adaptive
expectations model is that the explanatory variable Yt−1 tends to be
correlated with the error term vt.

Suppose we find a proxy for Yt−1 that is highly correlated with Yt−1
but uncorrelated with vt, where vt is the error term appearing in the
Koyck or adaptive expectations model. Such a proxy is called an
instrumental variable (IV).
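A sketch of the idea: since Yt−1 depends on Xt−1 but Xt−1 is uncorrelated with vt, Xt−1 can serve as an instrument for Yt−1 (Liviatan's suggestion). The simulated data and parameter values below are assumptions for illustration, not from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
lam, beta0, c = 0.5, 1.0, 2.0
x = rng.normal(size=n)
u = rng.normal(scale=0.5, size=n)
v = np.empty(n)
v[0] = u[0]
v[1:] = u[1:] - lam * u[:-1]          # Koyck error vt = ut − λut−1
y = np.zeros(n)
for t in range(1, n):
    y[t] = c + beta0 * x[t] + lam * y[t - 1] + v[t]

# Regressors Z = [1, Xt, Yt−1]; instruments W = [1, Xt, Xt−1]
Y = y[2:]
Z = np.column_stack([np.ones(n - 2), x[2:], y[1:-1]])
W = np.column_stack([np.ones(n - 2), x[2:], x[1:-1]])
beta_iv = np.linalg.solve(W.T @ Z, W.T @ Y)   # exactly identified IV
beta_ols = np.linalg.lstsq(Z, Y, rcond=None)[0]
print("IV :", beta_iv.round(3))   # λ estimate should be near 0.5
print("OLS:", beta_ols.round(3))  # λ estimate distorted by cov(Yt−1, vt)
```

The IV estimate of λ is consistent because the instrument Xt−1 is correlated with Yt−1 but not with vt; the OLS estimate is contaminated by the correlation between Yt−1 and vt.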

SARG test to find out if the chosen instrument(s) are valid

Sargan has developed a statistic, dubbed SARG, to test the validity
of the instruments used in instrumental variables (IV) estimation.

Sargan has shown that the SARG test asymptotically has the χ²
distribution with (s − q) degrees of freedom, where s is the number
of instruments (i.e., the variables in W) and q is the number of
regressors in the original equation.

SARG = (n − k)R²

where n = the number of observations, k is the number of coefficients
in the original regression equation, and R² is obtained from the
regression of the IV residuals on the instruments (W).

6) Null hypothesis: the instruments are exogenous.
   Alternative hypothesis: the instruments are endogenous.

7) Do not reject the null (statistically insignificant) = the (W)
   instruments are valid.
   Reject the null (statistically significant) = the (W) instruments
   are not valid.
Detecting Autocorrelation in Autoregressive Models: Durbin h Test

In the Koyck and adaptive expectations models vt is serially
correlated even if ut is serially independent.

The question, then, is: How does one know if there is serial
correlation in the error term appearing in autoregressive models?

Durbin himself has proposed a large-sample test of first-order serial
correlation in autoregressive models. This test is called the h
statistic.

Note these features of the h statistic:

1. It does not matter how many X variables or how many lagged values
of Y are included in the regression model. To compute h, we need
consider only the variance of the coefficient of the lagged Yt−1.

2. The test is not applicable if [n var(α̂2)] exceeds 1.

3. Since the test is a large-sample test, its application in small
samples is not strictly justified, as shown by Inder and Kiviet. It
has been suggested that the Breusch–Godfrey (BG) test, also known as
the Lagrange multiplier test, is statistically more powerful not only
in large samples but also in finite, or small, samples.

For a large sample, Durbin has shown that, under the null hypothesis
that ρ (first-order serial correlation) = 0, the h statistic

h = ρ̂ √( n / (1 − n var(α̂2)) )

asymptotically ("asy") follows the standard normal distribution.

One can estimate ρ as

ρ̂ ≈ 1 − d/2

where d is the usual Durbin–Watson statistic.
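With ρ̂ estimated as 1 − d/2 from the Durbin–Watson d, the h statistic is straightforward to compute. The inputs below (n, d, and the estimated variance of the coefficient on Yt−1) are made-up illustrative numbers, not the ones from Example 17.7.

```python
import math

def durbin_h(n, d, var_alpha2):
    """h = ρ̂·sqrt(n / (1 − n·var(α̂2))), with ρ̂ ≈ 1 − d/2."""
    if n * var_alpha2 >= 1:
        raise ValueError("test not applicable: n·var(α̂2) exceeds 1")
    rho_hat = 1 - d / 2
    return rho_hat * math.sqrt(n / (1 - n * var_alpha2))

h = durbin_h(n=100, d=1.5, var_alpha2=0.005)
print(round(h, 4))  # 3.5355; compare |h| with 1.96 at the 5% level
```

Here ρ̂ = 0.25 and n·var(α̂2) = 0.5, giving h ≈ 3.54 > 1.96, so the null of no first-order serial correlation would be rejected in this illustrative case.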


Example 17.7

If the significance level is 5 percent, the critical z value is 1.96.
Since 4.1061 > 1.96, we reject the null hypothesis. Alternatively,
recall that the probability that a standard normal variate exceeds
±3 is extremely small. In the present example our conclusion, then,
is that there is (positive) autocorrelation.

Example 17.11: The Demand for Money in Canada, 1979–I to 1988–IV (see later)

Section 17.12: illustrative examples


Causality in Economics: The Granger Causality Test

Regression analysis deals with the dependence of one variable on
other variables, but it does not necessarily imply causation.

In regressions involving time series data, however, the situation may
be somewhat different, because time does not run backward. That is,
if event A happens before event B, then it is possible that A is
causing B. However, it is not possible that B is causing A. In other
words, events in the past can cause events to happen today; future
events cannot.

The Granger Test

To explain the Granger test, we will consider an often asked question
in macroeconomics: Is it GDP that "causes" the money supply M
(GDP → M)? Or is it the money supply M that causes GDP (M → GDP)?

It is assumed that the disturbances u1t and u2t are uncorrelated.

Since we have two variables, we are dealing with bilateral causality.
We now distinguish four cases: unidirectional causality from M to
GDP, unidirectional causality from GDP to M, feedback or bilateral
causality, and independence.

Examples 17.12-17.14
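The mechanics of the test can be sketched as an F-test comparing a regression of Y on its own lags (restricted) against one that adds lags of X (unrestricted). The simulated series, in which x Granger-causes y by construction, and the single-lag specification are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.normal(scale=0.5)

def rss(Y, X):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ beta
    return r @ r

Y = y[1:]
const = np.ones(n - 1)
restricted = np.column_stack([const, y[:-1]])            # lags of y only
unrestricted = np.column_stack([const, y[:-1], x[:-1]])  # add lags of x
rss_r, rss_u = rss(Y, restricted), rss(Y, unrestricted)
m = 1                       # number of restrictions (lags of x dropped)
k = unrestricted.shape[1]   # parameters in the unrestricted model
F = ((rss_r - rss_u) / m) / (rss_u / (len(Y) - k))
print(F > 3.86)  # F(1, ~496) 5% critical value: x Granger-causes y?
```

A significant F means the lags of X improve the prediction of Y beyond Y's own history, i.e., X Granger-causes Y; running the same test with the roles of X and Y swapped distinguishes the four cases above.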
