
CHAPTER 9: SERIAL CORRELATION

Time Series
• Time series data involve a single entity over multiple
points in time.
• Notation for time series is different from that for cross-sectional data.
• Cross-sectional:

Y_i = β0 + β1X_i + ε_i, where i goes from 1 to N

• Time series:

Y_t = β0 + β1X_t + ε_t, where t goes from 1 to T

Time Series (continued)
• Time-series data have some characteristics that make them
more difficult to deal with than cross-sectional data.
1. The order of observations in a time series is fixed.
2. Time-series samples tend to be much smaller
than cross-sectional ones.
3. The theory underlying time-series analysis can
be quite complex.
4. The stochastic error term in a time-series is often
affected by events that took place in a previous
time period. This is called serial correlation!
Pure Serial Correlation
• Pure serial correlation occurs when Classical
Assumption IV is violated in a correctly specified
equation.
• The most commonly assumed form of serial correlation is
first-order serial correlation:

ε_t = ρε_{t−1} + u_t

where:
ε = the error term of the equation in question
ρ = the first-order autocorrelation coefficient
u = a classical (not serially correlated) error term
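To see what a first-order process like this looks like in practice, here is a minimal Stata simulation sketch (ρ = 0.7, the seed, and T = 100 are illustrative assumptions, not values from the chapter):

* Simulate eps_t = rho*eps_{t-1} + u_t with rho = 0.7 (assumed for illustration)
clear
set seed 123
set obs 100
gen t = _n
tsset t
gen u = rnormal()                           // classical error term u_t
gen eps = u                                 // initialize eps_1
replace eps = 0.7*eps[_n-1] + u if _n > 1   // AR(1) error process
correlate eps L.eps                         // sample estimate of rho

The replace command works down the dataset in order, so each observation of eps draws on the already-updated previous observation.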
Pure Serial Correlation (continued)
• ρ is called the first-order autocorrelation coefficient.
• Magnitude of ρ indicates the strength of the serial
correlation.
• If ρ = 0, there is no serial correlation.
• As ρ approaches 1 in absolute value, a high degree of
serial correlation exists.
• In general: −1 < ρ < +1
• If ρ > 0, there is positive serial correlation.
• If ρ < 0, there is negative serial correlation.
Pure Serial Correlation (continued)
• Serial correlation can take on many forms other than
first-order.
• For quarterly data, the current quarter's error term may be
functionally related to the observation of the error term
from the same quarter in the previous year:

ε_t = ρε_{t−4} + u_t

• The error term in an equation might be a function of more
than one previous observation of the error term, such as:

ε_t = ρ1ε_{t−1} + ρ2ε_{t−2} + u_t
Impure Serial Correlation
• Impure Serial Correlation is serial correlation caused
by a specification error.
• The error term of an incorrectly specified equation
includes a portion of the effect of the misspecification.
• Even if the true error term is not serially correlated, the
error term containing the specification error can be.
• Consider two cases of specification error:
1. Omitted variable
2. Incorrect functional form
Impure Serial Correlation (continued)
• Suppose the true equation is:

Y_t = β0 + β1X1_t + β2X2_t + ε_t

• If X2 is accidentally omitted, then:

Y_t = β0 + β1X1_t + ε*_t, where ε*_t = β2X2_t + ε_t

• ε*_t will tend to exhibit detectable serial correlation when:
1. X2 itself is serially correlated, and
2. the size of ε is small compared to the size of β2X2,
as the simulation sketch below illustrates.

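A minimal Stata simulation sketch of this mechanism (all variable names and parameter values are illustrative assumptions): X2 follows an AR(1) process, the true error term is classical and small relative to β2X2, and omitting X2 drives the Durbin-Watson statistic well below 2.

* Impure serial correlation from omitting a serially correlated X2 (sketch)
clear
set seed 456
set obs 100
gen t = _n
tsset t
gen x1 = rnormal()
gen x2 = rnormal()
replace x2 = 0.8*x2[_n-1] + rnormal() if _n > 1   // condition 1: X2 serially correlated
gen y = 1 + 2*x1 + 3*x2 + 0.5*rnormal()           // condition 2: small classical error
quietly regress y x1                              // X2 accidentally omitted
estat dwatson                                     // d well below 2: impure serial correlation
quietly regress y x1 x2                           // correctly specified equation
estat dwatson                                     // d close to 2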
Impure Serial Correlation (continued)
• Suppose the true equation is:

Y_t = β0 + β1X_t + β2X_t² + ε_t

• But the following regression is run:

Y_t = α0 + α1X_t + ε*_t

• The new error term ε*_t is now a function of the true error
term and of the differences between the linear and
polynomial functional forms.
• As Figure 9.4 displays, these differences often follow an
autoregressive pattern.
Impure Serial Correlation (continued)

[Figure 9.4]
The Consequences of Serial Correlation
• Serial correlation in the error term has at least three
consequences:
1. Pure serial correlation does not cause bias in the
coefficient estimates.

2. Serial correlation causes OLS to no longer be the
minimum-variance estimator (of all linear
estimators).
3. Serial correlation causes the OLS estimates of
the SE(β̂)s to be biased, leading to unreliable
hypothesis testing.
The Durbin-Watson Test
• The Durbin-Watson test is used to determine if there is
first-order serial correlation.

• It requires three assumptions:


1. The regression model includes an intercept term.
2. The serial correlation is first-order in nature.
3. The regression model does not include a lagged
dependent variable as an independent variable.

The Durbin-Watson Test (continued)
• The equation for the Durbin-Watson statistic for T
observations is:

d = Σ_{t=2}^{T} (e_t − e_{t−1})² / Σ_{t=1}^{T} e_t²

where the e_t's are the OLS residuals.

• Extreme positive serial correlation: d = 0
• Extreme negative serial correlation: d ≈ 4
• No serial correlation: d ≈ 2
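As a check on the formula, d can be computed by hand from the residuals. A minimal Stata sketch, assuming a tsset time-series dataset with hypothetical variables y and x:

* Compute the Durbin-Watson d statistic directly from the OLS residuals
quietly regress y x
predict e, resid                     // e_t, the OLS residuals
gen double num = (e - e[_n-1])^2     // (e_t - e_{t-1})^2, defined for t = 2,...,T
quietly summarize num
scalar d_num = r(sum)
gen double den = e^2                 // e_t^2, t = 1,...,T
quietly summarize den
display "d = " d_num/r(sum)
* Cross-check against the built-in command: estat dwatson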
The Durbin-Watson Test (continued)
• The advantage of the d statistic is that it is based on the
estimated residuals, which are routinely computed in
regression analysis.
• It is now common to report the d statistic along with
summary measures such as R², adjusted R², t-statistics,
and F-statistics.

The Durbin-Watson Test (continued)
• Important assumptions of the d statistic:
1. The regression model includes an intercept term. If not, it is essential
to rerun the regression including an intercept term to obtain the RSS.
2. The independent variables, the X's, are nonstochastic, or fixed in
repeated sampling.
3. The disturbances u_t are generated by the first-order autoregressive
scheme: u_t = ρu_{t−1} + ε_t.
4. The error terms u_t are assumed to be normally distributed.
5. The regression model does not include lagged value(s) of the
dependent variable as one of the independent variables:

Y_t = β0 + β1X1_t + β2X2_t + … + β_kXk_t + γY_{t−1} + u_t

6. There are no missing observations in the data.


The Durbin-Watson Test (continued)
• How to perform the d statistic test:
1. Run the OLS regression and obtain the residuals.
2. Compute the d statistic (most computer programs now do this
routinely).
3. For the given sample size and given number of explanatory
variables, find the critical d_L and d_U values in the Durbin-Watson
table (Table D.5A).
4. Follow the decision rules given in Table 12.6 or Figure 12.10.
5. The hypotheses are:
H0: d = 2 (no autocorrelation)
Ha: d ≠ 2 (autocorrelation exists)
6. As a rule of thumb, if d is found to be 2 in an application, one
may assume that there is no first-order autocorrelation.

Examples of the Use of
the Durbin-Watson Statistic
Example: 5% test, 3 variables, and 25 observations
• Critical values: dL = 1.123 and dU = 1.654
• Hypothesis:
𝐻0 : 𝑑 = 2 (no autocorrelation)
𝐻𝑎 : 𝑑 ≠ 2 (autocorrelation exists)

• Decision rule:
If d < 1.123: Reject H0
If d > 1.654: Do not reject H0
If 1.123 < d < 1.654: Inconclusive

• If d = 1.78: do not reject H0. If d = 1.28: the test is inconclusive.
If d = 0.60: reject H0 (evidence of positive serial correlation).


Examples of the Use of
the Durbin-Watson Statistic (continued)
Example: Chicken demand model

Y_t = β0 + β1PC_t + β2PB_t + β3YD_t + ε_t

where:
Y_t = per capita chicken consumption (in pounds) in year t
PC_t = the price of chicken (in cents per pound) in year t
PB_t = the price of beef (in cents per pound) in year t
YD_t = U.S. per capita disposable income (in hundreds of dollars) in year t
Examples of the Use of
the Durbin-Watson Statistic (continued)

• The Durbin-Watson statistic is calculated to be 0.99.
• d_L = 1.198 and d_U = 1.650
• Decision rule:
If d < 1.198: Reject H0
If d > 1.650: Do not reject H0
If 1.198 < d < 1.650: Inconclusive
• Since d = 0.99 < 1.198, we reject H0: there is evidence of
positive serial correlation.
The Durbin-Watson Test (continued)
• The Durbin-Watson (d) test in Stata
o Estimate Eq(8) and then perform the D-W test:

regress lnpce lnpdi lngdp
estat dwatson

Durbin-Watson d-statistic( 3, 88) = .6014906

o The value of the DW statistic is 0.6015.
o The critical values from the DW table at the 0.05 level (k = 3, n = 88) are:

dL = 1.589  dU = 1.726  4 − dU = 2.274  4 − dL = 2.411

o Because our DW statistic of 0.6015 is lower than dL = 1.589, we reject H0 in favor of Ha.
o Conclusion: there is evidence of positive autocorrelation.
The Lagrange Multiplier (LM) Test
• The Lagrange multiplier (LM) test tests for serial
correlation by analyzing how well the lagged residuals
explain the residual of the original equation in an
equation that also includes all the original explanatory
variables.

• If the lagged residuals are statistically significant, then the
null hypothesis of no serial correlation is rejected.

The Lagrange Multiplier (LM) Test (continued)
• The LM test involves three steps (assume an equation
with two independent variables):

1. Obtain the residuals e_t from the estimated equation:

Y_t = β0 + β1X1_t + β2X2_t + ε_t

2. Specify the auxiliary equation:

e_t = α0 + α1X1_t + α2X2_t + α3e_{t−1} + u_t
The Lagrange Multiplier (LM) Test (continued)
3. Use OLS to estimate the auxiliary equation and test the null
hypothesis that α3 = 0 with the following test statistic:

LM = N · R²

where N = sample size and R² is the unadjusted
coefficient of determination (both of the auxiliary
equation).

• For large samples, LM has a chi-square distribution with
degrees of freedom equal to one.
• If LM is greater than the critical value, reject the null
hypothesis of no serial correlation.
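The three steps map directly into Stata commands. A minimal sketch, assuming a tsset time-series dataset with hypothetical variables y, x1, and x2:

* LM test for first-order serial correlation (sketch)
quietly regress y x1 x2          // step 1: estimate the original equation
predict e, resid                 // obtain the residuals e_t
gen e_lag = e[_n-1]              // the lagged residual e_{t-1}
quietly regress e x1 x2 e_lag    // step 2: estimate the auxiliary equation
display "LM = N*R2 = " e(N)*e(r2)                          // step 3
display "chi2(1) 5% critical value = " invchi2tail(1, .05)
* Stata's built-in equivalent is the Breusch-Godfrey test: estat bgodfrey, lags(1)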

Remedies for Serial Correlation
• The first place to start in correcting serial correlation is to
look carefully at the specification of the equation for
possible errors that might be causing impure serial
correlation.
• Only after a careful review should the possibility of an
adjustment for pure serial correlation be considered.
• The appropriate response if you have pure serial
correlation is to consider (see the Stata sketch after this
list):
1. Generalized Least Squares
2. Newey-West standard errors
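Both remedies have standard Stata counterparts. A hedged sketch (y, x1, and x2 are placeholder names, the data must be tsset, and the lag length is an illustrative choice):

prais y x1 x2, corc     // GLS for first-order serial correlation (Cochrane-Orcutt)
newey y x1 x2, lag(1)   // OLS coefficients with Newey-West (HAC) standard errors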
Generalized Least Squares
• Generalized least squares (GLS) rids an equation of
pure first-order serial correlation and restores the
minimum variance property to its estimation.
• GLS starts with an equation that has pure serial
correlation and transforms it into one that does not.
• It is instructive to examine this transformation in order to
understand GLS.
• There are two cases: ρ is known and ρ is unknown.
• Our discussion will focus on the case in which ρ is known.

Generalized Least Squares (continued)
• Start with an equation that has first-order serial
correlation:
Y_t = β0 + β1X_t + u_t    (9.18)

• If Eq. (9.18) holds true at time t, it also holds true at time t − 1:

Y_{t−1} = β0 + β1X_{t−1} + u_{t−1}    (9.19)

• Multiply Equation (9.19) by ρ on both sides:

ρY_{t−1} = ρβ0 + ρβ1X_{t−1} + ρu_{t−1}    (9.20)

• Subtract Equation (9.20) from Equation (9.18).

Generalized Least Squares (continued)
Y_t − ρY_{t−1} = β0(1 − ρ) + β1(X_t − ρX_{t−1}) + ε_t    (9.21)
where ε_t = u_t − ρu_{t−1}.

• Equation (9.21) can be rewritten as:

Y*_t = β0* + β1*X*_t + ε_t    (9.22)

where:
Y*_t = Y_t − ρY_{t−1}
X*_t = X_t − ρX_{t−1}
β0* = β0(1 − ρ)
β1* = β1

Generalized Least Squares (continued)
• Notice that in Equation 9.22:
1. The error term is not serially correlated.
2. The slope coefficient β1 is the same as the slope
coefficient of the original equation.
3. The dependent variable has changed, which means
the GLS R² is not comparable to the OLS R² (a minimal
Stata sketch of the transformation follows).
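A minimal Stata sketch of the transformation in Equation 9.22, assuming ρ is known (ρ = 0.7 and the variable names y and x are illustrative assumptions; the data must be tsset):

* GLS transformation with rho known (rho = 0.7 assumed for illustration)
gen ystar = y - 0.7*L.y                          // Y*_t = Y_t - rho*Y_{t-1}
gen xstar = x - 0.7*L.x                          // X*_t = X_t - rho*X_{t-1}
regress ystar xstar                              // slope directly estimates beta1
display "implied beta0 = " _b[_cons]/(1 - 0.7)   // since beta0* = beta0*(1 - rho)

In practice ρ is rarely known; Stata's prais command (shown earlier) estimates ρ and applies the same transformation.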


CHAPTER 9: the end
