Chapter 9 - Serial Correlation
Time Series
• Time series data involve a single entity over multiple
points in time.
• Notation for time series is different than cross-sectional.
• Cross-sectional: Yi = β0 + β1Xi + εi,  i = 1, 2, …, N
• Time series: Yt = β0 + β1Xt + εt,  t = 1, 2, …, T
Time Series (continued)
• Time-series data have some characteristics that make them
more difficult to deal with than cross-sectional data.
1. The order of observations in a time series is fixed.
2. Time-series samples tend to be much smaller
than cross-sectional ones.
3. The theory underlying time-series analysis can
be quite complex.
4. The stochastic error term in a time-series is often
affected by events that took place in a previous
time period. This is called serial correlation!
Pure Serial Correlation
• Pure serial correlation occurs when Classical
Assumption IV is violated in a correctly specified
equation.
• The most commonly assumed form of serial correlation is
first-order serial correlation:
εt = ρεt−1 + ut
where:
ε = the error term of the equation in question
ρ = the first-order autocorrelation coefficient
u = a classical (not serially correlated) error term
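The first-order process above is easy to simulate, which makes ρ's role concrete. The sketch below (illustrative code, not from the text; function names are my own) generates εt = ρεt−1 + ut and checks the correlation between adjacent errors:

```python
import numpy as np

# Simulate first-order serially correlated errors:
# eps_t = rho * eps_{t-1} + u_t, with u_t a classical (white-noise) error.
def simulate_ar1_errors(rho, T, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, 1.0, T)           # classical error u_t
    eps = np.zeros(T)
    for t in range(1, T):
        eps[t] = rho * eps[t - 1] + u[t]  # first-order serial correlation
    return eps

def lag1_corr(e):
    # Sample correlation between eps_t and eps_{t-1}
    return np.corrcoef(e[1:], e[:-1])[0, 1]

# With rho near 1, adjacent errors are strongly correlated;
# with rho = 0, the errors reduce to the classical case.
eps_high = simulate_ar1_errors(rho=0.9, T=500)
eps_none = simulate_ar1_errors(rho=0.0, T=500)
print(lag1_corr(eps_high))  # close to 0.9
print(lag1_corr(eps_none))  # close to 0
```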
Pure Serial Correlation (continued)
• ρ is called the first-order autocorrelation coefficient.
• Magnitude of ρ indicates the strength of the serial
correlation.
• If ρ = 0, there is no serial correlation.
• As ρ approaches 1 in absolute value, a high degree of
serial correlation exists.
• In general: −1 < ρ < +1
Pure Serial Correlation (continued)
• Serial correlation can take on many forms other than
first-order.
• For quarterly data, the current quarter’s error term may
be functionally related to the error term from the same
quarter in the previous year:
εt = ρεt−4 + ut
Impure Serial Correlation
• Impure Serial Correlation is serial correlation caused
by a specification error.
• The error term of an incorrectly specified equation
includes a portion of the effect of the misspecification.
• Even if the true error term is not serially correlated, the
error term containing the specification error can be.
• Consider two cases of specification error:
1. Omitted variable
2. Incorrect functional form
Impure Serial Correlation (continued)
• Omitted variable case. Suppose the true equation is:
Yt = β0 + β1X1t + β2X2t + εt
• If X2 is omitted, the estimated equation becomes:
Yt = β0 + β1X1t + ε*t,  where ε*t = β2X2t + εt
• If X2 itself is serially correlated, as many time-series
variables are, then ε*t will be serially correlated even if
the true error εt is not.
Impure Serial Correlation (continued)
• Incorrect functional form case. Suppose the true equation
is nonlinear, for example:
Yt = β0 + β1Xt + β2Xt² + εt
• If a linear form Yt = α0 + α1Xt + ε*t is estimated instead,
the error term ε*t absorbs the omitted nonlinearity, so
sequential observations tend to produce errors of the same
sign, causing serial correlation.
The Consequences of Serial Correlation
• Serial correlation in the error term has at least three
consequences:
1. Pure serial correlation does not cause bias in the
coefficient estimates.
2. Serial correlation causes OLS to no longer be the
minimum-variance estimator (of all linear unbiased
estimators).
3. Serial correlation causes the OLS estimates of the
SE(β̂)s to be biased, leading to unreliable
hypothesis testing.
The Durbin-Watson Test
• The Durbin-Watson test is used to determine if there is
first-order serial correlation.
The Durbin-Watson Test (continued)
• The equation for the Durbin-Watson statistic for T
observations is:
d = Σₜ₌₂ᵀ (et − et−1)² / Σₜ₌₁ᵀ et²
where et is the OLS residual in period t.
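The statistic is a one-liner on the residuals. A minimal numpy sketch (illustrative code, not from the text):

```python
import numpy as np

# Durbin-Watson statistic:
# d = sum_{t=2..T} (e_t - e_{t-1})^2 / sum_{t=1..T} e_t^2
def durbin_watson(residuals):
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# d is near 2 when there is no serial correlation, near 0 under strong
# positive serial correlation, and near 4 under strong negative
# serial correlation.
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # 3.0 (sign-alternating pattern)
```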
The Durbin-Watson Test (continued)
• Important assumptions underlying the d statistic:
1. The regression model includes an intercept term. If not, it is
essential to rerun the regression including the intercept term to
obtain the RSS.
2. The independent variables, the X’s, are nonstochastic, or fixed in
repeated sampling.
3. The disturbances 𝑢𝑡 are generated by the first-order autoregressive
scheme: 𝑢𝑡 = 𝜌𝑢𝑡−1 + 𝜀𝑡.
4. The error term 𝑢𝑡 is assumed to be normally distributed.
5. The regression model does not include lagged value(s) of the
dependent variable as one of the independent variables, as in:
𝑌𝑡 = 𝛽0 + 𝛽1𝑋1𝑡 + 𝛽2𝑋2𝑡 + ⋯ + 𝛽𝑘𝑋𝑘𝑡 + 𝛾𝑌𝑡−1 + 𝑢𝑡
Examples of the Use of
the Durbin-Watson Statistic
Example: 5% test, 3 variables, and 25 observations
• Critical values: dL = 1.123 and dU = 1.654
• Hypotheses:
𝐻0: 𝜌 = 0 (no serial correlation, d near 2)
𝐻𝑎: 𝜌 ≠ 0 (serial correlation exists)
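Because dL and dU bracket an inconclusive region, the decision rule is a three-way comparison. A small helper (my own sketch, not from the text) for the common one-sided test against positive serial correlation:

```python
# One-sided Durbin-Watson decision rule for positive serial correlation.
# (The symmetric rule for negative serial correlation compares 4 - d
# against the same critical values.)
def dw_decision(d, d_L, d_U):
    if d < d_L:
        return "reject H0: positive serial correlation"
    if d < d_U:
        return "inconclusive"
    return "do not reject H0"

# 5% test, 3 explanatory variables, 25 observations (critical values
# from the text): dL = 1.123, dU = 1.654.
print(dw_decision(1.78, d_L=1.123, d_U=1.654))  # do not reject H0
print(dw_decision(1.40, d_L=1.123, d_U=1.654))  # inconclusive
```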
Examples of the Use of
the Durbin-Watson Statistic (continued)
Example: Chicken demand model
Yt = β0 + β1PCt + β2PBt + β3YDt + εt
where:
Yt = per capita chicken consumption (in pounds) in
year t
PCt = the price of chicken (in cents per pound) in
year t
PBt = the price of beef (in cents per pound) in year t
YDt = U.S. per capita disposable income (in
hundreds of dollars) in year t
Examples of the Use of
the Durbin-Watson Statistic (continued)
o Because the DW statistic d = 0.6015 is less than dL = 1.589, we
reject H0 and conclude that (positive) serial correlation exists.
The Lagrange Multiplier (LM) Test
• The Lagrange multiplier (LM) test tests for serial
correlation by analyzing how well the lagged residuals
explain the residual of the original equation in an
equation that also includes all the original explanatory
variables.
The Lagrange Multiplier (LM) Test (continued)
• The LM test involves three steps (assume an equation
with two independent variables):
1. Estimate the original equation with OLS and save the
residuals, et.
2. Specify the auxiliary equation, regressing et on the
original explanatory variables and the lagged residual:
et = α0 + α1X1t + α2X2t + α3et−1 + ut
The Lagrange Multiplier (LM) Test (continued)
3. Use OLS to estimate the auxiliary equation and test the
null hypothesis that α3 = 0 with the test statistic:
𝐿𝑀 = 𝑁 ∙ 𝑅2
where N and R² come from the auxiliary equation. If LM
exceeds the chi-square critical value with one degree of
freedom (3.84 at the 5% level), reject the null hypothesis
of no serial correlation.
Remedies for Serial Correlation
• The first place to start in correcting serial correlation is to
look carefully at the specification of the equation for
possible errors that might be causing impure serial
correlation.
• Only after a careful review should the possibility of an
adjustment for pure serial correlation be considered.
• The appropriate response if you have pure serial
correlation is to consider:
1. Generalized Least Squares
2. Newey-West standard errors
Generalized Least Squares
• Generalized least squares (GLS) rids an equation of
pure first-order serial correlation and restores the
minimum variance property to its estimation.
• GLS starts with an equation that has pure serial
correlation and transforms it into one that does not.
• It is instructive to examine this transformation in order to
understand GLS.
• There are two cases: 𝜌 known and 𝜌 unknown.
• Our discussion will focus on the case where 𝜌 is known.
Generalized Least Squares (continued)
• Start with an equation that has first-order serial
correlation:
𝑌𝑡 = 𝛽0 + 𝛽1 𝑋𝑡 + 𝑢𝑡 (9.18)
• If Eq. (9.18) holds at time 𝑡, it also holds at time 𝑡 − 1:
𝑌𝑡−1 = 𝛽0 + 𝛽1 𝑋𝑡−1 + 𝑢𝑡−1 (9.19)
• Multiply Eq. (9.19) by 𝜌 and subtract it from Eq. (9.18):
𝑌𝑡 − 𝜌𝑌𝑡−1 = 𝛽0 (1 − 𝜌) + 𝛽1 (𝑋𝑡 − 𝜌𝑋𝑡−1 ) + (𝑢𝑡 − 𝜌𝑢𝑡−1 )
• Since 𝑢𝑡 = 𝜌𝑢𝑡−1 + 𝜀𝑡 , the new error term 𝑢𝑡 − 𝜌𝑢𝑡−1 = 𝜀𝑡
is classical, and the transformed equation is:
𝑌𝑡∗ = 𝛽0∗ + 𝛽1∗ 𝑋𝑡∗ + 𝜀𝑡 (9.22)
Where
𝛽0∗ = 𝛽0 (1 − 𝜌)
𝑌𝑡∗ = (𝑌𝑡 − 𝜌𝑌𝑡−1 )
𝑋𝑡∗ = (𝑋𝑡 − 𝜌𝑋𝑡−1 )
𝛽1∗ = 𝛽1
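The known-ρ transformation is just quasi-differencing followed by OLS. A sketch on simulated data (illustrative code and parameter values, not from the text):

```python
import numpy as np

def gls_known_rho(y, x, rho):
    """Quasi-difference and run OLS; returns (beta0, beta1) on the original scale."""
    y_star = y[1:] - rho * y[:-1]        # Y*_t = Y_t - rho * Y_{t-1}
    x_star = x[1:] - rho * x[:-1]        # X*_t = X_t - rho * X_{t-1}
    Z = np.column_stack([np.ones(len(x_star)), x_star])
    beta, *_ = np.linalg.lstsq(Z, y_star, rcond=None)
    b0_star, b1 = beta
    return b0_star / (1.0 - rho), b1     # undo beta0* = beta0 * (1 - rho)

# Simulated data with first-order serial correlation, rho = 0.7:
rng = np.random.default_rng(2)
T, rho, b0, b1 = 2000, 0.7, 5.0, 2.0
x = rng.normal(size=T)
u = rng.normal(size=T)                   # classical error
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + u[t]     # u_t-style AR(1) error
y = b0 + b1 * x + eps
print(gls_known_rho(y, x, rho))          # close to (5.0, 2.0)
```

Note that the first observation is dropped by the differencing; in practice the Prais-Winsten variant rescales and keeps it.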
Generalized Least Squares (continued)
• Notice that in Equation 9.22:
1. The error term is not serially correlated.
2. The slope coefficient β1 is the same as the slope
coefficient of the original equation.
3. The dependent variable has changed, which means
the GLS 𝑅 2 is not comparable to the OLS 𝑅 2 .