Lesson 6

The document discusses linear regression models under conditions of heteroskedasticity and autocorrelation, focusing on the implications for the Ordinary Least Squares (OLS) estimator's consistency and asymptotic properties. It presents various examples illustrating the effects of serial correlation on hypothesis testing and the variance-covariance matrix estimation. The framework includes assumptions about ergodic stationarity, linearity, and correct model specification necessary for the analysis.


Linear Regression Models Under Conditional Heteroskedasticity and Autocorrelation

Professor Yongmiao Hong


September 22, 2024
Copyright © 2024 by Professor Hong Yongmiao. All rights reserved. Requests for permission should be mailed to: [email protected]
1. The copyright belongs to the author, Professor Hong Yongmiao.
2. The author's attribution must not be removed; removal will be treated as infringement.
3. Against anyone who violates this notice or otherwise uses this material unlawfully, the author reserves the right to pursue legal liability.
4. If you find errors in the slides, please contact the author at [email protected]
CONTENTS

6.1 Motivation
6.2 Framework and Assumptions
6.3 Long-Run Variance-Covariance Matrix Estimation
6.4 Consistency of the OLS Estimator
6.5 Asymptotic Normality of the OLS Estimator
6.6 Hypothesis Testing
6.7 Cochrane-Orcutt Procedure

September 22, 2024
ADVANCED ECONOMETRICS Linear Regression Models Under Conditional Heteroskedasticity and Autocorrelation
Motivation

In many economic applications, εₜ is serially correlated, so {Xₜεₜ} is generally no longer an MDS.

Example 6.1 [Testing a Zero Population Mean]

Suppose the daily stock return Yₜ is a stationary ergodic process with E(Yₜ) = μ.
• The null hypothesis H₀: μ = 0
• The alternative hypothesis H_A: μ ≠ 0
A test for H₀ can be based on the sample mean
Ȳₙ = n⁻¹ ∑_{t=1}^{n} Yₜ.
By a suitable CLT (White (1999)),
√n Ȳₙ →ᵈ N(0, V).

Example 6.1 [Testing a Zero Population Mean] (Cont.)

where
V ≡ avar(√n Ȳₙ).
◼ Because
var(√n Ȳₙ) = n⁻¹ ∑_{t=1}^{n} var(Yₜ) + 2n⁻¹ ∑_{t=2}^{n} ∑_{j=1}^{t−1} cov(Yₜ, Yₜ₋ⱼ),
serial correlation in {Yₜ} is expected to affect the asymptotic variance of √n Ȳₙ: avar(√n Ȳₙ) is no longer equal to var(Yₜ).
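The gap between var(Yₜ) and the long-run variance can be made concrete with an AR(1) example. A minimal numerical sketch (the AR(1) coefficient 0.5 is illustrative, not from the slides):

```python
import numpy as np

# For an AR(1) process Y_t = rho*Y_{t-1} + u_t with var(u_t) = 1,
# Gamma(j) = rho^|j| / (1 - rho^2), so
#   var(Y_t)           = 1/(1 - rho^2), while
#   V = sum_j Gamma(j) = 1/(1 - rho)^2  (the long-run variance).
rho = 0.5                                   # illustrative value
var_Y = 1.0 / (1.0 - rho**2)                # marginal variance = 4/3
V = sum(rho**abs(j) for j in range(-500, 501)) / (1.0 - rho**2)

# The truncated sum is numerically indistinguishable from 1/(1-rho)^2 = 4,
# three times var(Y_t): ignoring autocorrelation here would understate the
# variance of sqrt(n)*Ybar_n and inflate the t-statistic.
print(var_Y, V)
```

A test of H₀: μ = 0 must therefore studentize with an estimate of V, not of var(Yₜ).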


Example 6.1 [Testing a Zero Population Mean] (Cont.)

• Suppose V̂ →ᵖ V. Then by Slutsky's theorem, under H₀ as n → ∞,
T_r = √n Ȳₙ / √V̂ →ᵈ N(0, 1).


◆ Question
Why does serial correlation exist?


Example 6.2 [Unbiasedness Hypothesis]

• Consider the following linear regression model:
S_{t+τ} = α + β F_t(τ) + ε_{t+τ},
where S_{t+τ} is the spot foreign exchange rate at time t + τ, F_t(τ) is the forward exchange rate (with maturity τ > 0) at time t, and the disturbance ε_{t+τ} is not observable.


Example 6.2 [Unbiasedness Hypothesis] (Cont.)

• Unbiasedness Hypothesis:
E(S_{t+τ} | I_t) = F_t(τ),
where I_t is the information set available at time t. This implies
H₀: α = 0, β = 1, and E(ε_{t+τ} | I_t) = 0, t = 1, 2, …
• However, with τ > 1, E(ε_{t+τ} | I_{t+j}) ≠ 0 a.s. for 1 ≤ j ≤ τ − 1.
• There exists serial correlation in {εₜ} up to τ − 1 lags under H₀. This will affect the asymptotic variance of the scaled OLS estimator √n β̂.


Example 6.3 [Long Horizon Return Predictability]

• Consider a predictive regression:
Y_{t+h,h} = β₀ + β₁ rₜ + β₂(dₜ − pₜ) + ε_{t+h,h},
where Y_{t+h,h} is the cumulative return over the holding period from time t to time t + h, namely
Y_{t+h,h} = ∑_{j=1}^{h} R_{t+j},
where R_{t+j} is the asset return in period t + j, rₜ is the short-term interest rate at time t, and dₜ − pₜ = ln(Dₜ/Pₜ) is the log dividend-price ratio.
• When h > 1, cov(ε_{t+h,h}, ε_{t+h−j,h}) ≠ 0 for 1 ≤ j ≤ h − 1.
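The overlap-induced autocorrelation is easy to verify by simulation: even when one-period returns are i.i.d., overlapping h-period cumulative returns are correlated up to lag h − 1. A sketch (the sample size and h = 4 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal(200_000)           # i.i.d. unit-variance "returns"
h = 4
# Overlapping h-period cumulative returns: Y_t = R_{t+1} + ... + R_{t+h}
Y = np.convolve(R, np.ones(h), mode="valid")

# Windows offset by j share h - j common returns, so with unit variance
# cov(Y_t, Y_{t-j}) = h - j for 0 <= j <= h - 1, and 0 for j >= h.
cov_2 = np.cov(Y[:-2], Y[2:])[0, 1]        # close to h - 2 = 2
cov_5 = np.cov(Y[:-5], Y[5:])[0, 1]        # close to 0 (beyond lag h - 1)
print(cov_2, cov_5)
```

So the regression error ε_{t+h,h} inherits an MA(h − 1) structure purely from the overlapping construction of the dependent variable.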


Example 6.4 [Relationship Between GDP and Money Supply]

• Consider a linear macroeconomic regression model:
Yₜ = α + β Mₜ + εₜ,
where Yₜ is the GDP growth rate at time t, Mₜ is the money supply growth rate at time t, and εₜ is an unobservable disturbance such that E(εₜ | Mₜ) = 0, but there may exist persistent serial correlation of unknown form in {εₜ}.


◆ Question
What happens to the OLS estimator β̂ if the disturbance {εₜ} displays conditional heteroskedasticity (i.e., E(εₜ² | Xₜ) ≠ σ²) and/or autocorrelation (i.e., cov(εₜ, εₜ₋ⱼ) ≠ 0 for at least some j > 0)?

• In particular, we would like to address the following important issues:
✓ Is the OLS estimator β̂ consistent for βₒ?
✓ Is β̂ asymptotically most efficient?
✓ Is β̂, after being properly scaled, asymptotically normal?
✓ Are the t-test and F-test statistics applicable for large-sample inference?
Framework and Assumptions

Assumption 6.1 [Ergodic Stationarity]: The observable stochastic process {Yₜ, Xₜ′}_{t=1}^{n} is ergodic stationary, where Yₜ is a random variable and Xₜ is a K × 1 random vector.

Assumption 6.2 [Linearity]:
Yₜ = Xₜ′βₒ + εₜ,
where βₒ is a K × 1 unknown parameter vector and εₜ is the unobservable disturbance.

Assumption 6.3 [Correct Model Specification]: E(εₜ | Xₜ) = 0 with E(εₜ²) = σ² < ∞.


Assumption 6.4 [Nonsingularity]: The K × K matrix
Q = E(XₜXₜ′)
is symmetric, finite and nonsingular.

Assumption 6.5 [Long-Run Variance-Covariance Matrix]:
a) For j = 0, ±1, …, define the K × K autocovariance function of {Xₜεₜ}:
Γ(j) = cov(Xₜεₜ, Xₜ₋ⱼεₜ₋ⱼ) = E(Xₜεₜεₜ₋ⱼXₜ₋ⱼ′).
Then ∑_{j=−∞}^{∞} ‖Γ(j)‖ < ∞, where ‖A‖ = ∑_{i=1}^{K} ∑_{j=1}^{K} |A(i,j)| for any K × K matrix A, and the long-run variance-covariance matrix
V = ∑_{j=−∞}^{∞} Γ(j)
is positive definite.

Assumption 6.5 (Cont.):
b) The conditional expectation
E(Xₜεₜ | Xₜ₋ⱼεₜ₋ⱼ, Xₜ₋ⱼ₋₁εₜ₋ⱼ₋₁, …) →^{q.m.} 0 as j → ∞.
c) ∑_{j=0}^{∞} [E(rⱼ′rⱼ)]^{1/2} < ∞, where
rⱼ = E(Xₜεₜ | Xₜ₋ⱼεₜ₋ⱼ, Xₜ₋ⱼ₋₁εₜ₋ⱼ₋₁, …) − E(Xₜεₜ | Xₜ₋ⱼ₋₁εₜ₋ⱼ₋₁, Xₜ₋ⱼ₋₂εₜ₋ⱼ₋₂, …).

Long-Run Variance-Covariance Matrix Estimation

◆ Question
Why are we interested in the long-run variance-covariance matrix V = ∑_{j=−∞}^{∞} Γ(j)?

• For the OLS estimator β̂, we have
√n(β̂ − βₒ) = Q̂⁻¹ n^{−1/2} ∑_{t=1}^{n} Xₜεₜ.
• Suppose a suitable CLT holds for {Xₜεₜ} in the present context; that is, suppose
n^{−1/2} ∑_{t=1}^{n} Xₜεₜ →ᵈ N(0, V)
as n → ∞, where V is the asymptotic variance-covariance matrix, namely
V ≡ avar(n^{−1/2} ∑_{t=1}^{n} Xₜεₜ) = lim_{n→∞} var(n^{−1/2} ∑_{t=1}^{n} Xₜεₜ).
• By Slutsky's theorem, we have √n(β̂ − βₒ) →ᵈ N(0, Q⁻¹VQ⁻¹).

◼ Proof
• Put gₜ = Xₜεₜ and note that E(gₜ) = 0 given E(εₜ | Xₜ) = 0, while for some lag order j > 0, Γ(j) = cov(gₜ, gₜ₋ⱼ) ≠ 0. Then
var(n^{−1/2} ∑_{t=1}^{n} Xₜεₜ) = var(n^{−1/2} ∑_{t=1}^{n} gₜ)
= E[(n^{−1/2} ∑_{t=1}^{n} gₜ)(n^{−1/2} ∑_{s=1}^{n} gₛ)′]
= n⁻¹ ∑_{t=1}^{n} ∑_{s=1}^{n} E(gₜgₛ′)


◼ Proof (Cont.)
= n⁻¹ ∑_{t=1}^{n} E(gₜgₜ′) + n⁻¹ ∑_{t=2}^{n} ∑_{s=1}^{t−1} E(gₜgₛ′) + n⁻¹ ∑_{t=1}^{n−1} ∑_{s=t+1}^{n} E(gₜgₛ′)
= n⁻¹ ∑_{t=1}^{n} E(gₜgₜ′) + ∑_{j=1}^{n−1} n⁻¹ ∑_{t=j+1}^{n} E(gₜgₜ₋ⱼ′) + ∑_{j=−(n−1)}^{−1} n⁻¹ ∑_{t=1}^{n+j} E(gₜgₜ₋ⱼ′)
= ∑_{j=−(n−1)}^{n−1} (1 − |j|/n) Γ(j)
→ ∑_{j=−∞}^{∞} Γ(j) as n → ∞
by dominated convergence. Therefore, we have V = ∑_{j=−∞}^{∞} Γ(j).
✓ When {gₜ} is an MDS, we have V = Γ(0).


Definition 6.1 [Spectral Density Matrix]: Suppose {gₜ = Xₜεₜ} is a K × 1 weakly stationary process with E(gₜ) = 0 and autocovariance function Γ(j) ≡ cov(gₜ, gₜ₋ⱼ) = E(gₜgₜ₋ⱼ′), which is a K × K matrix. Suppose
∑_{j=−∞}^{∞} ‖Γ(j)‖ < ∞.
Then the Fourier transform of the autocovariance function Γ(j) exists and is given by
H(ω) = (2π)⁻¹ ∑_{j=−∞}^{∞} Γ(j) exp(−ijω), ω ∈ [−π, π].


• The inverse Fourier transform of the spectral density matrix is
Γ(j) = ∫_{−π}^{π} H(ω) e^{ijω} dω.
• When ω = 0, we obtain the long-run variance-covariance matrix
V = 2πH(0) = ∑_{j=−∞}^{∞} Γ(j).

◆ Question
How can we estimate the long-run variance-covariance matrix V?


• We first consider a naive estimator
V̂ = ∑_{j=−(n−1)}^{n−1} Γ̂(j),
where the sample autocovariance function is
Γ̂(j) = n⁻¹ ∑_{t=j+1}^{n} Xₜeₜeₜ₋ⱼXₜ₋ⱼ′, j = 0, 1, …, n − 1,
Γ̂(j) = Γ̂(−j)′, j = −1, −2, …, −(n − 1).
➢ However, V̂ is not consistent for V.
• There are too many estimated terms in the summation over lag orders in V̂: Bias[∑_{j=−(n−1)}^{n−1} Γ̂(j)] → 0 as n → ∞, but Var[∑_{j=−(n−1)}^{n−1} Γ̂(j)] does not vanish.
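The sample autocovariances Γ̂(j) themselves are straightforward to compute. A minimal sketch (the function name and (n, K) array layout are assumptions, not from the slides):

```python
import numpy as np

def gamma_hat(g, j):
    # Sample autocovariance of g_t = X_t * e_t, where g is an (n, K) array:
    #   Gamma_hat(j)  = n^{-1} sum_{t=j+1}^{n} g_t g_{t-j}'   for j >= 0,
    #   Gamma_hat(-j) = Gamma_hat(j)'.
    n = g.shape[0]
    if j < 0:
        return gamma_hat(g, -j).T
    return g[j:].T @ g[: n - j] / n
```

Each Γ̂(j) is fine on its own; the problem with the naive V̂ is that it sums n − 1 of them, so the estimation errors never average out.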

• So we consider the truncated sum V̂ = ∑_{j=−p}^{p} Γ̂(j), where p is a positive integer. If p is fixed, we expect
V̂ →ᵖ ∑_{j=−p}^{p} Γ(j) ≠ 2πH(0) = V,
so we should let p = pₙ → ∞ as n → ∞.
• However, we cannot let p grow as fast as the sample size n. To ensure V̂ →ᵖ V using the truncated variance estimator
V̂ = ∑_{j=−pₙ}^{pₙ} Γ̂(j),
we require pₙ → ∞ with pₙ/n → 0, so that Var(V̂) ∝ pₙ/n → 0 and
Bias(V̂) = ∑_{j=−pₙ}^{pₙ} Γ(j) − ∑_{j=−∞}^{∞} Γ(j) = −∑_{|j|>pₙ} Γ(j) → 0.
✓ Example: pₙ = n^{1/3}.

➢ However, the truncated estimator may not be PSD for all n.
• To ensure that the estimator is always PSD, use a weighted average estimator
V̂ = ∑_{j=−pₙ}^{pₙ} k(j/pₙ) Γ̂(j),
where the weighting function k(·) is called a kernel function.
✓ Example: the Bartlett kernel
k(z) = (1 − |z|)·1(|z| ≤ 1).
✓ Newey and West (1987) first used this kernel function to estimate V in econometrics.
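A minimal sketch of the resulting Newey-West estimator, written with their weights 1 − j/(p+1), which correspond to the Bartlett kernel (the function name and array layout are assumptions, not from the slides):

```python
import numpy as np

def newey_west(g, p):
    # Newey-West long-run variance estimator with Bartlett weights:
    #   V_hat = Gamma_hat(0)
    #         + sum_{j=1}^{p} (1 - j/(p+1)) * (Gamma_hat(j) + Gamma_hat(j)'),
    # positive semi-definite by construction.  g is (n, K) with rows X_t * e_t.
    n = g.shape[0]
    V = g.T @ g / n                        # Gamma_hat(0)
    for j in range(1, p + 1):
        Gj = g[j:].T @ g[: n - j] / n      # Gamma_hat(j)
        V += (1.0 - j / (p + 1)) * (Gj + Gj.T)
    return V
```

With p = 0 this reduces to the heteroskedasticity-robust (White) estimator Γ̂(0).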




➢ In fact, we can consider a more general variance estimator for V:
V̂ = ∑_{j=1−n}^{n−1} k(j/pₙ) Γ̂(j),
where k(·) may have unbounded support. In fact, any kernel k: ℝ → [−1, 1] can be used under certain conditions.
✓ Example: the Quadratic-Spectral kernel, which has unbounded support. Andrews (1991) used it to estimate V:
k(z) = (3/(πz)²)[sin(πz)/(πz) − cos(πz)], −∞ < z < ∞.
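The QS kernel as written above can be coded directly; the only subtlety is the removable singularity at z = 0, where k(0) = 1 by continuity. A sketch:

```python
import numpy as np

def qs_kernel(z):
    # Quadratic-Spectral kernel: k(z) = 3/(pi z)^2 * (sin(pi z)/(pi z) - cos(pi z)),
    # with k(0) = 1 by continuity (Taylor-expand sin and cos around 0).
    z = np.asarray(z, dtype=float)
    x = np.pi * np.where(z == 0.0, 1.0, z)   # dummy argument at z = 0, masked below
    k = (3.0 / x**2) * (np.sin(x) / x - np.cos(x))
    return np.where(z == 0.0, 1.0, k)
```

Unlike the Bartlett kernel, k(z) is nonzero for |z| > 1, so every sample autocovariance receives some decaying (possibly negative) weight.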


➢ In practice, it is most important to determine an appropriate smoothing parameter pₙ.
✓ Andrews (1991) and Newey & West (1994) discuss data-driven methods to choose pₙ.

Consistency of the OLS Estimator

Theorem 6.1. Suppose Assumptions 6.1 to 6.5(a) hold. Then
β̂ →ᵖ βₒ as n → ∞.
◼ Proof:
β̂ − βₒ = Q̂⁻¹ n⁻¹ ∑_{t=1}^{n} Xₜεₜ.
• By Assumptions 6.1, 6.2 and 6.4 and the WLLN for an ergodic stationary process, we have
Q̂ →ᵖ Q and Q̂⁻¹ →ᵖ Q⁻¹.
• Similarly, by Assumptions 6.1 to 6.3 and 6.5(a) and the WLLN for an ergodic stationary process, we have
n⁻¹ ∑_{t=1}^{n} Xₜεₜ →ᵖ E(Xₜεₜ) = 0,
where E(Xₜεₜ) = 0 follows from Assumption 6.3 (E(εₜ | Xₜ) = 0) and the law of iterated expectations.
Asymptotic Normality of the OLS Estimator

Lemma 6.1 [CLT for a Zero-Mean Ergodic Stationary Process (White 1984, Theorem 5.15)]: Suppose {Zₜ} is an ergodic stationary process with:
(1) E(Zₜ) = 0;
(2) V = ∑_{j=−∞}^{∞} Γ(j) is finite and nonsingular, where Γ(j) = E(ZₜZₜ₋ⱼ′);
(3) E(Zₜ | Zₜ₋ⱼ, Zₜ₋ⱼ₋₁, …) →^{q.m.} 0 as j → ∞;
(4) ∑_{j=0}^{∞} [E(rⱼ′rⱼ)]^{1/2} < ∞, where rⱼ = E(Zₜ | Zₜ₋ⱼ, Zₜ₋ⱼ₋₁, …) − E(Zₜ | Zₜ₋ⱼ₋₁, Zₜ₋ⱼ₋₂, …).
Then as n → ∞,
n^{1/2} Z̄ₙ = n^{−1/2} ∑_{t=1}^{n} Zₜ →ᵈ N(0, V).


Theorem 6.2 [Asymptotic Normality]: Suppose Assumptions 6.1 to 6.5 hold. Then as n → ∞,
√n(β̂ − βₒ) →ᵈ N(0, Q⁻¹VQ⁻¹),
where V = ∑_{j=−∞}^{∞} Γ(j).


◼ Proof:
• Because
√n(β̂ − βₒ) = Q̂⁻¹ n^{−1/2} ∑_{t=1}^{n} Xₜεₜ.
• By Assumptions 6.1 to 6.3 and 6.5 and the CLT for an ergodic stationary process (Lemma 6.1), we have
n^{−1/2} ∑_{t=1}^{n} Xₜεₜ →ᵈ N(0, V).
• Also, Q̂ →ᵖ Q and Q̂⁻¹ →ᵖ Q⁻¹ by Assumption 6.4 and the WLLN for an ergodic stationary process. We then have, by Slutsky's theorem,
√n(β̂ − βₒ) →ᵈ N(0, Q⁻¹VQ⁻¹).

Hypothesis Testing

➢ Consider testing the null hypothesis
H₀: Rβₒ = r,
where R is a nonstochastic J × K matrix with full rank, r is a J × 1 nonstochastic vector, and J ≤ K.
◼ Corollary 6.1. Suppose Assumptions 6.1 to 6.5 hold. Then under H₀, as n → ∞,
√n(Rβ̂ − r) →ᵈ N(0, RQ⁻¹VQ⁻¹R′).
◼ Assumption 6.6. V̂ →ᵖ V.
In some special scenarios, we may have Γ(j) = 0 for all j > pₒ, where pₒ is a fixed lag order. In these cases, we can use the variance estimator V̂ = ∑_{j=−pₒ}^{pₒ} Γ̂(j).


Theorem 6.3 [Robust t-Test and Wald Test]: Under Assumptions 6.1 to 6.6 and H₀, we have as n → ∞:
✓ When J = 1, we define a robust t-test statistic
T_r = √n(Rβ̂ − r) / √(RQ̂⁻¹V̂Q̂⁻¹R′) →ᵈ N(0, 1).
✓ When J ≥ 1, we define a robust Wald test statistic
W_r = n(Rβ̂ − r)′ [R(𝐗′𝐗/n)⁻¹ V̂ (𝐗′𝐗/n)⁻¹ R′]⁻¹ (Rβ̂ − r) →ᵈ χ²_J.
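Given the ingredients Q̂ = 𝐗′𝐗/n, a consistent V̂, and the OLS estimate, the robust Wald statistic is a few lines of linear algebra. A sketch (the function signature is an assumption, not from the slides):

```python
import numpy as np

def robust_wald(beta_hat, Q_hat, V_hat, R, r, n):
    # W_r = n (R b - r)' [ R Q^{-1} V_hat Q^{-1} R' ]^{-1} (R b - r),
    # asymptotically chi-squared with J = R.shape[0] degrees of freedom under H0.
    d = R @ beta_hat - r
    Qinv = np.linalg.inv(Q_hat)
    S = R @ Qinv @ V_hat @ Qinv @ R.T      # avar of sqrt(n)*(R b - r)
    return float(n * d @ np.linalg.solve(S, d))
```

When J = 1, the robust t-statistic is the signed square root of W_r.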

Cochrane-Orcutt Procedure

• Long-run variance estimators are necessary for statistical inference based on OLS estimation of a linear regression model when there exists serial correlation of unknown form.
• If serial correlation in the regression disturbance has a known form up to some unknown parameters, then simpler statistical inference procedures are possible.
✓ Example: the Cochrane-Orcutt procedure.


• Consider a linear regression model with serially correlated disturbances:
Yₜ = Xₜ′βₒ + εₜ,
where E(εₜ | Xₜ) = 0 but {εₜ} follows an AR(p) process:
εₜ = ∑_{j=1}^{p} αⱼεₜ₋ⱼ + vₜ, vₜ ∼ IID(0, σᵥ²).
• The OLS estimator β̂ is consistent for βₒ given E(Xₜεₜ) = 0, but its asymptotic variance depends on serial correlation in {εₜ}. We can consider the following transformed linear regression model:
Yₜ − ∑_{j=1}^{p} αⱼYₜ₋ⱼ = (Xₜ − ∑_{j=1}^{p} αⱼXₜ₋ⱼ)′βₒ + (εₜ − ∑_{j=1}^{p} αⱼεₜ₋ⱼ)
= (Xₜ − ∑_{j=1}^{p} αⱼXₜ₋ⱼ)′βₒ + vₜ.


• We can write the transformed model as
Yₜ* = Xₜ*′βₒ + vₜ,
where
Yₜ* = Yₜ − ∑_{j=1}^{p} αⱼYₜ₋ⱼ,
Xₜ* = Xₜ − ∑_{j=1}^{p} αⱼXₜ₋ⱼ.
• The OLS estimator β̃ of this transformed regression is consistent for βₒ, and √n(β̃ − βₒ) is asymptotically normal:
√n(β̃ − βₒ) →ᵈ N(0, σᵥ² Q_{X*X*}⁻¹),
where Q_{X*X*} = E(Xₜ*Xₜ*′). The OLS estimator β̃ is the GLS estimator discussed in Chapter 3; it is BLUE.


• However, the GLS estimator β̃ is infeasible, because the coefficients αⱼ are unknown. As a solution, one can use an adaptive feasible two-step GLS procedure:
◼ Step 1: Regress
Yₜ = Xₜ′βₒ + εₜ, t = 1, …, n,
namely, regress Yₜ on Xₜ and obtain the estimated OLS residual eₜ = Yₜ − Xₜ′β̂. Then run an AR(p) model
eₜ = ∑_{j=1}^{p} αⱼeₜ₋ⱼ + ṽₜ, t = p + 1, …, n,
and obtain the OLS estimators {α̂ⱼ}_{j=1}^{p}.
◼ Step 2: Regress the transformed model
Ŷₜ* = X̂ₜ*′βₒ + vₜ*, t = p + 1, …, n.
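The two steps above can be sketched in a few lines (the helper name and defaults are assumptions, not from the slides):

```python
import numpy as np

def cochrane_orcutt(y, X, p=1):
    # Step 1: OLS of y on X, then fit an AR(p) to the OLS residuals.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_ols
    n = len(y)
    E = np.column_stack([e[p - j : n - j] for j in range(1, p + 1)])
    alpha, *_ = np.linalg.lstsq(E, e[p:], rcond=None)
    # Step 2: OLS on the quasi-differenced (transformed) model.
    y_star, X_star = y[p:].copy(), X[p:].copy()
    for j in range(1, p + 1):
        y_star -= alpha[j - 1] * y[p - j : n - j]
        X_star = X_star - alpha[j - 1] * X[p - j : n - j]
    beta_gls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta_gls, alpha
```

On simulated data with AR(1) disturbances, the procedure recovers both the AR coefficient and βₒ up to sampling error.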


• The sampling error resulting from the first-step estimation has no impact on the asymptotic properties of the OLS estimator in the second step.
• The asymptotic variance estimator of β̃ₐ is given by
ŝᵥ² Q̂_{X*X*}⁻¹,
where
ŝᵥ² = (n − K)⁻¹ ∑_{t=1}^{n} v̂ₜ*², Q̂_{X*X*} = n⁻¹ ∑_{t=1}^{n} X̂ₜ*X̂ₜ*′,
with
v̂ₜ* = Ŷₜ* − X̂ₜ*′β̃ₐ.

Thank You !
