Lesson 6
6.1 Motivation
6.2 Framework and Assumptions
6.3 Long-Run Variance-Covariance Matrix Estimation
6.4 Consistency of the OLS Estimator
6.5 Asymptotic Normality of the OLS Estimator
6.6 Hypothesis Testing
6.7 Cochrane-Orcutt Procedure
September 22,
ADVANCED ECONOMETRICS Linear Regression Models Under Conditional Heteroskedasticity and Autocorrelation
Motivation
where
$$V \equiv \operatorname{avar}\!\left(\sqrt{n}\,\bar{Y}_n\right)$$
◼ Because
$$\operatorname{var}\!\left(\sqrt{n}\,\bar{Y}_n\right) = n^{-1}\sum_{t=1}^{n}\operatorname{var}(Y_t) + 2n^{-1}\sum_{t=2}^{n}\sum_{j=1}^{t-1}\operatorname{cov}(Y_t, Y_{t-j}),$$
serial correlation in $\{Y_t\}$ is expected to affect the asymptotic variance of $\sqrt{n}\,\bar{Y}_n$: $\operatorname{avar}(\sqrt{n}\,\bar{Y}_n)$ is no longer equal to $\operatorname{var}(Y_t)$.
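A quick Monte Carlo check of this point (a sketch; the AR(1) design, sample size, and replication count are my choices, not from the lecture): for an AR(1) process with unit innovation variance, $\operatorname{var}(\sqrt{n}\,\bar{Y}_n)$ approaches the long-run variance $\sum_j \gamma(j) = 1/(1-\rho)^2$, not $\operatorname{var}(Y_t) = 1/(1-\rho^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, reps = 0.5, 500, 2000

# Simulate `reps` independent AR(1) paths: Y_t = rho*Y_{t-1} + e_t, e_t ~ N(0,1)
e = rng.standard_normal((reps, n))
y = np.empty((reps, n))
y[:, 0] = e[:, 0] / np.sqrt(1 - rho**2)   # start from the stationary distribution
for t in range(1, n):
    y[:, t] = rho * y[:, t - 1] + e[:, t]
means = y.mean(axis=1)                     # one sample mean per path

var_yt = 1.0 / (1 - rho**2)                # var(Y_t) = 1.25
V = 1.0 / (1 - rho) ** 2                   # long-run variance: sum_j gamma(j) = 4.0
print("var(Y_t)               =", var_yt)
print("V = avar(sqrt(n)*Ybar) =", V)
print("MC var of sqrt(n)*Ybar =", n * means.var())
```

The Monte Carlo variance of $\sqrt{n}\,\bar{Y}_n$ lands near 4, roughly three times $\operatorname{var}(Y_t)$, because positive serial correlation inflates the variance of the sample mean.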
• Suppose $\hat{V} \xrightarrow{p} V$. Then by Slutsky's theorem, under $\mathbf{H}_0$ and as $n \to \infty$,
$$T_r = \frac{\sqrt{n}\,\bar{Y}_n}{\sqrt{\hat{V}}} \xrightarrow{d} N(0,1)$$
◆ Question
Why does serial correlation exist?
• Unbiasedness Hypothesis:
$$E(S_{t+\tau} \mid I_t) = F_t(\tau),$$
where $I_t$ is the information set available at time $t$. This implies
$$\mathbf{H}_0:\ \alpha = 0,\ \beta = 1, \qquad E(\varepsilon_{t+\tau} \mid I_t) = 0, \quad t = 1, 2, \ldots$$
• However, when $\tau > 1$, the overlapping forecast errors are serially correlated: $\operatorname{cov}(\varepsilon_{t+\tau}, \varepsilon_{t+\tau-j}) \neq 0$ in general for $1 \leq j \leq \tau - 1$.
• There exists serial correlation in $\varepsilon_t$ up to $\tau - 1$ lags under $\mathbf{H}_0$. This will affect the asymptotic variance of the OLS estimator $\sqrt{n}\,\hat{\beta}$.
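The MA($\tau-1$) structure of overlapping forecast errors can be verified numerically. A minimal sketch, assuming for concreteness that the spot rate follows a random walk, so the $\tau$-step forecast error is a sum of $\tau$ future innovations:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, n = 3, 20000

# tau-step-ahead forecast error of a random walk: eps_{t+tau} = u_{t+1} + ... + u_{t+tau}
u = rng.standard_normal(n + tau)
eps = np.array([u[t + 1:t + tau + 1].sum() for t in range(n)])

def acf(x, j):
    """Sample autocorrelation of x at lag j."""
    x = x - x.mean()
    return float((x[j:] * x[:-j]).mean() / (x * x).mean())

for j in range(1, tau + 2):
    print(f"lag {j}: {acf(eps, j):+.3f}")
# MA(tau-1) theory: correlation (tau - j)/tau for j < tau, and 0 for j >= tau
```

With $\tau = 3$ the sample autocorrelations come out near $2/3$, $1/3$, $0$, $0$: nonzero up to lag $\tau - 1$ and negligible beyond, exactly the pattern described above.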
◆ Question
What happens to the OLS estimator $\hat{\beta}$ if the disturbance $\{\varepsilon_t\}$ displays conditional heteroskedasticity (i.e., $E(\varepsilon_t^2 \mid X_t) \neq \sigma^2$) and/or autocorrelation (i.e., $\operatorname{cov}(\varepsilon_t, \varepsilon_{t-j}) \neq 0$ for at least some $j > 0$)?
Framework and Assumptions
Assumption 6.1.
[Ergodic Stationarity]: The observable stochastic process $\{Y_t, X_t'\}_{t=1}^{n}$ is ergodic stationary, where $Y_t$ is a random variable and $X_t$ is a $K \times 1$ random vector.
Assumption 6.2.
[Linearity]:
$$Y_t = X_t'\beta_o + \varepsilon_t,$$
where $\beta_o$ is a $K \times 1$ unknown parameter vector, and $\varepsilon_t$ is the unobservable disturbance.
Assumption 6.3.
[Correct Model Specification]: $E(\varepsilon_t \mid X_t) = 0$ with $E(\varepsilon_t^2) = \sigma^2 < \infty$.
Assumption 6.4.
[Nonsingularity]: $Q \equiv E(X_t X_t')$ is positive definite.
Long-Run Variance-Covariance Matrix Estimation
◆ Question
Why are we interested in the long-run variance-covariance matrix $V = \sum_{j=-\infty}^{\infty} \Gamma(j)$?
• For the OLS estimator $\hat{\beta}$, we have
$$\sqrt{n}\left(\hat{\beta} - \beta_o\right) = \hat{Q}^{-1} n^{-1/2} \sum_{t=1}^{n} X_t \varepsilon_t.$$
• Suppose a suitable CLT holds for $\{X_t \varepsilon_t\}$ in the present context; that is, suppose
$$n^{-1/2} \sum_{t=1}^{n} X_t \varepsilon_t \xrightarrow{d} N(0, V)$$
as $n \to \infty$, where $V$ is the asymptotic variance-covariance matrix, namely
$$V \equiv \operatorname{avar}\!\left(n^{-1/2} \sum_{t=1}^{n} X_t \varepsilon_t\right) = \lim_{n \to \infty} \operatorname{var}\!\left(n^{-1/2} \sum_{t=1}^{n} X_t \varepsilon_t\right).$$
• By Slutsky's theorem, we have
$$\sqrt{n}\left(\hat{\beta} - \beta_o\right) \xrightarrow{d} N\!\left(0, Q^{-1} V Q^{-1}\right).$$
◼ Proof
• Put $g_t = X_t \varepsilon_t$; note that $E(g_t) = 0$ given $E(\varepsilon_t \mid X_t) = 0$, while for some lag order $j > 0$, $\Gamma(j) = \operatorname{cov}(g_t, g_{t-j}) \neq 0$. Then
$$\operatorname{var}\!\left(n^{-1/2}\sum_{t=1}^{n} X_t \varepsilon_t\right) = \operatorname{var}\!\left(n^{-1/2}\sum_{t=1}^{n} g_t\right) = E\!\left[\left(n^{-1/2}\sum_{t=1}^{n} g_t\right)\left(n^{-1/2}\sum_{s=1}^{n} g_s\right)'\right] = n^{-1}\sum_{t=1}^{n}\sum_{s=1}^{n} E(g_t g_s')$$
$$= n^{-1}\sum_{t=1}^{n} E(g_t g_t') + n^{-1}\sum_{t=2}^{n}\sum_{s=1}^{t-1} E(g_t g_s') + n^{-1}\sum_{t=1}^{n-1}\sum_{s=t+1}^{n} E(g_t g_s')$$
$$= n^{-1}\sum_{t=1}^{n} E(g_t g_t') + n^{-1}\sum_{j=1}^{n-1}\sum_{t=j+1}^{n} E(g_t g_{t-j}') + n^{-1}\sum_{j=-(n-1)}^{-1}\sum_{t=1}^{n+j} E(g_t g_{t-j}')$$
$$= \sum_{j=-(n-1)}^{n-1} \left(1 - |j|/n\right)\Gamma(j) \;\to\; \sum_{j=-\infty}^{\infty} \Gamma(j) \quad \text{as } n \to \infty,$$
by dominated convergence. Therefore, we have $V = \sum_{j=-\infty}^{\infty} \Gamma(j)$.
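The Bartlett-weighted sum in the last step can be checked against the closed form for a scalar AR(1) process, where $\Gamma(j) = \rho^{|j|}/(1-\rho^2)$ and $V = 1/(1-\rho)^2$ (unit innovation variance; the parameter value here is illustrative):

```python
import numpy as np

rho = 0.6

def gamma(j):
    """AR(1) autocovariances with unit innovation variance."""
    return rho ** np.abs(j) / (1 - rho**2)

V = 1.0 / (1 - rho) ** 2   # closed form: sum of Gamma(j) over all j = 6.25

for n in (10, 100, 1000, 10000):
    j = np.arange(-(n - 1), n)
    bartlett_sum = float(np.sum((1 - np.abs(j) / n) * gamma(j)))
    print(f"n = {n:>5}: sum_(|j|<n) (1 - |j|/n) Gamma(j) = {bartlett_sum:.4f}")
```

The weighted sums increase monotonically toward $V = 6.25$ as $n$ grows, illustrating the dominated-convergence step.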
Definition 6.1.
[Spectral Density Matrix]: Suppose $\{g_t = X_t \varepsilon_t\}$ is a $K \times 1$ weakly stationary process with $E(g_t) = 0$ and autocovariance function $\Gamma(j) \equiv \operatorname{cov}(g_t, g_{t-j}) = E(g_t g_{t-j}')$, which is a $K \times K$ matrix. Suppose
$$\sum_{j=-\infty}^{\infty} \|\Gamma(j)\| < \infty.$$
Then the spectral density matrix of $\{g_t\}$ is
$$H(\omega) = \frac{1}{2\pi} \sum_{j=-\infty}^{\infty} \Gamma(j) e^{-\mathrm{i} j \omega}, \quad \omega \in [-\pi, \pi],$$
so that the long-run variance is $V = \sum_{j=-\infty}^{\infty} \Gamma(j) = 2\pi H(0)$.
◆ Question
How to estimate the long-run variance-covariance matrix 𝑉?
➢ However, $\hat{V}$ is not consistent for $V$.
• There are too many estimated terms in the summation over lag orders in $V$: $\operatorname{Bias}\!\left[\sum_{j=-(n-1)}^{n-1} \hat{\Gamma}(j)\right] \to 0$ as $n \to \infty$, but $\operatorname{Var}\!\left[\sum_{j=-(n-1)}^{n-1} \hat{\Gamma}(j)\right]$ will never go to zero.
• So, we consider the truncated sum $\hat{V} = \sum_{j=-p}^{p} \hat{\Gamma}(j)$, where $p$ is a positive integer. If $p$ is fixed, we expect
$$\hat{V} \xrightarrow{p} \sum_{j=-p}^{p} \Gamma(j) \neq 2\pi H(0) = V,$$
so we should let $p = p_n \to \infty$ as $n \to \infty$.
• However, we cannot let $p$ grow as fast as the sample size $n$. To ensure $\hat{V} \xrightarrow{p} V$, we use the truncated variance estimator
$$\hat{V} = \sum_{j=-p_n}^{p_n} \hat{\Gamma}(j),$$
where $p_n \to \infty$ and $p_n/n \to 0$, so that $\operatorname{Var}(\hat{V}) \propto (2p_n + 1)/n \to 0$ and
$$\operatorname{Bias}(\hat{V}) = \sum_{j=-p_n}^{p_n} \Gamma(j) - \sum_{j=-\infty}^{\infty} \Gamma(j) = -\sum_{|j| > p_n} \Gamma(j) \to 0.$$
✓ Example: $p_n = n^{1/3}$
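A minimal implementation sketch of the truncated estimator (the function and variable names below are mine, not from the lecture; the lag rule $p_n = \lfloor n^{1/3} \rfloor$ follows the example above):

```python
import numpy as np

def gamma_hat(g, j):
    """Sample autocovariance Gamma_hat(j) = n^{-1} sum_{t=j+1}^{n} g_t g_{t-j}' (K x K)."""
    n = g.shape[0]
    if j == 0:
        return g.T @ g / n
    return g[j:].T @ g[:n - j] / n

def truncated_lrv(g, p=None):
    """Truncated long-run variance estimator V_hat = sum_{j=-p_n}^{p_n} Gamma_hat(j).

    g : (n, K) array of observations of g_t = X_t * eps_t.
    Uses Gamma_hat(-j) = Gamma_hat(j)'. Note the plain truncated sum is not
    guaranteed to be positive semi-definite in finite samples; kernel weights
    (e.g. Bartlett/Newey-West) are a common remedy.
    """
    n = g.shape[0]
    if p is None:
        p = int(n ** (1 / 3))      # the p_n = n^(1/3) rule from the example
    V = gamma_hat(g, 0)
    for j in range(1, p + 1):
        Gj = gamma_hat(g, j)
        V = V + Gj + Gj.T
    return V
```

For a scalar AR(1) series with $\rho = 0.5$ and unit innovations, the estimate should settle near $V = 1/(1-\rho)^2 = 4$ for large $n$.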
Consistency of the OLS Estimator
Theorem 6.1.
Suppose Assumptions 6.1 to 6.5(a) hold. Then
$$\hat{\beta} \xrightarrow{p} \beta_o \quad \text{as } n \to \infty.$$
◼ Proof:
$$\hat{\beta} - \beta_o = \hat{Q}^{-1} n^{-1} \sum_{t=1}^{n} X_t \varepsilon_t.$$
• By Assumptions 6.1, 6.2 and 6.4 and the WLLN for an ergodic stationary process, we have
$$\hat{Q} \xrightarrow{p} Q \quad \text{and} \quad \hat{Q}^{-1} \xrightarrow{p} Q^{-1}.$$
• Similarly, $n^{-1} \sum_{t=1}^{n} X_t \varepsilon_t \xrightarrow{p} E(X_t \varepsilon_t) = 0$ by the WLLN, since $E(\varepsilon_t \mid X_t) = 0$. It follows that $\hat{\beta} - \beta_o \xrightarrow{p} 0$.
Asymptotic Normality of the OLS Estimator
Lemma 6.1.
[CLT for a Zero Mean Ergodic Stationary Process (White 1984, Theorem 5.15)]: Suppose $\{Z_t\}$ is an ergodic stationary process with:
(1) $E(Z_t) = 0$;
(2) $V = \sum_{j=-\infty}^{\infty} \Gamma(j)$ is finite and nonsingular, where $\Gamma(j) = E(Z_t Z_{t-j}')$;
(3) $E(Z_t \mid Z_{t-j}, Z_{t-j-1}, \ldots) \xrightarrow{q.m.} 0$ as $j \to \infty$;
(4) $\sum_{j=0}^{\infty} \left[E(r_j' r_j)\right]^{1/2} < \infty$, where $r_j \equiv E(Z_t \mid Z_{t-j}, Z_{t-j-1}, \ldots) - E(Z_t \mid Z_{t-j-1}, Z_{t-j-2}, \ldots)$.
Then as $n \to \infty$,
$$n^{1/2} \bar{Z}_n = n^{-1/2} \sum_{t=1}^{n} Z_t \xrightarrow{d} N(0, V).$$
Theorem 6.2.
[Asymptotic Normality]: Suppose Assumptions 6.1 to 6.5 hold. Then as $n \to \infty$,
$$\sqrt{n}\left(\hat{\beta} - \beta_o\right) \xrightarrow{d} N\!\left(0, Q^{-1} V Q^{-1}\right),$$
where $V = \sum_{j=-\infty}^{\infty} \Gamma(j)$.
◼ Proof:
• Because
$$\sqrt{n}\left(\hat{\beta} - \beta_o\right) = \hat{Q}^{-1} n^{-1/2} \sum_{t=1}^{n} X_t \varepsilon_t.$$
• By Assumptions 6.1 to 6.3 and 6.5 and the CLT for an ergodic stationary process, we have
$$n^{-1/2} \sum_{t=1}^{n} X_t \varepsilon_t \xrightarrow{d} N(0, V),$$
• and $\hat{Q} \xrightarrow{p} Q$ and $\hat{Q}^{-1} \xrightarrow{p} Q^{-1}$ by Assumption 6.4 and the WLLN for an ergodic stationary process. We then have by Slutsky's theorem
$$\sqrt{n}\left(\hat{\beta} - \beta_o\right) \xrightarrow{d} N\!\left(0, Q^{-1} V Q^{-1}\right).$$
Hypothesis Testing
Theorem 6.3.
[Robust t-Test and Wald Test]: Under Assumptions 6.1 to 6.6 and $\mathbf{H}_0$, we have as $n \to \infty$:
✓ When $J = 1$, we define a robust t-test statistic
$$T_r = \frac{\sqrt{n}\left(R\hat{\beta} - r\right)}{\sqrt{R \hat{Q}^{-1} \hat{V} \hat{Q}^{-1} R'}} \xrightarrow{d} N(0,1).$$
✓ When $J \geq 1$, we define a robust Wald test statistic
$$W_r = n \left(R\hat{\beta} - r\right)' \left[ R \left(\mathbf{X}'\mathbf{X}/n\right)^{-1} \hat{V} \left(\mathbf{X}'\mathbf{X}/n\right)^{-1} R' \right]^{-1} \left(R\hat{\beta} - r\right) \xrightarrow{d} \chi_J^2.$$
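As a sketch, the robust Wald statistic can be assembled from the pieces already defined: OLS residuals give $\hat{g}_t = X_t \hat{\varepsilon}_t$, and a truncated sum of sample autocovariances gives $\hat{V}$ (the lag rule $p_n = \lfloor n^{1/3} \rfloor$ and all names here are my choices, not from the lecture):

```python
import numpy as np

def robust_wald(X, y, R, r, p=None):
    """W_r = n (R b - r)' [ R (X'X/n)^{-1} Vhat (X'X/n)^{-1} R' ]^{-1} (R b - r)."""
    n = X.shape[0]
    b = np.linalg.solve(X.T @ X, X.T @ y)    # OLS estimate beta_hat
    g = X * (y - X @ b)[:, None]             # g_t = X_t * eps_hat_t, one row per t
    if p is None:
        p = int(n ** (1 / 3))                # truncation lag p_n = n^(1/3)
    V = g.T @ g / n                          # Gamma_hat(0)
    for j in range(1, p + 1):                # add Gamma_hat(j) + Gamma_hat(j)'
        Gj = g[j:].T @ g[:n - j] / n
        V = V + Gj + Gj.T
    Qinv = np.linalg.inv(X.T @ X / n)
    S = R @ Qinv @ V @ Qinv @ R.T            # estimated avar of sqrt(n)(R b - r)
    d = R @ b - r
    return float(n * d @ np.linalg.solve(S, d))
```

Under $\mathbf{H}_0$ the statistic is approximately $\chi_J^2$, while a grossly false hypothesis should produce a much larger value.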
Cochrane-Orcutt Procedure
• The sampling error resulting from the first-step estimation has no impact on the asymptotic properties of the OLS estimator in the second step.
• The asymptotic variance estimator of $\tilde{\beta}_a$ is given by
$$\hat{s}_v^2\, \hat{Q}_{X^* X^*}^{-1},$$
where
$$\hat{s}_v^2 = \frac{1}{n - K} \sum_{t=1}^{n} \hat{v}_t^2, \qquad \hat{Q}_{X^* X^*} = \frac{1}{n} \sum_{t=1}^{n} \hat{X}_t^* \hat{X}_t^{*\prime},$$
with
$$\hat{v}_t = \hat{Y}_t^* - \hat{X}_t^{*\prime} \tilde{\beta}_a.$$
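The two-step procedure can be sketched as follows for AR(1) errors (this follows the standard Cochrane-Orcutt recipe; the function and variable names are mine): step 1 runs OLS and estimates $\rho$ from the residuals; step 2 runs OLS on the quasi-differenced data $Y_t^* = Y_t - \hat{\rho} Y_{t-1}$, $X_t^* = X_t - \hat{\rho} X_{t-1}$.

```python
import numpy as np

def cochrane_orcutt(X, y):
    """Two-step Cochrane-Orcutt estimator for a linear model with AR(1) errors."""
    n, K = X.shape
    # Step 1: OLS, then estimate the AR(1) coefficient of the residuals
    b = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ b
    rho = float((e[1:] @ e[:-1]) / (e[:-1] @ e[:-1]))
    # Step 2: OLS on quasi-differenced data (the first observation is lost)
    ys = y[1:] - rho * y[:-1]                  # Y*_t = Y_t - rho_hat * Y_{t-1}
    Xs = X[1:] - rho * X[:-1]                  # X*_t = X_t - rho_hat * X_{t-1}
    ba = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)
    # Asymptotic variance estimator s_v^2 * Q_{X*X*}^{-1}
    v = ys - Xs @ ba
    m = len(ys)
    s2 = float(v @ v) / (m - K)
    avar = s2 * np.linalg.inv(Xs.T @ Xs / m)   # estimates avar of sqrt(n)(beta_a - beta_o)
    return ba, rho, avar
```

Standard errors follow as `np.sqrt(np.diag(avar) / m)`. In practice the two steps are often iterated until $\hat{\rho}$ converges; the single round shown here already delivers the asymptotic efficiency gain.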
Thank You !