Econometrics 005

The normal linear regression model cont’d

▶ Therefore, $\hat\beta_{ML} = \hat\beta_{OLS}$.

▶ This implies that OLS is also asymptotically efficient.

▶ This is different from the Gauss-Markov Theorem (BLUE), which restricts attention to the class of linear estimators.

▶ Assuming normality rules out that some non-linear estimators have smaller variance.

▶ However, $\hat\sigma^2_{ML}$ is biased downwards in finite samples.
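As an illustration (a minimal sketch assuming numpy; data, names, and seed are ours, not from the slides), the following Monte Carlo confirms both claims: the ML estimator of $\beta$ is computed by OLS, and $\hat\sigma^2_{ML} = \hat\varepsilon'\hat\varepsilon/n$ is biased downward by the factor $(n-k)/n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma2 = 50, 3, 4.0          # small sample to make the bias visible
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -1.0])

s2_ml, s2_unbiased = [], []
for _ in range(5000):               # Monte Carlo over fresh error draws
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    b = np.linalg.solve(X.T @ X, X.T @ y)    # OLS = ML estimate of beta
    e = y - X @ b                            # residuals
    s2_ml.append(e @ e / n)                  # ML estimate of sigma^2
    s2_unbiased.append(e @ e / (n - k))      # unbiased estimate

print(np.mean(s2_ml))        # ~ sigma2*(n-k)/n = 3.76 < 4: biased downward
print(np.mean(s2_unbiased))  # ~ 4.0
```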



The normal linear regression model cont’d

▶ The second derivatives are given by:

$$
\begin{pmatrix}
\dfrac{\partial^2 \ln L}{\partial\beta\,\partial\beta'} & \dfrac{\partial^2 \ln L}{\partial\beta\,\partial\sigma^2} \\[2mm]
\dfrac{\partial^2 \ln L}{\partial\sigma^2\,\partial\beta'} & \dfrac{\partial^2 \ln L}{\partial(\sigma^2)^2}
\end{pmatrix}
=
\begin{pmatrix}
-\dfrac{X'X}{\sigma^2} & -\dfrac{X'\varepsilon}{\sigma^4} \\[2mm]
-\dfrac{\varepsilon'X}{\sigma^4} & \dfrac{n}{2\sigma^4} - \dfrac{\varepsilon'\varepsilon}{\sigma^6}
\end{pmatrix}
\tag{45}
$$

▶ Taking the inverse of (minus the) expectations, we obtain the IM-expression of the variance-covariance matrix:
$$
[I(\theta_0)]^{-1} =
\begin{pmatrix}
\sigma^2 (X'X)^{-1} & 0 \\
0 & 2\sigma^4/n
\end{pmatrix}
\tag{46}
$$
and then replace $\sigma^2$ by the ML estimate $\hat\sigma^2_{ML}$.
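A hedged numerical check of (45)-(46) (assuming numpy; the design matrix is simulated for illustration): $E[\varepsilon] = 0$ kills the off-diagonal block, $E[\varepsilon'\varepsilon] = n\sigma^2$ makes minus the expected $(\sigma^2,\sigma^2)$ entry equal to $n/(2\sigma^4)$, and inverting the two blocks separately reproduces (46):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma2 = 200, 3, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

# Information matrix = minus the expected Hessian from (45):
I_bb = X.T @ X / sigma2            # beta block
I_ss = n / (2 * sigma2**2)         # sigma^2 block: n/(2 sigma^4)

# Inverting each block separately gives (46):
V_beta   = sigma2 * np.linalg.inv(X.T @ X)   # sigma^2 (X'X)^{-1}
V_sigma2 = 2 * sigma2**2 / n                 # 2 sigma^4 / n

assert np.allclose(np.linalg.inv(I_bb), V_beta)
assert np.isclose(1 / I_ss, V_sigma2)
```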



The normal linear regression model cont’d
▶ Note the block-diagonality of the information matrix and therefore also of (46).
▶ Otherwise, it would have been necessary to invert the entire matrix (45), not each block separately.
▶ There is no covariance between $\hat\beta$ and $\hat\sigma^2$.

⇒ We can make inference about $\hat\beta$ without taking into account the estimation error in $\hat\sigma^2$.
▶ If we had treated $\sigma$ instead of $\sigma^2$ as a parameter, then $\operatorname{Var}(\hat\sigma) = \sigma^2/(2n)$.
▶ This can be easily seen by applying (56):
$$
\operatorname{Var}(\hat\sigma^2) = \left(\frac{\partial \sigma^2}{\partial \sigma}\right)^2 \cdot \operatorname{Var}(\hat\sigma) = 4\sigma^2 \cdot \frac{\sigma^2}{2n} = \frac{2\sigma^4}{n}.
$$
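To see the delta-method relation numerically (a sketch assuming numpy; we use the pure-noise case where $\hat\sigma^2 = \varepsilon'\varepsilon/n$), compare Monte Carlo variances of $\hat\sigma$ and $\hat\sigma^2$ with $\sigma^2/(2n)$ and $2\sigma^4/n$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2 = 500, 3.0

# Draw sigma^2_hat = eps'eps/n many times; compare Monte Carlo variances
# with the asymptotic formulas sigma^2/(2n) and 2*sigma^4/n.
s2_hat = np.array([
    np.mean(rng.normal(scale=np.sqrt(sigma2), size=n) ** 2)
    for _ in range(20000)
])
s_hat = np.sqrt(s2_hat)

print(np.var(s_hat),  sigma2 / (2 * n))    # ~ 0.003 vs 0.003
print(np.var(s2_hat), 2 * sigma2**2 / n)   # ~ 0.036 vs 0.036
```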
The normal linear regression model cont’d

▶ The empirical Hessian estimator is obtained by evaluating (45) at the ML estimates.
▶ But the residuals $y - X\hat\beta_{ML}$ are orthogonal to $X$.
▶ Thus, in this case the IM estimate and the empirical Hessian estimate are identical.

▶ The concentrated log-likelihood (with respect to $\sigma^2$) is:
$$
\ln L_c = -\frac{n}{2}\left[\ln 2\pi + \ln(\hat\varepsilon'\hat\varepsilon/n) + 1\right]. \tag{47}
$$
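A small check (assuming numpy; data simulated for illustration) of both points: $X'\hat\varepsilon = 0$ wipes out the off-diagonal block of (45) at the ML estimates, and plugging $\hat\sigma^2_{ML}$ into the full log-likelihood reproduces the concentrated form (47):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
s2 = e @ e / n

# Off-diagonal Hessian block -X'e/s2^2 vanishes at the ML estimates:
print(np.max(np.abs(X.T @ e)))   # ~ 1e-12: residuals orthogonal to X

# Full log-likelihood at (b, s2) equals the concentrated form (47):
lnL  = -n / 2 * np.log(2 * np.pi * s2) - e @ e / (2 * s2)
lnLc = -n / 2 * (np.log(2 * np.pi) + np.log(e @ e / n) + 1)
print(lnL, lnLc)                 # identical
```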



Normal linear regression: hypothesis testing
▶ Consider the set of (possibly non-linear) $J$ restrictions $c(\beta) = 0$.

▶ The LR statistic can be obtained by inserting the concentrated log-likelihoods according to (47) into (35):
$$
LR = -2(\ln L_r - \ln L_u) = n \ln\!\left(\hat\varepsilon_R'\hat\varepsilon_R / \hat\varepsilon'\hat\varepsilon\right) = n \ln(\hat\sigma_R^2/\hat\sigma^2).
$$
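A minimal sketch (assuming numpy; the restriction drops one regressor, so $J = 1$) of how the LR statistic is computed from the two residual sums of squares:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)  # true beta_3 = 0

def rss(X, y):
    b = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ b
    return e @ e

rss_u = rss(X, y)          # unrestricted model
rss_r = rss(X[:, :2], y)   # restricted: last regressor dropped (J = 1)

LR = n * np.log(rss_r / rss_u)   # = n * ln(sigma2_R / sigma2_U)
print(LR)                        # compare with the chi2(1) 5% value 3.84
```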

▶ The Wald statistic can be easily obtained by using (37), (38) and (46):
$$
W = c(\hat\beta)' \left[\hat C(\hat\beta)\, \hat\sigma^2 (X'X)^{-1}\, \hat C(\hat\beta)'\right]^{-1} c(\hat\beta) \sim \chi^2[J], \tag{48}
$$
where $\hat C(\hat\beta) = \dfrac{\partial c(\hat\beta)}{\partial \hat\beta'}$.
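A sketch of (48) for the linear restriction $c(\beta) = \beta_3 = 0$, whose Jacobian is the constant row $C = (0, 0, 1)$ (assuming numpy; data simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
s2 = e @ e / n                       # ML estimate of sigma^2

c = np.array([b[2]])                 # c(beta_hat) = beta_3_hat, J = 1
C = np.array([[0.0, 0.0, 1.0]])      # Jacobian of c(beta)

V = s2 * np.linalg.inv(X.T @ X)      # estimated Asy.Var from (46)
W = c @ np.linalg.inv(C @ V @ C.T) @ c
print(W)                             # compare with chi2(1): 3.84 at 5%
```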
Normal linear regression: hypothesis testing cont’d
▶ Using (40), the FOC from constrained maximization are:
$$
\begin{pmatrix}
\dfrac{\partial \ln L^*}{\partial \beta} \\[2mm]
\dfrac{\partial \ln L^*}{\partial \sigma^2} \\[2mm]
\dfrac{\partial \ln L^*}{\partial \lambda}
\end{pmatrix}
=
\begin{pmatrix}
\dfrac{X'(y - X\beta)}{\sigma^2} + C(\beta)'\lambda \\[2mm]
-\dfrac{n}{2\sigma^2} + \dfrac{(y - X\beta)'(y - X\beta)}{2\sigma^4} \\[2mm]
c(\beta)
\end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
$$
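For a linear restriction $R\beta = q$ these FOC have a closed-form solution, the standard restricted least-squares formula (a sketch assuming numpy; the closed form is textbook-standard but not stated on the slide). The code verifies that the $c(\beta)$ and $\sigma^2$ rows of the FOC vanish at the solution; the $\beta$ row then pins down the multiplier $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=n)

R, q = np.array([[0.0, 0.0, 1.0]]), np.array([0.0])  # restriction: beta_3 = 0

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
# Restricted LS: b_R = b - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (R b - q)
b_r = b - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ b - q)

e_r = y - X @ b_r
s2_r = e_r @ e_r / n

print(R @ b_r - q)                               # c(beta_R) = 0 holds
print(-n/(2*s2_r) + e_r @ e_r / (2*s2_r**2))     # sigma^2 FOC = 0 at s2_r
```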

▶ Under the null hypothesis being tested, it holds:
$$
E\left[\frac{\partial \ln L(\hat\beta_R)}{\partial \beta}\right] = E\left[\frac{1}{\sigma^2} X'\varepsilon\right] = 0
$$

▶ Further:
$$
\left\{\operatorname{Asy.Var}\left[\frac{\partial \ln L(\hat\beta_R)}{\partial \beta}\right]\right\}^{-1} = \left\{-E\left[\frac{\partial^2 \ln L(\hat\beta_R)}{\partial \beta\,\partial \beta'}\right]\right\}^{-1} = \sigma^2 (X'X)^{-1}
$$
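These two results are exactly the ingredients of a score (LM) type statistic: the quadratic form of the restricted score in its asymptotic variance. A hedged Monte Carlo sketch (assuming numpy; the quadratic form below is the standard one, not quoted from the slides):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 0.5, 0.0])            # the null beta_3 = 0 is true
XtX_inv = np.linalg.inv(X.T @ X)

lm = []
for _ in range(5000):
    y = X @ beta + rng.normal(size=n)
    b_r = np.linalg.solve(X[:, :2].T @ X[:, :2], X[:, :2].T @ y)
    e_r = y - X[:, :2] @ b_r                # restricted residuals
    s2_r = e_r @ e_r / n                    # restricted ML sigma^2
    score = X.T @ e_r / s2_r                # d lnL / d beta at restricted est.
    lm.append(score @ (s2_r * XtX_inv) @ score)  # score' [Asy.Var] score
lm = np.array(lm)

print(np.mean(lm > 3.84))   # rejection rate ~ 0.05 under chi2(1) at 5%
```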

