
Chapter 4: Variance of the Regression Coefficients and the Gauss-Markov Theorem

The coefficients we calculate from OLS are estimators computed from sample data, not from the
population data, so they are not the true coefficient values of the PRF. If we change the sample, we
get different estimates.
Thus the estimator, β̂, is a random variable. As a random variable, it has a distribution that can be
described by its moments. We will focus on the first and second moments, i.e. the expected value and
the variance.
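
A minimal simulation sketch (assuming Python with NumPy; the true β, the regressors, and the error distribution are arbitrary choices for illustration) shows this sampling variability directly: each fresh draw of errors from the same PRF produces a different OLS estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100
beta = np.array([2.0, 0.5])  # true PRF coefficients (illustrative)
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # fixed regressors

# Three samples from the same PRF give three different estimates of beta.
for _ in range(3):
    eps = rng.normal(0, 1, n)                     # fresh error draw
    y = X @ beta + eps
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # (X'X)⁻¹X'y
    print(beta_hat)
```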

Expected Value of β̂
E[β̂ ∣ X] = E[(X'X)⁻¹X'Y]

E[β̂ ∣ X] = E[(X'X)⁻¹X'(Xβ + ϵ)] ; Y = Xβ + ϵ

E[β̂ ∣ X] = E[(X'X)⁻¹X'Xβ + (X'X)⁻¹X'ϵ]

E[β̂ ∣ X] = E[(X'X)⁻¹X'Xβ] + E[(X'X)⁻¹X'ϵ]

E[β̂ ∣ X] = β + (X'X)⁻¹X'E[ϵ] ; X is known and fixed, only the error terms are random

Therefore E[β̂ ∣ X] = β, an UNBIASED estimator, since E[ϵ] = 0 by assumption:

E[β̂ ∣ X] = β
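
A hedged Monte Carlo check of this result (same illustrative setup as the sketch above): holding X fixed and averaging β̂ over many error draws should recover β, because E[ϵ] = 0.

```python
import numpy as np

rng = np.random.default_rng(1)

n, reps = 100, 5000
beta = np.array([2.0, 0.5])  # true PRF coefficients (illustrative)
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # held fixed

XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)  # (X'X)⁻¹X', reused every draw
estimates = np.empty((reps, 2))
for r in range(reps):
    y = X @ beta + rng.normal(0, 1, n)  # only the errors change
    estimates[r] = XtX_inv_Xt @ y

print(estimates.mean(axis=0))  # ≈ [2.0, 0.5], i.e. E[β̂ ∣ X] = β
```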
Variance of β̂
Var[β̂ ∣ X] = E[(β̂ − β)(β̂ − β)' ∣ X] ; since E[β̂ ∣ X] = β

Var[β̂ ∣ X] = E[(X'X)⁻¹X'ϵϵ'X(X'X)⁻¹ ∣ X] ; since β̂ = β + (X'X)⁻¹X'ϵ

Var[β̂ ∣ X] = (X'X)⁻¹X'E[ϵϵ' ∣ X]X(X'X)⁻¹

Var[β̂ ∣ X] = (X'X)⁻¹X'(σ²I)X(X'X)⁻¹ ; E[ϵϵ' ∣ X] = σ²I by assumption

Var[β̂ ∣ X] = σ²(X'X)⁻¹X'X(X'X)⁻¹

Var[β̂ ∣ X] = σ²(X'X)⁻¹
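
A corresponding simulation sketch (same illustrative setup, with σ = 1 assumed): the empirical covariance matrix of the β̂ draws should approximate the theoretical σ²(X'X)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(2)

n, reps, sigma = 100, 20000, 1.0
beta = np.array([2.0, 0.5])  # true PRF coefficients (illustrative)
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # held fixed

XtX_inv = np.linalg.inv(X.T @ X)
estimates = np.empty((reps, 2))
for r in range(reps):
    y = X @ beta + rng.normal(0, sigma, n)
    estimates[r] = XtX_inv @ (X.T @ y)

print(np.cov(estimates, rowvar=False))  # empirical Var[β̂ ∣ X]
print(sigma**2 * XtX_inv)               # theoretical σ²(X'X)⁻¹
```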
Gauss-Markov Theorem
"In the classical linear regression model with regressor matrix X, the least squares estimator β̂ is
the minimum variance linear unbiased estimator of β. For any vector of constants w, the minimum
variance linear unbiased estimator of w'β in the classical regression model is w'β̂, where β̂ is the
least squares estimator." (Econometric Analysis, 6th Edition, Greene)

"Given the assumptions of the classical linear regression model, the least squares estimators, in the
class of unbiased linear estimators, have minimum variance, that is, they are the Best Linear Unbiased
Estimators, BLUE." (Basic Econometrics, 5th Edition, Gujarati and Porter)

Proof:
Let β̃ = CY be any estimator linear in Y, with C = (X'X)⁻¹X' + D. Then

E[β̃] = E[((X'X)⁻¹X' + D)(Xβ + ϵ)]
E[β̃] = β + DXβ
E[β̃] = β, i.e. β̃ is unbiased, only if DX = 0

The covariance matrix of β̃ is

E[(β̃ − β)(β̃ − β)'] = E[((X'X)⁻¹X' + D)ϵϵ'((X'X)⁻¹X' + D)'] ; since β̃ − β = ((X'X)⁻¹X' + D)ϵ when DX = 0
E[(β̃ − β)(β̃ − β)'] = σ²[(X'X)⁻¹X'IX(X'X)⁻¹ + DIX(X'X)⁻¹ + (X'X)⁻¹X'ID' + DID']
E[(β̃ − β)(β̃ − β)'] = σ²(X'X)⁻¹ + σ²DD' ; since DX = 0 (and hence X'D' = 0)

But DD' is a positive semidefinite matrix (w'DD'w = (D'w)'(D'w) ≥ 0 for any vector w), so σ²DD' adds a
nonnegative amount of variance. Thus the covariance matrix of β̃ cannot be less than the covariance
matrix of β̂, which proves that the OLS estimator is BLUE.
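
A minimal numerical sketch of this proof (assuming NumPy; D is built from the annihilator matrix M = I − X(X'X)⁻¹X' purely for illustration, since any D of the form AM satisfies DX = 0): the covariance of the alternative linear unbiased estimator exceeds σ²(X'X)⁻¹ by the positive semidefinite term σ²DD'.

```python
import numpy as np

rng = np.random.default_rng(3)

n, k, sigma = 100, 2, 1.0
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # illustrative regressors

XtX_inv = np.linalg.inv(X.T @ X)
M = np.eye(n) - X @ XtX_inv @ X.T       # annihilator matrix: MX = 0
D = 0.01 * rng.normal(size=(k, n)) @ M  # hypothetical D with DX = 0

var_ols   = sigma**2 * XtX_inv              # Var[β̂ ∣ X] = σ²(X'X)⁻¹
var_tilde = var_ols + sigma**2 * (D @ D.T)  # Var[β̃ ∣ X] = σ²(X'X)⁻¹ + σ²DD'

# The excess variance σ²DD' is positive semidefinite: all eigenvalues ≥ 0.
print(np.linalg.eigvalsh(var_tilde - var_ols))  # all ≥ 0 (up to rounding)
print(np.diag(var_tilde) >= np.diag(var_ols))   # each coefficient variance at least as large
```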
