Chapter One Review of Linear Regression Models: Definitions and Components of Econometrics
$\hat{\beta} = \dfrac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2} = \dfrac{\sum x_i y_i}{\sum x_i^2} = \dfrac{\sum X_i Y_i - n\bar{X}\bar{Y}}{\sum X_i^2 - n\bar{X}^2}$
Example
• Suppose we want to study the relationship between input (number of workers) and output (thousands of Birr) of five factories given in the table below.
• To fit the regression line of Yi (output in thousands of Birr) on Xi (number of workers), we can employ the method of least squares as follows:
Arrange the data in tabular form:
Industry   Output (Y) in thousands of Birr   Input (X) in number of workers   Paired data (X, Y)
1          4                                 2                                (2, 4)
2          7                                 3                                (3, 7)
3          3                                 1                                (1, 3)
4          9                                 5                                (5, 9)
5          17                                9                                (9, 17)
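As a quick numerical check of the formula above, the following Python sketch (numpy is an assumed dependency, not part of the original notes) computes the slope from $\hat{\beta} = \sum x_i y_i / \sum x_i^2$ and the intercept from the standard relation $\hat{\alpha} = \bar{Y} - \hat{\beta}\bar{X}$ for the five factories.

import numpy as np

# Data from the table: input (number of workers) and output (thousands of Birr)
X = np.array([2.0, 3.0, 1.0, 5.0, 9.0])
Y = np.array([4.0, 7.0, 3.0, 9.0, 17.0])

# Deviations from the means: x_i = X_i - X_bar, y_i = Y_i - Y_bar
x = X - X.mean()
y = Y - Y.mean()

beta_hat = np.sum(x * y) / np.sum(x ** 2)   # 70 / 40 = 1.75
alpha_hat = Y.mean() - beta_hat * X.mean()  # 8 - 1.75 * 4 = 1.0

print(f"Fitted line: Y_hat = {alpha_hat:.2f} + {beta_hat:.2f} X")

For this data the fitted regression line is $\hat{Y}_i = 1 + 1.75X_i$.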
• Mathematically: $\mathrm{Var}(U_i) = E[U_i - E(U_i)]^2 = E(U_i^2) = \sigma^2$ (since $E(U_i) = 0$). This constant variance is called the homoscedasticity assumption, and the constant variance itself is called homoscedastic variance.
11. The random variable (U) has a normal distribution
• This means the values of $U_i$ (for each $X_i$) have a bell-shaped, symmetrical distribution about their zero mean and constant variance $\sigma^2$, i.e.
• $U_i \sim N(0, \sigma^2)$
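To make these two assumptions concrete, here is a minimal simulation sketch; the parameter values (α = 1, β = 1.75, σ² = 0.5) and the number of replications are illustrative assumptions, not taken from the notes. It draws disturbances $U_i \sim N(0, \sigma^2)$ for each fixed $X_i$ and checks that their sample mean is close to zero and their sample variance is close to the common $\sigma^2$.

import numpy as np

rng = np.random.default_rng(0)

alpha, beta, sigma2 = 1.0, 1.75, 0.5      # illustrative parameter values (assumed)
X = np.array([2.0, 3.0, 1.0, 5.0, 9.0])   # fixed (non-stochastic) regressor values

# Many replications of the disturbances U_i ~ N(0, sigma^2), one column per X_i
U = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(100_000, X.size))
Y = alpha + beta * X + U                  # the classical linear regression model

print("mean of U per X (should be about 0):      ", U.mean(axis=0).round(3))
print("variance of U per X (should be about 0.5):", U.var(axis=0).round(3))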
• $U_i$ is uncorrelated with $X_i$: $E(X_iU_i) - E(X_i)E(U_i) = E(X_iU_i) = X_iE(U_i) = 0$, since $X_i$ is non-stochastic and $E(U_i) = 0$.
15. The dependent variable is normally distributed,
• i.e. $Y_i \sim N(\alpha + \beta X_i,\ \sigma^2)$
• Mean: $E(Y_i) = E(\alpha + \beta X_i + u_i) = \alpha + \beta X_i$, since $E(u_i) = 0$.
• Variance: $\mathrm{Var}(Y_i) = E[Y_i - E(Y_i)]^2 = E[\alpha + \beta X_i + u_i - (\alpha + \beta X_i)]^2 = E(u_i)^2 = \sigma^2$ (since $E(u_i^2) = \sigma^2$).
• Hence $\mathrm{var}(Y_i) = \sigma^2$.
• The shape of the distribution of $Y_i$ is determined by the shape of the distribution of $u_i$, which is normal by assumption. Therefore
$Y_i \sim N(\alpha + \beta X_i,\ \sigma^2)$
• Successive values of the dependent variable are independent, i.e. $\mathrm{Cov}(Y_i, Y_j) = 0$ for $i \neq j$.
• Since $Y_i = \alpha + \beta X_i + U_i$ and $Y_j = \alpha + \beta X_j + U_j$,
$\mathrm{Cov}(Y_i, Y_j) = E\{[Y_i - E(Y_i)][Y_j - E(Y_j)]\} = E[(\alpha + \beta X_i + U_i - \alpha - \beta X_i)(\alpha + \beta X_j + U_j - \alpha - \beta X_j)]$, since $E(u_i) = 0$,
$= E(U_iU_j) = 0$
• Therefore $\mathrm{Cov}(Y_i, Y_j) = 0$.
PROPERTIES OF OLS ESTIMATORS
• The ideal or optimum properties that the OLS estimates possess may be summarized by the well-known Gauss-Markov Theorem.
• Statement of the theorem: "Given the assumptions of the classical linear regression model, the OLS estimators, in the class of linear and unbiased estimators, have the minimum variance, i.e. the OLS estimators are BLUE."
The BLUE Theorem
• BLUE stands for Best, Linear, Unbiased Estimator. An estimator is called BLUE if it is:
• Linear: a linear function of a random variable, such as the dependent variable Y.
• Unbiased: its average or expected value is equal to the true population parameter.
• Minimum variance: it has the smallest variance in the class of linear and unbiased estimators. An unbiased estimator with the least variance is known as an efficient estimator.
• According to the Gauss-Markov theorem, the OLS estimators possess all the BLUE properties. The detailed proofs of these properties are presented below.
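Before the formal proofs, here is an informal illustration (not a proof) of the Gauss-Markov result. The sketch simulates the model $Y_i = \alpha + \beta X_i + U_i$ many times and compares the OLS slope with another linear unbiased slope estimator, the line through the two extreme observations, $(Y_n - Y_1)/(X_n - X_1)$; the true parameter values and the number of replications are assumptions chosen for illustration. Both estimators average close to the true $\beta$, but the OLS slope has the smaller sampling variance.

import numpy as np

rng = np.random.default_rng(1)

alpha, beta, sigma = 1.0, 1.75, 1.0      # illustrative true parameters (assumed)
X = np.array([1.0, 2.0, 3.0, 5.0, 9.0])  # example X values, sorted so X[0], X[-1] are the extremes
x = X - X.mean()

n_sims = 50_000
U = rng.normal(0.0, sigma, size=(n_sims, X.size))
Y = alpha + beta * X + U

# OLS slope for every simulated sample: sum(x_i (Y_i - Y_bar)) / sum(x_i^2)
ols = (Y - Y.mean(axis=1, keepdims=True)) @ x / np.sum(x ** 2)
# Alternative linear unbiased slope: line through the two extreme points
two_point = (Y[:, -1] - Y[:, 0]) / (X[-1] - X[0])

print("mean of OLS slope       :", ols.mean().round(3))        # close to beta
print("mean of two-point slope :", two_point.mean().round(3))  # also close to beta (unbiased)
print("variance of OLS slope       :", ols.var().round(4))     # the smaller of the two
print("variance of two-point slope :", two_point.var().round(4))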
Linearity (for $\hat{\beta}$ and $\hat{\alpha}$)
• $\hat{\beta} = \dfrac{\sum x_i y_i}{\sum x_i^2} = \dfrac{\sum x_i (Y_i - \bar{Y})}{\sum x_i^2} = \dfrac{\sum x_i Y_i - \bar{Y}\sum x_i}{\sum x_i^2}$, but $\sum x_i = \sum (X_i - \bar{X}) = \sum X_i - n\bar{X} = n\bar{X} - n\bar{X} = 0$.
• Therefore $\hat{\beta} = \dfrac{\sum x_i Y_i}{\sum x_i^2}$.
• Now let $k_i = \dfrac{x_i}{\sum x_i^2}$ $(i = 1, 2, \ldots, n)$, so that
$\hat{\beta} = \sum k_i Y_i = k_1Y_1 + k_2Y_2 + k_3Y_3 + \cdots + k_nY_n$
• $\therefore \hat{\beta}$ is linear in $Y$.
• Check yourself question:
• Show that $\hat{\alpha}$ is linear in $Y$. Hint: $\hat{\alpha} = \sum\left(\frac{1}{n} - \bar{X}k_i\right)Y_i$. Derive this relationship between $\hat{\alpha}$ and $Y$.
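The weights $k_i$ can also be checked numerically. The sketch below reuses the five-factory data from the example (everything else is an illustrative assumption) and verifies that $\sum k_iY_i$ reproduces the slope from the deviation formula, and that $\sum(1/n - \bar{X}k_i)Y_i$ reproduces the intercept $\bar{Y} - \hat{\beta}\bar{X}$.

import numpy as np

X = np.array([2.0, 3.0, 1.0, 5.0, 9.0])
Y = np.array([4.0, 7.0, 3.0, 9.0, 17.0])
n = X.size

x = X - X.mean()
k = x / np.sum(x ** 2)                                    # k_i = x_i / sum(x_i^2)

beta_dev = np.sum(x * (Y - Y.mean())) / np.sum(x ** 2)    # deviation formula
beta_lin = np.sum(k * Y)                                  # linear form: sum(k_i Y_i)

alpha_dev = Y.mean() - beta_dev * X.mean()                # alpha_hat = Y_bar - beta_hat X_bar
alpha_lin = np.sum((1.0 / n - X.mean() * k) * Y)          # linear form from the hint

print(np.isclose(beta_dev, beta_lin))    # True: beta_hat is linear in Y
print(np.isclose(alpha_dev, alpha_lin))  # True: alpha_hat is linear in Y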
Unbiasedness:
• In our case, $\hat{\beta}$ and $\hat{\alpha}$ are estimators of the true parameters $\beta$ and $\alpha$. To show that they are unbiased estimators of their respective parameters means to prove that $E(\hat{\beta}) = \beta$ and $E(\hat{\alpha}) = \alpha$.
• First note that $\sum k_iX_i = \dfrac{\sum x_iX_i}{\sum x_i^2} = \dfrac{\sum (X_i - \bar{X})X_i}{\sum x_i^2} = \dfrac{\sum X_i^2 - \bar{X}\sum X_i}{\sum x_i^2} = \dfrac{\sum X_i^2 - n\bar{X}^2}{\sum X_i^2 - n\bar{X}^2} = 1$,
since $\bar{X}\sum X_i = n\bar{X}^2$ and $\sum x_i^2 = \sum X_i^2 - n\bar{X}^2$. Also, $\sum k_i = \dfrac{\sum x_i}{\sum x_i^2} = 0$, because $\sum x_i = 0$.
• For the slope: $\hat{\beta} = \sum k_iY_i = \sum k_i(\alpha + \beta X_i + u_i) = \alpha\sum k_i + \beta\sum k_iX_i + \sum k_iu_i = \beta + \sum k_iu_i$, so $E(\hat{\beta}) = \beta + \sum k_iE(u_i) = \beta$.
• For the intercept, using $\hat{\alpha} = \sum\left(\frac{1}{n} - \bar{X}k_i\right)Y_i$ and $Y_i = \alpha + \beta X_i + u_i$:
$\hat{\alpha} = \alpha + \beta\bar{X} + \frac{1}{n}\sum u_i - \alpha\bar{X}\sum k_i - \beta\bar{X}\sum k_iX_i - \bar{X}\sum k_iu_i$
$= \alpha + \frac{1}{n}\sum u_i - \bar{X}\sum k_iu_i$ (since $\sum k_i = 0$ and $\sum k_iX_i = 1$)
$= \alpha + \sum\left(\frac{1}{n} - \bar{X}k_i\right)u_i$
• Taking expectations: $E(\hat{\alpha}) = \alpha + \frac{1}{n}\sum E(u_i) - \bar{X}\sum k_iE(u_i) = \alpha$.
• Therefore $\hat{\alpha}$ is an unbiased estimator of $\alpha$, and $\hat{\beta}$ is an unbiased estimator of $\beta$.
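The identities used in these proofs are easy to verify numerically. The sketch below (again reusing the example data purely as an illustration) checks that $\sum k_i = 0$, $\sum k_iX_i = 1$, $\sum(1/n - \bar{X}k_i) = 1$ and $\sum(1/n - \bar{X}k_i)X_i = 0$.

import numpy as np

X = np.array([2.0, 3.0, 1.0, 5.0, 9.0])
n = X.size

x = X - X.mean()
k = x / np.sum(x ** 2)              # OLS slope weights k_i
w = 1.0 / n - X.mean() * k          # intercept weights (1/n - X_bar k_i)

print(np.isclose(np.sum(k), 0.0))      # sum k_i     = 0
print(np.isclose(np.sum(k * X), 1.0))  # sum k_i X_i = 1
print(np.isclose(np.sum(w), 1.0))      # sum w_i     = 1
print(np.isclose(np.sum(w * X), 0.0))  # sum w_i X_i = 0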
Minimum variance of $\hat{\beta}$ and $\hat{\alpha}$
• Variance of $\hat{\beta}$: $\mathrm{var}(\hat{\beta}) = \sigma^2\sum k_i^2 = \dfrac{\sigma^2}{\sum x_i^2}$
Variance of $\hat{\alpha}$
• $\mathrm{var}(\hat{\alpha}) = E[\hat{\alpha} - E(\hat{\alpha})]^2 = E(\hat{\alpha} - \alpha)^2$
• $\mathrm{var}(\hat{\alpha}) = E\left[\sum\left(\tfrac{1}{n} - \bar{X}k_i\right)u_i\right]^2 = \sum\left(\tfrac{1}{n} - \bar{X}k_i\right)^2E(u_i^2) = \sigma^2\sum\left(\tfrac{1}{n^2} - \tfrac{2}{n}\bar{X}k_i + \bar{X}^2k_i^2\right) = \sigma^2\left(\tfrac{1}{n} + \bar{X}^2\sum k_i^2\right)$
(using $E(u_i^2) = \sigma^2$, $E(u_iu_j) = 0$ for $i \neq j$, and $\sum k_i = 0$)
$= \sigma^2\left(\dfrac{1}{n} + \dfrac{\bar{X}^2}{\sum x_i^2}\right)$, since $\sum k_i^2 = \dfrac{\sum x_i^2}{(\sum x_i^2)^2} = \dfrac{1}{\sum x_i^2}$.
• Again, $\dfrac{1}{n} + \dfrac{\bar{X}^2}{\sum x_i^2} = \dfrac{\sum x_i^2 + n\bar{X}^2}{n\sum x_i^2} = \dfrac{\sum X_i^2}{n\sum x_i^2}$, since $\sum x_i^2 + n\bar{X}^2 = \sum X_i^2 - n\bar{X}^2 + n\bar{X}^2 = \sum X_i^2$.
• Therefore $\mathrm{var}(\hat{\alpha}) = \sigma^2\dfrac{\sum X_i^2}{n\sum x_i^2}$.
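Both variance formulas can be compared against a simulation. In the sketch below (the true parameter values and $\sigma^2$ are assumptions chosen for illustration; the X values come from the earlier example), the empirical variances of $\hat{\beta}$ and $\hat{\alpha}$ over repeated samples land close to $\sigma^2/\sum x_i^2$ and $\sigma^2\sum X_i^2/(n\sum x_i^2)$.

import numpy as np

rng = np.random.default_rng(2)

alpha, beta, sigma2 = 1.0, 1.75, 0.5        # illustrative values (assumed)
X = np.array([2.0, 3.0, 1.0, 5.0, 9.0])
n = X.size
x = X - X.mean()

# Analytic variances from the formulas above
var_beta_formula = sigma2 / np.sum(x ** 2)
var_alpha_formula = sigma2 * np.sum(X ** 2) / (n * np.sum(x ** 2))

# Monte Carlo: re-estimate alpha and beta over many simulated samples
n_sims = 100_000
U = rng.normal(0.0, np.sqrt(sigma2), size=(n_sims, n))
Y = alpha + beta * X + U
beta_hats = (Y - Y.mean(axis=1, keepdims=True)) @ x / np.sum(x ** 2)
alpha_hats = Y.mean(axis=1) - beta_hats * X.mean()

print("var(beta_hat):  formula =", round(var_beta_formula, 5),
      " simulated =", round(beta_hats.var(), 5))
print("var(alpha_hat): formula =", round(var_alpha_formula, 5),
      " simulated =", round(alpha_hats.var(), 5))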
The variance of the random variable (Ui)
• You may observe that the variances of the OLS estimates involve $\sigma^2$, which is the population variance of the random term $U_i$.
• Since $\sigma^2$ is unknown in practice, it is estimated from the OLS residuals $e_i$ by $\hat{\sigma}^2 = \dfrac{\sum e_i^2}{n - 2}$, which is an unbiased estimator of $\sigma^2$.
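For the five-factory example, this estimate and the implied standard errors can be computed as in the sketch below (a continuation of the earlier illustrative code, not part of the original notes).

import numpy as np

X = np.array([2.0, 3.0, 1.0, 5.0, 9.0])
Y = np.array([4.0, 7.0, 3.0, 9.0, 17.0])
n = X.size

x = X - X.mean()
beta_hat = np.sum(x * (Y - Y.mean())) / np.sum(x ** 2)
alpha_hat = Y.mean() - beta_hat * X.mean()

e = Y - (alpha_hat + beta_hat * X)        # OLS residuals
sigma2_hat = np.sum(e ** 2) / (n - 2)     # sigma_hat^2 = sum(e_i^2) / (n - 2) = 0.5 for this data

var_beta_hat = sigma2_hat / np.sum(x ** 2)
var_alpha_hat = sigma2_hat * np.sum(X ** 2) / (n * np.sum(x ** 2))

print("sigma_hat^2  :", sigma2_hat)
print("se(beta_hat) :", np.sqrt(var_beta_hat))
print("se(alpha_hat):", np.sqrt(var_alpha_hat))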
Show that the OLS estimators have minimum variance:
• Minimum variance of $\hat{\alpha}$
• Minimum variance of $\hat{\beta}$ (a sketch of the standard argument is given below)
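As a hedged outline of the minimum-variance argument for $\hat{\beta}$ (this is the standard textbook comparison with an arbitrary linear unbiased estimator, not reproduced from the original slides): let $\beta^* = \sum w_iY_i$ be any other linear unbiased estimator of $\beta$ and write $w_i = k_i + d_i$.

Unbiasedness of $\beta^*$ requires $\sum w_i = 0$ and $\sum w_iX_i = 1$, hence $\sum d_i = 0$ and $\sum d_iX_i = 0$, and therefore $\sum k_id_i = \dfrac{\sum d_ix_i}{\sum x_i^2} = \dfrac{\sum d_iX_i - \bar{X}\sum d_i}{\sum x_i^2} = 0$.

$\mathrm{var}(\beta^*) = \sigma^2\sum w_i^2 = \sigma^2\sum (k_i + d_i)^2 = \sigma^2\sum k_i^2 + \sigma^2\sum d_i^2 = \mathrm{var}(\hat{\beta}) + \sigma^2\sum d_i^2 \geq \mathrm{var}(\hat{\beta})$

Equality holds only when every $d_i = 0$, i.e. when $\beta^* = \hat{\beta}$, so no linear unbiased estimator has a smaller variance than the OLS slope. The argument for $\hat{\alpha}$ is analogous, with the weights $\frac{1}{n} - \bar{X}k_i$ in place of $k_i$.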