SLRM (Simple Linear Regression Model) note
The stochastic disturbance is the deviation of the observed value from the conditional mean:
\[ \varepsilon_i = Y_i - E(Y \mid X_i) \]
so that
\[ Y_i = E(Y \mid X_i) + \varepsilon_i = \beta_1 + \beta_2 X_i + \varepsilon_i \]
Taking expected value on both sides,
\[ E(Y_i \mid X_i) = E[E(Y \mid X_i)] + E(\varepsilon_i \mid X_i) = E(Y \mid X_i) + E(\varepsilon_i \mid X_i) \]
and since \(E(Y_i \mid X_i) = E(Y \mid X_i)\), it follows that \(E(\varepsilon_i \mid X_i) = 0\).
The sample counterpart of the population regression function may be written as:
\[ \hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i \]
where
\(\hat{Y}_i\) = estimator of \(E(Y \mid X_i)\)
\(\hat{\beta}_1\) = estimator of \(\beta_1\)
\(\hat{\beta}_2\) = estimator of \(\beta_2\)
Note: An estimator, also known as a (sample) statistic, is simply a rule, a formula,
or method that tells how to estimate the population parameter from the information
provided by the sample at hand.
In terms of the sample regression function, the observed Yi can be expressed as:
\[ Y_i = \hat{Y}_i + \hat{\varepsilon}_i \]
And in terms of the population regression function, it can be expressed as:
\[ Y_i = E(Y \mid X_i) + \varepsilon_i \]
In estimated form,
\[ Y_i = \hat{\beta}_1 + \hat{\beta}_2 X_i + \hat{\varepsilon}_i \]
Averaging over the sample,
\[ \bar{Y} = \hat{\beta}_1 + \hat{\beta}_2 \bar{X} \]
Hence the parameters must be chosen in such a way that the estimated line passes through \((\bar{X}, \bar{Y})\).
We apply the least squares criterion, which requires that the values of the parameters are chosen in such a way that \(\sum \hat{\varepsilon}_i^2\) is minimized.
We can write,
\[ \sum \hat{\varepsilon}_i^2 = \sum (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i)^2 \]
Setting the partial derivatives with respect to \(\hat{\beta}_1\) and \(\hat{\beta}_2\) equal to zero,
\[ \frac{\partial \sum \hat{\varepsilon}_i^2}{\partial \hat{\beta}_1} = 0, \qquad \frac{\partial \sum \hat{\varepsilon}_i^2}{\partial \hat{\beta}_2} = 0, \]
yields the normal equations:
\[ \sum Y_i = n\hat{\beta}_1 + \hat{\beta}_2 \sum X_i \]
\[ \sum X_i Y_i = \hat{\beta}_1 \sum X_i + \hat{\beta}_2 \sum X_i^2 \]
where \(\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i\) represents the estimated or predicted values of \(Y_i\).
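As a quick numerical illustration, the sketch below solves the two normal equations for a small hypothetical data set and compares the result with a generic least-squares routine; the data values are made up for illustration and are not from the note.

```python
# Sketch: solve the two normal equations on hypothetical data and cross-check
# the answer with numpy's polynomial least-squares fit.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical regressor values
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # hypothetical observations

n = len(X)
# Normal equations:  sum(Y)   = n*b1      + b2*sum(X)
#                    sum(X*Y) = b1*sum(X) + b2*sum(X**2)
A = np.array([[n, X.sum()],
              [X.sum(), (X**2).sum()]])
b = np.array([Y.sum(), (X * Y).sum()])
b1_hat, b2_hat = np.linalg.solve(A, b)

b2_check, b1_check = np.polyfit(X, Y, 1)   # returns slope first, then intercept
print(b1_hat, b2_hat)                      # from the normal equations
print(b1_check, b2_check)                  # same values from np.polyfit
print(Y.mean() - (b1_hat + b2_hat * X.mean()))  # ~0: line passes through (X_bar, Y_bar)
```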
The residuals are uncorrelated with \(X_i\); that is, \(\operatorname{cov}(X_i, \hat{\varepsilon}_i) = 0\).
Proof:
By definition,
\[ \operatorname{cov}(X_i, \hat{\varepsilon}_i) = \frac{1}{n} \sum (X_i - \bar{X})(\hat{\varepsilon}_i - \bar{\hat{\varepsilon}}) \]
\[ = \frac{1}{n} \sum (X_i - \bar{X})\hat{\varepsilon}_i \quad [\because \bar{\hat{\varepsilon}} = 0] \]
\[ = \frac{1}{n} \sum X_i \hat{\varepsilon}_i - \bar{X} \frac{1}{n} \sum \hat{\varepsilon}_i \]
\[ = \frac{1}{n} \sum X_i \hat{\varepsilon}_i \quad [\because \sum \hat{\varepsilon}_i = 0] \]
Now the condition \(\partial \sum \hat{\varepsilon}_i^2 / \partial \hat{\beta}_2 = 0\) implies:
\[ \frac{\partial}{\partial \hat{\beta}_2} \sum (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i)^2 = -2 \sum X_i (Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i) = -2 \sum X_i \hat{\varepsilon}_i = 0 \]
so \(\sum X_i \hat{\varepsilon}_i = 0\), and hence \(\operatorname{cov}(X_i, \hat{\varepsilon}_i) = 0\).
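These first-order conditions are easy to check numerically: for any OLS fit the residuals sum to zero and are orthogonal to \(X\). A minimal sketch on hypothetical data (all numbers are illustrative):

```python
# Sketch: confirm sum(e_hat) = 0 and sum(X*e_hat) = 0 for an OLS fit.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b2 = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean())**2).sum()
b1 = Y.mean() - b2 * X.mean()
e_hat = Y - (b1 + b2 * X)                 # residuals

print(e_hat.sum())                        # ~0 (first-order condition for b1)
print((X * e_hat).sum())                  # ~0 (first-order condition for b2)
print(np.cov(X, e_hat, bias=True)[0, 1])  # sample covariance ~0
```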
3. The estimated coefficients \(\hat{\beta}_1\) and \(\hat{\beta}_2\) may be computed using the following formulae:
\[ \hat{\beta}_1 = \bar{Y} - \hat{\beta}_2 \bar{X} \quad (1) \]
\[ \hat{\beta}_2 = \frac{\sum x_i y_i}{\sum x_i^2} \quad (2) \]
where
\[ x_i = X_i - \bar{X}, \qquad y_i = Y_i - \bar{Y} \]
(1) is a rearrangement of the first normal equation.
(2) follows from substitution of (1) into the second normal equation. This is shown below:
\[ \sum X_i Y_i = (\bar{Y} - \hat{\beta}_2 \bar{X}) \sum X_i + \hat{\beta}_2 \sum X_i^2 \]
\[ \sum X_i Y_i = \bar{Y} \sum X_i - \hat{\beta}_2 \bar{X} \sum X_i + \hat{\beta}_2 \sum X_i^2 \]
\[ \hat{\beta}_2 \left[ \sum X_i^2 - \bar{X} \sum X_i \right] = \sum X_i Y_i - \bar{Y} \sum X_i \]
\[ \hat{\beta}_2 \left[ \sum X_i^2 - \frac{1}{n} \left( \sum X_i \right)^2 \right] = \sum X_i Y_i - \frac{1}{n} \sum X_i \sum Y_i \]
\[ \hat{\beta}_2 \sum x_i^2 = \sum x_i y_i \]
\[ \hat{\beta}_2 = \frac{\sum x_i y_i}{\sum x_i^2} \]
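A short sketch applying formulae (1) and (2) in deviation form to hypothetical data (the numbers are illustrative only):

```python
# Sketch: compute the estimates with the deviation-form formulas (1) and (2).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x = X - X.mean()                          # x_i = X_i - X_bar
y = Y - Y.mean()                          # y_i = Y_i - Y_bar

b2_hat = (x * y).sum() / (x**2).sum()     # formula (2)
b1_hat = Y.mean() - b2_hat * X.mean()     # formula (1)
print(b1_hat, b2_hat)
```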
Alternatively,
\[ \hat{\beta}_2 = \frac{\sum x_i y_i / n}{\sum x_i^2 / n} = \frac{\left[ \sum (X_i - \bar{X})(Y_i - \bar{Y}) \right] / n}{\sum (X_i - \bar{X})^2 / n} = \frac{\operatorname{cov}(X, Y)}{\operatorname{var}(X)} \]
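The covariance-over-variance form gives the same slope, provided both moments use the same \(1/n\) convention so the \(n\)'s cancel. A minimal check on the same kind of hypothetical data:

```python
# Sketch: slope as sample covariance over sample variance (1/n convention in both).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b2_hat = np.cov(X, Y, bias=True)[0, 1] / np.var(X)   # cov(X, Y) / var(X)
print(b2_hat)
```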
In deviation form, \(y_i = \hat{y}_i + \hat{\varepsilon}_i\), so
\[ \sum y_i^2 = \sum (\hat{y}_i + \hat{\varepsilon}_i)^2 = \sum \hat{y}_i^2 + \sum \hat{\varepsilon}_i^2 + 2 \sum \hat{y}_i \hat{\varepsilon}_i = \sum \hat{y}_i^2 + \sum \hat{\varepsilon}_i^2 \quad [\because \operatorname{cov}(\hat{y}_i, \hat{\varepsilon}_i) = 0] \]
\[ \sum y_i^2 : \text{TSS}; \qquad \sum \hat{y}_i^2 : \text{ESS}; \qquad \sum \hat{\varepsilon}_i^2 : \text{RSS} \]
\[ \text{TSS} = \text{ESS} + \text{RSS} \]
\[ \text{ESS} = \text{TSS} - \text{RSS} = \sum y_i^2 - \sum \hat{\varepsilon}_i^2 \]
Also, since \(\hat{y}_i = \hat{\beta}_2 x_i\),
\[ \text{ESS} = \sum \hat{y}_i^2 = \sum (\hat{\beta}_2 x_i)^2 = \hat{\beta}_2^2 \sum x_i^2 = \hat{\beta}_2 \frac{\sum x_i y_i}{\sum x_i^2} \sum x_i^2 \]
Therefore, \(\text{ESS} = \hat{\beta}_2 \sum x_i y_i\).
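The decomposition can be verified numerically. The sketch below checks \(\text{TSS} = \text{ESS} + \text{RSS}\) and \(\text{ESS} = \hat{\beta}_2 \sum x_i y_i\) on hypothetical data (illustrative numbers only):

```python
# Sketch: check TSS = ESS + RSS and ESS = b2_hat * sum(x*y).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x, y = X - X.mean(), Y - Y.mean()
b2 = (x * y).sum() / (x**2).sum()
b1 = Y.mean() - b2 * X.mean()
Y_hat = b1 + b2 * X
e_hat = Y - Y_hat

TSS = (y**2).sum()
ESS = ((Y_hat - Y.mean())**2).sum()
RSS = (e_hat**2).sum()

print(TSS, ESS + RSS)            # equal (up to rounding)
print(ESS, b2 * (x * y).sum())   # equal (up to rounding)
```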
Properties of Estimators
Unbiasedness:
An estimator \(\hat{\theta}\) is said to be an unbiased estimator of \(\theta\) if its mean or expected value is equal to the value of the true population parameter \(\theta\), that is, \(E(\hat{\theta}) = \theta\).
This means that if repeated samples of a given size are drawn and \(\hat{\theta}\) is computed for each sample, the average of such \(\hat{\theta}\) values would be equal to \(\theta\).
If \(E(\hat{\theta}) \neq \theta\), or \(E(\hat{\theta}) - \theta \neq 0\), then \(\hat{\theta}\) is said to be biased, and the extent of the bias of \(\hat{\theta}\) is measured by \(E(\hat{\theta}) - \theta\).
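Unbiasedness can be illustrated by simulation: draw repeated samples from a known model and average the slope estimates. This is only a sketch; the data-generating process (\(Y = 1 + 2X + \varepsilon\) with normal errors) and all settings are assumed for illustration and are not part of the note.

```python
# Sketch: Monte Carlo illustration of E(b2_hat) = beta2 for an assumed model.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1, 10, 30)                 # fixed regressor values
beta1, beta2, sigma = 1.0, 2.0, 1.5        # assumed true parameters
reps = 5000

b2_draws = np.empty(reps)
for r in range(reps):
    Y = beta1 + beta2 * X + rng.normal(0, sigma, size=X.size)
    x, y = X - X.mean(), Y - Y.mean()
    b2_draws[r] = (x * y).sum() / (x**2).sum()

print(b2_draws.mean())    # ~2.0: the average of the estimates is close to beta2
```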
Minimum variance:
An estimator \(\hat{\theta}\) is a minimum variance estimator of \(\theta\) if its variance is less than the variance of any other estimator, say \(\theta^*\). Thus, when \(\operatorname{Var}(\hat{\theta}) < \operatorname{Var}(\theta^*)\), \(\hat{\theta}\) is called the minimum variance or best estimator of \(\theta\).
Efficiency:
\(\hat{\theta}\) is an efficient estimator if the following two conditions are satisfied together:
i. \(\hat{\theta}\) is unbiased, and
ii. \(\operatorname{Var}(\hat{\theta}) < \operatorname{Var}(\theta^*)\).
When one estimator has a larger bias but a smaller variance than another, it is intuitively plausible to consider a trade-off between the two characteristics; this trade-off is captured by the mean squared error (MSE):
\[ \operatorname{MSE}(\hat{\theta}) = E[\hat{\theta} - \theta]^2 \]
\[ = E[\{\hat{\theta} - E(\hat{\theta})\} + \{E(\hat{\theta}) - \theta\}]^2 \]
\[ = E\{\hat{\theta} - E(\hat{\theta})\}^2 + E\{E(\hat{\theta}) - \theta\}^2 + 2E[\{\hat{\theta} - E(\hat{\theta})\}\{E(\hat{\theta}) - \theta\}] \]
\[ = \operatorname{Var}(\hat{\theta}) + (\operatorname{Bias}\,\hat{\theta})^2 \]
(The cross term vanishes because \(E(\hat{\theta}) - \theta\) is a constant and \(E[\hat{\theta} - E(\hat{\theta})] = 0\).)
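The identity can be checked by simulation using a deliberately biased (shrunken) slope estimator, \(0.9\hat{\beta}_2\); both the estimator and the data-generating process below are assumed purely for illustration.

```python
# Sketch: MSE = variance + bias^2, checked on simulated draws of a biased estimator.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(1, 10, 30)
beta2, sigma = 2.0, 1.5
reps = 10000

b2_draws = np.empty(reps)
for r in range(reps):
    Y = 1.0 + beta2 * X + rng.normal(0, sigma, size=X.size)
    x, y = X - X.mean(), Y - Y.mean()
    b2_draws[r] = 0.9 * (x * y).sum() / (x**2).sum()   # deliberately biased estimator

mse = np.mean((b2_draws - beta2)**2)
var = b2_draws.var()
bias = b2_draws.mean() - beta2
print(mse, var + bias**2)     # the two numbers agree (the identity also holds for sample moments)
```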
Asymptotic unbiasedness:
\[ \lim_{n \to \infty} E(\hat{\theta}) = \theta \]
This means that the estimator \(\hat{\theta}\), which is otherwise biased, becomes
unbiased as the sample size approaches infinity. It is to be noted that if an
estimator is unbiased, it is also asymptotically unbiased, but the reverse is
not necessarily true.
Consistency:
Whether or not an estimator is consistent is understood by looking at the behavior of its bias and variance as the sample size approaches infinity. \(\hat{\theta}\) is a consistent estimator if
\[ \lim_{n \to \infty} [E(\hat{\theta}) - \theta] = 0 \quad \text{and} \quad \lim_{n \to \infty} \operatorname{Var}(\hat{\theta}) = 0 \]
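Consistency of the slope estimator can be illustrated by watching its sampling variance shrink as \(n\) grows; the simulation settings below are assumed for illustration only.

```python
# Sketch: the sampling variance of b2_hat falls toward 0 as n increases.
import numpy as np

rng = np.random.default_rng(2)
beta1, beta2, sigma = 1.0, 2.0, 1.5

for n in (10, 100, 1000):
    X = np.linspace(1, 10, n)              # regressor spread held fixed as n grows
    draws = []
    for _ in range(2000):
        Y = beta1 + beta2 * X + rng.normal(0, sigma, size=n)
        x, y = X - X.mean(), Y - Y.mean()
        draws.append((x * y).sum() / (x**2).sum())
    print(n, np.var(draws))                # variance shrinks with n
```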
The ordinary least squares estimators are BLUE (best linear unbiased estimators), as shown below.
Proof:
Unbiasedness of \(\hat{\beta}_2\)
\[ \hat{\beta}_2 = \frac{\sum x_i y_i}{\sum x_i^2} = \frac{\sum x_i (Y_i - \bar{Y})}{\sum x_i^2} = \frac{\sum x_i Y_i - \bar{Y} \sum x_i}{\sum x_i^2} = \frac{\sum x_i Y_i}{\sum x_i^2} \quad [\because \sum x_i = 0] \]
\[ \hat{\beta}_2 = \sum w_i Y_i \qquad \text{where } w_i = \frac{x_i}{\sum x_i^2} \]
It follows,
\[ \sum w_i = \frac{\sum x_i}{\sum x_i^2} = 0 \]
\[ \sum w_i X_i = \frac{\sum x_i X_i}{\sum x_i^2} = \frac{\sum (X_i - \bar{X}) X_i}{\sum (X_i - \bar{X})^2} = 1 \]
\[ \sum w_i^2 = \frac{\sum x_i^2}{(\sum x_i^2)^2} = \frac{1}{\sum x_i^2} \]
Now we have,
\[ \hat{\beta}_2 = \sum w_i Y_i = \sum w_i (\beta_1 + \beta_2 X_i + \varepsilon_i) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i + \sum w_i \varepsilon_i = \beta_2 + \sum w_i \varepsilon_i \]
Taking expectations (with the \(X_i\), and hence the \(w_i\), treated as fixed),
\[ E(\hat{\beta}_2) = \beta_2 + \sum w_i E(\varepsilon_i) = \beta_2 \]
so \(\hat{\beta}_2\) is unbiased.
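The properties of the weights \(w_i\) used above are easy to verify numerically; a minimal sketch on hypothetical \(X\) values:

```python
# Sketch: verify sum(w) = 0, sum(w*X) = 1, sum(w**2) = 1/sum(x**2) for w_i = x_i/sum(x_i^2).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = X - X.mean()
w = x / (x**2).sum()

print(w.sum())                          # sum(w_i)     = 0
print((w * X).sum())                    # sum(w_i X_i) = 1
print((w**2).sum(), 1 / (x**2).sum())   # sum(w_i^2)   = 1 / sum(x_i^2)
```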
Linearity of \(\hat{\beta}_2\)
We know,
\[ \hat{\beta}_2 = \sum w_i Y_i \]
Since the weights \(w_i\) depend only on the (fixed) \(X_i\), \(\hat{\beta}_2\) is a linear function of the \(Y_i\).
Minimum variance of \(\hat{\beta}_2\)
\[ \operatorname{Var}(\hat{\beta}_2) = \sigma^2 \sum w_i^2 = \sigma^2 \frac{1}{\sum x_i^2} = \frac{\sigma^2}{\sum x_i^2} \]
Let \(\beta_2^* = \sum c_i Y_i\) be any other linear unbiased estimator of \(\beta_2\). Then
\[ \operatorname{Var}(\beta_2^*) = \sigma^2 \sum c_i^2 \]
Note that,
\[ \sum w_i (c_i - w_i) = 0 \]
(this follows because unbiasedness of \(\beta_2^*\) requires \(\sum c_i = 0\) and \(\sum c_i X_i = 1\)).
Thus,
\[ \operatorname{Var}(\beta_2^*) = \sigma^2 \left[ \sum w_i^2 + \sum (c_i - w_i)^2 \right] = \operatorname{Var}(\hat{\beta}_2) + \sigma^2 \sum (c_i - w_i)^2 \geq \operatorname{Var}(\hat{\beta}_2) \]
with equality only when \(c_i = w_i\). Hence no other linear unbiased estimator has a smaller variance than \(\hat{\beta}_2\).
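The "best" part of the argument can be illustrated numerically: any other set of weights \(c_i\) satisfying \(\sum c_i = 0\) and \(\sum c_i X_i = 1\) gives a linear unbiased estimator whose variance \(\sigma^2 \sum c_i^2\) exceeds \(\sigma^2 \sum w_i^2\). The competing weights below are chosen arbitrarily for illustration.

```python
# Sketch: compare sigma^2 * sum(weights^2) for the OLS weights w_i against
# competing weights c_i that also satisfy sum(c) = 0 and sum(c*X) = 1.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = X - X.mean()
w = x / (x**2).sum()                       # OLS weights

c = np.array([-0.5, 0.0, 0.0, 0.0, 0.5])   # arbitrary weights using only the endpoints
c = c / (c * X).sum()                      # rescale so sum(c*X) = 1; sum(c) stays 0

sigma2 = 1.0                               # illustrative error variance
print(sigma2 * (w**2).sum())               # variance of the OLS slope
print(sigma2 * (c**2).sum())               # larger: variance of the competing estimator
```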