Evaluating Estimators
Let $\hat{\Theta} = h(X_1, X_2, \cdots, X_n)$ be a point estimator for $\theta$. The bias of point estimator $\hat{\Theta}$ is defined by
$$B(\hat{\Theta}) = E[\hat{\Theta}] - \theta.$$
In general, we would like to have a bias that is close to 0, indicating that, on average, $\hat{\Theta}$ is close to $\theta$. It is worth noting that $B(\hat{\Theta})$ might depend on the actual value of $\theta$. In other words, you might have an estimator for which $B(\hat{\Theta})$ is small for some values of $\theta$ and large for some other values of $\theta$. A desirable scenario is when $B(\hat{\Theta}) = 0$, i.e., $E[\hat{\Theta}] = \theta$, for all values of $\theta$. In this case, we say that $\hat{\Theta}$ is an unbiased estimator of $\theta$.
Let $\hat{\Theta} = h(X_1, X_2, \cdots, X_n)$ be a point estimator for a parameter $\theta$. We say that $\hat{\Theta}$ is an unbiased estimator of $\theta$ if
$$B(\hat{\Theta}) = 0, \quad \textrm{for all possible values of } \theta.$$
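Since the bias is an expectation over all possible samples, one informal way to check it is Monte Carlo simulation. Below is a minimal sketch (not from the original text): the exponential sampling distribution, the sample size, and the helper name `estimate_bias` are all illustrative choices.

```python
import numpy as np

def estimate_bias(estimator, theta, n, num_trials=100_000, seed=0):
    """Monte Carlo approximation of B(Theta_hat) = E[Theta_hat] - theta.

    Illustrative setup: each trial draws a sample of size n from an
    exponential distribution with mean theta, applies the estimator,
    and the average estimate is compared against theta.
    """
    rng = np.random.default_rng(seed)
    samples = rng.exponential(scale=theta, size=(num_trials, n))
    estimates = np.apply_along_axis(estimator, 1, samples)
    return estimates.mean() - theta

# The sample mean reports a bias close to 0:
print(estimate_bias(np.mean, theta=2.0, n=10))  # ~ 0.0
# max(X_1, ..., X_n) systematically overshoots the mean:
print(estimate_bias(np.max, theta=2.0, n=10))   # clearly positive
```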
Example 8.2
Let $X_1, X_2, \cdots, X_n$ be a random sample. Show that the sample mean
$$\hat{\Theta} = \bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}$$
is an unbiased estimator of $\theta = EX_i$.
Solution
We have
\begin{align}
B(\hat{\Theta}) &= E[\hat{\Theta}] - \theta \\
&= E[\bar{X}] - \theta \quad (\textrm{since } E[\bar{X}] = EX_i \textrm{ by linearity of expectation}) \\
&= EX_i - \theta \\
&= 0.
\end{align}
Note that an unbiased estimator is not necessarily a good estimator. In the above example, if we choose $\hat{\Theta}_1 = X_1$, then $\hat{\Theta}_1$ is also an unbiased estimator of $\theta$:
\begin{align}
B(\hat{\Theta}_1) &= E[\hat{\Theta}_1] - \theta \\
&= EX_1 - \theta \\
&= 0.
\end{align}
Nevertheless, we suspect that $\hat{\Theta}_1$ is probably not as good as the sample mean $\bar{X}$. Therefore, we need other measures to decide whether an estimator is a "good" estimator. A very common measure is the mean squared error, defined by $E[(\hat{\Theta} - \theta)^2]$.
The mean squared error (MSE) of a point estimator $\hat{\Theta}$, shown by $MSE(\hat{\Theta})$, is defined as
$$MSE(\hat{\Theta}) = E[(\hat{\Theta} - \theta)^2].$$
Note that $\hat{\Theta} - \theta$ is the error that we make when we estimate $\theta$ by $\hat{\Theta}$. Thus, the MSE is a measure of the distance between $\hat{\Theta}$ and $\theta$, and a smaller MSE is generally indicative of a better estimator.
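Like the bias, the MSE can be approximated by simulation. The following companion sketch to `estimate_bias` above makes the same illustrative assumptions (exponential data with mean $\theta$; the helper name `estimate_mse` is ours):

```python
import numpy as np

def estimate_mse(estimator, theta, n, num_trials=100_000, seed=0):
    """Monte Carlo approximation of MSE(Theta_hat) = E[(Theta_hat - theta)^2],
    using the same illustrative exponential(mean=theta) setup as estimate_bias."""
    rng = np.random.default_rng(seed)
    samples = rng.exponential(scale=theta, size=(num_trials, n))
    estimates = np.apply_along_axis(estimator, 1, samples)
    return np.mean((estimates - theta) ** 2)
```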
Example 8.3
Let $X_1, X_2, \cdots, X_n$ be a random sample from a distribution with mean $EX_i = \theta$ and variance $\mathrm{Var}(X_i) = \sigma^2$. Consider the following two estimators for $\theta$:
1. $\hat{\Theta}_1 = X_1$.
2. $\hat{\Theta}_2 = \bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}$.
Find $MSE(\hat{\Theta}_1)$ and $MSE(\hat{\Theta}_2)$, and show that for $n > 1$, we have
$$MSE(\hat{\Theta}_1) > MSE(\hat{\Theta}_2).$$
Solution
We have
\begin{align}
MSE(\hat{\Theta}_1) &= E[(\hat{\Theta}_1 - \theta)^2] \\
&= E[(X_1 - EX_1)^2] \\
&= \mathrm{Var}(X_1) \\
&= \sigma^2.
\end{align}
To find $MSE(\hat{\Theta}_2)$, we can write
\begin{align}
MSE(\hat{\Theta}_2) &= E[(\hat{\Theta}_2 - \theta)^2] \\
&= E[(\bar{X} - \theta)^2] \\
&= \mathrm{Var}(\bar{X} - \theta) + (E[\bar{X} - \theta])^2.
\end{align}
The last equality results from $EY^2 = \mathrm{Var}(Y) + (EY)^2$, where $Y = \bar{X} - \theta$. Now, note that
$$\mathrm{Var}(\bar{X} - \theta) = \mathrm{Var}(\bar{X})$$
since $\theta$ is a constant. Also, $E[\bar{X} - \theta] = 0$. Thus, we conclude
$$MSE(\hat{\Theta}_2) = \mathrm{Var}(\bar{X}) = \frac{\sigma^2}{n}.$$
Thus, for $n > 1$,
$$MSE(\hat{\Theta}_1) > MSE(\hat{\Theta}_2).$$
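We can corroborate this numerically, reusing the hypothetical `estimate_mse` sketch from above. For exponential data with mean $\theta = 2$ we have $\sigma^2 = \theta^2 = 4$, so with $n = 25$ the formulas predict $MSE(\hat{\Theta}_1) \approx 4$ and $MSE(\hat{\Theta}_2) \approx 4/25 = 0.16$:

```python
import numpy as np  # assumes the estimate_mse sketch above is in scope

theta, n = 2.0, 25  # Exponential(mean=2) has sigma^2 = theta^2 = 4

mse_1 = estimate_mse(lambda x: x[0], theta, n)  # Theta_hat_1 = X_1
mse_2 = estimate_mse(np.mean, theta, n)         # Theta_hat_2 = sample mean

print(mse_1)  # ~ sigma^2     = 4.0
print(mse_2)  # ~ sigma^2 / n = 0.16
```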
From the above example, we conclude that although both $\hat{\Theta}_1$ and $\hat{\Theta}_2$ are unbiased estimators of the mean, $\hat{\Theta}_2 = \bar{X}$ is probably a better estimator since it has a smaller MSE. In general, if $\hat{\Theta}$ is a point estimator for $\theta$, we can write
\begin{align}
MSE(\hat{\Theta}) &= E[(\hat{\Theta} - \theta)^2] \\
&= \mathrm{Var}(\hat{\Theta} - \theta) + (E[\hat{\Theta} - \theta])^2 \\
&= \mathrm{Var}(\hat{\Theta}) + B(\hat{\Theta})^2.
\end{align}
If $\hat{\Theta}$ is a point estimator for $\theta$,
$$MSE(\hat{\Theta}) = \mathrm{Var}(\hat{\Theta}) + B(\hat{\Theta})^2,$$
where $B(\hat{\Theta}) = E[\hat{\Theta}] - \theta$ is the bias of $\hat{\Theta}$.
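This decomposition is easy to check numerically. In the sketch below (illustrative assumptions as before), we deliberately bias the estimator by shrinking the sample mean, $\hat{\Theta} = 0.8\,\bar{X}$, and confirm that the simulated MSE equals the simulated variance plus the squared bias:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 2.0, 25, 200_000
samples = rng.exponential(scale=theta, size=(trials, n))

# A deliberately biased estimator: shrink the sample mean toward 0.
estimates = 0.8 * samples.mean(axis=1)

mse = np.mean((estimates - theta) ** 2)
decomposed = estimates.var() + (estimates.mean() - theta) ** 2  # Var + bias^2
print(mse, decomposed)  # agree up to floating-point error
```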
The last property that we discuss for point estimators is consistency. Loosely speaking, we say that an estimator is consistent if, as the sample size $n$ gets larger, $\hat{\Theta}$ converges to the real value of $\theta$. More precisely, we have the following definition:
Let $\hat{\Theta}_1, \hat{\Theta}_2, \cdots, \hat{\Theta}_n, \cdots$ be a sequence of point estimators of $\theta$. We say that $\hat{\Theta}_n$ is a consistent estimator of $\theta$ if
$$\lim_{n \to \infty} P(|\hat{\Theta}_n - \theta| \geq \epsilon) = 0, \quad \textrm{for all } \epsilon > 0.$$
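To get a feel for this definition, one can estimate $P(|\bar{X} - \theta| \geq \epsilon)$ by simulation for increasing $n$ and watch it shrink. A minimal sketch, again assuming exponential data with mean $\theta$ (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, eps, trials = 2.0, 0.5, 10_000

for n in [10, 100, 1000]:
    x_bar = rng.exponential(scale=theta, size=(trials, n)).mean(axis=1)
    print(n, np.mean(np.abs(x_bar - theta) >= eps))  # shrinks toward 0
```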
Example 8.4
Let $X_1, X_2, \cdots, X_n$ be a random sample with mean $EX_i = \theta$ and variance $\mathrm{Var}(X_i) = \sigma^2$. Show that $\hat{\Theta}_n = \bar{X}$ is a consistent estimator of $\theta$.
Solution
We need to show that
$$\lim_{n \to \infty} P(|\bar{X} - \theta| \geq \epsilon) = 0, \quad \textrm{for all } \epsilon > 0.$$
But this is true because of the weak law of large numbers. In particular, we can use Chebyshev's
inequality to write
$$P(|\bar{X} - \theta| \geq \epsilon) \leq \frac{\mathrm{Var}(\bar{X})}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2},$$
which goes to 0 as $n \to \infty$.
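It is instructive to place the simulated probability next to the Chebyshev bound $\frac{\sigma^2}{n\epsilon^2}$; both go to 0, but the bound is typically far more conservative. A small sketch under the same illustrative exponential assumptions (where $\sigma^2 = \theta^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, eps = 2.0, 0.5  # for Exponential(mean=2): sigma^2 = theta^2 = 4

for n in [10, 100, 1000]:
    x_bar = rng.exponential(scale=theta, size=(10_000, n)).mean(axis=1)
    empirical = np.mean(np.abs(x_bar - theta) >= eps)
    bound = theta**2 / (n * eps**2)  # Chebyshev: sigma^2 / (n * eps^2)
    print(n, empirical, bound)       # the bound holds but is conservative
```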
We could also show the consistency of $\hat{\Theta}_n = \bar{X}$ by looking at the MSE. As we found previously, the MSE of $\hat{\Theta}_n = \bar{X}$ is given by
$$MSE(\hat{\Theta}_n) = \frac{\sigma^2}{n}.$$
Thus, $MSE(\hat{\Theta}_n)$ goes to 0 as $n \to \infty$. From this, we can conclude that $\hat{\Theta}_n = \bar{X}$ is a consistent estimator for $\theta$. In fact, we can state the following theorem:
Theorem 8.2
Let $\hat{\Theta}_1, \hat{\Theta}_2, \cdots$ be a sequence of point estimators of $\theta$. If
$$\lim_{n \to \infty} MSE(\hat{\Theta}_n) = 0,$$
then $\hat{\Theta}_n$ is a consistent estimator of $\theta$.
Proof
We can write
\begin{align}
P(|\hat{\Theta}_n - \theta| \geq \epsilon) &= P(|\hat{\Theta}_n - \theta|^2 \geq \epsilon^2) \\
&\leq \frac{E[|\hat{\Theta}_n - \theta|^2]}{\epsilon^2} \quad (\textrm{by Markov's inequality}) \\
&= \frac{MSE(\hat{\Theta}_n)}{\epsilon^2},
\end{align}
which goes to 0 as $n \to \infty$ by the assumption of the theorem. Thus, $\hat{\Theta}_n$ is a consistent estimator of $\theta$.
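Note that Theorem 8.2 does not require unbiasedness. As a closing illustration (our own example, not from the text), consider the biased variance estimator $\hat{\Theta}_n = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2$ for $\theta = \sigma^2$: its bias is $-\sigma^2/n$, yet its simulated MSE shrinks toward 0, so the theorem still gives consistency:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0  # the parameter being estimated: theta = sigma^2

for n in [10, 100, 1000]:
    samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(10_000, n))
    estimates = samples.var(axis=1)  # ddof=0: biased, E = (n-1)/n * sigma^2
    print(n, np.mean((estimates - sigma2) ** 2))  # MSE -> 0: consistent
```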