
MTL390: Statistical Methods

Instructor: Dr. Biplab Paul


February 11, 2025

Lecture 15

Theory of Estimation (Cont.)

Unbiased Estimation
Let {Fθ : θ ∈ Θ}, where Θ ⊂ Rk, be a nonempty family of probability distributions. Let
X = (X1 , X2 , . . . , Xn ) be a random vector with distribution function Fθ and sample space X .
Let ψ : Θ → R be a real-valued parametric function. A (Borel-measurable) statistic T : X → R
is said to be unbiased for ψ if

    Eθ [T (X)] = ψ(θ) for all θ ∈ Θ,    (1)

provided that Eθ |T (X)| < ∞ for all θ ∈ Θ. A parametric function ψ(·) is said to be
estimable if there exists an unbiased estimator T satisfying (1). An estimator that is not
unbiased is called biased, and the function

    b(T, ψ) = Eθ [T (X)] − ψ(θ)

is called the bias of T . In most applications, we consider Θ ⊂ R and ψ(θ) = θ, and
X1 , X2 , . . . , Xn are iid random variables.
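The bias b(T, ψ) can be approximated numerically by simulation. The following Python sketch (not part of the lecture; the normal model, the values of µ, σ², n, and the random seed are illustrative choices) estimates Eθ [T (X)] by Monte Carlo for two common variance estimators and reports their estimated bias.

# Illustrative Monte Carlo estimate of the bias b(T, psi) of two variance estimators.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, reps = 1.0, 4.0, 10, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
t_biased = x.var(axis=1)            # divides by n
t_unbiased = x.var(axis=1, ddof=1)  # divides by n - 1

# Empirical bias b(T, psi) = E_theta[T(X)] - psi(theta), with psi(theta) = sigma^2.
print("bias of 1/n estimator    :", t_biased.mean() - sigma2)    # close to -sigma^2/n = -0.4
print("bias of 1/(n-1) estimator:", t_unbiased.mean() - sigma2)  # close to 0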
Remarks

(a) If T is unbiased for θ, g(T ) is not, in general, an unbiased estimator of g(θ) unless
g is a linear function.

(b) Unbiased estimators do not always exist. For example, suppose we observe a single
sample X from b(1, p), and we wish to estimate ψ(p) = p². For an estimator T to be
unbiased for p², we must have

    Ep [T ] = p² = p T (1) + (1 − p) T (0), 0 ≤ p ≤ 1,

that is,

    p² = p{T (1) − T (0)} + T (0)

must hold for all p in the interval [0, 1], which is impossible, since a quadratic in p
cannot agree with a linear function of p on the whole interval.

(c) Sometimes an unbiased estimator may be absurd. Let X ∼ Poisson(λ) and define
ψ(λ) = e−3λ. We show that T (X) = (−2)X is an unbiased estimator for ψ(λ). Indeed,

    Eλ [T (X)] = \sum_{x=0}^{∞} (−2)^x e^{−λ} λ^x / x! = e^{−λ} e^{−2λ} = e^{−3λ} = ψ(λ).

However, T (x) = (−2)^x is positive if x is even and negative if x is odd, which is
absurd since ψ(λ) > 0.
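A quick numerical check of the identity Eλ [(−2)^X] = e−3λ (an illustrative Python sketch; the value of λ and the truncation point of the series are arbitrary):

# Check E_lambda[(-2)^X] = e^{-3 lambda} for X ~ Poisson(lambda) by truncating the series.
import math

lam = 1.3
series = sum((-2.0) ** x * math.exp(-lam) * lam ** x / math.factorial(x) for x in range(100))
print(series, math.exp(-3 * lam))  # the two numbers agree to high precision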
Let θ0 ∈ Θ and let U(θ0 ) be the class of all unbiased estimators T of ψ(θ0 ) such that
Eθ0 [T 2 ] < ∞. Then, T0 ∈ U(θ0 ) is called a locally minimum variance unbiased
estimator (LMVUE) of ψ(θ0 ) if
Eθ0 [(T0 − ψ(θ0 ))2 ] ≤ Eθ0 [(T − ψ(θ0 ))2 ]
holds for all T ∈ U(θ0 ).
Let U be the set of all unbiased estimators T of ψ(θ) such that Eθ [T 2 ] < ∞ for all
θ ∈ Θ. An estimator T0 ∈ U is called a uniformly minimum variance unbiased
estimator (UMVUE) of ψ(θ) if
Eθ [(T0 − ψ(θ))2 ] ≤ Eθ [(T − ψ(θ))2 ]
for all θ ∈ Θ and every T ∈ U.
Let a1 , a2 , . . . , an be any set of real numbers such that \sum_{i=1}^{n} ai = 1, and let
X1 , X2 , . . . , Xn be independent random variables (RVs) with common mean µ and variances σi²
for i = 1, 2, . . . , n. Then the estimator

    T = \sum_{i=1}^{n} ai Xi

is an unbiased estimator of µ with variance

    Var(T ) = \sum_{i=1}^{n} ai² σi²    (Exercise!).

The estimator T is called a linear unbiased estimator of µ.


Linear unbiased estimators of µ that have minimum variance (among all linear unbi-
ased estimators) are called best linear unbiased estimators (BLUEs).
Theorem 1. Let X1 , X2 , . . . , Xn be independent and identically distributed (iid) random
variables (RVs) with common variance σ². Also, let a1 , a2 , . . . , an be real numbers such
that \sum_{i=1}^{n} ai = 1, and define

    T = \sum_{i=1}^{n} ai Xi .

Then the variance of T is minimized if we choose ai = 1/n for i = 1, 2, . . . , n.

Proof. We have

    Var(T ) = σ² \sum_{i=1}^{n} ai²,

which is minimized if and only if we choose the ai 's such that \sum_{i=1}^{n} ai² is minimized,
subject to the condition \sum_{i=1}^{n} ai = 1. Writing ai = (ai − 1/n) + 1/n and noting that
\sum_{i=1}^{n} (ai − 1/n) = 0, the cross term vanishes and we obtain

    \sum_{i=1}^{n} ai² = \sum_{i=1}^{n} (ai − 1/n)² + 1/n,

which is minimized for the choice ai = 1/n, i = 1, 2, . . . , n.

Corollary 1.1. Let X1 , X2 , . . . , Xn be iid random variables (RVs) with common mean µ
and variance σ². Then the Best Linear Unbiased Estimator (BLUE) of µ is

    X̄ = (1/n) \sum_{i=1}^{n} Xi .

Let X1 , X2 , . . . , Xn be independent random variables (RVs) with common mean µ but
different variances σ1², σ2², . . . , σn². The Best Linear Unbiased Estimator (BLUE) of µ is
obtained if we choose ai proportional to 1/σi². The minimum variance is given by H/n, where
H is the harmonic mean of σ1², σ2², . . . , σn², defined as

    H = ( (1/n) \sum_{i=1}^{n} 1/σi² )^{−1} .
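As a numerical illustration of the inverse-variance weighting (a Python sketch with arbitrary made-up variances, not an example from the notes), the following snippet compares the variance of the equal-weight average with that of the weighted average using ai ∝ 1/σi², and verifies that the latter equals H/n.

# Compare the equal-weight mean with the inverse-variance-weighted mean (BLUE).
import numpy as np

sigma2 = np.array([1.0, 4.0, 9.0, 0.25])          # illustrative variances sigma_i^2
n = sigma2.size

w_equal = np.full(n, 1.0 / n)                     # a_i = 1/n
w_blue = (1.0 / sigma2) / np.sum(1.0 / sigma2)    # a_i proportional to 1/sigma_i^2

var_equal = np.sum(w_equal**2 * sigma2)           # Var(T) = sum a_i^2 sigma_i^2
var_blue = np.sum(w_blue**2 * sigma2)

H = n / np.sum(1.0 / sigma2)                      # harmonic mean of the sigma_i^2
print(var_equal, var_blue, H / n)                 # var_blue equals H/n and is the smallest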

Let X and Y be random variables (RVs) defined on a probability space (Ω, S, P ), and
let h be a Borel-measurable function. Then the conditional expectation of h(X), given
Y , written as E[h(X) | Y ], is an RV that takes the value E[h(X) | y], defined by

    E[h(X) | y] = \sum_x h(x) P (X = x | Y = y)                (discrete case, P (Y = y) > 0),
    E[h(X) | y] = \int_{−∞}^{∞} h(x) fX|Y (x | y) dx           (continuous case, fY (y) > 0),

when the RV Y assumes the value y.


Needless to say, a similar definition may be given for the conditional expectation
E[h(Y ) | X].
It is immediate that E[h(X) | Y ] satisfies the usual properties of an expectation,
provided we remember that E[h(X) | Y ] is not a constant but an RV.
Remarks
(a) Let E[h(X)] exist. Then

    E[h(X)] = E[ E[h(X) | Y ] ].    (2)

(b) If E[X²] < ∞, then

    Var(X) = E[Var(X | Y )] + Var(E[X | Y ]).    (3)
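Both identities can be sanity-checked by simulation. The Python sketch below is illustrative only (the choice of a Gamma-distributed Y with X | Y ∼ Poisson(Y), and the seed, are not part of the notes); it compares the two sides of (2) and (3) empirically.

# Monte Carlo check of E[X] = E[E[X|Y]] and Var(X) = E[Var(X|Y)] + Var(E[X|Y])
# for X | Y ~ Poisson(Y) with Y ~ Gamma(shape=2, scale=1.5).
import numpy as np

rng = np.random.default_rng(1)
y = rng.gamma(shape=2.0, scale=1.5, size=1_000_000)
x = rng.poisson(y)

# For the Poisson, E[X | Y] = Y and Var(X | Y) = Y.
print(x.mean(), y.mean())           # both estimate E[X] = E[E[X|Y]] = 3.0
print(x.var(), y.mean() + y.var())  # both estimate Var(X) = E[Y] + Var(Y) = 7.5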

The following two theorems provide the most useful methods for finding UMVUEs.
Theorem 2 (Rao-Blackwell Theorem). Let {Fθ : θ ∈ Θ} be a family of probability
distribution functions, and let h be any statistic in U, where U is the (nonempty) class of
all unbiased estimators of ψ(θ) with Eθ [h²] < ∞. Let T be a sufficient statistic for the family
{Fθ : θ ∈ Θ}. Then the conditional expectation ϕ(T ) = Eθ [h | T ] does not depend on θ and is
an unbiased estimator of ψ(θ). Moreover,

Eθ (ϕ(T ) − ψ(θ))2 ≤ Eθ (h − ψ(θ))2 , ∀θ ∈ Θ. (4)


Equality in (4) holds if and only if h = ϕ(T ), that is,

Pθ {h = ϕ(T )} = 1, ∀θ.
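To see the variance reduction in (4) concretely, consider an illustrative sketch (Bernoulli data; not an example from the lecture): h = X1 is unbiased for p, T = \sum_{i=1}^{n} Xi is sufficient, and conditioning gives the Rao-Blackwellized estimator ϕ(T ) = E[X1 | T ] = T /n.

# Rao-Blackwellization for iid Bernoulli(p): h = X1 is unbiased for p,
# T = sum(X) is sufficient, and E[X1 | T] = T/n. Compare the two by simulation.
import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 0.3, 20, 100_000

x = rng.binomial(1, p, size=(reps, n))
h = x[:, 0]                  # crude unbiased estimator: the first observation
phi = x.sum(axis=1) / n      # Rao-Blackwellized estimator E[h | T] = T/n

print(h.mean(), phi.mean())  # both close to p = 0.3 (unbiased)
print(h.var(), phi.var())    # about p(1-p) = 0.21 versus p(1-p)/n = 0.0105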

Theorem 3. Let U be the nonempty class of unbiased estimators as defined in Theorem
2. Then there exists at most one UMVUE for ψ(θ).

This theorem states that if W is a UMVUE of ψ(θ), then W is unique.


The following result gives a necessary and sufficient condition for an unbiased estima-
tor to be a UMVUE.

Theorem 4. Let U be the class of all unbiased estimators T of ψ(θ) where θ ∈ Θ with
Eθ [T 2 ] < ∞ for all θ, and suppose that U is nonempty. Let U0 be the class of all unbiased
estimators v of 0, that is,

U0 = {v : Eθ [v] = 0, Eθ [v²] < ∞ for all θ ∈ Θ}.




Then T0 ∈ U is the UMVUE of ψ(θ) if and only if T0 is uncorrelated with all unbiased
estimators of 0, for all θ, i.e.,

Eθ [vT0 ] = 0 for all θ and all v ∈ U0 .

The following theorem gives the relationship between completeness and the UMVUE.

Theorem 5. Let T be a complete sufficient statistic for a parameter θ. If ϕ(T ) is an


estimator based only on T , then ϕ(T ) is the UMVUE of its expected value.

Theorem 6 (Lehmann-Scheffé Theorem). If T is a complete sufficient statistic for a


parameter θ and there exists an unbiased estimator h of ψ(θ), then there exists a unique
UMVUE of ψ(θ), which is given by E[h | T ].

Proof. If h1 , h2 ∈ U , then E[h1 | T ] and E[h2 | T ] are both unbiased, and

    Eθ [E[h1 | T ] − E[h2 | T ]] = 0, ∀θ ∈ Θ.

Since T is a complete sufficient statistic, it follows that

    E[h1 | T ] = E[h2 | T ] with probability 1.

Thus every unbiased estimator h yields the same estimator E[h | T ], and by Theorem 2 its
variance does not exceed that of h; hence E[h | T ] is the (unique) UMVUE. □


Recipe for Finding UMVUEs
Suppose we want to find the UMVUE for ψ(θ).

(i) Start by finding a statistic T that is both sufficient and complete.

(ii) Find a function of T , say ϕ(T ), that satisfies

Eθ [ϕ(T )] = ψ(θ), for all θ ∈ Θ.

Then ϕ(T ) is the UMVUE for ψ(θ).

Example 1 Let X1 , X2 , . . . , Xn be iid Pois(λ). Then X̄ is the UMVUE of λ.


Solution. The Poisson distribution belongs to the exponential family, so one can easily
show that T = T (X) = \sum_{i=1}^{n} Xi is a complete sufficient statistic. Now,

    Eλ (T ) = Eλ ( \sum_{i=1}^{n} Xi ) = \sum_{i=1}^{n} Eλ (Xi ) = nλ.

Thus,

    Eλ (T /n) = Eλ (X̄) = λ.

Since X̄ = T /n is unbiased for λ and is a function of T , which is a complete sufficient
statistic, by Theorem 5, X̄ is the UMVUE of λ.
Example 2 Suppose X1 , X2 , . . . , Xn are iid U (0, θ), where θ > 0. Then find the
UMVUE of θ.
Solution. T = T (X) = X(n) is a complete sufficient statistic. We know that

    Eθ (T ) = Eθ (X(n) ) = (n/(n + 1)) θ    for all θ > 0,

which implies

    Eθ [ ((n + 1)/n) X(n) ] = θ.

Since ((n + 1)/n) X(n) is an unbiased estimator of θ and is a function of X(n) , which is a
complete sufficient statistic,

    ((n + 1)/n) X(n)

is the UMVUE of θ.
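A short simulation (an illustrative Python sketch; θ, n, and the seed are arbitrary) makes the bias correction visible: X(n) underestimates θ on average, while ((n + 1)/n) X(n) does not.

# Compare X_(n) with the UMVUE (n+1)/n * X_(n) for iid U(0, theta).
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.0, 5, 200_000

x = rng.uniform(0.0, theta, size=(reps, n))
x_max = x.max(axis=1)

print(x_max.mean())                  # about n/(n+1) * theta = 1.667 (biased low)
print(((n + 1) / n * x_max).mean())  # about theta = 2.0 (unbiased, the UMVUE)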
Example 3 Suppose X1 , X2 , . . . , Xn are iid Poisson(θ), where θ > 0. Find the
UMVUE for
ψ(θ) = Pθ (X = 0) = e−θ .
Solution. T = T (X) = \sum_{i=1}^{n} Xi is a complete sufficient statistic for θ, since the
Poisson distribution is a member of the exponential family.
Any unbiased estimator W will "work," so let's keep our choice simple, say

W = W (X) = I(X1 = 0).

We compute the expectation

Eθ (W ) = Eθ [I(X1 = 0)] = Pθ (X1 = 0) = e−θ .

Therefore, W is an unbiased estimator of e−θ .


Now, we just calculate ϕ(T ) = E(W | T ) directly. For fixed t, we compute

    ϕ(t) = E(W | T = t) = E[I(X1 = 0) | T = t] = P (X1 = 0 | T = t).

By the definition of conditional probability,

    P (X1 = 0 | T = t) = Pθ (X1 = 0, T = t) / Pθ (T = t).

Since X1 and \sum_{i=2}^{n} Xi are independent, we can write

    Pθ (X1 = 0, T = t) = Pθ (X1 = 0) Pθ ( \sum_{i=2}^{n} Xi = t ).

Thus,

    P (X1 = 0 | T = t) = Pθ (X1 = 0) Pθ ( \sum_{i=2}^{n} Xi = t ) / Pθ (T = t).

Now X1 ∼ Pois(θ), \sum_{i=2}^{n} Xi ∼ Pois((n − 1)θ), and T = \sum_{i=1}^{n} Xi ∼ Pois(nθ). Therefore,

    ϕ(t) = [ e^{−θ} · e^{−(n−1)θ} ((n − 1)θ)^t / t! ] / [ e^{−nθ} (nθ)^t / t! ] = ( (n − 1)/n )^t .

By the Lehmann-Scheffé theorem,

    ϕ(T ) = ( (n − 1)/n )^T

is the UMVUE of ψ(θ) = e−θ .
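As a final check, the Python sketch below (arbitrary θ, n, and seed; not part of the notes) compares the UMVUE ((n − 1)/n)^T with the naive unbiased indicator I(X1 = 0) and with the plug-in estimator e^{−X̄}; the UMVUE is unbiased and has much smaller variance than the indicator.

# UMVUE of e^{-theta} for iid Poisson(theta): phi(T) = ((n-1)/n)^T with T = sum(X).
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 1.5, 10, 200_000
target = np.exp(-theta)

x = rng.poisson(theta, size=(reps, n))
T = x.sum(axis=1)

umvue = ((n - 1) / n) ** T            # Lehmann-Scheffe estimator
naive = (x[:, 0] == 0).astype(float)  # unbiased but high-variance indicator I(X1 = 0)
plugin = np.exp(-x.mean(axis=1))      # MLE plug-in e^{-Xbar} (slightly biased)

for name, est in [("UMVUE", umvue), ("I(X1=0)", naive), ("e^{-Xbar}", plugin)]:
    print(name, est.mean() - target, est.var())  # empirical bias and variance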

Theorem 7. If UMVUEs Ti exist for real functions ψi , i = 1, 2 of θ, they also exist for
λψi (λ real), as well as for ψ1 + ψ2 , and are given by λTi and T1 + T2 , respectively.

Theorem 8. Let {Tn } be a sequence of UMVUEs and let T be a statistic with Eθ [T ²] < ∞
such that

    Eθ [(Tn − T )²] → 0 as n → ∞

for all θ ∈ Θ. Then T is also the UMVUE.
