Invariance and Unbiasedness

The document discusses the concepts of Maximum Likelihood Estimation (MLE) and Unbiased Estimation in statistics, providing examples and theorems related to these topics. It explains the invariance property of MLE and how to determine if an estimator is unbiased. Additionally, it includes methods for finding unbiased estimators and specific examples involving different probability distributions.

MA2540/MA4240: Applied Statistics

Dr. Sameen Naqvi


Department of Mathematics, IIT Hyderabad
Email id: [email protected]
Remark 3: An MLE may not exist

Example: Let X1, . . . , Xn ∼ Bin(1, p), where 0 < p < 1 is unknown.

If (0, . . . , 0) (or (1, . . . , 1)) is observed, the likelihood is maximised at X̄ = 0 (or X̄ = 1), which is not an admissible value of p since 0 < p < 1. Hence, an MLE does not exist for these samples.
Invariance of MLE

I Theorem: Let θ̂ denote the MLE of θ ∈ Θ. Then the MLE of φ = g(θ), where g is a one-to-one function and φ ∈ Φ = {g(θ) : θ ∈ Θ}, is φ̂ = g(θ̂).

I Theorem (Zehna, 1966): If θ̂ is the MLE of θ, then for any function g(θ), the MLE of g(θ) is g(θ̂).
Invariance of MLE

I Example
(i) Let X1, . . . , Xn be a random sample from a Poisson(λ) distribution, where λ > 0 is unknown. Find the MLE of g(λ) = P(X1 = 0) = e^{−λ}.

Answer: Since the MLE of λ is X̄, the MLE of e^{−λ} is e^{−X̄}.

(ii) Let X ∼ Bin(1, p), where 0 ≤ p ≤ 1 is unknown. Find the MLE of g(p) = p(1 − p).

Answer: Since the MLE of p is X, the MLE of p(1 − p) is X(1 − X).

How can we be sure that such an estimator is good? Is it close to the actual true parameter?
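Example (i) can be checked numerically. The sketch below (variable names are our own) computes the Poisson MLE λ̂ = X̄, applies g to it, and cross-checks against a brute-force grid maximisation of the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.poisson(lam=2.0, size=500)

# MLE of lambda for a Poisson sample is the sample mean.
lam_hat = sample.mean()

# By invariance, the MLE of g(lambda) = P(X1 = 0) = exp(-lambda)
# is g(lam_hat) = exp(-lam_hat).
g_hat = np.exp(-lam_hat)

# Cross-check: maximise the Poisson log-likelihood (up to a constant
# not depending on lambda) over a fine grid, then apply g.
grid = np.linspace(0.01, 10, 100_000)
loglik = sample.sum() * np.log(grid) - len(sample) * grid
lam_grid = grid[np.argmax(loglik)]

print(abs(lam_grid - lam_hat) < 1e-3)          # grid maximiser agrees with X-bar
print(abs(np.exp(-lam_grid) - g_hat) < 1e-3)   # and g of it agrees with g_hat
```

Both checks print True: the grid maximiser lands on X̄ (up to grid spacing), so the MLE of e^{−λ} is indeed e^{−X̄}.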

Unbiased Estimation
Unbiasedness

I Let X = (X1, . . . , Xn) be a random sample from a population with probability distribution F(x, θ), θ ∈ Θ.

I An estimator T(X) is said to be unbiased for estimating g(θ) if

Eθ T(X) = g(θ), ∀ θ ∈ Θ.
Unbiasedness

I If Eθ T(X) = g(θ) + b(θ), then b(θ) is called the bias of T.

If b(θ) > 0 ∀ θ, then T is said to over-estimate g(θ).
If b(θ) < 0 ∀ θ, then T is said to under-estimate g(θ).
Examples
(1) If Xi is a Bernoulli random variable with parameter p, then

p̂ = (1/n) Σ_{i=1}^n Xi

is the MLE of p. Is the MLE an unbiased estimator (UE) of p?

Solution.
We know that if Xi is a Bernoulli random variable with parameter p, then E(Xi) = p. Therefore,

E(p̂) = E[(1/n) Σ_{i=1}^n Xi] = (1/n) Σ_{i=1}^n E(Xi) = (1/n)·np = p.

Since E(p̂) = p, the MLE is an unbiased estimator of p.
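Unbiasedness of p̂ can also be illustrated by simulation: averaging p̂ over many independent samples approximates E(p̂). A minimal sketch (parameter values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 0.3, 20, 200_000

# Each row is one sample of n Bernoulli(p) draws; p_hat is the row mean.
samples = rng.binomial(1, p, size=(reps, n))
p_hats = samples.mean(axis=1)

# The average of p_hat over many samples approximates E(p_hat),
# which should be close to the true p = 0.3.
print(p_hats.mean())
```

With 200,000 replications the Monte Carlo average lands within a few thousandths of p = 0.3, consistent with E(p̂) = p.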
Examples

(2) Let X1, . . . , Xn ∼ P(λ), λ > 0. We know that the MoM estimator of λ is X̄.

(i) Is it also a UE of λ?

E(X̄) = (1/n) Σ_{i=1}^n E(Xi) = (1/n)·nλ = λ.

Thus, T1(X) = X̄ is an unbiased estimator of λ.
Examples

(ii) Check whether the following estimators are unbiased for λ:

T2(X) = Xi, i = 1, 2, . . . , n
T3(X) = (X1 + 2X2)/3
T4(X) = (1/(n − 1)) Σ_{i=1}^n (Xi − X̄)²
Examples
(iii) Estimate the probability of no occurrence, i.e., P(X = 0) = e^{−λ}.

We know that X ∼ P(λ) with p.m.f.

P(X = x) = e^{−λ} λ^x / x!,  x = 0, 1, . . . .

Define an indicator function

I(X) = 1, if X = 0
       0, if X ≠ 0.

Then

E[I(X)] = 1·P(X = 0) + 0·P(X ≠ 0) = P(X = 0) = e^{−λ}.

So, I(X) is an unbiased estimator of e^{−λ}.
Examples
(3) If Xi are normally distributed random variables with mean µ and variance σ², what is an unbiased estimator of σ²?

Solution.
We know that if Xi ∼ N(µ, σ²), then for the sample variance S² = (1/(n − 1)) Σ_{i=1}^n (Xi − X̄)²,

(n − 1)S²/σ² ∼ χ²_{n−1}.

Also, recall that if X ∼ χ²_r, then E(X) = r. Therefore,

E(S²) = E[(σ²/(n − 1)) · ((n − 1)S²/σ²)]
      = (σ²/(n − 1)) · E[(n − 1)S²/σ²]
      = (σ²/(n − 1)) · (n − 1) = σ².

Since E(S²) = σ², we say that S² is an unbiased estimator of σ².
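The role of the n − 1 divisor is easy to see by simulation. The sketch below (our own setup) compares the unbiased S² with the n-divisor version, whose mean is pulled down to ((n − 1)/n)σ²:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma2, n, reps = 5.0, 9.0, 8, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

s2 = x.var(axis=1, ddof=1)      # divides by n-1: the unbiased S^2
s2_mle = x.var(axis=1, ddof=0)  # divides by n: biased downward

# E(S^2) = sigma^2, while E(s2_mle) = (n-1)/n * sigma^2.
print(abs(s2.mean() - sigma2) < 0.1)
print(abs(s2_mle.mean() - (n - 1) / n * sigma2) < 0.1)
```

Both print True: dividing by n − 1 rather than n is exactly what removes the bias, especially noticeable at small n.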
A method to find UEs

Solve directly the equation

E[T(X)] = g(θ), ∀ θ ∈ Θ.

I Example:
Let X be a truncated Poisson r.v. with zero missing,

P(X = x) = (1/(e^λ − 1)) · λ^x/x!,  x = 1, 2, . . .

What will be a UE of λ?
A method to find UEs cont'd

E[T(X)] = λ, ∀ λ > 0

Σ_{x=1}^∞ T(x) · (1/(e^λ − 1)) · λ^x/x! = λ, ∀ λ > 0

Σ_{x=1}^∞ T(x) λ^x/x! = λ(e^λ − 1), ∀ λ > 0

T(1)λ + (T(2)/2!)λ² + . . . = λ(e^λ − 1)
                            = λ[λ + λ²/2! + . . .], ∀ λ > 0

Since two power series can be identical on an open interval iff all their coefficients match, we get
A method to find UEs cont'd

T(1) = 0,
T(2) = 2! = 2,
T(3) = 3!/2! = 3,
...
T(r) = r!/(r − 1)! = r.

So the UE is

T(X) = 0, if X = 1
       X, if X = 2, 3, . . .
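The derivation can be verified by computing E[T(X)] directly under the zero-truncated pmf: summing T(x)·P(X = x) over x should return λ exactly. A minimal sketch (function names are our own):

```python
import math

def T(x):
    # The unbiased estimator derived above: T(1) = 0, T(x) = x for x >= 2.
    return 0.0 if x == 1 else float(x)

def expected_T(lam, terms=100):
    # E[T(X)] under the zero-truncated Poisson pmf
    # P(X = x) = lam**x / (x! * (exp(lam) - 1)), x = 1, 2, ...
    # (the series is truncated at `terms`; the tail is negligible here).
    norm = math.exp(lam) - 1.0
    return sum(T(x) * lam**x / (math.factorial(x) * norm)
               for x in range(1, terms))

# E[T(X)] should equal lambda for every lambda > 0.
for lam in (0.5, 1.0, 3.0):
    print(abs(expected_T(lam) - lam) < 1e-9)
```

All three checks print True, confirming that the coefficient-matching argument produced a genuinely unbiased estimator.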
Thank you for listening!