Point Estimation
Outline
1 Introduction
2 Method of Finding Estimators
  Maximum Likelihood Estimator
  Method of Moments
3 Methods of Evaluating Estimators
  Mean Squared Error
  Best Unbiased Estimators
  Sufficiency and Unbiasedness
Introduction
In one common estimation scenario, we take a random sample from the distribution to elicit some information about the unknown parameter θ. That is, we repeat the experiment n independent times, observe the sample X1, X2, ..., Xn, and try to estimate the value of θ using the observations x1, x2, ..., xn. The function of X1, X2, ..., Xn used to estimate θ, say the statistic u(X1, X2, ..., Xn), is called an estimator of θ. We want it to be such that the computed estimate u(x1, x2, ..., xn) is usually close to θ. Since we are estimating a single member θ of the parameter space Ω, such an estimator is often called a point estimator.
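A minimal sketch in Python (illustrative, not from the slides; the normal model and all values are assumptions): the sample mean is the estimator u(X1, ..., Xn), and the number it returns on observed data is the point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)  # observed sample x1, ..., xn

# The statistic u(X1, ..., Xn) = (X1 + ... + Xn)/n estimates the unknown
# population mean; the number computed from the observed data below is the
# point estimate.
theta_hat = x.mean()
print("point estimate of theta:", theta_hat)  # should land near 5.0
```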
Method of Finding Estimators: Maximum Likelihood Estimator
Let X1, X2, ..., Xn be a random sample from a distribution with parameter θ. Their joint density is denoted by f(x; θ) or f(x1, x2, ..., xn; θ). If X1, X2, ..., Xn are independent, then the joint pdf or pmf can be written as

f(x; θ) = f(x1; θ) f(x2; θ) ··· f(xn; θ).

This joint pdf, considered as a function of the unknown parameter θ rather than of the sample x, is known as the likelihood function L(θ). Maximum likelihood estimation chooses the parameter value that maximizes L(θ), that is, the value of θ under which the data actually observed are most likely. Using the methods of calculus, we find the values of θ that maximize L(θ); in practice it is usually easier to maximize log L(θ), which has the same maximizer.
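A concrete sketch (not from the slides; the exponential model and every name here are illustrative assumptions): for an Exponential(λ) sample, calculus gives the closed-form MLE λ̂ = 1/x̄, and numerical maximization of the log-likelihood agrees.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)  # Exponential sample with rate lambda = 0.5

# For X1, ..., Xn iid Exponential(lambda),
# log L(lambda) = n*log(lambda) - lambda*sum(x); we minimize its negative.
def neg_log_likelihood(lam):
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")
print("numerical MLE :", res.x)           # both should be near 0.5
print("calculus MLE  :", 1.0 / x.mean())  # d/dlambda log L = 0 gives 1/x_bar
```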
Method of Finding Estimators: Method of Moments
The method of moments estimates θ by equating the first few sample moments to the corresponding population moments, which are functions of θ, and solving the resulting equations for θ.
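A minimal sketch (illustrative, not from the slides; the gamma model and values are assumptions), matching the first two moments of a gamma distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.5, size=10_000)  # Gamma(alpha=2, theta=1.5)

# Gamma moments: E X = alpha*theta and Var X = alpha*theta^2.
# Equating the sample mean and variance to these and solving gives:
m, v = x.mean(), x.var()
theta_hat = v / m            # scale estimate
alpha_hat = m / theta_hat    # shape estimate (equivalently m**2 / v)
print("method-of-moments estimates:", alpha_hat, theta_hat)  # near (2.0, 1.5)
```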
Methods of Evaluating Estimators: Mean Squared Error
In this section, we will consider some basic criteria for evaluating estimators.
Bias of an Estimator
The bias of a point estimator W of a parameter θ is the difference between the expected value of W and θ; that is, Biasθ W = Eθ W − θ. An estimator whose bias is identically equal to 0 is called unbiased and satisfies Eθ W = θ for all θ.
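A quick Monte Carlo sketch (not from the slides; model and constants are illustrative) of bias in action: the divide-by-n variance estimator is biased, while the divide-by-(n−1) version is unbiased.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2, reps = 10, 4.0, 100_000

# Draw many size-n normal samples and approximate E[W] for two variance
# estimators of sigma^2 by averaging over the replications.
samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, n))
var_n = samples.var(axis=1, ddof=0)    # divides by n
var_n1 = samples.var(axis=1, ddof=1)   # divides by n - 1

print("mean of divide-by-n estimator    :", var_n.mean())   # near (n-1)/n * 4 = 3.6
print("mean of divide-by-(n-1) estimator:", var_n1.mean())  # near 4.0 (unbiased)
```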
Methods of Evaluating Estimators: Best Unbiased Estimators
Cramér-Rao Inequality
Let X1, X2, ..., Xn be a sample with pdf f(x|θ), and let W(X) = W(X1, ..., Xn) be any estimator satisfying

$$\frac{d}{d\theta}\,E_\theta W(X) = \int_{\chi} \frac{\partial}{\partial\theta}\,[W(x)\,f(x\mid\theta)]\,dx$$

and Varθ W(X) < ∞. Then

$$\operatorname{Var}_\theta(W(X)) \;\ge\; \frac{\left(\dfrac{d}{d\theta}\,E_\theta W(X)\right)^{2}}{E_\theta\!\left(\left(\dfrac{\partial}{\partial\theta}\,\log f(X\mid\theta)\right)^{2}\right)}.$$
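A standard worked instance (not from the slides): for a normal sample the sample mean attains the bound.

$$X_1,\dots,X_n \overset{iid}{\sim} N(\theta,\sigma^2):\quad \frac{\partial}{\partial\theta}\log f(X\mid\theta)=\sum_{i=1}^{n}\frac{X_i-\theta}{\sigma^2},\qquad E_\theta\!\left(\frac{\partial}{\partial\theta}\log f(X\mid\theta)\right)^{2}=\frac{n}{\sigma^2}.$$

Since $E_\theta \bar{X} = \theta$, the numerator of the bound is 1, so $\operatorname{Var}_\theta(\bar{X}) \ge \sigma^2/n$; as $\operatorname{Var}_\theta(\bar{X}) = \sigma^2/n$ exactly, $\bar{X}$ achieves the Cramér-Rao lower bound and is the best unbiased estimator of θ.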
Methods of Evaluating Estimators: Sufficiency and Unbiasedness
Recall
If X and Y are any two random variables, then, provided the expectations exist, we have E X = E(E(X|Y)) and Var X = Var(E(X|Y)) + E(Var(X|Y)).
Rao-Blackwell Theorem
Let W be any unbiased estimator of τ(θ), and let T be a sufficient statistic for θ. Define ϕ(T) = E(W|T). Then Eθ ϕ(T) = τ(θ) and Varθ ϕ(T) ≤ Varθ W for all θ; that is, ϕ(T) is a uniformly better unbiased estimator of τ(θ).
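A minimal Monte Carlo sketch (not from the slides; the Bernoulli model and all constants are illustrative): for Bernoulli(p) data, W = X1 is unbiased for p, T = ΣXi is sufficient, and conditioning gives ϕ(T) = E(X1|T) = T/n, which is still unbiased but has far smaller variance.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 20, 0.3, 100_000

x = rng.binomial(1, p, size=(reps, n))  # reps Bernoulli(p) samples of size n
W = x[:, 0]                 # W = X1: unbiased for p, but crude
T = x.sum(axis=1)           # T = X1 + ... + Xn: sufficient for p
phi = T / n                 # E(X1 | T) = T/n by symmetry: the improved estimator

print("E[W], Var[W]    :", W.mean(), W.var())      # near 0.3 and p(1-p) = 0.21
print("E[phi], Var[phi]:", phi.mean(), phi.var())  # near 0.3 and p(1-p)/n = 0.0105
```

Both estimators are unbiased, but Rao-Blackwellizing W through the sufficient statistic T cuts the variance by a factor of n, as the theorem guarantees.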