POINT ESTIMATION
Institute of Technology of Cambodia
Department of Foundation Year
2020-2021
Outline
1 Introduction
Definition 1
Let X1 , . . . , Xn be independent and identically distributed (iid) random
variables (in statistical language, a random sample) with a probability
density function (pdf) or probability mass function (pmf)
f (x; θ1 , . . . , θm ), where θ1 , . . . , θm are the unknown population
parameters (characteristics of interest). The actual values of these
parameters are not known. The statistics gi (X1 , . . . , Xn ), i = 1, . . . , m,
which can be used to estimate the value of each of the parameters θi ,
are called estimators for the parameters, and the values calculated
from these statistics using particular sample data values are called
estimates of the parameters. Estimators of θi are denoted by θ̂i ,
where θ̂i = gi (X1 , . . . , Xn ), i = 1, . . . , m.
Remark 1
The estimators are random variables. When we actually run the
experiment and observe the data, let the observed values of the
random variables X1 , . . . , Xn be x1 , . . . , xn ; then, θ̂(X1 , . . . , Xn ) is an
estimator, and its value θ̂(x1 , . . . , xn ) is an estimate.
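For instance, in Python (a minimal sketch with made-up data), the estimator is the rule itself, while the estimate is the number that rule returns for the observed sample:

# estimator: a rule g(X1, . . . , Xn), here the sample mean
def sample_mean(sample):
    return sum(sample) / len(sample)

x_obs = [24.5, 25.1, 24.8, 25.4]   # hypothetical observed values x1, . . . , xn
print(sample_mean(x_obs))          # the estimate: 24.95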
Introduction
Definition 2
An estimator θ̂ is said to be an unbiased estimator of θ if
E (θ̂) = θ
Unbiased Estimators
Proposition 1
Let X ∼ Bin(n, p), where n is known and p ∈ (0, 1) is the parameter.
Then the sample proportion p̂ = X /n is an unbiased estimator of p.
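Indeed, since E(X) = np for X ∼ Bin(n, p), we get E(p̂) = E(X/n) = E(X)/n = np/n = p.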
Proposition 2
If X1 , X2 , . . . , Xn is a random sample from a distribution with mean µ, then
the sample average µ̂ = X̄ = Σ Xi /n is an unbiased estimator of µ.
Proposition 3
Let X1 , X2 , . . . , Xn be a random sample from a distribution with mean µ and
variance σ². Then the sample variance
σ̂² = S² = Σ (Xi − X̄)² / (n − 1)
is an unbiased estimator of σ².
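These facts can also be checked by simulation. The sketch below (not from the slides; the normal distribution, µ = 10, σ = 2, and n = 5 are arbitrary illustrative choices) averages X̄ and S² over many simulated samples, and also shows that dividing by n instead of n − 1 underestimates σ² on average.

import random

random.seed(1)
mu, sigma, n, reps = 10.0, 2.0, 5, 200_000   # assumed values for illustration

sum_xbar = sum_s2 = sum_s2n = 0.0
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(x) / n
    ss = sum((xi - xbar) ** 2 for xi in x)
    sum_xbar += xbar
    sum_s2 += ss / (n - 1)     # sample variance S^2 (divide by n - 1)
    sum_s2n += ss / n          # divide-by-n version

print(sum_xbar / reps)   # close to mu = 10
print(sum_s2 / reps)     # close to sigma^2 = 4 (unbiased)
print(sum_s2n / reps)    # close to 4 * (n-1)/n = 3.2, i.e. biased low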
Theorem 1
Let X1 , X2 , . . . , Xn be a random sample from a normal distribution with
parameters µ and σ². Then the estimator µ̂ = X̄ is the minimum variance
unbiased estimator (MVUE) for µ.
Definition 3
The standard error of an estimator θ̂ is its standard deviation
σθ̂ = √(V(θ̂)).
Example 1
Assuming that breakdown voltage is normally distributed, µ̂ = X̄ is the
best estimator of µ. If the value of σ is known to be 1.5, the standard
error of X̄ is σX̄ = σ/√n = 1.5/√20 = 0.335. If, as is usually the
case, the value of σ is unknown, the estimate σ̂ = s = 1.462 is
substituted into σX̄ to obtain the estimated standard error
σ̂X̄ = sX̄ = s/√n = 1.462/√20 = 0.327.
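The numbers in Example 1 are easy to reproduce (a quick check in Python, using n = 20 as in the example):

import math

n = 20
sigma = 1.5      # known population standard deviation
s = 1.462        # sample standard deviation, used when sigma is unknown

print(sigma / math.sqrt(n))   # standard error of X-bar: about 0.335
print(s / math.sqrt(n))       # estimated standard error: about 0.327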
Example 2
The accompanying data on flexural strength (MPa) for concrete beams are given below:
5.9 7.2 7.3 6.3 8.1 6.8 7.0 7.6 6.8 6.5
7.0 6.3 7.9 9.0 8.2 8.7 7.8 9.7 7.4 7.7
9.7 7.8 7.7 11.6 11.3 11.8 10.7
(a) Calculate a point estimate of the mean value of strength for the
conceptual population of all beams manufactured in this fashion,
and state which estimator you used. [Hint: Σ xi = 219.8.]
(b) Calculate a point estimate of the strength value that separates
the weakest 50% of all such beams from the strongest 50%, and
state which estimator you used.
(c) Calculate and interpret a point estimate of the population
standard deviation σ. Which estimator did you use? [Hint:
Σ xi² = 1860.94.]
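One possible way to carry out the calculations (a Python sketch, not an official solution; it simply applies the sample mean, sample median, and sample standard deviation to the 27 observations):

import statistics

strength = [5.9, 7.2, 7.3, 6.3, 8.1, 6.8, 7.0, 7.6, 6.8, 6.5,
            7.0, 6.3, 7.9, 9.0, 8.2, 8.7, 7.8, 9.7, 7.4, 7.7,
            9.7, 7.8, 7.7, 11.6, 11.3, 11.8, 10.7]   # flexural strength (MPa), n = 27

print(statistics.mean(strength))    # (a) sample mean x-bar = 219.8/27, about 8.14
print(statistics.median(strength))  # (b) sample median (the 14th ordered value)
print(statistics.stdev(strength))   # (c) sample standard deviation s, about 1.66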
Maximum Likelihood Estimation
Definition 5
Maximum likelihood estimators (MLE) are those values of the
parameters that maximize the likelihood function
L(θ) = f (x1 ; θ) · · · f (xn ; θ) with respect to the parameter θ. That is,
the MLE θ̂ satisfies L(θ̂) ≥ L(θ) for all values of θ.
Example 3
Suppose X1 , X2 , . . . , Xn is a random sample from an exponential
distribution with parameter λ. Because of independence, the likelihood
function is a product of the individual pdf’s:
L(λ) = (1/λ) e^(−x1/λ) · · · (1/λ) e^(−xn/λ) = λ^(−n) e^(−Σ xi /λ)
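Taking logarithms, ln L(λ) = −n ln λ − Σ xi /λ, and setting d/dλ ln L(λ) = −n/λ + Σ xi /λ² equal to zero gives λ̂ = Σ xi /n = x̄, so the MLE of λ is the sample mean X̄.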
Example 4
(a) Let X1 , . . . , Xn be a random sample from a Poisson distribution
with the parameter λ > 0. Find the MLE λ̂ of λ. Is λ̂ unbiased?
(b) Traffic engineers use the Poisson distribution to model light
traffic. This is based on the rationale that when the rate is
approximately constant in light traffic, the distribution of counts
of cars in a given time interval should be Poisson. The following
data show the number of vehicles turning left in 15 randomly
chosen 5-minute intervals at a specific intersection. Calculate the
maximum likelihood estimate.
10 17 12 6 12 11 9 6
10 8 8 16 7 10 6
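For the Poisson model, ln L(λ) = (Σ xi ) ln λ − nλ − Σ ln(xi !), and setting its derivative Σ xi /λ − n to zero gives λ̂ = X̄, which is unbiased because E(X̄) = λ. The estimate in part (b) can then be computed as in the following sketch:

counts = [10, 17, 12, 6, 12, 11, 9, 6,
          10, 8, 8, 16, 7, 10, 6]      # left turns in 15 five-minute intervals

lam_hat = sum(counts) / len(counts)    # MLE of lambda: the sample mean, 148/15
print(lam_hat)                         # about 9.87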
Example 5
In the normal case, the MLE's of µ and σ² are µ̂ = X̄ and
σ̂² = Σ (Xi − X̄)² / n. Find the MLE of h(µ, σ²) = σ.
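By the invariance property of maximum likelihood estimators (the MLE of h(θ) is h(θ̂)), the MLE of σ is σ̂ = √( Σ (Xi − X̄)² / n ). A small sketch with made-up data, contrasting σ̂ with the sample standard deviation s:

import math

x = [2.1, 3.4, 2.9, 3.8, 2.6]          # hypothetical observations
n = len(x)
xbar = sum(x) / n
ss = sum((xi - xbar) ** 2 for xi in x)

sigma_mle = math.sqrt(ss / n)          # MLE of sigma: divide by n
s = math.sqrt(ss / (n - 1))            # sample standard deviation: divide by n - 1
print(sigma_mle, s)                    # the MLE is slightly smaller than s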
Remark
Sometimes calculus cannot be used to obtain MLE’s.
Example 6
Suppose my waiting time for a bus is uniformly distributed on [0, θ],
with unknown θ > 0, and the results x1 , · · · , xn of a random sample
from this distribution have been observed. Find the MLE of θ.
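Here L(θ) = θ^(−n) whenever θ ≥ max(x1 , · · · , xn ) and 0 otherwise, and θ^(−n) is a decreasing function of θ, so the likelihood is maximized at the smallest admissible value: θ̂ = max(X1 , · · · , Xn ), not the root of a derivative equation. A one-line sketch with hypothetical waiting times:

waits = [3.2, 8.7, 1.4, 6.9, 5.5]   # hypothetical waiting times (minutes)
print(max(waits))                   # MLE of theta for the Uniform[0, theta] model: 8.7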
Cramer-Rao inequality, Efficient estimator
Definition 6
The Fisher information I(θ) in a single observation from a pmf or pdf
f (x; θ) is the variance of the random variable U = ∂/∂θ ln f (x; θ):
I(θ) = V( ∂/∂θ ln f (x; θ) ).
Remark 2
There is an alternative expression for I(θ) that is sometimes easier to
compute than the variance in the definition:
I(θ) = −E( ∂²/∂θ² ln f (x; θ) ).
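For instance, for the Poisson pmf f (x; λ) = e^(−λ) λ^x /x!, we have ln f (x; λ) = x ln λ − λ − ln x!, so ∂²/∂λ² ln f (x; λ) = −x/λ², and therefore I(λ) = −E(−X/λ²) = E(X)/λ² = λ/λ² = 1/λ.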
Definition 7
Let θ̂ be an unbiased estimator of θ. The ratio of the Cramer–Rao lower
bound 1/(nI(θ)) to the variance of θ̂ is its efficiency. θ̂ is said to be an
efficient estimator if it achieves the Cramer–Rao lower bound (that is, if
its efficiency is 1). An efficient estimator is a minimum variance unbiased
estimator (MVUE).
Definition 8
An unbiased estimator θ̂ is said to be efficient if
V(θ̂) = 1/(nI(θ)).
Remark
An efficient estimator need not exist, but if it does exist and is
unbiased, it is the MVUE.
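Continuing the Poisson illustration, I(λ) = 1/λ gives the lower bound 1/(nI(λ)) = λ/n, while V(X̄) = λ/n exactly, so the sample mean is an efficient estimator of λ. The Monte Carlo sketch below (with illustrative values λ = 4 and n = 25) compares the empirical variance of X̄ with this bound:

import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 4.0, 25, 100_000              # assumed rate, sample size, replications

samples = rng.poisson(lam, size=(reps, n))   # reps simulated Poisson samples of size n
xbars = samples.mean(axis=1)                 # X-bar for each simulated sample

print(xbars.var(ddof=1))   # empirical variance of X-bar, close to lam/n = 0.16
print(lam / n)             # Cramer-Rao lower bound 1/(n I(lambda))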