Point Estimation: Institute of Technology of Cambodia

This document provides an introduction to point estimation. It defines key terms like estimators, unbiased estimators, and minimum variance unbiased estimators. It discusses common unbiased estimators like the sample mean and sample variance. The document also introduces maximum likelihood estimation as a method for finding point estimators. Specifically, it defines the likelihood function and describes the procedure for finding the maximum likelihood estimator, which maximizes the likelihood function. An example using the exponential distribution is provided.


CHAPTER I

POINT ESTIMATION

Department of Foundation Year

Institute of Technology of Cambodia

2020-2021
Department of Foundation Year CHAPTER I ITC 1 / 22
Outline

1 Introduction

2 The methods of finding point estimators

3 Cramer-Rao inequality, Efficient estimator



Introduction

Outline

1 Introduction

2 The methods of finding point estimators

3 Cramer-Rao inequality, Efficient estimator



Introduction

Definition 1
Let X1 , . . . , Xn be independent and identically distributed (iid) random
variables (in statistical language, a random sample) with a probability
density function (pdf) or probability mass function (pmf)
f (x, θ1 , . . . , θm ), where θ1 , . . . , θm are the unknown population
parameters (characteristics of interest). The actual values of these
parameters are not known. The statistics gi (X1 , . . . , Xn ), i = 1, . . . , m,
which can be used to estimate the value of each of the parameters θi ,
are called estimators for the parameters, and the values calculated
from these statistics using particular sample data values are called
estimates of the parameters. Estimators of θi are denoted by θ̂i ,
where θ̂i = gi (X1 , . . . , Xn ), i = 1, . . . , m.

Remark 1
The estimators are random variables. When we actually run the
experiment and observe the data, let the observed values of the
random variables X1 , . . . , Xn be x1 , . . . , xn ; then, θ̂(X1 , . . . , Xn ) is an
estimator, and its value θ̂(x1 , . . . , xn ) is an estimate.
Introduction

Definition 2
An estimator θ̂ is said to be an unbiased estimator of θ if

E (θ̂) = θ

If θ̂ is not unbiased, the difference E (θ̂) − θ is called the bias of θ̂.

Principle of Unbiased Estimation


When choosing among several different estimators of θ, select an
unbiased one.



Introduction

Unbiased Estimators
Proposition 1
Let X ∼ Bin(n, p), where n is known and p ∈ (0, 1) is the parameter.
Then the sample proportion p̂ = X /n is an unbiased estimator of p.

Proposition 2
If X1 , X2 , . . . , Xn is a random sample from a dist. with mean µ, then
the sample average µ̂ = X̄ = Σ_{i=1}^{n} X_i / n is an unbiased estimator of µ.

Proposition 3
Let X1 , X2 , · · · , Xn be a random sample from a dist. with mean µ and
variance σ². Then the sample variance

σ̂² = S² = Σ_{i=1}^{n} (X_i − X̄)² / (n − 1)

is unbiased for estimating σ².
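The (n − 1) divisor is what makes S² unbiased. A quick Monte Carlo sketch in Python (the values of µ, σ, n, and the number of replications are my own choices, not from the slides):

```python
import random

random.seed(1)

# Draw many samples of size n from N(mu, sigma^2) and average the
# sample variance computed with the (n - 1) divisor; the average
# should land near sigma^2 if S^2 is unbiased.
mu, sigma, n, reps = 5.0, 2.0, 10, 20000

total_s2 = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    total_s2 += sum((x - xbar) ** 2 for x in xs) / (n - 1)

print(total_s2 / reps)  # close to sigma^2 = 4.0
```

Dividing by n instead of n − 1 would make this average come out systematically below σ².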


Introduction

Estimators with Minimum Variance


Principle of Minimum Variance Unbiased Estimation
Among all estimators of θ that are unbiased, choose the one that has
minimum variance. The resulting θ̂ is called the minimum variance
unbiased estimator (MVUE) of θ.

Theorem 1
Let X1 , X2 , . . . , Xn be a random sample from a normal distribution with
parameters µ and σ². Then the estimator µ̂ = X̄ is the MVUE for µ.

Definition 3
The standard error of an estimator θ̂ is its standard deviation
σ_θ̂ = √V(θ̂).

If the standard error itself involves unknown parameters whose values
can be estimated, substitution of these estimates into σ_θ̂ yields the
estimated standard error of the estimator, denoted by σ̂_θ̂ or s_θ̂.
Introduction

Reporting a Point Estimate: The Standard Error

Example 1
Assuming that breakdown voltage is normally distributed, µ̂ = X̄ is the
best estimator of µ. If the value of σ is known to be 1.5, the standard
error of X̄ is σ_X̄ = σ/√n = 1.5/√20 = 0.335. If, as is usually the
case, the value of σ is unknown, the estimate σ̂ = s = 1.462 is
substituted into σ_X̄ to obtain the estimated standard error
σ̂_X̄ = s_X̄ = s/√n = 1.462/√20 = 0.327.
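The two computations in Example 1 can be reproduced directly (a sketch; the numbers σ = 1.5, s = 1.462, n = 20 come from the example):

```python
import math

# Standard error of the sample mean, with sigma known vs. estimated
sigma, s, n = 1.5, 1.462, 20

se_known = sigma / math.sqrt(n)   # sigma known
se_est = s / math.sqrt(n)         # sigma estimated by s

print(round(se_known, 3), round(se_est, 3))  # 0.335 0.327
```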



Introduction

Example 2
The accompanying data on flexural strength (MPa) for concrete beams are as follows:

5.9 7.2 7.3 6.3 8.1 6.8 7.0 7.6 6.8 6.5
7.0 6.3 7.9 9.0 8.2 8.7 7.8 9.7 7.4 7.7
9.7 7.8 7.7 11.6 11.3 11.8 10.7

(a) Calculate a point estimate of the mean value of strength for the
conceptual population of all beams manufactured in this fashion,
and state which estimator you used. [Hint: Σ x_i = 219.8.]
(b) Calculate a point estimate of the strength value that separates
the weakest 50% of all such beams from the strongest 50%, and
state which estimator you used.
(c) Calculate and interpret a point estimate of the population
standard deviation σ. Which estimator did you use? [Hint:
Σ x_i² = 1860.94.]



Introduction

(d) Calculate a point estimate of the proportion of all such beams
whose flexural strength exceeds 10 MPa.
(e) Calculate a point estimate of the population coefficient of
variation σ/µ, and state which estimator you used.
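One way parts (a)-(e) could be computed (a sketch, not the slides' own solution; the estimators used are the sample mean, sample median, sample standard deviation, sample proportion, and s/x̄):

```python
import statistics

# Flexural strength data (MPa) from Example 2
strengths = [5.9, 7.2, 7.3, 6.3, 8.1, 6.8, 7.0, 7.6, 6.8, 6.5,
             7.0, 6.3, 7.9, 9.0, 8.2, 8.7, 7.8, 9.7, 7.4, 7.7,
             9.7, 7.8, 7.7, 11.6, 11.3, 11.8, 10.7]

xbar = statistics.mean(strengths)    # (a) sample mean
med = statistics.median(strengths)   # (b) sample median
s = statistics.stdev(strengths)      # (c) sample std dev, (n - 1) divisor
prop = sum(x > 10 for x in strengths) / len(strengths)  # (d) proportion
cv = s / xbar                        # (e) estimated coefficient of variation

print(round(xbar, 3), med, round(s, 3), round(prop, 3), round(cv, 3))
```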



The methods of finding point estimators

Outline

1 Introduction

2 The methods of finding point estimators

3 Cramer-Rao inequality, Efficient estimator



The methods of finding point estimators

The method of maximum likelihood


Definition 4
Let f (x1 , . . . , xn ; θ), θ ∈ Θ ⊆ R^m, be the joint pmf or joint pdf of n
random variables X1 , . . . , Xn with sample values x1 , . . . , xn . The
likelihood function of the sample is given by

L(θ; x1 , . . . , xn ) = f (x1 , x2 , . . . , xn ; θ)   [written L(θ) for short].

If X1 , . . . , Xn are discrete iid random variables with pmf p(x, θ),
then the likelihood function is given by

L(θ) = Π_{i=1}^{n} p(x_i , θ)

If X1 , . . . , Xn are continuous iid random variables with pdf f (x, θ),
then the likelihood function is given by

L(θ) = Π_{i=1}^{n} f (x_i , θ)
The methods of finding point estimators

Maximum Likelihood Estimation

Definition 5
Maximum likelihood estimators (MLE) are those values of the
parameters that maximize the likelihood function with respect to the
parameter θ. That is,

L(θ̂; x1 , . . . , xn ) = max_{θ∈Θ} L(θ; x1 , . . . , xn )

where Θ is the set of possible values of the parameter θ.



The methods of finding point estimators

Maximum Likelihood Estimation

Procedure to find the maximum likelihood estimator


1. Define the likelihood function, L(θ).
2. Often it is easier to take the natural logarithm (ln) of L(θ).
3. When applicable, differentiate ln L(θ) with respect to θ, and then
equate the derivative to zero.
4. Solve for the parameter θ, and we will obtain θ̂.
5. Check that the solution is indeed a (global) maximizer.



The methods of finding point estimators

Maximum Likelihood Estimation

Example 3
Suppose X1 , X2 , . . . , Xn is a random sample from an exponential
distribution with parameter λ. Because of independence, the likelihood
function is a product of the individual pdf's:

L(λ) = (1/λ) e^{−x_1/λ} · · · (1/λ) e^{−x_n/λ} = λ^{−n} e^{−Σ x_i /λ}

The natural logarithm of the likelihood function is

ln L(λ) = −n ln λ − Σ x_i / λ.

Equating (d/dλ)[ln L(λ)] to zero results in Σ x_i − nλ = 0, or
λ = n^{−1} Σ x_i = x̄. Thus the MLE of λ is λ̂ = X̄. Is λ̂ unbiased?
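The closed-form answer λ̂ = x̄ can be double-checked numerically by evaluating ln L(λ) over a grid (a sketch with a small hypothetical sample):

```python
import math

# ln L(lambda) = -n ln(lambda) - sum(x_i)/lambda for an exponential sample
xs = [2.1, 0.4, 3.3, 1.7, 0.9]   # hypothetical observations
n, xbar = len(xs), sum(xs) / len(xs)

def log_lik(lam):
    return -n * math.log(lam) - sum(xs) / lam

# brute-force maximization over a fine grid of lambda values
grid = [0.1 + 0.001 * k for k in range(5000)]
best = max(grid, key=log_lik)
print(round(best, 2), round(xbar, 2))  # the grid maximizer matches x-bar
```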



The methods of finding point estimators

Maximum Likelihood Estimation

Example 4
(a) Let X1 , . . . , Xn be a random sample from a Poisson distribution
with the parameter λ > 0. Find the MLE λ̂ of λ. Is λ̂ unbiased?
(b) Traffic engineers use the Poisson distribution to model light
traffic. This is based on the rationale that when the rate is
approximately constant in light traffic, the distribution of counts
of cars in a given time interval should be Poisson. The following
data show the number of vehicles turning left in 15 randomly
chosen 5-minute intervals at a specific intersection. Calculate the
maximum likelihood estimate.

10 17 12 6 12 11 9 6
10 8 8 16 7 10 6
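For part (a), the same calculus as in Example 3 applies: differentiating ln L(λ) = −nλ + (Σ x_i) ln λ − Σ ln(x_i !) and equating to zero gives λ̂ = X̄ for a Poisson sample. A sketch for part (b) using the traffic data:

```python
# Left-turn counts from the 15 five-minute intervals
counts = [10, 17, 12, 6, 12, 11, 9, 6, 10, 8, 8, 16, 7, 10, 6]

# The MLE of lambda for a Poisson sample is the sample mean
lam_hat = sum(counts) / len(counts)
print(round(lam_hat, 3))  # 9.867
```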



The methods of finding point estimators

Estimating Functions of Parameters

Proposition 4 (The Invariance Principle)


Let θ̂1 , θ̂2 , . . . , θ̂m be the MLE’s of the parameters θ1 , θ2 , . . . , θm .
Then the MLE of any one-to-one function h(θ1 , θ2 , . . . , θm ) of these
parameters is the function h(θ̂1 , θ̂2 , . . . , θ̂m ) of the MLE’s.

Example 5
In the normal case, the MLE's of µ and σ² are µ̂ = X̄ and
σ̂² = Σ (X_i − X̄)² / n. Find the MLE of h(µ, σ²) = σ.



The methods of finding point estimators

Large Sample Behavior of the MLE


Proposition 5
Under very general conditions on the joint distribution of the sample,
when the sample size n is large, the maximum likelihood estimator of
any parameter θ is approximately unbiased [E (θ̂) ≈ θ] and has
variance that is either as small as or nearly as small as can be achieved
by any estimator. Stated another way, the MLE θ̂ is approximately the
MVUE of θ.

Remark
Sometimes calculus cannot be used to obtain MLE’s.

Example 6
Suppose my waiting time for a bus is uniformly distributed on [0, θ],
with unknown θ > 0, and the results x1 , · · · , xn of a random sample
from this distribution have been observed. Find the MLE of θ.
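Here the likelihood L(θ) = θ^{−n} for θ ≥ max x_i (and 0 otherwise) is strictly decreasing in θ, so it is maximized at the smallest admissible value, θ̂ = max X_i; no derivative is ever set to zero. A small check with hypothetical data:

```python
# Uniform[0, theta] likelihood: theta**(-n) if every x_i <= theta, else 0
xs = [3.1, 7.4, 2.2, 5.9, 6.8]   # hypothetical waiting times
n = len(xs)

def lik(theta):
    return theta ** (-n) if theta >= max(xs) else 0.0

theta_hat = max(xs)   # the MLE: smallest theta with positive likelihood
assert all(lik(theta_hat) >= lik(t) for t in [4.0, 7.4, 8.0, 12.0])
print(theta_hat)  # 7.4
```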
Cramer-Rao inequality, Efficient estimator

Outline

1 Introduction

2 The methods of finding point estimators

3 Cramer-Rao inequality, Efficient estimator



Cramer-Rao inequality, Efficient estimator

Cramer-Rao inequality, Efficient estimator

Definition 6
The Fisher information I(θ) in a single observation from a pmf or pdf
f (x; θ) is the variance of the random variable U = (∂/∂θ) ln f (x; θ):

I(θ) = V[ (∂/∂θ) ln f (x; θ) ].

Remark 2
There is an alternative expression for I(θ) that is sometimes easier to
compute than the variance in the definition:

I(θ) = −E[ (∂²/∂θ²) ln f (x; θ) ].
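Both formulas can be checked numerically on a concrete model. For a Poisson(λ) observation, ∂/∂λ ln f (x; λ) = x/λ − 1 and ∂²/∂λ² ln f (x; λ) = −x/λ², so both expressions should equal 1/λ (a sketch with λ = 3, my own choice):

```python
import math

lam = 3.0
# Poisson pmf, truncated where the tail mass is negligible
pmf = [math.exp(-lam) * lam ** x / math.factorial(x) for x in range(60)]

# Definition: I(lam) = V[d/dlam ln f] = E[(x/lam - 1)^2] (the score has mean 0)
score_sq = sum(p * (x / lam - 1) ** 2 for x, p in enumerate(pmf))

# Remark 2: I(lam) = -E[d^2/dlam^2 ln f] = E[x / lam^2]
neg_second = sum(p * x / lam ** 2 for x, p in enumerate(pmf))

print(round(score_sq, 6), round(neg_second, 6))  # both 1/lam = 0.333333
```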



Cramer-Rao inequality, Efficient estimator

Cramer-Rao inequality, Efficient estimator


Theorem 2 (Cramer-Rao inequality)
Assume a random sample X1 , X2 , . . . , Xn from the distribution with
pmf or pdf f (x; θ) such that the set of possible values does not depend
on θ. If the statistic θ̂ = t(X1 , X2 , . . . , Xn ) is an unbiased estimator for
the parameter θ, then
V(θ̂) ≥ 1 / V[ (∂/∂θ) ln f (X1 , . . . , Xn ; θ) ] = 1 / (n I(θ)).   (1)

Definition 7
Let θ̂ be an unbiased estimator of θ. The ratio of the Cramer-Rao lower
bound to the variance of θ̂ is called its efficiency. θ̂ is said to be an
efficient estimator if it attains the Cramer-Rao lower bound (i.e., its
efficiency is 1). An efficient estimator is a minimum variance unbiased
estimator (MVUE).
Cramer-Rao inequality, Efficient estimator

Cramer-Rao inequality, Efficient estimator

Definition 8
An unbiased estimator θ̂ is said to be efficient if

V(θ̂) = 1 / (n I(θ)).

Example 7 (Poisson example, continued)


Let X1 , · · · , Xn be a random sample from a Poisson distribution with
the parameter λ > 0. Let λ̂ be the MLE of the parameter λ. Is λ̂
efficient?
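A simulation sketch of why the answer is yes: for Poisson, I(λ) = 1/λ, so the Cramer-Rao bound is λ/n, and V(λ̂) = V(X̄) = λ/n attains it exactly (λ, n, and the replication count below are my own choices):

```python
import math
import random

random.seed(2)

lam, n, reps = 4.0, 25, 20000

def poisson(mu):
    # Knuth's multiplication method; fine for small mu
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Empirical variance of lambda-hat = X-bar over many samples
means = []
for _ in range(reps):
    xs = [poisson(lam) for _ in range(n)]
    means.append(sum(xs) / n)

m = sum(means) / reps
var_hat = sum((x - m) ** 2 for x in means) / reps

print(round(var_hat, 3), lam / n)  # empirical variance vs. bound 0.16
```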

Remark
An efficient estimator need not exist; but if one does exist and is
unbiased, it is the MVUE.

