CE 207 Lecture 07 - Estimation of Parameters

This document discusses parameter estimation techniques, including point estimation. It defines key terms like point estimator, which is a single numerical value used to estimate an unknown population parameter. Common point estimators include the sample mean, variance, and proportion. Maximum likelihood estimation and the method of moments are introduced as approaches to derive point estimators. Examples are provided for the Bernoulli, Poisson, and normal distributions. The document also defines population moments and sample moments, and how sample moments can be used to estimate population parameters.


CE 207: Applied Mathematics for Engineers

Lecture #7
Parameter Estimation
(Ref: Chapter 7 of Sheldon M. Ross)

Dr. Sheikh Mokhlesur Rahman


Associate Professor, Dept. of CE
Contact: [email protected]
Point Estimator

• A point estimate is a reasonable value of a population parameter.
• A point estimate of some population parameter θ is a single numerical value θ̂.
• The statistic Θ̂ is called the point estimator.

As an example, suppose the random variable X is normally distributed with an unknown mean μ. The sample mean is a point estimator of the unknown population mean μ; that is, μ̂ = X̄. After the sample has been selected, the numerical value x̄ is the point estimate of μ.

Thus, if x₁ = 25, x₂ = 30, x₃ = 29, x₄ = 31, the point estimate of μ is

x̄ = (25 + 30 + 29 + 31) / 4 = 28.75

January 2023 Semester - SMR CE 207_Point Estimation
Parameters and Their Statistics

Parameter   Measure                                        Statistic
μ           Mean of a single population                    x̄
σ²          Variance of a single population                s²
σ           Standard deviation of a single population      s
p           Proportion of a single population              p̂
μ₁ − μ₂     Difference in means of two populations         x̄₁ − x̄₂
p₁ − p₂     Difference in proportions of two populations   p̂₁ − p̂₂


General Concepts of Point Estimation

• We want point estimators that:
  ▪ Are unbiased.
  ▪ Have minimum variance.
• The standard error of an estimator measures its precision; together with any bias, it determines the estimator's mean squared error.


Methods of Point Estimation

• There are three methodologies for creating point estimates of a population parameter:
  ▪ Method of moments
  ▪ Method of maximum likelihood
  ▪ Bayesian estimation of parameters
• Each approach can produce estimators with different degrees of bias and different relative MSE efficiencies.


Maximum Likelihood Estimators

• Suppose that X is a random variable with probability distribution f(x; θ), where θ is a single unknown parameter. Let x₁, x₂, …, xₙ be the observed values in a random sample of size n. Then the likelihood function of the sample is:

L(θ) = f(x₁; θ) · f(x₂; θ) · … · f(xₙ; θ)

• Note that the likelihood function is a function of only the unknown parameter θ. The maximum likelihood estimator (MLE) of θ is the value of θ that maximizes the likelihood function L(θ).
• If X is a discrete random variable, then L(θ) is the probability of obtaining exactly those sample values. The MLE is the θ that maximizes that probability.
Bernoulli MLE

• Let X be a Bernoulli random variable. The probability mass function is f(x; p) = pˣ(1 − p)¹⁻ˣ, x = 0, 1, where p is the parameter to be estimated. The likelihood function of a random sample of size n is:

L(p) = p^x₁ (1 − p)^(1−x₁) · p^x₂ (1 − p)^(1−x₂) · … · p^xₙ (1 − p)^(1−xₙ)
     = ∏ᵢ₌₁ⁿ p^xᵢ (1 − p)^(1−xᵢ) = p^(Σᵢ₌₁ⁿ xᵢ) (1 − p)^(n − Σᵢ₌₁ⁿ xᵢ)

ln L(p) = (Σᵢ₌₁ⁿ xᵢ) ln p + (n − Σᵢ₌₁ⁿ xᵢ) ln(1 − p)

d ln L(p) / dp = (Σᵢ₌₁ⁿ xᵢ) / p − (n − Σᵢ₌₁ⁿ xᵢ) / (1 − p) = 0

p̂ = (Σᵢ₌₁ⁿ xᵢ) / n
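The closed-form result p̂ = (Σ xᵢ)/n can be checked numerically by maximizing ln L(p) over a grid of candidate values. A minimal sketch in pure Python; the sample values are made up for illustration:

```python
import math

# Hypothetical Bernoulli sample (made-up data for illustration)
x = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
n = len(x)

# Closed-form MLE: the sample proportion of successes
p_hat = sum(x) / n

# Numeric check: maximize the log-likelihood over a fine grid of p values
def log_lik(p):
    s = sum(x)
    return s * math.log(p) + (n - s) * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]  # p in (0, 1)
p_grid = max(grid, key=log_lik)

print(p_hat)   # 0.7
print(p_grid)  # 0.7 (grid maximum agrees with the closed form)
```

Because ln L(p) is concave in p, the grid maximum lands on the stationary point found by calculus above.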
MLE of Poisson Distribution

• Let X₁, X₂, …, Xₙ be independent Poisson random variables, each having mean λ. Let us determine this Poisson parameter from sample values x₁, x₂, …, xₙ.

The likelihood function is given by

L(λ) = (e^(−λ) λ^x₁ / x₁!) · (e^(−λ) λ^x₂ / x₂!) · … · (e^(−λ) λ^xₙ / xₙ!) = e^(−nλ) λ^(Σᵢ₌₁ⁿ xᵢ) / (x₁! x₂! … xₙ!)

Thus, ln L(λ) = −nλ + (Σᵢ₌₁ⁿ xᵢ) ln λ − ln(∏ᵢ₌₁ⁿ xᵢ!)

Differentiating, d ln L(λ) / dλ = −n + (Σᵢ₌₁ⁿ xᵢ) / λ = 0

i.e., λ̂ = (Σᵢ₌₁ⁿ xᵢ) / n
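So the MLE of λ is simply the sample mean. A quick numeric confirmation with made-up count data, again maximizing ln L(λ) on a grid; the factorial term is constant in λ, so it can be dropped from the objective:

```python
import math

# Hypothetical Poisson counts (made-up data for illustration)
x = [2, 3, 1, 4, 2, 0, 3, 2]
n = len(x)

lam_hat = sum(x) / n  # closed-form MLE: the sample mean

# ln L(λ) up to the constant −ln(∏ xi!), which does not affect the argmax
def log_lik(lam):
    return -n * lam + sum(x) * math.log(lam)

grid = [i / 1000 for i in range(1, 10000)]  # λ in (0, 10)
lam_grid = max(grid, key=log_lik)

print(lam_hat)   # 2.125
print(lam_grid)  # 2.125 (grid maximum agrees with the closed form)
```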


Normal MLE for μ

• Let X be a normal random variable with unknown mean μ and known variance σ². The likelihood function of a random sample of size n is:

L(μ) = ∏ᵢ₌₁ⁿ (1 / (σ√(2π))) e^(−(xᵢ − μ)² / (2σ²))
     = (2πσ²)^(−n/2) e^(−(1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ − μ)²)

ln L(μ) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ − μ)²

d ln L(μ) / dμ = (1/σ²) Σᵢ₌₁ⁿ (xᵢ − μ) = 0

μ̂ = (Σᵢ₌₁ⁿ xᵢ) / n = X̄
Normal MLEs for μ and σ²

• Let X be a normal random variable with both mean μ and variance σ² unknown. The likelihood function of a random sample of size n is:

L(μ, σ²) = ∏ᵢ₌₁ⁿ (1 / (σ√(2π))) e^(−(xᵢ − μ)² / (2σ²))
         = (2πσ²)^(−n/2) e^(−(1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ − μ)²)

ln L(μ, σ²) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ − μ)²

∂ ln L(μ, σ²) / ∂μ = (1/σ²) Σᵢ₌₁ⁿ (xᵢ − μ) = 0

∂ ln L(μ, σ²) / ∂σ² = −n/(2σ²) + (1/(2σ⁴)) Σᵢ₌₁ⁿ (xᵢ − μ)² = 0

μ̂ = X̄ and σ̂² = (Σᵢ₌₁ⁿ (xᵢ − X̄)²) / n
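Note that the MLE σ̂² divides by n, not n − 1, so it differs from the usual sample variance s². A minimal sketch of both (the data values are made up for illustration):

```python
import statistics

# Hypothetical normal sample (made-up data for illustration)
x = [25.0, 30.0, 29.0, 31.0]
n = len(x)

mu_hat = sum(x) / n                                   # MLE of μ: the sample mean
sigma2_hat = sum((xi - mu_hat) ** 2 for xi in x) / n  # MLE of σ²: divides by n

s2 = statistics.variance(x)  # sample variance s²: divides by n - 1

print(mu_hat)      # 28.75
print(sigma2_hat)  # 5.1875
print(s2)          # ≈ 6.9167
```

The divide-by-n estimator is biased (it underestimates σ² on average), which connects to the later slides on bias and mean squared error.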
Moments Defined

• The kth population moment is E[Xᵏ], k = 1, 2, ….
• The kth sample moment is (1/n) Σᵢ₌₁ⁿ Xᵢᵏ, k = 1, 2, ….
• If k = 1 (called the first moment), then:
  ▪ The population moment is μ, and the sample moment is X̄.
• The sample mean is the moment estimator of the population mean.


Moment Estimators

• Let X₁, X₂, …, Xₙ be a random sample from either a probability mass function or a probability density function with m unknown parameters θ₁, θ₂, …, θₘ.

• The moment estimators Θ̂₁, Θ̂₂, …, Θ̂ₘ are found by equating the first m population moments to the first m sample moments and solving the resulting simultaneous equations for the unknown parameters.


Normal Moment Estimators

• Suppose that X₁, X₂, …, Xₙ is a random sample from a normal distribution with parameters μ and σ². So E(X) = μ and E(X²) = μ² + σ².

Equating population moments to sample moments:

μ̂ = X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ and μ̂² + σ̂² = (1/n) Σᵢ₌₁ⁿ Xᵢ²

Solving for σ̂²:

σ̂² = (1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄² = (Σᵢ₌₁ⁿ Xᵢ² − (1/n)(Σᵢ₌₁ⁿ Xᵢ)²) / n = (Σᵢ₌₁ⁿ (Xᵢ − X̄)²) / n
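The several expressions for σ̂² above are algebraically identical, which is easy to verify numerically (the data values are made up for illustration):

```python
# Hypothetical sample (made-up data for illustration)
x = [25.0, 30.0, 29.0, 31.0]
n = len(x)

xbar = sum(x) / n  # first sample moment: moment estimator of μ

# Moment estimator of σ² via the second sample moment
sigma2_a = sum(xi ** 2 for xi in x) / n - xbar ** 2

# Equivalent centered form
sigma2_b = sum((xi - xbar) ** 2 for xi in x) / n

print(xbar)      # 28.75
print(sigma2_a)  # 5.1875
print(sigma2_b)  # 5.1875
```

For the normal distribution, the moment estimators of μ and σ² coincide with the maximum likelihood estimators derived earlier.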
Unbiased Estimators Defined

• The point estimator Θ̂ is an unbiased estimator of the parameter θ if:

E(Θ̂) = θ

• If the estimator is not unbiased, then the difference

E(Θ̂) − θ

is called the bias of the estimator Θ̂.

• Equivalently, Θ̂ is unbiased if the mean of the sampling distribution of Θ̂ is equal to θ.


Choosing Among Unbiased Estimators

Suppose that Θ̂₁ and Θ̂₂ are unbiased estimators of θ. If the variance of Θ̂₁ is less than the variance of Θ̂₂, then Θ̂₁ is preferable.

If we consider all unbiased estimators of θ, the one with the smallest variance is called the minimum variance unbiased estimator (MVUE).

Figure: the sampling distributions of two unbiased estimators.
Standard Error of an Estimator

The standard error of an estimator Θ̂ is its standard deviation, given by

σ_Θ̂ = √V(Θ̂).

If the standard error involves unknown parameters that can be estimated, substituting these values into σ_Θ̂ produces an estimated standard error, denoted by σ̂_Θ̂.

Equivalent notation: σ̂_Θ̂ = s_Θ̂ = se(Θ̂)

If the Xᵢ are ~N(μ, σ²), then X̄ is normally distributed and σ_X̄ = σ/√n. If σ is not known, then σ̂_X̄ = s/√n.
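For the sample mean, the estimated standard error s/√n is computed directly from the data. A minimal sketch (the sample values are made up for illustration):

```python
import math
import statistics

# Hypothetical sample (made-up data for illustration)
x = [25.0, 30.0, 29.0, 31.0]
n = len(x)

xbar = statistics.mean(x)
s = statistics.stdev(x)     # sample standard deviation (divides by n - 1)
se_xbar = s / math.sqrt(n)  # estimated standard error of the sample mean

print(xbar)     # 28.75
print(se_xbar)  # ≈ 1.315
```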


Mean Squared Error

The mean squared error of an estimator Θ̂ of the parameter θ is defined as:

MSE(Θ̂) = E(Θ̂ − θ)²

This can be rewritten as

MSE(Θ̂) = E[Θ̂ − E(Θ̂)]² + [θ − E(Θ̂)]² = Var(Θ̂) + (bias)²

• The mean squared error (MSE) of the estimator is equal to the variance of the estimator plus the squared bias.
• It therefore measures both characteristics.
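The variance-plus-squared-bias trade-off can be illustrated by simulation: for normal data, the biased variance estimator σ̂² (dividing by n) typically has a smaller MSE than the unbiased s², even though it carries a bias. A sketch with made-up simulation settings (μ, σ², n, and trial count are all arbitrary choices):

```python
import random

random.seed(1)
mu, sigma2, n, trials = 0.0, 4.0, 10, 20000

mse_mle, mse_s2 = 0.0, 0.0
for _ in range(trials):
    x = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(x) / n
    ss = sum((xi - xbar) ** 2 for xi in x)
    mse_mle += (ss / n - sigma2) ** 2       # biased MLE: divide by n
    mse_s2 += (ss / (n - 1) - sigma2) ** 2  # unbiased s²: divide by n - 1

mse_mle /= trials
mse_s2 /= trials

# Theory for N(μ, σ²): MSE(s²) = 2σ⁴/(n−1) ≈ 3.56, MSE(σ̂²) = (2n−1)σ⁴/n² = 3.04
print(mse_mle < mse_s2)  # True
```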


Optimal Estimator

• A biased estimator can be preferred to an unbiased estimator if it has a smaller MSE.
• Biased estimators are occasionally used in linear regression.
• An estimator whose MSE is smaller than that of any other estimator is called an optimal estimator.

Figure: a biased estimator (Θ̂₁) has a smaller variance than the unbiased estimator (Θ̂₂).


Practice Problems

• Let X be an exponential random variable with parameter λ and density function f(x) = λe^(−λx), x ≥ 0. Find the estimator of λ using a) the maximum likelihood method, and b) the method of moments.

• Chapter 7 (Ross): Problems 1, 2, 4
