
Mekelle University

Ethiopian Institute of Technology- Mekelle


School of Electrical and Computer Engineering

Course: Probability and Random Process (ECEg 2061)

Prepared by:
Ephrem Biedu (MSc.)
Chapter-5: Estimation Theory
This chapter covers:
• Types of estimation theory
  – Parameter (variable) estimation
  – Value estimation
• Parameter estimation types
  – Maximum likelihood estimation
  – Bayes estimation
• Value estimation types
  – Mean square estimation (MSE)
  – Linear mean square estimation (LMSE)

03/01/2025 2024/2025 2
5.1 Introduction to estimation theory
• Estimation means finding the value of an unknown statistical parameter based on known observations (data).
• There are two types of estimation techniques:
1) Parameter (variable) estimation
2) Value estimation
5.2 Parameter estimation
• Parameter estimation means predicting or estimating the unknown parameter value based on the observed statistics.
• Let X be a rv with pdf f(x; θ) which depends on an unknown parameter θ.
• Let X1, X2, …, Xn be a set of n independent rv's, each with pdf f(x; θ), be random samples of X. The joint pdf of the random samples is given by,
f(x1, x2, …, xn; θ) = f(x1; θ) f(x2; θ) ⋯ f(xn; θ)



Cont'd…
• An estimator is a function of the random sample, Θ̂ = g(X1, X2, …, Xn). For a particular set of observations (x1, x2, …, xn), the value of the estimator will be called an estimate of θ and denoted by θ̂ = g(x1, x2, …, xn).
• An estimator is a r.v., and an estimate is a particular realization of it.
• An estimate of a parameter can have a single value or a range of values.
• Estimates which specify a single value are called point estimates, and estimates which specify a range of values are called interval estimates.
Cont'd…
Properties of Point Estimators
• Point estimators should be unbiased, efficient and consistent estimators of the unknown parameter θ.
1) Unbiased estimator: an estimator Θ̂ is said to be an unbiased estimator of the unknown parameter θ if E[Θ̂] = θ.
• If Θ̂ is an unbiased estimator, then its mean square error equals its variance and is given by,
E[(Θ̂ − θ)²] = Var(Θ̂)
Example 5.1: Let X1, X2, …, Xn be a random sample of X having unknown mean µ. Show that the estimator of µ defined by M = (1/n)(X1 + X2 + … + Xn) is an unbiased estimator of µ.
Solution: E[M] = (1/n)(E[X1] + E[X2] + … + E[Xn]) = (1/n)(nµ) = µ. Thus, M is an unbiased estimator of µ.
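The unbiasedness of the sample mean in Example 5.1 can be checked by simulation. A minimal sketch; the normal distribution, µ = 3.0, and n = 10 are illustrative choices, not from the text:

```python
import numpy as np

# Sketch: check empirically that the sample mean M = (1/n) * sum(X_i)
# is an unbiased estimator of mu.  The normal model, mu = 3.0, and
# n = 10 are illustrative assumptions.
rng = np.random.default_rng(0)
mu, n, trials = 3.0, 10, 20000

# Each row is one random sample (X1, ..., Xn); each row's mean is one
# realization of the estimator M.
samples = rng.normal(loc=mu, scale=2.0, size=(trials, n))
estimates = samples.mean(axis=1)

# E[M] should be close to mu (unbiasedness), up to Monte Carlo error.
print(abs(estimates.mean() - mu) < 0.05)
```

Averaging many realizations of M approximates E[M]; the estimate hovers around µ regardless of n, which is exactly what unbiasedness asserts.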
Cont'd…
2) Efficient estimator: An estimator Θ̂1 is said to be a more efficient estimator of the parameter θ than the estimator Θ̂2 if,
a) Θ̂1 and Θ̂2 are both unbiased estimators of θ, and
b) Var(Θ̂1) < Var(Θ̂2).
Example 5.2: Let (X1, X2) be a random sample of a Poisson rv X with unknown parameter λ. If the unknown parameter λ is estimated by the estimators Λ̂1 = (X1 + X2)/2 and Λ̂2 = (X1 + 2X2)/3, show that the estimator Λ̂1 is a more efficient estimator of λ than Λ̂2.

Cont'd…
Solution: To say that Λ̂1 is a more efficient estimator of λ than Λ̂2, both estimators must be unbiased and Var(Λ̂1) < Var(Λ̂2). Thus,
E[Λ̂1] = (E[X1] + E[X2])/2 = (λ + λ)/2 = λ, and E[Λ̂2] = (E[X1] + 2E[X2])/3 = (λ + 2λ)/3 = λ. Thus, both estimators are unbiased estimators of λ.
Var(Λ̂1) = (Var(X1) + Var(X2))/4 = λ/2, and Var(Λ̂2) = (Var(X1) + 4Var(X2))/9 = 5λ/9. Since λ/2 < 5λ/9, Λ̂1 is a more efficient estimator of λ than Λ̂2.
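The variance comparison can be checked by simulation. A minimal sketch, assuming the unbiased estimator pair Λ̂1 = (X1 + X2)/2 and Λ̂2 = (X1 + 2X2)/3 and an illustrative value λ = 4.0:

```python
import numpy as np

# Sketch: compare the variances of two unbiased estimators of a
# Poisson parameter lam.  The estimator pair and lam = 4.0 are
# illustrative assumptions.
rng = np.random.default_rng(1)
lam, trials = 4.0, 200000
x1 = rng.poisson(lam, trials)
x2 = rng.poisson(lam, trials)

l1 = (x1 + x2) / 2          # theoretical variance: lam/2
l2 = (x1 + 2 * x2) / 3      # theoretical variance: 5*lam/9

# Both estimators are unbiased, but Var(L1) < Var(L2),
# so L1 is the more efficient of the two.
print(l1.var() < l2.var())
```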

Cont'd…
3) Consistent estimator: The estimator Θ̂n of θ based on a random sample of size n is said to be consistent if, for any small ε > 0,
lim(n→∞) P(|Θ̂n − θ| < ε) = 1
Exercise: Show that if lim(n→∞) E[Θ̂n] = θ and lim(n→∞) Var(Θ̂n) = 0, then the estimator Θ̂n is consistent.
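The exercise can be illustrated numerically with the sample mean, for which E[Mn] = µ and Var(Mn) = σ²/n → 0: the probability P(|Mn − µ| < ε) climbs toward 1 as n grows. A sketch with illustrative choices (exponential data, ε = 0.1):

```python
import numpy as np

# Sketch: consistency of the sample mean Mn.  Since Var(Mn) -> 0,
# P(|Mn - mu| < eps) -> 1 as n grows.  The exponential model,
# mu = 1.5, and eps = 0.1 are illustrative assumptions.
rng = np.random.default_rng(2)
mu, eps, trials = 1.5, 0.1, 5000

def coverage(n):
    # Estimate P(|Mn - mu| < eps) by Monte Carlo for sample size n.
    m = rng.exponential(scale=mu, size=(trials, n)).mean(axis=1)
    return np.mean(np.abs(m - mu) < eps)

# Coverage should increase toward 1 as n grows.
print(coverage(10), coverage(100), coverage(1000))
```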


• There are two types of parameter estimation techniques. These are,
1. Maximum likelihood estimation
2. Bayes’ estimation

5.2.1 Maximum Likelihood estimation
• Let f(x1, …, xn; θ) denote the joint pmf of the rv's (X1, …, Xn) when they are discrete, and let it be their joint pdf when they are continuous.
• Let L(θ) = f(x1, …, xn; θ), where L(θ) represents the likelihood that the values x1, …, xn will be observed when θ is the true value of the parameter.
• L(θ) is often referred to as the likelihood function of the random sample.
• Let θ̂_ML be the maximizing value of L(θ); that is,
L(θ̂_ML) = max over θ of L(θ). Then the maximum-likelihood estimator of θ is,
Θ̂_ML = g(X1, …, Xn), and θ̂_ML = g(x1, …, xn) is the maximum-likelihood estimate of θ.

Cont'd…
Example 5.5: Let X1, X2, …, Xn be a random sample of an exponential rv X with unknown parameter λ. Determine the maximum likelihood estimator of λ.
Solution: the pdf of an exponentially distributed rv X is given by,
f(x; λ) = λe^(−λx), x > 0.
• For any independent statistics x1, x2, …, xn, the joint pdf becomes,
L(λ) = f(x1, …, xn; λ) = λⁿ e^(−λ(x1 + x2 + … + xn))
Cont'd…
• Taking the natural logarithm of the expression we get,
ln L(λ) = n ln λ − λ(x1 + … + xn), and d[ln L(λ)]/dλ = n/λ − (x1 + … + xn).
Setting d[ln L(λ)]/dλ = 0, the maximum-likelihood estimate of λ is obtained as
λ̂_ML = n/(x1 + … + xn). Hence, the maximum-likelihood estimator of λ is given by,
Λ̂_ML = n/(X1 + … + Xn) = 1/X̄
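The closed-form estimate λ̂_ML = n/(x1 + … + xn) can be cross-checked against a brute-force maximization of the log-likelihood. A sketch with an illustrative true value λ = 0.5:

```python
import numpy as np

# Sketch: for exponential data the log-likelihood is
#   ln L(lam) = n*ln(lam) - lam*sum(x),
# maximized at lam_hat = n / sum(x).  The true lam = 0.5 and the
# sample size are illustrative assumptions.
rng = np.random.default_rng(3)
x = rng.exponential(scale=1 / 0.5, size=1000)

def log_likelihood(lam):
    return len(x) * np.log(lam) - lam * x.sum()

# Closed-form ML estimate versus a brute-force grid maximization.
lam_closed = len(x) / x.sum()
grid = np.linspace(0.01, 2.0, 20000)
lam_grid = grid[np.argmax(log_likelihood(grid))]

print(lam_closed, lam_grid)  # the two should agree closely
```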

Cont'd…
Example 5.6: In analyzing the flow of traffic through a drive-in bank, the times (in minutes) between arrivals of 10 customers are recorded as 3.2, 2.1, 5.3, 4.2, 1.2, 2.8, 6.4, 1.5, 1.9, and 3.0. Assuming that the interarrival time is an exponential r.v. with parameter λ, find the maximum likelihood estimate of λ.
Solution: The maximum likelihood estimate of λ for an exponentially distributed rv X is given by,
λ̂_ML = n/(x1 + … + xn). Here n = 10 and the summation of the data is 31.6, so
λ̂_ML = 10/31.6 ≈ 0.3165 arrivals per minute.
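The arithmetic of Example 5.6 as a quick computation:

```python
# Sketch of Example 5.6: the ML estimate of lam is n / sum(x_i)
# for the 10 recorded interarrival times.
times = [3.2, 2.1, 5.3, 4.2, 1.2, 2.8, 6.4, 1.5, 1.9, 3.0]
lam_hat = len(times) / sum(times)   # 10 / 31.6
print(round(lam_hat, 4))            # → 0.3165
```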

5.2.2 Bayes estimation
• Suppose that the unknown parameter θ is considered to be a rv having some fixed distribution or prior pdf f(θ). Then the joint pdf f(x1, …, xn; θ) is viewed as a conditional pdf and written as f(x1, …, xn | θ). Hence, the Bayes estimate of θ is given by,
θ̂_B = E[θ | x1, …, xn] = ∫ θ f(θ | x1, …, xn) dθ.
Where f(θ | x1, …, xn) is called the posterior pdf of θ and is given by,
f(θ | x1, …, xn) = f(x1, …, xn | θ) f(θ) / f(x1, …, xn)
Cont'd…
• f(x1, …, xn) is called the marginal pdf of the random samples and is given by,
f(x1, …, xn) = ∫ f(x1, …, xn | θ) f(θ) dθ
• The Bayes estimator of the unknown parameter θ is given by,
Θ̂_B = E[θ | X1, …, Xn]

Cont'd…
Example 5.7: Let X1, X2, …, Xn be a random sample of an exponential rv X with unknown parameter λ. Assume that λ is also an exponential rv with parameter α. Find the Bayes estimator of λ.
Solution:
The prior pdf of λ is f(λ) = αe^(−αλ), λ > 0, and the conditional joint pdf is
f(x1, …, xn | λ) = λⁿ e^(−λs), where s = x1 + x2 + … + xn.
• The marginal pdf of the random samples (x1, …, xn) is given by,
f(x1, …, xn) = ∫₀^∞ λⁿ e^(−λs) αe^(−αλ) dλ = α n!/(s + α)^(n+1)
Cont'd…
• The posterior pdf of λ is given by,
f(λ | x1, …, xn) = f(x1, …, xn | λ) f(λ) / f(x1, …, xn) = (s + α)^(n+1) λⁿ e^(−(s+α)λ) / n!
• Thus, the Bayes estimate of λ is given by,
λ̂_B = E[λ | x1, …, xn] = ∫₀^∞ λ f(λ | x1, …, xn) dλ = (n + 1)/(s + α)
Cont'd…
Therefore, the Bayes estimator of λ is given by,
Λ̂_B = (n + 1)/(X1 + X2 + … + Xn + α)
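A Bayes estimate of the form (n + 1)/(s + α) can be cross-checked by numerically averaging λ over a posterior proportional to λⁿ e^(−(s+α)λ). A sketch with illustrative values (α = 2.0 and a small made-up data set):

```python
import numpy as np

# Sketch: with prior f(lam) = alpha*exp(-alpha*lam), the posterior is
# proportional to lam^n * exp(-(s + alpha)*lam) and its mean is
# (n + 1)/(s + alpha).  alpha = 2.0 and the data are illustrative.
alpha = 2.0
x = np.array([0.8, 1.3, 0.4, 2.1, 0.9])
n, s = len(x), x.sum()

bayes_closed = (n + 1) / (s + alpha)

# Cross-check by numerical averaging over the unnormalized posterior.
lam = np.linspace(1e-6, 20, 400000)
post = lam**n * np.exp(-(s + alpha) * lam)
bayes_numeric = (lam * post).sum() / post.sum()  # Riemann-sum ratio

print(bayes_closed, bayes_numeric)  # should agree closely
```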

5.3 Value estimation
• Value estimation means estimating the value of an inaccessible rv in terms of the observation of an accessible rv, or vice versa.
• There are two types of value estimation techniques. These
are,
1. Mean square estimation (MSE)
2. Linear mean square estimation (LMSE)

5.3.1 Mean Square Estimation
• Mean square estimation means estimating the value of the unknown rv Y in terms of the known rv X by an estimator Ŷ = g(X) that minimizes the mean square estimation error,
e = E[(Y − Ŷ)²] = E[(Y − g(X))²]
• The best estimator of Y, in the sense that the mean square error is minimum, is given by,
Ŷ = E[Y | X]
Cont'd…
Example 5.9: Let Y be a given function of X, and let X be a uniform r.v. over (0, 2). Find the m.s. estimator of Y in terms of X and its m.s. error value.
Solution: The m.s. estimate of Y is given by,
ŷ(x) = E[Y | X = x].
Hence, the m.s. estimator of Y is Ŷ = E[Y | X], and its m.s. error value is e = E[(Y − Ŷ)²].

5.3.2 Linear mean square estimation
• Let Y be an inaccessible rv and X a rv whose value is known; then Y can be estimated by a linear combination of the rv X, given by,
Ŷ = aX + b.
• We would like to find the values of a and b such that the m.s. error,
e = E[(Y − Ŷ)²] = E[(Y − aX − b)²],
is minimum.
• The mean square error e is minimized by setting the partial derivatives of e with respect to a and b to zero. Hence,
∂e/∂a = −2E[X(Y − aX − b)] = 0, and ∂e/∂b = −2E[Y − aX − b] = 0. This implies,
a = Cov(X, Y)/Var(X) = (E[XY] − E[X]E[Y]) / (E[X²] − (E[X])²), and b = E[Y] − aE[X].
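The formulas for a and b can be verified by simulation: sample moments recover the coefficients of an underlying linear model. A sketch assuming an illustrative model Y = 3X + 1 + noise:

```python
import numpy as np

# Sketch: the linear m.s. estimator Yhat = a*X + b uses
#   a = Cov(X, Y)/Var(X),   b = E[Y] - a*E[X].
# The model Y = 3*X + 1 + noise is an illustrative assumption.
rng = np.random.default_rng(5)
x = rng.uniform(0, 2, 200000)
y = 3 * x + 1 + rng.normal(0, 0.5, x.size)

a = np.cov(x, y, ddof=0)[0, 1] / x.var()
b = y.mean() - a * x.mean()

# a and b should recover roughly 3 and 1; the residual m.s. error
# approaches the noise variance 0.25.
e_min = np.mean((y - (a * x + b)) ** 2)
print(a, b, e_min)
```

Using `ddof=0` in `np.cov` matches the population-style normalization of `ndarray.var`, so the ratio is exactly the sample analogue of Cov(X, Y)/Var(X).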
Cont'd…
Example 5.10: Let Y be a given function of X, and let X be a uniform r.v. over (0, 2). Find the linear m.s. estimator of Y in terms of X and its minimum error value.
Solution: the linear mean square estimator of Y in terms of X is,
Ŷ = aX + b, where a = Cov(X, Y)/Var(X) and b = E[Y] − aE[X].
For X uniform over (0, 2): E[X] = 1, E[X²] = 4/3, and Var(X) = 1/3. The remaining moments E[Y] and E[XY] follow from the definition of Y.
Cont'd…
• Therefore, the linear mean square estimator of Y is Ŷ = aX + b, with a and b as computed above.
• Its minimum error value is,
e_min = Var(Y) − a²Var(X) = Var(Y)(1 − ρ²), where ρ is the correlation coefficient of X and Y.

