
ECE531 Screencast 2.1: Introduction to the Cramer-Rao Lower Bound (CRLB)

D. Richard Brown III

Worcester Polytechnic Institute


Introduction

◮ Context: We are interested in understanding the performance of unbiased estimators under the squared error cost function.
◮ Squared error: Estimator variance var(θ̂(Y)) determines performance.
◮ The CRLB gives a lower bound on the variance of an unbiased estimator: var(θ̂(Y)) ≥ CRLB.

Why is this important?

◮ If a given unbiased estimator achieves the CRLB, i.e. var(θ̂(Y)) = CRLB, it must be the MVU estimator (see the numerical sketch after this list).
◮ A good lower bound also provides a benchmark by which we can compare the performance of different estimators.
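
Aside (not from the slides): a minimal Monte Carlo sketch, in Python, of the classical Gaussian example. For N i.i.d. samples from N(θ, σ²), the CRLB for estimating θ is σ²/N, and the sample mean is an unbiased estimator that attains it, so it is the MVU estimator. All names and parameter values below (theta, sigma, N, trials) are illustrative.

    # Monte Carlo check: the sample mean of N i.i.d. N(theta, sigma^2) samples
    # is unbiased and its variance matches the CRLB sigma^2 / N.
    import numpy as np

    rng = np.random.default_rng(0)
    theta, sigma, N, trials = 2.0, 1.5, 25, 200_000

    y = rng.normal(theta, sigma, size=(trials, N))  # each row: one observation vector
    theta_hat = y.mean(axis=1)                      # unbiased estimator theta_hat(Y)

    print("empirical var(theta_hat):", theta_hat.var())
    print("CRLB = sigma^2 / N      :", sigma**2 / N)
    # The two values agree up to Monte Carlo error, so the sample mean is
    # efficient: it achieves the CRLB and is therefore the MVU estimator.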


Intuition: When Can We Expect Low Variance?


Recall our unknown parameter θ ∈ Λ.

Suppose our parameter space Λ = R and we get a scalar observation distributed as pY(y; θ) = U(0, 1). What can we say about the performance of a good estimator θ̂(y) in this case?

Suppose now that we get a scalar observation distributed as pY(y; θ) = U(θ − ǫ, θ + ǫ) for some small value of ǫ. What can we say about the performance of a good estimator θ̂(y) in this case?
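
Aside (not from the slides): a quick Python simulation contrasting the two cases above. In the first model the density of y does not depend on θ at all, so the observation carries no information about θ; in the second, θ̂(y) = y is unbiased with variance (2ǫ)²/12 = ǫ²/3, which is tiny for small ǫ. Parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    theta, eps, trials = 3.0, 0.01, 100_000

    # Case 1: y ~ U(0, 1) no matter what theta is -- y is useless for estimation.
    y1 = rng.uniform(0.0, 1.0, size=trials)

    # Case 2: y ~ U(theta - eps, theta + eps) -- y pins theta down to within eps.
    y2 = rng.uniform(theta - eps, theta + eps, size=trials)

    print("case 1, MSE of theta_hat(y) = y:", np.mean((y1 - theta)**2))
    print("case 2, MSE of theta_hat(y) = y:", np.mean((y2 - theta)**2))
    print("case 2, predicted eps^2 / 3    :", eps**2 / 3)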


Intuition: When Can We Expect Low Variance?


◮ The minimum achievable variance of an estimator is somehow related to the sensitivity of the density pY(y; θ) to changes in the parameter θ.
◮ If the density pY(y; θ) is insensitive to the parameter θ, then we can’t expect even the MVU estimator to do very well.
◮ If the density pY(y; θ) is sensitive to changes in the parameter θ, then the achievable performance (minimum variance) should be better.
◮ Our notion of sensitivity (made concrete in the sketch after this list):
  ◮ Hold y fixed.
  ◮ How “steep” is pY(y; θ) as we vary the parameter θ?
  ◮ This steepness should somehow be averaged over the observations.
◮ Terminology: When we discuss pY(y; θ) with y fixed and θ as a variable, we call this a “likelihood function”. It is not a valid pdf in θ. Recall that θ is not a random variable.
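
Aside (not from the slides): a Python sketch making the "average steepness" idea concrete for the Gaussian example. Holding y fixed, we measure the slope of ln pY(y; θ) in θ by a finite difference and average its square over many observations; for y ~ N(θ, σ²) this average is 1/σ² (in the general theory this averaged squared steepness is the Fisher information, whose reciprocal gives the CRLB). Parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    theta, sigma, trials = 0.0, 2.0, 500_000

    def log_lik(y, th):
        # Log of the N(th, sigma^2) density, viewed as a function of th with y fixed.
        return -0.5 * np.log(2 * np.pi * sigma**2) - (y - th)**2 / (2 * sigma**2)

    y = rng.normal(theta, sigma, size=trials)

    # Steepness in theta at the true theta, via a symmetric finite difference.
    h = 1e-4
    score = (log_lik(y, theta + h) - log_lik(y, theta - h)) / (2 * h)

    print("average squared steepness:", np.mean(score**2))
    print("1 / sigma^2              :", 1 / sigma**2)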
Example: Rayleigh Family pY(y; θ) = (y/σ²) e^(−y²/(2σ²)) with θ = σ

[Figure: surface plot of the density pY(y; θ) versus the observation y and the parameter θ = σ.]

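Aside (not from the slides): a Python sketch tracing one slice of the surface above. The observation is held fixed at y = 0.5 (an illustrative value) and the standard Rayleigh density, as reconstructed above, is evaluated as a likelihood function of θ = σ; the curve rises and falls sharply around its peak near θ = y/√2 ≈ 0.35, which is the kind of sensitivity the previous slide describes.

    import numpy as np

    def rayleigh_pdf(y, sigma):
        # Standard Rayleigh density (y / sigma^2) * exp(-y^2 / (2 sigma^2)).
        return (y / sigma**2) * np.exp(-y**2 / (2 * sigma**2))

    y_fixed = 0.5                      # hold the observation fixed
    thetas = np.linspace(0.2, 1.0, 9)  # sweep the parameter theta = sigma

    for th in thetas:
        print(f"theta = {th:.1f}  likelihood pY(y=0.5; theta) = {rayleigh_pdf(y_fixed, th):.4f}")
    # The likelihood changes rapidly with theta near its peak (theta = y/sqrt(2)),
    # so even a single fixed observation is quite informative about the parameter.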