

Bilkent University

EEE-539 DETECTION AND ESTIMATION THEORY

Fall 2017 - Final Exam

January 9, 2018

Duration: 2.5 hours

Surname:
Name:
ID #:
Signature:

Question-1 (30 pts)
Question-2 (35 pts)
Question-3 (35 pts)
TOTAL (100 pts)

1) Consider random variables X and Y with the following joint probability density function (PDF):
p(x, y) = 2e^{−(x+y)} if 0 < x < y < ∞, and p(x, y) = 0 otherwise.
The aim is to estimate X after observing a realization of Y .
a) Obtain the maximum a posteriori probability (MAP) estimator for X based on Y and simplify it as much as possible.
b) Obtain the minimum mean-absolute error (MMAE) estimator for X based on Y and simplify it as
much as possible.
c) Obtain the minimum mean-squared error (MMSE) estimator for X based on Y and simplify it as
much as possible.
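As a numerical sanity check on the answers, the Python sketch below verifies that p(x, y) integrates to one and approximates the conditional mean E[X | Y ≈ y0] by Monte Carlo; the sample size, the window width, and the test point y0 = 2.0 are illustrative choices, not part of the problem statement.

```python
import numpy as np
from scipy import integrate

# Joint PDF from Question 1: p(x, y) = 2 exp(-(x + y)) on 0 < x < y < infinity.
total, _ = integrate.dblquad(
    lambda y, x: 2.0 * np.exp(-(x + y)),  # integrand p(x, y)
    0.0, np.inf,                          # outer variable: x in (0, inf)
    lambda x: x, lambda x: np.inf,        # inner variable: y in (x, inf)
)
print(f"integral of p(x, y) over 0 < x < y: {total:.6f}")  # expect ~1.0

# Since p(x, y) = 2 e^{-x} e^{-y} 1{x < y}, drawing two iid Exp(1) variables
# and keeping only the ordered pairs samples the joint PDF exactly.
rng = np.random.default_rng(0)
u = rng.exponential(size=(1_000_000, 2))
keep = u[:, 0] < u[:, 1]
x, y = u[keep, 0], u[keep, 1]

# Crude estimate of E[X | Y ~= y0]: average x over samples with y near y0.
y0 = 2.0
window = np.abs(y - y0) < 0.05
print(f"E[X | Y ~= {y0}] is approximately {x[window].mean():.3f}")
```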

2) Suppose that Y1 and Y2 are two independent, scalar, and continuous random variables, each of which
is uniformly distributed between 0 and θ, where θ > 0 is an unknown parameter.
a) Obtain the maximum likelihood estimator (MLE) for estimating θ based on Y1 and Y2, and calculate the mean-squared error (MSE) of that MLE.
b) Obtain the minimum variance unbiased estimator (MVUE) for estimating θ based on Y1 and Y2, and calculate the MSE of that MVUE.
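A small simulation harness can cross-check the bias and MSE of any candidate estimator built from Y1 and Y2; in the Python sketch below, the trial count, the seed, the true value θ = 1, and the scaled-maximum candidates c · max(Y1, Y2) are illustrative assumptions rather than an assertion of which estimator is the MLE or the MVUE.

```python
import numpy as np

def empirical_bias_mse(estimator, theta=1.0, trials=1_000_000, seed=0):
    """Monte Carlo bias and MSE of an estimator of theta that is fed
    two iid Uniform(0, theta) observations Y1 and Y2."""
    rng = np.random.default_rng(seed)
    y = rng.uniform(0.0, theta, size=(trials, 2))
    est = estimator(y[:, 0], y[:, 1])
    return est.mean() - theta, np.mean((est - theta) ** 2)

# Candidates of the form c * max(Y1, Y2); the scalings are illustrative.
for c in (1.0, 1.5):
    bias, mse = empirical_bias_mse(lambda y1, y2, c=c: c * np.maximum(y1, y2))
    print(f"c = {c}: bias = {bias:+.4f}, MSE = {mse:.4f}")
```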

3) Consider the following problem:

H0 : Y ∼ p0(y)
H1 : Y ∼ p1(y | θ), θ ∼ w(θ)

and assume that both θ and Y are scalar random variables. In this scenario, we perform both detection
and estimation, where the estimation is performed only when the decision is H1 (since H0 is a simple
hypothesis). Let δ(y) denote the decision rule (i.e., the probability of selecting H1 ), and θ̂(y) represent
the estimator.
The aim is to minimize an estimation error metric subject to constraints on the detection probability
and the false-alarm probability. The estimation error metric is defined as follows:
J(δ, θ̂) = E{C(θ̂(Y), Θ) | H1 is true and we select H1}    (1)

where the expectation is taken jointly over Θ and Y given that H1 is true and we select H1, and C(θ̂(Y), θ) is the cost of estimating θ as θ̂(Y). It is noted from eqn. (1) that J(δ, θ̂) corresponds to the expected cost of the estimator when H1 is true and the decision is H1.
The optimal joint detection and estimation problem is formulated as

minimize over (δ, θ̂):   J(δ, θ̂)                            (2)
subject to:   ∫ δ(y) p0(y) dy ≤ α                            (3)
              ∫∫ δ(y) p1(y | θ) w(θ) dθ dy ≥ β               (4)

where the integral in (3) corresponds to the false-alarm probability (i.e., the probability of selecting H1
when H0 is true) and the double integral in (4) corresponds to the detection probability (i.e., the probability
of selecting H1 when H1 is true).
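For concreteness, constraints (3) and (4) can be evaluated in closed form for a simple instance; in the Python sketch below, p0 = N(0, 1), p1(y | θ) = N(θ, 1), the prior w(θ) = N(1, 0.25), and the threshold rule δ(y) = 1{y > τ} are all assumed purely for illustration and are not part of the problem statement.

```python
import numpy as np
from scipy import stats

# Assumed toy instance: H0: Y ~ N(0, 1); under H1, Y | theta ~ N(theta, 1)
# with prior w(theta) = N(1, 0.25); threshold rule delta(y) = 1{y > tau}.
tau = 1.0

# Constraint (3): false-alarm probability, the integral of delta(y) p0(y) dy,
# which here is the N(0, 1) tail beyond tau.
p_fa = stats.norm.sf(tau)

# Constraint (4): detection probability. Marginalizing theta, Y under H1 is
# N(1, 1 + 0.25), so the double integral reduces to a single Gaussian tail.
p_d = stats.norm.sf(tau, loc=1.0, scale=np.sqrt(1.25))

print(f"P_FA = {p_fa:.4f}, P_D = {p_d:.4f}")
```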
a) Express J(δ, θ̂) in terms of δ(y), p1(y | θ), w(θ), and C(θ̂(y), θ). (No other functions may appear in your final expression.)
Hints: Note that p1(y | θ) can also be expressed as p(y | θ, H1). Also, w(θ) corresponds to p(θ | H1). In addition, the probability of selecting H1 given that Y = y is equal to δ(y), irrespective of the other given conditions.
b) Prove or disprove the following statement: “Let (δ∗, θ̂∗) denote a solution of the optimization problem in (2)–(4). Then, it is always possible to find an alternative solution (δ′, θ̂∗) for which at least one of the constraints is satisfied with equality and the same estimation error metric is achieved. In other words, for a given δ∗, there always exists δ′ for which J(δ′, θ̂∗) = J(δ∗, θ̂∗) and at least one of ∫ δ′(y) p0(y) dy = α and ∫∫ δ′(y) p1(y | θ) w(θ) dθ dy = β is satisfied.”
c) Suppose that the decision rule is fixed as δ̃. In this case, what is the optimal estimator that minimizes J(δ̃, θ̂) for the squared-error cost function C(θ̂(y), θ) = (θ̂(y) − θ)²? Express that optimal estimator as explicitly as possible.

(5 points bonus if all three parts are completed successfully.)

