
An Introduction to Signal Detection and Estimation - Second Edition


Chapter II: Selected Solutions
H. V. Poor
Princeton University
March 16, 2005
Exercise 2:
The likelihood ratio is given by

\[ L(y) = \frac{3}{2(y+1)}, \qquad 0 \le y \le 1. \]
a. With uniform costs and equal priors, the critical region for minimum Bayes error is given by $\{y \in [0,1] \mid L(y) \ge 1\} = \{y \in [0,1] \mid 3 \ge 2(y+1)\} = [0, 1/2]$. Thus the Bayes rule is given by

\[ \delta_B(y) = \begin{cases} 1 & \text{if } 0 \le y \le 1/2 \\ 0 & \text{if } 1/2 < y \le 1. \end{cases} \]
The corresponding minimum Bayes risk is

\[ r(\delta_B) = \frac{1}{2}\int_0^{1/2} \frac{2}{3}(y+1)\,dy + \frac{1}{2}\int_{1/2}^{1} dy = \frac{11}{24}. \]
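As a quick numerical sanity check (an addition to the original solution, assuming NumPy and SciPy are available), the following Python sketch reproduces this value by quadrature:

import numpy as np
from scipy.integrate import quad

p0 = lambda y: (2.0 / 3.0) * (y + 1.0)  # density under H0 on [0, 1]
p1 = lambda y: 1.0                      # density under H1 on [0, 1]

# The Bayes rule decides H1 on [0, 1/2]; average the two conditional error probabilities.
false_alarm, _ = quad(p0, 0.0, 0.5)     # P0(decide H1)
miss, _ = quad(p1, 0.5, 1.0)            # P1(decide H0)
print(0.5 * false_alarm + 0.5 * miss, 11.0 / 24.0)  # both ~ 0.458333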
b. With uniform costs, the least-favorable prior will be interior to $(0, 1)$, so we examine the conditional risks of Bayes rules for an equalizer condition. The critical region for the Bayes rule $\delta_{\pi_0}$ is given by

\[ \Gamma_1 = \left\{ y \in [0,1] \;\middle|\; L(y) \ge \frac{\pi_0}{1 - \pi_0} \right\} = [0, \tau], \]

where

\[ \tau = \begin{cases} 1 & \text{if } 0 \le \pi_0 \le 3/7 \\ \frac{1}{2}\left(\frac{3}{\pi_0} - 5\right) & \text{if } 3/7 < \pi_0 < 3/5 \\ 0 & \text{if } 3/5 \le \pi_0 \le 1. \end{cases} \]
Thus, the conditional risks are:

\[ R_0(\delta_{\pi_0}) = \int_0^{\tau} \frac{2}{3}(y+1)\,dy = \begin{cases} 1 & \text{if } 0 \le \pi_0 \le 3/7 \\ \frac{2\tau}{3}\left(\frac{\tau}{2} + 1\right) & \text{if } 3/7 < \pi_0 < 3/5 \\ 0 & \text{if } 3/5 \le \pi_0 \le 1, \end{cases} \]

and

\[ R_1(\delta_{\pi_0}) = \int_{\tau}^{1} dy = \begin{cases} 0 & \text{if } 0 \le \pi_0 \le 3/7 \\ 1 - \tau & \text{if } 3/7 < \pi_0 < 3/5 \\ 1 & \text{if } 3/5 \le \pi_0 \le 1. \end{cases} \]
By inspection, a minimax threshold $\tau_L$ is the solution to the equation

\[ \frac{2\tau_L}{3}\left(\frac{\tau_L}{2} + 1\right) = 1 - \tau_L, \]

which yields $\tau_L = (\sqrt{37} - 5)/2$. The minimax risk is the value of the equalized conditional risk; i.e., $V(\pi_L) = 1 - \tau_L$.
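A short numerical check of this equalization (an added sketch, assuming NumPy is available):

import numpy as np

tau_L = (np.sqrt(37.0) - 5.0) / 2.0
R0 = (2.0 * tau_L / 3.0) * (tau_L / 2.0 + 1.0)  # conditional risk under H0
R1 = 1.0 - tau_L                                # conditional risk under H1
print(tau_L, R0, R1)  # R0 == R1 ~ 0.4586, the minimax risk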
c. The Neyman-Pearson test is given by

\[ \delta_{NP}(y) = \begin{cases} 1 & \text{if } \frac{3}{2(y+1)} > \eta \\ \gamma & \text{if } \frac{3}{2(y+1)} = \eta \\ 0 & \text{if } \frac{3}{2(y+1)} < \eta, \end{cases} \]

where $\eta$ and $\gamma$ are chosen to give false-alarm probability $\alpha$. Since $L(y)$ is monotone decreasing in $y$, the above test is equivalent to

\[ \delta_{NP}(y) = \begin{cases} 1 & \text{if } y < \tau \\ \gamma & \text{if } y = \tau \\ 0 & \text{if } y > \tau, \end{cases} \]

where $\tau = \frac{3}{2\eta} - 1$. Since $Y$ is a continuous random variable, we can ignore the randomization. Thus, the false-alarm probability is:
\[ P_F(\delta_{NP}) = P_0(Y < \tau) = \int_0^{\tau} \frac{2}{3}(y+1)\,dy = \begin{cases} 0 & \text{if } \tau \le 0 \\ \frac{2\tau}{3}\left(\frac{\tau}{2} + 1\right) & \text{if } 0 < \tau < 1 \\ 1 & \text{if } \tau \ge 1. \end{cases} \]
The threshold for $P_F(\delta_{NP}) = \alpha$ is the solution to

\[ \frac{2\tau}{3}\left(\frac{\tau}{2} + 1\right) = \alpha, \]

which is $\tau = \sqrt{1 + 3\alpha} - 1$. So, an $\alpha$-level Neyman-Pearson test is

\[ \delta_{NP}(y) = \begin{cases} 1 & \text{if } y \le \sqrt{1 + 3\alpha} - 1 \\ 0 & \text{if } y > \sqrt{1 + 3\alpha} - 1. \end{cases} \]

The detection probability is

\[ P_D(\delta_{NP}) = \int_0^{\tau} dy = \sqrt{1 + 3\alpha} - 1, \qquad 0 < \alpha < 1. \]
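A Monte Carlo confirmation of these formulas (an added sketch, assuming NumPy; the inverse-CDF sampler under $H_0$ follows from $F_0(y) = (y^2 + 2y)/3$):

import numpy as np

rng = np.random.default_rng(0)
alpha = 0.2
tau = np.sqrt(1.0 + 3.0 * alpha) - 1.0  # NP threshold on y

n = 10**6
y0 = np.sqrt(1.0 + 3.0 * rng.uniform(size=n)) - 1.0  # H0 samples via inverse CDF
y1 = rng.uniform(size=n)                             # H1 samples (uniform on [0, 1])

print((y0 <= tau).mean(), alpha)  # empirical P_F ~ alpha
print((y1 <= tau).mean(), tau)    # empirical P_D ~ sqrt(1 + 3*alpha) - 1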
Exercise 4:
Here the likelihood ratio is given by

\[ L(y) = \frac{\sqrt{2/\pi}\, e^{-y^2/2}}{e^{-y}} = \sqrt{\frac{2}{\pi}}\, e^{\,y - y^2/2} = \sqrt{\frac{2e}{\pi}}\, e^{-(y-1)^2/2}, \qquad y \ge 0. \]
a. Thus, Bayes critical regions are of the form

\[ \Gamma_1 = \left\{ y \ge 0 \;\middle|\; (y-1)^2 \le \tau \right\}, \]

where $\tau = 2\log\left(\sqrt{\frac{2e}{\pi}}\,\frac{1 - \pi_0}{\pi_0}\right)$. There are three cases:

\[ \Gamma_1 = \begin{cases} \emptyset & \text{if } \tau < 0 \\ \left[1 - \sqrt{\tau},\; 1 + \sqrt{\tau}\,\right] & \text{if } 0 \le \tau \le 1 \\ \left[0,\; 1 + \sqrt{\tau}\,\right] & \text{if } \tau > 1. \end{cases} \]
The condition $\tau < 0$ is equivalent to $\pi_0' < \pi_0 \le 1$, where

\[ \pi_0' = \frac{\sqrt{2e/\pi}}{1 + \sqrt{2e/\pi}}; \]

the condition $0 \le \tau \le 1$ is equivalent to $\pi_0'' \le \pi_0 \le \pi_0'$, where

\[ \pi_0'' = \frac{\sqrt{2/\pi}}{1 + \sqrt{2/\pi}}; \]

and the condition $\tau > 1$ is equivalent to $0 \le \pi_0 < \pi_0''$.
The minimum Bayes risk V (
0
) can be calculated for the three regions:
V (
0
) = 1
0
,

0
<
0
1,
V (
0
) =
0
_
1+

e
y
dy+(1
0
)

_
_
1

0
e

y
2
2
dy +
_

1+

y
2
2
dy
_
,

0

0

0
,
and
V (
0
) =
0
_
1+

0
e
y
dy + (1
0
)

_

1+

y
2
2
dy, 0
0
<

0
.
b. The minimax rule can be found by equating conditional risks. Investigation of the above shows that this equality occurs in the intermediate region $\pi_0'' \le \pi_0 \le \pi_0'$, and thus corresponds to a threshold $\tau_L \in (0, 1)$ solving

\[ e^{-(1-\sqrt{\tau_L})} - e^{-(1+\sqrt{\tau_L})} = 1 - 2\left[ \Phi\!\left(1 + \sqrt{\tau_L}\right) - \Phi\!\left(1 - \sqrt{\tau_L}\right) \right]. \]

The minimax risk is then either of the equal conditional risks; e.g.,

\[ V(\pi_L) = e^{-(1-\sqrt{\tau_L})} - e^{-(1+\sqrt{\tau_L})}. \]
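The equalizer equation is transcendental; a numerical solve (an added sketch, assuming NumPy and SciPy) gives $\tau_L$ and the minimax risk directly:

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def risk_gap(tau):
    s = np.sqrt(tau)
    R0 = np.exp(-(1.0 - s)) - np.exp(-(1.0 + s))              # P0(Gamma_1)
    R1 = 1.0 - 2.0 * (norm.cdf(1.0 + s) - norm.cdf(1.0 - s))  # P1(Gamma_0)
    return R0 - R1

tau_L = brentq(risk_gap, 1e-9, 1.0)  # the gap changes sign on (0, 1)
V = np.exp(-(1.0 - np.sqrt(tau_L))) - np.exp(-(1.0 + np.sqrt(tau_L)))
print(tau_L, V)  # minimax threshold and minimax risk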
c. Here, randomization is unnecessary, and the Neyman-Pearson critical regions are of the form

\[ \Gamma_1 = \left\{ y \ge 0 \;\middle|\; (y-1)^2 \le \tau \right\}, \]

where $\tau = 2\log\left(\sqrt{\frac{2e}{\pi}}\,/\,\eta\right)$. There are three cases:

\[ \Gamma_1 = \begin{cases} \emptyset & \text{if } \tau < 0 \\ \left[1 - \sqrt{\tau},\; 1 + \sqrt{\tau}\,\right] & \text{if } 0 \le \tau \le 1 \\ \left[0,\; 1 + \sqrt{\tau}\,\right] & \text{if } \tau > 1. \end{cases} \]
The false-alarm probability is thus:

\[ P_F(\delta_{NP}) = 0, \qquad \tau < 0, \]

\[ P_F(\delta_{NP}) = \int_{1-\sqrt{\tau}}^{1+\sqrt{\tau}} e^{-y}\,dy = e^{-(1-\sqrt{\tau})} - e^{-(1+\sqrt{\tau})} = \frac{2}{e}\,\sinh\!\left(\sqrt{\tau}\right), \qquad 0 \le \tau \le 1, \]

and

\[ P_F(\delta_{NP}) = \int_0^{1+\sqrt{\tau}} e^{-y}\,dy = 1 - e^{-(1+\sqrt{\tau})}, \qquad \tau > 1. \]
From this we see that the threshold for level-$\alpha$ NP testing is

\[ \tau = \begin{cases} \left[ \sinh^{-1}\!\left(\frac{\alpha e}{2}\right) \right]^2 & \text{if } 0 < \alpha \le 1 - e^{-2} \\ \left[ 1 + \log(1 - \alpha) \right]^2 & \text{if } 1 - e^{-2} < \alpha < 1. \end{cases} \]

The detection probability is thus

\[ P_D(\delta_{NP}) = 2\left[ \Phi\!\left(1 + \sqrt{\tau}\right) - \Phi\!\left(1 - \sqrt{\tau}\right) \right] = 2\left[ \Phi\!\left(1 + \sinh^{-1}\!\left(\frac{\alpha e}{2}\right)\right) - \Phi\!\left(1 - \sinh^{-1}\!\left(\frac{\alpha e}{2}\right)\right) \right], \qquad 0 < \alpha \le 1 - e^{-2}, \]

and

\[ P_D(\delta_{NP}) = 2\left[ \Phi\!\left(1 + \sqrt{\tau}\right) - \frac{1}{2} \right] = 2\,\Phi\!\left(\log\frac{1}{1-\alpha}\right) - 1, \qquad 1 - e^{-2} < \alpha < 1, \]

since $1 + \sqrt{\tau} = -\log(1 - \alpha)$ in the second case.
Exercise 6 a & b:
Here we have $p_0(y) = p_N(y+s)$ and $p_1(y) = p_N(y-s)$, which gives

\[ L(y) = \frac{1 + (y+s)^2}{1 + (y-s)^2}. \]
a. With equal priors and uniform costs, the critical region for Bayes testing is $\Gamma_1 = \{L(y) \ge 1\} = \{1 + (y+s)^2 \ge 1 + (y-s)^2\} = \{2sy \ge -2sy\} = [0, \infty)$. Thus, the Bayes test is

\[ \delta_B(y) = \begin{cases} 1 & \text{if } y \ge 0 \\ 0 & \text{if } y < 0. \end{cases} \]
The minimum Bayes risk is then

\[ r(\delta_B) = \frac{1}{2}\int_0^{\infty} \frac{1}{\pi\left[1 + (y+s)^2\right]}\,dy + \frac{1}{2}\int_{-\infty}^{0} \frac{1}{\pi\left[1 + (y-s)^2\right]}\,dy = \frac{1}{2} - \frac{\tan^{-1}(s)}{\pi}. \]
b. Because of the symmetry of this problem with uniform costs, we can guess that $1/2$ is the least-favorable prior. To confirm this, we can check that this answer from Part a gives an equalizer rule:

\[ R_0(\delta_{1/2}) = \int_0^{\infty} \frac{1}{\pi\left[1 + (y+s)^2\right]}\,dy = \int_{-\infty}^{0} \frac{1}{\pi\left[1 + (y-s)^2\right]}\,dy = R_1(\delta_{1/2}). \]
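Both conditional risks equal $1/2 - \tan^{-1}(s)/\pi$, which the following sketch verifies against the Cauchy distribution functions (an addition, assuming NumPy and SciPy):

import numpy as np
from scipy.stats import cauchy

s = 1.5
# Under H0, Y is a standard Cauchy shifted to -s; under H1, shifted to +s.
R0 = cauchy.sf(0.0, loc=-s)   # P0(Y >= 0): false-alarm probability
R1 = cauchy.cdf(0.0, loc=s)   # P1(Y < 0): miss probability
print(R0, R1, 0.5 - np.arctan(s) / np.pi)  # all three agree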
Exercise 7:
a. The densities under the two hypotheses are:

\[ p_0(y) = p(y) = e^{-y}, \qquad y > 0, \]

and

\[ p_1(y) = \int_{-\infty}^{\infty} p(y-s)\,p(s)\,ds = \int_0^{y} e^{s-y}\,e^{-s}\,ds = y\,e^{-y}, \qquad y > 0. \]

Thus, the likelihood ratio is

\[ L(y) = \frac{p_1(y)}{p_0(y)} = y, \qquad y > 0. \]
b. Randomization is irrelevant here, so the false-alarm probability for threshold $\eta$ is

\[ P_F(\delta_{NP}) = P_0(Y > \eta) = e^{-\eta}, \]

which gives the threshold $\eta = -\log\alpha$ for level-$\alpha$ Neyman-Pearson testing. The corresponding detection probability is

\[ P_D(\delta_{NP}) = P_1(Y > \eta) = \int_{\eta}^{\infty} y\,e^{-y}\,dy = (\eta + 1)\,e^{-\eta} = \alpha\left(1 - \log\alpha\right), \qquad 0 < \alpha < 1. \]


c. Here the densities under the two hypotheses become:

\[ p_0(y) = \prod_{k=1}^{n} p(y_k) = \prod_{k=1}^{n} e^{-y_k}, \qquad 0 < \min\{y_1, y_2, \ldots, y_n\}, \]

and

\[ p_1(y) = \int_{-\infty}^{\infty} \left[ \prod_{k=1}^{n} p(y_k - s) \right] p(s)\,ds = \int_0^{\min\{y_1, y_2, \ldots, y_n\}} \left[ \prod_{k=1}^{n} e^{s - y_k} \right] e^{-s}\,ds \]
\[ = \frac{p_0(y)}{n-1} \left[ e^{(n-1)\min\{y_1, y_2, \ldots, y_n\}} - 1 \right], \qquad 0 < \min\{y_1, y_2, \ldots, y_n\}. \]
Thus, the likelihood ratio is

\[ L(y) = \frac{1}{n-1} \left[ e^{(n-1)\min\{y_1, y_2, \ldots, y_n\}} - 1 \right], \qquad 0 < \min\{y_1, y_2, \ldots, y_n\}. \]
d. The false-alarm probability incurred by comparing $L(y)$ from Part c to a threshold $\eta$ is

\[ P_F(\delta_{NP}) = P_0\left(L(Y) > \eta\right) = P_0\left( \min\{Y_1, Y_2, \ldots, Y_n\} > \tau \right), \qquad \tau \triangleq \frac{\log\left((n-1)\eta + 1\right)}{n-1}, \]
\[ = P_0\left( \bigcap_{k=1}^{n} \{Y_k > \tau\} \right) = \prod_{k=1}^{n} P_0(Y_k > \tau) = \prod_{k=1}^{n} e^{-\tau} = e^{-n\tau}, \]

from which we have $\tau = -\frac{1}{n}\log\alpha$, or, equivalently,

\[ \eta = \frac{e^{(n-1)\tau} - 1}{n-1} = \frac{\alpha^{-(n-1)/n} - 1}{n-1}. \]
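A Monte Carlo check that the test on $\min\{Y_k\}$ and the equivalent test on $L(y)$ both have size $\alpha$ (an added sketch, assuming NumPy):

import numpy as np

rng = np.random.default_rng(2)
n, alpha = 5, 0.05
tau = -np.log(alpha) / n                            # threshold on min{Y_1, ..., Y_n}
eta = (alpha ** (-(n - 1) / n) - 1.0) / (n - 1.0)   # equivalent threshold on L(y)

trials = 10**6
y = rng.exponential(size=(trials, n))               # H0 samples
m = y.min(axis=1)
L = (np.exp((n - 1) * m) - 1.0) / (n - 1.0)
print((m > tau).mean(), (L > eta).mean(), alpha)    # all ~ alpha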
Exercise 15:
a. The LMP test is

\[ \delta_{lo}(y) = \begin{cases} 1 & \text{if } \left.\frac{\partial p_\theta(y)}{\partial \theta}\right|_{\theta=0} > \eta\,p_0(y) \\ \gamma & \text{if } \left.\frac{\partial p_\theta(y)}{\partial \theta}\right|_{\theta=0} = \eta\,p_0(y) \\ 0 & \text{if } \left.\frac{\partial p_\theta(y)}{\partial \theta}\right|_{\theta=0} < \eta\,p_0(y). \end{cases} \]
We have

\[ \frac{\left.\partial p_\theta(y)/\partial\theta\right|_{\theta=0}}{p_0(y)} = \operatorname{sgn}(y); \]

thus

\[ \delta_{lo}(y) = \begin{cases} 1 & \text{if } \operatorname{sgn}(y) > \eta \\ \gamma & \text{if } \operatorname{sgn}(y) = \eta \\ 0 & \text{if } \operatorname{sgn}(y) < \eta. \end{cases} \]
To set the threshold $\eta$, we consider

\[ P_0\left(\operatorname{sgn}(Y) > \eta\right) = \begin{cases} 0 & \text{if } \eta \ge 1 \\ 1/2 & \text{if } -1 \le \eta < 1 \\ 1 & \text{if } \eta < -1. \end{cases} \]

This implies that

\[ \eta = \begin{cases} 1 & \text{if } 0 < \alpha \le 1/2 \\ -1 & \text{if } 1/2 < \alpha \le 1. \end{cases} \]
The randomization is

\[ \gamma = \frac{\alpha - P_0\left(\operatorname{sgn}(Y) > \eta\right)}{P_0\left(\operatorname{sgn}(Y) = \eta\right)} = \begin{cases} 2\alpha & \text{if } 0 < \alpha \le 1/2 \\ 2\alpha - 1 & \text{if } 1/2 < \alpha \le 1. \end{cases} \]
The LMP test is thus

\[ \delta_{lo}(y) = \begin{cases} 2\alpha & \text{if } y > 0 \\ 0 & \text{if } y \le 0 \end{cases} \]

for $0 < \alpha \le 1/2$; and it is

\[ \delta_{lo}(y) = \begin{cases} 1 & \text{if } y \ge 0 \\ 2\alpha - 1 & \text{if } y < 0 \end{cases} \]

for $1/2 < \alpha \le 1$.
For fixed $\theta > 0$, the detection probability is

\[ P_D(\delta_{lo}; \theta) = P_\theta\left(\operatorname{sgn}(Y) > \eta\right) + \gamma\,P_\theta\left(\operatorname{sgn}(Y) = \eta\right) \]
\[ = \begin{cases} 2\alpha \int_0^{\infty} \frac{1}{2} e^{-|y-\theta|}\,dy & \text{if } 0 < \alpha \le 1/2 \\ \int_0^{\infty} \frac{1}{2} e^{-|y-\theta|}\,dy + (2\alpha - 1)\int_{-\infty}^{0} \frac{1}{2} e^{-|y-\theta|}\,dy & \text{if } 1/2 < \alpha \le 1 \end{cases} \]
\[ = \begin{cases} \alpha\left(2 - e^{-\theta}\right) & \text{if } 0 < \alpha \le 1/2 \\ 1 + (\alpha - 1)e^{-\theta} & \text{if } 1/2 < \alpha \le 1. \end{cases} \]
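A Monte Carlo check of these detection probabilities under Laplacian noise (an added sketch, assuming NumPy):

import numpy as np

rng = np.random.default_rng(3)
theta, trials = 0.7, 10**6
y = rng.laplace(loc=theta, size=trials)  # Y = theta + Laplacian noise

for alpha in (0.3, 0.8):
    if alpha <= 0.5:
        pd = 2.0 * alpha * (y > 0).mean()  # decide H1 w.p. 2*alpha when y > 0
        formula = alpha * (2.0 - np.exp(-theta))
    else:
        pd = (y >= 0).mean() + (2.0 * alpha - 1.0) * (y < 0).mean()
        formula = 1.0 + (alpha - 1.0) * np.exp(-theta)
    print(alpha, pd, formula)  # empirical vs. closed form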
b. For fixed $\theta > 0$, the NP critical region is

\[ \Gamma_\theta = \left\{ |y| - |y - \theta| > \eta' \right\} = \begin{cases} (-\infty, \infty) & \text{if } \eta' < -\theta \\ \left( \frac{\eta' + \theta}{2},\; \infty \right) & \text{if } -\theta \le \eta' \le \theta \\ \emptyset & \text{if } \eta' > \theta, \end{cases} \]

from which

\[ P_0\left(\Gamma_\theta\right) = \begin{cases} 1 & \text{if } \eta' < -\theta \\ \frac{1}{2} e^{-(\eta' + \theta)/2} & \text{if } -\theta \le \eta' \le \theta \\ 0 & \text{if } \eta' > \theta. \end{cases} \]

Clearly, we must know $\theta$ to set $\eta'$, and thus the NP critical region depends on $\theta$. This implies that there is no UMP test.
The generalized likelihood ratio test uses this statistic:

\[ \sup_{\theta > 0} e^{|y| - |y - \theta|} = \exp\left\{ \sup_{\theta > 0} \left( |y| - |y - \theta| \right) \right\} = \begin{cases} 1 & \text{if } y < 0 \\ e^{y} & \text{if } y \ge 0. \end{cases} \]
Exercise 16:
We have $M$ hypotheses $H_0, H_1, \ldots, H_{M-1}$, where $Y$ has distribution $P_i$ and density $p_i$ under hypothesis $H_i$. A decision rule $\delta$ is a partition of the observation set $\Gamma$ into regions $\Gamma_0, \Gamma_1, \ldots, \Gamma_{M-1}$, where $\delta$ chooses hypothesis $H_i$ when we observe $y \in \Gamma_i$. Equivalently, a decision rule can be viewed as a mapping from $\Gamma$ to the set of decisions $\{0, 1, \ldots, M-1\}$, where $\delta(y)$ is the index of the hypothesis accepted when we observe $Y = y$.

On assigning costs $C_{ij}$ to the acceptance of $H_i$ when $H_j$ is true, for $0 \le i, j \le M-1$, we can define conditional risks, $R_j(\delta)$, $j = 0, 1, \ldots, M-1$, for a decision rule $\delta$, where $R_j(\delta)$ is the conditional expected cost given that $H_j$ is true. We have

\[ R_j(\delta) = \sum_{i=0}^{M-1} C_{ij}\,P_j(\Gamma_i). \]

Assuming priors $\pi_j = P(H_j \text{ occurs})$, $j = 0, 1, \ldots, M-1$, we can define an overall average risk, or Bayes risk, as

\[ r(\delta) = \sum_{j=0}^{M-1} \pi_j\,R_j(\delta). \]
A Bayes rule will minimize the Bayes risk.
We can write

\[ r(\delta) = \sum_{j=0}^{M-1}\sum_{i=0}^{M-1} \pi_j\,C_{ij}\,P_j(\Gamma_i) = \sum_{i=0}^{M-1}\left[ \sum_{j=0}^{M-1} \pi_j\,C_{ij}\,P_j(\Gamma_i) \right] \]
\[ = \sum_{i=0}^{M-1}\left[ \sum_{j=0}^{M-1} \pi_j\,C_{ij} \int_{\Gamma_i} p_j(y)\,\mu(dy) \right] = \sum_{i=0}^{M-1} \int_{\Gamma_i} \left[ \sum_{j=0}^{M-1} \pi_j\,C_{ij}\,p_j(y) \right] \mu(dy). \]
Thus, by inspection, we see that the Bayes rule has decision regions given by

\[ \Gamma_i = \left\{ y \in \Gamma \;\middle|\; \sum_{j=0}^{M-1} \pi_j\,C_{ij}\,p_j(y) = \min_{0 \le k \le M-1} \sum_{j=0}^{M-1} \pi_j\,C_{kj}\,p_j(y) \right\}. \]
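A minimal sketch of this rule in Python (an addition; the Gaussian densities, priors, and costs below are illustrative assumptions only, and NumPy/SciPy are assumed):

import numpy as np
from scipy.stats import norm

def bayes_rule(y, priors, C, densities):
    """Return the index i minimizing sum_j priors[j] * C[i, j] * p_j(y)."""
    p = np.array([d(y) for d in densities])  # p_j(y) for j = 0, ..., M-1
    posterior_costs = C @ (priors * p)       # entry i: sum_j pi_j C_ij p_j(y)
    return int(np.argmin(posterior_costs))

M = 3
priors = np.full(M, 1.0 / M)
C = 1.0 - np.eye(M)  # uniform costs: C_ij = 1 - delta_ij
densities = [lambda y, m=m: norm.pdf(y, loc=m) for m in range(M)]  # hypothetical unit-variance means 0, 1, 2

print([bayes_rule(y, priors, C, densities) for y in (-0.4, 0.9, 2.2)])
# With uniform costs this reduces to the MAP rule; expect [0, 1, 2].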
Exercise 19:
a. The likelihood ratio is given by

\[ L(y) = \frac{\prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{-(y_k - \mu_1)^2/2\sigma_1^2}}{\prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_0} e^{-(y_k - \mu_0)^2/2\sigma_0^2}} = \left(\frac{\sigma_0}{\sigma_1}\right)^{n} e^{\frac{n}{2}\left(\frac{\mu_0^2}{\sigma_0^2} - \frac{\mu_1^2}{\sigma_1^2}\right)}\, e^{\left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right)\sum_{k=1}^{n} y_k^2}\, e^{\left(\frac{\mu_1}{\sigma_1^2} - \frac{\mu_0}{\sigma_0^2}\right)\sum_{k=1}^{n} y_k}, \]

which shows the structure indicated.
b. If $\mu_1 = \mu_0 = \mu$ and $\sigma_1^2 > \sigma_0^2$, then the Neyman-Pearson test operates by comparing the quantity $\sum_{k=1}^{n} (y_k - \mu)^2$ to a threshold, choosing $H_1$ if the threshold is exceeded and $H_0$ otherwise. Alternatively, if $\mu_1 > \mu_0$ and $\sigma_1^2 = \sigma_0^2$, then the NP test compares $\sum_{k=1}^{n} y_k$ to a threshold, again choosing $H_1$ when the threshold is exceeded. Note that, in the first case, the test statistic is quadratic in the observations, and in the second case it is linear.
c. For $n = 1$, $\mu_1 = \mu_0 = \mu$, and $\sigma_1^2 > \sigma_0^2$, the NP test is of the form

\[ \delta_{NP}(y) = \begin{cases} 1 & \text{if } (y - \mu)^2 \ge \tau \\ 0 & \text{if } (y - \mu)^2 < \tau, \end{cases} \]

where $\tau > 0$ is an appropriate threshold. We have

\[ P_F(\delta_{NP}) = P_0\left( (Y - \mu)^2 > \tau \right) = 1 - P_0\left( -\sqrt{\tau} \le Y - \mu \le \sqrt{\tau} \right) \]
\[ = 1 - \Phi\!\left( \frac{\sqrt{\tau}}{\sigma_0} \right) + \Phi\!\left( -\frac{\sqrt{\tau}}{\sigma_0} \right) = 2\left[ 1 - \Phi\!\left( \frac{\sqrt{\tau}}{\sigma_0} \right) \right]. \]
Thus, for size $\alpha$ we set

\[ \tau = \left[ \sigma_0\,\Phi^{-1}\!\left( 1 - \frac{\alpha}{2} \right) \right]^2, \]

and the detection probability is

\[ P_D(\delta_{NP}) = 1 - P_1\left( -\sqrt{\tau} \le Y - \mu \le \sqrt{\tau} \right) = 2\left[ 1 - \Phi\!\left( \frac{\sqrt{\tau}}{\sigma_1} \right) \right] = 2\left[ 1 - \Phi\!\left( \frac{\sigma_0}{\sigma_1}\,\Phi^{-1}\!\left( 1 - \frac{\alpha}{2} \right) \right) \right], \qquad 0 < \alpha < 1. \]
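Finally, these size and power expressions can be checked both in closed form and by simulation (an added sketch, assuming NumPy and SciPy):

import numpy as np
from scipy.stats import norm

mu, sigma0, sigma1, alpha = 0.0, 1.0, 2.0, 0.1
tau = (sigma0 * norm.ppf(1.0 - alpha / 2.0)) ** 2

print(2.0 * (1.0 - norm.cdf(np.sqrt(tau) / sigma0)),   # P_F: should equal alpha
      2.0 * (1.0 - norm.cdf(np.sqrt(tau) / sigma1)))   # P_D

rng = np.random.default_rng(4)
y0 = rng.normal(mu, sigma0, size=10**6)  # H0 samples
y1 = rng.normal(mu, sigma1, size=10**6)  # H1 samples
print(((y0 - mu) ** 2 >= tau).mean(), ((y1 - mu) ** 2 >= tau).mean())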
