Signal Detection: Binary Detection with a Single Observation
The error probabilities for the decision rule of Example 6.1 are found by integrating the conditional densities over the corresponding decision regions (Figure 6.2). With the decision threshold at y = 0.622, the probability of a miss is

P(D_0|H_1) = ∫_{-∞}^{0.622} (3/√(2π)) exp(-(9/2)(y - 1)²) dy ≈ 0.128

and the overall probability of error comes out to approximately 0.055.

Figure 6.2 Decision rule and error probabilities for Example 6.1. (The figure shows f_{Y|H_0}(y|H_0) and f_{Y|H_1}(y|H_1) with the threshold at y = 0.622; accept H_1 for y > 0.622 and accept H_0 for y < 0.622.)
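The following Python sketch reproduces these numbers. The specific model (Y ~ N(0, 1/9) under H_0, Y ~ N(1, 1/9) under H_1, with P(H_0) = 3/4 and P(H_1) = 1/4) is inferred from the fragments of Example 6.1 visible in this excerpt, so treat it as an assumption rather than the book's stated setup:

```python
import math
from statistics import NormalDist

# Inferred Example 6.1 setup (an assumption, not stated in this excerpt):
# Y ~ N(0, 1/9) under H0, Y ~ N(1, 1/9) under H1, P(H0) = 3/4, P(H1) = 1/4.
sigma = 1.0 / 3.0
p_h0, p_h1 = 0.75, 0.25
h0 = NormalDist(mu=0.0, sigma=sigma)
h1 = NormalDist(mu=1.0, sigma=sigma)

# MAP threshold: solve f(y|H1)/f(y|H0) = P(H0)/P(H1) for y.
y_star = 0.5 + sigma**2 * math.log(p_h0 / p_h1)
print(f"threshold = {y_star:.3f}")               # 0.622

p_false_alarm = 1.0 - h0.cdf(y_star)             # P(D1|H0)
p_miss = h1.cdf(y_star)                          # P(D0|H1), about 0.128
p_error = p_h0 * p_false_alarm + p_h1 * p_miss   # about 0.055
print(f"P(D0|H1) = {p_miss:.3f}, Pe = {p_error:.3f}")
```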
6.2.3 Bayes' Decision Rule: Costs of Errors

In many engineering applications, costs have to be taken into account in the design process. In such applications, it is possible to derive decision rules that minimize certain average costs. For example, in the context of radar detection, the costs and consequences of a miss are quite different from those of a false alarm, and an optimum decision rule has to take the relative costs into account and minimize the average cost. We now derive a decision rule that minimizes the average cost.

If we denote the decisions made in the binary hypothesis problem by D_i, i = 0, 1, where D_0 and D_1 denote the decisions in favor of H_0 and H_1, respectively, then we have the following four possibilities: (D_i, H_j), i = 0, 1, and j = 0, 1. The pair (D_i, H_j) denotes H_j being the true hypothesis and a decision of D_i. The pairs (D_0, H_0) and (D_1, H_1) denote correct decisions, and (D_1, H_0) and (D_0, H_1) denote incorrect decisions. If we associate a cost C_ij with each pair (D_i, H_j), then the average cost can be written as
C̄ = Σ_{i=0,1} Σ_{j=0,1} C_ij P(H_j) P(D_i|H_j)

  = C_00 P(D_0|H_0)P(H_0) + C_10 P(D_1|H_0)P(H_0)
    + C_01 P(D_0|H_1)P(H_1) + C_11 P(D_1|H_1)P(H_1)

Each conditional probability in this expression is the integral of the corresponding conditional density over a decision region; for example,

P(D_1|H_0) = P(y ∈ R_1|H_0) = ∫_{R_1} f_{Y|H_0}(y|H_0) dy    (6.5)
If we assume that C_10 > C_00 and C_01 > C_11 (an error costs more than the corresponding correct decision) and use the fact that R_0 ∪ R_1 = (-∞, ∞) and R_1 ∩ R_0 = ∅, then we can substitute the integrals into the expression for C̄ and collect terms to obtain

C̄ = C_10 P(H_0) + C_11 P(H_1)
    + ∫_{R_0} {[P(H_1)(C_01 - C_11) f_{Y|H_1}(y|H_1)] - [P(H_0)(C_10 - C_00) f_{Y|H_0}(y|H_0)]} dy    (6.6)
Since the decision rule is specified in terms of R_0 (and R_1), the decision rule that minimizes the average cost is derived by choosing the R_0 that minimizes the integral on the right-hand side of Equation 6.6. Note that the first two terms in Equation 6.6 do not depend on R_0.

The smallest value of C̄ is achieved when the integrand in Equation 6.6 is negative for all values of y ∈ R_0. Since C_01 > C_11 and C_10 > C_00, the integrand will be negative and C̄ will be minimized if R_0 is chosen such that for every value of y ∈ R_0,

P(H_1)(C_01 - C_11) f_{Y|H_1}(y|H_1) < P(H_0)(C_10 - C_00) f_{Y|H_0}(y|H_0)

Equivalently, in terms of the likelihood ratio, this is the Bayes' decision rule:

L(y) = f_{Y|H_1}(y|H_1) / f_{Y|H_0}(y|H_0)  ≷  P(H_0)(C_10 - C_00) / [P(H_1)(C_01 - C_11)]
(decide H_1 if >, H_0 if <)    (6.7)

Note that, with the exception of the decision threshold, the form of the Bayes' decision rule given before is the same as the form of the MAP decision rule given in Equation 6.2! It is left as an exercise for the reader to show that the two decision rules are identical when C_10 - C_00 = C_01 - C_11 and that the Bayes' decision rule minimizes the probability of making incorrect decisions when C_00 = C_11 = 0 and C_10 = C_01 = 1.
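To make the rule concrete, here is a small Python sketch of the Bayes' likelihood-ratio test for a Gaussian shift-in-mean problem; the densities, costs, and priors are illustrative assumptions, not values from the text. It also spot-checks the exercise above: when C_10 - C_00 = C_01 - C_11, the Bayes' threshold reduces to the MAP threshold P(H_0)/P(H_1).

```python
import math
from statistics import NormalDist

# Illustrative setup (assumed): Y ~ N(0, 1) under H0, Y ~ N(1, 1) under H1.
h0 = NormalDist(mu=0.0, sigma=1.0)
h1 = NormalDist(mu=1.0, sigma=1.0)
p_h0, p_h1 = 0.6, 0.4                    # a priori probabilities (assumed)
c00, c10, c01, c11 = 0.0, 1.0, 5.0, 0.0  # costs C_ij of deciding D_i when H_j is true (assumed)

def likelihood_ratio(y):
    """L(y) = f(y|H1) / f(y|H0)."""
    return h1.pdf(y) / h0.pdf(y)

# Bayes' threshold from Equation 6.7.
bayes_threshold = (p_h0 * (c10 - c00)) / (p_h1 * (c01 - c11))

def decide(y):
    """Return 1 for D1 (accept H1), 0 for D0 (accept H0)."""
    return 1 if likelihood_ratio(y) > bayes_threshold else 0

print(decide(-1.0), decide(1.5))         # -> 0 1

# Sanity check (the exercise above): if C10 - C00 == C01 - C11 (= 1, say),
# the Bayes' threshold P(H0)*1 / (P(H1)*1) is exactly the MAP threshold.
assert math.isclose(p_h0 * 1.0 / (p_h1 * 1.0), p_h0 / p_h1)
```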
6.2.4 Minmax and Neyman-Pearson Decision Rules

In order to use the MAP and Bayes' decision rules, we need to know the a priori probabilities P(H_0) and P(H_1) as well as the relative costs. In many engineering applications, these quantities may not be available. In such cases, other decision rules are used that do not require P(H_0), P(H_1), and costs. Two rules that are quite commonly used in these situations are the minmax rule and the Neyman-Pearson rule.
The minmax rule is used when the costs are given but the a priori probabilities P(H_0) and P(H_1) are not known. This decision rule is derived by obtaining the decision rule that minimizes the expected cost corresponding to the value of P(H_1) for which the average cost is maximum.
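A numerical sketch of this idea, under assumed Gaussian densities and costs: sweep the unknown prior p = P(H_1) over a grid, compute the minimum (Bayes) cost for each p, and locate the prior at which that minimum cost is largest; the minmax rule is the Bayes' rule designed for that least-favorable prior.

```python
import math
from statistics import NormalDist

# Assumed problem: Y ~ N(0, 1) under H0, Y ~ N(1, 1) under H1,
# with costs C00 = C11 = 0, C10 = 1 (false alarm), C01 = 2 (miss).
h0, h1 = NormalDist(0.0, 1.0), NormalDist(1.0, 1.0)
c10, c01 = 1.0, 2.0

def bayes_cost(p1):
    """Minimum average cost when P(H1) = p1 and the Bayes' rule for p1 is used."""
    threshold = (1 - p1) * c10 / (p1 * c01)   # Equation 6.7 with C00 = C11 = 0
    y_star = 0.5 + math.log(threshold)        # the LRT reduces to y vs y_star here
    p_false_alarm = 1.0 - h0.cdf(y_star)
    p_miss = h1.cdf(y_star)
    return (1 - p1) * c10 * p_false_alarm + p1 * c01 * p_miss

# The least-favorable prior maximizes the minimum achievable cost.
grid = [i / 1000 for i in range(1, 1000)]
p_star = max(grid, key=bayes_cost)
print(f"least-favorable P(H1) = {p_star:.3f}, minmax cost = {bayes_cost(p_star):.4f}")
```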
The Neyman-Pearson (N-P) rule is used when neither a priori probabilities nor cost assignments are given. The N-P rule is derived by keeping the probability of false alarm, P(D_1|H_0), below some specified value and minimizing the probability of a miss, P(D_0|H_1).
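To illustrate, here is a minimal Python sketch for a Gaussian shift-in-mean problem; the densities and the false-alarm level α are assumed for illustration. Because the likelihood ratio is monotone in y in this case, constraining the false-alarm probability fixes the threshold directly:

```python
from statistics import NormalDist

# Illustrative Gaussian problem (assumed, not from the text):
# Y ~ N(0, 1) under H0 and Y ~ N(2, 1) under H1; require P(D1|H0) <= alpha.
alpha = 0.05
h0 = NormalDist(mu=0.0, sigma=1.0)
h1 = NormalDist(mu=2.0, sigma=1.0)

# The likelihood ratio is monotone increasing in y, so the N-P test reduces
# to comparing y with a threshold set so the false-alarm probability is alpha.
y_star = h0.inv_cdf(1.0 - alpha)     # P(Y > y_star | H0) = alpha
p_detect = 1.0 - h1.cdf(y_star)      # P(D1|H1), maximized subject to the constraint
print(f"threshold = {y_star:.3f}, P_D = {p_detect:.3f}, P_F = {alpha}")
```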
Details of the derivation of the minmax and N-P decision rules are omitted. Both of these decision rules also lead to a test of the form

L(y) = f_{Y|H_1}(y|H_1) / f_{Y|H_0}(y|H_0)  ≷  γ  (decide H_1 if >, H_0 if <)    (6.8)

where γ is the decision threshold. Thus, only the value of the threshold with which the likelihood ratio is compared varies with the criterion that is optimized.
In many applications, including radar systems, the performance of decision rules is displayed in terms of a graph of the detection probability P(D_1|H_1) = P_D versus the probability of false alarm P(D_1|H_0) = P_F for various values of the threshold. These curves are called receiver operating characteristic (ROC) curves. (See Problems 6.7-6.13.) An example is shown in Figure 6.3: Figure 6.3a shows the conditional densities with P_D and P_F indicated (note that 2A is the difference in the means), and Figure 6.3b shows example ROC curves as a function of the mean difference divided by the standard deviation.

Figure 6.3 (a) Conditional densities and decision threshold, with P_D and P_F indicated. (b) Example ROC curves for several values of the mean-to-standard-deviation ratio.
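A short Python sketch that traces an ROC curve by sweeping the threshold, for an assumed Gaussian setup matching the situation of Figure 6.3 (means ±A, common variance, normalized so the noise has unit standard deviation):

```python
from statistics import NormalDist

# Assumed setup for Figure 6.3's situation: after normalizing by sigma,
# Y ~ N(0, 1) under H0 and Y ~ N(d, 1) under H1, where d = 2A/sigma.
def roc_points(d, n=11):
    """Return (P_F, P_D) pairs for n equally spaced thresholds."""
    h0, h1 = NormalDist(0.0, 1.0), NormalDist(d, 1.0)
    points = []
    for i in range(n):
        y_star = -3.0 + (d + 6.0) * i / (n - 1)   # sweep across both densities
        p_f = 1.0 - h0.cdf(y_star)                # P(D1|H0)
        p_d = 1.0 - h1.cdf(y_star)                # P(D1|H1)
        points.append((p_f, p_d))
    return points

for p_f, p_d in roc_points(2.0, n=5):
    print(f"P_F = {p_f:.3f}  P_D = {p_d:.3f}")
# Larger 2A/sigma pushes the curve toward the ideal corner (P_F = 0, P_D = 1).
```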
6.3 Binary Detection with Multiple Observations

The received waveform is Y(t) = X(t) + N(t), 0 ≤ t ≤ T, where

X(t) = s_0(t)  under hypothesis H_0
X(t) = s_1(t)  under hypothesis H_1

and s_0(t) and s_1(t) are the waveforms used by the transmitter to represent 0 and 1, respectively. In a simple case s_0(t) can be a rectangular pulse of duration T and amplitude -1, and s_1(t) can be a rectangular pulse of duration T and amplitude 1. In any case, s_0(t) and s_1(t) are known deterministic waveforms; the only thing the receiver does not know in advance is which one of the two waveforms is transmitted during a given interval.

Suppose the receiver takes m samples of Y(t), denoted by Y_1, Y_2, ..., Y_m. Note that Y_k = Y(t_k), t_k ∈ (0, T), is a random variable and y_k is a particular value of Y_k. These samples under the two hypotheses are given by
Y_k = s_{0,k} + N_k    under H_0
Y_k = s_{1,k} + N_k    under H_1,    k = 1, 2, ..., m

Let Y = [Y_1, Y_2, ..., Y_m]^T, and assume that the distributions of Y under H_0 and H_1 are given (the joint distribution of Y_1, Y_2, ..., Y_m will be the same as the joint distribution of N_1, N_2, ..., N_m except for a translation of the means by s_{0,k} or s_{1,k}, k = 1, 2, ..., m).
A direct extension of the MAP decision rule discussed in the preceding section leads to a decision algorithm of the form

L(y) = f_{Y|H_1}(y|H_1) / f_{Y|H_0}(y|H_0)  ≷  P(H_0)/P(H_1)  (decide H_1 if >, H_0 if <)    (6.9)

where y = [y_1, y_2, ..., y_m]^T and the conditional densities are the joint densities of the m observations.
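For the common special case of independent, identically distributed Gaussian noise samples, the joint-density ratio in Equation 6.9 collapses to a comparison of correlations with the two known waveforms. The sketch below (the sample waveforms, noise level, and priors are illustrative assumptions) implements the vector MAP rule in that form:

```python
import math
import random

# Assumed setup: m samples of rectangular pulses with amplitudes -1 (H0) and
# +1 (H1) in iid Gaussian noise of variance sigma^2; equal priors assumed.
m, sigma = 8, 1.0
p_h0, p_h1 = 0.5, 0.5
s0 = [-1.0] * m                     # samples of s0(t)
s1 = [+1.0] * m                     # samples of s1(t)

def decide(y):
    """Vector MAP rule (Equation 6.9) for iid Gaussian noise, in correlation form.

    Taking the log of the joint-density ratio shows we should decide H1 when
    sum_k y_k (s1_k - s0_k) exceeds a threshold built from the prior ratio and
    the waveform energies."""
    corr = sum(yk * (a - b) for yk, a, b in zip(y, s1, s0))
    e0 = sum(a * a for a in s0)
    e1 = sum(a * a for a in s1)
    threshold = sigma**2 * math.log(p_h0 / p_h1) + 0.5 * (e1 - e0)
    return 1 if corr > threshold else 0

# Quick trial: transmit s1(t) and add noise.
random.seed(1)
y = [a + random.gauss(0.0, sigma) for a in s1]
print("decision:", decide(y))       # 1 with high probability at this SNR
```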