Chapter 10
Detection and Parameter Estimation
10.1 (a) Using trigonometric identities,

$s_2(t) = \cos\left(2\pi t - \frac{2\pi}{3}\right) = \cos(2\pi t)\cos\frac{2\pi}{3} + \sin(2\pi t)\sin\frac{2\pi}{3}$

$s_3(t) = \cos\left(2\pi t + \frac{2\pi}{3}\right) = \cos(2\pi t)\cos\frac{2\pi}{3} - \sin(2\pi t)\sin\frac{2\pi}{3}$

Also, $\int_{-1/2}^{1/2}\cos^2(2\pi t)\,dt = \int_{-1/2}^{1/2}s_1^2(t)\,dt = \frac{1}{2}$, so the orthonormal basis functions are

$\phi_1(t) = \sqrt{2}\cos 2\pi t, \qquad \phi_2(t) = \sqrt{2}\sin 2\pi t, \qquad -\frac{1}{2}\le t\le\frac{1}{2}$

Therefore,

$s_1(t) = \frac{1}{\sqrt{2}}\,\phi_1(t)$

$s_2(t) = -\frac{1}{2\sqrt{2}}\,\phi_1(t) + \frac{\sqrt{3}}{2\sqrt{2}}\,\phi_2(t)$

$s_3(t) = -\frac{1}{2\sqrt{2}}\,\phi_1(t) - \frac{\sqrt{3}}{2\sqrt{2}}\,\phi_2(t)$
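The expansion coefficients above can be checked numerically. This is an illustrative script, not part of the original solution; it assumes the signal set $\cos(2\pi t)$, $\cos(2\pi t \mp 2\pi/3)$ on $[-1/2, 1/2]$:

```python
import math

def integrate(f, a, b, n=20000):
    # composite midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

s2   = lambda t: math.cos(2 * math.pi * t - 2 * math.pi / 3)
phi1 = lambda t: math.sqrt(2) * math.cos(2 * math.pi * t)
phi2 = lambda t: math.sqrt(2) * math.sin(2 * math.pi * t)

# expansion coefficients of s2 along phi1 and phi2
c21 = integrate(lambda t: s2(t) * phi1(t), -0.5, 0.5)
c22 = integrate(lambda t: s2(t) * phi2(t), -0.5, 0.5)
print(round(c21, 4), round(c22, 4))  # -0.3536 0.6124, i.e. -1/(2*sqrt(2)) and sqrt(3)/(2*sqrt(2))
```

Running the same loop with $s_3$ flips the sign of the second coefficient.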
(b) The decision space is shown below.
10.2 The received signal under the two hypotheses is

$H_1: Y(t) = s(t) + N(t)$

$H_0: Y(t) = N(t)$

where the arrival time $t_0$ of the pulse is uniformly distributed,

$f_{T_0}(t_0) = \frac{1}{T_2 - T_1} \quad\text{for } T_1 < t_0 < T_2$

[Figure: $s(t)$ is a rectangular pulse of amplitude $A$ on $[t_0,\ t_0+T]$; the time axis marks $T_1$, $t_0$, $t_0+T$ and $T_2$.]
[Figure for 10.1(b): the decision space is divided into three sectors of angle $2\pi/3$, labeled "decide $s_1$", "decide $s_2$" and "decide $s_3$"; the received signal is correlated with $\phi_1(t)$ and $\phi_2(t)$ and the largest decision variable is chosen.]
Signal detection and estimation
where $f_N(n) = \frac{1}{\sqrt{\pi N_0}}\,e^{-n^2/N_0}$.
The problem may be reduced to the scalar observation $Y_1 = \int_{t_0}^{T_2} Y(t)\,dt$. Under $H_0$, we have

$Y_1 = \int_{t_0}^{T_2} N(t)\,dt \triangleq N_1 \qquad\text{and}\qquad E[N_1] = \int_{t_0}^{T_2} E[N(t)]\,dt = 0$

$E[N_1^2] = E\left[\int_{t_0}^{T_2} N(t_1)\,dt_1\int_{t_0}^{T_2} N(t_2)\,dt_2\right] = \int_{t_0}^{T_2}\int_{t_0}^{T_2} E[N(t_1)N(t_2)]\,dt_1\,dt_2$

where

$E[N(t_1)N(t_2)] = \frac{N_0}{2}\,\delta(t_1 - t_2)$

so that

$E[N_1^2] = \frac{N_0}{2}\int_{t_0}^{T_2}\int_{t_0}^{T_2}\delta(t_1-t_2)\,dt_1\,dt_2 = \frac{N_0}{2}(T_2 - t_0) = \mathrm{var}[N_1]$

Under $H_1$, we have $\int_{t_0}^{T_2} s(t)\,dt = \int_{t_0}^{T_2} A\,dt = A(T_2 - t_0)$. Then, the problem reduces to

$H_1: Y_1 = A(T_2 - t_0) + N_1$

$H_0: Y_1 = N_1$

The LRT is
$\Lambda(y_1) = \frac{f_{Y_1|H_1}(y_1|H_1)}{f_{Y_1|H_0}(y_1|H_0)} = \frac{\int f_{Y_1|T_0,H_1}(y_1|t_0,H_1)\,f_{T_0}(t_0)\,dt_0}{f_{Y_1|H_0}(y_1|H_0)}$

$= \frac{\dfrac{1}{T_2-T_1}\displaystyle\int_{T_1}^{T_2}\dfrac{1}{\sqrt{\pi N_0(T_2-t_0)}}\exp\left\{-\dfrac{[y_1 - A(T_2-t_0)]^2}{N_0(T_2-t_0)}\right\}dt_0}{\dfrac{1}{\sqrt{\pi N_0(T_2-t_0)}}\exp\left\{-\dfrac{y_1^2}{N_0(T_2-t_0)}\right\}} \gtrless^{H_1}_{H_0} \eta$

For a given arrival time $t_0$, expanding the square and cancelling the common factors, the ratio of exponentials is

$\exp\left\{\frac{2A(T_2-t_0)\,y_1 - A^2(T_2-t_0)^2}{N_0(T_2-t_0)}\right\} = \exp\left\{\frac{2Ay_1 - A^2(T_2-t_0)}{N_0}\right\} \gtrless^{H_1}_{H_0} \eta$

Therefore, taking the natural logarithm and rearranging terms,

$y_1 \gtrless^{H_1}_{H_0} \frac{N_0\ln\eta}{2A} + \frac{AT_2 - At_0}{2}$.
10.3 From (10.85), the probability of error is

$P(\varepsilon) = Q\left(\frac{1}{2}\sqrt{\frac{2E_d}{N_0}}\right)$ where $E_d = E_1 + E_0 - 2E_{10}$

and

$E_1 = \int_0^T s_1^2(t)\,dt = A^2T$

$E_0 = \int_0^T s_0^2(t)\,dt = A^2T$

$E_{10} = \int_0^T s_1(t)\,s_0(t)\,dt = \frac{A^2T}{2}, \qquad \rho_{10} = \frac{1}{2}$

so that $E_d = A^2T$ and

$P(\varepsilon) = Q\left(\frac{1}{2}\sqrt{\frac{2A^2T}{N_0}}\right) = Q\left(\sqrt{\frac{A^2T}{2N_0}}\right)$

The optimum receiver is shown below.
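The error probability is easy to evaluate with the standard identity $Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$. A small helper (the parameter values are arbitrary examples, not problem data):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def prob_error(A, T, N0):
    # P(error) = Q(sqrt(A^2 T / (2 N0))) for the signal pair of Problem 10.3
    return Q(math.sqrt(A * A * T / (2 * N0)))

print(round(prob_error(A=1.0, T=2.0, N0=1.0), 4))  # Q(1) = 0.1587
```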
10.4 We have

$H_1: Y(t) = \begin{cases} s_1(t) + W(t) \\ s_2(t) + W(t)\end{cases}$

$H_0: Y(t) = W(t)$

Under $H_1$, we have

$Y_1 = \int_0^T [s_1(t) + W(t)]\,\phi_1(t)\,dt = \sqrt{E_1} + W_1$

$Y_2 = \int_0^T [s_2(t) + W(t)]\,\phi_2(t)\,dt = \sqrt{E_2} + W_2$

where $\phi_k(t) = s_k(t)/\sqrt{E_k}$.
[Figure for 10.3: $Y(t)$ is correlated with each of the two signals over $[0, T]$; the correlator outputs $Y_1$ and $Y_2$ are compared, deciding $H_1$ when $Y_1 > Y_2$.]
The problem reduces to:

$Y_1 = \begin{cases} \sqrt{E_1} + W_1, & H_1,\ s_1(t)\text{ present} \\ W_1, & H_1,\ s_2(t)\text{ present} \\ W_1, & H_0\end{cases} \qquad Y_2 = \begin{cases} W_2, & H_1,\ s_1(t)\text{ present} \\ \sqrt{E_2} + W_2, & H_1,\ s_2(t)\text{ present} \\ W_2, & H_0\end{cases}$

Under $H_0$, we have

$Y_k = \int_0^T W(t)\,\phi_k(t)\,dt = W_k, \quad k = 1, 2.$

The LRT is

$\Lambda(\mathbf{y}) = \frac{f_{\mathbf{Y}|H_1}(\mathbf{y}|H_1)}{f_{\mathbf{Y}|H_0}(\mathbf{y}|H_0)} = \frac{f_{\mathbf{Y}|H_1,S_1}(\mathbf{y}|H_1,s_1)\,P(s_1) + f_{\mathbf{Y}|H_1,S_2}(\mathbf{y}|H_1,s_2)\,P(s_2)}{f_{\mathbf{Y}|H_0}(\mathbf{y}|H_0)}$

where

$f_{Y_1|H_1,S_1}(y_1|H_1,s_1) = \frac{1}{\sqrt{\pi N_0}}\exp\left[-\frac{(y_1 - \sqrt{E_1})^2}{N_0}\right]$
[Figure: $Y(t)$ is correlated with $\phi_1(t)$ and $\phi_2(t)$ over $[0, T]$ to produce $Y_1$ and $Y_2$.]
$f_{Y_1|H_1,S_2}(y_1|H_1,s_2) = \frac{1}{\sqrt{\pi N_0}}\exp\left(-\frac{y_1^2}{N_0}\right)$

$f_{Y_2|H_1,S_1}(y_2|H_1,s_1) = \frac{1}{\sqrt{\pi N_0}}\exp\left(-\frac{y_2^2}{N_0}\right)$

$f_{Y_2|H_1,S_2}(y_2|H_1,s_2) = \frac{1}{\sqrt{\pi N_0}}\exp\left[-\frac{(y_2 - \sqrt{E_2})^2}{N_0}\right]$

and

$f_{Y_k|H_0}(y_k|H_0) = \frac{1}{\sqrt{\pi N_0}}\exp\left(-\frac{y_k^2}{N_0}\right), \quad k = 1, 2$

Therefore, with $P(s_1) = P(s_2) = 1/2$, the LRT becomes

$\Lambda(\mathbf{y}) = \frac{1}{2}\exp\left[\frac{\sqrt{E_1}\,(2y_1 - \sqrt{E_1})}{N_0}\right] + \frac{1}{2}\exp\left[\frac{\sqrt{E_2}\,(2y_2 - \sqrt{E_2})}{N_0}\right] \gtrless^{H_1}_{H_0} \eta$
When $\eta = 1$, the LRT becomes

$\exp\left(\frac{2\sqrt{E_1}\,y_1}{N_0}\right)\exp\left(-\frac{E_1}{N_0}\right) + \exp\left(\frac{2\sqrt{E_2}\,y_2}{N_0}\right)\exp\left(-\frac{E_2}{N_0}\right) \gtrless^{H_1}_{H_0} 2$
The optimum receiver may be implemented with an exponential nonlinearity in each branch, as shown below.
10.5 (a) The probability of error is given by

$P(\varepsilon) = Q\left(\frac{1}{2}\sqrt{\frac{2E_d}{N_0}}\right)$ where $E_d = E_1 + E_0 - 2E_{10}$

and

$E_1 = \int_0^2 e^{-2t}\,dt = \frac{1}{2}(1 - e^{-4}) = 0.49$

The signals are antipodal $\Rightarrow E_{10} = -E_1$ and $E_d = 4E_1 = 1.96$, so

$P(\varepsilon) = Q\left(\frac{1}{2}\sqrt{\frac{3.92}{N_0}}\right)$.

(b) The block diagram is shown below with $\phi_1(t) = 1.43\,s_1(t)$.
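A quick numerical check of these numbers (illustrative only; it assumes $s_1^2(t) = e^{-2t}$ on $[0, 2]$, consistent with the integral in part (a)):

```python
import math

E1 = (1 - math.exp(-4)) / 2   # closed form of the integral of e^{-2t} over [0, 2]
Ed = 4 * E1                   # antipodal signals: Ed = E1 + E0 - 2*E10 = 4*E1
scale = 1 / math.sqrt(E1)     # normalization in phi1(t) = s1(t)/sqrt(E1)
print(round(E1, 2), round(Ed, 2), round(scale, 2))  # 0.49 1.96 1.43
```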
10.6 At the receiver, we have

$H_1: Y(t) = s_1(t) + W(t), \quad 0 \le t \le T$

$H_2: Y(t) = s_2(t) + W(t), \quad 0 \le t \le T$
[Figure for 10.5(b): $Y(t)$ is correlated with $\phi_1(t)$ over $[0, T]$ and $y_1$ is compared with a threshold set by the priors $P_0$ and $P_1$; decide $H_1$ if it is exceeded, $H_0$ otherwise.]

[Figure for 10.4: each branch correlates $Y(t)$ with its signal over $[0, T]$, scales the output $Y_k$ by $2\sqrt{E_k}/N_0$, applies the nonlinearity $\exp[\,\cdot\,]$ and weights by $e^{-E_k/N_0}$, $k = 1, 2$; the branch outputs are summed and compared with the threshold 2.]
$E_1 = E_2 = E = \frac{T}{2}$ and $\int_0^T s_1(t)\,s_2(t)\,dt = 0 \Rightarrow s_1(t)$ and $s_2(t)$ are uncorrelated.

The receiver is shown below, where

$\phi_k(t) = \frac{s_k(t)}{\sqrt{E}} = \sqrt{\frac{2}{T}}\,s_k(t), \quad k = 1, 2.$

[Figure: $Y(t)$ is correlated with $\phi_1(t)$ and $\phi_2(t)$ over $[0, T]$; choose the largest of $Y_1$, $Y_2$.]

The observation variables $Y_1$ and $Y_2$ are then

$Y_1 = \begin{cases} H_1: \int_0^T Y(t)\,\phi_1(t)\,dt = \sqrt{E} + W_1 \\ H_2: \int_0^T Y(t)\,\phi_1(t)\,dt = W_1\end{cases}$

$Y_2 = \begin{cases} H_1: \int_0^T Y(t)\,\phi_2(t)\,dt = W_2 \\ H_2: \int_0^T Y(t)\,\phi_2(t)\,dt = \sqrt{E} + W_2\end{cases}$

This is the general binary detection case. Then,

$\mathbf{Y} = \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix}, \quad \mathbf{s}_1 = \begin{bmatrix} s_{11} \\ s_{12} \end{bmatrix} \quad\text{and}\quad \mathbf{s}_2 = \begin{bmatrix} s_{21} \\ s_{22} \end{bmatrix}$
The conditional means are

$\mathbf{m}_1 = E[\mathbf{Y}|H_1] = \begin{bmatrix} \sqrt{E} \\ 0 \end{bmatrix} = \begin{bmatrix} s_{11} \\ s_{12} \end{bmatrix} \quad\text{and}\quad \mathbf{m}_2 = E[\mathbf{Y}|H_2] = \begin{bmatrix} 0 \\ \sqrt{E} \end{bmatrix} = \begin{bmatrix} s_{21} \\ s_{22} \end{bmatrix}$

$s_1(t)$ and $s_2(t)$ uncorrelated $\Rightarrow$ the covariance matrix is

$\mathbf{C}_1 = \mathbf{C}_2 = \mathbf{C} = \begin{bmatrix} N_0/2 & 0 \\ 0 & N_0/2 \end{bmatrix} = \frac{N_0}{2}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

and the probability of error is

$P(\varepsilon) = Q\left(\frac{1}{2}\sqrt{\frac{2E_d}{N_0}}\right) = Q\left(\sqrt{\frac{T}{2N_0}}\right) \quad\text{where } E_d = E_1 + E_2 = 2E = T$
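The orthogonal-signaling error probability of Problem 10.6 can be cross-checked by simulation (the values $T = 2$, $N_0 = 1$ are hypothetical):

```python
import math, random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

T, N0 = 2.0, 1.0
E = T / 2
p_formula = Q(math.sqrt(T / (2 * N0)))

# Monte Carlo under H1: Y1 = sqrt(E) + W1, Y2 = W2; an error occurs if Y2 > Y1
random.seed(7)
sigma = math.sqrt(N0 / 2)
trials = 200_000
errors = sum(random.gauss(0, sigma) > math.sqrt(E) + random.gauss(0, sigma)
             for _ in range(trials))
print(round(p_formula, 3), round(errors / trials, 3))
```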
10.7 At the receiver, we have

$H_1: Y(t) = \sqrt{E_1}\,s_1(t) + W(t), \quad 0 \le t \le T$

$H_2: Y(t) = \sqrt{E_2}\,s_2(t) + W(t), \quad 0 \le t \le T$

[Figure: correlation receiver with branches $\phi_1(t)$ and $\phi_2(t)$ producing $Y_1$ and $Y_2$.]
with

$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} \quad\text{and}\quad \phi_2(t) = \frac{s_2(t)}{\left[\int_0^T s_2^2(t)\,dt\right]^{1/2}}$

Since the signals are orthogonal, we can have a correlation receiver with two orthogonal functions or with the one normalized difference signal $s(t)$ given by

$s(t) = \frac{\sqrt{E_1}\,s_1(t) - \sqrt{E_2}\,s_2(t)}{\sqrt{E_1 + E_2}} = \sqrt{\frac{2}{3}}\left[s_1(t) - \frac{1}{\sqrt{2}}\,s_2(t)\right]$

for $E_1 = 1$ and $E_2 = 1/2$.
We obtain the sufficient statistic $T(y) = \int_0^T y(t)\,s(t)\,dt$. The conditional means are

$E[T(y)|H_1] = \int_0^T E\left\{[\sqrt{E_1}\,s_1(t) + W(t)]\,\sqrt{\frac{2}{3}}\left[s_1(t) - \frac{1}{\sqrt{2}}\,s_2(t)\right]\right\}dt = \sqrt{\frac{2}{3}}$

$E[T(y)|H_2] = \int_0^T E\left\{[\sqrt{E_2}\,s_2(t) + W(t)]\,\sqrt{\frac{2}{3}}\left[s_1(t) - \frac{1}{\sqrt{2}}\,s_2(t)\right]\right\}dt = -\sqrt{\frac{1}{6}}$

The noise variance is $\mathrm{var}[T(y)|H_0] = 1/2$. Hence, the performance index is

$d^2 = \frac{\{E[T(y)|H_1] - E[T(y)|H_2]\}^2}{\mathrm{var}[T(y)|H_0]} = \frac{\left(\sqrt{2/3} + \sqrt{1/6}\right)^2}{1/2} = 3$

The probabilities of false alarm and detection are

$P_F = Q\left(\frac{d}{2}\right) = Q\left(\frac{\sqrt{3}}{2}\right)$

[Figure: $y(t)$ is multiplied by $s(t)$ and integrated over $[0, T]$ to form $T(y)$.]
$P_D = Q\left(-\frac{d}{2}\right) = Q\left(-\frac{\sqrt{3}}{2}\right)$

and thus, the achievable probability of error is

$P(\varepsilon) = \frac{1}{\sqrt{2\pi}}\int_{\sqrt{3}/2}^{\infty} e^{-x^2/2}\,dx = Q\left(\frac{\sqrt{3}}{2}\right)$

(b) In this case, the two signals will have the same energy $E = \frac{1}{2}\left(\frac{3}{2}\right) = \frac{3}{4}$ and thus,

$d^2 = \frac{2E}{1/2} = 4E \quad\Rightarrow\quad d = 2\sqrt{E}$

so that

$P(\varepsilon) = Q\left(\frac{d}{2}\right) = Q\left(\sqrt{E}\right) = Q\left(\frac{\sqrt{3}}{2}\right)$
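The performance index of Problem 10.7 can be verified numerically. This sketch assumes the values used in the solution, $E_1 = 1$, $E_2 = 1/2$ and noise variance $1/2$:

```python
import math

E1, E2 = 1.0, 0.5
sigma2 = 0.5                    # var[T(y)|H0] = N0/2

m1 = E1 / math.sqrt(E1 + E2)    # E[T(y)|H1]
m2 = -E2 / math.sqrt(E1 + E2)   # E[T(y)|H2]
d2 = (m1 - m2) ** 2 / sigma2
print(round(m1, 4), round(m2, 4), round(d2, 4))  # 0.8165 -0.4082 3.0
```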
10.8 We need to find the sufficient statistic. Since $s_1(t)$ and $s_2(t)$ are orthogonal, let

$\phi_1(t) = \frac{\sqrt{E_1}\,s_1(t) + \sqrt{E_2}\,s_2(t)}{\sqrt{E_1 + E_2}}$

Then,

$Y_1 = \int_0^T y(t)\,\phi_1(t)\,dt = \begin{cases} H_1: \displaystyle\int_0^T [s(t) + W(t)]\,\frac{\sqrt{E_1}\,s_1(t) + \sqrt{E_2}\,s_2(t)}{\sqrt{E_1+E_2}}\,dt \\[2mm] H_0: \displaystyle\int_0^T W(t)\,\frac{\sqrt{E_1}\,s_1(t) + \sqrt{E_2}\,s_2(t)}{\sqrt{E_1+E_2}}\,dt \end{cases}$

where, under $H_1$, the transmitted signal $s(t)$ is $\sqrt{E_1}\,s_1(t)$ with probability $P_1$ or $\sqrt{E_2}\,s_2(t)$ with probability $P_2$.

[Figure: the decision region in the $(s_1, s_2)$ plane with signal energies $E_1$ and $E_2$.]
$Y_1$ is Gaussian with conditional means

$m_0 = E[Y_1|H_0] = 0$

and

$m_1 = E[Y_1|H_1] = P_1\int_0^T \sqrt{E_1}\,s_1(t)\,\phi_1(t)\,dt + P_2\int_0^T \sqrt{E_2}\,s_2(t)\,\phi_1(t)\,dt = P_1\frac{E_1}{\sqrt{E_1+E_2}} + P_2\frac{E_2}{\sqrt{E_1+E_2}}$

The variance is $N_0/2$ and thus,

$f_{Y_1|H_0}(y_1|H_0) = \frac{1}{\sqrt{\pi N_0}}\exp\left(-\frac{y_1^2}{N_0}\right)$

$f_{Y_1|H_1}(y_1|H_1) = \frac{1}{\sqrt{\pi N_0}}\exp\left[-\frac{(y_1 - m_1)^2}{N_0}\right]$
Applying the likelihood ratio test, taking the natural logarithm and rearranging terms, we obtain

$y_1 \gtrless^{H_1}_{H_0} \frac{N_0}{2m_1}\ln\eta + \frac{m_1}{2}$

For minimum probability of error, $\eta = 1$ and the decision rule becomes

$y_1 \gtrless^{H_1}_{H_0} \frac{m_1}{2} = \frac{P_1E_1 + P_2E_2}{2\sqrt{E_1+E_2}}$
The optimum receiver is shown below. [Figure: $y(t)$ is correlated with $\phi_1(t)$ over $[0, T]$ and the output $y_1$ is compared with the threshold $m_1/2$.]
10.9 (a) The energy is

$E = \int_0^T s_k^2(t)\,dt = A^2\left[\int_0^T \phi_1^2(t)\,dt + \int_0^T \phi_2^2(t)\,dt + \int_0^T \phi_3^2(t)\,dt\right] + B$

where $B$ is the sum involving terms of the form

$\int_0^T \phi_j(t)\,\phi_k(t)\,dt, \quad j \ne k$

But the $\phi$s are orthonormal $\Rightarrow B = 0$ and thus,

$E = 3A^2 \quad\Rightarrow\quad A = \sqrt{\frac{E}{3}}$.
(b) The signals $s_k(t)$, $k = 0, 1, \dots, 7$, span a 3-dimensional space. The coefficients are

$y_k = \int_0^T y(t)\,\phi_k(t)\,dt = \int_0^T [s(t) + W(t)]\,\phi_k(t)\,dt = s_k + W_k, \quad k = 1, 2, 3$

such that

$\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}, \quad \mathbf{W} = \begin{bmatrix} W_1 \\ W_2 \\ W_3 \end{bmatrix} \quad\text{and}\quad \mathbf{s}_k = \begin{bmatrix} s_{k1} \\ s_{k2} \\ s_{k3} \end{bmatrix}$
Hence,

$\mathbf{s}_0 = \sqrt{\frac{E}{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad \mathbf{s}_1 = \sqrt{\frac{E}{3}}\begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}, \quad \mathbf{s}_2 = \sqrt{\frac{E}{3}}\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, \quad \mathbf{s}_3 = \sqrt{\frac{E}{3}}\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix},$

$\mathbf{s}_4 = \sqrt{\frac{E}{3}}\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}, \quad \mathbf{s}_5 = \sqrt{\frac{E}{3}}\begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}, \quad \mathbf{s}_6 = \sqrt{\frac{E}{3}}\begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}, \quad \mathbf{s}_7 = \sqrt{\frac{E}{3}}\begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}$

that is, the eight sign combinations of $\pm\sqrt{E/3}$ along the three axes.
Since the criterion is minimum probability of error, the receiver is then a
"minimum distance" receiver.
The receiver evaluates the sufficient statistic

$T_k = \|\mathbf{y} - \mathbf{s}_k\|^2 = \int_0^T [y(t) - s_k(t)]^2\,dt, \quad k = 0, 1, \dots, 7$

and chooses the hypothesis for which $T_k$ is smallest.
Since the transmitted signals have equal energy, the minimum probability of
error receiver can also be implemented as a "largest of " receiver. The receiver
computes the sufficient statistic

$T_k = \mathbf{s}_k^T\mathbf{y} = \int_0^T s_k(t)\,y(t)\,dt, \quad k = 0, 1, \dots, 7$

and chooses the hypothesis for which $T_k$ is largest.
(c) [Figure: the eight signal vectors $\mathbf{s}_0, \dots, \mathbf{s}_7$ at the vertices of a cube of side $2\sqrt{E/3}$ centered at the origin.]
Using "minimum distance" or "nearest neighbor", the decision regions are
$H_0: y_1 > 0,\ y_2 > 0,\ y_3 > 0$

$H_1: y_1 > 0,\ y_2 > 0,\ y_3 < 0$

$H_2: y_1 > 0,\ y_2 < 0,\ y_3 > 0$

$H_3: y_1 > 0,\ y_2 < 0,\ y_3 < 0$

$H_4: y_1 < 0,\ y_2 > 0,\ y_3 > 0$

$H_5: y_1 < 0,\ y_2 > 0,\ y_3 < 0$

$H_6: y_1 < 0,\ y_2 < 0,\ y_3 > 0$

$H_7: y_1 < 0,\ y_2 < 0,\ y_3 < 0$
(d) The probability of error is

$P(\varepsilon) = \sum_{j=0}^{7} P(\varepsilon|H_j)\,P(H_j) = P(\varepsilon|H_0)$

by symmetry, for equally likely hypotheses. $Y_1$, $Y_2$ and $Y_3$ are independent Gaussian random variables with conditional means

$E[Y_1|H_0] = E[Y_2|H_0] = E[Y_3|H_0] = \sqrt{\frac{E}{3}}$

and conditional variances

$\mathrm{var}[Y_1|H_0] = \mathrm{var}[Y_2|H_0] = \mathrm{var}[Y_3|H_0] = \frac{N_0}{2}$

Therefore, $P(\varepsilon) = 1 - P(c|H_0) = 1 - P[Y_1 > 0,\ Y_2 > 0,\ Y_3 > 0]$

$= 1 - P(Y_1 > 0)\,P(Y_2 > 0)\,P(Y_3 > 0) = 1 - \left[\int_0^\infty \frac{1}{\sqrt{\pi N_0}}\exp\left(-\frac{(y - \sqrt{E/3})^2}{N_0}\right)dy\right]^3 = 1 - \left[1 - Q\left(\sqrt{\frac{2E}{3N_0}}\right)\right]^3$
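The closed form for Problem 10.9(d) can be cross-checked by simulation (the values $E = 3$, $N_0 = 2$ are hypothetical):

```python
import math, random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

E, N0 = 3.0, 2.0
p_formula = 1 - (1 - Q(math.sqrt(2 * E / (3 * N0)))) ** 3

# Monte Carlo: transmit s0 = sqrt(E/3)*(1,1,1); an error occurs if any
# coordinate of the received vector falls below zero
random.seed(1)
m, sigma = math.sqrt(E / 3), math.sqrt(N0 / 2)
trials = 200_000
errors = sum(any(random.gauss(m, sigma) < 0 for _ in range(3)) for _ in range(trials))
print(round(p_formula, 3), round(errors / trials, 3))
```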
10.10 (a) We observe that the dimension of the space is 2 and that we have 4 signal levels per axis. The basis functions $\{\phi_1, \phi_2\}$ are such that

$\int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0$

and

$\int_0^T \phi_1^2(t)\,dt = \int_0^T \phi_2^2(t)\,dt = 1$

with

$\phi_1(t) = \sqrt{\frac{2}{T}}\cos 2\pi f_0 t \quad\text{and}\quad \phi_2(t) = \sqrt{\frac{2}{T}}\sin 2\pi f_0 t$

The receiver is then as shown below. [Figure: $Y(t)$ drives two correlators, one with $\phi_1(t)$ and one with $\phi_2(t)$; each integrator output feeds a 4-level threshold detector.]
(c) From (b), we observe that the probability of a correct decision is

$P(c) = P(\text{correct decision along }\phi_1\text{ and along }\phi_2) = P(c\text{ along }\phi_1)\,P(c\text{ along }\phi_2)$

where $P(c\text{ along }\phi_1)$ is, from the figure below, given by

$P(c\text{ along }\phi_1) = \frac{1}{4}\sum_{k=1}^{4} P(\text{correct decision}|s_k) = \frac{1}{4}\left[(1-q) + (1-2q) + (1-2q) + (1-q)\right] = 1 - \frac{6}{4}q$

where $q = Q\left(\dfrac{d}{\sqrt{2N_0}}\right)$.

[Figure: the four amplitude levels $s_1, s_2, s_3, s_4$ along the $\phi_1$ axis, with $d$ the distance from a level to the adjacent decision boundary; the two inner levels can err on either side, the two outer levels on one side only.]
Similarly, $P(c\text{ along }\phi_2) = 1 - \frac{6}{4}q$. Therefore, the probability of a correct decision is

$P(c) = \left(1 - \frac{6}{4}q\right)^2$

and the probability of error is

$P(\varepsilon) = 1 - P(c) = 3q - \frac{9}{4}q^2$
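The two forms of the result, $1 - P(c)$ and the expanded $3q - \frac{9}{4}q^2$, agree identically, as a short script confirms (the values of $d$ and $N_0$ below are arbitrary):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def prob_error_two_ways(d, N0):
    q = Q(d / math.sqrt(2 * N0))
    via_Pc = 1 - (1 - 1.5 * q) ** 2    # 1 - P(c)
    expanded = 3 * q - 2.25 * q * q    # 3q - (9/4) q^2
    return via_Pc, expanded

p1, p2 = prob_error_two_ways(d=1.0, N0=0.5)
print(round(p1, 6), round(p2, 6))  # the two values match
```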
10.11 From (10.104), we have

$T_j(\mathbf{y}) = \|\mathbf{y} - \mathbf{s}_j\|^2 = (\mathbf{y} - \mathbf{s}_j)^T(\mathbf{y} - \mathbf{s}_j) = \mathbf{y}^T\mathbf{y} - 2\,\mathbf{s}_j^T\mathbf{y} + \mathbf{s}_j^T\mathbf{s}_j, \quad j = 1, 2, \dots, M$

For equal energy signals, $\mathbf{y}^T\mathbf{y}$ and $\|\mathbf{s}_j\|^2$ are common to all hypotheses $\Rightarrow$ minimizing $T_j(\mathbf{y})$ is equivalent to maximizing $\mathbf{s}_j^T\mathbf{y}$. Therefore, the receiver computes the sufficient statistic

$T_j(\mathbf{y}) = \mathbf{s}_j^T\mathbf{y} = \int_0^T s_j(t)\,y(t)\,dt, \quad j = 1, 2, \dots, M$

and chooses the hypothesis having the largest dot product. The "largest of" receiver is shown below.
10.12 We have

$H_1: Y(t) = A\,s(t) + W(t)$

$H_0: Y(t) = W(t)$

where $W_1 = \int_0^T W(t)\,s(t)\,dt$,

$E[Y_1|H_1] = A$

$E[Y_1^2|H_1] = E[A^2 + 2AW_1 + W_1^2] = A^2 + \frac{N_0}{2} \quad\Rightarrow\quad \mathrm{var}[Y_1|H_1] = \frac{N_0}{2}$

$A$ unknown $\Rightarrow H_1$ is a composite hypothesis and
[Figure: $Y(t)$ is correlated with $s(t)$ over $[0, T]$ to produce $Y_1 = \int_0^T Y(t)\,s(t)\,dt$.] The reduced problem is

$H_1: Y_1 = A + W_1$

$H_0: Y_1 = W_1$

[Figure for 10.11: the "largest of" receiver — $y(t)$ is correlated with each of $s_1(t), s_2(t), \dots, s_M(t)$ over $[0, T]$, producing $T_1, T_2, \dots, T_M$; the decision selects the largest decision variable.]
$\Lambda_g(y_1) = \frac{\max_a f_{Y_1|A,H_1}(y_1|a, H_1)}{f_{Y_1|H_0}(y_1|H_0)}$

We need the estimate $\hat{a}$ of $A$ such that $\dfrac{\partial \ln f_{Y_1|A}(y_1|a)}{\partial a} = 0 \Rightarrow$ the ML estimate is $\hat{A} = Y_1$.

$\Lambda_g(y_1) = \frac{f_{Y_1|A,H_1}(y_1|\hat{a}, H_1)}{f_{Y_1|H_0}(y_1|H_0)} = \frac{\dfrac{1}{\sqrt{\pi N_0}}\exp\left[-\dfrac{(y_1 - \hat{a})^2}{N_0}\right]}{\dfrac{1}{\sqrt{\pi N_0}}\exp\left[-\dfrac{y_1^2}{N_0}\right]} = \exp\left[-\frac{1}{N_0}\left(\hat{a}^2 - 2\hat{a}y_1\right)\right] \gtrless^{H_1}_{H_0} \eta$

but $\eta = 1$ and $\hat{a} = y_1 \Rightarrow$

$\Lambda_g(y_1) = \exp\left(\frac{y_1^2}{N_0}\right) \gtrless^{H_1}_{H_0} 1 \qquad\text{or}\qquad \frac{y_1^2}{N_0} \gtrless^{H_1}_{H_0} 0$

Therefore, always decide $H_1$ since $y_1^2/N_0 > 0$.
10.13 The receiver correlates $Y(t)$ with $\phi(t)$:

$Y_1 = \int_0^T Y(t)\,\phi(t)\,dt$

$Y_1$ is a sufficient statistic and thus,
$Y_1 = \frac{\sqrt{E}}{\alpha} + W_1$

$f_{Y_1}(y_1) = \frac{1}{\sqrt{\pi N_0}}\exp\left\{-\frac{[y_1 - (\sqrt{E}/\alpha)]^2}{N_0}\right\}$

Hence,

$\frac{\partial \ln f_{Y_1}(y_1)}{\partial \alpha} = \frac{2}{N_0}\left(y_1 - \frac{\sqrt{E}}{\alpha}\right)\frac{\sqrt{E}}{\alpha^2} = 0$

or $\dfrac{\sqrt{E}}{\alpha} = y_1$. Thus, $\hat{\alpha}_{ml} = \dfrac{\sqrt{E}}{y_1}$ and the optimum receiver is shown below.
10.14 The density function of $\alpha$ is $f_\alpha(\alpha) = \frac{1}{\sqrt{2\pi}\,\sigma_\alpha}\,e^{-\alpha^2/2\sigma_\alpha^2}$. Hence, from the MAP equation, we have

$\left[\frac{\partial \ln f_{Y_1|\alpha}(y_1|\alpha)}{\partial\alpha} + \frac{\partial \ln f_\alpha(\alpha)}{\partial\alpha}\right]_{\alpha = \hat{\alpha}_{map}} = 0$

that is,

$\frac{2}{N_0}\left(y_1 - \frac{\sqrt{E}}{\hat{\alpha}_{map}}\right)\frac{\sqrt{E}}{\hat{\alpha}_{map}^2} - \frac{\hat{\alpha}_{map}}{\sigma_\alpha^2} = 0$

[Figure for 10.13: $y(t)$ is correlated with $s(t)/\sqrt{E}$ over $[0, T]$ to give $y_1$; an inverter forms $1/y_1$, which is multiplied by $\sqrt{E}$ to produce $\hat{\alpha}_{ml}$.]
As $\sigma_\alpha^2 \to \infty$, we have

$\frac{2}{N_0}\left(y_1 - \frac{\sqrt{E}}{\hat{\alpha}_{map}}\right)\frac{\sqrt{E}}{\hat{\alpha}_{map}^2} = 0$

Therefore,

$\lim_{\sigma_\alpha^2\to\infty}\hat{\alpha}_{map} = \frac{\sqrt{E}}{y_1} = \hat{\alpha}_{ml}$.
10.15 (a) The ML equation is given by

$\frac{2}{N_0}\int_0^T [y(t) - s(t,\theta)]\,\frac{\partial s(t,\theta)}{\partial\theta}\,dt = 0$

where $s(t,\theta) = A\cos(\omega_c t + \theta)$ and $\dfrac{\partial s(t,\theta)}{\partial\theta} = -A\sin(\omega_c t + \theta)$.

Substituting into the ML equation, we have

$-\frac{2A}{N_0}\int_0^T [y(t) - A\cos(\omega_c t + \theta)]\sin(\omega_c t + \theta)\,dt = 0$

$\Rightarrow \int_0^T y(t)\sin(\omega_c t + \theta)\,dt = A\int_0^T \cos(\omega_c t + \theta)\sin(\omega_c t + \theta)\,dt = \frac{A}{2}\int_0^T \sin[2(\omega_c t + \theta)]\,dt$

Assuming many cycles of the carrier within $[0, T]$, the integral involving the double frequency terms is approximately zero. Hence,

$\int_0^T y(t)\,[\sin\omega_c t\cos\theta + \cos\omega_c t\sin\theta]\,dt \approx 0$

Therefore,

$\cos\theta\int_0^T y(t)\sin\omega_c t\,dt = -\sin\theta\int_0^T y(t)\cos\omega_c t\,dt$
$\tan\theta = -\frac{\displaystyle\int_0^T y(t)\sin\omega_c t\,dt}{\displaystyle\int_0^T y(t)\cos\omega_c t\,dt}$

or,

$\hat{\theta}_{ml} = -\tan^{-1}\left[\frac{\displaystyle\int_0^T y(t)\sin\omega_c t\,dt}{\displaystyle\int_0^T y(t)\cos\omega_c t\,dt}\right]$.
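In discrete time the estimator is just two correlations and an arctangent. A noiseless sketch with hypothetical parameters (with noise added, the same code returns a noisy estimate):

```python
import math

A, theta, fc, T, n = 2.0, 0.7, 50.0, 1.0, 20000
dt = T / n
ts = [(k + 0.5) * dt for k in range(n)]
y = [A * math.cos(2 * math.pi * fc * t + theta) for t in ts]

# the two correlations of the ML phase estimator
S = dt * sum(yk * math.sin(2 * math.pi * fc * t) for yk, t in zip(y, ts))
C = dt * sum(yk * math.cos(2 * math.pi * fc * t) for yk, t in zip(y, ts))
theta_ml = -math.atan2(S, C)
print(round(theta_ml, 4))  # 0.7
```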
(b) Indeed, it can be shown that

$\mathrm{var}[\hat{\theta}_{ml}] \ge \left\{\frac{2}{N_0}\int_0^T \left[\frac{\partial s(t,\theta)}{\partial\theta}\right]^2 dt\right\}^{-1}$

with

$\int_0^T \left[\frac{\partial s(t,\theta)}{\partial\theta}\right]^2 dt = \int_0^T A^2\sin^2(\omega_c t + \theta)\,dt = \frac{A^2}{2}\int_0^T [1 - \cos(2\omega_c t + 2\theta)]\,dt = \frac{A^2T}{2} - \frac{A^2}{2}\int_0^T \cos(2\omega_c t + 2\theta)\,dt \approx \frac{A^2T}{2}$

Hence,

$\mathrm{var}[\hat{\theta}_{ml}] \approx \frac{N_0}{A^2T} \quad\text{when}\quad \frac{A^2T}{N_0} \gg 1$.
10.16 (a) The matched filters to $s_1(t)$ and $s_2(t)$ are $h_1(t) = s_1(T - t)$ and $h_2(t) = s_2(T - t)$, respectively, as shown below.
(b) The output of each filter, when the signal matched to it is applied at the input, is the convolution $y_1(t)$ or $y_2(t)$, as sketched below.
[Figure: $s_1(t)$ and the matched filter $h_1(t) = s_1(T - t)$, plotted for $0 \le t \le 7$, together with the output $y_1(t)$, which peaks at the sampling instant $t = T$.]
[Figure: $s_2(t)$ and $h_2(t) = s_2(T - t)$, together with the output $y_2(t)$, whose peak value is 1.75.]
(c) The output of the filter matched to $s_2(t)$ when the input is $s_1(t)$ is $y(t) = s_1(t) * h_2(t)$, as shown below.
10.17 (a) The signals $s_1(t)$ and $s_2(t)$ are orthonormal. Hence,

$h_1(t) = s_1(T - t) = \begin{cases} \sqrt{2/T}, & T/2 \le t \le T \\ 0, & \text{otherwise}\end{cases}$

and

$h_2(t) = s_2(T - t) = \begin{cases} \sqrt{2/T}, & 0 \le t \le T/2 \\ 0, & \text{otherwise}\end{cases}$.
[Figure for 10.16(c): the cross output $y(t) = s_1(t) * h_2(t)$, plotted for $0 \le t \le 11$, with extreme values $\pm 1.0$.]
[Figure for 10.17: $s_1(t)$ is a pulse of height $\sqrt{2/T}$ on $[0, T/2]$ and $s_2(t)$ is a pulse of height $\sqrt{2/T}$ on $[T/2, T]$.]
(b) When $s_1(t)$ is transmitted, the noise-free outputs of the two matched filters are the convolutions $y_1(t)$ and $y_2(t)$, sketched below. Note that we sample at $t = T$ and thus, $y_1(T) = 1$ and $y_2(T) = 0$.
(c) The SNR at the output of the matched filter is

$\mathrm{SNR} = \frac{2E}{N_0} = \frac{2}{N_0}$ since $E = 1$.
10.18 $s_1(t) = \cos\omega_c t$; the signal energy is $E = T/2$, and the first basis function is $\phi_1(t) = \sqrt{2/T}\cos\omega_c t$. Consequently, the first coefficient in the Karhunen-Loève expansion of $Y(t)$ is

$Y_1 = \int_0^T Y(t)\,\phi_1(t)\,dt = \begin{cases} H_1: \displaystyle\int_0^T [A\cos(\omega_c t + \theta) + W(t)]\,\phi_1(t)\,dt \\[2mm] H_0: \displaystyle\int_0^T W(t)\,\phi_1(t)\,dt \end{cases}$

Then, we select a suitable set of functions $\phi_k(t)$, $k = 2, 3, \dots$, orthogonal to $\phi_1(t)$. We observe that for $k \ge 2$, we always obtain $W_k$ independently of the hypothesis. Only $Y_1$ depends on which hypothesis is true. Thus, $Y_1$ is a sufficient statistic.
[Figure for 10.17(b): the noise-free matched-filter outputs $y_1(t)$ and $y_2(t)$; $y_1(t)$ peaks with value 1 at $t = T$, while $y_2(T) = 0$.]

[Figure for 10.17(a): $h_1(t)$ is a pulse of height $\sqrt{2/T}$ on $[T/2, T]$, and $h_2(t)$ a pulse of height $\sqrt{2/T}$ on $[0, T/2]$.]
$Y_1$ is a Gaussian random variable with conditional means

$E[Y_1|a,\theta,H_1] = a\sqrt{\frac{T}{2}}\cos\theta = a\sqrt{E}\cos\theta$

$E[Y_1|a,\theta,H_0] = E[W_1] = 0$

and variances

$\mathrm{var}[Y_1|a,\theta,H_1] = \mathrm{var}[Y_1|a,\theta,H_0] = \frac{N_0}{2}$

The conditional likelihood ratio is given by

$\Lambda[y(t)|a,\theta] = \frac{f_{Y_1|A,\Theta,H_1}(y_1|a,\theta,H_1)}{f_{Y_1|H_0}(y_1|H_0)} = \exp\left(\frac{2}{N_0}\,y_1\,a\sqrt{E}\cos\theta\right)\exp\left(-\frac{1}{N_0}\,a^2E\cos^2\theta\right)$
$f_{A,\Theta}(a,\theta) = f_A(a)\,f_\Theta(\theta)$ since $A$ and $\Theta$ are independent. Hence,

$\Lambda[y(t)] = \int_A\int_\Theta \Lambda[y(t)|a,\theta]\,f_{A,\Theta}(a,\theta)\,da\,d\theta$

Substituting for $\Lambda[y(t)|a,\theta]$ and $f_{A,\Theta}(a,\theta)$ into the above integral, the decision rule reduces to

$\Lambda[y(t)] = \left(\frac{N_0}{N_0 + 2\sigma_a^2}\right)^{1/2}\exp\left[\frac{2\sigma_a^2\,y_1^2}{N_0(N_0 + 2\sigma_a^2)}\right] \gtrless^{H_1}_{H_0} \eta$

or,

$y_1^2 \gtrless^{H_1}_{H_0} \gamma$

with
$\gamma = \frac{N_0(N_0 + 2\sigma_a^2)}{2\sigma_a^2}\,\ln\left[\eta\left(\frac{N_0 + 2\sigma_a^2}{N_0}\right)^{1/2}\right]$
(b) The receiver can be implemented as follows. [Figure: $Y(t)$ is multiplied by $\sqrt{2/T}\cos\omega_c t$, integrated over $[0, T]$, squared, and compared with the threshold $\gamma$: decide $H_1$ if exceeded, $H_0$ otherwise.]
10.19 Under hypothesis $H_0$, no signal is present and the conditional density function was derived in Example 10.7 to be

$f_{Y_cY_s|H_0}(y_c, y_s|H_0) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{y_c^2 + y_s^2}{2\sigma^2}\right)$

Using the transformations $Y_c = R\cos\Theta$ and $Y_s = R\sin\Theta$, then

$f_{R|H_0}(r|H_0) = \frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)$
and the probability of false alarm, for threshold $\gamma$ and $\sigma^2 = N_0T/4$, is

$P_F = \int_\gamma^\infty \frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)dr = \exp\left(-\frac{\gamma^2}{2\sigma^2}\right) = \exp\left(-\frac{2\gamma^2}{N_0T}\right)$

The probability of detection is

$P_D = \int_A P_D(a)\,f_A(a)\,da$

where,

$P_D(a) = \int_\gamma^\infty \frac{r}{\sigma^2}\exp\left(-\frac{r^2 + (aT/2)^2}{2\sigma^2}\right)I_0\left(\frac{r\,aT/2}{\sigma^2}\right)dr$
Solving for the expressions of $P_D(a)$ and $f_A(a)$, and solving the integral, we obtain

$P_D = \exp\left[-\frac{\gamma^2}{2(\sigma^2 + \sigma_a^2T^2/4)}\right] = \exp\left(-\frac{2\gamma^2}{N_0T + \sigma_a^2T^2}\right)$

Expressing $P_D$ in terms of $P_F$, we obtain

$P_D = P_F^{\,N_0/(N_0 + \sigma_a^2T)}$
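The relation $P_D = P_F^{N_0/(N_0+\sigma_a^2T)}$ is an algebraic identity between the two exponentials above, which a short script confirms for arbitrary positive parameters (the numbers below are hypothetical):

```python
import math

N0, T, sigma_a2, gamma = 1.0, 2.0, 0.8, 1.5
sigma2 = N0 * T / 4   # quadrature noise variance

PF = math.exp(-gamma ** 2 / (2 * sigma2))
PD = math.exp(-gamma ** 2 / (2 * (sigma2 + sigma_a2 * T ** 2 / 4)))
print(abs(PD - PF ** (N0 / (N0 + sigma_a2 * T))) < 1e-12)  # True
```

Since the exponent $N_0/(N_0+\sigma_a^2T)$ is less than 1 and $P_F < 1$, the identity also makes the intuitive ordering $P_D > P_F$ explicit.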
10.20 (a) $E[N_k] = E\left[\int_0^T N(t)\,\phi_k(t)\,dt\right] = \int_0^T E[N(t)]\,\phi_k(t)\,dt = 0$

and

$\mathrm{var}[N_k] = E[N_k^2] = \frac{N_0}{2} + \lambda_k$

(b) $N_0/2$ is the variance of the white noise process, and $\lambda_k$ may be considered as the variance of the colored noise. That is, we assume that the variance is composed of the white noise variance and the colored noise variance. The white noise coefficients are independent; the others are Karhunen-Loève coefficients, which are Gaussian and uncorrelated $\Rightarrow$ independent.

(c) $c(t, u) = \frac{N_0}{2}\,\delta(t - u) \Rightarrow$ white Gaussian noise $\Rightarrow$

$E[N_k] = 0 \quad\text{and}\quad \mathrm{var}[N_k] = E[N_k^2] = \frac{N_0}{2}$
10.21 (a) $N_1(t) = W$ has one eigenfunction, and $N_0$ is the component to filter. It cannot be whitened since the process has no contribution in any other direction in the signal space.

(b) In this case, the noise $N(t) = N_1(t) + N_2(t)$ can be whitened by:

[Figure: $N(t)$ is amplified by $\sqrt{2/N_0}$; a parallel branch with a delay $T$, a gain $1/T$ and a weight $\sigma_w^2/[\sigma_w^2 + (N_0/2)]$ extracts the dc component, which is subtracted to give the whitened noise $\tilde{N}(t)$.]

That is, the whitening is performed by an amplifier and a dc canceller.