
Delft University of Technology

Faculty of Electrical Engineering, Mathematics, and Computer Science


Circuits and Systems Group

ET 4386 Estimation and Detection


Autumn 2017

Exercises - Detection

Problem 1:

Problem 1a: Let H = [1, r, \ldots, r^{N-1}]^T.

p(x; H_1) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(C)} \exp\left[ -\frac{1}{2} (x - AH)^T C^{-1} (x - AH) \right]

p(x; H_0) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(C)} \exp\left[ -\frac{1}{2} x^T C^{-1} x \right]

L(x) = \frac{p(x; H_1)}{p(x; H_0)} > \lambda

\ln L(x) = x^T C^{-1} H A - \frac{1}{2} H^T C^{-1} H A^2 > \ln \lambda

T(x) = x^T C^{-1} H A > \ln \lambda + \frac{1}{2} H^T C^{-1} H A^2 = \lambda'
Problem 1b: T (x) is Gaussian distributed under both H1 and H0 .

E[T; H_0] = E[w^T C^{-1} H A] = 0

E[T; H_1] = E[(AH + w)^T C^{-1} H A] = A^2 H^T C^{-1} H = \frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} r^{2n}

\mathrm{var}[T; H_0] = E\left[ \left( w^T C^{-1} H A \right)^2 \right] = A^2 H^T C^{-1} H = \frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} r^{2n}

\mathrm{var}[T; H_1] = E\left[ \left( (AH + w)^T C^{-1} H A - E[(AH + w)^T C^{-1} H A] \right)^2 \right]
= E\left[ \left( ((AH + w) - E[(AH + w)])^T C^{-1} H A \right)^2 \right] = E\left[ \left( w^T C^{-1} H A \right)^2 \right] = \mathrm{var}[T; H_0] = \frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} r^{2n}
P_{fa} = Q\left( \frac{\lambda'}{\sqrt{\frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} r^{2n}}} \right) \;\Rightarrow\; \lambda' = Q^{-1}(P_{fa}) \sqrt{\frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} r^{2n}}

P_D = Q\left( \frac{\lambda' - \frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} r^{2n}}{\sqrt{\frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} r^{2n}}} \right) = Q\left( Q^{-1}(P_{fa}) - \sqrt{\frac{A^2}{\sigma^2} \sum_{n=0}^{N-1} r^{2n}} \right),
Problem 1c: For 0 \leq r < 1, \sum_{n=0}^{N-1} r^{2n} = \frac{1 - r^{2N}}{1 - r^2} and P_D = Q\left( Q^{-1}(P_{fa}) - \sqrt{\frac{A^2}{\sigma^2} \frac{1 - r^{2N}}{1 - r^2}} \right).

When N \to \infty for 0 \leq r < 1, P_D becomes P_D = Q\left( Q^{-1}(P_{fa}) - \sqrt{\frac{A^2}{\sigma^2} \frac{1}{1 - r^2}} \right),
which will be smaller than 1 for P_{fa} < 1.

For r = 1, P_D becomes P_D = Q\left( Q^{-1}(P_{fa}) - \sqrt{\frac{N A^2}{\sigma^2}} \right), and for N \to \infty, P_D will
approach 1. For r > 1, P_D will also approach 1, as \lim_{N \to \infty} \sum_{n=0}^{N-1} r^{2n} will then be \infty.
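The P_{fa} and P_D expressions above can be checked by simulation. The following is a sketch, not part of the exercise: it assumes C = \sigma^2 I (consistent with H^T C^{-1} H = \frac{1}{\sigma^2} \sum r^{2n} in Problem 1b), and the values of N, A, r, \sigma, and P_{fa} are illustrative.

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian right-tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def Q_inv(p, lo=-10.0, hi=10.0):
    """Invert Q by bisection (Q is monotonically decreasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameter choices (not from the exercise text).
N, A, r, sigma, Pfa = 10, 1.0, 0.9, 1.0, 0.1
H = r ** np.arange(N)                      # H = [1, r, ..., r^(N-1)]^T
d2 = (A**2 / sigma**2) * np.sum(H**2)      # (A^2/sigma^2) * sum r^(2n)

PD_theory = Q(Q_inv(Pfa) - np.sqrt(d2))

# Monte Carlo: T(x) = x^T C^{-1} H A with C = sigma^2 I
rng = np.random.default_rng(0)
trials = 200_000
lam = Q_inv(Pfa) * np.sqrt(d2)             # threshold lambda'
w = rng.normal(0.0, sigma, size=(trials, N))
T0 = (w @ H) * A / sigma**2                # statistic under H0
T1 = ((A * H + w) @ H) * A / sigma**2      # statistic under H1
Pfa_mc = np.mean(T0 > lam)
PD_mc = np.mean(T1 > lam)
```

With a large number of trials, Pfa_mc and PD_mc agree with P_{fa} and the closed-form P_D to within Monte Carlo error.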

Problem 2:

Problem 2a: As the noise is white and Gaussian, the shape of the signal does not
influence the detection performance, but the power does. As both signals have
equal power, the detection performance will be equal.
Problem 2b: Using the matrix inversion lemma we can calculate C^{-1}, that is,

C^{-1} = \frac{1}{\sigma^2} I - \frac{1}{\sigma^4} \frac{1 1^T}{1 + \frac{N}{\sigma^2}}.

We can use this result to calculate P_D:

P_D = Q\left( Q^{-1}(P_{fa}) - \sqrt{s^T C^{-1} s} \right).

For s_1[n] we then get

P_D = Q\left( Q^{-1}(P_{fa}) - \sqrt{\frac{N A^2 / \sigma^2}{\frac{N}{\sigma^2} + 1}} \right) = Q\left( Q^{-1}(P_{fa}) - \sqrt{\frac{A^2}{\frac{\sigma^2}{N} + 1}} \right).

For s_2[n] and (even) N we get P_D = Q\left( Q^{-1}(P_{fa}) - \sqrt{A^2 \frac{N}{\sigma^2}} \right). The P_D for even
N and s_2[n] will thus always be larger.
One can also argue that s[n] should ideally equal the eigenvector of C that corresponds
to the minimum eigenvalue. The eigenvector corresponding to the largest eigenvalue is 1,
which corresponds with s_1[n]. Signal s_2[n] is at least orthogonal to this eigenvector and
corresponds to the minimum eigenvalue; s_2[n] will thus have the best detection performance.
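The deflections s^T C^{-1} s for the two signals can be verified numerically. This sketch assumes the model implied by the inverse above, C = \sigma^2 I + 1 1^T, with s_1[n] = A constant and s_2[n] = (-1)^n A alternating (equal power); the values of N, A, and \sigma are illustrative.

```python
import numpy as np

# Assumed model: C = sigma^2 I + 1 1^T, s1 constant, s2 alternating (even N).
N, A, sigma = 8, 1.0, 1.5
I, ones = np.eye(N), np.ones((N, 1))
C = sigma**2 * I + ones @ ones.T

# Matrix inversion lemma form of C^{-1}
C_inv = I / sigma**2 - (ones @ ones.T) / (sigma**4 * (1 + N / sigma**2))
assert np.allclose(C_inv, np.linalg.inv(C))

s1 = A * np.ones(N)
s2 = A * (-1.0) ** np.arange(N)

d1 = s1 @ C_inv @ s1       # deflection for s1
d2 = s2 @ C_inv @ s2       # deflection for s2

# Closed forms from the derivation above
assert np.isclose(d1, N * A**2 / (sigma**2 + N))
assert np.isclose(d2, N * A**2 / sigma**2)
assert d2 > d1             # s2 gives the better detection performance
```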

Problem 3:

Problem 3a: LRT:

\frac{(1 - p_1)^k p_1}{(1 - p_0)^k p_0} \geq \lambda

\left( \frac{1 - p_1}{1 - p_0} \right)^k \geq \lambda \frac{p_0}{p_1}

k \geq \frac{\log\left( \lambda \frac{p_0}{p_1} \right)}{\log\left( \frac{1 - p_1}{1 - p_0} \right)} = \lambda'

(assuming p_1 < p_0, so that \log\left( \frac{1 - p_1}{1 - p_0} \right) > 0; for p_1 > p_0 the inequality on k is reversed).
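The equivalence between the likelihood-ratio test and the threshold on k can be checked directly. This sketch assumes a geometric observation model, p(k; p) = (1 - p)^k p, with illustrative values p_1 < p_0 so the inequality keeps its direction.

```python
from math import log

# Illustrative parameters, chosen with p1 < p0.
p0, p1, lam = 0.5, 0.2, 1.0

def likelihood_ratio(k):
    """LRT statistic (1 - p1)^k p1 / ((1 - p0)^k p0)."""
    return ((1 - p1) ** k * p1) / ((1 - p0) ** k * p0)

# Threshold on k from the derivation above
k_thresh = log(lam * p0 / p1) / log((1 - p1) / (1 - p0))

# The LRT "L(k) >= lam" is equivalent to "k >= k_thresh"
for k in range(20):
    assert (likelihood_ratio(k) >= lam) == (k >= k_thresh)
```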

Problem 3b:

Problem 4: s \sim N(0, C_s) and w \sim N(0, \sigma^2 I)

T(x) = x^T C_s \left( C_s + \sigma^2 I \right)^{-1} x = x^T \Lambda x

with \Lambda = \mathrm{diag}\left( \frac{\sigma_{s_0}^2}{\sigma_{s_0}^2 + \sigma^2}, \frac{\sigma_{s_1}^2}{\sigma_{s_1}^2 + \sigma^2}, \ldots, \frac{\sigma_{s_{N-1}}^2}{\sigma_{s_{N-1}}^2 + \sigma^2} \right)

T(x) = \sum_{n=0}^{N-1} \frac{\sigma_{s_n}^2}{\sigma_{s_n}^2 + \sigma^2} x^2[n]
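The matrix and diagonal forms of the estimator-correlator statistic above agree, which this sketch verifies numerically; the signal variances, noise level, and data are illustrative.

```python
import numpy as np

# Illustrative diagonal signal covariance and noise variance.
rng = np.random.default_rng(1)
N, sigma = 6, 0.8
sig_s2 = np.array([0.5, 1.0, 2.0, 0.1, 3.0, 0.7])   # sigma_{s_n}^2
Cs = np.diag(sig_s2)

x = rng.normal(size=N)

# Matrix form: T(x) = x^T Cs (Cs + sigma^2 I)^{-1} x
T_matrix = x @ Cs @ np.linalg.inv(Cs + sigma**2 * np.eye(N)) @ x

# Diagonal form: sum over n of sigma_{s_n}^2 / (sigma_{s_n}^2 + sigma^2) * x[n]^2
T_diag = np.sum(sig_s2 / (sig_s2 + sigma**2) * x**2)

assert np.isclose(T_matrix, T_diag)
```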

Problem 5:

Problem 5a: We need to calculate \hat{s} = E[s|x]. However, A and w are Gaussian (and
thus also jointly Gaussian) distributed. In addition, the model is linear:

x = 1A + w = s + w.

In this case the MMSE estimator is given by

\hat{s} = E[A|x] \, 1 = \left( C_A^{-1} + H^T C_w^{-1} H \right)^{-1} H^T C_w^{-1} x \, 1 = \frac{\frac{N}{\sigma^2} \bar{x}}{\frac{1}{\sigma_A^2} + \frac{N}{\sigma^2}} \, 1 = \frac{\sigma_A^2}{\sigma_A^2 + \frac{\sigma^2}{N}} \bar{x} \, 1

with H = 1.

Problem 5b: NP: T(x) = x^T \hat{s}
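The closed-form MMSE amplitude estimate above can be checked against the general Bayesian linear-model formula; this is a sketch with illustrative values of \sigma_A, \sigma, and the data.

```python
import numpy as np

# Illustrative scalar-amplitude model x = 1*A + w.
rng = np.random.default_rng(2)
N, sigma, sigma_A = 5, 1.0, 2.0
x = rng.normal(size=N)
xbar = x.mean()

# General Bayesian linear-model form with H = 1 (vector of ones)
H = np.ones(N)
A_hat_matrix = (H @ x / sigma**2) / (1 / sigma_A**2 + H @ H / sigma**2)

# Closed form from the derivation above
A_hat_closed = sigma_A**2 / (sigma_A**2 + sigma**2 / N) * xbar

assert np.isclose(A_hat_matrix, A_hat_closed)
```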

Problem 6:
Problem 6a: s = AH, H = [1, r, \ldots, r^{N-1}]^T, with s \sim N(0, \sigma_A^2 H H^T).

\hat{s} = C_s \left( C_s + \sigma^2 I \right)^{-1} x

Using the matrix inversion lemma it follows that

\hat{s} = C_s \left( C_s + \sigma^2 I \right)^{-1} x = \frac{\sigma_A^2 H H^T}{\sigma^2} \left( 1 - \frac{\sigma_A^2 H^T H}{\sigma^2 + \sigma_A^2 H^T H} \right) x = \frac{\sigma_A^2 H H^T}{\sigma^2 + \sigma_A^2 H^T H} x

We then get

T(x) = x^T \hat{s} = \frac{\sigma_A^2 \, x^T H H^T x}{\sigma^2 + \sigma_A^2 H^T H} = \frac{\left( \sum_{n=0}^{N-1} r^n x[n] \right)^2}{\frac{\sigma^2}{\sigma_A^2} + \sum_{n=0}^{N-1} r^{2n}}

or (using 14.7, vol. I):

\hat{A} = \left( C_A^{-1} + H^T C_w^{-1} H \right)^{-1} H^T C_w^{-1} x = \left( \frac{1}{\sigma_A^2} + \frac{\sum_{n=0}^{N-1} r^{2n}}{\sigma^2} \right)^{-1} \frac{\sum_{n=0}^{N-1} r^n x[n]}{\sigma^2} = \left( \frac{\sigma^2}{\sigma_A^2} + \sum_{n=0}^{N-1} r^{2n} \right)^{-1} \sum_{n=0}^{N-1} r^n x[n]

\hat{s} = \hat{A} H

T(x) = x^T \hat{s} = x^T H \hat{A} = \frac{\left( \sum_{n=0}^{N-1} r^n x[n] \right)^2}{\frac{\sigma^2}{\sigma_A^2} + \sum_{n=0}^{N-1} r^{2n}}
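Both routes to T(x) above give the same statistic, which this sketch verifies; r, \sigma_A, \sigma, and the data are illustrative.

```python
import numpy as np

# Illustrative rank-one signal covariance Cs = sigma_A^2 H H^T.
rng = np.random.default_rng(3)
N, r, sigma, sigma_A = 7, 0.8, 1.0, 1.5
H = r ** np.arange(N)
x = rng.normal(size=N)

# Matrix form: x^T Cs (Cs + sigma^2 I)^{-1} x
Cs = sigma_A**2 * np.outer(H, H)
T_matrix = x @ Cs @ np.linalg.inv(Cs + sigma**2 * np.eye(N)) @ x

# Scalar form from the derivation above
T_scalar = (H @ x) ** 2 / (sigma**2 / sigma_A**2 + H @ H)

assert np.isclose(T_matrix, T_scalar)
```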

Problem 7: T(x) = x^T \hat{s} = x^T C_s \left( C_s + \sigma^2 I \right)^{-1} x = \sum_{n=0}^{N-1} \frac{\sigma_{s_n}^2 x^2[n]}{\sigma_{s_n}^2 + \sigma^2}

Problem 8:

Problem 8a: We have w \sim N(0, C_w) and s \sim N(0, C_s) = N(0, \eta C_w). So,

T(x) = x^T C_w^{-1} C_s \left( C_s + C_w \right)^{-1} x = \frac{\eta}{1 + \eta} x^T C_w^{-1} x \geq \gamma

and T'(x) = x^T C_w^{-1} x \geq \gamma'. We know that x^T C_w^{-1} x \sim \chi_N^2 under H_0 (whitening of x).

Problem 8b:

H_0: x \sim N(0, C_w)
H_1: x \sim N(0, (1 + \eta) C_w)

so,

H_0: T'(x) \sim \chi_N^2
H_1: \frac{T'(x)}{1 + \eta} \sim \chi_N^2

P_{fa} = P(T'(x) \geq \gamma'; H_0) = Q_{\chi_N^2}(\gamma') \;\Rightarrow\; \gamma' = Q_{\chi_N^2}^{-1}(P_{fa})

P_D = P(T'(x) \geq \gamma'; H_1) = P\left( \frac{T'(x)}{1 + \eta} \geq \frac{\gamma'}{1 + \eta}; H_1 \right) = Q_{\chi_N^2}\left( \frac{\gamma'}{1 + \eta} \right).

Notice that for N = 2, a \chi_2^2-distributed RV becomes exponentially distributed:

Q_{\chi_2^2}(\gamma') = \int_{\gamma'}^{\infty} \frac{1}{2} e^{-x/2} \, dx = e^{-\gamma'/2} = P_{fa} \;\Rightarrow\; \gamma' = -2 \log P_{fa}

P_D = Q_{\chi_2^2}\left( \frac{-2 \log P_{fa}}{1 + \eta} \right) = \int_{\frac{-2 \log P_{fa}}{1 + \eta}}^{\infty} \frac{1}{2} e^{-x/2} \, dx = e^{\frac{\log P_{fa}}{1 + \eta}} = P_{fa}^{\frac{1}{1 + \eta}}
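The N = 2 result P_D = P_{fa}^{1/(1+\eta)} can be checked by Monte Carlo on the whitened statistic; \eta and P_{fa} here are illustrative choices.

```python
import numpy as np
from math import log

# Illustrative parameters for the N = 2 case.
rng = np.random.default_rng(4)
N, eta, Pfa = 2, 1.5, 0.1
trials = 200_000

gamma = -2.0 * log(Pfa)                      # threshold gamma'

# T'(x) = x^T Cw^{-1} x: with whitened data, a sum of N squared standard
# normals (chi-squared, N dofs); scaled by (1 + eta) under H1.
T0 = np.sum(rng.normal(size=(trials, N)) ** 2, axis=1)
T1 = (1 + eta) * np.sum(rng.normal(size=(trials, N)) ** 2, axis=1)

Pfa_mc = np.mean(T0 >= gamma)
PD_mc = np.mean(T1 >= gamma)
PD_theory = Pfa ** (1.0 / (1.0 + eta))
```

The empirical Pfa_mc and PD_mc match P_{fa} and P_{fa}^{1/(1+\eta)} to within Monte Carlo error.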

Problem 9: We can use the expression for general Gaussian detection. That is,

T'(x) = \frac{1}{2} x^T \left[ C_w^{-1} C_s \left( C_s + C_w \right)^{-1} \right] x + x^T \left( C_s + C_w \right)^{-1} \mu_s

= \frac{1}{2} x^T \frac{\sigma_s^2}{\sigma^2} \left( \sigma_s^2 + \sigma^2 \right)^{-1} x + x^T \left( \sigma_s^2 + \sigma^2 \right)^{-1} A 1

= \frac{1}{2 \sigma^2} \frac{\sigma_s^2}{\sigma_s^2 + \sigma^2} x^T x + \frac{A}{\sigma_s^2 + \sigma^2} x^T 1

= \underbrace{ \frac{N \sigma_s^2}{2 \sigma^2} \frac{1}{\sigma_s^2 + \sigma^2} \frac{1}{N} \sum_{n=0}^{N-1} x^2[n] }_{\text{estimate of variance}} + \underbrace{ \frac{N A}{\sigma_s^2 + \sigma^2} \frac{1}{N} \sum_{n=0}^{N-1} x[n] }_{\text{estimate of mean}}

From this we can clearly see the contribution to the detector from the deterministic
component (mean) of the data and the random component (variance) of the data.
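The matrix form and the mean/variance-estimate form of the detector above coincide for C_s = \sigma_s^2 I, C_w = \sigma^2 I, \mu_s = A 1; this sketch verifies that with illustrative values.

```python
import numpy as np

# Illustrative scalar covariances and mean.
rng = np.random.default_rng(5)
N, sigma_s, sigma, A = 6, 1.2, 0.9, 0.5
x = rng.normal(size=N)

Cs = sigma_s**2 * np.eye(N)
Cw = sigma**2 * np.eye(N)
mu_s = A * np.ones(N)

# General matrix form of the detector
T_matrix = 0.5 * x @ (np.linalg.inv(Cw) @ Cs @ np.linalg.inv(Cs + Cw)) @ x \
           + x @ np.linalg.inv(Cs + Cw) @ mu_s

# Weighted energy (variance estimate) plus weighted sum (mean estimate)
T_scalar = (N * sigma_s**2 / (2 * sigma**2 * (sigma_s**2 + sigma**2))) * np.mean(x**2) \
           + (N * A / (sigma_s**2 + sigma**2)) * np.mean(x)

assert np.isclose(T_matrix, T_scalar)
```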

Problem 10:
Problem 10a: Let H = [1, r, \ldots, r^{N-1}]^T.

p(x; A, H_1) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[ -\frac{1}{2\sigma^2} (x - AH)^T (x - AH) \right]

p(x; H_0) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[ -\frac{1}{2\sigma^2} x^T x \right].

Determine the MLE of A:

p(x; \hat{A}, H_1) = \max_A p(x; A, H_1)

\frac{d p(x; A, H_1)}{dA} = 0

This leads to \hat{A}_{MLE} = \frac{x^T H + H^T x}{2 H^T H} = \frac{H^T x}{H^T H} = \frac{\sum_{n=0}^{N-1} r^n x[n]}{\sum_{n=0}^{N-1} r^{2n}}.

Problem 10b:

L_G(x) = \frac{p(x; \hat{A}_{MLE}, H_1)}{p(x; H_0)} = \frac{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[ -\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - \hat{A}_{MLE} r^n)^2 \right]}{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[ -\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} x^2[n] \right]} > \gamma

This can be written as

-\sum_{n=0}^{N-1} (x[n] - \hat{A}_{MLE} r^n)^2 + \sum_{n=0}^{N-1} x^2[n] = -\hat{A}_{MLE}^2 \sum_{n=0}^{N-1} r^{2n} + 2 \hat{A}_{MLE} \sum_{n=0}^{N-1} x[n] r^n

= -\frac{\left( \sum_{n=0}^{N-1} r^n x[n] \right)^2}{\sum_{n=0}^{N-1} r^{2n}} + 2 \frac{\sum_{n=0}^{N-1} r^n x[n]}{\sum_{n=0}^{N-1} r^{2n}} \sum_{n=0}^{N-1} x[n] r^n = \frac{\left( \sum_{n=0}^{N-1} r^n x[n] \right)^2}{\sum_{n=0}^{N-1} r^{2n}} > \gamma'

This can be written as \hat{A}_{MLE}^2 > \gamma''
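The GLRT algebra above reduces the log-likelihood difference to \left( \sum r^n x[n] \right)^2 / \sum r^{2n} = \hat{A}_{MLE}^2 \sum r^{2n}, which this sketch verifies numerically; r and the data are illustrative.

```python
import numpy as np

# Illustrative signal shape H = [1, r, ..., r^(N-1)]^T and data.
rng = np.random.default_rng(6)
N, r = 7, 0.9
H = r ** np.arange(N)
x = rng.normal(size=N)

A_hat = (H @ x) / (H @ H)                   # MLE of A

# Left side: difference of the two exponents in L_G(x)
lhs = -np.sum((x - A_hat * H) ** 2) + np.sum(x**2)
# Right side: reduced form from the derivation above
rhs = (H @ x) ** 2 / (H @ H)

assert np.isclose(lhs, rhs)
assert np.isclose(rhs, A_hat**2 * (H @ H))  # so the test reduces to A_hat^2 > gamma''
```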
