
Exam et4386 Estimation and Detection Answer

April 8th, 2016


Answer each question on a separate sheet. Make clear in your answer
how you reach the final result; the road to the answer is very important.
Write your name and student number on each sheet. You may answer
in Dutch or English.
You may use a double-sided, self-written A4 formula sheet.

Question 1 (Estimation theory) - 10 points


Using a sensor we observe data according to the following assumed model
$$x[n] = Ar^{n+1} + Ar^n + w[n], \qquad n = 0, \ldots, N-1,$$
where $A$ is an unknown constant parameter, $r$ is known with $0 < r < 1$, $n$ is
the time index in samples, and $w[n]$ is zero-mean uncorrelated Gaussian noise
with variance $\sigma^2$. The goal is to estimate $A$ from the data.
(1 p) (a) $p(\mathbf{x}; A) = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{N} \exp\left(-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - Ar^n(r+1)\right)^2\right)$.

(2 p) (b) $\frac{\partial \log p(\mathbf{x};A)}{\partial A} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - Ar^n(r+1)\right)r^n(r+1)$, so that
$$-E\left[\frac{\partial^2 \log p(\mathbf{x};A)}{\partial A^2}\right] = \frac{1}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}(r+1)^2$$
and
$$\mathrm{var}(\hat{A}) \geq \frac{\sigma^2}{\sum_{n=0}^{N-1} r^{2n}(r+1)^2} = \frac{\sigma^2(1-r)}{(1+r)(1-r^{2N})}.$$

(2 p) (c) Yes; from deriving the CRLB it follows that
$$\frac{\partial \log p(\mathbf{x};A)}{\partial A} = \frac{(r+1)^2\sum_{n=0}^{N-1} r^{2n}}{\sigma^2}\left(\frac{\sum_{n=0}^{N-1} x[n]r^n(r+1)}{(r+1)^2\sum_{n=0}^{N-1} r^{2n}} - A\right),$$
and thus the MVU estimator is given by
$$\hat{A} = \frac{\sum_{n=0}^{N-1} x[n]r^n(r+1)}{(r+1)^2\sum_{n=0}^{N-1} r^{2n}} = \frac{\sum_{n=0}^{N-1} x[n]r^n}{(r+1)\sum_{n=0}^{N-1} r^{2n}} = \frac{1-r}{1-r^{2N}}\sum_{n=0}^{N-1} x[n]r^n.$$
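As a numerical illustration (not part of the exam answer), the following Python sketch simulates the model with arbitrary example values and checks that this estimator is unbiased and that its variance attains the CRLB from (b).

```python
import numpy as np

# Monte Carlo check of the MVU estimator and the CRLB for
# x[n] = A r^(n+1) + A r^n + w[n] = A r^n (r+1) + w[n].
rng = np.random.default_rng(0)
A, r, sigma2, N, trials = 2.0, 0.8, 0.5, 50, 100_000

n = np.arange(N)
s = (r + 1) * r**n                       # s[n] = r^n (r+1)
x = A * s + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

# Closed-form MVU estimator from (c), applied to each trial.
A_hat = (1 - r) / (1 - r**(2 * N)) * (x @ r**n)

crlb = sigma2 * (1 - r) / ((1 + r) * (1 - r**(2 * N)))
print(f"mean(A_hat) = {A_hat.mean():.4f}  (true A = {A})")
print(f"var(A_hat)  = {A_hat.var():.6f}  (CRLB = {crlb:.6f})")
```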

(1 p) (d) Yes, by increasing the bias of the estimator it is possible to further decrease the variance.
(1 p) (e) The BLUE is in this case identical to the MVU estimator, $\hat{A} = \frac{1-r}{1-r^{2N}}\sum_{n=0}^{N-1} x[n]r^n$, as the noise is Gaussian. Alternatively, we can use expression (6.5) from the book, $\hat{A} = \frac{\mathbf{s}^T\mathbf{x}}{\mathbf{s}^T\mathbf{s}}$, with $\mathbf{s} = (r+1)\,[1, r, \ldots, r^{N-1}]^T$.
(3 p) (f)
$$\hat{A}_{\mathrm{MAP}} = \frac{\sum_{n=0}^{N-1} x[n]r^n(r+1)}{\frac{\sigma^2}{\sigma_A^2} + (r+1)^2\sum_{n=0}^{N-1} r^{2n}} = \frac{(1-r)\sum_{n=0}^{N-1} x[n]r^n}{\frac{(1-r)\sigma^2}{(r+1)\sigma_A^2} + 1 - r^{2N}}.$$

(1 p) (g) For $\sigma^2/\sigma_A^2 \to 0$, the prior variance $\sigma_A^2$ tends to infinity. This means that the prior gives no information and the estimator is determined mainly by the conditional pdf of the data; the MAP estimator then reduces to the MVU estimator of (c).
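To illustrate (f) and (g), here is a brief Python sketch. It assumes the prior on $A$ is zero-mean Gaussian with variance $\sigma_A^2$, which is how the MAP expression above is parameterized; parameter values are arbitrary. As $\sigma^2/\sigma_A^2 \to 0$, the MAP estimate approaches the MVU estimate from (c).

```python
import numpy as np

# MAP estimate from (f), assuming a zero-mean Gaussian prior N(0, sigma_A2);
# as sigma2/sigma_A2 -> 0 it converges to the MVU estimate from (c).
rng = np.random.default_rng(1)
A, r, sigma2, N = 2.0, 0.8, 0.5, 50
n = np.arange(N)
x = A * (r + 1) * r**n + rng.normal(0.0, np.sqrt(sigma2), N)

corr = np.sum(x * r**n)                     # sum_n x[n] r^n
A_mvu = (1 - r) / (1 - r**(2 * N)) * corr
for sigma_A2 in (0.01, 1.0, 100.0):
    den = (1 - r) * sigma2 / ((r + 1) * sigma_A2) + 1 - r**(2 * N)
    A_map = (1 - r) * corr / den
    print(f"sigma_A2 = {sigma_A2:6.2f}: A_MAP = {A_map:.4f} (A_MVU = {A_mvu:.4f})")
```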

Question 2 (Estimation theory) - 10 points


We are interested in estimating the realization of a random variable $\theta$, while
we can only observe the noisy data $x[n]$ for $n = 0, 1, \ldots, N-1$. The observed
samples have the conditional pdf
$$p(x[n]|\theta) = \begin{cases} \exp[-(x[n]-\theta)], & x[n] > \theta \\ 0, & x[n] < \theta. \end{cases}$$
The prior pdf is given by
$$p(\theta) = \begin{cases} \lambda\exp[-\lambda\theta], & \theta > 0 \\ 0, & \theta < 0, \end{cases}$$
with $\mathrm{VAR}[\theta] = \frac{1}{\lambda^2}$.

Let $\mathbf{x} = [x[0], x[1], \ldots, x[N-1]]^T$.

(2 p) (a) Since $\mathrm{VAR}[\theta] \to \infty$, the prior does not give any information. The MAP estimator is then given by
$$\max_\theta p(\mathbf{x}|\theta) = \max_\theta \exp\left(-\sum_{n=0}^{N-1}(x[n]-\theta)\right) u\big(\min(x[n]) - \theta\big),$$
where $u(\cdot)$ denotes the unit step. We have $p(\mathbf{x}|\theta) = 0$ for $\min(x[n]) < \theta$. As a function of $\theta$, $\exp\left(-\sum_{n=0}^{N-1}(x[n]-\theta)\right)$ is increasing. The maximum is therefore attained at $\hat\theta = \min(x[n])$.
(2 p) (b)
$$\max_\theta p(\mathbf{x}|\theta)p(\theta) = \max_\theta \exp\left(-\sum_{n=0}^{N-1}(x[n]-\theta)\right) \lambda\exp[-\lambda\theta]\, u\big(\min(x[n]) - \theta\big)\, u(\theta).$$
This is zero outside the range $0 < \theta < \min(x[n])$, where it is proportional to $\exp((N-\lambda)\theta)$. If $N - \lambda > 0$ then $\hat\theta_{\mathrm{MAP}} = \min(x[n])$, and if $N - \lambda < 0$ then $\hat\theta_{\mathrm{MAP}} = 0$.
(2 p) (c)
$$p(\mathbf{x}) = \int_0^{\min(x[n])} \exp\left(-\sum_{n=0}^{N-1} x[n] + N\theta\right)\lambda e^{-\lambda\theta}\, d\theta = \frac{\lambda\exp\left(-\sum_{n=0}^{N-1} x[n]\right)}{N-\lambda}\left(\exp\big((N-\lambda)\min(x[n])\big) - 1\right).$$

(2 p) (d)
$$\hat\theta_{\mathrm{MMSE}} = E[\theta|\mathbf{x}] = \frac{1}{p(\mathbf{x})}\int_0^{\min(x[n])} \theta\, \lambda\, e^{(N-\lambda)\theta}\, e^{-\sum_{n=0}^{N-1} x[n]}\, d\theta = \frac{\min(x[n])}{1 - e^{-(N-\lambda)\min(x[n])}} - \frac{1}{N-\lambda}.$$

(2 p) (e) Discussion question on the influence of $N$ and $\lambda$ on the estimators above: for $N > \lambda$ the data dominate and the estimates move toward $\min(x[n])$, while for $\lambda > N$ the prior dominates and the estimates move toward zero.
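A small Python sketch of Question 2 (illustrative only; the parameter values are arbitrary): it draws data from the shifted-exponential likelihood and evaluates the MAP estimate from (b) and the MMSE estimate from (d).

```python
import numpy as np

# MAP vs. MMSE estimates of theta under the shifted-exponential
# likelihood and exponential prior (lambda assumed known).
rng = np.random.default_rng(2)
theta_true, lam, N = 1.5, 2.0, 20

# x[n] = theta + standard-exponential noise, so x[n] > theta as required.
x = theta_true + rng.exponential(1.0, N)
m = x.min()
a = N - lam

theta_map = m if a > 0 else 0.0                  # part (b)
theta_mmse = m / (1 - np.exp(-a * m)) - 1 / a    # part (d)
print(f"MAP = {theta_map:.4f}, MMSE = {theta_mmse:.4f}")
# Larger N pulls both estimates toward min(x[n]); larger lambda
# (a tighter prior around 0) pulls them toward 0 -- the trade-off in (e).
```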

Question 3 (Detection theory) - 11 points


Given is a known signal $s[n] = r^n$ with $0 < r < 1$. We would like to detect
whether signal $s[n]$ is present in white noise $w[n]$ with variance $\sigma^2$.
To do so, we distinguish between the two hypotheses
$$\mathcal{H}_0: x[n] = w[n], \qquad n = 0, 1, \ldots, N-1,$$
$$\mathcal{H}_1: x[n] = s[n] + w[n], \qquad n = 0, 1, \ldots, N-1.$$
(2 p) (a) $T(\mathbf{x}) = \sum_{n=0}^{N-1} x[n]s[n]$ and $\gamma' = \sigma^2\log(\gamma) + \frac{1}{2}\sum_{n=0}^{N-1} s^2[n]$.
(2 p) (b)
$$T(\mathbf{x}) \sim \begin{cases} \mathcal{N}(0, \sigma^2 E) & \text{under } \mathcal{H}_0 \\ \mathcal{N}(E, \sigma^2 E) & \text{under } \mathcal{H}_1, \end{cases}$$
with $E = \sum_{n=0}^{N-1} r^{2n} = \frac{1-r^{2N}}{1-r^2}$.

(2 p) (c) $P_{FA} = Q\left(\frac{\gamma'}{\sqrt{\sigma^2 E}}\right)$, so $\gamma' = \sqrt{\sigma^2 E}\, Q^{-1}(P_{FA})$.
(2 p) (d)
$$P_D = Q\left(\frac{\gamma' - E}{\sqrt{\sigma^2 E}}\right) = Q\left(\frac{\sqrt{\sigma^2 E}\, Q^{-1}(P_{FA}) - E}{\sqrt{\sigma^2 E}}\right) = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{E}{\sigma^2}}\right).$$
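The following Python sketch (illustrative, with arbitrary parameter values) verifies the threshold from (c) and the detection probability from (d) by Monte Carlo; it uses scipy.stats.norm for the Q-function ($Q$ = norm.sf, $Q^{-1}$ = norm.isf).

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo check of P_FA and P_D for the replica correlator in (a)-(d).
rng = np.random.default_rng(3)
r, sigma2, N, Pfa, trials = 0.9, 1.0, 30, 0.1, 200_000

n = np.arange(N)
s = r**n
E = (1 - r**(2 * N)) / (1 - r**2)              # signal energy, part (b)
gamma = np.sqrt(sigma2 * E) * norm.isf(Pfa)    # threshold, part (c)

w = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
T0 = w @ s                                     # statistic under H0
T1 = (s + w) @ s                               # statistic under H1

Pd_theory = norm.sf(norm.isf(Pfa) - np.sqrt(E / sigma2))   # part (d)
print(f"P_FA: {np.mean(T0 > gamma):.4f} (target {Pfa})")
print(f"P_D : {np.mean(T1 > gamma):.4f} (theory {Pd_theory:.4f})")
```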

(1 p) (e) Increase the false alarm probability and increase $E$, which implies increasing $N$.
(2 p) (f) The impulse response is given by $h[n] = s[N-1-n]$. The test
statistic is obtained by sampling the filter output at time $N-1$. The
impulse response is matched to the signal we want to detect.
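A short check (not part of the original answer) that the matched filter $h[n] = s[N-1-n]$, sampled at time $N-1$, reproduces the correlator statistic from (a):

```python
import numpy as np

# The matched-filter output at time N-1 equals T(x) = sum_n x[n] s[n].
rng = np.random.default_rng(4)
r, N = 0.9, 30
s = r**np.arange(N)
x = s + rng.normal(size=N)

h = s[::-1]                 # time-reversed signal
y = np.convolve(x, h)       # filter output
print(y[N - 1], x @ s)      # identical up to rounding
```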
(2 p) (g) The deflection coefficient is $d^2 = \frac{(\mathbf{h}^T\mathbf{s})^2}{\sigma^2\,\mathbf{h}^T\mathbf{h}}$. Using the Cauchy-Schwarz inequality it follows that $d^2 \leq \frac{1}{\sigma^2}\mathbf{s}^T\mathbf{s}$, with equality if and only if $\mathbf{h} = c\mathbf{s}$.
