
UCSD ECE 250 Fall 2023

Prof. Tara Javidi

Final Practice Exam

1. Conditional expectations. Let X and Y be i.i.d. Exp(1) random variables. Let Z = X + Y and W = X − Y.
(a) Find the joint pdf fX,Z (x, z) of X and Z.
(b) Find the joint pdf fZ,W (z, w) of Z and W .
(c) Find E[Z|X].
(d) Find E[X|Z].
Solution:
(a) For z < x, we have
    FZ|X(z|x) = P{Z ≤ z | X = x} = 0.
For 0 ≤ x ≤ z,
    FZ|X(z|x) = P{Z ≤ z | X = x}
              = P{X + Y ≤ z | X = x}
              = P{Y ≤ z − x | X = x}
              = P{Y ≤ z − x}                  (a)
              = 1 − e^{−(z−x)},
where (a) follows from the independence of X and Y. Differentiating with respect to z,
    fZ|X(z|x) = e^{−(z−x)} if 0 ≤ x ≤ z, and 0 otherwise.
Therefore, since fX(x) = e^{−x},
    fX,Z(x, z) = fZ|X(z|x) fX(x) = e^{−z} if 0 ≤ x ≤ z, and 0 otherwise.
(b) From the previous part, we have, for 0 ≤ x ≤ z,
    fX|Z(x|z) = fX,Z(x, z) / fZ(z)
              = fX,Z(x, z) / ∫_0^z fX,Z(x, z) dx
              = e^{−z} / (z e^{−z})
              = 1/z.

Thus for z ≥ 0, X | {Z = z} ∼ Unif[0, z]. We have W = X − Y = 2X − Z.
Therefore,

FW|Z(w|z) = P{W ≤ w | Z = z}
          = P{2X − Z ≤ w | Z = z}
          = P{X ≤ (z + w)/2 | Z = z}
          = 0 if w < −z;  (z + w)/(2z) if −z ≤ w ≤ z;  1 if w > z,
where the middle case uses X | {Z = z} ∼ Unif[0, z].

Thus,
    fW|Z(w|z) = 1/(2z) if |w| ≤ z, and 0 otherwise,
which, with fZ(z) = z e^{−z} from part (b), leads us to conclude that
    fZ,W(z, w) = fW|Z(w|z) fZ(z) = (1/2) e^{−z} if |w| ≤ z, and 0 otherwise.

(c) We have

E[Z|X] = E[X + Y | X]
= X + E[Y |X]
= X + E[Y ]
= X + 1,

where E[Y |X] = E[Y ] since X and Y are independent.


(d) From part (b), we have X | {Z = z} ∼ Unif[0, z]. Therefore, E[X|Z] = Z/2.
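
As a quick numerical sanity check, the conclusions E[Z|X] = X + 1 and E[X|Z] = Z/2 can be verified by Monte Carlo. A minimal NumPy sketch (sample sizes, bin widths, and variable names are our own choices), conditioning by binning near a point:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    x = rng.exponential(1.0, n)
    y = rng.exponential(1.0, n)
    z = x + y

    # E[Z | X = x0] should be x0 + 1: average z over samples with x near x0
    x0 = 1.5
    print(z[np.abs(x - x0) < 0.05].mean(), x0 + 1)   # both ~2.5

    # E[X | Z = z0] should be z0/2, since X | {Z = z0} ~ Unif[0, z0]
    z0 = 3.0
    print(x[np.abs(z - z0) < 0.05].mean(), z0 / 2)   # both ~1.5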

2. MMSE estimation. Let X ∼ Exp(1) and Y = min{X, 1}.

(a) Find E[Y ].


(b) Find the estimate X̂ = g(Y ) of X given Y that minimizes the mean square error
E[(X − X̂)2 ] = E[(X − g(Y ))2 ], and plot g(y) as a function of y.
(c) Find the mean square error of the estimate found in part (b).

Solution:

(a) We have
    E[Y] = E[min{X, 1}]
         = ∫_0^∞ min{x, 1} e^{−x} dx
         = ∫_0^1 x e^{−x} dx + ∫_1^∞ e^{−x} dx
         = [−x e^{−x} − e^{−x}]_0^1 + e^{−1}
         = 1 − e^{−1}.
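
A one-line Monte Carlo check (a sketch, with parameters of our choosing) reproduces E[Y] = 1 − e^{−1} ≈ 0.632:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(1.0, 1_000_000)
    print(np.minimum(x, 1.0).mean(), 1 - np.exp(-1))  # both ~0.632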

(b) We have g(y) = E[X | Y = y]. For y < 1,


E[X | Y = y] = E[X | X = y] = y.
For y = 1, we have
    E[X | Y = 1] = E[X | X ≥ 1]
                 = E[X] + 1                  (a)
                 = 2,
where (a) follows from the memorylessness property of the exponential distribution: given X ≥ 1, the overshoot X − 1 is again Exp(1). Thus,
    g(y) = y if 0 ≤ y < 1, and g(1) = 2.
The plot of g(y) versus y is shown in Fig. 1.

[Figure 1: Plot of g(y) versus y — the line g(y) = y on [0, 1), with an isolated point at (1, 2).]

(c) For 0 ≤ y < 1, Var(X | Y = y) = 0. For y = 1,


    Var(X | Y = 1) = Var(X | X ≥ 1)
                   = Var(X)                  (a)
                   = 1,
where step (a) follows from the memoryless property. Since P{Y = 1} = P{X ≥ 1} = e^{−1}, we therefore have
    MSE = E[Var(X|Y)]
        = Var(X | Y = 1) P{Y = 1}
        = e^{−1}.
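
As a numerical sanity check, plugging the estimator g from part (b) into a minimal simulation sketch reproduces MSE = e^{−1} ≈ 0.368:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(1.0, 1_000_000)
    y = np.minimum(x, 1.0)
    g = np.where(y < 1.0, y, 2.0)             # MMSE estimate g(Y) from part (b)
    print(np.mean((x - g) ** 2), np.exp(-1))  # both ~0.368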

3. Sampled Wiener process. Let {W(t), t ≥ 0} be the standard Brownian motion. For n = 1, 2, . . . , let
    Xn = n · W(1/n).
(a) Find the mean and autocorrelation functions of {Xn }.
(b) Is {Xn } WSS? Justify your answer.
(c) Is {Xn } Markov? Justify your answer.
(d) Does {Xn } have independent increments? Justify your answer.
(e) Is {Xn } Gaussian? Justify your answer.
(f) For n = 1, 2, . . . , let Sn = Xn/n. Find the limit lim_{n→∞} Sn in probability.

Solution:

(a) We have
E[Xn ] = nE[W (1/n)] = 0.
For m, n ∈ N and m ≥ n, we have
    E[Xm Xn] = mn E[W(1/m) W(1/n)]
             = mn · min{1/m, 1/n}
             = mn · (1/m)
             = n.
Thus in general,
    E[Xm Xn] = min{m, n}.
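
As a simulation sanity check, one can sample a Brownian path at the times 1/n by summing independent Gaussian increments and verify E[Xm Xn] ≈ min{m, n}; a minimal sketch (all names and sizes our own):

    import numpy as np

    rng = np.random.default_rng(0)
    trials, N = 200_000, 4
    times = 1.0 / np.arange(1, N + 1)      # t_n = 1/n, n = 1, ..., N
    order = np.argsort(times)              # increasing times, for building increments
    dt = np.diff(np.concatenate(([0.0], times[order])))
    # W at the sorted times = cumulative sums of N(0, dt) increments
    W_sorted = np.cumsum(rng.normal(0.0, np.sqrt(dt), (trials, N)), axis=1)
    W = np.empty_like(W_sorted)
    W[:, order] = W_sorted                 # undo the sort: column n-1 holds W(1/n)
    X = np.arange(1, N + 1) * W            # Xn = n * W(1/n)
    print(np.round(X.T @ X / trials, 2))   # entry (m, n) ~ min{m, n}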
(b) No. The autocorrelation function E[Xm Xn] = min{m, n} depends on m and n, not only on their difference (e.g., E[X1²] = 1 while E[X2²] = 2), so {Xn} is not WSS.

(c) Yes. Clearly, {Xn } is a Gaussian process (see the solution to part (e)) with
mean and autocorrelation functions as found in part (a). Therefore, for integers
m1 < m2 ≤ m3 < m4 , we have

E[(Xm2 − Xm1 )(Xm4 − Xm3 )] = E[Xm2 Xm4 ] + E[Xm1 Xm3 ] − E[Xm2 Xm3 ] − E[Xm1 Xm4 ]
= min{m2 , m4 } + min{m1 , m3 } − min{m2 , m3 } − min{m1 , m4 }
= m2 + m1 − m2 − m1
=0
= E[Xm2 − Xm1 ]E[Xm4 − Xm3 ].

Therefore, since (Xm2 − Xm1 ) and (Xm4 − Xm3 ) are jointly Gaussian and uncor-
related, they are independent. Now, for positive integers n1 < n2 < · · · < nk
for some k, (Xn1 , Xn2 − Xn1 , . . . , Xnk − Xnk−1 ), being a linear transformation of a
Gaussian random vector, is itself Gaussian. Moreover, from what we just showed,
(Xn1 , Xn2 − Xn1 , . . . , Xnk − Xnk−1 ) are pairwise independent. Therefore, they are
all independent, which implies that {Xn } is independent-increment. This implies
Markovity.
(d) Yes. See the solution to part (c).
(e) Yes. For integers n1, n2, . . . , nk for any k, we have
    [Xn1, Xn2, . . . , Xnk]^T = diag(n1, n2, . . . , nk) · [W(1/n1), W(1/n2), . . . , W(1/nk)]^T.
Thus, [Xn1 · · · Xnk]^T, being a linear transformation of a Gaussian random vector, is itself a Gaussian random vector. Therefore, {Xn} is Gaussian.

(f) Recall that Xn ∼ N(0, n), which implies that Xn/√n ∼ N(0, 1). Therefore, for any fixed ϵ > 0, we have
    P{|Sn| > ϵ} = P{|Xn| > nϵ}
                = P{|Xn|/√n > ϵ√n}
                = 2Q(ϵ√n)
                → 0,

as n → ∞. Therefore, lim_{n→∞} Sn = 0 in probability. Alternatively, note that W(0) = 0 and W(t) is continuous at t = 0 with probability 1. Therefore,
    lim_{n→∞} Sn = lim_{n→∞} W(1/n) = W(0) = 0
almost surely, which implies convergence in probability.
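
A tiny numerical illustration: since Sn = W(1/n) ∼ N(0, 1/n), the tail probability 2Q(ϵ√n) = erfc(ϵ√(n/2)) can be evaluated directly and is seen to vanish:

    from math import erfc, sqrt

    eps = 0.1
    for n in (10, 100, 10_000):
        # P{|Sn| > eps} = 2 Q(eps * sqrt(n)) = erfc(eps * sqrt(n / 2))
        print(n, erfc(eps * sqrt(n / 2.0)))  # ~0.75, ~0.32, ~1.5e-23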

4. Poisson process. Let {N (t), t ≥ 0} be a Poisson process with arrival rate λ > 0. Let
s ≤ t.
(a) Find the conditional pmf of N (t) given N (s).
(b) Find E[N (t)|N (s)] and its pmf.
(c) Find the conditional pmf of N (s) given N (t).
(d) Find E[N (s)|N (t)] and its pmf.
Solution:
(a) Assume 0 ≤ ns ≤ nt. By the independent-increments property of the Poisson process, we get
    P{N(t) = nt | N(s) = ns} = P{N(t) − N(s) = nt − ns | N(s) = ns}
                             = P{N(t) − N(s) = nt − ns}
                             = e^{−λ(t−s)} (λ(t − s))^{nt − ns} / (nt − ns)!
for ns = 0, 1, . . . and nt = ns, ns + 1, . . .. Thus,
    N(t) | {N(s) = ns} ∼ ns + Poisson(λ(t − s)).
(b) From part (a), it immediately follows that
E[N (t)|N (s)] = N (s) + λ(t − s).
Therefore, since N(s) ∼ Poisson(λs), the pmf of E[N(t)|N(s)] is
    p_{E[N(t)|N(s)]}(x) = e^{−λs} (λs)^k / k!  if x = k + λ(t − s) for some k = 0, 1, . . . , and 0 otherwise.
(c) From part (a), the joint pmf of (N(t), N(s)), for 0 ≤ ns ≤ nt, is
    P{N(t) = nt, N(s) = ns} = P{N(s) = ns} P{N(t) = nt | N(s) = ns}
                            = [e^{−λs} (λs)^{ns} / ns!] · [e^{−λ(t−s)} (λ(t − s))^{nt − ns} / (nt − ns)!]
                            = e^{−λt} λ^{nt} s^{ns} (t − s)^{nt − ns} / (ns! (nt − ns)!).
Therefore, the conditional pmf of N(s) | {N(t) = nt} is, for nt ≥ ns ≥ 0,
    P{N(s) = ns | N(t) = nt} = P{N(s) = ns, N(t) = nt} / P{N(t) = nt}
                             = [e^{−λt} λ^{nt} s^{ns} (t − s)^{nt − ns} / (ns! (nt − ns)!)] / [e^{−λt} (λt)^{nt} / nt!]
                             = (nt choose ns) (s/t)^{ns} (1 − s/t)^{nt − ns}.
Hence,
    N(s) | {N(t) = nt} ∼ Binom(nt, s/t).
(d) From part (c), it immediately follows that
    E[N(s)|N(t)] = (s/t) N(t),
and, since N(t) ∼ Poisson(λt), its pmf is
    p_{E[N(s)|N(t)]}(x) = e^{−λt} (λt)^k / k!  if x = (s/t)k for some k = 0, 1, . . . , and 0 otherwise.
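
As a simulation sanity check, generating N(s) and the independent increment N(t) − N(s), then conditioning on a value of N(t), matches the Binom(nt, s/t) moments; a minimal sketch with λ = 2, s = 1, t = 3 (our choices):

    import numpy as np

    rng = np.random.default_rng(0)
    lam, s, t, trials = 2.0, 1.0, 3.0, 1_000_000
    ns = rng.poisson(lam * s, trials)              # N(s)
    nt = ns + rng.poisson(lam * (t - s), trials)   # N(t) = N(s) + independent increment
    cond = ns[nt == 6]                             # samples of N(s) given N(t) = 6
    print(cond.mean(), 6 * s / t)                  # ~2.000, Binomial mean nt * s/t
    print(cond.var(), 6 * (s / t) * (1 - s / t))   # ~1.333, Binomial variance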

5. Hidden Markov process. Let X0 ∼ N(0, σ²) and Xn = (1/2)Xn−1 + Zn for n ≥ 1, where Z1, Z2, . . . are i.i.d. N(0, 1), independent of X0. Let Yn = Xn + Vn, where the Vn are i.i.d. N(0, 1), independent of {Xn}.

(a) Find the variance σ² such that {Xn} and {Yn} are jointly WSS.

Under the value of σ² found in part (a), answer the following.

(b) Find RY (n).


(c) Find RXY (n).
(d) Find the MMSE estimate of Xn given Yn .
(e) Find the MMSE estimate of Xn given (Yn , Yn−1 ).
(f) Find the MMSE estimate of Xn given (Yn , Yn+1 ).

Solution:

(a) If {Xn} is WSS, then Var(Xn) = Var(X0) = σ² for all n ≥ 0. From the recursive relation, we get
    Var(Xn) = (1/4) Var(Xn−1) + Var(Zn),
i.e., σ² = σ²/4 + 1, which implies σ² = 4/3.
(b) First, note that for n ≥ 0,
    Xm+n = (1/2) Xm+n−1 + Zm+n
         = (1/4) Xm+n−2 + (1/2) Zm+n−1 + Zm+n
         = · · ·
         = (1/2^n) Xm + (1/2^{n−1}) Zm+1 + · · · + (1/2) Zm+n−1 + Zm+n.
Hence, since the Zk are zero mean and independent of Xm, it follows that
    RX(n) = E[Xm+n Xm] = 2^{−n} E[Xm²] = (4/3) 2^{−|n|}.
3
Now we can find the autocorrelation function of {Yn} easily:
    RY(n) = E[Ym+n Ym]
          = E[(Xm+n + Vm+n)(Xm + Vm)]
          = E[Xm+n Xm] + E[Xm+n Vm] + E[Vm+n Xm] + E[Vm+n Vm]
          = RX(n) + δ(n)
          = (4/3) 2^{−|n|} + δ(n).
Here δ(n) denotes the Kronecker delta function, that is, δ(n) = 1 if n = 0 and δ(n) = 0 otherwise.

(c) The cross-correlation function RXY(n) is
    RXY(n) = E[Xm+n Ym]
           = E[Xm+n Xm] + E[Xm+n Vm]
           = RX(n)
           = (4/3) 2^{−|n|}.

(d) Since Xn and Yn are jointly Gaussian, we can find the conditional expectation E[Xn|Yn], which is the MMSE estimate of Xn given Yn, as follows:
    E[Xn|Yn] = E[Xn] + (Cov(Xn, Yn) / Var(Yn)) (Yn − E[Yn])
             = (RXY(0) / RY(0)) Yn
             = ((4/3) / (7/3)) Yn
             = (4/7) Yn.

(e) As in part (d), the MMSE estimate of Xn given (Yn, Yn−1) is
    E[Xn | Yn, Yn−1] = E[Xn] + Σ_{Xn,(Yn,Yn−1)} Σ_{(Yn,Yn−1)}^{−1} ([Yn, Yn−1]^T − E[Yn, Yn−1]^T)
                     = [RXY(0), RXY(1)] [RY(0), RY(1); RY(1), RY(0)]^{−1} [Yn, Yn−1]^T
                     = [4/3, 2/3] [7/3, 2/3; 2/3, 7/3]^{−1} [Yn, Yn−1]^T
                     = [8/15, 2/15] [Yn, Yn−1]^T
                     = (8/15) Yn + (2/15) Yn−1.
(f) Since (Xn, Yn) are jointly WSS and RXY(−1) = RXY(1), from part (e) it immediately follows that the conditional expectation E[Xn|Yn, Yn+1] has the same form as E[Xn|Yn, Yn−1]:
    E[Xn | Yn, Yn+1] = (8/15) Yn + (2/15) Yn+1.
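
As a numerical sanity check, simulating the model and solving the least-squares (normal) equations reproduces the coefficients 4/7 ≈ 0.571 and (8/15, 2/15) ≈ (0.533, 0.133); a minimal sketch (sizes and names our own):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500_000
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(4.0 / 3.0))  # start at the stationary variance 4/3
    for k in range(1, n):
        x[k] = 0.5 * x[k - 1] + rng.normal()    # Xn = (1/2) Xn-1 + Zn
    y = x + rng.normal(0.0, 1.0, n)             # Yn = Xn + Vn

    # Part (d): E[Xn Yn] / E[Yn^2] ~ 4/7
    print(np.mean(x * y) / np.mean(y * y))

    # Part (e): regress Xn on (Yn, Yn-1); coefficients ~ (8/15, 2/15)
    Y = np.stack([y[1:], y[:-1]], axis=1)
    print(np.linalg.lstsq(Y, x[1:], rcond=None)[0])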
6. Random-delay mixture. Let {X(t)}, −∞ < t < ∞, be a zero-mean wide-sense stationary process with autocorrelation function RX(τ) = e^{−|τ|}. Let
    Y(t) = X(t − U),
where U is a random delay, independent of {X(t)}. Suppose that U ∼ Bern(1/2), that is, Y(t) = X(t) with probability 1/2 and Y(t) = X(t − 1) with probability 1/2.

(a) Find the mean and autocorrelation functions of {Y (t)}.


(b) Is {Y (t)} wide-sense stationary? Justify your answer.
(c) Find the average power E(Y(t)²) of {Y(t)}.
(d) Now suppose that U ∼ Exp(1), i.e., fU(u) = e^{−u}, u ≥ 0. Find the autocorrelation function of {Y(t)}.
Solution:
(a) By iterated expectation and the independence of U and {X(t)}, we have
    E(Y(t)) = E(X(t − U))
            = E[E(X(t − U)|U)]
            = (1/2) E(X(t)|U = 0) + (1/2) E(X(t − 1)|U = 1)
            = (1/2) E(X(t)) + (1/2) E(X(t − 1))
            = 0

and

RY(t1, t2) = E(Y(t1)Y(t2))
           = E(X(t1 − U)X(t2 − U))
           = E[E(X(t1 − U)X(t2 − U)|U)]
           = (1/2) E(X(t1)X(t2)|U = 0) + (1/2) E(X(t1 − 1)X(t2 − 1)|U = 1)
           = E(X(t1)X(t2))                  (a)
           = e^{−|t1−t2|},
where (a) follows by the independence of U and {X(t)} together with the stationarity of {X(t)}.


(b) Yes. Since E(Y(t)) is constant and RY(t1, t2) depends on (t1, t2) only through t1 − t2, {Y(t)} is WSS.
(c) From part (a), we have
    E(Y(t)²) = RY(t, t) = RX(0) = 1.
(d) For U ∼ Exp(1), we have
    E(Y(t)) = E(X(t − U))
            = E[E(X(t − U)|U)]
            = ∫_0^∞ E(X(t − u)) e^{−u} du
            = 0

and

RY(t1, t2) = E(Y(t1)Y(t2))
           = E(X(t1 − U)X(t2 − U))
           = E[E(X(t1 − U)X(t2 − U)|U)]
           = ∫_0^∞ E(X(t1 − u)X(t2 − u)) e^{−u} du
           = ∫_0^∞ RX(t1 − t2) e^{−u} du
           = RX(t1 − t2)
           = e^{−|t1−t2|}.

Note that {Y(t)} is WSS with RY(t1, t2) = RX(t1 − t2) for any random delay U. In fact, if {X(t)} is strictly stationary, then {Y(t)} has the same distribution as {X(t)}, matching not only the first and second moments.
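
As a simulation sanity check: a stationary Ornstein–Uhlenbeck process is one concrete zero-mean WSS process with RX(τ) = e^{−|τ|} (choosing it is our assumption, made only to have something to sample; the problem does not specify {X(t)}), and randomly delaying it by U ∼ Bern(1/2) indeed leaves the autocorrelation unchanged; a minimal sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, steps, trials = 0.01, 400, 50_000
    # OU choice is an assumption: any process with RX(tau) = e^{-|tau|} would do.
    a = np.exp(-dt)  # exact one-step autoregression coefficient of the sampled OU
    x = np.empty((trials, steps))
    x[:, 0] = rng.normal(0.0, 1.0, trials)   # stationary start, unit variance
    for k in range(1, steps):
        x[:, k] = a * x[:, k - 1] + np.sqrt(1.0 - a * a) * rng.normal(0.0, 1.0, trials)

    shift = int(1.0 / dt)                    # a delay of 1 = 100 grid steps
    u = rng.integers(0, 2, trials)           # U ~ Bern(1/2), one per path
    # Y(t) = X(t) on paths with u = 0, and X(t - 1) on paths with u = 1
    y = np.where((u == 0)[:, None], x[:, shift:], x[:, :-shift])

    tau, lag = 0.5, int(0.5 / dt)            # check RY(tau) ~ e^{-tau}
    print(np.mean(y[:, 0] * y[:, lag]), np.exp(-tau))  # both ~0.607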
