
EE 278 November 20, 2009

Statistical Signal Processing Handout #19


Homework #7 Solutions
1. Convergence examples. Consider the following sequences of random variables defined on the
probability space (Ω, F , P), where Ω = {0, 1, . . . , m − 1}, F is the collection of all subsets of Ω,
and P is the uniform distribution over Ω.
Xn(ω) = 1/n if ω = n mod m, and 0 otherwise;
Yn(ω) = 2^n if ω = 1, and 0 otherwise;
Zn(ω) = 1 if ω = 1, and 0 otherwise.
Which of these sequences converges to zero (a) with probability one, (b) in mean square, and/or
(c) in probability? Justify your answers.
Solution (15 points)
First consider Xn. Since Xn(ω) is either 0 or 1/n,

0 ≤ Xn(ω) ≤ 1/n for every ω.

Therefore lim_{n→∞} Xn(ω) = 0 for every ω ∈ Ω. Thus

P{ω : lim_{n→∞} Xn(ω) = 0} = P(Ω) = 1,

which shows that Xn → 0 with probability one.


Now consider convergence in mean square.
E((Xn − 0)²) = Σ_x x² pXn(x)
             = 0 · P{Xn = 0} + (1/n²) P{Xn = 1/n}
             = (1/n²) P{ω = (n mod m)}
             = (1/n²) · (1/m).
Obviously,
lim_{n→∞} 1/(mn²) = 0,
so Xn → 0 in mean square. This implies that Xn → 0 in probability also.
Next consider Yn. For any ε > 0,
lim_{n→∞} P{|Yn − 0| > ε} = lim_{n→∞} P{Yn > ε} = lim_{n→∞} P{Yn = 2^n} = P{ω = 1} = 1/m ≠ 0.
Thus Yn does not converge to 0 in probability. Since convergence with probability 1 implies convergence in
probability, Yn does not converge to 0 with probability 1. Similarly, Yn does not converge to 0 in mean square.
Finally consider Zn, which does not depend on n. For any ε such that 0 < ε < 1,
lim_{n→∞} P{|Zn − 0| > ε} = lim_{n→∞} P{Zn > ε} = P{Z1 = 1} = P{ω = 1} = 1/m ≠ 0.
Thus Zn does not converge to 0 in probability, hence Zn does not converge to 0 either with probability 1 or in mean square.
Comments:
• Xn also converges to 0 in distribution since it converges in probability.
• Yn does not converge in any sense. To show this it suffices to show that it does not converge
in distribution. But
P{Yn ≤ y} → 1 − 1/m < 1
for every y < ∞, so the limit of FYn is not a cdf at all.
• Zn converges in every sense to the nonzero random variable Z defined by
  Z(ω) = 1 if ω = 1, and 0 otherwise.
  This convergence is immediate since Zn = Z for every n.
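As a quick numerical sanity check (not part of the original solution), the following MATLAB sketch assumes m = 5 and estimates E(Xn²) and P{Yn > 1/2} by sampling ω uniformly; the first should decay like 1/(mn²) while the second should stay at 1/m for every n.

% Sanity check for Problem 1 (a sketch; assumes m = 5).
m = 5;                          % hypothetical alphabet size
N = 1e6;                        % Monte Carlo samples
omega = randi([0 m-1], N, 1);   % omega ~ Unif{0,...,m-1}
for n = [1 10 100]
    Xn = (1/n) * (omega == mod(n, m));   % X_n(omega)
    Yn = (2^n) * (omega == 1);           % Y_n(omega)
    fprintf('n = %3d: E(Xn^2) = %.2e (theory %.2e), P(Yn > 0.5) = %.3f (theory %.3f)\n', ...
        n, mean(Xn.^2), 1/(m*n^2), mean(Yn > 0.5), 1/m);
end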

2. Convergence with probability 1. Let X1, X2, X3, . . . be i.i.d. random variables with Xi ∼ Exp(λ).
Show that the sequence of random variables Yn = min{X1, . . . , Xn} converges with probability 1.
What is the limit?
Solution (10 points)
For any values of {Xn }, the sequence of Yn values is monotonically decreasing in n. Since the
random variables are ≥ 0, we know that the limit of Yn is ≥ 0. We suspect that Yn → 0. To
prove that Yn converges w.p.1 to 0, we show that for every ε > 0,
lim_{m→∞} P{|Yn − 0| < ε for all n ≥ m} = 1,

which is equivalent to lim_{m→∞} P{|Yn − 0| ≥ ε for some n ≥ m} = 0. So, let m ≥ 1 and consider

P{|Yn − 0| ≥ ε for some n ≥ m} = P{Yn ≥ ε for some n ≥ m}
  (a) = P( ∪_{n=m}^∞ {X1 ≥ ε, . . . , Xn ≥ ε, Xn+1 < ε} )
  (b) = Σ_{n=m}^∞ P{X1 ≥ ε, . . . , Xn ≥ ε, Xn+1 < ε}
  (c) = Σ_{n=m}^∞ P{Xn+1 < ε} Π_{i=1}^n P{Xi ≥ ε}
      = Σ_{n=m}^∞ (1 − e^{−λε}) e^{−λεn} = e^{−λεm} → 0 as m → ∞.

Step (a) follows because the event on the previous line is the same as saying that the smallest
index k such that Xk < ε is m + 1 or m + 2, . . . ; the union decomposes this event according to that index.
Step (b) follows by the fact that these events are disjoint. Step (c) follows by the independence of X1, X2, . . ..
Therefore Yn converges w.p.1 to 0.
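A short Monte Carlo illustration (a sketch, assuming λ = 1; any λ > 0 behaves the same way): each row below is one realization of X1, . . . , Xn, and the running minimum along a row should drift toward 0 as n grows.

% Monte Carlo sketch for Problem 2 (assumes lambda = 1).
lambda = 1; n = 1000; trials = 5;
U = rand(trials, n);
X = -log(U) / lambda;            % inverse-CDF sampling: X ~ Exp(lambda)
Y = cummin(X, 2);                % Y_n = min{X_1,...,X_n} along each row
disp(Y(:, [1 10 100 1000]))      % running minimum at n = 1, 10, 100, 1000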



3. Convergence in probability. (Bonus) Let X1, X2, X3, . . . be a sequence of nonnegative random
variables such that lim_{n→∞} E(Xn) = 0.
a. Does the sequence Xn converge in probability? If so, what is the limit? Justify your answer
mathematically.
b. Does the sequence Yn = 1 − e−Xn converge in probability? If so, what is the limit? Justify
your answer mathematically.
Solution (10 points)

a. A sequence of random variables Xn converges in probability to X if for every ε > 0

   lim_{n→∞} P{|Xn − X| > ε} = 0.

   We guess that Xn converges to 0. To prove this, consider

   P{|Xn − 0| > ε} = P{Xn > ε} ≤ E(Xn)/ε → 0 as n → ∞.

   The inequality follows by the Markov inequality, since Xn ≥ 0.
b. We guess that Yn also converges to 0. By Jensen’s inequality
E(Yn) ≤ 1 − e^{−E(Xn)},
since (1 − e−x ) is a concave function. Therefore E(Yn ) → 0 as n → ∞ and so Yn → 0 by the
result of part (a).

4. Vector CLT. The signal received over a wireless communication channel can be represented by
the two sums

X1n = (1/√n) Σ_{j=1}^n Zj cos Θj    and    X2n = (1/√n) Σ_{j=1}^n Zj sin Θj ,

where Z1, Z2, Z3, . . . are i.i.d. with mean µ and variance σ² and Θ1, Θ2, Θ3, . . . are i.i.d. U[0, 2π]
and independent of the Zi ’s. Find the distribution of [ X1n X2n ] as n → ∞.
Solution (10 points)
The key point to this problem is to realize that we are asked to find the distribution of the
random vector Yn = [ X1n X2n ]T as n → ∞. First note that
E(X1n) = E( (1/√n) Σ_{j=1}^n Zj cos Θj )
       = (1/√n) Σ_{j=1}^n E(Zj cos Θj)        (by linearity of expectation)
       = (1/√n) Σ_{j=1}^n E(Zj) E(cos Θj)     (by independence of Zj and Θj)

Since E(cos Θj ) = 0, we conclude that E(X1n ) = 0. Similarly, E(X2n ) = 0.


As discussed in the lecture notes, the Central Limit Theorem applies to a sequence of i.i.d.
random vectors. Thus the pdf of Yn converges to N (0, ΣY ). All that remains is to find the



covariance matrix for Yn .
Var(X1n) = Var( (1/√n) Σ_{j=1}^n Zj cos Θj )
         = (1/n) Σ_{j=1}^n Var(Zj cos Θj)       (independence)
         = Var(Z1 cos Θ1)                        (identically distributed random variables)
         = E(Z1² cos² Θ1) − (E(Z1 cos Θ1))²
         = E(Z1²) E(cos² Θ1)                     (since E(Z1 cos Θ1) = 0)
         = (σ² + µ²) E(cos² Θ1)
         = ½ (σ² + µ²).                          (since E(cos² Θ1) = ½)
Now consider

Cov(X1n, X2n) = E( (1/n) Σ_{j=1}^n Σ_{k=1}^n Zj Zk cos Θk sin Θj ) − E(X1n) E(X2n)
              = (1/n) Σ_{j=1}^n Σ_{k=1}^n E(Zj Zk) E(cos Θk sin Θj).   (independence)

If j ≠ k then the (j, k) term is

E(Zj Zk) E(cos Θk sin Θj) = E(Zj) E(Zk) E(cos Θk) E(sin Θj) = 0,   since E(cos Θk) = 0.

If j = k then the term is

E(Zj²) E(cos Θj sin Θj) = 0,   since E(cos Θj sin Θj) = ½ E(sin 2Θj) = 0.

Since Cov(X1n, X2n) = 0 in all cases,

ΣY = [ ½(σ² + µ²)        0
            0       ½(σ² + µ²) ].
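A simulation sketch of this limit (µ = 1, σ = 2 and n = 1000 are illustrative assumptions, not values from the problem): the empirical covariance of [X1n X2n]ᵀ should be close to ½(σ² + µ²) I, and the scatter should look jointly Gaussian.

% Simulation sketch for Problem 4 (assumes mu = 1, sigma = 2, n = 1000).
mu = 1; sigma = 2; n = 1000; trials = 5000;
Z  = mu + sigma*randn(trials, n);     % i.i.d. Z_j with mean mu, variance sigma^2
Th = 2*pi*rand(trials, n);            % i.i.d. Theta_j ~ U[0, 2*pi]
X1 = sum(Z.*cos(Th), 2) / sqrt(n);    % X_{1n}, one value per trial
X2 = sum(Z.*sin(Th), 2) / sqrt(n);    % X_{2n}, one value per trial
cov([X1 X2])                          % compare with ((sigma^2 + mu^2)/2)*eye(2)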

5. Digital modulation using PSK. The data to be modulated, {Xn : n ≥ 0}, is modeled by a
Bernoulli process with p = 1/2. Define the discrete-time phase process {Θn : n ≥ 0} by

Θn = +π/2 if Xn = 1, and −π/2 if Xn = 0.
Let T > 0 be the transmission time for one bit. Define the continuous-time phase process
{Θ(t) : t ≥ 0} by Θ(t) = Θn , where nT ≤ t < (n + 1)T .
The modulated data is given by

X(t) = cos(4πt/T + Θ(t)),   t ≥ 0.



a. Use Matlab to sketch a sample function of the process X(t). Let T = 1.
b. Find the first order pmf of X(t).
c. Define the process

   Y(t) = cos(4πt/T + Θ(t) + Ψ),
where Ψ ∼ U[0, 2π] is independent of Xn for n ≥ 0 and t ≥ 0. Use Matlab to sketch a
sample path of Y (t). Again let T = 1.
d. Find the first-order pdf of Y (t).
e. Find the mean and autocorrelation functions for random processes X(t) and Y (t).
Solution (20 points)

a. A sample function is shown in Figure 1.


[Figure 1 has two panels: X(t) versus t for part (a) on top, and Y(t) versus t for part (c) below, each plotted over 0 ≤ t ≤ 5.]

Figure 1: Digital modulation using PSK.

The following code is used for parts (a) and (c).


clear all;
clf;

% This code will generate 5-second sample runs of X(t) and


% Y(t) and print out the corresponding plots.

% First, select 5 random phases (either pi/2 or -pi/2 with equal probability).
% WRITE MATLAB CODE HERE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

theta_n = (pi/2)*(2*(rand(5,1)>0.5)-1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Replicate theta_n so that each random phase covers a 100-step time range.
theta_t = ones(1,100) * theta_n(1);

for i=2:5
theta_t = [theta_t ones(1,100)*theta_n(i)];
end

% Generate the time steps corresponding to theta_t.


t=0:.01:4.99;

% Generate the values of X(t).


% WRITE MATLAB CODE HERE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
X_t = cos(4*pi.*t + theta_t);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

subplot(2,1,1);
plot(t, X_t);
xlabel('t');
ylabel('X(t)');
title('Problem 2: part a');

% Generate the values of Y(t). (Don't forget to generate "psi"!)


% WRITE MATLAB CODE HERE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Y_t = cos(4*pi.*t + theta_t + rand(1)*2*pi);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

subplot(2,1,2);
plot(t, Y_t);
xlabel('t');
ylabel('Y(t)');
title('Problem 2: part c');
print hw7_p2;



b. The first-order pmf of X(t) is

   pX(t)(x) = P{ cos(4πt/T + Θ(t)) = x }
            = P{ cos(4πt/T + Θ⌊t/T⌋) = x | Θ⌊t/T⌋ = +π/2 } P{ Θ⌊t/T⌋ = +π/2 }
              + P{ cos(4πt/T + Θ⌊t/T⌋) = x | Θ⌊t/T⌋ = −π/2 } P{ Θ⌊t/T⌋ = −π/2 }
            = ½ P{ cos(4πt/T + π/2) = x } + ½ P{ cos(4πt/T − π/2) = x }
            = ½ P{ −sin(4πt/T) = x } + ½ P{ sin(4πt/T) = x }

            = 1    if x = 0 and t = nT/4 for some integer n
            = ½    if x = +sin(4πt/T) and t ≠ nT/4
            = ½    if x = −sin(4πt/T) and t ≠ nT/4
            = 0    otherwise

c. A sample function of Y (t) is shown in Figure 1.


d. Since Ψ ∼ U[0, 2π], the distribution of Ψ (mod 2π) is unchanged when we add either +π/2 or −π/2. Thus
   the first-order pdf of Y(t) is the pdf of the random variable cos(4πt/T + Ψ), which
   is the same as the first-order pdf of the random phase process in the lecture notes. Thus

   fY(t)(y) = 1/(π√(1 − y²)) if |y| < 1, and 0 if |y| ≥ 1.
e. First consider X(t). The mean is

   E(X(t)) = ½ cos(4πt/T + π/2) + ½ cos(4πt/T − π/2)
           = −½ sin(4πt/T) + ½ sin(4πt/T) = 0.

   The autocorrelation is

   RX(t1, t2) = E(X(t1)X(t2))
              = E[ cos(4πt1/T + Θ(t1)) cos(4πt2/T + Θ(t2)) ]
              = ½ E[ cos(4π(t1 + t2)/T + Θ(t1) + Θ(t2)) ] + ½ E[ cos(4π(t1 − t2)/T + Θ(t1) − Θ(t2)) ].

   (The last equality uses the identity cos x cos y = ½(cos(x + y) + cos(x − y)).) There are two
   cases to consider. If ⌊t1/T⌋ = ⌊t2/T⌋ then Θ(t1) = Θ(t2), so

   RX(t1, t2) = ½ E[ cos(4π(t1 + t2)/T + 2Θ(t1)) ] + ½ cos(4π(t1 − t2)/T)
              = ½ ( cos(4π(t1 − t2)/T) − cos(4π(t1 + t2)/T) ).



On the other hand, if ⌊t1/T⌋ ≠ ⌊t2/T⌋ then

RX(t1, t2) = ½ E[ cos(4π(t1 + t2)/T + Θ(t1) + Θ(t2)) ] + ½ E[ cos(4π(t1 − t2)/T + Θ(t1) − Θ(t2)) ]
           = ½ [ ¼ cos(4π(t1 + t2)/T + π/2 + π/2) + ¼ cos(4π(t1 − t2)/T + π/2 − π/2)
               + ¼ cos(4π(t1 + t2)/T + π/2 − π/2) + ¼ cos(4π(t1 − t2)/T + π/2 + π/2)
               + ¼ cos(4π(t1 + t2)/T − π/2 + π/2) + ¼ cos(4π(t1 − t2)/T − π/2 − π/2)
               + ¼ cos(4π(t1 + t2)/T − π/2 − π/2) + ¼ cos(4π(t1 − t2)/T − π/2 + π/2) ]
           = 0,

since Θ(t1) and Θ(t2) are independent, each of the four phase pairs has probability ¼, and the eight cosine terms cancel in pairs.

Summarizing the two cases:

RX(t1, t2) = ½ ( cos(4π(t1 − t2)/T) − cos(4π(t1 + t2)/T) )   if ⌊t1/T⌋ = ⌊t2/T⌋
           = 0                                                 otherwise

The mean of Y(t) is obviously also zero. To find the autocorrelation function consider that
Ψ does not affect Θ(t), i.e., the transitions between the bits still occur at t = nT. The
difference is that we have the same phase offset Ψ within each bit. Thus,

RY(t1, t2) = E(Y(t1)Y(t2))
           = ½ E[ cos(4π(t1 + t2)/T + Θ(t1) + Ψ + Θ(t2) + Ψ) ]
             + ½ E[ cos(4π(t1 − t2)/T + Θ(t1) + Ψ − Θ(t2) − Ψ) ].

If ⌊t1/T⌋ = ⌊t2/T⌋ then Θ(t1) = Θ(t2), and

RY(t1, t2) = ½ E[ cos(4π(t1 + t2)/T + 2Θ(t1) + 2Ψ) ] + ½ cos(4π(t1 − t2)/T)
           = ½ E[ cos(4π(t1 + t2)/T + π + 2Ψ) ] + ½ cos(4π(t1 − t2)/T)
           = ½ cos(4π(t1 − t2)/T),   since 2Ψ ∼ U[0, 4π].

If ⌊t1/T⌋ ≠ ⌊t2/T⌋ then

RY(t1, t2) = ½ E[ cos(4π(t1 + t2)/T + Θ(t1) + Θ(t2) + 2Ψ) ]
             + ½ E[ cos(4π(t1 − t2)/T + Θ(t1) − Θ(t2)) ] = 0.

Summarizing the two cases:

RY(t1, t2) = ½ cos(4π(t1 − t2)/T)   if ⌊t1/T⌋ = ⌊t2/T⌋
           = 0                       otherwise
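As a quick check of RX (not part of the original solution; T = 1 and the two sample times are arbitrary assumptions), the following MATLAB sketch compares a Monte Carlo estimate against the closed form for two times in the same bit interval.

% Monte Carlo check of R_X(t1, t2) for Problem 5 (assumes T = 1; t1, t2 in the same bit).
T = 1; t1 = 0.2; t2 = 0.3; trials = 1e6;
Theta = (pi/2) * (2*(rand(trials,1) > 0.5) - 1);   % shared +/- pi/2 phase for the bit containing t1, t2
emp = mean(cos(4*pi*t1/T + Theta) .* cos(4*pi*t2/T + Theta));
thy = 0.5*(cos(4*pi*(t1 - t2)/T) - cos(4*pi*(t1 + t2)/T));
fprintf('empirical %.4f, theory %.4f\n', emp, thy)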

6. Absolute value random walk. (Bonus) Let Xn be a random walk defined by

   X0 = 0,
   Xn = Σ_{i=1}^n Zi,   n ≥ 1,



where {Zi} is an i.i.d. process with P(Z1 = −1) = P(Z1 = +1) = ½. Define the absolute value
random process Yn = |Xn |.
a. Find P{Yn = k}.
b. Find P{max{Yi : 1 ≤ i < 20} = 10 | Y20 = 0}.
Solution (10 points)

a. This is a straightforward calculation and we can use results from lecture notes. If k ≥ 0
   then
   P{Yn = k} = P{Xn = +k or Xn = −k}.
   If k > 0 then P{Yn = k} = 2 P{Xn = k}, while P{Yn = 0} = P{Xn = 0}. Thus

   P{Yn = k} = (n choose (n+k)/2) (½)^(n−1)   if k > 0, n − k is even, n − k ≥ 0
             = (n choose n/2) (½)^n            if k = 0, n is even, n ≥ 0
             = 0                                otherwise

b. If Y20 = |X20| = 0 then there are only two sample paths with max_{1≤i<20} |Xi| = 10. These
   two paths are shown in Figure 2. Since the total number of sample paths with X20 = 0 is
   (20 choose 10) and, given Y20 = 0, all such paths are equally likely,

   P{ max_{1≤i<20} Yi = 10 | Y20 = 0 } = 2 / (20 choose 10) = 2/184756 = 1/92378.


Figure 2: Sample paths for Problem 3.
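A brute-force check of this count (not in the original solution): sample random-walk bridges with X20 = 0 uniformly by shuffling ten +1 steps and ten −1 steps, then count how often the maximum of |Xi| over 1 ≤ i < 20 equals 10. The event is rare, so the estimate is noisy, but it should sit near 1/92378.

% Monte Carlo sketch for Problem 6(b); exact answer is 2/nchoosek(20,10) = 1/92378.
trials = 1e6;
count = 0;
base = [ones(1,10), -ones(1,10)];          % ten +1 steps and ten -1 steps
for k = 1:trials
    X = cumsum(base(randperm(20)));        % uniform path with X_20 = 0
    count = count + (max(abs(X(1:19))) == 10);
end
fprintf('estimate %.2e, exact %.2e\n', count/trials, 2/nchoosek(20,10))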

7. Random walk with random start. (Bonus) Let X0 be a random variable with pmf

   pX0(x) = 1/5 if x ∈ {−2, −1, 0, +1, +2}, and 0 otherwise.

   Suppose that X0 is the starting position of a random walk {Xn : n ≥ 0} defined by

   Xn = X0 + Σ_{i=1}^n Zi ,



where {Zi} is an i.i.d. random process with P(Z1 = −1) = P(Z1 = +1) = ½ and every Zi is
independent of X0 .
a. Does Xn have independent increments? Justify your answer.
b. What is the conditional pmf of X0 given that X11 = 2 ?
Solution (10 points)

a. We must show that for every sequence of indexes i1 , i2 , . . . , in such that i1 < i2 < . . . < in ,
the increments Xi1 , Xi2 −Xi1 , . . . , Xin −Xin−1 are independent. This is true by the definition
of the {Xi } random process; each Xij − Xij−1 is the sum of a different set of Zi ’s, and the
Zi ’s are i.i.d. and independent of X0 , which appears only in the first increment.
b. Starting at an even number (0 or ±2) can be ruled out, since there is no way that the process
could then end up at X11 = 2. Using Bayes rule for the remaining possibilities, we get
P(X0 = −1 | X11 = 2) = P(X11 = 2 | X0 = −1) P(X0 = −1) / P(X11 = 2)

                     = [ (1/5) (11 choose 7) (½)^7 (½)^4 ]
                       / [ (1/5) (11 choose 7) (½)^7 (½)^4 + (1/5) (11 choose 6) (½)^6 (½)^5 ]

                     = (11 choose 7) / [ (11 choose 7) + (11 choose 6) ]

                     = 1 / ( 1 + (11 choose 6)/(11 choose 7) ) = 1 / (1 + 7/5) = 5/12.

Similarly, P(X0 = +1 | X11 = 2) = 7/12.
To summarize,

P(X0 = x | X11 = 2) = 5/12 if x = −1,  7/12 if x = +1,  and 0 otherwise.
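A numerical check of this posterior (not part of the original solution): compute the binomial likelihood P(X11 = 2 | X0 = x0) for each starting point and normalize, using the fact that (13 − x0)/2 of the 11 steps must be +1.

% Numerical check for Problem 7(b): posterior of X0 given X11 = 2.
x0 = [-2 -1 0 1 2];
up = (11 + (2 - x0)) / 2;                          % number of +1 steps needed
lik = zeros(size(x0));
ok = (up == round(up)) & (up >= 0) & (up <= 11);   % feasible starting points only
lik(ok) = arrayfun(@(k) nchoosek(11, k), up(ok)) * 0.5^11;
post = lik / sum(lik);                             % uniform prior on x0 cancels
disp([x0; post])                                   % posterior row should be [0 5/12 0 7/12 0]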

8. Markov processes. Let {Xn } be a discrete-time continuous-valued Markov random process, that
is,
f (xn+1 |x1 , x2 , . . . , xn ) = f (xn+1 |xn )
for every n ≥ 1 and for all sequences (x1 , x2 , . . . , xn+1 ).
a. Show that f (x1 , . . . , xn ) = f (x1 )f (x2 |x1 ) · · · f (xn |xn−1 ) = f (xn )f (xn−1 |xn ) · · · f (x1 |x2 ) .
b. Show that f (xn |x1 , x2 , . . . , xk ) = f (xn |xk ) for every k such that 1 ≤ k < n.
c. Show that f (xn+1 , xn−1 |xn ) = f (xn+1 |xn )f (xn−1 |xn ), that is, the past and the future are
independent given the present.
Solution (15 points)

a. We are given that f (xn+1 |x1 , x2 , . . . , xn ) = f (xn+1 |xn ). From the chain rule, in general,
f (x1 , x2 , . . . , xn ) = f (x1 )f (x2 |x1 )f (x3 |x1 , x2 ) · · · f (xn |x1 , x2 , . . . , xn−1 ) .
Thus, by the definition of Markovity,
f (x1 , x2 , . . . , xn ) = f (x1 )f (x2 |x1 )f (x3 |x2 ) · · · f (xn |xn−1 ) . (1)

Page 10 of 15 EE 278, Autumn 2009


Similarly, applying the chain rule in reverse we get
f (x1 , x2 , . . . , xn ) = f (xn )f (xn−1 |xn )f (xn−2 |xn−1 , xn ) · · · f (x1 |x2 , x3 , . . . , xn ).
Next,

f(xi | xi+1, . . . , xn) = f(xi, xi+1, . . . , xn) / f(xi+1, . . . , xn) = f(xi) f(xi+1|xi) / f(xi+1) = f(xi|xi+1),   (2)
where the second equality follows from (1). Therefore
f (x1 , x2 , . . . , xn ) = f (xn )f (xn−1 |xn )f (xn−2 |xn−1 , xn ) · · · f (x1 |x2 , x3 , . . . , xn )
= f (xn )f (xn−1 |xn )f (xn−2 |xn−1 ) · · · f (x1 |x2 ) ,
where the second line follows from (2).
b. First consider

   f(xn | x1, . . . , xk) = f(x1, . . . , xk, xn) / f(x1, . . . , xk)
                          = f(xn) f(xk|xn) f(xk−1|xk, xn) · · · f(x1|x2, . . . , xk, xn)
                            / [ f(xk) f(xk−1|xk) · · · f(x1|x2) ],   (3)
where the denominator in the second line follows from part (a). Next consider
f (xk−1 , xk , . . . , xn ) = f (xk , xn )f (xk−1 |xk , xn )f (xk+1 , xk+2 , · · · , xn−1 |xk−1 , xk , xn )
= f (xn )f (xn−1 |xn ) · · · f (xk−1 |xk ) ,
where the second line follows from (2). Integrating both sides over xk+1 , . . . , xn−1 (i.e., using
the law of total probability), we get
f (xk , xn )f (xk−1 |xk , xn ) = f (xk , xn )f (xk−1 |xk ) .
Finally, substituting into (3), we get

f(xn | x1, . . . , xk) = f(xn) f(xk|xn) f(xk−1|xk) · · · f(x1|x2) / [ f(xk) f(xk−1|xk) · · · f(x1|x2) ]
                       = f(xn) f(xk|xn) / f(xk) = f(xn|xk).
c. By the chain rule for conditional densities,
f (xn+1 , xn−1 |xn ) = f (xn+1 |xn )f (xn−1 |xn+1 , xn ) = f (xn+1 |xn )f (xn−1 |xn ) ,
where the second equality follows from (2).



Extra Problems Solutions
1. Gambling. Let X1, X2, X3, . . . be independent random variables with the same mean µ ≠ 0 and
   the same variance σ². Find the limit of P{ (1/n) Σ_{i=1}^n Xi < µ/2 } as n → ∞.

Solution
By the weak law of large numbers, the sample mean Sn = (1/n) Σ_{i=1}^n Xi converges to the mean µ in
probability, so P(|Sn − µ| > ε) → 0 as n → ∞ for every ε > 0. The limiting value of P(Sn < µ/2) depends
on the sign of µ.
• If µ < 0 then P(Sn < µ/2) → 1. This is because P(|Sn − µ| > ε) → 0 as n → ∞ for all
  positive ε, which means P(|Sn − µ| < ε) → 1 as n → ∞. Since Sn → µ and µ < µ/2, we see
  that P(Sn < µ/2) → 1.
• If µ > 0 then P(|Sn − µ| < ε) → 1 as n → ∞. But µ > µ/2, so if Sn → µ then P(Sn < µ/2) → 0.
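A quick simulation of the positive-mean regime (an illustrative sketch: Gaussian Xi with µ = 1 and σ = 2 are assumed; any distribution with finite variance behaves the same way):

% Illustration of the gambling limit (assumes X_i ~ N(mu, sigma^2), mu = 1, sigma = 2).
mu = 1; sigma = 2; trials = 5000;
for n = [10 100 1000]
    Sn = mean(mu + sigma*randn(trials, n), 2);          % sample means
    fprintf('n = %4d: P(Sn < mu/2) = %.3f\n', n, mean(Sn < mu/2));
end
% For mu > 0 the probability falls toward 0; repeating with mu = -1 drives it toward 1.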

2. Convergence to a random variable. Consider a coin with random bias P ∼ FP (p). Flip the coin
   n times independently to generate X1, X2, . . . , Xn, where Xi = 1 if the i-th outcome is heads
   and Xi = 0 otherwise. Let Sn = (1/n) Σ_{i=1}^n Xi be the sample average. Show that Sn converges
   to P in mean square.
Solution
We show that Sn converges to P in mean square. Consider

E((Sn − P)²) = EP( E((Sn − P)² | P) )
             = EP( Var(Sn | P) )
             = EP( (1/n²) Var( Σ_{i=1}^n Xi | P ) )
             = EP( (1/n²) · nP(1 − P) )        (since Σ_{i=1}^n Xi is Binom(n, P) given P)
             = (1/n) ( E(P) − E(P²) ).

Therefore lim_{n→∞} E((Sn − P)²) = 0 and Sn converges to P in mean square.
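A simulation sketch of this rate (P ∼ U[0,1] is an assumed prior, not given in the problem): the empirical mean-squared error should track (E(P) − E(P²))/n = 1/(6n).

% MSE of Sn as an estimate of the random bias P (assumes P ~ U[0,1]).
trials = 1e4;
for n = [10 100 1000]
    P  = rand(trials, 1);                       % random bias, one per trial
    Sn = mean(rand(trials, n) < P, 2);          % sample average of n coin flips
    fprintf('n = %4d: MSE = %.2e (theory %.2e)\n', n, mean((Sn - P).^2), 1/(6*n));
end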

3. Polls. A population of 10^8 voters chooses between two candidates A and B. A fraction 0.5005
of the voters plan to vote for candidate A and the rest for candidate B. A fair poll with sample
size n is taken, i.e., the n samples are i.i.d. and done with replacement (same person may be
polled more than once). Find a good estimate of n such that the probability that candidate A
wins the poll is greater than 0.99.
Solution
Let U1, U2, . . . , Un be i.i.d. such that

Ui = +1 if person i votes for candidate A, and −1 otherwise.

Thus pUi(+1) = 0.5005, and the difference in the number of votes is Xn = Σ_{i=1}^n Ui. By the



Central Limit Theorem, for large n we can approximate the distribution of Xn by a Gaussian:

Xn ∼ N( n E(U1), n σ²_{U1} ).

Therefore

E(U1) = 0.5005 − 0.4995 = 0.001
σ²_{U1} = E(U1²) − (0.001)² = 1 − 10⁻⁶ = 0.999999

The probability that A wins after the n votes are counted is P(Xn > 0). Thus

P(Xn > 0) ≈ 1 − Q( 0.001 n / √(0.999999 n) ) = 0.99,

which yields n ≥ 5475600.
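The required n can be checked numerically in base MATLAB using erfcinv (Q(x) = ½ erfc(x/√2)); the exact figure depends on how Q⁻¹(0.01) is rounded, so the result below lands near, not exactly at, the value quoted above.

% Solve Q(0.001*sqrt(n)/sigma) = 0.01 for n (a sketch; uses Q(x) = 0.5*erfc(x/sqrt(2))).
sigma = sqrt(0.999999);
x = sqrt(2) * erfcinv(2*0.01);        % Q^{-1}(0.01), about 2.326
n = (x * sigma / 0.001)^2;            % from 0.001*sqrt(n)/sigma = x
fprintf('n = %.0f\n', ceil(n))        % about 5.4e6, the same order as n >= 5475600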

4. Random binary waveform. In a digital communication channel, the symbol “1” is represented
by the fixed duration rectangular pulse
g(t) = 1 for 0 ≤ t < T, and 0 otherwise,

and the symbol “0” is represented by −g(t). The data transmitted over the channel is
represented by the random process

X(t) = Σ_{k=0}^∞ Ak g(t − kT),   t ≥ 0,

where A0, A1, A2, . . . is an i.i.d. random sequence with

Ai = +1 with probability ½, and −1 with probability ½.

a. Sketch a sample function of the process X(t).


b. Find the first-order and second-order pmfs of the process X(t).
c. Find the mean and the autocorrelation functions of the process X(t).
Solution

a. See Figure 3 for a sketch of the sample function of X(t).
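   A short MATLAB sketch in the same style as the Problem 5 code can generate such a sample function (T = 1 and a 10-symbol run are assumptions made here for illustration):

   % Sample function of the random binary waveform (assumes T = 1, 10 symbols).
   T = 1; K = 10;
   A = 2*(rand(1, K) > 0.5) - 1;          % i.i.d. +/-1 symbols A_0,...,A_{K-1}
   t = 0:0.01:K*T - 0.01;
   X_t = A(floor(t/T) + 1);               % X(t) = A_{floor(t/T)} on [kT, (k+1)T)
   stairs(t, X_t); xlabel('t'); ylabel('X(t)'); ylim([-1.5 1.5]);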


b. The first-order pmf is

   PX(t)(x) = P(X(t) = x) = P( Σ_{k=0}^∞ Ak g(t − kT) = x )
            = P(A⌊t/T⌋ = x) = P(A0 = x)
            = ½ if x = ±1, and 0 otherwise.

   Note that X(t1) and X(t2) are dependent only if t1 and t2 fall within the same time interval T




Figure 3: Sample function of the process X(t).

(indexed by k). Thus the second-order pmf is

PX(t1)X(t2)(x1, x2) = P(X(t1) = x1, X(t2) = x2)
                    = P( Σ_{k=0}^∞ Ak g(t1 − kT) = x1, Σ_{k=0}^∞ Ak g(t2 − kT) = x2 )
                    = P(A⌊t1/T⌋ = x1, A⌊t2/T⌋ = x2)
                    = P(A0 = x1, A0 = x2) if ⌊t1/T⌋ = ⌊t2/T⌋, and P(A0 = x1, A1 = x2) otherwise
                    = ½   if ⌊t1/T⌋ = ⌊t2/T⌋ and (x1, x2) = (+1, +1) or (−1, −1)
                    = ¼   if ⌊t1/T⌋ ≠ ⌊t2/T⌋ and (x1, x2) = (±1, ±1)
                    = 0   otherwise.

c. For t ≥ 0,

   E(X(t)) = E( Σ_{k=0}^∞ Ak g(t − kT) ) = Σ_{k=0}^∞ g(t − kT) E(Ak) = 0.

   To find the autocorrelation RX(t1, t2), we note again that X(t1) and X(t2) are dependent only
   if t1 and t2 fall within the same interval (indexed by k). Thus

   RX(t1, t2) = E(X(t1)X(t2)) = Σ_{k=0}^∞ g(t1 − kT) g(t2 − kT) E(Ak²)
              = 1 if ⌊t1/T⌋ = ⌊t2/T⌋, and 0 otherwise.

5. Moving-average process. Let {Xn : n ≥ 1} be a discrete-time white Gaussian noise process, that
is, X1 , X2 , X3 . . . are i.i.d. random variables with Xn ∼ N (0, N). Consider the moving-average
process {Yn : n ≥ 2} defined by
Yn = (2/3) Xn−1 + (1/3) Xn−2 ,   n ≥ 2.
Let X0 = 0. Find the mean and autocorrelation functions for the process Yn .



Solution
Mean and autocorrelation function of the moving-average process with weights 2/3 and 1/3:

E(Yn) = E( (2/3) Xn−1 + (1/3) Xn−2 ) = (2/3) E(Xn−1) + (1/3) E(Xn−2) = 0

RY(2, 2) = E(Y2²) = E( ((2/3) X1 + (1/3) X0)² ) = (4/9) E(X1²) = (4/9) N

RY(2, 3) = RY(3, 2) = E(Y2 Y3) = E( ((2/3) X1 + (1/3) X0)((2/3) X2 + (1/3) X1) ) = (2/9) E(X1²) = (2/9) N

For m ≥ 3, n ≥ 3,

RY(m, n) = E(Ym Yn) = E( ((2/3) Xn−1 + (1/3) Xn−2)((2/3) Xm−1 + (1/3) Xm−2) )
         = (2/9) E(X_{n−2}²)                       if n − m = 1
         = (1/9) E(X_{n−1}²) + (4/9) E(X_{n−2}²)   if n = m
         = (2/9) E(X_{n−1}²)                       if m − n = 1
         = 0                                        otherwise

         = (5/9) N   if m = n
         = (2/9) N   if |m − n| = 1
         = 0          otherwise

To summarize, if m ≥ 2, n ≥ 2,

RY(m, n) = (4/9) N   if |m − n| = 0 and m = 2
         = (5/9) N   if |m − n| = 0 and m ≠ 2
         = (2/9) N   if |m − n| = 1
         = 0          otherwise

Note that {Yn : n ≥ 3} is a WSS Gaussian random process, hence is SSS.
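An empirical check of these values (a sketch, taking N = 1): estimate the lag-0 and lag-1 correlations of Yn from one long realization and compare with 5N/9 and 2N/9.

% Empirical autocorrelation of the moving-average process (assumes N = 1).
N = 1; len = 1e6;
X = sqrt(N) * randn(1, len);                 % white Gaussian noise, X_n ~ N(0, N)
Y = (2/3) * X(2:end) + (1/3) * X(1:end-1);   % Y_n = (2/3) X_{n-1} + (1/3) X_{n-2}
lag0 = mean(Y .* Y);                         % compare with 5*N/9
lag1 = mean(Y(2:end) .* Y(1:end-1));         % compare with 2*N/9
fprintf('lag 0: %.3f (5/9 = %.3f), lag 1: %.3f (2/9 = %.3f)\n', lag0, 5/9, lag1, 2/9)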
