which is equivalent to lim_{m→∞} P{|Yn − 0| ≥ ε for some n ≥ m} = 0. So, let m ≥ 1 and consider

P{|Yn − 0| ≥ ε for some n ≥ m} = P{Yn ≥ ε for some n ≥ m}

(a) = P( ∪_{n=m}^∞ {X1 ≥ ε, . . . , Xn ≥ ε, Xn+1 < ε} )

(b) = Σ_{n=m}^∞ P{X1 ≥ ε, . . . , Xn ≥ ε, Xn+1 < ε}

(c) = Σ_{n=m}^∞ P{Xn+1 < ε} ∏_{i=1}^n P{Xi ≥ ε}

= Σ_{n=m}^∞ (1 − e^{−λε}) e^{−λεn} = e^{−λεm} → 0 as m → ∞.
Step (a) follows because the event on the previous line is, up to a zero-probability event (Xk ≥ ε for all k), the same as saying that the smallest index k such that Xk < ε is m + 1 or m + 2, . . .. Step (b) follows by the fact that these events are disjoint. Step (c) follows by the independence of X1, X2, . . ..
Therefore Yn converges w.p.1 to 0.
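The limit e^{−λεm} is easy to check numerically. A minimal MATLAB sketch, assuming (as the structure of the proof indicates) that Yn = min(X1, . . . , Xn) with X1, X2, . . . i.i.d. Exp(λ), so that {Yn ≥ ε for some n ≥ m} reduces to {X1 ≥ ε, . . . , Xm ≥ ε}:

% Monte Carlo check of P{Y_n >= eps for some n >= m} = exp(-lambda*eps*m),
% assuming Y_n = min(X_1,...,X_n) with X_i i.i.d. Exp(lambda).
lambda = 2; eps0 = 0.5; m = 3; ntrials = 1e6;
X = -log(rand(ntrials, m)) / lambda;   % inverse-CDF sampling of Exp(lambda)
phat = mean(all(X >= eps0, 2));        % fraction of trials with min(X_1..X_m) >= eps0
fprintf('estimate %.4f, theory %.4f\n', phat, exp(-lambda*eps0*m));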
4. Vector CLT. The signal received over a wireless communication channel can be represented by the two sums

X1n = (1/√n) Σ_{j=1}^n Zj cos Θj and X2n = (1/√n) Σ_{j=1}^n Zj sin Θj,

where Z1, Z2, Z3, . . . are i.i.d. with mean µ and variance σ² and Θ1, Θ2, Θ3, . . . are i.i.d. U[0, 2π] and independent of the Zj's. Find the distribution of [X1n X2n] as n → ∞.
Solution (10 points)
The key point of this problem is to realize that we are asked to find the distribution of the random vector Yn = [X1n X2n]^T as n → ∞. First note that

E(X1n) = E( (1/√n) Σ_{j=1}^n Zj cos Θj )

= (1/√n) Σ_{j=1}^n E(Zj cos Θj)  (by linearity of expectation)

= (1/√n) Σ_{j=1}^n E(Zj) E(cos Θj)  (by independence of Zj and Θj)

= 0,

since E(cos Θj) = 0 for Θj ∼ U[0, 2π]. By the same argument, E(X2n) = 0. Next consider the covariance:
Cov(X1n, X2n) = E(X1n X2n) = (1/n) Σ_{j=1}^n Σ_{k=1}^n E(Zj Zk sin Θj cos Θk).

If j ≠ k then

E(Zj Zk sin Θj cos Θk) = E(Zj) E(Zk) E(sin Θj) E(cos Θk) = 0, since E(cos Θk) = 0.

If j = k then

E(Zj² sin Θj cos Θj) = E(Zj²) · ½ E(sin 2Θj) = 0.

Therefore Cov(X1n, X2n) = 0, while Var(X1n) = Var(X2n) = E(Z²) E(cos² Θ) = (µ² + σ²)/2. Since Yn is a normalized sum of the i.i.d. zero-mean random vectors [Zj cos Θj, Zj sin Θj]^T, the vector CLT gives

[X1n X2n]^T → N(0, ((µ² + σ²)/2) I) in distribution as n → ∞.
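A simulation sketch makes the limit visible; the choice Zj ∼ N(µ, σ²) below is arbitrary and for illustration only, since any distribution with mean µ and variance σ² gives the same limit:

% Simulation sketch of the vector CLT limit for [X_1n X_2n]^T.
% Z_j ~ N(mu, sigma^2) is an arbitrary illustrative choice.
mu = 1; sigma = 2; n = 1e4; ntrials = 2000;
Y = zeros(ntrials, 2);
for trial = 1:ntrials
  Z = mu + sigma * randn(n, 1);
  Th = 2*pi * rand(n, 1);   % Theta_j ~ U[0, 2*pi]
  Y(trial, :) = [sum(Z .* cos(Th)), sum(Z .* sin(Th))] / sqrt(n);
end
cov(Y)   % should approach ((mu^2 + sigma^2)/2) * eye(2) = 2.5 * eye(2)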
[Figure: two-panel plot of the sample paths X(t) (top) and Y(t) (bottom) versus t, 0 ≤ t ≤ 5; Problem 1, part c.]
% First, select 5 random phases (either pi/2 or -pi/2 with equal probability).
theta_n = (pi/2) * (2*(rand(1,5) < 0.5) - 1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Replicate theta_n so that each random phase covers a 100-step time range.
theta_t = ones(1,100) * theta_n(1);
for i=2:5
theta_t = [theta_t ones(1,100)*theta_n(i)];
end
% The signal definitions below are assumed for illustration (reconstructed
% from the form of R_Y derived below): carrier cos(4*pi*t/T) with the
% piecewise-constant random phase theta_t, plus a common random offset psi.
T = 1;
t = linspace(0, 5, 500);
X_t = cos(4*pi*t/T + theta_t);
psi = 2*pi*rand;   % common random phase offset Psi, U[0, 2*pi) assumed
Y_t = cos(4*pi*t/T + theta_t + psi);
subplot(2,1,1);
plot(t, X_t);
xlabel('t');
ylabel('X(t)');
title('Problem 2: part a');
subplot(2,1,2);
plot(t, Y_t);
xlabel('t');
ylabel('Y(t)');
title('Problem 2: part c');
print hw7_p2;
The mean of Y(t) is obviously also zero. To find the autocorrelation function, consider that Ψ does not affect Θ(t), i.e., the transitions between the bits still occur at t = nT. The difference is that we have the same phase offset Ψ within each bit. Thus

RY(t1, t2) = E(Y(t1)Y(t2))

= ½ E[ cos( (4π/T)(t1 + t2) + Θ(t1) + Ψ + Θ(t2) + Ψ ) + cos( (4π/T)(t1 − t2) + Θ(t1) + Ψ − Θ(t2) − Ψ ) ].
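The two cosine terms come from the product-to-sum identity, applied (assuming, as this form of RY suggests, Y(t) = cos((4π/T)t + Θ(t) + Ψ)) with

\[
\cos A\cos B=\tfrac12\left[\cos(A+B)+\cos(A-B)\right],\qquad
A=\tfrac{4\pi}{T}t_1+\Theta(t_1)+\Psi,\quad
B=\tfrac{4\pi}{T}t_2+\Theta(t_2)+\Psi .
\]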
a. This is a straightforward calculation and we can use results from lecture notes. If k ≥ 0
then
P{Yn = k} = P{Xn = +k or Xn = −k} .
If k > 0 then P{Yn = k} = 2P{Xn = k}, while P{Yn = 0} = P{Xn = 0}. Thus
P{Yn = k} =
  (n choose (n+k)/2) (1/2)^{n−1}   if k > 0, n − k is even, and n − k ≥ 0,
  (n choose n/2) (1/2)^n           if k = 0 and n is even,
  0                                otherwise.
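For small n the formula is easy to verify by enumerating all paths; a brute-force MATLAB sketch (here n = 5, with the walk started at 0 as above):

% Brute-force check of the pmf of Y_n = |X_n| for n = 5.
n = 5;
steps = 2 * (dec2bin(0:2^n-1) - '0') - 1;   % all 2^n sequences of +/-1 steps
Y = abs(sum(steps, 2));                     % Y_n = |X_n| for every path
for k = 1:2:n                               % n is odd, so Y_5 is in {1, 3, 5}
  fprintf('k = %d: enumeration %.4f, formula %.4f\n', k, ...
    mean(Y == k), nchoosek(n, (n+k)/2) * (1/2)^(n-1));
end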
b. If Y20 = |X20| = 0 then there are only two sample paths with max_{1≤i<20} |Xi| = 10. These two paths are shown in Figure 2. Since the total number of sample paths is (20 choose 10) and all paths are equally likely,

P( max_{1≤i<20} Yi = 10 | Y20 = 0 ) = 2 / (20 choose 10) = 2/184756 = 1/92378.
[Figure 2: the two sample paths of Xn, 0 ≤ n ≤ 20, that reach ±10 and return to X20 = 0.]
7. Random walk with random start. (Bonus) Let X0 be a random variable with pmf

pX0(x) = 1/5 for x ∈ {−2, −1, 0, +1, +2}, and 0 otherwise.

Suppose that X0 is the starting position of a random walk {Xn : n ≥ 0} defined by

Xn = X0 + Σ_{i=1}^n Zi,

where Z1, Z2, . . . are i.i.d. steps with P{Zi = +1} = P{Zi = −1} = 1/2, independent of X0.
a. We must show that for every sequence of indexes i1 , i2 , . . . , in such that i1 < i2 < . . . < in ,
the increments Xi1 , Xi2 −Xi1 , . . . , Xin −Xin−1 are independent. This is true by the definition
of the {Xi } random process; each Xij − Xij−1 is the sum of a different set of Zi ’s, and the
Zi ’s are i.i.d. and independent of X0 , which appears only in the first increment.
b. Starting at an even number (0 or ±2) can be ruled out, since there is no way that the process
could then end up at X11 = 2. Using Bayes rule for the remaining possibilities, we get
P(X0 = −1 | X11 = 2) = P(X11 = 2 | X0 = −1) P(X0 = −1) / P(X11 = 2)

= [ (1/5) (11 choose 7) (1/2)^7 (1/2)^4 ] / [ (1/5) (11 choose 7) (1/2)^7 (1/2)^4 + (1/5) (11 choose 6) (1/2)^6 (1/2)^5 ]

= (11 choose 7) / [ (11 choose 7) + (11 choose 6) ]

= 1 / ( 1 + (11 choose 6)/(11 choose 7) ) = 1 / (1 + 7/5) = 5/12.

(To go from X0 = −1 to X11 = 2 the walk must take 7 up-steps and 4 down-steps; from X0 = +1, 6 up-steps and 5 down-steps.) Similarly, P(X0 = +1 | X11 = 2) = 7/12.
To summarize,

P(X0 = x | X11 = 2) =
  5/12   x = −1,
  7/12   x = +1,
  0      otherwise.
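The posterior can be double-checked numerically; a short sketch in base MATLAB:

% Numerical check of P(X_0 = x | X_11 = 2) for x in {-2,...,+2}.
x0 = -2:2;
prior = ones(1, 5) / 5;
like = zeros(1, 5);
for i = 1:5
  k = (11 + 2 - x0(i)) / 2;   % number of +1 steps needed to reach 2 from x0(i)
  if k == round(k)            % even starting points cannot reach 2 in 11 steps
    like(i) = nchoosek(11, k) * (1/2)^11;
  end
end
posterior = like .* prior / sum(like .* prior)   % [0 5/12 0 7/12 0]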
8. Markov processes. Let {Xn} be a discrete-time continuous-valued Markov random process, that is,

f(xn+1 | x1, x2, . . . , xn) = f(xn+1 | xn)

for every n ≥ 1 and for all sequences (x1, x2, . . . , xn+1).
a. Show that f(x1, . . . , xn) = f(x1)f(x2|x1) · · · f(xn|xn−1) = f(xn)f(xn−1|xn) · · · f(x1|x2).
b. Show that f(xn | x1, x2, . . . , xk) = f(xn | xk) for every k such that 1 ≤ k < n.
c. Show that f(xn+1, xn−1 | xn) = f(xn+1 | xn)f(xn−1 | xn), that is, the past and the future are independent given the present.
Solution (15 points)
a. We are given that f(xn+1 | x1, x2, . . . , xn) = f(xn+1 | xn). From the chain rule, in general,

f(x1, x2, . . . , xn) = f(x1)f(x2|x1)f(x3|x1, x2) · · · f(xn|x1, x2, . . . , xn−1).

Thus, by the definition of Markovity,

f(x1, x2, . . . , xn) = f(x1)f(x2|x1)f(x3|x2) · · · f(xn|xn−1).  (1)
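One way to obtain the reverse factorization in part a is to repeatedly apply Bayes' rule, f(x_{k+1}|x_k)f(x_k) = f(x_k|x_{k+1})f(x_{k+1}), to (1) from the left:

\begin{align*}
f(x_1)f(x_2|x_1)f(x_3|x_2)\cdots f(x_n|x_{n-1})
&= f(x_2)f(x_1|x_2)f(x_3|x_2)\cdots f(x_n|x_{n-1})\\
&= f(x_3)f(x_2|x_3)f(x_1|x_2)\cdots f(x_n|x_{n-1})\\
&\;\;\vdots\\
&= f(x_n)f(x_{n-1}|x_n)\cdots f(x_2|x_3)f(x_1|x_2).
\end{align*}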
Solution
By the weak law of large numbers, the sample mean Sn = (1/n) Σ_{i=1}^n Xi converges to the mean µ = E(X) in probability, i.e., P(|Sn − µ| > ε) → 0 as n → ∞ for every ε > 0. The limiting value of P(Sn < µ/2) depends on the sign of µ.
• If µ < 0 then P(Sn < µ/2) → 1. This is because P(|Sn − µ| > ε) → 0 as n → ∞ for every positive ε, i.e., P(|Sn − µ| < ε) → 1. Taking ε = |µ|/2, the event |Sn − µ| < |µ|/2 implies Sn < µ + |µ|/2 = µ/2, so P(Sn < µ/2) → 1.
• If µ > 0 then, taking ε = µ/2, P(|Sn − µ| < µ/2) → 1 as n → ∞, and |Sn − µ| < µ/2 implies Sn > µ/2; hence P(Sn < µ/2) → 0.
2. Convergence to a random variable. Consider a coin with random bias P ∼ FP(p). Flip the coin n times independently to generate X1, X2, . . . , Xn, where Xi = 1 if the i-th outcome is heads and Xi = 0 otherwise. Let Sn = (1/n) Σ_{i=1}^n Xi be the sample average. Show that Sn converges to P in mean square.
Solution
We show that Sn converges to P in mean square. Consider

E((Sn − P)²) = EP( E((Sn − P)² | P) )

= EP( Var(Sn | P) )  (since E(Sn | P) = P)

= EP( (1/n²) Var( Σ_{i=1}^n Xi | P ) )

= EP( (1/n²) · nP(1 − P) )  (since Σ_{i=1}^n Xi is Binom(n, P) given P)

= (1/n) ( E(P) − E(P²) ).

Therefore lim_{n→∞} E((Sn − P)²) = 0 and Sn converges to P in mean square.
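A numerical illustration (a sketch only; FP is not specified, so P ∼ U[0, 1] is assumed here, giving E(P) − E(P²) = 1/2 − 1/3 = 1/6):

% Monte Carlo check of E((S_n - P)^2) = (E(P) - E(P^2))/n, with P ~ U[0,1] assumed.
ntrials = 1e4;
for n = [10 100 1000]
  P = rand(ntrials, 1);                 % random bias for each trial
  S = mean(rand(ntrials, n) < P, 2);    % S_n = fraction of heads in n flips
  fprintf('n = %4d: estimate %.2e, theory %.2e\n', n, mean((S - P).^2), 1/(6*n));
end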
3. Polls. A population of 10^8 voters chooses between two candidates A and B. A fraction 0.5005
of the voters plan to vote for candidate A and the rest for candidate B. A fair poll with sample
size n is taken, i.e., the n samples are i.i.d. and done with replacement (same person may be
polled more than once). Find a good estimate of n such that the probability that candidate A
wins the poll is greater than 0.99.
Solution
Let U1, U2, . . . , Un be i.i.d. such that

Ui = +1 if person i polled votes for candidate A, and Ui = −1 otherwise.

Thus P{Ui = +1} = 0.5005, and the difference in the number of votes in the poll is Xn = Σ_{i=1}^n Ui. By the CLT, Xn is approximately N(nµ, nσ²), where µ = E(Ui) = 0.5005 − 0.4995 = 0.001 and σ² = Var(Ui) = 1 − µ² ≈ 1. Candidate A wins the poll if Xn > 0, so we need

P{Xn > 0} ≈ Φ( nµ/(σ√n) ) = Φ(0.001√n) > 0.99,

i.e., 0.001√n > Φ⁻¹(0.99) ≈ 2.33, which gives n > 2330² ≈ 5.4 × 10^6.
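A two-line numerical check of this estimate (a sketch using base MATLAB's erfinv in place of the Statistics Toolbox function norminv):

% Sample size such that Phi(0.001*sqrt(n)) > 0.99.
z = sqrt(2) * erfinv(2*0.99 - 1);   % Phi^{-1}(0.99), approximately 2.3263
n = ceil((z / 0.001)^2)             % approximately 5.41e6 sampled voters suffice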
4. Random binary waveform. In a digital communication channel, the symbol “1” is represented by the fixed-duration rectangular pulse

g(t) = 1 for 0 ≤ t < T, and g(t) = 0 otherwise,

and the symbol “0” is represented by −g(t). The data transmitted over the channel is represented by the random process

X(t) = Σ_{k=0}^∞ Ak g(t − kT), t ≥ 0,

where A0, A1, A2, . . . are i.i.d. with P{Ak = +1} = P{Ak = −1} = 1/2.

[Figure: a sample path of X(t), with the time axis marked in multiples of T.]
c. For t ≥ 0,

E(X(t)) = E( Σ_{k=0}^∞ Ak g(t − kT) ) = Σ_{k=0}^∞ g(t − kT) E(Ak) = 0.

To find the autocorrelation RX(t1, t2), we note again that X(t1) and X(t2) are dependent only if t1 and t2 fall within the same bit interval (indexed by k). Thus

RX(t1, t2) = E(X(t1)X(t2)) = Σ_{k=0}^∞ g(t1 − kT) g(t2 − kT) E(Ak²) = 1 if ⌊t1/T⌋ = ⌊t2/T⌋, and 0 otherwise.
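A quick Monte Carlo check of RX (a sketch with T = 1 and the Ak i.i.d. ±1 with equal probability, as above):

% Monte Carlo check of R_X(t1, t2) for the random binary waveform, T = 1.
T = 1; ntrials = 1e5; K = 3;              % K bit intervals cover t in [0, K*T)
A = 2 * (rand(ntrials, K) < 0.5) - 1;     % A_k = +/-1 with probability 1/2 each
x = @(t) A(:, floor(t/T) + 1);            % X(t) = A_k for t in [kT, (k+1)T)
fprintf('same bit: R(0.3,0.7) ~ %.3f, different bits: R(0.7,1.2) ~ %.3f\n', ...
  mean(x(0.3) .* x(0.7)), mean(x(0.7) .* x(1.2)));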
5. Moving-average process. Let {Xn : n ≥ 1} be a discrete-time white Gaussian noise process, that is, X1, X2, X3, . . . are i.i.d. random variables with Xn ∼ N(0, N). Consider the moving-average process {Yn : n ≥ 2} defined by

Yn = (2/3)X_{n−1} + (1/3)X_{n−2}, n ≥ 2.

Let X0 = 0. Find the mean and autocorrelation functions for the process Yn.
Since the Xn are zero mean, E(Yn) = (2/3) E(X_{n−1}) + (1/3) E(X_{n−2}) = 0 for all n ≥ 2. For m ≥ 3, n ≥ 3,

RY(m, n) = E(Ym Yn) = E( ((2/3)X_{n−1} + (1/3)X_{n−2}) ((2/3)X_{m−1} + (1/3)X_{m−2}) )

= (2/9) E(X_{n−2}²)                    if n − m = 1,
  (4/9) E(X_{n−1}²) + (1/9) E(X_{n−2}²)   if n = m,
  (2/9) E(X_{n−1}²)                    if m − n = 1,
  0                                     otherwise,

= (5/9) N   if m = n,
  (2/9) N   if |m − n| = 1,
  0         otherwise.
To summarize, if m ≥ 2, n ≥ 2,

RY(m, n) =
  (4/9) N   m − n = 0, m = 2,
  (5/9) N   m − n = 0, m ≠ 2,
  (2/9) N   |m − n| = 1,
  0         otherwise.
Note that {Yn : n ≥ 3} is a WSS Gaussian random process, hence it is also SSS.
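These values can be confirmed by simulation; a minimal sketch with N = 1:

% Simulation check of R_Y for the moving-average process, with N = 1.
N = 1; nmax = 1e6;
X = [0; sqrt(N) * randn(nmax, 1)];             % X(1) holds X_0 = 0
Y = (2/3) * X(2:end-1) + (1/3) * X(1:end-2);   % Y(k) holds Y_{k+1}, k = 1..nmax-1
% Y_2 is excluded from the variance estimate since R_Y(2,2) = 4N/9 (X_0 = 0).
fprintf('R_Y(n,n)   ~ %.3f (5N/9 = %.3f)\n', mean(Y(2:end).^2), 5*N/9);
fprintf('R_Y(n,n+1) ~ %.3f (2N/9 = %.3f)\n', mean(Y(1:end-1) .* Y(2:end)), 2*N/9);
fprintf('R_Y(n,n+2) ~ %.3f (0)\n', mean(Y(1:end-2) .* Y(3:end)));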