Midterm 2019 Sol


IEOR E3106 Stochastic Systems and Applications Fall 2019

Dr. A. B. Dieker

Midterm
You have 75 minutes
Include all intermediate steps of the computations in your answers.
No electronic devices (calculators, phones, watches) are allowed.
This problem sheet will not be graded.
This exam consists of 5 page(s).

Your name and UNI:

1. (48 points) You subscribe to a music streaming service, which plays songs based on your past ‘likes’
in conjunction with a proprietary music-fingerprint algorithm. In this problem, for simplicity, we
suppose that this service can only play nine songs. Your playlist evolves according to a Markov
chain {Xn }, with each song corresponding to a state. The transition diagram is personalized with
the latest machine learning techniques, and for your playlist it is depicted below:

[Transition diagram omitted: nine states with labeled transition probabilities. The transitions out of states 1, 7, 8, and 9 can be read off from parts (a) and (h) of the solution.]

In this problem you do not have to add explanations to your answers unless explicitly asked for.

(a) Write down the first row of the transition matrix. Label each column in your answer with the
corresponding state.
(b) Calculate P(X9 = 8, Xn ≠ 7 for n = 1, . . . , 8 | X0 = 1).
(c) Calculate P16^3.
(d) Is this Markov chain irreducible? Add a (very) brief explanation showing that you know what
irreducible means.
(e) Classify each state of this Markov chain as positive recurrent, null recurrent, or transient.
(f) List all aperiodic states of this Markov chain.
(g) Calculate P(X1 = 1 | X0 > 4) if X0 has a uniform distribution over the nine states.
(h) Define A = {2, 3, 4, 5, 6}. Write down a transition matrix that can be used to calculate
P(Xn ∉ A for n = 1, . . . , 20 | X0 = 1). You do not have to calculate that probability.

Solution.

(a) Your answer depends on the state you chose as your first state. If you order the states in
increasing order according to their label, then your answer would be:

        1    2    3    4    5    6    7    8    9
  1  (  0   0.1   0    0    0    0   0.2   0   0.7 ).

(b) Consider the class {1, 7, 8, 9}. This is a transient class, since the chain can escape from state 1 (to state 2) in a single step and never come back. To have X9 = 8, the chain needs to stay inside this class until time 9. On the other hand, going from state 1 to state 8 always takes an even number of steps, hence P(X9 = 8 | X0 = 1) = 0. The sought probability, which is at most P(X9 = 8 | X0 = 1), must therefore also be 0.
(c) We can go from state 1 to state 6 in three steps via three possible paths: 1 → 9 → 8 → 6, 1 → 7 → 8 → 6 and 1 → 2 → 6 → 6. The respective probabilities of these paths are P(1 → 9 → 8 → 6) = 0.7 × 0.5 × 0.4 = 0.140, P(1 → 7 → 8 → 6) = 0.2 × 0.1 × 0.4 = 0.008 and P(1 → 2 → 6 → 6) = 0.1 × 1 × 0.5 = 0.050. We thus find that P16^3 = 0.140 + 0.008 + 0.050 = 0.198.

(d) Not irreducible. For example, if you start at state 1, you can never reach 4 or 5 and vice versa;
hence in particular there exist states that do not communicate with each other. Another way
of saying this is that there are multiple classes, which contradicts irreducibility (meaning there
is just one class).
(e) States 2, 6, 4, 5 are positive recurrent and states 1, 3, 7, 8, 9 are transient.
(f) Aperiodicity is a class property. States 4 and 6 have self-loops, so their classes are aperiodic; since 5 is in the same class as 4 and 2 is in the same class as 6, states 2, 4, 5, and 6 are all aperiodic. States 1, 7, 8, 9 have period 2, so they are periodic, and state 3 is not aperiodic either. There are therefore exactly four aperiodic states: 2, 4, 5, 6.
(g) Since we can go to state 1 only from states 7 and 9, we find that P(X1 = 1 | X0 > 4) equals

  P(X1 = 1 | X0 = 7) P(X0 = 7 | X0 > 4) + P(X1 = 1 | X0 = 9) P(X0 = 9 | X0 > 4).

Since the distribution of X0 is uniform across all states, we conclude that P(X1 = 1 | X0 > 4) = 1/5 × 0.9 + 1/5 × 0.5 = 0.28.
(h) Define N = min{n ≥ 1 : Xn ∈ A}, and another process Wn as

  Wn = Xn, if n ≤ N,
  Wn = A,  if n > N.

This Markov chain has transition matrix

           1     7     8     9     A
  1   (   0    0.2    0    0.7   0.1 )
  7   (  0.9    0    0.1    0     0  )
  8   (   0    0.3    0    0.3   0.4 )
  9   (  0.5    0    0.5    0     0  )
  A   (   0     0     0     0     1  ).
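With this lumped matrix in hand, the probability the exam says you need not compute can be obtained as 1 − P(W20 = A | W0 = 1), i.e., from the 20th power of this matrix. A minimal sketch, using exactly the entries from part (h):

```python
import numpy as np

# Lumped chain from part (h): states 1, 7, 8, 9 and the absorbing state A.
states = [1, 7, 8, 9, "A"]
P = np.array([
    [0.0, 0.2, 0.0, 0.7, 0.1],  # from 1
    [0.9, 0.0, 0.1, 0.0, 0.0],  # from 7
    [0.0, 0.3, 0.0, 0.3, 0.4],  # from 8
    [0.5, 0.0, 0.5, 0.0, 0.0],  # from 9
    [0.0, 0.0, 0.0, 0.0, 1.0],  # A is absorbing
])

# P(X_n not in A for n = 1, ..., 20 | X_0 = 1) = 1 - P(W_20 = A | W_0 = 1).
P20 = np.linalg.matrix_power(P, 20)
prob_avoid_A = 1.0 - P20[0, states.index("A")]
print(prob_avoid_A)
```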

2. (21 points) The discrete random variables Y1 , Y2 , . . . are independent and have a common probability
mass function

  p(a) = 1/10,  a = −1
         2/10,  a = 0
         4/10,  a = 1
         3/10,  a = 2
         0,     otherwise.

Suppose N is a geometric random variable with success probability 1/2, independent of Y1, Y2, . . .. You may recall that E(N) = Var(N) = 2. Write S = Y1 + · · · + YN.

(a) Calculate E(Y1 ) and Var(Y1 ).


(b) Calculate the conditional probability mass function pS|N of S given N = 2.
(c) Calculate Var(S).

Solution.

(a) We find that E(Y1 ) = (−1) × p(−1) + 0 × p(0) + 1 × p(1) + 2 × p(2) = 9/10. Moreover,

  Var(Y1) = E(Y1^2) − [E(Y1)]^2
          = 1 × p(−1) + 0 × p(0) + 1 × p(1) + 4 × p(2) − (9/10)^2
          = 89/100.
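These two numbers follow directly from the pmf; a quick sketch:

```python
# pmf of Y1 from the problem statement.
pmf = {-1: 0.1, 0: 0.2, 1: 0.4, 2: 0.3}

mean = sum(a * p for a, p in pmf.items())        # E(Y1) = 9/10
second = sum(a * a * p for a, p in pmf.items())  # E(Y1^2) = 17/10
var = second - mean ** 2                         # Var(Y1) = 89/100

print(round(mean, 10), round(var, 10))  # 0.9 0.89
```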

(b) Due to independence of N and the Yi ’s, we need to find the pmf of Y1 + Y2 . This random
variable takes values in {−2, −1, 0, 1, 2, 3, 4}. Considering all 16 possible values of the pair
(y1 , y2 ), we find that

  pS|N (s|2) =
    1/100,                                  s = −2  [i.e., (y1, y2) = (−1, −1)]
    2/100 + 2/100 = 1/25,                   s = −1  [i.e., (y1, y2) ∈ {(−1, 0), (0, −1)}]
    4/100 + 4/100 + 4/100 = 3/25,           s = 0   [i.e., (y1, y2) ∈ {(−1, 1), (0, 0), (1, −1)}]
    3/100 + 8/100 + 8/100 + 3/100 = 11/50,  s = 1   [i.e., (y1, y2) ∈ {(−1, 2), (0, 1), (1, 0), (2, −1)}]
    16/100 + 6/100 + 6/100 = 7/25,          s = 2   [i.e., (y1, y2) ∈ {(1, 1), (0, 2), (2, 0)}]
    12/100 + 12/100 = 6/25,                 s = 3   [i.e., (y1, y2) ∈ {(1, 2), (2, 1)}]
    9/100,                                  s = 4   [i.e., (y1, y2) = (2, 2)].
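The same sixteen-pair enumeration can be done by brute force; a sketch:

```python
from itertools import product

# pmf of Y1; the conditional pmf of S given N = 2 is the pmf of Y1 + Y2.
pmf = {-1: 0.1, 0: 0.2, 1: 0.4, 2: 0.3}

conv = {}
for (y1, p1), (y2, p2) in product(pmf.items(), repeat=2):
    conv[y1 + y2] = conv.get(y1 + y2, 0.0) + p1 * p2

for s in sorted(conv):
    print(s, round(conv[s], 4))
```

The printed values match the fractions above: 1/100, 1/25, 3/25, 11/50, 7/25, 6/25, 9/100.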

(c) We apply the conditional variance formula, using the fact that all Yi ’s and N are mutually
independent. Since Var(S|N ) = N Var(Y1 ) and E(S|N ) = N E(Y1 ), we find that

Var(S) = E[Var(S|N )] + Var[E(S|N )] = Var(Y1 )E(N ) + (E(Y1 ))2 Var(N ).

Since we already know these quantities from previous parts, we conclude that
  Var(S) = 89/100 × 2 + (9/10)^2 × 2 = 17/5.
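The value 17/5 can be cross-checked without the conditional variance formula by building the pmf of S directly: condition on N and convolve. A sketch, truncating the geometric sum at n = 80 (which leaves negligible tail mass):

```python
# pmf of Y1 and the geometric weights P(N = n) = (1/2)^n, n = 1, 2, ...
pmf_Y = {-1: 0.1, 0: 0.2, 1: 0.4, 2: 0.3}

def convolve(f, g):
    """pmf of the sum of two independent variables with pmfs f and g."""
    h = {}
    for a, pa in f.items():
        for b, pb in g.items():
            h[a + b] = h.get(a + b, 0.0) + pa * pb
    return h

pmf_S = {}
conv_n = {0: 1.0}  # pmf of Y1 + ... + Yn, starting from the empty sum
for n in range(1, 80):
    conv_n = convolve(conv_n, pmf_Y)
    for s, p in conv_n.items():
        pmf_S[s] = pmf_S.get(s, 0.0) + 0.5 ** n * p

mean = sum(s * p for s, p in pmf_S.items())
var = sum(s * s * p for s, p in pmf_S.items()) - mean ** 2
print(round(mean, 6), round(var, 6))  # 1.8 and 3.4 = 17/5
```

The mean also confirms E(S) = E(N)E(Y1) = 2 × 9/10 = 9/5.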

3. (31 points; 7 points for (c)) Given some p ∈ (0, 1), {Xn } is a Markov chain on the state space
E = {0, 1} with transition probability matrix
             0       1
  P =  0  (  p     1 − p )
       1  ( 1 − p    p   ).

For each state i = 0, 1, we write Nn^i for the number of visits to state i up to time n, i.e.,

  Nn^i = Σ_{k=0}^{n} 1(Xk = i),

where 1 represents an indicator function. We define, for n ≥ 0, αn = E(Nn^0 | X0 = 0) and βn = E(Nn^0 | X0 = 1).
(a) Does this Markov chain have a unique stationary distribution π? If so, calculate it. If not,
explain why.
(b) Show that, for n ≥ 1, we have
αn = 1 + pαn−1 + (1 − p)βn−1 ,
βn = (1 − p)αn−1 + pβn−1 .
Your answer should include a justification of each step.
(c) Does the limit lim_{n→∞} (1/n) Nn^0 exist with probability 1? Explain why or why not.
(d) Show that, for n ≥ 0, we have
  αn = (n + 1)/2 + (1 − (2p − 1)^{n+1}) / (4(1 − p)),
  βn = (n + 1)/2 − (1 − (2p − 1)^{n+1}) / (4(1 − p)).
Solution.
(a) This finite Markov chain is irreducible since p ∈ (0, 1), so both states communicate with each other and the Markov chain has one positive recurrent class. We know that there is a unique π solving πj = Σ_i πi Pij in that case, and this corresponds to a stationary distribution. The matrix is doubly stochastic, so the solution is π0 = π1 = 1/2.
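The stationarity equations can also be solved numerically for any particular p; a small sketch with the (arbitrary) choice p = 0.3:

```python
import numpy as np

p = 0.3  # any value in (0, 1) gives the same answer
P = np.array([[p, 1 - p], [1 - p, p]])

# Solve pi P = pi together with pi_0 + pi_1 = 1 as a least-squares system.
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # approximately [0.5 0.5], independent of p
```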
(b) Suppose n ≥ 1. Then we have

  αn = E( Σ_{k=0}^{n} 1(Xk = 0) | X0 = 0 ) = 1 + E( Σ_{k=1}^{n} 1(Xk = 0) | X0 = 0 ).

We next condition on whether X1 = 1 or X1 = 0:

  E( Σ_{k=1}^{n} 1(Xk = 0) | X0 = 0 )
    = P00 E( Σ_{k=1}^{n} 1(Xk = 0) | X0 = 0, X1 = 0 ) + P01 E( Σ_{k=1}^{n} 1(Xk = 0) | X0 = 0, X1 = 1 )
    = P00 E( Σ_{k=0}^{n−1} 1(Xk = 0) | X0 = 0 ) + P01 E( Σ_{k=0}^{n−1} 1(Xk = 0) | X0 = 1 )
    = p αn−1 + (1 − p) βn−1,

where in the second-to-last step we have used the Markov property. This gives

  αn = 1 + p αn−1 + (1 − p) βn−1.

Similarly for βn, we observe that

  βn = E( Σ_{k=0}^{n} 1(Xk = 0) | X0 = 1 ) = E( Σ_{k=1}^{n} 1(Xk = 0) | X0 = 1 )
     = P10 E( Σ_{k=1}^{n} 1(Xk = 0) | X0 = 1, X1 = 0 ) + P11 E( Σ_{k=1}^{n} 1(Xk = 0) | X0 = 1, X1 = 1 )
     = (1 − p) αn−1 + p βn−1.

(c) There are a number of ways to answer this question. For instance, you can define the function r on the state space as follows: r(0) = 1, r(1) = 0. Then Nn^0 = Σ_{k=0}^{n} r(Xk). We know from class that lim_{n→∞} (1/n) Nn^0 = π0 r(0) + π1 r(1) = 1/2 with probability 1.
Another way to answer this problem is to say that π0 is defined to be this limit, and therefore from part (a) we get that π0 = 1/2.
(d) We use induction. We first verify the base case n = 0: the formulas give α0 = 1/2 + (1 − (2p − 1))/(4(1 − p)) = 1 and β0 = 0, matching α0 = E(1(X0 = 0) | X0 = 0) = 1 and β0 = E(1(X0 = 0) | X0 = 1) = 0. We next assume that

  αn−1 = n/2 + (1 − (2p − 1)^n) / (4(1 − p)),
  βn−1 = n/2 − (1 − (2p − 1)^n) / (4(1 − p)),

and substitute this in the recursion from part (b). Since p αn−1 + (1 − p) βn−1 = n/2 + (2p − 1)(1 − (2p − 1)^n)/(4(1 − p)) and 1/2 + (2p − 1)/(4(1 − p)) = 1/(4(1 − p)), this yields the required expression for αn; the computation for βn is analogous.
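As a sanity check on the algebra, the closed-form expressions can be tested against the recursion from part (b) for a sample value of p (here p = 0.7, an arbitrary choice):

```python
p = 0.7  # any p in (0, 1) works

def alpha(n):
    return (n + 1) / 2 + (1 - (2 * p - 1) ** (n + 1)) / (4 * (1 - p))

def beta(n):
    return (n + 1) / 2 - (1 - (2 * p - 1) ** (n + 1)) / (4 * (1 - p))

# Base case: alpha_0 = 1 and beta_0 = 0.
assert abs(alpha(0) - 1.0) < 1e-12 and abs(beta(0)) < 1e-12

# Induction step: the recursion from part (b) holds at every n checked.
for n in range(1, 50):
    assert abs(alpha(n) - (1 + p * alpha(n - 1) + (1 - p) * beta(n - 1))) < 1e-9
    assert abs(beta(n) - ((1 - p) * alpha(n - 1) + p * beta(n - 1))) < 1e-9

print("recursion verified")
```

Note also that αn + βn = n + 1 for every n, as it must be by symmetry of the two starting states.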
