Problem Solutions


On random walks

Random walk in dimension 1. Let Sn = x + Σ_{i=1}^n Ui, where x ∈ Z is the starting point of the random walk, and the Ui's are IID with P(Ui = +1) = P(Ui = −1) = 1/2.
1. Let N be fixed (the goal you want to attain). Compute the probability px that the random walk reaches 0 before N, starting from x. (Hint: show that px+1 − px = px − px−1 for x ∈ {1, . . . , N − 1}.)
Solution: px = (N − x)/N.
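
A quick numerical sanity check (an added sketch, not part of the original solutions; the values of N, x and the trial count are arbitrary): simulate the walk on {0, . . . , N} and compare the hitting frequency with (N − x)/N.

    import random

    # Estimate p_x = P(hit 0 before N, starting from x) by Monte Carlo.
    def hits_zero_first(x, N):
        while 0 < x < N:
            x += random.choice((-1, 1))
        return x == 0

    N, x, trials = 10, 3, 100_000
    estimate = sum(hits_zero_first(x, N) for _ in range(trials)) / trials
    print(estimate, (N - x) / N)  # both close to 0.7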

2. Use 1. to compute the probability that, starting from 0, you reach a > 0 before −b < 0.
Solution: from the previous question, with N = a + b and starting point x = b (shift everything by b): the probability is b/(a + b).

3. Use 1. to compute the probability that, starting from the origin, you reach a > 0 before returning to the origin.
Solution: from the previous questions: (1/2) · (1/a) = 1/(2a) (the first step must be up, then you must reach a before coming back to 0).

4.* Use 3. to show that the average number of visits to a > 0 before returning to the origin is 1 (hint: show that it is closely related to the expectation of some geometric random variable).
Solution: let Na be the number of visits to a before returning to 0.
Using question 3., one has P(Na = 0) = 1/2 + (1/2)(1 − 1/a) (call that probability q).
Note that once you have made a visit to a (that is, given Na ≥ 1, which happens with probability 1 − q), the number of further visits is geometric: from a, the probability of coming back to a before 0 (a "failure") is again 1/2 + (1/2)(1 − 1/a) = q, so each visit to a is the last one with probability 1 − q. Therefore, given Na ≥ 1, Na is a geometric random variable with success probability 1 − q, and E[Na | Na ≥ 1] = 1/(1 − q). Hence

    E[Na] = P(Na = 0) E[Na | Na = 0] + P(Na ≥ 1) E[Na | Na ≥ 1] = (1 − q) · 1/(1 − q) = 1.
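
A simulation sketch of this fact (an addition, with arbitrary parameters): estimate E[Na] by running excursions until the walk returns to 0. Excursions are almost surely finite but heavy-tailed, so the sketch caps their length; the truncation bias is negligible at this scale.

    import random

    # Count visits to a before the walk first returns to 0.
    def visits_before_return(a, cap=10**5):
        pos, visits = 0, 0
        for _ in range(cap):
            pos += random.choice((-1, 1))
            if pos == a:
                visits += 1
            if pos == 0:
                break
        return visits

    a, episodes = 3, 5_000
    mean = sum(visits_before_return(a) for _ in range(episodes)) / episodes
    print(mean)  # close to 1, whatever the value of a > 0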

5. Do the same problems 1., 2., 3., 4. when the random walk is "lazy": P(Ui = 0) = P(Ui = +1) = P(Ui = −1) = 1/3.
Solution: same answers for 1. and 2. (the lazy steps do not change which point is hit first); for 3. one gets (1/3) · (1/a); the same reasoning as in 4. gives that the average number of visits is 1.

6. Suppose that x = 0. Recall how to prove that P(S1 ≥ 0, . . . , S2n ≥ 0) = P(S1 ≠ 0, . . . , S2n ≠ 0) = P(S2n = 0).
Solution: first,

    P(S1 ≠ 0, . . . , S2n ≠ 0) = 2 P(S1 > 0, . . . , S2n > 0)   (by symmetry)
      = 2 P(S1 = 1, S2 ≥ 1, . . . , S2n ≥ 1) = 2 · (1/2) · P(S2 ≥ 1, . . . , S2n ≥ 1 | S1 = 1)
      = P(S1 ≥ 0, . . . , S2n−1 ≥ 0)   (vertical translation)
      = P(S1 ≥ 0, . . . , S2n ≥ 0),

the last equality because S2n−1 ≥ 0 forces S2n−1 ≥ 1 (parity matters), and then S2n ≥ 0 automatically. Second,

    P(S1 ≠ 0, . . . , S2n ≠ 0) = 2 P(S1 > 0, . . . , S2n > 0) = Σ_{k≥1} P(S2 > 0, . . . , S2n > 0, S2n = 2k | S1 = 1)
      = Σ_{k≥1} ( P(S2n = 2k | S1 = 1) − P(∃ m ≤ 2n s.t. Sm = 0, S2n = 2k | S1 = 1) )
      = Σ_{k≥1} ( P(S2n = 2k | S1 = 1) − P(S2n = −2k | S1 = 1) )   (by a reflection argument)
      = Σ_{k≥1} ( P(S2n = 2k | S1 = 1) − P(S2n = 2k | S1 = −1) )   (by symmetry)
      = Σ_{k≥1} ( P(S2n−1 = 2k − 1 | S0 = 0) − P(S2n−1 = 2k + 1 | S0 = 0) )   (by a vertical translation)
      = P(S2n−1 = 1 | S0 = 0)   (the sum telescopes)
      = P(S2n−1 = −1 | S0 = 0) = P(S2n = 0 | S0 = 0),

because P(S2n = 0) = (1/2) P(S2n−1 = −1) + (1/2) P(S2n−1 = 1).
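
These identities can also be checked exhaustively for small n (an added sketch): enumerate all 2^{2n} sign paths and compare the three probabilities.

    from itertools import product

    n = 4
    total = nonneg = nonzero = end_at_0 = 0
    for steps in product((-1, 1), repeat=2 * n):
        path, s = [], 0
        for u in steps:
            s += u
            path.append(s)
        total += 1
        nonneg += all(v >= 0 for v in path)
        nonzero += all(v != 0 for v in path)
        end_at_0 += path[-1] == 0
    print(nonneg / total, nonzero / total, end_at_0 / total)  # all equal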

Random walk in dimension 2. Let Zn = Σ_{i=1}^n Wi, where the Wi's are IID nearest-neighbor steps: P(Wi = (1, 0)) = P(Wi = (−1, 0)) = P(Wi = (0, 1)) = P(Wi = (0, −1)) = 1/4. We denote Wi = (Wi(1), Wi(2)), and Zn = (Xn, Yn).

7. Show that if you define Ui := Wi(1) + Wi(2) and Vi := Wi(1) − Wi(2), then An := Σ_{i=1}^n Ui and Bn := Σ_{i=1}^n Vi are independent one-dimensional simple random walks.
Solution: just verify that P(Ui = +1, Vi = −1) = P(Wi(1) = 0, Wi(2) = 1) = 1/4 (and similarly for the other possibilities (Ui, Vi) = (+1, +1), (−1, +1), (−1, −1)), and that P(Ui = +1) = P(Ui = −1) = P(Vi = +1) = P(Vi = −1) = 1/2: the joint law of (Ui, Vi) is the product of its marginals, so An and Bn are independent simple random walks.

8. Use An and Bn defined in the previous question to compute P(Z2n = (0, 0)).
Solution: Z2n = (0, 0) if and only if A2n = B2n = 0, so, because A2n and B2n are independent, P(Z2n = (0, 0)) = P(A2n = 0) P(B2n = 0) = ( C(2n, n) / 2^{2n} )², where C(2n, n) = (2n)!/(n!)² is the binomial coefficient.

9. Use questions 6., 7. and 8. to compute the probability that the random walk stays in the cone {x + y ≥ 0, x − y ≥ 0} up to time 2n.
Solution: Zk lies in the cone if and only if Ak ≥ 0 and Bk ≥ 0, so by independence

    P(Zk ∈ {x + y ≥ 0, x − y ≥ 0} for all 0 ≤ k ≤ 2n) = P(Ak ≥ 0 for all 0 ≤ k ≤ 2n) P(Bk ≥ 0 for all 0 ≤ k ≤ 2n)
      = P(A2n = 0) P(B2n = 0) = ( C(2n, n) / 2^{2n} )²,

using question 6. on random walks in dimension 1 for the second equality.
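
Both formulas can be verified by brute force for a small n (an added sketch): enumerate all 4^{2n} two-dimensional paths.

    from itertools import product
    from math import comb

    n = 3
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = at_origin = in_cone = 0
    for path in product(steps, repeat=2 * n):
        x = y = 0
        stayed = True
        for dx, dy in path:
            x, y = x + dx, y + dy
            stayed = stayed and x + y >= 0 and x - y >= 0
        total += 1
        at_origin += (x, y) == (0, 0)
        in_cone += stayed
    formula = (comb(2 * n, n) / 2 ** (2 * n)) ** 2
    print(at_origin / total, in_cone / total, formula)  # all three agree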

On Markov Chains
1. A taxicab driver moves between the airport A and two hotels B and C according to the following rules: if at the airport, go to one of the two hotels with equal probability; if at a hotel, go to the airport with probability 3/4, and to the other hotel with probability 1/4.
(a) Find the transition matrix.
(b) Suppose the driver starts at the airport. Find the probability for each of the three possible locations at time 2. Find the probability that he is at the airport at time 3.
(c) Find the stationary distribution, and apply the convergence theorem to find lim_{n→∞} P(at the airport at time n).
Solution: with 1 = airport, 2 = 1st hotel, 3 = 2nd hotel,

        ( 0    1/2  1/2 )
    P = ( 3/4  0    1/4 )
        ( 3/4  1/4  0   )

P(X2 = airport | X0 = airport) = 3/4, P(X2 = hotel 1 | X0 = airport) = P(X2 = hotel 2 | X0 = airport) = 1/8.
P(X3 = airport | X0 = airport) = 3/16.
Stationary distribution: π(airport) = 3/7, π(hotel 1) = π(hotel 2) = 2/7. Since the chain is irreducible and aperiodic, lim_{n→∞} P(at the airport at time n) = 3/7.
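
The computations in (b) and (c) are easy to confirm numerically (an added sketch using numpy):

    import numpy as np

    P = np.array([[0, 1/2, 1/2],
                  [3/4, 0, 1/4],
                  [3/4, 1/4, 0]])
    print(np.linalg.matrix_power(P, 2)[0])     # [3/4, 1/8, 1/8]
    print(np.linalg.matrix_power(P, 3)[0, 0])  # 3/16
    # stationary distribution = left eigenvector for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    print(pi)  # [3/7, 2/7, 2/7]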

2. Consider the following transition matrices. Classify the states, and describe the long-term behavior of the chain (and justify your reasoning).

        ( 0   0   0   0   1  )        ( .4  .3  .3  0   0  )
        ( 0   .2  0   0   .8 )        ( 0   .5  0   .5  0  )
    P = ( .1  .2  .3  .4  0  )    P = ( .5  0   .5  0   0  )
        ( 0   .6  0   .4  0  )        ( 0   .5  0   .5  0  )
        ( .3  0   0   0   .7 )        ( 0   .3  0   .3  .4 )

        ( 2/3  0    0    1/3  0    0 )
        ( 0    1/2  0    0    1/2  0 )
    P = ( 0    0    1/3  1/3  1/3  0 )
        ( 1/2  0    0    1/2  0    0 )
        ( 0    1/2  0    0    1/2  0 )
        ( 1/2  0    0    1/2  0    0 )
Solution:
(1) Communication classes: {1, 5} (positive recurrent); the states 2, 3 and 4 are transient (from each of them the chain eventually leaves {2, 3, 4} for {1, 5} and never comes back). In the long run, we end up in the two states {1, 5}, with equilibrium distribution π(1) = 3/13, π(5) = 10/13.
(2) Communication classes: {2, 4} (positive recurrent); {1, 3} (transient, because it communicates with the exterior); {5} (transient). In the long run, we end up in the two states {2, 4}, with equilibrium distribution π(2) = π(4) = 1/2.
(3) Communication classes: {1, 4} (positive recurrent), {2, 5} (positive recurrent), {3} and {6} (transient). In the long run, we end up in one of the two classes C1 := {1, 4} or C2 := {2, 5}. Starting from 1, 4 or 6 one ends up in C1; starting from 2 or 5 one ends up in C2; starting from 3, one ends up in C1 with probability 1/2 and in C2 with probability 1/2.
Stationary distribution in C1: π(1) = 3/5, π(4) = 2/5.
Stationary distribution in C2: π(2) = 1/2, π(5) = 1/2.
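
A brute-force confirmation for the first matrix (an added sketch): a large power of P has every row close to the predicted long-run distribution.

    import numpy as np

    P = np.array([[0, 0, 0, 0, 1],
                  [0, .2, 0, 0, .8],
                  [.1, .2, .3, .4, 0],
                  [0, .6, 0, .4, 0],
                  [.3, 0, 0, 0, .7]])
    print(np.linalg.matrix_power(P, 200).round(4))
    print(3 / 13, 10 / 13)  # every row is (3/13, 0, 0, 0, 10/13)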

3. Six children (A, B, C, D, E, F) play catch. If A has the ball, then he/she is equally likely to throw the ball to B, D, E or F. If B has the ball, then he/she is equally likely to throw the ball to A, C, E or F. If E has the ball, then he/she is equally likely to throw the ball to A, B, D or F. If either C or F gets the ball, they keep throwing it at each other. If D gets the ball, he/she runs away with it.
(a) Find the transition matrix, and classify the states.
(b) Suppose that A has the ball at the beginning of the game. What is the probability that D ends up with the ball?
Solution: Communication classes: {C, F} (positive recurrent), {D} (positive recurrent), {A, B, E} (transient).
Call pX the probability that D ends up with the ball starting from X (X = A, B, C, D, E or F). Clearly, pD = 1 and pC = pF = 0. Also, pA = (1/4)pB + (1/4)pD + (1/4)pE + (1/4)pF, pB = (1/4)pA + (1/4)pC + (1/4)pE + (1/4)pF, and similarly for pE. One finds pA = pE = 2/5, pB = 1/5.
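
The system for (pA, pB, pE) is small enough to solve by hand, but here is an added sketch that solves it with numpy: write p = Mp + c, where M contains the transition probabilities among the transient states {A, B, E} and c the one-step probabilities of throwing directly to D.

    import numpy as np

    # Order: A, B, E. A -> B,D,E,F; B -> A,C,E,F; E -> A,B,D,F (prob 1/4 each).
    M = np.array([[0, 1/4, 1/4],
                  [1/4, 0, 1/4],
                  [1/4, 1/4, 0]])
    c = np.array([1/4, 0, 1/4])  # direct throws to D
    pA, pB, pE = np.linalg.solve(np.eye(3) - M, c)
    print(pA, pB, pE)  # 0.4, 0.2, 0.4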
4. Find the stationary distribution(s) for the Markov Chains with transition matrices

        ( .4  .6  0  )        ( .4  .6  0   0  )
    P = ( .2  .4  .4 )    P = ( 0   .7  .3  0  )
        ( 0   .3  .7 )        ( .1  0   .4  .5 )
                              ( .5  0   0   .5 )

Solution:
(1) Unique stationary distribution (because the chain is irreducible and positive recurrent): π = (1/8, 3/8, 1/2).
(2) Unique stationary distribution (because the chain is irreducible and positive recurrent): π = (1/5, 2/5, 1/5, 1/5).
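
An added sketch: a stationary distribution can be computed by solving πP = π together with Σ_i π(i) = 1, here for the second matrix.

    import numpy as np

    P = np.array([[.4, .6, 0, 0],
                  [0, .7, .3, 0],
                  [.1, 0, .4, .5],
                  [.5, 0, 0, .5]])
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # pi (P - I) = 0 and sum = 1
    b = np.zeros(n + 1); b[-1] = 1
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)  # [0.2, 0.4, 0.2, 0.2]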

5. Consider the Markov Chain with transition matrix

        ( 0    0    3/5  2/5 )
    P = ( 0    0    1/5  4/5 )
        ( 1/4  3/4  0    0   )
        ( 1/2  1/2  0    0   )

(a) Compute P². (b) Find the stationary distributions of P and P². (c) Find the limit of p^{2n}(x, x) as n → ∞.
Solution: (a)

         ( 7/20  13/20  0     0    )
    P² = ( 9/20  11/20  0     0    )
         ( 0     0      3/10  7/10 )
         ( 0     0      2/5   3/5  )

(b) P² is not irreducible. Stationary distribution on {1, 2}: π̃(1) = 9/22, π̃(2) = 13/22 (and π̃(3) = π̃(4) = 0). Stationary distribution on {3, 4}: π̄(3) = 4/11, π̄(4) = 7/11 (and π̄(1) = π̄(2) = 0). Any convex combination θπ̃ + (1 − θ)π̄ with θ ∈ [0, 1] is a stationary distribution for P².
P is irreducible: it has a unique stationary distribution π, which must also be a stationary distribution for P², thus of the form θπ̃ + (1 − θ)π̄. From the first column of πP = π, we must have θπ̃(1) = (1/4)(1 − θ)π̄(3) + (1/2)(1 − θ)π̄(4), so that θ = 1/2, and π = (9/44, 13/44, 4/22, 7/22).
(c) lim_{n→∞} p^{2n}(x, x) = 9/22 if x = 1, 13/22 if x = 2, 4/11 if x = 3, 7/11 if x = 4.
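
An added numerical check of (a) and (c):

    import numpy as np

    P = np.array([[0, 0, 3/5, 2/5],
                  [0, 0, 1/5, 4/5],
                  [1/4, 3/4, 0, 0],
                  [1/2, 1/2, 0, 0]])
    print(np.linalg.matrix_power(P, 2))
    print(np.diag(np.linalg.matrix_power(P, 200)))  # -> (9/22, 13/22, 4/11, 7/11)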

6. Folk wisdom holds that in Ithaca in the summer, it rains 1/3 of the time, but a rainy day is followed by a second one with probability 1/2. Suppose that Ithaca weather is a Markov chain. What is its transition matrix?
Solution: if rain = 1 and sunny = 2, the stationary distribution is π = (1/3, 2/3), and we are given p(1, 1) = 1/2, p(1, 2) = 1/2. To be stationary, π must verify π(1) = p(1, 1)π(1) + p(2, 1)π(2), so that p(2, 1) = 1/4, and p(2, 2) = 3/4.
 
7. Consider a Markov chain with transition matrix

    P = ( 1−a   a  )
        (  b   1−b )

Use the Markov property to show that

    P(Xn+1 = 1) − b/(a+b) = (1 − a − b) ( P(Xn = 1) − b/(a+b) ).

Deduce that if 0 < a + b < 2, then P(Xn = 1) converges exponentially fast to its limiting value b/(a + b).
Solution: P(Xn+1 = 1) = P(Xn = 1)(1 − a) + P(Xn = 0) b = b + (1 − a − b) P(Xn = 1), which gives the equation needed. Iterating, one gets P(Xn+1 = 1) − b/(a+b) = (1 − a − b)^{n+1} ( P(X0 = 1) − b/(a+b) ); since |1 − a − b| < 1 when 0 < a + b < 2, this converges exponentially fast to 0.
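
An added sketch (with arbitrary values of a and b) that makes the exponential convergence visible: the error is multiplied by 1 − a − b at each step.

    a, b = 0.3, 0.5
    p, target = 1.0, b / (a + b)  # start from P(X0 = 1) = 1
    for n in range(10):
        p = b + (1 - a - b) * p
        print(n + 1, p - target)  # shrinks by a factor 0.2 each step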

8. Let Nn be the number of heads observed in the first n flips of a fair coin, and let Xn = Nn mod 5. Use the Markov Chain Xn to find lim_{n→∞} P(Nn is a multiple of 5).
Solution: the state space of Xn is {0, 1, 2, 3, 4}, with transition probabilities p(i, i) = 1/2 and p(i, i + 1 mod 5) = 1/2. The chain is irreducible, positive recurrent, and aperiodic (since p(i, i) > 0), with stationary distribution π = (1/5, 1/5, 1/5, 1/5, 1/5). Thus the limit theorem tells us that lim_{n→∞} P(Nn is a multiple of 5) = lim_{n→∞} P(Xn = 0) = π(0) = 1/5.
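
An added simulation sketch of the limit:

    import random

    random.seed(0)
    x, hits, n = 0, 0, 200_000
    for _ in range(n):
        x = (x + random.randint(0, 1)) % 5  # Xn = Nn mod 5
        hits += (x == 0)
    print(hits / n)  # close to 1/5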

9. A basketball player makes a shot with the following probabilities: 1/2 if he has missed his last two shots, 2/3 if he has hit exactly one of his last two shots, 3/4 if he has hit both of his last two shots. Formulate a Markov Chain to model his shooting, and compute the limiting fraction of the time he hits a shot.
Solution: call Xn the indicator that he hits at time n, and set Yn = (Xn−1, Xn). There are 4 states: (0, 0), (1, 0), (0, 1), (1, 1). With (0, 0) as the first state, (1, 0) the second, (0, 1) the third and (1, 1) the fourth, the transition matrix is

        ( 1/2  0    1/2  0   )
    P = ( 1/3  0    2/3  0   )
        ( 0    1/3  0    2/3 )
        ( 0    1/4  0    3/4 )

This is an irreducible, positive recurrent, aperiodic chain. Stationary distribution: π(0, 0) = 1/8, π(1, 0) = π(0, 1) = 3/16, π(1, 1) = 1/2. The long-run proportion of hits is the limit of the probability that he hits at a given time, i.e. lim_{n→∞} P(Xn = 1) = π(0, 1) + π(1, 1) = 11/16.
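
An added check with numpy (states ordered (0,0), (1,0), (0,1), (1,1)):

    import numpy as np

    P = np.array([[1/2, 0, 1/2, 0],
                  [1/3, 0, 2/3, 0],
                  [0, 1/3, 0, 2/3],
                  [0, 1/4, 0, 3/4]])
    pi = np.linalg.matrix_power(P, 200)[0]  # rows converge to pi
    print(pi)              # [1/8, 3/16, 3/16, 1/2]
    print(pi[2] + pi[3])   # long-run hit frequency 11/16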

10. Ehrenfest chain. Consider N particles, divided into two urns (which communicate through a small hole). Let Xn be the number of particles in the left urn. The transition probabilities are given by the following rule: at each step, take one particle uniformly at random among the N particles, and move it to the other urn (from left to right or right to left, depending on where the chosen particle is).
(a) Give the transition probabilities of this Markov Chain.
(b) Let µn := E[Xn | X0 = x]. Show that µn+1 = 1 + (1 − 2/N) µn.
(c) Show by induction that µn = N/2 + (1 − 2/N)^n (x − N/2).
Solution: (a) p(j, j + 1) = (N − j)/N if 0 ≤ j ≤ N − 1, and p(j, j − 1) = j/N if 1 ≤ j ≤ N. (b) Use that E[Xn+1 | Xn = k] = 1 + k(1 − 2/N) for all k ∈ {0, . . . , N}. (c) The induction comes easily from part (b): µn − N/2 = (1 − 2/N)(µn−1 − N/2).
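
An added sketch: evolve the exact distribution of the chain and compare E[Xn | X0 = x] with the closed form from (c), for an arbitrary choice of N and x.

    import numpy as np

    N, x = 10, 2
    P = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        if j < N: P[j, j + 1] = (N - j) / N
        if j > 0: P[j, j - 1] = j / N
    dist = np.zeros(N + 1); dist[x] = 1.0
    for n in range(1, 6):
        dist = dist @ P
        print(dist @ np.arange(N + 1), N/2 + (1 - 2/N)**n * (x - N/2))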

11. Random walk on a circle. Consider the numbers 1, 2, . . . , N written around a ring. Consider a Markov Chain that at each step jumps with equal probability to one of the two adjacent numbers.
(a) What is the expected number of steps that Xn will take to return to its starting position?
(b) What is the probability that Xn will visit all the other states before returning to its starting position?
Solution: (a) The stationary distribution is π(i) = 1/N = 1/µi. Thus µi = E[Ti | X0 = i] = N.
(b) After one step, this corresponds to the probability that a one-dimensional random walk hits N − 1 before 0, starting from 1. We have already seen that this probability is 1/(N − 1).

12. Knight's random walk. We represent a chessboard as S = {(i, j) : 1 ≤ i, j ≤ 8}. A knight can move from (i, j) to any of the eight squares (i + 2, j + 1), (i + 2, j − 1), (i + 1, j + 2), (i + 1, j − 2), (i − 1, j + 2), (i − 1, j − 2), (i − 2, j + 1), (i − 2, j − 1), provided of course that they are on the chessboard. Let Xn be the sequence of squares that results if we pick one of the knight's legal moves at random. Find the stationary distribution, and deduce the expected number of moves to return to the corner (1, 1), when starting at that corner.
Solution: (see what was done in class) the stationary distribution is π(x) = deg(x) / Σ_{y∈S} deg(y) = 1/µx, where deg(x) is the number of moves possible from x. Here Σ_{y∈S} deg(y) = 336 and deg((1, 1)) = 2, so the expected number of moves to return to the corner (1, 1) is 336/2 = 168.
13. Queen's random walk. Same questions as in 12., but for the Queen, which can move any number of squares horizontally, vertically or diagonally.
Solution: same type of stationary distribution, except that Σ_{y∈S} deg(y) = 1456, and for the corner deg((1, 1)) = 21. The expected number of moves to return to the corner is 1456/21 ≈ 69.33.
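
Both degree counts are easy to verify by enumeration (an added sketch; it prints the normalizing constant Σ deg and the expected return time to the corner for each piece):

    def on_board(i, j):
        return 1 <= i <= 8 and 1 <= j <= 8

    knight_steps = [(2, 1), (2, -1), (1, 2), (1, -2),
                    (-1, 2), (-1, -2), (-2, 1), (-2, -1)]
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1),
                  (1, 1), (1, -1), (-1, 1), (-1, -1)]

    def knight_deg(i, j):
        return sum(on_board(i + di, j + dj) for di, dj in knight_steps)

    def queen_deg(i, j):
        deg = 0
        for di, dj in directions:
            k = 1
            while on_board(i + k * di, j + k * dj):
                deg += 1
                k += 1
        return deg

    for deg in (knight_deg, queen_deg):
        total = sum(deg(i, j) for i in range(1, 9) for j in range(1, 9))
        print(total, total / deg(1, 1))  # knight: 336, 168; queen: 1456, ~69.33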
14. If we have a deck of 52 cards, then its state can be described by a sequence of numbers that gives the cards we find as we examine the deck from the top down. Since all the cards are distinct, this list is a permutation of the set {1, 2, . . . , 52}, i.e., a sequence in which each number is listed exactly once. There are 52! possible permutations. Consider the following shuffling procedures:
(a) pick the card from the top, and put it uniformly at random in the deck (on the top or at the bottom is acceptable!);
(b) cut the deck into two parts, choose one part at random, and put it on top of the other.
In both cases, tell whether, if one repeats the algorithm, the deck becomes perfectly shuffled in the limit, in the sense that all 52! possibilities are equally likely.
Solution: it suffices to verify that the chain is irreducible and aperiodic, and that the stationary distribution is the uniform distribution π(σ) = 1/52!; the limit theorem then does the rest.
(a) Irreducible: start from the cards in order, and try to reach an arbitrary permutation σ. First put the right card in the last position (position 52): make it go up by putting the cards one by one at the bottom, and when it is on top, put it at the bottom. Then do not touch that card anymore, and put the right card in position 51 by the same process: make the card you want go up in the pile by putting the cards one by one in position 51, and stop once the right card is in position 51. Keep doing that for all positions. One can go back to the ordered deck by the same type of procedure.
Aperiodic, because you can stay at the same permutation in one step, by putting the top card back on top.
Stationary distribution: all possible moves are equally likely, so p(σ, σ′) = 1/52 whenever there is a move transforming σ into σ′. Then

    Σ_σ (1/52!) p(σ, σ′) = (1/52!) · (1/52) · #{σ : σ → σ′ in one step} = 1/52!,

because there are exactly 52 permutations σ which can give σ′ by moving the top card into the deck (just choose in σ′ which card came from the top). So the uniform distribution is stationary, and in the limit the deck is perfectly shuffled.
(b) NOT irreducible. The shuffle preserves the cyclic order: if i is followed by j, then after any number of shuffles, i is still followed by j (cyclically). From one permutation you can therefore reach only its 52 cyclic shifts, so there are 51! communication classes, and the deck never gets uniformly shuffled.
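
An added sketch on a toy 4-card deck, where everything can be checked exactly: the top-to-random chain converges to the uniform distribution on 4! = 24 permutations, while the cut shuffle only ever reaches the 4 cyclic shifts of the starting order.

    import numpy as np
    from itertools import permutations

    perms = list(permutations(range(4)))
    index = {p: i for i, p in enumerate(perms)}

    def top_to_random(p):
        # insert the top card at each of the 4 positions, prob 1/4 each
        return [p[1:k + 1] + (p[0],) + p[k + 1:] for k in range(4)]

    P = np.zeros((24, 24))
    for p in perms:
        for q in top_to_random(p):
            P[index[p], index[q]] += 1 / 4
    print(np.linalg.matrix_power(P, 50)[0].round(4))  # all entries ~ 1/24

    def cuts(p):
        return {p[k:] + p[:k] for k in range(4)}  # cutting = cyclic shift
    print(sorted(cuts(tuple(range(4)))))  # only 4 reachable orders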
