
Solved Problems — https://www.probabilitycourse.com/chapter11/11_2_7_solved_probs.php

11.2.7 Solved Problems


Problem 1

Consider the Markov chain with three states, S = {1, 2, 3}, that has the following transition matrix

        ⎡ 1/2   1/4   1/4 ⎤
    P = ⎢ 1/3   2/3    0  ⎥ .
        ⎣ 1/2   1/2    0  ⎦

a. Draw the state transition diagram for this chain.


b. If we know P(X1 = 1) = P(X1 = 2) = 1/4, find P(X1 = 3, X2 = 2, X3 = 1).
Solution
a. The state transition diagram is shown in Figure 11.6

Figure 11.6 - A state transition diagram.

b. First, we obtain

    P(X1 = 3) = 1 − P(X1 = 1) − P(X1 = 2)
              = 1 − 1/4 − 1/4
              = 1/2.
We can now write


    P(X1 = 3, X2 = 2, X3 = 1) = P(X1 = 3) · p32 · p21
                              = 1/2 · 1/2 · 1/3
                              = 1/12.
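The arithmetic in part (b) can be double-checked with a short script. This is a sketch added for verification only (it is not part of the original solution) and assumes Python's standard fractions module for exact arithmetic:

```python
from fractions import Fraction as F

# Transition matrix from Problem 1; rows/columns indexed by states 1, 2, 3.
P = [
    [F(1, 2), F(1, 4), F(1, 4)],
    [F(1, 3), F(2, 3), F(0)],
    [F(1, 2), F(1, 2), F(0)],
]

# P(X1 = 3) = 1 - P(X1 = 1) - P(X1 = 2)
p_x1_3 = 1 - F(1, 4) - F(1, 4)

# P(X1 = 3, X2 = 2, X3 = 1) = P(X1 = 3) * p32 * p21
prob = p_x1_3 * P[2][1] * P[1][0]
print(prob)  # 1/12
```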

Problem 2

Consider the Markov chain in Figure 11.17. There are two recurrent classes, R1 = {1, 2}, and
R2 = {5, 6, 7}. Assuming X0 = 3, find the probability that the chain gets absorbed in R1 .

Figure 11.17 - A state transition diagram.

Solution
Here, we can replace each recurrent class with one absorbing state. The resulting state diagram is
shown in Figure 11.18

Figure 11.18 - The state transition diagram in which we have replaced each recurrent
class with one absorbing state.

Now we can apply our standard methodology to find the probability of absorption in state R1. In
particular, define

ai = P (absorption in R1 |X0 = i),  for all i ∈ S.


By the above definition, we have aR1 = 1, and aR2 = 0. To find the unknown values of ai 's,


we can use the following equations

    ai = ∑k ak pik,  for i ∈ S.

We obtain

    a3 = 1/2 · aR1 + 1/2 · a4
       = 1/2 + 1/2 · a4,

    a4 = 1/4 · aR1 + 1/4 · a3 + 1/2 · aR2
       = 1/4 + 1/4 · a3.
Solving the above equations, we obtain

    a3 = 5/7,  a4 = 3/7.

Therefore, if X0 = 3, the chain will end up in class R1 with probability a3 = 5/7.
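The two absorption equations can be solved exactly in a few lines. This sketch is added for verification (not part of the original solution) and assumes Python's standard fractions module:

```python
from fractions import Fraction as F

# Unknowns a3, a4 satisfy:
#   a3 = 1/2 + 1/2 * a4
#   a4 = 1/4 + 1/4 * a3
# Substituting the second equation into the first:
#   a3 = 1/2 + 1/2 * (1/4 + 1/4 * a3)  =>  a3 * (1 - 1/8) = 5/8
a3 = F(5, 8) / (1 - F(1, 8))
a4 = F(1, 4) + F(1, 4) * a3
print(a3, a4)  # 5/7 3/7
```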

Problem 3

Consider the Markov chain of Problem 2. Again assume X0 = 3. We would like to find the expected time
(number of steps) until the chain gets absorbed in R1 or R2 . More specifically, let T be the absorption time,
i.e., the first time the chain visits a state in R1 or R2 . We would like to find E[T |X0 = 3] .

Solution
Here we follow our standard procedure for finding mean hitting times. Consider Figure 11.18.
Let T be the first time the chain visits R1 or R2 . For all i ∈ S , define

ti = E[T |X0 = i].


By the above definition, we have tR1 = tR2 = 0. To find t3 and t4 , we can use the following
equations

    ti = 1 + ∑k tk pik,  for i = 3, 4.

Specifically, we obtain


    t3 = 1 + 1/2 · tR1 + 1/2 · t4
       = 1 + 1/2 · t4,

    t4 = 1 + 1/4 · tR1 + 1/4 · t3 + 1/2 · tR2
       = 1 + 1/4 · t3.
Solving the above equations, we obtain

    t3 = 12/7,  t4 = 10/7.

Therefore, if X0 = 3, it will take on average 12/7 steps until the chain gets absorbed in R1 or R2.
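The mean-hitting-time equations can be checked the same way. A verification sketch (not part of the original solution, assuming Python's standard fractions module):

```python
from fractions import Fraction as F

# Unknowns t3, t4 satisfy:
#   t3 = 1 + 1/2 * t4
#   t4 = 1 + 1/4 * t3
# Substituting: t3 = 1 + 1/2 * (1 + 1/4 * t3)  =>  t3 * (1 - 1/8) = 3/2
t3 = F(3, 2) / (1 - F(1, 8))
t4 = 1 + F(1, 4) * t3
print(t3, t4)  # 12/7 10/7
```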

Problem 4

Consider the Markov chain shown in Figure 11.19. Assume X0 = 1, and let R be the first time that the
chain returns to state 1, i.e.,

R = min{n ≥ 1 : Xn = 1}.
Find E[R|X0 = 1].

Figure 11.19 - A state transition diagram.


Solution
In this question, we are asked to find the mean return time to state 1. Let r1 be the mean return
time to state 1, i.e., r1 = E[R|X0 = 1]. Then

    r1 = 1 + ∑k tk p1k,

where tk is the expected time until the chain hits state 1 given X0 = k . Specifically,
t1 = 0,
    tk = 1 + ∑j tj pkj,  for k ≠ 1.

So, let's first find tk 's. We obtain

    t2 = 1 + 1/3 · t1 + 2/3 · t3
       = 1 + 2/3 · t3,

    t3 = 1 + 1/2 · t3 + 1/2 · t1
       = 1 + 1/2 · t3.
Solving the above equations, we obtain

    t3 = 2,  t2 = 7/3.
Now, we can write

    r1 = 1 + 1/4 · t1 + 1/2 · t2 + 1/4 · t3
       = 1 + 1/4 · 0 + 1/2 · 7/3 + 1/4 · 2
       = 8/3.
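The full mean-return-time computation can be reproduced exactly. A verification sketch (not part of the original solution, assuming Python's standard fractions module):

```python
from fractions import Fraction as F

# Hitting times of state 1:
#   t1 = 0,  t3 = 1 + 1/2 * t3  =>  t3 = 2,  t2 = 1 + 2/3 * t3
t1 = F(0)
t3 = F(1) / (1 - F(1, 2))
t2 = 1 + F(2, 3) * t3

# Mean return time: r1 = 1 + 1/4 * t1 + 1/2 * t2 + 1/4 * t3
r1 = 1 + F(1, 4) * t1 + F(1, 2) * t2 + F(1, 4) * t3
print(r1)  # 8/3
```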

Problem 5

Consider the Markov chain shown in Figure 11.20.


Figure 11.20 - A state transition diagram.

a. Is this chain irreducible?


b. Is this chain aperiodic?
c. Find the stationary distribution for this chain.
d. Is the stationary distribution a limiting distribution for the chain?

Solution
a. The chain is irreducible since we can go from any state to any other state in a finite
number of steps.
b. The chain is aperiodic since there is a self-transition, i.e., p11 > 0 .
c. To find the stationary distribution, we need to solve

    π1 = 1/2 · π1 + 1/3 · π2 + 1/2 · π3,
    π2 = 1/4 · π1 + 1/2 · π3,
    π3 = 1/4 · π1 + 2/3 · π2,
    π1 + π2 + π3 = 1.
We find

    π1 ≈ 0.457,  π2 ≈ 0.257,  π3 ≈ 0.286.


d. The above stationary distribution is a limiting distribution for the chain because the chain
is irreducible and aperiodic.
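The stationary equations can be solved exactly by expressing π2 and π3 in terms of π1 and normalizing. A verification sketch (not part of the original solution, assuming Python's standard fractions module):

```python
from fractions import Fraction as F

# From pi3 = 1/4*pi1 + 2/3*pi2 and pi2 = 1/4*pi1 + 1/2*pi3:
#   pi3 = 1/4*pi1 + 2/3*(1/4*pi1 + 1/2*pi3)  =>  pi3 = (5/8) * pi1
pi3_ratio = (F(1, 4) + F(2, 3) * F(1, 4)) / (1 - F(2, 3) * F(1, 2))
pi2_ratio = F(1, 4) + F(1, 2) * pi3_ratio

# Normalize: pi1 * (1 + pi2_ratio + pi3_ratio) = 1
pi1 = 1 / (1 + pi2_ratio + pi3_ratio)
pi2 = pi2_ratio * pi1
pi3 = pi3_ratio * pi1
print(float(pi1), float(pi2), float(pi3))  # ~0.457, ~0.257, ~0.286
```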

Problem 6
Consider the Markov chain shown in Figure 11.21. Assume that 1/2 < p < 1. Does this chain have a


limiting distribution? For all i, j ∈ {0, 1, 2, ⋯}, find


    lim_{n→∞} P(Xn = j | X0 = i).

Figure 11.21 - A state transition diagram.

Solution
This chain is irreducible since all states communicate with each other. It is also aperiodic since it
includes a self-transition, p00 > 0. Let's write the equations for a stationary distribution. For
state 0, we can write

π0 = (1 − p)π0 + (1 − p)π1 ,
which results in
    π1 = p/(1 − p) · π0.
For state 1, we can write

π1 = pπ0 + (1 − p)π2
= (1 − p)π1 + (1 − p)π2 ,
which results in
    π2 = p/(1 − p) · π1.
Similarly, for any j ∈ {1, 2, ⋯}, we obtain
    πj = α πj−1,

where α = p/(1 − p). Note that since 1/2 < p < 1, we conclude that α > 1. We obtain

    πj = α^j π0,  for j = 1, 2, ⋯.
Finally, we must have



    1 = ∑_{j=0}^∞ πj
      = ∑_{j=0}^∞ α^j π0    (where α > 1)
      = ∞ · π0.
Therefore, the above equation cannot be satisfied if π0 > 0. If π0 = 0, then all πj 's must be
zero, so they cannot sum to 1. We conclude that there is no stationary distribution. This means
that either all states are transient, or all states are null recurrent. In either case, we have

    lim_{n→∞} P(Xn = j | X0 = i) = 0,  for all i, j.

We will see how to figure out if the states are transient or null recurrent in the End of Chapter
Problems (see Problem 15 in Section 11.5).
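The divergence argument above can be illustrated numerically: for any fixed p in (1/2, 1), the partial sums of ∑ α^j grow without bound, so no π0 > 0 can normalize them. A sketch with an example value p = 2/3 (the specific p is an assumption for illustration only; the original problem leaves p general):

```python
from fractions import Fraction as F

p = F(2, 3)          # example value with 1/2 < p < 1 (illustrative assumption)
alpha = p / (1 - p)  # here alpha = 2 > 1

# Partial sums of sum_{j=0}^{N} alpha**j keep growing,
# so the stationary equations cannot be normalized.
s = F(0)
partial_sums = []
for j in range(20):
    s += alpha ** j
    partial_sums.append(s)
print(alpha > 1)                 # True
print(partial_sums[-1])          # grows without bound as N increases
```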
