Using The Law of Total Probability With Recursion Return Time

This document discusses the application of the Law of Total Probability in analyzing Markov chains, specifically focusing on absorption probabilities, mean hitting times, and mean return times. It provides examples and formulas to calculate these probabilities and times, emphasizing the importance of understanding the concepts rather than memorizing formulas. The document also highlights the classification of states within Markov chains and the recursive nature of the calculations involved.


1/31/25, 10:53 AM Using the Law of Total Probability with Recursion

11.2.5 Using the Law of Total Probability with Recursion


A very useful technique in the analysis of Markov chains is the law of total probability. In fact, we have already used it when finding n-step transition probabilities. In this section, we use this technique to find absorption probabilities, mean hitting times, and mean return times. We introduce the technique with an example and then give the general formulas. Focus on understanding the main idea; then you will not need to memorize any formulas. Let's consider the Markov chain shown in Figure 11.12.

Figure 11.12 - A state transition diagram.

The state transition matrix of this Markov chain is
$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ \frac{1}{3} & 0 & \frac{2}{3} & 0 \\ 0 & \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

Before going any further, let's identify the classes in this Markov chain.

Example 11.9
For the Markov chain of Figure 11.12: How many classes are there? For each class, state whether it is recurrent or transient.

Solution

https://www.probabilitycourse.com/chapter11/11_2_5_using_the_law_of_total_probability_with_recursion.php 1/9

There are three classes. Class one consists of a single state, state 0, which is recurrent. Class two consists of states 1 and 2, both of which are transient. Finally, class three consists of a single state, state 3, which is recurrent.

Note that states 0 and 3 have the following property: once you enter those states, you never leave them. For this reason, we call them absorbing
states. For our example here, there are two absorbing states. The process will eventually get absorbed in one of them. The first question that we
would like to address deals with finding absorption probabilities.
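As a sanity check, the class structure can also be computed programmatically. The sketch below is my own illustration, not part of the text: it finds the communicating classes of the Figure 11.12 chain via reachability, and uses the fact that in a finite chain a class is recurrent exactly when the chain cannot leave it.

```python
# Communicating classes of the Figure 11.12 chain via reachability.
# For a finite chain, a class is recurrent iff it is closed.
P = [
    [1,   0,   0,   0  ],   # state 0 (absorbing)
    [1/3, 0,   2/3, 0  ],   # state 1
    [0,   1/2, 0,   1/2],   # state 2
    [0,   0,   0,   1  ],   # state 3 (absorbing)
]

def reachable(P, i):
    """States reachable from i (including i itself) along positive-probability paths."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for k, p in enumerate(P[s]):
            if p > 0 and k not in seen:
                seen.add(k)
                stack.append(k)
    return seen

n = len(P)
reach = [reachable(P, i) for i in range(n)]
# States i and j communicate when each is reachable from the other.
classes = {frozenset(j for j in range(n) if i in reach[j] and j in reach[i])
           for i in range(n)}
# A class is recurrent iff nothing outside the class is reachable from it.
recurrent = {c for c in classes if all(reach[i] <= c for i in c)}
print(sorted(map(sorted, classes)))    # [[0], [1, 2], [3]]
print(sorted(map(sorted, recurrent)))  # [[0], [3]]
```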

Absorption Probabilities:

Consider the Markov chain in Figure 11.12. Let's define $a_i$ as the probability of absorption in state 0 if we start from state $i$. More specifically,
$$a_0 = P(\text{absorption in } 0 \mid X_0 = 0),$$
$$a_1 = P(\text{absorption in } 0 \mid X_0 = 1),$$
$$a_2 = P(\text{absorption in } 0 \mid X_0 = 2),$$
$$a_3 = P(\text{absorption in } 0 \mid X_0 = 3).$$

By the above definition, we have $a_0 = 1$ and $a_3 = 0$. To find the values of $a_1$ and $a_2$, we apply the law of total probability with recursion. The main idea is the following: if $X_n = i$, then the next state is $X_{n+1} = k$ with probability $p_{ik}$. Thus, we can write
$$a_i = \sum_{k} a_k p_{ik}, \quad \text{for } i = 0, 1, 2, 3. \tag{11.6}$$

Solving the above equations will give us the values of $a_1$ and $a_2$. More specifically, using Equation 11.6, we obtain
$$a_0 = a_0,$$
$$a_1 = \frac{1}{3} a_0 + \frac{2}{3} a_2,$$
$$a_2 = \frac{1}{2} a_1 + \frac{1}{2} a_3,$$
$$a_3 = a_3.$$

We also know $a_0 = 1$ and $a_3 = 0$. Solving for $a_1$ and $a_2$, we obtain
$$a_1 = \frac{1}{2}, \qquad a_2 = \frac{1}{4}.$$
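These two values are easy to double-check numerically. A minimal sketch (assuming NumPy is available): substitute $a_0 = 1$ and $a_3 = 0$ into the two recursions and solve the resulting 2×2 linear system.

```python
import numpy as np

# Equations from above, with a0 = 1 and a3 = 0 substituted:
#   a1 = (1/3)*1 + (2/3)*a2   ->   1*a1 - (2/3)*a2 = 1/3
#   a2 = (1/2)*a1 + (1/2)*0   ->  -(1/2)*a1 + 1*a2 = 0
A = np.array([[1.0, -2/3],
              [-1/2, 1.0]])
c = np.array([1/3, 0.0])
a1, a2 = np.linalg.solve(A, c)
print(a1, a2)  # approx 0.5 and 0.25
```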

Let's now define $b_i$ as the probability of absorption in state 3 if we start from state $i$. Since the chain is eventually absorbed in either state 0 or state 3, we have $a_i + b_i = 1$, so we conclude
$$b_0 = 0, \qquad b_1 = \frac{1}{2}, \qquad b_2 = \frac{3}{4}, \qquad b_3 = 1.$$

Nevertheless, for practice, let's find the $b_i$'s directly.

Example 11.10
Consider the Markov chain in Figure 11.12. Let's define $b_i$ as the probability of absorption in state 3 if we start from state $i$. Use the above procedure to obtain $b_i$ for $i = 0, 1, 2, 3$.

Solution
From the definition of $b_i$ and the Markov chain graph, we have $b_0 = 0$ and $b_3 = 1$. Writing Equation 11.6 for $i = 1, 2$, we obtain
$$b_1 = \frac{1}{3} b_0 + \frac{2}{3} b_2 = \frac{2}{3} b_2,$$
$$b_2 = \frac{1}{2} b_1 + \frac{1}{2} b_3 = \frac{1}{2} b_1 + \frac{1}{2}.$$

Solving the above equations, we obtain
$$b_1 = \frac{1}{2}, \qquad b_2 = \frac{3}{4}.$$


Absorption Probabilities

Consider a finite Markov chain $\{X_n, n = 0, 1, 2, \cdots\}$ with state space $S = \{0, 1, 2, \cdots, r\}$. Suppose that all states are either absorbing or transient. Let $l \in S$ be an absorbing state. Define
$$a_i = P(\text{absorption in } l \mid X_0 = i), \quad \text{for all } i \in S.$$
By the above definition, we have $a_l = 1$, and $a_j = 0$ if $j$ is any other absorbing state. To find the unknown values of the $a_i$'s, we can use the following equations
$$a_i = \sum_{k} a_k p_{ik}, \quad \text{for } i \in S.$$

In general, a finite Markov chain might have several transient as well as several recurrent classes. As n increases, the chain will get absorbed in one of the
recurrent classes and it will stay there forever. We can use the above procedure to find the probability that the chain will get absorbed in each of the recurrent
classes. In particular, we can replace each recurrent class with one absorbing state. Then, the resulting chain consists of only transient and absorbing states. We
can then follow the above procedure to find absorption probabilities. An example of this procedure is provided in the Solved Problems Section (See Problem 2 in
Section 11.2.7).
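The boxed procedure can be turned into a short solver. This is a sketch under the box's assumption that every state is absorbing or transient; the function name and interface are my own, and it detects absorbing states as those with $p_{ii} = 1$.

```python
import numpy as np

def absorption_probabilities(P, target):
    """Probability of absorption in `target`, starting from each state.
    Assumes every state is either absorbing (p_ii = 1) or transient, and
    solves a_i = sum_k a_k p_ik for the transient states."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    transient = [i for i in range(n) if P[i, i] != 1.0]
    # Transient-to-transient block Q, and one-step probabilities into target:
    Q = P[np.ix_(transient, transient)]
    r = P[transient, target]
    a = np.zeros(n)
    a[target] = 1.0
    # Rearranging a_i = sum_k a_k p_ik gives (I - Q) a_transient = r.
    a[transient] = np.linalg.solve(np.eye(len(transient)) - Q, r)
    return a

# Figure 11.12 chain, absorption in state 0:
P = [[1, 0, 0, 0],
     [1/3, 0, 2/3, 0],
     [0, 1/2, 0, 1/2],
     [0, 0, 0, 1]]
a = absorption_probabilities(P, target=0)
print(a)  # approx [1.0, 0.5, 0.25, 0.0]
```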

Mean Hitting Times:

We now study the expected time until the process hits a certain set of states for the first time. Again, consider the Markov chain in Figure 11.12. Let's define $t_i$ as the expected number of steps needed until the chain hits state 0 or state 3, given that $X_0 = i$. In other words, $t_i$ is the expected time (number of steps) until the chain is absorbed in 0 or 3, given that $X_0 = i$. By this definition, we have $t_0 = t_3 = 0$.

To find $t_1$ and $t_2$, we use the law of total probability with recursion as before. For example, if $X_0 = 1$, then after one step, we have $X_1 = 0$ or $X_1 = 2$. Thus, we can write
$$t_1 = 1 + \frac{1}{3} t_0 + \frac{2}{3} t_2 = 1 + \frac{2}{3} t_2.$$

Similarly, we can write
$$t_2 = 1 + \frac{1}{2} t_1 + \frac{1}{2} t_3 = 1 + \frac{1}{2} t_1.$$

Solving the above equations, we obtain
$$t_1 = \frac{5}{2}, \qquad t_2 = \frac{9}{4}.$$

Generally, let $A \subset S$ be a set of states. The above procedure can be used to find the expected time until the chain first hits one of the states in the set $A$.

Mean Hitting Times

Consider a finite Markov chain $\{X_n, n = 0, 1, 2, \cdots\}$ with state space $S = \{0, 1, 2, \cdots, r\}$. Let $A \subset S$ be a set of states. Let $T$ be the first time the chain visits a state in $A$. For all $i \in S$, define
$$t_i = E[T \mid X_0 = i].$$
By the above definition, we have $t_j = 0$ for all $j \in A$. To find the unknown values of the $t_i$'s, we can use the following equations
$$t_i = 1 + \sum_{k} t_k p_{ik}, \quad \text{for } i \in S - A.$$
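The boxed equations solve the same way as before: moving the unknowns to one side gives $(I - Q)\,t = \mathbf{1}$, where $Q$ is $P$ restricted to the states outside $A$. A sketch (the function name and interface are mine; it assumes $A$ is reachable from every state, otherwise the system is singular or the hitting times are infinite):

```python
import numpy as np

def mean_hitting_times(P, A):
    """Expected number of steps to first reach the set A, from each state.
    Implements t_j = 0 for j in A and t_i = 1 + sum_k t_k p_ik otherwise."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    outside = [i for i in range(n) if i not in A]
    Q = P[np.ix_(outside, outside)]
    t = np.zeros(n)
    # Rearranged recursion: (I - Q) t_outside = vector of ones.
    t[outside] = np.linalg.solve(np.eye(len(outside)) - Q, np.ones(len(outside)))
    return t

# Figure 11.12 chain with A = {0, 3}: should recover t1 = 5/2 and t2 = 9/4.
P = [[1, 0, 0, 0],
     [1/3, 0, 2/3, 0],
     [0, 1/2, 0, 1/2],
     [0, 0, 0, 1]]
t = mean_hitting_times(P, A={0, 3})
print(t)  # approx [0.0, 2.5, 2.25, 0.0]
```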

Mean Return Times:

Another interesting random variable is the first return time. In particular, assuming the chain is in state $l$, we consider the expected time (number of steps) needed until the chain returns to state $l$. For example, consider a Markov chain for which $X_0 = 2$. If the chain takes the values
$$X_0 = 2,\ X_1 = 1,\ X_2 = 4,\ X_3 = 3,\ X_4 = 2,\ X_5 = 3,\ X_6 = 2,\ X_7 = 3,\ \cdots,$$
then the first return to state 2 occurs at time $n = 4$. Thus, the first return time to state 2 equals 4 in this example. Here, we are interested in the expected value of the first return time. In particular, assuming $X_0 = l$, let's define $r_l$ as the expected number of steps needed until the chain returns to state $l$. To make the definition precise, let's define
$$R_l = \min\{n \geq 1 : X_n = l\}.$$

Then,
$$r_l = E[R_l \mid X_0 = l].$$

Note that by definition $R_l \geq 1$, so we conclude $r_l \geq 1$. In fact, $r_l = 1$ if and only if $l$ is an absorbing state (i.e., $p_{ll} = 1$). As before, we can apply the law of total probability to obtain $r_l$. Again, let's define $t_k$ as the expected time until the chain hits state $l$ for the first time, given that $X_0 = k$. We have already seen how to find the $t_k$'s (mean hitting times). Using the law of total probability, we can write
$$r_l = 1 + \sum_{k} p_{lk} t_k.$$

Let's look at an example to see how we can find the mean return time.

Example 11.11
Consider the Markov chain shown in Figure 11.13. Let $t_k$ be the expected number of steps until the chain hits state 1 for the first time, given that $X_0 = k$. Clearly, $t_1 = 0$. Also, let $r_1$ be the mean return time to state 1.

1. Find $t_2$ and $t_3$.
2. Find $r_1$.


Figure 11.13 - A state transition diagram.

Solution
1. To find $t_2$ and $t_3$, we use the law of total probability with recursion as before. For example, if $X_0 = 2$, then after one step, we have $X_1 = 2$ or $X_1 = 3$. Thus, we can write
$$t_2 = 1 + \frac{1}{3} t_2 + \frac{2}{3} t_3.$$

Similarly, we can write
$$t_3 = 1 + \frac{1}{2} t_1 + \frac{1}{2} t_2 = 1 + \frac{1}{2} t_2.$$

Solving the above equations, we obtain
$$t_2 = 5, \qquad t_3 = \frac{7}{2}.$$
2. To find $r_1$, we note that if $X_0 = 1$, then $X_1 = 1$ or $X_1 = 2$. We can write
$$r_1 = 1 + \frac{1}{2} t_1 + \frac{1}{2} t_2 = 1 + \frac{1}{2} \cdot 0 + \frac{1}{2} \cdot 5 = \frac{7}{2}.$$
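The arithmetic in this example can be replayed exactly with Python's `fractions` module. A small sketch: substitute $t_3 = 1 + \frac{1}{2}t_2$ into the equation for $t_2$ and keep everything as exact fractions.

```python
from fractions import Fraction as F

# Substituting t3 = 1 + (1/2) t2 into t2 = 1 + (1/3) t2 + (2/3) t3 gives
#   t2 = 1 + (1/3) t2 + (2/3) + (1/3) t2,  i.e.  t2 * (1 - 2/3) = 5/3.
t2 = (F(1) + F(2, 3)) / (F(1) - F(2, 3))
t3 = F(1) + F(1, 2) * t2
t1 = F(0)  # the chain is already at state 1, so the hitting time is 0
r1 = F(1) + F(1, 2) * t1 + F(1, 2) * t2
print(t2, t3, r1)  # 5 7/2 7/2
```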

Here, we summarize the formulas for finding the mean return times. As we mentioned before, there is no need to memorize these formulas once you understand
how they are derived.
Mean Return Times

Consider a finite irreducible Markov chain $\{X_n, n = 0, 1, 2, \cdots\}$ with state space $S = \{0, 1, 2, \cdots, r\}$. Let $l \in S$ be a state, and let $r_l$ be the mean return time to state $l$. Then
$$r_l = 1 + \sum_{k} t_k p_{lk},$$
where $t_k$ is the expected time until the chain hits state $l$ given $X_0 = k$. Specifically,
$$t_l = 0,$$
$$t_k = 1 + \sum_{j} t_j p_{kj}, \quad \text{for } k \neq l.$$
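The two boxed formulas combine into one short routine. This is a sketch (the function name is mine; it assumes the chain is irreducible so that the hitting-time system is nonsingular):

```python
import numpy as np

def mean_return_time(P, l):
    """Mean return time r_l = 1 + sum_k p_lk t_k for an irreducible finite chain,
    where t_k is the mean hitting time of state l (with t_l = 0)."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    others = [k for k in range(n) if k != l]
    Q = P[np.ix_(others, others)]
    t = np.zeros(n)
    # Hitting-time recursion rearranged: (I - Q) t_others = vector of ones.
    t[others] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return 1.0 + P[l] @ t

# Figure 11.13 chain from Example 11.11, states 1, 2, 3 relabeled 0, 1, 2;
# the matrix itself is an assumption reconstructed from that example's equations.
P = [[1/2, 1/2, 0],
     [0, 1/3, 2/3],
     [1/2, 1/2, 0]]
r = mean_return_time(P, l=0)
print(r)  # approx 3.5, matching r1 = 7/2 above
```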

