
SUSTech Applied Stochastic Process

Week 5 : Markov Chain and Related Problems


TA: Wen Xu Email: [email protected]

Review of key concepts


1. Definition
i) y is transient if ρyy < 1.
ii) y is recurrent if ρyy = 1.

2. Proposition
i) If y is transient, then
Py (Xn = y, i.o. ) = 0.

ii) If y is recurrent, then


Py (Xn = y, i.o. ) = 1

3. Definition
We say that x communicates with y, write x → y, if

ρxy ≡ Px (Ty < ∞) > 0.

4. Theorem
If ρxy > 0 and ρyx < 1, then x is transient.

5. Corollary
If x is recurrent and ρxy > 0, then ρyx = 1.

6. Definition
A set A is closed if it is impossible to get out of it, i.e. p(i, j) = 0 for all i ∈ A and j ∉ A.

7. Definition
A set B is irreducible if ∀i, j ∈ B, i → j.

8. Theorem
If C is a finite closed irreducible set, then all states in C are recurrent.


9. Theorem
If the state space S is finite, then S can be written as a disjoint union

S = T ∪ R1 ∪ · · · ∪ Rk ,

where T is a set of transient states and Ri , 1 ≤ i ≤ k, are closed irreducible sets of recurrent states.

10. Lemma
Let
N (y) = the number of visits to y at times n ≥ 1.
Then
Ex N (y) = ρxy / (1 − ρyy ).

11. Lemma

Ex N (y) = ∑_{n=1}^{∞} p^n (x, y).
12. Theorem
y is recurrent iff

∑_{n=1}^{∞} p^n (y, y) = Ey N (y) = ∞.
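As a numerical sanity check (my addition, using a made-up 3-state chain), the partial sums of ∑ p^n (y, y) behave exactly as the theorem predicts: they keep growing for recurrent states and converge for a transient one, and the transient limit matches Lemma 10.

```python
import numpy as np

# Hypothetical 3-state chain (my own example): {0, 1} is a closed recurrent
# class; state 2 can only return to itself via its self-loop, so rho_22 = 0.4.
P = np.array([[0.5, 0.5, 0.0],
              [0.4, 0.6, 0.0],
              [0.3, 0.3, 0.4]])

partial = np.zeros(3)        # partial sums of p^n(y, y), n = 1, ..., 2000
Pn = np.eye(3)
for n in range(2000):        # truncate the infinite series
    Pn = Pn @ P
    partial += np.diag(Pn)

# partial[0] and partial[1] grow without bound (recurrent states), while
# partial[2] converges to rho_22 / (1 - rho_22) = 0.4/0.6 = 2/3 (Lemma 10).
print(partial)
```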
13. Lemma
If x is recurrent and x → y, then y is recurrent.

14. Lemma
In a finite closed set there has to be at least one recurrent state.

15. (Homogeneity). The chain {Xn } is called homogeneous if

P (Xn+1 = j | Xn = i) = P (X1 = j | X0 = i)

for all n, i and j. The transition matrix P = (pij ) is the N × N matrix of transition probabilities

pij = P (Xn+1 = j | Xn = i)

16. Theorem
Suppose Xn is a Markov chain on S with transition probabilities pij and initial distribution αi =
P {X0 = i}. Then, for any i0 , . . . , in ∈ S and n ⩾ 0,

P {X0 = i0 , . . . , Xn = in } = αi0 pi0 ,i1 · · · pin−1 ,in .


Proof. Proceeding by induction, the statement is obvious for n = 0. Now, assume it is true for
some n, and let An = {X0 = i0 , . . . , Xn = in }. Then the statement is true for n + 1, since

P (An+1 ) = P (An ) P {Xn+1 = in+1 | An } = αi0 pi0 ,i1 · · · pin−1 ,in pin ,in+1 ,

where P {Xn+1 = in+1 | An } = pin ,in+1 by the Markov property.
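For illustration (my addition), the product formula is easy to evaluate directly; here the initial distribution and transition matrix are those of homework problem 5 below.

```python
import numpy as np

# Initial distribution and transition matrix from homework problem 5.
alpha = np.array([1/4, 1/4, 1/2])
P = np.array([[1/2, 1/3, 1/6],
              [0.0, 1/3, 2/3],
              [1/2, 0.0, 1/2]])

def path_prob(alpha, P, path):
    # P{X_0 = i_0, ..., X_n = i_n} = alpha_{i_0} * p_{i_0,i_1} * ... * p_{i_{n-1},i_n}
    prob = alpha[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]
    return prob

print(path_prob(alpha, P, [0, 1, 2, 2]))  # alpha_0 * p_{0,1} * p_{1,2} * p_{2,2} = 1/36
```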

17. Consider a Markov chain with transition probabilities Pi,j . Let A denote a set of states, and
suppose we want to find the probability that this Markov chain enters some state in A before time m.
That is, for a given state i ∉ A, we want to determine

β = P (Xk ∈ A for some k = 1, · · · , m | X0 = i).

To determine this probability, we define a new Markov chain {Wn }. Its state space consists of the
states not in A plus one additional state, referred to as state A in this discussion. Once the chain
{Wn } enters state A, it remains there forever.
The new chain is defined as follows. Let Xn denote the state at time n of the original chain with
transition probabilities Pi,j , and define

N = min {n : Xn ∈ A} ,

with N = ∞ if Xn ∉ A for all n. In short, N is the time at which the Markov chain first enters the
set A. Now define

Wn = Xn  if n < N,     Wn = A  if n ⩾ N.

So, until the original Markov chain {Xn } enters some state in A, the process {Wn } is equal to the
original chain; at that moment the new process jumps to state A and remains there forever. From
this description, we deduce that {Wn , n ≥ 0} is a Markov chain with the states i (i ∉ A) and A, and
its transition probabilities Qi,j are

Qi,j = Pi,j             if i ∉ A, j ∉ A,
Qi,A = ∑_{j∈A} Pi,j     if i ∉ A,
QA,A = 1.

The original Markov chain enters a state in A before time m if and only if the new Markov chain is
in state A at time m, so

P (Xk ∈ A for some k = 1, · · · , m | X0 = i) = P (Wm = A | W0 = i) = Q^m_{i,A} .

That is to say, the required probability equals the m-step transition probability into A of the new chain.
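A minimal Python sketch of this construction (my addition; the chain, the set A, and the horizon m are hypothetical, chosen just for illustration): collapse A into a single absorbing state and read off β = Q^m_{i,A}.

```python
import numpy as np

def hitting_prob_by_time_m(P, A, i, m):
    """P(X_k in A for some k = 1..m | X_0 = i), built as in item 17:
    keep the states outside A and add one absorbing state for A."""
    n = P.shape[0]
    outside = [s for s in range(n) if s not in A]
    k = len(outside)
    Q = np.zeros((k + 1, k + 1))
    Q[:k, :k] = P[np.ix_(outside, outside)]             # Q_{i,j} = P_{i,j},  i, j not in A
    Q[:k, k] = P[np.ix_(outside, list(A))].sum(axis=1)  # Q_{i,A} = sum_{j in A} P_{i,j}
    Q[k, k] = 1.0                                       # Q_{A,A} = 1
    Qm = np.linalg.matrix_power(Q, m)
    return Qm[outside.index(i), k]                      # beta = Q^m_{i,A}

# Hypothetical 4-state chain; A = {3}, start at state 0, horizon m = 5.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.4, 0.3, 0.1],
              [0.1, 0.1, 0.6, 0.2],
              [0.25, 0.25, 0.25, 0.25]])
print(hitting_prob_by_time_m(P, {3}, 0, 5))
```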


18. Absorbing State. A closed set containing exactly one state is called absorbing.

19. (First Visit Time) Let Tj = min {n ≥ 1 : Xn = j}, i.e. the first time state j is visited, with the
convention that Tj = ∞ if the visit never occurs. We have P (Ti = ∞ | X0 = i) > 0 if and only if i
is transient, and P (Ti = ∞ | X0 = i) = 0 if and only if i is recurrent. That a state is recurrent does
not guarantee that E [Ti | X0 = i] < ∞.

20. (Mean Recurrence Time)
Denote fij (n) = P (X1 ̸= j, · · · , Xn−1 ̸= j, Xn = j | X0 = i), the probability that the first visit to
state j starting from state i takes place at time n.
We define the mean recurrence time µi of a state i as

µi = E [Ti | X0 = i] = ∑_n n fii (n)  if i is recurrent,     µi = ∞  if i is transient.

Since ∑_n n fii (n) < ∞ is not guaranteed even for a recurrent state, we call a recurrent state i null
recurrent (null persistent) if µi = ∞ and positive (non-null) recurrent if µi < ∞. We have already
seen that state 0 in a one-dimensional random walk is recurrent when p = 1 − p = 1/2. We are
interested in whether it is null recurrent or positive recurrent.
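A small simulation sketch (my addition) illustrates null recurrence for the symmetric walk: every excursion from 0 eventually returns (recurrence), yet a few enormous excursions dominate the sample mean, which never settles down, consistent with µ0 = ∞.

```python
import random

def return_time_to_zero(max_steps=10**6):
    """One excursion of the symmetric random walk: steps until the first
    return to 0 (None if it has not returned within max_steps)."""
    pos, n = 0, 0
    while n < max_steps:
        pos += random.choice((-1, 1))
        n += 1
        if pos == 0:
            return n
    return None

times = [t for t in (return_time_to_zero() for _ in range(2000)) if t is not None]
# Each excursion ends (recurrence), but the sample mean is dominated by a few
# huge excursions and grows with the sample size: the mean recurrence time is infinite.
print(sum(times) / len(times), max(times))
```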

Homework
5. A Markov chain {Xn , n ≥ 0} with states 0, 1, 2, has the transition probability matrix

        [ 1/2  1/3  1/6 ]
    P = [  0   1/3  2/3 ]
        [ 1/2   0   1/2 ]

If P {X0 = 0} = P {X0 = 1} = 1/4 , find E [X3 ].


Solution. Note that

          [ 13/36  11/54  47/108 ]
    P^3 = [  4/9   4/27   11/27  ]
          [ 5/12   2/9    13/36  ]

So,

P (X3 = 1) = ∑_{i=0}^{2} P (X3 = 1 | X0 = i) P (X0 = i)
           = (11/54) × (1/4) + (4/27) × (1/4) + (2/9) × (1/2)
           = 43/216



P (X3 = 2) = ∑_{i=0}^{2} P (X3 = 2 | X0 = i) P (X0 = i)
           = (47/108) × (1/4) + (11/27) × (1/4) + (13/36) × (1/2)
           = 169/432
Hence,

E (X3 ) = 1 × (43/216) + 2 × (169/432) = 53/54.
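A quick numerical check of the above (my addition):

```python
import numpy as np

P = np.array([[1/2, 1/3, 1/6],
              [0.0, 1/3, 2/3],
              [1/2, 0.0, 1/2]])
alpha = np.array([1/4, 1/4, 1/2])   # P{X0=0} = P{X0=1} = 1/4, so P{X0=2} = 1/2

dist3 = alpha @ np.linalg.matrix_power(P, 3)   # distribution of X3
print(dist3 @ np.array([0, 1, 2]))             # E[X3] = 53/54 ≈ 0.98148
```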

8. Suppose that coin 1 has probability 0.7 of coming up heads, and coin 2 has probability 0.6 of coming
up heads. If the coin flipped today comes up heads, then we select coin 1 to flip tomorrow, and if it
comes up tails, then we select coin 2 to flip tomorrow. If the coin initially flipped is equally likely to
be coin 1 or coin 2, then what is the probability that the coin flipped on the third day after the initial
flip is coin 1? Suppose that the coin flipped on Monday comes up heads. What is the probability that
the coin flipped on Friday of the same week also comes up heads?
Solution: There are two ways to do this problem: use H and T (results of flips) as states, or 1
and 2 (coins used) as states. We use the first. Note that

P (X1 = H | X0 = H) = .7, P (X1 = H | X0 = T ) = .6.

Thus, the transition matrix is

    P = [ 0.7  0.3 ]
        [ 0.6  0.4 ]

Then,

    P^2 = [ 0.7  0.3 ] [ 0.7  0.3 ]  =  [ 0.67  0.33 ]
          [ 0.6  0.4 ] [ 0.6  0.4 ]     [ 0.66  0.34 ]

The coin flipped on the third day after the initial flip is coin 1 exactly when X2 = H. The initial
distribution is

P (X0 = H) = (1/2)(.7) + (1/2)(.6) = .65,
P (X0 = T ) = 1 − .65 = .35.

Then
P (X2 = H) = .65 p^2 (H, H) + .35 p^2 (T, H) = .6665.

Note that

    P^4 = [ 0.67  0.33 ] [ 0.67  0.33 ]  =  [ 0.6667  0.3333 ]
          [ 0.66  0.34 ] [ 0.66  0.34 ]     [ 0.6666  0.3334 ]

Hence,
P (X5 = H | X1 = H) = p^4 (H, H) = 0.6667.
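The analogous check for this problem (my addition), with states ordered (H, T):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.6, 0.4]])
alpha = np.array([0.65, 0.35])   # distribution of the initial flip X0

print((alpha @ np.linalg.matrix_power(P, 2))[0])   # P(X2 = H) = 0.6665
print(np.linalg.matrix_power(P, 4)[0, 0])          # P(X5 = H | X1 = H) = 0.6667
```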

13. Let P be the transition probability matrix of a Markov chain. Argue that if for some positive
integer r, P^r has all positive entries, then so does P^n for all integers n ≥ r.


Proof: Note that

P^n_{ij} = ∑_k P^{n−r}_{ik} P^r_{kj} .

Since

∑_k P^{n−r}_{ik} = 1,

there exists at least one k such that P^{n−r}_{ik} > 0. Since P^r_{kj} > 0 for every k and j,

P^n_{ij} = ∑_k P^{n−r}_{ik} P^r_{kj} > 0.
14. Specify the classes of the following Markov chains, and determine whether they are transient
or recurrent. The four transition matrices are omitted; we argue from their communication diagrams.

Solution
1) It is clear that all the states communicate with each other and hence have the same status. A
finite chain has at least one recurrent state, so there is exactly one recurrent class, R1 = {1, 2, 3}.
2) From the communication diagram we see this chain is irreducible; hence the finite chain has
only one recurrent class, R1 = {1, 2, 3, 4}.
3) From the communication diagram we see that 1 ↔ 3, 4 ↔ 5, 2 → 1 and 2 → 3. Thus R1 = {1, 3}
and R2 = {4, 5} are irreducible closed sets and hence recurrent. As ρ23 > 0 and ρ32 = 0 < 1, we see
that 2 is transient.
4) From the communication diagram we see that 5 → 1 ↔ 2 and 4 → 3. R1 = {1, 2} and R2 = {3}
are irreducible closed sets and hence recurrent. Since ρ51 > 0 and ρ15 = 0 < 1, 5 is transient. Similarly,
4 is transient. Thus T = {4, 5} is the set of transient states.

Supplementary exercises
1. (Equivalent conditions for Markov property) Show that the Markov property is equiv-
alent to each of the following conditions:
1. For all n, m ≥ 1 and all s, x1 , · · · , xn ∈ S,

P (Xn+m = s | X1 = x1 , · · · , Xn = xn ) = P (Xn+m = s | Xn = xn ) .


2. For all 0 ≤ n1 < · · · < nk ≤ n, all m ≥ 1, and all s, x1 , · · · , xk ∈ S,

P (Xn+m = s | Xn1 = x1 , · · · , Xnk = xk ) = P (Xn+m = s | Xnk = xk ) .

Proof. For the first statement, we proceed by induction on m. For m = 1, the statement is just the
definition of a Markov chain, so it automatically holds. Now assume that for m = d we have

P (Xn+d = s | X1 = x1 , · · · , Xn = xn ) = P (Xn+d = s | Xn = xn ) .

We consider the case m = d + 1. We have

P (Xn+d+1 = s | X1 = x1 , · · · , Xn = xn )
  = ∑_k P (Xn+d+1 = s, Xn+d = k | X1 = x1 , · · · , Xn = xn )
  = ∑_k P (Xn+d+1 = s | Xn+d = k, X1 = x1 , · · · , Xn = xn ) P (Xn+d = k | X1 = x1 , · · · , Xn = xn )
  = ∑_k P (Xn+d+1 = s | Xn+d = k) P (Xn+d = k | Xn = xn )
  = ∑_k P (Xn+d+1 = s, Xn+d = k | Xn = xn )
  = P (Xn+d+1 = s | Xn = xn ) ,                                                             (1)

where the third equality uses the Markov property of X for the first factor and the induction
hypothesis for the second. Therefore, by induction, the statement holds for all m ∈ N+ .
For the second claim, we can use the same strategy.

2. (Subchain from a Markov chain) Assume X = {Xn : n ≥ 0} is a Markov chain and let
{nk : k ≥ 0} be an unbounded increasing sequence of positive integers. Define a new stochastic process
Y = {Yk : k ≥ 0} by Yk = Xnk . Show that Y is a Markov chain. Is Y a time-homogeneous Markov
chain without additional conditions?
Proof. By the definition of a Markov chain, for Y to be a Markov chain we need to show that

P (Yk = s | Y0 = x0 , · · · , Yk−1 = xk−1 ) = P (Yk = s | Yk−1 = xk−1 ) .                    (2)

By the definition of the process Y , to show (2) is just to show

P (Xnk = s | Xn0 = x0 , · · · , Xnk−1 = xk−1 ) = P (Xnk = s | Xnk−1 = xk−1 )                 (3)

for the increasing sequence of integers n0 < n1 < · · · < nk . Since X is a Markov chain, (3) follows
automatically, because it is just the alternative characterization of the Markov property (condition 2
of the previous exercise). Y may not be homogeneous without further restrictions, because
n1 , · · · , nk , · · · need not be equally spaced.


Remark. The Markov chain {Xn } is time-homogeneous if P (Xn+1 = j | Xn = i) = P (X1 = j | X0 = i),
i.e. the transition probabilities do not depend on the time n. If this is the case, we write
pij = P (X1 = j | X0 = i) for the probability of going from i to j in one step, and P = (pij ) for the
transition matrix.
