Chapter 8 - Markov Chains
Outline:
• Introduction to Stochastic Processes
• Markov Chains
• Chapman-Kolmogorov Equations
• Classification of States
• Recurrence and Transience
• Limiting Probabilities
Stochastic Processes
• Example: a Markov chain on states 0, 1, 2 with transition probability matrix

  State    0     1     2
    0     0.5   0.4   0.1
    1     0.3   0.6   0.1
    2     0.2   0.3   0.5
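A minimal sketch of how such a matrix drives a simulation, using only the Python standard library (the helper name `step` is illustrative):

```python
import random

# Transition probability matrix of the three-state chain above (rows sum to 1).
P = [
    [0.5, 0.4, 0.1],
    [0.3, 0.6, 0.1],
    [0.2, 0.3, 0.5],
]

def step(state):
    """Sample the next state from row `state` of P by inverting the CDF."""
    u, acc = random.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P) - 1  # guard against floating-point round-off

# A short trajectory started in state 0.
s, path = 0, [0]
for _ in range(10):
    s = step(s)
    path.append(s)
print(path)
```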
• Example (weather with two-day memory): the state records the weather on the current and the previous day (R = rain, N = no rain). States: 0: RR, 1: NR, 2: RN, 3: NN. Transition probability matrix:

  State    0     1     2     3
    0     0.7   0     0.3   0
    1     0.5   0     0.5   0
    2     0     0.4   0     0.6
    3     0     0.2   0     0.8
• Random walk: $P_{i,i+1} = p = 1 - P_{i,i-1}$, $i = 0, \pm 1, \pm 2, \ldots$
• At each point in time, the process either takes one step to the right with probability p, or one step to the left with probability 1 - p.
[Diagram: random walk on the integers …, -2, -1, 0, 1, 2, …]
[Diagram: gambler's ruin chain on states 0, 1, …, N. Each interior state i moves to i+1 with probability p and to i-1 with probability q = 1 - p; the boundary states 0 and N are absorbing (self-transition probability 1).]
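The walk is easy to simulate; a short sketch in plain Python (the function name `random_walk` is illustrative):

```python
import random

def random_walk(n_steps, p, rng):
    """Simple random walk on the integers, started at 0: step +1 w.p. p, -1 w.p. 1-p."""
    pos, path = 0, [0]
    for _ in range(n_steps):
        pos += 1 if rng.random() < p else -1
        path.append(pos)
    return path

print(random_walk(20, 0.5, random.Random(42)))
```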
Chapman-Kolmogorov Equations
• Chapman-Kolmogorov equations:

$$P_{ij}^{n+m} = \sum_{k=0}^{\infty} P_{ik}^{n}\, P_{kj}^{m}, \qquad n, m \ge 0, \quad i, j \ge 0$$

• Let $\mathbf{P}^{(n)}$ be the matrix of n-step transition probabilities $P_{ij}^{n}$. Then

$$\mathbf{P}^{(n+m)} = \mathbf{P}^{(n)}\,\mathbf{P}^{(m)} \qquad \text{and} \qquad \mathbf{P}^{(n)} = \mathbf{P}^{n}$$
Example
• Weather transition probability matrix:

$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$

with i = 1: it rains; i = 2: it does not rain. The 4-step transition matrix is

$$P^{4} = \begin{pmatrix} 0.5749 & 0.4251 \\ 0.5668 & 0.4332 \end{pmatrix}$$
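These entries can be checked by raising P to the fourth power, exactly as the matrix form of the Chapman-Kolmogorov equations suggests; a quick sketch assuming NumPy is available:

```python
import numpy as np

# Weather chain (state 0 = rain, state 1 = no rain; the slide's labels 1/2
# shifted to 0-based indices).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Chapman-Kolmogorov in matrix form: the n-step matrix is the n-th matrix power.
print(np.linalg.matrix_power(P, 4))
# [[0.5749 0.4251]
#  [0.5668 0.4332]]
```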
Classification of States
• State j is accessible from state i if $P_{ij}^{n} > 0$ for some $n \ge 0$.
• If j is accessible from i and i is accessible from j, we say that states i and j communicate (i ↔ j).
• Communication is a class property, i.e. an equivalence relation:
(i) state i communicates with itself, for all i ≥ 0;
(ii) if i ↔ j then j ↔ i: communication is symmetric;
(iii) if i ↔ j and j ↔ k, then i ↔ k: communication is transitive.
• Therefore, communication partitions the state space into mutually exclusive classes.
• If all the states communicate, the Markov chain is irreducible.
• An irreducible Markov chain vs. a reducible Markov chain:
[Diagrams: two chains on states 0-4; in the irreducible chain all states communicate, in the reducible chain they do not.]
Examples

• All states are recurrent (the chain is irreducible):

$$P = \begin{pmatrix} 0 & 0 & 0.5 & 0.5 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$$

• Classes {0,1} and {2,3} are recurrent; class {4} is transient:

$$P = \begin{pmatrix} 0.5 & 0.5 & 0 & 0 & 0 \\ 0.5 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0.5 & 0.5 & 0 \\ 0.25 & 0.25 & 0 & 0 & 0.5 \end{pmatrix}$$

• Irreducible ➔ all states are recurrent:

$$P = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0.5 & 0.5 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
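For a finite chain, a communicating class is recurrent exactly when it is closed (no transition leaves it). A sketch that recovers the class structure of the five-state example above, assuming NumPy (`communicating_classes` is an illustrative helper, not a library function):

```python
import numpy as np

def communicating_classes(P):
    """Partition the states of a finite chain into communicating classes and
    flag each class as recurrent (closed) or transient."""
    n = len(P)
    reach = (P > 0) | np.eye(n, dtype=bool)
    for k in range(n):                       # Warshall-style transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    classes, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        cls = {j for j in range(n) if reach[i, j] and reach[j, i]}
        seen |= cls
        # A finite class is recurrent iff it is closed (no exit to other states).
        closed = all(not P[a, j] or j in cls for a in cls for j in range(n))
        classes.append((sorted(cls), "recurrent" if closed else "transient"))
    return classes

P = np.array([[0.5,  0.5,  0,   0,   0  ],
              [0.5,  0.5,  0,   0,   0  ],
              [0,    0,    0.5, 0.5, 0  ],
              [0,    0,    0.5, 0.5, 0  ],
              [0.25, 0.25, 0,   0,   0.5]])
print(communicating_classes(P))
# [([0, 1], 'recurrent'), ([2, 3], 'recurrent'), ([4], 'transient')]
```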
Limiting Probabilities
• If $P_{ii}^{n} = 0$ whenever n is not divisible by d, and d is the largest integer with this property, then state i is periodic with period d.
• If a state has period d = 1, then it is aperiodic.
• If state i is recurrent and if, starting in state i, the expected time until
the process returns to state i is finite, it is positive recurrent
(otherwise it is null recurrent).
• A positive recurrent, aperiodic state is called ergodic.
• For an irreducible, ergodic Markov chain, $\pi_j = \lim_{n \to \infty} P_{ij}^{n}$ exists, is independent of i, and is the unique nonnegative solution of

$$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \quad j \ge 0, \qquad \sum_{j=0}^{\infty} \pi_j = 1$$

• The probability $\pi_j$ also equals the long-run proportion of time that the process is in state j.
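As an empirical illustration of that last point, a sketch that reuses the weather matrix from the earlier example (assuming NumPy): the occupation frequencies of one long trajectory approach π.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Stationary distribution: solve pi = pi P together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]

# Long-run fraction of time spent in each state along a single trajectory.
counts, s = np.zeros(2), 0
for _ in range(100_000):
    counts[s] += 1
    s = rng.choice(2, p=P[s])
print(pi, counts / counts.sum())  # both approach [4/7, 3/7]
```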
Examples
• Two-state chain:

$$P = \begin{pmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{pmatrix}$$

Limiting probabilities:

$$\begin{cases} \pi_0 = \pi_0 \alpha + \pi_1 \beta \\ \pi_1 = \pi_0 (1-\alpha) + \pi_1 (1-\beta) \\ \pi_0 + \pi_1 = 1 \end{cases} \quad\Rightarrow\quad \pi_0 = \frac{\beta}{1-\alpha+\beta}, \qquad \pi_1 = \frac{1-\alpha}{1-\alpha+\beta}$$
• Let $m_{jj}$ be the expected number of transitions until the Markov chain, starting in state j, returns to state j (finite if state j is positive recurrent). Then

$$m_{jj} = \frac{1}{\pi_j}$$
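A quick simulation check of $m_{jj} = 1/\pi_j$, again on the weather chain (a sketch assuming NumPy; for that chain $\pi_0 = 4/7$, so $m_{00} = 7/4 = 1.75$):

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])  # pi_0 = 4/7, so m_00 should be 7/4 = 1.75

# Average, over many excursions, the number of steps to come back to state 0.
total, n_trials = 0, 20_000
for _ in range(n_trials):
    s, steps = 0, 0
    while True:
        s = rng.choice(2, p=P[s])
        steps += 1
        if s == 0:
            break
    total += steps
print(total / n_trials)  # approximately 1.75 = 1 / pi_0
```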
• Gambler's ruin: let $P_i$ denote the probability that, starting with fortune i, the gambler's fortune reaches N before hitting 0, where each play is won with probability p and lost with probability q = 1 - p.
• Note that $P_0 = 0$, so

$$P_i = \begin{cases} \dfrac{1-(q/p)^i}{1-q/p}\,P_1 & \text{if } q/p \neq 1 \\[1.5ex] i\,P_1 & \text{if } q/p = 1 \end{cases} \qquad i = 2, \ldots, N$$

• Using the boundary condition $P_N = 1$:

$$P_i = \begin{cases} \dfrac{1-(q/p)^i}{1-(q/p)^N} & \text{if } p \neq \tfrac{1}{2} \\[1.5ex] \dfrac{i}{N} & \text{if } p = \tfrac{1}{2} \end{cases} \qquad i = 0, 1, 2, \ldots, N$$

• For $N \to \infty$:

$$P_i = \begin{cases} 1-(q/p)^i & \text{if } p > \tfrac{1}{2} \\ 0 & \text{if } p \le \tfrac{1}{2} \end{cases}$$
• For p > 1/2: there is a positive probability that the gambler's fortune will increase indefinitely.
• For p ≤ 1/2: the gambler will "almost certainly" go broke against an infinitely rich adversary.
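Both branches of the closed form are easy to sanity-check by Monte Carlo; a sketch in plain Python with illustrative parameter values:

```python
import random

def win_prob(i, N, p):
    """Closed form for P_i, the probability of reaching N before 0 from fortune i."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p  # q/p
    return (1 - r**i) / (1 - r**N)

def win_prob_mc(i, N, p, trials, rng):
    """Monte Carlo estimate of the same probability."""
    wins = 0
    for _ in range(trials):
        f = i
        while 0 < f < N:
            f += 1 if rng.random() < p else -1
        wins += f == N
    return wins / trials

# Illustrative values: start at 5, target N = 10, win probability 0.55.
print(win_prob(5, 10, 0.55))                       # about 0.73
print(win_prob_mc(5, 10, 0.55, 20_000, random.Random(0)))
```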
Example
• Find the mean first passage time to state 3 from states 0, 1, 2 for

$$P = \begin{pmatrix} 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0.25 & 0.5 & 0.25 \\ 1 & 0 & 0 & 0 \end{pmatrix}$$

• Recall that the mean return time is

$$\mu_{ii} = \frac{1}{\pi_i}, \qquad i = 1, 2, \ldots, n$$
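One standard route is first-step analysis: conditioning on the first transition gives $\mu_i = 1 + \sum_{k \neq 3} P_{ik}\,\mu_k$ for $i \neq 3$, a small linear system. A sketch assuming NumPy:

```python
import numpy as np

P = np.array([[0,    0.5,  0.5,  0   ],
              [0,    0,    1,    0   ],
              [0,    0.25, 0.5,  0.25],
              [1,    0,    0,    0   ]])

# First-step analysis: mu_i = 1 + sum_{k != 3} P[i, k] * mu_k for i = 0, 1, 2,
# i.e. (I - Q) mu = 1, where Q is P restricted to the non-target states.
Q = P[:3, :3]
mu = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(mu)  # [6.5, 6.0, 5.0] -- mean first passage times to state 3 from 0, 1, 2
```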
Exercises