Markov Chains
Andrey Andreyevich Markov (image source: https://vi.wikipedia.org/wiki/Andrey_Andreyevich_Markov)
Markov Chains
Introduction to Stochastic Processes.
Markov Chains.
Chapman-Kolmogorov Equations
Classification of States
Recurrence and Transience
Limiting Probabilities
Stochastic Processes
• A stochastic process is a collection of random variables {X(t), t ∈ T}. Typically, T is continuous (time) and we have {X(t), t ≥ 0}, or T is discrete and we observe X_n, n = 0, 1, 2, ... at discrete time points n. Refer to X(t) as the state of the process at time t.
• Example: The condition of a machine at the time of the monthly preventive
maintenance is poor, fair or good. For month t, the stochastic process for
this situation can be represented as follows:
$$X_t = \begin{cases} 0 & \text{if machine condition is poor} \\ 1 & \text{if machine condition is fair} \\ 2 & \text{if machine condition is good} \end{cases} \qquad t = 1, 2, \ldots, n$$
A possible realization over 20 months: 0 1 2 2 0 0 1 1 2 1 2 2 1 1 0 0 0 2 1 2
• The random variable X_t has a finite range because it takes only three values: poor (0), fair (1), and good (2). Can we use a network (a directed graph) to represent a Markov chain?
States: 0: RR, 1: NR, 2: RN, 3: NN

P =       0     1     2     3
     0   0.7    0    0.3    0
     1   0.5    0    0.5    0
     2    0    0.4    0    0.6
     3    0    0.2    0    0.8
• $P_{i,i+1} = p$, $P_{i,i-1} = 1 - p = q$, for $i = 0, \pm 1, \pm 2, \ldots$
• At each point of time, the process either takes one step to the right with probability p or one step to the left with probability q = 1 − p.
[Figure: random walk on the integers … −2 −1 0 1 2 …]
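A minimal Python sketch (an illustration added here, not from the slides; the function name and the chosen values of p and the number of steps are arbitrary) that simulates a sample path of this random walk:

```python
import random

def random_walk_path(p=0.5, n_steps=20, start=0):
    """Simulate a simple random walk on the integers:
    from state i, move to i+1 with probability p and to i-1 with probability 1 - p."""
    path = [start]
    for _ in range(n_steps):
        step = 1 if random.random() < p else -1
        path.append(path[-1] + step)
    return path

print(random_walk_path(p=0.6, n_steps=20))
```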
Chapman-Kolmogorov Equations
• Chapman-Kolmogorov Equations
$$P^{n+m}_{ij} = \sum_{k=0}^{\infty} P^{n}_{ik}\, P^{m}_{kj}, \qquad n, m \ge 0, \ \ i, j \ge 0$$
• Note that $P^{n}_{ik} P^{m}_{kj}$ represents the probability that, starting in state i, the process goes to state j in n + m transitions through a path that takes it into state k at the nth transition.
• Let $\mathbf{P}^{(n)}$ denote the matrix of n-step transition probabilities $P^{n}_{ij}$. Then $\mathbf{P}^{(n+m)} = \mathbf{P}^{(n)}\mathbf{P}^{(m)}$ and $\mathbf{P}^{(n)} = \mathbf{P}^{n}$.
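As a quick illustration (a sketch added here, not part of the original slides), the n-step matrix can be computed directly as a matrix power, for example with NumPy; the weather matrix of the next slide is used as input:

```python
import numpy as np

def n_step_matrix(P, n):
    """n-step transition matrix: P^(n) = P^n (Chapman-Kolmogorov)."""
    return np.linalg.matrix_power(np.asarray(P, dtype=float), n)

# Weather chain (state 1 = rain, state 2 = no rain).
P = [[0.7, 0.3],
     [0.4, 0.6]]
print(n_step_matrix(P, 4))   # approx. [[0.5749, 0.4251], [0.5668, 0.4332]]
```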
• Weather transition probability matrix (state 1: it rains; state 2: it does not rain):
$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$
Then the 4-step transition matrix is
$$P^{4} = \begin{pmatrix} 0.5749 & 0.4251 \\ 0.5668 & 0.4332 \end{pmatrix}$$
• Given that the probability it rains today is $\alpha_1 = 0.4$ and the probability it does not rain today is $\alpha_2 = 0.6$, what is the probability it will rain 4 days from now? We have $\alpha^{(0)} = (0.4, 0.6)$, so
$$\alpha^{(4)} = \alpha^{(0)} P^{4} = (0.4 \ \ 0.6)\begin{pmatrix} 0.5749 & 0.4251 \\ 0.5668 & 0.4332 \end{pmatrix} = (0.57 \ \ 0.43)$$
• What are the values of $\alpha^{(8)}$ and $\alpha^{(16)}$?
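A small sketch (illustration only, not from the slides) that propagates the initial distribution $\alpha^{(0)}$ through $P^{n}$ to answer the questions above:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
alpha0 = np.array([0.4, 0.6])     # today's distribution: (rain, no rain)

for n in (4, 8, 16):
    # alpha^(n) = alpha^(0) P^n; for n = 4 this is approx. (0.57, 0.43)
    print(n, alpha0 @ np.linalg.matrix_power(P, n))
```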
Example 7
• A one-year transition matrix is given for a gardener. States are as
follows: 1 – good, 2 – fair and 3 – poor.
• Initial condition is a(0) = (1, 0, 0). Determine the absolute probabilities of
the three states of the system after 1, 8, and 16 gardening years.
P^1 =     1      2      3
     1   0.3    0.6    0.1
     2   0.1    0.6    0.3
     3   0.05   0.4    0.55

P^8 =     1          2          3
     1   0.101753   0.525514   0.372733
     2   0.101702   0.525435   0.372863
     3   0.101669   0.525384   0.372863
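The same kind of computation (again an added sketch, not part of the slides) gives the absolute probabilities a(n) = a(0) P^n for the gardener chain:

```python
import numpy as np

# Gardener one-year transition matrix (states: 1 good, 2 fair, 3 poor).
P = np.array([[0.30, 0.60, 0.10],
              [0.10, 0.60, 0.30],
              [0.05, 0.40, 0.55]])
a0 = np.array([1.0, 0.0, 0.0])    # initial condition: start in state 1 (good)

for n in (1, 8, 16):
    print(n, a0 @ np.linalg.matrix_power(P, n))
```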
Classification of States
• State j is accessible from state i if $P^{n}_{ij} > 0$ for some $n \ge 0$.
• If j is accessible from i and i is accessible from j, we say that states i and j communicate (i ↔ j).
• Communication is a class property:
(i) State i communicates with itself, for all i ≥ 0;
(ii) If i ↔ j then j ↔ i: communication is symmetric;
(iii) If i ↔ j and j ↔ k, then i ↔ k: communication is transitive.
• Therefore, communication divides the state space up into mutually
exclusive classes.
• If all the states communicate, the Markov chain is irreducible.
Classification of States
An irreducible Markov chain vs. a reducible Markov chain:
[Figure: two transition diagrams on states 0–4, one irreducible and one reducible]
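A small sketch (my own illustration, not from the slides; the helper names are arbitrary) that tests irreducibility by checking mutual reachability on the directed graph of positive transition probabilities:

```python
import numpy as np

def reachable(P, i):
    """States reachable from i (including i) in the graph with an edge i -> j whenever P[i][j] > 0."""
    P = np.asarray(P)
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for j in np.nonzero(P[s] > 0)[0]:
            if int(j) not in seen:
                seen.add(int(j))
                stack.append(int(j))
    return seen

def is_irreducible(P):
    """Irreducible: every state is reachable from every other state (all states communicate)."""
    n = len(P)
    return all(reachable(P, i) == set(range(n)) for i in range(n))

# A reducible example: state 2 is absorbing, so states 0 and 1 cannot be reached from it.
P = [[0.2, 0.5, 0.3],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
print(is_irreducible(P))   # False
```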
Limiting Probabilities
• If $P^{n}_{ii} = 0$ whenever n is not divisible by d, and d is the largest integer with this property, then state i is periodic with period d. For example, if the state cannot re-enter itself after n = 1, 2, 3, 5, ... transitions but does re-enter after 4, 8, 12, ... transitions, then its period is d = 4.
• If a state has period d = 1, then it is aperiodic.
• If state i is recurrent and if, starting in state i, the expected time until the
process returns to state i is finite, it is positive recurrent (otherwise it is null
recurrent).
• A positive recurrent, aperiodic state is called ergodic. Meaning: the state is aperiodic (period 1) and the expected time until the process re-enters it is finite. Such states describe behavior that becomes stable after a long enough time.
Example 8
• We can test the periodicity of a state by computing $P^{n}$ and observing the values of $P^{n}_{ii}$ for n = 2, 3, 4, ...; these values are positive only when n is a multiple of the period of the state. For example, consider the following matrix:
P =       1       2       3
     1    0      0.6     0.4
     2    0      1       0
     3   0.6     0.4     0

P² =      1       2       3
     1   0.24    0.76    0
     2    0      1       0
     3    0      0.76    0.24

P³ =      1       2       3
     1    0      0.904   0.096
     2    0      1       0
     3   0.144   0.856   0

P⁴ =      1       2       3
     1   0.0576  0.9424  0
     2    0      1       0
     3    0      0.9424  0.0576

P⁵ =      1        2        3
     1    0       0.97696  0.02304
     2    0       1        0
     3   0.03456  0.96544  0
• The results show that $P^{n}_{11}$ and $P^{n}_{33}$ are positive for even values of n and zero otherwise (we can confirm this observation by computing $P^{n}$ for n > 5). This means that each of states 1 and 3 has period d = 2.
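A quick numeric check (an added sketch, using the matrix reconstructed above): it prints $P^{n}_{11}$ and $P^{n}_{33}$ for several n, making the period-2 pattern visible:

```python
import numpy as np

P = np.array([[0.0, 0.6, 0.4],
              [0.0, 1.0, 0.0],
              [0.6, 0.4, 0.0]])

for n in range(1, 9):
    Pn = np.linalg.matrix_power(P, n)
    # 0-indexed: Pn[0, 0] is P^n_11 and Pn[2, 2] is P^n_33.
    print(n, round(Pn[0, 0], 5), round(Pn[2, 2], 5))
```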
Example 9
A transition matrix P is given; computing $P^{100}$ gives the following results:

P =       1     2     3
     1   0.2   0.5   0.3
     2    0    0.5   0.5
     3    0     0     1

P^100 =   1     2     3
     1    0     0     1
     2    0     0     1
     3    0     0     1
States 1 and 2 are transient because they can reach state 3 but can never be reached back from it. State 3 is absorbing because $p_{33} = 1$. These classifications can also be seen by computing $\lim_{n \to \infty} p^{(n)}_{ij}$.
The result shows that, in the long run, the probability of reentering transient
state 1 or 2 is zero, and the probability of being in absorbing state 3 is
certain.
Example 10
First chain: all states are recurrent.

P =       0     1     2     3
     0    0     0    0.5   0.5
     1    1     0     0     0
     2    0     1     0     0
     3    0     1     0     0

Second chain (states 0–4):

P =       0     1     2     3     4
     0   0.5   0.5    0     0     0
     1   0.5   0.5    0     0     0
     2    0     0    0.5   0.5    0
     3    ⋯
     4    ⋯

Classes: {0, 1}, {2, 3} – recurrent.
• For an irreducible ergodic Markov chain, the limiting probabilities $\pi_j = \lim_{n\to\infty} P^{n}_{ij}$ exist and are the unique nonnegative solution of
$$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \quad j \ge 0 \qquad (\boldsymbol{\pi} = \boldsymbol{\pi} P)$$
$$\sum_{j=0}^{\infty} \pi_j = 1$$
• The probability $\pi_j$ is the long-run proportion of time that the process is in state j.
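A minimal solver sketch (added illustration, not from the slides): for a finite ergodic chain it solves π = πP together with Σπ_j = 1 as a linear least-squares system; the weather chain is used as a demo input:

```python
import numpy as np

def limiting_probabilities(P):
    """Solve pi = pi P together with sum(pi) = 1 for a finite ergodic chain."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # balance equations + normalization
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Weather chain: pi approx. (4/7, 3/7) = (0.571, 0.429).
print(limiting_probabilities([[0.7, 0.3],
                              [0.4, 0.6]]))
```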
Example 11
$$P = \begin{pmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{pmatrix}$$
Limiting probabilities:
$$\begin{cases} \pi_0 = \alpha\,\pi_0 + \beta\,\pi_1 \\ \pi_1 = (1-\alpha)\,\pi_0 + (1-\beta)\,\pi_1 \\ \pi_0 + \pi_1 = 1 \end{cases}$$
$$\Rightarrow \ \pi_0 = \frac{\beta}{1-\alpha+\beta}, \qquad \pi_1 = \frac{1-\alpha}{1-\alpha+\beta}$$
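For instance (an added illustration, plugging in the weather matrix from the earlier slide, so α = 0.7 and β = 0.4):
$$\pi_0 = \frac{0.4}{1 - 0.7 + 0.4} = \frac{4}{7} \approx 0.571, \qquad \pi_1 = \frac{0.3}{0.7} = \frac{3}{7} \approx 0.429,$$
which the rows of $P^4$ computed earlier, (0.5749, 0.4251) and (0.5668, 0.4332), are already approaching.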
• Let mjj be the expected number of transitions until the Markov chain,
starting in state j, returns to state j (finite if state j is positive
recurrent).
Then $m_{jj} = \dfrac{1}{\pi_j}$; this is known as the mean first return time or the mean recurrence time.
Example 12
• Consider the gardener transition matrix P; find the limiting probabilities.
P =       1      2      3
     1   0.3    0.6    0.1
     2   0.1    0.6    0.3
     3   0.05   0.4    0.55

We have $\boldsymbol{\pi} = \boldsymbol{\pi} P$, i.e. $(\pi_1, \pi_2, \pi_3) = (\pi_1, \pi_2, \pi_3)\,P$, or
$$\begin{cases} \pi_1 = 0.3\,\pi_1 + 0.1\,\pi_2 + 0.05\,\pi_3 \\ \pi_2 = 0.6\,\pi_1 + 0.6\,\pi_2 + 0.4\,\pi_3 \\ \pi_3 = 0.1\,\pi_1 + 0.3\,\pi_2 + 0.55\,\pi_3 \\ \pi_1 + \pi_2 + \pi_3 = 1 \end{cases}$$
• Solving the above system of equations, we obtain $\pi_1 = 0.1017$, $\pi_2 = 0.5254$, $\pi_3 = 0.3729$, meaning that in the long run the system will be in state 1 about 10% of the time, in state 2 about 52% of the time, and in state 3 about 37% of the time.
• The mean recurrence times are: $\mu_{11} = \frac{1}{0.1017} = 9.83$, $\mu_{22} = \frac{1}{0.5254} = 1.90$, $\mu_{33} = \frac{1}{0.3729} = 2.68$.
The results show that it takes approximately 9.83 years (about 10 years) for the system to return to state 1, approximately 2 years to return to state 2, and approximately 2.68 years to return to state 3.
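A numeric cross-check (added sketch, not from the slides): π can also be obtained as the left eigenvector of P for eigenvalue 1, and the mean recurrence times as 1/π_j:

```python
import numpy as np

P = np.array([[0.30, 0.60, 0.10],
              [0.10, 0.60, 0.30],
              [0.05, 0.40, 0.55]])

# Left eigenvector of P for eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

print(pi)        # approx. (0.1017, 0.5254, 0.3729)
print(1.0 / pi)  # mean recurrence times, approx. (9.83, 1.90, 2.68)
```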
Example 13
Find the mean first passage time to state 4 from states 1, 2, 3.

P =       1      2      3      4
     1    0     0.5    0.5     0
     2    0      0      1      0
     3    0     0.25   0.5    0.25
     4    1      0      0      0
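A solver sketch for this question (added illustration; the slides do not give code): the mean first passage times to state 4 satisfy $\mu_{i4} = 1 + \sum_{j \ne 4} P_{ij}\,\mu_{j4}$, a linear system solved below:

```python
import numpy as np

# Transition matrix of Example 13 (states 1..4).
P = np.array([[0.00, 0.50, 0.50, 0.00],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [1.00, 0.00, 0.00, 0.00]])

target = 3                                 # state 4, 0-indexed
others = [i for i in range(4) if i != target]

Q = P[np.ix_(others, others)]              # transitions among states 1, 2, 3 only
mu = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
print(dict(zip([i + 1 for i in others], mu)))   # approx. {1: 6.5, 2: 6.0, 3: 5.0}
```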
Exercises