

Chapter 8 – Probabilistic Models

Markov Chain

HCMC University of Technology – Dept. of ISE Assoc. Prof. Ho Thanh Phong



Andrey Andreyevich Markov

https://vi.wikipedia.org/wiki/Andrey_Andreyevich_Markov


Markov Chains
• Introduction: Stochastic Processes
• Markov Chains
• Chapman-Kolmogorov Equations
• Classification of States
• Recurrence and Transience
• Limiting Probabilities


Stochastic Processes
• A stochastic process is a collection of random variables {X(t), t ∈ T}. Typically, T is continuous (time) and we have {X(t), t ≥ 0}; or T is discrete and we observe X_n, n = 0, 1, 2, ... at discrete time points n. We refer to X(t) as the state of the process at time t.
• Example: The condition of a machine at the time of the monthly preventive maintenance is poor, fair, or good. For month t, the stochastic process for this situation can be represented as follows:

X_t = 0 if the machine condition is poor
X_t = 1 if the machine condition is fair      (t = 1, 2, ..., n)
X_t = 2 if the machine condition is good

A possible realization: 0 1 2 2 0 0 1 1 2 1 2 2 1 1 0 0 0 2 1 2

• The random variable X_t is finite because it takes only three values: poor (0), fair (1), and good (2). Can we use a network to represent a Markov chain?

Markov Chains - Definition


• A Markov chain is a process that describes the change from one state to another with a given probability.
• A Markov chain is a stochastic process {X_n, n = 0, 1, 2, ...} with states 0, 1, 2, ..., n, n+1. If X_n = i, the value of the process at time n is i; if X_{n+1} = j, the value of the process at time (n+1) is j.

• P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_1 = i_1, X_0 = i_0) = P(X_{n+1} = j | X_n = i)

for all states i_0, i_1, ..., i_{n-1} and all n ≥ 0. This is the conditional probability that the state at time (n+1) equals j, given that the state at time n equals i: X_{n+1} depends only on the present state X_n (the Markov property).


Markov Chains - Definition


Denote the transition probability P_ij = P(X_{n+1} = j | X_n = i), with P_ij ≥ 0 for all i, j.
Then, for any i, Σ_j P_ij = 1 (every row of transition probabilities sums to one).

Let P = [P_ij] be the matrix of one-step transition probabilities:

          0     1    ...   j    ...   n
    0   p_00  p_01  ...  p_0j  ...  p_0n
    1   p_10  p_11  ...  p_1j  ...  p_1n
P = ...
    i   p_i0  p_i1  ...  p_ij  ...  p_in
    ...
    n   p_n0  p_n1  ...  p_nj  ...  p_nn
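As a small illustration (an addition, not from the slides), these two conditions on a transition matrix can be checked in Python with NumPy; the numbers below are the machine-condition probabilities from Example 2 on the next slide.

```python
import numpy as np

# One-step transition matrix: rows index the current state,
# columns index the next state.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.6, 0.1],
              [0.2, 0.3, 0.5]])

assert np.all(P >= 0)                    # P_ij >= 0 for all i, j
assert np.allclose(P.sum(axis=1), 1.0)   # sum_j P_ij = 1 for every row i
```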


Markov Chains - Examples


Example 1: Forecasting the weather, knowing the weather one day back.
X_n is the weather on day n; state space S = {0, 1}; P_00 = α, P_10 = β.
The transition matrix is

P = | α   1−α |
    | β   1−β |

Example 2: Estimating machine condition.
Define the states: 0 – poor, 1 – fair, 2 – good. Assume the probability of changing from state i to state j is p_ij.
For instance, the probability of changing from fair to good is 0.1.

State       0 (poor)   1 (fair)   2 (good)
0 (poor)      0.5        0.4        0.1
1 (fair)      0.3        0.6        0.1
2 (good)      0.2        0.3        0.5
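A brief sketch of simulating a sample path of this chain (an addition; the seed and the starting state 0 are arbitrary choices):

```python
import numpy as np

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.6, 0.1],
              [0.2, 0.3, 0.5]])

rng = np.random.default_rng(seed=1)   # arbitrary seed
state, path = 0, [0]                  # start in state 0 (poor)
for _ in range(19):
    # Draw the next state using row `state` of P as the distribution.
    state = rng.choice(3, p=P[state])
    path.append(state)
print(path)  # a realization like the 0/1/2 sequence on the earlier slide
```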


Markov Chains - Examples


Example 3: Forecasting the weather, knowing the weather for the past two days.
• If it has rained for the past two days, then it will rain tomorrow with probability 0.7.
• If it rained today but not yesterday, then it will rain tomorrow with probability 0.5.
• If it rained yesterday but not today, then it will rain tomorrow with probability 0.4.
• If it has not rained in the past two days, then it will rain tomorrow with probability 0.2.

States: 0: RR, 1: NR, 2: RN, 3: NN

         0     1     2     3
    0   0.7    0    0.3    0
    1   0.5    0    0.5    0
    2    0    0.4    0    0.6
    3    0    0.2    0    0.8


Markov Chains - Examples


Example 4: Random Walks

• State space S: 0, ±1, ±2, ±3, ±4, ...

• P_{i,i+1} = p; P_{i,i−1} = 1 − p, for i = 0, ±1, ±2, ...

• At each point in time, the process either takes one step to the right with probability p or one step to the left with probability 1 − p.

(Diagram: the states ..., −2, −1, 0, 1, 2, ... laid out on the line S.)
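A minimal simulation sketch (an addition; p, the number of steps, and the seed are arbitrary choices):

```python
import numpy as np

def random_walk(p=0.5, n_steps=100, seed=0):
    """Simulate a simple random walk on the integers, starting at 0."""
    rng = np.random.default_rng(seed)
    # +1 step with probability p, -1 step with probability 1 - p
    steps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])
    return np.concatenate(([0], np.cumsum(steps)))

print(random_walk(p=0.5, n_steps=10))
```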


Markov Chains - Examples


Represent each state by a node and each possible change from one state to another by a link.

Example 5: A Gambling Model

At each play, the gambler wins $1 with probability p and loses $1 with probability 1 − p.
P_{i,i+1} = p; P_{i,i−1} = 1 − p, for i = 1, 2, 3, ..., N − 1
P_00 = P_NN = 1: 0 and N are absorbing states.

The gambler quits if he goes broke or if he obtains a fortune of N.

(Diagram: states 0, 1, 2, ..., i−1, i, i+1, ..., N−1, N in a line, with forward links labeled p, backward links labeled q = 1 − p, and self-loops of probability 1 at the absorbing states 0 and N.)
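As an illustrative sketch (an addition; the starting fortune, p, and N below are arbitrary choices), the probability of going broke can be estimated by simulation:

```python
import numpy as np

def ruin_probability(p=0.4, start=5, N=10, n_trials=100_000, seed=0):
    """Estimate the probability of going broke before reaching fortune N."""
    rng = np.random.default_rng(seed)
    broke = 0
    for _ in range(n_trials):
        fortune = start
        while 0 < fortune < N:          # stop at the absorbing states 0 and N
            fortune += 1 if rng.random() < p else -1
        broke += (fortune == 0)
    return broke / n_trials

print(ruin_probability())  # ~0.884, matching the exact gambler's-ruin formula
```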


Chapman-Kolmogorov Equations

• n-step transition probabilities:

P_ij^(n) = P{X_{n+m} = j | X_m = i},   n, m ≥ 0, i, j ≥ 0

• Chapman-Kolmogorov equations:

P_ij^(n+m) = Σ_{k=0}^∞ P_ik^(n) P_kj^(m),   n, m ≥ 0, i, j ≥ 0

• Note that P_ik^(n) P_kj^(m) represents the probability that, starting in state i, the process will go to state j in (n + m) transitions through a path that takes it into state k at the nth transition.
• Let P^(n) be the matrix of n-step transition probabilities. Then P^(n+m) = P^(n) P^(m) and P^(n) = P^n.
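A quick numerical check of this identity (an addition, using the weather matrix from Example 6 below):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # weather matrix from Example 6

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)
P5 = np.linalg.matrix_power(P, 5)
# Chapman-Kolmogorov in matrix form: P^(n+m) = P^(n) P^(m)
assert np.allclose(P5, P2 @ P3)
```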


Chapman-Kolmogorov Equations: Application

• Given the transition matrix P and the initial state-probability vector a(0) = (a_j^(0)), j = 1, 2, ..., n,
let a(n) = (a_j^(n)), j = 1, 2, ..., n, be the state probabilities after n transitions. Then:
a(1) = a(0)·P
a(2) = a(1)·P = a(0)·P·P = a(0)·P²
a(3) = a(2)·P = a(0)·P²·P = a(0)·P³
...
a(n) = a(0)·Pⁿ
➔ To find a(n) after n transitions, compute Pⁿ and multiply a(0) by it.

Example 6

• Weather transition probability matrix:

P = | .7  .3 |
    | .4  .6 |

with i = 1: it rains; i = 2: it does not rain. The 4-step transition matrix is

P⁴ = | .5749  .4251 |
     | .5668  .4332 |

• Given that the probability it rains today is α₁ = 0.4 and the probability it does not rain today is α₂ = 0.6, what is the probability that it will rain after 4 days? We have α(0) = (.4, .6), so

α(4) = α(0)·P⁴ = (.4  .6) | .5749  .4251 | = (.57  .43)
                          | .5668  .4332 |

• What are the values of α(8)? α(16)?
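These can be computed directly; a short sketch (an addition, not from the slides):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
a0 = np.array([0.4, 0.6])

for n in (4, 8, 16):
    a_n = a0 @ np.linalg.matrix_power(P, n)
    # a(n) converges toward the limiting distribution (4/7, 3/7) ~ (0.5714, 0.4286)
    print(n, a_n)
```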


Example 7
• A one-year transition matrix is given for a gardener. The states are: 1 – good, 2 – fair, and 3 – poor.
• The initial condition is a(0) = (1, 0, 0). Determine the absolute probabilities of the three states of the system after 1, 8, and 16 gardening years.

           1      2      3
P¹ = 1    0.3    0.6    0.1
     2    0.1    0.6    0.3
     3    0.05   0.4    0.55

            1          2          3
P⁸ = 1    0.101753   0.525514   0.372733
     2    0.101702   0.525435   0.372863
     3    0.101669   0.525384   0.372863

             1          2          3
P¹⁶ = 1    0.101659   0.52546    0.372881
      2    0.101659   0.52546    0.372881
      3    0.101659   0.52546    0.372881

After 1 year: a(1) = a(0) × P¹ = (0.3, 0.6, 0.1)
After 8 years: a(8) = a(0) × P⁸ = (0.101753, 0.525514, 0.372733)
After 16 years: a(16) = a(0) × P¹⁶ = (0.101659, 0.52546, 0.372881)


Classification of States
• State j is accessible from state i if P_ij^(n) > 0 for some n ≥ 0.
• If j is accessible from i and i is accessible from j, we say that states i and j communicate (i ↔ j).
• Communication is a class property:
(i) State i communicates with itself, for all i ≥ 0.
(ii) If i ↔ j, then j ↔ i: communication is symmetric.
(iii) If i ↔ j and j ↔ k, then i ↔ k: communication is transitive.
• Therefore, communication divides the state space into mutually exclusive classes.
• If all the states communicate, the Markov chain is irreducible.


Classification of States
An irreducible Markov chain vs. a reducible Markov chain:

(Diagrams: two five-state graphs on states 0-4. In the irreducible chain every state communicates with every other state; in the reducible chain some states cannot be reached from others.)


Recurrence vs. Transience


• Let f_i be the probability that, starting in state i, the process will ever reenter state i. If f_i = 1, the state is recurrent; otherwise it is transient.
– If state i is recurrent then, starting from state i, the process will reenter state i infinitely often (with probability 1).
– If state i is transient then, starting in state i, the number of periods in which the process is in state i has a geometric distribution with parameter 1 − f_i.
• State i is recurrent if Σ_{n=1}^∞ P_ii^(n) = ∞ and transient if Σ_{n=1}^∞ P_ii^(n) < ∞.
• Recurrence (transience) is a class property ➔ if i is recurrent (transient) and i ↔ j, then j is recurrent (transient).
• A special case of a recurrent state: if P_ii = 1, then i is absorbing.

Recurrence vs. Transience (2)


• Not all states in a finite Markov chain can be transient.
• All states of a finite irreducible Markov chain are recurrent.
• If state i is recurrent and state i does not communicate with state j, then P_ij = 0.
– When a process enters a recurrent class of states, it can never leave that class.
– A recurrent class is therefore often called a closed class.


Limiting Probabilities
• If P_ii^(n) = 0 whenever n is not divisible by d, and d is the largest integer with this property, then state i is periodic with period d. For example, if a state cannot reenter itself after n = 1, 2, 3, 5, ... steps, but can reenter after 4, 8, 12, ... steps, then its period is d = 4.
• If a state has period d = 1, then it is aperiodic.
• If state i is recurrent and, starting in state i, the expected time until the process returns to state i is finite, then i is positive recurrent (otherwise it is null recurrent).
• A positive recurrent, aperiodic state is called ergodic: such a state has no period and is expected to reenter itself within finite time. This is the property that makes the state distribution stable after a long enough time.


Example 8
• We can test the periodicity of a state by computing Pⁿ and observing the values of p_ii^(n) for n = 2, 3, 4, ... These values are positive only at multiples of the state's period. For example, consider the following matrix:

          1     2     3
P  = 1    0    0.6   0.4
     2    0     1     0
     3   0.6   0.4    0

           1     2     3
P² = 1   0.24  0.76    0
     2    0     1      0
     3    0    0.76  0.24

           1      2      3
P³ = 1    0    0.904  0.096
     2    0      1      0
     3  0.144  0.856    0

            1       2       3
P⁴ = 1   0.0576  0.9424    0
     2     0       1       0
     3     0     0.9424  0.0576

            1        2        3
P⁵ = 1     0      0.97696  0.02304
     2     0        1        0
     3  0.03456   0.96544    0

• The results show that p₁₁^(n) and p₃₃^(n) are positive for even values of n and zero otherwise (we can confirm this observation by computing Pⁿ for n > 5). This means that each of states 1 and 3 has period d = 2.
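The same check in code (a sketch of my own, printing the diagonal of Pⁿ for small n):

```python
import numpy as np

P = np.array([[0.0, 0.6, 0.4],
              [0.0, 1.0, 0.0],
              [0.6, 0.4, 0.0]])

# The diagonal entries p_11^(n) and p_33^(n) are positive only for even n,
# revealing the period d = 2 of states 1 and 3.
for n in range(1, 9):
    print(n, np.diag(np.linalg.matrix_power(P, n)).round(4))
```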


Example 9
A transition matrix P is given; computing P¹⁰⁰ gives the following results:

         1    2    3               1   2   3
P = 1   0.2  0.5  0.3   P¹⁰⁰ = 1   0   0   1
    2    0   0.5  0.5          2   0   0   1
    3    0    0    1           3   0   0   1

States 1 and 2 are transient because the process can reach state 3 from them but can never be reached back from it. State 3 is absorbing because p₃₃ = 1. These classifications can also be seen by computing lim_{n→∞} p_ij^(n).

The result shows that, in the long run, the probability of reentering transient state 1 or 2 is zero, and the probability of being in the absorbing state 3 is certain.


Example 10
         0   1    2    3
P = 0    0   0   0.5  0.5
    1    1   0    0    0
    2    0   1    0    0
    3    0   1    0    0

All states are recurrent.

         0     1     2    3    4
P = 0   0.5   0.5    0    0    0
    1   0.5   0.5    0    0    0
    2    0     0    0.5  0.5   0
    3    0     0    0.5  0.5   0
    4   0.25  0.25   0    0   0.5

Classes {0, 1} and {2, 3} are recurrent; class {4} is transient.

         0    1    2   3
P = 0    0    0    0   1
    1    0    0    0   1
    2   0.5  0.5   0   0
    3    0    0    1   0

Irreducible ➔ all states are recurrent.


Limiting Probabilities (2)


Theorem:
• For an irreducible ergodic Markov chain, π_j = lim_{n→∞} P_ij^(n) exists for all j and is independent of i.
• Furthermore, π_j is the unique nonnegative solution of

π_j = Σ_{i=0}^∞ π_i P_ij,   j ≥ 0   (π = π·P)

Σ_{j=0}^∞ π_j = 1

• The probability π_j is the long-run proportion of time that the process is in state j.


Example 11

P = | α   1−α |
    | β   1−β |

Limiting probabilities:

π₀ = π₀·α + π₁·β
π₁ = π₀(1 − α) + π₁(1 − β)
π₀ + π₁ = 1

⇒ π₀ = β / (1 − α + β);   π₁ = (1 − α) / (1 − α + β)


Limiting Probabilities (3)


• The long-run proportions π_j are also called stationary probabilities, because if P{X₀ = j} = π_j, then P{X_n = j} = π_j for all n, j ≥ 0.

• Let m_jj be the expected number of transitions until the Markov chain, starting in state j, returns to state j (finite if state j is positive recurrent). Then

m_jj = 1 / π_j

This is known as the mean first return time or the mean recurrence time.


Example 12
• Consider the gardener transition matrix P and find the limiting probabilities.

          1     2     3
P = 1    0.3   0.6   0.1
    2    0.1   0.6   0.3
    3    0.05  0.4   0.55

We have π = π·P, i.e., (π₁, π₂, π₃) = (π₁, π₂, π₃)·P, or:
π₁ = 0.3π₁ + 0.1π₂ + 0.05π₃
π₂ = 0.6π₁ + 0.6π₂ + 0.4π₃
π₃ = 0.1π₁ + 0.3π₂ + 0.55π₃
π₁ + π₂ + π₃ = 1

• Solving the above system of equations, we obtain π₁ = 0.1017, π₂ = 0.5254, π₃ = 0.3729. This means that in the long run the system will be in state 1 about 10% of the time, in state 2 about 52% of the time, and in state 3 about 37% of the time.
• The mean recurrence times are: μ₁₁ = 1/0.1017 = 9.83, μ₂₂ = 1/0.5254 = 1.9, μ₃₃ = 1/0.3729 = 2.68.
The results show that it takes approximately 9.83 years (~10 years) for the system to return to state 1, approximately 2 years to return to state 2, and approximately 2.68 years to return to state 3.


Mean First Passage Time of Recurrent States


• For an ergodic Markov chain, it is possible to go from one state to another in a finite number of transitions. Hence:

Σ_{k=1}^∞ f_ij^(k) = 1

where f_ij^(k) is the probability of going from i to j for the first time in exactly k transitions.

• Mean first passage time: μ_ij = Σ_{k=1}^∞ k·f_ij^(k)

• The mean first passage times can be found by solving:

μ_ij = 1 + Σ_{k≠j} P_ik·μ_kj


Example 13

          1     2     3     4
P = 1     0    0.5   0.5    0
    2     0     0     1     0
    3     0    0.25  0.5   0.25
    4     1     0     0     0

Find the mean first passage times to state 4 from states 1, 2, and 3:

μ₁₄ = 1 + 0·μ₁₄ + 0.5·μ₂₄ + 0.5·μ₃₄
μ₂₄ = 1 + 0·μ₁₄ + 0·μ₂₄ + 1·μ₃₄
μ₃₄ = 1 + 0·μ₁₄ + 0.25·μ₂₄ + 0.5·μ₃₄

⇒ μ₁₄ = 6.5; μ₂₄ = 6; μ₃₄ = 5
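The same linear system in matrix form (a sketch of my own): restricting P to the non-target states gives (I − Q)μ = 1.

```python
import numpy as np

P = np.array([[0.0, 0.5,  0.5, 0.0],
              [0.0, 0.0,  1.0, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [1.0, 0.0,  0.0, 0.0]])

# mu_i4 = 1 + sum_{k != 4} P_ik mu_k4  <=>  (I - Q) mu = 1,
# where Q is P restricted to the non-target states {1, 2, 3}.
Q = P[:3, :3]
mu = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(mu)  # [6.5, 6.0, 5.0]
```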


Exercises

• Book: Operations Research – An Introduction, Hamdy A. Taha, pages 642–647
• Problems: 1, 2, 4, 9, 10, 11, 13, 18, 23
