ESI 4313 Operations Research 2: Markov Chains Basics
Recall the first Smalltown example, with transition probability matrix
$$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$$
and the gambler's ruin example (Example 2) with p = ½, whose transition probability matrix is
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ \tfrac12 & 0 & \tfrac12 & 0 & 0 \\ 0 & \tfrac12 & 0 & \tfrac12 & 0 \\ 0 & 0 & \tfrac12 & 0 & \tfrac12 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
Example 2
Some n-step transition probability matrices are:
$$P^2 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ \tfrac12 & \tfrac14 & 0 & \tfrac14 & 0 \\ \tfrac14 & 0 & \tfrac12 & 0 & \tfrac14 \\ 0 & \tfrac14 & 0 & \tfrac14 & \tfrac12 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \qquad P^3 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ \tfrac58 & 0 & \tfrac14 & 0 & \tfrac18 \\ \tfrac14 & \tfrac14 & 0 & \tfrac14 & \tfrac14 \\ \tfrac18 & 0 & \tfrac14 & 0 & \tfrac58 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
Example 2
Some more n-step transition probability matrices are:
$$P^{10} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ .73 & .02 & 0 & .02 & .23 \\ .485 & 0 & .03 & 0 & .485 \\ .23 & .02 & 0 & .02 & .73 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \qquad P^{15} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ .75 & 0 & 0 & 0 & .25 \\ .50 & 0 & 0 & 0 & .50 \\ .25 & 0 & 0 & 0 & .75 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
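These powers are easy to verify numerically. Below is a minimal Python sketch (not from the slides; it assumes numpy is available) that builds the p = ½ transition matrix and prints the n-step matrices shown above:

```python
import numpy as np

# Transition matrix of the gambler's ruin chain with p = 1/2;
# states 0 and 4 are absorbing.
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

# The n-step transition matrix is simply the n-th matrix power of P.
for n in (2, 3, 10, 15):
    print(f"P^{n} =\n{np.linalg.matrix_power(P, n).round(3)}\n")
```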
Long-run behavior
Again, it seems that, as n grows large, the matrix $P^n$ hardly changes
However, in this case the rows of the matrix do not approach the same values
Can you explain this?
Example 2
Next, let us look at the case p=¼
The transition probability matrix is then:
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ \tfrac34 & 0 & \tfrac14 & 0 & 0 \\ 0 & \tfrac34 & 0 & \tfrac14 & 0 \\ 0 & 0 & \tfrac34 & 0 & \tfrac14 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
Example 2
Some n-step transition probability matrices are:
$$P^2 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ .75 & .19 & 0 & .06 & 0 \\ .56 & 0 & .38 & 0 & .06 \\ 0 & .56 & 0 & .19 & .25 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \qquad P^3 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ .89 & 0 & .09 & 0 & .02 \\ .56 & .28 & 0 & .10 & .06 \\ .42 & 0 & .28 & 0 & .30 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
Example 2
Some more n-step transition probability matrices are:
$$P^{10} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ .97 & .005 & 0 & 0 & .025 \\ .89 & 0 & .01 & 0 & .10 \\ .66 & .01 & 0 & .01 & .32 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \qquad P^{15} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ .975 & 0 & 0 & 0 & .025 \\ .90 & 0 & 0 & 0 & .10 \\ .675 & 0 & 0 & 0 & .325 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
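The same check works for p = ¼. A small sketch (again assuming numpy; the helper name gamblers_ruin is mine, not the course's):

```python
import numpy as np

def gamblers_ruin(p, n_states=5):
    """Transition matrix for a gambler's ruin chain: win a bet (move up)
    with probability p, lose (move down) with probability 1 - p."""
    P = np.zeros((n_states, n_states))
    P[0, 0] = P[-1, -1] = 1.0          # boundary states are absorbing
    for i in range(1, n_states - 1):
        P[i, i + 1] = p
        P[i, i - 1] = 1 - p
    return P

# Rows 1-3 of P^15 approach (.975, .025), (.90, .10), (.675, .325):
print(np.linalg.matrix_power(gamblers_ruin(0.25), 15).round(3))
```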
Classification of states and Markov chains
In terms of their limiting behavior, there seem to be different types of Markov chains
We will next discuss a useful way to classify different states and Markov chains
Communicating states
We say that two states communicate if they are
reachable from each other
Limiting behavior
Write $\pi_j = \lim_{n \to \infty} (P^n)_{ij}$ for the limiting probabilities
As $n \to \infty$, the relation $(P^{n+1})_{ij} = \sum_{k=1}^{s} (P^n)_{ik}\, p_{kj}$ becomes
$$\pi_j = \sum_{k=1}^{s} \pi_k\, p_{kj}$$
In matrix notation, $\pi = \pi P$
Limiting behavior
But… $\pi = \pi P$ has $\pi = 0$ as a solution…
In fact, it has an infinite number of solutions (because $I - P$ is singular – why?)
Fortunately, we know that $\pi$ should be a probability distribution, so not all solutions are meaningful!
We should ensure that
$$\sum_{i=1}^{s} \pi_i = 1$$
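In practice one solves $\pi(I - P) = 0$ with one balance equation replaced by the normalization constraint. Here is a minimal sketch of that approach (assuming numpy; steady_state is a name chosen for illustration):

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1."""
    s = P.shape[0]
    A = (np.eye(s) - P).T    # transpose so the unknown pi is a column vector
    A[-1, :] = 1.0           # replace the last (redundant) equation by sum(pi) = 1
    b = np.zeros(s)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```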
Example 3
Recall the 1st Smalltown example:
Solving $\pi = \pi P$ together with $\pi_1 + \pi_2 = 1$ gives $\pi_1 = \tfrac23$, $\pi_2 = \tfrac13$
Compare with
$$P^{20} = \begin{pmatrix} 0.67 & 0.33 \\ 0.67 & 0.33 \end{pmatrix}$$
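Using the steady_state helper sketched above, the Smalltown numbers can be reproduced as follows:

```python
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(steady_state(P))                         # -> [0.6667 0.3333]
print(np.linalg.matrix_power(P, 20).round(2))  # both rows approach (0.67, 0.33)
```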
Mean first passage time
Consider an ergodic Markov chain, and suppose we are currently in state i
What is the expected number of transitions until we reach state j?
This is called the mean first passage time from state i to state j and is denoted by $m_{ij}$
For example, in the Smalltown weather example, $m_{12}$ would be the expected number of days until the first cloudy day, given that it is currently sunny
How can we compute these quantities?
Mean first passage time
We are currently in state i
In the next transition, we will go to some state k
If k = j, the first passage time from i to j is 1
If k ≠ j, the mean first passage time from i to j is $1 + m_{kj}$
So:
$$m_{ij} = p_{ij} \cdot 1 + \sum_{\substack{k=1 \\ k \neq j}}^{s} p_{ik}\,(1 + m_{kj}) = \sum_{k=1}^{s} p_{ik} + \sum_{\substack{k=1 \\ k \neq j}}^{s} p_{ik}\, m_{kj} = 1 + \sum_{\substack{k=1 \\ k \neq j}}^{s} p_{ik}\, m_{kj}$$
Mean first passage time
We can thus find all mean first passage times by solving the following system of equations:
$$m_{ij} = 1 + \sum_{\substack{k=1 \\ k \neq j}}^{s} p_{ik}\, m_{kj} \qquad i = 1, \ldots, s;\ j = 1, \ldots, s$$
What is $m_{ii}$?
The mean number of transitions until we return to state i
This is equal to $1/\pi_i$!
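For a small chain, this system can be solved one target state j at a time, since for fixed j the equations in $m_{1j}, \ldots, m_{sj}$ are linear. A sketch (assuming numpy; mean_first_passage is an illustrative name):

```python
import numpy as np

def mean_first_passage(P):
    """Mean first passage times m[i][j] for an ergodic chain, from
    m_ij = 1 + sum over k != j of p_ik * m_kj."""
    s = P.shape[0]
    M = np.zeros((s, s))
    for j in range(s):
        Pj = P.copy()
        Pj[:, j] = 0.0   # zero column j, so the sum excludes k = j
        M[:, j] = np.linalg.solve(np.eye(s) - Pj, np.ones(s))
    return M

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(mean_first_passage(P))   # [[1.5 10.], [5. 3.]]
```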
Example 3
Recall the 1st Smalltown example:
$$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$$
The steady-state probabilities are $\pi_1 = 2/3$ and $\pi_2 = 1/3$
Example 3
Thus:
$m_{11} = 1/\pi_1 = 1/(2/3) = 1\tfrac12$
$m_{22} = 1/\pi_2 = 1/(1/3) = 3$
And $m_{12}$ and $m_{21}$ satisfy:
$$m_{12} = 1 + p_{11}\, m_{12} = 1 + 0.9\, m_{12} \quad\Rightarrow\quad m_{12} = 10$$
$$m_{21} = 1 + p_{22}\, m_{21} = 1 + 0.8\, m_{21} \quad\Rightarrow\quad m_{21} = 5$$
Absorbing chains
For an absorbing chain whose transition matrix is written in the canonical form
$$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}$$
the limiting behavior is
$$\lim_{n \to \infty} P^n = \lim_{n \to \infty} \begin{pmatrix} Q^n & (I + Q + \cdots + Q^{n-1})R \\ 0 & I \end{pmatrix} = \begin{pmatrix} 0 & (I - Q)^{-1}R \\ 0 & I \end{pmatrix}$$
Example
So the probability that a student who starts as a freshman will eventually graduate is the (1, 6 − 4 = 2) element of $(I - Q)^{-1}R$
In the example:
$$(I - Q)^{-1}R = \begin{pmatrix} 0.25 & 0.75 \\ 0.16 & 0.84 \\ 0.11 & 0.89 \\ 0.06 & 0.94 \end{pmatrix}$$
so this probability is 0.75
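The same computation can be checked on the gambler's ruin chain with p = ¼ from Example 2, whose canonical form we can read off directly (a sketch assuming numpy):

```python
import numpy as np

# Gambler's ruin with p = 1/4: transient states 1-3, absorbing states 0 and 4.
Q = np.array([[0.0,  0.25, 0.0 ],
              [0.75, 0.0,  0.25],
              [0.0,  0.75, 0.0 ]])
R = np.array([[0.75, 0.0 ],      # columns: absorbed at 0 (ruin), at 4 (win)
              [0.0,  0.0 ],
              [0.0,  0.25]])

# Absorption probabilities (I - Q)^{-1} R, computed via a linear solve.
print(np.linalg.solve(np.eye(3) - Q, R).round(3))
# -> rows (.975, .025), (.90, .10), (.675, .325), matching P^15 above
```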