Tutorial 8: Markov Chains

Markov Chains
Consider a sequence of random variables $X_0, X_1, \ldots$, where the set of
possible values of these random variables is $\{0, 1, \ldots, M\}$.

$X_n$: the state of some system at time $n$.

$X_n = i$: the system is in state $i$ at time $n$, where $0 \le i \le M$.
Markov Chains
$X_0, X_1, \ldots$ form a Markov chain if

$$P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0\}
  = P\{X_{n+1} = j \mid X_n = i\} = P_{ij}$$

$P_{ij}$ = transition probability = the probability that the system,
currently in state $i$, will next be in state $j$.
Transition Matrix
The transition probabilities $P_{ij}$ are collected into the transition
matrix $P$:

$$P = \begin{pmatrix}
P_{00} & P_{01} & \cdots & P_{0M} \\
P_{10} & P_{11} & \cdots & P_{1M} \\
\vdots & \vdots & \ddots & \vdots \\
P_{M0} & P_{M1} & \cdots & P_{MM}
\end{pmatrix}$$

where $P_{ij} \ge 0$ and $\sum_{j=0}^{M} P_{ij} = 1$ for $i = 0, 1, \ldots, M$.
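To make this concrete, here is a minimal sketch in Python/NumPy (not part of
the original tutorial; the 3-state matrix below is a made-up example) that
checks the row constraints and simulates a few steps of a chain:

import numpy as np

rng = np.random.default_rng(seed=0)

# A made-up 3-state transition matrix (states 0, 1, 2).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# The defining constraints: P_ij >= 0 and every row sums to 1.
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)

# Simulate: the next state depends only on the current state
# (the Markov property), drawn from row X_n of P.
state = 0
path = [state]
for _ in range(10):
    state = rng.choice(len(P), p=P[state])
    path.append(state)
print(path)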
Example 1
Suppose that whether or not it rains tomorrow depends on previous weather
conditions only through whether or not it is raining today. If it rains
today, then it will rain tomorrow with probability 0.7; and if it does not
rain today, then it will not rain tomorrow with probability 0.6.
Example 1
Let state 0 be a rainy day and state 1 be a sunny day. The above is a
two-state Markov chain having transition probability matrix

$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$

If the starting distribution on Day 0 is $u = [1 \;\; 0]$:

distribution on Day 1: $u^{(1)} = uP
  = [1 \;\; 0]\begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}
  = [0.7 \;\; 0.3]$

distribution on Day 2: $u^{(2)} = u^{(1)}P = uP^2 = [0.61 \;\; 0.39]$
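As a quick numerical check of these hand calculations, a short NumPy sketch
(assuming the state ordering 0 = rainy, 1 = sunny used above):

import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
u = np.array([1.0, 0.0])   # Day 0: certainly rainy

u1 = u @ P                 # Day 1 distribution
u2 = u1 @ P                # Day 2 distribution
print(u1)  # [0.7 0.3]
print(u2)  # [0.61 0.39]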
Transition Matrix

The probability that the chain is in state $i$ after $n$ steps is the $i$th
entry in the vector

$$u^{(n)} = uP^n$$

where $P$ is the transition matrix of the Markov chain and $u$ is the
probability vector representing the starting distribution.
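The same rule as a small general-purpose sketch (the helper name
distribution_after is mine, not from the tutorial):

import numpy as np

def distribution_after(u, P, n):
    """Distribution u^(n) = u P^n after n steps."""
    return u @ np.linalg.matrix_power(P, n)

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
u = np.array([1.0, 0.0])
print(distribution_after(u, P, 2))  # [0.61 0.39]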
Ergodic Markov Chains
A Markov chain is called an ergodic chain
(irreducible chain) if it is possible to go
from every state to every state (not
necessarily in one move).
A Markov chain is called a regular chain if
some power of the transition matrix has
only positive elements.
Regular Markov Chains
For a regular Markov chain with transition matrix $P$ and

$$W = \lim_{n \to \infty} P^n,$$

let $t = [t_0 \;\; t_1 \;\; \ldots]$ be the common row of $W$; it satisfies
$t = tP$. The $i$th entry in the vector $t$ is the long-run probability of
state $i$.
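This limit can be observed numerically: raising $P$ to a large power gives a
matrix whose rows are all approximately the common row $t$. A quick NumPy
sketch using the Example 1 matrix:

import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# For a regular chain, P^n approaches W, whose rows all equal t.
W = np.linalg.matrix_power(P, 50)
print(W)
# Both rows are approximately [4/7, 3/7] ~ [0.5714, 0.4286]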
Example 2
From Example 1, the transition matrix is

$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$

Solving $tP = t$:

$$[t_1 \;\; t_2]\begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}
  = [t_1 \;\; t_2], \quad \text{where } t_1 + t_2 = 1$$

$$0.7t_1 + 0.4t_2 = t_1$$
$$0.3t_1 + 0.6t_2 = t_2$$

$$\Rightarrow \quad t_1 = 4/7, \quad t_2 = 3/7$$

The long-run probability of a rainy day is 4/7.
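The same system can be solved numerically: replace one of the redundant
equations from $tP = t$ with the normalization $t_1 + t_2 = 1$ and solve the
resulting linear system. A sketch:

import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
n = P.shape[0]

# t P = t  <=>  (P^T - I) t^T = 0; replace the last equation with
# the normalization sum(t) = 1 to pin down the unique solution.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0

t = np.linalg.solve(A, b)
print(t)  # [0.5714 0.4286] = [4/7, 3/7]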
Markov Chain with Absorbing States

Example: given the transition matrix

$$P = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0.1 & 0.3 & 0.2 & 0.4 \\
0.4 & 0.3 & 0.1 & 0.2 \\
0 & 0 & 0 & 1
\end{pmatrix}$$

(states 0 and 3 are absorbing; states 1 and 2 are transient), calculate

(i) the expected time to absorption;
(ii) the absorption probabilities.
MC with Absorbing States

First rewrite the transition matrix in the canonical form

$$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}$$

Ordering the states as 1, 2, 0, 3 (transient states first):

$$P = \begin{pmatrix}
0.3 & 0.2 & 0.1 & 0.4 \\
0.3 & 0.1 & 0.4 & 0.2 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}$$

$N = (I - Q)^{-1}$ is called the fundamental matrix of $P$. Its entries are

$$n_{ij} = E(\text{time in transient state } j \mid \text{start at transient state } i)$$
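A sketch computing the fundamental matrix for this example ($Q$ is the
transient-to-transient block, states 1 and 2 in that order):

import numpy as np

# Transient-to-transient block Q from the canonical form above.
Q = np.array([[0.3, 0.2],
              [0.3, 0.1]])

# Fundamental matrix N = (I - Q)^{-1}.
N = np.linalg.inv(np.eye(2) - Q)
print(N)
# [[1.5789 0.3509]
#  [0.5263 1.2281]]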
MC with Absorbing States

(i) $E(\text{time to absorption} \mid \text{start at } i)$ is the $i$th entry of

$$(I - Q)^{-1}\begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix}$$

Here

$$N = (I - Q)^{-1} = \begin{pmatrix} 1.5789 & 0.3509 \\ 0.5263 & 1.2281 \end{pmatrix}$$

$$(I - Q)^{-1}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
  = \begin{pmatrix} 1.5789 & 0.3509 \\ 0.5263 & 1.2281 \end{pmatrix}
    \begin{pmatrix} 1 \\ 1 \end{pmatrix}
  = \begin{pmatrix} 1.9298 \\ 1.7544 \end{pmatrix}$$

So the expected time to absorption is 1.9298 steps starting from state 1 and
1.7544 steps starting from state 2.
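Numerically, these expected absorption times are just $N$ applied to a vector
of ones; a sketch continuing the example:

import numpy as np

Q = np.array([[0.3, 0.2],
              [0.3, 0.1]])
N = np.linalg.inv(np.eye(2) - Q)

# Expected time to absorption from each transient state: N @ 1.
print(N @ np.ones(2))  # [1.9298 1.7544]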
MC with Absorbing States

(ii) Absorption probabilities: $B = NR$, where

$$b_{ij} = P(\text{absorbed in absorbing state } j \mid \text{start at transient state } i)$$

$$B = (I - Q)^{-1}R
  = \begin{pmatrix} 1.5789 & 0.3509 \\ 0.5263 & 1.2281 \end{pmatrix}
    \begin{pmatrix} 0.1 & 0.4 \\ 0.4 & 0.2 \end{pmatrix}
  = \begin{pmatrix} 0.2983 & 0.7017 \\ 0.5439 & 0.4561 \end{pmatrix}$$

For example, starting from transient state 1, the chain is absorbed in state
0 with probability 0.2983 and in state 3 with probability 0.7017.
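And a sketch of the corresponding computation (columns of $R$ correspond to
the absorbing states 0 and 3, in that order):

import numpy as np

Q = np.array([[0.3, 0.2],
              [0.3, 0.1]])
# Transient-to-absorbing block R from the canonical form.
R = np.array([[0.1, 0.4],
              [0.4, 0.2]])

N = np.linalg.inv(np.eye(2) - Q)
B = N @ R   # b_ij = P(absorbed in state j | start in transient state i)
print(B)
# [[0.2983 0.7017]
#  [0.5439 0.4561]]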