10.1 Properties of Markov Chains
Markov chains are named after the Russian mathematician Andrei Andreyevich Markov.
Markov is particularly remembered for his study of Markov chains: sequences of random variables in which the future variable is determined by the present variable but is independent of the way in which the present state arose from its predecessors. This work launched the theory of stochastic processes.
Transition probability matrix: rows indicate the current state and columns indicate the next state. For example, given the current state A, the probability of going to the next state A is s; given the current state A′, the probability of going from that state to A is r. Notice that the rows sum to 1. We will call this matrix P:

          A      A′
P = A  [  s    1−s ]
    A′ [  r    1−r ]
Initial-state distribution matrix:

        A    A′
S0 = [  t  1−t ]
First and second state matrices:

S1 = S0 P
S2 = S1 P = (S0 P) P = S0 P^2

kth state matrix:

Sk = Sk−1 P = S0 P^k
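The recursion above can be sketched in NumPy; the values of t, s, and r below are placeholders for illustration, not values from the text:

```python
import numpy as np

def state_matrix(S0, P, k):
    """Return the kth state matrix S_k = S_0 P^k."""
    return S0 @ np.linalg.matrix_power(P, k)

# Placeholder values for t, s, r (illustration only).
t, s, r = 0.5, 0.6, 0.2
S0 = np.array([t, 1 - t])          # initial-state distribution
P = np.array([[s, 1 - s],          # row for current state A
              [r, 1 - r]])         # row for current state A'
S2 = state_matrix(S0, P, 2)        # same as (S0 @ P) @ P
```

Note that S2 is still a probability distribution: its entries sum to 1.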
An example: An insurance company classifies drivers as low-risk if they are accident-free for one year. Past records indicate that 98% of the drivers in the low-risk category (L) will remain in that category the next year, and 78% of the drivers who are not in the low-risk category (L′) one year will be in the low-risk category the next year.
          L     L′
P = L  [ 0.98  0.02 ]
    L′ [ 0.78  0.22 ]

With initial-state matrix S0 = [0.90 0.10]:

S4 = S0 P^4 = [0.90 0.10] P^4 = [0.97488 0.02512]

After four transitions, the proportion of low-risk drivers has increased to 0.97488.
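The arithmetic above can be checked numerically; this is a NumPy sketch, not part of the original notes:

```python
import numpy as np

S0 = np.array([0.90, 0.10])             # 90% of drivers start low-risk
P = np.array([[0.98, 0.02],
              [0.78, 0.22]])
S4 = S0 @ np.linalg.matrix_power(P, 4)  # S4 ≈ [0.97488, 0.02512]
```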
10.2 Regular Markov Chains
In this section, we will study what happens to the entries in the kth state matrix as the number of trials increases. We wish to determine the long-run behavior of both the state matrices and the powers of the transition matrix P.
Stationary Matrix
S4 = S0 P^4 = [0.90 0.10] P^4 = [0.97488 0.02512]

If we calculated the 5th, 6th, …, kth state matrices, we would find that they approach the limiting matrix [0.975 0.025]. This final matrix is called a stationary matrix.
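Iterating the state matrices confirms this convergence claim; a NumPy sketch, assuming 20 steps is more than enough for six-decimal agreement:

```python
import numpy as np

P = np.array([[0.98, 0.02],
              [0.78, 0.22]])
S = np.array([0.90, 0.10])
for _ in range(20):        # repeatedly apply S_k = S_{k-1} P
    S = S @ P
# S is now numerically indistinguishable from [0.975, 0.025]
```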
Consider the transition matrix

D = [ 0.3  0.7 ]
    [ 1    0   ]

Its powers are

D^2 = [ 0.79   0.21 ]
      [ 0.30   0.70 ]

D^3 = [ 0.447  0.553 ]
      [ 0.79   0.21  ]

Since D^2 has all positive entries, D is a regular transition matrix.
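These powers can be reproduced directly (a NumPy sketch, not part of the original notes):

```python
import numpy as np

D = np.array([[0.3, 0.7],
              [1.0, 0.0]])
D2 = np.linalg.matrix_power(D, 2)   # [[0.79, 0.21], [0.30, 0.70]]
D3 = np.linalg.matrix_power(D, 3)   # [[0.447, 0.553], [0.79, 0.21]]
```

Every entry of D2 is positive, which is exactly the regularity condition.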
Properties of Regular Markov chains
For a regular Markov chain, the stationary matrix S satisfies SP = S.
Finding the stationary matrix. We will find the stationary matrix for the matrix we have just identified as regular. Let S = [s1 s2]. Then SP = S requires

[s1 s2] [ 0.3  0.7 ] = [s1 s2]
        [ 1    0   ]

[0.3 s1 + s2   0.7 s1 + 0] = [s1 s2]

0.3 s1 + s2 = s1  →  s2 = 0.7 s1

Since s1 + s2 = 1:

s1 + 0.7 s1 = 1  →  s1 ≈ 0.5882, s2 ≈ 0.4118

Thus, S = [0.5882 0.4118].
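The same system can be solved numerically; below, the condition s2 = 0.7 s1 from SP = S is combined with s1 + s2 = 1 into one linear system (a sketch, not from the notes):

```python
import numpy as np

# s2 = 0.7*s1 rewritten as -0.7*s1 + s2 = 0, plus s1 + s2 = 1.
A = np.array([[-0.7, 1.0],
              [ 1.0, 1.0]])
b = np.array([0.0, 1.0])
s1, s2 = np.linalg.solve(A, b)      # s1 ≈ 0.5882, s2 ≈ 0.4118

P = np.array([[0.3, 0.7],
              [1.0, 0.0]])
S = np.array([s1, s2])              # sanity check: S P = S
```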
Limiting matrix P to the power k
As k increases, P^k approaches the limiting matrix

P* = [ 0.5882  0.4118 ]
     [ 0.5882  0.4118 ]

each row of which equals the stationary matrix S = [0.5882 0.4118].
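Raising P to a large power shows both rows converging to S; a NumPy sketch, where k = 50 is just an arbitrarily large exponent:

```python
import numpy as np

P = np.array([[0.3, 0.7],
              [1.0, 0.0]])
P_star = np.linalg.matrix_power(P, 50)  # both rows ≈ [0.5882, 0.4118]
```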
10.3 Absorbing Markov Chains
         A    B    C
    A [ 0.5  0    0.5 ]
P = B [ 0    1    0   ]
    C [ 0    0.5  0.5 ]

We see that B is an absorbing state: once the chain enters B, it stays in B with probability 1.
In a transition matrix, if all the absorbing states precede all the non-absorbing states, the matrix is said to be in standard form. Standard forms are very useful in determining limiting matrices for absorbing Markov chains.
         A    B    C
    A [ 1    0    0   ]
P = B [ 0    1    0   ]
    C [ 0.1  0.4  0.5 ]
Initial state matrix; successive state matrices:

S10 = S0 P^10 = [0 0 1] P^10 ≈ [0.1998 0.7992 0.0010]
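Computing S10 directly (a NumPy sketch) recovers all three entries, including the small mass still remaining in the non-absorbing state C:

```python
import numpy as np

P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.1, 0.4, 0.5]])
S0 = np.array([0.0, 0.0, 1.0])            # start in state C
S10 = S0 @ np.linalg.matrix_power(P, 10)
# S10 ≈ [0.1998, 0.7992, 0.0010]; the C entry is 0.5**10 ≈ 0.001
```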
Theorem 2: Limiting Matrices for Absorbing Markov Chains