
10.1 Properties of Markov Chains

In this section, we will study a mathematical model that combines probability and matrices to analyze what is called a stochastic process: a sequence of trials satisfying certain conditions. Such a sequence of trials is called a Markov chain, named after the Russian mathematician Andrei Markov (1856-1922).

http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Markov.html

Markov is particularly remembered for his study of Markov chains: sequences of random variables in which the future variable is determined by the present variable but is independent of the way in which the present state arose from its predecessors. This work launched the theory of stochastic processes.
Transition probability matrix: the rows indicate the current state and the columns indicate the next state. For example, given the current state A, the probability of going to state A on the next trial is s. Given the current state A', the probability of going from that state to A is r. Notice that each row sums to 1. We will call this matrix P:

$P = \begin{bmatrix} s & 1-s \\ r & 1-r \end{bmatrix}$ (rows and columns in the order A, A')

Initial-state distribution matrix: this gives the initial probabilities of being in state A and in its complement A'. Notice again that the row probabilities sum to one, as they should:

$S_0 = \begin{bmatrix} t & 1-t \end{bmatrix}$ (columns in the order A, A')
First and second state matrices:

If we multiply the initial state matrix by the transition matrix, we obtain the first state matrix:

$S_1 = S_0 P$

If the first state matrix is multiplied by the transition matrix, we obtain the second state matrix:

$S_2 = S_1 P = S_0 P \cdot P = S_0 P^2$
kth state matrix

If this process is repeated, we obtain the following expression. The entry in the i-th row and j-th column of $P^k$ indicates the probability of the system moving from the i-th state to the j-th state in k observations or trials:

$S_k = S_{k-1} P = S_0 P^k$
An example: An insurance company classifies drivers as low-risk if they are accident-free for one year. Past records indicate that 98% of the drivers in the low-risk category (L) will remain in that category the next year, and 78% of the drivers who are not in the low-risk category (L') one year will be in the low-risk category the next year.

1. Find the transition matrix P.

$P = \begin{bmatrix} 0.98 & 0.02 \\ 0.78 & 0.22 \end{bmatrix}$ (rows and columns in the order L, L')

2. If 90% of the drivers in the community are in the low-risk category this year, what is the probability that a driver chosen at random from the community will be in the low-risk category next year? The year after next? (Answer: 0.96 and 0.972, from the state matrices.)

$S_0 = \begin{bmatrix} 0.90 & 0.10 \end{bmatrix}$

$S_1 = S_0 P = \begin{bmatrix} 0.96 & 0.04 \end{bmatrix}$

$S_2 = S_1 P = S_0 P^2 = \begin{bmatrix} 0.972 & 0.028 \end{bmatrix}$

(columns in the order L, L')
Finding the kth state matrix

Use the formula $S_k = S_0 P^k$ to find the 4th state matrix for the previous problem:

$S_4 = S_0 P^4 = \begin{bmatrix} 0.90 & 0.10 \end{bmatrix} \begin{bmatrix} 0.98 & 0.02 \\ 0.78 & 0.22 \end{bmatrix}^4 = \begin{bmatrix} 0.97488 & 0.02512 \end{bmatrix}$

After four trials, the proportion of low-risk drivers has increased to 0.97488.
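As a quick numerical check, here is a minimal sketch in Python (using NumPy, which is not part of the original notes) that reproduces the state matrices above:

```python
import numpy as np

P = np.array([[0.98, 0.02],   # row L:  P(L -> L),  P(L -> L')
              [0.78, 0.22]])  # row L': P(L' -> L), P(L' -> L')
S0 = np.array([0.90, 0.10])   # initial distribution over (L, L')

S1 = S0 @ P                              # [0.96, 0.04]
S2 = S1 @ P                              # [0.972, 0.028]
S4 = S0 @ np.linalg.matrix_power(P, 4)   # [0.97488, 0.02512]
print(S1, S2, S4)
```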
10.2 Regular Markov Chains

In this section, we will study what happens to the entries in the kth state matrix as the number of trials increases. We wish to determine the long-run behavior of both the state matrices and the powers of the transition matrix P.
Stationary Matrix

When we computed the fourth state matrix of the previous problem, we saw that the entries appeared to be approaching fixed values. Recall,

$S_4 = S_0 P^4 = \begin{bmatrix} 0.90 & 0.10 \end{bmatrix} \begin{bmatrix} 0.98 & 0.02 \\ 0.78 & 0.22 \end{bmatrix}^4 = \begin{bmatrix} 0.97488 & 0.02512 \end{bmatrix}$

If we calculated the 5th, 6th, and subsequent state matrices, we would find that they approach the limiting matrix $\begin{bmatrix} 0.975 & 0.025 \end{bmatrix}$. This final matrix is called a stationary matrix.
Stationary Matrix

The stationary matrix S for a Markov chain with transition matrix P has the property that SP = S.

To prove that $\begin{bmatrix} 0.975 & 0.025 \end{bmatrix}$ is the stationary matrix, we need to show that SP = S:

$\begin{bmatrix} 0.975 & 0.025 \end{bmatrix} \begin{bmatrix} 0.98 & 0.02 \\ 0.78 & 0.22 \end{bmatrix} = \begin{bmatrix} 0.975 & 0.025 \end{bmatrix}$

Upon multiplication, we find the above statement to be true, so the stationary matrix is $\begin{bmatrix} 0.975 & 0.025 \end{bmatrix}$.
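The property SP = S can be checked in a couple of lines; again a sketch assuming NumPy:

```python
import numpy as np

P = np.array([[0.98, 0.02],
              [0.78, 0.22]])
S = np.array([0.975, 0.025])

# For a stationary matrix, multiplying by P leaves S unchanged.
print(S @ P)                  # [0.975, 0.025]
print(np.allclose(S @ P, S))  # True
```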
Stationary matrix concept

What this means is that in the long run the system will be at a steady state: later state matrices will change very little, if at all. In the long run, the proportion of low-risk drivers will be 0.975 and the proportion of drivers who are not low-risk will be 0.025.

Does every Markov chain have a unique stationary matrix? The answer is no. However, if a Markov chain is regular, then it has a unique stationary matrix, and successive state matrices will always approach this stationary matrix.
Regular Markov Chains

A transition matrix P is regular if some power of P has only positive entries. A Markov chain is a regular Markov chain if its transition matrix is regular. For example, although the matrix D below has a zero entry, its successive powers have only positive entries (as the check after the displays confirms), so D is regular:

$D = \begin{bmatrix} 0.3 & 0.7 \\ 1 & 0 \end{bmatrix}$

$D^2 = \begin{bmatrix} 0.79 & 0.21 \\ 0.30 & 0.70 \end{bmatrix}$

$D^3 = \begin{bmatrix} 0.447 & 0.553 \\ 0.79 & 0.21 \end{bmatrix}$
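A short sketch (NumPy assumed, not part of the original notes) confirming that a power of D is strictly positive, which is the test for regularity:

```python
import numpy as np

D = np.array([[0.3, 0.7],
              [1.0, 0.0]])

# D itself has a zero entry, but D^2 is strictly positive,
# so D is regular (some power has only positive entries).
D2 = np.linalg.matrix_power(D, 2)
print(D2)              # [[0.79, 0.21], [0.30, 0.70]]
print((D2 > 0).all())  # True
```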
Properties of Regular Markov Chains

If a Markov chain is regular (that is, some power of its transition matrix P contains only positive entries), then there is a unique stationary matrix S, which can be found by solving the equation

$SP = S$
Finding the stationary matrix. We will find the stationary matrix for the matrix D we have just identified as regular. Solve SP = S, where P = D and $S = \begin{bmatrix} s_1 & s_2 \end{bmatrix}$:

$\begin{bmatrix} s_1 & s_2 \end{bmatrix} \begin{bmatrix} 0.3 & 0.7 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} s_1 & s_2 \end{bmatrix}$

$\begin{bmatrix} 0.3 s_1 + s_2 & 0.7 s_1 \end{bmatrix} = \begin{bmatrix} s_1 & s_2 \end{bmatrix} \;\Rightarrow\; 0.3 s_1 + s_2 = s_1 \;\Rightarrow\; s_2 = 0.7 s_1$

Since $s_1 + s_2 = 1$:

$s_1 + 0.7 s_1 = 1 \;\Rightarrow\; s_1 \approx 0.5882, \quad s_2 \approx 0.4118$

Thus, $S = \begin{bmatrix} 0.5882 & 0.4118 \end{bmatrix}$.
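The same system can be solved numerically. This sketch (NumPy assumed) appends the normalization $s_1 + s_2 = 1$ to the equation taken from SD = S:

```python
import numpy as np

D = np.array([[0.3, 0.7],
              [1.0, 0.0]])

# S D = S is equivalent to (D^T - I) S^T = 0. Keep one of those
# equations and replace the other with the normalization row [1, 1].
A = np.vstack([(D.T - np.eye(2))[0], np.ones(2)])
b = np.array([0.0, 1.0])
S = np.linalg.solve(A, b)
print(S)  # [0.58823529, 0.41176471]
```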
Limiting matrix for powers of P

According to Theorem 1 of this section, the state matrices $S_k$ approach the stationary matrix S, and the successive powers of P approach a limiting matrix P*, where each row of P* is equal to the stationary matrix S:

$S = \begin{bmatrix} 0.5882 & 0.4118 \end{bmatrix}$

$P^* = \begin{bmatrix} 0.5882 & 0.4118 \\ 0.5882 & 0.4118 \end{bmatrix}$
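Raising D to a large power illustrates this numerically; a sketch assuming NumPy:

```python
import numpy as np

D = np.array([[0.3, 0.7],
              [1.0, 0.0]])

# Powers of a regular transition matrix approach a limiting
# matrix whose rows all equal the stationary matrix S.
print(np.linalg.matrix_power(D, 50))
# [[0.58823529, 0.41176471],
#  [0.58823529, 0.41176471]]
```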
10.3 Absorbing Markov Chains

In this section, the concept of absorbing Markov chains will be discussed and illustrated through examples. We will see that the powers of the transition matrix for an absorbing Markov chain approach a limiting matrix. We must introduce some terminology first.
Absorbing States and Absorbing Chains

A state in a Markov chain is called an absorbing state if, once the state is entered, it is impossible to leave.

This concept will be illustrated by an example. For the following transition matrix, we determine that B is an absorbing state, since the probability of going from state B to state B is one. This means that once the system enters state B it does not leave, because the probability of moving from state B to state A or C is zero, as indicated by the second row:

$\begin{bmatrix} 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \\ 0 & 0.5 & 0.5 \end{bmatrix}$ (rows and columns in the order A, B, C)
Theorem 1: Absorbing states and transition matrices

A state in a Markov chain is absorbing if and only if the row of the transition matrix corresponding to that state has a 1 on the main diagonal and zeros elsewhere.

To ensure that the transition matrices for Markov chains with one or more absorbing states have limiting matrices, the chain must satisfy the following definition:

A Markov chain is an absorbing chain if:
A) there is at least one absorbing state, and
B) it is possible to go from each non-absorbing state to at least one absorbing state in a finite number of steps.
Recognizing Absorbing Markov Chains

We will use a transition diagram to determine whether P is the transition matrix for an absorbing Markov chain:

$P = \begin{bmatrix} 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \\ 0 & 0.5 & 0.5 \end{bmatrix}$ (rows and columns in the order A, B, C)

We see that B is an absorbing state.
In a transition matrix, if all the absorbing states precede all the non-absorbing states, the matrix is said to be in standard form. Standard forms are very useful in determining limiting matrices for absorbing Markov chains.

The matrix below on the left is in standard form, since the absorbing states A and B precede the non-absorbing state C. The general standard form, shown on the right, is partitioned into four sub-matrices I, 0, R and Q, where I is an identity matrix and 0 is a zero matrix:

$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0.1 & 0.4 & 0.5 \end{bmatrix} \qquad P = \begin{bmatrix} I & 0 \\ R & Q \end{bmatrix}$

(rows and columns of the left matrix in the order A, B, C)
Limiting Matrix

We are now ready to discuss the long-run behavior of absorbing Markov chains. This will be illustrated with an application.

Two competing real estate companies are trying to buy all the farms in a particular area for future housing development. Each year, 10% of the farmers sell to company A, 40% sell to company B, and the remainder continue farming. Neither company ever sells any of the farms it purchases.

A) Draw a transition diagram for this Markov process and determine whether the associated Markov chain is absorbing.
B) Write a transition matrix in standard form.
C) If neither company owns any farms at the beginning of this competitive buying process, estimate the percentage of farms that each company will purchase in the long run.
Transition Diagram

We see that states A and B are absorbing. The associated Markov chain is also absorbing, since there are two absorbing states A and B, and it is possible to go from the non-absorbing state C to either A or B in one step. We use the transition diagram to write a transition matrix that is in standard form:

$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0.1 & 0.4 & 0.5 \end{bmatrix}$ (rows and columns in the order A, B, C)
Initial State Matrix; Successive State Matrices

At the beginning of this process, all of the farmers are in state C, since no land has been sold initially. Therefore, the initial state matrix (columns in the order A, B, C) is

$S_0 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$

and the successive state matrices are given by $S_1 = S_0 P$, $S_2 = S_1 P = S_0 P^2$, and so on. Do the state matrices have a limit? To find out, we compute the 10th state matrix and look at the product:

$S_{10} = S_0 P^{10} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0.1 & 0.4 & 0.5 \end{bmatrix}^{10} \approx \begin{bmatrix} 0.1998 & 0.7992 & 0.0010 \end{bmatrix}$

We conjecture that the limit of the state matrices is $\begin{bmatrix} 0.2 & 0.8 & 0 \end{bmatrix}$.
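A sketch (NumPy assumed, not part of the original notes) that computes the 10th state matrix directly:

```python
import numpy as np

P = np.array([[1.0, 0.0, 0.0],   # A (absorbing)
              [0.0, 1.0, 0.0],   # B (absorbing)
              [0.1, 0.4, 0.5]])  # C sells to A, sells to B, or keeps farming
S0 = np.array([0.0, 0.0, 1.0])   # everyone starts in state C

S10 = S0 @ np.linalg.matrix_power(P, 10)
print(S10)  # approximately [0.1998, 0.7992, 0.0010]
```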
Theorem 2: Limiting Matrices for Absorbing Markov Chains

If a standard form P for an absorbing Markov chain is partitioned as

$P = \begin{bmatrix} I & 0 \\ R & Q \end{bmatrix}$

then $P^k$ approaches a limiting matrix P* as k increases, where

$P^* = \begin{bmatrix} I & 0 \\ FR & 0 \end{bmatrix}$

The matrix F is given by $F = (I - Q)^{-1}$ and is called the fundamental matrix for P. The identity matrix used to form the fundamental matrix F must be the same size as the matrix Q.
Using the Formulas to Find the Limiting Matrix

We identify the appropriate matrices in the standard form found above:

$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0.1 & 0.4 & 0.5 \end{bmatrix}, \quad I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad 0 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad R = \begin{bmatrix} 0.1 & 0.4 \end{bmatrix}, \quad Q = \begin{bmatrix} 0.5 \end{bmatrix}$

Use the formula to find the matrix F; here Q is a 1×1 matrix:

$F = (I - Q)^{-1} = ([1] - [0.5])^{-1} = [2]$

Next, determine FR:

$FR = 2 \begin{bmatrix} 0.1 & 0.4 \end{bmatrix} = \begin{bmatrix} 0.2 & 0.8 \end{bmatrix}$

Finally, display the limiting matrix. Notice that its last row matches the conjectured limit of the state matrices:

$P^* = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0.2 & 0.8 & 0 \end{bmatrix}$
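The fundamental-matrix computation can be verified in a few lines; again a sketch assuming NumPy:

```python
import numpy as np

# Partition the standard form P = [[I, 0], [R, Q]] for the farm example.
R = np.array([[0.1, 0.4]])  # transitions from C into the absorbing states A, B
Q = np.array([[0.5]])       # transitions from C back to C

F = np.linalg.inv(np.eye(1) - Q)  # fundamental matrix, F = (I - Q)^(-1)
print(F)      # [[2.]]
print(F @ R)  # [[0.2, 0.8]] -- the last row of the limiting matrix
```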
