

2.5 Markov Chains


Use a stochastic matrix to find the nth state matrix of a Markov chain.
Find the steady state matrix of a Markov chain.
Find the steady state matrix of an absorbing Markov chain.

STOCHASTIC MATRICES AND MARKOV CHAINS


Many types of applications involve a finite set of states $\{S_1, S_2, \ldots, S_n\}$ of a
population. For instance, residents of a city may live downtown or in the suburbs.
Voters may vote Democrat, Republican, or Independent. Soft drink consumers may buy
Coca-Cola, Pepsi Cola, or another brand.
The probability that a member of a population will change from the jth state to the
ith state is represented by a number $p_{ij}$, where $0 \le p_{ij} \le 1$. A probability of $p_{ij} = 0$
means that the member is certain not to change from the jth state to the ith state,
whereas a probability of $p_{ij} = 1$ means that the member is certain to change from the
jth state to the ith state.
The columns of the matrix below are labeled by the states $S_1, S_2, \ldots, S_n$ that a member changes from, and the rows by the states the member changes to:

$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{bmatrix}$$
P is called the matrix of transition probabilities because it gives the probabilities of
each possible type of transition (or change) within the population.
At each transition, each member in a given state must either stay in that state or
change to another state. For probabilities, this means that the sum of the entries in any
column of P is 1. For instance, in the first column,
$$p_{11} + p_{21} + \cdots + p_{n1} = 1.$$
Such a matrix is called stochastic (the term “stochastic” means “regarding
conjecture”). That is, an n × n matrix P is a stochastic matrix when each entry is a
number between 0 and 1 inclusive, and the sum of the entries in each column of P is 1.
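This definition is easy to test by direct computation. Below is a minimal Python sketch (not part of the original text; it assumes the numpy library, and the helper name is_stochastic is our own) that checks both conditions: entries in [0, 1] and columns summing to 1.

```python
import numpy as np

def is_stochastic(P, tol=1e-9):
    """Return True when P is a stochastic matrix: P is square,
    every entry lies in [0, 1], and each column sums to 1."""
    P = np.asarray(P, dtype=float)
    if P.ndim != 2 or P.shape[0] != P.shape[1]:
        return False                                  # must be n x n
    if np.any(P < -tol) or np.any(P > 1 + tol):
        return False                                  # entries must be in [0, 1]
    return np.allclose(P.sum(axis=0), 1.0, atol=tol)  # column sums equal 1

# The 2 x 2 matrix from part (c) of the example below passes the test.
print(is_stochastic([[0.9, 0.8], [0.1, 0.2]]))  # True
```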

Examples of Stochastic Matrices and Nonstochastic Matrices

The matrices in parts (a), (b), and (c) are stochastic, but the matrices in parts (d), (e),
and (f) are not.

$$\text{a. } \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad \text{b. } \begin{bmatrix} \frac{1}{4} & \frac{1}{5} & \frac{1}{3} & \frac{1}{2} \\[2pt] \frac{1}{4} & \frac{13}{60} & 0 & \frac{1}{6} \\[2pt] \frac{1}{4} & \frac{1}{3} & \frac{1}{3} & \frac{1}{6} \\[2pt] \frac{1}{4} & \frac{1}{4} & \frac{1}{3} & \frac{1}{6} \end{bmatrix} \qquad \text{c. } \begin{bmatrix} 0.9 & 0.8 \\ 0.1 & 0.2 \end{bmatrix}$$

$$\text{d. } \begin{bmatrix} \frac{1}{2} & \frac{1}{2} & \frac{1}{4} \\[2pt] \frac{1}{4} & 0 & \frac{3}{4} \\[2pt] \frac{1}{4} & \frac{3}{4} & 0 \end{bmatrix} \qquad \text{e. } \begin{bmatrix} \frac{1}{3} & 0 & \frac{2}{3} \end{bmatrix} \qquad \text{f. } \begin{bmatrix} 0.1 & 0.2 & 0.3 & 0.4 \\ 0.2 & 0.3 & 0.4 & 0.5 \\ 0.3 & 0.4 & 0.5 & 0.6 \\ 0.4 & 0.5 & 0.6 & 0.7 \end{bmatrix}$$


Example 2 describes the use of a stochastic matrix to measure consumer preferences.

A Consumer Preference Model

Two competing companies offer satellite television service to a city with 100,000
households. Figure 2.1 shows the changes in satellite subscriptions each year: of
Company A's subscribers, 70% stay with Company A, 20% switch to Company B, and
10% cancel the service; of Company B's subscribers, 80% stay with Company B, 15%
switch to Company A, and 5% cancel; and of the households with no satellite
television, 15% subscribe to Company A, 15% subscribe to Company B, and 70%
remain without service. Company A now has 15,000 subscribers and Company B has
20,000 subscribers. How many subscribers will each company have in one year?

SOLUTION

The matrix of transition probabilities is

$$P = \begin{bmatrix} 0.70 & 0.15 & 0.15 \\ 0.20 & 0.80 & 0.15 \\ 0.10 & 0.05 & 0.70 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix}$$

where the columns, from left to right, give the "from" states A, B, and None, and the
rows give the corresponding "to" states. The initial state matrix representing the
portions of the total population in the three states is

$$X_0 = \begin{bmatrix} 0.1500 \\ 0.2000 \\ 0.6500 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix}.$$

To find the state matrix representing the portions of the population in the three states in
one year, multiply P by $X_0$ to obtain

$$X_1 = PX_0 = \begin{bmatrix} 0.70 & 0.15 & 0.15 \\ 0.20 & 0.80 & 0.15 \\ 0.10 & 0.05 & 0.70 \end{bmatrix} \begin{bmatrix} 0.1500 \\ 0.2000 \\ 0.6500 \end{bmatrix} = \begin{bmatrix} 0.2325 \\ 0.2875 \\ 0.4800 \end{bmatrix}.$$

In one year, Company A will have 0.2325(100,000) = 23,250 subscribers and
Company B will have 0.2875(100,000) = 28,750 subscribers.

REMARK: Always assume that the matrix P of transition probabilities in a Markov
chain remains constant between states.

A Markov chain, named after Russian mathematician Andrey Andreyevich Markov
(1856–1922), is a sequence $\{X_n\}$ of state matrices that are related by the equation
$X_{k+1} = PX_k$, where P is a stochastic matrix. For instance, consider the consumer
preference model discussed in Example 2. To find the state matrix representing the
portions of the population in each state in three years, repeatedly multiply the initial
state matrix $X_0$ by the matrix of transition probabilities P.
$$\begin{aligned} X_1 &= PX_0 \\ X_2 &= PX_1 = P \cdot PX_0 = P^2X_0 \\ X_3 &= PX_2 = P \cdot P^2X_0 = P^3X_0 \end{aligned}$$

In general, the nth state matrix of a Markov chain is $P^nX_0$, as summarized below.

nth State Matrix of a Markov Chain


The nth state matrix of a Markov chain for which P is the matrix of transition
probabilities and $X_0$ is the initial state matrix is
$$X_n = P^nX_0.$$
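This formula is straightforward to evaluate numerically. The sketch below (a minimal numpy illustration, not part of the original text; the function name nth_state is ours) applies it to the matrices of Example 2.

```python
import numpy as np
from numpy.linalg import matrix_power

# Transition matrix and initial state matrix from Example 2.
P = np.array([[0.70, 0.15, 0.15],
              [0.20, 0.80, 0.15],
              [0.10, 0.05, 0.70]])
X0 = np.array([0.15, 0.20, 0.65])

def nth_state(P, X0, n):
    """nth state matrix of the Markov chain: X_n = P^n X_0."""
    return matrix_power(P, n) @ X0

print(nth_state(P, X0, 1))  # approx. [0.2325, 0.2875, 0.4800]
print(nth_state(P, X0, 3))  # approx. [0.3028, 0.3904, 0.3068]
```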

Example 3 uses the model discussed in Example 2 to demonstrate this process.


A Consumer Preference Model

Assuming the matrix of transition probabilities from Example 2 remains the same year
after year, find the number of subscribers each satellite television company will have after
(a) 3 years, (b) 5 years, (c) 10 years, and (d) 15 years.
SOLUTION
a. To find the numbers of subscribers after 3 years, first find $X_3$.

$$X_3 = P^3X_0 \approx \begin{bmatrix} 0.3028 \\ 0.3904 \\ 0.3068 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix} \qquad \text{After 3 years}$$
After 3 years, Company A will have about 0.3028(100,000) = 30,280 subscribers
and Company B will have about 0.3904(100,000) = 39,040 subscribers.
b. To find the numbers of subscribers after 5 years, first find $X_5$.

$$X_5 = P^5X_0 \approx \begin{bmatrix} 0.3241 \\ 0.4381 \\ 0.2378 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix} \qquad \text{After 5 years}$$
After 5 years, Company A will have about 0.3241(100,000) = 32,410 subscribers
and Company B will have about 0.4381(100,000) = 43,810 subscribers.
c. To find the numbers of subscribers after 10 years, first find $X_{10}$.

$$X_{10} = P^{10}X_0 \approx \begin{bmatrix} 0.3329 \\ 0.4715 \\ 0.1957 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix} \qquad \text{After 10 years}$$
After 10 years, Company A will have about 0.3329(100,000) = 33,290 subscribers
and Company B will have about 0.4715(100,000) = 47,150 subscribers.
d. To find the numbers of subscribers after 15 years, first find $X_{15}$.

$$X_{15} = P^{15}X_0 \approx \begin{bmatrix} 0.3333 \\ 0.4756 \\ 0.1911 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix} \qquad \text{After 15 years}$$
After 15 years, Company A will have about 0.3333(100,000) = 33,330 subscribers
and Company B will have about 0.4756(100,000) = 47,560 subscribers.

LINEAR ALGEBRA APPLIED

Google's PageRank algorithm makes use of Markov chains. For a search set that
contains n web pages, define an $n \times n$ matrix A such that $a_{ij} = 1$ when page j
references page i and $a_{ij} = 0$ otherwise. Adjust A to account for web pages without
external references, scale each column of A so that A is stochastic, and call this
matrix B. Then define

$$M = pB + \frac{1 - p}{n}E$$

where p is the probability that a user follows a link on a page, $1 - p$ is the
probability that the user goes to any page at random, and E is an $n \times n$ matrix
whose entries are all 1. The Markov chain whose matrix of transition probabilities
is M converges to a unique steady state matrix, which gives an estimate of page
ranks. Section 10.3 discusses a method that can be used to estimate the steady
state matrix.
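A sketch of this construction follows (an illustrative numpy translation of the description above, not Google's actual implementation; the helper name google_matrix and the damping value p = 0.85 are our own assumptions).

```python
import numpy as np

def google_matrix(A, p=0.85):
    """Build M = p*B + ((1 - p)/n)*E from a 0/1 reference matrix A,
    where a_ij = 1 when page j references page i."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    col_sums = A.sum(axis=0)
    # Columns for pages with no external references become uniform;
    # all other columns are scaled so that B is stochastic.
    B = np.where(col_sums == 0, 1.0 / n,
                 A / np.where(col_sums == 0, 1.0, col_sums))
    E = np.ones((n, n))
    return p * B + (1 - p) / n * E

# Three pages: page 0 links to 1 and 2, page 1 links to 2, page 2 links nowhere.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 1, 0]])
M = google_matrix(A)
print(M.sum(axis=0))  # each column of M sums to 1, so M is stochastic
```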


STEADY STATE MATRIX OF A MARKOV CHAIN


In Example 3, notice that there is little difference between the numbers of subscribers
after 10 years and after 15 years. If you continue the process shown in this example,
then the state matrix $X_n$ eventually reaches a steady state. That is, as long as the matrix
P does not change, the matrix product $P^nX_0$ approaches a limit X. In Example 3, the limit
is the steady state matrix

$$X = \begin{bmatrix} \dfrac{1}{3} \\[4pt] \dfrac{10}{21} \\[4pt] \dfrac{4}{21} \end{bmatrix} \approx \begin{bmatrix} 0.3333 \\ 0.4762 \\ 0.1905 \end{bmatrix} \begin{matrix} \text{A} \\ \text{B} \\ \text{None} \end{matrix} \qquad \text{Steady state matrix}$$

Check to see that PX = X, as shown below.

$$PX = \begin{bmatrix} 0.70 & 0.15 & 0.15 \\ 0.20 & 0.80 & 0.15 \\ 0.10 & 0.05 & 0.70 \end{bmatrix} \begin{bmatrix} \dfrac{1}{3} \\[4pt] \dfrac{10}{21} \\[4pt] \dfrac{4}{21} \end{bmatrix} = \begin{bmatrix} \dfrac{1}{3} \\[4pt] \dfrac{10}{21} \\[4pt] \dfrac{4}{21} \end{bmatrix} = X$$
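This fixed-point check can be reproduced exactly with rational arithmetic. The following is a minimal sketch (not part of the original text) using Python's standard fractions module, so no rounding is involved.

```python
from fractions import Fraction as F

# Transition matrix from Example 2 and the claimed steady state matrix.
P = [[F(70, 100), F(15, 100), F(15, 100)],
     [F(20, 100), F(80, 100), F(15, 100)],
     [F(10, 100), F(5, 100),  F(70, 100)]]
X = [F(1, 3), F(10, 21), F(4, 21)]

# Compute PX with exact rational arithmetic and confirm PX = X.
PX = [sum(P[i][j] * X[j] for j in range(3)) for i in range(3)]
print(PX == X)  # True
```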

In Example 5, you will verify the above result by finding the steady state matrix X.
The matrix of transition probabilities P used above is an example of a regular
stochastic matrix. A stochastic matrix P is regular when some power of P has only
positive entries.

Regular Stochastic Matrices

a. The stochastic matrix

$$P = \begin{bmatrix} 0.70 & 0.15 & 0.15 \\ 0.20 & 0.80 & 0.15 \\ 0.10 & 0.05 & 0.70 \end{bmatrix}$$

is regular because $P^1$ has only positive entries.
b. The stochastic matrix

$$P = \begin{bmatrix} 0.50 & 1.00 \\ 0.50 & 0 \end{bmatrix}$$

is regular because

$$P^2 = \begin{bmatrix} 0.75 & 0.50 \\ 0.25 & 0.50 \end{bmatrix}$$
has only positive entries.
c. The stochastic matrix

$$P = \begin{bmatrix} \dfrac{1}{3} & 0 & 1 \\[4pt] \dfrac{1}{3} & 1 & 0 \\[4pt] \dfrac{1}{3} & 0 & 0 \end{bmatrix}$$

is not regular because every power of P has two zeros in its second column.
(Verify this.)

REMARK: For a regular stochastic matrix P, the sequence of successive powers
$P, P^2, P^3, \ldots$ approaches a stable matrix $\bar{P}$. The entries in each column of $\bar{P}$ are
equal to the corresponding entries in the steady state matrix X. You are asked to
show this in Exercise 55.

When P is a regular stochastic matrix, the corresponding regular Markov chain
$$PX_0, \; P^2X_0, \; P^3X_0, \; \ldots$$
approaches a unique steady state matrix X. You are asked to prove this in Exercise 56.
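One way to carry out verifications like the one suggested in part (c) is to examine successive powers of P numerically. Below is a minimal numpy sketch (not from the text; the helper name is_regular and the cutoff max_power = 50 are our own illustrative choices).

```python
import numpy as np
from numpy.linalg import matrix_power

def is_regular(P, max_power=50):
    """Return True when some power P^k (k <= max_power) has only
    positive entries, i.e. when P appears to be regular."""
    P = np.asarray(P, dtype=float)
    for k in range(1, max_power + 1):
        if np.all(matrix_power(P, k) > 0):
            return True
    return False

# Part (b): P itself has a zero entry, but P^2 is all positive.
print(is_regular([[0.50, 1.00], [0.50, 0.00]]))             # True
# Part (c): every power keeps two zeros in its second column.
print(is_regular([[1/3, 0, 1], [1/3, 1, 0], [1/3, 0, 0]]))  # False
```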


Finding a Steady State Matrix

See LarsonLinearAlgebra.com for an interactive version of this type of example.

Find the steady state matrix X of the Markov chain whose matrix of transition
probabilities is the regular matrix

$$P = \begin{bmatrix} 0.70 & 0.15 & 0.15 \\ 0.20 & 0.80 & 0.15 \\ 0.10 & 0.05 & 0.70 \end{bmatrix}.$$

SOLUTION
Note that P is the matrix of transition probabilities that you found in Example 2 and
whose steady state matrix X you verified earlier in this section. To find X, begin by
letting

$$X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$

Then use the matrix equation PX = X to obtain

$$\begin{bmatrix} 0.70 & 0.15 & 0.15 \\ 0.20 & 0.80 & 0.15 \\ 0.10 & 0.05 & 0.70 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

or

$$\begin{aligned} 0.70x_1 + 0.15x_2 + 0.15x_3 &= x_1 \\ 0.20x_1 + 0.80x_2 + 0.15x_3 &= x_2 \\ 0.10x_1 + 0.05x_2 + 0.70x_3 &= x_3. \end{aligned}$$

REMARK: Recall from Example 2 that the state matrix consists of entries that are
portions of the whole. So it should make sense that $x_1 + x_2 + x_3 = 1$.

Use these equations and the fact that $x_1 + x_2 + x_3 = 1$ to write the system of linear
equations below.

$$\begin{aligned} -0.30x_1 + 0.15x_2 + 0.15x_3 &= 0 \\ 0.20x_1 - 0.20x_2 + 0.15x_3 &= 0 \\ 0.10x_1 + 0.05x_2 - 0.30x_3 &= 0 \\ x_1 + x_2 + x_3 &= 1 \end{aligned}$$
Use any appropriate method to verify that the solution of this system is
$x_1 = \dfrac{1}{3}$, $x_2 = \dfrac{10}{21}$, and $x_3 = \dfrac{4}{21}$.
So the steady state matrix is

$$X = \begin{bmatrix} \dfrac{1}{3} \\[4pt] \dfrac{10}{21} \\[4pt] \dfrac{4}{21} \end{bmatrix} \approx \begin{bmatrix} 0.3333 \\ 0.4762 \\ 0.1905 \end{bmatrix}.$$

Check that PX = X.
REMARK: If P is not regular, then the corresponding Markov chain may or may not
have a unique steady state matrix.

A summary for finding the steady state matrix X of a Markov chain is below.

Finding the Steady State Matrix of a Markov Chain
1. Check to see that the matrix of transition probabilities P is a regular matrix.
2. Solve the system of linear equations obtained from the matrix equation
PX = X along with the equation $x_1 + x_2 + \cdots + x_n = 1$.
3. Check the solution found in Step 2 in the matrix equation PX = X.
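These steps translate directly into a short computation. The sketch below (a minimal numpy illustration, not from the text; the helper name steady_state is ours) carries out Steps 2 and 3 for the matrix of Example 5, assuming Step 1 has already confirmed that P is regular.

```python
import numpy as np

def steady_state(P):
    """Solve (P - I)X = 0 together with x_1 + ... + x_n = 1,
    following Steps 2 and 3 of the summary."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    # Stack the n equations (P - I)X = 0 and the normalization row.
    A = np.vstack([P - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    assert np.allclose(P @ X, X)  # Step 3: check PX = X
    return X

P = np.array([[0.70, 0.15, 0.15],
              [0.20, 0.80, 0.15],
              [0.10, 0.05, 0.70]])
print(steady_state(P))  # approx. [0.3333, 0.4762, 0.1905]
```

Solving the stacked system by least squares sidesteps choosing which redundant equation of PX = X to drop; for a regular P the residual is zero and the answer agrees with the exact fractions found above.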
