IEOR 3106: Introduction to Operations Research: Stochastic Models
Professor Whitt
SOLUTIONS to Homework Assignment 3
Introduction to Discrete-Time Markov Chains
Read Sections 4.1-4.5 in the Ross text, up to, but not including Section 4.5.2, i.e., pages
185-221 (pages 191-234 of the 9th edition). However, to keep the work under control, Examples
4.18, 4.19, 4.23, 4.24, 4.25 and 4.26 are optional. These optional examples are Examples 4.15,
4.16, 4.20, 4.21 and 4.22 in the 9th edition. (Example 4.25 does not appear in the 9th edition.)
That cuts the reading almost in half.
Do the following exercises at the end of Chapter 4:
2. This exercise is closely related to Example 4.4. As in Example 4.4, in this exercise we
must choose new states in order to create a Markov process. In particular, here we need the
state to indicate the weather, not only today, but also in the previous two days. Let R indicate
that it is rainy on some day and let D indicate that it is dry. Then, for any three days in a
row, there are 2^3 = 8 possible states, which we can represent by the vectors:
(D, D, D), (D, D, R), (D, R, D), (D, R, R), (R, D, D), (R, D, R), (R, R, D), (R, R, R) .
If we let D = 0 and R = 1, then we obtain the eight binary numbers:
(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1) .
If we translate the binary numbers into decimal numbers, then we obtain the eight states
0, 1, 2, 3, 4, 5, 6, 7, all in the correct order.
3. We continue with Exercise 2. Using the eight states in the given order, we obtain the
following 8 × 8 transition matrix:
     0.8  0.2  0.0  0.0  0.0  0.0  0.0  0.0
     0.0  0.0  0.4  0.6  0.0  0.0  0.0  0.0
     0.0  0.0  0.0  0.0  0.6  0.4  0.0  0.0
P =  0.0  0.0  0.0  0.0  0.0  0.0  0.4  0.6
     0.6  0.4  0.0  0.0  0.0  0.0  0.0  0.0
     0.0  0.0  0.4  0.6  0.0  0.0  0.0  0.0
     0.0  0.0  0.0  0.0  0.6  0.4  0.0  0.0
     0.0  0.0  0.0  0.0  0.0  0.0  0.2  0.8
Of course, you would get a different matrix if you reordered the states. Note that there
can be only two positive entries in each row: from state (A, B, C), we can only transition to
a state of the form (B, C, X), where X can assume only two possible values. Moreover, except
from states (D, D, D) and (R, R, R), we transition to state (B, C, C), repeating yesterday's
weather, with probability 0.6.
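As a numerical check, the matrix above can be built programmatically. The sketch below (not part of the original solution) encodes each state as a binary number exactly as in the solution to Exercise 2, with the lowest bit representing the most recent day:

```python
# Build the 8x8 transition matrix for the three-day weather chain.
# State i encodes the last three days as the binary digits of i
# (0 = dry, 1 = rain), with the lowest bit the most recent day.

def rain_prob(state):
    """Probability of rain today, given the three-day history `state`."""
    if state == 0b111:           # rained on all three previous days
        return 0.8
    if state == 0b000:           # dry on all three previous days
        return 0.2
    # otherwise today repeats yesterday's weather with probability 0.6
    return 0.6 if state & 1 else 0.4

P = [[0.0] * 8 for _ in range(8)]
for i in range(8):
    nxt = (i << 1) & 0b111       # drop the oldest day, shift in today
    P[i][nxt | 1] = rain_prob(i)       # today rainy
    P[i][nxt] = 1 - rain_prob(i)       # today dry

# every row of a transition matrix must be a probability vector
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12
```

Printing `P` row by row reproduces the matrix displayed above.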
7. In Example 4.4 the states are 0, 1, 2 and 3. We want the probability P^(2)_{3,0} + P^(2)_{3,1}.
The superscript (2) appears because we must consider the two-step transition, from yesterday
until tomorrow. We start in state 3 and we must end in one of the states 0 or 1. Thus we have

P^(2)_{3,0} + P^(2)_{3,1} = P_{3,1} P_{1,0} + P_{3,1} P_{1,1} + P_{3,3} P_{3,0} + P_{3,3} P_{3,1}
                          = (0.2)(0.5) + (0.2)(0) + (0.8)(0) + (0.8)(0.2)
                          = 0.26 .
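The same answer can be obtained by squaring the transition matrix directly. In the sketch below, the rows for states 1 and 3 are the ones used in the calculation above; the rows for states 0 and 2 are taken from Example 4.4 of Ross:

```python
# Transition matrix of Example 4.4 (states 0-3).
P = [[0.7, 0.0, 0.3, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.4, 0.0, 0.6],
     [0.0, 0.2, 0.0, 0.8]]

def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)              # two-step transition matrix
answer = P2[3][0] + P2[3][1]   # start in state 3, end in state 0 or 1
print(round(answer, 6))        # 0.26
```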
14.
(i) {0, 1, 2} recurrent
(ii) {0, 1, 2, 3} recurrent
(iii) {0, 2} recurrent, {1} transient and {3, 4} recurrent. (You have to be careful or you
will miss the transient state 1. Note that you can go from state 1 to states 0 and 2, but you
cannot go back. Thus state 1 must be transient.)
(iv) {0, 1} recurrent, {2} recurrent, {3} transient and {4} transient
18.
Let Xn = 1 if coin 1 is flipped at the nth toss; let Xn = 2 if coin 2 is flipped at the
nth toss. (There are no other possibilities.) Note that {Xn : n ≥ 1} is a Markov chain with
transition probabilities

P_{1,1} = 1 − P_{1,2} = 0.6 and P_{2,2} = 1 − P_{2,1} = 0.5 .

Thus, the transition matrix is

P =   0.6  0.4
      0.5  0.5
(a) We seek the limiting steady-state distribution, which requires we solve the equation
π = πP , where π is the steady-state probability vector to be found (whose components must
sum to 1) and P is the transition matrix displayed above. As in display (4.7) in the book, we
obtain the equations
π1 = 0.6π1 + 0.5π2
π2 = 0.4π1 + 0.5π2
π1 + π2 = 1 .
One of the first two equations displayed above is redundant. That can be deduced from
the fact that the matrix P − I is singular. That in turn follows because each row of P sums
to 1, so the columns of P − I sum to the zero vector; the last column of P − I is therefore
minus the sum of the previous columns. Since the last column is a linear combination of the
other columns, the full set of columns is not linearly independent.
Considering only the first and third equations, we obtain π1 = 5/9 and π2 = 4/9. Hence
the (long-run) proportion of flips that use coin 1 is 5/9.
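The two-state steady-state equations can be solved exactly with rational arithmetic; a short check (not part of the original solution):

```python
from fractions import Fraction as F

# Two-state chain: coin 1 stays with probability 0.6, coin 2 with 0.5.
p11, p22 = F(6, 10), F(5, 10)
p21 = 1 - p22

# pi1 = pi1*p11 + pi2*p21 together with pi1 + pi2 = 1 gives
# pi1*(1 - p11) = (1 - pi1)*p21, i.e. pi1 = p21 / (1 - p11 + p21).
pi1 = p21 / (1 - p11 + p21)
pi2 = 1 - pi1
print(pi1, pi2)   # 5/9 4/9
```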
(b) We seek the 4-step transition probability

P^(4)_{1,2} = (P^4)_{1,2} ,

where P^4 = (P^2)^2 and P^2 = P × P. First,

P^2 =   0.56  0.44
        0.55  0.45

and then

P^(4)_{1,2} = P^(2)_{1,1} P^(2)_{1,2} + P^(2)_{1,2} P^(2)_{2,2} = (0.56)(0.44) + (0.44)(0.45) = 0.4444 .
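The squaring-twice scheme used above is easy to check numerically; a short sketch (not part of the original solution), with row/column 0 standing for coin 1:

```python
# Four-step transition probability P^(4)_{1,2}, computed by squaring twice.
P = [[0.6, 0.4],
     [0.5, 0.5]]

def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)     # [[0.56, 0.44], [0.55, 0.45]]
P4 = matmul(P2, P2)
print(round(P4[0][1], 6))   # 0.4444
```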
19.
We seek the probability π0 + π1 , where π = (π0 , π1 , π2 , π3 ) is the steady-state probability
vector associated with the transition matrix displayed in Example 4.4. The answer is
π0 = 1/4, π1 = 3/20, π2 = 3/20 and π3 = 9/20 .
Thus the desired answer is π0 + π1 = 8/20 = 2/5 .
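The claimed stationary vector can be verified exactly by checking π = πP component by component. In the sketch below the matrix is the Example 4.4 matrix of Ross (the rows for states 1 and 3 also appear in the solution to Exercise 7 above):

```python
from fractions import Fraction as F

# Example 4.4 transition matrix and the claimed stationary vector.
P = [[F(7, 10), F(0), F(3, 10), F(0)],
     [F(5, 10), F(0), F(5, 10), F(0)],
     [F(0), F(4, 10), F(0), F(6, 10)],
     [F(0), F(2, 10), F(0), F(8, 10)]]
pi = [F(1, 4), F(3, 20), F(3, 20), F(9, 20)]

# Check pi = pi P exactly, and that pi is a probability vector.
for j in range(4):
    assert sum(pi[i] * P[i][j] for i in range(4)) == pi[j]
assert sum(pi) == 1

print(pi[0] + pi[1])   # 2/5
```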
20. By Section 4.4 there exists a unique probability vector π satisfying π = πP . It thus
suffices to show that the special probability vector π with πi = 1/(M + 1) for all i satisfies the
equation π = πP . Note that, for this special probability vector π,
(πP)_j = Σ_{i=0}^{M} (1/(M+1)) P_{i,j} = (1/(M+1)) Σ_{i=0}^{M} P_{i,j} = (1/(M+1)) × 1 = 1/(M+1) ,
by virtue of the assumed doubly-stochastic property.
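The argument can be illustrated with a small doubly stochastic matrix (the particular matrix below is a made-up example, not from the exercise):

```python
from fractions import Fraction as F

# A doubly stochastic matrix: every row AND every column sums to 1.
P = [[F(1, 2), F(1, 4), F(1, 4)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(1, 4), F(1, 4), F(1, 2)]]
M = len(P) - 1

pi = [F(1, M + 1)] * (M + 1)   # the uniform probability vector

# (pi P)_j = (1/(M+1)) * (sum of column j) = 1/(M+1) = pi_j for every j.
for j in range(M + 1):
    assert sum(pi[i] * P[i][j] for i in range(M + 1)) == pi[j]
```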
30. Let Xn = 0 if the nth vehicle on the road is a car; let Xn = 1 if it is a truck.
By the assumptions, the stochastic process {Xn : n ≥ 1} is a Markov chain with transition
probabilities P0,1 = 1 − P0,0 = 1/5 and P1,0 = 1 − P1,1 = 3/4. Solving π = πP with π0 + π1 = 1,
we obtain the steady-state limiting probabilities π0 = 15/19 and π1 = 4/19.
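For a two-state chain the steady-state equations reduce to a single balance equation between the two states, π0 P_{0,1} = π1 P_{1,0}; a quick exact check (not part of the original solution):

```python
from fractions import Fraction as F

# Car/truck chain: P(0,1) = 1/5, P(1,0) = 3/4.
p01, p10 = F(1, 5), F(3, 4)

# Balance between the two states: pi0 * p01 = pi1 * p10, pi0 + pi1 = 1.
pi0 = p10 / (p01 + p10)
print(pi0, 1 - pi0)   # 15/19 4/19
```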
Notes on Extra Problems with Answers in Back:
1. The process is a Markov chain because the probability of a future state conditional on
past history, including the present state, depends only on the present state; that is, formula
(4.1) in the book holds.
The specific transition probabilities Pi,j are given in the back of the book. As a regularity
check on your answer (or as a way to speed up your calculation), note that there is symmetry:
the probabilities are the same if we change the names of “white” and “black”. Thus, P1,2 = P2,1
and, more generally, Pi,j = P3−i,3−j . To illustrate one calculation, consider P1,2 . We can make
the transition from state 1 to state 2 if and only if we select a black ball from urn 1 and
we select a white ball from urn 2. The probability of selecting a black ball from urn 1 and a
white ball from urn 2, starting from 1 white ball in urn 1 (and necessarily 2 white balls in urn
2) is the product of the separate probabilities by independence. The probability of selecting
a black ball from urn 1, starting from 1 white ball in urn 1, is clearly 2/3. The probability
of selecting a white ball from urn 2, starting from 2 white balls in urn 2, is again 2/3. The
product 2/3 × 2/3 yields P1,2 = 4/9.
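The full transition matrix can be generated by the same reasoning. The sketch below assumes, as the calculation above indicates, that one ball is drawn from each urn and the two are exchanged; the state is the number of white balls in urn 1:

```python
from fractions import Fraction as F

N = 3   # balls per urn; state i = number of white balls in urn 1

def P(i, j):
    """One-step transition probability: draw one ball from each urn and swap."""
    up = F(N - i, N) * F(N - i, N)    # black from urn 1 AND white from urn 2
    down = F(i, N) * F(i, N)          # white from urn 1 AND black from urn 2
    if j == i + 1:
        return up
    if j == i - 1:
        return down
    if j == i:
        return 1 - up - down          # the swap leaves the count unchanged
    return F(0)

assert P(1, 2) == F(4, 9)
# symmetry under exchanging the colour names "white" and "black"
assert all(P(i, j) == P(N - i, N - j)
           for i in range(N + 1) for j in range(N + 1))
```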
4. The stochastic process {Xn : n ≥ 1} is not Markov, because the conditional probability
displayed on the left side of (4.1) does not equal Pi,j for all n; it also depends on whether n
is even or odd. We can make the process Markov by adding extra states. As indicated in the
book, we can use six states instead of three: We can use the states 0, 1, 2, 0̄, 1̄ and 2̄, where
i signifies that the present value is i and n is even, while ī indicates that the present value is i
and n is odd.
21. The answer is in the book. For part (b), we use the doubly-stochastic property in the
previous exercise.