Markov Chains 2023
1 What is a Stochastic Process?
◼ Suppose we observe some characteristic of a system
at discrete points in time.
◼ Let Xt be the value of the system characteristic at
time t. In most situations, Xt is not known with
certainty before time t and may be viewed as a
random variable.
◼ A discrete-time stochastic process is simply a
description of the relation between the random
variables X0, X1, X2, . . . .
◼ A continuous-time stochastic process is
simply a stochastic process in which the
state of the system can be viewed at any
time, not just at discrete instants in time.
P(X4 = 10 | X3 = 7, X2 = 5, X1 = 2, X0 = 0)
= P(X4 = 10 | X3 = 7)    (Markovian property)    (1)
◼ Essentially, (1) says that the probability
distribution of the state at time t + 1
depends only on the state at time t and
does not depend on the states the chain
passed through on the way to the time-t
state.
◼ In our study of Markov chains, we make the further
assumption that for all states i and j and all t, P(Xt+1
= j | Xt = i) is independent of t. This assumption
allows us to write
P(Xt+1 = j | Xt = i) = pij
where pij is the probability that the system will be in
state j at time t + 1, given that it is in state i at time t.
◼ For an ergodic chain, the n-step transition matrix P^n
approaches a limit as n grows, and every row of that
limit is the same:

lim (n→∞) P^n =
[ p1  p2  . . .  ps ]
[ p1  p2  . . .  ps ]
[ . . .             ]
[ p1  p2  . . .  ps ]
◼ The vector p = [p1 p2 . . . ps] is often
called the steady-state distribution,
or equilibrium distribution, for the
Markov chain.
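Numerically, p can be found by solving pP = p together with the normalization p1 + p2 + . . . + ps = 1. A minimal Python sketch, using a hypothetical two-state transition matrix chosen so that its steady state matches the p1 = 2/3, p2 = 1/3 quoted later for Example 4 (the slides themselves do not show that matrix):

```python
import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1), assumed here
# so that the steady state comes out to p1 = 2/3, p2 = 1/3.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
s = P.shape[0]

# Solve p P = p together with p1 + ... + ps = 1:
# stack the homogeneous system (P^T - I) p = 0 with a row of ones.
A = np.vstack([P.T - np.eye(s), np.ones((1, s))])
b = np.zeros(s + 1)
b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)  # -> [0.6667 0.3333]
```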
◼ Transition Matrix
◼ Steady-State Probabilities
◼ Question
How many times per year can Henry expect to talk to
Shirley?
◼ Answer
To find the expected number of accepted calls per year,
find the long-run proportion (steady-state probability) of
a call being accepted and multiply it by 52, the number
of weeks in a year.
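For instance, if the long-run probability of a call being accepted were 0.35 (a purely hypothetical value; the transition matrix for this example is not shown above), Henry could expect about 52 × 0.35 ≈ 18 conversations per year.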
An Inventory Example
◼ A camera store stocks a particular model
camera that can be ordered weekly. Let D1,
D2, . . . represent the demand for this camera
(the number of units that would be sold if the
inventory is not depleted) during the first
week, second week, . . . , respectively.
◼ It is assumed that the Di are independent
and identically distributed random variables
having a Poisson distribution with a mean of
1.
◼ Let X0 represent the number of cameras on
hand at the outset, X1 the number of cameras
on hand at the end of week 1, X2 the number
of cameras on hand at the end of week 2,
and so on.
◼ Assume that X0 = 3. On Saturday night the
store places an order that is delivered in time
for the next opening of the store on Monday.
◼ The store uses the following order policy:
◼ If there are no cameras in stock, the store orders 3
cameras.
◼ However, if there are any cameras in stock, no order
is placed.
◼ Sales are lost when demand exceeds the inventory
on hand.
◼ Thus, {Xt} for t = 0, 1, . . . is a stochastic process of
the form just described.
◼ The possible states of the process are the integers 0,
1, 2, 3, representing the possible number of cameras
on hand at the end of the week.
◼ The random variables Xt are dependent and may be
evaluated iteratively by the expression
Xt+1 = max(3 - Dt+1, 0)    if Xt = 0
Xt+1 = max(Xt - Dt+1, 0)   if Xt ≥ 1
◼ for t = 0, 1, 2, . . . .
◼ Given that each Dt (t = 1, 2, . . .) has a Poisson
distribution with a mean of 1,
P(Dt = n) = e^(-1) / n!    for n = 0, 1, 2, . . . .
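Combining the recurrence for Xt+1 with the Poisson demand distribution determines every one-step transition probability pij = P(Xt+1 = j | Xt = i). A minimal Python sketch that tabulates the 4 x 4 transition matrix (the function names are my own):

```python
import math

def poisson_pmf(n, mean=1.0):
    # P(D = n) for Poisson demand with the given mean.
    return math.exp(-mean) * mean**n / math.factorial(n)

def transition_matrix(max_stock=3):
    # States 0..max_stock = cameras on hand at the end of a week.
    s = max_stock + 1
    P = [[0.0] * s for _ in range(s)]
    for i in range(s):
        start = max_stock if i == 0 else i  # order up to 3 when stock is 0
        for j in range(s):
            if j == 0:
                # Demand of `start` or more empties the shelf
                # (excess demand is a lost sale).
                P[i][j] = 1.0 - sum(poisson_pmf(d) for d in range(start))
            elif j <= start:
                P[i][j] = poisson_pmf(start - j)
    return P

for row in transition_matrix():
    print([round(x, 3) for x in row])
# e.g. row for state 0: [0.08, 0.184, 0.368, 0.368]
```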
◼ With an ordering cost of 10 + 25z dollars for ordering z
cameras and a penalty of 50 dollars per unit of unsatisfied
demand, the expected cost for a week that begins with zero
cameras is
k(0) = 10 + (25)(3) + 50[PD(4) + 2PD(5) + 3PD(6) + . . .]
where PD(i) is the probability that the demand equals i, as given by a
Poisson distribution with a mean of 1, so that PD(i) becomes negligible
for i larger than about 6. Since PD(4) = 0.015, PD(5) = 0.003, and PD(6)
= 0.001, we obtain k(0) = 86.2. Also using PD(2) = 0.184 and PD(3) =
0.061, similar calculations lead to the results for the other states.
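The PD(i) values quoted above can be checked directly against the Poisson(1) probability mass function:

```python
import math

def pd(i, mean=1.0):
    # P(demand = i) for Poisson demand with the given mean.
    return math.exp(-mean) * mean**i / math.factorial(i)

for i in range(2, 7):
    print(i, round(pd(i), 3))
# 2 0.184, 3 0.061, 4 0.015, 5 0.003, 6 0.001
```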
Mean First Passage Times for the Inventory Problem
Mean First Passage Times
◼ For an ergodic chain, let mij = the expected
number of transitions before we first reach
state j, given that we are currently in state i;
mij is called the mean first passage time
from state i to state j.
◼ In Example 4, m12 would be the expected
number of bottles of cola purchased by a
person who just bought cola 1 before first
buying a bottle of cola 2.
◼ Assume that we are currently in state i. Then with
probability pij, it will take one transition to go from
state i to state j. For k ≠ j, we next go with
probability pik to state k; in this case, it will take an
average of 1 + mkj transitions to go from i to j. This
reasoning implies that
mij = 1 + Σ(k ≠ j) pik mkj    (13)
To illustrate the use of (13), let's solve for the mean first passage times
in Example 4. Recall that p1 = 2/3 and p2 = 1/3. Using the fact that
mii = 1/pi for an ergodic chain, m11 = 3/2 = 1.5 and m22 = 3/1 = 3,
and solving (13) gives m12 = 10.
This means, for example, that a person who last drank cola 1 will drink
an average of ten bottles of cola 1 before switching to cola 2.
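Equation (13) is a linear system that is easy to solve numerically. A sketch for Example 4, where the transition matrix is an assumption on my part, chosen to be consistent with p1 = 2/3, p2 = 1/3 and m12 = 10 (the slides do not show it):

```python
import numpy as np

# Assumed transition matrix for Example 4 (cola purchases); consistent
# with p1 = 2/3, p2 = 1/3 and m12 = 10, but not shown on the slides.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
s = P.shape[0]

# Equation (13): mij = 1 + sum over k != j of pik * mkj.
# For each target state j this is the linear system
# (I - Pj) m = 1, where Pj is P with column j zeroed out.
M = np.zeros((s, s))
for j in range(s):
    Pj = P.copy()
    Pj[:, j] = 0.0
    M[:, j] = np.linalg.solve(np.eye(s) - Pj, np.ones(s))

print(M)  # [[ 1.5 10. ]
          #  [ 5.   3. ]]
```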
Absorbing Chains
◼ Many interesting applications of Markov
chains involve chains in which some of
the states are absorbing and the rest
are transient states. Such a chain is
called an absorbing chain.
Accounts Receivable
◼ The accounts receivable situation of a firm is often modeled as
an absorbing Markov chain. Suppose a firm assumes that an
account is uncollectable if the account is more than three
months overdue. Then at the beginning of each month, each
account may be classified into one of the following states: new,
1 month overdue, 2 months overdue, 3 months overdue, paid,
or bad debt (uncollectable).
P =
[ Q   R ]   } s - m rows
[ 0   I ]   } m rows
I = an identity matrix indicating one always remains
in an absorbing state once it is reached
0 = a zero matrix representing 0 probability of
transitioning from the absorbing states to the
nonabsorbing states
R = the transition probabilities from the
nonabsorbing states to the absorbing states
Q = the transition probabilities between the
nonabsorbing states
◼ In this format, the rows and columns of P correspond
(in order) to the states t1, t2, . . . , ts-m, a1, a2, . . . ,
am. Here, I is an m x m identity matrix reflecting the
fact that we can never leave an absorbing state; Q is
an (s - m) x (s - m) matrix that represents transitions
between transient states;
◼ R is an (s - m) x m matrix representing transitions
from transient states to absorbing states; 0 is an
m x (s - m) matrix consisting entirely of zeros.
◼ This reflects the fact that it is impossible to go from
an absorbing state to a transient state.
Accounts Receivable
◼ The transition matrix for this example is
             New   1 month   2 months   3 months   Paid   Bad Debt
New           0      .6         0          0        .4       0
1 month       0      0          .5         0        .5       0
2 months      0      0          0          .4       .6       0
3 months      0      0          0          0        .7       .3
Paid          0      0          0          0         1       0
Bad Debt      0      0          0          0         0       1
(The upper-left 4 x 4 block of transient-state rows and columns is Q;
the upper-right 4 x 2 block is R.)
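A standard result for absorbing chains (not derived in the excerpt above) is that (I - Q)^-1 R gives, for each transient starting state, the probability of eventually ending up in each absorbing state. A sketch applied to this transition matrix:

```python
import numpy as np

# Q: transitions among the transient states
# (New, 1 month, 2 months, 3 months).
Q = np.array([[0.0, 0.6, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.4],
              [0.0, 0.0, 0.0, 0.0]])

# R: transitions from the transient states to the
# absorbing states (Paid, Bad Debt).
R = np.array([[0.4, 0.0],
              [0.5, 0.0],
              [0.6, 0.0],
              [0.7, 0.3]])

# Absorption probabilities: row = starting transient state,
# columns = (Paid, Bad Debt). Solving (I - Q) A = R avoids
# forming the inverse explicitly.
A = np.linalg.solve(np.eye(4) - Q, R)
print(A[0])  # -> [0.964 0.036]: a new account is eventually paid
             #    with probability .964 and becomes a bad debt
             #    with probability .036
```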