Markov Chains: Yitbarek Takele (PhD, MBA & MA Econ), Associate Professor of Business Administration, Addis Ababa University
Markov Chains
▪ Overview
▪ What is a Stochastic Process?
▪ Characteristics of Markov Analysis
▪ Assumptions Underlying Markov Chain Analysis
▪ What is a Markov Chain?
▪ N-Step Transition Probabilities
▪ Classification of States in a Markov Chain
▪ Markov Analysis: Input & Output
▪ Use of Steady-State & Mean Return Times
▪ Mean First Passage Times of Ergodic Chains
▪ Absorbing Chains
5.1 Overview
▪ Sometimes we are interested in how a random
variable changes over time.
▪ Andrei Markov, a Russian mathematician, developed the technique in the early 1900s; it was first used to describe the movement of gas particles in a closed container.
5.2 What is a Stochastic Process?
◼ Suppose we observe some characteristic of a system at discrete points in time and let Xt be the value of this characteristic at time t. A discrete-time stochastic process is simply a description of the relation between the random variables X0, X1, X2, …
◼ A continuous-time stochastic process is a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time.
◼ For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.
5.3 Characteristics of Markov Analysis
5.4 Assumptions Underlying Markov
Chain Analysis
1. Finite states
◼ The given system has a finite number of states, none of which is "absorbing" in nature.
Example: 3 states, i.e., 3 brands of detergent which the customers can switch between.
2. First-order process
◼ The condition (or state) of the system in any given period depends only on its condition in the previous period & the transition probabilities.
Example: in the brand-switching case it is assumed that the choice of a particular brand of detergent depends upon & is influenced only by the choice in the previous month.
A second-order Markov process assumes that the
customers’ choice next month may depend upon their
choices during the immediately preceding two
months.
A third-order Markov process assumes that the
customers’ choice next month may depend upon their
choices during the immediately preceding three
months.
3. Stationarity
◼ The transition probabilities are constant over
time
Example: it is assumed that the system has settled
down so that the switching among different brands
takes place at the given rates in each time period.
4. Uniform time periods
◼ The changes from one state to another take
place only once during each time period, and
the time periods are equal in duration.
Example: the customers change their brands of
detergent on a monthly basis & accordingly, the
monitoring is also done on a month-to-month basis
5.5 What is a Markov Chain?
◼ Markov Chain - Special type of discrete-time
stochastic process.
◼ Markov analysis is used to analyze the current
state and movement of a variable to predict
the future occurrences and movement of this
variable by the use of presently known
probabilities.
◼ Definition: A stochastic process is a Markov process if the occurrence of a future state depends only on the immediately preceding state. This means that given the chronological times t0, t1, …, tn, the family of random variables {Xtn} = {x1, x2, …, xn} is said to be a Markov process if it possesses the following property:
◼ P{Xtn = xn | Xtn-1 = xn-1, …, Xt0 = x0} = P{Xtn = xn | Xtn-1 = xn-1}
OR
◼ A discrete-time stochastic process is a Markov
chain if, for t = 0,1,2… and all states
P(Xt+1 = it+1|Xt = it, Xt-1=it-1,…,X1=i1, X0=i0)
= P(Xt+1=it+1|Xt = it)
◼ Essentially this says that the probability distribution of the state at time t+1 depends only on the state at time t (it) and does not depend on the states the chain passed through on the way to it.
◼ In our study of Markov chains, we make the further assumption that for all states i and j and all t, P(Xt+1 = j|Xt = i) is independent of t.
◼ This assumption allows us to write P(Xt+1 = j|Xt = i) = pij, where pij is the probability that, given the system is in state i at time t, it will be in state j at time t+1.
◼ If the system moves from state i during one
period to state j during the next period, we say
that a transition from i to j has occurred.
◼ This is known as the one-step transition probability of moving from state i at t to state j at t+1. By definition, we have
Σ (j = 1, …, s) pij = 1,  i = 1, 2, …, s
◼ This equation implies that the probability law
relating the next period’s state to the current
state does not change over time.
◼ It is often called the Stationary Assumption
and any Markov chain that satisfies it is called
a stationary Markov chain.
◼ We also must define qi to be the probability that the chain is in state i at time 0; in other words, P(X0 = i) = qi. The qi are called the initial probabilities.
◼ We call the vector q= [q1, q2,…qs] the initial
probability distribution for the Markov chain.
◼ In most applications, the transition
probabilities are displayed as an s x s
transition probability matrix P. The
transition probability matrix P may be written
as
      | p11  p12  …  p1s |
P =   | p21  p22  …  p2s |
      |  …    …        … |
      | ps1  ps2  …  pss |
◼ For each i,
Σ (j = 1, …, s) pij = 1
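◼ As a quick numerical check of this row-sum condition, the minimal sketch below (Python with numpy, an assumed tool rather than part of the slides) builds the two-state cola matrix used later in these notes and verifies that each row sums to 1.

```python
import numpy as np

# One-step transition matrix (the two-state cola example used later in these notes):
# state 1 = last purchase was cola 1, state 2 = last purchase was cola 2.
P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

# Each row is a conditional probability distribution, so every row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
print(P.sum(axis=1))   # [1. 1.]
```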
5.6 n-Step Transition Probabilities
◼ A question of interest when studying a Markov
chain is: If a Markov chain is in a state i at
time m, what is the probability that n periods
later the Markov chain will be in state j?
◼ Given the initial probabilities a(0) of starting in state j and the transition matrix P of a Markov chain, the absolute probabilities a(n) of being in state j after n transitions (n > 0) are computed as follows:
a(1) = a(0)P
a(2) = a(1)P = a(0)P^2
…
a(n) = a(0)P^n,  n = 1, 2, …
◼ The matrix P^n is known as the n-step transition matrix. From these calculations, we can see that
P^n = P^(n-1) P
OR
P^n = P^(n-m) P^m,  0 < m < n
◼ These are known as the Chapman-Kolmogorov equations
◼ This probability will be independent of m, so
we may write
P(Xm+n =j|Xm = i) = P(Xn =j|X0 = i) = Pij(n)
where Pij(n) is called the n-step probability of
a transition from state i to state j.
◼ For n > 1, Pij(n) = ijth element of Pn
The Cola Example
◼ Suppose the entire cola industry produces only
two colas.
◼ Given that a person last purchased cola 1, there
is a 90% chance that her next purchase will be
cola 1.
◼ Given that a person last purchased cola 2, there
is an 80% chance that her next purchase will be
cola 2.
1. If a person is currently a cola 2 purchaser, what is the
probability that she will purchase cola 1 two purchases
from now?
2. If a person is currently a cola 1 purchaser, what is the probability that she will purchase cola 1 three purchases from now?
◼ We view each person’s purchases as a Markov
chain with the state at any given time being
the type of cola the person last purchased.
◼ Hence, each person’s cola purchases may be
represented by a two-state Markov chain,
where
State 1 = person has last purchased cola 1
State 2 = person has last purchased cola 2
◼ If we define Xn to be the type of cola purchased by a person on his nth future cola purchase, then X0, X1, … may be described as the Markov chain with the following transition matrix:
             Cola 1   Cola 2
P =  Cola 1 |  .90      .10  |
     Cola 2 |  .20      .80  |
Squaring the transition matrix gives the two-step probabilities. Hence, P21(2) = .34. This means that the probability is .34 that two purchases in the future a cola 2 drinker will purchase cola 1.
By using basic (conditional) probability theory, we may obtain this answer in a different way.
2. We seek P(X3 = 1|X0 = 1) = P11(3) = element 11 of P^3 = .781.
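◼ Both answers can be checked by raising P to the required power; a minimal sketch (numpy assumed, not part of the slides):

```python
import numpy as np

P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

P2 = np.linalg.matrix_power(P, 2)   # two-step transition probabilities
P3 = np.linalg.matrix_power(P, 3)   # three-step transition probabilities

print(P2[1, 0])   # P21(2): cola 2 -> cola 1 in two purchases, 0.34
print(P3[0, 0])   # P11(3): cola 1 -> cola 1 in three purchases, ~0.781
```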
◼ Many times we do not know the state of the Markov chain at time 0. Then we can determine the probability that the system is in state j at time n by using the following reasoning:
P(state j at time n) = Σ (i = 1, …, s) qi Pij(n)
◼ To illustrate the behavior of the n-step
transition probabilities for large values of n, we
have computed several of the n-step transition
probabilities for the Cola example.
◼ This means that for large n, no matter what
the initial state, there is a .67 chance that a
person will be a cola 1 purchaser.
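◼ The long-run behaviour of P^n, and the weighting by an initial distribution q, can both be sketched as follows (numpy assumed; the particular q below is an illustrative assumption only):

```python
import numpy as np

P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

# n-step transition matrix for a large n: every row approaches [2/3, 1/3].
Pn = np.linalg.matrix_power(P, 50)
print(Pn)                      # rows ~ [0.667, 0.333]

# If the state at time 0 is known only in distribution, say q = [0.4, 0.6]
# (an assumed illustrative split), the distribution at time n is q @ P^n.
q = np.array([0.4, 0.6])
print(q @ Pn)                  # also ~ [0.667, 0.333]
```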
5.7 Classification of States in a
Markov Chain
◼ To understand the n-step transition in more
detail, we need to study how mathematicians
classify the states of a Markov chain.
◼ The following transition matrix illustrates most
of the following definitions.
.4 .6 0 0 0
.5 .5 0 0 0
P = 0 0 .3 .7 0
0 0 .5 .4 .1
0 0 0 .8 .2
◼ Definition: Given two states i and j, a path from i to j is a sequence of transitions that begins in i and ends in j, such that each transition in the sequence has a positive probability of occurring.
◼ Definition: A state j is reachable from state i
if there is a path leading from i to j.
◼ Definition: Two states i and j are said to
communicate if j is reachable from i, and i is
reachable from j.
◼ Definition: A set of states S in a Markov chain
is a closed set if no state outside of S is
reachable from any state in S.
A closed Markov chain is said to be ergodic if all its states are recurrent and aperiodic. In this case, the absolute probabilities after n transitions, a(n) = a(0)P^n, converge to a unique steady-state distribution that is independent of the initial probabilities a(0).
Example: compute the successive powers P^n of a chain's transition matrix and examine the diagonal entries p11(n) and p33(n).
◼ Continuing with n = 6, 7, . . . shows that p11(n) and p33(n) are positive for even values of n and zero otherwise.
▪ This means that the period for states 1 and 3 is 2.
Representation of a Markov Chain as
a Diagram
      A     B     C     D
A   0.95    0   0.05    0
B   0.2   0.5    0    0.3
C    0    0.2    0    0.8
D    0     0     1     0

(The same chain can be drawn as a digraph whose edges carry these transition probabilities.)
Properties of Markov Chain states
States of Markov chains are classified by the digraph representation (omitting the actual probability values).
A, C and D are recurrent states: they are in strongly connected components which are sinks in the graph.
Alternative definition: a state s is recurrent if it can be reached from any state reachable from s; otherwise it is transient.
Irreducible Markov Chains
A Markov chain is irreducible if every state is reachable from every other state, i.e., all of its states communicate and the whole chain forms a single closed set.
Ergodic Markov Chains
A Markov chain is ergodic if all of its states are recurrent and aperiodic and all states communicate; such a chain possesses unique steady-state probabilities.
5.8 Markov Analysis: Input & Output
◼ In Markov analysis, a given system is analyzed on the basis of the following two sets of input data:
The transition matrix
The initial condition
◼ Based on these inputs, the model provides the
following predictions:
a. The probability of the system being in a given state
at a given future time
b. The steady state probabilities
1. Inputs
i. Transition probabilities
◼ Example
Assume that a person has Br. 200 at time t=0. At times
1, 2, …, he plays a game wherein he can gain or lose
Br. 100, with probabilities p and 1-p, respectively. He
would quit if & when he loses his initial amount of Br.
200, or doubles this amount to Br. 400.
Now, if we define Xt as his capital position after the time t game is played, then X0, X1, X2, …, Xt may be viewed as a discrete-time stochastic process.
While X0 is known to be a constant equal to Br. 200,
we are not sure about X1 , X2 , X3 , and so on.
For instance, X1 shall be equal to Br. 300 (if he gains) with a probability of p and Br. 100 with a probability of 1-p. It may be noted that if Xt = Br. 400, then Xt+1 & all subsequent values shall be equal to Br. 400, since he quits after the initial amount is doubled.
◼ Similarly, whenever he loses his initial capital so that Xt = 0, then Xt+1 & all subsequent values shall be equal to zero.
Transition Probabilities
State/State Br. 0 Br. 100 Br. 200 Br. 300 Br. 400
Br. 0 1 0 0 0 0
Br. 100 1-p 0 p 0 0
P= Br. 200 0 1-p 0 p 0
Br. 300 0 0 1-p 0 p
Br. 400 0 0 0 0 1
◼ From the given matrix, it can be seen that there
are five states viz.
Br. 0, Br. 100, Br. 200, Br. 300, and Br. 400
◼ From all the states, except states Br. 0 and Br.
400, we know that the person would be in the
next state (with Br. 100 more) with a probability
of p, and in the previous state (with Br. 100 less)
with a probability of 1-p.
Of course, if the person reaches the state of Br. 0 or Br.
400, then he doesn’t play the game any more. Hence,
for each of these states, the transition probability would
be equal to 1, i.e., P00 = P44 = 1
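◼ The matrix above can be generated mechanically for any win probability p; a minimal sketch (numpy assumed, and p = 0.5 is only an illustrative value):

```python
import numpy as np

def gamblers_ruin_matrix(p):
    """States are capital levels Br. 0, 100, 200, 300, 400 (indices 0..4)."""
    P = np.zeros((5, 5))
    P[0, 0] = 1.0            # ruined: absorbing state
    P[4, 4] = 1.0            # doubled: absorbing state
    for i in range(1, 4):    # interior states
        P[i, i + 1] = p      # win Br. 100
        P[i, i - 1] = 1 - p  # lose Br. 100
    return P

print(gamblers_ruin_matrix(0.5))
```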
ii. Initial conditions
◼ The initial conditions describe the situation the
system presently is in
◼ For the gambler’s ruin problem, the initial
condition is given by [0 0 1 0 0], which implies
that he currently is in the state where his capital
is Br. 200.
2. Outputs
i. Specific-state probabilities
◼ To calculate the probabilities of the system being in specific states, we let qi(k) represent the probability of the system being in state i in period k, called the state probability.
◼ Since the system would occupy one & only one
state at a given point in time, it is obvious that
the sum of all qi values would be equal to 1.
◼ In general, with a total of n states,
q1(k) + q2(k) + q3(k) + … + qn(k) = 1, for every k,
where k is the number of transitions (0, 1, 2, …)
◼ Example
The brand switching properties of the detergent customers
are examined for many months & noted when they stabilize
so that the probabilities of transition from brand D2 to D1, D2
and D3 are, respectively, 0.20, 0.50, & 0.30.
With states of the system designated as D1, D2 and D3, qD1 (0)
represents the probability of a customer choosing brand D1
this month (at t=0) and qD1 (1) represents the probability of
choosing this brand after one transition. Similarly, qD1 (2) is
the probability of choosing this brand after two transitions,
etc.
Thus, the probability distribution of the customer choosing
any given brand (D1, D2 and D3) in any given period(k) may
be expressed as a row vector as follows:
Q (k) = [qD1 (k) qD2 (k) qD3 (k)], and
for n states
Q (k) = [q1 (k) q2 (k) q3 (k) . . . qn (k)]
The initial condition is obviously expressed as Q(0)
◼ Q(1) = [q1(1) q2(1) q3(1)] = Q(0)P. Thus,
◼ Q(k) = Q(k-1)P = Q(k-2)P^2 = . . . = Q(0)P^k
◼ Thus,
Q(2) = Q(1)P = Q(0)P^2
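◼ Hedged sketch: the slides give no numeric Q(0) at this point, so the initial split below is purely an assumed illustration; the transition matrix is the detergent matrix used later in this section (numpy assumed).

```python
import numpy as np

# Detergent transition matrix (rows/columns D1, D2, D3), as used later in this section.
P = np.array([[0.60, 0.30, 0.10],
              [0.20, 0.50, 0.30],
              [0.15, 0.05, 0.80]])

Q0 = np.array([0.30, 0.40, 0.30])        # assumed initial market shares, illustration only

Q1 = Q0 @ P                              # Q(1) = Q(0)P
Q2 = Q0 @ np.linalg.matrix_power(P, 2)   # Q(2) = Q(0)P^2
print(Q1, Q2)
```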
◼ Conditional Probabilities
◼ The probability that a customer buys D3 two months hence, given that his latest purchase has been D2, can be calculated by taking the following into account:
i. Customer switches from brand D2 to D1 in month 1, and to brand D3 in month 2
ii. Customer stays with brand D2 in month 1, and switches to brand D3 in month 2
iii. Customer switches from brand D2 to D3 in month 1 and then stays with D3 in month 2
From the transition probability matrix, the probability of switching from D2 to D1 in a month is 0.20; from D1 to D3 it is 0.10; from D2 to D3 it is 0.30; and the probabilities of staying with D2 and with D3 are 0.50 & 0.80, respectively. The three two-month paths and their probabilities are:
P(D2→D1→D3) = 0.2 x 0.1 = 0.02
P(D2→D2→D3) = 0.5 x 0.3 = 0.15
P(D2→D3→D3) = 0.3 x 0.8 = 0.24
Total = 0.41
This probability can also be obtained from the two-step matrix P^2, in which the element P23 = 0.41.
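◼ The same 0.41 can be checked by squaring the transition matrix (a minimal numpy sketch, numpy being an assumed tool):

```python
import numpy as np

P = np.array([[0.60, 0.30, 0.10],   # D1
              [0.20, 0.50, 0.30],   # D2
              [0.15, 0.05, 0.80]])  # D3

P2 = np.linalg.matrix_power(P, 2)
print(P2[1, 2])   # element (D2, D3) of P^2 = 0.02 + 0.15 + 0.24 = 0.41
```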
ii. Steady state probabilities
◼ Steady-state probabilities are used to describe
the long-run behavior of a Markov chain.
◼ Theorem 1: Let P be the transition matrix for
an s-state ergodic chain. Then there exists a
vector
π = [π1 π2 … πs] such that
                  | π1  π2  …  πs |
lim (n→∞) P^n =   | π1  π2  …  πs |
                  | …   …       … |
                  | π1  π2  …  πs |
◼ Theorem 1 tells us that for any initial state i,
lim (n→∞) Pij(n) = πj
◼ The πj values are the equilibrium, or steady-state, probabilities.
◼ At steady state,
Q(k) = Q(k)P [since Q(k) = Q(k-1)]
◼ Thus,
Q = QP
◼ In matrix notation, [q1 q2 … qn] = [q1 q2 … qn] P
Transient Analysis & Intuitive Interpretation
πj (1 − pjj) = Σ (k ≠ j) πk pkj
◼ This equation may be viewed as saying that in
the steady-state, the “flow” of probability into
each state must equal the flow of probability
out of each state.
◼ P(a particular transition leaves state j) = P(a particular transition enters state j)
Recall that in the steady state, the probability that the system is in state j is πj.
◼ From this observation, it follows that
Probability that a particular transition leaves state j
= (probability that the current period begins in j) x (probability that the current transition leaves j)
= πj (1 − pjj)
AND
◼ Probability that a particular transition enters state j
= Σ (k ≠ j) (probability that the current period begins in state k) x (probability that the transition moves from k to j)
= Σ (k ≠ j) πk pkj
◼ Written out equation by equation, q = qP gives
q1 = P11q1 + P21q2 + . . . + Pn1qn
q2 = P12q1 + P22q2 + . . . + Pn2qn
q3 = P13q1 + P23q2 + . . . + Pn3qn
.
.
.
qn = P1nq1 + P2nq2 + . . . + Pnnqn
◼ Example: Detergent
q1 = 0.60q1 + 0.20q2 + 0.15q3
q2 = 0.30q1 + 0.50q2 + 0.05q3
q3 = 0.10q1 + 0.30q2 + 0.80q3 and
q1 + q2 + q3 = 1
◼ To obtain the values of q1, q2, & q3:
q1 = 0.60q1 + 0.20q2 + 0.15q3
q2 = 0.30q1 + 0.50q2 + 0.05q3 and
q1 + q2 + q3 = 1
◼ Restating the equations, we get
0.40q1 - 0.20q2 - 0.15q3 = 0
-0.30q1 + 0.50q2 - 0.05q3 = 0
q1 + q2 + q3 = 1
◼ In matrix notation,
|  0.40  -0.20  -0.15 | | q1 |   | 0 |
| -0.30   0.50  -0.05 | | q2 | = | 0 |
|  1.00   1.00   1.00 | | q3 |   | 1 |
◼ Accordingly,
q1 = 0.293
q2 = 0.224
q3 = 0.483
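◼ The same system of equations can be solved numerically; a minimal sketch (numpy assumed, not part of the slides):

```python
import numpy as np

# Two balance equations plus the normalising condition q1 + q2 + q3 = 1.
A = np.array([[ 0.40, -0.20, -0.15],
              [-0.30,  0.50, -0.05],
              [ 1.00,  1.00,  1.00]])
b = np.array([0.0, 0.0, 1.0])

q = np.linalg.solve(A, b)
print(q.round(3))   # [0.293 0.224 0.483]
```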
5.9 Use of Steady-State Probabilities & Mean First Return Times in Decision Making
◼ A direct by-product of the steady-state probabilities is the determination of the expected number of transitions before the system returns to state j for the first time. This is known as the mean first return time or the mean recurrence time, and in an n-state Markov chain it is computed as
μjj = 1/πj,  j = 1, 2, …, n
◼ Example-2: Mean Return Times
(case: soil condition vs productivity)
The gardener can alter the transition probabilities P by using fertilizer to boost the soil condition. In this case the transition matrix (states 1 = good, 2 = fair, 3 = poor soil) becomes:
      | .30  .60  .10 |
P =   | .10  .60  .30 |
      | .05  .40  .55 |
π1 = .3π1 + .1π2 + .05π3
π2 = .6π1 + .6π2 + .40π3
π3 = .1π1 + .3π2 + .55π3
π1 + π2 + π3 = 1
◼ The solution is
π1 = 0.1017, π2 = 0.5254, π3 = 0.3729
and the corresponding mean first return times μjj = 1/πj are
μ11 ≈ 9.83, μ22 ≈ 1.90, μ33 ≈ 2.68 seasons
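◼ A quick check of these steady-state values and the mean first return times 1/πj (Python/numpy assumed, not part of the slides):

```python
import numpy as np

# Gardener chain with fertilizer: states 1 = good, 2 = fair, 3 = poor soil.
P = np.array([[0.30, 0.60, 0.10],
              [0.10, 0.60, 0.30],
              [0.05, 0.40, 0.55]])

# Solve pi = pi P together with the normalising condition sum(pi) = 1.
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print(pi.round(4))        # [0.1017 0.5254 0.3729]
print((1 / pi).round(2))  # mean first return times, roughly [9.83 1.90 2.68]
```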
◼ This means that, depending on the current state of the soil, it will take approximately 10 gardening seasons for the soil to return to a good state, about 2 seasons to return to a fair state, and about 3 seasons to return to a poor state. A bleak result?
◼ A more aggressive program could be as follows:
Cost model
◼ Example:
Consider the gardener problem with fertilizer.
Suppose that the cost of the fertilizer is Br. 500 per
bag & the garden needs two bags if the soil is good.
The amount of fertilizer is increased by 25% if the
soil is fair & 60% if the soil is poor. The gardener
estimates the annual yield to be worth Br. 2,500 if no
fertilizer is used & Br. 4,200 if fertilizer is applied. Is it worthwhile to use fertilizer?
◼ Solution
Using the steady-state probabilities we computed earlier, we get
Expected annual cost of fertilizer
= 2 x Br. 500 x π1 + (1.25 x 2) x Br. 500 x π2 + (1.6 x 2) x Br. 500 x π3
= Br. 1,000 x .1017 + Br. 1,250 x .5254 + Br. 1,600 x .3729
= Br. 1,355.1
Increase in the annual value of the yield
= Br. 4,200 – Br. 2,500 = Br. 1,700
The result shows that, on average, the use of fertilizer nets Br. 1,700 – Br. 1,355.1 = Br. 344.9. Hence, use of fertilizer is recommended.
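◼ The cost comparison can be reproduced directly from the steady-state probabilities quoted above (a minimal Python sketch):

```python
pi = [0.1017, 0.5254, 0.3729]          # steady-state probabilities: good, fair, poor soil
bags = [2, 1.25 * 2, 1.6 * 2]          # bags of fertilizer needed in each state
cost = sum(p * b * 500 for p, b in zip(pi, bags))   # expected annual fertilizer cost (Br.)

gain = 4200 - 2500                     # increase in annual yield value with fertilizer (Br.)
print(round(cost, 1), round(gain - cost, 1))        # ~1355.1 and ~344.9
```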
5.10 Mean First Passage Times of
Ergodic Chains
◼ In 5.9 we used the steady-state probabilities to compute μjj, the mean return time for state j. In this section we discuss the determination of the mean first passage time μij, the expected number of transitions needed to reach state j from state i for the first time.
◼ The calculations are rooted in the determination of the probability of at least one passage from state i to state j as fij = Σ (n = 1, …, ∞) fij(n), where fij(n) is the probability of a first passage from state i to state j in n transitions. An expression for fij(n) can be determined recursively from
pij(n) = fij(n) + Σ (k = 1, …, n-1) fij(k) pjj(n-k),  n = 1, 2, ...
◼ The transition matrix P = (pij) is assumed to have m states.
1. If fij < 1, it is not certain that the system will ever pass from state i to j, and μij = ∞.
2. If fij = 1, the Markov chain is ergodic and the mean first passage time from state i to state j is computed as
μij = Σ (n = 1, …, ∞) n fij(n)
◼ A simpler way to determine the mean first passage times for all the states in an m-state Markov chain with transition matrix P is to use the following matrix-based formula:
μij = (I − Nj)^-1 1,  j ≠ i  (the vector of μij over all i ≠ j)
where
I = (m-1) identity matrix
Nj = transition matrix P less its jth row and jth column (those of the target state j)
1 = (m-1) column vector with all elements equal to 1
The matrix operation (I − Nj)^-1 1 essentially sums the columns of (I − Nj)^-1.
◼ Example:
Consider the gardener Markov chain with fertilizer
once again.
.30 .60 .10
P = .10 .60 .30
.05 .40 .55
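◼ A sketch of the (I − Nj)^-1 1 computation for this matrix (numpy assumed; the helper function name is mine, not from the slides):

```python
import numpy as np

P = np.array([[0.30, 0.60, 0.10],
              [0.10, 0.60, 0.30],
              [0.05, 0.40, 0.55]])

def mean_first_passage_to(P, j):
    """Mean first passage times mu_ij into target state j, for all i != j."""
    keep = [k for k in range(len(P)) if k != j]   # drop row j and column j
    N = P[np.ix_(keep, keep)]
    mu = np.linalg.solve(np.eye(len(keep)) - N, np.ones(len(keep)))
    return dict(zip(keep, mu))

for j in range(3):
    print(j, mean_first_passage_to(P, j))   # e.g. mu from fair soil to good soil ~ 12.5 seasons
```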
Solving for Steady-State Probabilities and
Mean First Passage Times on the
Computer
◼ Since we solve steady-state probabilities and
mean first passage times by solving a system
of linear equations, we may use LINDO to
determine them.
◼ Simply type in an objective function of 0, and
type the equations you need to solve as your
constraints.
◼ Alternatively, you may use the LINGO model in
the file Markov.lng to determine steady-state
probabilities and mean first passage times for
an ergodic chain.
5.11 Absorbing Chains
◼ Many interesting applications of Markov chains
involve chains in which some of the states are
absorbing and the rest are transient states.
◼ This type of chain is called an absorbing
chain.
◼ To see why we are interested in absorbing
chains we consider the following accounts
receivable example.
Accounts Receivable Example
◼ Suppose that past data indicate that the
following Markov chain describes how the
status of an account changes from one month
to the next month:
              New  1 month  2 months  3 months  Paid  Bad Debt
New            0      .6        0         0       .4      0
1 month        0      0        .5         0       .5      0
2 months       0      0         0        .4       .6      0
3 months       0      0         0         0       .7     .3
Paid           0      0         0         0        1      0
Bad Debt       0      0         0         0        0      1
◼ To simplify our example, we assume that after
three months, a debt is either collected or
written off as a bad debt.
◼ Once a debt is paid up or written off as a bad debt, the account is closed, and no further transitions occur.
◼ Hence, Paid or Bad Debt are absorbing states.
Since every account will eventually be paid or
written off as a bad debt, New, 1 month, 2
months, and 3 months are transient states.
◼ A typical new account will be absorbed as
either a collected debt or a bad debt.
◼ What is the probability that a new account will
eventually be collected?
◼ To answer this question we must write the transition matrix in a particular form. We assume there are s - m transient states (t1, …, ts-m) and m absorbing states (a1, …, am). The transition matrix is then written as

              s-m columns   m columns
P =  s-m rows |     Q            R    |
     m rows   |     0            I    |
◼ The transition matrix for this example is
              New  1 month  2 months  3 months  Paid  Bad Debt
New            0      .6        0         0       .4      0
1 month        0      0        .5         0       .5      0
2 months       0      0         0        .4       .6      0
3 months       0      0         0         0       .7     .3
Paid           0      0         0         0        1      0
Bad Debt       0      0         0         0        0      1

Here Q is the upper-left 4 x 4 block (transitions among the transient states) and R is the upper-right 4 x 2 block (transitions from the transient states into the absorbing states).
◼ To answer questions 1-3, we need to compute the matrices (I - Q)^-1 and (I - Q)^-1R;
then:
1. t1 = New, a1 = Paid. Thus, the probability that a new account is eventually collected is element 11 of (I - Q)^-1R = .964.
2. t2 = 1 month, a2 = Bad Debt. Thus, the probability that a one-month overdue account turns into a bad debt is element 22 of (I - Q)^-1R = .06.
3. From answer 1, only 3.6% of all debts are uncollected. Since yearly accounts receivable are $1,200,000 on the average, (0.036)(1,200,000) = $43,200 per year will be uncollected.
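◼ These absorption probabilities can be reproduced from the Q and R blocks identified above (numpy assumed, not part of the slides):

```python
import numpy as np

# Q: transitions among the transient states New, 1 month, 2 months, 3 months.
Q = np.array([[0, 0.6, 0,   0  ],
              [0, 0,   0.5, 0  ],
              [0, 0,   0,   0.4],
              [0, 0,   0,   0  ]])

# R: transitions from the transient states into the absorbing states Paid, Bad Debt.
R = np.array([[0.4, 0  ],
              [0.5, 0  ],
              [0.6, 0  ],
              [0.7, 0.3]])

absorb = np.linalg.solve(np.eye(4) - Q, R)   # (I - Q)^-1 R
print(absorb[0, 0])   # P(new account eventually paid) = 0.964
print(absorb[1, 1])   # P(1-month-old account becomes bad debt) = 0.06
```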
Discussion
Session
I Thank You