
Chapter 5

Markov Chains

Yitbarek Takele (PhD, MBA & MA Econ)


Associate Professor of Business Administration
Addis Ababa University
Content

▪ Overview
▪ What is a Stochastic Process?
▪ Characteristics of Markov Analysis
▪ Assumptions Underlying Markov Chain Analysis
▪ What is a Markov Chain?
▪ n-Step Transition Probabilities
▪ Classification of States in a Markov Chain
▪ Markov Analysis: Input & Output
▪ Use of Steady-State Probabilities & Mean First Return Times in Decision Making
▪ Mean First Passage Times of Ergodic Chains
▪ Absorbing Chains

5.1 Overview
▪ Sometimes we are interested in how a random variable changes over time.

▪ The study of how a random variable evolves over time is the subject of stochastic processes.

▪ A Markov chain is a special type of stochastic process used to study how a random variable evolves over time.
▪ Andrei Markov, a Russian mathematician, developed the technique in the early twentieth century to describe the movement of gas in a closed container.

▪ In the 1950s, management scientists began to recognize that the technique could be adapted to decision situations which fit the pattern Markov described.
5.2 What is a Stochastic Process?

◼ Suppose we observe some characteristic of a


system at discrete points in time.
◼ Let Xt be the value of the system characteristic
at time t. In most situations, Xt is not known
with certainty before time t and may be viewed
as a random variable.
◼ A discrete-time stochastic process is simply
a description of the relation between the
random variables X0, X1, X2 …..

◼ A continuous–time stochastic process is
simply the stochastic process in which the
state of the system can be viewed at any time,
not just at discrete instants in time.
◼ For example, the number of people in a
supermarket t minutes after the store opens
for business may be viewed as a continuous-
time stochastic process.

5.3 Characteristics of Markov Analysis

1. Descriptive, not normative (it does not determine the optimal course of action)
 The aim is to determine the behavior of the system.
 The objective is not to determine an appropriate marketing strategy (for the detergent example, to be discussed), but rather to know how the system would behave over time in terms of households' buying behavior for, say, a consumer product.
2. Dynamic - involves multiple time periods, i.e., a stochastic process.
5.4 Assumptions Underlying Markov
Chain Analysis
1. Finite states
◼ The given system has a finite no. of states, none
of which is “absorbing” in nature.
 Example: 3 states, i.e., 3 brands of detergent among which the customers can switch.
2. First-order process
◼ The condition (or state) of the system in any
given period is dependent only on its condition
prevailing in the previous period & the transition
probabilities.
 Example: in the brand-switching case it is assumed that the choice of a particular brand of detergent depends only on the choice made in the previous month.
 A second-order Markov process assumes that the
customers’ choice next month may depend upon their
choices during the immediately preceding two
months.
 A third-order Markov process assumes that the
customers’ choice next month may depend upon their
choices during the immediately preceding three
months.
3. Stationarity
◼ The transition probabilities are constant over
time
 Example: it is assumed that the system has settled
down so that the switching among different brands
takes place at the given rates in each time period.
4. Uniform time periods
◼ The changes from one state to another take
place only once during each time period, and
the time periods are equal in duration.
 Example: the customers change their brands of
detergent on a monthly basis & accordingly, the
monitoring is also done on a month-to-month basis

5.5 What is a Markov Chain?
◼ Markov Chain - Special type of discrete-time
stochastic process.
◼ Markov analysis is used to analyze the current
state and movement of a variable to predict
the future occurrences and movement of this
variable by the use of presently known
probabilities.

◼ Definition: A stochastic process is a Markov process if the occurrence of a future state depends only on the immediately preceding state. That is, given the chronological times t0, t1, …, tn, the family of random variables {Xt0, Xt1, …, Xtn} is said to be a Markov process if it possesses the following property:

◼ P{Xtn = xn | Xtn-1 = xn-1, …, Xt0 = x0} = P{Xtn = xn | Xtn-1 = xn-1}
OR
◼ A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states,
  P(Xt+1 = it+1 | Xt = it, Xt-1 = it-1, …, X1 = i1, X0 = i0) = P(Xt+1 = it+1 | Xt = it)
◼ Essentially this says that the probability distribution of the state at time t+1 depends only on the state at time t (namely it) and does not depend on the states the chain passed through on the way to it.
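As a quick illustration (a minimal sketch, not part of the original slides), the following Python code simulates one sample path of a hypothetical two-state Markov chain; the 2x2 matrix is an invented example, and the next state is drawn using only the current state, which is exactly the property stated above.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state transition matrix: row i gives P(next state | current state i).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

state = 0                      # X0
path = [state]
for t in range(10):
    # The distribution of X_{t+1} depends only on X_t (row 'state' of P).
    state = rng.choice(2, p=P[state])
    path.append(state)
print(path)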

◼ In our study of Markov chains, we make further
assumption that for all states i and j and all t,
P(Xt+1 = j|Xt = i) is independent of t.
◼ This assumption allows us to write P(Xt+1 = j|Xt
= i) = pij where pij is the probability that given
the system is in state i at time t, it will be in a
state j at time t+1.
◼ If the system moves from state i during one
period to state j during the next period, we say
that a transition from i to j has occurred.

◼ This is known as the one-step transition probability of moving from state i at time t to state j at time t+1. By definition, we have

  Σ (j=1 to s) pij = 1,  i = 1, 2, …, s

  pij ≥ 0,  i, j = 1, 2, …, s

◼ The pij's are often referred to as the transition probabilities for the Markov chain.
◼ This equation implies that the probability law
relating the next period’s state to the current
state does not change over time.
◼ It is often called the Stationary Assumption
and any Markov chain that satisfies it is called
a stationary Markov chain.
◼ We must also define qi to be the probability that the chain is in state i at time 0; in other words, P(X0 = i) = qi. The qi are called the initial probabilities.

◼ We call the vector q = [q1, q2, …, qs] the initial probability distribution for the Markov chain.
◼ In most applications, the transition probabilities are displayed as an s x s transition probability matrix P. The transition probability matrix P may be written as

      | p11  p12  …  p1s |
  P = | p21  p22  …  p2s |
      |  …    …        … |
      | ps1  ps2  …  pss |
◼ For each i,

  Σ (j=1 to s) pij = 1

◼ We also know that each entry in the P matrix must be nonnegative.
◼ Hence, all entries in the transition probability matrix are nonnegative, and the entries in each row must sum to 1.
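For readers who want to experiment, here is a minimal Python/NumPy sketch (not from the original slides) showing how a transition probability matrix can be stored and checked against these two conditions; the 2x2 matrix used is the cola matrix that appears later in this chapter.

import numpy as np

# Transition probability matrix P: rows = current state, columns = next state.
P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

# Every entry must be nonnegative and every row must sum to 1.
assert (P >= 0).all()
assert np.allclose(P.sum(axis=1), 1.0)
print("P is a valid transition probability matrix")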

5.6 n-Step Transition Probabilities
◼ A question of interest when studying a Markov
chain is: If a Markov chain is in a state i at
time m, what is the probability that n periods
later the Markov chain will be in state j?
◼ Given the initial probabilities a(0) of starting in state j and the transition matrix P of a Markov chain, the absolute probabilities a(n) of being in state j after n transitions (n > 0) are computed as follows:

  a(1) = a(0) P
  a(2) = a(1) P = a(0) P^2
  .
  .
  .
  a(n) = a(0) P^n,  n = 1, 2, ...
◼ The matrix P^n is known as the n-step transition matrix. From these calculations, we can see that

  P^n = P^m P^(n-m)
OR
  pij(n) = Σ (over k) pik(m) pkj(n-m),  0 < m < n

◼ These are known as the Chapman-Kolmogorov equations.
◼ This probability will be independent of m, so
we may write
P(Xm+n =j|Xm = i) = P(Xn =j|X0 = i) = Pij(n)
where Pij(n) is called the n-step probability of
a transition from state i to state j.
◼ For n > 1, Pij(n) = ijth element of Pn

The Cola Example
◼ Suppose the entire cola industry produces only
two colas.
◼ Given that a person last purchased cola 1, there
is a 90% chance that her next purchase will be
cola 1.
◼ Given that a person last purchased cola 2, there
is an 80% chance that her next purchase will be
cola 2.
1. If a person is currently a cola 2 purchaser, what is the
probability that she will purchase cola 1 two purchases
from now?
2. If a person is currently a cola 1 purchaser, what is the
probability that she will purchase cola 1 three
purchases from now?
◼ We view each person’s purchases as a Markov
chain with the state at any given time being
the type of cola the person last purchased.
◼ Hence, each person’s cola purchases may be
represented by a two-state Markov chain,
where
 State 1 = person has last purchased cola 1
 State 2 = person has last purchased cola 2
◼ If we define Xn to be the type of cola
purchased by a person on his nth future cola
purchase, then X0, X1, … may be described as
the Markov chain with the following transition
matrix:
             Cola 1   Cola 2
  Cola 1   |  .90      .10  |
P =
  Cola 2   |  .20      .80  |

We can now answer questions 1 and 2.

1. We seek P(X2 = 1 | X0 = 2) = P21(2) = element 21 of P^2:

  P^2 = | .90  .10 | | .90  .10 |  =  | .83  .17 |
        | .20  .80 | | .20  .80 |     | .34  .66 |
 Hence, P21(2) = .34. This means that the probability is .34 that, two purchases from now, a cola 2 drinker will purchase cola 1.
 By using basic (conditional) probability theory, we may obtain this answer in a different way.
2. We seek P(X3 = 1 | X0 = 1) = P11(3) = element 11 of P^3:

  P^3 = P(P^2) = | .90  .10 | | .83  .17 |  =  | .781  .219 |
                 | .20  .80 | | .34  .66 |     | .438  .562 |

Therefore, P11(3) = .781
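These calculations are easy to reproduce with a short NumPy sketch (an illustration, not part of the original slides); indices are 0-based in code, so P21(2) is P2[1, 0].

import numpy as np

P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

P2 = np.linalg.matrix_power(P, 2)   # two-step transition probabilities
P3 = np.linalg.matrix_power(P, 3)   # three-step transition probabilities

print(P2[1, 0])   # P21(2): cola 2 buyer purchases cola 1 two purchases from now -> 0.34
print(P3[0, 0])   # P11(3): cola 1 buyer purchases cola 1 three purchases from now -> 0.781

# For large n the rows of P^n converge to the steady-state distribution (about [0.67, 0.33]).
print(np.linalg.matrix_power(P, 50))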

◼ Many times we do not know the state of the Markov chain at time 0. Then we can determine the probability that the system is in state j at time n by using the following reasoning:

  Probability of being in state j at time n = Σ (i=1 to s) qi Pij(n)

  where q = [q1, q2, …, qs] is the initial probability distribution.
◼ To illustrate the behavior of the n-step transition probabilities for large values of n, we have computed several of the n-step transition probabilities for the Cola example.
◼ As n grows, P11(n) and P21(n) both approach about .67, while P12(n) and P22(n) approach about .33.
◼ This means that for large n, no matter what the initial state, there is a .67 chance that a person will be a cola 1 purchaser.
5.7 Classification of States in a
Markov Chain
◼ To understand the n-step transition in more
detail, we need to study how mathematicians
classify the states of a Markov chain.
◼ The following transition matrix illustrates most
of the following definitions.

.4 .6 0 0 0
.5 .5 0 0 0
 
P =  0 0 .3 .7 0 
 
 0 0 .5 .4 .1
 0 0 0 .8 .2

28
Yitbarek Takele (PhD), Department of Management, AAU
◼ Definition: Given two states i and j, a path from i to j is a sequence of transitions that begins in i and ends in j, such that each transition in the sequence has a positive probability of occurring.
◼ Definition: A state j is reachable from state i
if there is a path leading from i to j.
◼ Definition: Two states i and j are said to
communicate if j is reachable from i, and i is
reachable from j.

◼ Definition: A set of states S in a Markov chain is a closed set if no state outside of S is reachable from any state in S.
 A closed Markov chain is said to be ergodic if all its states are recurrent and aperiodic. In this case, the absolute probabilities after n transitions, a(n) = a(0) P^n, always converge uniquely to a limiting (steady-state) distribution as n → ∞ that is independent of the initial probabilities a(0).
◼ A state j is an absorbing state if it returns to itself with certainty in one transition, that is, pjj = 1.
◼ A state j is a transient state if it can reach another state but cannot itself be reached back from that state; in other words, if there exists a state i that is reachable from j, but state j is not reachable from state i. Mathematically, this will happen if lim (n→∞) pij(n) = 0 for all i.
◼ A state j is recurrent if the probability of being revisited from other states is 1. This can happen if, and only if, the state is not transient.
◼ A state j is periodic with period t > 1 if a return is possible only in t, 2t, 3t, . . . steps. This means that pjj(n) = 0 whenever n is not divisible by t.
◼ We can test the periodicity of a state by computing P^n and observing the values of pjj(n) for n = 2, 3, 4, . . . . These values will be positive only at multiples of the period of the state.
◼ For example, in the chain used in the original slides (its matrix and the powers P^2 through P^5 are not reproduced here), continuing with n = 6, 7, . . . shows that p11(n) and p33(n) are positive for even values of n and zero otherwise.
▪ This means that the period for states 1 and 3 is 2.
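Since the slides' example matrix is not reproduced above, the following sketch (not part of the original slides) uses a hypothetical deterministic two-state "flip" chain, which also has period 2, to show how the diagonal entries of P^n reveal periodicity.

import numpy as np

# Hypothetical chain (not the slides' example): a deterministic "flip" chain with period 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

for n in range(1, 7):
    Pn = np.linalg.matrix_power(P, n)
    # p_jj(n) is positive only when n is a multiple of the period (here, only for even n).
    print(n, np.diag(Pn))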

Representation of a Markov Chain as a Diagram

The chain can be drawn as a directed graph on the states A, B, C, D, with transition matrix

        A      B      C      D
  A    0.95    0     0.05    0
  B    0.2    0.5     0     0.3
  C     0     0.2     0     0.8
  D     0      0      1      0

Each directed edge, such as A→C, is associated with the positive transition probability from A to C.
Properties of Markov Chain states
States of Markov chains are classified by the
digraph representation (omitting the actual
probability values)
A, C and D are recurrent states: they are in strongly connected
components which are sinks in the graph.

B is not recurrent – it is a transient state

Alternative definitions:
A state s is recurrent if it can be
reached from any state reachable from
s; otherwise it is transient.

Irreducible Markov Chains

A Markov Chain is irreducible if the corresponding


graph is strongly connected (and thus all its states
are recurrent).

Ergodic Markov Chains

A Markov chain is ergodic if:

1. the corresponding graph is strongly connected, i.e., all states are recurrent, and
2. it is not periodic (the states are aperiodic).
Ergodic Markov Chains are important since they
guarantee the corresponding Markovian process
converges to a unique distribution, in which all
states have strictly positive probability.

5.8 Markov Analysis: Input & Output
◼ In the Markovian analysis, the analysis of a
given system is based on the following two
sets of input data.
 The transition matrix
 The initial condition
◼ Based on these inputs, the model provides the
following predictions:
a. The probability of the system being in a given state
at a given future time
b. The steady state probabilities

1. Inputs
i. Transition probabilities
◼ Example
 Assume that a person has Br. 200 at time t=0. At times
1, 2, …, he plays a game wherein he can gain or lose
Br. 100, with probabilities p and 1-p, respectively. He
would quit if & when he loses his initial amount of Br.
200, or doubles this amount to Br. 400.
 Now, if we define Xt as his capital position after the game at time t is played, then X0, X1, X2, …, Xt may be viewed as a discrete-time stochastic process.
 While X0 is known to be a constant equal to Br. 200,
we are not sure about X1 , X2 , X3 , and so on.
 For instance, X1 shall be equal to Br. 300 (if he gains)
with a probability of p and Br. 100 with a probability of
1-p. It may be noted that if Xt = Br. 400, then Xt+1 & all
other Xt ’s shall be equal Br. 400, since he quits after
the initial amount is doubled.
◼ Similarly, whenever he loses his initial capital so that
Xt = 0, then Xt+1 & all other Xt ’s shall be equal zero.

Transition Probabilities

                  Br. 0   Br. 100   Br. 200   Br. 300   Br. 400
       Br. 0        1        0         0         0         0
       Br. 100     1-p       0         p         0         0
P =    Br. 200      0       1-p        0         p         0
       Br. 300      0        0        1-p        0         p
       Br. 400      0        0         0         0         1

◼ From the given matrix, it can be seen that there
are five states viz.
Br. 0, Br. 100, Br. 200, Br. 300, and Br. 400
◼ From all the states, except states Br. 0 and Br.
400, we know that the person would be in the
next state (with Br. 100 more) with a probability
of p, and in the previous state (with Br. 100 less)
with a probability of 1-p.
 Of course, if the person reaches the state of Br. 0 or Br.
400, then he doesn’t play the game any more. Hence,
for each of these states, the transition probability would
be equal to 1, i.e., P00 = P44 = 1

ii. Initial conditions
◼ The initial conditions describe the situation the
system presently is in
◼ For the gambler’s ruin problem, the initial
condition is given by [0 0 1 0 0], which implies
that he currently is in the state where his capital
is Br. 200.
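As an illustration (not part of the original slides), the sketch below builds the gambler's transition matrix from the table above and computes the state distribution after k plays as Q(k) = Q(0)P^k; the value p = 0.5 is an assumed illustrative number, since the slides leave p general.

import numpy as np

p = 0.5   # probability of winning Br. 100 in one play (illustrative value, not given in the slides)

# States: Br. 0, 100, 200, 300, 400 (Br. 0 and Br. 400 are absorbing).
P = np.array([[1,     0,     0,     0, 0],
              [1 - p, 0,     p,     0, 0],
              [0,     1 - p, 0,     p, 0],
              [0,     0,     1 - p, 0, p],
              [0,     0,     0,     0, 1]], dtype=float)

Q0 = np.array([0, 0, 1, 0, 0], dtype=float)   # the gambler starts with Br. 200

for k in (1, 2, 10):
    Qk = Q0 @ np.linalg.matrix_power(P, k)    # Q(k) = Q(0) P^k
    print(k, np.round(Qk, 4))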

2. Outputs
i. Specific-state probabilities
◼ To calculate the probabilities of the system being in specific states, we let qi(k) represent the probability (q) of the system being in a certain state (i) in a certain period (k), called the state probability.
◼ Since the system would occupy one & only one
state at a given point in time, it is obvious that
the sum of all qi values would be equal to 1.

◼ In general, with a total of n states,
q1(k) + q2(k) + q3(k) + … + qn(k) =1, for every k
Where, k is the no. of transitions (0,1,2,…)
◼ Example
 The brand switching properties of the detergent customers
are examined for many months & noted when they stabilize
so that the probabilities of transition from brand D2 to D1, D2
and D3 are, respectively, 0.20, 0.50, & 0.30.
 With states of the system designated as D1, D2 and D3, qD1 (0)
represents the probability of a customer choosing brand D1
this month (at t=0) and qD1 (1) represents the probability of
choosing this brand after one transition. Similarly, qD1 (2) is
the probability of choosing this brand after two transitions,
etc.
 Thus, the probability distribution of the customer choosing
any given brand (D1, D2 and D3) in any given period(k) may
be expressed as a row vector as follows:
Q (k) = [qD1 (k) qD2 (k) qD3 (k)], and
 for n states
Q (k) = [q1 (k) q2 (k) q3 (k) . . . qn (k)]
 The initial condition is obviously expressed as Q(0)

◼ For the detergent example the initial market


share is given to be 30%, 45% and 25%. Thus,
 Q (0) = [qD1 (0) qD2 (0) qD3 (0)] = [0.30 0.45 0.25]
 To calculate Q (1), Q (0) would be post-multiplied by
the matrix P.

◼ Q(1) = [q1(1) q2(1) q3(1)] = Q(0)P. Thus,
◼ Q(k) = Q(k-1)P = Q(k-2)P^2 = . . . = Q(0)P^k

                              | 0.60  0.30  0.10 |
  Q(1) = [0.30  0.45  0.25] x | 0.20  0.50  0.30 | = [0.3075  0.3275  0.3650]
                              | 0.15  0.05  0.80 |
◼ Thus,
  Q(2) = Q(1)P = Q(0)P^2 = (0.30475  0.27425  0.42100)

◼ Thus, the market shares of the three brands of detergent D1, D2 and D3 are expected to be 30.5%, 27.4% & 42.1%, respectively, two months from now.
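The market-share figures above can be reproduced with the following NumPy sketch (an illustration, not part of the original slides); the 3x3 matrix is the detergent switching matrix implied by the switching probabilities used in this example.

import numpy as np

# Detergent brand-switching matrix (rows/columns ordered D1, D2, D3).
P = np.array([[0.60, 0.30, 0.10],
              [0.20, 0.50, 0.30],
              [0.15, 0.05, 0.80]])

Q0 = np.array([0.30, 0.45, 0.25])   # initial market shares

Q1 = Q0 @ P
Q2 = Q1 @ P
print(np.round(Q1, 5))   # market shares after one month
print(np.round(Q2, 5))   # -> [0.30475 0.27425 0.421  ]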

◼ Conditional Probabilities
◼ The probability that a customer buys D3 two months hence, given that his latest purchase has been D2, can be calculated by taking the following into account:
i. the customer switches from brand D2 to D1 in month 1, and to brand D3 in month 2
ii. the customer stays with brand D2 in month 1, and switches to brand D3 in month 2
iii. the customer switches from brand D2 to D3 in month 1, and then stays with D3 in month 2

From the transition probability matrix, the probability of switching from D2 to D1 in a month is 0.20; from D1 to D3 it is 0.10; from D2 to D3 it is 0.30; and the probabilities of staying at D2 and D3 are 0.50 & 0.80, respectively.

  D2 → D1 (0.20), then D1 → D3 (0.10)
  D2 → D2 (0.50), then D2 → D3 (0.30)
  D2 → D3 (0.30), then D3 → D3 (0.80)

  P(D1→D3 / D2) = 0.2 x 0.1 = 0.02
  P(D2→D3 / D2) = 0.5 x 0.3 = 0.15
  P(D3→D3 / D2) = 0.3 x 0.8 = 0.24
                  Total     = 0.41

This probability can also be obtained from the matrix P^2, in which the element P23 = 0.41.
ii. Steady state probabilities
◼ Steady-state probabilities are used to describe
the long-run behavior of a Markov chain.
◼ Theorem 1: Let P be the transition matrix for an s-state ergodic chain. Then there exists a vector π = [π1 π2 … πs] such that

                 | π1  π2  …  πs |
  lim  P^n   =   | π1  π2  …  πs |
  n→∞            |  …   …      … |
                 | π1  π2  …  πs |
◼ Theorem 1 tells us that for any initial state i,

  lim  Pij(n) = πj
  n→∞

◼ The vector π = [π1 π2 … πs] is often called the steady-state distribution, or equilibrium distribution, for the Markov chain.
◼ The steady-state probabilities are thus the equilibrium probabilities of the chain.
◼ At steady state,
  Q(k) = Q(k)P   [since Q(k) = Q(k-1)]
◼ Thus,
  Q = QP
◼ In matrix notation, where Q = [q1, q2, . . ., qn],
  [q1, q2, . . ., qn] = [q1, q2, . . ., qn] P
Transient Analysis & Intuitive Interpretation

◼ The behavior of a Markov chain before the steady state is reached is often called transient (or short-run) behavior.
◼ An intuitive interpretation can be given to the steady-state probability equations:

  πj (1 − pjj) = Σ (k≠j) πk pkj

◼ This equation may be viewed as saying that in the steady state, the "flow" of probability into each state must equal the flow of probability out of each state.
◼ P(that a particular transition leaves state j) = P(that a particular transition enters state j)
 Recall that in the steady state, the probability that the system is in state j is πj.
◼ From this observation, it follows that
  Probability that a particular transition leaves state j
  = (probability that the current period begins in j) x (probability that the current transition leaves j)
  = πj (1 − pjj)
AND
◼ Probability that a particular transition enters state j
  = Σ (k≠j) (probability that the current period begins in k) x (probability that the current transition enters j)
  = Σ (k≠j) πk pkj
◼ In matrix notation,
q1 = P11q1 + P21q2 + . . . + Pn1qn
q2 = P12q1 + P22q2 + . . . + Pn2qn
q3 = P13q1 + P23q2 + . . . + Pn3qn
.
.
.
qn = P1nq1 + P2nq2 + . . . + Pnnqn

◼ Example: Detergent
q1 = 0.60q1 + 0.20q2 + 0.15q3
q2 = 0.30q1 + 0.50q2 + 0.05q3
q3 = 0.10q1 + 0.30q2 + 0.80q3 and
q1 + q2 + q3 = 1
◼ To obtain the values of q1, q2, & q3
q1 = 0.60q1 + 0.20q2 + 0.15q3
q2 = 0.30q1 + 0.50q2 + 0.05q3 and
q1 + q2 + q3 = 1
◼ Restating the equations, we get
  0.40q1 - 0.20q2 - 0.15q3 = 0
  -0.30q1 + 0.50q2 - 0.05q3 = 0
  q1 + q2 + q3 = 1
◼ In matrix notation,

  |  0.40  -0.20  -0.15 |   | q1 |   | 0 |
  | -0.30   0.50  -0.05 | x | q2 | = | 0 |
  |  1.00   1.00   1.00 |   | q3 |   | 1 |
◼ Accordingly,
q1 = 0.293
q2 = 0.224
q3 = 0.483
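A minimal NumPy sketch (not from the original slides) that solves the restated system above, with the third balance equation replaced by the normalization condition:

import numpy as np

# Coefficient matrix of the restated steady-state system for the detergent example:
# two balance equations plus the normalization q1 + q2 + q3 = 1.
A = np.array([[ 0.40, -0.20, -0.15],
              [-0.30,  0.50, -0.05],
              [ 1.00,  1.00,  1.00]])
b = np.array([0.0, 0.0, 1.0])

q = np.linalg.solve(A, b)
print(np.round(q, 3))   # -> [0.293 0.224 0.483]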

5.9 Use of Steady-State Probabilities & Mean First Return Times in Decision Making
◼ A direct by-product of the steady-state probabilities is the determination of the expected no. of transitions before the system returns to state j for the first time. This is known as the mean first return time or the mean recurrence time, and in an n-state Markov chain it is computed as

  μjj = 1 / πj,  j = 1, 2, …, n

◼ In the Cola Example, suppose that each customer makes one purchase of cola during any week.
◼ Suppose there are 100 million cola customers.
◼ One selling unit of cola costs the company $1 to produce and is sold for $2.
◼ Example-1: Steady state
 For $500 million/year, an advertising firm guarantees to
decrease from 10% to 5% the fraction of cola 1 customers who
switch after a purchase.
 Should the company that makes cola 1 hire the firm?
◼ At present, a fraction π1 = ⅔ of all purchases are cola
1 purchases.
◼ Each purchase of cola 1 earns the company a $1 profit.
We can calculate the annual profit as $3,466,666,667.
◼ The advertising firm is offering to change the P matrix
to
.95 .05
P1 =  
.20 .80 
60
Yitbarek Takele (PhD), Department of Management, AAU
◼ For P1, the steady-state equations become
π1 = .95π1+.20π2
π2 = .05π1+.80π2
◼ Replacing the second equation by π1 + π2 = 1
and solving, we obtain π1=.8 and π2 = .2.
◼ Now the cola 1 company’s annual profit will be
$3,660,000,000.
◼ Hence, the cola 1 company should hire the ad
agency.
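A quick check of these profit figures, assuming each of the 100 million customers makes 52 weekly purchases per year (the number 52 is an assumption implied, not stated, by the slides):

  Without the campaign: (2/3) x 100,000,000 x 52 x $1 ≈ $3,466,666,667 per year
  With the campaign:    0.8 x 100,000,000 x 52 x $1 − $500,000,000 = $4,160,000,000 − $500,000,000 = $3,660,000,000 per year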

◼ Example-2: Mean Return Times
(case: soil condition vs productivity)

 Every year, at the beginning of the gardening season


(March through September), a gardener uses a
chemical test to check soil condition. Depending on
the outcome of the test, productivity for the new
season falls in one of the three states: (1) good, (2)
fair, and (3) poor. Over the years, the gardener has
observed that last year’s soil condition impacts
current year’s productivity & that the situation can be
described by the following Markov chain:

 The gardener can alter the transition probabilities P by using fertilizer to boost soil condition. In this case the transition matrix becomes:

      | .30  .60  .10 |
  P = | .10  .60  .30 |
      | .05  .40  .55 |

 To determine the steady-state probability distribution of the gardener problem with fertilizer, we have
  π1 = .3π1 + .1π2 + .05π3
  π2 = .6π1 + .6π2 + .40π3
  π3 = .1π1 + .3π2 + .55π3
  π1 + π2 + π3 = 1
◼ The solution is
  π1 = 0.1017,  π2 = 0.5254,  π3 = 0.3729
◼ The corresponding mean first return times are
  μ11 = 1/π1 ≈ 9.83,  μ22 = 1/π2 ≈ 1.90,  μ33 = 1/π3 ≈ 2.68 seasons
◼ This means that depending on the current state
of the soil, it will take approximately 10
gardening seasons for the soil to return to good
state, 2 seasons to return to a fair state, and 3
seasons to return to a poor state. Bleak result?
◼ A more aggressive program could be as follows:

Cost model
◼ Example:
 Consider the gardener problem with fertilizer.
Suppose that the cost of the fertilizer is Br. 500 per
bag & the garden needs two bags if the soil is good.
The amount of fertilizer is increased by 25% if the
soil is fair & 60% if the soil is poor. The gardener
estimates the annual yield to be worth Br. 2,500 if no
fertilizer is used & Br. 4,200 if fertilizer is applied. Is
it worth to use fertilizer?

◼ Solution
 Using the steady-state probabilities computed earlier, we get
 Expected annual cost of fertilizer
  = 2 x Br. 500 x π1 + (1.25 x 2) x Br. 500 x π2 + (1.6 x 2) x Br. 500 x π3
  = Br. 1,000 x .1017 + Br. 1,250 x .5254 + Br. 1,600 x .3729
  = Br. 1,355.1
 Increase in the annual value of the yield
  = Br. 4,200 – Br. 2,500 = Br. 1,700
 The result shows that, on the average, the use of fertilizer nets Br. 1,700 – Br. 1,355.1 = Br. 344.9. Hence, the use of fertilizer is recommended.
5.10 Mean First Passage Times of
Ergodic Chains
◼ In 5.9 we used the steady-state probabilities to compute μjj, the mean return time for state j. In this section we discuss the determination of the mean first passage time μij, the expected no. of transitions needed to reach state j from state i for the first time.
◼ The calculations are rooted in the determination of the probability of at least one passage from state i to state j,

  fij = Σ (n=1 to ∞) fij(n)

where fij(n) is the probability of a first passage from state i to state j in n transitions. An expression for fij(n) can be determined recursively from

  pij(n) = fij(n) + Σ (k=1 to n-1) fij(k) pjj(n-k),  n = 1, 2, ...
◼ The transition matrix P = ||pij|| is assumed to have m states.
1. If fij < 1, it is not certain that the system will ever pass from state i to state j, and μij = ∞.
2. If fij = 1, the Markov chain is ergodic, and the mean first passage time from state i to state j is computed as

  μij = Σ (n=1 to ∞) n fij(n)

◼ A simpler way to determine the mean first passage times for all the states in an m-state transition matrix, P, is to use the following matrix-based formula:

  μij = (I − Nj)^(-1) 1,  j ≠ i

where
  I  = (m-1) x (m-1) identity matrix
  Nj = transition matrix P less its jth row and jth column (the row and column of target state j)
  1  = (m-1)-element column vector with all elements equal to 1
The matrix operation (I − Nj)^(-1) 1 essentially sums the columns of (I − Nj)^(-1).
◼ Example:
 Consider the gardener Markov chain with fertilizer
once again.
.30 .60 .10
P = .10 .60 .30
 
.05 .40 .55

 To demonstrate the computation of the first passage time to a specific state from all others, consider the passage from states 2 and 3 (fair and poor) to state 1 (good). Thus, j = 1 and

  N1 = | .60  .30 |,   I − N1 = |  .40  -.30 |,   (I − N1)^(-1) = | 7.50  5.00 |
       | .40  .55 |             | -.40   .45 |                   | 6.67  6.67 |
◼ Thus,

  | μ21 |   | 7.50  5.00 |   | 1 |   | 12.50 |
  | μ31 | = | 6.67  6.67 | x | 1 | = | 13.34 |

◼ This means that, on the average, it will take 12.5 seasons to pass from fair to good soil and 13.34 seasons to go from poor to good soil.
◼ Similar calculations can be carried out to obtain μ12 and μ32 from (I − N2), and μ13 and μ23 from (I − N3).
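The passage-time vector above can be reproduced with a short NumPy sketch (an illustration, not part of the original slides):

import numpy as np

# Gardener-with-fertilizer transition matrix (states: 1 good, 2 fair, 3 poor).
P = np.array([[0.30, 0.60, 0.10],
              [0.10, 0.60, 0.30],
              [0.05, 0.40, 0.55]])

j = 0                                   # target state 1 (good), 0-based index
keep = [i for i in range(3) if i != j]  # drop the row and column of the target state
N1 = P[np.ix_(keep, keep)]

mu = np.linalg.solve(np.eye(2) - N1, np.ones(2))   # (I - N1)^(-1) * 1
print(np.round(mu, 2))   # -> [12.5  13.33]  (mu_21 and mu_31; 13.34 in the slides due to rounding)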

Solving for Steady-State Probabilities and
Mean First Passage Times on the
Computer
◼ Since we solve steady-state probabilities and
mean first passage times by solving a system
of linear equations, we may use LINDO to
determine them.
◼ Simply type in an objective function of 0, and
type the equations you need to solve as your
constraints.
◼ Alternatively, you may use the LINGO model in
the file Markov.lng to determine steady-state
probabilities and mean first passage times for
an ergodic chain.

5.11 Absorbing Chains
◼ Many interesting applications of Markov chains
involve chains in which some of the states are
absorbing and the rest are transient states.
◼ This type of chain is called an absorbing
chain.
◼ To see why we are interested in absorbing
chains we consider the following accounts
receivable example.

Accounts Receivable Example

◼ The accounts receivable situation of a firm is often


modeled as an absorbing Markov chain.
◼ Suppose a firm assumes that an account is uncollected if
the account is more than three months overdue.
◼ Then at the beginning of each month, each account may
be classified into one of the following states:
◼ State 1 New account
◼ State 2 Payment on account is one month overdue
◼ State 3 Payment on account is two months overdue
◼ State 4 Payment on account is three months overdue
◼ State 5 Account has been paid
◼ State 6 Account is written off as bad debt

◼ Suppose that past data indicate that the
following Markov chain describes how the
status of an account changes from one month
to the next month:
              New  1 month  2 months  3 months  Paid  Bad Debt
  New          0     .6        0         0       .4      0
  1 month      0      0       .5         0       .5      0
  2 months     0      0        0        .4       .6      0
  3 months     0      0        0         0       .7     .3
  Paid         0      0        0         0        1      0
  Bad Debt     0      0        0         0        0      1
◼ To simplify our example, we assume that after
three months, a debt is either collected or
written off as a bad debt.
◼ Once a debt is paid up or written off as a bad debt, the account is closed, and no further transitions occur.
◼ Hence, Paid or Bad Debt are absorbing states.
Since every account will eventually be paid or
written off as a bad debt, New, 1 month, 2
months, and 3 months are transient states.

◼ A typical new account will be absorbed as
either a collected debt or a bad debt.
◼ What is the probability that a new account will
eventually be collected?
◼ To answer these questions we must write the transition matrix in a particular form. We assume s − m transient states and m absorbing states. The transition matrix is then written in the form

              s-m columns   m columns
  s-m rows  |      Q            R    |
  P =       |                        |
  m rows    |      0            I    |
◼ The transition matrix for this example is

              New  1 month  2 months  3 months  Paid  Bad Debt
  New          0     .6        0         0       .4      0
  1 month      0      0       .5         0       .5      0
  2 months     0      0        0        .4       .6      0
  3 months     0      0        0         0       .7     .3
  Paid         0      0        0         0        1      0
  Bad Debt     0      0        0         0        0      1

  (Q is the 4x4 block of transient-to-transient probabilities in the upper left; R is the 4x2 block of transient-to-absorbing probabilities in the upper right.)

◼ Then s = 6, m = 2, and Q and R are as shown.
1. What is the probability that a new account will
eventually be collected?
2. What is the probability that a one-month
overdue account will eventually become a bad
debt?
3. If the firm’s sales average $100,000 per
month, how much money per year will go
uncollected?
1 − .6 0 0 
0 1 − .5 0 
I −Q= 
0 0 1 − .4
 
0 0 0 1 
80
Yitbarek Takele (PhD), Department of Management, AAU
◼ By using the Gauss-Jordan method, we find
that
                    t1   t2   t3   t4
               t1 |  1   .60  .30  .12 |
  (I-Q)^(-1) = t2 |  0    1   .50  .20 |
               t3 |  0    0    1   .40 |
               t4 |  0    0    0    1  |
◼ To answer questions 1-3, we need to compute

                 | 1  .60  .30  .12 |   | .4   0  |   | .964  .036 |
  (I-Q)^(-1) R = | 0   1   .50  .20 | x | .5   0  | = | .940  .060 |
                 | 0   0    1   .40 |   | .6   0  |   | .880  .120 |
                 | 0   0    0    1  |   | .7  .3  |   | .700  .300 |
then
1. t1 = New, a1 = Paid. Thus, the probability that a new
account is eventually collected is element 11 of (I –
Q)-1R =.964.
2. t2 = 1 month, a2 = Bad Debt. Thus, the probability
that a one-month overdue account turns into a bad
debt is element 22 of (I –Q)-1R = .06.
3. From answer 1, only 3.6% of all debts go uncollected. Since yearly accounts receivable average $1,200,000, (0.036)(1,200,000) = $43,200 per year will be uncollected.
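The absorption probabilities used in these answers can be reproduced with the following NumPy sketch (an illustration, not part of the original slides):

import numpy as np

# Transient-to-transient block Q (rows/columns: New, 1 month, 2 months, 3 months).
Q = np.array([[0, 0.6, 0,   0  ],
              [0, 0,   0.5, 0  ],
              [0, 0,   0,   0.4],
              [0, 0,   0,   0  ]])

# Transient-to-absorbing block R (columns: Paid, Bad Debt).
R = np.array([[0.4, 0  ],
              [0.5, 0  ],
              [0.6, 0  ],
              [0.7, 0.3]])

absorption = np.linalg.solve(np.eye(4) - Q, R)   # (I - Q)^(-1) R
print(np.round(absorption, 3))
# Row "New":     [0.964 0.036] -> a new account is eventually collected with probability .964
# Row "1 month": [0.94  0.06 ] -> a one-month overdue account becomes a bad debt with probability .06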

Discussion

Session
I Thank You

