Markov Chains 2023

The document describes several examples of stochastic processes that can be modeled as Markov chains. It defines key concepts such as discrete-time and continuous-time stochastic processes, states, transitions, and transition probabilities. It then provides examples of Markov chains modeling situations like the gambler's ruin problem, cola purchases, choosing balls from an urn, and modeling weather. It also describes how to represent Markov chains using transition matrices and how to calculate n-step transition probabilities.

17.1 What is a Stochastic Process?
◼ Suppose we observe some characteristic of a system
at discrete points in time.
◼ Let Xt be the value of the system characteristic at
time t. In most situations, Xt is not known with
certainty before time t and may be viewed as a
random variable.
◼ A discrete-time stochastic process is simply a description of the relation between the random variables X0, X1, X2, . . . .
◼ A continuous-time stochastic process is simply a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time.

◼ For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.
Discrete time Markov Chain
◼ The Markov chain is named after Prof. Andrei A. Markov (1856–1922), who first published his results in 1906.
◼ A Markov chain concerns a sequence of random variables, corresponding to the states of a certain system, such that the state at one time epoch depends only on the state at the previous time epoch.
Discrete time Markov Chain

X0 = 0 CUSTOMERS IN THE MARKET
X1 = 2 CUSTOMERS IN THE MARKET
X2 = 5 CUSTOMERS IN THE MARKET
X3 = 7 CUSTOMERS IN THE MARKET

P(X4 = 10 | X3 = 7, X2 = 5, X1 = 2, X0 = 0)
= P(X4 = 10 | X3 = 7)   (MARKOVIAN PROPERTY)
◼ Formally, the Markovian property says that

   P(Xt+1 = it+1 | Xt = it, Xt-1 = it-1, . . . , X0 = i0) = P(Xt+1 = it+1 | Xt = it)   (1)

◼ Essentially, (1) says that the probability distribution of the state at time t + 1 depends on the state at time t (it) and does not depend on the states the chain passed through on the way to it at time t.
◼ In our study of Markov chains, we make the further assumption that for all states i and j and all t, P(Xt+1 = j | Xt = i) is independent of t. This assumption allows us to write

   P(Xt+1 = j | Xt = i) = pij   (2)

◼ where pij is the probability that, given the system is in state i at time t, it will be in state j at time t + 1.
◼ If the system moves from state i during
one period to state j during the next
period, we say that a transition from i
to j has occurred. The pij’s are often
referred to as the transition
probabilities for the Markov chain.
◼ TRANSITION: movement of the system from state i to state j.
◼ TRANSITION PROBABILITIES: for example, if from state s the system is equally likely to move to each of three states t, e, and h, then
   pst = pse = psh = 1/3
◼ Equation (2) implies that the probability law
relating the next period’s state to the current
state does not change (or remains stationary)
over time.

◼ For this reason, (2) is often called the Stationarity Assumption. Any Markov chain that satisfies (2) is called a stationary Markov chain.
◼ In most applications, the transition probabilities are displayed as an s x s transition probability matrix P. The transition probability matrix P may be written as

        p11  p12  ...  p1s
   P =  p21  p22  ...  p2s
        ...  ...  ...  ...
        ps1  ps2  ...  pss

◼ Given that the state at time t is i, the process must be somewhere at time t + 1. This means that for each i,

   pi1 + pi2 + ... + pis = 1,   and each pij >= 0.
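As an illustrative sketch (not part of the original slides), the row-sum condition is easy to check numerically; the 3-state matrix below is a hypothetical example:

```python
import numpy as np

# A hypothetical 3-state transition matrix (rows: current state, columns: next state)
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

# Every entry must be a valid probability and every row must sum to 1.
assert np.all(P >= 0) and np.all(P <= 1)
assert np.allclose(P.sum(axis=1), 1.0)
print("P is a valid (stochastic) transition matrix")
```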
The Gambler’s Ruin
◼ At time 0, I have $2. At times 1, 2, . . .
, I play a game in which I bet $1. With
probability p, I win the game, and with
probability 1 - p, I lose the game.
◼ My goal is to increase my capital to $4,
and as soon as I do, the game is over.
◼ The game is also over if my capital is
reduced to $0.
◼ System state (what I observe):
◼ Xt = the money owned at time t.
◼ Possible values of Xt:
◼ $0, $1, $2, $3, $4
◼ READY TO CONSTRUCT THE
TRANSITION MATRIX
◼ Since the amount of money I have after t + 1
plays of the game depends on the past
history of the game only through the amount
of money I have after t plays, we definitely
have a Markov chain.
◼ Since the rules of the game don’t change over time, we also have a stationary Markov chain. The transition matrix is as follows (state i means that we have i dollars):

          $0    $1    $2    $3    $4
   $0      1     0     0     0     0
   $1    1-p     0     p     0     0
   $2      0   1-p     0     p     0
   $3      0     0   1-p     0     p
   $4      0     0     0     0     1
Graphical Representation of Transition
Matrix for Gambler’s Ruin
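A minimal simulation sketch of this chain (assuming p = 0.5 and the $2 starting capital above; the function name is illustrative, not from the original):

```python
import random

def gamblers_ruin(p=0.5, start=2, goal=4):
    """Play $1 bets until capital hits $0 or the $4 goal; return final capital."""
    x = start
    while 0 < x < goal:
        x += 1 if random.random() < p else -1
    return x

# Estimate the probability of reaching the $4 goal from $2 when p = 0.5.
trials = 100_000
wins = sum(gamblers_ruin() == 4 for _ in range(trials))
print(f"Estimated P(reach $4 before $0) = {wins / trials:.3f}")  # about 0.5
```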
The Cola Example
◼ Suppose the entire cola industry produces only two
colas. Given that a person last purchased cola 1,
there is a 90% chance that her next purchase will be
cola 1. Given that a person last purchased cola 2,
there is an 80% chance that her next purchase will
be cola 2.

◼ 1 If a person is currently a cola 2 purchaser, what is the probability that she will purchase cola 1 two purchases from now?
◼ 2 If a person is currently a cola 1 purchaser, what is the probability that she will purchase cola 1 three purchases from now?
◼ We view each person’s purchases as a
Markov chain with the state at any given time
being the type of cola the person last
purchased.
◼ Hence, each person’s cola purchases may be
represented by a two-state Markov chain,
where
◼ State 1 = person has last purchased cola 1
◼ State 2 = person has last purchased cola 2
◼ If we define Xn to be the type of cola purchased by a person on her nth future cola purchase (present cola purchase = X0), then X0, X1, . . . may be described as the Markov chain with the following transition matrix:

           Cola 1   Cola 2
   Cola 1    .90      .10
   Cola 2    .20      .80
Choosing Balls from an
Urn
◼ An urn contains two unpainted balls at present. We
choose a ball at random and flip a coin.
◼ If the chosen ball is unpainted and the coin comes up
heads, we paint the chosen unpainted ball red; if the
chosen ball is unpainted and the coin comes up tails,
we paint the chosen unpainted ball black.
◼ If the ball has already been painted, then (whether
heads or tails has been tossed) we change the color
of the ball (from red to black or from black to red).
◼ To model this situation as a stochastic
process, we define time t to be the time after
the coin has been flipped for the tth time and
the chosen ball has been painted.
◼ The state at any time may be described by
the vector [u r b], where u is the number of
unpainted balls in the urn, r is the number of
red balls in the urn, and b is the number of
black balls in the urn.
◼ We are given that X0 = [2 0 0]. After the first
coin toss, one ball will have been painted
either red or black, and the state will be
either [1 1 0] or [1 0 1].
◼ Hence, we can be sure that X1 = [1 1 0] or X1 = [1 0 1]. Clearly, there must be some sort of relation between the Xt’s. For example, if Xt = [0 2 0], we can be sure that Xt+1 will be [0 1 1].
◼ Since the state of the urn after the next
coin toss only depends on the past
history of the process through the state
of the urn after the current coin toss,
we have a Markov chain.
Computations of Transition Probabilities If Current State Is [1 1 0]
◼ With one unpainted and one red ball: with probability 1/2 we choose the unpainted ball, and the coin then paints it red (giving [0 2 0], probability 1/4) or black (giving [0 1 1], probability 1/4); with probability 1/2 we choose the red ball and repaint it black (giving [1 0 1], probability 1/2).
Graphical Representation of Transition
Matrix for Urn
Exercise 1
◼ In Smalltown, 90% of all sunny days
are followed by sunny days, and 80%
of all cloudy days are followed by
cloudy days. Use this information to
model Smalltown’s weather as a Markov
chain.
Exercise 2
◼ Referring to exercise 1, suppose that tomorrow’s Smalltown
weather depends on the last two days of Smalltown weather, as
follows:
◼ (1) If the last two days have been sunny, then 95% of the time,
tomorrow will be sunny.
◼ (2) If yesterday was cloudy and today is sunny, then 70% of
◼ the time, tomorrow will be sunny.
◼ (3) If yesterday was sunny and today is cloudy, then 60% of the
time, tomorrow will be cloudy.
◼ (4) If the last two days have been cloudy, then 80% of the time,
tomorrow will be cloudy.

◼ Using this information, model Smalltown’s weather as a Markov chain.
◼ Tomorrow’s weather condition (sunny or
cloudy) depends on today’s and
yesterday’s weather condition;
◼ S S → S (0.95), C (0.05)
◼ C S → S (0.70)
◼ S C → S (0.40)
◼ C C → S (0.20)
◼ Let i be the weather condition of the last two days: i = SS, CS, SC, CC.
        SS     CS     SC     CC
   SS   0.95    0     0.05    0
   CS   0.70    0     0.30    0
   SC    0     0.40    0     0.60
   CC    0     0.20    0     0.80
n-Step Transition Probabilities
◼ Suppose we are studying a Markov
chain with a known transition
probability matrix P.
◼ If a Markov chain is in state i at time m,
what is the probability that n periods
later the Markov chain will be in state j?
◼ Since we are dealing with a stationary Markov
chain, this probability will be independent of
m, so we may write
◼ P(Xm+n = j | Xm = i) = P(Xn = j | X0 = i) = Pij(n)
◼ where Pij(n) is called the n-step probability of a transition from state i to state j.
◼ Clearly, Pij(1) = pij. To determine Pij(2), note that if the system is now in state i, then for the system to end up in state j two periods from now, we must go from state i to some state k and then go from state k to state j. This reasoning allows us to write

   Pij(2) = Σk pik pkj   (3)

where the sum is over all possible states k. The right-hand side of (3) is just the scalar product of row i of the P matrix with column j of the P matrix.
◼ Hence, Pij(2) is the ijth element of the matrix P^2. By extending this reasoning, it can be shown that for n > 1,

   Pij(n) = ijth element of P^n
The Cola Example (Revisited)
◼ Recall that a cola 1 purchaser buys cola 1 next with probability .90, and a cola 2 purchaser buys cola 2 next with probability .80, so each person’s purchases form a two-state Markov chain with transition matrix

           Cola 1   Cola 2
   Cola 1    .90      .10
   Cola 2    .20      .80

◼ 1 If a person is currently a cola 2 purchaser, what is the probability that she will purchase cola 1 two purchases from now?
◼ 2 If a person is currently a cola 1 purchaser, what is the probability that she will purchase cola 1 three purchases from now?
◼ 1 We seek P(X2 = 1 | X0 = 2) = P21(2) = element 21 of P^2:

   P^2 =  .83  .17
          .34  .66

   Hence, P21(2) = .34.

◼ This means that the probability is .34 that two purchases in the future a cola 2 drinker will purchase cola 1.
◼ By using basic probability theory, we may obtain this answer in a different way.
◼ Note that P21(2) = (probability that next purchase is cola 1 and second purchase is cola 1) + (probability that next purchase is cola 2 and second purchase is cola 1)
   = p21 p11 + p22 p21 = (.20)(.90) + (.80)(.20) = .34


◼ 2 If a person is currently a cola 1 purchaser, what is the probability that she will purchase cola 1 three purchases from now?
◼ We seek P11(3) = element 11 of P^3:

   P^3 = P^2 · P =  .781  .219
                    .438  .562

   Hence, P11(3) = .781.
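A quick numerical check of both answers (a sketch using numpy; not from the original slides):

```python
import numpy as np

# Cola transition matrix: state 1 = last bought cola 1, state 2 = last bought cola 2
P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

print(P2[1, 0])  # P21(2): 0-indexed row 1, column 0 -> 0.34
print(P3[0, 0])  # P11(3) -> 0.781
```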
Classification of States in a Markov Chain
◼ A state j is reachable from state i if
there is a path leading from i to j.
◼ Two states i and j are said to
communicate if j is reachable from i,
and i is reachable from j.
◼ 1. If state i communicates with state j,
then state j communicates with state i.
◼ 2. If state i communicates with state j
and state j communicates with state k,
then state i communicates with state k.
For the transition probability matrix P represented in the figure, state 5 is
reachable from state 3 (via the path 3–4–5), but state 5 is not reachable
from state 1 (there is no path from 1 to 5). Also, states 1 and 2
communicate (we can go from 1 to 2 and from 2 to 1).
◼ A set of states S in a Markov chain is a
closed set if no state outside of S is
reachable from any state in S.
◼ Two states that communicates are said
to be in the same class.
◼ A Markov chain is said to be irreducible,
if all states belong to the same class,
i.e. they communicate with each other.
◼ The states may be partitioned into one or
more separate classes such that those states
that communicate with each other are in the
same class. (A class may consist of a single
state).

◼ If there is only one class, i.e., all the states


communicate, the Markov chain is said to be
irreducible.
From the Markov chain with transition matrix P, S1 = {1, 2} and S2 = {3, 4,
5} are both closed sets. Observe that once we enter a closed set, we can
never leave the closed set (no arc begins in S1 and ends in S2 or begins in S2
and ends in S1).
◼ A state i is an absorbing state if pii = 1.
◼ In Example 1, the gambler’s ruin, states 0 and 4 are absorbing states. Of course, an absorbing state is a closed set containing only one state.
◼ A state i is a transient state if there exists a state j that is
reachable from i, but the state i is not reachable from state j.
◼ In other words, a state i is transient if there is a way to leave
state i that never returns to state i.
◼ In the gambler’s ruin example, states 1, 2, and 3 are transient
states.
◼ For any state i in a Markov chain, let fi
be the probability that starting in state
i, the process will ever re-enter state i.
State i is said to be recurrent if fi = 1
and transient if fi < 1.
◼ If a state is not transient, it is called a
recurrent state.

◼ In Example 1, states 0 and 4 are recurrent states (and also absorbing states).
◼ In Example 2, [0 2 0], [0 0 2], and [0 1 1] are recurrent states.
◼ In the cola example, all states are recurrent.
◼ A state i is periodic with period k >1 if
k is the smallest number such that all
paths leading from state i back to state
i have a length that is a multiple of k.
◼ If a recurrent state is not periodic, it is
referred to as aperiodic.
◼ For the Markov chain with transition matrix

        0  1  0
   P =  0  0  1
        1  0  0

A Periodic Markov Chain, k = 3
◼ each state has period 3. For example, if we begin in state 1, the only way to return to state 1 is to follow the path 1–2–3–1 some number of times (say, m).
◼ Hence, any return to state 1 will take 3m transitions, so state 1 has period 3. Wherever we are, we are sure to return three periods later.
◼ If all states in a chain are recurrent,
aperiodic, and communicate with each
other, the chain is said to be ergodic.
◼ The gambler’s ruin example is not an ergodic chain, because (for example) states 3 and 4 do not communicate.
◼ Example 2 is also not an ergodic chain, because (for example) [2 0 0] and [0 1 1] do not communicate.
◼ The cola example is an ergodic Markov chain.
Nonergodic

◼ P2 is not ergodic because there are two closed classes of states (class 1 = {1, 2} and class 2 = {3, 4}), and the states in different classes do not communicate with each other.
Steady-State Probabilities
◼ Let P be the transition matrix for an s-state ergodic chain. Then there exists a vector p = [p1 p2 ... ps] such that

   lim (n→∞) P^n =  p1  p2  ...  ps
                    p1  p2  ...  ps
                    ...
                    p1  p2  ...  ps

   (a matrix in which every row equals p).
◼ The vector p = [p1 p2 ... ps] is often called the steady-state distribution, or equilibrium distribution, for the Markov chain.
◼ For a given chain with transition matrix P, how can we find the steady-state probability distribution?
◼ Each pj satisfies pj = Σk pk pkj. In matrix form, this may be written as

   p = pP,   together with   p1 + p2 + ... + ps = 1.
SSP for Cola Example
◼ For the cola chain, the steady-state equations are p1 = .90p1 + .20p2 and p2 = .10p1 + .80p2, together with p1 + p2 = 1. Solving gives p1 = 2/3 and p2 = 1/3.
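The same steady state can be found numerically; a sketch (not from the slides) that solves p = pP with the normalization constraint:

```python
import numpy as np

P = np.array([[0.90, 0.10],
              [0.20, 0.80]])
s = P.shape[0]

# Solve p(P - I) = 0 with sum(p) = 1 by replacing one equation
# with the normalization condition.
A = P.T - np.eye(s)
A[-1, :] = 1.0                  # replace last equation with sum(p) = 1
b = np.zeros(s); b[-1] = 1.0
p = np.linalg.solve(A, b)
print(p)                        # [0.6667, 0.3333] -> p1 = 2/3, p2 = 1/3
```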
Use of Steady-State Probabilities
in Decision Making
◼ In Example 4, suppose that each customer makes one purchase of cola during any week (52 weeks = 1 year).
◼ Suppose there are 100 million cola customers. One
selling unit of cola costs the company $1 to produce
and is sold for $2. For $500 million per year, an
advertising firm guarantees to decrease from 10% to
5% the fraction of cola 1 customers who switch to
cola 2 after a purchase. Should the company that
makes cola 1 hire the advertising firm?
◼ At present, a fraction p1 = 2/3 of all purchases are cola 1 purchases. Each purchase of cola 1 earns the company a $1 profit. Since there are a total of 52(100,000,000) = 5.2 billion cola purchases each year, the cola 1 company’s current annual profit is

   (2/3)(5,200,000,000) = $3,466,666,667.
Solution
◼ The advertising firm is offering to change the P matrix to

   P1 =  .95  .05
         .20  .80

◼ For P1, the steady-state equations become
   p1 = .95p1 + .20p2
   p2 = .05p1 + .80p2
◼ Replacing the second equation by p1 + p2 = 1 and solving, we obtain p1 = .80 and p2 = .20.
◼ Now the cola 1 company’s annual profit will be

   (.80)(5,200,000,000) - 500,000,000 = $3,660,000,000.

◼ Hence, the cola 1 company should hire the ad agency.
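A small sketch comparing the two annual profits (reusing the steady-state solver idea above; not part of the original):

```python
import numpy as np

def steady_state(P):
    """Solve p = pP with sum(p) = 1 for an ergodic chain."""
    s = P.shape[0]
    A = P.T - np.eye(s)
    A[-1, :] = 1.0
    b = np.zeros(s); b[-1] = 1.0
    return np.linalg.solve(A, b)

purchases = 52 * 100_000_000          # 5.2 billion cola purchases per year

P_old = np.array([[0.90, 0.10], [0.20, 0.80]])
P_new = np.array([[0.95, 0.05], [0.20, 0.80]])

profit_old = steady_state(P_old)[0] * purchases                # ~$3.467 billion
profit_new = steady_state(P_new)[0] * purchases - 500_000_000  # $3.66 billion
print(profit_new > profit_old)        # True -> hire the advertising firm
```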


◼ Henry, a persistent salesman, calls North's
Hardware Store once a week hoping to speak
with the store's buying agent, Shirley.
◼ If Shirley does not accept Henry's call this
week, the probability she will do the same
next week is .35.
◼ On the other hand, if she accepts Henry's call
this week, the probability she will not do
so next week is .20.
Exercise: North’s Hardware

◼ Transition Matrix

                            Next Week's Call
                           Refuses   Accepts
   This Week's   Refuses     .35       .65
   Call          Accepts     .20       .80
Example: North’s Hardware

◼ Steady-State Probabilities
   Solving pRefuses = .35 pRefuses + .20 pAccepts together with pRefuses + pAccepts = 1 gives pRefuses ≈ .2353 and pAccepts ≈ .7647.
◼ Question
   How many times per year can Henry expect to talk to Shirley?
◼ Answer
   To find the expected number of accepted calls per year, find the long-run proportion (probability) of a call being accepted and multiply it by 52 weeks: 52(.7647) ≈ 40 calls per year.
An Inventory Example
◼ A camera store stocks a particular model
camera that can be ordered weekly. Let D1,
D2, . . . represent the demand for this camera
(the number of units that would be sold if the
inventory is not depleted) during the first
week, second week, . . . , respectively.
◼ It is assumed that the Di are independent
and identically distributed random variables
having a Poisson distribution with a mean of
1.
◼ Let X0 represent the number of cameras on
hand at the outset, X1 the number of cameras
on hand at the end of week 1, X2 the number
of cameras on hand at the end of week 2,
and so on.
◼ Assume that X0 = 3. On Saturday night the
store places an order that is delivered in time
for the next opening of the store on Monday.
◼ The store uses the following order policy:
◼ If there are no cameras in stock, the store orders 3
cameras.
◼ However, if there are any cameras in stock, no order
is placed.
◼ Sales are lost when demand exceeds the inventory
on hand.
◼ Thus, {Xt} for t = 0, 1, . . . is a stochastic process of
the form just described.
◼ The possible states of the process are the integers 0,
1, 2, 3, representing the possible number of cameras
on hand at the end of the week.
◼ The random variables Xt are dependent and may be evaluated iteratively by the expression

   Xt+1 = max{3 - Dt+1, 0}    if Xt = 0
   Xt+1 = max{Xt - Dt+1, 0}   if Xt ≥ 1

   for t = 0, 1, 2, . . . .
◼ Given that Dt+1 has a Poisson distribution with a mean of 1, P{Dt+1 = n} = e^-1 / n!, so P{Dt+1 = 0} = 0.368, P{Dt+1 = 1} = 0.368, P{Dt+1 = 2} = 0.184, and

   P{Dt+1 ≥ 3} = 1 - P{Dt+1 ≤ 2} = 1 - (0.368 + 0.368 + 0.184) = 0.080.


◼ For the first row of P, we are dealing
with a transition from state Xt = 0 to
some state Xt+1.
◼ A transition from Xt = 0 to Xt+1 = 0
implies that the demand for cameras in
week t + 1 is 3 or more after 3 cameras
are added to the depleted inventory at
the beginning of the week, so
p00 = P{Dt+1 ≥ 3} = 0.080.
◼ For the other rows of P, the formula for the next state is
   Xt+1 = max{Xt - Dt+1, 0}   if Xt ≥ 1.
◼ This implies that Xt+1 ≤ Xt, so p12 = 0, p13 = 0, and p23 = 0. For the other transitions, for example,
   p20 = P{Dt+1 ≥ 2} = 1 - P{Dt+1 ≤ 1} = 1 - (0.368 + 0.368) = 0.264.
◼ For the last row of P, week t + 1 begins with 3 cameras in inventory, so the calculations for the transition probabilities are exactly the same as for the first row. Consequently, the complete transition matrix is

          0      1      2      3
   0   0.080  0.184  0.368  0.368
   1   0.632  0.368    0      0
   2   0.264  0.368  0.368    0
   3   0.080  0.184  0.368  0.368
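A sketch (not from the text) that builds this matrix directly from the Poisson demand distribution and the order-up-to-3 policy:

```python
import math

import numpy as np

def poisson_pmf(n, mean=1.0):
    return math.exp(-mean) * mean**n / math.factorial(n)

def build_P(max_stock=3, mean_demand=1.0):
    P = np.zeros((max_stock + 1, max_stock + 1))
    for x in range(max_stock + 1):
        start = max_stock if x == 0 else x   # order 3 cameras if stock is 0
        for d in range(30):                  # truncate the Poisson tail
            nxt = max(start - d, 0)
            P[x, nxt] += poisson_pmf(d, mean_demand)
    return P

print(np.round(build_P(), 3))   # matches the matrix above
```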
Steady-state probabilities for the inventory problem
◼ Solving p = pP together with p0 + p1 + p2 + p3 = 1 for this matrix gives (approximately) p0 = .286, p1 = .285, p2 = .263, p3 = .166.
Expected Average Cost per
Unit Time
◼ Suppose that the costs to be considered are the ordering cost and the penalty cost for unsatisfied demand (storage costs are so small they will be ignored).
◼ It is reasonable to assume that the number of
cameras ordered to arrive at the beginning of
week t depends only upon the state of the
process Xt-1 (the number of cameras in stock)
when the order is placed at the end of week
t - 1.
◼ However, the cost of unsatisfied
demand in week t will also depend
upon the demand Dt. Therefore, the
total cost (ordering cost plus cost of
unsatisfied demand) for week t is a
function of Xt-1 and Dt, that is,
C(Xt-1, Dt).
◼ Now let us assign numerical values to the two
components of C(Xt-1, Dt) in this example,
namely, the ordering cost and the penalty
cost for unsatisfied demand.
◼ If z > 0 cameras are ordered, the cost
incurred is (10 + 25z) dollars.
◼ If no cameras are ordered, no ordering cost is
incurred. For each unit of unsatisfied demand
(lost sales), there is a penalty of $50.
◼ Therefore, given the ordering policy described above, the cost in week t is given by

   C(Xt-1, Dt) = 10 + 25(3) + 50 max{Dt - 3, 0}   if Xt-1 = 0
   C(Xt-1, Dt) = 50 max{Dt - Xt-1, 0}             if Xt-1 ≥ 1

   for t = 1, 2, . . . .
◼ The long-run expected average cost per week is Σi k(i) pi, where k(i) = E[C(i, Dt)]. For example,

   k(0) = 85 + 50[PD(4) + 2 PD(5) + 3 PD(6) + ...]

where PD(i) is the probability that the demand equals i, as given by a Poisson distribution with a mean of 1, so that PD(i) becomes negligible for i larger than about 6. Since PD(4) = 0.015, PD(5) = 0.003, and PD(6) = 0.001, we obtain k(0) = 86.2. Also using PD(2) = 0.184 and PD(3) = 0.061, similar calculations lead to the results

   k(1) = 18.4,   k(2) = 5.2,   k(3) = 1.2,

so the long-run expected average cost per week is

   86.2(0.286) + 18.4(0.285) + 5.2(0.263) + 1.2(0.166) ≈ $31.46.
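These k(i) values and the long-run average cost can be checked numerically; a sketch under the $10 + $25z ordering cost and $50 lost-sale penalty stated above:

```python
import math

import numpy as np

def poisson_pmf(n, mean=1.0):
    return math.exp(-mean) * mean**n / math.factorial(n)

def k(state, max_stock=3):
    """Expected one-week cost given the stock level when the order is placed."""
    order_cost = 10 + 25 * max_stock if state == 0 else 0
    start = max_stock if state == 0 else state
    penalty = sum(50 * max(d - start, 0) * poisson_pmf(d) for d in range(30))
    return order_cost + penalty

pi = np.array([0.286, 0.285, 0.263, 0.166])   # steady-state probabilities
ks = np.array([k(i) for i in range(4)])       # [86.2, 18.4, 5.2, 1.2]
print(ks.round(1), (pi @ ks).round(2))        # average cost ~ $31.46 per week
```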
Mean first passage times for
inv. problem
Mean First Passage Times
◼ For an ergodic chain, let mij = the expected number of transitions before we first reach state j, given that we are currently in state i; mij is called the mean first passage time from state i to state j.
◼ In Example 4, m12 would be the expected
number of bottles of cola purchased by a
person who just bought cola 1 before first
buying a bottle of cola 2.
◼ Assume that we are currently in state i. Then with probability pij, it will take one transition to go from state i to state j. For k ≠ j, we next go with probability pik to state k. In this case, it will take an average of 1 + mkj transitions to go from i to j. This reasoning implies that

   mij = pij(1) + Σ(k≠j) pik (1 + mkj)

Since pij + Σ(k≠j) pik = 1, we may rewrite the last equation as

   mij = 1 + Σ(k≠j) pik mkj   (13)

◼ By solving the linear equations given in (13), we may find all the mean first passage times. It can be shown that

   mii = 1/pi
Cola example

To illustrate the use of (13), let’s solve for the mean first passage times in Example 4. Recall that p1 = 2/3 and p2 = 1/3. Then m11 = 3/2 = 1.5 and m22 = 3/1 = 3.

From (13), m12 = 1 + p11 m12 = 1 + .9 m12, so m12 = 10; similarly, m21 = 1 + p22 m21 = 1 + .8 m21, so m21 = 5.

This means, for example, that a person who last drank cola 1 will drink an average of ten bottles of cola 1 before switching to cola 2.
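A sketch that evaluates these first passage times for the cola chain (not from the original slides):

```python
import numpy as np

P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

# For a two-state chain, equation (13) reduces to
# m12 = 1 + p11*m12 and m21 = 1 + p22*m21.
m12 = 1 / (1 - P[0, 0])   # 10.0
m21 = 1 / (1 - P[1, 1])   # 5.0

# mii = 1/pi, using the steady-state probabilities p = [2/3, 1/3].
pi = np.array([2/3, 1/3])
print(m12, m21, 1 / pi[0], 1 / pi[1])   # 10.0 5.0 1.5 3.0
```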
Absorbing Chains
◼ Many interesting applications of Markov
chains involve chains in which some of
the states are absorbing and the rest
are transient states. Such a chain is
called an absorbing chain.
Accounts Receivable
◼ The accounts receivable situation of a firm is often modeled as
an absorbing Markov chain. Suppose a firm assumes that an
account is uncollectable if the account is more than three
months overdue. Then at the beginning of each month, each
account may be classified into one of the following states:

◼ State 1 New account


◼ State 2 Payment on account is one month overdue.
◼ State 3 Payment on account is two months overdue.
◼ State 4 Payment on account is three months overdue.
◼ State 5 Account has been paid.
◼ State 6 Account is written off as bad debt.
◼ Suppose that past data indicate that the following
Markov chain describes how the status of an account
changes from one month to the next month:
              New   1 month   2 months   3 months   Paid   Bad Debt
   New          0      .6         0          0        .4       0
   1 month      0      0          .5         0        .5       0
   2 months     0      0          0          .4       .6       0
   3 months     0      0          0          0        .7       .3
   Paid         0      0          0          0        1        0
   Bad Debt     0      0          0          0        0        1
◼ To simplify our example, we assume that after three
months, a debt is either collected or written off as a
bad debt.
◼ Once a debt is paid up or written off as a bad debt, the account is closed, and no further transitions occur.
◼ Hence, Paid or Bad Debt are absorbing states. Since
every account will eventually be paid or written off as
a bad debt, New, 1 month, 2 months, and 3 months
are transient states.
◼ A typical new account will be absorbed as either a
collected debt or a bad debt.
◼ What is the probability that a new account will
eventually be collected?
◼ To answer these questions, we must write the transition matrix with the transient states listed first. We assume s - m transient states and m absorbing states. The transition matrix is then written in the block form

          s-m columns   m columns
   P =   [     Q            R    ]   s-m rows
         [     0            I    ]   m rows
I = an identity matrix indicating one always remains
in an absorbing state once it is reached
0 = a zero matrix representing 0 probability of
transitioning from the absorbing states to the
nonabsorbing states
R = the transition probabilities from the
nonabsorbing states to the absorbing states
Q = the transition probabilities between the
nonabsorbing states
◼ In this format, the rows and columns of P correspond (in order) to the states t1, t2, . . . , ts-m, a1, a2, . . . , am. Here, I is an m x m identity matrix reflecting the fact that we can never leave an absorbing state; Q is an (s - m) x (s - m) matrix that represents transitions between transient states;
◼ R is an (s - m) x m matrix representing transitions
from transient states to absorbing states; 0 is an
m x (s - m) matrix consisting entirely of zeros.
◼ This reflects the fact that it is impossible to go from
an absorbing state to a transient state.
Accounts Receivable
◼ The transition matrix for this example, with the transient states (New, 1 month, 2 months, 3 months) listed first, is

              New   1 month   2 months   3 months   Paid   Bad Debt
   New          0      .6         0          0        .4       0
   1 month      0      0          .5         0        .5       0
   2 months     0      0          0          .4       .6       0
   3 months     0      0          0          0        .7       .3
   Paid         0      0          0          0        1        0
   Bad Debt     0      0          0          0        0        1

◼ Here the upper-left 4 x 4 block is Q and the upper-right 4 x 2 block is R.
◼ Then s = 6, m = 2, and

   Q =  0   .6   0    0        R =  .4   0
        0   0   .5    0             .5   0
        0   0    0   .4             .6   0
        0   0    0    0             .7   .3
◼ Applying this notation to Example 7, we let
◼ t1 = New
◼ t2 = 1 Month
◼ t3 = 2 Months
◼ t4 = 3 Months
◼ a1 = Paid
◼ a2 = Bad Debt
1. What is the probability that a new account will eventually be collected?
2. What is the probability that a one-month overdue account will eventually become a bad debt?
3. If the firm’s sales average $100,000 per month, how much money per year will go uncollected?
◼ 1 t1 = New, a1 = Paid. Thus, the probability that a new account is eventually collected is element 11 of (I - Q)^-1 R = .964.
◼ 2 t2 = 1 Month, a2 = Bad Debt. Thus, the probability that a one-month overdue account turns into a bad debt is element 22 of (I - Q)^-1 R = .06.
◼ 3 From answer 1, only 3.6% of all debts are uncollected. Since yearly accounts receivable average $1,200,000, on the average, (.036)(1,200,000) = $43,200 per year will be uncollected.
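A sketch computing the absorption probabilities (I - Q)^-1 R with numpy (not part of the original text):

```python
import numpy as np

# Transient states: New, 1 month, 2 months, 3 months
Q = np.array([[0, .6, 0, 0],
              [0, 0, .5, 0],
              [0, 0, 0, .4],
              [0, 0, 0, 0]])
# Absorbing states: Paid, Bad Debt
R = np.array([[.4, 0],
              [.5, 0],
              [.6, 0],
              [.7, .3]])

absorb = np.linalg.inv(np.eye(4) - Q) @ R
print(absorb.round(3))
# Row "New":     [.964, .036] -> a new account is collected with probability .964.
# Row "1 month": [.940, .060] -> element 22 of (I - Q)^-1 R is .06.
```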
Work-Force Planning
◼ The law firm of Mason and Burger employs three
types of lawyers: junior lawyers, senior lawyers, and
partners.
◼ During a given year, there is a .15 probability that a
junior lawyer will be promoted to senior lawyer and a
.05 probability that he or she will leave the firm.
◼ Also, there is a .20 probability that a senior lawyer
will be promoted to partner and a .10 probability that
he or she will leave the firm.
◼ There is a .05 probability that a partner will leave the firm. The firm never demotes a lawyer.
◼ We model a lawyer’s career path as a Markov chain with states Junior, Senior, Partner, Leave as Nonpartner, and Leave as Partner; the stated probabilities give the transition matrix

                 Junior  Senior  Partner  Leave as NP  Leave as P
   Junior          .80     .15      0         .05           0
   Senior           0      .70     .20        .10           0
   Partner          0       0      .95         0           .05
   Leave as NP      0       0       0          1            0
   Leave as P       0       0       0          0            1

◼ The last two states are absorbing states, and all other states are transient. For example, Senior is a transient state, because there is a path from Senior to Leave as Nonpartner, but there is no path returning from Leave as Nonpartner to Senior (we assume that once a lawyer leaves the firm, he or she never returns).
◼ 1 What is the average length of time that a newly hired junior lawyer spends working for the firm?
◼ 2 What is the probability that a junior lawyer makes it to partner?
◼ 3 What is the average length of time that a partner spends with the firm (as a partner)?
◼ 1. (Expected time a junior lawyer stays with the firm) = (expected time junior lawyer stays with firm as junior) + (expected time junior lawyer stays with firm as senior) + (expected time junior lawyer stays with firm as partner).
◼ Expected time as junior = element 11 of (I - Q)^-1 = 5
◼ Expected time as senior = element 12 of (I - Q)^-1 = 2.5
◼ Expected time as partner = element 13 of (I - Q)^-1 = 10
◼ Hence, the total expected time that a junior lawyer spends with the firm is 5 + 2.5 + 10 = 17.5 years.
◼ 2. The probability that a new junior lawyer makes it to partner is just the probability that he or she leaves the firm as a partner. Since t1 = Junior Lawyer and a2 = Leave as Partner, the answer is element 12 of (I - Q)^-1 R = .50.
◼ 3. Since t3 = Partner, we seek the expected number of years that are spent in t3, given that we begin in t3. This is just element 33 of (I - Q)^-1 = 20 years.
◼ This is reasonable, because during each year, there is 1 chance in 20 that a partner will leave the firm, so it should take an average of 20 years before a partner leaves the firm.
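A sketch verifying all three answers for the law-firm chain (using the matrix as reconstructed above; not from the original slides):

```python
import numpy as np

# Transient states: Junior, Senior, Partner
Q = np.array([[.80, .15, 0],
              [0, .70, .20],
              [0, 0, .95]])
# Absorbing states: Leave as Nonpartner, Leave as Partner
R = np.array([[.05, 0],
              [.10, 0],
              [0, .05]])

N = np.linalg.inv(np.eye(3) - Q)   # expected visits to each transient state
print(N[0].sum())                  # 1. total expected years with firm = 17.5
print((N @ R)[0, 1])               # 2. P(junior makes partner) = 0.50
print(N[2, 2])                     # 3. expected years as partner = 20.0
```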
