
International Journal of Mathematical Education in Science and Technology
ISSN: 0020-739X (Print) 1464-5211 (Online) Journal homepage: https://www.tandfonline.com/loi/tmes20

Computing absorption probabilities for a Markov chain

Theodore J. Sheskin

To cite this article: Theodore J. Sheskin (1991) Computing absorption probabilities for a Markov chain, International Journal of Mathematical Education in Science and Technology, 22:5, 799-805, DOI: 10.1080/0020739910220512

To link to this article: https://doi.org/10.1080/0020739910220512

Published online: 09 Jul 2006.
INT. J. MATH. EDUC. SCI. TECHNOL., 1991, VOL. 22, NO. 5, 799-805

Computing absorption probabilities for a Markov chain

by THEODORE J. SHESKIN
Industrial Engineering Department, Cleveland State University,
Cleveland, Ohio 44115, USA
(Received 31 August 1990)

We present a new matrix construction algorithm for computing absorption probabilities for a finite, reducible Markov chain. The construction algorithm contains two steps: matrix augmentation and matrix reduction. The algorithm requires more memory and less execution time than the calculation of absorption probabilities by the LU decomposition. We apply the algorithm to a Markovian model of a production line.

1. Introduction
Many physical processes and activities involving random behaviour are transient
processes which terminate after a relatively short period of time. Examples of such
processes are gambling games and the movement of patients through a hospital.
Transient random processes are often modelled as reducible Markov chains which
have one or more closed communicating sets of recurrent states and a set of transient
states.
When we model a transient random process as a reducible Markov chain, we are
often interested in calculating first passage probabilities from transient states to
recurrent states, and absorption probabilities from transient states to absorbing
states. In this paper we present an interesting new matrix construction algorithm for
computing absorption probabilities for any finite, reducible Markov chain.
The matrix construction algorithm is a new application of the matrix reduction
routine [1,2] which is motivated by a result in Kemeny and Snell [3]. Other authors
have recently developed different Markov chain reduction procedures for calculat-
ing first passage times and absorption probabilities. Kohlas [4] and Heyman and
Reeves [5] apply the state reduction method [6]. Lal and Bhat [7,8] partition a
transition probability matrix to derive a reduced system.

2. Probabilistic motivation
The matrix construction algorithm is motivated by the following result in [3]
(pp. 114-115). Suppose that we have a Markov chain with a transition probability
matrix P partitioned as

    P = [ T  U ]
        [ V  Q ]

Assume that we observe the process only when it is in a subset S of the states having s
elements. A new Markov chain with s states, which we call a reduced process, is
0020-739X/91 $3-00 © 1991 Taylor & Francis Ltd.

obtained. A single step in the reduced process corresponds in the original process to
the transition, not necessarily in one step, from a state in S to another state in S. To
compute the transition probability matrix D for the reduced process, let T = [t_ij],
U = [u_ij], V = [v_ij], Q = [q_ij], and D = [d_ij]. Let k and l be two states of S. We have

    d_kl = t_kl + Σ_i Σ_j u_ki [(I − Q)^{-1}]_ij v_jl

In matrix form we have

    D = T + U[I − Q]^{-1}V        (1)
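Equation (1) can be checked numerically. The three-state chain below is a hypothetical example of ours, not one from the paper; with a single unobserved state, (I − Q)^{-1} is just the scalar 1/(1 − q):

```python
# Equation (1) for a 3-state chain watched on the subset S = {0, 1}.
# The transition matrix P is a made-up example; rows sum to 1.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4]]

T = [row[:2] for row in P[:2]]      # transitions within S
U = [P[0][2], P[1][2]]              # S -> the unobserved state
V = P[2][:2]                        # unobserved state -> S
q = P[2][2]                         # unobserved -> unobserved

# D = T + U (I - Q)^{-1} V, with (I - Q)^{-1} = 1/(1 - q) here.
D = [[T[i][j] + U[i] * V[j] / (1 - q) for j in range(2)] for i in range(2)]
# D is again a stochastic matrix, e.g. D[1] == [0.3, 0.7]
```

Each row of D sums to 1, as it must for the reduced process to be a Markov chain.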

3. Canonical form of the transition matrix


Consider a finite, reducible Markov chain. Arrange the transition probability
matrix P in canonical form

        [ I1  0   0   ...  0   0 ]
        [ 0   S2  0   ...  0   0 ]
    P = [ 0   0   S3  ...  0   0 ]
        [ ...                    ]
        [ 0   0   0   ...  SM  0 ]
        [ E1  E2  E3  ...  EM  Q ]

The Markov chain has M + 1 equivalence classes: one class with absorbing states,
M − 1 closed communicating classes with recurrent states, and one class with
transient states. The identity matrix I1 is the transition matrix for the absorbing
states. The square submatrices S2, S3, ..., SM are the transition matrices
corresponding to the M − 1 closed classes of recurrent states. Submatrix E1 specifies
transitions from transient states to absorbing states. Submatrices E2, E3, ..., EM
specify transitions from transient states to recurrent states. Submatrix Q governs
transitions among the transient states. We want to compute absorption probabilities
and first passage probabilities. Since we are interested in this process until it enters
an absorbing state or a recurrent state for the first time, we make all recurrent states
absorbing. We replace submatrices S2, S3, ..., SM with identity
matrices I2, I3, ..., IM, respectively, to produce an absorbing Markov chain. To
simplify the notation we shall consider the canonical form of the absorbing Markov
chain in an aggregated version. We combine all the identity matrices into a single
identity matrix I, and all the submatrices E1, E2, ..., EM into a submatrix E. The form
of the transition probability matrix P becomes

    P = [ I  0 ]
        [ E  Q ]
The two steps of the algorithm are matrix augmentation and matrix reduction.

4. Matrix augmentation
We assume that an absorbing Markov chain has s absorbing states and r transient
states. If r > s, we add r − s artificial absorbing states and let n = r. Artificial absorbing
states cannot be entered from transient states. If s > r, we add s − r artificial transient
states and let n = s. Each artificial transient state may have arbitrary transition
probabilities. We construct an augmented matrix B of order 2n by adjoining a null
matrix O, an identity matrix I, and matrix E, each of order n, to Q. We arrange the
augmented matrix in the form

    B = [ O  I ]        (2)
        [ E  Q ]

Let B = [b_ij].
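The augmentation step can be sketched as follows. The function name and the choice of transition probabilities for artificial transient states are ours (the paper allows them to be arbitrary; here each jumps straight to the first absorbing state):

```python
def augment(E, Q):
    # Build B = [[O, I], [E', Q']] of order 2n, where n = max(r, s) and
    # E, Q are padded with artificial states as described in Section 4.
    r, s = len(Q), len(E[0])
    n = max(r, s)
    # Artificial absorbing states (extra columns of E) cannot be entered
    # from transient states, so the padded columns stay zero.
    Ep = [row[:] + [0.0] * (n - s) for row in E]
    Qp = [row[:] + [0.0] * (n - r) for row in Q]
    for _ in range(n - r):
        # An artificial transient state; its (arbitrary) transition
        # probabilities send it to the first absorbing state.
        Ep.append([1.0] + [0.0] * (n - 1))
        Qp.append([0.0] * n)
    top = [[0.0] * n + [1.0 if j == i else 0.0 for j in range(n)]
           for i in range(n)]
    return top + [Ep[i] + Qp[i] for i in range(n)]

# Example with r = 3 transient and s = 1 absorbing state (made-up numbers):
E = [[0.2], [0.1], [0.3]]
Q = [[0.3, 0.3, 0.2], [0.4, 0.2, 0.3], [0.1, 0.2, 0.4]]
B = augment(E, Q)   # order 2n = 6; every row still sums to 1
```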

5. Matrix reduction
The detailed steps of matrix reduction applied to the augmented matrix B are
presented below.
1. Initialize k = 2n.
2. Let B_2n = B.
3. Partition B_k as

                 k−1   1
       B_k = [ T_k   U_k ]  k−1
             [ V_k   q_k ]  1

   where T_k is a square matrix of order k − 1, U_k is a column vector, V_k is a row
   vector, and q_k is a scalar.
4. Compute B_{k−1} = T_k + U_k V_k/(1 − q_k).
5. Decrement k by 1. If k = n, stop. Otherwise, repeat steps 3 and 4.
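The five steps above can be sketched in plain Python; the helper names and the small two-absorbing, two-transient example are ours, not the paper's:

```python
def reduce_once(B):
    # Steps 3-4: partition off the last row and column of B_k and fold
    # them back in via B_{k-1} = T_k + U_k V_k / (1 - q_k).
    k = len(B)
    q = B[k - 1][k - 1]
    return [[B[i][j] + B[i][k - 1] * B[k - 1][j] / (1.0 - q)
             for j in range(k - 1)] for i in range(k - 1)]

def reduce_to_order_n(B, n):
    # Steps 1, 2 and 5: start from B_2n = B and stop at order n.
    while len(B) > n:
        B = reduce_once(B)
    return B

# Augmented matrix B = [[O, I], [E, Q]] for n = 2 (a made-up absorbing chain):
B = [[0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.3, 0.2, 0.3, 0.2],   # E row 1 | Q row 1
     [0.1, 0.4, 0.2, 0.3]]   # E row 2 | Q row 2
Bn = reduce_to_order_n(B, 2)  # Bn = (I - Q)^{-1} E, absorption probabilities
```

Row i of Bn gives the absorption probabilities starting from transient state i; each row sums to 1.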

6. Absorption probabilities
Matrix reduction ends when k = n, indicating that the final reduced matrix, B_n, is
of order n. The calculation of absorption probabilities is based on the following
application of equation (1): when a matrix B is partitioned as in (2), then

    B_n = O + I[I − Q]^{-1}E = [I − Q]^{-1}E        (3)
By Theorem 3.3.7 in [3] (p. 52), the matrix Bn gives the probabilities of absorption in
every absorbing state if the process starts in a given transient state. When artificial
absorbing states or artificial transient states are deleted, the resulting matrix F is the
matrix of absorption probabilities.

7. Computational requirements
Assume that a Markov chain has n absorbing states and n transient states. To
compute the absorption probabilities the matrix construction algorithm must store

n² elements for matrix E plus n² elements for matrix Q, for a total of 2n² elements.
The matrix construction algorithm performs 3n³/2 + n²/2 − n multiplications, n
divisions, and 3n³/2 − n²/2 − n additions. If we call each division, and each
multiplication-addition, a single operation [9], the total number of operations is
approximately 3n³/2 + n²/2. For large n the number of operations is approximately
3n³/2.
The matrix F of absorption probabilities may also be computed by using the LU
decomposition [9] to solve the system of equations F = E + QF. When the LU
decomposition is used to solve for F column by column, approximately n² elements
of storage and n³/3 operations are required for each absorbing state. The LU
decomposition solution procedure must be repeated for all n absorbing states. The
matrix construction algorithm requires more storage but fewer operations.
Grassmann et al. [6] assert that an algorithm which does not contain subtractions
or operations involving negative numbers is numerically very stable. Heyman [10]
presents extensive numerical evidence to support this assertion. The matrix
construction algorithm is resistant to roundoff error because of the absence of
subtractions and negative numbers.

8. Application to a production line


We will apply the matrix construction algorithm to a model of a production
process. This model is a modification of examples in [11] (p. 221), and in [3] (p. 30).
Assume that items pass through n manufacturing stages of a production line. At the
end of each stage the items are inspected, and are either discarded with probability p,
sent through the stage again for reworking with probability q, or passed on to the next
stage with probability r. We have p + q + r = 1. A completed item is either sold, used
as a demonstrator, or used for training. Ninety-seven per cent of the completed items
are sold. Those remaining are twice as likely to be used as demonstrators as they are
to be used as trainers. At each moment when items are inspected a demonstrator has a
probability u of being used as a trainer, and a trainer has a probability v of being used
as a demonstrator. Neither demonstrators nor trainers are sold or discarded. We
model this process as a Markov chain with n + 4 states indexed as follows

2n: item discarded
2n − 1: item completed for sale
2n − 2: item completed as a demonstrator
2n − 3: item completed as a trainer
k: item in stage k, k = 1, 2, ..., n

The Markov chain has three equivalence classes

{2n, 2n − 1} absorbing states
{2n − 2, 2n − 3} recurrent states
{1, 2, ..., n} transient states

The transition probability matrix is given below in canonical form

           2n   2n−1   2n−2   2n−3    n    n−1  ...  3    2    1
    2n   [  1    0      0      0      0    0    ...  0    0    0 ]
    2n−1 [  0    1      0      0      0    0    ...  0    0    0 ]
    2n−2 [  0    0      1−u    u      0    0    ...  0    0    0 ]
    2n−3 [  0    0      v      1−v    0    0    ...  0    0    0 ]
    n    [  p    0.97r  0.02r  0.01r  q    0    ...  0    0    0 ]
    n−1  [  p    0      0      0      r    q    ...  0    0    0 ]
    ...
    2    [  p    0      0      0      0    0    ...  r    q    0 ]
    1    [  p    0      0      0      0    0    ...  0    r    q ]
We want to find the probabilities that an item at a given stage will eventually be
discarded, completed for sale, completed as a demonstrator, or completed as a
trainer. Since we are interested in this process only until an item enters an absorbing
state or a recurrent set, we make the recurrent states absorbing. Since the number of
transient states, n, exceeds the number of absorbing states, 4, we add n − 4 artificial
absorbing states, indexed n + 1, n + 2, ..., 2n − 4, to produce the augmented matrix B.
Deleting the n − 4 rightmost columns of B_n which contain the zero probabilities of
absorption in the n − 4 artificial absorbing states, we obtain the matrix F of
absorption and first passage probabilities. The entries of F give the probabilities that
an item inspected at a given stage will eventually be discarded or sold or used initially
as a demonstrator or as a trainer. For example, an item in stage 1 has a probability of
0.97r^n/(p + r)^n of being sold.
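The whole pipeline for this example can be sketched end to end. The parameter values below are made up (the paper gives none); the script builds the augmented matrix, runs the matrix reduction of Section 5, and checks the closed form 0.97[r/(p + r)]^n for the probability that a stage-1 item is eventually sold:

```python
def reduce_once(B):
    # One matrix-reduction step (Section 5): B_{k-1} = T + U V / (1 - q).
    k = len(B)
    q = B[k - 1][k - 1]
    return [[B[i][j] + B[i][k - 1] * B[k - 1][j] / (1.0 - q)
             for j in range(k - 1)] for i in range(k - 1)]

n, p, q = 6, 0.05, 0.15          # made-up parameter values
r = 1.0 - p - q

# E: stage states -> absorbing states. Columns 0-3 are discarded / sold /
# demonstrator / trainer; columns 4..n-1 are the n - 4 artificial
# absorbing states, which are never entered. Row k is stage k + 1.
E = [[p] + [0.0] * (n - 1) for _ in range(n)]
E[n - 1][1:4] = [0.97 * r, 0.02 * r, 0.01 * r]   # completion after stage n
Q = [[0.0] * n for _ in range(n)]
for k in range(n):
    Q[k][k] = q                  # reworked: stay in the same stage
    if k < n - 1:
        Q[k][k + 1] = r          # passed on to the next stage

# Augmented matrix B = [[O, I], [E, Q]], reduced from order 2n to order n.
B = [[0.0] * n + [1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
B += [E[i] + Q[i] for i in range(n)]
while len(B) > n:
    B = reduce_once(B)

sold = B[0][1]                   # stage-1 item eventually sold
assert abs(sold - 0.97 * (r / (p + r)) ** n) < 1e-9
```

No subtraction of probabilities ever produces a negative entry along the way, which is the source of the algorithm's numerical stability noted in Section 7.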

9. Conclusion
The matrix construction algorithm is a new procedure for calculating the
absorption probabilities for a finite, reducible Markov chain. The algorithm has
good numerical stability because of the absence of subtractions and negative
numbers.

The augmented matrix B for the production-line model is

           2n  2n−1  2n−2  2n−3  2n−4 ... n+1    n   n−1  n−2  n−3  n−4 ...  3    2    1
    2n   [  0    0     0     0     0  ...  0     1    0    0    0    0  ...  0    0    0 ]
    2n−1 [  0    0     0     0     0  ...  0     0    1    0    0    0  ...  0    0    0 ]
    2n−2 [  0    0     0     0     0  ...  0     0    0    1    0    0  ...  0    0    0 ]
    2n−3 [  0    0     0     0     0  ...  0     0    0    0    1    0  ...  0    0    0 ]
    2n−4 [  0    0     0     0     0  ...  0     0    0    0    0    1  ...  0    0    0 ]
    ...
    n+1  [  0    0     0     0     0  ...  0     0    0    0    0    0  ...  0    0    1 ]
    n    [  p  0.97r 0.02r 0.01r   0  ...  0     q    0    0    0    0  ...  0    0    0 ]
    n−1  [  p    0     0     0     0  ...  0     r    q    0    0    0  ...  0    0    0 ]
    ...
    2    [  p    0     0     0     0  ...  0     0    0    0    0    0  ...  r    q    0 ]
    1    [  p    0     0     0     0  ...  0     0    0    0    0    0  ...  0    r    q ]

with the upper half equal to [O I] and the lower half equal to [E Q].


References
[1] SHESKIN, T. J., 1985, Op. Res., 33, 228-235.
[2] SHESKIN, T. J., 1990, 1st International Workshop on the Numerical Solution of Markov
Chains, North Carolina State University, Raleigh, NC.
[3] KEMENY, J. G., and SNELL, J. L., 1960, Finite Markov Chains (Princeton, N.J.: Van
Nostrand).
[4] KOHLAS, J., 1986, Zeitschrift für Operations Research, 30, A197-A207.
[5] HEYMAN, D. P., and REEVES, A., 1989, ORSA J. Computing, 1, 52-60.
[6] GRASSMANN, W. K., TAKSAR, M. I., and HEYMAN, D. P., 1985, Op. Res., 33, 1107-1116.
[7] LAL, R., and BHAT, U. N., 1988, Mgmt Sci., 34, 1202-1220.
[8] LAL, R., and BHAT, U. N., 1987, Queueing Systems, 2, 147-172.
[9] PRESS, W. H., FLANNERY, B. P., TEUKOLSKY, S. A., and VETTERLING, W. T., 1986,
Numerical Recipes (Cambridge: Cambridge University Press).
[10] HEYMAN, D. P., 1987, SIAM J. Alg. Disc. Meth., 8, 226-232.
[11] CLARKE, A. B., and DISNEY, R. L., 1970, Probability and Random Processes for Scientists
and Engineers (New York: Wiley).
