
8 Discrete Markov chains

8.1 Introduction

Chapter 7 described and illustrated several analytical techniques for evaluating the reliability of systems. Although these techniques can be applied to both non-repairable and repairable systems, in the latter case they assume that the repair process is instantaneous or negligible compared with the operating time. This is an inherent restriction and additional techniques are required if this assumption is not valid. One very important technique that overcomes this problem and which has received considerable attention and use during the past few years is known as the Markov approach or Markov modelling. Several excellent texts [4, 25-27] are available on the subject of the application of Markov chains to reliability analysis.

The Markov approach can be applied to the random behaviour of systems that vary discretely or continuously with respect to time and space. This discrete or continuous random variation is known as a stochastic process. Not all stochastic processes can be modelled using the basic Markov approach, although there are techniques available for modelling some additional stochastic processes using extensions of this basic method. These additional techniques will be discussed in Chapter 12.

In order for the basic Markov approach to be applicable, the behaviour of the system must be characterized by a lack of memory; that is, the future states of a system are independent of all past states except the immediately preceding one. Therefore the future random behaviour of a system depends only on where it is at present, not on where it has been in the past or how it arrived at its present position. In addition, the process must be stationary, sometimes called homogeneous, for the approach to be applicable. This means that the behaviour of the system must be the same at all points of time irrespective of the point of time being considered, i.e., the probability of making a transition from one given state to another is the same (stationary) at all times in the past and future. It is evident from these two aspects, lack of memory and being stationary, that the Markov approach is applicable to those systems whose behaviour can be described by a probability distribution that is characterized by a constant hazard rate, i.e., Poisson and exponential distributions, since only if the hazard rate is constant does the probability of making a transition between two states remain constant at all points of time. If this probability is a function of time or the number of discrete steps, then the process is non-stationary and designated as non-Markovian.

In the general case of Markov models, both time and space may either be discrete or continuous. In the particular case of system reliability evaluation, space is normally represented only as a discrete function, since this represents the discrete and identifiable states in which the system and its components can reside, whereas time may either be discrete or continuous. This text only considers these two cases. The discrete case, generally known as a Markov chain, is discussed in this chapter. The continuous case, generally known as a Markov process, is discussed in Chapter 9. The reader may be primarily interested in the continuous case, but should also be familiar with the concepts and techniques described in this chapter before reading Chapter 9.

The reader should recognize that, although this chapter was initially introduced by the problem that repair was not included in the techniques of Chapter 7, and could be overcome by the use of the Markov approach, the techniques described in this and subsequent chapters are applicable to both repairable and non-repairable systems, including all those discussed in Chapter 7. The only requirements needed for the technique to be applicable are that the system must be stationary, the process must lack memory and the states of the system must be identifiable.

8.2 General modelling concepts

The basic concepts of Markov modelling can be illustrated by considering the simple system shown in Figure 8.1. In this system two system states are identifiable, being designated 1 and 2. The probabilities of remaining in or leaving a particular state in a finite time are also shown in Figure 8.1, and these probabilities are assumed to be constant for all times into the future.

Fig. 8.1 A two state system (state 1: remain with probability 1/2, move to state 2 with probability 1/2; state 2: remain with probability 3/4, move to state 1 with probability 1/4)

This is a discrete Markov chain since the system is stationary and the movement between states occurs in discrete steps.
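Both properties can be seen directly in a simulation: at every step the next state is drawn using only the current state, from transition probabilities that never change. The following minimal Python sketch of the system of Figure 8.1 is an illustrative addition, not part of the original text:

    import random

    # One-step transition probabilities of Fig. 8.1; the key is the current
    # state and the value is the probability that the next state is state 1.
    P_TO_STATE_1 = {1: 0.5, 2: 0.25}

    def step(state, rng):
        # Memoryless: the draw depends only on the current state.
        # Stationary: the probabilities never change between intervals.
        return 1 if rng.random() < P_TO_STATE_1[state] else 2

    rng = random.Random(1)
    state, visits = 1, {1: 0, 2: 0}
    for _ in range(100_000):
        state = step(state, rng)
        visits[state] += 1

    print(visits[1] / 100_000, visits[2] / 100_000)

The long-run fractions of time spent in the two states approach 1/3 and 2/3, which are the limiting state probabilities derived later in this chapter.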

Consider the first time interval and assume that the system is initially in state 1. The system can remain in state 1 with a probability of 1/2 or it can move (make a transition) into state 2 with a probability of 1/2. It is important to recognize that the sum of these probabilities must be unity, i.e., the system must either remain in the state being considered or move out of the state. This principle applies equally to all systems no matter what degree of complexity exists or how many ways there are of moving out of a given state; the sum of the probabilities of remaining in or moving out of a state must be unity.

Once the system shown in Figure 8.1 is in state 2, it can remain in it with a probability of 3/4 or it can make a transition back to state 1 with a probability of 1/4 during the next time interval.

The behaviour of this system can be easily illustrated by the tree diagram (see Section 5.7) shown in Figure 8.2. This figure assumes the system starts in state 1, shows the states in which the system can reside after each step or time interval and considers up to 4 such time intervals. Using the concepts described in Section 5.7, the probability of following any one branch of this tree can be evaluated by multiplying the appropriate probabilities of each step of this branch. The probability of residing in a particular state of the system after a certain number of time intervals is then evaluated by summating the branch probabilities that lead to that state after the number of time intervals being considered. The branch probabilities are also shown in Figure 8.2 for the situation that arises after 4 time intervals. If all these probabilities are summated it is again found that they add up to unity, this being a necessity and a useful check of accuracy. If those branch probabilities leading to state 1 are summated, the probability of residing in state 1 after 4 time intervals is 43/128, whereas a similar sum would show that the equivalent probability of residing in state 2 is 85/128.

Fig. 8.2 Tree diagram for the system of Figure 8.1 (branch probabilities after 4 time intervals shown in 128ths)

If the same technique is used to evaluate the branch probabilities and state probabilities after each time interval, the state probabilities shown in Table 8.1 are obtained.

Table 8.1 State probabilities of the 2-state system

Time interval    State 1            State 2
1                1/2 = 0.5          1/2 = 0.5
2                3/8 = 0.375        5/8 = 0.625
3                11/32 = 0.344      21/32 = 0.656
4                43/128 = 0.336     85/128 = 0.664
5                171/512 = 0.334    341/512 = 0.666

The results shown in Table 8.1 are represented in graphical form in Figure 8.3. These characteristics are known as the transient behaviour or time-dependent values of the state probabilities.
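The tree summations can be reproduced without drawing the tree: the state probabilities at interval n + 1 follow from those at interval n, weighted by the one-step transition probabilities. A short Python sketch using exact fractions (an illustrative addition, not part of the original text):

    from fractions import Fraction as F

    # One-step transition probabilities of Fig. 8.1.
    P = [[F(1, 2), F(1, 2)],    # from state 1: remain, move to state 2
         [F(1, 4), F(3, 4)]]    # from state 2: move to state 1, remain

    p = [F(1), F(0)]            # the system starts in state 1
    for n in range(1, 6):
        # New probability of each state = sum over the predecessor states.
        p = [p[0] * P[0][0] + p[1] * P[1][0],
             p[0] * P[0][1] + p[1] * P[1][1]]
        print(n, p[0], p[1])

The printed values reproduce Table 8.1 exactly: 1/2, 3/8, 11/32, 43/128 and 171/512 for state 1.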
Fig. 8.3 System transient behaviour (state probabilities against number of time intervals)

It is evident from Figure 8.3 that, as the number of time intervals is increased, the values of state probabilities tend to a constant or limiting value. This is characteristic of most systems which satisfy the conditions of the Markov approach, and these limiting values of probability are known as the limiting-state or time-independent values of the state probabilities.

In this example, it was assumed that the system started in state 1 and the transient behaviour was evaluated as time increased. The state of the system at step 0 or zero time is known as the initial conditions. In most reliability evaluation problems these initial conditions are known, and the problem centres around evaluating the system reliability as time extends into the future. The transient behaviour is very dependent on the initial conditions and the reader is left to evaluate a similar graph to that shown in Figure 8.3 for the case when the system initially resides in state 2 rather than state 1. This evaluation provides a very interesting and important conclusion. This is that, although the transient behaviour is very dependent on the initial conditions, the limiting values of the state probabilities are totally independent of the initial conditions and both will tend to the same limiting-state values shown in Figure 8.3. This is a very important conclusion and one that is used again in subsequent discussions. A system or process for which the limiting values of state probabilities are independent of the initial conditions is known as ergodic. Not all systems are characterized by ergodicity. For a system to be ergodic it is essential that every state of a system can be reached from all other states of the system, either directly or indirectly through intermediate states. If this is not possible and a particular state or states, once entered, cannot be left, the system is not ergodic and the relevant states are known as absorbing states. This problem will be discussed in Section 8.6. The initial discussion will consider only ergodic systems.

Although the limiting or steady-state probabilities for the states of any ergodic system are independent of the initial conditions, the rate of convergence to the limiting-state value can be dependent on the initial conditions and is very dependent on the probabilities of making transitions between the states of the system.

The tree diagram method is a useful technique for illustrating the concepts of Markov chains but it is totally impractical for large systems, and for a large number of time intervals in even very small systems. Therefore other solution techniques are required and these are the subject of the remaining sections of this chapter.

8.3 Stochastic transitional probability matrix

Matrix solution techniques are used frequently in a variety of system analyses when other, more basic, techniques would be totally intractable. System reliability evaluation is no exception and one example of this was described in Section 5.6 in order to deduce the path connections between the input and output of a network.

Matrix algebra and matrix manipulation is used several times in both this and subsequent chapters. For those readers unfamiliar with these techniques, a description of elementary matrix algebra has been included in Appendix 3.

In order to apply matrix techniques to system reliability evaluation, it is necessary to deduce a matrix which represents the probabilities of making a transition from one state to another in a single step or time interval. Again consider the system shown in Figure 8.1; these transition probabilities can be represented by the following matrix P

P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix}    (8.1)

where P_{ij} = probability of making a transition to state j after a time interval given that it was in state i at the beginning of the time interval.

Applying this concept to the system shown in Figure 8.1 for the first time interval means that P_{11} = 1/2, P_{12} = 1/2, P_{21} = 1/4 and P_{22} = 3/4 as shown in Equation 8.1.
The definition of P_{ij} indicates that the row position of the matrix is the state from which the transition occurs and the column position of the matrix is the state to which the transition occurs. Consequently, for an n-state system the general form of the matrix, which must always be square, is shown in Equation 8.2, in which row i is the "from" state and column j is the "to" state:

P = \begin{bmatrix}
P_{11} & P_{12} & P_{13} & \cdots & P_{1n} \\
P_{21} & P_{22} & P_{23} & \cdots & P_{2n} \\
P_{31} & P_{32} & P_{33} & \cdots & P_{3n} \\
\vdots &        &        &        & \vdots \\
P_{n1} & P_{n2} & P_{n3} & \cdots & P_{nn}
\end{bmatrix}    (8.2)

This matrix is known as the stochastic transitional probability matrix for the system, since it represents, in matrix form, the transitional probabilities of the stochastic process. It should be noted that the summation of the probabilities in each row of the matrix must be unity, since row i represents the complete and exhaustive ways in which the system can behave in a particular time interval given that it is in state i at the beginning of that time interval.
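Because each row must sum to unity, the property provides a simple automatic check whenever such a matrix is built in software. A minimal sketch (the helper name is ours, not the book's):

    from fractions import Fraction as F

    def is_stochastic(P):
        # Every row of the square matrix must sum to unity.
        return all(sum(row) == 1 for row in P)

    # The matrix of Equation 8.1 for the system of Fig. 8.1.
    P = [[F(1, 2), F(1, 2)],
         [F(1, 4), F(3, 4)]]
    assert is_stochastic(P)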
8.4 Time dependent probability evaluation

In order to illustrate the evaluation of the transient behaviour of a system using the stochastic transitional probability matrix, reconsider the simple two-state system shown in Figure 8.1. The stochastic transitional probability matrix for this system is shown in Equation 8.1. Now multiply this matrix by itself (see Appendix 3), i.e., square it. This gives

P^2 = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} P_{11}P_{11} + P_{12}P_{21} & P_{11}P_{12} + P_{12}P_{22} \\ P_{21}P_{11} + P_{22}P_{21} & P_{21}P_{12} + P_{22}P_{22} \end{bmatrix}    (8.3)

If the values of P_{11}, P_{12}, P_{21}, P_{22} are substituted into Equation 8.3

P^2 = \begin{bmatrix} 3/8 & 5/8 \\ 5/16 & 11/16 \end{bmatrix}    (8.4)

Recalling the principle of Equation 8.2, the first element of row 1 (3/8) is the probability of being in state 1 after two time intervals given that it started in state 1. Similarly, the second element of row 1 (5/8) is the probability of being in state 2 after two time intervals given that it started in state 1. Similar reasoning can be applied to the second row.

If the values in row 1 are now compared with those shown in Table 8.1, it will be seen that they are identical to the state probabilities that were evaluated after two time intervals given that the system commenced in state 1. A similar comparison could be made between the values in row 2 and the probabilities that would have been evaluated after two time intervals if the system had started in state 2. It follows from this reasoning that the elements of P^2 give all the state probabilities of the system after two time intervals, both those when starting in state 1 and those when starting in state 2.

The principle illustrated by this example can be extended to any power of P, and the matrix P^n can be defined as the matrix whose element P_{ij} represents the probability that the system will be in state j after n time intervals given that it started in state i.

It is suggested that the reader finds the values of higher orders of P in order to confirm the technique and the values given in Table 8.1. This approach should also be used to obtain similar values for the case of starting in state 2. These results should be plotted on top of those in Figure 8.3.
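Higher powers of P, as suggested above, are easily generated numerically. A minimal Python sketch (an illustrative addition; numpy is assumed to be available):

    import numpy as np

    P = np.array([[1/2, 1/2],
                  [1/4, 3/4]])

    # Row i of P**n holds the state probabilities after n time intervals
    # given that the system started in state i (compare with Table 8.1).
    for n in (2, 4, 8, 16):
        print(n)
        print(np.linalg.matrix_power(P, n))

Both rows converge towards the same limiting values [1/3, 2/3], illustrating the independence from the initial conditions noted in Section 8.2.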
Equations 8.3 and 8.4 for this particular two-state system, and similar equations for more complex systems and other time intervals, permit the probability of residing in any state to be evaluated provided it is known for certain in which state the system started, i.e., the probability of starting in a particular state is unity and the probability of starting in all others is zero. Frequently this is the case in practice because, at time zero, the deterministic state of the system is known. If, however, it is required to evaluate these state probabilities when the initial conditions are not known with this degree of certainty, then the matrix P^n can be premultiplied by the initial probability vector P(0), which represents the probability of being in each of the system states at the start of the mission. The values of probability contained in this vector must themselves summate to unity.

If the system shown in Figure 8.1 starts in state 1, this initial probability vector is

P(0) = \begin{bmatrix} 1 & 0 \end{bmatrix}    (8.4a)

since the probability of being in state 1 at zero time is unity and the probability of being in state 2 is zero. If, on the other hand, it is known that the system is equally likely to start in state 1 or state 2, then this initial probability vector becomes

P(0) = \begin{bmatrix} 1/2 & 1/2 \end{bmatrix}    (8.4b)
In the first case, the probability vector representing the state probabilities after two time intervals is

P(2) = P(0)P^2 = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 3/8 & 5/8 \\ 5/16 & 11/16 \end{bmatrix} = \begin{bmatrix} 3/8 & 5/8 \end{bmatrix}    (8.5a)

as given in Table 8.1 for this set of initial conditions.

In the second case, the probability vector representing the state probabilities after two time intervals is

P(2) = P(0)P^2 = \begin{bmatrix} 1/2 & 1/2 \end{bmatrix} \begin{bmatrix} 3/8 & 5/8 \\ 5/16 & 11/16 \end{bmatrix} = \begin{bmatrix} 11/32 & 21/32 \end{bmatrix}    (8.5b)

This principle can again be extended to give

P(n) = P(0)P^n    (8.5c)

From the foregoing discussion, it is evident that the state probabilities can be readily evaluated at any time interval, simply by multiplying the stochastic transitional probability matrix by itself the relevant number of times. If this process is continued sequentially, the transient behaviour can be deduced. The limiting or steady-state values of state probabilities can be derived by continuing the multiplication process a sufficient number of times.

8.5 Limiting state probability evaluation

The steady-state or limiting values of state probabilities of an ergodic system can be evaluated using the matrix multiplication technique described in Section 8.4. If the transient behaviour is also required, it may be sensible to use this technique. If, on the other hand, only the limiting state probabilities are required, matrix multiplication can be tedious and time-consuming. A very efficient alternative method is described in this section for evaluating these limiting probabilities.

The principle of this technique is that, once the limiting state probabilities have been reached by the matrix multiplication method, any further multiplication by the stochastic transitional probability matrix does not change the values of the limiting state probabilities, i.e., if \alpha represents the limiting probability vector and P is the stochastic transitional probability matrix, then

\alpha P = \alpha    (8.6)

This principle can be applied to the simple two state system shown in Figure 8.1. Define P_1 and P_2 as the limiting probabilities of being in states 1 and 2 respectively, then

\begin{bmatrix} P_1 & P_2 \end{bmatrix} P = \begin{bmatrix} P_1 & P_2 \end{bmatrix}

or

\begin{bmatrix} P_1 & P_2 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} P_1 & P_2 \end{bmatrix}

from which

\tfrac{1}{2}P_1 + \tfrac{1}{4}P_2 = P_1
\tfrac{1}{2}P_1 + \tfrac{3}{4}P_2 = P_2

rearranging gives

-\tfrac{1}{2}P_1 + \tfrac{1}{4}P_2 = 0    (8.7)
\tfrac{1}{2}P_1 - \tfrac{1}{4}P_2 = 0    (8.8)

It is evident that Equations 8.7 and 8.8 are identical and, therefore, to solve for the two unknowns, P_1 and P_2, a third equation which is independent of Equations 8.7 and 8.8 is needed. This additional equation is

P_1 + P_2 = 1    (8.9)

With systems of any size, one of the equations developed from Equation 8.6 will always be redundant and therefore any one of the equations so developed must be replaced by one of the form of Equation 8.9.

If Equations 8.7 and 8.9 are used as the two independent equations, they can be expressed in matrix form as

\begin{bmatrix} -1/2 & 1/4 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} P_1 \\ P_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}    (8.10)

which is of the form AX = b, the solution for X being given by X = A^{-1}b, where A^{-1} is the inverse matrix of A. For those readers who are unfamiliar with the inversion of matrices, there is a useful method called Cramer's Rule, which is suitable for hand calculations provided the order of the matrix is small. This rule is explained in Appendix 3 and is used in the present example. However, it should be noted that this method is not particularly suited for digital computer solution as it can lead to precision errors. In such cases other techniques [28] are available.

Equation 8.10 can be solved using Cramer's rule to give

P_1 = \frac{\begin{vmatrix} 0 & 1/4 \\ 1 & 1 \end{vmatrix}}{\begin{vmatrix} -1/2 & 1/4 \\ 1 & 1 \end{vmatrix}} = \frac{(1 \times 0) - (1/4 \times 1)}{-3/4} = \frac{-1/4}{-3/4} = 0.333

P_2 = \frac{(-1/2 \times 1) - (1 \times 0)}{-3/4} = \frac{-1/2}{-3/4} = 0.667

these being the values which the characteristics shown in Figure 8.3 would approach asymptotically.
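For larger systems the same procedure, replacing one balance equation by the normalizing condition of Equation 8.9 and solving the resulting linear system, is conveniently carried out numerically. A brief sketch of Equation 8.10 in numpy (illustrative only):

    import numpy as np

    # Equation 8.10: one balance equation plus the condition P1 + P2 = 1.
    A = np.array([[-0.5, 0.25],
                  [ 1.0, 1.0 ]])
    b = np.array([0.0, 1.0])

    P1, P2 = np.linalg.solve(A, b)
    print(P1, P2)   # 0.333..., 0.666...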
8.6 Absorbing states

In Section 8.2 it was stated that some states of a system may be absorbing states, i.e., states which, once entered, cannot be left until the system starts a new mission. These can readily be identified in terms of mission oriented systems as the catastrophic failure event states into which the probability of entering must be minimized to ensure safe operation of the mission. In such cases, one requirement of the reliability analysis is to evaluate the average number of time intervals in which the system resides in one of the non-absorbing states or, expressed another way, for how many time intervals does the system operate on average before it enters one of the absorbing states.

The principle behind such a system can also be applied to repairable systems in order to evaluate the average number of time intervals the system will operate satisfactorily before entering an undesirable state or states. In this case the states may not be real absorbing states because they can be left following a repair action. The principle of absorbing states can be used, however, in order to deduce the average number of time intervals by defining them as absorbing states.

The following technique for evaluating the number of time intervals can be used for both mission oriented and repairable systems.

Reconsider the two state system shown in Figure 8.1. It can be seen from the tree diagram in Figure 8.2 that, if the system starts in state 1, the probability of continuing to reside in this state without ever entering state 2 becomes progressively smaller as the number of time intervals increases, i.e., provided the number of time intervals is allowed to become great enough, the system must eventually enter state 2. Mathematically this is because

\lim_{n \to \infty} \left(\frac{1}{2}\right)^n = 0    (8.11)

where n is the number of time intervals and (1/2) is the probability of remaining in state 1. If, in this example, state 2 is defined as an absorbing state, it follows that eventually this state must be entered. This applies to all systems unless the probability of residing in a state is unity, which is most improbable. The problem is to evaluate the average value of time intervals before the absorbing state is reached.

If P is the stochastic transitional probability matrix of the system, a truncated matrix Q can be created by deleting the row(s) and column(s) associated with the absorbing state(s). In the case of P shown by Equation 8.1, this truncation will create a matrix Q having one element only, namely [P_{11}], if state 2 is defined as the absorbing state. This truncated matrix Q represents the transient set of states and it is necessary to evaluate the expected number of time intervals for which the system remains in one of the states represented in this matrix.

The principle of mathematical expectation was given in Chapter 2 as

E(x) = \sum_{i=1}^{\infty} x_i P_i    (8.12)

This principle not only applies to single probability elements P_i but also to multi-probability elements represented by matrix Q. Therefore, if N is the expected number of time intervals,

N = 1 \cdot I + 1 \cdot Q + 1 \cdot Q^2 + \cdots + 1 \cdot Q^{n-1}    (8.13)

where I is the identity (or unit) matrix (see Appendix 3).

The principle of Equation 8.13 can be explained as follows. The identity matrix represents the probability of all possible initial conditions, i.e., the unity in row 1 represents the contribution to the expectation of the system starting in state 1, the unity in row 2 represents the contribution of the system starting in state 2, and so on. Each of the unity digits in Equation 8.13 represents one further time interval, i.e., these are equivalent to x_i in Equation 8.12. The first time interval occurs with probability I, the second with probability Q, the third with probability Q^2, and so on, the nth time interval being the one that enters the absorbing state. Equation 8.13 is therefore an explicit form of Equation 8.12.

From Equation 8.13

N = I + Q + Q^2 + \cdots + Q^{n-1}    (8.14)

Equation 8.14 is not readily evaluated. Instead, consider the following identity

[I - Q][I + Q + Q^2 + \cdots + Q^{n-1}] = I - Q^n    (8.15)

Equation 8.15 can easily be verified by multiplying out the left hand side.

Following Equation 8.11

\lim_{n \to \infty} Q^n = 0

therefore, as n \to \infty,

I - Q^n \to I

and Equation 8.15 becomes

[I - Q][I + Q + Q^2 + \cdots + Q^{n-1}] = I

or

I + Q + Q^2 + \cdots + Q^{n-1} = [I - Q]^{-1} I = [I - Q]^{-1}    (8.16)

Therefore, from Equations 8.14 and 8.16,

N = [I - Q]^{-1}    (8.17)

which is readily evaluated compared with Equation 8.14. An example using Equation 8.17 is given in the next section.

If, using the simple 2 state system, state 2 is defined as the absorbing state then, as stated previously, Q = P_{11} = 1/2. Therefore N = [1 - 1/2]^{-1} = 2 time intervals on average before state 2 is entered, if the system commences in state 1.
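Equation 8.17 can also be checked against the series form of Equation 8.14, whose partial sums approach [I - Q]^{-1}. A sketch for this two-state example (an illustrative addition, not part of the original text):

    import numpy as np

    Q = np.array([[0.5]])        # truncated matrix: state 2 is absorbing
    I = np.eye(1)

    N_inv = np.linalg.inv(I - Q)     # Equation 8.17

    N_sum = np.zeros((1, 1))         # partial sums of Equation 8.14
    term = I.copy()
    for _ in range(50):              # 50 terms of I + Q + Q^2 + ...
        N_sum += term
        term = term @ Q

    print(N_inv[0, 0], N_sum[0, 0])  # both give 2 time intervals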
8.7 Application of discrete Markov techniques

Two particular numerical examples can be considered in order to illustrate the application of the techniques described in the previous sections of this chapter.

Example 8.1

Consider the 3-state system shown in Figure 8.4 and the transition probabilities indicated. Evaluate (a) the limiting state probabilities associated with each state and (b) the average number of time intervals spent in each state if state 3 is defined as an absorbing state.

Fig. 8.4 System used in Example 8.1

(a) The stochastic transitional probability matrix for this system is

P = \begin{bmatrix} 3/4 & 1/4 & 0 \\ 0 & 1/2 & 1/2 \\ 1/3 & 1/3 & 1/3 \end{bmatrix}

If the limiting state probabilities are P_1, P_2 and P_3 respectively, then from Equation 8.6

\begin{bmatrix} P_1 & P_2 & P_3 \end{bmatrix} \begin{bmatrix} 3/4 & 1/4 & 0 \\ 0 & 1/2 & 1/2 \\ 1/3 & 1/3 & 1/3 \end{bmatrix} = \begin{bmatrix} P_1 & P_2 & P_3 \end{bmatrix}

giving the following explicit equations

\tfrac{3}{4}P_1 + \tfrac{1}{3}P_3 = P_1    (8.18)
\tfrac{1}{4}P_1 + \tfrac{1}{2}P_2 + \tfrac{1}{3}P_3 = P_2    (8.19)
\tfrac{1}{2}P_2 + \tfrac{1}{3}P_3 = P_3    (8.20)

One of these equations must be deleted and replaced by

P_1 + P_2 + P_3 = 1    (8.21)

Deleting Equation 8.20, rearranging the remaining three equations and putting into matrix form gives

\begin{bmatrix} -1/4 & 0 & 1/3 \\ 1/4 & -1/2 & 1/3 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} P_1 \\ P_2 \\ P_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

Using Cramer's Rule (Appendix 3)

P_1 = \begin{vmatrix} 0 & 0 & 1/3 \\ 0 & -1/2 & 1/3 \\ 1 & 1 & 1 \end{vmatrix} \bigg/ \begin{vmatrix} -1/4 & 0 & 1/3 \\ 1/4 & -1/2 & 1/3 \\ 1 & 1 & 1 \end{vmatrix} = 4/11

similarly P_2 = 4/11 and P_3 = 3/11.

(b) If state 3 is the absorbing state, the truncated matrix Q becomes

Q = \begin{bmatrix} 3/4 & 1/4 \\ 0 & 1/2 \end{bmatrix}

[I - Q] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} 3/4 & 1/4 \\ 0 & 1/2 \end{bmatrix} = \begin{bmatrix} 1/4 & -1/4 \\ 0 & 1/2 \end{bmatrix}

[I - Q]^{-1} = \begin{bmatrix} 1/2 & 1/4 \\ 0 & 1/4 \end{bmatrix} \bigg/ \begin{vmatrix} 1/4 & -1/4 \\ 0 & 1/2 \end{vmatrix} = 8 \begin{bmatrix} 1/2 & 1/4 \\ 0 & 1/4 \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 0 & 2 \end{bmatrix}

therefore, from Equation 8.17,

N = \begin{bmatrix} 4 & 2 \\ 0 & 2 \end{bmatrix}

or N_{11} = 4, N_{12} = 2, N_{21} = 0, N_{22} = 2.

These values indicate that the average number of time intervals spent in state 1 given the system started in state 1 is 4 (= N_{11}), the average number of time intervals spent in state 2 given the system started in state 1 is 2 (= N_{12}), and so on.

One of these values, N_{21}, is zero and this indicates that the system spends zero time intervals in state 1 given that it starts in state 2. The reason for this is that there is no direct transition from state 2 to state 1, the only way of going from state 2 to state 1 being through state 3, the absorbing state.
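Both parts of Example 8.1 can be confirmed numerically. The following sketch is an illustrative addition (the normalization step mirrors Equation 8.21):

    import numpy as np

    P = np.array([[3/4, 1/4, 0.0],
                  [0.0, 1/2, 1/2],
                  [1/3, 1/3, 1/3]])

    # (a) Limiting probabilities: alpha P = alpha, i.e. (P^T - I) alpha = 0,
    # with one redundant equation replaced by P1 + P2 + P3 = 1.
    A = P.T - np.eye(3)
    A[-1, :] = 1.0
    print(np.linalg.solve(A, np.array([0.0, 0.0, 1.0])))
    # -> [0.3636..., 0.3636..., 0.2727...] = [4/11, 4/11, 3/11]

    # (b) State 3 absorbing: delete its row and column, then N = inv(I - Q).
    Q = P[:2, :2]
    print(np.linalg.inv(np.eye(2) - Q))   # -> [[4, 2], [0, 2]]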
Example 8.2

A man either drives his car to work or catches a train. Assume that he never takes the train two days in a row but, if he drives to work, then the next day he is just as likely to drive again as he is to catch the train. Evaluate (a) the probability that he drives to work after (i) 2 days, (ii) a long time, and (b) the probability that he drives to work after (i) 2 days, (ii) a long time, if on the first day of work he tosses a fair die and drives to work only if a 2 appears.

The stochastic transitional probability matrix for this Markov process is

P = \begin{bmatrix} 0 & 1 \\ 1/2 & 1/2 \end{bmatrix}

in which the first row and column refer to t, his taking a train to work, and the second row and column to d, his driving. The elements of P can be deduced as follows. Row 1 represents the probabilities of taking the train and of driving the day after (next time interval) he caught the train. Since he never takes the train on two consecutive days, the elements of this row must be zero and unity respectively. Row 2 represents the probabilities of taking the train and of driving the day after he drove. These have equal probabilities and are therefore 1/2 each.

(a) (i) The transition probabilities after 2 days, i.e., 2 time intervals, are given by P^2

P^2 = \begin{bmatrix} 0 & 1 \\ 1/2 & 1/2 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1/2 & 1/2 \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix}

Suppose first that, on the first day of work, he takes the train. In this case the initial vector of probabilities P(0) is

P(0) = \begin{bmatrix} 1 & 0 \end{bmatrix}

and the state probabilities after 2 days are

P(2) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \end{bmatrix}

i.e., if he takes the train on the first day, he is as likely to catch the train as to drive two days later. Suppose instead that he drove to work on the first day. In this case the initial vector of probabilities is

P(0) = \begin{bmatrix} 0 & 1 \end{bmatrix}

and

P(2) = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} 1/4 & 3/4 \end{bmatrix}

Therefore, the probability that he drives to work is 3/4, i.e., he is 3 times more likely to drive than he is to catch the train.

(a) (ii) To evaluate the probabilities after a long time, we need to evaluate the limiting state probabilities. Let these limiting probabilities be P_t and P_d for catching the train and driving respectively, then

\begin{bmatrix} P_t & P_d \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1/2 & 1/2 \end{bmatrix} = \begin{bmatrix} P_t & P_d \end{bmatrix}

That is,

\tfrac{1}{2}P_d = P_t
P_t + \tfrac{1}{2}P_d = P_d

also

P_t + P_d = 1

A straightforward evaluation of these simultaneous equations gives

P_d = 2/3 and P_t = 1/3

Therefore, in the long run, he will drive to work 2/3 of the time.

(b) (i) The probability of getting a 2 in a single throw of a fair die = 1/6. Therefore the initial vector of probabilities in this case is

P(0) = \begin{bmatrix} 5/6 & 1/6 \end{bmatrix}

and

P(2) = \begin{bmatrix} 5/6 & 1/6 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} 11/24 & 13/24 \end{bmatrix}

Hence, the probability that he drives two days later is 13/24.

(b) (ii) Since this problem is an ergodic problem, the limiting values of probability do not depend on the initial conditions. The results for this case, which the reader may like to verify, are therefore identical to case (a)(ii), i.e., P_d = 2/3 and P_t = 1/3.
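The results of Example 8.2 can be reproduced in the same way; a brief illustrative sketch:

    import numpy as np

    P = np.array([[0.0, 1.0],     # t: never the train twice in a row
                  [0.5, 0.5]])    # d: equally likely to drive or train

    P2 = np.linalg.matrix_power(P, 2)

    print(np.array([0, 1]) @ P2)       # (a)(i): [1/4, 3/4], drives with 3/4
    print(np.array([5/6, 1/6]) @ P2)   # (b)(i): [11/24, 13/24]

    # (a)(ii)/(b)(ii): limiting probabilities from alpha P = alpha, sum = 1.
    A = P.T - np.eye(2)
    A[-1, :] = 1.0
    print(np.linalg.solve(A, np.array([0.0, 1.0])))  # [1/3, 2/3]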

8.8 Conclusions

This chapter has presented the essential and basic concepts of Markov modelling in terms of system problems that are discrete in space and time. Most reliability problems are concerned with systems that operate continuously with time and for this reason the reader may find the next chapter more applicable. It should be noted, however, that the underlying techniques used for continuously operated systems are based on the principles described in this chapter and therefore, before progressing to Chapter 9, the reader should become familiar with the techniques expounded in the previous sections of this chapter.

Further and more detailed applications of Markov modelling and evaluation of discrete systems are given in Feller [4] and Kemeny and Snell [27].

Problems

1 A man's exercise habits are as follows. If he does his exercises one day, he is 70% sure not to do them the next day. On the other hand, if he does not do his exercises one day he is 60% sure not to do them the next day. In the long run, how often does he do his exercises?

2 A tax consultant has a contract for three cities: X, Y and Z. He never stays in any city more than one day. If he visits city X, then the next day he visits city Y. If, however, he visits either Y or Z, then the next day he is twice as likely to visit city X as the other city. In the long run, how often does he visit each city?

3 A discrete process has the state diagram shown in Figure 8.5. Find the probabilities of being in each of the three states after three steps using a tree diagram given that the process started in State 1. Determine the limiting state probabilities and the mean number of steps to enter State 3 if the process starts each time in State 2.

Fig. 8.5

4 A player has 3 dollars. At each play of a game, he loses a dollar with a probability of 3/4 but wins 2 dollars with probability of 1/4. He stops playing if he has lost his 3 dollars or he has won at least 3 dollars. Find the transient probability matrix of the Markov chain. What is the probability that there are at least 4 plays of the game?
5 A gambler's luck follows a pattern: if he wins a game, the probability of winning the next game is 0.8. However, if he loses a game, the probability of losing the next game is 0.7. There is an even chance that he wins the first game.
(a) What is the probability that he wins the second game?
(b) What is the probability that he wins the third game?
(c) In the long run how often does he win?

6 Each year a man trades his car for a new car. If he has a Chrysler he trades it for a Plymouth. If he has a Plymouth he trades it for a Ford. However, if he has a Ford, he is just as likely to trade it in for a Chrysler or a Plymouth. In 1977 he bought his first car, which was a Ford.
(a) Find the probability that he has a
(i) 1979 Ford
(ii) 1979 Chrysler
(iii) 1980 Plymouth.
(b) In the long run how often will he have a Ford?

7 A psychologist makes the following assumptions concerning the behaviour of mice subjected to a particular feeding schedule. For any particular trial, 80% of the mice that went right on the previous experiment will go right on this trial and 60% of those mice that went left on the previous experiment will go right on this trial. If 50% went right on the first trial, what would he predict for
(a) the second trial
(b) the third trial
(c) the thousandth trial.

8 There are 2 white marbles in urn A and 3 red marbles in urn B. At each step of the process a marble is selected from each urn and the two marbles selected are interchanged. If the state a_i designates the number of red marbles in urn A,
(a) Find the transitional probability matrix of the process.
(b) What is the probability that there are 2 red marbles in urn A after 3 steps?
(c) In the long run what is the probability that there are 2 red marbles in urn A?

9 Two boys, b1 and b2, and two girls, g1 and g2, are throwing a ball from one to another. Each boy throws the ball to the other boy with a probability 0.5 and to each girl with a probability 0.25. On the other hand, each girl throws the ball to a boy with probability 0.5 and never to the other girl. In the long run how often does each receive the ball?

10 A man's smoking habits are as follows. If he smokes filter cigarettes this week, he switches to non-filter ones the next week with a probability of 0.2. On the other hand, if he smokes non-filter ones this week, there is 0.7 probability that he will smoke non-filter cigarettes the next week. In the long run how often does he smoke filter cigarettes?

11 A system can reside in one of the three mutually exclusive states shown in the state space diagram of Figure 8.6. The values shown are the probabilities of making the related transition at the end of each discrete time interval of 1 hr. States 1 and 2 represent system success and state 3 represents system failure. The system starts in state 1. Calculate:
(a) the probability of residing in each state after three time intervals;
(b) the limiting state availability of the system.

Fig. 8.6