
2 Basic Tools and Techniques

2.1 Modern random processes methods

2.1.1 General concepts

The purpose of this chapter is to describe basic concepts of applying a random process approach to power system reliability assessment. We do not present here the measure-theoretic framework that is necessary for readers who are not familiar with the general theory of stochastic processes. Readers who need this fundamental framework and a more detailed presentation of stochastic processes can find it in the following literature: (Kallenberg 1997), (Karlin and Taylor 1981), (Ross 1983). For reliability engineers and analysts, the books of Trivedi (2002), Aven and Jensen (1999) and Lisnianski and Levitin (2003) are especially recommended.
A stochastic or random process is, essentially, a set of random variables, where the variables are ordered in a given sequence. For example, the daily maximum temperatures at a weather station form a sequence of random variables, and this ordered sequence can be considered as a stochastic process. Another example is the sequence formed by the continuously changing number of people waiting in a queue at a ticket window of a railway station.
More formally, the sequence of random variables in a process can be denoted by X(t), where t is the index of the process. In this book, we deal with stochastic processes where t represents time.
A random variable X can be considered as a rule for assigning to every outcome ς of an experiment the number X(ς). A stochastic process is a rule for assigning to every ς the function X(t,ς). Thus, a stochastic process is a family of time functions depending on the parameter ς or, equivalently, a function of t and ς. The domain of ς is the set of all possible experimental outcomes and the domain of t is a set of non-negative real numbers.
For example, the trajectory of an airplane's flight from point A to point B at a given height H will be a stochastic process. Each flight can be considered as an experimental outcome ς, and each flight will have its own trajectory X(t,ς) that characterizes in this case the height of the flight as a function of time. This trajectory will differ from the trajectories of other flights because of the influence of many random factors (such as wind, temperature, pressure, etc.). In Fig. 2.1 one can see three different trajectories for three flights that can be treated as three different realizations of the stochastic process. It should be noticed that the cut of this stochastic process at any time t1 presents a random variable with mean H. In power systems, such parameters as generating capacity, voltage, and frequency may be considered as stochastic processes.

Fig. 2.1. Trajectories of three flights (height of flight vs. time)

Time may be discrete or continuous. A discrete time may have a finite or infinite number of values; continuous time obviously has only an infinite number of values. The values taken by the random variables constitute the state-space. This state-space, in its turn, may be discrete or continuous. Therefore, stochastic processes may be classified into four categories according to whether their state-spaces and time are continuous or discrete. A process with a discrete state-space is usually called a chain.
The stochastic process X(t,ς) has the following interpretations:
1. It is a family of functions X(t,ς), where t and ς are variables.
2. It is a single time function or a realization (sample) of the given process if t is a variable and ς is fixed.
3. It is a random variable equal to the state of the given process at time t when t is fixed and ς is variable.
4. It is a number if t and ς are fixed.
One can use the notation X(t) to represent a stochastic process, omitting, as in the case of random variables, its dependence on ς.
First, consider a process with a discrete set of indices {t_i} and a discrete state-space {x_k}. The indices are ordered so that t_i < t_{i+1} for any i (no such arrangement is necessary for the set {x_k}). The process is defined by some rules indicating how the distribution of any random variable X_n = X(t_n) depends on the values assumed by all previous variables X_i (1 ≤ i ≤ n−1). These rules actually define conditional probabilities

$$\Pr\{X_n = x_n \mid (X_1 = x_1) \cap (X_2 = x_2) \cap \dots \cap (X_{n-1} = x_{n-1})\} \qquad (2.1)$$

and provide the distribution of X_n in terms of what can be called the "past history" of the process.

In general, a different distribution of X_n is obtained for every realization of the process in the time domain (t_1, t_{n-1}), with a given set x_1,…,x_{n-1} representing a single realization. The dependence of the distribution of X_n on past history can be illustrated by the daily temperature example. Assume that neither the date nor the season is known about day t_n and the only information available is the maximum temperature readings for a number of preceding days. The expected temperature distribution for day t_n is then based on this information and is also greatly dependent on it. These readings are characteristic of the season since different temperature probabilities can be expected in the summer and in the winter.
Of course, it is also possible that the distribution of X_n does not depend on past history beyond the most recent value. A process with this property is called a Markov process. In a Markov process, the probabilities of the random variable at time t_n depend on the value of the random variable at t_{n-1} but not on the realization of the process prior to t_{n-1}. In other words, the state probabilities at a future instant, given the present state of the process, do not depend on the states occupied in the past. Therefore, this process is also called "memoryless".
In power system reliability analysis, stochastic processes with continuous time and a discrete state-space {x_k} are widely used. In order to illustrate this type of stochastic process, we define here two important processes that will be used later: point and renewal processes.
A point process is a set of random points t_i on the time axis. With each point process one can associate a stochastic process X(t) equal to the number of points t_i in the interval (0, t). In reliability theory, point processes are widely used to describe the appearance of events in time (e.g., failures, terminations of repair, etc.).
An example of a point process is the so-called Poisson process. The Poisson process is usually introduced using the Poisson points. These points are associated with certain events, and the number N(t_1, t_2) of the points in an interval (t_1, t_2) of length t = t_2 − t_1 is a Poisson random variable with parameter λt, where λ is the mean occurrence rate of the events:

$$\Pr\{N(t_1,t_2) = k\} = \frac{e^{-\lambda t}(\lambda t)^k}{k!}. \qquad (2.2)$$

If the intervals (t_1, t_2) and (t_3, t_4) are not overlapping, then the random variables N(t_1, t_2) and N(t_3, t_4) are independent. Using the points t_i, one can form the stochastic process X(t) = N(0, t).
The Poisson process plays a special role in reliability analysis, comparable to the role of the normal distribution in probability theory. Many real physical situations can be successfully described with the help of Poisson processes.
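Since the Poisson model recurs throughout this chapter, a small numerical check of (2.2) may be helpful. The following sketch (an illustration added here, not from the original text; all names and values are arbitrary) draws Poisson-distributed counts N(t_1, t_2) with parameter λt and compares the empirical frequencies with the analytical probabilities.

import numpy as np
from math import exp, factorial

# Illustrative check of Eq. (2.2): draw the Poisson counts N(t1, t2) with
# parameter lam*t directly and compare empirical frequencies with the formula.
rng = np.random.default_rng(42)
lam, t1, t2 = 2.0, 1.0, 4.0            # mean occurrence rate; interval length t = 3
t = t2 - t1
counts = rng.poisson(lam * t, size=100_000)

for k in range(6):
    empirical = np.mean(counts == k)
    theoretical = exp(-lam * t) * (lam * t) ** k / factorial(k)
    print(f"k={k}: empirical {empirical:.4f}, Eq. (2.2) {theoretical:.4f}")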
A well-known type of point process is the so-called renewal process. This process can be described as a sequence of events, the intervals between which are independent and identically distributed random variables. In reliability theory, this kind of mathematical model is used to describe the flow of failures in time.
To every point process t_i one can associate a sequence of random variables y_n such that y_1 = t_1, y_2 = t_2 − t_1, …, y_n = t_n − t_{n-1}, where t_1 is the first random point to the right of the origin. This sequence is called a renewal process. An example is the life history of items that are replaced as soon as they fail. In this case, y_i is the total time the i-th item is in operation and t_i is the time of its failure.
One can see a correspondence among the following three processes:
• a point process t_i,
• a discrete-state stochastic process X(t) increasing (or decreasing) by 1 at the points t_i,
• a renewal process consisting of the random variables y_i such that t_n = y_1 + … + y_n.
A generalization of this type of process is the so-called alternating renewal process. This process consists of two types of independent and identically distributed random variables alternating with each other in turn. This type of process is convenient for the description of repairable systems. For such systems, periods of successful operation alternate with periods of idle time.
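The alternating renewal picture can be illustrated with a short simulation. The sketch below (added here for illustration, with hypothetical parameter values) alternates exponentially distributed operation and repair periods and estimates the long-run fraction of operating time, which should approach MTTF/(MTTF + MTTR).

import numpy as np

# Illustrative sketch: an alternating renewal process for a repairable item,
# where i.i.d. operation periods alternate with i.i.d. repair periods.
rng = np.random.default_rng(1)
mttf, mttr, n_cycles = 1000.0, 5.0, 50_000   # hypothetical mean up/down times

up = rng.exponential(mttf, n_cycles)         # operation periods
down = rng.exponential(mttr, n_cycles)       # repair (idle) periods

print(up.sum() / (up.sum() + down.sum()))    # empirical fraction of up time
print(mttf / (mttf + mttr))                  # theoretical value, ~0.995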
In this chapter, power system reliability models will be studied successively based on Markov processes, Markov reward processes and semi-Markov processes. Markov processes are widely used for reliability analysis because the number of failures in an arbitrary time interval in many practical cases can be described as a Poisson process, and the time up to the failure and the repair time are often exponentially distributed. It will be shown how power system reliability measures can be determined by using the theory of Markov processes. It will also be shown how such power system reliability measures as the mean time up to the failure, the mean number of failures in a time interval, and the mean sojourn time in a set of unacceptable states can be found by using Markov reward models. In practice, the basic assumptions about exponential distributions of times between failures and repair times often do not hold. In this case, a more complicated mathematical technique named semi-Markov processes may be applied.

2.1.2 Markov models

Main definitions and properties


A discrete-state continuous-time stochastic process X(t) ∈ {1,2,…} is called a Markov chain if for t_1 < t_2 < … < t_{n-1} < t_n its conditional probability mass function satisfies the relation

$$\Pr\{X(t_n) = x_n \mid X(t_{n-1}) = x_{n-1}, \dots, X(t_2) = x_2, X(t_1) = x_1\} = \Pr\{X(t_n) = x_n \mid X(t_{n-1}) = x_{n-1}\}. \qquad (2.3)$$

Introducing the notations t = t_{n-1} and t_n = t_{n-1} + Δt, the expression (2.3) simplifies to

$$\Pr\{X(t + \Delta t) = i \mid X(t) = j\} = \pi_{ji}(t, \Delta t). \qquad (2.4)$$

These conditional probabilities are called transition probabilities. If the probabilities π_{ji}(t, Δt) do not depend on t, but only on the time difference Δt, the Markov process is said to be homogeneous. Note that π_{jj}(t, Δt) is the probability that no change in the state will occur in a time interval of length Δt given that the process is in state j at the beginning of the interval.
One can define for each j a nonnegative continuous function a_j(t):

$$a_j(t) = \lim_{\Delta t \to 0} \frac{\pi_{jj}(t,0) - \pi_{jj}(t,\Delta t)}{\Delta t} = \lim_{\Delta t \to 0} \frac{1 - \pi_{jj}(t,\Delta t)}{\Delta t} \qquad (2.5)$$

and for each j and i ≠ j a nonnegative continuous function a_{ji}(t):

$$a_{ji}(t) = \lim_{\Delta t \to 0} \frac{\pi_{ji}(t,0) - \pi_{ji}(t,\Delta t)}{-\Delta t} = \lim_{\Delta t \to 0} \frac{\pi_{ji}(t,\Delta t)}{\Delta t}. \qquad (2.6)$$
The function a_{ji}(t) is called the transition intensity from state j to state i at time t. For homogeneous Markov processes, the transition intensities do not depend on t and therefore are constant.
If the process is in state j at a given moment, in the next Δt time interval there is either a transition from j to some state i or the process remains at j. Therefore

$$\pi_{jj}(\Delta t) + \sum_{i \neq j} \pi_{ji}(\Delta t) = 1. \qquad (2.7)$$

Designating a_{jj} = −a_j and combining (2.7) with (2.5) one obtains

$$a_{jj} = -a_j = -\lim_{\Delta t \to 0} \frac{1}{\Delta t}\sum_{i \neq j}\pi_{ji}(\Delta t) = -\sum_{i \neq j} a_{ji}. \qquad (2.8)$$

Let p_i(t) be the state probabilities of X(t) at time t:

$$p_i(t) = \Pr\{X(t) = i\}, \quad i = 1, \dots, k; \; t \ge 0. \qquad (2.9)$$

Therefore, expression (2.9) defines the probability mass function (pmf) of X(t).
Since at any given time the process must be in one of the k states,

$$\sum_{i=1}^{k} p_i(t) = 1 \qquad (2.10)$$

for any t ≥ 0.
The state probabilities at instant t + Δt can be expressed based on the state probabilities at instant t by using the following equations:

$$p_j(t + \Delta t) = p_j(t)\Big[1 - \sum_{i \neq j} a_{ji}\Delta t\Big] + \sum_{i \neq j} p_i(t)\,a_{ij}\Delta t, \quad j = 1, \dots, k. \qquad (2.11)$$

Equation (2.11) can be obtained by using the following considerations. The process can reach state j at instant t + Δt in two ways.
1. The process may already be in state j at instant t and not leave this state up to the instant t + Δt. These events have probabilities p_j(t) and 1 − Σ_{i≠j} a_{ji}Δt, respectively.
2. At instant t the process may be in one of the states i ≠ j and during time Δt transit from state i to state j. These events have probabilities p_i(t) and a_{ij}Δt, respectively. These probabilities should be multiplied and summed over all i ≠ j because the process can reach state j from any state i.
Now one can rewrite (2.11) by using (2.8) and obtain the following:

$$p_j(t + \Delta t) = p_j(t)[1 + a_{jj}\Delta t] + \sum_{i \neq j} p_i(t)\,a_{ij}\Delta t, \qquad (2.12)$$

or

$$p_j(t + \Delta t) - p_j(t) = \sum_{i=1}^{k} p_i(t)\,a_{ij}\Delta t. \qquad (2.13)$$

After dividing both sides of equation (2.13) by Δt and passing to the limit Δt → 0, we get

$$\frac{dp_j(t)}{dt} = \sum_{i=1}^{k} p_i(t)\,a_{ij}, \quad j = 1, 2, \dots, k. \qquad (2.14)$$

The system of differential equations (2.14) is used for finding the state probabilities p_j(t), j = 1,…,k, for the homogeneous Markov process when the initial conditions are given:

$$p_j(0) = \alpha_j, \quad j = 1, \dots, k. \qquad (2.15)$$

More mathematical details about (2.14) may be found in (Trivedi 2002) or in (Ross 1993).
Equation (2.14) defines the following rule: the time-derivative of p_j(t) for any arbitrary state j equals the sum of the probabilities of the states that have transitions to state j multiplied by the corresponding transition intensities, minus the probability of state j multiplied by the sum of the intensities of all transitions from state j.
Introducing the row-vector p(t) = [p_1(t), p_2(t), …, p_k(t)] and the transition intensity matrix a

$$\mathbf{a} = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1K} \\ a_{21} & a_{22} & \dots & a_{2K} \\ \vdots & \vdots & & \vdots \\ a_{K1} & a_{K2} & \dots & a_{KK} \end{pmatrix} \qquad (2.16)$$

in which the diagonal elements are defined as a_{jj} = −a_j, we can rewrite the system (2.14) in matrix notation:

$$\frac{d\mathbf{p}(t)}{dt} = \mathbf{p}(t)\,\mathbf{a}. \qquad (2.17)$$

Note that the sum of the matrix elements in each row equals 0: $\sum_{j=1}^{K} a_{ij} = 0$ for each i, 1 ≤ i ≤ K.
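Equation (2.17) is a linear ODE system and can be integrated numerically. The following sketch (an illustration added here, not from the original text) solves dp/dt = p a with scipy for a hypothetical two-state element with failure rate λ and repair rate μ.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch of Eq. (2.17): hypothetical two-state element
# (state 2 = operation, state 1 = failure), rates in 1/year.
lam, mu = 1.0, 200.0
a = np.array([[-mu, mu],
              [lam, -lam]])              # a[i, j] = intensity of i -> j
assert np.allclose(a.sum(axis=1), 0.0)   # each row of a sums to zero

p0 = np.array([0.0, 1.0])                # process starts in state 2
sol = solve_ivp(lambda t, p: p @ a, (0.0, 0.1), p0, dense_output=True)
print(sol.sol(0.1))                      # state probabilities at t = 0.1 years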
When the system state transitions are caused by failures and repairs of its elements, the corresponding transition intensities are expressed by the element's failure and repair rates.
The element's failure rate λ(t) is the instantaneous conditional density of the probability of failure of an initially operational element at time t given that the element has not failed up to time t. Briefly, one can say that λ(t) is the time-to-failure conditional probability density function (pdf). It expresses the hazard of failure at time instant t under the condition that there was no failure up to time t. The failure rate of an element at time t is defined as

$$\lambda(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\left[\frac{F(t + \Delta t) - F(t)}{R(t)}\right] = \frac{f(t)}{R(t)}, \qquad (2.18)$$

where F(t) is the cdf of the time to failure of the element, f(t) is the pdf of the time to failure of the element, and R(t) = 1 − F(t) is the reliability function of the element.
For homogeneous Markov processes the failure rate does not depend on t and can be expressed as

$$\lambda = \mathrm{MTTF}^{-1}, \qquad (2.19)$$

where MTTF is the mean time to failure. Similarly, the repair rate μ(t) is the time-to-repair conditional pdf. For homogeneous Markov processes the repair rate does not depend on t and can be expressed as

$$\mu = \mathrm{MTTR}^{-1}, \qquad (2.20)$$

where MTTR is the mean time to repair.
In many applications, the long-run (final) or steady-state probabilities lim_{t→∞} p_i(t) are of interest for the repairable element. If the long-run state probabilities exist, the process is called ergodic. For the final state probabilities, the computations become simpler. The set of differential equations (2.14) is reduced to a set of k algebraic linear equations because for the constant probabilities all time-derivatives dp_i(t)/dt, i = 1,…,k, are equal to zero.
Let the final state probabilities p_i = lim_{t→∞} p_i(t) exist. In this case, in the steady state, all derivatives of the state probabilities on the left side of (2.14) are zeroes. So, in order to find the long-run probabilities the following system of algebraic linear equations should be solved:

$$0 = \sum_{i=1}^{k} p_i a_{ij}, \quad j = 1, 2, \dots, k. \qquad (2.21)$$

The k equations in (2.21) are not linearly independent (the determinant of the system is zero). An additional independent equation is provided by the simple fact that the sum of the state probabilities is equal to 1 at any time:

$$\sum_{i=1}^{k} p_i = 1. \qquad (2.22)$$

The frequency f_i of state i is defined as the expected number of arrivals into this state per unit time. Usually the concept of frequency is associated with the long-term (steady-state) behavior of the process. In order to relate the frequency, the probability and the mean time of staying in state i, we consider the system evolution in the state space as consisting of two alternating periods: the stays in i and the stays outside i. Thus, the process is represented by two states. Designate the mean duration of the stays in state i as $\bar{T}_i$ and that of the stays outside i as $\bar{T}_{oi}$. The mean cycle time $\bar{T}_{ci}$ is then

$$\bar{T}_{ci} = \bar{T}_i + \bar{T}_{oi}. \qquad (2.23)$$

From the definition of the state frequency it follows that, in the long run, f_i equals the reciprocal of the mean cycle time:

$$f_i = \frac{1}{\bar{T}_{ci}}. \qquad (2.24)$$

Multiplying both sides of equation (2.24) by $\bar{T}_i$ one gets

$$\bar{T}_i f_i = \frac{\bar{T}_i}{\bar{T}_{ci}} = p_i. \qquad (2.25)$$

Therefore,

$$f_i = \frac{p_i}{\bar{T}_i}. \qquad (2.26)$$

This is a fundamental equation, which provides the relation between the three state parameters in the steady state.
The unconditional random value T_i is the minimum of all random values T_{ij} that characterize the conditional random time of staying in state i given that the transition is performed from state i to state j ≠ i:

$$T_i = \min_{j \neq i}\{T_{ij}\}. \qquad (2.27)$$

All conditional times T_{ij} are distributed exponentially with the cumulative distribution functions $F_{ij}(t) = \Pr\{T_{ij} \le t\} = 1 - e^{-a_{ij}t}$. All transitions from state i are independent and, therefore, the cumulative distribution function of the unconditional time T_i of staying in state i can be computed as follows:

$$F_i(t) = 1 - \Pr\{T_i > t\} = 1 - \prod_{j \neq i}\Pr\{T_{ij} > t\} = 1 - \prod_{j \neq i}[1 - F_{ij}(t)] = 1 - \prod_{j \neq i}e^{-a_{ij}t} = 1 - e^{-\sum_{j \neq i} a_{ij}\,t}. \qquad (2.28)$$

It means that the unconditional time T_i is distributed exponentially with parameter $a_i = \sum_{j \neq i} a_{ij}$, and the mean time of staying in state i is

$$\bar{T}_i = \frac{1}{\sum_{j \neq i} a_{ij}}. \qquad (2.29)$$

Substituting $\bar{T}_i$ into the expression (2.26) we finally get

$$f_i = p_i \sum_{j \neq i} a_{ij}. \qquad (2.30)$$
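The steady-state probabilities of (2.21)-(2.22) and the frequencies of (2.30) reduce to a single linear solve. A minimal sketch (added here for illustration), using the same hypothetical two-state matrix as above:

import numpy as np

# Illustrative sketch: replace one balance equation of (2.21) by the
# normalization condition (2.22), solve for the long-run probabilities,
# and apply Eq. (2.30) for the state frequencies.
lam, mu = 1.0, 200.0
a = np.array([[-mu, mu],
              [lam, -lam]])

M = a.T.copy()                 # equations sum_i p_i a_ij = 0, j = 1..k
M[-1, :] = 1.0                 # last equation becomes sum_i p_i = 1
p = np.linalg.solve(M, np.array([0.0, 1.0]))

f = p * (-np.diag(a))          # f_i = p_i * sum_{j != i} a_ij = -p_i * a_ii
print(p, f)                    # p = [1/201, 200/201]; both frequencies equal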

Example 2.1
Consider a power generating unit that has k = 4 possible performance levels (generating capacities): g_4 = 100 MW, g_3 = 80 MW, g_2 = 50 MW and g_1 = 0 MW.
The unit has the following failure rates:
λ_{4,3} = 2 year⁻¹, λ_{3,2} = 1 year⁻¹, λ_{2,1} = 0.7 year⁻¹,
λ_{3,1} = 0.4 year⁻¹, λ_{4,2} = 0.3 year⁻¹, λ_{4,1} = 0.1 year⁻¹,
and the following repair rates:
μ_{3,4} = 100 year⁻¹, μ_{2,3} = 80 year⁻¹, μ_{1,2} = 50 year⁻¹ (for minor repairs),
μ_{1,4} = 32 year⁻¹, μ_{1,3} = 40 year⁻¹, μ_{2,4} = 45 year⁻¹ (for major repairs).
The state-space diagram for the unit is presented in Fig. 2.2. The initial state is state 4.
Fig. 2.2. State-space diagram for the four-state unit

In order to find the state probabilities the following system of differential equations needs to be solved (see (2.14)):

$$\begin{cases}
\dfrac{dp_4(t)}{dt} = -(\lambda_{4,3} + \lambda_{4,2} + \lambda_{4,1})p_4(t) + \mu_{3,4}p_3(t) + \mu_{2,4}p_2(t) + \mu_{1,4}p_1(t) \\
\dfrac{dp_3(t)}{dt} = \lambda_{4,3}p_4(t) - (\lambda_{3,2} + \lambda_{3,1} + \mu_{3,4})p_3(t) + \mu_{1,3}p_1(t) + \mu_{2,3}p_2(t) \\
\dfrac{dp_2(t)}{dt} = \lambda_{4,2}p_4(t) + \lambda_{3,2}p_3(t) - (\lambda_{2,1} + \mu_{2,3} + \mu_{2,4})p_2(t) + \mu_{1,2}p_1(t) \\
\dfrac{dp_1(t)}{dt} = \lambda_{4,1}p_4(t) + \lambda_{3,1}p_3(t) + \lambda_{2,1}p_2(t) - (\mu_{1,2} + \mu_{1,3} + \mu_{1,4})p_1(t)
\end{cases}$$

with the initial conditions p_4(0) = 1, p_3(0) = p_2(0) = p_1(0) = 0.


The state probabilities obtained by solving this system are presented in Fig. 2.3.
The unit instantaneous availability can be obtained for different constant demand levels w:

A_3(t) = p_4(t), for g_3 < w ≤ g_4;
A_2(t) = p_4(t) + p_3(t), for g_2 < w ≤ g_3;
A_1(t) = p_4(t) + p_3(t) + p_2(t) = 1 − p_1(t), for g_1 < w ≤ g_2.

The unit's mean instantaneous capacity at time t is

$$E_t = \sum_{k=1}^{4} g_k p_k(t) = 100p_4(t) + 80p_3(t) + 50p_2(t) + 0 \cdot p_1(t),$$

and the mean instantaneous capacity deficiency (for constant demand w = 60 MW) is

$$D_t = \sum_{k=1}^{4} p_k(t)\max(w - g_k, 0) = 10p_2(t) + 60p_1(t).$$

These indices, as functions of time, are presented in Fig. 2.4.
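As a numerical illustration (added here, not part of the original text), the transient indices of Example 2.1 can be reproduced by assembling the intensity matrix from the rates above and integrating (2.14):

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch of Example 2.1: 4x4 intensity matrix built from the
# failure and repair rates given above (all in 1/year); states ordered 1..4.
rates = {(4, 3): 2.0, (3, 2): 1.0, (2, 1): 0.7, (3, 1): 0.4, (4, 2): 0.3,
         (4, 1): 0.1, (3, 4): 100.0, (2, 3): 80.0, (1, 2): 50.0,
         (1, 4): 32.0, (1, 3): 40.0, (2, 4): 45.0}
a = np.zeros((4, 4))
for (i, j), v in rates.items():
    a[i - 1, j - 1] = v
np.fill_diagonal(a, -a.sum(axis=1))

g = np.array([0.0, 50.0, 80.0, 100.0])   # g1..g4, MW
w = 60.0                                 # constant demand, MW
p0 = np.array([0.0, 0.0, 0.0, 1.0])      # initial state 4

sol = solve_ivp(lambda t, p: p @ a, (0.0, 0.1), p0, t_eval=[0.02, 0.07, 0.1])
for t, p in zip(sol.t, sol.y.T):
    A2 = p[3] + p[2]                     # availability for g2 < w <= g3
    Et = g @ p                           # mean instantaneous capacity
    Dt = np.maximum(w - g, 0.0) @ p      # mean capacity deficiency
    print(f"t={t:5.2f}: A2={A2:.4f}, Et={Et:6.2f} MW, Dt={Dt:.3f} MW")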

Fig. 2.3. State probabilities and instantaneous availability of the four-state element

The final state or steady-state probabilities can be found by solving the system of linear algebraic equations (2.21) in which one of the equations is replaced by the equation (2.22). In our example, the system takes the form

$$\begin{cases}
(\lambda_{4,3} + \lambda_{4,2} + \lambda_{4,1})p_4 = \mu_{3,4}p_3 + \mu_{2,4}p_2 + \mu_{1,4}p_1 \\
(\lambda_{3,2} + \lambda_{3,1} + \mu_{3,4})p_3 = \lambda_{4,3}p_4 + \mu_{2,3}p_2 + \mu_{1,3}p_1 \\
(\lambda_{2,1} + \mu_{2,3} + \mu_{2,4})p_2 = \lambda_{4,2}p_4 + \lambda_{3,2}p_3 + \mu_{1,2}p_1 \\
p_1 + p_2 + p_3 + p_4 = 1
\end{cases}$$

Solving this system, we obtain the final state probabilities:

$$p_1 = \frac{\mu_{1,4}(b_2c_3 - b_3c_2) + \mu_{1,2}(a_2b_3 - a_3b_2) + \mu_{1,3}(a_3c_2 - a_2c_3)}{a_1b_2c_3 + a_2b_3c_1 + a_3b_1c_2 - a_3b_2c_1 - a_1b_3c_2 - a_2b_1c_3},$$

$$p_2 = \frac{\mu_{2,3}(a_1c_3 - a_3c_1) + \mu_{2,4}(b_3c_1 - b_1c_3) + (\lambda_{2,1} + \mu_{2,3} + \mu_{2,4})(a_1b_3 - a_3b_1)}{a_1b_2c_3 + a_2b_3c_1 + a_3b_1c_2 - a_3b_2c_1 - a_1b_3c_2 - a_2b_1c_3},$$

$$p_3 = \frac{\lambda_{3,2}(a_1b_2 - a_2b_1) + (\lambda_{3,2} + \lambda_{3,1} + \mu_{3,4})(a_1c_2 - a_2c_1) + \mu_{3,4}(b_1c_2 - b_2c_1)}{a_1b_2c_3 + a_2b_3c_1 + a_3b_1c_2 - a_3b_2c_1 - a_1b_3c_2 - a_2b_1c_3},$$

$$p_4 = 1 - p_1 - p_2 - p_3,$$

where
a_1 = μ_{1,4} − μ_{2,4}, a_2 = μ_{1,4} − μ_{3,4}, a_3 = μ_{1,4} + λ_{4,3} + λ_{4,2} + λ_{4,1},
b_1 = μ_{1,3} − μ_{2,3}, b_2 = μ_{1,3} + λ_{3,2} + λ_{3,1} + μ_{3,4}, b_3 = μ_{1,3} − λ_{4,3},
c_1 = μ_{1,2} + λ_{2,1} + μ_{2,3} + μ_{2,4}, c_2 = μ_{1,2} − λ_{3,2}, c_3 = μ_{1,2} − λ_{4,2}.

The corresponding state frequencies are obtained according to (2.30):

f_1 = p_1(μ_{1,2} + μ_{1,3} + μ_{1,4}),
f_2 = p_2(μ_{2,4} + μ_{2,3} + λ_{2,1}),
f_3 = p_3(μ_{3,4} + λ_{3,2} + λ_{3,1}),
f_4 = p_4(λ_{4,3} + λ_{4,2} + λ_{4,1}).

The steady-state availability of the element for constant demand w = 60 MW is

A = p_4 + p_3,

the mean steady-state capacity is

$$E_\infty = \sum_{k=1}^{4} g_k p_k = 100p_4 + 80p_3 + 50p_2 + 0 \cdot p_1,$$

and the mean steady-state capacity deficiency is

$$D_\infty = \sum_{k=1}^{4} p_k \max(w - g_k, 0) = 10p_2 + 60p_1.$$
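Instead of evaluating the closed-form expressions, the steady-state indices can also be obtained numerically. An illustrative continuation of the Example 2.1 sketch:

import numpy as np

# Illustrative sketch: the steady-state indices of Example 2.1 via the
# linear system (2.21)-(2.22), usually the more robust route numerically.
rates = {(4, 3): 2.0, (3, 2): 1.0, (2, 1): 0.7, (3, 1): 0.4, (4, 2): 0.3,
         (4, 1): 0.1, (3, 4): 100.0, (2, 3): 80.0, (1, 2): 50.0,
         (1, 4): 32.0, (1, 3): 40.0, (2, 4): 45.0}
a = np.zeros((4, 4))
for (i, j), v in rates.items():
    a[i - 1, j - 1] = v
np.fill_diagonal(a, -a.sum(axis=1))

M = a.T.copy()
M[-1, :] = 1.0                                   # normalization (2.22)
p = np.linalg.solve(M, np.array([0.0, 0.0, 0.0, 1.0]))

g, w = np.array([0.0, 50.0, 80.0, 100.0]), 60.0
print("A     =", p[3] + p[2])                    # steady-state availability
print("E_inf =", g @ p, "MW")
print("D_inf =", np.maximum(w - g, 0.0) @ p, "MW")
print("f     =", p * (-np.diag(a)))              # state frequencies, Eq. (2.30)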

As can be seen in Fig. 2.3 and Fig. 2.4, the steady-state values of the state probabilities are achieved within a short time. After 0.07 years, the process becomes stationary. For this reason, only the final solution is important in many practical cases. This is especially so for elements with a relatively long lifetime, which is the case in our example if the unit lifetime is at least several years. However, if one deals with highly responsible components and takes into account even small energy losses at the beginning of the process, the analysis based on the system of differential equations should be performed.
Fig. 2.4. Instantaneous mean capacity and capacity deficiency of the four-state generating unit

Example 2.2
This example presents the development of a Markov model for a typical generation unit based on historical failure data (Goldner and Lisnianski 2006).
The model represents a commercial 360-megawatt coal fired generation unit and was designed to incorporate the major derated/outage states of the unit derived from the analysis of the historical failure data (the unit's failures in the period from 1985 to 2003). The methodology of the data acquisition is based on the NERC-GADS Data Reporting Instructions (North American Electric Reliability Council, 2003, Generating Availability Data System. Data Reporting Instructions. Princeton, NJ).
A total of 1498 recorded events were classified as planned or unplanned outages and scheduled or forced deratings (each derating is reported independently and relates to a capacity reduction from the nominal level). Based on the data analysis, ten states were identified as most significant. Fig. 2.5 presents the distribution of the events as a function of the available capacity remaining after the event.
Table 2.1 provides a summary of failure statistics and reliability parameters for the unit. Each event that leads to capacity derating is treated as a failure. For example, there were 111 events (failures) that led to capacity derating from the nominal capacity E_n = 360 MW down to capacity levels ranging from 290 MW up to 310 MW. The average derated capacity level for this kind of failure is G_8 = 303 MW. The mean time up to this kind of failure is MTTF_{10,8} = 1,454 hours. The mean time up to unit repair and return to the state with the nominal generating capacity is MTTR_{8,10} = 6.7 hours.
It is assumed that the time to repair (TTR) and the time to failure (TTF) are distributed exponentially. So, MTTF and MTTR define the corresponding transition intensities according to expressions (2.19) and (2.20).

Fig. 2.5. Distribution of all types of failure events (number of events vs. capacity range, MW)

The state-space diagram for the coal fired generation unit is presented in Fig. 2.6. According to Table 2.1, the diagram has 10 different states, ranging from state 10 with the nominal generating capacity of 360 MW down to state 1 (complete failure with a capacity of 0 MW).

Table 2.1. Summary of failure parameters

Level number i | Average capacity level G_i (MW) | Capacity range (MW) | Number of failures | MTTR_{i,10} (hr) | MTTF_{10,i} (hr)
1 | 0   | 0          | 189 | 93.5 | 749
2 | 124 | (0, 170]   | 134 | 4.3  | 1,219
3 | 181 | (170, 190] | 160 | 7.7  | 1,022
4 | 204 | (190, 220] | 147 | 6.4  | 1,120
5 | 233 | (220, 240] | 52  | 3.9  | 3,187
6 | 255 | (240, 270] | 120 | 7.8  | 1,389
7 | 282 | (270, 290] | 75  | 6.0  | 2,221
8 | 303 | (290, 310] | 111 | 6.7  | 1,464
9 | 328 | (310, 360) | 510 | 6.9  | 311
Fig. 2.6. State-space diagram for the coal fired generation unit

We designate the intensity of transition from state 10 (with the nominal generating capacity) to any derated state i = 1, 2, …, 9 as λ_{10,i} = 1/MTTF_{10,i}. In the same way we designate the intensity of transition from state i to state 10 as μ_{i,10} = 1/MTTR_{i,10}.
According to the state-space diagram in Fig. 2.6 and the transition parameters from Table 2.1, the corresponding system of differential equations for the state probabilities p_k(t), k = 1, …, 10, takes the form:

$$\frac{dp_k(t)}{dt} = -\mu_{k,10}\,p_k(t) + \lambda_{10,k}\,p_{10}(t), \quad k = 1, \dots, 9,$$

$$\frac{dp_{10}(t)}{dt} = \sum_{k=1}^{9}\mu_{k,10}\,p_k(t) - \sum_{k=1}^{9}\lambda_{10,k}\,p_{10}(t).$$
Solving this system under the initial conditions p_10(0) = 1, p_9(0) = … = p_1(0) = 0, one obtains the state probabilities. For example, for t = 1 month = 744 hours: p_10(744) = 0.854, p_9(744) = 0.020, p_8(744) = 0.004.
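A sketch of this computation (added here for illustration; the garbled level-5 MTTF entry in Table 2.1 is read as 3,187 hours):

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch of the Example 2.2 model: the derated states 1..9
# exchange only with state 10, so lam_{10,k} = 1/MTTF_{10,k} and
# mu_{k,10} = 1/MTTR_{k,10} (Eqs. (2.19)-(2.20)); values in hours.
mttr = np.array([93.5, 4.3, 7.7, 6.4, 3.9, 7.8, 6.0, 6.7, 6.9])
mttf = np.array([749.0, 1219, 1022, 1120, 3187, 1389, 2221, 1464, 311])

c = np.zeros((10, 10))
c[9, :9] = 1.0 / mttf            # transitions 10 -> k (failures)
c[:9, 9] = 1.0 / mttr            # transitions k -> 10 (repairs)
np.fill_diagonal(c, -c.sum(axis=1))

p0 = np.zeros(10); p0[9] = 1.0   # start in state 10 (360 MW)
sol = solve_ivp(lambda t, p: p @ c, (0.0, 744.0), p0, t_eval=[744.0])
p = sol.y[:, -1]
print(p[9], p[8], p[7])          # should be close to 0.854, 0.020, 0.004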
The element instantaneous availability can be obtained for different constant demand levels as the sum of the probabilities of the acceptable states, i.e., the states where the generating capacity is greater than or equal to the demand level. For a demand lower than or equal to the capacity level in state k we obtain:

$$A_k(t) = \sum_{e=k}^{10} p_e(t).$$

The corresponding graphs of A_k(t) for k = 5, k = 7 and k = 9 are presented in Fig. 2.7.
Fig. 2.7.

Fig. 2.7. Availability of generation unit for different demand levels (k = 5, 7, 9)

The unit mean instantaneous performance at time t is

$$E(t) = \sum_{k=1}^{10} G_k p_k(t).$$
The graph of the function E(t) (as a percentage of the nominal unit capacity) is presented in Fig. 2.8.

Fig. 2.8. Instantaneous mean generating capacity of generation unit

Based on the Markov model with derated states we obtained realistic availability and mean performance indices for a typical coal fired unit. This model allows an analyst to compare performance indices for different generation units based on their failure history data, which is important for the units' availability improvement and for system planning support.

2.1.3 Markov reward models

Basic definition and model description


In the preceding subchapters, it was shown how some important reliability indices can be found by using the Markov technique. Here we consider additional indices such as state frequencies and the mean number of system failures during an operating period. We demonstrate the method for their computation, which is based on the general Markov reward model that was first introduced by Howard (1962) and then essentially extended in (Mine and Osaki 1970) and many other research works.
This model considers a continuous-time Markov chain with a set of states {1,…,K} and transition intensity matrix a = |a_ij|, i,j = 1,…,K. It is assumed that if the process stays in any state i during a time unit, a certain cost r_ii should be paid. It is also assumed that each time the process transits from state i to state j a cost r_ij should be paid. These costs r_ii and r_ij are called rewards (a reward may also be negative when it characterizes losses or penalties). A Markov process with rewards associated with its states or/and transitions is called a Markov process with rewards. For these processes, an additional matrix r = |r_ij|, i,j = 1,…,K, of rewards is determined. If all rewards are zeroes, the process reduces to the ordinary Markov process.
Note that the rewards r_ii and r_ij have different dimensions. For example, if r_ij is measured in cost units, the reward r_ii is measured in cost units per time unit. The value that is of interest is the total expected reward accumulated up to time instant t under specified initial conditions.
Let V_i(t) be the total expected reward accumulated up to time t, given that the initial state of the process at time instant t = 0 is state i. According to Howard, the following system of differential equations must be solved under specified initial conditions in order to find the total expected rewards:

$$\frac{dV_i(t)}{dt} = r_{ii} + \sum_{\substack{j=1 \\ j \neq i}}^{K} a_{ij}r_{ij} + \sum_{j=1}^{K} a_{ij}V_j(t), \quad i = 1, \dots, K. \qquad (2.31)$$
j ≠i

The system (2.31) can be obtained in the following manner. Assume that at time instant t = 0 the process is in state i. During the time increment Δt, the process can remain in this state or transit to some other state j. If it remains in state i during time Δt, the expected reward accumulated during this time is r_ii Δt. Since at the beginning of the time interval [Δt, Δt + t] the process is still in state i, the expected reward during this interval is V_i(t) and the expected reward during the entire interval [0, Δt + t] is V_i(Δt + t) = r_ii Δt + V_i(t). The probability that the process will remain in state i during the time interval Δt equals 1 minus the probability that it will transit to any other state j ≠ i during this interval:

$$\pi_{ii}(0, \Delta t) = 1 - \sum_{\substack{j=1 \\ j \neq i}}^{K} a_{ij}\Delta t = 1 + a_{ii}\Delta t. \qquad (2.32)$$

On the other hand, during time Δt the process can transit to some other state j ≠ i with the probability π_{ij}(0, Δt) = a_ij Δt. In this case the expected reward accumulated during the time interval [0, Δt] is r_ij. At the beginning of the time interval [Δt, Δt + t] the process is in state j. Therefore, the expected reward during this interval is V_j(t) and the expected reward during the interval [0, Δt + t] is V_i(Δt + t) = r_ij + V_j(t).
In order to obtain the total expected reward one must sum the products of the rewards and the corresponding probabilities over all of the states. Thus, for small Δt one has

$$V_i(\Delta t + t) \approx (1 + a_{ii}\Delta t)[r_{ii}\Delta t + V_i(t)] + \sum_{\substack{j=1 \\ j \neq i}}^{K} a_{ij}\Delta t\,[r_{ij} + V_j(t)], \quad i = 1, \dots, K. \qquad (2.33)$$

Neglecting the terms of order greater than Δt one can rewrite the last expression as follows:

$$\frac{V_i(\Delta t + t) - V_i(t)}{\Delta t} = r_{ii} + \sum_{\substack{j=1 \\ j \neq i}}^{K} a_{ij}r_{ij} + \sum_{j=1}^{K} a_{ij}V_j(t), \quad i = 1, \dots, K. \qquad (2.34)$$

Passing to the limit in this equation gives (2.31).


Defining the vector-column of total expected rewards V(t) with compo-
nents V1 (t ),..., V K (t ) and vector-column u with components
K
ui = rii + ∑ aij rij , i=1,…,K (2. 35)
j ≠i
j =1

one obtains the equation (2.31) in matrix notation:


d
V(t)=u+aV(t). (2.36)
dt
Usually the system (2.31) should be solved under initial conditions
Vi (0) = 0 , i=1,…,K.
In order to find the long-run (steady state) solution of (2.31) the follow-
ing system of algebraic equations must be solved
0=u+aV(t), (2.37)
where 0 is vector-column with zero components.
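Equations (2.35) and (2.36) translate directly into a small numerical routine. The following sketch is illustrative (total_expected_rewards is a hypothetical helper name, not from the original text); any intensity matrix a with rows summing to zero and any reward matrix r of the same size can be passed in.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative helper: integrate dV/dt = u + aV from V(0) = 0, with u
# built from the reward matrix r as in Eq. (2.35).
def total_expected_rewards(a, r, T):
    K = len(a)
    off_diag = a * (1.0 - np.eye(K))               # transition intensities only
    u = np.diag(r) + (off_diag * r).sum(axis=1)    # u_i = r_ii + sum_{j!=i} a_ij r_ij
    sol = solve_ivp(lambda t, V: u + a @ V, (0.0, T), np.zeros(K))
    return sol.y[:, -1]                            # V_1(T), ..., V_K(T)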

Example 2.3
As an example of a straightforward application of the method we consider a power generator with the nominal capacity L = 10^5 kW, where the generator only has complete failures with a failure rate of λ = 1 year⁻¹. The unsupplied energy penalty is c_p = 3 $ per kWh. After a generator failure, a repair is performed with a repair rate of μ = 200 year⁻¹. The mean cost of repair is c_r = 50000 $.
The problem is to evaluate the total expected cost C_T associated with the unreliability of the generator during the time interval [0, T].
The state-space diagram for the power generating unit is presented in Fig. 2.9. It has only two states: perfect functioning with the nominal generating capacity (state 2) and complete failure where the unit generating capacity is zero (state 1). The transitions from state 2 to state 1 are associated with failures and have intensity λ. If the generator is in state 1, the penalty cost c_p L should be paid for each time unit (hour). Hence, the reward r_11 associated with state 1 is r_11 = c_p L.

r22
2

µ1,2 λ2,1
r12 r21
1 r
11

Fig. 2.9. Markov reward model for power generating unit

The transitions from state 1 to state 2 are associated with repairs and have intensity μ. The repair cost is c_r; therefore, the reward associated with the transition from state 1 to state 2 is r_12 = c_r.
There are no rewards associated with the transition from state 2 to state 1 or with remaining in state 2: r_22 = r_21 = 0.
The reward matrix takes the form

$$\mathbf{r} = |r_{ij}| = \begin{pmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{pmatrix} = \begin{pmatrix} c_p L & c_r \\ 0 & 0 \end{pmatrix},$$

and the transition intensity matrix takes the form

$$\mathbf{a} = |a_{ij}| = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} -\mu & \mu \\ \lambda & -\lambda \end{pmatrix}.$$

Using (2.31) the following system of differential equations can be written in order to find the expected total rewards V_1(t) and V_2(t):

$$\begin{cases}
\dfrac{dV_1(t)}{dt} = c_p L + \mu c_r - \mu V_1(t) + \mu V_2(t) \\
\dfrac{dV_2(t)}{dt} = \lambda V_1(t) - \lambda V_2(t)
\end{cases}$$

The total expected cost in the time interval [0, t] associated with the unreliability of the generator is equal to the expected reward V_2(t) accumulated up to time t, given that the initial state of the process at time instant t = 0 is state 2.
Using the Laplace-Stieltjes transform under the initial conditions V_1(0) = V_2(0) = 0, we transform the system of differential equations to the following system of linear algebraic equations:

$$\begin{cases}
s v_1(s) = \dfrac{c_p L + \mu c_r}{s} - \mu v_1(s) + \mu v_2(s) \\
s v_2(s) = \lambda v_1(s) - \lambda v_2(s)
\end{cases}$$

where v_k(s) is the Laplace-Stieltjes transform of the function V_k(t).
The solution of this system is

$$v_2(s) = \frac{\lambda c_p L + \lambda\mu c_r}{s^2(s + \lambda + \mu)}.$$

After applying the inverse Laplace-Stieltjes transform we obtain

$$V_2(t) = L^{-1}\{v_2(s)\} = \frac{\lambda c_p L + \lambda\mu c_r}{(\mu + \lambda)^2}\left[e^{-(\lambda + \mu)t} + (\mu + \lambda)t - 1\right].$$

The total expected cost C_T during the operation time T is

$$C_T = V_2(T) = \frac{\lambda c_p L + \lambda\mu c_r}{(\mu + \lambda)^2}\left[e^{-(\lambda + \mu)T} + (\mu + \lambda)T - 1\right].$$
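The inverse transform can be checked symbolically. A minimal sketch using sympy, with a standing for λ + μ (an abbreviation introduced only here):

import sympy as sp

# Illustrative symbolic check, with a = lam + mu:
# L^{-1}{ 1 / (s^2 (s + a)) } = (exp(-a t) + a t - 1) / a^2.
s, t, a = sp.symbols('s t a', positive=True)
V = sp.inverse_laplace_transform(1 / (s**2 * (s + a)), s, t)
print(sp.simplify(V - (sp.exp(-a * t) + a * t - 1) / a**2))   # expected: 0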

For relatively large T the term e^{−(λ+μ)T} can be neglected and the following approximation can be used:

$$C_T \approx \frac{\lambda(c_p L + \mu c_r)}{\mu + \lambda}\,T.$$

Therefore, for large T, the total expected reward is a linear function of time and the coefficient

$$c_{un} = \frac{\lambda(c_p L + \mu c_r)}{\mu + \lambda}$$

defines the annual expected cost associated with generating unit unreliability. For the data given in the example, c_un = 13.14·10^6 $/year.
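Plugging the example data into this expression (a numerical check added here; note the unit conversion of c_p L to cost per year):

# Illustrative check of c_un; c_p * L is converted to $/year
# (3 $/kWh * 1e5 kW * 8760 h/year) so that all terms are per year.
lam, mu = 1.0, 200.0        # 1/year
cp_L = 3.0 * 1e5 * 8760     # $/year of penalty while the unit is down
cr = 50_000.0               # $ per repair
c_un = lam * (cp_L + mu * cr) / (mu + lam)
print(f"{c_un:.3e} $/year") # ~1.31e+07, close to the 13.14e6 quoted above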

Computation of power system reliability measures by using Markov reward models

In its general form the Markov reward model was intended for economic and financial calculations. However, it was shown by Volik et al. (1988) that some important reliability measures can be found by a corresponding determination of the rewards in matrix r. The method was extended by Lisnianski (2007a) to systems with variable demand levels. Here we apply this method for the computation of such important power system reliability measures as the mean number of failures and failure state frequencies, the mean time to failure and the expected energy not supplied to consumers.
In the previous section, the power system was considered to have a constant demand. In practice, this is often not so. A power system can fall into the set of unacceptable states in two ways: either through capacity derating because of failures or through an increase in demand (load).
For example, consider the demand variation that is typical for power systems. Usually demand can be represented by a daily demand curve. This curve is cyclic in nature with a maximum level (peak) during the day and a minimum level at night (Endrenyi 1979), (Billinton and Allan 1996). In the simplest and most frequently used model, the cyclic demand variation can be approximated by a two-level demand curve as shown in Fig. 2.10A.
In this model, the demand is represented as a continuous-time Markov chain with two states: w = {w_1, w_2} (Fig. 2.10B). When the cycle time T_c and the mean duration of the peak t_p are known (usually T_c = 24 hours), the transition intensities of the model can be obtained as

$$\lambda_p = \frac{1}{T_c - t_p}, \quad \lambda_l = \frac{1}{t_p}. \qquad (2.38)$$

In a further extension of the variable demand model, the demand process can be approximated by defining a set of discrete values {w_1, w_2, …, w_m} representing different possible demand levels and determining the transition intensities between each pair of demand levels (usually derived from the demand statistics). A realization of the stochastic process of the demand for a specified period and the corresponding state-space diagram are shown in Fig. 2.11. b_ij is the transition intensity from demand level w_i to demand level w_j.

Fig. 2.10. Two-level demand model (A. Approximation of actual demand curve; B. State-space diagram)

Fig. 2.11. Discrete variable demand (A. Realization of stochastic demand process; B. State-space diagram)

So, for the general case we assume that the demand W(t) is also a random process that can take discrete values from the set w = {w_1, …, w_M}. The desired relation between the power system capacity and the demand at any time instant t can be expressed by the acceptability function Φ(G(t), W(t)). The acceptable system states correspond to Φ(G(t), W(t)) ≥ 0 and the unacceptable states correspond to Φ(G(t), W(t)) < 0. The last inequality defines the system failure criterion. Usually in power systems the system generating capacity should be equal to or exceed the demand. Therefore, in such cases the acceptability function takes the following form:

$$\Phi(G(t), W(t)) = G(t) - W(t), \qquad (2.39)$$

and the criterion of state acceptability can be expressed as

$$\Phi(G(t), W(t)) = G(t) - W(t) \ge 0. \qquad (2.40)$$

Below we present a general method, which has proved to be very useful for the computation of power system reliability measures when the output capacity and the demand (load) are independent discrete-state continuous-time Markov processes.

Combined capacity-demand model

The power system output generating capacity is represented by a stochastic process G(t) that is described as a continuous-time Markov chain Ch1 with K different possible states g_1, …, g_K and the corresponding transition intensity matrix a = |a_ij|, i,j = 1, 2, …, K. Therefore, Ch1 is a mathematical model for the generating capacity stochastic process G(t). It is graphically represented in Fig. 2.12, where the system output capacities for each state are presented inside the ellipses and the state number is presented near the corresponding ellipse. Transition intensities are presented near the arcs connecting the corresponding states. The state with the largest performance g_K is the best state and all the states are ordered according to their capacity, so that g_K > g_{K-1} > … > g_1.
The demand process W(t) is also modeled as a continuous-time Markov chain Ch2 with m different possible states w_1, …, w_m and corresponding constant transition intensities with the matrix b = |b_ij|, i,j = 1, 2, …, m. Ch2 is a mathematical model for the demand stochastic process W(t) and it is graphically presented in Fig. 2.13. The demand levels for each state are presented inside the ellipses. As in the previous case, the state number is presented near the corresponding ellipse and the transition intensities are presented near the corresponding arcs. The state m is the state with the largest demand and all states are ordered according to their demand levels, so that w_m > w_{m-1} > … > w_1.
The capacity and demand models can be combined based on the independence of events in these two models. The probabilities of transitions in each model are not affected by the events that occur in the other one. The state-space diagram for the combined m-state demand model and K-state output capacity model is shown in Fig. 2.14. Each state in the diagram is labeled by two indices indicating the demand level w ∈ {w_1, …, w_m} and the element performance rate g ∈ {g_1, g_2, …, g_K}.
Fig. 2.12. Markov model for power system output performance

Fig. 2.13. Markov model for power system demand

These indices for each state are presented in the lower part of the corresponding ellipse. The combined model has mK states. Each state corresponds to a unique combination of demand level w_i and element performance g_j and is numbered according to the following rule:

$$z = (i - 1)K + j, \qquad (2.41)$$

where z is the state number in the combined capacity-demand model, z = 1, …, mK; i is the number of the demand level, i = 1, …, m; and j is the number of the MSS output performance level, j = 1, …, K.
In order to designate that state z in the combined performance-demand model corresponds to demand level w_i and performance g_j we use the form z ~ {w_i, g_j}.
In Fig. 2.14 the number of each state is shown in the upper part of the corresponding ellipse.

Fig. 2.14. Combined capacity–demand model

In addition to transitions between states with different capacities, there are transitions between states with the same capacity but different demand levels. All intensities of horizontal transitions are defined by the transition intensities b_ij, i,j = 1, …, m, of the Markov demand model Ch2, and all intensities of vertical transitions are defined by the transition intensities a_ij, i,j = 1, …, K, of the performance model Ch1. All other (diagonal) transitions are forbidden. We designate the transition intensity matrix for the combined capacity-demand model as c = |c_ij|, where i,j = 1, 2, …, mK.
Thus, the algorithm for building the combined capacity-demand model based on the separate capacity and demand models Ch1 and Ch2 can be presented by the following steps, implemented in the sketch after this list.
1. The state-space diagram of the combined capacity-demand model is shown in Fig. 2.14, where the nodes represent power system states and the arcs represent the corresponding transitions.
2. The graph consists of mK nodes that should be ordered in K rows and m columns.
3. Each state (node) should be numbered according to the rule (2.41).
4. All intensities c_{z1,z2} of horizontal transitions from state z1 (corresponding to demand w_i and capacity g_j) to state z2 (corresponding to demand w_s and the same capacity g_j according to the rule (2.41)) are defined by the demand transition intensity matrix b:

$$c_{z_1,z_2} = b_{i,s}, \qquad (2.42)$$

where z_1 ~ {w_i, g_j}, z_2 ~ {w_s, g_j}, i, s = 1, …, m, j = 1, …, K.
5. All intensities of vertical transitions from state z1 (corresponding to demand w_i and performance g_j) to state z3 (corresponding to the same demand w_i and capacity g_t according to the rule (2.41)) are defined by the capacity transition intensity matrix a:

$$c_{z_1,z_3} = a_{j,t}, \qquad (2.43)$$

where z_1 ~ {w_i, g_j}, z_3 ~ {w_i, g_t}, i = 1, …, m, j, t = 1, …, K.
6. All diagonal transitions are forbidden, so the corresponding transition intensities in matrix c are zeroed.
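Under the numbering rule (2.41), these steps can be implemented compactly with Kronecker products. A sketch (illustrative; combined_matrix is a hypothetical helper name, not from the original text):

import numpy as np

# Illustrative helper: with z = (i - 1)K + j, the combined matrix is
# assembled from two Kronecker products and the diagonal is refilled
# so that each row sums to zero.
def combined_matrix(a, b):
    K, m = len(a), len(b)
    off_a = a * (1.0 - np.eye(K))          # vertical (capacity) transitions
    off_b = b * (1.0 - np.eye(m))          # horizontal (demand) transitions
    c = np.kron(np.eye(m), off_a) + np.kron(off_b, np.eye(K))
    np.fill_diagonal(c, -c.sum(axis=1))
    return c

np.kron(np.eye(m), off_a) places the capacity transitions inside each demand block, while np.kron(off_b, np.eye(K)) connects states that share the capacity index; for Example 2.4 below this construction reproduces the 9×9 matrix c.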

Rewards determination for combined capacity-demand model

In the previous section we built the combined capacity-demand model and therefore defined its transition intensity matrix c based on the matrices a and b for the capacity and demand processes. In order to find reliability measures for a power system, a specific reward matrix r should be defined for each measure. First, all states in the model can be subdivided into two different sets, the set of acceptable states and the set of unacceptable states, according to the value of the acceptability function. An example of such a model is presented in Fig. 2.15. The transition intensity for each transition is determined by the matrix c. For each state i the reward r_ii per time unit is determined, and for each transition from state i to state j the reward r_ij associated with this transition is determined. Rewards corresponding to states are written inside the state circles. Rewards corresponding to transitions are written after the corresponding transition intensities.
Fig. 2.15. Combined capacity–demand model with rewards

The power system average availability Ā(T) is defined as the mean fraction of time when the system resides in the set of acceptable states during the time interval [0, T]:

$$\bar{A}(T) = \frac{1}{T}\int_0^T A(t)\,dt. \qquad (2.44)$$

In order to prevent misunderstanding, it should be noticed that there is another measure that is closely related to the system availability: the power system instantaneous (point) availability A(t). It is the probability that the system at instant t > 0 is in one of the acceptable states:

$$A(t) = \Pr\{\Phi(G(t), W(t)) \ge 0\}. \qquad (2.45)$$

As was shown in the previous section, A(t) can be found by solving the differential equations (2.14) and summing the probabilities corresponding to all acceptable states.

To assess the average availability Ā(T) of a power system the Markov reward model can be used. The rewards in matrix r for the combined capacity-demand model should be determined in the following manner:
• The rewards associated with all acceptable states should be defined as 1.
• The rewards associated with all unacceptable states should be zeroed, as well as all the rewards associated with the transitions.
For the example in Fig. 2.15 this means that r_22 = r_33 = r_55 = r_66 = 1. All other rewards are zeroes.
The mean reward V_i(T) accumulated during the interval [0, T] defines the fraction of time that the power system will be in the set of acceptable states in the case where state i is the initial state. This reward can be found as a solution of the general system (2.31), which for the combined capacity-demand model takes the form:

$$\frac{dV_i(t)}{dt} = r_{ii} + \sum_{\substack{j=1 \\ j \neq i}}^{mK} c_{ij}r_{ij} + \sum_{j=1}^{mK} c_{ij}V_j(t), \quad i = 1, \dots, mK. \qquad (2.46)$$

After solving (2.46) and finding V_i(t), the power system average availability can be obtained for each initial state i = 1, …, mK:

$$\bar{A}_i(T) = \frac{V_i(T)}{T}. \qquad (2.47)$$
Usually the state K (with the greatest capacity level and minimum demand) is determined as the initial state.
Mean number N_fi(T) of power system failures during the time interval [0, T], if state i is the initial state. This measure can be treated as the mean number of power system entrances into the set of unacceptable states during the time interval [0, T]. For its computation, the rewards associated with each transition from the set of acceptable states to the set of unacceptable states should be defined as 1. All other rewards should be zeroed.
For the example in Fig. 2.15 it means that r_31 = r_21 = r_64 = r_54 = 1. All other rewards are zeroes.
In this case the mean accumulated reward V_i(T), obtained by solving (2.31), provides the mean number of entrances into the unacceptable area during the time interval [0, T]:

$$N_{fi}(T) = V_i(T). \qquad (2.48)$$

When the mean number of system failures is computed, the corresponding frequency of failures, or frequency of entrances into the set of unacceptable states, can be found as

$$f_{fi}(T) = \frac{N_{fi}(T)}{T}. \qquad (2.49)$$

Expected Energy Not Supplied (EENS) to consumers can be defined as the mean capacity deficiency accumulated within the interval [0, T]. The rewards for any state z = (i − 1)K + j in the combined model where w_i − g_j > 0 should be defined as r_zz = w_i − g_j. All other rewards should be zeroed. Then the mean reward V_i(T) accumulated during the time interval [0, T], if state i is the initial state, defines the mean accumulated capacity deficiency, or EENS:

$$EENS_i = V_i(T) = E\left\{\int_0^T \max(W(t) - G(t),\,0)\,dt\right\}. \qquad (2.50)$$
Mean Time To Failure (MTTF) is the mean time up to the instant when the system enters the subset of unacceptable states for the first time. For its computation the combined performance-demand model should be transformed: all transitions that return the power system from the unacceptable states should be forbidden, as in this case all unacceptable states should be treated as absorbing states. For the example in Fig. 2.15 it means that c_13 = c_46 = 0.
In order to assess MTTF for a power system, the rewards in matrix r for the transformed performance-demand model should be determined as follows:
• The rewards associated with all acceptable states should be defined as 1.
• The rewards associated with the unacceptable (absorbing) states should be zeroed, as well as all rewards associated with transitions.
For the example in Fig. 2.15 it means that r_22 = r_33 = r_55 = r_66 = 1. All other rewards are zeroes.
In this case, the mean accumulated reward V_i(t) defines the mean time accumulated up to the first entrance into the subset of unacceptable states (MTTF), if state i is the initial state.
Probability of power system failure during the time interval [0, T]. The combined capacity-demand model should be transformed as in the previous case: all unacceptable states should be treated as absorbing states and, therefore, all transitions that return the system from the unacceptable states should be forbidden. As in the previous case, for the example in Fig. 2.15, c_13 = c_46 = 0.
Rewards associated with all transitions to the absorbing states should be defined as 1. All other rewards should be zeroed. For the example in Fig. 2.15 it means that r_31 = r_21 = r_64 = r_54 = 1. All other rewards are zeroes.
The mean accumulated reward V_i(T) in this case defines the probability of system failure during the time interval [0, T], if state i is the initial state. Therefore, the power system reliability function can be obtained as

$$R_i(T) = 1 - V_i(T), \quad i = 1, \dots, K. \qquad (2.51)$$
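The reward-based recipes above can be wrapped into one routine. The following sketch is illustrative (hypothetical helper names, not from the original text); the reward matrices are set exactly as described above for the average availability and for the mean number of failures:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative helpers: given the combined matrix c and a boolean vector
# acc marking the acceptable states, build the reward matrices and
# integrate Eq. (2.46) over [0, T].
def reward_ode(c, r, T):
    u = np.diag(r) + (c * (1.0 - np.eye(len(c))) * r).sum(axis=1)
    sol = solve_ivp(lambda t, V: u + c @ V, (0.0, T), np.zeros(len(c)))
    return sol.y[:, -1]

def availability_and_failures(c, acc, T):
    r_avail = np.diag(acc.astype(float))        # r_ii = 1 on acceptable states
    r_fail = (np.outer(acc, ~acc) & (c > 0)).astype(float)  # acceptable -> unacceptable
    V_avail = reward_ode(c, r_avail, T)
    V_fail = reward_ode(c, r_fail, T)
    return V_avail / T, V_fail                  # average availability, N_f(T)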

Example 2.4
Consider the reliability evaluation of a power system whose output generating capacity is represented by a continuous-time Markov chain with 3 states. The corresponding capacity levels for states 1, 2, 3 are g_1 = 0, g_2 = 70, g_3 = 100, respectively, and the transition intensity matrix is as follows:

$$\mathbf{a} = |a_{ij}| = \begin{pmatrix} -500 & 0 & 500 \\ 0 & -1000 & 1000 \\ 1 & 10 & -11 \end{pmatrix}.$$

All intensities a_ij are expressed in 1/year.
The corresponding capacity model Ch1 is graphically shown in Fig. 2.17A.
The demand for the power system is also represented by a continuous-time Markov chain with three possible levels w_1 = 0, w_2 = 60, w_3 = 90. This demand is graphically shown in Fig. 2.16.

Fig. 2.16. Daily demand for the power system

Daily peaks w_2 and w_3 occur twice a week and five times a week, respectively, and the mean duration of the daily peak is T_p = 8 hours. The mean duration of the low demand level w_1 = 0 is defined as T_L = 24 − 8 = 16 hours.
According to the approach presented in (Endrenyi 1979), which is justified for a power system, the peak duration and the low level duration are assumed to be exponentially distributed random values.
The Markov demand model Ch2 is shown in Fig. 2.17B. States 1, 2 and 3 represent the corresponding demand levels w_1, w_2 and w_3. The transition intensities are as follows:

$$b_{21} = b_{31} = \frac{1}{T_p} = \frac{1}{8}\ \text{hours}^{-1} = 1110\ \text{years}^{-1},$$

$$b_{12} = \frac{2}{7}\cdot\frac{1}{T_L} = \frac{2}{7}\cdot\frac{1}{16} = 0.0179\ \text{hours}^{-1} = 156\ \text{years}^{-1},$$

$$b_{13} = \frac{5}{7}\cdot\frac{1}{T_L} = \frac{5}{7}\cdot\frac{1}{16} = 0.0446\ \text{hours}^{-1} = 391\ \text{years}^{-1}.$$

There are no transitions between states 2 and 3; therefore b_23 = b_32 = 0.
Taking into account that the sum of the elements in each row of the matrix is zero, we can find the diagonal elements of the matrix. Therefore, the transition intensity matrix b for the demand takes the form:

$$\mathbf{b} = |b_{ij}| = \begin{pmatrix} -547 & 156 & 391 \\ 1110 & -1110 & 0 \\ 1110 & 0 & -1110 \end{pmatrix}.$$

All intensities b_ij are also expressed in 1/year.
All intensities bij are also represented in 1/year.
The acceptability function is given as $\Phi(G(t), W(t)) = G(t) - W(t)$. Therefore, a failure is treated as an entrance into a state where the acceptability function is negative, i.e. where $G(t) < W(t)$.
By using the suggested method we find the mean number Nf(T) of sys-
tem failures during the time interval [0,T], if the state with maximal gener-
ating capacity and minimal demand level is given as the initial state.

[Figure: (A) three-state capacity model with levels g1, g2, g3 and transitions a1,3, a3,1, a2,3, a3,2; (B) three-state demand model with levels w1, w2, w3 and transitions b1,3, b3,1, b2,3, b3,2]

Fig. 2.17. Output capacity model (A) and demand model (B)

First, the combined capacity-demand model should be built according to the algorithm presented above.
The model consists of mK=3*3=9 states (nodes) that should be ordered
in K=3 rows and m=3 columns.
Each state should be numbered according to the rule (2.41).
All intensities of horizontal transitions from state $z_1 \sim \{w_i, g_j\}$ to state $z_2 \sim \{w_s, g_j\}$, $i, s = 1, \ldots, 3$, $j = 1, \ldots, 3$, are defined by the demand transition intensities matrix b:

$$c_{z_1 z_2} = b_{i,s}.$$

All intensities of vertical transitions from state $z_1 \sim \{w_i, g_j\}$ to state $z_3 \sim \{w_i, g_t\}$, $i = 1, \ldots, 3$, $j, t = 1, \ldots, 3$, are defined by the capacity transition intensities matrix a:

$$c_{z_1 z_3} = a_{j,t}.$$

All diagonal transitions are forbidden; therefore, the corresponding transition intensities in matrix c are zeroed.
The state-space diagram for the combined capacity-demand Markov
model for this example is shown in Fig. 2.18.

[Figure: nine-state diagram of the combined capacity-demand model, with demand transitions b and capacity transitions a; states 4, 7 and 8 are marked as unacceptable]

Fig. 2.18. Combined capacity-demand model

The corresponding transition intensity matrix c for the combined capacity-demand model can be written as follows:

$$c = \left\| c_{ij} \right\| = \begin{pmatrix}
x_1 & 0 & a_{13} & b_{12} & 0 & 0 & b_{13} & 0 & 0 \\
0 & x_2 & a_{23} & 0 & b_{12} & 0 & 0 & b_{13} & 0 \\
a_{31} & a_{32} & x_3 & 0 & 0 & b_{12} & 0 & 0 & b_{13} \\
b_{21} & 0 & 0 & x_4 & 0 & a_{13} & 0 & 0 & 0 \\
0 & b_{21} & 0 & 0 & x_5 & a_{23} & 0 & 0 & 0 \\
0 & 0 & b_{21} & a_{31} & a_{32} & x_6 & 0 & 0 & 0 \\
b_{31} & 0 & 0 & 0 & 0 & 0 & x_7 & 0 & a_{13} \\
0 & b_{31} & 0 & 0 & 0 & 0 & 0 & x_8 & a_{23} \\
0 & 0 & b_{31} & 0 & 0 & 0 & a_{31} & a_{32} & x_9
\end{pmatrix}$$

where $x_1 = -a_{13} - b_{12} - b_{13}$, $x_2 = -a_{23} - b_{12} - b_{13}$, $x_3 = -a_{31} - a_{32} - b_{12} - b_{13}$, $x_4 = -a_{13} - b_{21}$, $x_5 = -a_{23} - b_{21}$, $x_6 = -a_{31} - a_{32} - b_{21}$, $x_7 = -a_{13} - b_{31}$, $x_8 = -a_{23} - b_{31}$, $x_9 = -a_{31} - a_{32} - b_{31}$.
The state with the maximal performance g3=100 and the minimal de-
mand w1=0 (the state 3) is given as the initial state. In states 2, 5, 8 the
MSS performance is 70, in states 3, 6, 9 it is 100, and in states 1, 4, 7 it is
0. In states 4, 7 and 8 the MSS performance is lower than the demand.
These states are unacceptable and have a performance deficiency:

$D_4 = w_2 - g_1 = 60$, $D_7 = w_3 - g_1 = 90$ and $D_8 = w_3 - g_2 = 20$ MW respectively. States 1, 2, 3, 5, 6, 9 constitute the set of acceptable states.
In order to find the mean number of failures the reward matrix should be
defined according to the suggested method. Each reward associated with
transition from the set of acceptable states to the set of unacceptable states
should be defined as 1. All other rewards should be zeroed. Therefore, in the reward matrix r14 = r17 = r28 = r64 = r97 = r98 = 1 and all other rewards are zeroes (the transitions 1-4, 1-7, 2-8, 6-4, 9-7 and 9-8 are the only ones that lead from acceptable to unacceptable states in matrix c). So, the reward matrix r is obtained:

$$r = \left\| r_{ij} \right\| = \begin{pmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0
\end{pmatrix}.$$
By solving the system of differential equations (2.46) under the initial conditions $V_i(0) = 0$, $i = 1, \ldots, 9$, all expected rewards $V_i(t)$, $i = 1, \ldots, 9$, can be found as functions of time t.
State 3, in which the system has the maximum capacity level and the minimum demand, is given as the initial state. Then, according to expression (2.48), the value V3(T) is treated as the mean number of system entrances into the area of unacceptable states, or the mean number of power system failures during the time interval [0, T]. The function Nf3(t) = V3(t) is graphically presented in Fig. 2.19, where Nf3(t) is the mean number of system failures in the case when state 3 is the initial state.
The function Nf1(t) = V1(t) characterizes the mean number of system failures in the case where state 1 is given as the initial state. It is also presented in this figure. As shown, Nf3(t) < Nf1(t) because state 1 is "closer" to the set of unacceptable states: it has direct transitions into the unacceptable area, while state 3 does not. Therefore, at the beginning of the process the system's entrance into the set of unacceptable states is more likely from state 1 than from state 3. Fig. 2.19 graphically represents the number of power system failures for a short period of only 15 days. After this short period the function Nf3(t) becomes practically linear.

The reliability evaluation is usually performed over an extended period (years). For example, for one year we obtain $N_{f3}(T = 1\ \text{year}) \approx 3.5$. According to (2.49) the frequency of the power system failures can be obtained:

$$f_{f3} = \frac{1}{N_{f3}} = 0.286\ \text{year}^{-1}.$$
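Equation (2.46) appears earlier in the chapter; the sketch below assumes it has the standard Markov reward form $dV_i/dt = \sum_j c_{ij} r_{ij} + \sum_j c_{ij} V_j(t)$ with $V_i(0) = 0$, which is an assumption of this illustration. The function name, solver and tolerances are illustrative, and the matrix c is the one built in the previous sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

def mean_accumulated_reward(c, r, T):
    # u_i collects the transition rewards; diagonal rewards are zero
    # in this example, so c*r picks up only rewarded transitions
    u = (c * r).sum(axis=1)
    sol = solve_ivp(lambda t, V: u + c @ V, (0.0, T),
                    np.zeros(c.shape[0]),
                    t_eval=np.linspace(0.0, T, 200), rtol=1e-8)
    return sol.t, sol.y

# Reward matrix with ones at the 0-based positions of
# r14, r17, r28, r64, r97, r98:
r = np.zeros((9, 9))
for (i, j) in [(0, 3), (0, 6), (1, 7), (5, 3), (8, 6), (8, 7)]:
    r[i, j] = 1.0

# t, V = mean_accumulated_reward(c, r, T=15/365)  # 15 days, in years
# V[2] is then N_f3(t) (state 3) and V[0] is N_f1(t)
```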

[Figure: curves Nf1(t) and Nf3(t), mean number of failures versus time in days; Nf1(t) lies above Nf3(t)]

Fig. 2.19. Mean number of the generator entrances to the set of unacceptable states

2.1.4 Semi-Markov models

As was mentioned above, a discrete-state, continuous-time stochastic process can only be represented as a continuous-time Markov chain when the transition time between any states is distributed exponentially. This fact seriously restricts the application of the Markov chain model to real-world problems. One of the ways to investigate processes with arbitrarily distributed sojourn times is to use a semi-Markov process model. The main advantage of the semi-Markov model is that it allows non-exponential distributions for transitions between states and generalizes several kinds of stochastic processes. Since in many real cases the lifetimes and repair times are not exponential, this is very important.
The semi-Markov processes were introduced almost simultaneously by
Levy (1954) and Smith (1955). At the same time, Takacs (1955) intro-
duced essentially the same type of processes and applied them to some

problems in counter theory. The foundations of the theory of semi-Markov processes can be found in (Cinlar 1975), (Gihman and Skorohod 1974), (Korolyuk and Swishchuk 1995), (Silverstov 1980). For readers interested in the field of semi-Markov process applications to reliability theory and performability analysis, the following books may be especially recommended: (Limnios and Oprisan 2000), (Kovalenko et al. 1997), (Lisnianski and Levitin 2003), (Sahner et al. 1996).
The general theory of semi-Markov processes is quite complex. Here we study some aspects of reliability evaluation based on using semi-Markov processes that do not involve very complex computations. In many real-world problems, relatively simple computational procedures allow an engineer to assess reliability for a power system with arbitrary transition times without Monte Carlo simulation. This especially relates to power system steady-state behavior.

Definition of semi-Markov process


In order to define a semi-Markov process, consider a system that at any
time instant t ≥ 0 can be in one of possible states g1, g2, …, gK. The system
behavior is defined by the discrete state continuous time stochastic per-
formance process G(t) ∈ {g1, g2, …, gK}. We assume that the initial state i
of the system and one-step transition probabilities are given:
$$G(0) = g_i, \quad i \in \{1, \ldots, K\},$$

$$\pi_{jk} = \Pr\{G(t_m) = g_k \mid G(t_{m-1}) = g_j\}, \quad j, k \in \{1, \ldots, K\}. \qquad (2.52)$$

Here π jk is the probability that the system will transit from state j with
performance rate g j to state k with performance rate g k . Probabilities
π jk , j,k ∈ {1,..., K } define the one-step transition probability matrix π = π jk
for the discrete time Markov chain G (t m ) , where transitions from one state
to another may happen only at discrete time moments t1, t2, …, tm-1, tm, … .
To each $\pi_{jk} \neq 0$ there corresponds a random variable $T^*_{jk}$ with the cumulative distribution function

$$F^*_{jk}(t) = \Pr\{T^*_{jk} \leq t\} \qquad (2.53)$$

and probability density function $f^*_{jk}(t)$. This random variable is called the conditional sojourn time in state j and characterizes the system sojourn time in state j under the condition that the system transits from state j to state k.

The graphical interpretation of a possible realization of the considered process is shown in Fig. 2.20. At the initial time instant G(0) = $g_i$. The process transits from the initial state i to state j (with performance rate $g_j$) with probability $\pi_{ij}$. Therefore, if the next state is state j, the process remains in state i during the random time $T^*_{ij}$ with cdf $F^*_{ij}(t)$. When the process transits to state j, the probability of the transition from this state to any state k is $\pi_{jk}$. If the system transits from state j to state k, it remains in state j during the random time $T^*_{jk}$ with cdf $F^*_{jk}(t)$ up to the transition to state k.

[Figure: sample path of G(t) showing sojourn times T*ij, T*jk, T*kl and transition probabilities πij, πjk, πkl between performance levels gi, gj, gk, gl]

Fig. 2.20. Semi-Markov performance process

This process can be continued over an arbitrary period T. Each time the
next state and the corresponding sojourn time in the current state must be
chosen independently of the previous history of the process. The described
performance stochastic process G(t) is called a semi-Markov process.
In order to define the semi-Markov process one has to define the initial
state of the process and the matrices: π = π jk and F*(t)= F *ij (t ) for
i, j ∈ {1,..., K } . The discrete time Markov chain G (t m ) with one-step transi-
tion probabilities π jk , j,k ∈ {1,..., K } is called an imbedded Markov chain.
Note that the process in which the arbitrary distributed times between
transitions are ignored and only time instants of transitions are of interest
is a homogeneous discrete time Markov chain. However, in a general case,
if one takes into account the sojourn times in different states, the process
does not have Markov properties. (It remains the Markov process only if
all the sojourn times are distributed exponentially). Therefore, the process
can be considered a Markov process only at time instants of transitions.
This explains why the process was named semi-Markov.

The most general definition of the semi-Markov process is based on the kernel matrix Q(t). Each element $Q_{ij}(t)$ of this matrix determines the probability that a transition from state i to state j occurs during the time interval [0, t]. Using the kernel matrix, the one-step transition probabilities for the embedded Markov chain can be obtained as

$$\pi_{ij} = \lim_{t \to \infty} Q_{ij}(t) \qquad (2.54)$$

and the cdf $F^*_{ij}(t)$ of the conditional sojourn time in state i can be obtained as

$$F^*_{ij}(t) = \frac{1}{\pi_{ij}} Q_{ij}(t). \qquad (2.55)$$

Based on the kernel matrix, the cdf $F_i(t)$ of the unconditional sojourn time $T_i$ in any state i can be defined:

$$F_i(t) = \sum_{j=1}^{K} Q_{ij}(t) = \sum_{j=1}^{K} \pi_{ij} F^*_{ij}(t). \qquad (2.56)$$

Hence, for the probability density function (pdf) of the unconditional sojourn time in state i with performance rate $g_i$, we can write

$$f_i(t) = \frac{d}{dt} F_i(t) = \sum_{j=1}^{K} \pi_{ij} f^*_{ij}(t). \qquad (2.57)$$

Based on (2.57), the mean unconditional sojourn time in state i can be obtained as

$$\bar{T}_i = \int_0^{\infty} t f_i(t) dt = \sum_{j=1}^{K} \pi_{ij} \bar{T}^*_{ij}, \qquad (2.58)$$

where $\bar{T}^*_{ij}$ is the mean conditional sojourn time in state i given that the system transits from state i to state j.
The kernel matrix Q(t) and the initial state completely define the sto-
chastic behavior of semi-Markov process.
In practice, when MSS reliability is studied, in order to find the kernel
matrix for a semi-Markov process, one can use the following considera-
tions (Lisnianski and Yeager 2000). Transitions between different states
are usually executed as consequences of such events as failures, repairs,
inspections, etc. For every type of event, the cdf of time between them is
known. The transition is realized according to the event that occurs first in the competition among the events.

[Figure: initial state 0 with three competing transitions to states 1, 2 and 3, governed by F0,1(t), F0,2(t) and F0,3(t)]

Fig. 2.21. State-space diagram of simplest semi-Markov system

In Fig. 2.21, one can see a state space diagram for the simplest semi-
Markov system with three possible transitions from the initial state 0. If the
event of type 1 is the first one, the system transits to state 1. The time be-
tween events of type 1 is random variable T0,1 distributed according to cdf
F0,1(t). If the event of type 2 occurs earlier than other events, the system
transits from the state 0 to state 2. The random variable T0,2 that defines the
time between events of type 2 is distributed according to cdf F0,2(t). At last,
if the event of type 3 occurs first, the system transits from state 0 to state 3.
The time between events of type 3 is random variable T0,3 distributed ac-
cording to cdf F0,3(t). The probability Q01 (t ) that the system transits from
state 0 to state 1 up to time t (the initial time t=0) may be determined as
the probability that under condition T0,1 ≤ t , the random variable T0,1 is less
than variables T0,2 and T0,3. Hence, we have
Q01 (t ) = Pr{(T0,1 ≤ t ) & (T0,2 > t ) & (T0,3 > t )} =
t ∞ ∞ t
= ∫ dF0,1 (u ) ∫ dF0,2 (u ) ∫ dF0,3 (u ) = ∫ [1 − F0,2 (u )][1 − F0,3 (u )]dF0,1 (u ) . (2.59)
0 t t 0

In the same way we obtain

$$Q_{02}(t) = \int_0^t [1 - F_{0,1}(u)][1 - F_{0,3}(u)]\, dF_{0,2}(u), \qquad (2.60)$$

$$Q_{03}(t) = \int_0^t [1 - F_{0,1}(u)][1 - F_{0,2}(u)]\, dF_{0,3}(u). \qquad (2.61)$$

For the semi-Markov process that describes the system with the state space diagram presented in Fig. 2.21, we have the following kernel matrix:

$$Q(t) = \begin{pmatrix} 0 & Q_{01}(t) & Q_{02}(t) & Q_{03}(t) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \qquad (2.62)$$

Expressions (2.59)-(2.61) can be easily generalized to an arbitrary number of possible transitions from the initial state 0.
In order to demonstrate the technique of kernel matrix computation we consider the following example. Assume that, for the simplest system with the state space diagram shown in Fig. 2.21, two random variables $T_{0,1}$ and $T_{0,2}$ are exponentially distributed with cdf $F_{0,1}(t) = 1 - e^{-\lambda_{0,1} t}$ and $F_{0,2}(t) = 1 - e^{-\lambda_{0,2} t}$ respectively, and the third random variable $T_{0,3}$ has the following cdf:

$$F_{0,3}(t) = \begin{cases} 0, & \text{if } t < T_c, \\ 1, & \text{if } t \geq T_c \end{cases} \qquad (2.63)$$

(which corresponds to the arrival of events with constant period $T_c$).
Using (2.59)-(2.61) we obtain

$$Q_{01}(t) = \begin{cases} \dfrac{\lambda_{0,1}}{\lambda_{0,1} + \lambda_{0,2}} \left[1 - e^{-(\lambda_{0,1} + \lambda_{0,2})t}\right], & \text{if } t < T_c, \\[2mm] \dfrac{\lambda_{0,1}}{\lambda_{0,1} + \lambda_{0,2}} \left[1 - e^{-(\lambda_{0,1} + \lambda_{0,2})T_c}\right], & \text{if } t \geq T_c, \end{cases} \qquad (2.64)$$

$$Q_{02}(t) = \begin{cases} \dfrac{\lambda_{0,2}}{\lambda_{0,1} + \lambda_{0,2}} \left[1 - e^{-(\lambda_{0,1} + \lambda_{0,2})t}\right], & \text{if } t < T_c, \\[2mm] \dfrac{\lambda_{0,2}}{\lambda_{0,1} + \lambda_{0,2}} \left[1 - e^{-(\lambda_{0,1} + \lambda_{0,2})T_c}\right], & \text{if } t \geq T_c, \end{cases} \qquad (2.65)$$

$$Q_{03}(t) = \begin{cases} 0, & \text{if } t < T_c, \\ e^{-(\lambda_{0,1} + \lambda_{0,2})T_c}, & \text{if } t \geq T_c. \end{cases} \qquad (2.66)$$

According to (2.56), the unconditional sojourn time $T_0$ in state 0 is distributed as follows:

$$F_0(t) = \begin{cases} 1 - e^{-(\lambda_{0,1} + \lambda_{0,2})t}, & \text{if } t < T_c, \\ 1, & \text{if } t \geq T_c. \end{cases} \qquad (2.67)$$

The one-step transition probabilities for the embedded Markov chain are defined according to (2.54):

$$\pi_{01} = \frac{\lambda_{0,1}}{\lambda_{0,1} + \lambda_{0,2}} \left[1 - e^{-(\lambda_{0,1} + \lambda_{0,2})T_c}\right], \qquad (2.68)$$

$$\pi_{02} = \frac{\lambda_{0,2}}{\lambda_{0,1} + \lambda_{0,2}} \left[1 - e^{-(\lambda_{0,1} + \lambda_{0,2})T_c}\right], \qquad (2.69)$$

$$\pi_{03} = e^{-(\lambda_{0,1} + \lambda_{0,2})T_c}. \qquad (2.70)$$

According to (2.55), we obtain the cdf of the conditional sojourn times:

$$F^*_{01}(t) = \begin{cases} \dfrac{1 - e^{-(\lambda_{0,1} + \lambda_{0,2})t}}{1 - e^{-(\lambda_{0,1} + \lambda_{0,2})T_c}}, & \text{if } t < T_c, \\ 1, & \text{if } t \geq T_c, \end{cases} \qquad (2.71)$$

$$F^*_{02}(t) = \begin{cases} \dfrac{1 - e^{-(\lambda_{0,1} + \lambda_{0,2})t}}{1 - e^{-(\lambda_{0,1} + \lambda_{0,2})T_c}}, & \text{if } t < T_c, \\ 1, & \text{if } t \geq T_c, \end{cases} \qquad (2.72)$$

$$F^*_{03}(t) = \begin{cases} 0, & \text{if } t < T_c, \\ 1, & \text{if } t \geq T_c. \end{cases} \qquad (2.73)$$
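The closed forms (2.68)-(2.70) can be cross-checked by integrating (2.59) numerically. A minimal sketch, with purely illustrative values of λ0,1, λ0,2 and Tc:

```python
import numpy as np
from scipy.integrate import quad

lam1, lam2, Tc = 1e-3, 5e-4, 1000.0   # hypothetical values (hours^-1, hours)

# Direct integration of (2.59): the step cdf F_{0,3} cuts the integral
# off at Tc, and dF_{0,1}(u) = lam1 * exp(-lam1*u) du
pi01_num, _ = quad(lambda u: np.exp(-lam2 * u) * lam1 * np.exp(-lam1 * u),
                   0.0, Tc)
pi01 = lam1 / (lam1 + lam2) * (1.0 - np.exp(-(lam1 + lam2) * Tc))  # (2.68)
pi02 = lam2 / (lam1 + lam2) * (1.0 - np.exp(-(lam1 + lam2) * Tc))  # (2.69)
pi03 = np.exp(-(lam1 + lam2) * Tc)                                 # (2.70)

print(abs(pi01_num - pi01) < 1e-10)   # True: numeric and closed form agree
print(pi01 + pi02 + pi03)             # 1.0: the probabilities sum to one
```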

Evaluation of reliability indices based on semi-Markov processes
In order to find the MSS reliability indices, the system state space diagram
should be built as it was done in previous sections for Markov processes.
The only difference is that, in the case of the semi-Markov model, the tran-
sition times may be distributed arbitrarily. Based on transition time distri-
butions Fi,j (t ) , the kernel matrix Q(t) should be defined according to the
method presented in the previous section.
The main problem of semi-Markov processes analysis is to find the state
probabilities. Let θ ij (t ) be the probability that the process that starts in ini-
tial state i at instant t=0 is in state j at instant t. It was shown that probabili-
ties θ ij (t ) , i,j ∈ {1, …, K} can be found from the solution of the following
system of integral equations:

$$\theta_{ij}(t) = \delta_{ij} [1 - F_i(t)] + \sum_{k=1}^{K} \int_0^t q_{ik}(\tau) \theta_{kj}(t - \tau) d\tau, \qquad (2.74)$$

where

$$q_{ik}(\tau) = \frac{dQ_{ik}(\tau)}{d\tau}, \qquad (2.75)$$

$$F_i(t) = \sum_{j=1}^{K} Q_{ij}(t), \qquad (2.76)$$

$$\delta_{ij} = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i \neq j. \end{cases} \qquad (2.77)$$

The system of linear integral equations (2.74) is the main system in the
theory of semi-Markov processes. By solving this system, one can find all
the probabilities θ ij (t ) , i, j ∈ {1,..., K } for the semi-Markov process with a
given kernel matrix Qij (t ) and given initial state.
Based on the probabilities $\theta_{ij}(t)$, $i, j \in \{1, \ldots, K\}$, important reliability indices can easily be found. Suppose that the system states are ordered according to their performance rates, $g_K \geq g_{K-1} \geq \ldots \geq g_2 \geq g_1$, and the demand w is constant with $g_m \geq w > g_{m-1}$. State K with performance rate $g_K$ is the initial state. In this case the system instantaneous availability is treated as the probability that a system starting at instant t = 0 from state K will be at instant $t \geq 0$ in one of the states $g_K, \ldots, g_m$. Hence, we obtain

$$A(t, w) = \sum_{i=m}^{K} \theta_{Ki}(t). \qquad (2.78)$$

The mean system instantaneous output performance and the mean instantaneous performance deficiency can be obtained, respectively, as

$$E_t = \sum_{i=1}^{K} g_i \theta_{Ki}(t) \qquad (2.79)$$

and

$$D_t(w) = \sum_{i=1}^{m-1} (w - g_i) \theta_{Ki}(t) \cdot 1(w > g_i). \qquad (2.80)$$
i =1

In the general case, the system of integral equations (2.74) can be solved only by numerical methods. For some of the simplest cases the method of Laplace-Stieltjes transform can be applied in order to derive an analytical solution of the system. As was done for Markov models, we designate the Laplace-Stieltjes transform of a function f(x) as

$$\tilde{f}(s) = L\{f(x)\} = \int_0^{\infty} e^{-sx} f(x) dx. \qquad (2.81)$$

Applying the Laplace-Stieltjes transform to both sides of (2.74) we obtain

$$\tilde{\theta}_{ij}(s) = \delta_{ij} \tilde{\Psi}_i(s) + \sum_{k=1}^{K} \pi_{ik} \tilde{f}_{ik}(s) \tilde{\theta}_{kj}(s), \quad 1 \leq i, j \leq K, \qquad (2.82)$$

where $\tilde{\Psi}_i(s)$ is the Laplace-Stieltjes transform of the function

$$\Psi_i(t) = 1 - F_i(t) = \int_t^{\infty} f_i(u) du = \Pr\{T_i > t\} \qquad (2.83)$$

and, therefore,

$$\tilde{\Psi}_i(s) = \frac{1}{s} \left[1 - \tilde{f}_i(s)\right]. \qquad (2.84)$$
The system of algebraic equations (2.82) defines the Laplace-Stieltjes transforms of the probabilities $\theta_{ij}(t)$, $i, j \in \{1, \ldots, K\}$, as functions of the main parameters of the semi-Markov process.
By solving this system, one can also find the steady-state probabilities. The detailed investigation is beyond the scope of this book and we only give here the resulting formulae for the computation of steady-state probabilities. The steady-state probabilities $\theta_j = \lim_{t \to \infty} \theta_{ij}(t)$ (if they exist) do not depend on the initial state i of the process, and for their designation one can use only one index: $\theta_j$. It is proven that

$$\theta_j = \frac{p_j \bar{T}_j}{\sum_{j=1}^{K} p_j \bar{T}_j}, \qquad (2.85)$$

where $p_j$, $j = 1, \ldots, K$, are the steady-state probabilities of the embedded Markov chain. These probabilities are the solutions of the following system of algebraic equations:

$$\begin{cases} p_j = \sum\limits_{i=1}^{K} p_i \pi_{ij}, & j = 1, \ldots, K, \\[2mm] \sum\limits_{i=1}^{K} p_i = 1. \end{cases} \qquad (2.86)$$

Note that the first K equations in (2.86) are linearly dependent, and the system cannot be solved without the last equation $\sum_{i=1}^{K} p_i = 1$.
In order to find the reliability function, an additional semi-Markov model should be built in analogy with the corresponding Markov models: all states corresponding to performance rates lower than the constant demand w should be united in one absorbing state with the number 0. All transitions that return the system from this absorbing state should be forbidden. The reliability function is obtained from this new model as $R(w, t) = 1 - \theta_{K0}(t)$.

Example 2.5
Consider an electric generator that has four possible performance (generat-
ing capacity) levels g4=100 MW, g3=70 MW, g2=50 MW and g1=0. The
constant demand is w=60 MW. The best state with performance rate
g4=100 MW is the initial state.
Only minor failures and minor repairs are possible. Times to failures are distributed exponentially with the following parameters:

$$\lambda_{4,3} = 10^{-3}\ \text{hours}^{-1}, \quad \lambda_{3,2} = 5 \times 10^{-4}\ \text{hours}^{-1}, \quad \lambda_{2,1} = 2 \times 10^{-4}\ \text{hours}^{-1}.$$

Hence, the times to failures $T_{4,3}$, $T_{3,2}$, $T_{2,1}$ are random variables distributed according to the corresponding cdf:

$$F_{4,3}(t) = 1 - e^{-\lambda_{4,3} t}, \quad F_{3,2}(t) = 1 - e^{-\lambda_{3,2} t}, \quad F_{2,1}(t) = 1 - e^{-\lambda_{2,1} t}.$$

Repair times are normally distributed. $T_{3,4}$ has mean time to repair $\bar{T}_{3,4} = 240$ hours and standard deviation $\sigma_{3,4} = 16$ hours; $T_{2,3}$ has mean time to repair $\bar{T}_{2,3} = 480$ hours and standard deviation $\sigma_{2,3} = 48$ hours; $T_{1,2}$ has mean time to repair $\bar{T}_{1,2} = 720$ hours and standard deviation $\sigma_{1,2} = 120$ hours. Hence, the cdf of the random variables $T_{3,4}$, $T_{2,3}$ and $T_{1,2}$ are respectively

$$F_{3,4}(t) = \frac{1}{\sqrt{2\pi\sigma_{3,4}^2}} \int_0^t \exp\left[-\frac{(u - \bar{T}_{3,4})^2}{2\sigma_{3,4}^2}\right] du,$$

$$F_{2,3}(t) = \frac{1}{\sqrt{2\pi\sigma_{2,3}^2}} \int_0^t \exp\left[-\frac{(u - \bar{T}_{2,3})^2}{2\sigma_{2,3}^2}\right] du,$$

$$F_{1,2}(t) = \frac{1}{\sqrt{2\pi\sigma_{1,2}^2}} \int_0^t \exp\left[-\frac{(u - \bar{T}_{1,2})^2}{2\sigma_{1,2}^2}\right] du.$$

We have to find the generator reliability function, steady-state availability, mean steady-state performance (generating capacity), and mean steady-state performance deficiency.
The evolution of the generator in the state space is shown in Fig. 2.22A.

[Figure: (A) four-state evolution diagram with transition time cdfs F4,3(t), F3,4(t), F3,2(t), F2,3(t), F2,1(t), F1,2(t); (B) corresponding semi-Markov model with kernel elements Q4,3(t), Q3,4(t), Q3,2(t), Q2,3(t), Q2,1(t), Q1,2(t)]

Fig. 2.22. Behavior of the electric generator. A. Evolution in the state-space. B. Semi-Markov model

Based on (2.59)-(2.61), we obtain the kernel matrix $Q(t) = \left\| Q_{ij}(t) \right\|$, $i, j = 1, 2, 3, 4$:

$$Q(t) = \begin{pmatrix} 0 & Q_{12}(t) & 0 & 0 \\ Q_{21}(t) & 0 & Q_{23}(t) & 0 \\ 0 & Q_{32}(t) & 0 & Q_{34}(t) \\ 0 & 0 & Q_{43}(t) & 0 \end{pmatrix}$$

in which

$$Q_{12}(t) = F_{1,2}(t), \quad Q_{21}(t) = \int_0^t [1 - F_{2,3}(u)] dF_{2,1}(u),$$

$$Q_{23}(t) = \int_0^t [1 - F_{2,1}(u)] dF_{2,3}(u), \quad Q_{32}(t) = \int_0^t [1 - F_{3,4}(u)] dF_{3,2}(u),$$

$$Q_{34}(t) = \int_0^t [1 - F_{3,2}(u)] dF_{3,4}(u), \quad Q_{43}(t) = F_{4,3}(t).$$

The corresponding semi-Markov process is presented in Fig. 2.22B.
Based on the kernel matrix, the cdf of the unconditional sojourn times in states 1, 2, 3, and 4 can be written according to (2.56) as

$$F_1(t) = Q_{12}(t), \quad F_2(t) = Q_{21}(t) + Q_{23}(t),$$

$$F_3(t) = Q_{32}(t) + Q_{34}(t), \quad F_4(t) = Q_{43}(t).$$

According to (2.57) and (2.58) we have the following mean unconditional sojourn times:

$$\bar{T}_1 = 720\ \text{hours}, \quad \bar{T}_2 = 457\ \text{hours}, \quad \bar{T}_3 = 226\ \text{hours}, \quad \bar{T}_4 = 1000\ \text{hours}.$$

Using (2.54) we obtain the one-step probabilities for the embedded Markov chain:

$$\pi_{12} = F_{1,2}(\infty) = 1, \quad \pi_{21} = \int_0^{\infty} [1 - F_{2,3}(t)] dF_{2,1}(t),$$

$$\pi_{23} = \int_0^{\infty} [1 - F_{2,1}(t)] dF_{2,3}(t), \quad \pi_{32} = \int_0^{\infty} [1 - F_{3,4}(t)] dF_{3,2}(t),$$

$$\pi_{34} = \int_0^{\infty} [1 - F_{3,2}(t)] dF_{3,4}(t), \quad \pi_{43} = F_{4,3}(\infty) = 1.$$

Calculating the integrals numerically, we obtain the following one-step probability matrix for the embedded Markov chain:

$$\pi = \lim_{t \to \infty} Q(t) = \begin{pmatrix} 0 & \pi_{12} & 0 & 0 \\ \pi_{21} & 0 & \pi_{23} & 0 \\ 0 & \pi_{32} & 0 & \pi_{34} \\ 0 & 0 & \pi_{43} & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0.0910 & 0 & 0.9090 & 0 \\ 0 & 0.1131 & 0 & 0.8869 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$

In order to find the steady-state probabilities $p_j$, $j = 1, 2, 3, 4$, for the embedded Markov chain, we have to solve the system of algebraic equations (2.86), which takes the form

$$\begin{cases} p_1 = \pi_{21} p_2, \\ p_2 = \pi_{12} p_1 + \pi_{32} p_3, \\ p_3 = \pi_{23} p_2 + \pi_{43} p_4, \\ p_4 = \pi_{34} p_3, \\ p_1 + p_2 + p_3 + p_4 = 1. \end{cases}$$

By solving this system we obtain

$$p_1 = 0.0056, \quad p_2 = 0.0615, \quad p_3 = 0.4944, \quad p_4 = 0.4385.$$

Now using (2.85) we obtain the steady-state probabilities

$$\theta_1 = \frac{p_1 \bar{T}_1}{\sum_{j=1}^{4} p_j \bar{T}_j} = 0.0069, \quad \theta_2 = \frac{p_2 \bar{T}_2}{\sum_{j=1}^{4} p_j \bar{T}_j} = 0.0484,$$

$$\theta_3 = \frac{p_3 \bar{T}_3}{\sum_{j=1}^{4} p_j \bar{T}_j} = 0.1919, \quad \theta_4 = \frac{p_4 \bar{T}_4}{\sum_{j=1}^{4} p_j \bar{T}_j} = 0.7528.$$

The steady-state availability of the generator for the given constant demand is

$$A(w) = \theta_3 + \theta_4 = 0.9447.$$

According to (2.79), we obtain the mean steady-state performance

$$E_{\infty} = \sum_{k=1}^{4} g_k \theta_k = 91.13\ \text{MW},$$

and according to (2.80), we obtain the mean steady-state performance deficiency

$$D_{\infty} = (w - g_2)\theta_2 + (w - g_1)\theta_1 = 0.90\ \text{MW}.$$
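The whole chain (2.86), then (2.85), then the steady-state indices can be verified with a few lines of linear algebra. A minimal sketch using the numerical values of this example:

```python
import numpy as np

pi = np.array([[0.0, 1.0, 0.0, 0.0],
               [0.0910, 0.0, 0.9090, 0.0],
               [0.0, 0.1131, 0.0, 0.8869],
               [0.0, 0.0, 1.0, 0.0]])
T = np.array([720.0, 457.0, 226.0, 1000.0])  # mean sojourn times, hours
g = np.array([0.0, 50.0, 70.0, 100.0])       # capacities of states 1..4, MW
w = 60.0                                     # constant demand, MW

# Embedded chain: p = p @ pi together with normalization (Eq. (2.86))
A = np.vstack([pi.T - np.eye(4), np.ones((1, 4))])
p = np.linalg.lstsq(A, np.r_[np.zeros(4), 1.0], rcond=None)[0]

theta = p * T / np.dot(p, T)                 # Eq. (2.85)
print(theta)                                 # ~(0.0069, 0.048, 0.192, 0.753)
print(theta[g >= w].sum())                   # availability, ~0.9447
print(np.dot(g, theta))                      # E_inf, ~91.1 MW
print(np.dot(np.maximum(w - g, 0.0), theta)) # D_inf, ~0.90 MW
```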

In order to find the reliability function for the given constant demand
w=60 MW, we unite states 1 and 2 into one absorbing state 0. The modi-
fied graphical representation of the system evolution in the state space for
this case is shown in Fig. 2.23A. In Fig. 2.23B, the state space diagram for
the corresponding semi-Markov process is shown.

[Figure: (A) modified evolution diagram with states 4, 3 and absorbing state 0, where F3,0(t) = F3,2(t); (B) corresponding semi-Markov model with kernel elements Q4,3(t), Q3,4(t), Q3,0(t)]

Fig. 2.23. Diagrams for evaluating reliability function of generator. A. Evolution in modified state-space. B. Semi-Markov model.

As in the previous case, we define the kernel matrix for the corresponding semi-Markov process based on expressions (2.59)-(2.61):

$$Q(t) = \begin{pmatrix} 0 & 0 & 0 \\ Q_{30}(t) & 0 & Q_{34}(t) \\ 0 & Q_{43}(t) & 0 \end{pmatrix},$$

where

$$Q_{30}(t) = \int_0^t [1 - F_{3,4}(u)] dF_{3,2}(u), \quad Q_{34}(t) = \int_0^t [1 - F_{3,2}(u)] dF_{3,4}(u), \quad Q_{43}(t) = F_{4,3}(t).$$

Here $F_{3,0}(t) = F_{3,2}(t)$, since the only transition from state 3 into the absorbing state goes through the minor failure 3 → 2.

The reliability function for the constant demand w = 60 MW is defined as $R(w, t) = 1 - \theta_{40}(t)$.
According to (2.74), the following system of integral equations can be written in order to find the probability $\theta_{40}(t)$:

$$\begin{cases} \theta_{40}(t) = \int_0^t q_{43}(\tau) \theta_{30}(t - \tau) d\tau, \\[2mm] \theta_{30}(t) = \int_0^t q_{34}(\tau) \theta_{40}(t - \tau) d\tau + \int_0^t q_{30}(\tau) \theta_{00}(t - \tau) d\tau, \\[2mm] \theta_{00}(t) = 1. \end{cases}$$

The reliability function obtained by solving this system numerically is presented in Fig. 2.24.
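One possible discretization of this system is sketched below (a rectangle-rule scheme under the example's distributions; it is not necessarily the scheme used to produce Fig. 2.24):

```python
import numpy as np
from scipy.stats import norm

lam32, lam43 = 5e-4, 1e-3                  # failure rates, 1/hours
f34 = lambda t: norm.pdf(t, 240.0, 16.0)   # repair 3 -> 4 density
F34 = lambda t: norm.cdf(t, 240.0, 16.0)

h, n = 2.0, 5000                           # grid step (hours) and size
t = h * np.arange(n)

q43 = lam43 * np.exp(-lam43 * t)                    # dQ43/dt
q34 = np.exp(-lam32 * t) * f34(t)                   # [1 - F32] dF34/dt
q30 = (1.0 - F34(t)) * lam32 * np.exp(-lam32 * t)   # [1 - F34] dF32/dt
Q30 = np.cumsum(q30) * h                   # term with theta_00 = 1

th40 = np.zeros(n)                         # theta_40(t)
th30 = np.zeros(n)                         # theta_30(t)
for i in range(1, n):
    # rectangle-rule convolutions of the system above
    th40[i] = h * np.dot(q43[:i], th30[i - 1::-1])
    th30[i] = h * np.dot(q34[:i], th40[i - 1::-1]) + Q30[i]

R = 1.0 - th40                             # reliability, initial state 4
```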

[Figure: reliability function R(t) decreasing from 1 toward 0 over the interval 0 to 10000 hours]

Fig. 2.24. Reliability function of generator

2.2 The Universal Generating Function Method

The recently emerged universal generating function (UGF) technique allows one to find the entire MSS performance distribution based on the performance distributions of its elements by using algebraic procedures. This technique (also called the method of generalized generating sequences) generalizes the technique that is based on using the well-known ordinary generating function. The basic ideas of the method were introduced by Professor I. Ushakov in the mid 1980s (Ushakov 1986, Ushakov 1987). Since then, the method has been considerably expanded (Lisnianski and Levitin 2003, Levitin 2005).
The UGF approach is straightforward. It is based on intuitively simple
recursive procedures and provides a systematic method for the system
states’ enumeration that can replace extremely complicated combinatorial
algorithms used for enumerating the possible states in some special types
of systems (such as consecutive systems or networks).
The UGF approach is effective. Combined with simplification tech-
niques, it allows the system’s performance distribution to be obtained in a
short time. The computational burden is the crucial factor when one solves
optimization problems where the performance measures have to be evalu-
ated for a great number of possible solutions along the search process. This
makes using the traditional methods in reliability optimization problem-
atic. Contrary to that, the UGF technique is fast enough to be implemented
in optimization procedures.

The UGF approach is universal. An analyst can use the same recursive
procedures for systems with a different physical nature of performance and
different types of element interaction.

2.2.1 Moment-generating function and z-transform

Consider a discrete random variable X that can take on a finite number of


possible values. The probabilistic distribution of this variable can be repre-
sented by the finite vector x = (x0, …, xk) consisting of the possible values
of X and the finite vector p consisting of the corresponding probabilities pi
= Pr{X = xi}. The mapping xi→ pi is usually called the probability mass
function (pmf).
X must take one of the values $x_i$. Therefore

$$\sum_{i=0}^{k} p_i = 1. \qquad (2.87)$$

Example 2.6
Suppose that one performs k independent trials and that each trial can result either in a success (with probability π) or in a failure (with probability 1−π). Let the random variable X represent the number of successes that occur in k trials. Such a variable is called a binomial random variable. The pmf of X takes the form

$$x_i = i, \quad p_i = \binom{k}{i} \pi^i (1 - \pi)^{k - i}, \quad 0 \leq i \leq k.$$

According to the binomial theorem it can be seen that

$$\sum_{i=0}^{k} p_i = \sum_{i=0}^{k} \binom{k}{i} \pi^i (1 - \pi)^{k - i} = [\pi + (1 - \pi)]^k = 1.$$

The expected value of X is defined as a weighted average of the possible values that X can take on, where each value is weighted by the probability that X assumes that value:

$$E(X) = \sum_{i=0}^{k} x_i p_i. \qquad (2.88)$$

Example 2.7
The expected value of a binomial random variable is

$$E(X) = \sum_{i=0}^{k} x_i p_i = \sum_{i=0}^{k} i \binom{k}{i} \pi^i (1 - \pi)^{k - i} = k\pi \sum_{i=0}^{k-1} \binom{k-1}{i} \pi^i (1 - \pi)^{k-1-i} = k\pi [\pi + (1 - \pi)]^{k-1} = k\pi.$$

The moment-generating function m(t) of the discrete random variable X with pmf x, p is defined for all values of t by

$$m(t) = E(e^{tX}) = \sum_{i=0}^{k} e^{t x_i} p_i. \qquad (2.89)$$

The function m(t) is called the moment-generating function because all of the moments of the random variable X can be obtained by successively differentiating m(t). For example:

$$m'(t) = \frac{d}{dt} \left( \sum_{i=0}^{k} e^{t x_i} p_i \right) = \sum_{i=0}^{k} x_i e^{t x_i} p_i. \qquad (2.90)$$

Hence

$$m'(0) = \sum_{i=0}^{k} x_i p_i = E(X). \qquad (2.91)$$

Then

$$m''(t) = \frac{d}{dt} (m'(t)) = \frac{d}{dt} \left( \sum_{i=0}^{k} x_i e^{t x_i} p_i \right) = \sum_{i=0}^{k} x_i^2 e^{t x_i} p_i \qquad (2.92)$$

and

$$m''(0) = \sum_{i=0}^{k} x_i^2 p_i = E(X^2). \qquad (2.93)$$

In general, the n-th derivative of m(t) evaluated at t = 0 is equal to $E(X^n)$.
Example 2.8
The moment-generating function of the binomial distribution takes the form

$$m(t) = E(e^{tX}) = \sum_{i=0}^{k} e^{ti} \binom{k}{i} \pi^i (1 - \pi)^{k-i} = \sum_{i=0}^{k} (\pi e^t)^i \binom{k}{i} (1 - \pi)^{k-i} = (\pi e^t + 1 - \pi)^k.$$

Hence

$$m'(t) = k (\pi e^t + 1 - \pi)^{k-1} \pi e^t \quad \text{and} \quad E(X) = m'(0) = k\pi.$$

The moment-generating function of a random variable uniquely deter-


mines its pmf. This means that a one-to-one correspondence exists be-
tween the pmf and the moment-generating function.
The following important property of the moment-generating function is of special interest for us: the moment-generating function of the sum of independent random variables is the product of the individual moment-generating functions of these variables. Let $m_X(t)$ and $m_Y(t)$ be the moment-generating functions of random variables X and Y respectively. The pmf of the random variables are represented by the vectors

$$\mathbf{x} = (x_0, \ldots, x_{k_X}), \quad \mathbf{p}_X = (p_{X0}, \ldots, p_{X k_X}) \qquad (2.94)$$

and

$$\mathbf{y} = (y_0, \ldots, y_{k_Y}), \quad \mathbf{p}_Y = (p_{Y0}, \ldots, p_{Y k_Y}) \qquad (2.95)$$

respectively. Then $m_{X+Y}(t)$, the moment-generating function of X+Y, is obtained as

$$m_{X+Y}(t) = m_X(t) m_Y(t) = \sum_{i=0}^{k_X} e^{t x_i} p_{Xi} \sum_{j=0}^{k_Y} e^{t y_j} p_{Yj} = \sum_{i=0}^{k_X} \sum_{j=0}^{k_Y} e^{t(x_i + y_j)} p_{Xi} p_{Yj}. \qquad (2.96)$$

The resulting moment-generating function $m_{X+Y}(t)$ relates the probabilities of all the possible combinations of realizations $X = x_i$, $Y = y_j$, for any i and j, with the values that the random function X + Y takes on for these combinations.
In general, for n independent discrete random variables $X_1, \ldots, X_n$,

$$m_{\sum_{i=1}^{n} X_i}(t) = \prod_{i=1}^{n} m_{X_i}(t). \qquad (2.97)$$

By replacing the function $e^t$ by the variable z in Eq. (2.89) we obtain another function related to the random variable X that uniquely determines its pmf:

$$\omega(z) = E(z^X) = \sum_{i=0}^{k} z^{x_i} p_i. \qquad (2.98)$$

This function is usually called the z-transform of the discrete random variable X. The z-transform preserves some basic properties of the moment-generating function. The first derivative of ω(z) is equal to E(X) at z = 1. Indeed:

$$\omega'(z) = \frac{d}{dz} \left( \sum_{i=0}^{k} z^{x_i} p_i \right) = \sum_{i=0}^{k} x_i z^{x_i - 1} p_i. \qquad (2.99)$$

Hence

$$\omega'(1) = \sum_{i=0}^{k} x_i p_i = E(X). \qquad (2.100)$$

The z-transform of the sum of independent random variables is the product of the individual z-transforms of these variables:

$$\omega_{X+Y}(z) = \omega_X(z) \omega_Y(z) = \sum_{i=0}^{k_X} z^{x_i} p_{Xi} \sum_{j=0}^{k_Y} z^{y_j} p_{Yj} = \sum_{i=0}^{k_X} \sum_{j=0}^{k_Y} z^{(x_i + y_j)} p_{Xi} p_{Yj} \qquad (2.101)$$

and in general

$$\omega_{\sum_{i=1}^{n} X_i}(z) = \prod_{i=1}^{n} \omega_{X_i}(z). \qquad (2.102)$$

The reader wishing to learn more about the generating function and z-
transform is referred to the books (Grimmett and Stirzaker 1992) and
(Ross 2000).

Example 2.9
Suppose that one performs k independent trials and each trial can result either in a success (with probability π) or in a failure (with probability 1−π). Let the random variable $X_j$ represent the number of successes that occur in the jth trial.
The pmf of any variable $X_j$ ($1 \leq j \leq k$) is

$$\Pr\{X_j = 1\} = \pi, \quad \Pr\{X_j = 0\} = 1 - \pi.$$

The corresponding z-transform takes the form

$$\omega_{X_j}(z) = \pi z^1 + (1 - \pi) z^0.$$

The random number of successes that occur in k trials is equal to the sum of the numbers of successes in each trial:

$$X = \sum_{j=1}^{k} X_j.$$

Therefore, the corresponding z-transform can be obtained as

$$\omega_X(z) = \prod_{j=1}^{k} \omega_{X_j}(z) = [\pi z + (1 - \pi) z^0]^k = \sum_{i=0}^{k} z^i \binom{k}{i} \pi^i (1 - \pi)^{k-i}.$$

This z-transform corresponds to the binomial pmf:

$$x_i = i, \quad p_i = \binom{k}{i} \pi^i (1 - \pi)^{k - i}, \quad 0 \leq i \leq k.$$
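The derivation above is just a k-fold polynomial product and is easy to mechanize. A minimal sketch (poly_mult and the chosen values of π and k are illustrative only):

```python
from collections import defaultdict

def poly_mult(u1, u2):
    """Product of two z-transforms stored as {exponent: coefficient}."""
    out = defaultdict(float)
    for x1, p1 in u1.items():
        for x2, p2 in u2.items():
            out[x1 + x2] += p1 * p2
    return dict(out)

pi, k = 0.3, 4
trial = {1: pi, 0: 1.0 - pi}          # z-transform of a single trial
omega = {0: 1.0}
for _ in range(k):
    omega = poly_mult(omega, trial)   # [pi*z + (1 - pi)]**k

print(omega)   # binomial pmf, e.g. Pr{X=i} = C(4,i) * 0.3**i * 0.7**(4-i)
```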

2.2.2 Mathematical fundamentals of the Universal Generating Function

Definition of the Universal Generating Function

Consider n independent discrete random variables $X_1, \ldots, X_n$ and assume that each variable $X_i$ has a pmf represented by the vectors $\mathbf{x}_i$, $\mathbf{p}_i$. In order to

evaluate the pmf of an arbitrary function $f(X_1, \ldots, X_n)$, one has to evaluate the vector y of all of the possible values of this function and the vector q of probabilities that the function takes these values.
Each possible value of the function f corresponds to a combination of the values of its arguments $X_1, \ldots, X_n$. The total number of possible combinations is

$$K = \prod_{i=1}^{n} (k_i + 1), \qquad (2.103)$$

where $k_i + 1$ is the number of different realizations of the random variable $X_i$. Since all of the n variables are statistically independent, the probability of each unique combination is equal to the product of the probabilities of the realizations of the arguments composing this combination.
The probability of the jth combination of the realizations of the variables can be obtained as

$$q_j = \prod_{i=1}^{n} p_{i j_i} \qquad (2.104)$$

and the corresponding value of the function can be obtained as

$$f_j = f(x_{1 j_1}, \ldots, x_{n j_n}). \qquad (2.105)$$

Some different combinations may produce the same values of the function. All of the combinations are mutually exclusive. Therefore, the probability that the function takes on some value is equal to the sum of the probabilities of the combinations producing this value. Let $A_h$ be the set of combinations producing the value $f_h$. If the total number of different realizations of the function $f(X_1, \ldots, X_n)$ is H, then the pmf of the function is

$$\mathbf{y} = (f_h : 1 \leq h \leq H), \quad \mathbf{q} = \Bigl( \sum_{(x_{1 j_1}, \ldots, x_{n j_n}) \in A_h} \prod_{i=1}^{n} p_{i j_i} : 1 \leq h \leq H \Bigr). \qquad (2.106)$$

Example 2.10
Consider two random variables X1 and X2 with pmf x1 = (1, 4), p1 = (0.6,
0.4) and x2 = (0.5, 1, 2), p2 = (0.1, 0.6, 0.3). In order to obtain the pmf of
the function Y = X 1 X 2 we have to consider all of the possible combinations
of the values taken by the variables. These combinations are presented in
Table 2.2.

The values of the function Y corresponding to different combinations of


realizations of its random arguments and the probabilities of these combi-
nations can be presented in the form
y = (1, 2, 1, 4, 1, 16), q = (0.06, 0.04, 0.36, 0.24, 0.18, 0.12)

Table 2.2. pmf of the function of two variables

No. | Combination probability | Value of X1 | Value of X2 | Value of Y
1 | 0.6×0.1 = 0.06 | 1 | 0.5 | 1
2 | 0.4×0.1 = 0.04 | 4 | 0.5 | 2
3 | 0.6×0.6 = 0.36 | 1 | 1 | 1
4 | 0.4×0.6 = 0.24 | 4 | 1 | 4
5 | 0.6×0.3 = 0.18 | 1 | 2 | 1
6 | 0.4×0.3 = 0.12 | 4 | 2 | 16

Note that some different combinations produce the same values of the
function Y. Since all of the combinations are mutually exclusive, we can
obtain the probability that the function takes some value as being the sum
of the probabilities of different combinations of the values of its arguments
that produce this value:

Pr{Y = 1} = Pr{X1 = 1, X2 = 0.5} + Pr{X1 = 1, X2 = 1}

+ Pr{X1 = 1, X2 = 2} = 0.06 + 0.36 + 0.18 = 0.6


The pmf of the function Y is
y = (1, 2, 4, 16), q = (0.6, 0.04, 0.24, 0.12)

The z-transform of each random variable $X_i$ represents its pmf $(x_{i0}, \ldots, x_{i k_i})$, $(p_{i0}, \ldots, p_{i k_i})$ in the polynomial form

$$\sum_{j=0}^{k_i} p_{ij} z^{x_{ij}}. \qquad (2.107)$$

According to (2.102), the product of the z-transform polynomials corresponding to the variables $X_1, \ldots, X_n$ determines the pmf of the sum of these variables.
In a similar way one can obtain the z-transform representing the pmf of the arbitrary function f by replacing the product of the polynomials by a

more general composition operator ⊗ f over z-transform representations of


pmf of n independent variables:
ki xiji k1 k2 kn n f ( xij1 ,..., xnjn )
⊗(
f
∑ piji z )= ∑ ∑ ... ∑ ( ∏ piji z ) (2.108)
ji = 0 j1 = 0 j2 = 0 j n = 0 i = 0

The technique based on using the z-transform and composition operators $\otimes_f$ is named the universal z-transform or universal (moment) generating function (UGF) technique. In the context of this technique, the z-transform of a random variable for which the operator $\otimes_f$ is defined is referred to as its u-function. We refer to the u-function of variable $X_i$ as $u_i(z)$, and to the u-function of the function $f(X_1, \ldots, X_n)$ as U(z). According to this notation,

$$U(z) = \otimes_f (u_1(z), u_2(z), \ldots, u_n(z)), \qquad (2.109)$$

where $u_i(z)$ takes the form (2.107) and U(z) takes the form (2.108). For functions of two arguments, two interchangeable notations can be used:

$$U(z) = \otimes_f (u_1(z), u_2(z)) = u_1(z) \otimes_f u_2(z). \qquad (2.110)$$

Despite the fact that the u-function resembles a polynomial, it is not a polynomial because:
- Its coefficients and exponents are not necessarily scalar variables, but can be other mathematical objects (e.g. vectors);
- Operators defined over the u-functions can differ from the operator of the polynomial product (unlike the ordinary z-transform, where only the product of polynomials is defined).
When the u-function U(z) represents the pmf of a random function $f(X_1, \ldots, X_n)$, the expected value of this function can be obtained (by analogy with the regular z-transform) as the first derivative of U(z) at z = 1.
In general, the u-functions can be used not just for representing the pmf of random variables. In the following chapters, we also use other interpretations. However, in any interpretation the coefficients of the terms in the u-function represent the probabilistic characteristics of some object or state encoded by the exponent of these terms.
The u-functions inherit an essential property of regular polynomials: they allow for collecting like terms. Indeed, if a u-function representing the pmf of a random variable X contains the terms $p_h z^{x_h}$ and $p_m z^{x_m}$ for which $x_h = x_m$, the two terms can be replaced by the single term $(p_h + p_m) z^{x_h}$, since in this case $\Pr\{X = x_h\} = p_h + p_m$.

Example 2.11
Consider the pmf of the function Y from Example 2.10, obtained from Table 2.2. The u-function corresponding to this pmf takes the form:

$$U(z) = 0.06z^1 + 0.04z^2 + 0.36z^1 + 0.24z^4 + 0.18z^1 + 0.12z^{16}.$$

By collecting the like terms in this u-function we obtain:

$$U(z) = 0.6z^1 + 0.04z^2 + 0.24z^4 + 0.12z^{16},$$

which corresponds to the final pmf obtained in Example 2.10.
The expected value of Y can be obtained as

$$E(Y) = U'(1) = 0.6 \times 1 + 0.04 \times 2 + 0.24 \times 4 + 0.12 \times 16 = 3.56.$$
The described technique of determining the pmf of functions is based on
an enumerative approach. This approach is extremely resource consuming.
Indeed, the resulting u-function U(z) contains K terms (see Eq. (2.103)),
which requires excessive storage space. In order to obtain U(z) one has to
perform (n-1)K procedures of probabilities multiplication and K proce-
dures of function evaluation. Fortunately, many functions used in reliabil-
ity engineering produce the same values for different combinations of the
values of their arguments. The combination of recursive determination of
the functions with simplification techniques based on the like terms collec-
tion allows one to considerably reduce the computational burden associ-
ated with evaluating the pmf of complex functions.
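A natural computer representation of a u-function is a mapping from exponents (performance values) to coefficients (probabilities); with it, the composition operator (2.108) and the like-term collection take only a few lines. A minimal sketch (the dict-based encoding is an implementation choice, not part of the method's definition):

```python
from collections import defaultdict

def compose(u1, u2, f):
    """Composition operator over two u-functions stored as
    {performance value: probability}; like terms are collected
    automatically because equal exponents share one dict key."""
    out = defaultdict(float)
    for x1, p1 in u1.items():
        for x2, p2 in u2.items():
            out[f(x1, x2)] += p1 * p2
    return dict(out)

u1 = {5: 0.6, 8: 0.3, 12: 0.1}
u2 = {8: 0.7, 10: 0.3}
print(compose(u1, u2, max))   # {8: 0.63, 10: 0.27, 12: 0.1}, up to rounding
```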

Example 2.12
Consider the function

$$Y = f(X_1, \ldots, X_5) = (\max(X_1, X_2) + \min(X_3, X_4)) X_5$$

of five independent random variables $X_1, \ldots, X_5$. The probability mass functions of these variables are determined by the pairs of vectors $\mathbf{x}_i$, $\mathbf{p}_i$ ($1 \leq i \leq 5$) and are presented in Table 2.3.
These pmf can be represented in the form of u-functions as follows:

$$u_1(z) = p_{10} z^{x_{10}} + p_{11} z^{x_{11}} + p_{12} z^{x_{12}} = 0.6z^5 + 0.3z^8 + 0.1z^{12};$$

$$u_2(z) = p_{20} z^{x_{20}} + p_{21} z^{x_{21}} = 0.7z^8 + 0.3z^{10};$$

$$u_3(z) = p_{30} z^{x_{30}} + p_{31} z^{x_{31}} = 0.6z^0 + 0.4z^2;$$

$$u_4(z) = p_{40} z^{x_{40}} + p_{41} z^{x_{41}} + p_{42} z^{x_{42}} = 0.1z^0 + 0.5z^3 + 0.4z^5;$$

$$u_5(z) = p_{50} z^{x_{50}} + p_{51} z^{x_{51}} = 0.5z^1 + 0.5z^{1.5}.$$

Using the straightforward approach one can obtain the pmf of the ran-
dom variable Y applying the operator (2.108) over these u-functions. Since
k1 + 1 = 3, k2 + 1 = 2, k3 + 1 = 2, k4 + 1 = 3, k5 + 1 = 2, the total number of
term multiplication procedures that one has to perform using this equation
is 3×2×2×3×2 = 72.

Table 2.3. pmf of random variables

X1: x1 = (5, 8, 12), p1 = (0.6, 0.3, 0.1)
X2: x2 = (8, 10), p2 = (0.7, 0.3)
X3: x3 = (0, 2), p3 = (0.6, 0.4)
X4: x4 = (0, 3, 5), p4 = (0.1, 0.5, 0.4)
X5: x5 = (1, 1.5), p5 = (0.5, 0.5)

Now let us introduce three auxiliary random variables X6, X7 and X8, and
define the same function recursively:
X6 = max{X1, X2};

X7 = min{X3, X4};

X8 = X6 + X7;

Y = X8 X5.
We can obtain the pmf of variable Y using composition operators over
pairs of u-functions as follows:
$$u_6(z) = u_1(z) \otimes_{\max} u_2(z) = (0.6z^5 + 0.3z^8 + 0.1z^{12}) \otimes_{\max} (0.7z^8 + 0.3z^{10})$$

$$= 0.42z^{\max\{5,8\}} + 0.21z^{\max\{8,8\}} + 0.07z^{\max\{12,8\}} + 0.18z^{\max\{5,10\}} + 0.09z^{\max\{8,10\}} + 0.03z^{\max\{12,10\}}$$

$$= 0.63z^8 + 0.27z^{10} + 0.1z^{12};$$
$$u_7(z) = u_3(z) \otimes_{\min} u_4(z) = (0.6z^0 + 0.4z^2) \otimes_{\min} (0.1z^0 + 0.5z^3 + 0.4z^5)$$

$$= 0.06z^{\min\{0,0\}} + 0.04z^{\min\{2,0\}} + 0.3z^{\min\{0,3\}} + 0.2z^{\min\{2,3\}} + 0.24z^{\min\{0,5\}} + 0.16z^{\min\{2,5\}}$$

$$= 0.64z^0 + 0.36z^2;$$
$$u_8(z) = u_6(z) \otimes_{+} u_7(z) = (0.63z^8 + 0.27z^{10} + 0.1z^{12}) \otimes_{+} (0.64z^0 + 0.36z^2)$$

$$= 0.4032z^{8+0} + 0.1728z^{10+0} + 0.064z^{12+0} + 0.2268z^{8+2} + 0.0972z^{10+2} + 0.036z^{12+2}$$

$$= 0.4032z^8 + 0.3996z^{10} + 0.1612z^{12} + 0.036z^{14};$$

$$U(z) = u_8(z) \otimes_{\times} u_5(z) = (0.4032z^8 + 0.3996z^{10} + 0.1612z^{12} + 0.036z^{14}) \otimes_{\times} (0.5z^1 + 0.5z^{1.5})$$

$$= 0.2016z^{8 \times 1} + 0.1998z^{10 \times 1} + 0.0806z^{12 \times 1} + 0.018z^{14 \times 1} + 0.2016z^{8 \times 1.5} + 0.1998z^{10 \times 1.5} + 0.0806z^{12 \times 1.5} + 0.018z^{14 \times 1.5}$$

$$= 0.2016z^8 + 0.1998z^{10} + 0.2822z^{12} + 0.018z^{14} + 0.1998z^{15} + 0.0806z^{18} + 0.018z^{21}.$$

The final u-function U(z) represents the pmf of Y, which takes the form

y = (8, 10, 12, 14, 15, 18, 21),
q = (0.2016, 0.1998, 0.2822, 0.018, 0.1998, 0.0806, 0.018).
Note that during the recursive derivation of this pmf we used only 26
term multiplication procedures. This considerable computational complex-
ity reduction is possible because of the like term collection in intermediate
u-functions.
The problem of system reliability analysis usually includes evaluation of
the pmf of some random values characterizing the system's behavior.
These values can be very complex functions of a large number of random
variables. The explicit derivation of such functions is an extremely com-
plicated task. Fortunately, the UGF method for many types of system al-
lows one to obtain the system u-function recursively. This property of the
UGF method is based on the associative property of many functions used
in reliability engineering. The recursive approach presumes obtaining u-
functions of subsystems containing several basic elements and then treat-
ing the subsystem as a single element with the u-function obtained when
computing the u-function of a higher-level subsystem. Combining the re-
cursive approach with the simplification technique reduces the number of
terms in the intermediate u-functions and provides a drastic reduction of
the computational burden.

Properties of composition operators

The properties of the composition operator $\otimes_f$ strictly depend on the properties of the function $f(X_1, \ldots, X_n)$. Since the procedure of multiplying the probabilities in this operator is commutative and associative, the entire operator can also possess these properties if the function possesses them.
If

$$f(X_1, X_2, \ldots, X_n) = f(f(X_1, X_2, \ldots, X_{n-1}), X_n), \qquad (2.111)$$

then

$$U(z) = \otimes_f (u_1(z), u_2(z), \ldots, u_n(z)) = \otimes_f (\otimes_f (u_1(z), u_2(z), \ldots, u_{n-1}(z)), u_n(z)). \qquad (2.112)$$

Therefore, one can obtain the u-function U(z) by assigning $U_1(z) = u_1(z)$ and applying the operator $\otimes_f$ consecutively:

$$U_j(z) = \otimes_f (U_{j-1}(z), u_j(z)) \quad \text{for } 2 \leq j \leq n, \qquad (2.113)$$

such that finally $U(z) = U_n(z)$.


If the function f possesses the associative property

$$f(X_1, \ldots, X_j, X_{j+1}, \ldots, X_n) = f(f(X_1, \ldots, X_j), f(X_{j+1}, \ldots, X_n)) \qquad (2.114)$$

for any j, then the $\otimes_f$ operator also possesses this property:

$$\otimes_f (u_1(z), \ldots, u_n(z)) = \otimes_f (\otimes_f (u_1(z), \ldots, u_j(z)), \otimes_f (u_{j+1}(z), \ldots, u_n(z))). \qquad (2.115)$$

If, in addition, the function f is also commutative:

$$f(X_1, \ldots, X_j, X_{j+1}, \ldots, X_n) = f(X_1, \ldots, X_{j+1}, X_j, \ldots, X_n) \qquad (2.116)$$

for any j, then the $\otimes_f$ operator possesses the commutative property as well:

$$\otimes_f (u_1(z), \ldots, u_j(z), u_{j+1}(z), \ldots, u_n(z)) = \otimes_f (u_1(z), \ldots, u_{j+1}(z), u_j(z), \ldots, u_n(z)). \qquad (2.117)$$

In this case the order of arguments in the function $f(X_1, \ldots, X_n)$ is inessential, and the u-function U(z) can be obtained by applying the recursive procedures (2.111) and (2.113) over any permutation of the u-functions of the random arguments $X_1, \ldots, X_n$.
If a function takes the recursive form

$$f(f_1(X_1, \ldots, X_j), f_2(X_{j+1}, \ldots, X_h), \ldots, f_m(X_l, \ldots, X_n)), \qquad (2.118)$$

then the corresponding u-function U(z) can also be obtained recursively:

$$\otimes_f (\otimes_{f_1} (u_1(z), \ldots, u_j(z)), \otimes_{f_2} (u_{j+1}(z), \ldots, u_h(z)), \ldots, \otimes_{f_m} (u_l(z), \ldots, u_n(z))). \qquad (2.119)$$

Example 2.13
Consider the variables $X_1$, $X_2$, $X_3$ with pmf presented in Table 2.3. The u-functions of these variables are:

$$u_1(z) = 0.6z^5 + 0.3z^8 + 0.1z^{12};$$

$$u_2(z) = 0.7z^8 + 0.3z^{10};$$

$$u_3(z) = 0.6z^0 + 0.4z^2.$$


P P

The function Y=min(X1,X2,X3) possesses both commutative and associa-


tive properties. Therefore
min(min(X1,X2),X3)=min(min(X2,X1),X3)=min(min(X1, X3), X2)
=min(min(X3, X1), X2)=min(min(X2, X3), X1)

=min(min(X3, X2), X1).


The u-function of Y can be obtained using the recursive procedure

$$u_4(z) = u_1(z) \otimes_{\min} u_2(z) = (0.6z^5 + 0.3z^8 + 0.1z^{12}) \otimes_{\min} (0.7z^8 + 0.3z^{10})$$
$$= 0.42z^{\min\{5,8\}} + 0.21z^{\min\{8,8\}} + 0.07z^{\min\{12,8\}} + 0.18z^{\min\{5,10\}} + 0.09z^{\min\{8,10\}} + 0.03z^{\min\{12,10\}}$$
$$= 0.6z^5 + 0.37z^8 + 0.03z^{10};$$

$$U(z) = u_4(z) \otimes_{\min} u_3(z) = (0.6z^5 + 0.37z^8 + 0.03z^{10}) \otimes_{\min} (0.6z^0 + 0.4z^2)$$
$$= 0.36z^{\min\{5,0\}} + 0.222z^{\min\{8,0\}} + 0.018z^{\min\{10,0\}} + 0.24z^{\min\{5,2\}} + 0.148z^{\min\{8,2\}} + 0.012z^{\min\{10,2\}}$$
$$= 0.6z^0 + 0.4z^2.$$

The same u-function can also be obtained using another recursive procedure:

$$u_4(z) = u_1(z) \otimes_{\min} u_3(z) = (0.6z^5 + 0.3z^8 + 0.1z^{12}) \otimes_{\min} (0.6z^0 + 0.4z^2)$$
$$= 0.36z^{\min\{5,0\}} + 0.18z^{\min\{8,0\}} + 0.06z^{\min\{12,0\}} + 0.24z^{\min\{5,2\}} + 0.12z^{\min\{8,2\}} + 0.04z^{\min\{12,2\}}$$
$$= 0.6z^0 + 0.4z^2;$$

$$U(z) = u_4(z) \otimes_{\min} u_2(z) = (0.6z^0 + 0.4z^2) \otimes_{\min} (0.7z^8 + 0.3z^{10})$$
$$= 0.42z^{\min\{0,8\}} + 0.28z^{\min\{2,8\}} + 0.18z^{\min\{0,10\}} + 0.12z^{\min\{2,10\}} = 0.6z^0 + 0.4z^2.$$

Note that while both recursive procedures produce the same u-function, their computational complexity differs. In the first case, 12 term multiplication operations have been performed; in the second case, only 10 operations have been performed.
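The insensitivity to the order of composition is easy to confirm programmatically. A short sketch, reusing the dict-based compose function introduced in the earlier sketch:

```python
from collections import defaultdict

def compose(u1, u2, f):
    out = defaultdict(float)
    for x1, p1 in u1.items():
        for x2, p2 in u2.items():
            out[f(x1, x2)] += p1 * p2
    return dict(out)

u1 = {5: 0.6, 8: 0.3, 12: 0.1}
u2 = {8: 0.7, 10: 0.3}
u3 = {0: 0.6, 2: 0.4}

# Any association/order of the min-composition gives the same pmf
r1 = compose(compose(u1, u2, min), u3, min)
r2 = compose(compose(u1, u3, min), u2, min)
print(r1, r2)   # both {0: 0.6, 2: 0.4}, up to floating-point rounding
```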

Consider a random variable X with pmf represented by the u-function $u_X(z) = \sum_{j=0}^{k} p_j z^{x_j}$. In order to obtain the u-function representing the pmf of a function f(X, c) of the variable X and a constant c, one can apply the following simplified operator:

$$U(z) = u_X(z) \otimes_f c = \Bigl( \sum_{j=0}^{k} p_j z^{x_j} \Bigr) \otimes_f c = \sum_{j=0}^{k} p_j z^{f(x_j, c)}. \qquad (2.120)$$

This can easily be proved if we represent the constant c as the random variable C that takes the value c with probability 1. The u-function of such a variable takes the form

$$u_c(z) = z^c. \qquad (2.121)$$

Applying the operator $\otimes_f$ over the two u-functions $u_X(z)$ and $u_c(z)$ we obtain Eq. (2.120).

2.2.3 Obtaining the system reliability and performance indices using the UGF

Having the pmf of the random MSS output performance G and the pmf of the demand W in the form of the u-functions $U_{MSS}(z)$ and $u_w(z)$, one can obtain the u-functions representing the pmf of the random functions $F(G, W)$, $\tilde{G}(G, W)$, $D^-(G, W)$ or $D^+(G, W)$ (see Section 1.5.3) using the corresponding composition operators over $U_{MSS}(z)$ and $u_w(z)$:

$$U_F(z) = U_{MSS}(z) \otimes_F u_w(z), \qquad (2.122)$$

$$U_{\tilde{G}}(z) = U_{MSS}(z) \otimes_{\tilde{G}} u_w(z), \qquad (2.123)$$

$$U_D(z) = U_{MSS}(z) \otimes_D u_w(z). \qquad (2.124)$$

Since the expected values of the functions G, F, D and $\tilde{G}$ are equal to the derivatives of the corresponding u-functions $U_{MSS}(z)$, $U_F(z)$, $U_D(z)$ and $U_{\tilde{G}}(z)$ at z = 1, the MSS performance measures can now be obtained as

$$E(G) = U'_{MSS}(1), \qquad (2.125)$$

$$E(F(G, W)) = U'_F(1), \qquad (2.126)$$

$$E(D(G, W)) = U'_D(1), \qquad (2.127)$$

$$E(\tilde{G}(G, W)) / E(F(G, W)) = U'_{\tilde{G}}(1) / U'_F(1). \qquad (2.128)$$

Example 2.14
Consider two power system generators with a nominal capacity of 100
MW as two separate MSSs. In the first generator, some types of failure re-
quire its capacity G1 to be reduced to 60 MW and other types lead to a
complete outage. In the second generator, some types of failure require its
capacity G2 to be reduced to 80 MW, others lead to a capacity reduction to
40 MW, and others lead to a complete outage. The generators are repair-
able and each of their states has a steady-state probability.
Both generators should meet a variable two-level demand W. The high
level (day) demand is 50 MW and has the probability 0.6; the low level
(night) demand is 30 MW and has the probability 0.4.
The capacity and demand can be presented as a fraction of the nominal
generator capacity. There are three possible relative capacity levels that
characterize the performance of the first generator:
g10 = 0.0, g11 = 60/100 = 0.6, g12 = 100/100 = 1.0

and four relative capacity levels that characterize the performance of the
second generator:
g20 = 0.0, g21 = 40/100 = 0.4, g22 = 80/100 = 0.8, g23 = 100/100 = 1.0
Assume that the corresponding steady-state probabilities are
p10 = 0.1, p11 = 0.6, p12 = 0.3
for the first generator and
p20 = 0.05, p21 = 0.35, p22 = 0.3, p23 = 0.3
for the second generator and that the demand distribution is
w1 = 50/100 = 0.5, w2 = 30/100 = 0.3, q1 = 0.6, q2 = 0.4
The u-functions representing the capacity distribution of the generators (the pmf of the random variables $G_1$ and $G_2$) take the form

$$U_1(z) = 0.1z^0 + 0.6z^{0.6} + 0.3z^1, \quad U_2(z) = 0.05z^0 + 0.35z^{0.4} + 0.3z^{0.8} + 0.3z^1,$$

and the u-function representing the demand distribution takes the form

$$u_w(z) = 0.6z^{0.5} + 0.4z^{0.3}.$$
The mean steady-state performance (capacity) of the generators can be obtained directly from these u-functions:

$$\varepsilon_1 = E(G_1) = U'_1(1) = 0.1 \times 0 + 0.6 \times 0.6 + 0.3 \times 1.0 = 0.66,$$

which means 66% of the nominal generating capacity for the first generator, and

$$\varepsilon_2 = E(G_2) = U'_2(1) = 0.05 \times 0 + 0.35 \times 0.4 + 0.3 \times 0.8 + 0.3 \times 1.0 = 0.68,$$

which means 68% of the nominal generating capacity for the second generator.
The available generation capacity should be no less than the demand. Therefore, the system acceptability function takes the form

$$F(G, W) = 1(G \geq W)$$

and the system performance deficiency takes the form

$$D^-(G, W) = \max(W - G, 0).$$

The u-functions corresponding to the pmf of the acceptability function are obtained using the composition operator $\otimes_F$:

$$U_{F1}(z) = U_1(z) \otimes_F u_w(z) = (0.1z^0 + 0.6z^{0.6} + 0.3z^1) \otimes_F (0.6z^{0.5} + 0.4z^{0.3})$$
$$= 0.06z^{1(0 \geq 0.5)} + 0.36z^{1(0.6 \geq 0.5)} + 0.18z^{1(1 \geq 0.5)} + 0.04z^{1(0 \geq 0.3)} + 0.24z^{1(0.6 \geq 0.3)} + 0.12z^{1(1 \geq 0.3)}$$
$$= 0.06z^0 + 0.36z^1 + 0.18z^1 + 0.04z^0 + 0.24z^1 + 0.12z^1 = 0.9z^1 + 0.1z^0,$$

$$U_{F2}(z) = U_2(z) \otimes_F u_w(z) = (0.05z^0 + 0.35z^{0.4} + 0.3z^{0.8} + 0.3z^1) \otimes_F (0.6z^{0.5} + 0.4z^{0.3})$$
$$= 0.03z^0 + 0.21z^0 + 0.18z^1 + 0.18z^1 + 0.02z^0 + 0.14z^1 + 0.12z^1 + 0.12z^1 = 0.74z^1 + 0.26z^0.$$
The system availability (expected acceptability) is

$$A_1 = E(1(G_1 \geq W)) = U'_{F1}(1) = 0.9, \quad A_2 = E(1(G_2 \geq W)) = U'_{F2}(1) = 0.74.$$

The u-functions corresponding to the pmf of the performance deficiency function are obtained using the composition operator $\otimes_D$:

$$U_{D1}(z) = U_1(z) \otimes_D u_w(z) = (0.1z^0 + 0.6z^{0.6} + 0.3z^1) \otimes_D (0.6z^{0.5} + 0.4z^{0.3})$$
$$= 0.06z^{\max(0.5-0,0)} + 0.36z^{\max(0.5-0.6,0)} + 0.18z^{\max(0.5-1,0)} + 0.04z^{\max(0.3-0,0)} + 0.24z^{\max(0.3-0.6,0)} + 0.12z^{\max(0.3-1,0)}$$
$$= 0.06z^{0.5} + 0.36z^0 + 0.18z^0 + 0.04z^{0.3} + 0.24z^0 + 0.12z^0 = 0.06z^{0.5} + 0.04z^{0.3} + 0.9z^0,$$

$$U_{D2}(z) = U_2(z) \otimes_D u_w(z) = (0.05z^0 + 0.35z^{0.4} + 0.3z^{0.8} + 0.3z^1) \otimes_D (0.6z^{0.5} + 0.4z^{0.3})$$
$$= 0.03z^{0.5} + 0.21z^{0.1} + 0.18z^0 + 0.18z^0 + 0.02z^{0.3} + 0.14z^0 + 0.12z^0 + 0.12z^0$$
$$= 0.03z^{0.5} + 0.21z^{0.1} + 0.02z^{0.3} + 0.74z^0.$$



The expected performance deficiency is

$$\Delta_1 = E(\max(W - G_1, 0)) = U'_{D1}(1) = 0.06 \times 0.5 + 0.04 \times 0.3 + 0.9 \times 0 = 0.042,$$

$$\Delta_2 = E(\max(W - G_2, 0)) = U'_{D2}(1) = 0.03 \times 0.5 + 0.21 \times 0.1 + 0.02 \times 0.3 + 0.74 \times 0 = 0.042.$$

In this case, ∆ may be interpreted as the expected electrical power unsupplied to consumers. The absolute value of this unsupplied demand is 4.2 MW for both generators. Multiplying this index by T, the system operating time considered, one can obtain the expected unsupplied energy.
Note that since the performance measures obtained have different natures, they cannot be used interchangeably. For instance, in the present example the first generator performs better than the second one when availability is considered ($A_1 > A_2$), the second generator performs better than the first one when the expected capacity is considered ($\varepsilon_1 < \varepsilon_2$), and both generators have the same expected unsupplied demand ($\Delta_1 = \Delta_2$).
Now we determine the conditional expected system performance. The u-functions corresponding to the pmf of the function $\tilde{G}$ are obtained using the composition operator $\otimes_{\tilde{G}} = \otimes_{GF}$:

$$U_{\tilde{G}1}(z) = U_1(z) \otimes_{\tilde{G}} u_w(z) = (0.1z^0 + 0.6z^{0.6} + 0.3z^1) \otimes_{GF} (0.6z^{0.5} + 0.4z^{0.3})$$
$$= 0.06z^{0 \times 1(0 \geq 0.5)} + 0.36z^{0.6 \times 1(0.6 \geq 0.5)} + 0.18z^{1 \times 1(1 \geq 0.5)} + 0.04z^{0 \times 1(0 \geq 0.3)} + 0.24z^{0.6 \times 1(0.6 \geq 0.3)} + 0.12z^{1 \times 1(1 \geq 0.3)}$$
$$= 0.06z^0 + 0.36z^{0.6} + 0.18z^1 + 0.04z^0 + 0.24z^{0.6} + 0.12z^1 = 0.3z^1 + 0.6z^{0.6} + 0.1z^0,$$

$$U_{\tilde{G}2}(z) = U_2(z) \otimes_{\tilde{G}} u_w(z) = (0.05z^0 + 0.35z^{0.4} + 0.3z^{0.8} + 0.3z^1) \otimes_{GF} (0.6z^{0.5} + 0.4z^{0.3})$$
$$= 0.03z^0 + 0.21z^0 + 0.18z^{0.8} + 0.18z^1 + 0.02z^0 + 0.14z^{0.4} + 0.12z^{0.8} + 0.12z^1$$
$$= 0.3z^1 + 0.3z^{0.8} + 0.14z^{0.4} + 0.26z^0.$$

The system conditional expected performance is

$$\tilde{\varepsilon}_1 = U'_{\tilde{G}1}(1) / U'_{F1}(1) = (0.3 \times 1 + 0.6 \times 0.6 + 0.1 \times 0) / 0.9 = 0.66 / 0.9 = 0.733,$$

$$\tilde{\varepsilon}_2 = U'_{\tilde{G}2}(1) / U'_{F2}(1) = (0.3 \times 1 + 0.3 \times 0.8 + 0.14 \times 0.4 + 0.26 \times 0) / 0.74 = 0.596 / 0.74 = 0.805.$$

This means that generators 1 and 2, when they meet the variable demand, have average capacities of 73.3 MW and 80.5 MW respectively.
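All measures of this example can be reproduced with the same dict-based composition sketch, passing the acceptability and deficiency functions as ordinary Python functions:

```python
from collections import defaultdict

def compose(u1, u2, f):
    out = defaultdict(float)
    for x1, p1 in u1.items():
        for x2, p2 in u2.items():
            out[f(x1, x2)] += p1 * p2
    return dict(out)

expected = lambda u: sum(x * p for x, p in u.items())   # U'(1)

U1 = {0.0: 0.1, 0.6: 0.6, 1.0: 0.3}                     # generator 1
U2 = {0.0: 0.05, 0.4: 0.35, 0.8: 0.3, 1.0: 0.3}         # generator 2
uw = {0.5: 0.6, 0.3: 0.4}                               # demand

F = lambda g, w: 1.0 if g >= w else 0.0                 # acceptability
D = lambda g, w: max(w - g, 0.0)                        # deficiency

print(expected(compose(U1, uw, F)))   # A1 ~ 0.9
print(expected(compose(U2, uw, F)))   # A2 ~ 0.74
print(expected(compose(U1, uw, D)))   # Delta1 ~ 0.042
print(expected(compose(U2, uw, D)))   # Delta2 ~ 0.042
```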

Note that the acceptability function F(G, W) is a binary one. Therefore, if the demand is constant ($W \equiv w$), the operator $U_F(z) = U_{MSS}(z) \otimes_F z^w$ produces a u-function in which all of the terms corresponding to the acceptable states have the exponent 1 and all of the terms corresponding to the unacceptable states have the exponent 0. It can easily be seen that $U'_F(1)$ is equal to the sum of the coefficients of the terms with exponent 1. Therefore, instead of obtaining the u-function of F(G, w) and calculating its derivative at z = 1 for determining the system's reliability (availability), one can calculate the sum of the terms in $U_{MSS}(z)$ that correspond to the acceptable states. Introducing an operator $\delta_w(U_{MSS}(z))$ that produces the sum of the coefficients of those terms in $U_{MSS}(z)$ that have exponents g satisfying the condition F(g, w) = 1 (and correspond to the states acceptable for the demand level w), we obtain the following simple expression for the system's expected acceptability:

$$E(F(G, w)) = \delta_w(U_{MSS}(z)). \qquad (2.129)$$
When the demand is variable, the system reliability can also be obtained as

∑_{i=1}^M Pr(W = w_i)E(F(G, w_i)) = ∑_{i=1}^M q_i E(F(G, w_i)) = ∑_{i=1}^M q_i δ_{w_i}(U_MSS(z))    (2.130)

Example 2.15
Consider the two power system generators presented in the previous example and obtain the system availability directly from the u-functions U_MSS(z) using Eq. (2.130). Since, in this example, F(G, W) = 1(G ≥ W), the operator δ_w(U(z)) sums up the coefficients of the terms having exponents not less than w in the u-function U(z).
For the first generator, with U1(z) = 0.1z^0 + 0.6z^0.6 + 0.3z^1:

E(F(G1, w1)) = δ_0.5(0.1z^0 + 0.6z^0.6 + 0.3z^1) = 0.6 + 0.3 = 0.9

E(F(G1, w2)) = δ_0.3(0.1z^0 + 0.6z^0.6 + 0.3z^1) = 0.6 + 0.3 = 0.9

A1 = q1E(F(G1, w1)) + q2E(F(G1, w2)) = 0.6×0.9 + 0.4×0.9 = 0.9

For the second generator, with U2(z) = 0.05z^0 + 0.35z^0.4 + 0.3z^0.8 + 0.3z^1:

E(F(G2, 0.5)) = δ_0.5(0.05z^0 + 0.35z^0.4 + 0.3z^0.8 + 0.3z^1) = 0.3 + 0.3 = 0.6

E(F(G2, 0.3)) = δ_0.3(0.05z^0 + 0.35z^0.4 + 0.3z^0.8 + 0.3z^1) = 0.35 + 0.3 + 0.3 = 0.95

A2 = q1E(F(G2, w1)) + q2E(F(G2, w2)) = 0.6×0.6 + 0.4×0.95 = 0.74
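The u-function manipulations of this example map directly onto operations over small dictionaries. The following Python sketch is our own illustration (the helper names compose and delta_w are not from the text); it reproduces the availability values A1 = 0.9 and A2 = 0.74 via Eq. (2.130):

```python
# A u-function is represented as a dict {performance g: probability p}.

def compose(u1, u2, f):
    """Composition operator: pairwise combination of terms over function f."""
    res = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = f(g1, g2)
            res[g] = res.get(g, 0.0) + p1 * p2
    return res

def delta_w(u, w):
    """Operator delta_w: sum of coefficients of terms with exponent g >= w."""
    return sum(p for g, p in u.items() if g >= w)

u1 = {0.0: 0.1, 0.6: 0.6, 1.0: 0.3}              # U1(z) of generator 1
u2 = {0.0: 0.05, 0.4: 0.35, 0.8: 0.3, 1.0: 0.3}  # U2(z) of generator 2
demand = {0.5: 0.6, 0.3: 0.4}                    # pmf of the demand W

for name, u in (("A1", u1), ("A2", u2)):
    # Eq. (2.130): availability as a demand-weighted sum of delta_w values
    A = sum(q * delta_w(u, w) for w, q in demand.items())
    print(name, "=", round(A, 4))                # prints A1 = 0.9, A2 = 0.74
```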

2.2.4 UGF in analysis of series-parallel multi-state systems

Reliability block diagram method


Having a generic model of an MSS in the form (1.3) and (1.4), we can obtain the measures of MSS reliability and performance by applying the following steps:
1. Represent the pmf of the random performance of each system element j in the form of the u-function

u_j(z) = ∑_{i=0}^{k_j−1} p_ji z^{g_ji}, 1 ≤ j ≤ n    (2.131)

2. Obtain the u-function of the entire system (representing the pmf of the random variable G) by applying the composition operator that uses the system structure function.
3. Obtain the u-functions representing the random functions F, G̃ and D using operators (2.122)-(2.124).
4. Obtain the system reliability measures by calculating the values of the derivatives of the corresponding u-functions at z = 1 and applying Eqs. (2.125)-(2.128).
While steps 1 and 3 are rather trivial, step 2 may involve complicated computations. Indeed, the derivation of a system structure function for various types of systems is usually a difficult task.
As shown in Section 2.2.1, representing the functions in the recursive
form is beneficial from both the derivation clarity and computation sim-
plicity viewpoints. In many cases, the structure function of the entire MSS
can be represented as the composition of the structure functions corre-
sponding to some subsets of the system elements (MSS subsystems). The

u-functions of the subsystems can be obtained separately and the subsys-


tems can be further treated as single equivalent elements with the perform-
ance pmf represented by these u-functions.
The method for distinguishing recurrent subsystems and replacing them
with single equivalent elements is based on a graphical representation of
the system structure and is referred to as the reliability block diagram
method. This approach is usually applied to systems with a complex se-
ries-parallel configuration.
While the structure function of a binary series-parallel system is unam-
biguously determined by its configuration (represented by the reliability
block diagram), the structure function of a series-parallel MSS also de-
pends on the physical meaning of the system and of the elements’ per-
formance and on the nature of the interaction among the elements.

Series Systems. In the flow transmission MSS, where performance is defined as capacity or productivity, the total capacity of a subsystem containing n independent elements connected in series is equal to the capacity of the bottleneck element (the element with the least performance). Therefore, the structure function for such a subsystem takes the form

φ_ser(G1,…,Gn) = min{G1,…,Gn}    (2.132)
In the task processing MSS, where the performance is defined as the
processing speed (or operation time), each system element has its own op-
eration time and the system’s total task completion time is restricted. The
entire system typically has a time resource that is larger than the time
needed to perform the system’s total task. However, unavailability or dete-
riorated performance of the system elements may cause time delays, which
in turn would cause the system’s total task performance time to be unsatis-
factory. The definition of the structure function for task processing systems
depends on the discipline of the elements' interaction in the system.
When the system operation is associated with consecutive discrete actions performed by an ordered line of elements, each element starts its operation after the previous one has completed its operation. Assume that the random performance Gj of each element j is characterized by its processing speed. The random processing time Tj of any system element j is defined as Tj = 1/Gj. The total time of task completion for the entire system is

T = ∑_{j=1}^n T_j = ∑_{j=1}^n G_j^{−1}    (2.133)

The entire system processing speed is therefore

G = 1/T = (∑_{j=1}^n G_j^{−1})^{−1}    (2.134)

Note that if Gj = 0 for any j this equation cannot be used, but it is obvious that in this case G = 0. Therefore, one can define the structure function for the series task processing system as

φ_ser(G1,…,Gn) = ×(G1,…,Gn) = 1/∑_{j=1}^n G_j^{−1} if ∏_{j=1}^n G_j ≠ 0, and 0 if ∏_{j=1}^n G_j = 0    (2.135)

One can see that the structure functions presented above are associative
and commutative (i.e. meet conditions (2.114) and (2.116)). Therefore, the
u-functions for any series system of described types can be obtained recur-
sively by consecutively determining the u-functions of arbitrary subsets of
the elements. For example, the u-function of a system consisting of four
elements connected in a series can be determined in the following ways:
[(u1(z) ⊗_φser u2(z)) ⊗_φser u3(z)] ⊗_φser u4(z) = (u1(z) ⊗_φser u2(z)) ⊗_φser (u3(z) ⊗_φser u4(z))    (2.136)

and by any permutation of the elements' u-functions in this expression.

Example 2.16
Consider a system consisting of n elements with the total failures con-
nected in series. Each element j has only two states: operational with a
nominal performance of gj1 and failure with a performance of zero. The
probability of the operational state is pj1. The u-function of such an ele-
ment is presented by the following expression:

u_j(z) = (1 − p_j1)z^0 + p_j1 z^{g_j1}, j = 1, …, n

In order to find the u-function for the entire MSS, the corresponding ⊗_φser operators should be applied. For the MSS with the structure function (2.132) the system u-function takes the form

U(z) = ⊗_min(u1(z),…,un(z)) = (1 − ∏_{j=1}^n p_j1)z^0 + ∏_{j=1}^n p_j1 z^min{g11,…,gn1}

For the MSS with the structure function (2.135) the system u-function takes the form

U(z) = ⊗_×{u1(z),…,un(z)} = (1 − ∏_{j=1}^n p_j1)z^0 + ∏_{j=1}^n p_j1 z^((∑_{j=1}^n g_j1^{−1})^{−1})

Since the failure of each single element causes the failure of the entire system, the MSS can have only two states: one with a performance level of zero (failure of at least one element) and one with the performance level ĝ = min{g11,…,gn1} for the flow transmission MSS and ĝ = 1/∑_{j=1}^n g_j1^{−1} for the task processing MSS.
The measures of the system performance A(w) = Pr{G ≥ w}, ∆−(w) = E(max(w − G, 0)) and ε = E(G) are presented in Table 2.4.

Table 2.4. Measures of MSS performance

w            A(w)              ∆−(w)                                             ε
w > ĝ        0                 w(1 − ∏_{j=1}^n p_j1) + (w − ĝ)∏_{j=1}^n p_j1     ĝ ∏_{j=1}^n p_j1
                               = w − ĝ ∏_{j=1}^n p_j1
0 < w ≤ ĝ    ∏_{j=1}^n p_j1    w(1 − ∏_{j=1}^n p_j1)                             ĝ ∏_{j=1}^n p_j1

The u-function of a subsystem containing n identical elements (p_j1 = p, g_j1 = g for any j) takes the form

(1 − p^n)z^0 + p^n z^g    (2.137)

for the system with the structure function (2.132) and

(1 − p^n)z^0 + p^n z^{g/n}    (2.138)

for the system with the structure function (2.135).

Parallel Systems. In the flow transmission MSS, in which the flow can be dispersed and transferred by parallel channels simultaneously (which provides the work sharing), the total capacity of a subsystem containing n independent elements connected in parallel is equal to the sum of the capacities of the individual elements. Therefore, the structure function for such a subsystem takes the form

φ_par(G1,…,Gn) = +(G1,…,Gn) = ∑_{j=1}^n G_j    (2.139)

In some cases, only one channel out of n can be chosen for the flow transmission (no flow dispersion is allowed). This happens when the transmission is associated with the consumption of certain limited resources that do not allow the simultaneous use of more than one channel. The most effective way for such a system to function is by choosing the channel with the greatest transmission capacity from the set of available channels. In this case, the structure function takes the form

φ_par(G1,…,Gn) = max{G1,…,Gn}    (2.140)

In the task processing MSS, the definition of the structure function de-
pends on the nature of the elements’ interaction within the system.
First consider a system without work sharing in which the parallel ele-
ments act in a competitive manner. If the system contains n parallel ele-
ments, then all the elements begin to execute the same task simultaneously.
The task is assumed to be completed by the system when it is completed
by at least one of its elements. The entire system processing time is
defined by the minimum element processing time and the entire system
processing speed is defined by the maximum element processing speed.
Therefore, the system structure function coincides with (2.140).
Now consider a system of n parallel elements with work sharing, for which the following assumptions are made:
1. The work x to be performed can be divided among the system elements in any proportion.
2. The time required to make a decision about the optimal work sharing is negligible; the decision is made before the task execution and is based on information about the state of the elements at the instant when the demand for task execution arrives.
3. The probability of element failure during any task execution is negligible.
The elements start performing the work simultaneously, sharing its total amount x in such a manner that element j has to perform the portion x_j of the work, with x = ∑_{j=1}^n x_j. The time of the work processed by element j is x_j/G_j. The system processing time is defined as the time during which the last portion of the work is completed: T = max_{1≤j≤n}{x_j/G_j}. The minimal time of the entire work completion can be achieved if the elements share the work in proportion to their processing speeds G_j: x_j = xG_j/∑_{k=1}^n G_k. The system processing time T in this case is equal to x/∑_{k=1}^n G_k, and the total processing speed G is equal to the sum of the processing speeds of the elements. Therefore, the structure function of such a system coincides with the structure function (2.139).
One can see that the structure functions presented also meet conditions
(2.114) and (2.116). Therefore, the u-functions for any parallel system of
described types can be obtained recursively by the consecutive determina-
tion of u-functions of arbitrary subsets of the elements.

Example 2.17
Consider a system consisting of two elements with total failures connected
in parallel. The elements have nominal performance g11 and g21 (g11<g21)
and the probability of operational state p11 and p21 respectively. The per-
formances in the failed states are g10 = g20 =0. The u-function for the entire
MSS is
U(z) = u1(z) ⊗_φpar u2(z) = [(1 − p11)z^0 + p11 z^{g11}] ⊗_φpar [(1 − p21)z^0 + p21 z^{g21}],

which for the structure function (2.139) takes the form

U(z) = (1 − p11)(1 − p21)z^0 + p11(1 − p21)z^{g11} + p21(1 − p11)z^{g21} + p11p21 z^{g11+g21}

and for the structure function (2.140) takes the form

U(z) = (1 − p11)(1 − p21)z^0 + p11(1 − p21)z^{g11} + p21(1 − p11)z^{g21} + p11p21 z^{max(g11, g21)}
= (1 − p11)(1 − p21)z^0 + p11(1 − p21)z^{g11} + p21 z^{g21}.

The measures of the system output performance for MSSs of both types
are presented in Tables 2.5 and 2.6.

Table 2.5. Measures of MSS performance for the system with structure function (2.139)

w                    A(w)                  ∆−(w)                                               ε
w > g11+g21          0                     w − p11g11 − p21g21                                 p11g11 + p21g21
g21 < w ≤ g11+g21    p11p21                g11p11(p21 − 1) + g21p21(p11 − 1) + w(1 − p11p21)
g11 < w ≤ g21        p21                   (1 − p21)(w − g11p11)
0 < w ≤ g11          p11 + p21 − p11p21    (1 − p11)(1 − p21)w

Table 2.6. Measures of MSS performance for the system with structure function (2.140)

w                A(w)                  ∆−(w)                              ε
w > g21          0                     w − p11g11 − p21g21 + p11p21g11    p11(1 − p21)g11 + p21g21
g11 < w ≤ g21    p21                   (1 − p21)(w − g11p11)
0 < w ≤ g11      p11 + p21 − p11p21    (1 − p11)(1 − p21)w

The u-function of a subsystem containing n identical parallel elements (p_j1 = p, g_j1 = g for any j) can be obtained by applying the operator ⊗_φpar(u(z),…,u(z)) over n identical u-functions u(z) of an individual element. The u-function of this subsystem takes the form

∑_{k=0}^n [n!/(k!(n − k)!)] p^k (1 − p)^{n−k} z^{kg}    (2.141)

for the structure function (2.139) and

(1 − p)^n z^0 + (1 − (1 − p)^n) z^g    (2.142)

for the structure function (2.140).
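As a quick numerical illustration, both forms (2.141) and (2.142) can be generated for arbitrary n, p and g. The sketch below is our own (the helper name u_parallel_identical is hypothetical):

```python
from math import comb

def u_parallel_identical(n, p, g, dispersion=True):
    """u-function of n identical two-state elements connected in parallel.

    dispersion=True  -> structure function (2.139): capacities add up,
                        which yields the binomial form (2.141);
    dispersion=False -> structure function (2.140): only the best channel
                        is used, which yields the two-term form (2.142).
    """
    if dispersion:
        return {k * g: comb(n, k) * p**k * (1 - p)**(n - k)
                for k in range(n + 1)}
    return {0.0: (1 - p)**n, g: 1 - (1 - p)**n}

print(u_parallel_identical(3, 0.9, 5.0))         # pmf over capacities 0,5,10,15
print(u_parallel_identical(3, 0.9, 5.0, False))  # {0.0: 0.001, 5.0: 0.999}
```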
Series-Parallel Systems. The structure functions of complex series-
parallel systems can always be represented as compositions of the structure
functions of statistically independent subsystems containing only elements
connected in a series or in parallel. Therefore, in order to obtain the u-
function of a series-parallel system one has to apply the composition op-
erators recursively in order to obtain u-functions of the intermediate pure
series or pure parallel structures.
The following algorithm realizes this approach:
1. Find the pure parallel and pure series subsystems in the MSS.
2. Obtain the u-functions of these subsystems using the corresponding ⊗_φser and ⊗_φpar operators.
3. Replace each subsystem with a single element having the u-function obtained for that subsystem.
4. If the MSS contains more than one element, return to step 1.
The resulting u-function represents the performance distribution of the
entire system.
The choice of the structure functions used for series and parallel subsys-
tems depends on the type of system. Table 2.7 presents the possible com-
binations of structure functions corresponding to the different types of
MSS.

Table 2.7. Structure functions for purely series and purely parallel subsystems

No of MSS type    Description of MSS                               φ_ser      φ_par
1                 Flow transmission MSS with flow dispersion       (2.132)    (2.139)
2                 Flow transmission MSS without flow dispersion    (2.132)    (2.140)
3                 Task processing MSS with work sharing            (2.135)    (2.139)
4                 Task processing MSS without work sharing         (2.135)    (2.140)
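In code, the recursive reduction described above amounts to folding the element u-functions pairwise with the appropriate operators. The sketch below is our own minimal illustration (the helper names are hypothetical; compose is the same function as in the earlier sketch), wired for the four MSS types of Table 2.7:

```python
from functools import reduce

def compose(u1, u2, f):
    res = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = f(g1, g2)
            res[g] = res.get(g, 0.0) + p1 * p2
    return res

def ser_task(a, b):
    """Structure function (2.135) for two elements (associative)."""
    return 0.0 if a == 0 or b == 0 else 1.0 / (1.0 / a + 1.0 / b)

# (phi_ser, phi_par) pairs for the four MSS types of Table 2.7
PHI = {1: (min, lambda a, b: a + b),       # flow transmission, flow dispersion
       2: (min, max),                      # flow transmission, no dispersion
       3: (ser_task, lambda a, b: a + b),  # task processing, work sharing
       4: (ser_task, max)}                 # task processing, no work sharing

def series(us, phi_ser):
    return reduce(lambda a, b: compose(a, b, phi_ser), us)

def parallel(us, phi_par):
    return reduce(lambda a, b: compose(a, b, phi_par), us)

# Example: two parallel channels feeding a series element, MSS type 1
phi_ser, phi_par = PHI[1]
u_a = {0.0: 0.1, 3.0: 0.9}
u_b = {0.0: 0.2, 2.0: 0.8}
u_c = {0.0: 0.05, 4.0: 0.95}
U = series([parallel([u_a, u_b], phi_par), u_c], phi_ser)
print(U)  # pmf of the system performance
```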

2.2.5 Combination of random processes methods and the UGF technique

In many cases, the state probability distributions of system elements are unknown, whereas the state transition rates (failure and repair rates) can easily be evaluated from history data or mathematical models. The Markov process theory allows the analyst to obtain the probability of any system state at any time by solving a system of differential equations.
The main difficulty in applying random processes methods to MSS reliability evaluation is the "dimension damnation" (curse of dimensionality). Indeed, the number of differential equations that should be solved using the Markov approach is equal to the total number of MSS states (the product of the numbers of states of all of the system elements). This number can be very large even for a relatively small MSS. Even though modern software tools provide solutions for high-order systems of differential equations, building the state-space diagram and deriving the corresponding system of differential equations is a difficult non-formalized process that may cause numerous mistakes.
The UGF-based reliability block diagram technique can be used to reduce the dimension of the system of equations obtained by the random process method. The main idea of the approach lies in solving separate smaller systems of equations for each MSS element and then combining the solutions using the UGF technique in order to obtain the dynamic behavior of the entire system. The approach not only separates the equations but also reduces the total number of equations to be solved. The approach was introduced in (Lisnianski & Levitin 2003) and extended

in (Lisnianski 2004) and in (Lisnianski 2007b). The basic steps of the approach are as follows:
1. Build the random process Markov model for each MSS element (considering only the state transitions within this element). Obtain the two sets g_j = {g_j1, g_j2, …, g_jk_j} and p_j(t) = {p_j1(t), p_j2(t), …, p_jk_j(t)} for each element j (1 ≤ j ≤ n) by solving the system of k_j ordinary differential equations. Note that instead of solving one high-order system of ∏_{j=1}^n k_j equations, one has to solve n low-order systems with the total number of equations ∑_{j=1}^n k_j.
2. Having the sets g_j and p_j(t) for each element j, define the u-function of this element in the form u_j(z) = p_j1(t)z^{g_j1} + p_j2(t)z^{g_j2} + … + p_jk_j(t)z^{g_jk_j}.
3. Using the generalized RBD method, obtain the resulting u-function for the entire MSS.
4. Obtain the u-functions representing the random functions F, G̃ and D using operators (2.122)-(2.124).
5. Obtain the system reliability measures by calculating the values of the derivatives of the corresponding u-functions at z = 1 and applying Eqs. (2.125)-(2.128).

Example 2.18
Consider a flow transmission system (Fig. 2.25) consisting of three pipes. The oil flow is transmitted from point A to point B. The pipes' performance is measured by their transmission capacity (tons per minute). Elements 1 and 2 are binary. For both elements, a state of total failure corresponds to a transmission capacity of 0, and the operational state corresponds to capacities of 1.5 and 2 tons per minute respectively, so that G1(t) ∈ {0, 1.5} and G2(t) ∈ {0, 2}. Element 3 can be in one of three states: a state of total failure corresponding to a capacity of 0, a state of partial failure corresponding to a capacity of 1.8 tons per minute, and a fully operational state with a capacity of 4 tons per minute, so that G3(t) ∈ {0, 1.8, 4}. The demand is constant: θ* = 1.0 ton per minute.
The system output performance rate V(t) is defined as the maximum flow that can be transmitted between points A and B:

V(t) = min{G1(t) + G2(t), G3(t)}.
Fig. 2.25. Simple flow transmission MSS

The state-space diagrams of the system elements are shown in Fig. 2.26.

Fig. 2.26. State-space diagrams and u-functions of the system elements: u1(z) = p11(t)z^0 + p12(t)z^1.5, u2(z) = p21(t)z^0 + p22(t)z^2.0, u3(z) = p31(t)z^0 + p32(t)z^1.8 + p33(t)z^4.0


The failure rates and repair rates corresponding to these two elements are

λ^(1)_{2,1} = 7 year⁻¹, µ^(1)_{1,2} = 100 year⁻¹ for element 1,
λ^(2)_{2,1} = 10 year⁻¹, µ^(2)_{1,2} = 80 year⁻¹ for element 2.

Element 3 is a multi-state element with only minor failures and minor repairs. The failure rates and repair rates corresponding to element 3 are

λ^(3)_{3,2} = 10 year⁻¹, λ^(3)_{3,1} = 0, λ^(3)_{2,1} = 7 year⁻¹,
µ^(3)_{1,3} = 0, µ^(3)_{1,2} = 120 year⁻¹, µ^(3)_{2,3} = 110 year⁻¹.

According to the classical Markov approach, one has to enumerate all the system states corresponding to the different combinations of all possible states of the system elements (characterized by their performance levels). The total number of different system states is K = k1k2k3 = 2×2×3 = 12. The state-space diagram of the system is shown in Fig. 2.27 (in this diagram, the vector of element performances for each state and the corresponding system performance are shown respectively in the upper and lower parts of the ellipses).
Then the state transition analysis should be performed for all pairs of system states. For example, for state number 2, where the states of the elements are {g11, g22, g33} = {0, 2, 4}, the transitions to states 1, 5 and 6 exist with the intensities µ^(1)_{1,2}, λ^(2)_{2,1}, λ^(3)_{3,2} respectively.
The corresponding system of differential equations for the state probabilities p_i(t), 1 ≤ i ≤ 12, takes the form:

dp1(t)/dt = −(λ^(1)_{2,1} + λ^(2)_{2,1} + λ^(3)_{3,2})p1(t) + µ^(1)_{1,2}p2(t) + µ^(2)_{1,2}p3(t) + µ^(3)_{2,3}p4(t),

dp2(t)/dt = λ^(1)_{2,1}p1(t) − (µ^(1)_{1,2} + λ^(2)_{2,1} + λ^(3)_{3,2})p2(t) + µ^(2)_{1,2}p5(t) + µ^(3)_{2,3}p6(t),

dp3(t)/dt = λ^(2)_{2,1}p1(t) − (µ^(2)_{1,2} + λ^(1)_{2,1} + λ^(3)_{3,2})p3(t) + µ^(1)_{1,2}p5(t) + µ^(3)_{2,3}p7(t),

dp4(t)/dt = λ^(3)_{3,2}p1(t) − (µ^(3)_{2,3} + λ^(1)_{2,1} + λ^(2)_{2,1} + λ^(3)_{2,1})p4(t) + µ^(1)_{1,2}p6(t) + µ^(2)_{1,2}p7(t) + µ^(3)_{1,2}p8(t),

dp5(t)/dt = λ^(2)_{2,1}p2(t) + λ^(1)_{2,1}p3(t) − (µ^(2)_{1,2} + µ^(1)_{1,2} + λ^(3)_{3,2})p5(t) + µ^(3)_{2,3}p9(t),

dp6(t)/dt = λ^(3)_{3,2}p2(t) + λ^(1)_{2,1}p4(t) − (µ^(3)_{2,3} + µ^(1)_{1,2} + λ^(2)_{2,1} + λ^(3)_{2,1})p6(t) + µ^(2)_{1,2}p9(t) + µ^(3)_{1,2}p10(t),

dp7(t)/dt = λ^(3)_{3,2}p3(t) + λ^(2)_{2,1}p4(t) − (µ^(3)_{2,3} + µ^(2)_{1,2} + λ^(1)_{2,1} + λ^(3)_{2,1})p7(t) + µ^(1)_{1,2}p9(t) + µ^(3)_{1,2}p11(t),

dp8(t)/dt = λ^(3)_{2,1}p4(t) − (µ^(3)_{1,2} + λ^(1)_{2,1} + λ^(2)_{2,1})p8(t) + µ^(1)_{1,2}p10(t) + µ^(2)_{1,2}p11(t),

dp9(t)/dt = λ^(3)_{3,2}p5(t) + λ^(2)_{2,1}p6(t) + λ^(1)_{2,1}p7(t) − (µ^(3)_{2,3} + µ^(2)_{1,2} + µ^(1)_{1,2} + λ^(3)_{2,1})p9(t) + µ^(3)_{1,2}p12(t),

dp10(t)/dt = λ^(3)_{2,1}p6(t) + λ^(1)_{2,1}p8(t) − (µ^(3)_{1,2} + µ^(1)_{1,2} + λ^(2)_{2,1})p10(t) + µ^(2)_{1,2}p12(t),

dp11(t)/dt = λ^(3)_{2,1}p7(t) + λ^(2)_{2,1}p8(t) − (µ^(3)_{1,2} + µ^(2)_{1,2} + λ^(1)_{2,1})p11(t) + µ^(1)_{1,2}p12(t),

dp12(t)/dt = λ^(3)_{2,1}p9(t) + λ^(2)_{2,1}p10(t) + λ^(1)_{2,1}p11(t) − (µ^(3)_{1,2} + µ^(2)_{1,2} + µ^(1)_{1,2})p12(t).

Solving this system with the initial conditions p1(0) = 1, p_i(0) = 0 for 2 ≤ i ≤ 12, one obtains the probability of each state at time t.
According to Fig. 2.27, in the different states the MSS has the following performance rates: in state 1, v1 = 3.5; in state 2, v2 = 2.0; in states 4 and 6, v4 = v6 = 1.8; in states 3 and 7, v3 = v7 = 1.5; in states 5, 8, 9, 10, 11 and 12, v5 = v8 = v9 = v10 = v11 = v12 = 0. Therefore,

Pr{V=3.5} = p1(t), Pr{V=2.0} = p2(t), Pr{V=1.8} = p4(t) + p6(t),
Pr{V=1.5} = p3(t) + p7(t),
Pr{V=0} = p5(t) + p8(t) + p9(t) + p10(t) + p11(t) + p12(t).

For the constant demand level θ* = 1, one obtains the MSS instantaneous availability as the sum of the probabilities of the states where the MSS output performance is greater than or equal to 1. The states 1, 2, 3, 4, 6 and 7 are acceptable. Hence A(t) = p1(t) + p2(t) + p3(t) + p4(t) + p6(t) + p7(t).
The MSS instantaneous expected performance is W(t) = ∑_{i=1}^{12} p_i(t)v_i.

Fig. 2.27. State-space diagram for the entire system

Solving the system of 12 differential equations is a quite complicated task that can be accomplished only numerically. Applying the combination of the Markov and UGF techniques, one can drastically simplify the calculations and even obtain an analytical solution for the reliability and expected performance of the given system. One should proceed as follows:
1. According to the Markov method, build the following systems of differential equations for each element separately (using the state-space diagrams presented in Fig. 2.26):
For element 1:

dp11(t)/dt = −µ^(1)_{1,2}p11(t) + λ^(1)_{2,1}p12(t)
dp12(t)/dt = −λ^(1)_{2,1}p12(t) + µ^(1)_{1,2}p11(t)
The initial conditions are: p12(0) = 1, p11(0) = 0.
For element 2:

dp21(t)/dt = −µ^(2)_{1,2}p21(t) + λ^(2)_{2,1}p22(t)
dp22(t)/dt = −λ^(2)_{2,1}p22(t) + µ^(2)_{1,2}p21(t)

The initial conditions are: p22(0) = 1, p21(0) = 0.
For element 3:

dp31(t)/dt = −µ^(3)_{1,2}p31(t) + λ^(3)_{2,1}p32(t)
dp32(t)/dt = λ^(3)_{3,2}p33(t) − (λ^(3)_{2,1} + µ^(3)_{2,3})p32(t) + µ^(3)_{1,2}p31(t)
dp33(t)/dt = −λ^(3)_{3,2}p33(t) + µ^(3)_{2,3}p32(t)

The initial conditions are: p31(0) = p32(0) = 0, p33(0) = 1.
After solving the three separate systems of differential equations under the given initial conditions, we get the following expressions for the state probabilities:
For element 1:

p11(t) = λ^(1)_{2,1}/(µ^(1)_{1,2} + λ^(1)_{2,1}) − [λ^(1)_{2,1}/(µ^(1)_{1,2} + λ^(1)_{2,1})] e^(−(λ^(1)_{2,1} + µ^(1)_{1,2})t),

p12(t) = µ^(1)_{1,2}/(µ^(1)_{1,2} + λ^(1)_{2,1}) + [λ^(1)_{2,1}/(µ^(1)_{1,2} + λ^(1)_{2,1})] e^(−(λ^(1)_{2,1} + µ^(1)_{1,2})t).

For element 2:

p21(t) = λ^(2)_{2,1}/(µ^(2)_{1,2} + λ^(2)_{2,1}) − [λ^(2)_{2,1}/(µ^(2)_{1,2} + λ^(2)_{2,1})] e^(−(λ^(2)_{2,1} + µ^(2)_{1,2})t),

p22(t) = µ^(2)_{1,2}/(µ^(2)_{1,2} + λ^(2)_{2,1}) + [λ^(2)_{2,1}/(µ^(2)_{1,2} + λ^(2)_{2,1})] e^(−(λ^(2)_{2,1} + µ^(2)_{1,2})t).

For element 3:

p31(t) = A1e^(αt) + A2e^(βt) + A3,
p32(t) = B1e^(αt) + B2e^(βt) + B3,
p33(t) = C1e^(αt) + C2e^(βt) + C3,

where

α = −η/2 + √(η²/4 − ζ), β = −η/2 − √(η²/4 − ζ),

A1 = λ^(3)_{2,1}λ^(3)_{3,2}/(α(α − β)), A2 = λ^(3)_{2,1}λ^(3)_{3,2}/(β(β − α)), A3 = λ^(3)_{2,1}λ^(3)_{3,2}/ζ,

B1 = (µ^(3)_{1,2} + α)λ^(3)_{3,2}/(α(α − β)), B2 = (µ^(3)_{1,2} + β)λ^(3)_{3,2}/(β(β − α)), B3 = µ^(3)_{1,2}λ^(3)_{3,2}/ζ,

C1 = µ^(3)_{2,3}(µ^(3)_{1,2} + α)λ^(3)_{3,2}/(α(α − β)(α + λ^(3)_{3,2})),
C2 = µ^(3)_{2,3}(µ^(3)_{1,2} + β)λ^(3)_{3,2}/(β(β − α)(β + λ^(3)_{3,2})),
C3 = µ^(3)_{1,2}µ^(3)_{2,3}/ζ,

η = λ^(3)_{2,1} + λ^(3)_{3,2} + µ^(3)_{1,2} + µ^(3)_{2,3}, ζ = λ^(3)_{2,1}λ^(3)_{3,2} + µ^(3)_{1,2}µ^(3)_{2,3} + µ^(3)_{1,2}λ^(3)_{3,2}.

After determining the state probabilities for each element, we obtain the following performance distributions:
for element 1: g1 = {g11, g12} = {0, 1.5}, p1(t) = {p11(t), p12(t)};
for element 2: g2 = {g21, g22} = {0, 2.0}, p2(t) = {p21(t), p22(t)};
for element 3: g3 = {g31, g32, g33} = {0, 1.8, 4.0}, p3(t) = {p31(t), p32(t), p33(t)}.
2. Having the sets g_j and p_j(t) for j = 1, 2, 3 obtained in the first step, we can define the u-functions of the individual elements as:

u1(z) = p11(t)z^{g11} + p12(t)z^{g12} = p11(t)z^0 + p12(t)z^1.5,
u2(z) = p21(t)z^{g21} + p22(t)z^{g22} = p21(t)z^0 + p22(t)z^2,
u3(z) = p31(t)z^{g31} + p32(t)z^{g32} + p33(t)z^{g33} = p31(t)z^0 + p32(t)z^1.8 + p33(t)z^4.


3. Using the composition operators for the flow transmission MSS, we obtain the resulting u-function for the entire series-parallel MSS, U(z) = [u1(z) ⊗_+ u2(z)] ⊗_min u3(z), by the following recursive procedure:

u1(z) ⊗_+ u2(z) = [p11(t)z^0 + p12(t)z^1.5] ⊗_+ [p21(t)z^0 + p22(t)z^2]
= p11(t)p21(t)z^0 + p12(t)p21(t)z^1.5 + p11(t)p22(t)z^2 + p12(t)p22(t)z^3.5,

U(z) = u3(z) ⊗_min [u1(z) ⊗_+ u2(z)]
= [p31(t)z^0 + p32(t)z^1.8 + p33(t)z^4] ⊗_min [p11(t)p21(t)z^0 + p12(t)p21(t)z^1.5 + p11(t)p22(t)z^2 + p12(t)p22(t)z^3.5]
= p31(t)p11(t)p21(t)z^0 + p31(t)p12(t)p21(t)z^0 + p31(t)p11(t)p22(t)z^0 + p31(t)p12(t)p22(t)z^0
+ p32(t)p11(t)p21(t)z^0 + p32(t)p12(t)p21(t)z^1.5 + p32(t)p11(t)p22(t)z^1.8 + p32(t)p12(t)p22(t)z^1.8
+ p33(t)p11(t)p21(t)z^0 + p33(t)p12(t)p21(t)z^1.5 + p33(t)p11(t)p22(t)z^2 + p33(t)p12(t)p22(t)z^3.5.
Taking into account that p31(t) + p32(t) + p33(t) = 1, p21(t) + p22(t) = 1 and p11(t) + p12(t) = 1, we obtain the u-function that determines the performance distribution v, q(t) of the entire MSS in the form U(z) = ∑_{i=1}^5 q_i(t)z^{v_i}, where

v1 = 0, q1(t) = p11(t)p21(t) + p31(t)p12(t) + p31(t)p11(t)p22(t);
v2 = 1.5 tons/min, q2(t) = p12(t)p21(t)[p32(t) + p33(t)];
v3 = 1.8 tons/min, q3(t) = p32(t)p22(t);
v4 = 2.0 tons/min, q4(t) = p33(t)p11(t)p22(t);
v5 = 3.5 tons/min, q5(t) = p33(t)p12(t)p22(t).
4. Based on the entire MSS u-function U(z), we obtain the MSS reliability indices.
The instantaneous MSS availability for different demand levels θ* takes the form

A(t) = q2(t) + q3(t) + q4(t) + q5(t) for 0 < θ* ≤ 1.5;
A(t) = q3(t) + q4(t) + q5(t) for 1.5 < θ* ≤ 1.8;
A(t) = q4(t) + q5(t) for 1.8 < θ* ≤ 2;
A(t) = q5(t) for 2 < θ* ≤ 3.5;
A(t) = 0 for θ* > 3.5.

The instantaneous expected performance at any instant t > 0 is

W(t) = ∑_{i=1}^5 q_i(t)v_i = 1.5q2(t) + 1.8q3(t) + 2q4(t) + 3.5q5(t).
The obtained functions W(t) and A(t) for θ*=1 are presented in Fig. 2.28.

Fig. 2.28. System availability for θ* = 1 and instantaneous expected performance

2.3 Monte Carlo simulation

Monte Carlo simulation (MCS) is applied to problems involving random variables with known or assumed probability distributions. Each simulation uses a particular set of values of the random variables generated in accordance with the corresponding probability distributions. The process is repeated using different sets of values of the random variables. The MCS results are presented in histogram form, which is especially useful for further statistical assessment. The MCS process is actually deterministic for every given set of random numbers generated in advance from the prescribed probability distributions.
As such, simulation can be performed either analytically or numerically. With the advent of computers, numerically performed simulation has become a much more practical tool, widely applied to study system performance for engineering purposes. The simulation process allows for estimating a specific performance measure for a given set of system parameters. The sensitivity of the system performance to variations in the system parameters may be examined through repeated simulations. Simulation is also used to check alternative designs and determine the optimal one.
Monte Carlo methods can be applied to large and complex systems. They are generally used as a very effective means when analytical solution methods are not available or are too complex. For high accuracy, the Monte Carlo method requires a large number of samples.

2.3.1 Generation of random numbers

Generation of random variables in accordance with the respective prescribed probability distributions is a significant part of the MCS process. This can be performed for each variable by first generating a uniformly distributed random number between 0 and 1. The basis for this is as follows.
Suppose X is a random variable with distribution function F(x). Then, at a given probability F(x) = u, the value of x is

x = F⁻¹(u)    (2.143)

Now suppose that u is a specific value of the standard uniformly distributed variable U, with a uniform density function ranging from 0 to 1. Then the corresponding value x of the variable X will have the cumulative probability F(x) = u. Therefore, if (u1, u2, …, un) is a set of values from U, the corresponding set of values (x1, x2, …, xn) obtained through Eq. (2.143) will model the desired distribution function F(x). The relationship between u and x may be seen graphically in Fig. 2.29.
Uniformly distributed random numbers between 0 and 1 thus provide a basis for obtaining random numbers with a general probability distribution. In reality, random numbers generated by any systematic procedure can be duplicated exactly, thus constituting a deterministic set; they are only "pseudo-random" rather than truly random numbers. Note that the generated pseudo-random numbers are cyclic; that is, they are repeated with a given, though rather large, period. Computers are now commonly equipped with extremely fast built-in random number generators.
Fig. 2.29. Relation between u and x

Example 2.19
This example demonstrates using the MCS for computing the expected unsupplied energy (EUE) for a demand of 1000 MW in a generating system which consists of 9 interconnected operating units with the parameters presented in Table 2.8, where the Forced Outage Rate (FOR) is the probability that a unit is not available. For simplicity, we assume that any unit i can have only two states: either perfect functioning with nominal capacity ci or total failure with capacity 0 (which corresponds to the limiting values of the admissible capacity interval: Ci ∈ {0, ci}).

Table 2.8. Parameters of the generating system

Unit Nominal capacity ci FOR


1 200 0.2
2 200 0.2
3 200 0.2
4 150 0.1
5 150 0.1
6 100 0.15
7 100 0.15
8 100 0.15
9 100 0.15

Using the MCS we performed a set of 10 samples, each containing randomly generated numbers p_i,j, 0 ≤ p_i,j ≤ 1 (for units i = 1,…,9 and samples j = 1,…,10), that determine the available capacity of unit i as follows:

Ci = 0 when p_i,j < FOR_i, and Ci = ci otherwise    (2.144)

With this in view, the EUE for each sample is calculated as

EUE = max(0, L − ∑_{i=1}^9 Ci),    (2.145)

where L is the system demand at the calculation moment (1000 MW).

The random numbers for the 10 samples were generated as presented in Table 2.9.

Table 2.9. Ten randomly generated samples

Sample
Unit
1 2 3 4 5 6 7 8 9 10
1 0.57 0.47 0.03 0.93 0.28 0.62 0.52 0.09 0.99 0.33
2 0.01 0.13 0.59 0.38 0.72 0.07 0.19 0.64 0.43 0.78
3 0.12 0.80 0.25 0.71 0.17 0.18 0.85 0.31 0.77 0.22
4 0.90 0.24 0.92 0.40 0.94 0.95 0.30 0.98 0.10 1.00
5 0.34 0.69 0.70 0.49 0.61 0.04 0.74 0.75 0.54 0.67
6 0.23 0.02 0.37 0.27 0.05 0.29 0.08 0.42 0.32 0.11
7 0.68 0.91 0.81 0.60 0.83 0.73 0.97 0.87 0.65 0.89
8 0.79 0.35 0.48 0.15 0.50 0.84 0.41 0.53 0.21 0.55
9 0.45 0.58 0.14 0.82 0.39 0.51 0.63 0.20 0.88 0.44

The units' available capacities, obtained from these randomly generated numbers in accordance with (2.144), are presented in Table 2.10.
The expected EUE is obtained as the average value over the 10 samples: EUE = (100 + 250)/10 = 35 MWh. The obtained estimate is quite close to the theoretical value.
Repetition of such calculations for a successive set of moments (taken, for example, with an hourly resolution) estimates the system performance over a study period. The larger the number of simulations, the better the statistical accuracy of the system assessment.

Table 2.10. Available capacity of units

Available capacity
Unit
1 2 3 4 5 6 7 8 9 10
1 200 200 0 200 200 200 200 0 200 200
2 0 0 200 200 200 0 0 200 200 200
3 0 200 200 200 0 0 200 200 200 200
4 150 150 150 150 150 150 150 150 0 150
5 150 150 150 150 150 0 150 150 150 150
6 100 0 100 100 0 100 0 100 100 0
7 100 100 100 100 100 100 100 100 100 100
8 100 100 100 100 100 100 100 100 100 100
9 100 100 0 100 100 100 100 100 100 100
Total 900 1000 1000 1300 1000 750 1000 1100 1150 1200
EUE 100 0 0 0 0 250 0 0 0 0
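The experiment is straightforward to reproduce in code. The sketch below is our own re-implementation of Eqs. (2.144)-(2.145) with a larger number of samples and fresh pseudo-random numbers (so its estimate will differ slightly from the 10-sample value above):

```python
import random

# (nominal capacity c_i in MW, FOR_i) for the nine units of Table 2.8
units = [(200, 0.2)] * 3 + [(150, 0.1)] * 2 + [(100, 0.15)] * 4
L = 1000.0            # demand, MW
n_samples = 100_000

random.seed(1)
total_deficiency = 0.0
for _ in range(n_samples):
    # Eq. (2.144): unit i is available when its random number is >= FOR_i
    capacity = sum(c for c, f in units if random.random() >= f)
    total_deficiency += max(0.0, L - capacity)   # Eq. (2.145)

print("EUE estimate:", total_deficiency / n_samples, "MW")
```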

2.3.2 Continuous random variables

As indicated above, random numbers with a prescribed distribution can be generated using Eq. (2.143) once the standard uniformly distributed random numbers have been obtained. The so-called inverse transform method performs the generation of random numbers according to Eq. (2.143). It is especially effective when the inverse function F⁻¹(u) can be expressed in a closed form, as explained below.

Example 2.20
Consider the exponential distribution

F(x) = 1 − e^(−λx)

The inverse function is

x = F⁻¹(u) = −(1/λ)ln(1 − u)

After generating the standard uniformly distributed random numbers u_i for i = 1, 2, …, the corresponding exponentially distributed random numbers are obtained, according to Eq. (2.143), as

x_i = −(1/λ)ln(1 − u_i)

Since (1 − u_i) is also uniformly distributed, the required random numbers may also be generated as

x_i = −(1/λ)ln(u_i), i = 1, 2, …

Example 2.21
Let Y = max{X1, …, Xn}, where the Xi represent independent and identically distributed random variables with distribution function F(x). The distribution function of Y is

G(x) = [F(x)]^n

Then, from G(x) = u it follows that

F(x) = u^(1/n)

Thus,

x = F⁻¹(u^(1/n))

In order to obtain a value x of the random variable Y, we first generate a value u of the standard uniformly distributed variable U and then compute F⁻¹(u^(1/n)).
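Both examples translate directly into code. The sketch below is our own illustration: it draws exponential variates by the inverse transform of Example 2.20 and maxima of n i.i.d. exponential variates by the u^(1/n) device of Example 2.21.

```python
import math
import random

def exp_sample(lam):
    """Example 2.20: x = -(1/lam) * ln(1 - u), u ~ U(0, 1)."""
    return -math.log(1.0 - random.random()) / lam

def max_of_n_sample(lam, n):
    """Example 2.21 with exponential F: x = F^{-1}(u^{1/n})."""
    u = random.random()
    return -math.log(1.0 - u ** (1.0 / n)) / lam

random.seed(7)
xs = [exp_sample(0.5) for _ in range(100_000)]
ys = [max_of_n_sample(0.5, 5) for _ in range(100_000)]
print(sum(xs) / len(xs))  # ~ 1/lam = 2.0
print(sum(ys) / len(ys))  # ~ (1/lam)(1 + 1/2 + ... + 1/5), about 4.57
```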
The MCS schemes briefly sketched here can be further developed and
tailored to specific features of the considered problem. The detailed de-
scription of the MCS technique can be found in (Gentle 2003, Robert
2005).

2.3.3 Combining the Monte Carlo simulation and UGF for estimating reliability confidence bounds of multi-state systems

Whenever no economic or time constraints exist, an effective approach to obtaining the true value of system reliability would be to test an infinite number of systems in real-life situations until failure occurs. Unfortunately, system and component testing is limited by tight economic budgets and schedules. Thus, it is often unrealistic and infeasible to perform extensive testing at both the component and the system level. A more efficient approach to estimating the reliability of complex systems is to understand the relationships between component and system level interactions.
In general, the evaluation of MSS reliability depends on two key factors: the reliability data associated with each component and a technique for evaluating the reliability of the entire MSS. With respect to the first factor, component testing is usually limited due to budget and schedule constraints. That is, data regarding component reliability are scarce, and thus component and MSS reliability should be regarded as estimates of the true reliability value based on this limited data. Previous research in reliability (Coit 1997, Coit and Jin 2001, Ramirez-Marquez et al. 2004, Ramirez-Marquez and Jiang 2006) showed that uncertainty at the component level propagates to the system level and significantly affects the accuracy of the system reliability estimation.
Unfortunately, current methods for the estimation of system reliability and the associated uncertainty are restricted to the case where the system and the components exhibit binary behavior. Currently, for general MSS, reliability and associated uncertainty estimates cannot be obtained by these methods. It is thus of interest to quantify the uncertainty associated with the reliability estimation of these systems so that, similarly to the binary case, one can identify, measure, and prioritize risks to improve reliability and safety (Coit 1997).
This section presents a method for the estimation of multi-state system reliability confidence bounds based on component reliability and uncertainty data (Ramirez-Marquez and Levitin 2007). The proposed method is based on a structured approach that generates an α-level confidence interval (CI). The method can be applied only to multi-state systems consisting of two-state components that can either work with nominal performance or totally fail.
The general problem addressed in this section is related to the accuracy of the reliability estimate for MSS applications. Currently, only simulation approaches based on component reliability and uncertainty data can be used. These approaches are time consuming and usually underestimate the true value of the system reliability. Here we use the UGF to generate an estimate of MSS reliability that is based on component reliability estimates. The estimate is then used to develop a (1−α)% confidence interval.
First, we provide a method for quantifying the uncertainty, in terms of variance, associated with the reliability estimate of a general MSS. Second, we provide statistical inference methods that can be used to make an accurate estimation of MSS reliability based on component level reliability data.
Different studies indicate that the system structure and the component reliability estimates, with their associated uncertainty, contribute to the propagation of uncertainty to the system level. In general multi-state and binary reliability applications, a common approach to obtaining these estimates is to use binomial test data (Wright and Bierbaum 2002, Stamatelatos 2001), with variance being the preferred measure of uncertainty.
variance being the preferred measure of uncertainty.
The proposed approach works under the assumption that a specific number of units of component type j, n_j, are tested for t hours. After completion of the test, the state (working at nominal performance or failed) of each unit can be regarded as an independent Bernoulli trial with parameter r_j(t), where the index j is associated with the type of component being tested. An unbiased estimate of r_j(t) and its associated variance can be determined from the binomial distribution as

r̂_j(t) = 1 − f_j(t)/n_j,    (2.146)

and

v(r̂_j(t)) = r̂_j(t)(1 − r̂_j(t))/(n_j − 1),    (2.147)

respectively, where f_j(t) denotes the number of failures observed during the test.
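In code, Eqs. (2.146) and (2.147) are one-liners; the following sketch (our own helper name) evaluates them for the first component of the test run reported later in Table 2.13:

```python
def component_estimate(n_units, n_failed):
    """Reliability estimate and its variance, Eqs. (2.146)-(2.147)."""
    r_hat = 1.0 - n_failed / n_units
    variance = r_hat * (1.0 - r_hat) / (n_units - 1)
    return r_hat, variance

print(component_estimate(50, 8))  # (0.84, ~0.00274), component type 1
```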
For binary and capacitated systems with a single demand, closed form expressions for quantifying the uncertainty associated with the reliability estimate of any system can be obtained as long as the component reliability is available (Ramirez-Marquez and Jiang 2005). However, for the multi-state case, developing closed form expressions depends on obtaining the variance of each individual term associated with the loss of load probability equation. Computing the exact values of the covariance terms is computationally burdensome and no closed form expression is currently at hand. To overcome this, we use the UGF method to approximate the system reliability variance estimate and provide MSS reliability bounds. The method assumes that component test data are readily available.
The UGF method utilizes the information generated from the test data or by MCS of the components' performance (estimates of the components' reliability) to immediately provide an estimate of the system reliability, without the need to further simulate the behavior of the test data. For any combination of component reliability estimates, this method calculates the entire MSS performance distribution using the presented reliability block diagram technique and obtains the system reliability estimate.
Two alternatives for generating bounds for the multi-state system reliability at a desired confidence level α are described below.

Confidence Bound 1. This bound is intended to provide an intuitive or practical explanation of the distribution of system failures. It assumes that the number of successes during N tests of a multi-state system follows a binomial distribution with parameter R(t). The rationale behind this assumption is that, if N complete systems were subjected to test and the numbers of failures were recorded at time t, then it is intuitive to assume that the true value of the system reliability, R(t), could be estimated via R̂(t), obtained by the combination of the MCS and UGF methods. If the number of tests N is large enough, the confidence bounds can be obtained through the s-normal approximation of R(t) with mean R̂(t) and variance R̂(t)(1 − R̂(t))/N.
The two-sided confidence interval for R(t) at the (1 − α)% level is given as

R̂(t) − z_{α/2}√(R̂(t)(1 − R̂(t))/N) + 1/N ≤ R(t) ≤ R̂(t) + z_{α/2}√(R̂(t)(1 − R̂(t))/N) − 1/N,    (2.148)

where N = R̂(1 − R̂)/v(R̂) if the number of testing units is not the same for every component, as given by (Jin and Coit 2001), and N = n_j otherwise.

Similarly, the s-normal approximation can be employed to generate lower and upper confidence bounds:
(1 − α)% Confidence Lower Bound:

R̂(t) − z_α√(R̂(t)(1 − R̂(t))/N) + 1/N ≤ R(t)    (2.149)

(1 − α)% Confidence Upper Bound:

R(t) ≤ R̂(t) + z_α√(R̂(t)(1 − R̂(t))/N) − 1/N    (2.150)

Confidence Bound 2. This bound is based on the assumption that the distribution of the system reliability R(t) is an unknown discrete distribution that can be well approximated by a Gaussian distribution. The mean of this unknown distribution is R̂(t), while the variance is R̂(t)(1 − R̂(t))/N. However, this alternative uses Wilson's score method based on the component reliability estimates. As in the case of Bound 1, R̂(t) can be obtained by the combination of the MCS and UGF methods.
These assumptions allow for the computation of a (1 − α)% level two-sided CI for R(t), given as:
[2NR̂(t) + z_{α/2}² − z_{α/2}√(z_{α/2}² + 4NR̂(t)(1 − R̂(t)))]/(2(N + z_{α/2}²)) + 1/N ≤ R(t),    (2.151)

R(t) ≤ [2NR̂(t) + z_{α/2}² + z_{α/2}√(z_{α/2}² + 4NR̂(t)(1 − R̂(t)))]/(2(N + z_{α/2}²)) − 1/N,    (2.152)

where N is given as in Confidence Bound 1.
The lower and upper confidence bounds can also be constructed:
(1 − α)% Confidence Lower Bound:

[2NR̂(t) + z_α² − z_α√(z_α² + 4NR̂(t)(1 − R̂(t)))]/(2(N + z_α²)) + 1/N ≤ R(t),    (2.153)

(1 − α)% Confidence Upper Bound:

R(t) ≤ [2NR̂(t) + z_α² + z_α√(z_α² + 4NR̂(t)(1 − R̂(t)))]/(2(N + z_α²)) − 1/N.    (2.154)
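Given R̂(t), N and α, both bounding schemes reduce to a few lines of arithmetic. The sketch below (our own function names, with the normal quantile taken from scipy, assumed available) evaluates the lower bounds (2.149) and (2.153); with R̂ = 0.5326, N = 50 and α = 0.1 the first reproduces the Bound 1 value 0.4622 of Table 2.14, while the second gives the Wilson-score analogue (whose value depends on the exact N used in the original computation):

```python
import math
from scipy.stats import norm

def bound1_lower(r_hat, n, alpha):
    """One-sided s-normal lower bound, Eq. (2.149)."""
    z = norm.ppf(1.0 - alpha)
    return r_hat - z * math.sqrt(r_hat * (1.0 - r_hat) / n) + 1.0 / n

def bound2_lower(r_hat, n, alpha):
    """One-sided Wilson-score lower bound, Eq. (2.153)."""
    z = norm.ppf(1.0 - alpha)
    root = math.sqrt(z * z + 4.0 * n * r_hat * (1.0 - r_hat))
    return (2.0 * n * r_hat + z * z - z * root) / (2.0 * (n + z * z)) + 1.0 / n

print(round(bound1_lower(0.5326, 50, 0.1), 4))  # 0.4622, cf. Table 2.14
print(round(bound2_lower(0.5326, 50, 0.1), 4))  # Wilson-score lower bound
```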

Example 2.22
Consider the series-parallel system shown in Fig. 2.30. Table 2.11 introduces the reliability data for each component and its associated nominal performance, while Table 2.12 presents the demand distribution. The results of a single test run, considering a data sample size of 50 simulations per component, are presented in Table 2.13. Table 2.14 presents the MSS reliability lower bounds, corresponding to each of the bounding methods, obtained from the simulated component test data and the UGF approach. This table also compares the actual coverage of each bound against the desired level α = 0.1.

Fig. 2.30. Series-parallel system

Table 2.11. Component data for the series-parallel system

Component type    Nominal reliability    Nominal performance
1     0.91    7
2     0.91    6
3     0.96    12
4     0.98    15
5     0.8     4
6     0.8     5
7     0.8     7
8     0.92    6
9     0.95    5
10    0.92    7
11    0.95    7
12    0.96    14
13    0.99    15
14    0.9     7
15    0.83    4
16    0.83    4
17    0.88    9

Table 2.12. Demand distribution

Demand 4 5 6 7 9 11 12
Probability 1/7 1/7 1/7 1/7 1/7 1/7 1/7

To test the accuracy of the proposed approaches, the binomial component data were obtained by Monte Carlo simulation. This simulation was based on the nominal component reliability values and was done for one thousand repetitions of a test of 50 units of each component type. For each repetition, the estimate of the MSS reliability was obtained using the UGF approach and by pure MCS (in which the failures of the entire system were counted). Table 2.15 illustrates 20 random estimates out of the 1,000 estimates of the MSS reliability obtained by the combined MCS and UGF approach and by the pure MCS approach. For this case, it is evident that the UGF provides a more precise estimation of the MSS reliability without the need for extensive simulation to evaluate the system reliability for any given combination of the reliabilities of its components.

Table 2.13. MSS test run data

Component type    No of units    Components failed    Reliability estimate
1      50    8     0.84
2      50    5     0.9
3      50    3     0.94
4      50    2     0.96
5      50    15    0.7
6      50    4     0.92
7      50    11    0.78
8      50    5     0.9
9      50    5     0.9
10     50    4     0.92
11     50    2     0.96
12     50    2     0.96
13     50    0     1
14     50    6     0.88
15     50    12    0.76
16     50    3     0.94
17     50    6     0.88

MCS+UGF MSS reliability: 0.5326    Pure MCS reliability: 0.4743

Table 2.14. Reliability bounds for the test run data

                     Bound 1    Bound 2
Reliability bound    0.4622     0.4822
Actual α coverage    0.065      0.111



Table 2.15. Comparison of the MSS reliability estimates obtained by the combined MCS and UGF method and by pure MCS

Repetition    MCS+UGF    Pure MCS    Repetition    MCS+UGF    Pure MCS
1     0.5665    0.4914    11    0.4779    0.4543
2     0.6192    0.5886    12    0.5649    0.5057
3     0.6452    0.5629    13    0.6173    0.5829
4     0.5225    0.5029    14    0.5898    0.5143
5     0.5587    0.4943    15    0.6083    0.5457
6     0.6029    0.5229    16    0.6318    0.5629
7     0.5565    0.5000    17    0.5485    0.4657
8     0.6108    0.5343    18    0.6060    0.4971
9     0.5740    0.4914    19    0.4956    0.4143
10    0.6031    0.5457    20    0.5742    0.4629

As illustrated by Table 2.14, Bound 2 provides much better coverage when compared against the desired α level. For this case, the actual coverage aligns almost exactly with the desired level: for the desired 90% bound, Bound 2 provides coverage of 88.9%.

2.4 Introduction to Genetic Algorithms

An abundance of optimization methods has been used to solve various reliability optimization problems. The algorithms applied are either heuristics or exact procedures based mainly on modifications of dynamic programming and nonlinear programming. Most of these methods are strongly problem oriented: since they are designed for solving certain optimization problems, they cannot be easily adapted for solving other problems. In recent years, many studies on reliability optimization have used a universal optimization approach based on metaheuristics. These metaheuristics depend only weakly on the specific nature of the problem being solved and, therefore, can be easily applied to solve a wide range of optimization problems. The metaheuristics are based on artificial reasoning rather than on classical mathematical programming. An important advantage is that they do not require any information about the objective function besides its values at the points visited in the solution space. All metaheuristics use the idea of randomness when performing a search, but they also use past knowledge in order to direct the search. Such search algorithms are known as randomized search techniques.
Genetic algorithms (GA's) are one of the most widely used metaheuristics. They were inspired by the optimization procedure that exists in nature, the biological phenomenon of evolution. A GA maintains a population of different solutions, allowing them to mate, produce offspring, mutate, and fight for survival. The principle of survival of the fittest ensures the population's drive towards optimization. GA's have become a popular universal tool for solving various optimization problems, as they have the following advantages:
• they can be easily implemented and adapted;
• they usually converge rapidly on solutions of good quality;
• they can easily handle constrained optimization problems;
• they produce a variety of good quality solutions simultaneously, which is important in the decision-making process.
The GA concept was developed by John Holland at the University of
Michigan and first described in his book (Holland 1975). Holland was im-
pressed by the ease with which biological organisms could perform tasks,
which eluded even the most powerful computers. He also noted that very
few artificial systems have the most remarkable characteristics of biologi-
cal systems: robustness and flexibility. Unlike technical systems, biologi-
cal ones have methods for self-guidance, self-repair and reproducing these
features. Holland’s biologically inspired approach to optimization is based
on the following analogies:
• As in nature, where there are many organisms, there are many possible
solutions to a given problem.
• As in nature, where an organism contains many genes defining its prop-
erties, each solution is defined by many interacting variables (parame-
ters).
• As in nature, where groups of organisms live together in a population
and some organisms in the population are more fit than others, a group
of possible solutions can be stored together in computer memory and
some of them are closer to the optimum than others.
• As in nature, where organisms that are more fit have more chances of
mating and having offspring, solutions that are closer to the optimum
can be selected more often to combine their parameters to form new so-
lutions.
• As in nature, where organisms produced by good parents are more likely
to be better adapted than the average organism because they received
good genes, offspring of good solutions are more likely to be better than
a random guess, since they are composed of better parameters.

• As in nature, where survival of the fittest ensures that successful traits continue to be passed along to subsequent generations and are refined as the population evolves, the survival-of-the-fittest rule ensures that the composition of the parameters corresponding to the best guesses continually gets refined.
GA's maintain a population of individual solutions, each one represented by a finite string of symbols, known as the genome, encoding a possible solution within a given problem space. This space, referred to as the search space, comprises all of the possible solutions to the problem at hand. Generally speaking, a GA is applied to spaces that are too large to be searched exhaustively.
GA’s exploit the idea of the survival of the fittest and an interbreeding
population to create a novel and innovative search strategy. They itera-
tively create new populations from the old ones by ranking the strings and
interbreeding the fittest to create new strings, which are (hopefully) closer
to the optimum solution for the problem at hand. In each generation, a GA
creates a set of strings from pieces of the previous strings, occasionally
adding random new data to keep the population from stagnating. The result
is a search strategy that is tailored for vast, complex, multimodal search
spaces.
The idea of survival of the fittest is of great importance to genetic algo-
rithms. GA’s use what is termed as the fitness function in order to select
the fittest string to be used to create new, and conceivably better, popula-
tions of strings. The fitness function takes a string and assigns it a relative
fitness value. The method by which it does this and the nature of the fit-
ness value do not matter. The only thing that the fitness function must do is
rank the strings in some way by producing their fitness values. These val-
ues are then used to select the fittest strings.
GA’s use the idea of randomness when performing a search. However,
it must be clearly understood that the GA’s are not simply random search
algorithms. Random search algorithms can be inherently inefficient due to
the directionless nature of their search. GA’s are not directionless. They
utilize knowledge from previous generations of strings in order to con-
struct new strings that will approach the optimal solution. GA’s are a form
of a randomized search, and the way that the strings are chosen and com-
bined comprise a stochastic process.
The essential differences between GA’s and other forms of optimiza-
tion, according to (Goldberg 1989), are as follows.
GA’s usually use a coded form of the solution parameters rather than
their actual values. Solution encoding in a form of strings of symbols (an
analogy to chromosomes containing genes) provides the possibility of
crossover and mutation. The symbolic alphabet that was used was initially

binary, due to certain computational advantages purported in (Goldberg


1989). This has been extended to include character-based encodings, inte-
ger and real-valued encodings, and tree representations (Michalewicz
1996).
GA’s do not just use a single point on the problem space, rather they use
a set, or population, of points (solutions) to conduct a search. This gives
the GA’s the power to search noisy spaces littered with local optimum
points. Instead of relying on a single point to search through the space,
GA’s look at many different areas of the problem space at once, and use all
of this information as a guide.
GA’s use only payoff information to guide them through the problem
space. Many search techniques need a range of information to guide them-
selves. For example, gradient methods require derivatives. The only in-
formation a GA needs to continue searching for the optimum is some
measure of fitness about a point in the space.
GA’s are probabilistic in nature, not deterministic. This is a direct result
of the randomization techniques used by GA’s.
GA’s are inherently parallel. Herein lies one of their most powerful fea-
tures. GA’s, by their nature, are very parallel, dealing with a large number
of solutions simultaneously. Using schemata theory, Holland has estimated
that a GA, processing n strings at each generation, in reality processes n^3 useful substrings.
Two of the most common GA implementations are “generational” and
“steady state”, although recently the steady-state technique has received
increased attention (Kinnear 1993). This interest is partly attributed to the
fact that steady-state techniques can offer a substantial reduction in the
memory requirements of a system: the technique abolishes the need to
maintain more than one population during the evolutionary process, which
is necessary in the generational GA. In this way, genetic systems have
greater portability for a variety of computer environments because of the
reduced memory overhead. Another reason for the increased interest in
steady-state techniques is that, in many cases, a steady-state GA has been
shown to be more effective than a generational GA (Syswerda 1991),
(Vavak and Fogarty 1996). This improved performance can be attributed
to factors such as the diversity of the population and the immediate avail-
ability of superior individuals.
A comprehensive description of a generational GA can be found in
(Goldberg 1989). Here, we present the structure of a steady-state GA.
2.4.1 Structure of steady-state Genetic Algorithms
The steady-state GA (see Fig. 2.31) proceeds as follows (Whitley 1989). An initial population of solutions is generated randomly or heuristically. Within
this population, new solutions are obtained during the genetic cycle by us-
ing the crossover operator. This operator produces an offspring from a
randomly selected pair of parent solutions (the parent solutions are se-
lected with a probability proportional to their relative fitness), facilitating
the inheritance of some basic properties from the parents to the offspring.
The newly obtained offspring undergoes mutation with the probability pmut.

[Block diagram: random generation replenishes a population of solutions; crossover produces a new solution, which may undergo mutation, is decoded and evaluated, and competes in selection before joining the population.]

Fig. 2.31. Structure of a steady-state GA

Each new solution is decoded and its objective function (fitness) values
are estimated. These values, which are a measure of quality, are used to
compare different solutions. The comparison is accomplished by a selec-
tion procedure that determines which solution is better: the newly obtained
solution or the worst solution in the population. The better solution joins
the population, while the other is discarded. If the population contains
equivalent solutions following selection, then redundancies are eliminated
and the population size decreases as a result.
A genetic cycle terminates when Nrep new solutions are produced or
when the number of solutions in the population reaches a specified level.
Then, new randomly constructed solutions are generated to replenish the
shrunken population, and a new genetic cycle begins. The whole GA is
terminated when its termination condition is satisfied. This condition can
158 New Computational Methods in Power System Reliability

be specified in the same way as in a generational GA. The following is the
steady-state GA in pseudo-code format.

begin STEADY STATE GA
   Initialize population Π
   Evaluate population Π {compute fitness values}
   while GA termination criterion is not satisfied do
      {GENETIC CYCLE}
      while genetic cycle termination criterion is not satisfied do
         Select at random Parent Solutions S1, S2 from Π
         Crossover: (S1, S2) → SO {offspring}
         Mutate offspring SO → S*O with probability pmut
         Evaluate S*O
         Replace SW {the worst solution in Π} with S*O if S*O is better than SW
         Eliminate identical solutions in Π
      end while
      Replenish Π with new randomly generated solutions
   end while
end GA
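
To make the pseudo-code concrete, the following is a minimal Python sketch of the same genetic cycle. It is an illustration under stated assumptions rather than a definitive implementation: the problem-specific procedures fitness, random_solution, crossover and mutate are supplied by the caller; fitness values are assumed to be positive, since they are also used as selection weights; and solutions are assumed to be lists.

import random

def steady_state_ga(fitness, random_solution, crossover, mutate,
                    pop_size=100, n_rep=50, p_mut=0.3, n_cycles=100):
    # Initialize the population with random solutions.
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(n_cycles):                     # GA termination criterion
        for _ in range(n_rep):                    # genetic cycle: Nrep new solutions
            # Select parents with probability proportional to their fitness.
            weights = [fitness(s) for s in pop]
            s1, s2 = random.choices(pop, weights=weights, k=2)
            offspring = crossover(s1, s2)
            if random.random() < p_mut:           # mutate with probability pmut
                offspring = mutate(offspring)
            # The offspring replaces the worst solution if it is better.
            worst = min(range(len(pop)), key=lambda i: weights[i])
            if fitness(offspring) > weights[worst]:
                pop[worst] = offspring
            # Eliminate identical solutions; the population may shrink.
            pop = [list(s) for s in dict.fromkeys(map(tuple, pop))]
        # Replenish the shrunken population with new random solutions.
        pop += [random_solution() for _ in range(pop_size - len(pop))]
    return max(pop, key=fitness)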

Example 2.23
In this example we present several initial stages of a steady-state GA that maximizes the function of six integer variables x1, …, x6 taking the form
f(x1, …, x6) = 10000 [(x1 − 3.4)^2 + (x2 − 1.8)^2 + (x3 − 7.7)^2 + (x4 − 3.1)^2 + (x5 − 2.8)^2 + (x6 − 8.8)^2]^(−1)
The variables can take values from 1 to 9. The initial population, con-
sisting of five solutions ordered according to their fitness (value of func-
tion f), is:

No. x1 x2 x3 x4 x5 x6 f(x1,…,x6)
1 4 2 4 1 2 5 297.8
2 3 7 7 7 2 7 213.8
3 7 5 3 5 3 9 204.2
4 2 7 4 2 1 4 142.5
5 8 2 3 1 1 4 135.2

Using the random generator that produces the numbers of the solutions,
the GA chooses the first and third strings, i.e. (4 2 4 1 2 5) and (7 5 3 5 3
9) respectively. From these strings, it produces a new one by applying a
crossover procedure that takes the first three numbers from the better parent string and the last three numbers from the inferior parent string. The
resulting string is (4 2 4 5 3 9). The fitness of this new solution is f(x1, …,
x6) = 562.4. The new solution enters the population, replacing the one with
the lowest fitness. The new population is now

No. x1 x2 x3 x4 x5 x6 f(x1,…,x6)
1 4 2 4 5 3 9 562.4
2 4 2 4 1 2 5 297.8
3 3 7 7 7 2 7 213.8
4 7 5 3 5 3 9 204.2
5 2 7 4 2 1 4 142.5

Choosing at random the third and fourth strings, (3 7 7 7 2 7) and (7 5 3
5 3 9) respectively, the GA produces the new string (3 7 7 5 3 9) using the
crossover operator. This string undergoes a mutation that changes one of
its numbers by one (here, the fourth element of the string changes from 5
to 4). The resulting string (3 7 7 4 3 9) has a fitness of f(x1, …, x6) = 349.9.
This solution is better than the inferior one in the population; therefore, the
new solution replaces the inferior one. Now the population takes the form

No. x1 x2 x3 x4 x5 x6 f(x1,…,x6)
1 4 2 4 5 3 9 562.4
2 3 7 7 4 3 9 349.9
3 4 2 4 1 2 5 297.8
4 3 7 7 7 2 7 213.8
5 7 5 3 5 3 9 204.2

A new solution (4 2 4 4 3 9) is obtained by the crossover operator over the randomly chosen first and second solutions, i.e. (4 2 4 5 3 9) and (3 7 7 4 3 9) respectively. After the mutation this solution takes the form (4 2 5 4 3 9) and has the fitness f(x1, …, x6) = 1165.5. The population obtained after the new solution joins it is

No. x1 x2 x3 x4 x5 x6 f(x1,…,x6)
1 4 2 5 4 3 9 1165.5
2 4 2 4 5 3 9 562.4
3 3 7 7 4 3 9 349.9
4 4 2 4 1 2 5 297.8
5 3 7 7 7 2 7 213.8

Note that the mutation procedure is not applied to all the solutions ob-
tained by the crossover. This procedure is used with some pre-specified
probability pmut. In our example, only the second and the third newly ob-
tained solutions underwent the mutation.
The actual GA’s operate with much larger populations and produce
thousands of new solutions using the crossover and mutation procedures.
The steady-state GA with a population size of 100 obtained the optimal so-
lution for the problem presented after producing about 3000 new solutions.
Note that the total number of possible solutions is 9^6 = 531441. The GA
managed to find the optimal solution by exploring less than 0.6% of the
entire solution space.
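
The arithmetic of this example is easy to check in code. The following sketch (the function names are ours) implements the fitness function and the crossover rule used above, which takes the first three numbers from the better parent and the last three from the inferior one:

def fitness(x):
    # Fitness of Example 2.23: 10000 divided by the sum of squared deviations.
    targets = (3.4, 1.8, 7.7, 3.1, 2.8, 8.8)
    return 10000 / sum((xi - ti) ** 2 for xi, ti in zip(x, targets))

def crossover(better, inferior):
    # First three elements from the better parent, last three from the other.
    return better[:3] + inferior[3:]

parent1 = (4, 2, 4, 1, 2, 5)   # fitness 297.8
parent2 = (7, 5, 3, 5, 3, 9)   # fitness 204.2
child = crossover(parent1, parent2)
print(child, round(fitness(child), 1))   # (4, 2, 4, 5, 3, 9) 562.4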
Both types of GA are based on the crossover and mutation procedures,
which depend strongly on the solution encoding technique. These proce-
dures should preserve the feasibility of the solutions and provide the in-
heritance of their essential properties.

2.4.2 Adaptation of Genetic Algorithms to specific optimization problems

There are three basic steps in applying a GA to a specific problem.
In the first step, one defines the solution representation (encoding in a
form of a string of symbols) and determines the decoding procedure,
which evaluates the fitness of the solution represented by the arbitrary
string.
In the second step, one has to adapt the crossover and mutation proce-
dures to the given representation in order to provide feasibility for the new
solutions produced by these procedures as well as inheriting the basic
properties of the parent solutions by their offspring.
In the third step, one has to choose the basic GA parameters, such as the
population size, the mutation probability, the crossover probability (gen-
erational GA) or the number of crossovers per genetic cycle (in the steady-
state GA), and formulate the termination condition in order to provide the
greatest possible GA efficiency (convergence speed).
The strings representing GA solutions are randomly generated by the
population generation procedure, modified by the crossover and mutation
procedures, and decoded by the fitness evaluation procedure. Therefore,
the solution representation in the GA should meet the following require-
ments:
• It should be easily generated (sophisticated solution generation procedures reduce the GA speed).
• It should be as compact as possible (using very long strings requires ex-
cessive computational resources and slows the GA convergence).
• It should be unambiguous (i.e. different solutions should be represented
by different strings).
• It should represent feasible solutions (if not every randomly generated string represents a feasible solution, then feasibility should be provided by a simple string transformation).
• It should provide feasibility inheritance of new solutions obtained from
feasible ones by the crossover and mutation operators.
The field of reliability optimization includes the problems of finding op-
timal parameters, optimal allocation and assignment of different elements
into a system, and optimal sequencing of the elements. Many of these
problems are combinatorial by their nature. The most suitable symbol al-
phabet for this class of problems is integer numbers. The finite string of in-
teger numbers can be easily generated and stored. The random generator
produces integer numbers for each element of the string in a specified
range. This range should be the same for each element in order to make the
string generation procedure simple and fast. If for some reason different
string elements should belong to different ranges, then the string should be
transformed to provide solution feasibility.
In the following sections, we show how integer strings can be inter-
preted for solving different kinds of optimization problems.

Parameter determination problems
When the problem lies in determining a vector of H parameters (x1, x2, …,
xH) that maximizes an objective function f(x1, x2, …, xH) one always has to
specify the ranges of the parameter variation:
x min
j ≤ x j ≤ x max
j for 1 ≤ j ≤ H (2.155)
In order to facilitate the search in the solution space determined by ine-
qualities (2.155), integer strings a = (a1 a2 … aH) should be generated with
elements ranging from 0 to N and the values of parameters should be ob-
tained for each string as
x_j = x_j^min + a_j (x_j^max − x_j^min) / N.    (2.156)

Note that the space of the integer strings just approximately maps the
space of the real-valued parameters. The number N determines the preci-
sion of the search. The search resolution for the j-th parameter is
(x_j^max − x_j^min) / N. Therefore the increase of N provides a more precise
search. On the other hand, the size of the search space of integer strings
grows drastically with the increase of N, which slows the GA convergence.
A reasonable compromise can be found by using a multistage GA search.
162 New Computational Methods in Power System Reliability

In this method, a moderate value of N is chosen and the GA is run to ob-
tain a “crude” solution. Then the ranges of all the parameters are corrected
to accomplish the search in a small vicinity of the vector of parameters ob-
tained and the GA is started again. The desired search precision can be ob-
tained by a few iterations.
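
A sketch of the range-correction step of this multistage search is given below; the shrink factor and the function name are illustrative assumptions:

def refine_ranges(x_best, x_min, x_max, shrink=0.1):
    # Narrow every parameter range to a small vicinity of the best
    # solution found, clipped to the original bounds.
    half = [(hi - lo) * shrink / 2 for lo, hi in zip(x_min, x_max)]
    new_min = [max(lo, xb - h) for xb, lo, h in zip(x_best, x_min, half)]
    new_max = [min(hi, xb + h) for xb, hi, h in zip(x_best, x_max, half)]
    return new_min, new_max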

Example 2.24
Consider a problem in which one has to minimize a function of seven pa-
rameters. Assume that, following a preliminary analysis, the parameters have different ranges of possible variation.
Let the random generator provide the generation of integer numbers in
the range of 0 - 100 (N = 100). The random integer string and the corre-
sponding values of the parameters obtained according to (2.156) are pre-
sented in Table 2.16.

Table 2.16. Example of parameters encoding
No. of variable 1 2 3 4 5 6 7
x_j^min 0.0 0.0 1.0 1.0 1.0 0.0 0.0
x_j^max 3.0 3.0 5.0 5.0 5.0 5.0 5.0
Random integer string 21 4 0 100 72 98 0
Decoded variable 0.63 0.12 1.0 5.0 3.88 4.9 0.0
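
The decoding in Table 2.16 follows directly from (2.156). A minimal sketch reproducing the last row of the table (the function name is ours):

def decode(a, x_min, x_max, n=100):
    # Map an integer string a (elements in 0..n) to real parameters by (2.156).
    return [lo + aj * (hi - lo) / n for aj, lo, hi in zip(a, x_min, x_max)]

x_min = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
x_max = [3.0, 3.0, 5.0, 5.0, 5.0, 5.0, 5.0]
a = [21, 4, 0, 100, 72, 98, 0]
print(decode(a, x_min, x_max))  # [0.63, 0.12, 1.0, 5.0, 3.88, 4.9, 0.0]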

Partition and allocation problems
The partition problem can be considered as a problem of allocating Y items
belonging to a set Φ in K mutually disjoint subsets Φi, i.e. such that
∪_{i=1}^{K} Φ_i = Φ,   Φ_i ∩ Φ_j = ∅,  i ≠ j    (2.157)

Each set can contain from 0 to Y items. The partition of the set Φ can be
represented by the Y-length string a = (a1 a2 … aY−1 aY), in which aj is the number of the set to which item j belongs. Note that, in the strings repre-
senting feasible solutions of the partition problem, each element can take a
value in the range (1, K).
Now consider a more complicated allocation problem in which the
number of items is not specified. Assume that there are H types of differ-
ent items with an unlimited number of items for each type h. The number
of items of each type allocated in each subset can vary. To represent an al-
location of the variable number of items in K subsets one can use the fol-
lowing string encoding a = (a11 a12 …a1K a21 a22 … a2K… aH1 aH2… aHK), in
which aij corresponds to the number of items of type i belonging to subset
j. Observe that the different subsets can contain identical elements.

Example 2.25
Consider the problem of allocating items of three different types in two
disjoint subsets. In this problem, H=3 and K=2. Any possible allocation
can be represented by an integer string using the encoding described
above. For example, the string (2 1 0 1 1 1) encodes the solution in
which two type 1 items are allocated in the first subset and one in the sec-
ond subset, one item of type 2 is allocated in the second subset, one item of
type 3 is allocated in each of the two subsets.
When K = 1, one has an assignment problem in which a number of different items should be chosen from a list containing an unlimited number of items of H different types. Any solution of the assignment problem can be represented by the string a = (a1 a2 … aH), in which aj corresponds to the number of chosen items of type j.
The range of variation of string elements for both allocation and assign-
ment problems can be specified based on the preliminary estimation of the
characteristics of the optimal solution (maximal possible number of ele-
ments of the same type included into the single subset). The greater the
range, the greater the solution space to be explored (note that the minimal
possible value of the string element is always zero in order to provide the
possibility of not choosing any element of the given type to the given sub-
set). In many practical applications, the total number of items belonging to
each subset is also limited. In this case, any string representing a solution
in which this constraint is not met should be transformed in the following
way:
a*_ij = ⌊a_ij N_j / Σ_{h=1}^{H} a_hj⌋  if N_j < Σ_{h=1}^{H} a_hj,  and  a*_ij = a_ij  otherwise,  for 1 ≤ i ≤ H, 1 ≤ j ≤ K    (2.158)

where Nj is the maximal allowed number of items in subset j.

Example 2.26
Consider the case in which the items of three types should be allocated into
two subsets. Assume that it is prohibited to allocate more than five items
of each type to the same subset. The GA should produce strings with ele-
ments ranging from 0 to 5. An example of such a string is (4 2 5 1 0 2).
Assume that for some reason the total numbers of items in the first and
in the second subsets are restricted to seven and six respectively. In order
to obtain a feasible solution, one has to apply the transform (2.158) in
which N1 = 7, N2 = 6:
Σ_{h=1}^{3} a_h1 = 4 + 5 + 0 = 9,   Σ_{h=1}^{3} a_h2 = 2 + 1 + 2 = 5.

The string elements take the values

a11 = ⌊4×7/9⌋ = 3, a21 = ⌊5×7/9⌋ = 3, a31 = ⌊0×7/9⌋ = 0

a12 = ⌊2×6/5⌋ = 2, a22 = ⌊1×6/5⌋ = 1, a32 = ⌊2×6/5⌋ = 2
After the transformation, one obtains the following string: (3 2 3 1 0 2).
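
The transform (2.158) used in this example can be sketched as follows, with the string viewed as an H×K matrix of counts (the function name is ours):

def enforce_subset_limits(a, limits):
    # a[i][j] is the number of type-(i+1) items in subset j+1;
    # limits[j] is the maximal allowed number of items in subset j+1.
    h, k = len(a), len(limits)
    for j in range(k):
        total = sum(a[i][j] for i in range(h))
        if limits[j] < total:                          # transform (2.158)
            for i in range(h):
                a[i][j] = a[i][j] * limits[j] // total   # floor
    return a

# Example 2.26: string (4 2 5 1 0 2) as counts, with N1 = 7 and N2 = 6.
a = [[4, 2], [5, 1], [0, 2]]
print(enforce_subset_limits(a, [7, 6]))  # [[3, 2], [3, 1], [0, 2]], i.e. (3 2 3 1 0 2)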

When the number of item types and subsets is large, the solution repre-
sentation described above results in an enormous growth of the length of
the string. Besides, to represent a reasonable solution (especially when the
number of items belonging to each subset is limited), such a string should
contain a large fraction of zeros because only a few items should be in-
cluded in each subset. This redundancy increases the need for computational resources and lowers the efficiency of the GA. To reduce
the redundancy of the solution representation, each inclusion of m items of
type h into subset k is represented by a triplet (m h k). In order to preserve
the constant length of the strings, one has to specify in advance a maximal
reasonable number of such inclusions I. The string representing up to I in-
clusions takes the form (m1 h1 k1 m2 h2 k2 … mI hI kI). The range of string
elements should be (0, max{M, H, K}), where M is the maximal possible
number of elements of the same type included into a single subset. An ar-
bitrary string generated in this range can still produce infeasible solutions.
In order to provide the feasibility, one has to apply the transform
a*_j = a_j mod (x + 1), where x is equal to M, H and K for the string elements
corresponding to m, h and k respectively. If one of the elements of the trip-
let is equal to zero, then this means that no inclusion is made.
For example, the string (3 1 1 2 1 2 3 2 1 1 2 2 2 3 2) represents the same
allocation as string (3 2 3 1 0 2) in Example 2.26. Note that the permuta-
tion of triplets, as well as an addition or reduction of triplets containing ze-
ros, does not change the solution. For example, the string (4 0 1 2 3 2 2 1 2
3 1 1 1 2 2 3 2 1) also represents the same allocation as that of the previous
string.
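
A possible decoding of such a triplet string into an allocation matrix, including the modulo transform, is sketched below; we assume that repeated inclusions of the same (type, subset) pair accumulate, which the text leaves open:

def decode_triplets(s, m_max, h_types, k_subsets):
    # Reduce each element by mod(x + 1), where x is M, H or K, and add
    # m items of type h to subset k; a zero anywhere means no inclusion.
    alloc = [[0] * k_subsets for _ in range(h_types)]
    for i in range(0, len(s), 3):
        m = s[i] % (m_max + 1)
        h = s[i + 1] % (h_types + 1)
        k = s[i + 2] % (k_subsets + 1)
        if m and h and k:
            alloc[h - 1][k - 1] += m
    return alloc

s = [3, 1, 1, 2, 1, 2, 3, 2, 1, 1, 2, 2, 2, 3, 2]
print(decode_triplets(s, m_max=5, h_types=3, k_subsets=2))
# [[3, 2], [3, 1], [0, 2]], i.e. the allocation (3 2 3 1 0 2)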
2.4.3 Determination of solution fitness

Having a solution represented in the GA by an integer string a one then
has to estimate the quality of this solution (or, in terms of the evolution
process, the fitness of the individual). The GA seeks solutions with the
greatest possible fitness. Therefore, the fitness should be defined in such a
way that its greatest values correspond to the best solutions.
For example, when optimizing the system reliability R (which is a func-
tion of some of the parameters represented by a) one can define the solu-
tion fitness equal to this index, since one wants to maximize it. On the con-
trary, when minimizing the system cost C, one has to define the solution
fitness as M − C, where M is a constant number. In this case, the maximal
solution fitness corresponds to its minimal cost.
In the majority of optimization problems, the optimal solution should
satisfy some constraints. There are three different approaches to handling
the constraints in GA (Michalewicz 1996). One of these uses penalty func-
tions as an adjustment to the fitness function; two other approaches use
“decoder” or “repair” algorithms to avoid building illegal solutions or re-
pair them respectively. The “decoder” and “repair” approaches suffer from
the disadvantage of being tailored to the specific problems and thus are not
sufficiently general to handle a variety of problems. On the other hand, the
penalty approach based on generating potential solutions without consider-
ing the constraints and on decreasing the fitness of solutions, violating the
constraints, is suitable for problems with a relatively small number of con-
straints. For heavily constrained problems, the penalty approach causes the
GA to spend most of its time evaluating solutions violating the constraints.
Fortunately, the reliability optimization problems usually deal with few
constraints.
Using the penalty approach one transforms a constrained problem into
an unconstrained one by associating a penalty with all constraint violations.
The penalty is incorporated into the fitness function. Thus, the original prob-
lem of maximizing a function f(a) is transformed into the maximization of
the function
f(a) − Σ_{j=1}^{J} π_j η_j    (2.159)

where J is the total number of constraints, πj is a penalty coefficient related
to the j-th constraint (j = 1, …, J) and ηj is a measure of the constraint vio-
lation. Note that the penalty coefficient should be chosen in such a way as
to allow the solution with the smallest value of f(a) that meets all of the
constraints to have a fitness greater than the solution with the greatest
value of f(a) but violating at least one constraint.
Consider, for example, a typical problem of maximizing the system reli-
ability subject to cost constraint: R(a) → max subject to C(a)≤C*.
The system cost and reliability are functions of parameters encoded by a
string a: C(a) and R(a) respectively. The system cost should not be greater
than C*. The fitness of any solution a can be defined as

M + R(a) − πη(C*, a),
where
η(C*, a) = (1 + C(a) − C*) · 1(C(a) > C*)    (2.160)

The coefficient π should be greater than one. In this case the fitness of any
solution violating the constraint is smaller than M (the smallest violation of
the constraint C(a)≤C* produces a penalty greater than π) while the fitness
of any solution meeting the constraint is greater than M. In order to keep
the fitness of the solutions positive, one can choose M>π (1+Cmax−C*),
where Cmax is the maximal possible system cost.
Another typical optimization problem is minimizing the system cost
subject to the reliability constraint: C(a) → min subject to R(a)≥R*.
The fitness of any solution a of this problem can be defined as

M − C(a) − πη(R*, a),
where
η(R*, a) = (1 + R* − R(a)) · 1(R(a) < R*)    (2.161)
The coefficient π should be greater than Cmax. In this case, the fitness of
any solution violating the constraint is smaller than M − Cmax whereas the
fitness of any solution meeting the constraint is greater than M − Cmax. In
order to keep the fitness of the solutions positive, one can choose
M>Cmax + 2π.
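
Expressed in code, the two penalized fitness definitions are direct transcriptions of (2.160) and (2.161); the argument names are ours:

def fitness_max_reliability(a, R, C, c_star, pi, m):
    # Maximize R(a) subject to C(a) <= C*; fitness by (2.160).
    eta = (1 + C(a) - c_star) if C(a) > c_star else 0.0
    return m + R(a) - pi * eta

def fitness_min_cost(a, R, C, r_star, pi, m):
    # Minimize C(a) subject to R(a) >= R*; fitness by (2.161).
    eta = (1 + r_star - R(a)) if R(a) < r_star else 0.0
    return m - C(a) - pi * eta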

2.4.4 Basic Genetic Algorithm procedures and parameters

The crossover procedures create a new solution as the offspring of a pair of
existing ones (parent solutions). The offspring should inherit some useful
properties of both parents in order to facilitate their propagation through-
out the population. The mutation procedure is applied to the offspring so-
lution. It introduces slight changes into the solution encoding string by
modifying some of the string elements. Both of these procedures should be
developed in such a way as to provide the feasibility of the offspring solu-
tions given that parent solutions are feasible.
When applied to parameter determination, partition, and assignment
problems, the solution feasibility means that the values of all of the string
elements belong to a specified range. The most commonly used crossover
procedures for these problems generate offspring in which every position
is occupied by a corresponding element from one of the parents. This
property of the offspring solution provides its feasibility. For example, in
the uniform crossover each string element is copied either from the first or
second parent string with equal probability.
The commonly used mutation procedure changes the value of a ran-
domly selected string element by 1 (increasing or decreasing this value
with equal probability). If after the mutation the element is out of the
specified range, it takes the minimal or maximal allowed value.
When applied to the sequencing problems, the crossover and mutation
operators should produce the offspring that preserve the form of permuta-
tions. This means that the offspring string should contain all of the ele-
ments that appear in the initial strings and each element should appear in
the offspring only once. Any omission or duplication of the element con-
stitutes an error. For example, in the fragment crossover operator all of the
elements from the first parent string are copied to the same positions of the
offspring. Then, all of the elements belonging to a randomly chosen set of
adjacent positions in the offspring are reallocated within this set in the or-
der that they appear in the second parent string. It can be seen that this op-
erator provides the feasibility of the permutation solutions.
The widely used mutation procedure that preserves the permutation fea-
sibility swaps two string elements initially located in two randomly chosen
positions.
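
Minimal Python sketches of the operators just described, namely the uniform crossover and the ±1 mutation for integer strings, and the swap mutation for permutations, are given below; the function names are illustrative:

import random

def uniform_crossover(p1, p2):
    # Copy each element from the first or second parent with equal probability.
    return [random.choice(pair) for pair in zip(p1, p2)]

def shift_mutation(s, lo, hi):
    # Change a randomly selected element by +/-1, clipped to the allowed range.
    s = list(s)
    j = random.randrange(len(s))
    s[j] = min(hi, max(lo, s[j] + random.choice((-1, 1))))
    return s

def swap_mutation(perm):
    # Swap the elements in two randomly chosen positions (preserves the permutation).
    perm = list(perm)
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]
    return perm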
There are no general rules for choosing the values of the basic GA parameters for specific optimization problems. The best way to
determine the proper combination of these values is by experimental com-
parison between GA’s with different parameters.
A detailed description of a variety of different crossover and mutation
operators and recommendations concerning the choice of GA parameters
can be found in the GA literature.
References
Aven T and Jensen U (1999) Stochastic models in reliability. Springer, NY.
Billinton R. and Allan R (1996) Reliability evaluation of power systems. Plenum
Press, Boston.
Cinlar E (1975) Introduction to stochastic processes. Prentice-Hall, Englewood
Cliffs, NJ.
Coit D, Jin T (2001) Prioritizing system-reliability prediction improvements, IEEE
Transactions on Reliability, 50(1): 17-25.
Coit D (1997) System-reliability confidence-intervals for complex-systems with
estimated component-reliability, IEEE Transactions on Reliability, 46(4):
487-493.
Endrenyi J (1979) Reliability modeling in electric power systems. John Wiley &
Sons, NY.
Gentle J (2003) Random number generation and Monte Carlo methods, Springer,
New York.
Goldberg D (1989) Genetic Algorithms in search, optimization and machine learn-
ing. Addison-Wesley.
Goldner Sh., Lisnianski A (2006) Markov reward models for ranking units per-
formance, IEEE 24th Convention of Electrical and Electronics Engineers in
Israel: 221-225.
Grimmett G, Stirzaker D (1992) Probability and random processes. Second edi-
tion. Clarendon Press, Oxford.
Holland J (1975) Adaptation in natural and artificial systems. The University of
Michigan Press, Ann Arbor, Michigan.
Howard R (1960) Dynamic programming and Markov processes, MIT Press,
Cambridge, Massachusetts.
International Standard (1995) Application of Markov techniques, International
Electrotechnical Commission IEC 1165.
Karlin S, Taylor H (1981) A second course in stochastic processes. Academic
Press, Orlando, FL.
Kovalenko I, Kuznetsov N, Pegg Ph. (1997) Mathematical theory of reliability of
time dependent systems with practical applications, Wiley, Chichester, Eng-
land.
Kinnear K (1993) Generality and difficulty in Genetic Programming: evolving a
sort. In Proceedings of the Fifth International Conference on Genetic Algo-
rithms. Ed. Forrest S. Morgan Kaufmann. San Mateo, CA: 287-94.
Levitin G (2005) Universal generating function in reliability analysis and optimi-
zation, Springer-Verlag, London.
Levy P (1954) Processus semi-markoviens. Proc. Int. Cong. Math. Amsterdam, 416-
426.
Limnios N, Oprisan G (2000) Semi-Markov processes and reliability, Birkhauser,
Boston, Basel, Berlin.
Lindqvist B (1987) Monotone Markov models, Reliability Engineering, 17: 47-58.
Lisnianski A, Yeager A (2000) Time-redundant system reliability under randomly
constrained time resources, Reliability Engineering and System Safety, 70:
157-166.
Lisnianski A, Levitin G (2003) Multi-state system reliability. Assessment, optimi-
zation and applications. World Scientific. Singapore.
Lisnianski A (2004) Universal generating function technique and random process
methods for multi-state system reliability analysis. Proceedings of the 2nd In-
ternational Workshop in Applied Probability (IWAP2004). Piraeus, Greece:
237-242.
Lisnianski A (2007a) The Markov reward model for a multi-state system reliabil-
ity assessment with variable demand, Quality Technology & Quantitative
Management, 4(2): 265-278.
Lisnianski A (2007b) Extended block diagram method for a multi-state system re-
liability assessment, Reliability Engineering and System Safety, 92(12): 1601-
1607.
Michalewicz Z (1996) Genetic Algorithms + data structures = evolution programs.
Third edition. Springer-Verlag. Berlin.
Mine H, Osaki S (1970) Markovian decision processes, American Elsevier Pub-
lishing Company, Inc., New York.
Ramirez-Marquez J, Coit D, Jin T (2004) Test plan allocation to minimize system
reliability estimation variability, International Journal of Reliability, Quality
and Safety Engineering, 11(3): 257-272.
Ramirez-Marquez J, Jiang W (2005) Confidence bounds for the reliability of bi-
nary capacitated two-terminal networks, Reliability Engineering and System
Safety, 91(8): 905-914.
Ramirez-Marquez J, Jiang W (2006) On improved confidence bounds for system
reliability, IEEE Transactions on Reliability, 55(1): 26-36.
Robert C (2005) Monte Carlo statistical methods, Springer. New York.
Ross S (1983) Stochastic processes. John Wiley, NY.
Ross S (2000) Introduction to probability models. Seventh Edition. Boston: Aca-
demic Press.
Sahner R, Trivedi K, Poliafito A (1996) Performance and reliability analysis of
computer systems. An example-based approach using the SHARPE software
package. Kluwer Academic Publishers, Boston/London.
Stamatelatos M (2001) Improving NASA capability in probabilistic risk assess-
ment, Office of Safety and Mission Assurance, Safety Directors Meeting
March 21, 2001.
Syswerda G (1991) A study of reproduction in generational and steady state ge-
netic algorithms. In Foundations of genetic algorithms, Morgan Kaufmann,
San Mateo, CA: 94-101.
Takacs L (1954) Some investigations concerning recurrent stochastic processes of certain type, Magyar Tud. Akad. Mat. Kutato Int. Közl., 3: 115-128.
Trivedi K. (2002) Probability and statistics with reliability, queuing and computer
science applications, John Wiley, NY.
Ushakov I (1986) A universal generating function. Sov J Comput Syst Sci 24: 37-
49.
Ushakov I (1987) Optimal standby problem and a universal generating function.
Sov J Comput Syst Sci 25: 61-73.
Vavak F, Fogarty T (1996) A comparative study of steady state and generational
genetic algorithms for use in nonstationary environments. In Evolutionary
computing (Lecture notes in computer science; 1143), Springer, Brighton,
UK: 297-306.
Volik B et al. (1988) Methods of analysis and synthesis of control systems structures. Energoatomizdat, Moscow (in Russian).
Whitley D (1989) The GENITOR algorithm and selection pressure: why rank-based allocation of reproductive trials is best. Proc. 3rd International Conference on Genetic Algorithms. Ed. Schaffer D. Morgan Kaufmann: 116-121.
Wright D, Bierbaum R (2002) Nuclear weapon reliability evaluation methodology. SAND Report SAND2002-8133.
