02 Basic Tools and Techniques
D. Elmakias: Basic Tools and Techniques, Studies in Computational Intelligence (SCI) 111, 55–170
(2008)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2008
56 New Computational Methods in Power System Reliability
trajectory X(t, ς) that characterizes, for this case, the height of the flight as a function of time. This trajectory will differ from the trajectories of other flights because of the influence of many random factors (such as wind, temperature, pressure, etc.). In Fig. 2.1 one can see three different trajectories for three flights, which can be treated as three different realizations of the stochastic process. It should be noted that the cross-section of this stochastic process at any time t1 is a random variable with mean H. In power systems, such parameters as generating capacity, voltage, and frequency may be considered as stochastic processes.
Fig. 2.1. Trajectories of three flights (height of flight vs. time; the cross-section at time t1 is marked)
Time may be discrete or continuous. A discrete time may have a finite or infinite number of values; continuous time obviously has an infinite number of values. The values taken by the random variables constitute the state space. This state space, in turn, may be discrete or continuous. Therefore, stochastic processes may be classified into four categories according to whether their state space and time are continuous or discrete. A process with a discrete state space is usually called a chain.
The stochastic process X(t, ς ) has the following interpretations:
1. It is a family of functions X(t, ς ) where t and ς are variables.
2. It is a single time function or a realization (sample) of the given process
if t is a variable and ς is fixed.
3. It is a random variable equal to the state of the given process at time t
when t is fixed and ς is variable.
In a Poisson point process, the points are associated with certain events, and the number N(t1, t2) of points in an interval (t1, t2) of length t = t2 − t1 is a Poisson random variable with parameter λt, where λ is the mean occurrence rate of the events:

$$\Pr\{N(t_1,t_2)=k\}=\frac{e^{-\lambda t}(\lambda t)^k}{k!}.\qquad(2.2)$$
If the intervals (t1,t2) and (t3,t4) are not overlapping, then the random
variables N(t1,t2) and N(t3,t4) are independent. Using the points ti, one can
form the stochastic process X (t ) = N (0, t ) .
The Poisson process plays a special role in reliability analysis, comparable to the role of the normal distribution in probability theory. Many real physical situations can be successfully described with the help of Poisson processes.
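Formula (2.2) can be checked numerically; a minimal sketch (the rate λ and interval length are illustrative values):

```python
import math

def poisson_pmf(k: int, lam: float, t: float) -> float:
    """Pr{N(t1, t2) = k} for an interval of length t = t2 - t1, eq. (2.2)."""
    return math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)

lam = 2.0   # mean occurrence rate of events, 1/year (illustrative)
t = 1.5     # interval length, years (illustrative)

probs = [poisson_pmf(k, lam, t) for k in range(100)]
total = sum(probs)                              # probabilities sum to ~1
mean = sum(k * p for k, p in enumerate(probs))  # mean count is ~lambda*t

print(total, mean)
```

The two printed values confirm that the probabilities sum to one and that the expected number of events equals λt.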
A well-known type of point process is the so-called renewal process. This process can be described as a sequence of events, the intervals between which are independent and identically distributed random variables. In reliability theory, this kind of mathematical model is used to describe the flow of failures in time.
To every point process ti one can associate a sequence of random variables yn such that y1 = t1, y2 = t2 − t1,…, yn = tn − tn−1, where t1 is the first random point to the right of the origin. This sequence is called a renewal process. An example is the life history of items that are replaced as soon as they fail. In this case, yi is the total time the i-th item is in operation and ti is the time of its failure.
One can see a correspondence among the following three processes:
• a point process ti;
• a discrete-state stochastic process X(t) increasing (or decreasing) by 1 at the points ti;
• a renewal process consisting of the random variables yi such that tn = y1 + … + yn.
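The correspondence can be demonstrated by simulation; a sketch assuming exponentially distributed renewal intervals (so that the associated counting process is Poisson; the rate and horizon are illustrative values):

```python
import random

random.seed(7)

lam = 4.0        # failure rate, 1/year (illustrative)
horizon = 1.0    # observation interval (0, t], years
n_runs = 20000

def count_renewals(lam, horizon):
    """Renewal process: y_i are iid Exp(lam), t_n = y_1 + ... + y_n.
    Returns N(0, horizon) = number of points t_n in (0, horizon]."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)   # next renewal interval y_i
        if t > horizon:
            return n
        n += 1

mean_count = sum(count_renewals(lam, horizon) for _ in range(n_runs)) / n_runs
print(mean_count)  # for exponential intervals this approaches lam * horizon
```

With any other interval distribution the same function simulates a general renewal process; only the `expovariate` draw changes.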
A generalization of this type of process is the so-called alternating renewal process. This process consists of two types of independent and identically distributed random variables alternating with each other in turn. This type of process is convenient for the description of repairable systems. For such systems, periods of successful operation alternate with periods of idle time.
In this chapter, power system reliability models will be studied successively based on Markov processes, Markov reward processes, and semi-Markov processes. Markov processes are widely used for reliability analysis because in many practical cases the number of failures in an arbitrary time interval can be described by a Poisson process, and the time up to failure and the repair time are often exponentially distributed. It will be shown how power system reliability measures can be determined by using Markov process theory. It will also be shown how such power system reliability measures as the mean time up to failure, the mean number of failures in a time interval, and the mean sojourn time in a set of unacceptable states can be found by using Markov reward models. In practice, the basic assumptions about exponential distributions of times between failures and repair times often do not hold. In this case, a more complicated mathematical technique, the semi-Markov process, may be applied.
$$a_j(t)=\lim_{\Delta t\to 0}\frac{\pi_{jj}(t,t)-\pi_{jj}(t,t+\Delta t)}{\Delta t}=\lim_{\Delta t\to 0}\frac{1-\pi_{jj}(t,t+\Delta t)}{\Delta t}\qquad(2.5)$$

and for each j and i ≠ j a nonnegative continuous function aji(t):

$$a_{ji}(t)=\lim_{\Delta t\to 0}\frac{\pi_{ji}(t,t)-\pi_{ji}(t,t+\Delta t)}{-\Delta t}=\lim_{\Delta t\to 0}\frac{\pi_{ji}(t,t+\Delta t)}{\Delta t}.\qquad(2.6)$$

For any t ≥ 0,

$$a_{jj}=-a_j=-\lim_{\Delta t\to 0}\frac{1}{\Delta t}\sum_{i\neq j}\pi_{ji}(\Delta t)=-\sum_{i\neq j}a_{ji}.\qquad(2.8)$$
The state probabilities at instant t + Δt can be expressed through the state probabilities at instant t by the following equations:

$$p_j(t+\Delta t)=p_j(t)\Big[1-\sum_{i\neq j}a_{ji}\,\Delta t\Big]+\sum_{i\neq j}p_i(t)\,a_{ij}\,\Delta t,\qquad j=1,\dots,k.\qquad(2.11)$$

The second term is summed over all i ≠ j because the process can reach state j from any state i. Now one can rewrite (2.11) by using (2.8) and obtain

$$p_j(t+\Delta t)=p_j(t)\,[1+a_{jj}\,\Delta t]+\sum_{i\neq j}p_i(t)\,a_{ij}\,\Delta t,\qquad(2.12)$$

or

$$p_j(t+\Delta t)-p_j(t)=\sum_{i=1}^{k}p_i(t)\,a_{ij}\,\Delta t.\qquad(2.13)$$

Dividing both sides of (2.13) by Δt and letting Δt → 0, one obtains the system of differential equations

$$\frac{dp_j(t)}{dt}=\sum_{i=1}^{k}p_i(t)\,a_{ij},\qquad j=1,\dots,k.\qquad(2.14)$$
The system of differential equations (2.14) is used for finding the state probabilities pj(t), j = 1,…,k of the homogeneous Markov process when the initial conditions

$$p_j(0)=\alpha_j,\qquad j=1,\dots,k,\qquad(2.15)$$

are given. Introducing the vector p(t) = (p1(t),…,pk(t)) and the transition intensity matrix a = |aij|, in which the diagonal elements are defined as ajj = −aj, we can rewrite the system (2.14) in matrix notation:

$$\frac{d\mathbf{p}(t)}{dt}=\mathbf{p}(t)\,\mathbf{a}.\qquad(2.17)$$
Note that the sum of the matrix elements in each row equals zero: $\sum_{j=1}^{K}a_{ij}=0$ for each i, 1 ≤ i ≤ K.
When the system state transitions are caused by failures and repairs of
its elements, the corresponding transition intensities are expressed by the
element’s failure and repair rates.
The element's failure rate λ(t) is the instantaneous conditional density of the probability of failure of an initially operational element at time t, given that the element has not failed up to time t. Briefly, one can say that λ(t) is the time-to-failure conditional probability density function (pdf). It expresses the hazard of failure at time instant t under the condition that there was no failure up to time t. The failure rate of an element at time t is defined as
$$\lambda(t)=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\left[\frac{F(t+\Delta t)-F(t)}{R(t)}\right]=\frac{f(t)}{R(t)},\qquad(2.18)$$

where F(t) is the cdf of the time to failure of the element, f(t) is the pdf of the time to failure of the element, and R(t) = 1 − F(t) is the reliability function of the element.
For homogeneous Markov processes the failure rate does not depend on t and can be expressed as

$$\lambda=\mathrm{MTTF}^{-1},\qquad(2.19)$$

where MTTF is the mean time to failure. Similarly, the repair rate μ(t) is the time-to-repair conditional pdf. For homogeneous Markov processes the repair rate does not depend on t and can be expressed as

$$\mu=\mathrm{MTTR}^{-1},\qquad(2.20)$$

where MTTR is the mean time to repair.
In many applications, the long-run (final) or steady-state probabilities lim_{t→∞} pi(t) are of interest for the repairable element. If the long-run state probabilities exist, the process is called ergodic. For the final state probabilities, the computations become simpler. The set of differential equations (2.14) is reduced to a set of k algebraic linear equations because for the constant probabilities all time derivatives dpi(t)/dt, i = 1,…,k, are equal to zero.
Let the final state probabilities pi = lim_{t→∞} pi(t) exist. In this case, at steady state, all derivatives of the state probabilities on the left side of (2.14) are zero. So, in order to find the long-run probabilities, the following system of algebraic linear equations should be solved:

$$0=\sum_{i=1}^{k}p_i\,a_{ij},\qquad j=1,2,\dots,k,\qquad(2.21)$$

together with the normalization condition

$$\sum_{i=1}^{k}p_i=1.\qquad(2.22)$$
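Equations (2.21), together with the normalization condition Σ pi = 1, form a linear system that is easy to solve numerically; a minimal sketch using NumPy (the two-state intensity matrix below, with failure rate λ and repair rate μ, is an illustrative assumption):

```python
import numpy as np

def steady_state(a: np.ndarray) -> np.ndarray:
    """Solve 0 = sum_i p_i a_ij with sum_i p_i = 1 (eqs. (2.21)-(2.22))."""
    k = a.shape[0]
    # p a = 0 is a^T p^T = 0; replace one equation by the normalization.
    m = a.T.copy()
    m[-1, :] = 1.0
    rhs = np.zeros(k)
    rhs[-1] = 1.0
    return np.linalg.solve(m, rhs)

lam, mu = 2.0, 100.0   # failure and repair rates, 1/year (illustrative)
a = np.array([[-mu, mu],
              [lam, -lam]])   # state 1 = failed, state 2 = operating

p = steady_state(a)
print(p)  # [lam/(lam+mu), mu/(lam+mu)]
```

Replacing one balance equation by the normalization is necessary because the rows of a are linearly dependent (they sum to zero).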
$$\bar T_{ci}=\bar T_i+\bar T_{oi}.\qquad(2.23)$$

From the definition of the state frequency it follows that, in the long run, fi equals the reciprocal of the mean cycle time:

$$f_i=\frac{1}{\bar T_{ci}}.\qquad(2.24)$$

$$\bar T_i\,f_i=\frac{\bar T_i}{\bar T_{ci}}=p_i.\qquad(2.25)$$

Therefore,

$$f_i=\frac{p_i}{\bar T_i}.\qquad(2.26)$$
All conditional times Tij are distributed exponentially with the cumulative distribution functions $F_{ij}(t)=\Pr\{T_{ij}\le t\}=1-e^{-a_{ij}t}$. All transitions from state i are independent and, therefore, the cumulative distribution function of the unconditional time Ti of staying in state i can be computed as follows:

$$F_i(t)=1-\Pr\{T_i>t\}=1-\prod_{j\neq i}\Pr\{T_{ij}>t\}=1-\prod_{j\neq i}\big[1-F_{ij}(t)\big]=1-\prod_{j\neq i}e^{-a_{ij}t}=1-e^{-\sum_{j\neq i}a_{ij}\,t}.\qquad(2.28)$$

Hence, the mean time of staying in state i is

$$\bar T_i=\frac{1}{\sum_{j\neq i}a_{ij}}.\qquad(2.29)$$
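Relations (2.26) and (2.29) can be verified on a small closed-form example; a sketch for a two-state repairable element (the rates λ and μ are illustrative assumptions):

```python
lam, mu = 2.0, 100.0   # failure and repair rates, 1/year (illustrative)

# Intensity matrix entries: state 1 = failed, state 2 = operating
a = {(1, 2): mu, (2, 1): lam}

# Mean sojourn times, eq. (2.29): T_i = 1 / sum_{j != i} a_ij
T1 = 1.0 / a[(1, 2)]
T2 = 1.0 / a[(2, 1)]

# Steady-state probabilities (closed form for two states)
p1 = lam / (lam + mu)
p2 = mu / (lam + mu)

# State frequencies, eq. (2.26): f_i = p_i / T_i
f1 = p1 / T1
f2 = p2 / T2

print(f1, f2)  # equal: each visit to state 1 is paired with a visit to state 2
```

The equality f1 = f2 reflects the fact that in a two-state cycle every entrance into the failed state is followed by exactly one entrance into the operating state.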
Example 2.1
Consider a power generating unit that has k=4 possible performance levels
(generating capacities): g4=100 MW, g3=80 MW, g2=50 MW and g1=0
MW.
The unit has the following failure rates
λ4 ,3 = 2 year −1 , λ3, 2 = 1 year −1 , λ2,1 = 0.7 year −1 ,
Fig. 2.2. State-space diagram of the four-state generating unit (states 4, 3, 2, 1 with failure transitions λ4,3, λ4,2, λ4,1, λ3,2, λ3,1, λ2,1 and repair transitions μ3,4, μ2,4, μ1,4, μ2,3, μ1,3, μ1,2)
$$A_1(t)=p_4(t)+p_3(t)+p_2(t)=1-p_1(t),\qquad\text{for }g_1<w\le g_2.$$
Fig. 2.3. State probabilities p1(t), p2(t), p3(t) and instantaneous availability A1, A2, A3 = p4(t) of the four-state element (probability vs. time in years)
The final state or steady state probabilities can be found by solving the
system of linear algebraic equations (2.21) in which one of the equations is
replaced by the equation (2.22). In our example, the system takes the form
$$\begin{cases}(\lambda_{4,3}+\lambda_{4,2}+\lambda_{4,1})\,p_4=\mu_{3,4}\,p_3+\mu_{2,4}\,p_2+\mu_{1,4}\,p_1\\(\lambda_{3,2}+\lambda_{3,1}+\mu_{3,4})\,p_3=\lambda_{4,3}\,p_4+\mu_{2,3}\,p_2+\mu_{1,3}\,p_1\\(\lambda_{2,1}+\mu_{2,3}+\mu_{2,4})\,p_2=\lambda_{4,2}\,p_4+\lambda_{3,2}\,p_3+\mu_{1,2}\,p_1\\p_1+p_2+p_3+p_4=1\end{cases}$$
Solving this system, we obtain the final state probabilities:
$$p_2=\frac{\mu_{2,3}(a_1c_3-a_3c_1)+\mu_{2,4}(b_3c_1-b_1c_3)+(\lambda_{2,1}+\mu_{2,3}+\mu_{2,4})(a_1b_3-a_3b_1)}{a_1b_2c_3+a_2b_3c_1+a_3b_1c_2-a_3b_2c_1-a_1b_3c_2-a_2b_1c_3},$$

$$p_3=\frac{\lambda_{3,2}(a_1b_2-a_2b_1)+(\lambda_{3,2}+\lambda_{3,1}+\mu_{3,4})(a_1c_2-a_2c_1)+\mu_{3,4}(b_1c_2-b_2c_1)}{a_1b_2c_3+a_2b_3c_1+a_3b_1c_2-a_3b_2c_1-a_1b_3c_2-a_2b_1c_3},$$

$$p_4=1-p_1-p_2-p_3,$$

where

$$a_1=\mu_{1,4}-\mu_{2,4},\quad a_2=\mu_{1,4}-\mu_{3,4},\quad a_3=\mu_{1,4}+\lambda_{4,3}+\lambda_{4,2}+\lambda_{4,1},$$
$$b_1=\mu_{1,3}-\mu_{2,3},\quad b_2=\mu_{1,3}+\lambda_{3,2}+\lambda_{3,1}+\mu_{3,4},\quad b_3=\mu_{1,3}-\lambda_{4,3},$$
$$c_1=\mu_{1,2}+\lambda_{2,1}+\mu_{2,3}+\mu_{2,4},\quad c_2=\mu_{1,2}-\lambda_{3,2},\quad c_3=\mu_{1,2}-\lambda_{4,2}.$$
The steady-state availability of the element for constant demand w = 60 MW is

$$A=p_4+p_3.$$
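The closed-form expressions above are easy to cross-check numerically; a sketch for the four-state unit (the failure rates λ4,3 = 2, λ3,2 = 1, λ2,1 = 0.7 year⁻¹ are taken from the example; the remaining failure rates and all repair rates are illustrative assumptions, since they are not listed in the excerpt):

```python
import numpy as np

# Rates in 1/year
lam43, lam32, lam21 = 2.0, 1.0, 0.7        # from the example
lam42, lam41, lam31 = 0.3, 0.1, 0.2        # assumed for illustration
mu34, mu24, mu14 = 200.0, 150.0, 100.0     # assumed repair rates
mu23, mu13, mu12 = 120.0, 80.0, 60.0       # assumed repair rates

# Intensity matrix a (states 1..4 mapped to indices 0..3); rows sum to zero
a = np.array([
    [-(mu12 + mu13 + mu14), mu12, mu13, mu14],
    [lam21, -(lam21 + mu23 + mu24), mu23, mu24],
    [lam31, lam32, -(lam31 + lam32 + mu34), mu34],
    [lam41, lam42, lam43, -(lam41 + lam42 + lam43)],
])

# Steady state: p a = 0 with normalization sum(p) = 1
m = a.T.copy()
m[-1, :] = 1.0
rhs = np.zeros(4)
rhs[-1] = 1.0
p = np.linalg.solve(m, rhs)

A = p[3] + p[2]   # steady-state availability for demand w = 60 MW
print(p, A)
```

Because the assumed repair rates are much larger than the failure rates, the computed availability is close to one, which is the qualitative behavior the example describes.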
As can be seen in Fig. 2.3 and Fig. 2.4, the steady-state values of the state probabilities are reached within a short time. After 0.07 years, the process becomes stationary. For this reason, only the final solution is important in many practical cases. This is especially so for elements with a relatively long lifetime, as in our example if the unit lifetime is at least several years. However, the transient behavior may matter if one deals with highly responsible components and takes into account even small energy losses.
Fig. 2.4. Instantaneous mean capacity E(t) and capacity deficiency D(t) of the four-state generating unit
Example 2.2
This example presents the development of a Markov model for a typical generation unit based on historical failure data (Goldner and Lisnianski 2006).
The model represents a commercial 360 MW coal-fired generation unit and was designed to incorporate the unit's major derated/outage states, derived from analysis of the historical failure data (the unit's failures in the period from 1985 to 2003). The data acquisition methodology is based on the NERC-GADS Data Reporting Instructions (North American Electric Reliability Council, 2003, Generating Availability Data System. Data Reporting Instructions. Princeton, NJ).
A total of 1498 recorded events were classified as planned or unplanned outages and scheduled or forced deratings (each derating is reported independently and relates to a capacity reduction from the nominal level). Based on the data analysis, ten states were identified as most significant. Fig. 2.5 presents the distribution of the events as a function of the available capacity remaining after the event.
Table 2.1 provides a summary of failure statistics and reliability parameters for the unit. Each event that leads to capacity derating is treated as a failure. For example, there were 111 events (failures) that led to capacity derating from the nominal capacity En = 360 MW down to capacity levels ranging from 290 MW up to 310 MW. The average derated capacity level for this kind of failure is G8 = 303 MW. The mean time up to this kind of failure is MTTF10,8 = 1,464 hours (see Table 2.1).
Fig. 2.5. Distribution of the events as a function of the available capacity remaining after the event (number of events vs. capacity range, MW)
The state-space diagram for the coal-fired generation unit is presented in Fig. 2.6. According to Table 2.1, the diagram has 10 different states, ranging from state 10 with the nominal generating capacity of 360 MW down to state 1 (complete failure with a capacity of 0 MW).
Table 2.1. Summary of failure parameters

Level     Average capacity    Capacity      Number of    MTTR_i,10    MTTF_10,i
number i  level Gi (MW)       range (MW)    failures     (hr)         (hr)
1         0                   0             189          93.5         749
2         124                 (0, 170]      134          4.3          1,219
3         181                 (170, 190]    160          7.7          1,022
4         204                 (190, 220]    147          6.4          1,120
5         233                 (220, 240]    52           3.9          3,187
6         255                 (240, 270]    120          7.8          1,389
7         282                 (270, 290]    75           6.0          2,221
8         303                 (290, 310]    111          6.7          1,464
9         328                 (310, 360)    510          6.9          311
Fig. 2.6. State-space diagram for the coal-fired generation unit (states K = 10 (360 MW), 9 (328 MW), 8 (303 MW), 7 (282 MW), 6 (255 MW), 5 (233 MW), 4 (204 MW), 3 (181 MW), 2 (124 MW), 1 (0 MW); failure transitions λ10,k lead from state 10 to state k and repair transitions μk,10 return to state 10)
The system of differential equations for the state probabilities takes the form

$$\frac{dp_k(t)}{dt}=\lambda_{10,k}\,p_{10}(t)-\mu_{k,10}\,p_k(t),\qquad k=1,\dots,9,$$

$$\frac{dp_{10}(t)}{dt}=\sum_{k=1}^{9}\mu_{k,10}\,p_k(t)-\sum_{k=1}^{9}\lambda_{10,k}\,p_{10}(t).$$
Solving this system under initial conditions p10(0) =1; p9(0)=…=p1(0)=0
one obtains the state probabilities.
For example, for t=1month=744 hours: p10(744) =0.854, p9(744) =0.020,
p8(744) =0.004.
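The transient solution can be reproduced from Table 2.1 by taking λ10,i = 1/MTTF10,i and μi,10 = 1/MTTRi,10; a sketch using a simple RK4 integrator (the values come out close to the quoted p10(744) = 0.854, p9(744) = 0.020, p8(744) = 0.004; small differences can arise from rounding of the table data):

```python
import numpy as np

# From Table 2.1: MTTR_{i,10} and MTTF_{10,i} in hours for states i = 1..9
mttr = [93.5, 4.3, 7.7, 6.4, 3.9, 7.8, 6.0, 6.7, 6.9]
mttf = [749, 1219, 1022, 1120, 3187, 1389, 2221, 1464, 311]

K = 10
a = np.zeros((K, K))                    # intensity matrix, states 1..10
for i in range(9):
    a[9, i] = 1.0 / mttf[i]             # failure: state 10 -> state i+1
    a[i, 9] = 1.0 / mttr[i]             # repair:  state i+1 -> state 10
for i in range(K):
    a[i, i] = -a[i].sum()               # rows sum to zero

def integrate(a, p0, t_end, dt=0.05):
    """RK4 integration of dp/dt = p a (eq. (2.17))."""
    p = p0.copy()
    for _ in range(int(round(t_end / dt))):
        k1 = p @ a
        k2 = (p + 0.5 * dt * k1) @ a
        k3 = (p + 0.5 * dt * k2) @ a
        k4 = (p + dt * k3) @ a
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

p0 = np.zeros(K)
p0[9] = 1.0                             # initial condition p10(0) = 1
p = integrate(a, p0, 744.0)             # t = 1 month = 744 hours
print(p[9], p[8], p[7])                 # p10, p9, p8
```

By 744 hours the slowest mode (governed by the 93.5-hour repair of the full-outage state) has decayed, so the probabilities are already close to their steady-state values.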
The element instantaneous availability can be obtained for different constant demand levels as the sum of probabilities of the acceptable states, where the generating capacity is greater than or equal to the demand level. For demand lower than or equal to the capacity level in state k we obtain

$$A_k(t)=\sum_{e=k}^{10}p_e(t).$$

The corresponding graphs of Ak(t) for k = 5, k = 7, and k = 9 are presented in Fig. 2.7.
Fig. 2.7. Instantaneous availability Ak(t) for k = 5, 7, 9 (percent vs. time in hours)
The graph of the function E(t) (as a percentage of the nominal unit capacity) is presented in Fig. 2.8.

Fig. 2.8. Instantaneous mean capacity E(t)/En of the generating unit (percent vs. time in hours)
Based on the derated-state Markov model, we obtained realistic availability and mean performance indices for a typical coal-fired unit. This model allows an analyst to compare performance indices of different generation units based on their failure history data, which is important for improving the units' availability and for system planning support.
transits from state i to state j, a cost rij should be paid. These costs rii and rij are called rewards (a reward may also be negative when it characterizes losses or penalties). A Markov process with rewards associated with its states and/or transitions is called a Markov process with rewards. For these processes, an additional matrix r = |rij|, i, j = 1,…,K, of rewards is determined. If all rewards are zero, the process reduces to an ordinary Markov process.
Note that the rewards rii and rij have different dimensions. For example, if rij is measured in cost units, the reward rii is measured in cost units per time unit. The value that is of interest is the total expected reward accumulated up to time instant t under specified initial conditions.
Let Vi(t) be the total expected reward accumulated up to time t, given that the initial state of the process at time instant t = 0 is state i. According to Howard, the following system of differential equations must be solved under specified initial conditions in order to find the total expected rewards:
$$\frac{dV_i(t)}{dt}=r_{ii}+\sum_{\substack{j=1\\ j\neq i}}^{K}a_{ij}\,r_{ij}+\sum_{j=1}^{K}a_{ij}\,V_j(t),\qquad i=1,\dots,K.\qquad(2.31)$$
On the other hand, during time Δt the process can transit to some other state j ≠ i with probability πij(0, Δt) = aij Δt. In this case, the expected reward accumulated during the time interval [0, Δt] is rij. At the beginning of the time interval [Δt, Δt + t] the process is in state j. Therefore, the expected reward during this interval is Vj(t), and the expected reward during the interval [0, Δt + t] is Vi(Δt + t) = rij + Vj(t).
In order to obtain the total expected reward, one must sum the products of the rewards and the corresponding probabilities over all of the states. Thus, for small Δt one has
$$V_i(\Delta t+t)\approx(1+a_{ii}\,\Delta t)\,[\,r_{ii}\,\Delta t+V_i(t)\,]+\sum_{\substack{j=1\\ j\neq i}}^{K}a_{ij}\,\Delta t\,[\,r_{ij}+V_j(t)\,],\qquad(2.33)$$

for i = 1,…,K.
Neglecting the terms of order higher than Δt, one can rewrite the last expression as follows:

$$\frac{V_i(\Delta t+t)-V_i(t)}{\Delta t}=r_{ii}+\sum_{\substack{j=1\\ j\neq i}}^{K}a_{ij}\,r_{ij}+\sum_{j=1}^{K}a_{ij}\,V_j(t),\qquad\text{for }i=1,\dots,K.\qquad(2.34)$$
Example 2.3
As an example of a straightforward application of the method, we consider a power generator with nominal capacity L = 10^5 kW, where the generator has only complete failures with a failure rate of λ = 1 year⁻¹. The unsupplied-energy penalty is cp = 3 $ per kWh. After a generator failure, a repair is performed with a repair rate of μ = 200 year⁻¹. The mean cost of repair is cr = 50,000 $.
The problem is to evaluate the total expected cost CT associated with the unreliability of the generator during the time interval [0, T].
The state-space diagram for the power generating unit is presented in Fig. 2.9. It has only two states: perfect functioning with the nominal generating capacity (state 2) and complete failure, where the unit generating capacity is zero (state 1). The transitions from state 2 to state 1 are associated with failures and have intensity λ. If the generator is in state 1, the penalty cost cpL should be paid for each time unit (hour). Hence, the reward r11 associated with state 1 is r11 = cpL.
Fig. 2.9. State-space diagram of the power generating unit with rewards (states 2 and 1; transition intensities λ2,1, μ1,2; state rewards r22, r11; transition rewards r21, r12)
The transitions from state 1 to state 2 are associated with repairs and have an intensity of μ. The repair cost is cr; therefore, the reward associated with the transition from state 1 to state 2 is r12 = cr. There are no rewards associated with the transition from state 2 to state 1 or with remaining in state 2: r22 = r21 = 0.
The reward matrix takes the form

$$\mathbf{r}=|r_{ij}|=\begin{vmatrix}r_{11}&r_{12}\\ r_{21}&r_{22}\end{vmatrix}=\begin{vmatrix}c_pL&c_r\\ 0&0\end{vmatrix},$$

and the system (2.31) takes the form

$$\begin{cases}\dfrac{dV_1(t)}{dt}=c_pL+\mu c_r-\mu V_1(t)+\mu V_2(t)\\[2mm]\dfrac{dV_2(t)}{dt}=\lambda V_1(t)-\lambda V_2(t)\end{cases}$$
The total expected cost in the time interval [0, t] associated with the unreliability of the generator is equal to the expected reward V2(t) accumulated up to time t, given that the initial state of the process at time instant t = 0 is state 2.
Using the Laplace-Stieltjes transform under the initial conditions V1(0) = V2(0) = 0, we transform the system of differential equations into the following system of linear algebraic equations:

$$\begin{cases}s\,v_1(s)=\dfrac{c_pL+\mu c_r}{s}-\mu v_1(s)+\mu v_2(s)\\[2mm]s\,v_2(s)=\lambda v_1(s)-\lambda v_2(s)\end{cases}$$
For relatively large T the term e^{−(λ+μ)T} can be neglected and the following approximation can be used:

$$C_T\approx\frac{\lambda\,(c_pL+\mu c_r)}{\mu+\lambda}\,T.$$

Therefore, for large T, the total expected reward is a linear function of time, and the coefficient

$$c_{un}=\frac{\lambda\,(c_pL+\mu c_r)}{\mu+\lambda}$$

defines the annual expected cost associated with generating unit unreliability. For the data given in the example, cun = 13.14·10^6 $/year.
In a further extension of the variable demand model, the demand process can be approximated by defining a set of discrete values {w1, w2,…,wm} representing different possible demand levels and determining the transition intensities between each pair of demand levels (usually derived from the demand statistics). A realization of this stochastic demand process is shown in Fig. 2.11.

Fig. 2.10. Two-level demand model (A. Approximation of the actual demand curve; B. State-space diagram)
Fig. 2.11. Discrete variable demand (A. Realization of the stochastic demand process; B. State-space diagram with transition intensities v1,m,…,vm,1)
So, for the general case we assume that the demand W(t) is also a random process that can take discrete values from the set w = {w1,…,wM}. The desired relation between the power system capacity and the demand at any time instant t can be expressed by the acceptability function Φ(G(t), W(t)). The acceptable system states correspond to Φ(G(t), W(t)) ≥ 0 and the unacceptable states correspond to Φ(G(t), W(t)) < 0. The last inequality defines the system failure criterion. Usually in power systems the system generating capacity should be equal to or exceed the demand. Therefore, in such cases the acceptability function takes the following form:

$$\Phi(G(t),W(t))=G(t)-W(t).\qquad(2.39)$$
Fig. 2.12. Markov model for power system output performance (states K,…,1 with performance levels gK,…,g1 and transition intensities ai,j)
Fig. 2.13. Markov model for the demand (states 1,…,m with demand levels w1,…,wm and transition intensities bi,j)
These indices for each state are presented in the lower part of the corresponding ellipse. The combined model is considered to have mK states. Each state corresponds to a unique combination of demand level wi and element performance gj and is numbered according to the following rule:

$$z=(i-1)K+j,\qquad(2.41)$$

where z is the state number in the combined capacity-demand model, z = 1,…,mK; i is the number of the demand level, i = 1,…,m; and j is the number of the MSS output performance level, j = 1,…,K.
Fig. 2.14. Combined capacity-demand model (mK states; state z = (i−1)K + j combines demand wi with performance gj; the unacceptable states are marked)
Fig. 2.15. Example of a combined capacity-demand model with rewards (acceptable states 2, 3, 5, 6; unacceptable states 1, 4; transition intensities/rewards c36/r36, c63/r63, c14/r14, c41/r41 and state rewards r11, r33, r44, r66)
To assess the average availability A(T) of a power system, the Markov reward model can be used. The rewards in matrix r for the combined capacity-demand model can be determined in the following manner:
• The rewards associated with all acceptable states should be defined as 1.
• The rewards associated with all unacceptable states should be zeroed, as well as all the rewards associated with the transitions.
For the example in Fig. 2.15 this means that r22 = r33 = r55 = r66 = 1. All other rewards are zeros.
The mean reward Vi(T) accumulated during the interval [0, T] defines the part of the time that the power system spends in the set of acceptable states in the case where state i is the initial state. This reward should be found as a solution of the general system (2.31), which for the combined capacity-demand model takes the following form:

$$\frac{dV_i(t)}{dt}=r_{ii}+\sum_{\substack{j=1\\ j\neq i}}^{mK}c_{ij}\,r_{ij}+\sum_{j=1}^{mK}c_{ij}\,V_j(t),\qquad i=1,\dots,mK.\qquad(2.46)$$
After solving (2.46) and finding Vi(t), the power system average availability can be obtained for every initial state i = 1,…,K:

$$\bar A_i(T)=\frac{V_i(T)}{T}.\qquad(2.47)$$

Usually the state K (with the greatest capacity level and minimum demand) is determined as the initial state.
Mean number Nfi(T) of power system failures during the time interval [0, T], if state i is the initial state: this measure can be treated as the mean number of power system entrances into the set of unacceptable states during the time interval [0, T]. For its computation, the rewards associated with each transition from the set of acceptable states to the set of unacceptable states should be defined as 1. All other rewards should be zeroed.
For the example in Fig. 2.15 it means that r31 = r21 = r64 = r54 = 1. All other rewards are zeros.
In this case, the mean accumulated reward Vi(T) obtained by solving (2.31) provides the mean number of entrances into the unacceptable area during the time interval [0, T]:

$$N_{fi}(T)=V_i(T).\qquad(2.48)$$
$$f_{fi}(T)=\frac{1}{N_{fi}(T)}\qquad(2.49)$$
Mean time to failure (MTTF) is the mean time up to the instant when the system enters the subset of unacceptable states for the first time. For its computation, the combined performance-demand model should be transformed: all transitions that return the power system from the unacceptable states should be forbidden, as in this case all unacceptable states should be treated as absorbing states. For the example in Fig. 2.15 it means that c13 = c46 = 0.
In order to assess the MTTF of a power system, the rewards in matrix r for the transformed performance-demand model should be determined as follows:
• The rewards associated with all acceptable states should be defined as 1.
• The rewards associated with the unacceptable (absorbing) states should be zeroed, as well as all rewards associated with transitions.
For the example in Fig. 2.15 it means that r22 = r33 = r55 = r66 = 1. All other rewards are zeros.
In this case, the mean accumulated reward Vi(t) defines the mean time accumulated up to the first entrance into the subset of unacceptable states (MTTF), if state i is the initial state.
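Instead of integrating the reward equations, the MTTF can also be obtained from a linear system: with the unacceptable states made absorbing, the mean times mi to absorption satisfy a_TT m = −1, where a_TT is the intensity matrix restricted to the acceptable (transient) states. This is a standard continuous-time Markov chain identity rather than a formula quoted in the text; a sketch on a three-state chain with illustrative rates:

```python
import numpy as np

# States: 3 (nominal), 2 (derated), 1 (failed, treated as absorbing).
lam32, lam21, mu23 = 2.0, 1.0, 100.0   # 1/year, illustrative

# Intensity matrix restricted to the acceptable (transient) states {3, 2}:
a_tt = np.array([
    [-lam32, lam32],            # state 3: leaves only to state 2
    [mu23, -(lam21 + mu23)],    # state 2: repair to 3 or failure to 1
])

# Mean times to absorption: a_tt m = -1
m = np.linalg.solve(a_tt, -np.ones(2))
mttf_from_3 = m[0]
print(mttf_from_3)  # MTTF starting from the nominal state
```

Hand-solving the two equations for these rates gives m2 = 51 and m3 = 51.5 years, which the solver reproduces.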
Probability of power system failure during the time interval [0, T]: the combined capacity-demand model should be transformed as in the previous case, i.e., all unacceptable states should be treated as absorbing states and, therefore, all transitions that return the system from unacceptable states should be forbidden. As in the previous case, for the example in Fig. 2.15, c13 = c46 = 0.
Rewards associated with all transitions to the absorbing states should be defined as 1. All other rewards should be zeroed. For the example in Fig. 2.15 it means that r21 = r31 = r54 = r64 = 1. All other rewards are zeros.
The mean accumulated reward Vi(T) in this case defines the probability
of system failure during the time interval [0, T], if the state i is the initial
state. Therefore, the power system reliability function can be obtained as:
Ri (T ) = 1 − Vi (T ) , where i=1,…,K . (2.51)
Example 2.4
Consider reliability evaluation for a power system whose output generating capacity is represented by a continuous-time Markov chain with 3 states. The corresponding capacity levels for states 1, 2, 3 are g1 = 0, g2 = 70, g3 = 100 MW, respectively, and the transition intensity matrix is the following:

$$\mathbf{a}=|a_{ij}|=\begin{vmatrix}-500&0&500\\ 0&-1000&1000\\ 1&10&-11\end{vmatrix}.$$

All intensities aij are expressed in units of 1/year.
The corresponding capacity model Ch1 is graphically shown in Fig.
2.17A.
The demand for the power system is also represented by a continuous
time Markov chain with three possible levels w1=0, w2=60, w3=90. This
demand is graphically shown in Fig. 2.16.
Fig. 2.16. Daily demand curve (peak level w2, duration Tp; low level w1 = 0, duration TL; 24-hour cycles)

Daily peaks w2 and w3 occur twice a week and five times a week, respectively, and the mean duration of a daily peak is Tp = 8 hours. The mean duration of the low demand level w1 = 0 is TL = 24 − 8 = 16 hours.
According to the approach presented in (Endrenyi 1979), which is justified for a power system, the peak duration and the low-level duration are assumed to be exponentially distributed random values.
The Markov demand model Ch2 is shown in Fig. 2.17B. States 1, 2 and 3 represent the corresponding demand levels w1, w2 and w3. The transition intensities are as follows:
$$b_{21}=b_{31}=\frac{1}{T_p}=\frac{1}{8}\ \text{hours}^{-1}=1110\ \text{years}^{-1},$$

$$b_{12}=\frac{2}{7}\cdot\frac{1}{T_L}=\frac{2}{7}\cdot\frac{1}{16}=0.0179\ \text{hours}^{-1}=156\ \text{years}^{-1},$$

$$b_{13}=\frac{5}{7}\cdot\frac{1}{T_L}=\frac{5}{7}\cdot\frac{1}{16}=0.0446\ \text{hours}^{-1}=391\ \text{years}^{-1}.$$
There are no transitions between states 2 and 3; therefore, b23 = b32 = 0. Taking into account that the sum of the elements in each row of the matrix must be zero, we can find the diagonal elements. Therefore, the transition intensity matrix b for the demand takes the form:
$$\mathbf{b}=|b_{ij}|=\begin{vmatrix}-547&156&391\\ 1110&-1110&0\\ 1110&0&-1110\end{vmatrix}.$$
All intensities bij are also expressed in 1/year. The acceptability function is given as Φ(G(t), W(t)) = G(t) − W(t). Therefore, a failure is treated as an entrance into a state where the acceptability function is negative, i.e., G(t) < W(t).
By using the suggested method, we find the mean number Nf(T) of system failures during the time interval [0, T], if the state with maximal generating capacity and minimal demand level is given as the initial state.
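The combined capacity-demand model can be assembled mechanically: for independent capacity and demand chains with intensity matrices a and b, the combined intensity matrix under the numbering z = (i−1)K + j of (2.41) is the Kronecker sum kron(b, I_K) + kron(I_m, a). A sketch for the data of Example 2.4 (the steady-state availability computed at the end is a by-product of this construction, not a number quoted in the text):

```python
import numpy as np

# Capacity chain (Example 2.4): states 1..3 with g = 0, 70, 100 MW
a = np.array([[-500.0, 0.0, 500.0],
              [0.0, -1000.0, 1000.0],
              [1.0, 10.0, -11.0]])
g = np.array([0.0, 70.0, 100.0])

# Demand chain: states 1..3 with w = 0, 60, 90 MW
b = np.array([[-547.0, 156.0, 391.0],
              [1110.0, -1110.0, 0.0],
              [1110.0, 0.0, -1110.0]])
w = np.array([0.0, 60.0, 90.0])

K, m = len(g), len(w)

# Combined model: state z = (i-1)K + j (demand i, capacity j), eq. (2.41)
c = np.kron(b, np.eye(K)) + np.kron(np.eye(m), a)

# Acceptability function Phi = G - W: state acceptable iff g_j >= w_i
acceptable = np.array([g[j] >= w[i] for i in range(m) for j in range(K)])

# Steady-state probabilities of the combined chain
M = c.T.copy()
M[-1, :] = 1.0
rhs = np.zeros(m * K)
rhs[-1] = 1.0
p = np.linalg.solve(M, rhs)

availability = p[acceptable].sum()
print(availability)
```

Because the two chains are independent, the stationary distribution of the combined chain is the product of the two stationary distributions, which gives a quick sanity check on the Kronecker-sum construction.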
Fig. 2.17. Output capacity model (A) and demand model (B)
Fig. 2.18. Combined capacity-demand model (nine states wi gj, i, j = 1, 2, 3; the unacceptable states are those with G < W)
Fig. 2.19. Mean number of the generator entrances into the set of unacceptable states, Nf1(t) and Nf3(t) (number of entrances vs. time in days)
Here πjk is the probability that the system will transit from state j with performance rate gj to state k with performance rate gk. The probabilities πjk, j, k ∈ {1,…,K}, define the one-step transition probability matrix π = |πjk| for the discrete-time Markov chain G(tm), where transitions from one state to another may happen only at discrete time moments t1, t2,…, tm−1, tm,…. To each πjk ≠ 0 there corresponds a random variable T*jk with the cumulative distribution function

$$F^*_{jk}(t)=\Pr\{T^*_{jk}\le t\}.\qquad(2.53)$$
Fig. 2.20. Realization of the semi-Markov process G(t) (performance levels gi, gj, gk, gl; sojourn times Tij, Tjk, Tkl; transition probabilities πij, πjk, πkl)
This process can be continued over an arbitrary period T. Each time the
next state and the corresponding sojourn time in the current state must be
chosen independently of the previous history of the process. The described
performance stochastic process G(t) is called a semi-Markov process.
In order to define the semi-Markov process, one has to define the initial state of the process and the matrices π = |πjk| and F*(t) = |F*ij(t)| for i, j ∈ {1,…,K}. The discrete-time Markov chain G(tm) with one-step transition probabilities πjk, j, k ∈ {1,…,K}, is called an imbedded Markov chain.
Note that the process in which the arbitrarily distributed times between transitions are ignored and only the time instants of transitions are of interest is a homogeneous discrete-time Markov chain. However, in the general case, if one takes into account the sojourn times in different states, the process does not have the Markov property. (It remains a Markov process only if all the sojourn times are distributed exponentially.) Therefore, the process can be considered a Markov process only at the time instants of transitions. This explains why the process was named semi-Markov.
The cdf F*ij(t) of the conditional sojourn time in state i can be obtained as

$$F^*_{ij}(t)=\frac{1}{\pi_{ij}}\,Q_{ij}(t).\qquad(2.55)$$

The mean unconditional sojourn time in state i can be obtained as

$$\bar T_i=\int_0^\infty t\,f_i(t)\,dt=\sum_{j=1}^{K}\pi_{ij}\,\bar T^*_{ij},\qquad(2.58)$$

where T̄*ij is the mean conditional sojourn time in state i given that the system transits from state i to state j.
The kernel matrix Q(t) and the initial state completely define the sto-
chastic behavior of semi-Markov process.
In practice, when MSS reliability is studied, in order to find the kernel
matrix for a semi-Markov process, one can use the following considera-
tions (Lisnianski and Yeager 2000). Transitions between different states
are usually executed as consequences of such events as failures, repairs,
inspections, etc. For every type of event, the cdf of time between them is
known. The transition is realized according to the event that occurs first in the competition among the events.
[Fig. 2.21. State space diagram: three possible transitions from the initial state 0 to states 1, 2 and 3.]
In Fig. 2.21, one can see a state space diagram for the simplest semi-
Markov system with three possible transitions from the initial state 0. If the
event of type 1 is the first one, the system transits to state 1. The time be-
tween events of type 1 is random variable T0,1 distributed according to cdf
F0,1(t). If the event of type 2 occurs earlier than other events, the system
transits from the state 0 to state 2. The random variable T0,2 that defines the
time between events of type 2 is distributed according to cdf F0,2(t). At last,
if the event of type 3 occurs first, the system transits from state 0 to state 3.
The time between events of type 3 is random variable T0,3 distributed according to cdf F0,3(t). The probability Q01(t) that the system transits from state 0 to state 1 up to time t (the initial time is t = 0) is the probability that T0,1 ≤ t and that T0,1 is smaller than both T0,2 and T0,3. Hence, we have

Q_{01}(t) = \Pr\{(T_{0,1} \le t) \,\&\, (T_{0,2} > T_{0,1}) \,\&\, (T_{0,3} > T_{0,1})\} = \int_0^t [1 - F_{0,2}(u)][1 - F_{0,3}(u)]\,dF_{0,1}(u) .    (2.59)
Similarly,

Q_{02}(t) = \int_0^t [1 - F_{0,1}(u)][1 - F_{0,3}(u)]\,dF_{0,2}(u) ,    (2.60)

Q_{03}(t) = \int_0^t [1 - F_{0,1}(u)][1 - F_{0,2}(u)]\,dF_{0,3}(u) .    (2.61)
For the semi-Markov process that describes the system with the state
space diagram presented in Fig. 2.21, we have the following kernel matrix
Q_{02}(t) = \begin{cases} \dfrac{\lambda_{0,2}}{\lambda_{0,1}+\lambda_{0,2}}\big[1 - e^{-(\lambda_{0,1}+\lambda_{0,2})t}\big], & t < T_c \\[4pt] \dfrac{\lambda_{0,2}}{\lambda_{0,1}+\lambda_{0,2}}\big[1 - e^{-(\lambda_{0,1}+\lambda_{0,2})T_c}\big], & t \ge T_c \end{cases}    (2.65)

Q_{03}(t) = \begin{cases} 0, & t < T_c , \\ e^{-(\lambda_{0,1}+\lambda_{0,2})T_c}, & t \ge T_c . \end{cases}    (2.66)
One-step transition probabilities for the embedded Markov chain are de-
fined according to (2.54):
\pi_{01} = \frac{\lambda_{0,1}}{\lambda_{0,1}+\lambda_{0,2}}\big[1 - e^{-(\lambda_{0,1}+\lambda_{0,2})T_c}\big] ,    (2.68)

\pi_{02} = \frac{\lambda_{0,2}}{\lambda_{0,1}+\lambda_{0,2}}\big[1 - e^{-(\lambda_{0,1}+\lambda_{0,2})T_c}\big] ,    (2.69)

\pi_{03} = e^{-(\lambda_{0,1}+\lambda_{0,2})T_c} .    (2.70)
According to (2.55), we obtain the cdf of conditional sojourn times
F^*_{01}(t) = \begin{cases} \dfrac{1 - e^{-(\lambda_{0,1}+\lambda_{0,2})t}}{1 - e^{-(\lambda_{0,1}+\lambda_{0,2})T_c}}, & t < T_c \\[4pt] 1, & t \ge T_c \end{cases}    (2.71)

F^*_{02}(t) = \begin{cases} \dfrac{1 - e^{-(\lambda_{0,1}+\lambda_{0,2})t}}{1 - e^{-(\lambda_{0,1}+\lambda_{0,2})T_c}}, & t < T_c \\[4pt] 1, & t \ge T_c \end{cases}    (2.72)

F^*_{03}(t) = \begin{cases} 0, & t < T_c , \\ 1, & t \ge T_c . \end{cases}    (2.73)
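As a quick sanity check on (2.68)–(2.70): the one-step probabilities out of state 0 must sum to one for any rates and any truncation time Tc. The numeric values below are arbitrary illustrations.

```python
import math

lam01, lam02, Tc = 1e-3, 5e-4, 100.0   # arbitrary illustrative values

s = lam01 + lam02
pi01 = lam01 / s * (1 - math.exp(-s * Tc))   # (2.68)
pi02 = lam02 / s * (1 - math.exp(-s * Tc))   # (2.69)
pi03 = math.exp(-s * Tc)                     # (2.70)
```

Since pi01 + pi02 = 1 − e^{−sTc}, the three probabilities always sum to exactly one.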
\theta_{ij}(t) = \delta_{ij}\,[1 - F_i(t)] + \sum_{k=1}^{K} \int_0^t q_{ik}(\tau)\,\theta_{kj}(t-\tau)\,d\tau ,    (2.74)

where θ_{ij}(t) is the probability that the process occupying state i at time t = 0 will be in state j at time t,

q_{ik}(\tau) = \frac{dQ_{ik}(\tau)}{d\tau} ,    (2.75)

F_i(t) = \sum_{j=1}^{K} Q_{ij}(t) ,    (2.76)

\delta_{ij} = \begin{cases} 1, & i = j , \\ 0, & i \ne j . \end{cases}    (2.77)
The system of linear integral equations (2.74) is the main system in the
theory of semi-Markov processes. By solving this system, one can find all
the probabilities θ ij (t ) , i, j ∈ {1,..., K } for the semi-Markov process with a
given kernel matrix Qij (t ) and given initial state.
Based on the probabilities θ_{ij}(t), i, j ∈ {1,…, K}, important reliability indices can easily be found. Suppose that the system states are ordered according to their performance rates, g_K ≥ g_{K-1} ≥ … ≥ g_2 ≥ g_1, and that the demand w is constant with g_m ≥ w > g_{m-1}. State K with performance rate g_K is the initial
state. In this case system instantaneous availability is treated as the prob-
ability that a system starting at instant t=0 from the state K will be at in-
stant t ≥ 0 in any state gK, … ,gm. Hence, we obtain
A(t, w) = \sum_{i=m}^{K} \theta_{Ki}(t) .    (2.78)
The mean system instantaneous output performance and the mean in-
stantaneous performance deficiency can be obtained, respectively, as
E_t = \sum_{i=1}^{K} g_i\,\theta_{Ki}(t)    (2.79)

and

D_t(w) = \sum_{i=1}^{m-1} (w - g_i)\,\theta_{Ki}(t) .    (2.80)
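The system (2.74) lends itself to a simple numerical treatment: discretize time, replace the convolution integrals by sums over the kernel increments, and compute θij step by step. The sketch below does this for a hypothetical two-state element with exponential sojourn times (state 1 = up, state 0 = down); in that special case the semi-Markov process reduces to a Markov one, so θ11(t) can be checked against the classical availability formula. Rates, step size and horizon are assumptions for illustration.

```python
import math

lam, mu = 0.1, 0.5          # failure and repair rates of the illustrative element
h, n = 0.02, 250            # time step and number of steps (t_max = 5)

# kernel Q[i][k](t); state 0 = down, state 1 = up
Q = [[lambda t: 0.0, lambda t: 1 - math.exp(-mu * t)],
     [lambda t: 1 - math.exp(-lam * t), lambda t: 0.0]]
K = 2
F = [lambda t, i=i: sum(Q[i][k](t) for k in range(K)) for i in range(K)]

theta = [[[0.0] * (n + 1) for _ in range(K)] for _ in range(K)]
for m in range(n + 1):
    t = m * h
    for i in range(K):
        for j in range(K):
            s = (1.0 - F[i](t)) if i == j else 0.0
            for k in range(K):
                for l in range(1, m + 1):   # discretized convolution of (2.74)
                    dQ = Q[i][k](l * h) - Q[i][k]((l - 1) * h)
                    s += dQ * theta[k][j][m - l]
            theta[i][j][m] = s

availability = theta[1][1][n]   # probability of being up at t = 5, starting up
exact = mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * n * h)
```

The discretization error is of order h, so the computed θ11(t) agrees with the exact exponential-case availability only up to a small tolerance.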
In the general case, the system of integral equations (2.74) can be solved
only by numerical methods. For some of the simplest cases the method of
Laplace-Stieltjes transform can be applied in order to derive an analytical
solution of the system. As was done for Markov models, we designate a
Laplace-Stieltjes transform of function f(x) as
\tilde{f}(s) = L\{f(x)\} = \int_0^{\infty} e^{-sx} f(x)\,dx    (2.81)
and, therefore,
\tilde{\Psi}_i(s) = \frac{1}{s}\big[1 - \tilde{f}_i(s)\big] .    (2.84)
The system of algebraic equations (2.82) defines Laplace-Stieltjes trans-
form of probabilities θ ij (t ) , i, j ∈ {1,..., K } as a function of main parameters
of a semi-Markov process.
By solving this system, one can also find steady state probabilities. The
detailed investigation is out of scope of this book and we only give here
the resulting formulae for computation of steady state probabilities. Steady
state probabilities θ ij = lim θ ij (t ) (if they exist) do not depend on the initial
t →∞
state of the process i and for their designation, one can use only one index:
θ_j. It is proven that

\theta_j = \frac{p_j T_j}{\sum_{j=1}^{K} p_j T_j} ,    (2.85)
\begin{cases} p_j = \sum_{i=1}^{K} p_i \pi_{ij} , & j = 1, \dots, K , \\[4pt] \sum_{i=1}^{K} p_i = 1 . \end{cases}    (2.86)
Note that the first K equations in (2.86) are linearly dependent and we cannot solve the system without the last equation \sum_{i=1}^{K} p_i = 1.
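Equations (2.85)–(2.86) can be exercised on a small example. For the three-state chain below (the transition matrix and mean sojourn times are illustrative assumptions), the embedded-chain stationary distribution p is found by fixed-point iteration, and the time-based probabilities θj then follow from (2.85).

```python
# illustrative embedded chain (rows sum to 1) and mean sojourn times
P = [[0.0, 1.0, 0.0],
     [0.3, 0.0, 0.7],
     [0.0, 1.0, 0.0]]
T = [10.0, 2.0, 5.0]
K = 3

p = [1.0 / K] * K
for _ in range(500):
    # lazy iteration p <- (p + pP)/2 avoids oscillation in periodic chains
    q = [sum(p[i] * P[i][j] for i in range(K)) for j in range(K)]
    p = [(p[j] + q[j]) / 2 for j in range(K)]

den = sum(p[j] * T[j] for j in range(K))
theta = [p[j] * T[j] / den for j in range(K)]   # Eq. (2.85)
```

For this chain p = (0.15, 0.5, 0.35), so θ weights each state by the time actually spent there: state 1, although rarely visited, has a long mean sojourn and a large θ1.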
In order to find the reliability function, the additional semi-Markov
model should be built in analogy with the corresponding Markov models:
all states corresponding to the performance rates lower than constant de-
mand w should be united in one absorbing state with the number 0. All
transitions that return the system from this absorbing state should be for-
bidden. The reliability function is obtained from this new model as
R(w, t) = 1 − θ_{K0}(t).
Example 2.5
Consider an electric generator that has four possible performance (generat-
ing capacity) levels g4=100 MW, g3=70 MW, g2=50 MW and g1=0. The
constant demand is w=60 MW. The best state with performance rate
g4=100 MW is the initial state.
Only minor failures and minor repairs are possible. Times to failures are
distributed exponentially with the following parameters:

\lambda_{4,3} = 10^{-3}\ \text{hours}^{-1}, \quad \lambda_{3,2} = 5 \times 10^{-4}\ \text{hours}^{-1}, \quad \lambda_{2,1} = 2 \times 10^{-4}\ \text{hours}^{-1} .
Hence, times to failures T4,3, T3,2, T2,1 are random variables distributed
according to the corresponding cdf:
F_{4,3}(t) = 1 - e^{-\lambda_{4,3} t}, \quad F_{3,2}(t) = 1 - e^{-\lambda_{3,2} t}, \quad F_{2,1}(t) = 1 - e^{-\lambda_{2,1} t} .
Repair times are normally distributed. T3,4 has mean time to repair
T3, 4 = 240 hours and standard deviation σ 3,4 =16 hours, T2,3 has a mean
time to repair T2,3 = 480 hours and standard deviation σ 2,3 = 48 hours, T1,2
has a mean time to repair T1, 2 = 720 hours and standard deviation
σ 1, 2 = 120 hours. Hence, the cdf of random variables T3,4, T2,3 and T1,2 are
respectively
F_{3,4}(t) = \frac{1}{\sqrt{2\pi \sigma_{3,4}^2}} \int_0^t \exp\!\Big[-\frac{(u - T_{3,4})^2}{2\sigma_{3,4}^2}\Big]\,du ,

F_{2,3}(t) = \frac{1}{\sqrt{2\pi \sigma_{2,3}^2}} \int_0^t \exp\!\Big[-\frac{(u - T_{2,3})^2}{2\sigma_{2,3}^2}\Big]\,du ,

F_{1,2}(t) = \frac{1}{\sqrt{2\pi \sigma_{1,2}^2}} \int_0^t \exp\!\Big[-\frac{(u - T_{1,2})^2}{2\sigma_{1,2}^2}\Big]\,du .
[Fig. 2.22. Evolution of the generator in the state space (A) and the state space diagram of the corresponding semi-Markov process (B): transitions between states 4, 3, 2 and 1 governed by the cdfs F_{i,j}(t) in (A) and by the kernel elements Q_{i,j}(t) in (B).]
in which

Q_{1,2}(t) = F_{1,2}(t) , \quad Q_{2,1}(t) = \int_0^t [1 - F_{2,3}(u)]\,dF_{2,1}(u) ,

Q_{2,3}(t) = \int_0^t [1 - F_{2,1}(u)]\,dF_{2,3}(u) , \quad Q_{3,2}(t) = \int_0^t [1 - F_{3,4}(u)]\,dF_{3,2}(u) ,

Q_{3,4}(t) = \int_0^t [1 - F_{3,2}(u)]\,dF_{3,4}(u) , \quad Q_{4,3}(t) = F_{4,3}(t) .
\pi_{23} = \int_0^{\infty} [1 - F_{2,1}(u)]\,dF_{2,3}(u) , \quad \pi_{32} = \int_0^{\infty} [1 - F_{3,4}(u)]\,dF_{3,2}(u) ,

\pi_{34} = \int_0^{\infty} [1 - F_{3,2}(u)]\,dF_{3,4}(u) , \quad \pi_{43} = F_{4,3}(\infty) = 1 .
In order to find steady state probabilities pj, j=1,2,3,4 for the embedded
Markov chain, we have to solve the system of algebraic equations (2.86)
that takes the form
\begin{cases} p_1 = \pi_{21} p_2 \\ p_2 = \pi_{12} p_1 + \pi_{32} p_3 \\ p_3 = \pi_{23} p_2 + \pi_{43} p_4 \\ p_4 = \pi_{34} p_3 \\ p_1 + p_2 + p_3 + p_4 = 1 \end{cases}
\theta_3 = \frac{p_3 T_3}{\sum_{j=1}^{4} p_j T_j} = 0.1919 , \quad \theta_4 = \frac{p_4 T_4}{\sum_{j=1}^{4} p_j T_j} = 0.7528 .
The steady state availability of the generator for the given constant de-
mand is
A( w) = θ 3 + θ 4 = 0.9447 .
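The figures θ3 = 0.1919, θ4 = 0.7528 and A = 0.9447 can be reproduced numerically from the distributions given in this example. The sketch below (standard library only) computes the one-step probabilities πij by numerical integration of the competing-event formulas, the embedded-chain stationary distribution by lazy power iteration, and the mean sojourn times as integrals of the survival functions; integration limits and iteration counts are pragmatic choices.

```python
import math

lam43, lam32, lam21 = 1e-3, 5e-4, 2e-4        # failure rates, 1/h
m34, s34 = 240.0, 16.0                        # normal repair-time parameters
m23, s23 = 480.0, 48.0
m12, s12 = 720.0, 120.0

def ncdf(t, m, s):
    return 0.5 * (1.0 + math.erf((t - m) / (s * math.sqrt(2.0))))

def npdf(t, m, s):
    return math.exp(-(t - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def integral(f, a, b, n=20000):               # trapezoid rule
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

# competition of events: pi34 = Pr{repair wins in state 3}, pi23 likewise in state 2
pi34 = integral(lambda t: math.exp(-lam32 * t) * npdf(t, m34, s34), 0.0, 2000.0)
pi23 = integral(lambda t: math.exp(-lam21 * t) * npdf(t, m23, s23), 0.0, 3000.0)
pi32, pi21 = 1.0 - pi34, 1.0 - pi23

# mean unconditional sojourn times = integrals of the survival functions
T1 = m12
T2 = integral(lambda t: math.exp(-lam21 * t) * (1 - ncdf(t, m23, s23)), 0.0, 3000.0)
T3 = integral(lambda t: math.exp(-lam32 * t) * (1 - ncdf(t, m34, s34)), 0.0, 2000.0)
T4 = 1.0 / lam43

P = [[0, 1, 0, 0], [pi21, 0, pi23, 0], [0, pi32, 0, pi34], [0, 0, 1, 0]]
p = [0.25] * 4
for _ in range(2000):                         # lazy iteration (the chain is periodic)
    q = [sum(p[i] * P[i][j] for i in range(4)) for j in range(4)]
    p = [(p[j] + q[j]) / 2 for j in range(4)]

Tm = [T1, T2, T3, T4]
den = sum(p[j] * Tm[j] for j in range(4))
theta3, theta4 = p[2] * Tm[2] / den, p[3] * Tm[3] / den
A = theta3 + theta4
```

The run reproduces θ3 ≈ 0.192, θ4 ≈ 0.753 and A ≈ 0.945, in agreement with the values above.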
In order to find the reliability function for the given constant demand
w=60 MW, we unite states 1 and 2 into one absorbing state 0. The modi-
fied graphical representation of the system evolution in the state space for
this case is shown in Fig. 2.23A. In Fig. 2.23B, the state space diagram for
the corresponding semi-Markov process is shown.
[Fig. 2.23. Evolution of the system in the state space with states 1 and 2 united into the absorbing state 0 (A) and the state space diagram of the corresponding semi-Markov process (B).]
As in the previous case, we define the kernel matrix for the correspond-
ing semi-Markov process based on expressions (2.59)-(2.61):
\mathbf{Q}(t) = \begin{pmatrix} 0 & 0 & 0 \\ Q_{30}(t) & 0 & Q_{34}(t) \\ 0 & Q_{43}(t) & 0 \end{pmatrix} ,

where

Q_{30}(t) = \int_0^t [1 - F_{3,4}(u)]\,dF_{3,1}(u) , \quad Q_{3,4}(t) = \int_0^t [1 - F_{3,1}(u)]\,dF_{3,4}(u) , \quad Q_{43}(t) = F_{4,3}(t) .
\begin{cases} \theta_{40}(t) = \int_0^t q_{43}(\tau)\,\theta_{30}(t-\tau)\,d\tau \\[4pt] \theta_{30}(t) = \int_0^t q_{34}(\tau)\,\theta_{40}(t-\tau)\,d\tau + \int_0^t q_{30}(\tau)\,\theta_{00}(t-\tau)\,d\tau \\[4pt] \theta_{00}(t) = 1 \end{cases}
[Figure: the reliability function R(t) of the generator plotted against time, 0–10000 hours.]
The UGF approach is universal. An analyst can use the same recursive
procedures for systems with a different physical nature of performance and
different types of element interaction.
Example 2.6
Suppose that one performs k independent trials and that each trial can re-
sult either in a success (with probability π) or in a failure (with probability
1−π). Let random variable X represent the number of successes that occur
in k trials. Such a variable is called a binomial random variable. The pmf
of X takes the form
x_i = i , \quad p_i = \binom{k}{i} \pi^i (1-\pi)^{k-i} , \quad 0 \le i \le k .
According to the binomial theorem,

\sum_{i=0}^{k} p_i = \sum_{i=0}^{k} \binom{k}{i} \pi^i (1-\pi)^{k-i} = [\pi + (1-\pi)]^k = 1 .
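The normalization above (and the mean kπ derived just below) can be confirmed by computing the pmf directly; k and π are arbitrary illustrative values.

```python
from math import comb

k, p = 10, 0.3   # illustrative number of trials and success probability

pmf = [comb(k, i) * p**i * (1 - p)**(k - i) for i in range(k + 1)]
total = sum(pmf)                               # = [p + (1-p)]^k = 1
mean = sum(i * q for i, q in enumerate(pmf))   # = k*p
```

Any other choice of k and 0 < π < 1 gives the same two identities.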
Example 2.7
The expected value of a binomial random variable is

E(X) = \sum_{i=0}^{k} x_i p_i = \sum_{i=0}^{k} i \binom{k}{i} \pi^i (1-\pi)^{k-i} = k\pi \sum_{i=0}^{k-1} \binom{k-1}{i} \pi^i (1-\pi)^{k-1-i} = k\pi\,[\pi + (1-\pi)]^{k-1} = k\pi .
Hence

m'(0) = \sum_{i=0}^{k} x_i p_i = E(X) .    (2.91)

Then

m''(t) = \frac{d}{dt}\big(m'(t)\big) = \frac{d}{dt}\Big(\sum_{i=0}^{k} x_i e^{t x_i} p_i\Big) = \sum_{i=0}^{k} x_i^2 e^{t x_i} p_i    (2.92)

and

m''(0) = \sum_{i=0}^{k} x_i^2 p_i = E(X^2) .    (2.93)
Example 2.8
The moment-generating function of the binomial distribution takes the
form
m(t) = E(e^{tX}) = \sum_{i=0}^{k} e^{ti} \binom{k}{i} \pi^i (1-\pi)^{k-i} = \sum_{i=0}^{k} (\pi e^t)^i \binom{k}{i} (1-\pi)^{k-i} = (\pi e^t + 1 - \pi)^k .

Hence

m'(t) = k(\pi e^t + 1 - \pi)^{k-1} \pi e^t \quad \text{and} \quad E(X) = m'(0) = k\pi .
and
y = (y_0, \dots, y_{k_Y}) , \quad p_Y = (p_{Y0}, \dots, p_{Y k_Y})    (2.95)

m_{\sum_{i=1}^{n} X_i}(t) = \prod_{i=1}^{n} m_{X_i}(t)    (2.97)
Hence
\omega'(1) = \sum_{i=0}^{k} x_i p_i = E(X)    (2.100)

and in general

\omega_{\sum_{i=1}^{n} X_i}(z) = \prod_{i=1}^{n} \omega_{X_i}(z) .    (2.102)
The reader wishing to learn more about the generating function and z-
transform is referred to the books (Grimmett and Stirzaker 1992) and
(Ross 2000).
Example 2.9
Suppose that one performs k independent trials and each trial can result ei-
ther in a success (with probability π) or in a failure (with probability 1−π).
Let random variable Xj represent the number of successes that occur in the
jth trial.
The pmf of any variable Xj ( 1 ≤ j ≤ k ) is
Pr{Xj = 1} = π, Pr{Xj = 0} = 1 − π.
The z-transform of X = \sum_{j=1}^{k} X_j is the product of the individual z-transforms:

\omega_X(z) = (1 - \pi + \pi z)^k = \sum_{i=0}^{k} \binom{k}{i} (\pi z)^i (1-\pi)^{k-i} = \sum_{i=0}^{k} z^i \binom{k}{i} \pi^i (1-\pi)^{k-i} ,

which corresponds to the pmf of the binomial random variable

x_i = i , \quad p_i = \binom{k}{i} \pi^i (1-\pi)^{k-i} , \quad 0 \le i \le k .
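The identity just used — the z-transform of the sum of k Bernoulli variables is the k-fold product whose coefficients are the binomial pmf — can be checked by multiplying the polynomials explicitly; k and π below are illustrative.

```python
from math import comb

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of z)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

k, p = 6, 0.4                       # illustrative values
omega = [1.0]
for _ in range(k):                  # product of k Bernoulli z-transforms (1-p) + p*z
    omega = poly_mul(omega, [1 - p, p])

binom = [comb(k, i) * p**i * (1 - p)**(k - i) for i in range(k + 1)]
```

The coefficient list `omega` coincides with the binomial pmf `binom`, term by term.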
evaluate the pmf of an arbitrary function f(X1, …, Xn), one has to evaluate
the vector y of all of the possible values of this function and the vector q of
probabilities that the function takes these values.
Each possible value of function f corresponds to a combination of the
values of its arguments X1, …, Xn. The total number of possible combinations is

K = \prod_{i=1}^{n} (k_i + 1) .    (2.103)
Some different combinations may produce the same values of the func-
tion. All of the combinations are mutually exclusive. Therefore, the
probability that the function takes on some value is equal to the sum of
probabilities of the combinations producing this value. Let Ah be a set of
combinations producing the value fh. If the total number of different reali-
zations of the function f(X1, …, Xn) is H, then the pmf of the function is
y = (f_h : 1 \le h \le H) , \quad q = \Big( \sum_{(x_{1j_1}, \dots, x_{nj_n}) \in A_h} \prod_{i=1}^{n} p_{i j_i} : 1 \le h \le H \Big)    (2.106)
Example 2.10
Consider two random variables X1 and X2 with pmf x1 = (1, 4), p1 = (0.6,
0.4) and x2 = (0.5, 1, 2), p2 = (0.1, 0.6, 0.3). In order to obtain the pmf of
the function Y = X 1 X 2 we have to consider all of the possible combinations
of the values taken by the variables. These combinations are presented in
Table 2.2.
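The enumeration of the combinations amounts to looping over all (x1, x2) pairs, computing Y = X1^X2 and accumulating the probabilities of equal values — a minimal sketch:

```python
from itertools import product

x1, p1 = (1, 4), (0.6, 0.4)
x2, p2 = (0.5, 1, 2), (0.1, 0.6, 0.3)

pmf = {}
for (v1, q1), (v2, q2) in product(zip(x1, p1), zip(x2, p2)):
    y = v1 ** v2                         # Y = X1 ** X2
    pmf[y] = pmf.get(y, 0.0) + q1 * q2   # mutually exclusive combinations summed
```

The collected pmf is y = (1, 2, 4, 16) with q = (0.6, 0.04, 0.24, 0.12), the result obtained below in Example 2.11.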
Note that some different combinations produce the same values of the
function Y. Since all of the combinations are mutually exclusive, we can
obtain the probability that the function takes some value as being the sum
of the probabilities of different combinations of the values of its arguments
that produce this value:
u_i(z) = \sum_{j=0}^{k_i} p_{ij} z^{x_{ij}}    (2.107)
where ui(z) takes the form (2.107) and U(z) takes the form (2.108). For
functions of two arguments, two interchangeable notations can be used:
U(z) = \mathop{\otimes}_{f}(u_1(z), u_2(z)) = u_1(z) \mathop{\otimes}_{f} u_2(z)    (2.110)
Example 2.11
Consider the pmf of the function Y from Example 2.10, obtained from Ta-
ble 2.2. The u-function corresponding to this pmf takes the form:
U(z) = 0.06z^1 + 0.04z^2 + 0.36z^1 + 0.24z^4 + 0.18z^1 + 0.12z^{16} = 0.6z^1 + 0.04z^2 + 0.24z^4 + 0.12z^{16}.
Example 2.12
Consider the function
Y = f(X1, …, X5) = (max(X1, X2) + min(X3, X4)) X5
of five independent random variables X1, …, X5. The probability mass functions of these variables are determined by pairs of vectors x_i, p_i (1 ≤ i ≤ 5) and are presented in Table 2.3.
These pmf can be represented in the form of u-functions as follows:
u_1(z) = p_{10} z^{x_{10}} + p_{11} z^{x_{11}} + p_{12} z^{x_{12}} = 0.6z^5 + 0.3z^8 + 0.1z^{12};
…
u_4(z) = p_{40} z^{x_{40}} + p_{41} z^{x_{41}} + p_{42} z^{x_{42}} = 0.1z^0 + 0.5z^8 + 0.4z^{10};
…
Using the straightforward approach one can obtain the pmf of the ran-
dom variable Y applying the operator (2.108) over these u-functions. Since
k1 + 1 = 3, k2 + 1 = 2, k3 + 1 = 2, k4 + 1 = 3, k5 + 1 = 2, the total number of
term multiplication procedures that one has to perform using this equation
is 3×2×2×3×2 = 72.
Now let us introduce three auxiliary random variables X6, X7 and X8, and
define the same function recursively:
X6 = max{X1, X2};
X7 = min{X3, X4};
X8 = X6 + X7;
Y = X8 X5.
We can obtain the pmf of variable Y using composition operators over
pairs of u-functions as follows:
u_6(z) = u_1(z) \mathop{\otimes}_{\max} u_2(z) = (0.6z^5 + 0.3z^8 + 0.1z^{12}) \mathop{\otimes}_{\max} (0.7z^8 + 0.3z^{10})
= 0.42z^{\max\{5,8\}} + 0.21z^{\max\{8,8\}} + 0.07z^{\max\{12,8\}} + 0.18z^{\max\{5,10\}} + 0.09z^{\max\{8,10\}} + 0.03z^{\max\{12,10\}} = 0.63z^8 + 0.27z^{10} + 0.1z^{12};

u_7(z) = u_3(z) \mathop{\otimes}_{\min} u_4(z) = 0.06z^{\min\{0,0\}} + 0.04z^{\min\{2,0\}} + 0.3z^{\min\{0,8\}} + 0.2z^{\min\{2,8\}} + 0.24z^{\min\{0,10\}} + 0.16z^{\min\{2,10\}} = 0.64z^0 + 0.36z^2;

u_8(z) = u_6(z) \mathop{\otimes}_{+} u_7(z) = 0.4032z^{8+0} + 0.1728z^{10+0} + 0.064z^{12+0} + 0.2268z^{8+2} + 0.0972z^{10+2} + 0.036z^{12+2} = 0.4032z^8 + 0.3996z^{10} + 0.1612z^{12} + 0.036z^{14};

U(z) = u_8(z) \mathop{\otimes}_{\times} u_5(z) = 0.2016z^{8\times1} + 0.1998z^{10\times1} + 0.0806z^{12\times1} + 0.018z^{14\times1} + 0.2016z^{8\times1.5} + 0.1998z^{10\times1.5} + 0.0806z^{12\times1.5} + 0.018z^{14\times1.5}
= 0.2016z^8 + 0.1998z^{10} + 0.2822z^{12} + 0.018z^{14} + 0.1998z^{15} + 0.0806z^{18} + 0.018z^{21}.
The final u-function U(z) represents the pmf of Y, which takes the form
y = (8, 10, 12, 14, 15, 18, 21)
q = (0.2016, 0.1998, 0.2822, 0.018, 0.1998, 0.0806, 0.018).
Note that during the recursive derivation of this pmf we used only 26
term multiplication procedures. This considerable computational complex-
ity reduction is possible because of the like term collection in intermediate
u-functions.
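The recursive derivation above maps directly onto code: represent every u-function as a dictionary exponent → probability, and let a single composition routine collect like terms. The pmf of X2, X3 and X5 are not fully legible in the example's listing, so the values below for u2, u3 and u5 are reconstructed from the expansions shown above and should be read as assumptions.

```python
from itertools import product

def compose(u, v, f):
    """u-function composition operator ⊗_f with like-term collection."""
    out = {}
    for (g1, p1), (g2, p2) in product(u.items(), v.items()):
        g = f(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

u1 = {5: 0.6, 8: 0.3, 12: 0.1}
u2 = {8: 0.7, 10: 0.3}       # reconstructed from the expansion of u6
u3 = {0: 0.6, 2: 0.4}        # reconstructed from the expansion of u7
u4 = {0: 0.1, 8: 0.5, 10: 0.4}
u5 = {1: 0.5, 1.5: 0.5}      # reconstructed from the final expansion

u6 = compose(u1, u2, max)                    # X6 = max{X1, X2}
u7 = compose(u3, u4, min)                    # X7 = min{X3, X4}
u8 = compose(u6, u7, lambda a, b: a + b)     # X8 = X6 + X7
U = compose(u8, u5, lambda a, b: a * b)      # Y  = X8 * X5
```

The like-term collection inside `compose` after each of u6, u7 and u8 is exactly what brings the count down from 72 to 26 term multiplications.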
The problem of system reliability analysis usually includes evaluation of
the pmf of some random values characterizing the system's behavior.
These values can be very complex functions of a large number of random
variables. The explicit derivation of such functions is an extremely com-
plicated task. Fortunately, the UGF method for many types of system al-
lows one to obtain the system u-function recursively. This property of the
UGF method is based on the associative property of many functions used
in reliability engineering. The recursive approach presumes obtaining u-
functions of subsystems containing several basic elements and then treat-
ing the subsystem as a single element with the u-function obtained when
computing the u-function of a higher-level subsystem. Combining the re-
cursive approach with the simplification technique reduces the number of
terms in the intermediate u-functions and provides a drastic reduction of
the computational burden.
Therefore, one can obtain the u-function U(z) = \mathop{\otimes}_{f}(u_1(z), \dots, u_n(z)) by assigning U_1(z) = u_1(z) and applying the operator \otimes_f consecutively.
the order of arguments in the function f(X1, …, Xn) is inessential and the u-
function U(z) can be obtained using recursive procedures (2.111) and
(2.113) over any permutation of u-functions of random arguments
X1,…,Xn.
If a function takes the recursive form
f(f1(X1, …, Xj), f2(Xj+1, …, Xh), …, fm(Xl, …, Xn)) (2.118)
then the corresponding u-function U(z) can also be obtained recursively:
⊗( ⊗ (u1 ( z ),..., u j ( z )), ⊗ (u j +1 ( z ),..., u h ( z )),..., ⊗ (ul ( z ),..., u n ( z )). (2.119)
f f1 f2 fm
Example 2.13
Consider the variables X1, X2, X3 with pmf presented in Table 2.2. The u-
functions of these variables are:
u_1(z) = 0.6z^5 + 0.3z^8 + 0.1z^{12};
u_2(z) = 0.7z^8 + 0.3z^{10};
u_3(z) = 0.6z^0 + 0.4z^1.

One recursive procedure is

u_4(z) = u_1(z) \mathop{\otimes}_{\min} u_2(z) = 0.42z^{\min\{5,8\}} + 0.21z^{\min\{8,8\}} + 0.07z^{\min\{12,8\}} + 0.18z^{\min\{5,10\}} + 0.09z^{\min\{8,10\}} + 0.03z^{\min\{12,10\}} = 0.6z^5 + 0.37z^8 + 0.03z^{10};

U(z) = u_4(z) \mathop{\otimes}_{\min} u_3(z) = 0.36z^{\min\{5,0\}} + 0.222z^{\min\{8,0\}} + 0.018z^{\min\{10,0\}} + 0.24z^{\min\{5,1\}} + 0.148z^{\min\{8,1\}} + 0.012z^{\min\{10,1\}} = 0.6z^0 + 0.4z^1.
The same u-function can also be obtained using another recursive pro-
cedure
u_4(z) = u_1(z) \mathop{\otimes}_{\min} u_3(z) = (0.6z^5 + 0.3z^8 + 0.1z^{12}) \mathop{\otimes}_{\min} (0.6z^0 + 0.4z^1)
= 0.36z^{\min\{5,0\}} + 0.18z^{\min\{8,0\}} + 0.06z^{\min\{12,0\}} + 0.24z^{\min\{5,1\}} + 0.12z^{\min\{8,1\}} + 0.04z^{\min\{12,1\}} = 0.6z^0 + 0.4z^1;

U(z) = u_4(z) \mathop{\otimes}_{\min} u_2(z) = (0.6z^0 + 0.4z^1) \mathop{\otimes}_{\min} (0.7z^8 + 0.3z^{10})
= 0.42z^{\min\{0,8\}} + 0.28z^{\min\{1,8\}} + 0.18z^{\min\{0,10\}} + 0.12z^{\min\{1,10\}} = 0.6z^0 + 0.4z^1.
Note that while both recursive procedures produce the same u-function,
their computational complexity differs. In the first case, 12 term multipli-
cation operations have been performed; in the second case, only 10 opera-
tions have been performed.
Having the pmf of the random MSS output performance G and the pmf of
the demand W in the form of u-functions U MSS ( z ) and u w (z ), one can ob-
tain the u-functions representing the pmf of the random functions F(G,W),
\tilde{G}(G, W), D^-(G, W) or D^+(G, W) (see Section 1.5.3) using the corresponding composition operators over U_{MSS}(z) and u_w(z):

U_F(z) = U_{MSS}(z) \mathop{\otimes}_{F} u_w(z)    (2.122)

U_{\tilde{G}}(z) = U_{MSS}(z) \mathop{\otimes}_{\tilde{G}} u_w(z)    (2.123)

U_D(z) = U_{MSS}(z) \mathop{\otimes}_{D} u_w(z)    (2.124)
Since the expected values of the functions G, F, D and \tilde{G} are equal to the derivatives of the corresponding u-functions U_{MSS}(z), U_F(z), U_D(z) and U_{\tilde{G}}(z) at z = 1, the MSS performance measures can now be obtained as

E(\tilde{G}(G, W)) / E(F(G, W)) = U'_{\tilde{G}}(1) / U'_F(1) .    (2.128)
Example 2.14
Consider two power system generators with a nominal capacity of 100
MW as two separate MSSs. In the first generator, some types of failure re-
quire its capacity G1 to be reduced to 60 MW and other types lead to a
complete outage. In the second generator, some types of failure require its
capacity G2 to be reduced to 80 MW, others lead to a capacity reduction to
40 MW, and others lead to a complete outage. The generators are repair-
able and each of their states has a steady-state probability.
Both generators should meet a variable two-level demand W. The high
level (day) demand is 50 MW and has the probability 0.6; the low level
(night) demand is 30 MW and has the probability 0.4.
The capacity and demand can be presented as a fraction of the nominal
generator capacity. There are three possible relative capacity levels that
characterize the performance of the first generator:
g10 = 0.0, g11 = 60/100 = 0.6, g12 = 100/100 = 1.0
Basic Tools and Techniques 121
and four relative capacity levels that characterize the performance of the
second generator:
g20 = 0.0, g21 = 40/100 = 0.4, g22 = 80/100 = 0.8, g23 = 100/100 = 1.0
Assume that the corresponding steady-state probabilities are
p10 = 0.1, p11 = 0.6, p12 = 0.3
for the first generator and
p20 = 0.05, p21 = 0.35, p22 = 0.3, p23 = 0.3
for the second generator and that the demand distribution is
w1 = 50/100 = 0.5, w2 = 30/100 = 0.3, q1 = 0.6, q2 = 0.4
The u-functions representing the capacity distribution of the generators
(the pmf of random variables G1 and G2) take the form
U1(z) = 0.1z0+0.6z0.6+0.3z1, U2(z) = 0.05z0+0.35z0.4+0.3z0.8+0.3z1
and the u-function representing the demand distribution takes the form
uw(z) = 0.6z0.5+0.4z0.3.
The mean steady-state performance (capacity) of the generators can be
obtained directly from these u-functions:
ε1 = E (G1 ) = U '1 (1) = 0.1 × 0 + 0.6 × 0.6 + 0.3 × 1.0 = 0.66
which means 66% of the nominal generating capacity for the first genera-
tor, and
ε 2 = E (G 2 ) = U ' 2 (1) = 0.05 × 0 + 0.35 × 0.4 + 0.3 × 0.8 + 0.3 × 1.0 = 0.68
which means 68% of the nominal generating capacity for the second gen-
erator.
The available generation capacity should be no less than the demand.
Therefore, the system acceptability function takes the form
F (G,W ) = 1(G ≥ W )
U_{F2}(z) = U_2(z) \mathop{\otimes}_{F} u_w(z) = (0.05z^0 + 0.35z^{0.4} + 0.3z^{0.8} + 0.3z^1) \mathop{\otimes}_{F} (0.6z^{0.5} + 0.4z^{0.3}) = 0.26z^0 + 0.74z^1 ;

U_{D1}(z) = U_1(z) \mathop{\otimes}_{D} u_w(z) = (0.1z^0 + 0.6z^{0.6} + 0.3z^1) \mathop{\otimes}_{D} (0.6z^{0.5} + 0.4z^{0.3}) = 0.9z^0 + 0.04z^{0.3} + 0.06z^{0.5} ;

U_{D2}(z) = U_2(z) \mathop{\otimes}_{D} u_w(z) = (0.05z^0 + 0.35z^{0.4} + 0.3z^{0.8} + 0.3z^1) \mathop{\otimes}_{D} (0.6z^{0.5} + 0.4z^{0.3})
= 0.03z^{0.5} + 0.21z^{0.1} + 0.18z^0 + 0.18z^0 + 0.02z^{0.3} + 0.14z^0 + 0.12z^0 + 0.12z^0 = 0.74z^0 + 0.21z^{0.1} + 0.02z^{0.3} + 0.03z^{0.5} .
U_{\tilde{G}1}(z) = U_1(z) \mathop{\otimes}_{G \cdot F} u_w(z) = (0.1z^0 + 0.6z^{0.6} + 0.3z^1) \mathop{\otimes}_{G \cdot F} (0.6z^{0.5} + 0.4z^{0.3}) = 0.1z^0 + 0.6z^{0.6} + 0.3z^1 ;

U_{\tilde{G}2}(z) = U_2(z) \mathop{\otimes}_{G \cdot F} u_w(z) = (0.05z^0 + 0.35z^{0.4} + 0.3z^{0.8} + 0.3z^1) \mathop{\otimes}_{G \cdot F} (0.6z^{0.5} + 0.4z^{0.3}) = 0.26z^0 + 0.14z^{0.4} + 0.3z^{0.8} + 0.3z^1 .
\tilde{\varepsilon}_1 = U'_{\tilde{G}1}(1) / U'_{F1}(1) = (0.3 \times 1 + 0.6 \times 0.6 + 0.1 \times 0)/0.9 = 0.66/0.9 = 0.733 ,

\tilde{\varepsilon}_2 = U'_{\tilde{G}2}(1) / U'_{F2}(1) = (0.3 \times 1 + 0.3 \times 0.8 + 0.14 \times 0.4 + 0.26 \times 0)/0.74 = 0.596/0.74 = 0.805 .
This means that generators 1 and 2 when they meet the variable demand
have average capacities 73.3 MW and 80.5 MW respectively.
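All the quantities of this example follow mechanically from three dictionaries and three composition operators (F = 1(G ≥ W), D = max(W − G, 0) and G·F) — a minimal sketch:

```python
from itertools import product

def compose(u, v, f):
    out = {}
    for (g, p), (w, q) in product(u.items(), v.items()):
        key = f(g, w)
        out[key] = out.get(key, 0.0) + p * q
    return out

def deriv_at_1(u):    # U'(1) = expected value of the represented variable
    return sum(g * p for g, p in u.items())

U1 = {0.0: 0.1, 0.6: 0.6, 1.0: 0.3}
U2 = {0.0: 0.05, 0.4: 0.35, 0.8: 0.3, 1.0: 0.3}
uw = {0.5: 0.6, 0.3: 0.4}

F = lambda g, w: 1.0 if g >= w else 0.0
D = lambda g, w: max(w - g, 0.0)
GF = lambda g, w: g * (1.0 if g >= w else 0.0)

A1 = deriv_at_1(compose(U1, uw, F))         # availability of generator 1
A2 = deriv_at_1(compose(U2, uw, F))
eps1_cond = deriv_at_1(compose(U1, uw, GF)) / A1   # conditional mean capacity
eps2_cond = deriv_at_1(compose(U2, uw, GF)) / A2
D1 = deriv_at_1(compose(U1, uw, D))         # mean performance deficiency
```

This reproduces A1 = 0.9, A2 = 0.74, ε̃1 ≈ 0.733 and ε̃2 ≈ 0.805 from the example.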
Example 2.15
Consider the two power system generators presented in the previous ex-
ample and obtain the system availability directly from the u-functions
UMSS(z) using Eq. (2.130). Since, in this example, F (G, W ) = 1(G ≥ W ), the
operator δ w (U ( z )) sums up the coefficients of the terms having exponents
not less than w in the u-function U(z):
For the first generator with U1(z) = 0.1z0+0.6z0.6+0.3z1
E ( F (G1 , w1 )) = δ 0.5 (0.1z 0 + 0.6 z 0.6 + 0.3 z 1 ) = 0.6 + 0.3 = 0.9
For the second generator with U2(z) = 0.05z0 + 0.35z0.4 + 0.3z0.8 + 0.3z1
E ( F (G2 ,0.5)) = δ 0.5 (0.05 z 0 + 0.35 z 0.4 + 0.3 z 0.8 + 0.3 z 1 ) = 0.3+0.3=0.6
G = 1/T = \Big( \sum_{j=1}^{n} G_j^{-1} \Big)^{-1}    (2.134)
Note that if for any j Gj=0 the equation cannot be used, but it is obvious
that in this case G=0. Therefore, one can define the structure function for
the series task processing system as
\varphi_{ser}(G_1, \dots, G_n) = \times(G_1, \dots, G_n) = \begin{cases} 1 \Big/ \sum_{j=1}^{n} G_j^{-1} , & \text{if } \prod_{j=1}^{n} G_j \ne 0 , \\[4pt] 0 , & \text{if } \prod_{j=1}^{n} G_j = 0 . \end{cases}    (2.135)
One can see that the structure functions presented above are associative
and commutative (i.e. meet conditions (2.114) and (2.116)). Therefore, the
u-functions for any series system of described types can be obtained recur-
sively by consecutively determining the u-functions of arbitrary subsets of
the elements. For example, the u-function of a system consisting of four
elements connected in a series can be determined in the following ways:
\big[(u_1(z) \mathop{\otimes}_{\varphi_{ser}} u_2(z)) \mathop{\otimes}_{\varphi_{ser}} u_3(z)\big] \mathop{\otimes}_{\varphi_{ser}} u_4(z) = (u_1(z) \mathop{\otimes}_{\varphi_{ser}} u_2(z)) \mathop{\otimes}_{\varphi_{ser}} (u_3(z) \mathop{\otimes}_{\varphi_{ser}} u_4(z))    (2.136)
Example 2.16
Consider a system consisting of n elements with the total failures con-
nected in series. Each element j has only two states: operational with a
nominal performance of gj1 and failure with a performance of zero. The
probability of the operational state is pj1. The u-function of such an ele-
ment is presented by the following expression:
u_j(z) = (1 - p_{j1}) z^0 + p_{j1} z^{g_{j1}} , \quad j = 1, \dots, n .
In order to find the u-function for the entire MSS, the corresponding
⊗ φser operators should be applied. For the MSS with the structure function
(2.132) the system u-function takes the form
U(z) = \mathop{\otimes}_{\min}(u_1(z), \dots, u_n(z)) = \Big(1 - \prod_{j=1}^{n} p_{j1}\Big) z^0 + \prod_{j=1}^{n} p_{j1}\, z^{\min\{g_{11}, \dots, g_{n1}\}} .
For the MSS with the structure function (2.135) the system u-function
takes the form
U(z) = \mathop{\otimes}_{\times}(u_1(z), \dots, u_n(z)) = \Big(1 - \prod_{j=1}^{n} p_{j1}\Big) z^0 + \prod_{j=1}^{n} p_{j1}\, z^{\left(\sum_{j=1}^{n} g_{j1}^{-1}\right)^{-1}} .
Since the failure of each single element causes the failure of the entire
system, the MSS can have only two states: one with the performance level
of zero (failure of at least one element) and one with the performance level
\hat{g} = \min\{g_{11}, \dots, g_{n1}\} for the flow transmission MSS and \hat{g} = \big(\sum_{j=1}^{n} g_{j1}^{-1}\big)^{-1} for the task processing MSS.
The measures of the system performance A(w) = Pr{G ≥ w}, Δ^-(w) = E(max(w − G, 0)) and ε = E(G) are presented in Table 2.4.
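Both series structure functions of Example 2.16 are easy to exercise numerically; the element data below are illustrative assumptions.

```python
from functools import reduce
from itertools import product

def compose(u, v, f):
    out = {}
    for (g1, p1), (g2, p2) in product(u.items(), v.items()):
        g = f(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

# three binary elements u_j(z) = (1 - p_j1) z^0 + p_j1 z^{g_j1}  (illustrative)
elems = [{0.0: 0.1, 5.0: 0.9}, {0.0: 0.2, 3.0: 0.8}, {0.0: 0.05, 4.0: 0.95}]

# flow transmission series system: structure function min, Eq. (2.132)
U_flow = reduce(lambda a, b: compose(a, b, min), elems)

# task processing series system: harmonic composition, Eq. (2.135)
def ser(g1, g2):
    return 0.0 if g1 == 0.0 or g2 == 0.0 else 1.0 / (1.0 / g1 + 1.0 / g2)
U_task = reduce(lambda a, b: compose(a, b, ser), elems)

p_up = 0.9 * 0.8 * 0.95   # probability that all three elements work
```

As the text states, both systems have only two states: `U_flow` puts probability p_up on min{5, 3, 4} = 3 and 1 − p_up on 0, while `U_task` puts p_up on (1/5 + 1/3 + 1/4)^(-1) ≈ 1.277.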
Parallel Systems. In the flow transmission MSS, in which the flow can be
dispersed and transferred by parallel channels simultaneously (which pro-
vides the work sharing), the total capacity of a subsystem containing n
independent elements connected in parallel is equal to the sum of the ca-
pacities of the individual elements. Therefore, the structure function for
such a subsystem takes the form
Basic Tools and Techniques 129
n
φ par (G1 ,..., Gn ) = +(G1 ,..., Gn ) = ∑ G j . (2.139)
j =1
In some cases, only one channel out of n can be chosen for the flow
transmission (no flow dispersion is allowed). This happens when the
transmission is associated with the consumption of certain limited re-
sources that does not allow simultaneous use of more than one channel.
The most effective way for such a system to function is by choosing the
channel with the greatest transmission capacity from the set of available
channels. In this case, the structure function takes the form

\varphi_{par}(G_1, \dots, G_n) = \max\{G_1, \dots, G_n\} .    (2.140)
In the task processing MSS, the definition of the structure function de-
pends on the nature of the elements’ interaction within the system.
First consider a system without work sharing in which the parallel ele-
ments act in a competitive manner. If the system contains n parallel ele-
ments, then all the elements begin to execute the same task simultaneously.
The task is assumed to be completed by the system when it is completed
by at least one of its elements. The entire system processing time is
defined by the minimum element processing time and the entire system
processing speed is defined by the maximum element processing speed.
Therefore, the system structure function coincides with (2.140).
Now consider a system of n parallel elements with work sharing for
which the following assumptions are made:
1. The work x to be performed can be divided among the system elements
in any proportion.
2. The time required to make a decision about the optimal work sharing is
negligible, the decision is made before the task execution and is based
on the information about the elements state during the instant the de-
mand for the task executing arrives.
3. The probability of the elements failure during any task execution is
negligible.
The elements start performing the work simultaneously, sharing its total
amount x in such a manner that element j has to perform xj portion of the
work and x = ∑nj=1 x j . The time of the work processed by element j is
xj/Gj. The system processing time is defined as the time during which the
last portion of work is completed: T = max1≤ j ≤n {x j / G j }. The minimal
time of the entire work completion can be achieved if the elements share
the work in proportion to their processing speed Gj: x j = xG j / ∑nk =1 Gk .
The system processing time T in this case is equal to x / ∑nk =1 Gk and its to-
tal processing speed G is equal to the sum of the processing speeds of its
elements. Therefore, the structure function of such a system coincides with
the structure function (2.139).
One can see that the structure functions presented also meet conditions
(2.114) and (2.116). Therefore, the u-functions for any parallel system of
described types can be obtained recursively by the consecutive determina-
tion of u-functions of arbitrary subsets of the elements.
Example 2.17
Consider a system consisting of two elements with total failures connected
in parallel. The elements have nominal performance g11 and g21 (g11<g21)
and the probability of operational state p11 and p21 respectively. The per-
formances in the failed states are g10 = g20 =0. The u-function for the entire
MSS is
U(z) = u_1(z) \mathop{\otimes}_{\varphi_{par}} u_2(z) = \big[(1 - p_{11}) z^0 + p_{11} z^{g_{11}}\big] \mathop{\otimes}_{\varphi_{par}} \big[(1 - p_{21}) z^0 + p_{21} z^{g_{21}}\big] .
The measures of the system output performance for MSSs of both types
are presented in Tables 2.5 and 2.6.
Table 2.5. Measures of MSS performance for the system with structure function (2.139)

w | A(w) | Δ^-(w) | ε
w > g11+g21 | 0 | w − p11g11 − p21g21 | p11g11 + p21g21
g21 < w ≤ g11+g21 | p11p21 | g11p11(p21−1) + g21p21(p11−1) + w(1−p11p21) | p11g11 + p21g21
g11 < w ≤ g21 | p21 | (1−p21)(w − g11p11) | p11g11 + p21g21
0 < w ≤ g11 | p11 + p21 − p11p21 | (1−p11)(1−p21)w | p11g11 + p21g21

Table 2.6. Measures of MSS performance for the system with structure function (2.140)

w | A(w) | Δ^-(w) | ε
w > g21 | 0 | w − p11g11 − p21g21 + p11p21g11 | p11(1−p21)g11 + p21g21
g11 < w ≤ g21 | p21 | (1−p21)(w − g11p11) | p11(1−p21)g11 + p21g21
0 < w ≤ g11 | p11 + p21 − p11p21 | (1−p11)(1−p21)w | p11(1−p21)g11 + p21g21
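The closed-form table entries can be cross-checked against a direct UGF computation; the numbers below are illustrative and satisfy g11 < g21.

```python
from itertools import product

g11, g21, p11, p21 = 3.0, 5.0, 0.9, 0.8
u1 = {0.0: 1 - p11, g11: p11}
u2 = {0.0: 1 - p21, g21: p21}

def compose(u, v, f):
    out = {}
    for (a, pa), (b, pb) in product(u.items(), v.items()):
        out[f(a, b)] = out.get(f(a, b), 0.0) + pa * pb
    return out

U = compose(u1, u2, lambda a, b: a + b)   # structure function (2.139)

w = 6.0                                   # a demand with g21 < w <= g11 + g21
A = sum(p for g, p in U.items() if g >= w)
D = sum(p * max(w - g, 0.0) for g, p in U.items())
eps = sum(p * g for g, p in U.items())

# closed-form entries for the row g21 < w <= g11+g21 of Table 2.5
A_t = p11 * p21
D_t = g11 * p11 * (p21 - 1) + g21 * p21 * (p11 - 1) + w * (1 - p11 * p21)
eps_t = p11 * g11 + p21 * g21
```

Swapping the composition function for `max` checks the entries for structure function (2.140) in the same way.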
Table 2.7. Structure functions for purely series and purely parallel subsystems

No. | MSS type | φser (series elements) | φpar (parallel elements)
1 | Flow transmission MSS with flow dispersion | (2.132) | (2.139)
2 | Flow transmission MSS without flow dispersion | (2.132) | (2.140)
3 | Task processing MSS with work sharing | (2.135) | (2.139)
4 | Task processing MSS without work sharing | (2.135) | (2.140)
in (Lisnianski 2004) and in (Lisnianski 2007b). The basic steps of the ap-
proach are as follows:
1. Build the random process Markov model for each MSS element (considering only state transitions within this element). Obtain two sets g_j = \{g_{j1}, g_{j2}, \dots, g_{j k_j}\} and p_j(t) = \{p_{j1}(t), p_{j2}(t), \dots, p_{j k_j}(t)\} for each element j (1 ≤ j ≤ n) by solving the system of k_j ordinary differential equations. Note that instead of solving one high-order system of \prod_{j=1}^{n} k_j equations one has to solve n low-order systems with the total number of equations \sum_{j=1}^{n} k_j.
2. Having the sets g_j and p_j(t) for each element j, define the u-function of this element in the form u_j(z) = p_{j1}(t) z^{g_{j1}} + p_{j2}(t) z^{g_{j2}} + \dots + p_{j k_j}(t) z^{g_{j k_j}}.
3. Using the generalized RBD method, obtain the resulting u-function for
the entire MSS.
4. Obtain the u-functions representing the random functions F, \tilde{G} and D using operators (2.122)-(2.124).
5. Obtain the system reliability measures by calculating the values of the
derivatives of the corresponding u-functions at z=1 and applying Eqs.
(2.125)-(2.128).
Example 2.18
Consider a flow transmission system (Fig. 2.25) consisting of three pipes. The oil flow is transmitted from node A to node B. The pipes’ performance is measured by their transmission capacity (ton per minute). Elements
1 and 2 are binary. A state of total failure for both elements corresponds to
a transmission capacity of 0 and the operational state corresponds to the
capacities of the elements 1.5 and 2 ton per minute respectively so that
G1(t) ∈ {0,1.5}, G2(t) ∈ {0,2}. Element 3 can be in one of three states: a
state of total failure corresponding to a capacity of 0, a state of partial fail-
ure corresponding to a capacity of 1.8 tons per minute and a fully opera-
tional state with a capacity of 4 tons per minute so that G3(t)∈{0,1.8,4}.
The demand is constant: θ*=1.0 ton per minute.
The system output performance rate V(t) is defined as the maximum
flow that can be transmitted between nodes A and B:
V(t)=min{G1(t)+G2(t),G3(t)}.
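For this structure function one can enumerate all combinations of the element capacities; a short sketch:

```python
from itertools import product

# Enumerate all combinations of element capacities and the resulting
# system performance V = min{G1 + G2, G3} for the three-pipe system.
G1, G2, G3 = [0, 1.5], [0, 2], [0, 1.8, 4]

rates = sorted({min(g1 + g2, g3) for g1, g2, g3 in product(G1, G2, G3)})
# Distinct system performance rates
```

The enumeration yields the five distinct performance rates 0, 1.5, 1.8, 2.0 and 3.5 tons per minute that appear in the solution below.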
Fig. 2.25. Flow transmission system consisting of three pipes
The state-space diagrams of the system elements are shown in Fig. 2.26.
Fig. 2.26. State-space diagrams of the system elements.
Element 1: states G11 = 0, G12 = 1.5; transition rates λ^{(1)}_{2,1} (failure) and µ^{(1)}_{1,2} (repair); u1(z) = p11(t)z^0 + p12(t)z^1.5.
Element 2: states G21 = 0, G22 = 2.0; transition rates λ^{(2)}_{2,1} and µ^{(2)}_{1,2}; u2(z) = p21(t)z^0 + p22(t)z^2.0.
Element 3: states G31 = 0, G32 = 1.8, G33 = 4.0; transition rates λ^{(3)}_{3,2}, λ^{(3)}_{2,1}, µ^{(3)}_{1,2} and µ^{(3)}_{2,3}; u3(z) = p31(t)z^0 + p32(t)z^1.8 + p33(t)z^4.0.
dp2(t)/dt = λ^{(1)}_{2,1} p1(t) − (µ^{(1)}_{1,2} + λ^{(2)}_{2,1} + λ^{(3)}_{3,2}) p2(t) + µ^{(2)}_{1,2} p5(t) + µ^{(3)}_{2,3} p6(t),

dp3(t)/dt = λ^{(2)}_{2,1} p1(t) − (µ^{(2)}_{1,2} + λ^{(1)}_{2,1} + λ^{(3)}_{3,2}) p3(t) + µ^{(1)}_{1,2} p5(t) + µ^{(3)}_{2,3} p7(t),

dp4(t)/dt = λ^{(3)}_{3,2} p1(t) − (µ^{(3)}_{2,3} + λ^{(1)}_{2,1} + λ^{(2)}_{2,1} + λ^{(3)}_{2,1}) p4(t) + µ^{(1)}_{1,2} p6(t) + µ^{(2)}_{1,2} p7(t) + µ^{(3)}_{1,2} p8(t),

dp5(t)/dt = λ^{(2)}_{2,1} p2(t) + λ^{(1)}_{2,1} p3(t) − (µ^{(2)}_{1,2} + µ^{(1)}_{1,2} + λ^{(3)}_{3,2}) p5(t) + µ^{(3)}_{2,3} p9(t),

dp6(t)/dt = λ^{(3)}_{3,2} p2(t) + λ^{(1)}_{2,1} p4(t) − (µ^{(3)}_{2,3} + µ^{(1)}_{1,2} + λ^{(2)}_{2,1} + λ^{(3)}_{2,1}) p6(t) + µ^{(2)}_{1,2} p9(t) + µ^{(3)}_{1,2} p10(t),

dp7(t)/dt = λ^{(3)}_{3,2} p3(t) + λ^{(2)}_{2,1} p4(t) − (µ^{(3)}_{2,3} + µ^{(2)}_{1,2} + λ^{(1)}_{2,1} + λ^{(3)}_{2,1}) p7(t) + µ^{(1)}_{1,2} p9(t) + µ^{(3)}_{1,2} p11(t),

dp8(t)/dt = λ^{(3)}_{2,1} p4(t) − (µ^{(3)}_{1,2} + λ^{(1)}_{2,1} + λ^{(2)}_{2,1}) p8(t) + µ^{(1)}_{1,2} p10(t) + µ^{(2)}_{1,2} p11(t),

dp9(t)/dt = λ^{(3)}_{3,2} p5(t) + λ^{(2)}_{2,1} p6(t) + λ^{(1)}_{2,1} p7(t) − (µ^{(3)}_{2,3} + µ^{(2)}_{1,2} + µ^{(1)}_{1,2} + λ^{(3)}_{2,1}) p9(t) + µ^{(3)}_{1,2} p12(t),

dp10(t)/dt = λ^{(3)}_{2,1} p6(t) + λ^{(1)}_{2,1} p8(t) − (µ^{(3)}_{1,2} + µ^{(1)}_{1,2} + λ^{(2)}_{2,1}) p10(t) + µ^{(2)}_{1,2} p12(t),

dp11(t)/dt = λ^{(3)}_{2,1} p7(t) + λ^{(2)}_{2,1} p8(t) − (µ^{(3)}_{1,2} + µ^{(2)}_{1,2} + λ^{(1)}_{2,1}) p11(t) + µ^{(1)}_{1,2} p12(t),

dp12(t)/dt = λ^{(3)}_{2,1} p9(t) + λ^{(2)}_{2,1} p10(t) + λ^{(1)}_{2,1} p11(t) − (µ^{(3)}_{1,2} + µ^{(2)}_{1,2} + µ^{(1)}_{1,2}) p12(t).
Solving this system with the initial conditions p1 (0) = 1 , pi (0) = 0 for
2≤i≤12 one obtains the probability of each state at time t.
According to Fig. 2.27, the MSS has the following performance rates in its different states: v1 = 3.5 in state 1, v2 = 2.0 in state 2, v4 = v6 = 1.8 in states 4 and 6, v3 = v7 = 1.5 in states 3 and 7, and v5 = v8 = v9 = v10 = v11 = v12 = 0 in states 5, 8, 9, 10, 11 and 12. Therefore,

Pr{V=3.5} = p1(t), Pr{V=2.0} = p2(t), Pr{V=1.8} = p4(t)+p6(t),
Pr{V=1.5} = p3(t)+p7(t), Pr{V=0} = p5(t)+p8(t)+p9(t)+p10(t)+p11(t)+p12(t).
For the constant demand level θ*=1 one obtains the MSS instantaneous availability as the sum of the probabilities of the states in which the MSS output performance is greater than or equal to 1. States 1, 2, 3, 4, 6 and 7 are acceptable. Hence A(t) = p1(t) + p2(t) + p3(t) + p4(t) + p6(t) + p7(t).
The MSS instantaneous expected performance is W(t) = ∑_{i=1}^{12} p_i(t) v_i.
Fig. 2.27. State-space diagram of the entire MSS. Each state is labeled by the triplet of element capacities (G1, G2, G3) and by the system performance V = min{G1+G2, G3}: state 1 (1.5, 2, 4), V = 3.5; state 2 (0, 2, 4), V = 2; state 3 (1.5, 0, 4), V = 1.5; state 4 (1.5, 2, 1.8), V = 1.8; state 5 (0, 0, 4), V = 0; state 6 (0, 2, 1.8), V = 1.8; state 7 (1.5, 0, 1.8), V = 1.5; state 8 (1.5, 2, 0), V = 0; state 9 (0, 0, 1.8), V = 0; state 10 (0, 2, 0), V = 0; state 11 (1.5, 0, 0), V = 0; state 12 (0, 0, 0), V = 0.
For element 2:

p21(t) = λ^{(2)}_{2,1}/(µ^{(2)}_{1,2} + λ^{(2)}_{2,1}) − [λ^{(2)}_{2,1}/(µ^{(2)}_{1,2} + λ^{(2)}_{2,1})] e^{−(λ^{(2)}_{2,1}+µ^{(2)}_{1,2})t},  p22(t) = 1 − p21(t).
For element 3:

p31(t) = A1 e^{αt} + A2 e^{βt} + A3,
p32(t) = B1 e^{αt} + B2 e^{βt} + B3,
p33(t) = C1 e^{αt} + C2 e^{βt} + C3,

where

α = −η/2 + √(η²/4 − ζ), β = −η/2 − √(η²/4 − ζ),
η = λ^{(3)}_{2,1} + λ^{(3)}_{3,2} + µ^{(3)}_{1,2} + µ^{(3)}_{2,3}, ζ = λ^{(3)}_{2,1}λ^{(3)}_{3,2} + µ^{(3)}_{1,2}µ^{(3)}_{2,3} + µ^{(3)}_{1,2}λ^{(3)}_{3,2}.
After determining the state probabilities for each element, we obtain the
following performance distributions:
for element 1: g1 = {g11, g12} = {0, 1.5}, p1(t) = {p11(t), p12(t)};
for element 2: g2 = {g21, g22} = {0, 2.0}, p2(t) = {p21(t), p22(t)};
for element 3: g3 = {g31, g32, g33} = {0, 1.8, 4.0}, p3(t) = {p31(t), p32(t), p33(t)}.
2. Having the sets gj, pj(t) for j=1,2,3 obtained in the first step we can
define the u-functions of the individual elements as:
u1(z) = p11(t)z^0 + p12(t)z^1.5,
u2(z) = p21(t)z^0 + p22(t)z^2.0,
u3(z) = p31(t)z^0 + p32(t)z^1.8 + p33(t)z^4.0.

3. Using the generalized RBD method we obtain

u1(z) ⊗_{+} u2(z) = p11(t)p21(t)z^0 + p12(t)p21(t)z^1.5 + p11(t)p22(t)z^2 + p12(t)p22(t)z^3.5,

U(z) = u3(z) ⊗_{min} [u1(z) ⊗_{+} u2(z)]
= [p31(t)z^0 + p32(t)z^1.8 + p33(t)z^4] ⊗_{min} [p11(t)p21(t)z^0 + p12(t)p21(t)z^1.5 + p11(t)p22(t)z^2 + p12(t)p22(t)z^3.5]
= p31(t)p11(t)p21(t)z^0 + p31(t)p12(t)p21(t)z^0 + p31(t)p11(t)p22(t)z^0 + p31(t)p12(t)p22(t)z^0 + p32(t)p11(t)p21(t)z^0 + p32(t)p12(t)p21(t)z^1.5 + p32(t)p11(t)p22(t)z^1.8 + p32(t)p12(t)p22(t)z^1.8 + p33(t)p11(t)p21(t)z^0 + p33(t)p12(t)p21(t)z^1.5 + p33(t)p11(t)p22(t)z^2 + p33(t)p12(t)p22(t)z^3.5.
Taking into account that p31(t)+p32(t)+p33(t) = 1, p21(t)+p22(t) = 1 and p11(t)+p12(t) = 1, we obtain the u-function that determines the performance distribution v, q(t) of the entire MSS in the form U(z) = ∑_{i=1}^{5} q_i(t) z^{v_i}, where

v1 = 0, q1(t) = p11(t)p21(t) + p31(t)p12(t) + p31(t)p11(t)p22(t),
v2 = 1.5 tons/min, q2(t) = p12(t)p21(t)[p32(t) + p33(t)],
v3 = 1.8 tons/min, q3(t) = p32(t)p22(t),
v4 = 2.0 tons/min, q4(t) = p33(t)p11(t)p22(t),
v5 = 3.5 tons/min, q5(t) = p33(t)p12(t)p22(t).
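The closed-form expressions for q1(t), …, q5(t) can be checked numerically by composing the element u-functions directly. A sketch with hypothetical state probabilities at a fixed time t:

```python
from itertools import product

# Hypothetical element-state probabilities at some fixed time t
p11, p12 = 0.1, 0.9               # element 1: capacities 0, 1.5
p21, p22 = 0.2, 0.8               # element 2: capacities 0, 2.0
p31, p32, p33 = 0.05, 0.15, 0.8   # element 3: capacities 0, 1.8, 4.0

u1 = {0: p11, 1.5: p12}
u2 = {0: p21, 2.0: p22}
u3 = {0: p31, 1.8: p32, 4.0: p33}

# U(z) = u3 composed over min with (u1 composed over + with u2)
U = {}
for (g1, a), (g2, b), (g3, c) in product(u1.items(), u2.items(), u3.items()):
    v = min(g1 + g2, g3)
    U[v] = U.get(v, 0.0) + a * b * c

# Closed-form q's derived in the text
q1 = p11*p21 + p31*p12 + p31*p11*p22
q2 = p12*p21*(p32 + p33)
q3 = p32*p22
q4 = p33*p11*p22
q5 = p33*p12*p22
```

For any choice of the probabilities, the enumerated coefficients of U(z) coincide with the closed-form q's and sum to one.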
4. Based on the entire MSS u-function U(z) we obtain the MSS reliability
indices:
The obtained functions W(t) and A(t) for θ*=1 are presented in Fig. 2.28.
Fig. 2.28. System availability for θ*=1 and instantaneous expected performance
Example 2.19
This example demonstrates the use of MCS for computing the expected unsupplied energy (EUE) for a demand of 1000 MW in a generating system consisting of 9 interconnected operating units with the parameters presented in Table 2.8, where the Forced Outage Rate (FOR) is the probability that a unit is not available. For simplicity, we assume that any unit i can have only two states: either perfect functioning with nominal capacity ci or total failure with capacity 0 (which corresponds to the limiting values of the admissible capacity interval: Ci ∈ {0, ci}):
Sampled standard uniform random numbers (columns correspond to samples 1-10):

Unit 1 2 3 4 5 6 7 8 9 10
1 0.57 0.47 0.03 0.93 0.28 0.62 0.52 0.09 0.99 0.33
2 0.01 0.13 0.59 0.38 0.72 0.07 0.19 0.64 0.43 0.78
3 0.12 0.80 0.25 0.71 0.17 0.18 0.85 0.31 0.77 0.22
4 0.90 0.24 0.92 0.40 0.94 0.95 0.30 0.98 0.10 1.00
5 0.34 0.69 0.70 0.49 0.61 0.04 0.74 0.75 0.54 0.67
6 0.23 0.02 0.37 0.27 0.05 0.29 0.08 0.42 0.32 0.11
7 0.68 0.91 0.81 0.60 0.83 0.73 0.97 0.87 0.65 0.89
8 0.79 0.35 0.48 0.15 0.50 0.84 0.41 0.53 0.21 0.55
9 0.45 0.58 0.14 0.82 0.39 0.51 0.63 0.20 0.88 0.44
Available capacity of the units (MW) in each sample:

Unit 1 2 3 4 5 6 7 8 9 10
1 200 200 0 200 200 200 200 0 200 200
2 0 0 200 200 200 0 0 200 200 200
3 0 200 200 200 0 0 200 200 200 200
4 150 150 150 150 150 150 150 150 0 150
5 150 150 150 150 150 0 150 150 150 150
6 100 0 100 100 0 100 0 100 100 0
7 100 100 100 100 100 100 100 100 100 100
8 100 100 100 100 100 100 100 100 100 100
9 100 100 0 100 100 100 100 100 100 100
Total 900 1000 1000 1300 1000 750 1000 1100 1150 1200
EUE 100 0 0 0 0 250 0 0 0 0
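The sampling procedure of this example can be sketched as follows. The FOR values below are hypothetical stand-ins (Table 2.8 is not reproduced here); a unit is treated as available when its sampled random number exceeds its FOR:

```python
import random

# MCS sketch of the EUE computation. Unit capacities follow the table
# above; the FOR values are hypothetical stand-ins for Table 2.8.
capacities = [200, 200, 200, 150, 150, 100, 100, 100, 100]
fors = [0.10, 0.20, 0.15, 0.08, 0.05, 0.25, 0.02, 0.12, 0.14]
demand = 1000.0

def simulate_eue(n_samples, seed=1):
    rng = random.Random(seed)
    shortfall = 0.0
    for _ in range(n_samples):
        # A unit is available when its random number exceeds its FOR
        available = sum(c for c, f in zip(capacities, fors)
                        if rng.random() > f)
        shortfall += max(0.0, demand - available)
    return shortfall / n_samples   # average unsupplied demand per sample

eue = simulate_eue(10_000)
```

Averaging the per-sample shortfall over the samples, as in the table above (samples 1 and 6), yields the MCS estimate of the EUE.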
Example 2.20
Consider the exponential distribution

F(x) = 1 − e^{−λx}.

The inverse function is

x = F^{−1}(u) = −(1/λ) ln(1 − u).
After generating the standard uniformly distributed random numbers u_i for i = 1, 2, ..., the corresponding exponentially distributed random numbers are obtained, according to Eq. (2.1), as

x_i = −(1/λ) ln(1 − u_i).
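The inverse-transform sampling above can be sketched as a minimal Python example:

```python
import math
import random

# Inverse-transform sampling of exponential random numbers,
# x_i = -ln(1 - u_i) / lam, from standard uniform u_i.
def exponential_samples(lam, n, seed=0):
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

xs = exponential_samples(lam=2.0, n=50_000)
mean = sum(xs) / len(xs)   # should be close to 1/lam = 0.5
```

The sample mean approaches 1/λ, as expected for the exponential distribution.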
Example 2.21
Let Y = max{X1, …, Xn}, where the Xi represent independent and identically distributed random variables with distribution function F(x). The distribution function of Y is

G(x) = [F(x)]^n.

Then, from G(x) = u it follows that F(x) = u^{1/n}, and thus

x = F^{−1}(u^{1/n}).

In order to obtain a value x of the random variable Y, we first generate a value u of the standard uniformly distributed variable U and then compute F^{−1}(u^{1/n}).
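This procedure can be sketched for the exponential case, where F^{−1}(u) = −ln(1 − u)/λ; the empirical check compares the sample with G(x) = [F(x)]^n:

```python
import math
import random

# Sampling Y = max{X1, ..., Xn} of n iid exponential variables with a
# single uniform number per sample, via x = F^{-1}(u^{1/n}).
def sample_max_exponential(lam, n, rng):
    u = rng.random()
    return -math.log(1.0 - u ** (1.0 / n)) / lam

rng = random.Random(0)
lam, n = 1.0, 5
ys = [sample_max_exponential(lam, n, rng) for _ in range(50_000)]

# Empirical check: P(Y <= x) should be close to (1 - e^{-lam*x})^n
x0 = 1.0
emp = sum(y <= x0 for y in ys) / len(ys)
theo = (1.0 - math.exp(-lam * x0)) ** n
```

A single uniform number per draw replaces the n draws that direct simulation of the maximum would require.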
The MCS schemes briefly sketched here can be further developed and tailored to the specific features of the problem considered. A detailed description of the MCS technique can be found in (Gentle 2003, Robert 2005).
and

v(r̂j(t)) = r̂j(t)(1 − r̂j(t))/(nj − 1),  (2.147)

respectively, where fj(t) is the number of failures observed during the test.
For binary and capacitated systems with a single demand, closed-form expressions for quantifying the uncertainty associated with the reliability estimate of any system can be obtained as long as the component reliability is available (Ramirez-Marquez and Jiang 2005). However, for the multi-state case, developing closed-form expressions depends on obtaining the variance of each individual term associated with the loss of load probability equation. Computing the exact values of the covariance terms is computationally burdensome, and no closed-form expression is currently at hand. To overcome this, we use the UGF method to approximate the system reliability variance estimate and provide MSS reliability bounds. The method assumes that component test data is readily available.
The UGF method utilizes the information generated from the test data or by MCS of the components' performance (estimates of the components' reliability) to immediately provide an estimate of the system reliability without the need to further simulate the behavior of the test data. For any combination of component reliability estimates, this method calculates the entire MSS performance distribution using the presented reliability block diagram technique and obtains the system reliability estimate.
Two alternatives to generate bounds for multi-state system reliability at
a desired confidence level α are described below.
R̂(t) − z_α √(R̂(t)(1 − R̂(t))/N) + 1/(2N) ≤ R(t) ≤ R̂(t) + z_α √(R̂(t)(1 − R̂(t))/N) − 1/(2N),  (2.148)

where

N = R̂(1 − R̂)/v(R̂) if the number of testing units is not the same for every component, as given by (Jin and Coit 2001), and N = n_j otherwise.

The corresponding one-sided bounds are

R̂(t) − z_α √(R̂(t)(1 − R̂(t))/N) + 1/(2N) ≤ R(t),  (2.149)

R(t) ≤ R̂(t) + z_α √(R̂(t)(1 − R̂(t))/N) − 1/(2N).  (2.150)
Confidence Bound 2. This bound is based on the assumption that the distribution of the system reliability R(t) follows an unknown discrete distribution that can be well approximated by a Gaussian distribution. The mean of this unknown distribution is R̂(t), while the variance is R̂(t)(1 − R̂(t))/N.
However, this alternative uses Wilson's score method based on the component reliability estimates. As in the case of Bound 1, R̂(t) can be obtained by a combination of the MCS and UGF methods.
These assumptions allow for the computation of a (1−α)% level two-sided CI for R(t), given as:
[2NR̂(t) + z²_{α/2} − z_{α/2} √(z²_{α/2} + 4NR̂(t)(1 − R̂(t)))] / [2(N + z²_{α/2})] + 1/(2N) ≤ R(t),  (2.151)

R(t) ≤ [2NR̂(t) + z²_{α/2} + z_{α/2} √(z²_{α/2} + 4NR̂(t)(1 − R̂(t)))] / [2(N + z²_{α/2})] − 1/(2N),  (2.152)
where N is given as in confidence bound 1.
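Confidence Bound 2 can be sketched as follows (the numeric inputs are illustrative; z_{α/2} ≈ 1.645 corresponds to α = 0.1):

```python
import math

# Wilson score interval for the MSS reliability estimate (Confidence
# Bound 2), following Eqs. (2.151)-(2.152) with the 1/(2N) correction.
def wilson_bounds(r_hat, n, z):
    half = z * math.sqrt(z * z + 4.0 * n * r_hat * (1.0 - r_hat))
    denom = 2.0 * (n + z * z)
    lower = (2.0 * n * r_hat + z * z - half) / denom + 1.0 / (2.0 * n)
    upper = (2.0 * n * r_hat + z * z + half) / denom - 1.0 / (2.0 * n)
    return lower, upper

# Illustrative values: estimate 0.5326 with N = 50, z_{alpha/2} = 1.645
lo, hi = wilson_bounds(0.5326, 50, 1.645)
```

Unlike Bound 1, the interval endpoints always lie inside (0, 1) for moderate N, which is the usual motivation for the score method.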
The lower and upper confidence bounds can also be constructed:
(1-α)% Confidence Lower Bound
Example 2.22
Consider the series-parallel system shown in Fig. 2.30. Table 2.11 introduces the reliability data for each component and its associated nominal performance, while Table 2.12 presents the demand distribution. The results of a single test run with a data sample size of 50 simulations per component are presented in Table 2.13. Table 2.14 presents the MSS reliability lower bounds, corresponding to each of the bounding methods, obtained from the simulated component test data and the UGF approach. This table also compares the nominal coverage for the level α = 0.1 with each bound's actual coverage.
Fig. 2.30. Series-parallel system consisting of 17 components
Table 2.12. Demand distribution

Demand      4    5    6    7    9    11   12
Probability 1/7  1/7  1/7  1/7  1/7  1/7  1/7
Table 2.13. Component test data and reliability estimates

Component type   No of units   Failed   Reliability estimate
1 50 8 0.84
2 50 5 0.9
3 50 3 0.94
4 50 2 0.96
5 50 15 0.7
6 50 4 0.92
7 50 11 0.78
8 50 5 0.9
9 50 5 0.9
10 50 4 0.92
11 50 2 0.96
12 50 2 0.96
13 50 0 1
14 50 6 0.88
15 50 12 0.76
16 50 3 0.94
17 50 6 0.88
MSS reliability estimate: MCS+UGF 0.5326; pure MCS 0.4743.

                   Bound 1   Bound 2
Reliability bound  0.4622    0.4822
Repetition   MCS+UGF estimate   Pure MCS estimate   Repetition   MCS+UGF estimate   Pure MCS estimate
1 0.5665 0.4914 11 0.4779 0.4543
2 0.6192 0.5886 12 0.5649 0.5057
3 0.6452 0.5629 13 0.6173 0.5829
4 0.5225 0.5029 14 0.5898 0.5143
5 0.5587 0.4943 15 0.6083 0.5457
6 0.6029 0.5229 16 0.6318 0.5629
7 0.5565 0.5000 17 0.5485 0.4657
8 0.6108 0.5343 18 0.6060 0.4971
9 0.5740 0.4914 19 0.4956 0.4143
10 0.6031 0.5457 20 0.5742 0.4629
also use past knowledge in order to direct the search. Such search algorithms are known as randomized search techniques.
Genetic algorithms (GA’s) are one of the most widely used metaheuris-
tics. They were inspired by the optimization procedure that exists in nature,
the biological phenomenon of evolution. A GA maintains a population of
different solutions allowing them to mate, produce offspring, mutate, and
fight for survival. The principle of survival of the fittest ensures the popu-
lation’s drive towards optimization. The GA’s have become the popular
universal tool for solving various optimization problems, as they have the
following advantages:
• they can be easily implemented and adapted;
• they usually converge rapidly on solutions of good quality;
• they can easily handle constrained optimization problems;
• they produce variety of good quality solutions simultaneously, which is
important in the decision-making process.
The GA concept was developed by John Holland at the University of Michigan and first described in his book (Holland 1975). Holland was impressed by the ease with which biological organisms could perform tasks that eluded even the most powerful computers. He also noted that very few artificial systems have the most remarkable characteristics of biological systems: robustness and flexibility. Unlike technical systems, biological ones have methods for self-guidance, self-repair and reproduction of these features. Holland's biologically inspired approach to optimization is based on the following analogies:
• As in nature, where there are many organisms, there are many possible solutions to a given problem.
• As in nature, where an organism contains many genes defining its properties, each solution is defined by many interacting variables (parameters).
• As in nature, where groups of organisms live together in a population and some organisms in the population are more fit than others, a group of possible solutions can be stored together in computer memory and some of them are closer to the optimum than others.
• As in nature, where organisms that are more fit have more chances of mating and having offspring, solutions that are closer to the optimum can be selected more often to combine their parameters and form new solutions.
• As in nature, where organisms produced by good parents are more likely to be better adapted than the average organism because they received good genes, offspring of good solutions are more likely to be better than a random guess, since they are composed of better parameters.
[Figure: structure of the genetic cycle — random generation of the initial population of solutions; crossover and mutation produce a new solution; the new solution is decoded and its criteria are evaluated; selection determines whether it joins the population.]
Each new solution is decoded and its objective function (fitness) values are estimated. These values, which are a measure of quality, are used to compare different solutions. The comparison is accomplished by a selection procedure that determines which solution is better: the newly obtained solution or the worst solution in the population. The better solution joins the population, while the other is discarded. If the population contains equivalent solutions following selection, then the redundancies are eliminated and the population size decreases as a result.
A genetic cycle terminates when Nrep new solutions are produced or when the number of solutions in the population decreases to a specified level. Then, new randomly constructed solutions are generated to replenish the shrunken population, and a new genetic cycle begins. The whole GA is terminated when its termination condition is satisfied. This condition can
Example 2.23
In this example we present several initial stages of a steady-state GA that maximizes a function of six integer variables x1, …, x6 of the form

f(x1, …, x6) = 1000/[(x1 − 3.4)² + (x2 − 1.8)² + (x3 − 7.7)² + …].

The variables can take values from 1 to 9. The initial population, consisting of five solutions ordered according to their fitness (the value of the function f), is:
No. x1 x2 x3 x4 x5 x6 f(x1,…,x6)
1 4 2 4 1 2 5 297.8
2 3 7 7 7 2 7 213.8
3 7 5 3 5 3 9 204.2
4 2 7 4 2 1 4 142.5
5 8 2 3 1 1 4 135.2
Using the random generator that produces the numbers of the solutions, the GA chooses the first and third strings, i.e. (4 2 4 1 2 5) and (7 5 3 5 3 9) respectively. From these strings, it produces a new one by applying a crossover procedure that takes the first three numbers from the better parent string and the last three numbers from the inferior parent string. The resulting string is (4 2 4 5 3 9). The fitness of this new solution is f(x1, …, x6) = 562.4. The new solution enters the population, replacing the one with the lowest fitness. The new population is now
No. x1 x2 x3 x4 x5 x6 f(x1,…,x6)
1 4 2 4 5 3 9 562.4
2 4 2 4 1 2 5 297.8
3 3 7 7 7 2 7 213.8
4 7 5 3 5 3 9 204.2
5 2 7 4 2 1 4 142.5
The next new solution produced by crossover and mutation is (3 7 7 4 3 9), with fitness 349.9; it enters the population in the same way:

No. x1 x2 x3 x4 x5 x6 f(x1,…,x6)
1 4 2 4 5 3 9 562.4
2 3 7 7 4 3 9 349.9
3 4 2 4 1 2 5 297.8
4 3 7 7 7 2 7 213.8
5 7 5 3 5 3 9 204.2
A further new solution, (4 2 5 4 3 9) with fitness 1165.5, is then obtained, and the population becomes

No. x1 x2 x3 x4 x5 x6 f(x1,…,x6)
1 4 2 5 4 3 9 1165.5
2 4 2 4 5 3 9 562.4
3 3 7 7 4 3 9 349.9
4 4 2 4 1 2 5 297.8
5 3 7 7 7 2 7 213.8
Note that the mutation procedure is not applied to all the solutions obtained by the crossover. This procedure is used with some pre-specified probability pmut. In our example, only the second and the third newly obtained solutions underwent the mutation.
Actual GAs operate with much larger populations and produce thousands of new solutions using the crossover and mutation procedures. The steady-state GA with a population size of 100 obtained the optimal solution of the problem presented here after producing about 3000 new solutions. Note that the total number of possible solutions is 9^6 = 531441. The GA managed to find the optimal solution by exploring less than 0.6% of the entire solution space.
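The genetic cycle described above can be sketched in Python. The objective function below is a hypothetical stand-in for the truncated f of Example 2.23 (the target values and the +1 term in the denominator are assumptions), and the crossover copies the first three genes from the better parent, as in the example:

```python
import random

# Minimal steady-state GA sketch: six integer variables in 1..9.
def fitness(x):
    targets = (3.4, 1.8, 7.7, 4.1, 2.9, 8.8)   # assumed target values
    return 1000.0 / (1.0 + sum((xi - ti) ** 2 for xi, ti in zip(x, targets)))

def steady_state_ga(pop_size=10, n_new=3000, p_mut=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(1, 9) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(n_new):
        a, b = rng.sample(pop, 2)                  # pick two parents
        better, worse = sorted((a, b), key=fitness, reverse=True)
        child = better[:3] + worse[3:]             # crossover as in the text
        if rng.random() < p_mut:                   # mutation with prob p_mut
            child[rng.randrange(6)] = rng.randint(1, 9)
        worst = min(pop, key=fitness)              # selection: keep the better
        if fitness(child) > fitness(worst):
            pop[pop.index(worst)] = child
    return max(pop, key=fitness)

best = steady_state_ga()
```

The new solution competes only with the worst member of the population, which is the defining feature of the steady-state scheme.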
Both types of GA are based on the crossover and mutation procedures, which depend strongly on the solution encoding technique. These procedures should preserve the feasibility of the solutions and provide for the inheritance of their essential properties.
Note that the space of the integer strings only approximately maps the space of the real-valued parameters. The number N determines the precision of the search. The search resolution for the j-th parameter is (x_j^max − x_j^min)/N. Therefore, increasing N provides a more precise search. On the other hand, the size of the search space of integer strings grows drastically with N, which slows the GA convergence. A reasonable compromise can be found by using a multistage GA search.
Example 2.24
Consider a problem in which one has to minimize a function of seven parameters. Assume that, following a preliminary decision, the ranges of the possible variation of the parameters are different.
Let the random generator provide integer numbers in the range 0-100 (N = 100). The random integer string and the corresponding values of the parameters obtained according to (2.156) are presented in Table 2.16.
Each set can contain from 0 to Y items. The partition of the set Φ can be represented by the Y-length string a = (a1 a2 … aY−1 aY) in which aj is the number of the set to which item j belongs. Note that, in the strings representing feasible solutions of the partition problem, each element can take a value in the range (1, K).
Now consider a more complicated allocation problem in which the number of items is not specified. Assume that there are H types of different items with an unlimited number of items of each type h. The number of items of each type allocated in each subset can vary. To represent an allocation of the variable number of items in K subsets one can use the following string encoding a = (a11 a12 … a1K a21 a22 … a2K … aH1 aH2 … aHK), in which a_hk defines the number of items of type h allocated in subset k.
Example 2.25
Consider the problem of allocating items of three different types in two disjoint subsets. In this problem, H = 3 and K = 2. Any possible allocation can be represented by an integer string using the encoding described above. For example, the string (2 1 0 1 1 1) encodes the solution in which two type-1 items are allocated in the first subset and one in the second subset, one item of type 2 is allocated in the second subset, and one item of type 3 is allocated in each of the two subsets.
When K = 1, one has an assignment problem in which a number of different items should be chosen from a list containing an unlimited number of items of H different types. Any solution of the assignment problem can be represented by the string a = (a1 a2 … aH), in which aj corresponds to the number of chosen items of type j.
The range of variation of the string elements for both the allocation and assignment problems can be specified based on a preliminary estimation of the characteristics of the optimal solution (the maximal possible number of elements of the same type included in a single subset). The greater the range, the greater the solution space to be explored (note that the minimal possible value of a string element is always zero in order to provide the possibility of not choosing any element of the given type for the given subset). In many practical applications, the total number of items belonging to each subset is also limited. In this case, any string representing a solution in which this constraint is not met should be transformed in the following way:

a*_ij = ⌊a_ij N_j / ∑_{h=1}^{H} a_hj⌋ if N_j < ∑_{h=1}^{H} a_hj, and a*_ij = a_ij otherwise, for 1 ≤ i ≤ H, 1 ≤ j ≤ K.  (2.158)
Example 2.26
Consider the case in which items of three types should be allocated into two subsets. Assume that it is prohibited to allocate more than five items of each type to the same subset. The GA should produce strings with elements ranging from 0 to 5. An example of such a string is (4 2 5 1 0 2).
Assume that for some reason the total numbers of items in the first and in the second subsets are restricted to seven and six respectively. In order to obtain a feasible solution, one has to apply the transform (2.158) in which N1 = 7, N2 = 6:

∑_{h=1}^{3} a_h1 = 4 + 5 + 0 = 9,  ∑_{h=1}^{3} a_h2 = 2 + 1 + 2 = 5.

Since 9 > N1 = 7, the elements corresponding to the first subset are transformed: a*11 = ⌊4·7/9⌋ = 3, a*21 = ⌊5·7/9⌋ = 3, a*31 = 0, whereas the elements corresponding to the second subset remain unchanged (5 < N2 = 6). The resulting string is (3 2 3 1 0 2).
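The transform (2.158) can be sketched as follows (integer floor division implements ⌊·⌋):

```python
# Sketch of the feasibility transform (2.158): scale down the allocation
# of each over-filled subset so that subset j holds at most N[j] items.
def enforce_subset_limits(a, H, K, N):
    # a is the flat string (a11 a12 ... a1K ... aH1 ... aHK)
    a = list(a)
    for j in range(K):
        total = sum(a[i * K + j] for i in range(H))
        if total > N[j]:
            for i in range(H):
                a[i * K + j] = a[i * K + j] * N[j] // total
    return a

# Example 2.26: string (4 2 5 1 0 2), limits N1 = 7, N2 = 6
print(enforce_subset_limits([4, 2, 5, 1, 0, 2], H=3, K=2, N=[7, 6]))
# -> [3, 2, 3, 1, 0, 2]
```

Strings that already meet the limits pass through unchanged, so the transform can be applied unconditionally after each crossover or mutation.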
When the number of item types and subsets is large, the solution representation described above results in an enormous growth of the length of the string. Besides, to represent a reasonable solution (especially when the number of items belonging to each subset is limited), such a string should contain a large fraction of zeros because only a few items should be included in each subset. This redundancy increases the demand for computational resources and lowers the efficiency of the GA. To reduce the redundancy of the solution representation, each inclusion of m items of type h into subset k is represented by a triplet (m h k). In order to preserve the constant length of the strings, one has to specify in advance a maximal reasonable number of such inclusions I. The string representing up to I inclusions takes the form (m1 h1 k1 m2 h2 k2 … mI hI kI). The range of the string elements should be (0, max{M, H, K}), where M is the maximal possible number of elements of the same type included in a single subset. An arbitrary string generated in this range can still produce infeasible solutions. In order to provide feasibility, one has to apply the transform a*_j = a_j mod (x+1), where x is equal to M, H or K for the string elements corresponding to m, h and k respectively. If one of the elements of a triplet is equal to zero, then no inclusion is made.
For example, the string (3 1 1 2 1 2 3 2 1 1 2 2 2 3 2) represents the same allocation as the string (3 2 3 1 0 2) in Example 2.26. Note that the permutation of triplets, as well as the addition or removal of triplets containing zeros, does not change the solution. For example, the string (4 0 1 2 3 2 2 1 2 3 1 1 1 2 2 3 2 1) also represents the same allocation as the previous string.
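Decoding of the triplet representation can be sketched as follows; the test strings are those discussed above:

```python
# Sketch of decoding the triplet representation (m h k): each triplet adds
# m items of type h to subset k; a zero in any position means no inclusion.
def decode_triplets(s, H, K):
    alloc = [[0] * K for _ in range(H)]   # alloc[type][subset]
    for i in range(0, len(s) - 2, 3):
        m, h, k = s[i], s[i + 1], s[i + 2]
        if m and h and k:
            alloc[h - 1][k - 1] += m
    return alloc

a = decode_triplets([3, 1, 1, 2, 1, 2, 3, 2, 1, 1, 2, 2, 2, 3, 2], H=3, K=2)
b = decode_triplets([4, 0, 1, 2, 3, 2, 2, 1, 2, 3, 1, 1, 1, 2, 2, 3, 2, 1], H=3, K=2)
# Both decode to the allocation of the flat string (3 2 3 1 0 2)
```

Because the decoder simply accumulates inclusions, permuted triplets and zero-containing triplets indeed leave the decoded allocation unchanged.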
M+R(a)−πη(C*, a)
where
η(C*, a)=(1+C(a)−C*)1(C(a)>C*) (2.160)
The coefficient π should be greater than one. In this case the fitness of any
solution violating the constraint is smaller than M (the smallest violation of
the constraint C(a)≤C* produces a penalty greater than π) while the fitness
of any solution meeting the constraint is greater than M. In order to keep
the fitness of the solutions positive, one can choose M>π (1+Cmax−C*),
where Cmax is the maximal possible system cost.
Another typical optimization problem is minimizing the system cost
subject to the reliability constraint: C(a) → min subject to R(a)≥R*.
The fitness of any solution a of this problem can be defined as
M−C(a)−πη(R*,a)
where
η(R*, a)=(1+R*−R(a))1(R(a)<R*) (2.161)
The coefficient π should be greater than Cmax. In this case, the fitness of
any solution violating the constraint is smaller than M − Cmax whereas the
fitness of any solution meeting the constraint is greater than M − Cmax. In
order to keep the fitness of the solutions positive, one can choose
M>Cmax + 2π.
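A sketch of the penalized fitness of Eq. (2.161) for cost minimization under a reliability constraint (the numeric values of Cmax, π and M are hypothetical, chosen to satisfy π > Cmax and M > Cmax + 2π):

```python
# Penalized fitness M - C(a) - pi * eta(R*, a), per Eq. (2.161):
# eta = 1 + R* - R(a) when the reliability constraint is violated, else 0.
def penalized_fitness(cost, reliability, r_star, big_m, pi):
    eta = (1.0 + r_star - reliability) if reliability < r_star else 0.0
    return big_m - cost - pi * eta

c_max = 100.0                   # assumed maximal possible system cost
pi = 2.0 * c_max                # coefficient greater than c_max
big_m = c_max + 2.0 * pi + 1.0  # keeps the fitness positive

ok = penalized_fitness(40.0, 0.95, r_star=0.9, big_m=big_m, pi=pi)
bad = penalized_fitness(40.0, 0.85, r_star=0.9, big_m=big_m, pi=pi)
```

With these choices, every feasible solution scores above M − Cmax and every infeasible one below it, so feasible solutions always dominate in selection.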
References
Aven T and Jensen U (1999) Stochastic models in reliability. Springer, NY.
Billinton R, Allan R (1996) Reliability evaluation of power systems. Plenum Press, Boston.
Cinlar E (1975) Introduction to stochastic processes. Prentice-Hall, Englewood Cliffs, NJ.
Coit D, Jin T (2001) Prioritizing system-reliability prediction improvements, IEEE
Transactions on Reliability, 50(1): 17-25.
Coit D (1997) System-reliability confidence-intervals for complex-systems with
estimated component-reliability, IEEE Transactions on Reliability, 46(4):
487-493.
Endrenyi J (1979) Reliability modeling in electric power systems. John Wiley &
Sons, NY.
Gentle J (2003) Random number generation and Monte Carlo methods, Springer,
New York.
Goldberg D (1989) Genetic Algorithms in search, optimization and machine learning. Addison-Wesley.
Goldner Sh, Lisnianski A (2006) Markov reward models for ranking units performance, IEEE 24th Convention of Electrical and Electronics Engineers in Israel: 221-225.
Grimmett G, Stirzaker D (1992) Probability and random processes. Second edition. Clarendon Press, Oxford.
Holland J (1975) Adaptation in natural and artificial systems. The University of
Michigan Press, Ann Arbor, Michigan.
Howard R (1960) Dynamic programming and Markov processes, MIT Press, Cambridge, Massachusetts.
International Standard (1995) Application of Markov techniques, International
Electrotechnical Commission IEC 1165.
Karlin S, Taylor H (1981) A second course in stochastic processes. Academic
Press, Orlando, FL.
Kovalenko I, Kuznetsov N, Pegg Ph (1997) Mathematical theory of reliability of time dependent systems with practical applications, Wiley, Chichester, England.
Kinnear K (1993) Generality and difficulty in Genetic Programming: evolving a sort. In Proceedings of the Fifth International Conference on Genetic Algorithms. Ed. Forrest S. Morgan Kaufmann, San Mateo, CA: 287-294.
Levitin G (2005) Universal generating function in reliability analysis and optimization, Springer-Verlag, London.
Levy P (1954) Processus semi-markoviens. Proc. Int. Cong. Math. Amsterdam: 416-426.
Limnios N, Oprisan G (2000) Semi-Markov processes and reliability, Birkhauser,
Boston, Basel, Berlin.
Lindqvist B (1987) Monotone Markov models, Reliability Engineering, 17: 47-58.