
Lecture 10.

Failure Probabilities and Safety Indexes

Igor Rychlik
Chalmers, Department of Mathematical Sciences

Probability, Statistics and Risk, MVE300 • Chalmers • May 2013


Safety analysis - General setup:
An alternative method to compute risk, here the probability of at least
one accident in one year, is to identify streams of events Ai which, if
followed by a suitable scenario Bi, lead to the accident. The risk for the
accident is then approximately measured by Σi λAi P(Bi),¹ where the
intensities of the streams of Ai, λAi, all have units [year⁻¹].
An important assumption is that the streams of initiation events are
independent and much more frequent than the occurrences of the studied
accidents. Hence the intensities λAi can be estimated from historical records.

What remains is computation of probabilities P(Bi ).

We consider cases when the scenario B describes the ways systems can
fail or, more generally, some risk-reduction measures fail to work as planned.
In the safety of engineering structures, B is often written in the form that a
function of uncertain values (random variables) exceeds some critical
level u_crt:

B = "g(X1, X2, . . . , Xn) > u_crt".

¹ For small x, 1 − exp(−x) ≈ x.
Failure probability:
Some of the variables Xi may describe uncertainty in parameters, models,
etc., while others describe genuine random variability of the environment. One
thus mixes variables whose distributions are interpreted in the
frequentist way with variables having subjectively chosen distributions.
Hence the interpretation of what the failure probability

Pf = P(B) = P(g(X1, X2, . . . , Xn) > u_crt)

means is difficult and depends on the properties of the analysed scenario.

It is convenient to find a function h such that

B = "h(X1, X2, . . . , Xn) ≤ 0".

Then, with Z = h(X1, X2, . . . , Xn), the failure probability is Pf = FZ(0).²


One might think that it is a simple matter to find the failure probability
Pf, since only the distribution of a single variable Z needs to be found.

² Often h(X1, X2, . . . , Xn) = u_crt − g(X1, X2, . . . , Xn). Note that h is not
uniquely defined.
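When the distribution of Z is not available in closed form, Pf can be estimated by crude Monte Carlo simulation. The sketch below is illustrative only: the limit-state function g, the critical level u_crt and the input distributions are invented, not taken from the lecture.

```python
import numpy as np

# Minimal Monte Carlo sketch for Pf = P(g(X1, X2) > u_crt).
# The function g, the level u_crt and the input distributions are
# hypothetical choices, made only to illustrate the idea.
rng = np.random.default_rng(1)
n = 100_000

X1 = rng.normal(10.0, 2.0, n)       # e.g. a load-type variable
X2 = rng.lognormal(0.0, 0.3, n)     # e.g. an uncertain model factor

def g(x1, x2):
    return x1 * x2                  # hypothetical limit-state function

u_crt = 20.0
Z = u_crt - g(X1, X2)               # h = u_crt - g; failure when Z <= 0
print("estimated Pf:", np.mean(Z <= 0))
```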
Example - summing many small contributions:

By Hooke's law, the elongation ε of a fibre is proportional to the force F,
that is, ε = F/K or F = Kε. Here K, called Young's modulus, is
uncertain and modelled as a rv. with mean m and variance σ².

Consider a wire containing 1000 fibres with individual independent values
of Young's modulus Ki. A safety criterion is given by ε ≤ ε0. With
F = ε Σ Ki we can write

Pf = P( F / Σ Ki > ε0 ) = P( ε0 Σ Ki − F < 0 ).

Hence, in this example, we have

h(K1, . . . , K1000, F) = ε0 Σ Ki − F,

which is a linear function of the Ki and F.³

³ Here, F is an external force (load) while Σ Ki represents the material strength.
Assume F ∈ N(mF, σF²) is independent of the Ki (E[Ki] = m, V[Ki] = σ²).

By the central limit theorem, Σ Ki is approximately N(1000m, 1000σ²).
Hence Z = ε0 Σ Ki − F is the difference of two independent normal
variables. Since a sum of independent normally distributed variables has a
normal distribution,⁴ Z ∈ N(mZ, σZ²), where

mZ = 1000 ε0 m − mF,    σZ² = 1000 ε0² σ² + σF².

Consequently Pf = P(Z < 0) = Φ(−mZ/σZ).

The bigger the ratio βC = mZ/σZ, the lower the probability of failure.

⁴ A sum of jointly normally distributed variables (which can be dependent) is
normally distributed too.
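A small numeric illustration of these formulas; the values of m, σ, ε0, mF and σF below are invented for the sketch, not taken from the lecture.

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal cdf, written via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical numbers, chosen only to exercise the formulas above.
n_fibres = 1000
m, sigma = 200.0, 30.0        # mean and std of each Young's modulus Ki
eps0 = 0.002                  # allowed elongation
mF, sigmaF = 300.0, 60.0      # mean and std of the external force F

mZ = n_fibres * eps0 * m - mF
sigmaZ = sqrt(n_fibres * eps0**2 * sigma**2 + sigmaF**2)

betaC = mZ / sigmaZ           # Cornell's safety index
Pf = Phi(-betaC)              # failure probability P(Z < 0)
print(f"betaC = {betaC:.2f}, Pf = {Pf:.3f}")
```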
Some results for sums:
▶ If X1, . . . , Xn are independent normally distributed, i.e.
  Xi ∈ N(mi, σi²), then their sum Z is normally distributed too, i.e.
  Z ∈ N(m, σ²), where
  m = m1 + · · · + mn,   σ² = σ1² + · · · + σn².
▶ For independent Gamma distributed random variables
  X1, X2, . . . , Xn, where Xi ∈ Gamma(ai, b), i = 1, . . . , n, one can
  show that
  X1 + X2 + · · · + Xn ∈ Gamma(a1 + a2 + · · · + an, b).
▶ A sum of independent Poisson variables, Ki ∈ Po(mi), i = 1, . . . , n, is
  again Poisson distributed:
  K1 + · · · + Kn ∈ Po(m1 + · · · + mn).
Recall the more general results on superposition and decomposition
of Poisson processes. (A simulation check of these closure properties is
sketched below.)
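A possible sketch of such a check, with arbitrary sample size and parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Gamma: X_i ~ Gamma(a_i, b) with a common second parameter b.
# numpy's gamma(shape, scale) treats b as a scale; if the lecture's b is a
# rate parameter, use scale=1/b instead.
a, b = [1.5, 2.0, 0.5], 2.0
S = sum(rng.gamma(ai, b, n) for ai in a)
print("Gamma sum:   mean", S.mean(), " expected", sum(a) * b)
print("Gamma sum:   var ", S.var(),  " expected", sum(a) * b**2)

# Poisson: K_i ~ Po(m_i)  =>  sum ~ Po(m_1 + ... + m_n).
m = [0.7, 1.2, 2.1]
K = sum(rng.poisson(mi, n) for mi in m)
print("Poisson sum: mean", K.mean(), " expected", sum(m))
print("Poisson sum: var ", K.var(),  " expected", sum(m))
```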
The weakest-link principle:
The principle means that the strength of a structure is equal to the
strength of its weakest part. For a chain, "failure" occurs if the minimum of
the strengths of the chain components is below a critical level u_crt:

min(X1, . . . , Xn) ≤ u_crt.

If Xi are independent with distributions Fi, then

P(min(X1, . . . , Xn) ≤ u_crt) = 1 − P(min(X1, . . . , Xn) > u_crt)
                              = 1 − P(X1 > u_crt, . . . , Xn > u_crt)
                              = 1 − (1 − F1(u_crt)) · . . . · (1 − Fn(u_crt)).

The computations are particularly simple if the Xi are iid Weibull distributed;
then the cdf of X = min(X1, X2, . . . , Xk) is

P(X ≤ x) = 1 − (1 − (1 − e^(−(x/a)^c)))^k = 1 − e^(−k(x/a)^c) = 1 − e^(−(x/a_k)^c),

that is, a Weibull distribution with a new scale parameter a_k = a/k^(1/c).⁵

⁵ The change of scale parameter due to minimum formation is called the size
effect (larger objects are weaker).
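A simulation sketch of the size effect; the scale a, shape c and number of links k below are arbitrary choices.

```python
import numpy as np
from math import gamma

# Sketch: the minimum of k iid Weibull(a, c) strengths is again Weibull
# distributed, with scale a_k = a / k**(1/c).
rng = np.random.default_rng(2)
a, c, k = 216.0, 5.79, 100            # scale, shape, number of links
n = 100_000

# numpy's weibull(c) draws with scale 1, so multiply by the scale a.
samples = a * rng.weibull(c, size=(n, k))
weakest = samples.min(axis=1)

a_k = a / k**(1 / c)
# The mean of a Weibull(a_k, c) variable is a_k * Gamma(1 + 1/c).
print("theoretical scale a_k:", a_k)
print("simulated mean:", weakest.mean(), " vs ", a_k * gamma(1 + 1/c))
```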
Example: Strength of a wire
Experiments have been performed with 5 cm long wires. The estimated
average strength was 200 kg and the coefficient of variation 0.20. From
experience, one knows that such wires have Weibull distributed strengths.

For the Weibull cdf F(x) = 1 − e^(−(x/a)^c), x > 0, the coefficient of variation is

R(X) = √(Γ(1 + 2/c) − Γ²(1 + 1/c)) / Γ(1 + 1/c).

   c      Γ(1 + 1/c)   R(X)
   1.00     1.0000     1.0000
   2.00     0.8862     0.5227
   2.10     0.8857     0.5003
   2.70     0.8893     0.3994
   3.00     0.8930     0.3634
   3.68     0.9023     0.3025
   4.00     0.9064     0.2805
   5.00     0.9182     0.2291
   5.79     0.9259     0.2002
   8.00     0.9417     0.1484
  10.00     0.9514     0.1203
  12.10     0.9586     0.1004
  20.00     0.9735     0.0620
  21.80     0.9758     0.0570
  50.00     0.9888     0.0253
 128.00     0.9956     0.0100

With R(X) = 0.20, the table gives c = 5.79 and Γ(1 + 1/c) = 0.9259. Next, using
the relation a = E[X]/Γ(1 + 1/c), one gets a = 200/0.9259 = 216.01.
We now consider the strength of a 5 meter long wire. It is 100 times longer
than the tested wires and hence its strength is Weibull distributed with
c = 5.79 and a = 216.01/100^(1/c) = 97.51. On average, the 5 meter long
wires are a factor 2.22 weaker than the 5 cm long test specimens.

Now we can calculate the probability that a wire of length 5 m will have
a strength less than 50 kg:

P(X ≤ 50) = 1 − e^(−(50/97.51)^5.79) = 0.021.

For the 5 cm long test specimens

P(X ≤ 50) = 1 − e^(−(50/216)^5.79) = 0.00021,

i.e. 100 times smaller. This is not surprising, since 1 − exp(−x) ≈ x for small x
values.
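These numbers are easy to reproduce; the short check below uses only the values given in the example.

```python
from math import exp, gamma

c = 5.79
a = 200 / gamma(1 + 1/c)          # scale of the 5 cm specimens, ~216.01
a_100 = a / 100**(1/c)            # scale of the 100 times longer wire, ~97.51

def weibull_cdf(x, scale, shape):
    return 1 - exp(-(x / scale)**shape)

print("a      =", round(a, 2))
print("a_100  =", round(a_100, 2))
print("P(X<=50), 5 m wire :", round(weibull_cdf(50, a_100, c), 4))
print("P(X<=50), 5 cm wire:", round(weibull_cdf(50, a, c), 6))
```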
Multiplicative models:
Assume that in January 2009 one has invested K SEK in a stock portfolio
and one wonders what its value will be in the year 2020. Denote the value of
the portfolio in year 2020 by Z and let Xi be the factor by which this value
changes during the year 2009 + i, i = 0, 1, . . . , 11. Obviously the value is
given by
Z = K · X0 · X1 · . . . · X11.
Here "failure" is subjective and depends on our expectations; e.g.
"failure" can be that we lost money, i.e. Z < K.

In order to estimate the risk (probability) of failure, one needs to model
the properties of the Xi. As we know, the factors Xi are neither independent nor
identically distributed.⁶ For simplicity, suppose that the Xi are iid; then,
employing a logarithmic transformation,

ln Z = ln K + ln X0 + · · · + ln X11.

If the number of factors is large, the Central Limit Theorem tells us that ln Z is
approximately normally distributed.

⁶ The so-called theory of time series is often used to model the variability of Xi.
Lognormal rv.:

A variable Z such that ln Z ∈ N(m, σ²) is called a lognormal variable.

Using the distribution Φ of a N(0, 1) variable we have that

FZ(z) = P(Z ≤ z) = P(ln Z ≤ ln z) = Φ((ln z − m)/σ).

It can be shown that

E[Z] = e^(m + σ²/2),
V[Z] = e^(2m) · (e^(2σ²) − e^(σ²)),
D[Z] = e^m √(e^(2σ²) − e^(σ²)) = e^(m + σ²/2) · √(e^(σ²) − 1).
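A quick simulation check of these moment formulas; the values of m and σ below are arbitrary.

```python
import numpy as np
from math import exp, sqrt

m, s = 0.5, 0.8                           # arbitrary parameters of ln Z
rng = np.random.default_rng(3)
Z = np.exp(rng.normal(m, s, 1_000_000))   # lognormal sample

print("mean:", Z.mean(), " vs ", exp(m + s**2 / 2))
print("std :", Z.std(),  " vs ", exp(m + s**2 / 2) * sqrt(exp(s**2) - 1))
```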

Please study the applications of lognormally distributed variables given in
the course book.
Safety Indexes:
A safety index is used in risk analysis as a measure of safety which is high
when the probability of failure Pf is low. This measure is a cruder tool
than the probability, and is used when the uncertainty in Pf is too
large or when there is not sufficient information to compute Pf.

Consider the simplest case Z = R − S and suppose that the variables R and
S are independent normally distributed, i.e. R ∈ N(mR, σR²),
S ∈ N(mS, σS²). Then also Z ∈ N(mZ, σZ²), where mZ = mR − mS and
σZ = √(σR² + σS²), and thus

Pf = P(Z < 0) = Φ((0 − mZ)/σZ) = Φ(−βC) = 1 − Φ(βC),

where βC = mZ/σZ is called Cornell's safety index.

[Figure: Illustration of the safety index for βC = 2. The failure probability
Pf = 1 − Φ(2) = 0.023 is the area of the shaded region.]
Cornell's index
The index βC gives the failure probability when Z is approximately
normally distributed. Note that for any distribution of Z, Cornell's
safety index βC = 4 always means that the distance from the mean of Z
to the unsafe region is 4 standard deviations. In quality control, six
standard deviations⁷ have lately been used; however, in that case one is
interested in the fraction of components that do not meet specifications. In
our case we do not consider mass production but long exposure times.

Even if in general Pf ≠ 1 − Φ(βC), there exists the following, although very
conservative, estimate:

P("System fails") = P(Z < 0) ≤ 1/(1 + βC²).
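A small sketch comparing this distribution-free bound with the value obtained under the normality assumption:

```python
from math import erf, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

for betaC in (2.0, 3.0, 4.0):
    normal_Pf = 1 - Phi(betaC)        # exact if Z is normal
    bound = 1 / (1 + betaC**2)        # conservative, distribution-free
    print(f"betaC = {betaC}:  normal Pf = {normal_Pf:.2e},  bound = {bound:.2e}")
```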

Cornell's index has some deficiencies and hence an improved version,
called the Hasofer-Lind index, is commonly used in reliability analysis. Since
quite advanced computer software is needed for the computation of βHL, it
will not be discussed in detail.
⁷ Six Sigma is a registered service mark and trademark of Motorola, Inc.
Motorola has reported over US$ 17 billion in savings from Six Sigma as of 2006.
Use of safety indexes in risk analysis

For βHL, one has approximately that Pf ≈ Φ(−βHL). Clearly, a higher
value of the safety index implies a lower risk for failure but also a more
expensive structure. In order to propose the so-called target safety
index one needs to consider both costs and consequences. Possible
classes of consequences are:
Minor Consequences This means that risk to life, given a failure, is
small to negligible and economic consequences are small
or negligible (e.g. agricultural structures, silos, masts).
Moderate Consequences This means that risk to life, given a failure, is
medium or economic consequences are considerable (e.g.
office buildings, industrial buildings, apartment buildings).
Large Consequences This means that risk to life, given a failure, is
high or that economic consequences are significant (e.g.
main bridges, theatres, hospitals, high-rise buildings).
Obviously, the cost of risk prevention etc. also has to be considered
when we are choosing target reliability indexes ("target" means that one
wishes to design the structures so that the safety index for a particular
failure mode will have the target value). Here the so-called "ultimate
limit states" are considered, which means failure modes of the structure,
in everyday language: that one cannot use it anymore.

It is important to remember that the values of βHL contain time
information; βHL is a measure of safety for one year. The index βHL = 3.7
means that the "nominal" return period for a failure A, say, is 10⁴ years. (Note
that if you have 1000 independent streams of A, then the return period is
only 10 years.)
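A quick check of this return-period statement (a sketch):

```python
from math import erf, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

beta_HL = 3.7
Pf_per_year = Phi(-beta_HL)           # approximate yearly failure probability
print("Pf per year       ~", f"{Pf_per_year:.1e}")
print("return period     ~", round(1 / Pf_per_year), "years")
print("with 1000 streams ~", round(1 / (1000 * Pf_per_year)), "years")
```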
Table 1: Safety index and consequences.

Relative cost of    Minor consequences   Moderate consequences   Large consequences
safety measure      of failure           of failure              of failure
Large               βHL = 3.1            βHL = 3.3               βHL = 3.7
Normal              βHL = 3.7            βHL = 4.2               βHL = 4.4
Small               βHL = 4.2            βHL = 4.4               βHL = 4.7
Computation of Cornell's index

▶ Although Cornell's index βC has some deficiencies, it is still an
  important measure of safety.

▶ Recall the setup: Ri are the strength variables, Si the load variables, and
  h(·) a function of strengths and loads which is negative when failure
  occurs. Let
      Z = h(R1, . . . , Rk, S1, . . . , Sn),
  and assume that E[Z] > 0. Now βC = E[Z]/V[Z]^(1/2).

▶ Assume that only the expected values and variances of the variables Ri
  and Si are known. (We also assume that all strength and load
  variables are independent.) In order to compute βC we need to find

      E[h(R1, . . . , Rk, S1, . . . , Sn)],    V[h(R1, . . . , Rk, S1, . . . , Sn)],

  which often can only be done by means of some approximations.
  The main tools are the so-called Gauss formulae.
Gauss' Approximations

Let X be a random variable with E[X] = m and V[X] = σ²; then

E[h(X)] ≈ h(m)    and    V[h(X)] ≈ (h′(m))² σ².⁸

Let X and Y be independent random variables with expectations mX, mY,
respectively. For a smooth function h the following approximations hold:

E[h(X, Y)] ≈ h(mX, mY),
V[h(X, Y)] ≈ (h1(mX, mY))² V[X] + (h2(mX, mY))² V[Y],

where

h1(x, y) = ∂h(x, y)/∂x,    h2(x, y) = ∂h(x, y)/∂y.

⁸ Use Taylor's formula to approximate h around x0 by a polynomial function
h(x) ≈ h(x0) + h′(x0)(x − x0). Choose the "typical value" x0 = E[X] = m.
If X and Y are correlated, then

E[h(X, Y)] ≈ h(mX, mY),
V[h(X, Y)] ≈ (h1(mX, mY))² V[X] + (h2(mX, mY))² V[Y]
             + 2 h1(mX, mY) h2(mX, mY) Cov[X, Y].

The extension to dimensions higher than 2 is straightforward.

For independent strength and load variables, Cornell's index can be
approximately computed by the following formula:

βC ≈ h(mR1, . . . , mRk, mS1, . . . , mSn) /
     ( Σ_{i=1}^{k+n} hi(mR1, . . . , mRk, mS1, . . . , mSn)² σi² )^(1/2),

where σi² is the variance of the ith variable in the vector of loads and
strengths (R1, . . . , Rk, S1, . . . , Sn), while hi denotes the partial derivative
of h with respect to its ith argument.
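This formula is straightforward to automate with numerical partial derivatives. Below is a minimal sketch of that idea; the function name, the central-difference step and the example values are my own choices, not from the lecture.

```python
from math import sqrt

def cornell_index(h, means, variances, eps=1e-6):
    """Approximate beta_C = h(m) / sqrt(sum_i h_i(m)^2 * sigma_i^2),
    with the partial derivatives h_i estimated by central differences."""
    m = list(means)
    numerator = h(*m)
    denom2 = 0.0
    for i, var in enumerate(variances):
        step = eps * (abs(m[i]) if m[i] != 0 else 1.0)
        up, down = list(m), list(m)
        up[i] += step
        down[i] -= step
        hi = (h(*up) - h(*down)) / (2 * step)   # partial derivative at the means
        denom2 += hi**2 * var
    return numerator / sqrt(denom2)

# Example use with Z = h(R, S) = R - S (arbitrary numbers):
print(cornell_index(lambda r, s: r - s, means=[5.0, 3.0], variances=[0.5, 0.3]))
# prints ~2.24, i.e. (5 - 3) / sqrt(0.5 + 0.3)
```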
Example - displacement of a beam

Suppose that for a beam in a structure the vertical displacement U must
be smaller than 1.5 mm. A formula from mechanics says that the vertical
displacement of the midpoint is

U = P L³ / (48 E I).

Estimate a safety index, i.e. compute βC = E[Z]/V[Z]^(1/2), where
Z = 1.5 · 10⁻³ − U. Obviously

E[Z] = 1.5 · 10⁻³ − E[U],    V[Z] = V[U].⁹

⁹ The data are: beam length L = 3 m; P is a random force applied at
the midpoint with E[P] = 25 000 N and D[P] = 5 000 N; the modulus of elasticity
E of a randomly chosen beam has E[E] = 2 · 10¹¹ Pa and D[E] = 3 · 10¹⁰ Pa;
all beams share the same second moment of (cross-section) area I = 1 · 10⁻⁴
m⁴. It seems reasonable to assume that P and E are uncorrelated.
Use of Gauss formulae

▶ Introducing h(P, E) = P L³/(48 E I) we have

      h1(P, E) = ∂h(P, E)/∂P = L³/(48 E I),    h2(P, E) = ∂h(P, E)/∂E = −P L³/(48 E² I).

▶ Employing the Gauss formulae,

      E[U] = E[P] L³/(48 E[E] I) = 25 000 · 3³ / (48 · 2 · 10¹¹ · 1 · 10⁻⁴) = 7.03 · 10⁻⁴ m,

      V[U] = V[P] h1(E[P], E[E])² + V[E] h2(E[P], E[E])²
           = 1.98 · 10⁻⁸ + 1.11 · 10⁻⁸ = 3.09 · 10⁻⁸ m².

▶ Since D[U] = 1.76 · 10⁻⁴ m, Cornell's index¹⁰ is

      βC = (1.5 · 10⁻³ − E[U])/D[U] = 4.53.

¹⁰ P(Z < 0) ≤ 1/(1 + βC²) = 0.046.
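A direct evaluation of the two Gauss-formula terms from the data in the footnote (a short check; variable names are my own):

```python
from math import sqrt

# Data from the footnote of the example.
L, I = 3.0, 1e-4                  # beam length [m], second moment of area [m^4]
mP, dP = 25_000.0, 5_000.0        # mean and std of the force P [N]
mE, dE = 2e11, 3e10               # mean and std of the elasticity modulus E [Pa]

h  = lambda P, E: P * L**3 / (48 * E * I)        # displacement U
h1 = lambda P, E: L**3 / (48 * E * I)            # dU/dP
h2 = lambda P, E: -P * L**3 / (48 * E**2 * I)    # dU/dE

EU = h(mP, mE)
VU = dP**2 * h1(mP, mE)**2 + dE**2 * h2(mP, mE)**2
betaC = (1.5e-3 - EU) / sqrt(VU)
print(f"E[U] = {EU:.2e} m,  D[U] = {sqrt(VU):.2e} m,  betaC = {betaC:.2f}")
```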
