Adaptive Relevance Vector Machine Combined With Markov-Chain-Based Importance Sampling For Reliability Analysis
A R T I C L E  I N F O

Keywords:
Relevance vector machine
Markov chain
Importance sampling
Surrogate model
Failure probability

A B S T R A C T

Many engineering systems involve complex implicit performance functions, and evaluating the failure probability of these systems usually requires time-consuming finite element simulations. In this research, a new reliability method is proposed by combining the relevance vector machine and Markov-chain-based importance sampling (RVM-MIS), which improves computational efficiency by decreasing the number of expensive model simulations. The relevance vector machine (RVM) is a machine learning method based on a probabilistic Bayesian learning framework. It is worth noting that RVM provides both the predicted value of a sample and the corresponding variance. Owing to this important feature, various active learning functions can be applied to improve the accuracy with which RVM approximates real performance functions. In addition, Markov-chain-based importance sampling (MIS) is utilized to generate important samples covering the areas that contribute significantly to the failure probability. The important samples are then predicted by a well-constructed RVM to obtain the failure probability, rather than being evaluated with the real performance function, so the computation time is drastically decreased. RVM-MIS reduces the number of calls to the real performance function while ensuring the accuracy of the results. Four academic examples and a bearing statics problem with an implicit performance function are performed to verify the accuracy and efficiency of the proposed method.
1. Introduction

In practical engineering systems, there are many uncertainties in geometric parameters, loads and material properties. The reliability analysis of engineering problems aims to estimate the failure probability under these uncertain factors [1]. In addition, many engineering problems involve complex performance functions, and evaluating the failure probability for these problems usually requires a large number of time-consuming finite element simulations, so obtaining an accurate result takes an unbearably long time. In order to improve computational efficiency, it is necessary to reduce the number of expensive model simulations. In recent years, various reliability analysis methods have been proposed to assess failure probability effectively and accurately, including approximation methods, sampling simulation methods, surrogate model methods, etc.

First order reliability method (FORM) [2] and second order reliability method (SORM) [3] are well-known approximation methods. However, the results obtained by these two methods have large errors when the performance function has multiple design points or high nonlinearity. Monte Carlo simulation (MCS) has been widely used as a reference method [4,5]. It is the most intuitive and accurate method, but its computational cost is prohibitively high. To overcome this drawback of MCS, many variance reduction methods have been developed to reduce the number of calls to real performance functions, such as quasi-Monte Carlo [6,7], line sampling (LS) [8,9], importance sampling (IS) [10,11], subset simulation (SS) [12,13], etc. However, the computational burden of these methods is still unacceptable in practical engineering problems.

Recently, surrogate model methods have received widespread attention as a way to further improve the computational efficiency of reliability analysis. Commonly used surrogate models include neural networks (NN) [14,15], the response surface method (RSM) [16,17], support vector machines (SVM) [18,19] and the Kriging model [20,21], which are applied to approximate the real performance function. The Kriging model is the most widely used among them, as it can provide prediction variance information. Due to this property, various active learning functions have been proposed to reduce the number of actual evaluations required to construct a high-precision Kriging model. Jones et al. [22] developed the
* Corresponding author.
E-mail address: [email protected] (B. Xie).
https://doi.org/10.1016/j.ress.2021.108287
Received 9 July 2021; Received in revised form 26 November 2021; Accepted 19 December 2021
Available online 22 December 2021
0951-8320/© 2021 Elsevier Ltd. All rights reserved.
Y. Wang et al. Reliability Engineering and System Safety 220 (2022) 108287
efficient global optimization (EGO) method, which applies the expected improvement function (EIF). Based on the EIF, Bichon et al. [23] further proposed the efficient global reliability analysis (EGRA) method for nonlinear implicit performance functions, which uses the expected feasibility function (EFF). Echard et al. [24] developed the well-known learning function U, which focuses on the probability that the Kriging model misclassifies the sign of the structural response. Based on information entropy, Lv et al. [25] proposed the H function, which describes the uncertainty of predictions. Sun et al. [26] developed the least improvement function (LIF), but its practical application is limited by the complexity of its expression. Zhang et al. [27] proposed the reliability-based expected improvement function (REIF) based on folded normal theory. Shi et al. [28] further developed the folded-normal-based expected improvement function (FNEIF) based on REIF, which assesses the contribution of a sample to improving the surrogate model's approximation of the performance function from the perspective of the folded normal distribution. In general, the learning function U and EFF are the most commonly applied, and there is no obvious advantage of one over the other in performance. Meanwhile, various variance reduction methods combined with active learning Kriging have been proposed to assess problems with small failure probabilities, such as AK-IS [29], AK-SS [30], etc. To evaluate system reliability with multiple failure modes, Fauriat and Gayton [31] proposed the AK-SYS method, which is a competitive method for structural systems involving time-consuming simulations. Yang et al. [32] developed a system reliability method based on the truncated candidate region, named AK-TCR.

Recently, the relevance vector machine (RVM), a machine learning method for classification and regression first proposed by Tipping [33], has offered a new tool for reliability analysis [34–36]. RVM, based on a probabilistic Bayesian learning framework, was proposed to overcome the drawbacks of SVM. It can derive an accurate prediction model with fewer kernel functions than a comparable SVM while providing many other advantages, such as automatic estimation of complexity parameters, the convenience of utilizing arbitrary kernel functions, good generalization performance and excellent sparsity [33,36]. Furthermore, RVM can realize probabilistic predictions. It provides the predicted value of samples as well as the corresponding variance, similar to the Kriging model, allowing RVM to incorporate existing active learning functions. RVM inherits the advantages of SVM and the Kriging model, so it is chosen as the surrogate model in this paper.

For assessing engineering reliability problems more accurately and efficiently, a new reliability method is proposed by combining the relevance vector machine and Markov-chain-based importance sampling (RVM-MIS). In this method, Markov-chain-based importance sampling (MIS) is first utilized to generate important samples covering the most likely failure areas. Then, an RVM is constructed with a small group of important samples and progressively updated utilizing the learning function U. Finally, the important samples are predicted by the well-constructed RVM surrogate model to obtain the failure probability. RVM-MIS inherits the ability of RVM to approximate the real limit state accurately and of MIS to populate important failure areas efficiently, resulting in a significant reduction in computational effort while ensuring the accuracy of the results.

The organization of this work is as follows. Section 2 introduces RVM briefly. Section 3 presents the principles of MIS. The details of RVM-MIS are described in Section 4. Four academic examples and a bearing statics problem with an implicit performance function are provided in Section 5 to illustrate the efficiency and accuracy of RVM-MIS. The last section is the conclusion.

2. Fundamental theory of relevance vector machine

RVM is a machine learning method first proposed by Tipping [33]. Denote the input vectors as {z_n}, n = 1, …, N, and the corresponding outputs as {t_n}, n = 1, …, N. RVM describes the relationship between inputs and outputs as follows [33]:

t_n = y(z_n; w) + η_n  (1)

where η_n ~ N(0, σ²) is a random noise component following a normal distribution with zero mean and variance σ², and y(z_n; w) is a linear combination of kernel functions that can be expressed as [33]:

y(z_n; w) = Σ_{l=1}^{N} w_l K(z_n, z_l) + w_0  (2)

where w = (w_0, w_1, w_2, …, w_N)^T is the weight vector and K(z_n, z_l) is a kernel function. Thus, based on Eq. (1), the conditional probability of the target variable t_n is a Gaussian distribution with mean y(z_n; w) and variance σ²:

p(t_n | z_n) ~ N(t_n | y(z_n; w), σ²)  (3)

Then, the likelihood of the output data t = (t_1, …, t_N)^T based on Eq. (3) can be written as:

p(t | w, σ²) = (2πσ²)^(−N/2) exp{ −‖t − Φw‖² / (2σ²) }  (4)

where Φ is the kernel matrix [33]:

Φ = [ 1  K(z_1, z_1)  K(z_1, z_2)  …  K(z_1, z_N)
      1  K(z_2, z_1)  K(z_2, z_2)  …  K(z_2, z_N)
      ⋮  ⋮            ⋮            ⋱  ⋮
      1  K(z_N, z_1)  K(z_N, z_2)  …  K(z_N, z_N) ]  (5)

Estimating w and σ² in Eq. (4) by maximum likelihood often suffers from overfitting. To avoid this, the automatic relevance determination Gaussian prior is introduced [33]:

p(w | α) = Π_{l=0}^{N} N(w_l | 0, α_l^{−1})  (6)

where α = (α_0, α_1, …, α_N)^T is a vector of hyperparameters. Utilizing Bayes' rule, the posterior distribution of w conditioned on the sampling data is given by Tipping [33]:

p(w | t, α, σ²) = p(t | w, σ²) p(w | α) / p(t | α, σ²)  (7)

Both terms p(t | w, σ²) and p(w | α) in Eq. (7) are Gaussian, and p(t | α, σ²) = ∫ p(t | w, σ²) p(w | α) dw is a convolution of Gaussians. Therefore, w follows the Gaussian distribution N(w | μ, Σ), with the posterior covariance and mean given by Tipping [33]:

Σ = (σ^{−2} Φ^T Φ + A)^{−1}  (8)

μ = σ^{−2} Σ Φ^T t  (9)

where A = diag(α_0, α_1, …, α_N). For uniform hyperpriors over α and σ², it is necessary to maximize the term p(t | α, σ²) in Eq. (7):

p(t | α, σ²) = ∫ p(t | w, σ²) p(w | α) dw
             = (2π)^{−N/2} |σ²I + ΦA^{−1}Φ^T|^{−1/2} exp{ −(1/2) t^T (σ²I + ΦA^{−1}Φ^T)^{−1} t }  (10)

The maximization of Eq. (10) is known as the type-II maximum likelihood method [37]. After obtaining the optimal hyperparameter values α_MP and σ²_MP, the predictive distribution over t* for a new input vector z* can be calculated by [33]:

p(t* | t, α_MP, σ²_MP) = N(t* | y*, σ*²)  (11)

y* = φ(z*)^T μ  (12)
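The posterior and predictive equations above reduce to a few linear-algebra operations once the hyperparameters are fixed. The following numpy sketch assumes a Gaussian RBF kernel (the paper allows arbitrary kernels) and assumes the predictive variance σ*² = σ²_MP + φ(z*)^T Σ φ(z*) from Tipping [33], since the corresponding equation is cut off in this excerpt; all function names are illustrative, not from the paper.

```python
import numpy as np

def rbf_kernel(Za, Zb, width=1.0):
    # Gaussian RBF kernel K(z_a, z_b); an illustrative choice of kernel.
    d2 = ((Za[:, None, :] - Zb[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def design_matrix(Z, width=1.0):
    # Kernel matrix Phi of Eq. (5): a leading column of ones for the bias w0,
    # followed by the columns K(z_n, z_l).
    return np.hstack([np.ones((Z.shape[0], 1)), rbf_kernel(Z, Z, width)])

def posterior(Phi, t, alpha, sigma2):
    # Eqs. (8)-(9): Sigma = (sigma^-2 Phi^T Phi + A)^-1, mu = sigma^-2 Sigma Phi^T t.
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
    mu = Sigma @ Phi.T @ t / sigma2
    return mu, Sigma

def predict(Z_star, Z, mu, Sigma, sigma2, width=1.0):
    # Eqs. (11)-(12): predictive mean y* = phi(z*)^T mu; the predictive
    # variance sigma2 + phi^T Sigma phi follows Tipping [33] (assumed here,
    # as that equation falls outside this excerpt).
    phi = np.hstack([np.ones((Z_star.shape[0], 1)), rbf_kernel(Z_star, Z, width)])
    y_star = phi @ mu
    var_star = sigma2 + np.einsum('ij,jk,ik->i', phi, Sigma, phi)
    return y_star, var_star
```

The per-sample variance `var_star` is exactly the quantity that active learning functions such as U exploit later in the paper.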
where h_X(x) denotes the importance sampling density (ISD) function. The effectiveness of importance sampling depends on the correct choice of the ISD. The theoretically optimal ISD can be expressed as [38,39]:

h_o(x) = f(x | F) = I_F(x) f_X(x) / P_f  (17)

However, h_o(x) is not available, as it requires prior knowledge of the failure probability.

3.2. Markov-chain-based importance sampling

A popular choice for constructing the ISD is to apply the Markov chain Metropolis algorithm (MCMA) [40]. Au and Beck [38] first utilized MCMA to adaptively generate a scheme of important samples, which does not need to determine design points. Yuan et al. [39] modified Au's method and proposed another Markov-chain-based importance sampling (MIS).

In this paper, the process of utilizing the modified MCMA to produce important samples covering the most likely failure area can be briefly described as follows:

1 An initial sample x_1, as long as it lies in the failure region, is selected to start MCMA, and it is also considered as the first importance sampling center.
2 Based on the jth importance sampling center x_j, the distribution f(τ | x_j) is used to generate K samples [x_j^(1), x_j^(2), …, x_j^(K)], where f(τ | x_j) can be expressed as [39]:

f(τ | x_j) = Π_{s=1}^{m} (1 / (√(2π) σ_s)) exp[ −(τ_s − x_js)² / (2σ_s²) ]  (18)

3 x_j^(r) is randomly selected from the K samples, and the ratio r is computed by the following formula [39]:

r = ( I_F(x_j^(r)) f_X(x_j^(r)) ) / ( I_F(x_j) f_X(x_j) )  (19)

where β_i = 1/M is the weighting coefficient. Combining Eqs. (16) and (20), we can obtain:

P_f = ∫_{R^n} I_F(x) (f_X(x) / h_X(x)) Σ_{i=1}^{M} β_i h_i(x) dx = Σ_{i=1}^{M} β_i E_hi[ I_F(x) f_X(x) / h_X(x) ]  (21)

where E_hi[·] represents the expectation with respect to h_i(x).

Based on Eq. (21), the unbiased estimate of P_f can be calculated by:

P̂_f = Σ_{i=1}^{M} β_i (1/H) Σ_{k=1}^{H} I_F(x_k^i) f_X(x_k^i) / h_X(x_k^i)  (22)

where x_k^i denotes the kth sample generated based on the ISD h_i(x). Let:

P̂_fhi = (1/H) Σ_{k=1}^{H} I_F(x_k^i) f_X(x_k^i) / h_X(x_k^i)  (23)

Combining Eqs. (22) and (23):

P̂_f = Σ_{i=1}^{M} β_i P̂_fhi  (24)

The variance of P_f cannot be derived analytically. However, it can be approximated by [40]:

Var(P̂_f) ≈ Σ_{i=1}^{M} β_i² Var[ (1/H) Σ_{k=1}^{H} I_F(x_k^i) f_X(x_k^i) / h_X(x_k^i) ] = Σ_{i=1}^{M} β_i² Var(P̂_fhi)  (25)

where:

Var(P̂_fhi) ≈ (1 / (H − 1)) { (1/H) Σ_{k=1}^{H} [ I_F(x_k^i) f_X(x_k^i) / h_X(x_k^i) ]² − P̂_fhi² }  (26)

Finally, COV_Pf can be estimated by:

COV(P̂_f) ≈ √( Var(P̂_f) ) / P̂_f  (27)
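Given indicator values and density evaluations for the M groups of important samples, Eqs. (22)–(27) reduce to a few array operations. A minimal numpy sketch (the function name is illustrative; β_i = 1/M, as in the text):

```python
import numpy as np

def mis_estimate(I_F, fX, hX):
    # I_F, fX, hX: arrays of shape (M, H) -- M importance densities (one per
    # sampling center) with H samples each; beta_i = 1/M, as in the text.
    M, H = I_F.shape
    w = I_F * fX / hX                    # I_F(x_k^i) f_X(x_k^i) / h_X(x_k^i)
    P_hi = w.mean(axis=1)                # Eq. (23): per-density estimates
    P_f = P_hi.mean()                    # Eq. (24) with beta_i = 1/M
    var_hi = (np.mean(w ** 2, axis=1) - P_hi ** 2) / (H - 1)   # Eq. (26)
    var_Pf = var_hi.sum() / M ** 2       # Eq. (25): sum of beta_i^2 * Var
    cov_Pf = np.sqrt(var_Pf) / P_f       # Eq. (27)
    return P_f, cov_Pf
```

In RVM-MIS, the indicator values I_F would come from the sign of the surrogate's predicted mean rather than from the real performance function.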
2 Build the RVM with M0 sampling centers. As mentioned above, all accepted and rejected sampling centers must be evaluated with the real performance function during the generation of important samples. Therefore, these real evaluations are regarded as the initial DoE for building the RVM.
3 Make predictions for each important sample. The predicted mean and variance of each important sample can be obtained from the constructed RVM, which is essential for implementing the active learning process.
4 Identify the update sample. The U value of each important sample can be obtained by Eq. (29). The sample corresponding to the smallest U value is regarded as the update sample xu. If the U value of xu satisfies the stopping condition, RVM-MIS goes to step 6; otherwise it goes to step 5.
5 Update the initial DoE and rebuild the RVM. If the stopping condition is not satisfied, the preceding RVM is deemed insufficiently accurate to approximate the real performance function. Therefore, the original DoE should be revised. The update sample xu is assessed with the true performance function. Then, the evaluation G(xu) and the sample xu are added to the initial DoE. Following that, RVM-MIS returns to step 2. Therefore, a more accurate RVM than the previous one can be built with the updated DoE.
6 Assess the failure probability and coefficient of variation. If the stopping condition is met, the current RVM is considered sufficiently accurate. Then the failure probability (Pf) and the coefficient of variation (COVPf) of Pf can be estimated utilizing the theory presented in Section 3.3.
7 Increase the number of important samples. If the obtained COVPf is higher than 0.05, the present number of important samples is considered too small to yield an accurate result. Another set of important samples is then generated via MCMA. After the number of important samples has increased, RVM-MIS returns to step 3.
8 End RVM-MIS. If COVPf is lower than 0.05, RVM-MIS is terminated and the latest assessment of Pf is regarded as the final result of the problem.

4.3. Comparison with other existing methods

The main contribution of this paper is to propose a novel active learning RVM combined with MIS to efficiently and accurately estimate the failure probability. In addition, RVM can easily be combined with other existing active learning functions and variance reduction methods. The improvements over existing methods are summarized as follows.

Compared with the SVM-based method [36], RVM inherits the merits of the Gaussian process, providing a predicted variance similar to the Kriging model. In addition, RVM shows greater sparsity than SVM: it can derive an accurate prediction model with fewer kernel functions than a comparable SVM. As a result, RVM can acquire sufficient valuable information from fewer relevant vectors due to its excellent sparsity.

In comparison to AK-MCS [24], the proposed method applies MIS to populate important failure areas. It can obtain sufficient failure information with fewer samples than AK-MCS. Therefore, the proposed method is effective in dealing with small failure probability problems.

In contrast to AK-IS [29], the proposed method utilizes MCMA to adaptively generate important samples, which avoids the identification of design points. Thus, it is very suitable for problems with multiple design points and non-linear problems.

5. Academic verification of the proposed method

The efficiency and accuracy of RVM-MIS are verified by four representative numerical examples and a bearing statics problem with an implicit performance function. These examples are also analyzed utilizing some other mainstream reliability methods, including MCS, IS, AK-MCS, AK-IS and AK-MIS. The results of MCS serve as the benchmark for comparing accuracy. In addition, RVM-MIS and AK-MIS employ the same initial important samples, so the results obtained can reflect the performance of RVM.

5.1. Example 1: high-order problem

The first example is a high-order problem with the performance function given by [29]:

G(d1, d2) = (d1 − 2)²/2 − 3(d2 − 5)³/2 − 3  (32)

where d1 and d2 are independent standard normal variables. The results of the compared methods are summarized in Table 1, given in terms of the total number of samples Ns, the number of calls to the real performance function Ncall, the failure probability Pf, the coefficient of variation of the failure probability COVPf and the failure probability error percentage ε relative to the benchmark. ε and Ncall are two significant indicators that can be utilized to assess the accuracy and efficiency of the methods, respectively. For Ncall of AK-MCS, "10+24" denotes 10 initial samples and 24 samples added in the active learning process. For AK-IS, "21+13" denotes 21 samples in the FORM approximation and 13 added samples. For RVM-MIS, "10+16+7" denotes 10 importance sampling centers, 16 rejected sampling centers and 7 added samples.

Table 1
Results of Example 1.

Method  | Ns      | Ncall    | Pf           | COVPf (%) | ε (%)
MCS     | 5 × 10⁷ | 5 × 10⁷  | 2.798 × 10⁻⁵ | 2.674     | –
IS      | 1 × 10⁴ | 1 × 10⁴  | 2.819 × 10⁻⁵ | 2.429     | 0.751
AK-MCS  | 5 × 10⁷ | 10+24    | 2.801 × 10⁻⁵ | 2.672     | 0.107
AK-IS   | 1 × 10⁴ | 21+13    | 2.815 × 10⁻⁵ | 2.426     | 0.608
AK-MIS  | 1 × 10⁴ | 10+16+10 | 2.809 × 10⁻⁵ | 2.513     | 0.390
RVM-MIS | 1 × 10⁴ | 10+16+7  | 2.805 × 10⁻⁵ | 2.507     | 0.250

As can be seen from Table 1, RVM-MIS achieves a considerable gain in computing efficiency while also providing an extremely accurate failure probability estimate, second only to AK-MCS. The Ns and Ncall of RVM-MIS are much lower than those of MCS; only 33 samples call the true performance function. In addition, no further computation is required to determine the design point, so the computational effort of RVM-MIS is lower than that of IS and AK-IS. Compared with AK-MCS, the error is slightly larger. However, due to the large Ns, the computational time for AK-MCS to select the update sample is very long: it takes 2716 s for AK-MCS to solve this problem, while RVM-MIS takes only 47 s. Hence, RVM-MIS still shows its advantages in comparison with AK-MCS. Since RVM-MIS and AK-MIS employ the same important samples and initial DoE, their Ns and sampling centers are the same. The active learning process of RVM-MIS adds only 7 samples, while AK-MIS adds 10 samples. The failure probability estimated by RVM-MIS is slightly more accurate than that of AK-MIS, which indicates that the performance of RVM is superior to the Kriging model. The actual performance function values and the values predicted by RVM-MIS and AK-MIS for 25 important samples are shown in Fig. 2. It can be seen that the value predicted by RVM-MIS is closer to the actual value in most cases. The predicted sign of each sample by RVM-MIS is consistent with the actual sign, whereas AK-MIS has one predicted sign error, which corroborates the finding that the failure probability estimated by RVM-MIS is more accurate than that of AK-MIS.

In order to show the results visually, Figs. 3 and 4 depict the analysis results of AK-MCS and RVM-MIS for this example, respectively. As shown in Fig. 3, the samples added in the active learning process are mainly located close to the true limit state. The limit state predicted by AK-MCS is inaccurate, because the regions with a low probability density are populated by very few samples and cannot be estimated correctly. The importance sampling centers, the rejected sampling centers, the
Fig. 2. The actual performance function values and predicted values for Example 1.
important samples, the added samples and the limit state predicted by RVM-MIS are displayed in Fig. 4. It can be seen that the 10 importance sampling centers are all located in the failure area. The positions of the samples added in the active learning process are all close to the real limit state. In addition, the important samples generated by MIS cover the failure area quite well. As a result, the limit state predicted by RVM-MIS is very accurate and almost consistent with the true limit state.

Sparsity is one of the most outstanding features of RVM (see Appendix A). In addition, RVM is considered to be more sparse than SVM, as it can derive accurate prediction results with fewer kernel functions. To verify this conclusion, RVM and SVM are built on the same training samples that were evaluated in RVM-MIS. The results are shown in Fig. 5. It can be seen that the limit states predicted by RVM and SVM are almost the same and very accurate. However, the number of relevant vectors for RVM is 7, while the number of support vectors for SVM is 16. This indicates that the constructed SVM uses 16 kernel functions to obtain its prediction results, while RVM uses only 7 kernel functions, verifying that the sparsity of RVM is much better.

5.2. Example 2: four branch series system

The second example is a four branch series system [24,41], which consists of four component performance functions:
where x1 and x2 are independent standard normal variables. Different values of a and b give different levels of failure probability. The failure probability at the 10⁻³ level is analyzed first, with a = 3 and b = 7. The results are summarized in Table 2.

RVM-MIS shows high accuracy, as the obtained failure probability is very close to the baseline values. With a much lower Ncall, it outperforms MCS and IS in terms of efficiency. In comparison to AK-MCS, the Ncall is slightly larger. Considering that AK-MCS is not suitable for small failure probabilities, as shown in Example 1, RVM-MIS still shows its competitiveness. As for AK-MIS, its Ncall is 6 times more than that of RVM-MIS, and the obtained failure probability is slightly less accurate, indicating that the accuracy of the constructed RVM is higher than that of the Kriging model. The actual performance function values and the values predicted by RVM-MIS and AK-MIS for this example are presented in Fig. 6. Similar to the results in Example 1, there is no predicted sign error in RVM-MIS, and most of the predicted results of RVM-MIS are closer to the real values. Therefore, the RVM built with the process described in this paper is extremely accurate.

Fig. 7 shows the results of utilizing MIS to generate important samples. In Fig. 7, the important samples of each component performance function are generated based on 10 importance sampling centers, which can effectively cover the failure region of the corresponding component performance function. Since the system consists of four component

5.3. Example 3: nonlinear oscillation system

The next example is a nonlinear oscillation system with the structure diagram given in Fig. 11. The performance function is expressed as [28,29,42]:

G(C1, C2, m, R, T1, F1) = 3R − | (2F1 / (mρ²)) sin(ρT1 / 2) |  (34)

where ρ = √((C1 + C2)/m). The statistical characteristics and distributions of the six independent variables in the performance function are given in Table 4. The analysis results are shown in Table 5.

The results of AK-MCS are reproduced from Ref. [42] and are the most accurate of the methods in Table 5. However, due to the large Ns, the computational time of AK-MCS is about 35 h [42], which is very time-consuming compared to the other methods. Although the traditional IS has a large Ncall, the obtained failure probability is not accurate enough compared to the other methods. RVM-MIS still maintains excellent performance: it has the smallest Ncall and provides a fairly accurate failure probability, whereas AK-MIS, which demands three more Ncall to meet the stopping criterion, is less accurate. This example shows that RVM-MIS is suitable for dealing with a nonlinear oscillation system problem with a small failure probability.
Fig. 6. The actual performance function values and predicted values for Example 2.
Fig. 7. Utilizing MCMA to generate important samples.

are studied in this example. The results of these two cases are summarized in Table 6 and Table 7, respectively.

The results obtained by IS have a large error, which is consistent with the fact that IS is not suitable for dealing with high-dimensional problems. RVM-MIS significantly reduces the number of calls to the real performance function while ensuring the accuracy of the result. It performs extremely well in terms of efficiency and accuracy under the two different values of q, which shows that increasing the number of variables has little influence on the performance of RVM-MIS. RVM-MIS is superior to the other methods listed in the tables, gives satisfactory results and is suitable for dealing with high-dimensional problems.

5.5. Example 5: bearing statics problem

A bearing statics problem with an implicit performance function is taken as the final example. Deep groove ball bearing 6205 is chosen for the case study. The bearing is subjected to a radial force Fr, and the structure diagram is depicted in Fig. 12.

As can be seen from Fig. 13, a finite element model of the bearing is established in ANSYS to obtain the maximum stress σm. The statistical characteristics and distributions of the independent variables related to the maximum stress σm are listed in Table 8. The maximum allowable stress σa of the bearing is set as 4200 MPa. Based on the stress-strength model, failure occurs once the maximum stress σm of the bearing exceeds the maximum allowable stress σa. Therefore, the
Table 3
Results of Example 2 for a = 5, b = 9.

Method  | Ns      | Ncall    | Pf           | COVPf (%) | ε (%)
MCS     | 1 × 10⁸ | 1 × 10⁸  | 7.400 × 10⁻⁶ | 3.676     | –
IS      | 1 × 10⁴ | 1 × 10⁴  | 7.319 × 10⁻⁶ | 3.623     | 1.095
AK-MIS  | 4 × 10⁴ | 40+98+82 | 7.443 × 10⁻⁶ | 3.546     | 0.581
RVM-MIS | 4 × 10⁴ | 40+98+78 | 7.431 × 10⁻⁶ | 3.539     | 0.419

Table 4
Random variables of Example 3.

Variable | Distribution | Mean | Standard deviation
m        | Normal       | 1    | 0.05
C1       | Normal       | 1    | 0.1
C2       | Normal       | 0.1  | 0.01
R        | Normal       | 0.5  | 0.05
T1       | Normal       | 1    | 0.2
F1       | Normal       | 0.6  | 0.1

Fig. 9. Analysis results of RVM-MIS method for Example 2.

Table 5
Results of Example 3.

Method | Ns | Ncall | Pf | COVPf (%) | ε (%)

Table 6
Results of Example 4 for q = 40.

Method  | Ns        | Ncall    | Pf           | COVPf (%) | ε (%)
MCS     | 3 × 10⁵   | 3 × 10⁵  | 1.820 × 10⁻³ | 4.276     | –
IS      | 6 × 10³   | 6 × 10³  | 1.978 × 10⁻³ | 4.946     | 8.681
AK-MCS  | 2.5 × 10⁵ | 30+78    | 1.836 × 10⁻³ | 4.663     | 0.879
AK-MIS  | 1 × 10⁴   | 20+37+36 | 1.841 × 10⁻³ | 4.724     | 1.154
RVM-MIS | 1 × 10⁴   | 20+37+28 | 1.832 × 10⁻³ | 4.672     | 0.659

Fig. 10. Comparison of the sparsity between RVM and SVM for Example 2.

Table 7
Results of Example 4 for q = 100.

Method  | Ns        | Ncall    | Pf           | COVPf (%) | ε (%)
MCS     | 3 × 10⁵   | 3 × 10⁵  | 1.650 × 10⁻³ | 4.491     | –
IS      | 6 × 10³   | 6 × 10³  | 1.794 × 10⁻³ | 4.915     | 8.727
AK-MCS  | 2.5 × 10⁵ | 30+121   | 1.668 × 10⁻³ | 4.893     | 1.091
AK-MIS  | 1 × 10⁴   | 20+43+33 | 1.637 × 10⁻³ | 4.816     | 0.788
RVM-MIS | 1 × 10⁴   | 20+43+30 | 1.661 × 10⁻³ | 4.784     | 0.667
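The adaptive procedure of Section 4 (steps 2–8) that produced the results above can be condensed into a generic sketch. Since Eq. (29) is not reproduced in this excerpt, the learning function U of Echard et al. [24], U(x) = |μ(x)|/σ(x) with the usual stopping condition min U ≥ 2, is assumed here; `fit` and `predict` stand in for any probabilistic surrogate (RVM in this paper), and all names, including the initial DoE size, are illustrative.

```python
import numpy as np

def active_learning_loop(samples, g_true, fit, predict,
                         u_stop=2.0, n_init=10, max_iter=200):
    # samples: candidate important samples; g_true: the expensive performance
    # function; fit(X, y) -> model; predict(model, X) -> (mean, std).
    X = samples[:n_init].copy()            # illustrative initial DoE
    y = np.array([g_true(x) for x in X])
    for _ in range(max_iter):
        model = fit(X, y)                  # steps 2/5: (re)build the surrogate
        mean, std = predict(model, samples)  # step 3: predict important samples
        U = np.abs(mean) / np.maximum(std, 1e-12)  # assumed U = |mu|/sigma [24]
        k = int(np.argmin(U))              # step 4: update sample x_u
        if U[k] >= u_stop:                 # stopping condition met -> step 6
            break
        X = np.vstack([X, samples[k]])     # step 5: enrich DoE with G(x_u)
        y = np.append(y, g_true(samples[k]))
    return model, mean
```

At termination, the predicted means (their signs, specifically) feed the MIS estimator of Section 3 instead of the real performance function.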
6. Conclusions
Appendix A

Sparsity is one of the most outstanding features of RVM, and the derivation process is as follows.

The values of α and σ² which maximize Eq. (10) cannot be obtained in closed form. However, they can be solved by an iterative re-estimation scheme as follows:

α_l^new = (1 − α_l Σ_ll) / μ_l²  (A.1)

(σ²)^new = ‖t − Φμ‖² / (N − δ)  (A.2)

where Σ_ll is the lth diagonal element of the posterior weight covariance in Eq. (8), μ_l is the lth posterior mean from Eq. (9), and δ can be calculated by the following formula:

δ = tr(ΣΦ^T Φ) / σ²  (A.3)

The iterative process proceeds by repeated application of Eqs. (A.1) and (A.2), concurrent with updating of the posterior statistics Σ and μ from Eqs. (8) and (9), until a convergence criterion is satisfied; the values of α and σ² at termination are denoted as the optimal hyperparameter values α_MP and σ²_MP.

In practice, during re-estimation it is found that many of the α_l (l = 0, 1, …, N) values tend to infinity. From Eq. (6), this implies that the weights w_l corresponding to these α_l have posterior distributions with mean and variance both zero. Therefore, based on Eq. (2), those weights and their corresponding kernel functions play no role in RVM and can be pruned from the model, and thus the sparsity of RVM is achieved. Furthermore, the inputs corresponding to the remaining nonzero weights are called relevant vectors, which is similar to the determination of support vectors in SVM.
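A minimal numpy sketch of this re-estimation scheme, combining Eqs. (A.1)–(A.3) with the posterior updates of Eqs. (8)–(9). The clipping constants and the pruning threshold are illustrative numerical safeguards, not part of the paper.

```python
import numpy as np

def reestimate(Phi, t, alpha, sigma2, n_iter=100, prune_at=1e6):
    # Iterates Eqs. (A.1)-(A.2), refreshing the posterior of Eqs. (8)-(9)
    # on each pass; alphas that diverge mark weights (and kernel functions)
    # that can be pruned, which is how the sparsity of RVM arises.
    N = len(t)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))  # Eq. (8)
        mu = Sigma @ Phi.T @ t / sigma2                               # Eq. (9)
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 0.0, 1.0)  # 1 - alpha_l Sigma_ll
        alpha = np.clip(gamma / np.maximum(mu ** 2, 1e-300), 1e-12, 1e12)  # Eq. (A.1)
        delta = gamma.sum()          # equals tr(Sigma Phi^T Phi)/sigma^2, Eq. (A.3)
        sigma2 = max(np.sum((t - Phi @ mu) ** 2) / max(N - delta, 1e-12),
                     1e-10)                                           # Eq. (A.2)
    keep = alpha < prune_at          # surviving weights: the relevant vectors
    return mu, alpha, sigma2, keep
```

The boolean mask `keep` identifies the relevant vectors; in the paper's Example 1, this pruning left only 7 of the kernel functions active.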
References

[1] Wang J, Sun ZL, Cao RN. An efficient and robust Kriging-based method for system reliability analysis. Reliab Eng Syst Saf 2021;216:107953.
[2] Hohenbichler M, Rackwitz R. First-order concepts in system reliability. Struct Saf 1982;1:177–88.
[3] Castillo E, Sarabia JM, Solares C, et al. Uncertainty analyses in fault trees and Bayesian networks using FORM/SORM methods. Reliab Eng Syst Saf 1999;65:29–40.
[4] Metropolis N. The beginning of the Monte Carlo method. Los Alamos Sci 1987;15:125–30.
[5] Kaya GK, Ozturk F, Sariguzel EE. System-based risk analysis in a tram operating system: integrating Monte Carlo simulation with the functional resonance analysis method. Reliab Eng Syst Saf 2021;215:107835.
[6] Ökten G, Liu YN. Randomized quasi-Monte Carlo methods in global sensitivity analysis. Reliab Eng Syst Saf 2021;210:107520.
[7] Liu XH, Zheng SS, Wu XX, et al. Research on a seismic connectivity reliability model of power systems based on the quasi-Monte Carlo method. Reliab Eng Syst Saf 2021;215:107888.
[8] Pradlwarter H, Schuller G, Koutsourelakis P, Charmpis D. Application of line sampling simulation method to reliability benchmark problems. Struct Saf 2007;29:208–21.
[9] Valdebenito MA, Wei PF, Song JW, et al. Failure probability estimation of a class of series systems by multidomain line sampling. Reliab Eng Syst Saf 2021;213:107673.
[10] Au SK, Beck JL. Important sampling in high dimensions. Struct Saf 2003;25:139–63.
[11] Wang C, Xie HP, Bie ZH, et al. Fast supply reliability evaluation of integrated power-gas system based on stochastic capacity network model and importance sampling. Reliab Eng Syst Saf 2021;213:107452.
[12] Au SK, Beck JL. Estimation of small failure probabilities in high dimensions by subset simulation. Probab Eng Mech 2001;16:263–77.
[13] Zhang JH, Xiao M, Gao L. An active learning reliability method combining Kriging constructed with exploration and exploitation of failure region and subset simulation. Reliab Eng Syst Saf 2019;188:90–102.
[14] Gomes HM, Awruch AM. Comparison of response surface and neural network with other methods for structural reliability analysis. Struct Saf 2004;26:49–67.
[15] Bao YQ, Xiang ZL, Li H. Adaptive subset searching-based deep neural network method for structural reliability analysis. Reliab Eng Syst Saf 2021;213:107778.
[16] Rajashekhar MR, Ellingwood BR. A new look at the response surface approach for reliability analysis. Struct Saf 1993;12:205–20.
[17] He JJ, Huang M, Wang W, et al. An asymptotic stochastic response surface approach to reliability assessment under multi-source heterogeneous uncertainties. Reliab Eng Syst Saf 2021;215:107804.
[18] Rocco CM, Moreno JA. Fast Monte Carlo reliability evaluation using support vector machine. Reliab Eng Syst Saf 2002;76:237–43.
[19] Lee S. Monte Carlo simulation using support vector machine and kernel density for failure probability estimation. Reliab Eng Syst Saf 2021;209:107481.
[20] Matheron G. Principles of geostatistics. Econ Geol 1963;58:1246–66.
[21] Yang M, Zhang DQ, Jiang C, et al. A hybrid adaptive Kriging-based single loop approach for complex reliability-based design optimization problems. Reliab Eng Syst Saf 2021;215:107736.
[22] Jones DR, Schonlau M, Welch WJ. Efficient global optimization of expensive black-box functions. J Glob Optim 1998;13:455–92.
[23] Bichon BJ, Eldred MS, Swiler LP, et al. Efficient global reliability analysis for nonlinear implicit performance functions. AIAA J 2008;46:2459–68.
[24] Echard B, Gayton N, Lemaire M. AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Struct Saf 2011;33:145–54.
[25] Lv Z, Lu Z, Wang P. A new learning function for Kriging and its applications to solve reliability problems in engineering. Comput Math Appl 2015;70:1182–97.
[26] Sun Z, Wang J, Li R, Tong C. LIF: a new Kriging based learning function and its application to structural reliability analysis. Reliab Eng Syst Saf 2017;157:152–65.
[27] Zhang XF, Wang L, Sørensen JD. REIF: a novel active-learning function toward adaptive Kriging surrogate models for structural reliability analysis. Reliab Eng Syst Saf 2019;185:440–54.
[28] Shi Y, Lu ZZ, He RY, et al. A novel learning function based on Kriging for reliability analysis. Reliab Eng Syst Saf 2020;198:106857.
[29] Echard B, Gayton N, Lemaire M, Relun N. A combined importance sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models. Reliab Eng Syst Saf 2013;111:232–40.
[30] Huang X, Chen J, Zhu H. Assessing small failure probabilities by AK-SS: an active learning method combining Kriging and subset simulation. Struct Saf 2016;59:86–95.
[31] Fauriat W, Gayton N. AK-SYS: an adaptation of the AK-MCS method for system reliability. Reliab Eng Syst Saf 2014;123:137–44.
[32] Yang X, Liu Y, Mi C, Tang C. System reliability analysis through active learning Kriging model with truncated candidate region. Reliab Eng Syst Saf 2018;169:235–41.
[33] Tipping ME. Sparse Bayesian learning and the relevance vector machine. J Mach Learn Res 2001;1:211–44.
[34] Pijush S, Tim L, Dookie KU. Relevance vector machine for slope reliability analysis. Appl Soft Comput 2011;11:4036–40.
[35] Zhou CC, Lu ZZ, Zhang F, Yue ZF. An adaptive reliability method combining relevance vector machine and importance sampling. Struct Multidiscip Optim 2015;52:945–57.
[36] Li TZ, Pan Q, Dias D. Active learning relevant vector machine for reliability analysis. Appl Math Model 2021;89:381–99.
[37] Berger JO. Statistical decision theory and Bayesian analysis. Springer Science and Business Media; 1985.
[38] Au SK, Beck JL. A new adaptive importance sampling scheme for reliability calculations. Struct Saf 1999;21:135–58.
[39] Yuan XK, Lu ZZ, Zhou CC, Yue ZF. A novel adaptive importance sampling algorithm based on Markov chain and low-discrepancy sequence. Aerosp Sci Technol 2013;19:253–61.
[40] Zhao HL, Yue ZF, Liu YS, Gao ZZ, Zhang YS. An efficient reliability method combining adaptive importance sampling and Kriging metamodel. Appl Math Model 2015;39:1853–66.
[41] Xiong YF, Sampath S. A fast-convergence algorithm for reliability analysis based on the AK-MCS. Reliab Eng Syst Saf 2021;213:107693.
[42] Yun WY, Lu ZZ, Jiang X, et al. AK-ARBIS: an improved AK-MCS based on the adaptive radial-based importance sampling for small failure probability. Struct Saf 2020;82:101891.