An Adaptive Algorithm To Build Up Sparse Polynomial Chaos Expansions For Stochastic FEM
the method has been extended to models featuring particular types of nonlinearities, see e.g. [3–5].

As an alternative, non-intrusive computational schemes have emerged recently in stochastic finite element analysis. These methods allow the analyst to compute the stochastic model response by means of a set of calls to the existing deterministic model, i.e. without modifying the underlying computer code. Two approaches based on a PC representation of the response are usually distinguished:

- the projection approach: each PC coefficient is recast as a multidimensional integral [6,7] which can be computed either by simulation or quadrature;
- the regression approach [8–11]: the PC coefficients are estimated by minimizing the mean-square error of the response approximation in the L2 sense.

Note that a non-intrusive scheme based on Lagrange interpolation of the model response, namely the stochastic collocation method, was also proposed in the past few years as an efficient alternative to SSFEM [12,13].

However, the required number of model evaluations (i.e. the computational cost) increases with the size of the truncated PC expansion, which itself dramatically increases with the number of input variables, whatever the applied computational method, be it intrusive or non-intrusive. Several attempts to downsize the stochastic problems under consideration may be found in the literature. On the one hand, the dimension-adaptive tensor product quadrature technique outlined in [14] was applied to the stochastic collocation scheme in [15]. The method is intended to reduce the number of calls to the model when evaluating multivariate integrals by quadrature. On the other hand, various authors detailed generalized decomposition schemes of the random response of standard elliptic models [16,17]. The main stochastic features of the response are then captured by a reduced number of basis functions compared to the classical PC expansions. An extension to problems featuring some nonlinearity is described in [5].

In contrast to most of the literature, the present paper focuses on a methodology that can be applied to the uncertainty and reliability analyses of industrial systems, whatever the type of the governing equations. A non-intrusive strategy is adopted so that a broad class of industrial problems (either linear or nonlinear, and not necessarily elliptic) can be tackled. It is therefore crucial to minimize the number of model evaluations, which may be time consuming. For this purpose, one presents a stepwise regression technique that allows one to build up a sparse PC expansion, i.e. one in which only a small number of significant basis functions are retained in the response PC approximation. It should be noted that a similar stepwise scheme was recently proposed in [8], in which classical statistical tests are adopted as criteria for accepting or rejecting the candidate basis functions. It was implicitly assumed there that the error of approximating the response by the PC expansion is random conditionally on the input variables. As only deterministic models are of interest herein, selection criteria based on estimates of the deterministic model deviation have been preferred in the present paper. In addition, a sequential sampling strategy [18] is used herein in order to ensure the well-posedness of the various regression problems.

The remainder of this paper is organized as follows. In Section 2 one presents a general framework for the chaos representation of the random response. Section 3 proposes two estimates of the approximation error of the model response, namely the empirical error and the leave-one-out error [19]. An iterative procedure for progressively selecting the significant terms in the response PC expansion is described in Section 4, and the Nested Latin Hypercube (NLHS) scheme for generating sequential experimental designs is also outlined. The computational gain provided by the adaptive sparse PC expansions is illustrated in Section 5 by numerical studies, i.e. the analysis of an analytical function and two stochastic finite element models featuring 10 and 21 input random variables, respectively.

2. Polynomial chaos approximation of the response of a model with random input parameters

2.1. Introduction

Let us consider a physical model represented by a deterministic mapping y = M(x). Here x = {x1, ..., xM}^T ∈ R^M, M ≥ 1, is the vector of the input variables, and y = {y1, ..., yQ}^T ∈ R^Q, Q ≥ 1, is the vector of quantities of interest provided by the model, referred to as the model response in the sequel. As the input vector x is assumed to be affected by uncertainty, a probabilistic framework is now introduced.

Let (Ω, F, P) be a probability space, where Ω is the event space equipped with σ-algebra F and probability measure P. Random variables are denoted by upper case letters X(ω) : Ω → D_X ⊂ R, while their realizations are denoted by the corresponding lower case letters, e.g. x. Moreover, bold upper and lower case letters are used to denote random vectors (e.g. X = {X1, ..., XM}^T) and their realizations (e.g. x = {x1, ..., xM}^T), respectively.

Let us denote by L2(Ω, F, P) the space of random variables X with finite second moments:

E[X²] = ∫_Ω X²(ω) dP(ω) = ∫_{D_X} x² f_X(x) dx < +∞    (1)

where E[·] denotes the mathematical expectation operator and f_X (resp. D_X) represents the probability density function (PDF) (resp. the support) of X. This space is a Hilbert space with respect to the inner product:

⟨X1, X2⟩_{L2} ≡ E[X1 X2] = ∫_Ω X1(ω) X2(ω) dP(ω).    (2)

This inner product induces the norm ‖X‖_{L2} ≡ √(E[X²]).

The input vector of the physical model M is represented as a random vector X(ω), ω ∈ Ω, with prescribed joint PDF f_X(x). The model response is then a random variable Y(ω) = M(X(ω)), which is assumed scalar in the sequel for the sake of simplicity, i.e. Q = 1. Note that in the case of a vector-valued model response Y, the following derivations hold componentwise.

2.2. Chaos representation of the model response

2.2.1. General case

It is now assumed that the random model response Y has a finite variance, i.e. Y ∈ L2(Ω, F, P). As a consequence, it admits the following chaos representation [2]:

Y = M(X) = Σ_{α∈N^M} a_α φ_α(X)    (3)

where the a_α are unknown deterministic coefficients, and the φ_α are multivariate basis functions that are orthonormal with respect to the joint PDF f_X(x) of X, i.e.:

⟨φ_α(X), φ_β(X)⟩_{L2} = E[φ_α(X) φ_β(X)] = δ_{α,β}    (4)

where δ_{α,β} = 1 if α = β and 0 otherwise.

Let {π_j^(i), j ∈ N} be a family of orthonormal polynomials with respect to the marginal PDF f_{Xi} of random variable Xi. Most common distributions can be associated with a specific type of polynomial [20], e.g. normalized Hermite (resp. Legendre) polynomials for standard
G. Blatman, B. Sudret / Probabilistic Engineering Mechanics 25 (2010) 183–197 185
normal variables (resp. uniform variables over [−1, 1]). Upon tensorizing the M resulting families of univariate polynomials, one gets a set of multivariate polynomials {ψ_α, α ∈ N^M} defined by:

ψ_α(x) = π_{α1}^(1)(x1) × ··· × π_{αM}^(M)(xM).    (5)

According to [2], a possible choice for the multivariate orthonormal basis in Eq. (3) is:

φ_α(x) = ψ_α(x) √( f_{X1}(x1) ··· f_{XM}(xM) / f_X(x) ).    (6)

As shown in [21], the above equation can be further elaborated by introducing the formalism of copula theory [22]. This theory is a tool for the representation of multivariate cumulative distribution functions (CDFs), which is of common use in financial engineering whereas it remains rather unused in probabilistic engineering mechanics (see e.g. [23,24]). Copula theory allows one to clearly separate the description of a random vector X into two parts:

- the marginal distribution (or margin) of each component, denoted by {F_{Xi}(xi), i = 1, ..., M};
- the structure of dependence between these components, contained in a so-called copula function C : [0, 1]^M → [0, 1], which is nothing but an M-dimensional CDF with standard uniform margins.

According to Sklar's theorem [25], a continuous joint CDF F_X(x) has a unique representation in terms of its margins and its copula function C(u1, ..., uM):

F_X(x) = C(F_{X1}(x1), ..., F_{XM}(xM)).    (7)

The practical feature of this approach appears when the probabilistic model of X is to be built from data: the models for the margins are first inferred, then suitable methods for determining the appropriate copula function are applied. Introducing the copula density function:

c(u1, ..., uM) = ∂^M C(u1, ..., uM) / (∂u1 ··· ∂uM)    (8)

the joint PDF of X may be recast as:

f_X(x) = c(F_{X1}(x1), ..., F_{XM}(xM)) f_{X1}(x1) ··· f_{XM}(xM)    (9)

so that the basis functions in Eq. (6) rewrite:

φ_α(x) = ψ_α(x) / √( c(F_{X1}(x1), ..., F_{XM}(xM)) ).    (10)

2.2.2. Case of independent input random variables

Let us consider the case of M independent input random variables Xi. Then the copula in Eq. (7) (called the product or independent copula) and its density respectively read:

C_ind(u1, ..., uM) = u1 u2 ··· uM,   c_ind(u1, ..., uM) = 1.    (11)

Hence the chaos representation in Eqs. (3) and (10) reduces to:

Y = M(X) = Σ_{α∈N^M} a_α ψ_α(X).    (12)

The above series is usually referred to as the polynomial chaos (PC) expansion. The decomposition was originally formulated with standard Gaussian random variables and Hermite polynomials as the finite-dimensional Wiener polynomial chaos [1]. It was later extended to other classical distributions together with basis functions from the Askey family of hypergeometric polynomials [20] (generalized PC expansion), and then to arbitrary probability measures [26].

2.2.3. Case of an input Nataf distribution

In the present section it is assumed that the input random variables in X are correlated through a Nataf distribution [27], which is of common use in structural reliability analysis, i.e. that the input CDF reads:

F_X(x1, ..., xM) = Φ_{M,R}( Φ^{-1}(F1(x1)), ..., Φ^{-1}(FM(xM)) )    (13)

where Fi(xi) is the marginal CDF of the random variable Xi, Φ_{M,R} is the standard Gaussian CDF of dimension M and correlation matrix R, and Φ is the unidimensional standard Gaussian CDF. Let us denote by ξ̂ = {ξ̂i = Φ^{-1}(Fi(Xi)), i = 1, ..., M} the correlated standard Gaussian random variables which appear in Eq. (13). In order to derive a classical Hermite PC approximation, the model response shall be recast as a function of independent standard Gaussian random variables ξi. In this respect, ξ̂ is transformed into a standard Gaussian random vector ξ whose components are uncorrelated:

ξ = Γ ξ̂    (14)

where the matrix Γ is obtained from the Cholesky decomposition of R, that is:

R = Γ^T Γ.    (15)

Thus the model response may be recast as a function of independent standard Gaussian random variables. Hence it may be expanded onto a classical PC expansion made of normalized Hermite polynomials, as shown in Section 2.2.2.

2.3. Estimation of the PC coefficients

For computational purposes the series in Eq. (12) is truncated in order to retain only a finite number of terms. One commonly retains those polynomials whose total degree |α| does not exceed a given p:

Y = M(X) ≈ Mp(X) = Σ_{|α|≤p} a_α ψ_α(X).    (16)

2.3.1. Projection approach

Due to the orthonormality of the PC basis (Eq. (4)), each coefficient may be recast as a multidimensional integral:

a_α = E[M(X) ψ_α(X)].    (17)

The multidimensional integral (17) may be computed using various quadrature schemes, which differ in the choice of the selected integration points (i.e. the model evaluations). Quadrature schemes based on (quasi-)random sampling (e.g. Monte Carlo, Latin Hypercube, quasi-Monte Carlo) may be used, as well as multivariate Gauss quadrature techniques (e.g. full tensor product quadrature, Smolyak sparse quadrature), see [28] and references therein.

Nonetheless these projection schemes might lead to a prohibitive computational cost since:

- the simulation techniques often require a large set of realizations of the input random variables in order to provide a sufficient accuracy;
- the cost of the quadrature techniques strongly increases with the number of input parameters, even if the Smolyak algorithm [29] allows one to moderate this so-called curse of dimensionality.
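As a brief illustration of the isoprobabilistic step in Eqs. (13)–(15), the following sketch (our own, not from the paper; the marginal transforms ξ̂i = Φ⁻¹(Fi(Xi)) are omitted, and the correlation matrix R is an assumed example) decorrelates correlated standard Gaussian variables through the Cholesky factor of R. The decorrelating map written as ξ = Γξ̂ in Eq. (14) is realized here by a triangular solve with the lower Cholesky factor:

```python
import numpy as np

# Assumed correlation matrix of the correlated standard Gaussian
# vector xi_hat (Eq. (13)); L is its lower Cholesky factor, R = L L^T.
R = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
L = np.linalg.cholesky(R)

rng = np.random.default_rng(0)
z = rng.standard_normal((5, 3))   # independent standard Gaussians
xi_hat = z @ L.T                  # correlated standard Gaussians, Cov = R

# Decorrelation (Eq. (14)): solve the triangular system L xi = xi_hat,
# i.e. apply the inverse Cholesky factor to each realization.
xi = np.linalg.solve(L, xi_hat.T).T
print(np.allclose(xi, z))         # True: recovers the independent variables
```

The decorrelated vector ξ can then feed a classical Hermite PC expansion, as stated at the end of Section 2.2.3.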
2.3.2. Regression approach

The regression approach consists in adjusting an a priori truncated PC expansion to the model under consideration. It can thus be regarded as a response surface insofar as it is an analytical metamodel that is fitted to the possibly complex model, which then allows the analyst to perform a large number of evaluations at a negligible computational cost.

Let us consider the following PC expansion of prescribed total degree p:

Mp(X) = Σ_{0≤|α|≤p} a_α ψ_α(X)    (18)

which rewrites, using a vector notation:

Mp(X) = a^T ψ(X)    (19)

where a (resp. ψ) gathers the coefficients {a_α, 0 ≤ |α| ≤ p} (resp. the basis polynomials {ψ_α, 0 ≤ |α| ≤ p}). Let X = {x^(1), ..., x^(N)} be a set of N realizations of the input random vector, and Y = {y^(1), ..., y^(N)}^T be the corresponding model evaluations {y^(i) = M(x^(i)), i = 1, ..., N}. The collection X is called the experimental design (ED). The coefficients in Eq. (18) are estimated by minimizing some norm of the residual M − Mp, say the L2-norm (least-square regression). The estimates of the PC coefficients are thus given by:

â = arg min_{a∈R^P} Σ_{i=1}^{N} ( M(x^(i)) − a^T ψ(x^(i)) )²    (20)

which is equivalent to:

Ψ^T Ψ â = Ψ^T Y    (21)

where the data matrix Ψ is defined by:

Ψ_ij ≡ ψ_{αj}(x^(i)),   i = 1, ..., N;  j = 0, ..., P − 1.    (22)

Whatever the solving scheme, the experimental design must be selected in such a way that the information matrix is well-conditioned in order to make the regression problem well-posed. This can be quantified by the condition number:

κ(M) = ‖M^{-1}‖ · ‖M‖    (23)

where ‖·‖ is a specific matrix norm, e.g. the 1-norm defined by:

‖M‖₁ = max_{1≤j≤P} Σ_{i=1}^{P} |M_ij|    (24)

denoting by M_ij (resp. P) the generic coefficient (resp. the size) of matrix M. Thus M is said to be well (resp. ill-)conditioned if κ(M) is low (resp. high). A necessary condition for M to be well-conditioned is to use an experimental design whose size N is greater than the size P of the PC expansion.

Once the PC coefficients have been estimated using either the projection or the regression approach, it is easy to post-process the obtained PC approximation in order to carry out distribution, moment, sensitivity and reliability analyses (see Appendix A).

2.4. Error estimation and complexity of the PC expansion

As seen in Eq. (16), the PC expansion of the model response is usually truncated in such a way that one only retains those basis polynomials with degree not greater than p. It was shown in [10,11] that a PC approximation of degree p = 2 usually provides satisfactory results for moment and sensitivity analyses, whereas a degree p = 3 is often necessary when performing a reliability analysis. More generally, the convergence rate of the truncated PC expansion with respect to the number of terms P depends on the smoothness of the model function M. In our context, no information is available concerning the smoothness of M prior to performing model evaluations. Various metrics were proposed in [30] for evaluating a posteriori the accuracy of truncated PC expansions. Nevertheless, as our research is aimed at minimizing the number of model evaluations, this paper is focused on the derivation of inexpensive error estimates, i.e. estimates whose evaluation requires no additional computer experiment (Section 3).

Another problem arises from the noticeable increase of the required number N of model evaluations for computing the PC coefficients. Indeed, N has to be greater than the number P of unknown coefficients when performing standard regression analysis, whereas P itself grows polynomially with both the total degree p and the number of input variables M:

P = C(M + p, p) = (M + p)! / (M! p!).    (25)

To circumvent this problem, Section 4 proposes an adaptive scheme in order to build up a sparse PC expansion, i.e. one which captures the main stochastic features of the model response by means of a small number of basis functions compared to a full PC approximation.

3. Assessment of the polynomial chaos approximation

It has been shown in the previous section that PC approximations of the mathematical model can be obtained using non-intrusive techniques, namely the projection approach or the regression approach. Both methods provide a stochastic response surface whose performance has to be assessed. In terms of statistical learning theory (see e.g. [31]), the discrepancy between the model response and the metamodel is measured by means of a risk functional, for instance the commonly used mean-square error. Such a quantity depends on the PDF of the response, which is unknown in our context. As this work is aimed at minimizing the number of model evaluations, one investigates inexpensive risk estimates, i.e. estimates which can be computed by only recycling the already performed simulations. The simplest estimate is the well-known determination coefficient R². This coefficient may be a biased estimate of the risk functional though, since it does not take into account the robustness of the metamodel, i.e. its capability of correctly predicting the model response at any point which does not belong to the experimental design.

As a consequence, one makes use of a more reliable error estimate, namely the leave-one-out error [19]. Indeed, besides its relative robustness, it shares the two following characteristics with the empirical error:

- it can be evaluated from already performed numerical experiments, i.e. no additional evaluation of the model is required;
- it can be derived analytically from the regression information matrix M.

Furthermore, it has been shown in [32] that the leave-one-out error performs quite well with respect to other cross-validation-based error estimates.

3.1. Generalization error

Let X = {x^(1), ..., x^(N)}^T be an experimental design and Y = {y^(1) = M(x^(1)), ..., y^(N) = M(x^(N))}^T be the corresponding model evaluations. Let us estimate the coefficients of the PC expansion by regression using X. The resulting PC approximation is denoted by MX,p.
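To make the regression step concrete, the following sketch (our own, not from the paper) estimates the coefficients of a one-dimensional Hermite PC by solving the least-square problem of Eqs. (20)–(21). The toy model M(x) = x² is chosen because its expansion on the orthonormal Hermite basis is known exactly (a0 = 1, a2 = √2), and the 1-norm condition number of the information matrix (Eqs. (23)–(24)) is also reported:

```python
import numpy as np

# Orthonormal (probabilists') Hermite polynomials psi_0..psi_2 for a
# standard Gaussian input: psi_0 = 1, psi_1 = x, psi_2 = (x^2 - 1)/sqrt(2)
def psi(x):
    return np.column_stack([np.ones_like(x), x, (x**2 - 1) / np.sqrt(2)])

model = lambda x: x**2                 # toy model, M(x) = x^2

rng = np.random.default_rng(0)
x = rng.standard_normal(20)            # experimental design, N = 20
y = model(x)

Psi = psi(x)                           # data matrix (Eq. (22))
M = Psi.T @ Psi                        # information matrix
a_hat = np.linalg.solve(M, Psi.T @ y)  # normal equations (Eq. (21))

print(np.round(a_hat, 6))              # ~ [1, 0, sqrt(2)]
print(np.linalg.cond(M, 1))            # 1-norm condition number (Eq. (23))
```

Because the model lies exactly in the span of the basis, the least-square solution recovers the exact coefficients regardless of the design, provided the information matrix is invertible, which is precisely the well-posedness requirement N > P discussed above.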
V̂ar[Y] = 1/(N − 1) Σ_{i=1}^{N} ( M(x^(i)) − ȳ )²;   ȳ = (1/N) Σ_{i=1}^{N} y^(i).    (31)

However, the use of the R² statistic might be misleading for comparing two different regression-based metamodels, since it automatically increases with the number P of basis polynomials. Furthermore, it is highly biased since it tends to 1 as P increases. It typically underestimates the generalization error. In particular, when P is close to N, the polynomial metamodel may overfit the model at the training points.

As an alternative, it is possible to derive the adjusted determination coefficient defined by:

R²adj(MX,p) = 1 − (N − 1)/(N − P − 1) (1 − R²(MX,p)).    (32)

The R²adj statistic is penalized as P increases, i.e. as extra terms are included in the metamodel. From the authors' experience though, this coefficient still often overpredicts the true approximation accuracy.

3.2.2. Leave-one-out cross-validation

The cross-validation technique consists in dividing the data sample into two subsamples. A metamodel is built from one subsample, i.e. the training set, and its performance is assessed by comparing its predictions to the other subset, i.e. the test set. A refinement of the method is ν-fold cross-validation.

4. Adaptive sparse polynomial chaos approximation

Versatile non-intrusive techniques have been described in Section 2 for building a PC approximation of the random model response. Error estimates have then been proposed in Section 3 in order to assess the performance of this stochastic response surface without performing additional model evaluations. This is a crucial point since it provides the ingredients for carrying out a convergence analysis of the PC expansion with respect to its total degree. Nonetheless, as already noted in [30] and mentioned in Section 2.4, such a procedure might lead to a considerable increase of the number of terms in case of a large number of random input variables. This may be a problem in practice since it has been shown that the required number of model evaluations (i.e. the computational cost) itself increases with the PC size.

It is however our belief that in most applications the number of significant terms in the PC expansion is relatively small, because of the two following points:

- high-order interaction effects are usually negligible compared to main effects and low-order interaction effects (this property is referred to as a low effective dimension in applications);
- the input variables might have a different impact on the model response, as shown by global sensitivity analysis [11].

A methodology is thus proposed hereinafter in order to build up a sparse PC expansion, i.e. a truncated series in which only a small number of coefficients are retained. For this purpose, an iterative algorithm based on non-intrusive regression is developed.
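Both error measures discussed in this section can be computed without any new model run. The sketch below is our own illustration on a hypothetical one-dimensional fit; it uses the standard closed-form leave-one-out residual of linear least squares, e_i/(1 − h_i), where h_i is the i-th diagonal term of the hat matrix Ψ(ΨᵀΨ)⁻¹Ψᵀ, a classical identity consistent with the leave-one-out error advocated here:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 15)
y = np.sin(np.pi * x)                  # deterministic toy model

# Regression basis of size P (plain monomials, for illustration only)
P = 6
Psi = np.vander(x, P, increasing=True)

H = Psi @ np.linalg.solve(Psi.T @ Psi, Psi.T)   # hat matrix
res = y - H @ y                                 # training residuals
h = np.diag(H)

emp_err = np.mean(res**2) / np.var(y)               # empirical error, 1 - R^2
loo_err = np.mean((res / (1 - h))**2) / np.var(y)   # leave-one-out, 1 - Q^2

print(emp_err, loo_err)
```

Since each residual is inflated by 1/(1 − h_i) ≥ 1, the leave-one-out error is never smaller than the empirical error, which is why it is the more conservative of the two estimates.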
4.1. Sparse polynomial chaos approximation

Let us consider the PC expansion of the model response:

Y = M(X) = Σ_{α∈N^M} a_α ψ_α(X)    (38)

where the notation of Section 2 is used. Let A be a nonempty finite subset of N^M. Its associated PC approximation reads:

Y ≈ MA(X) = Σ_{α∈A} a_α ψ_α(X).    (39)

The set A is referred to as the truncation set. Let us define its degree πA by:

πA = max_{α∈A} Σ_{i=1}^{M} α_i.    (40)

The common truncation scheme presented in Section 2.4 corresponds to the following truncation sets:

A_{M,p} ≡ {α ∈ N^M : |α| ≤ p}.    (41)

One defines the index of sparsity of the truncation set A by:

IS = card(A) / card(A_{M,πA})    (42)

where card(A) denotes the number of elements in A. The truncation set A and the related PC approximation (39) are said to be sparse if the index of sparsity IS is low.

Let us now define the degree pα and the interaction order jα of any index α in A by:

pα = |α| = Σ_{i=1}^{M} α_i,   jα = Σ_{i=1}^{M} 1_{αi>0}    (43)

where 1_{αi>0} = 1 if α_i > 0 and 0 otherwise. Moreover, let us denote by I_{j,p} the set of indices α of interaction order j and total degree p.

Using the concept of sparse PC expansion and the error estimates presented in Section 3.2, it is now possible to devise an algorithm that builds up iteratively a sparse PC expansion while mastering the approximation error. A first algorithm has been recently proposed in [34]. It is recalled in Section 4.2 for the sake of completeness. Due to some limitations that are reported below, this algorithm is further combined with adaptive experimental designs (Section 4.3).

4.2. Adaptive sparse polynomial chaos approximation using a fixed experimental design

4.2.1. Iterative algorithm using a fixed experimental design

An iterative procedure is now presented for building a PC approximation of the system response using a fixed experimental design (ED):

(i) Choose an ED X and perform the model evaluations Y once and for all.
(ii) Select the values of the algorithm parameters, i.e. the target accuracy Q²tgt, the maximal PC degree pmax, the maximal interaction order jmax and the cut-off values ε1, ε2.
(iii) Initialize the algorithm: p = 0, truncation set A0 = {0}, where 0 is the null element of N^M.
(iv) For any degree p ∈ {1, ..., pmax}:
   - Forward step: for any interaction order j ∈ {1, ..., jmax}, gather the candidate terms in a set I_{j,p}. Add each candidate term to A_{p−1} one by one and compute the PC expansion coefficients by regression (Eq. (21)) and the associated determination coefficient R² in each case. Eventually retain those candidate terms that lead to a significant increase in R², i.e. greater than ε1, and discard the other candidate terms. Let A_{p,+} be the final truncation set at this stage.
   - Backward step: remove in turn each term in A_{p,+} of degree not greater than p. In each case, compute the PC expansion coefficients and the associated determination coefficient R². Eventually discard from A_{p,+} those terms that lead to an insignificant decrease in R², i.e. less than ε2. Let A_p be the final truncation set.
   - If Q²_{Ap} ≥ Q²tgt, stop.

Note that the regression calculations only involve solving linear systems (see Eq. (21)), so their computational cost is small or negligible with respect to the model evaluations on the ED. Moreover, the resulting sparse PC approximation does not depend on the (arbitrary) ordering of the PC basis.

While running the algorithm, the conditioning of the information matrix M is quantified by its condition number κ(M) defined in Eq. (23). In this respect, one considers the 1-norm defined in Eq. (24). Indeed, the associated reciprocal condition number κ(M)⁻¹ can be efficiently estimated using a specific algorithm (LAPACK package in MATLAB v6). One thus requires that κ(M)⁻¹ be greater than e.g. 10⁻⁴.

Moreover, the determination coefficient R² has been preferred to R²adj and Q² for adding (resp. discarding) PC terms in the forward (resp. backward) steps, since it appears to be particularly efficient for this purpose. On the other hand, the Q² statistic has been employed for assessing the accuracy of the PC expansion since it is the most conservative error measure.

In practice, one may set ε1 = ε2 = ε in the applications in order to reduce the number of algorithm tuning parameters. Note that ε should depend on the prescribed accuracy Q²tgt. Indeed it would not make any sense to require a large accuracy Q²tgt (say 0.9999) while choosing a large ε (say 0.1), since only few terms would then be retained in the PC expansion, necessarily leading to a poor approximation. In the sequel one uses the rule of thumb ε = α(1 − Q²tgt), where α is a constant whose value may range from 10⁻³ to 10⁻².

4.2.2. Step-by-step run of the algorithm using a polynomial model

The iterative procedure detailed above is illustrated by the following simple polynomial model:

Y = M(ξ1, ξ2) = 1 + H1(ξ1) H1(ξ2) + H3(ξ1)    (44)

where Hj represents the Hermite polynomial of degree j (j = 1, ..., 3) and ξ1, ξ2 are independent standard Gaussian random variables. A random design made of N = 100 Latin Hypercube samples is used. The various steps of the PC construction are illustrated in Fig. 1. As the model is polynomial, the PC expansion should recover the exact model function M.

The iterations on the interaction order j and the PC total degree p are respectively displayed from top to bottom and from left to right. The squares which appear in each subfigure represent the monomials of the model (44) that are to be found. The crosses correspond to the monomials of the current PC approximation. The x-axis (resp. y-axis) is associated with the degree of the monomials in ξ1 (resp. ξ2). In this example the polynomials H1(ξ2) and H3(ξ2) are correctly neglected in the forward steps associated with p = 1 and p = 3 respectively. All the remaining useless polynomials are discarded in the backward step of iteration p = 3, i.e. once the polynomial model is fully represented.
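A stripped-down, self-contained version of the forward–backward selection (our own sketch: the interaction-order loop is folded into the degree loop, plain R² thresholds are used, and the conditioning check and Q² stopping criterion are omitted) can reproduce the step-by-step example of Eq. (44):

```python
import numpy as np
from itertools import product
from math import factorial, sqrt

def herm(j, x):
    """Orthonormal probabilists' Hermite polynomial of degree j."""
    h_prev, h = np.ones_like(x), x
    for k in range(1, j):
        h_prev, h = h, x * h - k * h_prev
    return (h_prev if j == 0 else h) / sqrt(factorial(j))

def r2(basis, X, y):
    """Determination coefficient R^2 of the regression on 'basis'."""
    Psi = np.column_stack([herm(a[0], X[:, 0]) * herm(a[1], X[:, 1])
                           for a in basis])
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return 1 - np.mean((y - Psi @ coef) ** 2) / np.var(y)

# Model of Eq. (44): Y = 1 + H1(xi1) H1(xi2) + H3(xi1)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = 1 + herm(1, X[:, 0]) * herm(1, X[:, 1]) + herm(3, X[:, 0])

eps, A = 1e-6, [(0, 0)]
for p in range(1, 4):
    # forward step: accept candidates of total degree p that raise R^2
    for a in (a for a in product(range(p + 1), repeat=2) if sum(a) == p):
        if r2(A + [a], X, y) - r2(A, X, y) > eps:
            A.append(a)
    # backward step: discard terms whose removal barely lowers R^2
    for a in list(A):
        if len(A) > 1 and r2(A, X, y) - r2([b for b in A if b != a], X, y) < eps:
            A.remove(a)

print(sorted(A))   # the three true terms should survive
```

As in Fig. 1, terms that are picked up spuriously in early forward steps are pruned by later backward steps once the model is fully represented, and only the constant, H1H1 and H3 terms should remain.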
Fig. 1. Polynomial model — Step-by-step construction of the sparse polynomial chaos approximation (from left to right and top to bottom). Squares represent the original
polynomial to be recovered. Crosses represent the current polynomial chaos expansion. In each subfigure the x-axis (resp. y-axis) is associated with the degree of the
polynomials in ξ1 (resp. ξ2 ).
remaining, in which the additional point is randomly sampled. However it is not always possible to adapt the LHS design this way. Consider for instance the situation depicted in Fig. 4. After having built the new 4 × 4 grid, there are two points falling in the same bin. If the shaded areas are removed, the underrepresented variable intervals form a 2 × 2 grid: a 2-point (random) LHS sample is then generated in these cells. Note that the 2 samples in cell (3, 3) are kept in the analysis, although the resulting scheme is then a quasi-LHS. Indeed, the model evaluation at those points has been carried out already, and it is desirable to exploit this information in the sequel. Note also that it is possible to add several sampling points in one shot using NLHS. This feature has been taken advantage of in the application examples.

5. Application examples

This section is dedicated to the validation of the adaptive sparse PC expansions on an analytical model and two finite element models. In all the examples, classical full PC representations are compared to sparse ones in terms of by-products of interest, namely statistical moments and/or the probability of exceeding some threshold. The coefficients of the ''full'' PC metamodels are computed by regression.

5.1. Analytical model: Ishigami function

Let us consider the so-called Ishigami function, which is widely used for benchmarking in global sensitivity analysis [35]:

Y = sin Z1 + 7 sin² Z2 + 0.1 Z3⁴ sin Z1    (45)

where the Zi (i = 1, ..., 3) are independent random variables that are uniformly distributed over [−π, π]. Note that the model under consideration is sparse in nature since:

- only interactions of order not greater than 2 are involved;
- the underlying function is even with respect to the variables Z2 and Z3, hence the PC terms associated with odd polynomials in these variables are zero.

Of interest are the estimations of the first four moments of the response, namely the mean µY, the standard deviation σY, the skewness coefficient δY and the kurtosis coefficient κY.

The true values of µY and σY are obtained from analytical derivations. The reference values of δY and κY are computed by sparse quadrature using a Gauss–Legendre rule of level 19. This ensures a relative accuracy of 10⁻⁶ on these results and requires a total number of 310,155 evaluations of the Ishigami function.

5.1.1. Full PC expansions

Full PC approximations made of normalized Legendre polynomials and of degree 6, 8, 10 and 12 are used first. Legendre polynomials are naturally introduced since the input random variables Z1, Z2, Z3 are uniformly distributed. Their coefficients are evaluated by regression using as many samples as necessary to ensure the well-conditioning of the information matrix. The NLHS scheme is used for this purpose, allowing a fair comparison with the adaptive sparse PC approximations.

On the one hand, the mean and standard deviation of the response are post-processed from the full PC expansions using Eqs. (A.1) and (A.2). On the other hand, the skewness and kurtosis coefficients are calculated by sparse quadrature as described
Fig. 4. (a) Initial LHS design. (b) Grid of the new design. (c) Represented subdomains.
Table 1
Ishigami function — Estimates of the first four statistical moments by full PC expansions.

PC order    1 − Q̂²      N        P    µ̂Y      σ̂Y      δ̂Y      κ̂Y
p = 6       2 × 10⁻¹    155      84   3.7301  3.7806  0.1405  3.4648
p = 8       2 × 10⁻³    443      165  3.4941  3.7338  0.0012  3.5332
p = 10      6 × 10⁻⁶    1,420    286  3.5005  3.7203  0.0009  3.5073
p = 12      1 × 10⁻⁸    2,604    455  3.5000  3.7209  0.0000  3.5072
Reference   –           310,155  –    3.5000  3.7208  0.0000  3.5072
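The Ishigami function of Eq. (45) and crude Monte Carlo estimates of its first two moments can be sketched as follows (our own illustration; the paper's reference values in Table 1 come from analytical derivations and sparse quadrature, not from sampling):

```python
import numpy as np

def ishigami(z1, z2, z3, a=7.0, b=0.1):
    """Ishigami function, Eq. (45)."""
    return np.sin(z1) + a * np.sin(z2) ** 2 + b * z3 ** 4 * np.sin(z1)

rng = np.random.default_rng(42)
Z = rng.uniform(-np.pi, np.pi, size=(10**6, 3))
Y = ishigami(Z[:, 0], Z[:, 1], Z[:, 2])

# Monte Carlo estimates of the first two moments;
# exact values are mu_Y = 3.5 and sigma_Y ~ 3.7208 (Table 1)
print(Y.mean(), Y.std(ddof=1))
```

With 10⁶ samples the estimates agree with the reference values up to a Monte Carlo error of order σY/√N ≈ 4 × 10⁻³, which also makes plain why sampling alone is a poor route to the four-digit accuracy reported in the tables.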
values ε1, ε2 to 5 × 10⁻³. The evolution of the error estimates 1 − R̂² and 1 − Q̂² with respect to the number of iterations of the algorithm is depicted in Fig. 5. An overfitting situation clearly appears, since the empirical error 1 − R̂² decreases whereas the leave-one-out error 1 − Q̂² increases; the regression information matrix eventually becomes ill-conditioned, which causes the algorithm to stop. To overcome this problem, an NLHS sequential design is now considered. In this respect, the initial size of the experimental design is set heuristically to 2(M + 1) = 8, where M = 3 is the number of input variables.
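The overfitting pattern just described (empirical error shrinking while the leave-one-out error grows as terms accumulate on a fixed design) can be reproduced on a small hypothetical regression problem, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
x = rng.uniform(-1, 1, N)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(N)   # noisy toy data

emps, loos = [], []
for P in (3, 6, 10, 15):                    # growing number of basis terms
    Psi = np.vander(x, P, increasing=True)
    H = Psi @ np.linalg.pinv(Psi)           # hat matrix of the fit
    res = y - H @ y
    emps.append(np.mean(res**2) / np.var(y))                       # 1 - R^2
    loos.append(np.mean((res / (1 - np.diag(H)))**2) / np.var(y))  # 1 - Q^2
    print(P, round(emps[-1], 4), round(loos[-1], 4))
```

On a fixed design the empirical error can only decrease as P grows, while the leave-one-out error eventually blows up; the widening gap between the two is a practical overfitting diagnostic, in the spirit of Fig. 5.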
Table 2
Ishigami function — Estimates of the first four statistical moments by sparse PC expansions using a sequential NLHS design.

Target error    Algorithm output                      Statistical moments
1 − Q²tgt       1 − Q̂²      N        P    p          µ̂Y      σ̂Y      δ̂Y      κ̂Y
10⁻²            2 × 10⁻³    99       42   7          3.5600  3.8205  0.0777  4.1135
10⁻⁴            1 × 10⁻⁵    256      87   10         3.4998  3.7181  0.0000  3.5083
10⁻⁶            4 × 10⁻⁸    501      82   12         3.4999  3.7207  0.0000  3.5072
10⁻⁸            5 × 10⁻⁹    994      27   12         3.5000  3.7208  0.0000  3.5072
Reference       –           310,155  –    –          3.5000  3.7208  0.0000  3.5072
The results obtained by the NLHS experimental design are reported in Table 2 together with the estimates of the approximation error. The accuracy of the PC expansions and of the related moment estimates globally increases as the target error 1 − Q^2_tgt is reduced. Setting the target error to 10^-8 provides by far the best estimates of the statistical moments, with a four-digit accuracy. An estimate of the approximation error of 5 × 10^-9 is thus obtained, whereas the number of terms has been divided by 17 (P = 27 instead of 455) compared to a full metamodel of degree 12. Furthermore, an approximation error of magnitude 10^-5 is obtained by dividing the number of model evaluations by 6 (256 calls to the model instead of 1,420) compared to the full PC expansion of degree 10 that provided such an accuracy.

The convergence of the empirical error of the sparse PC expansions based on sequential designs is depicted in Fig. 6. It clearly appears that the sparse PC expansions outperform the full PC expansions.

Fig. 6. Ishigami function — Convergence of the full and sparse PC expansions. [Plot: 1 − Q^2 versus number of model evaluations; legend: Full PCEs, Sparse PCEs.]
5.1.3. Conclusion
From this first analytical example, several conclusions may be drawn:
- the adaptive algorithm allows one to build a sparse PC expansion of any degree of accuracy. The final metamodel contains here only P = 27 terms for an accuracy of 10^-8, which reveals an extremely sparse structure (IS = 27/C(12+3, 3) = 27/455 ≈ 6%). Note that it is easy to show that the remaining PC terms correspond to the Taylor expansion of Eq. (45);
- the use of adaptive sampling (NLHS) is mandatory in order to avoid overfitting (Fig. 5);
- for a prescribed accuracy 1 − Q^2_tgt, the number of model evaluations N required when using a sparse representation is up to 6 times less than when using a full PC representation (Fig. 6).

Fig. 7. Truss structure with 23 members.
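The index of sparsity quoted in the first conclusion can be checked directly, since a full PC basis of degree p in M variables contains C(p + M, M) terms:

```python
from math import comb

P_full = comb(12 + 3, 3)     # full PC basis of degree 12 in M = 3 variables
IS = 27 / P_full             # 27 retained terms
print(P_full, round(IS, 3))  # 455 and 0.059, i.e. IS ~ 6%
```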
Table 3
Truss example — Input random variables.

Variable      Distribution   Mean          Standard deviation
E1, E2 (Pa)   Lognormal      2.10 × 10^11  2.10 × 10^10
A1 (m^2)      Lognormal      2.0 × 10^-3   2.0 × 10^-4
A2 (m^2)      Lognormal      1.0 × 10^-3   1.0 × 10^-4
P1–P6 (N)     Gumbel         5.0 × 10^4    7.5 × 10^3

5.2. Maximal deflection of a truss structure

5.2.1. Problem statement
Let us consider the truss structure sketched in Fig. 7. Ten independent input random variables are considered, namely the Young's moduli and the cross-section areas of the horizontal and the oblique bars (respectively denoted by E1, A1 and E2, A2) and the applied loads (denoted by Pi, i = 1, ..., 6) [28], whose mean and standard deviation are reported in Table 3. Thus the input random vector is defined by:

Z = {E1, E2, A1, A2, P1, ..., P6}^T.   (46)

The model random response Y is the deflection at midspan V1. It is approximated by a truncated PC expansion made of normalized Hermite polynomials. In this respect, the random vector Z is recast as a standard Gaussian random vector X by transforming the random variables Zi as follows:

Xi = Φ^-1(F_Zi(Zi)),   i = 1, ..., 10   (47)

where Φ denotes the standard normal cumulative distribution function (CDF) and F_Zi denotes the CDF of Zi. This leads to the following metamodel:

V1(X) ≃ Σ_{α∈A} a_α ψ_α(X).   (48)

5.2.2. Reliability analysis
The serviceability of the structure with respect to an admissible maximal deflection vmax is studied. The associated limit state function reads:

g(x) = vmax − |v1(x)| ≤ 0.   (49)

The reference value P_f^REF of the probability of failure is obtained by importance sampling.
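For the lognormal variables of Table 3, the transform of Eq. (47) has a closed form: since F_Z(z) = Φ((ln z − λ)/ζ), one gets X = (ln Z − λ)/ζ. A numpy sketch for E1 follows (the Gumbel loads would instead require the explicit composition Φ^-1 ∘ F_Z):

```python
import numpy as np

m, s = 2.10e11, 2.10e10                 # mean/std of E1 (Table 3)
zeta = np.sqrt(np.log1p((s / m) ** 2))  # parameters of the underlying Gaussian ln E1
lam = np.log(m) - 0.5 * zeta ** 2
rng = np.random.default_rng(1)
z = rng.lognormal(lam, zeta, 200_000)   # realizations of E1
x = (np.log(z) - lam) / zeta            # Eq. (47) in closed form
# x is standard normal by construction: mean ~ 0, std ~ 1
```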
Table 4
Truss structure — Estimates of the generalized reliability index β = −Φ^-1(Pf) for various values of the threshold displacement.

Threshold (cm)      Reference β^REF   Full PCE β (error %)   Sparse PCE β (error %)
10                  1.72              1.71 (0.6)             1.72 (0.0)
11                  2.38              2.38 (0.0)             2.38 (0.0)
12                  2.97              2.98 (0.3)             2.99 (0.7)
14                  3.98              4.04 (1.5)             4.07 (2.3)
16                  4.85              4.95 (2.1)             5.02 (3.5)
1 − Q^2             –                 1 × 10^-6              9 × 10^-5
Number of terms     –                 286                    114
Number of FE runs   –                 443                    207

Table 5
Frame structure — Element properties.

Element   Young's modulus   Moment of inertia   Cross-sectional area
B1        E4                I10                 A18
B2        E4                I11                 A19
B3        E4                I12                 A20
B4        E4                I13                 A21
C1        E5                I6                  A14
C2        E5                I7                  A15
C3        E5                I8                  A16
C4        E5                I9                  A17

Table 6
Frame structure — Input random variables properties.

Variable      Distribution                        Mean*            Standard deviation*
P1 (kN)       Lognormal                           133.454          40.04
P2 (kN)       ''                                  88.97            35.59
P3 (kN)       ''                                  71.175           28.47
E4 (kN/m^2)   Truncated Gaussian over [0, +∞)     2.1738 × 10^7    1.9152 × 10^6
E5 (kN/m^2)   ''                                  2.3796 × 10^7    1.9152 × 10^6
I6 (m^4)      ''                                  8.1344 × 10^-3   1.0834 × 10^-3
I7 (m^4)      ''                                  1.1509 × 10^-2   1.2980 × 10^-3
I8 (m^4)      ''                                  2.1375 × 10^-2   2.5961 × 10^-3
I9 (m^4)      ''                                  2.5961 × 10^-2   3.0288 × 10^-3
I10 (m^4)     ''                                  1.0812 × 10^-2   2.5961 × 10^-3
I11 (m^4)     ''                                  1.4105 × 10^-2   3.4615 × 10^-3
I12 (m^4)     ''                                  2.3279 × 10^-2   5.6249 × 10^-3
I13 (m^4)     ''                                  2.5961 × 10^-2   6.4902 × 10^-3
A14 (m^2)     ''                                  3.1256 × 10^-1   5.5815 × 10^-2
A15 (m^2)     ''                                  3.7210 × 10^-1   7.4420 × 10^-2
A16 (m^2)     ''                                  5.0606 × 10^-1   9.3025 × 10^-2
A17 (m^2)     ''                                  5.5815 × 10^-1   1.1163 × 10^-1
A18 (m^2)     ''                                  2.5302 × 10^-1   9.3025 × 10^-2
A19 (m^2)     ''                                  2.9117 × 10^-1   1.0232 × 10^-1
A20 (m^2)     ''                                  3.7303 × 10^-1   1.2093 × 10^-1
A21 (m^2)     ''                                  4.1860 × 10^-1   1.9537 × 10^-1

* The mean value and standard deviation of the cross sections, moments of inertia and Young's moduli are those of the untruncated Gaussian distributions.
The sampling density is a multinormal density centered on the design point resulting from a FORM (first-order reliability method) analysis. The corresponding generalized reliability index is given by β^REF = −Φ^-1(P_f^REF).

On the other hand, the model response is approximated using a full second-order PC expansion, as well as an adaptive sparse PC expansion corresponding to the prescribed approximation error 10^-2. This leads to an analytical limit state function. As in the previous example, the coefficients of the two PC expansions are computed by regression using the NLHS scheme. The probability of failure is then computed by FORM and importance sampling (500,000 samples are used). A parametric study is carried out by varying the threshold vmax from 10 to 16 cm. The results are reported in Table 4 in terms of generalized reliability indices.

As expected, the discrepancy between the PC-based and the reference solutions increases with the threshold value, i.e. when the probability of failure decreases. Accurate results are obtained by both the full and the sparse PC expansions (relative error on β less than 3.5%). Note however that the computational cost of the sparse PC expansion approach is about half that of the full PC approach for a similar accuracy.

5.3. Top-floor displacement of a frame structure

5.3.1. Problem statement
Let us consider now the structure sketched in Fig. 8, already studied in [36,37]. It is a three-span, five-story frame structure subjected to horizontal loads. The frame elements are made of 8 different materials, whose properties are gathered in Table 5. The response of interest is the horizontal component of the top-floor displacement at the top right corner, which is denoted by u.

5.3.2. Probabilistic model
The 3 applied loads, the 2 Young's moduli, the 8 moments of inertia and the 8 cross-section areas of the frame components are modelled by random variables. They are gathered in the random vector Z = (P1, P2, P3, E4, E5, I6, ..., I13, A14, ..., A21) of size M = 21. The applied loads (resp. the material properties) are assumed to follow a lognormal distribution (resp. a truncated Gaussian distribution over [0, +∞)). The mean and the standard deviation of the random variables are reported in Table 6. Note that truncated Gaussian distributions are used in contrast to the original example in [36]. Indeed, it is not possible to obtain a reference solution by Monte Carlo simulation when using Gaussian distributions, due to non-physical negative realizations of the geometrical and material properties.

Moreover, the various input random variables are correlated using a Nataf distribution (Section 2.2.3). The correlation matrix R (using the notation in Section 2.2.3) is defined as follows:
- the correlation coefficient of the Ẑi's associated with the cross-section area and the moment of inertia of a given member is set equal to ρ_Ai,Ii = 0.95;
- otherwise the correlation coefficients of the geometrical properties are set equal to ρ_Ai,Ij = ρ_Ii,Ij = ρ_Ai,Aj = 0.13;
- the correlation coefficient of the two Young's moduli is set equal to ρ_E4,E5 = 0.9;
- the remaining correlation coefficients in R are zero.

Note that these values are not the correlation coefficients of the input variables in Z, although the latter differ insignificantly.

The model response is recast as a function of independent standard Gaussian random variables Xi so that it may be expanded onto a PC basis made of normalized Hermite polynomials, as shown in Section 2.2.3.

5.3.3. Moment analysis
The mean and the standard deviation of the random maximal displacement U = M(Z(X)) are now considered. The quadrature scheme already used in the previous examples is applied to provide reference values with a two-digit accuracy (15,179 model evaluations are performed). The response reference mean and standard deviation are thus equal to 2.09 cm and 0.64 cm, respectively.

On the other hand, estimates of the statistical moments are obtained from a full second-order expansion and from an adaptive sparse PC expansion, respectively. The coefficients of the full metamodel are computed using a NLHS design which is enriched until the regression problem is well-posed. The target approximation error 1 − Q^2_tgt of the sparse PC expansion is set equal to 10^-2, its maximum degree pmax to 7, its maximum interaction order jmax to 2 and the cut-off values ε1, ε2 to 5 × 10^-5. An initial
Table 7
Frame structure — Estimates of the response mean and standard deviation by full and sparse polynomial chaos expansions.

                    Reference   Full PCE estimate (error %)   Sparse PCE estimate (error %)
µ_U (cm)            2.09        2.09 (0.0)                    2.09 (0.0)
σ_U (cm)            0.64        0.64 (0.0)                    0.64 (0.0)
1 − Q^2             –           3 × 10^-2                     1 × 10^-2
Number of terms     –           253                           138
Number of FE runs   15,179      330                           214

Table 8
Frame structure — Estimates of the generalized reliability index β = −Φ^-1(Pf) for various values of the threshold displacement.

Threshold (cm)   Reference β^REF   Full PCE β (error %)   Sparse PCE β (error %)
4                2.27              2.26 (0.4)             2.29 (0.9)
5                2.96              3.00 (1.4)             3.01 (1.7)
6                3.51              3.60 (2.6)             3.61 (2.8)
7                3.96              4.12 (4.0)             4.11 (3.8)
8                4.33              4.58 (5.8)             4.56 (5.3)
9                4.65              4.99 (7.3)             4.94 (6.2)
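The generalized reliability indices reported in Tables 4 and 8 are related to the failure probability by β = −Φ^-1(Pf). A small stdlib-only sketch of the conversion (the inverse CDF is obtained here by bisection, purely for illustration):

```python
from math import erf, sqrt

def phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def beta_from_pf(pf, lo=-10.0, hi=10.0):
    # beta = -Phi^{-1}(Pf), inverted by bisection on the monotone CDF
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phi(mid) < pf:
            lo = mid
        else:
            hi = mid
    return -0.5 * (lo + hi)

pf = phi(-4.33)          # failure probability at beta = 4.33 (cf. Table 8)
print(beta_from_pf(pf))  # recovers beta ~ 4.33
```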
which are produced by the algorithm can reach any prescribed accuracy, illustrating the heuristic convergence of the proposed scheme. Then the method is tested on two finite element structural models involving 10 and 21 random variables, respectively. These examples reveal the sparsity of the model responses under consideration, hence the significant computational gain provided by the proposed scheme compared to full PC expansions.

Good results are also expected in global sensitivity analysis (GSA), which is aimed at quantifying the impact of each input parameter on the variance of the model response. Indeed, the iterative procedure outlined herein helps improve the PC-based GSA method detailed in [11], as shown in [38].

The sparse PC decomposition has been applied to scalar-valued models M. The extension to vector-valued models is straightforward, since the algorithm may be applied componentwise. Note that the computational cost does not grow much with the size Q of the response vector, since in industrial applications the cost is mainly related to the evaluations of M on the experimental design X.

Moreover, once an optimal sparse PC basis has been determined for a given component of the response vector (e.g. a component of the displacement vector in stochastic finite element analysis), this basis is usually almost optimal for the other components of the response vector. Investigations are currently in progress in order to optimize this strategy.

Applications dealing with elliptic stochastic partial differential equations (e.g. mechanical problems involving random fields) may be solved using the proposed approach. The input random fields are first discretized using e.g. the Karhunen–Loève expansion. Then the iterative procedure allows one to solve the problem.

Appendix A. Post-processing of the generalized polynomial chaos approximation

The coefficients of the PC expansion contain the complete probabilistic information on the random response Y. According to the type of problem under consideration, these coefficients may be post-processed in order to plot the probability density function of Y, compute its statistical moments or the probability of exceedance of a threshold (reliability analysis).

A.1. Moment analysis

Otherwise a quadrature scheme can be used. Similarly, the kurtosis coefficient of the response is given by:

κ_Y = (1/σ_Y^4) E[(Y − µ_Y)^4]   (A.5)

and is approximated as follows:

κ_{Y,p} = (1/σ_{Y,p}^4) Σ_{0<|α|,|β|,|γ|,|δ|≤p} a_α a_β a_γ a_δ E[ψ_α(X) ψ_β(X) ψ_γ(X) ψ_δ(X)].   (A.6)

Note that Eqs. (A.4)–(A.6) may lead to computationally expensive calculations though if the number of terms P is high. As an alternative, it is possible to recast the quantities δ_{Y,p} and κ_{Y,p} by substituting the model M by its PC approximation M_p into Eqs. (A.3), (A.5) as follows:

δ_{Y,p} = (1/σ_Y^3) E[(M_p(X) − µ_{Y,p})^3]   (A.7)

κ_{Y,p} = (1/σ_Y^4) E[(M_p(X) − µ_{Y,p})^4].   (A.8)

These quantities may be evaluated by a sparse quadrature scheme.

A.2. Global sensitivity analysis

Variance-based methods for sensitivity analysis aim at quantifying the relative importance of each model input parameter Xi on the variance of the response Y. Such an impact may be assessed by deriving the following global sensitivity indices:

S_i = Var[E[Y | X_i]] / Var[Y].   (A.9)

Sensitivity indices can also be derived to evaluate the combined effect of various input parameters, see e.g. [35] for details.

Of particular interest is the so-called Sobol' method [35] for estimating the sensitivity indices (A.9). It relies upon the orthogonal decomposition of the response into summands of increasing dimension. It was shown in [11] that the Sobol' sensitivity indices can be computed analytically from the coefficients of the PC expansion of the model response.
(resp. to fail) if g(x, x′) > 0 (resp. g(x, x′) ≤ 0). The associated probability of failure reads:

P_f = ∫_{D_X,X′} 1_{g(x,x′)≤0}(x, x′) f_X,X′(x, x′) dx dx′.   (A.11)

Upon substituting the model response M(X) for its PC representation M̂(X) into g(X, X′), one gets the following analytical approximation:

g(X, X′) ≃ ĝ(X, X′) = g(M̂(X), X′).   (A.12)

Thus the probability of failure (A.11) may be inexpensively estimated by applying the classical reliability methods (e.g. crude Monte Carlo, FORM and importance sampling [39]) to the response surface (A.12).

Appendix B. Algorithm for building up a sparse polynomial chaos expansion using a sequential experimental design

Require: Target accuracy Q^2_tgt, cut-off values ε1 and ε2, maximum degree pmax, maximum interaction order jmax and experimental design X.
Ensure: Truncation set A^p, error estimates R^2_0 and Q^2_0.
1: Perform all the model evaluations associated with X.
2: restartAnalysis ← true
3: while (restartAnalysis = true) do
4:   restartAnalysis ← false
5:   p ← 0 , A^p ← {0} , A ← A^p
6:   [R^2_0, Q^2_0] ← Regression(A, X)
7:   while (Q^2_0 ≤ Q^2_tgt) AND (p ≤ pmax) AND (restartAnalysis = false) do
8:     p ← p + 1 , j ← 0
9:     while (Q^2_0 ≤ Q^2_tgt) AND (j ≤ jmax) AND (restartAnalysis = false) do
10:      j ← j + 1
11:      J^{j,p} ← ∅
12:      C^{j,p} ← {α ∈ N^M : p_α = p , j_α = j}
         {Forward step: add significant α's one-by-one}
13:      for α ∈ C^{j,p} do
14:        A ← A^p ∪ {α}
15:        [R^2, Q^2] ← Regression(A, X)
16:        ∆R^2 ← R^2 − R^2_0
17:        if ∆R^2 ≥ ε1 then J^{j,p} ← J^{j,p} ∪ {α}
18:      end for
19:      Sort J^{j,p} according to ∆R^2 → J^{j,p}_*
20:      R^{j,p} ← ∅
21:      for α ∈ J^{j,p}_* do
22:        if the information matrix associated with A^p ∪ {α} is ill-conditioned then
23:          restartAnalysis ← true
24:          X ← EnrichED(X)
25:          BREAK
26:        else
27:          R^{j,p} ← R^{j,p} ∪ {α}
28:        end if
29:      end for
30:      A^{p,+} ← A^p ∪ R^{j,p}
31:    end while
32:    if (restartAnalysis = false) then
33:      [R^2_0, Q^2_0] ← Regression(A^{p,+}, X)
34:      D^p ← ∅
         {Backward step: remove insignificant α's one-by-one}
35:      for α ∈ A^p do
36:        A ← A^{p,+} \ {α}
37:        [R^2, Q^2] ← Regression(A, X)
38:        ∆R^2 ← R^2_0 − R^2
39:        if ∆R^2 ≤ ε2 then D^p ← D^p ∪ {α}
40:      end for
41:      A^{p+1} ← A^{p,+} \ D^p
42:      [R^2_0, Q^2_0] ← Regression(A^{p+1}, X)
43:    end if
44:  end while
45: end while

References

[1] Ghanem R, Spanos P. Stochastic finite elements: A spectral approach. Courier Dover Publications; 2003.
[2] Soize C, Ghanem R. Physical systems with random uncertainties: Chaos representations with arbitrary probability measure. SIAM J Sci Comput 2004;26(2):395–410.
[3] Le Maître OP, Knio OM, Najm HN, Ghanem RG. Uncertainty propagation using Wiener–Haar expansions. J Comput Phys 2004;197:28–57.
[4] Matthies HG, Keese A. Galerkin methods for linear and nonlinear elliptic stochastic partial differential equations. Comput Methods Appl Mech Eng 2005;194:1295–331.
[5] Nouy A. A generalized spectral decomposition technique to solve stochastic partial differential equations. Comput Methods Appl Mech Eng 2007;196(45–48):4521–37.
[6] Ghiocel DM, Ghanem RG. Stochastic finite element analysis of seismic soil–structure interaction. J Eng Mech 2002;128:66–77.
[7] Le Maître OP, Reagan M, Najm HN, Ghanem RG, Knio OM. A stochastic projection method for fluid flow — II. Random process. J Comput Phys 2002;181:9–44.
[8] Choi SK, Grandhi RV, Canfield RA, Pettit CL. Polynomial chaos expansion with Latin Hypercube sampling for estimating response variability. AIAA J 2004;45:1191–8.
[9] Berveiller M, Sudret B, Lemaire M. Non linear non intrusive stochastic finite element method — Application to a fracture mechanics problem. In: Augusti G, Schuëller GI, Ciampoli M, editors. Proc. 9th int. conf. struct. safety and reliability (ICOSSAR'2005), Roma, Italy. Rotterdam: Millpress; 2005.
[10] Berveiller M, Sudret B, Lemaire M. Stochastic finite elements: A non intrusive approach by regression. Eur J Comput Mech 2006;15(1–3):81–92.
[11] Sudret B. Global sensitivity analysis using polynomial chaos expansions. Reliab Eng Sys Safety 2008;93:964–79.
[12] Xiu D, Hesthaven JS. High-order collocation methods for differential equations with random inputs. SIAM J Sci Comput 2005;27(3):1118–39.
[13] Babuška I, Nobile F, Tempone R. A stochastic collocation method for elliptic partial differential equations with random input data. SIAM J Num Anal 2007;45(3):1005–34.
[14] Gerstner T, Griebel M. Dimension-adaptive tensor-product quadrature. Computing 2003;71:65–87.
[15] Ganapathysubramanian B, Zabaras N. Sparse grid collocation schemes for stochastic natural convection problems. J Comput Phys 2007;225:652–85.
[16] Ghanem R, Saad G, Doostan A. Efficient solution of stochastic systems: Application to the embankment dam problem. Struct Saf 2006;29:238–51.
[17] Sachdeva SK, Nair PB, Keane AJ. Hybridization of stochastic reduced basis methods with polynomial chaos expansions. Prob Eng Mech 2006;21(2):182–92.
[18] Wang GG. Adaptive response surface method using inherited Latin hypercube design points. J Mech Des 2003;125:210–20.
[19] Stone M. Cross-validatory choice and assessment of statistical predictions. J Roy Statist Soc Ser B 1974;36:111–47.
[20] Xiu D, Karniadakis GE. The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J Sci Comput 2002;24(2):619–44.
[21] Sudret B. Uncertainty propagation and sensitivity analysis in mechanical models — Contributions to structural reliability and stochastic spectral methods. Habilitation à diriger des recherches, Université Blaise Pascal, Clermont-Ferrand, France; 2007.
[22] Nelsen RB. An introduction to copulas. Lecture notes in statistics, vol. 139. Springer; 1999.
[23] Lebrun R, Dutfoy A. A generalization of the Nataf transformation to distributions with elliptical copula. Prob Eng Mech 2009;24(2):172–8.
[24] Lebrun R, Dutfoy A. An innovating analysis of the Nataf transformation from the copula viewpoint. Prob Eng Mech 2009;24(3):312–20.
[25] Sklar A. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris 1959;8:229–31.
[26] Wan X, Karniadakis G. Beyond Wiener–Askey expansions: Handling arbitrary PDFs. J Sci Comput 2006;27:455–64.
[27] Nataf A. Détermination des distributions dont les marges sont données. C R Acad Sci Paris 1962;225:42–3.
[28] Blatman G, Sudret B, Berveiller M. Quasi-random numbers in stochastic finite element analysis. Méc Ind 2007;8:289–97.
[29] Smolyak SA. Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Math Dokl 1963;4:240–3.
[30] Field RV, Grigoriu M. On the accuracy of the polynomial chaos approximation. Prob Eng Mech 2004;19:68–80.
[31] Vapnik VN. The nature of statistical learning theory. New York: Springer-Verlag; 1995.
[32] Molinaro AM, Simon R, Pfeiffer RM. Prediction error estimation: A comparison of resampling methods. Bioinformatics 2005;21:3301–7.
[33] Saporta G. Probabilités, analyse des données et statistique. 2nd ed. Editions Technip; 2006.
[34] Blatman G, Sudret B. Sparse polynomial chaos expansions and adaptive stochastic finite elements using a regression approach. C R Méc 2008;336(6):518–23.
[35] Saltelli A, Chan K, Scott EM, editors. Sensitivity analysis. J. Wiley & Sons; 2000.
[36] Liu P-L, Der Kiureghian A. Optimization algorithms for structural reliability. Struct Saf 1991;9:161–77.
[37] Wei D, Rahman S. Structural reliability analysis by univariate decomposition and numerical integration. Prob Eng Mech 2007;22:27–38.
[38] Blatman G, Sudret B. Efficient computation of Sobol' sensitivity indices using sparse polynomial chaos expansions. Reliab Eng Sys Safety 2008 (submitted for publication).
[39] Ditlevsen O, Madsen HO. Structural reliability methods. Chichester: J. Wiley and Sons; 1996.