05 Spectral Power Iterations For The Random Eigenvalue Problem
Nomenclature
A = generic matrix of interest
c_ijk = expected value of the triple product Ψ_i Ψ_j Ψ_k
D^λ_KL = Kullback–Leibler divergence associated with the random eigenvalue solution
D^ϕ_KL = Kullback–Leibler divergence associated with the random eigenvector solution
d = number of uncertainty sources
E(x, ω) = random process representing the random elastic modulus
F = σ-algebra
f^PC_{θ_i}(θ_i) = probability density function of θ^PC_i(ξ)
f^ex_{θ_i}(θ_i) = probability density function of θ^ex_i(ξ)
f^PC_λ(λ) = probability density function of λ^PC(ξ)
f^ex_λ(λ) = probability density function of λ^ex(ξ)
L = length of the beam
Mat^PSD_{n×n}(ℝ) = the set of positive semidefinite real-valued square matrices of size n
m = prescribed number of physical modes of interest
N = number of Monte Carlo samples
n = size of matrix A
P = order of polynomial chaos expansion for solution eigenpairs
𝒫 = probability measure
P0 = order of polynomial chaos expansion for matrix A
Q = number of quadrature points
{w_q} = set of quadrature weights
ε_tol = convergence threshold
ε_λ = error measure defined for random eigenvalues
ε_ϕ = error measure defined for random eigenvectors
ε_quad = integration error in using quadrature rule
θ^ex_i(ξ) = cosine angles of random eigenvector ϕ^ex(ξ) with respect to the ith coordinate axis
θ^PC_i(ξ) = cosine angles of random eigenvector ϕ^PC(ξ) with respect to the ith coordinate axis
λ^s = eigenvalue corresponding to the sth physical mode
λ^ex(ξ) = exact random eigenvalue
λ^MC(ξ) = random eigenvalue obtained by Monte Carlo simulations
λ^PC(ξ) = polynomial chaos form of the random eigenvalue obtained by the proposed algorithms
μ = probability measure associated with ξ
ξ = vector of uncertainty sources
{ξ^q} = set of quadrature points in the stochastic domain
{ξ^r} = set of sampled points in the stochastic domain
ϕ^s = eigenvector corresponding to the sth physical mode
ϕ^ex(ξ) = exact random eigenvector
ϕ^MC(ξ) = random eigenvector obtained by Monte Carlo simulations
ϕ^PC(ξ) = polynomial chaos form of the random eigenvector obtained by the proposed algorithms
{Ψ_i} = set of orthonormal polynomial basis functions
Ω = sample space

I. Introduction

The robust predictive models for natural and engineered systems should incorporate the significant variabilities in the behavior of these systems induced by the inherent variability of system components, the inadequacy of the input–output models, and the inaccuracy of the numerical implementations (finite-dimensional approximation). Toward this end, efficient uncertainty quantification techniques have been developed that represent the uncertainty in the input, propagate it through the numerical models, and finally produce a probabilistic description for the system outputs and quantities of interest (QOIs). In addition to the classical sampling techniques, such as Monte Carlo (MC) simulation and its improved derivatives, such as Markov chain Monte Carlo techniques, there has recently been growing interest in the nonsampling spectral technique named polynomial chaos expansion (PCE) [1–5]. In this stochastic projection technique, the governing equation is projected onto truncated polynomial chaos (PC) coordinates, resulting in a finite-dimensional approximation of the sample space, which is amenable to numerical calculation.
Our objective in this work is to characterize the eigenspace of linear systems with parametric uncertainties. Such characterization can be used as the basis of various model reduction techniques and thus significantly reduce the computational cost of the analysis of large uncertain systems. Our particular interest is in random matrices that arise from the finite-dimensional approximation (discretization) of continuous operators, such as partial differential operators, involving parametric uncertainty, as well as in those corresponding to a discrete random system, such as multidegree-of-freedom oscillators. Generally, there is no closed-form expression for these random matrices.

Received 30 January 2012; revision received 24 December 2012; accepted for publication 2 February 2013; published online 28 February 2014. Copyright © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved. Copies of this paper may be made for personal or internal use, on condition that the copier pay the $10.00 per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923; include the code 1533-385X/14 and $10.00 in correspondence with the CCC.
*Postdoctoral Scholar — Research Associate, Department of Aerospace and Mechanical Engineering, 3620 South Vermont Avenue, KAP 210; [email protected].
†Professor, Department of Aerospace and Mechanical Engineering, 3620 South Vermont Avenue, KAP 210; [email protected]. Member AIAA.

MEIDANI AND GHANEM

Among the first solution approaches for the eigenvalue analysis of such matrices, the perturbation technique was proposed in [6,7], in
which the solution eigenpair was calculated based on a few low-order perturbations. This approach gives inaccurate results in cases in which higher-order perturbations are needed or when the perturbations are not small enough, disqualifying the Taylor series approximation. Statistical sampling is another approach, first proposed in [8]. It is computationally intensive, however, especially in cases in which a prohibitively large number of samples of independent random variables is required. Researchers have introduced approaches to enhance the accuracy of perturbation-based methods [9] and also methods to improve the computational efficiency of simulation-based algorithms [10,11]. A dimensional decomposition approach was also proposed in [12] for the probabilistic characterization of the random eigenvalues using a lower-variate approximation of these random variables, which proves superior to the perturbation techniques. Another approach for the approximation of random eigenvalues was reported in [13], in which the first and second statistical moments of random eigenpairs were analytically calculated based on the moments of the random matrix.

Downloaded by UNIVERSITY OF CALIFORNIA - DAVIS on May 10, 2014 | http://arc.aiaa.org | DOI: 10.2514/1.J051849

Recently, polynomial chaos expansion techniques have been applied to the solution of the random eigenvalue problem [14–16], in which, given a polynomial chaos representation for the random matrix, the solution eigenvalues and eigenvectors are assumed to have a similar spectral representation in the same vector space. The weak form of the eigenvalue problem is then obtained by Galerkin projection of the eigenvalue equation on the PC orthogonal basis, spanning the Hilbert space of input random variables, leading to a system of nonlinear algebraic equations. The PC coefficients of the solution eigenpairs are then obtained by the numerical solution of these systems. The resulting PC solution lends itself readily to accurate uncertainty representation of QOIs and sensitivity indices [17] and is greatly advantageous compared to other proposed methods.

In our approach, given a PC representation for the random matrix, instead of solving the system of algebraic equations for the PC coefficients, we use the idea of a power iteration scheme in order to calculate the PC coefficients of dominant random eigenpairs. Iterative methods, among the numerical methods for the solution of eigenvalue problems, have been widely used and extended due to their computational advantage. Specifically, the power method [18], and its derivatives such as inverse iteration [19] and subspace iteration [20], has been the basis for further developments [21]. The first iterative solution algorithm for the PC representation of the eigenpairs was introduced in [22], in which the authors extended a classical iterative method relying on simple matrix operations. Specifically, they recast the inverse power method [19] as an iterative algorithm on the PC coefficients of random vectors in order to solve for the random eigenvector with a random eigenvalue whose average is closest to a prescribed value. In the present paper, we propose an algorithm that solves for multiple dominant random eigenpairs, with a different definition for the norm of random vectors compared to that used in [22], and thus arrive at an optimal approximation for a lower-dimensional invariant subspace in which the random operator can be represented.

This paper is structured as follows. In Sec. II, we describe the mathematical setting and provide the theoretical background for the proposed spectral approximation. Section III includes the proposed solution algorithms for the random eigenvalues and eigenvectors of a random matrix. In Sec. IV, the performance of these algorithms is evaluated in two numerical examples.

II. Problem Definition

Consider an experimental setting defined by the probability triplet (Ω, F, 𝒫). Let ξ: Ω ↦ ℝ^d be an ℝ^d-valued random variable, and let F_ξ ⊂ F be the σ-algebra induced by ξ. Further, denote the probability density of ξ by dμ. We consider ξ to refer to the source of uncertainty in an underlying mathematical problem, such as random coefficients or boundary conditions in a partial differential equation describing the physics of the problem. We will further assume that these random variables are statistically independent, with the implication that dμ = Π_{i=1}^{d} dμ_i, where μ_i is the probability measure of the real-valued random variable ξ_i. Let A: Ω ↦ Mat^PSD_{n×n}(ℝ) be an (F_ξ, μ)-measurable matrix-valued random variable with values in Mat^PSD_{n×n}(ℝ), that is, the set of positive semidefinite real-valued square matrices of size n. We seek the solution of the following stochastic eigenvalue problem:

A(ξ) ϕ^s(ξ) = λ^s(ξ) ϕ^s(ξ),  s = 1, …, n,  μ-almost surely (a.s.)   (1)

where the superscript s indexes the invariant subspace associated with the pair (λ^s, ϕ^s). Suppose there exists a functional representation for the random matrix in the following form:

A(ξ) = Σ_{i=0}^{P0} A_i Ψ_i(ξ)   (2)

where A_i ∈ Mat_{n×n}(ℝ) is deterministic, and {Ψ_i} is an orthonormal basis in Θ ≝ L²(Ω, F_ξ, μ).

The analytical form in Eq. (2) could be synthesized from a Karhunen–Loeve expansion of an underlying stochastic process, in which case the random variables ξ would correspond to the Karhunen–Loeve random variables, the deterministic coefficient A_i would depend on the ith eigenvector and eigenvalue of the covariance matrix, and the Ψ_i(ξ) would be a set of dμ-orthonormal random variables [23]. Equation (2) could also be obtained directly from a PCE of the underlying stochastic process, with coefficients estimated from available information [24,25]. A complete sequence of orthonormal basis functions can be found for the random matrix A(ω) if each component of the matrix has finite variance and its underlying probability measure is uniquely determined by its statistical moments; i.e., the moment problem is uniquely solvable for each random component [26,27]. In the present work, we assume that such a chaos representation for the random matrix A(ω) is available to the analyst.

Stochastic fluctuations in the entries of matrix A will induce corresponding fluctuations in its eigenvalues and eigenvectors. Thus, λ^s: Ω ↦ ℝ and ϕ^s: Ω ↦ ℝ^n are F_ξ-measurable random variables. With mild conditions on matrix A, both λ^s and ϕ^s can be assumed to be square integrable and are thus in Θ. As a basis in Θ, we choose polynomials orthonormal with respect to the measure dμ. The approximate representation for the eigenpairs is then given by

λ^s(ξ) ≈ Σ_{i=0}^{P} λ^s_i Ψ_i(ξ),  s = 1, …, n

ϕ^s(ξ) ≈ Σ_{i=0}^{P} ϕ^s_i Ψ_i(ξ),  s = 1, …, n   (3)

where λ^s_i and ϕ^s_i are the chaos coefficients given by

λ^s_i = ⟨λ^s, Ψ_i⟩,  ϕ^s_i = ⟨ϕ^s, Ψ_i⟩,  s = 1, …, n   (4)

with the following notational convention that will be used throughout the paper:

⟨f, Ψ_i⟩ ≝ ∫_{ℝ^d} f(ξ) Ψ_i(ξ) dμ(ξ)   (5)

for any integrable function f defined on ℝ^d. We use the notation ⟨·, ·⟩_{ℝ^n}, instead, for the inner product of two vectors in ℝ^n in order to distinguish it from the inner product in the Hilbert space of random variables.

Therefore, the Galerkin projection on the stochastic coordinates is formulated as
⟨ Σ_{i=0}^{P0} A_i Ψ_i Σ_{j=0}^{P} ϕ_j Ψ_j − Σ_{i=0}^{P} λ_i Ψ_i Σ_{j=0}^{P} ϕ_j Ψ_j , Ψ_k ⟩ = 0,  k = 0, …, P   (6)

where, for the sake of notational simplicity, we have dropped the superscripts denoting the eigenmode. This weak formulation leads to a nonlinear system of equations of the following form:

Σ_{i=0}^{P0} Σ_{j=0}^{P} A_i ϕ_j c_ijk = Σ_{i=0}^{P} Σ_{j=0}^{P} λ_i ϕ_j c_ijk,  k = 0, …, P   (7)

where c_ijk = ⟨Ψ_i Ψ_j Ψ_k⟩ and ⟨·⟩ is the expectation operator. The random eigensolution is obtained by solving Eq. (7) for the deterministic coefficients ϕ_i and λ_i. In [14,15], two numerical schemes are proposed for solving this set of equations.

Algorithm 1 Deterministic power iteration method
Step 1. Choose an initial normal vector u⁰; set k = 0
while δ > ε_tol do
    Step 2. y^k = A u^k
    Step 3. u^{k+1} = y^k / ‖y^k‖
    δ = ‖u^{k+1} − u^k‖;  k = k + 1
end while
Step 4. λ = (u^{k−1})ᵀ A u^{k−1}

Algorithm 1 outlines the classical deterministic power iteration; in the proposed approach, we modify each step in the algorithm to make it applicable for the stochastic case. For initialization and mat–vec multiplication, we start with a PC representation for an initial (normal) random vector:

u⁰(ξ) = Σ_{i=0}^{P} u⁰_i Ψ_i(ξ)   (8)

We apply the random matrix A(ξ) to the left and compute the product

y⁰(ξ) = A(ξ) u⁰(ξ)   (9)

using Galerkin projection; that is, we replace the left-hand side with a PCE representation for y(ξ),

Σ_{i=0}^{P} y⁰_i Ψ_i(ξ) = A(ξ) u⁰(ξ)   (10)

and then calculate its deterministic chaos coefficients as

y⁰_k = Σ_{i=0}^{P0} Σ_{j=0}^{P} A_i u⁰_j c_ijk   (11)

where c_ijk = ⟨Ψ_i Ψ_j Ψ_k⟩.

1. Normalization

We seek to characterize the eigenpairs (λ^s(ω), ϕ^s(ω)) such that the eigenvectors are normalized almost surely; i.e.,

‖ϕ^s(ω)‖_{ℝ^n} = 1,  a.s.,  s = 1, …, n   (12)

This normalization is enforced pointwise at a set of quadrature points and projected back onto the chaos basis,

u¹_i ≈ Σ_{q=1}^{Q} [ y⁰(ξ^q) / ‖y⁰(ξ^q)‖_{ℝ^n} ] Ψ_i(ξ^q) w_q

where {ξ¹, …, ξ^Q} and {w_1, …, w_Q} are quadrature points and associated weights, respectively. Using the quadrature integration, we will incur a small error as follows:

u¹_i = ⟨u¹, Ψ_i⟩ + ε^quad_i   (16)

where ε^quad_i denotes the integration error, which decreases as Q increases.

2. Convergence Criterion

The algorithm stops when the following inequality in terms of the PCE coefficients of the vector u^k(ξ) is satisfied:

Σ_{i=0}^{P} ‖u^k_i − u^{k−1}_i‖_{ℝ^n} ≤ ε_tol   (17)

3. Eigenvalue Calculation

Once the PCE of the dominant eigenvector is calculated, Eq. (7) can be solved for the PC coefficients of the corresponding random eigenvalue. Given the values of the chaos coefficients of the random eigenvector, it would be less computationally intensive to solve for the chaos coefficients of the random eigenvalue, as the number of unknowns decreases from (n + 1) × (P + 1) to P + 1. Alternatively, the Rayleigh quotient can be used to calculate the chaos coefficients of the eigenvalues in a Galerkin projection sense.
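As an illustration of the stochastic power iteration described above, the following sketch iterates on the chaos coefficients of a small hypothetical 2 × 2 matrix with a single Gaussian uncertainty source (the matrix entries, PC order, and quadrature level are assumptions for illustration, not values from the paper). The mat-vec product and the normalization are both carried out at Gauss–Hermite quadrature points, as in the quadrature-based variant discussed in Sec. III, and the eigenvalue is recovered by a pointwise Rayleigh quotient projected back onto the basis:

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermegauss, hermeval

# one-dimensional orthonormal Hermite chaos setup (illustrative values)
P, Q = 4, 20                        # PC order of the solution, quadrature level
xi, w = hermegauss(Q)
w = w / w.sum()                     # weights of the standard normal measure

def psi(i, x):
    """Orthonormal probabilists' Hermite polynomial He_i / sqrt(i!)."""
    c = np.zeros(i + 1); c[i] = 1.0
    return hermeval(x, c) / sqrt(factorial(i))

Psi = np.array([[psi(i, x) for x in xi] for i in range(P + 1)])  # (P+1, Q)

# hypothetical random matrix A(xi) = A0 + A1 * Psi_1(xi), i.e., P0 = 1
A0 = np.array([[4.0, 1.0], [1.0, 2.0]])
A1 = np.array([[0.3, 0.0], [0.0, 0.1]])
A_at = lambda x: A0 + A1 * psi(1, x)
n = 2

# step 1: initialize with the dominant eigenvector of the mean matrix
u = np.zeros((P + 1, n))
u[0] = np.linalg.eigh(A0)[1][:, -1]

for _ in range(200):
    # step 2: mat-vec product evaluated at the quadrature points
    y_pts = np.array([A_at(x) @ (Psi[:, q] @ u) for q, x in enumerate(xi)])
    # step 3: pointwise normalization, then projection onto the chaos basis
    y_pts /= np.linalg.norm(y_pts, axis=1, keepdims=True)
    u_new = (Psi * w) @ y_pts             # u_i = sum_q y(xi_q) Psi_i(xi_q) w_q
    delta = np.sum(np.linalg.norm(u_new - u, axis=1))   # criterion of Eq. (17)
    u = u_new
    if delta < 1e-12:
        break

# step 4: Rayleigh quotient at the quadrature points, projected to coefficients
u_pts = Psi.T @ u                         # (Q, n)
A_pts = np.array([A_at(x) for x in xi])   # (Q, n, n)
lam_pts = (np.einsum('qi,qij,qj->q', u_pts, A_pts, u_pts)
           / np.einsum('qi,qi->q', u_pts, u_pts))
lam = (Psi * w) @ lam_pts                 # chaos coefficients of lambda^1
```

The converged `lam` holds the chaos coefficients of the dominant random eigenvalue; comparing its pointwise values with a direct eigendecomposition of A(ξ^q) gives a quick sanity check of the scheme.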
Alternatively, the iterations can be performed at quadrature points, and the chaos coefficients of the random eigenvectors are calculated based on the converged vectors at all the quadrature points. Such a construction will mitigate the need for the spectral mat–vec operation of step 2. Identical results were obtained for the two numerical examples discussed in this paper. It cannot be easily concluded, however, that such a reconstruction will result in a computational speed gain. Specifically, the computational cost of a fully quadrature-based algorithm is prohibitively high, especially for a high-dimensional PCE.

B. Analysis of Stochastic Power Iteration

To prove convergence for the stochastic power iteration algorithm, we will make use of the following two lemmas. The proofs of these lemmas are included in the Appendix.

Lemma 1: The PC representation of the exact dominant eigenvector is a fixed point of the iterations defined by Algorithm 2.

Lemma 2: Starting with an initial vector composed of the first two dominant eigenvectors, i.e., u⁰(ξ) = α₁ϕ¹(ξ) + α₂ϕ²(ξ) = α₁ Σ_i ϕ¹_i Ψ_i(ξ) + α₂ Σ_i ϕ²_i Ψ_i(ξ), the solution of the algorithm eventually converges, in direction, to the PCE of the dominant eigenvector.

Theorem: Given the PC representation for the random matrix A(ξ), one of the following two statements is true for the solution obtained by Algorithm 2:

lim_{k→∞} Σ_{i=0}^{P} ‖u^k_i − ϕ¹_i‖_{ℝ^n} = 0  as P, Q → ∞

lim_{k→∞} Σ_{i=0}^{P} ‖u^k_i + ϕ¹_i‖_{ℝ^n} = 0  as P, Q → ∞   (18)

where {ϕ¹_i} are the chaos coefficients of the true PCE of the dominant eigenvector (ϕ¹(ξ) = Σ ϕ¹_i Ψ_i(ξ)), and {u^k_i} are the chaos coefficients in a Pth-order expansion, obtained using a quadrature-based normalization at Q quadrature points.

Proof: In the light of Lemmas 1 and 2, the proof easily follows. □

First, we initialize the set of m random vectors by setting their chaos coefficients. The initial set of m random vectors resulting from these coefficients need not be orthogonal. For example, we can choose the eigenvectors of the mean matrix as the initial guess. In step 2, we solve the weak form of the product similar to that in step 2 of the power iteration.

D. QR Factorization

To decompose the m random vectors, we have developed a stochastic variation of the Gram–Schmidt decomposition. We use a quadrature-based decomposition in which, given a set of m random vectors, a new set of m random vectors is generated that are orthogonal at every point in an acceptably large subset of the stochastic domain.

Let us assume that we have a set of m random vectors denoted by {¹y(ξ), …, ᵐy(ξ)}, where ⁱy(ξ): ℝ^d → ℝ^n, and for each random vector, a PCE is available. The objective is to compute the PCE of a new set of m "orthogonal" random vectors {¹q(ξ), …, ᵐq(ξ)}, where ᵏq(ξ): ℝ^d → ℝ^n. Ideally, these new orthogonal random vectors can be obtained by satisfying the Gram–Schmidt equalities almost surely, as follows:

ᵏq(ξ) = ᵏy(ξ) − Σ_{j=1}^{k−1} [ ⟨ᵏy(ξ), ʲq(ξ)⟩_{ℝ^n} / ⟨ʲq(ξ), ʲq(ξ)⟩_{ℝ^n} ] ʲq(ξ),  k = 1, …, m,  a.s.   (19)

Let the random vector ʲᵏχ(ξ) denote the term under the summation:

ʲᵏχ(ξ) = [ ⟨ᵏy(ξ), ʲq(ξ)⟩_{ℝ^n} / ⟨ʲq(ξ), ʲq(ξ)⟩_{ℝ^n} ] ʲq(ξ)  a.s.   (20)

Eq. (19) can then be rewritten as

ᵏq(ξ) = ᵏy(ξ) − Σ_{j=1}^{k−1} ʲᵏχ(ξ),  k = 1, …, m,  a.s.   (21)
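A minimal sketch of the quadrature-based orthogonalization in Eqs. (19–21): the m random vectors are evaluated at the quadrature nodes, orthogonalized pointwise (a QR factorization is used here in place of the explicit Gram–Schmidt sums), and projected back onto the chaos basis. All dimensions and the random test vectors are hypothetical:

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermegauss, hermeval

P, Q, n, m = 3, 15, 4, 2          # illustrative: PC order, nodes, R^n, m vectors
xi, w = hermegauss(Q)
w = w / w.sum()

def psi(i, x):
    c = np.zeros(i + 1); c[i] = 1.0
    return hermeval(x, c) / sqrt(factorial(i))

Psi = np.array([[psi(i, x) for x in xi] for i in range(P + 1)])   # (P+1, Q)

rng = np.random.default_rng(0)
Y = rng.normal(size=(m, P + 1, n))       # chaos coefficients of m random vectors

# evaluate every random vector at the quadrature points: (m, Q, n)
Y_pts = np.einsum('pq,mpn->mqn', Psi, Y)

# orthogonalize pointwise at each node (Eq. (19) enforced at the {xi_q})
Q_pts = np.zeros_like(Y_pts)
for q in range(Q):
    mat = Y_pts[:, q, :].T               # (n, m)
    qmat, _ = np.linalg.qr(mat)
    signs = np.sign(np.einsum('nm,nm->m', qmat, mat))  # keep orientation
    Q_pts[:, q, :] = (qmat * signs).T

# project the orthogonalized realizations back onto the chaos basis
Q_coeff = np.einsum('pq,q,mqn->mpn', Psi, w, Q_pts)
```

Note that the reconstructed PC vectors are orthogonal exactly at the quadrature nodes; between nodes the orthogonality holds only approximately, consistent with the "acceptably large subset" qualification above.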
Consider the eigenvalue problem

Aϕ = λϕ   (24)

and the following perturbed eigenvalue problem

(A + E)ϕ̃ = λ̃ϕ̃   (25)

|λ − λ̃| ≤ ‖E‖₂   (26)

sin θ(ϕ, ϕ̃) ≤ ‖E‖₂ / δ   (27)

where δ is the gap between λ̃ and any other eigenvalues. Therefore, if the eigenvalues are closely clustered, the error in the approximated eigenvectors may be significant. However, based on the (nonsmall) gap that separates the cluster of eigenvalues from all other eigenvalues, there will be a tight bound on how much the subspace spanned by the eigenvectors of the perturbed system differs from that of the exact system [28,29]. The stochastic subspace iteration, therefore, mitigates the problem of closely clustered eigenvalues by simultaneously considering all the associated eigenvectors.

IV. Numerical Examples

In this section, we investigate the performance of the proposed algorithms. First, a low-dimensional system is chosen so that the eigenpairs can be derived analytically. Specifically, a two-degree-of-freedom (DOF) mass-spring oscillator is chosen in which the stiffness of one of the springs is random. As a second example, a finite element model of an elastic cantilever beam with random elasticity is considered. Because the random eigenpairs of this system cannot be calculated analytically, the results of the MC simulations are assumed to yield a sufficiently accurate approximation to the exact PCE coefficients and are used to verify our procedure.

A. Measure of Discrepancy

We will consider two measures of discrepancy between representations of the solution to the random eigenvalue problem. The first measure is consistent with the Hilbert space structure in which we have formulated the problem. The second measure involves a comparison of some relevant probability density functions (PDFs).

Fig. 1 Three angles between the vector ϕ and the coordinate axes, i.e., α, β, and γ.

The first measure is defined in terms of the chaos coefficients; for the random eigenvectors,

ε_ϕ = Σ_{i=0}^{P} ‖ϕ^ex_i − ϕ^PC_i‖²_{ℝ^n}   (28)

Clearly, in most cases of interest, λ^ex and ϕ^ex are not available, and they are obtained by suitable approximation of the integrals in the preceding definitions. We will rely on a Monte Carlo–based approximation of these integrals, resulting in approximations to the error measures, where N refers to the number of samples in the Monte Carlo scheme, and {ξ^i} refers to N independent samples of the random variable ξ. Here, ϕ^MC(ξ^i) and λ^MC(ξ^i) refer to the deterministic eigenpairs associated with realization ξ^i.

2. Error in Convergence in Distribution

Our second measure of discrepancy consists in comparing the probability density functions of the random eigenpairs obtained through our proposed approximation scheme and the probability density functions obtained through an extensive Monte Carlo sampling scheme. We introduce this error to show convergence in distribution for the random eigenpairs. Let f^PC_λ(λ) and f^ex_λ(λ) denote the probability density functions of λ^PC(ξ) and λ^ex(ξ), respectively. The solution is said to converge in distribution to the exact random eigenvalue if the following condition holds:

f^PC_λ(λ) − f^ex_λ(λ) = 0   (32)

at every λ ∈ ℝ at which f^ex_λ is continuous. To quantify the error in satisfying this condition, the Kullback–Leibler (KL) divergence is used, which for f^PC_λ and f^ex_λ is given by
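As a sketch of the Monte Carlo–based approximation of such coefficient-space error measures (the paper's exact estimator expressions are not reproduced here; the projection λ^ex_i ≈ (1/N) Σ_r λ^MC(ξ^r) Ψ_i(ξ^r) used below is the standard choice for an orthonormal basis, and all numbers are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
xi = rng.standard_normal(N)

# orthonormal Hermite basis {1, xi, (xi^2 - 1)/sqrt(2)} at the samples
Psi = np.vstack([np.ones(N), xi, (xi**2 - 1) / np.sqrt(2)])     # (3, N)

# fabricated reference ("MC") eigenvalue and a slightly-off PC solution
lam_mc = 3.0 + 0.5 * xi + 0.1 * (xi**2 - 1) / np.sqrt(2)
lam_pc = np.array([3.0, 0.5, 0.09])

# Monte Carlo projection of the reference solution onto the basis:
# lam_ex_i ~ (1/N) sum_r lam_mc(xi_r) Psi_i(xi_r)
lam_ex = (Psi * lam_mc).mean(axis=1)

# coefficient-space error measure in the spirit of Eq. (28)
eps_lambda = np.sum((lam_ex - lam_pc) ** 2)
```

With a large enough N, `lam_ex` recovers the reference coefficients to within sampling error, so `eps_lambda` isolates the discrepancy between the reference and PC solutions.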
D^λ_KL = ∫₀^∞ f^ex_λ(λ) ln [ f^ex_λ(λ) / f^PC_λ(λ) ] dλ   (33)

To investigate the convergence in distribution for random eigenvectors, the distributions of the phase angles between the random vector and the Cartesian coordinate axes are chosen for comparison (Fig. 1). Let {θ^PC_1(ξ), …, θ^PC_n(ξ)} and {θ^ex_1(ξ), …, θ^ex_n(ξ)} denote the cosine angles corresponding to the random eigenvectors ϕ^PC(ξ) and ϕ^MC(ξ), respectively, with their associated probability density functions. The divergences are approximated as

D^λ_KL ≈ (1/N) Σ_{k=1}^{N} f^ex_λ(λ^k) ln [ f^ex_λ(λ^k) / f^PC_λ(λ^k) ]

D^ϕ_KL ≈ (1/n) Σ_{i=1}^{n} (1/N) Σ_{k=1}^{N} f^ex_{θ_i}(θ^k_i) ln [ f^ex_{θ_i}(θ^k_i) / f^PC_{θ_i}(θ^k_i) ]   (35)

where {λ^k} and {θ^k_i} refer to the N test points used in the kernel smoothing density estimators.

Fig. 2 Two-DOF mass-spring system (after [15]).

B. Two-Degree-of-Freedom Mass-Spring Oscillator

A two-DOF oscillator example is considered (Fig. 2), in which the stiffness of the first spring, denoted by k₁, is the only random parameter in the system.
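Returning to the discrepancy measures, a minimal sketch of estimating such KL divergences from two sample sets with Gaussian kernel density estimates (the bandwidth rule, grid, sample sizes, and the stand-in Gaussian samples are all assumptions for illustration):

```python
import numpy as np

def kde(samples, x, h=None):
    """Gaussian kernel density estimate at points x (Silverman bandwidth)."""
    s = np.asarray(samples)
    if h is None:
        h = 1.06 * s.std() * len(s) ** (-0.2)
    z = (x[:, None] - s[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(s) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(2)
lam_ex_samples = rng.normal(10.0, 1.0, 5000)   # stand-in "exact" eigenvalue samples
lam_pc_samples = rng.normal(10.0, 1.1, 5000)   # stand-in "PC" eigenvalue samples

# evaluate both smoothed densities on a grid of test points
pts = np.linspace(6.0, 14.0, 400)
f_ex = kde(lam_ex_samples, pts)
f_pc = kde(lam_pc_samples, pts)

# KL divergence of Eq. (33), approximated by a Riemann sum on the grid
D_kl = np.sum(f_ex * np.log(f_ex / f_pc)) * (pts[1] - pts[0])
```

The estimate is small but strictly positive here because the two sample sets have slightly different spreads; identical densities would give a divergence of zero.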
Fig. 3 Comparison between the exact and approximate PDFs of the phase angles of the first and second eigenvectors with respect to the first and second axes. The insets show good agreement between the tails of the distributions. The errors in the means and standard deviations of the four phase angles are respectively computed as a) 0.5%, 5.3%; b) 0.5%, 5.3%; c) 0.5%, 6.5%; and d) 0.5%, 6.5%.
Fig. 4 Comparison between the functional forms of the exact and approximate phase angles of the first and second eigenvectors relative to the first and second axes.
Fig. 5 Comparison between the exact and approximate PDFs of the first and second eigenvalues of the two-DOF system.
Fig. 6 Comparison between the functional forms of the exact and approximate solutions of the first and second eigenvalues of the two-DOF system.
Larger distances are in the low probability regions. Thus, as seen in Fig. 5, using the same PC order, an acceptable approximation is achieved for the PDFs.
Fig. 7 Discretization of the cantilever beam with 10 elements and random elasticity E(x, ω) and random density ρ(x, ω) in the most general case.
Fig. 8 Comparisons between the distributions of the first random eigenvector, obtained from Monte Carlo and subspace iteration. Shown are the PDFs of
phase angles with respect to a) 1st, b) 5th, c) 6th, d) 11th, e) 14th, and f) 16th coordinate axes.
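The phase angles used in these comparisons follow from the direction cosines of an eigenvector realization; a small helper (hypothetical, for illustration) computes them directly:

```python
import numpy as np

def phase_angles_deg(phi):
    """Angles (in degrees) between vector(s) phi and each coordinate axis."""
    phi = np.atleast_2d(phi)                            # (N, n)
    cosines = phi / np.linalg.norm(phi, axis=1, keepdims=True)
    return np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))

angles = phase_angles_deg(np.array([1.0, 1.0, 0.0]))
# the first two angles are 45 deg and the third is 90 deg
```

Applying this to each Monte Carlo (or PC-sampled) eigenvector realization yields the samples from which the plotted angle PDFs are estimated.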
Given the aforementioned numerical values, the exact eigenvalues are

λ₁(ξ) = 13 + ξ² + √(ξ⁴ − 2ξ² + 17),  λ₂(ξ) = 13 + ξ² − √(ξ⁴ − 2ξ² + 17)

A second-order PC expansion is considered for the random eigenpairs. Here, we report the results obtained by the random subspace algorithm. Figure 3 shows the comparison between the PDFs of the phase angles, and Fig. 5 compares the PDFs of the exact and approximate random eigenvalues, whereas Fig. 6 compares the functional forms of these two. It can be seen that the solutions for the random eigenvalues and eigenvectors are in close agreement with the exact values.

C. Cantilever Beam with Random Elasticity

The finite element model of a one-dimensional cantilever beam with random elasticity E is next considered (Fig. 7). The length of the beam is divided into 10 elements of equal size. Only rotational and transverse degrees of freedom are retained, and axial deformation is neglected. The spatial variation in the elastic modulus of the beam is represented by the random process E(x, ω), which follows a lognormal distribution. Specifically,

E(x, ω) = Ē exp(g(x, ω))   (37)

with the correlation length set equal to the length of the beam, which is three. The mean and coefficient of variation of the lognormal distribution exp(g(x, ω)) are set equal to 1 and 10%, based on which the mean and standard deviation of the Gaussian process are calculated. Consider the following KL expansion for the Gaussian random field:

g(x, ω) = Σ_{i=1}^{d} g_i(x) ξ_i(ω)   (39)

where d is the order of the expansion and {ξ_i} are a set of independent Gaussian random variables. Given this KL expansion, the lognormal process is represented by the PCE

E(x, ω) = Σ_{i=0}^{P} E_i(x) Ψ_i(ξ(ω))   (40)
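At a fixed spatial point, the chaos coefficients in Eq. (40) for a lognormal variable with mean 1 and 10% coefficient of variation can be computed by quadrature projection and checked against the known closed form for one-dimensional lognormal chaos coefficients (a sketch only; the spatial dependence through the KL modes is omitted):

```python
import numpy as np
from math import factorial, sqrt, exp, log
from numpy.polynomial.hermite_e import hermegauss, hermeval

# lognormal with mean 1 and 10% coefficient of variation, as in the example
sigma = sqrt(log(1.0 + 0.1**2))
mu = -sigma**2 / 2.0                 # so that E[exp(mu + sigma*xi)] = 1

xi, w = hermegauss(40)
w = w / w.sum()

def psi(i, x):
    c = np.zeros(i + 1); c[i] = 1.0
    return hermeval(x, c) / sqrt(factorial(i))

# quadrature projection E_i = <exp(mu + sigma*xi), psi_i>
E_samples = np.exp(mu + sigma * xi)
E_coeff = np.array([np.sum(w * E_samples * psi(i, xi)) for i in range(5)])

# closed form for one-dimensional lognormal chaos coefficients
E_exact = np.array([exp(mu + sigma**2 / 2.0) * sigma**i / sqrt(factorial(i))
                    for i in range(5)])
```

The rapid decay of the coefficients with polynomial order is what makes a second-order expansion adequate for a 10% coefficient of variation.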
Fig. 9 Comparisons between the distributions of the fourth random eigenvector, obtained from Monte Carlo and subspace iteration. Shown are the PDFs
of phase angles with respect to a) 1st, b) 5th, c) 7th, d) 9th, e) 13th, and f) 16th coordinate axes.
Using a PCE representation with Hermite polynomials of order 2 and dimension 3 for the elastic modulus, the PCE of the random stiffness matrix of the finite element model, which takes values in Mat_{20×20}(ℝ), is formed. Next, we investigate the solution of the following random eigenvalue problem:

K(ξ) ϕ(ξ) = λ(ξ) ϕ(ξ)   (42)

The numerical values used for the parameters of this beam are as follows: L = 3, Ē = 0.21. The second-order three-dimensional PCE form of the four most dominant eigenpairs was calculated using the subspace iteration algorithm. For the quadrature scheme in three dimensions, the Cartesian product of five Gauss–Hermite quadrature points in each direction was used. First, we evaluate the performance of this algorithm in estimating the random eigenvectors. To this end, the PDFs of the phase angles of the random eigenvectors with respect to the Cartesian coordinate axes are plotted. The eigenpairs obtained by 30,000 Monte Carlo simulations (resulting in approximately 1% error in the standard deviations) were used as the reference for comparison. Figures 8 and 9 depict a representative set of these PDFs for the first and fourth random eigenvectors with respect to different
Fig. 10 Each row corresponds to a particular mode and shows the norm of the corresponding random eigenvector versus three uncertainty sources: ξ1 ,
ξ2 , and ξ3 , which are independent standard normal random variables. For each value of an uncertainty source, the error bar corresponds to the variation
induced by the other two independent uncertainty sources.
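The almost-sure normalization behind plots like Fig. 10 can be checked by sampling the truncated PCE of an eigenvector and inspecting its pointwise norm (the coefficients below are hypothetical stand-ins for a converged PC eigenvector):

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical chaos coefficients of a (linear-in-xi) PC eigenvector in R^2
phi0 = np.array([np.cos(0.3), np.sin(0.3)])              # mean part, unit norm
phi1 = 0.05 * np.array([-np.sin(0.3), np.cos(0.3)])      # small fluctuation part

xi = rng.standard_normal(10_000)
phi_pts = phi0[None, :] + xi[:, None] * phi1[None, :]    # (N, 2) realizations
norms = np.linalg.norm(phi_pts, axis=1)
# the pointwise norm stays close to, but not exactly at, 1 -- mirroring the
# truncation effect visible in the norm-versus-xi plots of Fig. 10
```

Deviations of this norm from unity grow toward the tails of the uncertainty sources, which is consistent with the error bars widening at large |ξ| in the figure.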
axes, which show agreement with results obtained from Monte Carlo simulations.

[Figure: PDFs of the first, second, third, and fourth modes.]

Appendix: Proofs

In what follows, the detailed proofs for the two lemmas are presented.

Lemma 1: The PC representation of the exact dominant eigenvector is a fixed point of the iterations defined by Algorithm 2.

Proof: Let ϕ¹(ξ) = Σ ϕ¹_i Ψ_i(ξ) denote the solution to the following equation:

Σ_{i=0}^{P0} Σ_{j=0}^{P} A_i ϕ¹_j c_ijk = Σ_{i=0}^{P} Σ_{j=0}^{P} λ¹_i ϕ¹_j c_ijk,  k = 0, …, P   (A1)

u⁰_i = ϕ¹_i,  i = 0, …, P   (A2)

Starting with u⁰_i = ϕ¹_i, the mat–vec multiplication using Galerkin
projection gives, at the quadrature points,

y⁰(ξ^q) = Σ_{i=0}^{P} y⁰_i Ψ_i(ξ^q) = A(ξ^q) u⁰(ξ^q)

and the quadrature-based normalization produces the updated coefficients

u^k_i = Σ_{q=1}^{Q} [ y^{k−1}(ξ^q) / ‖y^{k−1}(ξ^q)‖_{ℝ^n} ] Ψ_i(ξ^q) w_q

At the quadrature points, y(ξ^q) = Σ_{i=0}^{P} λ¹_i Ψ_i(ξ^q) Σ_{i=0}^{P} ϕ¹_i Ψ_i(ξ^q) + ε^proj(ξ^q) + ε^trunc(ξ^q), where ε^proj and ε^trunc denote the projection and truncation errors, both of which vanish as P, Q → ∞. Consequently,

lim_{k→∞} ‖u^k_i − ϕ¹_i‖_{ℝ^n} ≤ Σ_{q=1}^{Q} ‖ε_i(ξ^q)‖_{ℝ^n} → 0  as P, Q → ∞

which establishes that the PCE of the exact dominant eigenvector is a fixed point of the iteration. For Lemma 2, starting with u⁰(ξ) = α₁ϕ¹(ξ) + α₂ϕ²(ξ), the component along ϕ² at each quadrature point is damped at each iteration by the factor

α₂ λ²(ξ^q) / √( α₁² λ¹(ξ^q)² + α₂² λ²(ξ^q)² )   (A27)

Therefore, for α₁ > 0, the iterates converge in direction to the PCE of the dominant eigenvector as P → ∞. □
[7] vom Scheidt, J. (ed.), Random Eigenvalue Problems, Elsevier, New York, 1984.
[8] Shinozuka, M., and Astill, C., "Random Eigenvalue Problems in Structural Analysis," AIAA Journal, Vol. 10, No. 4, 1972, pp. 456–462. doi:10.2514/3.50119
[9] Nair, P., and Keane, A., "An Approximate Solution Scheme for the Algebraic Random Eigenvalue Problem," Journal of Sound and Vibration, Vol. 260, No. 1, 2003, pp. 45–65. doi:10.1016/S0022-460X(02)00899-4
[10] Pradlwarter, H. J., Schueller, G. I., and Szekely, G. S., "Random Eigenvalue Problems for Large Systems," Computers and Structures, Vol. 80, Nos. 27–30, 2002, pp. 2415–2424. doi:10.1016/S0045-7949(02)00237-7
[11] Szekely, G., and Schueller, G., "Computational Procedure for a Fast Calculation of Eigenvectors and Eigenvalues of Structures with Random Properties," Computer Methods in Applied Mechanics and Engineering, Vol. 191, Nos. 8–10, 2001, pp. 799–816. doi:10.1016/S0045-7825(01)00290-0
[12] Rahman, S., "A Solution of the Random Eigenvalue Problem by a
[20] Bathe, K.-J. (ed.), Finite Element Procedures, Prentice–Hall, Upper Saddle River, NJ, 1996.
[21] Saad, Y. (ed.), Numerical Methods for Large Eigenvalue Problems, Manchester Univ. Press, Manchester, England, U.K., 1992.
[22] Verhoosel, C. V., Gutiérrez, M. A., and Hulshoff, S. J., "Iterative Solution of the Random Eigenvalue Problem with Application to Spectral Stochastic Finite Element Systems," International Journal for Numerical Methods in Engineering, Vol. 68, No. 4, 2006, pp. 401–424. doi:10.1002/(ISSN)1097-0207
[23] Das, S., Ghanem, R., and Finette, S., "Polynomial Chaos Representation of Spatio-Temporal Random Fields from Experimental Measurements," Journal of Computational Physics, Vol. 228, No. 23, 2009, pp. 8726–8751. doi:10.1016/j.jcp.2009.08.025
[24] Ghanem, R., and Doostan, A., "On the Construction and Analysis of Stochastic Models: Characterization and Propagation of the Errors Associated with Limited Data," Journal of Computational Physics, Vol. 217, No. 1, 2006, pp. 63–81. doi:10.1016/j.jcp.2006.01.037
[25] Desceliers, C., Ghanem, R., and Soize, C., "Maximum Likelihood
Downloaded by UNIVERSITY OF CALIFORNIA - DAVIS on May 10, 2014 | http://arc.aiaa.org | DOI: 10.2514/1.J051849