
AIAA JOURNAL

Vol. 52, No. 5, May 2014

Spectral Power Iterations for the Random Eigenvalue Problem


Hadi Meidani∗ and Roger Ghanem†
University of Southern California, Los Angeles, California 90089
DOI: 10.2514/1.J051849
Two computationally efficient algorithms are developed for solving the stochastic eigenvalue problem. An
algorithm based on the power iteration technique is proposed for the calculation of the dominant eigenpairs. This
algorithm is then extended to find other subdominant random eigenpairs. The uncertainty in the operator is
represented by a polynomial chaos expansion, and a similar representation is considered for the random eigenvalues
and eigenvectors. The algorithms are distinguished by their speed in converging to the true random eigenpairs and
their ability to estimate a prescribed number of subdominant eigenpairs. The algorithms are demonstrated on two
examples, with close agreement observed with the exact solution and a solution synthesized through Monte Carlo sampling.

Nomenclature

A = generic matrix of interest
c_ijk = expected value of the triple product Ψ_i Ψ_j Ψ_k
D_KL^λ = Kullback–Leibler divergence associated with the random eigenvalue solution
D_KL^ϕ = Kullback–Leibler divergence associated with the random eigenvector solution
d = number of uncertainty sources
E(x, ω) = random process representing the random elastic modulus
F = σ algebra
f_{θi}^PC(θ_i) = probability density function of θ_i^PC(ξ)
f_{θi}^ex(θ_i) = probability density function of θ_i^ex(ξ)
f_λ^PC(λ) = probability density function of λ^PC(ξ)
f_λ^ex(λ) = probability density function of λ^ex(ξ)
L = length of the beam
Mat_{n×n}^PSD(R) = set of positive semidefinite real-valued square matrices of size n
m = prescribed number of physical modes of interest
N = number of Monte Carlo samples
n = size of matrix A
P = order of polynomial chaos expansion for solution eigenpairs
P = probability measure
P′ = order of polynomial chaos expansion for matrix A
Q = number of quadrature points
{w^q} = set of quadrature weights
ε_tol = convergence threshold
ε_λ = error measure defined for random eigenvalues
ε_ϕ = error measure defined for random eigenvectors
ε^quad = integration error in using the quadrature rule
θ_i^ex(ξ) = cosine angles of the random eigenvector ϕ^ex(ξ) with respect to the ith coordinate axis
θ_i^PC(ξ) = cosine angles of the random eigenvector ϕ^PC(ξ) with respect to the ith coordinate axis
λ^s = eigenvalue corresponding to the sth physical mode
λ^ex(ξ) = exact random eigenvalue
λ^MC(ξ) = random eigenvalue obtained by Monte Carlo simulations
λ^PC(ξ) = polynomial chaos form of the random eigenvalue obtained by the proposed algorithms
μ = probability measure associated with ξ
ξ = vector of uncertainty sources
{ξ^q} = set of quadrature points in the stochastic domain
{ξ^r} = set of sampled points in the stochastic domain
ϕ^s = eigenvector corresponding to the sth physical mode
ϕ^MC(ξ) = random eigenvector obtained by Monte Carlo simulations
ϕ^PC(ξ) = polynomial chaos form of the random eigenvector obtained by the proposed algorithms
ϕ^ex(ξ) = exact random eigenvector
{Ψ_i} = set of orthonormal polynomial basis functions
Ω = sample space

Received 30 January 2012; revision received 24 December 2012; accepted for publication 2 February 2013; published online 28 February 2014. Copyright © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved. Copies of this paper may be made for personal or internal use, on condition that the copier pay the $10.00 per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923; include the code 1533-385X/14 and $10.00 in correspondence with the CCC.
*Postdoctoral Scholar and Research Associate, Department of Aerospace and Mechanical Engineering, 3620 South Vermont Avenue, KAP 210; [email protected].
†Professor, Department of Aerospace and Mechanical Engineering, 3620 South Vermont Avenue, KAP 210; [email protected]. Member AIAA.

I. Introduction

Robust predictive models for natural and engineered systems should incorporate the significant variabilities in the behavior of these systems induced by the inherent variability of system components, the inadequacy of the input–output models, and the inaccuracy of the numerical implementations (finite-dimensional approximation). Toward this end, efficient uncertainty quantification techniques have been developed that represent the uncertainty in the input, propagate it through the numerical models, and finally produce a probabilistic description for the system outputs and quantities of interest (QOIs). In addition to the classical sampling techniques, such as Monte Carlo (MC) simulation and its improved derivatives, such as Markov chain Monte Carlo techniques, there has recently been growing interest in the nonsampling spectral technique named polynomial chaos expansion (PCE) [1–5]. In this stochastic projection technique, the governing equation is projected onto truncated polynomial chaos (PC) coordinates, resulting in a finite-dimensional approximation of the sample space, which is amenable to numerical calculation.

Our objective in this work is to characterize the eigenspace of linear systems with parametric uncertainties. Such a characterization can be used as the basis of various model reduction techniques and thus significantly improve the computational cost of the analysis of large uncertain systems. Our particular interest is in random matrices that arise from the finite-dimensional approximation (discretization) of continuous operators, such as partial differential operators, involving parametric uncertainty, as well as in those corresponding to discrete random systems, such as multidegree-of-freedom oscillators. Generally, there is no closed-form expression for these random matrices. Among the first solution approaches for the eigenvalue analysis of such matrices, the perturbation technique was proposed in [6,7], in
which the solution eigenpair was calculated based on a few low-order perturbations. This approach gives inaccurate results in cases in which higher-order perturbations are needed or when the perturbations are not small enough, disqualifying the Taylor series approximation. Statistical sampling is another approach, first proposed in [8]. It is computationally intensive, however, especially in cases in which a prohibitively large number of samples of independent random variables is required. Researchers have introduced approaches to enhance the accuracy of perturbation-based methods [9] and also methods to improve the computational efficiency of simulation-based algorithms [10,11]. A dimensional decomposition approach was also proposed in [12] for the probabilistic characterization of the random eigenvalues using a lower-variate approximation of these random variables, which proves superior to the perturbation techniques. Another approach for the approximation of random eigenvalues was reported in [13], in which the first and second statistical moments of random eigenpairs were analytically calculated based on the moments of the random matrix.

Recently, polynomial chaos expansion techniques have been applied to the solution of the random eigenvalue problem [14–16], in which, given a polynomial chaos representation for the random matrix, the solution eigenvalues and eigenvectors are assumed to have a similar spectral representation in the same vector space. The weak form of the eigenvalue problem is then obtained by Galerkin projection of the eigenvalue equation on the PC orthogonal basis, spanning the Hilbert space of input random variables, leading to a system of nonlinear algebraic equations. The PC coefficients of the solution eigenpairs are then obtained by the numerical solution of these systems. The resulting PC solution lends itself readily to accurate uncertainty representation of QOIs and sensitivity indices [17] and is greatly advantageous compared to other proposed methods.

In our approach, given a PC representation for the random matrix, instead of solving the system of algebraic equations for the PC coefficients, we use the idea of a power iteration scheme in order to calculate the PC coefficients of the dominant random eigenpairs. Iterative methods, among the numerical methods for the solution of eigenvalue problems, have been widely used and extended due to their computational advantage. Specifically, the power method [18] and its derivatives, such as inverse iteration [19] and subspace iteration [20], have been the basis for further developments [21]. The first iterative solution algorithm for the PC representation of the eigenpairs was introduced in [22], in which the authors extended a classical iterative method relying on simple matrix operations. Specifically, they recast the inverse power method [19] as an iterative algorithm on the PC coefficients of random vectors in order to solve for the random eigenvector with a random eigenvalue whose average is closest to a prescribed value. In the present paper, we propose an algorithm that solves for multiple dominant random eigenpairs, with a different definition for the norm of random vectors compared to that used in [22], and thus arrive at an optimal approximation for a lower-dimensional invariant subspace in which the random operator can be represented.

This paper is structured as follows. In Sec. II, we describe the mathematical setting and provide the theoretical background for the proposed spectral approximation. Section III includes the proposed solution algorithms for the random eigenvalues and eigenvectors of a random matrix. In Sec. IV, the performance of these algorithms is evaluated in two numerical examples.

II. Problem Definition

Consider an experimental setting defined by the probability triplet (Ω, F, P). Let ξ: Ω → R^d be an R^d-valued random variable, and let F_ξ ⊂ F be the σ algebra induced by ξ. Further, denote the probability density of ξ by dμ. We consider ξ to refer to the source of uncertainty in an underlying mathematical problem, such as random coefficients or boundary conditions in a partial differential equation describing the physics of the problem. We will further assume that these random variables are statistically independent, with the implication that dμ = Π_{i=1}^{d} dμ_i, where μ_i is the probability measure of the real-valued random variable ξ_i. Let A: Ω → Mat_{n×n}^PSD(R) be an (F_ξ, μ)-measurable matrix-valued random variable with values in Mat_{n×n}^PSD(R), that is, the set of positive semidefinite real-valued square matrices of size n. We seek the solution of the following stochastic eigenvalue problem:

    A(ξ) ϕ^s(ξ) = λ^s(ξ) ϕ^s(ξ),  s = 1, …, n,  μ-almost surely (a.s.)   (1)

where superscript s indexes the invariant subspace associated with the pair (λ^s, ϕ^s). Suppose there exists a functional representation for the random matrix in the following form:

    A(ξ) = Σ_{i=0}^{P′} A_i Ψ_i(ξ)   (2)

where A_i ∈ Mat_{n×n}(R) is deterministic and {Ψ_i} is an orthonormal basis in Θ := L²(Ω, F_ξ, μ).

The analytical form in Eq. (2) could be synthesized from a Karhunen–Loeve expansion of an underlying stochastic process, in which case the random variables ξ would correspond to the Karhunen–Loeve random variables, the deterministic coefficient A_i would depend on the ith eigenvector and eigenvalue of the covariance matrix, and Ψ_i(ξ) would be a set of dμ-orthonormal random variables [23]. Equation (2) could also be obtained directly from a PCE of the underlying stochastic process, with coefficients estimated from available information [24,25]. A complete sequence of orthonormal basis functions can be found for the random matrix A(ω) if each component of the matrix has finite variance and its underlying probability measure is uniquely determined by its statistical moments; i.e., the moment problem is uniquely solvable for each random component [26,27]. In the present work, we assume that such a chaos representation for the random matrix A(ω) is available to the analyst.

Stochastic fluctuations in the entries of matrix A will induce corresponding fluctuations in its eigenvalues and eigenvectors. Thus, λ^s: Ω → R and ϕ^s: Ω → R^n are F_ξ-measurable random variables. With mild conditions on matrix A, both λ^s and ϕ^s can be assumed to be square integrable and are thus in Θ. As a basis in Θ, we choose polynomials orthonormal with respect to the measure dμ. The approximate representation for the eigenpairs is then given by

    λ^s(ξ) ≈ Σ_{i=0}^{P} λ_i^s Ψ_i(ξ),  s = 1, …, n
    ϕ^s(ξ) ≈ Σ_{i=0}^{P} ϕ_i^s Ψ_i(ξ),  s = 1, …, n   (3)

where λ_i^s and ϕ_i^s are the chaos coefficients given by

    λ_i^s = ⟨λ^s, Ψ_i⟩,  ϕ_i^s = ⟨ϕ^s, Ψ_i⟩,  s = 1, …, n   (4)

with the following notational convention that will be used throughout the paper:

    ⟨f, Ψ_i⟩ := ∫_{R^d} f(ξ) Ψ_i(ξ) dμ(ξ)   (5)

for any integrable function f defined on R^d. We use the notation ⟨·, ·⟩_{R^n}, instead, for the inner product of two vectors in R^n in order to distinguish it from the inner product in the Hilbert space of random variables. Therefore, the Galerkin projection on the stochastic coordinates is formulated as
    ⟨(Σ_{i=0}^{P′} A_i Ψ_i)(Σ_{i=0}^{P} ϕ_i Ψ_i) − (Σ_{i=0}^{P} λ_i Ψ_i)(Σ_{i=0}^{P} ϕ_i Ψ_i), Ψ_k⟩ = 0,  k = 0, …, P   (6)

where, for the sake of notational simplicity, we have dropped the superscripts denoting the eigenmode. This weak formulation leads to a nonlinear system of equations of the following form:

    Σ_{i=0}^{P′} Σ_{j=0}^{P} A_i ϕ_j c_ijk = Σ_{i=0}^{P} Σ_{j=0}^{P} λ_i ϕ_j c_ijk,  k = 0, …, P   (7)

where c_ijk = ⟨Ψ_i Ψ_j Ψ_k⟩ and ⟨·⟩ is the expectation operator. The random eigensolution is obtained by solving Eq. (7) for the deterministic coefficients ϕ_i and λ_i. In [14,15], two numerical schemes are proposed for solving this set of equations: one based on the Newton–Raphson method and one based on an optimization problem. Instead of solving for the chaos coefficients in Eq. (7), we adapt the power iteration method to calculate the PCE of the random eigenpairs using efficient iterative algorithms. Two different algorithms are proposed: the first produces the PCE of the dominant random eigenpair, and the second is capable of estimating a dominant subspace of arbitrary dimension.

III. Iteration on the Vector Space

A. Algorithm for the Dominant Eigenpair

Among the solution algorithms for the deterministic eigenvalue problem, methods based on power iteration are well adapted to high-performance computing, as they can be formulated efficiently as matrix–vector (mat–vec) multiplications. We extend this iterative scheme to the stochastic eigenvalue problem.

The main idea behind the power iteration algorithm is that an initial vector, under consecutive application of the matrix A, will converge to the direction of the dominant eigenvector, provided proper normalization at each step. The only case in which convergence is not achieved is when the initial vector has a zero component in the direction of the actual dominant eigenvector. The deterministic power method is summarized in Algorithm 1.

Algorithm 1  Deterministic power iteration method
  Step 1. Choose an initial normal vector u^0; set k = 0
  while δ > ε_tol do
    Step 2. y^k = A u^k
    Step 3. u^{k+1} = y^k / ‖y^k‖
    δ = ‖u^{k+1} − u^k‖;  k = k + 1
  end while
  Step 4. λ = (u^{k−1})^T A u^{k−1}

In the stochastic version, we iteratively apply the random matrix to an initial random vector, in which both the random matrix and the vector are represented in PCE form. Next, we explain the modification to each step in the algorithm to make it applicable to the stochastic case. For initialization and mat–vec multiplication, we start with a PC representation for an initial (normal) random vector:

    u^0(ξ) = Σ_{i=0}^{P} u_i^0 Ψ_i(ξ)   (8)

We apply the random matrix A(ξ) to the left and compute the product,

    y^0(ξ) = A(ξ) u^0(ξ)   (9)

using Galerkin projection; that is, we replace the left-hand side with a PCE representation for y(ξ),

    Σ_{i=0}^{P} y_i^0 Ψ_i(ξ) = A(ξ) u^0(ξ)   (10)

and then calculate its deterministic chaos coefficients as

    y_k^0 = Σ_{i=0}^{P′} Σ_{j=0}^{P} A_i u_j^0 c_ijk   (11)

where c_ijk = ⟨Ψ_i Ψ_j Ψ_k⟩.

1. Normalization
We seek to characterize the eigenpairs (λ^s(ω), ϕ^s(ω)) such that the eigenvectors are normalized almost surely; i.e.,

    ‖ϕ^s(ω)‖_{R^n} = 1  a.s.,  s = 1, …, n   (12)

where ‖·‖_{R^n} denotes the l² norm in R^n. Alternatively, one could consider random eigenvectors that are normalized with respect to the norm on the product space R^n × Θ. As will be discussed later, our choice of the norm for the random eigenvectors will lead us to guarantee the convergence of the stochastic version of the power iteration method.

To enforce normality at the kth iteration in our algorithm, we normalize the random vector y^k(ξ) and obtain a new vector u^{k+1}(ξ) that is normalized in the following sense:

    ‖u^{k+1}(ξ)‖_{R^n} = 1  a.s.   (13)

which is equivalent to

    u^{k+1}(ξ) = y^k(ξ) / ‖y^k(ξ)‖_{R^n} = Σ_{i=0}^{P} y_i^k Ψ_i(ξ) / ‖Σ_{i=0}^{P} y_i^k Ψ_i(ξ)‖_{R^n}  a.s.   (14)

The PC coefficients of u^{k+1}(ξ) are approximated as follows:

    u_i^{k+1} = Σ_{q=1}^{Q} [y^k(ξ^q) / ‖y^k(ξ^q)‖_{R^n}] Ψ_i(ξ^q) w^q   (15)

where {ξ^1, …, ξ^Q} and {w^1, …, w^Q} are quadrature points and associated weights, respectively. Using the quadrature integration, we will incur a small error as follows:

    u_i^1 = ⟨u^1, Ψ_i⟩ + ε_i^quad   (16)

where ε_i^quad denotes the integration error, which decreases as Q increases.

2. Convergence Criterion
The algorithm stops when the following inequality in terms of the PCE coefficients of the vector u^k(ξ) is satisfied:

    Σ_{i=0}^{P} ‖u_i^k − u_i^{k−1}‖_{R^n} ≤ ε_tol   (17)

3. Eigenvalue Calculation
Once the PCE of the dominant eigenvector is calculated, Eq. (7) can be solved for the PC coefficients of the corresponding random eigenvalue. Given the values of the chaos coefficients of the random eigenvector, it is less computationally intensive to solve for the chaos coefficients of the random eigenvalue, as the number of unknowns decreases from (n + 1)(P + 1) to P + 1. Alternatively, the Rayleigh quotient can be used to calculate the chaos coefficients of the eigenvalues in a Galerkin projection sense.
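The stochastic power iteration just described (Galerkin mat–vec, pointwise normalization at quadrature points, and the stopping test) can be sketched numerically for the two-DOF stiffness matrix of Sec. IV.B, using orthonormal probabilists' Hermite polynomials. This is an illustrative sketch, not the authors' implementation: the chaos orders, quadrature size, and tolerance are assumed choices.

```python
import math
import numpy as np

P_mat, P = 2, 4                     # chaos order of the matrix (P') and of the solution (P)

def psi(i, xi):
    """Orthonormal probabilists' Hermite polynomial of degree i."""
    c = np.zeros(i + 1); c[i] = 1.0
    return np.polynomial.hermite_e.hermeval(xi, c) / math.sqrt(math.factorial(i))

nodes, w = np.polynomial.hermite_e.hermegauss(20)
w = w / math.sqrt(2.0 * math.pi)    # normalize weights to the standard normal measure
Psi = np.array([psi(i, nodes) for i in range(P + 1)])          # shape (P+1, nq)

# Two-DOF stiffness matrix with k1(xi) = 10 + 2*(xi^2 - 1), k2 = 10, k3 = 4:
# A(xi) = A_0 + A_2 psi_2(xi), since xi^2 - 1 = sqrt(2) * psi_2(xi)
A_coeff = {0: np.array([[20.0, -10.0], [-10.0, 14.0]]),
           2: 2.0 * math.sqrt(2.0) * np.array([[1.0, 0.0], [0.0, 0.0]])}

# Triple products c_ijk = <psi_i psi_j psi_k> of Eq. (7), computed by quadrature
c = np.array([[[np.sum(w * pi * pj * psi(k, nodes)) for k in range(P + 1)]
               for pj in Psi]
              for pi in Psi[:P_mat + 1]])

u = np.zeros((P + 1, 2)); u[0] = [1.0, 0.0]      # chaos coefficients of u^0
for _ in range(200):
    # Step 2: Galerkin mat-vec, Eq. (11)
    y = np.zeros_like(u)
    for i, Ai in A_coeff.items():
        for j in range(P + 1):
            y += np.outer(c[i, j, :], Ai @ u[j])
    # Step 3: normalize at each quadrature point and project back, Eq. (15)
    y_at = Psi.T @ y
    y_at /= np.linalg.norm(y_at, axis=1, keepdims=True)
    u_new = (Psi * w) @ y_at
    delta = np.sum(np.linalg.norm(u_new - u, axis=1))          # Eq. (17)
    u = u_new
    if delta < 1e-10:
        break

# Random eigenvalue via the Rayleigh quotient, projected on the chaos basis
u_at = Psi.T @ u
u_at /= np.linalg.norm(u_at, axis=1, keepdims=True)
A_at = [A_coeff[0] + A_coeff[2] * psi(2, x) for x in nodes]
lam = (Psi * w) @ np.array([u_at[q] @ A_at[q] @ u_at[q] for q in range(len(nodes))])

lam_of = lambda x: sum(lam[i] * psi(i, x) for i in range(P + 1))
print(abs(lam_of(0.0) - (32 + math.sqrt(416)) / 2))   # small truncation error at xi = 0
```

At ξ = 0 the matrix is [[18, −10], [−10, 14]], whose dominant eigenvalue (32 + √416)/2 ≈ 26.2 serves as a pointwise check on the chaos surrogate.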
Algorithm 2  Stochastic power method
  Step 1. Choose a set of deterministic coefficients {u_0^0, …, u_P^0}; set k = 0
  while δ > ε_tol do
    Step 2. y_r^k = Σ_{i=0}^{P′} Σ_{j=0}^{P} A_i u_j^k c_ijr,  r = 0, …, P
    Step 3. Normalize {y_0^k, …, y_P^k} and obtain {u_0^{k+1}, …, u_P^{k+1}}
    δ = Σ_{i=0}^{P} ‖u_i^{k+1} − u_i^k‖;  k = k + 1
  end while
  Step 4. Solve Eq. (7) with {ϕ_i} = {u_i^{k−1}} for {λ_0, …, λ_P}

Algorithm 3  Deterministic subspace iteration
  Step 1. Choose an initial set of m normalized vectors {¹u^0, …, ᵐu^0}; set k = 0
  while δ > ε_tol do
    Step 2. ˢy^k = A ˢu^k,  s = 1, …, m, and Y = [¹y^k, …, ᵐy^k]
    Step 3. Compute the QR factorization QR = Y and set [¹u^{k+1}, …, ᵐu^{k+1}] = Q
    δ = Σ_{s=1}^{m} ‖ˢu^{k+1} − ˢu^k‖;  k = k + 1
  end while
  Step 4. ˢλ = (ˢu^{k−1})^T A ˢu^{k−1},  s = 1, …, m

The steps in the proposed stochastic power iteration are summarized in Algorithm 2.

As an alternative to these spectral algorithms, we have also considered algorithms in which the power and subspace iterations are performed at quadrature points, and the chaos coefficients of the random eigenvectors are calculated based on the converged vectors at all the quadrature points. Such a construction mitigates the need for the spectral mat–vec operation of step 2. Identical results were obtained for the two numerical examples discussed in this paper. It cannot be easily concluded, however, that such a reconstruction will result in a computational speed gain. Specifically, the computational cost of a fully quadrature-based algorithm is prohibitively high, especially for a high-dimensional PCE.

B. Analysis of Stochastic Power Iteration

To prove convergence of the stochastic power iteration algorithm, we will make use of the following two lemmas. The proofs of these lemmas are included in the Appendix.

Lemma 1: The PC representation of the exact dominant eigenvector is a fixed point of the iterations defined by Algorithm 2.

Lemma 2: Starting with an initial vector composed of the first two dominant eigenvectors, i.e., u^0(ξ) = α₁ ϕ¹(ξ) + α₂ ϕ²(ξ) = α₁ Σ_i ϕ_i¹ Ψ_i(ξ) + α₂ Σ_i ϕ_i² Ψ_i(ξ), the solution of the algorithm eventually converges, in direction, to the PCE of the dominant eigenvector.

Theorem: Given the PC representation for the random matrix A(ξ), one of the following two statements is true for the solution obtained by Algorithm 2:

    lim_{k→∞} Σ_{i=0}^{P} ‖u_i^k − ϕ_i¹‖_{R^n} = 0  as P, Q → ∞,  or
    lim_{k→∞} Σ_{i=0}^{P} ‖u_i^k + ϕ_i¹‖_{R^n} = 0  as P, Q → ∞   (18)

where {ϕ_i¹} are the chaos coefficients of the true PCE of the dominant eigenvector (ϕ¹(ξ) = Σ_i ϕ_i¹ Ψ_i(ξ)), and {u_i^k} are the chaos coefficients in a Pth-order expansion, obtained using a quadrature-based normalization at Q quadrature points.

Proof: In light of Lemmas 1 and 2, the proof easily follows. □

C. Algorithm for Dominant Subspaces

In many applications, the analyst is interested in calculating not only the dominant eigenpair but a dominant subspace. In this section, we propose an iterative algorithm that finds the subdominant eigenpairs. This algorithm is developed based on the subspace iteration algorithm.

In the power iteration technique, if instead of a single initial vector one chooses a set of m vectors and applies the matrix to them multiplicatively, all the vectors will converge to the dominant eigenvector. However, if we orthogonalize these m vectors at each step, the resulting vectors will evolve in independent directions and will eventually converge, under this process, to the eigenvectors spanning the dominant m-dimensional subspace. Algorithm 3 summarizes the steps of this method for the calculation of the first m dominant eigenpairs of the matrix A ∈ Mat_{n×n}(R). Here, the m vectors in R^n are distinguished by their left superscripts, and ⁱϕ^c is the ith most dominant eigenvector given by a converged iteration.

In what follows, we explain the way each step is adapted to the random eigenvalue problem. First, we initialize the set of m random vectors by setting their chaos coefficients. The initial set of m random vectors resulting from these coefficients need not be orthogonal. For example, we can choose the eigenvectors of the mean matrix as the initial guess. In step 2, we solve the weak form of the product, similar to step 2 of the power iteration.

D. QR Factorization

To decompose the m random vectors, we have developed a stochastic variation of the Gram–Schmidt decomposition. We use a quadrature-based decomposition in which, given a set of m random vectors, a new set of m random vectors is generated that are orthogonal at every point in an acceptably large subset of the stochastic domain.

Let us assume that we have a set of m random vectors denoted by {¹y(ξ), …, ᵐy(ξ)}, where ⁱy(ξ): R^d → R^n, and for each random vector a PCE is available. The objective is to compute the PCE of a new set of m "orthogonal" random vectors {¹q(ξ), …, ᵐq(ξ)}, where ⁱq(ξ): R^d → R^n. Ideally, these new orthogonal random vectors can be obtained by satisfying the Gram–Schmidt equalities almost surely, as follows:

    ᵏq(ξ) = ᵏy(ξ) − Σ_{j=1}^{k−1} [⟨ᵏy(ξ), ʲq(ξ)⟩_{R^n} / ⟨ʲq(ξ), ʲq(ξ)⟩_{R^n}] ʲq(ξ),  k = 1, …, m  a.s.   (19)

Let the random vector ʲᵏχ(ξ) denote the term under the summation:

    ʲᵏχ(ξ) := [⟨ᵏy(ξ), ʲq(ξ)⟩_{R^n} / ⟨ʲq(ξ), ʲq(ξ)⟩_{R^n}] ʲq(ξ)  a.s.   (20)

Equation (19) can then be rewritten as

    ᵏq(ξ) = ᵏy(ξ) − Σ_{j=1}^{k−1} ʲᵏχ(ξ),  k = 1, …, m  a.s.   (21)

We rely on the weak form of this equation for computational convenience. Thus, given the PCE of the random vector ʲᵏχ(ξ) in the form ʲᵏχ(ξ) = Σ_i ʲᵏχ_i Ψ_i(ξ), it is easy to see that

    ᵏq_i = ᵏy_i − Σ_{j=1}^{k−1} ʲᵏχ_i,  k = 1, …, m;  i = 0, …, P   (22)

The coefficients ʲᵏχ_i can be readily evaluated as

    ʲᵏχ_i = ⟨ʲᵏχ, Ψ_i⟩ ≈ Σ_{q=1}^{Q} ʲᵏχ(ξ^q) Ψ_i(ξ^q) w^q   (23)

where ξ^q refers to a quadrature point in the stochastic domain.
Algorithm 4  Stochastic subspace iteration
  Step 1. Choose a set of chaos coefficients for m normalized random vectors {{¹u_i^0}, …, {ᵐu_i^0}}; set k = 0
  while δ > ε_tol do
    Step 2. ˢy_r^k = Σ_{i=0}^{P′} Σ_{j=0}^{P} A_i ˢu_j^k c_ijr,  r = 0, …, P;  s = 1, …, m
    Step 3. Compute the stochastic QR factorization [Eq. (21), using {ˢy_j^k}] and set {ˢu_j^{k+1}} = {ˢq_j}
    δ = Σ_{s=1}^{m} Σ_{i=0}^{P} ‖ˢu_i^{k+1} − ˢu_i^k‖;  k = k + 1
  end while
  Step 4. Solve Eq. (7) with {ϕ_i} = {ˢu_i^{k−1}} for {ˢλ_i},  s = 1, …, m

Algorithm 4 summarizes the steps in the proposed stochastic subspace iteration scheme.

One of the challenges in solving eigenvalue problems arises when the matrix of interest has closely clustered eigenvalues. Consider the following deterministic eigenvalue problem:

    A ϕ = λ ϕ   (24)

and the following perturbed eigenvalue problem:

    (A + E) ϕ̃ = λ̃ ϕ̃   (25)

where E is the perturbation matrix with norm ‖E‖₂. This perturbation induces an error in the approximated eigenvalues bounded as follows:

    |λ − λ̃| ≤ ‖E‖₂   (26)

and an error in the approximated eigenvector bounded, according to [28], by

    sin θ(ϕ, ϕ̃) ≤ ‖E‖₂ / δ   (27)

where δ is the gap between λ̃ and any other eigenvalue. Therefore, if the eigenvalues are closely clustered, the error in the approximated eigenvectors may be significant. However, based on the (nonsmall) gap that separates the cluster of eigenvalues from all other eigenvalues, there will be a tight bound on how much the subspace spanned by the eigenvectors of the perturbed system differs from that of the exact system [28,29]. The stochastic subspace iteration, therefore, mitigates the problem of closely clustered eigenvalues by simultaneously considering all the associated eigenvectors.

IV. Numerical Examples

In this section, we investigate the performance of the proposed algorithms. First, a low-dimensional system is chosen so that the eigenpairs can be derived analytically. Specifically, a two-degree-of-freedom (DOF) mass-spring oscillator is chosen in which the stiffness of one of the springs is random. As a second example, a finite element model of an elastic cantilever beam with random elasticity is considered. Because the random eigenpairs of this system cannot be calculated analytically, the results of the MC simulations are assumed to yield a sufficiently accurate approximation to the exact PCE coefficients and are used to verify our procedure.

A. Measure of Discrepancy

We will consider two measures of discrepancy between representations of the solution to the random eigenvalue problem. The first measure is consistent with the Hilbert space structure in which we have formulated the problem. The second measure involves a comparison of some relevant probability density functions (PDFs).

1. Error in L² Convergence
We define the L² error as the norm of the difference between two PCE representations. This error definition is used to evaluate the performance of the algorithm. Using superscript ex to denote an exact quantity and PC to denote a PCE approximation of that quantity, we introduce the following two error measures for the eigenvectors and eigenvalues:

    ε_ϕ = ‖e_ϕ‖²_{R^n×Θ} = ∫ ‖ϕ^ex(ξ) − ϕ^PC(ξ)‖²_{R^n} dμ = Σ_{i=0}^{P} ‖ϕ_i^ex − ϕ_i^PC‖²_{R^n}   (28)

and

    ε_λ = |e_λ|²_Θ = ∫ |λ^ex(ξ) − λ^PC(ξ)|² dμ = Σ_{i=0}^{P} |λ_i^ex − λ_i^PC|²   (29)

Clearly, in most cases of interest, λ^ex and ϕ^ex are not available, and they are obtained by suitable approximation of the integrals in the preceding definitions. We will rely on a Monte Carlo-based approximation of these integrals, resulting in the following approximations to the error measures:

    ε_ϕ ≈ (1/N) Σ_{i=1}^{N} ‖ϕ^MC(ξ_i) − ϕ^PC(ξ_i)‖²_{R^n}   (30)

and

    ε_λ ≈ (1/N) Σ_{i=1}^{N} |λ^MC(ξ_i) − λ^PC(ξ_i)|²   (31)

where N refers to the number of samples in the Monte Carlo scheme, and {ξ_i} refers to N independent samples of the random variable ξ. Here, ϕ^MC(ξ_i) and λ^MC(ξ_i) refer to the deterministic eigenpairs associated with realization ξ_i.

2. Error in Convergence in Distribution
Our second measure of discrepancy consists in comparing the probability density functions of the random eigenpairs obtained through our proposed approximation scheme with the probability density functions obtained through an extensive Monte Carlo sampling scheme. We introduce this error to show convergence in distribution for the random eigenpairs. Let f_λ^PC(λ) and f_λ^ex(λ) denote the probability density functions of λ^PC(ξ) and λ^ex(ξ), respectively. The solution is said to converge in distribution to the exact random eigenvalue if the following condition holds:

    f_λ^PC(λ) − f_λ^ex(λ) = 0   (32)

at every λ ∈ R at which f_λ^ex is continuous. To quantify the error in satisfying this condition, the Kullback–Leibler (KL) divergence is used, which for f_λ^PC and f_λ^ex is given by

Fig. 1  Three angles between the vector ϕ and the coordinate axes, i.e., α, β, and γ.
MEIDANI AND GHANEM 917

1X N
fex k
λ λ 
DλKL ≈ fex k
λ λ  ln PC k
N k1 fλ λ 
k
1X n
1X N fex
θi θi 
DϕKL ≈ fex k
θi θi  ln PC k (35)
n i1 N k1 fθ θi 
Fig. 2 Two-DOF mass-spring system (after [15]). i

Z ∞ fex
λ λ
DλKL  fex
λ λ ln PC
dλ (33) where fλk g and fθk
i g refer to the N test points used in the kernel
0 fλ λ smoothing density estimators.
To investigate the convergence in distribution for random
eigenvectors, the distribution of the phase angles between the B. Two-Degree-of-Freedom Mass-Spring Oscillator
random vector and the Cartesian coordinate axes are chosen for A two-DOF oscillator example is considered (Fig. 2), in which the
1 ξ; : : : ; θn ξg and fθ1 ξ; : : : ;
comparison (Fig. 1). Let fθPC PC ex stiffness of the first spring, denoted by k1, is the only random
θn ξg denote the cosine angles corresponding to random
ex parameter in the system, given by the following equation:
eigenvectors ϕPC ξ and ϕMC ξ, respectively, with their associated
Downloaded by UNIVERSITY OF CALIFORNIA - DAVIS on May 10, 2014 | http://arc.aiaa.org | DOI: 10.2514/1.J051849

PDFs denoted by ffPC PC ex


θ1 θ1 ; : : : ; f θn θn g and ff θ1 θ1 ; : : : ;
ex
fθn θn g. The KL divergence for a random eigenvector is then k1 ω  k1  σξ2 − 1 (36)
defined as follows:
Xn Z 2π fex
def 1 θi θi  where ξ is a standard normal random variable. This particular form
DϕKL  fex
θi θ i  ln PC
dθi (34)
n i1 0 fθi θi  generates a random variable k1 ω that is positive almost surely
[30]. The numerical values used for the model parameters are as
Here, we rely on the test points of the kernel smoothing density follows: k1  k2  10 N∕m, k3  4 N∕m, m1  m2  1 kg, and
estimates in order to approximate the KL-divergence terms as σ  2 N∕m. The exact solution for this eigenvalue problem can be
follows: derived analytically as

Fig. 3 Comparison between the exact and approximate PDFs of the phase angles of the first and second eigenvector with respect to the first and second axes. The insets show good agreement between the tails of the distributions. The errors in the mean and standard deviations of the four phase angles are respectively computed as a) 0.5%, 5.3%; b) 0.5%, 5.3%; c) 0.5%, 6.5%; and d) 0.5%, 6.5%.
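The phase angles compared in Fig. 3 are, for each realization, the angles between an eigenvector and the Cartesian coordinate axes (i.e., the arccosines of its direction cosines). A minimal sketch of this diagnostic; the function name is illustrative:

```python
import numpy as np

def phase_angles_deg(v):
    """Angles, in degrees, between vector v and each Cartesian coordinate axis."""
    v = np.asarray(v, dtype=float)
    cosines = v / np.linalg.norm(v)  # direction cosines of v
    # Clip guards against round-off pushing a cosine slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))

print(phase_angles_deg([1.0, 0.0]))  # aligned with the 1st axis: 0 deg and 90 deg
print(phase_angles_deg([1.0, 1.0]))  # diagonal direction: 45 deg to both axes
```

Applying this map to each sampled eigenvector and smoothing the resulting angle samples yields the PDFs plotted in Fig. 3.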
918 MEIDANI AND GHANEM

Fig. 4 Comparison between the functional forms of the exact and approximate phase angles of the first and second eigenvector relative to the first and second axes.

Fig. 5 Comparison between the exact and approximate PDFs of the first and second eigenvalues of the two-DOF system.

Fig. 6 Comparison between the functional forms of the exact and approximate solutions of the first and second eigenvalues of the two-DOF system.

Larger distances occur in the low-probability regions; thus, as seen in Fig. 5, using the same PC order, an acceptable approximation is achieved for the PDFs.
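The exact eigenvalue curves underlying Figs. 5 and 6 can be cross-checked numerically. A minimal sketch, assuming unit masses and the spring arrangement in which $k_1$ and $k_2$ tie each mass to ground while $k_3$ couples the two masses, so that $K = [[k_1 + k_3, -k_3], [-k_3, k_2 + k_3]]$; this arrangement is an assumption chosen to be consistent with the closed-form eigenvalues quoted in the text:

```python
import numpy as np

def exact_eigs(xi):
    """Closed-form eigenvalues (ascending) for the stated numerical values."""
    root = np.sqrt(xi ** 4 - 2.0 * xi ** 2 + 17.0)
    return 13.0 + xi ** 2 - root, 13.0 + xi ** 2 + root

def numeric_eigs(xi, k1_bar=10.0, k2=10.0, k3=4.0, sigma=2.0):
    """Eigenvalues of the stiffness matrix K(xi) with unit masses (ascending)."""
    k1 = k1_bar + sigma * (xi ** 2 - 1.0)  # random stiffness, Eq. (36)
    K = np.array([[k1 + k3, -k3], [-k3, k2 + k3]])
    return tuple(np.linalg.eigvalsh(K))   # eigvalsh returns ascending order

for xi in (-2.0, 0.0, 0.7, 3.0):
    assert np.allclose(exact_eigs(xi), numeric_eigs(xi))
print("closed form matches numerical eigenvalues")
```

Sweeping `xi` over a grid and over samples of the standard normal reproduces, respectively, the functional forms of Fig. 6 and the PDFs of Fig. 5.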

Fig. 7 Discretization of the cantilever beam with 10 elements and random elasticity E(x, ω) and random density ρ(x, ω) in the most general case.

Fig. 8 Comparisons between the distributions of the first random eigenvector, obtained from Monte Carlo and subspace iteration. Shown are the PDFs of the phase angles with respect to the a) 1st, b) 5th, c) 6th, d) 11th, e) 14th, and f) 16th coordinate axes.

 
$$\lambda_1(\xi) = \frac{1}{2}\left[\bar{k}_1 + \sigma(\xi^2 - 1) + \sqrt{\bar{k}_1^2 + \sigma(\xi^2 - 1)(2\bar{k}_1 - 20) - 20\bar{k}_1 + \sigma^2\xi^4 - 2\sigma^2\xi^2 + \sigma^2 + 164}\right] + 9$$

$$\lambda_2(\xi) = \frac{1}{2}\left[\bar{k}_1 + \sigma(\xi^2 - 1) - \sqrt{\bar{k}_1^2 + \sigma(\xi^2 - 1)(2\bar{k}_1 - 20) - 20\bar{k}_1 + \sigma^2\xi^4 - 2\sigma^2\xi^2 + \sigma^2 + 164}\right] + 9$$

Given the aforementioned numerical values, the exact eigenvalues are

$$\lambda_1(\xi) = 13 + \xi^2 + \sqrt{\xi^4 - 2\xi^2 + 17}, \qquad \lambda_2(\xi) = 13 + \xi^2 - \sqrt{\xi^4 - 2\xi^2 + 17}$$

A second-order PC expansion is considered for the random eigenpairs. Here, we report the results obtained by the random subspace algorithm. Figure 3 shows the comparison between the PDFs of the phase angles of the exact and approximate random eigenvectors. Figure 4 compares the exact and approximate functional forms of the random phase angles versus the uncertainty source. Similarly, Fig. 5 compares the PDFs of the exact and approximate random eigenvalues, whereas Fig. 6 compares the functional forms of these two. It can be seen that the solutions for the random eigenvalues and eigenvectors are in close agreement with the exact values.

C. Cantilever Beam with Random Elasticity

The finite element model of a one-dimensional cantilever beam with random elasticity E is next considered (Fig. 7). The length of the beam is divided into 10 elements of equal size. Only rotational and transverse degrees of freedom are retained, and axial deformation is neglected. The spatial variation in the elastic modulus of the beam is represented by the random process E(x, ω), which follows a lognormal distribution. Specifically,

$$E(x, \omega) = \bar{E} \exp(g(x, \omega)) \qquad (37)$$

where $\bar{E}$ is a constant and $g(x, \omega)$ is a Gaussian random process with its autocorrelation function given by

$$R(x_1, x_2) = \sigma_g^2 \, \mathrm{sinc}\!\left(\frac{x_1 - x_2}{L}\right) \qquad (38)$$

with the correlation length L set equal to the length of the beam, which is three. The mean and coefficient of variation of the lognormal distribution exp(g(x, ω)) are set equal to 1 and 10%, based on which the mean and standard deviation of the Gaussian process are calculated. Consider the following KL expansion for the Gaussian random field:

$$g(x, \omega) = \bar{g}(x) + \sum_{i=1}^{d} g_i(x)\,\xi_i(\omega) \qquad (39)$$

where d is the order of the expansion and $\{\xi_i\}$ are a set of independent Gaussian random variables. Given this KL expansion, the lognormal random field can now be represented by the following Pth-order d-dimensional PC representation [31]:

$$E(x, \omega) = \sum_{i=0}^{P} E_i(x)\,\Psi_i(\xi(\omega)) \qquad (40)$$

where

$$E_i(x) = \bar{E}\,\exp\!\left(\bar{g}(x) + \frac{1}{2}\sum_{j=1}^{d} g_j(x)^2\right)\frac{\langle \Psi_i(\eta) \rangle}{\langle \Psi_i^2 \rangle}, \qquad \eta_i = \xi_i - g_i(x) \qquad (41)$$
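The coefficient formula of Eq. (41) can be illustrated in the simplest setting. A minimal sketch, specialized (as an assumption) to a single Gaussian dimension $d = 1$ with spatially constant $\bar{g}$ and $g_1$, where Eq. (41) reduces to the classical closed form $E_i = \bar{E}\,e^{\bar{g} + g_1^2/2}\,g_1^i/i!$ in the probabilists' Hermite basis (for which $\langle \mathrm{He}_i^2 \rangle = i!$); the numerical values and helper names are illustrative:

```python
import math
import numpy as np

g_bar, g1, E_bar = 0.0, 0.1, 1.0  # illustrative values, not the paper's field data

def pc_coeff(i):
    """i-th Hermite-chaos coefficient of E = E_bar * exp(g_bar + g1*xi)."""
    return E_bar * math.exp(g_bar + 0.5 * g1 ** 2) * g1 ** i / math.factorial(i)

def hermite_e(i, x):
    """Probabilists' Hermite polynomial He_i(x) via the three-term recurrence."""
    h_prev, h = np.ones_like(x), np.asarray(x, dtype=float)
    if i == 0:
        return h_prev
    for k in range(1, i):
        h_prev, h = h, x * h - k * h_prev  # He_{k+1} = x He_k - k He_{k-1}
    return h

xi = np.linspace(-3.0, 3.0, 61)
pce = sum(pc_coeff(i) * hermite_e(i, xi) for i in range(6))
exact = E_bar * np.exp(g_bar + g1 * xi)
print(np.max(np.abs(pce - exact)))  # truncation error of the order-5 expansion
```

In higher dimensions the same construction applies via Eq. (41), with $\eta_i = \xi_i - g_i(x)$ entering through $\langle \Psi_i(\eta) \rangle$ as in Ghanem [31].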
Fig. 9 Comparisons between the distributions of the fourth random eigenvector, obtained from Monte Carlo and subspace iteration. Shown are the PDFs of the phase angles with respect to the a) 1st, b) 5th, c) 7th, d) 9th, e) 13th, and f) 16th coordinate axes.

Using a PCE representation with Hermite polynomials of order 2 and dimension 3 for the elastic modulus, the PCE of the random stiffness matrix of the finite element model, which takes values in $\mathrm{Mat}_{20 \times 20}(\mathbb{R})$, is formed. Next, we investigate the solution of the following random eigenvalue problem:

$$K(\xi)\phi(\xi) = \lambda(\xi)\phi(\xi) \qquad (42)$$

The numerical values used for the parameters of this beam are as follows: $L = 3$ and $\bar{E} = 0.21$. The second-order three-dimensional PCE form of the four most dominant eigenpairs was calculated using the subspace iteration algorithm. For the quadrature scheme in three dimensions, the Cartesian product of five Gauss–Hermite quadrature points in each direction was used. First, we evaluate the performance of this algorithm in estimating the random eigenvectors. To this end, the PDFs of the phase angles of the random eigenvectors with respect to the Cartesian coordinate axes are plotted. The eigenpairs obtained by 30,000 Monte Carlo simulations (resulting in approximately 1% error in the standard deviations) were used as the reference for comparison. Figures 8 and 9 depict a representative set of these PDFs for the first and fourth random eigenvectors with respect to different axes, which show agreement with the results obtained from Monte Carlo simulations.
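The quadrature grid described above, a Cartesian product of five Gauss–Hermite points per stochastic dimension, can be sketched as follows. A minimal sketch using NumPy's probabilists' rule (weight $e^{-x^2/2}$), whose weights are divided by $\sqrt{2\pi}$ so they act as standard-normal probabilities; the moment checks are illustrative:

```python
import itertools
import numpy as np

# Five Gauss-Hermite(e) points per direction for the weight exp(-x^2/2);
# normalizing by sqrt(2*pi) makes the weights sum to one.
x1d, w1d = np.polynomial.hermite_e.hermegauss(5)
w1d = w1d / np.sqrt(2.0 * np.pi)

# Cartesian product: 5^3 = 125 nodes and weights in three stochastic dimensions.
nodes = np.array(list(itertools.product(x1d, repeat=3)))
weights = np.array([w[0] * w[1] * w[2] for w in itertools.product(w1d, repeat=3)])

print(len(nodes))                          # 125
print(weights.sum())                       # close to 1 (normalized measure)
print((weights * nodes[:, 0] ** 2).sum())  # close to E[xi^2] = 1
print((weights * nodes[:, 0] ** 4).sum())  # close to E[xi^4] = 3
```

A five-point rule per direction integrates one-dimensional polynomials up to degree nine exactly, which is why the second and fourth Gaussian moments are reproduced to machine precision.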

Fig. 10 Each row corresponds to a particular mode and shows the norm of the corresponding random eigenvector versus three uncertainty sources: ξ1, ξ2, and ξ3, which are independent standard normal random variables. For each value of an uncertainty source, the error bar corresponds to the variation induced by the other two independent uncertainty sources.

In the proposed algorithms, the normalization of the random vectors was imposed at a few quadrature points. To investigate whether the random vectors have unit norm almost anywhere in the stochastic domain, the norms of the four solution eigenvectors are plotted. Figure 10 shows a close agreement with the almost-sure unit-norm constraint of Eq. (12). To evaluate the performance of the subspace algorithm in estimating the random eigenvalues, the PDFs of the solution random eigenvalues are plotted in Fig. 11, which shows perfect agreement with the results obtained by Monte Carlo simulations.

Fig. 11 PDFs of the four dominant random eigenvalues (1st through 4th modes). The PDFs obtained by the subspace iteration algorithm overlap with those obtained by Monte Carlo simulations.

Furthermore, Table 1 tabulates values of the relative $L_2$ error and of the error in convergence in distribution for the four dominant random eigenpairs. The relative $L_2$ errors are taken to be the errors given by Eqs. (28) and (29) normalized by the corresponding mean values given by the MC simulations. These small errors are further evidence of the acceptable performance of the subspace iteration algorithm.

Table 1 Values of relative L2 error and KL divergence for the four dominant random eigenpairs

          Relative L2 error              KL divergence
Mode      λ              φ               λ              φ
1         1.10 × 10⁻³    5.39 × 10⁻²     4.27 × 10⁻⁸    1.12 × 10⁻²
2         8.19 × 10⁻⁴    4.28 × 10⁻²     1.44 × 10⁻⁸    1.93 × 10⁻²
3         1.50 × 10⁻⁴    1.59 × 10⁻²     1.91 × 10⁻⁹    1.47 × 10⁻²
4         1.12 × 10⁻⁴    0.61 × 10⁻²     1.10 × 10⁻⁹    1.00 × 10⁻²

V. Conclusions

In this paper, two iterative algorithms are proposed to solve the random eigenvalue problem represented in the PCE form: power iteration and subspace iteration. The former finds the dominant random eigenvalue and eigenvector, whereas the latter also finds the subdominant random eigenpairs. Calculation of these subdominant random eigenpairs results in the construction of a lower-dimensional subspace under uncertainties, in which most of the energy in the operator is captured. The projection onto this lower-dimensional subspace can significantly facilitate computational efforts. The performances of these algorithms were evaluated by comparing their solutions to the exact analytical solution of a small system and to the MC results of a larger system. In both cases, the algorithms yielded results with great accuracy.

Appendix: Proofs

In what follows, the detailed proofs for the two lemmas are presented.

Lemma 1: The PC representation of the exact dominant eigenvector is a fixed point of the iterations defined by Algorithm 2.

Proof: Let $\phi^1(\xi) = \sum_{i=0}^{P} \phi_i^1 \Psi_i(\xi)$ denote the solution to the following equation:

$$\sum_{i=0}^{P'} \sum_{j=0}^{P} A_i \phi_j^1 c_{ijk} = \sum_{i=0}^{P} \sum_{j=0}^{P} \lambda_i^1 \phi_j^1 c_{ijk}, \qquad k = 0, \ldots, P \qquad (A1)$$

where $\lambda^1$ is the largest random eigenvalue. We will show that, if we start with the PCE of the dominant eigenvector, i.e.,

$$u_i^{(0)} = \phi_i^1, \qquad i = 0, \ldots, P \qquad (A2)$$

the PCE coefficients of the output of the algorithm after one iteration will satisfy

$$\sum_{i=0}^{P} \left\| u_i^{(1)} - \phi_i^1 \right\|_{\mathbb{R}^n} \to 0 \qquad (A3)$$

Starting with $u_i^{(0)} = \phi_i^1$, the mat–vec multiplication using Galerkin projection yields

$$y_k^{(0)} = \sum_{i=0}^{P'} \sum_{j=0}^{P} A_i \phi_j^1 c_{ijk} = \sum_{i=0}^{P} \sum_{j=0}^{P} \lambda_i^1 \phi_j^1 c_{ijk}, \qquad k = 0, \ldots, P \qquad (A4)$$

with the following pointwise equality:

$$\sum_{i=0}^{P} y_i \Psi_i(\xi) = \left( \sum_{i=0}^{P} \lambda_i^1 \Psi_i(\xi) \right) \left( \sum_{i=0}^{P} \phi_i^1 \Psi_i(\xi) \right) + \epsilon_{\text{proj}}^{P}(\xi) + \epsilon_{\text{trunc}}^{P}(\xi) \quad \mu\text{-a.s.} \qquad (A5)$$

where $\epsilon_{\text{proj}}^{P}(\xi)$ represents the $L_\infty$ error incurred by the Galerkin projection of $P$th-order PCEs; i.e.,

$$\epsilon_{\text{proj}}^{P}(\xi) = \sum_{i=0}^{P} y_i \Psi_i(\xi) - \left( \sum_{i=0}^{P} \lambda_i^1 \Psi_i(\xi) \right) \left( \sum_{i=0}^{P} \phi_i^1 \Psi_i(\xi) \right) \quad \mu\text{-a.s.} \qquad (A6)$$

with the following $L_2$ property (to be interpreted componentwise):

$$\left\langle \epsilon_{\text{proj}}^{P}, \Psi_i \right\rangle = 0, \qquad i = 0, \ldots, P \qquad (A7)$$

Because of the analytic extension of the random eigenpairs in the parameter space (i.e., the tensor product of the supports of $\{\xi_i\}$), we expect the following componentwise $L_\infty$ property to hold true:

$$\epsilon_{\text{proj}}^{P}(\xi) \to 0 \quad \mu\text{-a.s. as } P \to \infty \qquad (A8)$$

The other error term in Eq. (A5), $\epsilon_{\text{trunc}}^{P}(\xi)$, refers to the order truncation; i.e.,

$$\sum_{i=0}^{\infty} y_i \Psi_i(\xi) = \sum_{i=0}^{P} y_i \Psi_i(\xi) + \epsilon_{\text{trunc}}^{P}(\xi) \quad \mu\text{-a.s.} \qquad (A9)$$

with the following property:

$$\epsilon_{\text{trunc}}^{P}(\xi) \to 0 \quad \mu\text{-a.s. as } P \to \infty \qquad (A10)$$

In the normalization step, using Eq. (15), the normalized vector at the $q$th quadrature point can be written as

$$\frac{y^{(0)}(\xi^q)}{\left\| y^{(0)}(\xi^q) \right\|_{\mathbb{R}^n}} = \frac{\sum_{i=0}^{P} \lambda_i^1 \Psi_i(\xi^q) \sum_{i=0}^{P} \phi_i^1 \Psi_i(\xi^q) + \epsilon_{\text{proj}}^{P}(\xi^q) + \epsilon_{\text{trunc}}^{P}(\xi^q)}{\left\| \sum_{i=0}^{P} \lambda_i^1 \Psi_i(\xi^q) \sum_{i=0}^{P} \phi_i^1 \Psi_i(\xi^q) + \epsilon_{\text{proj}}^{P}(\xi^q) + \epsilon_{\text{trunc}}^{P}(\xi^q) \right\|_{\mathbb{R}^n}} = \phi^1(\xi^q) \quad \text{as } P \to \infty \qquad (A11)$$

where we used the fact that the dominant random eigenvector has unit norm almost anywhere [Eq. (12)]. Now, we will show that the $i$th chaos coefficient of the normalized random vector $u(\xi)$ is equal to that of $\phi^1(\xi)$:

$$u_i^{(1)} = \sum_{q=1}^{Q} \frac{y^{(0)}(\xi^q)}{\left\| y^{(0)}(\xi^q) \right\|_{\mathbb{R}^n}} \Psi_i(\xi^q) w^q = \sum_{q=1}^{Q} \phi^1(\xi^q) \Psi_i(\xi^q) w^q = \left\langle \phi^1, \Psi_i \right\rangle + \epsilon_i^{Q} = \phi_i^1 + \epsilon_i^{Q} \quad \text{as } P, Q \to \infty \qquad (A12)$$

where $\epsilon_i^{Q}$ is the small integration error incurred by the numerical quadrature in the calculation of the $i$th chaos coefficient using $Q$ quadrature points. Thus, we can show

$$\left\| u_i^{(1)} - \phi_i^1 \right\|_{\mathbb{R}^n} \to 0 \quad \text{as } P, Q \to \infty \qquad (A13)$$

□

Lemma 2: Starting with an initial vector composed of the first two dominant eigenvectors, i.e., $u^{(0)}(\xi) = \alpha_1 \phi^1(\xi) + \alpha_2 \phi^2(\xi) = \alpha_1 \sum_i \phi_i^1 \Psi_i(\xi) + \alpha_2 \sum_i \phi_i^2 \Psi_i(\xi)$, the solution of the algorithm eventually converges, in direction, to the PCE of the dominant eigenvector.

Proof: After the Galerkin mat–vec multiplication, we have

$$y_k^{(0)} = \sum_{i=0}^{P'} \sum_{j=0}^{P} \left( \alpha_1 A_i \phi_j^1 + \alpha_2 A_i \phi_j^2 \right) c_{ijk} = \sum_{i=0}^{P} \sum_{j=0}^{P} \left( \alpha_1 \lambda_i^1 \phi_j^1 + \alpha_2 \lambda_i^2 \phi_j^2 \right) c_{ijk}, \qquad k = 0, \ldots, P \qquad (A14)$$

Similar to the proof of the preceding lemma, the $q$th normalized vector used in Eq. (15) will then be given by

$$\frac{y^{(0)}(\xi^q)}{\left\| y^{(0)}(\xi^q) \right\|_{\mathbb{R}^n}} = \frac{\alpha_1 \lambda^1(\xi^q) \phi^1(\xi^q) + \alpha_2 \lambda^2(\xi^q) \phi^2(\xi^q)}{\left\| \alpha_1 \lambda^1(\xi^q) \phi^1(\xi^q) + \alpha_2 \lambda^2(\xi^q) \phi^2(\xi^q) \right\|_{\mathbb{R}^n}} \quad \text{as } P \to \infty \qquad (A15)$$

based on which the PC coefficients of the new iteration output are calculated. The $i$th coefficient of the PCE constructed from these normalized samples, which is the output of the algorithm at the end of the first iteration, will be given by

$$u_i^{(1)} = \sum_{q=1}^{Q} \frac{y^{(0)}(\xi^q)}{\left\| y^{(0)}(\xi^q) \right\|_{\mathbb{R}^n}} \Psi_i(\xi^q) w^q \qquad (A16)$$

Because the random eigenvectors are orthonormal for almost every realization, the denominator in Eq. (A15) equals $\sqrt{\alpha_1^2 \lambda^1(\xi^q)^2 + \alpha_2^2 \lambda^2(\xi^q)^2}$, and the pointwise value of the first iterate reads

$$u^{(1)}(\xi^q) = \frac{\alpha_1 \lambda^1(\xi^q)}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^2 + \alpha_2^2 \lambda^2(\xi^q)^2}} \phi^1(\xi^q) + \frac{\alpha_2 \lambda^2(\xi^q)}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^2 + \alpha_2^2 \lambda^2(\xi^q)^2}} \phi^2(\xi^q) \quad \text{as } P, Q \to \infty \qquad (A17)$$

At the second iteration, the mat–vec step will then involve calculating the new PCE for $y^{(1)}(\xi)$ using the Galerkin projection according to

$$y_k^{(1)} = \sum_{i=0}^{P'} \sum_{j=0}^{P} A_i u_j^{(1)} c_{ijk}, \qquad k = 0, \ldots, P \qquad (A18)$$

The approximate pointwise equivalent to the preceding mat–vec product at the quadrature point $\xi^q$ reads

$$y^{(1)}(\xi^q) = A(\xi^q) u^{(1)}(\xi^q) + \epsilon_{\text{proj}}^{P}(\xi^q) + \epsilon_{\text{trunc}}^{P}(\xi^q) = A(\xi^q) \left[ \frac{\alpha_1 \lambda^1(\xi^q) \phi^1(\xi^q) + \alpha_2 \lambda^2(\xi^q) \phi^2(\xi^q)}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^2 + \alpha_2^2 \lambda^2(\xi^q)^2}} \right] \quad \text{as } P, Q \to \infty \qquad (A19)$$

so that, after normalization,

$$\frac{y^{(1)}(\xi^q)}{\left\| y^{(1)}(\xi^q) \right\|_{\mathbb{R}^n}} = \frac{\alpha_1 \lambda^1(\xi^q)^2 \phi^1(\xi^q) + \alpha_2 \lambda^2(\xi^q)^2 \phi^2(\xi^q)}{\left\| \alpha_1 \lambda^1(\xi^q)^2 \phi^1(\xi^q) + \alpha_2 \lambda^2(\xi^q)^2 \phi^2(\xi^q) \right\|_{\mathbb{R}^n}} \quad \text{as } P, Q \to \infty \qquad (A20)$$

Repeating the argument, the normalized sample at the end of the $k$th iteration is

$$\frac{y^{(k)}(\xi^q)}{\left\| y^{(k)}(\xi^q) \right\|_{\mathbb{R}^n}} = \frac{\alpha_1 \lambda^1(\xi^q)^k \phi^1(\xi^q) + \alpha_2 \lambda^2(\xi^q)^k \phi^2(\xi^q)}{\left\| \alpha_1 \lambda^1(\xi^q)^k \phi^1(\xi^q) + \alpha_2 \lambda^2(\xi^q)^k \phi^2(\xi^q) \right\|_{\mathbb{R}^n}} \quad \text{as } P, Q \to \infty \qquad (A21)$$

Thus, the $i$th PC coefficient of the $k$th iteration output will be given by

$$u_i^{(k)} = \sum_{q=1}^{Q} \left[ \frac{\alpha_1 \lambda^1(\xi^q)^k}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^{2k} + \alpha_2^2 \lambda^2(\xi^q)^{2k}}} \phi^1(\xi^q) + \frac{\alpha_2 \lambda^2(\xi^q)^k}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^{2k} + \alpha_2^2 \lambda^2(\xi^q)^{2k}}} \phi^2(\xi^q) \right] \Psi_i(\xi^q) w^q \quad \text{as } P \to \infty \qquad (A22)$$

and, by the triangle inequality,

$$\lim_{k \to \infty} \left\| u_i^{(k)} - \phi_i^1 \right\|_{\mathbb{R}^n} \le \left\| \sum_{q=1}^{Q} \frac{\alpha_1 \lambda^1(\xi^q)^k}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^{2k} + \alpha_2^2 \lambda^2(\xi^q)^{2k}}} \phi^1(\xi^q) \Psi_i(\xi^q) w^q - \phi_i^1 \right\|_{\mathbb{R}^n} + \left\| \sum_{q=1}^{Q} \frac{\alpha_2 \lambda^2(\xi^q)^k}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^{2k} + \alpha_2^2 \lambda^2(\xi^q)^{2k}}} \phi^2(\xi^q) \Psi_i(\xi^q) w^q \right\|_{\mathbb{R}^n} \qquad (A23)$$

For the coefficient of $\phi^1$,

$$\lim_{k \to \infty} \frac{\alpha_1 \lambda^1(\xi^q)^k}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^{2k} + \alpha_2^2 \lambda^2(\xi^q)^{2k}}} = \lim_{k \to \infty} \frac{\alpha_1}{|\alpha_1| \sqrt{1 + \dfrac{\alpha_2^2}{\alpha_1^2} \left( \dfrac{\lambda^2(\xi^q)}{\lambda^1(\xi^q)} \right)^{2k}}} = \frac{\alpha_1}{|\alpha_1|} \qquad (A24)$$

But, we know that

$$\frac{|\alpha_2| \lambda^2(\xi^q)^k}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^{2k} + \alpha_2^2 \lambda^2(\xi^q)^{2k}}} < \frac{|\alpha_2| \lambda^2(\xi^q)^k}{|\alpha_1| \lambda^1(\xi^q)^k} \le \frac{|\alpha_2|}{|\alpha_1|} \beta^k \qquad (A25)$$

where $\beta < 1$ is

$$\beta = \sup_{q} \frac{\lambda^2(\xi^q)}{\lambda^1(\xi^q)} \qquad (A26)$$

so the second term in Eq. (A23) vanishes:

$$\lim_{k \to \infty} \sum_{q=1}^{Q} \frac{\alpha_2 \lambda^2(\xi^q)^k}{\sqrt{\alpha_1^2 \lambda^1(\xi^q)^{2k} + \alpha_2^2 \lambda^2(\xi^q)^{2k}}} \phi^2(\xi^q) \Psi_i(\xi^q) w^q = 0 \qquad (A27)$$

Therefore, for $\alpha_1 > 0$, substituting Eqs. (A24) and (A27) into Eq. (A23), we get

$$\lim_{k \to \infty} \left\| u_i^{(k)} - \phi_i^1 \right\|_{\mathbb{R}^n} \le \left\| \sum_{q=1}^{Q} \phi^1(\xi^q) \Psi_i(\xi^q) w^q - \phi_i^1 \right\|_{\mathbb{R}^n} = \left\| \phi_i^1 + \epsilon_i^{Q} - \phi_i^1 \right\|_{\mathbb{R}^n} = \left\| \epsilon_i^{Q} \right\|_{\mathbb{R}^n} \to 0 \quad \text{as } P, Q \to \infty \qquad (A28)$$

For $\alpha_1 < 0$, we will similarly have

$$\lim_{k \to \infty} \left\| u_i^{(k)} + \phi_i^1 \right\|_{\mathbb{R}^n} \to 0 \quad \text{as } P, Q \to \infty \qquad (A29)$$

so that, in either case, the algorithm converges, in direction, to the PCE of the dominant eigenvector. □

Acknowledgment

The financial support of the U.S. Department of Energy, through an Advanced Scientific Computing Research (ASCR) grant, is gratefully acknowledged.

References

[1] Ghanem, R., and Spanos, P., Stochastic Finite Elements: A Spectral Approach, Dover, New York, 2003.
[2] Maitre, O. P. L., Knio, O. M., Najm, H. N., and Ghanem, R., “A Stochastic Projection Method for Fluid Flow: I. Basic Formulation,” Journal of Computational Physics, Vol. 173, No. 2, 2001, pp. 481–511. doi:10.1006/jcph.2001.6889
[3] Xiu, D., and Karniadakis, G. E., “Modeling Uncertainty in Flow Simulations via Generalized Polynomial Chaos,” Journal of Computational Physics, Vol. 187, No. 1, 2003, pp. 137–167. doi:10.1016/S0021-9991(03)00092-5
[4] Debusschere, B., Najm, H., Matta, A., Knio, O., Ghanem, R., and Maitre, O. P. L., “Protein Labeling Reactions in Electrochemical Microchannel Flow: Numerical Simulation and Uncertainty Propagation,” Physics of Fluids, Vol. 15, No. 8, 2003, pp. 2238–2250. doi:10.1063/1.1582857
[5] Soize, C., and Ghanem, R., “Physical Systems with Random Uncertainties: Chaos Representations with Arbitrary Probability Measure,” SIAM Journal on Scientific Computing, Vol. 26, No. 2, 2004, pp. 395–410. doi:10.1137/S1064827503424505
[6] Collins, J., and Thomson, W., “The Eigenvalue Problem for Structural Systems with Statistical Properties,” AIAA Journal, Vol. 7, No. 4, 1969, pp. 642–648. doi:10.2514/3.5180

[7] vom Scheidt, J. (ed.), Random Eigenvalue Problems, Elsevier, New York, 1984.
[8] Shinozuka, M., and Astill, C., “Random Eigenvalue Problems in Structural Analysis,” AIAA Journal, Vol. 10, No. 4, 1972, pp. 456–462. doi:10.2514/3.50119
[9] Nair, P., and Keane, A., “An Approximate Solution Scheme for the Algebraic Random Eigenvalue Problem,” Journal of Sound and Vibration, Vol. 260, No. 1, 2003, pp. 45–65. doi:10.1016/S0022-460X(02)00899-4
[10] Pradlwarter, H. J., Schueller, G. I., and Szekely, G. S., “Random Eigenvalue Problems for Large Systems,” Computers and Structures, Vol. 80, Nos. 27–30, 2002, pp. 2415–2424. doi:10.1016/S0045-7949(02)00237-7
[11] Szekely, G., and Schueller, G., “Computational Procedure for a Fast Calculation of Eigenvectors and Eigenvalues of Structures with Random Properties,” Computer Methods in Applied Mechanics and Engineering, Vol. 191, Nos. 8–10, 2001, pp. 799–816. doi:10.1016/S0045-7825(01)00290-0
[12] Rahman, S., “A Solution of the Random Eigenvalue Problem by a Dimensional Decomposition Method,” International Journal for Numerical Methods in Engineering, Vol. 67, No. 9, 2006, pp. 1318–1340. doi:10.1002/(ISSN)1097-0207
[13] Lee, C., and Singh, R., “Analysis of Discrete Vibratory Systems with Parameter Uncertainties, Part I: Eigensolution,” Journal of Sound and Vibration, Vol. 174, No. 3, 1994, pp. 379–394. doi:10.1006/jsvi.1994.1282
[14] Ghanem, R., and Ghosh, D., “Efficient Characterization of the Random Eigenvalue Problem in a Polynomial Chaos Decomposition,” International Journal for Numerical Methods in Engineering, Vol. 72, No. 4, 2007, pp. 486–504. doi:10.1002/(ISSN)1097-0207
[15] Ghosh, D., and Ghanem, R., “Stochastic Convergence Acceleration Through Basis Enrichment of Polynomial Chaos Expansions,” International Journal for Numerical Methods in Engineering, Vol. 73, No. 2, 2008, pp. 162–184. doi:10.1002/(ISSN)1097-0207
[16] Ghosh, D., Ghanem, R., and Red-Horse, J., “Analysis of Eigenvalues and Modal Interaction of Stochastic Systems,” AIAA Journal, Vol. 43, No. 10, 2005, pp. 2196–2201. doi:10.2514/1.8786
[17] Pellissetti, M., and Ghanem, R., “A Method for the Validation of Predictive Computations Using a Stochastic Approach,” Journal of Offshore Mechanics and Arctic Engineering, Vol. 126, No. 3, 2004, pp. 227–234. doi:10.1115/1.1782915
[18] Wilkinson, J. (ed.), The Algebraic Eigenvalue Problem, Oxford Univ. Press, Oxford, England, U.K., 1965.
[19] Golub, G., and van Loan, C. (eds.), Matrix Computation, 3rd ed., Johns Hopkins Univ. Press, Baltimore, MD, 1996.
[20] Bathe, K.-J. (ed.), Finite Element Procedures, Prentice–Hall, Upper Saddle River, NJ, 1996.
[21] Saad, Y. (ed.), Numerical Methods for Large Eigenvalue Problems, Manchester Univ. Press, Manchester, England, U.K., 1992.
[22] Verhoosel, C. V., Gutiérrez, M. A., and Hulshoff, S. J., “Iterative Solution of the Random Eigenvalue Problem with Application to Spectral Stochastic Finite Element Systems,” International Journal for Numerical Methods in Engineering, Vol. 68, No. 4, 2006, pp. 401–424. doi:10.1002/(ISSN)1097-0207
[23] Das, S., Ghanem, R., and Finette, S., “Polynomial Chaos Representation of Spatio-Temporal Random Fields from Experimental Measurements,” Journal of Computational Physics, Vol. 228, No. 23, 2009, pp. 8726–8751. doi:10.1016/j.jcp.2009.08.025
[24] Ghanem, R., and Doostan, A., “On the Construction and Analysis of Stochastic Models: Characterization and Propagation of the Errors Associated with Limited Data,” Journal of Computational Physics, Vol. 217, No. 1, 2006, pp. 63–81. doi:10.1016/j.jcp.2006.01.037
[25] Desceliers, C., Ghanem, R., and Soize, C., “Maximum Likelihood Estimation of Stochastic Chaos Representations from Experimental Data,” International Journal for Numerical Methods in Engineering, Vol. 66, No. 6, 2006, pp. 978–1001. doi:10.1002/(ISSN)1097-0207
[26] Riesz, M., “Sur le Problème des Moments et le Théorème de Parseval Correspondant,” Acta Scientiarum Mathematicarum (Szeged), Vol. 1, 1922, pp. 209–225.
[27] Berg, C., “Moment Problems and Polynomial Approximation,” Annales de la Faculté des Sciences de Toulouse, Vol. 6, No. S5, 1996, pp. 9–32. doi:10.5802/afst.847
[28] Bai, Z., Demmel, J., Dongarra, J., Ruhe, A., and van der Vorst, H., Templates for the Solution of Algebraic Eigenvalue Problems, Soc. for Industrial and Applied Mathematics, Philadelphia, 2000.
[29] Davis, C., and Kahan, W. M., “The Rotation of Eigenvectors by a Perturbation, III,” SIAM Journal on Numerical Analysis, Vol. 7, No. 1, 1970, pp. 1–46. doi:10.1137/0707001
[30] Desceliers, C., Ghanem, R., and Soize, C., “Polynomial Chaos Representation of a Stochastic Preconditioner,” International Journal for Numerical Methods in Engineering, Vol. 64, No. 5, 2005, pp. 618–634. doi:10.1002/(ISSN)1097-0207
[31] Ghanem, R., “Ingredients for a General Purpose Stochastic Finite Elements Implementation,” Computer Methods in Applied Mechanics and Engineering, Vol. 168, Nos. 1–4, 1999, pp. 19–34. doi:10.1016/S0045-7825(98)00106-6

R. Kapania
Associate Editor
