
CHAPTER 3

Basics of Random Processes and Structural Dynamics

3.1 Random Processes


A random process is a continuous physical process influenced by nondeterministic
factors. In the Kth experiment, a random process X(t) generates a record X (K)(t),
which is the Kth realization or sample function of the process. The random nature of
the process is reflected in the fact that no two records are identical in every aspect, as
shown in Figure 3.1. The collection of all possible realizations of the random process
is called the ensemble of realizations. While each realization is a definite function of
t in the sense that X (K)(t) is an ordinary function for fixed K, the ensemble of all
realizations can be specified only statistically. For any particular value of t, X(t) is a
random variable.
A random process may be a function of a single variable such as time (e.g., the
vertical displacement of the support of equipment as a function of time during an
earthquake) or may depend on several independent variables (e.g., the vertical acceler-
ation of soil surface as a function of both time and location during an earthquake).

3.1.1 Probability Distribution and Density Functions


A random process is described by its various probability distribution functions. With
reference to Figure 3.1, at a fixed time instance t, the first-order probability distribution
function of X(t) is defined as
 
    F_1(x, t) = P{X(t) < x}.                                                  (3.1.1)

If the ensemble consists of N sample functions, out of which there are n realizations
with X(t) < x, then the first-order probability distribution function defined in equation
Figure 3.1 Sample functions X^(K)(t) of a random process (realizations X^(1), X^(2), X^(3); ensemble averaging at fixed instants t_1, t_2 versus time averaging along a single record).

(3.1.1) is approximately, for N large,

    F_1(x, t) ≈ n/N.
Now consider two time instances, t1 and t2 . The probability that X(t1 ) < x1 and
X(t2 ) < x2 is known as the second-order probability distribution function, i.e.,
 
    F_2(x_1, t_1; x_2, t_2) = P{X(t_1) < x_1, X(t_2) < x_2}.                  (3.1.2)

If n_12 is the number of sample functions with X(t_1) < x_1 and X(t_2) < x_2, then for large N

    F_2(x_1, t_1; x_2, t_2) ≈ n_12/N.
Similarly, the probability distribution function of order n is defined as
 
    F_n(x_1, t_1; x_2, t_2; ...; x_n, t_n) = P{X(t_1) < x_1, X(t_2) < x_2, ..., X(t_n) < x_n}.   (3.1.3)

A random process is said to be completely specified if its distribution functions of


all orders, i.e., n = 1, 2, . . . , are known. The corresponding probability density function
of order n, n = 1, 2, . . . , is defined by

    p_n(x_1, t_1; x_2, t_2; ...; x_n, t_n) = ∂^n F_n(x_1, t_1; x_2, t_2; ...; x_n, t_n) / (∂x_1 ∂x_2 ··· ∂x_n),   (3.1.4)

i.e.,

    p_n(x_1, t_1; ...; x_n, t_n) dx_1 dx_2 ··· dx_n
        = P{x_1 < X(t_1) < x_1+dx_1; x_2 < X(t_2) < x_2+dx_2; ...; x_n < X(t_n) < x_n+dx_n}.
It is usually either unnecessary or impossible to specify the probability distribution


functions of all orders. For many practical purposes, the knowledge of only the first-
order and second-order probability distribution functions is sufficient. In particular,
for a Gaussian random process, the probability distribution functions of the first two
orders describe the process completely, as will be seen in Section 3.1.6.
If the probability distribution functions Fn (x1 , t1 ; x2 , t2 ; . . . ; xn , tn ) are invariant
under a change of time origin, i.e., if

Fn (x1 , t1 ; x2 , t2 ; . . . ; xn , tn ) = Fn (x1 , t1 +τ ; x2 , t2 +τ ; . . . ; xn , tn +τ ), (3.1.5)

for all orders n and any value of τ , the random process is said to be stationary. This
implies that the first-order probability distribution is independent of time, and the
second-order probability distribution depends only on the time difference, i.e.

F1 (x, t) = F1 (x);
(3.1.6)
F2 (x1 , t1 ; x2 , t2 ) = F2 (x1 , x2 ; t2 −t1 ) = F2 (x1 , x2 ; τ ), τ = t2 −t1 .

A random process can be expected to be stationary when the physical factors influenc-
ing it do not change with time. For example, the wind pressure on a building will be
stationary when the wind flow is steady, whereas ground accelerations of an earthquake
are nonstationary random processes.

3.1.2 Averages and Moments


For many physical applications, it is extremely difficult to determine all the probability
distribution functions from the available data. Furthermore, such detailed information
is often unnecessary. In these circumstances, one may have to be content with knowing
only certain average properties of the random process. Two types of average can be
defined for a random process X(t):
❧ The ensemble average for a fixed value t of the time parameter, denoted by E[X(t)]
  or ⟨X(t)⟩, is obtained by evaluating the average of the random variable X(t) over
  the ensemble of possible realizations:

      E[X(t)] = lim_{N→∞} (1/N) Σ_{K=1}^{N} X^(K)(t).                         (3.1.7)

❧ The time average, denoted by X̄(t), is determined by selecting a particular realization
  X^(K)(t) and computing the average of X^(K)(t) over a large time period:

      X̄(t) = lim_{T→∞} (1/T) ∫_0^T X^(K)(t) dt.                              (3.1.8)

Both of these averages can also be defined for any functions of a random process.
The simplest of such functions are the polynomials X^{K_1}(t_1) X^{K_2}(t_2) ··· X^{K_n}(t_n),
whose averages are called the moments.
The most important of the moments obtained from the first-order probability dis-
tribution are the mean (or expected value) and the mean-square value defined by
    μ_X(t) = E[X(t)] = ∫_{−∞}^{+∞} x p_1(x, t) dx,
                                                                              (3.1.9)
    X²_rms(t) = E[X²(t)] = ∫_{−∞}^{+∞} x² p_1(x, t) dx,

where X_rms is the root-mean-square value of X(t). The variance of X(t) is defined by

    σ²_X(t) = E[(X(t) − μ_X(t))²] = E[X²(t)] − μ²_X(t),                       (3.1.10)

which is the expectation of the square of the deviation from the mean. The positive
square root of the variance σX (t) is called the standard deviation. For a random process
with zero mean value, Xrms (t) = σX (t).
For a stationary random process, because p1 (x, t) is independent of t, all of these
averages are also independent of t.

3.1.3 Correlation Functions and Power Spectral Density Functions


The most important average obtained from the second-order probability distribution
is the autocorrelation function
    R_XX(t_1, t_2) = E[X(t_1)X(t_2)] = ∫_{−∞}^{+∞}∫_{−∞}^{+∞} x_1 x_2 p_2(x_1, t_1; x_2, t_2) dx_1 dx_2.   (3.1.11)

The prefix auto- indicates that the two random variables considered, X(t1 ) and X(t2 ),
belong to the same random process. A related quantity is the covariance function
defined by
   
    K_XX(t_1, t_2) = E[(X(t_1)−μ_1)(X(t_2)−μ_2)] = R_XX(t_1, t_2) − μ_1 μ_2,   (3.1.12)

where μ_I = E[X(t_I)], I = 1, 2.
The counterpart of the autocorrelation function is the cross-correlation function,
defined as
    R_XY(t_1, t_2) = E[X(t_1)Y(t_2)] = ∫_{−∞}^{+∞}∫_{−∞}^{+∞} x y p_XY(x, t_1; y, t_2) dx dy,   (3.1.13)

where X(t1 ) and Y(t2 ) belong to two different random processes X(t) and Y(t).
Two random processes X(t1 ) and Y(t2 ) are independent if

pXY (x, t1 ; y, t2 ) = pX (x, t1 ) pY ( y, t2 ),



and equation (3.1.13) becomes


    E[X(t_1)Y(t_2)] = ∫_{−∞}^{+∞} x p_X(x, t_1) dx ∫_{−∞}^{+∞} y p_Y(y, t_2) dy = E[X(t_1)] E[Y(t_2)].

For a stationary random process, the autocorrelation function depends on the time
difference t2 −t1 only and is usually denoted by RXX (τ ), where τ = t2 −t1 . Without
loss of generality, suppose that X(t) has zero mean value. The autocorrelation function
RXX (τ ) possesses the following properties:
1. R_XX(0) = E[X²(t)] = X²_rms(t) = σ²_X, the mean-square value of X(t).
2. R_XX(τ) is symmetric about τ = 0, i.e., R_XX(τ) = R_XX(−τ).
3. It can be shown that R_XX(0) ≥ |R_XX(τ)|. Hence, R′_XX(0), if it exists, must be zero,
   and R″_XX(0), if it exists, is negative.
4. If there is no periodic component, then lim_{τ→±∞} R_XX(τ) = 0.
5. R_XX(τ) may be expressed in the following form

       R_XX(τ) = F^{−1}[S_XX(ω)] = (1/2π) ∫_{−∞}^{+∞} S_XX(ω) e^{iωτ} dω,    (3.1.14a)

   i.e., R_XX(τ) is the inverse Fourier transform of S_XX(ω), where S_XX(ω) is a real
   positive even function of frequency ω. Hence, S_XX(ω) is the Fourier transform of
   R_XX(τ) given by

       S_XX(ω) = F[R_XX(τ)] = ∫_{−∞}^{+∞} R_XX(τ) e^{−iωτ} dτ.               (3.1.14b)

Function SXX (ω) is known as the power spectral density (PSD) function of the
random process X(t). Equations (3.1.14) connect the autocorrelation function and
the power spectral density function of a stationary random process.
The meaning of the term “power spectral density” becomes clear when looking at
the relation
    E[X²(t)] = R_XX(0) = (1/2π) ∫_{−∞}^{+∞} S_XX(ω) dω = ∫_{−∞}^{+∞} S_XX(F) dF,      ω = 2πF.   (3.1.15)

If X(t) is considered as a current in an electrical circuit, then E[ X 2 (t) ] is the average


power dissipated in a unit resistor, and SXX ( F ) is the contribution to this power at
the frequency F from a band of width dF.
Some common examples of autocorrelation functions and their corresponding power
spectral density functions are:
1. Periodic functions:

RXX (τ ) = A2 cosω0 τ ,
Figure 3.2 Autocorrelation functions and corresponding power spectral density functions: (a) periodic process, (b) exponential autocorrelations with α_1 < α_2 < α_3, (c) white noise.
    S_XX(ω) = A² ∫_{−∞}^{+∞} e^{−iωτ} cos ω_0τ dτ = πA² [δ(ω+ω_0) + δ(ω−ω_0)],

where δ(·) denotes the Dirac delta function. Thus, all the power is concentrated at
the frequencies ω = ±ω0 as shown in Figure 3.2(a).
2. Exponential autocorrelation function as shown in Figure 3.2(b):

       R_XX(τ) = A² e^{−α|τ|},      S_XX(ω) = 2αA²/(α² + ω²).
3. White noise process:

       R_XX(τ) = A² δ(τ),      S_XX(ω) = A².                                  (3.1.16)
The power spectral density of a white noise process is constant over all frequencies
as shown in Figure 3.2(c). A white noise process is clearly not physically realizable
because its total power is infinite. However, it is a convenient mathematical ide-
alization for a process whose power spectral density function remains practically
constant over a wide band of frequency.
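As a numerical aside (not part of the original text), the Wiener–Khinchin pair (3.1.14) can be checked directly for the exponential autocorrelation of example 2. The following Python sketch, with illustrative parameter values, compares the numerically computed Fourier transform of R_XX(τ) = A²e^{−α|τ|} with the closed-form PSD 2αA²/(α²+ω²):

# Illustrative sketch (not from the book): check the pair (3.1.14) numerically
# for R_XX(tau) = A^2 exp(-alpha |tau|), whose PSD is 2*alpha*A^2/(alpha^2 + omega^2).
import numpy as np

A, alpha = 1.0, 2.0
tau = np.linspace(-50.0, 50.0, 20001)
dtau = tau[1] - tau[0]
R = A**2 * np.exp(-alpha * np.abs(tau))            # autocorrelation function

omega = np.linspace(-20.0, 20.0, 401)
# S_XX(omega) = integral of R(tau) exp(-i omega tau) dtau, equation (3.1.14b)
S_num = np.array([np.sum(R * np.exp(-1j * w * tau)).real * dtau for w in omega])
S_exact = 2 * alpha * A**2 / (alpha**2 + omega**2)
print(np.max(np.abs(S_num - S_exact)))             # close to zero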

3.1.4 Fourier Amplitude Spectra


Power spectral density function SXX (ω) is very important in characterizing a station-
ary random process X(t). For a transient nonstationary random process, Fourier
amplitude spectrum (FAS) is often used.
The Fourier transform of a random process X(t) is

    F[X(t)] = X(ω) = ∫_{−∞}^{+∞} X(t) e^{−iωt} dt,                            (3.1.17)

in which |X(ω)| is called the Fourier amplitude spectrum (FAS). The inverse of the Fourier
transform is

    X(t) = F^{−1}[X(ω)] = (1/2π) ∫_{−∞}^{+∞} X(ω) e^{iωt} dω.                (3.1.18)
From equation (B.2.13), the total energy e of random process X(t) is given by Parseval's
theorem

    e = ∫_{−∞}^{+∞} X²(t) dt = (1/2π) ∫_{−∞}^{+∞} |X(ω)|² dω,                (3.1.19)

in which |X(ω)|² is the energy spectral density (ESD) function.
For a transient nonstationary random process X(t) defined over a finite time, the
mean-square value is then given by
    E[X²(t)] = X²_rms = (1/T_rms) ∫_{−∞}^{+∞} X²(t) dt = (1/T_rms)·(1/2π) ∫_{−∞}^{+∞} |X(ω)|² dω,   (3.1.20)

where Trms is the time interval suitable for calculating the mean-square or root-mean-
square value of X(t), such as the total duration of X(t).
☞ It is appropriate to characterize a stationary process in terms of power, which
is the energy in unit time, and power spectral density function.
☞ It is suitable to describe a transient nonstationary random process in terms of
total energy, Fourier amplitude spectrum, and energy spectral density function.

3.1.5 Ergodic Random Processes


The averages discussed so far have all been ensemble averages. The practical evaluation
of such averages can be a challenging task because a very large number of sample
functions are required. On the other hand, the time average over a particular record
defined by equation (3.1.8) can be easily computed.
It is then most desirable to know under what conditions the two types of averages
are equal. A random process whose averages possess this property is said to be ergodic.
In essence, ergodicity means that any particular sample function used to calculate
the time averages can be expected to be typical of the whole ensemble. For a typical
record X (K)(t) of an ergodic random process, one has
    E[X(t)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} X^(K)(t) dt,
                                                                              (3.1.21)
    R_XX(τ) = E[X(t)X(t+τ)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} X^(K)(t) X^(K)(t+τ) dt.

Because time averages are independent of time, stationarity is a necessary condition


for ergodicity. Many practical random processes are often assumed to be ergodic.
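To make the idea concrete, the following Python sketch (an illustration added here, not from the original text) estimates the mean and autocorrelation of an ergodic process from a single long record by time averaging, using an exponentially correlated AR(1) sequence as the sample process; all parameter values are assumptions for the example.

# Illustrative sketch: time averages over one long record of an ergodic process,
# compared with the known ensemble values (mean 0, R(0) = 1, R(0.5) = exp(-1)).
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
rho = np.exp(-2.0 * dt)                     # AR(1) coefficient -> R(tau) ~ exp(-2|tau|)
x = np.empty(n)
x[0] = rng.standard_normal()
for k in range(1, n):
    x[k] = rho * x[k - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()

mean_time = x.mean()                        # time-averaged mean, first of (3.1.21)
def R_hat(lag):                             # time-averaged autocorrelation, second of (3.1.21)
    return np.mean(x[: n - lag] * x[lag:])

print(mean_time, R_hat(0), R_hat(int(0.5 / dt)))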

3.1.6 Gaussian Processes


An important random process is the Gaussian process; the probability density functions
are given by

    p_1(x, t) = 1/(√(2π) σ(t)) exp[ −(x−μ(t))²/(2σ²(t)) ],      μ(t) = E[X(t)],      σ²(t) = E[(X(t)−μ(t))²],

    p_2(x_1, t_1; x_2, t_2) = 1/(2π σ_1 σ_2 √(1−ρ²))
        × exp{ −1/(2(1−ρ²)) [ (x_1−μ_1)²/σ_1² − 2ρ (x_1−μ_1)(x_2−μ_2)/(σ_1σ_2) + (x_2−μ_2)²/σ_2² ] },   (3.1.22)

where, for I = 1, 2,

    μ_I = E[X(t_I)],      σ_I² = E[(X(t_I)−μ_I)²],      ρ(t_1, t_2) = E[(X(t_1)−μ_1)(X(t_2)−μ_2)]/(σ_1 σ_2),

and generally, for n = 1, 2, ...,

    p_n(x_1, t_1; ...; x_n, t_n) = 1/((2π)^{n/2} |Λ|^{1/2}) exp[ −1/(2|Λ|) Σ_{I=1}^{n} Σ_{j=1}^{n} λ_Ij (x_I−μ_I)(x_j−μ_j) ],   (3.1.23a)

where Λ is the covariance matrix with elements

    Λ_Ij = E[(X(t_I)−μ_I)(X(t_j)−μ_j)],

and λ_Ij is the cofactor of the element Λ_Ij in Λ. In matrix form, one has

    p_n(x, t) = 1/((2π)^{n/2} |Λ|^{1/2}) exp[ −(1/2)(x−m)^T Λ^{−1} (x−m) ].   (3.1.23b)

A Gaussian random process is completely characterized by its mean μ(t) and covariance
function K(t_1, t_2). Many physical processes, resulting from the superposition of a
large number of random factors, are often modelled as Gaussian. This assumption is a
consequence of the central limit theorem, which states that, under very general conditions,
a random variable that occurs as the sum of many small independent random variables,
each of which is almost negligible in itself, is approximately Gaussian, whatever the
distributions of the component variables are.

3.2 Properties of Random Processes


3.2.1 Spectral Parameters
Let X(t) be a zero-mean stationary random process having power spectral density
function S_XX(ω). Referring to equation (3.1.15), the moments λ_n are defined by

    λ_n = (1/2π) ∫_{−∞}^{+∞} |ω|^n S_XX(ω) dω = (1/π) ∫_0^{+∞} ω^n S_XX(ω) dω.   (3.2.1)

Differentiating equation (3.1.14a) with respect to τ yields

    R′_XX(τ) = (1/2π) ∫_{−∞}^{+∞} iω S_XX(ω) e^{iωτ} dω
             = ∂/∂τ E[X(t+τ)X(t)] = E[Ẋ(t+τ)X(t)] = R_XẊ(τ) = E[Ẋ(t)X(t−τ)],

    R″_XX(τ) = −(1/2π) ∫_{−∞}^{+∞} ω² S_XX(ω) e^{iωτ} dω
             = ∂/∂τ E[Ẋ(t)X(t−τ)] = −E[Ẋ(t)Ẋ(t−τ)] = −R_ẊẊ(τ).

Hence,

    λ_0 = (1/2π) ∫_{−∞}^{+∞} S_XX(ω) dω = R_XX(0) = σ²_X,                     (3.2.2)

    λ_2 = (1/2π) ∫_{−∞}^{+∞} ω² S_XX(ω) dω = R_ẊẊ(0) = σ²_Ẋ,                (3.2.3)

    λ_4 = (1/2π) ∫_{−∞}^{+∞} ω⁴ S_XX(ω) dω = R_ẌẌ(0) = σ²_Ẍ.                (3.2.4)

The spectral parameter q is defined as

    q = √(1 − λ_1²/(λ_0 λ_2)).                                                (3.2.5)

From Schwarz's inequality, 0 ≤ λ_1²/(λ_0 λ_2) ≤ 1; therefore, 0 ≤ q ≤ 1. q is a measure of
the degree of dispersion or spread of S_XX(ω) about its central frequency ω_1 = λ_1/λ_0,
which is the frequency-coordinate of the centroid of S_XX(ω).
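The spectral moments and q are straightforward to evaluate numerically. The sketch below (an added illustration, with an assumed band-limited white-noise PSD) computes λ_0, λ_1, λ_2, λ_4 and the parameter q of (3.2.5), and, anticipating the next subsection, the rates ν_X^+(0), ν_p and the irregularity factor I of equations (3.2.8), (3.2.11), and (3.2.12):

# Illustrative sketch: spectral moments (3.2.1) and derived parameters for a
# band-limited white-noise PSD S_XX(omega) = S0 for omega_a <= |omega| <= omega_b.
import numpy as np

S0, wa, wb = 1.0, 5.0, 15.0
w = np.linspace(0.0, 30.0, 30001)
S = np.where((w >= wa) & (w <= wb), S0, 0.0)
dw = w[1] - w[0]

lam = {n: (1.0 / np.pi) * np.sum(w**n * S) * dw for n in (0, 1, 2, 4)}   # eq. (3.2.1)
q = np.sqrt(1.0 - lam[1]**2 / (lam[0] * lam[2]))                         # eq. (3.2.5)
nu0 = np.sqrt(lam[2] / lam[0]) / (2 * np.pi)     # zero-upcrossing rate, eq. (3.2.8)
nu_p = np.sqrt(lam[4] / lam[2]) / (2 * np.pi)    # rate of peaks, eq. (3.2.11)
I = lam[2] / np.sqrt(lam[0] * lam[4])            # irregularity factor, eq. (3.2.12)
print(q, nu0, nu_p, I)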

3.2.2 Rate of Occurrence of Crossings or Peaks


Consider a random process X(t) and a level X = u. The event X(t) = u with Ẋ(t) > 0
is called upcrossing, and the event X(t) = u with Ẋ(t) < 0 is called downcrossing, as
shown in Figure 3.3. Let νX+ (u, t) and νX− (u, t) be the expected rates of upcrossing and
downcrossing the level X = u, respectively.
Figure 3.3 Level crossing and counting process: X(t) with upcrossings and downcrossings of the level u, Z(t) = H[X(t)−u], Ż(t), Ż(t)H[Ẋ(t)], and the counting process N_X^+(u, t).
 
Define a process Z(t) = H[X(t)−u] as shown in Figure 3.3, where H(x) is the
Heaviside step function with H(x) = 1, if x > 0, and H(x) = 0, if x < 0.
Differentiating Z(t) with respect to t yields Ż(t) = δ[X(t)−u] Ẋ(t), where δ(t) is
the Dirac delta function. Ż(t) is a positive delta function when there is an upcrossing of
level X = u, and a negative delta function when there is a downcrossing of level X = u.
Multiplying Ż(t) by H[Ẋ(t)] eliminates downcrossings of level X = u. Integrating
Ż(t) H[Ẋ(t)] results in the counting process N_X^+(u, t), which is the total number of
upcrossings in time [0, t]:

    N_X^+(u, t) = ∫_0^t Ż(s) H[Ẋ(s)] ds.

Hence, the expected rate of upcrossing level X = u is the expected value of the derivative
of N_X^+(u, t), i.e.,

    ν_X^+(u, t) = E[ dN_X^+(u, t)/dt ] = E[ Ż(t) H(Ẋ(t)) ] = E[ δ(X(t)−u) Ẋ(t) H(Ẋ(t)) ]

        = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} δ(x−u) ẋ H(ẋ) p_XẊ(x, ẋ) dx dẋ

        = ∫_{−∞}^{+∞} ẋ H(ẋ) p_XẊ(u, ẋ) dẋ = ∫_0^{+∞} ẋ p_XẊ(u, ẋ) dẋ.

Using

    p_XẊ(x, ẋ) = p_X(x) p_Ẋ(ẋ | X(t) = x)

yields the expected rate of upcrossing the level X = u:

    ν_X^+(u, t) = ∫_0^{+∞} ẋ p_XẊ(u, ẋ) dẋ = p_X(u) ∫_0^{+∞} ẋ p_Ẋ(ẋ | X(t) = u) dẋ.   (3.2.6)

Equation (3.2.6) is very general, applicable to any stationary or nonstationary process


with any probability distribution.
Recall that the event X(t) = u with Ẋ(t) < 0 is downcrossing. Reversing the sign of
Ẋ(t) in equation (3.2.6) gives the expected rate of downcrossing the level X = u:
    ν_X^−(u, t) = −∫_{−∞}^{0} ẋ p_XẊ(u, ẋ) dẋ = −p_X(u) ∫_{−∞}^{0} ẋ p_Ẋ(ẋ | X(t) = u) dẋ.   (3.2.7)

Stationary Gaussian Process with Zero Mean


For a stationary Gaussian process with zero mean, its joint probability density is

    p_XẊ(x, ẋ) = 1/(2π σ_X σ_Ẋ) exp[ −(1/2)(x²/σ²_X + ẋ²/σ²_Ẋ) ].

Substituting into equation (3.2.6) gives

    ν_X^+(u) = 1/(2π σ_X σ_Ẋ) exp(−u²/(2σ²_X)) ∫_0^{+∞} ẋ exp(−ẋ²/(2σ²_Ẋ)) dẋ = (1/2π)(σ_Ẋ/σ_X) exp(−u²/(2σ²_X)),

i.e.,

    ν_X^±(u) = ν_X^±(0) exp(−u²/(2σ²_X)),      ν_X^±(0) = (1/2π)(σ_Ẋ/σ_X) = (1/2π)√(λ_2/λ_0).   (3.2.8)
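As a check on (3.2.8) (an added illustration, not in the original text), the sketch below simulates a zero-mean Gaussian process by a spectral, sum-of-cosines representation of an assumed band-limited PSD, counts the upcrossings of a level u directly, and compares the empirical rate with the formula:

# Illustrative sketch: empirical vs. theoretical upcrossing rate, eq. (3.2.8),
# for a Gaussian process with band-limited white-noise PSD (two-sided level S0).
import numpy as np

rng = np.random.default_rng(1)
S0, wa, wb = 1.0, 5.0, 15.0
w = np.linspace(wa, wb, 400)
dw = w[1] - w[0]
lam0 = 2 * S0 * (wb - wa) / (2 * np.pi)            # sigma_X^2
lam2 = 2 * S0 * np.sum(w**2) * dw / (2 * np.pi)    # sigma_Xdot^2

T, dt = 500.0, 0.01
t = np.arange(0.0, T, dt)
amp = np.sqrt(2 * (2 * S0) * dw / (2 * np.pi))     # harmonic amplitude (one-sided PSD = 2*S0)
X = np.zeros(t.size)
for wk, ph in zip(w, rng.uniform(0, 2 * np.pi, w.size)):
    X += amp * np.cos(wk * t + ph)                 # spectral representation of the process

u = 1.0
nu_sim = np.sum((X[:-1] < u) & (X[1:] >= u)) / T   # counted upcrossings per unit time
nu_theory = (1 / (2 * np.pi)) * np.sqrt(lam2 / lam0) * np.exp(-u**2 / (2 * lam0))
print(nu_sim, nu_theory)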

Rate of Occurrence of Peaks


Because a peak of X(t) occurs whenever Ẋ(t) = 0 and Ẍ(t) < 0, the rate of occurrence
of peaks of X(t) is the rate of downcrossing of the level Ẋ = 0 by Ẋ(t). Hence, from
equation (3.2.7), the rate of occurrence of peaks of X(t) is
    ν_p(t) = ν_Ẋ^−(0, t) = −∫_{−∞}^{0} ẍ p_ẊẌ(0, ẍ) dẍ = −p_Ẋ(0) ∫_{−∞}^{0} ẍ p_Ẍ(ẍ | Ẋ(t) = 0) dẍ.   (3.2.9)

Between any two upcrossings of the same level u, at least one peak must occur. Hence,
ν_p ≥ ν_X^+(u, t) for any u and for any process X(t) with continuous time derivatives.
For a narrow-band process, the rate of occurrence of peaks is expected to be only
slightly larger than the rate of upcrossings of the mean μ_X = E[X(t)]. This property is
commonly used to provide a measure of bandwidth through the irregularity factor:

    I = ν_X^+(μ_X, t)/ν_p(t),      0 < I ≤ 1.                                 (3.2.10)

For a narrow-band process, I tends to 1.
Stationary Gaussian Process with Zero Mean

For a stationary Gaussian process with zero mean, referring to equation (3.2.8),

    ν_p = ν_Ẋ^−(0) = (1/2π)(σ_Ẍ/σ_Ẋ) = (1/2π)√(λ_4/λ_2),                     (3.2.11)

and the irregularity factor is

    I = ν_X^+(0)/ν_p = σ²_Ẋ/(σ_X σ_Ẍ)   =⇒   I = λ_2/√(λ_0 λ_4).             (3.2.12)

3.2.3 Probability Distribution of Peaks


 
Define ν_p(t, X(t) ≤ u) as the expected rate of occurrence of peaks not exceeding the
level u. During an infinitesimal time interval dt, there is either one peak or none.
Hence, the expected number of occurrences in the interval dt is the same as the
probability of one occurrence in dt, i.e.,

    ν_p(t, X(t) ≤ u) dt = P{Peak ≤ u during [t, t+dt]}   =⇒   ν_p(t) dt = P{Peak during [t, t+dt]},

where ν_p(t) = lim_{u→∞} ν_p(t, X(t) ≤ u) is the total expected rate of peak occurrences. Using
conditional probability, one has

    P{Peak ≤ u during [t, t+dt]}
        = P{Peak during [t, t+dt]} · P{Peak ≤ u | Peak during [t, t+dt]}.

Because P{Peak ≤ u | Peak during [t, t+dt]} = F_p(t)(u), where F_p(t) is the cumulative
distribution function for a peak at time t, hence

    F_p(t)(u) = P{Peak ≤ u during [t, t+dt]} / P{Peak during [t, t+dt]} = ν_p(u, t)/ν_p(t).   (3.2.13)
 
To determine ν_p(u, t), define a process Z(t) = H[−Ẋ(t)] as shown in Figure 3.4.
Differentiating Z(t) with respect to t yields Ż(t) = δ[−Ẋ(t)] · [−Ẍ(t)], which is a
positive delta function when there is a peak and a negative delta function when there is
a valley. Multiplying Ż(t) by H[−Ẍ(t)] eliminates all negative Dirac delta functions,
i.e., the valleys. Multiplying the result by H[u−X(t)] further eliminates all peaks above the
level u. Integrating the result leads to the counting process N_p(u, t), which is the total
number of peaks not exceeding the level u in time [0, t]:

    N_p(u, t) = ∫_0^t [−Ẍ(s)] · δ[−Ẋ(s)] · H[−Ẍ(s)] · H[u−X(s)] ds.

Hence, the rate of occurrence of peaks not exceeding the level u is the expected value
of the derivative of N_p(u, t), i.e.,

    ν_p(u, t) = E[ dN_p(u, t)/dt ] = E[ (−Ẍ(t)) · δ(−Ẋ(t)) · H(−Ẍ(t)) · H(u−X(t)) ].   (3.2.14)
Figure 3.4 Determination of N_p(u, t): X(t) with level u, H[Ẋ(t)], H[−Ẋ(t)], δ[−Ẋ(t)]·[−Ẍ(t)], the products with H[−Ẍ(t)] and H[u−X(t)], and the counting process N_p(u, t).

Substituting into equation (3.2.13) yields

    F_p(t)(u) = E[ (−Ẍ(t)) · δ(−Ẋ(t)) · H(−Ẍ(t)) · H(u−X(t)) ] / E[ (−Ẍ(t)) · δ(−Ẋ(t)) · H(−Ẍ(t)) ]

        = [ ∫_{−∞}^{+∞}∫_{−∞}^{+∞}∫_{−∞}^{+∞} (−ẍ) δ(−ẋ) H(−ẍ) H(u−x) p_XẊẌ(x, ẋ, ẍ) dx dẋ dẍ ]
          / [ ∫_{−∞}^{+∞}∫_{−∞}^{+∞} (−ẍ) δ(−ẋ) H(−ẍ) p_ẊẌ(ẋ, ẍ) dẋ dẍ ]

        = [ ∫_{−∞}^{0} ∫_{−∞}^{u} |ẍ| p_XẊẌ(x, 0, ẍ) dx dẍ ] / [ ∫_{−∞}^{0} |ẍ| p_ẊẌ(0, ẍ) dẍ ].   (3.2.15)
Differentiating with respect to u gives the probability density function for the peak

    p_p(t)(u) = [ ∫_{−∞}^{0} |ẍ| p_XẊẌ(u, 0, ẍ) dẍ ] / [ ∫_{−∞}^{0} |ẍ| p_ẊẌ(0, ẍ) dẍ ].   (3.2.16)

Equations (3.2.15) and (3.2.16) give the probability distribution of a peak that occurs
in the vicinity of time t. Other quantities can be easily obtained, such as the mean and
the mean-square value

    μ_p = E[p(t)] = ∫_{−∞}^{+∞} u p_p(t)(u) du,      E[p²(t)] = ∫_{−∞}^{+∞} u² p_p(t)(u) du,   (3.2.17)

and the variance σ²_p = E[p²(t)] − μ²_p.


☞ Equations (3.2.15) to (3.2.17) are the conditional probability distribution and
conditional moments given the existence of peaks.
☞ To find the probability distribution of the peak p(t), one needs the joint proba-
bility distribution of X(t), Ẋ(t), and Ẍ(t) because the occurrence of a peak p(t)
at level u requires the intersection of the events X(t) = u, Ẋ(t) = 0, and Ẍ(t) < 0.

Stationary Gaussian Process with Zero Mean


 
For a stationary Gaussian process X(t), Ẋ(t) is independent of (X(t), Ẍ(t)). p_Ẋ(0) can
be factored out of both the numerator and denominator of equation (3.2.16):

    p_p(u) = [ ∫_{−∞}^{0} |ẍ| p_XẌ(u, ẍ) dẍ ] / [ ∫_{−∞}^{0} |ẍ| p_Ẍ(ẍ) dẍ ]
           = (√(2π)/σ_Ẍ) ∫_{−∞}^{0} |ẍ| p_XẌ(u, ẍ) dẍ

           = (√(2π)/σ_Ẍ) p_X(u) ∫_{−∞}^{0} |ẍ| p_Ẍ(ẍ | X = u) dẍ.        (using conditional probability)

The conditional probability density function of a Gaussian process is also Gaussian, i.e.,

    p_Ẍ(ẍ | X = u) = 1/(√(2π) σ̂) exp[ −(ẍ − μ̂)²/(2σ̂²) ],

where the conditional mean and standard deviation of Ẍ(t) are

    μ̂ = ρ_XẌ (σ_Ẍ/σ_X) u,      σ̂ = σ_Ẍ √(1−ρ²_XẌ),      ρ_XẌ = −σ²_Ẋ/(σ_X σ_Ẍ) = −I.   (3.2.18)

Hence,

    p_p(u) = −(1/(σ_Ẍ σ̂)) p_X(u) ∫_{−∞}^{0} ẍ exp[ −(ẍ − μ̂)²/(2σ̂²) ] dẍ
           = p_X(u) [ (σ̂/σ_Ẍ) exp(−μ̂²/(2σ̂²)) − √(2π)(μ̂/σ_Ẍ) Φ(−μ̂/σ̂) ]

           = p_X(u) [ √(1−I²) exp(−I²u²/(2(1−I²)σ²_X)) + √(2π)(Iu/σ_X) Φ(Iu/(√(1−I²) σ_X)) ].

Substituting the Gaussian form for p_X(u) gives the probability density function for the
peaks of a zero-mean stationary Gaussian process X(t):

    p_p(u) = (√(1−I²)/(√(2π) σ_X)) exp(−u²/(2(1−I²)σ²_X))
             + (Iu/σ²_X) exp(−u²/(2σ²_X)) Φ(Iu/(√(1−I²) σ_X)).               (3.2.19)

The corresponding cumulative distribution function is given by

    F_p(u) = Φ(u/(√(1−I²) σ_X)) − I exp(−u²/(2σ²_X)) Φ(Iu/(√(1−I²) σ_X)),    (3.2.20)

where Φ(·) is the standard normal distribution function.

These results are commonly referred to as the Rice distribution.


For the limiting case of a narrow-band process with I → 1⁻, noting that Φ(−∞) = 0
and Φ(∞) = 1, one has the Rayleigh distribution:

    p_p(u) = (u/σ²_X) exp(−u²/(2σ²_X)),      F_p(u) = 1 − exp(−u²/(2σ²_X)).   (3.2.21)
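The following sketch (added for illustration) evaluates the Rice peak density (3.2.19) for several values of the irregularity factor I, confirms numerically that it integrates to one, and checks that it approaches the Rayleigh density (3.2.21) as I → 1; scipy's standard normal CDF plays the role of Φ(·):

# Illustrative sketch: Rice peak density (3.2.19) and its Rayleigh limit (3.2.21).
import numpy as np
from scipy.stats import norm

def rice_peak_pdf(u, sigma, I):
    """Peak probability density of eq. (3.2.19); Phi = standard normal CDF."""
    s = np.sqrt(1.0 - I**2)
    term1 = s / (np.sqrt(2 * np.pi) * sigma) * np.exp(-u**2 / (2 * s**2 * sigma**2))
    term2 = I * u / sigma**2 * np.exp(-u**2 / (2 * sigma**2)) * norm.cdf(I * u / (s * sigma))
    return term1 + term2

sigma = 1.0
u = np.linspace(-6, 8, 14001)
du = u[1] - u[0]
for I in (0.3, 0.7, 0.99):
    print(I, np.sum(rice_peak_pdf(u, sigma, I)) * du)   # ~1: a valid density
rayleigh = np.where(u > 0, u / sigma**2 * np.exp(-u**2 / (2 * sigma**2)), 0.0)
print(np.max(np.abs(rice_peak_pdf(u, sigma, 0.999) - rayleigh)))   # small as I -> 1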

The Highest Peak in a Time Interval


Considering a time interval [0, t], the total number of peaks is N_p = ν_p t, where ν_p is
the rate of occurrence of peaks given by equation (3.2.11). Among these N_p peaks, let
X_max be the highest peak, which is also the maximum value of X(t) in the time interval
[0, t]. Assuming that the peaks are independent, the distribution of X_max is

    F_Xmax(x) = [F_p(x)]^{N_p},                                               (3.2.22)

where F_p is the probability distribution function given by equation (3.2.20). The
probability density function of X_max is given by

    p_Xmax(x) = d/dx [F_Xmax(x)] = d/dx {[F_p(x)]^{N_p}}.                     (3.2.23)

The expected value of the maximum peak is given by

    E[X_max] = ∫_{−∞}^{+∞} x p_Xmax(x) dx = ∫_{−∞}^{+∞} x d{[F_p(x)]^{N_p}}

             = −∫_{−∞}^{0} [F_p(x)]^{N_p} dx + ∫_0^{+∞} {1 − [F_p(x)]^{N_p}} dx,   (3.2.24)

in which the first integral is negligible.


Note that the asymptotic expansion of Φ(x) is, for x large,

    Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−x²/2} dx = 1 + (1/√(2π)) e^{−x²/2} ( −1/x + 1/x³ − ··· ).   (3.2.25)

Hence, the probability distribution function F_p given by equation (3.2.20) can be
approximated as

    F_p(x) ≈ 1 − I exp(−x²/(2σ²_X)).                                          (3.2.26)

Equation (3.2.24) becomes

    E[X_max] = ∫_0^{+∞} { 1 − [1 − I exp(−x²/(2σ²_X))]^{N_p} } dx.            (3.2.27)

The peak factor P_f, defined as the ratio of the peak value E[X_max] and the root-mean-
square value σ_X of random process X(t), is given by

    P_f = E[X_max]/σ_X = (1/√2) ∫_0^{+∞} { 1 − (1 − I e^{−θ})^{N_p} } θ^{−1/2} dθ,      θ = x²/(2σ²_X),   (3.2.28)

or

    P_f = E[X_max]/σ_X = √2 ∫_0^{+∞} { 1 − (1 − I e^{−z²})^{N_p} } dz,      z = x/(√2 σ_X).   (3.2.29)

☞ Equation (3.2.28) is the same as equation (6.8) in Cartwright and Longuet-


Higgins (1956); whereas equation (3.2.29) is different from equation (29) in

Boore (2003) by a factor of 2.

3.2.4 Extreme Value Distribution


Define a new stochastic process Y(t) that is the extreme value of X(t) during the period
[0, t], i.e.,
    Y(t) = max_{0 ≤ s ≤ t} X(s).                                              (3.2.30)

The extreme value distribution is then the distribution of Y(t). Note that even for a
stationary process X(t), Y(t) is generally nonstationary because larger and larger values
of X(t) will generally occur when the period [0, t] is extended.
The cumulative distribution function of Y(t) is
   
    F_Y(t)(u) = P{Y(t) ≤ u} = P{X(s) ≤ u for 0 ≤ s ≤ t}.                      (3.2.31)

The probability density function of the extreme value is given by


    p_Y(t)(u) = ∂/∂u [F_Y(t)(u)].                                             (3.2.32)

Let T_X(u) > 0 denote the time at which X(t) has the first upcrossing of the level
u, i.e., X(T_X(u)) = u, Ẋ(T_X(u)) > 0, and there has been no crossing in the interval
0 ≤ t < T_X(u). For any given u value, T_X(u) is a random variable.
Note that the extreme value problem {X(s) ≤ u for 0 ≤ s ≤ t} is equivalent to the
first passage time problem {X(0) ≤ u, T_X(u) > t}. Taking the probabilities gives

    F_Y(t)(u) = P{T_X(u) > t | X(0) ≤ u} P{X(0) ≤ u}
              = P{T_X(u) > t | X(0) ≤ u} F_Y(0)(u).                           (3.2.33)

In many problems, the condition in P{T_X(u) > t | X(0) ≤ u} can be neglected:
❧ In some problems, P{X(0) ≤ u} = 1, such as when the system is known to start at
  X(0) = 0, and conditioning by a sure event can always be neglected.
❧ In other situations, although the distribution of T_X(u) depends on X(0), the effect
  of X(0) may be significant only for a small period of time.
Equation (3.2.33) can be written as

    P{T_X(u) ≤ t | X(0) ≤ u} = 1 − F_Y(t)(u)/F_Y(0)(u).

Differentiating with respect to t gives

    p_{T_X}(t | X(0) ≤ u) = −(1/F_Y(0)(u)) ∂/∂t [F_Y(t)(u)].                  (3.2.34)

It is often convenient to define F_Y(t)(u) as

    F_Y(t)(u) = F_Y(0)(u) exp[ −∫_0^t η_X(u, s) ds ].                         (3.2.35)

Differentiating equation (3.2.35) with respect to t yields

    ∂/∂t [F_Y(t)(u)] = F_Y(0)(u) exp[ −∫_0^t η_X(u, s) ds ] · [−η_X(u, t)] = −F_Y(t)(u) η_X(u, t),

which leads to

    η_X(u, t) = −(1/F_Y(t)(u)) ∂/∂t [F_Y(t)(u)] = lim_{Δt→0} [F_Y(t)(u) − F_Y(t+Δt)(u)] / [F_Y(t)(u) Δt]

              = lim_{Δt→0} (1/Δt) P{t ≤ T_X(u) ≤ t+Δt | X(0) ≤ u} / P{T_X(u) > t | X(0) ≤ u}.   (3.2.36)

The event {t ≤ T_X(u) ≤ t+Δt} means that the first upcrossing of level u is in the time
interval [t, t+Δt]. This event is the intersection of the event that there is no upcrossing
prior to t and the event that there is an upcrossing in the time interval [t, t+Δt], i.e.,

    {t ≤ T_X(u) ≤ t+Δt} = {T_X(u) > t} ∩ {Upcrossing in [t, t+Δt]}.
Equation (3.2.36), in which the ratio is the conditional probability, can be written as

    η_X(u, t) = lim_{Δt→0} (1/Δt) P{Upcrossing in [t, t+Δt] | X(0) ≤ u ∩ No upcrossing prior to t}

              = lim_{Δt→0} (1/Δt) E[ Upcrossings in [t, t+Δt] | X(0) ≤ u ∩ No upcrossing prior to t ].   (3.2.37)

Hence, η_X(u, t) is the conditional rate of upcrossing of level u, given that the initial
condition is below u and that there is no prior upcrossing.
Using the conditional probability density function, η_X(u, t) can be written as

    η_X(u, t) = ∫_0^{+∞} ẋ p_XẊ(u, ẋ | X(0) ≤ u ∩ No upcrossing prior to t) dẋ.

However, the conditional probability density function is generally unknown, and some
approximations must be made.
Note that most physical processes have only a finite memory in the sense that X(t)
and X(t−τ ) can generally be considered independent if τ > T for some large T value.
Hence, for t > T,
 
    p_XẊ(u, ẋ | X(0) ≤ u ∩ No upcrossing in [0, t]) ≈ p_XẊ(u, ẋ | No upcrossing in [t−T, t]).

If X(t) is a stationary process, the approximate conditional probability density is
stationary because it is independent of the choice of the origin of the time axis, except
for the restriction that t > T. This means that η_X(u, t) approaches asymptotically a
stationary value η_X(u) as p_XẊ(u, ẋ | X(0) ≤ u ∩ No upcrossing in [0, t]) tends to
p_XẊ(u, ẋ | No upcrossing in [0, t]). This asymptotic behaviour of η_X(u, t) implies that
equation (3.2.35) can be approximated as

    F_Y(t)(u) ≈ F_0 e^{−η_X(u) t},      for large t.                          (3.2.38)

This limiting behaviour for large t is also applicable if X(t) is a nonstationary process
that has finite memory and that becomes stationary with the passage of time.

Poisson Approximation
The most widely used approximation of the extreme value distribution problem is to neglect
the conditioning event {X(0) ≤ u ∩ No upcrossing prior to t}. Hence

    η_X(u, t) ≈ Rate of upcrossings of the level u = ν_X^+(u, t),             (3.2.39)

    F_Y(t)(u) ≈ F_Y(0)(u) exp[ −∫_0^t ν_X^+(u, s) ds ].                       (3.2.40)

If X(t) is a stationary process, equation (3.2.40) reduces to

    F_Y(t)(u) ≈ F_Y(0)(u) e^{−ν_X^+(u) t}.                                    (3.2.41)

If the crossing rate is independent of the past history of the process, the time intervals
between upcrossings are independent, which makes the integer-valued counting
process N_X^+(u, t) a Poisson process. Hence, the approximation of equations (3.2.39) to
(3.2.41) is commonly called the Poisson approximation of the extreme value or the
first-passage problem.
Substituting equation (3.2.41) into (3.2.34) yields, for a stationary process X(t),

    p_{T_X}(t) = −(1/F_Y(0)(u)) ∂F_Y(t)(u)/∂t ≈ ν_X^+(u) e^{−ν_X^+(u) t}.     (3.2.42)

Hence, the Poisson approximation gives an exponential distribution for the first-
passage time of a stationary process X(t). The mean first-passage time can be easily
obtained from the exponential distribution and is given by

    E[T_X(u)] ≈ 1/ν_X^+(u),      for a stationary process X(t).               (3.2.43)

☞ For a narrow-band process X(t), an upcrossing of level u at time t is very likely to


be associated with another upcrossing approximately one period later, due to the
slowly varying amplitude of X(t). Such a relationship between the upcrossing
times is inconsistent with the Poisson approximation that the times between up-
crossings are independent. Hence, the Poisson approximation is not appropriate
when the process X(t) is narrow-band.
☞ When u is very large, the assumption of independent crossing becomes better.
☞ The Poisson approximation is best when the process X(t) is broadband and the
level u is large.
For a stationary Gaussian process with zero mean, ν_X^+(u) and the zero-upcrossing
rate ν_0 = ν_X^+(0) are given by equation (3.2.8). The probability distribution
function F_Y(t)(u) is, with F_Y(0)(u) = 1,

    F_Y(t)(u) = F_Y(0)(u) e^{−ν_X^+(u) t} = exp[ −ν_0 t e^{−u²/(2σ²_X)} ].    (3.2.44)

The probability density function is

    p_Y(t)(u) = ∂/∂u P{Y(t) ≤ u} = ∂/∂u exp[ −ν_0 t e^{−u²/(2σ²_X)} ].        (3.2.45)

Defining

    ξ = ν_0 t e^{−u²/(2σ²_X)},                                                (3.2.46)

equation (3.2.45) becomes

    p_Y(t)(u) = (∂e^{−ξ}/∂ξ)(dξ/du) = −e^{−ξ} dξ/du.                          (3.2.47)
Figure 3.5 Peak factor P_f as a function of ν_0 t.

When u → +∞, ξ → 0, and when u → 0, ξ → ν_0 t. The expected value of Y(t) is

    E[Y(t)] = E[X_max] = ∫_0^{+∞} u p_Y(t)(u) du = ∫_0^{ν_0 t} u e^{−ξ} dξ.   (3.2.48)

Solving for u from equation (3.2.46) gives

    u = σ_X √(2(ln ν_0 t − ln ξ)) = σ_X √(2 ln ν_0 t) · √(1 − ln ξ/ln ν_0 t)

      = σ_X √(2 ln ν_0 t) [ 1 − (1/2)(ln ξ/ln ν_0 t) − (1/8)(ln ξ/ln ν_0 t)² − ··· ].   (3.2.49)

Because ξ ≤ ν_0 t, equation (3.2.49) can be approximated as

    u ≈ σ_X [ √(2 ln ν_0 t) − ln ξ/√(2 ln ν_0 t) ],                           (3.2.50)

and equation (3.2.48) gives the peak factor

    P_f = E[X_max]/σ_X ≈ ∫_0^{ν_0 t} [ √(2 ln ν_0 t) − ln ξ/√(2 ln ν_0 t) ] e^{−ξ} dξ

        ≈ ∫_0^{+∞} [ √(2 ln ν_0 t) − ln ξ/√(2 ln ν_0 t) ] e^{−ξ} dξ,          (3.2.51)

which yields

    P_f = E[X_max]/σ_X ≈ √(2 ln ν_0 t) + γ/√(2 ln ν_0 t),      γ = 0.5772,    (3.2.52)

where γ is the Euler–Mascheroni constant. This result was first obtained in Davenport (1964)
and is plotted in Figure 3.5.
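For a concrete comparison (an added sketch, not from the original text), the peak factor can be evaluated both from the integral (3.2.29) and from Davenport's asymptotic expression (3.2.52); for a narrow-band process, I ≈ 1 and N_p ≈ ν_0 t, which is assumed below:

# Illustrative sketch: peak factor from eq. (3.2.29) vs. Davenport's formula (3.2.52).
import numpy as np

def peak_factor_integral(I, Np, zmax=10.0, nz=20001):
    """Pf = sqrt(2) * int_0^inf [1 - (1 - I exp(-z^2))^Np] dz, eq. (3.2.29)."""
    z = np.linspace(0.0, zmax, nz)
    return np.sqrt(2.0) * np.sum(1.0 - (1.0 - I * np.exp(-z**2))**Np) * (z[1] - z[0])

def peak_factor_davenport(nu0_t):
    """Pf = sqrt(2 ln(nu0 t)) + gamma / sqrt(2 ln(nu0 t)), eq. (3.2.52)."""
    a = np.sqrt(2.0 * np.log(nu0_t))
    return a + 0.5772 / a

for nu0_t in (50, 200, 1000):
    print(nu0_t, peak_factor_integral(I=1.0, Np=nu0_t), peak_factor_davenport(nu0_t))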
Der Kiureghian (1980) determined an empirical reduced zero-upcrossing rate ν_e,
representing an equivalent rate of statistically independent crossings, given by

    ν_e = (1.63 q^0.45 − 0.38) ν_0,      q < 0.69,
          ν_0,                           q ≥ 0.69,                            (3.2.53)

where q is the spectral parameter defined in equation (3.2.5). In the peak factor given
by equation (3.2.52), ν_0 is replaced by the reduced rate ν_e. Equations (3.2.53) and
(3.2.52) with ν_0 replaced by ν_e are applicable for 0.1 ≤ q ≤ 1 and 5 ≤ ν_0 t ≤ 1,000,
which are of interest in earthquake engineering. The resulting error in the estimated peak
factor is generally within 3 %.

Double-Barrier Problem
In many engineering applications, it is required to determine large excursions of X(t)
in either the positive or negative direction. For example, for earthquake ground motion
excitation üg(t), the positive or negative sign of üg(t) has no real significance, and it is
important to determine the peak ground acceleration |üg(t)|_max.
The event of X(t) remaining between −u and +u is exactly the same as the event
of |X(t)| remaining below the level u. Following equation (3.2.35), one can write

    F_Y(t)(u) = F_Y(0)(u) exp[ −∫_0^t η_|X|(u, s) ds ],      Y(t) = max_{0 ≤ s ≤ t} |X(s)|.   (3.2.54)

The terms double-barrier problem and single-barrier problem are often used to distinguish
between the upcrossings by |X(t)| and X(t), respectively.
The Poisson approximation of the symmetric double-barrier problem of equation
(3.2.54) is simply to replace η_|X|(u, s) with

    ν_|X|^+(u, s) = ν_X^+(u, s) + ν_X^−(−u, s).

If the distribution of X(t) and Ẋ(t) is symmetric, this gives ν_|X|^+(u, s) = 2ν_X^+(u, s).

3.3 Single Degree-of-Freedom System


3.3.1 Equations of Motion
Consider a single-storey shear building consisting of a rigid girder of mass m, supported
by weightless columns with combined stiffness K. The columns can take shear forces
but not bending moments. In the horizontal direction, the columns act as a spring of
stiffness K. As a result, the girder can move only in the horizontal direction; its motion
can be described by horizontal displacement u(t), and the system is single degree-of-
freedom (SDOF). The building is subjected to a dynamic load P(t), and the base of the
building is subjected to a dynamic displacement ug(t), as shown in Figure 3.6.
The elastic and damping forces applied on the girder are, respectively, K(u−ug ) and
c(u̇− u̇g ). Newton’s Second Law requires that
    
    ma = ΣF   =⇒   m ü(t) = P(t) − K[u(t)−ug(t)] − c[u̇(t)−u̇g(t)].
Figure 3.6 A single-storey shear building: rigid girder of mass m on weightless columns of combined stiffness K with damping c, total displacement u(t), ground displacement ug(t), applied load P(t), and relative displacement x(t).

Figure 3.7 SDOF oscillator under ground excitation: the shear building modelled as a mass–spring–damper system (k, c, m) or an oscillator (ω, ζ) subjected to base motion ug(t), with relative displacement x(t) = u(t) − ug(t).

Let x(t) = u(t)−ug(t) be the relative displacement between the girder and the base. In
terms of the relative displacement x(t), the equation of motion becomes

m( ẍ + üg ) = P(t) − Kx − c ẋ =⇒ m ẍ(t) + c ẋ(t) + Kx(t) = P(t) − m üg(t). (3.3.1)

The equivalent loading on the girder created from ground excitation is −m üg(t), which
is proportional to the mass of the structural system m and the ground acceleration üg(t).
The equation of motion (3.3.1) can be written in the standard form as

    ẍ(t) + 2ζ_0 ω_0 ẋ(t) + ω_0² x(t) = F(t)/m,      ω_0² = K/m,      2ζ_0 ω_0 = c/m,   (3.3.2)

where ω0 is the natural circular frequency, ζ0 is the damping ratio, and the forcing
is F(t) = P(t)−m üg(t). In earthquake engineering, it is convenient to use an SDOF
oscillator as illustrated in Figure 3.7 to model an SDOF system under base excitation
ug(t) or üg (t), with P(t) = 0.

3.3.2 Free Vibration


For free vibration, equation of motion (3.3.2) is reduced to

ẍ(t) + 2ζ0 ω0 ẋ(t) + ω02 x(t) = 0. (3.3.3)


Figure 3.8 Response of underdamped free vibration: x_C(t) = a e^{−ζω_0 t} cos(ω_d t − ϕ) with exponentially decaying envelope ±a e^{−ζω_0 t}.

Figure 3.9 Response of underdamped SDOF system due to an impulse I = F(τ)Δτ applied at time τ.

The characteristic equation is λ² + 2ζ_0 ω_0 λ + ω_0² = 0, giving λ = ω_0(−ζ_0 ± √(ζ_0²−1)).
Most engineering structures are underdamped with 0 < ζ_0 < 1. The roots of the
characteristic equation become

    λ = ω_0(−ζ_0 ± i√(1−ζ_0²)) = −ζ_0 ω_0 ± iω_d,      ω_d = ω_0 √(1−ζ_0²),

where ω_d is the damped natural circular frequency.
The response of free vibration is

    x_C(t) = e^{−ζ_0 ω_0 t} (A cos ω_d t + B sin ω_d t),                      (3.3.4)

where constants A and B are determined from the initial conditions x(0) = x_0 and
ẋ(0) = v_0, resulting in

    x_C(t) = e^{−ζ_0 ω_0 t} [ x_0 cos ω_d t + ((v_0 + ζ_0 ω_0 x_0)/ω_d) sin ω_d t ],      0 ≤ ζ_0 < 1,   (3.3.5)

           = a e^{−ζ_0 ω_0 t} cos(ω_d t − ϕ),                                 (3.3.6)

where

    a = √( x_0² + ((v_0 + ζ_0 ω_0 x_0)/ω_d)² ),      ϕ = tan^{−1}( (v_0 + ζ_0 ω_0 x_0)/(ω_d x_0) ).
The response of free vibration of an underdamped system with 0 < ζ0 < 1 is shown
in Figure 3.8, which decays exponentially and approaches zero as t→∞ and is called
transient response. Because its value becomes negligible after some time, its effect is
small and is not important in practice.

3.3.3 Forced Vibration − Duhamel Integral


The response of forced vibration due to force F(t) is the particular solution x_P(t) of the
differential equation (3.3.2). Consider the response of the SDOF system (3.3.2) under a
general dynamic force F(t), as shown in Figure 3.9. Let x_P(t, τ) be the displacement
at time t due to an impulse I = F(τ)Δτ applied at time τ. The initial velocity v_0
imparted by the impulse I is

    v_0 = I/m = F(τ)Δτ/m,                                                     (3.3.7)

and the initial displacement x_0 = 0.
At time t, the displacement x_P(t, τ), due to an impulse I at time τ, is the
response of free vibration with initial displacement x_0 = 0 and initial velocity v_0.
Using equation (3.3.5), one has

    x_P(t, τ) = e^{−ζ_0 ω_0 (t−τ)} [ x_0 cos ω_d(t−τ) + ((v_0 + ζ_0 ω_0 x_0)/ω_d) sin ω_d(t−τ) ]

              = (F(τ)Δτ/(m ω_d)) e^{−ζ_0 ω_0 (t−τ)} sin ω_d(t−τ).             (3.3.8)

Summing up the effect of all such impulses from τ = 0 to t due to force F(t) yields

    x_P(t) = ∫_0^t e^{−ζ_0 ω_0 (t−τ)} (sin ω_d(t−τ)/(m ω_d)) F(τ) dτ
                                                                              (3.3.9)
           = ∫_0^t H(t−τ) F(τ) dτ = H(t) ∗ F(t) = ∫_0^t H(τ) F(t−τ) dτ,

where H(t) is the unit impulse response function of the SDOF system given by

    H(t) = e^{−ζ_0 ω_0 t} sin ω_d t/(m ω_d),      ω_d = ω_0 √(1−ζ_0²).        (3.3.10)

An integral of the form (3.3.9) is called a convolution integral or Duhamel integral. For
a lightly damped system, ζ_0 ≪ 1, ω_d = ω_0 √(1−ζ_0²) ≈ ω_0.
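In practice the Duhamel integral is evaluated numerically; the following sketch (an added illustration with assumed parameter values) computes the response of an SDOF system to an arbitrary load as a discrete convolution of the load with the unit impulse response function (3.3.10):

# Illustrative sketch: SDOF response by discrete convolution, eqs. (3.3.9)-(3.3.10).
import numpy as np

m, omega0, zeta = 1.0, 2 * np.pi, 0.05          # mass, natural frequency (1 Hz), damping
omegad = omega0 * np.sqrt(1 - zeta**2)
dt = 0.002
t = np.arange(0.0, 20.0, dt)

H = np.exp(-zeta * omega0 * t) * np.sin(omegad * t) / (m * omegad)  # eq. (3.3.10)
F = np.where(t < 0.5, 1.0, 0.0)                 # example load: 0.5 s rectangular pulse

x = np.convolve(F, H)[: t.size] * dt            # x(t) = int_0^t H(t - tau) F(tau) dtau
x_static = 1.0 / (m * omega0**2)                # static displacement under unit force
print(x.max() / x_static)                       # dynamic magnification of the pulse response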

Response of SDOF System under Base Excitation


For the case when the SDOF oscillator is under base excitation üg(t) only, the forcing
function is F(t) = −m üg(t). The relative displacement x(t) of the oscillator, given by
equation (3.3.9), becomes

    x(t) = −∫_0^t e^{−ζ_0 ω_0 (t−τ)} (sin ω_d(t−τ)/ω_d) üg(τ) dτ.             (3.3.11)

In earthquake engineering, the negative sign has no real significance with regard to
earthquake excitation and can be dropped; hence, equation (3.3.11) can be written as

    x(t) = ∫_0^t e^{−ζ_0 ω_0 (t−τ)} (sin ω_d(t−τ)/ω_d) üg(τ) dτ = h(t) ∗ üg(t),   (3.3.12)
0 ωd
where h(t) is referred to as the impulsive response function with respect to base excitation
in this book and is given by

    h(t) = e^{−ζ_0 ω_0 t} sin ω_d t/ω_d.                                      (3.3.13)

☞ The difference between functions H(t) and h(t) is the mass term m. h(t) is used
in structures under base excitation (earthquake) because the mass term m in
H(t) is cancelled with the mass in the equivalent earthquake load −m üg (t).
Equation (3.3.12) can be rewritten as

    x(t) = (1/ω_d) ∫_0^t e^{−ζ_0 ω_0 (t−τ)} sin ω_d(t−τ) üg(τ) dτ.            (3.3.14)

Taking the time derivative of equation (3.3.14) gives the relative velocity

    ẋ(t) = −(ζ_0 ω_0/ω_d) ∫_0^t e^{−ζ_0 ω_0 (t−τ)} sin ω_d(t−τ) üg(τ) dτ      (3.3.15)

           + ∫_0^t e^{−ζ_0 ω_0 (t−τ)} cos ω_d(t−τ) üg(τ) dτ.                  (3.3.16)

Substituting equations (3.3.14) and (3.3.16) into (3.3.2) yields the absolute acceleration

    ü(t) = ẍ(t) + üg(t) = −ω_0² x(t) − 2ζ_0 ω_0 ẋ(t)

         = ((2ζ_0² ω_0² − ω_0²)/ω_d) I^s(t) − 2ζ_0 ω_0 I^c(t),                (3.3.17)

where

    I^s(t) = ∫_0^t e^{−ζ_0 ω_0 (t−τ)} sin ω_d(t−τ) üg(τ) dτ,
                                                                              (3.3.18)
    I^c(t) = ∫_0^t e^{−ζ_0 ω_0 (t−τ)} cos ω_d(t−τ) üg(τ) dτ.

For small damping ζ_0 ≪ 1, ω_d ≈ ω_0, and

    h(t) ≈ e^{−ζ_0 ω_0 t} sin ω_0 t/ω_0,                                      (3.3.13′)

    x(t) ≈ (1/ω_0) ∫_0^t e^{−ζ_0 ω_0 (t−τ)} sin ω_0(t−τ) üg(τ) dτ = h ∗ üg,   (3.3.14′)

    ẋ(t) ≈ ∫_0^t e^{−ζ_0 ω_0 (t−τ)} cos ω_0(t−τ) üg(τ) dτ ≈ ω_0 h ∗ üg,       (3.3.16′)

    ü(t) ≈ −ω_0 ∫_0^t e^{−ζ_0 ω_0 (t−τ)} sin ω_0(t−τ) üg(τ) dτ = −ω_0² h ∗ üg.   (3.3.17′)
0

3.3.4 Forced Vibration − Harmonic Excitation

Externally Applied Load P(t) = P0 e i ωt


Consider the case when the SDOF system (3.3.2) is subjected to an externally applied
harmonic load P(t) = P0 e i ωt . Substituting xP (t) = x̂P e i ωt into equation (3.3.2) yields

x̂P = H(ω) P0 =⇒ xP (t) = H(ω) P0 e i ωt , (3.3.19)

where H(ω) is the complex frequency response function given by

    H(ω) = ∫_{−∞}^{∞} H(t) e^{−iωt} dt = 1/( m[(ω_0²−ω²) + i 2ζ_0 ω_0 ω] ).   (3.3.20)

If dynamic effect is not considered, i.e., if only static terms are considered in equation
(3.3.2), one obtains xstatic = P0 /K, which is the static displacement of the structure
under static force P0 .
The dynamic magnification factor (DMF) is defined by

    DMF = |x_P(t)|_max / x_static = 1/√((1−r²)² + (2ζ_0 r)²),      r = ω/ω_0,   (3.3.21)

where r is the frequency ratio. DMF is plotted in Figure 3.10 for various values of the
damping ratio ζ_0; it is one of the most important quantities describing the dynamic
behavior of an underdamped SDOF system under harmonic excitation.
❧ When r → 0 (ω ≪ ω_0), DMF → 1. The dynamic excitation is effectively a static force,
  and the amplitude of dynamic response approaches the static displacement.
❧ When r → ∞ (ω ≫ ω_0), DMF → 0, i.e., the dynamic response approaches zero.
❧ When r ≈ 1 (ω ≈ ω_0), DMF tends to large values for small damping.
❧ DMF_max occurs when d(DMF)/dr = 0, giving r ≈ 1 − ζ_0², for ζ_0 ≪ 1, and

      DMF_max ≈ DMF|_{r=1} = 1/√((1−r²)² + (2ζ_0 r)²) |_{r=1} = 1/(2ζ_0).     (3.3.22)

  Hence, the smaller the damping ratio, the larger the amplitude of dynamic response.
❧ When ζ_0 = 0 and ω = ω_0, the system is in resonance, and the amplitude of the
  response grows linearly with time.

Ground Excitation ug(t) = u0 e i ωt


When the SDOF system is subjected to only the ground excitation ug(t) = u0 e i ωt, the
equivalent earthquake load is F(t) = −m üg (t) = mω2 u0 e i ωt . Referring to equation
(3.3.19), the response of the SDOF oscillator under ground excitation is given by

x(t) = H(ω) mω2 u0 e i ωt = H(ω) ω2 u0 e i ωt , (3.3.23)
Figure 3.10 DMF of SDOF system under externally applied force, for ζ = 0, 0.1, 0.2, 0.3; the maximum occurs near r ≈ 1 − ζ².

Figure 3.11 DMF of SDOF system under ground excitation, for ζ = 0, 0.1, 0.2, 0.3; the maximum occurs near r ≈ 1 + ζ².

where H(ω) is the complex frequency response function with respect to base excitation,

    H(ω) = ∫_{−∞}^{∞} h(t) e^{−iωt} dt = 1/( (ω_0²−ω²) + i 2ζ_0 ω_0 ω ).      (3.3.24)

The DMF, characterizing the magnification of the dynamic displacement response amplitude
|x(t)|_max of the SDOF oscillator in terms of the ground displacement amplitude u_0, is defined as

    DMF = |x(t)|_max / u_0 = r²/√((1−r²)² + (2ζ_0 r)²),      r = ω/ω_0.       (3.3.25)
The DMF is plotted in Figure 3.11 for various values of the damping ratio ζ_0.
❧ When r → 0 (ω ≪ ω_0), or when the SDOF oscillator is very stiff, DMF → 0. The
  SDOF oscillator moves with the ground as a rigid body, and the relative displacement
  between the mass and the ground approaches 0.
❧ When r → ∞ (ω ≫ ω_0), or when the SDOF oscillator is very flexible, DMF → 1. The
  mass m does not move, and the relative displacement between the mass and the
  ground approaches the ground displacement.
❧ When r ≈ 1 (ω ≈ ω_0), DMF tends to large values for small damping.
❧ DMF_max occurs when d(DMF)/dr = 0, giving r ≈ 1 + ζ_0², for ζ_0 ≪ 1, and

      DMF_max ≈ DMF|_{r=1} = r²/√((1−r²)² + (2ζ_0 r)²) |_{r=1} = 1/(2ζ_0).    (3.3.26)

❧ When ζ_0 = 0 and ω = ω_0, the system is in resonance.
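The two magnification factors are compared numerically in the sketch below (an added illustration); it evaluates (3.3.21) and (3.3.25) over a range of frequency ratios and checks that both maxima approach 1/(2ζ_0) for light damping, consistent with (3.3.22) and (3.3.26):

# Illustrative sketch: DMF under applied force, eq. (3.3.21), and under ground
# excitation, eq. (3.3.25), as functions of the frequency ratio r = omega/omega0.
import numpy as np

def dmf_force(r, zeta):
    return 1.0 / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

def dmf_ground(r, zeta):
    return r**2 / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

r = np.linspace(0.0, 2.5, 2501)
for zeta in (0.1, 0.2, 0.3):
    print(zeta, dmf_force(r, zeta).max(), dmf_ground(r, zeta).max(), 1 / (2 * zeta))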

3.4 Multiple Degrees-of-Freedom Systems


The equation of motion of a multiple degrees-of-freedom (MDOF) structure with N
degrees-of-freedom (DOF) under the excitation of ground motion can be written as

    M ẍ(t) + C ẋ(t) + K x(t) = −M I üg(t),                                    (3.4.1)

where x = {x_1, x_2, ..., x_N}^T is the relative displacement vector, I = {1, 1, ..., 1}^T is
the N-dimensional influence vector, and M, C, K are the mass, damping, and stiffness
matrices of dimension N×N, respectively. Matrices M and K are symmetric, i.e.,
M^T = M, K^T = K, and positive definite.

3.4.1 Free Vibration


Consider the undamped free vibration governed by

M ẍ(t) + Kx(t) = 0. (3.4.2)

Seeking a solution of the form x(t) = ϕ e i ωt and substituting into equation (3.4.2) yield
an eigenvalue problem
(K−ω2 M)ϕ = 0. (3.4.3)

To have nonzero solutions for ϕ, the determinant of the coefficient matrix must be zero

det(K−ω2 M) = 0. (3.4.4)

The Ith root (eigenvalue) ωI (ω1 < ω2 < · · · < ωN ) is the natural frequency of the Ith
mode of the system or the Ith modal frequency.
Corresponding to the Ith eigenvalue ω_I, a nonzero solution ϕ_I of system (3.4.4),

    (K − ω_I² M) ϕ_I = 0,      I = 1, 2, ..., N,                              (3.4.5)

is the Ith eigenvector or the Ith mode shape. Construct the modal matrix Φ as

    Φ = [ϕ_1  ϕ_2  ···  ϕ_N] = [ ϕ_11  ϕ_12  ···  ϕ_1N
                                 ϕ_21  ϕ_22  ···  ϕ_2N
                                  ⋮     ⋮    ⋱    ⋮
                                 ϕ_N1  ϕ_N2  ···  ϕ_NN ],                     (3.4.6)

where the first subscript I of element ϕ_Ij refers to the node number and the second
subscript j corresponds to the mode number. Φ has the following orthogonal relations

    Φ^T M Φ = diag{m̄_1, m̄_2, ..., m̄_N} = m̄,                                 (3.4.7)

    Φ^T K Φ = m̄ Ω² = diag{m̄_1 ω_1², m̄_2 ω_2², ..., m̄_N ω_N²},               (3.4.8)

where Ω = diag{ω_1, ω_2, ..., ω_N}, and m̄_1, m̄_2, ..., m̄_N are the modal masses.
It is usually assumed that

    Φ^T C Φ = diag{c̄_1, c̄_2, ..., c̄_N},      c̄_n = 2ζ_n ω_n m̄_n,           (3.4.9)

i.e., the structure has classical damping and the modal matrix Φ can diagonalize the
damping matrix.
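Numerically, the modal properties follow from a generalized eigenvalue solution of (3.4.3). The sketch below (an added illustration with assumed mass and stiffness values for a 3-DOF shear building) computes the modal frequencies and mode shapes and verifies the relations (3.4.7) and (3.4.8):

# Illustrative sketch: modes of a 3-DOF shear building and the relations (3.4.7)-(3.4.8).
import numpy as np
from scipy.linalg import eigh

m, k = 1.0e3, 2.0e6                              # assumed storey mass and stiffness
M = m * np.eye(3)
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

lam, Phi = eigh(K, M)                            # solves K phi = omega^2 M phi
omega = np.sqrt(lam)                             # omega_1 < omega_2 < omega_3

m_bar = Phi.T @ M @ Phi                          # diagonal matrix of modal masses, eq. (3.4.7)
k_bar = Phi.T @ K @ Phi                          # should equal m_bar * Omega^2, eq. (3.4.8)
print(omega)
print(np.allclose(k_bar, m_bar @ np.diag(omega**2)))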

3.4.2 Forced Vibration


Substituting x(t) = Φq(t) into the equation of motion (3.4.1) and multiplying by Φ^T from
the left yield

    (Φ^T M Φ) q̈ + (Φ^T C Φ) q̇ + (Φ^T K Φ) q = −Φ^T M I üg.

Applying the orthogonal relations (3.4.7) to (3.4.9) gives

    m̄_n q̈_n + 2ζ_n ω_n m̄_n q̇_n + m̄_n ω_n² q_n = −L_n üg,      n = 1, 2, ..., N,

or

    q̈_n + 2ζ_n ω_n q̇_n + ω_n² q_n = −Γ_n üg,      Γ_n = L_n/m̄_n,             (3.4.10)

where L_n, called the earthquake excitation factors, are given by

    L_n = ϕ_n^T M I.

Using the Duhamel integral, the solution of equation (3.4.10) is

    q_n(t) = −(Γ_n/ω_n) V_n(t),      V_n(t) = ∫_0^t e^{−ζ_n ω_n (t−τ)} sin ω_n(t−τ) üg(τ) dτ.   (3.4.11)
The relative displacement vector x(t) becomes

    x(t) = Φq(t) = −Σ_{n=1}^{N} ϕ_n (Γ_n/ω_n) V_n(t).                         (3.4.12)

The elastic forces associated with the relative displacements are given by

    f_e(t) = K x(t) = K Φ q(t).                                               (3.4.13)

Noting that K Φ = M Φ Ω², one has

    f_e(t) = M Φ Ω² q(t) = {F_e,1(t), F_e,2(t), ..., F_e,N(t)}^T,             (3.4.14)

where F_e,n(t) is the elastic force at the nth floor given by

    F_e,n(t) = −M ϕ_n Γ_n ω_n V_n(t).                                         (3.4.15)

Having obtained the elastic forces Fe,n (t) at any time t during the earthquake, any
desired force results, such as base shear and overturning moment, can be determined.
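The modal superposition of (3.4.10)–(3.4.12) can be carried out numerically; the sketch below (an added, self-contained illustration reusing the 3-DOF system of the previous sketch and a synthetic white-noise ground acceleration) evaluates V_n(t) by discrete convolution and assembles the relative displacement history of each floor:

# Illustrative sketch: modal superposition response to a ground acceleration record.
import numpy as np
from scipy.linalg import eigh

m, k = 1.0e3, 2.0e6
M = m * np.eye(3)
K = k * np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
lam, Phi = eigh(K, M)
omega = np.sqrt(lam)
zeta = 0.05 * np.ones(3)

dt = 0.005
t = np.arange(0.0, 30.0, dt)
ug_dd = np.random.default_rng(2).standard_normal(t.size)   # synthetic ground acceleration

Iv = np.ones(3)                                  # influence vector
L = Phi.T @ M @ Iv                               # earthquake excitation factors L_n
Gamma = L / np.diag(Phi.T @ M @ Phi)             # participation factors Gamma_n = L_n / m_bar_n

x = np.zeros((3, t.size))
for n in range(3):
    hn = np.exp(-zeta[n] * omega[n] * t) * np.sin(omega[n] * t)   # kernel of V_n, eq. (3.4.11)
    Vn = np.convolve(ug_dd, hn)[: t.size] * dt
    x += np.outer(Phi[:, n], -Gamma[n] / omega[n] * Vn)           # eq. (3.4.12)
print(np.abs(x).max(axis=1))                     # peak relative displacement of each DOF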

3.5 Stationary Response to Random Excitation


When a structural or mechanical system is subjected to random excitation, the response
will be a random process. The general response problem is to determine the statistical
properties of the response process in terms of the given statistical properties of the
excitation and the system parameters.
The complete solution would require the determination of the distribution functions
of all orders of the response. Such exhaustive information is quite often either difficult
to obtain or unnecessary, and in many practical problems a knowledge of the first few
moments of the response is adequate.
In the important case of a linear system subjected to Gaussian excitation, it can be
shown that the response is also Gaussian, which is then completely defined by the mean
and the auto-correlation function only.
In general, the response depends on the state of the system when the excitation is
applied, i.e., on the initial conditions. These conditions may be specified uniquely,
as in deterministic problems, or only statistically. However, in an important class
of problems in which only the asymptotic behaviour for t→∞, i.e., the stationary
response, is of interest, a knowledge of the initial conditions is unnecessary. In a stable
system, this is the motion that will persist after the initial transient has died away.

3.5.1 Single DOF Systems under Random Excitations


Consider a single DOF oscillatory system governed by equation (3.3.2), i.e.,
    Ẍ + 2ζ ω_0 Ẋ + ω_0² X = F(t)/m,      ω_0² = K/m,      2ζ ω_0 = c/m,      (3.5.1)
where F(t) is assumed to be a stationary random process.
The response of system (3.5.1) is given by the Duhamel integral (3.3.9). It can be
shown that, for stationary response, the Duhamel integral can be written as

    X(t) = ∫_{−∞}^{∞} F(τ) H(t−τ) dτ = ∫_{−∞}^{∞} F(t−τ) H(τ) dτ.             (3.5.2)

Taking the expectation of both sides of equation (3.5.2), the mean response is

    E[X(t)] = ∫_{−∞}^{+∞} H(τ) E[F(t−τ)] dτ.

Because F(t) is stationary, E[F(t)] = m_F; thus,

    m_X = E[X(t)] = m_F ∫_{−∞}^{+∞} H(τ) dτ.                                  (3.5.3)

If F(t) has zero mean, i.e., mF = 0, then the mean response is zero, i.e., mX = 0.
The auto-correlation function of X(t) is

    R_XX(τ) = E[X(t)X(t+τ)] = E[ ∫_{−∞}^{+∞} H(τ_1)F(t−τ_1) dτ_1 ∫_{−∞}^{+∞} H(τ_2)F(t+τ−τ_2) dτ_2 ]

            = ∫_{−∞}^{+∞}∫_{−∞}^{+∞} H(τ_1) H(τ_2) R_FF(τ+τ_1−τ_2) dτ_1 dτ_2,   (3.5.4)

where RFF (τ ) = E[ F(t) F(t+τ ) ] is the auto-correlation function of F(t).


Direct evaluation of R_XX(τ) using (3.5.4) is seldom performed. Instead, one takes
the Fourier transform of both sides to obtain the power spectral density of the response

    S_XX(ω) = ∫_{−∞}^{+∞} R_XX(τ) e^{−iωτ} dτ

            = ∫_{−∞}^{+∞}∫_{−∞}^{+∞}∫_{−∞}^{+∞} H(τ_1) H(τ_2) R_FF(τ+τ_1−τ_2) e^{−iωτ} dτ dτ_1 dτ_2.

Writing τ_3 = τ+τ_1−τ_2 and rearranging, one obtains

    S_XX(ω) = ∫_{−∞}^{+∞} H(τ_1) e^{iωτ_1} dτ_1 ∫_{−∞}^{+∞} H(τ_2) e^{−iωτ_2} dτ_2 ∫_{−∞}^{+∞} R_FF(τ_3) e^{−iωτ_3} dτ_3

            = H^∗(ω) H(ω) S_FF(ω),

where H(ω) is the frequency response function given by equation (3.3.20), H^∗(ω)
is the complex conjugate of H(ω), and S_FF(ω) is the power spectral density (PSD)
function of the excitation. Thus,

    S_XX(ω) = |H(ω)|² S_FF(ω).                                                (3.5.5)
The response auto-correlation function can be calculated from (3.5.5) by inverse
Fourier transform

    R_XX(τ) = (1/2π) ∫_{−∞}^{+∞} S_XX(ω) e^{iωτ} dω = (1/2π) ∫_{−∞}^{+∞} |H(ω)|² S_FF(ω) e^{iωτ} dω.   (3.5.6)

The mean-square of the response is given by

    E[X²(t)] = R_XX(0) = (1/2π) ∫_{−∞}^{+∞} |H(ω)|² S_FF(ω) dω.               (3.5.7)

If the excitation F(t) is a stationary Gaussian process, it can be shown that the response
X(t) is also Gaussian; the mean and auto-correlation function are then sufficient to
describe the response random process completely.
For the SDOF system described by equation (3.5.1), for which H(ω) is given by
(3.3.20), the response power spectral density is, by (3.5.5),

    S_XX(ω) = S_FF(ω) / ( m²[(ω_0²−ω²)² + (2ζ ω_0 ω)²] ).                     (3.5.8)

The auto-correlation function, given by equation (3.5.6), is

    R_XX(τ) = (1/2π) ∫_{−∞}^{+∞} S_FF(ω) e^{iωτ} / ( m²[(ω_0²−ω²)² + (2ζ ω_0 ω)²] ) dω,   (3.5.9)

and the mean-square response is, using equation (3.5.7),

    E[X²(t)] = R_XX(0) = (1/2π) ∫_{−∞}^{+∞} S_FF(ω) / ( m²[(ω_0²−ω²)² + (2ζ ω_0 ω)²] ) dω.   (3.5.10)

Suppose that F(t) is a white noise process with S_FF(ω) = S_0 and zero mean. Then
the mean response given by equation (3.5.3) is m_X = 0. The auto-correlation function
and the mean-square response given by equations (3.5.9) and (3.5.10) are

    R_XX(τ) = (S_0/(2πm²)) ∫_{−∞}^{+∞} e^{iωτ} / [ (ω_0²−ω²)² + (2ζ ω_0 ω)² ] dω,   (3.5.11)

    σ_X² = E[X²(t)] = (S_0/(2πm²)) ∫_{−∞}^{+∞} 1 / [ (ω_0²−ω²)² + (2ζ ω_0 ω)² ] dω.   (3.5.12)

The integrals in equations (3.5.11) and (3.5.12) may be evaluated by the method of
residues (see Section 3.8, in particular equation (3.8.11)) to give

    σ_X² = (S_0/(2πm²)) · (π/(2ζ ω_0³))   =⇒   σ_X² = S_0/(4 m² ζ ω_0³) = S_0/(2cK),   (3.5.13)

and

    R_XX(τ) = σ_X² e^{−ζ ω_0 |τ|} [ cos ω_d τ + (ζ ω_0/ω_d) sin ω_d |τ| ].    (3.5.14)
For a general (coloured noise) case, the integration in (3.5.9) may have to be performed
numerically. However, if the damping is light with ζ ≪ 1, the function |H(ω)|²
is sharply peaked near the frequency ω = ω_0 (see Figure 3.12). If, further, S_FF(ω) does
not vary too rapidly in the neighbourhood of ω = ω_0, as in Figure 3.13, the excitation
may be approximated by a white noise with spectral density equal to S_FF(ω_0). One may
then write

    σ_X² ≈ (S_FF(ω_0)/2π) ∫_{−∞}^{+∞} |H(ω)|² dω = S_FF(ω_0)/(4 m² ζ ω_0³) = S_FF(ω_0)/(2cK).   (3.5.15)
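As a check (an added illustration), the closed-form result (3.5.13) is easily verified by direct numerical integration of (3.5.12); the values of m, ω_0, ζ, and S_0 below are assumptions for the example:

# Illustrative sketch: mean-square response to white noise, eq. (3.5.12) vs. eq. (3.5.13).
import numpy as np

m, omega0, zeta, S0 = 1.0, 2 * np.pi, 0.05, 1.0
w = np.linspace(-200.0, 200.0, 400001)
H2 = 1.0 / (m**2 * ((omega0**2 - w**2)**2 + (2 * zeta * omega0 * w)**2))   # |H(w)|^2
sigma2_num = S0 / (2 * np.pi) * np.sum(H2) * (w[1] - w[0])
sigma2_exact = S0 / (4 * m**2 * zeta * omega0**3)
print(sigma2_num, sigma2_exact)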

3.5.2 MDOF Systems under Random Excitations


As derived in Section 3.4.2, the response of an N-DOF system in the nth normal mode
to a single component of earthquake input üg (t) is governed by equation (3.4.10),
where üg (t) is a random process with zero mean describing the ground acceleration.

Assumptions on the Ground Motion and Structural Response


To develop the response spectrum method, the following assumptions are made on the
ground motion process üg (t) and the response process.
1. The ground motion üg (t) is a stationary, Gaussian process with a wide-band power
spectral density.
❧ Whereas earthquake-induced ground motions are inherently nonstationary, the
strong phase of such motions is usually nearly stationary, as shown in Figure 3.14.
Because the peak response generally occurs during this phase, it is reasonable, at
least for the purpose of developing a response spectrum method, to assume it to
be a stationary process. This assumption would clearly become less accurate for
short-duration, impulsive earthquakes.
❧ The assumption of Gaussian excitation is acceptable on the basis of the central
limit theorem because the earthquake ground motion is the accumulation of a
large number of randomly arriving pulses.
❧ The wide-band assumption for the earthquake motion has been verified based on
recorded motions and is generally accepted.
2. The response of the linear structure is a stationary process.
❧ It is well known that the response of a not-too-lightly damped oscillator to a
wide-band input reaches stationarity in just a few cycles. Thus, this assumption
is acceptable for structures whose fundamental periods are several times shorter
than the strong-phase duration of the ground motion.
It is clear from the preceding discussion that the response spectrum method for earth-
quake loading will be most accurate for earthquakes with long stationary phases of

strong shaking and for not-too-lightly damped, not-too-flexible structures whose fundamental periods are several times shorter than the duration of the earthquake.

[Figure 3.12 Frequency response function |H(ω)|², showing the sharp peaks near ω = ±ω0 for light damping.]

[Figure 3.13 Approximation of a random process by a white noise process: suitable when S_FF(ω) is nearly constant over the peaks of |H(ω)|², not suitable when S_FF(ω) varies rapidly there.]

Response of the Structure


The solution of equation (3.4.10) is given by the Duhamel integral (3.5.2), i.e.,

q_n(t) = −Γ_n ∫_{−∞}^{∞} ü_g(t−τ) h_n(τ) dτ,    (3.5.16)

[Figure 3.14 A sample of earthquake ground motion u(t) versus t (s), with the nearly stationary strong-motion phase indicated.]

where

h_n(t) = e^{−ζn ωn t} sin(ω_{n,d} t) / ω_{n,d},    ω_{n,d} = ωn √(1−ζn²),    (3.5.17)

is the impulse response function with respect to base excitation of the nth mode, and its Fourier transform is the complex frequency response function with respect to base excitation given by

H_n(ω) = ∫_{−∞}^{∞} h_n(τ) e^{−iωτ} dτ = 1 / [ (ωn²−ω²) + i 2ζn ωn ω ].    (3.5.18)

Mean Response
Taking the expectation of both sides of equation (3.5.16), the mean response is
E[q_n(t)] = −Γ_n ∫_{−∞}^{∞} h_n(τ) E[ü_g(t−τ)] dτ = 0,    (3.5.19)

because üg (t) is stationary with mean zero, i.e., E[ üg (t) ] = 0.
Covariance of Response
The covariance of responses produced by modes m and n is given by

E[q_m(t) q_n(t+τ)]
  = E[ ∫_{−∞}^{∞} (−Γ_m) ü_g(t−τ₁) h_m(τ₁) dτ₁ · ∫_{−∞}^{∞} (−Γ_n) ü_g(t+τ−τ₂) h_n(τ₂) dτ₂ ]
  = Γ_m Γ_n ∫_{−∞}^{∞} ∫_{−∞}^{∞} h_m(τ₁) h_n(τ₂) E[ü_g(t−τ₁) ü_g(t+τ−τ₂)] dτ₁ dτ₂
  = Γ_m Γ_n ∫_{−∞}^{∞} ∫_{−∞}^{∞} h_m(τ₁) h_n(τ₂) R_{ü_g ü_g}(τ+τ₁−τ₂) dτ₁ dτ₂,    (3.5.20)

where R_{ü_g ü_g}(τ) = E[ü_g(t) ü_g(t+τ)] is the auto-correlation function of ü_g(t). Taking the Fourier transform of both sides yields

S_{q_m q_n}(ω) = ∫_{−∞}^{∞} E[q_m(t) q_n(t+τ)] e^{−iωτ} dτ
  = Γ_m Γ_n ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} h_m(τ₁) h_n(τ₂) R_{ü_g ü_g}(τ+τ₁−τ₂) e^{−iωτ} dτ₁ dτ₂ dτ.

Writing τ₃ = τ+τ₁−τ₂, i.e., τ = τ₃+τ₂−τ₁, one has

S_{q_m q_n}(ω) = Γ_m Γ_n ∫_{−∞}^{∞} h_m(τ₁) e^{iωτ₁} dτ₁ · ∫_{−∞}^{∞} h_n(τ₂) e^{−iωτ₂} dτ₂ · ∫_{−∞}^{∞} R_{ü_g ü_g}(τ₃) e^{−iωτ₃} dτ₃
  = Γ_m Γ_n H_m*(ω) H_n(ω) S_{ü_g ü_g}(ω),    (3.5.21)

where S_{ü_g ü_g}(ω) is the power spectral density of the earthquake excitation ü_g(t), which is the Fourier transform of R_{ü_g ü_g}(τ). Hence, taking the inverse Fourier transform gives

E[q_m(t) q_n(t+τ)] = (1/2π) ∫_{−∞}^{∞} S_{q_m q_n}(ω) e^{iωτ} dω
  = (Γ_m Γ_n / 2π) ∫_{−∞}^{∞} H_m*(ω) H_n(ω) S_{ü_g ü_g}(ω) e^{iωτ} dω,    (3.5.22)

and, by setting τ = 0,

E[q_m(t) q_n(t)] = (Γ_m Γ_n / 2π) ∫_{−∞}^{∞} H_m*(ω) H_n(ω) S_{ü_g ü_g}(ω) dω.    (3.5.23)

As a special case, when m = n, one obtains the auto-correlation function

E[q_n(t) q_n(t+τ)] = (Γ_n² / 2π) ∫_{−∞}^{∞} |H_n(ω)|² S_{ü_g ü_g}(ω) e^{iωτ} dω,    (3.5.24)

and the mean-square response

E[q_n²(t)] = (Γ_n² / 2π) ∫_{−∞}^{∞} |H_n(ω)|² S_{ü_g ü_g}(ω) dω.    (3.5.25)

Approximation of Covariance of Response


To evaluate equation (3.5.22), consider the following cases.
 
❧ When frequencies ωm and ωn are well separated, the narrow peaks of |H_m(ω)| and |H_n(ω)| do not overlap for lightly damped systems. In this case, the numerical value of the integral is relatively small, and the covariance E[q_m(t) q_n(t)] is very small compared to the mean-square values E[q_m²(t)] and E[q_n²(t)].
❧ When frequencies ωm and ωn are very close together, the narrow peaks of |H_m(ω)| and |H_n(ω)| overlap sufficiently so that the covariance E[q_m(t) q_n(t)] becomes of similar order of magnitude to the mean-square values E[q_m²(t)] and E[q_n²(t)]. Because the frequencies ωm and ωn must become very close to each other for this to happen, the value of S_{ü_g ü_g}(ω) will not vary greatly in the neighbourhood of these closely spaced frequencies, i.e., S_{ü_g ü_g}(ωm) ≈ S_{ü_g ü_g}(ωn) ≈ S_mn. Hence,

E[q_m(t) q_n(t)] = (Γ_m Γ_n S_mn / 2π) ∫_{−∞}^{∞} H_m*(ω) H_n(ω) dω.    (3.5.26)

Using equation (3.8.8), equation (3.5.26) becomes

E[q_m(t) q_n(t)] = (Γ_m Γ_n S_mn / 2π) · π ρ_mn / ( 2 √(ζm ζn ωm³ ωn³) ) = Γ_m Γ_n S_mn ρ_mn / ( 4 √(ζm ζn ωm³ ωn³) ),    (3.5.27)

where ρ_mn is given by equation (3.8.9). Noting that

Γ_m = L_m / m̄_m = (K_m / m̄_m) · (L_m / K_m) = ωm² L_m / K_m,

equation (3.5.27) can be written as

E[q_m(t) q_n(t)] = S_mn (ωm² L_m / K_m)(ωn² L_n / K_n) ρ_mn / ( 4 √(ζm ζn ωm³ ωn³) ) = ( S_mn L_m L_n / (4 K_m K_n) ) √( ωm ωn / (ζm ζn) ) ρ_mn.    (3.5.28)
For the special case when m = n, ωm = ωn, ζm = ζn, r = 1, ρ_nn = 1, equation (3.5.27) yields the mean-square response

E[q_n²(t)] = S_nn Γ_n² / (4 ζn ωn³),    (3.5.29)

where S_nn = S_{ü_g ü_g}(ωn), or the root-mean-square value of q_n(t)

σ_{q_n} = |Γ_n| √( S_nn / (4 ζn ωn³) ).    (3.5.30)
Using equation (3.5.30) and noting that S_mm ≈ S_nn ≈ S_mn, equation (3.5.27) can be written as

E[q_m(t) q_n(t)] = S_mn Γ_m Γ_n ρ_mn / ( 4 √(ζm ζn ωm³ ωn³) )
  = |Γ_m| √( S_mm / (4 ζm ωm³) ) · |Γ_n| √( S_nn / (4 ζn ωn³) ) · ρ_mn · ( Γ_m Γ_n / |Γ_m Γ_n| )
  = α_mn ρ_mn σ_{q_m} σ_{q_n},    α_mn = Γ_m Γ_n / |Γ_m Γ_n| = sgn(Γ_m Γ_n).    (3.5.31)

3.5.3 CQC and SRSS Combination of Modal Responses


Consider a response z(t), which has contributions from all N normal modes, given by

z(t) = Σ_{n=1}^{N} A_n q_n(t),    (3.5.32)

where the coefficients A_n are known for the structural system under consideration. Squaring both sides of equation (3.5.32) yields

z²(t) = Σ_{m=1}^{N} Σ_{n=1}^{N} A_m A_n q_m(t) q_n(t).

Taking the expected value of both sides gives the mean-square response

E[z²(t)] = σ_z² = Σ_{m=1}^{N} Σ_{n=1}^{N} A_m A_n E[q_m(t) q_n(t)].

Using equation (3.5.31) results in

σ_z = √( Σ_{m=1}^{N} Σ_{n=1}^{N} α_mn A_m A_n ρ_mn σ_{q_m} σ_{q_n} ).    (3.5.33)

The maximum value of the response z(t) is, according to Section 3.2,

|z(t)|_max = P_fz · σ_z,    P_fz = peak factor.    (3.5.34)

Similarly,

|q_n(t)|_max = P_fqn · σ_{q_n},    P_fqn = peak factor,    n = 1, 2, ..., N.    (3.5.35)

Substituting equations (3.5.34) and (3.5.35) into (3.5.33) results in

|z(t)|_max = P_fz √( Σ_{m=1}^{N} Σ_{n=1}^{N} α_mn A_m A_n ρ_mn · ( |q_m(t)|_max / P_fqm ) · ( |q_n(t)|_max / P_fqn ) ).    (3.5.36)

For responses in earthquake engineering, the values of the peak factors P_fqm, P_fqn, and P_fz do not differ significantly, i.e., P_fz² / (P_fqm · P_fqn) ≈ 1; hence, equation (3.5.36) can be approximated as

|z(t)|_max ≈ √( Σ_{m=1}^{N} Σ_{n=1}^{N} α_mn A_m A_n ρ_mn · |q_m(t)|_max · |q_n(t)|_max ).    (3.5.37)

❧ CQC Method. The combination method of (3.5.37) for evaluating the maximum total response from the individual maxima of modal responses is known as the complete quadratic combination (CQC) method. When the major contributing modes have frequencies close together, the corresponding cross terms in equation (3.5.37) can be very significant and should be retained.
❧ SRSS Method. If the frequencies of the contributing modes are well separated, the cross terms in equation (3.5.37) are negligible, i.e., ρ_mn ≪ 1 for m ≠ n. In this case, equation (3.5.37) reduces to

|z(t)|_max = √( Σ_{n=1}^{N} A_n² |q_n(t)|²_max ),    (3.5.38)

which is known as the square root of the sum of squares (SRSS) method.



It is very important to note that applying the CQC or the SRSS combination method must always be the last step in evaluating the maximum value of any response quantity. In other words, one cannot use the maximum value of one response quantity, obtained using the CQC or the SRSS method, to evaluate the maximum value of another response quantity.
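The following Python sketch illustrates the CQC and SRSS combinations of equations (3.5.37) and (3.5.38), with the correlation coefficient ρ_mn of equation (3.8.9). The modal data are illustrative assumptions (α_mn is taken as +1), not values from the text.

import numpy as np

def rho(wm, wn, zm, zn):
    # Correlation coefficient between modal responses, equation (3.8.9)
    r = wn / wm
    num = 8.0 * np.sqrt(zm * zn) * (zm + r * zn) * r**1.5
    den = (1 - r**2)**2 + 4 * zm * zn * r * (1 + r**2) + 4 * (zm**2 + zn**2) * r**2
    return num / den

def cqc(A, q_max, w, zeta):
    # CQC combination, equation (3.5.37), with alpha_mn taken as +1 (an assumption here)
    total = 0.0
    for m in range(len(A)):
        for n in range(len(A)):
            total += A[m] * A[n] * rho(w[m], w[n], zeta[m], zeta[n]) * q_max[m] * q_max[n]
    return np.sqrt(total)

def srss(A, q_max):
    # SRSS combination, equation (3.5.38)
    return np.sqrt(np.sum((np.asarray(A) * np.asarray(q_max))**2))

# Illustrative (assumed) modal data
w = [26.3, 32.3, 79.6]              # rad/s
zeta = [0.05, 0.05, 0.05]
A = [1.0, 0.8, 0.3]
q_max = [0.0063, 0.0082, 0.0005]
print(cqc(A, q_max, w, zeta), srss(A, q_max))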

3.6 Seismic Response Analysis


[Figure 3.15 A multiple-DOF structure under tridirectional seismic excitations u_g1(t), u_g2(t), u_g3(t); each node n has six DOF u_n,1, ..., u_n,6. Also shown are SDOF oscillators (ω0, ζ0) used to define the ground response spectrum from the absolute response of the oscillator.]

Consider a three-dimensional model of a structure with N nodes. A typical node n


has six DOF: three translational DOF un,1 , un,2 , and un,3 , and three rotational DOF
un,4 , un,5 , and un,6 . The structure is subjected to tridirectional seismic excitations
(Figure 3.15). The relative displacement vector x of dimension 6N is governed by

M ẍ(t) + C ẋ(t) + K x(t) = −M Σ_{I=1}^{3} I_I ü_gI(t),    (3.6.1)

where

x = {x_1, x_2, ..., x_N}ᵀ,    x_n = {x_{n,1}, x_{n,2}, ..., x_{n,6}}ᵀ,
I_I = {1_I, 1_I, ..., 1_I}ᵀ,    1_I = {δ_{I1}, δ_{I2}, ..., δ_{I6}}ᵀ,    (3.6.2)

M, C, and K are, respectively, the mass, damping, and stiffness matrices of dimension 6N×6N, x_n is the relative displacement vector of node n, I_I is the influence vector of seismic excitation in direction I, and δ_{Ij} denotes the Kronecker delta, i.e., δ_{Ij} = 0 if I ≠ j, and δ_{Ij} = 1 if I = j.
Let x = x¹ + x² + x³, where x^I is the relative displacement vector due to earthquake excitation u_gI(t) in direction I. Hence, x^I_{n,j} = u^I_{n,j} − u_gI δ_{Ij}, where x^I_{n,j} and u^I_{n,j}

are, respectively, the relative and absolute displacements of node n in direction j due to
earthquake excitation in direction I. Because the system is linear, from equation (3.6.1),
x^I is governed by

M ẍ^I(t) + C ẋ^I(t) + K x^I(t) = −M I_I ü_gI(t),    I = 1, 2, 3.    (3.6.3)

In response time-history methods, equation (3.6.3) may be solved by the modal superposition method or by the direct time integration method, whereas in the response spectrum method, equations (3.6.3) are solved by the modal superposition method.

3.6.1 Modal Superposition Method


Free Vibration
Consider first the undamped free vibration with

M ẍ^I(t) + K x^I(t) = 0.    (3.6.4)

Let ω1, ω2, ..., ω6N be the 6N natural frequencies and Φ = [φ_1, φ_2, ..., φ_6N] be the modal matrix, where φ_K = {φᵀ_{1,K}, φᵀ_{2,K}, ..., φᵀ_{N,K}}ᵀ is the mode shape of the Kth mode, with φ_{n,K} = {φ_{n,1;K}, φ_{n,2;K}, ..., φ_{n,6;K}}ᵀ. In element φ_{n,j;K}, the first subscript n refers to the node number, the second subscript j indicates the direction of response, and the third subscript K is the mode number.
The modal matrix Φ has the following orthogonality relations

Φᵀ M Φ = diag[m̄1, m̄2, ..., m̄6N] = m̄,
Φᵀ K Φ = m̄ Ω² = diag[m̄1 ω1², m̄2 ω2², ..., m̄6N ω6N²],    (3.6.5)
Ω = diag[ω1, ω2, ..., ω6N],

where m̄1, m̄2, ..., m̄6N are the modal masses. Assume that the structure has classical damping so that the modal matrix Φ can also diagonalize the damping matrix

Φᵀ C Φ = diag[c̄1, c̄2, ..., c̄6N],    c̄_K = m̄_K · 2ζ_K ω_K.    (3.6.6)

Forced Vibration
Apply the transformation

x^I(t) = Φ Q^I(t),    I = 1, 2, 3,    (3.6.7)

where

Q^I = {Q^I_1, Q^I_2, ..., Q^I_N}ᵀ = {Γ^I_1 q^I_1, Γ^I_2 q^I_2, ..., Γ^I_6N q^I_6N}ᵀ,    Q^I_n = {Γ^I_{n,1} q^I_{n,1}, ..., Γ^I_{n,6} q^I_{n,6}}ᵀ,
x^I = {x^I_1, x^I_2, ..., x^I_N}ᵀ,    x^I_n = {x^I_{n,1}, ..., x^I_{n,6}}ᵀ,    (3.6.8)


x^I_{n,j} = Σ_{ν=1}^{N} Σ_{δ=1}^{6} φ_{n,j;6(ν−1)+δ} Γ^I_{ν,δ} q^I_{ν,δ} = Σ_{K=1}^{6N} φ_{n,j;K} Γ^I_K q^I_K,    (3.6.9)

L^I_{n,j} = φᵀ_{6(n−1)+j} M I_I,   or   L^I_K = φᵀ_K M I_I,    (3.6.10)

Γ^I_{n,j} = L^I_{n,j} / m̄_{6(n−1)+j},   or   Γ^I_K = L^I_K / m̄_K = φᵀ_K M I_I / (φᵀ_K M φ_K).    (3.6.11)
For ease of presentation, the two-subscript notation n, j (node, direction) and the one-subscript notation K = 6(n−1)+j are used interchangeably; the former is advantageous in describing the meaning of the quantity in terms of node and direction, and the latter gives the position of the quantity in the corresponding vector. L^I_{n,j} is the earthquake excitation factor, quantifying the contribution of the earthquake excitation in the Ith direction to the modal response q^I_{n,j}. Γ^I_K is the modal participation factor; if Γ^I_K is small, then the contribution of mode φ_K to the structural response due to excitation in the Ith direction is small.
Substituting equation (3.6.7) into (3.6.3) and multiplying by Φᵀ from the left yield

(Φᵀ M Φ) Q̈^I(t) + (Φᵀ C Φ) Q̇^I(t) + (Φᵀ K Φ) Q^I(t) = −Φᵀ M I_I ü_gI(t).

Using equations (3.6.5), (3.6.6), (3.6.10), and (3.6.11) gives

m̄_K Γ^I_K q̈^I_K + m̄_K 2ζ_K ω_K Γ^I_K q̇^I_K + m̄_K ω_K² Γ^I_K q^I_K = −L^I_K ü_gI(t),

or

q̈^I_K(t) + 2ζ_K ω_K q̇^I_K(t) + ω_K² q^I_K(t) = −ü_gI(t),    K = 1, 2, ..., 6N,    I = 1, 2, 3.    (3.6.12)

The solution of equation (3.6.12) is given by the Duhamel integral

q^I_K(t) = −(1/ω_K) V^I_K(t),    V^I_K(t) = ∫_0^t e^{−ζ_K ω_K (t−τ)} sin ω_K(t−τ) ü_gI(τ) dτ.    (3.6.13)

The modal response Q^I_K(t) is

Q^I_K(t) = Γ^I_K q^I_K = −(Γ^I_K / ω_K) V^I_K(t).    (3.6.14)
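As a minimal numerical sketch of the Duhamel integral (3.6.13), the following Python function evaluates the convolution for one mode by direct quadrature. The ground-acceleration record used here is a made-up placeholder; in practice ug_ddot would be a recorded or generated accelerogram.

import numpy as np

def duhamel_modal_response(ug_ddot, dt, wK, zetaK):
    # q_K(t) = -V_K(t)/omega_K with V_K(t) given by equation (3.6.13)
    t = np.arange(len(ug_ddot)) * dt
    q = np.zeros_like(t)
    for i, ti in enumerate(t):
        tau = t[: i + 1]
        kernel = np.exp(-zetaK * wK * (ti - tau)) * np.sin(wK * (ti - tau))
        q[i] = -np.trapz(kernel * ug_ddot[: i + 1], dx=dt) / wK
    return q

dt = 0.01
ug_ddot = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0, 10, dt))   # placeholder input
q = duhamel_modal_response(ug_ddot, dt, wK=26.33, zetaK=0.05)
print(np.max(np.abs(q)))    # peak modal coordinate, used later in (3.6.20)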

3.6.2 Seismic Response History Analysis


The set of 6N coupled differential equations in (3.6.3) in nodal displacements xKI (t)
is transformed to the set of 6N uncoupled differential equations given by (3.6.12) in
modal coordinates qKI (t). Equation (3.6.12) governs the equation of motion of the Kth
mode (an SDOF system) of the structural system with damping ratio ζK and natural
frequency ωK subjected to ground acceleration ügI (t) in the Ith direction. The response
(relative displacement with respect to the ground) of the linear 6N DOF system as
shown in Figure 3.15 subjected to earthquake ground motion ügI (t) (I = 1, 2, 3) can then
be obtained using equations (3.6.7) and (3.6.14).
98

The seismic response history analysis (SRHA) procedure is concerned with the calculation of structural response as a function of time when the system is subjected to a set of tridirectional ground accelerations ü_gI(t) (I = 1, 2, 3). For illustration purposes, the nodal inertia forces of the 6N-DOF system are computed using SRHA based on the modal superposition method.
From equations (3.6.7) and (3.6.14), the acceleration vector of the DOF of the system relative to the ground is given by

ẍ^I(t) = Σ_{K=1}^{6N} φ_K Γ^I_K q̈^I_K(t),    I = 1, 2, 3,    (3.6.15)

where q̈^I_K(t) is the acceleration of the Kth mode of the system relative to the ground. The vector of the earthquake-induced ground motion in the Ith direction can be written as

ü^I_g(t) = I_I ü_gI(t) = Σ_{K=1}^{6N} φ_K Γ^I_K ü_gI(t),    I_I = Σ_{K=1}^{6N} φ_K Γ^I_K.    (3.6.16)

The nodal inertia forces of the 6N-DOF system subjected to ground acceleration in the Ith direction are given by

F^I(t) = −M [ ü^I_g(t) + ẍ^I(t) ],    (3.6.17)

where ü^I_g(t) + ẍ^I(t) is the vector of the nodal absolute accelerations of the system subjected to ground acceleration in the Ith direction. Substituting equations (3.6.15) and (3.6.16) into equation (3.6.17) gives

F^I(t) = −M Σ_{K=1}^{6N} φ_K Γ^I_K [ ü_gI(t) + q̈^I_K(t) ],    (3.6.18)

which is a function of time. The vector of time-histories of the nodal inertia forces of the system due to the tridirectional ground acceleration can be obtained by algebraic summation of the response time-histories at each time step due to the ground acceleration in each individual direction, i.e., F(t) = Σ_{I=1}^{3} F^I(t).

3.6.3 Direct Time Integration Method


In Section 3.6.2, coupled equations of motion (3.6.1) or (3.6.3) of a linear structural
system with classical damping are decoupled using modal analysis, such that the solu-
tions to a set of coupled differential equations are transformed to the solutions to a set
of uncoupled differential equations of equivalent SDOF system. In this case, numerical
methods are only involved in solving the equations of motion (3.6.12) of SDOF system
when subjected to earthquake ground motion.
However, uncoupling of the equations of motion is not possible if the structural system has nonclassical damping or if it responds into the nonlinear range. In this section, the basic concepts of the direct time integration method for solving the coupled equations of motion of such structural systems are introduced.

The objective is to numerically solve the system of differential equations of the multiple DOF system given by equation (3.6.1),

M ẍ(t) + C ẋ(t) + K x(t) = p(t),    p(t) = −M Σ_{I=1}^{3} I_I ü_gI(t),    (3.6.19)

with equivalent earthquake loading p(t) and initial conditions x = x(0), ẋ = ẋ(0) at t = 0. The solution will provide the displacement vector x(t) as a function of time.
By direct time integration, the equation of motion (3.6.19) is solved using numerical integration schemes. For linear response analysis of multiple DOF systems, the central difference method and Newmark's method are two popular direct time integration methods, which are detailed in Chopra (2012). For nonlinear systems, the solution algorithm involves an iteration process to converge at each time step. Direct time integration is usually performed using a commercial finite element analysis package, such as STARDYNE or ANSYS.
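As a sketch of the idea (not a production implementation), the following Python function carries out Newmark's method with average acceleration (γ = 1/2, β = 1/4) for a linear MDOF system of the form (3.6.19); the matrices and the load history p are assumed inputs supplied by the user.

import numpy as np

def newmark_linear(M, C, K, p, dt, gamma=0.5, beta=0.25):
    # Step-by-step solution of M x'' + C x' + K x = p(t) for a linear system.
    n_steps, ndof = p.shape[0], M.shape[0]
    x = np.zeros((n_steps, ndof)); v = np.zeros_like(x); a = np.zeros_like(x)
    a[0] = np.linalg.solve(M, p[0] - C @ v[0] - K @ x[0])
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    for i in range(n_steps - 1):
        rhs = (p[i + 1]
               + M @ (x[i] / (beta * dt**2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
               + C @ (gamma / (beta * dt) * x[i] + (gamma / beta - 1) * v[i]
                      + dt * (gamma / (2 * beta) - 1) * a[i]))
        x[i + 1] = np.linalg.solve(K_eff, rhs)
        v[i + 1] = (gamma / (beta * dt) * (x[i + 1] - x[i])
                    + (1 - gamma / beta) * v[i] + dt * (1 - gamma / (2 * beta)) * a[i])
        a[i + 1] = ((x[i + 1] - x[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
    return x, v, a

The average-acceleration version is unconditionally stable for linear systems, which is one reason it is a common default choice.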

3.6.4 Seismic Response Spectrum Analysis


The seismic response spectrum analysis (SRSA) is concerned with procedures for com-
puting the peak response of a structure during an earthquake directly from the earth-
quake response (or design) spectra without the need for time-history analysis of the
structure. This procedure does not give the exact peak response, but it provides an
estimate that is sufficiently accurate for structural analysis and design applications.
Based on equation (3.6.14), the maximum absolute value of Q^I_K(t) is, using the response spectra studied in detail in Chapter 4,

|Q^I_K(t)|_max = ( |Γ^I_K| / ω_K ) S^I_V(ζ_K, ω_K) = ( |Γ^I_K| / ω_K² ) S^I_A(ζ_K, ω_K) = |Γ^I_K| S^I_D(ζ_K, ω_K),    (3.6.20)

where

S^I_V(ζ_K, ω_K) = |V^I_K(t)|_max = | ∫_0^t e^{−ζ_K ω_K (t−τ)} sin ω_K(t−τ) ü_gI(τ) dτ |_max    (3.6.21)

is the velocity response spectrum in direction I, and

S^I_A(ζ_K, ω_K) = ω_K S^I_V(ζ_K, ω_K),    S^I_D(ζ_K, ω_K) = S^I_V(ζ_K, ω_K) / ω_K    (3.6.22)

are the acceleration and displacement response spectra, respectively, in direction I.
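The sketch below, in Python, evaluates the spectral ordinates of equations (3.6.21) and (3.6.22) for a few oscillator frequencies by direct quadrature of the Duhamel integral. The ground-acceleration record is a synthetic placeholder, and the chosen frequencies and damping are illustrative assumptions.

import numpy as np

def velocity_spectrum(ug_ddot, dt, omegas, zeta):
    # S_V(zeta, omega) of equation (3.6.21) for each frequency in omegas
    t = np.arange(len(ug_ddot)) * dt
    SV = np.zeros(len(omegas))
    for k, w in enumerate(omegas):
        V = np.zeros_like(t)
        for i, ti in enumerate(t):
            tau = t[: i + 1]
            V[i] = np.trapz(np.exp(-zeta * w * (ti - tau)) * np.sin(w * (ti - tau))
                            * ug_ddot[: i + 1], dx=dt)
        SV[k] = np.max(np.abs(V))
    return SV

dt, T = 0.02, 20.0
rng = np.random.default_rng(0)
ug_ddot = 0.3 * 9.81 * rng.standard_normal(int(T / dt)) * np.exp(-0.2 * np.arange(0, T, dt))
omegas = 2 * np.pi * np.array([1.0, 2.0, 4.19, 5.14, 12.67])
SV = velocity_spectrum(ug_ddot, dt, omegas, zeta=0.05)
SA, SD = omegas * SV, SV / omegas        # equation (3.6.22)
print(SA / 9.81)                          # spectral accelerations in units of g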


The relative displacement x^I_{n,j} of node n in direction j due to seismic excitation in direction I is given by (3.6.9). The elastic force vector f^I_s associated with the relative displacement x^I(t) due to seismic excitation in direction I is

f^I_s(t) = K x^I(t) = K Φ Q^I(t).    (3.6.23)

In undamped free vibration, the elastic forces can be expressed in terms of the equivalent inertia forces, K φ_K = ω_K² M φ_K, i.e.,

K Φ = M Φ Ω².    (3.6.24)

Hence,

f^I_s(t) = K Φ Q^I(t) = M Φ Ω² Q^I(t).    (3.6.25)
In general, for a response quantity z^I(t) due to seismic excitation in direction I,

z^I(t) = Σ_{K=1}^{6N} A^I_K q^I_K(t),    (3.6.26)

its maximum absolute value can be obtained using CQC as

|z^I(t)|_max = √( Σ_{K=1}^{6N} Σ_{K′=1}^{6N} α^I_{KK′} A^I_K A^I_{K′} ρ_{KK′} · |q^I_K(t)|_max · |q^I_{K′}(t)|_max ),    (3.6.27)

where αK K is given by equation (3.5.31) and ρK K is given by equation (3.8.9)

I I
αK K =  KI KI  = sgn(KI KI ), (3.6.28)
  
K K

8 ζK ζK (ζK +rζK )r3/2 ω
ρK K = , r = ωK , (3.6.29)
(1−r 2 )2 + 4 ζK ζK r (1+r 2 ) + 4 (ζK2 +ζK2 )r 2 K

or using SRSS as 
 I  
6N  2
z (t) = (AIK )2 qKI (t)max . (3.6.30)
max
K=1
 
The directional maximum absolute values |z^I(t)|_max, I = 1, 2, 3, are combined using SRSS to obtain |z(t)|_max as

|z(t)|_max = √( |z¹(t)|²_max + |z²(t)|²_max + |z³(t)|²_max ).    (3.6.31)

Example
Consider a frame ABC with a rigid right angle at B, clamped to the ground at support A, as shown in Figure 3.16(a). It supports a lumped mass m at end C with three DOF u1, u2, and u3. The weight of the frame is negligible. The flexural rigidity of both members AB (in both directions 1 and 2) and BC (in both directions 2 and 3) is EI. Both members AB and BC are rigid in the axial direction and in torsion.
The frame is subjected to tridirectional ground excitations üg1(t), üg2(t), and üg3(t). The ground response spectra (GRS) follow USNRC R.G. 1.60 (USNRC, 2014), with the GRS in both horizontal directions being the same and anchored at PGA = 0.3g, and the vertical GRS anchored at PGA = 0.2g. The modal damping is 5 % for all modes.

[Figure 3.16 (a) The frame ABC under tridirectional seismic excitations ug1(t), ug2(t), ug3(t), with the lumped mass m and DOF u1, u2, u3 at end C. (b) Useful cantilever results: for a tip force P on a cantilever of length L, δ_B = PL³/(3EI), θ_B = PL²/(2EI); for a tip moment M, δ_B = ML²/(2EI), θ_B = ML/EI.]

[Figure 3.17 Flexibility: a unit force F = 1 is applied along each DOF in turn to obtain the flexibility coefficients.]

Use the parameters l = 2L, L = 4 m, m = 5 kg, and EI = 10⁶ N·m², where l is the length of member AB and L is the length of member BC. Using the response spectrum method, determine the maximum absolute values of the relative displacements x1 = u1−ug1, x2 = u2−ug2, and x3 = u3−ug3, and the maximum overturning moments M1 and M2 in directions 1 and 2.

Because there is only one lumped mass at end C, the mass matrix is

M = diag[m, m, m].

It is easy to determine the flexibility matrix using the definition of flexibility: F_ij is the displacement along DOF i due to a unit force applied along DOF j, with the forces along

all other DOF kept zero. Some useful results for the bending of a cantilever are shown in Figure 3.16(b). Referring to Figure 3.17, apply a unit force F = 1 along each DOF in turn; the elements of the flexibility matrix F are

F11 = l³/(3EI),    F21 = 0,    F31 = −l²L/(2EI),
F12 = 0,    F22 = (l³+L³)/(3EI),    F32 = 0,
F13 = −l²L/(2EI),    F23 = 0,    F33 = lL²/EI + L³/(3EI).
Using l = 2L, the stiffness matrix is given by

K = F⁻¹ = (EI/L³) [ 21/20    0    9/10
                      0     1/3    0
                     9/10    0    6/5 ].
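A quick numerical check of this inversion (a sketch using the example's values l = 2L = 8 m, L = 4 m, EI = 10⁶ N·m²):

import numpy as np

L, l, EI = 4.0, 8.0, 1.0e6
F = np.array([[l**3 / 3,            0.0, -l**2 * L / 2],
              [0.0,  (l**3 + L**3) / 3,            0.0],
              [-l**2 * L / 2,       0.0, l * L**2 + L**3 / 3]]) / EI
K = np.linalg.inv(F)
print(K * L**3 / EI)   # should reproduce [[21/20, 0, 9/10], [0, 1/3, 0], [9/10, 0, 6/5]]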
Due to the tridirectional ground excitations, the inertia forces applied on the mass m are F_I = −m(ẍ_I + üg_I), I = 1, 2, 3. Using the flexibility matrix, the relative displacements x_I = u_I − ug_I, I = 1, 2, 3, due to the inertia forces are

x_I = F_I1 [−m(ẍ1 + üg1)] + F_I2 [−m(ẍ2 + üg2)] + F_I3 [−m(ẍ3 + üg3)],

or, in matrix form,

{x1, x2, x3}ᵀ = −F · diag[m, m, m] · {ẍ1+üg1, ẍ2+üg2, ẍ3+üg3}ᵀ.

Multiplying the equation by K = F⁻¹ yields

K x = −M ẍ − M {1, 0, 0}ᵀ üg1 − M {0, 1, 0}ᵀ üg2 − M {0, 0, 1}ᵀ üg3,

or

M ẍ + K x = −M I1 üg1 − M I2 üg2 − M I3 üg3,

where

I1 = {1, 0, 0}ᵀ,    I2 = {0, 1, 0}ᵀ,    I3 = {0, 0, 1}ᵀ.

The eigenequation is given by

|K − ω²M| = 0  =⇒  | 21/20−λ     0      9/10
                        0      1/3−λ     0
                       9/10      0     6/5−λ | = 0,    λ = ω² · mL³/EI.

The eigenvalues and eigenvectors are

λ1 = 0.2219,    λ2 = 0.3333,    λ3 = 2.0281,

φ1 = {−1.0868, 0, 1}ᵀ,    φ2 = {0, 1, 0}ᵀ,    φ3 = {0.9201, 0, 1}ᵀ  =⇒  Φ = [φ1, φ2, φ3].

The modal frequencies are determined as

ω_I = √( λ_I EI/(mL³) ),    F_I = ω_I/(2π),

which give

ω1 = 26.3320 rad/s,    ω2 = 32.2749 rad/s,    ω3 = 79.6108 rad/s,
F1 = 4.1909 Hz,    F2 = 5.1367 Hz,    F3 = 12.6704 Hz.

It is easy to determine that

Φᵀ M Φ = diag[m̄1, m̄2, m̄3] = diag[10.9057, 5, 9.2332],
Φᵀ K Φ = diag[7561.7302, 5208.3333, 58518.9990].

Using equation (3.6.11), the modal participation factors are

Γ^1_1 = −0.4983,    Γ^1_2 = 0,    Γ^1_3 = 0.4983,
Γ^2_1 = 0,    Γ^2_2 = 1,    Γ^2_3 = 0,
Γ^3_1 = 0.4585,    Γ^3_2 = 0,    Γ^3_3 = 0.5415.
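A minimal Python sketch reproducing these modal properties, using the generalized eigenproblem for the frequencies and equation (3.6.11) for the participation factors with the mode shapes as normalized in the text:

import numpy as np
from scipy.linalg import eigh

L, EI, m = 4.0, 1.0e6, 5.0
K = (EI / L**3) * np.array([[21/20, 0.0, 9/10],
                            [0.0,   1/3, 0.0 ],
                            [9/10,  0.0, 6/5 ]])
M = m * np.eye(3)
w2, _ = eigh(K, M)                          # K phi = omega^2 M phi
print(np.sqrt(w2))                          # ~ [26.33, 32.27, 79.61] rad/s

Phi = np.array([[-1.0868, 0.0, 0.9201],     # mode shapes as quoted above
                [ 0.0,    1.0, 0.0   ],
                [ 1.0,    0.0, 1.0   ]])
m_bar = np.diag(Phi.T @ M @ Phi)            # ~ [10.906, 5, 9.233]
for I in range(3):
    Gamma = (Phi.T @ M @ np.eye(3)[:, I]) / m_bar   # Gamma_K^I, equation (3.6.11)
    print(f"direction {I + 1}:", np.round(Gamma, 4))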

The critical points of USNRC R.G. 1.60 GRS are listed in Table 3.1; each spectrum
is obtained by connecting the critical points linearly in the log-log scale. Using linear
interpolation, the spectral values at the modal frequencies can be determined, with
ζ1 = ζ2 = ζ3 = 5 %:

SA1 (ζ1 , F 1 ) = 0.8957g, SA1 (ζ2 , F 2 ) = 0.8723g, SA1 (ζ3 , F 3 ) = 0.6761g,


SA2 (ζ1 , F 1 ) = 0.8957g, SA2 (ζ2 , F 2 ) = 0.8723g, SA2 (ζ3 , F 3 ) = 0.6761g,
SA3 (ζ1 , F 1 ) = 0.5862g, SA3 (ζ2 , F 2 ) = 0.5729g, SA3 (ζ3 , F 3 ) = 0.4508g.
The GRS are shown in Figure 3.18.
Using equation (3.6.20), the maximum modal responses can be determined:

|Q^1_1(t)|_max = 0.006307 m,    |Q^1_2(t)|_max = 0,    |Q^1_3(t)|_max = 0.000521 m,
|Q^2_1(t)|_max = 0,    |Q^2_2(t)|_max = 0.008206 m,    |Q^2_3(t)|_max = 0,
|Q^3_1(t)|_max = 0.003798 m,    |Q^3_2(t)|_max = 0,    |Q^3_3(t)|_max = 0.000377 m.
Table 3.1 USNRC R.G. 1.60 GRS with 5 % damping

        Horizontal                Vertical
   F (Hz)    SA (g)          F (Hz)    SA (g)
    0.1     0.02264            0.1    0.01009
    0.25    0.14149            0.25   0.06304
    2.5     0.93900            3.5    0.59600
    9       0.78300            9      0.52200
   33       0.30000           33      0.20000
  100       0.30000          100      0.20000

[Figure 3.18 USNRC R.G. 1.60 GRS with 5 % damping: horizontal spectrum anchored at 0.3g and vertical spectrum anchored at 0.2g, with the spectral accelerations at the modal frequencies f1 = 4.1909 Hz, f2 = 5.1367 Hz, and f3 = 12.6704 Hz indicated.]

The relative displacement vector is given by equation (3.6.7),

x^I(t) = Φ Q^I(t) = { −1.0868 Q^I_1 + 0.9201 Q^I_3,   Q^I_2,   Q^I_1 + Q^I_3 }ᵀ.

From SRSS, the maximum relative displacements are

|x^I_1|_max = √( (1.0868 |Q^I_1|_max)² + (0.9201 |Q^I_3|_max)² ),
|x^I_2|_max = |Q^I_2|_max,    |x^I_3|_max = √( |Q^I_1|²_max + |Q^I_3|²_max ),

which give

|x^1_1|_max = 0.006872 m,    |x^1_2|_max = 0,    |x^1_3|_max = 0.006329 m,
|x^2_1|_max = 0,    |x^2_2|_max = 0.008206 m,    |x^2_3|_max = 0,
|x^3_1|_max = 0.004143 m,    |x^3_2|_max = 0,    |x^3_3|_max = 0.003817 m.

Applying SRSS over the three directions, the maximum relative displacements are

|x_K|_max = √( |x^1_K|²_max + |x^2_K|²_max + |x^3_K|²_max ),    K = 1, 2, 3,

|x_1|_max = 0.008024 m,    |x_2|_max = 0.008206 m,    |x_3|_max = 0.007391 m.
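A short Python sketch reproducing these displacement results from the modal data of the example (participation factors, frequencies, and spectral accelerations), with SRSS over modes and then over the three excitation directions; small differences from the quoted values arise only from rounding of the intermediate quantities.

import numpy as np

g = 9.81
w = np.array([26.3320, 32.2749, 79.6108])                 # rad/s
Phi = np.array([[-1.0868, 0.0, 0.9201],
                [ 0.0,    1.0, 0.0   ],
                [ 1.0,    0.0, 1.0   ]])
Gamma = np.array([[-0.4983, 0.0, 0.4983],                 # Gamma_K^I, row = direction I
                  [ 0.0,    1.0, 0.0   ],
                  [ 0.4585, 0.0, 0.5415]])
SA = g * np.array([[0.8957, 0.8723, 0.6761],              # S_A^I(zeta_K, F_K), row = direction I
                   [0.8957, 0.8723, 0.6761],
                   [0.5862, 0.5729, 0.4508]])

Q_max = np.abs(Gamma) * SA / w**2                         # |Q_K^I|_max, equation (3.6.20)
# SRSS over modes for each DOF and direction (x^I = Phi Q^I), then SRSS over directions
x_dir_max = np.sqrt((Phi[None, :, :]**2 * Q_max[:, None, :]**2).sum(axis=2))
x_max = np.sqrt((x_dir_max**2).sum(axis=0))               # equation (3.6.31)
print(np.round(x_max, 4))                                  # ~ [0.0080, 0.0082, 0.0074] m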

The elastic force vector is given by equation (3.6.25),

f^I_s(t) = M Φ Ω² Q^I(t)
         = diag[m, m, m] · [ −1.0868  0  0.9201; 0  1  0; 1  0  1 ] · diag[26.3320², 32.2749², 79.6108²] · {Q^I_1, Q^I_2, Q^I_3}ᵀ
         = { −3767.8051 Q^I_1 + 29158.4301 Q^I_3,   5208.3333 Q^I_2,   3466.8813 Q^I_1 + 31689.3687 Q^I_3 }ᵀ.
The overturning moments at support A are

M^I_1 = −F^I_s2 · l = −41666.6667 Q^I_2,
M^I_2 = F^I_s1 · l − F^I_s3 · L = −44009.9660 Q^I_1 + 106509.9662 Q^I_3.

Applying SRSS, the maximum overturning moments are

|M^I_1|_max = 41666.6667 |Q^I_2|_max,
|M^I_2|_max = √( (44009.9660 |Q^I_1|_max)² + (106509.9662 |Q^I_3|_max)² ).

The maximum overturning moments due to the directional earthquake excitations are

|M^1_1|_max = 0,    |M^2_1|_max = 341.9358 N·m,    |M^3_1|_max = 0,
|M^1_2|_max = 283.0883 N·m,    |M^2_2|_max = 0,    |M^3_2|_max = 171.9290 N·m.

Applying SRSS over the three directions, the maximum overturning moments are

|M_K|_max = √( |M^1_K|²_max + |M^2_K|²_max + |M^3_K|²_max ),    K = 1, 2,

|M_1|_max = 341.9358 N·m,    |M_2|_max = 331.2078 N·m.

3.7 Nonlinear Systems


The method of equivalent linearization, which replaces a nonlinear system by an equiv-
alent linear system, is useful in determining the mean-square response to stationary
excitation. Consider the following SDOF system

Ẍ + f(Ẋ) + g(X) = F(t),    (3.7.1)

where f(Ẋ) and g(X) are nonlinear damping and restoring-force functions, respectively.



Property of Gaussian Random Variables

1. If X1(t), X2(t), ..., Xn(t) are Gaussian random variables with mean zero, then

E[X1 X2 X3] = 0,
E[X1 X2 X3 X4] = E[X1 X2] E[X3 X4] + E[X1 X3] E[X2 X4] + E[X1 X4] E[X2 X3].

In general, for m = 2, 3, ...,

E[X1 X2 ··· X_{2m−1}] = 0,    E[X1 X2 ··· X_{2m}] = Σ E[X_i X_j] E[X_k X_l] ···,    (3.7.5)

where the summation involves (2m)!/(2^m m!) terms and is to be taken over all different ways by which the 2m elements can be grouped into m distinct pairs.

2. If X(t) is a stationary process, then

R_XX(τ) = E[X(t) X(t+τ)],
R′_XX(τ) = R_XẊ(τ) = E[X(t) Ẋ(t+τ)] = E[X(t−τ) Ẋ(t)],    (3.7.6)
R″_XX(τ) = −R_ẊẊ(τ) = −E[Ẋ(t) Ẋ(t+τ)] = −E[Ẋ(t−τ) Ẋ(t)].

Replacing equation (3.7.1) by the equivalent linear system

Ẍ + β Ẋ + ω0² X = F(t),    (3.7.2)

the error of linearization, which is also a random process, is

E = [ ω0² X − g(X) ] + [ β Ẋ − f(Ẋ) ].    (3.7.3)

If the error term is neglected, the response of the linear system (3.7.2) is given by (3.5.10),

E[X²(t)] = (1/2π) ∫_{−∞}^{+∞} S_FF(ω) / [ (ω0²−ω²)² + (βω)² ] dω.    (3.7.4)

The parameters β and ω0² are chosen in such a way that some average function of the error E is minimized.
Let E1(X) = αX − g(X), E2(Ẋ) = βẊ − f(Ẋ), α = ω0². The usual choice is to minimize the mean-square errors

E[E1²] = α² E[X²] − 2α E[X g(X)] + E[g²(X)],
E[E2²] = β² E[Ẋ²] − 2β E[Ẋ f(Ẋ)] + E[f²(Ẋ)],

i.e.,

∂E[E1²]/∂α = 0  =⇒  α = E[X g(X)] / E[X²],    ∂E[E2²]/∂β = 0  =⇒  β = E[Ẋ f(Ẋ)] / E[Ẋ²].    (3.7.7)

Equations (3.7.4) and (3.7.7) are solved to determine the mean-square response E[X²] and the parameters β and ω0² = α.
As an example, consider the following SDOF system with nonlinear damping

Ẍ + β (Ẋ + ε Ẋ³) + ω0² X = F(t),    (3.7.8)

where F(t) is a white noise process with S_FF(ω) = S0.
Replace the equation of motion by the linear equation (3.7.2). Because F(t) is a white noise process, R_FF(τ) = S0 δ(τ), and

S_XX(ω) = |H(ω)|² S_FF(ω) = S0 / [ (ω0²−ω²)² + (βω)² ].    (3.7.9)

Therefore, from equation (3.5.14),

R_XX(τ) = (1/2π) ∫_{−∞}^{+∞} S_XX(ω) e^{iωτ} dω
        = σX² e^{−βτ/2} [ cos ω_D τ + ( β/(2ω_D) ) sin ω_D τ ],    ω_D = ( ω0² − β²/4 )^{1/2},    (3.7.10)

and, from equation (3.5.13), the mean-square response of system (3.7.2) is given by

E[X²] = σX² = R_XX(0) = (1/2π) ∫_{−∞}^{+∞} |H(ω)|² S0 dω = S0 / (2βω0²).    (3.7.11)

Differentiating equation (3.7.10) and using equation (3.7.6) result in

R′_XX(τ) = R_XẊ(τ) = −(ω0²/ω_D) σX² e^{−βτ/2} sin ω_D τ,    (3.7.12)

R″_XX(0) = −R_ẊẊ(0) = −ω0² σX².    (3.7.13)

From equation (3.7.13),

E[Ẋ²] = R_ẊẊ(0) = −R″_XX(0) = ω0² σX² = S0/(2β).    (3.7.14)
The error of the approximation is

E2(Ẋ) = β Ẋ − f(Ẋ),    f(Ẋ) = β (Ẋ + ε Ẋ³).    (3.7.15)

Minimizing the mean-square error E[E2²(Ẋ)], one obtains, from equation (3.7.7),

E[Ẋ f(Ẋ)] = β E[Ẋ²] = S0/2.    (3.7.16)

Because

E[Ẋ f(Ẋ)] = E[Ẋ · β (Ẋ + ε Ẋ³)] = β { E[Ẋ²] + ε E[Ẋ⁴] } = β { E[Ẋ²] + ε · 3 E[Ẋ²] E[Ẋ²] },    (3.7.17)

in which equation (3.7.5) is used. Substituting equation (3.7.14) into (3.7.17) gives

β [ ω0² σX² + ε · 3 (ω0² σX²)(ω0² σX²) ] = S0/2,    (3.7.18)

which yields the mean-square response

σX² = S0/(2βω0²) · 1/(1 + 3ε ω0² σX²) ≈ S0/(2βω0²) (1 − 3ε ω0² σX²) ≈ S0/(2βω0²) ( 1 − 3ε S0/(2β) ).    (3.7.19)

☞ Although this result has been obtained for a Gaussian white noise excitation, it can be expected to yield a reasonable approximate solution when the spectral density of the excitation varies slowly in the neighbourhood of ω0 and when the damping in the system is light.
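A minimal Python sketch of the implicit relation behind (3.7.19), solved by fixed-point iteration and compared with the first-order expansion; the parameter values are illustrative assumptions.

import numpy as np

S0, beta, w0, eps = 1.0, 0.4, 10.0, 0.02   # illustrative values (assumed)

sigma2 = S0 / (2 * beta * w0**2)           # linear (eps = 0) starting value
for _ in range(50):
    # sigma_X^2 = S0 / (2*beta*w0^2*(1 + 3*eps*w0^2*sigma_X^2)), cf. (3.7.18)-(3.7.19)
    sigma2 = S0 / (2 * beta * w0**2 * (1 + 3 * eps * w0**2 * sigma2))

sigma2_approx = S0 / (2 * beta * w0**2) * (1 - 3 * eps * S0 / (2 * beta))
print(sigma2, sigma2_approx)               # the two estimates should be close for small eps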

3.8 Appendix − Method of Residue

Theorem – Method of Residue

Suppose P(x) and Q(x) are polynomials that are real-valued on the real axis and for which the degree of Q(x) exceeds the degree of P(x) by 2 or more. If Q(x) ≠ 0 for all real x, then

∫_{−∞}^{∞} P(x)/Q(x) dx = 2πi Σ_U Res[ P(x)/Q(x); z_j ],    (3.8.1)

where the sum is taken over all poles of P(x)/Q(x) that lie in the upper half-plane U = { z : Im(z) > 0 }.

The complex frequency response function is given by equation (3.5.18). Hence,

I_mn = ∫_{−∞}^{∞} H_m*(ω) H_n(ω) dω
     = ∫_{−∞}^{∞} dω / { [ (ωm²−ω²) − i 2ζm ωm ω ] [ (ωn²−ω²) + i 2ζn ωn ω ] }
     = ∫_{−∞}^{∞} [ (ωm²−ω²) + i 2ζm ωm ω ] [ (ωn²−ω²) − i 2ζn ωn ω ] / { [ (ωm²−ω²)² + (2ζm ωm ω)² ] [ (ωn²−ω²)² + (2ζn ωn ω)² ] } dω
     = I_mn^R + i I_mn^I,    (3.8.2)

where

I_mn^R = ∫_{−∞}^{∞} [ (ωm²−ω²)(ωn²−ω²) + 4ζm ζn ωm ωn ω² ] / { [ (ωm²−ω²)² + (2ζm ωm ω)² ] [ (ωn²−ω²)² + (2ζn ωn ω)² ] } dω,    (3.8.3)

I_mn^I = ∫_{−∞}^{∞} 2ω [ ζm ωm (ωn²−ω²) − ζn ωn (ωm²−ω²) ] / { [ (ωm²−ω²)² + (2ζm ωm ω)² ] [ (ωn²−ω²)² + (2ζn ωn ω)² ] } dω.    (3.8.4)

Because

F(ω) = (ωm²−ω²)² + (2ζm ωm ω)² = ω⁴ − 2ωm²(1−2ζm²)ω² + ωm⁴,

the roots of F(ω) = 0, solved by treating the equation as a quadratic equation in ω², are

ω² = [ 2ωm²(1−2ζm²) ± √( 4ωm⁴(1−2ζm²)² − 4ωm⁴ ) ] / 2 = ωm² ( 1−2ζm² ± i 2ζm √(1−ζm²) )
   = ωm² ( cos θ ± i sin θ ),    cos θ = 1−2ζm²,    sin θ = 2ζm √(1−ζm²).

It is easy to evaluate that

cos²(θ/2) = (1+cos θ)/2 = [1 + (1−2ζm²)]/2 = 1−ζm²,
sin²(θ/2) = (1−cos θ)/2 = [1 − (1−2ζm²)]/2 = ζm².

The roots of F(ω) = 0 are then given by

ω = ωm [ cos((θ+2Kπ)/2) ± i sin((θ+2Kπ)/2) ],    K = 0, 1,

K = 0:  ω = ωm ( cos(θ/2) ± i sin(θ/2) ) = ωm ( √(1−ζm²) ± i ζm ),
K = 1:  ω = ωm ( cos(θ/2+π) ± i sin(θ/2+π) ) = ωm ( −cos(θ/2) ∓ i sin(θ/2) ) = ωm ( −√(1−ζm²) ∓ i ζm ).

Hence, for both I_mn^R and I_mn^I, four of the total eight poles lie in the upper half-plane, given by

ω = ωm ( ±√(1−ζm²) + i ζm ),    ωn ( ±√(1−ζn²) + i ζn ).    (3.8.5)

Using equation (3.8.1) and a symbolic computation software package, such as Maple, the integrals I_mn^R and I_mn^I can be easily evaluated to yield

I_mn^I = 0,    (3.8.6)

I_mn^R = 4π(ωm ζm + ωn ζn) / [ (ωm⁴ − 2ωm²ωn² + ωn⁴) + 4ωm ωn ζm ζn (ωm² + ωn²) + 4ωm²ωn²(ζm² + ζn²) ]

       = (4π/ωm³) (ζm + rζn) / [ (1−r²)² + 4ζm ζn r(1+r²) + 4(ζm² + ζn²) r² ],    r = ωn/ωm.    (3.8.7)

Hence, equation (3.8.2) becomes

∫_{−∞}^{∞} H_m*(ω) H_n(ω) dω = I_mn = I_mn^R + i I_mn^I
  = (4π/ωm³) (ζm + rζn) / [ (1−r²)² + 4ζm ζn r(1+r²) + 4(ζm² + ζn²) r² ]
  = (4π/ωm³) · 1/( 8√(ζm ζn) r^{3/2} ) · 8√(ζm ζn)(ζm + rζn) r^{3/2} / [ (1−r²)² + 4ζm ζn r(1+r²) + 4(ζm² + ζn²) r² ],

∴ ∫_{−∞}^{∞} H_m*(ω) H_n(ω) dω = I_mn = π ρ_mn / ( 2 √(ζm ζn ωm³ ωn³) ),    (3.8.8)

where

ρ_mn = 8√(ζm ζn) (ζm + rζn) r^{3/2} / [ (1−r²)² + 4ζm ζn r(1+r²) + 4(ζm² + ζn²) r² ],    r = ωn/ωm.    (3.8.9)

For the special case when m = n, ωm = ωn, ζm = ζn, r = 1, one has

ρ_mm = [ 8√(ζm ζn)(ζm + rζn) r^{3/2} / ( (1−r²)² + 4ζm ζn r(1+r²) + 4(ζm² + ζn²) r² ) ]_{m=n, r=1} = 1,    (3.8.10)

and equation (3.8.8) reduces to

∫_{−∞}^{∞} |H_m(ω)|² dω = I_mm = π / (2ζm ωm³).    (3.8.11)
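Before the symbolic derivation below, a quick numerical check of (3.8.8)–(3.8.9) in Python; the frequencies and damping ratios are illustrative, and the infinite integral is truncated at ±40ωm, which is ample since the integrand decays like 1/ω⁴.

import numpy as np
from scipy.integrate import quad

wm, wn, zm, zn = 10.0, 12.0, 0.05, 0.02    # illustrative values (assumed)

def H(w, wk, zk):
    return 1.0 / ((wk**2 - w**2) + 1j * 2.0 * zk * wk * w)   # equation (3.5.18)

f_real = lambda w: (np.conj(H(w, wm, zm)) * H(w, wn, zn)).real
I_num, _ = quad(f_real, -40 * wm, 40 * wm, points=[-wn, -wm, wm, wn], limit=400)

r = wn / wm
rho = (8 * np.sqrt(zm * zn) * (zm + r * zn) * r**1.5
       / ((1 - r**2)**2 + 4 * zm * zn * r * (1 + r**2) + 4 * (zm**2 + zn**2) * r**2))
I_closed = np.pi * rho / (2 * np.sqrt(zm * zn * wm**3 * wn**3))   # equation (3.8.8)
print(I_num, I_closed)     # the two values should agree closely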

Maple Program

> restart:
The eight poles of the integrands of I_mn^I and I_mn^R (cf. equation (3.8.5) and the conjugate pairs):
> omega[m1]:=omega[m]*(sqrt(1-zeta[m]^2)+I*zeta[m]):
> omega[m2]:=omega[m]*(sqrt(1-zeta[m]^2)-I*zeta[m]):
> omega[m3]:=omega[m]*(-sqrt(1-zeta[m]^2)+I*zeta[m]):
> omega[m4]:=omega[m]*(-sqrt(1-zeta[m]^2)-I*zeta[m]):
> omega[n1]:=omega[n]*(sqrt(1-zeta[n]^2)+I*zeta[n]):
> omega[n2]:=omega[n]*(sqrt(1-zeta[n]^2)-I*zeta[n]):
> omega[n3]:=omega[n]*(-sqrt(1-zeta[n]^2)+I*zeta[n]):
> omega[n4]:=omega[n]*(-sqrt(1-zeta[n]^2)-I*zeta[n]):
Q is the common denominator of the integrands of I_mn^I and I_mn^R, expressed in terms of its poles; it equals ((omega[m]^2-w^2)^2+(2*zeta[m]*omega[m]*w)^2)*((omega[n]^2-w^2)^2+(2*zeta[n]*omega[n]*w)^2).
> Q:=(w-omega[m1])*(w-omega[m2])*(w-omega[m3])*(w-omega[m4])
     *(w-omega[n1])*(w-omega[n2])*(w-omega[n3])*(w-omega[n4]):
IP is the numerator P_mn^I of the integrand of I_mn^I, equation (3.8.4); FI is the integrand itself.
> IP:=2*w*(zeta[m]*omega[m]*(omega[n]^2-w^2)-zeta[n]*omega[n]*(omega[m]^2-w^2)):
> FI:=IP/Q:
RP is the numerator P_mn^R of the integrand of I_mn^R, equation (3.8.3); FR is the integrand itself.
> RP:=(omega[m]^2-w^2)*(omega[n]^2-w^2)+4*zeta[m]*zeta[n]*omega[m]*omega[n]*w^2:
> FR:=RP/Q:
Evaluate the residues of FI and FR at the four poles in the upper half-plane and apply the residue theorem (3.8.1) to determine I_mn^I and I_mn^R.
> RIm1:=residue(FI,w=omega[m1]): RIm3:=residue(FI,w=omega[m3]):
> RIn1:=residue(FI,w=omega[n1]): RIn3:=residue(FI,w=omega[n3]):
> INTI:=simplify(2*Pi*I*(RIm1+RIm3+RIn1+RIn3));
                                INTI := 0
> RRm1:=residue(FR,w=omega[m1]): RRm3:=residue(FR,w=omega[m3]):
> RRn1:=residue(FR,w=omega[n1]): RRn3:=residue(FR,w=omega[n3]):
> INTR:=simplify(2*Pi*I*(RRm1+RRm3+RRn1+RRn3)):

☞ When a function is in the form of one complex expression divided by another complex expression, it is difficult to simplify the function as a whole. The numerator and denominator of the function must be simplified separately.

Extract the numerator of I_mn^R, expand, and factorize the result, called NINT; do the same for the denominator, called DINT.
> NINT:=factor(expand(numer(INTR))):
> DINT:=factor(expand(denom(INTR))):
> INTR:=NINT/DINT;
A simple expression of I_mn^R is obtained:

INTR := 4π(ωm ζm + ωn ζn) / [ (ωm⁴ − 2ωm²ωn² + ωn⁴) + 4ωm ωn ζm ζn (ωm² + ωn²) + 4ωm²ωn²(ζm² + ζn²) ]
❧ ❧
In this chapter, fundamentals of random processes and structural dynamics that are
the theoretical foundations for a number of topics in this book were presented.
❧ A random process is described by its various probability distribution functions. If
the probability distribution functions are invariant under a change of time origin,
the random process is stationary. A random process can be practically modelled as
stationary when the physical factors influencing it do not change with time.

❧ For many practical applications, a random process can be satisfactorily described


by its moments, in particular the first two moments (mean and mean-square value
or variance). A random process is ergodic if its time average is equal to its ensemble
average; for a random process to be ergodic, stationarity is a necessary condition.
❧ The autocorrelation function is one of the most important averages in random vibration. The power spectral density (PSD) is the Fourier transform of the autocorrelation function; the PSD is very important in characterizing the frequency content of a stationary random process. A random process can be satisfactorily modelled as a white noise process if it has a nearly constant PSD over the frequency range of interest. For a transient nonstationary random process, the Fourier amplitude spectrum (FAS) is more appropriate for characterizing its frequency content.
❧ There is a significant gap between the theory of random vibration and the practice in
earthquake engineering. In random vibration, responses of a system are determined
in terms of averages, such as mean responses and mean-square responses. On the
other hand, in earthquake engineering, responses of a structure are determined
in terms of peak values, such as peak accelerations. The peak value of a random
process is related to its mean-square value through the peak factor. However, there
is no exact analytical expression available for the peak factor. Various assumptions,
approximations, and statistical approaches have been applied to obtain approximate
or empirical expressions.
❧ Modal analysis is important in studying the responses of multiple DOF systems. The
strong-motion portion of an earthquake ground motion time-history can be rea-
sonably modelled as a stationary, Gaussian process with a wide-band PSD; hence,
the theory of random vibration can be applied to study the responses of a multiple
DOF system under earthquake ground motion excitation. CQC and SRSS combi-
nation rules are developed based on results from random vibration to formulate the
method of seismic response spectral analysis.
❧ The maximum responses of a multiple DOF structure can be determined using
the time-history method or the response spectrum method. For a given ground
motion time-history, the time-history method gives the numerically exact response.
However, depending on how the spectrum-compatible time-histories are generated,
there may be large uncertainties in the generated time-histories (see Chapter 6); as
a result, there may be large variabilities in responses obtained from the time-history
method. The response spectrum method does not give the exact peak responses;
however, it gives sufficiently accurate results for practical engineering applications.

❧ There are many engineering applications in which nonlinearity has to be considered.


The method of equivalent linearization is a popular approach to replace a nonlinear
system by an equivalent linear system (equivalent in the sense that the error of
approximation is minimized).
