Incremental Dynamic Analysis For Estimating Seismic Performance Sensitivity and Uncertainty
SUMMARY
Incremental dynamic analysis (IDA) is presented as a powerful tool to evaluate the variability in the
seismic demand and capacity of non-deterministic structural models, building upon existing methodologies
of Monte Carlo simulation and approximate moment-estimation. A nine-story steel moment-resisting
frame is used as a testbed, employing parameterized moment-rotation relationships with non-deterministic
quadrilinear backbones for the beam plastic-hinges. The uncertain properties of the backbones include the
yield moment, the post-yield hardening ratio, the end-of-hardening rotation, the slope of the descending
branch, the residual moment capacity and the ultimate rotation reached. IDA is employed to accurately
assess the seismic performance of the model for any combination of the parameters by performing multiple
nonlinear time-history analyses for a suite of ground motion records. Sensitivity analyses on both the
IDA and the static pushover level reveal the yield moment and the two rotational-ductility parameters
to be the most influential for the frame behavior. To propagate the parametric uncertainty to the actual
seismic performance, we employ (a) Monte Carlo simulation with Latin hypercube sampling, (b) point-
estimate and (c) first-order second-moment techniques, thus offering competing methods that represent
different compromises between speed and accuracy. The final results provide firm ground for challenging
current assumptions in seismic guidelines on using a median-parameter model to estimate the median
seismic performance and employing the well-known square-root-sum-of-squares rule to combine aleatory
randomness and epistemic uncertainty. Copyright © 2009 John Wiley & Sons, Ltd.
∗Correspondence to: Dimitrios Vamvatsikos, 75 Kallipoleos Str, P.O. Box 20537, Nicosia 1678, Cyprus.
†E-mail: [email protected]
‡Based on short papers presented at the 1st European Conference on Earthquake Engineering and Seismology, Geneva, 2006 and at the 14th World Conference on Earthquake Engineering, Beijing, 2008.
1. INTRODUCTION
The accurate estimation of the seismic demand and capacity of structures stands at the core of
performance-based earthquake engineering. Still, seismic performance is heavily influenced by
both aleatory randomness, e.g. due to natural ground motion record variability, and epistemic
uncertainty, owing to modeling assumptions, omissions or errors. Ignoring their effect means that
structures are being designed and built without solid data or even adequate understanding of the
expected range of behavior. While guidelines have emerged (e.g. SAC/FEMA [1]) that recognize
the need for assessing epistemic uncertainties by explicitly including them in estimating seismic
performance, this role is usually left to ad hoc safety factors, or, at best, standardized dispersion
values that often serve as placeholders. Hence, if one wanted to actually compute the variability
in the seismic behavior due to parameter uncertainty, the question still remains: What would be a
good way to do so?
As a partial answer to this issue, there have been several attempts to isolate some useful cases
and gain insight into the effect of a model's properties on its estimated seismic performance.
For example, Luco and Cornell [2, 3] found that random connection fractures have a detrimental
effect on the dynamic response of steel moment-resisting frames while Foutch and Shi [4] used
different hysteretic models to show the effect of hysteresis of moment connections on global
demand. Perhaps the most exhaustive study on the influence of model parameters on global collapse
capacity has been performed by Ibarra [5] who studied the dynamic instability of oscillators and
idealized single-bay frames with beam–column connections having non-trivial backbones including
both cyclic and in-cycle degradation. Finally, Porter et al. [6] have discussed the sensitivity of loss
estimation to structural modeling parameters in order to discern the most influential variables.
Such studies have offered a useful look into the sensitivity of structures to uncertain parameters.
Yet, only Ibarra [5] actually proposes a method to propagate the uncertainty from model parameters
to structural behavior using first-order second-moment (FOSM) principles verified through Monte
Carlo to evaluate the collapse capacity uncertainty. Lee and Mosalam [7] have also used FOSM
to determine the response uncertainty of a reinforced-concrete (RC) shear wall structure to several
modeling parameters. However, in our opinion, two of the most important contributions in this
field have come from parallel research efforts that proposed the use of Monte Carlo simulation [8]
within the framework of IDA [9] to incorporate parameter uncertainty. Liel et al. [10] used IDA
with Monte Carlo and FOSM coupled with a response surface approximation method to evaluate
the collapse uncertainty of an RC building. On a similar track, Dolsek [11] has proposed using
Monte Carlo with efficient Latin Hypercube Sampling (LHS) on IDA to achieve the same goal.
While both methods were only applied on RC frame structures and only discussed the estimation
of uncertainty for collapse or near-collapse limit-states, they are fairly generalizable and applicable
to a variety of building types and limit states.
Working independently of the above research teams, we have also come to similar conclusions on
the use of Monte Carlo and simpler moment-estimation techniques to estimate seismic performance
uncertainty. Thus, in light of existing research, we aim to present our own view on the use of
IDA to offer a comprehensive solution to the issue of model-parameter uncertainty, while drawing
useful conclusions on the effects of uncertainties along the way. Since IDA is a resource-intensive
method, we will attempt to tap into its power economically through computation-saving techniques.
Efficient Monte Carlo simulation and moment-estimation techniques will also be employed to
propagate the uncertainty from parameters to the IDA-evaluated seismic performance offering
different compromises in speed and accuracy. Using a well-studied steel moment-resisting frame
as a testbed and focusing on the plastic-hinge modeling uncertainties, we will nevertheless present
a general methodology that is applicable to a wide range of structures.
2. MODEL DESCRIPTION
The structure selected is a nine-story steel moment-resisting frame with a single-story base-
ment (Figure 1) that has been designed for Los Angeles, following the 1997 NEHRP (National
Earthquake Hazard Reduction Program) provisions [12]. A centerline model with nonlinear beam–
column connections was formed using OpenSees [13]. It allows for plastic hinge formation at the
beam ends while the columns are assumed to remain elastic. This has been a conscious choice on
our part: despite the rules of capacity design, there is always the possibility of a column yielding
earlier than the connecting beams, an issue aggravated by uncertain yield strengths. Preliminary
tests found this effect to be minor for this nine-story structure, especially when high correlation
was assumed between the steel strengths of beams and columns.
The structural model also includes P-Δ effects, while the internal gravity frames have been
directly incorporated (Figure 1). The fundamental period of the reference frame is T1 = 2.35 s, with the corresponding mode accounting for approximately 84% of the total mass. Essentially this is a first-mode dominated structure that still allows for significant sensitivity to higher modes. Previous studies (e.g. Fragiadakis
et al. [14]) have identified the yield strength of the hinges as the most influential parameter in a steel
frame, compared with story mass and stiffness, for displacement-related quantities. While stiffness
might prove to be a more important parameter for floor accelerations and contents’ damage, we
will only focus on drift-sensitive structural and non-structural damage. Thus, studying the influence
of the beam-hinge properties on the structural performance of the building will be our goal.
The beam-hinges are modeled as rotational springs with a quadrilinear moment-rotation backbone (Figure 2) that is symmetric for positive and negative rotations [5]. The backbone hardens after a yield moment of a_My times the nominal, having a non-negative slope of a_h up to a normalized rotation (or rotational ductility) μ_c where the negative stiffness segment starts. The drop, at a slope of a_c, is arrested by the residual plateau appearing at normalized height r that abruptly ends at the ultimate rotational ductility μ_u. The spring employs a moderately pinching hysteresis without any cyclic degradation, as shown in Figure 3.
This complex model is versatile enough to simulate the behavior of numerous moment-connections, from ductile down to outright fracturing. A 'base' hinge was defined using the central parameter values adopted for the sensitivity and uncertainty analyses that follow.
Figure 2. The moment-rotation beam-hinge backbone to be investigated and its six controlling parameters (normalized moment M/M_yield versus normalized rotation θ/θ_yield; elastic, non-negative hardening, negative and residual-plateau segments, with the plateau at height r ending at the ultimate ductility μ_u).

[Figure 3: moderately pinching cyclic response of the beam-hinge spring plotted against its backbone, in normalized moment M/M_yield versus normalized rotation θ/θ_yield axes.]
3. PERFORMANCE EVALUATION
Incremental dynamic analysis (IDA, Vamvatsikos and Cornell [9]) is a powerful analysis method
that can provide accurate estimates of the complete range of the model’s response, from elastic
to yielding, then to nonlinear inelastic and finally to global dynamic instability. To perform IDA
we will use a suite of thirty ordinary ground motion records (Table I) representing a scenario
earthquake. These belong to a bin of relatively large magnitudes of 6.5–6.9 and moderate distances,
all recorded on firm soil and bearing no marks of directivity. IDA involves performing a series
of nonlinear dynamic analyses for each record by scaling it to multiple levels of intensity. Each
dynamic analysis is characterized by two scalars, an intensity measure (IM), which represents
the scaling factor of the record, and an engineering demand parameter (EDP) (according to
current Pacific Earthquake Engineering Research Center terminology), which monitors the struc-
tural response of the model.
For moderate-period structures with no near-fault activity, an appropriate choice for the IM is
the 5%-damped first-mode spectral acceleration Sa (T1 , 5%). While this selection is made easier by
the fact that we chose to vary strengths only, thus maintaining a constant first-mode period, it can
nevertheless prove useful beyond this limited example. Even under stiffness and mass uncertainties,
the fundamental period of the base-case frame, T1,base, can still serve as a reliable reference point, as shown, for example, by the results of Vamvatsikos and Cornell [15]. Therefore, Sa(T1,base, 5%)
can be recommended for general use, avoiding simpler but less efficient IMs, such as the peak
ground acceleration [9]. Regarding the building’s response, as we have previously discussed, our
focus is on deformation-sensitive structural and non-structural damage. Therefore, the maximum
interstory drift, θ_max, of the structure is a good candidate for the EDP.
It should be noted that recent studies have shown that simply using Sa (T1 , 5%) as the IM will
generate biased results when using large scale factors [16]. Unfortunately, the limitations of the
existing record catalogue do not allow us to refrain from scaling, which was the basis of IDA after
all. As observed by Luco and Bazzurro [16], these differences are mainly an effect of spectral shape,
something that can be corrected e.g. by considering improved scalar or vector IMs that include
spectral shape parameters, as proposed by Vamvatsikos and Cornell [15], Luco and Cornell [17]
and Baker and Cornell [18]. Nevertheless, we will maintain the use of Sa (T1 , 5%) for the benefit
of the readers, since it makes for better understanding of the IDA curves. Renormalizing to another
IM is actually trivial and only a matter of postprocessing [15]. Furthermore, sufficiency and bias
are actually most important when combining IDA results with hazard information. Since we will
not be engaging in any such calculations, we are safe to proceed with running the analysis with
Sa (T1 , 5%).
Using the hunt&fill algorithm [19] allows capturing each IDA curve with only 12 runs per
record. Appropriate interpolation techniques allow the generation of a continuous IDA curve in the
IM-EDP plane from the discrete points obtained by the dynamic analyses. Such results are in turn
summarized to produce the median and the 16%, 84% IDA curves that can accurately characterize
the distribution of the seismic demand and capacity of the structure for frequent or rarer ground
motion intensities.
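As a minimal sketch of the kind of postprocessing implied here, the following Python snippet interpolates each record's discrete (drift, intensity) points onto a common drift grid and extracts the 16, 50 and 84% fractile curves; the toy data, the linear interpolation and the flat-lining beyond the last point are illustrative assumptions, not the spline-based scheme used in the original study.

```python
import numpy as np

def fractile_ida_curves(ida_points, edp_grid, fractiles=(16, 50, 84)):
    """Summarize per-record IDA points into fractile IM-given-EDP curves.

    ida_points : list of (edp, im) array pairs, one per ground motion record,
                 each sorted by increasing EDP (e.g. maximum interstory drift).
    edp_grid   : 1-D array of EDP values at which to summarize.
    Returns a dict mapping each fractile (in %) to an IM array over edp_grid.
    """
    im_matrix = []
    for edp, im in ida_points:
        # Linear interpolation of IM given EDP; splines would be smoother.
        # Beyond the last converged point the curve is simply flat-lined.
        im_matrix.append(np.interp(edp_grid, edp, im, right=im[-1]))
    im_matrix = np.vstack(im_matrix)            # shape: (records, grid points)
    return {f: np.percentile(im_matrix, f, axis=0) for f in fractiles}

# Toy data standing in for 30 records: a handful of (drift, Sa) points each.
rng = np.random.default_rng(0)
records = [(np.linspace(0.005, 0.10, 12), np.sort(rng.uniform(0.1, 1.2, 12)))
           for _ in range(30)]
curves = fractile_ida_curves(records, np.linspace(0.005, 0.10, 50))
print(curves[50][:5])     # median Sa(T1,5%) at the first few drift levels
```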
Having such a powerful, albeit resource-intensive, tool at our disposal, we are left with the
selection of the alternate models to evaluate. There is obviously an inexhaustible number of
variations one could try with the six parameters of the adopted plastic hinge, not including the
possibility of having different hinge models in each story, or even for each individual connection.
In the course of this study we chose to vary all six backbone parameters, namely a_h, μ_c, a_c, r, μ_u and a_My, independently from each other but uniformly throughout the structure. Thus, a perfect,
positive spatial correlation of the beam hinges has been adopted: All beam–column connections in
the model have the same normalized properties, a deliberate choice that is expected to substantially
increase the parameters’ influence on the results.
Contrary to our assessment above, it could be argued that non-perfect correlation of the hinges in
the structure might cause strength irregularities that can lead to a higher variability in the response.
Still, for strong-column, weak-beam moment-frames with rigid diaphragms it is the combined
response of all hinges within a story that defines its behavior, not the individual hinge strength. Therefore,
such irregularities will not arise unless there is high positive correlation within the hinges of each
story but no, or negative, correlation from story to story, an unrealistic assumption in general.
Weak-column, strong-beam designs can further magnify such effects. Since the above conditions
do not apply in our nine-story structure, it makes sense to expect relatively high variabilities as an
outcome of our assumptions.
In the following sections, we evaluate the effect of the six parameters, first by varying them
individually, one at a time, to perform sensitivity analysis and then concurrently for uncertainty
analysis.
4. SENSITIVITY ANALYSIS
To evaluate the behavior of our model we performed a sensitivity study by perturbing each of
the six backbone parameters independently of each other and only one at a time, pushing each
random parameter above and below its central, base-case, value. The sensitivity of each parameter
is evaluated using both static pushover and IDA for the following pairs of modifications: a_My = {0.8, 1.2}, a_h = {1%, 20%}, μ_c = {2, 4}, a_c = {−100%, −25%}, r = {20%, 80%} and μ_u = {4, 8}.
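For concreteness, a one-at-a-time perturbation study of this kind can be organized as in the short sketch below; the base-case a_c value and the run_analysis stub are placeholders, since the actual analyses were performed with OpenSees.

```python
# One-at-a-time sensitivity: perturb each backbone parameter while keeping the
# others at their base-case values. run_analysis() is a stub for the actual
# OpenSees static pushover or IDA of the resulting frame model.
base = {'aMy': 1.0, 'ah': 0.10, 'mu_c': 3.0, 'ac': -0.50,  # ac value assumed
        'r': 0.50, 'mu_u': 6.0}
perturbations = {'aMy': (0.8, 1.2), 'ah': (0.01, 0.20), 'mu_c': (2.0, 4.0),
                 'ac': (-1.00, -0.25), 'r': (0.20, 0.80), 'mu_u': (4.0, 8.0)}

def run_analysis(params):
    """Placeholder returning a scalar response metric for a hinge definition."""
    return sum(params.values())

results = {}
for name, (low, high) in perturbations.items():
    for value in (low, high):
        model = dict(base, **{name: value})   # copy the base case, swap one value
        results[(name, value)] = run_analysis(model)
print(len(results), 'perturbed analyses')     # 12 perturbed runs, base case aside
```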
Figure 4. Sensitivity of the SPO curves to the beam-hinge backbone parameters: (a) a_My influences global strength and roof drift capacity; (b) a_h modifies global strength; (c) increased μ_c leads to higher ductilities; (d) a_c is moderately influential; (e) r is of minor importance; and (f) decreasing μ_u reduces the system ductility.
Decreasing μ_c has the inverse effect, forcing the SPO to lose strength earlier, both in force and deformation terms. Even when fracturing occurs, the changes in μ_c still seem to help, even if only by a little: the three curves follow converging descents that end in relatively close values of roof drift. Unsurprisingly, both a_h and μ_c expend most of their influence in the pre-fracturing nonlinear range, as they both control the hardening, positive stiffness segment but have little, if any, control on the segments that follow. Figure 4(d) shows a rather minor influence of the negative slope a_c on the system performance. Reducing the descent slope to 25% seems to help the post-fracture performance, but only marginally. Making it steeper seems to have an even smaller effect. The reason is obviously the relatively high base-case value of r = 0.5, which means that no matter how fast the connection loses strength, it will maintain a healthy 50% residual strength that will always boost its performance. Had we used a lower central r-value, the results would probably have been quite different.
The final two parameters, namely r and μ_u, only influence the hinge behavior beyond the negative drop. Therefore, in Figures 4(e) and (f) we see no change at all in the SPO curves before the loss of strength occurs. Afterwards, it is obvious that increasing the height of the residual plateau r is less useful than increasing its length μ_u before the ultimate failure. The former only provides some marginal benefits in the post-fracture area, while the latter is the best way to extend the SPO curve of the building to roof drifts higher than 7%, while maintaining somewhat higher strengths than the base case. Similarly, reducing r to 20% makes only a small difference, while decreasing μ_u seems to force an earlier collapse of the structure: the drop to zero strength now appears at about 4% versus the 5% of the base case.
Figure 5. Sensitivity of the median IDA curves to the beam-hinge backbone parameters: (a) Overwhelming effect of a_My; (b) only minor influence by a_h; (c) μ_c largely determines capacity; (d) a_c is influential but not as much as expected; (e) r is of minor importance; and (f) decreasing μ_u is detrimental.
The lower-than-expected influence of a_c is a direct result of the relatively high default residual plateau: at r = 50% it tends to trim down the effect of the negative drop, thus reducing its importance.
Figure 5(e) shows the effect of r, where it appears that for a given negative drop and a relatively short plateau (μ_u = 6), the residual moment of the plastic hinge has little influence on the predicted
performance of the LA9 structure. However, different default settings on a_c and μ_u can easily change such results; therefore, no general conclusions should be drawn just yet. On the other hand, for μ_u there can be no objection that the median IDAs are greatly influenced by its reduction but not significantly by its increase (Figure 5(f)). A 33% decrease in ultimate ductility cost the structure a 40% reduction in collapse capacity, while an equal improvement made no difference statistically. It seems that the strength loss caused by a brittle and fracturing connection will dominate the response of the building. On the other hand, even a substantial increase in the rotational ductility does not make much difference for this building, perhaps because of other mechanisms or effects, e.g. P-Δ, taking the lead to cause collapse. In other words, even letting μ_u go to infinity, as is typically assumed by most existing models, we would not see much improvement, as the building has already benefited from ultimate rotational ductility as much as it could.
5. UNCERTAINTY ANALYSIS
In order to evaluate the effect of uncertainties on the seismic performance of the structure we
chose to vary the base-case beam-hinge backbone by assigning probabilistic distributions to its six
parameters. Since existing literature does not provide adequate guidance on the properties of all six
variables, we chose to arbitrarily define them. Thus, each parameter is assumed to be independently
normally distributed with a mean equal to its default value and a coefficient of variation (c.o.v)
equal to 0.2 for a_My (due to its overwhelming effect) and 0.4 for the remaining five parameters. Since the normal distribution assigns non-zero probabilities even to physically impossible values of the parameters, e.g. r < 0 or a_h > 1, we have truncated the distribution of each parameter within a reasonable minimum and maximum that satisfies the physical limits. We chose to do so by setting hard limits at 1.5 standard deviations away from the central value, thus cutting off only the most extreme cases, as shown in Table II. All distributions were appropriately rescaled to avoid the
concentration of high probabilities at the cutoff points [20].
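A possible scipy-based setup of such truncated normal distributions is sketched below; the a_c mean is assumed for illustration, since only the other base-case values are quoted in the text, and the ±1.5 standard-deviation truncation follows Table II.

```python
from scipy.stats import truncnorm

def truncated_normal(mean, cov, n_sigma=1.5):
    """Normal distribution truncated at mean +/- n_sigma standard deviations."""
    sigma = abs(mean) * cov
    return truncnorm(-n_sigma, n_sigma, loc=mean, scale=sigma)

# Base-case means with c.o.v. of 0.2 for aMy and 0.4 for the rest;
# the a_c mean of -0.5 is assumed here purely for illustration.
params = {
    'aMy':  truncated_normal(1.00, 0.2),
    'ah':   truncated_normal(0.10, 0.4),
    'mu_c': truncated_normal(3.00, 0.4),
    'ac':   truncated_normal(-0.50, 0.4),
    'r':    truncated_normal(0.50, 0.4),
    'mu_u': truncated_normal(6.00, 0.4),
}
print(params['aMy'].interval(0.90))   # central 90% range of the yield factor
```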
Once a sufficiently large number of structures has been sampled, we can reliably estimate the full distribution of the seismic performance of the structure.
Monte Carlo simulation can be further improved by replacing the classic random sampling of
the population with LHS (McKay et al. [21]). Similar conclusions have been reached earlier by
Dolsek [11], who has also chosen this route to handle the parameter uncertainties within IDA.
This makes absolute sense, as LHS is a special case of stratified sampling that allows efficient estimation of the quantity of interest by reducing the variance of classic Monte Carlo. While random sampling produces standard errors that decline with √N, the error in LHS goes down much faster, approaching a rate of √(N³) for linear functions [22]. In other words, we can reduce the number of simulations needed to achieve the desired confidence in our results by a factor of N² at best. This might seem trivial, especially since most people might think of buildings as highly
nonlinear creatures. Actually, buildings are only mildly nonlinear in functional terms, especially
compared with physical processes (e.g. weather forecasting), since most nonlinearity is isolated
in relatively few elements of the model. This is actually one of the reasons why simple elastic
analysis or modal combination rules still remain useful in structural analysis. Thus, LHS is ideally
suited to reducing the dispersion of Monte Carlo simulation on nonlinear structures.
Unfortunately, the nature of LHS does not allow us to determine a priori the appropriate
sample size N to achieve a certain confidence level [22]. Still, the use of a relatively high N that
is substantially larger than the number of parameters will always result in reasonably accurate
estimates for practical purposes. The optimal N to use is obviously a function of the number of
random variables and their influence on the response; this remains a subject of further research.
In our case, Monte Carlo with LHS was performed for N = 200 realizations of the frame, a relatively
high number (compared with, e.g. Dolsek [11]) that was chosen to allow pinpoint accuracy in
our estimates. To further improve the quality of our calculations, we employed the Iman and
Conover [23] algorithm to reduce any spurious correlation appearing between the samples.
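Combining the distributions above with a Latin hypercube design could look like the following sketch, which maps a unit-hypercube LHS sample through each truncated-normal inverse CDF; the Iman–Conover spurious-correlation reduction used in the study is omitted here for brevity, and the a_c mean is again an assumed value.

```python
import numpy as np
from scipy.stats import qmc, truncnorm

names = ['aMy', 'ah', 'mu_c', 'ac', 'r', 'mu_u']
means = dict(aMy=1.0, ah=0.10, mu_c=3.0, ac=-0.50, r=0.50, mu_u=6.0)  # ac assumed
covs = dict(aMy=0.2, ah=0.4, mu_c=0.4, ac=0.4, r=0.4, mu_u=0.4)
N = 200

# Unit-hypercube Latin hypercube design: one stratum per sample and dimension.
u = qmc.LatinHypercube(d=len(names), seed=1).random(N)

# Map the uniform scores through each truncated-normal inverse CDF (ppf).
samples = np.column_stack([
    truncnorm(-1.5, 1.5, loc=means[k], scale=abs(means[k]) * covs[k]).ppf(u[:, i])
    for i, k in enumerate(names)])
print(samples.shape)   # (200, 6): one beam-hinge realization per sampled frame
```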
To provide some insight on the range of models generated, their SPO curves were evaluated and
plotted in Figure 6. Therein, the flexibility of our model becomes apparent: the sample ranges from
ultra-ductile systems that continue to gain strength up to 4.5% roof drift down to brittle frames that
rapidly lose strength after only 1.5% roof drift. The maximum and minimum strengths are equally
impressive, varying within 7000–15 000 kN. The 16, 50, 84% fractile pushover curves of base shear
given θ_roof also appear in Figure 6, showing a coefficient of variation that rapidly increases with θ_roof: starting from zero, in the elastic range, it goes almost to 100% at θ_roof = 4%, being dominated
by the decreasing value of base shear and the relatively constant standard deviation. Clearly, the
wide distribution of the beam-hinge parameters has produced a very diverse sample of structures.
As an early taste of discussions to follow, Figure 7 shows a comparison of the mean pushover
curve versus the base-case pushover, i.e. a comparison of the actual mean response versus the
response of the mean model. Considering the excellent accuracy offered by the N = 200 samples,
there is obviously a clear violation of the typical first-order assumption: the mean response is
not the same as the response of the mean structure. On the other hand, the median pushover curve is much closer to the base case, although some differences are still there. Nevertheless,
for engineering purposes, one can still argue that the differences shown might not be signifi-
cant. It remains to be seen whether such observations in the pushover space actually translate
to similar conclusions in the IDA results.
Thus, by performing IDA on each of the N samples we have obtained 30× N = 6000 IDA
curves and the N = 200 corresponding median IDAs shown in Figure 8. The variability in the
results is apparent, even within just the medians: showing a similar spread to the pushover results
Figure 6. 200 static pushover curves shown against the 16, 50, 84% fractile curves of base shear given θ_roof.
Figure 7. The mean and median pushover curves compared against the base case.
(Figure 6), there exist realizations of the nine-story structure that collapse for half the records at
Sa (T1 , 5%)-values as low as 0.3g, while others remain stable up to 1.5g, the average case having
a (median) capacity of about 0.9g. In order to draw useful conclusions from such results, we need
to quantify and simplify the probabilistic nature of the curves. Therefore, we have to estimate
their moments. We will attempt to do so first by taking advantage of the Monte Carlo results
(Figure 8) and then by attempting simpler approximations that bypass the cumbersome Monte
Carlo simulation.
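One simple way to obtain such moments from the N median IDA curves is sketched below: at each drift level the mean and standard deviation of the log median capacities give a central value and the epistemic dispersion; the input array is a random stand-in for the actual LHS results.

```python
import numpy as np

def summarize_median_idas(sa_medians):
    """Moments of N median-IDA capacities at a set of EDP levels.

    sa_medians : array of shape (N_models, N_edp_levels) with the median
                 Sa(T1,5%) of each sampled frame at each drift level.
    Returns a central (exp of mean-log) value and the dispersion per level.
    """
    log_sa = np.log(sa_medians)
    central = np.exp(log_sa.mean(axis=0))    # close to the median of medians
    beta_u = log_sa.std(axis=0, ddof=1)      # epistemic dispersion estimate
    return central, beta_u

# Random stand-in for the 200-model LHS sample at five drift levels.
rng = np.random.default_rng(2)
fake_sample = np.exp(rng.normal(np.log(0.9), 0.3, size=(200, 5)))
central, beta_u = summarize_median_idas(fake_sample)
print(np.round(central, 2), np.round(beta_u, 2))
```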
Figure 8. 200 median IDAs shown against their mean and ± one standard deviation curves of Sa(T1, 5%) given θ_max. The corresponding 16, 50 and 84% fractiles are practically coincident with the mean−sigma, mean and mean+sigma curves shown.
where 'med_j' is the median operator over all indices (structures) j. It is worthwhile to note that, since all sampled structures were analyzed with the same number of records (P = 30), if we took the mean-log of all N·P = 6000 single-record IDA curves, we would find the same results as with taking the mean of the N = 200 mean-log (or median) capacities. This is where the last approximate equality in Equation (1) comes from.
Then, assuming lognormality, the median Sa |max and the dispersion U can be estimated as:
Sa = exp(m ln Sa ) (8)
U = m ln Sa · Vln Sa (9)
For every limit-state, Equations (5)–(9) are used to estimate the median IDA curve and the
-dispersion values with only 2K +1 IDA simulations.
The gradient and curvature of f can be approximated with a finite difference approach, which is why we need 2K+1 simulations. The random parameters are set equal to their mean to obtain S_a^0 and then each random parameter is perturbed according to Equation (5). Thus, the first and the second derivative of f with respect to X_k become:

\frac{\partial f}{\partial X_k} \approx \frac{\ln S_a^{k+} - \ln S_a^{k-}}{2\,\Delta X_k}    (11)

\frac{\partial^2 f}{\partial X_k^2} \approx \frac{\ln S_a^{k+} - 2\ln S_a^0 + \ln S_a^{k-}}{(\Delta X_k)^2}    (12)

where \ln S_a^{k+} and \ln S_a^{k-} are the mean-log capacities obtained with X_k perturbed upwards and downwards by \Delta X_k, respectively. Truncating after the linear terms in Equation (10) provides a first-order approximation for the limit-state mean-log capacities, where essentially they are assumed to be equal to the base-case values \ln S_a^0. A more refined estimate is the mean-centered, second-order approximation, which according to Equation (10) can be estimated as:

m_{\ln S_a} \approx \ln S^0_{a,50\%} + \frac{1}{2}\sum_{k=1}^{K}\frac{\partial^2 f}{\partial X_k^2}\,\sigma_{X_k}^2    (13)
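A compact sketch of this 2K+1-run moment estimation appears below; the capacity function is a toy stand-in for the IDA-derived mean-log Sa capacity of a model with the given parameter vector, and the perturbation size is taken as one standard deviation per parameter.

```python
import numpy as np

def fosm_moments(ln_sa, means, sigmas):
    """Approximate mean and dispersion of ln Sa capacity from 2K+1 model runs.

    ln_sa  : callable returning the mean-log Sa capacity for a parameter vector.
    means  : length-K array of parameter means.
    sigmas : length-K array of parameter standard deviations (also used as the
             central-difference perturbation size).
    """
    means, sigmas = np.asarray(means, float), np.asarray(sigmas, float)
    f0 = ln_sa(means)                       # base-case (mean-parameter) run
    mean_ln, var_ln = f0, 0.0
    for k, sk in enumerate(sigmas):
        step = np.zeros_like(means)
        step[k] = sk
        f_plus, f_minus = ln_sa(means + step), ln_sa(means - step)
        grad = (f_plus - f_minus) / (2 * sk)            # first derivative
        curv = (f_plus - 2 * f0 + f_minus) / sk ** 2    # second derivative
        var_ln += (grad * sk) ** 2                      # first-order variance
        mean_ln += 0.5 * curv * sk ** 2                 # second-order mean term
    return mean_ln, np.sqrt(var_ln)                     # m_lnSa and beta_U

# Toy capacity function standing in for an IDA of the parameterized frame.
demo = lambda x: np.log(0.9) + 0.3 * (x[0] - 1.0) - 0.02 * (x[2] - 3.0) ** 2
print(fosm_moments(demo, [1.0, 0.1, 3.0], [0.2, 0.04, 1.2]))
```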
Figure 9. Median IDA curves estimated for 6 or 5 parameters using LHS, FOSM and PEM: (a) Six parameters, unsmoothed; (b) all parameters except μ_u, unsmoothed; (c) six parameters, smoothed; and (d) all parameters except μ_u, smoothed.
Interestingly enough, the LHS median of all sample medians shows a collapse capacity of 0.9g,
i.e. 0.1g lower than the base-case median of almost 1.0g. Given the dispersion shown and the
sample size used, this difference becomes statistically significant at the 95% confidence level. Thus,
considerable doubt is cast on the typical, first-order assumption that the median-parameter model
will produce the median seismic performance (e.g. [24]). Still, the 10% error found in this case
may only be of theoretical interest; it should not be considered important for practical applications.
Were this difference any larger, it could have far-reaching implications: almost all building analyses
executed by engineers utilize central values (means or medians) for the parameters, implicitly
aiming to estimate the ‘central value’ (mean or median) of response. This is only approximately
true for the structure studied.
In order to better understand the reasons behind this apparent disagreement with current engi-
neering intuition, we have repeated the simulation for a deterministic μ_u = 6 and only five random parameters using N = 120 samples. The resulting medians, appearing in Figure 9(b), show a much
improved picture that is now closer to what we might normally expect. While there is still a
statistically significant difference (at 95% confidence) of about 4% between the base-case and the
LHS median, the two curves are practically indistinguishable from each other. Even the PEM and
FOSM approximations perform much better and manage to provide a good estimate. Thinking
back to the extremely asymmetric influence of μ_u on the median IDAs (Figure 5(f)), it becomes apparent that the unbalanced response of the system to changes in μ_u is the reason why the overall
median has been dragged down and away from the response of the median-parameter model. Still,
this is not an isolated case by any means. Structures under earthquake loads can be visualized
as links in a chain: a series system of collapse mechanisms. The weakest link will always cause
collapse. As long as we keep sampling from the distributions of the parameters, the capacity of
some mechanisms will be increased and for others it will decrease. Similar conclusions have been
drawn for an RC frame by Liel et al. [10]. Thus, on average, we should always expect that the
overall median/mean capacity will be lower than the capacity of the median/mean model, even if
by a little. The number of asymmetric sensitivity plots in Figure 5 should provide ample warning.
As a postscript to the discussion of medians, it should be noted that the approximate FOSM and PEM-derived median Sa(T1, 5%)-given-θ_max results in Figures 9(a) and (b) cannot be thought of as IDA curves in the classical form; the reason is that, due to the inverse method of their construction, there are multiple values of θ_max-demand for a given value of Sa(T1, 5%). To rectify this issue,
one can simply apply a monotone smoother (e.g. Zhang [29]) and obtain the perfectly acceptable
curves shown in Figures 9(c) and (d). Such methods, though, need IDA-quality data to work with.
When performing a limited performance estimation using dynamic runs at a single IM-level, it
would be advisable to stay with the base case or, even better, use Monte Carlo with LHS for a
reasonable estimate of the median.
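If only monotonicity needs to be restored, a plain isotonic regression (a cruder alternative to the smoothing-spline monotone smoother cited above) already does the job, e.g. with scikit-learn; the noisy curve below is synthetic.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Synthetic approximate median IDA points: Sa should not decrease with drift,
# but a moment-based construction can introduce small reversals.
drift = np.linspace(0.005, 0.15, 30)
sa = np.minimum(0.9, 8.0 * drift) + np.random.default_rng(3).normal(0, 0.02, 30)

# Enforce a non-decreasing Sa-given-drift relation (isotonic least squares).
sa_monotone = IsotonicRegression(increasing=True).fit_transform(drift, sa)
print(np.round(sa_monotone[:5], 3))
```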
The estimates of β_U obtained by the three methods for six and five random parameters appear in Figures 10(a) and (b), respectively. In all cases the epistemic uncertainty smoothly rises from a zero value in the elastic range (reasonable, as all modifications to the plastic hinges are post-yield) and slowly increases up to its collapse-level value. For the six-parameter case this is estimated to be β_U = 0.30 by Monte Carlo, while both PEM and FOSM manage to get quite close, moderately overpredicting the dispersion at collapse as 0.36, showing errors of 20–25%. Obviously, both methods can provide a usable estimate of β_U using only 2×6+1 = 13 sample points, rather than
120–200 for LHS. That is almost an order-of-magnitude reduction in computations at the cost of
a reasonable error. Monte Carlo might become more competitive at lower sample sizes [11], but
as discussed, coming up with an appropriate number a priori can be difficult. A different strategy
aimed at reducing the computational load can be found in Fragiadakis and Vamvatsikos [30], using
static pushover analyses rather than IDA.
In terms of combining epistemic uncertainty and aleatory randomness, the epistemic uncertainty β_U is competing against the dispersion due to the record-to-record variability of Sa(T1, 5%) given the EDP θ_max. This dispersion is also important for the performance evaluation of structures and is similarly represented by its β-value [1], i.e. by the standard deviation of the natural logarithm of the IM given the EDP. This can be directly estimated from the sample IDA curves of the base case, as we will do, or it can be approximated from the corresponding fractile IDAs as

\beta_R \approx \frac{\ln S_a^{84\%} - \ln S_a^{16\%}}{2}

where S_a^{84\%} and S_a^{16\%} are the 84% and 16% values of Sa(T1, 5%)-capacity.
Both the epistemic uncertainty β_U and the aleatory randomness β_R contribute to the value of the total dispersion β_RU caused by the record-to-record randomness and the model uncertainty.
Figure 10. Values of β_U given θ_max estimated for 6 or 5 parameters using LHS, FOSM and PEM: (a) Six parameters and (b) all parameters except μ_u.
This is often used, e.g. in the SAC/FEMA framework, to assess performance in the presence of uncertainty [24]. Since we have available the full IDA data from the Monte Carlo simulation, we can estimate β_RU directly from the 200×30 single-record IDA curves (Equation (3)). Alternatively, SAC/FEMA [1] estimates β_RU as the square-root-sum-of-squares (SRSS) of β_R and β_U, an approximation which is usually taken for granted:

\beta_{RU}^{SRSS} = \sqrt{\beta_R^2 + \beta_U^2}    (17)
Such a value for every limit-state, or value of θ_max, serves as a useful comparison of the relative contribution of randomness and uncertainty to the total dispersion, as shown in Figure 11. Of
course, we should keep in mind that we are only showing a simple example that does not include
all possible sources of uncertainty. Therefore, any conclusions that we draw should be viewed in
light of these limitations.
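Numerically, the fractile-based dispersion estimate and the SRSS combination of Equation (17) amount to only a few lines; the capacity samples below are synthetic stand-ins for the base-case record-to-record results and the LHS model-to-model medians.

```python
import numpy as np

def dispersion_from_fractiles(sa_capacities):
    """Beta as half the log-distance between the 84% and 16% capacity fractiles."""
    s16, s84 = np.percentile(sa_capacities, [16, 84])
    return 0.5 * (np.log(s84) - np.log(s16))

rng = np.random.default_rng(4)
sa_records = np.exp(rng.normal(np.log(1.0), 0.31, 30))   # record-to-record sample
sa_models = np.exp(rng.normal(np.log(0.9), 0.29, 200))   # model-to-model medians

beta_r = dispersion_from_fractiles(sa_records)
beta_u = dispersion_from_fractiles(sa_models)
beta_ru_srss = np.hypot(beta_r, beta_u)                  # Equation (17)
print(round(beta_r, 3), round(beta_u, 3), round(beta_ru_srss, 3))
```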
Having said that, in our case the high β_R generally overshadows the lower β_U, despite the use of perfect spatial correlation and moderate-to-high parameter c.o.v. values of 0.2–0.4. The record-to-record variability is higher for any limit-state, ranging from 0.30 to 0.40. Still, β_U increases rapidly as the structure approaches global dynamic instability. At such higher limit-states the uncertainty caused by all parameters rises to such an extent that β_U can almost reach, in this example, the corresponding value of β_R: 0.29 for β_U versus 0.31 for β_R. Finally, we see that the SRSS estimate of β_RU is very close to its value estimated by LHS. On average, Equation (17) accurately predicts the actual β_RU. The error is in the order of 5% or less, except for drifts within
0.05–0.08 where the error grows to almost 20%. For all practical purposes, the SRSS rule for
combining aleatory randomness and epistemic uncertainty can be considered accurate for this
structure.
As a final comment, we have produced histograms showing the probabilistic distributions of the Sa(T1, 5%)-values of capacity for four limit-states, i.e. conditional on θ_max = 0.03, 0.06, 0.09, 0.12. Figure 12 presents the distribution of the median Sa(T1, 5%) due to parameter epistemic uncertainty and Figure 13 shows the distribution of Sa(T1, 5%) due to combined epistemic and aleatory variability. In other words, β_U and β_RU can be estimated for each of the four
[Figure 11: dispersion β versus maximum interstory drift ratio θ_max, comparing β_RU and its SRSS estimate β_RU^SRSS with β_R and β_U.]
Figure 12. The distribution due to epistemic uncertainty of the median Sa(T1, 5%)-values of capacity, given four values of θ_max (0.03, 0.06, 0.09 and 0.12).
Figure 13. The distribution due to both epistemic and aleatory uncertainty of the Sa(T1, 5%)-values of capacity, given four values of θ_max (0.03, 0.06, 0.09 and 0.12).
θ_max-values by calculating the standard deviation of the natural logs of the Sa(T1, 5%)-data contained in Figures 12 and 13, respectively.
To better judge the distribution of these data, we have also plotted the best-fit normal density function. Obviously, the actual distributions are only approximately symmetric, being slightly skewed to the right. Having used normal distributions for the parameters, the epistemic uncertainty in Sa(T1, 5%) also comes out to be approximately normal. Kolmogorov–Smirnov distribution tests [20] confirm the appropriateness of the normality assumption at the 95% level for all but the lowest values of θ_max. On the other hand, the lognormality assumption is rejected. Furthermore, the heavy tail of the record-to-record variability skews the combined aleatory plus epistemic uncertainty to the right, pushing it closer to the lognormal rather than the normal. This trend becomes more prominent for the lower values of θ_max, where β_R is dominant (Figure 11). Still, statistical tests at the 95%, or even the 90%, level do not confirm the suitability of either of the two distributions. In other words, the theoretically attractive lognormal assumption may be good enough for the record-to-record variability but, depending on the assumed distribution of the model parameters, it may not be appropriate for the epistemic uncertainty or the combined total.
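Such distribution checks can be reproduced in outline with scipy's one-sample Kolmogorov–Smirnov test, as sketched below; the capacity sample is synthetic, and fitting the parameters from the same data makes the p-values approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sa = np.exp(rng.normal(np.log(0.9), 0.3, 200))   # synthetic Sa capacity sample

# Normal check on Sa itself (parameters fitted from the same data, so the
# Kolmogorov-Smirnov p-value is only an approximate goodness-of-fit measure).
ks_normal = stats.kstest(sa, 'norm', args=(sa.mean(), sa.std(ddof=1)))

# Lognormal check: test the logs of Sa against a fitted normal distribution.
log_sa = np.log(sa)
ks_lognormal = stats.kstest(log_sa, 'norm', args=(log_sa.mean(), log_sa.std(ddof=1)))

print(ks_normal.pvalue, ks_lognormal.pvalue)
```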
6. CONCLUSIONS
The parameter sensitivity and epistemic uncertainty in the seismic demand and capacity of a nine-
story steel moment-resisting frame with non-deterministic beam-hinges have been estimated using
IDA. Sensitivity results have shown the differing effects of the hinge moment-rotation backbone parameters on the system's behavior: while the yield moment, the capping ductility, the negative stiffness ratio and the ultimate ductility have a significant impact, the hardening stiffness ratio and the residual moment are only marginally important. In addition, in line with recent research, Monte Carlo simulation with Latin hypercube sampling has been employed as the primary means to propagate the uncertainty from the model parameters to the seismic performance, while simplified methods based on point-estimate and first-order second-moment techniques have also been shown to allow accurate estimation at a fraction of the cost of simulation.
All in all, the epistemic uncertainty in beam-hinges is shown to be an important contributor
to the overall dispersion in the performance estimation as well as a key point that raises issues
regarding the validity of current assumptions in performance evaluation. The classic notion that the
median-parameter model produces the median seismic demand and capacity has been disproved.
Nevertheless, the error is low enough that it can still be considered as reasonably accurate for
practical applications. Finally, the simple square-root-sum-of-squares rule for the combination of
aleatory randomness with epistemic uncertainty has been proven to be accurate enough for some
limit-states but significantly off the mark for others. In summary, corroborating existing research,
IDA has been shown to have the potential to become a standard tool for performance uncertainty estimation. Although resource-intensive and sometimes controversial for its use of record scaling, it can be used as a basis for developing and validating simpler methodologies that can provide reliable results at a reduced computational cost.
ACKNOWLEDGEMENTS
The first author wishes to gratefully acknowledge the everlasting friendship and support of Professor C.
Allin Cornell who was lost to us on 14 December 2007.
REFERENCES
1. SAC/FEMA. Recommended seismic design criteria for new steel moment-frame buildings. Report No. FEMA-350,
SAC Joint Venture, Federal Emergency Management Agency, Washington, DC, 2000.
2. Luco N, Cornell CA. Effects of random connection fractures on demands and reliability for a 3-story pre-
Northridge SMRF structure. Proceedings of the 6th U.S. National Conference on Earthquake Engineering, Paper
No. 244. EERI: El Cerrito, CA, Seattle, WA, 1998; 1–12.
3. Luco N, Cornell CA. Effects of connection fractures on SMRF seismic drift demands. ASCE Journal of Structural
Engineering 2000; 126:127–136.
4. Foutch DA, Shi S. Effects of hysteresis type on the seismic response of buildings. Proceedings of the 6th U.S.
National Conference on Earthquake Engineering, Paper No. 409. EERI: El Cerrito, CA, Seattle, WA, 1998;
1–12.
5. Ibarra LF. Global collapse of frame structures under seismic excitations. Ph.D. Dissertation, Department of Civil
and Environmental Engineering, Stanford University, Stanford, CA, 2003.
6. Porter KA, Beck JL, Shaikhutdinov RV. Sensitivity of building loss estimates to major uncertain variables.
Earthquake Spectra 2002; 18(4):719–743.
7. Lee TH, Mosalam KM. Seismic demand sensitivity of reinforced concrete shear-wall building using FOSM
method. Earthquake Engineering and Structural Dynamics 2005; 34(14):1719–1736.
8. Rubinstein RY. Simulation and the Monte Carlo Method. Wiley: New York, 1981.
9. Vamvatsikos D, Cornell CA. Incremental dynamic analysis. Earthquake Engineering and Structural Dynamics
2002; 31(3):491–514.
10. Liel AB, Haselton CB, Deierlein GG, Baker JW. Incorporating modeling uncertainties in the assessment of
seismic collapse risk of buildings. Structural Safety 2009; 31(2):197–211.
11. Dolsek M. Incremental dynamic analysis with consideration of modelling uncertainties. Earthquake Engineering
and Structural Dynamics 2009; 38(6):805–825.
12. Foutch DA, Yun SY. Modeling of steel moment frames for seismic loads. Journal of Constructional Steel Research
2002; 58:529–564.
13. McKenna F, Fenves G, Jeremic B, Scott M. Open system for earthquake engineering simulation, 2000. Available
from http://opensees.berkeley.edu [May 2008].
14. Fragiadakis M, Vamvatsikos D, Papadrakakis M. Evaluation of the influence of vertical irregularities on the seismic
performance of a 9-storey steel frame. Earthquake Engineering and Structural Dynamics 2006; 35(12):1489–1509.
15. Vamvatsikos D, Cornell CA. Developing efficient scalar and vector intensity measures for IDA capacity estimation
by incorporating elastic spectral shape information. Earthquake Engineering and Structural Dynamics 2005;
34(13):1573–1600.
16. Luco N, Bazzurro P. Does amplitude scaling of ground motion records result in biased nonlinear structural drift
responses? Earthquake Engineering and Structural Dynamics 2007; 36(13):1813–1835.
17. Luco N, Cornell CA. Structure-specific, scalar intensity measures for near-source and ordinary earthquake ground
motions. Earthquake Spectra 2007; 23(3):357–392.
18. Baker JW, Cornell CA. Vector-valued intensity measures incorporating spectral shape for prediction of structural
response. Journal of Earthquake Engineering 2008; 12(4):534–554.
19. Vamvatsikos D, Cornell CA. Applied incremental dynamic analysis. Earthquake Spectra 2004; 20(2):523–553.
20. Benjamin JR, Cornell CA. Probability, Statistics, and Decision for Civil Engineers. McGraw-Hill: New York,
1970.
21. McKay MD, Conover WJ, Beckman R. A comparison of three methods for selecting values of input variables
in the analysis of output from a computer code. Technometrics 1979; 21(2):239–245.
22. Iman R. Latin Hypercube Sampling. Encyclopedia of Statistical Sciences. Wiley: New York. DOI:
10.1002/0471667196.ess1084.pub2.
23. Iman RL, Conover WJ. A distribution-free approach to inducing rank correlation among input variables.
Communication in Statistics Part B: Simulation and Computation 1982; 11(3):311–334.
24. Cornell CA, Jalayer F, Hamburger RO, Foutch DA. The probabilistic basis for the 2000 SAC/FEMA steel
moment frame guidelines. Journal of Structural Engineering (ASCE) 2002; 128(4):526–533.
25. Jalayer F. Direct probabilistic seismic analysis: implementing non-linear dynamic assessments. Ph.D. Dissertation,
Department of Civil and Environmental Engineering, Stanford University, Stanford, CA, 2003. Available from:
http://www.stanford.edu/group/rms/Thesis/FatemehJalayer.pdf [October 2008].
26. Rosenblueth E. Point estimates for probability. Applied Mathematical Modeling 1981; 5:329–335.
27. Baker JW, Cornell CA. Uncertainty propagation in probabilistic seismic loss estimation. Structural Safety 2008;
30(3):236–252.
28. Pinto PE, Giannini R, Franchin P. Seismic Reliability Analysis of Structures. IUSS Press: Pavia, Italy, 2004.
29. Zhang JT. A simple and efficient monotone smoother using smoothing splines. Journal of Nonparametric Statistics
2004; 16(5):779–796.
30. Fragiadakis M, Vamvatsikos D. Fast performance uncertainty estimation via pushover and approximate IDA.
Earthquake Engineering and Structural Dynamics 2009; submitted.