
EFFICIENT COMPUTATION OF ACCURATE SEISMIC FRAGILITY FUNCTIONS

THROUGH STRATEGIC STATISTICAL SELECTION

A Dissertation

Submitted to the Faculty

of

Purdue University

by

Francisco Peña

In Partial Fulfillment of the

Requirements for the Degree

of

Doctor of Philosophy

May 2019

Purdue University

West Lafayette, Indiana



THE PURDUE UNIVERSITY GRADUATE SCHOOL


STATEMENT OF APPROVAL

Dr. Shirley Dyke, Co-Chair
Lyles School of Civil Engineering
Dr. Ilias Bilionis, Co-Chair
School of Mechanical Engineering
Dr. Ayhan Irfanoglu
Lyles School of Civil Engineering
Dr. Julio Ramirez
Lyles School of Civil Engineering

Approved by:
Dr. Dulcy Abraham
Head of the Burke Graduate Program

To my wife, Natalia, who has been an unconditional source of support and encouragement.
I am truly thankful for all her love, patience, and support.

To the memory of my father-in-law, William, who helped shape me into the person I am
today.

ACKNOWLEDGMENTS

My sincere gratitude and appreciation goes to my advisors, Dr. Shirley Dyke and Dr.
Ilias Bilionis. Their continuous support, guidance, and encouragement made this Ph.D.
an enriching and rewarding journey. I am grateful for all they did for my entire family
over these years.

I also want to thank the other members of my dissertation committee, Dr. Ayhan
Irfanoglu and Dr. Julio Ramirez, for their valuable input, advice, and challenging
questions that helped shape this research.
Besides the committee members, I would like to thank Dr. George Mavroeidis and
Yenan Cao, from the University of Notre Dame, for providing the earthquake database
used in this analysis. I also want to acknowledge my fellow graduate students, current
and former members of the Intelligent Infrastructure Systems Laboratory (IISL) and the
Predictive Science Laboratory, for sharing their knowledge and insights; their company
made my time at Purdue truly enjoyable and pleasant. I sincerely appreciate the trust
placed in me by professors Robert Jacko, Donald Meinheit, Roger Reckers, and Ayhan
Irfanoglu through the teaching assistant positions, and all the guidance and collaboration
from Jenny Ricksy, Molly Stetler, and Bailey Ritchie.

I would like to thank my family for all their love and support, and my children, Sebastian
and Emily, for all the happiness and joy they bring to my life. Most importantly, special
gratitude is owed to my wife, Natalia, for her unwavering faith in me and her unconditional
support and love. There are not enough words to express how thankful I am to her. The
culmination of this doctoral degree is entirely because of her sacrifices and continuous
encouragement.

The financial support for this research was provided by the Colombian government, through
scholarship 568-2012 from Colciencias (Departamento Administrativo de Ciencia,
Tecnologia e Innovacion), the Colombia-Purdue Institute for Advanced Scientific Research,
and a teaching assistantship from the Lyles School of Civil Engineering at Purdue University.

TABLE OF CONTENTS

Page
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
ABBREVIATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.1.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.2 Classification of FFs . . . . . . . . . . . . . . . . . . . . . . . . 9
1.1.3 Uncertainty in FFs . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.1.4 Computation of FFs . . . . . . . . . . . . . . . . . . . . . . . . 19
1.2 Remaining Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.3 Objective and Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.4 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2 AVAILABLE DATA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1 Case Study Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2 Generating synthetic broadband ground motions . . . . . . . . . . . . . . 31
3 INPUT-OUTPUT PARAMETERS . . . . . . . . . . . . . . . . . . . . . . . . 35

3.1 Problem definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35


3.2 Structural response parameter: Engineering demand parameter (EDP) . . . 36
3.3 Structural response threshold: Limit States - LS . . . . . . . . . . . . . . 37
3.4 Input parameter: Intensity Measure - IM . . . . . . . . . . . . . . . . . . 39
4 BAYESIAN MODEL SELECTION . . . . . . . . . . . . . . . . . . . . . . . 48
4.1 Monte Carlo approximation of the FF . . . . . . . . . . . . . . . . . . . 48
4.2 Learning the FF from limited data . . . . . . . . . . . . . . . . . . . . . 49
4.2.1 Conventional model of FF . . . . . . . . . . . . . . . . . . . . . 49
4.2.2 Prior knowledge about the model . . . . . . . . . . . . . . . . . . 50
4.2.3 Models of FFs . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2.4 Bayesian inference of model parameters . . . . . . . . . . . . . . 52
4.2.5 Predicting fragility under a given model . . . . . . . . . . . . . . 54
4.3 Bayesian model selection . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.3.1 Model Validation . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.3.2 Epistemic uncertainty of FF . . . . . . . . . . . . . . . . . . . . 58
4.4 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.4.1 Selecting the best model . . . . . . . . . . . . . . . . . . . . . . 59
4.4.2 Quantification of the epistemic uncertainty in the FFs . . . . . . . 67
4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5 GROUND MOTION SELECTION TECHNIQUE . . . . . . . . . . . . . . . . 71
5.1 Effect of a hypothetical observation on the reduction in epistemic uncertainty 72
5.2 Sequential selection based on single-dimension IM . . . . . . . . . . . . . 73
5.3 Sequential selection based on two-dimensional IM . . . . . . . . . . . . . 73
5.4 Process to implement the sequential selection strategy . . . . . . . . . . . 75
5.5 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.5.1 Sequential selection strategies . . . . . . . . . . . . . . . . . . . 77
5.5.2 Statistical model for secondary IM . . . . . . . . . . . . . . . . . 78
5.5.3 Results of selected selection strategies . . . . . . . . . . . . . . . 80
5.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

6 PRACTICES THAT UNINTENTIONALLY BIAS FRAGILITY FUNCTIONS . 87
6.1 Effect of using different ground motion databases . . . . . . . . . . . . . 87
6.2 Effect of including historical ground motions . . . . . . . . . . . . . . . 90
6.3 Effect of scaling ground motions . . . . . . . . . . . . . . . . . . . . . . 92
6.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
7 SUMMARY AND CONCLUSIONS . . . . . . . . . . . . . . . . . . . . . . . 96
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
A DERIVATION OF FRAGILITY FUNCTION UNDER LOGNORMAL DISTRI-
BUTION ASSUMPTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

B SELECTION OF INPUT PARAMETERS . . . . . . . . . . . . . . . . . . . . 110


C EARTHQUAKE DATABASE . . . . . . . . . . . . . . . . . . . . . . . . . . 115
D UPDATED PARTICLE APPROXIMATION . . . . . . . . . . . . . . . . . . . 118
E MODEL SELECTION RESULTS . . . . . . . . . . . . . . . . . . . . . . . . 120
F SEQUENTIAL SELECTION OF GROUND MOTIONS RESULTS . . . . . . . 122
VITA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

LIST OF TABLES

Table Page
1.1 Classification of fragility functions . . . . . . . . . . . . . . . . . . . . . . . 10
1.2 Examples of uncertainty on modeling parameters . . . . . . . . . . . . . . . 12
3.1 Commonly used critical values of building response per limit state [98–101] . . 39
4.1 Average (COV) log-value for the evidence, log(ZM ), for different number of
basis functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2 Comparison metrics of the models . . . . . . . . . . . . . . . . . . . . . . . 63

5.1 Comparison metrics of the models . . . . . . . . . . . . . . . . . . . . . . . 80
E.1 Model selection results for N = 500, X1 : PGV, and Y. . . . . . . . . . . . . 121
F.1 Model selection results for N = 1000, X1 : PGV, and X2 : Sv . . . . . . . . . . 123

LIST OF FIGURES

Figure Page
1.1 Representation of fragility function . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Overview of the fragility computation challenge . . . . . . . . . . . . . . . . 26
2.1 Case study structure: 20-story nonlinear benchmark building . . . . . . . . . 32
2.2 Earthquake fault (rectangle), grid of stations (triangles), and hypocenter loca-
tions (stars) considered in this study. . . . . . . . . . . . . . . . . . . . . . . 33
2.3 Example of synthetic acceleration and velocity time histories for the station

denoted by the filled triangle in Fig. 2.2 and for Mw = 6.5, a hypocenter on the
edge of the fault, and NEHRP Site Class C. . . . . . . . . . . . . . . . . . . . 33
3.1 Distribution of drift parameter Y . . . . . . . . . . . . . . . . . . . . . 38
3.2 Initial dataset of X vs. Y . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.3 Scatter matrix plot for X . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4 Linear regression for X vs. Y (µ, σ) in log-space . . . . . . . . . . . . . . . 46
3.5 Distribution of PGV vs. Sv . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1 Data set of 36,000 observations . . . . . . . . . . . . . . . . . . . . . . . . 60

4.2 Considered basis functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 60


4.3 Variation of log(ZM2 ) (for qm = qs = 1) due to number of particles . . . . . . 62
4.4 Numerical results for models (a) M1 , (b) M2 , (c) M3 , (d) M4 , and (e) M5 . . 64
4.5 Comparison metrics of models (a) Bayesian model selection, (b) Predictive
interval, (c) K-S distance, and (d) Q-Q error . . . . . . . . . . . . . . . . . . 65
4.6 Expected FFs for life safety limit state . . . . . . . . . . . . . . . . . . . . . 66
4.7 Prior/posterior comparison (a) Exponent γ M , (b) Variance ς 2M , and (c) Co-
efficients cM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.8 Comparison of FFs for: (a) 10, (b) 40, and (c) 500 observations . . . . . . . 68
4.9 Reduction of epistemic uncertainty as function of N . . . . . . . . . . . . . . 69
5.1 Data set of X1 vs. X2 observations . . . . . . . . . . . . . . . . . . . . . . . 78
5.2 Numerical results for (X1, X2 ) models (a) M1 , (b) M2 , (c) M3 , (d) M4 , and
(e) M5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.3 Comparison metrics of (X1, X2 ) models (a) Bayesian model selection, (b)
Predictive interval, (c) KS-distance, and (d) Q-Q error . . . . . . . . . . . . . 82
5.4 Prior/posterior comparison (a) Exponent γ M , (b) Variance ς 2M , and (c) Co-
efficients cM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.5 Numerical representation of X1 -X2 lognormal model . . . . . . . . . . . . . 83
5.6 Input domain division for X1 -sel (left) and (X1, X2 )-sel (right) . . . . . . . . . 84
5.7 Comparison of expected fragility functions using different datasets with 100
observations and different selection strategies . . . . . . . . . . . . . . . . . 84
5.8 Comparison of epistemic uncertainty performance as function of N for the
strategies (a) random, (b) X1 -based and (c) (X1, X2 )-based with respect to

uniform (blue) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.1 Representation of different ground motion databases . . . . . . . . . . . 89
6.2 Fragility analysis for 50 observations from different ground motion databases:
(a) California, (b) Stiff-Soil, and (c) SAC-Combined . . . . . . . . . . . . . . 91
6.3 Fragility analysis for 50 observations combining synthetic and historic ground
motion records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.4 Observations from scaled and original ground motions from SAC-Combined
database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

6.5 Fragility analysis for (a) N = 720 scaled, (b) N = 60 scaled, and (c) N = 60
original records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.1 Relationship between normal and lognormal RV (Courtesy of [123]) . . . . . 109
B.1 Scatter matrix plot for X (top left portion of Fig. 3.3) . . . . . . . . . . . . . 111
B.2 Scatter matrix plot for X (bottom left portion of Fig. 3.3) . . . . . . . . . . . 112
B.3 Scatter matrix plot for X (top right portion of Fig. 3.3) . . . . . . . . . . . . 113
B.4 Scatter matrix plot for X (bottom right portion of Fig. 3.3) . . . . . . . . . . . 114
C.1 Ground motion database in terms of earthquake magnitude . . . . . . . . . . 115
C.2 Ground motion database in terms of NEHRP soil classification . . . . . . . . 116
C.3 Ground motion database in terms of wave direction . . . . . . . . . . . . . . 116
C.4 Ground motion database in terms of epicentral distance . . . . . . . . . . . . 117

ABBREVIATIONS

ATC Applied Technology Council


BIF Bayesian Inference Framework
CCDF Complementary cumulative distribution function
CDF Cumulative distribution function
CLT Central limit theorem
CP Collapse prevention

DC Damage control
DM Damage measure
EDP Engineering demand parameter
FF Fragility function/curve
FS Fragility surface
FY First yield
IDA Incremental dynamic analysis
IM Intensity measure

IO Immediate occupancy
KS Kolmogorov-Smirnov
LHS Latin-Hypercube sampling
LS Limit state
LSf Life safety
MC Monte Carlo
MCMC Markov chain Monte Carlo
MISD Maximum inter-story drift
MLE Maximum likelihood estimation
MRF Moment-resisting frame
NEHRP National Earthquake Hazards Reduction Program

PMI Plastic mechanism initiation


PDF Probability density function
PGA Peak ground acceleration
PGD Peak ground displacement
PGV Peak ground velocity
RQ Research question
RV Random variable
SBM Specific barrier model
SDOF Single-degree-of-freedom
SHM Structural health monitoring
SMC Sequential Monte Carlo

SR Limited safety range
SSMRP Seismic safety margins research program

ABSTRACT

Author: Peña, Francisco. Ph.D.


Institution: Purdue University
Degree Received: May 2019
Title: Efficient Computation of Accurate Seismic Fragility Functions Through Strategic
Statistical Selection.
Major Professors: Shirley J. Dyke & Ilias Bilionis.

A fragility function quantifies the probability that a structural system reaches an un-
desirable limit state, conditioned on the occurrence of a hazard of prescribed intensity
level. Multiple sources of uncertainty are present when estimating fragility functions, e.g.,
record-to-record variation, uncertain material and geometric properties, model assumptions,
adopted methodologies, and scarce data to characterize the hazard. Advances in the last
decades have produced considerable research on parameter selection, hazard characteris-
tics, and multiple methodologies for the computation of these functions. However, there
is no clear guidance on the types of methodologies and data that ensure accurate fragility
functions can be computed in an efficient manner. Fragility functions are influenced by the
selection of a methodology and the data to be analyzed. Each selection may lead to different
levels of accuracy, due to either increased potential for bias or the rate of convergence of
the fragility functions as more data are used. To overcome this difficulty, it is necessary to
evaluate the level of agreement between different statistical models and the available data,
as well as to exploit the information provided by each piece of available data. By doing
this, it is possible to obtain more accurate fragility functions with less uncertainty while
enabling faster and more widespread analysis. In this dissertation, two methodologies are
developed to address the aforementioned challenges. The first methodology provides a way
to quantify uncertainty and perform statistical model selection to compute seismic fragility
functions. This outcome is achieved by implementing a hierarchical Bayesian inference
framework in conjunction with a sequential Monte Carlo technique. Using a finite number
of simulations,

the stochastic map between the hazard level and the structural response is constructed using
Bayesian inference. The Bayesian approach allows for the quantification of the epistemic
uncertainty induced by the limited number of simulations. The most probable model is then
selected using Bayesian model selection and validated through multiple metrics such as the
Kolmogorov-Smirnov test. The second methodology proposes a sequential selection
strategy to choose the earthquakes whose characteristics yield the largest reduction in
uncertainty. The quantification of uncertainty is exploited to consecutively select the
ground motion simulations that expedite learning and provide unbiased fragility functions
with fewer simulations. Lastly, some common practices during the computation of fragility
functions that result in undesirable bias are discussed. The methodologies are implemented
on a widely studied twenty-story steel nonlinear benchmark building model and employ a
set of realistic synthetic ground motions obtained from earthquake scenarios in California.
Further analysis of this case study demonstrates the superior performance of the lognormal
probability distribution compared to the other models considered. It is also demonstrated
that the methodologies developed in this dissertation can yield lower levels of uncertainty
than traditional sampling techniques using the same number of simulations. These
methodologies enable reliable and efficient structural assessment, by means of fragility
functions, for civil infrastructure, especially for time-critical applications such as
post-disaster evaluation. Additionally, this research empowers implementation by being
transferable, facilitating such analysis at the community level and for other critical
infrastructure systems (e.g., transportation, communication, energy, water, security) and
their interdependencies.

1. INTRODUCTION

The successful operation of a community relies on the preservation of normal functionality


of the built environment. However, normal performance is unfortunately interrupted by
major disruptions, e.g., natural or man-made hazard events. Even though the livelihood
of a community after a strong event may be disrupted by extensive damage in the built
infrastructure, additional factors like the lack of knowledge about the safety and integrity
of the existing assets may further impede the recovery process (e.g., loss of functionality,

inability to use designated shelter structures, inadequate classification of damage levels
for critical infrastructure). As a result, it is very important for any community to be
aware of the structural integrity of its assets before, but most importantly after, a major
disruption. By knowing the conditions of the built environment after a disaster, it is
possible to make informed decisions and adequately allocate resources. Herein, the
terms "built environment", or "infrastructure", refer to the set of assets and facilities that
provide essential services and commodities to the inhabitants of a community, such as
buildings, hospitals, roads, bridges, power and communication networks.

Current practices to evaluate the integrity of a structural asset include on-site inspection
and structural health monitoring (SHM). The former requires a team of experts that are
available and willing to participate in the assessment, while the latter requires deploying a
set of sensors and other computational resources to monitor the dynamic response of a single
asset. Both practices have certain limitations, precluding their application at large scale
(e.g., community–level, city-level, or even state-level). On the one hand, human inspection
is expensive, time-consuming, and it may be inconsistent. Typically, there is a limited
number of qualified teams available to perform a subjective evaluation of the integrity for an
extensive number of large-scale assets [1]. On the other hand, SHM can be expensive, time-
consuming, and it is not a widely-accepted technique among stakeholders (e.g., owners,
regulatory agencies). A large network of sensors, data acquisition systems, and other

computational resources in conjunction with trained staff are needed for the implementation
of a robust, redundant, and well maintained system to extract the necessary information.
Furthermore, these systems may require supplementary investments for maintenance or
tuning due to time-dependent disturbances that weather conditions and transient loads
impose on the structure. Lastly, it may be difficult to identify local damage because these
systems are often trained to recognize patterns associated with global anomalies on the
structure. This is partly due to the fact that SHM systems often operate in an unsupervised
learning mode because data from damaged structures is scarce [2].
Consequently, special interest has been given to predict, in a more accurate and rapid
manner than human inspection or SHM, the structural integrity of an asset after a strong
event. Here, the terms hazard, event, and disturbance refer to ground motion. By increasing

the level of knowledge about the effects that a ground motion may impose on the built
environment, stakeholders would be able to make better decisions to improve community
resilience. Thus, critical infrastructure would endure less damage and communities would
face minimal loss of functionality and fewer casualties after a hazard event. For
example, it is possible to direct a larger amount of resources to the critical infrastructure
that requires immediate intervention. Similarly, response and reconnaissance missions can
better prepare action plans to safely mount rescue and response activities. Because of this

need to assess the structural integrity, the concept of fragility function (FF) emerged in the
mid 1970s. Originally FFs were established to address the need for evaluating assets in the
nuclear power industry. Recently, these methods obtained recognition for a broad range of
elements in the built environment.
Initially the term fragility was used to refer to the threshold value of the seismic capacity
of a nuclear power plant before failure occurs [3–5]. It was in 1980 when the term fragility
became recognized as the probability of failure of systems and components in the nuclear
power industry [6]. FFs are employed as an alternative solution to assess the structural
integrity of the built environment. A FF quantifies the probability that a system reaches
an undesirable limit state (LS), e.g., collapse or yielding, conditioned on the occurrence

of a hazard event of given magnitude [7, 8]. Thus, a FF is a crucial component for the
quantification of economic loss and casualties, vulnerability, and risk assessment [7–11].
Although quite informative, FFs are sensitive to the choice of data used for their es-
timation. The uncertainty in a FF can be decomposed into aleatory (a.k.a., fundamental,
frequency, probabilistic, random variability) and epistemic uncertainties (other names given
in the literature are incomplete knowledge, probability, statistical, systematic, uncertainty
and modeling variation [12]). The former corresponds to the intrinsic variability (true
randomness) that may characterize the system (e.g., real structure, numerical model) and
the ground motion (e.g., record-to-record variation), while the latter refers to the lack of
knowledge about the problem (e.g., uncertainty induced by limited data). Since aleatory
uncertainty is irreducible, efforts should address reducing epistemic uncertainty.
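As a quick numerical illustration of why the epistemic part is the reducible one (this sketch and its numbers are illustrative, not taken from the case study): the sampling spread of a Monte Carlo fragility estimate built from N simulations narrows as N grows, even though the underlying record-to-record randomness stays fixed.

```python
import numpy as np

# Epistemic (reducible) vs. aleatory (irreducible) uncertainty, numerically:
# the sampling spread of a fragility estimate p_hat = k / N shrinks as the
# number of simulations N grows, while the underlying response randomness
# (here, a fixed exceedance probability p_true) stays the same.
rng = np.random.default_rng(2)
p_true = 0.3                                       # illustrative value
width = {}
for n in (10, 100, 1000):
    estimates = rng.binomial(n, p_true, size=2000) / n   # 2000 replications
    width[n] = np.percentile(estimates, 97.5) - np.percentile(estimates, 2.5)
# width[10] > width[100] > width[1000]
```

The 95% spread of the estimator shrinks roughly as 1/sqrt(N), which is the behavior the dissertation seeks to accelerate through strategic selection.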

Substantial effort over the last decades has been dedicated to increasing the level of knowl-
edge about all of the factors that contribute to the uncertainty of FFs [13–15]. For
instance, several methodologies have been implemented (although some of them were not
originally intended) for the computation of FFs, such as: Safety Factor Method [12, 16],
Linear Regression in Logarithmic Space [17, 18], numerical simulation using Maximum
Likelihood Estimation [19, 20], fitting model parameters using Moment Matching [21],
Sum-of-Squared-Errors [21, 22], Least-Squares [21, 22], Gaussian Kernel Smoothing [23],

Incremental Dynamic Analysis (IDA) [24, 25], Neural Networks [25], Capacity Spectrum
Method [26, 27], Modal Pushover Analysis [28], Bayesian Inference [29, 30], among oth-
ers. In addition to the notable differences in the time that a fragility analysis may take
among all the different methodologies, the shape of the resulting FFs may exhibit differ-
ences as well [22, 31]. Similarly, multiple probability distribution functions have been used
to describe the dispersion in the structural response, the lognormal distribution being the
most commonly adopted case for seismic excitation [8]. Other distributions studied are
the Weibull [15], generalized extreme value [15], and non-parametric distributions [32].
For other natural hazards, such as turbulent wind forces, it is common practice to use log-
normal, Weibull, gamma, normal, and Cauchy distributions [33–35]. Additionally, other
factors have been studied for their influence on uncertainty, such as material and geometric

properties [36, 37], record-to-record variations [19], damping [36], input/output parame-
ters [31, 35], aging and deterioration [38], modeling calibration [39], and even the quantity
of analyzed ground motions [22, 25, 30].
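As an example of one of the methods cited above, a lognormal FF can be fitted by maximum likelihood to binary exceedance outcomes. The sketch below is a generic illustration, not the dissertation's implementation; the function name, the synthetic data, and the Nelder-Mead optimizer are assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lognormal_fragility(im, exceeded):
    """MLE fit of P[exceed | IM = x] = Phi((ln x - ln theta) / beta):
    maximize the Bernoulli likelihood of the observed 0/1 outcomes."""
    im = np.asarray(im, float)
    y = np.asarray(exceeded, int)

    def neg_log_like(params):
        ln_theta, ln_beta = params                 # log-space keeps beta > 0
        p = norm.cdf((np.log(im) - ln_theta) / np.exp(ln_beta))
        p = np.clip(p, 1e-12, 1 - 1e-12)           # guard against log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(neg_log_like, x0=[0.0, np.log(0.4)], method="Nelder-Mead")
    ln_theta, ln_beta = res.x
    return np.exp(ln_theta), np.exp(ln_beta)       # median theta, dispersion beta

# Synthetic check: outcomes drawn from a known model, theta = 0.8, beta = 0.5
rng = np.random.default_rng(1)
im = rng.uniform(0.1, 3.0, 4000)
p_true = norm.cdf((np.log(im) - np.log(0.8)) / 0.5)
exceeded = rng.random(im.size) < p_true
theta_hat, beta_hat = fit_lognormal_fragility(im, exceeded)
# theta_hat and beta_hat should land near 0.8 and 0.5, respectively
```

With enough data the fit recovers the generating parameters; with scarce data the estimates scatter, which is exactly the epistemic uncertainty the Bayesian treatment in later chapters quantifies.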
Consequently, it is possible to find an extensive number of guidelines, frameworks, and
books dedicated almost exclusively to discuss all of these findings [7,8,40–45]. Despite the
tremendous advances made towards increasing the understanding about ground motions,
structural response, and FFs, there is a need to evaluate all the assumptions (most of
them adopted since the late 1970s when the computational resources were limited), the
different methodologies, and the intrinsic variation between fragility analyses. By being
more selective in the methodologies and type of data used for the analysis, it is possible
to achieve more accurate and realistic FFs, supporting a better-informed decision-making
process and optimizing the allocation of community resources.
Given that FFs have the potential to produce important supporting information in the deci-
sion process that transforms the built environment and correspondingly impacts community
livelihood, the following inquiries need to be properly addressed:

• Is this FF derived using the appropriate model?

• Are the data used representative and applicable for the case study?

• Is the amount of data used sufficient for the FF to be conclusive?

Although some of the techniques found in the literature partially address these questions,
primarily for the dynamic response of the structures, the development of a systematic
approach centered on seismic FFs remains unaddressed. The methodologies developed in
this dissertation leverage Bayesian inference to estimate the model
parameters while quantifying uncertainty. Similarly, this approach enables the evaluation
of the level of agreement between models and actual data as a means to acquire more
accurate FFs with a minimum expenditure of computational resources. Additionally, a
methodology for the selection of seismic data is developed to expedite the computation
of accurate FFs. Furthermore, this dissertation demonstrates that some of the common

practices in the computation of FFs may lead to biased results, incorporating in some cases
substantial levels of uncertainty.

1.1 Literature Review

The concept of a seismic FF is defined as the conditional probability that a structure or
component reaches a prescribed LS, given the occurrence of a ground motion with intensity
measure (IM) X1 = x. Here, the undesirable LS does not necessarily signify the complete
failure or collapse of a structure. It only refers to a given condition of functionality that is
associated with a selected structural response (a.k.a., damage measure, DM, or engineering
demand parameter, EDP), denoted by Y. It is possible to establish a critical level or

threshold ycrit in the structural response for each LS. The fragility or probability of reaching
a LS refers to Y ≥ ycrit, and the most general mathematical expression is:

F(x; ycrit; I) := P[ Y ≥ ycrit | X1 = x, I ] = ∫_{ycrit}^{+∞} fY(y | X1 = x, I) dy
               = E[ 1_{[ycrit, +∞)}(Y) | X1 = x, I ]                        (1.1)

where P[A | B] is the probability of event A occurring given that B has already occurred,
fY(y) is the probability density function (PDF) for Y = y, E[· | ·] is the conditional
expected value, and 1_A is the characteristic function of set A, i.e., 1_A(y) = 1, if y ∈ A, and
1_A(y) = 0, otherwise. Additionally, the term I corresponds to all known (or assumed to
be known) information about the structure and its surroundings. For instance, I includes
information such as the geographical location and orientation of the structure, specific site
conditions, soil characteristics, distance to seismically active faults, type of seismic fault,
weather conditions that may affect the soil properties, among many other factors. Including
this information in the fragility analysis is translated into implementing a realistic database
of ground motions that are representative for the system. From herein, the information
I is assumed to be an implicit property of the FF and it will be omitted to simplify the
mathematical notation. A graphical representation of a FF is shown in Fig. 1.1. The line
represents the expected FF, while the shaded area corresponds to the 95% predictive interval. FFs have

[Figure: fragility curve showing the predicted mean and the 95% predictive interval of P[Y ≥ ycrit | X] against the IM.]

Fig. 1.1. Representation of fragility function

W
become one of the principal tools for structural, loss, vulnerability, and risk assessments
[7–11, 40]. Given the tight relationship between the concepts of hazard, vulnerability
and risk, it is possible to unintentionally interchange their definitions. For this reason, it is
convenient to briefly introduce proper definitions based on the detailed explanation prepared
by [8]. In summary, hazard or hazard probability refers to the relationship between a given
IM value and the frequency in which events of this magnitude or larger are expected to occur
at a specific geographical location. Vulnerability differs from fragility because it quantifies

the impact of reaching an undesirable LS in terms of a variable that measures the loss (e.g.,
casualties, dollars). Finally, the seismic risk of an asset represents the potential consequences
(vulnerabilities) that ground motions can generate in a structure with a specific geographical
location during a certain period of time.
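The FF definition at the beginning of this section — the conditional probability ℙ[Y ≥ ycrit | X] written as the expectation of the characteristic function 1A — translates directly into a Monte Carlo estimator: average the indicator over repeated simulations of the response. The following minimal sketch illustrates the idea; the response model here is a purely hypothetical stand-in (a lognormal demand whose median grows with the IM), not a model from any cited reference.

```python
import math
import random

def toy_response(im, rng):
    """Hypothetical demand model: lognormal response whose median grows
    linearly with the IM. Illustrative only; a real analysis would run a
    structural simulation here."""
    return rng.lognormvariate(math.log(0.02 * im), 0.35)

def fragility_point(im, y_crit, response_model, n=20_000, seed=0):
    """Monte Carlo estimate of P[Y >= y_crit | X = im]: the sample mean of
    the indicator 1{Y >= y_crit} over n draws of the response Y."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if response_model(im, rng) >= y_crit)
    return hits / n

# Sweeping the IM traces out the fragility curve point by point.
curve = [(im, fragility_point(im, 0.02, toy_response)) for im in (0.5, 1.0, 2.0)]
```

Each estimated point carries a Monte Carlo standard error of roughly sqrt(p(1 − p)/n), which is one motivation for reporting predictive intervals around the curve, as in Fig. 1.1.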

1.1.1 History

The term FF was initially introduced in the late 1970s for the design and analysis of
facilities from the nuclear power industry. Although FFs are extensively applied nowadays
for the built environment, especially buildings and bridges, it was only in the late 1990s
that the concept began to be applied to such structures [15]. As was already
stated, the term fragility was associated with the margin ycrit in 1975 [3] and later came to
denote the probability of failure in 1980 [6]. The surge of scientific research that precipitated
the advances in fragility during the late 1970s was triggered in response to the Reactor
Safety Study (a.k.a., the Rasmussen Report or WASH-1400) [46]. This controversial work
concluded that "earthquake-induced accidents should not contribute significantly to reactor
accident risk," relying on multiple unsupported assumptions, probability distributions, and
methodologies [47]. According to [48], three programs followed that exposed deficiencies
of the Rasmussen Report and contributed remarkably to the advances in seismic fragility
analysis: (i) the Diablo Canyon seismic risk evaluation [4], (ii) the Seismic Safety
Margins Research Program (SSMRP) [6, 16, 49–52], and (iii) the Oyster Creek probabilistic
safety analysis [12].

The first program exposed the necessity to almost double the structural capacity (denoted
by R and measured in the same units as the ground motion) of the nuclear power plant to
preserve a similar level of reliability, after the discovery of a seismic fault line (Hosgri)
just 6 km away. Two major contributions resulted from this program: (i) an expression to
define the probability of failure, PF ∈ [0, 1] (see Eq. (1.2)), and (ii) the use of the lognormal
distribution to model the seismic capacity (see Eq. (1.3)), which was initially proposed
by [53]. These are expressed as

PF := P[X1 ≥ R] = ∫₀^∞ FR(x) fX1(x) dx = 1 − ∫₀^∞ FX1(x) fR(x) dx.        (1.2)

FR(x) := Φ( log(x/r̄) / σR )        (1.3)
where FR (x) is the cumulative distribution function (CDF) for R evaluated at x, Φ(·)
represents the CDF for the standard normal distribution, r̄ is the median seismic capacity
of the system, and σR corresponds to the standard deviation of the natural logarithm of
R. Assuming that R can be expressed as the product of a series of positive random
variables (RVs), the central limit theorem, CLT, (in particular Gibrat’s law) states that fR
can be approximated using a lognormal distribution [53]. Thus, the capacity becomes

R := c1 c2 · · · cK R̂, in which R̂ corresponds to the calculated estimate of the capacity, and
the coefficients {ck}, k = 1, . . . , K, are correction terms to account for the variations induced
by site conditions, dynamic amplification, damping, modeling discrepancies, among other factors.
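Under these lognormal models, both sides of Eq. (1.2) reduce to a closed form, which makes for a convenient sanity check. The sketch below additionally assumes the demand X1 is lognormal; all function names and parameter values are illustrative.

```python
import math
import random

SQRT2 = math.sqrt(2.0)

def std_normal_cdf(z):
    """Phi(z), the standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / SQRT2))

def capacity_cdf(x, r_median, sigma_r):
    """F_R(x) = Phi(log(x / r_median) / sigma_r), i.e., Eq. (1.3)."""
    return std_normal_cdf(math.log(x / r_median) / sigma_r)

def failure_prob_mc(r_median, sigma_r, d_median, sigma_d, n=100_000, seed=0):
    """Monte Carlo version of Eq. (1.2): P_F = E[F_R(X1)], with the demand
    X1 lognormal (median d_median, logarithmic std sigma_d)."""
    rng = random.Random(seed)
    total = sum(capacity_cdf(rng.lognormvariate(math.log(d_median), sigma_d),
                             r_median, sigma_r)
                for _ in range(n))
    return total / n

def failure_prob_exact(r_median, sigma_r, d_median, sigma_d):
    """Closed form: log X1 - log R is normal, so
    P[X1 >= R] = Phi(log(d_median / r_median) / sqrt(sigma_r^2 + sigma_d^2))."""
    return std_normal_cdf(math.log(d_median / r_median)
                          / math.hypot(sigma_r, sigma_d))
```

With an illustrative median capacity of 1.0 g, σR = 0.4, and a demand median of 0.5 g with the same dispersion, both routes give PF ≈ 0.11.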


The SSMRP included a compendium of projects to enhance the prediction of the
behavior of nuclear facilities disturbed by seismic excitation [6]. One of these projects
focused exclusively on fragility, centering its attention on augmenting a database of FFs
and safety margins (conservative values for ycrit with approximately 95% confidence) for
elements and components of nuclear power plants [6].
Additionally, the program separated the correction terms ck into two ensembles: aleatory
and epistemic uncertainties.
The Oyster Creek project centered its attention on the importance of incorporating
confidence intervals around FFs, given the scarcity of available data and the considerable
dependency on engineering judgment for their derivation [12]. Before this project, FFs
were presented as deterministic functions representing the "best estimate" (median FF),
a form that was labeled unwarranted because it failed to incorporate uncertainty [12].
Although [12, 16] supported the use of lognormal distributions, the applicability of
low-probability estimates to real life was brought into question. The use of the lognormal
distribution for such rare events was said to result in a conservative estimate, giving lower
capacity and/or larger demand values [12]. Confidence intervals were obtained after defining
the ground motion IM for a prescribed probability of failure 0 ≤ PF ≤ 1 as:

X1 := X̌1 εR εU        (1.4)

where X̌1 is the "best estimate" (for PF = 0.5) of the median ground motion IM, εR
represents the random (aleatory) variability, while εU corresponds to the (epistemic)
uncertainty. Both εR and εU were modeled using lognormal RVs with median value of one
and logarithmic standard deviations of σR and σU, respectively.
methodology used worldwide for seismic probability risk assessment in the nuclear power
industry, and it is known as the Safety Factor Method [31] (further explanation is provided
in Section 1.1.4). The name comes from the implementation of a set of safety factors
describing the uncertainty in the capacity of multiple components, which are combined to
compute the entire system’s median capacity X̌1. The method remains in use today because
of its simplicity, requiring the inference of just three parameters (X̌1, σR, and σU), while
still being able to accommodate different sources of uncertainty as safety factors [15, 54].
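Under Eq. (1.4), conditioning on a draw of the epistemic factor εU yields one fragility curve, Φ(log(a / (X̌1 εU)) / σR); sampling εU many times and taking percentiles across the resulting curves produces the confidence band. A minimal sketch with illustrative parameter values (the function names are not from any cited reference):

```python
import math
import random

def std_normal_cdf(z):
    """Phi(z), the standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fragility_band(a_grid, median_cap, sigma_r, sigma_u, n_draws=2_000, seed=1):
    """For each IM level a, return (5th pct, median, 95th pct) of the
    conditional fragility Phi(log(a / (median_cap * eps_u)) / sigma_r)
    over draws of the epistemic factor eps_u ~ lognormal(median 1, sigma_u)."""
    rng = random.Random(seed)
    eps_u = [rng.lognormvariate(0.0, sigma_u) for _ in range(n_draws)]
    band = []
    for a in a_grid:
        vals = sorted(std_normal_cdf(math.log(a / (median_cap * e)) / sigma_r)
                      for e in eps_u)
        band.append((vals[int(0.05 * n_draws)],   # 5th percentile curve
                     vals[n_draws // 2],          # median curve
                     vals[int(0.95 * n_draws)]))  # 95th percentile curve
    return band
```

Note that only the three parameters (X̌1, σR, σU) enter the computation, which is precisely the simplicity that has kept the Safety Factor Method in use.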

1.1.2 Classification of FFs

FFs are often classified into four categories, according to the source of the data used
for their derivation: empirical, judgmental or expert-based, analytical, and hybrid
[7, 11, 14, 39, 55]. A description of the source of data used, and the advantages and
disadvantages of each category, can be found in Table 1.1. Examples of empirical FFs can
be found in [25, 27, 35]; judgmental FFs are commonly found in building and rehabilitation
codes [55–57]; analytical FFs are becoming more common due to the
growing expertise and widespread use of computational modeling tools (e.g., OpenSees,
SAP2000, Abaqus) [15, 17, 18, 22, 28, 30, 58]; and some of the major applications of hybrid
FFs include [39, 56, 57]. Hybrid FFs are particularly appealing since they may allow the
strengths of certain categories to compensate for the weaknesses of other sources of data.
For instance, [39] presented the differences between analytical FFs for a
calibrated/uncalibrated numerical model of a bridge that were subsequently updated by
incorporating real experimental data from a scaled version of the center pile model. The
results show that experimental data enables convergence of the FF regardless of the
differences in the models due to calibration. Another example of hybrid FFs are the
judgmental functions that were corrected by incorporating data from the San Fernando
(1971) and Northridge (1994) earthquakes, and were presented by the Applied Technology
Council (ATC) in ATC-13 [56] and ATC-40 [57], respectively. Using numerical models
to derive analytical FFs has been widely adopted due to increased availability and access

Table 1.1. Classification of fragility functions

Empirical
  Source: Post-earthquake damage assessments
  Advantages:
    + Most realistic class
    + Incorporates complex pragmatic information (e.g., soil-structure interaction
      effects, topography, location, orientation of the structure, path and source
      characteristics, construction process, material and geometric properties,
      actual boundary conditions)
  Disadvantages:
    - Available data is scarce and clustered in the low-damage/low-intensity region
    - Requires the occurrence of a major earthquake

Judgmental
  Source: Panel of experts with extensive experience in earthquake eng.
  Advantages:
    + No requirement for expensive computations
  Disadvantages:
    - Easily biased by the opinion, knowledge, expertise and recognition of each expert

Analytical
  Source: Simulation of numerical model
  Advantages:
    + No requirement for real data nor panel of experts
    + Artificial ground motions can be used
    + Most accessible option -> widely used
  Disadvantages:
    - Accuracy depends on numerical model and its capability to mimic real dynamic behavior
    - A large ensemble of ground motions may be required
    - Biased by earthquake characteristics and model assumptions

Hybrid
  Source: Two or more different sources
  Advantages:
    + Same as the ones from the selected sources
    + Strength of one class may compensate for weakness of a different class
  Disadvantages:
    - Requires enough data to counteract each source’s disadvantage

to computing clusters and supercomputers [9, 11, 14, 18, 22, 30, 40, 55, 58, 59]. Researchers
are not only able to perform faster fragility analyses but are also able to quantify the
uncertainty associated with each function. For this reason, the methodologies explained
in this dissertation are discussed in the context of analytical FFs. However, the proposed
methodologies of the subsequent chapters are independent of the category of FF and the
selected method.
