GEOTECHNICAL SPECIAL PUBLICATION NO. 282
GEO-RISK 2017
KEYNOTE LECTURES
SPONSORED BY
Geo-Institute of the American Society of Civil Engineers
EDITED BY
D. V. Griffiths, Ph.D., P.E., D.GE
Gordon A. Fenton, Ph.D., P.Eng.
Jinsong Huang, Ph.D.
Limin Zhang, Ph.D.
Any statements expressed in these materials are those of the individual authors and do not
necessarily represent the views of ASCE, which takes no responsibility for any statement
made herein. No reference made in this publication to any specific method, product, process,
or service constitutes or implies an endorsement, recommendation, or warranty thereof by
ASCE. The materials are for general information only and do not represent a standard of
ASCE, nor are they intended as a reference in purchase specifications, contracts, regulations,
statutes, or any other legal document. ASCE makes no representation or warranty of any
kind, whether express or implied, concerning the accuracy, completeness, suitability, or
utility of any information, apparatus, product, or process discussed in this publication, and
assumes no liability therefor. The information contained in these materials should not be used
without first securing competent advice with respect to its suitability for any general or
specific application. Anyone utilizing such information assumes all liability arising from such
use, including but not limited to infringement of any patent or patents.
ASCE and American Society of Civil Engineers—Registered in U.S. Patent and Trademark
Office.
Preface
Interest in and the use of probabilistic methods and risk assessment tools in geotechnical engineering have grown rapidly in recent years. The natural variability of soil and rock properties, combined with a frequent lack of high-quality site data, makes a probabilistic approach to geotechnical design a logical and scientific way of managing both technical and economic risk. The burgeoning field of geotechnical risk assessment is evidenced by numerous publications, textbooks, dedicated journals, and sessions at general geotechnical conferences. Risk assessments are increasingly becoming a requirement on many large engineering construction projects. Probabilistic methods are also recognized in design codes as a way of delivering reasonable load and resistance factors (LRFD) targeting allowable risk levels in geotechnical design.
This Geotechnical Special Publication (GSP), coming out of the Geo-Risk 2017 specialty conference held
in Denver, Colorado from June 4-7, 2017, presents eight outstanding contributions from the keynote
speakers. Four of the contributions are from practitioners and the other four are from academics, but they
are all motivated by a desire to promote the use of risk assessment and probabilistic methodologies in
geotechnical engineering practice. Honor Lectures are presented by Greg Baecher (Suzanne Lacasse
Lecturer) on Bayesian thinking in geotechnical engineering and Gordon Fenton (Wilson Tang Lecturer)
on future directions in reliability based design. The reliability-based design theme is continued by Dennis
Becker who includes discussion of risk management, and Brian Simpson, who focuses on aspects of
Eurocode 7 and the rapidly growing importance of robustness in engineering design. The evolution and
importance of risk assessment tools in dam safety is covered in lectures by John France and Jennifer
Williams, and Steven Vick. The challenges of liquefaction modeling and the associated risks of problems
due to instability and deformations are covered in lectures by Hsein Juang and Armin Stuedlein.
These contributions to the use of risk assessment methodologies in geotechnical practice are very timely,
and will provide a valuable and lasting reference for practitioners and academics alike.
All the papers in this GSP went through a rigorous review process. The contributions of the reviewers are
much appreciated.
The Editors
D.V. Griffiths, Ph.D., P.E., D.GE, F.ASCE, Colorado School of Mines, Golden, CO, USA
Gordon A. Fenton, Ph.D., P.Eng., FEIC, FCAE, M.ASCE, Dalhousie University, Halifax, Canada
Jinsong Huang, Ph.D., M.ASCE, University of Newcastle, NSW, Australia
Limin Zhang, Ph.D., F.ASCE, Hong Kong University of Science and Technology, PR China
Acknowledgments
The following individuals deserve special acknowledgment and recognition for their efforts in making
this conference a success:
• Conference Chair: D.V. Griffiths, Colorado School of Mines, Golden, Colorado, USA
• Conference Co-Chair: Gordon A. Fenton, Dalhousie University, Halifax, Canada
• Technical Program Chair: Jinsong Huang, University of Newcastle, NSW, Australia
• Short-Courses: Limin Zhang, Hong Kong University of Science and Technology
• Student Program co-Chairs: Zhe Luo, University of Akron; Jack Montgomery, Auburn University
• Sponsorships and Exhibits Chair: Armin Stuedlein, Oregon State University
The Editors greatly appreciate the work of Ms. Helen Cook, Ms. Leanne Shroeder, Ms. Brandi Steeves,
and Mr. Drew Caracciolo of the ASCE Geo-Institute for their administration of many important
conference organizational issues, including management of the on-line paper submissions, the conference
web site, and sponsorship.
Contents
Bayesian Thinking in Geotechnics ............................................................................ 1
Gregory B. Baecher
Steven G. Vick
Bayesian Thinking in Geotechnics
Gregory B. Baecher
Abstract
The statistics course most of us took in college introduced a peculiar and narrow species of the subject. Indeed, that species of statistics—usually called Relative Frequentist theory—is not of much use in grappling with the problems geotechnical engineers routinely face. The sampling theory approach to statistics that arose in the early 20th C. has to do with natural variations within well-defined populations. It has to do with frequencies, like the flipping of a coin. Geotechnical engineers, in contrast, deal with uncertainties associated with limited knowledge. These have to do with the probabilities of unique situations. Such uncertainties are not amenable to Frequentist thinking; they require Bayesian thinking. Bayesian thinking is that of judgment and belief. It leads to remarkably strong inferences from even sparse data. Most geotechnical engineers are intuitive Bayesians whether they know it or not, and have much to gain from a more formal understanding of the logic behind these straightforward and relatively simple methods.
BAYESIAN THINKING
Most geotechnical engineers are intuitive Bayesians. Practical examples of Bayesian thinking in
site characterization, dam safety, data analysis, and reliability are common in practice; and the
emblematic observational approach of Terzaghi is a pure Bayesian concept, although in a qualitative form (Lacasse 2016).
The statistics course one took in college most likely introduced a peculiar and narrow
form of statistics, generally known as Relative Frequentist theory or Sampling Theory statistics.
In the way normal statistics courses are taught, one is led to believe that this is all there is to sta-
tistics. That is not the case. As one of the reviewers of this paper said, it’s not your fault if you
haven’t thought about Bayesian methods until now; and it’s not too late.
This traditional frequentist form of statistical thinking is not particularly useful except in
narrowly defined problems of the sort one finds in big science, like medical trials, or in sociolog-
ical surveys like the US Census. It is tailored to problems for which data have been acquired
through a carefully planned and randomized set of trials. It is tailored to aleatory uncertainties,
that is, uncertainty dealing with variations in nature. This almost never describes the problems a
normal person faces, and especially not those geotechnical engineers face. Most geotechnical uncertainties are epistemic: they deal with limited knowledge, with uncertainties in the mind, not variations in nature.
Two concepts of probability. The reason that college statistics courses deal with this peculiar
form of statistics and not something more useful in daily life has to do with intellectual battles in
the history of probability, and in how the pedagogy of statistical teaching evolved in the early
20th C. Even though concepts of uncertainty, inference, and induction arose in antiquity, what we
think of as modern probability theory, at least its mathematical foundations, arose only around
1654. Hacking (1975) and Bernstein (1996) trace this history.
From that time, two concepts of probability evolved in parallel. These deal with different
problems, and while they have evolved to use the same mathematical theory, and while they are
commonly confused with one another, in fact they are philosophically distinct. One concept, the
one taught in undergraduate courses, deals with the relative frequency with which particular
events occur in a long series of similar trials. For example, if you roll a pair of dice a thousand times, “snake eyes” (double-1) will occur in about 2.8% (1/36) of the tosses. This is the sort of probability that is involved in large clinical trials. One exposes 1000 subjects to a test drug and 1000 subjects to a placebo, and then compares the frequency with which particular outcomes occur in each group.
The other concept of probability deals with degrees of belief that one should rationally
hold in the likely outcome of some experiment or in the truth of a proposition. This species of
statistics has nothing to do with frequencies in long series of similar trials, but rather with how
willing one is to make decisions or to take action when faced with uncertainties. For example,
frequency statistics might be used to describe the rates of false positive or false negative results
when a medical test is applied to a large number of subjects; but the probability that you as a
unique individual are sick if a diagnostic test comes back positive is not a matter of frequencies,
it is a matter of one unique individual, namely, you. You are either sick or well. Probability in
this case is a matter of the degree of belief about which of those two conditions you think ob-
tains. Vick (2002) interprets this theory of degrees-of-belief as a formalization of “engineering
judgment.”
Scope of this paper. This paper focuses on inferences which at first glance seem difficult or
impossible to make—and indeed they are, using frequentist thinking. But they are easy when
viewed through the lens of Bayesian thinking. Bayesian methods have been used across the spec-
trum of geotechnical applications since the 1970s, as reflected in the early work of Tang
(Lacasse et al. 2013), Wu (2011), Einstein (Einstein et al. 1978), Marr (2011), and many others.
These methods have revolutionized many fields of engineering and continue to do so (McGrayne
2012). “Clippy,” the annoying Microsoft self-help wizard, was a Bayesian app. Spam filtering of
your email inbox is, too. The Enigma Code of the German Kriegsmarine was broken using
Bayesian methods at Bletchley Park. And the wreckage of Air France flight 447 was found using
a Bayesian search algorithm. Recent reviews of the use of Bayesian methods in geotechnical en-
gineering have been provided by Yu Wang (2016), Zhang (2016), and Juang and Zhang (2017).
For reasons of space and to avoid complicating the ‘message,’ advanced topics in Bayesian
methods such as belief nets and Markov-chain Monte-Carlo are not discussed here.
The application of statistics to practical problems is of two sorts. On the one hand, we use statis-
tics to describe the variability of data using summaries such as measures of central tendency and
spread, or frequency distributions such as histograms or probability density functions. On the
other hand, we use statistics to infer probabilities over properties of a population that we have
not observed, based on a limited sample that we have observed. It is this latter meaning of
statistics that we deal with here. It is the inductive use of statistics, which in the 19th C. was
called inverse reasoning.
Bayes' rule. Bayes’ Rule tells us that the weight of evidence in observations is wholly contained in the Likelihood Function, that is, in the conditional probability of the observations given the true state of nature (see Hacking 2001 for a comprehensible introduction). Statisticians might call the true state of nature the hypothesis; thus,

P(H|x) = N × P(H) × P(x|H)    (1)

in which H = the hypothesis is true, P(H) = the (prior) probability of the hypothesis being true before seeing the data, P(x|H) = the probability of the observed data x were the hypothesis true, P(H|x) = the (posterior) probability of the hypothesis being true given the data, and N = a normalizing constant.
The term P(x|H) is called the Likelihood,

Likelihood = P(x|H)    (2)

The Likelihood might be thought of as the degree of plausibility of the data in light of the hypothesis (Schweckendiek 2016). In the Bayesian literature, the Likelihood is sometimes written as L(H|x) (O’Hagan and Forster 2004).
The normalizing constant in Eq. (1) is just that which makes the sum of the probabilities for and against the hypothesis (H) equal 1.0. In practical applications, N is often and most easily obtained numerically, but in the simple case above it can be calculated from the Total Probability Theorem as N = 1 / { P(H) × P(x|H) + P(¬H) × P(x|¬H) }, in which ¬H = the hypothesis is not true.
Dividing Eq. (1) by its complement for not-H,

P(H|x) / P(¬H|x) = [ P(H) / P(¬H) ] × [ P(x|H) / P(x|¬H) ]    (3)

The normalizing constant, N, which is the same in numerator and denominator, cancels out. In everyday English, this reads, “the posterior odds for the hypothesis equal the prior odds times the likelihood ratio (LR).” What one thought before seeing the data is entirely contained in the prior odds, while the weight of information in the data is entirely contained in the Likelihood Ratio. The Likelihood Ratio is the ratio of the Likelihood for a true hypothesis to that for a false hypothesis, LR = P(x|H) / P(x|¬H).
The weight of evidence. The crucial thing about Eq. (3) is the unique role of the Likelihood Ra-
tio. The Likelihood Ratio contains the entire weight of evidence contained in the observations.
This is true whether there is one observation or a large number, which means that inferences can
be made even if the data are relatively weak (Jaynes 2003), and sometimes these inferences from
weak data can actually be relatively strong (Good 1996).
Sir Harold Jeffreys (1891-1989), late Professor of Geophysics at Cambridge and ardent
defender of Bayesianism (although an opponent of continental drift), proposed that this weight of
evidence—for purposes of testing scientific hypotheses—be characterized as in Table 1. Whether
one agrees with the verbal descriptions and corresponding LR’s is left to the reader’s judgment.
In the modern literature, this weight of evidence in the LR is called the Bayes factor (Kass and
Raftery 1995). For Millennial readers, the logarithm of the Bayes Factor will be recognized as
the decimal version of binary bits of information in the Shannon and Weaver (1949) sense. For
readers of the author’s age the odds might be compared to bets at the horse track.
Table 1. Qualitative scale for the degree of support provided by evidence (Jeffreys 1998)
LR = Likelihood Ratio (from–to)  Weight of evidence to support the hypothesis
1–10  Limited evidence
10–100  Moderate evidence
100–1,000  Moderately strong evidence
1,000–10,000  Strong evidence
A straightforward but powerful example of simple Bayesian inference is given by the work of
Chen and Gilbert (2014) on Gulf of Mexico offshore structures. These inferences might be based
on laboratory tests, in situ measurements, performance data, or even quantified expert opinion.
This contrasts with an earlier time when such inferences were almost always made using
Frequentist methods (Lumb 1974).
Chen and Gilbert use Bayes' Rule to update bias factors in engineering models for pile
system capacity based on observed performance in Gulf of Mexico hurricanes. The initial work
included events up to Hurricane Andrew in 1992, and in a subsequent paper up to more recent
hurricanes between 2004 and 2008 (Chen and Gilbert in press). The analysis addresses model bi-
as in four predictions: wave load, base shear capacity, overturning in clay, and overturning in
sand.
The question addressed is, what is the systematic bias in the predictions of the engineer-
ing models being used to forecast pile system capacity, given observations of how the platforms
performed in various storm events, and given the prior forecasts of how they would perform. Re-
written in the notation of the present paper (O’Hagan and Forster 2004),
f(B|z) = N × f(B) × L(z|B)    (4)

in which f(·) is a probability density function (pdf), B = the model bias factor, z = the observed performance of the pile system, and the normalizing constant, N, is that which makes the integral over all B equal to 1.0, N = 1 / ∫ f(B) L(z|B) dB (i.e., the area under f(B|z) has to be unity for the pdf to be proper). This is simply a restatement of Eq. (1).
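A minimal numerical sketch of this kind of updating follows; the grid approximation stands in for the numerical normalization mentioned earlier, and the prior and likelihood are invented placeholders, not the distributions used by Chen and Gilbert.

import numpy as np

# Grid-based update of a model bias factor B per Eq. (4):
# f(B|z) = N * f(B) * L(z|B).  Prior and likelihood are illustrative only.
B = np.linspace(0.2, 3.0, 2001)          # grid over the bias factor
dB = B[1] - B[0]

sig = 0.4                                 # lognormal prior, median bias = 1.0
prior = np.exp(-np.log(B)**2 / (2*sig**2)) / (B * sig * np.sqrt(2*np.pi))

z, tau = 0.8, 0.2                         # observed performance ratio and noise
like = np.exp(-(z - B)**2 / (2*tau**2))   # Gaussian likelihood kernel L(z|B)

post = prior * like
post /= (post * dB).sum()                 # numerical normalizing constant N

print("prior mean bias    :", round((B * prior * dB).sum(), 3))
print("posterior mean bias:", round((B * post * dB).sum(), 3))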
The updated pdf’s of Figure 1 show how the storm loading data led to a re-evaluation of
the uncertainty in the model bias factors for wave loading and for overturning. The dotted curves
show the prior pdf’s of model bias. The solid curves show the posterior or “updated” pdf’s. In
both cases the performance data led to a lowering of the best estimate of model bias and to a
slight reduction in the uncertainty in the bias. As noted by the authors, however, there is no as-
surance that observed performance will always reduce uncertainty. If the observations are incon-
sistent with what was thought ex ante, the variance of the pdf might, in fact, increase rather than
decrease. The weight of evidence in the Likelihood Function works both ways, either in favor of a hypothesis or opposed to it. Similar applications are provided by Zhang (2004), inferring pile capacities from incomplete load tests, and by Huang et al. (2016), inferring the reliability of
single piles and pile groups by load tests. Zhang et al. (2011) describe examples of back-analysis
of slope stability considering site-specific performance information.
Figure 1. Comparison of prior and updated probability distributions for (a) wave load
model bias, and (b) overturning in sand model bias. These are the marginal distributions of
a joint PDF of the two biases (Chen and Gilbert in press).
Just as one learns from direct loading data, one also learns from observing more complex per-
formance. Of course, this is just Terzaghi’s Observational Method (Casagrande 1965; Peck 1969).
Bayesian methods, however, allow a quantitative updating of model uncertainty, parameter values,
and predictions based on quantitative measurements of performance. So, anyone using an observa-
tional approach is, de facto, applying Bayesian methods.
Staged construction example. Ladd (Baecher and Ladd 1997; Noiray 1982) performed a relia-
bility assessment of the staged loading of a limestone aggregate storage embankment on soft
Gulf of Mexico Clay at a cement plant in Alabama (Figure 2) using the SHANSEP method
(Ladd and Foott 1974). The embankment was loaded in stages to allow consolidation and an in-
crease in the undrained strength of the clay stratum. It was incrementally raised to a final height
of 55 ft (17 m). Extensive in situ and laboratory testing allowed a formal observational approach to
be used, in which uncertainty analysis of the preliminary design was combined with field moni-
toring to update both soil engineering parameters and site conditions (maximum past pressures).
Bayesian methods were used to modify initial calculations, and thus to update predictions of per-
formance for later stages of raising the embankment.
The site abuts a ship channel and a gantry crane unloads limestone ore from barges
moored at a relieving platform. The ore is placed in a reserve storage area adjacent to the chan-
nel. As the site is underlain by thick deposits of medium to soft plastic deltaic clay, concrete pile
foundations had been used to support earlier facilities at the plant. Although the clay underlying
the site was too weak to support the planned stockpile, the cost of a pile supported mat founda-
tion for the storage area was prohibitive.
To allow construction, a foundation stabilization scheme was conceived in which lime-
stone ore would be placed in stages, leading to consolidation and strengthening of the clay, hastened by vertical drains. However, given large scatter in engineering property data for the clay,
combined with low factors of safety against embankment stability, field monitoring became es-
sential. The site was instrumented to monitor pore pressures, horizontal displacements, and set-
tlements. Foundation performance was predicted during the first construction stage and revised
for the final design.
The total uncertainty V(x) in a predicted quantity x was partitioned into components,

V(x) ≅ [V_scatter(x)] + [V_systematic(x)]    (5a)
V(x) ≅ [V_spatial(x) + V_noise(x)] + [V_statistical(x) + V_bias(x)]    (5b)

The variance components in Eq. (5b) are those due to spatial variation, random error, statistical estimation error, and model or measurement bias, respectively.
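A toy bookkeeping example follows (the variance values are invented, not those of the case study); to first order the components simply add:

# First-order variance bookkeeping per Eqs. (5a)-(5b); illustrative values.
V_spatial, V_noise = 0.010, 0.010   # data-scatter terms
V_stat, V_bias = 0.004, 0.006       # systematic-error terms

V_scatter = V_spatial + V_noise     # first bracket of Eq. (5b)
V_systematic = V_stat + V_bias      # second bracket of Eq. (5b)
V_total = V_scatter + V_systematic  # Eq. (5a)

print(f"V_total = {V_total:.3f}")   # 0.030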
Application of SHANSEP. An expected maximum vertical past pressure profile was obtained
by linear regression of the measurements. The observed data scatter about the expected value
profile was assumed to reflect spatial variability of the clay plus random measurement error in
the test data. The data scatter causes statistical estimation error in the location of the expected
value because of the limited number of tests.
The contribution of spatial uncertainty to predictive total uncertainty depends on the vol-
ume of soil involved in the problem. Spatial variability averages over large soil volumes, so the
larger the volume the smaller the contribution. The contribution of random measurement errors
to uncertainty in performance predictions depends only on their importance in estimating ex-
pected value trends, since random measurement errors do not reflect real variations in soil prop-
erties. It was estimated (guessed) that ~50% of the variance was spatial and ~50% random measurement noise. This was a judgment, but one informed by earlier work (DeGroot 1985).
Ten CK₀UDSS tests were performed on undisturbed clay samples to determine undrained stress-strain-strength parameters to be used in the SHANSEP procedure. Since there was no apparent trend with elevation, the expected value and standard deviation were computed by averaging the data to yield the SHANSEP equation,

s_u/σ′_v = S (OCR)^m    (6)

in which s_u = undrained shear strength, σ′_v = vertical effective stress, OCR = overconsolidation ratio, and S and m = soil parameters fitted from the tests (Ladd and Foott 1974).
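In code form the relation is one line. In the sketch below the values of S and m are typical textbook magnitudes chosen for illustration, not the parameters measured at this site, and the helper name is hypothetical.

# SHANSEP undrained strength per Eq. (6): su = S * OCR**m * sv
# sv = vertical effective stress, sp = maximum past pressure; S, m illustrative.
def su_shansep(sv: float, sp: float, S: float = 0.22, m: float = 0.8) -> float:
    """Undrained shear strength (same units as sv)."""
    ocr = sp / sv                     # overconsolidation ratio
    return S * ocr**m * sv

print(f"su = {su_shansep(sv=100.0, sp=180.0):.1f} kPa")   # ~35 kPa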
The formal observational approach allowed a balancing between prior information and observed
field performance. This helped overcome the tendency to discount prior information in favor of
parameters back calculated from field observations, i.e., the representativeness bias of Tversky
and Kahneman (1974). The Observational Method and Bayesian Thinking are, indeed, the same
thing. The latter simply makes the former quantitative and thus internally consistent.
Figure 3. Reliability index against slope instability in clay, in which r = autocorrelation dis-
tance of the clay properties, and L = length of the failure surface. The expected value of the
Factor of Safety is represented as E[FS] and the standard deviation as SD[FS].
Other examples of the Bayesian Observational Method. Similar approaches to the observa-
tional method using Bayes’ Rule have been taken by a number of people. Einstein and his stu-
dents over the years developed a Bayesian approach to subsurface information and tunneling de-
cisions called Decision Aids for Tunneling (Einstein 2001). In this approach an updating is made
of the predicted geology in the unexcavated part of a tunnel alignment based on observed geolo-
gy in the excavated part. One then uses the geology (ground classes) to select the most suitable
design option from among a set of pre-planned options. Einstein (1997) used a similar approach
for landslide risk. Wu (2011) devoted his Peck Lecture to this topic and gives a number of inter-
esting examples involving embankment design and performance from a Bayesian view.
Schweckendiek applies Bayesian thinking to internal erosion in coastal dikes (Schweckendiek et
al. 2016; Schweckendiek and Kanning 2016), and to updating the fragility curves of dikes
(Schweckendiek 2014) based on their successful loading under historical conditions.
QUALITATIVE INDICATORS
Much of routine practice involves inspections and qualitative observations. For example, levee
systems are periodically inspected for indications of distress, since detailed geotechnical analysis
of miles of levee reach is for the most part prohibitively expensive. Bayesian methods allow the-
se qualitative data to be turned into quantitative conclusions about reliability.
Scoring rules. An inspector observes that a reach of levee exhibits both excessive settlement and
cracking. Presumably, this indicates lessened reliability, but by how much? Perhaps he or she
observes other indicators as well. To combine these, it is convenient to create a simple score, for
example a weighted sum of the form,
z = Σ_i w_i x_i    (7)

in which z = summary score, w_i = the weight or importance of each observation type i, and x_i = the observed data of type i. In principle, one would like to relate this index z to the reliability of the
levee reach. Qualitative risk scales of this form, which include risk matrices, are common: EPA,
FHWA, GAO, and other US agencies recommend them. But they are often incorrect from a
measurement theory view (Cox 2009). Bayes’ Rule comes to the rescue.
To make Eq. (3) additive, one takes the logarithm of each side,

ln[ P(H|x) / P(¬H|x) ] = ln[ P(H) / P(¬H) ] + ln[ P(x|H) / P(x|¬H) ]    (8)
If there is more than one type of observation, x = {x_1, …, x_n}, the likelihood becomes the joint likelihood of all the observations, that is, the joint probability of observing the set of things that were observed, were H true. This is a joint conditional probability, so if the observations are correlated, that correlation must be accounted for; but in the desirable case that the observations are mutually independent—or are assumed independent—the likelihood ratio in Eq. (8) reduces to a simple product,

P(x|H) / P(x|¬H) = Π_i P(x_i|H) / P(x_i|¬H)    (9)

Thus, to create a mathematically rigorous linear scoring rule one need only take the scores for each category of observation in proportion to ln LR_i.
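In code, such a scoring rule might look like the sketch below (a hypothetical helper, not from the paper); the LR values shown anticipate the levee example that follows.

import math

# Additive log-likelihood-ratio score per Eqs. (8)-(9), assuming the
# indicators are mutually independent.  LR values are illustrative.
def posterior_log_odds(prior_prob, lrs):
    score = sum(math.log(lr) for lr in lrs)          # the ln-LR score
    return math.log(prior_prob / (1.0 - prior_prob)) + score

log_odds = posterior_log_odds(0.10, [0.70/0.22, 0.40/0.11])  # two indicators
prob = 1.0 / (1.0 + math.exp(-log_odds))
print(f"posterior probability = {prob:.2f}")          # ~0.56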
Levee screening. The table below shows hypothetical data on the association of two indica-
tors—embankment cracking and embankment settlement—with levee sections that later failed or
later did not fail within a specified time, e.g., the time until the next scheduled inspection. For
simplicity, presume there has been a set of 1000 levee reaches, and that 100 of them failed after
being inspected. So, the base rate of reach failure would be ~10%. Thus, the prior odds are P(H)/P(¬H) = 0.1/(1 − 0.1) = 0.1/0.9, and the log prior odds would be ln(1/9) = −2.2.
The Likelihood of observing “cracking” for a levee section that later fails can be estimated by taking the number of levee sections that later fail and counting what fraction of these exhibited cracking during the preceding inspection (Table 2). In the present case, this is 70 of the 100 levee sections, or 70%. Thus, the Likelihood of observing “cracking” for a failing levee section is P(cracking|H) ≈ 0.7, where H = the hypothesis that the reach will fail within the next cycle. If cracking were observed, Bayes’ Rule says that the probability of failure in that reach ought to be raised from 0.1 to 0.26 (Table 3). To see this, the prior odds are 1/9 = 0.11. The LR is (0.70/0.22) = 3.2. Thus, the posterior odds are (0.11)(3.2) = 0.36, and the probability is 0.36/(0.36 + 1) = 0.26.
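The same numbers can be checked directly with Eq. (3); the fractions below are those of Table 2 (hypothetical data), and the results reproduce Table 3. The helper name is, again, hypothetical.

# Posterior probability of reach failure per Eq. (3), using Table 2 fractions.
def posterior_prob(prior, p_data_H, p_data_notH):
    odds = (prior / (1.0 - prior)) * (p_data_H / p_data_notH)
    return odds / (1.0 + odds)

print(round(posterior_prob(0.10, 0.70, 200/900), 2))  # cracking alone   -> 0.26
print(round(posterior_prob(0.10, 0.40, 100/900), 2))  # settlement alone -> 0.29
print(round(posterior_prob(0.10, 0.30,  50/900), 2))  # both, correlated -> 0.38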
Table 2. Levee reach frequencies observed in the past. The data are hypothetical.
Failing Non-failing
Number Fraction Number Fraction Total
Cracking 70 70% 200 22% 270
Settlement 40 40% 100 11% 140
Both 30 30% 50 6% 80
Neither 20 20% 650 72% 670
Total 100 100% 900 100% 1000
Table 3. Updated probability of reach failure given the cracking or settlement or both.
P(F)  P(data|F)  P(¬F)  P(data|¬F)  P(F|data)
P(H|C)  0.10  70%  0.90  22%  0.26
P(H|S)  0.10  40%  0.90  11%  0.29
P_uncorrelated(H|C,S)  0.10  28%  0.90  2%  0.56
P_correlated(H|C,S)  0.10  30%  0.90  6%  0.38
The tables also show the data on excessive settlement and the implications, by Bayes’ Rule, of observing either cracking or settlement alone, or both together. In the data of Table 2, the occurrence of cracking and settlement appears correlated; that is, they occur more often together (6% of the cases) than would be predicted were they independent (i.e., 22% × 11% ≈ 2%). So,
pendent (Figure 4). The details of the dependent case are discussed in Baecher and Christian
(2013) and in Margo et al. (2009). The method is easily expanded to multiple indicators. Several
workers have used a similar approach for mapping landslide hazard (Dahal et al. 2008; Quinn et
al. 2010; vanWesten et al. 2003), and the dam safety base-rate work of Foster et al. (2000) is
conceptually similar. Straub (2005) uses a Bayesian network approach to generate a risk index
for rock falls.
QUANTITATIVE INDICATORS
While qualitative data are sometimes convenient, it is more common to deal with quantitative da-
ta and measurements. An area of practice where Bayesian methods have long been recognized is
site characterization. This was a topic of a great deal of early Bayesian work, for example, by
Wu (1974), Tang and Quek (1986), Dowding (1979), and others. Straub and Papaioannou (2015)
present a more recent overview of the use of Bayesian methods in inferring in situ soil proper-
ties.
Looking for anomalies. If one expends effort in site characterization by searching for a geologi-
cal anomaly suspected to exist (e.g., a sink hole, a fault zone, or a weak lens) but the anomaly is
not found, what is the probability of its existing undetected? This is not an uncommon situation
(Poulos et al. 2013). The answer depends on (1) how likely one thought its existence to be be-
fore the search, and (2) how likely the search effort is to find the anomaly if it exists. Unsurpris-
ingly, these are the two terms in Bayes’ Rule.
The calculation is simple. A prior probability is assigned, perhaps by judgment or per-
haps from prior data. Then the geometric chance of finding an anomaly of given description is
calculated, for which there are mature models (Kendall and Moran 1963). The description itself
might be specified by a probability distribution of size, shape, obliquity, or the like. Then Eq. (1)
is applied to calculate the posterior probability, say, if nothing is found (Figure 5). The curves in
the figure show different conditional probabilities of finding an existing target.
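The arithmetic behind Figure 5 is one line of Bayes’ Rule. In the hedged sketch below, p is the prior probability the anomaly exists and d is the search efficacy (the probability of detecting the anomaly if it exists); the likelihood of a no-find is (1 − d) under existence and 1 under non-existence. The function name is hypothetical.

# Posterior probability that an anomaly exists, given that it was searched
# for and not found (Eq. (1)).  p = prior probability; d = search efficacy.
def p_exists_given_no_find(p: float, d: float) -> float:
    return p * (1.0 - d) / (p * (1.0 - d) + (1.0 - p))

for d in (0.3, 0.5, 0.9):      # one curve of Figure 5 per value of d
    print(f"d = {d}: posterior = {p_exists_given_no_find(0.5, d):.2f}")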
Figure 5. Posterior probability of anomaly existing (y-axis) given it was searched for with-
out being found, as a function of the prior probability before searching (x-axis), and of the
conditional probability of finding an existing target (the multiple curves).
Among the many interesting things about Figure 5 is that the posterior probability is
strongly dependent on the prior probability, even with intensive searching, i.e., a high probability
of finding an existing target. The multiple lines in the figure represent the conditional probability
of finding an existing target. So, just because an intensive search has been performed and noth-
ing found, one should not be over-confident that the searched-for target is absent. Another inter-
esting observation is that the search efficacy needs to be at least p≥0.5 before the posterior prob-
ability is much affected by the search. This is why Terzaghi’s “geological details” are seldom
found prior to construction (Terzaghi 1929).
Tang and his students did a great deal of work on this problem (Tang 1987; Tang and
Halaim 1988; Tang and Quek 1986; Tang and Saadeghvaziri 1983). More recent work along the-
se same lines has been done by Ditlevsen (2006) in the context of tunneling through Danish till.
Kaufman and Barouch (1976) applied this thinking to undiscovered oil and gas reserve estimates.
Figure 6. PDFs of undetected anomaly size given no-find with n randomly placed borings, where n = number of borings, a = anomaly area, and A₀ = search area (Tang 1987).
Subtler details about the searched-for anomalies can also be inferred. For example, if a
given amount of effort is invested in searching for anomalies, the posterior probability distribu-
tions over such characteristics as the total number and the size distribution are easily calculated.
Tang (1987) made such calculations for the number and sizes of undetected features in the sub-
surface of a site, e.g., weak pockets of clay in a granular stratum (Figure 6).
Failure rates when there are no failures. This problem of searching for an anomaly is related
to an interesting problem which arises in testing the reliability of components like spillway gates.
Lacasse (2016) considers the same problem in relation to avalanche risk. There are many ways to
characterize the reliability of a gate, for example, by its availability (up time/total time), or its
mean-time-between-failures (up time/number of failures); but let us choose the even simpler failure rate, λ, and its estimate based on data,

λ̂ = k/n = (number of failures)/(number of demands)    (10)
in which λ̂ = the estimate of the failure rate, k = the number of failures of a gate to open on demand, and n = the number of demands. Thus, if six failures have been observed in five years of monthly tests (n = 60), the estimate of the failure rate on demand is λ̂ = 6/60 = 0.1 per demand. Presuming the tests to be independent, the number of failures follows a Binomial distribution,

P(k|n, λ) = C(n, k) λ^k (1 − λ)^(n−k)    (11)

in which C(n, k) is the binomial coefficient. The mean of the distribution is E[k] = nλ, and the Likelihood is L(λ|k, n). So, λ̂ = k/n is the moment estimator of λ.
Figure 7. Posterior PDF of the failure rate given no gate failures among 60 monthly tests. The mean of this distribution corresponds to a return period of roughly 75 demands.
What if the gate is tested for five years and no gate failures occur? According to Eq. (10), λ̂ = 0. But this doesn’t seem right. Surely there remains some chance of the gate being unavailable at an arbitrary time in the future. Luckily, Bayes’ Rule comes to the rescue again. The Bayesian posterior pdf of λ is,

f(λ|k = 0, n) = N f(λ) L(k = 0|λ, n)    (12)

in which f(λ) = the prior distribution (here assumed uniform over [0,1]), and N = a normalizing constant that makes the integral over all λ unity. The result for no gate failures among 60 tests is shown in Figure 7. The average failure rate estimate is now λ̂ ≅ 0.013, or about one per 75 demands, rather than zero. This makes more sense.
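A grid version of Eq. (12) is sketched below, assuming a uniform prior. It illustrates the mechanics rather than reproducing Figure 7 exactly: with a uniform prior the posterior mean is (k+1)/(n+2) = 1/62 ≈ 0.016, a little higher than the ~0.013 quoted above, which evidently reflects the particular prior used.

import numpy as np

# Posterior pdf of the failure rate lam, per Eq. (12), for k = 0 failures
# in n = 60 tests, with a uniform prior on [0, 1].
n, k = 60, 0
lam = np.linspace(1e-6, 1.0 - 1e-6, 100_000)
d = lam[1] - lam[0]

prior = np.ones_like(lam)                 # uniform prior f(lam)
like = lam**k * (1.0 - lam)**(n - k)      # Binomial kernel of Eq. (11)
post = prior * like
post /= (post * d).sum()                  # normalizing constant N

mean = (lam * post * d).sum()
print(f"posterior mean failure rate = {mean:.4f}")   # ~0.016 = (k+1)/(n+2)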
DATA FUSION
There is a further thing that Bayesian methods allow one to do that is truly significant: they allow
data of different sorts—subjective, qualitative, quantitative, multi-sensor, etc.—to be fused into a
unified probabilistic inference, which is otherwise not easily obtained. Data fusion is the activity
of combining information from different sources such that the result has less uncertainty than when the sources are used separately. Traditionally, it has been difficult to do this quantitatively
when the sources of data are heterogeneous (Castanedo 2013). It is greatly simplified by using
Bayesian methods.
Returning to Eq. (1), data from multiple sources, i = 1, …, n, can be combined through their respective Likelihoods,

f(H|x) ∝ f(H) L(x|H) ∝ f(H) Π_i L(x_i|H)    (13)

in which x = {x_1, …, x_n} are the various data types. If the data sources are conditionally inde-
pendent this reduces to the simple expression at right. It does not matter that the data sources
might be quite different from one another: it is their likelihoods which are combined and these
are simple probabilities. For example, in dam safety risk studies one often has empirical data,
modelling results, and expert opinion elicitations; and each possibly from more than one source.
These distinct sources of quantitative information can easily be “fused” by expanding Eq. (13),
f(H|x, y, z) ∝ f(H) × L(x|H) × L(y|H) × L(z|H)    (14)
in which x are the empirical data, y are the modelling outcomes, and z are the quantified expert
opinions. Morris (1974) describes how Likelihood Functions over expert opinion can be generat-
ed and used within this context.
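The sketch below implements Eq. (14) on a grid. The three likelihood functions are invented placeholders standing in for empirical data, model output, and elicited expert opinion; only their product structure is the point.

import numpy as np

# Data fusion per Eq. (14): the posterior over a parameter theta is the prior
# times the product of the likelihoods from each (assumed independent) source.
theta = np.linspace(0.0, 1.0, 10_001)     # e.g., a triggering probability
d = theta[1] - theta[0]

prior = np.ones_like(theta)                              # flat prior
L_data = theta**3 * (1.0 - theta)**7                     # empirical: 3 of 10
L_model = np.exp(-(theta - 0.25)**2 / (2 * 0.10**2))     # model output
L_expert = np.exp(-(theta - 0.35)**2 / (2 * 0.15**2))    # expert opinion

post = prior * L_data * L_model * L_expert
post /= (post * d).sum()
print("fused posterior mean:", round((theta * post * d).sum(), 3))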
Figure 8. Observed liquefaction features and predicted probability of liquefaction for (a) the Christchurch, NZ, earthquake of 22 Feb 2011 and (b) the Darfield, NZ, earthquake of 3 Sep 2010 (Zhu et al. 2015).
used as direct input to a Seed and Idriss type analysis. However, there are also other sorts of in-
formation: There is the lived experience of local engineers and geologists. There is also regional
geological mapping and analysis based on a multiplicity of factors, as demonstrated in the work
of Baise and her colleagues (Baise and Brankman 2004; Thompson et al. 2011) (Figure 8). Ideal-
ly, one would combine these sources of information into an integrated appraisal of liquefaction
triggering potential. For a specific site these three sources of information are readily combined
by considering their respective likelihoods should liquefaction triggering obtain and combining
these through Bayes' Rule. This has been recommended in the recent Academies report cited
above. Related data fusion work has also been contributed by Wang (2016) and reviewed by Wu
(2011).
CONCLUSION
Most geotechnical professionals behave as if they were Bayesians. This is as it should be: rela-
tive-frequentist theory does not serve the needs of the discipline and should be used only with
caution. However, beyond this behavioral observation, Bayesian thinking can be powerful. It
provides means for drawing strong inferences from weak data. It allows qualitative data to lead
to quantitative conclusions. It allows rational inferences on the probabilities of things which have
yet to be observed. It allows the observational method to be placed on a quantitative foundation.
It allows data of disparate sorts to be combined into coherent probability statements. An un-
derstanding of the reasonably simple mathematics behind Bayesian thinking allows anyone to
make much stronger use of data and observations than would otherwise be the case.
ACKNOWLEDGEMENTS
The author appreciates the efforts of the many people who have contributed to our thinking about
Bayesian reasoning in geotechnical engineering, and specifically of those who have helped with
the current paper. At the risk of forgetting someone important, these include, alphabetically, Luis
Alfaro, Romanas Ascila, Laurie Baise, John T. Christian, Karl M. Dise, Herbert H. Einstein,
Robert B. Gilbert, Anand Govinasamy, Fernando Guerra, Desmond N.D. Hartford, Low Bak
Kong, Suzanne Lacasse, David Margo, W. Allen Marr, Martin W. McCann, Farrokh Nadim,
Robert C. Patev, K.K. Phoon, Timo Schweckendiek, Yu Wang, Jie Zhang, and P. Andy Zielinski
(who does not admit to being Bayesian). Appreciation is also given to anonymous reviewers
whose suggestions improved the manuscript. This list would be incomplete without also recog-
nizing the influence of the late C. Allin Cornell, Charles C. Ladd, and Robert V. Whitman on the
development of these ideas.
REFERENCES
Baddeley, M. C., Curtis, A., and Wood, R. (2004). “An introduction to prior information derived from probabilistic judgements: Elicitation of knowledge, cognitive bias and herding.” Geological Society of London, London, 15–27.
Baecher, G. B., and Christian, J. T. (2003). Reliability and statistics in geotechnical engineering. J. Wiley,
Chichester, West Sussex, England; Hoboken, NJ.
Baecher, G. B., and Christian, J. T. (2008). “Spatial variability and geotechnical reliability,” in Phoon, K.K. (Ed.), Reliability-Based Design in Geotechnical Engineering, Taylor and Francis, Abingdon, UK.
Baecher, G. B., and Christian, J. T. (2013). “Screening Geotechnical Risks.” Foundation Engineering in
the Face of Uncertainty, American Society of Civil Engineers, 215–224.
Baecher, G. B., and Ladd, C. C. (1997). “Formal Observational Approach to Staged Loading.” Transpor-
tation Research Record, 1582, 49–52.
Baise, L. G., and Brankman, C. M. (2004). Liquefaction Hazard Mapping in Boston, Massachusetts: Col-
laborative Research with William Lettis & Associates, Inc., and Tufts University. National Earth-
quake Hazards Reduction Program (U.S.), Tufts University, Medford, 63.
Bernstein, P. L. (1996). Against the Gods: The remarkable story of risk. John Wiley & Sons, New York.
Casagrande, A. (1965). “The role of the ‘calculated risk’ in earthwork and foundation engineering.” Jour-
nal of the Soil Mechanics and Foundations Division, ASCE, Vol. 91(No. SM4), 1–40.
Castanedo, F. (2013). “A Review of Data Fusion Techniques.” The Scientific World Journal, 2013, e704504.
Chen, J., and Gilbert, R. B. (in press). “Offshore Pile System Model Biases and Reliability.” GeoRisk.
Chen, J.-Y., and Gilbert, R. B. (2014). “Insights into the Performance Reliability of Offshore Piles Based
on Experience in Hurricanes.” From Soil Behavior Fundamentals to Innovations in Geotechnical
Engineering, American Society of Civil Engineers, 283–292.
Cox, L. A. (2009). “What’s Wrong with Hazard-Ranking Systems? An Expository Note.” Risk Analysis,
29(7), 940–948.
Dahal, R. K., Hasegawa, S., Nonomura, A., Yamanaka, M., Masuda, T., and Nishino, K. (2008). “GIS-
based weights-of-evidence modelling of rainfall-induced landslides in small catchments for land-
slide susceptibility mapping.” Environmental Geology, 54(2), 311–324.
DeGroot, D. J. (1985). “Maximum likelihood estimation of spatially correlated soil properties.” Massa-
chusetts Institute of Technology, Cambridge.
Ditlevsen, O. (2006). “A story about distributions of dimensions and locations of boulders.” Probabilistic
Engineering Mechanics, 21(1), 9–17.
Dowding, C. H. (Ed.). (1979). Site Characterization and Exploration. ASCE, New York.
Einstein, H. H. (1997). “Landslide Risk - Systematic Approaches to Assessment and Management.” Pro-
ceedings of the International Workshop on Landslide Risk Assessment, Balkema, Honolulu, 25–
50.
Einstein, H. H. (2001). “The Decision Aids for Tunnelling (DAT) -- a brief review.” Review, Korean
Tunnelling Technology, 3(3), 37–49.
Einstein, H. H., Labreche, D. A., Markow, J. J., and Baecher, G. B. (1978). “Decision Analysis Applied
to Rock Exploration.” Engineering Geology, 12(2), 143–161.
Foster, M., Fell, R., and Spannagle, M. (2000). “The statistics of embankment dam failures and acci-
dents.” Canadian Geotechnical Journal, 37(5), 1000–1024.
Good, I. J. (1996). “When batterer becomes murderer.” Nature, 381(6582), 481–481.
Hacking, I. (1975). The Emergence of Probability. Cambridge University Press, Cambridge.
Hacking, I. (2001). An introduction to probability and inductive logic. Cambridge University Press, Cam-
bridge, U.K.; New York.
Huang, J., Kelly, R., Li, D., Zhou, C., and Sloan, S. (2016). “Updating reliability of single piles and pile
groups by load tests.” Computers and Geotechnics, 73, 221–230.
Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press, Cambridge,
UK; New York, NY.
Jeffreys, H. (1998). Theory of probability. Clarendon Press; Oxford University Press, Oxford and New
York.
Juang, C. H., and Zhang, J. (2017). “Bayesian Methods for Geotechnical Applications - A Practical
Guide.” ASCE, Denver.
Kass, R. E., and Raftery, A. E. (1995). “Bayes Factors.” Journal of the American Statistical Association,
90(430), 773–795.
Kaufman, G., and Barouch, E. (1976). “Estimating undiscovered oil and gas.” SIAM Review, 18(4), 812–
812.
Kendall, M. G., and Moran, P. A. P. (1963). Geometrical probability [by] M.G. Kendall and P.A.P. Mo-
ran. C. Griffin, London.
Lacasse, S. (2016). “Hazard, Reliability and Risk Assessment - Research and Practice for Increased Safe-
ty.” Proceedings of the 17th Nordic Geotechnical Meeting Challenges in Nordic Geotechnic, Ice-
landic Geotechnical Society, Reykjavik, 17–42.
Lacasse, S., Høeg, K., Liu, Z. Q., and Nadim, F. (2013). “An homage to Wilson Tang: Reliability and risk in geotechnical practice - How Wilson led the way.” Geotechnical Safety and Risk IV - Proceedings of the 4th International Symposium on Geotechnical Safety and Risk, Hong Kong, 3–26.
Ladd, C., and Foott, R. (1974). “New design procedure for stability of soft clays.” Journal of the Ge-
otechnical Engineering Division, ASCE, 100(7), 763–786.
Lumb, P. (1974). “Application of Statistics in Soil Mechanics.” Soil Mechanics: New Horizons, Newnes-
Butterworth, London, 44–112.
Margo, D., Harkness, A., and Needham, J. (2009). “Levee screening tool.” Proc. USSD Annual Meeting,
Nashville.
Marr, W. A. (2011). “Active risk management in geotechnical engineering.” Proc. GeoRisk 2011, ASCE
Press, Atlanta, 894–903.
McGrayne, S. B. (2012). The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code,
Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controver-
sy. Yale University Press, New Haven.
Morris, P. A. (1974). “Decision analysis expert use.” Management Science, 20(9), 1233–1241.
Noiray, L. (1982). “Predicted and Measured Performance of a Soft Clay Foundation under Stage Load-
ing.” MSc, Massachusetts Institute of Technology, Cambridge.
NRC. (2016). State of the Art and Practice in the Assessment of Earthquake-Induced Soil Liquefaction
and Consequences. National Academies of Science, Washington DC.
O’Hagan, A., and Forster, J. (2004). The Advanced Theory of Statistics, Vol. 2B: Bayesian Inference, 2nd Ed., Wiley, New York.
Peck, R. B. (1969). “Advantages and limitations of the observation method in applied soil mechanics.”
Géotechnique, 19(2), 171–187.
Poulos, H. G., Small, J. C., and Chow, H. (2013). “Foundation Design for High-Rise Tower in Karstic
Ground.” Foundation Engineering in the Face of Uncertainty, American Society of Civil Engi-
neers, 720–731.
Quinn, P. E., Hutchinson, D. J., Diederichs, M. S., and Rowe, R. K. (2010). “Regional-scale landslide
susceptibility mapping using the weights of evidence method: an example applied to linear infra-
structure.” Canadian Geotechnical Journal, 47(8), 905–927.
Schweckendiek, T. (2014). “On Reducing Piping Uncertainties - A Bayesian Decision Approach.” PhD,
Delft University of Technology, Delft.
Schweckendiek, T. (2016). “Personal communication.”
Schweckendiek, T., and Kanning, W. (2016). “Reliability Updating for Slope Stability of Dikes - Approach with Fragility Curves (Background Report).” Report 1230090-033-GEO-0001, Deltares, Delft.
Seed, H. B., and Idriss, I. M. (1971). “Simplified procedure for evaluating soil liquefaction potential.” Journal of the Soil Mechanics and Foundations Division, ASCE, 97(SM9), 1249–1274.
Shannon, C. E., and Weaver, W. (1949). The mathematical theory of communication. University of Illi-
nois Press, Urbana.
Straub, D. (2005). “Natural hazards risk assessment using Bayesian networks.” Rome, Italy.
Straub, D., and Papaioannou, I. (2015). “Bayesian analysis for learning and updating geotechnical param-
eters and models with measurements.” Risk and Reliability in Geotechnical Engineering, CRC
Press, Boca Raton, 221–264.
Tang, W. H. (1987). “Updating anomaly statistics--single anomaly case.” Structural Safety, 4, 151–163.
Tang, W. H., and Halaim, I. (1988). “Updating anomaly statistics--multiple anomaly pieces.” Journal of
Steven G. Vick¹

¹Consulting Geotechnical Engineer, 42 Holmes Gulch Way, Bailey, CO 80421.
Abstract
The purpose of risk assessment for dam safety is to improve it. Three case histories of failure or
near-failure of dams and mine tailings dams that employed various risk-based procedures are
examined to evaluate the influence of these procedures on the outcome. In all three cases, the
operative failure mode was recognized but disregarded. Effective risk management was defeated
by an organizational process known as normalization of deviance whereby departures from
desirable conditions become expected and accepted, imparting a false sense of security and
complacency. Normalization of deviance can be controlled by embedding risk-based thinking
and processes in organizational culture and values.
INTRODUCTION
these serial anomalies are taken to validate the view that they are inconsequential. With this, the
unexpected becomes the expected, which in turn becomes the accepted (Pinto 2014).
Challenger was propelled into orbit by the two solid-fuel rocket boosters (SRBs) shown
in Figure 1a, each fabricated in cylindrical segments. Connecting these segments required that
the joints be sealed to prevent escape of the hot gasses generated by fuel combustion. This was
accomplished with two 12-ft (3.7 m) diameter rubber O-rings, a primary and a secondary for redundancy, plus a sealing compound of zinc chromate putty. Later, during the post-failure investigation, physicist Richard Feynman would famously demonstrate how O-rings lost their resiliency by dipping one in a glass of ice water.
NASA had in place at the time a systematic design process using qualitative Failure
Modes and Effects Analysis (FMEA) and Hazard Analysis (HA) for identifying critical
components. Risk-based procedures continued during operations through a formal process. If a
performance anomaly was encountered in a critical item, it had to be corrected, or otherwise the
risk reduced to as low as reasonably practicable (ALARP) with a documented engineering rationale
for retention. Only then would the item be designated an accepted risk and the shuttle be
approved to fly (Vaughan 1996; Vick 2002). The primary SRB O-rings had been designated a
critical component, but with the redundancy of the secondary O-rings as the rationale for
retention, they were designated an accepted risk.
Figure 1. Space Shuttle Challenger, flight STS 51-L: (a) orbiter with external fuel tank and SRBs on either side; (b) flame from O-ring burn-through on right SRB (arrow); (c) external tank explosion.
complete burn-through of a primary O-ring and damage to the secondary. Although risk was
clearly escalating, the accepted risk designation continued to be retained.
The following flight was by any measure a near miss. Sealing of both a primary and
companion secondary O-ring was delayed, exactly the circumstance that their redundancy was
intended to prevent. Nevertheless, accepted risk continued to be rationalized by this redundancy.
But the question of temperature effects was raised for the first time. The launch had been
preceded by three nights of record-low Florida temperatures. Shuttle components for the most
part had been designed for extreme heat, not cold, and this was something that had never been
fully considered.
Now the accepted risk designation of the SRB joints became the subject of serious
debate. Although the effects of temperature on O-ring resiliency and sealing were intuitively
evident, it was considered extremely unlikely that such cold temperatures would recur. But they
did. And on January 28, 1986, Challenger went down in history. It had never been recognized
that cold temperature was a common-cause failure initiator that would equally affect both the
primary and secondary O-rings. Cold had made redundancy an illusion.
As the prototype for normalization of deviance, the Challenger case-history defines it.
The identified failure mode for O-ring sealing occurred repeatedly but was rationalized and thus
became normal and expected. And when failure finally resulted, it was under conditions that the
reduced performance expectations had not anticipated. Against this backdrop, normalization of
deviance can be seen to contain the following elements:
1. Intended performance is established from design or operating criteria, field experience, or
standard practices.
2. Repeated or sustained deviations from intended performance arise from anomalies,
unexpected events, or adopted modifications. These deviations cause reduced
performance and elevated risk.
3. Over time, reduced performance and increased risk become rationalized, expected, and
accepted as normal, often despite warning signs or near-misses.
4. Reduced performance allows unrecognized events or conditions to trigger failure mode
occurrence, making foreseeable failures unforeseen.
As the following case histories illustrate, normalization of deviance affects geotechnical, as well
as astronautical, failures and the responses to risk that accompanied them.
The Mount Polley tailings dam in central British Columbia failed on August 4, 2014 in a
portion designated the Perimeter Embankment, resulting in the loss of 24.4 Mm³ of tailings and
free water. The failure was determined to be the result of undrained shearing in a localized
deposit of foundation clay that became normally consolidated when the stresses imposed by the
embankment exceeded its preconsolidation pressure (Panel 2015).
As is customary, the Mount Polley tailings dam was constructed in stages to keep pace
with the rising elevation of the tailings behind it. As shown on Figure 2, there were nine such
stages, each incorporating predominantly rockfill-sized mine waste in the downstream shell.
Beginning with the Main Dam followed by its Perimeter and South embankment extensions, the
dam progressed incrementally up the gently-sloping abutments as its height increased to
eventually extend over a total length of 5 km.
Figure 2. Mount Polley raised dam alignment; inset (a): raised dam stages
The Main Dam foundation consisted of glacial till interlayered with a varved silt and clay
unit of glaciolacustrine origin designated GLU. In a crucial interpretation, the GLU was assumed
to be everywhere stiff and overconsolidated such that no load or shear-induced pore pressures
would develop. Corresponding effective-stress analysis (ESA) with a minimum factor of safety
(FS) of 1.3 resulted in downstream dam slopes of 2.0H:1.0V. With this, the design and its
intended performance were predicated on the absence of any softer GLU susceptible to
undrained shearing.
By the time Stage 4 was constructed, the first warning sign appeared in a groundwater
well designated GW96-1 on Figure 2, where softer GLU was encountered. Nevertheless, this
material was dismissed as discontinuous and too far from the dam to affect its stability. In
keeping with this interpretation, a Potential Failure Mode (PFM) assessment identified slope
failure due to weak foundation materials as a failure mode, but the risk was dismissed as
inconsequential.
The Stage 5 raise incorporated two key changes. First, the downstream dam slope was
steepened to 1.4H:1.0V, an exceptionally steep inclination ordinarily reserved for rockfill dams
on sound rock foundations that was rationalized as only temporary. Second, an undrained
strength analysis (USA) for normally-consolidated GLU showed that such materials, if present,
would reduce FS to 1.1. Even so, such a marginal value was accepted despite the reduced
standard of performance and elevated risk it embodied.
Because by now, the absence of any softer GLU had become expected and normal—so
much so that the Perimeter Embankment was raised during the next four stages without any deep
borings within its footprint over its 2 km length. The elevated risk had become accepted and
normal as well, allowing the oversteepened slope to become a permanent, not temporary, fixture.
In the early hours of August 4, 2014 as Raise 9 was being completed, the Perimeter
Embankment failed, releasing tailings and water through the breach shown on Figure 3.
Subsequent investigations showed that a discontinuous deposit of softer GLU with OCR of about
4 had been present beneath the dam as indicated on Figure 2. The stresses imposed on the GLU
as the dam was raised had exceeded the clay’s preconsolidation pressure, and the GLU had
become normally consolidated with OCR=1.0 beneath much of the downstream slope. With this,
its permeability decreased and it became subject to undrained shearing.
The Fundão tailings dam in Minas Gerais, Brazil failed by static liquefaction on
November 5, 2015 with the loss of 32 Mm3 of tailings, 19 lives, and damages, reparations, and
contingent liabilities in excess of $60 billion (BHP 2016).
The Fundão tailings consisted of two separate materials: relatively free-draining silty
sands, and soft, clay-like slimes. The dam was originally conceived as a drained buttress of sand
to retain the slimes behind it, with the two materials physically separated. The central element
was a high-capacity drain at the base of the buttress to eliminate saturation of the loose,
contractive sands. This would eliminate the risk of static liquefaction, the central aspect of the
dam’s intended performance (Pimenta de Ávila 2011). The sand would be hydraulically
deposited behind an initial starter dam, then raised by the upstream method.
No sooner had the starter dam been placed into operation than internal erosion resulting
from construction defects in the base drain produced damage so severe that the original concept
could not be implemented. Instead, upstream raising would continue without the base drain,
resulting in saturation that deviated from the original design premise. As raising progressed,
increasing saturation of the sands, manifested by repeated breakout of seepage on the dam face,
introduced the potential for sand liquefaction (Morgenstern, et al. 2016). But by then, saturation
and the associated liquefaction risk had become an accepted, hence normal, aspect of dam
operation, notwithstanding the adoption of FMEA on a continuing basis (Samarco 2012, 2013,
2014).
Another deviation from intended performance occurred during operation. Instead of
being separated, the sands and slimes were repeatedly allowed to intermingle during deposition,
with the slimes encroaching on the dam crest where exclusively sands were intended.
Yet a third deviation supplied the means by which the first two interacted. A construction
defect in a concrete spillway conduit buried within the dam’s left abutment limited its structural
capacity. As a temporary solution, the dam alignment was set back from the crest until the
conduit could be filled with concrete and removed from service. Instead, this setback, as shown
on Figure 4, was maintained throughout subsequent raising, thus becoming an expected and
normal condition despite a near-miss involving the abrupt appearance of extensive cracking on
the slope.
South Florida’s Lake Okeechobee sits at the crossroads of hurricane tracks from both the
Atlantic and Gulf Coasts. Originally a natural lake, in the 1930s Congress authorized the U.S.
Army Corps of Engineers (USACE) to construct the Herbert Hoover Dike (HHD) around its
entire 140-mile perimeter following storm surges that had caused some 2500 fatalities. Figure 6
shows the dike itself along with satellite imagery of its location with Hurricane Wilma passing
over it.
Figure 6. Herbert Hoover Dike (center). Lake Okeechobee (upper left), eye of Hurricane
Wilma over Lake Okeechobee (upper right).
Constructed with hydraulic fill on a porous limestone foundation, the HHD was never
designed to permanently retain water, so it was not considered a dam. Nevertheless, with
Florida’s rapid growth it was pressed into service in the 1980s as the region’s only major water
reservoir, with some 40,000 people in areas that might be inundated in the event of breach. In
addition to the increased water level from reservoir operation were hurricane storm surges as
high as 25 ft. that produced reservoir oscillations with dangerous reversal of foundation seepage
gradients.
Indications of internal erosion first became evident as early as 1983. In 1986, internal
erosion was recognized as a potential failure mode and highlighted again in 1993. These
assessments were confirmed in 1995 when internal erosion manifested as excessive and cloudy
seepage, sand boils, and sinkholes that nearly caused failure in nine separate areas. These near-
misses were followed in 1998 by similar incidents at both former and new locations, along with
signs of cumulative damage (USACE 1999). By this time, 24 distinct internal erosion
mechanisms had been identified, with a board of geotechnical consultants characterizing the risk
of catastrophic failure as “very serious.” Nevertheless, internal erosion had come to be a normal
and expected effect of hurricanes.
A reliability analysis by USACE the following year yielded an alarmingly high annual
probability of system failure by internal erosion on the order of 0.16 (USACE 1999, Bromwell et
al., 2006). But it was rationalized that the HHD’s original authorization as a navigation project
made no allowance for loss of life, and that economic cost-benefit analysis alone could not
justify major structural modifications. The risk would continue to be accepted, mitigated only by
sending out crews in hurricane conditions over the dike’s 140-mile perimeter to monitor and
sandbag 94 separate problem sites, measures of questionable efficacy (USACE 2005, Bromwell
et al., 2006).
In 2004 and 2005, Florida was struck by five separate hurricanes, one of which was
Hurricane Katrina en route to New Orleans. Following the destruction there, Florida’s governor
authorized a safety review of the HHD that made public the findings of the 1999 reliability
analysis and highlighted the need for structural modifications (Bromwell et al., 2006). At the
same time, USACE responded to Katrina by implementing 12 actions for organizational change,
including cornerstone risk-based practices and communication (USACE 2006). Since then, the
HHD has been reclassified as a dam, and risk-based methods using new USACE tolerable risk
guidelines have been applied (Bowles, et al. 2012). As a result, 21.4 miles of cutoff wall have
been constructed to date with another 6.6 miles to be completed in critical areas (USACE, 2016).
The Herbert Hoover Dike is unique among the preceding case histories in that failure did
not occur, which is attributable at least in some measure to incorporation of risk-informed
processes in USACE organizational values. But this did not occur on the first attempt. The initial
1999 reliability analysis failed to overcome longstanding normalization of deviance. It took an
exceptionally salient external event—Hurricane Katrina and its effects on New Orleans—to turn
deviance in risk acceptance into diligence in risk reduction.
DISCUSSION
The three cases examined here represent but a minuscule sample of dams to which risk-
based methods have been applied, and they do not reflect the undoubtedly much larger
population where these methods did have their intended effect. With these caveats, some
pertinent observations are as follows:
1. Risk-based methods successfully identified the operative failure mode in all three cases:
foundation failure for Mount Polley, static liquefaction for Fundão, and internal erosion
for the Herbert Hoover Dike.
2. The methods spanned a full range of sophistication and quantification, from rudimentary
PFMA for Mount Polley, to qualitative FMEA for Fundão, to quantitative reliability
analysis for the Herbert Hoover Dike. There is no indication that the type of method
employed affected the respective outcomes.
3. The identified risks were not acted upon, allowing failure to occur in two of the three
cases. For Mount Polley, there was insufficient foundation exploration to identify
conditions that led to undrained failure. For Fundão, saturation and the presence of slimes
allowed static liquefaction to occur. For the Herbert Hoover Dike, internal erosion was
eventually mitigated, but only after an external event intervened.
Hence, these outcomes were not attributable to the methods themselves, but to failure to
implement their findings. In all three cases, the operative failure modes were recognized but not
acted upon in ways sufficient to mitigate their risks. In this sense, they represent failures less of
risk assessment than of risk management. The inherent safety objectives of risk-based methods
were defeated by normalization of deviance in the following ways:
1. Repeated deviations from intended performance became accepted as normal. The Mount
Polley dam was raised repeatedly without confirming the intended absence of soft
foundation clay, while accepting the risk associated with an operative FS only slightly
greater than unity. The Fundão dam continued to be raised despite increasing saturation
never anticipated in the original concept for mitigating liquefaction risk. And internal
erosion damage to the Herbert Hoover Dike with each successive hurricane became
routine.
2. Deviations were rationalized. Slope oversteepening for Mount Polley and the alignment
setback for Fundão were rationalized as temporary despite becoming permanent in both
cases. Operation of the Herbert Hoover Dike as a reservoir despite its intended use as a
storm surge barrier was rationalized administratively.
3. Warning signs and near-misses were ignored, including the discovery of nearby soft clay
at Mount Polley, slope cracking at Fundão, and near-failures of the Herbert Hoover Dike.
4. Accepted deviations allowed failure triggers to go unrecognized. At Mount Polley,
absence of soft foundation clay became normal, so the reduction in OCR with increasing
dam height was unforeseen. At Fundão, the alignment setback became normal, so the
effect of slimes beneath the slope was not recognized.
CONCLUSIONS
Although the fundamental justification for risk-based methods in dam safety is to make
dams safer, they may not always achieve this objective. For the case histories examined here, the
problem was not with the methods but with their implementation. And the problem with
implementation was attributable to normalization of deviance. Normalization of deviance within
organizations inhibits risk management by allowing departures from desirable performance to
become expected, hence accepted, thereby imparting a false sense of security and complacency.
Normalization of deviance can be overcome, and diligence in risk management can be achieved,
if its operation and characteristics are recognized and if risk-based processes are embedded in
organizational culture.
REFERENCES
BC (2016). Guidance Document, Health, Safety and Reclamation Code for Mines in British
Columbia, Province of British Columbia, Victoria.
Bea, R. (2006). “Reliability and Human Factors in Geotechnical Engineering.” J. Geotech.
Geoenviron. Eng., 132(5).
BHP (2016). BHP Billiton Annual Report 2016.
Bowles, D., Anderson, L., Glover, T., and Chauhan, S. (1998). “Portfolio Risk Assessment: A
Tool for Dam Safety Risk Management.” Proc. 1998 USCOLD Annual Lecture, Buffalo,
New York, U.S. Society on Dams.
Bowles, D., Chauhan, S., Anderson, L., and Grove, R., (2012). “Baseline Risk Assessment for
Herbert Hoover Dike.” ANCOLD Conference on Dams, Perth, Australian Committee on
Large Dams.
Bromwell, L., Dean, R., and Vick, S. (2006). Report of Expert Review Panel, Technical
Evaluation of Herbert Hoover Dike, Lake Okeechobee, Florida, South Florida Water
Management District, South Palm Beach,
https://my.sfwmd.gov/portal/page/portal/common/newsr/hhd_report.pdf
EU (2009). Reference Document on Best Available Technologies for Management of Tailings
and Waste-Rock in Mining Activities, European Commission, Brussels.
FERC (2016). Risk Informed Decision Making (RIDM) Guidelines for Dam Safety, U.S. Federal
Energy Regulatory Commission, Washington DC.
Kahneman, D. (2011). Thinking, Fast and Slow, Farrar, Straus and Giroux, New York.
MAC (2011). A Guide to the Management of Tailings Facilities, Mining Assn. of Canada, Ottawa.
Morgenstern, N., Vick, S., Viotti, C., and Watts, B. (2016). Report on the Immediate Causes of
the Failure of the Fundão Dam, Fundão Tailings Dam Review Panel,
http://fundaoinvestigation.com/the-report/
Panel (2015). Report on Mount Polley Tailings Storage Facility Breach, Independent Expert
Investigation and Review Panel, Province of British Columbia, Victoria,
https://www.mountpolleyreviewpanel.ca/final-report
Pinto, J. (2014). “Project Management, Governance, and the Normalization of Deviance.” Int. J.
Project Mgmt., 32(3).
Pimenta de Ávila (2011). “The Drained Stacking of Granular Tailings: A Tailings Disposal
Method for a Low Degree of Saturation of the Tailings Mass.” Tailings and Mine Waste
2011, Proceedings of the 15th International Conference on Tailings and Mine Waste.
Abstract
Although our understanding of the influence of ground movements on structural damage has improved, the effect
of inherent spatial variability of soil on the static and liquefaction-induced differential
settlements of structures remains a challenge to the profession. Following a review of pertinent
definitions and previous observations on structure performance in response to differential
movements, the role of inherent variability on differential settlement is discussed. Open
questions regarding the assessment of the effects of spatial variability that remain to be addressed
are identified and treated through an evaluation of the spatial variability of a well-characterized
test site. The role of cone penetration test (CPT) data conditioning on derived random field
model (RFM) parameters used to describe spatial variability is presented, indicating that the
fluctuating component of overburden stress- and fines-corrected cone tip resistance differs
significantly from that of uncorrected cone data for silty soils. Simple calculations of
liquefaction-induced settlement are made for hypothetical frame-type structures on isolated
spread footings that illustrate the role of spatial variability and structure scale on the anticipated
seismically-induced differential movement.
INTRODUCTION
The role of differential foundation movement on structural damage has been widely
recognized in the modern era of civil engineering. Significant observational studies have been
conducted on the tolerable movements of buildings in general (Skempton and MacDonald 1956,
Polshin and Tokar 1957, Grant et al. 1974, Burland and Wroth 1974), and in response to deep
excavation-induced movements (Clough and O’Rourke 1990; Son and Cording 2005) and
tunneling-induced movements (e.g., Boscardin and Cording 1989). Wahls (1981) synthesized
many of the observations reported at the time, and noted intuitively that the structure type
strongly contributed to the ability of the structure to tolerate movements, a view developed in
part based on the previously reported observations. Improved numerical methods developed
since these observational studies have allowed the consideration of inelastic soil-structure
interaction, which has pointed to critical mechanisms in the development of structural damage, such
as crack generation, propagation, and the role of frame-induced confinement in the limitation of
crack propagation (e.g., Son and Cording 2011). Thus, the ability to consider structure stiffness
in response to vertical movements seems sufficient for many applications. Yet, the ability to
predict the vertical structure movements in consideration of inherent spatial variability of the
supporting foundation soils remains insufficient in routine design scenarios.
Sources of statically-induced structure damage stem from spatial variability of foundation
soils, from poor construction practices and materials (both geotechnical and structural), or from
movements imposed by adjacent construction activities (e.g., tunneling, excavation). The former two sources may
be treated as random variables in resistance, whereas the latter represents a more-or-less
deterministic source of loading. The determination of the magnitude of differential settlement of
structures during seismic loading is necessarily and significantly more complex. Sources of
uncertainty in addition to the two previously identified random variables range from the distance
to (perhaps unknown) faults, frequency content, and intensity of the ground motions (e.g.,
Rezaeian et al. 2014), uncertainty in amplification of the ground motion through the near-surface
soils (Stewart et al. 2002; Bazzurro and Cornell 2004), uncertainty in soil response to seismic
loading (i.e., liquefaction vs. cyclic softening; Bray and Sancio 2006, Boulanger and Idriss
2006), and uncertainty in the structural response to base excitation. Accordingly, significantly
more study in the arena of seismically-induced differential settlement is warranted.
A comprehensive treatment of the aforementioned factors represents a significant challenge
to the geotechnical engineering community and partner disciplines, and is beyond the scope of
this paper. Instead, this work addresses a narrow portion of the problem, that is, the effect
of inherent variability of a soil deposit on the static and liquefaction-induced differential
settlements of structures. First, the basic definitions of settlement, tilt, and differential settlement
are introduced as they pertain to mat and isolated spread foundations. The discussion then turns
to a review of the performance of statically- and seismically-displaced structures
with focus on the role of the inherent spatial variability on performance. Open questions
regarding the assessment of the effects of spatial variability that remain to be addressed are
identified and treated through an evaluation of the spatial variability of a well-characterized test
site. The role of cone penetration test (CPT) data conditioning on derived random field model
(RFM) parameters used to describe spatial variability is presented, indicating that the fluctuating
component of overburden stress- and fines-corrected cone tip resistance differs significantly
from that of uncorrected cone data for silty soils. Then, simple calculations of liquefaction-
induced settlement are made for hypothetical frame-type structures on isolated spread footings
that illustrate the role of spatial variability and structure scale on seismically-induced differential
movement. This work shows the possibility that greater damage to structures can be anticipated
during low-intensity shaking than during high-intensity shaking under certain circumstances in
spatially-variable soils.
A brief review of the terminology of differential structure movement follows to aid the
discussion of the impact of inherent soil variability on structure performance. Generally, the
settlement of structures may be discussed in terms of total vertical movement or total settlement,
ρ, and differential movement, δ, both of which may vary considerably across the footprint or
length, L, of a structure (Fig. 1). Total settlements will generally consist of some amount of
uniform settlement, ρuniform, rigid body rotation or tilt settlement, and differential movement.
Rotational movements may be difficult to quantify for framed structures supported on isolated
spread foundations, though easier for continuous footings and mat foundations (Wahls 1994).
Relative rotation (Burland and Wroth 1974), also known as angular distortion (Skempton and
MacDonald 1956), β, has been found to correlate strongly with structural damage. The angular
distortion represents the rotation of a straight line of two points of interest (say, two adjacent
footings) relative to rigid body tilt, and is computed as the difference between the deflection
ratio, δ/l, and the global tilt. In the case of framed structures supported on isolated spread
footings, the angular distortion is commonly computed equal to the deflection ratio, such as in
the study by Polshin and Tokar (1957). Clearly, the scatter apparent in the observational studies
of others includes the uncertainty associated with the inability to accurately identify and quantify
the various contributors of movement, among other factors (e.g., construction practices).
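As a minimal numerical illustration of these definitions (a sketch only; all values are hypothetical), consider two adjacent footings:

    # Hypothetical movements for two adjacent footings (values are illustrative).
    rho_a, rho_b = 25.0, 40.0       # total settlements (mm)
    l = 6000.0                      # horizontal distance between footings (mm)
    tilt = 0.001                    # rigid-body tilt (dimensionless gradient)

    delta = abs(rho_b - rho_a)      # differential settlement (mm)
    deflection_ratio = delta / l    # deflection ratio, delta / l
    beta = deflection_ratio - tilt  # angular distortion relative to the tilt

    print(f"delta = {delta:.0f} mm, delta/l = {deflection_ratio:.4f}, "
          f"beta = {beta:.4f}")

Here δ/l = 0.0025 exceeds 1/500, but after deducting the rigid-body tilt, β ≈ 1/667 does not; the distinction matters when comparing observations against the criteria discussed below.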
values of L/H. Additionally, beam theory allowed the comparison of damage from sagging (i.e.,
concave up) movement profiles to those typically induced by adjacent excavations, which
commonly produce hogging settlement profiles (i.e., concave down). The comparison points to
hogging movements as those that will necessarily produce cracks at smaller angular distortions
than for sagging-type movements (Wahls 1981).
It is useful to compare some of the aforementioned criteria to observations reported by
others and summarized by Burland and Wroth (1974). Equation (1) relates angular distortion to
relative deflection, assuming the model of a simply-supported beam that is deformed in both
bending and shear, with the neutral axis coinciding with the center of the beam and with
maximum rotation at the supports, as a function of the length-to-height ratio of the structure:

    δ/l = (β/3) ⋅ [1 + 3.9⋅(H/L)²] / [1 + 2.6⋅(H/L)²]    (1)

to allow comparison of the recommendation by Skempton and MacDonald (1956) to other
criteria, as shown in Fig. 2.
Figure 2. Comparison of some common criteria for tolerable structure movement to observed
cases assuming εcrit = 0.075%: (a) framed structures, (b) sagging of load bearing walls, and (c)
hogging of load bearing walls (data from Burland and Wroth 1974).
For framed structures (Fig. 2a), Skempton and MacDonald's recommendation for the tolerable
magnitude of angular distortion, β = 1/500, appears to satisfactorily separate the cases of no
damage from slight damage. Burland and Wroth (1974) note that for L/H < 3, the two criteria
produce relatively similar allowable values of relative deflection, whereas for greater
L/H (bending strain dominant) and where data were limited, Burland and Wroth's proposed limits
allow for greater tolerable deformations. Exceedance of the SLS proposed by Skempton and
MacDonald (i.e., β = 1/300) appears to separate the cases of slight and substantial damage as
qualified by Burland and Wroth (1974), who also agreed with the ULS recommendation
proposed by Skempton and MacDonald (1956) for framed structures.
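To illustrate Eq. (1): for a framed structure with L/H = 2 (i.e., H/L = 0.5), δ/l = (β/3)⋅(1 + 3.9⋅0.25)/(1 + 2.6⋅0.25) = (β/3)⋅(1.975/1.650) ≈ 0.40⋅β, so the β = 1/500 limit of Skempton and MacDonald (1956) corresponds to an allowable deflection ratio of roughly 1/1250.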
Load-bearing walls, the subject of Figs. 2b and 2c, tend to exhibit damage due to bending-
induced tensile strains. For sagging load-bearing walls (Fig. 2b), Skempton and MacDonald's
proposed allowable angular rotation limit appears to over-predict the permissible deformations,
whereas the permissible limits set by Polshin and Tokar (1957) and Burland and Wroth (1974)
appear satisfactory. Burland and Wroth's proposed limits differentiate between sagging and
hogging, and this distinction appears to capture the observations of tolerable movements better
than the limits proposed by Polshin and Tokar (1957). Clearly, as has been noted previously,
the type of structure and the mode of deformation are critical when setting tolerable
movements, as is the rate of settlement relative to the rate of construction (Grant et al. 1974;
Wahls 1994).
Inherent spatial variability of soil properties is commonly characterized using
random field theory (RFT; Baecher 1999). For instance, a measurement of some spatially-
varying soil property of interest, g(z), may be separated into a deterministic trend function, t(z),
and a randomly fluctuating component, w(z), as (DeGroot and Baecher 1993, Phoon and
Kulhawy 1999a):
    g(z) = t(z) + w(z) + ε(z)    (2)
where z =depth and ε(z) = measurement error. The spatially-varying soil property of interest is
then characterized by its mean (through the trend function), the variance or COV of the
fluctuating component, and the autocorrelation length known as the scale of fluctuation, the
distance within which soils demonstrate reasonably strong correlation (Vanmarcke 1977; 1984).
The coefficient of inherent variability, COVw, defined as the ratio of the standard
deviation of w(z) to the trend function, t(z), is commonly used in geotechnical engineering
applications (Phoon and Kulhawy 1999).
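A minimal sketch of this decomposition, using synthetic data (all values and names illustrative only, not taken from any cited study), is:

    import numpy as np

    rng = np.random.default_rng(0)
    z = np.arange(2.5, 11.0, 0.02)                    # depth (m), 2 cm intervals
    g = 5.0 + 0.8 * z + rng.normal(0.0, 1.2, z.size)  # synthetic property g(z)

    # Fit and remove a linear trend t(z); w(z) is the detrended residual
    # (measurement error in Eq. (2) is neglected here).
    coeffs = np.polyfit(z, g, deg=1)
    t = np.polyval(coeffs, z)
    w = g - t

    # Coefficient of inherent variability: standard deviation of w(z) over the
    # mean value of the trend function.
    cov_w = np.std(w, ddof=1) / np.mean(t)
    print(f"COV_w = {cov_w:.1%}")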
Quantification of the spatial variability and its impact on observed foundation performance
has been limited, but rather useful. For example, Stuedlein and Holtz (2012) describe the use of a
probabilistic site characterization to mitigate the influence of spatial variability on the
experimental results of full-scale footing loading tests. However, much of the current
understanding regarding the role of spatial variability on static settlements has been developed
using numerical methods. For example, Fenton and Griffiths (2002, 2005) use the random finite
element method (RFEM) to illustrate the role of the spatial variability on the serviceability of
single and pairs of footings. The RFEM approach developed by Fenton and Griffiths (2002,
2005) uses isotropic random fields, with results interpreted through the mean and standard
deviation of the response, perhaps owing to the large computational demands. Some critical
findings that resulted from these studies include: (1) that the variance and covariance of
foundation displacement are governed by the statistical distributions of the local average of soil
stiffness, and (2) a bivariate lognormal distribution that can be readily used in practice accurately
captures the joint probability of exceeding a given differential settlement, pe, given that the
footings are sufficiently far apart to avoid significant stress overlapping. Building on this work,
Ahmed and Soubra (2013) used the subset simulation approach to improve computational
efficiency in the RFEM.
(Figure: tolerable differential settlement, Δ, versus bay length-normalized scale of fluctuation,
δh/L or δv/L.)
Among their findings: given δv, pe is greatest when δh = L. In other words, the relationship between the autocorrelation
length of the governing soil property (i.e., stiffness) and the effective length of the structure (e.g.,
the bay length of a framed building) is most critical. Given that most modes of geologic
deposition result in δv < δh, it may be tentatively generalized that the characterization of δh of
foundation soils will yield the most information regarding the differential settlement
performance of a given structure.
Recent earthquakes, such as the 2011 East Japan earthquake and the 2010-2011 Canterbury
earthquake sequence, have highlighted the tremendous potential for liquefaction-
induced damage to structures, and the limited availability of robust methods by which to
determine the vulnerability of structures to damage (van Ballegooy et al. 2014). More than
25,000 homes experienced liquefaction-related damage in the Tohoku and Kanto districts in
Japan (Yasuda et al. 2012); an estimated $15B in liquefaction-related losses occurred to
structures in the Christchurch region (Cubrinovski et al. 2014). Critical post-earthquake
reconnaissance efforts often attribute variability in structure performance to the inherent spatial
variability of the liquefiable soils and overlying crust thickness and quality (e.g., Cubrinovski et
al. 2011). Broadly, differential seismic movements of shallow foundations may be separated into
those developed in response to volumetric strains, deviatoric strains, or a combination of both.
Tokimatsu et al. (1994) describe the performance of generally two- to four-story structures
supported on shallow foundations following the 1990 Mw 7.8 Luzon earthquake. Observations
pointed to the relationship between number of stories (and hence loading) and liquefaction-
induced settlements, which often produced structure tilting of more than one degree (about 1/57).
Many of these structures were founded on uniform clean sands of large thicknesses; however, the
role of neighboring structures (and their imposed shear strains) on structure movement appeared
to play some role in performance. Following the 1999 Mw 7.6 Kocaeli earthquake, observations
in Adapazari by Sancio et al. (2004) pointed to the role of near-surface, thin, and spatially
variable deposits of silt and silty sand on poor building performance, where damaging
magnitudes of shear strains are likely to be generated. Here, differential movements varied
proportionally with the building's height-to-width ratio and corresponding bearing pressure.
Centrifuge studies reported by Dashti et al. (2009) highlight the various mechanisms responsible
for liquefaction-induced movements, reinforcing the role of near-surface shearing on structure
movements. Recent observations in Christchurch have confirmed the role of variable, near
surface liquefaction-susceptible soils on shear-induced movements, with significant magnitudes
of ejecta playing an important role in the foundation movements (Bray et al. 2014, 2016).
These observations have generally focused on Sources 1 and 2 of spatial variability, that is,
the spatial extent of a liquefaction-susceptible layer, rather than the inherent variability within
the layer (i.e., Source 3). Studies by Popescu et al. (1997), Fenton and Vanmarcke (1998),
Popescu et al. (2005), Baker and Faber (2008), and Montgomery and Boulanger (2016) have
investigated the impact of inherent spatial variability of liquefiable soils. Numerical approaches
have again served as the basis for these investigations: Fenton and Vanmarcke (1998) explored
the roles of local versus global liquefaction as a function of spatial variability, whereas Popescu et
al. (2005), and Montgomery and Boulanger (2016) have used multi-dimensional probabilistic
finite element analyses and varying standard and cone penetration test (SPT and CPT,
respectively) resistances to explore the role of spatial variability on the consequences following
liquefaction (Popescu et al. 2005, Montgomery and Boulanger 2016), allowing for some general
and practical recommendations to be formalized. Presently, the outlook for probabilistic
Ballegooy et al. 2014). A significant challenge remains in identifying the potential for damaging
liquefaction-induced differential settlements in cases where a liquefiable soil of relatively
uniform thickness is distributed across a site. The key to addressing this particular concern lies in
connecting the inherent autocorrelation of spatially-varying soil properties with the underlying
geological process(es). Owing to its reliability and resolution, the CPT represents the preferred
in-situ test for studying the role of spatial variability (DeGroot and Baecher 1993, Fenton 1999a,
Cafaro and Cherubini 2002, Uzielli et al. 2005, 2007; Stuedlein 2011, Stuedlein et al. 2012a), at
least in soils suitable for evaluation with the CPT, although some level of ground truth (sampling
and testing) will always be necessary. While the variation in vertical inherent variability has been
studied extensively, there is significantly less information available regarding the range in
magnitudes of horizontal inherent variability and how these ranges vary between possible
geological formations (e.g., fluvial, estuarine, etc.).
The need for “typical” horizontal scale of fluctuation and coefficients of inherent variability,
δh and COVw,h, respectively, remains strong nearly two decades following the aggregation of
RFM parameters by Phoon and Kulhawy (1999), owing to the difficulty in accumulating a
sufficient number of explorations to perform statistically valid analyses. The lack of “typical”
horizontal autocorrelation data is of great concern, particularly in light of the numerical studies
of differential settlements by Fenton and Griffiths (2002, 2005) and Ahmed and Soubra (2013)
described previously. Except when explorations can be performed horizontally, such as the
horizontal CPTs reported by Jaksa (1995), horizontal RFM parameters cannot achieve the same
resolution of vertical RFM parameters. Thus, there is significant uncertainty as to what “typical”
and “deterministic” RFM parameters should be used in forward probabilistic modeling such as
those analyses described by Popescu et al. (2005), Fenton and Griffiths (2008), Stuedlein et al.
(2012b) and Montgomery and Boulanger (2016), let alone whether deterministic RFM
parameters are sufficient for the description of a random field. In regard to the relatively lower
resolution of horizontal RFM parameters, another key question that must be addressed is the
assessment of the impact of statistical uncertainty on the horizontal RFM parameters derived.
Additionally, there is little guidance as to the role of cone data conditioning, such as normalizing
the measured cone stress to one atmosphere of pressure or correcting for fines content, FC, on
the change in RFM parameters that represent a given site of interest. Thus, other outstanding
questions that need to be addressed include the effect of conditioning on the
autocorrelation length and the magnitude of inherent variability. The remainder of this paper will
attempt to address some of these questions using a recently acquired dataset of liquefaction-
susceptible silty beach sands and illustrate some possible outcomes of spatial variability on
liquefaction-induced differential settlement.
The investigation into the issues identified above is facilitated through the selection of a
suitable framework for estimating liquefaction triggering. While numerical analyses serve such a
goal well, there is value in evaluating those factors influencing differential settlement through
simplified and established methods on actual in-situ data. Accordingly, the deterministic
framework proposed by Idriss and Boulanger (2008), as updated by Boulanger and Idriss (2015),
is used herein. This framework has been widely adopted, and is derived from the simplified
method put forth by Seed and Idriss (1971), whereby the cyclic stress ratio (CSR), equal to
approximately 65% of the vertical effective stress-normalized cyclic shear stress, is compared
against the cyclic resistance of soil to determine a factor of safety against liquefaction triggering,
FSL. This framework implements the normalized, corrected, cone tip resistance, qc1N:
    qc1N = CN ⋅ (qc / Pa)    (3)
where qc = qt = the pore-pressure corrected cone tip resistance (where pore pressure is
measured), Pa = atmospheric pressure, and CN = the overburden stress correction factor, which
serves to correct qc to represent that expected at one atmosphere of effective overburden stress.
Boulanger and Idriss (2014, 2015) describe the iterative calculation required for overburden
stress correction with CN.
Tokimatsu and Yoshimi (1983), Seed et al. (1985), and others have recognized the role of FC
in liquefaction triggering. The Boulanger and Idriss (2015) triggering procedures propose a
correction of qc1N to account for silty fines using the clean sand equivalent cone penetration
resistance, qc1Ncs. This correction, qc1Ncs = qc1N + Δqc1N, increases the overburden stress-corrected
cone tip resistance as a nonlinear function of FC and qc1N. As noted above, not much is known
regarding the impact of the overburden stress and clean sand corrections on the spatial variability
of cone penetration resistance.
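To make the two corrections concrete, a minimal Python sketch of the iterative calculation is given below. The expressions for the stress exponent m, the cap on CN, and Δqc1N are the writer's recollection of the Boulanger and Idriss (2014) procedure and are assumptions to be verified against that source; all input values are hypothetical.

    import math

    PA = 101.325  # atmospheric pressure (kPa)

    def cone_corrections(qc, sigma_v_eff, fc, tol=1e-6, max_iter=100):
        """Iterate the overburden correction and clean-sand adjustment.

        qc: cone tip resistance (kPa); sigma_v_eff: vertical effective
        stress (kPa); fc: fines content (%). Expressions assumed from
        Boulanger and Idriss (2014); verify before use.
        """
        q_cs = 100.0                                # initial guess for qc1Ncs
        for _ in range(max_iter):
            m = 1.338 - 0.249 * min(max(q_cs, 21.0), 254.0) ** 0.264
            cn = min((PA / sigma_v_eff) ** m, 1.7)  # overburden correction CN
            q1 = cn * qc / PA                       # Eq. (3): qc1N
            dq = (11.9 + q1 / 14.6) * math.exp(
                1.63 - 9.7 / (fc + 2.0) - (15.7 / (fc + 2.0)) ** 2)
            q_cs_new = q1 + dq                      # qc1Ncs = qc1N + dqc1N
            if abs(q_cs_new - q_cs) < tol:
                break
            q_cs = q_cs_new
        return q1, q_cs_new

    qc1n, qc1ncs = cone_corrections(qc=5000.0, sigma_v_eff=80.0, fc=20.0)
    print(f"qc1N = {qc1n:.1f}, qc1Ncs = {qc1ncs:.1f}")

The fixed point converges in a few iterations because m varies weakly with qc1Ncs.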
The magnitude of shaking-induced shear strain and post-shaking volumetric strain associated
with the reconsolidation of partially- and fully-liquefied soils has been correlated to the
normalized cyclic shear stress (Tokimatsu and Seed 1987) and FSL (Ishihara and Yoshimine
1992). Although the shear strain accumulated during undrained shear appears to correlate better
to the reconsolidation volumetric strain, εv,max (Sento et al. 2004), the use of the maximum, single
amplitude shear strain γmax proposed by Ishihara and Yoshimine (1992) has gained wide
acceptance in practice. Post-shaking settlements may be computed with γmax calculated using the
procedure reported by Yoshimine et al. (2005) and summarized briefly in the following:
1. Compute γmax:

    γmax = 3.5 ⋅ (2 − FSL) ⋅ (1 − Fult) / (FSL − Fult)  if Fult < FSL < 2.0    (4)
where γmax = 0 if FSL ≥ 2.0 and is unbounded if FSL ≤ Fult, and where the limiting factor of safety, Fult, is computed as a function of the initial relative density, Dr,ini (Eq. (5)).
2. Compute εv,max (in percent) as a function of the initial relative density, Dr,ini (in percent), and γmax:

    εv,max = 12 ⋅ exp(−0.025 ⋅ Dr,ini)  if γmax > 8%    (6)
3. Sum the products of incremental or discretized soil thickness (e.g., 20 mm) and εv,max to
produce the settlement.
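Under the stated assumptions, the procedure can be sketched in Python as follows. The layer values are hypothetical, Fult is taken as an input because Eq. (5) is not reproduced above, and only the γmax > 8% branch of Eq. (6) given in the text is implemented:

    import math

    def gamma_max(fs_l, f_ult):
        """Maximum shear strain (%) per Eq. (4) and its limiting conditions."""
        if fs_l >= 2.0:
            return 0.0
        if fs_l <= f_ult:
            return float("inf")  # unbounded; a limiting strain applies in practice
        return 3.5 * (2.0 - fs_l) * (1.0 - f_ult) / (fs_l - f_ult)

    def ev_max(dr_ini):
        """Post-shaking volumetric strain (%) per Eq. (6); Dr,ini in percent."""
        return 12.0 * math.exp(-0.025 * dr_ini)

    # Hypothetical discretized profile: (thickness (m), FS_L, F_ult, Dr_ini (%)).
    # F_ult would be computed from Dr,ini via Eq. (5), not reproduced here.
    layers = [(0.02, 0.5, 0.3, 45.0)] * 200 + [(0.02, 1.4, 0.3, 60.0)] * 225

    settlement = 0.0
    for dz, fs_l, f_ult, dr in layers:
        g = gamma_max(fs_l, f_ult)
        ev = ev_max(dr) if g > 8.0 else 0.0  # only the branch given in the text
        settlement += dz * ev / 100.0        # strain in % to decimal
    print(f"computed settlement = {settlement * 1000:.0f} mm")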
This procedure was developed using data derived for clean Toyoura sand reported by Nagase
and Ishihara (1988); thus, some error is necessarily introduced when the procedure is applied to more
compressible silty sands (e.g., Bandini and Sathiskumar 2009). However, these procedures are
widely applied to silty sands in practice (e.g., Bray et al. 2014). Separately, Eqs. (5) and (6)
require an estimate of relative density. For the site used and described subsequently, Gianella
(2015) determined that the CPT-based relative density correlation proposed by Mayne (2007)
provided the best estimate of Dr,ini for the test site. Accordingly, this correlation was used with
the condition 10% ≤ Dr,ini ≤ 100% to limit unrealistic estimates of Dr,ini. The FC of the silty sands
in this study was estimated using the site-specific CPT-based correlation presented by Stuedlein
et al. (2016), rather than the global correlation presented by Boulanger and Idriss (2015).
(Figure 4: plan of the initial test site explorations, showing CPT soundings (P-series), borings
B-1 through B-11, and Section A-A’; distances in m.)
A well-characterized test site located in Hollywood, SC, is used to illustrate the role of
spatial variability on liquefaction-induced differential settlement. The site has been described in
detail with regard to a densification program reported by Stuedlein et al. (2016), the effect of
pile installation sequence and spacing on driving and geotechnical resistance described by
Stuedlein and Gianella (2016), and a controlled blasting program reported by Gianella and
Stuedlein (2017). Figure 4 presents the initial exploration plan, which consisted of 25 CPTs
(about one per 3 m2), five downhole shear wave velocity (Vs) tests, and five mud-rotary borings;
Fig. 5 shows the subsurface stratigraphy, including an indication of the spatial variability in qt.
A particularly good feature of
this site with regard to the investigations performed to-date is the relative consistency in layer
thicknesses across the site, allowing focus on the effects of Source 3 in spatial soil variability.
The general stratigraphy consists of a 2 m thick layer of loose to medium dense silty and clayey
sand (SM and SC) fill overlying 9.5 m of loose to medium dense, clean to silty fine sand (SP and
SM). Below this potentially liquefiable soil unit lies several non-liquefiable strata including a
layer of soft to medium stiff clay approximately 1 m thick, underlain by a 1.5 m thick deposit of
dense sand. The groundwater table varied seasonally, but could be as shallow as 2.15 m below
the ground surface.
(Figure 5: subsurface stratigraphy versus distance and depth; loose to medium dense, poorly-
graded SAND (SP) with lenses of silty SAND (SM) between about 6 and 10 m depth.)
The influence of soil deposition and subsequent alteration on spatial variability implies that
some portions of Eq. (2) will be site-specific for a given geologic process, such as the trend
function t(z), whereas the fluctuating component w(z) may be characteristic of the geologic
process. Accordingly, subtraction of trend functions from in-situ measurements is common for
development of characteristic RFM parameters, which: (1) require stationary data, and (2) can be
paired with the site-specific trend function of any site, which may be affected by local
groundwater conditions, overconsolidation, aging, or other factors. The liquefiable layer for the
test site ranges between about 2.5 and 11 m depth below grade (Fig. 5), and therefore the soil from
these depths is considered in the development of RFM parameters. For brevity, the reader is
referred to the specific procedures outlined by Bong and Stuedlein (2017b) for the assessment of
stationarity, linear trend removal, and generation of vertical and horizontal RFM parameters.
Vertical and horizontal RFM parameters were determined for the liquefiable beach sands,
and the effect of conditioning for liquefaction triggering was evaluated. Since vertical cone data
are generally abundant (collected at 2 cm intervals in this study), a number of possible
approaches may be used to determine the deposit's vertical autocorrelation characteristics. Four
autocorrelation models were fit to the 25 sets of detrended residuals of vertical qt, qc1N, and
qc1Ncs, and the corresponding vertical scale of fluctuation, δv, was determined using typical
goodness-of-fit metrics.
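One such approach can be sketched as follows (illustrative only: the study fit four candidate models and selected among them, whereas this sketch fits a single exponential model to synthetic residuals with a known scale of fluctuation):

    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic detrended residuals generated as an AR(1) (Markov) process;
    # names and values are illustrative only.
    rng = np.random.default_rng(1)
    dz, delta_true = 0.02, 0.5                 # sampling interval, true delta_v (m)
    a = np.exp(-2.0 * dz / delta_true)         # AR(1) coefficient
    w = np.zeros(426)
    for i in range(1, w.size):
        w[i] = a * w[i - 1] + rng.normal(0.0, np.sqrt(1.0 - a * a))

    def sample_acf(x, max_lag):
        """Sample autocorrelation of a detrended record at lags 0..max_lag."""
        x = x - x.mean()
        c0 = x @ x / x.size
        return np.array([x[: x.size - k] @ x[k:] / (x.size * c0)
                         for k in range(max_lag + 1)])

    tau = np.arange(101) * dz                  # lag distances (m)
    rho_hat = sample_acf(w, 100)

    # Fit the single-exponential model rho(tau) = exp(-2|tau|/delta), where
    # delta is the scale of fluctuation (Vanmarcke 1977).
    (delta_v,), _ = curve_fit(lambda t, d: np.exp(-2.0 * t / d),
                              tau, rho_hat, p0=[0.3])
    print(f"fitted delta_v = {delta_v * 1000:.0f} mm")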
Horizontal cone data are necessarily less abundant than vertical data; as such, approximate methods
must be used to generate RFM parameters. A suitable approximate approach for the development
of horizontal RFM parameters used by Stuedlein et al. (2012a) and Bong and Stuedlein (2017b)
begins by developing profiles using projection-to-section. Two “sections” or profiles of qt, qc1N,
and qc1Ncs from the depths of 2.5 to 11.0 m were developed at 2 cm increments (2 x 426), running
north-south to the east and west of Section A-A’ (Fig. 4). CPT soundings along Section A-A’
were projected onto both the east and west profiles in order to improve the resolution of the RFM
parameters (Bong and Stuedlein 2017b). This approach allowed the incorporation of fifteen
measured qt and computed qc1N and qc1Ncs per elevation, to which linear trends were fitted for the
determination of δh and COVw,h. The expeditive method was used to determine δh, which sets the
scale of fluctuation equal to eight-tenths of the average distance between trend function crossings
(Vanmarcke 1977).
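A sketch of the expeditive method under this definition follows (the correlated record and the 0.25 m spacing are generic stand-ins, not the projected profiles of the study):

    import numpy as np

    def expeditive_delta(w, spacing):
        """Scale of fluctuation as 0.8 x the mean distance between trend-line
        (zero) crossings of the detrended record (Vanmarcke 1977)."""
        s = np.sign(w)
        idx = np.nonzero(s[:-1] * s[1:] < 0)[0]  # samples just before a crossing
        return (0.8 * np.mean(np.diff(idx)) * spacing
                if idx.size > 1 else float("nan"))

    rng = np.random.default_rng(2)
    w = np.convolve(rng.normal(size=500), np.ones(25) / 25, mode="valid")
    print(f"delta_h ~ {expeditive_delta(w, spacing=0.25):.2f} m")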
The main questions being addressed here pertain to the effect of conditioning (i.e.,
overburden stress- and FC-correction) on the magnitude of RFM parameters and the magnitude
of statistical uncertainty in the RFM parameters. For example, how reliable are the derived scales
of fluctuation and coefficients of inherent variability given the 25 vertical and 852 horizontal
individual records? Figure 6 presents the cumulative distribution functions (CDFs) for the
vertical and horizontal RFM parameters δv and δh, and COVw,v and COVw,h, for qt, qc1N, and
qc1Ncs. Although some differences at the tails in the CDFs are observed for the scales of
fluctuation δv and δh, the distributions of magnitudes do not differ significantly. The median and
COV in δv ranges from 470 (for qt and qc1Ncs) to 500 mm (qc1N) and 17 to 29% (Fig. 6a). The
median δh equals 3.18 and 3.23 m, 3.11 and 3.12 m, and 2.87 and 2.94 m in the east and west
profiles, respectively, for qt, qc1N and qc1Ncs (Fig. 6c). Thus, overburden stress and clean sand
corrections resulted in no reduction for δv, and an approximate 10% reduction in the median δh.
Likewise, the COV in δh reduced from 30.5 and 30.6% for the east and west profiles of qt to 21.5
and 21.7% for qc1Ncs, respectively. Thus, overburden stress and clean sand correction appears to
slightly reduce the correlation length and its variability from elevation to elevation in these beach
sands.
The changes to the scale of fluctuation as a result of conditioning appear minor. However,
the CDFs for COVw,v(qc1Ncs) and COVw,h(qc1Ncs) shown in Fig. 6b and 6d depart dramatically from
those for qt and qc1N. The fluctuations in qt in the compressible interbedded sands and silty sands
reflect the higher variability in these zones (e.g., Figure 5), and the conditioning of qt to one
atmosphere of pressure served to reduce the median COVw,v in qc1N by only about 3% in the
vertical direction, an essentially negligible change. However, the correction for silty fines
produced a significant reduction in the median COVw,v(qc1Ncs), which is equal to about 21%.
Generally, the average and COV of COVw,v in qt for these liquefaction-susceptible
silty sands were determined equal to approximately 45.1 and 9.8%, compared to 21.7 and 30.1%
for qc1Ncs, respectively, indicating that COVw,v reduced substantially across the cone records while its
variability increased relative to qt and qc1N. The CDFs for COVw,h of qt, qc1N and qc1Ncs indicate
similar findings: little change in the CDFs are noted between qt and qc1N, whereas the CDFs for
qc1Ncs represent significant reductions in variability. The median COVw,h for qt and qc1Ncs are 34.4
and 34.3%, and 14.8 and 17.0%, for the east and west sections. Accordingly, clean sand
correction produces significant reductions in the ranges of the fluctuating component of cone
penetration resistance in interbedded silty sands, with correspondingly significant implications for the
selection of representative values in stochastic simulations. As shown in Fig. 7, the reduction in
the vertical and horizontal COVw,v in qc1Ncs is directly associated with the inherent variability in
FC, a focus of work by Bong and Stuedlein (2017b). Accordingly, selection of RFM parameters
for qc1Ncs must consider the variability in FC for a given soil layer.
Figure 6. Cumulative distribution functions for the scale of fluctuation and coefficient of
inherent variability in the (a, b) vertical and (c, d) horizontal directions.
It was discussed previously that most probabilistic geotechnical analyses conducted to-date
have implemented “deterministic” or invariant RFM parameters. Presentation of RFM
parameters here and by others (e.g., Fenton 1999, Stuedlein 2011, Stuedlein et al. 2012a) clearly
indicate some variability in the RFM parameters. Some of the variability must be attributed to
statistical uncertainty, particularly when derived using sparse data, as in the case of the
horizontal random field parameters. However, statistical uncertainty cannot explain the majority
of the variation in δv and COVw,v since the measurement intervals and errors are small. In other
words, there is no significant evidence that the variation in δv and COVw,v is suspect, and so the
variation in δh and COVw,h cannot be purely attributed to statistical uncertainty. Thus, there is
evidence that probabilistic geotechnical analyses should consider random fields that represent
the RFM parameters as random variables. Unfortunately, the effect of sampling from
distributions of RFM parameters on geotechnical performance has not yet been investigated;
such efforts are encouraged.
Figure 7. Variation in net vertical and horizontal coefficients of inherent variability for qc1N and
qc1Ncs with the standard deviation of inherent variability in fines content.
A geostatistical model of qc1Ncs for the test site volume was generated using the derived RFM
parameters following the procedures used for kriging FC outlined by Bong and Stuedlein
(2017b) to investigate the characteristics of differential settlement that may be expected at the
test site in its unimproved condition. Differential settlements were actually measured at the site,
albeit following a densification program (Stuedlein et al. 2016) and in response to controlled
blasting (Gianella and Stuedlein 2017). The plan dimensions of the kriging grid were 3.05 m by
25.93 m, which corresponds to the largest dimensions encapsulating the CPTs (Fig. 4); an
approximately square 0.25 m grid spacing was used for all kriging interpolations. Kriging then
proceeded using variograms calibrated for each of the 426 elevations where CPT measurements
were available, and for the east section discussed previously. For each variogram, the range was
computed as a function of the depth-dependent δh whereas the depth-dependent sill, C, was set
equal to the variance corresponding to the computed COVw,h. In all cases, the nugget was set to
zero (i.e., C0 = 0). Following kriging, FSL and εv were computed at each grid point for M = 7.5
(only) and PGA in the range of 0.10 to 0.40g, and the subsequent vertical settlement integrated
over the depths of 2.5 to 11 m.
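As an illustrative sketch of the interpolation step (not the software used in the study; all names and input values are hypothetical), an ordinary kriging estimate with an exponential variogram, zero nugget, and range and sill tied to δh and the variance of the fluctuating component can be written as:

    import numpy as np

    def ok_estimate(xy_obs, z_obs, xy_tgt, sill, a_range):
        """Ordinary kriging with an exponential variogram and zero nugget.

        gamma(h) = sill * (1 - exp(-3h / a_range)); per the text, the range
        may be tied to the depth-dependent delta_h and the sill to the
        variance implied by COV_w,h. All inputs here are hypothetical.
        """
        gamma = lambda h: sill * (1.0 - np.exp(-3.0 * h / a_range))
        d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
        n = len(z_obs)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[n, n] = 0.0                    # Lagrange-multiplier row/column
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy_obs - xy_tgt, axis=1))
        lam = np.linalg.solve(A, b)[:n]  # kriging weights (sum to 1)
        return float(lam @ z_obs)

    xy = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])  # CPTs (m)
    q = np.array([85.0, 92.0, 78.0, 88.0])                           # qc1Ncs
    print(ok_estimate(xy, q, np.array([1.5, 1.5]), sill=120.0, a_range=3.0))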
Kriging throughout the 663 m3 volume of the liquefiable layer allowed the cutting of sections
in any azimuthal direction to reveal the impact of spatial variability of cone penetration
resistance on FSL, εv, and the resulting total settlements. Figure 8 presents typical results for
these performance metrics for the cases of PGA = 0.13 and 0.25g. The spatial distributions of
FSL, which are plotted in the range of 0 to 2.0, indicate that the case of harder shaking produces
uniform contours across the north-south section at a given depth. On the other hand, the
distribution of FSL for PGA = 0.13g indicates both depth- and distance-dependence, with
= 0.13g, compared to 191 mm2 for the case of PGA = 0.25g. It may be expected that the
differences in variance will have significant implications for the magnitude of differential
settlement.
Figure 8. Factor of safety against liquefaction triggering, FSL, for MW = 7.5 and the East
Section through the kriged volume for: PGA = 0.13g (top), and PGA = 0.25g (bottom).
Figure 9. Post-shaking volumetric strain, εv, for MW = 7.5 and the East Section through the
kriged volume for: PGA = 0.13g (top), and PGA = 0.25g (bottom).
Figure 10. Plan view showing distribution of post-shaking settlement computed for PGA =
0.13g (top), and PGA = 0.25g (bottom).
site given the 2.5 m thick, unsaturated crust overlying the liquefiable layer and the lack of ejecta
noted during liquefaction induced by controlled blasting as described by Gianella and Stuedlein
(2017). Consider a structural bay 6 m wide, with a concentric column and with the southeast
corner of the south footing located at coordinate [0 m, 0 m] of the kriging grid: the average
settlement would be computed over the domain traced by [0 m, 0 m] to [0.75 m, 0 m] to [0.75 m,
0.75 m] to [0 m, 0.75 m]. The average settlement for the corresponding north footing would be
computed over the domain traced by [6 m, 0 m] to [6.75 m, 0 m] to [6.75 m, 0.75 m] to [6 m,
0.75 m]; differences between the average settlements are then used to compute the angular
distortion over the assumed bay width. Such calculations were made for each kriging grid point
(spaced approximately 0.25 m in each direction) and for the common bay widths of 3, 6, and 9 m
in order to draw observations on relationships between the horizontal scale of fluctuation of cone
penetration resistance and damage from liquefaction-induced differential settlement. This
procedure provided 900, 780, and 660 unique bay positions across the kriging domain for bay
widths of 3, 6, and 9 m, respectively.
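A sketch of this bay calculation is given below (illustrative only: the settlement field here is a random stand-in rather than the kriged field of the study, and rigid-body tilt is neglected). Footing-average settlements are formed as moving averages over footing-sized windows, differenced at the bay spacing, and divided by the bay width:

    import numpy as np
    from scipy.signal import convolve2d

    def bay_angular_distortion(settlement, dx, bay_width, footing=0.75):
        """Angular distortion for hypothetical bays on a settlement grid.

        settlement: 2D settlement field (m) on a square grid of spacing dx (m).
        Footing-average settlements are differenced over the bay width along
        the first axis; rigid-body tilt is neglected in this sketch.
        """
        n = int(round(footing / dx))             # grid cells per footing side
        m = int(round(bay_width / dx))           # grid cells per bay width
        kernel = np.ones((n, n)) / n**2
        avg = convolve2d(settlement, kernel, mode="valid")  # footing averages
        diff = np.abs(avg[m:, :] - avg[:-m, :])  # differential settlement
        return diff / bay_width                  # angular distortion

    rng = np.random.default_rng(3)
    field = 0.03 + 0.01 * rng.random((105, 13))  # hypothetical settlements (m)
    beta = bay_angular_distortion(field, dx=0.25, bay_width=3.0)
    print(f"{(beta > 1/500).mean():.1%} of bay positions exceed beta = 1/500")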
Figure 11 presents the computed differential settlements and angular distortions possible over
the kriging domain for a 3 m bay width for PGAs of 0.13 and 0.25g. Regardless of the magnitude
of shaking assumed, several trends may be observed: (1) the magnitude of differential settlement
varies widely depending on the location of the hypothetical bay structure, (2) the magnitude of
angular distortion varies over several orders of magnitude, and (3) the differential settlement and
angular distortion profiles exhibit distinct periodicity. Clearly, differential settlements are larger
for the case of PGA = 0.13g than for PGA = 0.25g, as anticipated from the spatial
distribution of settlements in Figure 10. Differential settlements as large as 80 mm are computed
for PGA = 0.13g (Figure 11a), whereas those computed for PGA = 0.25g are
generally 30 mm or less, with a maximum of about 50 mm. The periodicity in angular
distortion does not appear to vary significantly in the east-west direction, with consistent
minimum magnitudes separated by approximately 3 m, similar to the 3 m bay width considered.
This distance is consistent with the median horizontal scale of fluctuation in qc1NCs (Figure 6c)
determined for this site, indicating a link between the scale of mechanical soil resistance and the
governing scale of the structure (i.e., the bay length).
In order to further explore the possible link between the spatial variability of the soil and
structure performance, the differential settlements for hypothetical bay locations were computed
for PGAs ranging from 0.10 to 0.40g and compared to the three angular distortion limits
described previously (i.e., 1/500, 1/300, and 1/150). The results, shown in Figure 12 as a function
of the percentage of hypothetical bays investigated, allow several broad observations: (1) the
percentage of bays exceeding a specified limit of angular distortion decreases with increases in
the bay width, (2) the percentage of bays exceeding a specified angular distortion limit generally
exhibits a peak near the lower range of PGA considered, and generally decreases with increases
in PGA, and (3) in many instances for the largest (i.e., 9 m) bay width, no structural damage
could be expected at the highest level of shaking, whereas a considerable number of hypothetical
cases across all bay widths could experience structural damage for lower levels of shaking. In
nearly all cases, it is the hypothetical 3 m bay width that exhibits the most instances that exceed a
given angular distortion limit. These results are consistent with the findings reported by Ahmed
and Soubra (2014) for the static response of footing pairs in spatially-variable soil.
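The sweep over PGA and bay width can then be tallied as below, reusing bay_distortion from the sketch above. Because the kriged settlement surfaces are not reproduced here, a white-noise stand-in field with invented statistics (and no spatial correlation) is substituted purely to exercise the tally:

    import numpy as np

    rng = np.random.default_rng(7)
    dx, b = 0.25, 0.75
    limits = {"1/500": 1 / 500, "1/300": 1 / 300, "1/150": 1 / 150}

    for pga in (0.13, 0.25, 0.40):
        # Stand-in settlement grid (mm): mean and spread shrink with PGA
        # only to mimic the trend reported above; values are invented.
        mu = 60.0 * 0.25 / pga
        s = np.clip(rng.normal(mu, 0.2 * mu, size=(48, 104)), 0.0, None)
        for bay in (3.0, 6.0, 9.0):
            span = round((bay + b) / dx)
            betas = np.array([bay_distortion(s, dx, i * dx, j * dx, bay, b)[1]
                              for i in range(s.shape[0] - span)
                              for j in range(s.shape[1] - round(b / dx))])
            pct = {k: 100.0 * np.mean(betas > v) for k, v in limits.items()}
            print(pga, bay, pct)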
Figure 11. Variation in differential settlement for 3 m bay supported on 0.75 m square footings
for (a) PGA = 0.13g, and (b) PGA = 0.25g, and angular distortion for (c) PGA = 0.13g, and
(d) PGA = 0.25g.
CONCLUDING REMARKS
The response of structures to ground movements, whether from static compression or post-
shaking reconsolidation of partially- or fully-liquefied soils, remains a challenge to the
geotechnical profession. A review of recommendations for limiting movements associated
with allowable, serviceability, and ultimate limit states suggests that they are sufficient when the
proposed methods are applied correctly. However, a critical challenge remains in the identification of the effect of
spatial variability on possible structure movements. While the variability of the thickness and
spatial extent of a given soil deposit may be sufficiently characterized by careful planning and
Figure 12. Variation in the percentage of bays exceeding a given limit on angular distortion
with PGA for: (a) 3 m bay width, (b) 6 m bay width, (c) 9 m bay width, (d) angular distortion
of 1/500, (e) angular distortion of 1/300, (f) angular distortion of 1/150.
Efforts to improve our understanding of the spatial variability of soil and its connection to the
underlying geological processes, and of the effect of this variability on structural response to
differential movement, continue to be warranted.
REFERENCES
Ahmed A., Soubra A.-H. (2014). “Probabilistic analysis at the serviceability limit state of two
neighboring strip footings resting on a spatially random soil,” Structural Safety, Vol. 49, pp. 2 – 9.
Baecher, G.B. (1999) Discussion of “Inaccuracies Associated with Estimation of Random Measurement
Errors,” J. of Geotech. Geoenv. Engrg., Vol. 125, No. 1, pp. 79-81.
Baecher, G.B., and Christian, J.T., (2003) Reliability and Statistics in Geotechnical Engineering, John
Wiley and Sons, London and New York.
Baker, J.W. and Faber, M.H. (2008) “Liquefaction Risk Assessment using Geostatistics to account for
Soil Spatial Variability,” J. Geot. Geoenv. Eng., 134(1), pp. 14-23.
Bandini, P., and Sathiskumar, S. (2009). “Effects of silt content and void ratio on the saturated hydraulic
conductivity and compressibility of sand-silt mixtures.” J. of Geotech. and Geoenv. Engrg., 135(12),
pp. 1976-1980.
Bazzurro, P. and Cornell, C.A. (2004). “Ground-Motion Amplification in Nonlinear Soil Sites with
Uncertain Properties,” Bulletin of the Seismological Society of America, 94(6), pp. 2090-2109.
Bong, T. and Stuedlein, A.W. (2017a). “CPT-based Random Field Model Parameters for Liquefiable
Silty Sands,” Proceedings, GeoRisk 2017, Geotechnical Special Publication, ASCE, Reston, VA., 10
pp.
Bong, T. and Stuedlein, A.W. (2017b). “Spatial Variability of CPT Parameters and Silty Fines in
Liquefiable Beach Sands,” J. of Geotech. Geoenv. Engrg., In final review.
Boscardin, M.D. and Cording, E.J. (1989). “Building Response to Excavation-induced Settlement,” J.
Geot. Eng., 115(1), pp. 1-21.
Boulanger, R.W., Mejia, L.H., and Idriss, I.M. (1997). “Liquefaction at Moss Landing during Loma
Prieta Earthquake,” J. Geotech. Geoenv. Engrg., 123(7), pp. 453-467.
Boulanger, R.W. and Idriss, I.M. (2006). “Liquefaction Susceptibility Criteria for Silts and Clays,” J. of
Geotech. Geoenv. Engrg., 132(11), pp. 1413-1426.
Boulanger, R.W., and Idriss, I M. (2014). CPT and SPT based liquefaction triggering procedures. Report
UCD/CGM-14/01, Department of Civil and Environmental Engineering, University of California,
Davis, CA, 138 pp.
Boulanger, R. W., and Idriss, I. M. (2015). “CPT-based liquefaction triggering procedure,” J. Geotech.
Geoenviron. Eng., 04015065.
Bray, J.D. and Dashti, S. (2010). “Liquefaction-induced movements of buildings with shallow
foundations,” Fifth Inter. Conf. on Rec. Adv. in Geo. EQ Engrg. & Soil Dyn., Missouri University of
Science and Technology, Rolla, Missouri, 19 pp.
Bray, J.D. and Sancio, R.B. (2006). “Assessment of the Liquefaction Susceptibility of Fine-Grained
Soils,” J. of Geotech. Geoenv. Engrg., 132(9), pp. 1165-1177.
Bray, J., Cubrinovski, M., Zupan, J., and Taylor, T. (2014). “Liquefaction Effects on Buildings in the
Central Business District of Christchurch,” Earthquake Spectra, 30(1), pp. 85-109.
Bray, J., Markham, C.S., and Cubrinovski, M. (2016). “Liquefaction Assessments at Shallow Foundation
Building Sites in the Central Business District of Christchurch, NZ,” Soil Dynamics and Earthquake
Engineering, 92, pp. 153-164.
Cafaro, F. and Cherubini, C. (2002) “Large Sample Spacing in Evaluation of Vertical Strength Variability
of Clayey Soil,” J. of Geotech. and Geoenv. Engrg., Vol. 128, No. 7, pp. 558-568.
Clough, G.W. and O’Rourke, T.D. (1990). “Construction induced Movements of in situ Walls,” Design
and Performance of Earth Retaining Structures, GSP No. 25, ASCE, New York, NY, pp. 439-470.
Cubrinovski, M., Bradley, B., Wotherspoon, L., Green, R., Bray, J., Wood, C., Pender, M., Allen, J.,
Bradshaw, A., Rix, G., Taylor, M., Robinson, K., Henderson, D., Giorgini, S., Ma, K., Winkley, A.,
Zupan, J., O’Rourke, T., DePascale, G., and Wells, D. (2011). “Geotechnical aspects of the 22
February 2011 Christchurch earthquake,” Bulletin of the New Zealand Society of Earthquake
Engineering, 44, pp. 205–226.
Cubrinovski, M., Taylor, M., Henderson, D., Winkley, A., Haskell, J., Bradley, B. A., Hughes, M.,
Wotherspoon, L., Bray, J., O’Rourke, T. (2014). “Key factors in the liquefaction-induced damage to
buildings and infrastructure in Christchurch: Preliminary findings,” Proc. 2014 New Zealand Society
for Earthquake Engineering, New Zealand Society for Earthquake Engineering, Wellington, New
Zealand, Paper No. O78.
Dashti, S., Bray, J.D., Pestana, J.M., Riemer, M., and Wilson, D. (2009) “Mechanisms of Seismically
Induced Settlement of Buildings with Shallow Foundations on Liquefiable Soil,” J. of Geotech. and
Geoenv. Engrg., 136(1), pp. 151-164.
DeGroot, D.J. and Baecher, G.B. (1993) “Estimating Autocovariance of In Situ Soil Properties,” J. of
Geotech. Geoenv. Engrg., Vol. 119, No. 1, pp. 147-166.
Fenton, G.A., and Vanmarcke, E. H. (1998). “Spatial variation in liquefaction risk.” Geotechnique, 48(6),
pp. 819–831.
Fenton, G.A. (1999a) “Random Field Modeling of CPT Data,” J. of Geotech. Geoenv. Engrg., ASCE,
Vol. 125, No. 6, pp. 486-498.
Fenton G.A., Griffiths D.V. (2002). “Probabilistic foundation settlement on a spatially random soil,” J. of
Geotech. Geoenv. Engrg., 128(5), pp. 381–90.
Fenton G.A., Griffiths D.V. (2005) “Three-dimensional Probabilistic Foundation Settlement,” J. of
Geotech. Geoenv. Engrg., 131(2), pp. 232–239.
Fenton, G.A. and Griffiths, D.V. (2008). Risk Assessment in Geotechnical Engineering, John Wiley &
Sons, Inc., New York, NY, 461 pp.
Gianella, T.N. and Stuedlein, A.W. (2017) “Performance of Driven Displacement Pile-Improved Ground
in Controlled Blasting Field Tests,” Journal of Geotechnical and Geoenvironmental Engineering, In
Re-Review.
Griffiths, D.V., Fenton, G.A., and Manoharan, N. (2002) “Bearing Capacity of Rough Rigid Footing on
Cohesive Soil: Probabilistic Study,” Journal of Geotechnical and Geoenvironmental Engineering,
ASCE, Vol. 128, No. 9. pp. 743-755.
Idriss, I.M., and Boulanger, R.W. (2008). Soil liquefaction during earthquakes, Earthquake Engineering
Research Institute, Oakland, CA.
Ishihara, K., and Yoshimine, M. (1992). “Evaluation of settlements in sand deposits following liquefaction
during earthquakes,” Soils and Foundations, 32(1), pp. 173–188.
Jaksa, M.B. (1995) “The Influence of Spatial Variability on the Geotechnical Design Properties of a Stiff,
Overconsolidated Clay,” Ph.D. Thesis, Faculty of Engineering, University of Adelaide.
Lacasse, S., and Nadim, F. (1996) “Uncertainties in characterising soil properties,” Uncertainty in the
geologic environment: from theory to practice, GSP No. 58, ASCE, Reston, Va., pp. 49–75.
Lumb, P. (1975). “Spatial variability of soil properties.” Proc., 2nd Int. Conf. on Application of Statistics
and Probability in Soil and Structural Engineering, Aachen, Germany, pp. 397–421.
Mayne, P.W. (2007). NCHRP Synthesis 368: Cone penetration testing. Transportation Research Board,
Washington, DC.
Montgomery, J. and Boulanger, R.W. (2016). “Effects of Spatial Variability on Liquefaction-Induced
Settlement and Lateral Spreading,” J. Geotech. Geoenviron. Eng., ASCE, 04016086.
Nagase, H. and Ishihara, K. (1988). “Liquefaction-induced compaction and settlement of sand during
earthquakes.” Soils and Foundations, 28(1), pp. 66–76.
Phoon, K.K. & Kulhawy, F.H. (1999) “Characterization of Geotechnical Variability,” Canadian
Geotechnical Journal., Vol. 36, No. 4, pp. 612-624.
Polshin, D.E. and Tokar, R.A. (1957). “Maximum Allowable Non-uniform Settlement of Structures,”
Proc. 4th ICSMFE, London, Vol. 1, pp. 402-406.
Popescu, R., Prevost, J. H., and Deodatis, G. (1997). “Effects of spatial variability on soil liquefaction:
Some design recommendations.” Geotechnique, 47(5), pp. 1019–1036.
Popescu, R., Prevost, J. H., and Deodatis, G. (2005). “3D effects in seismic liquefaction of stochastically
variable soil deposits.” Geotechnique, 55(1), pp. 21–31.
Rezaeian, S., Petersen, M.D., Moschetti, M.P., Powers, P., Harmsen, S.C., and Frankel, A.D. (2014).
“Implementation of NGA-West2 Ground Motion Models in the 2014 U.S. National Seismic Hazard
Maps,” Earthquake Spectra, 30(3), pp. 1319-1333.
Sancio, R., Bray, J. D., Durgunoglu, T., and Onalp, A. (2004). “Performance of buildings over
liquefiable ground in Adapazari, Turkey,” Proc., 13th World Conf. on Earthquake Engineering,
Vancouver, Canada, Canadian Association for Earthquake Engineering, Paper No. 935.
Seed, H.B., and Idriss, I.M. (1971). “Simplified procedure for evaluating soil liquefaction potential.” J.
Geotech. Engrg. Div., ASCE, 97(9), pp. 1249–1273.
Skempton, A.W. and MacDonald, D.H. (1956). “Allowable Settlement of Buildings,” Proc. Inst. Civ.
Eng., Part III, Vol. 5, pp. 727-768.
Son, M. and Cording, E. (2005). "Estimation of Building Damage Due to Excavation-Induced Ground
Movements." J. of Geotech. Geoenv. Engrg., 131(2), pp. 162-177.
Son, M. and Cording, E. (2011). “Responses of Buildings with Different Structural Types to Excavation-
Induced Ground Settlements,” J. of Geotech. Geoenv. Engrg., 137(4), pp. 323-333.
Stewart, J.P., Chiou, S.-J., Bray, J.D., Graves, R.W., Somerville, P.G, and Abrahamson, N.A. (2002).
“Ground motion evaluation procedures for performance-based design,” Soil Dynamics and
Earthquake Engineering, 22(9-12), pp. 765-772
Stuedlein, A.W. (2011). “Random Field Model Parameters for Columbia River Silt,” Proceedings,
1 Arup, 13 Fitzroy St., London W1T 4BQ, U.K. E-mail: [email protected]
Abstract
The Eurocodes generally use a partial factor approach as a means of regulating safety levels, and
for geotechnical design a range of alternative formulations is allowed, including some that are
similar to LRFD. Although the possible use of more direct reliability methods was
acknowledged, their application in practice has been limited to a few examples of verifying that
particular sets of partial factors are appropriate for use.
One significant advantage of partial factor methods, compared with former global factors,
was that safety margins could more effectively reflect the known uncertainties of leading
parameters in calculations. These are usually material strengths, or resistances of structural
members or bodies of ground, and actions (the European word for loads) or the effects of actions
within structural members or the ground. Generally, expert judgement was used to relate factor
values to parameter uncertainties in an approximate way, supported by calibration of calculation
results against past experience.
When failures occur it is often not because of unexpectedly severe values of the known
lead variables, but rather because an unforeseen event or action has taken place. Sometimes
these relate to “human errors” by designers or constructors. It is therefore considered that
structures should be designed to accommodate events and actions whose nature may be
unforeseen, up to a magnitude that is acceptable to society. This issue, termed robustness, also
affects the selection of appropriate values for partial factors and is arguably more difficult to
accommodate in reliability calculations.
INTRODUCTION
The Eurocodes have been under development since the 1970s for the design of buildings and
civil engineering structures. Their purpose was to facilitate trade between nations, involving
both collaboration and competition between engineering designers, with the aim of producing
safe and serviceable structures. Early in the 1980s it was recognized that a geotechnical code
was needed in the suite, so that foundations, retaining structures and other geotechnical
structures could be included. Eurocode 7 (EC7) – Geotechnical design – was eventually released
in 2004 and published in each nation (eg BSI 2013, incorporating later amendments).
The Eurocodes generally use a partial factor approach as a means of regulating safety
levels, and for geotechnical design a range of alternative formulations is allowed, termed Design
Approaches, including some that are similar to LRFD. Although the possible use of more direct
reliability methods was acknowledged, their application in practice has been limited to a few
examples of verifying that particular sets of partial factors are appropriate for use.
During 2016, a series of Working Groups has been set up jointly by ISSMGE technical
committees TC205 (Safety and Serviceability in Geotechnical Design) and TC304 (Engineering
Practice of Risk Assessment and Management). Their goal is to report on applications of
reliability theory in geotechnical design, with particular reference to EC7. The author has been
convener of a group considering the issue of “Robustness” in geotechnical design, a topic that
has featured prominently in recent discussions about revisions to the code.
In this paper, the way safety provisions are specified in EC7 will be described. Likely
future developments will be noted, and the significance of reliability methods for EC7 will be
discussed. The issue of robustness will receive particular attention.
The term “partial factor methods” will be taken to include all safety formats in which
factors of safety are spread among several variables. The variables include: actions (loads);
effects of actions such as internal forces derived in calculations; material strengths; and
resistances of structural components (such as bending capacity) or of bodies of ground (such as
bearing resistance). Thus all the “Design Approaches” of EC7 and all LRFD formats are
included as “partial factor methods”. Some of the partial factors may be “model factors”.
The Eurocodes adopt a limit state approach, directing attention to two types of limit states:
Ultimate Limit States (ULS), broadly representing states of danger or major economic loss, and
Serviceability Limit States (SLS), representing states of inconvenience, disappointment with
performance or more modest economic loss. Ideally, calculations are carried out by setting up
design situations involving extreme, or sufficiently severe, sets of design values of variables, for
which it is shown that the limit states would not be exceeded, though they might just be reached.
In most cases, the design values are derived by applying partial factors to characteristic values of
the variables. On the loading side, load combination factors are also used; they will not be
discussed here.
Generally, for serviceability limit state calculations (of deformations, for example) the
partial factors are all 1.0, so characteristic values are used directly as design values for SLS. To
make ultimate limit states far more unlikely to occur, design values for ULS are generally
derived by applying factors >1.0 to characteristic values. In Eurocodes, unlike American codes,
factors are applied to material strengths and resistances by dividing by values >1.
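As a minimal numerical illustration of this convention, consider a DA3-style case in which action and material factors apply simultaneously; the characteristic values below are invented, and the factor values are the Table 1 defaults:

    import math

    # Actions are multiplied by factors >= 1; strengths are divided by them.
    gamma_G, gamma_Q, gamma_phi = 1.35, 1.5, 1.25

    G_k, Q_k = 400.0, 150.0   # kN, assumed characteristic actions
    phi_k = 32.0              # deg, assumed characteristic shearing resistance

    E_d = gamma_G * G_k + gamma_Q * Q_k   # design value of the action effect
    phi_d = math.degrees(math.atan(math.tan(math.radians(phi_k)) / gamma_phi))
    print(E_d, round(phi_d, 1))           # 765.0 kN and about 26.6 deg

Note that the strength factor is applied to tan φ', not to φ' itself, consistent with Table 1 below.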
effects, etc. While wanting to maintain a similar level of improbability, the drafters of Eurocode
7 wanted to ensure that characteristic values represented realistic behavior of the ground, as far
as it could be assessed. They therefore considered that this approach is not suitable for
geotechnics for several reasons: (a) the number of test results available is often quite small; (b)
the test results often include several different means of testing of variable reliability, many of
them being indirect, relying on correlations of a test result to the parameter required for
calculation; (c) many of the test results do not directly represent behavior in the ground, but need
adjustments for time effects, anisotropy etc; (d) it is important to incorporate knowledge from
publications and experience as well as more immediately available test results; (e) if the
characteristic value is to represent the behavior of the ground, allowance must be made for the
extent of the zone of ground involved in a possible limit state event.
The drafters of EC7 concluded that an element of engineering judgement is required in
the assessment of characteristic values, which they defined as “a cautious estimate of the value
affecting the occurrence of the limit state”. This definition was supplemented by the statement
that “if statistical methods are used, the characteristic value should be derived such that the
calculated probability of a worse value governing the occurrence of the limit state under
consideration is not greater than 5%.” Simpson and Hocombe (2010) suggested that in many
situations this definition is fairly similar to the “conservatively assessed mean” sometimes used
in American practice.
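A sketch of the statistical reading of this definition, on invented data. Whether the 5% rule is applied to the mean or to a local value depends on the extent of the zone of ground involved in the limit state, as noted in point (e) above:

    import numpy as np
    from scipy import stats

    su = np.array([42.0, 55.0, 47.0, 60.0, 38.0, 51.0])  # kPa, assumed tests
    n, m, s = len(su), su.mean(), su.std(ddof=1)
    t95 = stats.t.ppf(0.95, n - 1)

    x_k_mean = m - t95 * s / np.sqrt(n)           # 95% lower bound on the mean
    x_k_local = m - t95 * s * np.sqrt(1 + 1 / n)  # ~5% fractile, local value
    print(round(x_k_mean, 1), round(x_k_local, 1))

The first value, which governs when a large volume of ground mobilises its average strength, sits well above the second and is indeed close to a conservatively assessed mean.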
Table 1. Default values for the partial factors in EC7 (the values can be varied nationally).

                                            DA1                      DA2     DA3
                                            Comb 1  Comb 2  Piles
  Actions   Permanent, unfav                1.35                     1.35    1.35
            Permanent, fav
            Variable, unfav                 1.5     1.3     1.3      1.5     1.5/1.3*
  Soil      tan φ'                                  1.25                     1.25
            Effective cohesion                      1.25                     1.25
            Undrained strength                      1.4                      1.4
            Unconfined strength                     1.4                      1.4
            Weight density
  Spread    Bearing                                                  1.4
  footings  Sliding                                                  1.1
  Driven    Base                                            1.3      1.1
  piles     Shaft (compression)                             1.3      1.1
            Total/combined (compression)                    1.3      1.1
            Shaft in tension                1.25            1.6      1.15    1.1
Note: Values of all other factors are 1.0. Further resistance factors are provided for other types of piles, anchors etc.
* 1.5 for structural loads; 1.3 for loads derived from the ground.
Design Approaches. Agreement was not achieved among the code drafters, representing all the
European nations, about which parameters are to be factored. As a result, three “Design
Approaches” (DA1, DA2, DA3) are allowed by EC7, as summarized in Table 1. DA1 requires
two separate calculations using two “combinations” of factors; the design has to accommodate
both combinations in which factors are applied, broadly, to actions or material strength, though
factors are applied to resistances for piles and ground anchors. In DA2, factors are applied
simultaneously to actions and resistances, which include bearing resistance, passive earth forces,
as well as pile and anchor capacities; DA2 is fairly similar to LRFD approaches. In DA3, factors
are applied to actions and material strengths simultaneously. In all cases, the factors on actions
are sometimes applied to action effects such as structural bending moments or other resultant
forces within the equilibrium calculations.
In Europe, each nation has to choose which Design Approach to adopt, and the values of the
factors can be varied nationally. Each of the approaches has advantages and disadvantages.
The purpose and application of partial factors. The purpose of the partial factors is not
directly stated in the main text of the Eurocodes. However, Annex C of the head code, EN 1990
(eg BSI 2005), provides an approach using reliability analysis, in which the purpose of the partial
factors is seen to be to allow for extreme variations of the factored parameters. In practice, the
values adopted for partial factors in the Eurocodes have been chosen by judgement of
uncertainties combined with calibration against successful past practice, with very little use of
reliability methods. Because the values are strongly influenced by correlation of design results
to past practice, they probably have some part to play in limiting displacements and in providing
overall robustness, a topic discussed later in this paper.
To the extent that the partial factors are intended to provide safety in case of extreme
values of the factored parameters, the author contends that it is appropriate to apply them to
those parameters for which there is measured data giving some indication of their uncertainty. In
geotechnical engineering, the main parameters that are frequently measured are the strength of
the ground, expressed as undrained strength or angle of shearing resistance, and the capacities of
piles and anchors. It is relatively unusual to measure the capacities of spread foundations, and
measurement of passive earth resistance or active forces is almost unknown. It therefore seems
logical to apply partial factors to material strengths in calculations for spread foundations, slopes
and retaining structures, and to the capacities of piles and anchors. Simpson et al (2009) note
that bearing capacity and earth resistance both increase in a non-linear over-proportional way
with angle of shearing resistance, so it is safer to factor strength at source than to factor these
derived resistances; this is consistent with the principles in EN1990, where they are expressed
specifically for actions.
Reliability discrimination – Consequences of failure. EC7 says that the values of partial
factors should be increased in cases of abnormal risk or unusual or exceptionally difficult ground
or loading conditions. They may be reduced for temporary structures or transient design
situations, where the likely consequences justify it. The code gives these general indications but
no more specific guidelines or requirements about the magnitudes of adjustments to factors.
Pros and cons. The advantages claimed for DA1 are: (a) it provides a consistent method across
a wide range of problems; (b) it can readily be used with numerical analysis; (c) by requiring
checks against two sets of factors, a more consistent level of reliability can be achieved across a
wide range of problems. The main disadvantage of DA1 is that it requires slightly more work on
the part of the designer, though in practice three considerations minimise this problem: (a) it is
frequently the case that the critical combination is obvious by inspection; (b) Combination 1 can
often be derived from a serviceability limit state calculation, which is required by all the design
approaches; (c) most computations are carried out by computer and there is very little difficulty
in running a second case, if it is needed. Furthermore, many designers already carry out repeated
calculations for multiple load combinations, and it is argued that with modern computing
facilities this is not a significant drawback. Some applications of DA1, including comparisons
with designs to AASHTO (2008), are given by Simpson and Hocombe (2010).
More consistent reliability. One of the aims of design is to achieve roughly constant
reliabilities irrespective of how actions, strengths and resistances combine in particular situations.
In Annex C of EN1990, reliability is represented by the target reliability index β. EN1990
discusses how the values of partial factors might be selected in order to achieve this, proposing
that factors could be applied simultaneously to actions and strengths (or action effects and
resistances). In effect it proposes that the action effects for ULS design should be 0.7β standard
deviations from their characteristic values, and the margin on resistances should be 0.8β. But it
places an important limit on this approach: it is only applicable if the ratio of the standard
deviations of the action effect and resistance, σE/σR, lies within the range 0.16 to 7.6. The
implication of this is that a different approach is to be used if the uncertainty of one of the
variables – actions or resistances – is much more important to the design than is the other one.
For such a situation, the margin on the more critical variable is required to be 1.0β, with a lower
margin, 0.4β, on the less critical variable.
The result of this approach is shown in Figure 1, in which the reliability achieved (in terms of
number of standard deviations of the design point from the mean) is plotted against the ratio of
the standard deviations expressed as σE/(σE+σR). The result is normalised by dividing by the
required reliability, β standard deviations, so that the desired value is 1.0. Over the range in
which both σE and σR are of similar, significant magnitude, the result is reasonably close to the
desired value. However, as either σE or σR becomes small compared to the other, the
reliability achieved drops substantially, indicating an unsafe design with inadequate reliability.
This explains why EN1990 limits the range of applicability of the approach to σE/σR = 0.16 to
7.6.
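The origin of this limit can be reproduced in a few lines. Assuming independent, normally distributed action effect and resistance, the margin needed for reliability β is β·sqrt(σE² + σR²), while the (0.7, 0.8) rule supplies (0.7σE + 0.8σR)·β; the sketch below tabulates their ratio (a restatement of the argument, not a calculation taken from EN1990):

    import numpy as np

    def safety_ratio(sE, sR, aE=0.7, aR=0.8):
        # Margin delivered by the fixed-alpha rule over the margin required
        # for independent normal action effect E and resistance R.
        return (aE * sE + aR * sR) / np.hypot(sE, sR)

    r = np.linspace(0.05, 0.95, 10)   # r = sigma_E / (sigma_E + sigma_R)
    print(np.round(safety_ratio(r, 1.0 - r), 2))
    # ~1.06 when sigma_E ~ sigma_R, falling toward 0.7 or 0.8 at the extremes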
Figure 1. Reliability achieved using (0.7, 0.8) combination for α.
Figure 2. Reliability for some typical geotechnical situations.
Figure 2 shows that in geotechnical design it is important to consider the full range of σE/σR
values. Conventional foundations may have σE and σR of similar magnitude, but other situations
are dominated by either σE or σR. For example, in slope stability problems there is often very
little uncertainty about the loading and uncertainty of soil strength is dominant, as shown by the
fact that factors of safety are normally applied to soil strength. At the other extreme, designs for
foundations of tall towers may have loading as the dominant uncertainty. In geotechnical design,
these problems often occur together, so the approach adopted must be able to accommodate the
full range of σE/σR.
Figure 3. Safety ratio achieved using two combinations: (αE = -0.4, αR = 1.0) and
(αE = -1.0, αR = 0.4).
Figure 3 shows the result in terms of reliability of an approach using two “combinations” in
which the margin on the more critical variable is required to be 1.0β, with 0.4β on the less
critical variable. Much greater consistency is achieved, with none of the resulting values falling
substantially lower than required (ie 1.0).
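Continuing the same sketch (safety_ratio and r as defined above), the two-combination rule is checked by taking the more onerous of the two cases at each ratio of standard deviations:

    def safety_ratio_two(sE, sR):
        # DA1-style dual check: 1.0*beta on one variable with 0.4*beta on the
        # other, both ways round; the governing design is the worse of the two.
        return np.maximum(safety_ratio(sE, sR, aE=0.4, aR=1.0),
                          safety_ratio(sE, sR, aE=1.0, aR=0.4))

    print(np.round(safety_ratio_two(r, 1.0 - r), 2))
    # never falls materially below 1.0 (minimum ~0.99 near sigma_E ~ sigma_R)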
The benefit of the use of two combinations is that a very wide range of design situations can
be covered without change in the design approach. In common with other design approaches, the
factors used in DA1 have not been deduced by probabilistic calculation. Nevertheless, they do
reflect the principles propounded in EN1990 and the lessons which may be learnt by considering
a probabilistic framework.
Although the concept of “combinations” is relatively new to geotechnics, it is familiar to
structural engineers who frequently design for several combinations of actions. The background
to DA1 is essentially the same as that of combinations of actions, giving a severe value to the
lead variable in combination with less severe values of other variables, but in DA1 the method is
extended to include resistances or material strengths, as suggested by EN1990. The fundamental
principle of DA1 is that “All designs must comply with both combinations in all respects, both
geotechnical and structural”. Here “design” means “that which will be built”.
A programme for the revision of EC7 started in 2010, with the intention of publishing a new
version in about 2020, together with the full suite of revised Eurocodes. The scope of EC7 will
be extended to include ground improvement and reinforced ground. A likely feature of this will
be an attempt to combine the Design Approaches, preserving the most important advantages of
each of them, while making the code simpler and less confusing by reducing the number of
options.
The new generation of the Eurocodes is likely to place more emphasis on reliability
discrimination. This relates to consequences of failure and probably also to the perceived
difficulty of the ground conditions. The new version is likely to relate these to requirements for
supervision, review and checking and also to provide modification factors applied to the standard
partial factors.
It is also possible that the values of partial factors may be varied in relation to the
perceived level of difficulty or hazard related to the ground conditions, that is, how unreliable the
ground conditions are thought to be, which is partly dependent on the extent and quality of
ground investigation. However, this is already accommodated to some extent in the definition of
characteristic values of material strengths, as discussed above, so the final format of any change
is unclear at present.
Another likely development is that a more statistical approach could be adopted, at least
as an option, for the derivation of characteristic values of material strengths from test results.
Use of reliability methods is not mentioned in EC7, although the possible use of statistical
methods for derivation of characteristic values is included, as noted above. To the author’s
knowledge, there has been no discussion in the code development committees about inclusion of
reliability methods in the revision of EC7, either as a basis for deriving alternative partial factor
values or for direct use by designers. The Working Groups of ISSMGE TC205 and TC304 were
set up to examine the possibility of further use of reliability methods in EC7.
In the development of modern codes and standards for design, attention is often focused
on the uncertainty of the most obviously dominant variables such as loads, material strengths and
resistances of structural or geotechnical elements. A fear has often been expressed, however,
that concentration on these alone might lead to design of structures that lack overall
“robustness”, particularly in cases where the dominant variables have very little uncertainty.
This issue has received considerable attention in the development of standards for structural
design.
The author therefore agreed to be convener of a Working Group set up to examine how
sufficient robustness can be ensured in geotechnical designs, using reliability analysis or other
safety formats. The members of the group first set out to define the term robustness in a relevant
way and then exchanged emails and papers to develop an understanding of how it can be
provided in geotechnical design and codes of practice. It is intended that reports on the findings
of the Working Groups will be presented during 2017; this paper provides a summary of the
author’s own understanding and opinions.
It became clear that the term robustness could be used with several different meanings, as
will be noted below. This paper is concerned with only one of these: the ability of the final
design to accommodate events and actions that were not foreseen or consciously included in
design.
DEFINITIONS OF ROBUSTNESS
to the designer, their magnitude can be considered: society expects that a construction will be
able to withstand moderate unforeseen events and actions, but probably not extremely severe
ones. A design that produces such a construction can be termed a “robust design”.
A concise definition is given by ISO 2394 (ISO 2014), which equates robustness to
“damage insensitivity”:
Ability of a structure to withstand adverse and unforeseen events (like fire, explosion, impact)
or consequences of human errors without being damaged to an extent disproportionate to the
original cause (ISO 2394:2014, §2.1.46).
An alternative definition, with the same basic meaning, could help designers to
understand the degree of robustness required:
Ability of a structure to withstand adverse events that are unforeseen but of a magnitude such
that society will expect that our designs can accommodate them, having tolerance against
mistakes within the design process and during construction.
Issues of progressive failure and resilience are also relevant to consideration of
(dis)proportionality of effects, so they are considered briefly here.
Local damage and progressive failure. The term robustness is often applied to a complete
structure rather than to an individual element of it. For example, CEN (2016) Practical
definition of structural robustness, gives a definition of structural robustness:
Structural robustness is an attribute of a structural concept, which characterizes its ability to
limit the follow-up indirect consequences caused by the direct damages (component damages
and failures) associated with identifiable or unspecified hazard events (which include
deviations from original design assumptions and human errors), to a level that is not
disproportionate when compared to the direct consequences these events cause in isolation.
Robustness is often linked to the ability to prevent progressive failure, which could lead
to damage disproportionate to cause (eg COST (2011) Structural robustness design for
practising engineers). This is considered to be consistent with strict limit state definitions in
which ultimate limit state (ULS) is a state of danger, but as a practical design expedient ULS is
often considered as only localised failure, not necessarily dangerous in itself. EN 1990 3.3(3) is
relevant to this: “States prior to structural collapse, which, for simplicity, are considered in place
of the collapse itself, may be treated as ultimate limit states.”
Val (2006), discussing robustness of framed structures, provides a definition similar to
that of ISO 2394, and then offers as an alternative:
The robustness of a structure can be defined as ability of the structure to withstand local
damage without disproportionate collapse, with an appropriate level of reliability.
Resilience. Robustness can be distinguished from “resilience”, which refers to the ability of a
structure to be recovered after it has failed. On the other hand, a complete structure, or a system
such as a metro system, might be considered robust if its members are all resilient, so that local
failures can be repaired without failing the complete system (Huang et al 2016).
In most design processes, “lead variables” are identified and the possibility that they might adopt
extreme values, or occur in adverse combinations, is considered in some way. Lead variables are
usually actions (loads), material strengths and component resistances. However, most designs
are also affected by a large number of “secondary variables”, which the design is expected to
accommodate.
Robustness relates to the ability of a construction to withstand events and actions that
were not foreseen or consciously included in design, in effect because they were considered
“secondary”. These have to be judged in their context. For example, in a building structure if a
heating engineer puts a 150mm hole through a wall, it would be unacceptable for the wall to fail;
however, if the same hole were put through a 250mm column the heating engineer, not the
column designer, could be liable for the failure that ensued.
The definition of robustness given in ISO2394, in common with EN1990, mentions as
examples fire, explosion, impact and human errors. Human errors occur both in design and
construction, the latter often resulting in geometric inaccuracies in the construction. In a
geotechnical context, other secondary variables could include examples such as sedimentation or
erosion around a structure in water, excavation of small trenches etc, or of the ground above a
structure relying on the weight of ground, disturbance caused by burrowing animals, unidentified
loading above retaining walls, and vandalism of various kinds.
If these events are very large, it might be judged that the designer should have allowed
for them, or they might lead to successful insurance claims or prosecution of the perpetrators.
However, where they are only moderate in magnitude, clients and society reasonably expect that
they will not cause significant problems to constructions. In this respect, although the events
themselves are unforeseen at the time of design, the magnitude that a design must be able to
accommodate is understood, at least roughly. For example, whilst all structures may be expected
to have reasonable robustness against vandalism, ability to resist more severe acts of terrorism is
only required in the specifications of more exceptional structures.
In reliability work, the term “black swan” is used to describe something that was
unforeseeable and that has an extreme impact. The implication is that nobody could have
prepared for the disaster that was caused, and society would accept that no designer could be
blamed. Robustness relates to events that are also unforeseen but are of smaller magnitude, such
that society will expect that robust designs can accommodate them. It might be helpful to think
of these as grey swans – cygnets – they are neither black nor white and somewhat smaller.
It is stressed that the removal of a single vertical load bearing element "is not intended to
reproduce or replicate any specific abnormal load or assault on the structure". Rather,
member removal is simply used as a "load initiator" and serves as means to introduce
redundancy and resiliency into the structure.
EN1990 offers a similar approach as one option to avoid exceedance of limit states:
Potential damage shall be avoided or limited by appropriate choice of one or more of the
following … selecting a structural form and design that can survive adequately the accidental
removal of an individual member or a limited part of the structure, or the occurrence of
acceptable localised damage.
Use of partial factor methods to provide robustness. As noted in the Introduction, the term
“partial factor methods” will be taken to include all safety formats in which factors of safety are
spread among several variables; some of the partial factors may be “model factors”. Thus all the
“Design Approaches” of Eurocode 7 (EC7) and all LRFD formats are included as “partial factor
methods”.
Many studies have been carried out to derive values for partial factors using reliability
analysis (eg Foye et al 2006, Schweckendiek et al 2012). However, in practice, almost all values
used in modern codes of practice have been derived by calibration against previous experience of
successful design. Sometimes, further reliability studies have been used to provide additional
justification. The disadvantage of calibration processes is that the “successful” designs
demonstrated adequacy in terms of both ultimate and serviceability limit states and also with
regard to robustness. So it is difficult, if not impossible, to determine which of these criteria
would have been infringed if lower values had been used for the factors. Nevertheless, existing
experience shows that the factors adopted have provided, at least, a level of robustness that has
been found to be adequate. A cautious approach to adoption of changes that might make designs
less robust is therefore understandable.
EC7 notes one particular aspect of robustness, without using that word: the
accommodation of small geometric variations. For these it says:
The partial action and material factors (γF and γM) include an allowance for minor variations
in geometrical data and, in such cases, no further safety margin on the geometrical data
should be required. (EC7, 2.4.6.3(1))
CEN (2014) Robustness in Eurocodes notes that: “The national partial safety factors are
also expected to cover a (part of)” the effects of errors in design and execution (Section 2, page
4).
It may be concluded, therefore, that the use of partial factor methods with values derived
by calibration against existing successful experience, is a valid approach to provision of adequate
robustness. Their values are roughly aligned with typical coefficients of variation of the lead
parameters, if only by the judgement of the drafters of the standards.
Direct use of reliability methods to provide robustness. The potential benefit of reliability
methods over partial factor methods is that they can take account directly of the real uncertainty
of the lead variables, for which data may be available. This would allow the safety of designs to
be gauged by a reliability index, β, which, in principle, is related to the probability of failure,
intended to be very low. Reliability methods are generally more complicated to implement than
partial factor methods, so designers and codes of practice are only likely to adopt them if they are
shown to have clear advantages.
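For reference, the mapping from β to the notional probability of failure, under the usual assumption of a normally distributed safety margin, is pf = Φ(-β):

    from scipy.stats import norm

    # EN1990 Annex B/C quotes beta = 3.8 (50-year) and 4.7 (1-year) as ULS
    # targets for ordinary structures.
    for beta in (2.0, 3.0, 3.8, 4.7):
        print(beta, f"{norm.cdf(-beta):.1e}")   # 3.8 -> about 7e-5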
The Working Group has not been able to suggest practicable methods of accommodating
robustness, in relation to unforeseen events and actions, in reliability based design. It is possible
that a major study of civil engineering failures, of large and small magnitude, might provide a
database that could be used as an input to reliability studies. This would give, for example,
objective data on the occurrence and significance of human errors in design. However, an
immediate problem arises that in many cases the detailed analysis of failures is confidential to
legal proceedings, so accumulation of reliable data would be very difficult.
It might be possible to calibrate reliability methods against past experience in the same
way that partial factor methods have been calibrated. This could mean that values of the
reliability index β could be chosen so as to reproduce previous successful designs, which are
considered to have sufficient robustness. Unfortunately, this would lose the logical connection
between reliability index, probability of failure and the actual uncertainty of the lead variables.
It was noted above that while actions and events for which robustness is needed are not
identified at the time of design, their magnitude is roughly determined by what is acceptable to
society. Because they are independent of the lead variables, they are also independent of the
range of uncertainty of those variables. This means that the magnitudes of unforeseen actions
and events, for which robustness is required, cannot be measured on the same scale as the
uncertainties of the lead variables. Hence, simply designing for larger β might not achieve what
is required.
Consider, for example, a situation in which the coefficients of variation of the lead
variables are very small. In that case, a large value for β could be achieved with little change to
the design, giving significant robustness to meet unforeseen actions and events. In this respect,
the use of partial factors with values roughly aligned to typical coefficients of variation of the
lead variables, but not tuned specifically for individual designs, appears to be advantageous.
improve robustness against “unforeseen” events and actions by forcing more of them to be
explicitly foreseen and accommodated in the design. This is usually to be expected when
designs are critically reviewed by a multi-disciplinary team with a high level of expertise. One
possible danger that must be avoided is that the process becomes so dominated by probability
expertise that clear thinking about the physical processes involved gets crowded out.
It seems likely that studies of this type will provide valuable insights into the process of
setting values for partial factors. In relation to robustness, a key issue is to ensure that the
eventual designs are able to accommodate, to a reasonable extent, events and actions beyond
those normally included in conventional designs. This suggests that reduction of safety levels
below those of conventional practice, even when apparently indicated by reliability studies,
should be adopted only gradually and with considerable caution.
The collapse of the underground station at Nicoll Highway in Singapore in 2004 brought into
focus many issues of geotechnical design. In some respects, the structure exhibited considerable
robustness, withstanding a series of errors in design and construction. However, it eventually
collapsed because too many design rules had been transgressed.
The incident was fully described in the report of the public enquiry (Magnus et al 2005),
and has been discussed by Whittle and Davies (2006), Simpson et al (2008) and many others. It
involved the construction of a ten-level excavation in Singapore Marine Clay, a soft clay that
increases in undrained strength to about 50kPa at 35m depth. The design cross-section is shown
in Figure 4. The collapse occurred during the excavation for the tenth (final) level.
All parties to the public enquiry agreed that the most significant cause of failure was a
combination of errors in the design and construction of the steel strutting. Specifically, the joints
between struts and walers were under-designed and some of the splays included in the design of
the struts were omitted on drawings and in the construction. A large number of other problems,
mainly geotechnical, were also noted in the design and construction, which may be listed as:
• Incorrect use of finite element analysis, resulting in an over-estimate of the undrained
strength of the clay.
• Failure to ensure, during construction, that the wall penetrated at least 3m into the Old
Alluvium stratum, as required by the design.
• Failure to check that the strutting would be adequate if one strut were removed, as
required by the contract.
• Failure to replace a critical piezometer beneath the excavation when it was damaged, with
the result that the water pressure beneath the excavation was not known.
• Failure to respond adequately to observed excessive displacements and bending of the
wall and unexpected compression of the jet grout prop.
• Failure to check the toe stability of the walls in accordance with the relevant standard,
which was BS8002 (BSI 1994).
• Failure to provide by calculation adequate factors of safety on load in the steelwork.
• Failure to adhere to the Method Statement requirement that not more than 4 bays would
be open at any time, before the installation of the next layer of struts.
• Uplift of a central soldier pile, which was deemed not to be a cause of the failure by some
witnesses and by the public enquiry.
It is not intended here to develop these points in detail, but it can be seen that they include
errors in design, construction and in the communication between design and construction. The
author and others contend, and the public enquiry concluded, that it was only when the major
errors in the steelwork were combined with all these geotechnical errors that collapse became
inevitable. This implies that if the design had been carried out and the structure constructed to
normal standards of good practice, it would have been sufficiently robust to withstand the
unforeseen event of the major error in the steelwork. Indeed, it could have withstood most of the
errors in combination, but not all of them.
Many of the geotechnical errors affected the earlier stages of excavation. Wall
displacements were large and it was agreed in the inquiry that the wall had been overloaded in
bending. Nevertheless, the structure remained stable and was acceptable in its temporary state.
Evidently, at those stages it had sufficient robustness to withstand a series of errors, including the
major steelwork errors.
Considering both the evidence of the pre-collapse stages and the public enquiry’s
conclusions about the collapse, this example shows that a well designed and constructed
structure would normally be sufficiently robust to withstand a fairly major error, or even a series
of errors. No design, however, can be expected to withstand the “black swan” of a catastrophic
event or a relentless series of more minor events.
Reliability discrimination – “temporary works”. The intended factors of safety for the design
were: 1.0 on soil properties and 1.2 on bending moments and strut forces. Normal material
factors were used for steel and concrete, implying a value close to 1.0 for steel. These low
factors were justified by the designers because the structure was regarded as “temporary works”,
enabling the later construction of an internal, stronger permanent structure for the station.
Although this structure was not designed to EC7, it was noted above that EC7 allows lower
factors to be used for temporary structures, but only “where the likely consequences justify it”.
The author commends this emphasis on consequences of failure rather than on the
temporary nature of the structure. In many cases, the failure of a temporary structure, or at least
the apparent approach to failure, may be less consequential than is typical of a permanent
structure: there may be fewer people near it, or evacuation may be easier, or trained personnel
and equipment may be available to prevent the failure developing. However, this is not always
the case, so a critical review of consequences is always required before considering any
reductions in safety levels.
CONCLUSIONS
The approaches to design safety used in Eurocode 7 have been briefly presented. It is suggested
that the use of Design Approach 1 reflects, in a simple way, some basic principles learned from
reliability analysis.
This paper has concentrated on robustness defined as the ability of the final design to
accommodate events and actions that were not foreseen or consciously included in design. For
this, it is noted that the required margins of safety may relate more to the magnitudes of the lead
variables, which govern the overall geometry and strength of the structure, than to their
uncertainties. In this case, simply reducing the target probability of failure or increasing the
reliability index β calculated for the lead variables may not provide the robustness required. A
partial factor approach may more readily accommodate this requirement. Similarly, carrying out
design for the “worst credible” values of the lead variables may not provide the required
robustness.
In codes of practice, the inclusion of extensive checklists is a useful aid to ensuring that
no foreseeable hazards are overlooked, especially by designers not familiar with the specific
construction or ground conditions involved in the project.
For large projects, processes that involve critical reviews of designs, or proposed design
standards, by multi-disciplinary teams of experts are likely to identify a larger range of situations
and variables for which the designs should be checked. They will therefore increase robustness
by transferring some events and actions from the category of “unforeseen” (and therefore not
explicitly designed for) to “foreseen”. Rigorous study using reliability schemes and processes
will probably be helpful in this respect, provided the concentration on reliability expertise is not
allowed to eclipse the other skills needed in the critical review.
ACKNOWLEDGEMENTS
The author is grateful for discussion and correspondence with members of the ISSMGE Working
Group: Sonia Hortencia, Hongwei Huang, Charnghsein Juang, Bernd Schuppener, Timo
Schweckendiek, Paul Vardanega and Norbert Vogt. It is emphasized, however, that the views
expressed in this paper are the author’s own and may not necessarily be fully endorsed by all
members of the Working Group.
REFERENCES
AASHTO (2008). LRFD Bridge Design Specifications, 4th edition, 2007 with 2008 Interim
Revisions. American Association of State Highway and Transportation Officials.
BSI (1994). Code of Practice for Earth Retaining Structures. BSI. (Note: this code was replaced
by a new version in 2015.)
BSI (2005). BS EN 1990:2002 +A1:2005. Eurocode – Basis of structural design. BSI.
BSI (2013). BS EN 1997-1:2004 +A1:2013. Eurocode 7: Geotechnical design – Part 1: General
rules. BSI.
CEN (2014). Robustness in Eurocodes. Document CEN_TC_250_WG_6_N_10.
CEN (2016). Practical definition of structural robustness. Document CEN/TC 250/WG 6, N042
WG6.PT1, NA 005-51-01 AA N 439.
COST (2011). Structural robustness design for practising engineers. COST Action TU0601 –
Robustness of Structures. NA005-51-01AA_N0132, Ed. T. D. Gerard Canisius. European
Cooperation in Science and Technology.
Foye, K.C., Salgado, R., and Scott, B. (2006). “Resistance factors for use in shallow foundation
LRFD.” J. Geotech. Geoenviron. Eng., 132(9), 1208–1218.
Gong, W., Juang, C.H., Khoshnevisan, S., and Phoon, K.K. (2016). “R-LRFD: Load and resistance
factor design considering robustness.” Computers and Geotechnics, 74, 74–87.
Huang, H.W., Shao, H., Zhang, D.M., and Wang, F. (2016). “Deformational Responses of Operated
Shield Tunnel to Extreme Surcharge: A Case Study.” Structure and Infrastructure
Engineering, June 2016.
ISO (2014). ISO 2394: General principles on reliability for structures. International Standards
Organisation, Geneva, Switzerland.
Magnus, R., Teh, C.I., and Lau, J.M. (2005). Report on the Incident at the MRT Circle Line
worksite that led to the collapse of the Nicoll Highway on 20 April 2004. Subordinate Courts,
Singapore.
Schweckendiek, T., Vrouwenvelder, A.C.W.M., Calle, E.O.F., Kanning, W., and Jongejan, R.B.
(2012). “Target Reliabilities and Partial Factors for Flood Defenses in the Netherlands.”
In P. Arnold, G. A. Fenton, M. A. Hicks, and T. Schweckendiek (Eds.), Modern
Geotechnical Design Codes of Practice. IOS Press, Amsterdam.
targeting acceptable failure probabilities, rather than factors of safety, since the latter
do not provide an accurate estimate of safety, despite their name. This trend requires
an ever-increasing understanding of the probabilistic behaviour of geotechnical
systems. As a result, probabilistic geotechnical models are becoming more complex,
yet more realistic. In particular, models which consider the effects of the ground's
spatial variability on failure probability of geotechnical systems are rapidly gaining
popularity. This is because it is well known that spatial variability leads to weakest
paths which are preferentially followed by geotechnical failure mechanisms. The
paper begins by looking at the current state-of-the-art in probabilistic ground models.
The effect of spatial variability on geotechnical system failure probability is
discussed, followed by how the random finite element method (RFEM) has been and can
be used to aid in the calibration of geotechnical design codes-of-practice. The paper
finally looks at what is needed in the future to further improve cost effective
geotechnical design practices while increasing overall geotechnical system reliability.
INTRODUCTION
leads to an acceptably safe geotechnical system for each limit state. In eq. 1, R̂ is the
characteristic (nominal) geotechnical resistance and F̂_i is the i'th characteristic
(nominal) load effect. In most civil engineering design codes, the load factors are
specified in the structural part of the code, and so the challenge on the geotechnical
"resistance" side is to find the values of the resistance factor, ϕ_g, which achieve the
code-specified target reliability.
There are many uncertainties that must be considered in order to calibrate the
resistance factors. Perhaps one of the major questions that must be answered in order
to develop a rational reliability-based geotechnical design code is how to properly
account for the fact that the ground is a highly spatially variable material.
This paper concentrates on this last question. It starts by looking at the current
state of the art in modeling the ground: in particular, how best to include spatial
variability in the ground's properties, and what effect spatial variability has on the
ground response and on the failure probability of geotechnical systems. If the spatial
variability of the ground can be realistically modeled and failure probabilities
reasonably estimated, the paper then discusses how this information can be used to
calibrate the LRFD to achieve target levels of safety (or, equivalently, sufficiently
small failure probabilities). Finally, the paper discusses future directions and
requirements in the further development of reliability-based geotechnical design.
between any two points is dependent only on the distance (and possibly orientation)
between the points. The correlation coefficient between the field values at two points,
X(x_1) and X(x_2), is commonly expressed using a function such as
\rho(\tau) = \exp\left\{-\sqrt{\left(\frac{2\tau_x}{\theta_x}\right)^2 + \left(\frac{2\tau_y}{\theta_y}\right)^2 + \left(\frac{2\tau_z}{\theta_z}\right)^2}\right\}   (2)
where τ = x_1 − x_2, the vector between the two points, has components τ = (τ_x, τ_y, τ_z)
in three-dimensional space. The parameters θ_x, θ_y, and θ_z are the directional
correlation lengths, which basically govern how rapidly the random field varies.
Small correlation lengths lead to rapidly varying random fields. In the limit, as the
correlation lengths go to zero, all points in the field become independent – the field
becomes infinitely rough (white noise). At the other extreme, as the correlation
lengths go to infinity, the field becomes spatially constant – a single random variable.
If θ z = θ x = θ y = θ , then the field is said to be isotropic, an assumption which
might be made in non-site specific studies, i.e., where the actual relationship between
horizontal and vertical correlation lengths is unknown, or when the correlation
lengths are basically unknown and only the effect of their magnitude on probabilistic
site response is being investigated. Figure 2 shows a possible realization of a two-
dimensional isotropic random field (contrast this to Figure 3 which is anisotropic).
isotropic random field and then either stretch it in the direction(s) of the longer
correlation length, or compress it in the direction(s) of the shorter correlation length.
For example, if a random field of dimension Lx × Ly × Lz is desired where θ x = θ y = 1
and θ z = 0.25 , then simulating an isotropic random field, with θ = 1 , of dimension
Lx × Ly × 4 Lz and then compressing it to dimension Lx × Ly × Lz will yield a field with
the proper statistics.
Figure 3 illustrates a realization of an anisotropic random field having a horizontal
correlation length equal to 10 times the vertical correlation length.
function (e.g., Eq. 2) is non-stationary, then simulation of the random field becomes
more complicated and direct techniques, such as Covariance Matrix Decomposition
(see, e.g., Fenton and Griffiths, 2008), may be required.
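As a concrete illustration of covariance matrix decomposition, the sketch below simulates a one-dimensional stationary Gaussian field using the Markov correlation function of Eq. 2 reduced to one dimension, ρ(τ) = exp(−2|τ|/θ). This is a minimal sketch for illustration only; the grid size, spacing, and correlation length are arbitrary choices, not values from the paper.

    import numpy as np

    def markov_corr(tau, theta):
        # Eq. 2 reduced to one dimension: rho(tau) = exp(-2|tau|/theta)
        return np.exp(-2.0 * np.abs(tau) / theta)

    n, dx, theta = 100, 0.1, 1.0                      # illustrative grid and correlation length
    x = np.arange(n) * dx
    C = markov_corr(x[:, None] - x[None, :], theta)   # n x n correlation matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))     # decomposition (small jitter aids stability)
    G = L @ np.random.standard_normal(n)              # one zero-mean, unit-variance realization

Covariance matrix decomposition is exact but scales poorly with the number of field points; for large two- or three-dimensional meshes, methods such as local average subdivision are typically preferred (Fenton and Griffiths, 2008).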
Weibull, Chi-square, Gamma, and lognormal. One significant advantage to using the
lognormal distribution to model non-negative engineering properties is that it arises
from a simple transformation of the normal distribution:
X(x) = \exp\{\mu(x) + \sigma(x)\,G(x)\}   (4)
and so is still fully characterized by only the first two moments: the mean and
covariance structure of G(x). Because of this advantage, one rarely sees the other
distributions (e.g., Weibull) employed for random fields. Some ground properties
are also bounded above. For example, porosity, degree of saturation, and friction
angle are all bounded both below and above. Possible bounded distributions include
the uniform (which assumes equally likely outcomes), the Beta, and the Tanh. The last
is also a transformation of a Gaussian random field
X(x) = a + \tfrac{1}{2}(b - a)\left[1 + \tanh\left(\frac{m + s\,G(x)}{2\pi}\right)\right]   (5)
where a and b are the lower and upper bounds and m and s are location and scale
parameters. See Fenton and Griffiths (2008) for more details.
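Both transformations in Eqs. 4 and 5 are pointwise, so they are straightforward to apply to any simulated Gaussian field. A minimal sketch follows; the property names and numbers are hypothetical, and an uncorrelated Gaussian vector stands in for a properly correlated field:

    import numpy as np

    def lognormal_field(G, mu, sigma):
        # Eq. 4: non-negative field from a standard Gaussian field G(x)
        return np.exp(mu + sigma * G)

    def tanh_bounded_field(G, a, b, m, s):
        # Eq. 5: field bounded below by a and above by b
        return a + 0.5 * (b - a) * (1.0 + np.tanh((m + s * G) / (2.0 * np.pi)))

    G = np.random.standard_normal(1000)   # stand-in for a correlated Gaussian field
    E = lognormal_field(G, mu=np.log(30.0), sigma=0.5)           # e.g., an elastic modulus field
    phi = tanh_bounded_field(G, a=20.0, b=40.0, m=0.0, s=1.0)    # e.g., friction angle in degrees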
2. denote as “clay” all regions where G ( x ) > c , where c is some threshold, and as
“non-clay” all regions where G ( x) ≤ c . If the ground has more than two ground
types, then multiple disjoint and collectively exhaustive ranges can be used to
simulate the random boundaries of each material.
3. once the boundaries of each material have been simulated, simulate the material
properties within each "lens" using an appropriately specified random field(s).
Multiple random fields in which the geometric aspects are unknown are rarely used in
practice. This is because the distribution(s) of the geometric uncertainties can be very
difficult to estimate. For example, even specifying the mean and variance of a layer's
thickness implies that the layer thickness has been sampled at a reasonable number of
locations. If that is the case, then it makes more sense to simply assume the layer
thickness is known at the sampled locations. This motivates the final type of random
field model to be considered here, as discussed next.
Once a more realistic model of the ground, including its spatial variability, has
been developed, the next major challenge is to model the response of the ground to
external or internal loads. When the ground is spatially variable, its failure
mechanisms become more complex. For example, the traditional symmetric double
log-spiral failure mechanism found in most textbooks to predict bearing failure under
a spread footing assumes that the ground is spatially constant.
When the ground properties vary spatially, the bearing failure mechanism is no
longer symmetric and is attracted to weaker zones. Figure 4 shows what the failure
mechanism might look like in a real soil. The lighter (weaker) region to the right of
the footing attracts the failure mechanism, which is now non-symmetric. The failure
mechanism follows the path of least resistance through the ground. This means
that the traditionally assumed symmetric failure mechanism is unconservative – it
gives a higher strength than is actually provided by the ground along its weakest
path.
A natural approach to finding the weakest failure mechanism is to employ a finite
element model of the ground (see, e.g., Smith and Griffiths, 2004). The basic idea is
to simulate a random field of ground properties, map these properties to a finite
element mesh and use the finite element method to predict the ground response.
Figure 5 shows a cross-section through a finite element model of the ground under a
stiff footing for a typical realization of the ground's effective elastic modulus field in
a probabilistic settlement analysis.
To illustrate the effect that spatial variability has on the response of the ground to
external or internal loads, two examples will be considered below.
random. In addition, the soil is assumed to be isotropic – that is, the correlation
structure is assumed to be the same in both the horizontal and vertical directions.
Although soils generally exhibit a stronger correlation in the horizontal direction, due
to their layered nature, the degree of anisotropy is site specific. Since this example
demonstrates the basic probabilistic behaviour of settlement, anisotropy is left as a
refinement for the reader. The program used to perform the study presented in this
example is RSETL2D (Fenton and Griffiths 2002, Griffiths and Fenton 2007; see also
http://www.engmath.dal.ca/rfem).
Assuming that the settlement, δ, of a single footing is lognormally distributed, as
was found to be reasonable by Fenton and Griffiths (2002), it has probability density
function

f_\delta(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{\ln\delta}\,x}\exp\left\{-\frac{1}{2}\left(\frac{\ln x - \mu_{\ln\delta}}{\sigma_{\ln\delta}}\right)^2\right\}, \quad 0 \le x < \infty   (8)
the task is to estimate the parameters μ_lnδ and σ_lnδ as functions of the footing width,
B, the elastic modulus standard deviation, σ_E, and the correlation length θ_lnE. Figure 6
shows how the estimator of μ_lnδ, denoted m_lnδ, varies with σ²_lnE for B = 0.1H. All
correlation lengths are drawn in the plot, but are not individually labeled since they
lie so close together. This observation implies that the mean log-settlement is largely
independent of the correlation length, θ_lnE. This is as expected, since the correlation
length does not affect the mean of a local average of a normally distributed process.
Figure 6 suggests that the mean of log-settlement can be closely estimated by a
straight line of the form,
\mu_{\ln\delta} = \ln(\delta_{\mathrm{det}}) + \tfrac{1}{2}\sigma_{\ln E}^2   (9)
where δ det is the `deterministic' settlement obtained from a single finite element
analysis (or appropriate approximate calculation) of the problem using E = μ E
everywhere. This equation is also shown in Figure 6 and it can be seen that the
agreement is very good. Even closer results were found for other footing widths.
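To make the use of Eq. 9 concrete, the short sketch below estimates the probability of excessive settlement under the lognormal assumption of Eq. 8. The numbers are hypothetical, and σ_lnδ is simply assumed here (in the paper it comes from Eq. 10, which is not reproduced in this excerpt):

    import math

    delta_det = 25.0      # mm, deterministic settlement from one FE run with E = mu_E (hypothetical)
    sigma2_lnE = 0.25     # variance of log elastic modulus (hypothetical)
    mu_lndelta = math.log(delta_det) + 0.5 * sigma2_lnE   # Eq. 9
    sigma_lndelta = 0.4   # assumed here; supplied by Eq. 10 in the paper
    delta_max = 40.0      # mm, maximum tolerable settlement
    z = (math.log(delta_max) - mu_lndelta) / sigma_lndelta
    p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))        # P[delta > delta_max] for a lognormal delta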
Taking the logarithm of Eq. 11 and then computing its mean and variance leads to
Eqs. 9 and 10. The geometric mean is dominated by small values of elastic modulus,
which means that the total settlement is dominated by low elastic modulus regions
underlying the footing, as would be expected.
These results can be extended to the serviceability limit state design of a single
footing. If a square footing of dimension B × B is considered, the design requirement
is to find B and the ratio of the load to resistance factors, α/ϕ_g, such that

\delta_{\max} = \frac{\alpha \hat{F} u_1}{B \phi_g \hat{E}}   (13)

and

P\left[\frac{F u_1}{B E_{\mathrm{eff}}} > \frac{\alpha \hat{F} u_1}{B \phi_g \hat{E}}\right] = p_m   (14)
where δ_max is the maximum tolerable settlement (serviceability limit state), u_1 is an
influence factor (see Fenton et al., 2005, for more details), F is the actual load, E_eff is
the equivalent elastic modulus as seen by the footing, F̂ is the characteristic
(nominal) load, Ê is the characteristic (nominal) elastic modulus, and p_m is the
maximum tolerable failure probability. In the above, we are assuming that the soil’s
elastic modulus is the ‘resistance’ to the load and that it is to be factored due to its
high uncertainty.
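Rearranged for the footing width, Eq. 13 gives B = αF̂u_1/(ϕ_g Ê δ_max). A one-line check with hypothetical numbers (all values below are assumptions for illustration, not from the paper):

    alpha, phi_g = 1.25, 0.5       # load and resistance factors (assumed)
    F_hat, E_hat = 1000.0, 30e3    # characteristic load (kN) and modulus (kPa) (assumed)
    u1, delta_max = 0.4, 0.025     # influence factor and tolerable settlement (m) (assumed)
    B = alpha * F_hat * u1 / (phi_g * E_hat * delta_max)   # required width from Eq. 13, in m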
Five different sampling schemes will be considered in this example, as illustrated
in Figure 8. The outer solid line denotes the edge of the soil model, which is 9.6 x 9.6
m in plan and 4.8 m in depth as in Figure 5, and the interior dashed line the location
of the footing. The small black squares show the plan locations where the site is
virtually sampled. It is expected that the quality of the estimate of Eeff will improve
for higher numbered sampling schemes. That is, the probability of design failure will
decrease for higher numbered sampling schemes, everything else being held constant.
For a fixed resistance factor, ϕ_g, the soil samples allow an estimate of the
characteristic elastic modulus, Ê, and Eq. 13 can then be used to design the footing.
Repeating the design for many realizations of the soil allows the probability that a
footing designed using ϕ_g will result in excessive settlement to be estimated. Figure 9
illustrates the effect of correlation length on the probability of excessive settlement,
p_f, for sampling scheme #1. It is evident that (a) the spatial variability of the ground
has a strong influence on p_f, and (b) there is a worst-case correlation length, in this
case around 10 m – which is of the order of the distance from the footing to the
sampling point (6.8 m).
Figure 10 shows the failure probability for the various sampling schemes at a
coefficient of variation, v_E = 0.5, and θ_lnE = 10 m. Improved sampling (i.e.,
improved understanding of the site) makes a significant difference to the required
value of ϕ_g, which ranges from ϕ_g ≈ 0.46 for sampling scheme #1 to ϕ_g ≈ 0.65 for
sampling scheme #5, assuming a target probability of p_m = 0.05. Note that if a
distance-weighted or trend estimate were used, sampling scheme #4 would have been
better than #5. In general, more samples are preferable – however, only a simple
average was used in this study to estimate the soil properties so that the four samples
not taken directly under the footing in sampling scheme #4 actually just “muddy the
waters”, decreasing the accuracy of the sample taken under the footing. The overall
implications of Figure 10 are that when soil variability is significant, considerable
design/construction savings can be achieved when the sampling scheme is improved.
Bearing Capacity
The design of a shallow footing typically begins with a site investigation aimed at
determining the strength of the founding soil or rock. Once this information has been
gathered, the geotechnical engineer is in a position to determine the footing
dimensions required to avoid entering various limit states. In so doing, it will be
assumed here that the geotechnical engineer is in close communication with the
structural engineer(s) and is aware of the loads that the footings are being designed to
support. The limit states that are usually considered in the footing design are
serviceability limit states (typically deformation – see example above) and ultimate
limit states. The latter are concerned with safety and include the load-carrying
capacity, or bearing capacity, of the footing.
This example illustrates an LRFD approach for shallow foundations designed
against bearing capacity failure. The design goal is to determine the footing
dimensions such that the ultimate geotechnical resistance based on characteristic soil
properties, R̂_u, satisfies

\phi_g \hat{R}_u \ge \sum_i \alpha_i \hat{F}_i   (15)

where ϕ_g is the geotechnical resistance factor, α_i is the i’th load factor, and F̂_i is the
i’th characteristic load effect. The relationship between ϕ_g and the probability that
the designed footing will experience a bearing capacity failure will be summarized
below (from Fenton et al., 2007) followed by some results on resistance factors
required to achieve certain target maximum acceptable failure probabilities for the
particular case of a strip footing (from Fenton et al., 2008).
The characteristic ultimate geotechnical resistance R̂_u is determined using
characteristic soil properties, in this case characteristic values of the soil's cohesion, c,
and friction angle, φ (note that although the primes are omitted from these quantities,
it should be recognized that the theoretical developments described in this example
are applicable to either total or effective strength parameters).
The characteristic value of the cohesion, ĉ, is defined here as the median of the
sampled observations, c_i^o, which, assuming c is lognormally distributed, can be
computed using the geometric average,
\hat{c} = \left(\prod_{i=1}^{m} c_i^o\right)^{1/m} = \exp\left\{\frac{1}{m}\sum_{i=1}^{m} \ln c_i^o\right\}   (16)
The geometric average is used here because if c is lognormally distributed, as
assumed, then ĉ will also be lognormally distributed. The characteristic value of the
friction angle is computed as an arithmetic average,

\hat{\phi} = \frac{1}{m}\sum_{i=1}^{m} \phi_i^o   (17)
the overall bearing capacity. This assumption also allows the analysis to explicitly
concentrate on the role of cN_c on the ultimate bearing capacity, since this is the only
term that includes the effects of spatial variability relating to both shear strength
parameters c and φ .
Bearing capacity predictions, involving specification of the N_c factor in this case,
are generally based on plasticity theories (see, e.g., Prandtl, 1921; Terzaghi, 1943;
and Sokolovski, 1965) in which a rigid base is punched into a softer material. These
theories assume that the soil underlying the footing has properties which are spatially
constant (everywhere the same). This type of ideal soil will be referred to as a
uniform soil henceforth. Under this assumption, most bearing capacity theories (e.g.,
Prandtl, 1921; Meyerhof, 1951, 1963) assume that the failure slip surface takes on a
logarithmic spiral shape, giving

N_c = \frac{e^{\pi\tan\phi}\,\tan^2\left(\frac{\pi}{4} + \frac{\phi}{2}\right) - 1}{\tan\phi}   (19)
The theory is derived for the general case of a c–φ soil. One can always set φ = 0 to
obtain results for an undrained clay; in the limit as φ → 0, Eq. 19 gives N_c = π + 2.
Consistent with the theoretical results presented by Fenton et al. (2008), this
example will concentrate on the design of a strip footing. In this case, the
characteristic ultimate geotechnical resistance R̂_u becomes

\hat{R}_u = B\,\hat{q}_u   (20)

where B is the footing width and R̂_u has units of load per unit length out-of-plane,
that is, in the direction of the strip footing. The characteristic ultimate bearing stress
q̂_u is defined by

\hat{q}_u = \hat{c}\,\hat{N}_c   (21)
where the characteristic N_c factor is determined using the characteristic friction angle
in Eq. 19,

\hat{N}_c = \frac{e^{\pi\tan\hat{\phi}}\,\tan^2\left(\frac{\pi}{4} + \frac{\hat{\phi}}{2}\right) - 1}{\tan\hat{\phi}}   (22)
For the strip footing and just the dead and live load combination, the LRFD equation
becomes
\phi_g B \hat{q}_u = \alpha_L \hat{F}_L + \alpha_D \hat{F}_D \quad\Rightarrow\quad B = \frac{\alpha_L \hat{F}_L + \alpha_D \hat{F}_D}{\phi_g \hat{q}_u}   (23)
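Pulling Eqs. 16, 17 and 21–23 together, a strip footing design can be sketched in a few lines. The sampled values, loads and factors below are hypothetical placeholders, not values from the paper:

    import numpy as np

    c_obs = np.array([18.0, 25.0, 21.0, 30.0])     # sampled cohesion, kPa (hypothetical)
    phi_obs = np.array([28.0, 31.0, 26.0, 33.0])   # sampled friction angles, deg (hypothetical)

    c_hat = np.exp(np.mean(np.log(c_obs)))         # Eq. 16: geometric average
    phi_hat = np.radians(np.mean(phi_obs))         # Eq. 17: arithmetic average

    def N_c(phi):
        # Eqs. 19/22; as phi -> 0 the limit is pi + 2 (undrained clay)
        return (np.exp(np.pi * np.tan(phi)) * np.tan(np.pi / 4 + phi / 2) ** 2 - 1.0) / np.tan(phi)

    q_u_hat = c_hat * N_c(phi_hat)                 # Eq. 21: characteristic bearing stress, kPa
    alpha_L, F_L = 1.5, 200.0                      # live load factor and effect, kN/m (assumed)
    alpha_D, F_D = 1.25, 600.0                     # dead load factor and effect, kN/m (assumed)
    phi_g = 0.45                                   # geotechnical resistance factor (assumed)
    B = (alpha_L * F_L + alpha_D * F_D) / (phi_g * q_u_hat)   # Eq. 23: required width, m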
To determine the resistance factor ϕ_g required to achieve a certain acceptable
reliability of the constructed footing, it is necessary to estimate the probability of
bearing capacity failure of a footing designed using Eq. 23. Once the probability of
failure p_f for a certain design using a specific value for ϕ_g is known, this probability
can be compared to the maximum acceptable failure probability p_m. If p_f exceeds
p_m, then the resistance factor must be reduced and the footing redesigned. Similarly,
if p_f is less than p_m, then the design is overconservative and the value of ϕ_g can be
increased. Using either simulation or theory, design curves can then be developed
from which the value of ϕ_g required to achieve a maximum acceptable failure
probability can be determined.
Figure 11 shows the resistance factors required for the case where the soil is
sampled at a distance of r = 4.5 m from the footing centerline for the target failure
probability, p_m = 0.001. In the figure, v_c is the coefficient of variation of the cohesion.
uniform, having the same value everywhere. In this case, any soil sample also
perfectly predicts conditions under the footing.
At intermediate correlation lengths soil samples become imperfect estimators of
conditions under the footing, and so the probability of bearing capacity failure
increases, or equivalently, the required resistance factor decreases. Thus, the
minimum required resistance factor will occur at some correlation length between 0
and infinity. The precise value depends on the geometric characteristics of the
problem under consideration, such as the footing width, depth to bedrock, length of
soil sample, and/or the distance to the sample point.
recommendations for the resistance factor consider both site and model
understanding, along with failure consequence, in a single factor.
As is well known, the overall safety level of any design should depend on at least
three things: 1) uncertainty in the loads, 2) uncertainty in the resistance, and 3) the
severity of the failure consequences. These three items are all usually deemed to be
independent of one another and in most modern codes are thus treated separately.
Uncertainties in the loads are handled by load and load combination factors, failure
consequences are handled by applying a multiplicative importance factor to the more
site-specific and highly uncertain loads (e.g., earthquake, snow, and wind), and
Fig. 12. Floating partial safety factor, relative to the default, applied to
geotechnical resistance (numbers are for illustration only).
Rather than introducing a 3 x 3 matrix of resistance factors for each limit state, the
multiplicative approach taken in structural engineering (where the load is multiplied
by both a load factor and an importance factor) is adopted for geotechnical resistance
as well in the 2014 CHBDC (CSA, 2014).
In other words, the overall safety factor applied to geotechnical resistance is
broken into two parts:
1. a resistance factor, ϕ_gu or ϕ_gs, which accounts for resistance uncertainty. This
factor basically aims to achieve a target maximum acceptable failure probability
equal to that used currently for geotechnical designs for typical failure
The resulting table for ULS and SLS geotechnical resistance factors appearing in the
2014 CHBDC is shown in Table 1. How the geotechnical resistance factor values
appearing in Table 1 were obtained is explained in the following sections on
calibration.
The consequence factor, Ψ , appearing in Eq. 24, adjusts the maximum acceptable
failure probability of the geotechnical system being designed to a value which is
appropriate for the magnitude of the failure consequences. Three failure consequence
levels are considered in the 2014 CHBDC:
• High consequence: the foundations and/or geotechnical systems are designed for
Table 1. Some of the geotechnical resistance factors for ULS and SLS appearing
in Table 6.2 of the 2014 CHBDC (numbers are for illustration only).

                                                                  Degree of understanding
Application    Limit state                   Test method/model    Low    Typical  High
Shallow        Bearing, ϕ_gu                 Analysis             0.45   0.50     0.60
foundations                                  Scale model test     0.50   0.55     0.65
               Sliding frictional, ϕ_gu      Analysis             0.70   0.80     0.90
                                             Scale model test     0.75   0.85     0.95
               Sliding cohesive, ϕ_gu        Analysis             0.55   0.60     0.65
                                             Scale model test     0.60   0.65     0.70
               Passive resistance, ϕ_gu      Analysis             0.40   0.50     0.55
               Settlement or lateral         Analysis             0.7    0.8      0.9
               movement, ϕ_gs                Scale model test     0.8    0.9      1.0
Table 2. ULS and SLS consequence factors, Ψ, appearing in Table 6.1 of the
2014 CHBDC.

Consequence level    Consequence factor, Ψ
High                 0.9
Typical              1.0
Low                  1.15
The consequence factors specified in the 2014 CHBDC for the three consequence
levels are shown in Table 2. This table is very similar to Table B-3 in Eurocode 0
(British Standard BS EN 1990, 2002) which specifies three multiplicative factors, 0.9,
1.0, and 1.1, to be applied to loads (actions) for low, medium, and high failure
consequences, respectively (these factors are approximately the inverse of the factors
seen in Table 2 because they appear on the load side of the LRFD equation). In other
words, the concept of shifting the target failure probability to account for severity of
failure consequences is not new, although the application of the consequence factor to
the resistance side, rather than the load side, of the LRFD equation appears to be new.
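Although Eq. 24 itself is not reproduced in this excerpt, the description above implies that the consequence factor simply multiplies the factored geotechnical resistance. A minimal sketch using the Table 1 and Table 2 values (the resistance value itself is hypothetical):

    R_u_hat = 900.0   # characteristic ultimate resistance, kN (hypothetical)
    phi_gu = 0.50     # shallow foundation bearing, analysis, typical understanding (Table 1)
    for level, psi in [("high", 0.9), ("typical", 1.0), ("low", 1.15)]:   # Table 2
        print(f"{level} consequence: factored resistance = {psi * phi_gu * R_u_hat:.0f} kN")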
Calibration of Geotechnical Resistance Factors
The geotechnical resistance factor calibration must start with a review of the
factors currently used in Canadian geotechnical design codes, as well as those used in
other codes from around the world. Table 3 is a small subset of a much more
extensive table that was prepared to compare the load and geotechnical resistance
factors between a variety of codes, reports, and manuals from various jurisdictions
(Fenton et al., 2016). In the calibration process, Table 3, and its more extensive
counterpart, is used to suggest the ‘best’ currently acceptable estimates of ‘typical’
resistance factors. These are the ϕ_gu factors that have been found to lead to societally
acceptable failure probabilities under current design practice. The factor R_{D/L} is the
dead to live load ratio that was assumed in the code calibration process.
The question of how the geotechnical resistance factor should be adjusted as the
level of site and model understanding changes brings up the question of how the
reliability of a geotechnical design can be estimated in the first place, for any given
level of site and model understanding. The approach used here is essentially to use
Monte Carlo simulations, modeling the ground as a spatially varying random field,
and to carry out a virtual site investigation, design, and construction of the geotechnical
system. The geotechnical system is then subjected to random maximum lifetime loads
and checked to see if the particular limit state under investigation is exceeded. If so, a
failure is recorded and the process is repeated. The failure probability of the design is
then estimated as the number of failures divided by the number of trials – if the
failure probability is too high, the design factors are suitably adjusted, and so on. The
detailed steps are as follows (a simplified sketch of the loop is given after the list):
1. for a particular geotechnical system (e.g., shallow foundation) and limit state
(e.g., bearing capacity), choose a geotechnical resistance factor to be used in the
design,
2. simulate a random field of ground properties, having a specified variance and
correlation structure,
3. virtually sample the ground at some location to obtain ‘observations’ of the
ground properties. The distance between the sample and the geotechnical system
acts as a proxy for site and model understanding – the farther the sample is from
the geotechnical system, the more the uncertainty about the system performance
(decreased site and model understanding),
4. design the geotechnical system using the characteristic geotechnical parameters
determined from the sample taken in step 3. The definition of ‘characteristic’
depends on the design code being used. For example, in Europe, the characteristic
values would be a lower 5-percentile. In North America, a ‘cautious estimate of
the mean’ is probably a more common definition, as discussed previously. In most
of the calibration exercises undertaken for the CHBDC, the characteristic values
were taken as the geometric average of the sampled ‘observations’. The geometric
average is always at least slightly lower (more so for higher variability) than the
arithmetic average, and so can be viewed as a ‘cautious estimate of the mean’,
5. virtually construct the geotechnical system according to the design in the
previous step and place it on (or in) the random field generated in step 2,
6. employ a sophisticated numerical model (e.g., the finite element method) to
determine if the geotechnical system exceeds the limit state being designed
against (this is a failure),
7. repeat from step 2 a large number of times, recording the number of failures.
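The sketch below walks through the seven steps for a deliberately simplified, one-dimensional undrained bearing problem. Everything here is a stand-in: the covariance-matrix field simulation replaces whatever field generator is used in practice, a closed-form capacity from the weakest ground replaces the finite element analysis of step 6, and all numbers are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def cohesion_field(n, dx, theta, mu_lnc, sig_lnc):
        # step 2: lognormal cohesion field via covariance matrix decomposition (1-D)
        x = np.arange(n) * dx
        C = np.exp(-2.0 * np.abs(x[:, None] - x[None, :]) / theta)
        L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
        return np.exp(mu_lnc + sig_lnc * (L @ rng.standard_normal(n)))

    phi_g = 0.5                  # step 1: trial geotechnical resistance factor
    sample_idx = 45              # sample location; distance from the footing is the proxy for understanding
    alpha, F_hat = 1.25, 300.0   # load factor and characteristic load, kN/m (assumed)
    Nc = np.pi + 2.0             # undrained bearing factor

    n_trials, failures = 2000, 0
    for _ in range(n_trials):
        c = cohesion_field(n=100, dx=0.1, theta=5.0, mu_lnc=np.log(50.0), sig_lnc=0.3)
        c_hat = c[sample_idx]                          # step 3: virtual 'observation'
        B = alpha * F_hat / (phi_g * c_hat * Nc)       # step 4: design the footing width
        q_true = c[:10].min() * Nc                     # steps 5-6: capacity from the weakest ground
        F_true = rng.lognormal(np.log(300.0), 0.2)     # random maximum lifetime load
        if q_true * B < F_true:                        # limit state exceeded?
            failures += 1
    p_f = failures / n_trials                          # step 7: estimated failure probability

If p_f exceeds the target, ϕ_g is reduced and the loop repeated; in the actual calibration the response in steps 5 and 6 comes from random finite element analyses rather than a closed form.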
index of about β = 3.1. If the worst-case resistance factor for a reasonable coefficient
of variation of the ground shear strength, v_c = 0.3, is examined, it can be seen from
Figure 11 that the typical ‘understanding’ (assumed to be r = 4.5 m) geotechnical
resistance factor is about 0.45. For ‘high’ understanding (r = 0 m), a similar plot (not
shown) suggests a geotechnical resistance factor of about 0.65 when v_c = 0.3. At the
other extreme, for ‘low’ understanding (r = 9 m), the theory suggests a geotechnical
resistance factor of about 0.4. These theoretical results seem to be in reasonable
agreement with the range suggested by other codes.
factor for this case has been rounded down to 0.90, as discussed shortly. Similarly, to
adjust the v_c = 0.23 case for a low consequence design (p_m = 1/1000), the
consequence factor is obtained at the intersection of the v_c = 0.23 curve and the upper
horizontal line. This occurs at about Ψ = 1.13 (which will be rounded to Ψ = 1.15
shortly).
site investigation is sufficient to keep the residual variability below this level, then
Ψ = 0.9 is a reasonable design value for the high failure consequence case which will
almost always lead to a failure probability well below p_m = 1/10,000 (β = 3.7).
A similar argument can be applied to Figure 14b for the low consequence case,
where a solid line at Ψ = 1.15 has been drawn across the plot. It can be seen that this
value is not quite as conservative as the high consequence factor (selected above) in
that the v_c = 0.2 curve comes somewhat closer to Ψ = 1.15. The authors feel,
however, that conservatism is not quite as important for the low failure consequence
Fig. 14. Consequence factor versus correlation length for r = 4.5 m and
ϕ_gu = 0.5, at the high consequence level (p_m = 1/10,000) in (a), where Ψ = 0.9 is
proposed, and at the low consequence level (p_m = 1/1000) in (b), where Ψ = 1.15 is
proposed.
Research into the consequence values for deep foundation design (Naghibi et al.,
2013) yields similar consequence factors for both ULS and SLS design. Thus, it
appears that the consequence factors selected for the 2014 CHBDC are reasonably
appropriate for other limit states of geotechnical systems.
FUTURE DIRECTIONS
Previous sections illustrated how spatial variability models can be used both to
more realistically represent the ground and its failure mechanism as well as to serve
as a mathematical proxy for site understanding. Although computers are becoming
fast enough to bring random field models into the design office, there are still a
number of impediments to their widespread use:
1. While the mean values of the ground properties may be reasonably well known,
estimating their variances requires significantly more samples of the ground
throughout the site. In many cases, such intensive sampling will not have been
done, and so variance estimates commonly come from the literature. The entire
issue of specifying the variance of the random ground model is complicated by a
number of factors:
a. it is really the uncertainty between sample locations (e.g., between CPT
exactly the same thing – modifies the target system reliability depending on failure
consequences.
Adjusting the target reliability depending on failure consequence severity is one
way of accomplishing a risk-based design. A perhaps more precise way is to actually
perform a risk assessment of the design, where risk is defined as the product of the
failure probability and the cost of failure, and to choose the design having the lowest
risk. Such an approach is not commonly taken in civil engineering for the simple
reason that failure often involves loss of life, and assigning a cost to the loss of human
life (or any lives, for that matter) is a difficult and sensitive issue.
Nevertheless, there are many geotechnical design issues that do not involve loss of
life and that would definitely benefit from a risk assessment (cost-benefit) approach to
design. Serviceability limit states, for example, have fairly well defined limits (e.g.,
excessive settlement) and entering such a state will have a cost which can be
estimated (e.g., the cost of improving/stiffening a foundation).
For example, Figures 8 and 10 can be used to perform a risk-assessed design of a
shallow foundation against entering a serviceability limit state. For a fixed failure
probability, p_f = 0.05, the various sampling schemes shown in Figure 8 result in
different resistance factors, which in turn directly influence the cost of constructing
the foundation. This construction cost can then be balanced against the cost of
sampling and the optimum sampling scheme determined. For example, assuming an
unsophisticated sample average is used to estimate the soil properties, it appears from
Figure 10 that the best sampling scheme (highest resistance factor, lowest
construction cost) is #5, where a single sample is taken directly under the footing. It
must be remembered, however, that in order to achieve maximum construction
savings, a sample would have to be obtained under every footing, which is often not
practical. The system level cost-benefit analysis is a relatively straightforward
extension of a single footing cost-benefit analysis.
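As a hedged sketch of the single-footing cost-benefit comparison just described, suppose the resistance factors of Figure 10 are taken at p_f = 0.05 and the footing width scales as 1/ϕ_g (as Eq. 13 suggests). All costs here are hypothetical placeholders, not values from the paper:

    schemes = {1: {"phi_g": 0.46, "n_samples": 1},    # single sample away from the footing
               5: {"phi_g": 0.65, "n_samples": 1}}    # single sample under the footing
    sample_cost, concrete_cost = 2000.0, 500.0        # $ per sample and per m^2 (hypothetical)
    B0 = 2.0                                          # width needed if phi_g were 1.0, m (hypothetical)
    for k, s in schemes.items():
        B = B0 / s["phi_g"]                           # wider footing for a smaller phi_g
        total = s["n_samples"] * sample_cost + concrete_cost * B ** 2
        print(f"scheme #{k}: B = {B:.2f} m, total cost = ${total:,.0f}")

The scheme minimizing the sum of sampling and construction costs (for the same target failure probability) is then the optimum, which is the balancing exercise described in the paragraph above.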
CONCLUSIONS
resistance factor. The next logical step is to routinely perform such risk assessments
for individual designs in order to optimize overall savings while maintaining system
reliability.
Design codes of the future will increasingly allow, and promote, flexibility in the
design process if rigorous probabilistic and risk (cost-benefit) assessments have been
performed.
REFERENCES
Prandtl, L. (1921). “Über die Eindringungsfestigkeit (Härte) plastischer Baustoffe und die
Festigkeit von Schneiden.” Zeitschrift für angewandte Mathematik und Mechanik, 1(1), 15–
20.
Smith, I.M., and Griffiths, D.V. (2004). Programming the Finite Element Method, J. Wiley &
Sons, 4th ed., Hoboken, NJ.
Sokolovski, V.V. (1965). Statics of Granular Media, 270 pages, Pergamon Press, London,
UK.
Standards Australia (2004). Bridge Design, Part 3: Foundations and Soil-Supporting
Structures, Australian Standard AS 5100.3--2004, Sydney, Australia.
Terzaghi, K. (1943). Theoretical Soil Mechanics, John Wiley & Sons, New York, NY.
NOTATION
implementation of reliability based codes in Canada for foundations and geotechnical systems.
Implementation challenges such as selection of suitable target reliability index, appropriate
selection of geotechnical characteristic values, and geotechnical resistance factors for uplift
resistance due to frost action are described through project examples. Lessons learned are
identified and discussed. Lack of sufficient understanding of fundamental concepts, training and
education are factors that contribute to the identified implementation issues. Reliability based
design is not meant to be a substitute for good understanding of geology, geological processes,
fundamental ground behaviour, failure mechanisms, and engineering judgement and experience.
The paper demonstrates that enhanced effective risk management is obtained through close
collaboration between the owner and their consultant.
INTRODUCTION
Geotechnical engineering practitioners need to understand that uncertainty and risk always exist
in projects. As such, effective management of geotechnical risk is a requirement for successful
geotechnical engineering design. The role of geotechnical engineers on projects is to provide
solutions to manage geotechnical risks to acceptable levels as may be specified in relevant codes.
This paper briefly outlines the development of Load and Resistance Factor Design (LRFD) based
codes for foundations and geotechnical systems in Canada, and the issues and challenges arising
from their implementation. Lessons learned from both positive and negative project experiences
are identified and discussed, along with primary factors that have contributed to the
implementation issues and challenges.
HISTORY OF LIMIT STATES DESIGN (LRFD) CODE DEVELOPMENT IN CANADA
The history and background of limit states Load and Resistance Factor Design (LRFD) code
development for foundations in Canada is presented in Becker (1996 and 2006) and the Canadian
Foundation Engineering Manual (CFEM) (Canadian Geotechnical Society 2006). The two
primary LRFD-based codes are the National Building Code of Canada (NBCC) and the
Canadian Highway Bridge Design Code (CHBDC). These codes apply to foundations and
geotechnical systems where a structure component is supported by the ground (e.g., foundations,
retaining walls, ground anchors, soil nails). For geotechnical applications such as slope stability,
a single global factor of safety approach is typically used.
Limit states design for foundations based on a factored strength approach was first introduced
around 1980, but it was not well understood, not well received and not well embraced by
geotechnical practitioners. Instead of achieving efficiency and economy in design and
construction, foundations and geotechnical systems increased significantly in cost.
In the 1990s, limit states design for foundations was re-introduced using a factored load and
overall resistance approach as embodied by the LRFD framework. LRFD code development was
implemented to achieve harmony in design approach between structural and geotechnical
and theoretical principles. As practitioners become more accustomed to LRFD and reliability-
based design concepts, future code editions will introduce further sophistication.
The degrees of sophistication are evident when the CHBDC (2014) is compared with the
previous editions in 2000 and 2006. Examples of sophistication include the introduction of
geotechnical resistance factors based on Level of Understanding of site and ground conditions
and Consequence Factor to account for the consequence of failure of a structure (Fenton et al.
2016). The rationale in support of these two significant and fundamental changes is that a higher
geotechnical resistance factor should apply when the geotechnical engineer has an improved
level of understanding of ground conditions due to more comprehensive site investigation and
analyses, or when the consequence of failure is lower than the typical case. Previously, a single
geotechnical resistance factor for ultimate limit states applied regardless of level of
understanding and consequence of failure (e.g., for bearing resistance of shallow foundations, the
geotechnical resistance factor was 0.5, and for pile axial compression resistance, the geotechnical
resistance factor ranged from 0.4 (static analysis) to 0.6 (static load test)).
introduces geotechnical resistance factors other than 1.0 for serviceability limit states. In earlier
versions of CHBDC and NBCC, geotechnical resistance factors for serviceability limit states
were taken as 1.0.
For details of the changes and revised geotechnical resistance factors, the reader is referred to
CHBDC (2014) and Fenton et al. (2016). For example, for bearing resistance of shallow
foundations, the geotechnical resistance factor now varies from 0.45 (low understanding) to 0.65
(high understanding). For pile axial compression resistance, the geotechnical resistance factor
ranges from 0.35 (low understanding - static analysis) to 0.7 (high understanding - static load
test).
The next edition of the NBCC and CFEM will similarly be updated to reflect the provisions in
CHBDC (2014). This will be done so that consistency (harmony) is obtained amongst Canadian
codes and the CFEM, which is frequently referenced by the codes.
This incremental approach appears to have worked reasonably well, though implementation
issues such as those presented and described in this paper exist. In any event, for the reasons
presented in Becker (1996), the author believes that the factored overall resistance LRFD
approach has been much better received and accepted by geotechnical practitioners than the
factored strength approach. The author is of the opinion that with sufficient time and experience,
practitioners will feel increasingly comfortable with LRFD and realize its benefits over that of
allowable (working) stress design. It is noted that structural engineers also went through similar
challenges, albeit almost 40 years ago, when structural engineering design switched from
allowable (working) stress design to limit states design (LRFD).
LRFD applied to retaining walls has also received significant attention over the past 10 to
20 years. Publications such as Bathurst et al. (2012) and others summarize this work.
LRFD FRAMEWORK
The general LRFD design equation is:
ФR ≥ Σ α_i S_i [1]

where ФR is the factored geotechnical resistance, Ф is the geotechnical resistance factor (with
values less than 1.0), R is the nominal (characteristic) geotechnical resistance, Σ α_i S_i is the
summation of factored load effects for a given load combination, α_i is the load factor
(usually greater than 1.0) for nominal load effect S_i (e.g., dead load due to weight of structure
or live load due to wind), and i represents the various types of loads, such as dead load, live load
and wind load.
The design equation can be visualized by inspecting the interaction of the probability distribution
curves for resistance and load effects, as shown schematically in Figure 1. The design intent is to
achieve a specified reliability index (or probability of failure) that is related to the size of the
shaded area shown in Figure 1, which corresponds to a failure condition (i.e., resistance is less
than applied loading). It should be noted that the resistance and load effects are assumed to be
independent variables, which is approximately true for the static loading conditions
associated with foundations. The characteristic values for design are related to the mean values
through the factor k_r (the ratio of the mean value to the characteristic value for geotechnical
resistance) and the factor k_s (the ratio of the mean value to the specified (characteristic) value
for load effects). Typically, k_r values are equal to or greater than 1.0 and k_s values are less
than 1.0. The terms k_r and k_s are also referred to as bias factors. The bias factor is 1.0 if the
mean value is used as the characteristic value or when the predicted mean resistance is the same
as the measured mean resistance.
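The overlap idea in Figure 1 can be quantified directly when R and S are independent and normally distributed: the reliability index is β = (μ_R − μ_S)/√(σ_R² + σ_S²). A minimal numeric check, with all values hypothetical:

    import math

    mu_R, V_R = 1200.0, 0.20   # mean resistance (kN) and its coefficient of variation (assumed)
    mu_S, V_S = 600.0, 0.15    # mean load effect (kN) and its coefficient of variation (assumed)
    sig_R, sig_S = mu_R * V_R, mu_S * V_S
    beta = (mu_R - mu_S) / math.sqrt(sig_R**2 + sig_S**2)   # reliability index
    p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))            # P[R < S], the shaded area in Figure 1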
The values of Ψ range from 0.9 to 1.15 for high and low consequence structures, respectively.
For typical consequence structures, Ψ = 1.0. The rationale for Consequence Factor and its values
are provided in CHBDC (2014) and Fenton et al. (2016).
IMPLEMENTATION ISSUES AND CHALLENGES
Since LRFD-based codes became mandatory in Canada, the following key implementation issues
and challenges have arisen:
• Some owners have observed that costs of foundations and geotechnical systems have
large quantities of reliable data can be produced by the in-situ probes and data acquisition
systems.
The normal distribution curve can reasonably approximate the distribution (histogram) of some
geotechnical engineering parameters with sufficient accuracy for engineering purposes. A log-
normal distribution is a common alternative and may be better suited in some cases. Figure 2
shows the normal distribution curve and some of its characteristics that are useful for
interpretation or inference of soil parameters and in calibration of load and resistance factors
using reliability concepts. The main characteristics are the mean (μ), standard deviation (σ),
coefficient of variation (V = σ/μ) and confidence levels. The distribution curve becomes
narrower (shows less data scatter) as V decreases. Approximately 68% of the values within the
normal distribution curve lie within one standard deviation of the mean. Approximately 95% of
the values lie within two standard deviations of the mean value. Only approximately 0.3% of
the values lie at and beyond three standard deviations from the mean.
These equations are used in the next section to estimate the approximate mean value when the
maximum value is known.
FROST UPLIFT ON LIGHTLY LOADED PILES
This implementation issue concerns the standard practice whereby geotechnical engineers tend to
report the maximum frost depth that would be anticipated within the design life of a project. This
practice is not consistent with the intent of the characteristic value being the mean value, or close
to the mean value, of the expected frequency distribution for frost depth. The geotechnical
resistance factor for uplift of 0.3 given in the NBCC and CHBDC is based on the characteristic
value being the mean value or close to the mean, and on the specified load factors for type of load
and load combinations (e.g., in the range of 1.25 to 1.7).
LRFD design of lightly loaded piles to resist uplift due to frost action is an example of
misalignment and misunderstanding of the appropriate selection of characteristic value for frost
depth in geotechnical reports. For sites with reasonably deep frost depth, the design of lightly
loaded piles, such as piles to support pipe racks in petroleum process facilities in Alberta, is often
controlled by resistance against frost uplift. Traditional allowable (working) stress design has
worked well and led to an empirical design guidance that, in most cases, the length of a pile
(with minimal externally applied axial compression load) to resist frost uplift is approximately
three times the anticipated maximum frost depth. In Alberta, the required length of pile was often
in the range of 5 m to 10 m, depending on maximum frost depth.
Following the implementation of LRFD-based codes in Canada, the author has received many
phone calls and emails from structural engineers saying that when they followed the code they
calculated excessively long piles to resist frost uplift. In some cases piles as long as 20 m or
longer were calculated. Clearly something is not right.
The reason for the outcome of very long piles is that the design values given for frost depth in
geotechnical reports are usually based on maximum (extreme) values that lie in the tails of their
probability density functions (Figure 2). The estimated maximum frost depth is an important
parameter for the design of buried infrastructure, such as unheated water supply lines; however,
the maximum frost depth is not directly appropriate for pile design to resist frost action uplift.
The basis of the derived geotechnical resistance factor of 0.3 for uplift action on a pile in the
CHBDC and NBCC is that the characteristic values for adfreeze (skin friction of frozen soil) and
frost depth be based on their mean values, or close to the mean values, not on estimated maximum
values.
In order to resolve the situation for the excessively long piles, the author’s immediate response
was that, for the reported values of maximum frost depth, a geotechnical resistance factor and
load factor of 1.0 be used when checking for resistance against uplift due to frost action. The reason
for this is that resistance and load factors of 1.0 seem reasonable when extreme (maximum)
values are used for the characteristic value for frost depth. The value of 0.3 is appropriate when
uplift is caused by external loads such as wind and other loads leading to eccentric loading, used
in combination with code-specified load factors.
Frost depth is a function of many factors, including soil type, water content, freezing degree-days,
snow cover, albedo and others. A number of relationships to estimate frost depth have been
developed and reported in the technical literature (e.g., CFEM 2006). Uplift action on a pile is a
function of the frost depth and the adfreeze acting on the circumference of the pile shaft within
the frost depth (Figure 3), and is given as:

Fu = fad fd C [5]

where Fu = uplift force due to frost action (kN), fad = adfreeze (kPa), fd = anticipated frost depth
(m), and C = circumference of the pile shaft within the frost depth (m).
From Equation [7], the following equation is developed for the case of maximum frost depth,
fdmax:

ФL ≥ α (fad/fs) fdmax [8]

Frost action uplift is a load, and the structural engineer would apply a load factor (e.g., α = 1.25)
to the uplift force due to frost action (Fu). If Ф = 0.3, α = 1.25 and (fad/fs) ranging from 1.4 to 2.5
are put into Equation [8], L ranges from ≥ 5.8fdmax to ≥ 10.4fdmax, which is consistent with
structural engineers calculating pile lengths that are much longer than pile lengths based on
empirical guidance (i.e., L ≥ 2fdmax). For example, if Ф = 0.3 and α = 1.25 with a maximum frost
depth of 2.5 m are used in the calculation, L becomes 20.8 m and a 23 m long pile is calculated to
be required, which is excessively long and not appropriate.
If Ф = α = 1.0 and (fad/fs) = 2, Equation [8] becomes L ≥ 2fdmax, which is consistent with empirical
guidance and experience. This seems reasonable because, for extreme loading cases, resistance and
load factors of 1.0 generally apply. In order to obtain L ≥ 2fdmax when Ф = 0.3, α needs to be
0.3. This is a very low load factor that would probably cause angst for structural engineers.
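A quick numeric check of Equation [8] under the factor combinations discussed above, taking fad/fs = 2 as in the text and the Calgary frost depths derived in the example below (mean 1.4 m, maximum 2.5 m):

    fd_max, fd_mean, ratio = 2.5, 1.4, 2.0    # m, m, assumed fad/fs

    # Maximum frost depth with phi = alpha = 1.0 -> L >= 2 * fdmax
    L1 = (1.0 / 1.0) * ratio * fd_max         # 5.0 m, consistent with empirical guidance

    # Misapplied: maximum frost depth with phi = 0.3 and alpha = 1.25
    L_bad = (1.25 / 0.3) * ratio * fd_max     # about 20.8 m, the excessive length noted above

    # Mean frost depth with phi = 0.4 and alpha = 1.25
    L2 = (1.25 / 0.4) * ratio * fd_mean       # about 8.8 m, within the traditional 5 m to 10 m range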
The above approach is examined below in more detail using reliability design concepts to assess
appropriate values for geotechnical resistance and load factors.
The First Order Second Moment (FOSM) method (Becker 1996, FHWA 2001) provides simple
approximate equations that are considered to capture the essence of the problem and serve as an
approximate basis for theoretically interrogating suitable values of the geotechnical resistance
factor and load factor for the frost uplift loading condition. The FOSM relationships for the load
and geotechnical resistance factors are as follows (Becker 1996):
Ф = k_r e^(−θβV_r) [9]

α = k_s e^(θβV_s) [10]
where θ is a separation factor of 0.75, k_r and k_s are bias factors (typically in the range of 1.0 to 1.1
for k_r and 0.9 to 1.0 for k_s), β is the target reliability index (typically 2.5 to 3.0), and V_r and V_s
are the coefficients of variation of the resistance and load distributions, respectively.
Although the author does not have data to support the opinion, it may be reasonable to consider
that the coefficient of variation for extreme loads would be small. If it is assumed that the coefficient
of variation is small (say 0.05), the value of α from the FOSM Equation [10] becomes
approximately 1.0. This provides additional support that α = 1.0 is appropriate for design based
on maximum frost depth.
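A numeric check of Equations [9] and [10] using values from the ranges quoted above (the specific choices are picked here for illustration only):

    import math

    theta, beta = 0.75, 3.0       # separation factor and target reliability index
    k_r, V_r = 1.1, 0.30          # resistance bias and coefficient of variation (illustrative)
    k_s, V_s = 0.9, 0.05          # load bias and a small COV for an extreme load
    phi = k_r * math.exp(-theta * beta * V_r)     # Eq. [9]: about 0.56
    alpha = k_s * math.exp(theta * beta * V_s)    # Eq. [10]: about 1.0, as stated above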
As an initial step in examining and assessing an appropriate and consistent methodology for designing lightly loaded piles to resist uplift forces due to frost action, climatic data for Calgary from 1970 to 2010 were used to predict annual values of frost depth for a typical clayey/silty soil in Calgary. The frost depths would be greater for sandy and gravelly soils. The freezing index in terms of Celsius degree-days is a key factor in predicting frost depth. The freezing index varied from about 250 to about 1,450 Celsius degree-days, with a mean value of about 800 Celsius degree-days (Figure 4). The corresponding predicted frost depth varied from about 0.7 m to 2.3 m, as shown in Figure 5. The mean value is approximately 1.4 m, with a standard deviation of 0.37 m and a coefficient of variation of 0.26.
Substituting these values into Equations [3] and [4] results in a maximum frost depth of 2.5 m.
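The quoted statistics can be checked in a few lines. The check assumes that Equations [3] and [4], which are not reproduced in this excerpt, define the maximum frost depth as the mean plus three standard deviations; that assumption reproduces the 2.5 m value and is flagged as such in the code.

```python
# Hedged check of the Calgary frost-depth statistics quoted above.
mean_fd = 1.4   # mean predicted frost depth (m)
std_fd = 0.37   # standard deviation (m)

print(round(std_fd / mean_fd, 2))        # coefficient of variation, about 0.26
# Assumed definition (Equations [3] and [4] are not shown in this excerpt):
print(round(mean_fd + 3.0 * std_fd, 1))  # maximum frost depth, about 2.5 m
```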
The results of this analysis agree with the state-of-practice in Calgary, where a maximum frost depth of about 2.5 m is often stated in geotechnical reports.
Figure 4. Histogram and normal distribution – annual freezing index, Calgary, Alberta.
Given the above examination, the following design options become available so that excessively
long piles would not be calculated to resist frost action uplift:
• Use Ф = 1.0 and α = 1.0 with maximum frost depth, fdmax.
• Use Ф = 0.3 (possibly 0.4) and α = 1.25 with mean frost depth, fdmean.
The CHBDC and NBCC technical committees are in the process of rigorously examining and
assessing the most appropriate and consistent methodology for designing lightly loaded piles to
resist uplift forces due to frost action. The author is of the opinion that it makes sense that the
characteristic value for frost depth be the mean or close to the mean value to be consistent with
current LRFD methods. If this approach is to be adopted in the CHBDC, NBCC and CFEM, geotechnical engineers will also need to report the mean value of frost depth instead of just the maximum frost depth. In the interim, geotechnical reports could state that, for the maximum frost depth reported, a load and resistance factor of 1.0 should be used in the calculation of the required pile length to resist uplift forces due to frost action.
CHARACTERISTIC VALUE
In the author’s experience, the inappropriate selection of characteristic value by geotechnical
practitioners is the greatest implementation issue in connection with LRFD-based codes. The
following discussion is intended to assist the reader in better understanding how to select
characteristic values for design.
Although the values of the geotechnical resistance factors are important, the real essence of
design and the key question is: What is the appropriate characteristic value? The definition and
basis for the appropriate selection of the characteristic value has not received the attention it
rightly deserves. The technical literature and design codes in general give little or no guidance
for the appropriate selection of geotechnical characteristic values. In the author’s opinion, the
assessment and selection of the appropriate characteristic value is a most important part of the
geotechnical design process.
A description and discussion of the characteristic value is provided by Becker (1996) and CFEM
(2006). The characteristic value is defined as the geotechnical engineer’s best assessment of the
most likely representative (unfactored) value that controls a specific limit state. The value needs
to account for all factors that have influence on the parameter or property within the volume of
ground (zone of influence) under consideration.
Selection of Appropriate Characteristic Value
Let’s interrogate the above definition to gain a better sense of what the words mean. From the
above definition, the characteristic value for undrained shear strength may not directly be the
mean value or a value close (cautious estimate) to the mean of the actual field or laboratory
measurements. Measured values often require adjustments to take into account factors that have
influence on the measured parameter. For example, field vane strength for high plasticity clay
should be adjusted using Bjerrum or Aas correction factors (CFEM 2006). In clay deposits
consisting of regular interlayers of stiffer and weaker soil, such as varved clay, the field vane
strength may measure a strength that is too high because the vertical failure surface imposed by
the vane passes through the stiffer and weaker layers (Becker et al. 1988). A slope stability
failure surface often has a relatively large horizontal plane component and would tend to pass
through the weaker layers in the deposit. The direct use of the field vane strength along
horizontal failure surfaces would not be representative and as such is not the characteristic value
for horizontal failure surfaces. In the author’s experience, a suitable characteristic value for
horizontal weak layers in varved clay deposits may be the strength corresponding to 0.75 times
the measured vane strength.
Another example is the direct use of strength measured using small sized tested specimens taken
from a fissured clay or fractured bedrock. The measured value is often the intact strength, which
is much higher than the overall operational or mobilized strength of the ground mass that is
largely controlled by the fissures and fractures (Lo 1970).
Factors such as fabric anisotropy and stress anisotropy as described and discussed in the
technical literature (e.g., Bjerrum 1972, Becker et al. 1984) need to be considered when
assessing appropriate values for the characteristic value. All factors that influence the
geotechnical parameter or property must be taken into account. When assessing characteristic
values for strength, the key consideration by the geotechnical engineer is suitably assessing the
mobilized strength for the given limit state under consideration.
It is important to realize that the above considerations also apply when selecting suitable design
values for use in traditional allowable (working) stress design.
What is meant by “within the volume of ground (zone of influence) under consideration”?
Figure 6 illustrates the intended meaning of zone of influence. For a given site, there is more than
one characteristic value for strength and displacement (deformation) parameters. The
characteristic value is not simply the mean (or cautious estimate of the mean) based on all the
test results from the site investigation program. For example, for the design of spread footings,
the test results within the stress bulb (zone of influence) would be considered in assessing the
characteristic value. For deep foundation design it would be along and deeper than the
anticipated pile length. If the ground is layered there would be characteristic values for each of
the individual layers. For embankments, stress anisotropy considerations and parameters
associated with the anticipated field stress range for the loading condition need to be taken into
account. This similarly applies to excavations and cuts, except that parameters associated with
unloading condition and anticipated stress range would be considered when selecting appropriate
values for characteristic values. The primary reason for these considerations is that ground
behaviour exhibits significant non-linearity.
The geotechnical engineer also needs to be cognizant of the interrelationship between resistance
and load factors and characteristic value when selecting characteristic geotechnical parameters
for design purposes. Currently, it appears that practicing geotechnical engineers may not be
aware of the importance of selecting the characteristic value in a specific manner as described in
the following paragraph.
Load and resistance factors and characteristic values are interrelated. Load and resistance factors
have been derived (calibrated) based on characteristic values that have been defined in a specific
manner. For consistent and rational design in practice, the selection of a characteristic value for
geotechnical resistance needs to be made in the same manner as that used to derive the resistance
factor. The geotechnical resistance factors were derived using characteristic values that are close
to the mean value; therefore, a characteristic value of a geotechnical property that is similarly
close to the mean should be used in the calculation of geotechnical resistance. A value closer to
the lower bound value should not be used because the uncertainty associated with a given design
parameter has already been incorporated into the numerical values of the resistance factors.
Should a value closer to the lower bound strength be chosen, an additional level of conservatism
enters into the design. Although an additional level of reliability and safety may be viewed as
desirable, it could have significant cost implications as discussed later in this paper.
sediments. The site investigation program consisted of sampled boreholes with frequent field
vane tests and was complemented by a reasonably detailed laboratory testing program. The site
investigation program was considered consistent with good state-of-practice. The highway
embankment performed well except for a small reach where a series of embankment foundation
failures took place. In the subsequent forensic investigation, it was found that the data along the alignment were compiled and plotted collectively, including plots of field vane test results. The plasticity of the clay deposits was such that adjustment of the measured vane strength (e.g., Bjerrum or Aas correction factors) was not required.
Although the details are not certain, it appears that all test results were plotted using the same symbol, rather than a specific and unique symbol to denote each borehole along the alignment. The range in the collective strength vs. depth profiles did not raise flags of significant variability. Figure 7a shows the design strength profile.
The profile corresponds to a trend line slightly less than the mean and as such could be viewed as
being consistent with a ‘cautious estimate’ of the mean. The author has provided the
corresponding probability density function plotted at a given elevation within the strength vs.
depth profile. The forensic investigation into the embankment failures showed that the failed
section of the alignment contained three boreholes in which the measured field vane strength
comprised the lower bound of the plotted strength profile as shown by the symbol “x” in Figure
7b. The figure also shows that for the project, there should have been at least two separate
characteristic values defined for the alignment.
Lesson Learned: Initially plot test results with a specific and unique label (symbol) for each borehole so that a better assessment of spatial variability can be made. If all symbols are sufficiently dispersed within the strength vs. depth profiles and within the probability density function, a single symbol could then be used to simplify presentation in the report.
Parochial Knowledge and Local State-of-Practice
This case record demonstrates that local state-of-practice and the general tendency of practitioners to resist change can stand in the way of good engineering and of economic benefit and efficiency. The project involved the first-time use in this municipality of a mechanically stabilized earth (MSE) wall as part of a highway grade separation project. The foundation soils consist of lacustrine clay deposits. The geotechnical consultant for the project did not have significant work experience in the municipality. The owner (municipality) had concerns about the first-time use of high MSE walls on clay soils in their region and wanted to impose the use of traditional effective stress strength parameters in analyses and design. Based on the results of site-specific triaxial compression tests and a review of the technical literature, the geotechnical consultant considered that the parameters traditionally used were low and unduly conservative, and would significantly increase cost should design be based on the traditional values.
During the follow-up assessment, it was found that the traditional effective stress strength parameters of effective cohesion and friction angle were based on an unloading condition, such as back-analysis of river valley slopes that had been incised into the otherwise flat terrain. From a geotechnical perspective, the use of unloading parameters for a loading class of problem, such as an MSE wall and embankment, is not appropriate. However, debate remained as to the strength values that should be used for design. When the unloading parameters were used, the design was challenged to comply with the design criteria in terms of minimum factor of safety. The use of the traditional strength values produced a design that was not typical: very long reinforcement relative to the height of the wall, and ground improvement of the existing foundation clay soil through the use of vertical structural elements (piles). The cost to build the wall would be very high.
A suggestion was made to carry out probabilistic limit equilibrium stability analyses to assist in sorting out the design issue and to resolve the debate over suitable strength parameters. However, such analysis was not performed because the owner and their local consultant wanted the probability density function for the strength parameters to include both loading and unloading values. The project geotechnical consultant felt that collective grouping of unloading and loading strength values would be misleading and inconsistent with the definition of an appropriate characteristic value, which requires that all factors that influence strength be considered for the limit state under consideration. The MSE wall and embankment induce a loading condition and, as such, only strength values corresponding to a loading condition are relevant. Fundamentally, the
strength parameters should be grouped into separate loading and unloading probability density
functions – not lumped all together.
Lesson Learned: Parochial knowledge and long standing local practice (i.e. “we always do it
this way”) can stand in the way of good engineering and improvement in terms of economic
benefit and efficiency. The application of statistics and probability concepts is not to replace
good understanding of fundamental soil behaviour and theoretical considerations. Statistics and
probability are useful tools if properly applied. They must not become a substitute for
understanding fundamental behaviour, loading conditions and failure mechanisms, geology and
geological depositional processes.
be cautious when there are data to establish the Vr of the strength parameters and the value approaches 0.3 or higher. In such cases, the use of a lower Ф or higher FS may be warranted when the characteristic value used for analysis and design corresponds to a bias factor in the range of 1.0 to 1.1.
The CHBDC (2014) introduces the consequence factor, Ψ, as shown in Equation [2]. The equation for FS then becomes 1/(ΨФ). So, for a structure with high consequence (Ψ = 0.9) and typical degree of understanding (Ф = 0.65), the required FS from a geotechnical limit equilibrium analysis would need to be 1.7 instead of 1.5 for a typical-consequence structure (Ψ = 1.0).
Figure 8a. Histogram and normal distribution – peak effective friction angle (c’ = 0 kPa).
Figure 8b. Histogram and normal distribution – residual effective friction angle
(c’ = 0 kPa).
From Equation [9], Vr is the controlling parameter for fixed values of β, kr and θ. Table 3 summarizes the results for the cases of kr = 1.0, θ = 0.75, β = 2.3 and 3.0, and Vr varying by a factor of 2 (i.e., 0.30 to 0.15 and 0.2 to 0.1). When Vr is changed from 0.2 to 0.1, there is an approximate change in the value of FS of 0.25, or about 20%. Similarly, when Vr changes from 0.30 to 0.15, there is an approximate change in the value of FS of 0.45, or about 35%. Therefore, if the basis of design is 1.3 for the most likely case or peak strength parameters, the use of 1.1 for residual strength or reasonable worst case parameters appears to be appropriate on the basis that the Vr may be a factor of about 2 lower than for peak strength parameters.
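The trend described above can be sketched as follows, assuming FS is taken as 1/Ф with Ф from Equation [9] and Ψ = 1; the values are illustrative and will not match Table 3 exactly.

```python
# Sensitivity of the equivalent factor of safety FS = 1/phi to Vr, Equation [9].
import math

def factor_of_safety(V_r, beta, k_r=1.0, theta=0.75):
    phi = k_r * math.exp(-theta * beta * V_r)
    return 1.0 / phi

for beta in (2.3, 3.0):
    for V_r in (0.30, 0.15, 0.20, 0.10):
        print(f"beta = {beta}, Vr = {V_r:.2f}: FS = {factor_of_safety(V_r, beta):.2f}")
# Halving Vr from 0.2 to 0.1 reduces FS by roughly 20%, as noted in the text.
```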
In the author’s experience and opinion, a primary cause of increased foundation costs is the
inappropriate selection of characteristic value as discussed in earlier sections of this paper. In
summary, most practitioners tend to stay with the procedure and basis of selecting design values
such as strength as they did when using allowable (working) stress design. In many cases they
would tend to select characteristic strength values (profiles) closer to lower bounds than the
mean value or a cautious estimate of the mean. When a lower bound strength is used, an
additional level of conservatism enters into the design, which tends to increase costs. As
discussed previously in this paper, the geotechnical resistance factors listed in the NBCC and
CHBDC are based on characteristic values that are close to the mean.
The owner’s experience is also that most practitioners say that they must follow the code and use
the geotechnical resistance factors provided in the code. The NBCC does not include the
geotechnical resistance factors within the code (legally binding) portion of the document. The
factors are provided in the Commentary to the code (not legally binding) with reference to the
Canadian Foundation Engineering Manual (CFEM 2006). The language (provisions) in the
NBCC and CHBDC allows the engineer the freedom to be innovative. The codes permit the
engineer to develop project and site specific values of reliability index and load and resistance
factors, based on sound engineering principles and fundamental theoretical considerations.
However, the development of project and site specific criteria requires a reasonably significant
effort in terms of resources (schedule and cost). In many cases, there is not enough economic
incentive to do this because there is insufficient cost savings to justify the effort. For large
projects, however, substantial cost savings can be realized, thereby justifying the investment.
While staying within the intent of codes, there is an opportunity to increase the values of
geotechnical resistance factors provided that: (i) site-specific geotechnical uncertainties are
characterized; (ii) reliability of the geotechnical prediction method is quantified; and (iii) the
target reliability index is rationally selected and meets the intent of the codes. The selection of a
suitable project specific target reliability index is ultimately the responsibility of the owner,
though the consultant’s role and contributions are important.
The following case record shows the key elements of how project- and site-specific values can be developed within the framework of reliability based design and within the intent of the NBCC. The details of the development of the project- and site-specific reliability index and geotechnical resistance factors are provided in Thomson et al. (2016, 2017). The following discussion provides a summary of that work.
Bitumen Processing Facility
A proposed bitumen processing facility in Northern Alberta includes many low to no occupancy
structures that support pipe racks and processing units, and provide protection to pumps and
other mechanical/electrical components. Several thousand steel piles will be driven during
construction of the facility. The owner put the challenge to the geotechnical consultant to provide
foundation design recommendations that comply with code requirements and result in
constructed foundations that are cost effective to the fullest extent practicable.
Values for reliability index and associated geotechnical resistance factors for design of structures
are given in applicable codes (e.g., NBCC). However, the use of the building code may not be
strictly applicable to structures with limited to no occupancy that do not satisfy the definition of
building or do not have high failure consequence should they not perform as expected. The
reliability for such structures can be lower than that normally used in building codes. The direct
use of geotechnical resistance factors identified in a building code in such circumstances would
tend to produce a foundation design that is too conservative and expensive.
The ground conditions at the site comprised sandy silty clay till containing dense to very dense
sand and silty sand interlayers. The till was cohesive with intermediate plasticity, varied from
firm to hard and contained varying proportions of sand and gravel. The geotechnical
investigation included 23 sampled boreholes, 41 cone penetration tests with pore pressure
measurement (CPTs), and a laboratory index, shear strength and stress-deformation testing
program. Groundwater monitoring instrumentation was also installed.
In addition to the reasonably comprehensive site investigation program, a static compression pile
load test program consisting of five tests was carried out in conformance with ASTM D1143 as
part of design to develop site-specific geotechnical resistance factors. The test and reaction piles
had diameters of 324 mm and 406 mm with wall thicknesses of 9.5 mm and 12.7 mm,
respectively. The piles were driven open-ended using a hydraulic hammer. Pile embedment
lengths were pre-selected to vary between 13 m and 18 m. High strain dynamic testing was
completed at end of initial drive and beginning of restrike on all test and reaction piles using a
Pile Driving Analyzer (PDA) in conformance with ASTM D4945, followed by Case Pile Wave
Analysis Program (CAPWAP) assessment of selected hammer blows.
Inherent (spatial) uncertainty was quantified by predicting the geotechnical axial compression
resistance of piles with various diameters and lengths at each individual boring and CPT
investigation location within the site. The geotechnical resistance was predicted with methods
commonly used in Canadian practice (e.g., unit shaft resistance estimated as a proportion of the soil undrained shear strength (CFEM 2006)). A typical frequency distribution of predicted geotechnical
resistance is shown in Figure 9 for a 406 mm diameter pile with a 15 m embedment length.
Comparison between predicted geotechnical axial compression resistance and measured axial
compression resistance showed a bias factor (measured/predicted) ranging from 0.80 to 1.64,
with a mean value of 1.16 and coefficient of variation of 0.18.
Assuming that the sources of uncertainty are statistically independent, the overall coefficient of variation (Vr) was calculated as (FHWA 2001):

$V_r = \sqrt{V_{inh}^2 + V_{meas}^2 + V_{pred}^2}$  [12]

where Vinh, Vmeas and Vpred are the coefficients of variation for inherent (geological/spatial), measurement and prediction model uncertainty, respectively. The results of the work showed that Vinh was 0.16 (see Table 4), Vmeas was 0 (assumed) and Vpred was 0.18. The resulting value of Vr was 0.24.
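The combination of the reported values can be verified in a few lines:

```python
# Check of Equation [12] with the reported coefficients of variation.
import math

V_inh, V_meas, V_pred = 0.16, 0.0, 0.18
V_r = math.sqrt(V_inh**2 + V_meas**2 + V_pred**2)
print(round(V_r, 2))  # 0.24, as reported
```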
the geotechnical consultant and the owner. This study would not have been carried out without the owner appreciating and understanding the fundamental relationship between tolerable project risks, geotechnical resistance factors and project construction costs. This case history demonstrates that value can be added and risk management enhanced through close collaboration between the owner and their consultant.
DISCUSSION AND CONCLUDING REMARKS
From a Canadian geotechnical practitioner’s perspective, the author is of the opinion that
primary factors leading to existing implementation issues and challenges of reliability based
design (LRFD) concepts include the following:
(1) There appears to be a general lack of understanding, communication, education and training
concerning the fundamental principles and intent of LRFD. For consistent and rational design in
practice, the selection of a characteristic value for geotechnical resistance needs to be made in
the same manner as that used to derive the associated resistance factor. Geotechnical
practitioners frequently do not appear to fully understand or comply with this requirement. The
geotechnical resistance factors were derived using a characteristic value that is close to the mean
value; therefore, a characteristic value of a geotechnical property that is similarly close to the
mean should be used in the calculation of geotechnical resistance. A value closer to the lower
bound value should not be used because the uncertainty associated with a given design parameter
has already been incorporated into the numerical value of the resistance factors. Should a value
closer to the lower bound strength be chosen, the design can result in foundations with an
inadvertent high degree of conservatism, which leads to more costly foundations than necessary.
(2) A greater degree of design interaction needs to exist between structural and geotechnical engineers. A higher degree of interaction is both positive and beneficial to the interests of the designs and of the project. Engineers are generally not averse to a higher degree of interaction; in fact, they commonly strive to achieve it. However, frequently the project and/or the client do not appreciate the benefits of increased interaction, or the stage of the project is not conducive to such interaction, in particular during early project stages when specific details are generally unknown. To promote this interaction, Canadian codes require effective interaction between structural and geotechnical engineers during the design and construction stages of projects.
(3) Education as to the benefits and fundamental principles of LRFD continues to be limited and appears to be a low priority of the Canadian geotechnical community. Universities, provincial jurisdictions, learned societies and other agencies need to put a higher priority on promoting and disseminating the understanding of the fundamental principles and components of reliability based design (LRFD) concepts as applied to foundations and geotechnical systems. It is essential that the
ACKNOWLEDGMENTS
The author acknowledges the support of Golder Associates Ltd. and the contributions of his
colleagues who worked on the projects and case records mentioned and described in this paper,
in particular Drs. Peter Thomson, Paul Dittrich and Graeme Skinner who reviewed the paper and
provided valuable and insightful comments and suggestions. Special thanks are also extended to
Ms. Sarah Bungay, Dr. Masoumeh Saiyar and Mr. Bob McDonald who assisted in the
preparation of this paper. The thoughtful review comments of Dr. Gordon Fenton are also greatly
appreciated and valued.
REFERENCES
Bathurst, R.J., Huang, B. and Allen, T.M. 2012. LRFD calibration of the ultimate pullout limit
state for geogrid reinforced soil retaining walls. ASCE International Journal of
Geomechanics, Special Issue on Geosynthetics 12(4): 399-413.
Becker, D.E. 1996a. The Eighteenth Canadian Geotechnical Colloquium: Limit States Design for
Foundations I. An overview of the foundation design process, Canadian Geotechnical
Journal, Vol. 33, No. 6, 956-983.
Becker, D.E. 1996b. The Eighteenth Canadian Geotechnical Colloquium: Limit States Design
for Foundations II. Development for the National Building Code of Canada, Canadian
Geotechnical Journal, Vol. 33, No. 6, 984-1007.
Becker, D.E. 2006. Limit States Design Based Codes for Geotechnical Aspects of Foundations in
Canada. International Symposium on New Generation Design Codes for Geotechnical
Engineering Practice, Taipei, Taiwan. November 2 and 3, 2006.
Becker, D.E., Crooks, J.H. and Been, K. 1988. Interpretation of the Field Vane Test in Terms of In-Situ and Yield Stresses. In Vane Shear Strength Testing in Soils: Field and Laboratory Studies, Richards, A.F., Editor, ASTM, pp. 71-87.
Becker, D.E., Crooks, J.H.A., Jefferies, M.G. and McKenzie, K.J. 1984. Yield behaviour and
consolidation, Part 2: strength gain. Proceedings ASCE Geotechnical Engineering Division
Symposium on Sedimentation and consolidation models: predictions and validation, Ed.
Young & Townsend, pp. 382-398.
Bjerrum, L. 1972. Embankment on soft ground. Proceedings of the ASCE Specialty Conference on Performance of Earth and Earth-Supported Structures, Purdue University, USA, 2: 1-54.
Canadian Geotechnical Society. 2006. Canadian Foundation Engineering Manual (CFEM) 4th
edition. Edited by D. Becker and I. Moore. BiTech Publishers, Vancouver, BC.
CHBDC – Canadian Standards Association. 2000. Canadian Highway Bridge Design Code – A
National Standard of Canada. CAN/CSA Standard S6-00.
CHBDC – Canadian Standards Association. 2014. Canadian Highway Bridge Design Code – A
National Standard of Canada. CAN/CSA Standard S6-14.
Federal Highway Administration (FHWA). 2001. Load and Resistance Factor Design (LRFD)
for Highway Bridge Substructures: Reference Manual and Participant Workbook.
Publication No. FHWA HI-98-032.
Fenton, G.A., Naghibi, F., Dundas, D., Bathurst, R. J. and Griffiths, D. V. 2016. Reliability-
based geotechnical design in the 2014 Canadian Highway Bridge Design Code. Canadian
Geotechnical Journal, 53(2): 236 - 251.
Lo, K.Y. 1970. The operational strength of fissured clays. Geotechnique 20, No. 1, 57 - 74.
Meyerhof, G.G. 1970. Safety factors in soil mechanics. Canadian Geotechnical Journal, 7: 349 -
355.
Meyerhof, G.G. 1995. Development of geotechnical limit state design. Canadian Geotechnical
Journal, 32: 128 - 136.
NBCC - National Research Council of Canada. 2005. National Building Code of Canada
(NBCC) Volumes 1 and 2. 12th Edition 2005, Ottawa, Canada.
Phoon, K.K., Becker, D.E., Kulhawy, F.H., Honjo, Y., Ovesen, N.K. and Lo, S.R. 2003. Why consider reliability analysis for geotechnical limit state design. Proceedings of LSD 2003: International Workshop on Limit State Design in Geotechnical Engineering Practice, Cambridge, Massachusetts, USA. June 26, 2003.
Thomson, P., Becker, D. E., Esposito, G. and Wright, J. 2016. Reliability-based calibration of
geotechnical resistance factors for a large industrial project. Proceedings of Geo-Vancouver,
69th Canadian Geotechnical Society Conference, Vancouver, British Columbia, October 2 -
5, 2016.
Thomson, P., Becker, D. E., Esposito, G. and Wright, J. 2017. Site-Specific Geotechnical
Resistance Factors for a Large Industrial Project in Canada. Proceedings of ASCE Geo-Risk
2017 Conference: Geotechnical Risk from Theory to Practice, Denver, Colorado, June 4 - 6,
2017.
1Glenn Dept. of Civil Engineering and Center for Risk Engineering and System Analytics
Abstract
Cyclic stress-based simplified methods have been widely used for liquefaction potential
assessment. While the original simplified procedure pioneered by Seed and Idriss in the early
1970s was based on a large number of fundamental laboratory studies supplemented with some
field observations, the more recent simplified methods were almost always developed solely
based on the database of field cases using the framework of the original simplified procedure.
There are, however, substantial uncertainties in the collected case histories and in the model
development process. Coupled with the need for risk assessment and performance-based design, probabilistic methods have been increasingly used in the assessment of liquefaction potential and its effects. While various probabilistic methods for liquefaction assessment are available
in the literature, these methods have not been addressed systematically in a single report. In this
paper, the probabilistic methods for liquefaction assessment, including the discriminant analysis,
the logistic regression, artificial neural network, Bayesian methods, and performance-based
methods, are reviewed. The formulations, key assumptions, advantages and limitations, and their
applications for liquefaction assessment are discussed. The challenges and the need for further
research are also addressed.
INTRODUCTION
Earthquake induced soil liquefaction can cause enormous economic losses during an earthquake
event. For instance, approximately 27,000 houses were damaged in the Tohoku and Kanto
districts due to liquefaction during the 2011 East Japan earthquake (Yasuda et al. 2012), and
approximately half of the $30 billion in losses caused by earthquakes in the 2010-2011 Canterbury earthquake sequence was related to liquefaction (Cubrinovski et al. 2014). A realistic assessment
of soil liquefaction potential is the basis for liquefaction hazard assessment and mitigation. The
cyclic stress-based simplified procedure pioneered by Seed, Idriss, and their colleagues (e.g.,
Seed and Idriss 1971 & 1982; Seed et al. 1985) has evolved into a practical tool widely used for
liquefaction potential assessment (e.g., Youd et al. 2001; Cetin et al. 2004; Boulanger and Idriss
2012). In the simplified method, the seismic loading that can cause a soil to liquefy is expressed
in terms of the cyclic stress ratio (CSR). Because the simplified methods were developed mainly based on calibration with field data involving different earthquake magnitudes and overburden stresses, the CSR is often “normalized” to a reference state with moment magnitude Mw = 7.5 and
effective overburden stress σ′v = 100 kPa. At the reference state, the CSR is denoted as CSR7.5,σ,
which may be expressed as follows (Juang et al. 2006; Boulanger and Idriss 2012):
$CSR_{7.5,\sigma} = 0.65 \left( \frac{\sigma_v}{\sigma'_v} \right) \left( \frac{a_{max}}{g} \right) \frac{r_d}{MSF \cdot K_\sigma}$  (1)

where σv and σ′v are the total and effective vertical overburden stresses, amax is the peak ground surface acceleration, g is the gravitational acceleration, rd is the depth-dependent stress reduction factor, MSF is the magnitude scaling factor, and Kσ is the overburden correction factor.
To address the above limitations, numerous studies have been carried out on how to assess liquefaction potential and its impact in a probabilistic manner. As the probabilistic methods for liquefaction assessment become increasingly complex, the chance of their misuse by practicing engineers increases. The objective of this paper is thus to provide a comprehensive review of existing probabilistic methods for liquefaction assessment, including a discussion of the key assumptions involved and the advantages, limitations, and applications of these methods. The challenges and the need for further research are also discussed. It is hoped that this paper will provide a timely summary of the probabilistic methods in the geotechnical engineer's toolbox and offer guidance on how these tools can be used.
[Figure 1 plots the adjusted cyclic stress ratio, CSR7.5,σ, against the adjusted normalized cone tip resistance, qc1N,m, for the liquefied and non-liquefied case histories.]
Figure 1. Boundary curve separating liquefied cases (closed-circle points) and non-liquefied cases (open-circle points) (Juang et al. 2006)
DISCRIMINANT ANALYSIS
The uncertain location of the boundary curve shown in Fig. 1 was first addressed by discriminant analysis, which is widely used to classify objects in data mining. When a large number of liquefaction and non-liquefaction case histories is available, prediction of liquefaction potential may be viewed as the problem of classifying a future case into either the liquefaction group or the non-liquefaction group. To this end, discriminant analysis can be used for liquefaction potential evaluation, an approach pioneered by Christian and Swiger (1975). To use discriminant analysis, let x = {x1, x2, …, xr} denote a vector that characterizes a liquefied or non-liquefied case history. For instance, in Christian and Swiger (1975), x = {ln(CSR), N1,60}, where N1,60 is the corrected standard penetration test (SPT) blow count. The key assumption in the discriminant analysis is that both liquefied cases and non-liquefied cases follow the same multivariate normal distribution. If this assumption is not valid, discriminant analysis should be used with caution.
Let μL and μNL denote the mean values of x for the liquefied cases and non-liquefied cases, respectively. Let Σ denote the covariance matrix of x for all the case histories in the calibration database. Often, Σ is estimated as the weighted average of the covariance matrices of the liquefied and non-liquefied case histories (Lai et al. 2004). In the discriminant analysis, a discriminant function V is first defined as follows:
$V = \left[ \mathbf{x} - 0.5 \left( \boldsymbol{\mu}_{NL} + \boldsymbol{\mu}_{L} \right) \right]^{T} \boldsymbol{\Sigma}^{-1} \left( \boldsymbol{\mu}_{NL} - \boldsymbol{\mu}_{L} \right)$  (2)
If V is larger than a certain critical value Vc, it indicates non-liquefaction; otherwise it
indicates liquefaction. The critical value Vc is often determined based on the probability of a
wrong prediction, and in particular, the probability of a liquefied case being wrongly predicted as
a non-liquefied case. Let P(NL|L) denote the probability of the aforementioned wrong prediction,
which is related to the discriminant function V as follows (Anderson 1984):
$P(NL \mid L) = 1 - \Phi\!\left( \frac{V + 0.5\alpha}{\sqrt{\alpha}} \right)$  (3)
where α is the squared Mahalanobis distance measuring the dissimilarity between the liquefied and non-liquefied case histories in the calibration database, and Φ is the cumulative distribution function (CDF) of a standard normal random variable. The distance measure α can be computed using the following equation:

$\alpha = \left( \boldsymbol{\mu}_{NL} - \boldsymbol{\mu}_{L} \right)^{T} \boldsymbol{\Sigma}^{-1} \left( \boldsymbol{\mu}_{NL} - \boldsymbol{\mu}_{L} \right)$  (4)

As shown in Eqs. (3) and (4), the probability of wrong prediction is a function of V. Therefore, one can solve for the value of Vc for a given probability of wrong prediction.
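A minimal sketch of Eqs. (2) to (4) follows; the group means and pooled covariance are hypothetical stand-ins for a calibrated database, and the √α form of Eq. (3) follows the reconstruction above.

```python
# Discriminant analysis sketch for liquefaction classification, Eqs. (2)-(4).
import numpy as np
from scipy.stats import norm

mu_L = np.array([-1.2, 12.0])    # mean of {ln(CSR), N1,60}, liquefied group (assumed)
mu_NL = np.array([-1.8, 22.0])   # mean of the non-liquefied group (assumed)
Sigma = np.array([[0.16, -0.4],  # pooled covariance matrix (assumed)
                  [-0.4, 36.0]])
Sigma_inv = np.linalg.inv(Sigma)

def V(x):
    """Discriminant function, Eq. (2)."""
    return (x - 0.5 * (mu_NL + mu_L)) @ Sigma_inv @ (mu_NL - mu_L)

# Squared Mahalanobis distance between the two groups, Eq. (4):
alpha = (mu_NL - mu_L) @ Sigma_inv @ (mu_NL - mu_L)

def p_wrong(v):
    """P(NL|L): a liquefied case wrongly classified as non-liquefied, Eq. (3)."""
    return 1.0 - norm.cdf((v + 0.5 * alpha) / np.sqrt(alpha))

x_new = np.array([-1.5, 15.0])   # a hypothetical future case
print(V(x_new), p_wrong(V(x_new)))
```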
Christian and Swiger (1975) first applied discriminant analysis to derive an SPT-based criterion for liquefaction potential evaluation. Lai et al. (2004) developed a CPT-based criterion for liquefaction potential analysis. As noted by Christian and Baecher (2015), under some “not-too-unreasonable” conditions, discriminant analysis can yield the same result as logistic regression. However, logistic regression seems to be more popular and preferred in current practice (e.g., Jha et al. 2009; Juang et al. 2002 & 2006; Moss et al. 2006). Nevertheless, discriminant analysis is reviewed here as it represents the earliest attempt to use probabilistic methods for soil liquefaction potential evaluation.
LOGISTIC REGRESSION
Logistic regression is often used to predict the response of a binary system. Whether or not a soil
will liquefy when subjected to a seismic loading may be considered a binary event; thus, the
logistic regression can be used to predict liquefaction potential (e.g., Liao et al. 1988; Toprak et
al. 1999; Juang et al. 2002, 2003 & 2006; Lai et al. 2006). The widespread application of logistic
regression may be due to its ease of use (e.g., Juang et al. 2002 & 2015). Let x1, x2, …, xr denote
explanatory variables in a regression model. In the logistic regression model, the liquefaction
potential is measured by the probability of liquefaction, PL, under the assumption that ln[PL/(1-
PL)] is a linear function of explanatory variables as follows (e.g., Liao et al. 1988):
$\ln\!\left( \frac{P_L}{1 - P_L} \right) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_r x_r$  (5)
where θ = {θ0, θ1, θ2, …, θr} are regression coefficients to be determined. Like the discriminant analysis, a logistic regression model is most appropriate when the explanatory variables follow the normal distribution (e.g., Hoffmann 2004).
Let D denote the calibration database. Suppose there are NL liquefied cases and NNL non-
liquefied cases in the calibration database. The likelihood function of θ given the calibration
database D can be written as follows (e.g., Juang et al. 2015):
$l(\boldsymbol{\theta} \mid D) = \prod_{i=1}^{N_L} \frac{1}{1 + \exp\{-[\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_r x_r]\}} \times \prod_{j=1}^{N_{NL}} \left( 1 - \frac{1}{1 + \exp\{-[\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_r x_r]\}} \right)$  (6)
The parameters of the logistic regression, i.e., θ, can then be readily estimated by maximizing the likelihood function in Eq. (6) using the maximum likelihood principle.
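A sketch of the fit follows, using a tiny synthetic dataset and a general-purpose optimizer; it is not the authors' implementation, only an illustration of maximizing the likelihood in Eq. (6).

```python
# Maximum likelihood fit of the logistic model, Eqs. (5)-(6), on synthetic data.
import numpy as np
from scipy.optimize import minimize

# Hypothetical explanatory variables (e.g., ln(CSR), N1,60) and outcomes
# (1 = liquefied, 0 = non-liquefied).
X = np.array([[-1.0, 10.0], [-1.1, 12.0], [-1.6, 20.0], [-1.9, 25.0],
              [-0.9, 14.0], [-1.7, 22.0], [-1.2, 11.0], [-2.0, 28.0]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

def neg_log_likelihood(theta):
    z = theta[0] + X @ theta[1:]
    p = 1.0 / (1.0 + np.exp(-z))   # P_L for each case, Eq. (5)
    eps = 1e-12                    # numerical guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

theta_hat = minimize(neg_log_likelihood, x0=np.zeros(3), method="Nelder-Mead").x
print(theta_hat)  # fitted regression coefficients
```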
Adhering to the rule of random sampling (i.e., randomly selecting samples from the population) is important when applying statistical methods. In post-earthquake liquefaction surveys, one often tends to be more interested in liquefied sites than in non-liquefied sites. As a result, the proportion of liquefied cases in the calibration database may exceed the proportion in the real world (Cetin et al. 2002; Oommen et al. 2010). The weighted likelihood method may be used to account for this choice-based sampling bias in the calibration database, and Eq. (6) may be modified accordingly:
$l(\boldsymbol{\theta} \mid D) = \prod_{i=1}^{N_L} \left( \frac{1}{1 + \exp\{-[\theta_0 + \theta_1 x_1 + \cdots + \theta_r x_r]\}} \right)^{w_L} \times \prod_{j=1}^{N_{NL}} \left( 1 - \frac{1}{1 + \exp\{-[\theta_0 + \theta_1 x_1 + \cdots + \theta_r x_r]\}} \right)^{w_{NL}}$  (7)

$w_L = \frac{Q_p \left( N_L + N_{NL} \right)}{N_L} = \frac{Q_p}{Q_s}$  (8)

$w_{NL} = \frac{\left( 1 - Q_p \right) \left( N_L + N_{NL} \right)}{N_{NL}} = \frac{1 - Q_p}{1 - Q_s}$  (9)
where Qs and Qp are the proportions of liquefied cases in the database (i.e., sample) and in the
real world (i.e., population), respectively.
Evaluating the effect of sampling bias in model calibration requires knowledge of Qp, which in practice is very difficult to ascertain. Cetin et al. (2002) assessed the value of Qp through a survey of expert opinions. Although the exact value of Qp used in Cetin et al. (2002) was not directly reported, it can be back-calculated as follows. Let Qs0, wL0, and wNL0 denote Qs, wL, and wNL of the database used in Cetin et al. (2002). As there are 112 liquefied and 89 non-liquefied cases in the calibration database used in that study (Cetin et al. 2002), Qs0 = 0.557. The adjusting weights used in Cetin et al. (2002) satisfied the relationship wNL0/wL0 = 1.5. Substituting Eqs. (8) and (9) and Qs0 = 0.557 into this relationship yields Qp = 0.456, a value that represents the knowledge about Qp in the absence of other information (Juang et al. 2009).
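The back-calculation can be reproduced in a few lines:

```python
# Back-calculation of Qp from w_NL0/w_L0 = 1.5 and the database proportions.
Qs0 = 112 / (112 + 89)   # proportion of liquefied cases in the sample, 0.557

# From Eqs. (8)-(9): w_NL/w_L = [(1 - Qp)/(1 - Qs0)] / (Qp/Qs0) = 1.5.
# Solving this relationship for Qp:
ratio = 1.5
Qp = Qs0 / (Qs0 + ratio * (1 - Qs0))
print(round(Qs0, 3), round(Qp, 3))  # 0.557, 0.456
```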
Logistic regression is a member of the family of generalized linear models, a class of statistical models well suited to the analysis of binary systems (e.g., Hoffmann 2004). Yet the choice of logistic regression over other members of this family for liquefaction model development seems rather
arbitrary. Zhang et al. (2013) assessed the applicability of different generalized linear models for
liquefaction potential assessment. They found that when the seismic loading (i.e., CSR) is
modest (i.e., CSR < 0.3), the predictions from different generalized linear models are quite
similar to each other, as the predictions of different models at this seismic loading level are well
constrained by the abundant case histories. At higher levels of seismic loading, however, the
prediction is sensitive to the adopted regression model due to lack of data in this zone. The
logistic regression model may not always be the optimal solution for constructing liquefaction
models.
ARTIFICIAL NEURAL NETWORK
In the discriminant analysis or logistic regression, the liquefaction model constructed may depend strongly on choices made by the analyst, especially on the way the data are normalized and on the form of the function used to separate the two classes of observations (Christian and Baecher 2015). The artificial neural network (ANN), a model-free and adaptive tool for learning the significant structure in data, has attracted substantial attention in liquefaction potential assessment. The ANN has the advantage over traditional data analysis methods in that it does not require prior knowledge of the cause-effect relationship (Juang and Chen 1999 & 2000).
The first application of the ANN in liquefaction potential assessment was reported by Goh (1995 & 1996), in which feed-forward ANNs were used to predict liquefaction initiation based on the SPT and CPT, respectively. It was found that the ANN performed better than the traditional simplified methods for liquefaction potential assessment based on the available database of case histories. Juang and Chen (1999) investigated the use of the feed-forward ANN to assess liquefaction potential using a CPT-based database of liquefaction case histories, and found that the Levenberg-Marquardt algorithm is generally more efficient than the backpropagation algorithm with learning rate and momentum. In a later study, Goh (2002) reported that the probabilistic neural network (PNN) is computationally more efficient than the feed-forward ANN. Young-Su and Byung-Tak (2006) found that a well-trained ANN can predict the CRR of sands measured in laboratory studies with reasonable accuracy.
[Figure 2 sketches the limit state boundary between the liquefied zone and the non-liquefied zone in the space of seismic load versus soil resistance, with labeled search paths (A, B, C, D) crossing the boundary.]
The main disadvantage of the neural-network approach seems to be its inability to trace
and explain the step-by-step logic used to arrive at the outputs from the inputs provided (Goh
1995). To overcome such a limitation, Juang and Chen (2000) suggested a three-step procedure
to develop an explicit liquefaction potential assessment model based on a trained ANN. The
developed liquefaction potential model has a form similar to those developed using the more
traditional methods such as logistic regression, and hence is easy to understand and use. In their
procedure, the first step is to train an ANN based on the adopted liquefaction database. Then,
points on the boundary separating the liquefied region and non-liquefied region are searched
based on the trained ANN. The search can be carried out by means of path A, B, C, or D in Fig. 2, depending on whether the case is a liquefied case or a non-liquefied case. For a liquefied case in the database (e.g., a closed circle in Fig. 2), the level of seismic loading (i.e., CSR) is gradually reduced until the case becomes non-liquefied according to the trained ANN. For a non-liquefied case in the database (e.g., an open circle in Fig. 2), the level of seismic loading is gradually increased until the case becomes liquefied according to the trained ANN. Similarly, the search for points on the limit state can be carried out by changing the soil strength (path B or D). Finally, regression analysis is performed to fit an explicit equation to the searched discrete points on the boundary, which yields an explicit expression for the boundary curve learned implicitly by the ANN model. In this way, a complex ANN can be simplified into a form similar to the existing empirical models that are easily accessible. The above procedure was also used in Juang et al. (2001) and Muduli and Das (2015) to develop liquefaction potential assessment models.
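A minimal sketch of the path A/C search follows, with a generic scikit-learn network standing in for the trained ANN and purely synthetic training data; it only illustrates the idea of lowering CSR until the predicted class flips.

```python
# Boundary search on a trained classifier (stand-in for the trained ANN).
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: columns are (qc1N, CSR); 1 = liquefied.
X = np.array([[60, 0.30], [80, 0.35], [120, 0.20], [150, 0.25],
              [70, 0.15], [140, 0.45], [90, 0.40], [160, 0.30]])
y = np.array([1, 1, 0, 0, 0, 1, 1, 0])
model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                      max_iter=5000, random_state=0).fit(X, y)

def boundary_csr(qc1N, csr_start, step=0.005):
    """Lower CSR from a liquefied state until the predicted class flips."""
    csr = csr_start
    while csr > 0 and model.predict([[qc1N, csr]])[0] == 1:
        csr -= step
    return csr  # approximate CSR on the limit state at this qc1N

print(boundary_csr(80, 0.35))
```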
The ANN has also been used to predict liquefaction-induced lateral ground displacement, known as lateral spreading. For instance, Wang and Rahman (1999) showed that the ANN is a promising tool for predicting liquefaction-induced lateral spreading, and Baziar and Ghorbani (2005) developed ANN-based models for the same purpose. Chiru-Danzer et al. (2001) found that an ANN-based lateral spreading prediction model is more accurate than existing models based on traditional regression analysis.
Let x denote the uncertain parameters in the liquefaction potential assessment model, and g(x) =
0 denote the limit state function with g(x) > 0 indicating non-liquefaction and g(x) < 0 indicating
liquefaction. The limit state function can be constructed based on a deterministic liquefaction
potential assessment model. For instance, one possible form of the limit state function is
expressed as follows:
$g(\mathbf{x}) = CRR - CSR$  (10)
Take the deterministic model suggested by Youd et al. (2001) as an example. Here, the two parameters N1,60 and FC (i.e., fines content) required in the calculation of CRR are modeled as random variables to account for inherent spatial variability and testing error. Further, the input parameters for evaluating CSR, including amax, Mw, σ′v, and σv, are always uncertain and hence modeled as random variables. It is noted that the evaluation of MSF and Kσ is subject to model error, which is, however, included in the model uncertainty of CSR. As such, when the model suggested by Youd et al. (2001) is used, x = {N1,60, FC, amax, Mw, σ′v, σv}. The uncertainty
associated with soil properties can be assessed based on site exploration data (e.g., Haldar and
Tang 1979). The uncertainties associated with the seismic loading may be estimated from the
established attenuation relationships, or from the seismic hazard maps presented on the USGS seismic hazard website. One may refer to Juang et al. (2008a) for a detailed discussion on how to estimate the uncertainties associated with different parameters. As an example, Table 1 summarizes typical coefficients of variation (COVs) of the uncertain variables in liquefaction analysis. An uncertainty analysis performed by Christian and Baecher (2015) showed that the CSR contributes approximately one-third of the overall uncertainty in SPT-based liquefaction analysis.
With the knowledge of COVs of the input parameters of a deterministic model, structural
reliability theory can readily be used to assess the probability of g(x) < 0, i.e., the liquefaction
probability. In Haldar and Tang (1979), the mean value first order second moment (MVFOSM) method is used. In later studies, the advanced first order second moment (AFOSM) method, also known as the first order reliability method (FORM), is preferred, as optimization software has become increasingly available to implement FORM. In particular, the spreadsheet method (e.g., Low and Tang 1997) greatly facilitates the application of FORM. Note that FORM is more accurate than MVFOSM when the limit state function is non-linear and the input random variables are not normal. In FORM, the reliability index against soil liquefaction is computed as follows (e.g., Ditlevsen and Madsen 2007):
$\beta = \min_{g(\mathbf{x}) = 0} \sqrt{ \left( \mathbf{x} - \boldsymbol{\mu}_x \right)^{T} \mathbf{C}_x^{-1} \left( \mathbf{x} - \boldsymbol{\mu}_x \right) }$  (11)
where μx = mean vector of x, and Cx = covariance matrix of x. Note that the above equation is only valid when x is multivariate normal; however, it can be extended to consider correlated, non-normal random variables (e.g., Low and Tang 2007). After the reliability index is obtained, the liquefaction probability can be computed using the following equation:

$P_L = 1 - \Phi(\beta)$  (12)
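A minimal FORM sketch for the limit state of Eq. (10) follows, assuming CRR and CSR are independent normal variables with illustrative, uncalibrated means and COVs.

```python
# FORM reliability index for g(x) = CRR - CSR, Eqs. (10)-(12).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

mu = np.array([0.25, 0.18])               # means of (CRR, CSR), assumed values
sigma = np.array([0.20, 0.15]) * mu       # standard deviations from assumed COVs
C_inv = np.linalg.inv(np.diag(sigma**2))  # inverse covariance matrix

def g(x):
    return x[0] - x[1]                    # limit state function, Eq. (10)

def distance(x):
    d = x - mu
    return np.sqrt(d @ C_inv @ d)         # quantity minimized in Eq. (11)

res = minimize(distance, x0=mu * 1.01,
               constraints={"type": "eq", "fun": g})
beta = res.fun
print(f"beta = {beta:.2f}, PL = {1 - norm.cdf(beta):.3f}")  # Eq. (12)
```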
In the above discussion, only the uncertainties associated with the input parameters of a
deterministic model are considered. As any model is only an abstraction of the real world, model
uncertainty almost always exists. This is particularly true for the semi-empirical models for
liquefaction potential assessment. For a realistic estimate of liquefaction probability, the
reliability analysis must consider both parameter and model uncertainties. The evaluation of
model uncertainty and its effect is a challenging problem in the structural reliability field (e.g.,
Ditlevsen and Madsen 2007). In liquefaction potential evaluation, substantial progress has been
made to include the effect of model uncertainty on the reliability analysis due to the availability
of a large amount of performance data. As Bayes’ theorem is the main basis for model
uncertainty characterization (e.g., Zhang et al. 2009 & 2012), the latter is discussed along with
the presentation of the Bayesian method.
Table 1. Typical COVs* of uncertain variables in liquefaction analysis:
amax: 0.10 − 0.20 (Juang et al. 1999; Cetin 2000)
Mw: 0.05 − 0.10 (Juang et al. 1999; Cetin 2000)
*The word “typical” here implies the range approximately bounded by the 15th percentile and the 85th
percentile. The actual value could be higher or lower, depending on the variability of the site and the
quality and quantity of data that are available.
BAYESIAN METHODS
The discriminant analysis, logistic regression, and ANN mainly deal with the uncertain location
of the boundary between the liquefied region and the non-liquefied region. However, as
illustrated in Fig. 3, when developing new liquefaction models based on an adopted database of
case histories, it is important to recognize the uncertainties associated with the case histories. In
recent years, substantial efforts have been made in the process of model development and model
calibration to address the uncertainties in the case histories in the calibration database. To this
end, Bayes’ theorem has been proven as a powerful tool to consider different sources of
uncertainties in a consistent way. In the sub-sections that follow, the application of Bayesian
methods in liquefaction potential assessment is introduced. We notice that in some studies (e.g.,
Juang et al. 2015) the maximum likelihood method is used. As the maximum likelihood method
can be loosely considered as a special case of the Bayesian method when a special prior
distribution is used, studies based on the maximum likelihood method are also included in this
section.
Figure 3. Plot showing sample mean and sample mean ±1 standard deviation of each case history (closed circles for liquefaction and open circles for non-liquefaction) (Moss et al. 2006)
Let f(θ, σε) denote the prior distribution of θ and σε. Based on Bayes' theorem, the posterior distribution of θ and σε can be written as follows:

$f(\boldsymbol{\theta}, \sigma_\varepsilon \mid D) \propto l(\boldsymbol{\theta}, \sigma_\varepsilon \mid D) \, f(\boldsymbol{\theta}, \sigma_\varepsilon)$  (17)
The above procedure was first proposed by Cetin et al. (2002 & 2004) to develop a model for liquefaction potential assessment using an SPT-based database of case histories. It was also used in Moss et al. (2006) and, more recently, in Boulanger and Idriss (2015b) for developing liquefaction potential models using CPT-based databases of liquefaction case histories.
$P_L = P(L \mid \beta) = \frac{f(\beta \mid L) \, P(L)}{f(\beta \mid L) \, P(L) + f(\beta \mid NL) \, P(NL)}$  (18)
The above equation is called the reliability index mapping method in the literature. Here, the β-PL relationship is established based on comparison with the actual distributions of β for the liquefied and non-liquefied cases. Therefore, whatever uncertainty exists in the model will be accounted for, and the effect of model uncertainty on liquefaction potential assessment is automatically considered even though it is not explicitly characterized.
To develop the β–PL relationship using Eq. (18), it is necessary to first determine the distributions of the reliability indexes for both the liquefied and non-liquefied groups, i.e., f(β|L) and f(β|NL). Take f(β|L) as an example. To determine f(β|L), the reliability index of each liquefied case in the calibration database is computed first. The calculated reliability index values for all cases in the liquefied group are then fitted with a theoretical probability distribution, which yields f(β|L). The same procedure can be used to derive f(β|NL). Eq. (18) was used in Juang et al. (2000, 2006 & 2009) as well as Muduli and Das (2013) to enhance liquefaction potential assessment based on FORM. As an example, Fig. 4 compares the notional failure probability calculated without considering the model uncertainty with the actual probability predicted from the Bayesian mapping function (Juang et al. 2006). The difference between the two curves shows the effect of model uncertainty and the significance of model calibration.
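To make the procedure concrete, the following is a minimal sketch in Python of the Eq. (18) mapping, assuming hypothetical β values for the two groups and normal distributions fitted to each group (actual studies fit whichever theoretical distribution best matches the computed reliability indexes):

import numpy as np
from scipy import stats

# Hypothetical reliability indexes (e.g., from FORM) for the case histories
# in the calibration database; placeholders, not real data.
beta_liq = np.array([-1.2, -0.5, 0.1, 0.4, -0.8, 0.0])   # liquefied cases
beta_nonliq = np.array([0.9, 1.5, 2.1, 0.7, 1.8, 2.5])   # non-liquefied cases

# Fit a theoretical distribution to each group: f(beta|L) and f(beta|NL).
f_L = stats.norm(*stats.norm.fit(beta_liq))
f_NL = stats.norm(*stats.norm.fit(beta_nonliq))

# Prior probabilities P(L) and P(NL), taken here as the sample proportions.
p_L = len(beta_liq) / (len(beta_liq) + len(beta_nonliq))
p_NL = 1.0 - p_L

def prob_liquefaction(beta):
    """Eq. (18): map a reliability index to a probability of liquefaction."""
    num = f_L.pdf(beta) * p_L
    return num / (num + f_NL.pdf(beta) * p_NL)

print(prob_liquefaction(0.5))  # PL for a new case with beta = 0.5

The same skeleton applies to the Fs–PL mapping of Eq. (19), with Fs values in place of β. Note that taking P(L) and P(NL) as sample proportions is itself an assumption, since a calibration database is rarely an unbiased sample of field conditions.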
Figure 4. Calibration of the β–PL relationship based on Eq. (18) (Juang et al. 2006). The figure plots the probability of liquefaction, PL, against the reliability index, β, comparing the Bayesian mapping curve with the notional concept curve.
Figure 5. Comparison of predictions from the Bayesian mapping function and logistic regression
(Juang et al. 2002)
Following the same logic, the above Bayesian mapping approach can be extended to
calibrate the Fs – PL relationship as follows (e.g., Juang et al. 2000 & 2002):
P_L = P(L \mid F_s) = \dfrac{f(F_s \mid L)\, P(L)}{f(F_s \mid L)\, P(L) + f(F_s \mid NL)\, P(NL)}    (19)
where f(Fs|L) and f(Fs|NL) denote the PDFs of Fs calculated from a deterministic liquefaction potential assessment model for the liquefaction group and the non-liquefaction group, respectively. Through this procedure, one can conveniently establish the relationship between the Fs computed from a deterministic liquefaction model and the implied liquefaction probability, which in most situations is more informative for decision makers. Juang et al. (2002) showed that the failure probabilities computed from Eq. (19) are comparable with those interpreted from the logistic regression models (see Fig. 5). However, the method-dependent factor of safety (FOS) computed from a deterministic model, g(x), is usually related to the actual FOS via a model bias factor c as shown below:
F_{sa} = c\, g(\mathbf{x})    (20)
where Fsa = actual FOS. In the literature, the model bias factor c is often assumed to follow a lognormal distribution. Let μlnc and σlnc denote the mean and standard deviation of ln(c), respectively. The task of model uncertainty characterization is then reduced to the estimation of μlnc and σlnc.
The model uncertainty characterization process can be classified into two categories, depending on whether the uncertainty in x is considered. If the uncertainty in x is not considered, the calibrated bias factor lumps together the effects of model uncertainty and of the uncertainty in x; hence it represents the combined effect of model uncertainty and parameter uncertainty.
Consider first the category in which the effect of uncertainty in x is not considered. In this scenario, let x̂i denote the nominal values of x used for the ith case history in the model characterization process. The likelihood of observing a database with NL liquefied cases and NNL non-liquefied cases can be written as follows:
l(\mu_{\ln c}, \sigma_{\ln c} \mid D) = \prod_{i=1}^{N_L} \Phi\!\left[ \frac{\ln\left(1/g(\hat{\mathbf{x}}_i)\right) - \mu_{\ln c}}{\sigma_{\ln c}} \right] \prod_{j=1}^{N_{NL}} \left\{ 1 - \Phi\!\left[ \frac{\ln\left(1/g(\hat{\mathbf{x}}_j)\right) - \mu_{\ln c}}{\sigma_{\ln c}} \right] \right\}    (21)
The values of μlnc and σlnc can then be estimated by maximizing the above likelihood function. The results are identical to those obtained from a Bayesian method based on the maximum posterior density when a prior distribution f(μlnc, σlnc) ∝ 1 is adopted. Eq. (21) was used in Ku et al. (2012) to develop a probabilistic version of the Robertson and Wride (1998) model for liquefaction potential assessment. Note that it is also possible to assume other types of distribution for the model bias factor; different assumptions can be quantitatively compared based on model comparison theory (e.g., Juang et al. 2015).
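The following is a minimal sketch of this calibration in Python, assuming hypothetical nominal FOS values g(x̂) for the two groups; it simply minimizes the negative log-likelihood of Eq. (21):

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

g_liq = np.array([0.7, 0.9, 1.1, 0.8])     # nominal FOS, liquefied cases
g_nonliq = np.array([1.2, 1.5, 1.0, 1.8])  # nominal FOS, non-liquefied cases

def neg_log_likelihood(params):
    mu, sigma = params
    # For a lognormal bias factor, P(liquefaction) = Phi[(ln(1/g) - mu)/sigma].
    z_liq = (np.log(1.0 / g_liq) - mu) / sigma
    z_nonliq = (np.log(1.0 / g_nonliq) - mu) / sigma
    # logsf(z) = log(1 - Phi(z)), the non-liquefaction term of Eq. (21).
    return -(np.sum(norm.logcdf(z_liq)) + np.sum(norm.logsf(z_nonliq)))

res = minimize(neg_log_likelihood, x0=[0.0, 0.3],
               bounds=[(-1.0, 1.0), (1e-3, 2.0)])
mu_lnc, sigma_lnc = res.x

As noted above, this maximum likelihood estimate coincides with the maximum posterior density estimate under the non-informative prior f(μlnc, σlnc) ∝ 1.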
If the uncertainties in x are considered, the likelihood function can be written as follows:

l(\mu_{\ln c}, \sigma_{\ln c} \mid D) = \prod_{i=1}^{N_L} \Phi\!\left[ \frac{\ln 1 - \mu_{g_i} - \mu_{\ln c}}{\sqrt{\sigma_{\ln c}^2 + \sigma_{g_i}^2}} \right] \prod_{j=1}^{N_{NL}} \left\{ 1 - \Phi\!\left[ \frac{\ln 1 - \mu_{g_j} - \mu_{\ln c}}{\sqrt{\sigma_{\ln c}^2 + \sigma_{g_j}^2}} \right] \right\}    (22)
where μgi and σgi = the mean and standard deviation of ln g(x) caused by the uncertainty in x for the ith case history, which can be calculated by methods such as Monte Carlo simulation (e.g., Zhang et al. 2009). Eq. (22) was used in Huang et al. (2012) to calibrate the model uncertainty associated with the model suggested in Juang et al. (2003). A similar approach was also used in Juang et al. (2013a) to calibrate the model uncertainty of a liquefaction-induced settlement prediction model, and in Khoshnevisan et al. (2015) to calibrate a liquefaction-induced lateral spread prediction model.
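As an illustration of the Monte Carlo step, the sketch below estimates μgi and σgi for a single case history and evaluates the corresponding Eq. (22) likelihood term; the model g() and all parameter statistics are hypothetical placeholders, not any published model:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical input uncertainty for the i-th case history (lognormal
# distributions keep both quantities positive).
qc = rng.lognormal(np.log(100.0), 0.15, n)   # cone tip resistance
csr = rng.lognormal(np.log(0.25), 0.15, n)   # cyclic stress ratio

def g(qc, csr):
    # Placeholder FOS model; a real application would use, e.g., the
    # Robertson and Wride (1998) model.
    return (qc / 80.0) / (5.0 * csr)

ln_g = np.log(g(qc, csr))
mu_gi, sigma_gi = ln_g.mean(), ln_g.std()

# Eq. (22) term for a liquefied case (the numerator's ln 1 = 0), using
# hypothetical calibrated values; a non-liquefied case contributes 1 - p_i.
mu_lnc, sigma_lnc = 0.1, 0.3
p_i = norm.cdf((0.0 - mu_gi - mu_lnc) / np.hypot(sigma_lnc, sigma_gi))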
It should be noted that the model calibration results must be used in a manner consistent with the assumptions made in the characterization of model uncertainty. If the model is calibrated based on nominal values, then nominal values of x shall be used for liquefaction potential evaluation in future applications. On the other hand, if the model calibration is conducted with consideration of the uncertainties in x, then in future applications x should be modeled as random variables (Juang et al. 2013b). Blind use of existing probabilistic triggering models for liquefaction analysis without knowing how these models were calibrated may lead to unnecessary errors.
PERFORMANCE-BASED METHODS
In a performance-based framework, the conditional probability of liquefaction is combined with the seismic hazard at the site. The mean annual rate of liquefaction, λL, can be computed by summing over a discretized seismic hazard (e.g., Kramer and Mayfield 2007):

\lambda_L = \sum_{i=1}^{N_{a_{\max}}} \sum_{j=1}^{N_{M_w}} P\left(L \mid a_{\max_i}, M_{w_j}\right) \Delta\lambda_{a_{\max_i}, M_{w_j}}    (23)
where NMw and Namax = the numbers of magnitude and peak acceleration increments into which the "hazard space" is subdivided; and Δλamaxi,Mwj = the incremental mean annual rate of exceedance for intensity measure amaxi and magnitude Mwj. The values of Δλamaxi,Mwj can be visualized as a series of seismic hazard curves distributed with respect to magnitude according to the results of a deaggregation analysis, as shown in Fig. 6. A similar procedure was used in Baker and Faber (2008) to assess the liquefaction potential in a region considering the spatial variability of soil properties and earthquake shaking intensity.
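The double summation in Eq. (23) is straightforward once the incremental hazard rates are available. A minimal sketch follows, with entirely hypothetical hazard increments and a placeholder triggering model:

import numpy as np

amax_bins = np.array([0.1, 0.2, 0.3, 0.4])  # amax_i increments (g)
mw_bins = np.array([5.5, 6.5, 7.5])         # Mw_j increments

# Hypothetical incremental mean annual rates of exceedance from deaggregated
# seismic hazard curves (rows: amax_i, columns: Mw_j).
d_lambda = np.array([[2e-3, 1e-3, 4e-4],
                     [8e-4, 5e-4, 2e-4],
                     [3e-4, 2e-4, 1e-4],
                     [1e-4, 8e-5, 5e-5]])

def p_liq(amax, mw):
    # Placeholder for P(L | amax, Mw); a real analysis would use a
    # probabilistic triggering model.
    return min(1.0, 0.5 * amax * (mw / 7.5) ** 2)

lambda_L = sum(p_liq(a, m) * d_lambda[i, j]
               for i, a in enumerate(amax_bins)
               for j, m in enumerate(mw_bins))
print(lambda_L)  # mean annual rate of liquefaction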
The above procedure has also been extended to estimate the hazard curve of lateral displacement caused by soil liquefaction (e.g., Franke and Kramer 2014). To estimate the lateral displacement, knowledge of both the moment magnitude (Mw) and the distance from the site to the source (R) is required (e.g., Youd et al. 2002; Bartlett and Youd 1995; Rauch and Martin 2000). Based on the theorem of total probability, it can be shown that the annual rate of the lateral displacement (DH) exceeding a threshold value (d) can be computed as follows:
\lambda(D_H > d \mid S) = \sum_{i=1}^{N_s} \lambda_i\left(M_w > M_{w\min}\right) \sum_{j=1}^{N_{M_w}} \sum_{k=1}^{N_R} P\left(D_H > d \mid S, M_w = M_{w_j}, R = R_k\right) P\left(M_w = M_{w_j}, R = R_k\right)    (24)
where λ(DH > d|S) = the annual rate at which DH exceeds d for a given site condition denoted by S; λi(Mw > Mwmin) = the annual rate of occurrence of earthquakes with a magnitude larger than Mwmin at earthquake source i; Ns = the number of earthquake sources; and NMw and NR = the numbers of magnitude and distance increments into which the "hazard space" is subdivided.
When assessing the liquefaction potential, the statistics of the ground motion parameters may be obtained from the USGS Seismic Hazard Maps website. Liquefaction potential assessment, however, requires the peak acceleration at the ground surface, amax, whereas the USGS website provides only the statistics of the rock outcrop peak ground acceleration (PGA). Thus, a site response analysis is required; alternatively, the ground surface amax may be estimated through an amplification factor that relates the outcrop PGA to the surface amax. Further, Juang et al. (2008b) suggested a procedure to derive the joint distribution of Mw and amax based on data from the USGS website, and calculated the probability of liquefaction during a given exposure time T based on the total probability theorem as follows:
P_{L,A} = \sum_{j=1}^{N_{a_{\max}}} \sum_{i=1}^{N_{M_w}} P\left(L \mid M_w = M_{w_i}, a_{\max} = a_{\max_j}\right) P\left(M_w = M_{w_i}, a_{\max} = a_{\max_j}\right)    (25)
where P(L|Mw = Mwi, amax = amaxj) = the conditional probability of liquefaction given a set of seismic parameters amax and Mw; and P(Mw = Mwi, amax = amaxj) = the joint probability of Mw = Mwi and amax = amaxj during the given exposure time T.
To apply the method in Juang et al. (2008b), the joint distribution of amax and Mw must first be obtained. As an illustration, Fig. 7 shows the joint distribution of these two variables at an example site, established using the method suggested in Juang et al. (2008b). Once the joint probability P(Mw = Mwi, amax = amaxj) is determined (note: a discrete form is used herein), the conditional probability of liquefaction P(L|Mw = Mwi, amax = amaxj) can be determined using any deterministic liquefaction evaluation model, such as Youd et al. (2001), together with an Fs–PL mapping such as Eq. (19). Thus, the evaluation of Eq. (25) is straightforward.
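A minimal sketch of Eq. (25) is given below; it mirrors the hazard-space summation of Eq. (23), except that joint probabilities over the exposure time replace incremental annual rates. All numbers and the conditional model are hypothetical placeholders:

import numpy as np

mw_bins = np.array([5.5, 6.5, 7.5])
amax_bins = np.array([0.1, 0.2, 0.3])

# Hypothetical joint probabilities P(Mw = Mw_i, amax = amax_j) during the
# exposure time T (rows: Mw_i, columns: amax_j); the remaining probability
# corresponds to no damaging earthquake occurring.
joint_p = np.array([[0.10, 0.040, 0.010],
                    [0.05, 0.020, 0.006],
                    [0.01, 0.005, 0.002]])

def p_liq_given(mw, amax):
    # Placeholder for P(L | Mw, amax); in practice this would come from a
    # deterministic model such as Youd et al. (2001) plus an Fs-PL mapping.
    return min(1.0, 0.4 * amax * (mw / 7.5) ** 2)

P_LA = sum(p_liq_given(m, a) * joint_p[i, j]
           for i, m in enumerate(mw_bins)
           for j, a in enumerate(amax_bins))

The same loop evaluates Eq. (26) if the conditional probability of liquefaction is replaced by the conditional probability that the settlement Dv exceeds the threshold d.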
Figure 7. Joint distribution of amax and Mw at an example site (Juang et al. 2008b)
For estimating liquefaction-induced settlement, the same ground motion parameters Mw and amax are required (e.g., Ishihara and Yoshimine 1992; Zhang et al. 2002; Tsukamoto et al. 2004; Cetin et al. 2009; Juang et al. 2013a). The procedure described in Juang et al. (2008b) can thus be extended to the assessment of liquefaction-induced ground settlement. For instance, Lu et al. (2009) suggested that the probability that the ground settlement, Dv, exceeds a threshold value d in an exposure time T can be estimated as follows:
P(D_v > d) = \sum_{j=1}^{N_{a_{\max}}} \sum_{i=1}^{N_{M_w}} P\left(D_v > d \mid M_w = M_{w_i}, a_{\max} = a_{\max_j}\right) P\left(M_w = M_{w_i}, a_{\max} = a_{\max_j}\right)    (26)
Again, the method suggested in Juang et al. (2008b) can be used to estimate the joint distribution of Mw and amax with data from the USGS website, and Eq. (26) can be evaluated in the same way as Eq. (25). This method was also adopted in Liu et al. (2016) for quantitative mapping of liquefaction-induced lateral spread hazard, based on a lateral spread prediction model in which Mw and amax are the input ground motion parameters.
In recent years, efforts have been made to develop simplified procedures for implementing performance-based liquefaction hazard assessment without going through the full procedures described above. In these studies, the results of a complete performance-based analysis are expressed in terms of a scalar parameter corresponding to a particular element of soil in a reference soil profile, and a procedure is provided to adjust that parameter to account for site-specific conditions that differ from those of the reference profile (Mayfield et al. 2010). For instance, Mayfield et al. (2010) present a simplified procedure for performance-based evaluation of soil liquefaction. Franke et al. (2014a) further suggested a simplified procedure for liquefaction potential assessment that is compatible with the existing AASHTO design specifications, and Ekstrom and Franke (2016) developed a similar procedure for performance-based evaluation of lateral spread displacements. The availability of these simplified performance-based liquefaction assessment procedures is expected to help the profession adopt the performance-based design methodology.
While great progress has been made in the probabilistic assessment of soil liquefaction, several challenges remain that deserve further research.
Firstly, the more recent simplified methods are developed based solely on case histories. Therefore, regardless of the method used for model development, the derived liquefaction assessment model is only as good as the adopted database. To illustrate this point, consider Fig. 8, which compares the predictions from several commonly used CPT-based liquefaction assessment models. The results show that different models usually produce relatively consistent predictions in the low to medium CSR region (i.e., 0.1 to 0.3), but the predictions from different models in the high CSR region (i.e., CSR > 0.4) differ substantially. This is mainly due to the lack of data in the high CSR region in the adopted database (Juang et al. 2006). Thus, there is a need to collect more case histories that can help define the boundary in the high CSR region.
(Figure 8: adjusted cyclic stress ratio, CSR7.5,σ, plotted against CPT resistance for liquefied and non-liquefied case histories, with boundary curves labeled "This study", Moss (2003), and Idriss and Boulanger (2004).)
Secondly, many probabilistic models have been reported in the literature; however, the predictions from even the most popular liquefaction models may differ greatly (e.g., Franke et al. 2014b). Note that these models are developed based on the existing databases, and hence may represent our "best" knowledge of liquefaction potential prediction. From a practical point of view, it is useful to investigate the causes of the disparity among predictions from different models. For instance, are the differences caused by differences in the adopted databases, or by differences in the model calibration methods and processes? Such efforts may help the profession work towards a consensus model summarizing our best knowledge for liquefaction potential assessment, and hence reduce confusion among practicing engineers.
Thirdly, model uncertainty is knowledge-based. Therefore, a better understanding of the liquefaction phenomenon can reduce the model uncertainty associated with a liquefaction model. Currently, quite a few challenges prevent accurate modeling of the liquefaction phenomenon with the simplified methods. Specifically, the effect of fines content on liquefaction resistance remains not well understood (e.g., Semple 2013); similar statements may be made regarding the age effect (e.g., Maurer et al. 2014) and the magnitude scaling factor (Boulanger and Idriss 2015a). The model uncertainty associated with liquefaction assessment models can be reduced if improved knowledge on the aforementioned challenges is gained through further research.
Figure 9. The relationship between the FOS computed from the Robertson and Wride model and the liquefaction probability in different regions (Zhang et al. 2016)
Last but not least, a simplified method is typically calibrated based on liquefaction case histories from a limited number of regions, yet it is then used worldwide. The premise is that the simplified method can be used in different regions with the same level of reliability. In recent years, however, it has been observed that a given simplified method, when applied to a region outside of its calibration database, may be less reliable (Li et al. 2012; Wotherspoon et al. 2015; Facciorusso et al. 2015). This suggests that the model uncertainty of a simplified method may be heterogeneous across regions. Zhang et al. (2016) studied the inter-region variability of the model error associated with the popular Robertson and Wride (1998) method and found that the model uncertainty associated with this model indeed varies from one region to another. In general, a region with more calibration data has less model uncertainty, and a region without any case histories experiences the greatest model uncertainty. Therefore, the Fs–PL relationship may differ from one region to another (Fig. 9), and to achieve a consistent level of reliability it is necessary to adopt different design FOS values in different regions. While this preliminary research provides useful insights into the characteristics of inter-region variability, further research is needed to gain a better understanding of the causes of inter-region variability, methods to reduce it, and techniques to reduce its impact on liquefaction potential assessment.
Simplified methods developed based on case histories have found wide application in liquefaction potential assessment. There are substantial uncertainties in interpreting the case histories and in the model development process. Combined with the need to meet performance-based design requirements, probabilistic methods have been increasingly used in liquefaction potential assessment. In this paper, the probabilistic methods for liquefaction potential assessment have been systematically reviewed, including discriminant analysis, logistic regression, artificial neural networks, Bayesian methods, and performance-based methods. In particular, the key assumptions, the advantages, the limitations, and the applications of each for liquefaction potential assessment are discussed. The challenges and the needs for further research are also summarized. It is suggested that the following four areas may need concerted effort and focus: collecting calibration case histories in the high CSR region; understanding the disparity between different probabilistic liquefaction analysis models; reducing model uncertainty by improving the understanding of the liquefaction phenomenon; and characterizing and reducing the inter-region variability of model uncertainty.
ACKNOWLEDGEMENT
The first author wishes to acknowledge the National Science Foundation and the U.S. Geological Survey for supporting his studies on soil liquefaction over the last two decades. The second author wishes to acknowledge the support of the National Natural Science Foundation of China (Project No. 41672276). The authors also wish to acknowledge the support of the Center for Risk Engineering & System Analytics (RESA), Clemson University.
REFERENCES
Chiru-Danzer, M., Juang, C. H., Christopher, R., and Suber, J. (2001). "Estimation of liquefaction-induced horizontal displacements using artificial neural networks." Can. Geotech. J., 38(1), 200-207.
Christian, J. T., and Baecher, G. B. (2015). "Bayesian methods and liquefaction." Proc., 12th Int. Conf. on Applications of Statistics and Probability in Civil Engineering (ICASP12), The University of British Columbia, Vancouver, Canada, Paper No. 102.
Christian, J. T., and Swiger, W. F. (1975). "Statistics of liquefaction and SPT results." J. Geotech. Eng. Div., 101(11), 1135-1150.
Cornell, C. A., and Krawinkler, H. (2000). "Progress and challenges in seismic performance assessment." PEER Center News, 3(2).
Huang, H. W., Zhang, J., and Zhang, L. M. (2012). "Bayesian network for characterizing model uncertainty of liquefaction potential evaluation models." KSCE J. Civ. Eng., 16, 714-722.
Ishihara, K., and Yoshimine, M. (1992). "Evaluation of settlements in sand deposits following liquefaction during earthquakes." Soils Found., 32(1), 173-188.
Jha, S. K., and Suzuki, K. (2009). "Liquefaction potential index considering parameter uncertainty." Eng. Geol., 107, 55-60.
Juang, C. H., Chen, C. J., Rosowsky, D. V., and Tang, W. H. (2000). “CPT-based liquefaction
analysis, Part 2: Reliability for design.” Geotechnique, 50 (5), 593-599.
Juang, C. H., and Chen, C. J. (1999). "CPT-based liquefaction evaluation using artificial neural networks." Computer-Aided Civil and Infrastructure Engineering, 14(3), 221-229.
Khoshnevisan, S., Juang, C. H., Zhou, Y. G., and Gong, W. (2015). "Probabilistic assessment of liquefaction-induced lateral spreads using CPT - Focusing on the 2010-2011 Canterbury earthquake sequence." Eng. Geol., 192, 113-128.
Kramer, S. L., and Mayfield, R. T. (2007). “Return period of soil liquefaction.” J. Geotech.
Geoenviron. Eng., 133(7), 802-813.
Kramer, S. L. (2008). “Performance-based earthquake engineering: opportunities and
implications for geotechnical engineering practice.” Proc., Geotechnical Earthquake
Engineering and Soil Dynamics IV, ASCE, Reston, VA, 1-32.
Ku, C. S., Juang, C. H., Chang, C. W., and Ching, J. (2012). "Probabilistic version of the Robertson and Wride method for liquefaction evaluation: Development and application." Can. Geotech. J., 49(1), 27-44.
Lai, S. Y., Chang, W. J., and Lin, P. S. (2006). “Logistic regression model for evaluating soil
liquefaction probability using CPT data.” J. Geotech. Geoenviron. Eng., 132(6), 694-704.
Lai, S., Hsu, S., and Hsieh, M. (2004). “Discriminant Model for Evaluating Soil Liquefaction
Potential Using Cone Penetration Test Data.” J. Geotech. Geoenviron. Eng., 130(12),
1271-1282.
Liao, S. S. C., Veneziano, D., and Whitman, R. V. (1988). “Regression model for evaluating
liquefaction probability.” J. Geotech. Eng., 114(4), 389-410.
Li, Z. Y., Yuan, X. M., Cao, Z. Z., Sun, R., Dong, L., and Shi, J. H. (2012). “New evaluation
formula for sand liquefaction based on survey of Bachu Earthquake in Xinjiang.”
Chinese J. Geotech. Eng., 34(3), 483-489. (In Chinese)
Liu, F., Li, Z., Jiang, M., Frattini, P., and Crosta, G. (2016). "Quantitative liquefaction-induced lateral spread hazard mapping." Eng. Geol., 207, 36-47.
Low, B. K., and Tang, W. H. (1997). “Efficient reliability evaluation using spreadsheet.” J. Eng.
Mech., 123(7), 749-752.
Low, B., and Tang, W. (2007). "Efficient spreadsheet algorithm for first-order reliability
method." J. Eng. Mech., 133(12), 1378-1387.
Lu, C. C., Hwang, J. H., Juang, C. H., Ku, C. S., and Luo, Z. (2009). “Framework for assessing
probability of exceeding a specified liquefaction-induced settlement at a given site in a
given exposure time.” Eng. Geol., 108, 24-35.
Maurer, B. W., Green, R. A., Cubrinovski, M., and Bradley, B. A. (2014). “Assessment of aging
correction factors for liquefaction resistance at sites of recurrent liquefaction.” Proc. 10th
U.S. National Conference on Earthquake Engineering Frontiers of Earthquake
Engineering, Earthquake Engineering Research Institute, Oakland, CA.
Mayfield, R. T., Kramer, S. L., and Huang, Y. M. (2010). “Simplified approximation procedure
for performance-based evaluation of liquefaction potential.” J. Geotech. Geoenviron.
Eng., 136(1), 140-150.
Moss, R. E. S., Seed, R. B., Kayen, R. E., Stewart, J. P. , Der Kiureghian, A., and Cetin, K. O.
(2006). “CPT-based probabilistic and deterministic assessment of in situ seismic soil
liquefaction potential.” J. Geotech. Geoenviron. Eng., 132(8), 1032-1051.
Muduli, P. and Das, S. (2013). "First-order reliability method for probabilistic evaluation of
liquefaction potential of soil using genetic programming." Int. J. Geomech.,
10.1061/(ASCE)GM.1943-5622.0000377, 04014052.
Muduli, P., and Das, S. (2015). "Model uncertainty of SPT-based method for evaluation of
seismic soil liquefaction potential using multi-gene genetic programming." Soils Found.,
55(2), 258-275.
Oommen, T., Baise, L. G., and Vogel, R. (2010). "Validation and application of empirical liquefaction models." J. Geotech. Geoenviron. Eng., 136(12), 1618-1633.
Rauch, A. F., and Martin, J. R. (2000). "EPOLLS model for predicting average displacements on lateral spreads." J. Geotech. Geoenviron. Eng., 126(4), 360-371.
Robertson, P. K., and Wride, C. E. (1998). “Evaluating cyclic liquefaction potential using the
cone penetration test.” Can. Geotech. J., 35(3), 442-459.
Seed, H. B., and Idriss, I. M. (1971). “Simplified procedure for evaluating soil liquefaction
potential.” J. Soil Mech. and Found. Div., 97(9), 1249-1273.
Seed, H. B., and Idriss, I. M. (1982). Ground Motions and Soil Liquefaction During Earthquakes, Earthquake Engineering Research Institute, Berkeley, CA.
Zhang, J., Tang, W., Zhang, L., and Huang, H. (2012). “Characterising geotechnical model
uncertainty by hybrid Markov Chain Monte Carlo simulation.” Comput. Geotech., 43, 26-
36.
Zhang, J., Zhang, L. M., and Huang, H. W. (2013). “Evaluation of generalized linear models for
soil liquefaction probability prediction.” Environ. Earth Sci., 68, 1925-1933.
Zhang, J., Zhang, L., and Tang, W. (2009). "Bayesian framework for characterizing geotechnical model uncertainty." J. Geotech. Geoenviron. Eng., 135(7), 932-940.
1AECOM, 6200 South Quebec St., Greenwood Village, CO 80111. E-mail: [email protected]
2AECOM, 6200 South Quebec St., Greenwood Village, CO 80111. E-mail: [email protected]
Abstract
The application of risk analysis has fundamentally changed the practice of dam safety
engineering in the United States and will continue to do so. Dam safety risk analysis in the
United States has its roots in the Bureau of Reclamation’s application of failure modes and
effects analysis (FMEA) in the 1980s, an approach that evolved into what we know today as
potential failure modes analysis (PFMA). Beginning in the 1990s, Reclamation further evolved
its methodology from PFMA to quantitative risk analysis as a key tool in dam safety decision
making. Around 2000, the Federal Energy Regulatory Commission (FERC) began requiring that
all dams within its regulatory jurisdiction be subjected to PFMAs as part of its 5-year
independent consultant inspection program. In the early 2000s, the U.S. Army Corps of
Engineers (USACE) began using a risk-informed decision making process in its dam safety
program in a manner similar to what Reclamation had been doing, and in 2016 the FERC
introduced a risk-informed decision making (RIDM) program for its licensees. Over the past
decade, a number of state dam safety agencies and dam owner organizations have begun to
introduce risk methodologies into their dam safety programs. The increasing application of risk
analysis and risk consideration has resulted in the dam safety community 1) openly recognizing
in a formal manner the many ways a dam can fail and the consequences of those failures, 2)
using risk as a tool for prioritizing risk reduction actions, and 3) focusing monitoring programs
and remediation efforts on the highest risk dams and potential failure modes.
INTRODUCTION
The application of risk analysis has fundamentally changed the practice of dam safety
engineering in the United States and will continue to do so. This paper briefly reviews the history
of dam safety risk analysis in the United States and the changes it has produced.
The increasing application of risk analysis and risk considerations in the United States
over the past 30 years has resulted in the dam safety community 1) openly recognizing in a
formal manner the many ways a dam can fail and the consequences of those failures, 2) using
risk as a tool for prioritizing risk reduction actions, and 3) focusing monitoring programs and
remediation efforts on the highest risk dams and potential failure modes. One example of a
substantial change in emphasis resulting from risk considerations is an increased recognition of
internal erosion risks, relative to those from rare events such as large floods and earthquakes.
FMEA and PFMA. Dam safety risk analysis in the United States has its roots in the U.S.
Department of the Interior, Bureau of Reclamation’s (Reclamation’s) application of failure
modes and effects analysis (FMEA) in the 1980s, an approach that evolved into what we know
today as potential failure modes analysis (PFMA). Prior to the application of the FMEA, dam
safety engineering practice in the United States focused on evaluating dams through visual
inspections and comparison of analysis results with deterministic criteria.
Visual inspections would be conducted periodically, with dam safety professionals
examining a dam and its appurtenant structures for potential signs of distress, e.g. unexpected
seepage, settlement, deformation, or structural deterioration. Analyses would be completed to
evaluate the dam and appurtenant structures for various loading conditions: normal
operations/normal pool loading, flood loading, earthquake loading, etc. For some of the older
dams that existed before the modern dam safety era in the United States (before about 1980),
data on the design, composition, and construction of the dam were limited or even non-existent.
In such cases, analyses would often be completed based on reasonably conservative estimates of
material properties and dam configurations. If these initial analyses indicated potential concerns,
further investigation and data collection would be completed and the analyses would be refined.
However the analyses were completed, the results would be compared to deterministic criteria.
Some representative examples of such criteria are:
• Spillway capacity would be compared to the ability to safely pass an inflow design flood,
which typically ranged from the 100-year flood to the probable maximum flood (PMF),
depending on the hazard (potential downstream consequences) posed by the dam.
• Calculated stability factors of safety would be compared to recommended or required
minimum factors of safety, for example 1.3 for end of construction, 1.5 for normal
operation, 1.2 or 1.3 for rapid drawdown, and 1.0 to 1.2 for earthquake loading. Different
organizations had established different factor of safety criteria for the various loadings.
• Stresses in structures were compared to allowable or ultimate strengths of the materials
composing the structure.
Although in the profession there was general consensus, with some variability, regarding
criteria for spillway capacity, stability factors of safety, and structural stresses, there was
significant divergence concerning criteria for seepage and internal erosion. There were differing
opinions as to whether specific seepage gradients could be judged as safe. Further, although
many practitioners held the view that internal erosion was often episodic, and thought that observation of clear seepage was not proof that internal erosion was not occurring, this view was by no means
universal. As a result, conclusions concerning spillway capacity, stability, and stresses appeared
to be based on definitive numbers, while conclusions concerning seepage and internal erosion
were based principally on qualitative professional judgments.
FMEA and its descendant, PFMA, changed the basic thought process in dam safety
engineering from one of evaluating dams as described above to one of critically assessing the
ways a dam could fail, along with the relative likelihoods of the different failure modes and their
consequences. The steps in the process include:
• Assemble and critically review all available information about the dam, including design
and construction records, performance records, instrumentation data, analyses, and
photographs (including construction photographs).
• Compile a complete list of possible ways the dam could fail, known as potential failure
modes (PFMs). This list should be compiled without consideration of the likelihood of
failure for each failure mode.
• Screen the PFMs to identify which ones are credible or plausible and which are
essentially physically impossible or so remote in likelihood as to be judged not credible.
For those PFMs identified as essentially physically impossible or so remote in likelihood
as to be judged not credible, document the reasons for that judgment.
• For those PFMs judged credible or plausible, (1) compile lists of adverse/unfavorable
factors (factors making the PFM more likely) and positive/favorable factors (factors
making the PFM less likely), (2) identify surveillance and instrumentation methods that
can be used to detect initiation or progression of the PFM, (3) identify measures that
could reduce the risk of the PFM and (4) identify missing data or analyses that would be
required to evaluate the likelihood of the PFM.
• Compile a list of major findings and understandings that came to light during the process.
In most cases the PFMA process results in the PFMs being assigned to a category. The
categories currently used by the Federal Energy Regulatory Commission (FERC) are described
on Figure 1.
Owners and engineers who have used the PFMA process have almost universally noted
benefits of the process including:
• A deeper understanding of the dam
• A better understanding of the most important PFMs for a dam, which in some cases had
not previously been clearly identified or understood
• Improved surveillance and monitoring programs that are better targeted toward the dam’s
true vulnerabilities
• Better informed operators with regard to the dam’s sensitivities to operational procedures
• In some cases, identification of serious safety concerns that had previously not been
identified, particularly with regard to internal erosion PFMs.
Many other organizations, including the Flood Control District of Maricopa County in the Phoenix, Arizona area, have added FMEA/PFMA methods to their dam safety management programs. In almost all instances, the organizations that have adopted these methodologies report significant benefits to their programs.
As Reclamation, and later the U.S. Army Corps of Engineers (USACE), implemented
more explicit risk estimation methods, as described below, the PFMA has remained a key
component in any risk analysis, typically as the first task in a quantitative risk analysis.
Quantitative Risk Analysis. Beginning in the 1990s, Reclamation further evolved its methodology from PFMA to quantitative risk analysis as a key tool in dam
safety decision making. Quantitative estimates of risk provide primary input into Reclamation’s
decisions concerning the urgency of dam safety concerns and the relative priority of concerns at
different dams.
In the early 2000s, the USACE began using a risk-informed decision making process in
its dam safety program, in a manner similar to what Reclamation had been doing. In 2016 the
FERC introduced a risk informed decision making (RIDM) program for its licensees, and pilot
programs are now underway for applying RIDM to some FERC regulated dams. Over the past
decade, a number of state dam safety agencies and dam owner organizations have begun to
introduce risk methods into their dam safety programs.
As risk analyses evolved into more common use in dam safety evaluations, it became
apparent that guidance was required to assist dam safety practitioners in the application of risk
methods to evaluate dams. In the early 2000s, Reclamation developed a document called Best
Practices in Dam Safety Risk Analysis. Later, after the USACE began to apply risk analysis
methods to its dam and levee safety programs, Reclamation and the USACE worked together to
revise and update this document, which is now titled Best Practices in Dam and Levee Safety
Risk Analysis (Bureau of Reclamation/USACE 2015). The current version of the document
contains ten parts, with a total of 41 chapters, addressing a wide range of topics related to dam
and levee safety risk analysis. Recognizing that dam and levee safety risk analysis practice
would be evolving quickly, Reclamation and the USACE chose not to publish Best Practices in
Dam and Levee Safety Risk Analysis as a printed document, but rather to make it available for
download on the internet. Revisions were made to various chapters over several years. As of
January 2017, the latest version of the document is designated as Version 4.0, dated July 2015,
and it contains chapters that were developed and updated at various times from June 9, 2009 to
June 20, 2015. The document can be downloaded at the following website:
https://www.usbr.gov/ssle/damsafety/risk/methodology.html
Best Practices in Dam and Levee Safety Risk Analysis contains guidance for detailed
quantitative risk analyses, which is the most sophisticated form of risk analysis currently used in
dam safety practice. Quantitative risk analysis consists of estimating annual probabilities of
failure, failure consequences (e.g., expected life loss), and annual life loss risks for failure modes
of significance. The annual probabilities of failure are typically estimated by developing event
trees for the failure modes of concern and then estimating occurrence probabilities for the
various events in the trees. An example event tree for an internal erosion failure mode is shown
on Figure 2.
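As a minimal numerical illustration (with entirely hypothetical probabilities, not values from any actual risk analysis), the annual failure probability for one internal erosion failure mode can be computed as the product of the conditional event probabilities along a branch of the tree, and the annual life loss risk as that probability times the consequence estimate:

# Hypothetical event tree for a single internal erosion failure mode.
p_reservoir_load = 0.9    # annual probability of the critical pool range
p_initiation = 1e-3       # erosion initiates, given the load
p_continuation = 0.5      # unfiltered exit exists and erosion continues
p_progression = 0.3       # a pipe forms and progresses
p_no_intervention = 0.5   # detection and intervention fail
p_breach = 0.8            # a breach forms, given progression

annual_p_failure = (p_reservoir_load * p_initiation * p_continuation *
                    p_progression * p_no_intervention * p_breach)

expected_life_loss = 15.0  # hypothetical consequence estimate (lives)
annual_life_loss_risk = annual_p_failure * expected_life_loss

A full analysis would sum such branch products over all branches and failure modes of significance.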
The occurrence probabilities for the individual events are estimated based on either statistical data and analysis or expert elicitation (degree-of-belief, i.e., subjective probability, estimates by a group of subject matter experts). Detailed quantitative risk analyses are typically used to support decisions to complete more detailed investigations or to implement risk reduction measures. Such analyses have also been used to evaluate risk reduction effectiveness for dam modification alternatives and to evaluate risk during construction of a dam safety modification.
Best Practices in Dam and Levee Safety Risk Analysis also contains guidance for semi-
quantitative risk analyses (SQRAs). In this approach, likelihood categories and consequence
categories are used in place of detailed quantitative estimates of probabilities of failure and
consequences. Examples of these categories are shown on Figures 3 and 4. As can be seen from
Figures 3 and 4, there is typically a general correspondence between the categories and
quantitative estimates.
SQRAs are sometimes used for portfolio risk analyses for a group of dams as a prioritization tool, to determine which dams and/or PFMs should be addressed in more detail first. However, full quantitative risk analyses can also be used for portfolio risk analyses, depending on the desires of the dam owner. SQRAs are typically more efficient in both time and cost, but do not provide a quantitative risk value that is appropriate for comparison to published risk guidelines (described below). Instead, SQRAs provide a value that can be used for relative comparison among the set of dams evaluated and a general indication of the level of risk a dam and/or PFM poses.
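The following sketch illustrates the SQRA idea with hypothetical category scales (not those of any published guideline): each PFM receives a likelihood category and a consequence category, which are combined into a relative score used only for ranking within the portfolio:

# Hypothetical SQRA category scales and a simple portfolio ranking.
LIKELIHOOD = {"remote": 1, "low": 2, "moderate": 3, "high": 4}
CONSEQUENCE = {"minimal": 1, "significant": 2, "major": 3, "catastrophic": 4}

def sqra_score(likelihood, consequence):
    # Relative risk score; higher scores are examined first.
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

pfms = [("Dam A - internal erosion", "moderate", "catastrophic"),
        ("Dam A - overtopping", "low", "catastrophic"),
        ("Dam B - spillway gate failure", "high", "significant")]

ranked = sorted(pfms, key=lambda p: sqra_score(p[1], p[2]), reverse=True)
for name, lik, con in ranked:
    print(name, sqra_score(lik, con))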
Risk Evaluation Guidelines. As quantitative risk analysis became more common in dam safety,
it became apparent that guidelines were needed to help evaluate the results of the analyses. The
first such guidelines to be published in the United States were Interim Guidelines for Achieving
Public Protection in Dam Safety Decision Making (Bureau of Reclamation 1997). This
document was subsequently finalized as Guidelines for Achieving Public Protection in Dam
Safety Decision Making (Bureau of Reclamation 2003). After almost a decade of additional
experience with risk analysis, Reclamation updated its guidelines with publication of Interim
Dam Safety Public Protection Guidelines, A Risk Framework to Support Dam Safety Decision
Making (Bureau of Reclamation 2011a) and a companion document, Rationale Used to Develop Reclamation's Interim Dam Safety Public Protection Guidelines (Bureau of Reclamation 2011b). The USACE has published similar guidelines, which consider the distribution of potential life loss due to dam breach, economic risks, and environmental and other non-monetary risks (USACE 2014).
non-monetary risks (USACE 2014).
Although Reclamation and the USACE have both published quantitative guidelines for
evaluation of risk analysis results, both agencies recognize that the quantitative risk analysis
results are not precise, and, in fact, involve significant uncertainty. As such, the guidelines are
not “bright lines in the sand,” but rather a tool to facilitate broader informed decision making.
This approach to the quantitative risk estimates is described in the Preface to Best Practices in
Dam and Levee Safety Risk Analysis (Bureau of Reclamation/USACE 2015) as follows:
"…than understanding and documenting what the major risk contributors are and why."
Based on the authors’ experience, there have been a number of changes in dam safety practice
that have resulted from the application of risk analysis.
Dam safety practitioners and dam owners are increasingly approaching dam evaluations
in a more holistic fashion. Rather than simply evaluating a dam against a limited set of
prescribed safety factors and other criteria, the universe of potential ways a particular dam can
fail is being considered. Potential failure modes are being critically evaluated, and appropriate
risk reduction measures are being more effectively implemented.
It is clear that having the same estimated probability of failure does not result in the same level of risk for different dams. If two dams have the same probability of failure, but the first dam has the potential to cause 100 times the loss of life from a failure as the second dam, then the risk posed by the first dam is 100 times higher than that posed by the second. To achieve the same risk, the first dam must have a probability of failure two orders of magnitude lower than that of the second.
The risks posed by our aging dam infrastructure continue to rise due to population
incursion into previously underdeveloped areas, gradual deterioration of existing structures, and
outdated design and construction practices used for older dams. Risk analysis is an essential tool
for recognizing, understanding, and managing these risks. Risk analysis is helping the dam
safety community better prioritize resources by focusing them on the dams and dam features that
pose the greatest risk. For a portfolio of dams, risk analysis helps to identify which dams and
which specific failure modes for those dams pose the greatest risk. In addition, the relative level
of urgency of various risks can be better understood. Available resources can then be allocated to
more effectively reduce the greatest and most urgent risks.
Resources have also become better focused on risks that exist every day, as compared to
risks from rare events. The increased focus on internal erosion failure modes is the best example
of this outcome. In a paper describing Reclamation’s two decades of risk analysis and risk
management practice in dam safety (Knight 2017), it is noted that in the 1980s, 65 percent of
Reclamation’s dam safety modifications were directed at hydrologic and seismic concerns versus
35 percent for state-of-the-art issues (mostly internal erosion). By contrast, from 2010 to 2015,
47 percent of modifications were for state-of-the-art issues (internal erosion) compared to 53
percent for hydrologic and seismic issues. This reduced focus on the rare hydrologic and seismic
issues and increased emphasis on internal erosion issues is in large part attributable to the
application of risk analysis.
Risk analysis has also helped owners and dam safety practitioners to develop effective
surveillance and monitoring programs and emergency response plans that are tuned to the key
potential failure modes for a particular dam. These focused programs should increase the
likelihood that, if a failure mode develops, it will be identified early, allowing intervention to
prevent failure or reduce failure consequences.
Alternative evaluations, as well as final design efforts, have become better informed by
using a risk-informed process. The resulting risk reductions that can be achieved by different
alternatives and the risks posed by different alternatives during construction have become key
evaluation considerations when selecting a preferred alternative. In addition, a final design
informed by potential failure modes for the site becomes a more robust design.
Finally, visual portrayals of risk levels for a dam or an inventory of dams are an effective communication tool, not only for the decision makers but also for other stakeholders who otherwise do not have an in-depth knowledge of the process. Standardized graphics, tables, and other visual formats for portraying risk results, similar to those shown on Figure 5, can be easily and quickly comprehended.
CLOSURE
Risk analysis has seen increasing use in dam safety practice in the United States over the past
thirty years, and all indications are that the application of risk analysis will continue in the future.
In the authors’ opinion, the application of risk analysis has resulted in a more holistic approach
to dam safety and a better prioritization of dam safety risk reduction actions.
REFERENCES
Bureau of Reclamation (1985), Application of Statistical Data from Dam Failures and Accidents
to Risk-Based Decision Analysis on Existing Dams, U.S. Department of the Interior,
Bureau of Reclamation. Denver, CO. October 1985.
Bureau of Reclamation (1997), Interim Guidelines for Achieving Public Protection in Dam
Safety Decision Making, U.S. Department of the Interior, Bureau of Reclamation.
Denver, CO. April 1997.
Bureau of Reclamation (2003), Guidelines for Achieving Public Protection in Dam Safety
Decision Making, U.S. Department of the Interior, Bureau of Reclamation, Denver, CO.
June 2003.
Bureau of Reclamation (2011a), Interim Dam Safety Public Protection Guidelines, A Risk
Framework to Support Dam Safety Decision Making, U.S. Department of the Interior,
Bureau of Reclamation. Denver, CO. August 2011.
Bureau of Reclamation (2011b), Rationale Used to Develop Reclamation’s Interim Dam Safety
Public Protection Guidelines, U.S. Department of the Interior, Bureau of Reclamation,
Denver, CO. August 2011.
Bureau of Reclamation/USACE (2015), Best Practices in Dam and Levee Safety Risk Analysis,
Version 4.0, U.S. Department of the Interior, Bureau of Reclamation, and Department of
the Army, U.S. Army Corps of Engineers, Denver, CO.
https://www.usbr.gov/ssle/damsafety/risk/methodology.html, July 2015.
Federal Emergency Management Agency (1979), Federal Guidelines for Dam Safety, July 1979.
Prepared by the Ad Hoc Interagency Committee on Dam Safety of the Federal
Coordinating Council for Science, Engineering, and Technology, Washington, DC.
FEMA (2015), Federal Guidelines for Dam Safety Risk Management (FEMA P-1025), 1 January
2015.
FERC (Federal Energy Regulatory Commission) (2017), Chapter 14 Dam Safety Performance
Monitoring Program, Washington, DC, Revision 2, January 3, 2017, original publication
April 11, 2003.
France, John W. and Gregg Batchelder Adams (2009), Rush Dam Hydrologic and Seismic Risk
Analysis, Dam Safety 2009, Annual Conference of the Association of State Dam Safety
Officials, Hollywood, FL, September 27 to October 1, 2009.
France, John W., Audrey L. Coy, and Brad Iarossi (2011a), Simplified Screening Level Risk
Analysis Approach Developed for U.S. Fish and Wildlife Service Dams, Dam Safety
2011, Annual Conference of the Association of State Dam Safety Officials, Washington,
DC, September 25 to 29, 2011.
France, John W., Bill McCormick, and Matt Gavin (2011b), Risk Analysis Provides Support for
Dam Safety Decisions for Beaver Park Dam, Colorado, Dam Safety 2011, Annual
Conference of the Association of State Dam Safety Officials, Washington, DC,
September 25 to 29, 2011.
France, John W. and Jeff Martin (2012), Risk Analysis Provides Perspective for Antero Dam,
Dam Safety 2012, Annual Conference of the Association of State Dam Safety Officials,
Denver, CO, September 16 to 20, 2012.
Hunyadi, John, Mark Perry, Jennifer L. Williams, and John W. France (2016), Comprehensive Dam Safety Evaluations: Colorado Expands Risk Considerations in its Dam Safety Program, Dam Safety 2016, Annual Conference of the Association of State Dam Safety Officials, 2016.
Knight, Karen A. (2017), Twenty Years of Dam Safety Risk Analysis and Risk Management
Practice at the Bureau of Reclamation, GeoRisk 2017, Denver, CO, ASCE, June 2017.
Raeburn, Roger, Jennifer Williams, and Frank Blackett (2012), Risk Informed Decision Making
Influences on the Ashton Dam Remediation Project Design, 32nd Annual United States
Society of Dams Meeting and Conference, 2012.
Raeburn, Roger, Jennifer Williams, and Richard Davidson (2015), "Ashton Dam – Risk Guided Rehabilitation," The Journal of Dam Safety, Association of State Dam Safety Officials, Vol. 13, Issue 5, 2015.
USACE (2014), Safety of Dams – Policy and Procedures, Regulation No. 1110-2-1156, Department of the Army, U.S. Army Corps of Engineers, Washington, DC, March 31, 2014 (originally published October 26, 2011).