
Introduction to Engineering Seismology Lecture 12

Lecture 12: Introduction to seismic hazard analysis; methods; Deterministic and


probabilistic; suitable method for your project; attenuation models and simulation of
strong ground motion

Topics

Introduction to Seismic Hazard Analysis


Representations of Seismic Hazard
Data completeness
Recurrence Relation
Gutenberg-Richter recurrence law
Mmax Estimation
Predictive relationships
Deterministic Seismic Hazard Analysis
Probabilistic Seismic Hazard Analysis
Applicability of DSHA and PSHA
Summary of uncertainties
Uncertainty in the Hypocentral Distance
Regional Recurrence
Deaggregation
Uniform hazard spectrum (UHS)
Logic tree methods
Ready Made Software for PSHA
Attenuation models
Simulation of Strong Ground Motion
Forward modeling in strong ground motion seismology

Keywords: Seismic Hazard Analysis, Deterministic, Probabilistic, Attenuation Models

Topic 1

Introduction to Seismic Hazard Analysis

Seismic hazard is defined as any physical phenomenon, such as ground shaking or


ground failure, which is associated with an earthquake and may produce
adverse effects on human activities.

Seismic hazard analyses involve the quantitative estimation of ground-shaking


hazards at a particular site. Seismic hazards may be analyzed deterministically, as
when a particular earthquake scenario is assumed, or probabilistically, in which
uncertainties in earthquake size, location, and time of occurrence are explicitly
considered.

To evaluate the seismic hazards for a particular site or region, all possible sources
of seismic activity must be identified and their potential for generating future
strong ground motion evaluated. Identification of seismic sources requires some
detective work; nature's clues, some of which are obvious and others quite
obscure, must be observed and interpreted.

Seismic hazard analysis involves the quantitative estimation of ground shaking


hazards at a particular area. The most important factors affecting seismic hazard at
a location are:
1. Earthquake magnitude
2. the source-to-site distance
3. earthquake rate of occurrence (return period)
4. duration of ground shaking

Earthquake Magnitude - Magnitude is the most common measure of an


earthquake's size. It is a measure of the size of the earthquake source and is the
same number no matter where you are or what the shaking feels like. Magnitudes
can be based on any of the following:

1. Ml - local magnitude is defined as the logarithm of the maximum trace
amplitude recorded on a Wood-Anderson seismometer located 100 km
from the epicenter of the earthquake. This magnitude scale becomes
insensitive to the actual size of an earthquake for magnitudes of 6.8 or
greater, and hence is not useful for very strong earthquakes.

2. Mb – Body wave magnitude is based on the longitudinal wave


amplitudes and their periods. This magnitude scale becomes insensitive
to the actual size of an earthquake for magnitudes of 6.4 or greater, and
hence is not useful for very strong earthquakes.

3. Ms - surface wave magnitude is based on the amplitude of maximum


ground displacement caused by Rayleigh waves with a period of about
20 seconds and the epicentral distance of the seismometer measured in
degrees. This magnitude scale becomes insensitive to the actual size of
an earthquake for magnitudes of 8.4 or greater and hence, is not useful
for very strong earthquakes. The total seismic energy E0 (in joules)
released during an earthquake is related to the magnitude Ms by

log10 E0 = 4.8 + 1.5 Ms   (12.1)

4. Mw – Moment magnitude is based on the seismic moment M0. This
magnitude does not have an upper limit:

Mw = (2/3) log10(M0) − 6.1   (12.2)

M0 = μ · Lf · Wf · Sf   (12.3)

where Lf and Wf are the length and width of the fault rupture area, Sf is
the average slip on the fault during the earthquake in metres, and μ is
the shear modulus of the Earth's crust.

Present practice appears to be moving towards the use of moment magnitude in


preference to other magnitudes. Many earthquake magnitudes are defined using
different magnitude scales and, therefore, a conversion between magnitudes is
applied. The conversion relationships are usually specified when different
magnitude scales are used. Ambraseys derived the following relationships
between various common earthquake magnitude scales:

0.77 mb − 0.64 ML = 0.73   (12.4)

0.86 mb − 0.49 Ms = 1.94   (12.5)

0.80 ML − 0.60 Ms = 1.04   (12.6)

Chen and Chen provided the following relationships between log10(M0) and Ms:

log10(M0) = Ms + 12.2        for Ms ≤ 6.4          (12.7)
log10(M0) = 1.5 Ms + 9.0     for 6.4 ≤ Ms ≤ 7.8    (12.8)
log10(M0) = 3.0 Ms − 2.7     for 7.8 ≤ Ms ≤ 8.5    (12.9)
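As a quick illustration, the conversion relations above can be evaluated directly. The following is a minimal Python sketch (added here for illustration; the function names are arbitrary) that applies the Ambraseys relations (Equations 12.4-12.5) and the piecewise Chen and Chen relation (Equations 12.7-12.9):

```python
def ml_from_mb(mb):
    # Ambraseys: 0.77 mb - 0.64 ML = 0.73  (Eq. 12.4)
    return (0.77 * mb - 0.73) / 0.64

def ms_from_mb(mb):
    # Ambraseys: 0.86 mb - 0.49 Ms = 1.94  (Eq. 12.5)
    return (0.86 * mb - 1.94) / 0.49

def log_m0_from_ms(ms):
    # Chen and Chen piecewise relation (Eqs. 12.7-12.9)
    if ms <= 6.4:
        return ms + 12.2
    elif ms <= 7.8:
        return 1.5 * ms + 9.0
    elif ms <= 8.5:
        return 3.0 * ms - 2.7
    raise ValueError("relation defined only up to Ms = 8.5")

if __name__ == "__main__":
    mb = 5.8
    ms = ms_from_mb(mb)
    print(f"mb={mb:.1f} -> ML={ml_from_mb(mb):.2f}, Ms={ms:.2f}, "
          f"log10(M0)={log_m0_from_ms(ms):.2f}")
```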

The source-to-site distance - Much of the energy released by rupture along a fault takes
the form of stress waves. As stress waves travel away from the source of an earthquake,
they spread out and are partially absorbed by the materials they travel through. As a
result, the specific energy decreases with increasing distance from the source. The
distance between the source of an earthquake and particular site can be interpreted in
different ways. Different distance used in engineering seismology is given in Figure 12.1.

Fig 12.1: Various measures of distance used in strong-motion predictive relationships (distances R1-R5 measured from the hypocenter, epicenter, high-stress zone, fault rupture surface and its surface projection to the site)

R1 and R2 are the hypocentral and epicentral distances, which are the easiest
distances to determine after an earthquake. If the length of fault rupture is a
significant fraction of the distances between the fault and the site, however,
energy may be released closer to the site, and R1 and R2 may not accurately
represent the “effective distance”.

R3 is the distance to the zone of highest energy release. Since rupture of this zone
is likely to produce the peak ground motion amplitudes, it represents the best
distance measure for peak amplitude predictive relationships. Unfortunately, its
location is difficult to determine after an earthquake and nearly impossible to
predict before an earthquake.

R4 is the closest distance to the zone of rupture and R5 is the closest distance to
the surface projection of the fault rupture.

Earthquake rate of occurrence (return period) - A return period is an estimate


of the interval of time between earthquakes. It is a statistical measurement
denoting the average recurrence interval over an extended period of time, and is
usually required for risk analysis.

The theoretical return period is the inverse of the probability that the event will be
exceeded in any one year. While it is true that a 10-year event will occur, on
average, once every 10 years and that a 100-year event is so large that we expect
it only to occur every 100 years, this is only a statistical statement: the expected
number of 100-year events in an n year period is n/100, in the sense of expected
value.
Similarly, the expected time until another 100-year event is 100 years, and if in a
given year or years the event does not occur, the expected time until it occurs
remains 100 years, with this "100 years" resetting each time.

It does not mean that 100-year earthquakes will happen regularly, every 100
years, despite the connotations of the name "return period"; in any given 100-year
period, a 100-year earthquake may occur once, twice, more, or not at all.

Note also that the estimated return period is a statistic: it is computed from a set of
data (the observations), as distinct from the theoretical value in an idealized
distribution. One does not actually know that a certain magnitude or greater
happens with 1% probability, only that it has been observed exactly once in 100
years.

This distinction is significant because there are few observations of rare events:
for instance if observations go back 400 years, the most extreme event (a 400-year
event by the statistical definition) may later be classed, on longer observation, as a
200-year event (if a comparable event immediately occurs) or a 500-year event (if
no comparable event occurs for 100 years).

Further, one cannot determine the size of a 1,000-year event based on such
records alone, but instead must use a statistical model to predict the magnitude of
such an (unobserved) event.
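As a worked illustration of these statements, the expected number of "T-year" events in a given exposure time and the probability of seeing at least one such event can be computed under a Poisson assumption. The sketch below is illustrative only; the 100-year return period is an assumed value:

```python
import math

def expected_events(return_period_yrs, exposure_yrs):
    # Expected number of events = exposure time / return period
    return exposure_yrs / return_period_yrs

def prob_at_least_one(return_period_yrs, exposure_yrs):
    # Poisson assumption: P[N >= 1] = 1 - exp(-lambda * t), with lambda = 1 / T_R
    lam = 1.0 / return_period_yrs
    return 1.0 - math.exp(-lam * exposure_yrs)

if __name__ == "__main__":
    for t in (50, 100, 200):
        print(f"{t:>4d} yr exposure: expected = {expected_events(100, t):.2f}, "
              f"P[>=1 event] = {prob_at_least_one(100, t):.3f}")
```

For a 100-year exposure the expected count is 1.0, but the probability of at least one event is only about 0.63, consistent with the statement that a 100-year event may occur once, twice, more, or not at all in any given 100-year period.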

Topic 2

Representations of Seismic Hazard

The seismic hazard can be expressed in different ways: from simple observed
macroseismic fields, to seismostatistical calculations for analyzing earthquake
occurrences in time and space and assessing their dynamic effects in a certain site
or region, to sophisticated seismogeological approaches for evaluating the
maximum expected earthquake effects on the Earth's surface.

Representation of seismic hazard and ground motion includes


(1) The selection and utilization of national ground motion maps;
(2) The representation of site response effects; and
(3) The possible incorporation of other parameters and effects, including
energy or duration of ground motions, vertical ground motions, near
source horizontal ground motions, and spatial variations of ground
motions.

Seismic hazard can be represented in different ways but most frequently in terms
of values or probability distributions of accelerations, velocities, or displacements
of either bedrock or the ground surface:

1. The peak ground acceleration, ground acceleration time history or


response spectral acceleration are useful because the product of a mass
and the acting acceleration equals the magnitude of inertial force
acting on the mass. However, peak acceleration occurs in high
frequency pulses at infrequent intervals during the time history of
ground vibration, and thus contains only a small fraction of the emitted
seismic energy. For this reason peak acceleration is not suitable as a
single measure of ground motion representation (e.g. Sarma and
Srbulov, 1998).
2. The peak ground velocity, ground velocity time history or response
spectral velocity are useful because the product of square of velocity
and a half of mass equals the amount of kinetic energy of the mass.
Ground motions of smaller amplitude but longer duration frequently
result in larger ground velocities and a more severe destructive capability
of ground shaking (e.g. Ambraseys and Srbulov, 1994).
3. The peak ground displacement, ground displacement time history or
response spectral displacement of a structure are useful since damage
of structures subjected to earthquakes is certainly expressed in
deformations (e.g. Bommer and Elnashai, 1999).

Ground acceleration, velocity and displacement are related to one another because
integration or differentiation in time of one of them produces another.

Time histories of ground motions are often used in practice for non-linear
analyses when damage caused by ground shaking can accumulate in time. Single
peak values are poor indicators of earthquake destructiveness, so time histories of
ground motion are usually considered for important, large, expensive and unusual
structures and ground conditions. Response spectral values are a compromise
between the singular values and a complete ground motion definition in time.

Topic 3

Data completeness

An important step in any earthquake data analysis is to investigate the available data
set to assess its nature and degree of completeness. Incompleteness of the available
earthquake data makes it difficult to obtain a fit of the Gutenberg-Richter recurrence
law that represents the true long-term recurrence rate.

Uncertainty in size of earthquakes produced by each source zone can be described


by various recurrence laws. The Gutenberg-Richter recurrence law that assumes
an exponential distribution of magnitude is commonly used with modification to
account for minimum and maximum magnitudes and is given by:
log N = a − bM   (12.10)

For a certain magnitude range and time interval, the above equation gives the number
of earthquakes N with magnitude equal to or greater than M, where 'a' and 'b' are positive,
real constants. 'a' describes the seismic activity (the log of the number of events with M = 0) and
'b', which is typically close to 1, is a tectonic parameter describing the relative
abundance of large to small shocks.

The problem of data incompleteness can be overcome by the method proposed by


Stepp (1972). In this method the analysis is carried out by grouping the earthquake
data into several magnitude classes, and each magnitude class is modeled as a
point process in time.

The method takes advantage of the property of statistical estimation that the variance of
the estimate of a sample mean is inversely proportional to the number of
observations in the sample (Stepp, 1972). Thus the variance can be made as small
as desired by making the number of observations in the sample large enough,
provided that reporting is complete in time and the process is stationary, i.e. the
mean, variance and other moments of the observations remain the same.

In order to obtain an efficient estimate of the variance of the sample mean, it is
assumed that the earthquake sequence can be modeled by the Poisson distribution.
If k1, k2, k3…..kn are the number of earthquakes per unit time interval, then an
unbiased estimate of the mean rate per unit time interval of this sample is:
λ = (1/n) Σ(i = 1 to n) ki   (12.11)

And its variance is:

σλ² = λ / n   (12.12)

where n is the number of unit time intervals. Taking the unit time interval to be
one year gives a standard deviation of:

σλ = sqrt(λ / T)   (12.13)

where T is the sample length. Hence, by assuming a stationary process, one can
expect that σλ behaves as 1/sqrt(T) in the subintervals in which the mean rate of
occurrence in a magnitude class is constant. In other words, when λ is constant,
the standard deviation σλ varies as 1/sqrt(T), where T is the time interval of the
sample. If the mean rate of occurrence is constant, we expect stability to occur
only in a subinterval that is long enough to give a good estimate of the mean but
short enough that it does not include intervals in which reports are incomplete
(Stepp, 1972).

The rate of earthquake occurrence as a function of time interval is given as N/T


where N is the cumulative number of earthquakes in the time interval T, for
subintervals of the 200-year sample. These data are used to determine the standard
deviation of the estimate of the mean through Equation 12.13.
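A minimal sketch of the Stepp (1972) completeness check is given below (added for illustration; the synthetic catalogue and window lengths are assumptions). For a magnitude class it computes the mean annual rate λ and its standard deviation σλ = sqrt(λ/T) for windows of increasing length counted backwards from the most recent year; the window length at which σλ departs from the 1/sqrt(T) trend indicates the limit of complete reporting.

```python
import numpy as np

def stepp_table(event_years, end_year, windows=(10, 20, 50, 100, 200)):
    """Mean rate and its standard deviation for windows ending at end_year."""
    years = np.asarray(event_years)
    rows = []
    for T in windows:
        in_window = years > end_year - T          # events inside the last T years
        lam = in_window.sum() / T                 # mean annual rate, Eq. 12.11
        sigma = np.sqrt(lam / T)                  # standard deviation, Eq. 12.13
        rows.append((T, lam, sigma, 1.0 / np.sqrt(T)))
    return rows

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic catalogue: events reported only over the last ~60 years (assumption)
    years = rng.integers(1960, 2021, size=120)
    for T, lam, sigma, ref in stepp_table(years, end_year=2020):
        print(f"T={T:4d}  lambda={lam:5.2f}  sigma={sigma:5.3f}  1/sqrt(T)={ref:5.3f}")
```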

Figure 12.2 below reveals several features significant to the statistical treatment of
earthquake data, regardless of whether one uses the empirical relationship log N = a −
bM, the extreme value distribution, or other statistical approaches.

For each magnitude interval in the Figure 12.2 the plotted points are supposed to
define a straight line relation as long as the data set for the magnitude interval is
complete. For a given seismic region the slope of the lines for all magnitude
intervals should be the same.


Figure 12.2: Variance of the seismicity rate (plotted as 1/sqrt(T)) for different magnitude intervals (1-1.9, 2-2.9, 3-3.9, 4-4.9 and M ≥ 5) and different lengths of the moving time window

Topic 4

Recurrence Relation

The distribution of earthquake sizes in a given period of time is described by a


recurrence law. The frequency of earthquake recurrence is important because
frequent earthquakes are likely to cause more cumulative damage than rare
earthquakes of the same size, which usually occur within the interiors of tectonic plates (i.e.
within the continents). Different models of the rate of occurrence have been proposed, but the most
frequently used are:
1. Poisson process in which earthquakes occurs randomly, with no regard
to the time, size or location of any preceding event. This model does
not account for time clustering of earthquakes and may be appropriate
only for large areas containing many tectonic faults. The probability of
at least one exceedance of a particular earthquake magnitude in a
period of t years P[N≥1] is given by the expression:

P[N ≥ 1] = 1 − e^(−λ·t)   (12.14)

where λ is the average rate of occurrence of the event with the considered
earthquake magnitude. Cornell and Winterstein (1986) have shown
that the Poisson model should not be used when the seismic hazard is
dominated by a single source for which the time since the previous
significant event is greater than the average return period and when the
source displays strong characteristic-time behavior.

2. Time predictable, which specifies a distribution of the time to the next
earthquake that depends on the magnitude of the most recent
earthquake.
3. Slip predictable, which considers the distribution of earthquake
magnitude to depend on the time since the most recent earthquake.

Topic 5

Gutenberg-Richter recurrence law

Recurrence relations are a crucial component of seismic hazard analysis. They are
the means of defining the relative distribution of large and small earthquakes and
incorporating the seismic history into the hazard analysis. On the basis of
worldwide seismicity data, Gutenberg and Richter established the loglinear
relation (G-R line)
log10 N(M) = a − bM   (12.15)

Here N(M) is the number of earthquakes per year with a magnitude equal to or
greater than M and a and b are constants for the seismic zone. N is associated with
a given area and time period. The constant 'a' is the logarithm of the number of
earthquakes with magnitudes equal to or greater than zero. The constant 'b' is the
slope of the distribution and controls the relative proportion of large to small
earthquakes.
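In the simplest case, 'a' and 'b' are obtained by regressing log10 N on M above the completeness magnitude. A minimal sketch (added for illustration; the cumulative annual counts below are assumed values) is:

```python
import numpy as np

def fit_gutenberg_richter(mags, counts_ge):
    """Least-squares fit of log10 N(M) = a - b*M (Eq. 12.15)."""
    log_n = np.log10(counts_ge)
    # np.polyfit returns [slope, intercept]; here slope = -b and intercept = a
    slope, intercept = np.polyfit(mags, log_n, 1)
    return intercept, -slope

if __name__ == "__main__":
    # Assumed cumulative annual counts of events with magnitude >= M
    mags = np.array([4.0, 4.5, 5.0, 5.5, 6.0])
    counts = np.array([8.0, 2.6, 0.9, 0.28, 0.09])
    a, b = fit_gutenberg_richter(mags, counts)
    print(f"a = {a:.2f}, b = {b:.2f}")
```

The methods listed below (maximum curvature, goodness of fit, bootstrap, etc.) differ mainly in how the completeness magnitude Mc is chosen and in how the b-value is estimated (e.g. maximum likelihood rather than least squares).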

A critical issue to be addressed before carrying out seismic hazard analysis is to


assess the quality, consistency and homogeneity of the earthquake catalogue. The
catalogues prepared should thus undergo a quality check, especially for the cutoff
magnitude, which has a direct bearing on the estimation of the a and b values of the
Gutenberg–Richter relationship.

There are nine methods by which the a, b and Mc (magnitude of completeness) values can be estimated:
1. Maximum Curvature method (M1),
2. Fixed Mc = Mmin (M2),
3. Goodness of fit Mc90 (M3) and
4. Mc95 (M4),
5. best combinations of Mc90 and Mc95 and maximum curvature
(M5),
6. entire magnitude range (M6),
7. Shi and Bolt (1982) method (M7),
8. Bootstrap method (M8) and
9. Cao and Gao (2002) method (M9).


Fig 12.3(a-h): Methods of recurrence relation estimation (maximum curvature; fixed Mc = Mmin; goodness of fit Mc90; best combinations; entire magnitude range; Shi and Bolt (1982); bootstrap; Cao and Gao (2002))

Topic 6

Mmax Estimation

The maximum magnitude is an important variable in the seismic hazard


estimation, as it reflects the maximum potential of strain released in large
earthquakes. The instrumental and historical records of earthquakes are often too
short to reflect the full potential of faults or thrusts.

The maximum regional magnitude, Mmax, is defined as the upper limit of
magnitude for a given region, or the magnitude of the largest possible earthquake. In
other words it is a sharp cut-off magnitude at a maximum magnitude Mmax, so
that, by definition, no earthquakes are to be expected with magnitude exceeding
Mmax.

The maximum earthquake magnitude in a given area can be estimated using the
geothermal gradient. Such a relation is brought about by the fact that the upper
limit of fault size is constrained within the brittle zone in the crust, the thickness
of which is regulated by the geothermal structure of the focal region.

An expected maximum magnitude value is widely needed for disaster
prevention planning and the earthquake-proof design of buildings.

The probabilistic approach for estimating the maximum regional magnitude
Mmax was suggested by Kijko and Sellevoll (1989) based on the doubly
truncated G-R relationship. It has been further refined by Kijko and Graham
(1998) to consider the uncertainties in the input magnitude data. Mmax from the
Kijko-Sellevoll-Bayes estimator is obtained as the solution of the following equation
(Kijko and Graham, 1998):

mmax = mmax_obs + {δ^(1/q) exp[n r^q / (1 − r^q)] / β̄} · [Γ(−1/q, δ r^q) − Γ(−1/q, δ)]   (12.16)

where p = β̄/(σβ)², q = (β̄/σβ)², β = 2.303b, β̄ denotes the mean value of β, σβ is the
standard deviation of β, Cβ is a normalizing coefficient equal to
{1 − [p/(p + mmax − mmin)]^q}^(−1), r = p/(p + mmax − mmin), c1 = exp[−n(1 − Cβ)],
δ = nCβ and Γ(·, ·) is the Incomplete Gamma Function. Mmax is obtained by
iterative solution of Equation (12.16).

Topic 7

Predictive relationships

R3 in the Figure 12.1 represents the best distance measure for peak amplitude
predictive relationships. It is the distance to the zone of highest energy release.
Predictive relationships for earthquake ground motion and response spectral values
are empirically obtained by well-designed regression analyses of a particular
strong-motion parameter data set.


A predictive relationship allows the estimation of peak ground motions at a given
distance and for an assumed magnitude. Thus, ground motions are estimated for a
given magnitude earthquake, and at a particular distance from the assumed fault, in
a manner consistent with recordings of past earthquakes under similar conditions.

Recently, Bommer et al. (2003) analyzed a number of strong-motion predictive


relationships and proposed a simple method to scale any such relation according to
style-of-faulting.

Predictive relationships usually express ground motion parameters as functions of


magnitude, distance and in some cases, other variables for example,

Y = f(M, R, Pi)   (12.17)

Where Y is the ground motion parameter of interest, M the magnitude of the


earthquake, R a measure of the distance from the source to the site being
considered, and the Pi are other parameters which may be used to characterize the
earthquake source, wave propagation path, and/or local site conditions.

Common forms for predictive relationships are based on the following


observations:

1. Peak values of strong motion parameters are approximately lognormally


distributed. As a result, the regression is usually performed on the
logarithm of Y rather than on Y itself.
2. Earthquake magnitude is typically defined as the logarithm of some peak
motion parameter. Consequently, ln Y should be approximately
proportional to M.
3. The spreading of stress waves as they travel away from the source of an
earthquake causes body wave amplitudes to decrease according to 1/R and
surface wave amplitudes to decrease according to 1/sqrt(R).
4. The area over which fault rupture occurs increases with increasing
earthquake magnitude. As a result, some of the waves that produce strong
motion at a site arrive from a distance, R, and some arrive from greater
distances. The effective distance, therefore, is greater than R by an amount
that increases with increasing magnitude.
5. Some of the energy carried by stress waves is absorbed by the materials
they travel through. This material damping causes ground motion
amplitudes to decrease exponentially with R.
6. Ground motion parameters may be influenced by source characteristics or
site characteristics.

Combining these observations, a typical predictive relationship may have the form

ln Y = C1 + C2 M + C3 M^C4 + C5 ln[R + C6 exp(C7 M)] + C8 R + f(source) + f(site),   σln Y = C9   (12.18)

where the use of ln Y and the standard deviation σln Y = C9 reflect observation 1, the
terms C2 M and C3 M^C4 reflect observation 2, the term C5 ln[R + C6 exp(C7 M)] reflects
observations 3 and 4, C8 R reflects observation 5, and f(source) and f(site) reflect
observation 6. Some predictive relationships utilize all these terms and others do not.
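A sketch of Equation (12.18) as a function is given below (added for illustration; the coefficient values are placeholders, not taken from any published relationship, and the source/site terms are omitted):

```python
import math

def ln_y_median(m, r, c):
    """Generic predictive relationship of the form of Eq. (12.18).

    c is a dict of coefficients C1..C8; f(source) and f(site) are omitted here.
    """
    return (c["C1"] + c["C2"] * m + c["C3"] * m ** c["C4"]
            + c["C5"] * math.log(r + c["C6"] * math.exp(c["C7"] * m))
            + c["C8"] * r)

if __name__ == "__main__":
    # Purely illustrative coefficient values (not from any published model)
    coeffs = dict(C1=-2.0, C2=1.1, C3=0.0, C4=1.0, C5=-1.8, C6=0.6, C7=0.45, C8=-0.004)
    for r in (10.0, 50.0, 100.0):
        y = math.exp(ln_y_median(6.5, r, coeffs))
        print(f"M 6.5 at R = {r:5.1f} km -> median Y ~ {y:.3f} (arbitrary units)")
```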

Topic 8

Deterministic Seismic Hazard Analysis

DSHA was the earliest approach taken to seismic hazard analysis. It originated in nuclear power
industry applications and is still used for some significant structures such as:

1. Nuclear power plants


2. Large dams
3. Large bridges
4. Hazardous waste containment facilities
5. As “cap” for probabilistic analyses

Deterministic Seismic Hazard Analysis (DSHA) is carried out for a particular
earthquake, either assumed or realistic. The DSHA approach uses the known
seismic sources sufficiently near the site and available historical seismic and
geological data to generate discrete, single-valued events or models of ground
motion at the site. Typically one or more earthquakes are specified by magnitude
and location with respect to the site. Usually the earthquakes are assumed to occur
on the portion of the source closest to the site. The site ground motions are estimated
deterministically, given the magnitude, source-to-site distance, and site condition.

Deterministic Seismic Hazard Analysis consists of four primary steps:

1. Identification and characterization of all sources


2. Selection of source-site distance parameter
3. Selection of “controlling earthquake”.
4. Definition of hazard using controlling earthquake

Fig 12.4: Four steps of a deterministic seismic hazard analysis (Step 1: identification of seismic sources; Step 2: shortest distance from each source to the site; Step 3: ground motion versus distance for each source and selection of the controlling earthquake; Step 4: PGA for the controlling earthquake from the attenuation relation for each source)

Step 1: Identification of all sources capable of producing significant ground
motion at the site, such as large sources at long distances and small sources at
short distances. Characterization includes definition of the source geometry and
establishment of the earthquake potential.
The maximum magnitude that could be produced by any source in the vicinity of the
site is estimated, and the value of Rmax corresponding to Mmax at the threshold value of the
parameter of interest, Ymin, is found.

Definition of source geometry includes

1. Point source, where there is a constant source-to-site distance.


Earthquakes associated with volcanic activity, for example, generally
originate in zones near the volcanoes that are small enough to allow
them to be characterized as point source.

2. Linear source, in which one parameter controls the distance;
for example, a shallow and distant fault.

3. Areal source, in which two geometric parameters control the distance;
for example, a constant-depth crustal source. Well-defined fault planes, on
which earthquakes can occur at many different locations, can be
considered as two-dimensional areal sources.

4. Areas where earthquake mechanisms are poorly defined, or where
faulting is so extensive as to preclude distinction between individual
faults, can be treated as three-dimensional volumetric sources.

Establishing the earthquake potential – typically Mmax can be found by the following:

1.Empirical correlations
a. Rupture length correlations
b. Rupture area correlations
c. Maximum surface displacement correlations

2. “Theoretical” determination by Slip rate correlations

Slip rate approach: the seismic moment is given by the following equation, where μ =
shear modulus of rock, A = rupture area and D = average displacement over the rupture
area:

M0 = μ · A · D   (12.19)

If the average displacement relieves the stress/strain built up by the movement of the
plates over some period, T, then

D = S · T   (12.20)

where S is the slip rate. The "moment rate" can then be defined as

Ṁ0 = M0 / T = μ · A · S   (12.21)

Knowing the slip rate and knowing (assuming) values of μ, A, and T, the moment
rate can be used to estimate the seismic moment as

M0 = Ṁ0 · T   (12.22)

Mw = (log M0) / 1.5 − 10.7   (12.23)
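A short numerical sketch of the slip-rate approach (Equations 12.19-12.23) follows; the rupture dimensions, slip rate and period are assumed values, and M0 is expressed in dyne-cm so that the constant 10.7 in Equation (12.23) applies:

```python
import math

def moment_from_slip_rate(mu_dyne_cm2, area_cm2, slip_rate_cm_per_yr, period_yr):
    """Seismic moment accumulated over `period_yr` (Eqs. 12.20-12.22)."""
    moment_rate = mu_dyne_cm2 * area_cm2 * slip_rate_cm_per_yr   # dyne-cm per year
    return moment_rate * period_yr                               # dyne-cm

def mw_from_moment(m0_dyne_cm):
    # Eq. 12.23, with M0 in dyne-cm
    return math.log10(m0_dyne_cm) / 1.5 - 10.7

if __name__ == "__main__":
    mu = 3.0e11                       # shear modulus of rock, dyne/cm^2
    area = 50e5 * 15e5                # 50 km x 15 km rupture area, in cm^2 (assumed)
    slip_rate = 0.5                   # cm/yr (assumed)
    period = 500.0                    # yr between characteristic events (assumed)
    m0 = moment_from_slip_rate(mu, area, slip_rate, period)
    print(f"M0 = {m0:.2e} dyne-cm, Mw = {mw_from_moment(m0):.2f}")
```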

Step 2: The selection of the source-site distance parameter must be consistent with the
predictive relationship and should include finite-fault effects (Figs 12.5-12.7).

Fig 12.5: Measurement of source-site distances for vertical faults

Fig 12.6: Measurement of source-site distances for dipping faults

Typically the shortest source-site distance is assumed for point, linear, areal and volumetric sources.

Fig 12.7: Areal sources and associated distances

Step 3: Selection of the controlling earthquake is based on the ground motion
parameter(s). All sources are considered, Mmax is assumed to occur at Rmin for each source,
the ground motion parameter(s) are computed based on Mmax and Rmin, and the
critical value(s) of the ground motion parameter(s) are determined. An example is shown in Figure
12.8 below.

Fig 12.8: Selection of the controlling earthquake (the combination of M2 and R2 produces the highest value of Y)

Step 4: Definition of the hazard using the controlling earthquake involves the use of M
and R to determine parameters such as peak acceleration, spectral acceleration
and duration.
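The four DSHA steps can be summarized in a few lines of code. The sketch below is an added illustration that uses a deliberately simplified attenuation expression (not any published relation); it assumes each source is already characterized by its Mmax and shortest distance Rmin, and selects the controlling combination as the one giving the largest median ground motion.

```python
import math

def simple_attenuation(m, r_km):
    """Toy median ground-motion relation for illustration only (not a published model)."""
    return math.exp(0.8 * m - 1.2 * math.log(r_km + 10.0) - 3.0)

def dsha_controlling_event(sources):
    """sources: list of (name, Mmax, Rmin_km); returns the controlling combination."""
    best = None
    for name, m_max, r_min in sources:
        y = simple_attenuation(m_max, r_min)       # Step 3: Y for Mmax at Rmin
        if best is None or y > best[-1]:
            best = (name, m_max, r_min, y)
    return best                                     # Step 4: defines the hazard

if __name__ == "__main__":
    # Steps 1-2: characterized sources and their shortest source-site distances (assumed)
    sources = [("Source 1", 7.3, 24.0), ("Source 2", 7.7, 53.0), ("Source 3", 5.0, 60.0)]
    name, m, r, y = dsha_controlling_event(sources)
    print(f"Controlling earthquake: {name}, M{m}, R = {r} km, Y = {y:.3f} (arbitrary units)")
```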

DSHA calculations are relatively simple, but implementation of the procedure in


practice involves numerous difficult judgments. The lack of explicit consideration
of uncertainties should not be taken to imply that those uncertainties do not exist.

Typical results obtained from a DSHA analysis are shown in Figure 12.9.


Fig 12.9: Typical spectral curve and hazard plot from DSHA analysis

Topic 9

Probabilistic Seismic Hazard Analysis

The seeds of PSHA were sown in the early 1960s in the form of two efforts that
came together in 1966. One effort was the 1964 doctoral dissertation of Allin
Cornell at Stanford titled „Stochastic Processes in Civil Engineering,‟ which
studied probability distributions of factors affecting engineering decisions.

The second effort consisted of studies at the Universidad Nacional Autónoma de
México (UNAM) by PhD student Luis Esteva, Prof. Emilio Rosenblueth, and
co-workers, who were studying earthquake ground motions, their dependence on
magnitude and distance, and the relationship between the frequency of occurrence
of earthquakes and the frequency of occurrence of ground motions at a site.

Probabilistic seismic hazard analysis (PSHA) is the most widely used approach
for the determination of seismic design loads for engineering structures. The use
of probabilistic concept has allowed uncertainties in the size, location, and rate of
recurrence of earthquakes and in the variation of ground motion characteristics
with earthquake size and location to be explicitly considered for the evaluation of
seismic hazard. In addition, PSHA provides a framework in which these
uncertainties can be identified, quantified and combined in a rational manner to
provide a more complete picture of the seismic hazard.

Figure 12.10 shows the elements of the probabilistic hazard methodology.


Fig 12.10: Flowchart showing the elements of the probabilistic hazard methodology in
the context of a seismic design criteria methodology.

Topic 10

Applicability of DSHA and PSHA

DSHA involves the assumption of some scenario, i.e. the occurrence of an
earthquake of a particular size at a particular location, for which ground motion
characteristics are determined.

When applied to structures for which failure could have catastrophic


consequences, such as nuclear power plants and large dams, DSHA provides a
straightforward framework for evaluation of “worst-case” ground motions.

However, it provides no information on the likelihood of occurrence of the
controlling earthquake, the likelihood of it occurring where it is assumed to occur,
the level of shaking that might be expected during a finite period of time (such as
the useful lifetime of a particular structure or facility), or the effects of
uncertainties in the various steps required to compute the resulting ground motion
characteristics.

PSHA allows uncertainties in the size, location, rate of recurrence, and effects of
earthquakes to be explicitly considered in the evaluation of seismic hazards. A
PSHA requires that uncertainties in earthquake location, size, recurrence, and
ground shaking effects be quantified.

The accuracy of PSHA depends on the accuracy with which uncertainty in


earthquake size, location, recurrence, and effects can be characterized. Although
models and procedures for characterization of uncertainty of these parameters are
available they may be based on data collected over periods of time that,
geologically, are very short. Engineering judgment must be applied to the
interpretation of PSHA results.

Model uncertainties can be incorporated into a PSHA by means of a logic tree,
e.g. Fig 12.13. A logic tree allows the use of alternative models, each of which is
assigned a weighting factor related to the likelihood of that model being correct.
The weighting factors are usually assigned subjectively, often using expert
opinion.

Topic 11

Summary of uncertainties

Location or Spatial Uncertainty: Earthquakes are usually assumed to be


uniformly distributed within a particular source zone. A uniform distribution
within the source zone does not, however, often translate into a uniform
distribution of source-to-site distance.

Since predictive relationships express ground motion parameters in terms of some


measure of source-to-site distance, the spatial uncertainty must be described with
respect to the appropriate distance parameter. The uncertainty in source-to-site
distance can be described by a probability density function.


Fig 12.11: Examples of variation of source-to-site distance for different source zone geometries: (a) point source, (b) linear source, (c) areal source

For the point source of Figure 12.11(a) the distance R is known to be rs;
consequently, the probability that R = rs is assumed to be 1 and the probability
that R ≠ rs, zero. For the linear source of Figure 12.11(b), the probability that an
earthquake occurs on the small segment of the fault between L = l and L = l + dl is
the same as the probability that it occurs between R = r and R = r + dr; that is,

fL(l) dl = fR(r) dr   (12.24)

where fL(l) and fR(r) are the probability density functions for the variables L and
R, respectively. Consequently,

fR(r) = fL(l) dl/dr   (12.25)

If earthquakes are assumed to be uniformly distributed over the length of the fault,
fL(l) = 1/Lf, and since l² = r² − rmin², the probability density function of R is given by

fR(r) = r / [Lf · sqrt(r² − rmin²)]   (12.26)
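A numerical check of Equation (12.26) is sketched below (an added illustration): it evaluates fR(r) for an assumed fault of length Lf whose closest point lies at distance rmin from the site, and verifies that the density integrates to approximately 1.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def f_r_linear_source(r, l_f, r_min):
    """PDF of source-to-site distance for a uniform linear source (Eq. 12.26)."""
    r = np.asarray(r, dtype=float)
    inside = (r > r_min) & (r < np.sqrt(r_min**2 + l_f**2))
    out = np.zeros_like(r)
    out[inside] = r[inside] / (l_f * np.sqrt(r[inside]**2 - r_min**2))
    return out

if __name__ == "__main__":
    l_f, r_min = 60.0, 20.0                          # assumed fault length and offset, km
    r_max = np.sqrt(r_min**2 + l_f**2)
    r = np.linspace(r_min + 1e-3, r_max - 1e-3, 20000)
    pdf = f_r_linear_source(r, l_f, r_min)
    print(f"integral of fR(r) over [{r_min:.1f}, {r_max:.1f}] km = {trapezoid(pdf, r):.3f}")
```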

Size Uncertainty: all source zones have a maximum earthquake magnitude that
cannot be exceeded; in general, the source zone will produce earthquakes of
different sizes up to the maximum earthquake, with smaller earthquakes occurring
more frequently than larger ones.

The strain energy may be released aseismically, or in the form of earthquakes.


Assuming that the strain energy is released by earthquakes of magnitude 5.5 to 9.0
and that the average fault displacement is one-half the maximum surface
displacement, the rate of fault movement can be related to earthquake magnitude and
recurrence interval.

The distribution of earthquake sizes in a given period of time is described by


recurrence laws such as: Gutenberg-Richter Recurrence law, Bounded Gutenberg-
Richter Recurrence laws, Characteristic Earthquake Recurrence Laws and other
Recurrence Laws.

A basic assumption of PSHA is that the recurrence law obtained from past
seismicity is appropriate for the prediction of future seismicity. In most PSHAs,
the lower threshold magnitude is set at values from about 4.0 to 5.0, since
magnitudes smaller than that seldom cause significant damage. The resulting
probability distribution of magnitude for the Gutenberg-Richter law with lower
bound can be expressed in terms of the cumulative distribution function (CDF):

FM(m) = P[M < m | M ≥ m0] = 1 − e^(−β(m − m0))   (12.27)

or the probability density function (PDF):

fM(m) = β e^(−β(m − m0)) / [1 − e^(−β(mu − m0))],   m0 ≤ m ≤ mu,   β = 2.303b   (12.28)
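A short sketch of Equation (12.28) follows (added for illustration; the b-value and the magnitude bounds are assumed values). It checks that the bounded PDF integrates to 1 and evaluates the probability of a small magnitude range:

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def gr_magnitude_pdf(m, b, m0, mu):
    """Bounded Gutenberg-Richter magnitude PDF (Eq. 12.28)."""
    beta = 2.303 * b
    m = np.asarray(m, dtype=float)
    pdf = beta * np.exp(-beta * (m - m0)) / (1.0 - np.exp(-beta * (mu - m0)))
    return np.where((m >= m0) & (m <= mu), pdf, 0.0)

if __name__ == "__main__":
    b, m0, mu = 0.9, 4.0, 8.0                # assumed values
    m = np.linspace(m0, mu, 4001)
    pdf = gr_magnitude_pdf(m, b, m0, mu)
    mask = m <= 5.0
    print(f"area under fM(m) = {trapezoid(pdf, m):.3f}")            # should be ~1
    print(f"P[M <= 5 | M >= m0] = {trapezoid(pdf[mask], m[mask]):.3f}")
```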
Effect Uncertainty: seismic hazard or effects can be expressed in the form of
seismic hazard curves and can be obtained for individual source zones and
combined to express the aggregate hazard at a particular site.

The probability of exceeding a particular value of y*, of a ground motion
parameter, Y, is calculated for one possible earthquake at one possible source
location and then multiplied by the probability that that particular magnitude
earthquake would occur at that particular location. The process is then repeated for
all possible magnitudes and locations with the probabilities of each summed.

For a given earthquake occurrence, the probability that a ground motion parameter
Y will exceed a particular value y* can be computed using the total probability
theorem, that is,
P[Y > y*] = P[Y > y* | X] · P[X] = ∫ P[Y > y* | X] fX(X) dX   (12.29)

where X is a vector of random variables that influence Y. In most cases the
quantities in X are limited to the magnitude, M, and distance, R. Assuming that M
and R are independent, the probability of exceedance can be written as

P[Y > y*] = ∫∫ P[Y > y* | m, r] fM(m) fR(r) dm dr   (12.30)

Where P[Y>y*|m, r] is obtained from the predictive relationship and fM (m) and
fR(r) are the probability density functions for magnitude and distance, respectively.

If the site of interest is in a region of NS potential earthquake sources, each of
which has an average rate of threshold magnitude exceedance,
νi = exp(αi − βi m0), the total average exceedance rate for the region will be given
by

λy* = Σ(i = 1 to NS) νi ∫∫ P[Y > y* | m, r] fMi(m) fRi(r) dm dr   (12.31)

The individual components of the above equation are, for virtually all realistic
PSHAs, sufficiently complicated that the integrals cannot be evaluated
analytically. Numerical integration, which can be performed by a variety of
different techniques, is therefore required.

The next step is to divide the possible ranges of magnitude and distance into NM
and NR segments, respectively. The average exceedance rate can then be
estimated by

λy* ≈ Σ(i = 1 to NS) Σ(j = 1 to NM) Σ(k = 1 to NR) νi P[Y > y* | mj, rk] fMi(mj) fRi(rk) Δm Δr   (12.32)

where mj = m0 + (j − 0.5)(mmax − m0)/NM, rk = rmin + (k − 0.5)(rmax − rmin)/NR,
Δm = (mmax − m0)/NM, and Δr = (rmax − rmin)/NR. This is equivalent to assuming
that each source is capable of generating only NM different earthquakes of
magnitude mj at only NR different source-to-site distances rk. The above
equation is then equivalent to

λy* ≈ Σ(i = 1 to NS) Σ(j = 1 to NM) Σ(k = 1 to NR) νi P[Y > y* | mj, rk] P[M = mj] P[R = rk]   (12.33)

The accuracy of the crude numerical integration procedure described above
increases with increasing NM and NR. More refined methods of numerical
integration will provide greater accuracy at the same values of NM and NR.
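The crude numerical integration of Equation (12.33) can be sketched as follows (an added illustration: the toy attenuation relation, its sigma and the single-source parameters are all assumed values, not values from the lecture):

```python
import math

def p_exceed_given_m_r(y_star, m, r_km, sigma_ln=0.6):
    """P[Y > y* | m, r] assuming lognormal scatter about a toy median relation."""
    ln_median = 0.8 * m - 1.2 * math.log(r_km + 10.0) - 3.0   # illustrative only
    z = (math.log(y_star) - ln_median) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))                # 1 - Phi(z)

def annual_rate_of_exceedance(y_star, nu, b, m0, m_max, r_min, r_max, nm=50, nr=50):
    """Single-source version of Eq. (12.33), with bounded G-R magnitudes and a
    uniform distribution of distances (both assumptions made for this sketch)."""
    beta = 2.303 * b
    norm = 1.0 - math.exp(-beta * (m_max - m0))
    dm, dr = (m_max - m0) / nm, (r_max - r_min) / nr
    rate = 0.0
    for j in range(nm):
        m_j = m0 + (j + 0.5) * dm
        p_m = beta * math.exp(-beta * (m_j - m0)) / norm * dm   # P[M = m_j]
        for k in range(nr):
            r_k = r_min + (k + 0.5) * dr
            p_r = dr / (r_max - r_min)                          # P[R = r_k]
            rate += nu * p_exceed_given_m_r(y_star, m_j, r_k) * p_m * p_r
    return rate

if __name__ == "__main__":
    for y in (0.05, 0.1, 0.2, 0.4):
        lam = annual_rate_of_exceedance(y, nu=0.2, b=0.9, m0=4.5, m_max=7.5,
                                        r_min=10.0, r_max=120.0)
        print(f"y* = {y:4.2f} -> annual exceedance rate = {lam:.2e}")
```

Repeating the calculation for a range of y* values gives the seismic hazard curve for the site; summing over several sources gives the aggregate curve.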

Temporal Uncertainty: the temporal occurrence of earthquakes is most


commonly described by a Poisson model. The Poisson model provides a simple
framework for evaluating probabilities of events that follow a Poisson process, one
that yields values of a random variable describing the number of occurrences of a
particular event during a given time interval or in a specified spatial region.

Since PSHA deals with temporal uncertainty, the spatial applications of the
Poisson model will not be considered further. Poisson processes possess the
following properties, which indicate that the events of a Poisson process occur
randomly, with no “memory” of the time, size or location of any preceding event.

1. The number of occurrences in one time interval is independent of


the number that occur in any other time interval
2. The probability of occurrence during a very short time interval is
proportional to the length of the time interval
3. The probability of more than one occurrence during a very short time
interval is negligible.

For a Poisson process, the probability of a random variable N, representing the
number of occurrences of a particular event during a given time interval, is given
by

P[N = n] = μⁿ · e^(−μ) / n!   (12.33)

where μ is the average number of occurrences of the event in that time interval.
The time between events in a Poisson process can be shown to be exponentially
distributed. To characterize the temporal distribution of earthquake recurrence for
PSHA purposes, the Poisson probability is usually expressed as

P[N = n] = (λt)ⁿ · e^(−λt) / n!   (12.34)
where λ is the average rate of occurrence of the event and t is the time period of
interest. Note that the probability of occurrence of at least one event in a period of
time t is given by

P[N ≥ 1] = P[N = 1] + P[N = 2] + P[N = 3] + ... + P[N = ∞] = 1 − P[N = 0] = 1 − e^(−λt)   (12.35)

When the event of interest is the exceedance of a particular earthquake magnitude,
the Poisson model can be combined with a suitable recurrence law to predict the
probability of at least one exceedance in a period of t years by the expression

P[N ≥ 1] = 1 − e^(−λm·t)   (12.36)
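Equations (12.35)-(12.36) translate an annual exceedance rate λm into a probability over a design life, and vice versa. A short sketch (added here; the example rate is an assumed value):

```python
import math

def prob_exceedance(annual_rate, t_years):
    # Eq. 12.36: P[N >= 1] = 1 - exp(-lambda_m * t)
    return 1.0 - math.exp(-annual_rate * t_years)

def rate_for_target_probability(p_target, t_years):
    # Inverse relation: the annual rate whose exceedance probability in t years is p_target
    return -math.log(1.0 - p_target) / t_years

if __name__ == "__main__":
    lam = 1.0 / 475.0                       # assumed annual exceedance rate
    print(f"P[exceedance in 50 yr] = {prob_exceedance(lam, 50):.3f}")
    rate = rate_for_target_probability(0.10, 50)
    print(f"rate for 10% in 50 yr  = {rate:.5f} (return period ~ {1/rate:.0f} yr)")
```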

Poisson model is useful for practical seismic risk analysis except when the seismic
hazard is dominated by a single source for which the time interval since the
previous significant event is greater than the average interevent time and when the
source displays strong “characteristic-time” behavior.

Topic 12

Uncertainty in the Hypocentral Distance

On an active fault it is possible that all points are equally vulnerable to rupture.
Thus, depending on the relative orientation of a fault with respect to the station,
the hypocentral distance R has to be treated as a random variable. Further, the
conditional probability distribution function of R, given that the magnitude M = m,
for a rupture segment uniformly distributed along the fault, is given by

P(R ≤ r | M = m) = 0   for r < (D² + L0²)^(1/2)   (12.37)

P(R ≤ r | M = m) = [(r² − D²)^(1/2) − L0] / X(m)   for (D² + L0²)^(1/2) ≤ r ≤ [D² + (L0 + X(m))²]^(1/2)   (12.38)

Here, X(m), the rupture length in kilometres for an event of magnitude m, is given by

X(m) = MIN[10^(−2.44 + 0.59m), fault length]   (12.39)

MIN stands for the minimum of the two arguments inside the parentheses. This
condition is used to confine the rupture to the fault length. The first term provides
an estimate of the rupture length expected for an event of magnitude m.

The above solution pertains to the case of a fault situated entirely to one side of a
site. In the more general situation when the fault is extending on both sides of the
source, the conditional probabilities for the two sides are multiplied by the
fraction of length of the corresponding sides and summed up to get the probability
for the total fault.

Topic 13

Regional Recurrence

Each seismic source has a maximum earthquake magnitude that cannot be exceeded. In PSHA, the
lower-bound magnitude can be taken as 4.0 to 5.0, since smaller earthquakes
will not cause significant damage to engineering structures, and the maximum
magnitude can be evaluated by considering the seismotectonics of the region and
historic earthquake data.

The magnitude recurrence model for a seismic source specifies the frequency of
seismic events of various sizes per year. For any region the seismic parameters are
determined using the Gutenberg-Richter (G-R) magnitude-frequency relationship,
which is given in the equation below.

The recurrence relation of each fault capable of producing earthquake magnitudes
in the range m0 to mu is calculated using the truncated exponential recurrence
model developed by Cornell and Vanmarcke (1969), which is given by the following
expression:

N(m) = Ni(m0) · e^(−β(m − m0)) / [1 − e^(−β(mu − m0))]   for m0 ≤ m ≤ mu   (12.40)

where β = b ln(10) and Ni(m0) is the weightage factor proposed for a
particular source based on the deaggregation.
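A sketch of Equation (12.40) follows (added illustration; the b-value, magnitude bounds and weightage factor are assumed values):

```python
import math

def truncated_exponential_rate(m, n_i_m0, b, m0, mu):
    """Annual rate of events with magnitude >= m from Eq. (12.40)."""
    if not (m0 <= m <= mu):
        raise ValueError("m must lie between m0 and mu")
    beta = b * math.log(10.0)
    return n_i_m0 * math.exp(-beta * (m - m0)) / (1.0 - math.exp(-beta * (mu - m0)))

if __name__ == "__main__":
    n0, b, m0, mu = 0.8, 0.9, 4.0, 7.5      # assumed source parameters
    for m in (4.0, 5.0, 6.0, 7.0):
        n = truncated_exponential_rate(m, n0, b, m0, mu)
        print(f"N(M >= {m:.1f}) = {n:.4f} per year (return period ~ {1.0/n:.0f} yr)")
```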

Topic 14

Uniform hazard spectrum (UHS)

The uniform hazard spectra (UHS) are derived from a probabilistic hazard
analysis. The basic steps of the analysis are as follows. First, seismotectonic
information is used to define seismic source zones. Generally, a number of
alternative hypotheses regarding the configuration of these seismic zones are
formulated.

For each source zone, the earthquake catalogue is used to define the magnitude
recurrence relation and its uncertainty, which provides the description of the
frequency of occurrence of events within the zone, as a function of earthquake
magnitude. Ground motion relations are then defined to provide the link between
the occurrence of earthquakes within the zones, and the resulting ground motions
at a specified location. Ground motion relations can be given in terms of peak
ground acceleration or velocity or in terms of response spectral ordinates of
specific periods of vibration.

The final step of the hazard analysis is integration over all earthquake magnitudes
and distances, of the contributions to the probability of exceeding specified ground
motion levels at the site of interest. Repeating this process for a number of
vibration periods defines the uniform hazard spectrum, which is a response
spectrum having a specified probability of exceedance at the particular site.

The uniform hazard spectrum can be thought of as a composite of the types of


earthquakes that contribute most strongly to the hazard at the specified probability
level. The shape of a ground motion spectrum and therefore, the response

spectrum is strongly dependent upon magnitude and distance. In general, the
dominant contributor to the short-period ground motion hazard comes from small-
to-moderate earthquakes at close distance, whereas larger earthquakes at greater
distance contribute most strongly to the long-period ground motion hazard.
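The construction of a UHS can be sketched in code as follows (an added illustration with fabricated hazard-curve values): for each vibration period, the hazard curve of spectral acceleration versus annual exceedance rate is interpolated at the target rate, and the interpolated ordinates are assembled into the spectrum.

```python
import numpy as np

def uhs_ordinate(sa_levels, annual_rates, target_rate):
    """Interpolate one hazard curve (log-log) at the target annual exceedance rate."""
    # np.interp needs increasing x, so interpolate on increasing log(rate)
    log_rate = np.log(annual_rates[::-1])
    log_sa = np.log(sa_levels[::-1])
    return float(np.exp(np.interp(np.log(target_rate), log_rate, log_sa)))

if __name__ == "__main__":
    sa = np.array([0.05, 0.1, 0.2, 0.4, 0.8])               # SA grid, g
    # Illustrative hazard curves (annual exceedance rates) for three periods
    hazard = {
        "T=0.2 s": np.array([2e-2, 8e-3, 2.5e-3, 6e-4, 1e-4]),
        "T=1.0 s": np.array([1e-2, 3e-3, 8e-4, 1.5e-4, 2e-5]),
        "T=2.0 s": np.array([5e-3, 1.2e-3, 2.5e-4, 4e-5, 5e-6]),
    }
    target = 1.0 / 475.0                                     # ~10% in 50 years
    for period, rates in hazard.items():
        print(f"{period}: SA = {uhs_ordinate(sa, rates, target):.3f} g")
```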

Topic 15

Deaggregation

The PSHA procedures allow computation of the mean annual rate of exceedance at
a particular site based on the aggregate risk from potential earthquakes of many
different magnitudes occurring at many different source-to-site distances. The rate
of exceedance computed in a PSHA, therefore, is not associated with any
particular earthquake magnitude or source-site-distance.

In some cases, it may be useful to estimate the most likely earthquake magnitude
and/or the most likely source-site-distance. These quantities may be used, for
example, to select existing ground motion records for response analyses. This
process of deaggregation requires that the mean annual rate of exceedance be
expressed as a function of magnitude and/or distance. For example, the mean
annual rate of exceedance can be expressed as a function of magnitude by
λy*(mj) = P[M = mj] Σ(i = 1 to NS) Σ(k = 1 to NR) νi P[Y > y* | mj, rk] P[R = rk]   (12.41)

Similarly, the mean annual rate of exceedance can be expressed as a function of


source-site distance by
λy*(rk) = P[R = rk] Σ(i = 1 to NS) Σ(j = 1 to NM) νi P[Y > y* | mj, rk] P[M = mj]   (12.42)

Finally it is possible to compute the mean annual rate of exceedance as functions


of both earthquake magnitude and source-site distance i.e.
λy*(mj, rk) = P[M = mj] P[R = rk] Σ(i = 1 to NS) νi P[Y > y* | mj, rk]   (12.43)
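A sketch of deaggregation by magnitude and distance (Equation 12.43) is given below. It is a standalone added illustration that uses a toy single-source model with assumed parameters and simply reports which (m, r) bin contributes most to the total exceedance rate.

```python
import math

def p_exceed(y_star, m, r_km, sigma_ln=0.6):
    """P[Y > y* | m, r] with lognormal scatter about a toy median (illustrative only)."""
    ln_med = 0.8 * m - 1.2 * math.log(r_km + 10.0) - 3.0
    z = (math.log(y_star) - ln_med) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def deaggregate(y_star, nu, b, m0, m_max, r_min, r_max, nm=20, nr=20):
    """Contribution of each (m_j, r_k) bin to the exceedance rate (Eq. 12.43)."""
    beta = 2.303 * b
    norm = 1.0 - math.exp(-beta * (m_max - m0))
    dm, dr = (m_max - m0) / nm, (r_max - r_min) / nr
    bins = {}
    for j in range(nm):
        m_j = m0 + (j + 0.5) * dm
        p_m = beta * math.exp(-beta * (m_j - m0)) / norm * dm      # P[M = m_j]
        for k in range(nr):
            r_k = r_min + (k + 0.5) * dr
            p_r = dr / (r_max - r_min)                             # P[R = r_k], uniform
            bins[(round(m_j, 2), round(r_k, 1))] = nu * p_exceed(y_star, m_j, r_k) * p_m * p_r
    return bins

if __name__ == "__main__":
    bins = deaggregate(y_star=0.2, nu=0.2, b=0.9, m0=4.5, m_max=7.5,
                       r_min=10.0, r_max=120.0)
    (m, r), rate = max(bins.items(), key=lambda kv: kv[1])
    total = sum(bins.values())
    print(f"modal bin: M {m}, R {r} km contributes {100 * rate / total:.1f}% "
          f"of lambda(y*) = {total:.2e}")
```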

Topic 16

Logic tree methods

The probability computations described previously allow systematic consideration


of uncertainty in the values of the parameters of a particular seismic hazard model.
In some cases, however, the best choices for elements of the seismic hazard model
itself may not be clear. The use of logic trees provides a convenient framework for
the explicit treatment of model uncertainty.

The logic tree approach allows the use of alternative models, each of which is
assigned a weighting factor that is interpreted as the relative likelihood of that
model being correct. It consists of a series of nodes, representing points at which
models are specified and branches that represent the different models specified at
each node. The sum of the probabilities of all branches connected to a given node must
be 1.

The simple logic tree allows uncertainty in selection of models for attenuation,
magnitude distribution and maximum magnitude to be considered. In this logic
tree, attenuation according to the models of Campbell and Bozorgnia (1994) and
Boore et al. (1993) are considered equally likely to be correct, hence each is
assigned a relative likelihood of 0.5.

Fig 12.13: Simple logic tree for incorporation of model uncertainty

Proceeding to the next level of nodes, the Gutenberg-Richter magnitude


distribution is considered to be 50% more likely to be correct than the
characteristic earthquake distribution.

At the final level of nodes, different relative likelihoods are assigned to the
maximum magnitude. This logic tree terminates with a total of 2x2x3=12 (no. of
attenuation models x no. of magnitude distributions x no. of maximum
magnitudes) branches (Fig 12.13).

The relative likelihood of the combination of models and/or parameters implied by


each terminal branch is given by the product of the relative likelihood of the
terminal branch and all prior branches leading to it. Hence the relative likelihood
of the combination of the Campbell attenuation model, Gutenberg-Richter
magnitude distribution and maximum magnitude of 7.5 is 0.5x0.6x0.3=0.09. The

sum of the relative likelihoods of the terminal branches or of those at any prior
level, is equal to 1.

To use the logic tree, a seismic hazard analysis is carried out for the combination
of models and/or parameters associated with each terminal branch. The result of
each analysis is weighted by the relative likelihood of its combination of branches,
with the final result taken as the sum of the weighted individual results.
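The weighting arithmetic of a logic tree can be sketched as below (an added illustration: the attenuation and magnitude-distribution weights follow the example in the text, while the remaining maximum-magnitude weights and the per-branch hazard values are assumed placeholders for results of individual analyses):

```python
from itertools import product

# Branch weights: two attenuation models (0.5 each), two magnitude distributions
# (0.6 / 0.4) and three maximum magnitudes (weights assumed so they sum to 1)
attenuation = {"Campbell & Bozorgnia (1994)": 0.5, "Boore et al. (1993)": 0.5}
distribution = {"Gutenberg-Richter": 0.6, "Characteristic": 0.4}
m_max = {7.0: 0.2, 7.5: 0.3, 8.0: 0.5}

def hazard_for_branch(att, dist, mmax):
    """Placeholder for a full PSHA run on one terminal branch (fabricated values)."""
    return 0.10 + 0.02 * (mmax - 7.0) + (0.01 if dist == "Characteristic" else 0.0)

weighted = 0.0
total_weight = 0.0
for (att, wa), (dist, wd), (mm, wm) in product(attenuation.items(),
                                               distribution.items(), m_max.items()):
    w = wa * wd * wm                     # relative likelihood of the terminal branch
    weighted += w * hazard_for_branch(att, dist, mm)
    total_weight += w

print(f"number of terminal branches = {len(attenuation) * len(distribution) * len(m_max)}")
print(f"sum of branch weights = {total_weight:.2f}")          # should be 1.00
print(f"weighted mean hazard estimate = {weighted:.4f}")
```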

Topic 17

Ready Made Software for PSHA

PSHA is the most commonly used approach to evaluate the seismic design load for
important engineering projects. The PSHA method was initially developed by
Cornell (1968) and its computer form was developed by McGuire (1976 and 1978)
and Algermissen and Perkins (1976).

McGuire developed EqRisk in the year 1976 and FRISK in the year 1978.
Algermissen and Perkins (1976) developed RISK4a, presently called SeisRisk III.

EQRISK, written by McGuire (1976)- The code was freely and widely distributed,
and today is still probably the most frequently used hazard software, and has led to
PSHA often being referred to as the Cornell-McGuire method.

The program included the integration across the scatter in the attenuation equation
as part of the hazard calculations: "under the principal option for which this
program was written, the conditional probability of (random) intensity I exceeding
value i at the given site is evaluated using the normal distribution."

The program commendably made it impossible to run a hazard analysis without


sigma by forcing a division by zero if the user attempted to do so. However, the
program did, quite naturally, provide the user the option of varying sigma and also
acknowledged that it may sometimes even be desired to run hazard calculations
without integration across the variability in the ground motions; the user manual
(McGuire, 1976) states "SIG is the standard deviation of the residuals about the
mean. If no dispersion of residuals is desired, insert a very small value for SIG."

The importance of EQRISK cannot be overstated because it enabled analysts to


begin running PSHA calculations with integration of the scatter fully incorporated,
but misunderstanding of the issues resulted in many users approximating the
hazard calculations without the scatter by entering very small values of sigma.

Topic 18

Attenuation models

There is evidence that the decay rate of ground motions is dependent on the
magnitude of the causative earthquake (e.g. Douglas, 2003), and the decay rate
also changes systematically with distance. Fourier spectra and response spectra
moreover decay differently.

Geometrical spreading is dependent on wave type, where in general body waves


spread spherically and surface waves cylindrically, while anelastic attenuation is
wavelength (frequency) dependent.

As the hypocentral distance increases, the upgoing ray impinges at a shallower angle
on the interfaces, reflecting an increasing amount of energy downwards and thereby
reducing the energy transmitted to the surface.

For moderate and large earthquakes the source can no longer be considered a point
source and therefore the size of the fault will mean the decay rate will be less than
for smaller events, which is essentially why, for large events, the distance to the
causative fault (Joyner-Boore distance) usually is used instead of epicentral or
hypocentral distance.

Assuming the occurrence of an event of magnitude Mi at a site-source distance of
Rj, the probability of exceedance of ground motion level Z needs to be defined.
From studies of strong-motion records, a lognormal distribution is found to be
generally consistent with the data, where the mean often has a simple form such
as:

ln Z = C1 + C2·Mi + C3·ln Rj + C4·Rj   (12.44)

where Z is the ground motion variable and C1 to C4 are empirically determined
constants: C2 reflects magnitude scaling (often in itself magnitude
dependent), C3 reflects geometrical spreading and C4 reflects anelastic attenuation.
Also found from the recorded data is an estimate of the distribution variance.

One of the most important sources of uncertainty in PSHA is the variability or


scatter in the ground motion (attenuation) models, which is an aleatory uncertainty
usually expressed through a sigma (σ) value, which is often of the order of 0.7 in
natural logarithms, corresponding to about 0.3 in base-10 units. This uncertainty,
which usually also is both magnitude and frequency dependent, is mostly
expressing a basic randomness in nature and therefore cannot be significantly
reduced with more data or knowledge. In PSHA we integrate over this uncertainty
which thereby is directly influencing (driving) the seismic hazard results.
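The role of σ can be made concrete with a short sketch (an added illustration; the coefficients are placeholders in the form of Equation 12.44): for a given magnitude and distance, the probability that Z exceeds a level z follows directly from the lognormal assumption.

```python
import math

def ln_z_mean(m, r_km, c=(-2.0, 1.0, -1.3, -0.004)):
    # Mean of ln Z in the form of Eq. (12.44); coefficient values are illustrative only
    c1, c2, c3, c4 = c
    return c1 + c2 * m + c3 * math.log(r_km) + c4 * r_km

def prob_exceed(z, m, r_km, sigma_ln=0.7):
    # Lognormal scatter: P[Z > z] = 1 - Phi((ln z - mean of ln Z) / sigma)
    u = (math.log(z) - ln_z_mean(m, r_km)) / sigma_ln
    return 0.5 * math.erfc(u / math.sqrt(2.0))

if __name__ == "__main__":
    m, r = 6.5, 30.0
    median = math.exp(ln_z_mean(m, r))
    print(f"median Z at M{m}, R={r} km: {median:.3f} (arbitrary units)")
    for z in (median, 2 * median, 4 * median):
        print(f"P[Z > {z:.3f}] = {prob_exceed(z, m, r):.3f}")
```

With σ = 0.7 (natural log), the chance of exceeding twice the median is roughly 16%, which is why integrating over the scatter drives the hazard results.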

Topic 19

Simulation of Strong Ground Motion

The present earthquake hazard study requires the availability of earthquake


ground motion models for peak ground acceleration and spectral acceleration,
for the frequency range of engineering interest.

Available models include near-field excitation as well as the attenuation with
distance and the scaling with magnitude, and are essentially developed for
estimating the effects of an earthquake which has not yet been observed in the
region considered.

Strong-motion attenuation relationships are important in any seismic hazard


model along with seismic source characterization, and it is noteworthy here that
the uncertainties in attenuation often are among those which contribute the most
to the final results. This is true for any area, and in particular for the Himalaya
region, where very few strong-motion observations exist in spite of a high
seismicity level.

The empirical Green's function method, developed by Irikura, is applied to
synthesize strong ground motion all over the world. Using this method
seismologists can explain the nature of the physical phenomena, trying to
determine the parameters that describe them and the processes that regulate
them.

Given the spectrum of motion at a site, there are two ways of obtaining ground
motions: 1) time-domain simulation and 2) estimates of peak motions using
random vibration theory.

Topic 20

Forward modeling in strong ground motion seismology

Forward modeling deals with the estimation of ground motion at the ground
surface by modeling the earthquake faulting process, the earth medium between
the earthquake source and the station, and local site effects near the station,
such as modeling of topography, basin structure, and soft soil conditions.

For engineering purposes, estimation of ground motion at a location, whether it


is the future site of an important engineering facility or the site of a future
earthquake, is important. Therefore, forward modeling is used on many
occasions for strong ground motion estimation, using the results of inverse
modeling as input, if available.

There are two types of source models: kinematic and dynamic. In kinematic
source models the slip over the rupturing portion of a fault, as a function of
fault plane coordinates and of time, is known or given a priori (that is, it is not a
function of the causative stresses). In dynamic source models, on the other
hand, slip over the rupturing segment of a fault is a function of tectonic stresses
acting on the region.

In kinematic source models, the final slip distribution over the fault plane, as
well as the location and time-specific evolution of slip over it, can be taken
from inverse problem solutions, which use recorded data, or can be found by
source models such as Haskell's model.

In dynamic source models, shear dislocation or slip is the result of a stress drop
in a tectonic region [Kostrov and Das, 1989; Scholz, 1989; Madariaga, 1976].
Slip, its amount, direction, the way the rupture travels over the fault plane (i.e.,
its velocity and direction) are controlled by surrounding forces in the region, as
well as by the material properties of the earth material adjacent to the fault
plane.

The rupture of a fault takes between a fraction of a second for small


earthquakes and several minutes for major events. During the slippage of the
fault, waves are generated, varying from frequencies near zero, corresponding
to the permanent ground deformation, up to very high frequencies.

With the deployment of numerous accelerometers in the near field of causative


faults, there has been a definite increase in near-field strong motion data. This
has led to an awareness of the existence and importance of coherent, long-
period velocity pulses in these regions.

The 1994 Northridge and 1995 Kobe earthquake strong motion records
reconfirmed the severity of the previously noted long-period pulses associated
with severe damage. Passing of the rupture front, or so-called source directivity,
causes these large, coherent velocity pulses.

Given this, and the needs of the earthquake engineering community, there is a
growing trend towards simulation techniques that incorporate broadband
ground motions of longer periods, directivity effects, and higher frequencies.

For simulation of deterministic ground motion, empirical [e.g., Hartzell, 1978],


semi-empirical [e.g., Irikura, 1983; Somerville, 1991], stochastic [e.g., Boore,
1983; Silva et al., 1990], and hybrid methods have been proposed and utilized.
