Introduction To Engineering Seismology: Keywords
Topic 1
To evaluate the seismic hazards for a particular site or region, all possible sources
of seismic activity must be identified and their potential for generating future
strong ground motion evaluated. Identification of seismic sources requires some
detective work; nature's clues, some of which are obvious and others quite
obscure, must be observed and interpreted.
$M_0 = \mu \, L_f \, W_f \, S_f$ (12.3)
where $\mu$ is the rigidity of the crustal rock, $L_f$ the rupture length, $W_f$ the rupture width, and $S_f$ the average slip over the rupture.
Chen and Chen provided the following relationships between log10 (M0) and Ms.
$\log_{10}(M_0) = M_s + 12.2$ for $M_s \le 6.4$ (12.7)
$\log_{10}(M_0) = 1.5\,M_s + 9.0$ for $6.4 \le M_s \le 7.8$ (12.8)
$\log_{10}(M_0) = 3.0\,M_s - 2.7$ for $7.8 \le M_s \le 8.5$ (12.9)
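As a quick numerical illustration of Eqs. (12.7)-(12.9), the sketch below converts a surface-wave magnitude Ms into seismic moment M0. The function name and the unit convention are assumptions for illustration only (the constants above are consistent with M0 expressed in N-m).

```python
def moment_from_ms(ms):
    """Seismic moment M0 from surface-wave magnitude Ms using the piecewise
    Chen and Chen relations, Eqs. (12.7)-(12.9). Constants imply M0 in N-m."""
    if ms <= 6.4:
        log_m0 = ms + 12.2          # Eq. 12.7
    elif ms <= 7.8:
        log_m0 = 1.5 * ms + 9.0     # Eq. 12.8
    elif ms <= 8.5:
        log_m0 = 3.0 * ms - 2.7     # Eq. 12.9
    else:
        raise ValueError("relations are given only for Ms <= 8.5")
    return 10.0 ** log_m0

print(f"M0(Ms = 7.0) = {moment_from_ms(7.0):.2e}")   # ~3.2e19 N-m
```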
The source-to-site distance - Much of the energy released by rupture along a fault takes
the form of stress waves. As stress waves travel away from the source of an earthquake,
they spread out and are partially absorbed by the materials they travel through. As a
result, the specific energy decreases with increasing distance from the source. The
distance between the source of an earthquake and particular site can be interpreted in
different ways. Different distance used in engineering seismology is given in Figure 12.1.
[Figure: fault rupture surface with hypocenter, epicenter, high-stress zone, surface projection of the rupture, and the distance measures R1 to R5]
Fig 12.1: Various measures of distance used in strong-motion predictive relationships
R1 and R2 are the hypocentral and epicentral distances, which are the easiest
distances to determine after an earthquake. If the length of fault rupture is a
significant fraction of the distance between the fault and the site, however,
energy may be released closer to the site, and R1 and R2 may not accurately
represent the “effective distance”.
R3 is the distance to the zone of highest energy release. Since rupture of this zone
is likely to produce the peak ground motion amplitudes, it represents the best
distance measure for peak amplitude predictive relationships. Unfortunately, its
location is difficult to determine after an earthquake and nearly impossible to
predict before an earthquake.
R4 is the closest distance to the zone of rupture and R5 is the closest distance to
the surface projection of the fault rupture.
The theoretical return period is the inverse of the probability that the event will be
exceeded in any one year. While it is true that a 10-year event will occur, on
average, once every 10 years and that a 100-year event is so large that we expect
it only to occur every 100 years, this is only a statistical statement: the expected
number of 100-year events in an n-year period is n/100, in the sense of expected
value.
Similarly, the expected time until another 100-year event is 100 years, and if in a
given year or years the event does not occur, the expected time until it occurs
remains 100 years, with this "100 years" resetting each time.
It does not mean that 100-year earthquakes will happen regularly, every 100
years, despite the connotations of the name "return period"; in any given 100-year
period, a 100-year earthquake may occur once, twice, more, or not at all.
Note also that the estimated return period is a statistic: it is computed from a set of
data (the observations), as distinct from the theoretical value in an idealized
distribution. One does not actually know that a certain magnitude or greater
happens with 1% probability, only that it has been observed exactly once in 100
years.
This distinction is significant because there are few observations of rare events:
for instance if observations go back 400 years, the most extreme event (a 400-year
event by the statistical definition) may later be classed, on longer observation, as a
200-year event (if a comparable event immediately occurs) or a 500-year event (if
no comparable event occurs for 100 years).
Dr. P. Anbazhagan 4 of 39
Introduction to Engineering Seismology Lecture 12
Further, one cannot determine the size of a 1,000-year event based on such
records alone, but instead must use a statistical model to predict the magnitude of
such an (unobserved) event.
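A minimal numerical sketch of this statistical interpretation of the return period: if individual years are treated as independent, the chance of seeing at least one T-year event during an n-year exposure is 1 - (1 - 1/T)^n. The code and numbers below are illustrative, not part of the lecture material.

```python
def prob_at_least_one(return_period_yrs, exposure_yrs):
    """Probability of at least one exceedance of the T-year event during an
    exposure of n years, assuming independent years (binomial model)."""
    p_annual = 1.0 / return_period_yrs
    return 1.0 - (1.0 - p_annual) ** exposure_yrs

# a "100-year" event has roughly a 39% chance of occurring at least once in 50 years
print(round(prob_at_least_one(100, 50), 3))   # ~0.395
```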
Topic 2
The seismic hazard can be expressed in different ways: from simple observed
macroseismic fields, to seismostatistical calculations for analyzing earthquake
occurrences in time and space and assessing their dynamic effects in a certain site
or region, to sophisticated seismogeological approaches for evaluating the
maximum expected earthquake effects on the Earth's surface.
Seismic hazard can be represented in different ways, but most frequently in terms of values or probability distributions of accelerations, velocities, or displacements of either bedrock or the ground surface.
Ground acceleration, velocity, and displacement are related to one another, since integration or differentiation of one with respect to time produces another.
Time histories of ground motions are often used in practice for non-linear
analyses when damage caused by ground shaking can accumulate in time. Single
peak values are poor indicators of earthquake destructiveness, so time histories of
ground motion are usually considered for important, large, expensive and unusual
structures and ground conditions. Response spectral values are a compromise between single peak values and a complete definition of the ground motion in time.
Topic 3
Data completeness
An important step in any earthquake data analysis is to investigate the available data set to assess its nature and degree of completeness. Incompleteness of the available earthquake data makes it difficult to obtain a fit of the Gutenberg-Richter recurrence law that represents the true long-term recurrence rate.
For a certain magnitude range and time interval, the Gutenberg-Richter relation (Eq. 12.15 below) gives the number of earthquakes N with magnitude equal to or greater than M, where 'a' and 'b' are positive, real constants: 'a' describes the seismic activity (the log of the number of events with M = 0), and 'b', which is typically close to 1, is a tectonic parameter describing the relative abundance of large to small shocks.
In order to obtain an efficient estimate of the variance of the sample mean, it is
assumed that the earthquake sequence can be modeled by the Poisson distribution.
If k1, k2, k3, ..., kn are the numbers of earthquakes per unit time interval, then an
unbiased estimate of the mean rate per unit time interval of this sample is:
$\lambda = \frac{1}{n}\sum_{i=1}^{n} k_i$ (12.11)
And its variance is:
$\sigma_\lambda^2 = \frac{\lambda}{n}$ (12.12)
where n is the number of unit time intervals. Taking the unit time interval to be one year gives a standard deviation of:
$\sigma_\lambda = \sqrt{\lambda / T}$ (12.13)
where T is the sample length. Hence, by assuming a stationary process, one can expect $\sigma_\lambda$ to behave as $1/\sqrt{T}$ in the subintervals in which the mean rate of occurrence in a magnitude class is constant. In other words, when $\lambda$ is constant, the standard deviation $\sigma_\lambda$ varies as $1/\sqrt{T}$, where T is the time interval of the sample. If the mean rate of occurrence is constant, we expect stability to occur only in a subinterval that is long enough to give a good estimate of the mean but short enough that it does not include intervals in which reports are incomplete (Stepp, 1972).
For each magnitude interval in Figure 12.2, the plotted points should define a straight-line relation as long as the data set for that magnitude interval is complete. For a given seismic region, the slope of the lines for all magnitude intervals should be the same.
[Figure: log-log plot of the standard deviation of the seismicity rate (with 1/sqrt(T) reference behaviour) versus time window length, 10-1000 yrs, for magnitude classes 1-1.9, 2-2.9, 3-3.9, 4-4.9 and M > 5]
Figure 12.2: Variance of seismicity rate for different magnitude intervals and different lengths of moving time windows
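A minimal sketch of the Stepp (1972) completeness check described above, applied to one magnitude class. The function name and the synthetic catalogue (event years reported only after 1900) are assumptions for illustration.

```python
import numpy as np

def stepp_stability(event_years, catalog_end_year, window_lengths):
    """Stepp (1972) check for one magnitude class: for each window length T
    compute the mean annual rate and its standard deviation (Eq. 12.13).
    Plotted on log-log axes, the points should follow a 1/sqrt(T) line over
    the period for which the class is complete."""
    event_years = np.asarray(event_years)
    results = []
    for T in window_lengths:
        start = catalog_end_year - T
        n_events = np.sum(event_years > start)
        lam = n_events / T                 # mean annual rate in the window
        sigma = np.sqrt(lam / T)           # Eq. 12.13
        results.append((T, sigma))
    return results

# toy catalogue: events of one magnitude class reported only between 1900 and 2010
rng = np.random.default_rng(1)
years = rng.integers(1900, 2011, size=200)
for T, s in stepp_stability(years, 2010, [10, 25, 50, 100, 200]):
    print(T, round(s, 4))
```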
Topic 4
Recurrence Relation
Under the Poisson model, the probability of at least one occurrence in a time period t is
$P[N \ge 1] = 1 - e^{-\lambda t}$ (12.14)
where $\lambda$ is the average rate of occurrence of events of the considered earthquake magnitude. Cornell and Winterstein (1986) have shown that the Poisson model should not be used when the seismic hazard is dominated by a single source for which the time since the last significant event is greater than the average return period and when the source displays strong characteristic-time behavior.
2. Time predictable, which specifies a distribution of the time to the next
earthquake that depends on the magnitude of the most recent
earthquake.
3. Slip predictable, which considers the distribution of earthquake
magnitude to depend on the time since the most recent earthquake.
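Returning to Eq. (12.14), a small numerical illustration of the Poisson exceedance probability; the 475-year return period and 50-year design life used here are illustrative values, not from the lecture.

```python
import math

def poisson_exceedance(rate_per_year, t_years):
    """P[N >= 1] = 1 - exp(-lambda * t), Eq. (12.14): probability of at least one
    occurrence in t years for a Poisson process with mean annual rate lambda."""
    return 1.0 - math.exp(-rate_per_year * t_years)

# an event with a 475-year return period (rate = 1/475) has ~10% probability
# of occurring at least once during a 50-year design life
print(round(poisson_exceedance(1 / 475, 50), 3))   # ~0.10
```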
Topic 5
Recurrence relations are a crucial component of seismic hazard analysis. They are
the means of defining the relative distribution of large and small earthquakes and
incorporating the seismic history into the hazard analysis. On the basis of
worldwide seismicity data, Gutenberg and Richter established the log-linear relation (G-R line)
$\log_{10} N(M) = a - b\,M$ (12.15)
Here N(M) is the number of earthquakes per year with a magnitude equal to or
greater than M and a and b are constants for the seismic zone. N is associated with
a given area and time period. The constant 'a' is the logarithm of the number of earthquakes with magnitudes equal to or greater than zero. The constant 'b' is the slope of the distribution and controls the relative proportion of large to small earthquakes.
Nine methods are commonly used to estimate the values of a, b and the magnitude of completeness Mc (a sketch of the underlying b-value estimate follows the list):
1. Maximum Curvature method (M1),
2. Fixed Mc = Mmin (M2),
3. Goodness of fit Mc90 (M3) and
4. Mc95 (M4),
5. Best combination of Mc90, Mc95 and maximum curvature (M5),
6. Entire magnitude range (M6),
7. Shi and Bolt (1982) method (M7),
8. Bootstrap method (M8) and
9. Cao and Gao (2002) method (M9).
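The nine methods above differ mainly in how the completeness magnitude Mc is chosen; the b value itself is commonly obtained by maximum likelihood. A minimal sketch of that estimate (Aki, 1965) applied to a synthetic catalogue is shown below; the function name and the synthetic data are assumptions for illustration.

```python
import numpy as np

def gr_parameters(magnitudes, mc, dm=0.0):
    """Maximum-likelihood Gutenberg-Richter b value (Aki, 1965) and the
    corresponding a value for a catalogue complete above mc. For magnitudes
    binned to width dm, pass dm to apply Utsu's correction."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    a = np.log10(m.size) + b * mc       # from log10 N(M >= mc) = a - b*mc
    return a, b

# usage with a synthetic exponential catalogue generated with b = 1
rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(scale=np.log10(np.e), size=2000)
a, b = gr_parameters(mags, mc=4.0)
print(round(a, 2), round(b, 2))         # b should come out close to 1.0
```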
Topic 6
Mmax Estimation
The maximum regional magnitude, Mmax, is defined as the upper limit of
magnitude for a given region, or the magnitude of the largest possible earthquake. In other words, it is a sharp cut-off magnitude at a maximum magnitude Mmax, so
that, by definition, no earthquakes are to be expected with magnitude exceeding
Mmax.
The maximum earthquake magnitude in a given area can be estimated using the
geothermal gradient. Such a relation is brought about by the fact that the upper
limit of fault size is constrained within the brittle zone in the crust, the thickness
of which is regulated by the geothermal structure of the focal region.
$m_{max} = m_{max}^{obs} + \dfrac{\delta^{1/q}\, \exp\!\left[n r^{q}/(1 - r^{q})\right]}{\bar\beta}\left[\Gamma(-1/q,\ \delta r^{q}) - \Gamma(-1/q,\ \delta)\right]$ (12.16)
where $p = \bar\beta/(\sigma_\beta)^2$, $q = (\bar\beta/\sigma_\beta)^2$, $\beta = 2.303\,b$, $\bar\beta$ denotes the mean value of $\beta$, and $\Gamma(\cdot,\cdot)$ is the complementary incomplete gamma function.
Topic 7
Predictive relationships
R3 in the Figure 12.1 represents the best distance measure for peak amplitude
predictive relationships. It is the distance to the zone of highest energy release.
Predictive relationships for earthquake ground motion and response spectral values
are empirically obtained by well-designed regression analyses of a particular
strong-motion parameter data set.
A predictive relationship allows the estimation of peak ground motions at a given
distance and for an assumed magnitude. Thus, ground motions are estimated for a
given magnitude earthquake, and at a particular distance from the assumed fault, in
a manner consistent with recordings of past earthquakes under similar conditions.
Combining these observations, a typical predictive relationship may have the form
$\ln Y = C_1 + C_2 M + C_3 M^{C_4} + C_5 \ln\!\left[R + C_6 \exp(C_7 M)\right] + C_8 R + f(\mathrm{source}) + f(\mathrm{site}), \quad \sigma_{\ln Y} = C_9$ (12.18)
Each term in this relationship reflects one of the observations about earthquake ground motion noted above. Some predictive relationships utilize all these terms and others do not.
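A minimal sketch of how a relationship of the form (12.18) is evaluated in practice. The coefficient values below are purely illustrative assumptions and do not correspond to any published attenuation relation; the site and source terms are omitted.

```python
import numpy as np

def ln_y_mean(M, R, C):
    """Mean of ln Y for the generic predictive relationship of Eq. (12.18)
    (site and source terms omitted); C holds coefficients C1..C8."""
    return (C["C1"] + C["C2"] * M + C["C3"] * M ** C["C4"]
            + C["C5"] * np.log(R + C["C6"] * np.exp(C["C7"] * M))
            + C["C8"] * R)

# illustrative coefficients only
coeffs = dict(C1=-3.5, C2=1.0, C3=0.0, C4=1.0, C5=-1.2, C6=0.2, C7=0.7, C8=-0.003)
pga_median = np.exp(ln_y_mean(M=6.5, R=20.0, C=coeffs))   # median ground motion
print(round(pga_median, 3))
```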
Topic 8
[Figure: the four steps of a deterministic seismic hazard analysis: Step 1, identification of sources (M1, M2, M3) and source-to-site distances (R1, R2, R3); Step 2, evaluation of the ground motion parameter Y from the attenuation relation for each source; Steps 3 and 4, selection of the controlling earthquake and its PGA]
Definition of source geometry includes
1. Empirical correlations
a. Rupture length correlations
b. Rupture area correlations
c. Maximum surface displacement correlations
Slip rate approach: the seismic moment is given by
$M_0 = \mu\, A\, D$ (12.19)
where $\mu$ = shear modulus of the rock, A = rupture area, and D = average displacement over the rupture area.
Knowing the slip rate, and knowing (or assuming) values of $\mu$, A, and T, the moment rate $\dot{M}_0$ can be used to estimate the seismic moment as
$M_0 = \dot{M}_0\, T$ (12.22)
$M_w = \dfrac{\log_{10} M_0}{1.5} - 10.7$ (12.23)
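A minimal sketch of the slip-rate approach of Eqs. (12.19)-(12.23). The shear modulus, rupture area, slip rate and interevent time used below are assumed illustrative values; Eq. (12.23) is applied with M0 in dyne-cm, so the moment computed in N-m is converted.

```python
import math

def mw_from_slip_rate(mu_pa, area_m2, slip_rate_m_per_yr, t_years):
    """Moment magnitude of the earthquake that would release the moment
    accumulated over t_years, following Eqs. (12.19)-(12.23)."""
    moment_rate = mu_pa * area_m2 * slip_rate_m_per_yr   # N-m per year
    m0_nm = moment_rate * t_years                        # accumulated moment, N-m
    m0_dyne_cm = m0_nm * 1.0e7                            # convert to dyne-cm
    return (math.log10(m0_dyne_cm) / 1.5) - 10.7          # Eq. 12.23

# illustrative inputs: mu = 3e10 Pa, 50 km x 15 km rupture area,
# 5 mm/yr slip rate, 500-yr interevent time
print(round(mw_from_slip_rate(3e10, 50e3 * 15e3, 0.005, 500), 2))   # ~Mw 7.1
```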
Measurement of Distances
Fig 12.7: Areal sources and associated distances
Step 3: Selection of controlling earthquakes is based on the ground motion parameter(s). Consider all sources, assume Mmax occurs at Rmin for each source, compute the ground motion parameter(s) based on Mmax and Rmin, and determine the critical value(s) of the ground motion parameter(s). An example is shown in Figure 12.8 below.
Fig 12.9: Typical spectral curve and hazard plot from DSHA analysis
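A minimal sketch of the controlling-earthquake selection described in Step 3 above, assuming a toy attenuation function and illustrative source parameters (none of the numbers below come from the lecture).

```python
import numpy as np

def dsha_controlling_pga(sources, gmpe):
    """Deterministic Step 3: for every source assume its Mmax occurs at Rmin,
    evaluate the ground motion, and keep the largest value (controlling earthquake).
    `sources` is a list of (name, Mmax, Rmin_km); `gmpe(M, R)` returns median PGA."""
    best = None
    for name, m_max, r_min in sources:
        pga = gmpe(m_max, r_min)
        if best is None or pga > best[1]:
            best = (name, pga)
    return best

# purely illustrative attenuation function, not a published GMPE
toy_gmpe = lambda M, R: np.exp(-3.5 + 1.0 * M - 1.2 * np.log(R + 10.0))

sources = [("Source 1", 7.3, 23.7), ("Source 2", 7.7, 25.0), ("Source 3", 5.0, 60.0)]
print(dsha_controlling_pga(sources, toy_gmpe))   # Source 2 controls here
```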
Topic 9
The seeds of PSHA were sown in the early 1960s in the form of two efforts that
came together in 1966. One effort was the 1964 doctoral dissertation of Allin
Cornell at Stanford titled 'Stochastic Processes in Civil Engineering,' which
studied probability distributions of factors affecting engineering decisions.
Probabilistic seismic hazard analysis (PSHA) is the most widely used approach
for the determination of seismic design loads for engineering structures. The use
of probabilistic concepts has allowed uncertainties in the size, location, and rate of
recurrence of earthquakes and in the variation of ground motion characteristics
with earthquake size and location to be explicitly considered for the evaluation of
seismic hazard. In addition, PSHA provides a framework in which these
uncertainties can be identified, quantified and combined in a rational manner to
provide a more complete picture of the seismic hazard.
Fig 12.10: Flowchart showing the elements of the probabilistic hazard methodology in
the context of a seismic design criteria methodology.
Topic 10
However, the deterministic approach provides no information on the likelihood of occurrence of the
controlling earthquake, the likelihood of it occurring where it is assumed to occur,
the level of shaking that might be expected during a finite period of time (such as
the useful lifetime of a particular structure or facility), or the effects of
uncertainties in the various steps required to compute the resulting ground motion
characteristics.
PSHA allows uncertainties in the size, location, rate of recurrence, and effects of
earthquakes to be explicitly considered in the evaluation of seismic hazards. A
PSHA requires that uncertainties in earthquake location, size, recurrence, and
ground shaking effects be quantified.
Topic 11
Summary of uncertainties
[Figure: source-zone geometries (a), (b) and (c), linear and areal, with the corresponding probability density functions fR(r) of source-to-site distance]
Fig 12.11: Examples of variation of source-to-site distance for different source zone geometries.
where $f_L(l)$ and $f_R(r)$ are the probability density functions for the variables L and R, respectively. Consequently,
$f_R(r) = f_L(l)\,\dfrac{dl}{dr}$ (12.25)
If earthquakes are assumed to be uniformly distributed over the length of the fault, $f_L(l) = 1/L_f$, and since $l^2 = r^2 - r_{min}^2$,
$f_R(r) = \dfrac{r}{L_f\sqrt{r^2 - r_{min}^2}}$
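A minimal numerical sketch of building the distance distribution for a linear fault on which ruptures are assumed equally likely anywhere along its length. The fault coordinates, site location, and binning are illustrative assumptions.

```python
import numpy as np

def distance_distribution(fault_xy, site_xy, n_bins=10):
    """Discrete probability mass function of source-to-site distance for a
    linear fault with uniformly distributed rupture locations.
    fault_xy: two (x, y) end points of the fault; site_xy: (x, y) of the site.
    Returns bin-centre distances r_k and probabilities P[R = r_k]."""
    ends = np.asarray(fault_xy, dtype=float)
    site = np.asarray(site_xy, dtype=float)
    # sample many points uniformly along the fault trace
    t = np.linspace(0.0, 1.0, 5001)
    points = ends[0] + t[:, None] * (ends[1] - ends[0])
    r = np.linalg.norm(points - site, axis=1)
    counts, edges = np.histogram(r, bins=n_bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, counts / counts.sum()

# fault from (0, 10) km to (40, 10) km, site at the origin
r_k, p_k = distance_distribution([(0.0, 10.0), (40.0, 10.0)], (0.0, 0.0), n_bins=8)
for rk, pk in zip(r_k, p_k):
    print(f"{rk:6.1f} km  P = {pk:.3f}")
```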
Size uncertainty: all source zones have a maximum earthquake magnitude that
cannot be exceeded; in general, the source zone will produce earthquakes of
different sizes up to the maximum earthquake, with smaller earthquakes occurring
more frequently than larger ones.
A basic assumption of PSHA is that the recurrence law obtained from past
seismicity is appropriate for the prediction of future seismicity. In most PSHAs,
the lower threshold magnitude is set at values from about 4.0 to 5.0 since
magnitudes smaller than that seldom cause significant damage. The resulting
probability distribution of magnitude for the Gutenberg-Richter law with lower
bound can be expressed in terms of the cumulative distribution function (CDF)
$F_M(m) = P[M < m \mid M \ge m_0] = 1 - e^{-\beta(m - m_0)}$ for $m \ge m_0$ (12.27)
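Eq. (12.27) can be evaluated directly; the sketch below also includes the commonly used doubly truncated variant with an upper bound m_max, which goes beyond Eq. (12.27) and is included only for illustration.

```python
import numpy as np

def gr_cdf(m, m0, b=1.0, m_max=None):
    """F_M(m) of Eq. (12.27): Gutenberg-Richter CDF with lower bound m0 and
    beta = b ln 10. If m_max is given, the doubly truncated form is returned
    instead (an extension beyond Eq. 12.27)."""
    beta = b * np.log(10.0)
    f = 1.0 - np.exp(-beta * (np.asarray(m, dtype=float) - m0))
    if m_max is not None:
        f = f / (1.0 - np.exp(-beta * (m_max - m0)))
    return np.clip(f, 0.0, 1.0)

print(gr_cdf([4.5, 5.5, 6.5], m0=4.0))              # Eq. (12.27)
print(gr_cdf([4.5, 5.5, 6.5], m0=4.0, m_max=7.0))   # doubly truncated variant
```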
The probability of exceeding a particular value of y*, of a ground motion
parameter, Y, is calculated for one possible earthquake at one possible source
location, and then multiplied by the probability that that particular magnitude
earthquake would occur at that particular location. The process is then repeated for
all possible magnitudes and locations with the probabilities of each summed.
For a given earthquake occurrence, the probability that a ground motion parameter
Y will exceed a particular value y* can be computed using the total probability
theorem, that is,
$P[Y > y^*] = P[Y > y^* \mid X]\,P[X] = \int P[Y > y^* \mid X]\, f_X(x)\, dx$ (12.29)
Where P[Y>y*|m, r] is obtained from the predictive relationship and fM (m) and
fR(r) are the probability density functions for magnitude and distance, respectively.
The mean annual rate of exceedance of $y^*$, considering all $N_S$ source zones (each with average rate $\nu_i$ of exceedance of the threshold magnitude), is then given by
$\lambda_{y^*} = \sum_{i=1}^{N_S} \nu_i \iint P[Y > y^* \mid m, r]\, f_{M_i}(m)\, f_{R_i}(r)\, dm\, dr$ (12.31)
The individual components of above equation are, for virtually all realistic
PSHAs, sufficiently complicated that the integrals cannot be evaluated
analytically. Numerical integration, which can be performed by a variety of
different techniques, is therefore required.
The next step is to divide the possible ranges of magnitude and distance into $N_M$ and $N_R$ segments, respectively. The average exceedance rate can then be estimated by
$\lambda_{y^*} \approx \sum_{i=1}^{N_S} \sum_{j=1}^{N_M} \sum_{k=1}^{N_R} \nu_i\, P[Y > y^* \mid m_j, r_k]\, f_{M_i}(m_j)\, f_{R_i}(r_k)\, \Delta m\, \Delta r$ (12.32)
where $m_j = m_0 + (j - 0.5)(m_{max} - m_0)/N_M$, $r_k = r_{min} + (k - 0.5)(r_{max} - r_{min})/N_R$, $\Delta m = (m_{max} - m_0)/N_M$, and $\Delta r = (r_{max} - r_{min})/N_R$. This is equivalent to assuming that each source is capable of generating only $N_M$ different earthquakes of magnitude $m_j$ at only $N_R$ different source-to-site distances $r_k$. Then the above equation is equivalent to
$\lambda_{y^*} \approx \sum_{i=1}^{N_S} \sum_{j=1}^{N_M} \sum_{k=1}^{N_R} \nu_i\, P[Y > y^* \mid m_j, r_k]\, P[M = m_j]\, P[R = r_k]$ (12.33)
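A minimal numerical sketch of Eq. (12.33) for a single source zone, using a bounded Gutenberg-Richter magnitude distribution, a discrete distance distribution, and a lognormal ground-motion model. The attenuation function, its scatter, and all input values are illustrative assumptions, not from the lecture.

```python
import numpy as np
from scipy.stats import norm

def annual_exceedance_rate(y_star, nu, m0, m_max, r_pmf, gmpe_mean, gmpe_sigma,
                           beta=2.303, n_m=50):
    """Crude numerical evaluation of Eq. (12.33) for one source:
    lambda(y*) = nu * sum_j sum_k P[Y > y* | m_j, r_k] P[M = m_j] P[R = r_k].
    r_pmf: list of (r_k, P[R = r_k]); gmpe_mean(m, r) gives the mean of ln Y."""
    # discretise the bounded Gutenberg-Richter magnitude distribution
    m_edges = np.linspace(m0, m_max, n_m + 1)
    m_j = 0.5 * (m_edges[:-1] + m_edges[1:])
    cdf = lambda m: (1 - np.exp(-beta * (m - m0))) / (1 - np.exp(-beta * (m_max - m0)))
    p_m = cdf(m_edges[1:]) - cdf(m_edges[:-1])          # P[M = m_j]

    lam = 0.0
    for r_k, p_r in r_pmf:
        # exceedance probability from a lognormal ground-motion model
        p_exc = 1.0 - norm.cdf(np.log(y_star), loc=gmpe_mean(m_j, r_k),
                               scale=gmpe_sigma)
        lam += nu * np.sum(p_exc * p_m) * p_r
    return lam

# illustrative inputs only
toy_mean = lambda m, r: -3.5 + 1.0 * m - 1.2 * np.log(r + 10.0)
r_pmf = [(15.0, 0.3), (30.0, 0.5), (60.0, 0.2)]
lam = annual_exceedance_rate(0.2, nu=0.05, m0=4.0, m_max=7.5, r_pmf=r_pmf,
                             gmpe_mean=toy_mean, gmpe_sigma=0.6)
print(f"annual rate of exceeding 0.2 g: {lam:.4g}, return period ~ {1/lam:.0f} yrs")
```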
The accuracy of the crude numerical integration procedure described above
increases with increasing NM and NR. More refined methods of numerical
integration will provide greater accuracy at the same values of NM and NR.
Since PSHA deals with temporal uncertainty, the spatial applications of the
Poisson model will not be considered further. Poisson processes possess the
following properties, which indicate that the events of a Poisson process occur
randomly, with no “memory” of the time, size or location of any preceding event.
The Poisson model is useful for practical seismic risk analysis except when the seismic
hazard is dominated by a single source for which the time interval since the
previous significant event is greater than the average interevent time and when the
source displays strong “characteristic-time” behavior.
Topic 12
On an active fault it is possible that all points are equally vulnerable to rupture.
Thus, depending on the relative orientation of a fault with respect to the station,
the hypocentral distance R will have to be treated as a random variable. Further,
the conditional probability distribution function of R, given that the magnitude M = m for a rupture segment uniformly distributed along a fault, is given by
MIN stands for the minimum of the two arguments inside the parentheses. This
condition is used to confine the rupture to the fault length. The first term provides
an estimate of the rupture length expected for an event of magnitude m.
The above solution pertains to the case of a fault situated entirely to one side of a
site. In the more general situation when the fault is extending on both sides of the
source, the conditional probabilities for the two sides are multiplied by the
fraction of length of the corresponding sides and summed up to get the probability
for the total fault.
Topic 13
Regional Recurrence
Each seismic source has a maximum earthquake magnitude that cannot be exceeded. In PSHA, the lower-bound magnitude can be taken as 4.0 to 5.0, since smaller magnitudes seldom cause significant damage to engineering structures, and the maximum magnitude can be evaluated by considering the seismotectonics of the region and the historic earthquake data.
The magnitude recurrence model for a seismic source specifies the frequency of
seismic events of various sizes per year. For any region the seismic parameters are
determined using Gutenberg-Richter (G-R) magnitude-frequency relationship
which is given in Equation below.
$N_i(m) = N_i(m_0)\,\dfrac{e^{-\beta(m - m_0)} - e^{-\beta(m_u - m_0)}}{1 - e^{-\beta(m_u - m_0)}}$, for $m_0 \le m \le m_u$
where $\beta = b \ln(10)$ and $N_i(m_0)$ is the weightage factor for the particular source based on the deaggregation.
Topic 14
The uniform hazard spectra (UHS) are derived from a probabilistic hazard
analysis. The basic steps of the analysis are as follows. First, seismotectonic
information is used to define seismic source zones. Generally, a number of
alternative hypotheses regarding the configuration of these seismic zones are
formulated.
For each source zone, the earthquake catalogue is used to define the magnitude
recurrence relation and its uncertainty, which provides the description of the
frequency of occurrence of events within the zone, as a function of earthquake
magnitude. Ground motion relations are then defined to provide the link between
the occurrence of earthquakes within the zones, and the resulting ground motions
at a specified location. Ground motion relations can be given in terms of peak
ground acceleration or velocity or in terms of response spectral ordinates of
specific periods of vibration.
The final step of the hazard analysis is integration over all earthquake magnitudes
and distances, of the contributions to the probability of exceeding specified ground
motion levels at the site of interest. Repeating this process for a number of
vibration periods defines the uniform hazard spectrum, which is a response
spectrum having a specified probability of exceedance at the particular site.
The shape of the ground motion spectrum is strongly dependent upon magnitude and distance. In general, the
dominant contributor to the short-period ground motion hazard comes from small-
to-moderate earthquakes at close distance, whereas larger earthquakes at greater
distance contribute most strongly to the long-period ground motion hazard.
Topic 15
Deaggregation
The PSHA procedures allow computation of the mean annual rate of exceedance at
a particular site based on the aggregate risk from potential earthquakes of many
different magnitudes occurring at many different source-to-site distances. The rate
of exceedance computed in a PSHA, therefore, is not associated with any
particular earthquake magnitude or source-to-site distance.
In some cases, it may be useful to estimate the most likely earthquake magnitude and/or the most likely source-to-site distance. These quantities may be used, for
example, to select existing ground motion records for response analyses. This
process of deaggregation requires that the mean annual rate of exceedance be
expressed as a function of magnitude and/or distance. For example, the mean
annual rate of exceedance can be expressed as a function of magnitude by
$\lambda_{y^*}(m_j) = P[M = m_j] \sum_{i=1}^{N_S} \sum_{k=1}^{N_R} \nu_i\, P[Y > y^* \mid m_j, r_k]\, P[R = r_k]$ (12.41)
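A minimal sketch of this magnitude deaggregation (Eq. 12.41), mirroring the numerical PSHA sketch above; the magnitude bins, probabilities, attenuation function and scatter are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def deaggregate_by_magnitude(y_star, nu, m_j, p_m, r_pmf, gmpe_mean, gmpe_sigma):
    """Eq. (12.41): mean annual exceedance rate attributed to each magnitude m_j,
    returned as normalised contributions."""
    lam_m = np.zeros_like(m_j, dtype=float)
    for r_k, p_r in r_pmf:
        p_exc = 1.0 - norm.cdf(np.log(y_star), loc=gmpe_mean(m_j, r_k),
                               scale=gmpe_sigma)
        lam_m += nu * p_exc * p_m * p_r
    return lam_m / lam_m.sum()

m_j = np.array([4.5, 5.5, 6.5, 7.5])
p_m = np.array([0.70, 0.20, 0.08, 0.02])          # from the recurrence model
toy_mean = lambda m, r: -3.5 + 1.0 * m - 1.2 * np.log(r + 10.0)
contrib = deaggregate_by_magnitude(0.2, 0.05, m_j, p_m,
                                   [(15.0, 0.5), (40.0, 0.5)], toy_mean, 0.6)
print(dict(zip(m_j.tolist(), np.round(contrib, 3))))
```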
Topic 16
The logic tree approach allows the use of alternative models, each of which is
assigned a weighting factor that is interpreted as the relative likelihood of that
model being correct. It consists of a series of nodes, representing points at which
models are specified and branches that represent the different models specified at
each node. The sum of the probabilities of all branches connected to a given node must be 1.
The simple logic tree allows uncertainty in selection of models for attenuation,
magnitude distribution and maximum magnitude to be considered. In this logic
tree, the attenuation models of Campbell and Bozorgnia (1994) and Boore et al. (1993) are considered equally likely to be correct; hence each is assigned a relative likelihood of 0.5.
At the final level of nodes, different relative likelihoods are assigned to the
maximum magnitude. This logic tree terminates with a total of 2 x 2 x 3 = 12 (no. of
attenuation models x no. of magnitude distributions x no. of maximum
magnitudes) branches (Fig 12.13).
The sum of the relative likelihoods of the terminal branches, or of those at any prior level, is equal to 1.
To use the logic tree, a seismic hazard analysis is carried out for the combination
of models and/or parameters associated with each terminal branch. The result of
each analysis is weighted by the relative likelihood of its combination of branches,
with the final result taken as the sum of the weighted individual results.
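A minimal sketch of this weighted combination of branch results; the branch weights and hazard-curve ordinates below are illustrative only.

```python
import numpy as np

def weighted_hazard(branch_results):
    """Combine hazard-curve ordinates from the terminal branches of a logic tree.
    branch_results: list of (weight, annual_exceedance_rates); the weights of the
    terminal branches must sum to 1."""
    weights = np.array([w for w, _ in branch_results])
    assert abs(weights.sum() - 1.0) < 1e-9, "branch weights must sum to 1"
    curves = np.array([c for _, c in branch_results])
    return weights @ curves            # weighted mean hazard curve

# three terminal branches (e.g. different maximum magnitudes), illustrative values
branches = [(0.5, [1.0e-2, 2.0e-3, 3.0e-4]),
            (0.3, [1.2e-2, 2.5e-3, 4.0e-4]),
            (0.2, [0.9e-2, 1.8e-3, 2.5e-4])]
print(weighted_hazard(branches))
```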
Topic 17
PSHA is the most commonly used approach to evaluate the seismic design load for
important engineering projects. The PSHA method was initially developed by
Cornell (1968) and its computer form was developed by McGuire (1976 and 1978)
and Algermissen and Perkins (1976).
McGuire developed EqRisk in the year 1976 and FRISK in the year 1978.
Algermissen and Perkins (1976) developed RISK4a, presently called SeisRisk III.
EQRISK, written by McGuire (1976): the code was freely and widely distributed,
and today is still probably the most frequently used hazard software, and has led to
PSHA often being referred to as the Cornell-McGuire method.
The program included the integration across the scatter in the attenuation equation
as part of the hazard calculations: "under the principal option for which this
program was written, the conditional probability of (random) intensity I exceeding
value i at the given site is evaluated using the normal distribution."
Topic 18
Attenuation models
There is evidence that the decay rate of ground motions is dependent on the
magnitude of the causative earthquake (e.g. Douglas, 2003), and the decay rate
also changes systematically with distance. Fourier spectra and response spectra
moreover decay differently.
For moderate and large earthquakes the source can no longer be considered a point source, and because of the finite size of the fault the decay rate will be lower than for smaller events. This is essentially why, for large events, the distance to the causative fault (the Joyner-Boore distance) is usually used instead of the epicentral or hypocentral distance.
Topic 19
Available models include near field excitation as well as the attenuation with
distance, and the scaling with magnitude here is essentially developed for
estimating the effects of an earthquake which has not yet been observed in the region considered.
Given the spectrum of motion at a site, there are two ways of obtaining ground
motions: 1) time-domain simulation and 2) estimates of peak motions using
random vibration theory.
Topic 20
Forward modeling deals with the estimation of ground motion at the ground
surface by modeling the earthquake faulting process, the earth medium between
the earthquake source and the station, and local site effects near the station,
such as modeling of topography, basin structure, and soft soil conditions.
There are two types of source models: kinematic and dynamic. In kinematic
source models the slip over the rupturing portion of a fault, as a function of
fault plane coordinates and of time, is known or given a priori (that is, it is not a
function of the causative stresses). In dynamic source models, on the other
hand, slip over the rupturing segment of a fault is a function of tectonic stresses
acting on the region.
In kinematic source models, the final slip distribution over the fault plane, as
well as the location and time-specific evolution of slip over it, can be taken
from inverse problem solutions, which use recorded data, or can be found by
source models such as Haskell's model.
In dynamic source models, shear dislocation or slip is the result of a stress drop
in a tectonic region [Kostrov and Das, 1989; Scholz, 1989; Madariaga, 1976].
Slip, its amount, direction, the way the rupture travels over the fault plane (i.e.,
its velocity and direction) are controlled by surrounding forces in the region, as
well as by the material properties of the earth material adjacent to the fault
plane.
The 1994 Northridge and 1995 Kobe earthquake strong motion records
reconfirmed the severity of the previously noted long-period pulses associated
with severe damage. Passing of the rupture front, or so-called source directivity,
causes these large, coherent velocity pulses.
Given this, and the needs of the earthquake engineering community, there is a
growing trend towards simulation techniques that incorporate broadband
ground motions of longer periods, directivity effects, and higher frequencies.