Seismic signal processing

For many years, seismograms consisted only of a paper record of the trace drawn by a pen or light-beam. This trace was an analogue record of the motion of the
Earth at that point. That total motion consisted of more than the seismic signals which were of
interest, but little could be done to separate interfering motions (i.e. noise) from the signal. As
the computer revolution unfolded, seismology experienced a digital revolution. As a result,
voltage variations from the seismometer that represent the ground motion could be fed into an
electronic device called an analogue-to-digital converter, and the seismogram became a
sequence of numbers which represented the amplitude of the ground motion sampled at some
constant time interval, often a few milliseconds. This digital record, the signal, is a time series
and is ideally suited for mathematical procedures which improve its utility. These procedures are
generally known as signal processing, or conditioning, and are quite varied and sometimes
complex. The digital revolution occurred first in the seismic reflection industry, which focuses on
the exploration for hydrocarbons. Later, earthquake seismographs were developed that
recorded digitally, and today virtually all modern seismic recordings are digital and thus involve
some sort of signal processing. A simple example of seismic signal processing occurs in
engineering and environmental applications when a hammer is used as a seismic source. An
individual impact of the hammer is too weak to provide a signal strong enough to be useful.
However, a usable record can be obtained by a simple signal processing scheme, which begins
by storing the digital signal from the first impact in the memory of the computer which is part of
the recording system. The digital record that represents each subsequent impact is first
summed with the record in the memory, and the sum is then stored as the new record. The
instrument operator examines each sum, and the process continues until a suitable record is
obtained. At the other end of the spectrum of complexity, there are three-dimensional seismic
reflection surveys where each seismic trace in the final image represents tens of
thousands of individual seismic signals which have been summed in a variety of complex
processes. The signal processing effort required to produce a good three-dimensional image in
fact rivals the intensive field effort required to collect the actual data.
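The repeated-impact summing scheme described above for the hammer source can be sketched in a few lines of code. This is a minimal illustration only; the array names, sampling interval, and test signal are assumptions, not taken from the original text:

```python
import numpy as np

def stack_impacts(impacts):
    """Running sum of the digital records from repeated hammer impacts.

    `impacts` is a sequence of equal-length arrays, each sampled at the same
    constant time interval. The running sum plays the role of the record held
    in the recording system's memory; each new impact is added to it.
    """
    stacked = np.zeros_like(impacts[0], dtype=float)
    for record in impacts:
        stacked += record          # sum the new record with the stored one
    return stacked

# Ten weak, noisy copies of the same signal stack into a much cleaner record.
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 0.001)                        # 1 s of data at 1 ms sampling
signal = np.exp(-5 * t) * np.sin(2 * np.pi * 30 * t)  # decaying 30 Hz arrival
impacts = [signal + rng.normal(0.0, 2.0, t.size) for _ in range(10)]
stacked = stack_impacts(impacts)
```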
The most basic goal of seismic signal processing can be expressed simply as increasing the
signal to noise (S/N) ratio. Noise is simply any ground motion which is not of interest. In digital
terms, S/N is just the ratio of the number representing the amplitude of the signal to the number
representing the amplitude of the noise. Another basic goal of signal processing is to increase
resolution, which is the ability to recognize individual reflected or refracted seismic waves (see
seismic exploration methods) as distinct arrivals or wavelets. Processing techniques that
address this issue try to make the individual wavelets as compact as possible and to remove
interfering waves which have travelled from the source to the receiver (seismograph) along any
indirect path. In seismic reflection studies, a process called migration is usually the most
complex processing step. Here the goal is to unravel the effects of complex subsurface
structures and, in essence, sharpen the focus of the image which is the ultimate product of the
processing effort.
Seismic signal processing is a complex subject, and processing of modern seismic reflection
surveys is one of the most sophisticated and intensive computer applications that exist.
However, a relatively small number of basic principles underlie most processing
operations. Perhaps the most basic consideration is the application of Fourier analysis (named
after J. B. J. Fourier, a prominent French mathematician). This common mathematical tool is
very useful in seismic signal processing and has been the driving force behind many advances
in this field. When viewed as a relationship between amplitude and time, a seismogram is a
well-behaved mathematical function. Thus, a seismogram can be represented as the sum of a
series of time-shifted sine waves with varying frequencies and amplitudes. If one chooses
enough different sine waves (frequencies), their sum approximates the seismogram to an
arbitrary degree of precision. When one views the seismogram in terms of the relative
amplitudes (amplitude spectra) and time shifts (phase spectra) as a function of frequency (Fig.
1), one is analysing the frequency domain representation of the signal. This representation is
obtained through a mathematical process called the Fourier transform, and it is a very intuitive
way of representing the signal for many applications. For example, frequency filtering becomes
simply a process wherein the desired frequencies are retained (i.e. multiplied by 1 in the
frequency domain) and the undesired frequencies are multiplied by zero. After doing these
multiplications, an inverse Fourier transform provides the seismogram with the undesired
frequencies removed (Fig. 1).
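The filtering operation just described can be sketched as follows, assuming a trace stored as a NumPy array and a constant sample interval (the function and parameter names are illustrative). Retained frequencies are multiplied by 1 and rejected ones by 0, exactly as described, although practical filters usually taper the band edges rather than cutting them abruptly:

```python
import numpy as np

def frequency_filter(trace, dt, f_low, f_high):
    """Keep only the frequencies between f_low and f_high (in Hz)."""
    spectrum = np.fft.rfft(trace)                        # forward Fourier transform
    freqs = np.fft.rfftfreq(trace.size, d=dt)            # frequency of each coefficient
    keep = (freqs >= f_low) & (freqs <= f_high)          # 1 inside the band, 0 outside
    return np.fft.irfft(spectrum * keep, n=trace.size)   # inverse transform

# Example: retain 5-60 Hz on a trace sampled every 2 ms.
rng = np.random.default_rng(1)
trace = rng.normal(size=1000)
filtered = frequency_filter(trace, dt=0.002, f_low=5.0, f_high=60.0)
```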
When high frequencies are undesired, one employs a low-pass filter. A high-pass filter
attenuates low frequencies, and a range of frequencies is passed by a band-pass filter. Modern
seismic studies usually place a premium on obtaining broad-band data, that is, data in
which the desired signal spans a wide range of frequencies. A
broad-band signal has numerous advantages, including superior resolution. This fact can be
illustrated by looking at the opposite extreme, which is a single sine wave containing, by
definition, only one frequency and which is thus narrow band. A sine wave travelling through the
Earth would be reflected and refracted, producing a sinusoidal seismogram on which no distinct
arrivals could be recognized. This lack of resolution is the opposite of what is desired.
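As an illustration of these filter types, the sketch below applies a tapered Butterworth band-pass filter, a common practical alternative to the abrupt frequency-domain filter shown earlier; the corner frequencies and filter order are arbitrary example values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace, dt, f_low, f_high, order=4):
    """Zero-phase Butterworth band-pass filter for a single trace."""
    nyquist = 0.5 / dt                                    # highest resolvable frequency
    b, a = butter(order, [f_low / nyquist, f_high / nyquist], btype="band")
    return filtfilt(b, a, trace)                          # filter forwards and backwards

# A low-pass or high-pass filter differs only in the btype argument
# ("low" or "high") and in passing a single corner frequency.
rng = np.random.default_rng(2)
trace = rng.normal(size=2000)
filtered = bandpass(trace, dt=0.002, f_low=8.0, f_high=60.0)
```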
Another basic principle of signal processing is that of the linear system. Once the seismic wave
has been generated at the source, a linear system allows one to model mathematically the
processes that represent modifications of the wave as simple operations called convolutions.
The main processes are the effects of travel through the Earth and the effects of the response
of the seismograph system. These operations are independent of the order in which they are
applied, whether in the frequency domain or the time domain. Often the goal of the seismologist is to
model the response of the Earth by calculating seismograms that match those observed (see
synthetic seismograms). By using the linear system concept, the seismic source and the
response of the recording system can be separated from the response of the Earth so that it can
be modelled.
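A small sketch of the linear-system (convolutional) model is given below; the wavelet, reflectivity series, and instrument response are invented for illustration. Because convolution is commutative and associative, the order in which the operations are applied does not change the result:

```python
import numpy as np

rng = np.random.default_rng(3)
wavelet = np.array([0.0, 1.0, -0.8, 0.3])        # simple source wavelet
reflectivity = rng.normal(size=200)               # spiky Earth response
instrument = np.array([1.0, 0.5, 0.25])           # recording-system response

# Applying the Earth and instrument effects in either order gives the same
# seismogram, which is the essence of the linear-system description.
seis_a = np.convolve(np.convolve(wavelet, reflectivity), instrument)
seis_b = np.convolve(np.convolve(instrument, wavelet), reflectivity)
print(np.allclose(seis_a, seis_b))                # True
```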
A final basic concept is the stacking (summing) of seismograms to produce a result in which the
S/N ratio is improved. The idea is that a signal of a given amplitude can be obtained by
recording one big seismic source or many smaller ones and summing the results. The simple
summing of a series of weak signals from a single source location at one receiver location is
one example. Another is common mid-point (CMP) stacking, which was invented by the seismic
reflection industry in the 1960s and is illustrated in Fig. 2. Here, seismic reflections travelling
along different ray paths but sharing a common mid-point between a series of sources and
receivers are analysed. These arrivals are not aligned in time initially. They can be aligned after
making time shifts based on determining an average velocity of the material above the reflector.
When the aligned traces are then summed, a marked improvement in data quality (better S/N) is attained. In modern
reflection surveys, it is not uncommon to obtain over 200 seismograms that share the same
CMP. The S/N ratio improves with approximately the square root of the number of traces. For
example, 16 traces must be summed to obtain a fourfold increase in S/N.
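The square-root rule can be checked with a small numerical sketch. The example below stacks 16 traces that share the same signal but carry independent random noise; the numbers are arbitrary and the traces are assumed to be already aligned (i.e. the required time shifts have been applied):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0.0, 1.0, 0.002)
signal = np.sin(2 * np.pi * 25 * t)               # common reflected arrival

def snr(trace):
    """Ratio of signal amplitude to noise amplitude (both as standard deviations)."""
    return np.std(signal) / np.std(trace - signal)

traces = [signal + rng.normal(0.0, 1.0, t.size) for _ in range(16)]
stacked = np.mean(traces, axis=0)                 # stack the aligned traces

print(snr(traces[0]))                             # roughly 0.7 for a single trace
print(snr(stacked))                               # roughly 4 times larger after stacking 16
```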
The seismic signal processing procedures employed by the seismic reflection industry are by far
the most complex of any seismological technique. However, two of the basic steps are common
in most applications. The first is simply bookkeeping, in which computer files representing
seismograms must be organized so that it is clear when and where they were recorded. Getting
the data into a standard format that is easily recognized is also part of this procedure. This step
is not as trivial as it seems when one realizes that hundreds of seismographs and seismic
sources spread over large distances may be involved. The second step is harder to generalize,
but would include some basic filtering, a first look at the data in some organized form, and
perhaps some stacking of the seismograms. The final steps entail the sophisticated operations
specific to seismic reflection techniques, whose goal is to produce an image of the subsurface.
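As a toy illustration of the bookkeeping step described above, traces can be grouped into common mid-point gathers from source and receiver positions recorded in their headers. The record structure below is hypothetical and far simpler than the standard exchange formats used in practice (such as SEG-Y):

```python
from collections import defaultdict

# Hypothetical minimal trace headers: (source_x, receiver_x) positions in metres.
trace_headers = [(0.0, 100.0), (20.0, 80.0), (10.0, 110.0), (40.0, 60.0)]

gathers = defaultdict(list)
for trace_index, (src_x, rec_x) in enumerate(trace_headers):
    midpoint = 0.5 * (src_x + rec_x)     # mid-point between source and receiver
    gathers[midpoint].append(trace_index)

# Traces 0, 1, and 3 share the mid-point at 50 m and form one CMP gather.
print(dict(gathers))
```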