
Presentation of Geophysical Data

1 Introduction
2 Digitization of geophysical data
3 Spectral analysis
4 Waveform processing
4.1 Convolution
4.2 Deconvolution
4.3 Correlation
5 Digital filtering
5.1 Frequency filters
5.2 Inverse (deconvolution) filters
6 Imaging and modelling

1. Introduction

Geophysical surveys measure the variation of some physical quantity, with respect either to position or to time. The quantity may, for example, be the strength of the Earth’s magnetic field along a profile across an igneous intrusion. It may be the motion of the ground surface as a function of time associated with the passage of seismic waves.

Fig. 1 (a) A graph showing a typical magnetic field strength variation which may
be measured along a profile. (b) A graph of a typical seismogram, showing
variation of particle velocities in the ground as a function of time during the
passage of a seismic wave

In either case, the simplest way to present the data is to plot a graph (Fig. 1)
showing the variation of the measured quantity with respect to distance or time as
appropriate. The graph will show some more or less complex waveform shape,
which will reflect physical variations in the underlying geology, superimposed on
unwanted variations from non-geological features (such as the effect of electrical
power cables in the magnetic example, or vibration from passing traffic for the
seismic case), instrumental inaccuracy and data collection errors.
The detailed shape of the waveform may be uncertain due to the difficulty in
interpolating the curve between widely spaced stations.
The geophysicist’s task is to separate the ‘signal’ from the ‘noise’ and interpret the
signal in terms of ground structure. Analysis of waveforms such as these
represents an essential aspect of geophysical data processing and interpretation.
Nearly all the techniques may be implemented in standard computer spreadsheet programs.

2. Digitization of geophysical data

 Waveforms of geophysical interest are generally continuous (analogue) functions of time or distance. To apply the power of digital computers to the task of analysis, the data need to be expressed in digital form, whatever the form in which they were originally recorded.

 The analogue function of time f(t) shown in Fig. 2(a) can be represented as the digital function g(t) shown in Fig. 2(b), in which the continuous function has been replaced by a series of discrete values at fixed, equal intervals of time. This process is inherent in many geophysical surveys, where readings are taken as the value of some parameter (e.g. magnetic field strength) at points along survey lines.

 The extent to which the digital values faithfully represent the original
waveform will depend on the accuracy of the amplitude measurement and
the intervals between measured samples. These two parameters of a
digitizing system are the sampling precision (dynamic range) and the
sampling frequency.
Fig. 2. (a) Analogue representation of a sinusoidal function. (b) Digital representation of the same function.
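A minimal sketch (in Python with NumPy, which is an assumption; the text itself mentions only spreadsheet programs) of how a continuous waveform is reduced to a series of discrete samples at a fixed sampling interval; the 10 Hz sinusoid and 2 ms interval are illustrative values only:

```python
import numpy as np

# Illustrative parameters (not from the text): a 10 Hz sinusoid
# sampled every 2 ms, i.e. at 500 samples per second.
dt = 0.002                     # sampling interval in seconds
t = np.arange(0.0, 1.0, dt)    # sample times covering one second
f_signal = 10.0                # frequency of the analogue waveform (Hz)

# g(t): discrete samples of the continuous function f(t) = sin(2*pi*f*t)
g = np.sin(2 * np.pi * f_signal * t)

print(f"{g.size} samples at a sampling frequency of {1 / dt:.0f} Hz")
```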

Dynamic range and sampling frequency

Dynamic range is an expression of the ratio of the largest measurable amplitude Amax to the smallest measurable amplitude Amin in a sampled function. The higher the dynamic range, the more faithfully the amplitude variations in the analogue waveform will be represented in the digitized version of the waveform. Dynamic range is normally expressed in the decibel (dB) scale used to define electrical power ratios.
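Because the decibel scale is defined on power ratios and power is proportional to amplitude squared, the dynamic range corresponding to an amplitude ratio Amax/Amin is 20 log10(Amax/Amin) dB. A brief worked sketch (the bit depths used are illustrative, not taken from the text):

```python
import numpy as np

def dynamic_range_db(a_max, a_min):
    """Dynamic range in decibels from the amplitude ratio Amax/Amin."""
    return 20.0 * np.log10(a_max / a_min)

# An n-bit digitizer can resolve about 2**n amplitude levels, giving a
# dynamic range of roughly 20*log10(2**n), i.e. about 6 dB per bit.
for bits in (8, 12, 16):
    print(f"{bits}-bit sampling: ~{dynamic_range_db(2 ** bits, 1):.0f} dB")
```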

Sampling frequency is the number of sampling points in unit time or unit distance. Thus, if a waveform is sampled every two milliseconds (the sampling interval), the sampling frequency is 500 samples per second (or 500 Hz). Sampling at this rate will preserve all frequencies up to 250 Hz in the sampled function. This frequency, equal to half the sampling frequency, is known as the Nyquist frequency (fN).
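In terms of the sampling interval Δt, the Nyquist frequency is fN = 1/(2Δt). The short check below also illustrates why frequencies above fN are not preserved: a component above the Nyquist frequency yields exactly the same samples as a lower-frequency component (aliasing, a term not used in the text above but standard in this context); the 300 Hz and 200 Hz values are illustrative:

```python
import numpy as np

dt = 0.002               # sampling interval: 2 ms
fs = 1.0 / dt            # sampling frequency: 500 Hz
f_nyquist = fs / 2.0     # Nyquist frequency: 250 Hz

# A 300 Hz cosine lies above the Nyquist frequency; sampled at 500 Hz its
# samples are indistinguishable from those of a 200 Hz cosine, so frequency
# content above fN cannot be preserved in the sampled function.
n = np.arange(100)
assert np.allclose(np.cos(2 * np.pi * 300 * n * dt),
                   np.cos(2 * np.pi * 200 * n * dt))
print(f"Nyquist frequency = {f_nyquist:.0f} Hz")
```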
3. Spectral analysis
An important mathematical distinction exists between periodic waveforms (Fig. 2.4(a)), which repeat themselves at a fixed time period T, and transient waveforms (Fig. 2.4(b)), which are non-repetitive.

Fig. 2.4 (a) Periodic and (b) transient waveforms

Fig. 2.5 Complex waveforms resulting from the summation of two sine wave components of frequency f and 2f. (a) The two sine wave components are of equal amplitude and in phase. (b) The higher frequency component has twice the amplitude of the lower frequency component and is π/2 out of phase. (After Anstey 1965.)

Fig. 2.5 illustrates that a complex periodic waveform can be built up by summing sine wave components. It follows that a periodic waveform can be expressed in two different ways: in the familiar time domain, expressing wave amplitude as a function of time, or in the frequency domain, expressing the amplitude and phase of its constituent sine waves as a function of frequency.

Fig. 2.6 Representation in the frequency domain of the waveforms illustrated in Fig. 2.5, showing their amplitude and phase spectra.

The waveforms shown in Fig. 2.5(a) and (b) are represented in Fig. 2.6(a) and (b)
in terms of their amplitude and phase spectra. Transient waveforms do not repeat
themselves; that is, they have an infinitely long period. However, it is impossible
to cope analytically with a spectrum containing an infinite number of sine wave
components. Digitization of the waveform in the time domain (Section 2)
provides a means of dealing with the continuous spectra of transient waveforms.
Fourier transformation of digitized waveforms is readily programmed for
computers, using a ‘fast Fourier transform’ (FFT) algorithm as in the Cooley–
Tukey method (Brigham 1974). FFT subroutines can thus be routinely built into
data processing programs in order to carry out spectral analysis of geophysical
waveforms. Fourier transformation is supplied as a function to standard
spreadsheets such as Microsoft Excel. Fourier transformation can be extended into
two dimensions (Rayner 1971), and can thus be applied to areal distributions of
data such as gravity and magnetic contour maps.
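The text notes that FFT routines are readily available in data processing programs and spreadsheets. As a minimal sketch (using NumPy rather than a spreadsheet, with an illustrative two-component waveform in the spirit of Fig. 2.5), the amplitude and phase spectra of a digitized waveform can be computed as follows:

```python
import numpy as np

dt = 0.002                              # 2 ms sampling interval
t = np.arange(0.0, 1.0, dt)             # one second of samples
f = 10.0                                # illustrative fundamental frequency (Hz)
# Waveform built from sine components of frequency f and 2f (cf. Fig. 2.5)
waveform = np.sin(2 * np.pi * f * t) + 0.5 * np.sin(2 * np.pi * 2 * f * t)

spectrum = np.fft.rfft(waveform)                 # FFT of the digitized waveform
freqs = np.fft.rfftfreq(waveform.size, dt)       # corresponding frequencies (Hz)
amplitude_spectrum = np.abs(spectrum)
phase_spectrum = np.angle(spectrum)
```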

4. Waveform processing

 The principles of convolution, deconvolution and correlation form the common basis for many methods of geophysical data processing, especially in the field of seismic reflection surveying.

 Their importance is that they quantitatively describe how a waveform is affected by a filter. Filtering modifies a waveform by discriminating between its constituent sine wave components to alter their relative amplitudes or phase relations, or both.

4.1 Convolution

Convolution (Kanasewich 1981) is a mathematical operation defining the change of shape of a waveform resulting from its passage through a filter. Thus, for example, a seismic pulse generated by an explosion is altered in shape by filtering effects, both in the ground and in the recording system, so that the seismogram (the filtered output) differs significantly from the initial seismic pulse (the input).

The effect of a filter may be characterized by its impulse response, which is defined as the output of the filter when the input is a spike function.

Fig. 2.10 The impulse response of a filter.


Fig. 2.11 Examples of filtering. (a) A spike input. (b) Filtered output equivalent to
impulse response of filter. (c) An input comprising two spikes. (d) Filtered output
given by summation of two impulse response functions offset in time. (e) A
complex input represented by a series of contiguous spike functions. (f) Filtered
output given by the summation of a set of impulse responses.

Convolution, or its equivalent in the frequency domain, finds very wide application in geophysical data processing, notably in the digital filtering of seismic and potential field data and the construction of synthetic seismograms for comparison with field seismograms.
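A minimal numerical sketch of the situation in Fig. 2.11, in which an input consisting of two spikes is convolved with a filter impulse response (the spike positions and impulse response values are illustrative):

```python
import numpy as np

# Input comprising two spikes (cf. Fig. 2.11(c)), zero elsewhere
input_spikes = np.array([1.0, 0.0, 0.0, 0.5, 0.0, 0.0])
# Illustrative filter impulse response (cf. Fig. 2.11(b))
impulse_response = np.array([1.0, -0.5, 0.25])

# Filtered output: the summation of impulse responses scaled by each spike
# and offset in time (cf. Fig. 2.11(d))
output = np.convolve(input_spikes, impulse_response)
print(output)
```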

4.2 Deconvolution

Deconvolution or inverse filtering (Kanasewich 1981) is a process that counteracts a previous convolution (or filtering) action. Deconvolution is an essential aspect of seismic data processing, being used to improve seismic records by removing the adverse filtering effects encountered by seismic waves during their passage through the ground.

The particular problem with deconvolving a seismic record is that the input waveform g(t) and the impulse response f(t) of the Earth filter are in general unknown. Thus the ‘deterministic’ approach to deconvolution, in which the inverse filter is computed directly from a known impulse response, cannot be employed, and the deconvolution operator has to be designed using statistical methods. This special approach to the deconvolution of seismic records is known as predictive deconvolution.
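For the deterministic case, where the filter’s impulse response is known, a minimal sketch of inverse filtering by spectral division is given below. This is not the predictive deconvolution referred to above; the small stabilization constant (‘water level’) and the test wavelet are assumptions added for illustration:

```python
import numpy as np

def deterministic_deconvolution(output, impulse_response, eps=1e-3):
    """Estimate the input to a known filter by spectral division.

    eps is a small 'water level' term to stabilize the division where
    the filter spectrum is close to zero.
    """
    n = len(output)
    out_spec = np.fft.rfft(output, n)
    filt_spec = np.fft.rfft(impulse_response, n)
    inverse_spec = np.conj(filt_spec) / (np.abs(filt_spec) ** 2 + eps)
    return np.fft.irfft(out_spec * inverse_spec, n)

# Usage sketch: convolve a known wavelet with a spiky input, then recover it.
wavelet = np.array([1.0, -0.5, 0.25])
x = np.array([0.0, 1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0])
y = np.convolve(x, wavelet)[: x.size]          # forward filtering (convolution)
x_estimate = deterministic_deconvolution(y, wavelet)
print(np.round(x_estimate, 3))
```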

4.3 Correlation
Cross-correlation of two digital waveforms involves cross-multiplication of the individual waveform elements and summation of the cross-multiplication products over the common time interval of the waveforms. The cross-correlation function involves progressively sliding one waveform past the other and, for each time shift, or lag, summing the cross-multiplication products to derive the cross-correlation as a function of lag value.
Clearly, if two identical non-periodic waveforms are cross-correlated (Fig. 2.13)
all the cross-multiplication products will sum at zero lag to give a maximum
positive value.
Thus, the cross-correlation function measures the degree of similarity of waveforms. An important application of cross-correlation is in the detection of weak signals embedded in noise.

Fig. 2.13 Cross-correlation of two identical waveforms.
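A minimal sketch of the cross-correlation described above, applied to two identical waveforms as in Fig. 2.13 so that the maximum positive value occurs at zero lag (the waveform values are illustrative):

```python
import numpy as np

w = np.array([0.0, 1.0, -0.5, 0.25, 0.0])      # illustrative waveform

# Slide one waveform past the other and, for each lag, sum the
# cross-multiplication products over the common interval.
xcorr = np.correlate(w, w, mode="full")
lags = np.arange(-(w.size - 1), w.size)

# For identical waveforms the maximum positive value occurs at zero lag.
assert lags[np.argmax(xcorr)] == 0
print(dict(zip(lags.tolist(), np.round(xcorr, 3).tolist())))
```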

5. Digital filtering
 In waveforms of geophysical interest, it is standard practice to consider the
waveform as a combination of signal and noise. The signal is that part of the
waveform that relates to the geological structures under investigation. The
noise is all other components of the waveform.
 The noise can be further subdivided into two components, random and
coherent noise. Random noise is just that, statistically random, and usually
due to effects unconnected with the geophysical survey. Coherent noise is,
on the other hand, components of the waveform which are generated by the
geophysical experiment, but are of no direct interest for the geological
interpretation.
 For example, in a seismic survey the signal might be the seismic pulse
arriving at a detector after being reflected by a geological boundary at depth.
Random noise would be background vibration due to wind, rain or distant
traffic. Coherent noise would be the surface waves generated by the seismic
source, which also travel to the detector and may obscure the desired signal.
 In favourable circumstances the signal-to-noise ratio (SNR) is high, so that
the signal is readily identified and extracted for subsequent analysis. Often
the SNR is low and special processing is necessary to enhance the
information content of the waveforms.
Removing the effect of noise
Different approaches are needed to remove the effect of different types of
noise.
 Random noise can often be suppressed by repeated measurement and averaging (see the stacking sketch after this list).
 Coherent noise may be filtered out by identifying the particular
characteristics of that noise and designing a special filter to remove it.
 The remaining signal itself may be distorted due to the effects of the
recording system, and again, if the nature of the recording system is
accurately known, suitable filtering can be designed.
 Digital filtering is widely employed in geophysical data processing to
improve SNR or otherwise improve the signal characteristics.
 A very wide range of digital filters is in routine use in geophysical, and
especially seismic, data processing.
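A minimal sketch of suppressing random noise by repeated measurement and averaging (stacking), as mentioned in the list above; the signal, noise level and number of repeats are illustrative, and for statistically random noise the signal-to-noise ratio improves roughly as the square root of the number of repeats:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 5 * t)                 # illustrative signal

n_repeats = 64
# Each repeated measurement is the same signal plus independent random noise.
records = signal + rng.normal(scale=2.0, size=(n_repeats, t.size))

stacked = records.mean(axis=0)                     # averaging suppresses random noise
print("noise level before stacking:", np.std(records[0] - signal))
print("noise level after stacking: ", np.std(stacked - signal))
```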

The two main types of digital filter are frequency filters and inverse (deconvolution) filters.

5.1 Frequency filters


 Frequency filters discriminate against selected frequency components of an
input waveform and may be low-pass (LP), high-pass (HP), band-pass (BP)
or band-reject (BR) in terms of their frequency response.

 Frequency filters are employed when the signal and noise components of a
waveform have different frequency characteristics and can therefore be
separated on this basis.
Fig. 2.16 Design of a digital low-pass filter.
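A minimal sketch of applying a digital low-pass frequency filter (this uses an off-the-shelf Butterworth design from SciPy rather than the particular design method illustrated in Fig. 2.16; the corner frequency, filter order and test waveform are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0        # sampling frequency (Hz), as in the 2 ms sampling example
cutoff = 60.0     # illustrative low-pass corner frequency (Hz)

# 4th-order Butterworth low-pass filter; Wn is given in Hz because fs is supplied.
b, a = butter(4, cutoff, btype="low", fs=fs)

t = np.arange(0.0, 1.0, 1.0 / fs)
# Illustrative waveform: a 20 Hz 'signal' plus 150 Hz 'noise'
waveform = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 150 * t)

# Zero-phase application of the filter; the 150 Hz component is strongly attenuated.
filtered = filtfilt(b, a, waveform)
```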

5.2 Inverse (deconvolution) filters

The main applications of inverse filtering to remove the adverse effects of a previous filtering operation lie in the field of seismic data processing.

6. Imaging and modelling


 Once the geophysical waveforms have been processed to maximize the
signal content, that content must be extracted for geological interpretation.
Imaging and modelling are two different strategies for this work.

 As the name implies, in imaging the measured waveforms themselves are
presented in a form in which they simulate an image of the subsurface
structure. The most obvious examples of this are in seismic reflection
and ground-penetrating radar sections, where the waveform of the variation
of reflected energy with time is used to derive an image related to the
occurrence of geological boundaries at depth.
 Often magnetic surveys for shallow engineering or archaeological
investigations are processed to produce shaded, coloured, or contoured maps
where the shading or colour correlates with variations of magnetic field which are expected to correlate with the structures being sought.
 Imaging is a very powerful tool, as it provides a way of summarizing
huge volumes of data in a format which can be readily comprehended,
that is, the visual image.
 A disadvantage of imaging is that often it can be difficult or
impossible to extract quantitative information from the image.
 In modelling, the geophysicist chooses a particular type of structural model
of the subsurface, and uses this to predict the form of the actual waveforms
recorded.
 The model is then adjusted to give the closest match between the predicted (modelled) and observed waveforms (a minimal numerical sketch of this adjustment follows the list).
 The goodness of the match obtained depends on both the signal-to-noise
ratio of the waveforms and the initial choice of the model used.
 The results of modelling are usually displayed as cross-sections through the
structure under investigation.
 Modelling is an essential part of most geophysical methods and is well exemplified in gravity and magnetic interpretation.
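As a minimal numerical sketch of the adjustment step described in the list above, the example below fits a hypothetical two-parameter forward model to a synthetic ‘observed’ profile by least squares; the functional form, parameter values and noise level are purely illustrative and stand in for whichever structural model is chosen:

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, x):
    """Hypothetical forward model predicting an anomaly profile.

    params = (depth, amplitude); the functional form is illustrative only.
    """
    depth, amplitude = params
    return amplitude * depth / (x ** 2 + depth ** 2) ** 1.5

x = np.linspace(-100.0, 100.0, 201)     # observation positions along the profile
# Synthetic 'observed' data: a 'true' model plus random noise (illustrative).
rng = np.random.default_rng(0)
observed = forward_model((25.0, 5.0e4), x) + rng.normal(scale=1.0, size=x.size)

# Adjust the model parameters so the predicted waveform best matches the observed one.
fit = least_squares(lambda p: forward_model(p, x) - observed, x0=(10.0, 1.0e4))
print("recovered (depth, amplitude):", fit.x)
```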

Reference: Kearey, P. & Brooks, M., An Introduction to Geophysical Exploration.
