
CONTENTS

Foreword
Acknowledgements

Chapter-1. Introduction

Chapter-2. Basic Seismic Data Processing
    Demultiplexing / Format conversion
    Trace header generation
    Spherical divergence correction
    Deconvolution before stack
    Band pass filter
    Amplitude normalization
    Static corrections
    Normal move out correction
    Velocity analysis
        1. t²-x² analysis
        2. Constant velocity gathers (CVG)
        3. Constant velocity stacks (CVS)
        4. Velocity spectrum
    Stacking
    Residual static corrections
    Dip move out correction
    Random noise attenuation
    Coherent noise suppression
        1. F-K filtering
        2. Tau-p filtering
        3. Radon transforms
    Deconvolution after stack
    Time variant filter
    Migration

Chapter-3. Case study: Himalayan foothill data processing

Chapter-4. Discussions and Conclusions

References
ACKNOWLEDGEMENTS

I am deeply indebted to the Head, Department of Geophysics, for permitting me to take up the dissertation work on "Seismic Signal Conditioning and Basic Processing of Foothills Data" at the Geodata Processing and Interpretation Center (GEOPIC), Oil and Natural Gas Corporation Limited, Dehradun. I take this opportunity to place on record my sincere thanks to Sri Apurba Saha, Executive Director and Chief, Geophysical Services, ONGC, for providing the infrastructural facilities at GEOPIC. I also thank Sri S.K. Das, General Manager and Head, GEOPIC, and Sri Kunal Niyogi, Deputy General Manager and Head, Processing, for providing the necessary moral and technical support.

Sri T.R. Murali Mohan, Chief Geophysicist (S), and Sri V.R. Murty, S.G. (S), were instrumental in providing the all-important technical guidance and software support in carrying out this work in a logical way. I place on record my heartfelt thanks for all their efforts.
Chapter-1

Introduction

The seismic method is the most widely used geophysical technique in exploration work today. Its predominance is due to the high accuracy, high resolution and great penetration of which the method is capable. Subsurface formations are mapped by measuring the times required for a seismic wave (or pulse), generated in the earth by a near-surface explosion of dynamite, mechanical impact, or vibration, to return to the surface after reflection from interfaces between formations having different physical properties. Changes in the speed (velocity) of sound and the density within particular rocks cause reflection and refraction of the sound waves produced by a seismic source. Specifically, variation of these parameters at an interface between two different rock types causes a reflection of some of the seismic energy back towards the surface. It is the record of these reflections against time that produces the seismic section. The reflections are recorded by detecting instruments responsive to ground motion, laid along the ground at distances from the shot point that are generally small compared with the depth of the reflector.

Variations in the reflection times from place to place on the surface usually
indicate structural features in the strata below. Depths to reflecting interfaces can be
determined from the times using velocity information that can be obtained either from the
reflected signals themselves or from surveys in wells. Reflections from depths as great as
20,000 ft can normally be observed from a single explosion, so that in most areas
geologic structure can be determined throughout the sedimentary section.

Seismic exploration work essentially involves data acquisition, data processing and data interpretation. Data acquisition means acquiring the data in the field, which involves the design of spreads (arrangements of sources and receivers) and the measurement of arrival times. The data recorded from one "shot" (one detonation of an explosive or implosive energy source) at one receiver position is referred to as a seismic trace, and is recorded as a function of time (the time since the shot was fired).

The data recorded in the field (raw data) consist of both wanted and unwanted information. The wanted information is called the signal and the unwanted information is called noise. We have to attenuate the noise and enhance the signal. Computer processing enhances the signal-to-noise ratio, extracts the significant information and displays the data in such a form that a geological interpretation can be carried out readily. In data interpretation we identify structures such as anticlines, faults, salt domes, reefs, etc.

A seismic record contains two parts: one is arrival time and the other is amplitude. The arrival time is directly related to the depth of the layer boundary from which the energy is reflected, and the amplitude (equal to the square root of energy) is related to lithology. Inversion of arrival time with an appropriate velocity gives the depth estimate of the layer; amplitude inversion leads to identification of lithological variations. Such inversions are possible only with high-quality data.

There are three principal applications of the seismic method: first, delineation of near-surface geology for engineering studies and for coal and mineral exploration within a depth of up to 1 km; second, hydrocarbon exploration and development within a depth of up to 10 km; and third, investigation of the earth's crustal structure within a depth of up to 100 km.

Chapter-2

Basic Seismic Data Processing

The major goal of seismic data processing is to produce an accurate image of the earth's subsurface from the data recorded in the field. The geologic information desired from seismic data is the shape and relative position of the geologic formations of interest. A seismic record contains primary reflections from layer boundaries together with random and coherent noise such as multiple reflections, reverberations, linear noise associated with guided waves, and point scatterers. Processing is composed of basically five types of corrections and adjustments: time, amplitude, frequency-phase content, data compression (stacking) and data positioning (migration). These adjustments increase the signal-to-noise ratio and correct the data for various physical processes that obscure the desired geological information. There are three primary steps in processing seismic data: deconvolution, stacking and migration. Processing strategies and results are strongly affected by field acquisition parameters and surface conditions. The basic data processing involves the following steps:

• Demultiplexing / format conversion
• Trace header generation
• Spherical divergence correction
• Deconvolution before stack
• Band pass filter
• Amplitude normalization
• Velocity analysis
• Normal move out correction (NMO)
• Common midpoint stack
• Residual statics estimation and application
• Dip move out correction (DMO)
• Random noise attenuation
• Deconvolution after stack
• Time variant filter
• Migration

Demultiplexing

All processing systems have their own internal format for the storage of data, so the initial processing stage must read all of the field data and convert it into this internal format. If the seismic data are recorded in multiplexed format, the data have to be demultiplexed. In the multiplexed format the data are in sample-sequential order: the first sample of the first channel is recorded, then the first sample of the second channel, and so on until the first samples of all channels are recorded; then the second samples of all channels are recorded. In the demultiplexing process the data are rearranged into trace-sequential order: the first sample of the first channel, then the second sample of the first channel, and so on until the last sample of the first channel is recorded. Mathematically, demultiplexing is a transposition of the multiplexed data, i.e. changing rows into columns and columns into rows.
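Since demultiplexing amounts to a matrix transpose, it can be sketched in a few lines of NumPy. The array shape here is an illustrative toy, not a real field record:

```python
import numpy as np

# Toy multiplexed record: 4 samples x 3 channels, in sample-sequential
# order (row k holds sample k of every channel).
multiplexed = np.arange(12).reshape(4, 3)

# Demultiplexing = transpose: trace-sequential order, one row per channel.
demultiplexed = multiplexed.T

print(demultiplexed[0])   # all samples of channel 0 in time order: [0 3 6 9]
```

In a real system the same reshuffle is done block by block while converting from the field tape format to the internal format.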

Fig 2.1 Demultiplexing

Trace header generation

In the process of data acquisition there are many sources and receivers situated at different locations, each occupying a particular position on the ground. This information is essential to identify which geophone recorded which shot, and it is stored in the trace header, which is used during signal processing. It is useful in gathering traces belonging to the same common mid point, common offset, common receiver point, etc. The dead and live traces are also flagged in the trace header so that invalid traces can be bypassed during the course of processing to save computer time.

Fig-2.2 Different Types of gathers. Common Shot gather (top left), Common
Receiver gather (top right) and Common Mid Point (CMP) gather (bottom).

Spherical divergence correction

Seismic wave propagation in subsurface media involves loss of energy with time. Some of the major losses arise from spherical divergence, inelastic attenuation, conversion to heat, back scattering, etc. Of these, spherical divergence and inelastic attenuation are the main causes of the fall of energy. As a result, a shallow reflector returns more energy and a deeper reflector less, simply because dissimilar amounts of energy reach these depths. The seismic wave from a localized impulsive source spreads out equally in all directions. After it has traveled some distance r, the energy in the original impulse is spread out over a spherical surface of radius r. Since energy is conserved and the area of a sphere is proportional to r², the energy per unit area must be proportional to 1/r². The amplitude of a seismic wave is proportional to the square root of its energy; thus, the amplitude of a seismic reflection is inversely proportional to the distance it has traveled. Another cause of energy loss is inelastic attenuation: the particles of earth through which the wave travels are not perfectly elastic, so some of the energy is absorbed and permanently alters the position of the particles.

As the seismic wave travels through a material, the particles that make up the material vibrate, generating heat. The energy that goes into that heat - intrinsic absorption - is supplied by the seismic wave. Intrinsic absorption attenuates higher frequencies more rapidly than lower frequencies, thereby changing the shape of the seismic pulse. The rate of intrinsic absorption varies greatly from one type of material to another. The attenuation of amplitude due to both mechanisms in a homogeneous material is given by

At = A0 · (r0/rt) · e^(−αt)

α = πf / (QV)

where r0 is the radius of the spherical wave front at time zero, rt is the radius of the spherical wave front at time t, α is the attenuation factor, f is the frequency, Q is the quality factor and V is the velocity of sound in the medium.

Compensating the amplitude loss from spherical divergence involves only the measurement of amplitude and of the distance traveled by the wave at the instant of the record. Compensating the amplitude loss due to the inelastic properties of rocks, however, requires knowledge of the absorption coefficient as a function of depth, preferably from well data. Since the absorption coefficient is proportional to frequency and inversely proportional to velocity, absorption is greater for a high frequency wave than for a low frequency wave. This property of rock causes a progressive lowering of the apparent frequency of a seismic event with increasing distance (depth).
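A minimal sketch of the amplitude recovery, inverting the decay law At = A0·(r0/rt)·e^(−αt). The numeric parameters (sample interval, velocity, Q, dominant frequency) are illustrative assumptions, not values from the text:

```python
import numpy as np

# Gain curve inverting At = A0*(r0/rt)*e^(-alpha*t). The defaults
# (dt, v, q, f) are assumed, illustrative values only.
def divergence_gain(n_samples, dt=0.004, v=2000.0, q=100.0, f=30.0):
    t = np.arange(1, n_samples + 1) * dt    # start at dt so r0 > 0
    r = v * t / 2.0                         # approximate distance travelled
    alpha = np.pi * f / (q * v)             # alpha = pi*f / (Q*V)
    return (r / r[0]) * np.exp(alpha * t)   # inverse of (r0/rt)*e^(-alpha*t)

# Applying the gain sample-by-sample restores the deeper amplitudes:
# corrected = trace * divergence_gain(trace.size)
```

The gain starts near 1 at the first sample and grows monotonically with time, boosting the weak late (deep) arrivals.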

Fig-2.3 Amplitude Decay and Recovery

Deconvolution before stack

The weathering layer and compact sediments in the near-surface region do not act as perfectly elastic media, which leads to improper transmission of the propagating waves through the particles. Thus the shorter wavelength components are lost in the near-surface region and only the lower frequency components are recorded at the receiver. The earth thus acts as a high-cut filter, and the broadband frequency response of the energy source becomes band-limited as it propagates through the layers of the subsurface. The high-cut filtering action of the earth can be compensated by applying another filter whose response is the opposite of the earth's response. This filter is called an inverse filter. Since filtering is a convolution, the process of applying an inverse filter is termed deconvolution.

The recorded seismogram can be modeled as a convolution of the earth's impulse response with the seismic wavelet. This wavelet has many components, including the source signature, recording filter, surface reflections and receiver-array response. The earth's impulse response would be recorded if the wavelet were just a spike; it comprises the primary reflections and all possible multiples.

Deconvolution should compress the wavelet components and eliminate multiples, leaving only the earth's reflectivity in the seismic trace. Wavelet compression can be done using an inverse filter as the deconvolution operator. An inverse filter, when convolved with the seismic wavelet, converts it to a spike; when applied to the seismogram, it should yield the earth's impulse response. An accurate inverse filter design is achieved using the least-squares method. The fundamental assumption underlying the deconvolution process is that of minimum phase.

Convolution: h(t) = f(t) * g(t)

Deconvolution: g(t) = h(t) * k(t)

where f(t) = response of the source (the wavelet), g(t) = impulse response of the earth, h(t) = output signal, which has undergone the high-cut filtering effect, and k(t) = inverse of f(t).

Fig 2.4 Wavelet (top) and seismic trace (bottom) before (left) and after (right) deconvolution.

Geometric spreading correction recovers amplitudes, whereas deconvolution recovers the poorly represented frequencies in the input data. The presence of high frequencies in the data makes it better suited for stratigraphic interpretation. The purpose of deconvolution is to enhance the spectral content within the seismic range of frequencies (≈100 Hz). If the prediction lag is equal to the sample interval, the result is spiking deconvolution; if the prediction lag is greater than the sample interval, the result is a wavelet of finite duration instead of a spike.
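Spiking deconvolution can be sketched as a least-squares (Wiener) inverse filter designed from the trace autocorrelation, with a zero-lag spike as the desired output. The filter length and prewhitening level below are assumed design parameters:

```python
import numpy as np

# Least-squares spiking deconvolution sketch. n_filt and prewhiten are
# assumed design parameters, not values from the text.
def spiking_decon(trace, n_filt=32, prewhiten=0.001):
    n = trace.size
    full = np.correlate(trace, trace, mode="full")
    acorr = full[n - 1 : n - 1 + n_filt].astype(float)  # lags 0 .. n_filt-1
    acorr[0] *= 1.0 + prewhiten                         # stabilise inversion
    # Toeplitz normal equations R a = g, desired output = zero-lag spike
    R = np.array([[acorr[abs(i - j)] for j in range(n_filt)]
                  for i in range(n_filt)])
    g = np.zeros(n_filt)
    g[0] = 1.0
    a = np.linalg.solve(R, g)                           # inverse filter
    return np.convolve(trace, a)[:n]                    # deconvolved trace
```

Applied to a decaying, minimum-phase-like wavelet, the output energy collapses toward a spike at time zero, consistent with the minimum-phase assumption stated above.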

Fig-2.5 Deconvolution

Band pass filter

The band pass filter is used to pass the frequencies containing coherent energy while attenuating the undesired frequencies in the data. The amplitude spectrum of a band pass filter is trapezoidal in shape, with a slope at the low- and high-frequency cut-off points. Since deconvolution broadens the bandwidth, a band pass filter is applied to limit the frequencies to the seismic range (8-70 Hz).

Filters

1. A low-pass filter: keep low frequency components below nc.

2. A high-pass filter: keep high frequency components above nc.

3. A band-pass filter: keep components only within a frequency band bounded by nl and nh.
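A zero-phase trapezoidal band-pass can be sketched by shaping the amplitude spectrum directly in the frequency domain. The flat part brackets the 8-70 Hz band mentioned above; the outer taper corners are assumptions:

```python
import numpy as np

# Zero-phase trapezoidal band-pass applied in the frequency domain.
# f2-f3 bracket the 8-70 Hz seismic band; f1 and f4 are assumed tapers.
def bandpass(trace, dt, f1=4.0, f2=8.0, f3=70.0, f4=80.0):
    freqs = np.fft.rfftfreq(trace.size, dt)
    # Trapezoid: 0 below f1, ramp up f1-f2, flat f2-f3, ramp down f3-f4
    gain = np.interp(freqs, [f1, f2, f3, f4], [0.0, 1.0, 1.0, 0.0])
    return np.fft.irfft(np.fft.rfft(trace) * gain, n=trace.size)
```

Because the gain is real and applied symmetrically in frequency, the filter changes no phases, only amplitudes.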

Amplitude Normalization

The design of the inverse operator during deconvolution requires the full range of input amplitudes for accurate estimation. Amplitude normalization is applied after deconvolution to bring the amplitudes in the data to a certain level. This does not mean that the relative amplitude variations are lost in the process; it merely brings large variations down to a range that is convenient for computation.
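A simple sketch of trace-wise RMS normalization; one scalar per trace, so relative amplitude variation along each trace survives:

```python
import numpy as np

# Scale each trace so its RMS amplitude sits at a target level.
# A single scalar per trace preserves relative amplitudes within the trace.
def normalize_rms(traces, target=1.0, eps=1e-12):
    rms = np.sqrt(np.mean(traces ** 2, axis=-1, keepdims=True))
    return traces * (target / np.maximum(rms, eps))
```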

Static Corrections

When seismic observations are made on non-flat topography, the observed arrival times do not represent the true subsurface structure. The arrival times must be corrected for the elevations of the shot and receiver positions and for the thickness of the weathering layer. The former correction removes differences in travel time due to variation in surface elevation of the shot and receiver locations. The weathering correction removes differences in travel time through the near-surface zone of unconsolidated, low velocity material, which may vary in thickness from place to place. These are called static corrections, as they do not change with time. The static corrections are computed taking into account the elevation of the source and receiver locations with respect to a seismic reference datum (such as mean sea level) and the velocity information in the weathering and sub-weathering layers. Uphole surveys and shallow refraction studies are used to obtain the characteristics of the low velocity layer.

Fig 2.6 Static corrections


A datum is defined in Sheriff's Encyclopedic Dictionary of Exploration Geophysics as a reference value to which other measurements are referred, or an arbitrary reference surface, reduction to which minimizes local topographic and near-surface effects.

The floating datum may be above or below the actual shot or geophone position, and the resultant statics may be positive (adding to the time and moving the data downwards) or negative (subtracting from the time and moving the data up towards time zero). In the diagram, where the floating datum is below the shot datum (at either end), we need to take time off the events, so a negative static correction is applied. The correction is positive where the floating datum is above the shot datum.

NMO Correction

Reflection arrival time increases with offset along a hyperbolic path, and this must be corrected for. The difference between the two-way travel time at a given offset and the two-way zero-offset time is called the normal move out (NMO). Reflection travel times must be corrected for NMO prior to summing the traces in the CMP gather along the offset axis. After applying NMO, the traces are mostly flattened across the offset range. Traces in each CMP gather are then summed to form a stacked trace at each mid point location. The stacked section comprises the stacked traces at all mid point locations along the line traverse.

Fig 2.7

Using the Pythagorean theorem, the travel time equation as a function of offset is

t²(x) = t0² + x²/v²stack

and the NMO correction is ∆t = t(x) − t0.

Fig 2.8 Normal Move Out Correction

Thus NMO depends on the stacking velocity and the offset. In contrast to the static correction, the correction can differ along the trace; the NMO correction is therefore also called a dynamic correction. It is larger for large offsets and for low velocities (shallow events). For a flat reflector with an overlying homogeneous medium, the reflection hyperbola can be corrected for offset if the correct medium velocity is used in the NMO equation.

As a result of move out correction, traces are stretched in a time-varying manner, causing their frequency content to shift toward the low end of the spectrum. Frequency distortion increases at shallow times and large offsets. To prevent the degradation of especially shallow events, the amplitudes in the distorted zone are zeroed out (muted) before stacking.
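The NMO correction and stretch mute can be sketched for a single trace at one offset with a constant stacking velocity. Real implementations use a picked time-variant velocity function, and the mute threshold here is an assumption:

```python
import numpy as np

# NMO correction for one trace at one offset, constant v_stack, plus a
# simple stretch mute. max_stretch is an assumed threshold.
def nmo_correct(trace, dt, offset, v_stack, max_stretch=0.5):
    t0 = np.arange(trace.size) * dt
    tx = np.sqrt(t0 ** 2 + (offset / v_stack) ** 2)   # t(x) on the hyperbola
    out = np.interp(tx, t0, trace, left=0.0, right=0.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        stretch = np.where(t0 > 0, (tx - t0) / t0, np.inf)
    out[stretch > max_stretch] = 0.0                  # stretch mute
    return out
```

For each zero-offset time t0 the corrected trace reads the sample at t(x) = sqrt(t0² + x²/v²) by linear interpolation; shallow samples whose relative stretch exceeds the threshold are zeroed, as described above.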

Velocity Analysis

To implement the normal move out correction, velocities must be estimated from the seismic data. The computed velocities are then used to correct for NMO, so that reflections are aligned horizontally across the traces of a CMP gather. This ensures better stacking of these traces into a single trace corresponding to the CMP location. The velocity which ensures the best alignment and stacking is called the stacking velocity. The root mean square (RMS) velocity for a given two-way time, on the other hand, is computed as the square root of the mean of the squares of the interval velocities of all the individual layers above. For a horizontal layer and small offsets, vstack and vrms are similar. When the reflectors are dipping, vstack is no longer equal to the RMS velocity and is given by

Vstack = Vrms / cos(dip).

There are various ways to determine the stacking velocity from the seismic data:

• t²-x² analysis
• Constant velocity gathers (CVG)
• Constant velocity stacks (CVS)
• Velocity spectrum

1. t²-x² analysis: The hyperbolic travel time equation is linear in the t²-x² plane, with slope 1/v²stack; the stacking velocity is therefore the inverse of the square root of the slope.
2. CVG: Another way to estimate the NMO velocity is to apply different NMO corrections to a CMP gather using a range of constant velocity values, and display them side by side. The velocity that best flattens each event as a function of offset is picked as its NMO velocity. The accuracy of velocity picking depends on the cable length, the two-way zero-offset time of the reflection event, and the velocity itself: the higher the velocity, the deeper the reflector, and the shorter the cable length, the poorer the velocity resolution.
3. CVS: A small portion of a line can be stacked with a range of constant velocities and the resulting constant velocity CMP stacks displayed as a panel. Stacking velocities are picked directly from the panel by choosing the velocity that yields the best stack response at a selected event time. In a CVS panel the velocity resolution decreases with increasing depth. This method is especially useful in areas with complex structure.
4. Velocity spectrum: The resultant stack traces for each velocity are displayed side by side on a plane of velocity versus two-way zero-offset time; this is called the velocity spectrum. The data are thereby transformed from the offset versus two-way time domain to the stacking velocity versus two-way zero-offset time domain. A low amplitude horizontal streak on the velocity spectrum results from the contribution of small offsets only, while a large amplitude region results from the contribution of the full range of offsets. Hence, large offsets are needed for good resolution on the velocity spectrum.

Fig-2.9. Velocity Spectrum

If the velocity used is higher than the medium velocity, under correction results; if a lower velocity is used, over correction results.

Fig-2.10 Over correction and under correction


Stacking
Stacking is the addition of the traces in a CMP gather to yield a single composite trace; it is a process of data compression. Stacking enhances the in-phase components and reduces the out-of-phase components. In this process random noise is minimized and the signal-to-noise ratio is increased.
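A CMP stack is then just a fold-normalized sum of the NMO-corrected traces; the toy gather below (synthetic signal plus unit-variance noise) illustrates the noise reduction:

```python
import numpy as np

# CMP stack: fold-normalized sum. In-phase signal adds coherently while
# random noise falls roughly as 1/sqrt(fold).
def cmp_stack(nmo_gather):
    return nmo_gather.mean(axis=0)

# Toy demonstration with fold 100 and unit-variance random noise:
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * np.arange(200) / 50.0)
gather = signal + rng.standard_normal((100, 200))
stacked = cmp_stack(gather)
```

With a fold of 100, the residual noise on the stacked trace has roughly one tenth the standard deviation of the noise on any single trace.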

Residual Static Corrections or Automatic Static Corrections
Reflection times are often affected by irregularities in the near surface. Reflection travel times must be reduced to a common datum level, which may be flat or vary along the line. Reduction of travel times to a datum usually requires a correction for the near-surface weathering layer in addition to the differences in elevation of the source and receiver stations. Travel time corrections that account for the irregular topography and the near-surface weathering layer are commonly known as field statics or refraction static corrections. Field static corrections are estimated on the assumption that the strong velocity contrast at the base of the weathering layer refracts the rays vertically to the surface, so that a simple formula - dividing the weathered layer thickness at the observation point by the velocity in the weathered layer - corrects the data for the extra path. This assumption breaks down when there are rapid variations in surface elevation and in the weathered layer characteristics (velocities and thicknesses): rays may not be vertical, but may arrive at an angle to the observation points. Hence there are bound to be small errors in the computation of static corrections. These minor static shifts are computed based on a philosophy called surface consistency. That is, the static corrections depend only on the locations of the shot and receiver points; they are surface consistent. The geology at any given location, on the other hand, is subsurface consistent. The methods of computation are based on estimating cross correlations between traces after NMO correction. If the velocities are correctly estimated, then the travel time differences for a given reflection event within an NMO-applied gather can be attributed to four components (in 2D): a residual shot static, a residual receiver static, a residual NMO error and a structure (or dip) term. To decompose these unknowns, the data gathered in common shot, common receiver, common offset and common mid point domains are used. If there are NS shots, NR receivers and NC CMPs with a fold of NF, then the number of unknowns is NS (residual shot statics) + NR (residual receiver statics) + NC (residual NMO errors) + NC (structure terms), while the total number of traces is NC*NF. Generally, NC*NF >> (NS+NR+NC+NC); that is, the number of equations far outnumbers the number of unknowns. Such a set of simultaneous equations is easily solved on a computer using Gauss-Seidel techniques.
Estimation of residual statics consists of three steps: 1) calculation of the total time shift, 2) decomposition of the total time shift into four components and 3) application of the shot and receiver static corrections, leaving behind the residual NMO and structural terms. The relative time shift between each NMO-corrected trace and a pilot trace (the resultant trace obtained from stacking the NMO-corrected traces) is calculated through cross correlation. After applying the shifts to the traces, the traces are summed to yield a refined pilot trace, and the relative time shifts between the traces of the gather and the new pilot trace are estimated by another cross correlation. Thus a total shift is estimated for each trace. These time shifts can be expressed in terms of residuals for each shot and receiver position, a residual NMO correction and a structural term for each CMP (common mid point), forming a set of simultaneous equations. The set of equations is solved for the source and receiver residuals using a least-squares method.
Residual static corrections are estimated in a surface consistent manner, i.e. the time shift depends only on the source and receiver locations, not on the ray paths from shot to receivers. All traces in a common shot gather have the same residual shot static; all traces in a common receiver gather have the same residual receiver static; all traces in a common offset gather have the same residual NMO error; and all traces in a common mid point gather have the same dip or structural term. To put it simply, the correlation shifts within a shot gather consist of the residual receiver, residual NMO and structural terms only, but not the residual shot static, which is common to all its traces. Similarly, the correlation shifts in a common receiver gather exclude the residual receiver static; the shifts in a common offset gather exclude the residual NMO term; and the shifts in a common mid point gather exclude the structural term. These properties are taken advantage of to estimate the total time shift and decompose it into its individual components. Of these, only the residual shot and receiver corrections are applied to the data, while the residual NMO error is left for velocity refinement and the structural term is left untouched.
∆t = ∆ts + ∆tr + ∆tNMO + ∆tCMP

where ∆t = total residual static, ∆ts = residual shot static, ∆tr = residual receiver static, ∆tNMO = residual NMO correction and ∆tCMP = structural term.
After the residual static corrections are applied, the CMP gathers with travel time deviations show better alignment of reflections, while those that did not require such corrections remain unchanged.
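The first step, estimating each trace's time shift against a pilot trace from the cross-correlation peak, can be sketched as follows (the search window `max_lag` is an assumed parameter):

```python
import numpy as np

# Pick the time shift of a trace relative to a pilot trace at the
# cross-correlation maximum, searching lags within +/- max_lag samples.
def residual_shift(trace, pilot, max_lag):
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.dot(np.roll(trace, -lag), pilot) for lag in lags]
    return int(lags[np.argmax(cc)])       # positive = trace is delayed
```

Repeating this for every trace of the gather yields the total shifts that are then decomposed into the shot, receiver, NMO and structural components described above.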
Dip Move Out Correction
Whenever there are dipping events in the seismic section, the data are not recorded at the correct spatial position, and the assumption that a common mid point is the same as a common depth point breaks down. NMO correction converts non-zero offsets to zero offset only in the case of plane parallel layers without dip. The DMO correction is used to convert non-zero offsets to zero offset in the case of dipping layers. The formula for the DMO correction is

Ti² = To² + Xi²/v² − Xi²·sin²α/v²

so the estimated velocity will be v* = v / cos α

where v* = dip-corrupted velocity, v = actual velocity, and α is the dip angle (with respect to the horizontal). The term Xi²·sin²α/v² is the dip move out term. Hence the DMO correction is larger for: 1) far offsets, 2) lower velocities (shallower levels) and 3) steeper dips.

Fig-2.11 Dip Move Out (DMO) Correction (S - source, R - receiver, M - geometric mid point, Md - position after DMO, Ma - position after migration; Pn - reflector position after NMO, Pd - reflector position after DMO, Pm - reflector position after migration)

In the presence of dip, the traces of a CMP gather no longer refer to a common mid point but to several mid points; this is known as reflection point dispersal. Dip move out corrects the dispersal by redistributing the energy across the CMP gathers. The DMO correction is implemented in the common offset domain: each NMO-corrected amplitude is smeared to all possible locations in the common offset plane. In a constant velocity medium, the locus of such possible locations in the common offset plane is an ellipse passing through the source and receiver positions. Constructive summation of these ellipses over all time samples yields DMO-corrected common offset planes. After resorting to the CMP domain, these common offset planes correspond more or less to small subsurface areas (the reflection point dispersal is reduced). This additional correction provides the non-zero offset to zero offset conversion. Also, since the gathers are more or less rearranged into their proper locations in space, velocity analysis on DMO-corrected gathers yields better quality velocities for use in stacking. It has been seen that NMO + DMO + stack ensures a better zero-offset section, and post-stack migration of a DMO stack is approximately equal to pre-stack time migration. However, DMO cannot handle strong velocity variations, and pre-stack time migration is becoming common practice as the computing hardware and software needed to meet its requirements become available. Since DMO is a partial migration, it also moves energy in the up-dip direction and reduces the length of dipping segments. In Fig-2.12, an NMO stack across a synclinal structure, the opposing limbs in the axial part criss-cross; in the DMO stack the axial parts are resolved and the up-dip movement of the limbs can be clearly seen.

Fig-2.12

Types of Noise

Noise attenuation is the main problem in signal conditioning. Noise can be broadly classified into random noise and coherent noise.

Random noise and its attenuation

Noise that is spatially random and also repeatable is due to scattering from
near-surface irregularities and inhomogeneities such as boulders, small-scale faulting,
vuggy limestone etc. Such noise sources are so small and so near the spread that the
outputs of two geophones are the same only when the geophones are placed almost
side by side. The strength of such scattered waves is inversely related to the distance of
the scatterers from the shot and receivers. The estimation of random noise is done in the
frequency-space (F-X) domain. Each seismic trace in the T-X domain is Fourier transformed
along the time axis to yield a corresponding trace in the F-X domain. Signal energies, being
sinusoidal, become predictable and can be removed in this domain through a predictive
deconvolution. Thus deconvolution in the F-X domain removes the signal content of the
trace, leaving behind the noise content. The total field minus the estimated noise gives the
signal field. An inverse Fourier transform along the frequency axis returns the signal data back
to the T-X domain.
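The F-X estimation described above can be sketched in a few lines of numpy. This is an illustrative toy, not the production algorithm: the prediction-filter length and the per-frequency least-squares fit are assumptions, and real implementations work in overlapping space-time windows with regularization.

```python
import numpy as np

def fx_decon(data, npred=3):
    """Toy F-X predictive deconvolution for random-noise estimation.

    data : (nt, nx) array, time down the rows, one column per trace.
    Returns the predictable (signal) part; data - result estimates the
    random noise.
    """
    nt, nx = data.shape
    D = np.fft.rfft(data, axis=0)              # T-X -> F-X
    S = np.zeros_like(D)
    for f in range(D.shape[0]):
        x = D[f]                               # complex samples along space
        # Least-squares one-sided prediction: x[n] ~ sum_k c[k] * x[n-1-k]
        A = np.array([x[n - npred:n][::-1] for n in range(npred, nx)])
        b = x[npred:]
        c, *_ = np.linalg.lstsq(A, b, rcond=None)
        pred = x.copy()
        pred[npred:] = A @ c                   # coherent (predictable) part
        S[f] = pred
    return np.fft.irfft(S, n=nt, axis=0)       # F-X -> T-X
```

A perfectly coherent event, being predictable along x, passes through almost unchanged, while spatially random energy is not predicted and ends up in the residual.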

Coherent noise and its attenuation

Coherent noise includes ground roll, back scatters and multiples. Coherent noise
is sometimes subdivided into (a) energy which travels essentially horizontally and (b) energy
which reaches the spread more or less vertically. Another important distinction is between
(a) noise which is repeatable and (b) noise which is not, i.e. whether the same noise is
observed at the same time on the same trace when a shot is repeated. The three properties of
coherence, travel direction and repeatability form the basis of most methods of improving
data quality. Ground roll includes Rayleigh, Love and air waves and is characterized by
low velocities, low frequencies and higher energies compared to the reflected energy.
Coherent noise is attenuated through multi-channel filters implemented in the XT, FK and
tau-p domains.

1. FK Filtering:

This filtering is applied to attenuate linear noise and non-linear noise
(multiples) in the data. FK filtering helps eliminate events based on their velocities.

The data are transformed from the time-space domain to the frequency-wavenumber
domain using a 2D Fourier transform. The wavenumber K of a dipping event is
determined by counting the number of peaks within a unit distance along the horizontal
direction.

Fig 2.13 Signal & noise alignment in TX & FK domains

Fig 2.14 Reduction of noise in FK filter

The velocity of noise is less than that of the primary (reflection) events. The
inverse of the slope of an event in the X-T domain gives its apparent velocity, so flat
events have high apparent velocity; thus the velocity varies with the slope. High-velocity
events align around the frequency axis while lower-velocity events align around the
wavenumber axis in the F-K domain, and events with intermediate velocities align in
between. Also, vertical events in the TX domain become horizontal in the FK domain and
horizontal events become vertical.

Case i): Linear noise attenuation: The CMP gather in the XT domain is converted into the FK
domain through a 2D Fourier transform. A filter is then designed such that, when multiplied
with the CMP gather in the FK domain, the linear noise is attenuated: the filter coefficients
are ones in the pass band and zeroes in the rejection band. The data are then
transformed back into the XT domain. (The sequence of filtering steps is shown in fig 2.15.)
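The pass/reject masking can be sketched with numpy as a velocity fan filter. This is a simplified brick-wall version (production FK filters taper the pass-band edges to avoid ringing), and the cutoff velocity vmin is an illustrative parameter:

```python
import numpy as np

def fk_velocity_filter(data, dt, dx, vmin):
    """Zero out energy with apparent velocity below vmin in the F-K domain.

    data : (nt, nx) gather in T-X; dt in s, dx in m; vmin in m/s.
    Pass-band coefficients are ones, rejection-band coefficients are zeroes.
    """
    D = np.fft.fft2(data)                        # T-X -> F-K
    nt, nx = data.shape
    f = np.fft.fftfreq(nt, d=dt)[:, None]        # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]        # wavenumbers (1/m)
    # Slow events (|f/k| < vmin) lie near the wavenumber axis and are rejected.
    mask = (np.abs(f) >= vmin * np.abs(k)).astype(float)
    return np.real(np.fft.ifft2(D * mask))       # back to T-X
```

A flat (infinite apparent velocity) event maps onto the frequency axis and passes untouched, while a slow, steeply dipping event maps near the wavenumber axis and is removed.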

Case ii): Multiple attenuation: (a) Here the CMP gathers are taken. Multiples have
larger move out than the primary events, since the velocity of the primaries is greater than
the multiple velocities. A move out correction is applied with a velocity function
intermediate between the primary and multiple velocities. As a result the primaries are
over-corrected and the multiples are under-corrected, and the primaries map towards the
negative wavenumber axis. The resulting CMP gathers are multiplied with a filter whose
coefficients are ones in the pass band and zeroes in the rejection band, which eliminates
the multiples. The multiple-free data are then transformed back into the XT domain.
(The sequence of filtering steps is shown in fig 2.16.)

b) Here the CMP gathers are taken and NMO correction is applied so that the hyperbolic
events become flat. The NMO-corrected gathers are then converted from the TX domain into
the FK domain. The primary events, aligning around the frequency axis, appear as a steep
cone, while the noise aligns near the wavenumber axis. A filter is designed such that,
when multiplied with the data, it eliminates the noise; the pass band has filter coefficients
of one and the rejection band coefficients of zero. An inverse 2D Fourier transform then
returns the noise-free data to the TX domain, and inverse NMO is applied for further
analysis. (The sequence of filtering steps is shown in fig 2.17.)

For a given dip, all frequency components map onto the FK plane along a straight
line that passes through the origin. The higher the dip, the closer the radial line in the FK
domain lies to the wavenumber axis; zero-dip components map along the frequency
axis. Events with the same dip in the XT domain, regardless of their location, map onto a
single radial line in the FK domain, so events with different dips that interfere in the
XT domain can be isolated in the FK domain. The 2D Fourier transform is thus a way to
decompose a wavefield into its plane-wave components.

Fig 2.15 Attenuation of linear noise using FK filter

Fig 2.16 Multiple attenuation in CDP gathers using FK filter

Fig 2.17a. Velocity analysis of CDP gathers showing the primary velocity trend and the boundary velocity VB between primaries and multiples.

Fig 2.17b. Multiple attenuation through velocity analysis using FK filter.

Fig.2.18a.Raw data with different types of noise

Fig 2.18b.CDP gathers after attenuation of noise

2. Tau-p transform or slant-stack transform

The plane-wave decomposition of a wavefield such as a common shot gather can
be achieved by applying linear move out and summing amplitudes over the offset axis.
This procedure is called slant-stacking. Slant-stacking replaces the offset axis with the
ray parameter 'p' axis, where p is the inverse of the horizontal phase velocity. A group of
traces with a range of p values is called a slant-stack gather. The plane waves are
synthesized by summing the amplitudes in the offset domain along slanted paths. The
equation of a straight line, y = mx + c, here becomes

t = τ + px

where p is the ray parameter, given by p = dt/dx = sin(θ)/v, and τ is the intercept
time at p = 0. The data summed over the offset axis are given by

S(p, τ) = Σ_x d(x, τ + px)

which represents a plane wave with ray parameter p = sin(θ)/v.

By repeating the linear move out correction for a range of p values and
performing the summation, a complete slant-stack gather is constructed. A slant-stack
gather is referred to as a tau-p gather; it consists of all the dip components within the
specified range of p values in the original offset data.
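The summation above can be sketched directly in numpy. This is an illustrative forward transform only (no inverse, no anti-alias weighting); the offsets and p range are hypothetical:

```python
import numpy as np

def slant_stack(data, dt, offsets, p_values):
    """Forward tau-p (slant-stack) sketch: S(p, tau) = sum over x of d(x, tau + p*x).

    data     : (nt, nx) gather, time down the rows
    offsets  : trace offsets in metres
    p_values : ray parameters in s/m
    """
    nt, nx = data.shape
    S = np.zeros((nt, len(p_values)))
    for ip, p in enumerate(p_values):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))       # linear move out in samples
            if 0 <= shift < nt:
                S[:nt - shift, ip] += data[shift:, ix]
    return S
```

A linear event t = τ0 + p0·x in the gather stacks coherently only at p = p0, where all nx traces add at τ0, illustrating how a line in T-X focuses to a point in tau-p.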

Fig 2.19.

Fig 2.19 illustrates the tau-p transform of more than one hyperbolic event in the T-X
domain. Reflection 'c' maps into the region of higher p values while reflections
'a' and 'b' map into the region of low p values. A linear event in the T-X domain, such as
a refraction arrival, maps to a point in the slant-stack domain, and vice versa. With
increase in dip, τ decreases and the ray-parameter value increases; with decrease in dip,
τ increases and the ray-parameter value decreases.

3. Radon Transform: a) Hyperbolic Radon transform: Velocity-stack transformation
involves application of hyperbolic move out correction and summation over the offset
axis. As a result of this mapping, the offset axis is replaced by the velocity axis. The
relationship between the input coordinates (h, t) and the transform coordinates (v, τ) is
given by the hyperbolic move out equation

t² = τ² + 4h²/v²

where t is the two-way travel time, τ is the zero-offset time, h is the half offset, and
v is the stacking velocity.

A linear event in the X-T domain maps onto a point in the ray-parameter domain,
while a hyperbolic event such as a primary or a multiple maps onto an ellipse there.
Multiples are not periodic in the offset domain but are periodic in the ray-parameter
domain, and this periodicity is used for predicting and attenuating multiples.
Since the mapping function here is hyperbolic, a hyperbola in the XT domain, whether
primary or multiple, ideally maps onto a point in the velocity domain. Hence primaries
and multiples can be distinguished by velocity discrimination, and this criterion is used
to attenuate multiples.
b) Parabolic Radon Transform:
Here the input CMP gather is NMO corrected using the hyperbolic move out
equation

tn = sqrt(t² − 4h²/vn²)

where tn is the time after NMO correction, vn is the hyperbolic move out correction
velocity function, h is the half offset, and t is the recorded two-way time.
The residual move outs of events that were originally hyperbolic are now
approximately parabolic:

tn = τ + qh²

where τ is the two-way zero-offset time and q is the parameter that defines the curvature
of the parabola.
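The forward mapping along tn = τ + qh² can be sketched just like the slant stack, with the linear term replaced by the parabolic one. This is an illustrative forward transform under assumed sampling; practical parabolic Radon demultiple uses a least-squares inverse and a mute in the q domain:

```python
import numpy as np

def parabolic_radon_forward(gather, dt, half_offsets, q_values):
    """Forward parabolic Radon sketch: sum along t_n = tau + q*h^2.

    gather       : (nt, nx) NMO-corrected CMP gather
    half_offsets : h in metres, one per trace
    q_values     : curvature parameters in s/m^2
    """
    nt, nx = gather.shape
    R = np.zeros((nt, len(q_values)))
    for iq, q in enumerate(q_values):
        for ix, h in enumerate(half_offsets):
            shift = int(round(q * h * h / dt))   # parabolic move out in samples
            if 0 <= shift < nt:
                R[:nt - shift, iq] += gather[shift:, ix]
    return R
```

An under-corrected multiple with residual curvature q0 focuses at (τ, q0), away from the flattened primaries at q ≈ 0, which is what makes the multiples separable.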

Fig.2.20a

In fig.2.20a the multiples can be seen having more hyperbolic alignment
than the primaries in the CMP gather; they appear as closures with low velocity in the
velocity analysis.
In fig.2.20b the Radon transform has been applied to the CMP gathers and the velocity
analysis repeated. It can be observed that the multiples are attenuated and the closures
have disappeared.

Fig.2.20b

Deconvolution After stack

Deconvolution after stack is usually applied to restore the high frequencies
limited by CMP stacking. It compresses the basic wavelet in the recorded seismogram
and attenuates reverberations and short-period multiples, thus increasing temporal
resolution and yielding a representation of subsurface reflectivity.

Time Variant Filter

Time variant filters are often applied to the data in order to pass higher
frequencies in the shallow part of the section and lower frequencies at greater depths. The
earth consists of sets of layers, each distinctly characterized by a certain band of
frequencies, and different parts of the seismic section have to be subjected to different
band pass filters in a time variant manner. Moreover, this facilitates a cross-sectional
view of the earth in tune with the characteristic band of frequencies each lithology has to
offer. The shallower part of the section generally contains a broader frequency band than
the middle or lower parts. As the higher frequency bands of useful signal are confined to
the shallow part of the section, temporal resolution is greatly reduced in the deeper
portions. The time variant character of the signal bandwidth requires the application of
frequency filters in a time varying manner. By doing so, the ambient noise which begins
to dominate the signal at late times is excluded and a section with a higher signal to noise
ratio is obtained.

Migration

Migration is the process that moves the data on the stacked seismic section from
apparent positions to correct positions in both space and time. Even after NMO and DMO
corrections, reflections from dipping events are plotted on the stack section in the wrong
place. They need to be moved "up-dip" along a hyperbolic curve in order to put them in
the right place, the shape of this hyperbola depending on the velocity field. Migration
moves dipping reflections to their true subsurface positions and collapses diffractions, thus
increasing spatial resolution and yielding a seismic image of the subsurface.

Migration strategies include:

a) 2D versus 3D migration

b) Post- versus pre-stack migration, and

c) Time versus depth migration.

On the unmigrated stacked section an unconformity appears complex, while on the
migrated section it becomes interpretable. Migration moves dipping events in the up-dip
direction; hence on a migrated section anticlines become sharper and synclines are
broadened. The bowties in the stacked section are untied and turn into synclines on the
migrated section. This can be observed in fig.2.21 on the next page.

Fig.2.21 Bowties (top) associated with a tight syncline (bottom)

For a set of observation (source-coincident-receiver) points along the surface, the
travel time for energy to reach a diffractor and reflect back to the observation point is
given by 2 × (distance along the path) / (velocity of wave propagation). As the observation
point moves from one end of the diffractor to the other, it can be seen that the
minimum time occurs when the observation is made just above the diffractor.
Conventionally, these observations are plotted as traces vertically below the observation
points, and the resulting travel-time surface resembles a hyperbola: a single
point in depth appears as a hyperbola on a zero-offset (source-coincident-receiver) section.
It also follows that, given a hyperbolic surface in the zero-offset section, it is ambiguous
to say that it represents an anticline. Thus some means has to be found to make all
hyperbolic surfaces converge to a point; this is effectively the idea behind migration.
There are two approaches to post-stack migration.

1. Kirchhoff migration: A point scatterer in the subsurface manifests as a hyperbola
in the zero-offset time domain, and the curvature of the hyperbola is a function of
the overlying velocity. Migration is achieved by summing the energy
along the diffraction hyperbola and placing the summed energy at its vertex. Thus
for every output point, amplitudes along the corresponding hyperbola (defined
by the velocity and time) in the input are summed.
2. Wave equation method: In this method semicircles are drawn with the observation
point as centre and the two-way path time as radius. The semicircles are drawn
from all the observation points, and the common envelope of all such semicircles
defines the reflector position. This is also known as semicircle superposition.
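The diffraction-summation idea of method 1 can be sketched in numpy for a constant-velocity zero-offset section. This brute-force triple loop is illustrative only (real Kirchhoff migration adds obliquity, spreading and wavelet corrections, and limits the summation aperture):

```python
import numpy as np

def kirchhoff_migrate(section, dt, dx, v):
    """Toy diffraction-summation (Kirchhoff-style) post-stack migration.

    section : (nt, nx) zero-offset section; v : constant velocity in m/s.
    For each image point, amplitudes along its diffraction hyperbola
    t(x) = sqrt(t0^2 + (2*(x - x0)/v)^2) are summed and placed at the apex.
    """
    nt, nx = section.shape
    image = np.zeros_like(section)
    for ix0 in range(nx):                  # output trace (apex position)
        for it0 in range(nt):              # output time sample (apex time)
            t0 = it0 * dt
            total = 0.0
            for ix in range(nx):           # sum along the hyperbola
                t = np.sqrt(t0**2 + (2.0 * (ix - ix0) * dx / v) ** 2)
                it = int(round(t / dt))
                if it < nt:
                    total += section[it, ix]
            image[it0, ix0] = total
    return image
```

Migrating a synthetic diffraction hyperbola collapses it: the full energy of all traces gathers at the apex, which is exactly the "sum along the hyperbola, place at the vertex" recipe above.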

Migration can be explained with another simple example. Consider a
vertical boundary PQ (fig-2.22) in a constant-velocity medium of velocity V, with AP the
surface (datum) of observation. The two-way arrival time from observation point A to P is
2AP/V; this is plotted vertically below observation point A. The two-way
arrival time from observation point B (say midway between A and P) to P is 2BP/V
(= AP/V); this is plotted vertically below observation point B. The two-way
arrival time at observation point P is zero. If the arrival times at points A, B and P
are joined to form the reflection segment, a 45-degree segment results, which does not
represent the vertical reflector to be imaged. How, then, is this 45-degree segment to be
made a 90-degree segment? Thus, whenever there is a dipping reflector in the
subsurface, the time image in the zero-offset section does not represent the subsurface
truly, and some position adjustment needs to be applied to the zero-offset section. The
task of MIGRATION is to correct this mis-positioning. Migration moves dipping
reflectors into their true subsurface positions and collapses diffractions, thereby
delineating detailed subsurface features such as anticlines, fault planes etc. The efficacy
and perfection with which this is achieved is a matter of continuous research even today. A
dipping segment P'Q', when seen from two points A and B, will appear as PQ in the time
section, which does not represent the correct dip. If two semicircles, one with AP as
radius and A as centre and one with BQ as radius and B as centre, are drawn and a
common envelope is constructed for these two semicircles, the correct position of this
dipping segment is obtained (fig-2.23). True and apparent dips are related by the
equation tan(apparent dip) = sin(true dip). This is known as the Migration Equation.
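The migration equation can be checked numerically with a short helper (an illustrative snippet, not from the report):

```python
import math

def migrated_dip(apparent_dip_deg):
    """Invert the migration equation tan(apparent dip) = sin(true dip)
    to recover the true dip from the dip seen on the time section."""
    return math.degrees(math.asin(math.tan(math.radians(apparent_dip_deg))))
```

Consistent with the vertical-boundary example, a 45-degree apparent segment migrates to a 90-degree (vertical) reflector; note also that apparent dip can never exceed 45 degrees, since sin(true dip) ≤ 1.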

Fig-2.22. Requirement of Migration

Fig-2.23. Movement of a dipping segment during migration

Fig 2.24. Principle of post stack (zero offset section) Migration – Kirchhoff’s integral method: energy along each diffraction hyperbola is gathered and placed at its apex (in time), and vertical stretching converts the focused time energy to depth.
Fig 2.25. Principle of post stack (zero offset section) Migration – Wave Equation method: in the common midpoint plane the locus of all possible positions of a point diffractor is a semicircle; smearing the energy along semicircles and drawing the common envelope focuses the energy at the actual point, and vertical stretching converts the focused time energy to depth.
Post stack migration becomes inadequate because of complexities in

a. the geometry of the surface and subsurface layers, and / or

b. the velocities of the subsurface layers.

In the early days, seismic data processing was mostly done post-stack due to the lack
of large-scale computing facilities at affordable price and time. The industry could survive
with those limitations, as discovery of hydrocarbons was possible even with qualitative
outputs and quantitative assessment of results was not the need of the hour. As the
success of finding major hydrocarbon prospects became alarmingly low, quality
processing and extracting more out of the input data became the order of
the day. This urge, in turn, fuelled the development of more accurate algorithms and of
computing machines to cater to the needs. In fact, hardware development is yet to
match the software requirements fully within an affordable time frame.

Normal Move Out (NMO) stack followed by post-stack migration was successful
for horizontal and gently dipping (<10 degrees) layers. When dips are steep, NMO fails
to deliver the so-called zero-offset section, which post-stack migration requires.
Dip Move Out (DMO) correction had a long stay overcoming this difficulty for
dipping layers, and post-stack migration of a DMO stack was considered very close
to pre-stack time migration. The limitations of DMO in accounting for large-scale velocity
variations and true amplitudes forced the industry towards pre-stack migration.

Migration          Structure & velocity
Post stack time    Simple velocities + simple structure
Post stack depth   Complex velocities + simple structure
Pre stack time     Simple velocities + complex structure
Pre stack depth    Complex velocities + complex structure

Chapter 3
Case study: Signal conditioning and basic seismic processing of
Himalayan foothill data

The data recorded in the field are raw data consisting of both desired signal
and undesired noise, coherent as well as random in nature. The data therefore require
conditioning to eliminate the noise, and different processing steps have to be applied to
bring the data into a geologically interpretable form.

Input data

The data were acquired near the Himalayan foothill region (Fig.3.1). An end-on spread
was employed, in which the receivers follow the shots. The data consist of 120 channels with
a shot interval of 60 m and a channel interval of 30 m. The near offset is 300 m and the far
offset is 3870 m.

Pre stack signal conditioning

The raw data contained various types of noise, both coherent and random.
In particular, elimination of ground roll, a linear (shot-generated) noise, based on its
maximum amplitude, mono-frequency character and frequency band, was attempted with
little success. Manual editing was adopted to eliminate bad traces, and bad zones were
muted surgically; care was taken that wanted information was not eliminated.

After manual editing, a notch filter at 50 Hz and a band pass filter with a 6-16 Hz
low-cut ramp and a 40-50 Hz high-cut ramp were applied. In fig 3.2 and fig 3.3 the effect
of the band pass filter can be observed, where the noise is attenuated to some extent. The
data were then stacked.

Amplitude Recovery

Amplitude normalization is used after deconvolution to bring the amplitudes
present in the data to a certain level. After gain analysis, the maximum noise attenuation
level was below 10 dB (fig 3.4 & fig 3.5).

Deconvolution before stack

Predictive deconvolution was applied on the CDP-sorted gathers to marginally
restore the high-frequency components and to reduce short-period multiple noise. The
results are shown in fig 3.6.

Velocity analysis

As shown in fig 3.7 and fig 3.8, velocities were picked against the events on the
decon CDP gathers using the velocity spectrum method. The velocities ranged from
3400 m/s to 5400 m/s and were picked from the peak semblance trends coinciding
with the prior velocities in the area. The events became straight.

NMO correction

NMO correction was applied to check the quality of the picked velocity function.
Using the derived stacking-velocity section, each trace within the CMP gather was
corrected for move out using the two-term hyperbolic NMO equation. The
reflection events were flattened.
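The two-term hyperbolic correction can be sketched in numpy as follows. This is an illustrative implementation (not the production code used on this dataset); it applies no stretch mute, and the sampling and velocity values below are hypothetical:

```python
import numpy as np

def nmo_correct(gather, dt, offsets, v_of_t):
    """Two-term hyperbolic NMO sketch: t(x) = sqrt(t0^2 + x^2 / v^2).

    gather  : (nt, nx) CMP gather, time down the rows
    offsets : source-receiver offsets in metres, one per trace
    v_of_t  : stacking velocity (m/s) for every output sample t0
    """
    nt, nx = gather.shape
    out = np.zeros_like(gather)
    t0 = np.arange(nt) * dt
    for ix, x in enumerate(offsets):
        tx = np.sqrt(t0**2 + (x / v_of_t) ** 2)  # reflection time at offset x
        # Pull the amplitude recorded at t(x) back to t0 (linear interpolation).
        out[:, ix] = np.interp(tx, t0, gather[:, ix], left=0.0, right=0.0)
    return out
```

Applied with the correct stacking velocity, a hyperbolic reflection flattens at its zero-offset time across all offsets, which is exactly the behaviour checked during velocity-function quality control.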

In fig 3.9 a comparison is shown between the single-velocity stack and the multi-
velocity stack. The single-velocity stack, also known as a brute (raw) stack, was prepared
to see what processing steps are required. In the multi-velocity stack, the CDP gathers are
stacked with a range of interactively picked velocities, which allows the events to be
observed more clearly than in the single-velocity stack.

Static corrections

The elevation ranges from 734 m to 962 m above mean sea level along the profile. The
seismic reference datum is 500 m above mean sea level. From fig 3.10 and 3.11 it was
observed that the static corrections computed from the up-hole times at the shot points are
of the order of 110-130 ms each at the shot and receiver locations. This amounts to shifting
a trace by about 240 ms upward, which leads to significant loss of near-surface events.
Static corrections also fail in such situations, where the low-velocity-layer thickness and
the elevation are very large. It is therefore recommended to correct the data to a floating
datum, a smooth representation of the topography. The traces within each CMP
gather are corrected to this local datum, which leads to small static corrections; at each
CMP location the floating-datum-to-final-datum correction is preserved for
subsequent use in CMP stacking and, most importantly, migration to topography.

Residual static corrections

Automatic residual static corrections based on a surface-consistent approach were
attempted to remove minor static shifts in the CMP gathers. The NMO-corrected gathers
were subjected to this estimation in a slanting correlation window encompassing the major
correlatable primaries. The maximum allowed shift during the total-shift estimation was kept
at 36 ms, expecting large residual statics. In fig 3.12 it can be seen that the decon stack
with residual statics shows the continuity of the events more clearly than the decon stack
without residual statics.

Random noise attenuation and Time Variant Filter

Random noise is estimated in the F-X domain. In this domain all the signals, being
sinusoidal components, can be removed through a predictive deconvolution, leaving the
random noise untouched. The estimated random noise, upon subtraction from the total
field, returns the sinusoidal signals. The coherent linear noise appears to be the aliased
ground roll that was present in the data prior to DMO. Since DMO also acts as a dip filter,
this noise, or part of it, may have been modified with respect to its actual dips and
frequencies. Band pass filters in a time-variant sense were used to test their efficacy in
removing these energies. A filter of 15 Hz (slope 24 dB/octave) to 50 Hz (slope
60 dB/octave) applied to the upper part of the section, up to 2000 ms, was found good
enough to account for them. The filter sets for the section are as follows. In fig 3.13 the
noise-free stack section can be observed.

Time window (ms TWT)   Low cut (Hz / dB/oct)   High cut (Hz / dB/oct)
0000-2000              15/24                   50/60
2000-3000              12/24                   45/60
3000-4000              10/24                   40/60
4000-6000               8/24                   30/60
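The windowed filtering in the table above can be sketched with numpy. This is a simplified illustration: a zero-phase brick-wall band pass stands in for the tapered dB/octave slopes, and production software also blends the windows instead of butting them:

```python
import numpy as np

# Filter table from the section: (start s, end s, low cut Hz, high cut Hz).
TVF_TABLE = [(0.0, 2.0, 15.0, 50.0),
             (2.0, 3.0, 12.0, 45.0),
             (3.0, 4.0, 10.0, 40.0),
             (4.0, 6.0,  8.0, 30.0)]

def time_variant_filter(trace, dt, table=TVF_TABLE):
    """Apply a different band pass to each time window of a single trace."""
    nt = trace.size
    freqs = np.fft.rfftfreq(nt, d=dt)
    t = np.arange(nt) * dt
    out = np.zeros(nt)
    for t_lo, t_hi, f_lo, f_hi in table:
        spec = np.fft.rfft(trace)
        spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # brick-wall band pass
        filtered = np.fft.irfft(spec, n=nt)
        window = (t >= t_lo) & (t < t_hi)             # keep only this window
        out[window] = filtered[window]
    return out
```

A 35 Hz component, for example, survives in the shallow 15-50 Hz window but is rejected in the deep 8-30 Hz window, which is how high-frequency ambient noise at late times is excluded.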

Migration

Stacking-velocity functions were used for migration. The velocity data at the
picked locations were interpolated for in-between CMP locations and smoothed over a
distance of 1 km. This is necessary because migration needs structure-consistent
velocities, whereas CMP stacking may require large variations in adjacent velocities for
better stacking. Large spikes in the velocity or seismic sections introduce migration noise.
It was seen that over-migration and aliasing effects dominated the migration results; to
reduce these effects, the input data were filtered and the amplitudes were scaled to small
reference values. Migration was attempted on the sections and the results were
satisfactory. For comparison purposes, the traces were removed, leaving the original
traces. Since migration is also a form of deconvolution (known as spatial deconvolution,
since it enhances spatial resolution), a time-variant filter is customarily applied on the
migration results.
In fig 3.14 the difference between the stacked section and the migrated
section can be observed. In the stacked section the thrust faults appear closer and
longer, whereas in the migrated section the thrust faults have moved to their true
positions in the up-dip direction.

Chapter-4
Discussions and Conclusions

Acquisition and processing of seismic data from the Himalayan foothills
remains a challenging task. Because of the irregular topography, sources and receivers
cannot be placed at the appropriate locations, so many corrections need to be applied in
order to separate signal from noise. In this case study no special processing was used;
using the basic processing steps, the noise was eliminated and, as a consequence, the
signal-to-noise ratio was enhanced. The high velocity distribution in the near-surface
sediments and the large topographic variations along the profile make the estimation of
field static corrections and stacking velocities very ambiguous. All this put together
contributes to poor subsurface images. Random bursts of energy and source-generated
noise in the form of ground roll dominate the data. By manual editing and application of
a band pass filter the noise was eliminated to some extent. Short-period multiples were
attenuated by applying deconvolution on the CDP gathers.
Spherical divergence correction using a V*T type of recovery to account for the
amplitude decay was found satisfactory, but automatic gain control was seen to be better,
due to the observation that it not only balances amplitude along the time axis but also
effectively reduces the high energies of ground roll. Predictive deconvolution is better
suited to this type of data, as the main aim is to obtain a reliable subsurface image rather
than resolution. The choice of an appropriate band pass filter is also crucial in this dataset
with steep dips in the shallower parts.
NMO correction was applied on the CDP gathers to remove the increase
in travel time with offset. As a result the reflections became almost flat while the
multiples retained their hyperbolic alignment. Velocity analysis was done using the
velocity spectrum method by picking the range of velocities that flattens the reflection
events. To remove the effect of statics on the CDP gathers, residual static corrections
were applied and the data were brought to a floating datum level.
Post-stack imaging heavily depends upon the assumption that the obtained stack is
an ideal zero-offset (source-coincident-receiver) section. The CDP technique, which evolved
nearly five decades ago, is still the method of seismic acquisition in spite of the fact that in
several geological situations it does not cater fully. This method, in association with NMO
correction, promises to transform non-zero offsets to zero offset and simultaneously
achieve S/N enhancement. But NMO fails to do so when dipping strata are present (the CMP
is not equal to the CDP). DMO had a long tenure in the industry to overcome this limitation.
DMO could reduce the reflection-point dispersal and was able to produce gathers that
were very close to CDP gathers, offering better velocity analysis and stack. Again,
DMO could not handle strong velocity gradients. So the assumption that NMO +
DMO + post-stack migration is equal to pre-stack migration is largely forgotten
in the industry today. Pre-stack migration in the time and depth domains is the order of
the day in present seismic practice, and pre-stack time and depth migrations are generally
attempted for foothills imaging. In this data the results of pre-stack imaging were very
poor, the reasons being the line length and the poor input data quality. This
also drives home the point that not all data may be conducive to high-end processing
techniques. With this dataset, only post-stack imaging could be attempted without losing
much quality.

