Chapter 7



Remote Sensing

Remote Sensing is defined as the science and technology by which characteristics of objects of interest can be
identified without direct contact.

Types:

Passive and Active Remote sensing

Passive Sensors: measure energy that is naturally available (e.g. Optical sensors)

Active Sensors: provide their own energy source for illumination (e.g. Synthetic Aperture Radar (SAR), Laser
Scanner (LIDAR)).

Passive Remote Sensing

The sun provides a very convenient source of energy for remote sensing.

The sun's energy is either reflected, as for visible wavelengths, or absorbed and then re-emitted, as for thermal
IR wavelengths.

Remote sensing of reflected energy can take place only while the sun is illuminating the Earth.

Energy that is naturally emitted (such as thermal infrared) can be detected day or night, as long as the amount
of energy is large enough to be recorded.

Active Remote Sensing

An active sensor emits radiation which is directed toward the target to be investigated. The radiation reflected
from that target is detected and measured by the sensor.

Advantages of active sensors include the ability to obtain measurements at any time, regardless of time of day,
season, or weather; the ability to examine wavelengths that are not sufficiently provided by the sun (e.g.,
microwaves); and better control over the way a target is illuminated.

However, active systems require the generation of a fairly large amount of energy to adequately illuminate
targets.

Brief History of Remote Sensing

1826 The invention of photography

1850s Photography from balloons

1873 Theory of electromagnetic energy by J. C. Maxwell

1909 Photography from airplanes

1910s World War I: aerial reconnaissance

1920s Development and applications of aerial photography and photogrammetry



1930s Development of radar in Germany, USA, and UK

1940s World War II: application of Infrared and microwave regions

1950s Military Research and Development

1960s The satellite era: Space race between USA and USSR.

1960 The first meteorological satellite (TIROS-1)

1960s First use of the term "remote sensing"

1972 Launch of the first earth resource satellite (Landsat-1)

1973 Skylab remote sensing observations from space

1970s Rapid advances in digital image processing

1980s Landsat-4: new generation of Landsat sensors

1986 Launch of French earth observation satellite (SPOT-1)

1980s Development of hyperspectral sensors

1990s Launch of earth resource satellites by national space agencies and commercial companies

Process of Remote Sensing

Energy Source or Illumination (A) - the first requirement for remote sensing is to have an energy source which
illuminates or provides electromagnetic energy to the target of interest.

Radiation and the Atmosphere (B) - as the energy travels from its source to the target, it will come in contact
with and interact with the atmosphere it passes through. This interaction may take place a second time as the
energy travels from the target to the sensor.

Interaction with the Target (C) - once the energy makes its way to the target through the atmosphere, it interacts
with the target depending on the properties of both the target and the radiation.

Recording of Energy by the Sensor (D) - after the energy has been scattered by, or emitted from the target, we
require a sensor (remote not in contact with the target) to collect and record the electromagnetic radiation.

Transmission, Reception, and Processing (E) - the energy recorded by the sensor has to be transmitted, often in
electronic form, to a receiving and processing station where the data are processed into an image (hardcopy
and/or digital).

Interpretation and Analysis (F) - the processed image is interpreted, visually and/or digitally or electronically, to
extract information about the target which was illuminated.

Application (G) - the final element of the remote sensing process is achieved when we apply the information
that we have been able to extract from the imagery about the target, in order to better understand it, reveal some
new information, or assist in solving a particular problem.

Seismic Reflection Methods

The physical process of reflection is illustrated in Figure 1, where the raypaths through successive layers are
shown. There are commonly several layers beneath the earth's surface that contribute reflections to a single
seismogram. The unique advantage of seismic reflection data is that it permits mapping of many horizons or
layers with each shot. At later times in the record, more noise is present, making the reflections difficult to
extract from the unprocessed data.

Figure 2 indicates the paths of arrivals that would be recorded on a multichannel seismograph. Note that the
subsurface coverage is exactly one-half of the surface distance across the geophone spread. The subsurface
sampling interval is one-half of the distance between geophones on the surface. Another important feature of
modern reflection-data acquisition is illustrated by figure 3. If multiple shots, S1 and S2, are recorded by
multiple receivers, R1 and R2, and the geometry is as shown in the figure, the reflection point for both raypaths
is the same. However, the ray paths are not the same length, thus the reflection will occur at different times on
the two traces. This time delay, whose magnitude is indicative of the subsurface velocities, is called normal-
moveout. With an appropriate time shift, called the normal-moveout correction, the two traces (S1 to R2 and S2
to R1) can be summed, greatly enhancing the reflected energy and canceling spurious noise.
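
The normal-moveout relationship described above can be sketched numerically. For a flat reflector, the two-way travel time at offset x is t(x) = sqrt(t0² + (x/v)²), so the correction applied before summing traces is the difference from the zero-offset time t0. The velocity and time values below are illustrative assumptions, not values from the text:

```python
import math

def nmo_time(t0, offset, velocity):
    """Two-way travel time at a given source-receiver offset
    for a flat reflector (zero-offset time t0, velocity v)."""
    return math.sqrt(t0**2 + (offset / velocity)**2)

def nmo_correction(t0, offset, velocity):
    """Time shift (normal-moveout correction) that flattens the
    reflection so traces at different offsets can be summed."""
    return nmo_time(t0, offset, velocity) - t0

# Illustrative values: 100 ms zero-offset time, 1,500 m/s velocity
t0 = 0.100
for x in (0.0, 30.0, 60.0):
    dt = nmo_correction(t0, x, 1500.0)
    print(f"offset {x:5.1f} m -> moveout {dt * 1000:.1f} ms")
```

The moveout grows with offset, which is why its magnitude is indicative of the subsurface velocity.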

This method is called the common reflection point, common midpoint, or common depth point (CDP) method.
If all receiver locations are used as shot points, the multiplicity of data on one subsurface point (called CDP
fold) is equal to one-half of the number of recording channels. Thus, a 24-channel seismograph will record 12-
fold data if a shot corresponding to every receiver position is shot into a full spread. Thus, for 12-fold data,
every subsurface point will have 12 separate traces added, after appropriate time shifting, to represent that
point.
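
The fold arithmetic in this paragraph can be checked directly; the 24-channel figure is the text's own example:

```python
def cdp_fold(n_channels):
    """CDP fold when a shot is fired at every receiver position
    into a full spread: half the number of recording channels."""
    return n_channels // 2

print(cdp_fold(24))  # the text's 24-channel example gives 12-fold data
```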

Figure 1. Schematic of the seismic reflection method.



Figure 2. Multichannel recordings for seismic reflection.

Figure 3. Illustration of common depth point (often called common mid point).

Arrivals on a seismic reflection record can be seen in figure 4. The receivers are arranged to one side of a shot,
which is 15 m from the first geophone. Various arrivals are identified on figure 4. Note that the gain is
increased down the trace to maintain the signals at about the same size by a process known as automatic gain
control (AGC). One side of the traces is shaded to enhance the continuity between traces.

Figure 4. Simple seismic reflection record.

The ultimate product of a seismic reflection survey is a corrected cross section of the earth with reflection events
in their true subsurface positions. This section does not present every detail of the acquisition and processing of
shallow seismic reflection data. Thus, the difference between deep petroleum-oriented reflection and shallow
reflection work suitable for engineering and environmental applications will be stressed.

Cost and frequency bandwidth are the principal differences between the two applications of seismic reflection.
One measure of the nominal frequency content of a pulse is the inverse of the time between successive peaks.
In the shallow subsurface, the exploration objectives are often at depths of 15 to 45 m. At 1,500 m/s, a wave with
10 ms peak-to-peak (nominal frequency of 100 Hz) is 15 m long. To detect (much less differentiate between)
shallow, closely spaced layers, pulses with nominal frequencies at or above 200 Hz may be required. A value of
1,500 m/s is used as a representative velocity because it corresponds to saturated, unconsolidated materials;
without saturated sediments, both attenuation and lateral variability generally make reflection work difficult.
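
The wavelength arithmetic follows from λ = v/f, with the nominal frequency taken as the inverse of the peak-to-peak time. At the representative velocity of 1,500 m/s quoted in this paragraph, a 100 Hz pulse is 15 m long:

```python
def nominal_frequency(peak_to_peak_s):
    """Nominal frequency: the inverse of the time between successive peaks."""
    return 1.0 / peak_to_peak_s

def wavelength(velocity_m_s, frequency_hz):
    """Wavelength = velocity / frequency."""
    return velocity_m_s / frequency_hz

f = nominal_frequency(0.010)      # 10 ms peak-to-peak -> 100 Hz
print(wavelength(1500.0, f))      # 15 m at 1,500 m/s
print(wavelength(1500.0, 200.0))  # 7.5 m at the 200 Hz needed for thin layers
```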

Common-Offset Seismic Reflection Method

A technique for obtaining one-fold reflection data is called the common-offset method or common-offset gather
(COG). It is instructive to review the method, but it has fallen into disuse because of the decreased cost of CDP
surveys and the difficulty of quantitative interpretation in most cases. Figure 5 illustrates time-distance curves
for the seismic waves that can be recorded. In the optimum offset distance range, the reflected and refracted
arrivals will be isolated in time. No quantitative scales are shown because the distances, velocities, and
wave modes are distinct at each site. Thus, testing is necessary to establish the existence and location of the
optimum offset window. Figure 6 illustrates the COG method. After the optimum offset distance is selected,
the source and receiver are moved across the surface. Note that the subsurface coverage is one-fold, and there is
no provision for noise cancellation. Figure 7 is a set of data presented as common offset data. The offset
between geophone and shot is 14 m. Note that the acoustic wave (visible as an arrival near 40 ms) is attenuated
(the shot was buried for this record). Note the prominent reflection near 225 ms that splits into two arrivals near
line distance 610 m. Such qualitative changes are the usual interpretative result of a common offset survey. No
depth scale is furnished.
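
The time-distance curves of figure 5 can be sketched for a simple two-layer model; the optimum offset window is where the reflection is isolated in time from the direct, refracted, and air-wave arrivals. The velocities and layer thickness below are illustrative assumptions, not values from the text:

```python
import math

def direct_wave(x, v1):
    """Direct arrival along the surface in the upper layer."""
    return x / v1

def reflection(x, v1, h):
    """Two-way reflection time from a flat interface at depth h."""
    return math.sqrt(x**2 + 4.0 * h**2) / v1

def head_wave(x, v1, v2, h):
    """Refracted (head wave) arrival; valid beyond the critical distance."""
    theta_c = math.asin(v1 / v2)
    return x / v2 + 2.0 * h * math.cos(theta_c) / v1

# Illustrative model: 10 m of 400 m/s soil over 1,500 m/s saturated sediment
v1, v2, h = 400.0, 1500.0, 10.0
for x in (5.0, 15.0, 30.0):
    print(f"x={x:4.1f} m: direct {direct_wave(x, v1)*1000:6.1f} ms, "
          f"reflection {reflection(x, v1, h)*1000:6.1f} ms, "
          f"head wave {head_wave(x, v1, v2, h)*1000:6.1f} ms")
```

Plotting these curves against offset for a given site is exactly the testing step the text describes for locating the optimum offset window.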

Figure 5. Optimum offset distance determination for the common offset method.

Figure 6. Common offset method schematic.

Figure 7. Sample common offset record.

If the requirements for relative and absolute surveying are taken care of at a separate time, excellent production
rates, in terms of number of shot points per day, can be achieved. Rates of one shot per minute, or 400 to 500
shots per field day, are possible. Note that the spacing of these shot points may be only 0.6 to 1.2 m, so the
linear progress may be only about 300 m of line per day for very shallow surveys. Also, note that the amount of data acquired
is enormous. A 24-channel record sampled every 1/8 ms that is 200 ms long consists of nearly 60,000 32-bit
numbers, or upwards of 240 KB/record. Three hundred records may represent more than 75 MB of data for 1
day of shooting.
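
The per-record data volume follows directly from the recording parameters. A sketch (a 300-ms record length is assumed here, which reproduces the nearly-60,000-sample figure; the sample interval and channel count are from the text):

```python
def record_size(sample_interval_s, record_length_s, n_channels, bytes_per_sample=4):
    """Total samples and bytes in one multichannel record."""
    samples_per_trace = round(record_length_s / sample_interval_s)
    total_samples = samples_per_trace * n_channels
    return total_samples, total_samples * bytes_per_sample

# 1/8-ms sampling, 24 channels, assumed 300-ms record
samples, nbytes = record_size(0.000125, 0.3, 24)
print(samples, nbytes)
```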

Field data acquisition parameters are highly site specific. Up to a full day of testing with a knowledgeable
consultant experienced in shallow seismic work may be required. The objective of these tests is identifiable,
demonstrable reflections on the raw records. If arrivals consistent with reflections from the zone of interest
cannot be seen, the chances that processing will recover useful data are slim. One useful testing technique is the
walkaway noise test. A closely spaced set of receivers is set out with a geophone interval equal to 1% or 2% of
the depth of interest - often as little as 30 or 60 cm for engineering applications. By firing shots at different
distances from this spread, a well-sampled long-offset spread can be generated. Variables can include geophone
arrays, shot patterns, high and low-cut filters, and AGC windows, among others.

Because one objective is to preserve frequency content, table 1 is offered as a comparison between petroleum-
oriented and engineering-oriented data acquisition. The remarks column indicates the reason for the differences.

Synthetic-aperture radar (SAR)

Synthetic-aperture radar (SAR) is a form of radar used to create images of objects, such as landscapes; these
images can be either two- or three-dimensional representations of the object. SAR uses the motion of the
radar antenna over a targeted region to provide finer spatial resolution than is possible with conventional beam-
scanning radars. SAR is typically mounted on a moving platform such as an aircraft or spacecraft, and has its
origins in an advanced form of side-looking airborne radar (SLAR). The distance the SAR device travels over a
target in the time taken for the radar pulses to return to the antenna creates the large "synthetic" antenna aperture
(the "size" of the antenna). As a rule of thumb, the larger the aperture, the higher the image resolution will be,
regardless of whether the aperture is physical (a large antenna) or synthetic (a moving antenna); this allows
SAR to create high-resolution images with comparatively small physical antennas.
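
The rule of thumb above can be made concrete. A conventional (real-aperture) radar resolves roughly λR/D along-track, while a focused SAR achieves roughly D/2 regardless of range. The spacecraft parameters below are illustrative assumptions, not values from the text:

```python
def real_aperture_resolution(wavelength_m, range_m, antenna_len_m):
    """Along-track resolution of a conventional radar:
    roughly wavelength * range / antenna length."""
    return wavelength_m * range_m / antenna_len_m

def sar_azimuth_resolution(antenna_len_m):
    """Classic focused-SAR rule of thumb: about half the physical
    antenna length, independent of range."""
    return antenna_len_m / 2.0

# Illustrative: 5.6-cm (C-band) wavelength, 800-km orbit, 10-m antenna
wl, rng, d = 0.056, 800e3, 10.0
print(real_aperture_resolution(wl, rng, d))  # thousands of meters
print(sar_azimuth_resolution(d))             # 5 m
```

The contrast between the two numbers is the whole point of the synthetic aperture: range drops out of the azimuth-resolution formula.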

To create a SAR image, successive pulses of radio waves are transmitted to "illuminate" a target scene, and the
echo of each pulse is received and recorded. The pulses are transmitted and the echoes received using a single
beam-forming antenna, with wavelengths of a meter down to several millimeters. As the SAR device on board
the aircraft or spacecraft moves, the antenna location relative to the target changes with time. Signal processing
of the successive recorded radar echoes allows the combining of the recordings from these multiple antenna
positions. This process forms the 'synthetic antenna aperture' and allows the creation of higher-resolution
images than would otherwise be possible with a given physical antenna.

Current (2010) airborne systems provide resolutions of about 10 cm, ultra-wideband systems provide resolutions
of a few millimeters, and experimental terahertz SAR has provided sub-millimeter resolution in the laboratory.
SAR images have wide applications in remote sensing and mapping of the surfaces of both the Earth and other
planets. SAR can also be implemented as inverse SAR by observing a moving target over a substantial time with
a stationary antenna.
