DIP Notes
Pixels
A digital image consists of a two-dimensional array of individual picture elements
called pixels, arranged in columns and rows. Each pixel represents an area on the Earth's
surface. A pixel has an intensity value and a location address in the two-dimensional
image.
The intensity value represents the measured physical quantity such as the solar radiance
in a given wavelength band reflected from the ground, emitted infrared radiation or
backscattered radar intensity. This value is normally the average value for the whole
ground area covered by the pixel.
The intensity of a pixel is digitised and recorded as a digital number. Due to finite
storage capacity, a digital number is stored with a finite number of bits (binary digits). The
number of bits determines the radiometric resolution of the image. For example, an 8-bit
digital number ranges from 0 to 255 (i.e. 2^8 - 1), while an 11-bit digital number ranges from
0 to 2047 (i.e. 2^11 - 1). The detected intensity value needs to be scaled and quantized to fit within this
range of values. In a radiometrically calibrated image, the actual intensity value can be
derived from the pixel digital number.
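As a rough illustration of this scaling and quantization, here is a minimal Python/NumPy sketch; the calibration limits r_min and r_max are assumed placeholders rather than the parameters of any particular sensor.

    import numpy as np

    def to_digital_number(radiance, r_min, r_max, n_bits=8):
        # Scale a radiance value (or array) into an n-bit digital number;
        # r_min and r_max are assumed sensor calibration limits.
        levels = 2 ** n_bits                        # e.g. 256 levels for 8 bits
        scaled = (radiance - r_min) / (r_max - r_min)
        dn = np.clip(np.round(scaled * (levels - 1)), 0, levels - 1)
        return dn.astype(np.uint16)                 # wide enough for 11-bit data

    # An 8-bit DN ranges from 0 to 255; an 11-bit DN ranges from 0 to 2047
    print(to_digital_number(np.array([0.0, 0.5, 1.0]), 0.0, 1.0, n_bits=11))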
The address of a pixel is denoted by its row and column coordinates in the two-
dimensional image. There is a one-to-one correspondence between the column-row
address of a pixel and the geographical coordinates (e.g. longitude, latitude) of the
imaged location. In order to be useful, the exact geographical location of each pixel on the
ground must be derivable from its row and column indices, given the imaging geometry
and the satellite orbit parameters.
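For a simple north-up image, this correspondence can be sketched as an affine relation, as in the hypothetical Python example below (the origin coordinates and pixel size are made-up values; real sensors require the full imaging geometry and orbit model).

    def pixel_to_geo(row, col, origin_lon, origin_lat, pixel_size_deg):
        # Map a (row, col) pixel address to (longitude, latitude) assuming a
        # north-up image; pixel (0, 0) is the top-left corner.
        lon = origin_lon + col * pixel_size_deg
        lat = origin_lat - row * pixel_size_deg    # row number increases southward
        return lon, lat

    print(pixel_to_geo(100, 250, origin_lon=103.6, origin_lat=1.5,
                       pixel_size_deg=0.0002))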
"A Push-Broom" Scanner: This
type of imaging system is
commonly used in optical remote
sensing satellites such as SPOT.
The imaging system has a linear
detector array (usually of the CCD
type) consisting of a number of
detector elements (6000 elements
in SPOT HRV). Each detector
element projects an
"instantaneous field of view
(IFOV)" on the ground. The signal
recorded by a detector element is
proportional to the total radiation
collected within its IFOV. At any
instant, a row of pixels are formed.
As the detector array flies along its
track, the row of pixels sweeps
along to generate a two-
dimensional image.
Multilayer Image
Several types of measurement may be made of the ground area covered by a single
pixel. Each type of measurement forms an image which carries some specific information
about the area. By "stacking" these images from the same area together, a multilayer
image is formed. Each component image is a layer in the multilayer image.
Multilayer images can also be formed by combining images obtained from different
sensors, and other subsidiary data. For example, a multilayer image may consist of three
layers from a SPOT multispectral image, a layer of ERS synthetic aperture radar image,
and perhaps a layer consisting of the digital elevation map of the area being studied.
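In array terms, forming a multilayer image amounts to stacking co-registered 2-D layers along a band axis. A minimal NumPy sketch (all layer contents here are dummy placeholders):

    import numpy as np

    # Hypothetical co-registered layers covering the same ground area
    spot_xs = np.zeros((3, 512, 512))      # three SPOT multispectral bands
    sar     = np.zeros((512, 512))         # an ERS SAR intensity layer
    dem     = np.zeros((512, 512))         # a digital elevation layer

    # Stack the layers along a new band axis to form the multilayer image
    multilayer = np.concatenate([spot_xs, sar[None], dem[None]], axis=0)
    print(multilayer.shape)                # (5, 512, 512): five layers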
Multispectral Image
A multispectral image consists of a few image layers, each representing an image
acquired in a particular wavelength band. For example, the SPOT HRV sensor operating
in the multispectral mode detects radiation in three wavelength bands: the green (500 -
590 nm), red (610 - 680 nm) and near infrared (790 - 890 nm) bands. A single
SPOT multispectral scene consists of three intensity images in the three wavelength
bands. In this case, each pixel of the scene has three intensity values corresponding to
the three bands.
A multispectral IKONOS image consists of four bands: blue, green, red and near
infrared, while a Landsat TM multispectral image consists of seven bands: the blue, green,
red and near-IR bands, two SWIR bands, and a thermal IR band.
Superspectral Image
The more recent satellite sensors are capable of acquiring images in many more
wavelength bands. For example, the MODIS sensor on board NASA's Terra satellite
acquires 36 spectral bands, covering wavelength regions from the visible and
near infrared through the short-wave infrared to the thermal infrared. The bands have narrower
bandwidths, enabling the finer spectral characteristics of the targets to be captured by the
sensor. The term "superspectral" has been coined to describe such sensors.
Hyperspectral Image
A hyperspectral image consists of about a hundred or more contiguous spectral bands.
The characteristic spectrum of the target pixel is acquired in a hyperspectral image. The
precise spectral information contained in a hyperspectral image enables better
characterisation and identification of targets. Hyperspectral images have potential
applications in such fields as precision agriculture (e.g. monitoring the types, health,
moisture status and maturity of crops) and coastal management (e.g. monitoring of
phytoplankton, pollution and bathymetry changes).
Currently, hyperspectral imagery is not commercially available from satellites. There are
experimental satellite sensors that acquire hyperspectral imagery for scientific
investigation (e.g. NASA's Hyperion sensor on board the EO-1 satellite, and the CHRIS
sensor on board ESA's PROBA satellite).
An illustration of a hyperspectral image cube. The hyperspectral image data usually consist of over a hundred contiguous spectral bands, forming a three-dimensional (two spatial dimensions and one spectral dimension) image cube. Each pixel is associated with a complete spectrum of the imaged area. The high spectral resolution of hyperspectral images enables better identification of the land covers.
Spatial Resolution
Spatial resolution refers to the size of the smallest object that can be resolved on the
ground. In a digital image, the resolution is limited by the pixel size, i.e. the smallest
resolvable object cannot be smaller than the pixel size. The intrinsic resolution of an
imaging system is determined primarily by the instantaneous field of view (IFOV) of the
sensor, which is a measure of the ground area viewed by a single detector element at a
given instant in time. This intrinsic resolution can, however, be degraded by other
factors which introduce blurring of the image, such as improper focusing, atmospheric
scattering and target motion. The pixel size is determined by the sampling distance.
A "High Resolution" image refers to one with a small resolution size, in which fine
details can be seen. A "Low Resolution" image is one with a large resolution size, in
which only coarse features can be observed.
A low resolution MODIS scene with a wide coverage, received by CRISP's ground station on 3 March 2001. The intrinsic resolution of the image is approximately 1 km, but the image shown here has been resampled to a resolution of about 4 km. The coverage is more than 1000 km from east to west. A large part of Indochina, Peninsular Malaysia, Singapore and Sumatra can be seen in the image.
The following images illustrate the effect of pixel size on the visual appearance of an area.
The first image is a SPOT image of 10 m pixel size derived by merging a
SPOT panchromatic image with a SPOT multispectral image. The subsequent images
show the effects of digitizing the same area with larger pixel sizes.
Pixel size = 10 m: image width = 160 pixels, height = 160 pixels
Pixel size = 20 m: image width = 80 pixels, height = 80 pixels
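The effect of a larger pixel size can be simulated by averaging blocks of pixels, as in this illustrative NumPy sketch (the input array is a random stand-in for the 10 m image):

    import numpy as np

    def coarsen(image, factor):
        # Simulate a larger pixel size by averaging non-overlapping
        # factor x factor blocks (e.g. 10 m pixels -> 20 m with factor=2).
        h, w = image.shape
        h, w = h - h % factor, w - w % factor          # trim to a multiple
        blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

    img_10m = np.random.rand(160, 160)     # stand-in for the 160 x 160 pixel image
    img_20m = coarsen(img_10m, 2)          # 80 x 80 pixels at 20 m
    print(img_10m.shape, img_20m.shape)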
Radiometric Resolution
Radiometric Resolution refers to the smallest change in intensity level that can be
detected by the sensing system. The intrinsic radiometric resolution of a sensing system
depends on the signal to noise ratio of the detector. In a digital image, the radiometric
resolution is limited by the number of discrete quantization levels used to digitize the
continuous intensity value.
The following images illustrate the effects of the number of quantization levels on the
digital image. The first image is a SPOT panchromatic image quantized at 8 bits (i.e. 256
levels) per pixel. The subsequent images show the effects of degrading the radiometric
resolution by using fewer quantization levels.
Digitization using a smaller number of quantization levels does not greatly affect the
visual quality of the image. Even 4-bit quantization (16 levels) seems acceptable in the
examples shown. However, if the image is to be subjected to numerical analysis, the
accuracy of the analysis will be compromised if too few quantization levels are used.
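Degrading the radiometric resolution of an 8-bit image can be mimicked by discarding the least significant bits, as in this small sketch (random data stand in for a real image):

    import numpy as np

    def requantize(image, n_bits):
        # Keep only the top n_bits of an 8-bit image, i.e. reduce the
        # number of quantization levels from 256 to 2**n_bits.
        shift = 8 - n_bits
        return (image >> shift) << shift

    img8 = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
    img4 = requantize(img8, 4)             # 4-bit quantization: 16 grey levels
    print(np.unique(img4).size)            # at most 16 distinct values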
Data Volume
The volume of the digital data can potentially be large for multispectral data, as a given
area is covered in many different wavelength bands. For example, a 3-
band multispectral SPOT image covers an area of about 60 x 60 km² on the ground with
a pixel separation of 20 m, so there are about 3000 x 3000 pixels per image. Each pixel
intensity in each band is coded using an 8-bit (i.e. 1-byte) digital number, giving a total of
about 27 million bytes per image.
In comparison, panchromatic data have only one band. Thus, panchromatic systems
are normally designed to give a higher spatial resolution than the multispectral systems.
For example, a SPOT panchromatic scene has the same coverage of about 60 x 60
km² but the pixel size is 10 m, giving about 6000 x 6000 pixels and a total of about 36
million bytes per image. If a multispectral SPOT scene is also digitized at a 10 m pixel size,
the data volume will be 108 million bytes.
For very high spatial resolution imagery, such as the one acquired by the IKONOS
satellite, the data volume is even more significant. For example, an IKONOS 4-band
multispectral image at 4-m pixel size covering an area of 10 km by 10 km, digitized at 11
bits (stored at 16 bits), has a data volume of 4 x 2500 x 2500 x 2 bytes, or 50 million bytes
per image. A 1-m resolution panchromatic image covering the same area would have a
data volume of 200 million bytes per image.
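These data volumes follow directly from bands x rows x columns x bytes per pixel; the short script below simply re-derives the figures quoted above:

    # Data volume = bands x rows x columns x bytes per pixel
    volumes = {
        "SPOT XS (3 bands, 20 m)":    3 * 3000 * 3000 * 1,
        "SPOT PAN (1 band, 10 m)":    1 * 6000 * 6000 * 1,
        "IKONOS XS (4 bands, 4 m)":   4 * 2500 * 2500 * 2,
        "IKONOS PAN (1 band, 1 m)":   1 * 10000 * 10000 * 2,
    }
    for name, vol in volumes.items():
        print(f"{name}: {vol / 1e6:.0f} million bytes per scene")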
The images taken by a remote sensing satellite are transmitted to Earth through a
telecommunication link. The bandwidth of the telecommunication channel sets a limit on
the data volume for a scene taken by the imaging system. Ideally, it is desirable to have a
high spatial resolution image with many spectral bands covering a wide area. In reality,
depending on the intended application, spatial resolution may have to be compromised to
accommodate a larger number of spectral bands or a wider area coverage; conversely, a
small number of spectral bands or a smaller area of coverage may be accepted to allow
high spatial resolution imaging.
Optical remote sensing makes use of visible, near infrared and short-wave infrared
sensors to form images of the earth's surface by detecting the solar radiation reflected
from targets on the ground. Different materials reflect and absorb differently at different
wavelengths. Thus, the targets can be differentiated by their spectral reflectance
signatures in the remotely sensed images. Optical remote sensing systems are classified
into the following types, depending on the number of spectral bands used in the imaging
process.
Panchromatic imaging system: The sensor is a single channel detector sensitive
to radiation within a broad wavelength range. If the wavelength range coincides with
the visible range, then the resulting image resembles a "black-and-white"
photograph taken from space. The physical quantity being measured is the
apparent brightness of the targets. The spectral information or "colour" of the
targets is lost. Examples of panchromatic imaging systems are:
o IKONOS PAN
o SPOT HRV-PAN
IKONOS, USA
Launched on September 24, 1999, IKONOS is the world's first commercial satellite
providing very high resolution (up to 1 m) imagery of the earth. The IKONOS
satellite is operated by Space Imaging Inc. of Denver, Colorado, USA. IKONOS
simultaneously collects one-meter resolution black-and-white (panchromatic)
images and four-meter resolution color (multispectral) images. The multispectral
images consist of four bands in the blue, green, red and near-infrared wavelength
regions. The multispectral images can be merged with panchromatic images of the
same locations to produce "pan-sharpened color" images of 1-m resolution. The
satellite camera can distinguish objects on the Earth's surface as small as one
meter square, but it cannot see individual people. The IKONOS satellite is
equipped with state-of-the-art star trackers and on-board GPS, enabling it to
acquire imagery with very high positional accuracy. The IKONOS imagery is
suitable for applications requiring a high level of detail and accuracy, such as
mapping, agricultural monitoring, resource management and urban planning.
IKONOS Orbit
Type: Sun-Synchronous
Altitude: 681 km
Period: 98 min
Image Modes:
Single scene: 13 km x 13 km
Strips: 11 km x 100 km up to 11 km x 1000 km
Image mosaics: up to 12,000 sq. km
MODIS Sensor Characteristics
Orbit: 705 km, 10:30 a.m. descending node (Terra) or 1:30 p.m. ascending node (Aqua), sun-synchronous, near-polar, circular
Scan Rate: 20.3 rpm, cross track
Swath: 2330 km (cross track) by 10 km (along track at nadir)
Telescope: 17.78 cm diam. off-axis, afocal (collimated), with intermediate field stop
Size: 1.0 x 1.6 x 1.0 m
Weight: 228.7 kg
Power: 162.5 W (single orbit average)
Data Rate: 10.6 Mbps (peak daytime); 6.1 Mbps (orbital average)
Quantization: 12 bits
Spatial Resolution: 250 m (bands 1-2); 500 m (bands 3-7); 1000 m (bands 8-36)
Design Life: 6 years
MERIS (Medium Resolution Imaging Spectrometer), on board ESA's ENVISAT satellite
MERIS is designed to acquire 15 spectral bands in the 390 - 1040 nm range. One of the
most outstanding features of MERIS is the programmability of its spectral bands in their
width and position, in accordance with the priorities of the mission.
The nominal band set has been derived for oceanographic and interdisciplinary applications.
The exact position of the MERIS spectral bands will be determined following a detailed
spectral characterization of the instrument. The spectral range is restricted to the visible
near-infrared part of the spectrum between 390 and 1040 nm. The spectral bandwidth is
variable between 1.25 and 30 nm depending on the width of a spectral feature to be
observed and the amount of energy needed in a band to perform an adequate
observation. Over open ocean an average bandwidth of 10 nm is required for the bands
located in the visible part of the spectrum. Driven by the need to resolve spectral features
of the Oxygen absorption band occurring at 760 nm a minimum spectral bandwidth of 2.5
nm is required.
Hyperspectral Imaging Systems: A hyperspectral imaging system is also known
as an "imaging spectrometer". It acquires images in about a hundred or more
contiguous spectral bands. The precise spectral information contained in a
hyperspectral image enables better characterisation and identification of targets.
Hyperspectral images have potential applications in such fields as precision
agriculture (e.g. monitoring the types, health, moisture status and maturity of
crops) and coastal management (e.g. monitoring of phytoplankton, pollution and
bathymetry changes). An example of a hyperspectral system is:
Hyperion on the EO-1 satellite
Altitude: 705 km
Period: 99 min
EO-1 Sensors
Hyperion: The Hyperion is a high resolution hyperspectral imaging instrument.
Hyperion images the earth's surface in 220 contiguous spectral bands with high
radiometric accuracy, covering the region from 400 nm to 2.5 µm at a ground
resolution of 30 m. Through this large number of spectral bands, complex land
ecosystems can be imaged and accurately classified.
The Hyperion is a "push broom" instrument. It has a single telescope and two
spectrometers, one visible/near infrared (VNIR) spectrometer (with CCD detector
array) and one short-wave infrared (SWIR) spectrometer (HgCdTe detector array).
Hyperion Sensor Characteristics
Spatial Resolution: 30 m
Digitization: 12 bits
Signal-to-Noise Ratio (SNR): 161 (550 nm); 147 (700 nm); 110 (1125 nm); 40 (2125 nm)
ALI (Advanced Land Imager): The ALI instrument features ten-meter ground
resolution in the panchromatic (black-and-white) band and 30-meter ground
resolution in its multispectral bands (0.4-2.4 microns), covering seven of the eight
bands of the current Landsat.
AC (Atmospheric Corrector): The AC instrument provides the first space-based test
of an atmospheric corrector for increasing the accuracy of surface reflectance
estimates. The AC enables more precise predictive models to be constructed for
remote sensing applications, and will provide significant improvements in generating
accurate reflectance measurements for land imaging missions. It covers the 0.890 -
1.600 micron wavelength IR band.
SPOT (Satellite Pour l' Observation de la Terre), France
The SPOT program consists of a series of optical remote sensing satellites with the
primary mission of obtaining Earth imagery for land use, agriculture, forestry, geology,
cartography, regional planning, water resources and GIS applications. It is committed to
commercial remote sensing on an international scale and has established a global
network of control centres, receiving stations, processing centres and data distributors.
The SPOT satellites are operated by the French Space Agency, Centre National d'Etudes
Spatiales (CNES). Worldwide commercial operations are anchored by SPOT IMAGE in
France with the following subsidiaries: SPOT Image Corp. in the US, SPOT Imaging
Services in Australia and SPOT Asia in Singapore.
SPOT 1 was launched on 22 February 1986, and withdrawn from active service on 31
December 1990. SPOT 2 was launched on 22 January 1990 and is still operational. SPOT
3 was launched on 26 September 1993. An incident occurred on SPOT 3 on 14 November
1996; after three years in orbit, the satellite stopped functioning. SPOT 4 was launched on
24 March 1998. Engineering work for SPOT 5 has begun so that the satellite can be
launched in 2002 to ensure service continuity. To meet the increasing demand for SPOT
imagery, notably during the northern hemisphere growing season, SPOT 1 was
reactivated in 1997 for routine operation. Currently, three SPOT satellites (SPOT 1, 2 and
4) are operational.
The SPOT system provides global coverage between 87 degrees north latitude and 87
degrees south latitude.
SPOT Orbit
Type: Sun-Synchronous
Altitude: 832 km

LANDSAT, USA
LANDSAT Orbit
Altitude: 705 km
Period: 99 min
Sensors
MSS (Multi-Spectral Scanner), on LANDSAT-1 to 5. Being one of the older
generation sensors, routine data acquisition for MSS was terminated in late 1992.
The resolution of the MSS sensor was approximately 80 m with radiometric
coverage in four spectral bands from the visible green to the near-infrared (IR)
wavelengths. Only the MSS sensor on Landsat 3 had a fifth band in the thermal-IR.
LANDSAT 4, 5 MSS Sensor Characteristics
Band          Wavelength (µm)   Resolution (m)
1 (Green)     0.5 - 0.6         82
2 (Red)       0.6 - 0.7         82
3 (Near IR)   0.7 - 0.8         82
4 (Near IR)   0.8 - 1.1         82
TM (Thematic Mapper), first operational on LANDSAT-4. TM sensors primarily
detect reflected radiation from the Earth surface in the visible and near-infrared (IR)
wavelengths, but the TM sensor provides more radiometric information than the
MSS sensor. The wavelength range for the TM sensor is from the visible (blue),
through the mid-IR, into the thermal-IR portion of the electromagnetic spectrum.
Sixteen detectors for the visible and mid-IR wavelength bands in the TM sensor
provide 16 scan lines on each active scan. Four detectors for the thermal-IR band
provide four scan lines on each active scan. The TM sensor has a spatial resolution
of 30 m for the visible, near-IR, and mid-IR wavelengths and a spatial resolution of
120 m for the thermal-IR band.
ETM+ (Enhanced Thematic Mapper Plus), is carried on board Landsat 7. The
ETM+ instrument is an eight-band multispectral scanning radiometer capable of
providing high-resolution image information of the Earth's surface. Its spectral
bands are similar to those of TM, except that the thermal band (band 6) has an
improved resolution of 60 m (versus 120 m in TM). There is also an additional
panchromatic band at 15 m resolution.
LANDSAT TM, ETM+ Sensor Characteristics
Band             Wavelength (µm)   Resolution (m)
1 (Blue)         0.45 - 0.52       30
2 (Green)        0.52 - 0.60       30
3 (Red)          0.63 - 0.69       30
4 (Near IR)      0.76 - 0.90       30
5 (SWIR)         1.55 - 1.75       30
6 (Thermal IR)   10.40 - 12.50     120 (TM), 60 (ETM+)
7 (SWIR)         2.08 - 2.35       30
Panchromatic     0.5 - 0.9         15 (ETM+ only)
Solar Irradiation
Optical remote sensing depends on the sun as the sole source of illumination. The solar
irradiation spectrum above the atmosphere can be modeled by a black body radiation
spectrum having a source temperature of 5900 K, with a peak irradiation located at about
500 nm wavelength. Physical measurements of the solar irradiance have also been
performed using ground-based and spaceborne sensors.
After passing through the atmosphere, the solar irradiation spectrum at the ground is
modulated by the atmospheric transmission windows. Significant energy remains only
within the wavelength range from about 0.25 to 3 µm.
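The 5900 K black-body model can be evaluated directly from Planck's law; the sketch below reproduces the roughly 500 nm irradiation peak quoted above (constants in SI units):

    import numpy as np

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

    def planck(wavelength_m, T=5900.0):
        # Spectral radiance B(lambda, T) of a black body, W m^-3 sr^-1
        a = 2 * h * c**2 / wavelength_m**5
        return a / (np.exp(h * c / (wavelength_m * k * T)) - 1)

    wl = np.linspace(0.25e-6, 3e-6, 2000)      # the 0.25 - 3 um useful range
    peak = wl[np.argmax(planck(wl))]
    print(f"peak near {peak * 1e9:.0f} nm")    # Wien's law gives ~491 nm at 5900 K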
Solar Irradiation Spectra above the atmosphere and at sea-level.
Typical Reflectance Spectrum of Vegetation. The labelled arrows indicate the common
wavelength bands used in optical remote sensing of vegetation: A: blue band; B: green
band; C: red band; D: near IR band; E: short-wave IR band.
Interpretation of Optical Images
Four main types of information contained in an optical image are often utilized for image
interpretation:
Radiometric Information (i.e. brightness, intensity, tone),
Spectral Information (i.e. colour, hue),
Textural Information,
Geometric and Contextual Information.
They are illustrated in the following examples.
Panchromatic Images
A panchromatic image consists of only one band. It is usually displayed as a grey scale
image, i.e. the displayed brightness of a particular pixel is proportional to its digital
number, which is in turn related to the intensity of solar radiation reflected by the targets
in the pixel and detected by the detector. Thus, a panchromatic image may be interpreted
in much the same way as a black-and-white aerial photograph of the area. Radiometric
Information is the main information type utilized in the interpretation.
A panchromatic image extracted from a SPOT panchromatic scene at a ground resolution of 10 m. The ground coverage is about 6.5 km (width) by 5.5 km (height). The urban area at the bottom left and a clearing near the top of the image have high reflected intensity, while the vegetated areas on the right part of the image are generally dark. Roads and blocks of buildings in the urban area are visible. A river flowing through the vegetated area, cutting across the top right corner of the image, can be seen. The river appears bright due to sediments while the sea at the bottom edge of the image appears dark.
Multispectral Images
A multispectral image consists of several bands of data. For visual display, each band of
the image may be displayed one at a time as a grey scale image, or three bands may be
combined at a time to form a colour composite image. Interpretation of a multispectral
colour composite image requires knowledge of the spectral reflectance
signatures of the targets in the scene. In this case, the spectral information content of the
image is utilized in the interpretation.
The following three images show the three bands of a multispectral image extracted from
a SPOT multispectral scene at a ground resolution of 20 m. The area covered is the same
as that shown in the above panchromatic image. Note that both the XS1 (green) and XS2
(red) bands look almost identical to the panchromatic image shown above. In contrast, the
vegetated areas now appear bright in the XS3 (near infrared) band due to high
reflectance of leaves in the near infrared wavelength region. Several shades of grey can
be identified for the vegetated areas, corresponding to different types of vegetation. Water
masses (both the river and the sea) appear dark in the XS3 (near IR) band.
Vegetation Indices
Different bands of a multispectral image may be combined to accentuate the vegetated
areas. One such combination is the ratio of the near-infrared band to the red band. This
ratio is known as the Ratio Vegetation Index (RVI):
RVI = NIR / Red
Since vegetation has high NIR reflectance but low red reflectance, vegetated areas will
have higher RVI values compared to non-vegetated areas. Another commonly used
vegetation index is the Normalised Difference Vegetation Index (NDVI), computed as:
NDVI = (NIR - Red) / (NIR + Red)
Normalised Difference Vegetation Index (NDVI) derived from the above SPOT image
In the NDVI map shown above, the bright areas are vegetated while the non-vegetated
areas (buildings, clearings, river, sea) are generally dark. Note that the trees lining the
roads are clearly visible as grey linear features against the dark background.
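Computationally, both indices are simple per-pixel band arithmetic. A minimal NumPy sketch (random arrays stand in for the XS3 and XS2 bands; a small epsilon guards against division by zero):

    import numpy as np

    def vegetation_indices(nir, red, eps=1e-9):
        # Per-pixel RVI and NDVI from co-registered NIR and red bands
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        rvi  = nir / (red + eps)
        ndvi = (nir - red) / (nir + red + eps)
        return rvi, ndvi

    xs3 = np.random.randint(0, 256, (200, 200))   # stand-in near IR band
    xs2 = np.random.randint(0, 256, (200, 200))   # stand-in red band
    rvi, ndvi = vegetation_indices(xs3, xs2)
    print(ndvi.min(), ndvi.max())                 # NDVI always lies in [-1, 1]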
The NDVI band may also be combined with other bands of the multispectral image to form
a colour composite image which helps to discriminate different types of vegetation. One
such example is shown below. In this image, the display colour assignment is:
R = XS3 (Near IR band)
G = (XS3 - XS2)/(XS3 + XS2) (NDVI band)
B = XS1 (green band)
NDVI Colour Composite of the SPOT image: Red: XS3; Green: NDVI; Blue: XS1.
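The composite itself is just three channels stacked into an RGB image; below is a sketch of the R = XS3, G = NDVI, B = XS1 assignment (the linear stretch of each channel to 0-255 is one simple display choice among several):

    import numpy as np

    def ndvi_composite(xs3, xs2, xs1):
        # R = XS3, G = NDVI, B = XS1; each channel stretched to 0-255
        xs3 = xs3.astype(np.float64)
        xs2 = xs2.astype(np.float64)
        ndvi = (xs3 - xs2) / (xs3 + xs2 + 1e-9)
        def stretch(band):
            b = band.astype(np.float64)
            return ((b - b.min()) / (b.max() - b.min() + 1e-9) * 255).astype(np.uint8)
        return np.dstack([stretch(xs3), stretch(ndvi), stretch(xs1)])

    # ndvi_composite(xs3, xs2, xs1) returns a (rows, cols, 3) RGB array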
Textural Information
Texture is an important aid in visual image interpretation, especially for high spatial
resolution imagery. An example is shown below. It is also possible to characterize the
textural features numerically, and algorithms for computer-aided automatic discrimination
of different textures in an image are available.
Synthetic Aperture Radar (SAR)
In real aperture radar imaging, the ground resolution is limited by the size of the
microwave beam sent out from the antenna. Finer details on the ground can be resolved
by using a narrower beam. The beam width is inversely proportional to the size of the
antenna, i.e. the longer the antenna, the narrower the beam.
It is not feasible for a spacecraft to carry the very long antenna required for high
resolution imaging of the earth's surface. To overcome this limitation, SAR capitalises on
the motion of the spacecraft to emulate a large antenna (about 4 km for the ERS SAR)
from the small antenna (10 m on the ERS satellite) it actually carries on board.
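The quoted figures are consistent with the standard beam-width relations, as this back-of-envelope sketch shows (the ERS wavelength and slant range are approximate assumed values):

    # Azimuth resolution, real vs. synthetic aperture (approximate ERS values)
    wavelength  = 0.0566      # m, C band
    antenna     = 10.0        # m, physical antenna length
    slant_range = 850e3       # m, assumed slant range to the target

    real_footprint = wavelength * slant_range / antenna   # ~ beam width on ground
    sar_resolution = antenna / 2                          # classic SAR limit

    print(f"real-aperture footprint  ~ {real_footprint / 1e3:.1f} km")  # ~4.8 km
    print(f"synthetic-aperture limit ~ {sar_resolution:.0f} m")         # ~5 m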
Imaging geometry for a typical strip-mapping synthetic aperture radar imaging system.
The antenna's footprint sweeps out a strip parallel to the direction of the satellite's ground
track.
Interaction between Microwaves and Earth's Surface
When microwaves strike a surface, the proportion of energy scattered back to the sensor
depends on many factors:
Physical factors such as the dielectric constant of the surface materials, which also
depends strongly on the moisture content;
Geometric factors such as surface roughness, slopes, and orientation of the objects
relative to the radar beam direction;
The types of landcover (soil, vegetation or man-made objects);
The microwave frequency, polarisation and incident angle.
All-Weather Imaging
Due to the cloud penetrating property of microwaves, SAR is able to acquire "cloud-
free" images in all weather. This is especially useful in the tropical regions, which are
frequently under cloud cover throughout the year. Being an active remote sensing
device, SAR is also capable of night-time operation.
Both the ERS and RADARSAT SARs use the C band microwave while the JERS SAR
uses the L band. The C band is useful for imaging ocean and ice features, but it
also finds numerous land applications. The L band has a longer wavelength and is more
penetrating than the C band. Hence, it is more useful in forest and vegetation studies, as
it is able to penetrate deeper into the vegetation canopy.
A radar pulse may be transmitted with either a horizontal (H) or a vertical (V)
polarisation. After interacting with the earth's surface, the polarisation state of the
microwave may be altered, so the backscattered energy usually contains a mixture of
the two polarisation states. The
SAR sensor may be designed to detect the H or the V component of the backscattered
radiation. Hence, there are four possible polarisation configurations for a SAR system:
"HH", "VV", "HV" and "VH" depending on the polarisation states of the transmitted and
received microwave signals. For example, the SAR onboard the ERS satellite transmits V
polarised and receives only the V polarised microwave pulses, so it is a "VV" polarised
SAR. In comparison, the SAR onboard the RADARSAT satellite is a "HH" polarised SAR.
Incident Angles
The incident angle refers to the angle between the incident radar beam and the direction
perpendicular to the ground surface. The interaction between microwaves and the surface
depends on the incident angle of the radar pulse on the surface. ERS SAR has a constant
incident angle of 23° at the scene centre. RADARSAT is the first spaceborne SAR that is
equipped with multiple beam modes enabling microwave imaging at different incident
angles and resolutions.
The incident angle of 23° for the ERS SAR is optimal for detecting ocean waves and other
ocean surface features. A larger incident angle may be more suitable for other
applications. For example, a large incident angle will increase the contrast between the
forested and clear cut areas.
Acquisition of SAR images of an area using two different incident angles will also enable
the construction of a stereo image for the area.
Interpreting SAR Images
Synthetic Aperture Radar (SAR) images can be obtained from satellites such
as ERS, JERS and RADARSAT. Since radar interacts with ground features in ways
different from optical radiation, special care has to be taken when interpreting radar
images.
An example of an ERS SAR image is shown below together with
a SPOT multispectral natural colour composite image of the same area for
comparison.
ERS SAR image (pixel size=12.5 m)
The urban area on the left appears bright in the SAR image while the vegetated areas on
the right have intermediate tone. The clearings and water (sea and river) appear dark in
the image. These features will be explained in the following sections. The SAR image was
acquired in September 1995 while the SPOT image was acquired in February 1994.
Additional clearings can be seen in the SAR image.
Speckle Noise
Unlike optical images, radar images are formed by coherent interaction of the transmitted
microwave with the targets. Hence, they suffer from the effects of speckle noise, which
arises from the coherent summation of the signals scattered from ground scatterers
distributed randomly within each pixel. A radar image appears noisier than an optical
image. The speckle noise is sometimes suppressed by applying a speckle removal
filter to the digital image before display and further analysis.
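As one simple illustration of speckle filtering, the sketch below applies a median filter from SciPy to a synthetic speckled image; operational processing often uses dedicated filters (e.g. Lee or Frost) instead:

    import numpy as np
    from scipy.ndimage import median_filter

    # Toy speckled image: single-look intensity speckle is roughly
    # exponentially distributed (a gamma distribution with shape 1)
    sar = np.random.gamma(shape=1.0, scale=1.0, size=(256, 256))

    despeckled = median_filter(sar, size=5)   # 5 x 5 neighbourhood median
    print(sar.std(), despeckled.std())        # the fluctuation is reduced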
This image is extracted from the above SAR image, showing the clearing areas between the river and the coastline. The image appears "grainy" due to the presence of speckle.
Calm sea surfaces appear dark in SAR images. However, rough sea surfaces may appear
bright, especially when the incidence angle is small. The presence of oil films smooths
out the sea surface. Under certain conditions, when the sea surface is sufficiently rough,
oil films can be detected as dark patches against a bright background.
Trees and other vegetation are usually moderately rough on the wavelength scale.
Hence, they appear as moderately bright features in the image. The tropical rain forests
have a characteristic backscatter coefficient of between -6 and -7 dB, which is spatially
homogeneous and remains stable in time. For this reason, the tropical rainforests have
been used as calibrating targets in performing radiometric calibration of SAR images.
Very bright targets may appear in the image due to the corner-reflector or double-
bounce effect, where the radar pulse bounces off the horizontal ground (or the sea)
towards the target, and is then reflected from one vertical surface of the target back to the
sensor. Examples of such targets are ships on the sea, high-rise buildings and regular
metallic objects such as cargo containers. Built-up areas and many man-made features
usually appear as bright patches in a radar image due to the corner reflector effect.
The brightness of areas covered by bare soil may vary from very dark to very bright
depending on the soil's roughness and moisture content. Typically, rough soil appears
bright in the image. For similar soil roughness, the surface with a higher moisture content
will appear brighter.
This image is an example of a multitemporal colour composite SAR image. The area
shown is part of the rice growing areas in the Mekong River delta, Vietnam, near the
towns of Soc Trang and Phung Hiep. Three SAR images acquired by the ERS satellite
on 5 May, 9 June and 14 July 1996 are assigned to the red, green and blue
channels respectively for display. The colourful areas are the rice growing areas, where
the landcovers change rapidly during the rice season. The greyish linear features are the
more permanent trees lining the canals. The grey patch near the bottom of the image is
wetland forest. The two towns appear as bright white spots in this image. An area of
depression flooded with water during this season is visible as a dark region.