DIP Notes

The document discusses the differences between analog and digital images, focusing on the structure and characteristics of digital images, including pixels, radiometric and spatial resolution, and data volume. It explains various types of images such as multilayer, multispectral, superspectral, and hyperspectral images, as well as the implications of pixel size and quantization levels on image quality. Additionally, it highlights optical remote sensing and provides an overview of the IKONOS satellite, which captures high-resolution imagery of the Earth's surface.

Uploaded by

crackone751
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
25 views41 pages

DIP Notes

The document discusses the differences between analog and digital images, focusing on the structure and characteristics of digital images, including pixels, radiometric and spatial resolution, and data volume. It explains various types of images such as multilayer, multispectral, superspectral, and hyperspectral images, as well as the implications of pixel size and quantization levels on image quality. Additionally, it highlights optical remote sensing and provides an overview of the IKONOS satellite, which captures high-resolution imagery of the Earth's surface.

Uploaded by

crackone751
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 41

Digital Image

Analog and Digital Images


An image is a two-dimensional representation of objects in a real scene. Remote sensing images are representations of parts of the Earth's surface as seen from space. The images may be analog or digital. Aerial photographs are examples of analog images, while satellite images acquired using electronic sensors are examples of digital images.

A digital image is a two-dimensional array of pixels.


Each pixel has an intensity value (represented by a digital number) and a location address (referenced by its row and column numbers).

Pixels
A digital image comprises a two-dimensional array of individual picture elements called pixels, arranged in columns and rows. Each pixel represents an area on the Earth's surface. A pixel has an intensity value and a location address in the two-dimensional image.
The intensity value represents the measured physical quantity, such as the solar radiance in a given wavelength band reflected from the ground, emitted infrared radiation, or backscattered radar intensity. This value is normally the average over the whole ground area covered by the pixel.
The intensity of a pixel is digitised and recorded as a digital number. Owing to finite storage capacity, a digital number is stored with a finite number of bits (binary digits). The number of bits determines the radiometric resolution of the image. For example, an 8-bit digital number ranges from 0 to 255 (i.e. 2^8 - 1), while an 11-bit digital number ranges from 0 to 2047 (i.e. 2^11 - 1). The detected intensity value needs to be scaled and quantized to fit within this range of values. In a radiometrically calibrated image, the actual intensity value can be derived from the pixel digital number.
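The relation between bit depth and the digital number (DN) range can be sketched as below. This is a minimal illustration assuming a simple linear scaling from radiance to DN; real sensors use sensor-specific calibration coefficients.

```python
# Sketch: relation between bit depth and digital number (DN) range, and a
# simple linear scaling of a measured radiance into that range (assumed;
# real sensors use calibrated gain/offset coefficients).

def dn_range(bits):
    """Maximum DN for a given number of bits: 2**bits - 1."""
    return 2 ** bits - 1

def quantize(radiance, max_radiance, bits):
    """Linearly scale a radiance in [0, max_radiance] to an integer DN."""
    dn = round(radiance / max_radiance * dn_range(bits))
    return max(0, min(dn_range(bits), dn))

print(dn_range(8))              # 255  (8-bit DN range 0..255)
print(dn_range(11))             # 2047 (11-bit DN range 0..2047)
print(quantize(0.5, 1.0, 8))    # 128
```

Inverting the same linear scaling is what "deriving the actual intensity from the DN" amounts to in a radiometrically calibrated image.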
The address of a pixel is denoted by its row and column coordinates in the two-dimensional image. There is a one-to-one correspondence between the column-row address of a pixel and the geographical coordinates (e.g. longitude, latitude) of the imaged location. In order to be useful, the exact geographical location of each pixel on the ground must be derivable from its row and column indices, given the imaging geometry and the satellite orbit parameters.
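The pixel-to-geography correspondence can be sketched with a simplified north-up affine mapping. This is an illustration only: real geolocation uses the full imaging geometry and orbit parameters, and the coordinates and pixel size here are hypothetical.

```python
# Sketch: a simplified north-up affine mapping between pixel (row, col) and
# geographic coordinates. Real sensor geolocation uses the full imaging
# geometry and orbit parameters; the numbers here are illustrative only.

def pixel_to_lonlat(row, col, lon0, lat0, pixel_deg):
    """Top-left pixel centre at (lon0, lat0); latitude decreases with row."""
    lon = lon0 + col * pixel_deg
    lat = lat0 - row * pixel_deg
    return lon, lat

def lonlat_to_pixel(lon, lat, lon0, lat0, pixel_deg):
    """Inverse mapping back to the nearest (row, col) address."""
    col = round((lon - lon0) / pixel_deg)
    row = round((lat0 - lat) / pixel_deg)
    return row, col

lon, lat = pixel_to_lonlat(100, 200, 103.6, 1.5, 0.0002)
print(lon, lat)                                        # 103.64 1.48
print(lonlat_to_pixel(lon, lat, 103.6, 1.5, 0.0002))   # (100, 200)
```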
A "push-broom" scanner: this type of imaging system is commonly used in optical remote sensing satellites such as SPOT. The imaging system has a linear detector array (usually of the CCD type) consisting of a number of detector elements (6000 elements in SPOT HRV). Each detector element projects an "instantaneous field of view (IFOV)" on the ground. The signal recorded by a detector element is proportional to the total radiation collected within its IFOV. At any instant, a row of pixels is formed. As the detector array flies along its track, the row of pixels sweeps along to generate a two-dimensional image.
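The push-broom scanning process can be sketched as below: one row of pixels per sampling instant, stacked along track into a 2-D image. The `scene` function standing in for the ground radiance is hypothetical.

```python
# Sketch: how a push-broom scanner builds a 2-D image. A linear detector
# array records one row of pixels per sampling instant; successive rows,
# acquired as the platform moves along track, are stacked into an image.
# The "scene" function standing in for ground radiance is hypothetical.

def scan_pushbroom(scene, n_rows, n_detectors):
    """scene(row, col) -> radiance; returns the 2-D image as a list of rows."""
    image = []
    for row in range(n_rows):            # one instant per along-track step
        line = [scene(row, col) for col in range(n_detectors)]
        image.append(line)               # the row of pixels sweeps along track
    return image

img = scan_pushbroom(lambda r, c: r * 10 + c, n_rows=3, n_detectors=4)
print(img)  # [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23]]
```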

Multilayer Image
Several types of measurement may be made over the ground area covered by a single pixel. Each type of measurement forms an image which carries some specific information about the area. By "stacking" these images of the same area together, a multilayer image is formed. Each component image is a layer in the multilayer image.
Multilayer images can also be formed by combining images obtained from different
sensors, and other subsidiary data. For example, a multilayer image may consist of three
layers from a SPOT multispectral image, a layer of ERS synthetic aperture radar image,
and perhaps a layer consisting of the digital elevation map of the area being studied.
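The layer-stacking idea can be sketched as below, assuming NumPy is available and using synthetic co-registered layers in place of real SPOT, ERS and elevation data.

```python
# Sketch: forming a multilayer image by stacking co-registered layers of the
# same area. Each layer is a 2-D array; the stack is indexed as
# (layer, row, col). The layer values here are synthetic placeholders.
import numpy as np

green = np.zeros((100, 100))          # e.g. one SPOT multispectral band
red = np.ones((100, 100))
radar = np.full((100, 100), 2.0)      # e.g. an ERS SAR layer
dem = np.full((100, 100), 3.0)        # e.g. a digital elevation layer

multilayer = np.stack([green, red, radar, dem])   # shape (4, 100, 100)
print(multilayer.shape)               # (4, 100, 100)
print(multilayer[:, 50, 50])          # all layer values for one pixel
```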

An illustration of a multilayer image consisting of five component layers.

Multispectral Image
A multispectral image consists of a few image layers, each representing an image acquired in a particular wavelength band. For example, the SPOT HRV sensor operating in the multispectral mode detects radiation in three wavelength bands: the green (500 - 590 nm), red (610 - 680 nm) and near infrared (790 - 890 nm) bands. A single SPOT multispectral scene consists of three intensity images in the three wavelength bands. In this case, each pixel of the scene has three intensity values corresponding to the three bands.
A multispectral IKONOS image consists of four bands: blue, green, red and near infrared, while a Landsat TM multispectral image consists of seven bands: blue, green, red and near-IR bands, two SWIR bands, and a thermal IR band.

Superspectral Image
More recent satellite sensors are capable of acquiring images in many more wavelength bands. For example, the MODIS sensor on board NASA's Terra satellite has 36 spectral bands, covering wavelength regions ranging from the visible and near infrared through the short-wave infrared to the thermal infrared. The bands have narrower bandwidths, enabling the finer spectral characteristics of the targets to be captured by the sensor. The term "superspectral" has been coined to describe such sensors.

Hyperspectral Image
A hyperspectral image consists of about a hundred or more contiguous spectral bands. The characteristic spectrum of each target pixel is thus acquired in a hyperspectral image. The precise spectral information contained in a hyperspectral image enables better characterisation and identification of targets. Hyperspectral images have potential applications in such fields as precision agriculture (e.g. monitoring the types, health, moisture status and maturity of crops) and coastal management (e.g. monitoring of phytoplankton, pollution and bathymetry changes).
Currently, hyperspectral imagery is not commercially available from satellites. There are experimental satellite sensors that acquire hyperspectral imagery for scientific investigation (e.g. NASA's Hyperion sensor on board the EO-1 satellite, and the CHRIS sensor on board ESA's PROBA satellite).
An illustration of a hyperspectral image cube. The hyperspectral image data usually consist of over a hundred contiguous spectral bands, forming a three-dimensional (two spatial dimensions and one spectral dimension) image cube. Each pixel is associated with a complete spectrum of the imaged area. The high spectral resolution of hyperspectral images enables better identification of land covers.
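The cube structure can be sketched as below: a 3-D array indexed as (band, row, col), from which the complete spectrum of any pixel is a single slice. The cube here is synthetic; NumPy is assumed.

```python
# Sketch: a hyperspectral image as a 3-D cube (bands, rows, cols), from
# which the complete spectrum of any pixel can be read. The synthetic cube
# here is illustrative; real cubes come from sensors such as Hyperion.
import numpy as np

n_bands, n_rows, n_cols = 200, 64, 64
# Synthetic cube: each band's value simply encodes the band index.
cube = np.arange(n_bands).reshape(n_bands, 1, 1) * np.ones((1, n_rows, n_cols))

spectrum = cube[:, 10, 20]            # full spectrum of pixel (10, 20)
print(spectrum.shape)                 # (200,)
print(spectrum[0], spectrum[199])     # 0.0 199.0
```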

Spatial Resolution
Spatial resolution refers to the size of the smallest object that can be resolved on the ground. In a digital image, the resolution is limited by the pixel size, i.e. the smallest resolvable object cannot be smaller than the pixel size. The intrinsic resolution of an imaging system is determined primarily by the instantaneous field of view (IFOV) of the sensor, which is a measure of the ground area viewed by a single detector element at a given instant in time. However, this intrinsic resolution can often be degraded by other factors which introduce blurring of the image, such as improper focusing, atmospheric scattering and target motion. The pixel size is determined by the sampling distance.
A "high resolution" image is one with a small resolution size; fine details can be seen in a high resolution image. On the other hand, a "low resolution" image is one with a large resolution size, i.e. only coarse features can be observed in the image.
A low resolution MODIS scene with a wide coverage. This image was received by CRISP's ground station on 3 March 2001. The intrinsic resolution of the image was approximately 1 km, but the image shown here has been resampled to a resolution of about 4 km. The coverage is more than 1000 km from east to west. A large part of Indochina, Peninsular Malaysia, Singapore and Sumatra can be seen in the image.

A browse image of a high resolution SPOT scene. The multispectral SPOT scene has a resolution of 20 m and covers an area of 60 km by 60 km. The browse image has been resampled to a 120 m pixel size, and hence the resolution has been reduced. This scene shows Singapore and part of the Johor State of Malaysia.

Part of a high resolution SPOT scene shown at the full resolution of 20 m. The image shown here covers an area of approximately 4.8 km by 3.6 km. At this resolution, roads, vegetation and blocks of buildings can be seen.
Part of a very high resolution image acquired by the IKONOS satellite. This true-colour image was obtained by merging a 4-m multispectral image with a 1-m panchromatic image of the same area acquired simultaneously. The effective resolution of the image is 1 m. At this resolution, individual trees, vehicles, details of buildings, shadows and roads can be seen. The image shown here covers an area of about 400 m by 400 m. A very high spatial resolution image usually has a smaller area of coverage; a full scene of an IKONOS image has a coverage area of about 10 km by 10 km.

Spatial Resolution and Pixel Size


Image resolution and pixel size are often used interchangeably. In reality, they are not equivalent. An image sampled at a small pixel size does not necessarily have a high resolution. The following three images illustrate this point. The first image is a SPOT image of 10 m pixel size. It was derived by merging a SPOT panchromatic image of 10 m resolution with a SPOT multispectral image of 20 m resolution. The merging procedure "colours" the panchromatic image using the colours derived from the multispectral image. The effective resolution is thus determined by the resolution of the panchromatic image, which is 10 m. This image was further processed to degrade the resolution while maintaining the same pixel size. The next two images are blurred versions of the image with larger resolution sizes, but still digitized at the same pixel size of 10 m. Even though they have the same pixel size as the first image, they do not have the same resolution.
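The degradation described above can be sketched with a simple box blur: the pixel grid (sampling) is unchanged, but each value is spread over its neighbours, so the smallest resolvable feature grows. NumPy is assumed; this is a crude stand-in for the actual processing used for the example images.

```python
# Sketch: degrading resolution while keeping pixel size fixed. A k x k mean
# filter spreads each pixel's value over its neighbours, so the smallest
# resolvable feature grows although the sampling (pixel size) is unchanged.
import numpy as np

def box_blur(img, k):
    """k x k mean filter with edge padding (a crude resolution degrade)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dr in range(k):
        for dc in range(k):
            out += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

img = np.zeros((9, 9))
img[4, 4] = 9.0                      # a single bright one-pixel target
blurred = box_blur(img, 3)
print(blurred[4, 4])                 # 1.0 -- the point is spread over 3x3
```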

10 m resolution, 10 m pixel size
30 m resolution, 10 m pixel size
80 m resolution, 10 m pixel size

The following images illustrate the effect of pixel size on the visual appearance of an area.
The first image is a SPOT image of 10 m pixel size derived by merging a
SPOT panchromatic image with a SPOT multispectral image. The subsequent images
show the effects of digitizing the same area with larger pixel sizes.
Pixel Size = 10 m: Image Width = 160 pixels, Height = 160 pixels
Pixel Size = 20 m: Image Width = 80 pixels, Height = 80 pixels
Pixel Size = 40 m: Image Width = 40 pixels, Height = 40 pixels
Pixel Size = 80 m: Image Width = 20 pixels, Height = 20 pixels
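Digitizing the same area at a larger pixel size can be sketched as block averaging: each output pixel is the mean of an n x n block of input pixels, halving the width and height for each doubling of pixel size, as in the sequence above. NumPy is assumed.

```python
# Sketch: resampling to a larger pixel size by block averaging, as in the
# 10 m -> 20 m -> 40 m -> 80 m examples above. Each output pixel is the
# mean of an n x n block of input pixels.
import numpy as np

def block_average(img, n):
    """Aggregate an image to n-times-larger pixels (shape must divide by n)."""
    rows, cols = img.shape
    return img.reshape(rows // n, n, cols // n, n).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)   # a 4 x 4 "10 m" image
coarse = block_average(img, 2)                   # a 2 x 2 "20 m" image
print(coarse)
# [[ 2.5  4.5]
#  [10.5 12.5]]
```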

Radiometric Resolution
Radiometric resolution refers to the smallest change in intensity level that can be detected by the sensing system. The intrinsic radiometric resolution of a sensing system depends on the signal-to-noise ratio of the detector. In a digital image, the radiometric resolution is limited by the number of discrete quantization levels used to digitize the continuous intensity value.
The following images illustrate the effects of the number of quantization levels on the
digital image. The first image is a SPOT panchromatic image quantized at 8 bits (i.e. 256
levels) per pixel. The subsequent images show the effects of degrading the radiometric
resolution by using fewer quantization levels.

8-bit quantization (256 levels)    6-bit quantization (64 levels)
4-bit quantization (16 levels)     3-bit quantization (8 levels)
2-bit quantization (4 levels)      1-bit quantization (2 levels)

Digitization using a small number of quantization levels does not greatly affect the visual quality of the image; even 4-bit quantization (16 levels) seems acceptable in the examples shown. However, if the image is to be subjected to numerical analysis, the accuracy of the analysis will be compromised if too few quantization levels are used.
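The quantization-level degradation shown in the examples above can be sketched as below, assuming NumPy: an 8-bit image is requantized so that only 2^bits distinct levels remain.

```python
# Sketch: degrading radiometric resolution by requantizing an 8-bit image
# to fewer levels, as in the 8-bit -> 1-bit examples above. Values are
# snapped down to the nearest of 2**bits coarse levels.
import numpy as np

def requantize(img8, bits):
    """Reduce an 8-bit image (0-255) to 2**bits quantization levels."""
    levels = 2 ** bits
    step = 256 // levels
    return (img8 // step) * step     # keep only the coarse levels

img8 = np.array([0, 60, 130, 200, 255], dtype=np.uint8)
print(requantize(img8, 2))           # 4 levels: [  0   0 128 192 192]
print(len(np.unique(requantize(np.arange(256, dtype=np.uint8), 3))))  # 8
```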

Part of the running track in this IKONOS image is under cloud shadow. IKONOS uses 11-bit digitization during image acquisition. The high radiometric resolution enables features under shadow to be recovered.
The features under cloud shadow are recovered by applying a simple contrast and brightness enhancement technique.

Data Volume
The volume of the digital data can potentially be large for multispectral data, as a given area is imaged in many different wavelength bands. For example, a 3-band multispectral SPOT image covers an area of about 60 km x 60 km on the ground with a pixel separation of 20 m, so there are about 3000 x 3000 pixels per band. Each pixel intensity in each band is coded using an 8-bit (i.e. 1-byte) digital number, giving a total of about 27 million bytes per image.
In comparison, panchromatic data has only one band. Thus, panchromatic systems are normally designed to give a higher spatial resolution than the multispectral system. For example, a SPOT panchromatic scene has the same coverage of about 60 km x 60 km, but the pixel size is 10 m, giving about 6000 x 6000 pixels and a total of about 36 million bytes per image. If a multispectral SPOT scene were also digitized at 10 m pixel size, the data volume would be 108 million bytes.
For very high spatial resolution imagery, such as the one acquired by the IKONOS
satellite, the data volume is even more significant. For example, an IKONOS 4-band
multispectral image at 4-m pixel size covering an area of 10 km by 10 km, digitized at 11
bits (stored at 16 bits), has a data volume of 4 x 2500 x 2500 x 2 bytes, or 50 million bytes
per image. A 1-m resolution panchromatic image covering the same area would have a
data volume of 200 million bytes per image.
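The data-volume arithmetic used above (bands x rows x columns x bytes per pixel) can be sketched as a small calculator, with the scene parameters taken from the text.

```python
# Sketch: the data-volume arithmetic used in the text
# (bands x rows x cols x bytes per pixel) for a square scene.

def data_volume_bytes(bands, width_m, pixel_m, bytes_per_pixel):
    """Volume of a square scene width_m metres across at pixel_m pixel size."""
    pixels_per_side = width_m // pixel_m
    return bands * pixels_per_side ** 2 * bytes_per_pixel

# SPOT multispectral: 3 bands, 60 km scene, 20 m pixels, 1 byte per pixel
print(data_volume_bytes(3, 60_000, 20, 1))   # 27000000
# SPOT panchromatic: 1 band, 60 km scene, 10 m pixels, 1 byte per pixel
print(data_volume_bytes(1, 60_000, 10, 1))   # 36000000
# IKONOS multispectral: 4 bands, 10 km scene, 4 m pixels, 2 bytes (11 bits)
print(data_volume_bytes(4, 10_000, 4, 2))    # 50000000
# IKONOS panchromatic: 1 band, 10 km scene, 1 m pixels, 2 bytes
print(data_volume_bytes(1, 10_000, 1, 2))    # 200000000
```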
The images taken by a remote sensing satellite are transmitted to Earth via a telecommunication link. The bandwidth of the telecommunication channel sets a limit on the data volume for a scene taken by the imaging system. Ideally, it is desirable to have a high spatial resolution image with many spectral bands covering a wide area. In reality, depending on the intended application, spatial resolution may have to be compromised to accommodate a larger number of spectral bands or a wider area of coverage; alternatively, a small number of spectral bands or a smaller area of coverage may be accepted to allow high spatial resolution imaging.

Optical Remote Sensing

Optical remote sensing makes use of visible, near infrared and short-wave infrared
sensors to form images of the earth's surface by detecting the solar radiation reflected
from targets on the ground. Different materials reflect and absorb differently at different
wavelengths. Thus, the targets can be differentiated by their spectral reflectance
signatures in the remotely sensed images. Optical remote sensing systems are classified
into the following types, depending on the number of spectral bands used in the imaging
process.
 Panchromatic imaging system: The sensor is a single-channel detector sensitive to radiation within a broad wavelength range. If the wavelength range coincides with the visible range, the resulting image resembles a "black-and-white" photograph taken from space. The physical quantity being measured is the apparent brightness of the targets; the spectral information or "colour" of the targets is lost. Examples of panchromatic imaging systems are:
o IKONOS PAN
o SPOT HRV-PAN

IKONOS, USA

Launched on September 24, 1999, IKONOS is the world's first commercial satellite providing very high resolution (up to 1 m) imagery of the Earth. The IKONOS satellite is operated by Space Imaging Inc. of Denver, Colorado, USA. IKONOS simultaneously collects one-metre resolution black-and-white (panchromatic) images and four-metre resolution colour (multispectral) images. The multispectral images consist of four bands in the blue, green, red and near-infrared wavelength regions. The multispectral images can be merged with panchromatic images of the same locations to produce "pan-sharpened colour" images of 1-m resolution. The satellite camera can distinguish objects on the Earth's surface as small as one metre square, but it cannot see individual people. The IKONOS satellite is equipped with state-of-the-art star trackers and on-board GPS, enabling it to acquire imagery with very high positional accuracy. IKONOS imagery is suitable for applications requiring a high level of detail and accuracy, such as mapping, agricultural monitoring, resource management and urban planning.
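One common pan-sharpening scheme, a Brovey-style ratio merge, can be sketched as below to illustrate how 4-m multispectral colour is combined with a 1-m panchromatic band. This is an assumption for illustration, not necessarily the operational IKONOS product algorithm; NumPy is assumed and the arrays are taken to be co-registered at the panchromatic pixel size.

```python
# Sketch: a Brovey-style ratio merge for pan-sharpening (an illustrative
# choice, not necessarily the operational IKONOS algorithm). Each upsampled
# multispectral band is rescaled so its intensity matches the sharper pan band.
import numpy as np

def brovey_sharpen(ms, pan):
    """ms: (bands, rows, cols) upsampled multispectral; pan: (rows, cols)."""
    intensity = ms.mean(axis=0)                    # crude intensity estimate
    scale = pan / np.maximum(intensity, 1e-6)      # avoid divide-by-zero
    return ms * scale                              # rescale each band

ms = np.full((3, 4, 4), 50.0)       # flat colour, upsampled from 4 m
pan = np.full((4, 4), 100.0)        # sharper 1 m brightness
sharp = brovey_sharpen(ms, pan)
print(sharp[0, 0, 0])               # 100.0 -- bands rescaled to pan detail
```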
IKONOS Orbit

Type Sun-Synchronous

Altitude 681 km

Inclination 98.1 deg

Descending node crossing time 10:30 am local solar time

Period 98 min

Off-Nadir Revisit 1.5 to 2.9 days at 40° latitude


Sensor Characteristics
Viewing Angle Agile spacecraft, along track and across track pointing

Swath Width 11 km nominal at nadir

Single scene: 13 km x 13 km
Image Modes Strips: 11 km x 100 km up to 11 km x 1000 km
Image mosaics: up to 12,000 sq. km

Metric Accuracy 12 m horizontal, 10 m vertical without GCP

Radiometric Digitization 11 bits

Spectral Bands   Wavelength (µm)   Resolution

1 (blue)         0.40 - 0.52       4 m
2 (green)        0.52 - 0.60       4 m
3 (red)          0.63 - 0.69       4 m
4 (NIR)          0.76 - 0.90       4 m
Panchromatic     0.45 - 0.90       1 m

 Superspectral Imaging Systems: A superspectral imaging sensor has many more spectral channels (typically >10) than a multispectral sensor. The bands have narrower bandwidths, enabling the finer spectral characteristics of the targets to be captured by the sensor. Examples of superspectral systems are:
o MODIS
o MERIS
MODIS - Moderate Resolution Imaging Spectroradiometer (on Terra and Aqua Satellites, USA)

The Moderate Resolution Imaging Spectroradiometer (MODIS) is a key instrument aboard NASA's Terra and Aqua satellites. Terra was successfully launched from Vandenberg Air Force Base, CA, on December 18, 1999, carrying the MODIS Proto-Flight Model (PFM), which began collecting data on February 24, 2000. The Aqua satellite, carrying a second MODIS instrument, was successfully launched on 4 May 2002.
Terra MODIS views the entire Earth's surface every 1 to 2 days, acquiring data in 36 spectral bands. These data, along with data from the second MODIS on Aqua, will improve understanding of global dynamics and processes occurring on the land, in the oceans, and in the lower atmosphere. MODIS is playing a vital role in the development of validated, global, interactive Earth system models able to predict global change accurately enough to assist policy makers in making sound decisions concerning the protection of our environment.
MODIS Instrument Characteristics

Orbit:               705 km, 10:30 a.m. descending node (Terra) or 1:30 p.m. ascending node (Aqua), sun-synchronous, near-polar, circular
Scan Rate:           20.3 rpm, cross track
Swath Dimensions:    2330 km (cross track) by 10 km (along track at nadir)
Telescope:           17.78 cm diam., off-axis, afocal (collimated), with intermediate field stop
Size:                1.0 x 1.6 x 1.0 m
Weight:              228.7 kg
Power:               162.5 W (single orbit average)
Data Rate:           10.6 Mbps (peak daytime); 6.1 Mbps (orbital average)
Quantization:        12 bits
Spatial Resolution:  250 m (bands 1-2), 500 m (bands 3-7), 1000 m (bands 8-36)
Design Life:         6 years

Primary Use            Band  Bandwidth(1)      Spectral Radiance(2)  Required SNR(3)
Land/Cloud/Aerosols     1    620 - 670         21.8                  128
Boundaries              2    841 - 876         24.7                  201
Land/Cloud/Aerosols     3    459 - 479         35.3                  243
Properties              4    545 - 565         29.0                  228
                        5    1230 - 1250       5.4                   74
                        6    1628 - 1652       7.3                   275
                        7    2105 - 2155       1.0                   110
Ocean Color/            8    405 - 420         44.9                  880
Phytoplankton/          9    438 - 448         41.9                  838
Biogeochemistry        10    483 - 493         32.1                  802
                       11    526 - 536         27.9                  754
                       12    546 - 556         21.0                  750
                       13    662 - 672         9.5                   910
                       14    673 - 683         8.7                   1087
                       15    743 - 753         10.2                  586
                       16    862 - 877         6.2                   516
Atmospheric            17    890 - 920         10.0                  167
Water Vapor            18    931 - 941         3.6                   57
                       19    915 - 965         15.0                  250

Primary Use            Band  Bandwidth(1)      Spectral Radiance(2)  Required NE[delta]T(K)(4)
Surface/Cloud          20    3.660 - 3.840     0.45 (300K)           0.05
Temperature            21    3.929 - 3.989     2.38 (335K)           2.00
                       22    3.929 - 3.989     0.67 (300K)           0.07
                       23    4.020 - 4.080     0.79 (300K)           0.07
Atmospheric            24    4.433 - 4.498     0.17 (250K)           0.25
Temperature            25    4.482 - 4.549     0.59 (275K)           0.25
Cirrus Clouds          26    1.360 - 1.390     6.00                  150 (SNR)
Water Vapor            27    6.535 - 6.895     1.16 (240K)           0.25
                       28    7.175 - 7.475     2.18 (250K)           0.25
Cloud Properties       29    8.400 - 8.700     9.58 (300K)           0.05
Ozone                  30    9.580 - 9.880     3.69 (250K)           0.25
Surface/Cloud          31    10.780 - 11.280   9.55 (300K)           0.05
Temperature            32    11.770 - 12.270   8.94 (300K)           0.05
Cloud Top              33    13.185 - 13.485   4.52 (260K)           0.25
Altitude               34    13.485 - 13.785   3.76 (250K)           0.25
                       35    13.785 - 14.085   3.11 (240K)           0.25
                       36    14.085 - 14.385   2.08 (220K)           0.35

(1) Bands 1 to 19 are in nm; bands 20 to 36 are in µm.
(2) Spectral radiance values are in W/(m2·µm·sr).
(3) SNR = signal-to-noise ratio.
(4) NE[delta]T = noise-equivalent temperature difference.
Note: Performance goal is 30-40% better than required.
ENVISAT - MERIS - Medium Resolution Imaging Spectrometer (on ENVISAT Satellite, European Space Agency)

MERIS is a medium resolution imaging instrument carried aboard ESA's Envisat satellite, which was successfully launched on 1 March 2002. The MERIS mission is primarily dedicated to ocean and coastal sea water colour observations. Knowledge of the sea colour can be converted into measurements of chlorophyll pigment concentration, suspended sediment concentration and aerosol loads over the marine domain. The instrument can also be used for atmospheric and land surface related studies.
The global mission of MERIS will make a major contribution to scientific projects which seek to understand the role of the oceans and ocean productivity in the climate system through observations of water colour, and will further our ability to forecast change through models. Secondary objectives of the MERIS mission are directed to the understanding of atmospheric parameters associated with clouds, water vapour and aerosols, in addition to land surface parameters, in particular vegetation processes.
MERIS has a high spectral and radiometric resolution and a dual spatial resolution (1200 m and 300 m), within a global mission covering open ocean and coastal zone waters and a regional mission covering land surfaces.
MERIS Spectral Bands
MDS Nr.  Band centre (nm)  Bandwidth (nm)  Potential Applications
1        412.5             10              Yellow substance, turbidity
2        442.5             10              Chlorophyll absorption maximum
3        490               10              Chlorophyll, other pigments
4        510               10              Turbidity, suspended sediment, red tides
5        560               10              Chlorophyll reference, suspended sediment
6        620               10              Suspended sediment
7        665               10              Chlorophyll absorption
8        681.25            7.5             Chlorophyll fluorescence
9        705               10              Atmospheric correction, red edge
10       753.75            7.5             Oxygen absorption reference
11       760               2.5             Oxygen absorption R-branch
12       775               15              Aerosols, vegetation
13       865               20              Aerosol corrections over ocean
14       890               10              Water vapour absorption reference
15       900               10              Water vapour absorption, vegetation

MERIS is designed to acquire 15 spectral bands in the 390 - 1040 nm range. One of the
most outstanding features of MERIS is the programmability of its spectral bands in their
width and position, in accordance with the priorities of the mission.
The above table has been derived for oceanographic and interdisciplinary applications.
The exact position of the MERIS spectral bands will be determined following a detailed
spectral characterization of the instrument. The spectral range is restricted to the visible
near-infrared part of the spectrum between 390 and 1040 nm. The spectral bandwidth is
variable between 1.25 and 30 nm depending on the width of a spectral feature to be
observed and the amount of energy needed in a band to perform an adequate
observation. Over the open ocean, an average bandwidth of 10 nm is required for the bands located in the visible part of the spectrum. Driven by the need to resolve spectral features of the oxygen absorption band occurring at 760 nm, a minimum spectral bandwidth of 2.5 nm is required.
 Hyperspectral Imaging Systems: A hyperspectral imaging system is also known as an "imaging spectrometer". It acquires images in about a hundred or more contiguous spectral bands. The precise spectral information contained in a hyperspectral image enables better characterisation and identification of targets. Hyperspectral images have potential applications in such fields as precision agriculture (e.g. monitoring the types, health, moisture status and maturity of crops) and coastal management (e.g. monitoring of phytoplankton, pollution and bathymetry changes). An example of a hyperspectral system is:
Hyperion on the EO-1 satellite

EO-1 (Earth Observing-1), USA


Earth Observing-1 (EO-1) is the first satellite in NASA's New Millennium Program Earth Observing series. EO-1 was launched on 21 November 2000. The EO missions will develop and validate instruments and technologies for space-based Earth observations with unique spatial, spectral and temporal characteristics not previously available.
EO-1's primary focus is to develop and test a set of advanced technology land imaging instruments. However, many other key instruments and technologies are part of the mission and will have wide-ranging applications to future land imaging missions in particular and future satellites in general. EO-1 was inserted into an orbit flying in formation with the Landsat 7 satellite, taking a series of the same images. Comparison of these "paired scene" images will be one means of evaluating EO-1's land imaging instruments.
A unique feature of the EO-1 mission is that it carries an experimental hyperspectral imager (the Hyperion) that can capture high resolution images of the Earth's surface in 220 contiguous spectral bands.
EO-1 Orbit

Type Sun-Synchronous, 10:01 am descending node

Altitude 705 km

Inclination 98.2 deg

Period 99 min

Repeat Cycle 16 days

EO-1 Sensors
 Hyperion: The Hyperion is a high resolution hyperspectral imaging instrument. It images the Earth's surface in 220 contiguous spectral bands with high radiometric accuracy, covering the region from 400 nm to 2.5 µm at a ground resolution of 30 m. Through this large number of spectral bands, complex land ecosystems can be imaged and accurately classified.
The Hyperion is a "push broom" instrument. It has a single telescope and two
spectrometers, one visible/near infrared (VNIR) spectrometer (with CCD detector
array) and one short-wave infrared (SWIR) spectrometer (HgCdTe detector array).
Hyperion Sensor Characteristics

Spatial Resolution           30 m
Swath Width                  7.75 km
Spectral Channels            220 unique channels: VNIR (70 channels, 356 nm - 1058 nm), SWIR (172 channels, 852 nm - 2577 nm)
Spectral Bandwidth           10 nm (nominal)
Digitization                 12 bits
Signal-to-Noise Ratio (SNR)  161 (550 nm); 147 (700 nm); 110 (1125 nm); 40 (2125 nm)
 ALI (Advanced Land Imager): The ALI instrument features ten-meter ground
resolution in the panchromatic (black-and-white) band and 30-meter ground
resolution in its multispectral bands (0.4-2.4 microns), covering seven of the eight
bands of the current Landsat.
 AC (Atmospheric Corrector): The AC instrument provides the first space-based test
of an Atmospheric Corrector for increasing the accuracy of surface reflectance
estimates. The AC enables more precise predictive models to be constructed for
remote sensing applications. It will provide significant improvements in generating
accurate reflectance measurements for land imaging missions. Covers the 0.890-
1.600 micron wavelength IR band.
SPOT (Satellite Pour l'Observation de la Terre), France

The SPOT program consists of a series of optical remote sensing satellites with the primary mission of obtaining Earth imagery for land use, agriculture, forestry, geology, cartography, regional planning, water resources and GIS applications. It is committed to commercial remote sensing on an international scale and has established a global network of control centres, receiving stations, processing centres and data distributors. The SPOT satellites are operated by the French space agency, Centre National d'Etudes Spatiales (CNES). Worldwide commercial operations are anchored by SPOT IMAGE in France with the following subsidiaries: SPOT Image Corp. in the US, SPOT Imaging Services in Australia and SPOT Asia in Singapore.
SPOT 1 was launched on 22 February 1986 and withdrawn from active service on 31 December 1990. SPOT 2 was launched on 22 January 1990 and is still operational. SPOT 3 was launched on 26 September 1993; after three years in orbit, an incident occurred on 14 November 1996 and the satellite stopped functioning. SPOT 4 was launched on 24 March 1998. Engineering work for SPOT 5 has begun so that the satellite can be launched in 2002 to ensure service continuity. To meet the increasing demand for SPOT imagery, notably during the northern hemisphere growing season, SPOT 1 was reactivated in 1997 for routine operation. Currently, three SPOT satellites (SPOT 1, 2 and 4) are operational.
The SPOT system provides global coverage between 87 degrees north latitude and 87 degrees south latitude.
SPOT Orbit
Type Sun-Synchronous

Altitude 832 km

Inclination 98.7 deg

Period 101 min

Repeat Cycle 26 days

Off-Nadir Revisit 1 to 3 days


Sensors
HRV (High Resolution Visible) and HRVIR (High Resolution Visible IR) detectors

SPOT 1, 2 Twin HRV (SPOT 4 Twin HRVIR) Imaging System

Each SPOT 1 and SPOT 2 satellite carries two HRV sensors, constructed with multilinear array detectors operating in a cross-track direction. The SPOT 4 satellite carries two HRVIR detectors. HRVIR is similar to the HRV, except that HRVIR has an additional short-wave infrared (SWIR) band, and the wavelength bandwidth of the panchromatic mode for HRVIR is narrower than that for HRV. The position of each HRV or HRVIR entrance mirror can be commanded by ground control to observe a region of interest not necessarily vertically beneath the satellite. Thus, each HRV or HRVIR offers an oblique viewing capability, the viewing angle being adjustable through ±27° relative to the vertical. This off-nadir viewing enables the acquisition of stereoscopic imagery and provides a short revisit interval of 1 to 3 days.
Off-nadir viewing capability of SPOT HRV and HRVIR enables a short revisit interval of 1 to 3 days.
Two imaging modes are employed: panchromatic (P) and multispectral (XS). Both HRVs on the SPOT 1 and 2 satellites (HRVIRs on the SPOT 4 satellite) can operate in either mode, either simultaneously or individually.
Panchromatic (P) mode
Imaging is performed in a single spectral band, corresponding to the visible part of the electromagnetic spectrum. The panchromatic band in SPOT 1, 2 HRV covers 0.51 to 0.73 µm. For SPOT 4 HRVIR, the panchromatic band has a narrower bandwidth centred on the red band (0.61 to 0.68 µm). The panchromatic mode of the SPOT 4 HRVIR is named the Monospectral (M) mode, to differentiate it from the Panchromatic mode of the SPOT 1, 2 HRV. The single-channel imaging mode (P or M mode) supplies only black-and-white images with a pixel width of 10 m. This band is intended primarily for applications calling for fine geometrical detail.
Multispectral (XS) mode
Multispectral imaging is performed in three spectral bands in SPOT 1, 2 HRV: band XS1
covering 0.50 to 0.59 µm (green), band XS2 covering 0.61 to 0.68 µm (red), and band
XS3 covering 0.79 to 0.89 µm (near infrared). There is a fourth band in
SPOT 4 HRVIR covering 1.53 to 1.75 µm (short-wave infrared). The four multispectral
bands of the HRVIR are denoted XI1, XI2, XI3 and XI4. By combining the data
recorded in these channels, colour composite images can be produced with a pixel size
of 20 m.
SPOT HRV and HRVIR Instrument Characteristics
                                          Multispectral Mode (XS)  Panchromatic Mode (P)
Instrument Field of View                  4.13 deg                 4.13 deg
Ground Sampling Interval (Nadir Viewing)  20 m by 20 m             10 m by 10 m
Pixels per Line                           3000                     6000
Ground Swath (Nadir Viewing)              60 km                    60 km
HRV Spectral Bands
Mode Band Wavelength (µm) Resolution (m)
Multispectral XS1 0.50 - 0.59 (Green) 20
Multispectral XS2 0.61 - 0.68 (Red) 20
Multispectral XS3 0.79 - 0.89 (Near IR) 20
Panchromatic P 0.51 - 0.73 (Visible) 10
HRVIR Spectral Bands
Mode Band Wavelength (µm) Resolution (m)
Multispectral XI1 0.50 - 0.59 (Green) 20
Multispectral XI2 0.61 - 0.68 (Red) 20
Multispectral XI3 0.79 - 0.89 (Near IR) 20
Multispectral XI4 1.53 - 1.75 (SWIR) 20
Monospectral M 0.61 - 0.68 (Red) 10
SPOT 4 VEGETATION Instrument
The SPOT 4 satellite carries on board a low-resolution, wide-coverage instrument for
monitoring the continental biosphere and crops. The VEGETATION instrument
provides global coverage on an almost daily basis at a resolution of 1 kilometer with a
swath of 2250 km, enabling the observation of long-term environmental changes on a
regional and worldwide scale.
The VEGETATION program is co-funded by the European Union, Belgium, France,
Italy and Sweden, and is led by the French space agency CNES.
Spectral Bands of VEGETATION Instrument
Band                 Wavelength (µm)
Blue                 0.43 to 0.47
Red                  0.61 to 0.68
Near-infrared        0.78 to 0.89
Short-wave infrared  1.58 to 1.75
 Multispectral imaging system: The sensor is a multichannel detector with a few
spectral bands. Each channel is sensitive to radiation within a narrow wavelength
band. The resulting image is a multilayer image which contains both the
brightness and spectral (colour) information of the targets being observed.
Examples of multispectral systems are:
o LANDSAT MSS
o LANDSAT TM
o SPOT HRV-XS
o IKONOS MS
LANDSAT, USA

The LANDSAT program consists of a series of optical/infrared remote sensing satellites
for land observation. The program was first started by the National Aeronautics and
Space Administration (NASA) in 1972, then turned over to the National Oceanic and
Atmospheric Administration (NOAA) after it became operational. Since 1984, satellite
operation and data handling have been managed by a commercial company, EOSAT. However,
all data older than 2 years return to the "public domain" and are distributed by the Earth
Resources Observation System (EROS) Data Center of the US Geological Survey (USGS).
The first satellite in the series, LANDSAT-1 (initially named the Earth Resources
Technology Satellite, ERTS-1), was launched on 23 July 1972. The satellite had a design
life expectancy of 1 year but did not cease operation until January 1978. LANDSAT-2 was
launched on 22 January 1975, and three additional LANDSAT satellites were launched in
1978, 1982 and 1984 (LANDSAT-3, 4 and 5 respectively). LANDSAT-6 was launched in
October 1993 but failed to reach orbit. A new satellite, LANDSAT-7, was
launched on 15 April 1999. Currently, only LANDSAT-5 and 7 are operational.
LANDSAT Orbit
Type         Sun-Synchronous
Altitude     705 km
Inclination  98.2 deg
Period       99 min
Repeat Cycle 16 days
Sensors
 MSS (Multi-Spectral Scanner), on LANDSAT-1 to 5. As one of the older-generation
sensors, the MSS had its routine data acquisition terminated in late 1992.
The resolution of the MSS sensor was approximately 80 m with radiometric
coverage in four spectral bands from the visible green to the near-infrared (IR)
wavelengths. Only the MSS sensor on Landsat 3 had a fifth band in the thermal-IR.
LANDSAT 4,5 MSS Sensor Characteristics
Band Wavelength (µm) Resolution (m)
Green 1 0.5 - 0.6 82
Red 2 0.6 - 0.7 82
Near IR 3 0.7 - 0.8 82
Near IR 4 0.8 - 1.1 82
 TM (Thematic Mapper), first operational on LANDSAT-4. TM sensors primarily
detect reflected radiation from the Earth surface in the visible and near-infrared (IR)
wavelengths, but the TM sensor provides more radiometric information than the
MSS sensor. The wavelength range for the TM sensor is from the visible (blue),
through the mid-IR, into the thermal-IR portion of the electromagnetic spectrum.
Sixteen detectors for the visible and mid-IR wavelength bands in the TM sensor
provide 16 scan lines on each active scan. Four detectors for the thermal-IR band
provide four scan lines on each active scan. The TM sensor has a spatial resolution
of 30 m for the visible, near-IR, and mid-IR wavelengths and a spatial resolution of
120 m for the thermal-IR band.
 ETM+ (Enhanced Thematic Mapper Plus), carried on board LANDSAT-7. The
ETM+ instrument is an eight-band multispectral scanning radiometer capable of
providing high-resolution image information of the Earth's surface. Its spectral
bands are similar to those of TM, except that the thermal band (band 6) has an
improved resolution of 60 m (versus 120 m in TM). There is also an additional
panchromatic band at 15 m resolution.
LANDSAT TM, ETM+ Sensor Characteristics
Band Wavelength (µm) Resolution (m)
Blue 1 0.45 - 0.52 30
Green 2 0.52 - 0.60 30
Red 3 0.63 - 0.69 30
Near IR 4 0.76 - 0.90 30
SWIR 5 1.55 - 1.75 30
Thermal IR 6 10.40 - 12.50 120 (TM) 60 (ETM+)
SWIR 7 2.08 - 2.35 30
Panchromatic 8 0.5 - 0.9 15

Solar Irradiation
Optical remote sensing depends on the sun as the sole source of illumination. The solar
irradiation spectrum above the atmosphere can be modeled by a black body radiation
spectrum having a source temperature of 5900 K, with a peak irradiation located at about
500 nm wavelength. Physical measurement of the solar irradiance has also been
performed using ground-based and spaceborne sensors.
After passing through the atmosphere, the solar irradiation spectrum at the ground is
modulated by the atmospheric transmission windows. Significant energy remains only
within the wavelength range from about 0.25 to 3 µm.
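The ~500 nm peak quoted above follows from Wien's displacement law for a black body; a quick sanity check (the law and its constant are standard physics, not taken from this document):

```python
# Wien's displacement law: the peak wavelength of a black body at temperature T
# is lambda_max = b / T, where b = 2.898e-3 m*K is Wien's displacement constant.
B_WIEN = 2.898e-3  # m*K

def wien_peak_nm(temperature_k):
    """Peak emission wavelength in nanometres for a black body at temperature_k."""
    return B_WIEN / temperature_k * 1e9

print(round(wien_peak_nm(5900)))  # ~491 nm, consistent with the ~500 nm peak above
```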
Solar Irradiation Spectra above the atmosphere and at sea-level.

Spectral Reflectance Signature


When solar radiation hits a target surface, it may be transmitted, absorbed or reflected.
Different materials reflect and absorb differently at different wavelengths. The reflectance
spectrum of a material is a plot of the fraction of radiation reflected as a function of the
incident wavelength and serves as a unique signature for the material. In principle, a
material can be identified from its spectral reflectance signature if the sensing system has
sufficient spectral resolution to distinguish its spectrum from those of other materials.
This premise provides the basis for multispectral remote sensing.
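The identification step described above can be sketched as a nearest-neighbour match of a pixel spectrum against a library of reference signatures. This is a minimal illustration only; the reflectance values below are invented for the example, not measured data:

```python
# Sketch of spectral matching: classify a pixel by finding the reference
# signature with the smallest Euclidean distance to its measured spectrum.
import math

# Illustrative reflectances in (green, red, NIR) bands -- not real measurements.
signatures = {
    "clear water": (0.05, 0.03, 0.01),
    "bare soil":   (0.15, 0.20, 0.25),
    "vegetation":  (0.10, 0.05, 0.45),
}

def classify(pixel):
    """Return the material whose reference spectrum is closest to the pixel."""
    return min(signatures, key=lambda m: math.dist(pixel, signatures[m]))

print(classify((0.11, 0.06, 0.40)))  # vegetation: low visible, high NIR
```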
The following graph shows the typical reflectance spectra of five materials: clear water,
turbid water, bare soil and two types of vegetation.

Reflectance Spectrum of Five Types of Landcover


The reflectance of clear water is generally low. However, the reflectance is maximum at
the blue end of the spectrum and decreases as wavelength increases. Hence, clear water
appears dark-bluish. Turbid water has some sediment suspension which increases the
reflectance in the red end of the spectrum, accounting for its brownish appearance. The
reflectance of bare soil generally depends on its composition. In the example shown, the
reflectance increases monotonically with increasing wavelength. Hence, it should appear
yellowish-red to the eye.
Vegetation has a unique spectral signature which enables it to be distinguished readily
from other types of land cover in an optical/near-infrared image. The reflectance is low in
both the blue and red regions of the spectrum, due to absorption by chlorophyll for
photosynthesis. It has a peak at the green region which gives rise to the green colour of
vegetation. In the near infrared (NIR) region, the reflectance is much higher than that in
the visible band due to the cellular structure in the leaves. Hence, vegetation can be
identified by the high NIR but generally low visible reflectances. This property has been
used in early wartime reconnaissance missions for "camouflage detection".
The shape of the reflectance spectrum can be used for identification of vegetation type.
For example, the reflectance spectra of vegetation 1 and 2 in the above figure can be
distinguished although they exhibit the general characteristics of high NIR but low visible
reflectances. Vegetation 1 has higher reflectance in the visible region but lower
reflectance in the NIR region. For the same vegetation type, the reflectance spectrum also
depends on other factors such as the leaf moisture content and health of the plants.
The reflectance of vegetation in the SWIR region (e.g. band 5 of Landsat TM and band 4
of SPOT 4 sensors) is more varied, depending on the plant type and its water
content. Water has strong absorption bands around 1.45, 1.95 and 2.50 µm. Outside
these absorption bands in the SWIR region, reflectance of leaves generally increases
when leaf liquid water content decreases. This property can be used for identifying tree
types and plant conditions from remote sensing images. The SWIR band can be used in
detecting plant drought stress and delineating burnt areas and fire-affected vegetation.
The SWIR band is also sensitive to the thermal radiation emitted by intense fires, and
hence can be used to detect active fires, especially during night-time when the
background interference from SWIR in reflected sunlight is absent.

Typical Reflectance Spectrum of Vegetation. The labelled arrows indicate the common
wavelength bands used in optical remote sensing of vegetation: A: blue band, B: green
band; C: red band; D: near IR band;
E: short-wave IR band
Interpretation of Optical Images
Interpreting Optical Remote Sensing Images

Four main types of information contained in an optical image are often utilized for image
interpretation:
 Radiometric Information (i.e. brightness, intensity, tone),
 Spectral Information (i.e. colour, hue),
 Textural Information,
 Geometric and Contextual Information.
They are illustrated in the following examples.
Panchromatic Images
A panchromatic image consists of only one band. It is usually displayed as a grey scale
image, i.e. the displayed brightness of a particular pixel is proportional to the pixel digital
number which is related to the intensity of solar radiation reflected by the targets in the
pixel and detected by the detector. Thus, a panchromatic image may be similarly
interpreted as a black-and-white aerial photograph of the area. The Radiometric
Information is the main information type utilized in the interpretation.
A panchromatic image
extracted from
a SPOT panchromatic scene at
a ground resolution of 10 m.
The ground coverage is about
6.5 km (width) by 5.5 km
(height). The urban area at the
bottom left and a clearing near
the top of the image have high
reflected intensity, while the
vegetated areas on the right
part of the image are generally
dark. Roads and blocks of
buildings in the urban area are
visible. A river flowing through
the vegetated area, cutting
across the top right corner of
the image can be seen. The
river appears bright due to
sediments while the sea at the
bottom edge of the image
appears dark.
Multispectral Images
A multispectral image consists of several bands of data. For visual display, each band of
the image may be displayed one band at a time as a grey scale image, or in combination
of three bands at a time as a colour composite image. Interpretation of a multispectral
colour composite image will require the knowledge of the spectral reflectance
signature of the targets in the scene. In this case, the spectral information content of the
image is utilized in the interpretation.
The following three images show the three bands of a multispectral image extracted from
a SPOT multispectral scene at a ground resolution of 20 m. The area covered is the same
as that shown in the above panchromatic image. Note that both the XS1 (green) and XS2
(red) bands look almost identical to the panchromatic image shown above. In contrast, the
vegetated areas now appear bright in the XS3 (near infrared) band due to high
reflectance of leaves in the near infrared wavelength region. Several shades of grey can
be identified for the vegetated areas, corresponding to different types of vegetation. Water
masses (both the river and the sea) appear dark in the XS3 (near IR) band.

SPOT XS1 (green band) SPOT XS2 (red band)

SPOT XS3 (Near IR band)


Colour Composite Images
In displaying a colour composite image, three primary colours (red, green and blue) are
used. When these three colours are combined in various proportions, they produce
different colours in the visible spectrum. Associating each spectral band (not necessarily a
visible band) with a separate primary colour results in a colour composite image.
Many colours can be formed by combining the three
primary colours (Red, Green, Blue) in various
proportions.
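The band-to-channel assignment described above can be sketched in NumPy. This is a minimal illustration with synthetic band arrays; real composites would read calibrated band data from an image file:

```python
# Sketch: build a colour composite by assigning three spectral bands to the
# R, G, B display channels, stretching each to the 0-255 display range.
import numpy as np

def stretch(band):
    """Linearly stretch a band to 0-255 for display."""
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * 255).astype(np.uint8)

def composite(r_band, g_band, b_band):
    """Stack three bands into an (rows, cols, 3) RGB display image."""
    return np.dstack([stretch(r_band), stretch(g_band), stretch(b_band)])

# Toy 2x2 "bands" standing in for real image data
red   = np.array([[10, 20], [30, 40]], float)
green = np.array([[5, 15], [25, 35]], float)
blue  = np.array([[0, 50], [100, 150]], float)
rgb = composite(red, green, blue)
print(rgb.shape)  # (2, 2, 3)
```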

True Colour Composite


If a multispectral image consists of the three visual primary colour bands (red, green,
blue), the three bands may be combined to produce a "true colour" image. For example,
the bands 3 (red band), 2 (green band) and 1 (blue band) of a LANDSAT TM image or
an IKONOS multispectral image can be assigned respectively to the R, G, and B colours
for display. In this way, the colours of the resulting colour composite image resemble
closely what would be observed by the human eyes.

A 1-m resolution true-colour IKONOS image.


False Colour Composite
The display colour assignment for any band of a multispectral image can be done in an
entirely arbitrary manner. In this case, the colour of a target in the displayed image does
not have any resemblance to its actual colour. The resulting product is known as a false
colour composite image. There are many possible schemes for producing false colour
composite images; some schemes are more suitable than others for detecting certain
objects in the image.
A very common false colour composite scheme for displaying a SPOT multispectral image
is shown below:
R = XS3 (NIR band)
G = XS2 (red band)
B = XS1 (green band)
This false colour composite scheme allows vegetation to be detected readily in the image.
In this type of false colour composite images, vegetation appears in different shades of
red depending on the types and conditions of the vegetation, since it has a high
reflectance in the NIR band (as shown in the graph of spectral reflectance signature).
Clear water appears dark-bluish (higher green band reflectance), while turbid water
appears cyan (higher red reflectance due to sediments) compared to clear water. Bare
soils, roads and buildings may appear in various shades of blue, yellow or grey,
depending on their composition.

False colour composite multispectral SPOT image:


Red: XS3; Green: XS2; Blue: XS1
Another common false colour composite scheme for displaying an optical image with a
short-wave infrared (SWIR) band is shown below:
R = SWIR band (SPOT4 band 4, Landsat TM band 5)
G = NIR band (SPOT4 band 3, Landsat TM band 4)
B = Red band (SPOT4 band 2, Landsat TM band 3)
An example of this false colour composite display is shown below for a SPOT 4 image.
False colour composite of a
SPOT 4 multispectral image
including the SWIR band:
Red: SWIR band; Green: NIR
band; Blue: Red band. In this
display scheme, vegetation
appears in shades of green.
Bare soils and clearcut areas
appear purplish or magenta.
The patch of bright red area on
the left is the location of active
fires.
A smoke plume originating from
the active fire site appears faint
bluish in colour.
False colour composite of a
SPOT 4 multispectral image
without displaying the SWIR
band:
Red: NIR band; Green: Red
band; Blue: Green band.
Vegetation appears in shades of
red.
The smoke plume appears bright
bluish white.

Natural Colour Composite


For optical images lacking one or more of the three visual primary colour bands (i.e. red,
green and blue), the spectral bands (some of which may not be in the visible region) may
be combined in such a way that the appearance of the displayed image resembles a
visible colour photograph, i.e. vegetation in green, water in blue, soil in brown or grey, etc.
Many people refer to this composite as a "true colour" composite. However, this term is
misleading since in many instances the colours are only simulated to look similar to the
"true" colours of the targets. The term "natural colour" is preferred.
The SPOT HRV multispectral sensor does not have a blue band. The three bands, XS1,
XS2 and XS3 correspond to the green, red and NIR bands respectively. However, a reasonably
good natural colour composite can be produced by the following combination of the
spectral bands:
R = XS2
G = (3 XS1 + XS3)/4
B = (3 XS1 - XS3)/4
where R, G and B are the display colour channels.
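The combination above can be expressed directly in NumPy. The band values below are synthetic placeholders, and negative blue values are clipped to zero for display (an assumption for the sketch, since the B formula can go negative):

```python
# The SPOT natural-colour combination:
#   R = XS2, G = (3*XS1 + XS3)/4, B = (3*XS1 - XS3)/4
import numpy as np

def natural_colour(xs1, xs2, xs3):
    """Combine SPOT XS1 (green), XS2 (red), XS3 (NIR) into an RGB stack."""
    r = xs2
    g = (3 * xs1 + xs3) / 4
    b = np.clip((3 * xs1 - xs3) / 4, 0, None)  # clip negatives for display
    return np.dstack([r, g, b])

# Toy single-pixel bands (illustrative reflectances)
xs1 = np.array([[0.20]]); xs2 = np.array([[0.10]]); xs3 = np.array([[0.40]])
out = natural_colour(xs1, xs2, xs3)
print(out[0, 0])  # [R, G, B] = [0.10, 0.25, 0.05]
```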

Natural colour composite
multispectral SPOT image:
Red: XS2; Green: 0.75 XS1 + 0.25
XS3; Blue: 0.75 XS1 - 0.25 XS3

Vegetation Indices
Different bands of a multispectral image may be combined to accentuate the vegetated
areas. One such combination is the ratio of the near-infrared band to the red band. This
ratio is known as the Ratio Vegetation Index (RVI)
RVI = NIR/Red
Since vegetation has high NIR reflectance but low red reflectance, vegetated areas will
have higher RVI values compared to non-vegetated areas. Another commonly used
vegetation index is the Normalised Difference Vegetation Index (NDVI) computed by
NDVI = (NIR - Red)/(NIR + Red)
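Both indices can be computed directly from the band arrays; a minimal sketch with illustrative reflectance values and a guard against division by zero:

```python
# RVI = NIR / Red and NDVI = (NIR - Red) / (NIR + Red), as defined above.
import numpy as np

def rvi(nir, red):
    """Ratio Vegetation Index; zero denominators become NaN."""
    return nir / np.where(red == 0, np.nan, red)

def ndvi(nir, red):
    """Normalised Difference Vegetation Index; zero denominators become NaN."""
    denom = nir + red
    return (nir - red) / np.where(denom == 0, np.nan, denom)

# Illustrative pixels: a vegetated pixel and a water pixel
nir = np.array([0.45, 0.05], float)
red = np.array([0.05, 0.03], float)
print(ndvi(nir, red))  # [0.8, 0.25]: vegetation scores much higher
```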

Normalised Difference Vegetation Index (NDVI) derived from the above SPOT image
In the NDVI map shown above, the bright areas are vegetated while the non-vegetated
areas (buildings, clearings, river, sea) are generally dark. Note that the trees lining the
roads are clearly visible as grey linear features against the dark background.
The NDVI band may also be combined with other bands of the multispectral image to form
a colour composite image which helps to discriminate different types of vegetation. One
such example is shown below. In this image, the display colour assignment is:
R = XS3 (Near IR band)
G = (XS3 - XS2)/(XS3 + XS2) (NDVI band)
B = XS1 (green band)

NDVI Colour Composite of the SPOT image: Red: XS3; Green: NDVI; Blue: XS1.
Textural Information
Texture is an important aid in visual image interpretation, especially for high spatial
resolution imagery. An example is shown below. It is also possible to characterize the
textural features numerically, and algorithms for computer-aided automatic discrimination
of different textures in an image are available.

This is an IKONOS 1-m resolution pan-sharpened colour image of an oil palm
plantation. The image is 300 m across. Even
though the general colour is green throughout,
three distinct land cover types can be
identified from the image texture. The
triangular patch at the bottom left corner is the
oil palm plantation with matured palm trees.
Individual trees can be seen. The predominant
texture is the regular pattern formed by the
tree crowns. Near to the top of the image, the
trees are closer together, and the tree
canopies merge together, forming another
distinctive textural pattern. This area is
probably inhabited by shrubs or abandoned
trees with tall undergrowth in between
the trees. At the bottom right corner,
the colour is more homogeneous, indicating that it
is probably an open field with short grass.

Geometric and Contextual Information


Using geometric and contextual features for image interpretation requires some a priori
information about the area of interest. The "interpretational keys" commonly employed are:
shape, size, pattern, location, and association with other familiar features.
Contextual and geometric
information plays an
important role in the
interpretation of very high
resolution imagery.
Familiar features visible in
the image, such as the
buildings, roadside trees,
roads and vehicles, make
interpretation of the
image straightforward.
This is an IKONOS image
of a container port,
evidenced by the
presence of ships,
cranes, and regular rows
of rectangular containers.
The port is probably not
operating at its maximum
capacity, as empty
spaces can be seen in
between the containers.

This SPOT image shows


an oil palm plantation
adjacent to a logged over
forest in Riau, Sumatra.
The image area is 8.6 km
by 6.4 km. The
rectangular grid pattern
seen here is a main
characteristic of large
scale oil palm plantations
in this region.

This SPOT image shows


land clearing being
carried out in a logged
over forest. The dark red
regions are the remaining
forests. Tracks can be
seen intruding into the
forests, indicating some
logging activities in the
forests. The logging
tracks are also seen in
the cleared areas
(dark greenish areas). It
is obvious that the land
clearing activities are
carried out with the aid of
fires.
A smoke plume can be
seen emanating from a
site of active fires.
Microwave Remote Sensing

Electromagnetic radiation in the microwave wavelength region is used in remote


sensing to provide useful information about the Earth's atmosphere, land and ocean.
A microwave radiometer is a passive device which records the natural microwave
emission from the earth. It can be used to measure the total water content of the
atmosphere within its field of view.
A radar altimeter sends out pulses of microwave signals and record the signal scattered
back from the earth surface. The height of the surface can be measured from the time
delay of the return signals.
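The height computation rests on the round-trip travel time of the pulse: the one-way range is c·t/2. A minimal sketch (the example delay is chosen for illustration only):

```python
# Radar altimetry range from pulse delay: one-way distance = c * t / 2,
# since the pulse travels to the surface and back.
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(t_seconds):
    """Distance to the reflecting surface for a round-trip delay of t_seconds."""
    return C * t_seconds / 2

# An illustrative delay of ~5.55 ms corresponds to roughly 832 km
print(round(range_from_delay(5.55e-3) / 1000))  # ~832 km
```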
A wind scatterometer can be used to measure wind speed and direction over the ocean
surface. It sends out pulses of microwaves along several directions and records the
magnitude of the signals backscattered from the ocean surface. The magnitude of the
backscattered signal is related to the ocean surface roughness, which in turn depends
on the sea surface wind condition, and hence the wind speed and direction can
be derived. Imaging radars are flown on airborne and spaceborne platforms to generate
high resolution images of the earth surface using microwave energy.
Synthetic Aperture Radar (SAR)
In synthetic aperture radar (SAR) imaging, microwave pulses are transmitted by an
antenna towards the earth surface. The microwave energy scattered back to the
spacecraft is measured. The SAR makes use of the radar principle to form an image by
utilising the time delay of the backscattered signals.
A radar pulse is transmitted from the antenna to the ground. The radar pulse is
scattered by the ground targets back to the antenna.
In real aperture radar imaging, the ground resolution is limited by the size of the
microwave beam sent out from the antenna. Finer details on the ground can be resolved
by using a narrower beam. The beam width is inversely proportional to the size of the
antenna, i.e. the longer the antenna, the narrower the beam.

The microwave beam sent out by the


antenna illuminates an area on the
ground (known as the antenna's
"footprint"). In radar imaging, the recorded
signal strength depends on the
microwave energy backscattered from the
ground targets inside this footprint.
Increasing the length of the antenna will
decrease the width of the footprint.

It is not feasible for a spacecraft to carry a very long antenna which is required for high
resolution imaging of the earth surface. To overcome this limitation, SAR capitalises on
the motion of the spacecraft to emulate a large antenna (about 4 km for the ERS SAR)
from the small antenna (10 m on the ERS satellite) it actually carries on board.
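The need for a synthetic aperture follows from the real-aperture azimuth resolution, roughly λR/L (wavelength × range / antenna length). The numbers below are illustrative ERS-like values, assumed here for the sketch, not quoted from this document:

```python
# Real-aperture azimuth resolution: approximately lambda * R / L.
# A 10 m antenna at satellite altitudes gives only kilometre-scale resolution,
# which is why SAR synthesizes a much longer aperture from platform motion.
def azimuth_resolution(wavelength_m, slant_range_m, antenna_len_m):
    """Approximate azimuth resolution of a real-aperture radar."""
    return wavelength_m * slant_range_m / antenna_len_m

# Illustrative C-band values: ~5.7 cm wavelength, ~850 km slant range, 10 m antenna
res = azimuth_resolution(0.057, 850e3, 10.0)
print(round(res))  # ~4845 m -- of the same order as the ~4 km synthesized aperture
```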
Imaging geometry for a typical strip-mapping synthetic aperture radar imaging system.
The antenna's footprint sweeps out a strip parallel to the direction of the satellite's ground
track.
Interaction between Microwaves and Earth's Surface
When microwaves strike a surface, the proportion of energy scattered back to the sensor
depends on many factors:
 Physical factors such as the dielectric constant of the surface materials which also
depends strongly on the moisture content;
 Geometric factors such as surface roughness, slopes, orientation of the objects
relative to the radar beam direction;
 The types of landcover (soil, vegetation or man-made objects).
 Microwave frequency, polarisation and incident angle.
All-Weather Imaging
Due to the cloud-penetrating property of microwaves, SAR is able to acquire "cloud-
free" images in all weather conditions. This is especially useful in tropical regions, which
are frequently under cloud cover throughout the year. Being an active remote sensing
device, SAR is also capable of night-time operation.

SAR Imaging - Frequency, Polarisation and Incident Angle


Microwave Frequency
The ability of microwave to penetrate clouds, precipitation, or land surface cover depends
on its frequency. Generally, the penetration power increases for longer wavelength (lower
frequency).
The SAR backscattered intensity generally increases with the surface roughness.
However, "roughness" is a relative quantity. Whether a surface is considered rough or not
depends on the length scale of the measuring instrument. If a meter-rule is used to
measure surface roughness, then any surface fluctuation of the order of 1 cm or less will
be considered smooth. On the other hand, if a surface is examined under a microscope,
then a fluctuation of the order of a fraction of a millimetre is considered very rough. In SAR
imaging, the reference length scale for surface roughness is the wavelength of the
microwave. If the surface fluctuation is less than the microwave wavelength, then the
surface is considered smooth. For example, little radiation is backscattered from a surface
with a fluctuation of the order of 5 cm if an L-band (15 to 30 cm wavelength) SAR is used,
and the surface will appear dark. However, the same surface will appear bright due to
increased backscattering in an X-band (2.4 to 3.8 cm wavelength) SAR image.
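The rule of thumb stated above (a surface is smooth if its fluctuation is below the microwave wavelength) can be expressed directly; the 5 cm example values come from the text:

```python
# Smooth-vs-rough rule of thumb from the text: a surface is "smooth" for a
# radar if its height fluctuation is less than the radar wavelength.
def is_smooth(fluctuation_m, wavelength_m):
    """True if the surface appears smooth (dark) at this wavelength."""
    return fluctuation_m < wavelength_m

# A 5 cm fluctuation seen by L-band (~23 cm) and X-band (~3 cm) radars:
print(is_smooth(0.05, 0.23))  # True:  smooth to L-band -> appears dark
print(is_smooth(0.05, 0.03))  # False: rough to X-band  -> appears bright
```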

The land surface appears smooth to a long


wavelength radar. Little radiation is backscattered
from the surface.

The same land surface appears rough to a short


wavelength radar. The surface appears bright in the
radar image due to increased backscattering from
the surface.

Both the ERS and RADARSAT SARs use the C band microwave while the JERS SAR
uses the L band. The C band is useful for imaging ocean and ice features. However, it
also finds numerous land applications. The L band has a longer wavelength and is more
penetrating than the C band. Hence, it is more useful in forest and vegetation studies as it is
able to penetrate deeper into the vegetation canopy.

The short wavelength radar interacts


mainly with the top layer of the forest
canopy while the longer wavelength
radar is able to penetrate deeper into
the canopy to undergo multiple
scattering between the canopy,
trunks and soil.
Microwave Polarisation in Synthetic Aperture Radar
The microwave polarisation refers to the orientation of the electric field vector of the
transmitted beam with respect to the horizontal direction. If the electric field vector
oscillates along a direction parallel to the horizontal direction, the beam is said to be "H"
polarised. On the other hand, if the electric field vector oscillates along a direction
perpendicular to the horizontal direction, the beam is "V" polarised.

Microwave Polarisation: If the electric field


vector oscillates along the horizontal
direction, the wave is H polarised. If the
electric field vector oscillates perpendicular
to the horizontal direction, the wave is V
polarised.

After interacting with the earth surface, the polarisation state may be altered. So the
backscattered microwave energy usually has a mixture of the two polarisation states. The
SAR sensor may be designed to detect the H or the V component of the backscattered
radiation. Hence, there are four possible polarisation configurations for a SAR system:
"HH", "VV", "HV" and "VH" depending on the polarisation states of the transmitted and
received microwave signals. For example, the SAR onboard the ERS satellite transmits V
polarised and receives only the V polarised microwave pulses, so it is a "VV" polarised
SAR. In comparison, the SAR onboard the RADARSAT satellite is a "HH" polarised SAR.
Incident Angles
The incident angle refers to the angle between the incident radar beam and the direction
perpendicular to the ground surface. The interaction between microwaves and the surface
depends on the incident angle of the radar pulse on the surface. ERS SAR has a constant
incident angle of 23° at the scene centre. RADARSAT is the first spaceborne SAR that is
equipped with multiple beam modes enabling microwave imaging at different incident
angles and resolutions.
The incident angle of 23° for the ERS SAR is optimal for detecting ocean waves and other
ocean surface features. A larger incident angle may be more suitable for other
applications. For example, a large incident angle will increase the contrast between the
forested and clear cut areas.
Acquisition of SAR images of an area using two different incident angles will also enable
the construction of a stereo image for the area.
Interpreting SAR Images
SAR Images
Synthetic Aperture Radar(SAR) images can be obtained from satellites such
as ERS, JERS and RADARSAT. Since radar interacts with the ground features in ways
different from the optical radiation, special care has to be taken when interpreting radar
images.
An example of an ERS SAR image is shown below, together with
a SPOT multispectral natural colour composite image of the same area for
comparison.
ERS SAR image (pixel size = 12.5 m)
SPOT Multispectral image in Natural Colour (pixel size = 20 m)
The urban area on the left appears bright in the SAR image while the vegetated areas on
the right have intermediate tone. The clearings and water (sea and river) appear dark in
the image. These features will be explained in the following sections. The SAR image was
acquired in September 1995 while the SPOT image was acquired in February 1994.
Additional clearings can be seen in the SAR image.
Speckle Noise
Unlike optical images, radar images are formed by the coherent interaction of the transmitted
microwave with the targets. Hence, they suffer from the effects of speckle noise, which
arises from the coherent summation of the signals scattered from ground scatterers
distributed randomly within each pixel. A radar image appears noisier than an optical
image. The speckle noise is sometimes suppressed by applying a speckle removal
filter on the digital image before display and further analysis.
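One of the simplest speckle-removal filters is a median filter; a minimal sketch is shown below (operational SAR processing often uses adaptive filters such as Lee or Frost, which are not shown here):

```python
# Minimal speckle-reduction sketch: a 3x3 median filter. Each interior pixel is
# replaced by the median of its neighbourhood, suppressing isolated spikes.
import numpy as np

def median_filter3(img):
    """Apply a 3x3 median filter to the interior pixels of a 2-D image."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 200, 10],   # isolated speckle spike
                  [10, 10, 10]], float)
print(median_filter3(noisy)[1, 1])  # 10.0 -- the spike is suppressed
```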
This image is extracted from the above SAR
image, showing the clearing areas between the
river and the coastline. The image appears
"grainy" due to the presence of speckles.

This image shows the effect of applying a


speckle removal filter to the SAR image. The
vegetated areas and the clearings now appear
more homogeneous.

Backscattered Radar Intensity


A single radar image is usually displayed as a grey-scale image, such as the one shown
above. The intensity of each pixel represents the proportion of the microwave energy backscattered
from that area on the ground, which depends on a variety of factors: the types, sizes, shapes
and orientations of the scatterers in the target area; the moisture content of the target area;
the frequency and polarisation of the radar pulses; as well as the incidence angles of the radar
beam. The pixel intensity values are often converted to a physical quantity called
the backscattering coefficient, or normalised radar cross-section, measured in decibel
(dB) units, with values typically ranging from +5 dB for very bright objects down to -40 dB for very dark
surfaces.
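The conversion to dB and the mapping of the -40 dB to +5 dB range onto 8-bit grey levels for display can be sketched as follows. The calibration constant here is a placeholder, an assumption for illustration; the real factor comes from the sensor's product metadata:

```python
import numpy as np

def intensity_to_db(intensity, calib_const=1.0):
    """Convert calibrated linear intensity to the backscattering
    coefficient (sigma0) in dB. `calib_const` is a placeholder for the
    sensor-specific calibration factor."""
    sigma0 = np.asarray(intensity, dtype=float) / calib_const
    return 10.0 * np.log10(np.clip(sigma0, 1e-10, None))  # avoid log(0)

def db_to_grey(db, lo=-40.0, hi=5.0):
    """Map the -40 dB .. +5 dB display range linearly onto 0..255."""
    scaled = (np.asarray(db, dtype=float) - lo) / (hi - lo) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

With this scaling, a -40 dB surface (e.g. calm water) maps to black (0) and a +5 dB target (e.g. a ship) maps to white (255).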
Interpreting SAR Images
Interpreting a radar image is not a straightforward task. It very often requires some
familiarity with the ground conditions of the areas imaged. As a useful rule of thumb, the
higher the backscattered intensity, the rougher the surface being imaged.
Flat surfaces such as paved roads, runways or calm water normally appear as dark areas
in a radar image since most of the incident radar pulses are specularly reflected away.
Specular Reflection: A smooth surface acts like a mirror for the incident radar pulse. Most of the incident radar energy is reflected away according to the law of specular reflection, i.e. the angle of reflection is equal to the angle of incidence. Very little energy is scattered back to the radar sensor.

Diffuse Reflection: A rough surface reflects the incident radar pulse in all directions. Part of the radar energy is scattered back to the radar sensor. The amount of energy backscattered depends on the properties of the target on the ground.
Calm sea surfaces appear dark in SAR images. However, rough sea surfaces may appear
bright, especially when the incidence angle is small. The presence of oil films smooths
out the sea surface. Hence, when the surrounding sea surface is sufficiently rough, oil
films can be detected as dark patches against a bright background.
A ship (bright target near the bottom left corner) is seen discharging oil into the sea in this ERS SAR image.
Trees and other vegetation are usually moderately rough on the wavelength scale.
Hence, they appear as moderately bright features in the image. Tropical rain forests
have a characteristic backscatter coefficient of between -6 and -7 dB, which is spatially
homogeneous and remains stable in time. For this reason, tropical rainforests have
been used as calibration targets when performing radiometric calibration of SAR images.
Very bright targets may appear in the image due to the corner-reflector or double-
bounce effect, where the radar pulse bounces off the horizontal ground (or the sea)
towards the target and is then reflected from one vertical surface of the target back to the
sensor. Examples of such targets are ships on the sea, high-rise buildings and regular
metallic objects such as cargo containers. Built-up areas and many man-made features
usually appear as bright patches in a radar image due to the corner-reflector effect.
Corner Reflection: When two smooth surfaces form a right angle facing the radar beam, the beam bounces twice off the surfaces and most of the radar energy is reflected back to the radar sensor.
This SAR image shows an area of the sea near a busy port. Many ships can be seen as bright spots in this image due to corner reflection. The sea is calm, and hence the ships can be easily detected against the dark background.
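Because double-bounce returns from ships stand many decibels above the dark, calm sea, a very simple contrast test already separates the two. The sketch below takes the image median as the sea background and flags pixels well above it; the 15 dB margin is an illustrative assumption, not a value from the text (operational detectors use adaptive CFAR thresholds estimated from the local sea clutter):

```python
import numpy as np

def detect_bright_targets(db_img, contrast_db=15.0):
    """Return a boolean mask of candidate ship pixels.

    The background level is taken as the image median (assumed to be
    open sea); pixels more than `contrast_db` above it are flagged.
    """
    background_db = np.median(db_img)
    return db_img > background_db + contrast_db
```

On a calm-sea scene at around -20 dB, a +3 dB corner-reflector return from a ship is flagged immediately, while the sea itself stays below the threshold.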
The brightness of areas covered by bare soil may vary from very dark to very bright,
depending on the soil's roughness and moisture content. Typically, rough soil appears bright in
the image. For similar soil roughness, the surface with a higher moisture content will
appear brighter.
Dry Soil: Some of the incident radar energy is able to penetrate into the soil surface, resulting in less backscattered intensity.

Wet Soil: The large difference in electrical properties between water and air results in higher backscattered radar intensity.

Flooded Soil: Radar is specularly reflected off the water surface, resulting in low backscattered intensity. The flooded area appears dark in the SAR image.
Multitemporal SAR Images
If two or more radar images of the same area, acquired at different times, are available,
they can be combined to give a multitemporal colour composite image of the area. For
example, if three images are available, one image can be assigned to the Red, the
second to the Green and the third to the Blue colour channel for display. This technique
is especially useful for detecting landcover changes over the period of image acquisition.
Areas where no change in landcover occurs will appear in grey, while areas with
landcover changes will appear as colourful patches in the image.
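The channel assignment described above can be sketched as follows. The per-date 2-98% contrast stretch is an assumed display choice, not a step named in the text:

```python
import numpy as np

def multitemporal_composite(img_t1, img_t2, img_t3):
    """Stack three co-registered single-band SAR images (three dates)
    into an RGB composite. Unchanged landcover yields nearly equal
    R, G, B values (grey); changes show up as colourful patches."""
    def stretch(img):
        # Linear 2-98 percentile stretch of each date to 0..255
        lo, hi = np.percentile(img, (2, 98))
        return np.clip((img - lo) / (hi - lo + 1e-12) * 255, 0, 255).astype(np.uint8)
    return np.dstack([stretch(img_t1), stretch(img_t2), stretch(img_t3)])
```

If the three dates were identical, every pixel would have equal R, G and B values and the composite would be pure grey; only pixels whose backscatter changed between acquisitions take on colour.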

This image is an example of a multitemporal colour composite SAR image. The area
shown is part of the rice-growing areas in the Mekong River delta, Vietnam, near the
towns of Soc Trang and Phung Hiep. Three SAR images acquired by the ERS satellite
on 5 May, 9 June and 14 July 1996 are assigned to the red, green and blue
channels respectively for display. The colourful areas are the rice-growing areas, where
the landcover changes rapidly during the rice season. The greyish linear features are the
more permanent trees lining the canals. The grey patch near the bottom of the image is
wetland forest. The two towns appear as bright white spots in this image. An area of
depression flooded with water during this season is visible as a dark region.