
GIS 505/DGIS 505

M.A. /M.Sc. Geo-informatics/DGIS

ADVANCE REMOTE SENSING

DEPARTMENT OF REMOTE SENSING AND GIS


SCHOOL OF EARTH AND ENVIRONMENT SCIENCE
UTTARAKHAND OPEN UNIVERSITY
HALDWANI (NAINITAL)

ADVANCE REMOTE SENSING

DEPARTMENT OF REMOTE SENSING AND GIS


SCHOOL OF EARTH AND ENVIRONMENT SCIENCE
UTTARAKHAND OPEN UNIVERSITY

Phone No. 05946-261122, 261123


Toll free No. 18001804025
Fax No. 05946-264232, E-mail: info@uou.ac.in
Website: https://uou.ac.in

Board of Studies

Chairman: Vice Chancellor, Uttarakhand Open University, Haldwani

Convener: Professor P.D. Pant, School of Earth and Environment Science, Uttarakhand Open University, Haldwani

Professor R.K. Pande, Dean Arts, DSB Campus, Kumaun University, Nainital

Professor D.D. Chauniyal, Retd. Professor, Garhwal University, Srinagar

Professor Pradeep Goswami, Department of Geology, DSB Campus, Kumaun University, Nainital

Dr. Suneet Naithani, Associate Professor, Department of Environmental Science, Doon University, Dehradun

Dr. Ranju Joshi Pandey, Department of Geography & NRM, Department of Remote Sensing and GIS, School of Earth and Environmental Science, Uttarakhand Open University, Haldwani

Programme Coordinator
Dr. Ranju J. Pandey
Department of Geography & NRM
Department of Remote Sensing and GIS
School of Earth and Environment Science
Uttarakhand Open University, Haldwani

Units Written By:

1. Dr. D.N. Pant, Ex-Scientist, IIRS, Dehradun (Units 1, 2, 3, 12, 13, 14 & 15)
2. Dr. Manish Kumar, Assistant Professor, Department of Geography, School of Basic Sciences, Central University of Haryana & Sanjit Kumar (Unit 7)
3. Dr. Nitin Chauhan, Assistant Scientist (Landuse), Haryana Space Application Center, Haryana (Units 8, 9, 10 & 11)
4. Dr. Himanshu Govil, Assistant Professor, Department of Applied Geology, NIT Raipur (Units 4, 5 & 6)

Course Editor
Dr. Ranju J. Pandey
Department of Geography & NRM
Department of Remote Sensing and GIS
School of Earth and Environment Science
Uttarakhand Open University, Haldwani

Title : Advance Remote Sensing


ISBN No. :
Copyright : Uttarakhand Open University
Edition : First (2021) Second (2022)

Published By: Uttarakhand Open University, Haldwani, Nainital-263139


Printed By:

CONTENTS

BLOCK 1: OVERVIEW OF SATELLITE IMAGES


UNIT 1: Characteristics of Images Obtained From Different Sensors 01-23
UNIT 2: Geometric, Radiometric and Atmospheric Corrections 24-41
UNIT 3: Thermal Infra-Red Images 42-65

BLOCK 2: HYPER SPECTRAL


UNIT 4: Introduction to Hyper Spectral Remote Sensing 66-84
UNIT 5: Characteristics of Hyper Spectral Data, Spectral Image Library 85-106
UNIT 6: Hyper Spectral Data Interpretation 107-129

BLOCK 3: MICROWAVE
UNIT 7: Concept, Definition, Microwave Frequency Ranges and Factors Affecting
Microwave Measurements 130-143
UNIT 8: Radar Principles, Radar Wavebands, Side Looking Airborne Radar (SLAR) Systems
& Synthetic Aperture Radar (SAR), Real Aperture Radar (RAR) 144-163
UNIT 9: Interaction between Microwaves and Earth’s Surface 164-178
UNIT 10: Geometrical Characteristics of Microwave Image 179-195
UNIT 11: Interpreting SAR Images 196-209

BLOCK 4: Digital Image Processing


UNIT 12: Introduction to Digital Image Processing 210-227
UNIT 13: Preprocessing, Image Registration & Image Enhancement Techniques 228-244
UNIT 14: Spatial Filtering Techniques & Image Transformation 245-263
UNIT 15: Image Classification 264-281

BLOCK 1: OVERVIEW OF SATELLITE IMAGES


UNIT 1 - CHARACTERISTICS OF IMAGES OBTAINED FROM
DIFFERENT SENSORS

1.1 OBJECTIVES
1.2 INTRODUCTION
1.3 CHARACTERISTICS OF IMAGES OBTAINED FROM
DIFFERENT SENSORS
1.4 SUMMARY
1.5 GLOSSARY
1.6 ANSWER TO CHECK YOUR PROGRESS
1.7 REFERENCES
1.8 TERMINAL QUESTIONS


1.1 OBJECTIVES
After reading this unit you will be able to understand:
 Indian Earth Observation Programme.
 Characteristics of Images of Indian Satellites.

1.2 INTRODUCTION
You have already learnt about the concept, need, importance and principles of satellite remote sensing, the role of electromagnetic radiation properties, and the sensor characteristics that govern the acquisition of satellite images for particular user objectives. Before taking up the topic of this unit, it is necessary to introduce the platforms and sensor types whose image characteristics will be described. For this, we will consider the Indian remote sensing satellites and sensors, along with a few important foreign satellites and sensors.

The Indian Remote Sensing (IRS) program started in the mid 1980s. Eventually, a continuous
supply of synoptic, repetitive, multispectral data of the Earth's land surfaces was obtained
(similar to the US Landsat program). In 1995, IRS imagery was made available to a larger
international community on a commercial basis. The initial program of Earth-surface imaging
was extended by the addition of sensors for complementary environmental applications. This
started with the IRS-P3 satellite which is flying MOS (Multispectral Optoelectronic Scanner) for
the measurement of ocean color. The IRS-P4 mission is dedicated to ocean monitoring.

The availability of Landsat imagery created a lot of interest in the science community. The
Hyderabad ground station started receiving Landsat data on a regular basis in 1978. The Landsat
program with its design and potentials was certainly a great model and yardstick for the IRS
program.

• The first generation satellites IRS-1A and 1B were designed, developed and launched
successfully during 1988 and 1991 with multispectral cameras with spatial resolutions of 72.5 m
and 36 m, respectively. These early satellites were launched by Russian Vostok boosters from
the Baikonur Cosmodrome.

• Subsequently, the second generation remote sensing satellites IRS-1C and -1D with improved
spatial resolutions have been developed and successfully launched in 1995 and 1997,
respectively. IRS-1C/1D data has been used for cartographic and town planning applications.

Starting with IRS-1A, ISRO has launched many operational remote sensing satellites. Today India has one of the largest constellations of remote sensing satellites in operation. Currently, thirteen operational satellites are in Sun-synchronous orbit and four in geostationary orbit. These include RESOURCESAT-1, 2 and 2A, CARTOSAT-1, 2, 2A and 2B, RISAT-1 and 2, OCEANSAT-2 and 3, Megha-Tropiques and SARAL. A variety of instruments have been flown onboard these satellites to provide data at diverse spatial, spectral, radiometric and temporal resolutions, catering to different user requirements in the country and globally. The data from these satellites are used for a wide variety of applications.


In addition to its own satellite remote sensing data, some of the premier organizations/institutes of India have used data from foreign satellites, namely the LANDSAT series, ERS, NOAA, TERRA and AQUA, SPOT, RADARSAT, IKONOS, QuickBird, etc.

1.3 CHARACTERISTICS OF IMAGES OBTAINED FROM


DIFFERENT SENSORS
INDIAN EARTH OBSERVATION PROGRAMME:
Over a period of time the Indian Earth Observation (EO) programme has evolved into a fully
operational system providing rich data services to the user community. These satellite data are
acquired, processed and disseminated by the National Remote Sensing Centre (NRSC), ISRO,
as standard and value-added products to Indian and international users. The currently available
satellites provide data sets for obtaining thematic details of land at local to global levels (Table 1.1).

Table 1.1: Indian Remote Sensing Satellite (IRS) Specifications (ISRO polar-orbiting missions)

IRS-1A (launched 1988 on Vostok from Baikonur, Kazakhstan; data available up to 1996)
Sensors: LISS-I and LISS-II A/B (3 cameras)
Spectral bands: 0.45-0.52, 0.52-0.59, 0.62-0.68, 0.77-0.86 µm
Spatial resolution: 72.5 m (LISS-I), 36 m (LISS-II)
Swath width: 148 km (LISS-I); 74 km x 2 for LISS-II A/B (combined swath of 148 km)
Repeat cycle: 22 days

IRS-1B (launched 1991, same launch vehicle as IRS-1A; data up to 2003)
Sensors: LISS-I and LISS-II, same specifications as for IRS-1A; repeat cycle 22 days

IRS-P2 (launched 1994 on PSLV-G (1); data up to 1997)
Sensor: LISS-II M
Spectral bands: 0.45-0.52, 0.52-0.59, 0.62-0.68, 0.77-0.86 µm
Spatial resolution: 32 m x 37 m
Swath width: 66 km x 2 (131 km combined swath); repeat cycle 24 days

IRS-1C (launched 1995 on Molniya from Baikonur, Kazakhstan; data up to 2007)
LISS-III: 0.52-0.59, 0.62-0.68, 0.77-0.86 µm at 23.5 m, 142 km swath; 1.55-1.70 µm at 70 m, 148 km swath; repeat cycle 24 days
PAN: 0.50-0.75 µm, 5.8 m, 70 km swath, repeat cycle 24 days (5 days with tilting)
WiFS: 0.62-0.68, 0.77-0.86 µm, 188 m, 804 km swath, repeat cycle 5 days

IRS-P3 (launched 1996 on PSLV; data up to 2004)
WiFS: 0.62-0.68, 0.77-0.86, 1.55-1.70 µm, 188 m, 804 km swath, repeat cycle 5 days
MOS-A: 0.75-0.77 µm, 1500 m, 195 km swath (ocean surface)
MOS-B: 0.41-1.01 µm, 520 m, 200 km swath (ocean surface)
MOS-C: 1.595-1.605 µm, 550 m, 192 km swath (ocean surface)
IXAE: Indian X-ray Astronomy Experiment

IRS-1D (launched 1997 on PSLV-C1; data up to 2010)
Satellite and instruments identical to those of IRS-1C

IRS-P4 / OceanSat-1 (launched 1999 on PSLV-C2; data up to 2010)
OCM: 0.4-0.9 µm, 360 m x 236 m, 1420 km swath, repeat cycle 2 days
MSMR: 6.6, 10.65, 18 and 21 GHz frequencies; resolutions 105 x 68, 66 x 43, 40 x 26, 34 x 22 km (for the frequency sequence); 1360 km swath; repeat cycle 2 days

IRS-P6 / ResourceSat-1 (launched 2003 on PSLV; data up to 2014)
LISS-IV: 0.52-0.59, 0.62-0.68, 0.77-0.86 µm, 5.8 m, 70 km swath, repeat cycle 24 days (5 days with tilting)
LISS-III*: 0.52-0.59, 0.62-0.68, 0.77-0.86, 1.55-1.70 µm, 23.5 m, 140 km swath, repeat cycle 24 days
AWiFS: 0.62-0.68, 0.77-0.86, 1.55-1.70 µm, 70 m, 740 km swath, repeat cycle 5 days

IRS-P5 / CartoSat-1, two-line stereo camera (launched 2005 on PSLV-C6; data up to 2012)
PAN-F: 0.50-0.75 µm, 2.5 m, 30 km swath, repeat cycle 5 days
PAN-A: 0.50-0.75 µm, 2.5 m, 30 km swath

CartoSat-2 (launched 2007 on PSLV-G; data up to 2018)
PAN camera: 0.50-0.85 µm, better than 1 m resolution, 9.6 km swath, repeat cycle 5 days

OceanSat-2 (launched 2009 on PSLV-C14; data up to 2014)
OCM: 0.40-0.90 µm, 8 bands, 360 m x 236 m, 1420 km swath, repeat cycle 2 days
SCAT: 13.515 GHz, 25 km x 25 km cells, 1400 km swath
ROSA: GPS occultation

RISAT (2011)
SAR instrument: 5.350 GHz (C-band), resolution < 2 m to 50 m, swath 100-600 km

Megha-Tropiques (ISRO/CNES, 2011)
MADRAS: 5-channel radiometer, 40 km x 60 km resolution, 1700 km swath, repeat cycle 2 days
SAPHIR: atmospheric sounder
ScaRaB: radiation budget
GPS-ROS: occultation

SARAL (ISRO/CNES, 2011)
AltiKa: 35.75 GHz Ka-band altimeter
DORIS: spacecraft tracking for POD services
Argos-3: data collection system
LRA: Laser Retroreflector Array

Scope of Indian Remote Sensing Satellite Programme:


The Indian Remote Sensing satellite programme aimed at a semi-operational/operational satellite-based remote sensing system that would serve as the bedrock of the NNRMS programme by directly contributing to the data and information needs of natural resources management in the areas of agriculture, forestry, geology and hydrology, and would thus supplement and strengthen the existing conventional methods towards an optimally efficient resources management system for the country. The principal components of the IRS system were envisaged as the indigenous realisation of (i) a state-of-the-art three-axis stabilised satellite in polar sun-synchronous orbit with suitable multispectral imaging sensors and other electrical and mechanical systems; (ii) a ground segment for in-orbit satellite control, including a tracking network around the world with associated hardware and software; and (iii) ground systems for data reception, processing and dissemination, with associated hardware and software elements for the generation of products and services meeting user demands. The mission objectives were accordingly drawn up.

Highlights:

India’s Earth Observation Heritage

a) IRS Satellite Missions


– Resourcesat-1 (IRS-P6)
• Multispectral broad area coverage
– Cartosat-1 (IRS-P5)
• Real-time Stereo mapping
– Cartosat-2 (IRS-P7)
• High-Resolution imaging at 0.81m
-Resourcesat-2

b) Cartosat Series:
Increased resolution and more spectral bands:
• PAN at 0.5m resolution
• MSI at 2-4m, 4 bands
• HSI at 8m, ~200 bands – Swath at 8-10km

c) RISAT
– First IRS SAR system
– C-Band SAR
– 10km swath in Spot mode, 240km swath in Scan mode
– Resolution at 1m to 50m
– Single/Dual polarization


d) ISRO and Antrix are dedicated to providing IRS data


 Data continuity is assured
 Resourcesat-2 assures data continuity and improved collection rates while R-1 remains
operational
 Cartosat-1 provides high-resolution stereo data in real time
 Competitors do not downlink their stereo data to any ground stations
 Economically provides millions of km2 of data per day

CHARACTERISTICS OF IMAGES OF INDIAN SATELLITES:

IRS 1A and 1B:


IRS 1A and 1B (Indian Remote Sensing Satellites) were the first of the series of indigenous state-of-the-art remote sensing satellites. IRS-1A was successfully launched into a polar sun-synchronous
orbit on 17 March 1988 from the Soviet Cosmodrome at Baikonur on a Vostok-2M booster. IRS-
1A carried two cameras, LISS-I and LISS-II with resolutions of 73 meters and 36.25 meters
respectively with a swath width of about 140 km during each pass over the country. Its mission
was completed during July 1996 after serving for 8 years and 4 months. IRS-1B was launched
on 29 August 1991 also on a Vostok-2M booster. It had some improved features compared to its
predecessor; gyro referencing for better orientation sensing, time tagged commanding facility for
more flexibility in camera operation and line count information for better data product
generation. The mission of IRS-1B ended on 20 December 2003 after serving for 12 years and
4 months. An engineering model was later modified into the IRS 1E satellite.

Sensor Characteristics of IRS-1A and 1-B:


The LISS-I and LISS-II (Linear Imaging Self-Scanning) sensors consist of multispectral camera assemblies, each with a different resolution, together providing a swath of about 150 km. Each LISS
camera consists of the collecting optics, imaging detectors, in-flight calibration system, the
processing electronics, and data formatting electronics.

LISS-I employs four 2048-element linear CCD detector arrays with spectral filters (Fairchild
CCD 143A). All cameras use refractive type collecting optics with spectral selection by
appropriate filters. The refractive optics was chosen to obtain a large FOV (Field of View). A
lens assembly for each spectral band is used for better performance and effective utilization of
the full dynamic range of the CCDs. Two LEDs (Light Emitting Diodes) per band are provided
for inflight calibration.

A LISS-I scene is 148 km x 174 km. The LISS-II A/B assembly features eight 2048-element
linear CCD detector arrays with spectral filters (2 parallel swaths of 74 km each for the LISS-II
A/B assembly with 3 km overlap, the total swath is 145 km). Four LISS-II scenes cover the area
of one LISS-I scene.

IRS-1C/1D:
IRS-1C is an ISRO-built second generation remote sensing satellite with enhanced capabilities in
terms of spatial resolution and spectral bands.


Sensor complement: (PAN, LISS-3, WiFS):


The three sensors are cameras operating in the pushbroom scanning mode using solid state
charge-coupled device (CCD) detectors.

PAN:
PAN is a pushbroom imager using an all-reflective (off-axis f/4.5) folded mirror telescope (focal
length = 982 mm) along with three separately mounted 4096-element CCD arrays, adding up to
12,288 pixels in the cross-track direction (there is some overlapping of the three subscenes of the
image requiring special processing). Each detector array has separate interference filters and four
LEDs along with a cylindrical lens. Two LEDs are for optical biasing and two are for inflight
calibration of the sensor (calibration of CCDs excluding optics). A calibration cycle comprises
2048 lines (1.8 s for calibration cycle).

The PAN instrument is placed on a platform with a Payload Steering Mechanism (PSM)
enabling the camera to tilt in the cross-track direction with a pointing capability of ±26º,
providing a FOR (Field of Regard) coverage of ±398 km. This provides a revisit capability of
5 days for a given target area.
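The quoted field-of-regard figure follows directly from the tilt angle and the orbital altitude. A quick flat-Earth check (Python; the IRS-1C altitude of roughly 817 km is an assumed value used only for illustration):

    import math

    altitude_km = 817.0   # approximate IRS-1C orbital altitude (assumed)
    tilt_deg = 26.0       # maximum cross-track tilt of the PAN steering mechanism

    # Flat-Earth approximation: cross-track reach = altitude * tan(tilt)
    reach_km = altitude_km * math.tan(math.radians(tilt_deg))
    print(f"One-sided field of regard ~ {reach_km:.0f} km")   # ~398 km, i.e. FOR of about +/-398 km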

Calibration of the PAN camera:


The inflight calibration of the camera is being carried out using LEDs. LEDs have the advantage
of low power consumption, low thermal dissipation and fast response time. The scheme
envisages the calibration of the CCDs excluding the optics. The LEDs are operated in pulsed mode at higher currents, resulting in higher intensities. A calibration cycle comprises 2048 lines; the time taken for one calibration cycle of the PAN camera is approximately 1.8 s. The duration for which the LEDs are 'ON' is varied in specific steps.
The CCD detector integrates the light falling on it during one readout period. Six non-zero
exposure levels spanning the full dynamic range are provided for each detector.
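The calibration-cycle timing also fixes the line rate of the camera. A rough consistency check (Python; the sub-satellite ground speed of about 6.6 km/s is an assumed, typical value for this kind of orbit, not a quoted specification):

    # Line time from the calibration cycle: 2048 lines in ~1.8 s
    line_time_s = 1.8 / 2048                  # ~0.88 ms per line

    ground_speed_m_s = 6600.0                 # approximate sub-satellite ground speed (assumed)
    along_track_sample_m = line_time_s * ground_speed_m_s
    print(f"Line time ~ {line_time_s*1e3:.2f} ms, along-track sample ~ {along_track_sample_m:.1f} m")
    # ~0.88 ms and ~5.8 m, consistent with the 5.8 m PAN resolution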

LISS-3:
Continuous service of multispectral imagery. Application: Land and water resources
management (Figure 1.1). The pushbroom camera uses refractive optics in four spectral bands
(separate optics and detector array for each band). The collecting optics consists of eight
refractive lens elements with interference filter in front. A linear CCD array of 6000 silicon-
based elements is used for each VNIR band. The SWIR- band device has a 2100 element InGaAs
linear array (temperature controlled at -10ºC with a passive radiative cooler and an on-off heater
control). The SWIR device itself consists of a lattice mismatched heterojunction photodiode
array for the detection of SWIR radiation and silicon-based CCD multiplexers for signal readout
[seven identical modules are butted together to form a linear array of 2100 elements; each
module consists of a two-sided InGaAs die of 300 photodiodes and two 150-element silicon-
based CCD arrays on either side of the photodiode die to multiplex the signal from the
photodiodes]. Instrument mass = 171 kg, power = 74-78 W.

The image characteristics of a PAN+LISS-III FCC are shown in Figure 1.2. The image highlights
the water bodies, agricultural fields, water channels, plantation etc.


Calibration:
Inflight calibration of the LISS-3 camera is realized with LEDs (1.55 µm) as illuminating source
and operated in pulsed mode to generate six intensity levels. A LED is mounted on either side of
the photodiode array. Each LED is followed by diverging optics. A calibration cycle comprises
2048 scan lines (7.3 s for one cycle) and includes six intensity levels. Calibration is normally
performed during the eclipse period of the satellite pass.

WiFS (Wide Field Sensor):


The WiFS camera is similar to LISS-1. Application: Vegetation index mapping. The camera
provides two spectral bands in the VNIR range. The total swath is formed by using two optical
heads (i.e., two lenses and two CCDs) per band. The WiFS camera employs refractive collecting
optics consisting of eight refractive lens elements with interference filter and neutral density
(ND) filter in the front-end assembly. Two linear CCD arrays of 2048 pixels each (pixel size 13 µm) are used. FOV = ±26º. The two lenses are mounted with their optical axes canted at 13º on either side of nadir. Instrument mass = 41 kg, power = 22 W.
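Since WiFS provides just a red and a near-infrared band, vegetation index mapping in practice usually means computing NDVI from those two bands. A minimal sketch (Python with NumPy; the input arrays are assumed to be co-registered band values):

    import numpy as np

    def ndvi(red, nir):
        """Normalised Difference Vegetation Index from red and NIR band arrays."""
        red = red.astype(np.float64)
        nir = nir.astype(np.float64)
        denom = nir + red
        # Guard against division by zero over very dark pixels (e.g. deep shadow)
        return np.where(denom == 0, 0.0, (nir - red) / denom)

    # Tiny dummy "image": high NDVI values indicate dense green vegetation
    red = np.array([[50, 80], [40, 90]])
    nir = np.array([[150, 90], [160, 95]])
    print(ndvi(red, nir))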

Figure 1.1: IRS-1C, Natural Colour Composite, Nainital and surrounding

Figure 1.2: Ankleshwar, India (IRS 1C PAN+LISS-III) (Source: www.nrsa.gov.in)


Figure 1.3: Cartosat-2 sample image of downtown Denver, CO (image credit: ISRO)

IRS-P4 (OCEANSAT):

IRS-P4 is the first satellite primarily built for Ocean applications, weighing 1050 kg placed in
a Polar Sun Synchronous orbit of 720 km, launched by PSLV-C2 from SHAR Centre,
Sriharikota on May 26, 1999. This satellite carries Ocean Colour Monitor (OCM) and a Multi
- frequency Scanning Microwave Radiometer (MSMR) for oceanographic studies. IRS-P4
thus vastly augments the IRS satellite system of ISRO, then comprising four satellites (IRS-1B,
IRS-1C, IRS-P3 and IRS-1D), and extends remote sensing applications to several newer areas.
Oceansat-1 was launched by ISRO's PSLV-C2 along with the German DLR-Tubsat and the
South Korean KITSAT-3 on 26 May 1999 from the First Launch Pad of Satish Dhawan Space
Centre in Sriharikota, India. It was the third successful launch of the PSLV.

Payloads:
Oceansat-1 carried two payloads. The first of these, the Ocean Colour Monitor (OCM), is a
solid-state camera designed primarily to monitor the colour of the ocean, and is thereby
useful for documenting chlorophyll concentration, phytoplankton blooms, atmospheric
aerosols and particulate matter. It is capable of detecting eight spectral bands ranging from
400 nm to 885 nm, all in the visible or near-infrared region. The second, the Multi-frequency
Scanning Microwave Radiometer (MSMR), collects data by measuring microwave radiation
passing through the atmosphere over the ocean. This offers information including sea surface
temperature, wind speed, cloud water content, and water vapour content.
ResourceSat-1 — formerly IRS-P6 :
IRS-P6 is an Earth observation mission within the IRS series of ISRO, Bangalore, India. The
overall objectives of the IRS-P6 mission are to provide continued remote sensing data services
on an operational basis for integrated land and water resources management. IRS-P6 is the
continuation of the IRS-1C/1D missions with considerably enhanced capabilities.
Prior to launch, ISRO renamed the IRS-P6 spacecraft to ResourceSat-1, to describe more
aptly the application spectrum of its observation data.
A launch of the IRS-P6 satellite took place Oct. 17, 2003 on a PSLV launcher from
SHAR (Satish Dhawan Space Centre, Sriharikota), India. ResourceSat-1 orbit is Sun-
synchronous, altitude = 817 km, inclination = 98.69º, period = 101.35 min, with an equator
crossing at 10:30 hours LTDN (Local Time on Descending Node). The ground track is
maintained within ± 1 km.
The achieved IRS-P6 injected orbit was estimated as 815.417 km x 831.668 km with
an inclination of 98.805º. The target orbit was achieved by performing three in-plane and five
combined (in-plane and out-of-plane) maneuvers. A total of eight orbit acquisition maneuvers,
starting from Oct. 20 to Nov. 29, 2003 were performed for obtaining "locked-path" and
"frozen-perigee" orbits. Path locking was done on Nov. 29, 2003.

Sensor complement: (LISS-4, LISS-3, AWiFS):


ResourceSat-1 carries the following three instruments (Table 1.1):
• A high-resolution linear imaging self-scanner (LISS-IV)
• A medium-resolution linear imaging self-scanner (LISS-III)
• AWiFS (Advanced Wide Field Sensor).
All three cameras are pushbroom scanners using linear arrays of CCDs (Charge Coupled
Devices).

Figure 1.4: LISS-3 image, Ahmadabad, acquired on Nov. 10, 2011 (image credit: ISRO)


Figure 1.5: First day AwiFS Image of Resourcesat-1, Lake Manasarovar (right),
located in the Tibet Autonomous Region of China (image credit: ISRO)

LISS-4 :
The LISS-4 multispectral high-resolution camera is the prime instrument of this sensor
complement. LISS-4 is a three-band pushbroom camera of LISS-3 heritage (same spectral
VNIR bands as LISS-3) with a spatial resolution of 5.8 m and a swath of 70 km. LISS-4 can
be operated in either of two support modes:

• Multispectral (MS) mode: Data is collected in 3 bands corresponding to pre-selected 4096


contiguous pixels with a swath width of 23.9 km (selectable out of 70 km total swath). The 4
k detector strip can be selected anywhere within the 12 k pixels by commanding the start pixel
number using the electronic scanning scheme.
• Mono mode: Data of the full 12 k pixels of any one selected band, corresponding to a
swath of 70 km, can be transmitted. Nominally, band-3 (B3) data are observed and
transmitted in this mode. (The swath figures of both modes follow from the pixel counts and
the ground sample distance, as checked in the sketch below.)
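A quick arithmetic check of those swath figures (Python; it assumes the nominal 5.8 m ground sample distance applies uniformly across the full array):

    # Swath width = number of detector elements * ground sample distance
    gsd_m = 5.8            # LISS-4 ground sample distance
    full_pixels = 12288    # full detector array (Mono mode)
    ms_pixels = 4096       # selectable strip (Multispectral mode)

    print(f"Mono-mode swath ~ {full_pixels * gsd_m / 1000:.1f} km")  # ~71 km (quoted as 70 km)
    print(f"MS-mode swath   ~ {ms_pixels * gsd_m / 1000:.1f} km")    # ~23.8 km (quoted as 23.9 km)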

LISS-4 features in addition a ±26º steering capability in the cross-track direction permitting a
5-day revisit cycle. The optoelectronic module of LISS-4 is identical to that of the PAN
camera of IRS-1C/1D. The CCD array features 12,288 elements for each band. The
instrument has a mass of 169.5 kg, power of 216 W, and a data rate of 105 Mbit/s. The
detector temperature control is implemented using a radiator plate coupled to each band CCD
through heat pipes and copper braid strips.

The LISS-4 camera is realized using the three mirror reflective telescope optics (same as that
of the PAN camera of IRS-1C/1D) and 12,288 pixels linear array CCDs with each pixel of the
size 7 µm x 7 µm. Three such CCDs are placed in the focal plane of the telescope along with

UNIT 1 - CHARACTERISTICS OF IMAGES OBTAINED FROM DIFFERENT SENSORS Page 11 of 281


GIS-505/DGIS-505 Advance Remote Sensing Uttarakhand Open University

their individual spectral bandpass filters. An optical arrangement comprising an isosceles


prism is employed to split the beam into three imaging fields which are separated in the long
track direction. The projection of this separation on ground translates into a distance of 14.2
km between the B2 and B4 image lines. While B3 is looking at nadir, B2 is looking ahead and
B4 is looking behind in the direction of velocity vector. Detector type: THX31543A of
Thomson.
LISS-4 calibration: An in-flight calibration scheme is implemented using LEDs (Light
Emitting Diodes). Eight LEDs are positioned in front of the CCD (without obstructing the light
path during imaging). These LEDs are driven with a constant current and the integration time
is varied to get 16 exposure levels, covering the dynamic range in a sequential manner. This
sequence repeats in a cyclic form.

LISS-3 :
LISS-3 is a medium-resolution multispectral camera. The pushbroom instrument is identical
to LISS-3 on IRS-1C/1D (with regard to lens modules, detectors, and electronics) in the three
VNIR bands, each with a spatial resolution of 23.5 m. The resolution of the SWIR band is
now also of 23.5 m on a swath of 140 km. The optics design and the detector of the SWIR
band are modified to suit the required resolution; B5 uses a 6,000 element Indium Gallium
Arsenide CCD with a pixel size of 13 µm. The SWIR CCD is a new device employing a
CMOS readout technique for each pixel, thereby improving noise performance. The VNIR
CCD array features 6,000 elements for each band. The instrument has a mass of 106.1 kg, a
power consumption of 70 W, and a data rate of 52.5 Mbit/s.
The in-flight calibration of the LISS-3 camera is carried out using 4 LEDs per CCD in the
VNIR bands and 6 LEDs for the SWIR band. These LEDs are operated in pulsed mode and
the pulse duration during which these LEDs are ON is varied in specific steps. Each LED has
a cylindrical lens to distribute the light intensity onto the CCD. Each calibration cycle consists
of 2048 lines providing six non-zero intensity levels.

AWiFS (Advanced Wide Field Sensor):


AWiFS is a wide-angle medium resolution (56 m) camera with a swath of 740 km
(FOV=±25º) of WiFS heritage. The pushbroom instrument operates in three spectral bands
which are identical to two VNIR bands (0.62 - 0.68 µm, 0.77 - 0.86 µm) and the SWIR band
(1.55-1.70 µm) of the LISS-3 camera. The AWiFS camera is realized using two separate
optoelectronic modules which are tilted by 11.94º with respect to nadir. Each module covers a
swath of 370 km providing a combined swath of 740 km with a side lap between them. The
wide swath coverage enables AWiFS to provide a five-day repeat capability. The
optoelectronic modules contain refractive imaging optics along with band pass interference
filter, a neutral density filter and a 6000 pixels linear array CCD detector for each spectral
band.
The in-flight calibration is implemented using 6 LEDs in front of each CCD. For the VNIR
bands (B2, B3, B4), the calibration is a progressively increasing sequence of 16 intensity
levels through exposure control. For the SWIR band, the calibration sequence is similar to that
of LISS-3 through a repetitive cycle of 2048 scan lines.

CartoSat-1 - formerly IRS-P5 (Indian Remote Sensing Satellite-P5):


IRS-P5 is a spacecraft of ISRO. The objectives of the IRS-P5 mission are directed at geo-engineering (mapping) applications, calling for high-resolution panchromatic imagery with high pointing accuracies. The spacecraft features include two high-resolution panchromatic cameras that may be used for in-flight stereo imaging.

Prior to launch, ISRO renamed the IRS-P5 spacecraft to CartoSat-1, to describe more aptly
the application spectrum of its observation data. In this mission, the high resolution of the data
(2.5 m GSD) is being traded at the expense of multispectral capability and smaller area
coverage, with a swath width of 30 km. The data products are intended to be used in DTM
(Digital Terrain Model)/DEM (Digital Elevation Model) generation in such applications as
cadastral mapping and updating, land use as well as other GIS applications.

The CartoSat-1 spacecraft was launched on May 5, 2005 through a PSLV C6 launch vehicle
of ISRO from the SDSC (Satish Dhawan Space Centre) Sriharikota launch site on the sea
coast of India.

A secondary payload on this flight was Hamsat (VUSat) of AmSat India with a launch mass
of 43.5 kg. Hamsat carries two transponders in UHF band to provide spaceborne radio
amateur services to India and the international Ham radio community.

Orbit:
Sun-synchronous circular orbit, altitude = 618 km, inclination =97.87º, period of 97 min,
nodal equatorial crossing time on ascending node at 10:30 hours. The orbital revisit cycle is
126 days. However, a revisit capability of 5 days is provided by the body-pointing feature of
the spacecraft about its roll axis by ±26º.

Mission status:
• The CartoSat-1 spacecraft and its payload are operating nominally in 2012. CartoSat-1 is
completing its 7th year on orbit in May 2012 and is being routinely operated; it is returning
high quality data.
• The CartoSat-1 spacecraft and its payload are operating nominally in 2011.
• The CartoSat-1 spacecraft and its payload are operating nominally in 2010.
• GAF/Euromap, the European commercial distributor of CartoSat-1 imagery, developed in
concert with DLR a DEM (Digital Elevation Model), mainly for Europe.

Sensor complement: (Pan Camera)


The payload instrumentation consists of two panchromatic cameras of PAN heritage as flown
on the IRS-1C/D satellites. The objective is to obtain fore-aft stereo imagery with two fixed
(body-mounted) instruments (i.e., a two-line stereo configuration). The discrimination of
elevation differences of better than 5 m makes the data particularly suitable for map-making
and terrain modeling.
• PAN-F (Panchromatic Forward-pointing Camera), featuring a fixed forward tilt of 26º.
• PAN-A (Panchromatic Aft-pointing Camera), fixed at an aft tilt of -5º (Figure 1.6).
Each camera provides a spectral range of 0.5 - 0.85 µm, a spatial resolution of 2.5 m, a swath
width of 30 km, and data quantization of 10 bits. Stereo imagery (Figure 1.7) is acquired
with a small time difference (about 50 s) due to the forward and backward look angles of the
two cameras. The major change in imaging conditions during this time period is due to the
rotation of the Earth. An Earth rotation compensation algorithm is used to account for the
delay between the observations of the two cameras.
Aside from stereo observations, the two cameras may also be used for wide-swath mode
acquisitions (Figure 1.8).
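The roughly 5 m height discrimination quoted above can be related to the two view angles through the base-to-height ratio of the stereo geometry. A hedged back-of-the-envelope sketch (Python; it assumes image parallax can be measured to about one 2.5 m pixel, which is an illustrative figure, not a published specification):

    import math

    fore_deg, aft_deg = 26.0, 5.0     # CartoSat-1 fore and aft view angles
    gsd_m = 2.5                       # panchromatic ground sample distance

    # Base-to-height ratio of the two-line stereo geometry
    b_over_h = math.tan(math.radians(fore_deg)) + math.tan(math.radians(aft_deg))

    # If parallax is measurable to ~1 pixel, height resolution ~ GSD / (B/H)
    dh = gsd_m / b_over_h
    print(f"B/H ~ {b_over_h:.2f}, height discrimination ~ {dh:.1f} m")  # ~0.58 and ~4.3 m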

Figure 1.6: Along-track imaging geometry of the CartoSat-1 fore- and aft-viewing cameras (image credit: USGS)

Figure 1.7: CartoSat-1 DEM of Ankara, Turkey (image credit: GAF/Euromap)


Figure 1.8: Schematic observation configuration modes of the two-line Pan Camera (image credit: ISRO)

The onboard source data rate of each camera is 336 Mbit/s. An onboard ADPCM/JPEG
compression algorithm of 3.2: 1 is applied reducing the data rate to 105 Mbit/s (i 52.5 + q
52.5) for each camera.
The optical system of each PAN camera is designed with a three-mirror off-axis reflective
telescope with an off-axis concave hyperboloidal primary mirror and an off-axis concave
ellipsoidal tertiary mirror - to meet the required resolution and swath width. The mirrors are
made from special Zerodur glass blanks and are light-weighted. The mirrors are polished to an
accuracy of λ/80 and are coated with an enhanced AlO2 coating. The mirrors are mounted to the
electro-optical module using iso-static mounts, so that the distortion on the light-weighted
mirrors is reduced to a minimum. Each camera features a linear CCD detector array of
12,288 pixels. The overall size of each PAN camera is 150 cm x 850 cm x 100 cm with a
mass of 200 kg.
The imagery of the 2-line along-track stereo camera may be used for a variety of applications,
among them for the generation of DEMs (Digital Elevation Models). The data is expected to
provide enhanced inputs for large scale mapping applications and stimulate newer
applications in the urban and rural development.

Ground segment:
The spacecraft is being operated by ISTRAC (ISRO Telemetry, Tracking and Command
Network) of Bangalore, using its network of stations at Bangalore, Lucknow, Mauritius,
Bearslake in Russia and Biak in Indonesia. NRSA (National Remote Sensing Agency) of
Hyderabad is receiving the payload data and is the processing center for the CartoSat-1
mission. The payload data acquisition is performed at the NRSA ground station, located at
Shadnagar, near Hyderabad.

Cartosat-2:
Cartosat-2 is an Earth observation satellite in a sun-synchronous orbit and the second of
the Cartosat series of satellites. The satellite was built, launched and maintained by the Indian
Space Research Organisation. Weighing around 680 kg at launch, its applications are mainly
towards cartography in India. It was launched by a PSLV-G rocket on 10 January 2007.
Cartosat-2 carries a state-of-the-art panchromatic (PAN) camera that takes black-and-white
pictures of the earth in the visible region of the electromagnetic spectrum. The swath covered
by this high-resolution PAN camera is 9.6 km and its spatial resolution is less than 1 metre.
The satellite can be steered up to 45 degrees along as well as across the track.
Cartosat-2 is an advanced remote sensing satellite capable of providing scene-specific spot
imagery. The data from the satellite will be used for detailed mapping and other cartographic
applications at cadastral level, urban and rural infrastructure development and management, as
well as applications in Land Information System (LIS) and Geographical Information System
(GIS).
Cartosat-2's panchromatic camera can produce images (Figure 1.9) better than 1 meter in
resolution, compared to the 82 cm panchromatic resolution offered by the Ikonos satellite.
India had previously purchased images from Ikonos at about US$20 per square kilometre; the
use of Cartosat-2 provides imagery at roughly 20 times lower cost. At the time of Cartosat-2's
launch, India was buying about ₹20 crore worth of imagery per year from Ikonos.

Figure 1.9: A 1 m panchromatic CARTOSAT-2 image of Bangalore, India (image credit: NRSC)


The above CARTOSAT-2 figure highlights many image elements pertaining to the cultural
features of Bangalore city. If you compare these details with ground information, you can
identify all the cover types, cultural features and specific features within the city area.

OceanSat-2:
The ISRO (Indian Space Research Organization) spacecraft OceanSat-2 is envisaged to
provide service continuity for the operational users of OCM (Ocean Color Monitor) data as
well as to enhance the application potential in other areas. OCM is flown on IRS-
P4/OceanSat-1, launched May 26, 1999. The main objectives of OceanSat-2 are to study
surface winds and ocean surface strata, observation of chlorophyll concentrations, monitoring
of phytoplankton blooms, study of atmospheric aerosols and suspended sediments in the
water. OceanSat-2 plays an important role in forecasting the onset of the monsoon and its
subsequent advancement over the Indian subcontinent and over South-East Asia.

Coverage of applications:
• Sea-state forecast: waves, circulation and ocean MLD (Mixed Layer Depth)
• Monsoon and cyclone forecast - medium and extended range
• Observation of Antarctic sea ice
• Fisheries and primary production estimation
• Detection and monitoring of phytoplankton blooms
• Study of sediment dynamics

RISAT
RISAT (Radar Imaging Satellite) is a series of Indian radar imaging reconnaissance
satellites built by ISRO. They provide all-weather surveillance using synthetic aperture
radars (SAR). The RISAT series are the first all-weather earth observation satellites from ISRO.
Previous Indian observation satellites relied primarily on optical and spectral sensors which were
hampered by cloud cover. After the November 26, 2008 Mumbai attacks, the launch plan was
modified to launch RISAT-2 before RISAT-1, since the indigenous C-band SAR to be used for
RISAT-1 was not ready. RISAT-2 used an Israel Aerospace Industries (IAI) X-band SAR sensor
similar to the one employed on TecSAR.

RISAT-2 was the first of the RISAT series to reach orbit. It was launched successfully on April
20, 2009 at 0015 hours GMT by a PSLV rocket. The 300-kg satellite was built by ISRO using an
X-band SAR manufactured by IAI.

This satellite was fast tracked in the aftermath of the 2008 Mumbai attacks. The satellite will be
used for border surveillance, to deter insurgent infiltration and for anti-terrorist operations. It is
likely to be placed under the Aerospace Command of the Indian Air Force.
No details of the technical specifications of RISAT-2 have been published. However, it is likely
to have a spatial resolution of about a metre or so. Ship detection algorithms for radar satellites
of this class are well-known and available. The satellite also has applications in the area of
disaster management and agriculture-related activities.


RISAT-1 is an indigenously developed radar imaging satellite successfully launched by a
PSLV-XL rocket on April 26, 2012 from Satish Dhawan Space Centre, Sriharikota. RISAT-1
had been postponed in order to prioritize the building and launch of RISAT-2.
The features of RISAT-1 include:

 160 x 4 Mbit/s data handling system


 50 Newton-meter-second reaction wheels
 SAR antenna deployment mechanism
 Phased array antenna with dual polarization

Megha- Tropiques:
Megha-Tropiques is a satellite mission to study the water cycle in the tropical atmosphere in the
context of climate change. A collaborative effort between the Indian Space Research Organisation
(ISRO) and the French Centre National d'Etudes Spatiales (CNES), Megha-Tropiques was
successfully deployed into orbit by a PSLV rocket in October 2011.

Megha-Tropiques was initially scrapped in 2003, but later revived in 2004 after India increased
its contribution and overall costs were lowered. With the progress made by GEWEX (Global
Energy and Water Cycle Experiment), Megha-Tropiques is designed to understand tropical
meteorological and climatic processes, by obtaining reliable statistics on the water and energy
budget of the tropical atmosphere. Megha-Tropiques complements other data in the current
regional monsoon projects such as MAHASRI and the completed GAME project. Megha-
Tropiques also seeks to describe the evolution of major tropical weather systems. The focus will
be the repetitive measurement of the tropics.

Megha-Tropiques provides instruments that allow simultaneous observation of three interrelated
components of the atmospheric engine: water vapour, condensed water (clouds and
precipitation), and radiative fluxes, facilitating repetitive sampling of the inter-tropical
zone over long periods of time. Its microwave radiometer MADRAS complements the
radiometers of the other elements of the Global Precipitation Measurement mission.

Payloads:
The instruments complement those flown on other (geostationary) satellites; for this purpose,
microwave instruments are essential.

 Microwave Analysis and Detection of Rain and Atmospheric Structures (MADRAS) is


a microwave imager, with conical scanning (incidence angle 56°), close to the SSM/I and
TMI concepts. The main aim of the mission being the study of cloud systems, a frequency has
been added (150 GHz) in order to study the high level ice clouds associated with the
convective systems, and to serve as a window channel relative to the sounding instrument at
183 GHz.

 Sounder for Probing Vertical Profiles of Humidity (SAPHIR) is a sounding instrument with 6
channels near the absorption band of water vapour at 183 GHz. These channels provide
relatively narrow weighting functions from the surface to about 10 km, allowing water vapour
profiles to be retrieved in the cloud-free troposphere. The scanning is cross-track, up to an
incidence angle of 50°. The resolution at nadir is 10 km.

 Scanner for Radiation Budget (ScaRaB) is a scanning radiative budget instrument, which has
already been launched twice on Russian satellites. The basic measurements of ScaRaB are the
radiances in two wide channels, a solar channel (0.2 - 4 µm) and a total channel (0.2 -
200 µm), allowing long-wave radiances to be derived. The resolution at nadir will be 40 km from
an orbit at 870 km. The procedures of calibration and processing of the data in order to derive
fluxes from the original radiances have been set up and tested by CNES and LMD.
 Radio Occultation Sensor for Vertical Profiling of Temperature and Humidity (ROSA)
procured from Italy for vertical profiling of temperature and humidity.
Launch:

The Megha-Tropiques satellite was successfully placed in an 867 km orbit with an inclination of
20 degrees to the equator by the Indian Space Research Organisation through its Polar Satellite
Launch Vehicle (PSLV-C18) on October 12, 2011. The PSLV-C18 was launched at 11:00 am
on October 12, 2011 from the first launch pad of the Satish Dhawan Space Centre (SHAR)
located in Sriharikota, Andhra Pradesh. The satellite was placed in orbit along with three micro
satellites: the 10.9 kg SRMSAT built by SRM University, Chennai, the 3 kg remote sensing
satellite Jugnu from the Indian Institute of Technology Kanpur (IIT Kanpur) and the
28.7 kg VesselSat-1 of Luxembourg to locate ships on the high seas.

SARAL:

SARAL or Satellite with ARgos and ALtiKa is a cooperative altimetry technology mission
of Indian Space Research Organisation (ISRO) and CNES (Space Agency of France). SARAL
will perform altimetric measurements designed to study ocean circulation and sea surface
elevation. The payloads of SARAL, carried on an ISRO-built satellite, are the ALTIKA
altimeter, DORIS, a Laser Retro-reflector Array (LRA) and the ARGOS-3 (Advanced Research
and Global Observation Satellite) data collection system provided by CNES. It was launched by
an Indian Polar Satellite Launch Vehicle rocket into a Sun-synchronous orbit (SSO). ISRO is
responsible for the platform, launch, and operations of the spacecraft. A CNES/ISRO MOU
(Memorandum of Understanding) on the SARAL mission was signed on Feb. 23, 2007.
SARAL was successfully launched on 25 February 2013, 12:31 UTC.

The SARAL mission is complementary to the Jason-2 mission of NASA/NOAA and


CNES/EUMETSAT. It will fill the gap between Envisat and the Sentinel 3 mission of the
European GMES program. The combination of two altimetry missions in orbit has a considerable
impact on the reconstruction of sea surface height (SSH), reducing the mean mapping error by a
factor of 4.

Payloads:
Ka band Altimeter, ALTIKA:
ALTIKA, the altimeter and prime payload of the SARAL mission, will be the first spaceborne
altimeter to operate in Ka band. It was built by the French National Space Agency CNES. The
payload, intended for oceanographic applications, operates at 35.75 GHz. ALTIKA is set to
take over ocean monitoring from Envisat. It is the first altimeter to operate at such a high
frequency, making it more compact and delivering better performance than the previous
generation. Like existing satellite-borne altimeters, ALTIKA determines sea level by bouncing
a radar signal off the surface and measuring the return-trip time, but it does so at a high
frequency in Ka band. The advantage of this is twofold. First, the earth's atmosphere slows
down the radar signal and skews altimetry measurements, so current-generation altimeters have
to carry additional equipment to correct for this error; by operating at a high frequency in Ka
band, ALTIKA does not need such a correction instrument. Second, operating at higher
frequencies gives greater accuracy: ALTIKA will measure ocean surface topography with an
accuracy of 8 mm, against 2.5 cm on average for current-generation altimeters, and with a
spatial resolution of 2 km.

The disadvantage, however, is that high-frequency waves are extremely sensitive to rain, even
drizzle; about 10% of the data is expected to be lost (although this could be exploited to
perform crude measurements of precipitation).

ARGOS Data Collection System:


It was built by the French National Space Agency CNES. ARGOS contributes to the development
and operational implementation of the global ARGOS Data Collection System. It will collect a
variety of data from ocean buoys and transmit them to the ARGOS ground segment for
subsequent processing and distribution.

Solid State C-band Transponder (SCBT):


SCBT is from ISRO and intended for ground RADAR calibration. It is a continuation of such
support provided by C-Band Transponders flown in the earlier IRS-P3 and IRS-P5 missions.
The payloads of SARAL are accommodated in the Indian Mini Satellite-2 bus, which is built by
ISRO.

SARAL Applications:
SARAL data products will be useful for operational as well as research user communities in
many fields like
 Marine meteorology and sea state forecasting
 Operational oceanography
 Seasonal forecasting
 Climate monitoring
 Ocean, earth system and climate research
 Continental ice studies
 Protection of biodiversity
 Management and protection of marine ecosystem
 Environmental monitoring
 Improvement of maritime security


1.4 SUMMARY
IRS program provides a continuous supply of synoptic, repetitive, multispectral data of the
Earth's land surfaces. IRS imagery is made available to a larger international community on a
commercial basis. The initial program of Earth-surface imaging was extended by the addition of
sensors for complementary environmental applications. This started with the IRS-P3 satellite
which is flying MOS (Multispectral Optoelectronic Scanner) for the measurement of ocean
color. The IRS-P4 mission is dedicated to ocean monitoring.
The availability of Landsat imagery created a lot of interest in the science community. The
Hyderabad ground station started receiving Landsat data on a regular basis in 1978.
The Landsat program with its design and potentials was certainly a great model and yardstick for
the IRS program.

Starting with IRS-1A, ISRO has launched many operational remote sensing satellites. Today
India has one of the largest constellations of remote sensing satellites in operation. Currently,
thirteen operational satellites are in Sun-synchronous orbit and four in geostationary orbit.
These include RESOURCESAT-1, 2 and 2A, CARTOSAT-1, 2, 2A and 2B, RISAT-1 and 2,
OCEANSAT-2 and 3, Megha-Tropiques and SARAL. A variety of instruments have been flown
onboard these satellites to provide data at diverse spatial, spectral, radiometric and temporal
resolutions, catering to different user requirements in the country and globally. The data from
these satellites are used for a wide variety of applications.

In addition to its own satellite remote sensing data, some of the premier organizations/institutes
of India have used data from foreign satellites, namely the LANDSAT series, ERS, NOAA,
TERRA and AQUA, SPOT, RADARSAT, IKONOS, QuickBird, etc.

With the passage of time, the improvement of spatial, spectral, temporal and radiometric
resolutions, and the consequent improvement in image quality, have certainly met users'
requirements and fulfilled the objectives of different fields of application.

1.5 GLOSSARY
CARTOSAT - The name CARTOSAT is a combination of Cartography and Satellite.
Cartography is the study and practice of making maps.

RESOURCESAT - RESOURCESAT is an advanced remote sensing satellite built by ISRO. It


is intended to enhance the data quality in comparison to IRS-1C and IRS-1D.

IKONOS - IKONOS is a high-resolution satellite operated by DigitalGlobe. Its capabilities
include capturing 3.2 m multispectral (near-infrared, NIR) imagery and 0.82 m panchromatic imagery.

OCEANSAT - OCEANSAT is the first satellite primarily built for Ocean applications.

RISAT (Radar Imaging Satellite) - RISAT is a series of Indian radar imaging reconnaissance
satellites built by ISRO.


MEGHA-TROPIQUES - Megha-Tropiques is a satellite mission to study the water cycle in


the tropical atmosphere in the context of climate change.

SARAL- SARAL or Satellite with ARgos and ALtiKa is a cooperative altimetry technology
mission of ISRO and CNES (Space Agency of France).

ERS - European Remote Sensing Satellite


NOAA – National Oceanic and Atmospheric Administration, whose polar-orbiting satellites
provide data widely used for weather forecasting and environmental monitoring.

TERRA - The Terra spacecraft is considered the flagship of NASA's EOS.

AQUA - AQUA is a NASA scientific research satellite in orbit around the Earth, studying the
precipitation, evaporation, and cycling of water.

SPOT - SPOT (from French "Satellite pour l'Observation de la Terre") constellation has been
supplying high-resolution, wide-area optical imagery.

RADARSAT- RADARSAT is a Canadian remote sensing Earth observation satellite program


overseen by the Canadian Space Agency.

QuickBird - The QuickBird satellite collects image data at up to 0.65 m pixel resolution.
This satellite is an excellent source of environmental GIS data.

1.6 ANSWERS TO CHECK YOUR PROGRESS


1. Define RADARSAT.
2. Define QuickBird.
3. Define SPOT.
4. Define TERRA.
5. Define AQUA.
6. Define NOAA.
7. Define SARAL.
8. Define CARTOSAT.
9. Define IKONOS.
10. Define RESOURCESAT.

1.7 REFERENCES
1. www.nrsa.gov.in
2. https://en.wikipedia.org/wiki/Indian_Remote_Sensing_Programme
3. https://space.skyrocket.de/doc_sdat/irs-1a.htm
4. https://directory.eoportal.org/web/eoportal/satellite-missions/i/irs-1c-1d
5. https://en.wikipedia.org/wiki/Resourcesat
6. https://en.wikipedia.org/wiki/Cartosat
7. https://en.wikipedia.org/wiki/RISAT
8. https://www.usgs.gov/isro-resourcesat-1-and-resourcesat-2
9. https://en.wikipedia.org/wiki/SARAL

1.8 TERMINAL QUESTIONS


1. List the IRS satellites launched till date.
2. Highlight the scope of Indian Remote Sensing Satellite Programme.
3. Explain the IRS-1C and IRS-1D payloads and their image characteristics.
4. Describe the CARTOSAT -1 and 2 sensors and their image characteristics.
5. What is SARAL? Describe it in detail.
6. The Megha-Tropiques satellite mission studies the water cycle in the context of climate
change. Explain its payloads and the techniques used.


UNIT 2 - GEOMETRIC, RADIOMETRIC AND ATMOSPHERIC


CORRECTIONS

2.1 OBJECTIVES
2.2 INTRODUCTION
2.3 GEOMETRIC, RADIOMETRIC AND ATMOSPHERIC
CORRECTIONS
2.4 SUMMARY
2.5 GLOSSARY
2.6 ANSWER TO CHECK YOUR PROGRESS
2.7 REFERENCES
2.8 TERMINAL QUESTIONS


2.1 OBJECTIVES
After reading this unit you will be able to understand:
 Geometric corrections
 Radiometric corrections
 Noise Removal
 Atmospheric corrections

2.2 INTRODUCTION
In the previous unit image characteristics of important Indian and foreign satellite platforms and
sensors have been explained. The digital image processing techniques and classification of those
images need preprocessing of raw data/images so as to achieve higher classification and mapping
accuracy. Raw digital images cannot be used as maps because they contain geometric distortions
which stem from the image acquisition process. To provide the same geometric integrity as a
map, original raw images must be geometrically corrected and the distortions, such as those
due to variations in platform altitude and earth curvature, must be compensated for.

When image data is recorded by sensors on satellites and aircraft, it can contain errors in
geometry and in the measured brightness values of pixels. The latter are referred to as
radiometric errors and can result from the instrumentation used to record the data and from the
effect of the atmosphere. Image geometry errors can arise, for example, from the curvature of the
earth, uncontrolled variations in the position and attitude of the platform, and sensor anomalies.
Before using an image, it is frequently necessary to make corrections to its brightness and
geometry. There are essentially two techniques that can be used to try to minimize these
geometric distortions. One is to model the nature and magnitude of the distortion and thereby
establish a correction; the other is to develop a mathematical relationship between the pixel
coordinates on the image and the corresponding points on the ground.
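In practice, the second approach usually means fitting a low-order polynomial between image and ground coordinates using ground control points (GCPs). A minimal first-order (affine) sketch (Python with NumPy; the GCP values below are made up purely for illustration):

    import numpy as np

    # Hypothetical GCPs: image (col, row) versus map (Easting, Northing) coordinates
    img = np.array([[10, 12], [500, 20], [480, 510], [15, 495], [250, 260]], float)
    mapxy = np.array([[300010., 899988.], [301235., 899970.], [301180., 898745.],
                      [299995., 898780.], [300600., 899370.]])

    # First-order polynomial: E = a0 + a1*col + a2*row (and similarly for N)
    A = np.c_[np.ones(len(img)), img]                 # design matrix [1, col, row]
    coef_E, *_ = np.linalg.lstsq(A, mapxy[:, 0], rcond=None)
    coef_N, *_ = np.linalg.lstsq(A, mapxy[:, 1], rcond=None)

    # Transform any pixel position into map coordinates
    col, row = 100, 200
    E, N = coef_E @ [1, col, row], coef_N @ [1, col, row]
    print(f"Pixel ({col},{row}) -> map ({E:.1f}, {N:.1f})")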

Geometric, radiometric and atmospheric corrections in a remotely sensed digital image are
collectively called image rectification. These operations aim to correct distorted or degraded
image data to create a faithful representation of the original scene. This typically involves the
initial processing of raw image data to correct for geometric distortion, to calibrate the data
radiometrically and to eliminate noise present in the data. Image rectification and restoration
procedures are often termed pre-processing operations because they normally precede
manipulation and analysis of image data.

2.3 GEOMETRIC, RADIOMETRIC AND ATMOSPHERIC CORRECTIONS

Image rectification:
 Rectification is a transformation process used to project images onto a common image plane.
 Image rectification is the transformation of multiple images onto a common coordinate
system

Geometric Correction:
 Geometric Correction (often referred to as Image Warping) is the process of digitally
manipulating image data such that the image’s projection precisely matches a specific
projection surface or shape.
 Geometric correction corrects a satellite image for the positional error and
displacement/distortion due to terrain elevation and terrain complexities.

Radiometric Correction:
 Generally speaking, radiometric correction converts Digital Number (DN) to radiance using
calibration coefficients that are usually provided in image header files.
 Radiometric correction involves subtracting the background signal (bias) and dividing by
the gain of the instrument, which converts the raw instrument output (in DN) to radiance (in
W m⁻² sr⁻¹ μm⁻¹).
 Radiometric correction is to avoid radiometric errors or distortions, while geometric
correction is to remove geometric distortion.

Radiometric Calibration:
 Radiometric calibration is the conversion from the sensor measurement to a physical
quantity.

Image Noise:
 Image noise is random variation of brightness or color information in images
 Image noise is an undesirable by-product of image capture that obscures the desired
information.
 The original meaning of "noise" is an "unwanted signal" or unwanted electrical fluctuations
in signals.
 Noise refers to random error in pixel values acquired during image acquisition.

Atmospheric correction:
 Atmospheric correction is the process of removing the effects of the atmosphere on the
reflectance values of images taken by satellite or airborne sensors.
 Atmospheric correction removes the scattering and absorption effects from the atmosphere to
obtain the original surface reflectance characteristics.

IMAGE RECTIFICATION TECHNIQUES:


This process has several degrees of freedom and there are many strategies for transforming
images to the common plane.
 It is used in computer stereo vision to simplify the problem of finding matching points between
images (i.e. the correspondence problem).
 It is used in geographic information systems to merge images taken from multiple perspectives
into a common map coordinate system.
Computer stereo vision takes two or more images with known relative camera positions that
show an object from different viewpoints. For each pixel it then determines the corresponding
scene point's depth (i.e. distance from the camera) by first finding matching pixels (i.e. pixels
showing the same scene point) in the other image(s) and then applying triangulation to the found
matches to determine their depth. Finding matches in stereo vision is restricted by epipolar
geometry: Each pixel's match in another image can only be found on a line called the epipolar
line. If two images are coplanar, i.e. they were taken such that the right camera is only offset
horizontally compared to the left camera (not being moved towards the object or rotated), then
each pixel's epipolar line is horizontal and at the same vertical position as that pixel. However, in
general settings (the camera did move towards the object or rotate) the epipolar lines are slanted.
Image rectification warps both images such that they appear as if they have been taken with only
a horizontal displacement and as a consequence all epipolar lines are horizontal, which slightly
simplifies the stereo matching process. Note however, that rectification does not fundamentally
change the stereo matching process: It searches on lines, slanted ones before and horizontal ones
after rectification.

Image rectification is also an equivalent (and more often used) alternative to perfect camera co-
planarity. Even with high-precision equipment, image rectification is usually performed because
it may be impractical to maintain perfect co-planarity between cameras.

Image rectification can only be performed with two images at a time and simultaneous
rectification of more than two images is generally impossible.

Geo-referencing and Rectification:


Geo-referencing is the process of aligning geographic data to a known coordinate system so it
can be viewed and analyzed with other geographic data. Geo-referencing may involve shifting,
rotating, scaling, skewing, and in some cases warping or ortho-rectifying the data.
Many raster datasets need to be geo-referenced before they can be viewed or analyzed with
other geographic data. For example, historical aerial photographs and maps are available through
Earth Explorer, but these images are not generally geo-referenced.
Once you have obtained a raster that you would like to geo-reference you need to obtain
reference data. This can be data layer with a known coordinate system or data collected in the
field. If you are using a layer with a known coordinate system you will need to identify Ground
Control Points (GCPs). Ground control points are locations with known coordinates that can be
easily identified in an image. Some examples of good GCPs are road intersections, stone wall
boundaries, building corners, and solitary trees that can be easily identified on the data layers.
Ground control points can also be collected in the field with GPS. The control points create a link
between the data sets and allow us to geo-reference images.

GEOMETRIC CORRECTION TECHNIQUES:


All remotely sensed imagery contains some degree of geometric distortion. These distortions are
due to changes in sensor position, the fact that the Earth is rotating on its axis as images are
being recorded, and finally due to terrain effects. Some distortions are predictable and can easily
be fixed, while others are more complicated and difficult to remove.

Why Geometric Correction:


Raw digital images usually contain geometric distortions so significant that they cannot be used
as maps. The sources of these distortions range from variations in the altitude, and velocity of the
sensor platform, to factors such as panoramic distortion, earth curvature, atmospheric refraction,
relief displacement, and non-linearities in the sweep of a sensor's IFOV. The intent of geometric
correction is to compensate for the distortions introduced by these factors, so that the corrected
image will have the geometric integrity of a map.
Geometric correction is undertaken to remove geometric distortions from a distorted image, and is
achieved by establishing the relationship between the image coordinate system and the
geographic coordinate system using the calibration data of the sensor, measured data of position and
attitude, ground control points, atmospheric conditions, etc.
Steps for Geometric Correction:
The steps to follow for geometric correction are as follows:

Selection of method - After consideration of the characteristics of the geometric distortion as
well as the available reference data, a proper method should be selected.

Determination of parameters - Unknown parameters which define the mathematical equation
between the image coordinate system and the geographic coordinate system should be
determined with calibration data and/or ground control points.

Accuracy check - Accuracy of the geometric correction should be checked and verified. If the
accuracy does not meet the criteria, the method or the data used should be checked and corrected
in order to avoid the errors.

Interpolation and re-sampling - The geo-coded image is produced by resampling and
interpolation. There are three methods of geometric correction, as mentioned below.

Geometric Distortions and Corrections:


There are two techniques that can be used to correct the various types of geometric distortion
present in digital image data: one is orbital geometry modelling, which is used for systematic
(predictable) distortions; the other, more commonly used in image processing, is a transformation
based on ground control points, which is used for random (unpredictable) distortions. Correction is
therefore normally carried out as a two-step procedure addressing:
 Systematic (predictable) distortions
 Random (unpredictable) distortions

Systematic distortions are well understood and easily corrected by applying formulas derived by
modelling the sources of the distortions mathematically. For example, a highly systematic source
of distortion involved in multi-spectral scanning from satellite altitudes is the eastward rotation
of the earth beneath the satellite during imaging. This causes each optical sweep of the scanner to
cover an area slightly to the west of the previous sweep. This is known as skew distortion. The
process of deskewing the resulting imagery involves offsetting each successive scan line slightly
to the west. The skewed-parallelogram appearance of satellite multi-spectral scanner data is a
result of this correction (Figure 2.1).

Figure 2.1: Geometric distortion and correction

Random distortions and residual unknown systematic distortions are corrected by analyzing
well-distributed ground control points (GCPs) occurring in an image. As with their counterparts
on aerial photographs, GCPs are features of known ground location that can be accurately
located on the digital imagery. Some features that make good control points are highway
intersections and distinct shoreline features. In the correction process numerous GCPs are
located both in terms of their two image coordinates (column, row numbers) on the distorted
image and in terms of their ground coordinates (typically measured from a map in terms of UTM
coordinates or latitude and longitude). These values are then submitted to a least-squares
regression analysis to determine coefficients for two coordinate transformation equations that can
be used to interrelate the geometrically correct (map) coordinates and the distorted image
coordinates. Once the coefficients for these equations are determined, the distorted image
coordinates for any map position can be precisely estimated.
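The least-squares procedure just described can be illustrated with a short sketch. The example below is only a minimal illustration (the GCP coordinates, function names and the choice of a first-order polynomial are assumptions, not taken from the text): it fits an affine transformation between distorted image coordinates and map coordinates and then estimates the map position of an arbitrary pixel.

```python
import numpy as np

def fit_affine(image_cr, map_xy):
    """Least-squares fit of a first-order (affine) polynomial mapping
    image (column, row) coordinates to map (x, y) coordinates."""
    image_cr = np.asarray(image_cr, dtype=float)
    map_xy = np.asarray(map_xy, dtype=float)
    # Design matrix: one row [1, col, row] per ground control point
    A = np.column_stack([np.ones(len(image_cr)), image_cr])
    coeff_x, _, _, _ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
    coeff_y, _, _, _ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)
    return coeff_x, coeff_y

def image_to_map(coeff_x, coeff_y, col, row):
    """Estimate the map coordinates of a pixel from the fitted coefficients."""
    x = coeff_x[0] + coeff_x[1] * col + coeff_x[2] * row
    y = coeff_y[0] + coeff_y[1] * col + coeff_y[2] * row
    return x, y

# Hypothetical GCPs: (column, row) on the distorted image and (easting, northing) on the map
gcp_image = [(120, 45), (410, 60), (95, 380), (430, 400), (265, 220)]
gcp_map = [(355120.0, 3301450.0), (362230.0, 3301080.0),
           (354510.0, 3293270.0), (362760.0, 3292800.0), (358650.0, 3297120.0)]

cx, cy = fit_affine(gcp_image, gcp_map)
print(image_to_map(cx, cy, 250, 200))   # map position of an arbitrary pixel
```

A first-order polynomial of this kind needs at least three well-distributed GCPs (higher-order polynomials need more), and the residuals at the GCPs themselves provide the accuracy check mentioned in the correction steps above.
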
Based on the distorted satellite images mentioned above, the following are three methods of
geometric correction. A flow diagram is given in Figure 2.2.
Systematic correction - When the geometric reference data or the geometry of the sensor are given
or measured, the geometric distortion can be removed theoretically or systematically. For
example, the geometry of a lens camera is given by the collinearity equation with calibrated focal
length, parameters of lens distortion, coordinates of fiducial marks, etc. The tangent correction
for an optical mechanical scanner is a type of systematic correction. Systematic correction is
generally sufficient to remove such predictable errors.

Non-systematic correction - Polynomials to transform from a geographic coordinate system to
an image coordinate system, or vice versa, are determined from the given coordinates of ground
control points using the least-squares method. The accuracy depends on the order of the
polynomials and the number and distribution of ground control points.

Combined method - Firstly the systematic correction is applied, and then the residual errors will
be reduced using lower order polynomials. Usually the goal of geometric correction is to obtain
an error within plus or minus one pixel of its true position.

Figure 2.2: Flow of Geometric Correction

Georeferencing (Geometric Correction):


The purpose of georeferencing is to transform the image coordinate system (u,v), which may be
distorted due to the factors discussed above, to a specific map projection (x,y) as shown in Figure
2.3. The imaging process involves the transformation of a real 3-D scene geometry to a 2-D image.

Figure 2.3: Georeferencing is a transformation from the image space to the geographical coordinate space
Terms such as geometric rectification or image rectification, image-to-image registration, image-
to-map registration have the following meanings:
1) Geometric rectification and image rectification recovers the imaging geometry
2) Image-to-image registration refers to transforming one image coordinate system into another
image coordinate system
3) Image-to-map registration refers to transformation of one image coordinate system to a map
coordinate system resulting from a particular map projection.

Georeferencing generally covers 1) and 3). It requires a transformation T that maps image
coordinates (u,v) to map coordinates (x,y).

The forward transformation is composed of the transformation steps shown in Figure 2.4:

Figure 2.4: Georeferencing Transformation Steps

RADIOMETRIC CORRECTION TECHNIQUES:

Involvement of Radiometric Errors and approaches for their Correction:


Spectral data acquired by satellite sensors are influenced by a number of factors, such as
atmospheric absorption and scattering, sensor-target-illumination geometry, sensor calibration,
and image data processing procedures, which tend to change through times (Teillet, 1986).
Targets in multi-date scenes are extremely variable and have been nearly impossible to compare
in an automated mode. Two approaches to radiometric correction are possible: absolute and
relative. The absolute approach requires the use of ground measurements at the time of data
acquisition for atmospheric correction and sensor calibration. This is not only costly but also
impractical when archival satellite image data are used for change analysis (Hall et al., 1991).
The relative approach to radiometric correction, known as relative radiometric normalization
(RRN), is preferred because no in-situ atmospheric data at the time of satellite overpasses are
required. This method involves normalizing or rectifying the intensities or digital numbers (DN)
of multi-date images band-by-band to a reference image selected by the analyst. The normalized
images would appear as if they were acquired with the same sensor under similar atmospheric
and illumination conditions to those of the reference image.
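As a rough illustration of relative radiometric normalization, the sketch below fits a band-wise linear relationship between a subject image and a reference image over a set of pixels assumed to be unchanged (pseudo-invariant features) and applies it to the whole band. The array sizes, variable names and the way invariant pixels are selected are hypothetical and only serve the example.

```python
import numpy as np

def normalize_band(subject, reference, invariant_mask):
    """Relative radiometric normalization of one band: fit a linear
    gain/offset over pseudo-invariant pixels and apply it to the whole band."""
    s = subject[invariant_mask].astype(float)
    r = reference[invariant_mask].astype(float)
    gain, offset = np.polyfit(s, r, deg=1)     # least-squares line r ~ gain*s + offset
    return gain * subject.astype(float) + offset

# Hypothetical data: two co-registered single-band images and a mask of unchanged pixels
rng = np.random.default_rng(0)
reference = rng.integers(20, 200, size=(100, 100))
subject = 0.8 * reference + 15 + rng.normal(0, 2, size=(100, 100))  # simulated illumination/atmospheric shift
invariant_mask = np.zeros((100, 100), dtype=bool)
invariant_mask[::10, ::10] = True             # stand-in for manually selected invariant pixels

normalized = normalize_band(subject, reference, invariant_mask)
```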

Why Radiometric Correction:


Radiometric correction is done to reduce or correct errors in the digital numbers of images. The
process improves the interpretability and quality of remotely sensed data. Radiometric calibration
and correction are particularly important when comparing data sets over multiple time periods.
The energy that sensors onboard aircrafts or satellites record can differ from the actual energy
emitted or reflected from a surface on the ground. This is due to the sun's azimuth and elevation
and atmospheric conditions that can influence the observed energy. Therefore, in order to obtain
the real ground irradiance or reflectance, radiometric errors must be corrected for.
When the emitted or reflected electro-magnetic energy is observed by a sensor on board an
aircraft or spacecraft, the observed energy does not coincide with the energy emitted or reflected
from the same object observed from a short distance. This is due to the sun's azimuth and
elevation, atmospheric conditions such as fog or aerosols, sensor's response etc. which influence
the observed energy. Therefore, in order to obtain the real irradiance or reflectance, those
radiometric distortions must be corrected. Further, in order to detect genuine landscape changes,
as revealed by changes in surface reflectance in multi-date satellite images, it is necessary to
carry out radiometric correction.
Radiometric Correction and Calibration:
Radiometric calibration is the conversion from the sensor measurement to a physical quantity. In
remote sensing, the sensor measures radiance at the top of the atmosphere. Therefore the
image provider also supplies calibration coefficients to convert from digital number (DN) to
radiance. Because the amount of light energy coming from the sun is well known, the radiance
is often normalized into a reflectance value (easier to work with because it is bounded between 0
and 1), so this step can also be part of the calibration. The calibration thus gives a reflectance
value, but it is the reflectance at the top of the atmosphere (TOA).
Indeed, the proportion of the incident light that is actually reflected by the observed object is
affected by different factors (mainly topography and atmospheric thickness). The reflectance
measured at TOA therefore needs to be corrected if absolute values are required. This does not
depend on the sensor itself, so it is not strictly a calibration step: the values measured at TOA must
be corrected in order to estimate the values at the top of the canopy. The process of
radiometric correction and calibration is shown in Figure 2.5.

Radiometric correction Procedures:


As with geometric correction, the type of radiometric correction applied to any given digital
image data set varies widely among sensors. Other things being equal, the radiance measured by
any given system over a given object is influenced by such factors as changes in scene
illumination, atmospheric conditions, viewing geometry, and instrument response characteristics.
Some of these effects, such as viewing geometry variations are greater in the case of airborne
data collection than in satellite image acquisition. Also, the need to perform correction for any or
all of these influences depends directly upon the particular application at hand.

Figure 2.5: Radiometric Correction and Calibration Process

In the case of satellite sensing in the visible and near-infrared portion of the spectrum, it is often
desirable to generate mosaics of images taken at different times or to study the changes in the
reflectance of ground features at different times or locations. In such applications, it is usually
necessary to apply a sun elevation correction and an earth-sun distance correction. The sun
elevation correction accounts for the seasonal position of the sun relative to the earth. Through
this process, image data acquired under different solar illumination angles are normalized by
calculating pixel brightness values as if the sun had been at the zenith on each date of sensing.
The correction is usually applied by dividing each pixel value in a scene by the sine of the solar
elevation angle for the particular time and location of imaging.
Radiometric correction is classified into the following three types:
Radiometric correction for sensor sensitivity:
In the case of optical sensors, with the use of a lens, the fringe area towards the corners will be darker
than the central area. This is called vignetting. Vignetting can be expressed by cosⁿθ,
where θ is the angle of a ray with respect to the optical axis. n depends on the lens
characteristics, though n is usually taken as 4. In the case of electro-optical sensors, measured
calibration data relating irradiance to the sensor output signal can be used for radiometric
correction.
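
A minimal sketch of the cosⁿθ (n ≈ 4) vignetting correction described above is given below; the focal length, image size and pixel values are invented solely for the illustration, and a real sensor would use its own calibrated fall-off model.

```python
import numpy as np

def correct_vignetting(image, focal_length_px, n=4):
    """Divide out the cos^n(theta) fall-off, where theta is the angle of each
    pixel's ray from the optical axis (assumed to pass through the image centre)."""
    rows, cols = image.shape
    r = np.arange(rows) - (rows - 1) / 2.0
    c = np.arange(cols) - (cols - 1) / 2.0
    cc, rr = np.meshgrid(c, r)
    radius = np.sqrt(rr**2 + cc**2)              # off-axis distance of each pixel (pixels)
    theta = np.arctan(radius / focal_length_px)  # off-axis angle of each pixel
    return image / np.cos(theta) ** n

corrected = correct_vignetting(np.full((512, 512), 100.0), focal_length_px=1000.0)
```
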
Radiometric correction for sun angle and topography:
Sun spot - The solar radiation will be reflected diffusely onto the ground surface, which results
in lighter areas in an image. It is called a sun spot. The sun spot together with vignetting effects
can be corrected by estimating a shading curve which is determined by Fourier analysis to
extract a low frequency component (Figure 2.6).

Original Image Result

Figure 2.6: Radiometric Correction of satellite Image

Shading - The shading effect due to topographic relief can be corrected using the angle between
the solar radiation direction and the normal vector to the ground surface.

Ignoring atmospheric effects, the combined influence of solar zenith angle and earth-sun distance
on the irradiance incident on the earth's surface can be expressed as

E = (E₀ cos θ₀) / d²                                  (1)

where,
E = normalized solar irradiance
E₀ = solar irradiance at mean earth-sun distance
θ₀ = sun's angle from the zenith
d = earth-sun distance, in astronomical units
(Information on the solar elevation angle and earth -sun distance for a given scene is normally
part of the ancillary data supplied with the digital data).
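
Equation (1) can be applied directly to normalize pixel values for differences in solar illumination between dates. In the sketch below, the solar zenith angle and earth-sun distance are placeholder values that would normally be read from the scene's ancillary data; measured values are divided by cos θ₀ and multiplied by d², so scenes appear as if acquired with the sun at the zenith and at the mean earth-sun distance.

```python
import numpy as np

def normalize_illumination(band, solar_zenith_deg, earth_sun_distance_au):
    """Normalize pixel values for solar zenith angle and earth-sun distance
    (after Eq. 1): values are rescaled to sun-at-zenith, mean-distance conditions."""
    cos_theta0 = np.cos(np.radians(solar_zenith_deg))
    return band.astype(float) * earth_sun_distance_au**2 / cos_theta0

# Placeholder scene parameters (normally taken from the image ancillary data)
band = np.random.default_rng(1).integers(0, 255, size=(100, 100))
normalized = normalize_illumination(band, solar_zenith_deg=35.0,
                                    earth_sun_distance_au=1.0167)
```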

Atmospheric effects compound the influence of solar illumination variation. The atmosphere
affects the radiance measured at any point in the scene in two contradictory ways. First, it
attenuates (reduces) the energy illuminating a ground object. Second, it acts as a reflector itself,
adding a scattered, extraneous "path radiance" to the signal detected by a sensor. Thus, the
composite signal observed at any given pixel location can be expressed by

Ltot = ρ E T / π + Lp                                  (2)

where,
Ltot = total spectral radiance measured by the sensor
ρ = reflectance of the target
E = irradiance on the target
T = transmission of the atmosphere
Lp = path radiance
(The factor π arises from assuming the target reflects diffusely; all of the above quantities depend
on wavelength.)

Only the first term in the above equation contains valid information about ground reflectance.
The second term represents the scattered path radiance, which introduces "haze" in the imagery
and reduces image contrast. Haze compensation procedures are designed to minimize the
influence of path radiance effects. One means of haze compensation in multi-spectral data is to
observe the radiance recorded over target areas of essentially zero reflectance. For example, the
reflectance of deep clear water is essentially zero in the near-infrared region of the spectrum.
Therefore, any signal observed over such an area represents the path radiance, and this value can
be subtracted from all pixels in that band.
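
The dark-object (haze) compensation just described can be sketched as follows: the signal over targets of essentially zero reflectance is taken as an estimate of path radiance and subtracted from every pixel of the band. Using a low percentile rather than the absolute minimum is an added assumption, included only to make the estimate less sensitive to a single noisy pixel.

```python
import numpy as np

def dark_object_subtraction(band, dark_percentile=0.1):
    """Estimate path radiance from the darkest pixels of the band and
    subtract it from all pixels (results are clipped at zero)."""
    path_radiance = np.percentile(band, dark_percentile)
    return np.clip(band.astype(float) - path_radiance, 0, None)

band = np.random.default_rng(2).integers(8, 200, size=(200, 200))
haze_corrected = dark_object_subtraction(band)
```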

Radiometric correction for Sensor Sensitivity:


Another radiometric data processing activity involved in many quantitative applications of
digital image data is conversion of DNs to absolute radiance values. This operation accounts for
the exact form of the A-to-D response functions for a given sensor and is essential in applications
where measurement of absolute radiances is required. For example, such conversions are
necessary when changes in the absolute reflectance of objects are to be measured over time using
different sensors (e.g. the MSS on Landsat-3 versus that on Landsat-5). Likewise, such
conversions are important in the development of mathematical models that physically relate
image data to quantitative ground measurements (e.g. water quality data).

Normally, detectors and data systems are designed to produce a linear response to incident
spectral radiance. For example, Fig.2.2 shows the linear radiometric response function typical of
an individual TM channel. Each spectral band of the TM has its own response function, and its
characteristics are monitored using onboard calibration lamps (and temperature references for the
thermal channel). The absolute spectral radiance output of the calibration sources is known from
pre-launch calibration and is assumed to be stable over the life of the sensor. Thus, the onboard
calibration sources form the basis for constructing the radiometric response function by relating
known radiance values incident on the detectors to the resulting DNs.

DN = GL + B (3)

where,
DN = digital number value recorded
G = slope of response function (channel gain)
L = spectral radiance measured (over the spectral bandwidth of the channel)
B = intercept of response function (channel offset)

Note that the slope and intercept of the above function are referred to as the gain and offset of the
response function, respectively.

L = ((LMAX - LMIN) / 255) × DN + LMIN                                  (4)

Often the LMAX and LMIN values published for a given sensor are expressed in units of mW
cm⁻² sr⁻¹ μm⁻¹. That is, the values are often specified in terms of radiance per unit wavelength. To
estimate the total within-band radiance in such cases, the value obtained from Eq. 4 must be
multiplied by the width of the spectral band under consideration. Hence, a precise estimate of
within-band radiance requires detailed knowledge of the spectral response curve for each band.
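
A short sketch of the conversions in Eqs. (3) and (4) is given below. The LMIN/LMAX calibration values and the band width are placeholders; in practice they are taken from the sensor's published calibration or the image header file.

```python
import numpy as np

def dn_to_radiance(dn, lmin, lmax):
    """Convert 8-bit digital numbers to spectral radiance (Eq. 4):
    L = ((LMAX - LMIN) / 255) * DN + LMIN, per unit wavelength."""
    return (lmax - lmin) / 255.0 * dn.astype(float) + lmin

def within_band_radiance(spectral_radiance, band_width_um):
    """Approximate total within-band radiance by multiplying the per-unit-wavelength
    radiance by the band width (a precise estimate would need the full
    spectral response curve, as noted in the text)."""
    return spectral_radiance * band_width_um

dn = np.random.default_rng(3).integers(0, 256, size=(100, 100))
L = dn_to_radiance(dn, lmin=0.04, lmax=2.0)        # placeholder calibration values
L_band = within_band_radiance(L, band_width_um=0.1)
```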

Uses of Radiometric Correction Processes:


The Radiometric Correction process calibrates images acquired by satellite optical sensors to
radiance or reflectance values. These corrections allow more accurate assessment of ground
surface properties and facilitate comparison between images acquired at different times or for
different areas. You can use this process to:
• Convert image digital numbers to at-sensor radiance.
• Convert image digital numbers to top-of-atmosphere reflectance with specified raster data type
and scale.
• Apply dark-object (path radiance) correction from image histograms, from a scattering model,
by manual entry, or by designating a dark area in the View.
• Apply cosine TZ (COST) atmospheric transmittance correction.
• Apply topographic correction using a DEM.
In software such as TNTmips, TIFF/GeoTIFF and JP2/GeoJP2 images can be used in their original
format; correction parameters are read automatically for these images from metadata files in the
specified format. Images in other formats (e.g., HDF, NITF) should first be imported to a
MicroImages Project File; sensor metadata is automatically imported to a metadata subobject
that is also read automatically by the Radiometric Correction process. Default values for missing
parameters are provided from a TNTmips reference file. Correction parameters can also be
entered manually for these images and for images from other satellite optical sensors.

NOISE PROBLEMS AND REMOVAL TECHNIQUES:

Image noise is any unwanted disturbance in image data that is due to limitations in the sensing,
signal digitization, or data recording process. The potential sources of noise range from periodic
drift or malfunction of a detector, to electronic interference between sensor components, to
intermittent "hiccups" in the data transmission and recording sequence. Noise can either degrade
or totally mask the true radiometric information content of a digital image. Hence, noise removal
usually precedes any subsequent enhancement or classification of the image data. The objective
is to restore an image to as close an approximation of the original scene as possible.

As with geometric restoration procedures, the nature of noise correction required in any given
situation depends upon whether the noise is systematic (periodic), random, or some combination
of the two. For example, multi spectral scanners that sweep multiple scan lines simultaneously
often produce data containing systematic striping or banding. This stems from variations in the
response of the individual detectors used within each band. Such problems were particularly
prevalent in the collection of early Landsat MSS data. While the six detectors used for each
band were carefully calibrated and matched prior to launch, the radiometric response of one or
more tended to drift over time, resulting in relatively higher or lower values along every sixth
line in the image data. In this case valid data are present in the defective lines, but they must be
normalized with respect to their neighboring observations.

Noise Removal Methods:

Destriping - Several destriping procedures have been developed to deal with the type of
problem described above. One method is to compile a set of histograms for the image - one for
each detector involved in a given band. For MSS data, this means that for a given band, one
histogram is generated for scan lines 1,7, 13, etc. a second is generated for lines 2,8,14 etc. and
so forth. These histograms are then compared in terms of their mean and median values to
identify the problem detector(s). A grey-scale adjustment factor(s) can then be determined to
adjust the histogram(s) for the problem lines to resemble those for the normal data lines. This
adjustment factor is applied to each pixel in the problem lines and the others are not altered.
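
A minimal destriping sketch along the lines just described is given below: the scan lines belonging to each detector are compared with the whole band, and only the lines of a drifting detector are adjusted. The simple mean/standard-deviation matching used here is one possible adjustment rule, and the detector count and test data are assumptions for the example.

```python
import numpy as np

def destripe(band, n_detectors=6):
    """Normalize every n-th scan line (one detector) to the overall band
    statistics to suppress periodic striping (e.g. six-detector MSS data)."""
    out = band.astype(float).copy()
    band_mean, band_std = out.mean(), out.std()
    for d in range(n_detectors):
        lines = out[d::n_detectors, :]              # all scan lines of detector d
        det_mean, det_std = lines.mean(), lines.std()
        # Shift and stretch this detector's values to match the band as a whole
        out[d::n_detectors, :] = (lines - det_mean) / det_std * band_std + band_mean
    return out

striped = np.random.default_rng(4).normal(100, 10, size=(120, 120))
striped[::6, :] += 15                               # simulate one drifting detector
clean = destripe(striped)
```
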

Line Drop - Another line-oriented noise problem sometimes encountered in digital data is line
drop. In this situation, a number of adjacent pixels along a line (or an entire line) may contain
spurious DNs. This problem is normally addressed by replacing the defective DNs with the
average of the values for the pixels occurring in the lines just above and below. Alternatively, the
DNs from the preceding line can simply be inserted in the defective pixels.
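
The line-drop repair described above amounts to replacing a defective scan line with the average of the lines immediately above and below it. A minimal sketch follows; the line index and test data are arbitrary, and the defective line is assumed not to be the first or last line of the image.

```python
import numpy as np

def repair_line_drop(band, bad_line):
    """Replace a dropped scan line with the average of its neighbouring lines."""
    out = band.astype(float).copy()
    out[bad_line, :] = 0.5 * (out[bad_line - 1, :] + out[bad_line + 1, :])
    return out

band = np.random.default_rng(5).integers(0, 255, size=(100, 100))
band[40, :] = 0                          # simulated dropped line
repaired = repair_line_drop(band, bad_line=40)
```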

Random noise - Random noise problems in digital data are handled quite differently than those
have been discussed to this point. This type of noise is characterized by non-systematic
variations in grey levels from pixel to pixel called bit errors. Such noise is often referred to as
being "spikey" in character, and it causes images to have a "salt and pepper" or "snowy"
appearance.

Many algorithms have been proposed to remove salt and pepper noise. The common salt and
pepper noise filtering algorithms include: i) the traditional median (TM) filter; ii) the
extreme median (EM) filter; iii) the switching median (SM) filter;
iv) the adaptive median (AM) filter; and v) the adaptive weight (AW) filter. Based
on various research papers, the TM filter is simple and fast, but it is not able to
effectively remove salt and pepper noise or to protect edges and details at high noise
densities. The EM, SM and AM filters are sensitive to noise density, and their
filtering performance worsens as the noise density increases. An adaptive weight (AW) filter
approach has been proposed in which the output is a weighted sum of the image and a
de-noising factor; the weighting coefficients depend on a state variable. The state variable is
the difference between the current pixel and the average of the remaining pixels in the
surrounding window. Because the coefficients vary, it is difficult to select an appropriate
one.

The decision-based adaptive weight algorithm is based on the following steps:

i. Check for pixels that may be noisy in the satellite image, i.e., only pixels with values 0 or 255
are considered.
ii. For each such candidate pixel P, a 3x3 window of pixels neighbouring P is taken.
iii. Find the absolute differences between the pixel P and the neighbouring pixels of P.
iv. The arithmetic mean of these differences for the pixel P is calculated.
v. The arithmetic mean is then compared with a threshold value to detect whether the pixel P is a
signal pixel or is corrupted by noise.
vi. If the arithmetic mean is greater than or equal to the threshold value, the pixel P is considered
noisy.
vii. Otherwise, the pixel P is considered a signal pixel.
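
The detection steps above can be sketched as follows; the threshold value is an assumption that would be tuned to the data, and the loop-based implementation favours clarity over speed.

```python
import numpy as np

def detect_noisy_pixels(image, threshold=40.0):
    """Flag salt-and-pepper candidates (values 0 or 255) whose mean absolute
    difference from their 3x3 neighbours meets or exceeds the threshold."""
    img = image.astype(float)
    noisy = np.zeros(img.shape, dtype=bool)
    rows, cols = img.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            p = img[i, j]
            if p != 0 and p != 255:                   # step i: only extreme values are candidates
                continue
            window = img[i - 1:i + 2, j - 1:j + 2]    # step ii: 3x3 neighbourhood
            neighbours = np.delete(window.ravel(), 4) # drop the centre pixel
            mean_diff = np.abs(neighbours - p).mean() # steps iii-iv
            noisy[i, j] = mean_diff >= threshold      # steps v-vii
    return noisy
```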

Median filters produce the best result with a 3x3 mask at low noise densities of up to about
30%, though the image is considerably blurred. The filter fails to perform well at higher noise
densities, and hence the adaptive weight algorithm can be used for highly corrupted satellite
images. The adaptive weight algorithm is as follows:
i. Noise is detected by the noise detection algorithm mentioned above.
ii. Filtering is applied only at those pixels that were detected as noisy.
iii. Once a given pixel P is found to be noisy, the following steps are applied.
iv. A 3x3 mask is centred at the pixel P, and it is checked whether there exists at least one signal
pixel around P.
v. If one is found, the pixel P is replaced by the median of the signal pixels found in the 3x3
neighbourhood of P.
vi. The above steps are repeated if noise is still present in the output image, for better results.
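
Combining the detection and filtering steps listed above, a self-contained sketch of the decision-based filter is given below. The threshold and iteration limit are assumptions; only pixels flagged as noisy are replaced, using the median of the uncorrupted (non 0/255) pixels in their 3x3 neighbourhood.

```python
import numpy as np

def decision_based_filter(image, threshold=40.0, max_iter=3):
    """Detect salt-and-pepper pixels and replace each one with the median of
    the signal pixels in its 3x3 neighbourhood; repeat a few times if needed."""
    out = image.astype(float).copy()
    rows, cols = out.shape
    for _ in range(max_iter):
        changed = False
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                p = out[i, j]
                if p != 0 and p != 255:
                    continue
                window = out[i - 1:i + 2, j - 1:j + 2]
                neighbours = np.delete(window.ravel(), 4)
                if np.abs(neighbours - p).mean() < threshold:
                    continue                            # treated as a valid signal pixel
                signal = neighbours[(neighbours != 0) & (neighbours != 255)]
                if signal.size:                         # replace only if a signal pixel exists
                    out[i, j] = np.median(signal)
                    changed = True
        if not changed:
            break
    return out
```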

ATMOSPHERIC CORRECTION:

Various atmospheric effects cause absorption and scattering of the solar radiation. Reflected or
emitted radiation from an object and path radiance (atmospheric scattering) should be corrected
for.
Atmospheric Correction Techniques:
The solar radiation is absorbed or scattered by the atmosphere during transmission to the ground
surface, while the reflected or emitted radiation from the target is also absorbed or scattered by
the atmosphere before it reaches a sensor. The ground surface receives not only the direct solar
radiation but also sky light, or scattered radiation from the atmosphere. A sensor will receive not
only the direct reflected or emitted radiation from a target, but also the scattered radiation from a
target and the scattered radiation from the atmosphere, which is called path radiance (Figure
2.7).The atmospheric correction method is classified into the following categories:

Radiative Transfer Equation Method - An approximate solution is usually determined for
the radiative transfer equation. For atmospheric correction, the aerosol density in the visible and near
infrared region and the water vapor density in the thermal infrared region should be estimated.
Because these values cannot be determined from the image data, a rigorous solution cannot be used.

The method with ground truth data - At the time of data acquisition, those targets with known
or measured reflectance will be identified in the image. Atmospheric correction can be made by
comparison between the known value of the target and the image data (output signal). However
the method can only be applied to the specific site with targets or a specific season.

Other method - A special sensor to measure aerosol density or water vapor density is utilized
together with an imaging sensor for atmospheric correction. For example, the NOAA satellite
has not only the AVHRR imaging sensor (Advanced Very High Resolution Radiometer) but
also HIRS (High Resolution Infrared Radiation Sounder) for atmospheric correction.

Figure 2.7: Atmospheric Effect of Path Radiance

2.4 SUMMARY

Satellite images are not always delivered with accurate geographic information (coordinates) of
features. Apart from positional error, positional displacement/distortion may occur due to the
actual terrain elevation differing from that of the (simple) model used for it, and due to non-zero
off-nadir angles.

Systematic image distortion occurs when coordinates are consistently off by a certain amount
across the whole image. In addition, remote sensing analysts must account for systematic distortions
associated with the platform motion and the imaging device. For example, the rate at which the
earth rotates out from beneath a satellite causes systematic distortion, as does the rate at which the
off-nadir scale changes across an image. Nonsystematic distortion is another broad category of
image distortion. Nonsystematic distortion occurs when random factors cause local variations in
image scale and coordinate location.

In most cases, however, your images will have some mix of systematic and nonsystematic
distortions in them. The approach you use to remove these distortions (if you remove them at
all) depends on user-defined variables and needs, such as which map projection you use to fit
a sphere to a flat surface, the degree to which topography is a concern, the level of accuracy you
require, etc.
Geometric correction corrects a satellite image for these distortions and ensures that
pixels/features in the image are in their proper and exact position on the earth’s surface. Image
levels 3A and 3B do not typically require this step, as they often represent the highest possible
level of geometric correction. Geometric correction involves two fundamental steps:
geo-rectification and orthorectification.
Geometric correction is to remove geometric distortion, while radiometric correction is to avoid
radiometric errors or distortions. When a sensor on board an aircraft or spacecraft observes the
emitted or reflected electro-magnetic energy, the observed energy does not coincide with the
energy emitted or reflected from the same object observed from a short distance. This is due to
the sun's azimuth and elevation, atmospheric conditions such as fog or aerosols, sensor's
response etc. which influence the observed energy. Therefore, in order to obtain the real
irradiance or reflectance, those radiometric distortions must be corrected. Radiometric
correction is classified on the basis of sensor sensitivity, Sun angle and the atmospheric
condition.
Radiometric correction is done to reduce or correct errors in the digital numbers of images. The
process improves the interpretability and quality of remotely sensed data. Radiometric calibration
and correction are particularly important when comparing data sets over multiple time periods.

The process of rectification reassigns coordinates to pixels. Although a spherical surface can
never be shown as a flat image without some distortion, different approaches (projections) can be
used to minimize certain types of distortion. Systematic distortions can be minimized using
equations that adjust the pixel locations systematically across the entire image. Often some of
the major systematic distortions will have been removed before you even get the imagery. For
example, it is common to purchase IRS and other images which have already been corrected to
account for the rotation of the earth beneath the satellite, which is why the image has the outline
of a parallelogram rather than a right angle rectangle.

ERDAS Imagine provides several approaches for rectifying your image beyond what has already
been done by the data provider. In general terms, these approaches are rubber sheeting
adjustments, camera adjustment and polynomial adjustments. Rubber sheeting adjustments
(called triangulation adjustments in other remote sensing software packages) break the image up
into many smaller pieces and apply a correct to each of these pieces. This is appropriate where
nonsystematic distortion is severe and pixel locations shift in a quasi-random way across the
image.

2.5 GLOSSARY

ERDAS IMAGINE - ERDAS IMAGINE is a widely used commercial remote sensing and image
processing software package.

TOA- top of the atmosphere.

RRN -Relative Radiometric Normalization

VIGNETTING - In photography and optics, vignetting is a reduction of an image's brightness or
saturation toward the periphery compared to the image center. Vignetting, also known as “light fall-
off” (sometimes spelled “light falloff”), is common in optics and photography and in simple terms
means a darkening of the image towards its edges.

EPIPOLAR-Epipolar geometry is the geometry of stereo vision. When two cameras view a
3D scene from two distinct positions, there are a number of geometric relations between the
3D points and their projections onto the 2D images that lead to constraints between the
image points. These relations are derived based on the assumption that the cameras can be
approximated by the pinhole camera model.

2.6 ANSWER TO CHECK YOUR PROGRESS


1. Explain Geometric corrections.
2. Explain Atmospheric corrections.

2.7 REFERENCES
1. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 55, no 9.
2. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 56, no 1.
3. Teillet, P.M., 1986. Image correction for radiometric effects in remote sensing. Int. J. Remote
Sensing, 7(12), pp. 1637- 1651.
4. Hall, F.G., Strebel, D.E., Nickeson, J.E. and Goetz, S.J., 1991. Radiometric rectification: toward a
common radiometric response among multidate, multisensor images. Remote Sensing of Environment, 35, pp. 11-27.
5. https://www.sciencedirect.com/topics/earth-and-planetary-sciences/rectificatio
6. https://en.wikipedia.org/wiki/Image_rectification
7. https://en.wikipedia.org/wiki/Image_geometry_correction
8. http://www.seos-project.eu/modules/remotesensing/remotesensing-c0
9. http://www2.gi.alaska.edu/~rgens/teaching/asf_seminar/corrections.pdf
10. https://gisgeography.com/atmospheric-correction/

2.8 TERMINAL QUESTIONS


i) What is image rectification? Explain its techniques.
ii) Why are geometric corrections required? Discuss geometric distortions and their corrections.
iii) Explain the methods of geometric correction with a flow diagram.
iv) How do radiometric errors arise in an image? Explain radiometric correction
and calibration.
v) Describe the types of radiometric correction.
vi) Describe in detail the noise problems and removal techniques.
vii) What are atmospheric effects and atmospheric correction? Describe the techniques of
atmospheric correction.

UNIT 3 - THERMAL INFRA-RED IMAGES

3.1 OBJECTIVES
3.2 INTRODUCTION
3.3 THERMAL INFRA-RED IMAGES
3.4 SUMMARY
3.5 GLOSSARY
3.6 ANSWER TO CHECK YOUR PROGRESS
3.7 REFERENCES
3.8 TERMINAL QUESTIONS

3.1 OBJECTIVES
After reading this unit you will be able to understand:
 Definitions and Concepts of Thermal infra-red images.
 Principles of Optical and Thermal Infrared Remote Sensing.
 Physical Principles of Thermal Infrared Remote Sensing.
 Thermal Sensor.
 Thermal Images.
 Applications of Thermal infrared Remote Sensing.

3.2 INTRODUCTION
Before starting this topic, you have been learning about remote sensing based on electromagnetic
radiation within the wavelength ranges of visible, near infrared, middle infrared and far
infrared. In this unit, you will learn the full concepts, principles, properties and applications of
thermal infrared images and thermal remote sensing which is based on long wave thermal
infrared within the range of 3-14 μm.

Let’s start with a little background. Our eyes see reflected light. Daylight cameras, night vision
devices, and the human eye all work on the same basic principle: visible light energy hits
something and bounces off it, and a detector then receives it and turns it into an image. Whether in an
eyeball or in a camera, these detectors must receive enough light or they cannot form an image.
Obviously, there isn’t any sunlight to bounce off anything at night, so they’re limited to the light
provided by starlight, moonlight and artificial lights. If there isn’t enough, they won’t do much to
help you see.
Thermal energy comes from a combination of sources, depending on what you are viewing at the
time. Some things – warm-blooded animals (including people!), engines, and machinery, for
example – create their own heat, either biologically or mechanically. Other things – land, rocks,
buoys, vegetation – absorb heat from the sun during the day and radiate it off during the night.
Everything we encounter in our day-to-day lives gives off thermal energy, even ice. The hotter
something is, the more thermal energy it emits. This emitted thermal energy is called a “heat
signature”. Thermal remote sensing is the branch of remote sensing that deals with the acquisition,
processing and interpretation of data acquired primarily in the thermal infrared (TIR) region of
the electromagnetic (EM) spectrum. In thermal remote sensing we measure the radiations
'emitted' from the surface of the target, as opposed to optical remote sensing where we measure
the radiations 'reflected' by the target under consideration. Useful reviews on thermal remote
sensing are given by Kahle (1980), Sabins (1996) and Gupta (1991)

It is a well known fact that all natural targets reflect as well as emit radiations. In the TIR region
of the EM spectrum, the radiations emitted by the earth due to its thermal state are far more
intense than the solar reflected radiations and therefore, sensors operating in this wavelength
region primarily detect thermal radiative properties of the ground material. However, as also
discussed later in this article, very high temperature bodies also emit substantial radiations at
shorter wavelengths. As thermal remote sensing deals with the measurement of emitted
radiations, for high temperature phenomenon, the realm of thermal remote sensing broadens to
encompass not only the TIR but also the short wave infrared (SWIR), near infrared (NIR) and in
extreme cases even the visible region of the EM spectrum.

Thermal remote sensing, in principle, is different from remote sensing in the optical and
microwave region. In practice, thermal data prove to be complementary to other remote sensing
data. Thus, though still not fully explored, thermal remote sensing holds potential for a
variety of applications.

Incoming shortwave radiation from the Sun, which includes ultraviolet, visible, and a portion of
infrared energy reaches the Earth. As learned in previous modules, some of this energy is
reflected and absorbed by the atmosphere and some eventually reaches the surface. At the
surface portions of this energy are reflected and absorbed depending on the material types. The
shortwave energy that is absorbed by the atmosphere and the surface is converted to kinetic
energy and then emitted as long wave or thermal radiation. Most of the emitted long wave
radiation warms the lower atmosphere, which in turn warms the Earth's surface. In thermal
remote sensing there is often a need to compensate and correct for atmospheric interaction and
emitted energy. Based on the content necessary to be described, the topic is aimed at the
following objectives:

3.3 THERMAL INFRA RED IMAGES


Definitions:
Thermal Remote Sensing:
 Thermal remote sensing refers to measuring the energy that is being emitted from the Earth’s
surfaces rather than measuring the reflected energy.
 Thermal remote sensing is based in the infrared portion of the spectrum, more specifically the
long wave infrared portion.
 Thermal remote sensing is based in the infrared portion of the spectrum and measures emitted
thermal energy.
 Thermal remote sensing is a type of passive remote sensing since it detects naturally emitted
radiation.
 In most of the cases thermal remote sensing is conducted in the 3-5 μm and 8-14 μm
wavelengths.

Imaging Camera:
 A thermal imaging camera (also called thermographic camera or an infrared
camera or infrared thermography) is a device that forms a heat zone image using infrared
radiation, similar to a common camera that forms an image using visible light. Instead of the
400–700 nanometer range of the visible light camera, infrared cameras operate
in wavelengths as long as 14,000 nm (14 µm).

 A thermographic camera (or infrared camera) detects infrared light (or heat) invisible to the
human eye. That characteristic makes these cameras useful for many applications, including
security, surveillance and military uses, in which targets can be tracked in dark, smoky, foggy
or dusty environments, or even when they are partly concealed.
Thermal imaging:
 Thermal imaging is simply the process of converting infrared (IR) radiation (heat) into
visible images that depict the spatial distribution of temperature differences.
 Thermal imaging is a method of night vision that collects the infrared radiation from objects
in the scene and creates an electronic image.

Thermal Infrared and Infrared Image:

 Thermal infrared image is the product of electromagnetic spectrum which has a wavelength
of between 3.0 and 20 micrometers.
 Most remote sensing applications make use of the 8 to 13 micrometer range.
 The main difference between thermal infrared image and the infrared (color infrared - CIR)
image is that thermal infrared is emitted energy that is sensed digitally, whereas the near
infrared (also called the photographic infrared) is reflected energy.
Absorption by water and other gases in the atmosphere restricts sensors to record thermal
images in two wavelength windows - 3 to 5 µm and 8 to 15 µm.
 For this reason, thermal IR imagery is difficult to interpret and process because there is
absorption by moisture in the atmosphere.

CONCEPTS:
Thermal Infrared Remote Sensing:
The basic strategy for sensing electromagnetic radiation is clear. Everything in nature has its own
unique distribution of reflected, emitted and absorbed radiation. These spectral characteristics, if
ingeniously exploited, can be used to distinguish one thing from another or to obtain information
about shape, size and other physical and chemical properties. It is being explained in the next
unit.
Because different materials absorb and radiate thermal energy at different rates, an area that we
think of as being one temperature is actually a mosaic of subtly different temperatures. This is
why a log that has been in the water for days on end will appear to be at a different temperature
from the water, and is therefore visible to a thermal imager. Thermal cameras detect these
temperature differences and translate them into image detail. While all this can seem rather
complex, the reality is that modern thermal cameras are extremely easy to use. Their imagery is
clear and easy to understand, requiring no training or interpretation.
It is important to remember that thermal infrared = emitted infrared energy rather than reflected.
Thermal remote sensing is a type of passive remote sensing since it measures naturally emitted
energy.

In remote sensing we examine the near-infrared from 0.7 – 0.9 μm, the middle or short wave
infrared (SWIR) from 0.9-1.3 μm, far-infrared from 1.3- 3μm and the longwave or thermal
infrared from 3-14 μm (Figure 3.1). The near infrared, middle infrared and shortwave infrared
sensors measure reflected infrared light.

Figure 3.1: Electromagnetic spectrum showing the thermal infrared (IR) wavelength range along with other wavelength categories and frequencies

The concept of thermal infrared radiation, along with other radiation phenomena, is shown
in Figure 3.2. The 3-14 μm range is the region in which emitted heat is sensed, even where the
temperature differences involved are small.

Thermal Camera or Imager and Thermal Image:


Visible light cameras (imagers or sensors), i.e. daylight cameras, work by detecting reflected light
energy. But the amount of reflected light they receive is not the only factor that determines
image contrast: a lack of intensity of visible light naturally
decreases image contrast. Thermal imagers do not have these shortcomings. First, they
do not depend on reflected light energy: they see heat. Everything you see in normal daily
life has a heat signature. This is why you have a much better chance of seeing something at night
with a thermal imager than you do with a visible light camera, even a night vision camera.

Radiation at wave lengths between 3-14 μm

Figure 3.2: Portion of Thermal Infrared Radiation (3-14μm) highlighting the extended
form of increasing wave length and frequency (energy)

Thermal images are the product of thermal cameras, sensors and electronic scanners which make
pictures from heat, not visible light. Heat (also called infrared, or thermal energy) and light are
both parts of the electromagnetic spectrum, but a camera that can detect visible light won’t see
thermal energy, and vice versa. Thermal cameras detect more than just heat though; they detect
tiny differences in heat – as small as 0.01°C – and display them as shades of grey in black and
white pictures/ videos.

PRINCIPLES OF OPTICAL AND THERMAL INFRARED REMOTE SENSING:


The basic principle involved in optical and thermal infrared remote sensing is that different
objects, based on their structural, chemical and physical properties, return (reflect and emit)
different amounts of energy in different wavelength ranges (commonly referred to as bands) of the
electromagnetic spectrum. These wavelength ranges include the
visible, infrared and thermal infrared regions. Most remote sensing programmes utilize the Sun's
energy, which is the predominant source of energy. This radiation travels through the
atmosphere and is selectively scattered and/or absorbed depending upon the composition of the
atmosphere and the wavelengths involved. Upon reaching the earth's surface the radiation
interacts with the 'target' objects (earth surface features), and every target in nature reflects or
emits energy from its surface. The reflection and emission of non-thermal and thermal
energy are recorded in the wavelength ranges of 0.4-0.7 μm (visible - reflected energy only),
0.7-3 μm (infrared - both reflected and emitted energy) and 3-14 μm (thermal infrared - emitted
energy only). The recorded energy is calibrated and transmitted to the users.
Since remote sensing involves the detection and measurement of the radiations of different
wavelengths reflected or emitted from distant objects or materials, it involves the following four basic
components:
(i) The energy source.
(ii) The transmission path.
(iii) The target.
(iv) The satellite sensor.
Among these, the energy source or electromagnetic energy, is very important, as it serves as the
crucial medium for transmitting information from the target to the sensor.
Both reflected and thermal infrared remote sensing are basically multi-disciplinary sciences
that combine various disciplines such as optics, spectroscopy, thermal and
non-thermal imaging, computing, electronics and telecommunication, satellite launching, etc. All
these technologies are integrated to act as one complete system in itself, known as a Remote
Sensing System. There are a number of stages in a remote sensing process, and each of them is
important for successful operation. We can summarize the remote sensing process in the
following seven steps, which are also depicted in Figure 3.3.

Stages in Remote Sensing Process:


Energy Source: The energy source provides emission of electromagnetic radiation, or EMR (sun/self-emission). The first requirement of remote sensing is to have an energy source which illuminates the target of interest. Though the Sun's energy is the predominant source of energy, all matter with an absolute temperature above zero (0 K = -273.15°C) emits electromagnetic energy and can act as a source of energy.
Energy Interactions with the atmosphere: The energy on its way from the source to the target, and then from the target to the sensor, comes in contact and interacts with the atmosphere. This is also called the transmission of energy from the source through the atmosphere and back to the sensor. During this interaction the processes of reflection, refraction, absorption, radiation and scattering take place.
Interaction of EMR with the earth’s surface: The Earth's different surface features react differently to the incident energy. Portions of the incident energy are reflected, transmitted and absorbed by the surface. Thus both the processes of reflection and emission (after absorption) are recorded by the sensor.
Transmission of energy from the surface to the remote sensor: This is the recording of energy by the sensor after its transmission from the earth's surface. The energy, after interacting with the earth's surface, reaches the sensor (which is remote, i.e. not in touch with the earth surface features), where it is recorded in a form which can be transmitted to and used by the users.


Sensor data output: The energy recorded by the sensor is transmitted to a receiving and
processing station where the data are processed into an image. You may simply call it data
transmission and processing.
Data transmission, processing and analysis: The processed image is interpreted to extract the
information about the earth surface features. This you may simply call as image processing and
analysis.
Applications: The extracted information is then utilized, to make decisions for solving a
particular problem, concerning the surface or resource.


Figure 3.3: Remote Sensing Processes for Optical and Thermal IR Remote Sensing

You may understand the above figure as follows:
Light coming from the Sun in the form of electromagnetic radiation (Energy Source) is incident on the earth's surface (after interaction with the atmosphere), which supports various land use and land cover types like forest, water bodies, grasses, roads, built-up area, agriculture, bare soil etc. Parts of the incident light are reflected, refracted and absorbed at the earth's surface (Interaction of EMR with the earth's surface). The reflected and emitted energy then passes through the atmospheric windows (after interaction with the atmosphere; to be explained in the next unit) and is received by the sensor mounted on the satellite (Transmission of energy from the surface to the remote sensor). The energy variability of earth surface objects is converted into electrical impulses (signals - Sensor data output) by the detectors in different wavelengths (wave bands; to be explained in the next unit). Those electrical impulses are stored on High Density Digital Tapes (HDDT; to be explained in the next unit) fixed in the satellite and transmitted to the ground station whenever the satellite passes near a ground station (Data transmission). Those signals/digital data, after various corrections and preliminary processing (Preliminary processing of data), are used for various applications and distributed to the users. This has been explained in the previous chapters.


PHYSICAL PRINCIPLES OF THERMAL INFRARED REMOTE SENSING:


Atmospheric Windows in the Thermal IR Region:
Atmospheric windows in the thermal IR region are situated between 3-5 μm and 8-14 μm. The latter contains a narrow absorption band from 9 to 10 μm caused by ozone, which is avoided by most thermal IR satellite sensors. You may note that reflected sunlight can contaminate thermal IR signals recorded in the 3-5 μm window during daytime.


Figure 3.4: Atmospheric Windows of Thermal Infrared Region

Kinetic Temperature vs. Radiant Temperature:


Thermal remote sensing measures what is known as the radiant temperature of an object. The radiant temperature is a measure of the emitted energy of an object. It is also sometimes referred to as the brightness temperature or the "apparent temperature" of an object. All objects with a temperature greater than absolute zero (0 K) emit electromagnetic energy. Normally, when we refer to temperature we are referring to the kinetic temperature. Kinetic temperature, or heat, is generated by the vibration of molecules in all objects. Kinetic temperature is sometimes called "true temperature". Kinetic temperatures can be measured using a thermometer and are expressed in conventional temperature scales (°F, °C, K). There is a high correlation between the true kinetic temperature of an object and its radiant temperature. Therefore we can use remote sensing instruments such as radiometers to measure radiant temperature (from emitted energy) and relate this to the true kinetic temperature. The radiant temperature is often lower than we would expect based on the true kinetic temperature. This is due to a property known as emissivity.

As mentioned in the previous unit, all materials at temperatures above absolute zero (0 K, -273°C) continuously emit electromagnetic radiation by virtue of their atomic and molecular oscillations. The total amount of emitted radiation increases with the body’s absolute
temperature and peaks at progressively shorter wavelengths. The earth with its ambient
temperature of ca. 300 K has its peak energy emission in the thermal infrared region at around
9.7 µm (Figure 3.5).

Blackbody Principle:
A blackbody is a hypothetical, ideal radiator that totally absorbs and re-emits all energy incident
upon it. The total energy a blackbody radiates and the spectral distribution of the emitted energy
(radiation curve) depends on the temperature of the blackbody and can be described by:

 Planck’s radiation law


 Stefan-Boltzmann law
 Wien’s displacement law

Figure 3.5: Blackbody concept of Ideal Radiator

Planck’s Blackbody Radiation Law:


Planck’s Blackbody Radiation Law describes the electromagnetic radiation emitted from a
blackbody at a certain wavelength as a function of its absolute temperature (Figure 3.6).

M =2πhc2/5(eh c/ l k T )-1

UNIT 3 - THERMAL INFRA-RED IMAGES Page 51 of 281


GIS-505/DGIS-505 Advance Remote Sensing Uttarakhand Open University

Where,
M = spectral radiant exitance [W m-2 μm-1]
h = Planck’s constant [6.626 x 10-34 J s]
c = speed of light [2.9979246 x 108 m s-1]
k = Boltzmann constant [1.3806 x 10-23 J K-1]
T = absolute temperature [K]
λ = wavelength [μm]

Stefan-Boltzmann Law:
The Stefan-Boltzmann law describes the total electromagnetic radiation emitted by a blackbody as a function of its absolute temperature, which corresponds to the area under the radiation curve (its integral over wavelength).
M = σT^4
Where,
M = total radiant exitance [W m^-2]
T = absolute temperature [K]
σ = Stefan-Boltzmann constant [5.6697 × 10^-8 W m^-2 K^-4]
The higher the temperature of the radiator, the greater the total amount of radiation (Figure 3.6).

Wien‘s Displacement Law:


Wien's displacement law describes the wavelength at which the maximum spectral radiant exitance occurs:
λ_peak = A / T
Where,
λ_peak = wavelength of maximum spectral radiant exitance [μm]
A = Wien's constant [2897.8 μm K]
T = absolute temperature [K]
With increasing temperature, λ_peak shifts to shorter wavelengths:
λ_peak × T = 2.898 × 10^-3 m·K
This relationship is called Wien's displacement law (Figure 3.7) and is useful for determining the temperatures of hot radiant objects such as stars, and indeed for determining the temperature of any radiant object whose temperature is far above that of its surroundings. The temperature can be found from the wavelength at which the radiation curve reaches its peak.
It should be noted that the peak of the radiation curve in the Wien relationship is the peak only because the intensity is plotted as a function of wavelength. If frequency or some other variable is used on the horizontal axis, the peak will be at a different wavelength.
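The three radiation laws above can be checked numerically. The short Python sketch below is only an illustration (it is not part of the original text); it uses the constants listed with Planck's law and evaluates the peak wavelength, the total exitance and the spectral exitance at the peak for a body at 300 K (roughly the Earth's ambient temperature) and at 6000 K (roughly the Sun's temperature).

import numpy as np

H = 6.626e-34         # Planck's constant [J s]
C = 2.9979246e8       # speed of light [m s^-1]
K = 1.3806e-23        # Boltzmann constant [J K^-1]
SIGMA = 5.6697e-8     # Stefan-Boltzmann constant [W m^-2 K^-4]
A_WIEN = 2897.8       # Wien's constant [um K]

def planck_exitance(wavelength_um, t_kelvin):
    # Planck's law: spectral radiant exitance in W m^-2 um^-1
    lam = wavelength_um * 1e-6                # um -> m
    m = 2 * np.pi * H * C**2 / (lam**5 * (np.exp(H * C / (lam * K * t_kelvin)) - 1.0))
    return m * 1e-6                           # W m^-2 m^-1 -> W m^-2 um^-1

def total_exitance(t_kelvin):
    # Stefan-Boltzmann law: total radiant exitance in W m^-2
    return SIGMA * t_kelvin ** 4

def peak_wavelength(t_kelvin):
    # Wien's displacement law: wavelength of maximum exitance in um
    return A_WIEN / t_kelvin

for T in (300.0, 6000.0):
    print(f"T = {T} K: peak at {peak_wavelength(T):.2f} um, "
          f"M = {total_exitance(T):.3e} W m^-2, "
          f"M(peak) = {planck_exitance(peak_wavelength(T), T):.3e} W m^-2 um^-1")
# A 300 K surface peaks near 9.7 um (thermal IR); a ~6000 K source peaks near 0.48 um (visible).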


[Figure: blackbody radiance curves at the Sun's temperature and at an incandescent lamp's temperature, with the area under each radiation curve indicated]

Source: Lillesand et al. (2008)

Figure 3.6: Black Body Radiance Curves at different temperatures

Emissivity:
In fact, objects in the real world are not perfect blackbodies. Not all of the incident energy upon
them is absorbed; therefore they are not perfect emitters of radiation. The emissivity (ε) of a
material is the relative ability of its surface to emit heat by radiation. Emissivity is defined as the
ratio of the energy radiated from an object's surface to the energy radiated from a blackbody at
the same temperature.


Figure 3.7: Wien's Displacement Law, where the wavelength of the peak of the blackbody radiation curve gives a measure of temperature

Emissivity values can range from 0 to 1. A blackbody has an emissivity of 1, while a perfect
reflector or white body has an emissivity of 0. Most natural objects are considered "graybodies"
as they emit a fraction of their maximum possible blackbody radiation at a given temperature.
Water has an emissivity close to 1 and most vegetation also has an emissivity close to 1. Many
minerals and metals have emissivities significantly less than 1. Emissivity can also vary with temperature, depending on the material.
The emissivity of a surface depends not only on the material but also on the nature of the surface.
For example, a clean and polished metal surface will have a low emissivity, whereas a roughened
and oxidized metal surface will have a high emissivity. Two materials lying next to one another
on the ground could have the same true kinetic temperature but have different apparent radiant
temperatures when sensed by a thermal radiometer simply because their emissivities are
different. Emissivity can be used to identify mineral composition. Knowledge of surface emissivity is also important for obtaining accurate true kinetic temperature measurements from radiometers.
Going back to the Stefan-Boltzmann law, the blackbody radiation principles can be extended to real-world materials by including the emissivity factor in the equation:
M = εσT^4
where,
M = Total energy emitted from the surface of a material
ε = Emissivity
σ = Stefan-Boltzmann constant
T = Temperature of the emitting material in Kelvin
Thermal sensors measure the radiant temperatures of objects. The true kinetic temperature of an object can be estimated from the radiant temperature if the emissivity of the object is known:
T_rad = ε^(1/4) T_kin
where,
T_rad = Radiant Temperature
T_kin = Kinetic Temperature

For example, if we measure the radiant temperature of dry soil to be 293.8 K and we know the emissivity is 0.92, we can determine the true kinetic temperature:
T_rad = ε^(1/4) T_kin
293.8 K = 0.92^(1/4) × T_kin
293.8 = 0.979 × T_kin
T_kin = 293.8 / 0.979
T_kin = 300 K (about 27°C)
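The same arithmetic can be written as a small Python sketch (an illustration only, not part of the original text), applying the relation T_rad = ε^(1/4) × T_kin in both directions:

def radiant_from_kinetic(t_kin_kelvin, emissivity):
    # Radiant (apparent) temperature of a greybody: T_rad = emissivity**(1/4) * T_kin
    return emissivity ** 0.25 * t_kin_kelvin

def kinetic_from_radiant(t_rad_kelvin, emissivity):
    # True kinetic temperature recovered from a measured radiant temperature
    return t_rad_kelvin / emissivity ** 0.25

# Dry-soil example from the text: T_rad = 293.8 K, emissivity = 0.92
t_kin = kinetic_from_radiant(293.8, 0.92)
print(f"T_kin = {t_kin:.1f} K = {t_kin - 273.15:.1f} deg C")   # ~300 K, ~27 deg C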

THERMAL SENSORS AND SATELLITES:


Thermal sensors or scanners detect emitted radiant energy. Due to atmospheric effects these
sensors usually operate in the 3 to 5 μm or 8 to 14μm range. Most thermal remote sensing of
Earth features is focused in the 8 to 14 μm range because peak emission (based on Wien's Law)
for objects around 300 K (27°C or 80°F) occurs at 9.7 μm. Many thermal imaging sensors are on satellite platforms, although they can also be located on-board aircraft or on ground-based systems. Many thermal systems are multispectral, meaning they collect data on emitted radiation across a variety of wavelengths.
Thermal Sensors:
Thermal Infrared Multispectral Scanner (TIMS) -
NASA and the Jet Propulsion Laboratory developed the Thermal Infrared Multispectral Scanner
(TIMS) for exploiting mineral signature information. TIMS is a multispectral scanning system
with six different bands ranging from 8.2 to 12.2 μm and a spatial resolution of 18m. TIMS is
mounted on an aircraft and was primarily designed as an airborne geologic remote sensing tool.
TIMS acquires mineral signature data that permits the discrimination of silicate, carbonate and
hydrothermally altered rocks. TIMS data have been used extensively in volcanology research in
the western United States, the Hawaiian islands and Europe. The multispectral data allows for the generation of three-band color composites similar to other multispectral data. Many materials have varying emissivities and can be identified by the variation in emitted energy. There are a variety of different materials and minerals in Death Valley with varying emissivities. Alluvial fans appear in shades of reds, lavender, and blue-greens; saline soils in yellow; and different saline deposits in blues and greens (Figure 3.8).

Figure 3.8: Thermal image of TIMS pertaining to Death Valley California. FCC of Band 1
–blue (8.2 - 8.6μm), Band 3 -green (9.0 - 9.4μm) and Band 5-red (10.2 - 11.2 μm)

Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER):


Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a sensor on-board the Terra satellite. In addition to collecting reflective data in the visible, near and shortwave infrared, ASTER also collects thermal infrared data. ASTER has five thermal bands ranging from 8.1 to 11.6 μm with 90 m spatial resolution. ASTER data are used to create detailed maps of land surface temperature, emissivity, reflectance, and elevation. ASTER data are available for download through Earth Explorer.

Moderate-resolution Imaging Spectroradiometer (MODIS):


MODIS has a high spectral resolution and collects data in a variety of wavelengths. Similar to ASTER, MODIS collects both reflective and emitted (thermal) data. MODIS has 16 bands that collect thermal data with 1000 m spatial resolution. MODIS has high temporal resolution with a one to two day return time. This makes it an excellent resource for detecting and monitoring wildfires. One of the products generated from MODIS data is the Thermal Anomalies/Fire product, which detects hotspots and fires.


Figure 3.9: MODIS Thermal Anomalies/Fire data from August 2015

Landsat:
A variety of the Landsat satellites have carried thermal sensors. The first Landsat satellite to collect thermal data was Landsat 3; however, this part of the sensor failed shortly after the satellite was launched. Landsat 4 and 5 included a single thermal band (band 6) on the Thematic Mapper (TM) sensor with 120 m spatial resolution that has been resampled to 30 m. A similar band was included on the Enhanced Thematic Mapper Plus (ETM+) on Landsat 7. Landsat 8 includes a separate thermal sensor known as the Thermal Infrared Sensor (TIRS). TIRS has two thermal bands, Band 10 (10.60 - 11.19 μm) and Band 11 (11.50 - 12.51 μm). The TIRS bands are acquired at 100 m spatial resolution, but are resampled to 30 m in the delivered data products.

THERMAL IMAGES:
Most thermal images are single-band images and by default are displayed as greyscale images. Lighter or brighter areas indicate areas that are warmer, while darker areas are cooler. Single-band thermal images can also be displayed in pseudo-color to better display the variation in temperature. Thermal imagery can be used for a variety of applications including estimating soil moisture, mapping soil types, determining rock and mineral types, wildland fire management and identifying leaks or emissions. Multiband color composites can also be created if multiple wavelengths of thermal emission are recorded. An example of this is the TIMS image shown earlier (Figure 3.8).
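A minimal Python sketch of this display choice is given below. It is only an illustration: it assumes the rasterio and matplotlib packages are available, and the single-band file name "thermal_band.tif" is hypothetical.

import rasterio
import matplotlib.pyplot as plt

# Read the single thermal band from a (hypothetical) GeoTIFF
with rasterio.open("thermal_band.tif") as src:
    band = src.read(1).astype("float32")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(band, cmap="gray")   # greyscale: dark = cool, light = warm
ax1.set_title("Greyscale")
ax2.imshow(band, cmap="jet")    # pseudo-color: blue = cool, red = warm
ax2.set_title("Pseudo-color")
plt.show()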

Figure 3.10: Thermal imagery of Las Vegas and Lake Mead acquired during the day on October 12th, 2015 by the Thermal Infrared Sensor (TIRS) on Landsat 8. The greyscale image is shown on the left; cool areas are dark while light areas are warmer. On the right is a pseudo-color representation of the same data; temperature is shown as a color gradient, with cool areas blue and warm areas red.
The images from daytime and nighttime satellite passes are described below:

Time of Day:
Thermal imagery can be acquired during the day or night but can produce very different results because of a variety of factors. Some of these factors are thermal conductivity, thermal capacity and thermal inertia. Thermal conductivity is the property of a material to conduct heat, or a measure of the rate at which heat can pass through a material; for example, heat passes through metals much faster than through rocks. Thermal capacity is a measure of how well a material can store heat; water has a very high thermal capacity. Thermal inertia measures how quickly a material responds to temperature changes. Based on these factors, different materials warm and cool at different rates during the day and night. This gives rise to a diurnal cycle of temperature changes for features at the Earth's surface. The diurnal cycle encompasses 24 hours. Beginning at sunrise, the Earth begins to receive mainly short wavelength energy from the Sun. From approximately 6:00 am to 8:00 pm, the terrain intercepts the incoming short wavelength energy and reflects much of it back into the atmosphere. Some of this energy is absorbed and then emitted as long-wave, thermal infrared radiation. Emitted thermal radiation reaches its peak during the day and usually lags two to four hours after the midday peak of incoming shortwave radiation, owing to the time it takes to heat the soil. Daytime imagery can contain thermal "shadows" in areas that are shaded from direct sunlight. Slopes may receive differential heating depending on their orientation in relation to the sun (aspect). In the above daytime image of the Las Vegas area the topography and topographic shadows are clearly visible.


Figure 3.11: Graph shows the diurnal radiant temperature variation for rocks and soils
compared to water.

There is a diurnal radiant temperature variation for rocks and soils compared to water. Water has
relatively little temperature variation throughout the day. Dry soils and rocks on the other hand
heat up more and at a quicker rate during the day. They also tend to cool more at night compared
to water. Around dawn and sunset the curves for water and soils intersect. This point is known as
the thermal crossover, which indicates times where there is no difference in the radiant
temperature of materials.
Water generally appears cooler than its surroundings in daytime thermal images and warmer in nighttime imagery (Figures 3.12 and 3.13). The actual kinetic temperature of the water has not
changed significantly but the surrounding areas have cooled. Trees generally appear cooler than
their surroundings during the day and warmer at night. Paved areas appear relatively warm
during the day and night. Pavement heats up quickly and to higher temperatures than the
surrounding areas during the day. Paved areas also lose heat relatively slowly at night so they are
relatively warmer than surrounding features.


Figure 3.12: Day time thermal image

Figure 3.13: Night time thermal image

APPLICATIONS:
Following are the specific applications of thermal infrared remote sensing:
Agriculture:
Thermal imaging has been growing fast and playing an important role in various fields of agriculture, from nursery monitoring, irrigation scheduling, soil salinity stress detection, plant disease detection, yield estimation and maturity evaluation to bruise detection in fruits and vegetables. Thermal imaging has gained popularity in agriculture due to its high temporal and spatial resolution. However, intensive research needs to be conducted on its potential application in other fields of agriculture (e.g. yield forecasting) that have not yet been investigated. Although it can be used as a non-contact, non-destructive technique in many agricultural operations during the pre-harvest and post-harvest periods, it has some drawbacks: it is relatively expensive, and thermal measurements depend on environmental and weather conditions. Thus it may not be possible to develop a universal methodology for its application in agricultural operations, since the thermal behaviour of crops varies with climatic conditions.

Other areas of application of thermal infrared imaging are detection of water stress in crops and
evapotranspiration in crops and river basins, which are significant inputs in the management of
agricultural practices and integrated watershed management.

Forestry:
Thermal infrared imaging is used in forestry to map and monitor forest cover in terms of vegetation stress and evapotranspiration, which is important in environmental management since trees and other plants help cool the environment, making vegetation a simple and effective way to reduce urban heat islands.

Quantitative information about forest canopy structure, biomass, age, and physiological condition has been extracted from thermal infrared data. Basically, a change in surface temperature can be measured by an airborne thermal infrared sensor (e.g., TIMS or ATLAS) by repeatedly flying over the same area a few times. Usually a separation of about 30 minutes results in a measurable change in surface temperature caused by the change in incoming solar radiation. Average surface net radiation is measured in situ for the study area and is used to integrate the effects of the non-radiative fluxes. The change in surface temperature from time period t1 to t2 (i.e., ΔT) is the value that reveals how those non-radiative fluxes are reacting to radiant energy inputs. The ratio of these two parameters is used to compute a surface property defined as the Thermal Response Number (TRN).

Terrains containing mostly soil and bare rock have the lowest TRN values, while forests have the
highest. The TRN is a site-specific property that may be used to discriminate among various
types of coniferous forest stands and some of their biophysical characteristics.
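The ratio described above can be sketched as a small calculation. The following Python lines are an illustration only: they assume the TRN is taken simply as the net radiant energy received over the interval between the two overpasses divided by the observed change in surface temperature, and every number used is hypothetical.

def thermal_response_number(net_radiation_w_m2, t_surface_t1_k, t_surface_t2_k, interval_s):
    # Net radiant energy over the interval [kJ m^-2] per degree of surface warming [K]
    delta_t = t_surface_t2_k - t_surface_t1_k            # change in surface temperature
    energy_kj = net_radiation_w_m2 * interval_s / 1000.0 # W m^-2 x s -> kJ m^-2
    return energy_kj / delta_t

# Two overpasses about 30 minutes (1800 s) apart, with the same in-situ net radiation:
print(thermal_response_number(500.0, 300.0, 303.0, 1800.0))  # bare soil warms a lot  -> low TRN
print(thermal_response_number(500.0, 300.0, 300.5, 1800.0))  # forest warms very little -> high TRN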

Forest Fires:
Forest fires are a major cause of degradation of India's forests. While statistical data on fire losses are weak, it is estimated that the proportion of forest area prone to fire annually ranges from 33% in some states to over 90% in others. About 90% of the forest fires in India are caused by humans. The normal fire season in India is from the month of February to mid June. India witnessed the most severe forest fires of recent times during the summer of 1995 in the hills of Uttar Pradesh and Himachal Pradesh. The fires were very severe and attracted the attention of the whole nation. An area of 677,700 ha was affected by fires.

Forest fires are characterized by their plumes, their temperature, and their luminosity. Most in-
situ daytime fire sightings result from the observation of smoke generated by fuel combustion,
while most nighttime sightings result from high and unusual luminosity of the burning areas. The
high temperature of the burning areas makes the fires detectable from satellite through thermal
infrared imaging.

Clouds consist of tiny particles of ice or water that have the same temperature as the surrounding
air. Images acquired from aircraft or satellites above cloud banks record the radiant temperature
of the clouds. Energy from the earth's surface does not penetrate the clouds but is absorbed and
reradiated. Smoke plumes, however, consist of ash particles and other combustion products so fine that they are readily penetrated by the relatively long wavelengths of thermal IR radiation.

In visible and thermal IR images acquired over forest fires even during daytime, it is observed
that the smoke plume completely conceals the ground in the visible image, but terrain features
are clearly visible in the IR image and the burning front has a bright signature. The US Forest
Service uses aircraft equipped with IR scanners that produce image copies in flight, which are
dropped to fire fighters on the ground. These images provide information about the fire location
that cannot be obtained by visual observation through the smoke plumes. IR images are also
acquired after fires are extinguished in order to detect hot spots that could reignite. IR images
are also useful in estimating the burnt area.

Water Resources:
Detection of water stress and retrieval of evapotranspiration are key applications for water management purposes. Thermal infrared remote sensing has long been recognized as one of the most feasible means to detect and evaluate water stress and to quantify evapotranspiration over large areas and in a spatially distributed manner.

Water stress is considered to be a major environmental factor limiting plant productivity world-
wide. Water stress develops in plants as evaporative losses cannot be sustained by the extraction
of water from the soil by the roots. Evapotranspiration (ET) is a term used to describe the loss of
water from the Earth’s surface to the atmosphere by the combined processes of evaporation from
surface and transpiration from vegetation. Evapotranspiration depends on the presence of water
and is regulated by the availability of energy, needed to convert liquid water to water vapor, and
to transport vapor from the land surface to the atmosphere. Physiological regulations also occur
in plants through mechanisms controlling water extraction by the roots, water transport in plant
tissue, and water release to the atmosphere via the stomata at the leaf surface (in direct relation
with the mechanisms of CO2 assimilation and photosynthesis).

Water resources may be monitored and managed through detection of water stress in crops and
forests, detection of and quantification of evapotranspiration in crops, river basins and
continents.

Volcanic Eruptions:
Volcanic eruptions pose serious hazards to sensitive ecosystems, transportation and communication networks, and to populated regions.
alluvial surface is critically important to a geologist trying to interpret the geologic, climatic, or
volcanic history of the surface. The utility of TIR remote sensing for geology and mineralogy
has become clear in the past decades and numerous air- and space-based instruments have
become available.

Thermal infrared imaging helps scientists track potentially deadly patterns of heat in and around
some of the world's 1,500 active volcanoes. Thermal infrared data processed to highlight
hotspots can alert volcanologists to volcanic activity before it becomes dangerous, and may one
day help them better forecast eruptions. Assessing volcanic hazards is an important issue since ten percent of the global population lives in the vicinity of active volcanoes.

In high resolution thermal IR images, active volcanoes stand out as bright spots. They become brighter in a time series of images as they ramp up for an eruption, and the speed with which they
cool down can tell scientists much about their geological composition, which in turn helps them
predict whether the volcanoes will erupt violently.

Scientists already know that volcanoes erupt because of density and pressure. Magma is less
dense than rock and rises to the surface at weak points in the earth's crust. As the magma rises,
water and gases dissolved in it expand rapidly, often causing violent explosions – or volcanic
eruptions. Volcanoes with high silica content are of particular interest, because they tend to
produce more viscous lava, which traps gas bubbles. As the pressure from the bubbles builds
inside the volcano, so does the potential for a powerful and dangerous eruption.

Through thermal infrared images, it is possible to monitor and map eruption clouds, tropospheric plumes, hot spots and active lava flows. Post-eruptive studies may also make use of thermal infrared imagery.

3.4 SUMMARY
Thermal infrared sensors can be difficult to calibrate. Changes in atmospheric moisture and
varying emissivities of surface materials can make it difficult to accurately calibrate thermal
data. Thermal IR imagery is difficult to interpret and process because there is absorption of
thermal radiation by moisture in the atmosphere. Most applications of thermal remote sensing
are qualitative, meaning they are not employed to determine the absolute surface temperature but
instead to study relative differences in radiant temperature. Thermal imagery works well to
compare the relative temperatures of objects or features in a scene.
It is important to note that thermal sensors detect radiation from the surface of objects. Therefore
this radiation might not be indicative of the internal temperature of an object. For example the
surface of a water body might be much warmer than the water temperature several feet deep, but
a thermal sensor would only record the surface temperature.
There are also topographic effects to consider. For example in the northern hemisphere, north
facing slopes will receive less incoming shortwave solar radiation from the sun and will therefore
be cooler. Clouds and fog will usually mask the thermal radiation from surface features. Clouds and fog are generally cooler and will appear darker. Clouds will also produce thermal cloud shadows, where areas underneath clouds are cooler than the surrounding areas.
Some limitations of thermal remote sensing are clear. On the other hand, from the variety of
possible applications, the advantages and potentials of thermal remote sensing are also obvious.


One more fact that is now clear is that new satellite and airborne platforms with new and
improved thermal sensors also promise to bring more interest and challenge in this relatively less
explored field.
Therefore, now there is a definite need to promote the understanding and the use of thermal data
by the scientific and application community. These developments should encompass the
following:

 Fundamental research in the principles of thermal remote sensing


 Laboratory measurements of spectral response of natural materials in the thermal infrared
region
 Development of more sophisticated sensor technology.
 Finally, application-oriented research, where already explored application fields can be refined and new application areas can be tapped.

One of the most promising ways to ensure that such a goal is achieved is by introducing the topic of thermal remote sensing in greater depth in remote sensing educational programmes. The 'spin-off' effect of such a venture would be that more and more researchers with fresh ideas would explore thermal data and its possibilities.
EPILOGUE
Temperature and emissivity are powerful biophysical variables critical to many investigations. In the near future thermal infrared remote sensing will become even more important. We now have very sensitive linear- and area-array thermal infrared detectors that can function in broad thermal bands or in hyperspectral configurations. Very soon, unmanned aerial vehicles (UAVs) carrying miniature thermal infrared sensors will be used by the military, scientists and even ordinary people.

3.5 GLOSSARY
TRN - Thermal Response Number
SWIR - Short Wave Infrared

NIR - Near Infrared

Thermography - Any writing, printing, or recording process involving the use of heat. In medicine, thermography is a test that uses an infrared camera to detect heat patterns and blood flow in body tissues. Remote-sensing infrared thermography (IRT) has been advocated as a possible means of screening for fever in travelers at airports and border crossings. Thermography, including all of the techniques for the analysis of the infrared radiation emitted or reflected by objects in the thermal infrared region of the electromagnetic spectrum, provides the opportunity to analyze both the thermal characteristics of the materials lying on the ground and many processes related to the exchange of heat between surfaces; its importance has been demonstrated in a wide range of military, civil, industrial and scientific applications.


Emissivity - The emissivity of the surface of a material is its effectiveness in emitting energy as thermal radiation.

ET- Evapotranspiration - Evapotranspiration (ET) is the sum of evaporation and plant


transpiration from the Earth's land and ocean surface to the atmosphere.

3.6 ANSWER TO CHECK PROGRESS


1. Explain the physical Principles of Thermal Infrared Remote Sensing.
2. Explain the applications of Thermal infrared Remote Sensing.

3.7 REFERENCES
1. Gupta, R.P., 1991. Remote Sensing Geology (Berlin-Heidelberg: Springer-Verlag).
2. Kahle, A.B., 1980. Surface thermal properties. In Remote Sensing in Geology, edited by B.S. Siegal and A.R. Gillespie (New York: John Wiley), pp. 257-273.
3. Markham, B.L., Barker, J.L., 1986. Landsat MSS and TM post-calibration dynamic ranges, exo-atmospheric reflectances and at-satellite temperatures. EOSAT Landsat Technical Notes 1, August 1986, Earth Observation Satellite Co. (Lanham, Maryland), pp. 3-8.
4. Prakash, A., Gens, R., Vekerdy, Z., 1999. Monitoring coal fires using multi-temporal night-time thermal images in a coalfield in North-west China. International Journal of Remote Sensing, 20(14), pp. 2883-2888.
5. Prakash, A., Gupta, R.P., 1998. Land-use mapping and change detection in a coal mining area - a case study of the Jharia Coalfield, India. International Journal of Remote Sensing, 19(3), pp. 391-410.
6. Prakash, A., References on thermal remote sensing. http://www.itc.nl/~prakash/research/thermal_ref.html
7. Sabins, F.F. Jr, 1996. Remote Sensing: Principles and Interpretation, 3rd edn. (New York: W.H. Freeman).

3.8 TERMINAL QUESTIONS


i) Define thermal remote sensing, thermal imaging, thermal imaging camera, and thermal
infrared and infrared image and write their concepts.
ii) Compare the principles of optical and thermal infrared remote sensing.
iii) What are the physical principles of thermal infrared remote sensing? Explain the blackbody
principles
iv) List the thermal sensors. Explain the characteristics of MODIS and ASTER
v) Explain the characteristics of thermal images.


BLOCK 2 : HYPERSPECTRAL

UNIT 4 - INTRODUCTION TO HYPERSPECTRAL


REMOTE SENSING

4.1 OBJECTIVES
4.2 INTRODUCTION
4.3 INTRODUCTION TO HYPERSPECTRAL REMOTE
SENSING
4.4 SUMMARY
4.5 GLOSSARY
4.6 ANSWER TO CHECK YOUR PROGRESS
4.7 REFERENCES
4.8 TERMINAL QUESTIONS


4.1 OBJECTIVES
Recent developments in remote sensing technologies have led to advanced techniques such as imaging spectrometry, which is capable of collecting information in more than 100 contiguous, spectrally narrow bands. These data are different from traditional remote sensing data and require unique processing steps.

After going through this module you will enhance your knowledge about hyperspectral remote sensing, in particular:

 Basic understanding of hyperspectral remote sensing.


 Difference between hyperspectral and multispectral remote sensing.
 Spectral, spatial, radiometric and temporal resolution of sensors.
 Airborne and Spaceborne hyperspectral remote sensing sensors.
 Hyperspectral remote sensing application in various disciplines of science.

4.2 INTRODUCTION
In the present day, advancements in remote sensing have unlocked a new dimension for the application of imaging spectrometers in various disciplines of science. Imaging spectrometers acquire images of objects in several fine, contiguous spectral bands throughout the visible, near-infrared, mid-infrared, shortwave infrared and thermal ranges of the electromagnetic spectrum. These sensors are presently used for the identification and detection of minerals, vegetation, mangroves, and other objects and backgrounds. They generally acquire information in typically hundreds of spectral bands with quite narrow bandwidths (5-10 nm), which allows us to form a spectral reflectance curve for each pixel in the hyperspectral image.

Hyperspectral remote sensing data have proved their significance in applications in environment, ecology, forestry, agriculture, and geosciences. These sensors, acquiring information in fine and contiguous spectral bands, provide the capability to identify the characteristic molecular absorption features of surface materials and their constituents. Precise and accurate information about the Earth's surface has many applications in different domains. Due to their global coverage and repetitiveness, space-borne hyperspectral sensors are considered a reliable method for the identification and measurement of vegetation or other land cover types that were difficult to access using traditional mapping methods.


4.3 INTRODUCTION TO HYPERSPECTRAL REMOTE


SENSING
Hyperspectral remote sensing results from recent advancements in remote sensing technology and efficiently integrates imaging and spectroscopy in a single system. Hyperspectral sensors offer correct and complete information about the Earth's surface in the form of spectral reflectance curves built from narrow wavelength bands of the electromagnetic spectrum. Hyperspectral remote sensing can be successfully applied to the identification of materials and the generation of geological/land cover classification maps.

Imaging spectrometers collect information as a spectral reflectance curve for every pixel in the image. This brings together images of an area in hundreds of contiguous spectral bands with fairly small bandwidths (~10 nm). In contrast, multispectral sensors acquire information in about 5-10 spectral bands with quite large spectral band gaps (>100 nm). Imaging spectrometers collect spatial and spectral data together, which helps in the construction of a laboratory-like reflectance curve for every pixel in the image.

Spectral-based analytical software such as the Environment for Visualizing Images (ENVI) can be used in the identification and mapping of materials on the Earth's surface from data collected by imaging spectrometers. Thus, materials can be identified and mapped quickly and directly on the basis of their characteristic spectral absorption features. By using the spectral information, we can identify minerals, vegetation cover types, atmospheric gases, and the quality of water.

The first portable field spectrometer for the measurement of spectral absorption features, utilizing a charge-coupled device (CCD), was developed by Alexander Goetz in 1974. It was then followed by the development of imaging spectrometers for use on space- and air-borne platforms. Today the basic and simple definition (Alexander F. H. Goetz et al., 1985) still finds its relevance: "The acquisition of images in hundreds of contiguous registered spectral bands such that for each pixel a radiant spectrum can be derived". This description covers the entire wavelength range from the visible to the long-wave infrared through the visible, NIR, and SWIR; all types of mounting medium, i.e., ground, space and air platforms; and includes all objects, whether liquid, gas or solid.

The originally accepted bandwidth for geological applications and mineral exploration was approximately 10 nm (A. F. H. Goetz, Rock, & Rowan, 1983). However, the availability of narrower bandwidths has widened the application of hyperspectral remote sensing in geology and mineral exploration.

Fig. 4.1 Characteristics of hyperspectral, multispectral and panchromatic sensors (Exelis, 2013).

The main goal of hyperspectral remote sensing is to extract the physical information of objects from the reflectance data. This technology has become an interdisciplinary science, drawing on physics, computer science, engineering, aviation, mathematics, geology, statistics, and atmospheric science.

Hyperspectral data contain spatial and spectral information from materials within a given scene simultaneously. Every pixel carries, in addition to its spatial position, spectral information in continuous, narrow spectral bands. A scanning device samples each pixel in several narrow bands at a specific spatial resolution. The hyperspectral data cube is represented in Fig. 4.2.
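The data cube can be pictured as a three-dimensional array, as in the minimal Python sketch below (an illustration only; the cube is filled with placeholder random values rather than real reflectance data).

import numpy as np

rows, cols, n_bands = 100, 100, 224                 # an AVIRIS-like cube: 224 narrow bands
wavelengths_nm = np.linspace(400, 2500, n_bands)    # band centres, roughly 10 nm apart
cube = np.random.rand(rows, cols, n_bands)          # placeholder reflectance values

# Spectral dimension: the reflectance curve of a single pixel across all bands
pixel_spectrum = cube[42, 87, :]                    # shape (224,)

# Spatial dimension: a single-band image, here the band closest to 860 nm
band_index = int(np.argmin(np.abs(wavelengths_nm - 860)))
band_image = cube[:, :, band_index]                 # shape (100, 100)

print(pixel_spectrum.shape, band_image.shape)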


Fig. 4.2 Representation of hyperspectral data sets consisting of all geographical and spectral elements (Janos, 2008).

Imaging Spectrometers/Hyperspectral Sensors:


Satellites, ships and unmanned aerial vehicles (UAVs) can all be equipped with imaging spectrometer sensors. Airborne sensors provide imagery of higher spectral and spatial resolution, whereas satellite-borne sensors have the potential to provide worldwide coverage with a defined temporal resolution (repetitiveness of observation).
The spectral wavelength range of airborne imaging spectrometers generally lies between 380-12700 nm, and for satellite sensors between 400-2500 nm. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) collects spectral data over the wavelength range from 400 to 2500 nm in 224 contiguous channels with a 10 nm spectral bandwidth.


Fig. 4.3 The imaging spectroscopy concept explained in detail. In a ground-based scene, an airborne or spaceborne imaging spectrometer collects spatial and spectral data simultaneously over a wide area (Shaw & Burke, 2003).

Airborne Imaging spectrometers:


An imaging spectrometer carried by an aircraft records electromagnetic radiation in the form of energy reflected from the Earth's surface. There are fewer hyperspectral sensors on satellite platforms than on aerial platforms. In comparison to satellite-borne sensors, airborne sensors obtain hyperspectral data with high spatial and spectral resolution; the spatial resolution is primarily determined by the altitude of the sensor, while the spectral resolution depends on the sensor characteristics. Aerial platforms have several benefits over satellite-based sensors: sensor maintenance, repair and configuration are easier, and adjustments such as the revisit time for change detection can be made to an aircraft without any trouble compared with satellites.


Fig. 4.4 Comparative evaluation of multispectral, hyperspectral and ultraspectral remote sensing sensors (Source: https://www.slideshare.net/sunilkumarmedida/multispectral-and-hyperspectral-scanning).

Sensors can acquire data in either pushbroom or whiskbroom modes. Pushbroom sensors, or along-track sensors, involve an array of detectors arranged perpendicular to the aircraft's flight path.

Whiskbroom sensors are more extensive, massive and difficult to build as compared to pushbroom scanners. Whiskbroom scanners concentrate on a portion of the full swath at any given time, allowing for higher spatial resolution. Furthermore, whiskbroom sensors have fewer detectors that require calibration than other scanner sensor systems.

NASA's Jet Propulsion Laboratory (JPL) conducted the first airborne imaging spectrometer (AIS) sensor test in November 1982. The instrument used in this survey consisted of a 32 × 32 mercury cadmium telluride detector array, which provided information in 128 spectral bands in the wavelength range of 900 to 2400 nm. The initial AIS flight operation was carried out over the Cuprite mining district, Nevada, to classify kaolinite and alunite minerals, thus proving that characteristic absorption features in spectral curves can be used to detect and identify minerals.

Fig. 4.5 Scanner platform devices used for the collection of data in hyperspectral remote sensing (Tan, 2016).

These experiments led to the development of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), which flew for the first time in 1987. In addition, two satellite-based imaging spectrometers were designed, namely the Shuttle Imaging Spectrometer Experiment (SISEX) and the High-Resolution Imaging Spectrometer (HIRIS). However, due to the Challenger tragedy in 1986 and financial constraints, neither of the instruments was ever flown. AVIRIS was the primary source of hyperspectral data for the majority of studies, and it was a huge success. The AVIRIS instrument was also improved with time and led to the development of the Airborne Visible/Infrared Imaging Spectrometer - Next Generation (AVIRIS-NG) (a joint venture of NASA and ISRO).

Fig. 4.6 Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) whiskbroom scanner (NASA/JPL-Caltech, 2014).

AVIRIS uses a scanning system which collects hyperspectral data in the direction transverse to the line of flight. It can be flown on a number of aircraft, including the NASA ER-2, which can acquire imagery from a height of 20 km with a spatial resolution of 20 m and a swath width of 10.5 km. The AVIRIS sensor collects hyperspectral data in 224 spectral bands with a spectral resolution of 10 nm in the wavelength range of 400 to 2500 nm. The first commercial imaging spectrometer is considered to be the Canadian Compact Airborne Spectrographic Imager (CASI), introduced in 1989. It was created by ITRES Corporation (www.itres.com) and has a spectral range of 400-1000 nm, covering the visible and near-infrared ranges of the electromagnetic spectrum. Depending on the altitude of the CASI sensor, it can collect hyperspectral data with a spatial resolution of 25 cm. The spectral bands and bandwidths used can be customized to suit the needs of the user.

Another important hyperspectral sensor, developed by the US Naval Research Laboratory (NRL), is the Hyperspectral Digital Imagery Collection Experiment (HYDICE). It was first flown in 1995. A high-quality airborne imaging spectrometer developed by Australia is HyMap (also known as Probe-1). Commercial imaging spectrometers covering the wavelength range of 400-2500 nm developed by other countries include AISA from Finland and DAIS-7915 from Germany.

Table 4.1: Summary of airborne imaging spectrometers.

Name | Available date | Spectral region (μm) | Number of bands | Spectral interval (nm) | FOV (degrees) | Developer
FLI | 1981 | 0.4–1.0 | 288 | 2.5 | N/A | DFO, Canada
AIS | 1983 | 1.2–2.4 | 128 | 10 | 90 | NASA/JPL, USA
AVIRIS | 1987 | 0.4–2.5 | 224 | 10 | 30 | NASA/JPL, USA
CASI | 1988 | 0.4–1.0 | 288 | 3 | 35.4 | ITRES, Canada
AMSS | 1989 | 0.4–1.05; 2.05–2.4 | 20; 44 | 32; 8 | 92.16 | Geoscan, Australia
AHS | 1991 | 0.43–0.83; 1.605–2.405 | 20; 15 | 20; 50 | 85.92 | Daedalus, USA
ASAS | 1992 | 0.4–1.0 | 68 | 10 | 19 | NASA, USA
CHRISS | 1992 | 0.425–0.85 | 125 | 3.4 | 10.3 | SAIC, USA
ROSIS | 1992 | 0.43–0.88 | 256 | 5 | 16 | DLR, Germany
AAHIS-1 | 1994 | 0.44–0.835 | 108 | 11 | 193 mrad | SETS, USA
HYDICE | 1994 | 0.4–2.5 | 210 | ~10 | 8.94 | USA
AISA | 1995 | 0.4–2.5 | 488; 254 | 4.5–14 | 40 | SPECIM, Finland
HyMap | 1998 | 0.4–2.5 | 128 | 15 | 61.3 | Australia
Probe-1 | 1998 | 0.4–2.5 | 128 | 10 | VNIR: 11; SWIR: 18 | Canada
DAIS-7915 | 2000 | 0.4–2.5 | 72 | 0.9–60 | 78 | GER, USA, and DLR, Germany
ARES | 2005 | 0.4–2.5; 8–12 | 128; 32 | 15; 130 | N/A | Australia
APEX | 2014 | 0.38–0.97; 0.94–2.50 | 114; 199 | 0.45–7.5; 5–10 | 28 | ESA, Switzerland, Belgium

Fig. 4.7 Hyperion hyperspectral image over Khirbat en-Nahas, Jordan. This area hosts an ancient copper mine, smelting sites and different rock types, which were identified and mapped using hyperspectral data (NASA, 2016).

Hyperspectral sensors for satellites were developed as a result of innovation and experimentation with airborne sensors. The calibration and data processing capabilities of a satellite-based hyperspectral sensor are close to those of airborne sensors, and it can be used
to create application data products prior to the satellite’s launch.

Spaceborne Hyperspectral Sensors:

Spaceborne hyperspectral sensors provide global coverage at regular intervals. NASA launched the Hyperion sensor on board EO-1 in November 2000 to collect hyperspectral data from space. Hyperion collects hyperspectral data in the wavelength range of 400-2500 nm in 242 spectral channels with a spectral resolution of 10 nm. Hyperion acquires data from an altitude of 705 km with 30 m spatial resolution over a 7.5 km swath and 12-bit radiometric resolution. To improve the signal-to-noise ratio (SNR), the sensor system includes two spectrometers, the first operating in the visible/near infrared (400-1000 nm) and the second in the shortwave infrared region (900-2500 nm) of the electromagnetic spectrum. Hyperspectral data from Hyperion have been shown to be effective in mining area studies, geological exploration, deforestation monitoring, environmental management and agricultural area assessment. Hyperion data have been used for accurate land asset classification and assessment, mineral discovery, crop yield assessment and contaminated area mapping, and thus highlight the application of spaceborne imaging spectroscopy in scientific applications.

The Compact High-Resolution Imaging Spectrometer (CHRIS) onboard PROBA-1 was developed by the European Space Agency and launched in 2001. The mission was designed to collect BRDF (bidirectional reflectance distribution function) data to improve spectral reflectance understanding and to test imaging spectrometer capabilities on moving small satellite platforms. CHRIS has a spectral configuration that can be adjusted from 19 to 63 bands, making it a one-of-a-kind sensor.

Hyperion and CHRIS were significant developments in spaceborne hyperspectral sensing. These sensors demonstrated the possibilities of hyperspectral data in various scientific applications. The Indian HySI, the Chinese HJ-1A and NASA's HICO all operate in the 400-950 nm spectral range (VNIR). The prototype pushbroom instrument HySI, launched by ISRO in April 2008, offers a total of 64 spectral bands and is primarily used for resource characterization and detailed studies.

The Pushbroom Hyperspectral Imager (PHI) and the Operative Modular Imaging Spectrometer (OMIS) are two hyperspectral sensors developed by the Chinese Academy of Sciences. On September 5, 2008, China launched the HJ-1A and HJ-1B small satellites, with a new imaging spectrometer (HSI) onboard the HJ-1A satellite. This sensor has a 96-hour revisit period and global coverage with a ±30-degree side-view angle. It acquires hyperspectral imagery in 115 spectral bands spanning the wavelength range of 450-950 nm with a spectral resolution of 4.32 nm and a spatial resolution of 100 m over a 50 km swath. The higher spectral resolution of the HJ-1A/HSI sensor offers better ground feature identification. It is a valuable tool for developing quantitative research applications like measuring the composition of the atmosphere, water management, and forest productivity monitoring.

NASA's Hyperspectral Imager for the Coastal Ocean (HICO) was the first hyperspectral imager designed for coastal study/research. It was made to collect data on coastal-water geometry, bottom types, surface water optical properties, and near-shore vegetation. The sensor collects hyperspectral data with a spectral resolution of 5.7 nm in the wavelength range of 400-900 nm. On September 23, 2009, HICO was docked with the International Space Station (ISS) and obtained about 10000 images during the next five years of service, before the spectrometer was damaged by an X-class solar storm in September 2014, resulting in its complete failure.

The next decade witnessed the development of several hyperspectral sensors for imaging the Earth's surface. The EnMAP satellite was developed by the German space agency, with launch planned for 2018. Other significant spaceborne missions include the joint America-Brazil Flora hyperspectral satellite, PRISMA from Italy, MSMI from South Africa, and HyspIRI from NASA JPL.

Hyperspectral capabilities are also proposed to be integrated into future Landsat programmes. Hyperspectral sensors could be used in future EO constellations such as the UK Disaster Monitoring Constellation (DMC) and Germany's RapidEye. China, Israel and Canada all have the capability to operate Earth observation satellites and have missions in the works that will be launched in the near future.

Table 4.2: Summary of space borne imaging spectrometers.

Name | Available date | Spectral region (μm) | Number of bands | Spectral interval (nm) | GSD (m) | Swath width (km) | Developer
HIRIS | 1994 | 0.400–2.500 | 192 | 11 | 30 | 30 | NASA/JPL, USA
Hyperion | 2000 | 0.356–2.576 | 242 | 10 | 30 | 7.5 | NASA/JPL, USA
CHRIS | 2001 | 0.415–1.050 | 19/63 | 5–12 | 17/34 | 13 | ESA, UK
HySI | 2008 | 0.400–0.950 | 64 | 8 | 506 | 130 | ISRO, India
HJ-1A | 2008 | 0.459–0.950 | 115 | 5 | 100 | 51 | CAST, China
HICO | 2009 | 0.380–0.960 | 128 | 5.7 | 90 | 42 | NASA/ONR, USA
Flora | 2016 | 0.400–2.500 | 200 | 10 | 30 | 150 | NASA/JPL, USA, and INPE, Brazil
PRISMA | 2017 | VNIR: 0.40–1.01; SWIR: 0.92–2.05 | VNIR: 66; SWIR: 171 | 10 | 30 | 30 | ASI, Italy
EnMAP | 2018 | VNIR: 0.42–1.00; SWIR: 0.90–2.45 | VNIR: 89; SWIR: 155 | VNIR: 8.1+1; SWIR: 12.5+1.5 | 30 | 30 | GFZ/DLR, Germany
HISUI (ALOS-3) | 2018 | VNIR: 0.40–0.97; SWIR: 0.90–2.50 | VNIR: 57; SWIR: 128 | VNIR: 10; SWIR: 12.5 | 30 | 30 | Japan
HYPXIM-CB | 2018 | 0.400–2.500 | N/A | 14 | 15 | 15 | CNES, France
HYPXIM-CA | 2020 | 0.400–2.500 | N/A | 10 | 1 | 30 | CNES, France
HyspIRI | 2020 | 0.380–2.500 | >200 | 10 | 30 | 30 | NASA/JPL, USA
HERO | >2016 | 0.400–2.500 | >200 | 10 | 30 | 30 | CSA, Canada
MSMI | >2016 | 0.400–2.350 | 200 | 10 | 15 | 15 | SunSpace, South Africa

Application of Hyperspectral Remote Sensing in Various Fields of Science:

1. Hyperspectral Remote Sensing in Agriculture and Forestry:

Crop modelling, cultivation data collection, sustainable agriculture, agronomy, soil
treatment, and wetland development are just a few of the remote sensing-based applications
that have been developed. Broadband remote sensing is limited in the discrimination of
related crops, the detection of biotic and abiotic stress factors, and the retrieval of
physicochemical and biological parameters. Hyperspectral data overcome these limitations
and help to improve crop monitoring. The wavelength range of 400-2500 nm has vast
application in the identification of surface-cover parameters that cannot be resolved by
broadband, low-spectral-resolution imaging. Thus, the high spectral resolution of
hyperspectral remote sensing proves significant in disease identification, crop discrimination
and stress identification in crops. To indicate physiological state and stress conditions,
narrow-band indices can be computed by combining spectral bands related to biophysical
parameters such as leaf area index (LAI), moisture content and mineral content. LAI and
biomass stress can be assessed very well with hyperspectral data. The management and
planning phases of forestry also make significant use of hyperspectral remote sensing. One of
the most important pieces of knowledge required for forest management and conservation is
the geographical or spatial distribution of vegetation in mixed forest canopies. Hyperspectral
remote sensing thus has significant application in land development, environmental
monitoring, forest stock assessment, and the monitoring of natural forests.
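As an illustration of the narrow-band index idea described above, the following minimal Python sketch computes a normalized-difference index from two hyperspectral bands; the band centre wavelengths, array names and layout are illustrative assumptions, not part of any particular sensor's format.

```python
import numpy as np

def narrowband_index(cube, wavelengths, band1_nm, band2_nm):
    """Compute a normalized-difference index (b1 - b2) / (b1 + b2)
    from the two hyperspectral bands closest to the requested
    centre wavelengths (in nm)."""
    wavelengths = np.asarray(wavelengths)
    i1 = int(np.argmin(np.abs(wavelengths - band1_nm)))
    i2 = int(np.argmin(np.abs(wavelengths - band2_nm)))
    b1 = cube[:, :, i1].astype(np.float64)
    b2 = cube[:, :, i2].astype(np.float64)
    return (b1 - b2) / (b1 + b2 + 1e-10)   # small constant avoids divide-by-zero

# Example: a narrow-band NDVI using ~860 nm (NIR) and ~670 nm (red) bands
# cube: reflectance array of shape (rows, cols, bands); wavelengths: band centres in nm
# ndvi = narrowband_index(cube, wavelengths, 860, 670)
```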

2. Hyperspectral Remote Sensing in Mineral Exploration and Geology:

Conventional methods of mineral identification and geological mapping are tedious,
expensive and laborious. Extensive field work, structural mapping, landform research,
petrology, mineralogy and geochemical analysis are all part of the traditional methodology,
and they require a robust laboratory database capable of detecting minor differences in
ore-grade composition. The Visible and Near-Infrared (VNIR) region shows characteristic
spectral absorption features due to the presence of transition metals such as Fe, Mn, Cu, Ni
and Cr. The shortwave infrared region is useful for identifying minerals containing
hydroxyls and carbonates, while the thermal infrared region is useful for identifying
rock-forming minerals such as quartz, feldspar, amphibole, iron-bearing minerals and
dolomite. Because of their coarse bandwidth and low spectral resolution, multispectral data
are limited in determining precise mineral composition and the relative abundance of
constituents. These limitations are overcome by hyperspectral data owing to their narrow
bandwidth and high spectral resolution. Hyperspectral remote sensing is therefore used in
geological applications such as lithological mapping, mineral exploration, mapping of
hydrothermal alteration zones and related mineral deposits, and hydrocarbon exploration.

3. Hyperspectral Remote Sensing in Ecology:

Ecological studies concern organisms and their environments, which include both
biotic and abiotic elements. Identification, mapping and management of ecosystems are
laborious because of the varied nature of biodiversity. Hyperspectral data provide abundant
spectral information about biotic and abiotic components. Hyperspectral remote sensing has
been applied successfully to obtain information about leaf pigments, water content and
chemical composition, and to discriminate between different species. The spectral library of
the United States Geological Survey is useful for validating land cover classification,
characterization and change detection. Hyperspectral remote sensing allows precise and
reliable information on ecological processes and supports the monitoring of forest areas,
grasslands and vegetation as well as land cover classification. It can estimate sensitive
ecological factors such as leaf nutrients, water content, leaf area, drought responses of plant
leaves, woody tissues, soil pollution accumulation, vegetative growth patterns, and land use /
land cover changes.

4. Hyperspectral Remote Sensing in Coastal Zone Management:

Deterioration in water quality is a major concern nowadays owing to increasing
population, industry, trade and commerce. Coastal zones form an important component of
biodiversity and require proper management and monitoring; they are recognized as an
important habitat for biodiversity. The use of hyperspectral data from airborne, satellite and
field sources provides an abundance of data for change detection, validation and
spectral library generation. The narrow spectral resolution of hyperspectral data has been
found valuable for the mapping and surveillance of coastal zones. Hyperspectral sensors can
help with coastal ecosystem and marine resource planning and monitoring, coastal
conservation, coastal water management, coastal disasters, and coastal area management.

Fig. 4.8 Application of Hyperspectral Imaging (Tan, 2016).

4.4 SUMMARY
Throughout the visible, near-IR, mid-IR and shortwave infrared portions of the
electromagnetic spectrum, hyperspectral sensors capture images in many small, contiguous spectral
bands. Hyperspectral remote sensing is a powerful tool for applications in environment, forestry,
agriculture and the geosciences. Imaging spectrometers can be carried on satellites, aircraft, UAVs and
other platforms. Airborne sensors provide higher spectral and spatial resolution, whereas space-borne
sensors have the advantage of repetitive and global coverage. Hyperspectral sensors operate in two
modes: push broom (along-track scanning) or whisk broom (across-track scanning).


NASA-JPL flew the first Airborne Imaging Spectrometer (AIS) in November 1982, and
hyperspectral data were collected over the Cuprite mining district in Nevada. This led to the
development of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and of satellite-based
imaging spectrometers. Hyperspectral data have a wide range of uses in research. Crop forecasting,
cropping systems, precision farming, horticulture, irrigation management, and watershed production
are all application areas of hyperspectral remote sensing. In mineral exploration and geology,
hyperspectral remote sensing finds application in the identification and mapping of hydrothermally
altered/weathered minerals and zones, and is also used in reconnaissance surveys. Hyperspectral data
are used in the identification, mapping, classification and management of biodiversity. Hydrological
concerns such as deterioration of water quality and coastal zone management issues can also be
addressed by applying hyperspectral remote sensing.

4.5 GLOSSARY
ENVI- Environment for Visualizing Images

VNIR- Visible Near Infrared

SWIR- Shortwave Infrared

LWIR- Longwave Infrared

AIS- Airborne Imaging Spectrometers

SISEX- Shuttle Imaging Spectrometer Experiments

HIRIS- High-Resolution Imaging Spectrometer

AVIRIS- Airborne Visible Infrared Imaging Spectrometer

HYDICE- Hyperspectral Digital Imagery Collection Experiments

4.6 ANSWER TO CHECK YOUR PROGRESS


1. State the need for the development of hyperspectral sensors.
2. Name airborne and space borne hyperspectral sensors, five each.
3. Application of hyperspectral remote sensing in various field of science.
4. Discuss ultra spectral remote sensing.


4.7 REFERENCES
1. (NASA/JPL-Caltech). 2014. AVIRIS Airborne Visible/ Infrared Imaging Spectrometer.
The National Aeronautics and Space Administration.
2. Exelis. 2013. Vegetation Analysis: Using Vegetation Indices in ENVI (2013).
3. Goetz, A. F.H., Rock, B. N., & Rowan, L. C. 1983. Remote sensing for exploration: an
overview. Economic Geology.
4. Goetz, Alexander F.H., Vane, G., Solomon, J. E., & Rock, B. N. 1985. Imaging
spectrometry for earth remote sensing. Science, 228: 1147–1153.
5. Janos, T. 2008. Geoinformatics | Digitális Tankönyvtár.
6. NASA. 2016. Earth Observing-1: Ten Years of Innovation.
7. Shaw, G. A., & Burke, H. K. 2003. Spectral Imaging for Remote Sensing. LINCOLN
LABORATORY JOURNAL (Vol. 14).
8. Tan, S.-Y. 2016. Developments in Hyperspectral Sensing. In: Handbook of Satellite
Applications. Springer New York: pp. 1–21.

4.8 TERMINAL QUESTIONS


1. What is hyperspectral remote sensing?
2. Differentiate between multispectral and hyperspectral remote sensing.
3. Discuss development of hyperspectral sensors. State the spectral, spatial and wavelength
range of important hyperspectral sensors.
4. Differentiate between airborne and space borne hyperspectral remote sensing and its
advantages and disadvantages.


UNIT 5 - CHARACTERISTICS OF HYPERSPECTRAL DATA, SPECTRAL IMAGE LIBRARY

5.1 OBJECTIVES
5.2 INTRODUCTION
5.3 CHARACTERISTICS OF HYPERSPECTRAL DATA,
SPECTRAL IMAGE LIBRARY
5.4 SUMMARY
5.5 GLOSSARY
5.6 ANSWER TO CHECK YOUR PROGRESS
5.7 REFERENCES
5.8 TERMINAL QUESTIONS


5.1 OBJECTIVES
To identify, map and characterize a material remotely, we need to understand the
characteristics of hyperspectral data.

After going through this module, the learner will be able to:

 Gain a basic understanding of hyperspectral datasets.
 Know the range of the different spectral regions used in hyperspectral datasets.
 Recognize the reflectance spectra of characteristic materials.
 Identify the absorption features of minerals and vegetation.
 Explain the causes of spectral absorption features.
 Understand the USGS spectral image library.

5.2 INTRODUCTION
The remote identification and mapping of surface materials are made possible by
hyperspectral datasets. A hyperspectral sensor acquires data in 100 to 200 spectral bands with
narrow bandwidths (5-10 nm), which enables us to build a continuous reflectance spectrum for
each pixel in a scene. The main characteristics of hyperspectral datasets are their spectral and
radiometric resolution. Multispectral datasets, because of their broad spectral bands, allow
only the detection of materials, whereas the fine sampling interval (narrow spectral resolution)
of hyperspectral sensors provides data with which we can not only identify a material but also
differentiate between materials and rocks.
The USGS spectral library plays an important role in the remote identification of materials. It
includes spectral signatures from a variety of sources such as minerals, vegetation and
man-made materials, and serves as a collection of reference spectra. Hyperspectral data can be
compared with the materials present in the spectral library, which helps in the identification of
different materials including minerals, vegetation, construction materials and chemicals.

5.3 CHARACTERISTICS OF HYPERSPECTRAL DATA, SPECTRAL IMAGE LIBRARY
The electromagnetic spectrum can be divided into thousands of narrow bands, each
representing a distinct portion of the incoming light energy. Hyperspectral sensors sample this
spectrum in many such wavelength intervals, which enables us to classify objects based on
their characteristic spectral reflectance curves.


Hyperspectral Data and wavelength ranges:

Band and Wavelength - A set of adjacent wavelengths is referred to as a band. For example,
the wavelength values between 2205 nm and 2210 nm may represent one band as collected
by an imaging spectrometer. For light within that band, the hyperspectral sensor records the
reflected light energy in each pixel. When we work with multispectral or hyperspectral
datasets, the centre wavelength value is stated as the band descriptor. For example, for the
band spanning 2205-2210 nm, the centre would be 2207.5 nm.
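As a small illustration of the band/centre-wavelength bookkeeping described above, the sketch below computes band centres from band edges; the values and variable names are illustrative only.

```python
# Band edges in nm for three hypothetical contiguous bands
band_edges = [(2200, 2205), (2205, 2210), (2210, 2215)]

# The centre wavelength of each band is simply the midpoint of its edges
band_centres = [(lo + hi) / 2.0 for lo, hi in band_edges]
print(band_centres)   # [2202.5, 2207.5, 2212.5]
```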

Fig. 5.1 Hyperspectral sensors collect information within defined wavelength region of the
electromagnetic spectrum (Source: https://www.neonscience.org/resources/learning-
hub/tutorials/hsi-hdf5-r).

A hyperspectral dataset is a collection of information from the electromagnetic
spectrum of the observed area. Different materials have characteristic spectral signatures in
the electromagnetic spectrum.

Hyperspectral remote sensing can identify and discriminate between materials better
than conventional remote sensing techniques because it acquires images in many bands,
which allows a reflectance spectrum to be formed for every pixel in the scene. Advances in
sensor technology have made the development of hyperspectral sensors possible.
Hyperspectral sensors record the image in many tens to hundreds of bands with bandwidths
on the order of 0.01 µm and can construct a continuous reflectance spectrum for each pixel
in the scene.

a. Multispectral imaging b. Hyperspectral imaging

Fig 5.2. Principle of hyperspectral imaging: a. Multispectral sensors have discrete spectral
bands, whereas b. hyperspectral sensors produce a continuous reflectance spectrum. (Source:
https://www.edmundoptics.com/knowledge-center/application-notes/imaging/hyperspectral-
and-multispectral-imaging/)


Multispectral vs. Hyperspectral data:

The key difference between multispectral and hyperspectral data lies in the number of
bands and how narrow the bands are. A typical multispectral sensor, for example, includes
red, green, blue, near-infrared, and shortwave infrared channels.

(Source: https://gisgeography.com/multispectral-vs-hyperspectral-imagery-explained/)

Hyperspectral data consist of much narrower bands (10-20 nm), and a hyperspectral dataset
may contain hundreds or thousands of bands. Generally, hyperspectral bands do not carry
descriptive channel names.

(Source: https://gisgeography.com/multispectral-vs-hyperspectral-imagery-explained/)

Before going further, we need to understand certain terms and their definition.

Spectroscopy: The study of the interaction of matter and electromagnetic radiations is known
as spectroscopy.

Spectroscopy is based on the premise that different materials are different due to differences
in their constituents and structures, and as a result, they interact with light differently, causing
them to look different.

Visible Spectrum:

The visible spectrum ranges from 0.4 µm to 0.7 µm of the electromagnetic spectrum.

Color    Wavelength range (µm)
Violet   0.4 – 0.446
Blue     0.446 – 0.500
Green    0.500 – 0.578
Yellow   0.578 – 0.592
Orange   0.592 – 0.620
Red      0.620 – 0.70

Infrared Spectrum:
The infrared region encompasses the wavelength range from 700 nm to about 10^5 nm
(0.7 µm to 100 µm). This entire wavelength region is subdivided into the reflected infrared
and the thermal infrared. The reflected infrared region covers approximately 0.7 to 3.0 µm,
and 3.0 to 100 µm is the wavelength range of the thermal infrared region. Radiation in the
reflected IR region is used for remote sensing in much the same way as radiation in the
visible region.

Fig 5.3. The electromagnetic spectrum, highlighting the visible and infrared regions. (Source:
https://lotusgemology.com/index.php/2-uncategorised/294-ftir-in-gem-testing-ftir-intrigue-
lotus-gemology).
Near Infrared (NIR):
The near-IR region of the electromagnetic spectrum ranges from 0.7 to 1.1 µm. Water absorbs
near-IR radiation, so these wavelengths can be used to delineate land and water boundaries.
Plants strongly reflect near-infrared light, with healthy plants reflecting more than stressed
plants. Since NIR light can penetrate haze, it can aid in the recognition of detail in a smoky or
hazy scene.
Shortwave Infrared (SWIR):
Shortwave infrared covers the wavelength range between 1.1 and 3.0 µm. Water absorption
bands fall in three regions: around 1400, 1900 and 2400 nm. Hyperspectral image data appear
darker at these wavelengths when more water is present, including in the soil; as a result,
SWIR helps in estimating the amount of water present in soil and plants.
The SWIR bands can also be used to differentiate between cloud types (water vs. ice
clouds) as well as between clouds, snow, and ice, which all appear white in visible light.
Recently burned ground reflects strongly in the SWIR bands, making them useful for
detecting fire damage. In the SWIR portion of the electromagnetic spectrum, active fires,
lava flows, and other very hot features “glow”.
Instantaneous field of view (IFOV):
The IFOV is the sensor's angular cone of vision, which defines the area on the Earth's surface
that can be "seen" from a given altitude at any given time.
Spatial Resolution:
The details discernible in an image are dependent on the spatial resolution of the sensor and
refer to the smallest possible feature that can be detected.
Spectral Resolution:
The capacity of a sensor to specify fine wavelength intervals is referred to as spectral
resolution. The narrower the wavelength ranges for a specific channel or band, the finer the
spectral resolution. Hundreds of very small spectral bands are detected by hyperspectral
sensors around the visible, near-infrared, and mid-infrared portions of the electromagnetic
spectrum.
Radiometric Resolution:
An imaging system's radiometric resolution defines its ability to distinguish very subtle
differences in light. The higher a sensor's radiometric resolution the more sensitive it is to
minor variations in reflected or emitted radiation.
Temporal resolution:
The temporal resolution refers to the repetitiveness of observation over a region and is equal
to the time interval between successive observations of the same area. It is determined by the
sensor's swath width and by the satellite's altitude and orbital parameters. The time interval
between two identical passes over the same location is also known as the repeat cycle or
repetition rate.
The defence and civilian use of high-definition electro-optical imagery has been continuously
increasing. Airborne and space-borne data are used to identify and classify the materials
present on the Earth’s surface; identification of objects at the surface is possible through
analysis of their characteristic reflectance spectra.
The Absorption Process:
Beer's law governs the behaviour of a photon when it enters an absorbing medium.
Beer's law states that:

I = I0 e^(-kx)

where I is the observed intensity, I0 is the original light intensity, k is the absorption
coefficient, and x is the distance travelled through the medium. The absorption coefficient is
measured in cm^-1, and x is measured in cm.
The absorption coefficient is related to the complex index of refraction by the equation:

k = 4πK/λ

where λ is the wavelength of light, n is the index of refraction of the sample and K is the
extinction coefficient.
The reflection of light, R, incident onto a plane surface is described by the Fresnel
equation:

R = ((n - 1)^2 + K^2) / ((n + 1)^2 + K^2)
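To make the relations above concrete, here is a minimal Python sketch that evaluates Beer's law transmission and the Fresnel reflectance from the quantities defined in the text; the numeric inputs are arbitrary illustrative values.

```python
import math

def beer_lambert_intensity(I0, k_per_cm, x_cm):
    """Beer's law: I = I0 * exp(-k * x), with k in cm^-1 and x in cm."""
    return I0 * math.exp(-k_per_cm * x_cm)

def absorption_coefficient(K, wavelength_cm):
    """k = 4*pi*K / lambda, where K is the extinction coefficient."""
    return 4.0 * math.pi * K / wavelength_cm

def fresnel_reflectance(n, K):
    """R = ((n - 1)^2 + K^2) / ((n + 1)^2 + K^2)."""
    return ((n - 1.0) ** 2 + K ** 2) / ((n + 1.0) ** 2 + K ** 2)

# Illustrative numbers only: a 1 mm path through a weakly absorbing medium
print(beer_lambert_intensity(I0=1.0, k_per_cm=2.0, x_cm=0.1))   # ~0.82
print(fresnel_reflectance(n=1.5, K=0.01))                       # ~0.04
```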
Causes of Absorption:
The general causes of absorption are electronic and vibrational processes. Burns (1993)
explores the finer details of electronic processes, while Farmer (1974) focuses on vibrational
processes.
1. Electronic Processes:
Isolated ions and atoms have discrete energy states. Absorption of a photon at a specific
wavelength causes a change from a lower to a higher energy state, while emission of a photon
accompanies a change to a lower energy state. The absorbed and emitted photons need not
have the same wavelength; for example, the Earth absorbs at shorter wavelengths and re-emits
at longer wavelengths.
a. Crystal Field Effects:
Transition elements (Ni, Cr, Co, Fe, etc.) show characteristic absorption spectra due to their
unfilled electron shells. In an isolated ion, the d-orbitals of a transition element all have equal
energies, but when the atom sits in a crystal field the energy levels split. This splitting of
orbital energy states allows an electron to move from a lower to a higher energy level by
absorbing a photon whose energy equals the energy difference between the two states. The
energy levels are determined by the valency, coordination number, and site symmetry of the
atom. Differences in crystal structure between minerals lead to variations in the crystal field;
therefore the amount of splitting varies, the same ion can produce different absorptions, and
specific minerals can be identified by spectroscopy.
b. Charge Transfer Absorptions:
Absorption of a photon can cause an electron to move between ions, a process known as
charge transfer. Fe2+ and Fe3+ are an example of the same metal in different valence states.
Absorption bands caused by charge transfer are generally diagnostic and lead to the
identification of minerals; charge transfer is the main reason for the reddish color of
iron-bearing minerals.
c. Conduction bands:
In many minerals there are two energy levels in which electrons may reside: a higher level,
called the conduction band, in which electrons move freely throughout the lattice, and the
valence band, in which electrons are bound to individual atoms. The difference between these
energy levels is referred to as the band gap. The band gap is very small or absent in metals
and very large in dielectrics. In semiconductor materials the band gap corresponds to the
energy of visible and near-infrared wavelengths; it is because of such a band gap that sulphur
has its yellow color.
d. Color Centers:
A color center is produced by irradiation of an imperfect crystal. Lattice defects disturb the
periodicity of the crystal, and the movement of an electron into the defect requires photon
energy. The yellow, purple and blue colors of fluorite are due to color centers.


Fig. 5.4 Spectral signature diagram. The width of the black bars indicate the relative widths
of absorption bands (Hunt, 1977).

2. Vibrational Processes:
The bonds in a molecule or crystal lattice behave like springs with attached weights, and the
entire system can vibrate. The frequency of vibration depends on the strength of the bonds
and on the masses of the attached atoms.


Fig. 5.5 Reflectance spectra of calcite, dolomite, beryl, gypsum, alunite, rectorite, and jarosite
showing vibrational bands due to OH, CO3 and H2O (R. N. Clark, 1999).


Fig. 5.6 Reflectance spectra of phlogopite, biotite, pyrophyllite, muscovite, epidote, and illite
showing vibrational bands due to OH and H2O (R. N Clark, 1999).


Fig. 5.7 Reflectance spectra of hectorite, halloysite, kaolinite, chrysotile, lizardite, and
antigorite, showing vibrational bands due to OH (R. N Clark, 1999).


Fig. 5.8 Near 2200 nm, there are small spectral variations among the kaolinite group minerals.
Kaolinite CM9 is well crystallised (WXL), while KGa-2 is poorly crystallised (PXL). The
spectral bandwidth is 1.9 nm and the sampling interval is 0.95 nm. The original reflectance
values range from about 0.5 to 0.8 (R. N. Clark, 1999).

Spectra of various minerals and other materials:

Organics - Organic materials are found all over the Earth and are also distributed throughout
the solar system, and they are important in several environmental problems. The C-H stretch
fundamental occurs near 3.4 µm, the first overtone near 1.7 µm, and a combination band near
2.3 µm. The absorption feature near 2.3 µm can be confused with OH and carbonate spectral
absorptions in minerals.


Fig. 5.9 Complex absorptions in the CH-stretch fundamental spectral region can be seen in
the transmittance spectra of organics and mixtures.(R. N Clark, 1999).


Fig. 5.10 Spectral reflectance curves of montmorillonite and of montmorillonite mixed with
benzene, toluene, and trichlorethylene. The organics have a CH combination band near 2.3 µm,
while montmorillonite has an absorption feature at 2.2 µm. The first overtone of the CH stretch
can be seen at 1.7 µm, and the second overtone at 1.15 µm (R. N. Clark, 1999).

Ices - Water molecules in minerals exhibit diagnostic absorption bands, and ice, which is also
classified as a mineral, exhibits strong absorption bands. Figure 5.11 depicts the spectra of
solid H2O, CO2 and CH4. Because of its hexagonal structure, the H2O spectrum shows a wider
range of absorptions than the others: the hydrogen bonds are orientationally disordered, and
this disorder broadens the absorptions. The solar system contains many ices.

Fig. 5.11 Spectral reflectance curve of solid carbon dioxide (CO2), methane (CH4) and water
(H2O) (R. N Clark, 1999).


Vegetation - Vegetation occurs in two general forms: green, wet (photosynthetic) vegetation
and dry, non-photosynthetic vegetation. Spectra of these two vegetation forms are compared
with a soil spectrum in Fig. 5.12. The absorptions in the NIR spectra of green vegetation are
caused by vibrational processes.

Fig. 5.12 Spectral reflectance curves of green vegetation, dry vegetation and soil(Roger N
Clark, 1995).
The dry, non-photosynthetic vegetation spectrum shows absorptions due to the presence of
cellulose, lignin, and nitrogen.

Spectral Image Library:


The spectral library includes samples of minerals, rocks, soils, physically constructed as well
as mathematically computed mixtures, plants, vegetation, micro-organisms and man-made
materials. These spectra are used for the identification, mapping and remote detection of
similar materials.
Materials in the spectral library:
The spectra of the following materials constitute the spectral library:
1. Minerals
2. Elements
3. Rocks, mineral mixtures, coatings, etc.
4. Liquids, mixtures and other volatiles, including frozen volatiles


5. Manufactured chemicals
6. Vegetation
7. Micro-Organism


5.4 SUMMARY
Hyperspectral sensors collect information in 100 to 200 spectral bands with relatively narrow
bandwidths (5-10 nm), which allows a continuous reflectance spectrum to be built for each
pixel in a scene. The main difference between hyperspectral and multispectral data lies in
the number of bands and how narrow the bands are. The characteristic absorption features
observed in a reflectance spectrum are caused by electronic or vibrational absorption
processes. Electronic absorption takes place through changes in the energy states of electrons
and dominates the VNIR range of the electromagnetic spectrum. Vibrational absorption arises
from vibrations of the bonds and the attached masses. Vegetation occurs in two general forms,
green (wet, photosynthetic) and dry non-photosynthetic, and the near-infrared region of its
spectrum is dominated by liquid-water absorptions. The USGS spectral image library is used
for validation and reference purposes. It contains spectra of common materials including
minerals, elements, solids, rocks, mixtures, coatings, liquids, artificial materials, plants,
vegetation and micro-organisms.

5.5 GLOSSARY
Band - A group of wavelengths.
Spectroscopy - Spectroscopy is the study of the interaction between matter and
electromagnetic radiations.
Visible Spectrum - covers a range from approximately 0.4 µm to 0.7 µm.
Infrared Spectrum - Covers the wavelength region of approximately 0.7 µm to 100 µm.
Near Infrared (NIR) - Near-IR covers the wavelength range of 0.7 to 1.1 µm.
Shortwave Infrared (SWIR) - It covers the wavelength range between 1.1 to 3.0 µm.
Instantaneous field of view (IFOV) - Angular cone of visibility of the sensor and
determines the area on the Earth’s surface which is “seen” from a given altitude at one
particular moment in time.
Spatial Resolution - Refers to the smallest possible feature that can be detected by a sensor.
Spectral Resolution - It describes the ability of a sensor to define fine wavelength intervals.
Radiometric Resolution - It describes the ability of a sensor to discriminate very slight
differences in the energy.
Temporal resolution - It refers to the repetitiveness of observation over an area, and is equal
to the time interval between successive observation.


5.6 ANSWER TO CHECK YOUR PROGRESS


1. Discuss differences between multispectral and hyperspectral remote sensing.
2. Describe the characteristics of hyperspectral dataset.
3. Describe spectral, spatial, temporal and radiometric resolution for hyperspectral sensors.

5.7 REFERENCES
1. Burns, R. G. 1993. Mineralogical Applications of Crystal Field Theory. Second edition.

2. Clark, R. N. 1999. Spectroscopy of rocks and minerals, and principles of spectroscopy.
In: Remote Sensing for the Earth Sciences: Manual of Remote Sensing.

3. Clark, R. N., & Swayze, G. A. 1995. Mapping minerals, amorphous materials, environmental
materials, vegetation, water, ice and snow, and other materials: The USGS Tricorder algorithm.
In: Summaries of the Fifth Annual JPL Airborne Earth Science Workshop, Volume 1: AVIRIS
Workshop.

4. Hunt, G. R. 1977. Spectral signatures of particulate minerals in the visible and near
infrared. Geophysics.

5.8 TERMINAL QUESTIONS


1. Discuss Hyperspectral and ultra spectral remote sensing datasets.
2. Discuss reflectance spectra of vegetation and minerals.
3. Discuss USGS spectral image library.


UNIT 6 - HYPERSPECTRAL DATA INTERPRETATION

6.1 OBJECTIVES
6.2 INTRODUCTION
6.3 HYPERSPECTRAL DATA INTERPRETATION
6.4 SUMMARY
6.5 GLOSSARY
6.6 ANSWER TO CHECK YOUR PROGRESS
6.7 REFERENCES
6.8 TERMINAL QUESTIONS


6.1 OBJECTIVES
The main objective of any scientific measurement is to transform it into useful information
that can be applied to the management of natural resources. After going through this module,
one can enhance their knowledge about:

 Steps in the processing of hyperspectral data and its transformation into useful information
 Methods of interpretation of hyperspectral datasets
 Classification techniques for the mapping of materials
 Interpretation of hyperspectral data for the mapping of alteration zones
 Application of hyperspectral remote sensing in the mapping of volcanogenic massive
sulphide deposits
 Application of hyperspectral remote sensing in hydrocarbon exploration, its relationships
and interpretation

6.2 INTRODUCTION
Thanks to recent advancements in remote sensing and improvements in hyperspectral
sensors, these data can now be used by people who are not spectral remote sensing experts.
Previously, hyperspectral data were used primarily in geological applications; conservation,
earth sciences, irrigation, and forest management are now among the disciplines that can
benefit from hyperspectral remote sensing.

Hyperspectral remote sensing offers more precise detail than multispectral imaging,
thus allowing the identification and differentiation of spectrally distinct materials. A
hyperspectral sensor collects data in a series of contiguous narrow bands. Because of the huge
volume and high spectral resolution of hyperspectral data, traditional techniques for
interpreting and processing remote sensing data are no longer sufficient on their own. The
following sections therefore deal with the pre-processing and interpretation of hyperspectral
datasets, together with some case studies and applications of hyperspectral data in various
fields.

6.3 HYPERSPECTRAL DATA INTERPRETATION


A hyperspectral dataset contains a huge volume of information, with narrow spectral
resolution in hundreds of contiguous spectral bands. When there are hundreds of spectral
bands in the data, conventional image analysis and data interpretation procedures make image
processing complicated. On the other hand, the data contain enough information to enable
analysis based on spectroscopic concepts. It is also important to understand the limitations of
traditional techniques, as those techniques are still used to interpret hyperspectral data.

The large data volume does not by itself pose major processing challenges, as modern
computing systems are designed to handle such data easily. It is nevertheless instructive to
compare the data volumes of multispectral and hyperspectral products, for example Landsat
Thematic Mapper versus AVIRIS. The most significant differences are the number of
wavebands (7 versus 224) and the radiometric resolution (8 versus 10 bits per pixel per band).
The relative data volumes per pixel are therefore 7×8 : 224×10, i.e. 56 : 2240, so AVIRIS data
carry 40 times as many bits per pixel as TM data. Consequently, hyperspectral data storage
and transmission need to be addressed, and appropriate compression techniques should be
used.

Redundancy in Hyperspectral dataset:

We deal with a great deal of redundant data in our everyday lives. For example, if we remove
certain letters from a word, we can often still recover its meaning: "rmtesensg" can be
recognized as "remote sensing" because there are enough redundant letters that removing
some does not affect the understanding. The same principle applies to hyperspectral data
recorded by imaging spectrometers: the information content of the bands generated for a
given pixel overlaps significantly.

To visualize spectral redundancy in an image (or in a portion of the area of interest), the
band-to-band correlation matrix is formed. High correlation between band pairs indicates a
high degree of redundancy. Because there are so many bands in a hyperspectral dataset, it is
convenient to show the inherent correlations (redundancies) pictorially, as in Fig. 6.1, where a
grey scale is used to describe correlation levels. Groups of correlated bands are readily
identified from this representation.
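The band-to-band correlation matrix described above can be computed with a few lines of NumPy; the sketch below assumes a reflectance cube arranged as (rows, cols, bands), which is an illustrative convention rather than a fixed file format.

```python
import numpy as np

def band_correlation_matrix(cube):
    """Return the bands x bands correlation matrix of a hyperspectral cube."""
    rows, cols, bands = cube.shape
    X = cube.reshape(rows * cols, bands).astype(np.float64)
    # np.corrcoef treats each row as a variable, so pass the transposed matrix
    return np.corrcoef(X.T)

# Example usage with a random cube standing in for real data:
# cube = np.random.rand(100, 100, 196)
# R = band_correlation_matrix(cube)        # shape (196, 196)
# high redundancy shows up as |R[i, j]| close to 1 away from the diagonal
```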


Fig. 6.1 (a) Correlation matrix of 196 wavebands encompassing wavelength range of 400 to
2400 nm for the AVIRIS Jasper Ridge image (white portion describes correlations of 1 or -1,
whereas black indicates a correlation of 0). (b) Output image of edge detecting the correlation
matrix(Richards, 2013).

The Necessity for Calibration of hyperspectral sensors:

Because of the fine spectral resolution of hyperspectral datasets, atmospheric absorption
features are clearly visible in the recorded data. In order to obtain the spectral reflectance
curve of the ground cover, these atmospheric disturbances must be removed from the data.
The high spectral resolution means that the observed spectra can be interpreted physically,
but the modulating influence of the solar spectrum must first be removed. Multispectral data,
by contrast, are usually corrected only for atmospheric scattering and transmittance.

The Problem of Dimensionality: The Hughes Phenomenon:

Hyperspectral data can have hundreds or even thousands of spectral bands. As the number of
narrow bands increases, the number of samples (training pixels) needed for statistically
reliable classification also increases rapidly, making it difficult to address this requirement
adequately. Consider the following scenario: if we needed to identify 10 ground cover types
using hundreds or thousands of hyperspectral narrow bands, we would require very large
training samples for each class to ensure statistical accuracy of classification, whereas
multispectral broadband data can be classified with relatively small training samples per
class. On the other hand, the larger dimensionality of hyperspectral data enables a larger
number of classes to be separated, and the ability to distinguish complex land cover groups
with hyperspectral narrowband datasets is a real advantage. Numerical accuracy, however,
can only be preserved if each class has sufficient training samples to train the classifier and
an equivalent number of samples to assess class accuracy. This behaviour is called the
Hughes phenomenon, or the curse of data dimensionality (Thenkabail, 2014). With the
introduction of methods for collecting image data in (near) real time and the development of
powerful computers and smart algorithms, the practical impact of the Hughes phenomenon
can be reduced.

To further illustrate the challenge, a basic example based on estimating a reliable linear
separating surface may be used. Fig. 6.2 shows three separate training sets for the same
two-dimensional data set. The first data set has only a single training pixel per class: a
separating surface can be created, but it may not be accurate. With two training pixels per
class the separating surface can be estimated better, but we will not get good estimates of the
parameters of a supervised classifier until we have many pixels per class relative to the
number of channels in the data (Fig. 6.2 (c)).

Fig. 6.2 Illustration of the effect of having adequate training samples per class to ensure
accurate separating surface estimation. When too few pixels are used (a) good separation of
the training data is possible but the classifier performs poorly on the testing data. Large
numbers of (randomly positioned) training pixels generate a surface that also performs well
for testing data (c)(Richards, 2013).


Data Calibration Techniques:

1. Detailed Radiometric Correction:

Scattering and absorption in the Earth's atmosphere affect the radiance measured by a
hyperspectral sensor. To obtain surface reflectance spectra from hyperspectral data, detailed
radiometric correction is needed. Since hyperspectral datasets cover the wavelength range of
400 to 2500 nm, which includes water absorption features, and have high spectral resolution,
a systematic processing method consisting of three steps is required:

 Compensation for the shape of the solar spectrum. The measured radiances are divided by
the solar irradiances above the atmosphere to give the apparent reflectance of the Earth.
 Compensation for atmospheric gaseous transmission and for molecular and aerosol
scattering. Simulation of these atmospheric effects allows the apparent reflectance to be
converted into scaled surface reflectance.
 Conversion of scaled surface reflectance to real surface reflectance after taking into
account any topographic influence. If no topographic data are available, real reflectance
is assumed to equal scaled reflectance provided the surfaces of interest are Lambertian.

2. Data normalization:

If detailed radiometric correction is not possible, a data normalization approach may be used
instead. It compensates for multiplicative effects, such as topography and the solar spectrum,
in hyperspectral data. This can be done using Log Residuals (A. A. Green & Craig, 1985),
which depend on the link between reflectance and radiance (raw data):

x_{i,n} = T_i R_{i,n} I_n ,   i = 1, ..., K;   n = 1, ..., N

where x_{i,n} is the radiance of pixel i in waveband n, T_i is the topographic factor (assumed
constant across all wavelengths), R_{i,n} is the real reflectance of pixel i in waveband n, and
I_n is the illumination factor (assumed independent of pixel). K and N are the total number of
pixels in the image and the total number of bands, respectively.
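A minimal sketch of a Log Residuals normalization under the multiplicative model above: the per-pixel (topographic) and per-band (illumination) factors are removed by subtracting the pixel mean and the band mean of the log data. The array layout and variable names are illustrative assumptions.

```python
import numpy as np

def log_residuals(cube):
    """Log Residuals normalization of a hyperspectral cube (rows, cols, bands).
    Removes multiplicative topographic (per-pixel) and illumination (per-band)
    factors under the model x_{i,n} = T_i * R_{i,n} * I_n."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    logX = np.log(np.clip(X, 1e-10, None))          # guard against log(0)
    pixel_mean = logX.mean(axis=1, keepdims=True)   # average over bands (per pixel)
    band_mean = logX.mean(axis=0, keepdims=True)    # average over pixels (per band)
    grand_mean = logX.mean()
    Y = logX - pixel_mean - band_mean + grand_mean  # log residual
    return Y.reshape(rows, cols, bands)
```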


3. Approximate Radiometric Correction:

Approximate corrections are applicable to some extent in hyperspectral remote sensing; one
such approach is the Empirical Line procedure (Roberts, Yamaguchi, & Lyon, 1985). One
bright and one dark spectrally uniform target in the region of interest are chosen, and their
reflectances are measured in the laboratory or in the field. The radiance spectra of both targets
are extracted from the image and related to the true reflectances using linear regression. The
resulting gain and offset for each band are then applied to all pixels in the image to estimate
their reflectance.
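The sketch below illustrates the empirical line idea for a single band, assuming one bright and one dark calibration target of known reflectance; with only two targets the regression reduces to a gain and an offset. The names and numbers are illustrative.

```python
import numpy as np

def empirical_line_band(band_radiance, radiance_targets, reflectance_targets):
    """Fit reflectance = gain * radiance + offset from calibration targets
    (at least two, e.g. one bright and one dark) and apply it to a band."""
    radiance_targets = np.asarray(radiance_targets, dtype=np.float64)
    reflectance_targets = np.asarray(reflectance_targets, dtype=np.float64)
    gain, offset = np.polyfit(radiance_targets, reflectance_targets, deg=1)
    return gain * band_radiance.astype(np.float64) + offset

# Example: dark target radiance 120 -> reflectance 0.05, bright target 900 -> 0.60
# refl_band = empirical_line_band(band_radiance, [120.0, 900.0], [0.05, 0.60])
```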

Pre-processing of Hyperspectral dataset:

The large number of narrow, contiguous bands makes hyperspectral data sensitive to
radiometric and atmospheric errors. Hyperspectral data therefore require pre-processing,
which includes the removal of atmospheric effects. The steps in the pre-processing of
hyperspectral datasets are shown in Fig. 6.3.

Fig. 6.3 Pre-processing steps in hyperspectral remote sensing (Source:


https://www.l3harrisgeospatial.com/docs/spectralhourglasswizard.html).


1. Atmospheric Correction:

FLAASH (Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes) atmospheric
correction

FLAASH is a MODTRAN-based atmospheric correction software developed by the Air Force
Phillips Laboratory, Hanscom AFB, and Spectral Sciences, Inc. to support hyperspectral and
multispectral sensors operating in the UV, visible and IR. The model accounts for atmospheric
effects associated with surface albedo, surface altitude, water vapour column and aerosols,
while keeping computational time requirements modest.

FLAASH provides the following capabilities:

 It can be applied to AVIRIS, HYDICE, AVIRIS-NG and similar near-IR/visible/UV sensors;
 A graphical user interface for performing MODTRAN-4 spectral calculations, including
data simulations;
 Atmospherically corrected images (i.e., surface spectral reflectance) for non-thermal
wavelengths (mid-IR through UV), including an image-sharpening adjacency effect correction.

2. Minimum Noise Fraction (MNF):

The minimum noise fraction transform reduces the dimensionality of the data. Dimensionality
reduction is the process of converting high-dimensional data into a lower-dimensional
representation without sacrificing useful detail, and it is a necessary step to enhance analytical
accuracy. On the basis of the estimated noise statistics, MNF reduces the dimensionality of
the dataset and segregates the noise. For the calculation of the noise statistics, the
nearest-neighbour method given by Andrew A. Green et al. (1988) is used.
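A minimal MNF sketch, assuming the noise covariance is estimated from horizontal nearest-neighbour (shift) differences; this is noise-whitening followed by a principal components transform, and the array layout is an illustrative assumption rather than any particular software's API.

```python
import numpy as np

def mnf_transform(cube):
    """Minimum Noise Fraction of a hyperspectral cube (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)

    # Noise estimate from differences between horizontally adjacent pixels
    diff = (cube[:, :-1, :] - cube[:, 1:, :]).reshape(-1, bands) / np.sqrt(2.0)
    noise_cov = np.cov(diff, rowvar=False)
    data_cov = np.cov(X, rowvar=False)

    # Whiten the noise, then diagonalize the whitened data covariance
    w, V = np.linalg.eigh(noise_cov)
    F = V / np.sqrt(np.maximum(w, 1e-12))        # noise-whitening matrix
    w2, V2 = np.linalg.eigh(F.T @ data_cov @ F)
    order = np.argsort(w2)[::-1]                 # components sorted by decreasing SNR
    T = F @ V2[:, order]

    mnf = (X - X.mean(axis=0)) @ T
    return mnf.reshape(rows, cols, bands)
```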

3. Pixel Purity Index (PPI):

The MNF output is subjected to a pixel purity index analysis in order to extract the purest
pixels. PPI is used to retrieve spectrally pure end members from hyperspectral data. The
extreme (purest) pixels are selected within the region of interest (ROI). The n-dimensional
visualization approach is then applied to the ROI on the MNF image, extracting pure pixels
and evaluating their spectra.


4. Spectral Angle Mapper (SAM):


The spectral angle mapper (SAM) is a supervised classification technique that compares
image spectra to reference spectra from a spectral library (Crósta et al., 2003). The SAM
classification algorithm is based on the angle between the image spectrum and the reference
spectrum in n-dimensional vector space. The degree of resemblance between the image and
reference spectra is determined by the angle between them: a large angle indicates low
similarity and a small angle indicates high similarity.

The spectral angle, which can range from 0 to π/2, is determined using the formula
provided by (Kruse et al., 1993):

θ = cos⁻¹ [ (Σ_{i=1}^{n} t_i r_i) / ( √(Σ_{i=1}^{n} t_i²) √(Σ_{i=1}^{n} r_i²) ) ]

Where

n = the number of spectral bands

t = the reflectance of the actual spectrum and

r = the reflectance of the reference spectrum
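A minimal Python sketch of the spectral angle computation defined above; t and r are a pixel spectrum and a reference spectrum of equal length, and the thresholding value in the usage note is an illustrative assumption.

```python
import numpy as np

def spectral_angle(t, r):
    """Spectral angle (radians) between a pixel spectrum t and a reference r."""
    t = np.asarray(t, dtype=np.float64)
    r = np.asarray(r, dtype=np.float64)
    cos_theta = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: accept the pixel as the reference material only if the angle is small
# theta = spectral_angle(pixel_spectrum, reference_spectrum)
# is_match = theta < 0.10   # ~0.1 rad threshold, illustrative only
```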

5. Spectral Feature Fitting (SFF):

The spectral feature fitting (SFF) algorithm removes the continuum from the absorption
features of both the image spectra and the library spectra. After continuum removal, SFF
compares the continuum-removed image spectra with the continuum-removed reference
(library) spectra and performs a least-squares fit. The best-fitting material is identified from
the spectral features of the reference by comparing the correlation coefficients of the fits
(Boardman & Kruse, 1994). The continuum-removed image spectrum is derived by dividing
the original spectrum of every pixel in the image by the continuum curve:

Scr = S / C


Where

Scr = Continuum removed spectra

S = Original Spectra

C = Continuum curve

The abundance of a spectral feature is represented by the scale factor. For the computation of
the least-squares fit, a band-by-band comparison is used. The greater the scale value, the
deeper the absorption in the mineral spectrum, and the scale value is therefore related to the
mineral abundance (F. van der Meer, 2004). Lighter pixels in the scale image indicate a good
match between the pixel spectrum and the mineral's reference spectrum. An RMS error image
is also created for each endmember to compute the cumulative RMS error; pixels that have a
low RMS error and a large scale factor are strongly associated with the reference material.
The fit images are generated by calculating the ratio between the scale image and the RMS
error image, which gives the correlation between the unknown and reference spectra on a
pixel-by-pixel basis (F. van der Meer et al., 2003).
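A simple sketch of continuum removal and scale estimation for one absorption feature, assuming the continuum is approximated by a straight line joining the feature's endpoints (a common simplification) and the scale is a least-squares fit of the continuum-removed reference depth to the continuum-removed pixel depth. All names are illustrative.

```python
import numpy as np

def remove_continuum(wl, spectrum):
    """Divide a spectrum by a straight-line continuum joining its endpoints."""
    continuum = np.interp(wl, [wl[0], wl[-1]], [spectrum[0], spectrum[-1]])
    return spectrum / continuum

def sff_scale_and_rms(wl, pixel, reference):
    """Least-squares scale of the reference feature depth that best matches
    the pixel feature depth, plus the RMS error of that fit."""
    d_pix = 1.0 - remove_continuum(wl, np.asarray(pixel, float))       # feature depths
    d_ref = 1.0 - remove_continuum(wl, np.asarray(reference, float))
    scale = np.dot(d_ref, d_pix) / np.dot(d_ref, d_ref)                # least-squares fit
    rms = np.sqrt(np.mean((d_pix - scale * d_ref) ** 2))
    return scale, rms

# Example over a 2.1-2.3 micrometre window:
# scale, rms = sff_scale_and_rms(wl_window, pixel_window, kaolinite_window)
# a large scale with small rms suggests a strong, well-matched absorption feature
```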

Interpretation by Spectral Analysis:

1. Spectroscopic Analysis:

Hyperspectral data have sufficient spectral resolution to identify and characterize absorption
features in the reflectance spectra of minerals. An absorption feature can be characterized by
its location in the spectrum, its relative depth and its width (FWHM, the full width at half the
maximum depth). The complete spectrum is divided into the near-IR and SWIR (shortwave
infrared) ranges. A feature is classified as an absorption feature if it is a local minimum and
falls below the pixel spectral mean by more than a user-specified threshold. If the absorption
properties of an unknown spectrum match those of a spectrum for a given class stored in a
reference library, the pixel is labelled as belonging to that class.
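The sketch below characterizes a single absorption feature (position, depth and an approximate FWHM) from a continuum-removed spectrum, reusing the straight-line continuum simplification of the previous sketch; it is illustrative only, not a full feature-extraction algorithm.

```python
import numpy as np

def absorption_feature(wl, continuum_removed):
    """Return (centre wavelength, depth, approximate FWHM) of the deepest
    absorption feature in a continuum-removed spectrum (values near 1 mean
    no absorption, values below 1 mean absorption)."""
    wl = np.asarray(wl, dtype=np.float64)
    cr = np.asarray(continuum_removed, dtype=np.float64)
    i_min = int(np.argmin(cr))
    depth = 1.0 - cr[i_min]
    half_level = 1.0 - depth / 2.0
    below = np.where(cr <= half_level)[0]          # samples deeper than half depth
    fwhm = wl[below[-1]] - wl[below[0]] if below.size > 1 else 0.0
    return wl[i_min], depth, fwhm

# Example: centre, depth, width of a kaolinite-like feature near 2.2 micrometres
# centre, depth, fwhm = absorption_feature(wl_window, cr_window)
```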

2. Spectral Angle Mapping:

A pixel vector x in an n-dimensional feature space has both a magnitude (length) and an
angle defined with respect to the axes that form the coordinate system of the space. The
angular information of the spectrum is used for identifying the material in the pixel.


Fig. 6.4 (a) Pixels are represented by their angles from the band axes.(b) Segmenting the
multispectral space by angle(Richards, 2013).

Fig. 6.4 shows a two-dimensional space in which spectra are characterized by their angles
from the horizontal axis. Spectra can be distinguished by their angle with respect to a
reference: a large angle indicates low similarity, whereas a small angle indicates high
similarity. The angular decision boundary can be set depending on the requirement. Angular
information will give good results provided the pixel spectra form distinct groups that are
well separated in feature space.

3. Library Searching Techniques:

Because modern algorithms such as FLAASH (Fast Line-of-Sight Atmospheric Analysis of
Spectral Hypercubes) handle atmospheric disturbances well, it is possible to obtain a
well-specified reflectance spectrum for each pixel of a hyperspectral dataset. The reference
spectra are generally taken from the USGS spectral library. Given the degree of spectral and
radiometric redundancy in hyperspectral data, coding techniques can be employed to
represent a pixel spectrum in a simple and compact manner so that fast library search and
matching can be done.

Hyperspectral data in the interpretation of geology of the area:

With the development of handy and sophisticated spectrometers and spectroradiometers,
hyperspectral remote sensing plays an important role in mineral exploration and geological
mapping. It is used in the first stage of a mineral exploration survey, which includes
reconnaissance and target identification over a broad area.

Fig. 6.5. Spectral absorption features of silicate minerals and rocks (Hunt & Ashley, 1979).

Table 6.1 Alteration minerals and metallogenetic conditions are listed below (Thompson & Thompson, 1996)

Metallogenetic environment | Alteration zone name | Characteristic SWIR-active mineral assemblage
Massive Sulphide | Tourmaline | Muscovite and tourmaline
Massive Sulphide | Carbonate | Siderite, ankerite, calcite
Massive Sulphide | Sericite | Chlorite and sericite
Massive Sulphide | Albite | Chlorite, biotite, muscovite
Mesothermal | Carbonate | Calcite, ankerite, dolomite, muscovite
Mesothermal | Chlorite | Chlorite, muscovite and actinolite
Mesothermal | Biotite | Chlorite and biotite
Low- and high-sulphide epithermal | Propylitic | Epidote, chlorite, sericite, calcite, illite, smectite, montmorillonite
Low- and high-sulphide epithermal | Argillic | Kaolinite, dickite, alunite, diaspore, pyrophyllite, jarosite
Low- and high-sulphide epithermal | Adularia | Sericite, illite–smectite, kaolinite, chalcedony, opal, montmorillonite, calcite, dolomite
Igneous intrusion related | Potassic | Phlogopite, actinolite, sericite, chlorite, epidote, muscovite, anhydrite
Igneous intrusion related | Sodic | Actinolite, diopside, chlorite, epidote, scapolite
Igneous intrusion related | Phyllic | Muscovite–illite, chlorite, anhydrite
Igneous intrusion related | Argillic | Pyrophyllite, sericite, diaspore, alunite, topaz, tourmaline, kaolinite, montmorillonite, calcite
Igneous intrusion related | Greisen | Topaz, muscovite, tourmaline
Igneous intrusion related | Skarn | Clinopyroxene, wollastonite, actinolite–tremolite, vesuvianite, epidote, serpentinite–talc, calcite, chlorite, illite–smectite, nontronite
Supergene sulphide zones | Oxidized and leached | Clay minerals, limonite, goethite, hematite, jarosite
Supergene sulphide zones | Enriched zone | Chalcocite, covellite, chrysocolla, native copper and copper oxide, carbonate and sulphate minerals

Mineral deposits that can be targeted using reflectance spectra in mineral exploration
surveys include epithermal gold (medium- and high-sulphidation) deposits, porphyries,
kimberlites, iron oxide copper gold, skarn, and uranium deposits. Table 6.1 lists the minerals
and their characteristic zones that can readily be identified by hyperspectral remote sensing
techniques.

Lithological mapping and exploration of industrial minerals:

The VNIR and SWIR regions of hyperspectral remote sensing are used for the identification
and mapping of lithology (granite, ophiolite, peridotite, and kimberlite) in any climatic and
tectonic setting. The VNIR–SWIR region is considered the best for mapping alteration
rocks/minerals, carbonates, and regoliths, because overtones and combinations of Al-OH,
Fe-OH, and Mg-OH vibrations are most active in the SWIR region. The thermal infrared
region is mostly useful for identifying quartz and silica because the fundamental vibration of
the Si-O bond lies in this region; the TIR region is therefore well suited to characterizing
rock-forming minerals such as quartz, feldspar, amphibole, olivine and pyroxenes. In the case
of evaporite deposits, the VNIR–SWIR region is most useful for mapping: evaporites show
absorption features at 1.5, 1.74, 1.94, 2.03, 2.22 and 2.39 µm and can be mapped with
hyperspectral data such as the Hyperion dataset. Kurz et al. (2012) applied hyperspectral
remote sensing in the VNIR–SWIR region to differentiate carbonates such as limestone, karst
and hydrothermal dolomites.
Mapping of hydrothermal alteration zones and associated metal deposits:
Hydrothermal ore deposits are often associated with the formation of alteration minerals, and
alteration minerals are sensitive to hyperspectral data. Mapping of alteration minerals may
therefore lead to the discovery of an ore deposit. Hydrothermal alteration zones comprise a
complex mixture of primary minerals and new mineral assemblages developed where primary
minerals and hydrothermal fluids meet.
Regolith mapping by hyperspectral remote sensing:
Regolith mapping plays a critical role in the identification and discrimination of various
geomorphic features and weathering patterns and in linking surface and subsurface processes.
Pioneering research by Tripathi et al. (2020) shows the potential of the Airborne
Visible/Infrared Imaging Spectrometer - Next Generation (AVIRIS-NG) in the mapping of
regolith and of hydrothermally altered, weathered and clay minerals in south-eastern Rajasthan.

Fig.6.6 Generated regolith map of the study area in south-eastern Rajasthan, India (Tripathi &
Govil, 2020).


Fig. 6.7 Image- and laboratory-generated spectra of the exposed rock samples in the area (Tripathi & Govil, 2020).

Hydrothermal epigenetic deposits:


These comprise fractionated-granitoid-associated (tin, tungsten, molybdenum), porphyry (copper, gold), iron oxide-copper-gold, carbonate-hosted stratabound (Pb, Zn) and unconformity-related uranium deposits. Tin-tungsten mineralization is often associated with alteration minerals such as quartz, albite, muscovite, topaz, pyrite and clay minerals. Hoefen, Knepper Jr, and Giles (2011) used HyMap data over the Daykundi area to identify and map extensive Sn-W mineralization and associated alteration zones in Afghanistan. Dominant alteration zones mapped include albitization, silicification, limonitization, bleaching and dickitization. Chlorite, epidote, kaolinite and iron-bearing carbonates are closely associated with the mineralization. Common ore minerals found in the area include scheelite, wolframite, pyrite, chalcopyrite, bornite and cassiterite.

Fig. 6.8 Identification and mapping of carbonates, phyllosilicates, sulphates and altered minerals derived from HyMap hyperspectral data in the Daykundi area, Afghanistan (Hoefen, Knepper Jr, & Giles, 2011).
Hyperspectral remote sensing has also been successfully applied for the exploration of porphyry-type deposits in different geological settings (Bishop, Liu, & Mason, 2011).
Residual and secondary enrichment deposits:
These include ore deposits such as bauxite, cobalt, gold and copper deposits, and calcrete-hosted uranium deposits. Hyperspectral remote sensing can successfully identify and map the secondary mineral components constituting regoliths (goethite, limonite, gibbsite). Spectral absorption features and linear unmixing techniques can be used for mapping different grades of bauxite. Gossans, also known as iron hats, are composed of goethite, hematite, limonite, kaolinite and alunite. These minerals are sensitive in the VNIR-SWIR region of the electromagnetic spectrum and can be easily identified by hyperspectral remote sensing. Hyperspectral techniques can likewise detect the minerals of the related supergene enrichment zone, such as phosphate and argillic minerals, since they too are responsive in the VNIR and SWIR regions.
Hydrocarbon Exploration:
Most present-day hydrocarbon reservoirs are deep seated, but their presence can be identified through surface indicators such as seepages and micro-seepages (Van der Meer, 2000). In recent times, the study of surface indicators (micro-seepages) has become popular for oil and gas exploration. Direct detection includes the mapping and characterization of oil springs, as well as of the alteration of minerals in soil and rocks caused by seepages. The aim of indirect measurement is to determine the secondary effects of volatile hydrocarbons on plants and crops.
The following are some of the most significant spectral features and their causative molecules: (a) absorptions at 1.39-1.41 µm created by O-H overtones and C-H combinations; (b) absorptions at 1.72-1.73 µm attributable to a mixture of CH3 and CH2 stretching; (c) a CH2 vibration overtone at 1.75-1.76 µm; (d) asymmetric and symmetric axial deformations of CH3 and symmetric deformation of CH2 at 2.31 µm; and (e) absorption at 2.35 µm due to a mixture of symmetric angular deformations in CH3.
Coal-bearing areas also have their own spectral signatures. Low-grade coals have distinct absorptions at 1.4, 1.9, and 2.1-2.6 µm in the wavelength range of 0.3-2.6 µm.

Fig. 6.9 Characteristic spectral absorption features due to different organic compounds in different grades of coal (Ramakrishnan & Bharti, 2015).

Cloutis (2003) analysed a large number of coal samples for the retrieval of their organic and inorganic components.
Table 6.2. Coal physical and chemical characteristics and spectral absorption regions (Ramakrishnan & Bharti, 2015).
Coal Property                                Spectral correlations
Aromaticity factor                           3.41 µm-ABD; 3.41/3.28 µm-ABDR
Aliphatic (CH + CH2 + CH3) content           3.41 µm-ABD
Aromatic C content                           3.41/3.28 µm-ABDR; 3.28 µm-ABD:ARR
Moisture content                             1.9 µm-ABD; 2.9 µm-ABD
Volatile content                             2.31 µm-ABD; 3.41 µm-ABD; 3.41/3.28 µm-ABDR
Fixed carbon content                         3.41 µm-ABD; 3.41/3.28 µm-ABDR; 3.28 µm-ABD:ARR
Fuel ratio                                   3.41/3.28 µm-ABDR; 2.31/3.28 µm-ABDR
Carbon content                               3.28 µm-ABD:ARR
Hydrogen content                             3.41 µm-ABD:ARR; 2.9 + 3.41 µm-ABD
Nitrogen content                             7.26 µm-ABD
Oxygen content                               1.9 µm-ABD; 2.9 µm-ABD
H/C ratio                                    3.41 µm-ABD; 3.41/3.28 µm-ABDR
Vitrinite mean reflectance                   3.41 µm-ABD; 3.41/3.28 µm-ABDR
Calorific value                              3.28 µm-ABD:ARR
Petrofactor                                  1.6 µm-ARR; 3.41/3.28 µm-ABDR
ABD: absorption band depth; ABDR: absorption band depth ratio; ARR: absolute reflectance ratio.
The above table shows that quantitative spectral-compositional relationships are possible and that coals can be identified spectrally on the basis of properties such as aromaticity, total aliphatic and aromatic content, moisture content, volatile content, fixed carbon abundance, fuel ratio, carbon content, nitrogen abundance, H/C ratio and vitrinite reflectance.
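As a simple illustration of how such spectral indices are derived, the sketch below computes an absorption band depth (ABD) from a reflectance spectrum by removing a straight-line continuum fitted between two shoulder wavelengths. The wavelengths, reflectance values and function name are hypothetical and only indicate the general approach, not the exact procedure used by Ramakrishnan & Bharti (2015).

import numpy as np

def band_depth(wavelengths, reflectance, left_shoulder, centre, right_shoulder):
    """Continuum-removed absorption band depth: 1 - R_band / R_continuum."""
    # Reflectance at the two shoulders and at the band centre (linear interpolation).
    r_left = np.interp(left_shoulder, wavelengths, reflectance)
    r_right = np.interp(right_shoulder, wavelengths, reflectance)
    r_centre = np.interp(centre, wavelengths, reflectance)
    # Straight-line continuum between the shoulders, evaluated at the band centre.
    frac = (centre - left_shoulder) / (right_shoulder - left_shoulder)
    r_continuum = r_left + frac * (r_right - r_left)
    return 1.0 - r_centre / r_continuum

# Hypothetical coal spectrum around the 3.41 um aliphatic C-H absorption.
wl = np.array([3.20, 3.28, 3.35, 3.41, 3.48, 3.55])      # wavelength, micrometres
refl = np.array([0.12, 0.11, 0.09, 0.07, 0.10, 0.12])    # reflectance (unitless)

abd = band_depth(wl, refl, left_shoulder=3.28, centre=3.41, right_shoulder=3.55)
print(f"3.41 um absorption band depth (ABD): {abd:.3f}")

Band depth ratios (ABDR) of the kind listed in Table 6.2 are then simply ratios of two such band depths computed at different wavelengths.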

Fig. 6.10 With increasing inorganic material content, the spectral pattern of coal changes (Ramakrishnan & Bharti, 2015).

Mapping and discrimination of hydrothermally altered minerals using spaceborne hyperspectral remote sensing:
Pioneering work by Govil et al. (2021) shows the applicability of mapping hydrothermally altered minerals in the Himalayan terrain, India. This work describes how mapping of altered minerals and gossans may further aid in the design of mineral exploration surveys. Hydrothermally altered minerals such as alunite, illite and axinite, and gossan locations, are mapped using Hyperion hyperspectral data. Gossan is formed by the weathering of sulphide ore bodies and is predominantly composed of goethite and hematite.

Fig. 6.11

6.4 SUMMARY
Hyperspectral remote sensing produces a huge volume of data, and consequently considerable redundancy occurs in hyperspectral datasets. This redundancy can be reduced by applying the minimum noise fraction (MNF) transform. Because hyperspectral data acquire information in hundreds or even thousands of spectral bands, the number of training samples required for optimum statistical confidence grows very large. Statistical integrity can be preserved only if each class has enough training samples to train the classifier and an equally large number of samples to establish the class accuracy; this behaviour is known as the Hughes phenomenon. Pre-processing of hyperspectral data includes FLAASH atmospheric correction followed by the Minimum Noise Fraction (MNF) transform and the Pixel Purity Index (PPI), and then classification techniques such as the spectral angle mapper (SAM) and spectral feature fitting (SFF). Hyperspectral data interpretation is done by three methods: spectroscopic analysis, spectral angle mapping and the library searching technique. Hyperspectral data have been successfully applied for mapping lithology, hydrothermal alteration zones, volcanogenic massive sulphide deposits, hydrothermal epigenetic deposits, and residual and secondary enrichment deposits, and also in hydrocarbon exploration.
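To make the spectral angle mapper step concrete, the short sketch below computes the spectral angle between an image pixel spectrum and a reference (library) spectrum. The band values are hypothetical and the snippet is only a minimal illustration of the angle calculation that SAM applies to every pixel, not a full SAM classifier.

import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum.
    Smaller angles indicate a closer spectral match."""
    cos_theta = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Hypothetical 6-band reflectance spectra (unitless).
pixel_spectrum = np.array([0.21, 0.25, 0.31, 0.28, 0.22, 0.18])
library_spectrum = np.array([0.20, 0.26, 0.33, 0.27, 0.21, 0.17])

angle = spectral_angle(pixel_spectrum, library_spectrum)
print(f"Spectral angle: {angle:.4f} rad")  # the pixel is assigned to the class with the smallest angle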

6.5 GLOSSARY

Spectral Angle Mapper (SAM) - Spectral angle mapper (SAM) is a supervised classification technique which compares image spectra with reference spectra collected from a spectral library.
Spectral Feature Fitting (SFF) - Spectral feature fitting compares continuum-removed image spectra with continuum-removed reference spectra from a spectral library using a least-squares fit.

HyMap - HyMap is an airborne hyperspectral imaging sensor developed in Australia and manufactured by Integrated Spectronics.

AVIRIS - AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) is an airborne hyperspectral imaging sensor developed by NASA, USA.

FLAASH - Fast Line-of-Sight Atmospheric Analysis of Hypercubes

Minimum Noise Fraction (MNF) - A transform that reduces the dimensionality of (and noise in) hyperspectral data.

Pixel Purity Index (PPI) - A technique used to extract the spectrally pure pixels in a hyperspectral dataset.

6.6 ANSWER TO CHECK YOUR PROGRESS


1. Discuss the redundancy in hyperspectral datasets.

2. Describe the need for calibration of hyperspectral sensor.

3. Discuss steps in the hyperspectral pre-processing.

6.7 REFERENCES
1. Bishop, C. A., Liu, J. G., & Mason, P. J. 2011. Hyperspectral remote sensing for mineral
exploration in Pulang, Yunnan province, China. International Journal of Remote
Sensing, 32: 2409–2426.
2. Boardman, J. W., & Kruse, F. A. 1994. Automated spectral analysis: a geological example
using AVIRIS data, north Grapevine Mountains, Nevada. Proceedings of the Thematic
Conference on Geologic Remote Sensing.
3. Cloutis, E. A. 2003. Quantitative characterization of coal properties using bidirectional
diffuse reflectance spectroscopy. Fuel, 82: 2239–2254.
4. Crósta, A. P., De Souza Filho, C. R., Azevedo, F., & Brodie, C. 2003. Targeting key
alteration minerals in epithermal deposits in Patagonia, Argentina, using ASTER
imagery and principal component analysis. International Journal of Remote Sensing, 24:
4233–4240.

5. Govil, H., Mishra, G., Gill, N., Taloor, A., & Diwan, P. 2021. Mapping Hydrothermally
Altered Minerals and Gossans using Hyperspectral data in Eastern Kumaon Himalaya,
India. Applied Computing and Geosciences, 9: 100054.
6. Green, A. A., & Craig, M. D. 1985. Analysis of aircraft spectrometer data with logarithmic
residuals. In: Proc. AIS workshop, JPL Publication 85-41, Jet Propulsion Laboratory,
Pasadena, California.
7. Green, Andrew A., Berman, M., Switzer, P., & Craig, M. D. 1988. A Transformation for
Ordering Multispectral Data in Terms of Image Quality with Implications for Noise
Removal. IEEE Transactions on Geoscience and Remote Sensing, 26: 65–74.
8. Hoefen, T. M., Knepper Jr, D. H., & Giles, S. A. 2011. Analysis of imaging spectrometer
data for the Daykundi area of interest. Summaries of Important Areas for Mineral
Investment and Production Opportunities of Nonfuel Minerals in Afghanistan, US
Geological Survey, Reston, Virginia, 314–339.
9. Hunt, G. R., & Ashley, R. P. 1979. Spectra of altered rocks in the visible and near infrared.
Economic Geology, 74.
10. Kruse, F. A., Lefkoff, A. B., Boardman, J. W., Heidebrecht, K. B., Shapiro, A. T.,
Barloon, P. J., & Goetz, A. F. H. 1993. The spectral image processing system (SIPS)-
interactive visualization and analysis of imaging spectrometer data. Remote Sensing of
Environment, 44: 145–163.
11. Kurz, T. H., Dewit, J., Buckley, S. J., Thurmond, J. B., Hunt, D. W., & Swennen, R.
2012. Hyperspectral image analysis of different carbonate lithologies (limestone, karst
and hydrothermal dolomites): the Pozalague Quarry case study (Cantabria, North-west
Spain). Sedimentology, 59: 623–645.
12. Meer, F. Van Der, Jong, S. De, van der Meer, F. D., & de Jong, S. J. 2003. Spectral
mapping methods: many problems, some solutions. 3rd EARSeL workshop on imaging
spectroscopy.
13. Ramakrishnan, D., & Bharti, R. 2015. Hyperspectral remote sensing and geological
applications. Hyperspectral remote sensing and geological application.
14. Richards, J. A. 2013. Remote sensing digital image analysis: An introduction. Remote
Sensing Digital Image Analysis: An Introduction (Vol. 9783642300622). Springer-
Verlag Berlin Heidelberg.
15. Roberts, D. A., Yamaguchi, Y., & Lyon, R. J. P. 1985. Calibration of Airborne Imaging
Spectrometer Data to Percent Reflectance Using Field Spectral Measurements. In: 19th
International Symposium on Remote Sensing of Environment, Ann Arbor, Michigan.
16. Thenkabail, P. 2014. Hyperspectral Remote Sensing of Vegetation and Agricultural
Crops. Photogrammetric Engineering & Remote Sensing (PE&RS), 80: 697–723.
17. Thompson, A. J. B., & Thompson, J. F. H. 1996. Atlas of alteration: a field and
petrographic guide to hydrothermal alteration minerals. Geological Association of
Canada, Mineral Deposits Division, 119.
18. Tripathi, M. K., & Govil, H. 2020. Regolith mapping and geochemistry of
hydrothermally altered, weathered and clay minerals, Western Jahajpur belt, Bhilwara,
India. Geocarto International, 1–17.
19. van der Meer, F. 2004. Analysis of spectral absorption features in hyperspectral imagery.

International Journal of Applied Earth Observation and Geoinformation.


20. Van der Meer, F. D. 2000. An integrated geoscience approach for hyperspectral
hydrocarbon microseepage detection. In: In Proceedings of the 14th International
Conference on Applied Geologic Remote Sensing, Environmental Research Institute on
Michigan (ERIM), Las Vegas, Nev., USA.
21. Van Ruitenbeek, F. J. A., Bakker, W. H., Van Der Werff, H. M. A., Zegers, T. E.,
Oosthoek, J. H. P., Omer, Z. A., Marsh, S. H., & Van Der Meer, F. D. 2014. Mapping
the wavelength position of deepest absorption features to explore mineral diversity in
hyperspectral images. Planetary and Space Science, 101: 108–117.

6.8 TERMINAL QUESTIONS


1. Describe briefly the pre-processing steps and the effects these steps have on hyperspectral data.
2. Describe the interpretation of hyperspectral data by spectral analysis.
3. Discuss application and role of hyperspectral data in mapping of hydrothermally altered
zones.

BLOCK 3 : MICROWAVE
UNIT 7 - CONCEPT, DEFINITION, MICROWAVE
FREQUENCY RANGES AND FACTORS AFFECTING
MICROWAVE MEASUREMENTS

7.1 OBJECTIVES
7.2 INTRODUCTION
7.3 CONCEPT, DEFINITION, MICROWAVE FREQUENCY
RANGES AND FACTORS AFFECTING MICROWAVE
MEASUREMENTS
7.4 SUMMARY
7.5 GLOSSARY
7.6 ANSWER TO CHECK YOUR PROGRESS
7.7 REFERENCES
7.8 TERMINAL QUESTIONS

7.1 OBJECTIVES
After reading this unit the learner will be able to understand:

I. Fundamentals of microwave remote sensing.


II. The working mechanism of microwave remote sensing.
III. Different principles of microwave remote sensing.

7.2 INTRODUCTION
Remote sensing has a wide range of applications and has been identified as a technique with high potential for assisting a nation's economic development and addressing some of its problems. These include the development and management of natural resources, the identification of areas at risk of flooding, the assessment of water availability in river basins, estimation of the status of watersheds, determination of forest area, and the estimation of harvests and resource depletion. The electromagnetic spectrum, with its various wavelength bands, finds applications in a wide range of fields. Expanding demand for natural resources leads to scarcity, and the causes of this scarcity need to be understood; where traditional approaches are insufficient, remote sensing can play a significant role in resolving these difficulties.

Remote sensing may be utilized for predicting climate, rainfall, cloud cover and other physical properties, as well as for assisting in the identification of cloud-covered areas and other physical factors. Optical observation is difficult in overcast conditions, for example during the kharif season, when crops suffer and yield forecasting for wheat, and for crops such as groundnut, coffee and tea that require a lot of rain, becomes difficult. Flooding is another concern during the rainy season, and for many years floods have wreaked damage; cloud movement could not be monitored because clouds obstructed conventional techniques of observation. Because optical observation is also not feasible at night, sensors that can operate at night as well as in cloudy conditions are required.

7.3 CONCEPT, DEFINITION, MICROWAVE FREQUENCY RANGES


AND FACTORS AFFECTING MICROWAVE MEASUREMENTS

Historical Background:

A 1.275-GHz synthetic aperture radar sensor, known as Shuttle Imaging Radar-B (SIR-B), was launched by the Space Shuttle Challenger on 5 October 1984. SIR-B took high-resolution images of the Earth's surface throughout the 10-day Challenger mission, some of them of specific places illuminated from different angles. This permitted stereo imaging of the earth's surface, interpretation by means of three-dimensional viewing, and contour mapping of the morphological characteristics of the imaged locations. Three years earlier, Space Shuttle Columbia had carried an earth observation payload that included SIR-A, identical to SIR-B but with a fixed illumination angle of 47°.
One of the SIR-A image strips covered the hyperarid region of southern Egypt. These surprising radar images revealed large beds of dry rivers below the Sahara and previously unknown structures in the underlying ground. Subsequent field work confirmed that the radar had penetrated a surface layer of very dry aeolian sand, several feet thick, to reveal features of a Quaternary alluvial basin. In 1978, NASA launched Seasat, a polar-orbiting earth observation spacecraft carrying a synthetic aperture radar, a 14.6-GHz scatterometer and other sensors. The findings of the Seasat scatterometer indicated that this type of space-borne microwave remote sensor is accurate. The sensitivity of this 2-cm-wavelength scatterometer is owing to Bragg backscattering from capillary and short gravity waves on the ocean surface whose wavelength satisfies the Bragg resonance condition with the horizontally projected radar wavelength; the backscatter is reinforced from those ocean surface waves that travel parallel to the radar look direction.
In climate studies, as well as in the early prediction of ocean storms and other physical oceanographic problems, the capacity to regularly measure ocean winds at global scales is crucial. One of the Seasat SAR images, taken on 20 August 1978, showed an agricultural part of Iowa where a summer storm line had just dropped over an inch of rain on a finely sculptured area near Cedar Rapids. In this space-borne image, microwaves demonstrated their great sensitivity to soil moisture, consistent with earlier passive and active investigations from the ground and from aircraft.
Radars and radiometers were both originally created for uses other than remote sensing. Several radars, including imaging radar, were created during World War II for military fire control and aircraft tracking. In the 1930s, pioneers such as Karl Jansky and Grote Reber built simple radiometers, consisting of an antenna, a low-noise receiver and a strip-chart recorder, for radio astronomy. Ground clutter hampered target-oriented military radars, and the statistical characterization of clutter was a key engineering issue in the 1950s. In the late 1950s a group of scientists at the Ohio State University examined the scattering coefficients of different crop materials, asphalt, concrete and other materials, and carried out the first comprehensive measurements of radar cross-section per unit area for terrain. The group also examined the connection between the passive emissivity of distributed targets and their active scattering coefficients. In the mid-1960s, earth scientists began to employ microwave remote sensing for geologic studies, when side-looking airborne radars such as the 35-GHz AN/APQ-97, designed by Westinghouse for military surveillance, became available to them. In 1967, the first major airborne radar mapping survey was carried out over Panama's Oriente Province, an area that is normally cloud-covered; the AN/APQ-97 radar was used to undertake both geological and agricultural studies. In spaceborne remote sensing, three kinds of radar are employed: scatterometers, altimeters and synthetic aperture radars (SAR). Skylab and Seasat were the initial missions to carry a range of remote sensing radars as well as earth-observing radiometers.
Between May 1973 and February 1974, NASA's crewed Skylab missions carried a microwave radiometer/scatterometer/altimeter experiment, called S-193, that operated at 13.9 GHz, as well as an L-band radiometer (S-194). Studies of planetary emission sparked the development of microwave radiometry from space. Mariner 2's microwave radiometer, with 15.8- and 22.2-GHz channels, performed three scans of the planetary disc during its December 1962 approach of Venus, confirming the high temperature of the Venusian surface and demonstrating that its planetary emission was characterized by limb-darkening.
The first radiometric measurements of the earth from orbit were not carried out until 1968, by the Soviet spacecraft Cosmos 243. A four-channel nadir-viewing radiometer was used to evaluate atmospheric water vapour, liquid water, ice cover and sea-surface temperature. After that attempt, a number of Soviet and American radiometer systems with more advanced sensors were flown. In 1972, the Nimbus-5 spacecraft carried two primary microwave radiometers: the Electrically Scanning Microwave Radiometer (ESMR), a 19.3-GHz electronically scanned imaging radiometer for measuring atmospheric rain and sea-surface ice, and the Nimbus-E Microwave Spectrometer (NEMS), a five-frequency radiometer for retrieving atmospheric temperature profiles, water vapour concentration and liquid water content.

Electromagnetic Spectrum:
The frequency used in remote sensing has an impact on the applications. The International
Telecommunication Union's Radio Regulations define radio waves as electromagnetic waves
with frequencies less than 3000 GHz. Different regions of the radio spectrum are employed for
active and passive microwave remote sensing. The electromagnetic spectrum is depicted in
Figure 7.1.

Figure 7.1 - Electromagnetic Spectrum

Table 7.1 shows this spectrum from the Extremely Low Frequency (ELF) radio waves to the Extremely High Frequency (EHF) waves, i.e., from myriametric to sub-millimetric waves, while the microwave band designations are shown in Table 7.2. Different portions of the radio spectrum are employed for different purposes: the microwave spectrum extends from 0.3 to 30 GHz, the millimetre-wave spectrum from 30 to 300 GHz and the sub-millimetre spectrum from 300 to 3000 GHz. These spectra are split into bands for various uses.

Table 7.1 - Part of the Electromagnetic Spectrum

Table 7.2 - Microwave Band Designation

* The microwave frequency spectrum includes UHF, SHF, and EHF (300 MHz-325 GHz).


Tables 7.3 and 7.4 show the characteristics of centimetric waves, ranging from 3 GHz to 30 GHz, and of millimetric waves, ranging from 30 to 300 GHz, respectively, while Table 7.5 shows the characteristics of sub-millimetric waves. Certain frequency bands have been set aside for passive microwave remote sensing and radio astronomy.

Table 7.3 - SHF Centimetric waves (3-30 GHz)

Table 7.4 - EHF Millimetric waves (30-300 GHz)


Table 7.5 - Sub-Millimetric Waves (300-3000 GHz)

Capabilities of Microwave Remote Sensing:


For remote sensing of cloud-covered areas and for applications requiring round-the-clock data gathering, sensors operating in the microwave band of the electromagnetic spectrum are used. The benefits and capabilities of microwave sensors over optical sensors are unique in many ways and include the following:

 Can provide data in all-weather condition


 Capable of providing data in day and night time (independent of intensity and sun angle
at the time of illumination)
 To some extent, penetration through vegetation and soil
 Moisture sensitivity (in liquid or vapour forms).

Can provide data in all-weather condition:-

Ice clouds, water clouds, and rain are the three natural phenomena most likely to degrade the operation of microwave-frequency sensors, and they affect radio waves differently at different frequencies. Ice clouds are practically transparent at all microwave frequencies, whereas they are opaque at optical wavelengths. Water clouds have a significant impact on frequencies above 30 GHz, but little impact below 15 GHz. The influence of clouds on radio transmission from space to ground was demonstrated by Ulaby et al. The effect of rain is particularly noticeable above 10 GHz in the case of heavy rain.
Cloud cover and haze have little effect on imaging radars, which are therefore generally weather-independent. Water clouds have a significant impact on radars at frequencies above 15 GHz; however, at frequencies below 10 GHz even rain does not have a significant effect. Ulaby et al. describe the effect of rain on space-to-ground radio transmission.

Capable of providing data in day and night time (independent of intensity and sun angle at
the time of illumination) :-

The signals received by the sensor at microwave frequencies are entirely dependent on the target's dielectric characteristics as well as its physical attributes, such as surface roughness and texture. As a result, the signal obtained by the sensor is independent of the sun's illumination, including its angle and intensity. Thus, even at night, information about the target can be obtained at microwave frequencies, which is not possible with optical sensors.

Penetration into vegetation and soil to some degree:-

Microwaves can penetrate vegetation and soil to a limited degree. Longer wavelengths penetrate deeper than shorter wavelengths. Higher frequencies can therefore be employed to obtain information about canopies, whereas lower frequencies can be used to gain information on soil and subsoil.

Moisture sensitivity (in liquid or vapour forms):-

The presence of moisture in the soil affects the microwave response. Dry and damp soil respond differently at different microwave frequencies. This is due to an electrical characteristic, the dielectric constant, which differs markedly between dry soil or natural materials and materials containing water in liquid or vapour form. Calla et al. examined soils from oven-dry to saturated moisture levels at frequencies ranging from 2 GHz to 20 GHz. The sensitivity to moisture arises from the change of the dielectric constant at microwave frequencies; the soil dielectric constant changes with frequency and with moisture content, as seen in Figure 7.2.
The information at microwave frequencies is mostly due to the geometry and the bulk dielectric constant of the terrain, whereas at visible and infrared frequencies it is due to molecular resonance in the surface layer of the plant or soil. This leads to the conclusion that when all three regions, microwave, visible and infrared, are combined, information on surface geometry, bulk dielectric constant and molecular resonance characteristics may be acquired. All three are therefore complementary for remote sensing and should be used in conjunction to characterize the surface and achieve optimum results.

Microwave Remote Sensing:


Because of its unique capabilities, microwave remote sensing (MRS) has enormous potential. It
has specific benefits in applications such as geological surveying for petroleum and mineral
prospecting, crop and vegetation monitoring, soil moisture detection, water resource
management, agriculture, oceanography, and atmospheric sciences. Remote sensing is done with
passive and active microwave sensors at microwave frequencies.


Figure 7.2 - Dielectric constant of soil at different frequencies and moisture contents (solid curves for ε′ and dotted curves for ε″)

Passive Microwave Remote Sensing:-

Microwave radiometers are used to perform passive MRS. This method is practicable because every natural substance emits electromagnetic radiation, which is a complicated function of the physical characteristics of the emitting surface. In addition to optical sensors, sensors operating at microwave frequencies have in recent times been utilized on a limited scale for several purposes. Passive sensors, referred to as radiometers, detect the power radiated in the microwave spectrum. The fundamental principle of radiometer detection is governed by the Rayleigh-Jeans approximation of Planck's law; Planck's law describes the black-body electromagnetic emission of a body at a given physical temperature T (in kelvin).
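The sketch below illustrates, under assumed values (a 19-GHz channel and a 300 K scene), how closely the Rayleigh-Jeans approximation tracks the full Planck law at microwave frequencies; the function names and numbers are illustrative only.

import math

H = 6.626e-34   # Planck constant (J s)
K = 1.381e-23   # Boltzmann constant (J/K)
C = 3.0e8       # speed of light (m/s)

def planck_radiance(freq_hz, temp_k):
    """Spectral radiance of a black body, B(f, T), in W m^-2 Hz^-1 sr^-1."""
    return (2 * H * freq_hz**3 / C**2) / (math.exp(H * freq_hz / (K * temp_k)) - 1)

def rayleigh_jeans_radiance(freq_hz, temp_k):
    """Rayleigh-Jeans approximation: radiance is directly proportional to temperature."""
    return 2 * freq_hz**2 * K * temp_k / C**2

f, T = 19e9, 300.0            # assumed: 19-GHz radiometer channel, 300 K scene
b_planck = planck_radiance(f, T)
b_rj = rayleigh_jeans_radiance(f, T)
print(f"Planck:         {b_planck:.3e} W m^-2 Hz^-1 sr^-1")
print(f"Rayleigh-Jeans: {b_rj:.3e} W m^-2 Hz^-1 sr^-1")
print(f"Relative error: {abs(b_rj - b_planck) / b_planck:.2%}")

Because the radiance scales almost linearly with physical temperature in this regime, radiometer measurements are conveniently expressed as brightness temperatures.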

Active Microwave Remote Sensing:-

Active MRS analyses the received data and distinguishes one target from another by using the scattering properties of terrains and targets. The scattering characteristics of the target are expressed in the scattering coefficient, which depends on the incidence angle, operating frequency and polarization. The scattering coefficient also depends on the target's electrical properties (e.g., dielectric constant and conductivity) and physical characteristics (e.g., texture, surface type). The two main capabilities of radar are also employed in active MRS, namely the capacity to produce high-resolution images and to measure distance/altitude with great precision.
When the incident electromagnetic power strikes the target, it is scattered, and the way it is scattered depends on the surface type of the target. If the surface is smooth, most of the power is scattered in the specular reflection direction; this is specular reflection. If the surface is rough, power is scattered in all directions. Diffuse scattering arises when the surface roughness is comparable to the sensor wavelength. Because electromagnetic waves can penetrate the surface, the scattering coefficient is also influenced by the sub-surface characteristics of the target; scattering that takes place both at and below the surface is known as volume scattering.

Microwave Principles:

 Microwave Spectrum:- The microwave spectrum utilized for remote sensing ranges between 500 MHz and 100 GHz. For both active and passive microwave remote sensors there are certain widely used frequencies or wavelengths, with letter band designations. Space-borne remote sensing frequencies vary from L-band (1.3 GHz) up to Q-band (58 GHz).

Atmospheric Transmissivity:- At frequencies above about 20 GHz, microwave transmission through the Earth's atmosphere is significantly attenuated owing to absorption by water vapour or oxygen molecules. The atmospheric water vapour distribution dictates the exact shape of the attenuation profile. The attenuation peak at 22 GHz is a pressure-broadened absorption line due to atmospheric water vapour, while the attenuation peak at around 60 GHz is a broad complex of oxygen absorption lines.

Comparison to Optical Remote Sensing:- Microwave remote sensing methods offer information separate from, but complementary to, information obtained by optical (visible/IR) procedures. Microwave surface sensors provide cloud-free data that is particularly sensitive to the geometry of the surface and to the presence of water, as described below.

Microwave Remote Sensing's Unique Features:- Microwave remote sensing data include unique sensitivities and other features not available in thermal infrared or optical/near-infrared data.

Surface Geometry Sensitivity:- Radar signals are highly sensitive to the geometric features of the earth's surface and to the geometric structure of cultural and natural land covers. Radar backscatter from terrain is very sensitive to surface slope at both small incidence angles (less than around 30°) and large incidence angles (larger than about 55°). As already noted, centimetre-wavelength radar backscatter from the sea is strongly influenced by Bragg scattering from capillary and short gravity waves. Finally, recent studies have found that both the geometrical structure (size and shape of stalks, trunks, branches and leaves) and the moisture content of vegetation such as trees, crops and other plants substantially affect the radar response.

Water sensitivity:- Water is a strongly polarized molecule with a high dielectric constant (about 80) in the lower portion of the microwave spectrum. This means that water's reflectivity is extremely high and its emissivity is extremely low. Increased water content in soils with a rough surface or in vegetation is detected by radar as higher backscatter; increased water content is detected by a radiometer as a reduction in brightness temperature. The degree of salinity affects the dielectric constant of ice, and hence its reflectivity/emissivity. This means that in polar locations microwave radars and radiometers may be used not only for ice mapping, but also to discriminate between first-year and multi-year ice.


Independence from cloud cover and the position of the sun:- Imaging radars can gather high-resolution surface images at any time of the day or night, irrespective of cloud cover. This is essential for applications such as ocean surveillance, because sea winds in or near cloud-covered storm cells can still be observed and forecast. It is also important for recording regions, such as the Brazilian rain forest, that are almost permanently obscured by cloud.
The planet is generally documented at a constant sun angle with optical images, such as those acquired by Landsat, in order to obtain consistent illumination; the use of sun-synchronous orbits is therefore required. As imaging radars supply their own illumination, they do not depend on the angle of the sun and offer a broader variety of possibilities for orbital height.

 Principles of passive microwave system:-


Thermal Radiation, Blackbodies, Emissivity, and Brightness Temperature:- As a result of thermally induced random movements of electrons and protons, all objects not at absolute zero temperature produce weak electromagnetic radiation. This emission takes the form of noise-like electromagnetic waves that include all frequency components. Furthermore, the polarization of this thermally generated radiation is largely random, since the paths of the charged particles are essentially random. For objects with temperatures close to ambient, the emission peaks in the "thermal infrared" region of the spectrum, i.e., at infrared wavelengths. In the microwave spectrum, the thermally generated noise waves are roughly six orders of magnitude weaker, yet they may still be detected using a microwave radiometer.
Brightness temperature of the sea:- The brightness temperature of the sea is affected by its surface roughness (relative to the wavelength), its salinity, the viewing angle from nadir, the observation frequency, and the polarisation. The roughness of the water is governed by the wind speed. For calm (specular) water at a temperature of 20°C and a salinity of 36 parts per thousand, the nadir brightness temperature varies from around 90 K at 1.4 GHz to 110 K at 10 GHz and 130 K at 37 GHz, and it changes further as the viewing angle moves away from nadir. As the wavelength becomes shorter, the sensitivity to surface roughness and wind rises. At about 19 GHz and a 55° angle from nadir, for example, the horizontally polarised component of the brightness temperature climbs almost linearly with rising wind speed, from around 82 K for a calm surface (without foam or whitecaps) to about 96 K at a wind speed of about 14 m/s. The reader is referred to the literature on passive microwave remote sensing of the sea for more information.
Brightness temperature of sea ice:- If sea water is covered by an ice sheet, the combined emissivity is not the same as that of either ice or seawater alone. Sea ice is a mixture of ice, salt, air bubbles and brine pockets, with smooth or rough ice-air boundaries. Surface emission mechanisms prevail at near-nadir angles, whereas surface and volume scattering processes dominate at wider angles. The emission and scattering processes are complicated and strongly wavelength- and polarization-dependent.
Brightness temperature of terrain:- The emissivity of bare soil is highly influenced by surface roughness and soil wetness, as well as by the observation wavelength, incidence angle and polarization. Soil moisture changes have a significant impact on the brightness temperature at lower microwave frequencies (e.g., 1.4 GHz). The large dielectric constant of water (about 80) compared with that of dry soil (about 3-4) is responsible for this strong dependence on soil moisture. The dielectric constant of wet soil may reach 20 or more with increasing soil moisture, resulting in an emissivity variation at 1.4 GHz from around 0.95 for dry soil (volumetric moisture below 0.1 g/cm3) to about 0.6 for wet soil (volumetric moisture above 0.3 g/cm3); a rough numerical sketch of this effect is given after this list of examples.
Brightness temperature of snow:- The emissivity of snow-covered soil is governed by the dielectric constant of the underlying frozen soil (around 3) and by the thickness, water equivalent and liquid water distribution of the overlying snow pack. The dependence is more pronounced at higher frequencies, where the snow layer is electrically thicker. As the snow-water equivalent (i.e., the total mass of water inside a snow column) grows, the brightness temperature of a dry snow layer falls because of volume scattering. Even a slight increase in liquid water content (snow wetness), however, leads to a rise in the brightness temperature of snow because the volume scattering is suppressed.
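As a rough illustration of the soil-moisture effect described above, the sketch below estimates nadir emissivity from the (real) dielectric constant using the Fresnel reflectivity of a smooth surface. The dielectric values are assumed, surface roughness is ignored, and the snippet is only indicative of the trend, not a soil-moisture retrieval algorithm.

import math

def nadir_emissivity(dielectric_constant):
    """Emissivity of a smooth surface at nadir: e = 1 - ((sqrt(eps) - 1)/(sqrt(eps) + 1))**2."""
    n = math.sqrt(dielectric_constant)           # refractive index (real dielectric assumed)
    reflectivity = ((n - 1.0) / (n + 1.0)) ** 2  # Fresnel power reflectivity at normal incidence
    return 1.0 - reflectivity

# Assumed dielectric constants at 1.4 GHz: about 3.5 for dry soil, about 20 for wet soil.
for label, eps in [("dry soil", 3.5), ("wet soil", 20.0)]:
    print(f"{label}: dielectric constant {eps:>5.1f} -> nadir emissivity {nadir_emissivity(eps):.2f}")

The wet-soil value of about 0.6 matches the figure quoted above; the dry-soil estimate comes out somewhat below 0.95 because real soils are rough and this simple smooth-surface model underestimates their emissivity.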

7.4 SUMMARY
Remote sensing has a wide range of applications. It can help in the development and management of natural resources and in identifying areas at risk of flooding, and it can be used to estimate harvests and resource depletion. Remote sensing may also be used for predicting climate, rainfall, cloud cover and other physical properties. In overcast conditions, such as during the kharif season, crops suffer and yield forecasting is difficult with optical observation alone. The presence of moisture in the soil affects the microwave response: dry and damp soil respond differently at different microwave frequencies because of an electrical characteristic, the dielectric constant of the soil. Microwave remote sensing (MRS) therefore has enormous potential. It has specific benefits in applications such as geological surveying for petroleum and mineral prospecting, crop and vegetation monitoring, soil moisture detection, water resource management, agriculture, oceanography, and atmospheric sciences.

7.5 GLOSSARY
 Active Microwave Remote Sensing: - Active microwave sensors provide their own source of microwave radiation to illuminate the target.
 Altimeter: - An altimeter or an altitude meter is an instrument used to measure the
altitude of an object above a fixed level.
 Emissivity: - is defined as the ratio of the energy radiated from a material's surface to
that radiated from a perfect emitter, known as a blackbody, at the same temperature and
wavelength and under the same viewing conditions.
 Passive Microwave Remote Sensing: - is similar in concept to thermal remote sensing.
All objects emit microwave energy of some magnitude, but the amounts are generally
very small. A passive microwave sensor detects the naturally emitted microwave energy
within its field of view.
 Radiometer:-A radiometer is a device for measuring the radiant flux (power) of
electromagnetic radiation.


 SAR:-Synthetic-aperture radar (SAR) is a form of radar that is used to create two-


dimensional images or three-dimensional reconstructions of objects, such as landscapes.
SAR uses the motion of the radar antenna over a target region to provide finer spatial
resolution than conventional beam-scanning radars.
 Scatterometer:- A scatterometer is a scientific instrument to measure the return of a
beam of light or radar waves scattered by diffusion in a medium such as air.
 Transmissivity:-The degree to which a medium allows something, in particular
electromagnetic radiation, to pass through it.
 Wavelength:- Wavelength can be defined as the distance between two successive crests
or troughs of a wave. It is measured in the direction of the wave. This means the longer
the wavelength, lower the frequency. In the same manner, shorter the wavelength, higher
will be the frequency.

7.6 ANSWER TO CHECK YOUR PROGRESS


Q.1 The interaction of the electromagnetic radiation produced with a specific wavelength
to illuminate a target on the terrain for studying its scattered radiance, is called:
(a) Passive remote sensing
(b) Active remote sensing
(c) Neutral remote sensing
(d) None of these

Q.2 ERS, Envisat, Sentinel and RISAT are examples of which type of satellite:
(a) Optical
(b) Passive
(c) Thermal
(d) Microwave

Q.3 Explain what is a SAR system.


A SAR system (synthetic aperture radar) simulates a very long antenna by special data
recording and processing techniques. As a result a high spatial resolution (azimuth) is obtained.

Q.4 Remote sensing techniques make use of the properties of ___________ emitted,
reflected or diffracted by the sensed objects:
(a) Electric waves
(b) Sound waves
(c) Electromagnetic waves
(d) Wind waves

Q.5 Remote Sensing is unique because it provides:


(a) Synoptic view
(b) Special information
(c) Superior information
(d) Encrypted information


7.7 REFERENCES
 Ulaby, Fawwaz T.; Moore, Richard K. & Fung, Adrian K. Microwave remote sensing – active
and passive, Vol. 1: Microwave remote sensing fundamentals and radiometry. Advanced Book
Program/World Science Division, Addison-Wesley Publishing Company, Reading,
Massachusetts, USA.
 Calla, O.P.N.; Borah, M.C.P.; Mishra, Vashishtha R.; Bhattacharya, A. & Purohit, S.P. Study
of the properties of dry and wet loamy soil at microwave frequencies. Ind. J. Rad. & Spa. Phy.,
1999, 28, 109-12.
 Woodhouse lain. H, “Introduction to Microwave Remote Sensing” Taylor & Francis, 2005.

 Ulaby, F.T., Moore, R.K, Fung, A.K, “Microwave Remote Sensing; active and passive, Vol.
1,2 and 3, Addison – Wesley publication company, 2001
 Floyd, M., Handerson and Anthony J. Lewis, ” Principles and application of
Imaging RADAR, Manual of Remote Sensing, 3rd edition, Vol.2, ASPRS, John Wiley and
Sons Inc., 1998
 Roger J Sullivan, Knovel, Radar foundations for Imaging and Advanced Concepts, SciTech
Pub, 2004.
 Ian Faulconbridge, Radar Fundamentals, Published by Argos Press, 2002.
 Eugene A.Sharkov,Passive Microwave Remote Sensing of the Earth: Physical Foundations,
Published by Springer, 2003.

7.8 TERMINAL QUESTIONS


Q.1 Passive microwave remote sensors have a lower spatial resolution than comparable (i.e. same wavelength, same altitude/platform) active microwave sensors. Why?
A.1 The amount of energy reflected or emitted by the Earth is very small in the microwave region. In order to obtain a signal larger than the noise level, the signal must be integrated over larger areas. Active systems can produce higher energy amounts.

Q.2 Why is microwave remote sensing better suited for monitoring tropical rain forests than optical remote sensing?
A.2 Tropical forests are very humid, causing cloud cover nearly all the time. Optical systems then fail and radar is the only possibility for monitoring these areas, since radar can 'look' through clouds.

Q.3 What is the main difference between radar and optical remote sensing systems, like aerial photography or the SPOT satellite?
A.3 The main difference lies in the fact that radar is an active system, operating day and night, and being insensitive to cloud cover. An optical system mostly is a passive system, which is not able to penetrate clouds.


UNIT 8 - RADAR PRINCIPLES, RADAR WAVEBANDS, SIDE


LOOKING AIRBORNE RADAR (SLAR) SYSTEMS &
SYNTHETIC APERTURE RADAR (SAR), REAL APERTURE
RADAR (RAR)

8.1 OBJECTIVES
8.2 INTRODUCTION
8.3 RADAR PRINCIPLES, RADAR WAVEBANDS, SIDE
LOOKING AIRBORNE RADAR (SLAR) SYSTEMS &
SYNTHETIC APERTURE RADAR (SAR), REAL
APERTURE RADAR (RAR)
8.4 SUMMARY
8.5 GLOSSARY
8.6 ANSWER TO CHECK YOUR PROGRESS
8.7 REFERENCES
8.8 TERMINAL QUESTIONS


8.1 OBJECTIVES
The objectives of this unit are to make the reader understand the following concepts:
 Introduction of active microwave remote sensing,
 Classification of the different microwave sensors.
 Radar principle and the radar equation
 Wavelength bands used by the radar systems
 Concept of Side looking Airborne Radars (SLAR) & its types
 Radar geometry
 Real Aperture SLAR systems and radar resolution
 Synthetic Aperture Radar systems

8.2 INTRODUCTION
Microwave remote sensing incorporates both active and passive forms of sensing, depending on the source of illumination used. As described in Unit 7, the microwave portion of the electromagnetic spectrum covers the wavelength range from approximately 1 cm to 1 m. These longer wavelengths allow microwaves to penetrate cloud cover, haze, dust, and rainfall, since these wavelength ranges are not susceptible to the atmospheric scattering which affects the shorter optical wavelengths such as the visible and infrared regions. This property provides microwave remote sensing with all-weather, day-and-night capability, as data can be collected at any time and at any place. Radiometers are examples of passive imaging microwave remote sensing systems, while radars are active imaging microwave remote sensing systems. Both radiometers and radars have antennas and receivers, while radars have an additional transmitter for transmitting their own source of energy. In this unit active microwave sensors are dealt with in detail. An active microwave sensor acts as its own source: it transmits a directed pattern of energy to irradiate a portion of the Earth's surface and then receives the portion scattered back to the instrument.
Active microwave sensors are usually divided into two distinct classes: imaging and non-imaging. Imaging radars can be further classified into Real Aperture Radar (RAR), also called Side Looking Airborne Radar (SLAR), and Synthetic Aperture Radar (SAR), based on the antenna size and beam width used. The different types of non-imaging real aperture radars are scatterometers (used to measure wind speed), altimeters (used to measure platform height) and meteorological radars (used to measure rainfall and other weather phenomena). Other imaging synthetic aperture systems in addition to SAR are Interferometric SAR (InSAR or IFSAR), which is used to measure topography, and Inverse Synthetic Aperture Radar (ISAR), which is similar to SAR except that it uses the motion of the target rather than of the emitter to create the synthetic aperture.
RADAR is the most common form of imaging active microwave sensor. RADAR is an acronym for Radio Detection and Ranging, and the operation and function of radar are characterised by the word itself: "radio" stands for the microwave signal and "ranging" refers to distance measurement. The microwave (radio) signal is transmitted by the sensor towards the target, and the signals backscattered from the objects are received back. The various target objects are distinguished from one another based on the strength of the backscattered signals. Further, the sensor also measures the time delay between the transmitted and received signals in order to determine the range, or distance, to the target.

Figure 8.1:Classification of Microwave Remote Sensing systems

8.3 RADAR PRINCIPLES, RADAR WAVEBANDS, SIDE LOOKING
AIRBORNE RADAR (SLAR) SYSTEMS & SYNTHETIC APERTURE
RADAR (SAR), REAL APERTURE RADAR (RAR)

Radar Principle:
A typical imaging radar consists of a pulse generator, a transmitter, a receiver, an antenna, a transmitter/receiver switch and a recorder (Figure 8.2). The signal is initially produced by the pulse generator at a specified frequency and is sent to the transmitter. The transmitter then emits this energy towards the target object through a transmitting antenna. The signal interacts with the target object and is backscattered towards the receiving antenna. The receiving antenna passes the backscattered signals into the receiver circuits of the recorder. The receiver amplifies the received signal and finally extracts the target characteristics. In some cases two separate antennas are used for transmission and reception, while in other cases a transmit-receive (TR) switch is utilized to switch a single (monostatic) antenna between the transmitter and the receiver.
It is important for the reader to understand that radar systems do not measure the direct reflectance from the object; instead they record the backscattered intensity of radiation from the object. The strength of the returning signal measured by the radar antenna can be represented in several ways. One way of representing the signal radiometrically is directly as power; it can also be represented in log format as decibels (dB). In order to improve the visual appearance of radar images they are often converted into magnitude, where each pixel value represents the square root of the power. The relationship between the radar signal and the physical parameters of the system and target is given by the radar equation (Moore, 1983):


Pr = (G² λ² Pt σ) / ((4π)³ R⁴)                Equation (1)

Figure 8.2– Components of a Radar system


Where
Pr = received energy/power which returns from the target, i.e., the earth surface
Pt = transmitted energy/power which is generated by the radar
G = antenna gain (it determines the ability of the system to focus the transmitted energy)
λ = wavelength of the energy
R = range/distance between the antenna and the target, i.e., the earth surface
σ = radar cross-section

All the variables of the radar equation except the radar cross-section are known as controlled variables, as they depend on the radar system design and are therefore known in advance. The radar cross-section, on the other hand, depends on the characteristics of the terrain surface. The radar cross-section (σ) per unit area (A) which gets reflected back to the receiver antenna is called the radar backscattering coefficient (σ°) and is computed as

σ° = σ / A                Equation (2)
The radar backscattering coefficient is a dimensionless quantity that depends on surface parameters such as surface roughness and moisture content, and on radar system parameters such as wavelength and polarization. Replacing the radar cross-section in Equation (1) with the radar backscattering coefficient from Equation (2), the equation becomes:


Pr = (G² λ² Pt σ° A) / ((4π)³ R⁴)                Equation (3)
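A minimal sketch of Equation (3) is given below; the gain, wavelength, transmitted power, backscattering coefficient, range and illuminated area are all assumed example values, chosen only to show how the received power is evaluated.

import math

def received_power(gain, wavelength_m, p_transmit_w, sigma0, area_m2, range_m):
    """Radar equation (Equation 3): Pr = G^2 * lambda^2 * Pt * sigma0 * A / ((4*pi)^3 * R^4)."""
    return (gain**2 * wavelength_m**2 * p_transmit_w * sigma0 * area_m2) / ((4 * math.pi)**3 * range_m**4)

# Assumed example values (illustrative only):
G = 10 ** (35 / 10)        # antenna gain of 35 dB expressed as a linear ratio
lam = 0.056                # C-band wavelength, 5.6 cm
Pt = 5000.0                # transmitted peak power, W
sigma0 = 10 ** (-10 / 10)  # backscattering coefficient of -10 dB as a linear ratio
A = 20.0 * 20.0            # illuminated ground area of one resolution cell, m^2
R = 850e3                  # slant range to the target, m

Pr = received_power(G, lam, Pt, sigma0, A, R)
print(f"Received power: {Pr:.3e} W ({10 * math.log10(Pr):.1f} dBW)")

The tiny received power illustrates why radar receivers must be extremely sensitive and why the backscattered signal is usually reported in decibels.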

Radar Wavelengths:
The electromagnetic radiation is transmitted by the antenna in the form of short pulses of a specific duration and at a particular wavelength. Imaging radars operate within a fairly narrow set of wavelength bands, although each band covers a broad interval. Since the wavelengths of imaging radars are long compared with the visible, near-infrared (NIR), shortwave-infrared (SWIR) and thermal-infrared (TIR) regions, they are measured in centimetres (cm) rather than micrometres (µm). The nomenclature adopted for the radar wavelength bands is K, Ka, Ku, X, C, S, L and P (Table 8.1). This alphabetical naming convention originated in the United States and was initially meant for military purposes; the bands were deliberately named in a non-systematic way to obscure which frequencies were in use and to prevent their use by unauthorized personnel. The choice of a specific microwave wavelength band has numerous implications for the nature of the radar image generated.
Table 8.1: Radar wavelength bands

Band        Wavelength λ (cm)        Frequency ν = c/λ (GHz)


Ka 0.75 - 1.18 40.0 – 26.5
K 1.19 - 1.67 26.5 – 18.0
Ku 1.67 - 2.4 18.0 – 12.5
X 2.4 - 3.8 12.5 – 8.0
C 3.9 - 7.5 8.0 – 4.0
S 7.5 - 15 4.0 – 2.0
L 15 -30 2.0 – 1.0
P 30 -100 1.0 – 0.3
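Since frequency and wavelength are linked by ν = c/λ, the band limits in Table 8.1 can be checked with a one-line conversion; the example wavelengths below are simply illustrative.

C_CM_PER_S = 3.0e10  # speed of light in cm/s

def wavelength_cm_to_ghz(wavelength_cm):
    """Convert a radar wavelength in centimetres to frequency in GHz (nu = c / lambda)."""
    return C_CM_PER_S / wavelength_cm / 1.0e9

for lam in (3.0, 5.6, 23.5):   # example X-, C- and L-band wavelengths in cm
    print(f"lambda = {lam:5.1f} cm  ->  {wavelength_cm_to_ghz(lam):5.2f} GHz")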

Firstly, in the case of Real Aperture Radar (discussed in an upcoming section) the wavelength affects the resolution of the image, since the resolution cell size is directly proportional to it. Although the K band has one of the smallest wavelengths and should therefore give very fine resolution, it is partially absorbed by water vapour; it is consequently used by ground-based weather radars for tracking precipitation and cloud cover. Secondly, the wavelength affects the penetration depth of the signal into the target. The penetration of microwave energy into an object is described by the skin depth, which is the depth at which the signal strength is reduced to 1/e of its magnitude at the surface, or about 37 % (the skin depth equals the reciprocal of the attenuation coefficient α). The skin depth is measured in standard units of length and varies from feature to feature. In the absence of moisture the skin depth increases with increasing wavelength, hence the highest penetration is observed in arid regions. Other factors which affect the penetration depth are the surface roughness and the angle at which the microwave energy is incident on the object: penetration is higher for steeper angles and decreases as the incidence angle increases. Although the microwave bands are of longer wavelength and hence largely insensitive to atmospheric attenuation, heavy rainfall still creates hindrances in the transmission of microwave energy. The effect of wavelength on penetration depth is shown in Figure 8.3, in which the penetration increases as one moves from X to L band.
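The sketch below illustrates the 1/e definition of skin depth numerically: given an assumed attenuation coefficient, it reports the skin depth and the fraction of the signal amplitude remaining at a few depths. All values are hypothetical.

import math

def remaining_fraction(depth_m, skin_depth_m):
    """Fraction of the surface signal amplitude remaining at a given depth (exponential decay)."""
    return math.exp(-depth_m / skin_depth_m)

alpha = 2.0                 # assumed attenuation coefficient, 1/m (dry sandy material, illustrative)
skin_depth = 1.0 / alpha    # depth at which the signal falls to 1/e (about 37 %) of its surface value

print(f"Skin depth: {skin_depth:.2f} m")
for z in (0.25, skin_depth, 1.0):
    print(f"depth {z:.2f} m -> {remaining_fraction(z, skin_depth):.0%} of surface signal remains")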


Figure 8.3: Penetration depth for the X, C and L bands over an area (Ferro-Famil & Pottier, 2016)

Side Looking Airborne Radar (SLAR):


Side-looking airborne radar (SLAR) is the most commonly used type of imaging radar. The basic principle of a SLAR system is illustrated in Figure 8.4. In these systems the antenna is mounted on the side of the aircraft, producing a radar beam that is wide in the vertical plane and narrow in the horizontal plane. The image is built up through the forward motion of the aircraft. The transmitter generates a signal of a specific wavelength and duration, which the antenna directs towards the target on the ground. The signal interacts with the target and a portion is backscattered and received by the antenna. The distance between the antenna and the objects reflecting the signal is determined by measuring the time delay of the returning echoes. This distance is known as the slant range (SR) and is calculated with the following equation:

SR = c t / 2                                          Equation (4)

Figure 8.4: Principle of Side-Looking Airborne Radar (Lillesand, Kiefer, & Chipman, 2015)


where
SR = slant range
c = speed of light (3 × 10⁸ m/sec)
t = time delay between pulse transmission and reception of the backscattered signal.
The factor of 2 appears because the measured time corresponds to the two-way travel of the pulse, out to the target and back, i.e., twice the slant range.
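A minimal Python sketch of Equation (4), with an assumed echo delay, shows the two-way timing at work:

def slant_range(time_delay_s, c=3e8):
    # Equation (4): one-way slant range from the two-way echo delay.
    return c * time_delay_s / 2

# Assumed example: an echo delay of 60 microseconds corresponds to a 9 km slant range.
print(slant_range(60e-6))   # 9000.0 (metres)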
The return signal is used to modulate the intensity of the beam on an oscilloscope, producing a single intensity-modulated line that is transferred to film through a lens. The film is in strip form and its motion is synchronised with that of the aircraft. As the aircraft advances by one beam width, return signals are received from the next strip of terrain and the adjacent line is recorded on the film next to the previous one. In this way the movement of the aircraft produces a series of lines on the film, building up a two-dimensional picture of the radar returns from the earth's surface. The speed of the film is adjusted so that the image scale perpendicular to the flight direction matches the scale along the flight direction. Some distortion arises in converting the image from slant range to ground range; this can be removed by recording the radar sweeps on the cathode ray tube in a non-linear manner, thereby preserving points at their correct ground-range positions.
Side-looking airborne radars can be divided into two groups: Real Aperture Radar (RAR), in which the physical antenna length determines the beam width, and Synthetic Aperture Radar (SAR), in which signal processing is used to attain a much narrower beam width in the azimuth direction than a RAR can achieve. In the standard nomenclature of active microwave remote sensing, Real Aperture Radar and Side-Looking Airborne Radar are used as synonyms, although a Synthetic Aperture Radar is also a SLAR. The basic geometrical configuration of a SLAR system is given in Figure 8.5; the configuration remains identical whether the antenna is mounted on an aircraft or a spacecraft. Before diving deeper into RAR/SAR systems, a few terms depicted in Figure 8.5 need to be explained: the incidence, look and depression angles; the azimuth and range/look directions; and polarization.
1. Incidence Angle (θ): The angle between the incident radar beam and a line perpendicular to the earth's surface at the point of contact. Over flat terrain the incidence angle is the complement of the depression angle (γ); over sloped terrain no such simple relationship exists between the incidence and depression angles.
2. Look Angle (φ): The angle between a vertical line extending down from the radar antenna and the radar line of sight. The look angle is the complement of the depression angle and differs between the near range and the far range.
3. Depression Angle (γ): The angle between a horizontal line extending from the aircraft fuselage and the radar line of sight to a specific point on the earth's surface.
4. Azimuth direction: In a SLAR/RAR system the antenna is mounted on the aircraft so that it illuminates the terrain to the side of the flight path. The direction in which the aircraft moves along its straight flight line is called the azimuth (flight) direction. The portion of the illuminated swath closest to the flight track is called the near range, and the portion farthest away is called the far range.
5. Range/Look Direction: The direction in which the radar energy is transmitted, at right angles to the direction of movement of the aircraft or spacecraft. The range or look direction has a significant impact on the appearance (brightness or darkness) of features and on their interpretation: features that appear dark under one look direction may appear bright under another. Likewise, linear objects oriented orthogonal to the look direction are enhanced more strongly than those oriented parallel to it.
6. Polarization: The polarization of a radar signal describes the orientation of the electric field of the electromagnetic energy transmitted and received by the antenna (Figure 8.6). Unpolarized energy vibrates in all directions perpendicular to the direction of propagation, whereas radar systems can be configured to transmit and receive polarized energy: they can transmit horizontally or vertically polarized pulses and receive the horizontally or vertically polarized backscatter from the terrain. A basic configuration transmits horizontally polarized energy and receives horizontally polarized echoes from targets. Different kinds of images are therefore generated depending on the polarization components transmitted and received; the horizontal and vertical components are separated by placing horizontally and vertically polarized filters, respectively, in front of the sensor. Images in which a horizontal component is transmitted and horizontal received, or vertical transmitted and vertical received, are called HH or VV images, or the like-polarized mode (Figure 8.7 A & B). Images in which horizontal is transmitted and vertical received, or vertical transmitted and horizontal received, are called HV or VH images, or the cross-polarized mode (Figure 8.7 C). Radar systems measuring more than one polarization (e.g., HH and HV) are referred to as multipolarization radars, while a system measuring all four polarization states (HH, HV, VH and VV) is referred to as quadrature-polarized or fully polarimetric.

Figure 8.5: Geometric configuration of a typical RAR/SLAR system with flat terrain
assumption

Figure 8.6: Horizontally and vertically polarized radar signal.

Figure 8.7: (A) Like-polarized – VV polarization, (B) like-polarized – HH polarization, (C) cross-polarized – HV polarization (Jensen, 2014)
Real Aperture Radar (RAR):
SLAR systems are categorised into two types, Real Aperture Radar (RAR) and Synthetic Aperture Radar (SAR); in practice the term SLAR is often used as a synonym for the real aperture system. Both follow the SLAR configuration explained in the previous section. Real aperture radars are also known as brute-force or noncoherent radars, because in these systems the physical antenna length controls the beam width. The spatial resolution of a RAR system is controlled by several variables. The transmitted signal is focused so that it illuminates the smallest possible area on the ground, since it is this area that defines the spatial detail recorded. If the illuminated area is large, the backscattered signals from the different objects within it are averaged into a single tone on the image and the objects become difficult to distinguish. If, on the other hand, the focused area is small, much finer detail can be recorded and the distinct identities of the features are preserved.
One of the factors affecting the size of the illuminated area is the antenna length. The relationship between beam width and antenna length is given by:

β = λ / AL                                            Equation (5)

where
β = antenna beam width
λ = wavelength
AL = antenna length

A longer antenna allows the system to concentrate the radar energy on a smaller ground area. A longer antenna in a real aperture system therefore yields finer detail, but the practical limit on the size of antenna an aircraft can carry restricts the resolution that can be achieved. For the same reason real aperture systems are not used on spacecraft: the relatively small antennas that can be flown produce coarse-resolution imagery from the higher altitudes of spaceborne sensors.
A flashlight aimed at the floor, creating a spot of light, is a useful analogy for the area illuminated by a RAR on the ground. When the flashlight is aimed straight down the spot is small and circular; as it is pointed farther away the spot becomes larger, more irregular and dimmer. By analogy, the near range (R1) of the image has finer resolution than the far range (R2) (Figure 8.8). The antenna length determines the azimuth resolution of a real aperture SLAR, i.e., its ability to distinguish between two objects in the along-track dimension of the image. The azimuth resolution (Ra) of a RAR is given by:

Ra = SR × β                                           Equation (6)

where
Ra = azimuth resolution
SR = slant range (objects are better resolved in the near range than in the far range)
β = antenna beam width

Substituting β from Equation (5), Equation (6) becomes:

Ra = SR × λ / AL

A trigonometric relationship exists between the slant range distance (SR), the depression angle (γ) and the height of the aircraft (H) above the local vertical datum:

SR = H / sin γ                                        Equation (7)

so that the azimuth resolution equation becomes:

Ra = (H / sin γ) × (λ / AL)                           Equation (8)
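Equation (8) can be evaluated directly; the Python sketch below uses assumed example values for the flying height, depression angle, wavelength and antenna length.

import math

def rar_azimuth_resolution(H, depression_deg, wavelength, antenna_length):
    # Equation (8): azimuth resolution of a real aperture radar.
    slant_range = H / math.sin(math.radians(depression_deg))
    return slant_range * wavelength / antenna_length

# Assumed example: 6 km altitude, 45 deg depression angle, X-band (0.03 m), 3 m antenna.
print(rar_azimuth_resolution(H=6000, depression_deg=45, wavelength=0.03, antenna_length=3.0))
# about 85 m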


Figure 8.8: Azimuth Resolution of real aperture SLAR system


Returning to Equation (5), the antenna beam width also depends directly on the wavelength used by the radar system. A narrower beam width can be achieved by using a shorter wavelength, but shorter wavelengths suffer greater atmospheric attenuation. Real aperture radars have a simple design and require little data processing, but because of these resolution constraints their operation is restricted to short ranges, low altitudes and relatively short wavelengths; the short range and low altitude limit the coverage, while the shorter wavelengths are more strongly attenuated.
Apart from azimuth resolution, radar systems have a second resolution, defined in the across-track direction, called the range resolution (Rr). The range resolution is directly proportional to the microwave pulse length: a shorter pulse gives a finer range resolution. The pulse length is the product of the speed of light (c) and the transmission duration (τ). The transmission duration is measured in microseconds (10⁻⁶ sec) and typically lies between 0.4 and 1.0 microsecond, corresponding to pulse lengths of roughly 120 – 300 m. Because the pulse travels from the sensor to the target and back again, a division factor of 2 is used when calculating the slant range resolution.
To resolve two closely spaced objects in the range direction, the signals reflected from them must be received by the antenna separately. If the returning signals overlap in time, the image is blurred and the objects are not resolved. The concept is illustrated with the example in Figure 8.9.


Figure 8.9: Relationship between pulse length and range resolution


In this example a pulse of length PL is transmitted towards objects A and B, and the slant range separation between the two objects is less than PL/2. As a result, echoes from object B begin returning while object A is still reflecting, so the echoes from A and B overlap and the two are not resolved as separate objects on the radar image; instead they appear as one larger object extending from A to B. If the slant range separation between the objects is greater than PL/2, they are resolved as two separate objects. We can therefore conclude that the slant range resolution is independent of the distance from the aircraft and corresponds to half the pulse length. The ground range resolution, however, does change with distance: the ground resolution cell in the range direction is inversely proportional to the cosine of the depression angle (γ) (Figure 8.10), so as the slant range increases and the depression angle becomes smaller, the ground range resolution cell becomes smaller (finer). The range resolution is calculated as:
Rr = (c × τ) / (2 cos γ)                              Equation (9)

where
Rr = ground resolution in the range direction
c = speed of light (m/sec)
τ = pulse duration
γ = depression angle
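A short Python sketch of Equation (9) follows; the pulse duration and depression angle used are assumed example values chosen from the typical ranges quoted above.

import math

def ground_range_resolution(pulse_duration_s, depression_deg, c=3e8):
    # Equation (9): ground range resolution of a side-looking radar.
    return (c * pulse_duration_s) / (2 * math.cos(math.radians(depression_deg)))

# Assumed example: a 0.4 microsecond pulse received at a 35 degree depression angle.
print(ground_range_resolution(0.4e-6, 35))   # roughly 73 m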


Figure 8.10: Relationship between slant range resolution and ground range resolution
A natural question is why we do not simply select the shortest possible pulse to achieve the finest resolution. The reason is that shortening the pulse also reduces the total energy that illuminates the target; with weaker illumination, the energy backscattered from the target carries less information about it. The pulse length is therefore chosen as a trade-off between shortening the pulse to improve the range resolution and maintaining sufficient signal strength from the target. In summary, to resolve two objects distinctly in the range direction, their slant range distances must be separated by at least half the pulse length.
Synthetic Aperture Radar (SAR):
The development of synthetic aperture radar has immensely improved the azimuth resolution attainable in radar remote sensing. Recall from the discussion of real aperture radar that the angular beam width is inversely proportional to the antenna length (Equation 5); according to that equation, a finer azimuth resolution can be achieved by using a longer antenna. To overcome the physical difficulty of deploying very long antennas on aircraft and spacecraft, engineers developed a method in which a long antenna is synthesized electronically. A SAR system uses a small antenna that illuminates the ground to the side of the aircraft with a broad beam, in much the same manner as a typical RAR system. The major difference is that the SAR records a large number of successive returns from the object as the platform moves, and the Doppler principle is used to monitor and combine these returns so that the azimuth resolution is as fine as if it had been generated by a very narrow beam.
We have all experienced the Doppler principle in everyday life: when a whistling train approaches an observer the whistle is heard at a higher frequency, and the frequency drops as the train moves away. The Doppler principle states that if the observer and/or the source are in relative motion, the observed frequency of the wave changes. This applies to all harmonic waves, including the microwaves used in radar remote sensing.
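As a hedged illustration only, the standard two-way Doppler relation for a monostatic radar, fd = 2·vr/λ (not written out explicitly in this unit), can be evaluated in a few lines of Python; the closing velocity and wavelength below are assumed values.

def doppler_shift_hz(radial_velocity_ms, wavelength_m):
    # Two-way Doppler shift for a monostatic radar: fd = 2 * v_r / wavelength.
    return 2 * radial_velocity_ms / wavelength_m

# Assumed example: a 50 m/s closing velocity observed at C-band (5.6 cm wavelength).
print(doppler_shift_hz(50, 0.056))   # about 1.8 kHz (positive while approaching)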

Figure 8.11: Doppler principle in radar hologram creation (Jensen, 2014)

Figure 8.11 shows how the Doppler frequency of the returns from a terrain object shifts at successive time intervals n, n+1, n+2, n+3 and n+4 as a result of the forward motion of the aircraft. It is evident from the figure that the frequency of the radar signal returning from the target increases from time n to time n+2 and then decreases from time n+3 to n+4. Having established the principle of the Doppler shift, let us see how images are generated by a synthetic aperture radar. The long antenna of the SAR is synthesized from a physically small antenna by exploiting the Doppler shift together with the aircraft's motion. Two assumptions are made: first, that the terrain remains stationary, and second, that the target being imaged lies at a fixed distance from the aircraft flight path. The short antenna mounted on the aircraft emits a series of microwave pulses at regular time intervals while the aircraft flies in a straight line. As the object encounters these pulses it backscatters a portion of the energy back to the antenna (Figure 8.11(a)). The distance between the target and the aircraft decreases up to a certain point; in the figure the target is first 9 wavelengths away, decreasing to 8, 7 and 6.5 wavelengths in (b), (c) and (d), respectively. Point (d), at 6.5 wavelengths, lies perpendicular to the antenna at the shortest distance from the aircraft and is known as the point of zero Doppler shift. Beyond this point the distance between the target and the aircraft increases again, as shown in Figure 8.11(e). The waves reflected from the object as the aircraft moves from time n to n+4 are electronically combined with a reference signal.

The combination of the returned and reference signals produces interference. This interference is recorded as a voltage that controls the brightness of a spot scanned across the screen of a cathode ray tube. A high voltage and a bright spot occur when the returned pulse and the reference pulse coincide, i.e., are displaced in the same direction (up or down); this is known as constructive interference (Figure 8.12a). Destructive interference (Figure 8.12b) occurs when the returning and reference signals do not coincide, producing a low voltage and a dim or dark spot. These spots are recorded as light and dark dashes of unequal length, representing a one-dimensional interference pattern, on a film called the radar hologram, which moves in proportion to the aircraft velocity.

Figure 8.12: (A) Constructive and (B) destructive interference
The Doppler frequency shift allows the target to be resolved on the image, with the antenna behaving as if it had the length L shown in Figure 8.13. This synthetically generated longer antenna produces a narrow-beam effect in the azimuth direction, shown by the shaded area in Figure 8.13. Note that as the distance from the aircraft increases, the azimuth resolution cell of a RAR system grows larger (coarser), whereas it remains unchanged for a SAR system; the ground range resolution cell, by contrast, becomes smaller in both RAR and SAR systems.


Figure 8.13: A typical SAR system displaying the synthesized longer antennas and
azimuth and range resolution
The azimuth resolution of a synthetic aperture radar is given by:

SARa = L / 2                                          Equation (10)

Radar signals are coherent, i.e., they are transmitted within a very narrow range of wavelengths, and this coherence gives rise to speckle in the images. Speckle produces a salt-and-pepper pattern on radar images, which can be reduced by processing the data with several looks, i.e., by averaging. With multi-look processing, the azimuth resolution of Equation (10) becomes:

SARa = N × L / 2                                      Equation (11)

where N is the number of looks.
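A minimal Python sketch of Equations (10) and (11) follows, interpreting L as the length of the real (physical) antenna, which is the standard single-look SAR result; the 10 m antenna length is an assumed example.

def sar_azimuth_resolution(antenna_length_m, n_looks=1):
    # Equations (10) and (11): single-look and multi-look SAR azimuth resolution.
    return n_looks * antenna_length_m / 2

# A 10 m antenna (assumed) gives 5 m single-look resolution; 4-look averaging coarsens it to 20 m.
print(sar_azimuth_resolution(10.0), sar_azimuth_resolution(10.0, n_looks=4))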


The SAR sensors discussed so far use the same antenna to transmit and receive the radar signals; these are referred to as monostatic SARs. They can, however, also transmit and receive using two separate antennas, which is called the bistatic configuration. In a bistatic SAR one antenna transmits the signal while both antennas receive the backscattered signals. This configuration is primarily used in SAR interferometry, in which the same area is acquired by the SAR from two vantage points with considerable overlap, a technique useful for creating digital elevation models. The Shuttle Radar Topography Mission (SRTM) and the TerraSAR-X/TanDEM-X pair are examples of bistatic SAR systems. Different spaceborne SAR systems are listed in Table 8.2.

Table 8.2: Spaceborne Synthetic Aperture Radar systems

Year   Satellite                          Country        Band   Polarization   Look Angle   Resolution (m)
1991   Almaz-1                            Soviet Union   S      HH             20–70°       10–30
1991   ERS-1                              ESA            C      VV             23°          30
1992   JERS-1                             Japan          L      HH             35°          18
1995   ERS-2                              ESA            C      VV             23°          30
1995   Radarsat-1                         Canada         C      HH             10–60°       8–100
2002   Envisat                            ESA            C      Dual           14–45°       30–1000
2006   ALOS                               Japan          L      Quad           10–51°       10–100
2007   TerraSAR-X                         Germany        X      Dual           15–60°       1–18
2007   COSMO-SkyMed 1                     Italy          X      Quad           20–60°       1–100
2007   COSMO-SkyMed 2                     Italy          X      Quad           20–60°       1–100
2007   Radarsat-2                         Canada         C      Quad           10–60°       1–100
2008   COSMO-SkyMed 3                     Italy          X      Quad           20–60°       1–100
2009   RISAT-2                            India          X      Quad           20–45°       1–8
2010   TanDEM-X                           Germany        X      Dual           15–60°       1–18
2010   COSMO-SkyMed 4                     Italy          X      Quad           20–60°       1–100
2012   RISAT-1                            India          C      Quad           12–50°       1–50
2014   ALOS-2                             Japan          L      Quad           10–60°       1–100
2014   Sentinel-1A                        ESA            C      Dual           20–47°       5–40
2015   SMAP                               US             L      Tri            35–50°       1000+
2016   Sentinel-1B                        ESA            C      Dual           20–47°       5–40
2018   SEOSAR/Paz                         Spain          X      Dual           15–60°       1–18
2018   NOVASAR-S                          UK             S      Tri            15–70°       6–30
2019   COSMO-SkyMed 2nd Generation-1      Italy          X      Quad           20–60°       1–100
2019   COSMO-SkyMed 2nd Generation-2      Italy          X      Quad           20–60°       1–100
2019   Radarsat Constellation 1, 2, 3     Canada         C      Quad           10–60°       3–100
2019   RISAT-2B                           India          X      Details not available
2019   RISAT-2B1                          India          X      Details not available

8.4 SUMMARY
This unit has introduced active remote sensing with special emphasis on radar systems. The working principles of typical SLAR, RAR and SAR systems have been discussed in consecutive sections. The unit has also given the reader an understanding of the basic radar geometry used in calculating the range and azimuth resolutions.

8.5 GLOSSARY
Acronym Description
ALOS Advanced Land Observing Satellite
Envisat Environmental Satellite
ERS European Remote Sensing Satellite
ESA European Space Agency
InSAR or IFSAR Interferometric Synthetic Aperture Radar
ISAR Inverse synthetic-aperture radar
JERS Japan Earth Resources Satellite-1
NIR Near-infrared
RADAR RAdio Detection And Ranging
RAR Real Aperture Radar
RISAT Radar Imaging Satellite


SAR Synthetic Aperture Radar


SEOSAR Satélite Español de Observación SAR
SLAR Side Looking Airborne Radar
SMAP Soil Moisture Active Passive
SRTM Shuttle Radar Topography Mission
TerraSAR-X German X-band SAR satellite
SWIR Shortwave-infrared
TIR Thermal-infrared

Terms                          Description
Azimuth                        In mapping and navigation, azimuth is the direction to a target with respect to north, usually expressed in degrees. In radar remote sensing, azimuth pertains to the direction of the orbit or flight path of the radar platform.
Backscatter                    The microwave signal reflected by elements of an illuminated surface in the direction of the radar antenna.
Bistatic                       In SAR, when two different antennae are used for transmission and reception of the signal.
Ground range                   Range between the radar antenna and an object as given by a side-looking radar image, but projected onto the horizontal reference plane of the object space.
Interference                   In radar remote sensing: the wave interactions of the backscattered signals from the target surface.
Interferometry                 Computational process that makes use of the interference of two coherent waves. In the case of imaging radar, two different imaging paths cause phase differences from which an interferogram can be derived. In SAR applications, interferometry is used for constructing a DEM.
Monostatic                     In SAR, when the same antenna is used for transmission and reception of the signal.
Radar equation                 Mathematical expression that describes the average received signal level compared to the additive noise level in terms of system parameters. Principal parameters include the transmitted power, antenna gain, radar cross-section, wavelength and range.
Radiometer                     A sensor that measures radiant energy, typically in one broad spectral band ('single-band radiometer') or in only a few bands ('multi-band radiometer'), but with high radiometric resolution. The term is associated with passive microwave remote sensing sensors.
Slant range                    Distance as measured by the radar to each reflecting point in the scene and recorded in the side-looking radar image.
Speckle                        Interference of backscattered waves stored in the cells of a radar image. It causes the return signals to be extinguished or amplified, resulting in random dark and bright pixels in the image.
Synthetic aperture radar (SAR) The (high) azimuth resolution (in the direction of the flight line) is achieved through off-line processing. The SAR is able to function as if it had a large virtual antenna aperture, synthesized from many observations made with the (relatively) small real antenna of the SAR system.


8.6 ANSWER TO CHECK YOUR PROGRESS


1. Explain the Radar equation.
2. Explain the radar wavelength bands.

8.7 REFERENCES
1. Ferro-Famil, L., & Pottier, E. (2016). Synthetic Aperture Radar Imaging. In N. Baghdadi,
& M. Zribi, Microwave Remote Sensing of Land Surface (pp. 1-65). Elsevier.
2. Jensen, J. R. (2014). Remote Sensing of the Environment: An Earth Resource Perspective. Upper Saddle River, NJ: Pearson Education.
3. Lillesand, T. M., Kiefer, R. W., & Chipman, J. W. (2015). Remote Sensing and Image
Interpretation (Seventh ed.). New York: John Wiley & Sons.
4. Moore, R. K. (1983). Imaging Radar Systems. In R. Colwell, Manual of Remote Sensing
(pp. 429-474). Bethesda: ASP&RS.
5. Tempfli, K., Kerle, N., Huurneman, G. C., & Janssen, L. F. (2009). Principles of Remote
Sensing. The Netherlands: ITC.
6. Ulaby, F. T., & Long, D. G. (2014). Microwave Radar and Radiometric Remote Sensing.
USA: University of Michigan Press.

8.8 TERMINAL QUESTIONS

1. What do you understand by the radar principle?
2. A given imaging radar system transmits pulses with a duration of 0.1 µsec. Find the range resolution of the system at a depression angle of 35°.
3. A given imaging radar system has a 1.5-mrad antenna beamwidth. Determine the azimuth resolution of the system at slant ranges of 7 and 14 km. (Note: 1 mrad = 0.001 radian)
4. What is Doppler shifting? How is it used in synthetic aperture radar?
5. What is synthetic aperture radar? How is the image of a SAR system formed?
6. What is the difference between monostatic and bistatic antennas?


UNIT 9 - INTERACTION BETWEEN MICROWAVES AND EARTH’S SURFACE

9.1 OBJECTIVES
9.2 INTRODUCTION
9.3 INTERACTION BETWEEN MICROWAVES AND EARTH’S
SURFACE
9.4 SUMMARY
9.5 GLOSSARY
9.6 ANSWER TO CHECK YOUR PROGRESS
9.7 REFERENCES
9.8 TERMINAL QUESTIONS


9.1 OBJECTIVES
After reading this unit the learner will be able to understand:

 how microwaves interact with materials and objects (both natural and artificial).
 the various characteristics, such as surface roughness, electrical properties, penetration depth, polarization and frequency, that affect the interaction of radar signals with earth surface features.

9.2 INTRODUCTION
The portion of the transmitted energy that returns to the radar from targets on the surface governs the brightness of features in a radar image. The interactions of the radar signal with the earth's surface control the magnitude or intensity of the backscattered energy, which is a function of several variables or parameters. These include radar characteristics, such as frequency, viewing geometry and polarization, and surface feature characteristics, such as topography, land cover type, surface roughness and dielectric properties. Because these characteristics are closely interrelated, it is very difficult to assess how much each one individually contributes to the appearance of features in the radar image.

Any change in one of these parameters may alter the response associated with the others, which in turn affects the amount of backscatter. Thus the interactions of these variables together determine the brightness of features in an image.

9.3 INTERACTION BETWEEN MICROWAVES AND EARTH'S SURFACE
The interactions between microwaves and the earth's surface are affected by various factors, such as the surface roughness, geometrical and electrical characteristics of the terrain. In addition, polarization, penetration depth and frequency also control the interactions.

Surface Roughness Characteristics:


The strength of the radar backscatter is strongly influenced by terrain properties such as surface roughness. Surface texture characteristics such as "rough" (coarse), "intermediate" or "smooth" (fine) are routinely used in visually interpreting aerial photographs. The same analogy can be extended to radar image interpretation, keeping in mind that roughness in microwave remote sensing is considered at the microscale, mesoscale and macroscale.


Microscale surface roughness corresponds to objects whose roughness heights are measured in centimetres, for example stone heights, leaf sizes and tree branch lengths; topographic relief and mountains are not considered in this category. For microscale roughness, the amount of microwave energy backscattered towards the sensor is governed by the relationship between the wavelength of the incident radar energy (λ), the depression angle (γ) and the local height of the objects (h, in cm) within the illuminated ground cell. Using the modified Rayleigh criteria, the microscale roughness characteristics and the radar system parameters (λ, γ, h) can be used to predict how the earth's surface will appear in a radar image. An area with smooth surface roughness behaves like a specular reflector, where most of the incident microwave energy is reflected away from the antenna. The small amount of backscattered energy returned to the antenna is recorded on the radar image as a dark area. The smooth surface roughness criterion is:
h < λ / (25 sin γ)                                    Equation 1

To see how this equation is used, suppose we want to find the local height (h) of an object that will produce a smooth (dark) radar return for the X-band wavelength (λ = 3 cm) at a depression angle (γ) of 35° (Figure 9.1a). Substituting these values in Equation 1:

h < 3 cm / (25 × sin 35°)
h < 3 cm / (25 × 0.573576)
h < 0.21 cm

Thus an object with a local height of less than 0.21 cm, uniformly filling a radar image resolution cell, produces very little radar return and is therefore recorded as a dark tone on the image.


Similarly, to predict a bright radar return on the image, the modified Rayleigh rough criterion (Figure 9.1b) can be used:

h > λ / (4.4 sin γ)                                   Equation 2

Substituting the same λ (3 cm) and γ (35°) into Equation 2 gives the local height of an object that produces a bright return on the radar image:

h > 3 cm / (4.4 × sin 35°)
h > 3 cm / (4.4 × 0.573576)
h > 1.18 cm

Thus an object with a local height greater than 1.18 cm, uniformly filling a radar image resolution cell, produces a strong radar return and is therefore recorded as a bright tone on the image.



Figure 9.1: (a) Smooth surface roughness (h < 0.21 cm), representing a specular reflector; (b) rough surface (h > 1.18 cm), representing a diffuse reflector

There is also a third case: objects with local heights between 0.21 cm and 1.18 cm within the radar resolution cell appear grey in the radar image and have intermediate surface roughness for this combination of radar wavelength and depression angle. The radar backscatter is therefore dependent on both the wavelength and the depression angle. The effect of wavelength and depression angle on the radar return can be seen in Table 9.1, in which the modified Rayleigh criteria are calculated for three radar wavelengths (λ = 0.86, 3.0 and 23.5 cm) and three depression angles (γ = 45°, 60° and 70°). An object with a height of 0.5 cm would appear bright on the Ka-band imagery, grey on the X-band image and dark on the L-band imagery. This relationship is significant because the same terrain or object can appear quite differently in radar images depending on the radar sensor's wavelength and depression angle.

Table 9.1: Modified Rayleigh surface roughness criteria for three different wavelengths and depression angles.

Surface roughness   Ka band (λ = 0.86 cm, γ = 45°)   X band (λ = 3 cm, γ = 60°)   L band (λ = 23.5 cm, γ = 70°)
Smooth              h < 0.05 cm                      h < 0.14 cm                  h < 1 cm
Intermediate        h = 0.05 to 0.28 cm              h = 0.14 to 0.79 cm          h = 1 to 5.68 cm
Rough               h > 0.28 cm                      h > 0.79 cm                  h > 5.68 cm
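The thresholds in Table 9.1, and those of the worked X-band example above, follow directly from Equations 1 and 2. A short Python sketch of this classification (using the same X-band example values) is:

import math

def roughness_class(h_cm, wavelength_cm, depression_deg):
    # Modified Rayleigh criteria (Equations 1 and 2): classify the local relief height h.
    s = math.sin(math.radians(depression_deg))
    smooth_limit = wavelength_cm / (25 * s)
    rough_limit = wavelength_cm / (4.4 * s)
    if h_cm < smooth_limit:
        return "smooth (dark return)"
    if h_cm > rough_limit:
        return "rough (bright return)"
    return "intermediate (grey return)"

# Reproduces the X-band example above: thresholds of about 0.21 cm and 1.18 cm at a 35 deg depression angle.
print(roughness_class(0.5, wavelength_cm=3.0, depression_deg=35))   # intermediate (grey return)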

Radar image interpretation keys are therefore more challenging to create than optical aerial or satellite interpretation keys. In a later section of this unit we will see that radar returns are also affected by look direction. Mikhail, Bethel and McGlone (2001) gave another criterion for predicting weak, intermediate and strong radar returns. They suggested that a weak return is produced when the local relief height is less than one-eighth of the incident wavelength; the weak return results from specular reflection directed away from the antenna. Relief heights between one-eighth and one-half of the wavelength produce an intermediate return, since only a small portion of the radar energy is backscattered to the antenna. Similarly, local relief with a height greater than one-half of the incident radar wavelength produces a strong radar return.

Just as with microscale surface roughness, mesoscale and macroscale surfaces also scatter the incident microwave energy. Microscale roughness is a function of small features, such as leaf size, within an individual resolution cell, whereas mesoscale roughness is a function of the backscattering characteristics of several resolution cells, such as an entire forest canopy; the canopy therefore appears with a coarser texture on the radar image for the same wavelength and depression angle. Finally, macroscale surface roughness is governed by the slope and aspect of the terrain and by the appearance of shadows, which influence image formation.

Geometrical Characteristics:
Radar returns are affected by variations in the local incidence angle. Slopes facing the sensor produce strong radar returns, while slopes facing away from the sensor produce weak or no returns. The local incidence angle determines whether backscattering or shadowing dominates for different surface properties: topographic slope dominates the radar backscatter at local incidence angles of 0° to 30°, surface roughness dominates for angles of 30° to 70°, and radar shadows dominate the image for angles greater than 70°. The shape and orientation of objects must therefore be considered in addition to their surface roughness when evaluating radar returns.

The return of the radar signal is also influenced by the geometric configuration of targets. Brighter returns are created by objects with complex geometrical shapes, such as those found in the urban landscape. These bright returns arise from multiple reflections that redirect the incident radar signal back towards the antenna, much like a ball bouncing off the corner of a pool table back to the player. Such objects are called corner reflectors (Figure 9.2c); examples include the corners of buildings and the passages between densely built structures. Corner reflectors are especially common in urban areas because of the angular shapes formed by concrete, masonry and metallic surfaces; in rural areas, farm buildings and agricultural equipment also act as corner reflectors. Corner reflectors aid radar image interpretation, but it should be kept in mind that their returns on the image are not proportional to their actual size: the returns appear much larger and brighter than the objects causing them.


Figure 9.2: Radar reflections from (a) a rough surface – diffuse scattering, (b) a smooth surface – specular scattering, and (c) a corner surface

Electrical Characteristics (Dielectric Constant) :


The microwave energy transmitted by a radar system interacts with the earth's surface features, and these features conduct electricity to differing degrees. The complex dielectric constant is a measure of the ability of a material (vegetation, snow, ice, water, soil, etc.) to conduct electricity. Within the microwave region, the dielectric constants of soil, rock and vegetation range from about 3 to 8, while water has a dielectric constant of about 80. A material's dielectric constant is affected most significantly by its moisture content; hence the amount of radar energy backscattered to the antenna is strongly influenced by the moisture within soil, rock surfaces and vegetation, and moist soil produces stronger radar returns than dry soil. Bare-ground moisture is best estimated from radar images when the terrain is free of overlying features such as plants and rocks and has an unvarying surface roughness. The depth to which microwave energy penetrates the soil depends on the moisture content: high soil moisture permits only minimal penetration, of the order of a few centimetres, and most of the incident radar energy is scattered at the surface, resulting in bright returns. A general rule of thumb is that radar energy can penetrate to a depth roughly equal to its wavelength, although dry soil has been observed to allow penetration depths of several metres.


The dielectric constant of the ocean surface is very high; thus a large part of the incident radar energy is reflected at the water's surface. The liquid water content of snow can be determined by microwave remote sensing because the dielectric constant of snow depends on the amount of water present in liquid form. Similarly, the age, compaction and type of ice can be identified from differences in dielectric constants. Microwave remote sensing has thus proven useful for extracting a variety of biophysical parameters. Healthy vegetation such as crops and forest canopies has a large surface area and high moisture content; dense, moist vegetation therefore acts like a hovering cloud of water droplets above the terrain and reflects radar energy strongly. The sensitivity of the radar energy to soil and vegetation moisture content increases with the steepness of the depression angle.

Response of Vegetation to Microwave Energy:


Quantitative monitoring of biomass, its spatial distribution, and its gross and net productivity has recently become a focal point of research among the global scientific community, and the movement of energy and matter within vegetated regimes has long fascinated scientists. Many vegetated ecosystems lie in areas covered by cloud for much of the year, and synthetic aperture radar can be useful for extracting biophysical parameters from them, such as:
1. Type of vegetation
2. Water content
3. Biomass components such as foliage and stems
4. Leaf area index, leaf orientation and branch angle distribution.

Optical remote sensing systems such as Landsat 8 (OLI), Sentinel-2 (MSI) and SPOT use the optical wavelengths of the electromagnetic spectrum and measure the energy reflected, scattered, transmitted and/or absorbed by the top few leaf layers and stems of the vegetation canopy. Optical remote sensing therefore provides limited information about the canopy's internal characteristics and very little about the underlying soil. Microwave energy, on the other hand, can penetrate the vegetation canopy to varying depths depending on its frequency, polarization and angle of incidence. It interacts with internal plant structures whose dimensions are of the order of centimetres and decimetres, which helps establish a relationship between the canopy components and the radar backscattering.


Part of the vertically or horizontally polarized radar energy interacts with the trees, while some is scattered back to the sensor. The amount of energy received at the antenna depends on the frequency and polarization of the incident energy, on the depolarizing capability of the canopy components, on the signal penetration depth, and on whether the signal eventually interacts with the underlying soil surface. The relationship between penetration depth, polarization and the canopy components can be understood in terms of two types of scattering: (i) surface scattering and (ii) volume scattering. Like-polarized radar returns (HH or VV), generated by single reflections from canopy components such as leaves, stems and branches, are recorded as strong, bright signals; this is called canopy surface scattering. In contrast, radar energy that is scattered multiple times within the volume of the canopy becomes depolarized (HV or VH); this is called volume scattering.
Kasischke and Bourgeau-Chavez (1997) explained how the radar backscattering coefficient (σ°) is produced when microwave energy interacts with the terrain through surface and volume scattering. The volume scattering contribution can be understood by assuming that the radar signature reaching the antenna comes from different canopy layers. These interactions arise either from (i) woody vegetation, which has three distinct layers: an overhead canopy of small branches and leaves, a trunk layer of large branches and trunks, and the ground surface (Figure 9.3(a)); or from (ii) non-woody vegetation, which has only two distinct layers, the overhead canopy and the ground surface (Figure 9.3(b)).

Figure 9.3: Sources of scattering from (a) woody vegetation and (b) herbaceous vegetation


The backscattering coefficient returned to the radar antenna from woody vegetation, σ°w, is expressed as (Kasischke & Bourgeau-Chavez, 1997):

σ°w = σ°c + τc² τt² (σ°m + σ°t + σ°s + σ°d)           Equation 3

where
σ°c is the backscatter coefficient of the canopy layer of smaller woody branches and foliage (i.e., surface scattering);
τc is the transmission coefficient of the vegetation canopy;
τt is the transmission coefficient of the trunk layer;
σ°m is the multiple-path scattering between the ground and canopy layer;
σ°t is the direct scattering from the tree trunks;
σ°s is the direct surface backscatter from the ground;
σ°d is the double-bounce scattering between the trunks and the ground.

The same equation can be adapted to non-woody vegetation by eliminating the interactions associated with the trunks, so that the radar backscattering from herbaceous vegetation, σ°h, is expressed as:

σ°h = σ°c + τc² (σ°m + σ°d)                           Equation 4

The terms in Equations 3 and 4 depend on 1) the vegetation type (which determines surface roughness), 2) the polarization and wavelength of the incident microwave energy, 3) the dielectric constant of the vegetation, and 4) the dielectric constant of the ground surface. Vegetation with a higher water content has a higher dielectric constant, and the scattering and attenuation terms in Equations 3 and 4 increase with it. The attenuation by a vegetation canopy is governed by the water content per unit volume rather than by the plant structure itself (leaves, trunk or stems). Microwave scattering from vegetated surfaces also depends on the condition of the ground layer, which is characterised by two properties: 1) micro- and mesoscale surface roughness and 2) the reflection coefficient. The amount of microwave energy backscattered to the antenna (σ°s) increases with greater surface roughness, while the energy scattered in the forward direction (σ°m and σ°d) decreases with greater surface roughness.


The dielectric constant of the ground layer controls the reflection coefficient; its low value for a dry ground layer results in weak reflection. As the dielectric constant of moist soil increases, the reflection coefficient increases, and with it the amount of microwave energy that is backscattered and forward-scattered: for constant surface roughness, σ°m, σ°s and σ°d all increase with the soil dielectric constant. However, a layer of standing water over the vegetated ground surface in wet environments eliminates the effect of surface roughness and increases the reflection coefficient. With the surface roughness effect removed, reflection at the water surface is specular and there is no direct backscatter from the ground, which increases the ground–trunk (σ°d) and ground–canopy (σ°m) interactions.

Penetration Depth and Frequency:


Longer radar wavelengths achieve greater penetration depths. The impact of wavelength on penetration depth can be understood from the hypothetical situation of microwave energy interacting with a pine forest (Figure 9.4).

Figure 9.4: Surface and Volume scattering from a hypothetical pine forest

The leaves and stems at the top of the canopy interact with the incident microwave energy, resulting in surface scattering. The stems, branches, leaves and trunk interact with the energy transmitted through the canopy, resulting in volume scattering. Finally, surface scattering occurs again at the soil surface.


A visual comparison of the response of the same canopy to different wavelength bands, X (3 cm), C (5.8 cm) and L (23.5 cm), is shown in Figure 9.5 (a–c).

Figure 9.5: Response of a typical forest canopy to X, C, and L microwave wavelength bands

Since the shorter X-band wavelength has less penetration, it is attenuated mostly at the top of the canopy by leaves and small branches, resulting in surface scattering. The C-band produces surface scattering at the top of the canopy and some volume scattering from the stands, with very little response recorded from the ground. The longer L-band penetrates deep into the canopy, producing volume scattering from the stems, leaves, trunk and branches; part of the microwave energy is also transmitted to the ground, where interaction at the soil–vegetation boundary results in surface scattering.

Radar Backscatter and Biomass:


An approximately linear relationship is observed between radar backscatter and increasing biomass, which saturates at a certain biomass level depending on the radar frequency. The biomass saturation level for loblolly pine was calculated by Dobson et al. (1992) as 200 and 100 tons/ha for the P- and L-bands, respectively, using the NASA/JPL polarimetric AIRSAR.


The C-band of the AIRSAR showed significantly less sensitivity to total above-ground biomass. Wang et al. (1994) used ERS-1 SAR data to study the effects of changes in loblolly pine biomass and soil moisture on radar backscattering; they inferred that the C-band responds poorly to total above-ground biomass because its sensitivity is dominated by soil moisture and by the sensor's steep local incidence angle of 23°. As discussed in the previous section, the lower frequencies (P- and L-bands) produce volume scattering, while the higher frequencies (C- and X-bands) produce surface scattering from the top of the canopy. Various researchers have also established correlations between the leaf area index (LAI) and radar measurements. Surface scattering is the leading cause of bright radar returns on like-polarized (HH or VV) images, while volume scattering is the leading cause of bright returns on cross-polarized (HV or VH) images. Vegetation monitoring in sloping or mountainous regions is best done with cross-polarized images (HV or VH), since these are less sensitive to slope variations. Similarly, the same crop grown with differing row directions can produce like-polarized images that are difficult to interpret; cross-polarized images are used to avoid this.

9.4 SUMMARY
This unit has introduced the interaction of radar energy with the terrain surface, which depends on various terrain and radar system characteristics. Terrain characteristics such as geometry, surface roughness and dielectric constant were discussed, along with the radar system characteristics that influence the interactions with terrain features, such as wavelength, frequency, polarization, incidence angle and penetration depth. The surface and volume scattering phenomena were explained in detail.

9.5 GLOSSARY
Terms                 Description
Corner reflection     In radar remote sensing: a high backscatter typically caused by two flat surfaces intersecting at 90 degrees and situated orthogonally to the incident radar beam.
Corner reflector      The combination of two or more intersecting specular surfaces that combine to enhance the signal reflected back in the direction of the radar, e.g., houses in urban areas.
Dielectric constant   Parameter that describes the electrical properties of a medium. The reflectivity of a surface and the penetration of microwaves into the material are determined by this parameter.
Roughness             A term with different meanings, among them the variation of surface elevation within a ground resolution cell. A surface appears rough to microwave illumination when the elevation variations become larger than a fraction of the radar wavelength.
Volume scattering     The process of multiple reflections and redirection of radiation caused by heterogeneous material (atmosphere, water, vegetation cover, etc.).

9.6 ANSWER TO CHECK YOUR PROGRESS


1. What are the impacts of dielectric constants on radar returns?
2. Explain the surface and volume scattering phenomena in vegetation.

9.7 REFERENCES
1. Dobson, M. C., Ulaby, F. T., LeToan, T., Beaudoin, A., Kasishke, E. S., & Christensen, N.
(1992). Dependence of Radar Backscatter on Coniferous Forest Biomass. IEEE
Transactions on Geoscience and Remote Sensing, 30(2), 412-415.
2. Ferro-Famil, L., & Pottier, E. (2016). Synthetic Aperture Radar Imaging. In N. Baghdadi, &
M. Zribi, Microwave Remote Sensing of Land Surface (pp. 1-65). Elsevier.
3. Jensen, J. R. (2014). Remote Sensing of the Environment- An Earth Resource Perspective.
Upper Saddle River, NJ : Pearson Education.
4. Kasischke, E. S., & Bourgeau-Chavez, L. L. (1997). Monitoring south Florida wetlands using
ERS-1 SAR imagery. Photogrammetric Engineering & Remote Sensing, 63, 281-291.
5. Lillesand, T. M., Kiefer, R. W., & Chipman, J. W. (2015). Remote Sensing and Image
Interpretation (Seventh ed.). New York: John Wiley & Sons.
6. Mikhail, E. M., Bethel, J. S., & McGlone, J. C. (2001). Introduction to Modern
Photogrammetry. New York: John Wiley.
7. Moore, R. K. (1983). Imaging Radar Systems. In R. Colwell, Manual of Remote Sensing (pp.
429-474). Bethesda: ASP&RS.


8. Tempfli, K., Kerle, N., Huurneman, G. C., & Janssen, L. F. (2009). Principles of Remote
Sensing. The Netherlands: ITC.
9. Ulaby, F. T., & Long, D. G. (2014). Microwave Radar and Radiometric Remote Sensing.
USA: University of Michigan Press.

9.8 TERMINAL QUESTIONS


1. How do the surface roughness characteristics of the terrain govern the interactions with
microwave energy?

2. The radar wavelength of 5.8 cm interacts with the terrain at a depression angle of 45°. Calculate the
local terrain heights for rough, intermediate, and smooth surfaces.

3. How does the penetration depth affect the interactions with earth features?
4. What are the impacts of radar polarization on interactions with earth features?


UNIT 10 - GEOMETRICAL CHARACTERISTICS OF MICROWAVE IMAGE

10.1 OBJECTIVES
10.2 INTRODUCTION
10.3 GEOMETRICAL CHARACTERISTICS OF MICROWAVE
IMAGE
10.4 SUMMARY
10.5 GLOSSARY
10.6 ANSWER TO CHECK YOUR PROGRESS
10.7 REFERENCES
10.8 TERMINAL QUESTIONS


10.1 OBJECTIVES
After reading this unit learners will be able to understand:
 the geometrical characteristics/distortions of the radar image acquisitions:
 slant-range scale distortion, foreshortening, layover, shadows, and parallax.
 other characteristics that affect the appearance of radar images, such as radar image
speckle, range brightness variation, motion errors, and moving target errors.

10.2 INTRODUCTION
As a consequence of their side-looking geometry, the real and synthetic aperture generate
images having a lot of unusual image features that are readily visible on observing radar
images. A thorough understanding of these features is a must for effective planning of data
acquisition and correct image interpretation of radar images. Almost all the radar imageries
contain geometrical distortions. When the terrain is flat, these distortions are easy to correct,
but buildings, trees, and mountains result in relief displacement withinradar images. The
relief displacement in radar images is different from the relief displacement occurring in
optical images. In radar images, the relief displacement is reversed with targets being
displaced towards, instead of away from the sensor, as in optical images. There are two types
of elevation-induced distortions prevalent on radar imageries, i.e., Foreshortening and
Layover.Both foreshortening and layover result in another distortion called radar shadows.
Radar images are also affected by parallax when a terrain object is imaged from two different
flight lines. The differential reliefdisplacements result in image parallax,making it
challenging to create the stereo radar data. Radar Speckles are the random bright and dark
areas ina radar imagethat obscure the visual image interpretation. Radar image range
brightness variation is a result of a systematic gradient in imagebrightness across the image in
the range direction, which is more prominent in airborne radars than in spaceborne radars.
Side Looking Aperture Radar images are generated by exhibiting the returned power versus
time (or range) for one pulse in onedimension and “assembling” the range lines in the
otherdimension to create an image. Any Nonideal radar motion andpointing capabilities can
generate distortions in the resulting image.Moving objects in the scene, such as cars or ships
orsurface waves on water, results in an extra Doppler component (orphase component) which
is proportional to the relative velocity between the instrumentand the target, ultimately
resulting in the displacement of the object in azimuth direction in the image.


10.3 GEOMETRICAL CHARACTERISTICS OF MICROWAVE IMAGE

Slant-Range Scale Distortion:

There are two geometric formats for recording radar images, i.e., the slant range format and the ground range format. In the slant range format, pixel spacing in the range direction is directly proportional to the time interval between the transmitted pulse and its returning echo. This time interval is proportional to the slant range distance from the sensor to the imaged object rather than to the horizontal ground distance between the nadir line and the object. This results in image compression in the near range and expansion in the far range. On the contrary, in the ground range format the image pixel spacing is directly proportional to the distances of the objects on the (theoretically flat) ground. The characteristics of the slant range and ground range image formats are illustrated in Figure 10.1.

Figure 10.1: Comparison of slant range and ground range format


The figure represents three objects A, B, and C, which are of equal size and are equally separated in the near, middle, and far range. GR_A, GR_B, and GR_C are the respective ground ranges to objects A, B, and C. The slant range image shows unequal distances and unequal widths for the features because of the difference in signal return time. This results in a variable image scale, which is maximum at far range and minimum at near range. The widths of objects A, B, and C appear in increasing order as A1 < B1 < C1, and the distances appear as AB < BC on the slant range presentation. After applying corrections, the object widths are equal (A = B = C) and the distances satisfy AB = BC on the ground range image. As the flying height increases, the change in scale across the image decreases for a given swath width.


This phenomenon is more pronounced in airborne systems than in satellite systems. The scale distortion present in slant range imagery precludes its direct use in planimetric mapping. For flat terrain, the approximate ground range GR can be calculated from the flying height H' and the slant range SR using the following equation:

$SR^2 = H'^2 + GR^2$          (Equation 1)

$GR = \sqrt{SR^2 - H'^2}$

The approximate ground range GR can also be calculated from the flying height H' and the depression angle γ using the following equation:

$GR = H' \sqrt{\frac{1}{\sin^2\gamma} - 1}$          (Equation 2)
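These two relations can be evaluated directly. The short Python sketch below applies Equations 1 and 2; the flying height, slant range, and depression angle values are hypothetical and chosen only for illustration.

```python
import math

def ground_range_from_slant(slant_range, flying_height):
    """Approximate ground range over flat terrain: GR = sqrt(SR^2 - H'^2) (Equation 1)."""
    return math.sqrt(slant_range**2 - flying_height**2)

def ground_range_from_depression(flying_height, depression_angle_deg):
    """Approximate ground range from flying height and depression angle:
    GR = H' * sqrt(1 / sin^2(gamma) - 1) (Equation 2)."""
    gamma = math.radians(depression_angle_deg)
    return flying_height * math.sqrt(1.0 / math.sin(gamma) ** 2 - 1.0)

# Hypothetical geometry: 6 km flying height, 13 km slant range (depression angle ~27.5 deg)
print(ground_range_from_slant(13_000, 6_000))       # about 11.5 km of ground range
print(ground_range_from_depression(6_000, 27.5))    # consistent result, about 11.5 km
```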

Apart from the flat terrain assumption used in the previous equations, the flight parameters also affect the range and azimuth scales. Variations in aircraft altitude affect the range scale, while the synchronization between the image recording system and the aircraft ground speed controls the azimuth scale. During radar image collection and recording it is not easy to maintain a consistent scale throughout the tasking. The speed of light determines the scale in the range direction, while the aircraft/spacecraft speed determines the scale in the azimuth direction. These scale variations are reconciled by controlling the data collection parameters, for which a global positioning system (GPS) and an inertial navigation system (INS) are used on the aircraft/spacecraft. The GPS is used to guide the aircraft along a flight line and maintain a fixed flying height above the datum. The INS consists of angular sensors used to measure roll, pitch, and yaw, i.e., the rotations of the aircraft about the x, y, and z axes. An inertial system also controls the synchronization between the aircraft speed and the data recording. Spaceborne systems act as steadier flight platforms than airborne systems.

Relief Displacement:

Layover and Foreshortening:

Relief displacement is the shift or displacement in the photographic position of an image caused by the relief of the object, i.e., its elevation above or below a selected datum. This relief displacement is one-dimensional and is perpendicular to the flight line. In radar images, however, relief displacement is reversed compared to optical images because distances are measured in the range direction. In radar images, the top of


a feature interacts with the incident radar pulses before the base when they encounter a
vertical feature. Subsequently, return signals from the top of the elevated feature reach the
antenna before the signals from the base of the feature. The top of the feature is therefore displaced towards the sensor relative to its base and appears to lean towards the nadir; this is the layover effect. The layover
effect is more severe in the near range, where the incident angles are steeper. A comparison
between the layover effect on an aerial photograph versus a radar image is shown in Figure
10.2.

Figure 10.2: Relief displacement on an aerial photograph and radar imagery

The layover effect is extreme in the near range, especially for terrain slopes facing the antenna. This effect is prominent when the terrain slopes are so steep that the top of the slope is imaged ahead of the base. As seen in Figure 10.3, pyramid 1 is located in the near range of the image, and the radar returns arrive at a very steep incidence angle. The returns from the top of pyramid 1 are received before those from its base (Figure 10.4); as a result, the layover effect appears in the image, and its severity is greatest in the near range (Figure 10.5).


Figure 10.3: Relationship between range and incident angle for relief displacement

Figure 10.4: Relationship between terrain slope and incident wavefronts on relief
displacement

Figure 10.5: Resulting Image due to relief displacement


From Figure 10.4, it can be noted that pyramid 4 has a flatter slope facing the radar wavefront (the incidence angle is less steep), so the base of the pyramid is recorded before the


top. However, the objects are not recorded in their actual sizes, and a slight compression is
observed for the slopes facing towards the radar antenna (Figure 10.5- Pyramid 4).

There is one more type of relief displacement visible in the radar images known as
foreshortening. All the terrain features whose slope faces the radar antenna appear
compressed or foreshortened compared to backslope or slopes facing away from the radar
antenna. Foreshortening is less severe in the far range and becomes extreme in the near range
to an extent where the top and base of the features are imaged simultaneously. On further
moving towards the nearer range, the incidence angle becomes steeper, and foreshortening
changes into layover (Figure 10.5).

Figure 10.6: Radar Foreshortening


The foreshortening can be approximated by the following equation:

$F_s = \sin(\theta - \alpha)$          (Equation 3)

where θ is the incidence angle and α is the slope angle (Figure 10.6). The slope angle is positive (α⁺) for the slope facing the antenna and negative (α⁻) for the slope facing away from it. In Figure 10.6 the mountain's height is low, and the distances AB' and B'C are equal in ground range. When the radar pulse interacts with the mountain, it first hits base A, which is imaged as 'a'; the top of the mountain B interacts later and is imaged as 'b', and base C is imaged as 'c'. Foreshortening is affected by the height of the object: the higher the object, the greater the foreshortening. The slope facing the antenna will appear brighter, while the slope facing away from the antenna will appear darker. The


foreshortened radar images are challenging to interpret; even if the object is not tall, a planimetric displacement appears on the radar image, i.e., 'ab' is smaller than 'bc'. Foreshortening also intensifies with an increase in depression angle or, equivalently, a decrease in incidence angle.
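Equation 3 can be evaluated for foreslopes and backslopes with a few lines of Python; the incidence and slope angles below are hypothetical values chosen only to illustrate the behaviour.

```python
import math

def foreshortening_factor(incidence_angle_deg, slope_angle_deg):
    """Approximate foreshortening factor Fs = sin(theta - alpha) (Equation 3).
    The slope angle is positive for a foreslope (facing the antenna)
    and negative for a backslope (facing away from it)."""
    return math.sin(math.radians(incidence_angle_deg - slope_angle_deg))

theta = 35.0                                   # hypothetical incidence angle (degrees)
print(foreshortening_factor(theta, 20.0))      # foreslope: small factor -> strong compression
print(foreshortening_factor(theta, 0.0))       # flat terrain: sin(35 deg)
print(foreshortening_factor(theta, -20.0))     # backslope: larger factor -> little compression
```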

Radar Shadows:

Radar shadows are another type of geometric distortion found in radar images. Steep terrain prevents parts of the imaged area from being illuminated by the radar energy, and these unilluminated areas appear as radar shadows. Shadows in optical images are entirely different from radar shadows: in the optical case the shadow areas are still weakly illuminated, while radar shadows are entirely black and correspond to areas that return no information to the antenna. Radar shadows correspond to areas lying on the backslope from which no echoes are returned; they can be considered radar silence areas, i.e., regions from which no measurable signal is received. The topographic relief and the flight direction relative to the topography control the formation of radar shadows. Radar shadows are a function of the depression angle, and therefore shadows are most severe in the far range, where the depression angle is smallest.

Figure 10.7: Radar shadows


The backslope portion of a terrain feature is in radar shadow when its backslope angle (α⁻) is steeper than the depression angle (γ), i.e., α⁻ > γ (Figure 10.7). The backslope is fully illuminated when the backslope angle is less than the depression angle, i.e., α⁻ < γ (Figure 10.7). A third case, called grazing illumination, occurs when the backslope angle equals the depression angle, i.e., α⁻ = γ (Figure 10.7). Looking at Figure 10.7, it can be seen that the terrain designated by BCD is in complete shadow in the slant range image display bd. Further, the distance BC on the ground is shorter than bd (the slant range image distance). Radar shadows have the following characteristics:

 Radar shadows are a cross-track phenomenon, and their orientation provides information about the look direction and the position in near and far range.
 Terrain objects of identical height are recorded with different shadows depending on their cross-track position; a feature casting extensive shadows in the far range can have its backslope illuminated in the near range.
 Radar shadows not only obscure the scene but are also helpful in extracting geomorphic characteristics and enhancing lineaments.

Radar shadowing will be longer at greater range distances because of the flatter incidence angle. The reader must understand that there is a trade-off between radar shadows and relief displacement: a steeper incidence angle results in intense foreshortening and layover but few shadows, while images with flatter incidence angles have less relief displacement but more of the image is obscured by shadows.
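The three backslope illumination conditions described above (shadow, grazing illumination, and full illumination) can be checked with a minimal sketch; the angles used here are hypothetical.

```python
def backslope_illumination(backslope_angle_deg, depression_angle_deg):
    """Classify a backslope: radar shadow if the backslope angle exceeds the depression
    angle, grazing illumination if they are equal, fully illuminated otherwise."""
    if backslope_angle_deg > depression_angle_deg:
        return "radar shadow"
    if backslope_angle_deg == depression_angle_deg:
        return "grazing illumination"
    return "fully illuminated"

# A 25-degree backslope seen at far-range, grazing, and near-range depression angles
print(backslope_illumination(25, 15))   # far range (small depression angle) -> radar shadow
print(backslope_illumination(25, 25))   # grazing illumination
print(backslope_illumination(25, 40))   # near range (large depression angle) -> fully illuminated
```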

Radar Parallax:

The apparent shift in an object's position due to an actual change in the viewpoint of observation is known as parallax. A change in viewpoint away from the radar's nominal flight path results in radar parallax. Stereo pairs of radar images are created using radar parallax, which arises when an object is viewed from two different flight lines, resulting in differential relief displacements (Figure 10.8a). Establishing stereoscopic vision is difficult using a pair of radar images acquired from an opposite-side configuration, because the radar side lighting is reversed between the two images. Therefore, radar stereo pairs are usually acquired by capturing an object from two flight lines at the same altitude using a same-side configuration, as shown in Figure 10.8b. In this case the illumination direction and side-lighting effects are uniform on both images of the stereo pair. Variation in look


angle and flying height is also used within the same-side configuration for capturing stereo pairs.

(a) (b)

Figure 10.8: Radar Parallax generation (a)opposite side configuration (b) same side
configuration
Radar parallax is also used for making image measurements such as object heights. The parallax is calculated from mutual image displacement measurements on the radar images involved in the formation of the stereo model. This type of measurement on radar stereo pairs comes under radargrammetry. Radargrammetry works with the amplitude component of the radar images, in which two images are captured from the same side of the object but from different flight lines by varying the incidence angle.

Other Radar Image Characteristics:

Radar Speckle:

A random pattern of brighter and darker pixels, called speckle, is seen on all radar images. The radar pulses are coherent and oscillate in phase with each other. The pulses incident on a particular ground resolution cell, and backscattered from the many scatterers within that cell, travel slightly different distances from the radar antenna to the terrain and back. There is therefore a variation in the phases of the waves returning from a single pixel, and they may be in phase or out of phase. Constructive interference amplifies the intensity of the combined signal when the contributions are in phase, while destructive interference reduces the intensity when they are out of phase and cancel each other. These constructive and destructive interferences generate a random pattern of brighter and darker pixels in a radar image, which gives the image a grainy appearance. Radar speckle is also known as the salt and


pepper effect, which is a type of noise that deteriorates image quality and makes visual and
digital image interpretation more difficult.

Radar speckle can be understood more clearly from Figure 10.9. Let us assume an image with a grid of 24 pixels containing a linear darker feature, with the remaining cells in lighter tones. Radar speckle introduces pseudo-random noise into the resultant image, as shown in Figure 10.9(b).

Figure 10.9: Radar Speckle formation


Although referred to as random noise, radar speckle is the result of deterministic variations in the backscattered signal arising from the radar illumination and the sub-pixel geometry of the ground resolution cell. Therefore, a highly correlated speckle pattern can be observed if two radar images are acquired from the same position, with the same wavelength and polarization, under similar ground conditions. This phenomenon forms the basis for radar interferometry, which is used in the generation of digital elevation models. Image processing techniques such as spatial filtering and averaging of neighbouring pixels can reduce the speckle to some extent but cannot eliminate it. In the speckle spatial filtering technique, a moving window or kernel (3 x 3, 5 x 5, 7 x 7) is passed over the original image, and a mathematical operation such as average, median, minimum, or maximum is applied to the pixels falling under this convolution window. The centre cell of the original image under the moving window is replaced by the new value, and the same process is repeated for the entire image, row by row (Figure 10.10).
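As a minimal sketch of this moving-window idea, the following Python snippet applies 3 x 3 median and 5 x 5 mean filters with SciPy; the synthetic Rayleigh-distributed array simply stands in for a speckled amplitude image, which in practice would be read from a SAR product.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

# Stand-in for a speckled single-band SAR amplitude image (values are synthetic)
image = np.random.rayleigh(scale=1.0, size=(512, 512)).astype(np.float32)

# Each output pixel becomes the median (or mean) of the original pixels that fall
# under the moving window centred on it, which suppresses isolated speckle spikes.
smoothed_median = median_filter(image, size=3)   # 3 x 3 window
smoothed_mean = uniform_filter(image, size=5)    # 5 x 5 window
```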


Figure 10.10: Spatial filtering for reducing speckle noise


There is one more technique for reducing the radar speckle known as multi look processing.
In this procedure, the single radar beam is divided into several narrower sub-beams (Figure
10.11). These individual sub-beams illuminate the scene and provide an independent look.
These looks will also be subjected to speckle but the sum and average of speckle of these
independent looks will reduce the speckle in the final image. The speckle is inversely
proportional to the square root of the number of looks, while the size of the ground resolution cell of the output image is directly proportional to the number of looks; the resolution cell of a 4-look image is therefore 4 times as large as that of a one-look image, i.e., the spatial resolution is 4 times coarser.
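The trade-off can be tabulated with a few lines of Python; the single-look speckle level and the 5 m single-look resolution used below are hypothetical reference figures, not values for any particular sensor.

```python
import math

def multilook_tradeoff(n_looks, speckle_std_1look=1.0, resolution_1look_m=5.0):
    """Relative speckle falls as 1/sqrt(N) while the resolution cell grows by a factor of N."""
    return speckle_std_1look / math.sqrt(n_looks), resolution_1look_m * n_looks

for n in (1, 2, 4, 8):
    speckle, resolution = multilook_tradeoff(n)
    print(f"{n}-look: relative speckle {speckle:.2f}, resolution cell {resolution:.0f} m")
```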

Figure 10.11: Multilook processing for speckle reduction


Both spatial filtering and multi look processing produce a smoothing effect on the image and
reduce the speckle, but this happens at the expense of resolution. So, there is a trade-off
between the desired speckle reduction and the amount of spatial detail required. Multi-
looking and spatial filtering should be avoided if high resolution is required, but speckle
reduction may be implemented if the requirements are broad interpretations and mapping.

Radar Image Range Brightness Variation:

A systematic gradient is observed in the brightness of synthetic aperture radar images, especially in the range direction, known as range brightness variation. Two geometric factors, the ground resolution cell size and the local incidence angle, control this brightness variation. The strength of the backscattered signal decreases from near range to far range because the ground resolution cell size decreases as one moves from near to far range. Similarly, backscattering generally falls off with increasing local incidence angle, which in turn increases with distance in the range direction. Therefore, radar images become increasingly darker with increasing range. The range brightness variation effect is more prominent for airborne platforms than for spaceborne platforms, since an airborne platform has a lower flying height and a wider spread of look angles across the swath. Various mathematical models are used to correct the range brightness variation, but they can compensate only for the variation arising from the reduction of the ground resolution cell size, not for that due to the increasing local incidence angle.
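One simple and commonly used correction of this kind is an area normalization in which each range column is scaled by the sine of its local incidence angle (in the style of a beta-nought to sigma-nought conversion). The sketch below is only illustrative: the image and incidence-angle values are hypothetical, and a real calibration would also account for the antenna pattern and sensor-specific factors.

```python
import numpy as np

def normalize_range_brightness(image, incidence_angle_deg):
    """Scale each range column by sin(local incidence angle) to compensate for the
    changing ground-resolution-cell area across the swath (illustrative only)."""
    theta = np.radians(np.asarray(incidence_angle_deg, dtype=np.float32))
    return image * np.sin(theta)[np.newaxis, :]   # one angle per range column

# Hypothetical scene: 1000 azimuth lines x 800 range columns, 20-45 deg across the swath
img = np.ones((1000, 800), dtype=np.float32)
angles = np.linspace(20.0, 45.0, 800)
corrected = normalize_range_brightness(img, angles)
```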

Motion errors:

Radar images are generated by plotting the power of the returning echoes against time for a single pulse in one dimension; the range lines are then stacked in the other dimension to create an image. Any deviation of the platform from the ideal motion results in geometric distortions and pointing errors in the resulting image. The factors responsible for these non-ideal motions are speed variation, vertical or lateral deviations, and the roll, pitch, and yaw motions of the platform. A nonlinear image stretching or compression is observed in the azimuth direction when the platform speed and the pulse repetition frequency are not synchronized. Sometimes linear objects imaged near the flight line appear curved or sinuous, because curvilinear distortion results from deviation from the straight flight path (Figure 10.12). Roll, pitch, and yaw are the platform's rotations about the x, y, and z axes (Figure 10.13). These effects are more prominent for airborne platforms than for spaceborne platforms, since the latter are less affected by atmospheric drag and winds.


Figure 10.12: Errors due to uncompensated aircraft motion

The roll motion is a rotation about the x-axis; in radar images it changes the antenna gain at different points in the image, resulting in modulated grey levels. The pitch motion is a rotation about the y-axis; pitching causes the beam to intersect the ground either ahead of or behind the position directly to the side of the point beneath the aircraft. The yaw motion is a rotation about the z-axis; its effect depends on the displacement from the flight line and distorts the directions of different points relative to one another. Excess yaw can completely distort the image.

Figure 10.13: Roll, Pitch and Yaw rotation of the platform


These motion errors can be compensated for using inertial navigation systems (INS), which record the platform's speed, roll, pitch, and yaw. Depending on the availability of INS data, frequency shifting during processing, adjustment of pixel position and timing, and rectification of the image after its formation can be applied to compensate for the motion errors.

Moving Targets:

An assumption implicitly embedded in all the discussion so far is that the target remains stationary during image acquisition. However, moving objects such as cars and ships introduce an additional Doppler component proportional to the relative velocity between the object and the radar system. Hence, an object in motion is displaced in the azimuth direction of the radar image in proportion to its relative velocity. This is why moving cars appear to be travelling beside the road rather than on it. The velocities on ocean surfaces are proportional to wave heights, and the water follows a circular movement in the direction of wave motion. Water moving towards the radar instrument sends echoes that are Doppler-shifted to higher frequencies, and these areas are imaged by the SAR farther along the azimuth direction; water moving away from the system is imaged backwards. The appearance of waves in the range direction remains unaffected, while in the azimuth direction the waves get bunched up. This wave bunching depends on the incidence angle and on the relative wave and platform velocities.
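The classic approximation for this azimuth shift is Δx ≈ R · v_r / V, where R is the slant range, v_r the target's velocity along the radar line of sight, and V the platform velocity. The numbers below are hypothetical and only illustrate the order of magnitude of the effect.

```python
def azimuth_displacement(slant_range_m, radial_velocity_ms, platform_velocity_ms):
    """Approximate azimuth shift of a moving target: dx = R * v_radial / V_platform."""
    return slant_range_m * radial_velocity_ms / platform_velocity_ms

# A car moving at 20 m/s along the line of sight, imaged by a spaceborne SAR
# at roughly 850 km slant range with a platform velocity of about 7500 m/s:
print(azimuth_displacement(850_000, 20.0, 7_500))   # about 2.3 km of azimuth displacement
```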

10.4 SUMMARY

This unit gives an insight into the geometric distortions associated with a radar image. The different types of geometric distortions studied in this unit are relief displacement (foreshortening and layover), slant range scale distortion, radar shadows, and radar parallax. The various factors affecting these distortions are the incidence angle, depression angle, distance from the flight line (near/far range), and flying height. Apart from these commonly occurring geometric distortions, a few other effects influencing the appearance of the images are discussed in detail, such as radar speckle, range brightness variation, motion errors, and moving objects. The remedial measures to compensate for these errors are also discussed.

10.5 GLOSSARY

Acronym Description
GPS Global Positioning System
GR Ground range


INS Inertial Navigation System


SR Slant range
Terms and Descriptions

Coherence: Coherence is the fixed relationship between waves in a beam of electromagnetic (EM) radiation. Two wave trains of EM radiation are coherent when they are in phase, that is, they vibrate in unison. In applications such as radar, the term coherence is also used to describe systems that preserve the phase of the received signal.

In-Phase (I): The component of the signal that has the same phase as the complex reference frequency. In-phase is represented by the constant I.

Interferometer: Device, such as an imaging radar, that uses two different paths for imaging and deduces information from the coherent interference between the two signals. In SAR applications, spatial interferometry has been demonstrated to measure terrain height, and time-delay interferometry is used to measure movement in the scene, such as oceanic currents.

Looks: Radar terminology refers to individual looks as groups of signal samples in a SAR processor that splits the full synthetic aperture into several sub-apertures, each representing an independent look of the identical scene. The resulting image formed by incoherent summing of these looks is characterised by reduced speckle and degraded spatial resolution. The SAR signal processor can use the full synthetic aperture and the complete signal data history in order to produce the highest possible resolution, albeit very speckled, single-look complex (SLC) SAR image product. Multiple looks may be generated by averaging over range and/or azimuth resolution cells. For an improvement in radiometric resolution using multiple looks there is an associated degradation in spatial resolution. Note that there is a difference between the number of looks physically implemented in a processor and the effective number of looks as determined by the statistics of the image data.

Motion Compensation: Adjustment of a sensing system and/or the recorded data to remove effects of platform motion, including rotation and translation, and variations in along-track velocity. Motion compensation is essential for aircraft SARs, but usually is not needed for spacecraft SARs.

Radar Beam: The vertical fan-shaped beam of electromagnetic energy produced by the radar transmitter.

Radar Parallax: Apparent change in the position of an object due to an actual change in the point of view of observation. For a SAR, true parallax occurs only with viewpoint changes that are away from the nominal flight path of the radar. In contrast to aerial photography, parallax cannot be created by forward- and aft-looking exposures. Parallax may be used to create stereo viewing of radar images.

Multi-Look Imagery: Resulting image when independent images of the same area are averaged to create a single multi-look image. Such an image has a lower resolution but the speckle has been reduced.

10.6 ANSWER TO CHECK YOUR PROGRESS


1. Which geometrical characteristics control the radargrammetry and how?
2. What is radar speckle, and how to correct them?


3. How do the moving platforms introduce the geometrical distortions in radar images?
10.7 REFERENCES

1. Campbell, J. B., & Wynne, R. H. (2011). Introduction to Remote Sensing. New York: The
Guilford Press.
2. Lillesand, T. M., Kiefer, R. W., & Chipman, J. W. (2015). Remote Sensing and Image
Interpretation (Seventh ed.). New York: John Wiley & Sons.
3. Moore, R. K. (1983). Imaging Radar Systems. In R. Colwell, Manual of Remote Sensing
(pp. 429-474). Bethesda: ASP&RS.
4. Tempfli, K., Kerle, N., Huurneman, G. C., & Janssen, L. F. (2009). Principles of Remote
Sensing. The Netherlands: ITC.
5. Ulaby, F. T., & Long, D. G. (2014). Microwave Radar and Radiometric Remote Sensing.
USA: University of Michigan Press.
6. Woodhouse, I. H. (2006). Introduction to Microwave Remote Sensing. Boca Raton, FL:
Taylor & Francis.

10.8 TERMINAL QUESTION

1. What is the difference between slant range and ground range? How can one change
slant range distance to ground range distance?
2. Explain relief displacement in a radar image and how it differs from that in optical images.
3. What factors control the foreshortening and layover in radar images?


UNIT 11 - INTERPRETING SAR IMAGES

11.1 OBJECTIVES
11.2 INTRODUCTION
11.3 INTERPRETING SAR IMAGES
11.4 SUMMARY
11.5 GLOSSARY
11.6 ANSWER TO CHECK YOUR PROGRESS
11.7 REFERENCES
11.8 TERMINAL QUESTIONS


11.1 OBJECTIVES
After reading this unit you will be able to understand:

 About SAR images.


 the response of radar energy to different land use / land cover categories.

11.2 INTRODUCTION
The characteristics of radar images are fundamentally different from those of optical images. These characteristics result from the imaging radar technique and relate to speckle, texture, and geometry. Visual and digital analysis of radar images is complicated because the radar visualizes the scene differently from the human eye or an optical sensor. The microwave backscattering properties of the terrain correspond to the grey levels in radar images. The backscattering intensity of the radar signal depends on several terrain, topographic, and geometric parameters such as surface roughness, dielectric constant, and local slope. On the other hand, an optical sensor functioning in the visible/infrared region records the target response in terms of its colour, chemical composition, and temperature. The following section describes various image interpretation keys such as tone, texture, pattern, shadow, and shape and dimension that are useful in synthetic aperture radar interpretation.

11.3 INTERPRETING SAR IMAGES


The SAR image interpretation can be understood by studying the different image interpretation
keys as given below:

Tone:
The tone on a SAR magnitude image represents the energy backscattered towards the radar
antenna after interaction with the earth surface. A single band radar image is represented as
monochromatic and has the tone varying from dark to bright. The digital values' quantification
may be done as per any defined framework, but the digital values represent power received at the
sensor. The visualization of a radar image can be established by using a relative group of tones.
Generally, tones are described by a minimum of two categories, i.e., dark and bright; however, depending on the needs and the detail required, additional levels of tone such as very dark, dark, intermediate, bright, and very bright can be used. Commonly, the dark, intermediate, and bright tones are used to describe extensive portions of the image, while the very dark and very bright tones are used for intermittently occurring features. The areas with coarser


image texture are difficult to interpret because the assessment of predominant tone is required,
and on the other hand, areas with little to no variation in tones are easy to interpret. The tone is
an essential image interpretation key element as it provides information regarding the surface
and spatial distribution of objects.
The backscattering mechanism, such as specular, diffuse, or corner reflection, primarily controls the image tone. Specular backscattering results from smooth surfaces, which redirect the incident radar energy away from the antenna, creating darker tones. Diffuse backscattering depolarizes the incident radar energy and directs part of it back to the antenna, creating a brighter tone as per the modified Rayleigh roughness criterion. The intermediate tones between dark and bright result from volume scattering and intermediately rough surfaces. Corner reflectors direct the incident energy back towards the antenna after at least two specular bounces, from a horizontal surface and a perpendicular façade, creating a very bright tone. A varied tonal distribution over Sentinel-1 (C band) data with VH polarization is shown in Figure 11.1, where darker tones represent water bodies (in the river or in fields), while brighter tones correspond to urban features creating corner reflections. The agricultural fields show uniform intermediate grey tones.
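Because the received power spans several orders of magnitude, SAR backscatter is commonly converted to decibels before tones are assessed on screen. The snippet below is a generic sketch of that conversion; the backscatter values are hypothetical and not taken from the image in Figure 11.1.

```python
import numpy as np

def to_decibels(backscatter_linear, floor=1e-6):
    """Convert linear backscatter (power) values to decibels for display; the floor
    avoids taking the logarithm of zero for no-return pixels."""
    return 10.0 * np.log10(np.maximum(backscatter_linear, floor))

backscatter = np.array([[0.001, 0.02],
                        [0.30, 1.50]])   # hypothetical linear power values
print(to_decibels(backscatter))          # roughly -30, -17, -5 and +1.8 dB
```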

Figure 11.1: Tone distribution on VH polarized C band Sentinel 1 Image over Ganhe
District China

Texture:
The spatial variations of the tones define the image texture. The spatial frequency, similarities,
and contrast of the tones describe this interpretation element. Low-frequency tone changes
correspond to the fine texture, while the high-frequency tone changes correspond to the coarse
texture. It is to be kept in mind that image texture and surface texture are entirely different


concepts: while the former refers to the tonal variations observed during image interpretation, the latter relates to the ground surface interacting with the incident radar energy. A fine texture indicates a similar kind of backscattering behaviour, either specular or diffuse; in other words, it represents likeness of tones. On the contrary, variation of tones between neighbouring pixels results in coarse texture. Agricultural fields and standing calm water have brighter and darker tones, respectively, yet both show fine texture because, over a given area, they consistently backscatter the radar energy in a diffuse or a specular manner. The radar beam's interaction with a patchy surface, or with objects comparable in size to the image spatial resolution, creates intermediate textures. Areas such as closed forests, extensive bitumen surfaces, and grass lawns exhibit fine texture, while open forests and patchy residential areas appear as coarse textures (Figure 11.2). Texture proves beneficial when distinguishing objects is challenging because of coarser spatial resolution. Specular backscattering results in fine texture; therefore, an increase in spatial resolution does not enhance fine textures. However, an increase in spatial resolution can degrade the appearance of coarse texture arising from diffuse backscattering. Synthetic aperture radar imagery with prevalent foreshortening, radar shadows, and volume backscattering effects shows intermediate to coarser textures.


Figure 11.2: Texture distribution on VH polarized C band Sentinel 1 Image over Panchkula, Haryana
Pattern:

The repetitive arrangement of tonal variation expresses pattern in synthetic aperture radar imagery. Pattern applies to objects that can be visually discriminated from each other and recognized. Radar image patterns are described as dotted, mottled, gridded, patched, striped, or tiled. Like texture, the appearance of pattern is governed by spatial resolution; for example, on high-resolution imagery, regularly spaced trees appear as dots. Pattern assessment is easier where the image tones contrast strongly. The spatial arrangement of all objects present side-by-side in the imagery should be considered when evaluating whether a feature presents a regular or irregular pattern. The spatial distribution of a pattern can be defined as regular, clustered, or dispersed, and it can be assessed both visually and quantitatively. Figure 11.3 shows the unique circular pattern of the floating phumdis seen in Loktak Lake, Manipur.


Figure 11.3: Pattern distribution on VH polarized C band Sentinel 1 Image over Loktak
Lake, Manipur
Shadows:

The angular relationship between the incident radar beam and the terrain slope facing away from it results in shadows. Radar shadows are formed when the depression angle is smaller than the terrain slope. Shadows are significantly useful in interpreting the terrain with respect to overlying objects such as buildings, trees, ridges, and valleys along the range direction; they additionally provide information about the heights of objects. However, one must keep in mind that shadows increase from near to far range, since the incidence angle increases proportionately in that direction. Shadows hamper visual information gathering, but they are an essential interpretation element, and therefore the formation of shadows in radar images is controlled by appropriately choosing the acquisition parameters. Large incidence angles enhance shadow occurrence, while small angles minimize it. Radar images captured with a significant amount of shadow contain information gaps that can be filled by acquiring data with alternate look angles. Figure 11.4 shows the appearance of shadows on a Sentinel-1 VH polarized image; the slopes facing the radar beam have brighter returns, while those facing away from the beam have darker returns.

Figure 11.4: Shadow distribution on VH polarized C band Sentinel 1 Image over Srinagar,
Jammu and Kashmir


Shapes and Sizes:


A homogeneous area having uniform tone and texture represents a shape. The minimum dimensions of such areas may vary and are known as minimum mappable units. The typical geometrical configurations of these areas are squares, rectangles, circles, ovals, lines, or points. Sometimes the backscattering is so extreme that it takes the form of a star shape on radar images. Radar image measurements are, by default, represented in the slant range direction; therefore, the shapes and dimensions shown in such images are not proportionate to the actual ground measurements. Terrain objects elevated above their surroundings are either compressed or reversed on a radar image due to foreshortening and layover. The visual interpretation of steep slopes and high-rise buildings becomes obscure and difficult because of layover, foreshortening, and shadows. However, objects of smaller height, such as valleys, trees, and field fences, are enhanced by the effects of foreshortening and layover. Linear objects arranged perpendicular to the look direction of the radar beam are also enhanced.

Figure 11.5: The image is displayed in ground range, and any measurements made on the image are true to the ground


Now that we have learned about the various radar image interpretation elements, let us understand the response of microwave energy to different terrain objects.

Ocean response:

The study of oceans and seas has matured rapidly compared to other applications of active microwave remote sensing. This is because the backscattering from these water surfaces is primarily controlled by surface roughness, which is fairly uniform across large areas. As discussed earlier, the dielectric constant of water is very high, which restricts the penetration of even the longer wavelengths into the water; thus, the energy reflected at the surface is very high. The dielectric constant of water changes with salinity, but the change is too small to detect on radar imagery compared with changes in surface roughness. Local weather conditions most significantly affect the water surface. The wind's interaction with the water surface creates oscillations, resulting in short- and longer-wavelength waves on the surface called capillary and gravity waves. The wind vector is aligned perpendicular to the capillary and gravity waves, and their wavelength depends on wind speed. The motion of waves moving towards or away from the radar in the range direction is detected more easily on radar imagery than that of waves moving in the azimuth direction. The water surface is also roughened by the impact of rainfall, which can be seen on radar images. The wave spectrum of the ocean surface can be measured by satellite imaging radars operating in wave mode.

Figure 11.6: Presence of wave on ALOS PALSAR (L-Band) image (©JAXA/METI [2011].)


Sea Ice and Ice response:

Active microwave systems can discriminate sea ice from open water on the basis of variations in their dielectric constants and surface roughness. Apart from the dielectric constant and surface roughness, radar backscattering is also affected by other factors such as age, internal geometry, temperature, and snow cover. Ice thickness and ice type mapping are possible with the X and C bands (Figure 11.7). Ice extent mapping is easily achievable using the L band and coarse-resolution radar data; however, high-resolution data are used to prepare ice maps
for merchant ship navigation. The Radarsat satellite of Canada was the first dedicated radar
satellite that released ice maps for aiding commercial ships in navigation. The high-resolution
radar imageries also provided maps containing valuable information regarding the ice shelves'
edge in Greenland and Antarctica. These ice sheets break off and end up in the sea, creating
floating icebergs. This information on the ice sheets can help in addressing the response to global
climate change issues.

Figure 11.7: Glacier visualised on VH polarized Sentinel 1 data just Over Kedarnath region

Soil response:

We have already discussed earlier that the dielectric constant has an established relationship with the moisture available in the top layer of soil. The dielectric constant of water is many times that of dry soil, and the presence of even slight moisture in the topsoil can be detected using longer wavelengths (Figure 11.8). SAR sensors are used to prepare high-resolution soil moisture maps from the longer wavelength bands, which are used for precision agriculture. The medium-
resolution SAR sensors provide vital inputs for climate modelling, weather forecasting, and


hydrological studies. The sensitivity of radar signals to surface roughness makes it very difficult to develop robust soil moisture estimation models. Various studies have shown that the arid soil of deserts allows penetration of up to 2 metres when the L band is used, making it easier to identify subsurface geological features, and it has been observed that longer wavelengths can be useful in identifying paleochannels (ancient water channels). It should be kept in mind that the radar's capability to assess soil moisture depends on the amount of biomass present: if the biomass is less than about 1 Mg/ha, the radar can easily detect soil moisture (Wang et al., 1994), but as the biomass increases, the radar's ability to detect moisture decreases (Waring et al., 1995).

Figure 11.8: The waterbody, wetland and moist soil of Keoladeo National Park appear in darker
shades while the vegetated part is shown in brighter tone. (Source – Sentinel 1 C band VH
polarised)

Forest response:

The forests exert a local and global impact on the environment since they act as a carbon sink
and influence the climate. The mapping of vegetation dynamics, climatic modelling, and
conservation studies require accurate temporal mapping of various forest parameters such as
height, species, biomass, and structure. The studies related to global carbon budgeting and
carbon storage assessment require high-resolution forest biomass estimates. The imaging radars
have significantly proven their importance in forest studies because they have exploited the
correlation between radar backscattering and above-ground biomass. The difference in forest
types, weather conditions, and ground parameters influence the backscattering phenomenon;
hence, the correlation established at one place for a particular time is not always applicable to


another location. The sensitivity to standing biomass and the penetration into the forest canopy increase with the use of longer wavelengths.

Figure 11.9 : Brighter tone from dense forest due to volume scattering (Source: Sentinel 1 C band
VH polarised)

Urban response:

Urban features such as buildings, cars, and fences act as corner reflectors, backscattering much of the incident energy back to the antenna. The presence of these corner reflectors in urban areas results in bright-toned areas on radar images. The corner reflections are greatest for buildings that face the direction from which the radar signal originates. This is known as the cardinal effect, because the strength of urban-area reflections depends on alignment with respect to the compass (cardinal) directions: the largest returns arise when linear features are illuminated by the radar beam orthogonally to their orientation (Raney, 1998). Two urban areas constructed at the same time, of the same extent and the same material, can appear different on radar imagery depending on the difference in their orientation. Likewise, an urban area acquired on two dates with changed acquisition parameters, such as look direction, differs on the two radar images. Coarser-resolution radar sensors can be used for creating Level I land use maps, but they cannot be used for Level II and Level III classification; high-resolution sensors, however, can support such more detailed classification.


Figure 11.10: Appearance of cardinal effect in urban areas of Suncity, Arizona on ALOS PALSAR
HH polarised (© JAXA/METI 2010)

11.4 SUMMARY
In this unit, the traditional interpretation keys initially developed for aerial and satellite optical imagery are applied to synthetic aperture radar image interpretation. The different visual interpretation keys, such as tone, texture, shadow, pattern, and shape and dimension, are described for establishing evidence of different land use and land cover types. However, the interpretation keys established for optical data must be used wisely, as their meanings change when applied to images acquired by synthetic aperture systems, which exploit the longer wavelengths of the electromagnetic spectrum and slant range imaging. The technical concepts underlying the interpretation keys, together with the image examples, will help readers familiarize themselves with the interpretation of radar imagery. To wrap up this unit, a few generalizations can be drawn: brighter radar returns are received from rough surfaces, metal surfaces, urban areas, and high-moisture areas; diffuse reflectors result in weak to intermediate returns; and specular reflectors, such as undisturbed water surfaces, result in low returns and appear dark on the image.

11.5 GLOSSARY
Acronym Description
ALOS Advanced Land Observation Satellite
PALSAR Phased Array type L-band Synthetic Aperture Radar


JAXA Japan Aerospace Exploration Agency


METI Ministry of Economy, Trade and Industry

Terms and Descriptions

Cardinal Effect: A tendency of a radar to produce very strong echoes from a city street pattern or other linear feature oriented perpendicular to the radar beam.

Corner Reflector: A combination of two or more intersecting specular surfaces that combine to enhance the signal reflected in the direction of the radar. The strongest reflection is obtained for materials having a high conductivity (i.e., ships, bridges).

Texture - Radar: Texture is generally referred to as the detailed spatial pattern of variability of the average reflectivity (tone). Image texture is produced by an assembly of features that are too small to be identified individually. SAR image texture is composed of a convolution of speckle with scene texture. Texture is an important radar image interpretation element; its statistical characterisation requires measurement from a finite sampling window rather than estimates from a single picture element (pixel). Texture edge detection, or segmentation, is a critical method used for accurate radar image classification. Radar image texture is apparent at various different scale levels. Micro-scale texture is the result of more or less homogeneous, noise-like and random fluctuations of light and dark tone throughout the entire image. Meso-scale texture is produced by spatially and not randomly organised fluctuations of grey tone on the order of several resolution cells, within an otherwise homogeneous unit. Examples of descriptive terms to characterise texture include rough (coarse), smooth (fine), grainy, checkered, or speckled.

Tone: Tone refers to each distinguishable grey level from black to white. It is the first-order spatial average of image brightness, often defined for a region of nominally constant average reflectivity, and is proportional to the strength of the radar backscatter. Relatively smooth targets, like calm water, appear as dark tones. Diffuse targets, like some vegetation, appear as intermediate tones. Man-made targets (buildings, ships) may produce bright tones, depending on their shape, orientation and/or constituent materials. Tone can be interpreted using a computer-assisted density slicing technique.

11.6 ANSWER TO CHECK YOUR PROGRESS


1. How does the radar energy respond to soil and explain the factors affecting it?
2. How does the radar energy respond to snow and ice and explain the factors affecting it?


11.7 REFERENCES
1. Campbell, J. B., & Wynne, R. H. (2011). Introduction to Remote Sensing. New York: The
Guilford Press.
2. Lillesand, T. M., Kiefer, R. W., & Chipman, J. W. (2015). Remote Sensing and Image
Interpretation (Seventh ed.). New York: John Wiley & Sons.
3. Moore, R. K. (1983). Imaging Radar Systems. In R. Colwell, Manual of Remote Sensing (pp.
429-474). Bethesda: ASP&RS.
4. Raney, K. (1998). Radar Fundamentals: Technical Perspective. In F. Henderson, & A. Lewis,
Principles and Applications of Imaging Radar (pp. 42-43). NY: John Wiley.
5. Tempfli, K., Kerle, N., Huurneman, G. C., & Janssen, L. F. (2009). Principles of Remote
Sensing. The Netherlands: ITC.
6. Ulaby, F. T., & Long, D. G. (2014). Microwave Radar and Radiometric Remote Sensing.
USA: University of Michigan Press.
7. Wang, Y., Kasischke, E. S., Melack, J. M., Davis, F. W., & Christensen, N. L. (1994). The
Effects of Changes in Loblolly Pine Biomass and Soil Moisture on ERS-1 SAR
Backscatter. Remote Sensing of Environment, 49, 25-31.
8. Waring, R. H., Way, J., Hunt, E. R., Morrissey, L., Ranson, K. J., Weishampel, J. F., . . .
Franklin, S. E. (1995). Imaging Radar for Ecosystem Studies. BioScience, 45(10), 715-
723.
9. Woodhouse, I. H. (2006). Introduction to Microwave Remote Sensing. Boca Raton, FL: Taylor
& Francis.

11.8 TERMINAL QUESTIONS
1. What are the different interpretation keys that can be applied to synthetic aperture radar?
2. How does radar energy respond to urban features? Explain the factors affecting it.
3. How does radar energy respond to forest features? Explain the factors affecting it.


BLOCK 4 : DIGITAL IMAGE PROCESSING


UNIT 12 - INTRODUCTION TO DIGITAL IMAGE PROCESSING

12.1 OBJECTIVES
12.2 INTRODUCTION
12.3 INTRODUCTION TO DIGITAL IMAGE PROCESSING
12.4 SUMMARY
12.5 GLOSSARY
12.6 ANSWER TO CHECK YOUR PROGRESS
12.7 REFERENCES
12.8 TERMINAL QUESTIONS


12.1 OBJECTIVES
After reading this unit you will be able to understand:
 Definitions of image, digital image, signal and digital image processing.
 Types and formats of Image
 Procurement of Digital Image
 Preliminary concepts for image processing
 Colour composites

12.2 INTRODUCTION
The visual/on-screen method of image interpretation has already been explained to you. You have also learnt its related sub-topics, namely types of remote sensing and remote sensors, preliminary concepts, criteria of image interpretation, image elements, and advantages and limitations. Here, under the topic of `basics of digital image processing`, you will first understand the importance of the picture/image and why it is the most convenient means of bringing out detailed, useful information, followed by signal processing and then the basics of digital image processing.

Pictures are the most common and convenient means of conveying or transmitting information.
A picture is worth a thousand words. Pictures concisely convey information about positions,
sizes and inter-relationships between objects. They portray spatial information that we can
recognize as objects. Human beings are good at deriving information from such images because of their visual and mental abilities. About 75% of the information received by humans is in pictorial form.

Signal processing is a discipline in electrical engineering and mathematics that deals with the analysis and processing of analog and digital signals, including storing, filtering, and other operations on signals. These signals include transmission signals, sound or voice signals, image signals, and others. Among these, the field that deals with signals for which both the input and the output are images is image processing. As its name suggests, it deals with processing performed on images, and it can be further divided into analog image processing and digital image processing.

In the present context, we consider the analysis of pictures that employ an overhead perspective,
including radiation not visible to the human eye. Thus our discussion will focus on the
analysis of remotely sensed images. These images are represented in digital form. When
represented as numbers, brightness values can be added, subtracted, multiplied, divided and, in general,
subjected to statistical manipulations that are not possible if an image is presented only as a
photograph.

Although digital analysis of remotely sensed data dates from the early days of remote sensing,
the launch of the first Landsat earth observation satellite in 1972 began an era of increasing
interest in machine processing. Previously, digital remote sensing data could be analysed only at
specialized remote sensing laboratories. Specialized equipment and trained personnel necessary

UNIT 12 - INTRODUCTION TO DIGITAL IMAGE PROCESSING Page 211 of 281


GIS-505/DGIS-505 Advance Remote Sensing Uttarakhand Open University

to conduct routine machine analysis of data were not widely available, in part because of limited
availability of digital remote sensing data and a lack of appreciation of their qualities.

12.3 INTRODUCTION TO DIGITAL IMAGE PROCESSING

DEFINITIONS:

Image:

 An image is defined as a two-dimensional function, F(x, y), where x and y are spatial
coordinates, and the amplitude of F at any pair of coordinates (x, y) is called
the intensity of that image at that point.

 In other words, an image can be defined by a two-dimensional array specifically arranged in rows and columns.

Digital Image:

 Digital image processing means processing a digital image by means of a digital computer.
We can also say that it is the use of computer algorithms in order to get an enhanced image
or to extract some useful information from it.

 A digital image is composed of a finite number of elements, each of which has a
particular value at a particular location. These elements are referred to as picture
elements, image elements, and pixels. 'Pixel' is the term most widely used to denote the elements
of a digital image.

Signal:

 In electronics, a signal is an electric current or electromagnetic field used to convey data
from one place to another. The simplest form of signal is a direct current (DC) that is
switched on and off; this is the principle by which the early telegraph worked. More
complex signals consist of an alternating-current (AC) or electromagnetic carrier that
contains one or more data streams.

 A signal is a token; an indication.

 It may also be defined as an electrical quantity or effect, such as current, voltage, or
electromagnetic waves, that can be varied in such a way as to convey information.

Digital image processing:

Digital image processing (DIP) deals with the manipulation of digital images through a digital
computer. It is a subfield of signals and systems but focuses particularly on images.


Digital image processing is a collection of techniques for the manipulation of digital images by computers.

TYPES AND FORMATS OF IMAGE:

Types of an Image:

i. Binary Image: The binary image, as its name suggests, contains only two pixel values,
i.e. 0 and 1, where 0 refers to black and 1 refers to white. Such an image is also known as a
monochrome image.

ii. Black and White Image: The image which consists of only black and white colour is called a
black and white image. This is an analogue image without any digital values. Such images or
photographs are generally referred to as aerial or terrestrial photographs.

iii. 8 bit Image: It is a medium radiometric resolution image type. It has 256 (2⁸) different
shades of black and white in it and is commonly known as a grayscale image. In this format, 0
stands for black, 255 stands for white, and 127 stands for mid-grey. The shades of such an
image span 256 grey levels ranging from black to white. In
remote sensing it is called 8-bit radiometric resolution data.

iv. 16 bit Image: It is a high radiometric resolution type. It has 65,536 (2¹⁶) different shades
or colours in it. It is also known as the High Colour format. In this
type the distribution of colour is not the same as in a grayscale image; a 16-bit colour format is
often divided into three further channels, Red, Green and Blue, the well-known RGB
format.

Image Formats:

There are 5 main formats in which the images are stored.

i. TIFF (also known as TIF), file types ending in .tiff

TIFF stands for Tagged Image File Format. TIFF images create very large file sizes. TIFF
images are uncompressed and thus contain a lot of detailed image data (which is why the files
are so big). TIFFs are also extremely flexible in terms of color (they can be grayscale, or CMYK
for print, or RGB for web) and content (layers, image tags).

TIFF is the most common file format used in photo software (such as Photoshop), as well as
page layout software (such as QuarkXPress and InDesign).

ii. JPEG (also known as JPG), file types ending in .jpg

JPEG stands for Joint Photographic Experts Group. It is a standard type of image formatting. The
images of JPEG files are compressed to store a lot of information in a small-size file. Most
digital cameras store photos in JPEG format so that one can take more photos on one camera
card. While compressing JPEG images some of the detail is likely to be lost; that is why it is
called a "lossy" compression.

JPEG files are usually used for photographs on the web, because they create a small file that is
easily loaded on a web page and also looks good.

JPEG files are bad for line drawings or logos or graphics, as the compression makes them look
“bitmappy” (jagged lines instead of straight ones).

iii. GIF, file types ending in .gif

GIF stands for Graphic Interchange Format. This format compresses images but, unlike JPEG,
the compression is lossless (no detail is lost in the compression, although the file cannot be
made as small as a JPEG).

GIFs also have an extremely limited color range suitable for the web but not for printing. This
format is never used for photography, because of the limited number of colors. GIFs can also be
used for animations.

iv. PNG, file types ending in .png

PNG stands for Portable Network Graphics. It was created as an open format to replace GIF,
because the patent for GIF was owned by one company and nobody else wanted to pay licensing
fees. It also allows for a full range of color and better compression.

It’s used almost exclusively for web images, never for print images. For photographs, PNG is not
as good as JPEG, because it creates a larger file. But for images with some text, or line art, it’s
better, because the images look less “bitmappy.”

When you take a screenshot on your Mac, the resulting image is a PNG–probably because most
screenshots are a mix of images and text.

v. Raw image files

Raw image files contain data from a digital camera (usually). The files are called raw because
they haven’t been processed and therefore can’t be edited or printed yet. There are a lot of
different raw formats–each camera company often has its own proprietary format.

Raw files usually contain a vast amount of data that is uncompressed. Because of this, the size of
a raw file is extremely large. Remote sensing data obtained from different sensors systems are
stored in raw image files. Usually they are converted to TIFF before editing and color-
correcting.

Image data are in raster formats, stored in a rectangular matrix of rows and columns.
Radiometric resolution determines how many gradations of brightness can be stored for each cell
(pixel) in the matrix; 8-bit resolution, where each pixel contains an integer value from 0 to 255,
is most common. Modern sensors often collect data at higher bit depth (e.g. 16-bit for Landsat-8),
and advanced image processing software can make use of these values for analysis. The
human eye cannot detect very small differences in brightness, and most GIS software stretch data
for an 8-bit display.

In a grey scale image, 0 = black and 255 = white, and there is just one 8-bit value for each pixel.
However, in a natural color image, there are separate 8-bit brightness values for red, green and blue.
Therefore, each pixel in a color image requires three separate values to be stored in the file.
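For illustration, the short sketch below shows how a higher bit-depth band can be linearly stretched to the 0-255 range used for an 8-bit display. It assumes Python with NumPy; the 2%/98% percentile cut-offs and the function name are illustrative choices rather than a prescribed standard.

```python
import numpy as np

def stretch_to_8bit(band, lower_pct=2, upper_pct=98):
    """Linearly stretch a higher bit-depth band (e.g. 16-bit data) to 0-255
    for 8-bit display, clipping at the chosen lower/upper percentiles."""
    lo, hi = np.percentile(band, (lower_pct, upper_pct))
    scaled = (band.astype(np.float64) - lo) / (hi - lo)   # 0.0-1.0 between the cut-offs
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)

# Synthetic 16-bit band standing in for real sensor data
band16 = np.random.randint(5000, 30000, size=(100, 100), dtype=np.uint16)
band8 = stretch_to_8bit(band16)
print(band8.min(), band8.max())   # values now lie within 0-255
```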

The remote sensing industry and those associated with it have attempted to standardize the way
digital remote sensing data are formatted in order to make the exchange of data easier and to
standardize the way data can be read into different image analysis systems. The Committee on
Earth Observing Satellites (CEOS) has specified this format which is widely used around the
world for recording and exchanging data.

Following are the three possible ways/formats to organize these values in a raster file.

 BIP - Band Interleaved by Pixel: The red value for the first pixel is written to the file,
followed by the green value for that pixel, followed by the blue value for that pixel, and so
on for all the pixels in the image.
 BIL - Band Interleaved by Line: All of the red values for the first row of pixels are written to
the file, followed by all of the green values for that row followed by all the blue values for
that row, and so on for every row of pixels in the image.
 BSQ - Band Sequential: All of the red values for the entire image are written to the file,
followed by all of the green values for the entire image, followed by all the blue values for
the entire image.
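To make these three layouts concrete, the sketch below (Python with NumPy) shows how the same one-dimensional stream of pixel values is interpreted under each convention. The dimensions and the synthetic data are purely illustrative; in practice the flat array would be read from the raw file (e.g. with numpy.fromfile).

```python
import numpy as np

rows, cols, bands = 600, 800, 3   # assumed image dimensions
# Stand-in for the flat byte stream that would normally be read from disk
raw = np.random.randint(0, 256, rows * cols * bands, dtype=np.uint8)

# BSQ: all values of band 1, then band 2, ...          -> shape (bands, rows, cols)
bsq = raw.reshape(bands, rows, cols)
# BIL: for each row, a line of band 1, band 2, ...     -> shape (rows, bands, cols)
bil = raw.reshape(rows, bands, cols)
# BIP: for each pixel, band 1 value, band 2 value, ... -> shape (rows, cols, bands)
bip = raw.reshape(rows, cols, bands)

# However the data are interleaved, the band values of pixel (i, j) are addressed as:
i, j = 10, 20
print(bsq[:, i, j], bil[i, :, j], bip[i, j, :])
```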

Ortho images are delivered in a variety of image formats, either compressed or uncompressed.
The most common are TIF and JPG. Compression eases data management challenges, as large
high-resolution Ortho-photo projects can easily result in terabytes of uncompressed imagery.
Compression can also speed display in GIS systems. The downside is that compression can
introduce artifacts and change pixel values, possibly hampering interpretation and analysis,
particularly with respect to fine detail. The decision to compress should be driven by end-user
requirements; it is not uncommon to deliver a set of uncompressed imagery for archival and
special applications along with a set of compressed imagery for easy use by large numbers of
users. If there is an intention for web-based display or distribution of Ortho-imagery, a
compressed set of Ortho-imagery is often recommended. In any event, geo-referencing
information must also be provided. Both TIFF and JPG image formats can accommodate geo-
referencing information, either embedded in the image file itself, as in the case of Geo TIFF, or
as a separate file for each image, as in the case of TIFF with a TFW (TIF World) file. The geo-
referencing information tells GIS software i) the size of a pixel, ii) where to place one corner of
the image in the real world, and iii) whether the image is rotated with respect to the ground
coordinate system.
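As an illustration of how this geo-referencing information is used, the sketch below applies the six affine parameters of an ESRI-style world file (such as a TFW) to convert a pixel's column/row position into map coordinates. The numeric values are invented for the example; real values would come from the delivered world file or the embedded GeoTIFF tags.

```python
# A world file holds six values, one per line, in the order A, D, B, E, C, F where:
#   A = pixel size in the x-direction, E = pixel size in the y-direction (negative,
#       since row numbers increase downwards), D and B = rotation terms,
#   C and F = map x and y coordinates of the centre of the upper-left pixel.
A, D, B, E, C, F = 0.5, 0.0, 0.0, -0.5, 443200.25, 3751200.75   # illustrative values

def pixel_to_map(col, row):
    """Affine transformation from image (column, row) to map (x, y) coordinates."""
    x = A * col + B * row + C
    y = D * col + E * row + F
    return x, y

print(pixel_to_map(0, 0))       # centre of the upper-left pixel
print(pixel_to_map(100, 200))   # 100 columns to the right, 200 rows down
```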

PROCUREMENT OF DIGITAL IMAGE:

Remote sensing data from foreign countries and Indian Remote Sensing (IRS) data is made
available by NDC (National Data Centre), NRSC to all users for various developmental and
application requirements as per RSDP policy. The following are the guidelines for data
dissemination.

 All data of resolutions up to 1 m shall be distributed on a non-discriminatory basis and on an
"as requested" basis.
 Data better than 1 m resolution can be distributed to users after screening and ensuring
that sensitive areas are excluded.
 Data better than 1m resolution will be supplied as per RSDP policy. Government users
can obtain the data without any further clearance. Public sector undertakings /
autonomous bodies / academic institutions/Private users, recommended by at least one
Government agency, can obtain data without any further clearance.
 Request for data from all users must be received in the prescribed form duly authorized
by the head of the organization that shall be responsible to state the end application and
identify a senior official for safe keeping of the data.

Ordering Procedure:
The User Order Processing System (UOPS) is an online web application through which users are
requested to specify their area and period of interest along with the sensor and product selection.
The user's area of interest (AOI) may be specified in the form of a point, polygon, draw-on-map,
location name, map sheet or shape file. For shape-file-based AOI specification, the input shape
file should be in ESRI compatible format (.shx, .shp, .dbf and .prj) and the distance between the
vertices of the shape file must be a minimum of 5 km. The minimum order is 1 scene for the
corresponding sensor.

Based on the proforma invoice generated through UOPS, the user can transfer 100% advance
payment to NRSC along with 18% GST (as per the applicable guidelines) through NEFT online
transfer or a demand draft in favour of "Pay and Accounts Officer, NRSC" payable at Hyderabad.

All data products will be disseminated to the users as per the Remote Sensing Data Policy and
the guidelines provided by the Government of India. The order will be entertained when the
required information is furnished in full and payment is made. Data orders are to be placed through
UOPS along with the required undertakings and certificates. Please ensure that the correct product
type is chosen. For further details, please contact NDC. Order once processed and confirmed
cannot be amended or cancelled unless technical problems are encountered during data
generation. NDC reserves the right to refuse/cancel any order in full or part.


Price and Payments:


For the detailed price list, you may open the price list on the web site of NDC, NRSC. The price list
indicates the product number, product type, accuracy, price, etc. Indicate the satellite and product number
details in the order form. Please ensure that the correct product type is chosen. For further details,
please contact NDC. The price applicable to each order is the price in effect on the date of
confirmation of the order with NDC. NDC publishes a price list of data products at periodic
intervals.

Discounts:

 50% discount on respective user category pricing for archived data older than 2 years from the
date of acquisition.
 50% discount for the data less than two years old for academic and research purpose.
 @ 3% for orders more than Rs. 10.0 lakhs
 @ 5% for orders more than Rs. 25.0 lakhs
 @ 10% for orders more than Rs. 1.00 crore placed at a time for IRS products.

Priority Services: The provision for supply of satellite data within 24 hours (1 day) with an
additional charge of 50% is available. Priority orders must be received at NDC before 11 AM on
a working day. In case the order is accepted but could not be shipped within 16 hours, no
additional charges will be levied.

Licensing:
NRSC grants only a single-user license for the use of IRS images. All products are sold for the sole
use of the purchaser and shall not be loaned, copied or exported without the express permission of,
and only in accordance with the terms and conditions, if any, agreed with the NRSC Data Centre,
National Remote Sensing Centre, ISRO, Dept. of Space, Govt. of India. All data will be
provided with encryptions/mechanisms which may corrupt the data if unauthorized copying is
attempted. Every such attempt shall attract criminal and civil liability for
the user, without prejudice to the corruption of data or software/hardware, for which NRSC will
not be liable. NRSC grants the user a limited, non-exclusive, non-transferable license with the
following terms and conditions.

Terms and Conditions:

 User can install the product in his premises (including on an internal computer network) with
the express exclusion of the Internet.
 User can make copies of the product (for installation and back-up purposes)
 User can use the product for his own internal needs
 User can use the product to produce Value Added Products and/or derivative works
 User can use any Value Added Product for his own internal needs
 User can make the product and/or any Value Added Product temporarily available to
contractors and consultants, only for use on behalf of the user
 User can print and distribute or post on an Internet site, but with no possibility of
downloading, an extract of a product or Value Added Product (maximum size 1024 x 1024
pixels) for promotional purposes (not including on-line mapping or geolocation services for
on-line promotion), in each case with an appropriate credit conspicuously displayed.

Limited Warranty and Liability:


NRSC warrants (a) that it has sufficient distribution to make the Product available to the user
under the terms hereof, free from the adverse claims of third parties; and (b) that the Product
will, for thirty (30) days from the date of receipt, substantially conform to NRSC's specifications
when used on appropriate computer hardware and DOS certified Software packages. The high
resolution data products are complex in nature and may contain a few artefacts, which may not
affect the usability of the data products supplied. The recommended usage for Cartosat-2 data is
up to 1:4000 scale. There are no expressed or implied warranties of fitness or merchantability
given in connection with the sale or use of this product. NRSC disclaims all other warranties not
expressly given herein.

Returns and Disputes:


For a period of thirty (30) days, NRSC warrants that the products delivered by NRSC will be
limited to the area of interest ordered and that the original media on which the data product was
supplied will be free from physical or material defects. If FTP is selected as the mode of data product
dissemination, products will be available for 15 days on the FTP server for the users to download. No
complaint related to the quality and/or quantity of the products will be entertained unless the
complaint is lodged at NDC within 30 days from the date of receipt of data. On acceptance of the
complaint, products are to be returned in the original media supplied by NDC. NRSC's sole
obligation and user's sole remedy under this Limited Warranty is that NRSC either, in its
discretion, shall use reasonable efforts to repair or replace the Product or to provide a procedure
within a commercially reasonable time so that the Product substantially conforms to the
specifications.

This Limited Warranty is void if any non-conformity has resulted from accident, abuse, misuse,
misapplication, or modification by someone other than NRSC. The Limited Warranty is for
user's benefit only, and is non-transferable. NRSC is not liable for any incidental or
consequential damages associated with users’ possession and/or use of the Product.

In case any dispute arises on the applicability or interpretation of the above terms and
conditions between NRSC and the user, the matter shall be referred to the Secretary,
Department of Space, Govt. of India, whose decision shall be binding on both parties.

Based on the policies and rules of data procurement and dissemination, you should know the
following before ordering the data and mention the correct specifications on the data order
proforma.

 Satellite platform/mission
 Sensor characteristics (film types, digital systems)
 Season of the year and time of the day
 Atmospheric effects
 Resolution of the imaging system and scale


 Image motion
 Stereoscopic parallax (in case of aerial photographs or sensor type providing stereo
image)
 Exposure and processing
 Precision standard of data
 Type of data viz., analogue or digital
 Number of spectral bands or FCC and
 The type of digital data format

Digital Data Formats:


The digital data acquired from remote sensing systems are stored in different types of formats,
viz. (1) band sequential (BSQ), (2) band interleaved by line (BIL) and (3) band interleaved by pixel
(BIP). It should be noted, however, that each of these formats is usually preceded on the digital
tape by 'header' and/or 'trailer' information, which consists of ancillary data about the date,
altitude of the sensor, sun angle, and so on. Such information is useful when performing geometric or
radiometric correction of the data. The data are normally recorded on nine-track CCTs with a data
density on the tape of 800, 1600, or 6250 bits per inch (bpi). Before ordering the data the user
must be aware of the above data formats so that the correct procedures can be applied for
downloading, correction and further processing.

PRELIMINARY CONCEPTS OF DIGITAL IMAGE PROCESSING:

Digital Image and Digital Number:


A digital remotely sensed image is typically composed of picture elements (pixels) located at the
intersection of each row i and column j in each of the k bands of imagery (Fig. 12.1). Associated with
each pixel is a number known as the Digital Number (DN) or Brightness Value (BV), which depicts the
average radiance of a relatively small area within a scene. A smaller number indicates low
average radiance from the area and a higher number is an indicator of higher radiant properties of
the area. The size of this area affects the reproduction of detail within the scene. As the pixel size is
reduced, more scene detail is preserved in the digital representation.


Figure 12.1. Digital image

A digital image is a representation of a real image as a set of numbers that can be stored and
handled by a digital computer. In order to translate the image into numbers, it is divided into
small areas called pixels (picture elements). For each pixel, the imaging device records a
number, or a small set of numbers, that describe some property of this pixel, such as its
brightness (the intensity of the light) or its color. The numbers are arranged in an array of rows
and columns that correspond to the vertical and horizontal positions of the pixels in the image.
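A minimal illustration of this idea, assuming Python with NumPy, is given below: a tiny single-band digital image stored as a rows-by-columns array, in which each entry is the number recorded for that pixel.

```python
import numpy as np

# A 3-row x 4-column single-band digital image; each entry is a pixel value (DN)
image = np.array([[ 12,  15,  20,  60],
                  [ 14,  18,  95, 120],
                  [ 10,  90, 110, 130]], dtype=np.uint8)

rows, cols = image.shape
dn = image[1, 2]            # value at row i = 1, column j = 2
print(rows, cols, dn)       # 3 4 95
# A multi-band image simply adds a band index, e.g. image[i, j, k]
```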

Digital images have several basic characteristics. One is the type of the image. For example, a
black and white image records only the intensity of the light falling on the pixels. A color image
can have three colors, normally RGB (Red, Green, Blue) or four colors, CMYK (Cyan, Magenta,
Yellow, black). RGB images are usually used in computer monitors and scanners, while CMYK
images are used in color printers. There are also non-optical images such as ultrasound or X-ray
in which the intensity of sound or X-rays is recorded. In range images, the distance of the pixel
from the observer is recorded. Resolution is expressed in the number of pixels per inch (ppi). A
higher resolution gives a more detailed image. A computer monitor typically has a resolution of
100 ppi, while a printer has a resolution ranging from 300 ppi to more than 1440 ppi. This is why
an image looks much better in print than on a monitor.

To recognize an object, the computer has to compare the image to a database of objects in its
memory. This is a simple task for humans but it has proven to be very difficult to do
automatically. One reason is that an object rarely produces the same image of itself. An object
can be seen from many different viewpoints and under different lighting conditions, and each
such variation will produce an image that looks different to the computer. The object itself can
also change; for instance, a smiling face looks different from a serious face of the same person.
Because of these difficulties, research in this field has been rather slow, but there are already
successes in limited areas such as inspection of products on assembly lines, fingerprint
identification by the FBI, and optical character recognition (OCR). OCR is now used by the U.S.
Postal Service to read printed addresses and automatically direct the letters to their destination,
and by scanning software to convert printed text to computer readable text.

Need, scope and Importance of digital image processing:


Remote sensing data received from a satellite, particularly a sun-synchronous satellite, contain
many discrepancies such as radiometric errors, geometric errors, atmospheric variability, weather
fluctuations, atmospheric haze, light intensity variability due to the Sun's inclination above the horizon,
cloud cover, shadows, topographic influence and distortion due to the Earth's movement under each
satellite pass. Therefore, digital image processing certainly requires correction and rectification of
the digital image/data with respect to those errors, carried out digitally using
specific computer software. After digital processing, an analogue product showing a considerable
amount of contrast among features is prepared for visual interpretation (already discussed in the
previous chapter). Similarly, if required, a product for visual interpretation on the computer screen
is also prepared.

One of the greatest advantages of digital image processing/interpretation over visual
interpretation is the capture of the maximum amount of information (based on spectral and radiometric
resolution), as a computer with specific software is capable of extracting it. The
limitation of visual interpretation stems from the fact that our eyes can identify only up to 20-22
colours within a colour composite image and up to about 10 shades of grey tone within a black and white
image.

Another advantage of digital images over traditional ones is the ability to transfer them
electronically almost instantaneously and convert them easily from one medium to another such
as from a web page to a computer screen to a printer. A bigger advantage is the ability to change
them according to one's needs. There are several programs available now which give a user the
ability to do that, including Photoshop, Photo paint, and the Gimp. With such a program, a user
can change the colors and brightness of an image, delete unwanted visible objects, move others,
and merge objects from several images, among many other operations. In this way a user can
retouch family photos or even create new images. Other software, such as word processors
and desktop publishing programs, can easily combine digital images with text to produce books
or magazines much more efficiently than with traditional methods.


While processing the data digitally we need a lot of storage space and powerful computers to
analyse the data from today`s remote sensing systems. Following example highlight the space
required for digital image processing:
One 8-bit pixel takes up one single byte of computer disk space. One kilobyte (Kb) is 1024
bytes. One megabyte (Mb) is 1024 kilobytes. How many megabytes of computer disk space
would be required to store an 8-bit Landsat Thematic Mapper (TM) image (7 bands), which is
6000 pixels by 6000 lines in dimension?
If we have seven bands of TM data, each 6000 pixels by 6000 lines, and each pixel takes up one
byte of disk space, we have:
7 x 6000 x 6000 = 252,000,000 bytes of data
To convert this to kilobytes we need to divide by 1024, and to convert that answer to megabytes
we need to divide by 1024 again!
252,000,000 ÷ (1024 x 1024) = 240.33 megabytes
So, we would need over 240 megabytes of disk space just to hold one full TM image, let alone
analyze the imagery and create any new image variations! Needless to say, it takes a lot of
storage space and powerful computers to analyze the data from today's remote sensing systems.
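The same arithmetic can be written out in a few lines of Python; the figures below simply reproduce the TM example above.

```python
bands, pixels, lines = 7, 6000, 6000       # 8-bit Landsat TM scene from the example
bytes_total = bands * pixels * lines       # one byte per 8-bit pixel
megabytes = bytes_total / (1024 * 1024)    # bytes -> kilobytes -> megabytes
print(bytes_total, round(megabytes, 2))    # 252000000 240.33
```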

Digital images tend to produce big files and are often compressed to make the files
smaller. Compression takes advantage of the fact that many nearby pixels in the image have
similar colors or brightness. Instead of recording each pixel separately, one can record that, for
example, "the 100 pixels around a certain position are all white." Compression methods vary in
their efficiency and speed. The GIF method has good compression for 8 bit pictures, while
the JPEG is lossy, i.e. it causes some image degradation. JPEG's advantage is speed, so it is
suitable for motion pictures.

In today's world of advanced technology where most remote sensing data are recorded in digital
format, virtually all image interpretation and analysis involves some element of digital
processing. Digital image processing may involve numerous procedures including formatting
and correcting of the data, digital enhancement to facilitate better visual interpretation, or even
automated classification of targets and features entirely by computer.

A very promising use of digital images is automatic object recognition. In this application, a
computer can automatically recognize an object shown in the image and identify it by name. One
of the most important uses of this is in robotics. A robot can be equipped with digital cameras
that can serve as its "eyes" and produce images. If the robot could recognize an object in these
images, then it could make use of it. For instance, in a factory environment, the robot could use a
screwdriver in the assembly of products. For this task, it has to recognize both the screwdriver
and the various parts of the product. At home a robot could recognize objects to be cleaned.
Other promising applications are in medicine, for example, in finding tumors in X-ray images.
Security equipment could recognize the faces of people approaching a building. Automated
drivers could drive a car without human intervention or drive a vehicle in inhospitable
environments such as on the planet Mars or in a battlefield.


A digital image processing system consists of computer hardware and image processing software
necessary to analyse digital image data. DIP focuses on developing a computer system that is
able to perform processing on an image. The input of that system is a digital image and the
system process that image using efficient algorithms, and gives an image as an output. The most
common example is Adobe Photoshop. It is one of the widely used applications for processing
digital images. Image processing mainly includes the steps of i) importing the image via image
acquisition tools, ii) analysing and manipulating the image, and iii) producing the output, in which
the result can be an altered image or a report based on the analysis of that image.
Digital Image Processing is an extremely broad subject and involves procedures, which are
mathematically complex. The procedure for digital image processing may be categorized into the
following types of computer-assisted operations.

Sequence of activities for digital image processing:


In order to process remote sensing imagery digitally, the data must be recorded and available in
a digital form suitable for storage on a computer tape or disk. Obviously, the other requirement
for digital image processing is a computer system, sometimes referred to as an image analysis
system, with the appropriate hardware and software to process the data. Several commercially
available software systems have been developed specifically for remote sensing image
processing and analysis.

For discussion purposes, most of the common image processing functions available in image
analysis systems, can be categorized into the following four categories:

 Preprocessing
 Image Enhancement
 Image Transformation
 Image Classification and Analysis

COLOUR COMPOSITES:
A colour composite provides the maximum amount of spectral variability by assigning different
colours to the spectral bands chosen for the composite. High spectral resolution is important
when producing colour composites. For a true colour composite, image data acquired in the red,
green and blue spectral regions must be assigned to the red, green and blue planes of the image processor's
frame buffer memory. A colour infrared composite, the 'standard
false colour composite', is displayed by placing the infrared, red and green bands into the red, green and
blue frame buffer memory respectively (Fig. 12.2). In this composite, healthy vegetation shows up in shades
of red because vegetation absorbs most of the green and red energy but reflects approximately half of the incident infrared
energy. Urban areas reflect roughly equal portions of NIR, R and G, and therefore they appear as steel grey.
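A minimal sketch of building such a standard false colour composite is given below. It assumes three co-registered 8-bit band arrays of identical shape; the random stand-in data merely keep the example self-contained, and in practice the arrays would be the NIR, red and green bands read from the image file.

```python
import numpy as np

shape = (100, 100)
nir   = np.random.randint(0, 256, shape, dtype=np.uint8)   # stand-in NIR band
red   = np.random.randint(0, 256, shape, dtype=np.uint8)   # stand-in red band
green = np.random.randint(0, 256, shape, dtype=np.uint8)   # stand-in green band

# Standard FCC: NIR -> red channel, red -> green channel, green -> blue channel
fcc = np.dstack([nir, red, green])
print(fcc.shape)   # (100, 100, 3); ready for display, e.g. with matplotlib's imshow(fcc)
```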


Figure 12.2. Colour composites

The color depth (of a color image) or "bits per pixel" is the number of bits in the numbers that
describe the brightness or the color. More bits make it possible to record more shades of gray or
more colors. For example, an RGB image with 8 bits per color has a total of 24 bits per pixel
("true color"). Each bit can represent two possible values, so 24 bits give a total of 2²⁴ = 16,777,216 possible
colors. A typical GIF image on a web page has 8 bits for all colors combined, for a total of 256
colors. However, it is a much smaller image than a 24-bit one, so it downloads more quickly. A
fax image has only one bit, i.e. two "colors": black and white. The format of the image gives more
details about how the numbers are arranged in the image file, including what kind of
compression is used, if any.

12.4 SUMMARY

Digital image processing is defined as the processing of digital data/images by means of a digital
computer. It is the use of computer algorithms to enhance the image (spectrally and
radiometrically) and improve its interpretability for extracting meaningful and
useful information/results. Image processing mainly includes importing the image via image
acquisition tools, analysing and manipulating the image, and producing the output, in which the results
are seen as altered images or a report based on the analysis of that image. An image is defined by a
two-dimensional array specifically arranged in rows and columns.


A digital image is composed of a finite number of elements, each showing a particular value at a
particular location. These elements are referred to as picture elements, image elements, and
pixels. 'Pixel' is the term most widely used to denote the elements of a digital image.

Digital image procurement is the most important task for all individual users and the offices
where this technique is being used. Digital image/data of remote sensing from foreign countries
and Indian Remote Sensing (IRS) data is made available by NDC (National Data Centre), NRSC
to all users for various developmental and application requirements as per RSDP policy. For this,
certain guidelines and ordering procedures are followed.

Remote sensing data received from a satellite, particularly a sun-synchronous satellite, contain
many discrepancies. Therefore, digital image processing certainly requires correction and
rectification of the digital image/data with respect to those errors, carried out digitally using
specific computer software.

One of the greatest advantages of digital image processing/interpretation over visual
interpretation is the capture of the maximum amount of information (based on spectral and radiometric
resolution), as a computer with specific software is capable of extracting it. The
limitation of visual interpretation stems from the fact that our eyes can identify only up to 20-22
colours within a colour composite image and up to about 10 shades of grey tone within a black and white
image. Another advantage of digital images over traditional ones is the ability to transfer them
electronically almost instantaneously and to convert them easily from one medium to another, such
as from a web page to a computer screen to a printer. A bigger advantage is the ability to change
them according to one's needs.
While processing the data digitally we need a lot of storage space and powerful computers to
analyse the data from today's remote sensing systems.
In today's world of advanced technology where most remote sensing data are recorded in digital
format, virtually all image interpretation and analysis involves some element of digital
processing. Digital image processing may involve numerous procedures including formatting
and correcting of the data, digital enhancement to facilitate better visual interpretation, or even
automated classification of targets and features entirely by computer. A very promising use of
digital images is automatic object recognition.

Digital image processing is an extremely broad subject and involves procedures which are
mathematically complex. The procedures for digital image processing are categorised as creation
of false colour composites, preprocessing, image enhancement, image transformation, and image
classification and analysis.


12.5 GLOSSARY
Calibration- The comparison of instrument performance to a standard of higher accuracy. The
standard is considered the reference and the more correct measure.

CFA- Colour filter array - Digital image sensors used in scanners and digital cameras do not
respond in a manner that differentiates colour. The sensors respond to the intensity of light: the
pixel that receives greater intensity produces a stronger signal. A colour filter array (CFA) is a
mosaic of colour filters (generally red, green and blue) that overlays the pixels comprising the
sensor. The colour filters limit the intensity of light being recorded at the pixel to be associated
with the wavelengths transmitted by that colour. A demosaicing algorithm is able to take the
information about the spectral characteristics of each color of a filter array, and the intensity of
the signal at each pixel location to create a color encoded digital image.

File format - Set of structural conventions that define a wrapper, formatted data, and embedded
metadata, and that can be followed to represent images, audiovisual waveforms, texts, etc., in a
digital object. The wrapper component on its own is often colloquially called a file format. The
formatted data may consist of one or more encoded binary bit streams for such entities as images
or waveforms.

Bit depth (image) - The number of bits used to represent each pixel in an image. The term can
be confusing since it is sometimes used to represent bits per pixel and at other times, the total
number of bits used multiplied by the number of total channels. For example, a typical color
image using 8 bits per channel is often referred to as a 24-bit colour image (8 bits x 3 channels).
Colour scanners and digital cameras typically produce 24 bit (8 bits x 3 channels) images or 36
bit (12 bits x 3 channels) capture, and high-end devices can produce 48 bit (16 bit x 3 channels)
images. A grayscale scanner would generally be 1 bit for monochrome or 8 bit for grayscale
(producing 256 shades of gray). Bit depth is also referred to as colour depth.

Brightness - The attribute of the visual sensation that describes the perceived intensity of light.
Brightness is among the three attributes that specify color. The other two attributes are hue and
saturation.

12.6 ANSWER TO CHECK YOUR PROGRESS


 Define image?
 Define digital image?
 Define signal?
 Define digital image processing?
 Define types and formats of Image?
 Define preliminary concepts for image processing?


12.7 REFERENCES
1. Baxes, Gregory H. Digital Image Processing: Principles and Applications. New York: John
Wiley and Sons, 1994.

2. Davies, Adrian. The Digital Imaging A-Z. Boston: Focal Press, 1998.

3.http://web.ipac.caltech.edu/staff/fmasci/home/astro_refs/Digital_Image_Processing_2ndEd.pdf


12.8 TERMINAL QUESTIONS


1) Define image and image processing.

2) What are the different image types and their formats?

3) Describe digital image procurement procedure.

4) What are the guidelines and procedures for the procurement of digital images?

5) Why and how will you create a false colour composite (FCC)?


UNIT 13 - PREPROCESSING, IMAGE REGISTRATION &


IMAGE ENHANCEMENT TECHNIQUES

13.1 OBJECTIVES
13.2 INTRODUCTION
13.3 PREPROCESSING, IMAGE REGISTRATION & IMAGE
ENHANCEMENT TECHNIQUES
13.4 SUMMARY
13.5 GLOSSARY
13.6 ANSWER TO CHECK YOUR PROGRESS
13.7 REFERENCES
13.8 TERMINAL QUESTIONS


13.1 OBJECTIVES
After reading this unit you will be able to understand:
 Preprocessing Techniques
 Image Registration
 Digital Image Enhancement techniques

13.2 INTRODUCTION
In the previous unit you learnt about the basics of digital image processing, and under its
sub-heads you studied the concepts, definitions, need and scope of the digital image and digital image
processing; types and formats of digital images; procurement of digital images; preliminary
concepts for image processing; and colour composites. That unit, however, was related to the basics of digital
image processing only. This unit includes the details of image processing with respect to
preprocessing, digital image registration and enhancement techniques.

Both the visual onscreen/analogue (paper print) and the digital methods of processing/interpreting
remotely sensed images have their own roles, based on the objectives and the
available infrastructure, computer hardware and software. It is also to be noted that many image
processing and analysis techniques have been developed to aid the interpretation of remote
sensing images and to extract as much information as possible from them. The choice of
specific techniques or algorithms depends on the goals of each individual project. In this
section, we will examine some procedures commonly used in analysing/interpreting remote
sensing images.

The preprocessing of remote sensing data is a crucial step in the remote sensing analytical
workflow and is often the most time consuming and costly. Examples of preprocessing tasks
include geometrically correcting imagery to improve the positional accuracy, compressing
imagery to save disk space, converting lidar point cloud data to raster models to speed up
rendering in GIS systems, and correcting for atmospheric effects to improve the spectral qualities
of an image.

Prior to data analysis, initial processing on the raw data is usually carried out to correct for any
distortion due to the characteristics of the imaging system and imaging conditions. Depending on
the user's requirement, some standard correction procedures may be carried out by the ground
station operators before the data is delivered to the end-user. These procedures
include radiometric correction to correct for uneven sensor response over the whole image
and geometric correction to correct for geometric distortion due to Earth's rotation and other
imaging conditions (such as oblique viewing). The image may also be transformed to conform to
a specific map projection system. Furthermore, if the accurate geographical location of an area on
the image needs to be known, ground control points (GCPs) are used to register the image to a
precise map (geo-referencing).

For the registration of remote sensing images, image processing tools provide support for point
mapping to determine the parameters of the transformation required to bring an image into
alignment with another image. In point mapping, you pick points in a pair of images that
identify the same feature or landmark in the images. Then, a geometric mapping is inferred from
the positions of these control points. There are many other methods of registration described in
this unit. Those methods are based on certain tasks and objectives to be fulfilled.

Keeping in view the contents and sub-topics to be explained under this unit, the following section
describes these techniques in detail.

13.3 PREPROCESSING, IMAGE REGISTRATION & IMAGE


ENHANCEMENT TECHNIQUES

DEFINITIONS:

Preprocessing - Preprocessing of a digital image is the use of a computer to read in the digital
data and, through computer algorithms, make it fit for detailed image processing.
Preprocessing describes the methods used to prepare images for further analysis, including
interest point and feature extraction.

Image Registration:
 Image registration is the process of transforming different sets of data into one coordinate
system.
 Image registration is an image processing technique used to align multiple scenes into a
single integrated image.
 Image registration is defined as a process that overlays two or more images from various
imaging equipment or sensor taken at different times and angles or from the same scene to
geometrically align the images for analysis (Zitova and Flusser, 2003)

Image Enhancement:
 Image enhancement refers to improving the appearance of the imagery to assist in visual
interpretation and analysis.
 Image Enhancement involves techniques for increasing the visual distinction between
features in a scene.
 The objective of image enhancement is to create new images from original data in order to
increase the amount of information that can be visually interpreted from the data.

PREPROCESSING TECHNIQIES:
Intelligent use of image pre-processing can provide benefits and solve problems that ultimately
lead to better local and global feature detection. Generally the following steps are taken while
preprocessing the digital data:

Reading Image:
The path to the image dataset is stored in a variable, and then a function is created to load the folders
containing the images into arrays.


Resizing Image:
In this step, in order to visualise the change, we create two functions to display the images: the
first to display one image and the second to display two images side by side. After that, a function
called processing is created that simply receives the images as a parameter. But why do we resize our
image during the pre-processing phase? This is because images captured by
cameras/scanners and fed to our computer algorithm vary in size; therefore, we should establish a
base size for all images fed into our AI (Artificial Intelligence) algorithm.

Removing Noise (Denoise):


Inside the processing function, code is added to smooth the image and remove unwanted noise. This
is generally done using a Gaussian blur. Gaussian blur (also known as Gaussian smoothing) is
the result of blurring an image by a Gaussian function. It is a widely used effect in
graphics software; the technique produces a smooth blur resembling that of viewing the image through a translucent
screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow
of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage
in computer vision algorithms in order to enhance image structures at different scales.

Segmentation & Morphology:


Inside the processing function, code for segmentation and morphology is added. In this step the
image is segmented, the background is separated from the foreground objects, and the
segmentation is further improved with additional noise removal.
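The sketch below strings these preprocessing steps together, assuming Python with the SciPy and scikit-image libraries; the base size, the Gaussian sigma and the use of Otsu's global threshold are illustrative choices, not the only possible ones.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, transform

def preprocess(image, base_size=(256, 256)):
    """Resize to a common base size, smooth with a Gaussian blur to remove noise,
    segment foreground from background, and clean the mask morphologically."""
    resized = transform.resize(image, base_size, anti_aliasing=True)   # resizing
    denoised = filters.gaussian(resized, sigma=1.0)                    # Gaussian smoothing
    threshold = filters.threshold_otsu(denoised)                       # global threshold
    mask = denoised > threshold                                        # segmentation
    cleaned = ndimage.binary_opening(mask, iterations=2)               # morphological clean-up
    return cleaned

img = np.random.rand(300, 400)        # stand-in grey-scale image
print(preprocess(img).shape)          # (256, 256)
```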

Preprocessing functions involve those operations that are normally required prior to the main
data analysis and extraction of information, and are generally grouped as radiometric or
geometric corrections. Radiometric corrections include correcting the data for sensor
irregularities and unwanted sensor or atmospheric noise, and converting the data so they
accurately represent the reflected or emitted radiation measured by the sensor.
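As a simple example of such a radiometric conversion, the sketch below applies the common linear relation L = gain x DN + offset to turn digital numbers into at-sensor radiance. The gain and offset values shown are purely illustrative; real values are taken from the sensor's header or metadata.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert digital numbers to at-sensor spectral radiance: L = gain * DN + offset."""
    return gain * dn.astype(np.float64) + offset

dn_band = np.array([[0, 64], [128, 255]], dtype=np.uint8)
radiance = dn_to_radiance(dn_band, gain=0.037, offset=-0.15)   # illustrative coefficients
print(radiance)
```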

Preprocessing also includes Image Rectification. These operations aim to correct distorted or
degraded image data to create a faithful representation of the original scene. This typically
involves the initial processing of raw image data to correct for geometric distortion, to calibrate
the data radiometrically and to eliminate noise present in the data. Image rectification and
restoration procedures are often termed pre-processing operations because they normally precede
manipulation and analysis of image data. Geometric corrections include correcting for geometric
distortions due to sensor-Earth geometry variations, and conversion of the data to real world
coordinates (e.g. latitude and longitude) on the Earth's surface.

IMAGE REGISTRATION:

Image registration is the process of aligning two or more images of the same scene. This process
involves designating one image as the reference image, also called the fixed image, and applying
geometric transformations or local displacements to the other images so that they align with the
reference. Images can be misaligned for a variety of reasons. Commonly, images are captured
under variable conditions that can change the camera perspective or the content of the scene.
Misalignment can also result from lens and sensor distortions or differences between capture
devices.


Image registration is often used as a preliminary step in image processing applications. For
example, you can use image registration to align satellite images or medical images captured
with different diagnostic modalities, such as MRI and SPECT. Image registration enables you to
compare common features in different images. For example, you might discover how a river has
migrated, how an area became flooded, or whether a tumor is visible in an MRI or SPECT
image.

Image Processing Toolbox offers three image registration approaches: an interactive Registration
Estimator app, intensity-based automatic image registration, and control point registration.
Computer Vision Toolbox offers automated feature detection and matching.

Registration Estimator App:


Registration Estimator app enables you to register 2-D images interactively. You can compare
different registration techniques, tune settings, and visualize the registered image. The app
provides a quantitative measure of quality, and it returns the registered image and the
transformation matrix. The app also generates code with your selected registration technique and
settings, so you can apply an identical transformation to multiple images. Registration Estimator
app offers six feature-based techniques, three intensity-based techniques, and one non-rigid
registration technique.

Image registration algorithms can be classified into several categories, for example according to the
transformation model they use. The following are the different categories of image registration
algorithms:

Intensity-based vs feature-based:
Image registration or image alignment algorithms can be classified into intensity-based and
feature-based. One of the images is referred to as the moving or source and the others are
referred to as the target, fixed or sensed images. Image registration involves spatially
transforming the source/moving image(s) to align with the target image. The reference frame in
the target image is stationary, while the other datasets are transformed to match to the
target.[3] Intensity-based methods compare intensity patterns in images via correlation metrics,
while feature-based methods find correspondence between image features such as points, lines,
and contours. Intensity-based methods register entire images or sub-images. If sub-images are
registered, centers of corresponding sub images are treated as corresponding feature points.
Feature-based methods establish a correspondence between a number of especially distinct
points in images. Knowing the correspondence between a number of points in the images, a
geometrical transformation is then determined to map the target image to the reference image,
thereby establishing point-by-point correspondence between the reference and target images.
Methods combining intensity-based and feature-based information have also been developed.
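As an illustration of the control-point idea used by feature-based registration, the sketch below estimates an affine transformation from a handful of corresponding points by least squares and then checks the residuals of the fit. The coordinates are invented for the example; real control points would be picked in the two images.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src control points (image to be registered)
    onto dst points (reference image). Inputs are (N, 2) arrays with N >= 3."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])            # design matrix [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return params                                         # 3 x 2 matrix of coefficients

def apply_affine(params, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ params

src = np.array([[10.0, 10.0], [200.0, 15.0], [30.0, 180.0], [190.0, 170.0]])
dst = np.array([[12.5, 14.0], [205.0, 22.0], [28.0, 186.0], [193.0, 178.0]])

T = estimate_affine(src, dst)
print(apply_affine(T, src) - dst)    # residuals of the fit, ideally close to zero
```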

Transformation models:
Image registration algorithms can also be classified according to the transformation models they
use to relate the target image space to the reference image space. The first broad category of
transformation models includes linear transformations, which include rotation, scaling,
translation, and other affine transforms. Linear transformations are global in nature, thus, they
cannot model local geometric differences between images.

The second category of transformations allows 'elastic' or 'non-rigid' transformations. These
transformations are capable of locally warping the target image to align with the reference
image. Non-rigid transformations include radial basis functions (thin-plate or surface splines
and compactly-supported transformations), physical continuum models (viscous fluids), and
large deformation models.

Transformations are commonly described by a parameterization, where the model dictates the
number of parameters. For instance, the translation of a full image can be described by a single
parameter, a translation vector. These models are called parametric models. Non-parametric
models on the other hand, do not follow any parameterization, allowing each image element to
be displaced arbitrarily.

Transformations of coordinates:
Alternatively, many advanced methods for spatial normalization are built on structure-preserving
transformations, homeomorphisms and diffeomorphisms, since they carry smooth
submanifolds smoothly during transformation. Diffeomorphisms are generated in the modern field
of Computational Anatomy based on flows, since diffeomorphisms are not additive although they
form a group, namely a group under the law of function composition. For this reason, flows, which
generalise the ideas of additive groups, allow for generating large deformations that preserve
topology, providing one-to-one and onto transformations. Computational methods for generating such
transformations are often called LDDMM, which provide flows of diffeomorphisms as the main
computational tool for connecting coordinate systems corresponding to the geodesic flows of
Computational Anatomy. There are a number of programs which generate diffeomorphic
transformations of coordinates via diffeomorphic mapping including MRI Studio and MRI
Cloud.org.

Spatial vs frequency domain methods:


Spatial methods operate in the image domain, matching intensity patterns or features in images.
Some of the feature matching algorithms are outgrowths of traditional techniques for performing
manual image registration, in which an operator chooses corresponding control points (CP) in
images. When the number of control points exceeds the minimum required to define the
appropriate transformation model, iterative algorithms like RANSAC can be used to robustly
estimate the parameters of a particular transformation type (e.g. affine) for registration of the
images.

Frequency-domain methods find the transformation parameters for registration of the images
while working in the transform domain. Such methods work for simple transformations, such as
translation, rotation, and scaling. Applying the phase correlation method to a pair of images
produces a third image which contains a single peak. The location of this peak corresponds to the
relative translation between the images. Unlike many spatial-domain algorithms, the phase
correlation method is resilient to noise, occlusions, and other defects typical of medical or
satellite images. Additionally, the phase correlation uses the fast Fourier transform to compute
the cross-correlation between the two images, generally resulting in large performance gains.


The method can be extended to determine rotation and scaling differences between two images
by first converting the images to log-polar coordinates. Due to properties of the Fourier
transform, the rotation and scaling parameters can be determined in a manner invariant to
translation.
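
To make the translation estimate concrete, the following short Python sketch (NumPy only; the array names are hypothetical) applies phase correlation: the peak of the inverse-transformed, normalized cross-power spectrum gives the shift between the two images.

import numpy as np

def phase_correlation_shift(fixed, moving):
    # Estimate the (row, col) translation of `moving` relative to `fixed`.
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    # Normalized cross-power spectrum keeps only phase information
    cross_power = np.conj(F) * M
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    # The location of the single peak corresponds to the relative translation
    peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape), dtype=float)
    shape = np.array(fixed.shape)
    peak[peak > shape / 2] -= shape[peak > shape / 2]   # wrap to signed offsets
    return peak

rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))      # known shift of (5, -3)
print(phase_correlation_shift(img, shifted))            # approximately [ 5. -3.]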

Single- vs multi-modality methods:


Another classification can be made between single-modality and multi-modality methods.
Single-modality methods tend to register images of the same modality acquired by the same
scanner/sensor type, while multi-modality registration methods tend to register images
acquired by different scanner/sensor types.

Multi-modality registration methods are often used in medical imaging as images of a subject are
frequently obtained from different scanners. Examples include registration of
brain CT/MRI images or whole body PET/CT images for tumor localization, registration of
contrast-enhanced CT images against non-contrast-enhanced CT images for segmentation of
specific parts of the anatomy, and registration of ultrasound and CT images
for prostate localization in radiotherapy.

Automatic vs interactive methods:


Registration methods may be classified based on the level of automation they provide. Manual,
interactive, semi-automatic, and automatic methods have been developed. Manual methods
provide tools to align the images manually. Interactive methods reduce user bias by performing
certain key operations automatically while still relying on the user to guide the registration.
Semi-automatic methods perform more of the registration steps automatically but depend on the
user to verify the correctness of a registration. Automatic methods do not allow any user
interaction and perform all registration steps automatically.

Similarity measures for image registration:


Image similarities are broadly used in medical imaging. An image similarity measure quantifies
the degree of similarity between intensity patterns in two images.[3] The choice of an image
similarity measure depends on the modality of the images to be registered. Common examples of
image similarity measures include cross-correlation, mutual information, sum of squared
intensity differences, and ratio image uniformity. Mutual information and normalized mutual
information are the most popular image similarity measures for registration of multimodality
images. Cross-correlation, sum of squared intensity differences and ratio image uniformity are
commonly used for registration of images in the same modality.
Many new cost functions based on matching methods via large deformations have emerged in
the field of Computational Anatomy, including measure matching (point sets or landmarks
without correspondence), curve matching and surface matching via mathematical currents and
varifolds.
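
As an informal illustration of these measures, the following Python sketch (NumPy only; variable names are hypothetical) computes the sum of squared differences, the normalized cross-correlation and a simple histogram-based mutual information estimate for two single-band images of the same size.

import numpy as np

def ssd(a, b):
    # Sum of squared intensity differences (smaller = more similar)
    return np.sum((a.astype(float) - b.astype(float)) ** 2)

def ncc(a, b):
    # Normalized cross-correlation (1.0 = identical up to gain and offset)
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return np.sum(a * b) / (np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)) + 1e-12)

def mutual_information(a, b, bins=64):
    # Histogram-based mutual information estimate (larger = more similar)
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (100, 100))
print(ssd(ref, ref), ncc(ref, ref), mutual_information(ref, ref))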

Intensity-Based Automatic Image Registration:


Intensity-Based Automatic Image Registration maps pixels in each image based on relative
intensity patterns. You can register both mono-modal and multimodal image pairs, and you can
register 2-D and 3-D images. This approach is useful for:
 Registering a large collection of images
 Automated registration

To register images using an intensity-based technique, use an intensity-based registration
function (such as imregister in the MATLAB Image Processing Toolbox) and specify the type of
geometric transformation to apply to the moving image. The function iteratively adjusts the
transformation to optimize the similarity of the two images. Alternatively, you can estimate a
localized displacement field and apply a non-rigid transformation to the moving image.

Control Point Registration:


Control Point Registration enables you to select common features in each image manually.
Control point registration is useful when:

 You want to prioritize the alignment of specific features, rather than the entire set of features
detected using automated feature detection. For example, when registering two images, you
can focus the alignment on desired anatomical features and disregard matched features that
correspond to less informative anatomical structures.
 Images have repeated patterns that provide an unclear mapping using automated feature
matching. For example, photographs of buildings with many windows, or aerial photographs
of gridded city streets, have many similar features that are challenging to map automatically.
In this case, manual selection of control point pairs can provide a clearer mapping of
features, and thus a better transformation to align the feature points.

Control point registration can apply many types of transformations to the moving image. Global
transformations, which act on the entire image uniformly, include affine, projective, and
polynomial geometric transformations. Non-rigid transformations, which act on local regions,
include piecewise linear and local weighted mean transformations. Use the Control Point
Selection Tool to select control points. Start the tool with cpselect. An illustration of control
point registration of an image is given in figure 13.1.
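
To show how matched control points determine a global transformation, the sketch below (Python/NumPy; the coordinates are hypothetical) estimates the six parameters of an affine transform from three or more control point pairs by linear least squares.

import numpy as np

def fit_affine(moving_pts, fixed_pts):
    # Least-squares affine transform mapping moving_pts -> fixed_pts.
    # Each input is an (N, 2) array of (x, y) control point coordinates, N >= 3.
    moving_pts = np.asarray(moving_pts, dtype=float)
    fixed_pts = np.asarray(fixed_pts, dtype=float)
    ones = np.ones((moving_pts.shape[0], 1))
    A = np.hstack([moving_pts, ones])                    # rows of the form [x, y, 1]
    P, *_ = np.linalg.lstsq(A, fixed_pts, rcond=None)    # solve A @ P = fixed_pts
    return P                                             # 3 x 2 parameter matrix

# Hypothetical control point pairs picked in the moving and fixed images
moving = [(10, 12), (200, 15), (105, 180), (30, 160)]
fixed = [(13, 20), (205, 28), (112, 190), (35, 171)]
P = fit_affine(moving, fixed)
point = np.array([50.0, 60.0, 1.0])                      # a new point in the moving image
print(point @ P)                                         # its estimated location in the fixed image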

Control point selection procedure:


To specify control points in a pair of images interactively, use the Control Point Selection
Tool (cpselect). The tool displays the image you want to register,
called the moving image, next to the reference image, called the fixed image.
Specifying control points is a four-step process:
i) Start the tool, specifying the moving image and the fixed image.
ii) Use navigation aids to explore the image, looking for visual elements that you can identify in
both images. cpselect provides many ways to navigate around the image. You can pan and
zoom to view areas of the image in more detail.
iii) Specify matching control point pairs in the moving image and the fixed image.
iv) Save the control points in the workspace.

Automated Feature Detection and Matching:


Automated Feature Detection and Extraction (Computer Vision Toolbox) detects features such
as corners and blobs, matches corresponding features in the moving and fixed images, and
estimates a geometric transform to align the matched features. For an example, see Find Image
Rotation and Scale Using Automated Feature Matching. You must have Computer Vision
Toolbox to use this method.


Uncertainty:
There is a level of uncertainty associated with registering images that have any spatio-temporal
differences. A confident registration with a measure of uncertainty is critical for many change
detection applications such as medical diagnostics.

In remote sensing applications where a digital image pixel may represent several kilometers of
spatial distance (such as NASA's LANDSAT imagery), an uncertain image registration can mean
that a solution could be several kilometers from ground truth. Several notable papers have
attempted to quantify uncertainty in image registration in order to compare results. However,
many approaches to quantifying uncertainty or estimating deformations are computationally
intensive or are only applicable to limited sets of spatial features.

Image Registration-Based Methods for Correcting Distortions:


Image registration aims to bring two images in register (i.e. to align them). To do this one needs
a transformation model (that specifies the allowed spatial transformations) and an interpolation
model that allows calculation of what the image intensity “should have been” at locations
between the original voxel centers. A very simple example would be a transformation model
where we only allow an image to move (translate) in the x-direction. That model, together with
some interpolation, allows us to specify a function F=f(Δx;I) where I is the original image, Δx is
the amount of x-translation and F is the image after it has been translated Δx mm. Let us now say
we have two images I and J and we want to find out how much the “subject” moved between the
two acquisitions. We can then use f to calculate F=f(Δx;I) and compare F and J to see how
“similar” they are. We do this for many different values of Δx until we find that which
maximizes the "similarity" and that value of Δx is said to be the "true movement." As the
transform is extended beyond a single translation, it rapidly becomes impractical to perform an
exhaustive search of all possible parameter combinations, and the task of finding the best
parameters is a nonlinear optimization problem (Press et al., 1992).

IMAGE ENHANCEMENT TECHNIQUES:


Enhancements are used to make it easier for visual interpretation and understanding of imagery.
The advantage of digital imagery is that it allows us to manipulate the digital pixel values in an
image. Although radiometric corrections for illumination, atmospheric influences, and sensor
characteristics may be done prior to distribution of data to the user, the image may still not be
optimized for visual interpretation. Remote sensing devices, particularly those operated from
satellite platforms, must be designed to cope with levels of target/background energy which are
typical of all conditions likely to be encountered in routine use. With large variations in spectral
response from a diverse range of targets (e.g. forest, deserts, snowfields, water, etc.) no generic
radiometric correction could optimally account for and display the optimum brightness range and
contrast for all targets. Thus, for each application and each image, a custom adjustment of the
range and distribution of brightness values is usually necessary.


Figure 13.1: Graphic illustration for control point Registration of an image

In raw imagery, the useful data often populates only a small portion of the available range of
digital values (commonly 8 bits or 256 levels). Contrast enhancement involves changing the
original values so that more of the available range is used, thereby increasing the contrast
between targets and their backgrounds. As shown in figure 13.2, the original brightness values
lie between 84 and 153, and are stretched to the range 0 to 255.

Figure 13.2: Changing the original digital values in the range 84 to 153 to the new range 0 to
255 to enhance the image

By manipulating the range of digital values in an image, graphically represented by its
histogram, we can apply various enhancements to the data. There are many different techniques
and methods of enhancing contrast and detail in an image; we will cover only a few common
ones here. The simplest type of enhancement is a linear contrast stretch. This involves
identifying lower and upper bounds from the histogram (usually the minimum and maximum
brightness values in the image) and applying a transformation to stretch this range to fill the full
range. In our example, the minimum value (occupied by actual data) in the histogram is 84 and
the maximum value is 153. These 70 levels occupy less than one-third of the full 256 levels
available. A linear stretch uniformly expands this small range to cover the full range of values
from 0 to 255. This enhances the contrast in the image with light toned areas appearing lighter
and dark areas appearing darker, making visual interpretation much easier. This graphic
illustrates the increase in contrast in an image before (left) and after (right) a linear contrast
stretch.
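
A minimal Python sketch of this linear stretch (NumPy; assuming an 8-bit single-band array) maps the occupied range 84-153 onto the full 0-255 range:

import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    # Linearly stretch a single band to the range [out_min, out_max]
    band = band.astype(float)
    in_min, in_max = band.min(), band.max()        # e.g. 84 and 153
    stretched = (band - in_min) / (in_max - in_min) * (out_max - out_min) + out_min
    return stretched.clip(out_min, out_max).astype(np.uint8)

rng = np.random.default_rng(2)
band = rng.integers(84, 154, (100, 100)).astype(np.uint8)       # values span only 84..153
print(band.min(), band.max())                                   # 84 153
print(linear_stretch(band).min(), linear_stretch(band).max())   # 0 255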

The objective of the second group of image processing functions grouped under the term
of image enhancement is solely to improve the appearance of the imagery to assist in visual
interpretation and analysis. These procedures are applied to image data in order to effectively
display the data for subsequent visual interpretation. It involves techniques for increasing the
visual distinction between features in a scene. The objective is to create new images from
original data in order to increase the amount of information that can be visually interpreted from
the data.

Image enhancement techniques improve the quality of an image as perceived by a human. These
techniques are most useful because many satellite images when examined on a colour display
give inadequate information for image interpretation. There is no conscious effort to improve the
fidelity of the image with regard to some ideal form of the image. Image enhancement is
attempted after the image is corrected for geometric and radiometric distortions. Image
enhancement methods are applied separately to each band of a multi-spectral image. Digital
techniques have been found to be more satisfactory than photographic techniques for image
enhancement, because of the precision and wide variety of digital processes.

There exists a wide variety of techniques for improving image quality. The following are the
most commonly used image enhancement techniques:

 Contrast and contrast enhancement (contrast stretch)
 Density slicing
 Edge enhancement
 Spatial filtering

Contrast:
Contrast generally refers to the difference in luminance or grey level values in an image and is an
important characteristic. It can be defined as the ratio of the maximum intensity to the minimum
intensity over an image.
C = Imax / Imin (1)
The contrast ratio has a strong bearing on the resolving power and detectability of an image. The
larger this ratio, the easier it is to interpret the image.


Reasons for Low Contrast of Image Data:


Most of the satellite images lack adequate contrast and require contrast improvement. Low
contrast may result from the following causes. The individual objects and background that make
up the terrain may have a nearly uniform electromagnetic response at the wavelength band of
energy that is recorded by the remote sensing system. In other words, the scene itself has a low
contrast ratio. Images with a low contrast ratio are commonly referred to as 'washed out', with
nearly uniform tones of grey.
Scattering of electromagnetic energy by the atmosphere can reduce the contrast of a scene. This
effect is most pronounced in the shorter wavelength portions. The remote sensing system may
lack sufficient sensitivity to detect and record the contrast of the terrain. Also, incorrect
recording techniques can result in low contrast imagery although the scene has a high-contrast
ratio.
Detectors on the satellite are designed to record a wide range of scene brightness values without
getting saturated. They must encompass a range of brightness from black basalt outcrops to
white sea ice. However, only a few individual scenes have a brightness range that utilizes the
full sensitivity range of remote sensor detectors. The limited range of brightness values in most
scenes does not provide adequate contrast for detecting image features. Saturation may also
occur when the sensitivity range of a detector is insufficient to record the full brightness range of
a scene. In the case of saturation, the light and dark extremes of brightness on a scene appear as
saturated white or black tones on the image.

Contrast Enhancement:
Contrast enhancement techniques expand the range of brightness values in an image so that the
image can be efficiently displayed in a manner desired by the analyst (Figure 13.3). The density
values in a scene are literally pulled farther apart, that is, expanded over a greater range. The
effect is to increase the visual contrast between two areas of different uniform densities. This
enables the analyst to discriminate easily between areas initially having a small difference in
density. Contrast enhancement can be effected by a linear or non-linear transformation.

Figure 13.3: Contrast enhancement


Linear Contrast Stretch:


The grey values in the original image and the modified image follow a linear relation in this
algorithm. A density number in the low range of the original histogram is assigned to extremely
black, and a value at the high end is assigned to extremely white. The remaining pixel values are
distributed linearly between these extremes. The features or details that were obscure on the
original image will be clear in the contrast stretched image (Figure 13.4).

In exchange for the greatly enhanced contrast of most original brightness values, there is a trade
off in the loss of contrast at the extreme high and low density number values. However, when
compared to the overall contrast improvement, the contrast losses at the brightness extremes are
an acceptable trade-off, unless one is specifically interested in these elements of the scene.
The equation y = ax + b performs the linear transformation in a linear contrast stretch method.
The values of 'a' and 'b' are computed from the minimum and maximum brightness values of the
input image and the limits of the desired output range (for example, a = 255/(max − min) and
b = −a × min for an output range of 0 to 255).

Figure 13.4. Linear enhancement

Non-Linear Contrast Enhancement:


In these methods, the input and output data values follow a non-linear transformation. The
general form of the non-linear contrast enhancement is defined by y = f (x), where x is the input
data value and y is the output data value. The non-linear contrast enhancement techniques have
been found to be useful for enhancing the colour contrast between nearby classes and
subclasses of a main class. The type of application restricts the use of non-linear contrast
enhancement. Good judgment by the analyst and several iterations through the computer are
usually required to produce the desired results. A type of non-linear contrast stretch involves
scaling the input data logarithmically. This enhancement has greatest impact on the brightness
values found in the darker part of histogram. It could be reversed to enhance values in brighter
part of histogram by scaling the input data using an inverse log function.

Histogram Equalization:
This is another non-linear contrast enhancement technique. In this technique, histogram of the
original image is redistributed to produce a uniform population density. This is obtained by
grouping certain adjacent grey values. Thus the number of grey levels in the enhanced image is
less than the number of grey levels in the original image. The redistribution of the histogram
results in greatest contrast being applied to the most populated range of brightness values in the
original image. In this process the light and dark tails of the original histogram are compressed,
thereby resulting in some loss of detail in those regions. This method gives large improvement in
image quality when the histogram is highly peaked (Figure 13.5).
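
The redistribution can be sketched in a few lines of Python (NumPy): the cumulative histogram of the input band is used as a lookup table that maps the original grey levels to equalized ones (a simplified version of what image processing packages do internally).

import numpy as np

def histogram_equalize(band, levels=256):
    # Equalize an 8-bit band using its cumulative distribution function
    hist, _ = np.histogram(band.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())       # normalize to 0..1
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)     # lookup table
    return lut[band]

rng = np.random.default_rng(3)
# A highly peaked histogram benefits most from equalization
band = np.clip(rng.normal(120, 10, (200, 200)), 0, 255).astype(np.uint8)
print(band.std(), histogram_equalize(band).std())           # contrast is increased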

Gaussian Stretch:
Most of the contrast enhancement algorithms result in loss of detail in the dark and light regions
in the image. Gaussian stretch technique enhances the contrast in the tails of the histogram, at the
expense of contrast in the middle grey range. When an analyst is interested in the details of
the dark and bright regions, the Gaussian stretch algorithm can be applied. This algorithm fits the
original histogram to a normal distribution curve between the 0 and 255 limits (Figure 13.6).

Figure 13.5. Histogram equalisation

Figure 13.6. Gaussian stretch


Simple linear stretching would only increase contrast in the centre of the distribution, and would
force the high and low peaks further towards saturation. With any type of contrast enhancement,
the relative tone of different materials is modified. Simple linear stretching has the least effect
on relative tones, and brightness differences can still be related to the differences in reflectivity.
In other cases, the relative tone can no longer be meaningfully related to the reflectance of
materials. An analyst must therefore be fully cognizant of the processing techniques that have
been applied to the data.

Density Slicing:
Digital images have high radiometric resolution. Images in some wavelength bands contain 256
distinct grey levels. But, a human interpreter can reliably detect and consistently differentiate
between only 15 and 25 shades of grey. However, the human eye is more sensitive to colour than to
the different shades between black and white. Density slicing is a technique that converts the
continuous grey tone of an image into a series of density intervals, or slices, each corresponding
to a specified digital range. Each slice is displayed in a separate colour, line printer symbol or
bounded by contour lines. This technique is applied on each band separately and emphasizes
subtle grey scale differences that are imperceptible to the viewer.
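
A density slice can be produced by binning the grey levels into a handful of intervals and assigning each interval an index, which a display system would then map to a colour. A minimal Python sketch with hypothetical slice boundaries:

import numpy as np

def density_slice(band, boundaries=(0, 50, 100, 150, 200, 256)):
    # Convert continuous grey tones into discrete slices numbered 0, 1, 2, ...
    # Each slice would normally be displayed in a separate colour.
    return np.digitize(band, bins=boundaries[1:-1])

rng = np.random.default_rng(4)
band = rng.integers(0, 256, (50, 50)).astype(np.uint8)
print(np.unique(density_slice(band)))              # [0 1 2 3 4] -> five density slices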

Image transformations:
Image transformations are operations similar in concept to those for image enhancement.
However, unlike image enhancement operations which are normally applied only to a single
channel of data at a time, image transformations usually involve combined processing of data
from multiple spectral bands. Arithmetic operations (i.e. subtraction, addition, multiplication,
division) are performed to combine and transform the original bands into "new" images which
better display or highlight certain features in the scene. You will study many of these operations
in the next unit, including various methods of spectral or band ratioing, and a procedure
called principal components analysis which is used to more efficiently represent the information
in multichannel imagery.

13.4 SUMMARY
As a subfield of digital signal processing, digital image processing has many advantages
over analogue image processing. It allows a much wider range of algorithms to be applied to the
input data — the aim of digital image processing is to improve the image data (features) by
suppressing unwanted distortions and/or enhancement of some important image features so that
our AI-Computer Vision models can benefit from this improved data to work on.
Digital image processing of raw data needs preprocessing to correct for any distortion due to the
characteristics of the imaging system and imaging conditions. These procedures
include radiometric correction to correct for uneven sensor response over the whole image
and geometric correction to correct for geometric distortion due to Earth's rotation and other
imaging conditions (such as oblique viewing). The image may also be transformed to conform to
a specific map projection system. Furthermore, if accurate geographical location of an area on
the image needs to be known, ground control points (GCPs) are used to register the image to a
precise map (geo-referencing).


Remote sensing images require image processing tools to register different images and bring them
into alignment. The tools provide support for point mapping to determine the parameters of the
transformation required to bring an image into alignment with another image. In point mapping,
points that identify the same feature or landmark are picked in a pair of images.
A geometric mapping is inferred from the positions of these control points. The image
registration categories described in this unit include i) Intensity-based vs feature-based ii)
Transformation models iii)Transformations of coordinates iv) Spatial vs frequency domain
methods v) Single- vs multi-modality methods vi) Automatic vs interactive methods vii)
Similarity measures for image registration viii) Intensity-Based Automatic Image Registration
ix) Control Point Registration and x) Automated Feature Detection and Matching.

Image enhancement techniques are most useful because many satellite images when examined
on a colour display give inadequate information for image interpretation. There is no conscious
effort to improve the fidelity of the image with regard to some ideal form of the image. Image
enhancement is attempted after the image is corrected for geometric and radiometric distortions.
Image enhancement methods are applied separately to each band of a multi-spectral image.
There exists a wide variety of techniques for improving image quality. The most commonly used
image enhancement techniques are i) contrast and contrast enhancement (contrast stretch),
ii) density slicing, iii) edge enhancement and iv) spatial filtering.

13.5 GLOSSARY
RANSAC - When the number of control points exceeds the minimum required to define the
appropriate transformation model, iterative algorithms like RANSAC can be used to robustly
estimate the parameters of a particular transformation type (e.g. affine) for registration of the
images.

LANDSAT- This term was used for Earth Resources Technology Satellite (ERTS) by NASA
(National Aeronautics and Space Administration), USA.

13.6 ANSWER TO CHECK YOUR PROGRESS


1. Explain Preprocessing techniques, image registration and digital image enhancement
Techniques.
2. Explain Different methods of Image enhancement.

13.7 REFERENCES
1. https://sisu.ut.ee/imageprocessing/book/5
2. https://en.wikipedia.org/wiki/Image_registration
3. https://link.springer.com/chapter/10.1007/978-1-4899-3216-7_4


4. W.H. Press, S.A. Teukolsky, W.T. Vetterling & B.P. Flannery (1992): Numerical recipes in
C: The art of scientific computing. Second Edition. Cambridge University Press.
5. Barbara Zitová and Jan Flusser (2003). Image registration methods: a survey. Image and
Vision Computing, 21(11), 977–1000.

13.8 TERMINAL QUESTIONS


1) Define pre-processing technique and explain the steps taken under this technique.
2) What is image registration? Describe Intensity-based vs feature-based image registration.
3) Differentiate spatial vs frequency domain and single- vs multi-modality methods of image
registration
4) Describe control point image registration
5) List out the digital image enhancement techniques and draw a Graphical illustration for
control point registration of an image
6) What are the steps to improve digital image quality? Explain each of them.


UNIT 14 - SPATIAL FILTERING TECHNIQUES & IMAGE TRANSFORMATION

14.1 OBJECTIVES
14.2 INTRODUCTION
14.3 SPATIAL FILTERING TECHNIQUES & IMAGE
TRANSFORMATION
14.4 SUMMARY
14.5 GLOSSARY
14.6 ANSWER TO CHECK YOUR PROGRESS
14.7 REFERENCES
14.8 TERMINAL QUESTIONS


14.1 OBJECTIVES
The role of this chapter is to present spatial filtering and image transformation techniques of
value in the enhancement of remote sensing imagery. The spatial and frequency filtering
techniques are explained with respect to their types, methods and characteristics. The image
transformation techniques specifically include the principal components transformation,
creation of ratio images and the specialized transformation, such as the Kauth-Thomas tasseled
cap transform.
After reading this unit, the learner will be able to understand:
 Characteristics of Filter, Spatial filters.
 Types and uses of spatial filtering.
 Image transformation Types, characteristics and uses.

14.2 INTRODUCTION
In the previous unit you have learnt about preprocessing, image registration and image
enhancement techniques. This unit is also in continuation of image enhancement under different
filtering techniques and image transformation. The Filter tool can be used to either eliminate
spurious data or enhance features otherwise not visibly apparent in the data. Filters essentially
create output values by a moving, overlapping 3x3 cell neighborhood window that scans through
the input raster. As the filter passes over each input cell, the value of that cell and its 8 immediate
neighbors are used to calculate the output value. Spatial filtering technique increases the analyst's
ability to discriminate detail.

An image 'enhancement' is basically anything that makes it easier or better to visually interpret
an image. In some cases, like 'low-pass filtering', the enhanced image can actually look worse
than the original, but such an enhancement was likely performed to help the interpreter see low
spatial frequency features among the usual high frequency clutter found in an image. Also, an
enhancement is performed for a specific application. This enhancement may be inappropriate for
another purpose, which would demand a different type of enhancement.

Spatial filtering encompasses another set of digital processing functions which are used to
enhance the appearance of an image. Spatial filters are designed to highlight or suppress specific
features in an image based on their spatial frequency. Spatial frequency is related to the concept
of image texture which refers to the frequency of the variations in tone that appear in an image.
"Rough" textured areas of an image, where the changes in tone are abrupt over a small area, have
high spatial frequencies, while "smooth" areas with little variation in tone over several pixels,
have low spatial frequencies.

Spatial filtering includes the edge enhancement techniques for sharpening the edges of different
feature/cover types seen on remote sensing digital data. Here the filter works by identifying
sharp edge boundaries in the image, such as the edge between a subject and a background of a
contrasting color. This has the effect of creating bright and dark highlights on either side of any
edges in the image, called overshoot and undershoot, leading the edge to look more defined
when viewed.


Two-dimensional image transforms are an extremely important area of study in image
processing. The image output in the transformed space may be analyzed, interpreted and further
processed for implementing diverse image processing tasks. These transformations are widely
used, since by using these transformations, it is possible to express an image as a combination of
a set of basic signals, known as the basis functions.

The multispectral or vector character of most remote sensing image data renders it amenable to
spectral transformations that generate new sets of image components or bands. These
components then represent an alternative description of the data, in which the new components
of a pixel vector are related to its old brightness values in the original set of spectral bands via
a linear operation. The transformed image may make evident features not discernable in the
original data or alternatively it might be possible to preserve the essential information content
of the image (for a given application) with a reduced number of the transformed dimensions.
The last point has significance for displaying data in the three dimensions available on a colour
monitor or in colour hardcopy, and for transmission and storage of data.

14.3 SPATIAL FILTERING TECHNIQUES & IMAGE TRANSFORMATION

DEFINITIONS:

Filter:
 In general terms a filter is a porous article or mass through which a gas or liquid is passed
to separate out matter in suspension.
 A colour filter is a transparent material (such as colored glass) that absorbs light of certain
wavelengths or colors selectively and is used for modifying light that reaches a sensitized
photographic material.
 We may also define filter as software for sorting or blocking access to certain online
material.
 In image processing filters are mainly used to suppress either the high frequencies in the
image, i.e. smoothing the image, or the low frequencies, i.e. enhancing or detecting edges in
the image.

Filtering:
 Filtering is a technique for modifying or enhancing an image.
 Filtering is a neighborhood operation, in which the value of any given pixel in the output
image is determined by applying some algorithm to the values of the pixels in the
neighborhood of the corresponding input pixel.

Spatial Filtering:
 Spatial filtering is the process of dividing the image into its constituent spatial frequencies,
and selectively altering certain spatial frequencies to emphasize some image features.


Linear Filtering:
 Linear filtering is filtering in which the value of an output pixel is a linear combination of the
values of the pixels in the input pixel's neighborhood.

Convolution:
 Linear filtering of an image is accomplished through an operation called convolution.
 Convolution is a neighborhood operation in which each output pixel is the weighted sum of
neighboring input pixels.

Edge Enhancement:
 Edge Enhancement is an image processing filter that enhances the edge contrast of an
image or video in an attempt to improve its acutance (apparent sharpness). The edge
enhancement feature is much like the "sharpness" control found on many displays.

Image transformation:
 Image transformation is a function or operator that takes an image as its input and produces
an image as its output.
 Image transformations typically involve the manipulation of multiple bands of data, whether
from a single multispectral image or from two or more images of the same area acquired at
different times (i.e. multitemporal image data).
 Image transformations generate "new" images from two or more sources which highlight
particular features or properties of interest, better than the original input images.

CHARACTERISTICS OF FILTER, SPATIAL FILTERS:

A common filtering procedure involves moving a 'window' of a few pixels in dimension (e.g.
3x3, 5x5, etc.) over each pixel in the image (Figure 14.1), applying a mathematical calculation
using the pixel values under that window, and replacing the central pixel with the new value. The
window is moved along in both the row and column dimensions one pixel at a time and the
calculation is repeated until the entire image has been filtered and a "new" image has been
generated. By varying the calculation performed and the weightings of the individual pixels in
the filter window, filters can be designed to enhance or suppress different types of features.

Figure 14.1 Filtering by 3x3 pixels window


An image can be filtered either in the frequency or in the spatial domain.


Filtering in Frequency Domain:


Filtering in Frequency Domain involves transforming the image into the frequency domain,
multiplying it with the frequency filter function and re-transforming the result into the spatial
domain. The filter function is shaped so as to attenuate some frequencies and enhance others. For
example, a simple low pass filter function is 1 for frequencies smaller than the cut-off
frequency and 0 for all others.
The corresponding process in the spatial domain is to convolve the input image f(i,j) with the
filter function h(i,j). This can be written as

g(i,j) = h(i,j) * f(i,j)

The mathematical operation is identical to the multiplication in the frequency space, but the
results of the digital implementations vary, since we have to approximate the filter function with
a discrete and finite kernel.
The discrete convolution can be defined as a `shift and multiply' operation, where we shift the
kernel over the image and multiply its value with the corresponding pixel values of the image.
For a square kernel with size M × M, we can calculate the output image with the following
formula:

g(i,j) = sum over m and n of h(m,n) × f(i − m, j − n), where the sums run over the M × M kernel elements

Various standard kernels exist for specific applications, where the size and the form of the kernel
determine the characteristics of the operation. The most important of them are discussed in this
chapter. The kernels for two examples, the mean and the Laplacian operator, can be seen in
Figure 14.2.

Figure 14.2 Convolution kernel for a mean filter and one form of the discrete Laplacian.

In contrast to the frequency domain, it is possible to implement non-linear filters in the spatial
domain. In this case, the summations in the convolution function are replaced with some kind of
non-linear operator.

For most non-linear filters the elements of h(i,j) are all 1. A commonly used non-linear operator
is the median, which returns the `middle' of the input values.

Filtering in the Spatial Domain:


An image is filtered to emphasize certain features or remove other features. Image processing
operations implemented with filtering include smoothing, sharpening, and edge enhancement.


Filtering in the spatial domain refers to a neighborhood operation, in which the value of any given
pixel in the output image is determined by applying some algorithm to the values of the pixels in
the neighborhood of the corresponding input pixel. A pixel's neighborhood is some set of pixels,
defined by their locations relative to that pixel.

Spatial filters generally serve two purposes when applied to remotely sensed data: i) enhance
imagery or ii) restore imagery. When it comes to enhancing imagery, spatial filters can help
uncover patterns and processes. Spatial filters are useful for both manual image interpretation
and automated feature extraction. Spatial filters can also help to restore imagery that has either
gaps or artifacts.

A characteristic of remotely sensed images is a parameter called spatial frequency, defined as the
number of changes in brightness value per unit distance for any particular part of an image. If
there are very few changes in brightness value over a given area in an image, this is referred to as
a low frequency area. Conversely, if the brightness value changes dramatically over short
distances, this is an area of high frequency.

In ArcGIS Pro the most efficient way to apply spatial filters is to use the Raster Analysis
Function - Convolution. From the Analysis menu i) select Raster Functions, ii) choose
the Convolution Raster Function. Within the Raster Functions pane iii) select the input raster, iv)
specify the type, and optionally v) modify the kernel. When you finish adjusting the parameters,
vi) click on create new layer.

Spatial Convolution Filtering:


A linear spatial filter is a filter for which the brightness value (BVi,j) at location i, j in the output
image is a function of some weighted average (linear combination) of brightness values located
in a particular spatial pattern around the i, j location in the input image. This process of
evaluating the weighted neighboring pixel values is called two-dimensional convolution filtering.
The procedure is often used to alter the spatial frequency characteristics of an image. For example, a
image. A linear spatial filter that emphasizes low spatial frequencies may be used to reduce noise
within an image.

Linear Filtering and Convolution:


Linear filtering of an image is accomplished through an operation called convolution.
Convolution is a neighborhood operation in which each output pixel is the weighted sum of
neighboring input pixels. The matrix of weights is called the convolution kernel, also known as
the filter. A convolution kernel is a correlation kernel that has been rotated 180 degrees.
For example, suppose the image is-
A = [17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9]
Figure 14.3 Digital values (numbers) of an image being filtered


and the correlation kernel is-


h = [8 1 6
3 5 7
4 9 2]
You would use the following steps to compute the output pixel at position (2,4):
1. Rotate the correlation kernel 180 degrees about its center element to create a convolution
kernel.
2. Slide the center element of the convolution kernel so that it lies on top of the (2,4) element
of A.
3. Multiply each weight in the rotated convolution kernel by the pixel of A underneath.
4. Sum the individual products from step 3.
Hence the (2,4) output pixel is obtained as shown in the following figure.

Computing the (2,4) Output of Convolution
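
Working through these steps numerically (a small Python/SciPy sketch, not part of the original figure) confirms the procedure: rotating the kernel 180 degrees and taking the weighted sum of the 3x3 neighbourhood centred on the (2,4) element of A gives 575.

import numpy as np
from scipy.signal import convolve2d

A = np.array([[17, 24, 1, 8, 15],
              [23, 5, 7, 14, 16],
              [4, 6, 13, 20, 22],
              [10, 12, 19, 21, 3],
              [11, 18, 25, 2, 9]])

h = np.array([[8, 1, 6],
              [3, 5, 7],
              [4, 9, 2]])         # correlation kernel

# Convolution rotates the kernel by 180 degrees before the weighted sum
out = convolve2d(A, h, mode='same')
print(out[1, 3])                  # row 2, column 4 in 1-based indexing -> 575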

High Frequency Filtering in Spatial Domain:


High-pass filtering is applied to imagery to remove the slowly varying components and enhance
the high-frequency local variations. One high-frequency filter output, HFF5,out, is computed by
subtracting the output of the low-frequency filter, LFF5,out, from twice the value of the
original central pixel value, BV5:
HFF5,out = (2 × BV5) − LFF5,out
Brightness values tend to be highly correlated in a nine-element window. Thus, the high-
frequency filtered image will have a relatively narrow intensity histogram. This suggests that the
output from most high-frequency filtered images must be contrast stretched prior to visual
analysis.

Correlation:
The operation called correlation is closely related to convolution. In correlation, the value of an
output pixel is also computed as a weighted sum of neighboring pixels. The difference is that the
matrix of weights, in this case called the correlation kernel, is not rotated during the
computation. The Image Processing Toolbox filter design functions return correlation kernels.
The following figure shows how to compute the output pixel of the correlation of A,
assuming h is a correlation kernel instead of a convolution kernel, using these steps:
1. Slide the center element of the correlation kernel so that lies on top of the (2, 4) element of A.
2. Multiply each weight in the correlation kernel by the pixel of A underneath.
3. Sum the individual products.
The (2, 4) output pixel from the correlation is obtained as shown in the following figure.

Computing the (2, 4) Output of Correlation
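
The correlation result can be checked the same way (SciPy sketch, repeating the arrays so the example is self-contained); without the 180-degree rotation the (2,4) output works out to 585.

import numpy as np
from scipy.signal import correlate2d

A = np.array([[17, 24, 1, 8, 15],
              [23, 5, 7, 14, 16],
              [4, 6, 13, 20, 22],
              [10, 12, 19, 21, 3],
              [11, 18, 25, 2, 9]])

h = np.array([[8, 1, 6],
              [3, 5, 7],
              [4, 9, 2]])

# Correlation slides the kernel over the image without rotating it
out = correlate2d(A, h, mode='same')
print(out[1, 3])                  # row 2, column 4 in 1-based indexing -> 585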

TYPES AND USES OF SPATIAL FILTERING:

The three types of spatial filters used in remotely sensed data processing are: Low pass filters,
Band pass filters and High pass filters.

Low Pass Filter:


The filter type LOW employs a low pass, or averaging, filter over the input raster and essentially
smooths the data. A low pass filter smooths the data by reducing local variation and removing
noise. It calculates the average (mean) value for each 3 x 3 neighborhood. It is essentially
equivalent to the Focal Statistics tool with the Mean statistic option. The effect is that the high
and low values within each neighborhood will be averaged out, reducing the extreme values in
the data. Following is an example of the input neighborhood values for one processing cell, the
center cell with the value 8.

7 5 2

4 8 3

3 1 5

The calculation for the processing cell (the center input cell with the value 8) is to find the
average of the input cells. This is the sum of all the values in the input contained by the
neighborhood, divided by the number of cells in the neighborhood (3 x 3 = 9).
Value = ((7 + 5 + 2) + (4 + 8 + 3) + (3 + 1 + 5)) / 9 = 38 / 9 = 4.222
The output value for the processing cell location will be 4.22.
Since the mean is being calculated from all the input values, the highest value in the list, which is
the value 8 of the processing cell, is averaged out.
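
The same averaging can be reproduced with a 3x3 mean filter in Python (SciPy sketch, using the neighbourhood values from the example above):

import numpy as np
from scipy.ndimage import uniform_filter

neighborhood = np.array([[7, 5, 2],
                         [4, 8, 3],
                         [3, 1, 5]], dtype=float)

# A 3x3 low pass (mean) filter; the centre output is the neighbourhood average
smoothed = uniform_filter(neighborhood, size=3, mode='reflect')
print(round(smoothed[1, 1], 3))          # 4.222, matching the hand calculation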


In the following example, the input raster has an anomalous data point caused by a data
collection error. The averaging characteristics of the LOW option have smoothed the anomalous
data point.

Figure 14.4 Example of Filter output with LOW option filter

A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce
the smaller detail in an image. Thus, low-pass filters generally serve to smooth the appearance of
an image. Average and median filters, often used for radar imagery (and described in Chapter 3),
are examples of low-pass filters. High-pass filters do the opposite and serve to sharpen the
appearance of fine detail in an image. One implementation of a high-pass filter first applies a
low-pass filter to an image and then subtracts the result from the original, leaving behind only
the high spatial frequency information. Directional, or edge detection filters are designed to
highlight linear features, such as roads or field boundaries. These filters can also be designed to
enhance features which are oriented in specific directions. These filters are useful in applications
such as geology, for the detection of linear geologic structures.

High Pass Filter:


The high pass filter accentuates the comparative difference between a cell's values and its
neighbors. It has the effect of highlighting boundaries between features (for example, where a
water body meets the forest), thus sharpening edges between objects. It is generally referred to as
an edge-enhancement filter.

With the HIGH option, the nine input z-values are weighted in such a way that removes low
frequency variations and highlights the boundary between different regions.

The 3 x 3 filter for the HIGH option is:


-0.7 -1.0 -0.7
-1.0 6.8 -1.0
-0.7 -1.0 -0.7
Note that the values in the kernel sum to 0, since they are normalized.
The High Pass filter is essentially equivalent to using the Focal Statistics tool with the Sum statistic
option, and a specific weighted kernel.


The output z-values are an indication of the smoothness of the surface, but they have no relation
to the original z-values. Z-values are distributed about zero with positive values on the upper side
of an edge and negative values on the lower side. Areas where the z-values are close to zero are
regions with nearly constant slope. Areas with values near z-min and z-max are regions where
the slope is changing rapidly.
Following is a simple example of the calculations for one processing cell (the center cell with the
value 8):
7 5 2
4 8 3
3 1 5
The calculation for the processing cell (the center cell with the value 8) is as follows:
Value = ((7*-0.7) + (5*-1.0) + (2*-0.7)) + ((4*-1.0) + (8*6.8) + (3*-1.0)) + ((3*-0.7) + (1*-1.0)
+ (5*-0.7)) = (-4.9 + -5.0 + -1.4) + (-4.0 + 54.4 + -3.0) + (-2.1 + -1.0 + -3.5) = -11.3 + 47.4
+ -6.6 = 29.5
The output value for the processing cell will be 29.5.
By giving negative weights to its neighbors, the filter accentuates the local detail by pulling out
the differences or the boundaries between objects.
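
The edge-enhancing weighted sum can be verified with a small convolution in Python (SciPy sketch, using the HIGH kernel and the 3x3 neighbourhood from the example):

import numpy as np
from scipy.ndimage import convolve

neighborhood = np.array([[7, 5, 2],
                         [4, 8, 3],
                         [3, 1, 5]], dtype=float)

high_pass_kernel = np.array([[-0.7, -1.0, -0.7],
                             [-1.0, 6.8, -1.0],
                             [-0.7, -1.0, -0.7]])

# The centre output is the weighted sum of the neighbourhood values
filtered = convolve(neighborhood, high_pass_kernel, mode='constant', cval=0.0)
print(round(filtered[1, 1], 1))          # 29.5, matching the hand calculation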
In an example below, the input raster has a sharp edge along the region where the values change
from 5.0 to 9.0. The edge enhancement characteristic of the HIGH option has detected the edge.

Figure 14.5 Example of Filter output with HIGH option filter (Edge Enhancement)

Processing cells of No Data:


The Ignore No Data in calculations option controls how No Data cells within the neighborhood
window are handled. When this option is checked (the DATA option), any cells in the
neighborhood that are No Data will be ignored in the calculation of the output cell value. When
unchecked (the NODATA option), if any cell in the neighborhood is No Data, the output cell
will be No Data.

If the processing cell itself is No Data, with the Ignore No Data option selected, the output value
for the cell will be calculated based on the other cells in the neighborhood that have a valid
value. Of course, if all of the cells in the neighborhood are No Data, the output will be No Data,
regardless of the setting for this parameter.

Edge Enhancement Filters:


Edge enhancement filters enhance the local discontinuities at the boundaries of different
objects (edges) in the image. An edge in a signal is normally defined as the transition in the
intensity or amplitude of that signal.

Most of the edge enhancement filters are thus based on first and second order derivatives and
different gradient filters are also common to use.

The edge enhancement filters are divided in the following groups:


 Gradient
 Laplacian

Gradient edge enhancement:


The gradient of an image I(x,y) is defined along two orthogonal directions. This operator is
approximated in the discrete case. The output of such filters consists of positive and negative
intensities and emphasizes the high frequency details of the image. When the sensitivity to noise
is too high, larger kernels should be considered to approximate the derivative operators.

Roberts edge enhancement kernel:


This kernel focuses on the diagonal pixel differentials, which emphasizes corners more clearly,
but can blur together small horizontal or vertical features.

Sobel edge enhancement kernel:


This kernel provides a more uniform edge enhancement, although it still gives increased weight to the
orthogonal pixels over the diagonal pixels.

Pixel Difference:

The pixel difference edge enhancement filter is very similar to the Roberts edge enhancement
filter and the output will be alike, but for opposite directions.

For many remote sensing Earth science applications, the most valuable information that may be
derived from an image is contained in the edges surrounding various objects of interest. Edge
enhancement delineates these edges and makes the shapes and details comprising the image
more conspicuous and perhaps easier to analyze. Generally, what the eyes see as pictorial edges
are simply sharp changes in brightness value between two adjacent pixels. The edges may be
enhanced using either linear or nonlinear edge enhancement techniques.

Laplacian filter:
The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The
Laplacian of an image highlights regions of rapid intensity change and is therefore often used
for edge detection. The Laplacian is often applied to an image that has first been smoothed with
something approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise,
and hence the two variants will be described together here. The operator normally takes a single
graylevel image as input and produces another graylevel image as output.

How It Works:
The Laplacian L(x,y) of an image with pixel intensity values I(x,y) is given by:

L(x,y) = ∂²I/∂x² + ∂²I/∂y²

This can be calculated using a convolution filter.

Since the input image is represented as a set of discrete pixels, we have to find a discrete
convolution kernel that can approximate the second derivatives in the definition of the Laplacian.
Two commonly used small kernels are shown in Figure 14.6.

Figure 14.6: Two commonly used discrete approximations to the Laplacian filter. (Note, we
have defined the Laplacian using a negative peak because this is more common; however, it
is equally valid to use the opposite sign convention.)

Linear Edge Enhancement:

A straightforward method of extracting edges in remotely sensed imagery is the application of a
directional first-difference algorithm, which approximates the first derivative between two adjacent
pixels. The algorithm produces the first difference of the image input in the horizontal, vertical,
and diagonal directions. The algorithms for enhancing horizontal, vertical, and diagonal edges
are shown, respectively, in table 14.1.

Table 14.1 Algorithms for enhancing horizontal, vertical, and diagonal edges

Vertical: BVi,j = BVi,j − BVi,j+1 + K
Horizontal: BVi,j = BVi,j − BVi−1,j + K
NE Diagonal: BVi,j = BVi,j − BVi+1,j+1 + K
SE Diagonal: BVi,j = BVi,j − BVi−1,j+1 + K

Non-linear Edge Enhancement:

Nonlinear edge enhancements are performed using nonlinear combinations of pixels. Many
algorithms are applied using either 2x2 or 3x3 kernels. The Sobel edge detector is based on the
notation of the 3x3 window previously described and is computed according to the relationship:
Sobel5,out = √(X² + Y²)
where
X = (BV3 + 2BV6 + BV9) − (BV1 + 2BV4 + BV7)
and
Y = (BV1 + 2BV2 + BV3) − (BV7 + 2BV8 + BV9)
The Sobel operator may also be computed by simultaneously applying the
following 3x3 templates across the image:
X = -1 0 1        Y =  1  2  1
    -2 0 2             0  0  0
    -1 0 1            -1 -2 -1
Likewise, there are the Roberts edge detector and Kirsch non-linear edge enhancement techniques.
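
A compact Python sketch of the Sobel operator (SciPy; the input scene is synthetic) applies the two 3x3 templates and combines them as the gradient magnitude √(X² + Y²):

import numpy as np
from scipy.ndimage import convolve

gx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)     # X template
gy = np.array([[1, 2, 1],
               [0, 0, 0],
               [-1, -2, -1]], dtype=float)   # Y template

def sobel_magnitude(band):
    # Non-linear edge enhancement: gradient magnitude from the Sobel templates
    x = convolve(band.astype(float), gx, mode='nearest')
    y = convolve(band.astype(float), gy, mode='nearest')
    return np.sqrt(x ** 2 + y ** 2)

# Synthetic scene with a vertical boundary between two uniform regions
band = np.zeros((50, 50))
band[:, 25:] = 100.0
edges = sobel_magnitude(band)
print(edges[:, 24:26].max() > 0, edges[:, :20].max() == 0)   # edge detected; flat areas stay zero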

Figure 14.7. Filtering of images

IMAGE TRANSFORMATION TYPES, CHARACTERISTICS AND USES:

In image transformation, depending on the transform chosen, the input and output images may
appear entirely different and have different interpretations. Fourier transforms, principal
component analysis (also called Karhunen-Loeve analysis), and various spatial filters, are
examples of frequently used image transformation procedures. In an image transformation of the form ImageTransformation[image, f],
the value of every pixel at position {x,y} in the output image is obtained from the
position f[{x,y}] in the input image. This is known as a backward transformation.

Image Subtraction:
Basic image transformations apply simple arithmetic operations to the image data. Image
subtraction is often used to identify changes that have occurred between images collected on
different dates. Typically, two images which have been geometrically registered are used with
the pixel (brightness) values in one image (1) being subtracted from the pixel values in the other
(2). Scaling the resultant image (3) by adding a constant (127 in this case) to the output values
will result in a suitable 'difference' image. In such an image, areas where there has been little or
no change (A) between the original images, will have resultant brightness values around 127
(mid-grey tones), while those areas where significant change has occurred (B) will have values
higher or lower than 127 - brighter or darker depending on the 'direction' of change in reflectance
between the two images . This type of image transform can be useful for mapping changes in
urban development around cities and for identifying areas where deforestation is occurring, as in
this example.
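
A minimal Python sketch of this differencing (NumPy; the two co-registered 8-bit bands are synthetic, and the halving step is only one convenient way of keeping the scaled result within 0-255):

import numpy as np

def change_image(date1_band, date2_band):
    # Difference two co-registered 8-bit bands and centre the result on 127.
    # Values near 127 indicate little change; much brighter or darker values
    # indicate an increase or decrease in reflectance between the dates.
    diff = date2_band.astype(int) - date1_band.astype(int)
    return np.clip(diff // 2 + 127, 0, 255).astype(np.uint8)

rng = np.random.default_rng(5)
before = rng.integers(0, 256, (100, 100)).astype(np.uint8)
after = before.copy()
before[40:60, 40:60] = 20
after[40:60, 40:60] = 200               # simulate a change, e.g. new built-up area
result = change_image(before, after)
print(result[10, 10], result[50, 50])   # 127 (no change), 217 (strong change)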

Image division or spectral ratioing:


Image division or spectral rationing is one of the most common transforms applied to image data.
Image rationing serves to highlight subtle variations in the spectral responses of various surface
covers. By rationing the data from two different spectral bands, the resultant image enhances
variations in the slopes of the spectral reflectance curves between the two different spectral
ranges that may otherwise be masked by the pixel brightness variations in each of the bands. The
following example illustrates the concept of spectral rationing. Healthy vegetation reflects
strongly in the near-infrared portion of the spectrum while absorbing strongly in the visible red.
Other surface types, such as soil and water, show near equal reflectances in both the near-
infrared and red portions. Thus, a ratio image of Landsat MSS Band 7 (Near-Infrared - 0.8 to 1.1
mm) divided by Band 5 (Red - 0.6 to 0.7 mm) would result in ratios much greater than 1.0 for
vegetation, and ratios around 1.0 for soil and water. Thus the discrimination of vegetation from
other surface cover types is significantly enhanced. Also, we may be better able to identify areas
of unhealthy or stressed vegetation, which show low near-infrared reflectance, as the ratios
would be lower than for healthy green vegetation.

Another benefit of spectral ratioing is that, because we are looking at relative values (i.e. ratios)
instead of absolute brightness values, variations in scene illumination as a result of topographic
effects are reduced. Thus, although the absolute reflectances for forest covered slopes may vary
depending on their orientation relative to the sun's illumination, the ratio of their reflectances
between the two bands should always be very similar. More complex ratios involving the sums
of and differences between spectral bands for various sensors have been developed for
monitoring vegetation conditions. One widely used image transform is the Normalized
Difference Vegetation Index (NDVI) which has been used to monitor vegetation conditions on
continental and global scales using the Advanced Very High Resolution Radiometer (AVHRR)
sensor onboard the NOAA series of satellites.
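As an illustration, a minimal NDVI computation for two co-registered bands might look like the following sketch (NumPy arrays are assumed; the function name is illustrative):

import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    # Values near +1 indicate dense healthy vegetation; values near zero or
    # below are typical of soil, water and built-up surfaces.
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        index = (nir - red) / denom
    return np.where(denom == 0, 0.0, index)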

Principal components analysis:


Different bands of multispectral data are often highly correlated and thus contain similar
information. For example, Landsat MSS Bands 4 and 5 (green and red, respectively) typically
have similar visual appearances since reflectances for the same surface cover types are almost
equal. Image transformation techniques based on complex processing of the statistical
characteristics of multi-band data sets can be used to reduce this data redundancy and correlation
between bands. One such transform is called principal components analysis. The objective of this
transformation is to reduce the dimensionality (i.e. the number of bands) in the data, and
compress as much of the information in the original bands into fewer bands. The "new" bands
that result from this statistical procedure are called components. This process attempts to
maximize (statistically) the amount of information (or variance) from the original data into the


least number of new components. As an example of the use of principal components analysis, a
seven band Thematic Mapper (TM) data set may be transformed such that the first three
principal components contain over 90 percent of the information in the original seven bands.
Interpretation and analysis of these three bands of data, combining them either visually or
digitally, is simpler and more efficient than trying to use all of the original seven bands. Principal
components analysis, and other complex transforms, can be used either as an enhancement
technique to improve visual interpretation or to reduce the number of bands to be used as input to
digital classification procedures, discussed in the next section.

This is one of the most widely used digital image enhancement techniques. The technique derives
eigenvalues and associated eigenvectors; PCA outputs are produced by combining the eigenvectors
linearly with the original data. The ultimate effect is a reorientation of the coordinate system
with respect to the original system.

PCA, also known as Karhunen-Loeve analysis, has proven to be of significant value in the analysis of
remotely sensed digital data (Jensen, 1986). PCA images are often more interpretable than the
original data, and the transform is also used as a data compression technique. For example, if four
multispectral bands are used as input to derive PCA images, then the first principal component
(PC1) contains the maximum information, compiled from all spectral channels, and the remaining
components carry progressively less information. The technique involves reducing the correlation
between PCs and increasing the variance within each component, which is directly proportional to
the information content of the image.
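A hedged sketch of principal components computed from the band-to-band covariance matrix, assuming the bands are stacked into a single NumPy array (the function name and the default of three components are illustrative):

import numpy as np

def principal_components(bands, n_components=3):
    # bands: array of shape (n_bands, rows, cols).
    # Returns the first n_components component images, ordered by decreasing
    # variance, so that PC1 carries the most information.
    n_bands, rows, cols = bands.shape
    flat = bands.reshape(n_bands, -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)         # centre each band
    cov = np.cov(flat)                               # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]                # re-order to descending variance
    pcs = eigvecs[:, order[:n_components]].T @ flat  # linear combinations of bands
    return pcs.reshape(n_components, rows, cols)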

Band ratioing:
Sometimes differences in brightness values from identical surface materials are caused by
topographic slope and aspect, shadows, or seasonal changes in sunlight illumination angle and
intensity. These conditions may hamper the ability of an interpreter or classification algorithm to
identify correctly surface materials or land use in a remotely sensed image.

Fortunately, ratio transformations of the remotely sensed data can, in certain instances, be
applied to reduce the effects of such environmental conditions. In addition to minimizing the
effects of environmental factors, ratios may also provide unique information not available in any
single band that is useful for discriminating between soils and vegetation. The mathematical
expression of the ratio function is
BV(i,j,r) = BV(i,j,k) / BV(i,j,l)

where BV(i,j,k) and BV(i,j,l) are the brightness values at row i, column j in bands k and l.

To represent the range of the function in a linear fashion and to encode the ratio values in a
standard 8-bit format (values from 0 to 255), normalizing functions are applied. Using this
normalizing function, the ratio value 1 is assigned the brightness value 128. Ratio values within
the range 1/255 to 1 are assigned values between 1 and 128 by the function

BV(i,j,n) = Int[(BV(i,j,r) x 127) + 1]

Ratio values from 1 to 255 are assigned values within the range 128 to 255 by the function

BV(i,j,n) = Int(128 + BV(i,j,r)/2)
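A minimal sketch of this 8-bit encoding of ratio values, assuming the ratios are held in a NumPy array (the function name is illustrative):

import numpy as np

def encode_ratio(ratio):
    # Ratios <= 1 are stretched into 1..128 as Int(ratio * 127) + 1;
    # ratios  > 1 are compressed into 128..255 as Int(128 + ratio / 2).
    ratio = np.asarray(ratio, dtype=float)
    low = (ratio * 127).astype(np.int32) + 1
    high = (128 + ratio / 2).astype(np.int32)
    out = np.where(ratio <= 1, low, high)
    return np.clip(out, 0, 255).astype(np.uint8)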


Simple ratios between bands only negate multiplicative extraneous effects. When additive
effects are present, we must ratio between band differences. Ratio techniques compensate only for
those factors that act equally on the various bands under analysis. In the individual bands the
reflectance values are lower in a shadowed area, and it would be difficult to match such an outcrop
with a sunlit outcrop. The ratio values, however, are nearly identical in the shadowed and sunlit
areas, so sandstone outcrops would have similar signatures on ratio images. This removal of
illumination differences also eliminates the dependence of ratio images on topography.

Ratio images can be meaningfully interpreted because they can be directly related to the spectral
properties of materials. Ratioing can be thought of as a method of enhancing minor differences
between materials by defining the slope of the spectral curve between two bands.

Apart from the simple ratio of the form A/B, other ratios like A/(A+B), (A-B)/(A+B) and
(A+B)/(A-B) are also used in some investigations. But a systematic study of their use for
different applications is not available in the literature. It is important that the analyst be cognizant
of the general types of materials found in the scene and their spectral properties in order to select
the best ratio images for interpretation. Ratio images have been successfully used in many
geological investigations to recognize and map areas of mineral alteration and for lithologic
mapping.

Colour Ratio Composite Images:


We can combine several ratio images to form a colour ratio composite image. Colour ratio
composite images are most effective for discriminating between the altered and unaltered
sedimentary rocks, and in some cases, for distinguishing subtle differences among the altered
and unaltered rocks. In some studies, it has been noticed that a composite of the ratios 4/5, 5/6
and 6/7 provides the greatest amount of information for discriminating between hydrothermally
altered and unaltered rocks, as well as for separating various types of igneous rocks.
Density slicing can also be used after band ratioing to enhance subtle tonal differences.

Tasseled Cap Transformation:


Tasseled cap (Kauth-Thomas) transformation is another transformation, which helps in deriving
‘brightness’, ‘greenness’, ‘yellowness’ and ‘non-such’ indices by using multispectral satellite
data. These images are derived using linear band combination of already derived coefficients
and input band data. The justification for this operation is that the axes will provide a
consistent, physically-based coordinate system for interpretation of images of an agricultural area
obtained at different stages of the growth cycle of the crop (Mather, 1987). This is a means by
which it is possible to highlight the most important (spectrally observable) phenomena of crop
development in a way that allows discrimination of specific crops, and crops from other
vegetative cover, in Landsat multitemporal image.

Three major orthogonal directions of significance in vegetation can be identified. The first is the
principal diagonal along which soils are distributed. This was chosen by Kauth and Thomas as
the first axis in the tasseled cap transformation and is known as brightness. The development of
vegetation towards maturity appears to occur orthogonally to the soil major axis. This
direction was then chosen as the second axis, with the intention of providing a greenness
indicator. Senescence takes place in a different plane from maturity. A third axis orthogonal to the


soil line and greenness axis will give a yellowness measure. Finally a fourth axis is required to
account for data variance not substantially associated with differences in soil brightness or
vegetative greenness or yellowness. Again this needs to be orthogonal to the previous three. It
was called ‘non-such’ by Kauth and Thomas in contrast to the names ‘soil brightness’, ‘green-
stuff’ and ‘yellow-stuff’ they applied to the previous three.
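Since the tasseled cap is a linear combination of the input bands with published, sensor-specific coefficients, the operation can be sketched generically as below; the coefficient matrix itself must be taken from the literature for the sensor in use and is not reproduced here (function and parameter names are illustrative):

import numpy as np

def tasseled_cap(bands, coeffs):
    # bands  : array of shape (n_bands, rows, cols)
    # coeffs : array of shape (n_components, n_bands) of published coefficients
    #          (brightness, greenness, yellowness, non-such, ...)
    # Returns an array of shape (n_components, rows, cols).
    n_bands, rows, cols = bands.shape
    flat = bands.reshape(n_bands, -1).astype(float)
    components = coeffs @ flat          # one linear combination per component
    return components.reshape(coeffs.shape[0], rows, cols)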

Decorrelation Techniques:
Multispectral digital data normally exhibit a high degree of correlation among the spectral
channels. Due to this, separation of certain features becomes extremely difficult. Contrast
stretching of the data does not produce improved results, as the data tend to concentrate along the
diagonal of the three-dimensional feature space. Hence, to improve interpretability and data quality
it is necessary to reduce the correlation between the spectral channels so that the data spread out
towards all corners of the three-dimensional feature space. Principal Component Analysis and the
Hue, Saturation, Intensity (HSI) transformation are some of the commonly used tools.

HSI Technique:
This is yet another decorrelation technique, which works in a three-dimensional space to produce
an output with characteristics similar to a PC image. The Red, Green, Blue (RGB) input
can be manipulated by a three-dimensional transformation to obtain Hue (the perceived
colour), Saturation (the degree of purity of the colour) and Intensity (the brightness
or dullness of the colour). This enhancement is particularly useful in deriving more perceptible and
interpretable images. The HSI image, after stretching, can be transformed back to RGB space to work
in the normal colour composite mode for better differentiation of objects.
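A hedged sketch of this workflow, assuming scikit-image is available; HSV is used here as a close relative of HSI, and the particular stretches applied to the saturation and intensity (value) channels are illustrative choices:

import numpy as np
from skimage import color, exposure   # assumes scikit-image is installed

def hsi_style_stretch(rgb):
    # rgb: float array of shape (rows, cols, 3) with values in [0, 1].
    hsv = color.rgb2hsv(rgb)                                     # leave RGB space
    s = hsv[..., 1]
    hsv[..., 1] = (s - s.min()) / (s.max() - s.min() + 1e-12)    # saturation stretch
    hsv[..., 2] = exposure.equalize_hist(hsv[..., 2])            # intensity stretch
    return color.hsv2rgb(hsv)                                    # back to RGB for display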

14.4 SUMMARY
Spatial filtering types, techniques and methods are used to enhance the appearance of an image
which facilitates clarity for visual as well as digital image processing and interpretation. Such
tools and techniques enable the extraction of a huge amount of information based on the
objectives of users. Spatial filters are designed to highlight or suppress specific features in an
image based on their spatial frequency. Spatial frequency is related to the concept of image
texture which refers to the frequency of the variations in tone that appear in an image. "Rough"
textured areas of an image, where the changes in tone are abrupt over a small area, have high
spatial frequencies, while "smooth" areas with little variation in tone over several pixels, have
low spatial frequencies. Similarly, in image transformation, the multispectral character of
remote sensing data renders it amenable to spectral transformations that generate new sets of
image components or bands which highlight ground features in different
shades/tones, texture etc. The transformed image makes evident features not discernible in the
original data. The techniques are also significant for displaying data in the three dimensions
available on a colour monitor or in colour hardcopy, and for the transmission and storage of data.


14.5 GLOSSARY
Absorption factor-The ratio of the total absorbed radiant or luminous flux to the incident flux

Radiance- Radiance is the radiant intensity per unit surface area.

Standard unit of radiance- watts per steradian and square meter (W/sr m²).

Field Angle- The Field Angle is the angle between the two directions opposed to each other over
the beam axis for which the luminous intensity is 10% that of the maximum luminous intensity.
In some cases it is also called a beam angle.

Scattering Albedo-The ratio between the scattering coefficient and the absorption coefficient for
a participating medium. An albedo of 0 means that the particles do not scatter light. An albedo of
1 means that the particles do not absorb light.

Transmittance- Transmittance is the ratio of the total radiant or luminous flux transmitted by a
transparent object to the incident flux, usually given for normal incidence.

Transmission coefficient-The ratio of the directly transmitted light after passing through one
unit of a participating medium (atmosphere, dust, fog) to the amount of light that would have
passed the same distance through a vacuum. It is the amount of light that remains after the
absorption coefficient and the scattering coefficient (together the extinction coefficient) are
accounted for.

14.6 ANSWER TO CHECK YOUR PROGRESS


1. Explain the Characteristics of Filter, Spatial filters.
2. Explain the Image transformation Types, characteristics and uses.

14.7 REFERENCES
1. Hord, R. M. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic.
2. Jensen, J.R. 1986. Introductory Digital Image Processing: A Remote Sensing Perspective.
Prentice-Hall, Englewood Cliffs, NJ.
3. Lillesand, T.M. and Kiefer, R.W. 1994. Remote Sensing and Image Interpretation. John
Wiley & Sons, Inc. New York, pp 750.
4. Moik, J.G. 1980. Digital Processing of Remotely Sensed Images, NASA SP-431, Govt.
Printing Office, Washington, D.C.
5. Muller, J.P. (ed.) 1996. Digital Image Processing in Remote Sensing. Taylor & Francis,
London/Philadelphia.
6. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues


Vol. 55, no 9.
7. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 56, no 1.
8. Richards, J. A. 1986. Remote Sensing Digital Image Analysis: An Introduction. Berlin:
Springer-Verlag.
9. Rosenfeld, A. 1978. Image Processing and Recognition, Technical Report 664. University of
Maryland Computer Vision Laboratory
10. Sabins, F.F. 1986. Remote Sensing: Principles and Interpretation, 2nd ed. Freeman, New York.

14.8 TERMINAL QUESTIONS


1) Why is filtering of digital images needed? Define filter, filtering, spatial and linear filtering.
2) Describe filtering in the frequency and spatial domains.
3) What are the types and uses of spatial filtering?
4) What type of filter would you use to highlight regions of rapid intensity change and perform
edge enhancement? Describe it in detail.
5) Explain image subtraction and division.
6) What is the utility of principal components analysis in digital image enhancement technique?
7) Highlight the importance of spectral band rationing.
8) Highlight the techniques of Tasseled Cap Transformation, Decorrelation and HSI to
improve the interpretability and data quality.


UNIT 15 - IMAGE CLASSIFICATION

15.1 OBJECTIVES
15.2 INTRODUCTION
15.3 IMAGE CLASSIFICATION
15.4 SUMMARY
15.5 GLOSSARY
15.6 ANSWER TO CHECK YOUR PROGRESS
15.7 REFERENCES
15.8 TERMINAL QUESTIONS


15.1 OBJECTIVES
After reading this unit learner will be able to understand:
 Importance of Digital Image Classification
 Spectral Signature
 Classification Training and Types of Classification
 Classification Accuracy Assessment
 Classification Error Matrix

15.2 INTRODUCTION
In the previous units you have studied basics of digital image processing, preprocessing, image
registration and different types of digital enhancement techniques for bringing clarity,
interpretability and maximizing the reliable information extraction. The sequence of these study
topics becomes progressive for your full understanding towards digital image processing tools
and techniques. But your ultimate goal of digital image classification and analysis is still to be
learnt, and the same is described in this unit. Before learning digital image classification, it is
worth knowing its historical background, as outlined in the following paragraph:

Considerable amount of work has been done for mapping, monitoring and analysis of various
resources along with different ecosystems and environmental parameters at various levels.
Stereoscopic interpretation of aerial photographs and visual interpretation of coarse resolution
satellite images were the most common methods during the 1960s-70s. Initially, in the 1960s, with the
emergence of the space program, cosmonauts and astronauts started taking photographs out of
the window of their spacecraft in which they were orbiting the earth. During 1970s 1:1 million
scale satellite images were used for interpretation and mapping for broad land use and land cover
categories. With the passage of time the choice of visual interpretation method on paper print
was changed into onscreen visual interpretation of data visible in the computer monitor. Today,
remote sensing is carried out using airborne and satellite technology, not only utilizing film
photography, but also digital camera, scanner and video, as well as radar and thermal
sensors. Unlike in the past, when remote sensing was restricted to only the visual part of the
electromagnetic spectrum i.e., what could be seen with naked eye, today through the use of
special filters, photographic films and other types of sensors, the parts of the spectrum which
cannot be seen with the naked human eye can also be utilized. Now the digital image processing
techniques, using specific sophisticated computer hardware and software, are quite common
for detailed data analysis. There are various methods by which the raw data available from
satellite are rectified and enhanced so as to get clarity with high degree of contrast. By applying
different mathematical algorithms during processing and classification, one can achieve results
suited to one's objectives. Thus, today remote sensing is largely utilized in environmental
management, which frequently requires rapid, accurate, and up-to-date data collection and its
digital image processing.

Digital image classification of satellite remote sensing operations is used to digitally identify
and classify pixels in the data. Classification is usually performed on multi-channel data sets and
this process assigns each pixel in an image to a particular class or theme based on statistical


characteristics of the pixel brightness values. There are a variety of approaches taken to perform
digital classification. The two generic approaches used most often are supervised and
unsupervised classification.

The objective of these operations is to replace visual analysis of the image data with quantitative
techniques for automating the identification of features in a scene. This involves the analysis of
multi-spectral image data and the application of statistically based decision rules for determining
the land cover identity of each pixel in an image. The intent of the classification process is to
categorize all pixels in a digital image into one of several land cover classes or themes. This
classified data may be used to produce thematic maps of the land cover present in an image.
Based on the contents and sub-topics described under this unit, the study is aimed at the
objectives listed in Section 15.1.

15.3 IMAGE CLASSIFICATION

DEFINITIONS:

Classification and Digital Image Classification:


 Classification is a process related to categorization, the process in which ideas and objects
are recognized, differentiated, and understood.
 Digital Image Classification classifies each individual pixel by using the spectral signature/
information represented by the digital numbers in one or more spectral bands.
 Image classification refers to the task of extracting information classes from multiband raster
image.

Spectral signature:
 Spectral signature is the variation of reflectance or emittance of a material with respect
to wavelengths (i.e., reflectance/emittance as a function of wavelength).
 The spectral signature of an object is a function of the incidental EM wavelength and
material interaction with that section of the electromagnetic spectrum.
Supervised classification:
 Supervised classification uses the spectral signatures obtained from training samples to
classify an image.

Unsupervised classification:
 Unsupervised classification finds spectral classes (or clusters) in a multiband image without
the analyst’s intervention.
Accuracy and Classification Accuracy:
 Accuracy is one metric for evaluating classification models.
 Informally, accuracy is the fraction of predictions our model got right.
 Formally, Accuracy=Number of correct predictions/Total number of predictions.
 Classification accuracy is defined as the "percentage of correct predictions".


Confusion Matrix:
 A confusion matrix describes the performance of a classifier so that one can see what types of
errors the classifier is making.

IMPORTANCE OF DIGITAL IMAGE CLASSIFICATION:

Image processing techniques assist the analyst in the qualitative, i.e. visual, interpretation of
images. Multi-spectral classification is emphasized because it is, at present, the most
common approach to computer-assisted mapping from remote sensing images. It is important at
this point, however, to make a few appropriate comments about multi-spectral classification.
First, it is fundamental that we are attempting to objectively map areas on the ground that have
similar spectral reflectance characteristics. The resulting labels assigned to the image pixels,
therefore, represent spectral classes that may or may not correspond to the classes of ground
objects that we are ultimately interested in mapping. Second, manually produced maps are the
result of a long, often complex process that utilizes many sources of information. The
conventional tools used to produce a map range from the strictly quantitative techniques of
photogrammetry and geodesy, to the less quantitative techniques of photo-interpretation and field
class descriptions, to the subjective and artistic techniques of map "generalization" and visual
exploration of discrete spatial data points.
The final output of the classification process is a type of digital image, specifically a map of the
classified pixels. For display, the class at each pixel may be coded by character or graphic
symbols or by color. The classification process compresses the image data by reducing the large
number of gray levels in each of several spectral bands into a small number of classes in a single
image.

SPECTRAL SIGNATURES:
Relying on the assumption that different surface materials have different spectral reflectance (in
the visible and microwave regions) or thermal emission characteristics, multispectral classification
logically partitions the large spectral measurement space (256^k possible pixel vectors for an
image with 8 bits/pixel/band and k bands) into relatively few regions, each representing a
different type of surface material. The set of discrete spectral radiance measurements provided
by the broad spectral bands of the sensor defines the spectral signature of each class, as modified
by the atmosphere between the sensor and the ground. The spectral signature is a k-dimensional
vector whose coordinates are the measured radiances in each spectral band.
Concept of Spectral Signature in Image Classification:
Features on the Earth reflect, absorb, transmit, and emit electromagnetic energy from the sun.
Special digital sensors have been developed to measure all types of electromagnetic energy as it
interacts with objects in all of the ways listed above. The ability of sensors to measure these
interactions allows us to use remote sensing to measure features and changes on the Earth and in
our atmosphere. A measurement of energy commonly used in remote sensing of the Earth is
reflected energy (e.g., visible light, near-infrared, etc.) coming from land and water surfaces. The
amount of energy reflected from these surfaces is usually expressed as a percentage of the
amount of energy striking the objects. Reflectance is 100% if all of the light striking an object


bounces off and is detected by the sensor. If none of the light returns from the surface,
reflectance is said to be 0%. In most cases, the reflectance value of each object for each area of
the electromagnetic spectrum is somewhere between these two extremes. Across any range of
wavelengths, the percent reflectance values for landscape features such as water, sand, roads,
forests, etc. can be plotted and compared. Such plots are called “spectral response curves” or
“spectral signatures.” Differences among spectral signatures are used to help classify remotely
sensed images into classes of landscape features since the spectral signatures of like features
have similar shapes. The figure no 15.1 below shows differences in the spectral response curves
for healthy versus stressed sugar beet plants.

Figure 15.1: Spectral curve of healthy and stressed Sugar beets


The more detailed the spectral information recorded by a sensor, the more information that can
be extracted from the spectral signatures. Hyperspectral sensors have much more detailed
signatures than multispectral sensors and thus provide the ability to detect more subtle
differences in aquatic and terrestrial features.
Importance of Spectral Signatures in Digital Image Classification:
Spectral signature is one of the important tools in the analysis of digital data as it forms the basis
of identification and discrimination between various features on the Earth. Spectral signatures of
various cover types at two different seasons must be studied with the objective to find out their
role in analysis of multispectral digital data using a Multispectral Data Analysis System (M-
DAS). Multispectral data in different bands of two different seasons are generally plotted with
respect to their mean grey scale values. These values become very helpful in identification of
various land use/land covers/features and to understand their spectral separability which in turn
highlights a clue for final grouping of different classes during computer aided digital
classification.


Spectral Signature Generalization and Expansion Can Improve the Accuracy of Satellite Image Classification:
Conventional supervised classification of satellite images uses a single multi-band image and
coincident ground observations to construct spectral signatures of land cover classes. The
approach is compared with the following three alternatives that derive signatures from multiple
images and time periods:
i) Signature Generalization: Spectral signatures are derived from multiple images within one
season, but perhaps from different years;
ii) Signature Expansion: Spectral signatures are created with data from images acquired during
different seasons of the same year; and
iii) Combinations of Expansion and Generalization: The quality of these different signatures
was assessed to (a) classify the images used to derive the signature, and (b) for use in temporal
signature extension, i.e., applying a signature obtained from data of one or several years to
images from other years. When applying signatures to the images they were derived from,
signature expansion improved accuracy relative to the conventional method, and variability in
accuracy declined markedly. In contrast, signature generalization did not improve classification.
When applying signatures to images of other years (temporal extension), the conventional
method, using a signature derived from a single image, resulted in very low classification
accuracy. Signature expansion also performed poorly but multi-year signature generalization
performed much better and this appears to be a promising approach in the temporal extension of
spectral signatures for satellite image classification.

CLASSIFICATION TRAINING:

The first step of any classification procedure is the training of the computer program to recognize
the class signatures of interest. To train the computer program, we must supply a sample of
pixels from which class signatures, for example, mean vectors and covariance matrices can be
developed. There are basically two ways to develop signatures:

 Supervised training
 Unsupervised training

Supervised Classification:
For supervised training, the analyst uses prior knowledge derived from field surveys, photo-
interpretation, and other sources, about small regions of the image to be classified to identify
those pixels that belong to the classes of interest. The feature signatures of this analyst --
identified pixels are then calculated and used to recognize pixels with similar signatures
throughout the image.
In a supervised classification, the identity and the location of some of the land cover types, such
as urban, agriculture, wetland, and forest, are known a priori through a combination of field work,
analysis of aerial photography, maps, and personal experience.
These areas are commonly referred to as training sites because the spectral characteristics of
these known areas are used to "train" the classification algorithm for eventual land cover


mapping of the remainder of the image. Multi-variate statistical parameters (means, standard
deviations, covariance matrices, correlation matrices, etc.) are calculated for each training site.
Every pixel both within and outside these training sites is then evaluated and assigned to the
class of which it has be highest likelihood of being a member. The following are important
aspects of conducting a rigorous and hopefully useful supervised classification of remote sensor
data:

 An appropriate classification scheme must be adopted.


 Representative training sites must be selected, including an appreciation for signature
extension factors, if possible.
 Statistics must be extracted from the training site spectral data.
 The statistics are analyzed to select the appropriate features (bands) to be used in the
classification process.
 Select the appropriate classification algorithm.
 Classify the imagery into the required classes.
 Statistically evaluate the classification accuracy.

The classification performance depends on using suitable algorithms to label the pixels in an
image as representing particular ground cover types, or classes. A wide variety of algorithms are
available for supervised classification.
Maximum Likelihood Classification- This is the most common method used with remote
sensing image data interpretation/classification. The decision rules are based on Bayes'
principle. To derive the a priori probabilities, a sufficient amount of training data should be
available in the form of ground-referenced data for various cover types. From this information,
training-set spectral statistics and a priori probabilities are calculated, and each pixel is then
assigned to its most likely class (Figure 15.2).

Figure 15.2. Maximum likelihood classification

Minimum Distance Classification: It is an automatic option for classification when sufficient


ground truth data is not available for different cover types in a given area. This is a classifier,
which does not make use of variance-covariance matrices while classifying different cover types.
With this classifier, training data is used only to determine class means; classification is then
performed by placing the pixel in the class of the nearest mean (Figure 15.3).
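A minimal sketch of a minimum-distance-to-means classifier, assuming the image and the training-derived class means are NumPy arrays (function and parameter names are illustrative):

import numpy as np

def minimum_distance_classify(image, class_means):
    # image       : array of shape (rows, cols, n_bands)
    # class_means : array of shape (n_classes, n_bands) estimated from training data
    # Returns a (rows, cols) array of class indices.
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    # Euclidean distance from every pixel vector to every class mean
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)   # nearest mean wins
    return labels.reshape(image.shape[:2])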


Figure 15.3. Minimum distance classification

Parallelepiped Classification- This is a very simple supervised classifier that is, in principle,
trained by inspecting histograms of the individual spectral components of the available training
data. The upper and lower significant bounds on the histograms are identified and used to
describe the brightness value range for each component characteristic of that class. Together, the
ranges in all components describe a multidimensional box or parallelepiped (Figure 15.4). A two-
dimensional pattern space might therefore be segmented. If there is a considerable gap between
two parallelepipeds, pixels in those regions will not be classified, whereas in the case of the
maximum likelihood and minimum distance algorithms the pixels are labelled as belonging to
one of the available classes depending on the pre-set threshold.
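A hedged sketch of the parallelepiped decision rule, assuming per-band lower and upper bounds have already been derived from the training histograms (function and parameter names are illustrative; overlaps between boxes are resolved here by taking the first matching class):

import numpy as np

def parallelepiped_classify(image, lower, upper, unclassified=-1):
    # image        : array of shape (rows, cols, n_bands)
    # lower, upper : arrays of shape (n_classes, n_bands) of per-band bounds
    # Pixels falling inside no box keep the 'unclassified' label.
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    labels = np.full(pixels.shape[0], unclassified, dtype=int)
    for c in range(lower.shape[0]):
        inside = np.all((pixels >= lower[c]) & (pixels <= upper[c]), axis=1)
        labels = np.where((labels == unclassified) & inside, c, labels)
    return labels.reshape(image.shape[:2])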

Figure 15.4: Parallelepiped classification

Unsupervised Classification - For unsupervised training, the analyst employs a computer


algorithm that locates naturally occurring concentrations of feature vectors from a heterogeneous
sample of pixels. These computer-specified clusters are then assumed to represent feature
classes in the image and are used to calculate class signatures. Unsupervised classification
requires only a minimal amount of input from the analyst. It is a process whereby numerical
operations are performed that search for "natural" groupings of the spectral properties of pixels,
as examined in multi-spectral feature space. The user allows the computer to select the class


means and covariance matrices to be used in the classification. Once the data are classified, the
analyst attempts, a posteriori (after the fact), to assign these "natural" or spectral classes to the
information classes of interest. This may not be easy. Some of the clusters may be meaningless
because they represent mixed classes of earth surface materials.
Hundreds of clustering methods have been developed for a wide variety of purposes apart from
pattern recognition in remote sensing. The clustering algorithm operates in a two -pass mode (i.e.
it passes through the registered multi-spectral data set two times). In the first pass, the program
reads through the data set and sequentially builds clusters (groups of points in space). There is a
mean vector associated with each cluster. In the second pass, a minimum -distance classification
to means algorithm similar to the one described previously is applied to the whole data set on a
pixel -by -pixel basis, where each pixel is assigned to one of the mean vectors created in pass
one. The first pass, therefore, automatically creates the cluster signatures to be used by the
classifier:

PASS 1: Cluster Building:


During the first pass, the analyst may be required to supply four types of Information:

 R, a radius in spectral space used to determine when a new cluster should be formed.
 C, a spectral space distance parameter used when merging clusters.
 N, the number of pixels to be evaluated between each merging of the clusters.
 Cmax, the maximum number of clusters to be identified by the algorithm.

PASS 2: Assignment of Pixels to one of the Cmax Clusters using minimum distance
classification logic:
The final cluster mean data vectors are used in a minimum -distance to means classification
algorithm to classify all the pixels in the image into one of the Cmax clusters. The analyst usually
produces a display depicting to which cluster each pixel was assigned. It is then necessary to
evaluate the location of the clusters in the image, label them, if possible, and see if any should be
combined. It is usually necessary to combine some clusters. This is where an intimate knowledge
of the terrain is critical.
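The two-pass clustering described above is not exactly k-means, but scikit-learn's KMeans gives a compact stand-in for the same idea of grouping pixel vectors into spectral clusters that the analyst then labels; the sketch below assumes scikit-learn is installed and uses illustrative parameter values:

import numpy as np
from sklearn.cluster import KMeans   # assumes scikit-learn is installed

def unsupervised_clusters(image, n_clusters=10, seed=0):
    # image: array of shape (rows, cols, n_bands).
    # Returns a (rows, cols) array of cluster labels that the analyst can
    # subsequently relate to information classes.
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    labels = km.fit_predict(pixels)
    return labels.reshape(image.shape[:2])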

Cluster Labeling:
It is usually performed by interactively displaying all the pixels assigned to an individual cluster
on a CRT screen. In this manner, it is possible to identify their location, and spatial association
with the other clusters. This interactive visual analysis, in conjunction with the information
provided in the scatter plot, allows the analyst to group the clusters into information classes.
Combination of Supervised and Unsupervised Training:
Because supervised training does not necessarily result in class signatures that are numerically
separable in feature space, and because unsupervised training does not necessarily result in
classes that are meaningful to the analyst, a combined approach has the potential to meet both
requirements. If time and financial resources permit, this is undoubtedly the best procedure to
follow.


Pre-classification Processing and Feature Extraction:


Those aspects of remote sensing imagery that are used to define mapping classes are known as
features. The simplest features, the pixel grey levels in each band of a multispectral image, are
not necessarily the best features for accurate classification. They are influenced by such factors
as atmospheric scattering and topographic relief, and are more often highly correlated between
spectral bands, resulting in the inefficient analysis of redundant data.
Incorporating Ancillary and Contextual Data in the Classification Process:
Ancillary data is any type of information used in the classification process that is not directly
obtainable from either the spatial or spectral characteristics of the remote sensor data itself. For
example, the amount and the type of agricultural land use in the previous year should be of value
in this year's agricultural crop -type inventory. Similarly, certain types of forest species are
spatially distributed based solely on aspect and/or topographic characteristics. Ideally, land
cover classifications based on the analysis of remote sensor data should incorporate such
valuable ancillary information.
Contextual Classification:
The classifiers so far discussed are point or pixel specific in which the pixels in an image are
classified independently of the classifications of their neighbors. Procedures are available for
classifying pixels in the context of their neighbors. These require information from a spatial
model of the region under consideration and tend to develop thematic map that is consistent both
spectrally and spatially.
Statistical classification techniques commonly achieve accuracies above 80%. The main
limitation of many classification techniques is that they perform classification on a pixel-by-pixel
basis. It is obvious that the classification is based on the spectral characteristics of the object in
consideration. If two independent objects possess similar spectral properties, statistical
classification methodology fails to recognize the object as a different class. Hence, the
probability of misclassification, even though can be minimized, is very much scene dependent.
Scenes having lot of variability in the spectral responses will require too many training sets for
satisfactory classification. Many analysts adopt post-classification smoothing to minimize
unclassified pixels. But this procedure results in exclusion of other small areas, which are of
interest. So, smoothing is more restricted to visual purpose only. The pixel process does not
emulate human abilities, which are clearly distinguished by their use of texture and context in
addition to tone to aid interpretation decisions. Here the role of digital spatial contextual analysis
is of great interest in order to improve a multispectral classification.

One form of context analysis used to improve the classification is the type I pixel-based
re-classifier, which combines local information from surrounding pixels to assist the
reclassification of each pixel. The most important requirement of contextual algorithms is that
they should maintain homogeneous areas of irregular shape while identifying and correcting
isolated pixels that are misclassified. The probabilistic relaxation model provides an appropriate
examination of the technique. Rosenfeld et al. (1976) define three models of relaxation: a
discrete model, a fuzzy model and a probabilistic model. Probabilistic models are the most
generally used methodology and have wide literature coverage. The probabilistic relaxation
model attempts to reduce the uncertainty in a twofold manner by:
 Examining the local neighborhood of each pixel to produce locally consistent labels.


 Using statistical information on the label interrelationships present in the whole image.
The core of the model is the probability updating rule. The neighborhood operator provides
spatial context information over n-local pixels.
Temporal Classification:
Temporal classification exploits the usefulness of time related features as another element for
interpretation. Temporal analysis uses two basic time interpretive functions in forestry
application:

 Time related phenological features as additional discriminants.


 Time related change in forest cover features.

The suitability of a specific index for the estimation of biomass production is initially assessed
through a pilot study using ground radiometers and aerial scanner data as complements to the
satellite data. Indices have proved more efficient estimators of vegetation amount than
individual bands used independently. Selection of remote sensing data and its acquisition is
based on the required spatial resolution. A balance between spatial, spectral and temporal
resolution is usually made depending upon the scale of the study.
For detailed digital image classification techniques of remote sensing data, students may consult
the concerned books.

CLASSIFICATION ACCURACY ASSESSMENT:

Quantitatively assessing classification accuracy requires the collection of some in situ data or a
priori knowledge about some parts of the terrain, which can then be compared with the remote
sensing derived classification map. Thus, to assess classification accuracy it is necessary to
compare two classification maps 1) the remote sensing derived map, and 2) assumed true map (in
fact it may contain some error). The assumed true map may be derived from in situ investigation
or quite often from the interpretation of remotely sensed data obtained at a larger scale or higher
resolution.

Overall Classification Map Accuracy Assessment:


To determine the overall accuracy of a remotely sensed classified map it is necessary to ascertain
whether the map meets or exceeds some predetermined classification accuracy criteria. Overall
accuracy assessment evaluates the agreement between the two maps in total area or each
category. They usually do not evaluate construction errors that occur in the various categories

Site Specific Classification Map Accuracy Assessment:


This type of error analysis compares the accuracy of the remote sensing derived classification
map pixel by pixel with the assumed true land use map. First, it is possible to conduct a site-
specific error evaluation based only on the training pixels used to train the classifier in a
supervised classification. This simply means that those pixel locations i, j used to train the


classifier are carefully evaluated on both the classified map from remote sensing data products
and the assumed true map. If training samples are distributed randomly throughout the study
area, this evaluation may consider representative of the study area. If they act biased by the
analyst a prior knowledge of where certain land cover types exist in the scene. Because of it is
bias, the classification accuracy for pixels found within the training sites are generally higher
than for the remainder of the map because these are the data locations that were used to train the
classifier. Conversely if others test locations in the study area are identified and correctly
labelled prior to classification and if these are not used in the training of the classification
algorithm they can be used to evaluate the accuracy of the classification map. This procedure
generally yields a more credible classification accuracy assessment. However additional ground
truth is required for these test site coupled with problem of determining how many pixels are
necessary in each test site class. Also the method of identifying the location of the test sites prior
to classification is important since many statistical tests require that locations be randomly
selected (e .g using a random number generator for the identification off unbiased row and
column coordinates) so that the analyst do not bias their selection.
Once the Criterion for objectively identifying the location of specific pixels to be compared is
determined, it is necessary to identify the class assigned to each pixel in both the remote sensing
derived map and the assumed true map. These data are tabulated and reported in a contingency
table (error matrix), where overall classification accuracy and misclassification between
categories are identified. It takes the form of an m x m matrix, where m is the number of classes
under investigation. The rows in the matrix represent the assumed true classes, while the
columns are associated with the remote sensing derived land use. The entries in the contingency
table represent the raw number of pixels encountered in each condition; however, they may be
expressed as percentages, if the number becomes too large. One of the most important
characteristics of such matrices is their ability to summarize errors of omission and commission.
These procedures allow quantitative evaluation of the classification accuracy. Their proper use
enhances the credibility of using remote sensing derived land use information.

Classification Error Matrix:


One of the most common means of expressing classification accuracy is the preparation of a
classification error matrix, sometimes called a confusion matrix or a contingency table. Error matrices
compare on a category-by-category basis, the relationship between known reference data (ground
truth) and the corresponding results of an automated classification. Such matrices are square,
with the number of rows and columns equal to the number of categories whose classification
accuracy is being assessed. Table 15.1 is an error matrix that an image analyst has prepared to
determine how well a classification has categorized a representative subset of pixels used in the
training process of a supervised classification. This matrix stems from classifying the sampled
training-set pixels and listing the known cover types used for training (columns) versus the pixels
actually classified into each land cover category by the classifier (rows).


Table 15.1 Error Matrix resulting from classifying training Set pixels

Training set data (Known cover types)


Classification data
W S F U C H Row Total
W 480 0 5 0 0 0 485
S 0 52 0 20 0 0 72
F 0 0 313 40 0 0 353
U 0 16 0 126 0 0 142
C 0 0 0 38 342 79 459
H 0 0 38 24 60 359 481
Column Total 480 68 356 248 402 438 1992

Producer's Accuracy                 User's Accuracy

W = 480/480 = 100%                  W = 480/485 = 99%
S = 52/68  = 76%                    S = 52/72  = 72%
F = 313/356 = 88%                   F = 313/353 = 89%
U = 126/248 = 51%                   U = 126/142 = 89%
C = 342/402 = 85%                   C = 342/459 = 74%
H = 359/438 = 82%                   H = 359/481 = 75%

Overall accuracy = (480 + 52 + 313 + 126 + 342 + 359)/1992 = 84%


(W, water; S, sand; F, forest; U, urban; C, corn; H, hay)

An error matrix expresses several characteristics about classification performance. For example,
one can study the various classification errors of omission (exclusion) and commission
(inclusion). Note in Table 1 the training set pixels that are classified into the proper land cover
categories are located along the major diagonal of the error matrix (running from upper left to
lower right). All non-diagonal elements of the matrix represent errors of omission or
commission. ‘Omission errors correspond to non-diagonal column elements’ (e.g. 16 pixels that
should have classified as "sand" were omitted from that category). ‘Commission errors are
represented by non-diagonal row element’ (e.g. 38 urban pixels plus 79 hay pixels were


improperly included in the corn category). Several other measures, e.g. the overall accuracy of the
classification, can be computed from the error matrix. The overall accuracy is determined by dividing the total
number correctly classified pixels (sum of elements along the major diagonal) by the total
number of reference pixels. Likewise, the accuracies of individual categories can be calculated
by dividing the number of correctly classified pixels in each category by either the total number
of pixels in the corresponding row or column. Producer's accuracy, which indicates how well the
training-set pixels of a given cover type are classified, can be determined by dividing the number
of correctly classified pixels in each category by the number of training pixels used for that category
(column total). The user's accuracy, in contrast, is computed by dividing the number of correctly
classified pixels in each category by the total number of pixels that were classified in that
category (row total). This figure is a measure of commission error and indicates the probability
that a pixel classified into a given category actually represents that category on the ground. Note that
the error matrix in the table indicates an overall accuracy of 84%. However, producer's accuracies
range from just 51% (urban) to 100% (water) and user's accuracies range from 72% (sand) to 99%
(water). This error matrix is based on training data. If the results are good it indicates that the
training samples are spectrally separable and the classification works well in the training areas.
This aids in the training set refinement process, but indicates little about classifier performance
elsewhere in the scene.
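A minimal sketch that recomputes producer's, user's and overall accuracy from an error matrix laid out as in Table 15.1 (classified classes as rows, reference classes as columns); the function name is illustrative:

import numpy as np

def accuracy_from_error_matrix(matrix):
    # matrix: square array, rows = classified classes, columns = reference classes.
    matrix = np.asarray(matrix, dtype=float)
    correct = np.diag(matrix)
    producers = correct / matrix.sum(axis=0)   # correct / column (reference) totals
    users = correct / matrix.sum(axis=1)       # correct / row (classified) totals
    overall = correct.sum() / matrix.sum()     # diagonal sum / grand total
    return producers, users, overall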
Kappa coefficient:
‘Discrete multivariate’ techniques have been used to statistically evaluate the accuracy of remote
sensing derived maps and error matrices since 1983 and are widely adopted. These techniques
are appropriate as the remotely sensed data are discrete rather than continuous and are also
binomially or multinomially distributed rather than normally distributed. Kappa analysis is a
discrete multivariate technique for accuracy assessment. Kappa analysis yields a Khat statistic
that is a measure of agreement or accuracy.
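A hedged sketch of the Khat computation from the same error matrix, using the standard observed-versus-chance agreement form of the statistic (the function name is illustrative):

import numpy as np

def kappa_coefficient(matrix):
    # Khat = (observed agreement - chance agreement) / (1 - chance agreement)
    matrix = np.asarray(matrix, dtype=float)
    n = matrix.sum()
    observed = np.trace(matrix) / n
    chance = (matrix.sum(axis=0) * matrix.sum(axis=1)).sum() / n**2
    return (observed - chance) / (1 - chance)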
The techniques of image processing discussed so far deal with deriving end results in the
form of classified maps or enhanced images and associated statistics, which can be correlated
with available conventionally generated thematic maps. There are many procedures in satellite
image processing which have already been automated, especially under pattern recognition
techniques. However, efforts are in progress to find newer techniques which consume less
time, integrate data from various sources better, have better automation
capabilities and are simpler to use. Some of the techniques which have made a big impact in
present-day data processing are:

 Expert Systems/Artificial Intelligence


 Contextual classification

The spectral classification of remotely sensed data, in the parametric approach, depends more on
the spectral statistics and related signature. The spectral knowledge is a limiting factor with
respect to the given image. Generation of scene independent spectral knowledge is a critical
element in the development of expert systems. Expert systems generally require a knowledge
base, a rule interpreter (or rule base) and a working memory. The spectral knowledge-based
computer systems are designed to avoid the need for scene based parameter optimization. It is to
make classification decisions based on the knowledge of spectral relationships within and
between classes to be categorized given that the relationships are stable over a period of time.


Context-based classification algorithms are gaining momentum in present-day image processing.
Contextual classification is mainly based on categorization of the image data with respect to the
context of the particular pixel in consideration. The rules used to accept or reject the
classification decision at a given level depend upon the local contextual interpretation associated
with the pixel under consideration.
There are hybrid methods by which useful results are obtained in image processing. Temporal
images are classified using parametric methods over different seasons. A specific knowledge base
is used to refine the parametric classification with respect to the behaviour of the features in the
particular phenological stage. A different knowledge base is used to compare the two-date
parametric classifications (already refined) to show constant and changing areas over the
period. In addition to this, stratification information is superimposed from available thematic
maps (e.g. forest boundaries are extracted and shown on the final product). Such
hybrid methods not only improve the classification performance but also go a long way in
deriving newer techniques of image processing to achieve better end results.
Future trends in remote sensing and image processing are the generation of databases and a
national network for information exchange, which will finally lead to the operationalisation of
the National Resources Information System (NRIS) in India.

15.4 SUMMARY
Digital image processing techniques are quite common for detailed data analysis and data outputs
for obtaining the desired results. There are various methods by which the raw data available
from satellite are rectified and enhanced so as to get clarity with high degree of contrast. By
applying different mathematical algorithms during the processing and classification one can
achieve the results of his own choice. Thus, today remote sensing is largely utilized in
environmental management, which frequently requires rapid, accurate, and up-to-date data
collection and its digital image processing.
Digital image processing and classification techniques applied to satellite remote sensing data are used to obtain clarity of features and to identify and classify pixels in the data. Classification is usually performed on multi-channel data sets, and this process assigns each pixel in an image to a particular class or theme based on the statistical characteristics of the pixel brightness values. There are a variety of approaches to digital classification; the two generic approaches used most often are supervised and unsupervised classification.

The objective of digital image classification is to replace visual analysis of the image data with quantitative techniques for automating the identification of features in a scene. Digital image classification provides detailed output results based on the user's requirements. It involves the analysis of multi-spectral image data and the application of statistically based decision rules for determining the land cover identity of each pixel in an image. The intent of the classification process is to categorize all pixels in a digital image into one of several land cover classes or themes. The classified data may be used to produce thematic maps of the land cover present in an image. The unit defines and describes sub-topics covering the importance of digital image classification, spectral signature, classification training and types of classification, classification accuracy and assessment of the classification error matrix.

The concept and importance of the spectral signature in digital image classification have been explained. The theme and importance of supervised and unsupervised classification under classification training are described. Under supervised classification, the maximum likelihood, minimum distance and box (parallelepiped) classification schemes each play their own role in achieving classification accuracy, based on the distribution pattern of spectral signatures for each of the assigned classes. Other classification schemes, such as contextual and temporal classification, have also been described, along with classification accuracy and the error matrix.
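As a minimal illustration of one of these schemes, the sketch below applies a minimum distance rule in Python; the two-band class mean vectors are assumed training statistics and are not taken from any dataset described in this unit.

import numpy as np

# Hypothetical class mean vectors in two spectral bands (assumed training statistics).
class_means = {
    "water":      np.array([0.05, 0.04]),
    "vegetation": np.array([0.06, 0.40]),
    "bare soil":  np.array([0.25, 0.30]),
}

def minimum_distance(pixel):
    """Assign the pixel to the class whose mean is closest in spectral space."""
    distances = {name: np.linalg.norm(pixel - mean)
                 for name, mean in class_means.items()}
    return min(distances, key=distances.get)

print(minimum_distance(np.array([0.07, 0.38])))   # expected output: "vegetation"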
In addition to pattern recognition techniques, efforts are in progress to find new techniques that consume less time, integrate data from various sources more effectively, offer better automation capabilities and are simpler to use. Two techniques that have made a big impact on present-day data processing are i) Expert Systems/Artificial Intelligence and ii) Contextual classification.

15.5 GLOSSARY
CRT (Cathode Ray Tube) screen- A CRT monitor or computer screen contains millions of tiny red, green and blue phosphor dots that glow when struck by an electron beam travelling across the screen, creating the visible image.
Map accuracy- The accuracy of any map reflects the error inherent in it, arising from the curvature and changing elevations within each source map from which it was compiled, added to or corrected by the map preparation techniques used in joining the individual maps.
Relative map accuracy- It is a measure of the accuracy of individual features on a map when
compared to other features on the same map.
Absolute map accuracy- Absolute map accuracy is a measure of the location of features on a
map compared to their true position on the face of the Earth.
Mapping accuracy – Mapping accuracy standards generally are stated as acceptable error and
the proportion of measured features that must meet the criteria. In the case of some plotting and
display devices, accuracy refers to tolerance in the display of graphic features relative to the
original coordinate file.
The level of allowable error of maps- As applied by National Map Accuracy Standards, it is
determined by comparing the positions of well-defined points whose locations or elevations are
shown on the map with corresponding positions as determined by surveys of a higher accuracy.


15.6 ANSWER TO CHECK YOUR PROGRESS


1. Explain the Importance of Digital Image Classification.
2. Explain the Classification Training and Types of Classification.

15.7 REFERENCES
1. Campbell, J. 1987. Introduction to Remote Sensing. Guilford, New York, pp 620.
2. Ekstrom, M.P. 1984. Digital Image Processing Techniques. Academic Press, New York.
3. Hord, R.M. 1982. Digital Image Processing of Remotely Sensed Data, Academic, New
York.

4. Jensen, J.R. 1986. Introductory Digital Image Processing: A Remote Sensing Perspective.
Prentice-Hall, Englewood Cliffs, NJ.

5. Lillesand, T.M. and Kiefer, R.W. 1994. Remote Sensing and Image Interpretation. John
Wiley & Sons, Inc., New York, pp 750.

6. Moik, J.G. 1980. Digital Processing of Remotely Sensed Images, NASA SP-431, Govt.
Printing Office, Washington, D.C.

7. Muller, J.P. (ed.) 1996. Digital Image Processing in Remote Sensing. Taylor & Francis,
London/Philadelphia.

8. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 55, no 9.

9. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 56, no 1.

10. Rosenfeld, A. and Kak, A.C. 1982. Digital Picture Processing: Volume 1. Morgan Kaufmann
Publishers Inc., San Francisco, CA, USA. ISBN 9780323139915.

11. Sabins, F.F. 1986. Remote Sensing: Principles and Interpretation, 2nd ed. W.H. Freeman, New York.
12. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2865537/

15.8 TERMINAL QUESTIONS


1) Define classification and digital image classification. What is the importance of digital image
classification?
2) What do you understand by spectral signature? Describe its role in digital image classification.


3) What is supervised classification? List the aspects involved in extracting useful
information/results from supervised classification.
4) Under what conditions does digital image classification using the maximum likelihood algorithm
become useful? Draw the related diagram to explain this statement.
5) Explain diagrammatically the difference between minimum distance and box (parallelepiped)
classification.
6) Describe the concept of contextual classification.
7) The combination of both supervised and unsupervised classification techniques improves the
accuracy limit of digital classification. Elaborate on this statement.
8) Describe site-specific classification map accuracy assessment.
9) What do you mean by the kappa coefficient? Highlight the newer tools and techniques for digital
image classification to reduce classification and mapping errors.
10) Give an example of the preparation of a classification error matrix/confusion matrix.
