GIS 505
Board of Studies
Chairman: Vice Chancellor, Uttarakhand Open University, Haldwani
Convener: Professor P.D. Pant, School of Earth and Environment Science, Uttarakhand Open University, Haldwani
Programme Coordinator
Dr. Ranju J. Pandey
Department of Geography & NRM
Department of Remote Sensing and GIS
School of Earth and Environment Science
Uttarakhand Open University, Haldwani
Course Editor
Dr. Ranju J. Pandey
Department of Geography & NRM
Department of Remote Sensing and GIS
School of Earth and Environment Science
Uttarakhand Open University, Haldwani
CONTENTS
BLOCK 3: MICROWAVE
UNIT 7: Concept, Definition, Microwave Frequency Ranges and Factors Affecting Microwave Measurements
UNIT 8: Radar Principles, Radar Wavebands, Side Looking Airborne Radar (SLAR) Systems & Synthetic Aperture Radar (SAR), Real Aperture Radar (RAR)
UNIT 9: Interaction between Microwaves and Earth's Surface
UNIT 10: Geometrical Characteristics of Microwave Image
UNIT 11: Interpreting SAR Images
1.1 OBJECTIVES
1.2 INTRODUCTION
1.3 CHARACTERISTICS OF IMAGES OBTAINED FROM
DIFFERENT SENSORS
1.4 SUMMARY
1.5 GLOSSARY
1.6 ANSWER TO CHECK YOUR PROGRESS
1.7 REFERENCES
1.8 TERMINAL QUESTIONS
1.1 OBJECTIVES
After reading this unit you will be able to understand:
Indian Earth Observation Programme.
Characteristics of Images of Indian Satellites.
1.2 INTRODUCTION
You have learnt about the concept, need, importance and principles of satellite remote sensing, the role of electromagnetic radiation properties and principles, and the sensor characteristics required for obtaining satellite images that meet user objectives. Before taking up the topic of this unit, it is necessary to introduce the platforms and sensor types considered here so that their image characteristics can be described. For this, we will take the Indian remote sensing satellites and sensors and a few important foreign satellites and sensors.
The Indian Remote Sensing (IRS) program started in the mid-1980s. It eventually provided a continuous supply of synoptic, repetitive, multispectral data of the Earth's land surfaces (similar to the US Landsat program). In 1995, IRS imagery was made available to a larger international community on a commercial basis. The initial program of Earth-surface imaging was extended by the addition of sensors for complementary environmental applications. This started with the IRS-P3 satellite, which carried MOS (Multispectral Optoelectronic Scanner) for the measurement of ocean color. The IRS-P4 mission is dedicated to ocean monitoring.
The availability of Landsat imagery created a lot of interest in the science community. The
Hyderabad ground station started receiving Landsat data on a regular basis in 1978. The Landsat
program with its design and potentials was certainly a great model and yardstick for the IRS
program.
• The first generation satellites IRS-1A and 1B were designed, developed and launched successfully in 1988 and 1991, respectively, carrying multispectral cameras with spatial resolutions of 72.5 m and 36 m. These early satellites were launched by Russian Vostok boosters from the Baikonur Cosmodrome.
• Subsequently, the second generation remote sensing satellites IRS-1C and -1D with improved
spatial resolutions have been developed and successfully launched in 1995 and 1997,
respectively. IRS-1C/1D data has been used for cartographic and town planning applications.
Starting with IRS-1A, ISRO has launched many operational remote sensing satellites. Today India has one of the largest constellations of remote sensing satellites in operation. Currently, thirteen operational satellites are in Sun-synchronous orbit and four in geostationary orbit. They include RESOURCESAT-1, 2 and 2A, CARTOSAT-1, 2, 2A and 2B, RISAT-1 and 2, OCEANSAT-2 and 3, Megha-Tropiques and SARAL. A variety of instruments has been flown onboard these satellites to provide data at diverse spatial, spectral, radiometric and temporal resolutions, catering to different user requirements in the country and globally. The data from these satellites are used for a wide variety of applications.
In addition to data from its own satellites, some of the premier organizations/institutes of India have used data from foreign satellites, namely the LANDSAT series, ERS, NOAA, TERRA and AQUA, SPOT, RADARSAT, IKONOS, QuickBird, etc.
Table 1.1: Indian Remote Sensing Satellite (IRS) specifications (ISRO polar-orbiting missions). [Table not fully reproduced; recoverable entries include spectral bands of 0.77-0.86 µm and 1.55-1.70 µm, and the row: IRS-1D, launched 1997 on PSLV-C1, satellite and instruments identical to those of IRS-1C, mission ended 2010.]
Highlights:
b) Cartosat Series:
Increased resolution and more spectral bands:
• PAN at 0.5 m resolution
• MSI at 2-4 m, 4 bands
• HSI at 8 m, ~200 bands; swath of 8-10 km
c) RISAT:
– First IRS SAR system
– C-band SAR
– 10 km swath in Spot mode, 240 km swath in Scan mode
– Resolution of 1 m to 50 m
– Single/dual polarization
LISS-I employs four 2048-element linear CCD detector arrays with spectral filters (Fairchild
CCD 143A). All cameras use refractive type collecting optics with spectral selection by
appropriate filters. The refractive optics was chosen to obtain a large FOV (Field of View). A
lens assembly for each spectral band is used for better performance and effective utilization of
the full dynamic range of the CCDs. Two LEDs (Light Emitting Diodes) per band are provided
for inflight calibration.
A LISS-I scene is 148 km x 174 km. The LISS-II A/B assembly features eight 2048-element
linear CCD detector arrays with spectral filters (2 parallel swaths of 74 km each for the LISS-II
A/B assembly with 3 km overlap, the total swath is 145 km). Four LISS-II scenes cover the area
of one LISS-I scene.
IRS-1C/1D:
IRS-1C is an ISRO-built second generation remote sensing satellite with enhanced capabilities in
terms of spatial resolution and spectral bands.
PAN:
PAN is a pushbroom imager using an all-reflective (off-axis f/4.5) folded mirror telescope (focal
length = 982 mm) along with three separately mounted 4096-element CCD arrays, adding up to
12,288 pixels in the cross-track direction (there is some overlapping of the three subscenes of the
image requiring special processing). Each detector array has separate interference filters and four
LEDs along with a cylindrical lens. Two LEDs are for optical biasing and two are for inflight
calibration of the sensor (calibration of CCDs excluding optics). A calibration cycle comprises
2048 lines (1.8 s for calibration cycle).
The PAN instrument is placed on a platform with a Payload Steering Mechanism (PSM), enabling the camera to be tilted in the cross-track direction by up to ±26º and providing a FOR (Field of Regard) coverage of ±398 km. This gives a revisit capability of 5 days for a given target area.
LISS-3:
Continuous service of multispectral imagery. Application: Land and water resources
management (Figure 1.1). The pushbroom camera uses refractive optics in four spectral bands
(separate optics and detector array for each band). The collecting optics consists of eight
refractive lens elements with interference filter in front. A linear CCD array of 6000 silicon-
based elements is used for each VNIR band. The SWIR- band device has a 2100 element InGaAs
linear array (temperature controlled at -10ºC with a passive radiative cooler and an on-off heater
control). The SWIR device itself consists of a lattice-mismatched heterojunction photodiode array for the detection of SWIR radiation and silicon-based CCD multiplexers for signal readout [seven identical modules are butted together to form a linear array of 2100 elements; each module consists of a two-sided InGaAs die of 300 photodiodes and two 150-element silicon-based CCD arrays on either side of the photodiode die to multiplex the signal from the photodiodes]. Instrument mass = 171 kg, power = 74-78 W.
The image characteristics of a PAN + LISS-III FCC are shown in Figure 1.3. The image highlights water bodies, agricultural fields, water channels, plantations, etc.
Calibration:
Inflight calibration of the LISS-3 camera is realized with LEDs (1.55 µm) as illuminating source
and operated in pulsed mode to generate six intensity levels. A LED is mounted on either side of
the photodiode array. Each LED is followed by diverging optics. A calibration cycle comprises
2048 scan lines (7.3 s for one cycle) and includes six intensity levels. Calibration is normally
performed during the eclipse period of the satellite pass.
IRS-P4 (OCEANSAT):
IRS-P4 is the first satellite primarily built for Ocean applications, weighing 1050 kg placed in
a Polar Sun Synchronous orbit of 720 km, launched by PSLV-C2 from SHAR Centre,
Sriharikota on May 26, 1999. This satellite carries Ocean Colour Monitor (OCM) and a Multi
- frequency Scanning Microwave Radiometer (MSMR) for oceanographic studies. IRS-P4
thus vastly augment the IRS satellite system of ISRO comprising four satellites, IRS-1B, IRS-
1C, IRS-P3 and IRS-1D and extend remote sensing applications to several newer areas.
Oceansat-1 was launched by ISRO's PSLV-C2along with German DLR-
Tubsat and South Korean Kit Sat 3 on 26 May 1999 from the First Launch Pad of Satish
Dhawan Space Centre in Sriharikota, India. It was the third successful launch of PSLV.
Payloads:
Oceansat-1 carried two payloads. The first of these, the Ocean Colour Monitor (OCM), is a solid-state camera designed primarily to monitor the colour of the ocean, and is thereby useful for documenting chlorophyll concentration, phytoplankton blooms, atmospheric aerosols and particulate matter. It is capable of sensing in eight spectral bands ranging from 400 nm to 885 nm, all in the visible or near-infrared region. The second, the Multi-frequency Scanning Microwave Radiometer (MSMR), collects data by measuring microwave radiation passing through the atmosphere over the ocean. This yields information on sea surface temperature, wind speed, cloud water content, and water vapour content.
ResourceSat-1 — formerly IRS-P6 :
IRS-P6 is an Earth observation mission within the IRS series of ISRO, Bangalore, India. The
overall objectives of the IRS-P6 mission are to provide continued remote sensing data services
on an operational basis for integrated land and water resources management. IRS-P6 is the
continuation of the IRS-1C/1D missions with considerably enhanced capabilities.
Prior to launch, ISRO renamed the IRS-P6 spacecraft to ResourceSat-1, to describe more
aptly the application spectrum of its observation data.
The IRS-P6 satellite was launched on Oct. 17, 2003 on a PSLV launcher from SHAR (Satish Dhawan Space Centre, Sriharikota), India. The ResourceSat-1 orbit is Sun-synchronous: altitude = 817 km, inclination = 98.69º, period = 101.35 min, with an equatorial crossing time of 10:30 hours on the descending node (LTDN, Local Time on Descending Node). The ground track is maintained within ±1 km.
The achieved IRS-P6 injected orbit was estimated as 815.417 km x 831.668 km with an inclination of 98.805º. The target orbit was reached by performing three in-plane and five combined (in-plane and out-of-plane) maneuvers. A total of eight orbit acquisition maneuvers, carried out from Oct. 20 to Nov. 29, 2003, were performed to obtain the "locked-path" and "frozen-perigee" orbits. Path locking was completed on Nov. 29, 2003.
Figure 1.4: LISS-3 image, Ahmadabad, acquired on Nov. 10, 2011 (image credit: ISRO)
Figure 1.5: First day AwiFS Image of Resourcesat-1, Lake Manasarovar (right),
located in the Tibet Autonomous Region of China (image credit: ISRO)
LISS-4 :
The LISS-4 multispectral high-resolution camera is the prime instrument of this sensor complement. LISS-4 is a three-band pushbroom camera of LISS-3 heritage (same spectral VNIR bands as LISS-3) with a spatial resolution of 5.8 m and a swath of 70 km. LISS-4 can be operated in either of two support modes: a mono mode (a single band covering the full swath) or a multispectral mode (all three bands over a reduced swath).
LISS-4 features in addition a ±26º steering capability in the cross-track direction permitting a
5-day revisit cycle. The optoelectronic module of LISS-4 is identical to that of the PAN
camera of IRS-1C/1D. The CCD array features 12,288 elements for each band. The
instrument has a mass of 169.5 kg, power of 216 W, and a data rate of 105 Mbit/s. The
detector temperature control is implemented using a radiator plate coupled to each band CCD
through heat pipes and copper braid strips.
The LISS-4 camera is realized using three-mirror reflective telescope optics (the same as that of the PAN camera of IRS-1C/1D) and 12,288-pixel linear array CCDs, with each pixel of size 7 µm x 7 µm. Three such CCDs are placed in the focal plane of the telescope.
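The quoted 5.8 m resolution and roughly 70 km swath follow directly from the optics figures given above. The sketch below assumes the 982 mm focal length of the PAN-heritage telescope, the 7 µm pixel pitch, a nominal orbital altitude of about 817 km and simple nadir pinhole geometry; it is a back-of-the-envelope check, not a rigorous sensor model.

focal_length_m = 0.982     # assumed PAN-heritage telescope focal length
pixel_pitch_m = 7e-6       # detector pixel size quoted above
altitude_m = 817e3         # assumed nominal ResourceSat-1 altitude
pixels_per_line = 12288    # detector elements per band

gsd_m = pixel_pitch_m / focal_length_m * altitude_m   # ground sample distance
swath_km = gsd_m * pixels_per_line / 1000.0           # cross-track swath

print(f"GSD ~ {gsd_m:.1f} m, swath ~ {swath_km:.0f} km")   # ~5.8 m and ~72 km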
LISS-3 :
LISS-3 is a medium-resolution multispectral camera. The pushbroom instrument is identical
to LISS-3 on IRS-1C/1D (with regard to lens modules, detectors, and electronics) in the three
VNIR bands, each with a spatial resolution of 23.5 m. The resolution of the SWIR band is
now also of 23.5 m on a swath of 140 km. The optics design and the detector of the SWIR
band are modified to suit the required resolution; B5 uses a 6,000 element Indium Gallium
Arsenide CCD with a pixel size of 13 µm. The SWIR CCD is a new device employing a
CMOS readout technique for each pixel, thereby improving noise performance. The VNIR
CCD array features 6,000 elements for each band. The instrument has a mass of 106.1 kg, a
power consumption of 70 W, and a data rate of 52.5 Mbit/s.
The in-flight calibration of the LISS-3 camera is carried out using 4 LEDs per CCD in the VNIR bands and 6 LEDs for the SWIR band. These LEDs are operated in pulsed mode and the pulse duration during which the LEDs are ON is varied in specific steps. Each LED has a cylindrical lens to distribute the light intensity onto the CCD. Each calibration cycle consists of 2048 lines providing six non-zero intensity levels.
Prior to launch, ISRO renamed the IRS-P5 spacecraft to CartoSat-1, to describe more aptly
the application spectrum of its observation data. In this mission, the high resolution of the data
(2.5 m GSD) is being traded at the expense of multispectral capability and smaller area
coverage, with a swath width of 30 km. The data products are intended to be used in DTM
(Digital Terrain Model)/DEM (Digital Elevation Model) generation in such applications as
cadastral mapping and updating, land use as well as other GIS applications.
The CartoSat-1 spacecraft was launched on May 5, 2005 through a PSLV C6 launch vehicle
of ISRO from the SDSC (Satish Dhawan Space Centre) Sriharikota launch site on the sea
coast of India.
A secondary payload on this flight was Hamsat (VUSat) of AmSat India with a launch mass
of 43.5 kg. Hamsat carries two transponders in UHF band to provide spaceborne radio
amateur services to India and the international Ham radio community.
Orbit:
Sun-synchronous circular orbit, altitude = 618 km, inclination =97.87º, period of 97 min,
nodal equatorial crossing time on ascending node at 10:30 hours. The orbital revisit cycle is
126 days. However, a revisit capability of 5 days is provided by the body-pointing feature of
the spacecraft about its roll axis by ±26º.
Mission status:
• The CartoSat-1 spacecraft and its payload are operating nominally in 2012. CartoSat-1 is
completing its 7th year on orbit in May 2012 and is being routinely operated; it is returning
high quality data.
• The CartoSat-1 spacecraft and its payload are operating nominally in 2011.
• The CartoSat-1 spacecraft and its payload are operating nominally in 2010.
• GAF/Euromap, the European commercial distributor of CartoSat-1 imagery, developed in
concert with DLR a DEM (Digital Elevation Model), mainly for Europe.
Since the two cameras image the same ground area with a time delay, the scene shifts beneath the spacecraft because of the rotation of the Earth. An algorithm for Earth rotation compensation is used to remove this offset between the delayed observations of the two cameras.
Aside from stereo observations, the two cameras may also be used for wide swath mode
acquisitions (Figure 1.8).
The onboard source data rate of each camera is 336 Mbit/s. An onboard ADPCM/JPEG compression of about 3.2:1 is applied, reducing the data rate to 105 Mbit/s per camera (52.5 Mbit/s each on the I and Q channels).
The optical system of each PAN camera is designed as a three-mirror off-axis reflective telescope, with an off-axis concave hyperboloidal primary mirror and an off-axis concave ellipsoidal tertiary mirror, to meet the required resolution and swath width. The mirrors are made from special Zerodur glass blanks and are lightweighted. They are polished to an accuracy of λ/80 and are coated with an enhanced aluminium coating. The mirrors are mounted to the electro-optical module using iso-static mounts, so that distortion of the lightweighted mirrors is reduced to a minimum. Each camera features a linear CCD detector array of 12,288 pixels. The overall size of each PAN camera is 150 cm x 85 cm x 100 cm, with a mass of 200 kg.
The imagery of the 2-line along-track stereo camera may be used for a variety of applications,
among them for the generation of DEMs (Digital Elevation Models). The data is expected to
provide enhanced inputs for large scale mapping applications and stimulate newer
applications in the urban and rural development.
Ground segment:
The spacecraft is being operated by ISTRAC (ISRO Telemetry, Tracking and Command
Network) of Bangalore, using its network of stations at Bangalore, Lucknow, Mauritius,
Bearslake in Russia and Biak in Indonesia. NRSA (National Remote Sensing Agency) of
Hyderabad is receiving the payload data and is the processing center for the CartoSat-1
mission. The payload data acquisition is performed at the NRSA ground station, located at
Shadnagar, near Hyderabad.
Cartosat-2:
Cartsat-2 is an Earth observation satellite in a sun-synchronous orbit and the second of
the Cartosat series of satellites. The satellite was built, launched and maintained by the Indian
Space Research Organization. Weighing around 680 kg at launch, its applications will mainly
be towards cartography in India. It was launched by a PSLV-G rocket on 10 January 2007.
Cartosat-2 carries a state-of-the-art panchromatic (PAN) camera that take black and white
pictures of the earth in the visible region of the electromagnetic spectrum. The swath covered
by this high resolution PAN camera is 9.6 km and their spatial resolution is less than 1 metre.
The satellite can be steered up to 45 degrees along as well as across the track.
Cartosat-2 is an advanced remote sensing satellite capable of providing scene-specific spot
imagery. The data from the satellite will be used for detailed mapping and other cartographic
applications at cadastral level, urban and rural infrastructure development and management, as
well as applications in Land Information System (LIS) and Geographical Information System
(GIS).
Cartosat-2's panchromatic camera can produce images (Figure 1.9) with a resolution of better than 1 metre, compared with the 82 cm panchromatic resolution offered by the Ikonos satellite. India had previously purchased images from Ikonos at about US$20 per square kilometre; the use of Cartosat-2 will provide imagery at about 20 times lower cost. At the time of Cartosat-2's launch, India was buying about Rs. 20 crore worth of imagery per year from Ikonos.
The above CARTOSAT-2 figure highlights many image elements pertaining to the cultural features of Bangalore city. If you compare the details with ground information, you may identify all the cover types/cultural features and the specific features within the city area.
OceanSat-2:
The ISRO (Indian Space Research Organization) spacecraft OceanSat-2 is envisaged to
provide service continuity for the operational users of OCM (Ocean Color Monitor) data as
well as to enhance the application potential in other areas. OCM is flown on IRS-
P4/OceanSat-1, launched May 26, 1999. The main objectives of OceanSat-2 are to study
surface winds and ocean surface strata, observation of chlorophyll concentrations, monitoring
of phytoplankton blooms, study of atmospheric aerosols and suspended sediments in the
water. OceanSat-2 plays an important role in forecasting the onset of the monsoon and its
subsequent advancement over the Indian subcontinent and over South-East Asia.
Coverage of applications:
• Sea-state forecast: waves, circulation and ocean MLD (Mixed Layer Depth)
• Monsoon and cyclone forecast - medium and extended range
• Observation of Antarctic sea ice
• Fisheries and primary production estimation
• Detection and monitoring of phytoplankton blooms
• Study of sediment dynamics
RISAT
RISAT (Radar Imaging Satellite) is a series of Indian radar imaging reconnaissance
satellites built by ISRO. They provide all-weather surveillance using synthetic aperture
radars (SAR). The RISAT series are the first all-weather earth observation satellites from ISRO.
Previous Indian observation satellites relied primarily on optical and spectral sensors which were
hampered by cloud cover. After the November 26, 2008 Mumbai attacks, the launch plan was
modified to launch RISAT-2 before RISAT-1, since the indigenous C-band SAR to be used for
RISAT-1 was not ready. RISAT-2 used an Israel Aerospace Industries (IAI) X-band SAR sensor
similar to the one employed on TecSAR.
RISAT-2 was the first of the RISAT series to reach orbit. It was launched successfully on April
20, 2009 at 0015 hours GMT by a PSLV rocket. The 300-kg satellite was built by ISRO using an
X-band SAR manufactured by IAI.
This satellite was fast tracked in the aftermath of the 2008 Mumbai attacks. The satellite will be
used for border surveillance, to deter insurgent infiltration and for anti-terrorist operations. It is
likely to be placed under the Aerospace Command of the Indian Air Force.
No details of the technical specifications of RISAT-2 have been published. However, it is likely
to have a spatial resolution of about a metre or so. Ship detection algorithms for radar satellites
of this class are well-known and available. The satellite also has applications in the area of
disaster management and agriculture-related activities.
Megha- Tropiques:
Megha-Tropiques is a satellite mission to study the water cycle in the tropical atmosphere in the context of climate change. A collaborative effort between the Indian Space Research Organisation (ISRO) and the French Centre National d'Études Spatiales (CNES), Megha-Tropiques was successfully deployed into orbit by a PSLV rocket in October 2011.
Megha-Tropiques was initially scrapped in 2003, but later revived in 2004 after India increased
its contribution and overall costs were lowered. With the progress made by GEWEX (Global
Energy and Water Cycle Experiment), Megha-Tropiques is designed to understand tropical
meteorological and climatic processes, by obtaining reliable statistics on the water and energy
budget of the tropical atmosphere. Megha-Tropiques complements other data in the current
regional monsoon projects such as MAHASRI and the completed GAME project. Megha-
Tropiques also seeks to describe the evolution of major tropical weather systems. The focus will
be the repetitive measurement of the tropics.
Payloads:
The instruments complement those flown on geostationary satellites; among them, microwave instruments are essential.
Sounder for Probing Vertical Profiles of Humidity (SAPHIR) is a sounding instrument with six channels near the water vapour absorption band at 183 GHz. These channels provide relatively narrow weighting functions from the surface to about 10 km, allowing water vapour profiles to be retrieved in the cloud-free troposphere. The scanning is cross-track, up to an incidence angle of 50°. The resolution at nadir is 10 km.
Scanner for Radiation Budget (ScaRaB) is a scanning radiative budget instrument which has already been flown twice on Russian satellites. The basic measurements of ScaRaB are the radiances in two wide channels, a solar channel (0.2-4 µm) and a total channel (0.2-200 µm), allowing longwave radiances to be derived. The resolution at nadir will be 40 km from an orbit at 870 km. The procedures for calibration and for processing the data to derive fluxes from the original radiances have been set up and tested by CNES and LMD.
Radio Occultation Sensor for Vertical Profiling of Temperature and Humidity (ROSA) is an instrument procured from Italy for vertical profiling of temperature and humidity.
Launch:
The Megha-Tropiques satellite was successfully placed in an 867 km orbit with an inclination of 20 degrees to the equator by the Indian Space Research Organisation using its Polar Satellite Launch Vehicle (PSLV-C18) on October 12, 2011. The PSLV-C18 was launched at 11:00 am on October 12, 2011 from the first launch pad of the Satish Dhawan Space Centre (SHAR) located in Sriharikota, Andhra Pradesh. The satellite was placed in orbit along with three micro satellites: the 10.9 kg SRMSAT built by SRM University, Chennai; the 3 kg remote sensing satellite Jugnu from the Indian Institute of Technology Kanpur (IIT Kanpur); and the 28.7 kg VesselSat-1 of Luxembourg, for locating ships on the high seas.
SARAL:
SARAL or Satellite with ARgos and ALtiKa is a cooperative altimetry technology mission
of Indian Space Research Organisation (ISRO) and CNES (Space Agency of France). SARAL
will perform altimetric measurements designed to study ocean circulation and sea surface
elevation. The payloads of SARAL are the ISRO built satellite with payloads modules (ALTIKA
altimeter), DORIS, Laser Retro-reflector Array (LRA) and ARGOS-3 (Advanced Research and
Global Observation Satellite) data collection system provided by CNES. It was launched by
Indian Polar Satellite Launch Vehicle rocket into the Sun-synchronous orbit (SSO). ISRO is
responsible for the platform, launch, and operations of the spacecraft. A CNES/ISRO MOU
(Memorandum of Understanding) on the SARAL mission was signed on Feb. 23, 2007.
SARAL was successfully launched on 25 February 2013, 12:31 UTC.
Payloads:
Ka band Altimeter, ALTIKA:
ALTIKA, the altimeter and prime payload of the SARAL mission, is the first spaceborne altimeter to operate in Ka-band. It was built by the French national space agency CNES. The payload, intended for oceanographic applications, operates at 35.75 GHz. ALTIKA is set to take over ocean monitoring from Envisat. Being the first altimeter to operate at such a high frequency makes it more compact and gives it better performance than the previous generation. Like existing satellite-borne altimeters, ALTIKA determines sea level by bouncing a radar signal off the surface and measuring the return-trip time; the advantage of the higher frequency is twofold. First, the earth's atmosphere slows down the radar signal, so altimetry measurements are skewed and current-generation altimeters have to carry additional equipment to correct for this error; because this delay is much smaller at ALTIKA's high Ka-band frequency, it does not have to carry a correction instrument. Second, operating at a higher frequency gives greater accuracy: ALTIKA will measure ocean surface topography with an accuracy of 8 mm, against 2.5 cm on average for current-generation altimeters, and with a spatial resolution of 2 km.
The disadvantage, however, is that high-frequency waves are extremely sensitive to rain, even
drizzle. 10% of the data is expected to be lost. (Although this could be exploited to perform
crude measurements of precipitation).
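The pulse-timing principle mentioned above (range from the return-trip time of a radar pulse) can be illustrated with a few lines of Python. The round-trip time used here is a made-up value corresponding to roughly 800 km altitude, not an actual SARAL measurement.

C = 299_792_458.0    # speed of light, m/s

def range_from_round_trip(delta_t_s):
    """Range to the reflecting surface from the measured round-trip time."""
    return C * delta_t_s / 2.0

round_trip_s = 5.34e-3    # hypothetical round-trip time of the radar pulse
print(f"Range ~ {range_from_round_trip(round_trip_s) / 1000:.1f} km")   # ~800 km

Sea-surface height is then obtained by subtracting this range from the precisely known satellite altitude.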
SARAL Applications:
SARAL data products will be useful for operational as well as research user communities in many fields, such as:
• Marine meteorology and sea state forecasting
• Operational oceanography
• Seasonal forecasting
• Climate monitoring
• Ocean, earth system and climate research
• Continental ice studies
• Protection of biodiversity
• Management and protection of marine ecosystems
• Environmental monitoring
• Improvement of maritime security
1.4 SUMMARY
IRS program provides a continuous supply of synoptic, repetitive, multispectral data of the
Earth's land surfaces. IRS imagery is made available to a larger international community on a
commercial basis. The initial program of Earth-surface imaging was extended by the addition of
sensors for complementary environmental applications. This started with the IRS-P3 satellite
which is flying MOS (Multispectral Optoelectronic Scanner) for the measurement of ocean
color. The IRS-P4 mission is dedicated to ocean monitoring.
The availability of Landsat imagery created a lot of interest in the science community. The
Hyderabad ground station started receiving Landsat data on a regular basis in 1978.
The Landsat program with its design and potentials was certainly a great model and yardstick for
the IRS program.
Starting with IRS-1A, ISRO has launched many operational remote sensing satellites. Today India has one of the largest constellations of remote sensing satellites in operation. Currently, thirteen operational satellites are in Sun-synchronous orbit and four in geostationary orbit. They include RESOURCESAT-1, 2 and 2A, CARTOSAT-1, 2, 2A and 2B, RISAT-1 and 2, OCEANSAT-2 and 3, Megha-Tropiques and SARAL. A variety of instruments has been flown onboard these satellites to provide data at diverse spatial, spectral, radiometric and temporal resolutions, catering to different user requirements in the country and globally. The data from these satellites are used for a wide variety of applications.
In addition to data from its own satellites, some of the premier organizations/institutes of India have used data from foreign satellites, namely the LANDSAT series, ERS, NOAA, TERRA and AQUA, SPOT, RADARSAT, IKONOS, QuickBird, etc.
With the passage of time, improvements in spatial, spectral, temporal and radiometric resolution, and the consequent improvements in image quality, have certainly met users' requirements and fulfilled the objectives of different fields of application.
1.5 GLOSSARY
CARTOSAT - The name CARTOSAT is a combination of Cartography and Satellite.
Cartography is the study and practice of making maps.
OCEANSAT - OCEANSAT is the first satellite primarily built for Ocean applications.
RISAT (Radar Imaging Satellite) - RISAT is a series of Indian radar imaging reconnaissance
satellites built by ISRO.
SARAL- SARAL or Satellite with ARgos and ALtiKa is a cooperative altimetry technology
mission of ISRO and CNES (Space Agency of France).
AQUA - AQUA is a NASA scientific research satellite in orbit around the Earth, studying the
precipitation, evaporation, and cycling of water.
SPOT - SPOT (from French "Satellite pour l'Observation de la Terre") constellation has been
supplying high-resolution, wide-area optical imagery.
QuickBird - The QuickBird satellite collects image data at 0.65 m pixel resolution. This satellite is an excellent source of environmental GIS data.
1.7 REFERENCES
1. www.nrsa.gov.in
2. https://en.wikipedia.org/wiki/Indian_Remote_Sensing_Programme
3. https://space.skyrocket.de/doc_sdat/irs-1a.htm
4. https://directory.eoportal.org/web/eoportal/satellite-missions/i/irs-1c-1d
5. https://en.wikipedia.org/wiki/Resourcesat
6. https://en.wikipedia.org/wiki/Cartosat
7. https://en.wikipedia.org/wiki/RISAT
8. https://www.usgs.gov/isro-resourcesat-1-and-resourcesat-2
9. https://en.wikipedia.org/wiki/SARAL
2.1 OBJECTIVES
2.2 INTRODUCTION
2.3 GEOMETRIC, RADIOMETRIC AND ATMOSPHERIC
CORRECTIONS
2.4 SUMMARY
2.5 GLOSSARY
2.6 ANSWER TO CHECK YOUR PROGRESS
2.7 REFERENCES
2.8 TERMINAL QUESTIONS
2.1 OBJECTIVES
After reading this unit you will be able to understand:
Geometric corrections
Radiometric corrections
Noise Removal
Atmospheric corrections
2.2 INTRODUCTION
In the previous unit, the image characteristics of important Indian and foreign satellite platforms and sensors were explained. Digital image processing and classification of those images require preprocessing of the raw data/images so as to achieve higher classification and mapping accuracy. Raw digital images cannot be used as maps because they contain geometric distortions which stem from the image acquisition process. To provide the same geometric integrity as a map, original raw images must be geometrically corrected, and distortions such as variations in altitude and earth curvature must be compensated for.
When image data is recorded by sensors on satellites and aircraft, it can contain errors in
geometry and in the measured brightness values of pixels. The latter are referred to as
radiometric errors and can result from the instrumentation used to record the data and from the
effect of the atmosphere. Image geometry errors can arise, for example, from the curvature of the
earth, uncontrolled variations in the position and attitude of the platform, and sensor anomalies.
Before using an image, it is frequently necessary to make corrections to its brightness and
geometry. There are essentially two techniques that can be used to try to minimize these
geometric distortions. One is to model the nature and magnitude of the distortion and thereby
establish a correction; the other is to develop a mathematical relationship between the pixel
coordinates on the image and the corresponding points on the ground.
Geometric, radiometric and atmospheric corrections of a remotely sensed digital image are collectively called image rectification. These operations aim to correct distorted or degraded image data to create a faithful representation of the original scene. This typically involves the initial processing of raw image data to correct for geometric distortion, to calibrate the data radiometrically and to eliminate noise present in the data. Image rectification and restoration procedures are often termed pre-processing operations because they normally precede manipulation and analysis of image data.
Image rectification:
Rectification is a transformation process used to project images onto a common image plane.
Image rectification is the transformation of multiple images onto a common coordinate
system
Geometric Correction:
Geometric Correction (often referred to as Image Warping) is the process of digitally
manipulating image data such that the image’s projection precisely matches a specific
projection surface or shape.
Geometric correction corrects a satellite image for the positional error and
displacement/distortion due to terrain elevation and terrain complexities.
Radiometric Correction:
Generally speaking, radiometric correction converts Digital Number (DN) to radiance using
calibration coefficients that are usually provided in image header files.
The radiometric correction involves subtracting the background signal (bias) and dividing by the gain of the instrument, which converts the raw instrument output (in DN) to radiance (in W m⁻² sr⁻¹ µm⁻¹).
Radiometric correction is to avoid radiometric errors or distortions, while geometric
correction is to remove geometric distortion.
Radiometric Calibration:
Radiometric calibration is the conversion from the sensor measurement to a physical
quantity.
Image Noise:
Image noise is random variation of brightness or color information in images
Image noise is an undesirable by-product of image capture that obscures the desired
information.
The original meaning of "noise" is an "unwanted signal" or unwanted electrical fluctuations
in signals.
Noise refers to random error in pixel values acquired during image acquisition.
Atmospheric correction:
Atmospheric correction is the process of removing the effects of the atmosphere on the
reflectance values of images taken by satellite or airborne sensors.
Atmospheric correction removes the scattering and absorption effects from the atmosphere to
obtain the original surface reflectance characteristics.
In computer stereo vision, the depth of a scene point is found by locating matching pixels in two images and triangulating the matches to determine their depth. Finding matches in stereo vision is restricted by epipolar geometry: each pixel's match in the other image can only be found on a line called the epipolar line. If two images are coplanar, i.e. they were taken such that the right camera is only offset horizontally compared to the left camera (not moved towards the object or rotated), then each pixel's epipolar line is horizontal and at the same vertical position as that pixel. However, in general settings (the camera did move towards the object or rotate) the epipolar lines are slanted. Image rectification warps both images such that they appear as if they had been taken with only a horizontal displacement; as a consequence all epipolar lines are horizontal, which slightly simplifies the stereo matching process. Note, however, that rectification does not fundamentally change the stereo matching process: it searches along lines, slanted ones before and horizontal ones after rectification.
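The practical benefit of rectification, searching for matches along a single image row, can be demonstrated with a toy block-matching sketch. The images and the disparity below are synthetic and purely illustrative; real rectification additionally requires the camera geometry.

import numpy as np

rng = np.random.default_rng(0)
left = rng.random((40, 60))
true_disparity = 5
right = np.roll(left, -true_disparity, axis=1)   # right view shifted horizontally only

def match_along_row(left_img, right_img, row, col, half_win=3, max_disp=10):
    """Find the disparity of pixel (row, col) by searching along the same row."""
    patch = left_img[row - half_win:row + half_win + 1,
                     col - half_win:col + half_win + 1]
    best_disp, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        cand = right_img[row - half_win:row + half_win + 1,
                         col - d - half_win:col - d + half_win + 1]
        cost = np.sum((patch - cand) ** 2)   # sum of squared differences
        if cost < best_cost:
            best_disp, best_cost = d, cost
    return best_disp

print(match_along_row(left, right, row=20, col=30))   # recovers the disparity of 5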
Image rectification is also an equivalent (and more often used) alternative to perfect camera co-
planarity. Even with high-precision equipment, image rectification is usually performed because
it may be impractical to maintain perfect co-planarity between cameras.
Image rectification can only be performed with two images at a time and simultaneous
rectification of more than two images is generally impossible.
The intent of geometric correction is to compensate for the distortions introduced by these factors, so that the corrected image will have the geometric integrity of a map.
Geometric correction is undertaken to avoid geometric distortions from a distorted image, and is
achieved by establishing the relationship between the image coordinate system and the
geographic coordinate system using calibration data of the sensor, measured data of position and
attitude, ground control points, atmospheric condition etc.
Steps for Geometric Correction:
The steps to follow for geometric correction are as follows:
Accuracy check - Accuracy of the geometric correction should be checked and verified. If the
accuracy does not meet the criteria, the method or the data used should be checked and corrected
in order to avoid the errors.
Systematic distortions are well understood and easily corrected by applying formulas derived by
modelling the sources of the distortions mathematically. For example, a highly systematic source
of distortion involved in multi-spectral scanning from satellite altitudes is the eastward rotation
of the earth beneath the satellite during imaging. This causes each optical sweep of the scanner to
cover an area slightly to the west of the previous sweep. This is known as skew distortion. The process of deskewing the resulting imagery involves offsetting each successive scan line slightly to the west. The skewed-parallelogram appearance of satellite multi-spectral scanner data is a result of this correction (Figure 2.1).
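The per-line offset used in deskewing can be approximated as the distance the Earth's surface rotates eastward during one scan-line (or sweep) period. The sketch below uses a hypothetical line period and latitude; real processing systems derive the offset from the orbit and sensor timing.

import math

EARTH_RADIUS_M = 6_378_137.0
SIDEREAL_DAY_S = 86_164.0

def westward_offset_per_line(latitude_deg, line_time_s):
    """Approximate ground distance (m) the Earth rotates during one line period."""
    surface_speed = (2 * math.pi * EARTH_RADIUS_M *
                     math.cos(math.radians(latitude_deg)) / SIDEREAL_DAY_S)
    return surface_speed * line_time_s

print(f"{westward_offset_per_line(30.0, 0.004):.1f} m per line")   # hypothetical values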
Random distortions and residual unknown systematic distortions are corrected by analyzing well-distributed ground control points (GCPs) occurring in an image. As with their counterparts on aerial photographs, GCPs are features of known ground location that can be accurately located on the digital imagery. Some features that make good control points are highway intersections and distinct shoreline features. In the correction process numerous GCPs are located both in terms of their two image coordinates (column, row numbers) on the distorted image and in terms of their ground coordinates (typically measured from a map in terms of UTM coordinates or latitude and longitude). These values are then submitted to a least-squares regression analysis to determine coefficients for two coordinate transformation equations that can be used to interrelate the geometrically correct (map) coordinates and the distorted image coordinates. Once the coefficients for these equations are determined, the distorted image coordinates for any map position can be precisely estimated.
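The least-squares step described above can be sketched with a first-order (affine) transformation relating image (column, row) coordinates to map (easting, northing) coordinates. The GCP coordinates below are made-up values for illustration; a real correction would use many well-distributed GCPs and often a higher-order polynomial.

import numpy as np

# Hypothetical GCPs: image coordinates (col, row) and map coordinates (E, N)
gcp_img = np.array([[120, 340], [890, 300], [450, 720], [950, 860], [200, 880]], float)
gcp_map = np.array([[500200, 3201800], [502100, 3202000], [501000, 3200900],
                    [502300, 3200500], [500400, 3200450]], float)

# Design matrix for E = a0 + a1*col + a2*row (and similarly for N)
A = np.column_stack([np.ones(len(gcp_img)), gcp_img[:, 0], gcp_img[:, 1]])
coef_E, *_ = np.linalg.lstsq(A, gcp_map[:, 0], rcond=None)
coef_N, *_ = np.linalg.lstsq(A, gcp_map[:, 1], rcond=None)

# Root-mean-square error of the fit at the GCPs, in map units
res = np.column_stack([A @ coef_E, A @ coef_N]) - gcp_map
print("RMSE at GCPs:", round(float(np.sqrt((res ** 2).sum(axis=1).mean())), 2))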
Based on the distortions in satellite images mentioned above, the following methods of geometric correction are used. A flow diagram is given in Figure 2.2.
Systematic correction - When the geometric reference data or the geometry of sensor are given
or measured, the geometric distortion can be theoretically or systematically avoided. For
example, the geometry of a lens camera is given by the collinearity equation with calibrated focal
length, parameters of lens distortions, coordinates of fiducial marks etc. The tangent correction
for an optical mechanical scanner is a type of system correction. Generally systematic correction
is sufficient to remove all errors.
Combined method - Firstly the systematic correction is applied, and then the residual errors will
be reduced using lower order polynomials. Usually the goal of geometric correction is to obtain
an error within plus or minus one pixel of its true position.
Radiometric correction: The energy observed by a sensor depends on the sensor characteristics, the illumination geometry and the atmospheric conditions, all of which can influence the observed energy. Therefore, in order to obtain the real ground irradiance or reflectance, radiometric errors must be corrected for.
When the emitted or reflected electro-magnetic energy is observed by a sensor on board an
aircraft or spacecraft, the observed energy does not coincide with the energy emitted or reflected
from the same object observed from a short distance. This is due to the sun's azimuth and
elevation, atmospheric conditions such as fog or aerosols, sensor's response etc. which influence
the observed energy. Therefore, in order to obtain the real irradiance or reflectance, those
radiometric distortions must be corrected. Further, in order to detect genuine landscape changes
as revealed by changes in surface reflectance from multi-date satellite images, it is necessary to
carry out radiometric correction.
Radiometric Correction and Calibration:
Radiometric calibration is the conversion from the sensor measurement to a physical quantity. In remote sensing, the sensor measures radiance at the top of the atmosphere. The image provider therefore also supplies calibration coefficients to convert from digital number (DN) to radiance. Because the amount of light energy coming from the sun is well known, the radiance is often normalized into a reflectance value (easier to work with because it is bounded between 0 and 1), so this step can also be considered part of the calibration. The calibration thus gives you a reflectance value, but it is the reflectance at the top of the atmosphere (TOA).
Indeed, the proportion of the incident light that is actually reflected by the observed object is affected by different factors (mainly topography and atmospheric thickness). The reflectance measured at TOA therefore needs to be corrected if absolute values are required. This does not depend on the sensor itself, so it is better described as a correction than as a calibration. We need to correct the values measured at TOA in order to estimate the values at the top of the canopy. The process of radiometric correction and calibration is shown in Figure 2.5.
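A minimal sketch of the calibration chain just described, converting at-sensor radiance to top-of-atmosphere reflectance by normalizing with the solar irradiance, is given below. The band solar irradiance (ESUN) and the other input numbers are illustrative only, not values for any particular sensor.

import math

def toa_reflectance(radiance, esun, sun_elevation_deg, earth_sun_dist_au):
    """TOA reflectance = pi * L * d^2 / (ESUN * cos(solar zenith))."""
    solar_zenith = math.radians(90.0 - sun_elevation_deg)
    return (math.pi * radiance * earth_sun_dist_au ** 2 /
            (esun * math.cos(solar_zenith)))

print(round(toa_reflectance(radiance=85.0, esun=1550.0,
                            sun_elevation_deg=45.0, earth_sun_dist_au=1.01), 3))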
In the case of satellite sensing in the visible and near-infrared portion of the spectrum, it is often desirable to generate mosaics of images taken at different times or to study the changes in the reflectance of ground features at different times or locations. In such applications, it is usually necessary to apply a sun elevation correction and an earth-sun distance correction. The sun elevation correction accounts for the seasonal position of the sun relative to the earth. Through this process, image data acquired under different solar illumination angles are normalized by calculating pixel brightness values as if the sun were at the zenith on each date of sensing. The correction is usually applied by dividing each pixel value in a scene by the sine of the solar elevation angle for the particular time and location of imaging.
Radiometric correction is classified into the following three types:
Radiometric correction for sensor sensitivity:
In the case of optical sensors using a lens, the fringe area in the corners will be darker than the central area. This is called vignetting. Vignetting can be expressed as cos^n θ, where θ is the angle of a ray with respect to the optical axis; n depends on the lens characteristics, though n is usually taken as 4. In the case of electro-optical sensors, measured calibration data relating irradiance to the sensor output signal can be used for radiometric correction.
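The cos^n θ fall-off can be divided out once the off-axis angle of each pixel is known. The sketch below builds a synthetic angle map from an assumed half field of view and applies the usual n = 4 correction; it is illustrative only and ignores the measured calibration data mentioned above.

import numpy as np

rows, cols = 512, 512
half_fov_deg = 5.0                                   # assumed half field of view
y, x = np.mgrid[0:rows, 0:cols]
r = np.hypot(y - rows / 2, x - cols / 2)
theta = np.radians(half_fov_deg) * r / r.max()       # off-axis angle per pixel
falloff = np.cos(theta) ** 4                         # cos^4 vignetting model

observed = np.full((rows, cols), 100.0) * falloff    # uniform scene as recorded
corrected = observed / falloff                       # vignetting divided out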
Radiometric correction for sun angle and topography:
Sun spot - The solar radiation will be reflected diffusely onto the ground surface, which results
in lighter areas in an image. It is called a sun spot. The sun spot together with vignetting effects
can be corrected by estimating a shading curve which is determined by Fourier analysis to
extract a low frequency component (Figure 2.6).
Shading - The shading effect due to topographic relief can be corrected using the angle between
the solar radiation direction and the normal vector to the ground surface.
Ignoring atmospheric effects, the combined influence of solar zenith angle and earth-sun distance
on the irradiance incident on the earth's surface can be expressed as
E = (Eo cos θo) / d²     (1)
where,
E = normalized solar irradiance
Eo = solar irradiance at mean earth-sun distance
θo = sun's angle from the zenith
d = earth-sun distance, in astronomical units
(Information on the solar elevation angle and earth-sun distance for a given scene is normally part of the ancillary data supplied with the digital data.)
Atmospheric effects compound the influence of solar illumination variation. The atmosphere
affects the radiance measured at any point in the scene in two contradictory ways. First, it
attenuates (reduces) the energy illuminating a ground object. Second, it acts as a reflector itself,
adding a scattered, extraneous "path radiance" to the signal detected by a sensor. Thus, the
composite signal observed at any given pixel location can be expressed by

Ltot = ρET/π + Lp     (2)

where, Ltot = total spectral radiance measured by the sensor; ρ = reflectance of the object; E = irradiance on the object; T = transmission of the atmosphere; and Lp = path radiance.
Only the first term in the above equation contains valid information about ground reflectance.
The second term represents the scattered path radiance, which introduces "haze" in the imagery
and reduces image contrast. Haze compensation procedures are designed to minimize the
influence of path radiance effects. One means of haze compensation in multi-spectral data is to
observe the radiance recorded over target areas of essentially zero reflectance. For example, the
reflectance of deep clear water is essentially zero in the near-infrared region of the spectrum.
Therefore, any signal observed over such as area represents the path radiance, and this value can
be subtracted from all pixels in that band.
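This dark-object (deep clear water) approach to haze compensation can be sketched in a few lines: estimate the path radiance as the signal over the dark target and subtract it from the whole band. The arrays and the water mask below are synthetic placeholders.

import numpy as np

nir_band = np.random.default_rng(1).integers(10, 200, size=(100, 100)).astype(float)
water_mask = np.zeros(nir_band.shape, dtype=bool)
water_mask[80:, 80:] = True                       # assumed deep clear-water area

path_radiance = nir_band[water_mask].min()        # darkest value over water ~ haze signal
corrected = np.clip(nir_band - path_radiance, 0, None)
print("Estimated path radiance (DN):", path_radiance)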
Normally, detectors and data systems are designed to produce a linear response to incident
spectral radiance. For example, Fig.2.2 shows the linear radiometric response function typical of
an individual TM channel. Each spectral band of the TM has its own response function, and its
characteristics are monitored using onboard calibration lamps (and temperature references for the
thermal channel). The absolute spectral radiance output of the calibration sources is known from
pre-launch calibration and is assumed to be stable over the life of the sensor. Thus, the onboard
calibration sources form the basis for constructing the radiometric response function by relating
known radiance values incident on the detectors to the resulting DNs.
DN = GL + B (3)
where,
DN = digital number value recorded
G = slope of response function (channel gain)
L = spectral radiance measured (over the spectral bandwidth of the channel)
B = intercept of response function (channel offset)
Note that the slope and intercept of the above function are referred to as the gain and offset of the
response function, respectively.
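Inverting the response function above gives the usual DN-to-radiance conversion, L = (DN - B) / G. The gain and offset used below are illustrative values, not the coefficients of any particular TM channel.

import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Invert DN = gain * L + offset to recover spectral radiance L."""
    return (np.asarray(dn, dtype=float) - offset) / gain

dn = np.array([[12, 87], [145, 230]])
print(dn_to_radiance(dn, gain=1.2, offset=3.0))   # radiance in the band's units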
Often the LMAX and LMIN values published for a given sensor are expressed in units of mW cm⁻² sr⁻¹ µm⁻¹. That is, the values are often specified in terms of radiance per unit wavelength. To estimate the total within-band radiance in such cases, the value obtained from Eq. 5 must be multiplied by the width of the spectral band under consideration. Hence, a precise estimate of within-band radiance requires detailed knowledge of the spectral response curves for each band.
Image noise is any unwanted disturbance in image data that is due to limitations in the sensing,
signal digitization, or data recording process. The potential sources of noise range from periodic
drift or malfunction of a detector, to electronic interference between sensor components, to
intermittent "hiccups" in the data transmission and recording sequence. Noise can either degrade
or totally mask the true radiometric information content of a digital image. Hence, noise removal
usually precedes any subsequent enhancement or classification of the image data. The objective
is to restore an image to as close an approximation of the original scene as possible.
As with geometric restoration procedures, the nature of noise correction required in any given
situation depends upon whether the noise is systematic (periodic), random, or some combination
of the two. For example, multi spectral scanners that sweep multiple scan lines simultaneously
often produce data containing systematic striping or banding. This stems from variations in the
response of the individual detectors used within each band. Such problems were particularly
prevalent in the collection of early Landsat MSS data. While, the six detectors used for each
band were carefully calibrated and matched prior to launch, the radiometric response of one or
more tended to drift over time, resulting in relatively higher or lower values along every sixth
line in the image data. In this case valid data are present in the defective lines, but they must be
normalized with respect to their neighboring observations.
De-striping - Several de-striping procedures have been developed to deal with the type of problem described above. One method is to compile a set of histograms for the image, one for each detector involved in a given band. For MSS data, this means that for a given band, one histogram is generated for scan lines 1, 7, 13, etc., a second is generated for lines 2, 8, 14, etc., and so forth. These histograms are then compared in terms of their mean and median values to identify the problem detector(s). A grey-scale adjustment factor(s) can then be determined to adjust the histogram(s) for the problem lines to resemble those for the normal data lines. This adjustment factor is applied to each pixel in the problem lines and the others are not altered.
Line Drop - Another line-oriented noise problem sometimes encountered in digital data is line drop. In this situation, a number of adjacent pixels along a line (or an entire line) may contain spurious DNs. This problem is normally addressed by replacing the defective DNs with the average of the values for the pixels occurring in the lines just above and below. Alternatively, the DNs from the preceding line can simply be inserted in the defective pixels.
Random noise - Random noise problems in digital data are handled quite differently from those discussed to this point. This type of noise is characterized by non-systematic variations in grey levels from pixel to pixel, called bit errors. Such noise is often referred to as being "spikey" in character, and it causes images to have a "salt and pepper" or "snowy" appearance.
Many algorithms have been proposed to remove salt-and-pepper noise. Common salt-and-pepper noise filtering algorithms include: i) Traditional Median (TM) filter algorithms; ii) Extreme Median (EM) filter algorithms; iii) Switching Median (SM) filter algorithms; iv) Adaptive Median (AM) filter algorithms; and v) Adaptive Weight (AW) filter algorithms. Based on various research papers, the TM filter algorithm is simple and fast, but it cannot effectively remove salt-and-pepper noise or protect edges and details in high-noise-density cases. EM, SM and AM filter algorithms are sensitive to noise density; their filtering performance worsens as the noise density increases. An adaptive weight (AW) filter algorithm has been proposed in which the output is a weighted sum of the image and a de-noising factor, with weighting coefficients that depend on a state variable. The state variable is the difference between the current pixel and the average of the remaining pixels in the surrounding window. Because the coefficients vary, it is difficult to select an appropriate one.
The decision-based adaptive weight algorithm is based on the following steps:
i. Check for the pixels that are noisy in the satellite image, i.e. pixels with the values 0 or 255 are to be considered.
ii. For each such noisy pixel P, a window of size 3x3 neighbouring the pixel P is taken.
iii. Find the absolute differences between the pixel P and the neighbouring pixels of P.
iv. The arithmetic mean of the differences for the given pixel P is calculated.
v. The arithmetic mean is then compared with the threshold value to detect whether the pixel P is a signal pixel or corrupted by noise.
vi. If the arithmetic mean is greater than or equal to the threshold value, the pixel P is considered noisy.
vii. Otherwise the pixel P is considered a signal pixel.
Median filters produce the best results for a mask of size 3x3 at low noise densities, up to about 30%, though the image is considerably blurred. The filter fails to perform well at higher noise densities, and hence the new adaptive weight algorithm can be used for highly corrupted satellite images. The adaptive weight algorithm is as follows (a minimal code sketch is given after the steps):
i. Noise is detected by the noise detection algorithm mentioned above.
ii. Filtering is applied only at those pixels that were detected as noisy.
iii. Once a given pixel P is found to be noisy, the following steps are applied.
iv. A 3x3 mask is centred at the pixel P and it is checked whether there exists at least one signal pixel around the pixel P.
v. If found, the pixel P is replaced by the median of the signal pixels found in the 3x3 neighbourhood of P.
vi. The above steps are repeated if noise is still present in the output image, for better results.
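The sketch below implements a simplified variant of the steps listed above: every pixel with value 0 or 255 is treated as noisy (the mean-difference threshold test is omitted), and each noisy pixel is replaced by the median of the signal pixels in its 3x3 neighbourhood, iterating a few times. The image is synthetic.

import numpy as np

def remove_salt_and_pepper(image, max_passes=5):
    img = image.astype(float).copy()
    for _ in range(max_passes):
        noisy = (img == 0) | (img == 255)          # simplified noise detection
        if not noisy.any():
            break
        changed = False
        for r, c in zip(*np.nonzero(noisy)):
            r0, r1 = max(r - 1, 0), min(r + 2, img.shape[0])
            c0, c1 = max(c - 1, 0), min(c + 2, img.shape[1])
            window = img[r0:r1, c0:c1]
            signal = window[(window != 0) & (window != 255)]
            if signal.size:                        # at least one signal pixel found
                img[r, c] = np.median(signal)
                changed = True
        if not changed:                            # no signal neighbours left to use
            break
    return img

rng = np.random.default_rng(4)
clean = rng.integers(20, 230, size=(64, 64)).astype(float)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1               # 10% salt-and-pepper noise
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
restored = remove_salt_and_pepper(noisy)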
ATMOSPHERIC CORRECTION:
Various atmospheric effects cause absorption and scattering of the solar radiation. Reflected or
emitted radiation from an object and path radiance (atmospheric scattering) should be corrected
for.
Atmospheric Correction Techniques:
The solar radiation is absorbed or scattered by the atmosphere during transmission to the ground
surface, while the reflected or emitted radiation from the target is also absorbed or scattered by
the atmosphere before it reaches a sensor. The ground surface receives not only the direct solar
radiation but also sky light, or scattered radiation from the atmosphere. A sensor will receive not
only the direct reflected or emitted radiation from a target, but also the scattered radiation from a
target and the scattered radiation from the atmosphere, which is called path radiance (Figure
2.7).The atmospheric correction method is classified into the following categories:
The method with ground truth data - At the time of data acquisition, targets with known or measured reflectance are identified in the image. Atmospheric correction can be made by comparing the known value of the target with the image data (output signal). However, the method can only be applied to specific sites with such targets, or to a specific season.
Other method - A special sensor measuring aerosol density or water vapour density is used together with an imaging sensor for atmospheric correction. For example, the NOAA satellites carry not only the AVHRR (Advanced Very High Resolution Radiometer) imaging sensor but also HIRS (High Resolution Infrared Radiation Sounder) for atmospheric correction.
2.4 SUMMARY
Satellite images are not always delivered with accurate geographic information (coordinates) of
features. Apart from positional error, positional displacement/distortion may occur due to the
actual terrain elevation differing from that of the (simple) model used for it, and due to non-zero
off-nadir angles.
Systematic image distortion occurs when coordinates are consistently off by a certain amount across the whole image. In addition, remote sensing analysts must account for systematic distortions associated with the platform motion and the imaging device. For example, the rate at which the earth rotates out from beneath a satellite causes systematic distortion, as does the rate at which off-nadir scale changes across an image. Nonsystematic distortion is another broad category of image distortion. Nonsystematic distortion occurs when random factors cause local variations in image scale and coordinate location.
In most cases, however, your images will have some mix of systematic and nonsystematic distortions in them. The approach you use to remove these distortions (if you remove them at all) depends on user-defined variables and needs, such as which map projection you use to fit a sphere to a flat surface, the degree to which topography is a concern, the level of accuracy you require, etc.
Geometric correction corrects a satellite image for these distortions and ensures that
pixels/features in the image are in their proper position on the earth’s surface. Image levels 3A
and 3B do not typically require this step, as they often represent the highest possible level of
geometric correction. Two fundamental steps, geo-rectification and orthorectification, are
involved in geometric correction.
Geometric correction is to remove geometric distortion, while radiometric correction is to avoid
radiometric errors or distortions. When a sensor on board an aircraft or spacecraft observes the
emitted or reflected electro-magnetic energy, the observed energy does not coincide with the
energy emitted or reflected from the same object observed from a short distance. This is due to
the sun's azimuth and elevation, atmospheric conditions such as fog or aerosols, sensor's
response etc. which influence the observed energy. Therefore, in order to obtain the real
irradiance or reflectance, those radiometric distortions must be corrected. Radiometric
correction is classified on the basis of sensor sensitivity, Sun angle and the atmospheric
condition.
Radiometric correction is done to reduce or correct errors in the digital numbers of images. The
process improves the interpretability and quality of remotely sensed data. Radiometric calibration
and correction are particularly important when comparing data sets over multiple time periods.
The process of rectification reassigns coordinates to pixels. Although a spherical surface can
never be shown as a flat image without some distortion, different approaches (projections) can be
used to minimize certain types of distortion. Systematic distortions can be minimized using
equations that adjust the pixel locations systematically across the entire image. Often some of
the major systematic distortions will have been removed before you even get the imagery. For
example, it is common to purchase IRS and other images which have already been corrected to
account for the rotation of the earth beneath the satellite, which is why the image has the outline
of a parallelogram rather than a right-angled rectangle.
ERDAS Imagine provides several approaches for rectifying your image beyond what has already
been done by the data provider. In general terms, these approaches are rubber sheeting
adjustments, camera adjustments and polynomial adjustments. Rubber sheeting adjustments
(called triangulation adjustments in other remote sensing software packages) break the image up
into many smaller pieces and apply a correction to each of these pieces. This is appropriate where
nonsystematic distortion is severe and pixel locations shift in a quasi-random way across the
image. A sketch of a simple first-order polynomial adjustment is given below.
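The following is a hedged illustration of the polynomial idea (not ERDAS Imagine's own implementation): a first-order polynomial (affine) transform is fitted to a handful of ground control points, after which any pixel location can be converted to map coordinates. All coordinates shown are made up for illustration.

import numpy as np

# Ground control points: pixel (column, row) positions and their map coordinates (x, y).
pixel  = np.array([[100, 200], [850, 240], [120, 900], [800, 880]], dtype=float)
map_xy = np.array([[500100.0, 4100200.0], [500850.0, 4100240.0],
                   [500120.0, 4099500.0], [500800.0, 4099520.0]])

# First-order polynomial: x = a0 + a1*col + a2*row, y = b0 + b1*col + b2*row.
A = np.column_stack([np.ones(len(pixel)), pixel[:, 0], pixel[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)

def pixel_to_map(col, row):
    """Apply the fitted affine transform to any pixel location."""
    return (coef_x[0] + coef_x[1] * col + coef_x[2] * row,
            coef_y[0] + coef_y[1] * col + coef_y[2] * row)

Higher-order polynomials follow the same least-squares pattern, but require more ground control points and can introduce large distortions away from the control points.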
2.5 GLOSSARY
ERDAS IMAGINE - A widely used commercial remote sensing and image processing software
package.
EPIPOLAR-Epipolar geometry is the geometry of stereo vision. When two cameras view a
3D scene from two distinct positions, there are a number of geometric relations between the
3D points and their projections onto the 2D images that lead to constraints between the
image points. These relations are derived based on the assumption that the cameras can be
approximated by the pinhole camera model.
2.7 REFERENCES
1. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 55, no 9.
2. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 56, no 1.
3. Teillet, P.M., 1986. Image correction for radiometric effects in remote sensing. Int. J. Remote
Sensing, 7(12), pp. 1637- 1651.
4. Linea S. Hall, Paul R, Krausman and Michel L. Morrison 1991. The Habitat concept and a
plea for standard terminology. Ecologialab. Tripod.com
5. https://www.sciencedirect.com/topics/earth-and-planetary-sciences/rectificatio
6. https://en.wikipedia.org/wiki/Image_rectification
7. https://en.wikipedia.org/wiki/Image_geometry_correction
8. http://www.seos-project.eu/modules/remotesensing/remotesensing-c0
9. http://www2.gi.alaska.edu/~rgens/teaching/asf_seminar/corrections.pdf
10. https://gisgeography.com/atmospheric-correction/
3.1 OBJECTIVES
3.2 INTRODUCTION
3.3 THERMAL INFRA-RED IMAGES
3.4 SUMMARY
3.5 GLOSSARY
3.6 ANSWER TO CHECK YOUR PROGRESS
3.7 REFERENCES
3.8 TERMINAL QUESTIONS
3.1 OBJECTIVES
After reading this unit you will be able to understand:
Definitions and Concepts of Thermal infra-red images.
Principles of Optical and Thermal Infrared Remote Sensing.
Physical Principles of Thermal Infrared Remote Sensing.
Thermal Sensor.
Thermal Images.
Applications of Thermal infrared Remote Sensing.
3.2 INTRODUCTION
Before starting this topic, recall that you have already learnt about remote sensing based on
electromagnetic radiation in the visible, near infrared, middle infrared and far infrared
wavelength ranges. In this unit, you will learn the concepts, principles, properties and
applications of thermal infrared images and thermal remote sensing, which is based on long-wave
thermal infrared radiation in the range of 3-14 μm.
Let’s start with a little background. Our eyes see reflected light. Daylight cameras, night vision
devices, and the human eye all work on the same basic principle: visible light energy hits
something and bounces off it; a detector then receives it and turns it into an image. Whether in an
eyeball or in a camera, these detectors must receive enough light or they can’t make an image.
Obviously, there isn’t any sunlight to bounce off anything at night, so they’re limited to the light
provided by starlight, moonlight and artificial lights. If there isn’t enough, they won’t do much to
help you see.
Thermal energy comes from a combination of sources, depending on what you are viewing at the
time. Some things – warm-blooded animals (including people!), engines, and machinery, for
example – create their own heat, either biologically or mechanically. Other things – land, rocks,
buoys, vegetation – absorb heat from the sun during the day and radiate it off during the night.
Everything we encounter in our day-to-day lives gives off thermal energy, even ice. The hotter
something is, the more thermal energy it emits. This emitted thermal energy is called a “heat
signature”. Thermal remote sensing is the branch of remote sensing that deals with the acquisition,
processing and interpretation of data acquired primarily in the thermal infrared (TIR) region of
the electromagnetic (EM) spectrum. In thermal remote sensing we measure the radiations
'emitted' from the surface of the target, as opposed to optical remote sensing where we measure
the radiations 'reflected' by the target under consideration. Useful reviews on thermal remote
sensing are given by Kahle (1980), Sabins (1996) and Gupta (1991).
It is a well known fact that all natural targets reflect as well as emit radiations. In the TIR region
of the EM spectrum, the radiations emitted by the earth due to its thermal state are far more
intense than the solar reflected radiations and therefore, sensors operating in this wavelength
region primarily detect thermal radiative properties of the ground material. However, as also
discussed later in this article, very high temperature bodies also emit substantial radiations at
shorter wavelengths. As thermal remote sensing deals with the measurement of emitted
radiations, for high-temperature phenomena the realm of thermal remote sensing broadens to
encompass not only the TIR but also the shortwave infrared (SWIR), near infrared (NIR) and, in
extreme cases, even the visible region of the EM spectrum.
Thermal remote sensing, in principle, is different from remote sensing in the optical and
microwave region. In practice, thermal data prove to be complementary to other remote sensing
data. Thus, though still not fully explored, thermal remote sensing reserves potentials for a
variety of applications.
Incoming shortwave radiation from the Sun, which includes ultraviolet, visible, and a portion of
infrared energy reaches the Earth. As learned in previous modules, some of this energy is
reflected and absorbed by the atmosphere and some eventually reaches the surface. At the
surface portions of this energy are reflected and absorbed depending on the material types. The
shortwave energy that is absorbed by the atmosphere and the surface is converted to kinetic
energy and then emitted as long wave or thermal radiation. Most of the emitted long wave
radiation warms the lower atmosphere, which in turn warms the Earth's surface. In thermal
remote sensing there is often a need to compensate and correct for atmospheric interaction and
emitted energy. Before discussing these topics in detail, a few basic terms are defined below.
Imaging Camera:
A thermal imaging camera (also called thermographic camera or an infrared
camera or infrared thermography) is a device that forms a heat zone image using infrared
radiation, similar to a common camera that forms an image using visible light. Instead of the
400–700 nanometer range of the visible light camera, infrared cameras operate
in wavelengths as long as 14,000 nm (14 µm).
A thermographic camera (or infrared camera) detects infrared light (or heat) invisible to the
human eye. That characteristic makes these cameras incredibly useful for many applications,
including security, surveillance and military uses, in which targets can be tracked in dark,
smoky, foggy or dusty environments, or even when they are hidden behind cover.
Thermal imaging:
Thermal imaging is simply the process of converting infrared (IR) radiation (heat) into
visible images that depict the spatial distribution of temperature differences.
Thermal imaging is a method of night vision that collects the infrared radiation from objects
in the scene and creates an electronic image.
Thermal infrared image is the product of electromagnetic spectrum which has a wavelength
of between 3.0 and 20 micrometers.
Most remote sensing applications make use of the 8 to 13 micrometer range.
The main difference between thermal infrared image and the infrared (color infrared - CIR)
image is that thermal infrared is emitted energy that is sensed digitally, whereas the near
infrared (also called the photographic infrared) is reflected energy.
Absorption by water and other gases in the atmosphere restricts sensors to record thermal
images in two wavelength windows - 3 to 5 µm and 8 to 15 µm.
For this reason, thermal IR imagery is difficult to interpret and process because there is
absorption by moisture in the atmosphere.
CONCEPTS:
Thermal Infrared Remote Sensing:
The basic strategy for sensing electromagnetic radiation is clear. Everything in nature has its own
unique distribution of reflected, emitted and absorbed radiation. These spectral characteristics, if
ingeniously exploited, can be used to distinguish one thing from another or to obtain information
about shape, size and other physical and chemical properties. It is being explained in the next
unit.
Because different materials absorb and radiate thermal energy at different rates, an area that we
think of as being one temperature is actually a mosaic of subtly different temperatures. This is
why a log that has been in the water for days on end will appear to be at a different temperature
than the water, and is therefore visible to a thermal imager. Thermal cameras detect these
temperature differences and translate them into image detail. While all this can seem rather
complex, the reality is that modern thermal cameras are extremely easy to use. Their imagery is
clear and easy to understand, requiring no training or interpretation.
It is important to remember that thermal infrared refers to emitted rather than reflected infrared
energy. Thermal remote sensing is a type of passive remote sensing, since it measures naturally
emitted energy.
In remote sensing we examine the near-infrared from 0.7–0.9 μm, the middle or shortwave
infrared (SWIR) from 0.9–1.3 μm, the far-infrared from 1.3–3 μm and the longwave or thermal
infrared from 3–14 μm (Figure 3.1). The near infrared, middle infrared and shortwave infrared
sensors measure reflected infrared light.
The concept of thermal infrared radiation, along with other radiation phenomena, is shown in
figure 3.2. Radiation in the 3-14 μm range is sensed as heat, even though the temperature
differences involved may be small.
Figure 3.2: Portion of Thermal Infrared Radiation (3-14μm) highlighting the extended
form of increasing wave length and frequency (energy)
Thermal images are the product of thermal cameras, sensors and electronic scanners which make
pictures from heat, not visible light. Heat (also called infrared, or thermal energy) and light are
both parts of the electromagnetic spectrum, but a camera that can detect visible light won’t see
thermal energy, and vice versa. Thermal cameras detect more than just heat though; they detect
tiny differences in heat – as small as 0.01°C – and display them as shades of grey in black and
white pictures/ videos.
The reflected and emitted energy is recorded in the wavelength ranges of 0.4-0.7 μm (visible,
reflected energy only), 0.7-3 μm (infrared, both reflected and emitted energy) and 3-14 μm
(thermal infrared, emitted energy only). The recorded energy is calibrated and transmitted to the users.
Since remote sensing involves the detection and measurement of radiations of different
wavelengths reflected or emitted from distant objects or materials, it involves the following four
basic components:
(i) The energy source.
(ii) The transmission path.
(iii) The target.
(iv) The satellite sensor.
Among these, the energy source or electromagnetic energy, is very important, as it serves as the
crucial medium for transmitting information from the target to the sensor.
Both reflected and thermal infrared remote sensing are basically multi-disciplinary sciences which
combine various disciplines such as optics, spectroscopy, thermal and non-thermal imaging,
computers, electronics and telecommunication, satellite launching etc. All these technologies are
integrated to act as one complete system, known as a Remote Sensing System. There are a
number of stages in a remote sensing process, and each of them is important for successful
operation. We can summarize the remote sensing process in the following steps, which are also
depicted in figure 3.3.
Sensor data output: The energy recorded by the sensor is transmitted to a receiving and
processing station where the data are processed into an image. You may simply call it data
transmission and processing.
Data transmission, processing and analysis: The processed image is interpreted to extract the
information about the earth surface features. This you may simply call as image processing and
analysis.
Applications: The extracted information is then utilized, to make decisions for solving a
particular problem, concerning the surface or resource.
Figure 3.3: Remote Sensing Processes for Optical and Thermal IR Remote Sensing
You may understand the above figure based on the following lines:
Light coming from the Sun in the form of electromagnetic radiations (Energy Source) is incident
on the earth surface (after interaction with atmosphere) supporting various land use and cover
types like forest, water bodies, grasses, roads, built-up area, agriculture, bare soil etc. Upon
incidence, parts of the light are reflected, refracted and absorbed at the earth’s surface
(interaction of EMR with the earth’s surface). The reflected and radiated energy after
absorption/interaction on the earth surface passes through the atmospheric (after interaction with
atmosphere) windows (to be explained in the next unit) and are received by the sensor mounted
on the satellite (Transmission of energy from the surface to the remote sensor). The energy
variability of earth surface objects is converted into electrical impulses (signals - sensor data
output) by the detectors in different wavelengths (wave bands; to be explained in the next
unit). Those electrical impulses are stored in High Density Digital Tapes (HDDT; to be
explained in the next unit) fixed in the satellite and transmitted to the ground station whenever
the satellite passes near the ground station (data transmission). Those signals/digital
data after various corrections and preliminary processing (Preliminary processing of data) are
used for various applications and distributed to the users. This has been explained in the previous
chapters.
As mentioned in the previous unit all materials at temperatures above absolute zero (0 K, -
273°C) continuously emit electromagnetic radiation by virtue of their atomic and molecular
oscillations. The total amount of emitted radiation increases with the body’s absolute
temperature and peaks at progressively shorter wavelengths. The earth with its ambient
temperature of ca. 300 K has its peak energy emission in the thermal infrared region at around
9.7 µm (Figure 3.5).
Blackbody Principle:
A blackbody is a hypothetical, ideal radiator that totally absorbs and re-emits all energy incident
upon it. The total energy a blackbody radiates and the spectral distribution of the emitted energy
(radiation curve) depends on the temperature of the blackbody and can be described by:
M = 2πhc² / [λ⁵ (e^(hc/λkT) − 1)]
Where,
M = spectral radiant exitance [W m-2 μm-1]
h = Planck’s constant [6.626 x 10-34 J s]
c = speed of light [2.9979246 x 108 m s-1]
k = Boltzmann constant [1.3806 x 10-23 J K-1]
T = absolute temperature [K]
λ = wavelength [μm]
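As a small numerical illustration of the blackbody equation above, the following sketch evaluates the spectral radiant exitance at the Earth's ambient temperature of about 300 K and locates the peak wavelength; the wavelength grid and function name are arbitrary choices.

import numpy as np

h = 6.626e-34      # Planck's constant [J s]
c = 2.9979246e8    # speed of light [m s-1]
k = 1.3806e-23     # Boltzmann constant [J K-1]

def spectral_exitance(wavelength_um, temp_k):
    """Blackbody spectral radiant exitance M [W m-2 um-1]."""
    lam = wavelength_um * 1e-6                         # micrometres to metres
    m = 2 * np.pi * h * c**2 / (lam**5 * (np.exp(h * c / (lam * k * temp_k)) - 1.0))
    return m * 1e-6                                    # per metre -> per micrometre

wavelengths = np.linspace(3, 14, 200)                  # thermal infrared region [um]
m_300k = spectral_exitance(wavelengths, 300.0)
print("Peak near %.1f um" % wavelengths[np.argmax(m_300k)])   # roughly 9.7 um, as stated above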
Stefan-Boltzmann Law:
The Stefan-Boltzmann law describes the total electromagnetic radiation emitted by a blackbody
as a function of its absolute temperature; it corresponds to the area under the radiation curve
(the integral over all wavelengths).
M = σT⁴
Where,
M = total radiant exitance [W m-2]
T = absolute temperature [K]
σ = Stefan-Boltzmann constant [5.6697 x 10-8 W m-2 K-4]
The higher the temperature of the radiator, the greater the total amount of radiation (Figure 3.6).
Figure 3.6: Black Body Radiance Curves with different stages of Temperature
Emissivity:
In fact, objects in the real world are not perfect blackbodies. Not all of the incident energy upon
them is absorbed; therefore they are not perfect emitters of radiation. The emissivity (ε) of a
material is the relative ability of its surface to emit heat by radiation. Emissivity is defined as the
ratio of the energy radiated from an object's surface to the energy radiated from a blackbody at
the same temperature.
Figure 3.7: Wien's displacement law - the wavelength of the peak of the blackbody radiation
curve gives a measure of temperature
Emissivity values can range from 0 to 1. A blackbody has an emissivity of 1, while a perfect
reflector or white body has an emissivity of 0. Most natural objects are considered "graybodies"
as they emit a fraction of their maximum possible blackbody radiation at a given temperature.
Water has an emissivity close to 1 and most vegetation also has an emissivity close to 1. Many
minerals and metals have emissivities significantly less than 1. Depending on the material,
emissivity can also vary depending on its temperature.
The emissivity of a surface depends not only on the material but also on the nature of the surface.
For example, a clean and polished metal surface will have a low emissivity, whereas a roughened
and oxidized metal surface will have a high emissivity. Two materials lying next to one another
on the ground could have the same true kinetic temperature but have different apparent radiant
temperatures when sensed by a thermal radiometer simply because their emissivities are
different. Emissivity can be used to identify mineral composition. Knowledge of surface
emissivity is also important for obtaining accurate true kinetic temperature measurements from
radiometers.
Going back to the Stefan-Boltzmann law, the blackbody radiation principles can be extended to
real-world materials by including the emissivity factor in the equation.
M = εσT⁴
where,
M = Total energy emitted from the surface of a material
ε = Emissivity
σ = Stefan-Boltzmann constant
T = Temperature of the emitting material in Kelvin
Thermal sensors measure the radiant temperatures of objects. The true kinetic temperature of an
object can be estimated from the radiant temperature if the emissivity of the object is known.
Trad = ε^(1/4) Tkin
where,
T rad = Radiant Temperature
T kin = Kinetic Temperature
For example, if we measure the radiant temperature of dry soil to be 293.8 K and we know the
emissivity is 0.92, we can determine the true kinetic temperature:
Trad = ε^(1/4) Tkin
293.8 K = 0.92^(1/4) Tkin
293.8 = 0.979 Tkin
Tkin = 293.8 / 0.979 = 300 K or 27°C
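The same worked example can be expressed as a short calculation; the emissivity and radiant temperature are the values given above.

emissivity = 0.92
t_rad = 293.8                          # radiant temperature measured by the sensor [K]

t_kin = t_rad / emissivity ** 0.25     # invert Trad = emissivity^(1/4) * Tkin
print(round(t_kin, 1), "K =", round(t_kin - 273.15, 1), "degC")   # about 300 K, i.e. about 27 degC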
In the TIMS colour composite of Death Valley, different rock and soil types appear in shades of
reds, lavender, and blue-greens; saline soils in yellow; and different saline deposits in blues and
greens (Figure 3.8).
Figure 3.8: Thermal image of TIMS pertaining to Death Valley California. FCC of Band 1
–blue (8.2 - 8.6μm), Band 3 -green (9.0 - 9.4μm) and Band 5-red (10.2 - 11.2 μm)
Landsat:
A variety of the Landsat satellites have carried thermal sensors. The first Landsat satellite to
collect thermal data was Landsat 3; however, this part of the sensor failed shortly after the
satellite was launched. Landsat 4 and 5 included a single thermal band (band 6) on the Thematic
Mapper (TM) sensor with 120 m spatial resolution that has been resampled to 30 m. A similar
band was included on the Enhanced Thematic Mapper Plus (ETM+) on Landsat 7. Landsat 8
includes a separate thermal sensor known as the Thermal Infrared Sensor (TIRS). TIRS has two
thermal bands, Band 10 (10.60-11.19 μm) and Band 11 (11.50-12.51 μm). The TIRS bands are
acquired at 100 m spatial resolution, but are resampled to 30 m in the delivered data products.
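The following is a hedged sketch (not an official USGS routine) of how Landsat 8 TIRS Band 10 digital numbers are commonly converted to at-sensor brightness temperature, using the radiance rescaling and thermal constants that are normally read from the scene's MTL metadata file; the constant values shown are typical placeholders and should be replaced by the values from the actual scene metadata.

import numpy as np

# Rescaling and thermal constants for Band 10, normally taken from the scene MTL file.
ML, AL = 3.342e-4, 0.1     # RADIANCE_MULT_BAND_10, RADIANCE_ADD_BAND_10 (placeholders)
K1, K2 = 774.89, 1321.08   # K1_CONSTANT_BAND_10, K2_CONSTANT_BAND_10 (placeholders)

def brightness_temperature(dn):
    """Digital number -> TOA spectral radiance -> at-sensor brightness temperature [K]."""
    radiance = ML * np.asarray(dn, dtype=float) + AL
    return K2 / np.log(K1 / radiance + 1.0)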
THERMAL IMAGES:
Most thermal images are single-band images and by default are displayed as greyscale images.
Lighter or brighter areas indicate areas that are warmer, while darker areas are cooler. Single-band
thermal images can also be displayed in pseudo-colour to better show the variation in
temperature. Thermal imagery can be used for a variety of applications, including estimating soil
moisture, mapping soil types, determining rock and mineral types, wildland fire management and
identifying leaks or emissions. Multiband colour composites can also be created if multiple
wavelengths of thermal emission are recorded. An example of this is the TIMS image shown
above (Figure 3.8).
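A minimal sketch of displaying a single thermal band both ways, assuming the temperature values are already held in a 2-D NumPy array; the array here is a random stand-in, not real data.

import numpy as np
import matplotlib.pyplot as plt

temperature = np.random.normal(300, 5, size=(200, 200))   # stand-in for a thermal band [K]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(temperature, cmap="gray")        # greyscale: bright = warm, dark = cool
ax1.set_title("Greyscale")
im = ax2.imshow(temperature, cmap="jet")    # pseudo-colour: blue = cool, red = warm
ax2.set_title("Pseudo-colour")
fig.colorbar(im, ax=ax2, label="Temperature [K]")
plt.show()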
Figure 3.10: Thermal imagery of Las Vegas and Lake Mead acquired during the day on
October 12th, 2015 by the Thermal Infrared Sensor (TIRS) on Landsat 8. Greyscale image
is shown on the left, cool areas are dark while light areas are warmer. On the right is a
pseudo-color representation of the same data, temperature is shown as a color gradient,
cool areas are blue and warm areas are red.
The images of day time and night time satellite pass are described below:
Time of Day:
Thermal imagery can be acquired during the day or night but can produce very different results
because of a variety of factors. Some of these factors are thermal conductivity, thermal capacity
and thermal inertia. Thermal conductivity is the property of a material to conduct heat or a
measure of the rate at which heat can pass through a material. For example heat passes through
metals much faster than rocks. Thermal capacity is a measure of how well a material can store
heat, water has a very high thermal capacity. Thermal inertia measures how quickly a material
responds to temperature changes. Based on these factors different materials warm and cool at
different rates during the day and night. This gives rise to a diurnal cycle of temperature changes
for features at the Earth's surface. The diurnal cycle encompasses 24 hours. Beginning at sunrise,
the Earth begins to receive mainly short wavelength energy from the Sun. From approximately
6:00 am to 8:00 pm, the terrain intercepts the incoming short wavelength energy and reflects
much of it back into the atmosphere. Some of this energy is absorbed and then emitted as long-
wave, thermal infrared radiation. Emitted thermal radiation reaches its peak during the day and
usually lags two to four hours behind the midday peak of incoming shortwave radiation, owing to
the time it takes to heat the soil. Daytime imagery can contain thermal “shadows” in areas that are
shaded from direct sunlight. Slopes may receive differential heating depending on their
orientation in relation to the sun (aspect). In the daytime image of the Las Vegas area above, the
topography and topographic shadows are clearly visible.
Figure 3.11: Graph shows the diurnal radiant temperature variation for rocks and soils
compared to water.
There is a diurnal radiant temperature variation for rocks and soils compared to water. Water has
relatively little temperature variation throughout the day. Dry soils and rocks on the other hand
heat up more and at a quicker rate during the day. They also tend to cool more at night compared
to water. Around dawn and sunset the curves for water and soils intersect. This point is known as
the thermal crossover, which indicates times where there is no difference in the radiant
temperature of materials.
Water generally appears cooler than its surrounding in the daytime thermal images and warmer
in nighttime imaging (Figure 3.12 and 3.13). The actual kinetic temperature of the water has not
changed significantly but the surrounding areas have cooled. Trees generally appear cooler than
their surroundings during the day and warmer at night. Paved areas appear relatively warm
during the day and night. Pavement heats up quickly and to higher temperatures than the
surrounding areas during the day. Paved areas also lose heat relatively slowly at night so they are
relatively warmer than surrounding features.
APPLICATIONS:
Following are some specific applications of thermal infrared remote sensing:
Agriculture:
Thermal imaging has been growing fast and playing an important role in various fields of
agriculture starting from nursery monitoring, irrigation scheduling, soil salinity stress detection,
plants disease detection, yield estimation, maturity evaluation and bruise detection of fruits and
vegetables. Thermal Imaging has gained popularity in agriculture due to its higher temporal and
spatial resolution. However, intensive research needs to be conducted on its potential
application in other fields of agriculture (e.g. yield forecasting) that are not yet investigated. In
spite of the fact that it could be used in many agriculture operations during pre-harvest and post-
harvest period, as a noncontact, non-destructive technique, it has some drawbacks viz., it is
more expensive and thermal measurements depend on environmental and weather
conditions. Thus it may not be possible to develop a universal methodology for its application
in agricultural operations since thermal behaviors of crops vary with climatic conditions.
Other areas of application of thermal infrared imaging are detection of water stress in crops and
evapotranspiration in crops and river basins, which are significant inputs in the management of
agricultural practices and integrated watershed management.
Forestry:
Thermal infrared imaging is used in forestry to map and monitor forest cover in terms of
vegetation stress and evapotranspiration which is important in environmental management since
trees and other plants help cool the environment, making vegetation a simple and effective way
to reduce urban heat islands.
Quantitative information about forest canopy structure, biomass, age, and physiological
condition has been extracted from thermal infrared data. Basically, a change in surface
temperature can be measured by an airborne thermal infrared sensor (e.g., TIMS or ATLAS) by
repeatedly flying over the same area a few times. Usually a separation of about 30 minutes
results in a measurable change in surface temperature caused by the change in incoming solar
radiation. Average surface net radiation is measured in situ for the study area and is used to
integrate the effects of the non-radiating fluxes. The change in surface temperature from time t1
to time t2 (i.e., ΔT) is the value that reveals how those non-radiative fluxes are reacting to
radiant energy inputs. The ratio of these two parameters is used to compute a surface property
defined as the Thermal Response Number (TRN), as sketched below.
Terrains containing mostly soil and bare rock have the lowest TRN values, while forests have the
highest. The TRN is a site-specific property that may be used to discriminate among various
types of coniferous forest stands and some of their biophysical characteristics.
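The following is only a rough sketch of the ratio described above, assuming the TRN is computed as the in-situ average net radiation divided by the measured change in surface temperature between the two overpasses; the variable names and numbers are illustrative, not values from any real campaign.

avg_net_radiation = 450.0          # in-situ average surface net radiation [W m-2]
temp_t1, temp_t2 = 302.5, 305.1    # surface temperature at the two overpass times [K]

# Ratio of radiant energy input to the resulting change in surface temperature (t1 -> t2).
trn = avg_net_radiation / (temp_t2 - temp_t1)
print("Thermal Response Number:", round(trn, 1))   # low for bare soil/rock, high for forest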
Forest Fires:
Forest fires are a major cause of degradation of India's forests. While statistical data on fire loss
are weak, it is estimated that the proportion of forest area prone to forest fires annually ranges
from 33% in some states to over 90% in others. About 90% of the forest fires in India are caused
by humans. The normal fire season in India is from February to mid June. India witnessed its
most severe forest fires in recent times during the summer of 1995 in the hills of Uttar Pradesh
and Himachal Pradesh. The fires were very severe and attracted the attention of the whole nation.
An area of 677,700 ha was affected by fires.
Forest fires are characterized by their plumes, their temperature, and their luminosity. Most in-
situ daytime fire sightings result from the observation of smoke generated by fuel combustion,
while most nighttime sightings result from high and unusual luminosity of the burning areas. The
high temperature of the burning areas makes the fires detectable from satellite through thermal
infrared imaging.
Clouds consist of tiny particles of ice or water that have the same temperature as the surrounding
air. Images acquired from aircraft or satellites above cloud banks record the radiant temperature
of the clouds. Energy from the earth's surface does not penetrate the clouds but is absorbed and
reradiated. Smoke plumes, however, consist of ash particles and other combustion products so
fine that they are readily penetrated by the relatively long wavelengths of thermal IR radiation.
In visible and thermal IR images acquired over forest fires even during daytime, it is observed
that the smoke plume completely conceals the ground in the visible image, but terrain features
are clearly visible in the IR image and the burning front has a bright signature. The US Forest
Service uses aircraft equipped with IR scanners that produce image copies in flight, which are
dropped to fire fighters on the ground. These images provide information about the fire location
that cannot be obtained by visual observation through the smoke plumes. IR images are also
acquired after fires are extinguished in order to detect hot spots that could reignite. IR images
are also useful in estimating the burnt area.
Water Resources:
Detection of water stress and retrieval of evapotranspiration are key applications for water
management purposes. Thermal infrared remote sensing has long been recognized as one of the
most feasible means to detect and evaluate water stress and to quantify evapotranspiration over
large areas in a spatially distributed manner.
Water stress is considered to be a major environmental factor limiting plant productivity world-
wide. Water stress develops in plants as evaporative losses cannot be sustained by the extraction
of water from the soil by the roots. Evapotranspiration (ET) is a term used to describe the loss of
water from the Earth’s surface to the atmosphere by the combined processes of evaporation from
surface and transpiration from vegetation. Evapotranspiration depends on the presence of water
and is regulated by the availability of energy, needed to convert liquid water to water vapor, and
to transport vapor from the land surface to the atmosphere. Physiological regulations also occur
in plants through mechanisms controlling water extraction by the roots, water transport in plant
tissue, and water release to the atmosphere via the stomata at the leaf surface (in direct relation
with the mechanisms of CO2 assimilation and photosynthesis).
Water resources may be monitored and managed through detection of water stress in crops and
forests, detection of and quantification of evapotranspiration in crops, river basins and
continents.
Volcanic Eruptions:
Volcanic eruptions pose serious hazards to sensitive ecosystems, transportation and
communication networks, and populated regions. Knowing the mineralogy of a rock or
alluvial surface is critically important to a geologist trying to interpret the geologic, climatic, or
volcanic history of the surface. The utility of TIR remote sensing for geology and mineralogy
has become clear in the past decades and numerous air- and space-based instruments have
become available.
Thermal infrared imaging helps scientists track potentially deadly patterns of heat in and around
some of the world's 1,500 active volcanoes. Thermal infrared data processed to highlight
hotspots can alert volcanologists to volcanic activity before it becomes dangerous, and may one
day help them better forecast eruptions. Assessing volcanic hazards is an important issue, since
about ten percent of the global population lives in the vicinity of active volcanoes.
In high resolution thermal IR images, active volcanoes stand out as bright spots. They become
brighter in a time series of images as they ramp up for an eruption, and the speed with which they
cool down can tell scientists much about their geological composition, which in turn helps them
predict whether the volcanoes will erupt violently.
Scientists already know that volcanoes erupt because of density and pressure. Magma is less
dense than rock and rises to the surface at weak points in the earth's crust. As the magma rises,
water and gases dissolved in it expand rapidly, often causing violent explosions – or volcanic
eruptions. Volcanoes with high silica content are of particular interest, because they tend to
produce more viscous lava, which traps gas bubbles. As the pressure from the bubbles builds
inside the volcano, so does the potential for a powerful and dangerous eruption.
Through thermal infrared images, it is possible to monitor and map eruption clouds, tropospheric
plumes, hot spots and active lava flows. Post-eruptive studies may also make use of thermal
infrared imagery.
3.4 SUMMARY
Thermal infrared sensors can be difficult to calibrate. Changes in atmospheric moisture and
varying emissivities of surface materials can make it difficult to accurately calibrate thermal
data. Thermal IR imagery is difficult to interpret and process because there is absorption of
thermal radiation by moisture in the atmosphere. Most applications of thermal remote sensing
are qualitative, meaning they are not employed to determine the absolute surface temperature but
instead to study relative differences in radiant temperature. Thermal imagery works well to
compare the relative temperatures of objects or features in a scene.
It is important to note that thermal sensors detect radiation from the surface of objects. Therefore
this radiation might not be indicative of the internal temperature of an object. For example the
surface of a water body might be much warmer than the water temperature several feet deep, but
a thermal sensor would only record the surface temperature.
There are also topographic effects to consider. For example in the northern hemisphere, north
facing slopes will receive less incoming shortwave solar radiation from the sun and will therefore
be cooler. Clouds and fog will usually mask the thermal radiation from surface features. Clouds
and fog are generally cooler and will appear darker. Clouds will also produce thermal cloud
shadows, where areas underneath clouds are cooler than the surrounding areas.
Some limitations of thermal remote sensing are clear. On the other hand, from the variety of
possible applications, the advantages and potentials of thermal remote sensing are also obvious.
One more fact that is now clear is that new satellite and airborne platforms with new and
improved thermal sensors also promise to bring more interest and challenge in this relatively less
explored field.
Therefore, there is now a definite need to promote the understanding and use of thermal data by
the scientific and application community. One of the most promising ways to ensure that this
goal is achieved is by introducing the topic of thermal remote sensing in greater depth in remote
sensing educational programmes. The 'spin-off' effect of such a venture would be that more and
more researchers with fresh ideas would explore thermal data and its possibilities.
EPILOGUE
Temperature and emissivity are powerful biophysical variables critical to many investigations. In
the near future thermal infrared remote sensing will become even more important. We now have
very sensitive linear- and area-array thermal infrared detectors that can function in broad thermal
bands or in hyperspectral configurations. Very soon, unmanned aerial vehicles (UAVs) carrying
miniature thermal infrared sensors will be used by the military, by scientists and even by
ordinary people.
3.5 GLOSSARY
TRN - Thermal Response Number
SWIR - Short Wave Infrared
Thermography - Any writing, printing, or recording process involving the use of heat. In
medicine, thermography is a test that uses an infrared camera to detect heat patterns and blood
flow in body tissues. Remote-sensing infrared thermography (IRT) has been advocated as a
possible means of screening for fever in travellers at airports and border crossings. Thermography,
encompassing all of the techniques for the analysis of the infrared radiation emitted or reflected
by objects in the thermal infrared region of the electromagnetic spectrum, provides the
opportunity to analyse both the thermal characteristics of the materials lying on the ground and
many processes related to the exchange of heat between surfaces; its importance has been
demonstrated in a wide range of military, civil, industrial and scientific applications.
Emissivity - The emissivity of the surface of a material is its effectiveness in emitting energy as
thermal radiation.
3.7 REFERENCES
1. Gupta R.P., 1991. Remote Sensing Geology (Berlin-Heidelberg: Springer-Verlag).
2. Kahle A.B., 1980. Surface thermal properties. In Remote Sensing in Geology, edited by B.S.
Siegal and A.R. Gillespie (New York: John Wiley), pp. 257-273.
3. Markham, B.L., Barker J.L., 1986. Landsat MSS and TM post calibration dynamic ranges,
exo-atmospheric reflectances and at-satellite temperatures. EOSAT Landsat Technical Notes 1,
August 1986, Earth Observation Satellite Co. (Lanham, Maryland), pp. 3-8.
4. Prakash A., Gens R., Vekerdy Z., 1999. Monitoring coal fires using multi-temporal night-time
thermal images in a coalfield in North-west China. International Journal of Remote Sensing,
20(14), pp. 2883-2888.
5. Prakash A., Gupta, R.P., 1998. Land-use mapping and change detection in a coal mining area -
a case study of the Jharia Coalfield, India. International Journal of Remote Sensing, 19(3),
pp. 391-410.
6. Prakash A., References on thermal remote sensing.
http://www.itc.nl/~prakash/research/thermal_ref.html
7. Sabins F.F. Jr, 1996. Remote Sensing: Principles and Interpretation, 3rd edn. (New York:
W.H. Freeman).
BLOCK 2 : HYPERSPECTRAL
4.1 OBJECTIVES
4.2 INTRODUCTION
4.3 INTRODUCTION TO HYPERSPECTRAL REMOTE
SENSING
4.4 SUMMARY
4.5 GLOSSARY
4.6 ANSWER TO CHECK YOUR PROGRESS
4.7 REFERENCES
4.8 TERMINAL QUESTIONS
4.1 OBJECTIVES
Recent developments in remote sensing technology have led to advanced techniques such as
imaging spectrometry, which is capable of collecting information in more than 100 contiguous,
spectrally narrow bands. These data are different from traditional remote sensing data and
require unique processing steps.
After going through this unit, you will be able to enhance your knowledge of hyperspectral
remote sensing, its sensors and its main application areas.
4.2 INTRODUCTION
Present-day advancements in remote sensing have opened a new dimension for the
application of imaging spectrometers in various disciplines of science. Imaging spectrometers
acquire images of objects in several fine, contiguous spectral bands throughout the visible, near-
infrared, mid-infrared, shortwave infrared and thermal ranges of the electromagnetic spectrum.
These sensors are presently used for the identification and detection of minerals,
vegetation, mangroves and other objects against their background. Hyperspectral data generally
include information acquired in typically hundreds of spectral bands with quite narrow
bandwidths (5-10 nm), which allows us to form a spectral reflectance curve for each pixel in the
hyperspectral image.
The first portable field spectrometer for the measurement of spectral absorption features,
utilizing a charge-coupled device (CCD), was developed by Alexander Goetz in 1974. It was
followed by the development of imaging spectrometers for use on space- and air-borne
platforms. The basic and simple definition given by Goetz et al. (1985) is still relevant today:
“The acquisition of images in hundreds of continuous registered spectral bands such that for
each pixel a radiant spectrum can be derived”. This description covers the entire wavelength
range from the visible through NIR and SWIR to the longwave infrared; all types of mounting
medium, i.e., ground, air and space platforms; and all types of object, whether liquid, gas or
solid. The originally accepted bandwidth for geological applications and mineral exploration
was approximately 10 nm (Goetz, Rock, & Rowan, 1983).
The main goal of hyperspectral remote sensing is to extract the physical information
of objects from the reflectance data. This technology has become an interdisciplinary science,
drawing on physics, computer science, engineering, aviation, mathematics, geology, statistics,
and atmospheric science.
Hyperspectral data contains spatial and spectral information from materials within a
given scene simultaneously. Every pixel contains spatial information and spectral information
in continuous, narrow spectral bands. A scanning device samples each pixel in several narrow
bands at a specific spatial resolution. The hyperspectral data cube is represented in Fig. 4.2; a
small sketch of how such a cube is handled is given after the figure.
Fig. 4.2Representation of hyperspectral data sets consisting of all geographical and spectral
elements (Janos, 2008).
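A minimal sketch of how the data cube in Fig. 4.2 is typically handled, assuming the cube is a NumPy array ordered (bands, rows, columns) with a matching list of band-centre wavelengths; the array contents here are synthetic stand-ins, not real data.

import numpy as np

n_bands, n_rows, n_cols = 224, 512, 614
cube = np.random.rand(n_bands, n_rows, n_cols)     # stand-in for a reflectance data cube
wavelengths = np.linspace(400, 2500, n_bands)      # band-centre wavelengths [nm]

row, col = 100, 250
spectrum = cube[:, row, col]                       # continuous reflectance spectrum of one pixel

# Each spatial pixel therefore carries a full spectral curve that can be plotted against
# wavelength or compared with reference spectra from a spectral library.
print(wavelengths.shape, spectrum.shape)           # (224,) (224,)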
Whiskbroom sensors are bulkier, heavier and more difficult to build than pushbroom scanners.
Whiskbroom scanners concentrate on a portion of the full swath at any given time, allowing for
higher spatial resolution. Furthermore, whiskbroom sensors have fewer detectors that require
calibration than other scanner systems.
NASA’s Jet Propulsion Laboratory (JPL) conducted the first airborne imaging
spectrometer (AIS) sensor test in November 1982. The instrument used in this survey consisted
of a 32 × 32 mercury cadmium telluride detector array which provided information in 128
spectral bands in the wavelength range of 900 to 2400 nm. The initial AIS flight operations were
carried out over the Cuprite mining district, Nevada, to classify kaolinite and alunite minerals,
proving that characteristic absorption features in the spectral curve can be used to detect and
identify minerals.
Fig. 4.5 For the collection of data, a scanner platform device is used in hyperspectral remote
sensing.(Tan, 2016).
AVIRIS uses a scanning system which collects hyperspectral data in the direction transverse to
the movement of the platform. It can be flown on a number of aircraft, including the NASA
ER-2, which can acquire imagery from a height of 20 km with a spatial resolution of 20 m and a
swath width of 10.5 km. The AVIRIS sensor collects hyperspectral data in 224 spectral bands
with a spectral resolution of 10 nm in the wavelength range of 400 to 2500 nm. The first
commercial imaging spectrometer is considered to be the Canadian Compact Airborne
Spectrographic Imager (CASI), introduced in 1989. It was created by ITRES Corporation
(www.itres.com) and has a spectral range of 400-1000 nm, covering the visible and near-
infrared ranges of the electromagnetic spectrum. Depending on its flying altitude, the CASI
sensor can collect hyperspectral data with a spatial resolution down to 25 cm. The spectral bands
and bandwidths used can be customized to suit the needs of the consumer.
Some representative airborne hyperspectral sensors are listed below:
Probe-1 | 1998 | 0.4–2.5 µm | 128 bands | VNIR: 11, SWIR: 18 nm | 10 | Canada
DAIS-7915 | 2000 | 0.4–2.5 µm | 72 bands | 0.9–60 nm | 78 | GER, USA, and DLR, Germany
ARES | 2005 | 0.4–2.5 µm and 8–12 µm | 128 and 32 bands | 15 and 130 nm | N/A | Australia
APEX | 2014 | 0.38–0.97 µm and 0.94–2.50 µm | 114 and 199 bands | 0.45–7.5 and 5–10 nm | 28 | ESA, Switzerland, Belgium
Fig. 4.7 Hyperion hyperspectral image over Khirbat en-Nahas, Jordan. This area hosts ancient
copper mines, smelting sites and different rock types, which were identified and mapped using
hyperspectral data (NASA, 2016).
The spectral characteristics of a satellite-based hyperspectral sensor are close to those of airborne
sensors, so airborne data can be used to create application data products prior to the satellite’s
launch.
Hyperion and CHRIS were significant developments in spaceborne hyperspectral sensing. These
sensors demonstrated the possibilities of hyperspectral data in various scientific applications.
The Indian HySI, the Chinese HJ-1A and NASA’s HICO all operate in the 400-950 nm spectral
range (VNIR). The prototype pushbroom instrument HySI, launched by ISRO in April 2008,
offers a total of 64 spectral bands and is primarily used for resource characterization and detailed
studies.
The Chinese HJ-1A carries a hyperspectral imager (HSI) with a short revisit period and global
coverage at a ±30-degree side-view angle. It acquires hyperspectral imagery in 115 spectral
bands spanning the wavelength range of 450-950 nm with a spectral resolution of 4.32 nm and a
spatial resolution of 100 m over a 50 km swath. The higher spectral resolution of the HJ-1A/HSI
sensor offers better ground feature identification. It is a valuable tool for developing quantitative
research applications like measuring the composition of the atmosphere, water management,
and forest productivity monitoring.
NASA’s Hyperspectral Imager for the Coastal Ocean (HICO) was the first hyperspectral imager
designed for coastal study/research. It was designed to collect data on coastal geometry, bottom
types, surface water optical properties, and marine flora. The sensor collects hyperspectral data
with a spectral resolution of 5.7 nm in the wavelength range of 400-900 nm. On September 23,
2009, HICO was installed on the International Space Station (ISS) and obtained about 10,000
images during the next five years of service, before the spectrometer was damaged by an X-class
solar storm in September 2014, resulting in its complete failure.
The next decade witnessed the development of several hyperspectral sensors for imaging the
Earth’s surface. The EnMAP satellite was developed by the German space agency, with launch
planned for 2018. Other significant spaceborne missions include the joint US–Brazil Flora
hyperspectral satellite, PRISMA from Italy, MSMI from South Africa, and HyspIRI from
NASA JPL.
The main characteristics of these spaceborne hyperspectral sensors are summarized below:
Sensor | Year | Spectral range (µm) | No. of bands | Spectral resolution (nm) | Spatial resolution (m) | Swath (km) | Agency, Country
CHRIS | 2001 | 0.415–1.050 | 19/63 | 5–12 | 17/34 | 13 | ESA, UK
HySI | 2008 | 0.400–0.950 | 64 | 8 | 506 | 130 | ISRO, India
HJ-1A | 2008 | 0.459–0.950 | 115 | 5 | 100 | 51 | CAST, China
HICO | 2009 | 0.380–0.960 | 128 | 5.7 | 90 | 42 | NASA/ONR, USA
Flora | 2016 | 0.400–2.500 | 200 | 10 | 30 | 150 | NASA/JPL, USA, and INPE, Brazil
PRISMA | 2017 | VNIR: 0.40–1.01, SWIR: 0.92–2.05 | VNIR: 66, SWIR: 171 | 10 | 30 | 30 | ASI, Italy
EnMAP | 2018 | VNIR: 0.42–1.00, SWIR: 0.90–2.45 | VNIR: 89, SWIR: 155 | VNIR: 8.1 + 1, SWIR: 12.5 + 1.5 | 30 | 30 | GFZ/DLR, Germany
HISUI (ALOS-3) | 2018 | VNIR: 0.40–0.97, SWIR: 0.90–2.50 | VNIR: 57, SWIR: 128 | VNIR: 10, SWIR: 12.5 | 30 | 30 | Japan
HYPXIM-CB | 2018 | 0.400–2.500 | N/A | 14 | 15 | 15 | CNES, France
HYPXIM-CA | 2020 | 0.400–2.500 | N/A | 10 | 1 | 30 | CNES, France
HyspIRI | 2020 | 0.380–2.500 | >200 | 10 | 30 | 30 | NASA/JPL, USA
HERO | >2016 | 0.400–2.500 | >200 | 10 | 30 | 30 | CSA, Canada
MSMI | >2016 | 0.400–2.350 | 200 | 10 | 15 | 15 | SunSpace, South Africa
Hyperspectral data can also reveal subtle differences in ore grade and composition. The Visible
and Near-Infrared (VNIR) region shows characteristic spectral absorption features due to the
presence of transition metals such as Fe, Mn, Cu, Ni, Cr etc. The shortwave infrared region is
useful for the identification of minerals composed of hydroxyls and carbonates. Due to the
coarse bandwidth and low spectral resolution of multispectral remote sensing data, they are of
limited use for determining precise mineral composition and the relative abundance of
constituents. These limitations of multispectral data are overcome by hyperspectral remote
sensing data owing to their narrow bandwidth and high spectral resolution. Hyperspectral remote
sensing technology is used in geological applications such as lithological mapping, mineral
exploration, mapping of hydrothermal alteration zones and related mineral deposits, and
hydrocarbon exploration. The thermal infrared region is useful for the identification of rock-
forming minerals such as quartz, feldspar, amphibole, iron-bearing minerals, dolomite etc.
Ecological studies are the study of organisms and their environments, which include
both biotic and abiotic elements. Identification, mapping and management of ecology is
laborious due to the varied nature of biodiversity. Hyperspectral data provide abundant spectral
information about biotic and abiotic components. Hyperspectral remote sensing has been applied
successfully for obtaining information about leaf pigments, water content, chemical composition
and for discrimination between different species. The spectral library of the United
States Geological Survey is useful for validating land cover classification, characterization
and change detection. Hyperspectral remote sensing allows for precise and reliable
information about ecological processes, monitoring of forest area, grassland, vegetation, land
cover classification. Hyperspectral remote sensing can estimate sensitive ecology factors such
as leaf nutrient, water content, leaf area, plant leaves drought responses, woody tissues and
soil pollution accumulation, vegetative growth patterns, and land use / cover changes.
Hyperspectral data also support spectral library generation. The narrow spectral resolution of
hyperspectral data has been found valuable for the mapping and surveillance of coastal zones.
Hyperspectral sensors can help with coastal ecosystem and marine resource planning and
monitoring, coastal conservation, coastal water management, coastal disasters, and coastal area
management.
4.4 SUMMARY
Throughout the visible, near-IR, mid-IR and shortwave infrared portions of the
electromagnetic spectrum, hyperspectral sensors capture images in several small, contiguous spectral
bands. Hyperspectral remote sensing is a powerful tool for applications in environment, forestry,
agriculture and geosciences. Imaging spectrometers can be carried on satellites, aircraft, UAVs and
other platforms. Airborne sensors provide higher spectral and spatial resolution, whereas space-borne
sensors have the advantage of repetitiveness and global coverage. Hyperspectral sensors can be of
two types: pushbroom (along-track scanners) or whiskbroom (across-track scanners).
NASA-JPL flew the first airborne imaging spectrometer (AIS) in November 1982, and
hyperspectral data were collected over the Cuprite mining district in Nevada. This led to the
development of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and of satellite-based
imaging spectrometers. Hyperspectral data have a wide range of uses in research. Crop forecasting,
cropping systems, precision farming, horticulture, irrigation management, and watershed production
are all application areas of hyperspectral remote sensing. In mineral exploration and geology,
hyperspectral remote sensing finds application in the identification and mapping of hydrothermally
altered/weathered minerals and zones and is also used in reconnaissance surveys. Hyperspectral data
are used in the identification, mapping, classification and management of biodiversity. Hydrological
concerns such as deterioration of water quality and coastal zone management issues can also be
addressed by the application of hyperspectral remote sensing.
4.5 GLOSSARY
ENVI- Environment for Visualizing Images
4.7 REFERENCES
1. (NASA/JPL-Caltech). 2014. AVIRIS Airborne Visible/ Infrared Imaging Spectrometer.
The National Aeronautics and Space Administration.
2. Exelis. 2013. Vegetation Analysis: Using Vegetation Indices in ENVI (2013).
3. Goetz, A. F.H., Rock, B. N., & Rowan, L. C. 1983. Remote sensing for exploration: an
overview. Economic Geology.
4. Goetz, Alexander F.H., Vane, G., Solomon, J. E., & Rock, B. N. 1985. Imaging
spectrometry for earth remote sensing. Science, 228: 1147–1153.
5. Janos, T. 2008. Geoinformatics | Digitális Tankönyvtár.
6. NASA. 2016. Earth Observing-1: Ten Years of Innovation.
7. Shaw, G. A., & Burke, H. K. 2003. Spectral Imaging for Remote Sensing. LINCOLN
LABORATORY JOURNAL (Vol. 14).
8. Tan, S.-Y. 2016. Developments in Hyperspectral Sensing. In: Handbook of Satellite
Applications. Springer New York: pp. 1–21.
5.1 OBJECTIVES
5.2 INTRODUCTION
5.3 CHARACTERISTICS OF HYPERSPECTRAL DATA,
SPECTRAL IMAGE LIBRARY
5.4 SUMMARY
5.5 GLOSSARY
5.6 ANSWER TO CHECK YOUR PROGRESS
5.7 REFERENCES
5.8 TERMINAL QUESTIONS
5.1 OBJECTIVES
To identify, map and characterize a material remotely, we need to understand the
characteristics of hyperspectral data.
After going through this unit, you will be able to enhance your knowledge of hyperspectral
datasets and of the spectral libraries used to interpret them.
5.2 INTRODUCTION
The remote identification and mapping of surface features are made much easier by
hyperspectral datasets. A hyperspectral sensor acquires data in 100 to 200 spectral bands with
narrow bandwidths (5-10 nm), which enables us to build a continuous reflectance spectrum for
each pixel in a scene. The main characteristics of hyperspectral data sets are their spectral and
radiometric resolution. A multispectral dataset enables only broad identification of materials
because of its wide spectral bands, whereas the fine sampling interval, or narrow spectral
resolution, of hyperspectral sensors provides data with which we can not only identify a material
but also differentiate between similar materials or rocks.
The USGS spectral library plays an important role in the remote identification of materials. It
includes spectral signatures from a variety of sources such as minerals, vegetation, man-made
materials etc., and serves as a collection of reference spectra. A hyperspectral dataset can be
compared with the materials present in the spectral library, which helps in the identification of
different materials including minerals, vegetation, construction materials, chemicals etc., as
sketched below.
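A minimal sketch of how a pixel spectrum can be compared with reference spectra from a library such as the USGS one, using the spectral angle as the similarity measure; the library dictionary and spectra here are synthetic stand-ins, and in practice the library spectra must first be resampled to the sensor's bands.

import numpy as np

def spectral_angle(s1, s2):
    """Angle (radians) between two spectra; a smaller angle means a better match."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Synthetic stand-ins for resampled library spectra and an unknown pixel spectrum.
library = {"kaolinite": np.random.rand(224), "alunite": np.random.rand(224)}
pixel_spectrum = np.random.rand(224)

best = min(library, key=lambda name: spectral_angle(pixel_spectrum, library[name]))
print("Closest library material:", best)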
Band and Wavelength - A set of wavelengths is referred to as a band. For example, the
wavelength values between 2205 nm and 2210 nm may be represented as one band collected by
an imaging spectrometer. For light in that band, the hyperspectral sensor collects the reflected
light energy in each pixel. When we work with multispectral or hyperspectral datasets, the centre
wavelength value is stated as the band detail. For example, for the band spanning 2205-2210 nm,
the centre would be 2207.5 nm.
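A one-line illustration of the band-centre convention described above, assuming the lower and upper edge of each band are known; the edge values are taken from the example in the text.

import numpy as np

band_edges = np.array([[2205.0, 2210.0], [2210.0, 2215.0]])   # lower and upper edge [nm]
band_centres = band_edges.mean(axis=1)                         # -> [2207.5, 2212.5]
print(band_centres)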
Fig. 5.1 Hyperspectral sensors collect information within defined wavelength region of the
electromagnetic spectrum (Source: https://www.neonscience.org/resources/learning-
hub/tutorials/hsi-hdf5-r).
Multispectral sensors record energy in a few relatively broad bands, unlike hyperspectral
sensors. Hyperspectral sensors record the image in many tens of bands with bandwidths on the
order of 0.01 µm and can construct a continuous reflectance spectrum for each pixel in the scene.
Fig 5.2. Principle of hyperspectral imaging: a. multispectral sensors have discrete spectral
bands, whereas b. hyperspectral sensors produce a continuous reflectance spectrum. (Source:
https://www.edmundoptics.com/knowledge-center/application-notes/imaging/hyperspectral-
and-multispectral-imaging/)
The key difference between multispectral and hyperspectral lies in the number of
bands and how narrow the bands are. Let us take an example, the channel below includes red,
green, blue, near-infrared, and shortwave infrared.
(Source: https://gisgeography.com/multispectral-vs-hyperspectral-imagery-explained/)
Hyperspectral data consists of much narrower bands (10-20 nm). A hyperspectral data
contains hundreds or thousands of bands. Generally, hyperspectral data doesn’t contain
descriptive channel names.
(Source: https://gisgeography.com/multispectral-vs-hyperspectral-imagery-explained/)
Before going further, we need to understand certain terms and their definition.
Spectroscopy: The study of the interaction of matter and electromagnetic radiations is known
as spectroscopy.
Spectroscopy is based on the premise that different materials are different due to differences
in their constituents and structures, and as a result, they interact with light differently, causing
them to look different.
Visible Spectrum:
The visible spectrum covers the wavelength range from approximately 0.4 µm to 0.7 µm.
Infrared Spectrum:
The infrared region encompasses the wavelength range from 700 nm to 10⁵ nm (0.7 to 100 µm). This region is subdivided into reflected infrared and thermal infrared: the reflected infrared region covers approximately 0.7 to 3.0 µm, and 3.0 to 100 µm is the wavelength range of the thermal infrared region. Radiation in the reflected IR region is used for remote sensing in the same way as radiation in the visible region.
Fig 5.3. The electromagnetic spectrum, highlighting the visible and infrared regions. (Source:
https://lotusgemology.com/index.php/2-uncategorised/294-ftir-in-gem-testing-ftir-intrigue-
lotus-gemology).
Near Infrared (NIR):
The near-IR region of the electromagnetic spectrum ranges from 0.7 to 1.1 µm. Water absorbs near-IR radiation, so these wavelengths can be used to distinguish land-water boundaries. Plants strongly reflect near-infrared light, with healthy plants reflecting more than stressed plants. Since NIR light can penetrate through haze, it can aid in the recognition of information in a smoky or hazy scene.
Shortwave Infrared (SWIR):
Shortwave infrared includes the wavelength range between 1.1 and 3.0 µm. Water absorption bands fall in three regions: 1400, 1900 and 2400 nm. The hyperspectral image will appear darker at these wavelengths wherever more water is present, including in the soil; SWIR therefore helps in estimating the amount of water present in soil and plants. The SWIR bands can also be used to differentiate between cloud forms (water vs. ice clouds) as well as between clouds, snow and ice, which all appear white in visible light. Recently burned ground reflects strongly in the SWIR bands, making them useful for detecting fire damage. In the SWIR portion of the electromagnetic spectrum, active fires, lava flows and other very hot features "glow".
Instantaneous field of view (IFOV):
The IFOV is the sensor's angular cone of vision, which defines the area on the Earth's surface
that can be "seen" from a given altitude at any given time.
Spatial Resolution:
The detail discernible in an image depends on the spatial resolution of the sensor, which refers to the smallest possible feature that can be detected.
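The link between IFOV and spatial resolution can be illustrated with a rough calculation: for a nadir-viewing sensor, the ground footprint is approximately the platform altitude multiplied by the IFOV expressed in radians. The altitude and IFOV values below are illustrative assumptions, not the specification of any particular sensor.

# Rough sketch: ground footprint ~= altitude * IFOV (IFOV in radians), nadir viewing.
altitude_m = 705000.0   # illustrative orbit height
ifov_rad = 42.5e-6      # illustrative IFOV of 42.5 microradians

footprint_m = altitude_m * ifov_rad
print(round(footprint_m, 1))  # ~30 m ground resolution at nadir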
Spectral Resolution:
The capacity of a sensor to resolve fine wavelength intervals is referred to as its spectral resolution. The narrower the wavelength range for a particular channel or band, the finer the spectral resolution. Hyperspectral sensors detect hundreds of very narrow spectral bands across the visible, near-infrared and mid-infrared portions of the electromagnetic spectrum.
Radiometric Resolution:
An imaging system's radiometric resolution describes its ability to distinguish very subtle differences in energy. The higher a sensor's radiometric resolution, the more sensitive it is to small variations in reflected or emitted radiation.
Temporal resolution:
Temporal resolution refers to the repetitiveness of observation over a region and is equal to the time interval between successive observations of the same location, also known as the repeat cycle or revisit time. It is determined by the satellite's altitude and orbital parameters together with the sensor's swath width.
The defense and civilian use of high-resolution electro-optical imagery has been continuously increasing. Airborne and spaceborne data are used to identify and classify the materials present on the Earth's surface; objects at the Earth's surface can be identified through analysis of their characteristic reflectance spectra.
The Absorption Process:
Beer's law governs the behavior of a photon when it enters an absorbing medium. Beer's law states that
I = I₀ e^(−kx)
where I is the observed intensity, I₀ is the original light intensity, k is the absorption coefficient, and x is the distance travelled through the medium. The absorption coefficient is measured in cm⁻¹ and x in cm.
The absorption coefficient is related to the complex index of refraction by the equation
k = 4πK/λ
where λ is the wavelength of light, n is the (real) index of refraction of the sample and K is the extinction coefficient; n and K together form the complex index of refraction.
The reflectance R of light incident onto a plane surface is described by the Fresnel equation:
R = ((n − 1)² + K²) / ((n + 1)² + K²)
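The two relations above can be combined in a short numerical sketch. The values of n, K, the wavelength and the path length below are illustrative assumptions chosen only to show the form of the calculation.

import math

def transmitted_intensity(i0, k_per_cm, x_cm):
    """Beer's law: I = I0 * exp(-k * x)."""
    return i0 * math.exp(-k_per_cm * x_cm)

def absorption_coefficient(extinction_k, wavelength_cm):
    """k = 4 * pi * K / lambda, with K the extinction coefficient."""
    return 4.0 * math.pi * extinction_k / wavelength_cm

def fresnel_reflectance(n, extinction_k):
    """R = ((n - 1)^2 + K^2) / ((n + 1)^2 + K^2) for normal incidence on a plane surface."""
    return ((n - 1.0) ** 2 + extinction_k ** 2) / ((n + 1.0) ** 2 + extinction_k ** 2)

# Illustrative values for a weakly absorbing material observed near 2.2 micrometres
wavelength_cm = 2.2e-4
k = absorption_coefficient(extinction_k=1e-4, wavelength_cm=wavelength_cm)
print(transmitted_intensity(1.0, k, x_cm=0.1))        # fraction transmitted through 1 mm
print(fresnel_reflectance(n=1.5, extinction_k=1e-4))  # about 0.04 for a glass-like surface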
Causes of Absorption:
The general causes of absorption are electronic and vibrational processes. Burns (1993) explores the finer details of electronic processes, while Farmer (1974) focuses on vibrational processes.
1. Electronic Processes:
Isolated ions and atoms have discrete energy states. Absorption of a photon of a specific wavelength raises an electron from a lower energy state to a higher one; when photons are emitted, the energy state drops to a lower level. The emitted photon need not have the same wavelength as the absorbed one; for example, the Earth absorbs shorter-wavelength solar radiation and re-emits it at longer wavelengths.
a. Crystal Field Effects:
Transition elements (Ni, Cr, Co, Fe, etc.) show characteristic absorption spectra due to their unfilled electron shells. In an isolated ion, the d-orbitals of a transition element all have equal energies, but when the atom sits in a crystal field the energy levels split. This splitting of orbital energy states allows electrons to move from a lower to a higher energy level by absorbing a photon whose energy equals the energy difference between the two states. The energy levels are determined by the valency, coordination number and symmetry of the site of the atom. Structural differences between minerals lead to variations in the crystal field; the amount of splitting therefore varies, and the same ion can produce different absorptions, enabling specific minerals to be identified by spectroscopy.
b. Charge Transfer Absorptions:
Absorption of a photon can cause an electron to move between ions, i.e. a charge transfer. Fe²⁺ and Fe³⁺ are examples of the same transition element in different valence states. Absorption bands caused by charge transfer are generally diagnostic of particular minerals, and charge transfer is the main reason for the reddish color of iron-bearing minerals.
c. Conduction bands:
In many minerals there are two energy levels in which electrons may reside: a higher level, called the conduction band, in which electrons move freely throughout the lattice, and the valence band, in which electrons are bound to individual atoms. The difference between these energy levels is referred to as the band gap. The band gap is very small or absent in metals and very large in dielectrics; in semiconductors it corresponds to the energy of visible and near-infrared photons. The yellow tint of sulphur is due to such a band gap.
d. Color Centers:
Color centers are produced by irradiation of an imperfect crystal. Lattice defects disturb the periodicity of the crystal, and moving an electron into such a defect requires photon energy. The yellow, purple and blue colors of fluorite are due to color centers.
Fig. 5.4 Spectral signature diagram. The width of the black bars indicates the relative widths of absorption bands (Hunt, 1977).
2. Vibrational Processes:
The bonds in a molecule or crystal lattice can be thought of as springs with weights attached, and the entire system can vibrate. The frequency of vibration depends on the strength of the bonds and on the masses of the attached atoms.
Fig. 5.5 Reflectance spectra of calcite, dolomite, beryl, gypsum, alunite, rectorite, and jarosite showing vibrational bands due to OH, CO3 and H2O (R. N. Clark, 1999).
Fig. 5.6 Reflectance spectra of phlogopite, biotite, pyrophyllite, muscovite, epidote, and illite showing vibrational bands due to OH and H2O (R. N. Clark, 1999).
Fig. 5.7 Reflectance spectra of hectorite, halloysite, kaolinite, chrysotile, lizardite, and antigorite, showing vibrational bands due to OH (R. N. Clark, 1999).
Fig. 5.8 Near 2200 nm there are small spectral variations within the kaolinite group of minerals. Kaolinite CM9 is very well crystallised (WXL), while KGa-2 is only partially crystallised (PXL). The sampling interval is 0.95 nm and the spectral bandwidth is 1.9 nm. The original reflectance values ranged from 0.5 to 0.8 (R. N. Clark, 1999).
Fig. 5.9 Complex absorptions in the CH-stretch fundamental spectral region can be seen in the transmittance spectra of organics and mixtures (R. N. Clark, 1999).
Fig. 5.10 Spectral reflectance curves of montmorillonite and of montmorillonite blended with benzene, toluene, and trichloroethylene. The organics have a CH combination band near 2.3 µm, while montmorillonite has an absorption feature at 2.2 µm. The first overtone of the CH stretch can be seen at 1.7 µm, and the second overtone at 1.15 µm (R. N. Clark, 1999).
Ices - Water molecules in minerals exhibit diagnostic absorption bands, and ice, which is also classified as a mineral, exhibits strong absorption bands. Figure 5.11 depicts the spectra of solid H2O, CO2 and CH4. Because of its hexagonal structure and its orientationally disordered hydrogen bonds, the H2O spectrum shows a wider range of absorptions than the others; the disorder broadens the absorptions. Many ices occur in the solar system.
Fig. 5.11 Spectral reflectance curve of solid carbon dioxide (CO2), methane (CH4) and water
(H2O) (R. N Clark, 1999).
Vegetation - Vegetation takes two general forms: green, wet (photosynthetic) vegetation and dry, non-photosynthetic vegetation. Spectra of these two vegetation forms are compared with a soil spectrum in Fig. 5.12. The absorption features in the NIR spectra of green vegetation are vibrational in origin and are dominated by liquid water.
Fig. 5.12 Spectral reflectance curves of green vegetation, dry vegetation and soil (Roger N. Clark, 1995).
Dry, non-photosynthetic vegetation shows a reflectance spectrum dominated by cellulose, lignin and other plant constituents.
5. Manufactured chemicals
6. Vegetation
7. Micro-Organism
5.4 SUMMARY
Hyperspectral sensors collect information in 100 to 200 spectral bands with relatively narrow bandwidths (5-10 nm), which makes it possible to build a continuous reflectance spectrum for each pixel in a scene. The main difference between hyperspectral and multispectral data lies in the number of bands and how narrow the bands are. The characteristic absorption features observed in a reflectance spectrum are caused by electronic or vibrational absorption processes. Electronic absorption takes place due to changes in the energy state of electrons and is dominant in the VNIR range of the electromagnetic spectrum; vibrational absorption is due to vibrations of the bonds and the masses they connect. Vegetation takes two general forms, green wet vegetation and dry non-photosynthetic vegetation, and the near-infrared region of green vegetation spectra is dominated by liquid-water absorptions. The USGS spectral library is used for validation and reference purposes. It contains spectra of common materials, including minerals, elements, solids, rocks, mixtures, coatings, liquids, artificial materials, plants, vegetation, micro-organisms, etc.
5.5 GLOSSARY
Band - A group of wavelengths.
Spectroscopy - Spectroscopy is the study of the interaction between matter and electromagnetic radiation.
Visible Spectrum - covers a range from approximately 0.4 µm to 0.7 µm.
Infrared Spectrum - Covers the wavelength region of approximately 0.7 µm to 100 µm.
Near Infrared (NIR) - Near-IR covers the wavelength range of 0.7 to 1.1 µm.
Shortwave Infrared (SWIR) - It covers the wavelength range between 1.1 and 3.0 µm.
Instantaneous field of view (IFOV) - Angular cone of visibility of the sensor and
determines the area on the Earth’s surface which is “seen” from a given altitude at one
particular moment in time.
Spatial Resolution - Refers to the smallest possible feature that can be detected by a sensor.
Spectral Resolution - It describes the ability of a sensor to define fine wavelength intervals.
Radiometric Resolution - It describes the ability of a sensor to discriminate very slight
differences in the energy.
Temporal resolution - It refers to the repetitiveness of observation over an area, and is equal to the time interval between successive observations.
5.7 REFERENCES
1. Burns, R. G. 1993. Mineralogical applications of crystal field theory. Second edition.
6.1 OBJECTIVES
6.2 INTRODUCTION
6.3 HYPERSPECTRAL DATA INTERPRETATION
6.4 SUMMARY
6.5 GLOSSARY
6.6 ANSWER TO CHECK YOUR PROGRESS
6.7 REFERENCES
6.8 TERMINAL QUESTIONS
6.1 OBJECTIVES
The main objective of any remote sensing method is to transform measurements into useful information that can be applied to the management of natural resources. After going through this module, you will be able to enhance your knowledge of the pre-processing, interpretation and applications of hyperspectral data.
6.2 INTRODUCTION
Thanks to recent advances in remote sensing and improvements in hyperspectral sensors, these data can now be used by people who are not spectral remote sensing experts. Previously, hyperspectral data were used primarily in geological applications; conservation, earth sciences, irrigation and forest management are now among the disciplines that can benefit from hyperspectral remote sensing.
Hyperspectral remote sensing offers more precise detail than multispectral imaging, thus allowing the identification and differentiation of spectrally distinct materials. A hyperspectral sensor collects data in a series of contiguous narrow bands. Due to the huge volume and high spectral resolution of hyperspectral data, traditional techniques for the interpretation and processing of remote sensing data are no longer fully applicable. The following sections therefore deal with the pre-processing and interpretation of hyperspectral datasets, along with some case studies and applications of hyperspectral data in various fields.
The large data volume and high dimensionality of hyperspectral data make processing complicated. On the other hand, the data contain enough information to enable analysis based on spectroscopic concepts. It is also important to understand the limitations of traditional techniques, as those techniques are still used to interpret hyperspectral data.
The huge data volume itself does not pose major processing challenges, as modern computational systems are designed to handle such data easily. It is nevertheless instructive to compare the data volumes of multispectral and hyperspectral datasets, for example Landsat Thematic Mapper and AVIRIS. The most significant differences are the number of wavebands (7 versus 224) and the radiometric resolution (8 versus 10 bits per pixel per band). The relative data volumes per pixel are therefore 7×8 : 224×10, i.e. 56 : 2240, so AVIRIS data carries 40 times as many bits per pixel as TM data. Hyperspectral data storage and transmission must therefore be addressed, and appropriate compression techniques should be used.
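The per-pixel figures quoted above can be verified with simple arithmetic:

# Back-of-the-envelope check of the data volumes quoted above.
tm_bits_per_pixel = 7 * 8          # Landsat TM: 7 bands x 8 bits
aviris_bits_per_pixel = 224 * 10   # AVIRIS: 224 bands x 10 bits

print(tm_bits_per_pixel, aviris_bits_per_pixel)   # 56 2240
print(aviris_bits_per_pixel / tm_bits_per_pixel)  # 40.0 times more bits per pixel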
We deal with a great deal of redundant data in everyday life. For example, if we remove certain letters from a word we can still recover its meaning: "rmtesensg" can be recognized as "remote sensing" because there are enough redundant letters that removing some does not affect understanding. The same principle applies to hyperspectral data recorded by imaging spectrometers: the information content of the bands recorded for a given pixel overlaps significantly.
Fig. 6.1 (a) Correlation matrix of 196 wavebands covering the wavelength range 400 to 2400 nm for the AVIRIS Jasper Ridge image (white indicates correlations of 1 or -1, whereas black indicates a correlation of 0). (b) Result of edge detection applied to the correlation matrix (Richards, 2013).
Hyperspectral data can have hundreds or even thousands of spectral bands. As the number of narrow bands increases, the number of training samples (pixels) needed for optimum statistical confidence and functionality also grows rapidly, which makes classification difficult to handle. Consider the following scenario: to identify 10 ground cover classes using hundreds or thousands of hyperspectral narrow bands, we would require very large training samples for each class to ensure a statistically reliable classification, whereas multispectral, broadband data can be classified with relatively small training samples per class. The larger dimensionality of hyperspectral data does allow a larger number of classes to be separated, but the ability to do so reliably is limited by the availability of training data (the Hughes phenomenon).
Fig. 6.2 Illustration of the effect of having adequate training samples per class to ensure accurate estimation of the separating surface. When too few pixels are used (a), good separation of the training data is possible but the classifier performs poorly on the testing data. Large numbers of (randomly positioned) training pixels generate a surface that also performs well on the testing data (c) (Richards, 2013).
The radiance measured by a hyperspectral sensor is affected by scattering and by the Earth's atmosphere. To derive a surface reflectance curve from hyperspectral data, detailed radiometric correction is needed. Since hyperspectral datasets cover the wavelength range of 400 to 2500 nm, which includes water absorption features, and have high spectral resolution, a systematic processing method consisting of three steps is required (a minimal sketch of the first step is given after the list):
• Compensation for the shape of the solar spectrum: to get the apparent reflectance of the Earth, the measured radiances are divided by the solar irradiances above the atmosphere.
• Compensation for atmospheric gaseous transmission and for molecular and aerosol scattering: simulation of these atmospheric effects allows the apparent reflectance to be converted into scaled surface reflectance.
• Conversion of scaled surface reflectance to real surface reflectance after taking into account any topographic influence: if no topographic data are available and the surfaces of interest are Lambertian, real reflectance is assumed to be the same as scaled reflectance.
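The sketch below illustrates the first of the three steps, conversion of at-sensor radiance to apparent (top-of-atmosphere) reflectance. The π and cos(solar zenith) normalisation follows the usual top-of-atmosphere reflectance convention and is an assumption here, as are the band radiances and solar irradiances, which are illustrative and not real sensor constants.

import numpy as np

def apparent_reflectance(radiance, solar_irradiance, sun_zenith_deg, earth_sun_dist_au=1.0):
    """Apparent (TOA) reflectance = pi * L * d^2 / (Esun * cos(theta_s)); assumed convention."""
    theta = np.deg2rad(sun_zenith_deg)
    return (np.pi * radiance * earth_sun_dist_au ** 2) / (solar_irradiance * np.cos(theta))

radiance = np.array([52.0, 31.0, 12.5])    # at-sensor radiance, W m-2 sr-1 um-1 (illustrative)
esun = np.array([1840.0, 1040.0, 230.0])   # exo-atmospheric solar irradiance (illustrative)
print(apparent_reflectance(radiance, esun, sun_zenith_deg=35.0))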
2. Data normalization:
Let x_i,n denote the radiance of pixel i in waveband n, T_i the topographic effect (assumed constant across all wavelengths), R_i,n the real reflectance of pixel i in waveband n, and I_n the illumination factor (assumed independent of the pixel). K and N are the total number of pixels in the image and the total number of bands, respectively.
The large number of narrow, contiguous bands makes hyperspectral data prone to radiometric errors. Hyperspectral data therefore require pre-processing, which includes the removal of atmospheric effects. The steps in the pre-processing of hyperspectral datasets are shown in Fig. 6.3.
1. Atmospheric Correction:
Atmospheric correction tools such as FLAASH produce atmospherically corrected images (i.e., surface spectral reflectance) for non-thermal wavelengths (UV through mid-IR), including a correction for the adjacency effect.
The MNF output is subjected to a pixel purity index (PPI) in order to extract the purest pixels; PPI is used to retrieve spectrally pure endmembers from hyperspectral data. The extreme pixels are selected from a region of interest (ROI), and the n-dimensional visualization approach is applied to the ROI on the MNF image to extract pure pixels and evaluate their spectra.
The spectral angle, which can range from 0 to π/2, is determined using the formula provided by Kruse et al. (1993):
θ = cos⁻¹ [ Σᵢ tᵢ rᵢ / ( √(Σᵢ tᵢ²) · √(Σᵢ rᵢ²) ) ]
where the sums run over the n bands (i = 1, …, n), tᵢ is the test (image) spectrum and rᵢ is the reference spectrum.
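A minimal NumPy sketch of the spectral angle computation is given below; the pixel and reference spectra are illustrative five-band vectors, not library entries.

import numpy as np

def spectral_angle(t, r):
    """Angle between test spectrum t and reference spectrum r (radians), as in the SAM rule above."""
    t = np.asarray(t, dtype=float)
    r = np.asarray(r, dtype=float)
    cos_theta = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

pixel = [0.12, 0.18, 0.35, 0.40, 0.22]      # illustrative pixel spectrum
reference = [0.10, 0.16, 0.35, 0.42, 0.20]  # illustrative library spectrum
print(spectral_angle(pixel, reference))     # a small angle indicates spectral similarity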
The spectral feature fitting (SFF) algorithm removes the continuum of the absorption feature from both the image spectra and the library spectra. Spectral feature fitting compares the continuum-removed image spectra with the continuum-removed reference spectra from the spectral library and performs a least-squares fit. The best-fitting material is identified from the spectral features of the reference and by comparing the correlation coefficients of the fits (Boardman & Kruse, 1994). The continuum-removed image spectrum is derived by dividing the original spectrum of every pixel in the image by the continuum curve:
S_cr = S / C
where S is the original spectrum and C is the continuum curve.
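A minimal sketch of continuum removal is shown below. Operational SFF implementations fit a convex hull over the whole spectrum; here the continuum is simply a straight line between the two shoulders of a single absorption feature, and the wavelengths and reflectances are illustrative.

import numpy as np

# Continuum removal (S_cr = S / C) over one absorption feature, straight-line continuum.
wavelength = np.array([2.10, 2.15, 2.20, 2.25, 2.30])  # micrometres (illustrative)
spectrum = np.array([0.55, 0.50, 0.38, 0.49, 0.54])    # reflectance with a feature near 2.20 um

continuum = np.interp(wavelength,
                      [wavelength[0], wavelength[-1]],
                      [spectrum[0], spectrum[-1]])
continuum_removed = spectrum / continuum
print(continuum_removed)   # values dip below 1.0 inside the absorption feature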
The depth (abundance) of a spectral feature is represented by the scale factor, and the least-squares fit is computed band by band. The larger the scale value, the stronger the absorption in the mineral spectrum, and the scale value is therefore related to mineral abundance (F. van der Meer, 2004). Lighter pixels in the scale image indicate a good match between the pixel spectrum and the mineral's reference spectrum. To compute the cumulative RMS error, an RMS error image is also created for each endmember; pixels with a low RMS error and a large scale factor are strongly matched. Fit images are generated by taking the ratio of the scale image to the RMS error image, which gives a pixel-by-pixel measure of the correlation between the unknown and reference spectra (F. van der Meer et al., 2003).
1. Spectroscopic Analysis:
A pixel vector x in an n-dimensional feature space has both a magnitude (length) and an angle defined with respect to the axes that form the coordinate system of the space. The angular information of the spectra is used to identify the material in a pixel.
Fig. 6.4 (a) Pixels represented by their angles from the band axes. (b) Segmenting the multispectral space by angle (Richards, 2013).
Fig. 6.4 shows a two-dimensional space in which spectra are characterized by their angles from the horizontal axis. Spectra can be distinguished by their angle with respect to a reference: a large angle indicates low similarity, whereas a small angle indicates greater similarity. The angular decision boundary can be set according to the requirement. Angular information gives good results when the pixel spectra form distinct groups that are well separated in the feature space.
Fig. 6.5 Spectral absorption features of silicate minerals and rocks (Hunt & Ashley, 1979).
Table 6.1 Alteration minerals and metallogenetic conditions (Thompson & Thompson, 1996)
Igneous intrusion related:
… montmorillonite, calcite, dolomite
Potassic: phlogopite, actinolite, sericite, chlorite, epidote, muscovite, anhydrite
Sodic: actinolite, diopside, chlorite, epidote, scapolite
Phyllic: muscovite-illite, chlorite, anhydrite
Argillic: pyrophyllite, sericite, diaspore, alunite, topaz, tourmaline, kaolinite, montmorillonite, calcite
Greisen: topaz, muscovite, tourmaline
Skarn: clinopyroxene, wollastonite, actinolite-tremolite, vesuvianite, epidote, serpentinite-talc, calcite, chlorite, illite-smectite, nontronite
Supergene and leached sulphide zones:
Oxidized: clay minerals, limonite, goethite, hematite, jarosite
Enriched zone: chalcocite, covellite, chrysocolla, native copper and copper oxide, carbonate and sulphate minerals
Mineral deposits that can be targeted using reflectance spectra in mineral exploration surveys include epithermal gold (medium- and high-sulphidation) deposits, porphyries, kimberlites, iron oxide copper-gold deposits, skarns and uranium deposits. Table 6.1 lists the minerals and their characteristic zones that can be readily identified by the hyperspectral remote sensing technique.
The VNIR and SWIR regions of hyperspectral remote sensing are used for the identification and mapping of lithologies (granite, ophiolite, peridotite and kimberlite) in any climatic and tectonic setting. The VNIR-SWIR region is considered the best for mapping alteration rocks/minerals, carbonates and regoliths, because overtones and combinations of Al-OH, Fe-OH and Mg-OH vibrations are most active in the SWIR region. The thermal infrared (TIR) region is mostly useful for identifying the presence of quartz and silica, because the fundamental vibration of the Si-O bond lies in this region; the TIR region is therefore well suited to characterizing rock-forming minerals such as quartz, feldspar, amphibole, olivine and pyroxenes. In the case of evaporite deposits, the VNIR-SWIR region is the most useful for mapping: evaporites show absorption features at 1.5, 1.74, 1.94, 2.03, 2.22 and 2.39 µm and can be mapped in hyperspectral data such as the Hyperion dataset. Kurz et al. (2012) applied hyperspectral remote sensing in the VNIR-SWIR region to differentiate carbonates such as limestone, karst and hydrothermal dolomites.
Mapping of hydrothermal alteration zones and associated metal deposits:
Hydrothermal ore deposits are often associated with the formation of alteration minerals, and these alteration minerals are readily detected in hyperspectral data. Mapping of alteration minerals may therefore lead to the discovery of ore deposits. Hydrothermal alteration zones comprise complex mixtures of primary minerals and new mineral assemblages formed where primary minerals and hydrothermal fluids interact.
Regolith mapping by hyperspectral remote sensing:
Regolith mapping plays a critical role in the identification and discrimination of various geomorphic features and weathering patterns, and in linking surface and subsurface processes. Pioneering work by Tripathi and Govil (2020) demonstrates the potential of the Airborne Visible/Infrared Imaging Spectrometer - Next Generation (AVIRIS-NG) for mapping regolith and hydrothermally altered, weathered and clay minerals in south-eastern Rajasthan.
Fig. 6.6 Regolith map generated for the study area in south-eastern Rajasthan, India (Tripathi & Govil, 2020).
Fig. 6.7 Image-derived and laboratory spectra of the exposed rock samples in the area (Tripathi & Govil, 2020).
with the mineralization. Common ore minerals found in the area include scheelite, wolframite, pyrite, chalcopyrite, bornite and cassiterite.
Fig. 6.8 Identification and mapping of carbonates, phyllosilicates, sulphates, altered minerals,
derived from HyMap hyperspectral data in Daykundi area, Afghanistan (Hoefen, Knepper Jr,
& Giles, 2011).
Hyperspectral remote sensing has also been successfully applied to the exploration of porphyry-type deposits in different geological settings (Bishop, Liu, & Mason, 2011).
Residual and secondary enrichment deposits:
These include ore deposits such as bauxite, cobalt, gold and copper deposits, and calcrete-hosted uranium deposits. Hyperspectral remote sensing successfully identifies and maps the secondary mineral components constituting regoliths (goethite, limonite, gibbsite). Spectral absorption features and linear unmixing techniques can be used for mapping different grades of bauxite. Gossans, also known as iron hats, are composed of goethite, hematite, limonite, kaolinite and alunite; these minerals are sensitive to the VNIR-SWIR region of the electromagnetic spectrum and can be readily identified by hyperspectral remote sensing. Hyperspectral techniques can likewise detect minerals of the related supergene enrichment zone, such as phosphates and argillic minerals, since they are responsive to the VNIR and SWIR regions.
Hydrocarbon Exploration:
Most present-day hydrocarbon reservoirs are deep-seated, but their presence can be indicated by surface expressions such as seepages and micro-seepages (F. D. van der Meer, 2000). In recent times the study of surface indicators (micro-seepages) has become popular in oil and gas exploration. Direct detection includes the mapping and characterization of oil seeps, as well as of the alteration of minerals in soils and rocks caused by seepage. Indirect measurement aims to determine the secondary effects of volatile hydrocarbons on plants and crops.
The most significant spectral features and their causative molecules are: (a) absorptions at 1.39-1.41 µm due to O-H overtones and C-H combinations; (b) absorptions at 1.72-1.73 µm attributable to a combination of CH3 and CH2 stretching; (c) a CH2 vibration overtone at 1.75-1.76 µm; (d) asymmetric and symmetric axial deformations of CH3 and symmetric deformation of CH2 at 2.31 µm; and (e) absorption at 2.35 µm due to a combination of symmetric angular deformations in CH3.
Coal-bearing areas also have characteristic spectral signatures. In the 0.3-2.6 µm wavelength range, low-grade coals show distinct absorptions at 1.4, 1.9 and 2.1-2.6 µm.
Fig. 6.9 Characteristic spectral absorption features due to different organic compounds in different grades of coal (Ramakrishnan & Bharti, 2015).
Cloutis (2003) analysed a large number of coal samples to retrieve their organic and inorganic components.
Table 6.2 Coal physical and chemical characteristics and their spectral absorption regions (Ramakrishnan & Bharti, 2015).
Coal property - Spectral correlations
Aromaticity factor: 3.41 µm-ABD; 3.41/3.28 µm-ABDR
Aliphatic (CH + CH2 + CH3) content: 3.41 µm-ABD
Aromatic C content: 3.41/3.28 µm-ABDR; 3.28 µm-ABD:ARR
Moisture content: 1.9 µm-ABD; 2.9 µm-ABD
Volatile content: 2.31 µm-ABD; 3.41 µm-ABD; 3.41/3.28 µm-ABDR
Fixed carbon content: 3.41 µm-ABD; 3.41/3.28 µm-ABDR; 3.28 µm-ABD:ARR
Fuel ratio: 3.41/3.28 µm-ABDR; 2.31/3.28 µm-ABDR
Carbon content: 3.28 µm-ABD:ARR
Hydrogen content: 3.41 µm-ABD:ARR; 2.9 + 3.41 µm-ABD
Nitrogen content: 7.26 µm-ABD
Oxygen content: 1.9 µm-ABD; 2.9 µm-ABD
H/C ratio: 3.41 µm-ABD; 3.41/3.28 µm-ABDR
Vitrinite mean reflectance: 3.41 µm-ABD; 3.41/3.28 µm-ABDR
Calorific value: 3.28 µm-ABD:ARR
Petrofactor: 1.6 µm-ARR; 3.41/3.28 µm-ABDR
ABD: absorption band depth; ABDR: absorption band depth ratio; ARR: absolute reflectance ratio.
The table above shows that quantitative spectral-compositional relationships are possible: coals can be characterized spectrally on the basis of properties such as aromaticity, total aliphatic content, aromatic content, moisture content, volatile content, fixed carbon abundance, fuel ratio, carbon content, nitrogen abundance, H/C ratio and vitrinite reflectance.
Fig. 6.10 With increasing inorganic material content, the spectral pattern of coal also changes (Ramakrishnan & Bharti, 2015).
6.4 SUMMARY
Hyperspectral remote sensing produces a huge volume of data, and much of it is redundant. Redundancy can be reduced by applying the minimum noise fraction (MNF) transform. Because hyperspectral data are acquired in hundreds or even thousands of spectral bands, the number of training samples needed for optimum statistical confidence and functionality grows very large; statistical integrity can be preserved only if each class has enough training samples to train the classifier and to establish the class accuracy. This behaviour is known as the Hughes phenomenon. Pre-processing of hyperspectral data includes FLAASH atmospheric correction followed by the minimum noise fraction (MNF) transform and the pixel purity index (PPI), and then classification techniques such as the spectral angle mapper (SAM) and spectral feature fitting (SFF). Hyperspectral data interpretation is carried out by three methods: spectroscopic analysis, spectral angle mapping and library searching. Hyperspectral data have been successfully applied to the mapping of lithology, hydrothermal alteration zones, volcanogenic massive sulphide deposits, hydrothermal epigenetic deposits, and residual and secondary enrichment deposits, and also to hydrocarbon exploration.
6.5 GLOSSARY
Spectral Angle Mapper (SAM) - A supervised classification technique that compares image spectra with reference spectra collected from a spectral library.
Spectral Feature Fitting (SFF) - A technique that compares continuum-removed image spectra with continuum-removed reference spectra from a spectral library and performs a least-squares fit.
Minimum Noise Fraction (MNF) - A transformation that reduces the dimensionality of the data and segregates noise.
Pixel Purity Index (PPI) - An index used to extract the spectrally pure pixels in a hyperspectral dataset.
6.7 REFERENCES
1. Bishop, C. A., Liu, J. G., & Mason, P. J. 2011. Hyperspectral remote sensing for mineral
exploration in Pulang, Yunnan province, China. International Journal of Remote
Sensing, 32: 2409–2426.
2. Boardman, J. W., & Kruse, F. A. 1994. Automated spectral analysis: a geological example
using AVIRIS data, north Grapevine Mountains, Nevada. Proceedings of the Thematic
Conference on Geologic Remote Sensing.
3. Cloutis, E. A. 2003. Quantitative characterization of coal properties using bidirectional
diffuse reflectance spectroscopy. Fuel, 82: 2239–2254.
4. Crósta, A. P., De Souza Filho, C. R., Azevedo, F., & Brodie, C. 2003. Targeting key alteration minerals in epithermal deposits in Patagonia, Argentina, using ASTER imagery and principal component analysis. International Journal of Remote Sensing, 24: 4233-4240.
5. Govil, H., Mishra, G., Gill, N., Taloor, A., & Diwan, P. 2021. Mapping Hydrothermally
Altered Minerals and Gossans using Hyperspectral data in Eastern Kumaon Himalaya,
India. Applied Computing and Geosciences, 9: 100054.
6. Green, A. A., & Craig, M. D. 1985. Analysis of aircraft spectrometer data with logarithmic
residuals. In: Proc. AIS workshop, JPL Publication 85-41, Jet Propulsion Laboratory,
Pasadena, California.
7. Green, Andrew A., Berman, M., Switzer, P., & Craig, M. D. 1988. A Transformation for
Ordering Multispectral Data in Terms of Image Quality with Implications for Noise
Removal. IEEE Transactions on Geoscience and Remote Sensing, 26: 65–74.
8. Hoefen, T. M., Knepper Jr, D. H., & Giles, S. A. 2011. Analysis of imaging spectrometer
data for the Daykundi area of interest. Summaries of Important Areas for Mineral
Investment and Production Opportunities of Nonfuel Minerals in Afganistan, US
Geological Survey, Reston, Virginia, 314–339.
9. Hunt, G. R., & Ashley, R. P. 1979. Spectra of altered rocks in the visible and near infrared.
Economic Geology, 74.
10. Kruse, F. A., Lefkoff, A. B., Boardman, J. W., Heidebrecht, K. B., Shapiro, A. T.,
Barloon, P. J., & Goetz, A. F. H. 1993. The spectral image processing system (SIPS)-
interactive visualization and analysis of imaging spectrometer data. Remote Sensing of
Environment, 44: 145–163.
11. Kurz, T. H., Dewit, J., Buckley, S. J., Thurmond, J. B., Hunt, D. W., & Swennen, R.
2012. Hyperspectral image analysis of different carbonate lithologies (limestone, karst
and hydrothermal dolomites): the Pozalague Quarry case study (Cantabria, North-west
Spain). Sedimentology, 59: 623–645.
12. van der Meer, F. D., & de Jong, S. 2003. Spectral mapping methods: many problems, some solutions. In: Proceedings of the 3rd EARSeL Workshop on Imaging Spectroscopy.
13. Ramakrishnan, D., & Bharti, R. 2015. Hyperspectral remote sensing and geological applications.
14. Richards, J. A. 2013. Remote sensing digital image analysis: An introduction. Remote
Sensing Digital Image Analysis: An Introduction (Vol. 9783642300622). Springer-
Verlag Berlin Heidelberg.
15. Roberts, D. A., Yamaguchi, Y., & Lyon, R. J. P. 1985. Calibration of Airborne Imaging Spectrometer Data to Percent Reflectance Using Field Spectral Measurements. In: 19th International Symposium on Remote Sensing of Environment, Ann Arbor, Michigan.
16. Thenkabail, P. 2014. Hyperspectral Remote Sensing of Vegetation and Agricultural
Crops. Photogrammetric Engineering & Remote Sensing (PE&RS);80,(2014) Pagination
697,723.
17. Thompson, A. J. B., & Thompson, J. F. H. 1996. Atlas of alteration: a field and petrographic guide to hydrothermal alteration minerals. Geological Association of Canada, Mineral Deposits Division, 119.
18. Tripathi, M. K., & Govil, H. 2020. Regolith mapping and geochemistry of
hydrothermally altered, weathered and clay minerals, Western Jahajpur belt, Bhilwara,
India. Geocarto International, 1–17.
19. van der Meer, F. 2004. Analysis of spectral absorption features in hyperspectral imagery.
BLOCK 3 : MICROWAVE
UNIT 7 - CONCEPT, DEFINITION, MICROWAVE
FREQUENCY RANGES AND FACTORS AFFECTING
MICROWAVE MEASUREMENTS
7.1 OBJECTIVES
7.2 INTRODUCTION
7.3 CONCEPT, DEFINITION, MICROWAVE FREQUENCY
RANGES AND FACTORS AFFECTING MICROWAVE
MEASUREMENTS
7.4 SUMMARY
7.5 GLOSSARY
7.6 ANSWER TO CHECK YOUR PROGRESS
7.7 REFERENCES
7.8 TERMINAL QUESTIONS
7.1 OBJECTIVES
After reading this unit, the learner will be able to understand:
7.2 INTRODUCTION
Remote sensing has a wide range of applications and has been identified as a technique with high potential for assisting the nation's economic development and addressing some of its problems. These include the development and management of natural resources, the identification of areas at risk of flooding, assessment of water availability in river basins, estimation of the status of watersheds, determination of forest area, and the estimation of harvests and resource depletion. The electromagnetic spectrum, with its various wavelength bands, finds applications in a wide range of fields. The growing demand for natural resources leads to scarcity, and the reasons for this lack of access need to be understood. Where traditional approaches are insufficient, remote sensing can play a significant role in resolving these difficulties.
Remote sensing may be utilized for predicting climate, rainfall, cloud cover and other physical properties, as well as for assisting in the identification of cloud-covered areas and other physical factors. This is especially relevant in overcast conditions, such as during the kharif season, when crops suffer and wheat yield forecasting is difficult, and for crops such as groundnut, coffee and tea that require a lot of rain. Flooding is another concern during the rainy season, as floods have wreaked damage for many years. Cloud movement could not be tracked where clouds obstructed conventional observation techniques. Because optical observation is not feasible at night, sensors that can operate at night as well as in cloudy conditions are required.
Historical Background:
A 1.275-GHz synthetic aperture radar sensor, known as Shuttle Imaging Radar-B (SIR-B), was launched by the Space Shuttle Challenger on 5 October 1984. SIR-B took high-resolution images of the Earth's surface throughout the 10-day Challenger mission, some of them of specific places illuminated from different angles. This permitted stereo imaging of the earth's surface and interpretation by means of three-dimensional viewing and the creation of contour maps of the morphological characteristics of those locations. Three years earlier, the Space Shuttle Columbia had carried an earth observation payload including SIR-A, identical to SIR-B but with a fixed illumination angle of 47°.
One of the SIR-A image strips covered the hyperarid region of southern Egypt. These surprising radar pictures revealed large beds of dry rivers below the Sahara sands and previously unknown structures in the underlying ground. Subsequent field work showed that the radar had penetrated a very dry surface layer of sand several feet thick to reveal features of a Quaternary alluvial basin. In 1978 NASA had launched Seasat, a polar-orbiting earth observation spacecraft carrying a synthetic aperture radar (SAR), a 14-GHz scatterometer and other sensors. The Seasat scatterometer results indicated that this type of space-borne microwave sensor can measure ocean winds accurately. The sensitivity of this 2-cm-wavelength scatterometer is owing to Bragg backscatter from short ocean surface waves about half as long as the horizontal projection of the radar wavelength; this backscatter is reinforced when the ocean surface waves travel along the radar look direction.
In climate studies, as well as in the early prediction of ocean storms and other physical oceanographic problems, the capacity to measure ocean winds regularly at global scales is crucial. One of the Seasat SAR images, taken on 20 August 1978, showed a rural part of Iowa near Cedar Rapids where a summer storm line had just dropped over an inch of rain on a finely patterned agricultural landscape. In this space-borne image, microwaves demonstrated their great sensitivity to soil moisture, matching earlier passive and active investigations from the ground and from aircraft.
Radars and radiometers were both originally created for uses other than remote sensing. Several radars, including imaging radar, were created during World War II for military fire control and aircraft tracking. In the 1930s, pioneers such as Karl Jansky and Grote Reber built simple radiometers, consisting of an antenna, a low-noise receiver and a strip-chart recorder, for radio astronomy. Ground clutter hampered target-oriented military radars, and the statistical characterization of clutter was a key engineering issue in the 1950s. In the late 1950s a group of scientists at the Ohio State University examined the scattering coefficients of different crop materials, asphalt, concrete and other surfaces, and carried out the first comprehensive measurements of radar cross-section per unit area of terrain. The group also examined the connection between the passive emissivity of distributed targets and their active scattering coefficients. In the mid-1960s, earth scientists began to employ microwave remote sensing for geologic mapping, when side-looking airborne radars (SLARs) such as the 35-GHz AN/APQ-97, designed by Westinghouse for military reconnaissance, became available to them. In 1967, the first major airborne radar mapping survey was carried out over Panama's Oriente Province, a region that is normally overcast; the AN/APQ-97 radar was used to undertake both geological and agricultural studies. Three kinds of radar are employed in spaceborne remote sensing: scatterometers, altimeters and synthetic aperture radars (SARs). Skylab and Seasat were the first missions to carry a range of radar remote sensors as well as earth-observing radiometers.
Between May 1973 and February 1974, NASA's crewed Skylab missions carried a microwave radiometer/scatterometer/altimeter experiment, called S-193, that operated at 13.9 GHz, as well as an L-band radiometer (S-194). Studies of planetary emission sparked the development of microwave radiometry from space. Mariner 2's microwave radiometer, with 15.8- and 22.2-GHz channels, performed three scans of the planetary disc during its December 1962 fly-by of Venus, confirming the high temperature of the Venusian surface and demonstrating that its planetary emission was characterized by limb-darkening.
The first radiometric measurements of the Earth from orbit, however, were not carried out until 1968, by the Soviet spacecraft Cosmos 243, whose four-channel nadir-viewing radiometer was used to estimate atmospheric water vapour, liquid water, ice cover and sea-surface temperature. After that, a number of Soviet and American radiometer systems with more advanced sensors were flown. In 1972, the Nimbus-5 spacecraft carried two primary microwave radiometers: the Electrically Scanning Microwave Radiometer (ESMR), a 19.35-GHz imaging radiometer for measuring rain and sea ice, and the Nimbus-E Microwave Spectrometer (NEMS), a five-frequency radiometer for retrieving atmospheric temperature profiles, water vapour and liquid water content.
Electromagnetic Spectrum:
The frequency used in remote sensing has an impact on the applications. The International
Telecommunication Union's Radio Regulations define radio waves as electromagnetic waves
with frequencies less than 3000 GHz. Different regions of the radio spectrum are employed for
active and passive microwave remote sensing. The electromagnetic spectrum is depicted in
Figure 7.1.
Table 7.1 lists the bands from Extremely Low Frequency (ELF) to Extremely High Frequency (EHF), i.e. from myriametric to sub-millimetre waves, and Table 7.2 covers the range from 30 to 3000 GHz. Different portions of the radio spectrum are employed for different purposes: the centimetre-wave spectrum extends from 0.3 to 30 GHz, the millimetre-wave spectrum from 30 to 300 GHz, and the sub-millimetre spectrum from 300 to 3000 GHz. These spectra are split into bands for various uses.
* The microwave frequency spectrum includes UHF, SHF, and EHF (300 MHz-325 GHz).
Tables 7.3 and 7.4 show the characteristics of centimetric waves (3 GHz to 30 GHz) and millimetric waves (30 to 300 GHz) respectively, while Table 7.5 shows the characteristics of sub-millimetre waves. Certain frequency bands have been set aside for passive microwave remote sensing and radio astronomy.
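The frequency limits quoted above correspond to the following wavelengths (λ = c/f); the short script below simply performs that conversion.

# Converting the frequency limits quoted above to wavelengths (lambda = c / f).
C = 3.0e8  # speed of light, m/s

for label, f_hz in [("0.3 GHz", 0.3e9), ("30 GHz", 30e9), ("300 GHz", 300e9), ("3000 GHz", 3000e9)]:
    print(label, "->", C / f_hz * 100.0, "cm")   # 100 cm, 1 cm, 0.1 cm and 0.01 cm respectively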
Ice clouds, water clouds and rain can all degrade the operation of microwave-frequency sensors, and these three natural phenomena affect radio waves differently at different frequencies. Ice clouds are essentially transparent at all microwave frequencies, although they are opaque at optical wavelengths. Water clouds have a significant impact at frequencies above 30 GHz but little impact below 15 GHz. The influence of clouds on radio transmission from space to ground was demonstrated by Ulaby et al. In the case of heavy rain, the effect of rain is particularly noticeable above 10 GHz.
Cloud cover and haze have little effect on imaging radars, which are therefore largely weather-independent. Water clouds have a significant impact on radars at frequencies above 15 GHz, whereas at frequencies below 10 GHz rain does not have a significant effect. Ulaby et al. describe the effect of rain on space-to-ground radio transmission.
Capable of providing data by day and night (independent of the intensity and angle of the sun at the time of illumination):-
The sensors receive target signals at microwave frequencies, and these signals depend entirely on the target's dielectric characteristics and physical attributes such as surface roughness and texture. As a result, the signal obtained by the sensor is independent of the sun's illumination, including its angle and intensity. Thus, even at night, information about the target can be obtained using microwave frequencies, something that is not possible with optical sensors.
The presence of moisture in the soil affects microwaves. Dry and damp soils respond differently at different microwave frequencies. This is due to an electrical property, the dielectric constant, which differs between dry soil or natural materials and materials containing water in liquid or vapour form. Calla et al. examined soils ranging from oven-dry to saturated moisture levels at frequencies from 2 GHz to 20 GHz. The sensitivity to moisture is governed by the change of the dielectric constant at microwave frequencies. The variation of the soil dielectric constant with moisture at different frequencies is shown in Figure 7.2.
At microwave frequencies the information comes mostly from the geometry and the bulk dielectric constant of the surface, whereas at visible and infrared frequencies it comes from molecular resonances in the surface layer of the plant or soil. This leads to the conclusion that when microwave, visible and infrared observations are combined, information on surface geometry, bulk dielectric constant and molecular resonance characteristics can all be acquired. The three are therefore complementary for remote sensing and should be used in conjunction to characterize the surface and achieve optimum results.
Figure 7.2 - Dielectric constant of soil as a function of frequency for different moisture contents (solid curves for ε′ and dotted curves for ε″)
Passive microwave remote sensing (MRS) is performed using microwave radiometers. This method is practicable because every natural substance emits electromagnetic radiation, which is a complicated function of the physical characteristics of the emitting surface. In addition to optical sensors, sensors operating at microwave frequencies have been utilized in recent times, on a limited scale, for several purposes. Passive sensors, also referred to as radiometers, detect the power radiated in the microwave spectrum. The fundamental principle of radiometer detection is governed by the Rayleigh-Jeans approximation of Planck's law; Planck's law describes the electromagnetic emission of a black body at a given temperature T (in kelvin).
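A minimal sketch of this principle is given below: in the Rayleigh-Jeans regime the spectral brightness is B ≈ 2kT/λ², so the measured power varies linearly with temperature, and radiometer output is usually expressed as a brightness temperature T_B = e·T, where e is the emissivity. The T_B = e·T relation is the standard radiometric convention and is stated here as an assumption, since the text does not give it explicitly; the numerical values are illustrative.

K_BOLTZMANN = 1.380649e-23  # J/K

def rayleigh_jeans_brightness(temperature_k, wavelength_m):
    """Spectral brightness B ~ 2 k T / lambda^2 (W m-2 Hz-1 sr-1) in the Rayleigh-Jeans limit."""
    return 2.0 * K_BOLTZMANN * temperature_k / wavelength_m ** 2

def brightness_temperature(emissivity, physical_temperature_k):
    """T_B = e * T: standard relation between emissivity and brightness temperature (assumed)."""
    return emissivity * physical_temperature_k

print(rayleigh_jeans_brightness(300.0, 0.21))   # a 300 K body observed at a 21 cm wavelength
print(brightness_temperature(0.95, 300.0))      # ~285 K for a high-emissivity surface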
Active MRS analyses the received data and distinguishes one target from another by using the scattering properties of terrains and targets. The scattering behaviour of a target is described by its backscattering coefficient, which depends on the incidence angle, operating frequency and polarization, as well as on the target's electrical properties (e.g. dielectric constant and conductivity) and physical characteristics (e.g. texture, surface type, etc.). The two main capabilities of radar are also exploited in active MRS, namely the capacity to produce high-resolution images and to measure distance/altitude with great precision.
When electromagnetic power is incident on a target, it is scattered, and the way it is scattered depends on the nature of the target surface. If the surface is smooth, most of the power is reflected in the specular direction; if the surface is rough, power is scattered in all directions. Diffuse scattering arises when the surface roughness is comparable to the sensor wavelength. As electromagnetic waves can penetrate below the surface, the scattering is also influenced by the sub-surface characteristics of the target; scattering that takes place both at and below the surface is known as volume scattering.
Microwave Principles:
Microwave Spectrum:- The microwave spectrum used for remote sensing ranges between 500 MHz and 100 GHz. For both active and passive microwave remote sensors there are certain widely used frequencies or wavelengths, as well as letter-band designations. Space-borne remote sensing frequencies range from L-band (1.3 GHz) through S-band (2.4 GHz) to Q-band (58 GHz).
Surface Geometry Sensitivity:- Radar signals are highly sensitive to the geometric features of the earth's surface and to the geometric structure of cultural and natural covers. Radar backscatter from terrain is very sensitive to surface slope at both small incidence angles (less than about 30°) and large incidence angles (larger than about 55°). As already noted, centimetre-wavelength radar scattering from the sea is greatly affected by Bragg scattering from capillary and short gravity waves. Finally, recent studies have found that both the geometrical structure (size and form of the stalks, trunks, branches and leaves) and the moisture content of vegetation such as trees, crops and other plants substantially affect the radar response.
Regardless of the amount of cloud cover or the position of the sun:- Imaging radars can gather high-resolution surface images at any time of day or night, irrespective of cloud cover. This is essential for applications such as ocean surveillance, where sea winds in or near cloud-covered storm cells must be observed and forecast. It is also important for recording regions, such as the Brazilian rain forest, that are almost permanently obscured by cloud.
With optical imagery such as that acquired by Landsat, the planet is generally recorded at a constant sun angle so that the illumination is consistent, which requires the use of sun-synchronous orbits. As imaging radars supply their own illumination, they do not depend on the sun angle and offer a wider range of possible orbital altitudes.
Brightness temperature of soil:- The dielectric constant of dry soil (about 3-4) depends strongly on soil moisture. With increasing soil moisture the dielectric constant of wet soil may reach 20 or more, producing an emissivity variation at around 1.4 GHz from about 0.95 for dry soil (volumetric moisture below 0.1 g/cm3) to about 0.6 for wet soil (volumetric moisture above 0.3 g/cm3).
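The connection between the dielectric constants and emissivities quoted above can be sketched with the smooth-surface, normal-incidence Fresnel relation e = 1 − |(1 − √ε)/(1 + √ε)|². Real soils are rough and are usually viewed off-nadir, so this simple relation is only an illustrative assumption, but it reproduces the dry/wet contrast of roughly 0.95 versus 0.6.

import math

def nadir_emissivity(dielectric_constant):
    """Smooth-surface, normal-incidence emissivity e = 1 - |(1 - sqrt(eps)) / (1 + sqrt(eps))|^2."""
    r = (1.0 - math.sqrt(dielectric_constant)) / (1.0 + math.sqrt(dielectric_constant))
    return 1.0 - r * r

print(nadir_emissivity(3.0))    # dry soil (eps ~ 3):  ~0.93
print(nadir_emissivity(20.0))   # wet soil (eps ~ 20): ~0.60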
Brightness temperature of snow:- The emissivity of snow-covered soil is generally governed by the dielectric constant of the underlying frozen soil (around 3) and by the thickness, water equivalent and liquid water distribution of the overlying snow cover. Where the snowpack is electrically thicker, this dependence is more pronounced at higher frequencies. As the snow-water equivalent (i.e., the total mass of water within a snow column) grows, the brightness temperature of a dry snow layer decreases owing to volume scattering. Even a slight increase in liquid water content (snow wetness) leads to a rise in the brightness temperature of the snow.
7.4 SUMMARY
Remote sensing has a wide range of applications. It can help in the development and management of natural resources and in identifying areas at risk of flooding, and it can be used to estimate harvests and resource depletion in some areas. Remote sensing may also be used for predicting climate, rainfall, cloud cover and other physical properties, and can help in identifying cloud-covered areas and factors such as flood risk during the kharif season; in overcast conditions, when crops suffer, wheat yield forecasting is difficult. The presence of moisture in the soil affects microwaves: dry and damp soils respond differently at different microwave frequencies because of an electrical property, the dielectric constant of the soil. Microwave remote sensing (MRS) therefore has enormous potential. It has specific benefits in applications such as geological surveying for petroleum and mineral prospecting, crop and vegetation monitoring, soil moisture detection, water resource management, agriculture, oceanography, and atmospheric sciences.
7.5 GLOSSARY
Active Microwave Remote Sensing: - provide their own source of microwave radiation
to illuminate the target.
Altimeter: - An altimeter or an altitude meter is an instrument used to measure the
altitude of an object above a fixed level.
Emissivity: - is defined as the ratio of the energy radiated from a material's surface to
that radiated from a perfect emitter, known as a blackbody, at the same temperature and
wavelength and under the same viewing conditions.
Passive Microwave Remote Sensing: - is similar in concept to thermal remote sensing.
All objects emit microwave energy of some magnitude, but the amounts are generally
very small. A passive microwave sensor detects the naturally emitted microwave energy
within its field of view.
Radiometer:-A radiometer is a device for measuring the radiant flux (power) of
electromagnetic radiation.
Q.2 ERS, Envisat, Sentinel and RISAT are examples of which type of satellite:
(a) Optical
(b) Passive
(c) Thermal
(d) Microwave
Q.4 Remote sensing techniques make use of the properties of ___________ emitted,
reflected or diffracted by the sensed objects:
(a) Electric waves
(b) Sound waves
(c) Electromagnetic waves
(d) Wind waves
7.7 REFERENCES
Ulaby, F.T., Moore, R.K. & Fung, A.K. Microwave Remote Sensing - Active and Passive, Vol. 1: Microwave Remote Sensing Fundamentals and Radiometry. Addison-Wesley Publishing Company, Advanced Book Program/World Science Division, Reading, Massachusetts, USA.
Calla, O.P.N., Borah, M.C.P., Mishra, Vashishtha R., Bhattacharya, A. & Purohit, S.P. Study of the properties of dry and wet loamy soil at microwave frequencies. Indian Journal of Radio & Space Physics, 1999, 28, 109-112.
Woodhouse, Iain H. Introduction to Microwave Remote Sensing. Taylor & Francis, 2005.
Ulaby, F.T., Moore, R.K. & Fung, A.K. Microwave Remote Sensing: Active and Passive, Vols. 1, 2 and 3. Addison-Wesley Publishing Company, 2001.
Henderson, Floyd M. & Lewis, Anthony J. Principles and Applications of Imaging Radar, Manual of Remote Sensing, 3rd edition, Vol. 2. ASPRS, John Wiley and Sons Inc., 1998.
Sullivan, Roger J. Radar Foundations for Imaging and Advanced Concepts. SciTech Publishing, 2004.
Faulconbridge, Ian. Radar Fundamentals. Argos Press, 2002.
Sharkov, Eugene A. Passive Microwave Remote Sensing of the Earth: Physical Foundations. Springer, 2003.
Q.2 Why is microwave remote sensing better suited for monitoring tropical rain forests than optical remote sensing?
A.2 Tropical forests are very humid, causing cloud cover nearly all the time. Optical systems then fail, and radar is the only practical option for monitoring these areas, since radar can 'look' through clouds.
Q.3 What is the main difference between radar and optical remote sensing systems, such as aerial photography or the SPOT satellite?
A.3 The main difference is that radar is an active system, operating day and night and insensitive to cloud cover, whereas an optical system is usually a passive system that cannot penetrate clouds.
8.1 OBJECTIVES
8.2 INTRODUCTION
8.3 RADAR PRINCIPLES, RADAR WAVEBANDS, SIDE
LOOKING AIRBORNE RADAR (SLAR) SYSTEMS &
SYNTHETIC APERTURE RADAR (SAR), REAL
APERTURE RADAR (RAR)
8.4 SUMMARY
8.5 GLOSSARY
8.6 ANSWER TO CHECK YOUR PROGRESS
8.7 REFERENCES
8.8 TERMINAL QUESTIONS
UNIT 8 - RADAR PRINCIPLES, RADAR WAVEBANDS, SIDE LOOKING AIRBORNE RADAR (SLAR) SYSTEMS & SYNTHETIC APERTURE RADAR (SAR), REAL APERTURE RADAR (RAR)
8.1 OBJECTIVES
The objectives of this unit are to make the reader understand the following concepts:
Introduction to active microwave remote sensing,
Classification of the different microwave sensors,
Radar principle and the radar equation,
Wavelength bands used by radar systems,
Concept of Side Looking Airborne Radar (SLAR) and its types,
Radar geometry,
Real Aperture SLAR systems and radar resolution,
Synthetic Aperture Radar systems.
8.2 INTRODUCTION
Microwave remote sensing incorporates both active and passive forms of sensing, depending on the source of illumination used. As described in Unit 7, the microwave portion of the electromagnetic spectrum covers a wavelength range from approximately 1 cm to 1 m. These longer wavelengths allow microwaves to penetrate cloud cover, haze, dust and rainfall, since they are far less susceptible to the atmospheric scattering that affects shorter optical wavelengths in the visible and infrared regions. This property gives microwave remote sensing an all-weather capability, as data can be collected at almost any time and place. Radiometers are examples of imaging passive microwave remote sensing systems, while radars are imaging active microwave systems. Both radiometers and radars have antennas and receivers, but radars additionally have transmitters that provide their own source of energy. In this unit, active microwave sensors are dealt with in detail. An active microwave sensor acts as its own source: it transmits a directed pattern of energy to irradiate a portion of the Earth's surface and then receives the portion scattered back to the instrument.
Active microwave sensors are usually divided into two distinct classes: imaging and non-imaging. Imaging radars can be further classified into Real Aperture Radar (RAR), or Side Looking Airborne Radar (SLAR), and Synthetic Aperture Radar (SAR), based on the antenna size and beam width used. The different types of non-imaging real aperture radars are scatterometers (used to measure wind speed), altimeters (used to measure platform height) and meteorological radars (used to measure rainfall and other weather phenomena). Other imaging synthetic aperture systems, in addition to SAR, are Interferometric SAR (InSAR or IFSAR), which is used to measure topography, and Inverse Synthetic Aperture Radar (ISAR), which is similar to SAR except that it uses the motion of the target rather than of the emitter to create the synthetic aperture.
RADAR is the most common form of imaging active microwave sensor. RADAR is an acronym for RAdio Detection And Ranging, and its operation is characterised by the word itself: 'radio' stands for the microwave signal and 'ranging' refers to measuring distance. The microwave (radio) signal is transmitted by the sensor towards the target, and the signals backscattered from the objects are received back. The various target objects are distinguished from one another on the basis of the strength of the backscattered signals. Further,
the sensor also measures the time delay between the transmitted and received signals in order
to determine the range or distance to the target.
Radar Principle:
A typical imaging radar consists of a pulse generator, a transmitter, a receiver, an antenna, a transmitter/receiver switch and a recorder (Figure 8.2). The signal is initially produced by the pulse generator at a specified frequency and is sent to the transmitter. The transmitter then emits this energy towards the target object through the transmitting antenna. The signal interacts with the target and is backscattered towards the receiving antenna, which passes the backscattered signal into the receiver circuits. The receiver amplifies the received signal, from which the target characteristics are finally extracted and recorded. In some systems two separate antennas are used for transmitting and receiving, while in others a transmit-receive (TR) switch alternates a single (monostatic) antenna between its transmitting and receiving functions.
It is important to understand that radar systems do not measure the direct reflectance from an object; instead they record the intensity of the radiation backscattered from it. The strength of the returning signal measured by the radar antenna can be represented in several ways. One way is to represent the signal radiometrically as power; it can also be expressed in decibels (dB) on a logarithmic scale. To improve the visual appearance of radar images they are often converted to magnitude, where each pixel value represents the square root of the power. The relationship between the received signal and the physical parameters of the system and target is given by the radar equation (Moore, 1983):
P_r = \frac{P_t \, G^2 \, \lambda^2 \, \sigma}{(4\pi)^3 R^4}        Equation (1)
All the variables of the radar equation (the transmitted power P_t, the antenna gain G, the wavelength λ and the range to the target R), except the radar cross-section, are known as controlled variables, since they depend on the radar system design and are therefore known in advance. The radar cross-section, on the other hand, depends on the characteristics of the terrain surface. The radar cross-section (σ) per unit area (A) that is reflected back to the receiving antenna is called the radar backscattering coefficient (σ°) and is computed as

\sigma^{o} = \frac{\sigma}{A}        Equation (2)

The radar backscattering coefficient is a dimensionless quantity that depends on surface parameters such as surface roughness and moisture content, and on radar system parameters such as wavelength and polarization. Replacing the radar cross-section in Equation (1) with the backscattering coefficient from Equation (2), the radar equation becomes:
P_r = \frac{P_t \, G^2 \, \lambda^2 \, \sigma^{o} A}{(4\pi)^3 R^4}        Equation (3)
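The following is a hedged Python sketch of Equations (1) to (3); all numerical values (transmitted power, gain, wavelength, cross-section, range and cell area) are illustrative assumptions rather than figures from the text.

```python
import math

def received_power(pt_w, gain, wavelength_m, sigma_m2, range_m):
    """Radar equation: Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)."""
    return (pt_w * gain**2 * wavelength_m**2 * sigma_m2) / ((4 * math.pi)**3 * range_m**4)

def backscatter_coefficient(sigma_m2, area_m2):
    """Equation (2): dimensionless backscattering coefficient sigma0 = sigma / A."""
    return sigma_m2 / area_m2

pt = 1_000.0        # transmitted power (W), assumed
g = 10_000.0        # antenna gain (dimensionless), assumed
lam = 0.056         # C-band wavelength (~5.6 cm)
sigma = 10.0        # radar cross-section of the resolution cell (m^2), assumed
r = 800_000.0       # slant range (m), assumed spaceborne geometry

pr = received_power(pt, g, lam, sigma, r)
print(f"Received power: {pr:.3e} W  ({10 * math.log10(pr):.1f} dBW)")
print(f"sigma0 = {backscatter_coefficient(sigma, 100.0):.2f} (for a 100 m^2 cell)")
```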
Radar Wavelengths:
The electromagnetic radiation is transmitted by the antenna as short pulses of a specific duration at a particular wavelength. Imaging radars operate within a limited set of wavelength bands, each covering a fairly broad interval. Since the wavelengths used by imaging radars are much longer than those of the visible, near-infrared (NIR), shortwave-infrared (SWIR) and thermal-infrared (TIR) regions, they are measured in centimetres (cm) rather than micrometres (µm). The nomenclature adopted for the radar wavelengths is K, Ka, Ku, X, C, S, L and P (Table 8.1). This alphabetical naming convention originated in the United States and was initially meant for military purposes; the bands were deliberately named in an arbitrary way to conceal which frequencies were in use from unauthorized personnel. The choice of a specific microwave wavelength band has numerous implications for the nature of the radar image generated.
Table 8.1: Radar wavelength bands
Firstly, in the case of Real Aperture Radar (discussed in an upcoming section), wavelength affects the resolution of the image, since azimuth resolution is directly proportional to it. Although the K band has the smallest wavelength and should therefore give the finest resolution, it is partially absorbed by atmospheric water vapour, so it is mainly used by ground-based weather radars for tracking precipitation and cloud cover. Secondly, wavelength affects the penetration depth of the signal into the object. The penetration of microwave energy into a material is described by the skin depth, the depth at which the signal strength falls to 1/e (about 37 %) of its value at the surface; it is equal to the reciprocal of the attenuation coefficient α. The skin depth is measured in standard units of length and varies from feature to feature. In the absence of moisture the skin depth increases with increasing wavelength, which is why the greatest penetration is observed in arid regions. Other factors affecting the penetration depth are the surface roughness and the angle of incidence at which the microwave energy strikes the object: penetration is greater at steeper (near-vertical) angles and decreases as the incidence angle increases. Although the longer microwave wavelengths are largely insensitive to atmospheric attenuation, heavy rainfall can still hinder the transmission of microwave energy. The effect of wavelength on penetration depth is shown in Figure 8.3, in which the penetration increases as one moves from the X to the L band.
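A small illustrative sketch of the skin-depth idea described above: the attenuation coefficients assigned to the X, C and L bands below are assumed example values, chosen only to show the trend of increasing penetration with wavelength, not measured figures.

```python
import math

def remaining_fraction(alpha_per_m: float, depth_m: float) -> float:
    """Fraction of the surface signal amplitude remaining at the given depth."""
    return math.exp(-alpha_per_m * depth_m)

# Assumed attenuation coefficients (1/m) for a dry material; skin depth = 1/alpha,
# the depth where the signal has fallen to 1/e (~37 %) of its surface value.
for band, alpha in [("X (3 cm)", 50.0), ("C (6 cm)", 20.0), ("L (23 cm)", 5.0)]:
    skin_depth_cm = 100.0 / alpha
    frac_at_10cm = remaining_fraction(alpha, 0.10)
    print(f"{band}: assumed alpha = {alpha}/m -> skin depth ~ {skin_depth_cm:.0f} cm, "
          f"fraction left at 10 cm depth = {frac_at_10cm:.2f}")
```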
Figure 8.3: Penetration depth for the X, C and L bands over an area (Ferro-Famil & Pottier, 2016)
Figure 8.4: Principle of Side Looking Airborne Radar (Lillesand, Kiefer, & Chipman, 2015)
The slant range to a target is obtained from the two-way travel time of the pulse:

SR = \frac{c \, t}{2}        Equation (4)

where
SR = slant range,
c = speed of light (3 × 10⁸ m/sec),
t = time delay between pulse transmission and reception of the backscattered signal.
The factor 2 appears because the measured time corresponds to the signal travelling the slant range twice, once towards the target and once back.
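A minimal sketch of the slant-range relation above; the echo delay used is an assumed illustrative value.

```python
SPEED_OF_LIGHT = 3.0e8  # m/s

def slant_range(time_delay_s: float) -> float:
    """Slant range (m) from the two-way travel time of a radar pulse: SR = c*t/2."""
    return SPEED_OF_LIGHT * time_delay_s / 2.0

# Example: a 40 microsecond delay between transmission and echo reception
print(f"A 40 us echo delay corresponds to SR = {slant_range(40e-6) / 1000:.0f} km")
```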
The return signal is used to modulate the intensity of the beam on an oscilloscope, producing a single intensity-modulated line that is transferred to film through a lens. The film is in strip form and its motion is synchronised with that of the aircraft. As the aircraft advances to the next beam width, return signals are received from the next strip of terrain, and the adjacent line is recorded on the film next to the previous one. In this way the movement of the aircraft results in a series of lines imaged on the film, creating a two-dimensional picture of the radar returns from the Earth's surface. The speed of the film is adjusted so that the scale of the image perpendicular to the flight direction is the same as the scale along the flight direction. Some distortion arises in transforming the image from slant range to ground range; this can be reduced by recording the radar sweeps on the cathode ray tube in a non-linear manner, thereby preserving the correct ground range of image points.
Side Looking Airborne Radars can be divided into two groups: Real Aperture Radars (RAR), in which the actual antenna length determines the beam width, and Synthetic Aperture Radars (SAR), in which signal processing is used to attain a much narrower beam width in the azimuth direction than a RAR can achieve. In the standard nomenclature of active microwave remote sensing, Real Aperture Radar and Side Looking Airborne Radar are often used as synonyms, although a Synthetic Aperture Radar is also a SLAR. The basic geometrical configuration of a SLAR system is given in Figure 8.5, and it remains identical whether the antenna is mounted on an aircraft or a spacecraft. Before diving deeper into the RAR/SAR systems, a few terms depicted in Figure 8.5 need to be explained: the incidence, look and depression angles; the azimuth and range (look) directions; and polarization.
1. Incidence Angle (θ): The angle formed between the radar pulse and a line perpendicular to the Earth's surface at the point of contact. For flat terrain the incidence angle is the complement of the depression angle (γ); for sloped terrain no such simple relationship exists (see the geometry sketch after this list).
2. Look Angle (φ): The angle formed between a vertical line dropped from the radar antenna and the radar line of sight. The look angle is the complement of the depression angle and varies between the near and far range.
3. Depression Angle (γ): The angle formed between a horizontal line extending from the aircraft fuselage and the radar line of sight to a specific point on the Earth's surface.
4. Azimuth Direction: In a SLAR/RAR system the antenna is typically mounted beneath the aircraft. The direction of the aircraft's movement along its straight flight line is called the azimuth (flight) direction. As it moves, the antenna illuminates the terrain to the side; the part of the illuminated swath nearest the flight line is called the near range, while the part farthest from it is called the far range.
5. Range/Look Direction: The direction in which the radar energy is transmitted, at right angles to the direction of movement of the aircraft or spacecraft. The range or look direction has a significant impact on the appearance (brightness or darkness) of features and hence on their interpretation: features looking dark under one look direction may appear
bright under another. Similarly, linear objects orthogonal to the look direction are enhanced more than those parallel to the look direction.
6. Polarization: The polarization of a radar signal describes the orientation of the electric (and associated magnetic) field components of the electromagnetic energy transmitted and received by the antenna (Figure 8.6). Unpolarized energy vibrates in all possible directions perpendicular to the direction of propagation. Radar systems can be configured to send and receive polarized energy, i.e., they can transmit horizontally or vertically polarized energy and can receive horizontally or vertically polarized backscatter from the terrain. A basic, unconfigured radar transmits horizontally polarized energy and receives horizontally polarized echoes from targets. Different kinds of images are therefore generated depending on the components transmitted and received; the horizontal and vertical components are separated by placing horizontally or vertically polarized filters, respectively, in front of the receiving antenna. Images in which the horizontal component is transmitted and received, or the vertical component is transmitted and received, are called HH or VV images, or like-polarized mode (Figure 8.7 A & B). Similarly, images in which the horizontal component is transmitted and the vertical component is received, or vice versa, are called HV or VH images, or cross-polarized mode (Figure 8.7 C). Radar systems measuring more than one polarization (e.g., HH and VH) are referred to as multipolarization radars, while a radar system measuring all four polarization states, i.e., HH, HV, VH and VV, is referred to as quadrature polarized or fully polarimetric.
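A small geometry sketch of the flat-terrain relationships described in items 1 to 3 and of the slant-range relation used later (Equation 7); the aircraft height and depression angles are assumed illustrative values.

```python
import math

def flat_terrain_geometry(depression_deg: float, height_m: float):
    """Incidence angle, look angle and slant range for flat terrain."""
    incidence_deg = 90.0 - depression_deg   # complement of the depression angle
    look_deg = 90.0 - depression_deg        # equal to the incidence angle on flat terrain
    slant_range_m = height_m / math.sin(math.radians(depression_deg))
    return incidence_deg, look_deg, slant_range_m

for gamma in (60.0, 45.0, 30.0):            # near range -> far range
    inc, look, sr = flat_terrain_geometry(gamma, height_m=6000.0)
    print(f"gamma = {gamma:.0f} deg: incidence = {inc:.0f} deg, "
          f"look = {look:.0f} deg, slant range = {sr / 1000:.1f} km")
```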
Figure 8.5: Geometric configuration of a typical RAR/SLAR system with flat terrain
assumption
Figure 8.7: (A) Like Polarized - VV Polarization, (B) Like Polarized - HH Polarization, (C) Cross Polarized - HV Polarization (Jensen, 2014)
Real Aperture Radar (RAR):
SLAR systems are categorised into two types, Real Aperture Radar (RAR) and Synthetic Aperture Radar (SAR); the term SLAR is frequently used as a synonym for Real Aperture Radar. Both follow the SLAR configuration explained in the previous section. Real aperture radars are also known as brute force or noncoherent radars, since in these systems the physical antenna length controls the beam width. The spatial resolution of a RAR system is governed by several variables. The transmitted signal is focused to illuminate the smallest possible area on the ground, since it is this area that defines the spatial detail that can be recorded. If the illuminated area is large, the backscattered signals from the different objects within it are averaged into a single tone on the image and the objects become difficult to distinguish. If, on the other hand, the focused area is small, much finer detail can be recorded and the distinct identities of the features are preserved.
One of the factors affecting the size of the illuminated area is the antenna length. The relationship between spatial resolution and antenna length can be understood from the following:
\beta = \frac{\lambda}{AL}        Equation (5)

where
β = antenna beam width,
λ = wavelength,
AL = antenna length.
A longer antenna allows the system to concentrate the radar energy on a smaller ground area. A longer antenna in a real aperture system therefore yields finer detail, but the practical limitation on the size of antenna an aircraft can carry restricts the ability of the
system to achieve finer detail. For the same reason, real aperture systems are not used on spacecraft: at the higher altitudes of spaceborne sensors, the comparatively small antennas would yield only coarse-resolution imagery.
A flashlight aimed at the floor, creating a spot of light, is a useful analogy for the area illuminated by a RAR on the ground. When the flashlight points straight down, the spot is small and circular; as it is tilted away, the spot becomes larger, more irregular and dimmer with increasing distance. By the same reasoning, the near range (R1) has finer resolution than the far range (R2) portion of the image (Figure 8.8). The antenna length determines the azimuth resolution of a real aperture SLAR, i.e., its ability to distinguish between two objects in the along-track dimension of the image. The azimuth resolution (Ra) of a RAR is given by:
R_a = SR \times \beta        Equation (6)

where
Ra = azimuth resolution,
SR = slant range (objects are better resolved in the near range than in the far range),
β = antenna beamwidth.

Substituting Equation (5) for the beamwidth gives

R_a = SR \times \frac{\lambda}{AL}        Equation (6)

A trigonometric relationship exists between the slant range (SR), the depression angle (γ) and the height of the aircraft (H) above the local vertical datum:

SR = \frac{H}{\sin \gamma}        Equation (7)
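A hedged sketch combining Equations (5) to (7) to estimate the azimuth resolution of a real aperture SLAR; the antenna length, wavelength, flying height and depression angles are assumed example values.

```python
import math

def rar_azimuth_resolution(wavelength_m, antenna_length_m, height_m, depression_deg):
    """Azimuth resolution (m) of a real aperture SLAR: Ra = SR * (lambda / AL)."""
    beam_width = wavelength_m / antenna_length_m                       # Equation (5)
    slant_range = height_m / math.sin(math.radians(depression_deg))    # Equation (7)
    return slant_range * beam_width                                    # Equation (6)

# X-band (3 cm), 5 m antenna, aircraft at 6 km altitude
for gamma in (60.0, 30.0):   # near-range vs far-range depression angles
    ra = rar_azimuth_resolution(0.03, 5.0, 6000.0, gamma)
    print(f"Depression angle {gamma:.0f} deg -> azimuth resolution ~ {ra:.0f} m")
```

The two printed values illustrate the point made above: for the same antenna, the near range (steeper depression angle) is resolved more finely than the far range.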
Figure 8.10: Relationship between slant range resolution and ground range resolution
A question arises here: why not simply select the shortest possible pulse length to achieve the finest resolution? The reason is that if the pulse length is shortened, the total energy illuminating the target is also reduced. Weaker energy interacting with the target means that the backscattered signal carries less information about it. The pulse length is therefore chosen as a trade-off between shortening the pulse to improve the range resolution and keeping the signal returned from the target strong enough to be useful. For two objects on the terrain to be resolved separately in the range direction, their slant-range distances must be separated by at least half the pulse length.
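The relation shown in Figure 8.10 is not reproduced as an equation in the text, so the sketch below assumes the standard SLAR expressions: slant-range resolution = c·τ/2 and ground-range resolution = c·τ/(2·cos γ). The pulse lengths and depression angle are illustrative assumptions.

```python
import math

C = 3.0e8  # speed of light, m/s

def range_resolution(pulse_length_s: float, depression_deg: float):
    """Slant-range and ground-range resolution (m) for a given pulse length."""
    slant_res = C * pulse_length_s / 2.0
    ground_res = slant_res / math.cos(math.radians(depression_deg))
    return slant_res, ground_res

for tau in (0.4e-6, 0.1e-6):   # shorter pulse -> finer resolution, but less energy on target
    sr_res, gr_res = range_resolution(tau, depression_deg=45.0)
    print(f"tau = {tau * 1e6:.1f} us: slant-range res = {sr_res:.0f} m, "
          f"ground-range res = {gr_res:.0f} m")
```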
Synthetic Aperture Radar (SAR):
The development of synthetic aperture radar has greatly improved the azimuth resolution achievable in radar remote sensing. Recall from the real aperture radar section that the angular beamwidth is inversely proportional to the antenna length (Equation 5); according to that equation, a finer azimuth resolution can be achieved by using a longer antenna. To overcome the physical difficulty of deploying very long antennas on aircraft and spacecraft, engineers developed a method in which a long antenna is synthesized electronically. A SAR system uses a small antenna that illuminates the ground to the side of the aircraft with a broad beam, much like a typical RAR system. The major difference is that the SAR records a long sequence of returns from each target as the platform moves, and the Doppler principle is used to monitor and combine these returns so that the resulting azimuth resolution is as fine as if it had been produced by a very narrow beam.
We have all experienced the Doppler principle in everyday life: when a whistling train approaches an observer, the whistle is heard at a higher frequency, and the frequency drops as the train moves away. The Doppler principle states that if the observer and/or the source are in relative motion, the observed frequency of a wave changes. This phenomenon applies to all harmonic waves, including the microwaves used in radar remote sensing.
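A brief sketch of the Doppler shift exploited by SAR, assuming the standard two-way radar relation f_d = 2·v_r/λ (this relation is not an equation given in this unit); the radial velocities and wavelength are illustrative values. The sign change mirrors the behaviour described in the next paragraph: positive shift while approaching the zero-Doppler point, negative after passing it.

```python
def doppler_shift(radial_velocity_ms: float, wavelength_m: float) -> float:
    """Two-way Doppler frequency shift (Hz) of a radar echo: f_d = 2 * v_r / lambda."""
    return 2.0 * radial_velocity_ms / wavelength_m

wavelength = 0.056   # C-band, ~5.6 cm (assumed)
for v_r in (50.0, 0.0, -50.0):   # target ahead of, abeam of, and behind the platform
    print(f"v_r = {v_r:+6.1f} m/s -> Doppler shift = {doppler_shift(v_r, wavelength):+8.0f} Hz")
```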
Figure 8.11 shows how the Doppler frequency of the returns from a terrain object shifts as a result of the forward motion of the aircraft at successive time intervals n, n+1, n+2, n+3 and n+4. It is evident from the figure that the frequency of the radar signal returning from the target increases from time n to time n+2 and then decreases from time n+3 to n+4. Having covered the principle of the Doppler shift, let us now see how images are generated by a synthetic aperture radar. The long antenna of the SAR is synthesized from a physically small antenna by exploiting the Doppler shift of frequency and the motion of the aircraft. Two assumptions are made: first, that the terrain remains stationary, and second, that the target being imaged lies at a fixed distance from the aircraft flight path. The short antenna mounted on the aircraft emits a series of microwave pulses at regular intervals of time as it flies in a straight line. When a pulse reaches the object, a portion of the energy is backscattered to the antenna (Figure 8.11(a)). The distance between the target and the aircraft decreases up to a certain point; in the figure the target is first 9 wavelengths away, then 8, 7 and 6.5 wavelengths in (b), (c) and (d), respectively. At point (d), 6.5 wavelengths away, the target is perpendicular to the antenna at its shortest distance from the aircraft; this is the region of zero Doppler shift. Beyond this point the distance between target and aircraft starts increasing again, as shown in Figure 8.11(e). The waves reflected from the object during the aircraft's movement from time n to n+4 are electronically combined with a reference signal,
resulting in interference. This interference is recorded as a voltage that controls the brightness of the spot scanned across the screen of a cathode ray tube. A high voltage and a bright spot are produced when the returned pulse and the reference pulse coincide, i.e., their displacements are in the same direction (both up or both down); this is known as constructive interference (Figure 8.12a). Destructive interference (Figure 8.12b) occurs when the returning and reference signals do not coincide, resulting in a low voltage and a dim or dark spot. These spots are recorded as light and dark dashes of unequal length, representing a one-dimensional interference pattern, on a film called a radar hologram that moves in proportion to the aircraft velocity.
Figure 8.13: A typical SAR system displaying the synthesized longer antennas and
azimuth and range resolution
The azimuth resolution of a synthetic aperture radar is given by:

SAR_a = \frac{L}{2}        Equation (9)

where L is the physical length of the antenna.
Radar signals are coherent, i.e., they are transmitted within a very narrow range of wavelengths, which gives rise to speckle in the images. Speckle appears as a salt-and-pepper pattern of noise that can be reduced by processing the data with several looks, i.e., by averaging. With N looks, the azimuth resolution of Equation (9) becomes:

SAR_a = N \times \frac{L}{2}        Equation (10)
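A brief sketch of Equations (9) and (10); the antenna length and the number of looks are assumed example values.

```python
def sar_azimuth_resolution(antenna_length_m: float, looks: int = 1) -> float:
    """Azimuth resolution (m) of a SAR after N-look processing: N * L / 2."""
    return looks * antenna_length_m / 2.0

antenna_length = 10.0   # assumed physical antenna length (m)
for n in (1, 4):
    print(f"{n}-look azimuth resolution: {sar_azimuth_resolution(antenna_length, n):.1f} m")
# Multi-look averaging suppresses speckle at the cost of a coarser azimuth resolution.
```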
Year  Satellite                        Country       Band  Polarization  Look Angle  Resolution (m)
1991  Almaz-1                          Soviet Union  S     HH            20–70°      10–30
1991  ERS-1                            ESA           C     VV            23°         30
1992  JERS-1                           Japan         L     HH            35°         18
1995  ERS-2                            ESA           C     VV            23°         30
1995  Radarsat-1                       Canada        C     HH            10–60°      8–100
2002  Envisat                          ESA           C     Dual          14–45°      30–1000
2006  ALOS                             Japan         L     Quad          10–51°      10–100
2007  TerraSAR-X                       Germany       X     Dual          15–60°      1–18
2007  COSMO-SkyMed 1                   Italy         X     Quad          20–60°      1–100
2007  COSMO-SkyMed 2                   Italy         X     Quad          20–60°      1–100
2007  Radarsat-2                       Canada        C     Quad          10–60°      1–100
2008  COSMO-SkyMed 3                   Italy         X     Quad          20–60°      1–100
2009  RISAT-2                          India         X     Quad          20–45°      1–8
2010  TanDEM-X                         Germany       X     Dual          15–60°      1–18
2010  COSMO-SkyMed 4                   Italy         X     Quad          20–60°      1–100
2012  RISAT-1                          India         C     Quad          12–50°      1–50
2014  ALOS-2                           Japan         L     Quad          10–60°      1–100
2014  Sentinel 1A                      ESA           C     Dual          20–47°      5–40
2015  SMAP                             US            L     Tri           35–50°      1000+
2016  Sentinel 1B                      ESA           C     Dual          20–47°      5–40
2018  SEOSAR/Paz                       Spain         X     Dual          15–60°      1–18
2018  NOVASAR-S                        UK            S     Tri           15–70°      6–30
2019  COSMO-SkyMed 2nd Generation-1    Italy         X     Quad          20–60°      1–100
2019  COSMO-SkyMed 2nd Generation-2    Italy         X     Quad          20–60°      1–100
2019  Radarsat Constellation 1, 2, 3   Canada        C     Quad          10–60°      3–100
2019  RISAT-2B                         India         X     Details not available
2019  RISAT-2B1                        India         X     Details not available
8.4 SUMMARY
This unit comprises an introduction to active remote sensing with special emphasis on radar systems. The working principles of typical SLAR, SAR and RAR systems have been discussed in consecutive sections. The unit also gives the reader an idea of the typical radar geometric quantities that are used in calculating the range and azimuth resolution.
8.5 GLOSSARY
Acronym Description
ALOS Advanced Land Observing Satellite
Envisat Environmental Satellite
ERS European Remote Sensing Satellite
ESA European Space Agency
InSAR or IFSAR Interferometric Synthetic Aperture Radar
ISAR Inverse synthetic-aperture radar
JERS Japan Earth Resources Satellite-1
NIR Near-infrared
RADAR RAdio Detection And Ranging
RAR Real Aperture Radar
RISAT Radar Imaging Satellite
Terms Description
Azimuth: In mapping and navigation, azimuth is the direction to a target with respect to north, usually expressed in degrees. In radar remote sensing, azimuth pertains to the direction of the orbit or flight path of the radar platform.
Backscatter: The microwave signal reflected by elements of an illuminated surface in the direction of the radar antenna.
Bistatic: In SAR, when two different antennae are used for transmission and reception of the signal.
Ground range: Range between the radar antenna and an object as given by a side-looking radar image but projected onto the horizontal reference plane of the object space.
Interference: In radar remote sensing, the wave interactions of the backscattered signals from the target surface.
Interferometry: Computational process that makes use of the interference of two coherent waves. In the case of imaging radar, two different imaging paths cause phase differences from which an interferogram can be derived. In SAR applications, interferometry is used for constructing a DEM.
Monostatic: In SAR, when the same antenna is used for transmission and reception of the signal.
Radar equation: Mathematical expression that describes the average received signal level compared with the additive noise level in terms of system parameters. Principal parameters include the transmitted power, antenna gain, radar cross-section, wavelength and range.
Radiometer: A sensor that measures radiant energy, typically in one broad spectral band ('single-band radiometer') or in only a few bands ('multi-band radiometer'), but with high radiometric resolution. The term is associated with passive microwave remote sensing sensors.
Slant range: Distance as measured by the radar to each reflecting point in the scene and recorded in the side-looking radar image.
Speckle: Interference of backscattered waves stored in the cells of a radar image. It causes the return signals to be extinguished or amplified, resulting in random dark and bright pixels in the image.
Synthetic aperture radar (SAR): The (high) azimuth resolution (in the direction of the flight line) is achieved through off-line processing. The SAR is able to function as if it had a large virtual antenna aperture, synthesized from many observations with the (relatively) small real antenna of the SAR system.
8.7 REFERENCES
1. Ferro-Famil, L., & Pottier, E. (2016). Synthetic Aperture Radar Imaging. In N. Baghdadi,
& M. Zribi, Microwave Remote Sensing of Land Surface (pp. 1-65). Elsevier.
2. Jensen, J. R. (2014). Remote Sensing of the Environment: An Earth Resource Perspective. Upper Saddle River, NJ: Pearson Education.
3. Lillesand, T. M., Kiefer, R. W., & Chipman, J. W. (2015). Remote Sensing and Image
Interpretation (Seventh ed.). New York: John Wiley & Sons.
4. Moore, R. K. (1983). Imaging Radar Systems. In R. Colwell, Manual of Remote Sensing
(pp. 429-474). Bethesda: ASP&RS.
5. Tempfli, K., Kerle, N., Huurneman, G. C., & Janssen, L. F. (2009). Principles of Remote
Sensing. The Netherlands: ITC.
6. Ulaby, F. T., & Long, D. G. (2014). Microwave Radar and Radiometric Remote Sensing.
USA: University of Michigan Press.
9.1 OBJECTIVES
9.2 INTRODUCTION
9.3 INTERACTION BETWEEN MICROWAVES AND EARTH’S
SURFACE
9.4 SUMMARY
9.5 GLOSSARY
9.6 ANSWER TO CHECK YOUR PROGRESS
9.7 REFERENCES
9.8 TERMINAL QUESTIONS
UNIT 9 - INTERACTION BETWEEN MICROWAVES AND EARTH'S SURFACE
9.1 OBJECTIVES
After reading this unit, the learner will be able to understand:
how microwaves interact with materials and objects (both natural and artificial);
the various characteristics, such as surface properties, electrical properties, penetration depth, polarization and frequency, that affect the interaction of radar signals with the Earth's features.
9.2 INTRODUCTION
The portion of the transmitted energy returning to the radar from targets on the surface governs the brightness of features in a radar image. The interactions of the radar signal with the Earth's surface control the magnitude or intensity of the backscattered energy, which is a function of several variables or parameters. These include radar characteristics such as frequency, viewing geometry and polarization, and surface characteristics such as topography, land-cover type, surface roughness and dielectric properties. Assessing the individual contribution of each of these characteristics to the appearance of features in a radar image is nearly impossible because they are so closely interrelated: any change in one parameter may alter the response of the others, and thereby the amount of backscatter. It is thus the combined interaction of these variables that determines the brightness of features in an image.
Microscale surface roughness corresponds to roughness elements whose heights are measured in centimetres, such as stones, leaves and small tree branches; topographic relief and mountains are not considered at this scale. For microscale roughness, the relationship between the wavelength of the incident radar energy (λ), the depression angle (γ) and the local height of the objects (h, in cm) within the illuminated ground cell governs the amount of microwave energy backscattered towards the sensor. Given the microscale surface roughness characteristics and the radar system parameters (λ, γ, h), the visual appearance of the Earth's surface in a radar image can be predicted using the modified Rayleigh criteria. An area with a smooth surface behaves like a specular reflector: most of the incident microwave energy is reflected away from the antenna, and the slight amount of backscattered energy returned to the antenna is recorded as a dark area on the radar image. The criterion for a smooth surface is:
h < \frac{\lambda}{25 \sin \gamma}        Equation 1
To see how this equation is used, suppose we want to find the local height (h) of an object that produces a smooth (dark) radar return at the X-band wavelength (λ = 3 cm) with a depression angle (γ) of 35° (Figure 9.1a). Substituting these values into Equation 1:

h < \frac{3\ \text{cm}}{25 \sin 35^{\circ}} = \frac{3\ \text{cm}}{25 \times 0.5736}

h < 0.21\ \text{cm}
Thus, we can conclude that an object with a local height of < 0.21 cm, which uniformly fills a
radar image resolution cell, produces very little radar return and is therefore recorded as a dark
tone on the image.
Similarly, if one wants to predict a bright radar return on the image, the modified Rayleigh rough criterion can be used (Figure 9.1b):

h > \frac{\lambda}{4.4 \sin \gamma}        Equation 2
Putting the same values of λ (3 cm) and γ (35°) into Equation 2, we can calculate the local height of an object that produces a bright return on the radar image:

h > \frac{3\ \text{cm}}{4.4 \sin 35^{\circ}} = \frac{3\ \text{cm}}{4.4 \times 0.5736}

h > 1.18\ \text{cm}
Thus, we can conclude that an object with a local height of > 1.18 cm, which uniformly fills a
radar image resolution cell, produces high radar return and is therefore recorded as a bright tone
on the image.
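A small sketch of the modified Rayleigh criteria in Equations 1 and 2, which can be used to reproduce the worked example above or the thresholds in Table 9.1; the test height of 0.5 cm is an illustrative value.

```python
import math

def rayleigh_thresholds(wavelength_cm: float, depression_deg: float):
    """Smooth and rough height thresholds (cm) from the modified Rayleigh criteria."""
    s = math.sin(math.radians(depression_deg))
    return wavelength_cm / (25.0 * s), wavelength_cm / (4.4 * s)

def surface_class(h_cm: float, wavelength_cm: float, depression_deg: float) -> str:
    smooth, rough = rayleigh_thresholds(wavelength_cm, depression_deg)
    if h_cm < smooth:
        return "smooth (dark tone)"
    if h_cm > rough:
        return "rough (bright tone)"
    return "intermediate (gray tone)"

# Reproduces the X-band worked example: lambda = 3 cm, gamma = 35 deg
smooth, rough = rayleigh_thresholds(3.0, 35.0)
print(f"X band, 35 deg depression: smooth below {smooth:.2f} cm, rough above {rough:.2f} cm")
print("h = 0.5 cm ->", surface_class(0.5, 3.0, 35.0))
```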
Figure 9.1: (a) Smooth surface having h < 0.21 cm, representing a specular reflector; (b) Rough surface having h > 1.18 cm, representing a diffuse reflector
There is also a third case: objects with a local height between 0.21 cm and 1.18 cm within the radar resolution cell have intermediate surface roughness for this combination of radar wavelength and depression angle and appear gray in the radar image. The radar backscatter therefore depends on both the wavelength and the depression angle. The effect of wavelength and depression angle on radar return is illustrated in Table 9.1, in which the modified Rayleigh criteria are calculated for three radar wavelengths (λ = 0.86, 3.0 and 23.5 cm) and three depression angles (γ = 45°, 60° and 70°). An object with a height of 0.5 cm, for example, would appear bright on Ka-band imagery, gray on X-band imagery, and dark on L-band imagery. This relationship is significant in that the same terrain or object can appear differently in radar images depending on the radar sensor's wavelength and depression angle.
Table 9.1: Modified Rayleigh surface roughness criteria (heights in cm) for three different wavelengths and depression angles.

Surface Roughness   Ka band (λ = 0.86 cm, γ = 45°)   X band (λ = 3 cm, γ = 60°)   L band (λ = 23.5 cm, γ = 70°)
Smooth              h < 0.05                          h < 0.14                      h < 1
Intermediate        h = 0.05 to 0.28                  h = 0.14 to 0.79              h = 1 to 5.68
Rough               h > 0.28                          h > 0.79                      h > 5.68
Hence, radar image interpretation keys are more challenging to create than optical aerial or satellite interpretation keys. In a later section of this unit we will also see that radar returns are affected by look direction. Mikhail, Bethel and McGlone (2001) give another criterion for predicting weak, intermediate and strong radar returns. They suggest that a weak return is produced if the local relief height is less than one-eighth of the incident wavelength; the weak return arises from specular reflection directed away from the antenna. Relief heights ranging from one-eighth to one-half of the wavelength produce an intermediate return, since only a small
portion of the radar energy is backscattered to the antenna. Similarly, a local relief with a height
greater than one-half of the incident radar wavelength results in a strong radar return.
Just as with microscale surface roughness, mesoscale and macroscale surfaces also scatter the incident microwave energy. Whereas microscale roughness is a function of small elements, such as leaves, within an individual resolution cell, mesoscale surface roughness is a function of the backscattering characteristics of several resolution cells taken together, for example an entire forest canopy; such a canopy therefore appears with a coarser texture on a radar image acquired at the same wavelength and depression angle. Finally, macroscale surface roughness is governed by the slope and aspect of the terrain and by the appearance of shadow, which influence image formation.
Geometrical Characteristics:
Radar returns are affected by variations in the local incidence angle. Slopes facing the sensor produce higher radar returns, while slopes facing away from the sensor produce low or no returns. The local incidence angle governs the balance between radar backscattering and shadowing for different surface properties: for local incidence angles of 0° to 30° the backscatter is dominated by topographic slope, for angles of 30° to 70° surface roughness properties dominate, and for angles greater than 70° radar shadows dominate the image. The shape and orientation of objects must be considered in addition to their surface roughness when evaluating radar returns, because the geometric configuration of targets also influences the returned signal. Objects with complex geometric shapes, such as those found in the urban landscape, create bright returns. The reason is that the incident radar signals undergo multiple reflections that redirect them back to the antenna, much like a ball bouncing off the corner of a pool table back to the player. Such objects are called corner reflectors (Figure 9.2c); examples are the corners of buildings and the passages between densely built-up areas. Corner-reflector geometry is especially common in urban areas because of the complex angular shapes made of concrete, masonry and metal; in rural areas, farm buildings and agricultural equipment also act as corner reflectors. Corner reflectors aid radar image interpretation, but it should be kept in mind that their returns on the image are not proportional to their actual size: the returns appear much larger and brighter than the objects causing them.
Figure 9.2: Radar reflections from (a) rough surface - diffuse scattering, (b) smooth surface - specular scattering, (c) corner reflector
exhibits a radar energy penetration depth of several metres. The dielectric constant of the ocean surface is very high; thus a large part of the incident radar energy is reflected at the water's surface. The liquid water content of snow can be determined by microwave remote sensing because the dielectric constant of snow depends on the amount of water present in liquid form. Similarly, the age, compaction and type of ice can be identified through differences in dielectric constant. Microwave remote sensing has also proved useful for extracting a variety of biophysical parameters. Healthy vegetation, such as crops and forest canopies, has a large surface area and high moisture content; dense, moist vegetation therefore acts like a hovering cloud of water droplets above the terrain and reflects radar energy strongly. The sensitivity of the radar signal to soil and vegetation moisture content increases with the steepness of the depression angle.
Optical remote sensing systems such as Landsat 8 (OLI), Sentinel-2 (MSI) and SPOT use the optical wavelengths of the electromagnetic spectrum and measure the energy that is reflected, scattered, transmitted and/or absorbed by the top few leaf layers and stems of a vegetation canopy. Optical remote sensing therefore gives limited information about the canopy's internal characteristics and very little about the underlying soil. Microwave energy, on the other hand, can penetrate the vegetation canopy to varying depths depending on its frequency, polarization and angle of incidence. It interacts with plant structures whose dimensions are of the order of centimetres and decimetres, which makes it possible to establish relationships between the components of the canopy and the radar backscatter. A part of the vertically or
horizontally polarized radar energy interacts with the trees, while some is scattered back to the sensor. The amount of energy received at the antenna depends on the frequency and polarization of the incident energy and on the depolarizing capability of the canopy components, the signal penetration depth, and whether the signal eventually interacts with the underlying soil surface. The relationship between penetration depth, polarization and the canopy components can be understood through two types of scattering: (i) surface scattering and (ii) volume scattering. Like-polarized radar returns (HH or VV) generated by single reflections from canopy components such as leaves, stems and branches are recorded as strong, bright signals; this is called canopy surface scattering. In contrast, radar energy that undergoes multiple scattering within the volume of canopy components is returned as depolarized energy (HV or VH), and this is called volume scattering.
(Kasischke & Bourgeau-Chavez, 1997) explained production of radar backscattering coefficient
(σ°) when the microwave energy interacts with terrain resulting from surface or volume
scattering.
The volume scattering effect can be understood by assuming that the radar signature reaching the antenna comes from different canopy layers. These interactions can occur either in (i) woody vegetation, which has three distinct layers: an overhead canopy consisting of small branches and leaves, a trunk layer consisting of large branches and trunks, and the ground surface (Figure 9.3(a)); or in (ii) non-woody vegetation, which has only two distinct layers: the overhead canopy and the ground surface (Figure 9.3(b)).
Figure 9.3: Sources of scattering from (a) woody vegetation and (b) herbaceous vegetation
The term σ°w is the backscattering coefficient returned from woody vegetation towards the radar antenna, as expressed by Kasischke and Bourgeau-Chavez (1997). The same expression can be adapted to non-woody vegetation by eliminating the interactions associated with trunks, giving the radar backscattering coefficient for herbaceous vegetation, σ°h.
The terms in Equations 2 and 3 depend on (1) the vegetation type (which affects surface roughness), (2) the polarization and wavelength of the incident microwave energy, (3) the dielectric constant of the vegetation, and (4) the dielectric constant of the ground surface. Vegetation with a higher water content has a higher dielectric constant, and the scattering and attenuation terms in Equations 2 and 3 increase with it. The attenuation by a vegetation canopy is governed by the water content per unit volume rather than by the plant structures themselves, i.e., the leaves, trunks or stems. Microwave scattering from vegetated surfaces also depends on the condition of the ground layer, which is characterised by two properties: (1) micro- and mesoscale surface roughness and (2) the reflection coefficient. The amount of microwave energy backscattered to the antenna (σ°s) increases with greater surface roughness, while the energy scattered in the forward direction (σ°m and σ°d) decreases with greater surface roughness. The dielectric
constant of the ground layer controls the reflection coefficient; its low value for a dry ground layer results in low reflection. As the dielectric constant of moist soil rises, the reflection coefficient increases and so does the amount of microwave energy backscattered and forward-scattered: for a constant surface roughness, the backscattered and forward-scattered energy (σ°m, σ°s and σ°d) increases with the soil dielectric constant. However, a layer of standing water over the vegetated ground surface in wet environments eliminates the effect of surface roughness and increases the reflection coefficient. Eliminating the surface roughness means that reflection from the ground becomes specular, with no direct backscattering, which increases the ground-trunk (σ°d) and ground-canopy (σ°m) interactions.
Figure 9.4: Surface and Volume scattering from a hypothetical pine forest
The leaves and stems at the top of the canopy interact with the incident microwave energy, producing surface scattering. The stems, branches, leaves and trunk interact with the energy transmitted into the canopy, producing volume scattering. Finally, surface scattering occurs again at the soil surface. A visual comparison of the response of the same canopy to
different wavelength bands, such as X (3 cm), C (5.8 cm) and L (23.5 cm), is shown in Figure 9.5(a–c).
Figure 9.5: Response of a typical forest canopy to X, C, and L microwave wavelength bands
Since the shorter X-band wavelength has little penetration, it is attenuated mostly at the top of the canopy by leaves and small branches, resulting in surface scattering. The C band produces surface scattering at the top of the canopy and some volume scattering from the stands, with very little response recorded from the ground. The longer L band penetrates deep into the canopy, producing volume scattering from the stems, leaves, trunk and branches; part of the microwave energy is also transmitted to the ground, where interaction at the soil-vegetation boundary layer results in surface scattering.
tons/ha for the P and L bands, respectively, using the NASA/JPL polarimetric AIRSAR. The C band of AIRSAR showed significantly less sensitivity to total aboveground biomass. Wang et al. (1994) used ERS-1 SAR data to study the effects of changes in loblolly pine biomass and soil moisture on radar backscattering. They concluded that the C band responded poorly to total aboveground biomass because of its high sensitivity to soil moisture and the steep local incidence angle of the sensor (23°). As discussed in the previous section, lower frequencies such as the P and L bands produce volume scattering, while higher frequencies such as the C and X bands produce surface scattering from the top of the canopy. Various researchers have also established correlations between leaf area index (LAI) and radar measurements. Surface scattering is the main cause of bright radar returns on like-polarized (HH or VV) images, while volume scattering is the main cause of bright returns on cross-polarized (HV or VH) images. Vegetation monitoring in sloping or mountainous regions is best done using cross-polarized images (HV or VH), since these are less sensitive to slope variations. Likewise, the same crop grown in different row directions can produce like-polarized images that are difficult to interpret; cross-polarized images are used to avoid this.
9.4 SUMMARY
This unit introduced the interaction of radar energy with the terrain surface, which depends on various terrain and radar system characteristics. Terrain characteristics such as geometry, surface roughness and dielectric constant were discussed, as were the radar system characteristics that influence the interactions with terrain features, including wavelength, frequency, polarization, incidence angle and penetration depth. The surface and volume scattering phenomena were explained in detail.
9.5 GLOSSARY
Terms Description
Corner reflection: In radar remote sensing, a high backscatter typically caused by two flat surfaces intersecting at 90 degrees and oriented orthogonally to the incident radar beam.
9.7 REFERENCES
1. Dobson, M. C., Ulaby, F. T., LeToan, T., Beaudoin, A., Kasishke, E. S., & Christensen, N.
(1992). Dependence of Radar Backscatter on Coniferous Forest Biomass. IEEE
Transactions on Geoscience and Remote Sensing, 30(2), 412-415.
2. Ferro-Famil, L., & Pottier, E. (2016). Synthetic Aperture Radar Imaging. In N. Baghdadi, &
M. Zribi, Microwave Remote Sensing of Land Surface (pp. 1-65). Elsevier.
3. Jensen, J. R. (2014). Remote Sensing of the Environment- An Earth Resource Perspective.
Upper Saddle River, NJ : Pearson Education.
4. Kasischke, E. S., & Bourgeau-Chavez, L. L. (1997). Monitoring south Florida wetlands using
ERS-1 SAR imagery. Photogrammetric Engineering & Remote Sensing, 63, 281-291.
5. Lillesand, T. M., Kiefer, R. W., & Chipman, J. W. (2015). Remote Sensing and Image
Interpretation (Seventh ed.). New York: John Wiley & Sons.
6. Mikhail, E. M., Bethel, J. S., & McGlone, J. C. (2001). Introduction to Modern
Photogrammetry. New York: John Wiley.
7. Moore, R. K. (1983). Imaging Radar Systems. In R. Colwell, Manual of Remote Sensing (pp.
429-474). Bethesda: ASP&RS.
8. Tempfli, K., Kerle, N., Huurneman, G. C., & Janssen, L. F. (2009). Principles of Remote
Sensing. The Netherlands: ITC.
9. Ulaby, F. T., & Long, D. G. (2014). Microwave Radar and Radiometric Remote Sensing.
USA: University of Michigan Press.
2. The radar wavelength of 5.8 cm interacts with the terrain at a depression angle of 45°. Calculate the
local terrain heights for rough, intermediate, and smooth surfaces.
3. How does the penetration depth affect the interactions with earth features?
4. What are the impacts of radar polarization on interactions with earth features?
10.1 OBJECTIVES
10.2 INTRODUCTION
10.3 GEOMETRICAL CHARACTERISTICS OF MICROWAVE
IMAGE
10.4 SUMMARY
10.5 GLOSSARY
10.6 ANSWER TO CHECK YOUR PROGRESS
10.7 REFERENCES
10.8 TERMINAL QUESTIONS
10.1 OBJECTIVES
After reading this unit learners will be able to understand:
the geometrical characteristics/distortions of the radar image acquisitions:
slant-range scale distortion, foreshortening, layover, shadows, and parallax.
other characteristics that affect the appearance of radar images, such as radar image
speckle, range brightness variation, motion errors, and moving target errors.
10.2 INTRODUCTION
As a consequence of their side-looking geometry, both real and synthetic aperture radars generate images with a number of unusual features that are readily visible when observing radar images. A thorough understanding of these features is essential for effective planning of data acquisition and for correct interpretation of radar images. Almost all radar imagery contains geometrical distortions. When the terrain is flat, these distortions are easy to correct, but buildings, trees, and mountains produce relief displacement within radar images. Relief displacement in radar images differs from that in optical images: it is reversed, with targets displaced towards the sensor instead of away from it. There are two types of elevation-induced distortion prevalent in radar imagery, foreshortening and layover, and closely associated with them is another distortion called radar shadow. Radar images are also affected by parallax when a terrain object is imaged from two different flight lines; the differential relief displacements produce image parallax, which can make it challenging to create stereo radar data. Radar speckle consists of random bright and dark areas in a radar image that obscure visual image interpretation. Range brightness variation is a systematic gradient in image brightness across the image in the range direction, more prominent in airborne radars than in spaceborne radars. Side-looking radar images are generated by recording the returned power versus time (or range) for one pulse in one dimension and "assembling" the range lines in the other dimension to create an image; any non-ideal motion or pointing of the platform can therefore generate distortions in the resulting image. Moving objects in the scene, such as cars, ships, or surface waves on water, introduce an extra Doppler (or phase) component proportional to the relative velocity between the instrument and the target, ultimately resulting in a displacement of the object in the azimuth direction in the image.
There are two geometric formats for recording radar images: slant-range format and ground-range format. In the slant-range format, pixel spacing in the range direction is directly proportional to the time interval between received pulses. This time interval is proportional to the slant-range distance between the sensor and the imaged object, rather than to the horizontal ground distance between the nadir line and the object. This results in image compression in the near range and expansion in the far range. In the ground-range format, by contrast, the image pixel spacing is directly proportional to the objects' distance on the theoretically flat ground. The characteristics of the slant-range and ground-range image formats are illustrated in Figure 10.1.
This distortion is more pronounced in airborne systems than in satellite systems. The presence of slant-range scale distortion in slant-range imagery restricts its direct use in planimetric mapping. For flat terrain, the approximate ground range GR can be calculated from the flying height H′ and slant range SR using the following relationships:

SR² = H′² + GR²        Equation 1

GR = √(SR² − H′²)

GR = H′ √(1/sin²γ − 1)        Equation 2

where γ is the depression angle.
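As a minimal sketch of Equations 1 and 2, the flat-terrain conversion from slant range to ground range can be written as follows; the function names and numerical values are illustrative assumptions, not figures from the text.

```python
import math

def ground_range_from_slant(slant_range_m, flying_height_m):
    # Equation 1 rearranged: GR = sqrt(SR^2 - H'^2), flat-terrain assumption
    return math.sqrt(slant_range_m ** 2 - flying_height_m ** 2)

def ground_range_from_depression(flying_height_m, depression_deg):
    # Equation 2: GR = H' * sqrt(1 / sin^2(gamma) - 1), gamma = depression angle
    s = math.sin(math.radians(depression_deg))
    return flying_height_m * math.sqrt(1.0 / s ** 2 - 1.0)

# Illustrative geometry: 6 km flying height, 10 km slant range (depression angle ~36.87 deg)
print(ground_range_from_slant(10_000, 6_000))       # 8000.0 m
print(ground_range_from_depression(6_000, 36.87))   # ~8000 m, same geometry
```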
Apart from the flat-terrain assumption used in the previous equations, the flight parameters also affect the range and azimuth scales. Variations in aircraft altitude affect the range scale, while the synchronization between the image recording system and the aircraft ground speed controls the azimuth scale. In radar image collection and recording, it is not easy to maintain a consistent scale throughout the tasking. The speed of light determines the scale in the range direction, while the aircraft/spacecraft speed determines the scale in the azimuth direction. These scale variations are reconciled by controlling the data collection parameters, for which a global positioning system (GPS) and an inertial navigation system (INS) are used on the aircraft or spacecraft. The GPS is used to guide the aircraft along a flight line and to maintain a fixed flying height above the datum. The INS consists of angular sensors used to measure roll, pitch, and yaw, i.e., the rotation of the aircraft about the x, y, and z-directions. An inertial system also controls the synchronization between the aircraft speed and the data recording. Spaceborne systems act as steadier flight platforms than airborne systems.
Relief Displacement:
Relief displacement is the shift in the photographic position of an image point caused by the relief of the object, i.e., its elevation above or below a selected datum. This displacement is one-dimensional and occurs perpendicular to the flight line. The key difference is that relief displacement is reversed in radar images compared to optical images, because distances are measured in the range direction. When radar pulses encounter a vertical feature, the top of the feature interacts with the incident pulses before the base. Consequently, return signals from the top of the elevated feature reach the antenna before the signals from its base. The vertical feature therefore lays over nearby features and appears to lean towards the nadir. This layover effect is more severe in the near range, where the incidence angles are steeper. A comparison between relief displacement on an aerial photograph and on a radar image is shown in Figure 10.2.
The layover effect is extreme in the near range, especially for terrain slopes facing the antenna. It is prominent when the terrain slopes are steep enough that the top of the slope is imaged ahead of the base. As seen in Figure 10.3, pyramid 1 is located in the near range of the image, and the radar returns arrive at a very steep incidence angle. Because the returns from the top of pyramid 1 are received before those from its base (Figure 10.4), the layover effect is most prevalent for this pyramid, and its severity decreases away from the near range (Figure 10.5).
Figure 10.3: Relationship between range and incident angle for relief displacement
Figure 10.4: Relationship between terrain slope and incident wavefronts on relief
displacement
In the far range, the incident wavefronts reach the base of a feature before its top, so the feature is imaged with its base ahead of its top. However, the objects are still not recorded in their actual sizes, and a slight compression is observed for the slopes facing towards the radar antenna (Figure 10.5, Pyramid 4).
There is one more type of relief displacement visible in radar images, known as foreshortening. All terrain features whose slopes face the radar antenna appear compressed, or foreshortened, compared to backslopes, i.e., slopes facing away from the radar antenna. Foreshortening is less severe in the far range and becomes extreme in the near range, to the extent that the top and base of a feature are imaged simultaneously. Moving further towards the near range, the incidence angle becomes steeper and foreshortening changes into layover (Figure 10.5). The degree of foreshortening is expressed by the foreshortening factor Fs:

Fs = sin(θ − α)        Equation 3
where θ is the incidence angle and α is the slope angle (Figure 10.6). The slope angle is positive (α+) for the slope facing the antenna and negative (α−) for the slope facing away from the antenna. In Figure 10.6, the mountain's height is low, and the distances AB′ and B′C are equal in ground range. When the radar pulse interacts with the mountain, it first hits base A, which is imaged as 'a'; the top of the mountain B interacts later and is imaged as 'b', and base C is imaged as 'c'. Foreshortening is affected by the height of the object: the higher the object, the greater the foreshortening. The slope facing the antenna will appear brighter, while the slope facing away from the antenna will appear darker. Foreshortened radar images are challenging to interpret; even if the object is not high, a planimetric displacement appears on the radar image, i.e., 'ab' is smaller than 'bc'. Foreshortening also intensifies with an increase in depression angle, or equivalently a decrease in incidence angle.
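Equation 3 can be illustrated with a short sketch; the angles below are assumed values chosen only to show how the factor behaves for foreslopes and backslopes.

```python
import math

def foreshortening_factor(incidence_deg, slope_deg):
    # Equation 3: Fs = sin(theta - alpha); alpha > 0 for a slope facing the antenna,
    # alpha < 0 for a backslope facing away from it
    return math.sin(math.radians(incidence_deg - slope_deg))

print(round(foreshortening_factor(35, 0), 2))    # ~0.57: flat-terrain reference
print(round(foreshortening_factor(35, 20), 2))   # ~0.26: foreslope appears compressed (foreshortened)
print(round(foreshortening_factor(35, -20), 2))  # ~0.82: backslope appears stretched by comparison
print(foreshortening_factor(35, 35))             # 0.0: theta = alpha, top and base imaged together
```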
Radar Shadows:
Radar shadows are another type of geometrical distortion present in radar images. Steep terrain prevents parts of the imaged region from being illuminated by the radar energy, resulting in radar shadows. Shadows in optical images are entirely different from radar shadows: in the optical case, shadow areas are weakly illuminated, while radar shadows are entirely black and correspond to areas that return no information to the antenna. Radar shadows correspond to areas lying on the backslope from which no echoes are returned. These can be considered radar-silence areas, i.e., regions from which no measurable signal is received. The topographic relief and the flight direction relative to the topography control the formation of radar shadow. Radar shadows are a function of the depression angle, and therefore shadows are more severe in the far range, where the depression angle is smallest.
The backslope portion of a terrain feature is in radar shadow when its backslope angle (α−) is steeper than the depression angle (γ), i.e., α− > γ (Figure 10.7). The backslope is fully illuminated when the backslope angle is less than the depression angle, i.e., α− < γ (Figure 10.7). A third case, called grazing illumination, occurs when the backslope angle equals the depression angle, i.e., α− = γ (Figure 10.7). In Figure 10.7 it can be seen that the terrain designated by BCD is in complete shadow in the slant-range image segment bd. Further, the distance BC in the ground range is shorter than bd (the slant-range image distance). Radar shadows have the following characteristics:
Radar shadowing will be longer at greater range distances because of the flatter incidence angle. The reader should understand that there is a trade-off between radar shadows and relief displacement: a steeper incidence angle results in intense foreshortening and layover but little shadow, whereas images with flatter incidence angles have less relief displacement but more of the image obscured by shadow.
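The shadow condition described above, comparing the backslope angle α− with the depression angle γ, can be expressed as a small sketch; the sample angles are assumed values for illustration.

```python
def backslope_illumination(backslope_deg, depression_deg):
    # Classification follows the alpha-minus versus gamma comparison in the text
    if backslope_deg > depression_deg:
        return "radar shadow"          # alpha- > gamma: backslope not illuminated
    if backslope_deg == depression_deg:
        return "grazing illumination"  # alpha- == gamma
    return "fully illuminated"         # alpha- < gamma

# Far-range example: small depression angle, so even a moderate backslope falls into shadow
print(backslope_illumination(25, 15))  # radar shadow
print(backslope_illumination(10, 15))  # fully illuminated
```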
Radar Parallax:
Parallax is the apparent shift in an object's position due to an actual change in the viewpoint of observation. A change in viewpoint away from the radar's nominal flight path results in radar parallax, and stereo pairs of radar images are created using this parallax. Radar parallax arises when an object is viewed from two different flight lines, resulting in differential relief displacements (Figure 10.8a). Establishing stereoscopic vision is difficult with radar image pairs acquired in an opposite-side configuration, because the radar side-lighting is reversed between the two images. Therefore, radar stereo pairs are usually acquired by imaging an object from two flight lines at the same altitude using a same-side configuration, as shown in Figure 10.8b. In this case, the illumination direction and side-lighting effects are uniform in both images of the stereo pair. Varying the look angle or the flying height can also be used in a same-side configuration for capturing stereo pairs.
Figure 10.8: Radar parallax generation: (a) opposite-side configuration, (b) same-side configuration
Radar parallax is also used for making image measurements such as object heights. The parallax is calculated from mutual image-displacement measurements on the radar images that form the stereo model. This type of measurement on radar stereo pairs falls under radargrammetry. Radargrammetry works with the amplitude component of the radar images, in which two images are captured from the same side of the object but from different flight lines by varying the incidence angle.
Radar Speckle:
A random pattern of brighter and darker pixels, called speckle, is seen on all radar images. Radar pulses are coherent and oscillate in phase with each other. The pulses incident on a particular ground resolution cell, and backscattered from that cell, travel slightly different distances from the radar antenna to the terrain and back. The returning waves from a single resolution cell therefore vary in phase and may be in phase or out of phase with one another. Constructive interference amplifies the intensity of the combined signal when the returns are in phase, whereas destructive interference reduces the intensity when the returns are out of phase and partially cancel each other. These constructive and destructive interferences generate the random pattern of brighter and darker pixels in a radar image, giving it a grainy appearance. Radar speckle is also known as the salt-and-pepper effect; it is a type of noise that deteriorates image quality and makes visual and digital image interpretation more difficult.
The concept of radar speckle can be made clearer with Figure 10.9. Assume an image with a grid of 24 pixels containing a linear darker feature, with the remaining pixels in lighter tones. Radar speckle introduces pseudo-random noise into the resultant image, as shown in Figure 10.9 (b).
Speckle can be reduced by multi-look processing, in which several independent looks of the same scene are averaged, and by spatial filtering, in which neighbouring pixel values are averaged. Both spatial filtering and multi-look processing produce a smoothing effect on the image and reduce the speckle, but this happens at the expense of resolution. There is therefore a trade-off between the desired speckle reduction and the amount of spatial detail required. Multi-looking and spatial filtering should be avoided if high resolution is required, but speckle reduction may be applied if the requirement is broad interpretation and mapping.
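As a minimal illustration of this smoothing trade-off, the sketch below applies a simple boxcar (moving-average) filter to a small, speckle-like intensity image. The window size, the toy image, and the gamma-distributed multiplicative noise (often used to mimic multi-look speckle) are all assumptions for demonstration, not a prescribed speckle filter.

```python
import numpy as np

def boxcar_filter(intensity, window=3):
    # Average each pixel with its neighbours; larger windows suppress more speckle
    # but blur more spatial detail (the resolution trade-off noted above)
    pad = window // 2
    padded = np.pad(intensity, pad, mode="edge")
    out = np.zeros_like(intensity, dtype=float)
    rows, cols = intensity.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + window, j:j + window].mean()
    return out

# A toy grid similar to Figure 10.9: a dark linear feature in a brighter background,
# corrupted by multiplicative, gamma-distributed noise to imitate speckle
rng = np.random.default_rng(0)
img = np.full((4, 6), 200.0)
img[2, :] = 60.0
img *= rng.gamma(4.0, 1.0 / 4.0, img.shape)
print(boxcar_filter(img).round(0))   # smoother, but the dark line is partly blurred
```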
Motion errors:
Radar images are generated by plotting the returning echoes' power against time for a single pulse in one dimension; the range lines are then stacked in the other dimension to form the image. Any deviation of the platform from its ideal motion results in geometric distortions and pointing errors in the resulting image. The factors responsible for these non-ideal motions are speed variations, vertical or lateral deviations, and the roll, pitch, and yaw motions of the platform. Nonlinear image stretching or compression is most noticeable in the azimuth direction when there is a lack of synchronization between the platform speed and the pulse repetition frequency. Linear objects imaged near the flight line sometimes appear curved or sinuous, because curvilinear distortion results from deviation from the straight flight path (Figure 10.12). Roll, pitch, and yaw are the platform's rotations about the x, y, and z-directions (Figure 10.13). These effects are more prominent on airborne platforms than on spaceborne platforms, since the latter are affected less by atmospheric drag and winds.
The roll motion is about the x-direction; in radar images it affects the antenna gain at different points across the image, resulting in modulated grey scales. The pitch motion is about the y-direction; pitching causes the beam to intersect the ground either ahead of or behind the position directly to the side of the point beneath the aircraft. The yaw motion is about the z-direction; its effect depends on the displacement from the flight line and distorts the directions of different points relative to others. Excessive yaw can completely distort the image.
These motion errors can be compensated using inertial navigation systems (INS), which record the platform's speed, roll, pitch, and yaw. Depending on the availability of INS data, frequency shifting during processing, adjustment of pixel position and timing, and image rectification after image formation can be implemented to compensate for the motion errors.
Moving Targets:
An assumption implicitly embedded in the discussion so far is that the target remains stationary during image acquisition. However, moving objects such as cars and ships introduce an additional Doppler component proportional to the relative velocity between the object and the radar system. Hence, a moving object is displaced in the azimuth direction of the radar image in proportion to its relative velocity; this is why cars often appear to be travelling beside a road rather than on it. On ocean surfaces, the velocities are related to wave heights: the water follows a circular (orbital) motion in the direction of wave travel. Water moving towards the radar instrument sends back echoes that are Doppler-shifted to higher frequencies, and these areas are imaged by the SAR further along in the azimuth direction, while water moving away from the system is imaged backward. The appearance of waves in the range direction remains unaffected, while in the azimuth direction the waves become bunched up. This wave bunching depends on the incidence angle and on the relative wave and platform velocities.
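A commonly cited first-order approximation for the azimuth shift of a moving target is Δaz ≈ R · v_r / V, where R is the slant range, v_r the target's radial (range-direction) velocity, and V the platform velocity. The sketch below uses this approximation with assumed, illustrative numbers; it is not a formula stated in the text above.

```python
def azimuth_displacement_m(slant_range_m, radial_velocity_ms, platform_velocity_ms):
    # First-order approximation: delta_az ~ R * v_r / V (valid when v_r << V)
    return slant_range_m * radial_velocity_ms / platform_velocity_ms

# A car moving at ~20 m/s towards the radar, seen by a spaceborne SAR
# (assumed slant range ~850 km, platform speed ~7500 m/s)
print(round(azimuth_displacement_m(850_000, 20, 7_500)))  # ~2267 m shift along azimuth
```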
10.4 SUMMARY
This unit gives an insight into the geometrical distortions associated with a radar image. The different types of geometrical distortion studied in this unit are relief displacement (foreshortening and layover), slant-range scale distortion, radar shadows, and radar parallax. The various factors affecting these distortions are the incidence angle, depression angle, distance from the flight line (near/far range), and flying height. Apart from these common geometric distortions, a few other effects that influence the appearance of radar images are discussed in detail, namely radar speckle, range brightness variation, motion errors, and moving objects. The remedial measures used to compensate for these errors are also discussed.
10.5 GLOSSARY
Acronym Description
GPS Global Positioning System
GR Ground range
3. How do moving platforms introduce geometrical distortions in radar images?
10.7 REFERENCES
1. Campbell, J. B., & Wynne, R. H. (2011). Introduction to Remote Sensing. New York: The
Guilford Press.
2. Lillesand, T. M., Kiefer, R. W., & Chipman, J. W. (2015). Remote Sensing and Image
Interpretation (Seventh ed.). New York: John Wiley & Sons.
3. Moore, R. K. (1983). Imaging Radar Systems. In R. Colwell, Manual of Remote Sensing
(pp. 429-474). Bethesda: ASP&RS.
4. Tempfli, K., Kerle, N., Huurneman, G. C., & Janssen, L. F. (2009). Principles of Remote
Sensing. The Netherlands: ITC.
5. Ulaby, F. T., & Long, D. G. (2014). Microwave Radar and Radiometric Remote Sensing.
USA: University of Michigan Press.
6. Woodhouse, I. H. (2006). Introduction to Microwave Remote Sensing. Boca Raton, FL:
Taylor & Francis.
10.8 TERMINAL QUESTIONS
1. What is the difference between slant range and ground range? How can one convert slant-range distance to ground-range distance?
2. Explain relief displacement in radar images and how it differs from that in optical images.
3. What factors control foreshortening and layover in radar images?
11.1 OBJECTIVES
11.2 INTRODUCTION
11.3 INTERPRETING SAR IMAGES
11.4 SUMMARY
11.5 GLOSSARY
11.6 ANSWER TO CHECK YOUR PROGRESS
11.7 REFERENCES
11.8 TERMINAL QUESTIONS
11.1 OBJECTIVES
After reading this unit you will be able to understand:
the image interpretation keys, such as tone, texture, pattern, and shadow, used for interpreting synthetic aperture radar images.
the response of microwave energy to different terrain features such as oceans, soil, forests, and urban areas.
11.2 INTRODUCTION
The characteristics of radar images are fundamentally different from those of optical images. These distinctive characteristics result from the imaging radar technique and relate to speckle, texture, and geometry. Visual and digital analysis of radar images is complicated because the radar visualizes the scene differently from the human eye or an optical sensor. The microwave backscattering properties of the terrain correspond to the grey levels in a radar image. The backscattering intensity of the radar signal depends on several terrain, topographic, and geometrical parameters such as surface roughness, dielectric constant, and local slope. By contrast, an optical sensor operating in the visible/infrared region records the target response in terms of its colour, chemical composition, and temperature. The following sections describe various image interpretation keys, such as tone, texture, pattern, shadow, and shape and dimension, that are useful in synthetic aperture radar interpretation.
Tone:
The tone on a SAR magnitude image represents the energy backscattered towards the radar antenna after interaction with the Earth's surface. A single-band radar image is monochromatic, with tones varying from dark to bright. The digital values may be quantified according to any defined framework, but they represent the power received at the sensor. A radar image can be visualized using a relative set of tones. Generally, tones are described by a minimum of two categories, dark and bright; however, depending on the needs and the detail required, additional levels such as very dark, dark, intermediate, bright, and very bright can be used. Commonly, dark, intermediate, and bright tones are used to describe extensive portions of the image, while very dark and very bright tones are reserved for intermittently occurring features. Areas with coarser image texture are difficult to interpret because the predominant tone has to be assessed, whereas areas with little or no tonal variation are easier to interpret. Tone is an essential image interpretation element because it provides information on the nature of the surface and the spatial distribution of objects.
Image tones are primarily controlled by the type of backscattering, such as specular, diffuse, or corner reflection. Specular backscattering results from smooth surfaces, which direct the incident radar energy away from the antenna, creating darker tones. Diffuse backscattering depolarizes the incident radar energy and scatters part of it back to the antenna, creating a brighter tone, in accordance with the modified Rayleigh roughness criteria. Intermediate tones between dark and bright result from volume scattering and intermediately rough surfaces. Corner reflectors direct the incident energy back towards the antenna after at least two specular bounces, from a horizontal surface and a perpendicular facade, creating very bright tones. The varied tonal distribution over Sentinel-1 (C-band) data with VH polarization is shown in Figure 11.1, where darker tones represent water bodies (in the river and fields), while brighter tones correspond to urban features creating corner reflections. The agricultural fields show uniform intermediate grey tones.
Figure 11.1: Tone distribution on VH polarized C band Sentinel 1 Image over Ganhe
District China
Texture:
The spatial variation of tones defines the image texture. The spatial frequency, similarity, and contrast of the tones describe this interpretation element. Low-frequency tonal changes correspond to fine texture, while high-frequency tonal changes correspond to coarse texture. It should be kept in mind that image texture and surface texture are entirely different concepts: the former refers to the tonal variations seen during image interpretation, while the latter relates to the ground surface interacting with the incident radar energy. Fine texture indicates a similar kind of backscattering behaviour, either specular or diffuse, i.e., a likeness of tones. On the contrary, variations of tone among neighbouring pixels result in coarse texture. Agricultural fields and standing calm water have brighter and darker tones, respectively, yet both represent fine texture because, over a given area, they consistently backscatter the radar energy in a diffuse or specular manner. The radar beam's interaction with a patchy surface, or with objects comparable in size to the image's spatial resolution, creates intermediate textures. Areas such as closed forests, extensive bitumen surfaces, and grass lawns exhibit fine texture, while open forests and patchy residential areas show coarse textures (Figure 11.2). Texture proves beneficial when distinguishing objects is challenging due to coarser spatial resolution. Specular backscattering results in fine texture; therefore, an increase in spatial resolution does not enhance fine textures. However, an increase in spatial resolution can degrade the appearance of coarse texture arising from diffuse backscattering. Synthetic aperture radar imagery with prevalent foreshortening, radar shadow, and volume backscattering effects shows intermediate to coarser textures.
Pattern:
The repetitive arrangement of tonal variations expresses the pattern in synthetic aperture radar imagery. Pattern applies to objects that can be visually discriminated from each other and recognized. Radar imagery patterns are described as dotted, mottled, gridded, patchy, striped, or tiled. Like texture, the appearance of a pattern is also governed by spatial resolution; for example, on high-resolution imagery, regularly spaced trees appear as dots. Pattern assessment is easier where image tones contrast strongly. The spatial arrangement of all objects present side-by-side in the imagery should be considered when evaluating whether a feature presents a regular or irregular pattern. The spatial distribution of a pattern can be described as regular, clustered, or dispersed, and these can be assessed both visually and quantitatively. Figure 11.3 shows the unique circular pattern of floating phumdis seen in Loktak Lake, Manipur.
Figure 11.3: Pattern distribution on VH polarized C band Sentinel 1 Image over Loktak
Lake, Manipur
Shadows:
The angular relationship between the incident radar beam and the terrain slope facing away from it results in shadows. Radar shadows are formed when the depression angle is smaller than the backslope angle. Shadows are particularly useful in interpreting the terrain with respect to overlying objects such as buildings, trees, ridges, and valleys along the range direction, and they additionally provide information on the height of objects. However, one must keep in mind that shadowing increases from near to far range, since the incidence angle increases in that direction. Shadows hamper visual information gathering, but they are an essential interpretation element; therefore, the formation of shadows in radar images is controlled by carefully choosing the acquisition parameters. Large incidence angles enhance shadow occurrence, while low angles minimize shadows. Radar images captured with a significant amount of shadow contain information gaps that can be filled by acquiring data with alternate look angles. Figure 11.4 shows the appearance of shadows on a Sentinel-1 VH-polarized image: the slopes facing the radar beam have brighter returns, while those facing away from the radar beam have darker returns.
Figure 11.4: Shadow distribution on VH polarized C band Sentinel 1 Image over Srinagar,
Jammu and Kashmir
Figure 11.5: The image is displayed in ground range, and any measurements made on the image are true to the ground
Now that we have learned about the various radar image interpretation elements, let us understand the response of microwave energy to different terrain objects.
Ocean response:
The study of oceans and seas has matured rapidly compared with other applications of active microwave remote sensing. This is because the backscattering from these water surfaces is primarily controlled by surface roughness, which is fairly uniform across large areas. As discussed earlier, the dielectric constant of water is very high, which restricts the penetration of even the longer wavelengths into the water; the reflectivity of the surface is therefore very high, and the return is governed by the surface alone. The dielectric constant of water changes with salinity, but the change is too small to detect on radar imagery compared with changes in surface roughness. Local weather conditions most significantly affect the water surface. The wind's interaction with the water surface creates oscillations, resulting in short- and longer-wavelength waves on the surface called capillary and gravity waves. The wind vector is aligned perpendicular to the capillary and gravity waves, and their wavelength depends on wind speed. The motion of waves moving towards or away from the radar in the range direction is detected more easily on radar imagery than that of waves moving in the azimuth direction. The water surface is also roughened by the impact of rainfall, which can be visualized on radar images. The ocean surface's wave spectrum can be measured by satellite imaging radars operating in wave mode.
Figure 11.6: Presence of wave on ALOS PALSAR (L-Band) image (©JAXA/METI [2011].)
Active microwave systems can discriminate sea ice from open water on the basis of variations in their dielectric constants and surface roughness. Apart from the dielectric constant and surface roughness, the radar backscattering is also affected by other factors such as ice age, internal geometry, temperature, and snow cover. Ice thickness and ice type mapping are possible with the X and C bands (Figure 11.7). Ice extent mapping is easily achievable using L-band and coarse-resolution radar datasets; however, high-resolution data are used to prepare ice maps for merchant-ship navigation. The Radarsat satellite of Canada was the first dedicated radar satellite to release ice maps for aiding commercial ships in navigation. High-resolution radar imagery has also provided maps containing valuable information on the ice-shelf edges in Greenland and Antarctica. These ice sheets break off and end up in the sea, creating floating icebergs. Information on the ice sheets can help in addressing the response to global climate change.
Figure 11.7: Glacier visualised on VH-polarized Sentinel-1 data over the Kedarnath region
Soil response:
We have already discussed in an earlier unit that the dielectric constant has an established relationship with the moisture content of the soil's top layer. The dielectric constant of water is roughly ten times that of dry soil, and the presence of even slight moisture in the topsoil can be detected using longer wavelengths (Figure 11.8). SAR sensors are used to prepare high-resolution soil moisture maps, using the longer-wavelength bands, for precision agriculture. Medium-resolution SAR sensors provide vital inputs for climate modelling, weather forecasting, and hydrological studies. The sensitivity of radar signals to surface roughness makes it very difficult to develop robust soil moisture estimation models. Various studies have shown that the arid soils of deserts allow penetration of up to 2 metres when the L-band is used, making it easier to identify subsurface geological features; longer wavelengths can therefore be useful in identifying palaeochannels (ancient water channels). It should be kept in mind that the radar's capability to assess soil moisture depends on the amount of biomass present: if the biomass is less than about 1 Mg/ha (Wang et al., 1994), the radar can easily detect moisture, but as the biomass increases, the radar's ability to detect moisture decreases (Waring et al., 1995).
Figure 11.8: The water body, wetland, and moist soil of Keoladeo National Park appear in darker shades, while the vegetated part is shown in a brighter tone. (Source: Sentinel-1 C-band, VH polarised)
Forest response:
Forests exert local and global impacts on the environment, since they act as a carbon sink and influence the climate. Mapping of vegetation dynamics, climatic modelling, and conservation studies require accurate, repeated mapping of various forest parameters such as height, species, biomass, and structure. Studies related to global carbon budgeting and carbon storage assessment require high-resolution estimates of forest biomass. Imaging radars have proven their importance in forest studies because they exploit the correlation between radar backscattering and above-ground biomass. Differences in forest type, weather conditions, and ground parameters influence the backscattering; hence, a correlation established at one place and time is not always applicable to another location. The sensitivity to standing biomass, and the penetration into the forest canopy, increase with the use of longer wavelengths.
Figure 11.9 : Brighter tone from dense forest due to volume scattering (Source: Sentinel 1 C band
VH polarised)
Urban response:
Urban features such as buildings, cars, and fences act as corner reflectors, backscattering much of the incident energy to the antenna. The presence of these corner reflectors results in bright-toned areas in radar images. Corner reflections are maximal for buildings that face the direction from which the radar signal originates. This is known as the cardinal effect, because the urban-area reflections resemble the compass's cardinal directions: the larger returns arise when linear features are illuminated by the radar beam orthogonally to their orientation (Raney, 1998). Two identical urban areas constructed simultaneously, of the same extent and the same material, can appear different on radar imagery depending on the difference in their orientation, and the same urban area acquired on two dates with changed acquisition parameters, such as look direction, will also differ on radar imagery. Coarser-resolution radar sensors can be used for creating Level I land-use maps, but they cannot be used for Level II and Level III classification; high-resolution sensors are required for those levels.
Figure 11.10: Appearance of cardinal effect in urban areas of Suncity, Arizona on ALOS PALSAR
HH polarised (© JAXA/METI 2010)
11.4 SUMMARY
In this unit, the interpretation keys traditionally developed for aerial and satellite optical images are applied to synthetic aperture radar image interpretation. The different visual interpretation keys, namely tone, texture, shadow, pattern, and shape and dimension, are described for establishing evidence of different land use and land cover types. However, the interpretation keys established for optical data must be used with care, as their meanings change when applied to images acquired by synthetic aperture systems, which exploit the longer wavelengths of the electromagnetic spectrum and slant-range imaging. The underlying technical concepts used to describe the interpretation keys, together with image examples, will help readers familiarize themselves with the interpretation of radar imagery. To wrap up this unit, a few generalizations can be drawn: brighter radar returns are received from rough surfaces, metal surfaces, urban areas, and high-moisture areas, whereas diffuse reflectors result in weak to intermediate returns, and specular reflectors such as undisturbed water surfaces result in low returns and appear dark on the image.
11.5 GLOSSARY
Acronym Description
ALOS Advanced Land Observation Satellite
PALSAR Phased Array type L-band Synthetic Aperture Radar
Terms Description
Cardinal Effect: A tendency of a radar to produce very strong echoes from a city street pattern or other linear feature oriented perpendicular to the radar beam.
Corner Reflector: A combination of two or more intersecting specular surfaces that combine to enhance the signal reflected in the direction of the radar. The strongest reflection is obtained for materials having a high conductivity (i.e. ships, bridges).
Texture - Radar: Texture is generally referred to as the detailed spatial pattern of variability of the average reflectivity (tone). Image texture is produced by an assembly of features that are too small to be identified individually. SAR image texture is composed of a convolution of speckle with scene texture. Texture is an important radar image interpretation element; its statistical characterisation requires measurement from a finite sampling window rather than estimates from a single picture element (pixel). Texture edge detection, or segmentation, is a critical method used for accurate radar image classification. Radar image texture is apparent at various different scale levels. Micro-scale texture is the result of more or less homogeneous, noise-like and random fluctuations of light and dark tone throughout the entire image. Meso-scale texture is produced by spatially and not randomly organised fluctuations of grey tone on the order of several resolution cells, within an otherwise homogeneous unit. Examples of descriptive terms to characterise texture include rough (coarse), smooth (fine), grainy, checkered, or speckled.
Tone: Tone refers to each distinguishable grey level from black to white. It is the first-order spatial average of image brightness, often defined for a region of nominally constant average reflectivity, and is proportional to the strength of the radar backscatter. Relatively smooth targets, like calm water, appear as dark tones. Diffuse targets, like some vegetation, appear as intermediate tones. Man-made targets (buildings, ships) may produce bright tones, depending on their shape, orientation and/or constituent materials. Tone can be interpreted using a computer-assisted density slicing technique.
11.7 REFERENCES
1. Campbell, J. B., & Wynne, R. H. (2011). Introduction to Remote Sensing. New York: The
Guilford Press.
2. Lillesand, T. M., Kiefer, R. W., & Chipman, J. W. (2015). Remote Sensing and Image
Interpretation (Seventh ed.). New York: John Wiley & Sons.
3. Moore, R. K. (1983). Imaging Radar Systems. In R. Colwell, Manual of Remote Sensing (pp.
429-474). Bethesda: ASP&RS.
4. Raney, K. (1998). Radar Fundamentals: Technical Perspective. In F. Henderson, & A. Lewis,
Principles and Applications of Imaging Radar (pp. 42-43). NY: John Wiley.
5. Tempfli, K., Kerle, N., Huurneman, G. C., & Janssen, L. F. (2009). Principles of Remote
Sensing. The Netherlands: ITC.
6. Ulaby, F. T., & Long, D. G. (2014). Microwave Radar and Radiometric Remote Sensing.
USA: University of Michigan Press.
7. Wang, Y., Kasischke, E. S., Melack, J. M., Davis, F. W., & Christensen, N. L. (1994). The
Effects of Changes in Loblolly Pine Biomass and Soil Moisture on ERS-1 SAR
Backscatter. Remote Sensing of Environment, 49, 25-31.
8. Waring, R. H., Way, J., Hunt, E. R., Morrissey, L., Ranson, K. J., Weishampel, J. F., . . .
Franklin, S. E. (1995). Imaging Radar for Ecosystem Studies. BioScience, 45(10), 715-
723.
9. Woodhouse, I. H. (2006). Introduction to Microwave Remote Sensing. Boca Raton, FL: Taylor
& Francis.
11.8 TERMINAL QUESTIONS
1. What are the different interpretation keys that can be applied to synthetic aperture radar images?
2. How does radar energy respond to urban features? Explain the factors affecting this response.
3. How does radar energy respond to forest features? Explain the factors affecting this response.
12.1 OBJECTIVES
12.2 INTRODUCTION
12.3 INTRODUCTION TO DIGITAL IMAGE PROCESSING
12.4 SUMMARY
12.5 GLOSSARY
12.6 ANSWER TO CHECK YOUR PROGRESS
12.7 REFERENCES
12.8 TERMINAL QUESTIONS
12.1 OBJECTIVES
After reading this unit you will be able to understand:
Definitions of image, digital image, signal and digital image processing.
Types and formats of Image
Procurement of Digital Image
Preliminary concepts for image processing
Colour composites
12.2 INTRODUCTION
The visual/on-screen method of image interpretation has already been explained to you, along with its related sub-topics, namely types of remote sensing and remote sensors, preliminary concepts, criteria of image interpretation, image elements, advantages, and limitations. Here, under the topic 'basics of digital image processing', you should first try to understand the importance of a picture/image and why it is such a convenient means of conveying detailed, useful information, followed by signal processing and then the basics of digital image processing.
Pictures are the most common and convenient means of conveying or transmitting information; a picture is worth a thousand words. Pictures concisely convey information about the positions, sizes, and inter-relationships of objects. They portray spatial information that we can recognize as objects. Human beings are good at deriving information from such images because of their visual and mental abilities; about 75% of the information received by humans is in pictorial form.
Signal processing is a discipline in electrical engineering and mathematics that deals with the analysis and processing of analog and digital signals, including their storage, filtering, and other operations. These signals include transmission signals, sound or voice signals, image signals, and others. The field that deals with signals for which both the input and the output are images is image processing. As its name suggests, it deals with processing performed on images and can be divided into analog image processing and digital image processing.
In the present context, we consider the analysis of pictures that employ an overhead perspective, including radiation not visible to the human eye. Our discussion will therefore focus on the analysis of remotely sensed images, which are represented in digital form. When brightness is represented as numbers, it can be added, subtracted, multiplied, divided and, in general, subjected to statistical manipulations that are not possible if an image is presented only as a photograph.
Although digital analysis of remotely sensed data dates from the early days of remote sensing,
the launch of the first Landsat earth observation satellite in 1972 began an era of increasing
interest in machine processing. Previously, digital remote sensing data could be analysed only at
specialized remote sensing laboratories. Specialized equipment and trained personnel necessary
to conduct routine machine analysis of data were not widely available, in part because of limited
availability of digital remote sensing data and a lack of appreciation of their qualities.
DEFINITIONS:
Image:
An image is defined as a two-dimensional function, F(x, y), where x and y are spatial
coordinates, and the amplitude of F at any pair of coordinates (x, y) is called
the intensity of that image at that point.
Digital Image:
A digital image is composed of a finite number of elements, each of which has a particular value at a particular location. These elements are referred to as picture elements, image elements, or pixels; 'pixel' is the term most widely used to denote the elements of a digital image.
Digital image processing means processing a digital image by means of a digital computer. We can also say that it is the use of computer algorithms in order to obtain an enhanced image or to extract some useful information from it.
Signal:
In this context, an image is treated as a two-dimensional signal. Digital image processing (DIP) deals with the manipulation of digital images through a digital computer; it is a subfield of signals and systems but focuses particularly on images.
Types of an Image:
i. Binary Image: A binary image, as its name suggests, contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. This image is also known as a monochrome image.
ii. Black and White Image: An image that consists only of black and white shades is called a black and white image. This is an analogue image without any digital values; such images or photographs are generally referred to as aerial or terrestrial photographs.
iii. 8-bit Image: This is a medium radiometric resolution image type. It has 256 (2^8) different shades of grey between black and white and is commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for white, and 127 stands for mid-grey. In remote sensing this is called 8-bit radiometric resolution data.
iv. 16-bit Image: This is a high radiometric resolution type. It has 65,536 (2^16) different shades or colours and is also known as the High Color format. In this type the distribution of colour is not the same as in a grayscale image; a 16-bit format is commonly divided into three further channels, Red, Green and Blue, which is the famous RGB format.
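The difference between 8-bit and 16-bit radiometric resolution can be illustrated with a small sketch using array data types; the array sizes below are arbitrary.

```python
import numpy as np

# An 8-bit image stores 2**8 = 256 grey levels (0-255); a 16-bit image stores 2**16 = 65,536
img8 = np.zeros((100, 100), dtype=np.uint8)
img16 = np.zeros((100, 100), dtype=np.uint16)

print(np.iinfo(np.uint8).max, 2 ** 8)     # 255 256
print(np.iinfo(np.uint16).max, 2 ** 16)   # 65535 65536

# A binary image needs only the two values 0 (black) and 1 (white)
binary = (img8 > 127).astype(np.uint8)
```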
Image Formats:
TIFF stands for Tagged Image File Format. TIFF images create very large file sizes. TIFF
images are uncompressed and thus contain a lot of detailed image data (which is why the files
are so big). TIFFs are also extremely flexible in terms of color (they can be grayscale, or CMYK
for print, or RGB for web) and content (layers, image tags).
TIFF is the most common file format used in photo-editing software (such as Photoshop), as well as in page layout software (such as Quark and InDesign).
JPEG stands for Joint Photographic Experts Group. It is a standard type of image format in which the image data are compressed to store a lot of information in a small file. Most digital cameras store photos in JPEG format so that more photos fit on one memory card. Some detail is likely to be lost when compressing a JPEG image, which is why it is called a "lossy" compression.
JPEG files are usually used for photographs on the web, because they create a small file that is
easily loaded on a web page and also looks good.
JPEG files are bad for line drawings or logos or graphics, as the compression makes them look
“bitmappy” (jagged lines instead of straight ones).
GIF stands for Graphic Interchange Format. This format compresses images but, unlike JPEG, the compression is lossless (no detail is lost in the compression, though the file cannot be made as small as a JPEG).
GIFs also have an extremely limited color range suitable for the web but not for printing. This
format is never used for photography, because of the limited number of colors. GIFs can also be
used for animations.
PNG stands for Portable Network Graphics. It was created as an open format to replace GIF,
because the patent for GIF was owned by one company and nobody else wanted to pay licensing
fees. It also allows for a full range of color and better compression.
It’s used almost exclusively for web images, never for print images. For photographs, PNG is not
as good as JPEG, because it creates a larger file. But for images with some text, or line art, it’s
better, because the images look less “bitmappy.”
When you take a screenshot on your Mac, the resulting image is a PNG–probably because most
screenshots are a mix of images and text.
Raw image files contain data from a digital camera (usually). The files are called raw because
they haven’t been processed and therefore can’t be edited or printed yet. There are a lot of
different raw formats–each camera company often has its own proprietary format.
Raw files usually contain a vast amount of data that is uncompressed. Because of this, the size of
a raw file is extremely large. Remote sensing data obtained from different sensors systems are
stored in raw image files. Usually they are converted to TIFF before editing and color-
correcting.
Image data are in raster formats, stored in a rectangular matrix of rows and columns.
Radiometric resolution determines how many gradations of brightness can be stored for each cell
(pixel) in the matrix; 8-bit resolution, where each pixel contains an integer value from 0 to 255,
is most common. Modern sensors often collect data at higher bit depth (e.g. 16-bit for Landsat-
8), and advanced image processing software can make use of these values for analysis. The
human eye cannot detect very small differences in brightness, and most GIS software stretch data
for an 8-bit display.
In a grey-scale image, 0 = black and 255 = white, and there is just one 8-bit value for each pixel. In a natural colour image, however, there is an 8-bit brightness value for each of red, green, and blue, so each pixel requires three separate values to be stored in the file.
The remote sensing industry and those associated with it have attempted to standardize the way
digital remote sensing data are formatted in order to make the exchange of data easier and to
standardize the way data can be read into different image analysis systems. The Committee on
Earth Observing Satellites (CEOS) has specified this format which is widely used around the
world for recording and exchanging data.
Following are the three possible ways/formats to organize these values in a raster file.
BIP - Band Interleaved by Pixel: The red value for the first pixel is written to the file,
followed by the green value for that pixel, followed by the blue value for that pixel, and so
on for all the pixels in the image.
BIL - Band Interleaved by Line: All of the red values for the first row of pixels are written to
the file, followed by all of the green values for that row followed by all the blue values for
that row, and so on for every row of pixels in the image.
BSQ - Band Sequential: All of the red values for the entire image are written to the file,
followed by all of the green values for the entire image, followed by all the blue values for
the entire image.
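The three interleaving schemes can be illustrated with a small sketch that rearranges a toy three-band image; the band and pixel values are arbitrary and only show the order in which values would be written to the file.

```python
import numpy as np

# Toy 3-band image, 2 rows x 4 columns, held band-sequentially as (band, row, col)
bsq = np.arange(3 * 2 * 4).reshape(3, 2, 4)

bsq_stream = bsq.ravel()                     # BSQ: all of band 1, then band 2, then band 3
bil_stream = bsq.transpose(1, 0, 2).ravel()  # BIL: for each row, the band-1, band-2, band-3 lines
bip_stream = bsq.transpose(1, 2, 0).ravel()  # BIP: for each pixel, its band-1, band-2, band-3 values

print(bsq_stream[:8])   # [0 1 2 3 4 5 6 7]               -> both rows of band 1 first
print(bil_stream[:12])  # [0 1 2 3 8 9 10 11 16 17 18 19] -> row 1 of each band in turn
print(bip_stream[:6])   # [0 8 16 1 9 17]                  -> pixel (0,0) in all bands, then pixel (0,1)
```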
Ortho images are delivered in a variety of image formats, either compressed or uncompressed.
The most common are TIF and JPG. Compression eases data management challenges, as large
high-resolution Ortho-photo projects can easily result in terabytes of uncompressed imagery.
Compression can also speed display in GIS systems. The downside is that compression can
introduce artifacts and change pixel values, possibly hampering interpretation and analysis,
particularly with respect to fine detail. The decision to compress should be driven by end-user
requirements; it is not uncommon to deliver a set of uncompressed imagery for archival and
special applications along with a set of compressed imagery for easy use by large numbers of
users. If there is an intention for web-based display or distribution of Ortho-imagery, a
compressed set of Ortho-imagery is often recommended. In any event, geo-referencing
information must also be provided. Both TIFF and JPG image formats can accommodate geo-
referencing information, either embedded in the image file itself, as in the case of Geo TIFF, or
as a separate file for each image, as in the case of TIFF with a TFW (TIF World) file. The geo-
referencing information tells GIS software i) the size of a pixel, ii) where to place one corner of
the image in the real world, and iii) whether the image is rotated with respect to the ground
coordinate system.
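As an illustration of the separate-file approach, a TFW world file is simply six numbers in a fixed order. The sketch below writes one for a north-up (unrotated) image; the file name, pixel size, and coordinates are hypothetical values chosen for demonstration.

```python
def write_world_file(path, pixel_size, upper_left_x, upper_left_y):
    # Six lines of a world file: x pixel size, two rotation terms, negative y pixel size,
    # then the x and y map coordinates of the centre of the upper-left pixel
    lines = [pixel_size, 0.0, 0.0, -pixel_size, upper_left_x, upper_left_y]
    with open(path, "w") as f:
        f.write("\n".join(f"{v:.6f}" for v in lines) + "\n")

# e.g. a 0.5 m ortho tile whose upper-left pixel centre sits at (435000.25, 3098999.75)
write_world_file("ortho_tile.tfw", 0.5, 435000.25, 3098999.75)
```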
Remote sensing data from foreign countries and Indian Remote Sensing (IRS) data is made
available by NDC (National Data Centre), NRSC to all users for various developmental and
application requirements as per RSDP policy. The following are the guidelines for data
dissemination.
Ordering Procedure:
The User Order Processing System (UOPS) is an online web application through which users specify their area and period of interest along with the sensor and product selection. The user's area of interest (AOI) may be specified in the form of a point, polygon, draw-on-map, location name, map sheet, or shapefile. For shapefile-based AOI specification, the input shapefile should be in an ESRI-compatible format (.shx, .shp, .dbf and .prj), and the distance between the vertices of the shapefile must be a minimum of 5 km. The minimum order is 1 scene for the corresponding sensor.
Based on the proforma invoice generated through UOPS, the user can transfer 100% advance payment to NRSC, along with 18% GST (as per the applicable guidelines), through NEFT online transfer or a demand draft in favour of "Pay and Accounts Officer, NRSC" payable at Hyderabad.
All data products will be disseminated to users as per the Remote Sensing Data Policy and the guidelines provided by the Government of India. An order will be entertained only when the required information is furnished in full and payment is made. Data orders are to be placed through UOPS along with the required undertakings and certificates. Please ensure the correct product type is chosen; for further details, please contact NDC. An order once processed and confirmed cannot be amended or cancelled unless technical problems are encountered during data generation. NDC reserves the right to refuse or cancel any order in full or in part.
Discounts:
50% discount on respective user category pricing for archived data older than 2 years from the
date of acquisition.
50% discount for the data less than two years old for academic and research purpose.
@ 3% for orders more than Rs. 10.0 lakhs
@ 5% for orders more than Rs. 25.0 lakhs
@ 10% for orders more than Rs. 1.00 crore placed at a time for IRS products.
Priority Services: The provision for supply of satellite data within 24 hours (1 day) with
additional charge @ 50% is available. Priority orders must be received at NDC before 11 AM on
a working day. In case the order is accepted and could not be shipped within 16 hours, no
additional charges will be levied.
Licensing:
NRSC grants only single user license for the use of IRS images. All products are sold for the sole
use of purchasers and shall not be loaned, copied or exported without express permission of and
only in accordance with terms and conditions if any, agreed with the NRSC Data Centre,
National Remote Sensing Centre, ISRO, Dept. of Space, and Govt. of India. All data will be
provided with encryption mechanisms, which may corrupt the data if unauthorized copying is carried out or attempted. Every such attempt shall attract criminal and civil liability on the part of the user, without prejudice to the corruption of data or software/hardware, for which NRSC will not be liable. NRSC grants the user a limited, non-exclusive, non-transferable license with the following terms and conditions.
User can install the product in his premises (including on an internal computer network) with
the express exclusion of the Internet.
User can make copies of the product (for installation and back-up purposes)
User can use the product for his own internal needs
User can use the product to produce Value Added Products and/or derivative works
User can use any Value Added Product for his own internal needs
User can make the product and/or any Value Added Product temporarily available to
contractors and consultants, only for use on behalf of the user
User can print and distribute or post on an Internet site, but with no possibility of
downloading, an extract of a product or Value Added Product (maximum size 1024 x 1024
pixels) for promotional purposes (not including on-line mapping or geolocation services for
on-line promotion), in each case with an appropriate credit conspicuously displayed.
This Limited Warranty is void if any non-conformity has resulted from accident, abuse, misuse,
misapplication, or modification by someone other than NRSC. The Limited Warranty is for
user's benefit only, and is non-transferable. NRSC is not liable for any incidental or
consequential damages associated with users’ possession and/or use of the Product.
In case any dispute arises on the applicability or interpretation of the above terms and
conditions between the NRSC and the user, the matter shall be referred to the Secretary,
Department of Space, Govt. of India, whose decision shall be binding on both parties.
Based on the policies and rules of data procurement and dissemination, you should know the following before ordering data, and you should mention the correct specifications on the data order proforma.
Satellite platform/mission
Sensor characteristics (film types, digital systems)
Season of the year and time of the day
Atmospheric effects
Resolution of the imaging system and scale
Image motion
Stereoscopic parallax (in case of aerial photographs or sensor type providing stereo
image)
Exposure and processing
Precision standard of data
Type of data viz., analogue or digital
Number of spectral bands or FCC and
The type of digital data format
A digital image is a representation of a real image as a set of numbers that can be stored and
handled by a digital computer. In order to translate the image into numbers, it is divided into
small areas called pixels (picture elements). For each pixel, the imaging device records a
number, or a small set of numbers, that describe some property of this pixel, such as its
brightness (the intensity of the light) or its color. The numbers are arranged in an array of rows
and columns that correspond to the vertical and horizontal positions of the pixels in the image.
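A minimal sketch of this idea, with an arbitrary 3 x 4 grid of brightness values, is shown below.

```python
import numpy as np

# A digital image as rows and columns of pixel values; each number is one pixel's brightness
image = np.array([
    [ 12,  40,  41,  39],
    [ 13, 200, 201,  38],
    [ 11, 198, 199,  37],
], dtype=np.uint8)

rows, cols = image.shape
print(rows, cols)      # 3 4
print(image[1, 2])     # brightness stored for the pixel in row 1, column 2 -> 201

# A colour image stores a small set of numbers (e.g. R, G, B) for each pixel
rgb = np.zeros((3, 4, 3), dtype=np.uint8)
rgb[1, 2] = (255, 0, 0)   # a pure-red pixel
```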
Digital images have several basic characteristics. One is the type of the image. For example, a
black and white image records only the intensity of the light falling on the pixels. A color image
can have three colors, normally RGB (Red, Green, Blue) or four colors, CMYK (Cyan, Magenta,
Yellow, black). RGB images are usually used in computer monitors and scanners, while CMYK
images are used in color printers. There are also non-optical images such as ultrasound or X-ray
in which the intensity of sound or X-rays is recorded. In range images, the distance of the pixel
from the observer is recorded. Resolution is expressed in the number of pixels per inch (ppi). A
higher resolution gives a more detailed image. A computer monitor typically has a resolution of
100 ppi, while a printer has a resolution ranging from 300 ppi to more than 1440 ppi. This is why
an image looks much better in print than on a monitor.
To recognize an object, the computer has to compare the image to a database of objects in its
memory. This is a simple task for humans but it has proven to be very difficult to do
automatically. One reason is that an object rarely produces the same image of itself. An object
can be seen from many different viewpoints and under different lighting conditions, and each
such variation will produce an image that looks different to the computer. The object itself can
also change; for instance, a smiling face looks different from a serious face of the same person.
Because of these difficulties, research in this field has been rather slow, but there are already
successes in limited areas such as inspection of products on assembly lines, fingerprint
identification by the FBI, and optical character recognition (OCR). OCR is now used by the U.S.
Postal Service to read printed addresses and automatically direct the letters to their destination,
and by scanning software to convert printed text to computer readable text.
Another advantage of digital images over traditional ones is the ability to transfer them
electronically almost instantaneously and convert them easily from one medium to another such
as from a web page to a computer screen to a printer. A bigger advantage is the ability to change
them according to one's needs. There are several programs available now which give a user the
ability to do that, including Photoshop, Photo paint, and the Gimp. With such a program, a user
can change the colors and brightness of an image, delete unwanted visible objects, move others,
and merge objects from several images, among many other operations. In this way a user can
retouch family photos or even create new images. Other software, such as word processors
and desktop publishing programs, can easily combine digital images with text to produce books
or magazines much more efficiently than with traditional methods.
While processing data digitally, we need a lot of storage space and powerful computers to
analyse the data from today's remote sensing systems. The following example highlights the
storage space required for digital image processing:
One 8-bit pixel takes up one single byte of computer disk space. One kilobyte (Kb) is 1024
bytes. One megabyte (Mb) is 1024 kilobytes. How many megabytes of computer disk space
would be required to store an 8-bit Landsat Thematic Mapper (TM) image (7 bands), which is
6000 pixels by 6000 lines in dimension?
If we have seven bands of TM data, each 6000 pixels by 6000 lines, and each pixel takes up one
byte of disk space, we have:
7 x 6000 x 6000 = 252,000,000 bytes of data
To convert this to kilobytes we need to divide by 1024, and to convert that answer to megabytes
we need to divide by 1024 again!
252,000,000 / (1024 x 1024) = 240.33 megabytes
So, we would need over 240 megabytes of disk space just to hold one full TM image, let alone
analyze the imagery and create any new image variations! Needless to say, it takes a lot of
storage space and powerful computers to analyze the data from today's remote sensing systems.
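The same arithmetic can be checked with a few lines of code (a Python sketch added for illustration; the figures are the ones used in the example above):

    bands, pixels, lines = 7, 6000, 6000        # Landsat TM scene with 8-bit (1-byte) pixels
    total_bytes = bands * pixels * lines        # 252,000,000 bytes
    megabytes = total_bytes / (1024 * 1024)     # divide by 1024 twice: bytes -> KB -> MB
    print(round(megabytes, 2))                  # 240.33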
Digital images tend to produce big files and are often compressed to make the files
smaller. Compression takes advantage of the fact that many nearby pixels in the image have
similar colors or brightness. Instead of recording each pixel separately, one can record that, for
example, "the 100 pixels around a certain position are all white." Compression methods vary in
their efficiency and speed. The GIF method has good compression for 8 bit pictures, while
the JPEG is lossy, i.e. it causes some image degradation. JPEG's advantage is speed, so it is
suitable for motion pictures.
In today's world of advanced technology where most remote sensing data are recorded in digital
format, virtually all image interpretation and analysis involves some element of digital
processing. Digital image processing may involve numerous procedures including formatting
and correcting of the data, digital enhancement to facilitate better visual interpretation, or even
automated classification of targets and features entirely by computer.
A very promising use of digital images is automatic object recognition. In this application, a
computer can automatically recognize an object shown in the image and identify it by name. One
of the most important uses of this is in robotics. A robot can be equipped with digital cameras
that can serve as its "eyes" and produce images. If the robot could recognize an object in these
images, then it could make use of it. For instance, in a factory environment, the robot could use a
screwdriver in the assembly of products. For this task, it has to recognize both the screwdriver
and the various parts of the product. At home a robot could recognize objects to be cleaned.
Other promising applications are in medicine, for example, in finding tumors in X-ray images.
Security equipment could recognize the faces of people approaching a building. Automated
drivers could drive a car without human intervention or drive a vehicle in inhospitable
environments such as on the planet Mars or in a battlefield.
A digital image processing system consists of computer hardware and image processing software
necessary to analyse digital image data. DIP focuses on developing a computer system that is
able to perform processing on an image. The input of such a system is a digital image; the
system processes that image using efficient algorithms and gives an image as its output. The most
common example is Adobe Photoshop, one of the most widely used applications for processing
digital images. Image processing mainly includes three steps: i) importing the image via image
acquisition tools, ii) analyzing and manipulating the image, and iii) output, in which the result can be an
altered image or a report based on the analysis of that image.
Digital Image Processing is an extremely broad subject and involves procedures which are
mathematically complex. For discussion purposes, most of the common image processing functions
available in image analysis systems can be grouped into the following four categories:
Preprocessing
Image Enhancement
Image Transformation
Image Classification and Analysis
COLOUR COMPOSITES:
A colour composite provides the maximum amount of spectral variability by assigning different
colours to the spectral bands chosen for the composite. High spectral resolution is important
when producing colour composites. For a true colour composite, image data acquired in the red,
green and blue spectral regions must be assigned to the red, green and blue planes of the image
processor's frame buffer memory. A colour-infrared composite, or 'standard false colour
composite', is displayed by placing the near-infrared, red and green bands into the red, green and
blue frame buffer memory, respectively (Fig. 12.2). In such a composite, healthy vegetation shows up
in shades of red because vegetation absorbs most of the green and red energy but reflects
approximately half of the incident infrared energy. Urban areas reflect roughly equal proportions of
NIR, red and green energy and therefore appear steel grey.
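Such a standard false colour composite is produced by stacking three co-registered bands into the red, green and blue display planes. The sketch below (Python with NumPy, written for illustration; the random arrays merely stand in for real NIR, red and green bands) shows the band-to-colour assignment:

    import numpy as np

    def false_colour_composite(nir, red, green):
        # Standard FCC: NIR -> red plane, red band -> green plane, green band -> blue plane
        return np.dstack([nir, red, green]).astype(np.uint8)

    # Placeholder 100 x 100 bands; in practice these are co-registered sensor bands
    nir   = np.random.randint(0, 256, (100, 100))
    red   = np.random.randint(0, 256, (100, 100))
    green = np.random.randint(0, 256, (100, 100))

    fcc = false_colour_composite(nir, red, green)
    print(fcc.shape)   # (100, 100, 3): ready for display; healthy vegetation would appear red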
The color depth (of a color image) or "bits per pixel" is the number of bits in the numbers that
describe the brightness or the color. More bits make it possible to record more shades of gray or
more colors. For example, an RGB image with 8 bits per color has a total of 24 bits per pixel
("true color"). Each bit can represent two possible values, so 24 bits give 2^24 = 16,777,216 possible
colors. A typical GIF image on a web page has 8 bits for all colors combined, for a total of 256
colors. However, such a file is much smaller than a 24-bit one, so it downloads more quickly. A
fax image has only one bit or two "colors," black and white. The format of the image gives more
details about how the numbers are arranged in the image file, including what kind of
compression is used, if any.
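The number of representable colours follows directly from the colour depth; a one-line check (Python, illustrative only):

    bits_per_channel, channels = 8, 3
    print(2 ** (bits_per_channel * channels))   # 16,777,216 colours for 24-bit "true colour"
    print(2 ** 8)                               # 256 colours for an 8-bit GIF palette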
12.4 SUMMARY
Digital Image Processing is defined as the processing of digital data/images by means of a digital
computer. It uses computer algorithms to enhance the image (spectrally and radiometrically) and to
improve its interpretability, so that meaningful and useful information/results can be extracted. Image
processing mainly includes importing the image via image acquisition tools, analysing and
manipulating the image, and producing output in which the results are seen as altered images or a
report based on the analysis of that image. An image is defined by a two-dimensional array
arranged in rows and columns.
Digital image procurement is an important task for all individual users and offices where this
technique is used. Remote sensing digital images/data from foreign satellites and Indian Remote
Sensing (IRS) data are made available by the NDC (National Data Centre), NRSC, to all users for
various developmental and application requirements as per the RSDP policy. For this, certain
guidelines and ordering procedures are followed.
Remote sensing data received from satellites, particularly sun-synchronous satellites, contain many
distortions. Digital image processing therefore requires correction and rectification of the digital
image/data for those errors through digital processes using specific computer software.
Digital image processing is an extremely broad subject and involves procedures which are
mathematically complex. The procedures for digital image processing are categorized as creation of
false colour composites, preprocessing, image enhancement, image transformation, and image
classification and analysis.
12.5 GLOSSARY
Calibration- The comparison of instrument performance to a standard of higher accuracy. The
standard is considered the reference and the more correct measure.
CFA- Colour filter array - Digital image sensors used in scanners and digital cameras do not
respond in a manner that differentiates colour. The sensors respond to the intensity of light: the
pixel that receives greater intensity produces a stronger signal. A colour filter array (CFA) is a
mosaic of colour filters (generally red, green and blue) that overlays the pixels comprising the
sensor. The colour filters limit the intensity of light being recorded at the pixel to be associated
with the wavelengths transmitted by that colour. A demosaicing algorithm is able to take the
information about the spectral characteristics of each color of a filter array, and the intensity of
the signal at each pixel location to create a color encoded digital image.
File format - Set of structural conventions that define a wrapper, formatted data, and embedded
metadata, and that can be followed to represent images, audiovisual waveforms, texts, etc., in a
digital object. The wrapper component on its own is often colloquially called a file format. The
formatted data may consist of one or more encoded binary bit streams for such entities as images
or waveforms.
Bit depth (image) - The number of bits used to represent each pixel in an image. The term can
be confusing since it is sometimes used to represent bits per pixel and at other times, the total
number of bits used multiplied by the number of total channels. For example, a typical color
image using 8 bits per channel is often referred to as a 24-bit colour image (8 bits x 3 channels).
Colour scanners and digital cameras typically produce 24 bit (8 bits x 3 channels) images or 36
bit (12 bits x 3 channels) capture, and high-end devices can produce 48 bit (16 bit x 3 channels)
images. A grayscale scanner would generally be 1 bit for monochrome or 8 bits for grayscale
(producing 256 shades of gray). Bit depth is also referred to as colour depth.
Brightness - The attribute of the visual sensation that describes the perceived intensity of light.
Brightness is among the three attributes that specify color. The other two attributes are hue and
saturation.
12.7 REFERENCES
1. Baxes, Gregory H. Digital Image Processing: Principles and Applications. New York: John
Wiley and Sons, 1994.
2. Davies, Adrian. The Digital Imaging A-Z. Boston: Focal Press, 1998.
3. http://web.ipac.caltech.edu/staff/fmasci/home/astro_refs/Digital_Image_Processing_2ndEd.pdf
12.8 TERMINAL QUESTIONS
4) What are the guidelines and procedures for the procurement of digital images?
5) Why and how will you create a false colour composite (FCC)?
13.1 OBJECTIVES
13.2 INTRODUCTION
13.3 PREPROCESSING, IMAGE REGISTRATION & IMAGE
ENHANCEMENT TECHNIQUES
13.4 SUMMARY
13.5 GLOSSARY
13.6 ANSWER TO CHECK YOUR PROGRESS
13.7 REFERENCES
13.8 TERMINAL QUESTIONS
13.1 OBJECTIVES
After reading this unit you will be able to understand:
Preprocessing Techniques
Image Registration
Digital Image Enhancement techniques
13.2 INTRODUCTION
In the previous unit you have learnt about the basics of digital image processing and under its
sub-heads you got the concepts, definitions, need and scope of digital image, and digital image
processing; types and formats of digital Image; procurement of digital Image; preliminary
concepts for image processing and color composites. But that unit was related to basics of digital
image processing only. This unit includes the details of image processing with respect to
preprocessing, digital image registration and enhancement techniques.
Both visual interpretation of on-screen or analogue (paper print) data and digital processing of
remotely sensed images have their own roles, depending on the objectives and the available
infrastructure, computer hardware and software. It is also to be noted that many image
processing and analysis techniques have been developed to aid the interpretation of remote
sensing images and to extract as much information as possible from the images. The choice of
specific techniques or algorithms to use depends on the goals of each individual project. In this
section, we will examine some procedures commonly used in analyzing/interpreting remote
sensing images.
The preprocessing of remote sensing data is a crucial step in the remote sensing analytical
workflow and is often the most time consuming and costly. Examples of preprocessing tasks
include geometrically correcting imagery to improve positional accuracy, compressing
imagery to save disk space, converting lidar point cloud data to raster models to speed up
rendering in GIS systems, and correcting for atmospheric effects to improve the spectral qualities
of an image.
Prior to data analysis, initial processing on the raw data is usually carried out to correct for any
distortion due to the characteristics of the imaging system and imaging conditions. Depending on
the user's requirement, some standard correction procedures may be carried out by the ground
station operators before the data is delivered to the end-user. These procedures
include radiometric correction to correct for uneven sensor response over the whole image
and geometric correction to correct for geometric distortion due to Earth's rotation and other
imaging conditions (such as oblique viewing). The image may also be transformed to conform to
a specific map projection system. Furthermore, if accurate geographical location of an area on
the image needs to be known, ground control points (GCP's) are used to register the image to a
precise map (geo-referencing).
For registration of remote sensing images, image processing tools provide support for point
mapping to determine the parameters of the transformation required to bring an image into
alignment with another image. In point mapping, you pick points in a pair of images that
identify the same feature or landmark in the images. Then, a geometric mapping is inferred from
the positions of these control points. There are many other methods of registration described in
this unit. Those methods are based on certain tasks and objectives to be fulfilled.
Keeping in view the contents and sub-topics to be explained under this unit, the key terms used
are first defined below.
DEFINITIONS:
Preprocessing - Preprocessing of a digital image is the use of a computer to ingest the digital
data and make it fit, through computer algorithms, for detailed image processing.
Preprocessing describes the methods used to prepare images for further analysis, including
interest point and feature extraction.
Image Registration:
Image registration is the process of transforming different sets of data into one coordinate
system.
Image registration is an image processing technique used to align multiple scenes into a
single integrated image.
Image registration is defined as a process that overlays two or more images of the same scene,
taken at different times, from different viewpoints and/or by different sensors, in order to
geometrically align the images for analysis (Zitova and Flusser, 2003).
Image Enhancement:
Image enhancement refers to improving the appearance of the imagery to assist in visual
interpretation and analysis.
Image Enhancement involves techniques for increasing the visual distinction between
features in a scene.
The objective of image enhancement is to create new images from original data in order to
increase the amount of information that can be visually interpreted from the data.
PREPROCESSING TECHNIQUES:
Intelligent use of image pre-processing can provide benefits and solve problems that ultimately
lead to better local and global feature detection. Generally the following steps are taken while
preprocessing the digital data:
Reading Image:
The path to an image dataset is stored in a variable, and a function is then created to load the
folders containing the images into arrays.
Resizing Image:
In this step, in order to visualize the change, we create two functions to display the images: the
first displays a single image and the second displays two images side by side. After that, a function
called 'processing' is created that receives the images as a parameter. Why do we resize the images
during the pre-processing phase? Images captured by cameras or scanners and fed to a computer
algorithm vary in size; therefore, we should establish a base size for all images fed into the AI
(Artificial Intelligence) algorithm.
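A minimal sketch of these two steps, reading a folder of images and resizing them to a common base size, is given below. It assumes Python with the Pillow and NumPy libraries; the folder path and the 256 x 256 base size are illustrative choices only:

    import os
    import numpy as np
    from PIL import Image

    def load_images(folder, size=(256, 256)):
        # Load every image in the folder, resize it to the common base size
        # and return the images as a list of NumPy arrays
        arrays = []
        for name in sorted(os.listdir(folder)):
            with Image.open(os.path.join(folder, name)) as img:
                arrays.append(np.asarray(img.resize(size)))
        return arrays

    # images = load_images("data/scenes")   # "data/scenes" is a hypothetical dataset folder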
Preprocessing functions involve those operations that are normally required prior to the main
data analysis and extraction of information, and are generally grouped as radiometric or
geometric corrections. Radiometric corrections include correcting the data for sensor
irregularities and unwanted sensor or atmospheric noise, and converting the data so they
accurately represent the reflected or emitted radiation measured by the sensor.
Preprocessing also includes Image Rectification. These operations aim to correct distorted or
degraded image data to create a faithful representation of the original scene. This typically
involves the initial processing of raw image data to correct for geometric distortion, to calibrate
the data radiometrically and to eliminate noise present in the data. Image rectification and
restoration procedures are often termed pre-processing operations because they normally precede
manipulation and analysis of image data. Geometric corrections include correcting for geometric
distortions due to sensor-Earth geometry variations, and conversion of the data to real world
coordinates (e.g. latitude and longitude) on the Earth's surface.
IMAGE REGISTRATION:
Image registration is the process of aligning two or more images of the same scene. This process
involves designating one image as the reference image, also called the fixed image, and applying
geometric transformations or local displacements to the other images so that they align with the
reference. Images can be misaligned for a variety of reasons. Commonly, images are captured
under variable conditions that can change the camera perspective or the content of the scene.
Misalignment can also result from lens and sensor distortions or differences between capture
devices.
Image registration is often used as a preliminary step in image processing applications. For
example, you can use image registration to align satellite images or medical images captured
with different diagnostic modalities, such as MRI and SPECT. Image registration enables you to
compare common features in different images. For example, you might discover how a river has
migrated, how an area became flooded, or whether a tumor is visible in an MRI or SPECT
image.
Image Processing Toolbox offers three image registration approaches: an interactive Registration
Estimator app, intensity-based automatic image registration, and control point registration.
Computer Vision Toolbox offers automated feature detection and matching.
Image registration algorithms can be classified into several categories, for example according to the
image information used or the transformation model adopted. The following are the main categories:
Intensity-based vs feature-based:
Image registration or image alignment algorithms can be classified into intensity-based and
feature-based. One of the images is referred to as the moving or source and the others are
referred to as the target, fixed or sensed images. Image registration involves spatially
transforming the source/moving image(s) to align with the target image. The reference frame in
the target image is stationary, while the other datasets are transformed to match the
target. Intensity-based methods compare intensity patterns in images via correlation metrics,
while feature-based methods find correspondence between image features such as points, lines,
and contours. Intensity-based methods register entire images or sub-images. If sub-images are
registered, the centers of corresponding sub-images are treated as corresponding feature points.
Feature-based methods establish a correspondence between a number of especially distinct
points in the images. Knowing the correspondence between these points, a
geometrical transformation is then determined to map the target image to the reference images,
thereby establishing point-by-point correspondence between the reference and target images.
Methods combining intensity-based and feature-based information have also been developed.
Transformation models:
Image registration algorithms can also be classified according to the transformation models they
use to relate the target image space to the reference image space. The first broad category of
transformation models includes linear transformations, which include rotation, scaling,
translation, and other affine transforms. Linear transformations are global in nature, thus, they
cannot model local geometric differences between images.
The second category of transformations allows 'elastic' or 'non- rigid' transformations. These
transformations are capable of locally warping the target image to align with the reference
image. Non-rigid transformations include radial basis functions (thin-plate or surface splines
and compactly-supported transformations ), physical continuum models (viscous fluids), and
large deformation models.
Transformations are commonly described by a parameterization, where the model dictates the
number of parameters. For instance, the translation of a full image can be described by a single
parameter, a translation vector. These models are called parametric models. Non-parametric
models on the other hand, do not follow any parameterization, allowing each image element to
be displaced arbitrarily.
Transformations of coordinates:
Alternatively, many advanced methods for spatial normalization build on structure-preserving
transformations, homeomorphisms and diffeomorphisms, since they carry smooth sub-manifolds
smoothly during transformation. Diffeomorphisms are generated in the modern field of
Computational Anatomy based on flows, since diffeomorphisms are not additive, although they
form a group under the law of function composition. For this reason, flows, which
generalize the ideas of additive groups, allow for generating large deformations that preserve
topology, providing one-to-one and onto transformations. Computational methods for generating such
transformations are often called LDDMM, which provide flows of diffeomorphisms as the main
computational tool for connecting coordinate systems corresponding to the geodesic flows of
Computational Anatomy. There are a number of programs which generate diffeomorphic
transformations of coordinates via diffeomorphic mapping, including MRI Studio and MRI
Cloud.org.
Spatial vs frequency-domain methods:
Frequency-domain methods find the transformation parameters for registration of the images
while working in the transform domain. Such methods work for simple transformations, such as
translation, rotation, and scaling. Applying the phase correlation method to a pair of images
produces a third image which contains a single peak. The location of this peak corresponds to the
relative translation between the images. Unlike many spatial-domain algorithms, the phase
correlation method is resilient to noise, occlusions, and other defects typical of medical or
satellite images. Additionally, the phase correlation uses the fast Fourier transform to compute
the cross-correlation between the two images, generally resulting in large performance gains.
The method can be extended to determine rotation and scaling differences between two images
by first converting the images to log-polar coordinates. Due to properties of the Fourier
transform, the rotation and scaling parameters can be determined in a manner invariant to
translation.
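The phase correlation idea can be sketched in a few lines of Python/NumPy (an illustration written for this unit, not code from any particular toolbox; it recovers only an integer row/column shift):

    import numpy as np

    def phase_correlation_shift(fixed, moving):
        # Normalised cross-power spectrum of the two images
        F = np.fft.fft2(fixed)
        G = np.fft.fft2(moving)
        cross_power = F * np.conj(G)
        cross_power /= np.abs(cross_power) + 1e-12        # avoid division by zero
        corr = np.fft.ifft2(cross_power).real
        # The single peak marks the relative (row, column) translation
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
        size = np.array(corr.shape, dtype=float)
        peak[peak > size / 2] -= size[peak > size / 2]    # wrap large shifts to negative values
        return peak

    img = np.random.rand(64, 64)
    moved = np.roll(img, shift=(5, -3), axis=(0, 1))      # img circularly shifted by 5 rows, -3 columns
    print(phase_correlation_shift(moved, img))            # [ 5. -3.]: the applied shift is recovered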
Single- vs multi-modality methods:
Multi-modality registration methods are often used in medical imaging as images of a subject are
frequently obtained from different scanners. Examples include registration of
brain CT/MRI images or whole body PET/CT images for tumor localization, registration of
contrast-enhanced CT images against non-contrast-enhanced CT images for segmentation of
specific parts of the anatomy, and registration of ultrasound and CT images
for prostate localization in radiotherapy.
Automated registration
To register images using an intensity-based technique, a function such as imregister (in the Image
Processing Toolbox) is used, specifying the type of geometric transformation to apply to the moving
image. The function iteratively adjusts the transformation to optimize the similarity of the two images.
Alternatively, you can estimate a localized displacement field and apply a non-rigid transformation to
the moving image.
Control point registration is preferable when:
You want to prioritize the alignment of specific features, rather than the entire set of features
detected using automated feature detection. For example, when registering two images, you
can focus the alignment on desired anatomical features and disregard matched features that
correspond to less informative anatomical structures.
Images have repeated patterns that provide an unclear mapping using automated feature
matching. For example, photographs of buildings with many windows, or aerial photographs
of gridded city streets, have many similar features that are challenging to map automatically.
In this case, manual selection of control point pairs can provide a clearer mapping of
features, and thus a better transformation to align the feature points.
Control point registration can apply many types of transformations to the moving image. Global
transformations, which act on the entire image uniformly, include affine, projective, and
polynomial geometric transformations. Non-rigid transformations, which act on local regions,
include piecewise linear and local weighted mean transformations. Use the Control Point
Selection Tool to select control points; the tool is started with the cpselect function. An
illustration is given in figure 1 for control point registration of an image.
Uncertainty:
There is a level of uncertainty associated with registering images that have any spatio-temporal
differences. A confident registration with a measure of uncertainty is critical for many change
detection applications such as medical diagnostics.
In remote sensing applications where a digital image pixel may represent several kilometers of
spatial distance (such as NASA's LANDSAT imagery), an uncertain image registration can mean
that a solution could be several kilometers from ground truth. Several notable papers have
attempted to quantify uncertainty in image registration in order to compare results. However,
many approaches to quantifying uncertainty or estimating deformations are computationally
intensive or are only applicable to limited sets of spatial features.
In raw imagery, the useful data often populates only a small portion of the available range of
digital values (commonly 8 bits or 256 levels). Contrast enhancement involves changing the
original values so that more of the available range is used, thereby increasing the contrast
between targets and their backgrounds. As shown in figure 13.2, the original digital values lie
between 84 and 153 and are stretched to the range 0 to 255.
Figure 13.2: Changing the original digital values from the range 84-153 to 0-255 to enhance the
image
There are many different techniques and methods of enhancing contrast and detail in an image; we will cover only a few common
ones here. The simplest type of enhancement is a linear contrast stretch. This involves
identifying lower and upper bounds from the histogram (usually the minimum and maximum
brightness values in the image) and applying a transformation to stretch this range to fill the full
range. In our example, the minimum value (occupied by actual data) in the histogram is 84 and
the maximum value is 153. These 70 levels occupy less than one-third of the full 256 levels
available. A linear stretch uniformly expands this small range to cover the full range of values
from 0 to 255. This enhances the contrast in the image with light toned areas appearing lighter
and dark areas appearing darker, making visual interpretation much easier. This graphic
illustrates the increase in contrast in an image before (left) and after (right) a linear contrast
stretch.
The objective of the second group of image processing functions grouped under the term
of image enhancement is solely to improve the appearance of the imagery to assist in visual
interpretation and analysis. These procedures are applied to image data in order to effectively
display the data for subsequent visual interpretation. It involves techniques for increasing the
visual distinction between features in a scene. The objective is to create new images from
original data in order to increase the amount of information that can be visually interpreted from
the data.
Image enhancement techniques improve the quality of an image as perceived by a human. These
techniques are most useful because many satellite images when examined on a colour display
give inadequate information for image interpretation. There is no conscious effort to improve the
fidelity of the image with regard to some ideal form of the image. Image enhancement is
attempted after the image is corrected for geometric and radiometric distortions. Image
enhancement methods are applied separately to each band of a multi-spectral image. Digital
techniques have been found to be more satisfactory than photographic techniques for image
enhancement, because of the precision and wide variety of digital processes.
There exists a wide variety of techniques for improving image quality. The following are the most
commonly used image enhancement techniques:
Contrast:
Contrast generally refers to the difference in luminance or grey level values in an image and is an
important characteristic. It can be defined as the ratio of the maximum intensity to the minimum
intensity over an image.
C = Imax/ Imin (1)
Contrast ratio has a strong bearing on the resolving power and detectability of an image. The
larger this ratio, the easier it is to interpret the image.
Contrast Enhancement:
Contrast enhancement techniques expand the range of brightness values in an image so that the
image can be efficiently displayed in a manner desired by the analyst (Figure 13.3). The density
values in a scene are literally pulled farther apart, that is, expanded over a greater range. The
effect is to increase the visual contrast between two areas of different uniform densities. This
enables the analyst to discriminate easily between areas initially having a small difference in
density. Contrast enhancement can be effected by a linear or non-linear transformation.
In exchange for the greatly enhanced contrast of most of the original brightness values, there is a
trade-off in the loss of contrast at the extreme high and low digital number values. However,
compared with the overall contrast improvement, the contrast losses at the brightness extremes are
an acceptable trade-off, unless one is specifically interested in those elements of the scene.
The equation y = ax + b performs the linear transformation in a linear contrast stretch. The gain 'a'
and the offset 'b' are computed from the chosen lower and upper bounds; for an output range of 0 to
255, a = 255 / (max - min) and b = -a x min.
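A minimal linear stretch in this y = ax + b form might look as follows (a Python/NumPy sketch added for illustration; the bounds 84 and 153 are those used in the example above):

    import numpy as np

    def linear_stretch(band, low=84, high=153):
        # y = a*x + b with a = 255/(high - low) and b = -a*low; output clipped to 0-255
        a = 255.0 / (high - low)
        b = -a * low
        return np.clip(a * band.astype(float) + b, 0, 255).astype(np.uint8)

    band = np.random.randint(84, 154, (100, 100))   # synthetic band occupying DNs 84-153
    stretched = linear_stretch(band)
    print(stretched.min(), stretched.max())         # 0 255: the full range is now used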
Histogram Equalization:
This is a non-linear contrast enhancement technique. In this technique, the histogram of the
original image is redistributed to produce a uniform population density. This is obtained by
grouping certain adjacent grey values. Thus the number of grey levels in the enhanced image is
less than the number of grey levels in the original image. The redistribution of the histogram
results in greatest contrast being applied to the most populated range of brightness values in the
original image. In this process the light and dark tails of the original histogram are compressed,
thereby resulting in some loss of detail in those regions. This method gives large improvement in
image quality when the histogram is highly peaked (Figure 13.5).
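A compact sketch of histogram equalisation for an 8-bit band is shown below (Python/NumPy, illustrative only; image processing packages provide ready-made equalisation functions):

    import numpy as np

    def equalize(band, levels=256):
        # Histogram and its cumulative distribution function (CDF)
        hist, _ = np.histogram(band.flatten(), bins=levels, range=(0, levels))
        cdf = hist.cumsum()
        # Rescale the CDF to 0..levels-1 and use it as a lookup table from old DN to new DN
        cdf_min = cdf[cdf > 0].min()
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
        lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
        return lut[band]

    band = np.random.randint(84, 154, (100, 100))   # low-contrast synthetic band
    print(equalize(band).max())                     # 255: brightness values now spread over the full range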
Gaussian Stretch:
Most of the contrast enhancement algorithms result in loss of detail in the dark and light regions
in the image. Gaussian stretch technique enhances the contrast in the tails of the histogram, at the
expense of contrast in the middle grey range. When an analyst is interested in the details of
the dark and bright regions, the Gaussian stretch algorithm can be applied. This algorithm fits the
original histogram to a normal distribution curve between the 0 and 255 limits (Figure 13.6).
Simple linear stretching would only increase contrast in the centre of the distribution, and would
force the high and low peaks further towards saturation. With any type of contrast enhancement,
the relative tone of different materials is modified. Simple linear stretching has the least effect
on relative tones, and brightness differences can still be related to the differences in reflectivity.
In other cases, the relative tone can no longer be meaningfully related to the reflectance of
materials. An analyst must therefore be fully cognizant of the processing techniques that have
been applied to the data.
Density Slicing:
Digital images have high radiometric resolution. Images in some wavelength bands contain 256
distinct grey levels. A human interpreter, however, can reliably detect and consistently differentiate
only about 15 to 25 shades of grey. The human eye is far more sensitive to colour than to the
different shades between black and white. Density slicing is a technique that converts the
continuous grey tone of an image into a series of density intervals, or slices, each corresponding
to a specified digital range. Each slice is displayed in a separate colour, line printer symbol or
bounded by contour lines. This technique is applied on each band separately and emphasizes
subtle grey scale differences that are imperceptible to the viewer.
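In code, density slicing reduces to assigning each digital number to the interval it falls in; a short sketch (Python/NumPy; the slice boundaries are arbitrary illustrative values):

    import numpy as np

    def density_slice(band, boundaries=(50, 100, 150, 200)):
        # Returns a slice index (0 .. len(boundaries)) for every pixel; each index is then
        # displayed in its own colour
        return np.digitize(band, bins=np.asarray(boundaries))

    band = np.random.randint(0, 256, (100, 100))
    slices = density_slice(band)
    print(np.unique(slices))   # [0 1 2 3 4]: five colour classes instead of 256 grey levels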
Image transformations:
Image transformations are operations similar in concept to those for image enhancement.
However, unlike image enhancement operations which are normally applied only to a single
channel of data at a time, image transformations usually involve combined processing of data
from multiple spectral bands. Arithmetic operations (i.e. subtraction, addition, multiplication,
division) are performed to combine and transform the original bands into "new" images which
better display or highlight certain features in the scene. You will study many of these operations in
the next unit, including various methods of spectral or band ratioing and a procedure called
principal components analysis, which is used to represent the information in multichannel imagery
more efficiently.
13.4 SUMMARY
As a subfield of digital signal processing, digital image processing has many advantages
over analogue image processing. It allows a much wider range of algorithms to be applied to the
input data. The aim of digital image processing is to improve the image data by suppressing
unwanted distortions and/or enhancing important image features, so that subsequent visual
interpretation or AI/computer-vision models can benefit from the improved data.
Digital image processing of raw data needs preprocessing to correct for any distortion due to the
characteristics of the imaging system and imaging conditions. These procedures
include radiometric correction to correct for uneven sensor response over the whole image
and geometric correction to correct for geometric distortion due to Earth's rotation and other
imaging conditions (such as oblique viewing). The image may also be transformed to conform to
a specific map projection system. Furthermore, if accurate geographical location of an area on
the image needs to be known, ground control points (GCP's) are used to register the image to a
precise map (geo-referencing).
Remote sensing images require image processing tools to register different images and bring them
into alignment. The tools provide support for point mapping to determine the parameters of the
transformation required to bring an image into alignment with another image. In point mapping,
points are picked in a pair of images that identify the same feature or landmark in the images.
A geometric mapping is inferred from the positions of these control points. The image
registration categories described in this unit include i) Intensity-based vs feature-based ii)
Transformation models iii)Transformations of coordinates iv) Spatial vs frequency domain
methods v) Single- vs multi-modality methods vi) Automatic vs interactive methods vii)
Similarity measures for image registration viii) Intensity-Based Automatic Image Registration
ix) Control Point Registration and x) Automated Feature Detection and Matching.
Image enhancement techniques are most useful because many satellite images when examined
on a colour display give inadequate information for image interpretation. There is no conscious
effort to improve the fidelity of the image with regard to some ideal form of the image. Image
enhancement is attempted after the image is corrected for geometric and radiometric distortions.
Image enhancement methods are applied separately to each band of a multi-spectral image.
There exists a wide variety of techniques for improving image quality. The most commonly used
image enhancement techniques are i) contrast and contrast enhancement (contrast stretch),
ii) density slicing, iii) edge enhancement and iv) spatial filtering.
13.5 GLOSSARY
AI-Computer Vision models - Computer models that apply artificial intelligence techniques to
automatically interpret and analyse digital images.
RANSAC - When the number of control points exceeds the minimum required to define the
appropriate transformation model, iterative algorithms like RANSAC can be used to robustly estimate
the parameters of a particular transformation type (e.g. affine) for registration of the images.
LANDSAT- This term was used for Earth Resources Technology Satellite (ERTS) by NASA
(National Aeronautics and Space Administration), USA.
13.7 REFERENCES
1. https://sisu.ut.ee/imageprocessing/book/5
2. https://en.wikipedia.org/wiki/Image_registration
3. https://link.springer.com/chapter/10.1007/978-1-4899-3216-7_4
4. W.H. Press, S.A. Teukolsky, W.T. Vetterling & B.P. Flannery (1992): Numerical recipes in
C: The art of scientific computing. Second Edition. Cambridge University Press.
5. Barbara Zitová and Jan Flusser (2003). Image registration methods: a survey. Image and Vision
Computing, 21(11), 977-1000.
14.1 OBJECTIVES
14.2 INTRODUCTION
14.3 SPATIAL FILTERING TECHNIQUES & IMAGE
TRANSFORMATION
14.4 SUMMARY
14.5 GLOSSARY
14.6 ANSWER TO CHECK YOUR PROGRESS
14.7 REFERENCES
14.8 TERMINAL QUESTIONS
14.1 OBJECTIVES
The role of this chapter is to present spatial filtering and image transformation techniques of
value in the enhancement of remote sensing imagery. The spatial and frequency filtering
techniques are explained with respect to their types, methods and characteristics. The image
transformation techniques specifically include the principal components transformation,
creation of ratio images and the specialized transformation, such as the Kauth-Thomas tasseled
cap transform.
After reading this unit the learner will be able to understand:
Characteristics of Filter, Spatial filters.
Types and uses of spatial filtering.
Image transformation Types, characteristics and uses.
14.2 INTRODUCTION
In the previous unit you have learnt about preprocessing, image registration and image
enhancement techniques. This unit is also in continuation of image enhancement under different
filtering techniques and image transformation. The Filter tool can be used to either eliminate
spurious data or enhance features otherwise not visibly apparent in the data. Filters essentially
create output values by a moving, overlapping 3x3 cell neighborhood window that scans through
the input raster. As the filter passes over each input cell, the value of that cell and its 8 immediate
neighbors are used to calculate the output value. Spatial filtering technique increases the analyst's
ability to discriminate detail.
An image 'enhancement' is basically anything that makes it easier or better to visually interpret
an image. In some cases, like 'low-pass filtering', the enhanced image can actually look worse
than the original, but such an enhancement was likely performed to help the interpreter see low
spatial frequency features among the usual high frequency clutter found in an image. Also, an
enhancement is performed for a specific application. This enhancement may be inappropriate for
another purpose, which would demand a different type of enhancement.
Spatial filtering encompasses another set of digital processing functions which are used to
enhance the appearance of an image. Spatial filters are designed to highlight or suppress specific
features in an image based on their spatial frequency. Spatial frequency is related to the concept
of image texture which refers to the frequency of the variations in tone that appear in an image.
"Rough" textured areas of an image, where the changes in tone are abrupt over a small area, have
high spatial frequencies, while "smooth" areas with little variation in tone over several pixels,
have low spatial frequencies.
Spatial filtering includes the edge enhancement techniques for sharpening the edges of different
feature/cover types seen on remote sensing digital data. Here the filter works by identifying
sharp edge boundaries in the image, such as the edge between a subject and a background of a
contrasting color. This has the effect of creating bright and dark highlights on either side of any
edges in the image, called overshoot and undershoot, leading the edge to look more defined
when viewed.
The multispectral or vector character of most remote sensing image data renders it amenable to
spectral transformations that generate new sets of image components or bands. These
components then represent an alternative description of the data, in which the new components
of a pixel vector are related to its old brightness values in the original set of spectral bands via
a linear operation. The transformed image may make evident features not discernable in the
original data or alternatively it might be possible to preserve the essential information content
of the image (for a given application) with a reduced number of the transformed dimensions.
The last point has significance for displaying data in the three dimensions available on a colour
monitor or in colour hardcopy, and for transmission and storage of data.
DEFINITIONS:
Filter:
In general terms a filter is a porous article or mass through which a gas or liquid is passed
to separate out matter in suspension.
A colour filter is a transparent material (such as colored glass) that absorbs light of certain
wavelengths or colors selectively and is used for modifying light that reaches a sensitized
photographic material.
We may also define filter as software for sorting or blocking access to certain online
material.
In image processing filters are mainly used to suppress either the high frequencies in the
image, i.e. smoothing the image, or the low frequencies, i.e. enhancing or detecting edges in
the image.
Filtering:
Filtering is a technique for modifying or enhancing an image.
Filtering is a neighborhood operation, in which the value of any given pixel in the output
image is determined by applying some algorithm to the values of the pixels in the
neighborhood of the corresponding input pixel.
Spatial Filtering:
Spatial filtering is the process of dividing the image into its constituent spatial frequencies,
and selectively altering certain spatial frequencies to emphasize some image features.
Linear Filtering:
Linear filtering is filtering in which the value of an output pixel is a linear combination of the
values of the pixels in the input pixel's neighborhood.
Convolution:
Linear filtering of an image is accomplished through an operation called convolution.
Convolution is a neighborhood operation in which each output pixel is the weighted sum of
neighboring input pixels.
Edge Enhancement:
Edge Enhancement is an image processing filter that enhances the edge contrast of an
image or video in an attempt to improve its acutance (apparent sharpness). The Edge
Enhancement feature is much like the "Sharpness" control found on many video displays.
Image transformation:
Image transformation is a function or operator that takes an image as its input and produces
an image as its output.
Image transformations typically involve the manipulation of multiple bands of data, whether
from a single multispectral image or from two or more images of the same area acquired at
different times (i.e. multitemporal image data).
Image transformations generate "new" images from two or more sources which highlight
particular features or properties of interest, better than the original input images.
A common filtering procedure involves moving a 'window' of a few pixels in dimension (e.g.
3x3, 5x5, etc.) over each pixel in the image (Figure 14.1), applying a mathematical calculation
using the pixel values under that window, and replacing the central pixel with the new value. The
window is moved along in both the row and column dimensions one pixel at a time and the
calculation is repeated until the entire image has been filtered and a "new" image has been
generated. By varying the calculation performed and the weightings of the individual pixels in
the filter window, filters can be designed to enhance or suppress different types of features.
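The moving-window procedure can be written directly in a few lines (an illustrative Python/NumPy sketch; here the image border is simply left unfiltered):

    import numpy as np

    def filter_3x3(image, kernel):
        # Slide a 3 x 3 window over every interior pixel and replace the centre value
        # with the weighted sum of the window and the kernel
        out = image.astype(float).copy()
        rows, cols = image.shape
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                window = image[r - 1:r + 2, c - 1:c + 2]
                out[r, c] = np.sum(window * kernel)
        return out

    mean_kernel = np.full((3, 3), 1.0 / 9.0)       # low-pass (smoothing) kernel
    laplacian   = np.array([[ 0, -1,  0],
                            [-1,  4, -1],
                            [ 0, -1,  0]])         # one form of the discrete Laplacian
    smoothed = filter_3x3(np.random.randint(0, 256, (50, 50)), mean_kernel)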
The mathematical operation is identical to the multiplication in the frequency space, but the
results of the digital implementations vary, since we have to approximate the filter function with
a discrete and finite kernel.
The discrete convolution can be defined as a `shift and multiply' operation, where we shift the
kernel over the image and multiply its value with the corresponding pixel values of the image.
For a square kernel with size M× M, we can calculate the output image with the following
formula:
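The formula itself is not reproduced in the source; a standard form of the discrete convolution of an image f with an odd-sized M x M kernel h (notation assumed here) is

    g(x, y) = \sum_{i=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} \sum_{j=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} h(i, j) \, f(x - i, y - j)

where g is the filtered output image.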
Various standard kernels exist for specific applications, where the size and the form of the kernel
determine the characteristics of the operation. The most important of them are discussed in this
chapter. The kernels for two examples, the mean and the Laplacian operator, can be seen in
Figure 14.2.
Figure 14.2 Convolution kernel for a mean filter and one form of the discrete Laplacian.
In contrast to the frequency domain, it is possible to implement non-linear filters in the spatial
domain. In this case, the summations in the convolution function are replaced with some kind of
non-linear operator.
For most non-linear filters the elements of h(i,j) are all 1. A commonly used non-linear operator
is the median, which returns the `middle' of the input values.
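As an example of such a non-linear neighbourhood operation, a 3 x 3 median filter can be applied in one line with SciPy (an illustrative sketch; the input band here is synthetic):

    import numpy as np
    from scipy import ndimage

    noisy = np.random.randint(0, 256, (100, 100)).astype(np.uint8)   # synthetic noisy band
    smoothed = ndimage.median_filter(noisy, size=3)   # each pixel replaced by the median of its 3x3 neighbourhood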
Filtering in the spatial domain refers to a neighborhood operation, in which the value of any given
pixel in the output image is determined by applying some algorithm to the values of the pixels in
the neighborhood of the corresponding input pixel. A pixel's neighborhood is some set of pixels,
defined by their locations relative to that pixel.
Spatial filters generally serve two purposes when applied to remotely sensed data: i) enhance
imagery or ii) restore imagery. When it comes to enhancing imagery, spatial filters can help
uncover patterns and processes. Spatial filters are useful for both manual image interpretation
and automated feature extraction. Spatial filters can also help to restore imagery that has either
gaps or artifacts.
In ArcGIS Pro the most efficient way to apply spatial filters is to use the raster analysis
function Convolution. From the Analysis menu, i) select Raster Functions and ii) choose
the Convolution raster function. Within the Raster Functions pane, iii) select the input raster, iv)
specify the type, and optionally v) modify the kernel. When you finish adjusting the parameters,
vi) click Create new layer.
Correlation:
The operation called correlation is closely related to convolution. In correlation, the value of an
output pixel is also computed as a weighted sum of neighboring pixels. The difference is that the
matrix of weights, in this case called the correlation kernel, is not rotated during the
computation. The Image Processing Toolbox filter design functions return correlation kernels.
The following figure shows how to compute the output pixel of the correlation of A,
assuming h is a correlation kernel instead of a convolution kernel, using these steps:
1. Slide the center element of the correlation kernel so that it lies on top of the (2, 4) element of A.
2. Multiply each weight in the correlation kernel by the pixel of A underneath.
3. Sum the individual products.
The (2, 4) output pixel from the correlation is the sum of these individual products.
The three types of spatial filters used in remotely sensed data processing are low pass filters,
band pass filters and high pass filters.
Following is a simple example of a low pass (mean, LOW option) filter calculation for one
processing cell, using the 3 x 3 neighborhood of input values below:
7 5 2
4 8 3
3 1 5
The calculation for the processing cell (the center input cell with the value 8) is to find the
average of the input cells. This is the sum of all the values in the input contained by the
neighborhood, divided by the number of cells in the neighborhood (3 x 3 = 9).
Value = ((7 + 5 + 2) + (4 + 8 + 3) + (3 + 1 + 5)) / 9 = 38 / 9 = 4.222
The output value for the processing cell location will be 4.22.
Since the mean is being calculated from all the input values, the highest value in the list, which is
the value 8 of the processing cell, is averaged out.
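The same LOW-option calculation can be verified directly (Python/NumPy, for illustration):

    import numpy as np

    window = np.array([[7, 5, 2],
                       [4, 8, 3],
                       [3, 1, 5]])
    print(round(window.mean(), 3))   # 4.222: the output value for the centre cell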
In the following example, the input raster has an anomalous data point caused by a data
collection error. The averaging characteristics of the LOW option have smoothed the anomalous
data point.
A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce
the smaller detail in an image. Thus, low-pass filters generally serve to smooth the appearance of
an image. Average and median filters, often used for radar imagery (and described in Chapter 3),
are examples of low-pass filters. High-pass filters do the opposite and serve to sharpen the
appearance of fine detail in an image. One implementation of a high-pass filter first applies a
low-pass filter to an image and then subtracts the result from the original, leaving behind only
the high spatial frequency information. Directional, or edge detection filters are designed to
highlight linear features, such as roads or field boundaries. These filters can also be designed to
enhance features which are oriented in specific directions. These filters are useful in applications
such as geology, for the detection of linear geologic structures.
With the HIGH option, the nine input z-values are weighted in such a way that removes low
frequency variations and highlights the boundary between different regions.
The output z-values are an indication of the smoothness of the surface, but they have no relation
to the original z-values. Z-values are distributed about zero with positive values on the upper side
of an edge and negative values on the lower side. Areas where the z-values are close to zero are
regions with nearly constant slope. Areas with values near z-min and z-max are regions where
the slope is changing rapidly.
Following is a simple example of the calculations for one processing cell (the center cell with the
value 8):
7 5 2
4 8 3
3 1 5
The calculation for the processing cell (the center cell with the value 8) is as follows:
Value = (7 × -0.7) + (5 × -1.0) + (2 × -0.7) + (4 × -1.0) + (8 × 6.8) + (3 × -1.0) + (3 × -0.7) + (1 × -1.0) + (5 × -0.7)
      = (-4.9 - 5.0 - 1.4) + (-4.0 + 54.4 - 3.0) + (-2.1 - 1.0 - 3.5) = -11.3 + 47.4 - 6.6 = 29.5
The output value for the processing cell will be 29.5.
By giving negative weights to its neighbors, the filter accentuates the local detail by pulling out
the differences or the boundaries between objects.
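The same calculation can be verified with a short Python sketch (assuming NumPy; the weights are
those quoted in the worked example above, which correspond to a typical high-pass kernel):

    import numpy as np

    window = np.array([[7, 5, 2],
                       [4, 8, 3],
                       [3, 1, 5]], dtype=float)

    # Weights from the worked example: centre strongly positive, neighbours negative
    high_kernel = np.array([[-0.7, -1.0, -0.7],
                            [-1.0,  6.8, -1.0],
                            [-0.7, -1.0, -0.7]])

    value = np.sum(window * high_kernel)
    print(round(value, 1))       # 29.5, matching the worked example above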
In an example below, the input raster has a sharp edge along the region where the values change
from 5.0 to 9.0. The edge enhancement characteristic of the HIGH option has detected the edge.
Figure 14.5 Example of Filter output with HIGH option filter (Edge Enhancement)
If the processing cell itself is No Data, with the Ignore No Data option selected, the output value
for the cell will be calculated based on the other cells in the neighborhood that have a valid
value. Of course, if all of the cells in the neighborhood are No Data, the output will be No Data,
regardless of the setting for this parameter.
Most edge enhancement filters are thus based on first and second order derivatives, and various
gradient filters are also in common use.
Pixel Difference:
The pixel difference edge enhancement filter is very similar to the Roberts edge enhancement
filter, and the outputs are alike but respond to opposite directions.
For many remote sensing Earth science applications, the most valuable information that may be
derived from an image is contained in the edges surrounding various objects of interest. Edge
enhancement delineates these edges and makes the shapes and details comprising the image
more conspicuous and perhaps easier to analyze. Generally, what the eyes see as pictorial edges
are simply sharp changes in brightness value between two adjacent pixels. The edges may be
enhanced using either linear or nonlinear edge enhancement techniques.
Laplacian filter:
The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The
Laplacian of an image highlights regions of rapid intensity change and is therefore often used
for edge detection. The Laplacian is often applied to an image that has first been smoothed with
something approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise,
and hence the two variants will be described together here. The operator normally takes a single
graylevel image as input and produces another graylevel image as output.
How It Works:
The Laplacian L(x,y) of an image with pixel intensity values I(x,y) is given by the sum of the
second partial derivatives: L(x,y) = ∂²I/∂x² + ∂²I/∂y².
Figure 14.6: Two commonly used discrete approximations to the Laplacian filter. (Note, we
have defined the Laplacian using a negative peak because this is more common; however, it
is equally valid to use the opposite sign convention.)
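A minimal sketch of this procedure, assuming NumPy and SciPy (not part of the original text) and
using one common discrete approximation with a negative centre peak, is given below; the
Gaussian smoothing step precedes the Laplacian to reduce its sensitivity to noise.

    import numpy as np
    from scipy import ndimage

    image = np.random.rand(100, 100)            # stand-in for a graylevel image

    # One common discrete approximation of the Laplacian (negative centre peak)
    lap_kernel = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=float)

    smoothed = ndimage.gaussian_filter(image, sigma=1.0)   # reduce noise first
    laplacian = ndimage.convolve(smoothed, lap_kernel)     # highlights rapid intensity change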
Table 14.1 Algorithms for enhancing horizontal, vertical, and diagonal edges
Nonlinear edge enhancements are performed using nonlinear combinations of pixels. Many
algorithms are applied using either 2x2 or 3x3 kernels. The Sobel edge detector is based on the
notation of the 3x3 window previously described and is computed according to the relationship:
Sobel_out = √(X² + Y²)
where X and Y are the responses of the horizontal and vertical 3x3 Sobel kernels applied to the window.
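As an illustration, assuming NumPy and SciPy (scipy.ndimage provides the horizontal and vertical
Sobel responses directly), the Sobel magnitude can be computed as:

    import numpy as np
    from scipy import ndimage

    image = np.random.rand(100, 100)        # stand-in for a graylevel image

    gx = ndimage.sobel(image, axis=1)       # horizontal gradient response (X)
    gy = ndimage.sobel(image, axis=0)       # vertical gradient response (Y)

    sobel_magnitude = np.sqrt(gx ** 2 + gy ** 2)   # sqrt(X^2 + Y^2), as above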
In image transformation, depending on the transform chosen, the input and output images may
appear entirely different and have different interpretations. Fourier transforms, principal
component analysis (also called Karhunen-Loeve analysis), and various spatial filters are
examples of frequently used image transformation procedures. In a geometric image
transformation defined by a mapping function f, the value of every pixel at position (x, y) in the
output image is obtained from the position f(x, y) in the input image; this is known as a
backward (inverse) transformation.
Image Subtraction:
Basic image transformations apply simple arithmetic operations to the image data. Image
subtraction is often used to identify changes that have occurred between images collected on
different dates. Typically, two images which have been geometrically registered are used with
the pixel (brightness) values in one image (1) being subtracted from the pixel values in the other
(2). Scaling the resultant image (3) by adding a constant (127 in this case) to the output values
will result in a suitable 'difference' image. In such an image, areas where there has been little or
no change (A) between the original images, will have resultant brightness values around 127
(mid-grey tones), while those areas where significant change has occurred (B) will have values
higher or lower than 127 - brighter or darker depending on the 'direction' of change in reflectance
between the two images. This type of image transform can be useful for mapping changes in
urban development around cities and for identifying areas where deforestation is occurring, as in
this example.
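As a minimal sketch of this differencing step (assuming NumPy; halving the difference before
adding the 127 offset is one simple scaling choice so that the result fits the 0-255 range, and
other scalings are possible), the operation might be implemented as:

    import numpy as np

    def difference_image(date1, date2, offset=127):
        """Subtract two co-registered 8-bit images and re-scale around mid-grey."""
        diff = date1.astype(np.int16) - date2.astype(np.int16)   # range -255 .. 255
        scaled = np.clip(diff / 2 + offset, 0, 255)              # centre on 127
        return scaled.astype(np.uint8)

    # Unchanged areas come out near 127; brighter or darker pixels mark change.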
A further benefit of spectral ratioing (discussed in more detail below) is that, because we are looking at relative values (i.e. ratios)
instead of absolute brightness values, variations in scene illumination as a result of topographic
effects are reduced. Thus, although the absolute reflectances for forest covered slopes may vary
depending on their orientation relative to the sun's illumination, the ratio of their reflectances
between the two bands should always be very similar. More complex ratios involving the sums
of and differences between spectral bands for various sensors have been developed for
monitoring vegetation conditions. One widely used image transform is the Normalized
Difference Vegetation Index (NDVI) which has been used to monitor vegetation conditions on
continental and global scales using the Advanced Very High Resolution Radiometer (AVHRR)
sensor onboard the NOAA series of satellites.
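A minimal NDVI sketch, assuming NumPy and hypothetical near-infrared and red band arrays, is:

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        denom = nir + red
        denom[denom == 0] = np.nan          # avoid division by zero
        return (nir - red) / denom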
Principal components analysis is used to compress the information contained in a set of correlated
original bands into the least number of new components. As an example of the use of principal components analysis, a
seven band Thematic Mapper (TM) data set may be transformed such that the first three
principal components contain over 90 percent of the information in the original seven bands.
Interpretation and analysis of these three bands of data, combining them either visually or
digitally, is simpler and more efficient than trying to use all of the original seven bands. Principal
components analysis, and other complex transforms, can be used either as an enhancement
technique to improve visual interpretation or to reduce the number of bands to be used as input to
digital classification procedures, discussed in the next section.
This is the most widely used and popular of the digital image enhancement techniques. The
technique consists of deriving the eigenvalues and associated eigenvectors of the band covariance
(or correlation) matrix; the PCA outputs are produced by applying the eigenvectors as linear
combinations of the original data. The ultimate effect is a reorientation of the coordinate system
with respect to the original system. PCA, a form of Karhunen-Loeve analysis, has proven to be of
significant value in the analysis of remotely sensed digital data (Jensen, 1986). PCA images are
often more interpretable than the original data, and the transform is also used as a data
compression technique. For example, if four multispectral bands are used as input to derive PCA
images, the first principal component (PC1) contains the maximum information, compiled from
all spectral channels, and the remaining components carry progressively less information. The
technique reduces the correlation between the principal components while maximizing the
variance captured by each successive component, and variance is directly proportional to the
information content in the image.
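A minimal sketch of the eigenvector-based computation described above, assuming NumPy and a
band-interleaved input array (the array shape and function name are illustrative), is:

    import numpy as np

    def principal_components(bands):
        """bands: array of shape (n_bands, rows, cols). Returns PC images and eigenvalues."""
        n_bands, rows, cols = bands.shape
        X = bands.reshape(n_bands, -1).astype(float)    # one row per band
        X -= X.mean(axis=1, keepdims=True)              # centre each band
        cov = np.cov(X)                                 # band covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)          # eigen decomposition
        order = np.argsort(eigvals)[::-1]               # PC1 = largest variance
        pcs = eigvecs[:, order].T @ X                   # linear combinations of original data
        return pcs.reshape(n_bands, rows, cols), eigvals[order]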
Band ratioing:
Sometimes differences in brightness values from identical surface materials are caused by
topographic slope and aspect, shadows, or seasonal changes in sunlight illumination angle and
intensity. These conditions may hamper the ability of an interpreter or classification algorithm to
identify correctly surface materials or land use in a remotely sensed image.
Fortunately, ratio transformations of the remotely sensed data can, in certain instances, be
applied to reduce the effects of such environmental conditions. In addition to minimizing the
effects of environmental factors, ratios may also provide unique information not available in any
single band that is useful for discriminating between soils and vegetation. The mathematical
expression of the ratio function is
BV(i,j,r) = BV(i,j,k) / BV(i,j,l), where BV(i,j,k) and BV(i,j,l) are the brightness values of the
pixel at row i, column j in bands k and l, and BV(i,j,r) is the resulting ratio value.
To represent the range of the function in a linear fashion and to encode the ratio values in a
standard 8-bit format (values from 0 to 255), normalizing functions are applied. Using this
normalizing function, the ratio value 1 is assigned the brightness value 128. Ratio values within
the range 1/255 to 1 are assigned values between 1 and 128 by the function
BV(i,j,n) = Int[(BV(i,j,r) × 127) + 1]
Ratio values from 1 to 255 are assigned values within the range 128 to 255 by the function
BV(i,j,n) = Int[128 + (BV(i,j,r) / 2)]
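A minimal sketch of the ratio and the normalizing functions as reconstructed above, assuming
NumPy (the band names are hypothetical):

    import numpy as np

    def normalized_ratio(band_k, band_l):
        """Ratio two bands and encode the result on a 0-255 (8-bit) scale."""
        r = band_k.astype(float) / np.maximum(band_l.astype(float), 1e-6)
        out = np.where(r <= 1.0,
                       (r * 127) + 1,     # ratios in 1/255..1 map to 1..128
                       128 + r / 2)       # ratios in 1..255   map to 128..255
        return np.clip(out, 0, 255).astype(np.uint8)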
Simple ratios between bands only negate multiplicative extraneous effects. When additive effects
are present, we must ratio between band differences. Ratio techniques compensate only for those
factors that act equally on the various bands under analysis. In the individual bands the
reflectance values are lower in a shadowed area, and it would be difficult to match such an
outcrop with a sunlit outcrop. The ratio values, however, are nearly identical in the shadowed and
sunlit areas, so sandstone outcrops would have similar signatures on ratio images. This removal
of illumination differences also eliminates the dependence of ratio images on topography.
Ratio images can be meaningfully interpreted because they can be directly related to the spectral
properties of materials. Ratioing can be thought of as a method of enhancing minor differences
between materials by defining the slope of spectral curve between two bands.
Apart from the simple ratio of the form A/B, other ratios like A/(A+B), (A- B) /(A+B),
(A+B)/(A-B) are also used in some investigations. But a systematic study of their use for
different applications is not available in the literature. It is important that the analyst be cognizant
of the general types of materials found in the scene and their spectral properties in order to select
the best ratio images for interpretation. Ratio images have been successfully used in many
geological investigations to recognize and map areas of mineral alteration and for lithologic
mapping.
Three major orthogonal directions of significance in vegetation can be identified. The first is the
principal diagonal along which soils are distributed. This was chosen by Kauth and Thomas as
the first axis in the tasseled cap transformation and is known as brightness. The development of
vegetation towards maturity appears to occur orthogonally to the soil major axis. This direction
was then chosen as the second axis, with the intention of providing a greenness indicator.
Senescence takes place in a different plane from maturity, so a third axis orthogonal to the
soil line and greenness axis will give a yellowness measure. Finally a fourth axis is required to
account for data variance not substantially associated with differences in soil brightness or
vegetative greenness or yellowness. Again this needs to be orthogonal to the previous three. It
was called ‘non-such’ by Kauth and Thomas in contrast to the names ‘soil brightness’, ‘green-
stuff’ and ‘yellow-stuff’ they applied to the previous three.
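Mechanically, the tasseled cap components are fixed linear combinations of the input bands. The
sketch below (assuming NumPy) shows only the mechanics: the coefficient rows are illustrative
placeholders, not the published Kauth-Thomas weights, which should be taken from the
literature for the sensor concerned.

    import numpy as np

    # Placeholder coefficient matrix: one row per output component
    # (brightness, greenness, yellowness, non-such), one column per input band.
    # These values are illustrative only, NOT the published Kauth-Thomas weights.
    coeffs = np.array([[0.3, 0.3, 0.3, 0.3],
                       [-0.3, -0.3, 0.6, 0.6],
                       [0.5, -0.5, 0.5, -0.5],
                       [0.4, -0.4, -0.4, 0.4]])

    def tasseled_cap(bands):
        """bands: (4, rows, cols) array -> (4, rows, cols) transformed components."""
        n, rows, cols = bands.shape
        return (coeffs @ bands.reshape(n, -1)).reshape(n, rows, cols)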
Decorrelation Techniques:
Multispectral digital data normally exhibit a high degree of correlation among the spectral
channels. Due to this, separation of certain features becomes extremely difficult. Contrast
stretching of the data does not produce improved results, as the data tend to concentrate along
the diagonal of the multidimensional feature space. Hence, to improve interpretability and data
quality it is necessary to reduce the correlation between the spectral channels so that the data
spread out to all corners of the feature space. Principal Component Analysis and the Hue,
Saturation, Intensity (HSI) transformation are some of the commonly used tools.
HSI Technique:
This is yet another decorrelation technique, which works in a three-dimensional colour space to
produce an output with characteristics similar to a PC image. The Red, Green, Blue (RGB) input
can be manipulated by a three-dimensional transformation to obtain Hue, which describes the
perceived colour; Saturation, which describes the degree of purity of the colour; and Intensity,
which describes the brightness or dullness of the colour. This enhancement is particularly useful
in deriving more perceptible and interpretable images. After stretching, the HSI image can be
transformed back to RGB space to work in the normal colour composite mode for better
differentiation of objects.
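One common formulation of the RGB to HSI conversion (several variants exist in the literature)
can be sketched as follows, assuming NumPy and input channels scaled to the 0-1 range:

    import numpy as np

    def rgb_to_hsi(r, g, b):
        """One common RGB -> HSI formulation; inputs scaled to 0..1."""
        r, g, b = [np.asarray(x, dtype=float) for x in (r, g, b)]
        i = (r + g + b) / 3.0                                            # intensity
        s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-6)  # saturation
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
        theta = np.degrees(np.arccos(np.clip(num / den, -1, 1)))
        h = np.where(b > g, 360.0 - theta, theta)                        # hue in degrees
        return h, s, i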
14.4 SUMMARY
Spatial filtering types, techniques and methods are used to enhance the appearance of an image,
which improves its clarity for visual as well as digital image processing and interpretation. Such
tools and techniques enable the extraction of a large amount of information, depending on the
objectives of users. Spatial filters are designed to highlight or suppress specific features in an
image based on their spatial frequency. Spatial frequency is related to the concept of image
texture, which refers to the frequency of the variations in tone that appear in an image. "Rough"
textured areas of an image, where the changes in tone are abrupt over a small area, have high
spatial frequencies, while "smooth" areas with little variation in tone over several pixels have
low spatial frequencies. Similarly, in image transformation the multispectral character of remote
sensing data renders it amenable to spectral transformations that generate new sets of image
components or bands, which improve the clarity of ground features in different shades/tones,
texture etc. The transformed image makes evident features that are not discernible in the original
data. The techniques are significant for displaying data in the three dimensions available on a
colour monitor or in colour hardcopy, and for transmission and storage of data.
14.5 GLOSSARY
Absorption factor-The ratio of the total absorbed radiant or luminous flux to the incident flux
Standard unit of radiance- watts per steradian and square meter (W/sr m²).
Field Angle- The Field Angle is the angle between the two directions opposed to each other over
the beam axis for which the luminous intensity is 10% that of the maximum luminous intensity.
In some cases it is also called a beam angle.
Scattering Albedo-The ratio between the scattering coefficient and the absorption coefficient for
a participating medium. An albedo of 0 means that the particles do not scatter light. An albedo of
1 means that the particles do not absorb light.
Transmission coefficient-The ratio of the directly transmitted light after passing through one
unit of a participating medium (atmosphere, dust, fog) to the amount of light that would have
passed the same distance through a vacuum. It is the amount of light that remains after the
absorption coefficient and the scattering coefficient (together the extinction coefficient) are
accounted for.
14.7 REFERENCES
1. Hord, R. M. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic.
2. Jensen, J.R. 1986. Introductory Digital Image Processing: A Remote Sensing Perspective.
Prentice-Hall, Englewood Cliffs, NJ.
3. Lillesand, T.M. and Kiefer, R.W. 1994. Remote Sensing and Image Interpretation. John
Wiley & Sons, Inc. New York, pp 750.
4. Moik, J.G. 1980. Digital Processing of Remotely Sensed Images, NASA SP-431, Govt.
Printing Office, Washington, D.C.
5. Muller, J.P. (ed.) 1996. Digital Image Processing in Remote Sensing. Taylor & Francis,
London/Philadelphia.
6. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 55, no 9.
7. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 56, no 1.
8. Richards, J. A. 1986. Remote Sensing Digital Image Analysis: An Introduction. Berlin:
Springer-Verlag.
9. Rosenfeld, A. 1978. Image Processing and Recognition, Technical Report 664. University of
Maryland Computer Vision Laboratory
10. Sabins, F.F. 1986. Remote Sensing: Principles and Interpretation, 2nd ed. Freeman, New York.
15.1 OBJECTIVES
15.2 INTRODUCTION
15.3 IMAGE CLASSIFICATION
15.4 SUMMARY
15.5 GLOSSARY
15.6 ANSWER TO CHECK YOUR PROGRESS
15.7 REFERENCES
15.8 TERMINAL QUESTIONS
15.1 OBJECTIVES
After reading this unit the learner will be able to understand:
Importance of Digital Image Classification
Spectral Signature
Classification Training and Types of Classification
Classification Accuracy Assessment
Classification Error Matrix
15.2 INTRODUCTION
In the previous units you have studied the basics of digital image processing, preprocessing,
image registration and different types of digital enhancement techniques for improving clarity
and interpretability and maximizing reliable information extraction. The sequence of these study
topics builds progressively towards a full understanding of digital image processing tools and
techniques. However, the ultimate goal of digital image classification and analysis is still to be
learnt, and it is described in this unit. Before learning digital image classification it is useful to
know its historical background, outlined in the following paragraph:
Considerable amount of work has been done for mapping, monitoring and analysis of various
resources along with different ecosystems and environmental parameters at various levels.
Stereoscopic interpretation of aerial photographs and visual interpretation of coarse resolution
satellite images were the most common methods during the 1960s and 1970s. Initially, in the
1960s, with the emergence of the space program, cosmonauts and astronauts started taking
photographs out of the windows of the spacecraft in which they were orbiting the Earth. During
the 1970s, 1:1 million scale satellite images were used for interpretation and mapping of broad land use and land cover
categories. With the passage of time the choice of visual interpretation method on paper print
was changed into onscreen visual interpretation of data visible in the computer monitor. Today,
remote sensing is carried out using airborne and satellite technology, not only utilizing film
photography, but also digital camera, scanner and video, as well as radar and thermal
sensors. Unlike in the past, when remote sensing was restricted to only the visual part of the
electromagnetic spectrum i.e., what could be seen with naked eye, today through the use of
special filters, photographic films and other types of sensors, the parts of the spectrum which
cannot be seen with the naked human eye can also be utilized. Now the digital image processing
techniques, using sophisticated computer hardware and software, are quite common for detailed
data analysis. There are various methods by which the raw data available from satellites are
rectified and enhanced so as to obtain clarity with a high degree of contrast. By applying
different mathematical algorithms during processing and classification, one can achieve the
desired results. Thus, today remote sensing is largely utilized in environmental
management, which frequently requires rapid, accurate, and up-to-date data collection and its
digital image processing.
Digital image classification of satellite remote sensing data is used to digitally identify and
classify pixels in the data. Classification is usually performed on multi-channel data sets, and
this process assigns each pixel in an image to a particular class or theme based on the statistical
characteristics of the pixel brightness values. There are a variety of approaches taken to perform
digital classification; the two generic approaches used most often are supervised and
unsupervised classification.
The objective of these operations is to replace visual analysis of the image data with quantitative
techniques for automating the identification of features in a scene. This involves the analysis of
multi-spectral image data and the application of statistically based decision rules for determining
the land cover identity of each pixel in an image. The intent of the classification process is to
categorize all pixels in a digital image into one of several land cover classes or themes. The
classified data may be used to produce thematic maps of the land cover present in an image.
Based on the contents and sub-topics described in this unit, the study is aimed at the objectives
listed in Section 15.1 above. The key terms used throughout the unit are defined first.
DEFINITIONS:
Spectral signature:
Spectral signature is the variation of reflectance or emittance of a material with respect
to wavelengths (i.e., reflectance/emittance as a function of wavelength).
The spectral signature of an object is a function of the incidental EM wavelength and
material interaction with that section of the electromagnetic spectrum.
Supervised classification:
Supervised classification uses the spectral signatures obtained from training samples to
classify an image.
Unsupervised classification:
Unsupervised classification finds spectral classes (or clusters) in a multiband image without
the analyst’s intervention.
Accuracy and Classification Accuracy:
Accuracy is one metric for evaluating classification models.
Informally, accuracy is the fraction of predictions our model got right.
Formally, Accuracy=Number of correct predictions/Total number of predictions.
Classification accuracy is defined as the percentage of correct predictions.
Confusion Matrix:
A confusion matrix describes the performance of a classifier so that one can see what types of
errors the classifier is making.
The previous units described image processing techniques that assist the analyst in the
qualitative, i.e. visual, interpretation of images. Here, multi-spectral classification is emphasized
because it is, at present, the most common approach to computer-assisted mapping from remote
sensing images. It is important at this point, however, to make a few appropriate comments about
multi-spectral classification. First, it is fundamental that we are attempting to objectively map
areas on the ground that have similar spectral reflectance characteristics. The resulting labels
assigned to the image pixels, therefore, represent spectral classes that may or may not correspond
to the classes of ground objects that we are ultimately interested in mapping. Second, manually
produced maps are the result of a long, often complex process that utilizes many sources of
information. The conventional tools used to produce a map range from the strictly quantitative
techniques of photogrammetry and geodesy, to the less quantitative techniques of photo-
interpretation and field class descriptions, to the subjective and artistic techniques of map
"generalization" and visual exploration of discrete spatial data points.
The final output of the classification process is a type of digital image, specifically a map of the
classified pixels. For display, the class at each pixel may be coded by character or graphic
symbols or by color. The classification process compresses the image data by reducing the large
number of gray levels in each of several spectral bands into a few classes in a single image.
SPECTRAL SIGNATURES:
Relying on the assumption that different surface materials have different spectral reflectance (in
the visible and microwave regions) or thermal emission characteristics, multispectral
classification logically partitions the large spectral measurement space (256^k possible pixel
vectors for an image with 8 bits/pixel/band and k bands) into relatively few regions, each
representing a different type of surface material. The set of discrete spectral radiance
measurements provided by the broad spectral bands of the sensor defines the spectral signature of
each class, as modified by the atmosphere between the sensor and the ground. The spectral
signature is a k-dimensional vector whose coordinates are the measured radiances in each
spectral band.
Concept of Spectral Signature in Image Classification:
Features on the Earth reflect, absorb, transmit, and emit electromagnetic energy from the sun.
Special digital sensors have been developed to measure all types of electromagnetic energy as it
interacts with objects in all of the ways listed above. The ability of sensors to measure these
interactions allows us to use remote sensing to measure features and changes on the Earth and in
our atmosphere. A measurement of energy commonly used in remote sensing of the Earth is
reflected energy (e.g., visible light, near-infrared, etc.) coming from land and water surfaces. The
amount of energy reflected from these surfaces is usually expressed as a percentage of the
amount of energy striking the objects. Reflectance is 100% if all of the light striking an object
bounces off and is detected by the sensor. If none of the light returns from the surface,
reflectance is said to be 0%. In most cases, the reflectance value of each object for each area of
the electromagnetic spectrum is somewhere between these two extremes. Across any range of
wavelengths, the percent reflectance values for landscape features such as water, sand, roads,
forests, etc. can be plotted and compared. Such plots are called “spectral response curves” or
“spectral signatures.” Differences among spectral signatures are used to help classify remotely
sensed images into classes of landscape features since the spectral signatures of like features
have similar shapes. The figure no 15.1 below shows differences in the spectral response curves
for healthy versus stressed sugar beet plants.
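Numerically, a class signature of this kind is simply the per-band mean (and spread) of the
sample pixels. A minimal sketch, assuming NumPy and hypothetical band and mask arrays, is:

    import numpy as np

    def class_signature(bands, training_mask):
        """Mean spectral signature (one value per band) of the training pixels.

        bands: array of shape (n_bands, rows, cols)
        training_mask: boolean array of shape (rows, cols) marking sample pixels
        """
        return np.array([band[training_mask].mean() for band in bands])

    # Comparing such per-band mean vectors for water, sand, forest, etc. is the
    # numerical counterpart of the spectral response curves described above.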
CLASSIFICATION TRAINING:
The first step of any classification procedure is the training of the computer program to recognize
the class signatures of interest. To train the computer program, we must supply a sample of
pixels from which class signatures, for example, mean vectors and covariance matrices can be
developed. There are basically two ways to develop signatures:
Supervised training
Unsupervised training
Supervised Classification:
For supervised training, the analyst uses prior knowledge derived from field surveys, photo-
interpretation, and other sources, about small regions of the image to be classified to identify
those pixels that belong to the classes of interest. The feature signatures of these analyst-
identified pixels are then calculated and used to recognize pixels with similar signatures
throughout the image.
In a supervised classification, the identity and location of some of the land cover types, such
as urban, agriculture, wetland, and forest, are known a priori through a combination of field work,
analysis of aerial photography, maps, and personal experience.
These areas are commonly referred to as training sites because the spectral characteristics of
these known areas are used to "train" the classification algorithm for eventual land cover
mapping of the remainder of the image. Multi-variate statistical parameters (means, standard
deviations, covariance matrices, correlation matrices, etc.) are calculated for each training site.
Every pixel both within and outside these training sites is then evaluated and assigned to the
class of which it has the highest likelihood of being a member. The following are important
aspects of conducting a rigorous and hopefully useful supervised classification of remote sensor
data:
The classification performance depends on using suitable algorithms to label the pixels in an
image as representing particular ground cover types, or classes. A wide variety of algorithms are
available for supervised classification.
Maximum Likelihood Classification- This is the most common method used for remote sensing
image data interpretation/classification. The decision rules are based on Bayes' principle. To
derive the a priori probabilities, a sufficient amount of training data should be available in the
form of ground referenced data for the various cover types. From this information, training set
spectral statistics and a priori probabilities are calculated, and each pixel is then assigned to the
class for which its likelihood is highest (Figure 15.2).
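A minimal sketch of a Gaussian maximum likelihood decision rule, assuming NumPy (the class
names, priors and training arrays are hypothetical), is given below; each pixel vector is assigned
to the class with the highest log-likelihood plus log prior.

    import numpy as np

    def train_gaussian(samples):
        """samples: (n_pixels, n_bands) training pixels for one class."""
        mean = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False)
        return mean, cov

    def log_likelihood(x, mean, cov, prior=1.0):
        """Gaussian log-likelihood (plus log prior) of pixel vector x for one class."""
        diff = x - mean
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        return np.log(prior) - 0.5 * (logdet + diff @ inv @ diff)

    def classify_pixel(x, class_stats):
        """class_stats: dict {class_name: (mean, cov, prior)}; returns the best class."""
        return max(class_stats,
                   key=lambda c: log_likelihood(x, *class_stats[c][:2],
                                                prior=class_stats[c][2]))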
Parallelepiped Classification- This is a very simple supervised classifier that is, in principle,
trained by inspecting histograms of the individual spectral components of the available training
data. The upper and lower significant bounds on the histograms are identified and used to
describe the brightness value range of each component characteristic of that class. Together, the
ranges in all components describe a multidimensional box or parallelepiped (Figure 15.4). A two-
dimensional pattern space might therefore be segmented. If there is a considerable gap between
two parallelepipeds, pixels in those regions will not be classified, whereas in the case of the
maximum likelihood and minimum distance algorithms the pixels are labeled as belonging to
one of the available classes depending on the pre-set threshold.
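A minimal sketch of the parallelepiped decision rule, assuming NumPy and hypothetical per-class
band limits derived from the training histograms, is:

    import numpy as np

    def parallelepiped_classify(pixel, class_bounds, unclassified=0):
        """class_bounds: dict {class_id: (lower, upper)} with per-band limit arrays.

        A pixel is assigned to the first class whose box contains it in every band;
        pixels falling in the gaps between boxes remain unclassified.
        """
        for class_id, (lower, upper) in class_bounds.items():
            if np.all(pixel >= lower) and np.all(pixel <= upper):
                return class_id
        return unclassified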
Unsupervised Classification:
In unsupervised training, clustering algorithms are used to determine the natural spectral
groupings in the data, that is, the means and covariance matrices to be used in the classification.
Once the data are classified, the analyst attempts, a posteriori (after the fact), to assign these
"natural" or spectral classes to the information classes of interest. This may not be easy. Some of
the clusters may be meaningless because they represent mixed classes of earth surface materials.
Hundreds of clustering methods have been developed for a wide variety of purposes apart from
pattern recognition in remote sensing. The clustering algorithm described here operates in a two-
pass mode (i.e. it passes through the registered multispectral data set two times). In the first pass,
the program reads through the data set and sequentially builds clusters (groups of points in
spectral space); a mean vector is associated with each cluster. In the second pass, a minimum-
distance-to-means classification algorithm similar to the one described previously is applied to
the whole data set on a pixel-by-pixel basis, and each pixel is assigned to one of the mean vectors
created in pass one. The first pass, therefore, automatically creates the cluster signatures to be
used by the classifier. To control this pass the analyst typically supplies:
R, a radius in spectral space used to determine when a new cluster should be formed;
C, a spectral space distance parameter used when merging clusters;
N, the number of pixels to be evaluated between each merging of the clusters; and
Cmax, the maximum number of clusters to be identified by the algorithm.
PASS 2: Assignment of Pixels to one of the Cmax Clusters using minimum distance
classification logic:
The final cluster mean data vectors are used in a minimum-distance-to-means classification
algorithm to classify all the pixels in the image into one of the Cmax clusters. The analyst usually
produces a display depicting to which cluster each pixel was assigned. It is then necessary to
evaluate the location of the clusters in the image, label them, if possible, and see if any should be
combined. It is usually necessary to combine some clusters. This is where an intimate knowledge
of the terrain is critical.
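The pass-2 assignment is a straightforward minimum-distance-to-means rule. A minimal sketch,
assuming NumPy (the array shapes are illustrative), is:

    import numpy as np

    def minimum_distance_classify(pixels, cluster_means):
        """Assign each pixel vector to the nearest cluster mean (pass 2).

        pixels: (n_pixels, n_bands); cluster_means: (n_clusters, n_bands)
        Returns the index of the nearest cluster for every pixel.
        """
        # Euclidean distance from every pixel to every cluster mean
        dists = np.linalg.norm(pixels[:, None, :] - cluster_means[None, :, :], axis=2)
        return np.argmin(dists, axis=1)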
Cluster Labeling:
It is usually performed by interactively displaying all the pixels assigned to an individual cluster
on a CRT screen. In this manner, it is possible to identify their location, and spatial association
with the other clusters. This interactive visual analysis, in conjunction with the information
provided in the scatter plot, allows the analyst to group the clusters into information classes.
Combination of Supervised and Unsupervised Training:
Because supervised training does not necessarily result in class signatures that are numerically
separable in feature space, and because unsupervised training does not necessarily result in
classes that are meaningful to the analyst, a combined approach has the potential to meet both
requirements. If time and financial resources permit, this is undoubtedly the best procedure to
follow.
One way to improve the classification is a form of context analysis known as a type I pixel-based
re-classifier, as it combines local information from surrounding pixels to assist the
reclassification of each pixel. The most important requirement of contextual algorithms is that
they should maintain homogeneous areas of irregular shape while identifying and correcting
those isolated pixels which are misclassified. The probabilistic relaxation model provides an
appropriate example of the technique. Rosenfeld et al. (1976) define three models of relaxation: a
discrete model, a fuzzy model and a probabilistic model. Probabilistic models are the most
generally used methodology and have wide literature coverage. The probabilistic relaxation
model attempts to reduce the uncertainty in a twofold manner by:
Examining the local neighborhood of each pixel to produce locally consistent labels.
Using statistical information on the label interrelationships present in the whole image.
The core of the model is the probability updating rule. The neighborhood operator provides
spatial context information over n-local pixels.
Temporal Classification:
Temporal classification exploits time-related features as an additional element for interpretation,
for example in forestry applications. The suitability of a specific index for the estimation of
biomass production is initially assessed through a pilot study using ground radiometers and aerial
scanner data as complements to the satellite data. Indices provide a more efficient estimation of
vegetation amount than the different bands used independently. Selection and acquisition of
remote sensing data are based on the required spatial resolution, and a balance between spatial,
spectral and temporal resolution is usually struck depending upon the scale of the study.
For detailed digital image classification techniques of remote sensing data, students may consult
the concerned books.
CLASSIFICATION ACCURACY ASSESSMENT:
Quantitatively assessing classification accuracy requires the collection of some in situ data or a
priori knowledge about some parts of the terrain, which can then be compared with the remote
sensing derived classification map. Thus, to assess classification accuracy it is necessary to
compare two classification maps: 1) the remote sensing derived map, and 2) the assumed true
map (which may in fact contain some error). The assumed true map may be derived from in situ
investigation or, quite often, from the interpretation of remotely sensed data obtained at a larger
scale or higher resolution.
In the simplest approach, the training-site pixels used to train the classifier are carefully evaluated
on both the classified map derived from the remote sensing data and the assumed true map. If the
training samples are distributed randomly throughout the study area, this evaluation may be
considered representative of the study area. Often, however, their selection is biased by the
analyst's a priori knowledge of where certain land cover types exist in the scene. Because of this
bias, the classification accuracy for pixels within the training sites is generally higher than for the
remainder of the map, because these are the data locations that were used to train the classifier.
Conversely, if other test locations in the study area are identified and correctly labelled prior to
classification, and if these are not used in the training of the classification algorithm, they can be
used to evaluate the accuracy of the classification map. This procedure generally yields a more
credible classification accuracy assessment. However, additional ground truth is required for
these test sites, coupled with the problem of determining how many pixels are necessary in each
test-site class. The method of identifying the location of the test sites prior to classification is also
important, since many statistical tests require that locations be randomly selected (e.g. using a
random number generator to identify unbiased row and column coordinates) so that the analysts
do not bias their selection.
Once the criterion for objectively identifying the location of specific pixels to be compared is
determined, it is necessary to identify the class assigned to each pixel in both the remote sensing
derived map and the assumed true map. These data are tabulated and reported in a contingency
table (error matrix), where overall classification accuracy and misclassification between
categories are identified. It takes the form of an m x m matrix, where m is the number of classes
under investigation. The columns of the matrix represent the assumed true (reference) classes,
while the rows are associated with the remote sensing derived classification. The entries in the
contingency table represent the raw number of pixels encountered in each condition; however,
they may be expressed as percentages if the numbers become too large. One of the most
important characteristics of such matrices is their ability to summarize errors of omission and
commission. These procedures allow quantitative evaluation of the classification accuracy, and
their proper use enhances the credibility of using remote sensing derived land use information.
Table 15.1 Error Matrix resulting from classifying training Set pixels
An error matrix expresses several characteristics about classification performance. For example,
one can study the various classification errors of omission (exclusion) and commission
(inclusion). Note in Table 15.1 that the training set pixels classified into the proper land cover
categories are located along the major diagonal of the error matrix (running from upper left to
lower right). All non-diagonal elements of the matrix represent errors of omission or commission.
Omission errors correspond to non-diagonal column elements (e.g. 16 pixels that should have
been classified as "sand" were omitted from that category). Commission errors are represented by
non-diagonal row elements (e.g. 38 urban pixels plus 79 hay pixels were improperly included in
the corn category). Several other measures, for example the overall accuracy of classification, can
be computed from the error matrix. Overall accuracy is determined by dividing the total number
of correctly classified pixels (the sum of the elements along the major diagonal) by the total
number of reference pixels. Likewise, the accuracies of individual categories can be calculated by
dividing the number of correctly classified pixels in each category by either the total number of
pixels in the corresponding row or column. Producer's accuracy, which indicates how well the
training set pixels of a given cover type are classified, is determined by dividing the number of
correctly classified pixels in each category by the number of training set pixels used for that
category (the column total). The user's accuracy is computed by dividing the number of correctly
classified pixels in each category by the total number of pixels that were classified in that
category (the row total). This figure is a measure of commission error and indicates the
probability that a pixel classified into a given category actually represents that category on the
ground. Note that the error matrix in the table indicates an overall accuracy of 84%. However,
producer's accuracies range from just 51% (urban) to 100% (water) and user's accuracies range
from 72% (sand) to 99% (water). This error matrix is based on training data. If the results are
good, it indicates that the training samples are spectrally separable and the classification works
well in the training areas. This aids in the training set refinement process, but indicates little about
classifier performance elsewhere in the scene.
Kappa coefficient:
‘Discrete multivariate’ techniques have been used to statistically evaluate the accuracy of remote
sensing derived maps and error matrices since 1983 and are widely adopted. These techniques are
appropriate because remotely sensed data are discrete rather than continuous and are binomially
or multinomially distributed rather than normally distributed. Kappa analysis is a discrete
multivariate technique for accuracy assessment; it yields a Khat statistic that is a measure of
agreement or accuracy.
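As an illustration of these computations (assuming NumPy; the row/column convention follows
the description above, with rows as the classified categories and columns as the reference data),
the overall, producer's and user's accuracies and the Khat statistic can be derived from an error
matrix as follows:

    import numpy as np

    def accuracy_metrics(error_matrix):
        """error_matrix: m x m array; rows = classified categories, columns = reference data.

        Returns overall accuracy, producer's and user's accuracies, and Khat.
        """
        m = np.asarray(error_matrix, dtype=float)
        n = m.sum()
        diag = np.diag(m)
        overall = diag.sum() / n
        producers = diag / m.sum(axis=0)      # correct / column (reference) totals
        users = diag / m.sum(axis=1)          # correct / row (classified) totals
        chance = (m.sum(axis=0) * m.sum(axis=1)).sum()
        khat = (n * diag.sum() - chance) / (n ** 2 - chance)
        return overall, producers, users, khat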
The image processing techniques discussed so far deal with deriving end results in the form of
classified maps or enhanced images and associated statistics, which can be correlated with
available, conventionally generated thematic maps. Many procedures in satellite image processing
have already been automated, especially under pattern recognition techniques. However, efforts
are in progress to find newer techniques which consume less time, integrate data from various
sources better, have better automation capabilities and are simpler to use. Some of the techniques
which have made a big impact in present-day data processing are:
Expert Systems/Artificial Intelligence: The spectral classification of remotely sensed data, in the
parametric approach, depends largely on the spectral statistics and the related signatures, and this
spectral knowledge is a limiting factor with respect to the given image. Generation of scene-
independent spectral knowledge is therefore a critical element in the development of expert
systems. Expert systems generally require a knowledge base, a rule interpreter (or rule base) and
a working memory. Spectral-knowledge-based computer systems are designed to avoid the need
for scene-based parameter optimization; they make classification decisions based on knowledge
of the spectral relationships within and between the classes to be categorized, given that these
relationships are stable over a period of time.
Contextual Classification: Context-based classification algorithms are gaining momentum in
present-day image processing. Contextual classification is mainly based on categorization of the
image data with respect to the context of the particular pixel under consideration. The rules used
to accept or reject the classification decision at a given level depend upon the local contextual
interpretation associated with the pixel under consideration.
There are also hybrid methods by which useful results are obtained in image processing.
Temporal images are classified using parametric methods over different seasons. A specific
knowledge base is used to refine the parametric classification with respect to the behaviour of the
features in each particular phenological stage. A different knowledge base is used to compare the
two-date parametric classifications (already refined) to show constant and changing areas over
the period. In addition, stratification information is superimposed from available thematic maps
(e.g. forest boundaries are extracted and classified on the final product). Such hybrid methods not
only improve classification performance but also go a long way towards deriving newer image
processing techniques that achieve better end results.
Future trends of the remote sensing and image processing are generation of data base and
national network for information exchange which finally will lead to the operationalisation of
National Resources Information System (NRIS) in India.
15.4 SUMMARY
Digital image processing techniques are quite common for detailed data analysis and for
producing data outputs that meet the desired results. There are various methods by which the raw
data available from satellites are rectified and enhanced so as to obtain clarity with a high degree
of contrast. By applying different mathematical algorithms during processing and classification,
one can achieve the desired results. Thus, today remote sensing is largely utilized in
environmental management, which frequently requires rapid, accurate, and up-to-date data
collection and its digital image processing.
Digital image processing and classification techniques for satellite remote sensing data are used
to obtain clarity of features and to identify and classify pixels in the data. Classification is usually
performed on multi-channel data sets, and this process assigns each pixel in an image to a
particular class or theme based on the statistical characteristics of the pixel brightness values.
There are a variety of approaches to digital classification; the two generic approaches used most
often are supervised and unsupervised classification.
The objective of digital image classification is to replace visual analysis of the image data with
quantitative techniques for automating the identification of features in a scene. Digital image
classification provides detailed data output results based on the user's requirements. It involves
the analysis of multi-spectral image data and the application of statistically based decision rules
for determining the land cover identity of each pixel in an image. The intent of the classification
process is to categorize all pixels in a digital image into one of several land cover classes or
themes. The classified data may be used to produce thematic maps of the land cover present in an
image. The unit defines and describes sub-topics covering the importance of digital image
classification, spectral signatures, classification training and types of classification, and
classification accuracy assessment using the classification error matrix.
Spectral signatures, their concept and their importance in digital image classification have been
explained. The theme and importance of supervised and unsupervised classification under
classification training are described. Under supervised classification, the maximum likelihood
classification algorithm, the minimum distance classifier and the box (parallelepiped)
classification scheme each play their own role in achieving classification accuracy, based on the
distribution pattern of spectral signatures for each of the assigned classes. Other classification
schemes, such as contextual classification and temporal classification, have also been described,
along with classification accuracy and the error matrix.
In addition to pattern recognition techniques, efforts are in progress to find new techniques which
consume less time, integrate data from various sources better, have better automation capabilities
and are simpler to use. Some of the techniques which have made a big impact in present-day data
processing are i) Expert Systems/Artificial Intelligence and ii) Contextual Classification.
15.5 GLOSSARY
CRT (Cathode Ray Tube) screen- A CRT monitor or computer screen contains millions of
tiny red, green, and blue phosphor dots that glow when struck by an electron beam that travels
across the screen to create a visible image.
Map Accuracy- The accuracy of any map is equal to the error inherent in it as due to the
curvature and changing elevations contained in each map from which the map was made, added
to or corrected by the map preparation techniques used in joining the individual maps.
Relative map accuracy- It is a measure of the accuracy of individual features on a map when
compared to other features on the same map.
Absolute map accuracy- Absolute map accuracy is a measure of the location of features on a
map compared to their true position on the face of the Earth.
Mapping accuracy –Mapping accuracy standards generally are stated as acceptable error and
the proportion of measured features that must meet the criteria. In the case of some plotting and
display devices, accuracy refers to tolerance in the display of graphic features relative to the
original coordinate file.
The level of allowable error of maps- As applied by National Map Accuracy Standards, it is
determined by comparing the positions of well-defined points whose locations or elevations are
shown on the map with corresponding positions as determined by surveys of a higher accuracy.
15.7 REFERENCES
1. Campbell, J. 1987. Introduction to Remote Sensing. Guilford, New York, pp 620.
2. Ekstrom, M.P. 1984. Digital Image Processing Techniques. Academic Press, New York.
3. Hord, R.M. 1982. Digital Image Processing of Remotely Sensed Data, Academic, New
York.
4. Jensen, J.R. 1986. Introductory Digital Image Processing: A Remote Sensing Perspective.
Prentice-Hall, Englewood Cliffs, NJ.
5. Lillesand, T.M. and Kiefer, R.W. 1994. Remote Sensing and Image Interpretation. John
Wiley & Sons, Inc. New York, pp 750.
6. Moik, J.G. 1980. Digital Processing of Remotely Sensed Images, NASA SP-431, Govt.
Printing Office, Washington, D.C.
7. Muller, J.P. (ed.) 1996. Digital Image Processing in Remote Sensing. Taylor & Francis,
London/Philadelphia.
8. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 55, no 9.
9. Photogrammetric Engineering and Remote Sensing. 1989. Special image processing issues
Vol. 56, no 1.
10. Rosenfeld, A. and Kak, A.C. 1982. Digital Picture Processing, Volume 1. Morgan Kaufmann
Publishers Inc./Academic Press, San Francisco, CA. ISBN 9780323139915.
11. Sabins, F.F. 1986. Remote Sensing: Principles and Interpretation, 2nd ed. Freeman, New York.
12. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2865537/
15.8 TERMINAL QUESTIONS
3) What is supervised classification? List the aspects important for extracting useful
information/results from supervised classification.
4) In which conditions does digital image classification using the maximum likelihood algorithm
become useful? Draw the related diagram to explain this statement.
5) Explain diagrammatically the difference between minimum distance and box (parallelepiped)
classification.
6) Describe the concept of contextual classification.
7) The combination of both supervised and unsupervised classification techniques improves the
accuracy limit of digital classification. Elaborate this statement.
8) Describe site-specific classification map accuracy assessment.
9) What do you mean by Kappa coefficient? Highlight the newer tools and techniques for digital
image classification to reduce the classification and mapping errors.
10) Give an example of the preparation of a classification error matrix/confusion matrix.