REMOTE SENSING AND GIS
APPLICATIONS
Introduction to Remote Sensing
(Unit-I)
(CO’s:R20C404.1,PO’s:1,PSO’s:1,2)
M.SAI GANESH
Assistant Professor
Department of Civil Engineering
COURSE LEARNING OBJECTIVES
The course is designed to:
1. Introduce the basic principles of Remote Sensing and GIS techniques.
2. Learn various types of sensors and platforms.
3. Learn concepts of visual and digital image analysis.
4. Understand the principles of spatial analysis.
5. Appreciate applications of RS and GIS to Civil Engineering.
COURSE OUTCOMES
At the end of the course the students will be able to:
1. Be familiar with ground, air and satellite based sensor platforms.
2. Interpret aerial photographs and satellite imageries.
3. Create and input spatial data for GIS applications.
4. Apply RS and GIS concepts in water resources engineering.
SYLLABUS
UNIT – I
Introduction to remote sensing: Basic concepts of remote sensing, electromagnetic radiation, electromagnetic spectrum, interaction with the atmosphere, energy interaction with the earth's surface, characteristics of remote sensing systems.
Sensors and platforms: Introduction, types of sensors,
airborne remote sensing, space-borne remote sensing, image
data characteristics, digital image data formats-band interleaved
by pixel, band interleaved by line, band sequential, IRS,
LANDSAT, SPOT.
UNIT – II
Image analysis: Introduction, elements of visual interpretation,
digital image processing- image preprocessing, image
enhancement, image classification, supervised classification,
unsupervised classification.
UNIT – III
Geographic Information system: Introduction, key
components, application areas of GIS, map projections.
Data entry and preparation: spatial data input, raster data
models, vector data models.
UNIT – IV
Spatial data analysis: Introduction, overlay function-vector
overlay operations, raster overlay operations, arithmetic
operations, comparison and logical operations, conditional
expressions, overlay using a decision table, network analysis-
optimal path finding, network allocation, network tracing.
UNIT – V
RS and GIS applications General: Land cover and land use,
agriculture, forestry, geology, geomorphology, urban
applications.
UNIT – VI
Application to Hydrology and Water Resources: Flood
zoning and mapping, ground water prospects and potential
recharge zones, watershed management.
UNIT-I
INTRODUCTION TO REMOTE SENSING
Remote sensing is the science and art of obtaining information about an object, area or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area or phenomenon under investigation. In the present context, the definition of remote sensing is restricted to the process of acquiring information about any object without physically contacting it in any way, regardless of whether the observer is immediately adjacent to the object or millions of miles away.
It is further required that such sensing can be achieved even in the absence of any matter in the intervening space between the object and the observer. Consequently, the information about the object, area or phenomenon must be available in a form that can be impressed on a carrier capable of propagating through a vacuum. This information carrier, or communication link, is electromagnetic energy. Remote sensing data basically consist of wavelength-intensity information, acquired by collecting the electromagnetic radiation leaving the object at specific wavelengths and measuring its intensity.
Remote sensing of the earth's environment comprises measuring and recording the electromagnetic energy reflected from or emitted by the planet's surface and atmosphere from a vantage point above the surface, and relating such measurements to the nature and distribution of surface materials and atmospheric conditions. Sensors mounted on aircraft or satellite platforms measure the amounts of energy reflected from or emitted by the earth's surface. These measurements are made at a large number of points distributed either along a one-dimensional profile on the ground below the platform or over a two-dimensional area on either side of the ground track of the platform.
The sensors scan the ground below the satellite or aircraft platform and, as the platform moves forward, an image of the earth's surface is built up. The figure below shows how a sensor on board a satellite scans along line AB. Each scan line of a remotely sensed image is a digital or numerical record of radiance measurements made at regular intervals along the line. A set of consecutive scan lines forms an image (Mather, 1987). Two-dimensional image data can be collected by means of two types of imaging sensors, namely nadir-looking or side-looking sensors.
Fig. Sensor on-board satellite scans along line AB. As the platform moves
forward, an image of the swath region is built up.
Fig. Schematic representation of remote sensing technique
In the case of nadir looking, the ground area to either side of the
satellite or aircraft platform is imaged, whereas an area of the
earth's surface lying to one side of satellite track is imaged by
means of side looking sensor. Spatial patterns evident in
remotely sensed images are interpreted in terms of geographical
variation in the nature of material forming the surface of the
earth. Such materials may be vegetation, exposed soil and rock,
or water. These materials are not themselves detected directly by remote sensing; their nature is inferred from the measurements made. A useful characteristic of digital image data is that they can be adjusted to provide estimates of physical properties of the targets, such as radiance or reflectivity.
Broadly, there are two types of sensing systems to record the
information about any target. They are active sensing system
and passive sensing system. An active sensing system generates
and uses its own energy to illuminate the target and records the
reflected energy which carries the information content or
entropy. Synthetic aperture radar (SAR) is one of the best
examples of active sensing systems. These systems operate in the microwave region of the electromagnetic spectrum, which includes radiation with wavelengths longer than 1 mm. They do not rely on the detection of solar or terrestrial emissions, as the solar irradiance in the microwave region is negligible.
Fig. Simplified geometry of a synthetic aperture radar (SAR) system
The active remote sensing operation principles and the general details of the latest imaging radar systems are described in the following chapter. The second type of remote sensing system is the passive system, which depends mainly on solar radiation and operates in the visible and infrared regions of the electromagnetic spectrum. The nature and properties of the target materials can be inferred from the incident electromagnetic energy that is reflected, scattered or emitted by these materials on the earth's surface and recorded by the passive sensor (for example, a camera without flash). A remote sensing system that uses electromagnetic energy is termed electromagnetic remote sensing.
Basic Concepts of Remote Sensing
Electromagnetic Radiation (EMR)
Electromagnetic energy or electromagnetic radiation (EMR) is energy propagated in the form of an advancing interaction between electric and magnetic fields (Sabins, 1978). It travels with the velocity of light. Visible light, ultraviolet rays, infrared rays, heat, radio waves and X-rays are all different forms of electromagnetic energy.
Electromagnetic energy (E) can be expressed either in terms of frequency (f) or wavelength (λ) of radiation as
E = h f = h c / λ ---- (1)
where h is Planck's constant (6.626 × 10⁻³⁴ J·s), c is the speed of light (3 × 10⁸ m/s), f is the frequency expressed in hertz, and λ is the wavelength expressed in micrometres (1 µm = 10⁻⁶ m).
As can be observed from equation (1), shorter wavelengths have higher energy content and longer wavelengths have lower energy content.
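The inverse relationship between wavelength and photon energy in equation (1) can be checked numerically; a minimal sketch (the two wavelengths chosen are illustrative values for visible blue light and thermal infrared):

```python
# Photon energy E = h*f = h*c/lambda
H = 6.626e-34   # Planck's constant, J.s
C = 3.0e8       # speed of light, m/s

def photon_energy(wavelength_um):
    """Energy (in joules) of a photon of the given wavelength in micrometres."""
    wavelength_m = wavelength_um * 1e-6
    return H * C / wavelength_m

blue = photon_energy(0.45)      # visible blue light
thermal = photon_energy(10.0)   # thermal infrared
print(blue > thermal)           # shorter wavelength -> higher energy: True
```

This is why short-wavelength radiation (ultraviolet, X-rays) is far more energetic per photon than long-wavelength radiation (thermal infrared, microwaves).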
Electromagnetic Spectrum
The electromagnetic spectrum may be defined as the ordering of the radiation according to wavelength, frequency or energy. The wavelength, denoted by λ, is the distance between adjacent intensity maxima (for example) of the electromagnetic wave, and consequently it may be expressed in any unit of length. Most commonly, wavelength is expressed in metres (m) or centimetres (cm); microns or micrometres (µ or µm = 10⁻⁴ cm); nanometres (nm = 10⁻⁷ cm); or Angstrom units (Å = 10⁻⁸ cm). The frequency, denoted by ν, is the number of maxima of the electromagnetic wave that pass a fixed point in a given time. Its relationship to wavelength is simply ν = c/λ, where c is the speed of light.
Fig Electromagnetic radiation spectrum
In remote sensing terminology, electromagnetic energy is
generally expressed in terms of wavelength, λ.
All matter reflects, emits or radiates a range of electromagnetic energy, depending upon the material characteristics. In remote sensing, it is the measurement of the electromagnetic radiation reflected or emitted from an object that is used to identify the target and to infer its properties.
Principles of Remote Sensing
Different objects reflect or emit different amounts of energy in
different bands of the electromagnetic spectrum. The amount of
energy reflected or emitted depends on the properties of both the
material and the incident energy (angle of incidence, intensity and
wavelength). Detection and discrimination of objects or surface
features is done through the uniqueness of the reflected or emitted
electromagnetic radiation from the object. A device to detect this reflected or emitted electromagnetic radiation from an object is called a "sensor" (e.g., cameras and scanners). A vehicle used to carry the sensor is called a "platform" (e.g., aircraft and satellites).
The main stages in remote sensing are the following:
A. Emission of electromagnetic radiation
•The Sun or an EMR source located on the platform
B. Transmission of energy from the source to the object
•Absorption and scattering of the EMR while transmission
C. Interaction of EMR with the object and subsequent
reflection and emission
D. Transmission of energy from the object to the sensor
E. Recording of energy by the sensor
•Photographic or non-photographic sensors
F. Transmission of the recorded information to the ground
station
G. Processing of the data into digital or hard copy image
H. Analysis of data
Fig:- Important stages in remote sensing
Fig:- Electromagnetic Remote Sensing Process with overview in GIS
Energy (EMR) Interaction with the Atmosphere
All electromagnetic radiation detected by a remote sensor has to pass through the atmosphere twice: before and after its interaction with the earth's surface. This passage alters the speed, frequency, intensity, spectral distribution and direction of the radiation, as atmospheric scattering and absorption occur (Curran, 1988). These effects are most severe in the visible and infrared wavelengths, the range most crucial in remote sensing.
During the transmission of energy through the atmosphere, light interacts with gases and particulate matter in a process called atmospheric scattering. The two major categories are selective scattering and non-selective scattering. Rayleigh, Mie and Raman scattering are of the selective type. Non-selective scattering is independent of wavelength. It is produced by particles whose radii exceed 10 μm, such as water droplets and ice fragments present in clouds. This type of scattering reduces the contrast of the image. While passing through the atmosphere, electromagnetic radiation is scattered and absorbed by gases and particulates. Besides the major gaseous components like molecular nitrogen and oxygen, other constituents like water vapour, methane, hydrogen, helium and nitrogen compounds play an important role in modifying the incident and reflected radiation. This causes a reduction in image contrast and introduces radiometric errors. Regions of the electromagnetic spectrum in which the atmosphere is transparent are called atmospheric windows. The atmosphere is practically transparent in the visible region of the electromagnetic spectrum, and therefore many satellite-based remote sensing sensors are designed to collect data in this region. Some of the commonly used atmospheric windows are 0.38-0.72 μm (visible), 0.72-3.00 μm (near and middle infrared) and 8.00-14.00 μm (thermal infrared). The figure below shows relative scatter as a function of wavelength over the 0.3-1.0 μm region of the spectrum for various levels of atmospheric haze.
Fig:- Relative scatter for various levels of atmospheric haze
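The atmospheric windows listed above can be encoded as simple wavelength intervals to test whether a given sensor band falls inside one; a sketch using the window limits quoted in the text:

```python
# Atmospheric windows (micrometres) as quoted in the text
WINDOWS = [
    (0.38, 0.72),   # visible
    (0.72, 3.00),   # near and middle infrared
    (8.00, 14.00),  # thermal infrared
]

def in_window(wavelength_um):
    """True if the wavelength lies inside one of the atmospheric windows."""
    return any(lo <= wavelength_um <= hi for lo, hi in WINDOWS)

print(in_window(0.55))   # green light, inside the visible window: True
print(in_window(5.0))    # 5 um lies between windows, largely absorbed: False
```

Sensor bands are normally designed to fall inside such windows so that surface radiance reaches the detector with minimal atmospheric absorption.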
Energy (EMR) Interaction with the Atmosphere
Before radiation used for remote sensing reaches the Earth's
surface it has to travel through some distance of the Earth's
atmosphere. Particles and gases in the atmosphere can affect
the incoming light and radiation. These effects are caused by
the mechanisms of scattering and absorption.
1) Scattering
2) Absorption
1) Scattering
Scattering occurs when particles or large gas molecules present in the
atmosphere interact with and cause the electromagnetic radiation to be
redirected from its original path. How much scattering takes place
depends on several factors including the wavelength of the radiation,
the abundance of particles or gases, and the distance the radiation
travels through the atmosphere.
There are three (3) types of scattering which take place.
a)Rayleigh scattering
b)Mie scattering
c)Nonselective scattering
1) Scattering
a) Rayleigh scattering
Rayleigh scattering occurs when particles are very small compared to the
wavelength of the radiation. These could be particles such as small
specks of dust or nitrogen and oxygen molecules. Rayleigh scattering causes
shorter wavelengths of energy to be scattered much more than longer
wavelengths. Rayleigh scattering is the dominant scattering mechanism in
the upper atmosphere. The fact that the sky appears "blue" during the day is
because of this phenomenon.
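Rayleigh scattering intensity varies approximately as λ⁻⁴, which is why short (blue) wavelengths are scattered far more strongly than long (red) ones; a quick check of the ratio (the wavelengths are nominal values for blue and red light):

```python
def rayleigh_relative(wavelength_um):
    # Rayleigh scattering intensity is proportional to 1/lambda^4
    return wavelength_um ** -4

blue, red = 0.45, 0.70          # nominal blue and red wavelengths, um
ratio = rayleigh_relative(blue) / rayleigh_relative(red)
print(round(ratio, 1))          # blue is scattered roughly 5.9x more than red
```

This wavelength-to-the-fourth dependence accounts for both the blue daytime sky and the reddish colour of sunsets, when the longer atmospheric path removes most of the blue light.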
b) Mie scattering
Mie scattering occurs when the particles are just about the same
size as the wavelength of the radiation. Dust, pollen, smoke and
water vapor are common causes of Mie scattering which tends to
affect longer wavelengths than those affected by Rayleigh
scattering. Mie scattering occurs mostly in the lower portions of
the atmosphere where larger particles are more abundant, and
dominates when cloud conditions are overcast.
c) Nonselective scattering
The final scattering mechanism of importance is
called nonselective scattering. This occurs when the particles are
much larger than the wavelength of the radiation. Water droplets
and large dust particles can cause this type of scattering.
Nonselective scattering gets its name from the fact that all
wavelengths are scattered about equally. This type of scattering
causes fog and clouds to appear white to our eyes because blue,
green, and red light are all scattered in approximately equal
quantities (blue+green+red light = white light).
2) Absorption
Absorption is the other main mechanism at work when
electromagnetic radiation interacts with the atmosphere. In
contrast to scattering, this phenomenon causes molecules in the
atmosphere to absorb energy at various wavelengths.
Ozone, carbon dioxide, and water vapor are the three main
atmospheric constituents which absorb radiation.
Ozone serves to absorb the harmful (to most living things)
ultraviolet radiation from the sun. Without this protective layer in
the atmosphere our skin would burn when exposed to sunlight.
Energy interaction with Earth’s surface materials
When electromagnetic energy is incident on any feature of
earth's surface, such as a water body, various fractions of
energy get reflected, absorbed, and transmitted as shown in
Fig. Applying the principle of conservation of energy, the
relationship can be expressed as:
EI (λ) = ER (λ) + EA (λ) + ET (λ)
Where, EI = Incident energy
ER = Reflected energy
EA = Absorbed energy
and, ET = Transmitted energy
Fig:- Basic Interaction between Electromagnetic Energy and a water body
All energy components are functions of wavelength, λ. In remote sensing, the amount of reflected energy ER(λ) is more important than the absorbed and transmitted energies. Therefore, it is convenient to rearrange these terms as
ER (λ) = EI (λ) - [EA (λ) + ET (λ)]
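The conservation relation can be sketched as simple arithmetic on the three components; the numbers below are purely illustrative fractions of the incident energy, not measured values for a real water body:

```python
def reflected_energy(incident, absorbed, transmitted):
    """ER = EI - (EA + ET), per the conservation-of-energy relation."""
    return incident - (absorbed + transmitted)

# Illustrative: of 100 units of energy incident on a surface,
# 60 are absorbed and 35 transmitted, leaving 5 reflected.
er = reflected_energy(100.0, 60.0, 35.0)
print(er)  # 5.0
```

The proportions among the three components vary with both the material and the wavelength, which is precisely what makes spectral measurements diagnostic of surface type.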
Characteristics of Remote Sensing System
I. Energy Source: The energy sources for real systems are usually non-uniform over various wavelengths and also vary with time and space. This has a major effect on passive remote sensing systems. The spectral distribution of reflected sunlight varies both temporally and spatially. Earth surface materials also emit energy with varying degrees of efficiency. A real remote sensing system therefore needs calibration for source characteristics.
II. The Atmosphere: The atmosphere modifies the spectral distribution and strength of the energy received or emitted. The effect of atmospheric interaction varies with the wavelength involved, the sensor used and the sensing application. Calibration is required to eliminate or compensate for these atmospheric effects.
Fig: Interaction of the electromagnetic energy with the atmosphere
Characteristics of Remote Sensing System
III. The Energy/Matter Interaction at the Earth's Surface: Remote sensing is based on the principle that each and every material reflects or emits energy in a unique, known way. However, spectral signatures may be similar for different material types, which makes differentiation difficult. Also, knowledge of most energy/matter interactions for earth surface features is either at an elementary level or completely unknown.
Characteristics of Remote Sensing System
IV. The Sensor: Real sensors have fixed limits of spectral sensitivity, i.e. they are not sensitive to all wavelengths. They also have limited spatial resolution. Selection of a sensor requires a trade-off between spatial resolution and spectral sensitivity. For example, while photographic systems have very good spatial resolution but poor spectral sensitivity, non-photographic systems have poorer spatial resolution.
Characteristics of Remote Sensing System
V. The Data Handling System: Human intervention is necessary for processing sensor data, even though machines are also included in data handling. This makes the idea of real-time handling almost impossible. The amount of data generated by the sensors far exceeds the data handling capacity.
VI. The Multiple Data Users: The success of any remote sensing mission rests with the user, who ultimately transforms the data into information. This is possible only if the user understands the problem thoroughly and has a wide knowledge of the data-generation process.
SENSORS AND PLATFORMS
Introduction: Remote sensing of the surface of the earth has a long history, dating from the use of cameras carried by balloons and pigeons in the nineteenth and early twentieth centuries. The term 'remote sensing' is used to refer to the aircraft-mounted systems developed for military purposes during the early part of the 20th century. Airborne camera systems are still a very important source of remotely sensed data (Lillesand and Kiefer, 1994). Although photographic imaging systems have many uses, this chapter is concerned with image data collected by satellite sensing systems, which ultimately generate digital image products.
Space-borne sensors are currently used to assist in scientific and socioeconomic activities like weather prediction, crop monitoring, mineral exploration, wasteland mapping, cyclone warning, water resources management and pollution detection. All this has happened in a short period of time. The quality of analysis of remote sensing data, and the varied types of applications to which the science of remote sensing is being put, are increasing enormously as new and improved spacecraft are placed into the earth's orbit. The primary objectives, characteristics and sensor capabilities of the plethora of remote sensing satellites circling this planet are discussed in this chapter.
An attempt is made to classify the satellites into three types,
namely, earth resources satellites, meteorological satellites, and
satellites carrying microwave sensors. This classification is not
rigid. For instance, most of the meteorological satellites are also
capable of sensing the resources of the earth. Before turning to the
individual satellite's description and the corresponding sensors
and capabilities, a brief overview of satellite system parameters is
presented in the following paragraphs.
TYPES OF SENSORS
Sensors are mainly divided into two types:
1) Active sensors
2) Passive sensors
1) Active sensor:
These sensors detect reflected responses from objects which are irradiated by artificially generated energy sources.
Ex: Radar, camera with flash light
2) Passive sensor:
These sensors detect reflected EMR from natural sources.
Ex: cameras without flash light (depending on solar energy), and most optical RS sensors.
SENSOR PLATFORMS
There are mainly three types of platforms
a) Ground Based Remote Sensing
b) Air borne Remote Sensing
c) Space borne Remote Sensing
a) Ground Based Remote Sensing:
• This remote sensing technique is used to record detailed information about the surface, which is compared with information collected from aircraft or satellite sensors.
• In some cases this can be used to better characterize the target that is being imaged by these sensors, making it possible to better understand the information in the imagery.
• Sensors may be placed on ladders, scaffolding, tall buildings, cranes, etc.
SENSOR PLATFORMS
b) Air borne Remote Sensing:
• If the remotely sensed data are collected from platforms within the atmosphere, it is called aerial or airborne remote sensing.
• Balloons, kites and aircraft were the early aerial platforms.
• Currently, aircraft are the main aerial platforms.
• Airborne remote sensing may be more susceptible to imaging geometry problems, but it is flexible in its capability to collect data from different look angles and look directions.
• By acquiring imagery from more than one look direction, these effects may be reduced.
• It is susceptible to variations in velocity and other motions of the aircraft, as well as to environmental (weather) conditions.
SENSOR PLATFORMS
• In order to avoid geometrical positioning errors due to random variations in the motion of the aircraft, the radar system must use sophisticated navigation and positioning equipment and advanced image processing to compensate for these variations.
Advantages
• Surveys can be scheduled for a specific purpose, time and location, without waiting for a repeat pass.
• Higher image resolution than space-borne platforms.
• Lower atmospheric (environmental) loss than space-borne platforms.
SENSOR PLATFORMS
c) Space borne Remote Sensing
• Space-borne remote sensing is mainly conducted from satellites, and is also called satellite remote sensing.
• Satellites are objects which revolve around another object, e.g. the moon is a natural satellite of the earth.
• A space-borne remote sensing system does not have the ability to collect data anywhere at any time, or such a degree of flexibility, as its viewing geometry and data acquisition schedule are controlled by the pattern of its orbit.
• However, space-borne remote sensing does have the advantage of being able to collect imagery over a large area more quickly than airborne remote sensing, and it provides consistent viewing geometry.
• The frequency of coverage may not be as high as that of airborne platforms, but depending on the orbit parameters, the viewing geometry flexibility and the geographic area of interest, a space-borne system may have a revisit period as short as one day.
• Space-borne systems are capable of avoiding many imaging geometry problems since they operate at altitudes up to 100 times higher than airborne platforms, while providing comparable swath widths.
SENSOR PLATFORMS
Disadvantages
• Space-borne remote sensing suffers from the inability of many sensors to obtain data through cloud cover, and from the relatively low spatial resolution achievable with many sensing instruments.
• It also creates large quantities of data, which typically require extensive processing, storage and analysis.
What is an Image?
In a broad sense, an image is a picture or photograph. Images are the most common and convenient means of storing, conveying and transmitting information. They concisely convey information about positions, sizes and interrelationships between objects, and portray spatial information that we can recognize as objects.
An image is usually a summary of the information in the object it represents. The information in an image is presented in tones and colours. In a strict sense, photographs are images recorded on photographic film and converted into paper form by chemical processing of the film, whereas an image is any pictorial representation of information. So it can be said that all photographs are images, but not all images are photographs.
What is a Digital Image?
When a paper photograph is scanned through a scanner and stored in a computer, it becomes a digital image, as it has been converted into digital mode. When you see a paper photograph and its digital version in a computer, you do not see any difference. In digital mode, photographic information is stored as an array of discrete numbers. Each number corresponds to a discrete dot, i.e. one image element. This image element is the smallest part of an image and is generally known as a picture element, pixel or pel.
These numbers vary from place to place within the image depending upon the tonal variation. The number of pixels in an image depends upon the image size (length and width of the image).
In any image, bright areas are represented by higher values whereas dark areas are represented by lower values. These values are known as digital numbers. We now know that a digital image is composed of a finite number of pixels, each of which has a particular location and value. In other words, when the spatial coordinates (x, y) and the amplitude values of f are all finite, discrete quantities, both in spatial coordinates and in brightness, the image is called a digital image.
Fig- A digital image (left) and its corresponding values (centre). Note the variation in the brightness
and the change in the corresponding digital numbers. Highlighted block in the centre figure shows
one pixel. The figure at right shows the range of values corresponding to the brightness
Fig- Arrangement of rows and columns of an image of size 4 × 4 (4 rows and 4columns). Left figure shows
the numerical values in the image and the table at right shows the representation of pixel location for an
image of size 4 × 4. You can observe that at location (1, 4), i.e. row 1 and column 4, the pixel value is 24
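The 4 × 4 example above can be reproduced as a small array of digital numbers; only the value 24 at row 1, column 4 comes from the figure, the remaining values are made up for illustration. Note that the figure uses 1-based (row, column) locations, so a helper converts them to 0-based list indices:

```python
# A 4x4 digital image stored as rows of pixel values (digital numbers).
# Value 24 sits at row 1, column 4 as in the figure; all other values
# are purely illustrative.
image = [
    [10, 12, 15, 24],
    [11, 13, 18, 22],
    [ 9, 14, 20, 25],
    [ 8, 16, 21, 27],
]

def pixel(img, row, col):
    """Return the digital number at a 1-based (row, column) location."""
    return img[row - 1][col - 1]

print(pixel(image, 1, 4))   # -> 24
```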
TYPES OF DIGITAL IMAGE
• Digital images can be classified into several types based on their form or method of generation. The actual information stored in digital image data is the brightness information in each spectral band. In general, digital images are of the following three types:
1) Black and White or Binary image
2) Grey Scale or Monochrome image
3) Colour or RGB image
1. Black and White or Binary image
Pixels in this type of image show only two colours, black or white, and hence each pixel is represented by only two possible values: 0 for black and 1 for white. Since a black and white image can be described in terms of binary values, such images are also known as binary images, or bi-level or two-level images.
This also means that binary images require only a single bit (0 or 1) to represent each pixel, so storing these kinds of images requires only one bit per pixel. The inability to represent intermediate shades of gray limits the usefulness of binary images in dealing with remote sensing or photographic images.
Fig- Representation of (1) black and white and (2) gray scale images. Note the range of values for
the highlighted boxes in the two types of images
2. Grey Scale or Monochrome Image
Pixels in this type of image show white and black as well as the different shades of gray between the two, as shown in the figure. Generally, black is represented by the value 0, white by 255, and the intermediate gray shades by values in between. This range means that each pixel can be represented by eight bits, i.e. exactly one byte. In other words, storing a gray scale image requires 8 bits per pixel.
3. Color or RGB Image
Each pixel in this type of image has a particular colour which is described by the amounts of red, green and blue in it (Fig. 10.5). Colour images are constructed by stacking three gray scale images, where each image (i.e. band) corresponds to a different colour; hence there are three values (one each for the red, green and blue components) corresponding to each pixel. RGB (Red, Green and Blue) is the commonly used colour space to visualize colour images. RGB are the primary colours for mixing light and are called additive primary colours. Any other colour can be created by mixing the correct amounts of red, green and blue light. If each of these three components has a range of 0-255, there can be a total of 256³ different possible colours in a colour image. Storing a colour image requires 24 bits per pixel.
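The storage requirements quoted for the three image types (1, 8 and 24 bits per pixel) can be compared directly; a sketch for an illustrative 1000 × 1000 pixel image:

```python
# Bits per pixel for the three image types described in the text
BITS_PER_PIXEL = {"binary": 1, "grayscale": 8, "rgb": 24}

def storage_bytes(rows, cols, image_type):
    """Uncompressed storage for an image of the given type, in bytes."""
    bits = rows * cols * BITS_PER_PIXEL[image_type]
    return bits // 8

for kind in ("binary", "grayscale", "rgb"):
    print(kind, storage_bytes(1000, 1000, kind))

print(256 ** 3)   # number of possible RGB colours: 16777216
```

So the same scene stored as an RGB image occupies 24 times the space of its binary version, which is one reason remote sensing archives grow so quickly.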
Fig- Representation of a colour image. Note the range of values of its three components, i.e. red,
green and blue
CHARACTERISTICS OF DIGITAL IMAGE
There are four basic measures of digital image characteristics:
1. Spatial resolution - it refers to variations in the reflectance/emittance determined by the shape, size and texture of the target.
2. Spectral resolution - it refers to changes in the reflectance or emittance as a function of wavelength.
3. Temporal resolution - it involves diurnal and/or seasonal changes in reflectance or emittance.
4. Radiometric resolution - it refers to the smallest difference in radiance (signal strength) that the sensor can distinguish, i.e. the number of discrete brightness levels that can be recorded.
1. Spatial Resolution
There are different definitions of spatial resolution, but in a general and practical sense it can be referred to as the size of each pixel. It is commonly measured in units of distance, i.e. cm or m. In other words, spatial resolution is a measure of the sensor's ability to capture closely spaced objects on the ground and to discriminate them as separate objects. The spatial resolution of data depends on the altitude of the platform used to record the data and on sensor parameters. The relationship of spatial resolution with altitude can be understood with the following example: compare what an astronaut on board a space shuttle looking at the Earth can see with what can be seen from an airplane.
Spatial Resolution
Fig- Spatial variations of remote sensing data. Note the variations in resolution from 1 km till 1 m,
in the series of photographs. The photograph taken from 1 km shows lesser details as compared to
that at 1m
The astronaut might see a whole province or country at a single glance
but will not be able to distinguish individual houses. However, he/she
will be able to see individual houses or vehicles while flying over a
city or town. By comparing these two instances you will have a better
understanding of the concept of spatial resolution.
Suppose you are looking at a forested hillside from a certain
distance. What you see is the presence of the continuous forest;
however from a great distance you do not see individual trees. As you
go closer, eventually the trees, which may differ in size, shape, and
species, become distinct as individuals. As you draw much nearer,
you start to see individual leaves.
Fig- Understanding concept of spatial resolution
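The statement that spatial resolution depends on platform altitude and sensor parameters can be illustrated with a commonly used simplified relation for a frame sensor: ground sample distance (GSD) = altitude × detector pitch ÷ focal length. This formula and the numbers below are illustrative assumptions, not taken from the slides or any specific satellite:

```python
# Hypothetical sketch of the altitude-resolution relationship.
# GSD = altitude * detector pixel pitch / focal length (a simplified
# relation for frame sensors; all numbers below are made up).

def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Approximate size on the ground covered by one detector element."""
    return altitude_m * pixel_pitch_m / focal_length_m

# A sensor at 700 km altitude, 10 micrometre detector pitch, 7 m focal length:
gsd = ground_sample_distance(700_000, 10e-6, 7.0)
print(gsd)  # approximately 1.0 -> each pixel covers about 1 m on the ground
```

Halving the altitude (or doubling the focal length) halves the GSD, which matches the astronaut-versus-airplane comparison in the text.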
2. Spectral Resolution
We all know that the Sun is a major source of the electromagnetic radiation
used in optical remote sensing. Different materials on the Earth's surface
exhibit different spectral reflectances and emissivities. These differences
(variations) in reflectance and emissivity are used to distinguish features.
However, the spectral signature does not give continuous spectral
information; rather, it gives spectral information at some selected
wavelengths. These wavelength regions of observation are called spectral
bands. A spectral band is defined in terms of a 'central wavelength' and a
'band width'. The number and dimension of the specific wavelength intervals
in the electromagnetic spectrum to which a remote sensing instrument is
sensitive is called its spectral resolution.
Fig- Spectral variations of remote sensing data
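The definition of a spectral band by its central wavelength and band width can be sketched as below. The wavelengths are hypothetical example values (a near-infrared band centred at 850 nm, 100 nm wide), not the bands of any particular sensor:

```python
# Sketch: a spectral band described by central wavelength and band width,
# as defined in the text. Values are in nanometres and purely illustrative.

def band_edges(central_nm, width_nm):
    """Return the lower and upper wavelength limits of a spectral band."""
    return central_nm - width_nm // 2, central_nm + width_nm // 2

# A hypothetical near-infrared band: centre 850 nm, width 100 nm.
print(band_edges(850, 100))  # (800, 900)
```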
3. Radiometric Resolution
As the arrangement of pixels describes the spatial structure of an image, the
radiometric characteristics describe the actual information content in an
image. The information content in an image is determined by the
number of digital levels (quantisation levels) used to express the data
collected by the sensor. In other words, a definite number of discrete
quantisation levels are used to record (digitise) the intensity of the flow of
radiation (radiant flux) reflected or emitted from ground features. The
smallest change in intensity level that can be detected by a sensing
system is called its radiometric resolution. The quantisation levels are
expressed as n binary bits, such as 7 bit, 8 bit, 10 bit, etc. 8-bit
digitisation implies 2⁸ or 256 discrete levels (i.e. 0-255). Similarly, 7-bit
digitisation implies 2⁷ or 128 discrete levels (i.e. 0-127).
Fig- Images showing the effect of degrading the radiometric resolution
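The quantisation arithmetic above (n bits gives 2ⁿ levels, numbered 0 to 2ⁿ − 1) can be tabulated directly:

```python
# n-bit digitisation gives 2**n discrete quantisation levels,
# numbered 0 to 2**n - 1, as stated in the text.

for bits in (7, 8, 10):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels (0-{levels - 1})")
# 7-bit: 128 levels (0-127)
# 8-bit: 256 levels (0-255)
# 10-bit: 1024 levels (0-1023)
```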
4. Temporal Resolution
In addition to spatial, spectral and radiometric resolution, it is also important to consider
the concept of temporal resolution in a remote sensing system. One of the
advantages of remote sensing is its ability to observe a part of the Earth (scene) at
regular intervals. The interval at which a given scene can be imaged is called
temporal resolution. Temporal resolution is usually expressed in days. For instance,
IRS-1A has a temporal resolution of 22 days, meaning it can acquire an image
of a particular area at 22-day intervals. Low temporal resolution refers to
infrequent repeat coverage whereas high temporal resolution refers to frequent
repeat coverage. Temporal resolution is useful for agricultural applications or natural
disasters like flooding, when you would like to revisit the same location
every few days. The requirement of temporal resolution varies with different
applications. For example, to monitor agricultural activity, image interval of 10
days would be required, but intervals of one year would be appropriate to monitor
urban growth patterns.
Fig- Temporal variations of remote sensing data used to monitor changes in agriculture, showing
crop conditions in different months
Fig- Showing the importance of temporal resolution. View of the flood situation at Brisbane,
Australia (a) pre flood and (b) post flood
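A repeat cycle like the 22 days quoted for IRS-1A simply means successive acquisitions of the same scene fall 22 days apart. The starting date below is an arbitrary illustration, not an actual acquisition date:

```python
# Sketch: acquisition dates for a scene under a 22-day repeat cycle
# (as quoted for IRS-1A). The first date is hypothetical.

from datetime import date, timedelta

repeat_cycle = timedelta(days=22)
first_pass = date(2024, 1, 1)          # illustrative first acquisition

passes = [first_pass + i * repeat_cycle for i in range(4)]
for d in passes:
    print(d.isoformat())
# 2024-01-01
# 2024-01-23
# 2024-02-14
# 2024-03-07
```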
WHAT IS DATA FORMAT?
Remote sensing data or image data is a digital picture or representation of
various objects on the Earth’s surface. The picture is a systematic arrangement
of raster cells. Each of the raster cells, depending on the intensity of radiation
received, contains a digital number between a certain range, for example, 0-
127 (7 bit image) or 0-255 (8 bit image) and so on, depending upon
radiometric processing capacity of the detector system of the sensor. Each
number (of each cell) in an image file is a data file value, sometimes also
called pixel (abbreviation of picture element), and data file value is the
measured brightness value of the pixel at a specific wavelength. Raster image
data are laid out in a grid format similar to squares on a checkerboard. These
raster cells are assigned grey shades, from the darkest shade for digital
number zero to the brightest white shade for digital number 127 or 255 or
511 and so on; comparative grades of grey are assigned in between, for
digital numbers 1-126 or 1-254 or 1-510 and so on. Image data format can
be defined as the sequential arrangement of pixels, representing a digital
image in a computer-compatible storage medium, such as a compact disk
(CDs/DVDs).
This is where the concept of image data format comes in, with the
question of how to arrange these pixels to achieve the optimum level of
desired processing and display. Consider the following example: a data file
in JPG format is a compressed file of small size, say 10 MB, whereas the
same file in TIFF format is uncompressed and its size can go up to 100 MB.
The practical consequence is that data transfer is easier with a small file,
like a JPG, than with an uncompressed TIFF.
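The storage arithmetic behind such file sizes follows directly from the text: an uncompressed raster needs rows × columns × bands × (bits per band ÷ 8) bytes. The image dimensions below are made-up examples, not a specific sensor product:

```python
# Sketch of uncompressed raster size: rows * cols * bands * bytes per band.
# Dimensions are illustrative only.

def raw_size_bytes(rows, cols, bands, bits_per_band):
    """Bytes needed to store an uncompressed raster image."""
    return rows * cols * bands * bits_per_band // 8

# A hypothetical 6000 x 6000 pixel, 3-band, 8-bit image:
size = raw_size_bytes(6000, 6000, 3, 8)
print(size)                             # 108000000 bytes
print(size // (1024 * 1024), "MB")      # 102 MB
```

Compression (as in JPG) reduces this figure; the raw calculation gives the uncompressed (TIFF-like) upper bound.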
TYPES OF DATA FORMATS
Basically, there are three types of data formats:
1.Band Interleaved by Pixel (BIP),
2.Band Interleaved by Line (BIL), and
3.Band Sequential (BSQ)
1.Band Interleaved by Pixel (BIP)
The data storage sequence in BIP format is shown in the figure for an image of
size 3×3 (i.e. 3 rows and 3 columns) having three bands. Band, row and column
(pixel) are generally represented as B, R and P, respectively; B1, R1 and P1
represent band 1, row 1 and column (pixel) 1, respectively. In this format, the
first pixel of row 1 of band 1 is stored first, then the first pixel of row 1 of
band 2, and then the first pixel of row 1 of band 3. These are followed by the
second pixel of row 1 of band 1, then the second pixel of row 1 of band 2, then
the second pixel of row 1 of band 3, and likewise.
Fig. The data storage sequence in BIP format
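The BIP ordering above can be sketched for the same 3×3, 3-band example. The pixel values are synthetic: the value 100·b + 10·r + p simply encodes band b, row r, pixel p so the ordering is visible:

```python
# BIP layout sketch for a 3-band, 3x3 image. Value 100*b + 10*r + p
# encodes band b, row r, pixel p (all values synthetic).

bands, rows, cols = 3, 3, 3
image = [[[100 * b + 10 * r + p for p in range(1, cols + 1)]
          for r in range(1, rows + 1)]
         for b in range(1, bands + 1)]   # indexed image[band][row][pixel]

# BIP: for each row, for each pixel, write that pixel from every band.
bip = [image[b][r][p]
       for r in range(rows)
       for p in range(cols)
       for b in range(bands)]

print(bip[:6])  # [111, 211, 311, 112, 212, 312]
```

The first six values show pixel 1 of row 1 from bands 1, 2, 3, then pixel 2 of row 1 from bands 1, 2, 3, exactly as described.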
2. Band Interleaved by Line (BIL)
The data storage sequence in BIL format is shown in the figure for a
three-band image of size 3×3 (i.e. 3 rows and 3 columns). B and
R represent band and row; B1 and R1 represent band 1 and row
1. In this format, all the pixels of row 1 of band 1 are stored in
sequence first, then all the pixels of row 1 of band 2, and then the
pixels of row 1 of band 3. These are followed by all the
pixels of row 2 of band 1, then all the pixels of row 2 of
band 2, then all the pixels of row 2 of band 3, and likewise.
You should note that both
the BIP and BIL formats store data/pixels one line (row) at a time.
Fig. Data storage sequence in BIL format
3. Band Sequential (BSQ)
BSQ format stores each band of data as a separate file. The
arrangement sequence of data in each file is shown in the figure for a
three-band image of size 3×3 (i.e. 3 rows and 3 columns). B and
R, respectively, represent band and row; B1 and R1 represent
band 1 and row 1, respectively. In this format, all the pixels of
band 1 are stored in sequence first, followed by all the pixels of
band 2 and then all the pixels of band 3.
Fig. Data storage sequence in BSQ format
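The BIL and BSQ orderings can be contrasted for the same 3×3, 3-band example, again with synthetic values 100·b + 10·r + p encoding band b, row r, pixel p:

```python
# BIL and BSQ layout sketch for the same 3-band, 3x3 image.
# Value 100*b + 10*r + p encodes band b, row r, pixel p (synthetic).

bands, rows, cols = 3, 3, 3
image = [[[100 * b + 10 * r + p for p in range(1, cols + 1)]
          for r in range(1, rows + 1)]
         for b in range(1, bands + 1)]   # indexed image[band][row][pixel]

# BIL: for each row, write that whole row from band 1, then band 2, band 3.
bil = [image[b][r][p]
       for r in range(rows)
       for b in range(bands)
       for p in range(cols)]

# BSQ: write all of band 1, then all of band 2, then all of band 3
# (in practice each band goes to a separate file).
bsq = [image[b][r][p]
       for b in range(bands)
       for r in range(rows)
       for p in range(cols)]

print(bil[:6])  # [111, 112, 113, 211, 212, 213]
print(bsq[:6])  # [111, 112, 113, 121, 122, 123]
```

In BIL the first six values are row 1 of band 1 followed by row 1 of band 2; in BSQ they are rows 1 and 2 of band 1, since each band is written out completely before the next.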
THANK YOU