
Unit Wise Questions & Answers

UNIT – 1 (Remote Sensing)

Introduction:
Remote sensing is an art and science of obtaining information about an object or feature without
physically coming in contact with that object or feature. Humans apply remote sensing in their day-to-day
business, through vision, hearing and sense of smell. The data collected can be of many forms: variations
in acoustic wave distributions (e.g., sonar), variations in force distributions (e.g., gravity meter),
variations in electromagnetic energy distributions (e.g., eye) etc. These remotely collected data through
various sensors may be analyzed to obtain information about the objects or features under investigation.
In this course we will deal with remote sensing through electromagnetic energy sensors only.
Thus, remote sensing is the process of inferring surface parameters from measurements of the
electromagnetic radiation (EMR) from the Earth’s surface. This EMR can either be reflected or emitted
from the Earth’s surface. In other words, remote sensing is detecting and measuring electromagnetic
(EM) energy emanating or reflected from distant objects made of various materials, so that we can
identify and categorize these objects by class or type, substance and spatial distribution [American
Society of Photogrammetry, 1975].
Remote sensing provides a means of observing large areas at fine spatial resolution and high temporal frequency. It
finds extensive applications in civil engineering including watershed studies, hydrological states and
fluxes simulation, hydrological modeling, disaster management services such as flood and drought
warning and monitoring, damage assessment in case of natural calamities, environmental monitoring,
urban planning etc.

‘Remote’ means far away, and ‘sensing’ means observing or acquiring some information.
Of our five senses, we use three as remote sensors
1. Watch a cricket match from the stadium (sense of sight)
2. Smell freshly cooked curry in the oven (sense of smell)
3. Hear a telephone ring (sense of hearing)
Then what are our other two senses and why are they not used “remotely”?
4. Try to feel smoothness of a desktop (Sense of touch)
5. Eat a mango to check the sweetness (sense of taste)
In the last two cases, we actually touch the object with our sense organs to collect the information
about the object.
Distance of Remote sensing:
Remote sensing occurs at a distance from the object or area of interest. Interestingly, there is no
clearly defined limit to this distance. It could be 1 m, 1,000 m, or greater than 1 million meters from the
object or area of interest. In fact, virtually all astronomy is based on RS. Many of the most innovative RS
systems, and visual and Digital image processing methods were originally developed for RS of
extraterrestrial landscapes such as moon, Mars, Saturn, Jupiter, etc.
RS techniques may also be used to analyse 'inner space'. For example, an electron microscope and
its associated hardware may be used to obtain photographs of extremely small objects on the skin, in the
eye, etc. Similarly, an X-ray device is an RS instrument used to examine bones and organs inside the body. In
such cases, the distance is less than 1 m.

1. Remote Sensing Data Collection:

Data collection may take place directly in the field, or at some remote distance from the
object or area of interest. Data that are collected directly in the field (the study site or the ground for
which data are required) are termed in situ data, and data collected remotely are called
remote sensing data.

 In Situ Data: In situ data are measurements made at the actual location of the object or
area of interest. For example, when collecting remote sensing data, in situ measurements
are often used to verify that the remotely sensed values correspond to conditions at the
actual location.
Transducers are the devices that convert variations in physical quantities (such as
pressure or brightness) into electrical signals, or vice versa. Many different transducers are
available. A scientist could use a thermometer to measure the temperature of the air, soil, or
water; a spectrometer to measure spectral reflectance; an anemometer to measure the speed of the
wind; or a psychrometer to measure the humidity of the air. The data recorded by the transducer
may be an analog signal with voltage variations related to the intensity of the property being
measured. Often these analog signals are transformed into digital values using analog to digital
conversion procedures.

 Remotely Sensed Data:


Although most remote sensors collect their data using the basic principles described
above, the format and quality of the resultant data varies widely. These variations are
dependent upon the resolution of the sensor. There are four types of resolution that affect the
quality and nature of the data a sensor collects: radiometric, spatial, spectral and temporal.
Radiometric resolution refers to the sensitivity of the sensor to incoming radiance (i.e., How
much change in radiance must there be on the sensor before a change in recorded brightness
value takes place?). This sensitivity to different signal levels will determine the total number of
values that can be generated by the sensor (Jensen, 1996).
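As a quick illustration of the point above, the number of recordable brightness values follows directly from the radiometric resolution expressed in bits. The short Python sketch below is only illustrative; the 6-, 8- and 11-bit depths are example values, not tied to any particular sensor.

```python
def brightness_levels(bits):
    """Number of distinct digital values an n-bit detector can record (2**n)."""
    return 2 ** bits

# Illustrative bit depths only, not tied to any particular sensor
for bits in (6, 8, 11):
    print(f"{bits}-bit radiometric resolution -> {brightness_levels(bits)} brightness values")
```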
Spatial resolution is a measurement of the minimum distance between two objects that
will allow them to be differentiated from one another in an image (Sabins, 1978; Jensen, 1996).
This is a function of sensor altitude, detector size, focal length and system configuration. For aerial
photography the spatial resolution is usually measured in resolvable line pairs per millimeter on
the image. For other sensors it is given as the dimensions, in meters, of the ground area that
falls within the instantaneous field of view of a single detector within an array - or pixel size
(Logicon, 1997). Figure 1-1.4 is a graphic representation showing the differences in spatial
resolution among some well known sensors.
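For the pixel-size idea described above, a rough sketch is given below. It assumes the common small-angle approximation that the ground dimension of a pixel equals the sensor altitude multiplied by the instantaneous field of view (IFOV, in radians); the altitude and IFOV values are hypothetical, chosen only for illustration.

```python
def ground_pixel_size(altitude_m, ifov_rad):
    """Approximate ground dimension seen by one detector: altitude x IFOV (small-angle approximation)."""
    return altitude_m * ifov_rad

# Hypothetical numbers: a sensor at 700 km altitude with a 0.1 milliradian IFOV
altitude_m = 700_000.0
ifov_rad = 0.1e-3
print(f"Approximate pixel size on the ground: {ground_pixel_size(altitude_m, ifov_rad):.0f} m")  # ~70 m
```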
2. What is the necessity and importance of Remote Sensing?
With a growing population and rising standards of living, pressure on natural resources has been
increasing day by day. It therefore becomes necessary to manage the available resources
effectively and economically. This requires periodic preparation of accurate inventories of natural
resources, both renewable and non-renewable. Remote sensing achieves this very
effectively, since it provides multispectral, multi-temporal data useful for resource inventory,
monitoring and management.
Remote sensing makes it possible to collect data on dangerous or inaccessible areas.
Remote sensing applications include monitoring deforestation in areas such as the Amazon
Basin, the effects of climate change on glaciers and Arctic and Antarctic regions, and depth
sounding of coastal and ocean depths.
Military collection during the cold war made use of stand-off collection of data about
dangerous border areas.
Remote sensing also replaces costly and slow data collection on the ground, ensuring in the
process that areas or objects are not disturbed.

3. Define Remote Sensing and its classification explains in Detail?


"Remote Sensing is the science and art of obtaining information about an object, area, or
phenomenon through the analysis of data acquired by a device that is not in contact with the
object, area, or phenomenon under investigation."

Types of Remote Sensing:


1. Classification Based on Platform:
 Ground based
 Air Based
 Space Based
2. Classification Based on Energy Source:
 Active Remote Sensing and
 Passive Remote Sensing

1-Passive sensors detect natural radiation that is emitted or reflected by the object or surrounding
area being observed. Reflected sunlight is the most common source of radiation measured by
passive sensors. Examples of passive remote sensors include film photography, infrared sensors, and
radiometers.
2-Active remote sensing, on the other hand, emits energy in order to scan objects and areas
whereupon a sensor then detects and measures the radiation that is reflected or backscattered from
the target. RADAR is an example of active remote sensing where the time delay between
emission and return is measured, establishing the location, height, speed and direction of an
object.

3. Classification Based on Image Media:


Reflected or emitted energy from the terrain may be imaged as either
 Photographic images
 Digital Images

4. Classification Based on the Regions of Electromagnetic spectrum


 Optical Remote sensing (0.3 µm to 3 µm)
 Photographic Remote sensing (0.3 µm to 0.9 µm)
 Thermal Remote Sensing (3 µm to 1 mm)
 Microwave Remote Sensing: conducted within the microwave region (1 mm to
1 m). Passive microwave sensing uses energy naturally emitted from the earth's surface, while
active microwave sensing transmits its own energy and records what is returned to the sensor.
'Backscatter' is the term given to reflections in the direction opposite to the incident
active microwaves; RADAR is the typical example.

5. Classification Based on number of Bands:


 Panchromatic Remote Sensing (single band)
 Multi-Spectral Remote Sensing (multi-band)
 Hyper-spectral Remote Sensing (dozens or hundreds of narrow (as little as 0.01 µm
in width for each), adjacent spectral bands)
Classification based on the number of Bands
 Panchromatic RS: The collection of reflected, emitted, or backscattered energy from an object
or area of interest in a single band of the electromagnetic spectrum. (visible -0.4 to 0.7 μm and
wider region 0.3 to 0.9 μm)
 Multi-spectral RS: the collection of the reflected, emitted, or backscattered energy from an
object or area of interest in multiple bands of the EMR spectrum, spanning the optical, thermal, as well as microwave
regions. Sensors and imaging techniques are different for different regions.
 Hyper-spectral RS: a more recent advance in RS, currently coming into its own as a
powerful and versatile means of continuously sampling narrow intervals of the spectrum.
Hyper-spectral sensors collect image data simultaneously in dozens or hundreds of narrow (as little as 0.01 μm in width
for each), adjacent spectral bands, up to 210 bands.

4. Difference between Active Remote Sensing and Passive Remote Sensing?


Depending on the source of electromagnetic energy, remote sensing can be classified as
passive or active remote sensing. In the case of passive remote sensing, the source of energy is one that is
naturally available, such as the Sun. Most remote sensing systems work in passive mode
using solar energy as the source of EMR. Solar energy reflected by the targets in specific
wavelength bands is recorded using sensors onboard air-borne or space-borne platforms. In
order to ensure ample signal strength received at the sensor, wavelength / energy bands capable of
traversing through the atmosphere, without significant loss through atmospheric interactions, are
generally used in remote sensing.
Any object which is at a temperature above 0 K (absolute zero) emits some radiation, the total amount of which is
approximately proportional to the fourth power of the absolute temperature of the object. Thus the Earth
also emits some radiation, since its ambient temperature is about 300 K. Passive sensors can also
be used to measure this emitted radiance from the Earth, but they are not very popular for this purpose as the energy content is
very low.
In the case of active remote sensing, energy is generated and sent from the remote
sensing platform towards the targets. The energy reflected back from the targets is recorded
using sensors onboard the remote sensing platform. Most microwave remote sensing is
done through active remote sensing.

5. Explain in detail step by step procedure in Remote sensing process with neat diagram?
(Or) Describe briefly the different elements of RS?
The remote sensing process fundamentally involves a flow of energy. For example,
when we view the screen of a computer monitor, we are actively engaged in RS. A physical
quantity (light) emanates from the screen, which is a source of radiation. The radiated light
passes over a distance, and thus is remote to some extent, until it encounters and is captured by
a sensor (eyes). Each eye sends a signal to a processor (brain) which records the data and
interprets this into information.
Now consider, if the energy being remotely sensed comes from the sun, the energy is
radiated by atomic particles at the source (the sun), propagates through the vacuum of space at
the speed of light, interacts with the earth’s atmosphere, interacts with the earth’s surface,
some amount of energy reflects back, interacts with the earth’s atmosphere once again, and
finally reaches the remote sensor, where it interacts with various optical systems, filters, film
emulsions, or detectors.
"Remote sensing is the science (and to some extent, art) of acquiring information about the Earth's
surface without actually being in contact with it. This is done by sensing and recording reflected or
emitted energy and processing, analyzing, and applying that information."
In much of remote sensing, the process involves an interaction between incident radiation and
the targets of interest. This is exemplified by the use of imaging systems where the following seven
elements are involved. Note, however that remote sensing also involves the sensing of emitted energy and
the use of non-imaging sensors.

1. Energy Source or Illumination (A) - the first requirement for remote sensing is to have an energy
source which illuminates or provides electromagnetic energy to the target of interest. (Active RS or
Passive RS)
2. Radiation and the Atmosphere (B) - as the energy travels from its source to the target, it will come in
contact with and interact with the atmosphere it passes through. This interaction may take place a second
time as the energy travels from the target to the sensor.
3. Interaction with the Target (C) - once the energy makes its way to the target through the atmosphere,
it interacts with the target depending on the properties of both the target and the radiation.
4. Recording of Energy by the Sensor (D) - after the energy has been scattered by, or emitted from the
target, we require a sensor (remote - not in contact with the target) to collect and record the
electromagnetic radiation.
5. Transmission, Reception, and Processing (E) - the energy recorded by the sensor has to be
transmitted, often in electronic form, to a receiving and processing station where the data are processed
into an image (hardcopy and/or digital).
6. Interpretation and Analysis (F) - the processed image is interpreted, visually and/or digitally or
electronically, to extract information about the target which was illuminated.
7. Application (G) - The final element of the remote sensing process is achieved when we apply the
information we have been able to extract from the imagery about the target in order to better understand
it, reveal some new information, or assist in solving a particular problem.
These seven elements comprise the remote sensing process from beginning to end. We will be covering
all of these in sequential order throughout the five chapters of this tutorial, building upon the information
learned as we go.

6. Explain about EMR with neat diagram?


To understand how EMR (Electro Magnetic Radiation) is produced, how it propagates through
space, and how it interacts with other matter, it is useful to describe the electromagnetic energy
using two different models: Wave model and Particle model.
 WAVE MODEL:
In the 1860s, James Clerk Maxwell conceptualized EMR as electromagnetic energy, or a wave, that
travels through space at the speed of light, which is 299,792.46 km/s or 186,282.03 miles/s
(commonly rounded off to 3 × 10⁸ m/s). The electromagnetic wave consists of two fluctuating
fields – one electrical and the other magnetic. These two fluctuating fields are at right angles
(90°) to one another and both are perpendicular to the direction of propagation. Both have the
same amplitudes (strengths) which reach their maxima – minima at the same time. Unlike other
wave types that require a carrier (e.g. sound waves), electromagnetic waves can transmit through
vacuum (such as in space). Electromagnetic radiation is generated whenever an electrical charge
is accelerated.
Wavelength and frequency are the two important characteristics of EMR which are particularly
important for understanding remote sensing. The wavelength is the length of one complete wave
cycle, which can be measured as the distance between two successive crests or troughs. A crest is
the point on a wave with the greatest positive value or upward displacement in a cycle. A trough
is the inverse of crest. The wavelength of the EMR depends up on the length of time that the
charged particle is accelerated. It is usually represented by the Greek letter lambda (λ). It is
measured in meters (m), or in fractions of a meter such as nanometers (nm, 10⁻⁹ m),
micrometers (µm, 10⁻⁶ m), or centimeters (cm, 10⁻² m).
Frequency refers to the number of cycles of a wave passing a fixed point per unit of time. It is
represented by the Greek letter nu (ν). It is normally measured in hertz (Hz), equivalent to one cycle
per second. A wave that completes one cycle in every second is said to have a frequency of one
cycle per second, or one hertz (1 Hz).
The relationship between the wavelength (λ) and frequency (ν) of EMR is given by:
c = λν
i.e., ν = c/λ, or λ = c/ν
where c is the velocity of light.
Note that frequency is inversely proportional to wavelength. The relationship is shown
diagrammatically in the figure: the longer the wavelength, the lower the frequency; the shorter the
wavelength, the higher the frequency. When the EMR passes from one medium to another, the
speed of light and the wavelength change while the frequency remains constant.
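A small numeric check of the relation c = λν described above. The rounded speed of light from the text is used; the two example wavelengths (red light and a 10 cm microwave) are arbitrary illustrative values.

```python
C = 3.0e8  # speed of light in m/s (rounded value used in the text)

def frequency_from_wavelength(wavelength_m):
    """nu = c / lambda."""
    return C / wavelength_m

# Longer wavelength -> lower frequency, shorter wavelength -> higher frequency
for wavelength_m in (0.7e-6, 0.10):   # red light (~0.7 um) vs. a 10 cm microwave
    nu = frequency_from_wavelength(wavelength_m)
    print(f"lambda = {wavelength_m:g} m  ->  nu = {nu:.3e} Hz")
```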
 Particle model and quantum theory:
An anomaly arose in the late 19th century involving a contradiction between the wave theory of
light and measurements of the electromagnetic spectra that were being emitted by thermal radiators
known as black bodies. Physicists struggled with this problem unsuccessfully for many years. It later
became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body
radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies
emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These
packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real
particles. Later the particle of light was given the name photon, to correspond with other particles being
described around this time, such as the electron and proton. A photon has an energy, E, proportional to its
frequency, f, by

E = hf = hc/λ

Where h is Planck's constant, λ is the wavelength and c is the speed
of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first
quantization) the energy of the photons is thus directly proportional to the frequency of the EMR
wave.
Likewise, the momentum p of a photon is also proportional to its frequency and inversely
proportional to its wavelength:

p = E/c = hf/c = h/λ
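A minimal sketch evaluating the Planck–Einstein relation and the photon momentum given above. The constant values and the 0.5 µm sample wavelength are assumptions added for illustration, not taken from the text.

```python
H = 6.626e-34  # Planck's constant in J*s (assumed standard value)
C = 3.0e8      # speed of light in m/s

def photon_energy(wavelength_m):
    """E = h*f = h*c / lambda."""
    return H * C / wavelength_m

def photon_momentum(wavelength_m):
    """p = E / c = h / lambda."""
    return H / wavelength_m

wavelength_m = 0.5e-6  # 0.5 um (green light), an arbitrary illustrative value
print(f"E = {photon_energy(wavelength_m):.3e} J")         # ~4.0e-19 J
print(f"p = {photon_momentum(wavelength_m):.3e} kg*m/s")  # ~1.3e-27 kg*m/s
```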


The source of Einstein's proposal that light was composed of particles (or could act as
particles in some circumstances) was an experimental anomaly not explained by the wave theory:
the photoelectric effect, in which light striking a metal surface ejected electrons from the surface,
causing an electric current to flow across an applied voltage. Experimental measurements
demonstrated that the energy of individual ejected electrons was proportional to the frequency,
rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which
depended on the particular metal, no current would flow regardless of the intensity. These
observations appeared to contradict the wave theory, and for years physicists tried in vain to find
an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light
to explain the observed effect. Because of the preponderance of evidence in favor of the wave
theory, however, Einstein's ideas were met initially with great skepticism among established
physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light
was observed, such as the Compton Effect.
As a photon is absorbed by an atom, it excites the atom, elevating an electron to a
higher energy level (one that is on average farther from the nucleus). When an electron in an
excited molecule or atom descends to a lower energy level, it emits a photon of light at a
frequency corresponding to the energy difference. Since the energy levels of electrons in atoms
are discrete, each element and each molecule emits and absorbs its own characteristic
frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An
example is visible light emitted from fluorescent paints, in response to ultraviolet (black light).
Many other fluorescent emissions are known in spectral bands other than visible light. Delayed
emission is called phosphorescence

7. Discuss in detail about Electromagnetic Spectrum with diagram?


or
What are the various bands and channels in Electromagnetic Spectrum and its range?
Electromagnetic spectrum: The entire distribution of electromagnetic radiation according
to frequency or wavelength. Although all electromagnetic waves travel at the speed of light in a
vacuum, they do so at a wide range of frequencies, wavelengths, and photon energies. The
electromagnetic spectrum comprises the span of all electromagnetic radiation and consists of many
sub ranges, commonly referred to as portions, such as visible light or ultraviolet radiation. The various
portions bear different names based on differences in behavior in the emission, transmission,
and absorption of the corresponding waves and also based on their different practical applications.
There are no precise accepted boundaries between any of these contiguous portions, so the ranges tend
to overlap.
The entire electromagnetic spectrum, from the lowest to the highest frequency (longest to
shortest wavelength), includes all radio waves (e.g.,
commercial radio and television, microwaves, radar), infrared radiation, visible light, ultraviolet
radiation, X-rays, and gamma rays. Nearly all frequencies and wavelengths of electromagnetic
radiation can be used for spectroscopy.
 The electromagnetic spectrum can be divided into several wavelength (frequency) regions, among
which only a narrow band from about 400 to 700 nm is visible to the human eyes. Note that there
is no sharp boundary between these regions. The boundaries shown in the above figures are
approximate and there are overlaps between two adjacent regions.
 Wavelength units: 1 mm = 1000 µm;
1 µm = 1000 nm.
 Radio Waves: 10 cm to 10 km wavelength.
 Microwaves: 1 mm to 1 m wavelength. The microwaves are further divided into different
frequency (wavelength) bands (1 GHz = 10⁹ Hz); a small frequency-to-wavelength conversion sketch is given after this list:
• P band: 0.3 - 1 GHz (30 - 100 cm)
• L band: 1 - 2 GHz (15 - 30 cm)
• S band: 2 - 4 GHz (7.5 - 15 cm)
• C band: 4 - 8 GHz (3.8 - 7.5 cm)
• X band: 8 - 12.5 GHz (2.4 - 3.8 cm)
• Ku band: 12.5 - 18 GHz (1.7 - 2.4 cm)
• K band: 18 - 26.5 GHz (1.1 - 1.7 cm)
• Ka band: 26.5 - 40 GHz (0.75 - 1.1 cm)
 INFRARED: 0.7 to 300 µm wavelength. This region is further divided into the following bands:
 Near Infrared (NIR): 0.7 to 1.5 µm.
 Short Wavelength Infrared (SWIR): 1.5 to 3 µm.
 Mid Wavelength Infrared (MWIR): 3 to 8 µm.
 Long Wavelength Infrared (LWIR): 8 to 15 µm.
 Far Infrared (FIR): longer than 15 µm.
 The NIR and SWIR are also known as the Reflected Infrared, referring to the main infrared
component of the solar radiation reflected from the earth's surface. The MWIR and LWIR are
the Thermal Infrared.
 VISIBLE LIGHT: This narrow band of electromagnetic radiation extends from about 400 nm
(violet) to about 700 nm (red). The various colour components of the visible spectrum fall
roughly within the following wavelength regions:
 Red: 610 - 700 nm
 Orange: 590 - 610 nm
 Yellow: 570 - 590 nm
 Green: 500 - 570 nm
 Blue: 450 - 500 nm
 Indigo: 430 - 450 nm
 Violet: 400 - 430 nm
 ULTRAVIOLET: The wavelength of UV rays is shorter than the violet end of the visible
spectrum but longer than that of X-rays. Wavelengths range from about 3 to 400 nm.
 X-RAYS
They are widely used for medical purposes to see the inner structures of the human body and
detect diseases. They have wavelengths in the range of 10⁻⁸ to 10⁻¹¹ meters.
 GAMMA RAYS
They have wavelengths of about 10⁻¹¹ meters and shorter. They are used for radiography purposes.
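Referring back to the microwave bands listed above: since those bands are defined by frequency, a short hedged sketch of the conversion λ = c/f and a lookup against the listed ranges may help. The band limits are copied from the list above; the example frequencies are arbitrary.

```python
C = 3.0e8  # speed of light, m/s

# (band, lower GHz, upper GHz) copied from the microwave band list above
MICROWAVE_BANDS = [
    ("P", 0.3, 1), ("L", 1, 2), ("S", 2, 4), ("C", 4, 8),
    ("X", 8, 12.5), ("Ku", 12.5, 18), ("K", 18, 26.5), ("Ka", 26.5, 40),
]

def band_and_wavelength(freq_ghz):
    """Return the band letter containing freq_ghz and the wavelength in cm (lambda = c/f)."""
    wavelength_cm = C / (freq_ghz * 1e9) * 100
    for name, lo, hi in MICROWAVE_BANDS:
        if lo <= freq_ghz <= hi:
            return name, wavelength_cm
    return None, wavelength_cm

print(band_and_wavelength(5.3))  # ('C', ~5.7 cm)
print(band_and_wavelength(9.6))  # ('X', ~3.1 cm)
```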

8. Explain the spectral reflectance curve in Remote Sensing?


Remote sensing is based on the measurement of reflected or emitted radiation from different
bodies. Objects having different surface features reflect or absorb the sun's radiation in different ways.
The reflectance properties of an object depend on the particular material and its physical and chemical
state (e.g. moisture), the surface roughness as well as the geometric circumstances (e.g. incidence angle of
the sunlight). The most important surface features are colour, structure and surface texture.

These differences make it possible to identify different earth surface features or materials by
analysing their spectral reflectance patterns or spectral signatures. These signatures can be
visualised in so called spectral reflectance curves as a function of wavelengths. The figure in the
right column shows typical spectral reflectance curves of three basic types of Earth
features: green vegetation, dry bare soil and clear water.

The spectral reflectance curve of healthy green vegetation has a significant minimum of
reflectance in the visible portion of the electromagnetic spectrum resulting from the pigments in
plant leaves. Reflectance increases dramatically in the near infrared. Stressed vegetation can also
be detected because stressed vegetation has a significantly lower reflectance in the infrared.

Vegetation covers a large portion of the Earth's land surface. Its role on the regulation of the
global temperature, absorption of CO2 and other important functions, make it a land cover type
of great significance and interest. Remote sensing can take advantage of the particular manner
that vegetation reflects the incident electromagnetic energy and obtain information about the
vegetation.

Cellular leaf structure and its interaction with electromagnetic energy. Most visible light is
absorbed, while almost half of the near infrared energy is reflected.

Under the upper epidermis (the thin layer of cells that forms the top surface of the leaf) there are
primarily two layers of cells. The top one is the palisade parenchyma and consists of elongated
cells, tightly arranged in a vertical manner. In this layer resides most of the chlorophyll, a protein
that is responsible for capturing the solar energy and power the process of photosynthesis. The
lower level is the spongy parenchyma, consisting of irregularly shaped cells, with a lot of air
spaces between them, in order to allow the circulation of gases.

The spectral reflectance curve of bare soil is considerably less variable. The reflectance curve is
affected by moisture content, soil texture, surface roughness, presence of iron oxide and organic
matter. These factors are less dominant than the absorbance features observed in vegetation
reflectance spectra.

The water curve is characterized by a high absorption at near infrared wavelengths range and
beyond. Because of this absorption property, water bodies as well as features containing water
can easily be detected, located and delineated with remote sensing data. Turbid water has a
higher reflectance in the visible region than clear water. This is also true for waters containing
high chlorophyll concentrations. These reflectance patterns are used to detect algae colonies as
well as contaminations such as oil spills or industrial waste water (more about different
reflections in water can be found in the tutorial Ocean Colour in the Coastal Zone).

Features on the Earth reflect, absorb, transmit, and emit electromagnetic energy
from the sun. Special digital sensors have been developed to measure all types of
electromagnetic energy as it interacts with objects in all of the ways listed above. The
ability of sensors to measure these interactions allows us to use remote sensing to
measure features and changes on the Earth and in our atmosphere. A measurement of
energy commonly used in remote sensing of the Earth is reflected energy (e.g., visible
light, near-infrared, etc.) coming from land and water surfaces. The amount of energy
reflected from these surfaces is usually expressed as a percentage of the amount of energy
striking the objects. Reflectance is 100% if all of the light striking an object bounces off and is
detected by the sensor. If none of the light returns from the surface, reflectance is said to be 0%.
In most cases, the reflectance value of each object for each area of the electromagnetic spectrum
is somewhere between these two extremes. Across any range of wavelengths, the percent
reflectance values for landscape features such as water, sand, roads, forests, etc. can be plotted
and compared
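A minimal sketch of the percent-reflectance calculation described above: reflected energy divided by incident energy, expressed as a percentage. The sample energy values are invented purely for illustration and only echo the qualitative behaviour discussed earlier (water absorbing near-infrared, vegetation reflecting much of it).

```python
def percent_reflectance(reflected_energy, incident_energy):
    """Reflected energy expressed as a percentage of the energy striking the object."""
    return 100.0 * reflected_energy / incident_energy

# Arbitrary illustrative energy values (same units for reflected and incident)
samples = {
    "clear water, near-infrared": (2.0, 100.0),
    "green vegetation, near-infrared": (45.0, 100.0),
    "dry bare soil, red": (30.0, 100.0),
}
for name, (reflected, incident) in samples.items():
    print(f"{name}: {percent_reflectance(reflected, incident):.0f}% reflectance")
```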
Most remote sensing applications process digital images to extract the spectral signature at
each pixel and use it to divide the image into groups of similar pixels (segmentation) using
different approaches. As a last step, they assign a class to each group (classification) by
comparing it with known spectral signatures. Depending on the pixel resolution, a pixel can represent
many spectral signatures "mixed" together, which is why much remote sensing analysis is devoted to
"unmixing" such mixtures. Ultimately, correct matching of the spectral signature recorded at an image pixel with the
spectral signatures of known materials leads to accurate classification in remote sensing.
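A toy sketch of the per-pixel matching step described above: each pixel's spectral signature (here, reflectance in three bands) is compared with known reference signatures and assigned the class of the closest one. The band values and reference signatures are hypothetical, made up only to show the mechanics.

```python
import math

# Hypothetical reference signatures: mean percent reflectance in [green, red, near-IR] bands
REFERENCE_SIGNATURES = {
    "water":      [6.0, 4.0, 2.0],
    "vegetation": [12.0, 8.0, 45.0],
    "bare soil":  [18.0, 22.0, 28.0],
}

def classify_pixel(pixel_signature):
    """Assign the class whose reference signature is closest in Euclidean distance."""
    def distance(reference):
        return math.sqrt(sum((p - r) ** 2 for p, r in zip(pixel_signature, reference)))
    return min(REFERENCE_SIGNATURES, key=lambda cls: distance(REFERENCE_SIGNATURES[cls]))

print(classify_pixel([10.0, 7.0, 40.0]))  # -> vegetation
print(classify_pixel([7.0, 5.0, 3.0]))    # -> water
```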
9. Energy interaction with the atmosphere in remote sensing? (or) Atmospheric effects in RS?
Before radiation used for remote sensing reaches the Earth's surface it has to travel through some
distance of the Earth's atmosphere. Particles and gases in the atmosphere can affect the incoming
light and radiation. These effects are caused by the mechanisms of Scattering and Absorption.

Absorption and Scattering


 Absorption: Radiant energy is absorbed and converted into other forms of energy.
Ozone (which absorbs ultraviolet), carbon dioxide (a greenhouse gas, absorbing in regions associated with thermal heating), and
water vapour (which absorbs longwave (thermal) infrared and shortwave microwave radiation) are the three
main atmospheric constituents that absorb radiation.

Scattering occurs when particles or large gas molecules present in the atmosphere interact with
and cause the electromagnetic radiation to be redirected from its original path. How much
scattering takes place depends on several factors including the wavelength of the radiation, the
abundance of particles or gases, and the distance the radiation travels through the atmosphere.
Scattering is commonly grouped into two categories:
1. Selective Scattering
2. Non-Selective Scattering
1. SELECTIVE SCATTERING:
 Rayleigh scattering - occurs when particles are very small compared to the wavelength
of the radiation. These could be particles such as small specks of dust or nitrogen and
oxygen molecules. Rayleigh scattering causes shorter wavelengths of energy to be
scattered much more than longer wavelengths (a quick numerical illustration of this
wavelength dependence is sketched after this list). Rayleigh scattering is the dominant
scattering mechanism in the upper atmosphere. The fact that the sky appears "blue"
during the day is because of this phenomenon. As sunlight passes through the
atmosphere, the shorter wavelengths (i.e. blue) of the visible spectrum are scattered more
than the other (longer) visible wavelengths. At sunrise and sunset the light has to travel
farther through the atmosphere than at midday and the scattering of the shorter
wavelengths is more complete; this leaves a greater proportion of the longer wavelengths
to penetrate the atmosphere.

 Mie scattering: occurs when the particles are just about the same size as the
wavelength of the radiation. Dust, pollen, smoke and water vapour are common
causes of Mie scattering which tends to affect longer wavelengths than those
affected by Rayleigh scattering. Mie scattering occurs mostly in the lower portions of
the atmosphere where larger particles are more abundant, and dominates when
cloud conditions are overcast.

 Raman scattering: occurs when the particles are about the same size as, larger than, or
smaller than the wavelength of the radiation; the scattered energy undergoes a slight
change in wavelength.
2. NON-SELECTIVE SCATTERING: This occurs when the particles are much larger
than the wavelength of the radiation. Water droplets and large dust particles can cause
this type of scattering. Non-selective scattering gets its name from the fact that all
wavelengths are scattered about equally. This type of scattering causes fog and clouds to
appear white to our eyes, because blue, green, and red light are all scattered in
approximately equal quantities (blue + green + red light = white light).
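To illustrate the strong wavelength dependence of Rayleigh scattering mentioned in the list above, the sketch below uses the standard 1/λ⁴ proportionality (an assumption not stated explicitly in the text) to compare scattering of blue and red light.

```python
def relative_rayleigh_scattering(wavelength_nm, reference_nm=450.0):
    """Scattering strength relative to the reference wavelength, using the 1/lambda^4 law."""
    return (reference_nm / wavelength_nm) ** 4

blue_vs_red = 1.0 / relative_rayleigh_scattering(650.0)  # red relative to blue
print(f"Blue light (~450 nm) is scattered about {blue_vs_red:.1f} times more strongly than red (~650 nm)")
```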
10. REMOTE SENSING DATA ACQUISITION AND INTERPRETATION
Up to this point, we have discussed the principal sources of electromagnetic energy, the
propagation of this energy through the atmosphere, and the interaction of this energy with
earth surface features. Combined, these factors result in energy "signals" from which we wish
to extract information. We now consider the procedures by which these signals are detected,
recorded and interpreted.
The detection of electromagnetic energy can be performed either photographically or
electronically. The process of photography uses chemical reactions on the surface of a light-
sensitive film to detect energy variations within a scene. Photographic systems offer many
advantages: they are relatively simple and inexpensive and provide a high degree of spatial
detail and geometric integrity. Electronic sensors generate an electrical signal that corresponds
to the energy variations in the original scene. A familiar example of an electronic sensor is a
television camera. Although considerably more complex and expensive than photographic
systems, electronic sensors offer the advantages of a broader spectral range of sensitivity,
improved calibration potential, and the ability to electronically transmit image data. Another
mode of electronic sensing is recording with the help of a charge-coupled device (CCD), which
converts the electrical signal into a digital signal.
By developing a photograph, we obtain a record of its detected signals. Thus, the film
acts as both the detecting and recording medium. Electronic sensor signals are generally
recorded onto magnetic tape. Subsequently, the signals may be converted to an image form by
photographing a TV-like screen display of the data, or by using a specialized film recorder. In
these cases, photographic film is used only as a recording medium.
We can see that the data interpretation aspects of remote sensing can involve analysis
of pictorial (image) and/or numerical data. Visual interpretation of pictorial image data has long
been the workhorse of remote sensing. Visual techniques make use of the excellent ability of
the human mind to qualitatively evaluate spatial patterns in a scene. The ability to make
subjective judgments based on selective scene elements is essential in many interpretation
efforts. Visual interpretation techniques have certain disadvantages, however, in that they may
require extensive training and are labour intensive. In addition, spectral characteristics are not
always fully evaluated in visual interpretation efforts. This is partly because of the limited ability
of the eye to discern tonal values on an image and the difficulty for an interpreter to
simultaneously analyze numerous spectral images. In applications where spectral patterns are
highly informative, it is therefore preferable to analyze numerical, rather than pictorial, image
data. In this case, the image is described by a matrix of numerical brightness values covering
the scene. These values may be analyzed by quantitative procedures employing a computer,
which is referred to as digital interpretation.
The use of computer-assisted analysis techniques permits the spectral patterns in
remote sensing data to be more fully examined. Digital interpretation is assisted by image
processing techniques such as image enhancement, information extraction, etc. It also permits
the data analysis process to be largely automated, providing cost advantages over visual
interpretation techniques. However, just as humans are limited in their ability to interpret
spectral patterns, computers are limited in their ability to evaluate spatial patterns. Therefore,
visual and numerical techniques are complementary in nature, and consideration must be given
to which approach (or combination of approaches) best fits a particular application.

11. Interaction of EMR with Earth surface features, Soil, Vegetation and water?
The interaction of electromagnetic radiation with the Earth's surface is governed by three physical
processes: reflection, absorption, and transmission of radiation. Absorption involves a reduction in
radiation intensity as its energy is converted into other forms on reaching an object on the Earth's surface.
Reflection involves the returning or bouncing back of the radiation incident on an object on the Earth's surface, whilst
transmission entails the passage of radiant energy through the object. Together, these three components
account for the total radiant flux incident on an object: E_incident(λ) = E_reflected(λ) + E_absorbed(λ) + E_transmitted(λ).
12. What is atmospheric window?
Atmospheric windows are those portions of the EM radiation spectrum with low absorption and high
transmission.
Following are some examples of the atmospheric windows:
(0.3 – 1.3 μm): Visible/near infrared window.
(1.5 – 1.8, 2.0 – 2.5, and 3.5 – 4.1μm): Mid infrared window.
(7.0 – 15.0 μm): Thermal/far infrared window.

Some wavelengths cannot be used in remote sensing because our atmosphere absorbs essentially
all the photons at these wavelengths that are produced by the sun. In particular, the molecules of water,
carbon dioxide, oxygen, and ozone in our atmosphere block solar radiation. The wavelength ranges in
which the atmosphere is transparent are called atmospheric windows. Remote sensing projects must be
conducted in wavelengths that occur within atmospheric windows. Outside of these windows, there is
simply no radiation from the sun to detect--the atmosphere has blocked it.

The figure above shows the percentage of light transmitted at various wavelengths from the near
ultraviolet to the far infrared, and the sources of atmospheric opacity are also given. You can see that
there is plenty of atmospheric transmission of radiation at 0.5 microns, 2.5 microns, and 3.5 microns, but
in contrast there is a great deal of atmospheric absorption at 2.0, 3.0, and about 7.0 microns. Both passive
and active remote sensing technologies do best if they operate within the atmospheric windows.
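A small helper that checks whether a candidate wavelength (in µm) lies inside one of the example windows listed above; the window limits are copied from that list and are only approximate.

```python
# Approximate atmospheric windows in micrometres, copied from the list above
ATMOSPHERIC_WINDOWS = {
    "visible / near-infrared": (0.3, 1.3),
    "mid-infrared (a)": (1.5, 1.8),
    "mid-infrared (b)": (2.0, 2.5),
    "mid-infrared (c)": (3.5, 4.1),
    "thermal / far-infrared": (7.0, 15.0),
}

def containing_window(wavelength_um):
    """Return the name of the window containing the wavelength, or None if it is blocked."""
    for name, (low, high) in ATMOSPHERIC_WINDOWS.items():
        if low <= wavelength_um <= high:
            return name
    return None

print(containing_window(0.55))  # visible / near-infrared (green light)
print(containing_window(6.3))   # None -- falls in a strong water-vapour absorption region
```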

13. What is Black Body and its laws?


A black body or blackbody is an idealized physical body that absorbs all incident electromagnetic
radiation, regardless of frequency or angle of incidence. The name "black body" is given because it
absorbs all colors of light. A black body also emits black-body radiation. In contrast, a white body is one
with a "rough surface that reflects all incident rays completely and uniformly in all directions."

A black body in thermal equilibrium (that is, at a constant temperature) emits electromagnetic black-
body radiation. The radiation is emitted according to Planck's law, meaning that it has a spectrum that is
determined by the temperature alone (see figure at right), not by the body's shape or composition.

An ideal black body in thermal equilibrium has two main properties:

It is an ideal emitter: at every frequency, it emits as much or more thermal radiative energy as any
other body at the same temperature.

It is a diffuse emitter: measured per unit area perpendicular to the direction, the energy is
radiated isotropically, independent of direction.

Planck's Law:
Planck's Law can be generalized as such: Every object emits radiation at all times and
at all wavelengths. If you think about it, this law is pretty hard to wrap your brain around. We know that
the sun emits visible light (below left), infrared waves, and ultraviolet waves (below right), but did you
know that the sun also emits microwaves, radio waves, and X-rays? OK... you are probably saying, the
sun is a big nuclear furnace, so it makes sense that it emits all sorts of electromagnetic radiation.
However, Planck's Law states that every object emits over the entire electromagnetic spectrum. That
means that you emit radiation at all wavelengths -- so does everything around you!

Two images of the sun taken at different wavelengths of the electromagnetic spectrum. The left image
shows the sun's emission at a wavelength in the visible range. The right image is the ultraviolet emission
of the sun. Note: colors in these images and the ones above are deceptive. There is no sense of "color" in
spectral regions other than visible light. The use of color in these "false-color" images is only used as an
aid to show radiation intensity at one particular wavelength. Credit: NASA/JPL
Now before you dismiss this statement out-of-hand, let me say that you are not emitting X-rays in any
measurable amount (thank goodness!). The mathematics behind Planck's Law hinge on the fact that there
is a wide distribution of vibration speeds for the molecules in a substance. This means that it is possible
for matter to emit radiation at any wavelength, and in fact it does.
Another common misconception that Planck's Law dispels is that matter selectively emits radiation.
Consider what happens when you turn off a light bulb. Is it still emitting radiation? You might be
tempted to say "No" because the light is off. However, Planck's Law tells us that while the light bulb may
no longer be emitting radiation that we can see, it is still emitting at all wavelengths (most likely, it is
emitting copious amounts of infrared radiation). Another example that you hear occasionally on TV
weathercasts goes something like this. "When the sun sets, the ground begins to emit infrared
radiation..." This is certainly not true by nature of Planck's Law (and besides, how does the ground know
when the sun sets anyway). We'll talk more about radiation emission from the ground in a future lesson.
For now, please dismiss such statements as hogwash. The surface of the earth emits radiation all the time
and at all wavelengths.
Wien's Law:
At this point I know what you are thinking... there must be a "catch". In fact there is. While all matter
emits radiation at all wavelengths, it does not do so equally. This is where the next radiation law comes
in. Wien's Law states that the wavelength of peak emission is inversely proportional to the
temperature of the emitting object. Put another way, the hotter the object, the shorter the wavelength of
maximum emission. You have probably observed this law in action all the time without even realizing it.
Want to know what I mean? Check out this steel bar. Which end might you pick up? Certainly not the
right end... it looks hot. Why does it "look hot"? Well, the wavelength of peak emission for the right side
of the bar is obviously shorter than the left side's peak emission wavelength. You see this shift in the
peak emission wavelength as a color changes from red to orange to yellow as the metal's temperature
increases.
Note: I should point out that even though the steel bar is a yellow-white color at the end, the peak
emission is still in the infrared part of the electromagnetic spectrum. However, the peak is so close to the
visible part of the spectrum, that there is a significant amount of visible light also being emitted from the
steel. Judging by the look of this photograph, the steel has a temperature of roughly 1500 kelvins,
resulting in a max emission wavelength of 2 microns (remember visible light is 0.4-0.7 microns). Here is
a chart showing how I estimated the steel temperature. To the left of the visibly red metal, the bar is still
likely several hundred degrees Celsius. However, in this section of the bar, the peak emission wavelength
is far into the IR portion of the spectrum -- so much so that no visible light emission is discernible with
the human eye.
So, now that we've established Wien's Law, how do we apply it to the emission sources that affect the
atmosphere? Consider the chart below showing the emission curves (called Planck functions) for both the
sun and the earth.
The emission spectrum of the sun (orange curve) compared to the earth's emission (dark red curve). The
x-axis shows wavelength in factors of 10 (called a "log scale"). The y-axis is the amount of energy per
unit area per unit time per unit wavelength. I have kept the units arbitrary because as you can see, they
are messy. Credit: David Babb

Note the idealized spectrum for the earth's emission (dark red line) of electromagnetic radiation compared
to the sun's electromagnetic spectrum (orange line). The radiating temperature of the sun is 6000 degrees
Celsius compared to the earth's measly 15 degrees Celsius. This means that given its high radiating
temperature, the sun's peak emission occurs near 0.5 microns, on the short-wave end of the visible
spectrum. Meanwhile the Earth's peak emission is located in the infrared portion of the electromagnetic
spectrum.
By the way, because the sun's peak emission is located around 0.5 microns, we see it as having a yellow
quality. But this is not the case for all stars. Some stars in our galaxy are somewhat cooler and exhibit a
reddish hue, while others are much hotter and appear blue. The constellation Orion contains
the red supergiant Betelgeuse and several blue supergiants, the largest being Rigel and
Bellatrix. Can you spot them in this photograph of Orion?
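A quick numerical check of Wien's Law is sketched below. It uses the displacement constant b ≈ 2898 µm·K, which is not given in the text and should be treated as an added assumption, together with the radiating temperatures quoted above for the sun and the earth.

```python
WIEN_B_UM_K = 2898.0  # Wien displacement constant in micrometre-kelvins (assumed value, not given in the text)

def peak_emission_wavelength_um(temperature_k):
    """Wien's Law: wavelength of peak emission (um) is inversely proportional to temperature (K)."""
    return WIEN_B_UM_K / temperature_k

sun_k = 6000 + 273   # from the "6000 degrees Celsius" radiating temperature quoted above
earth_k = 15 + 273   # from the "15 degrees Celsius" quoted above
print(f"Sun peak emission   ~ {peak_emission_wavelength_um(sun_k):.2f} um (visible)")
print(f"Earth peak emission ~ {peak_emission_wavelength_um(earth_k):.1f} um (thermal infrared)")
```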
Stefan–Boltzmann Law:
Examine once again the graph of the sun's emission curve versus the Earth's emission curve. Pay
particular attention to the energy values on the left axis (for the sun) and right axis (for the earth). The
first thing to notice is that the energy values are given in powers of 10 (that is, 10⁶ is equal to 1,000,000).
This means that if we compare the peak emissions from the earth and sun we see that the sun at its peak
wavelength emits 30,000 times more energy than the earth at its peak. In fact, if we add up the total
energy emitted by each body (by adding the energy contribution at each wavelength), we see that the sun
emits over 150,000 times more energy per unit area than the earth!
I calculated the numbers above using the third radiation law that you need to know, the Stefan-Boltzmann
Law. The Stefan-Boltzmann Law states that the total amount of energy per unit area emitted by an
object is proportional to the 4th power of its absolute temperature. We'll talk more about this relationship
when we discuss satellite remote sensing. It is also particularly useful when we want to understand how
much energy the earth's surface emits in the form of infrared radiation.
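A short sketch of the T⁴ relationship, again using the temperatures quoted in the text; the Stefan-Boltzmann constant is an added assumption, and the exact ratio printed depends entirely on the assumed radiating temperatures, so take it only as an order-of-magnitude illustration of the point above.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (assumed value, not given in the text)

def emitted_power_per_unit_area(temperature_k):
    """Total energy emitted per unit area: proportional to the 4th power of temperature."""
    return SIGMA * temperature_k ** 4

sun_k, earth_k = 6000 + 273, 15 + 273
ratio = emitted_power_per_unit_area(sun_k) / emitted_power_per_unit_area(earth_k)
print(f"Per unit area, the sun emits roughly {ratio:,.0f} times more energy than the earth")
```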

Kirchhoff's Law:
In the preceding radiation laws, we have been talking about the ideal amount of radiation that can
be emitted by an object. This theoretical limit is called "black body radiation". However, the actual
radiation emitted by an object can be much less than the ideal, especially at certain wavelengths.
Kirchhoff's Law describes the linkage between an object's ability to emit at a particular wavelength with
its ability to absorb radiation at that same wavelength. In plain language, Kirchhoff's Law states that for
an object whose temperature is not changing, an object that absorbs radiation well at a particular
wavelength will also emit radiation well at that wavelength. One implication of Kirchhoff's law is as
follows: If we want to measure a particular constituent in the atmosphere (water vapor for example), we
need to choose a wavelength that is emitted well by water vapor (otherwise we wouldn't detect it).
However, since water vapor readily emits at our chosen wavelength, it also readily absorbs radiation at
this wavelength -- which is going to cause some problems measurement-wise.
We'll look at the implications of Kirchhoff's Law in a later section. For now, we need to complete
our discussion of radiation by looking at the possible things that can happen to a beam of radiation as it
passes through a medium.

14.What are the different platforms that are used in RS?


For remote sensing applications, sensors should be mounted on suitable, stable platforms.
These platforms can be ground based, airborne, or space borne. As the platform height
increases, the area that can be observed in a single view increases while the ground detail that
can be resolved becomes coarser; thus, the higher the sensor is mounted, the more synoptic the
view obtained. The type and characteristics of the platform depend on the type of sensor to be
attached and its application. Depending on the task, the platform can vary from a ladder to a
satellite. For some tasks, sensors are also placed on ground platforms. Though aircraft and
satellites are the commonly used platforms, balloons and rockets are also used.
Three types of platforms are used to mount the remote sensors –
1. Ground Observation Platform
2. Airborne Observation Platform, and
3. Space-Borne Observation Platform
1. Ground Observation Platform
Ground observation platforms are used to record detailed information about the objects or
features of the earth's surface. They are developed for the scientific understanding of signal-object
and signal-sensor interactions. Ground observation includes both laboratory and field study, used
both for designing sensors and for the identification and characterization of land features. Ground
observation platforms include handheld platforms, cherry pickers, towers, portable masts and vehicles, etc. Portable
handheld photographic cameras and spectroradiometers are largely used in laboratory and field
experiments to provide reference data and ground-truth verification.

Examples: Mobile Hydraulic Platforms, Portable Masts, Towers, Weather Surveillance Radar

Ground based platforms can also be classified according to operational range:

 Short range systems

Operate at ranges of 50-100m with panoramic scanning and are often used to map
building interiors or small objects

 Medium range systems


Operate at distances of 150-250 m, also achieving millimeter accuracies in high-definition
surveying and 3D modelling applications, e.g. bridge and dam monitoring

 Long range systems

Can measure at distances of up to 1km and are frequently used in open-pit mining and
topographic survey applications.

2. Air Borne Based Platform


Airborne platforms were the sole non-ground-based platforms for early remote sensing work.
Aircraft remote sensing systems may also be referred to as sub-orbital, airborne, or aerial remote sensing
systems. At present, airplanes are the most common airborne platform. Other airborne observation
platforms include balloons, drones (short sky spy) and high altitude sounding rockets. Helicopters are
occasionally used.

2.1 Balloon Platform


Balloons are used for remote sensing observation (aerial photography) and nature conservation
studies. The first aerial images were acquired with a camera carried aloft by a balloon in 1859. A balloon
floats at a constant height of about 30 km. It consists of a rigid circular base plate supporting the entire
sensor system, which is protected by an insulating and shock-proof light casing. The payload used for the
Indian balloon experiment consisted of three Hasselblad cameras with different film-filter combinations, to provide
panchromatic, infrared black-and-white and infrared false-colour images. Since the flight altitude is high compared to
the normal aircraft heights used for aerial survey, balloon imagery gives a larger synoptic view. The balloon is
governed by the wind at the floating altitude. Balloons are rarely used today because they are not very
stable and the course of flight is not always predictable, although small balloons carrying expendable
probes are still used for some meteorological research.

2.2 Drone
A drone is a miniature remotely piloted aircraft. It is designed to fulfil the requirements for a low-cost
platform with long endurance, moderate payload capacity and the capability to operate without a runway or
from a small runway. A drone can carry equipment for photography, infrared detection, radar observation and TV
surveillance. It uses a satellite communication link. An onboard computer controls the payload and stores
data from different sensors and instruments. The payload computer utilizes a GSM/GPRS (where
available) or independent satellite downlink, and its position and payload status can be monitored from
anywhere in the world with an internet connection.

The drone was developed in Britain during World War II as the 'short sky spy', originally conceived
for military reconnaissance. It now plays an important role in remote sensing. Its unique advantage is that
it can be accurately positioned above the area for which data are required and is capable of providing both
night and day data.

2.3 Aircraft
Special aircraft with cameras and sensors mounted on vibration-free platforms are traditionally used to
acquire aerial photographs and images of land surface features. While low-altitude aerial photography
results in large-scale images providing detailed information on the terrain, high-altitude, smaller-scale
images offer the advantage of covering a larger study area, though with lower spatial resolution.

The National High Altitude Photography (NHAP) program (1978), coordinated by the US Geological
Survey, was started to acquire coverage of the United States with a uniform scale and format. Besides aerial
photography, multispectral, hyperspectral and microwave imaging are also carried out from aircraft.

Aircraft platforms offer an economical method of remote sensing data collection for small to large study
areas with cameras, electronic imagers, across-track and along-track scanners, and radar and microwave
scanners. AVIRIS hyperspectral imaging (operated by NASA/JPL) is a well-known example of airborne imaging spectrometry.

2.4 High Altitude Sounding Rockets


High-altitude sounding rocket platforms are useful in assessing the reliability of remote
sensing techniques as far as their dependence on the distance from the target is concerned. Balloons
have a maximum altitude of approximately 37 km, while satellites cannot orbit below about 120 km. High-
altitude sounding rockets can be used at moderate altitudes above the terrain, in between these limits.
Imagery with a moderate synoptic view can be obtained from such rockets, covering areas of some
500,000 square kilometres per frame. The high-altitude sounding rocket is fired from a mobile launcher.
During the flight its scanning work is done from a stable altitude, after which the payload and the spent
motor are returned to the ground gently by parachute, enabling the recovery of the data. One of the most
important limitations of this system is the need to ensure that the descending rocket does not cause damage.

3 Space-borne Observation Platforms


In spaceborne remote sensing, sensors are mounted on board a spacecraft (space shuttle or satellite)
orbiting the earth. Space-borne or satellite platforms involve a high one-time cost but a relatively low cost per
unit area of coverage, and can acquire imagery of the entire earth without requiring permission. Space-borne
imaging is carried out from altitudes ranging from about 250 km to 36,000 km.

 Space borne remote sensing provides the following advantages:


 Large area coverage;
 Frequent and repetitive coverage of an area of interest;
 Quantitative measurement of ground features using radiometrically calibrated sensors;
 Semi-automated computerized processing and analysis;
 Relatively lower cost per unit area of coverage.

There are two types of well recognized satellite platforms- manned satellite platform and unmanned
satellite platform.
Manned Satellite Platforms:
Manned satellite platforms are used as the last step, for rigorous testing of the remote sensors on
board so that they can be finally incorporated in the unmanned satellites. This multi- level remote sensing
concept is already presented. Crew in the manned satellites operates the sensors as per the program
schedule.

Unmanned Satellite Platforms


The Landsat, SPOT and IRS series of remote sensing satellites, the NOAA series of meteorological
satellites, the entire constellation of GPS satellites, and the GOES and INSAT series of geostationary
environmental, communication, television broadcast, weather and Earth observation satellites are examples
of the unmanned satellite category.


15. Characteristics of satellite orbits?
The path followed by a satellite in the space is called the orbit of the satellite. Orbits may be
circular (or near circular) or elliptical in shape.
Orbital period: The time taken by a satellite to complete one revolution in its orbit around the Earth
is called the orbital period.
It varies from around 100 minutes for a near-polar earth observing satellite to 24 hours for a
geo-stationary satellite.
Altitude: The altitude of a satellite is its height with respect to the surface immediately below it.
Depending on the designed purpose of the satellite, the orbit may be located at low (160-2,000
km), moderate, or high (~36,000 km) altitude.
Apogee and perigee: Apogee is the point in the orbit where the satellite is at maximum distance
from the Earth. Perigee is the point in the orbit where the satellite is nearest to the Earth as
shown in Fig.
[Fig.: Schematic representation of the satellite orbit showing the apogee and perigee]
Inclination: Inclination is the angle between the satellite's orbital plane and the Earth's equatorial
plane, measured at the ascending node (0 to 180 degrees). An orbit in the equatorial plane therefore has
an inclination of about 0 degrees, a polar orbit about 90 degrees, and a typical sun-synchronous remote
sensing orbit about 98-99 degrees (slightly retrograde).
Nadir, ground track and zenith: Nadir is the point of interception on the surface of the Earth of
the radial line between the centre of the Earth and the satellite. This is the point of shortest
distance from the satellite to the Earth's surface. The point diametrically opposite the nadir, directly
above the satellite, is called the zenith. The curve traced on the Earth's surface by the nadir point as
the satellite revolves is called the ground track; in other words, it is the projection of the satellite's
orbit onto the ground surface.
Swath: The swath of a satellite is the width of the area on the surface of the Earth that is imaged by
the sensor during a single pass. For example, the swath width of the IRS-1C LISS-III sensor is 141 km in
the visible bands and 148 km in the shortwave infrared band.
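As a rough numerical illustration of how altitude and orbital period are related (a minimal sketch using Kepler's third law for a circular orbit; the Earth constants and the two altitudes below are approximate values chosen only for illustration):

import math

MU_EARTH = 398600.4418    # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0          # mean Earth radius in km (approximate)

def orbital_period_minutes(altitude_km):
    # Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
    a = R_EARTH + altitude_km
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60.0

print(round(orbital_period_minutes(817)))      # ~101 minutes, a typical near-polar EO orbit
print(round(orbital_period_minutes(35786)))    # ~1436 minutes (~24 h), geostationary orbit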

16. What are the different satellite orbitals explain with diagram?
When a satellite is launched into space, it moves in a well-defined path around the
Earth, which is called the orbit of the satellite. The gravitational pull of the Earth and the velocity
of the satellite are the two basic factors that keep a satellite in a particular orbit. The spatial and
temporal coverage of the satellite depend on its orbit. Orbits are commonly classified in three basic
ways: by inclination, by altitude, and by shape.
1. By Inclination:
 Equatorial Orbit
 Inclined Orbit
 Polar Orbit
1. Inclined orbit: A satellite is said to occupy an inclined orbit around the Earth if the orbital plane
makes an angle other than 0° with the equatorial plane. This angle is called the orbit's
inclination. Similarly, a planet is said to have an inclined orbit around the Sun if its orbital plane
makes an angle other than 0° with the ecliptic plane.
 Satellites orbit the Earth at different heights, at different speeds and along different paths. The two
most common types of orbit are geostationary and polar. A geostationary satellite travels from west to
east over the equator, while a polar satellite travels from north to south.
 Sun Synchronous Satellites:
Sun-synchronous satellites orbit at relatively low altitudes, normally a few hundred to about a thousand
kilometers above the Earth's surface. They travel from the north pole towards the south pole as the
Earth turns beneath them. A sun-synchronous satellite passes over the same part of the Earth at
approximately the same local solar time each day, which makes the collection and comparison of many
forms of data easier.
 A geostationary satellite travels from west to east over the equator. It moves in the same
direction and at the same rate Earth is spinning. From Earth, a geostationary satellite looks like it
is standing still since it is always above the same location.
 Polar-orbiting satellites travel in a North-South direction from pole to pole. As Earth spins
underneath, these satellites can scan the entire globe, one strip at a time.
 What Was the First Satellite in Space?
Sputnik 1 was the first satellite in space. The Soviet Union launched it in 1957.

2. BY ALTITUDE:
 Low Earth Orbit (LEO)
 Medium Earth Orbit (MEO)
 Geostationary Earth Orbit (GEO)
LEO: Low Earth Orbit (160 - 2,000 km); a satellite takes roughly 90-130 minutes to circle the Earth.
MEO: Medium Earth Orbit (2,000 - 35,786 km); orbital periods range from about 2 to nearly 24 hours.
GEO: Geostationary Earth Orbit (~36,000 km); the orbital period is about 24 hours, matching the Earth's rotation.
3. By Shape:
 Circular Orbit:- It is a fixed distance around the barycenter, that is in the shape of circle.
o Geostationary orbit, Polar Orbit and Equatorial Orbit.
 Elliptical Orbit:- The revolution of one object around another in an oval-shaped path called an
ellipse.
The closest point to the Earth is the perigee and the farthest point is the apogee.

17. Explain about any five IRS satellite characteristics?


18. Define satellite and its types explain in detail?
A satellite is a body that orbits around another body in space. There are two different
types of satellites - natural and man-made. The Earth and the Moon are examples of natural satellites:
the Earth is a satellite because it orbits the Sun, and likewise the Moon is a satellite because it
orbits the Earth. A man-made satellite is a machine that is launched into space and orbits around a
body in space.
Types Of Satellites:
 Navigation satellites. The GPS (global positioning system) is made up of 24 satellites that
orbit at an altitude of 20,200 km above the surface of the Earth.
 Communication satellites.
 Weather satellites.
 Earth observation satellites.
 Astronomical satellites.
 International Space Station (ISS).
 Navigation Satellites:-
A satellite navigation or satnav system is a system that uses satellites to provide
autonomous geo-spatial positioning. It allows small electronic receivers to determine their
location (longitude, latitude, and altitude/elevation) to high precision (within a few
centimeters to metres) using time signals transmitted along a line of sight by radio from
satellites. The system can be used for providing position, navigation or for tracking the
position of something fitted with a receiver (satellite tracking). The signals also allow the
electronic receiver to calculate the current local time to high precision, which allows time
synchronization.
 A satellite navigation system with global coverage may be termed a global navigation
satellite system (GNSS). As of September 2020, the United States' Global Positioning
System (GPS), Russia's Global Navigation Satellite System (GLONASS), China's BeiDou
Navigation Satellite System (BDS) and the European Union's Galileo are fully operational
GNSSs. Japan's Quasi-Zenith Satellite System (QZSS) is a (US) GPS satellite-based
augmentation system to enhance the accuracy of GPS, with satellite navigation independent
of GPS scheduled for 2023.
 The Indian Regional Navigation Satellite System (IRNSS) plans to expand to a global
version in the long term
 A communications satellite is an artificial satellite that relays and
amplifies radio telecommunication signals via a transponder; it creates a communication
channel between a source transmitter and a receiver at different locations on Earth.
Communications satellites are used for television, telephone, radio, internet,
and military applications. As of 1 January 2021, there are 2,224 communications satellites in
Earth orbit. Most communications satellites are in geostationary orbit 22,236 miles
(35,785 km) above the equator, so that the satellite appears stationary at the same point in the
sky; therefore the satellite dish antennas of ground stations can be aimed permanently at that
spot and do not have to move to track the satellite.
 A weather satellite is a type of satellite that is primarily used to monitor
the weather and climate of the Earth. Weather satellites can be polar orbiting (covering the entire
Earth asynchronously) or geostationary (hovering over the same spot on the equator).
While primarily used to detect the development and movement of storm systems and
other cloud patterns, meteorological satellites can also detect other phenomena such as city
lights, fires, effects of pollution, auroras, sand and dust storms, snow cover, ice mapping,
boundaries of ocean currents, and energy flows. Other types of environmental information are also
collected using weather satellites; weather satellite images have, for example, helped in monitoring
volcanic ash.
 An Earth observation satellite or Earth remote sensing satellite is a satellite used or
designed for Earth observation (EO) from orbit, including spy satellites and similar ones
intended for non-military uses such
as environmental monitoring, meteorology, cartography and others. The most common type
are Earth imaging satellites, that take satellite images, analogous to aerial photographs.
 An astronomy satellite is essentially a large telescope in space. Astronomy
satellites have many different applications: they can be used to make star maps, to study
mysterious phenomena such as black holes and quasars, and to take pictures of the planets in the
solar system.
 The International Space Station (ISS) is a modular space station (habitable artificial satellite)
in low Earth orbit. It is a multinational collaborative project involving five participating space
agencies: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe),
and CSA (Canada). The ownership and use of the space station is established by
intergovernmental treaties and agreements. The station serves as a microgravity and space
environment research laboratory in which scientific research is conducted
in astrobiology, astronomy, meteorology, physics, and other fields. The ISS is suited for
testing the spacecraft systems and equipment required for possible future long-duration
missions to the Moon and Mars

19. Explain the different parameters of sensors? (Resolution)


In remote sensing, the term resolution is used to represent the resolving power, which
includes not only the capability to identify the presence of two objects, but also their properties.
In qualitative terms, resolution is the amount of detail that can be observed in an image.
Four types of resolutions are defined for the remote sensing systems.
 Spatial resolution
 Spectral resolution
 Radiometric resolution
 Temporal resolution
1. Spatial resolution:

 Spatial resolution determines the quality of an image and describes how much detail of an object
can be represented in the image. It is a measure of how small an object must be in order for an
imaging system to detect it.
 Spatial resolution is related to the number of pixels used in constructing the image, i.e. to the
spatial density of the image and the optical resolution of the imaging system used to capture it.
 For example, a spatial resolution of 250 m means that one pixel represents an area of 250 by 250
meters on the ground.
2. Spectral resolution:
 Spectral resolution describes the ability of a sensor to define fine wavelength intervals
 The finer the spectral resolution, the narrower the wavelength range for a particular
channel or band.
 Spectral resolution is an important experimental parameter. If the resolution is too
low, spectral information will be lost, preventing correct identification and
characterization of the sample. If the resolution is too high, total measurement time can
be longer than necessary.

3. Radiometric resolution:
 Radiometric resolution is the sensor's sensitivity to the magnitude of the electromagnetic energy,
i.e. its ability to discriminate very slight differences in reflected or emitted energy.
 It is usually expressed as the number of bits used to record the energy; for example, 8-bit data can
distinguish 2^8 = 256 brightness levels (a short numerical example is given at the end of this answer).
 The finer the radiometric resolution of a sensor, the more sensitive it is to detecting
small differences in energy.
4. Temporal resolution and coverage:
 Temporal resolution is the revisit period: the length of time required for a satellite to
complete one entire orbit cycle and return to view the exact same area at the same
viewing angle. For example, Landsat needs 16 days, MODIS needs one day, and NEXRAD
needs 6 minutes in rain mode and 10 minutes in clear sky mode.
 Temporal coverage is the time period of sensor from starting to ending. For example,
o MODIS/Terra: 2/24/2000 through present
o Landsat 5: 1/3/1984 through present
o ICESat: 2/20/2003 to 10/11/2009
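As a quick numerical illustration of the four definitions above (a minimal sketch; the bit depth, pixel size and revisit period are assumed, illustrative values):

bits = 8                                         # radiometric resolution: 8-bit data
grey_levels = 2 ** bits                          # -> 256 distinguishable brightness levels

pixel_size_m = 250.0                             # spatial resolution: one pixel covers 250 m x 250 m
pixel_area_km2 = (pixel_size_m / 1000.0) ** 2    # -> 0.0625 km^2 of ground per pixel

revisit_days = 16                                # temporal resolution: Landsat-like revisit period
acquisitions_per_year = 365 // revisit_days      # -> about 22 imaging opportunities per year

print(grey_levels, pixel_area_km2, acquisitions_per_year)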

20. What are the advantages and Disadvantages of using remotely sensed data?
Advantages of using Remote Sensed data:

 Remotely sensed images provide a permanent record of the Earth's surface at the time of acquisition.


 Maximum area coverage.
 Dynamic themes such as water and agriculture can be monitored repeatedly.
 Data are easily collected at various scales and resolutions.
 A single remotely sensed image can be used for various applications and purposes.
 RS data can be processed rapidly using computers.
 The analysis of RS data is economical.
 Map revision at medium to small scales is economical and fast.
 Three-band images can be combined to produce colour composites.
 For three-dimensional studies, stereo satellite data are used.

Disadvantages of using Remote Sensed data:

 RS data are expensive for one-time analysis of a small area.
 Specialized training is needed for analyzing images.
 Large scale engineering maps cannot be prepared from satellite data.
 Aerial photography becomes costly when repetitive coverage is required to study dynamic
features.

21. What are the different applications of Remote Sensing? State its uses?
There are probably hundreds of applications - these are typical:
 Meteorology - Study of atmospheric temperature, pressure, water vapour, and wind
velocity.
 Oceanography: Measuring sea surface temperature, mapping ocean currents, and wave
energy spectra and depth sounding of coastal and ocean depths
 Glaciology- Measuring ice cap volumes, ice stream velocity, and sea ice distribution.
(Glacial)
 Geology- Identification of rock type, mapping faults and structure.
 Geodesy- Measuring the figure of the Earth and its gravity field.
 Topography and cartography - Improving digital elevation models.
 Agriculture Monitoring the biomass of land vegetation (Crop Type, Crop Condition
Assessment, Crop Yield Estimation, Mapping of soil characteristic, soil moisture
estimation)
 Forestry - monitoring forest cover and health, mapping soil moisture
 Botany- forecasting crop yields.
 Hydrology- Assessing water resources from snow, rainfall and underground aquifers.
 (Watershed mapping and management, Flood delineation and mapping)
 Disaster warning and assessment - Monitoring of floods and landslides, monitoring
volcanic activity, assessing damage zones from natural disasters.
 Planning applications - Mapping ecological zones, monitoring deforestation, monitoring
urban land use.
 Oil and mineral exploration- Locating natural oil seeps and slicks, mapping geological
structures, monitoring oil field subsidence.
 Military- developing precise maps for planning, monitoring military infrastructure,
monitoring ship and troop movements
 Urban- Land parcel mapping, Infrastructure mapping, Land use change detection, Future
urban expansion planning.
 Climate- the effects of climate change on glaciers and Arctic and Antarctic regions
 Sea- Monitoring the extent of flooding (Storm forecasting, Water quality monitoring,
Aquaculture inventory and monitoring, Navigation routing, Coastal vegetation mapping,
Oil spills, Coastal hazard monitoring & Assessment)
 Rock- Recognizing rock types
 Space program - remote sensing forms the backbone of the space program.
 Seismology - detecting precursors of seismic activity.
22. What are the basic elements to be considered during visual interpretation of satellite image?

IMAGE INTERPRETATION:
 Image is a pictorial representation of an object or a scene.
 Image can be analog or digital.
 Aerial photographs are generally analog, while satellite data is in digital form.
 A digital image is made up of square or rectangular areas called pixels.
 Each pixel has an associated pixel value which depends on the amount of energy reflected from
the ground.
Advantages of aerial photographs/Satellite Images over ground observation
 Synoptic view
 Time freezing ability
 Permanent record
 Spectral resolution
 Spatial resolution
 Cost and time effective
 Stereoscopic view
 Brings out relationship between objects
Methods of Image Interpretation:
 Visual
Image interpretation on a hardcopy image/photograph
Visual image interpretation on a digital image
 Digital image processing
Why do we process images?
It has been developed to deal with 3 major problems —
 To improve the image data to suppress the unwanted distortions.
 To enhance some features of the input image.
 As a means of translation between the human visual system and digital imaging devices.
ACTIVITIES OF IMAGE INTERPRETATION:
 Detection
 Recognition
 Analysis
 Deduction
 Classification
 Idealization
 Convergence of evidence
Elements of Visual Image Interpretation:
 Location, Size, Shape, Shadow, Tone, Colour, Texture, Pattern, Height and Depth, Site,
Situation, and Association

Location
There are two primary methods to obtain a precise location in the form of coordinates. 1) survey
in the field by using traditional surveying techniques or global positioning system instruments, or
2) collect remotely sensed data of the object, rectify the image and then extract the desired
coordinate information. Most scientists who choose option 1 now use relatively inexpensive GPS
instruments in the field to obtain the desired location of an object. If option 2 is chosen, most
aircraft used to collect the remotely sensed data have a GPS receiver.

Size
The size of an object is one of the most distinguishing characteristics and one of the most
important elements of interpretation. Most commonly, length, width and perimeter are measured.
To be able to do this successfully, it is necessary to know the scale of the photo. Measuring the
size of an unknown object allows the interpreter to rule out possible alternatives. It has proved to
be helpful to measure the size of a few well-known objects to give a comparison to the unknown-
object. For example, field dimensions of major sports like soccer, football, and baseball are
standard throughout the world. If objects like this are visible in the image, it is possible to
determine the size of the unknown object by simply comparing the two.

Shape
There is an infinite number of uniquely shaped natural and man-made objects in the world. A few
examples of shape are the triangular shape of modern jet aircraft and the shape of a common
single-family dwelling. Humans have modified the landscape in very interesting ways that have
given shape to many objects, but nature also shapes the landscape in its own ways. In general,
straight, recti-linear features in the environment are of human origin. Nature produces more
subtle shapes.

Shadow
Virtually all remotely sensed data are collected within 2 hours of solar noon to avoid extended
shadows in the image or photo. This is because shadows can obscure other objects that could
otherwise be identified. On the other hand, the shadow cast by an object acts as a key for its
identification, since the length of the shadow can be used to estimate the height of the object, which
is vital for its recognition. Take, for example, the Washington
Monument in Washington D.C. While viewing this from above, it can be difficult to discern the
shape of the monument, but with a shadow cast, this process becomes much easier. It is a good
practice to orient the photos so that the shadows are falling towards the interpreter. A
pseudoscopic illusion can be produced if the shadow is oriented away from the observer. This
happens when low points appear high and high points appear low.

Tone and color:


Real-world materials like vegetation, water and bare soil reflect different proportions of energy in
the blue, green, red, and infrared portions of the electro-magnetic spectrum. An interpreter can
document the amount of energy reflected from each at specific wavelengths to create a spectral
signature. These signatures can help to understand why certain objects appear as they do on black
and white or color imagery. These shades of gray are referred to as tone. The darker an object
appears, the less light it reflects. Color imagery is often preferred because, as opposed to shades
of gray, humans can detect thousands of different colors. Color aids in the process of photo
interpretation.

Texture
This is defined as the “characteristic placement and arrangement of repetitions of tone or color in
an image.” Adjectives often used to describe texture are smooth (uniform, homogeneous),
intermediate, and rough (coarse, heterogeneous). It is important to remember that texture is a
product of scale. On a large scale depiction, objects could appear to have an intermediate texture.
But, as the scale becomes smaller, the texture could appear to be more uniform, or smooth. A few
examples of texture could be the “smoothness” of a paved road, or the “coarseness” of a pine forest.

Pattern
Pattern is the spatial arrangement of objects in the landscape. The objects may be arranged
randomly or systematically. They can be natural, as with a drainage pattern of a river, or man-
made, as with the squares formed from the United States Public Land Survey System. Typical
adjectives used in describing pattern are: random, systematic, circular, oval, linear, rectangular,
and curvilinear to name a few.
Height and depth
Height and depth, also known as “elevation” and “bathymetry”, is one of the most diagnostic
elements of image interpretation. This is because any object, such as a building or an electric pole
that rises above the local landscape will exhibit some sort of radial relief. Also, objects that
exhibit this relief will cast a shadow that can also provide information as to its height or
elevation. A good example of this would be buildings of any major city.

Site/situation/association
Site has unique physical characteristics which might include elevation, slope, and type of surface
cover (e.g., grass, forest, water, bare soil). Site can also have socioeconomic characteristics such
as the value of land or the closeness to water. Situation refers to how the objects in the photo or
image are organized and “situated” in respect to each other. Most power plants have materials and
building associated in a fairly predictable manner. Association refers to the fact that when you
find a certain activity within a photo or image, you usually encounter related or “associated”
features or activities. Site, situation, and association are rarely used independent of each other
when analyzing an image. An example of this would be a large shopping mall. Usually there are
multiple large buildings, massive parking lots, and it is usually located near a major road or
intersection.
23. Explain about Digital Image Processing?
Digital image processing is the use of a digital computer to process digital
images through an algorithm. As a subcategory or field of digital signal processing, digital image
processing has many advantages over analog image processing. It allows a much wider range of
algorithms to be applied to the input data and can avoid problems such as the build-up
of noise and distortion during processing. Since images are defined over two dimensions (perhaps
more) digital image processing may be modeled in the form of multidimensional systems. The
generation and development of digital image processing are mainly affected by three factors:
first, the development of computers; second, the development of mathematics (especially the
creation and improvement of discrete mathematics theory); third, the demand for a wide range of
applications in environment, agriculture, military, industry and medical science has increased.
1. Pre Processing:

 The correction of deficiencies and the removal of flaws present in the data are called
pre-processing (sometimes referred to as image restoration, image correction or image
rectification).
 Pre-processing techniques involved in remote sensing may be categorized into two broad
categories
Radiometric corrections:
When the emitted or reflected electro-magnetic energy is observed by a sensor on-
board an aircraft or spacecraft, the observed energy does not coincide with the energy emitted
or reflected from the same object observed from a short distance.
1. Detector response calibration
 De-striping
 Removal of missing scan line
 Random noise removal
 Vignetting removal (Corner to center clarity Differ)
2. Sun angle and topographic correction
3. Atmospheric Correction

Geometric Corrections:
Geometric errors that arise from
 The Earth Curvature
 Platform Motion
 Relief Displacement
 Non-linearities in the scanning motion
 The Earth rotation
These errors are handled through systematic corrections, non-systematic corrections and coordinate transformations.

2. Image Enhancement:
Image enhancement is the procedure of improving the quality and information content of the
original data before further processing. Common practices include contrast enhancement, spatial
filtering, density slicing, and false colour composites (FCC). Contrast enhancement or stretching is
performed by a linear transformation that expands the original range of grey levels. Spatial filtering
enhances naturally occurring linear features such as faults, shear zones, and lineaments. Density slicing
converts the continuous grey tone range into a series of density intervals, each marked by a separate
colour or symbol, to represent different features.
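A minimal sketch of the linear contrast stretch mentioned above, assuming NumPy is available; band is a hypothetical low-contrast 8-bit array and its values are made up for illustration:

import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    # expand the original grey-level range of the band to the full output range
    in_min, in_max = float(band.min()), float(band.max())
    scaled = (band.astype(float) - in_min) / (in_max - in_min)
    return (scaled * (out_max - out_min) + out_min).astype(np.uint8)

band = np.array([[60, 70], [80, 90]], dtype=np.uint8)   # low-contrast example data
print(linear_stretch(band))                             # grey levels now span the full 0-255 range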
FCC is more commonly used in remote sensing than true colour because a pure blue band is often
not used, since scattering is dominant at blue wavelengths. The FCC is standardized because it conveys
the maximum interpretable information about objects on the Earth's surface and satisfies most users. In
a standard FCC, vegetation looks red because vegetation is highly reflective in the NIR band and the
colour applied to that band is red. Water bodies look dark if they are clear or deep because the IR
region is an absorption band for water. Turbid or shallow water bodies appear in shades of blue because
such water bodies reflect in the green wavelength and the colour applied to that band is blue.
3. Image Transformation:
An image transformation is a function or operator that takes an image as its input and produces an
image as its output. Fourier transforms, principal component analysis (also
called Karhunen-Loeve analysis), and various spatial filters are examples of
frequently used image transformation procedures.
Image Reduction:
 Image Reduction: image reduction techniques allow the analyst to obtain a
regional perspective of the remotely sensed data.
 A common screen resolution is 1024 x 768 pixels, which is much lower than the
number of pixels generally present in an image.
 The computer screen therefore cannot display the entire image unless the visual
representation of the image is reduced; this is commonly known as zooming
out.
Image Magnification:
 Referred to as zoom in. This technique is most commonly employed for two
purposes.
 To improve the display – scale of the image for enhanced visual interpretation.
 To match the display- scale of another image.

Colour Composition:
Remote sensing images are displayed using three primary colours. A true colour
composite uses the visible red (B04), green (B03) and blue (B02) bands in the
corresponding red, green and blue colour channels, resulting in a naturally
coloured result that is a good representation of the Earth as humans would see it.
The colour composite in which the blue colour gun is assigned to the green band,
the green gun to the red band, and the red gun to the NIR band is particularly
popular; it is called an infrared colour composite, and is the same rendition as
that found in colour infrared film.
Transect Extraction:
 The ability to extract brightness values along a user-specified transect (also
referred to as a spatial profile) between two points in a single-band or
multiple-band color composite image is important in many remote sensing
image interpretation applications. Basically, the spatial profile, displayed in histogram
format, depicts the magnitude of the brightness value at each pixel along the
transect.
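A minimal sketch of extracting such a spatial profile between two pixel positions, assuming NumPy; the nearest-pixel sampling approach and the toy image are illustrative choices, not a specific software package's method:

import numpy as np

def transect(band, row0, col0, row1, col1):
    # sample brightness values along the straight line between two pixel positions
    n = int(max(abs(row1 - row0), abs(col1 - col0))) + 1
    rows = np.linspace(row0, row1, n).round().astype(int)
    cols = np.linspace(col0, col1, n).round().astype(int)
    return band[rows, cols]

band = np.arange(25).reshape(5, 5)        # toy 5 x 5 image
print(transect(band, 0, 0, 4, 4))         # profile along the diagonal: [ 0  6 12 18 24]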
Contrast Enhancement:
 One material may reflect a tremendous amount of energy in a certain
wavelength while another material reflects much less energy in the
same wavelength. This results in contrast between the two types of
material when recorded by the remote sensing system.
 Unfortunately, different materials often reflect similar amounts of radiant
flux throughout the visible, near infrared and middle-infrared portions of
the electromagnetic spectrum, resulting in a relatively low-contrast
imagery. In addition, besides this obvious low-contrast characteristic of
biophysical materials, there are cultural factors at work.
 The detectors on remote sensing systems are designed to record a relatively
wide range of scene brightness values (e.g., 0-255) without becoming
saturated.
Filtering:
 Spatial filtering term is the filtering operations that are performed directly
on the pixels of an image
 Spatial frequency describes the periodic distributions of light and dark in an
image. High spatial frequencies correspond to features such as sharp edges
and fine details, whereas low spatial frequencies correspond to features such
as global shape
 Filters are classified as:
 Low-pass (i.e., preserve low frequencies)
 High-pass (i.e., preserve high frequencies)
 Band-pass (i.e., preserve frequencies within a band)
 Band-reject (i.e., reject frequencies within a band)
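A minimal sketch of the low-pass and high-pass filters listed above, applied by convolution with 3 x 3 kernels; NumPy and SciPy are assumed, and the kernels and random test image are illustrative choices:

import numpy as np
from scipy.ndimage import convolve

low_pass = np.ones((3, 3)) / 9.0                      # mean filter: preserves low frequencies (smoothing)
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)     # emphasizes edges and fine detail

band = np.random.randint(0, 256, (100, 100)).astype(float)   # toy single-band image
smoothed = convolve(band, low_pass, mode='nearest')
edges = convolve(band, high_pass, mode='nearest')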
Image Transformation:
 Image transformations typically involve the manipulation of multiple bands
of data, whether from a single multispectral image or from two or more
images of the same area acquired at different times (i.e. multi temporal
image data). ... Basic image transformations apply simple arithmetic
operations to the image
 The basic transformations are scaling, rotation, translation, and shear. Other
important types of transformations are projections and mappings. By scaling
relative to the origin, all coordinates of the points defining an entity are
multiplied by the same factor, possibly different for each axis.

24. Supervised and Unsupervised classification?

Unsupervised vs Supervised Classification in Remote Sensing


The 3 most common remote sensing classification methods are:

1. Supervised classification
2. Unsupervised classification
3. Object-based image analysis

What are the main differences between supervised and unsupervised classification? You can follow
along as we classify in ArcGIS.

Supervised Classification in Remote Sensing


Supervised and unsupervised classification differ in how each spectral class is associated with an
information class. In supervised classification, you select training samples and classify your image
based on your chosen samples. An algorithm is then used to summarize the multispectral information from
the specified areas of the image to form class signatures.

When you run a supervised classification, you perform the following 3 steps:

1. Select training areas


2. Generate signature file
3. Classify
Step 1. Select training areas

In this step, you find training samples for each land cover class you want to create. For example, draw
a polygon for an urban area such as a road or parking lot. Then, continue drawing urban areas
representative of the entire image. Make sure it’s not just a single area.

Once you have enough samples for urban areas, you can start adding training samples for another
land cover class. For example, you can add polygons over treed areas for the “forest” class.

If you’re using ArcGIS, the steps are:

 Beforehand, you must enable the Image Analysis Toolbar (Windows ‣ Image Analysis).
 Add the training sample manager. Then, click the “Draw Polygon” icon to add training
samples.
 For each land cover class, draw polygons. Then, merge them into a single class.

Step 2. Generate signature file

At this point, you should have training samples for each class. The signature file is what holds all the
training sample data that you’ve collected up to this point. It’s a way to save your samples for you to
work on at a later time.

The steps in ArcGIS are:

 Create a signature file by clicking the “create a signature file” icon.


Step 3. Classify

The most common supervised classification methods include:

 Parametric Rules:

 Minimum Distance to mean


 Maximum likelihood
 Linear Discriminant

 Non-Parametric Rules.

 Parallelepiped
 Feature Space

Minimum Distance to Mean: This is a simple classification strategy. The method first
analyses the areas designated in the training stage and then calculates a mean value in each band for
each training class. These mean values define the location of the class centre in spectral space. The
process then assigns each pixel in the input image to the class with the closest class centre in
spectral space.
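A minimal sketch of this minimum-distance-to-mean rule, assuming NumPy; the class mean vectors and the test pixel below are invented numbers used only for illustration:

import numpy as np

class_means = {                                   # one mean vector (one value per band) per training class
    'water':      np.array([20.0, 15.0, 10.0]),
    'vegetation': np.array([40.0, 60.0, 120.0]),
    'urban':      np.array([90.0, 85.0, 80.0]),
}

def minimum_distance(pixel):
    # assign the pixel to the class whose mean is closest in spectral space
    distances = {name: np.linalg.norm(pixel - mean) for name, mean in class_means.items()}
    return min(distances, key=distances.get)

print(minimum_distance(np.array([35.0, 55.0, 110.0])))   # -> 'vegetation'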

Maximum Likelihood/ Bayesian Classifier:


The most powerful supervised parametric classifier in common use is that of maximum
likelihood, which is based on statistics (mean, variance/covariance). The maximum likelihood
classification method applies probability theory to the classification task. From the training set
classes, the method determines the class centres and the variability of raster values in each input band
for each class. This information allows the process to determine, for each raster cell, the probability
of membership in each class; the probability depends on the distance from the cell to the class centre
and on the size and shape of the class in spectral space. The maximum likelihood method computes all of
the class probabilities for each raster cell and assigns the cell to the class with the highest
probability value.
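A hedged sketch of the maximum likelihood rule using Gaussian class models with equal prior probabilities, assuming NumPy and SciPy; the class means and covariance matrices are invented training statistics:

import numpy as np
from scipy.stats import multivariate_normal

classes = {                       # (mean vector, covariance matrix) estimated from training data
    'water':      (np.array([20.0, 15.0]),  np.array([[ 9.0, 2.0], [2.0,  9.0]])),
    'vegetation': (np.array([40.0, 120.0]), np.array([[25.0, 5.0], [5.0, 36.0]])),
}

def maximum_likelihood(pixel):
    # assign the pixel to the class with the highest Gaussian probability density
    likelihoods = {name: multivariate_normal(mean, cov).pdf(pixel)
                   for name, (mean, cov) in classes.items()}
    return max(likelihoods, key=likelihoods.get)

print(maximum_likelihood(np.array([38.0, 110.0])))   # -> 'vegetation'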
Linear Discriminant Classifier:
It is used to determine which variables discriminate between two or more naturally occurring
groups. Discriminant analysis is a technique for classifying a set of observations into predefined
classes. The purpose is to determine the class of an observation based on a set of variables known as
predictors or input variables. The model is built from a set of observations for which the classes
are known; this set of observations is referred to as the training set. Based on the training set, the
technique constructs a set of linear functions of the predictors, known as discriminant functions.
Parallelepiped Classifier:
The parallelepiped classifier divides each axis of the multispectral feature space into ranges.
The decision region for each class is defined on the basis of a lowest and a highest value
on each axis. The accuracy of classification depends on the selection of the lowest and highest values
in consideration of the statistics of each class.
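A minimal sketch of the parallelepiped rule, assuming NumPy; the per-band lowest and highest values stand in for training statistics and are invented for illustration:

import numpy as np

class_boxes = {                   # (lowest, highest) value per band for each training class
    'water':      (np.array([10.0, 5.0]),  np.array([30.0, 25.0])),
    'vegetation': (np.array([30.0, 90.0]), np.array([60.0, 150.0])),
}

def parallelepiped(pixel):
    # return the first class whose box contains the pixel, otherwise leave it unclassified
    for name, (low, high) in class_boxes.items():
        if np.all(pixel >= low) and np.all(pixel <= high):
            return name
    return 'unclassified'

print(parallelepiped(np.array([45.0, 120.0])))    # -> 'vegetation'
print(parallelepiped(np.array([200.0, 10.0])))    # -> 'unclassified'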

Feature space classifier: A feature space image is simply a graph of the data file
values of one band of data against the values of another band. The transformation of a multilayer raster
image into a feature space image is done by mapping the input pixel values to positions in the feature
space image.
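A minimal sketch of building a feature space image as a two-dimensional histogram of two bands, assuming NumPy; the random values simply stand in for real band data:

import numpy as np

band1 = np.random.randint(0, 256, 10000)    # e.g. red band pixel values
band2 = np.random.randint(0, 256, 10000)    # e.g. near-infrared band pixel values

# feature_space[i, j] counts how many pixels have band1 value i and band2 value j
feature_space, _, _ = np.histogram2d(band1, band2, bins=256, range=[[0, 256], [0, 256]])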

Unsupervised Classification in Remote Sensing


In the unsupervised case, an algorithm is first applied to the image and some spectral classes (also
called clusters) are formed; this process is known as unsupervised training.
Unsupervised classification generates clusters based on similar spectral characteristics inherent in the
image. You then label each cluster without providing training samples of your own.

The steps for running an unsupervised classification are:

1. Generate clusters
2. Assign classes

Step 1. Generate clusters

In this step, the software clusters pixels into a set number of classes. So, the first step is to assign
the number of classes you want to generate. Also, you have to identify which bands you want to
use.

If you’re using Landsat, here is a list of Landsat bands. For Sentinel, here are Sentinel-2 bands.
We also have a handy guide on spectral signatures which explains which spectral bands are
useful for classifying different classes.

In ArcGIS, the steps for generating clusters are:

 First, you have to activate the spatial analyst extension (Customize ‣ Extensions ‣ Spatial
Analyst).
 In this unsupervised classification example, we use Iso-clusters (Spatial Analysis Tools ‣
Multivariate ‣ Iso clusters).

INPUT: The image you want to classify.


NUMBER OF CLASSES: The number of classes you want to generate during the unsupervised
classification. For example, if you are working with multispectral imagery (red, green, blue, and
NIR bands), then the number here would be 40 (4 bands x 10 clusters per band).
MINIMUM CLASS SIZE: This is the minimum number of pixels required to form a unique class.

 One-pass clustering
 Sequential Clustering
 Statistical Clustering
 K-means clustering
 ISODATA Clustering
 RGB Clustering

One-Pass Clustering: This method establishes initial class centres and assigns
cells to classes in one processing pass by determining the spectral distance between each cell and the
established class centres. The method locates class centres and assigns cells to classes by
computing the Euclidean distance from an input cell to each class centre. If the distance from an
input cell to every existing class centre exceeds a threshold value, the cell becomes the centre of a
new class; otherwise the cell is assigned to the closest class.
Sequential Clustering:
In this method the pixels are analysed one at a time, pixel by pixel and line by line. It
assumes that each pixel is initially an individual cluster and systematically merges clusters by
checking the distances between their means. The spectral distance between each analysed pixel and the
previously defined cluster means is calculated. If the distance is greater than some threshold value,
the pixel begins a new cluster; otherwise it contributes to the nearest existing cluster, in which case
that cluster's mean is recalculated. If too many clusters are formed, they are merged by adjusting
the distance threshold between cluster means.
Statistical Clustering:
It overlooks the spatial relationship between adjacent pixels. The algorithm uses 3 x 3
windows in which all pixels have similar vectors in feature space. The histogram in high-dimensional
space, H(v), is the occurrence frequency of the grey-level vector v. The algorithm finds peaks in this
multi-dimensional histogram.
K-Means:
The K-means (also known as C-means) method uses an iterative (repetitive) approach to
determine classes. The K-means algorithm analyses a sample of the input to determine a
specified number of initial class centres. Cells are then assigned to classes by determining the
closest class centre (a small code sketch follows this list).
Iterative algorithm
✓ The number of clusters K is specified by the user
✓ Most popular clustering algorithm
✓ Initialize K cluster mean vectors randomly
✓ Assign each pixel to one of the K clusters based on minimum feature distance
✓ After all pixels are assigned to the K clusters, each cluster mean is recomputed
✓ Iterate until the cluster mean vectors stabilize
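A compact K-means sketch on pixel spectra, assuming NumPy; the number of clusters, the random initialization and the random test data are illustrative, and operational packages add better initialization and convergence checks:

import numpy as np

def kmeans(pixels, k=3, iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), k, replace=False)]   # random initial cluster means
    for _ in range(iterations):
        # assign each pixel to the nearest cluster mean (minimum spectral distance)
        distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # recompute each cluster mean from its assigned pixels (keep the old mean if a cluster is empty)
        means = np.array([pixels[labels == i].mean(axis=0) if np.any(labels == i) else means[i]
                          for i in range(k)])
    return labels, means

pixels = np.random.randint(0, 256, (1000, 4)).astype(float)     # 1000 pixels, 4 bands
labels, means = kmeans(pixels, k=3)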

ISODATA Clustering:
 ISODATA initial distribution of five hypothetical mean vectors using +/- 1 standard deviation in
both bands as beginning and ending points.
 In the first iteration, each candidate pixel is compared to each cluster mean and assigned to the
cluster whose mean is closest.
 During the second iteration, a new mean is calculated for each cluster based on the actual spectral
locations of the pixels assigned to each cluster. After the new cluster mean vectors are selected,
every pixel in the scene is assigned to one of the new clusters
 This split-merge-assign process continues until there is little change in class assignment between
iterations (the T threshold is reached) or the maximum number of iterations is reached (M)

In summary, in each ISODATA iteration pixels are assigned to the cluster with the closest spectral mean
and the cluster means are then recalculated; this assign-recalculate cycle (illustrated here with a
simple 2-D example) continues until the maximum number of iterations or the convergence threshold is
reached.
RGB Clustering:
RGB clustering is a simple classification and data compression technique for three bands of data. It
is a fast and simple algorithm that quickly compresses a three-band image into a single-band pseudo-
colour image, without necessarily classifying any particular feature. The algorithm plots all pixels in
3-D feature space and then divides this space into clusters on a grid. In the simpler version of this
function, each of these clusters becomes a class in the output thematic raster layer.
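A minimal sketch of the idea behind RGB clustering: each of the three bands is quantized onto a coarse grid and the three grid indices are combined into a single pseudo-colour class per pixel; NumPy is assumed and the number of bins per band is an illustrative choice:

import numpy as np

def rgb_cluster(red, green, blue, bins_per_band=4):
    # compress three 8-bit bands into one band of grid-cell classes (here up to 4*4*4 = 64 classes)
    step = 256 // bins_per_band
    r = np.clip(red // step, 0, bins_per_band - 1)
    g = np.clip(green // step, 0, bins_per_band - 1)
    b = np.clip(blue // step, 0, bins_per_band - 1)
    return r * bins_per_band ** 2 + g * bins_per_band + b

red = np.random.randint(0, 256, (5, 5))
green = np.random.randint(0, 256, (5, 5))
blue = np.random.randint(0, 256, (5, 5))
pseudo_colour = rgb_cluster(red, green, blue)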

Advantages and Disadvantages of Unsupervised Classification?

Advantages

✓ No prior knowledge of the image area is required

✓ Human error is minimized

✓ Unique spectral classes are produced

✓ Relatively fast and easy to perform

Disadvantages

✓ Spectral classes do not represent features on the ground

✓ Does not consider spatial relationships in the data

✓ Can be very time consuming to interpret spectral classes

✓ Spectral properties vary over time, across images
