Unit 1 Questions and Answers
Introduction:
Remote sensing is an art and science of obtaining information about an object or feature without
physically coming in contact with that object or feature. Humans apply remote sensing in their day-to-day
business, through vision, hearing and sense of smell. The data collected can be of many forms: variations
in acoustic wave distributions (e.g., sonar), variations in force distributions (e.g., gravity meter),
variations in electromagnetic energy distributions (e.g., eye) etc. These remotely collected data through
various sensors may be analyzed to obtain information about the objects or features under investigation.
In this course we will deal with remote sensing through electromagnetic energy sensors only.
Thus, remote sensing is the process of inferring surface parameters from measurements of the
electromagnetic radiation (EMR) from the Earth’s surface. This EMR can either be reflected or emitted
from the Earth’s surface. In other words, remote sensing is detecting and measuring electromagnetic
(EM) energy emanating or reflected from distant objects made of various materials, so that we can
identify and categorize these objects by class or type, substance and spatial distribution [American
Society of Photogrammetry, 1975].
Remote sensing provides a means of observing large areas at fine spatial and temporal resolutions. It
finds extensive applications in civil engineering including watershed studies, hydrological states and
fluxes simulation, hydrological modeling, disaster management services such as flood and drought
warning and monitoring, damage assessment in case of natural calamities, environmental monitoring,
urban planning etc.
‘Remote’ means far away, and ‘sensing’ means observing or acquiring some information.
Of our five senses, we use three as remote sensors
1. Watch a cricket match from the stadium (sense of sight)
2. Smell freshly cooked curry in the oven (sense of smell)
3. Hear a telephone ring (sense of hearing)
Then what are our other two senses and why are they not used “remotely”?
4. Try to feel smoothness of a desktop (Sense of touch)
5. Eat a mango to check the sweetness (sense of taste)
In the last two cases, we are actually touching the object with our sense organs to collect
information about it.
Distance of Remote sensing:
Remote sensing occurs at a distance from the object or area of interest. Interestingly, there is no
clear definition of this distance. It could be 1 m, 1,000 m, or greater than 1 million meters from the
object or area of interest. In fact, virtually all astronomy is based on RS. Many of the most innovative RS
systems, and visual and digital image processing methods, were originally developed for RS of
extraterrestrial landscapes such as the Moon, Mars, Saturn, Jupiter, etc.
RS techniques may also be used to analyse inner space. For example, an electron microscope and
its associated hardware may be used to obtain photographs of extremely small objects on the skin, in the
eye, etc. Similarly, an X-ray device is an RS instrument used to examine bones and organs inside the body.
In such cases, the distance is less than 1 m.
Data collection may take place directly in the field, or at some remote distance from the
object or area of interest. Data that are collected directly in the field (the study site or the ground for
which data are to be collected) are termed in situ data, while data collected remotely are called
remote sensing data.
In Situ Data: In-situ data are measurements made at the actual location of the object or area of
interest. For example, when collecting remote sensing data, in-situ data are used to verify that the
remotely collected measurements agree with conditions at the actual location.
Transducers are devices that convert variations in physical quantities (such as
pressure or brightness) into electrical signals, or vice versa. Many different transducers are
available: a scientist could use a thermometer to measure the temperature of the air, soil, or
water; a spectrometer to measure spectral reflectance; an anemometer to measure the speed of the
wind; or a psychrometer to measure the humidity of the air. The data recorded by the transducer
may be an analog signal, with voltage variations related to the intensity of the property being
measured. Often these analog signals are transformed into digital values using analog-to-digital
conversion procedures.
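As an illustration of that analog-to-digital conversion step, the following Python sketch (an assumed example; the voltage range and bit depth are arbitrary) quantizes an analog transducer voltage into a discrete digital number (DN):

def analog_to_digital(voltage, v_min=0.0, v_max=5.0, bits=8):
    # An 8-bit converter maps the 0-5 V range onto the integers 0-255.
    levels = 2 ** bits
    fraction = (voltage - v_min) / (v_max - v_min)
    dn = int(round(fraction * (levels - 1)))
    return max(0, min(levels - 1, dn))  # clamp to the valid DN range

print(analog_to_digital(3.3))  # -> 168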
1-Passive sensors detect natural radiation that is emitted or reflected by the object or surrounding
area being observed. Reflected sunlight is the most common source of radiation measured by
passive sensors. Examples of passive remote sensors include film photography, infrared sensors, and
radiometers.
2-Active remote sensing, on the other hand, emits energy in order to scan objects and areas
whereupon a sensor then detects and measures the radiation that is reflected or backscattered from
the target. RADAR is an example of active remote sensing where the time delay between
emission and return is measured, establishing the location, height, speeds and direction of an
object.
5. Explain in detail the step-by-step procedure of the remote sensing process with a neat diagram.
(Or) Describe briefly the different elements of RS.
The process involved in an RS system requires energy. For example,
when we view the screen of a computer monitor, we are actively engaged in RS. A physical
quantity (light) emanates from the screen, which is a source of radiation. The radiated light
passes over a distance, and thus is remote to some extent, until it encounters and is captured by
a sensor (eyes). Each eye sends a signal to a processor (brain) which records the data and
interprets this into information.
Now consider, if the energy being remotely sensed comes from the sun, the energy is
radiated by atomic particles at the source (the sun), propagates through the vacuum of space at
the speed of light, interacts with the earth’s atmosphere, interacts with the earth’s surface,
some amount of energy reflects back, interacts with the earth’s atmosphere once again, and
finally reaches the remote sensor, where it interacts with various optical systems, filters, film
emulsions, or detectors.
"Remote sensing is the science (and to some extent, art) of acquiring information about the Earth's
surface without actually being in contact with it. This is done by sensing and recording reflected or
emitted energy and processing, analyzing, and applying that information."
In much of remote sensing, the process involves an interaction between incident radiation and
the targets of interest. This is exemplified by the use of imaging systems where the following seven
elements are involved. Note, however that remote sensing also involves the sensing of emitted energy and
the use of non-imaging sensors.
1. Energy Source or Illumination (A) - the first requirement for remote sensing is to have an energy
source which illuminates or provides electromagnetic energy to the target of interest. (Active RS or
Passive RS)
2. Radiation and the Atmosphere (B) - as the energy travels from its source to the target, it will come in
contact with and interact with the atmosphere it passes through. This interaction may take place a second
time as the energy travels from the target to the sensor.
3. Interaction with the Target (C) - once the energy makes its way to the target through the atmosphere,
it interacts with the target depending on the properties of both the target and the radiation.
4. Recording of Energy by the Sensor (D) - after the energy has been scattered by, or emitted from the
target, we require a sensor (remote - not in contact with the target) to collect and record the
electromagnetic radiation.
5. Transmission, Reception, and Processing (E) - the energy recorded by the sensor has to be
transmitted, often in electronic form, to a receiving and processing station where the data are processed
into an image (hardcopy and/or digital).
6. Interpretation and Analysis (F) - the processed image is interpreted, visually and/or digitally or
electronically, to extract information about the target which was illuminated.
7. Application (G) - The final element of the remote sensing process is achieved when we apply the
information we have been able to extract from the imagery about the target in order to better understand
it, reveal some new information, or assist in solving a particular problem.
These seven elements comprise the remote sensing process from beginning to end. We will be covering
all of these in sequential order throughout the five chapters of this tutorial, building upon the information
learned as we go.
The energy E of a photon is given by the Planck–Einstein relation

E = hf = hc / λ

where h is Planck's constant, f is the frequency, λ is the wavelength, and c is the speed
of light. In quantum theory (see first quantization) the energy of the photons is thus directly
proportional to the frequency of the EMR wave.
Likewise, the momentum p of a photon is also proportional to its frequency and inversely
proportional to its wavelength:

p = E / c = hf / c = h / λ
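A short worked example of these relations in Python, for green light at 0.5 micrometres (the constants are standard values):

h = 6.626e-34        # Planck's constant, J*s
c = 3.0e8            # speed of light, m/s
wavelength = 0.5e-6  # 0.5 micrometres, in metres

energy = h * c / wavelength  # E = hc/lambda, about 3.98e-19 J (~2.5 eV)
momentum = h / wavelength    # p = h/lambda, about 1.33e-27 kg*m/s
print(energy, momentum)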
These differences make it possible to identify different earth surface features or materials by
analysing their spectral reflectance patterns or spectral signatures. These signatures can be
visualised in so-called spectral reflectance curves as a function of wavelength. The figure in the
right column shows typical spectral reflectance curves of three basic types of Earth
features: green vegetation, dry bare soil and clear water.
The spectral reflectance curve of healthy green vegetation has a significant minimum of
reflectance in the visible portion of the electromagnetic spectrum resulting from the pigments in
plant leaves. Reflectance increases dramatically in the near infrared. Stressed vegetation can also
be detected because stressed vegetation has a significantly lower reflectance in the infrared.
Vegetation covers a large portion of the Earth's land surface. Its role in the regulation of the
global temperature, absorption of CO2 and other important functions make it a land cover type
of great significance and interest. Remote sensing can take advantage of the particular manner
that vegetation reflects the incident electromagnetic energy and obtain information about the
vegetation.
Cellular leaf structure and its interaction with electromagnetic energy. Most visible light is
absorbed, while almost half of the near infrared energy is reflected.
Under the upper epidermis (the thin layer of cells that forms the top surface of the leaf) there are
primarily two layers of cells. The top one is the palisade parenchyma and consists of elongated
cells, tightly arranged in a vertical manner. In this layer resides most of the chlorophyll, the pigment
that is responsible for capturing the solar energy and powering the process of photosynthesis. The
lower level is the spongy parenchyma, consisting of irregularly shaped cells, with a lot of air
spaces between them, in order to allow the circulation of gases.
The spectral reflectance curve of bare soil is considerably less variable. The reflectance curve is
affected by moisture content, soil texture, surface roughness, presence of iron oxide and organic
matter. These factors are less dominant than the absorbance features observed in vegetation
reflectance spectra.
The water curve is characterized by high absorption in the near-infrared wavelength range and
beyond. Because of this absorption property, water bodies as well as features containing water
can easily be detected, located and delineated with remote sensing data. Turbid water has a
higher reflectance in the visible region than clear water. This is also true for waters containing
high chlorophyll concentrations. These reflectance patterns are used to detect algae colonies as
well as contaminations such as oil spills or industrial waste water (more about different
reflections in water can be found in the tutorial Ocean Colour in the Coastal Zone).
Features on the Earth reflect, absorb, transmit, and emit electromagnetic energy
from the sun. Special digital sensors have been developed to measure all types of
electromagnetic energy as it interacts with objects in all of the ways listed above. The
ability of sensors to measure these interactions allows us to use remote sensing to
measure features and changes on the Earth and in our atmosphere. A measurement of
energy commonly used in remote sensing of the Earth is reflected energy (e.g., visible
light, near-infrared, etc.) coming from land and water surfaces. The amount of energy
reflected from these surfaces is usually expressed as a percentage of the amount of energy
striking the objects. Reflectance is 100% if all of the light striking an object bounces off and is
detected by the sensor. If none of the light returns from the surface, reflectance is said to be 0%.
In most cases, the reflectance value of each object for each area of the electromagnetic spectrum
is somewhere between these two extremes. Across any range of wavelengths, the percent
reflectance values for landscape features such as water, sand, roads, forests, etc. can be plotted
and compared.
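The definition above amounts to a simple ratio; a minimal Python example with illustrative (assumed) energy values:

incident, reflected = 500.0, 175.0   # same units, illustrative values
print(100.0 * reflected / incident)  # percent reflectance = 35.0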
Most remote sensing applications process digital images to extract spectral signatures at
each pixel and use them to divide the image into groups of similar pixels (segmentation) using
different approaches. As a last step, they assign a class to each group (classification) by
comparing with known spectral signatures. Depending on the pixel resolution, a pixel can represent
many spectral signatures "mixed" together; that is why much remote sensing analysis is done to
"unmix mixtures". Ultimately, correct matching of the spectral signature recorded by an image pixel with
the spectral signature of existing elements leads to accurate classification in remote sensing.
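A minimal Python sketch of the "unmixing" idea, assuming a pixel is a linear mixture of two known endmember signatures (all reflectance values here are illustrative, not measured):

import numpy as np

vegetation = np.array([0.05, 0.08, 0.04, 0.50])  # reflectance in 4 bands
soil = np.array([0.15, 0.20, 0.25, 0.30])
pixel = np.array([0.09, 0.13, 0.12, 0.42])       # observed "mixed" spectrum

endmembers = np.column_stack([vegetation, soil])  # 4 bands x 2 endmembers
fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print(fractions)  # least-squares estimates of the vegetation and soil fractions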
9. Energy interaction with the atmosphere in remote sensing? (or) Atmospheric effects in RS?
Before radiation used for remote sensing reaches the Earth's surface it has to travel through some
distance of the Earth's atmosphere. Particles and gases in the atmosphere can affect the incoming
light and radiation. These effects are caused by the mechanisms of Scattering and Absorption.
Scattering occurs when particles or large gas molecules present in the atmosphere interact with
and cause the electromagnetic radiation to be redirected from its original path. How much
scattering takes place depends on several factors including the wavelength of the radiation, the
abundance of particles or gases, and the distance the radiation travels through the atmosphere.
There are two broad types of scattering which take place:
1. Selective Scattering:
2. Non – Selective Scattering
1. SELECTIVE SCATTERING:
Rayleigh scattering- Occurs when particles are very small compared to the wavelength
of the radiation. These could be particles such as small specks of dust or nitrogen and
oxygen molecules. Rayleigh scattering causes shorter wavelengths of energy to be
scattered much more than longer wavelengths (a short calculation follows this list). Rayleigh scattering is the dominant
scattering mechanism in the upper atmosphere. The fact that the sky appears "blue"
during the day is because of this phenomenon. As sunlight passes through the
atmosphere, the shorter wavelengths (i.e. blue) of the visible spectrum are scattered more
than the other (longer) visible wavelengths. At sunrise and sunset the light has to travel
farther through the atmosphere than at midday and the scattering of the shorter
wavelengths is more complete; this leaves a greater proportion of the longer wavelengths
to penetrate the atmosphere.
Mie scattering: occurs when the particles are just about the same size as the
wavelength of the radiation. Dust, pollen, smoke and water vapour are common
causes of Mie scattering which tends to affect longer wavelengths than those
affected by Rayleigh scattering. Mie scattering occurs mostly in the lower portions of
the atmosphere where larger particles are more abundant, and dominates when
cloud conditions are overcast.
Raman scattering: occurs when the particles are about the same size as, or slightly larger or
smaller than, the wavelength of the radiation.
2. NONSELECTIVE SCATTERING: This occurs when the particles are much larger
than the wavelength of the radiation. Water droplets and large dust particles can cause
this type of scattering.
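The short calculation promised above: Rayleigh scattering intensity varies approximately as 1/λ⁴ (a standard result not stated explicitly in the text), which is why blue light is scattered far more than red:

blue, red = 0.45, 0.70    # wavelengths in micrometres
print((red / blue) ** 4)  # ~5.9: blue light is scattered about six times more than red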
10. REMOTE SENSING DATA ACQUISITION AND INTERPRETATION
Up to this point, we have discussed the principal sources of electromagnetic energy, the
propagation of this energy through the atmosphere, and the interaction of this energy with
earth surface features. Combined, these factors result in energy "signals" from which we wish
to extract information. We now consider the procedures by which these signals are detected,
recorded and interpreted.
The detection of electromagnetic energy can be performed either photographically or
electronically. The process of photography uses chemical reactions on the surface of a light-
sensitive film to detect energy variations within a scene. Photographic systems offer many
advantages: they are relatively simple and inexpensive and provide a high degree of spatial
detail and geometric integrity. Electronic sensors generate an electrical signal that corresponds
to the energy variations in the original scene. A familiar example of an electronic sensor is a
television camera. Although considerably more complex and expensive than photographic
systems, electronic sensors offer the advantages of a broader spectral range of sensitivity,
improved calibration potential, and the ability to electronically transmit image data. Another
mode of electronic sensing is recording with the help of a charge-coupled device, which is used to
convert the electrical signal to a digital signal.
By developing a photograph, we obtain a record of its detected signals. Thus, the film
acts as both the detecting and recording medium. Electronic sensor signals are generally
recorded onto magnetic tape. Subsequently, the signals may be converted to an image form by
photographing a TV-like screen display of the data, or by using a specialized film recorder. In
these cases, photographic film is used only as a recording medium.
We can see that the data interpretation aspects of remote sensing can involve analysis
of pictorial (image) and/or numerical data. Visual interpretation of pictorial image data has long
been the workhorse of remote sensing. Visual techniques make use of the excellent ability of
the human mind to qualitatively evaluate spatial patterns in a scene. The ability to make
subjective judgments based on selective scene elements is essential in many interpretation
efforts. Visual interpretation techniques have certain disadvantages, however, in that they may
require extensive training and are labour intensive. In addition, spectral characteristics are not
always fully evaluated in visual interpretation efforts. This is partly because of the limited ability
of the eye to discern tonal values on an image and the difficulty for an interpreter to
simultaneously analyze numerous spectral images. In applications where spectral patterns are
highly informative, it is therefore preferable to analyze numerical, rather than pictorial, image
data. In this case, the image is described by a matrix of numerical brightness values covering
the scene. These values may be analyzed by quantitative procedures employing a computer,
which is referred to as digital interpretation.
The use of computer-assisted analysis techniques permits the spectral patterns in
remote sensing data to be more fully examined. Digital interpretation is assisted by image
processing techniques such as image enhancement, information extraction, etc. It also permits
the data analysis process to be largely automated, providing cost advantages over visual
interpretation techniques. However, just as humans are limited in their ability to interpret
spectral patterns, computers are limited in their ability to evaluate spatial patterns. Therefore,
visual and numerical techniques are complementary in nature, and consideration must be given
to which approach (or combination of approaches) best fits a particular application.
11. Interaction of EMR with Earth surface features, Soil, Vegetation and water?
The interaction of electromagnetic radiation with the Earth's surface is driven by three physical
processes: reflection, absorption, and transmission of radiation. Absorption involves a reduction in
radiation intensity as its energy is converted on reaching an object on the Earth's surface. Reflection
involves the returning or throwback of the radiation incident on an object on the Earth's surface, whilst
transmission entails the transfer of radiant energy through an object on the Earth's surface to
surrounding bodies. Together, these three processes account for an object's radiant flux:
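The colon above most likely introduced the standard energy-balance relation (reconstructed here): the incident energy at each wavelength is partitioned into the three components,

E_I(λ) = E_R(λ) + E_A(λ) + E_T(λ)

where E_I is the incident, E_R the reflected, E_A the absorbed, and E_T the transmitted energy.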
12. What is atmospheric window?
Atmospheric windows are those portions of the EM radiation spectrum with low absorption and high
transmission.
Following are some examples of the atmospheric windows:
(0.3 – 1.3 μm): Visible/near infrared window.
(1.5 – 1.8, 2.0 – 2.5, and 3.5 – 4.1μm): Mid infrared window.
(7.0 – 15.0 μm): Thermal/far infrared window.
Some wavelengths cannot be used in remote sensing because our atmosphere absorbs essentially
all the photons at these wavelengths that are produced by the sun. In particular, the molecules of water,
carbon dioxide, oxygen, and ozone in our atmosphere block solar radiation. The wavelength ranges in
which the atmosphere is transparent are called atmospheric windows. Remote sensing projects must be
conducted in wavelengths that occur within atmospheric windows. Outside of these windows, there is
simply no radiation from the sun to detect--the atmosphere has blocked it.
The figure above shows the percentage of light transmitted at various wavelengths from the near
ultraviolet to the far infrared, and the sources of atmospheric opacity are also given. You can see that
there is plenty of atmospheric transmission of radiation at 0.5 microns, 2.5 microns, and 3.5 microns, but
in contrast there is a great deal of atmospheric absorption at 2.0, 3.0, and about 7.0 microns. Both passive
and active remote sensing technologies do best if they operate within the atmospheric windows.
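A hedged Python sketch that checks a wavelength against the example windows listed above (only those windows; a real transmission model is far more detailed):

ATMOSPHERIC_WINDOWS_UM = [(0.3, 1.3), (1.5, 1.8), (2.0, 2.5), (3.5, 4.1), (7.0, 15.0)]

def in_atmospheric_window(wavelength_um):
    # True if the wavelength (micrometres) falls inside a listed window.
    return any(lo <= wavelength_um <= hi for lo, hi in ATMOSPHERIC_WINDOWS_UM)

print(in_atmospheric_window(0.5))  # True: visible light is transmitted
print(in_atmospheric_window(3.0))  # False: strongly absorbed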
A black body in thermal equilibrium (that is, at a constant temperature) emits electromagnetic
black-body radiation. The radiation is emitted according to Planck's law, meaning that it has a spectrum that is
determined by the temperature alone (see figure at right), not by the body's shape or composition.
It is an ideal emitter: at every frequency, it emits as much or more thermal radiative energy as any
other body at the same temperature.
It is a diffuse emitter: measured per unit area perpendicular to the direction, the energy is
radiated isotropically, independent of direction.
Planck's Law:
Planck's Law can be generalized as such: Every object emits radiation at all times and
at all wavelengths. If you think about it, this law is pretty hard to wrap your brain around. We know that
the sun emits visible light (below left), infrared waves, and ultraviolet waves (below right), but did you
know that the sun also emits microwaves, radio waves, and X-rays? OK... you are probably saying, the
sun is a big nuclear furnace, so it makes sense that it emits all sorts of electromagnetic radiation.
However, Planck's Law states that every object emits over the entire electromagnetic spectrum. That
means that you emit radiation at all wavelengths -- so does everything around you!
Two images of the sun taken at different wavelengths of the electromagnetic spectrum. The left image
shows the sun's emission at a wavelength in the visible range. The right image is the ultraviolet emission
of the sun. Note: colors in these images and the ones above are deceptive. There is no sense of "color" in
spectral regions other than visible light. The use of color in these "false-color" images is only used as an
aid to show radiation intensity at one particular wavelength. Credit: NASA/JPL
Now before you dismiss this statement out-of-hand, let me say that you are not emitting X-rays in any
measurable amount (thank goodness!). The mathematics behind Planck's Law hinges on the fact that there
is a wide distribution of vibration speeds for the molecules in a substance. This means that it is possible
for matter to emit radiation at any wavelength, and in fact it does.
Another common misconception that Planck's Law dispels is that matter selectively emits radiation.
Consider what happens when you turn off a light bulb. Is it still emitting radiation? You might be
tempted to say "No" because the light is off. However, Planck's Law tells us that while the light bulb may
no longer be emitting radiation that we can see, it is still emitting at all wavelengths (most likely, it is
emitting copious amounts of infrared radiation). Another example that you hear occasionally on TV
weathercasts goes something like this: "When the sun sets, the ground begins to emit infrared
radiation..." This is certainly not true, by nature of Planck's Law (and besides, how would the ground know
when the sun sets, anyway?). We'll talk more about radiation emission from the ground in a future lesson.
For now, please dismiss such statements as hogwash. The surface of the earth emits radiation all the time
and at all wavelengths.
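For reference, Planck's Law can also be written quantitatively (standard form, added here for completeness) as the spectral radiance of a black body at temperature T:

B_λ(T) = (2hc² / λ⁵) · 1 / (e^(hc/(λ k_B T)) − 1)

where h is Planck's constant, c the speed of light, λ the wavelength, and k_B the Boltzmann constant. The curve is non-zero at every wavelength, which is exactly the "emits at all wavelengths" statement above.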
Wien's Law:
At this point I know what you are thinking... there must be a "catch". In fact there is. While all matter
emits radiation at all wavelengths, it does not do so equally. This is where the next radiation law comes
in. Wien's Law states that the wavelength of peak emission is inversely proportional to the
temperature of the emitting object. Put another way, the hotter the object, the shorter the wavelength of
maximum emission. You have probably observed this law in action all the time without even realizing it.
Want to know what I mean? Check out this steel bar. Which end might you pick up? Certainly not the
right end... it looks hot. Why does it "look hot"? Well, the wavelength of peak emission for the right side
of the bar is obviously shorter than the left side's peak emission wavelength. You see this shift in the
peak emission wavelength as a color changes from red to orange to yellow as the metal's temperature
increases.
Note: I should point out that even though the steel bar is a yellow-white color at the end, the peak
emission is still in the infrared part of the electromagnetic spectrum. However, the peak is so close to the
visible part of the spectrum, that there is a significant amount of visible light also being emitted from the
steel. Judging by the look of this photograph, the steel has a temperature of roughly 1500 kelvins,
resulting in a max emission wavelength of 2 microns (remember visible light is 0.4-0.7 microns). Here is
a chart showing how I estimated the steel temperature. To the left of the visibly red metal, the bar is still
likely several hundred degrees Celsius. However, in this section of the bar, the peak emission wavelength
is far into the IR portion of the spectrum -- so much so that no visible light emission is discernible with
the human eye.
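Wien's Law can be written as λ_max = b / T, with b ≈ 2898 μm·K. A quick Python check (temperatures in kelvins, approximate round values) reproduces the numbers used in this section:

b = 2898.0  # Wien's displacement constant, micrometre-kelvins
for name, T in [("steel bar", 1500.0), ("sun", 6000.0), ("earth", 288.0)]:
    print(name, round(b / T, 2), "micrometres")
# steel bar ~1.93 um (the "roughly 2 microns" above), sun ~0.48 um, earth ~10 um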
So, now that we've established Wien's Law, how do we apply it to the emission sources that affect the
atmosphere? Consider the chart below showing the emission curves (called Planck functions) for both the
sun and the earth.
The emission spectrum of the sun (orange curve) compared to the earth's emission (dark red curve). The
x-axis shows wavelength in factors of 10 (called a "log scale"). The y-axis is the amount of energy per
unit area per unit time per unit wavelength. I have kept the units arbitrary because as you can see, they
are messy. Credit: David Babb
Note the idealized spectrum for the earth's emission (dark red line) of electromagnetic radiation compared
to the sun's electromagnetic spectrum (orange line). The radiating temperature of the sun is 6000 degrees
Celsius compared to the earth's measly 15 degrees Celsius. This means that given its high radiating
temperature, the sun's peak emission occurs near 0.5 microns, on the short-wave end of the visible
spectrum. Meanwhile the Earth's peak emission is located in the infrared portion of the electromagnetic
spectrum.
By the way, because the sun's peak emission is located around 0.5 microns, we see it as having a yellow
quality. But this is not the case for all stars. Some stars in our galaxy are somewhat cooler and exhibit a
reddish hue, while others are much hotter and appear blue. The constellation Orion(link is
external) contains the red supergiant Betelgeuse and several blue supergiants, the largest being Rigel and
Bellatrix. Can you spot them in this photograph of Orion?
Stefan–Boltzmann Law:
Examine once again the graph of the sun's emission curve versus the Earth's emission curve. Pay
particular attention to the energy values on the left axis (for the sun) and right axis (for the earth). The
first thing to notice is that the energy values are given in powers of 10 (that is, 10^6 is equal to 1,000,000).
This means that if we compare the peak emissions from the earth and sun we see that the sun at its peak
wavelength emits 30,000 times more energy than the earth at its peak. In fact, if we add up the total
energy emitted by each body (by adding the energy contribution at each wavelength), we see that the sun
emits over 150,000 times more energy per unit area than the earth!
I calculated the numbers above using the third radiation law that you need to know, the Stefan-Boltzmann
Law. The Stefan-Boltzmann Law states that the total amount of energy per unit area emitted by an
object is proportional to the 4th power of its temperature. We'll talk more about this relationship
when we discuss satellite remote sensing. It is also particularly useful when we want to understand how
much energy the earth's surface emits in the form of infrared radiation.
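A quick check of the "over 150,000 times" figure using E = σT⁴ (temperatures in kelvins, approximate):

sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T_sun, T_earth = 5800.0, 288.0
print(round((T_sun / T_earth) ** 4))  # ~164,000 times more energy per unit area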
Kirchhoff's Law:
In the preceding radiation laws, we have been talking about the ideal amount of radiation that can
be emitted by an object. This theoretical limit is called "black body radiation". However, the actual
radiation emitted by an object can be much less than the ideal, especially at certain wavelengths.
Kirchhoff's Law describes the linkage between an object's ability to emit at a particular wavelength with
its ability to absorb radiation at that same wavelength. In plain language, Kirchhoff's Law states that for
an object whose temperature is not changing, an object that absorbs radiation well at a particular
wavelength will also emit radiation well at that wavelength. One implication of Kirchhoff's law is as
follows: If we want to measure a particular constituent in the atmosphere (water vapor for example), we
need to choose a wavelength that is emitted well by water vapor (otherwise we wouldn't detect it).
However, since water vapor readily emits at our chosen wavelength, it also readily absorbs radiation at
this wavelength -- which is going to cause some problems measurement-wise.
We'll look at the implications of Kirchhoff's Law in a later section. For now, we need to complete
our discussion of radiation by looking at the possible things that can happen to a beam of radiation as it
passes through a medium.
2.1 Terrestrial Laser Scanners
Ground-based (terrestrial) laser scanners are also used as remote sensing platforms:
Short-range scanners operate at ranges of 50-100 m with panoramic scanning and are often used
to map building interiors or small objects.
Long-range scanners can measure at distances of up to 1 km and are frequently used in open-pit
mining and topographic survey applications.
2.2 Drone
A drone is a miniature remotely piloted aircraft. It is designed to fulfil the requirements for a low-cost
platform with long endurance and moderate payload capacity, and the capability to operate without a
runway or from a small runway. A drone may carry equipment for photography, infrared detection, radar
observation and TV surveillance, and uses a satellite communication link. An onboard computer controls
the payload and stores data from the different sensors and instruments. The payload computer utilizes a
GSM/GPRS (where available) or independent satellite downlink, and its position and payload status can be
monitored from anywhere in the world with an internet connection.
The drone was developed in Britain during World War II as the "short sky spy", originally conceived
for military reconnaissance. It now plays an important role in remote sensing. Its unique advantage is that
it can be accurately positioned above the area for which data are required, and it is capable of providing
both night and day data.
2.3 Aircraft
Special aircraft with cameras and sensors mounted on vibration-free platforms are traditionally used to
acquire aerial photographs and images of land surface features. While low-altitude aerial photography
results in large-scale images providing detailed information on the terrain, high-altitude smaller-scale
images offer the advantage of covering a larger study area at lower spatial resolution.
The National High Altitude Photography (NHAP) program (1978), coordinated by the US Geological
Survey, was started to acquire coverage of the United States with a uniform scale and format. Besides
aerial photography, multispectral, hyperspectral and microwave imaging is also carried out from aircraft.
Aircraft platforms offer an economical method of remote sensing data collection for small to large study
areas with cameras, electronic imagers, across-track and along-track scanners, and radar and microwave
scanners. AVIRIS is a famous airborne hyperspectral imaging operation.
There are two types of well recognized satellite platforms- manned satellite platform and unmanned
satellite platform.
Manned Satellite Platforms:
Manned satellite platforms are used as the last step, for rigorous testing of the remote sensors on
board, so that they can finally be incorporated into unmanned satellites. This multi-level remote sensing
concept has already been presented. The crew in a manned satellite operates the sensors as per the
programme schedule.
16. What are the different satellite orbitals explain with diagram?
When a satellite is launched into space, it moves in a well-defined path around the
Earth, which is called the orbit of the satellite. The gravitational pull of the Earth and the velocity of
the satellite are the two basic factors that keep a satellite in a particular orbit. The spatial and
temporal coverage of the satellite depends on its orbit. Orbits can be classified in three basic ways:
by inclination, by altitude, and by shape.
1. By Inclination:
Equatorial Orbit
Inclined Orbit
Polar Orbit
1. Inclined orbit: A satellite is said to occupy an inclined orbit around the Earth if the orbit
exhibits an angle other than 0° to the equatorial plane. This angle is called the orbit's
inclination. Similarly, a planet is said to have an inclined orbit around the Sun if its orbit makes
an angle other than 0° to the ecliptic plane.
Satellites orbit the Earth at different heights, at different speeds and along different paths. The two
most common types of orbit are "geostationary" (jee-oh-STAY-shun-air-ee) and "polar". A
geostationary satellite travels from west to east over the equator, while a polar satellite travels
from north to south.
Sun Synchronous Satellites:
Sun-synchronous satellites are placed at relatively low altitudes, normally a few hundred to a
thousand kilometres above the Earth's surface. These satellites travel from the North Pole to the
South Pole as the Earth turns below them. Sun-synchronous satellites pass over the same part of the
Earth each day at the same local time, making the collection of different forms of data and
communication easier.
A geostationary satellite travels from west to east over the equator. It moves in the same
direction and at the same rate Earth is spinning. From Earth, a geostationary satellite looks like it
is standing still since it is always above the same location.
Polar-orbiting satellites travel in a North-South direction from pole to pole. As Earth spins
underneath, these satellites can scan the entire globe, one strip at a time.
What Was the First Satellite in Space?
Sputnik 1 was the first satellite in space. The Soviet Union launched it in 1957.
2. BY ALTITUDE:
Low Earth Orbit (LEO)
Medium Earth Orbit (MEO)
Geostationary Earth Orbit (GEO)
LEO: Low Earth Orbit (160 – 2,000 km); a satellite takes roughly 90–120 minutes to circle the Earth.
MEO: Medium Earth Orbit (2,000 – 35,786 km); orbital periods of about 2 to 24 hours.
GEO: Geostationary Earth Orbit (about 36,000 km); a 24-hour period, matching the Earth's rotation.
3. By Shape:
Circular Orbit:- The satellite stays at a fixed distance from the barycenter, tracing a circle.
o Examples: geostationary orbit, polar orbit and equatorial orbit.
Elliptical Orbit:- One object revolves around another in an oval-shaped path
called an ellipse.
The closest point is the perigee and the farthest point is the apogee.
Spatial resolution determines the quality of an image and describes how much detail of an object
can be represented by the image. It is a measure of how small an object must be
for an imaging system to still detect it.
Spatial resolution also refers to the number of pixels utilized in the construction of the image: the
spatial resolution of a digital image is related to the spatial density of the image and the optical
resolution of the imaging system used to capture it.
For example, a spatial resolution of 250 m means that one pixel represents an area of 250 by 250
meters on the ground.
2. Spectral resolution:
Spectral resolution describes the ability of a sensor to define fine wavelength intervals
The finer the spectral resolution, the narrower the wavelength range for a particular
channel or band.
Spectral resolution is an important experimental parameter. If the resolution is too
low, spectral information will be lost, preventing correct identification and
characterization of the sample. If the resolution is too high, total measurement time can
be longer than necessary.
3. Radiometric resolution:
Sensor’s sensitivity to the magnitude of the electromagnetic energy,
Sensor’s ability to discriminate very slight differences in (reflected or emitted) energy,
The finer the radiometric resolution of a sensor, the more sensitive it is to detecting
small differences in energy.
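Radiometric resolution is commonly expressed as bit depth, where an n-bit sensor records 2^n distinct levels (a standard relationship, added here as an illustration):

for bits in (6, 8, 11):
    print(bits, "bits ->", 2 ** bits, "grey levels")  # 64, 256, 2048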
4. Temporal resolution and coverage:
Temporal resolution is the revisit period: the length of time it takes for a satellite to
complete one entire orbit cycle, i.e., to return to the exact same area at the same
viewing angle. For example, Landsat needs 16 days, MODIS needs one day, and NEXRAD
needs 6 minutes in rain mode and 10 minutes in clear-sky mode.
Temporal coverage is the time period over which a sensor has been collecting data. For example,
o MODIS/Terra: 2/24/2000 through present
o Landsat 5: 1/3/1984 through present
o ICESat: 2/20/2003 to 10/11/2009
20. What are the advantages and disadvantages of using remotely sensed data?
Disadvantages of using remotely sensed data:
RS data are expensive for one-time analysis of a small area.
Specialized training is needed for analyzing the images.
Large-scale engineering maps cannot be prepared from satellite data.
Aerial photography becomes costly when repetitive photographs are required to study
dynamic features.
21. What are the different applications of Remote Sensing? State its uses?
There are probably hundreds of applications - these are typical:
Meteorology - Study of atmospheric temperature, pressure, water vapour, and wind
velocity.
Oceanography: Measuring sea surface temperature, mapping ocean currents, and wave
energy spectra and depth sounding of coastal and ocean depths
Glaciology- Measuring ice cap volumes, ice stream velocity, and sea ice distribution.
Geology- Identification of rock type, mapping faults and structure.
Geodesy- Measuring the figure of the Earth and its gravity field.
Topography and cartography - Improving digital elevation models.
Agriculture Monitoring the biomass of land vegetation (Crop Type, Crop Condition
Assessment, Crop Yield Estimation, Mapping of soil characteristic, soil moisture
estimation)
Forestry- monitoring forest health, mapping soil moisture
Botany- forecasting crop yields.
Hydrology- Assessing water resources from snow, rainfall and underground aquifers.
(Watershed mapping and management, Flood delineation and mapping)
Disaster warning and assessment - Monitoring of floods and landslides, monitoring
volcanic activity, assessing damage zones from natural disasters.
Planning applications - Mapping ecological zones, monitoring deforestation, monitoring
urban land use.
Oil and mineral exploration- Locating natural oil seeps and slicks, mapping geological
structures, monitoring oil field subsidence.
Military- developing precise maps for planning, monitoring military infrastructure,
monitoring ship and troop movements
Urban- Land parcel mapping, Infrastructure mapping, Land use change detection, Future
urban expansion planning.
Climate- the effects of climate change on glaciers and Arctic and Antarctic regions
Sea- Monitoring the extent of flooding (Storm forecasting, Water quality monitoring,
Aquaculture inventory and monitoring, Navigation routing, Coastal vegetation mapping,
Oil spills, Coastal hazard monitoring & Assessment)
Rock- Recognizing rock types
Space program- remote sensing is the backbone of the space program
Seismology- detecting earthquake precursors.
22. What are the basic elements to be considered during visual interpretation of satellite image?
IMAGE INTERPRETATION:
Image is a pictorial representation of an object or a scene.
Image can be analog or digital.
Aerial photographs are generally analog, while satellite data is in digital form.
A digital image is made up of square or rectangular areas called pixels.
Each pixel has an associated pixel value which depends on the amount of energy reflected from
the ground
Advantages of aerial photographs/Satellite Images over ground observation
Synoptic view
Time freezing ability
Permanent record
Spectral resolution
Spatial resolution
Cost and time effective
Stereoscopic view
Brings out relationship between objects
Methods of Image Interpretation:
Visual
Image interpretation on a hardcopy image/photograph
Visual image interpretation on a digital image
Digital image processing
Why do we process images?
Digital image processing has been developed to deal with three major problems:
To improve the image data by suppressing unwanted distortions.
To enhance some features of the input image.
As a means of translation between the human visual system and digital imaging devices.
ACTIVITIES OF IMAGE INTERPRETATION:
Detection
Recognition
Analysis
Deduction
Classification
Idealization
Convergence of evidence
Elements of Visual Image Interpretation:
Location, Size, Shape, Shadow, Tone, Colour, Texture, Pattern, Height and Depth, Site,
Situation, and Association
Location
There are two primary methods to obtain a precise location in the form of coordinates: 1) survey
in the field using traditional surveying techniques or global positioning system instruments, or
2) collect remotely sensed data of the object, rectify the image and then extract the desired
coordinate information. Most scientists who choose option 1 now use relatively inexpensive GPS
instruments in the field to obtain the desired location of an object. If option 2 is chosen, most
aircraft used to collect the remotely sensed data have a GPS receiver.
Size
The size of an object is one of the most distinguishing characteristics and one of the most
important elements of interpretation. Most commonly, length, width and perimeter are measured.
To be able to do this successfully, it is necessary to know the scale of the photo. Measuring the
size of an unknown object allows the interpreter to rule out possible alternatives. It has proved to
be helpful to measure the size of a few well-known objects to give a comparison to the unknown-
object. For example, field dimensions of major sports like soccer, football, and baseball are
standard throughout the world. If objects like this are visible in the image, it is possible to
determine the size of the unknown object by simply comparing the two.
Shape
There is an infinite number of uniquely shaped natural and man-made objects in the world. A few
examples of shape are the triangular shape of modern jet aircraft and the shape of a common
single-family dwelling. Humans have modified the landscape in very interesting ways that has
given shape to many objects, but nature also shapes the landscape in its own ways. In general,
straight, recti-linear features in the environment are of human origin. Nature produces more
subtle shapes.
Shadow
Virtually all remotely sensed data are collected within 2 hours of solar noon to avoid extended
shadows in the image or photo. This is because shadows can obscure other objects that could
otherwise be identified. On the other hand, the shadow cast by an object acts as a key for the
identification of the object, as the length of the shadow can be used to estimate the height of the
object, which is vital for its recognition. Take, for example, the Washington
Monument in Washington D.C. While viewing this from above, it can be difficult to discern the
shape of the monument, but with a shadow cast, this process becomes much easier. It is a good
practice to orient the photos so that the shadows are falling towards the interpreter. A
pseudoscopic illusion can be produced if the shadow is oriented away from the observer. This
happens when low points appear high and high points appear low.
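A hedged example of the height-from-shadow idea, assuming the sun's elevation angle is known from the image acquisition metadata (both numbers below are illustrative):

import math

shadow_length_m = 120.0    # measured from the image, illustrative value
sun_elevation_deg = 55.0   # from acquisition metadata, illustrative value
height_m = shadow_length_m * math.tan(math.radians(sun_elevation_deg))
print(round(height_m, 1))  # ~171.4 m, close to the Washington Monument's ~169 m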
Texture
This is defined as the “characteristic placement and arrangement of repetitions of tone or color in
an image.” Adjectives often used to describe texture are smooth (uniform, homogeneous),
intermediate, and rough (coarse, heterogeneous). It is important to remember that texture is a
product of scale. On a large scale depiction, objects could appear to have an intermediate texture.
But, as the scale becomes smaller, the texture could appear to be more uniform, or smooth. A few
examples of texture could be the "smoothness" of a paved road, or the "coarseness" of a pine forest.
Pattern
Pattern is the spatial arrangement of objects in the landscape. The objects may be arranged
randomly or systematically. They can be natural, as with a drainage pattern of a river, or man-
made, as with the squares formed from the United States Public Land Survey System. Typical
adjectives used in describing pattern are: random, systematic, circular, oval, linear, rectangular,
and curvilinear to name a few.
Height and depth
Height and depth, also known as “elevation” and “bathymetry”, is one of the most diagnostic
elements of image interpretation. This is because any object, such as a building or an electric pole,
that rises above the local landscape will exhibit some sort of radial relief. Also, objects that
exhibit this relief will cast a shadow that can also provide information as to its height or
elevation. A good example of this would be buildings of any major city.
Site/situation/association
Site has unique physical characteristics which might include elevation, slope, and type of surface
cover (e.g., grass, forest, water, bare soil). Site can also have socioeconomic characteristics such
as the value of land or the closeness to water. Situation refers to how the objects in the photo or
image are organized and "situated" with respect to each other. Most power plants have materials and
buildings associated in a fairly predictable manner. Association refers to the fact that when you
find a certain activity within a photo or image, you usually encounter related or “associated”
features or activities. Site, situation, and association are rarely used independent of each other
when analyzing an image. An example of this would be a large shopping mall. Usually there are
multiple large buildings, massive parking lots, and it is usually located near a major road or
intersection.
23. Explain about Digital Image Processing?
Digital image processing is the use of a digital computer to process digital
images through an algorithm. As a subcategory or field of digital signal processing, digital image
processing has many advantages over analog image processing. It allows a much wider range of
algorithms to be applied to the input data and can avoid problems such as the build-up
of noise and distortion during processing. Since images are defined over two dimensions (perhaps
more) digital image processing may be modeled in the form of multidimensional systems. The
generation and development of digital image processing are mainly affected by three factors:
first, the development of computers; second, the development of mathematics (especially the
creation and improvement of discrete mathematics theory); third, the demand for a wide range of
applications in environment, agriculture, military, industry and medical science has increased.
1. Pre Processing:
The correction of deficiencies and the removal of flaws present in the data are called pre-
processing (sometimes referred to as image restoration, image correction or image
rectification).
Pre-processing techniques involved in remote sensing may be categorized into two broad
categories
Radiometric corrections:
When the emitted or reflected electromagnetic energy is observed by a sensor on board
an aircraft or spacecraft, the observed energy does not coincide with the energy emitted
or reflected from the same object observed from a short distance.
1. Detector response calibration
De-striping
Removal of missing scan line
Random noise removal
Vignetting removal (Corner to center clarity Differ)
2. Sun angle and topographic correction
3. Atmospheric Correction
Geometric Corrections:
Geometric errors that arise from
The Earth Curvature
Platform Motion
Relief Displacement
Non-linearities in scanning motion
The Earth rotation
Geometric corrections may be systematic or non-systematic, and are applied through coordinate transformation.
2. Image Enhancement:
Image enhancement is the procedure of improving the quality and information content of
the original data before further processing. Common practices include contrast enhancement, spatial
filtering, density slicing, and FCC (false colour composites). Contrast enhancement or stretching is
performed by a linear transformation expanding the original range of grey levels. Spatial filtering
enhances naturally occurring linear features such as faults, shear zones, and lineaments. Density slicing
converts the continuous grey-tone range into a series of density intervals, each marked by a separate
colour or symbol to represent a different feature.
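A minimal sketch of the linear contrast stretch just described: the original grey-level range is expanded to fill the full display range [0, 255] (illustrative values):

import numpy as np

def linear_stretch(band):
    band = band.astype(np.float64)
    old_min, old_max = band.min(), band.max()
    return ((band - old_min) / (old_max - old_min) * 255.0).astype(np.uint8)

dn = np.array([[60, 70], [80, 90]])  # illustrative low-contrast values
print(linear_stretch(dn))            # [[0, 85], [170, 255]]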
FCC is commonly used in remote sensing rather than true colour because of the absence
of a pure blue colour band, since scattering is dominant at blue wavelengths. The FCC
is standardized because it gives the maximum identifiable information about objects on Earth and
satisfies all users. In standard FCC, vegetation looks red (Fig. 3.6) because vegetation is very
reflective in the NIR and the colour applied to that band is red. Water bodies look dark if they are
clear or deep, because the IR is an absorption band for water. Turbid or shallow water bodies give
shades of blue, because such water bodies reflect in the green wavelengths and the colour
applied is blue.
3. Image Transformation:
A function or operator that takes an image as its input and produces an
image as its output. Fourier transforms, principal component analysis (also
called Karhunen-Loeve analysis), and various spatial filters, are examples of
frequently used image transformation procedures.
Image Reduction:
Image Reduction: Image Reduction techniques allow the analyst to obtain a
regional perspective of the remotely sensed data.
A common screen resolution is 1024 × 768, which is much lower than the
number of pixels generally present in an image.
The computer screen cannot display the entire image unless the visual
representation of the image is reduced. This is commonly known as
zooming out.
Image Magnification:
Referred to as zoom in. This technique is most commonly employed for two
purposes.
To improve the display – scale of the image for enhanced visual interpretation.
To match the display- scale of another image.
Colour Composition:
Remote sensing images are displayed in three primary colours. A true
colour composite uses the visible-light bands red (B04), green (B03)
and blue (B02) in the corresponding red, green and blue colour channels,
resulting in a naturally coloured image that is a good representation of the Earth
as humans would see it.
In particular, the colour composite that assigns the blue colour gun
to the green band, the green gun to the red band, and the red gun to the NIR band is very
popular, and is called an infrared colour composite, which is the same as that
found in colour infrared film.
Transect Extraction:
The ability to extract brightness values along a user-specified transect (also
referred to as a spatial profile) between two points in a single-band or
multiple-band color composite image is important in many remote sensing
image interpretation applications. Basically, the spatial profile, in histogram
format, depicts the magnitude of the brightness value at each pixel along the
transect.
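A simple Python sketch of transect extraction: brightness values are sampled along a straight line between two pixel coordinates in a single band (nearest-neighbour sampling; real packages interpolate more carefully):

import numpy as np

def extract_transect(band, row0, col0, row1, col1, n_samples=100):
    rows = np.linspace(row0, row1, n_samples).round().astype(int)
    cols = np.linspace(col0, col1, n_samples).round().astype(int)
    return band[rows, cols]  # brightness value at each pixel along the transect

band = np.arange(100).reshape(10, 10)          # illustrative single-band image
print(extract_transect(band, 0, 0, 9, 9, 10))  # values along the diagonal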
Contrast Enhancement:
One material would reflect a tremendous amount of energy in a certain
wavelength and another material would reflect much less energy in the
same wavelength. This would result in contrast between two types of
material when recorded by the remote sensing system.
Unfortunately, different materials often reflect similar amounts of radiant
flux throughout the visible, near-infrared and middle-infrared portions of
the electromagnetic spectrum, resulting in relatively low-contrast
imagery. In addition, besides this obvious low-contrast characteristic of
biophysical materials, there are cultural factors at work.
The detectors on remote sensing systems are designed to record a relatively
wide range of scene brightness values (e.g., 0-255) without becoming
saturated.
Filtering:
The term spatial filtering refers to filtering operations that are performed directly
on the pixels of an image.
Spatial frequency describes the periodic distributions of light and dark in an
image. High spatial frequencies correspond to features such as sharp edges
and fine details, whereas low spatial frequencies correspond to features such
as global shape
Filters are classified as:
Low-pass (i.e., preserve low frequencies)
High-pass (i.e., preserve high frequencies)
Band-pass (i.e., preserve frequencies within a band)
Band-reject (i.e., reject frequencies within a band)
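A low-pass example of the spatial filtering just listed: a 3 × 3 mean filter replaces each pixel with the average of its neighbourhood, smoothing high-frequency detail while preserving the low-frequency global shape (a minimal sketch; production code would use a library convolution):

import numpy as np

def mean_filter_3x3(band):
    padded = np.pad(band.astype(np.float64), 1, mode="edge")
    out = np.zeros(band.shape, dtype=np.float64)
    for dr in (-1, 0, 1):          # sum the nine shifted copies of the image
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + band.shape[0],
                          1 + dc:1 + dc + band.shape[1]]
    return out / 9.0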
Image Transformation:
Image transformations typically involve the manipulation of multiple bands
of data, whether from a single multispectral image or from two or more
images of the same area acquired at different times (i.e., multi-temporal
image data). Basic image transformations apply simple arithmetic
operations to the image data.
The basic transformations are scaling, rotation, translation, and shear. Other
important types of transformations are projections and mappings. By scaling
relative to the origin, all coordinates of the points defining an entity are
multiplied by the same factor, possibly different for each axis.
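A small sketch of the basic transformations named above, applied to a point (x, y) as 2 × 2 matrix multiplications about the origin:

import numpy as np

def scale(sx, sy):
    return np.array([[sx, 0.0], [0.0, sy]])

def rotate(angle_deg):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

point = np.array([2.0, 0.0])
print(scale(3, 1) @ point)  # [6. 0.]: scaled by 3 along x
print(rotate(90) @ point)   # ~[0. 2.]: rotated 90 degrees about the origin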
1. Supervised classification
2. Unsupervised classification
3. Object-based image analysis
What are the main differences between supervised and unsupervised classification? You can follow
along as we classify in ArcGIS.
When you run a supervised classification, you perform three steps: select training areas, generate a signature file, and classify the image.
In this step, you find training samples for each land cover class you want to create. For example, draw
a polygon for an urban area such as a road or parking lot. Then, continue drawing urban areas
representative of the entire image. Make sure it’s not just a single area.
Once you have enough samples for urban areas, you can start adding training samples for another
land cover class. For example, you can add polygons over treed areas for the “forest” class.
Beforehand, you must enable the Image Analysis Toolbar (Windows ‣ Image Analysis).
Add the training sample manager. Then, click the “Draw Polygon” icon to add training
samples.
For each land cover class, draw polygons. Then, merge them into a single class.
At this point, you should have training samples for each class. The signature file is what holds all the
training sample data that you’ve collected up to this point. It’s a way to save your samples for you to
work on at a later time.
Classification decision rules may be:
Parametric rules
Non-parametric rules (e.g., Parallelepiped, Feature Space)
Minimum Distance to Mean: This is a simple classification strategy. The method first
analyses the areas designated in the training stage and then calculates a mean value in each band for each
training class. These mean values define the location of the class centre in spectral space. The process
then assigns each pixel in the input image to the class with the closest class centre in spectral space.
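A minimal Python sketch of the minimum-distance-to-mean rule (the class means and pixel values below are illustrative, standing in for statistics computed from training areas):

import numpy as np

def minimum_distance_classify(pixels, class_means):
    # pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)  # index of the closest class centre per pixel

means = np.array([[30.0, 80.0], [90.0, 40.0]])   # e.g. water and soil means
pixels = np.array([[35.0, 75.0], [85.0, 45.0]])
print(minimum_distance_classify(pixels, means))  # [0 1]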
Feature space classifier: A feature space image is simply a graph of the data file
values of one band of data against the values of another band. The transformation of a multilayer raster
image into a feature space image is done by mapping the input pixel values to positions in the feature
space image.
1. Generate clusters
2. Assign classes
In this step, the software clusters pixels into a set number of classes. So, the first step is to assign
the number of classes you want to generate. Also, you have to identify which bands you want to
use.
If you’re using Landsat, here is a list of Landsat bands. For Sentinel, here are Sentinel-2 bands.
We also have a handy guide on spectral signatures which explains which spectral bands are
useful for classifying different classes.
First, you have to activate the spatial analyst extension (Customize ‣ Extensions ‣ Spatial
Analyst).
In this unsupervised classification example, we use Iso-clusters (Spatial Analysis Tools ‣
Multivariate ‣ Iso clusters).
One-pass clustering
Sequential Clustering
Statistical Clustering
K-means clustering
ISODATA Clustering
RGB Clustering
One-Pass Clustering: This method establishes initial class centers and assigns
cells to classes in one processing pass by determining the spectral distance between each cell and
established class centers. This method locates class centers and assigns cells to classes by
computing the Euclidean distance from an input cell to each class centre. If the distance from an
input cell to the existing class centres exceeds a threshold value, the cell becomes the centre of a new
class; if not, the cell is assigned to the closest class.
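A sketch of the one-pass rule just described: a pixel farther than a threshold from every existing centre starts a new class, otherwise it joins the closest one (centres are not updated here, one of several possible variants):

import numpy as np

def one_pass_cluster(pixels, threshold):
    centers, labels = [], []
    for p in pixels:
        d = [np.linalg.norm(p - c) for c in centers]
        if not d or min(d) > threshold:
            centers.append(p.astype(float))   # this cell becomes a new centre
            labels.append(len(centers) - 1)
        else:
            labels.append(int(np.argmin(d)))  # assign to the closest class
    return np.array(labels), np.array(centers)

pixels = np.array([[10.0, 10.0], [12.0, 9.0], [60.0, 55.0]])
print(one_pass_cluster(pixels, threshold=20.0)[0])  # [0 0 1]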
Sequential Clustering:
In this method the pixels are analyzed one at a time, pixel by pixel and line by line. It
assumes that all pixels are individual clusters and systematically merges clusters by checking the
distances between their means. The spectral distance between each analyzed pixel and previously
defined cluster means is calculated. If the distance is greater than some threshold value, the
pixel begins a new cluster; otherwise it contributes to the nearest existing cluster, in which case
the cluster mean is recalculated. Clusters are merged if too many of them are formed, by adjusting
the threshold distance between cluster means.
Statistical Clustering:
It takes into account the spatial relationship between adjacent pixels. The algorithm uses 3 × 3
windows in which all pixels have similar vectors in feature space. The histogram in high-dimensional
space, H(v), is the occurrence frequency of the grey-level vector v. The algorithm finds peaks in this
multi-dimensional histogram.
K-Means:
The K-means (also known as C-means) method uses an iterative (repetitive) approach to
determine the classes. The K-means algorithm analyses a sample of the input to determine a
specified number of initial class centres. Cells are then assigned to classes by determining the closest
class centre.
Iterative algorithm
✓ Number of clusters K is known by user
✓ Most popular clustering algorithm
✓ Initialize randomly K cluster mean vectors
✓ Assign each pixel to any of the K clusters based on minimum feature distance
✓ After all pixels are assigned to the K clusters, each cluster mean is recomputed.
✓ Iterate till cluster mean vectors stabilize
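A compact K-means sketch following the steps listed above (random initial means, assign-recompute loop until the means stabilize; no empty-cluster handling, so it is a teaching sketch rather than production code):

import numpy as np

def kmeans(pixels, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # nearest-mean assignment
        new_means = np.array([pixels[labels == i].mean(axis=0) for i in range(k)])
        if np.allclose(new_means, means):
            break                                 # cluster means have stabilized
        means = new_means
    return labels, means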
ISODATA Clustering:
ISODATA begins with an initial distribution of (for example) five hypothetical mean vectors, placed
using +/- 1 standard deviation in both bands as beginning and ending points.
In the first iteration, each candidate pixel is compared to each cluster mean and assigned to the
cluster whose mean is closest.
During the second iteration, a new mean is calculated for each cluster based on the actual spectral
locations of the pixels assigned to each cluster. After the new cluster mean vectors are selected,
every pixel in the scene is assigned to one of the new clusters
This split-merge-assign process continues until there is little change in class assignment between
iterations (the threshold T is reached) or the maximum number of iterations (M) is reached.
ISODATA iterations thus proceed as follows: pixels are assigned to the cluster with the closest spectral
mean, the means are recalculated, and pixels are reassigned; this continues until the maximum number
of iterations or the convergence threshold is reached.
RGB Clustering:
The RGB clustering is a simple classification and data compression technique for three bands of data. It
is a fast and simple algorithm that quickly compresses a three-band image into a single-band
pseudo-colour image, without necessarily classifying any particular feature. The algorithm plots all pixels
in 3D feature space and then divides this space into clusters on a grid. In the more simplistic version of this
function, each of these clusters becomes a class in the output thematic raster layer.
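A sketch of that simplistic version: each band is quantized onto a coarse grid (here 4 levels per band, an assumed choice), and each occupied grid cell becomes one class of the single-band output (4 × 4 × 4 = 64 possible classes):

import numpy as np

def rgb_cluster(red, green, blue, levels=4):
    q = lambda band: (band.astype(int) * levels) // 256  # quantize to 0..levels-1
    return q(red) * levels * levels + q(green) * levels + q(blue)

r = np.array([[250, 10]]); g = np.array([[20, 240]]); b = np.array([[30, 15]])
print(rgb_cluster(r, g, b))  # [[48 12]]: the two pixels fall in two different cells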