Model Answers
Question 1
(i). DIP comprises the following four basic steps:
(a) Image correction/restoration: Image data recorded by sensors on a satellite or aircraft contain
errors related to the geometry and brightness values of the pixels. These errors are corrected using
suitable mathematical models, which may be either deterministic or statistical.
(b) Image enhancement: Image enhancement is the modification of an image, by changing the pixel
brightness values, to improve its visual impact. Image enhancement techniques derive the new
brightness value for a pixel either from its existing value or from the brightness values of a set
of surrounding pixels.
(c) Image transformation: The multi-spectral character of image data allows it to be spectrally
transformed into a new set of image components or bands, either to make some information more
evident or to preserve the essential information content of the image (for a given application)
with a reduced number of transformed dimensions. The pixel values of the new components are
related to the original set of spectral bands via a linear operation.
(d) Image classification: The overall objective of image classification procedures is to
automatically categorize all pixels in an image into land cover classes or themes. [Marks 10]
(iii). Optimum Index Factor (OIF)
Optimum Index Factor (OIF) is one of the most common statistical methods applied to select the
most favorable three-band combination, i.e. the one carrying the most information with the least
amount of duplication. It is based on the total variance within bands and the correlation
coefficients between bands.
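Under its usual definition, the OIF of a band triplet is the sum of the three bands' standard deviations divided by the sum of the absolute pairwise correlation coefficients; the combination with the highest OIF is preferred. A minimal sketch (the image data here are synthetic, for illustration only):

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimum Index Factor for one 3-band combination.
    bands: three 2-D arrays of pixel values (one per band)."""
    flat = [b.ravel().astype(float) for b in bands]
    std_sum = sum(f.std() for f in flat)                 # total variability
    corr_sum = sum(abs(np.corrcoef(a, b)[0, 1])          # duplication between bands
                   for a, b in combinations(flat, 2))
    return std_sum / corr_sum

# Rank every 3-band subset of a toy 4-band image and keep the best one
rng = np.random.default_rng(0)
image = [rng.integers(0, 256, (64, 64)) for _ in range(4)]
best = max(combinations(range(4), 3),
           key=lambda idx: oif([image[i] for i in idx]))
```

Bands with high variance and low mutual correlation score highest, which is exactly the "most information with the least duplication" criterion.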
[Marks 20]
Ozone serves to absorb the harmful (to most living things) ultraviolet radiation from the sun.
Without this protective layer in the atmosphere our skin would burn when exposed to sunlight.
Carbon Dioxide absorbs in the far infrared portion of the spectrum which is related to thermal
heating and results in a 'greenhouse' effect.
Water Vapor absorbs energy depending upon its location and concentration, and forms a primary
component of the Earth's climatic system.
[Marks 20]
(ii).
There are three main types of scattering that impact incoming solar radiation:
• Rayleigh Scatter
• Mie Scatter
• Non-Selective Scatter
Rayleigh scatter occurs when radiation (light) interacts with molecules and particles in the
atmosphere that are smaller in diameter than the wavelength of the incoming radiation. Shorter
wavelengths are more readily scattered than longer wavelengths. Light at shorter wavelengths (blue
and violet) is scattered by small particles that include NO2 and O2. Since blue light is at the short-
wavelength end of the visible spectrum, it is more strongly scattered in the atmosphere than long-
wavelength red light. This results in the blue color of the sky. Rayleigh scatter is also responsible
for haze in images. In aerial photography, special filters are used to filter out the scattered blue
light to reduce haze. In digital images, different techniques are used to minimize the impacts of
Rayleigh scatter.
At sunrise and sunset the incoming sunlight travels a longer distance through the atmosphere. The
longer path leads to scattering of the short (blue) wavelengths that is so complete that we see
only the longer wavelengths of light, the reds and oranges. In the absence of particles and
scattering, the sky would appear black.
Mie scatter occurs when the wavelength of the electromagnetic radiation is similar in size to the
atmospheric particles. Mie scatter generally influences radiation from the near UV through the
mid-infrared parts of the spectrum. Mie scatter mostly occurs in the lower portions of the
atmosphere, where larger particles are more abundant, and dominates when cloud conditions are
overcast. Pollen, dust, and smog are major causes of Mie scatter. Mie scatter produces general haze
in images.
Non-selective scattering occurs when the diameter of the particles in the atmosphere is
much larger than the wavelength of radiation. Non-selective scatter is primarily caused by water
droplets in the atmosphere. It scatters all visible wavelengths evenly, hence the term
non-selective; because visible light is scattered evenly, fog and clouds appear white. Since clouds
scatter all wavelengths of light, they block energy from reaching the Earth's surface. This can
make interpreting and analyzing remotely sensed imagery difficult in areas prone to cloud cover.
Clouds also cast shadows that change the illumination and relative reflectance of surface features.
This can be a major limitation in remote sensing imagery.
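The wavelength dependence of Rayleigh scatter follows the standard inverse-fourth-power law (intensity proportional to 1/wavelength^4), which quantifies why blue light dominates the scattered sky light. A quick check with illustrative wavelengths (the exact values chosen here are assumptions):

```python
# Rayleigh scattering: intensity proportional to 1 / wavelength**4
blue_nm, red_nm = 450.0, 700.0      # representative blue and red wavelengths
ratio = (red_nm / blue_nm) ** 4     # how much more strongly blue scatters than red
# ratio is roughly 5.9: blue light is scattered almost six times as much as red
```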
[Marks 30]
(iii). There are three common methods for resampling: nearest neighbor, bilinear interpolation,
and cubic convolution.
Nearest neighbor resampling uses the digital value from the pixel in the original image which is
nearest to the new pixel location in the corrected image. This is the simplest method and does not
alter the original values, but may result in some pixel values being duplicated while others are lost.
This method also tends to result in a disjointed or blocky image appearance.
Bilinear interpolation resampling takes a weighted average of four pixels in the original image
nearest to the new pixel location. The averaging process alters the original pixel values and creates
entirely new digital values in the output image. This may be undesirable if further processing and
analysis, such as classification based on spectral response, is to be done. If this is the case,
resampling may best be done after the classification process.
Cubic convolution resampling goes even further to calculate a distance weighted average of a
block of sixteen pixels from the original image which surround the new output pixel location. As
with bilinear interpolation, this method results in completely new pixel values. However, these
two methods both produce images which have a much sharper appearance and avoid the blocky
appearance of the nearest neighbor method.
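The three resampling methods differ in which original pixels contribute to each output pixel. A minimal numpy sketch of the first two (nearest neighbor and bilinear; cubic convolution follows the same pattern over a 4x4 block):

```python
import numpy as np

def nearest_neighbor(img, out_shape):
    """Copy the original pixel value nearest to each new pixel location.
    Original values are preserved, but some duplicate and others drop out."""
    h, w = img.shape
    rows = (np.arange(out_shape[0]) * h / out_shape[0]).astype(int)
    cols = (np.arange(out_shape[1]) * w / out_shape[1]).astype(int)
    return img[rows][:, cols]

def bilinear(img, out_shape):
    """Distance-weighted average of the four surrounding original pixels.
    Creates entirely new digital values in the output image."""
    h, w = img.shape
    r = np.linspace(0, h - 1, out_shape[0])
    c = np.linspace(0, w - 1, out_shape[1])
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    r1, c1 = np.minimum(r0 + 1, h - 1), np.minimum(c0 + 1, w - 1)
    fr, fc = (r - r0)[:, None], (c - c0)[None, :]
    top = img[r0][:, c0] * (1 - fc) + img[r0][:, c1] * fc
    bot = img[r1][:, c0] * (1 - fc) + img[r1][:, c1] * fc
    return top * (1 - fr) + bot * fr
```

Because bilinear interpolation alters the original values, nearest neighbor is usually preferred when classification based on spectral response is still to be done.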
[Marks 30]
(iv).
The purpose of georeferencing is to transform the image coordinate system (u,v), which may be
distorted due to the factors discussed above, to a specific map projection (x,y), as shown in the
Figure below. There are two kinds of geometric correction procedures: image-to-image registration
and image-to-map registration.
Image-to-image registration refers to transforming one image coordinate system into another
image coordinate system. Image-to-map registration refers to the transformation of an image
coordinate system to a map coordinate system resulting from a particular map projection.
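Image-to-map registration is commonly carried out by fitting a first-order (affine) polynomial between ground control points. A minimal sketch; the control-point coordinates below are hypothetical, chosen only to illustrate the fit:

```python
import numpy as np

# Hypothetical ground control points: image (u, v) -> map (x, y) in metres
uv = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
xy = np.array([[500000, 4000000], [503000, 4000000],
               [500000, 3997000], [503000, 3997000]], float)

# Least-squares fit of x = a0 + a1*u + a2*v (and likewise for y)
A = np.column_stack([np.ones(len(uv)), uv])     # design matrix [1, u, v]
coef_x, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)

def to_map(u, v):
    """Transform image coordinates (u, v) to map coordinates (x, y)."""
    return (coef_x[0] + coef_x[1] * u + coef_x[2] * v,
            coef_y[0] + coef_y[1] * u + coef_y[2] * v)
```

Higher-order polynomials can model more complex distortions, at the cost of needing more control points.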
[Marks 20]
Question 3
(i). High-frequency filters in the spatial domain
Spatial filtering is the process of altering pixel values based upon spatial characteristics for the
purpose of image enhancement.
High-frequency filters are designed to emphasize high spatial frequencies by emphasizing abrupt
local changes in gray-level values between pixels. They preserve high frequencies, remove slowly
varying components, and emphasize fine detail; they are used for edge detection and enhancement.
Edges are locations where a transition from one category to another occurs. There are two types of
high-pass filters: linear and nonlinear. In a linear filter, the output brightness value is a
linear combination of the brightness values (BVs) located in a particular spatial pattern around
the i,j location in the input image; in a nonlinear filter, the output is a nonlinear combination
of BVs. Edge detection loses overall brightness information; however, it delineates edges and makes
shapes and details more prominent.
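As a sketch of a linear high-pass filter, here is a common 3x3 edge-enhancing kernel (its coefficients sum to 1, so uniform regions pass through unchanged while abrupt local changes are exaggerated):

```python
import numpy as np

# A common 3x3 high-pass kernel; the centre weight dominates its neighbours
kernel = np.array([[-1., -1., -1.],
                   [-1.,  9., -1.],
                   [-1., -1., -1.]])

def convolve3x3(img, k):
    """Output BV at each location is a linear combination of the 3x3
    neighbourhood of BVs around (i, j) in the input image (borders dropped)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out
```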
[Marks 20]
(iii) Edge Enhancement
For many remote sensing Earth science applications, the most valuable information that may be
derived from an image is contained in the edges surrounding various objects of interest. Edge
enhancement delineates these edges. Edges may be enhanced using either linear or nonlinear edge
enhancement techniques.
There are two kinds of linear edge enhancement techniques:
- Directional: vertical, horizontal, or any other direction
- Laplacian: highlights points, lines, and edges; suppresses uniform and smooth regions
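The Laplacian makes a minimal concrete example: its coefficients sum to zero, so smooth regions map to zero while points, lines, and edges produce strong responses. A sketch (image borders dropped for brevity):

```python
import numpy as np

# 4-neighbour Laplacian mask: responds only to local gray-level change
laplacian = np.array([[ 0., -1.,  0.],
                      [-1.,  4., -1.],
                      [ 0., -1.,  0.]])

def apply_kernel(img, k):
    """3x3 spatial filtering; each output pixel is the kernel-weighted
    sum of the corresponding 3x3 input neighbourhood."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out
```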
(iv). Band Ratioing
Sometimes differences in brightness values from identical surface materials are caused by
topographic slope and aspect, shadows, changes in atmospheric constituents, or seasonal changes in
sun angle and intensity. Band ratios can be applied to reduce the effects of such environmental
conditions. In addition, band ratios also help to discriminate between soils and vegetation.
BVi,j,ratio = BVi,j,k / BVi,j,l
Where:
- BVi,j,k is the original input brightness value in band k
- BVi,j,l is the original input brightness value in band l
- BVi,j,ratio is the ratio output brightness value [Marks 20]
(i).
Two major categories of image classification techniques include unsupervised (calculated by
software) and supervised (human-guided) classification.
Parallelepiped classifier
In this classifier, the range of spectral measurements is taken into account. The range is defined
by the highest and lowest digital numbers assigned to each band from the training data.
An unknown pixel is therefore classified according to its location within the class range. However,
difficulties occur when class ranges overlap. This can occur when classes exhibit a high degree of
correlation or covariance.
This can be partially overcome by introducing stepped borders to the class ranges.
The minimum and maximum DNs for each class are determined and are used as thresholds for
classifying the image.
Benefits: simple to train and use, computationally fast.
Drawbacks: pixels in the gaps between the parallelepipeds cannot be classified; pixels in regions
of overlapping parallelepipeds cannot be classified unambiguously.
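A minimal sketch of the parallelepiped decision rule, with hypothetical two-band training data; the per-band minimum and maximum of each class form the box thresholds:

```python
import numpy as np

def train_parallelepiped(training):
    """training: {class_name: (n_samples, n_bands) array of DNs}.
    Each class's box is its per-band minimum and maximum DN."""
    return {c: (s.min(axis=0), s.max(axis=0)) for c, s in training.items()}

def classify(pixel, boxes):
    """Classes whose box contains the pixel: [] means the pixel falls in a
    gap; more than one entry means overlapping boxes (ambiguous)."""
    return [c for c, (lo, hi) in boxes.items()
            if np.all(pixel >= lo) and np.all(pixel <= hi)]

boxes = train_parallelepiped({
    "water": np.array([[10, 20], [12, 25]]),
    "veg":   np.array([[50, 80], [60, 90]]),
})
```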
Maximum Likelihood Classifier
This classifier quantitatively evaluates both the variance and covariance of the trained spectral
response patterns when deciding the fate of an unknown pixel.
To do this, the classifier assumes that the distribution of points for each cover type is normally
distributed.
Under this assumption, the distribution of a category response can be completely described by the
mean vector and the covariance matrix.
Given these values, the classifier computes the probability that unknown pixels will belong to a
particular category.
Max likelihood uses the variance and covariance in class spectra to determine classification
scheme. It assumes that the spectral responses for a given class are normally distributed.
We can then determine a probability surface that gives, for a given DN, the probability of
membership in each class. The pixel is assigned to the most likely class, or to "Other" if no
probability exceeds some threshold.
Benefits: takes variation in spectral response into consideration
Drawbacks: computationally inefficient, multimodal or non-normally distributed classes can be
misclassified.
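The maximum likelihood rule can be sketched with the log of the multivariate normal density (the constant term drops out of the comparison); the training samples below are hypothetical:

```python
import numpy as np

def train_ml(training):
    """Mean vector and covariance matrix per class from training samples."""
    return {c: (s.mean(axis=0), np.cov(s, rowvar=False))
            for c, s in training.items()}

def log_likelihood(pixel, mean, cov):
    """Log multivariate-normal density, constant term dropped."""
    d = pixel - mean
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)

def classify_ml(pixel, stats):
    """Assign the pixel to the class with the highest likelihood."""
    return max(stats, key=lambda c: log_likelihood(pixel, *stats[c]))

a = np.array([[9., 10.], [11., 10.], [10., 9.], [10., 11.]])
stats = train_ml({"a": a, "b": a + 90.0})
```

A threshold on the best log-likelihood would implement the "Other" class for pixels that fit no trained category well.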
Question 5
(i). A radar system has three primary functions:
- It transmits its own energy (a microwave signal) toward the scene
- It receives the portion of the transmitted energy backscattered from the scene
- It observes the strength (detection) and the time delay (ranging) of the return signals.
[Marks 20]
(ii).
The two main types of radar images are the circularly scanning plan-position indicator (PPI)
images and the side-looking images. The PPI applications are limited to the monitoring of air and
naval traffic. Remote sensing applications use the side-looking images which are divided into two
types--real aperture radar (usually called SLAR for side-looking airborne radar or SLR for side-
looking radar) and synthetic aperture radar (SAR).
- Synthetic (azimuth) resolution: ΔLs = βs · R = D / 2
where λ is the wavelength, D the radar aperture (antenna length), R the antenna-to-object
distance, and βs the synthetic beamwidth.
This is the reason why SAR has a high azimuth resolution with a small size of antenna regardless
of the slant range, or very high altitude of a satellite.
Side-Looking Airborne Radar (SLAR) measures range to scattering targets on the ground and can be
used to form a low-resolution image, whereas Synthetic Aperture Radar uses the same principle as
SLAR but applies signal processing to create high-resolution images.
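The range-independence of SAR azimuth resolution is easy to see numerically. With illustrative C-band values (the numbers below are assumptions, not from the text):

```python
# Azimuth resolution: real aperture vs synthetic aperture
wavelength = 0.056   # m, roughly C-band
D = 10.0             # antenna length, m
R = 850e3            # slant range, m (satellite-scale distance)

real_res = wavelength / D * R   # beam-limited: degrades linearly with range
sar_res = D / 2                 # synthetic aperture: independent of range
# real_res is several kilometres here, while sar_res is only 5 m
```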
(iii) The spatial resolution of RAR is primarily determined by the size of the antenna used: the
larger the antenna, the better the spatial resolution. Other determining factors include the pulse
duration (τ) and the antenna beam width.
Range Resolution
Range resolution in the slant range is defined as Rr = c·τ / 2, where c is the speed of light and
τ the pulse duration: the shorter the pulse, the finer the range resolution.
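Taking the standard slant-range form Rr = c·τ/2, a quick numeric example for an illustrative 0.1 microsecond pulse:

```python
# Slant-range resolution R_r = c * tau / 2 for an illustrative 0.1 us pulse
c = 3e8           # speed of light, m/s
tau = 0.1e-6      # pulse duration, s
slant_range_res = c * tau / 2   # about 15 m: shorter pulses resolve finer
```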
[Marks 20]
(iv).
Advantages of radar
• All weather, day or night
– Some areas of Earth are persistently cloud covered
• Penetrates clouds, vegetation, dry soil, dry snow
• Sensitive to water content, surface roughness
– Can measure waves in water
• Sensitive to polarization and frequency
• Interferometry (later) using 2 receiving antennas
Disadvantages of radar
• Penetrates clouds, vegetation, dry soil, dry snow
– Signal is integrated over a depth range and a variety of materials
• Sensitive to water content, surface roughness
– Small amounts of water affect signal
– Hard to separate the volume response from the surface response
• Sensitive to polarization and frequency
– Many choices for instrument, expensive to cover range of possibilities
– The math can be formidable
[Marks 20]
(v).
Foreshortening
• In flat terrain, easy to convert a slant-range radar image into a ground-range radar image
– … but with trees, tall buildings, or mountains, you get radar relief displacement
– the higher the object, the closer it is to the radar antenna, and therefore the sooner
(in time) it is detected on the radar image
• Terrain that slopes toward the radar will appear compressed
or foreshortened compared to slopes away from the radar
Layover
• When the radar beam reaches the top of a tall feature before it reaches its base, the return
from the top arrives first, so the top appears displaced toward the radar and "lays over" the base
Shadow
• Slopes facing away from the radar that are steeper than the depression angle receive no
illumination and return no signal, appearing as dark (shadowed) areas in the image
[Marks 20]