
Wollo University Department of Geography and Environmental Studies by: Nurhussen A.

Chapter 4
Digital Image Processing
In today's world of advanced technology where most remote sensing data are recorded in digital
format, virtually all image interpretation and analysis involves some element of digital
processing. Digital image processing may involve numerous procedures including formatting
and correcting of the data, digital enhancement to facilitate better visual interpretation, or even
automated classification of targets and features entirely by computer. In order to process remote
sensing imagery digitally, the data must be recorded and available in a digital form suitable for
storage on a computer tape or disk. Obviously, the other requirement for digital image processing
is a computer system, sometimes referred to as an image analysis system, with the appropriate
hardware and software to process the data.
The common image processing functions available in image analysis systems fall into four broad categories:

 Pre-processing (Image rectification and restoration)

 Image Enhancement

 Image Transformation

 Image Classification and Analysis

4.3.1 Pre-processing (Image restoration and rectification)


Pre-processing operations, sometimes referred to as image restoration and rectification, are
intended to correct for sensor- and platform-specific radiometric and geometric distortions of
data that stem from the image acquisition process.

Radiometric Correction
Radiometric corrections may be necessary due to variations in scene illumination and viewing geometry, atmospheric conditions, and sensor noise and response. They include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so they accurately represent the radiation reflected or emitted from the surface as measured by the sensor. A common approach to atmospheric correction, dark-object subtraction, examines the recorded values over features such as deep, clear water or dense shadow. It is based on the assumption that the reflectance from these features, if the atmosphere is clear, should be very small, if not zero. Values much greater than zero are therefore attributed to atmospheric scattering and can be subtracted from every pixel in the band.
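The dark-object subtraction just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the full correction: it assumes the scene minimum is a valid dark-object estimate, and the array values are invented.

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the scene's darkest value from every pixel, on the
    assumption that the darkest features (deep water, shadow) should
    record near-zero reflectance and any offset is path radiance."""
    dark_value = band.min()
    corrected = band.astype(np.int32) - dark_value
    # Clip to keep values non-negative, then restore the original dtype
    return np.clip(corrected, 0, None).astype(band.dtype)

# A toy 8-bit band whose darkest pixel reads 12 instead of 0
band = np.array([[12, 40], [95, 12]], dtype=np.uint8)
corrected = dark_object_subtraction(band)  # darkest pixels become 0
```

In practice the dark-object value is usually estimated per band from a histogram rather than taken as the raw minimum, which is sensitive to sensor noise.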

Noise in an image may be due to irregularities or errors that occur in the sensor response and/or
data recording and transmission. Common forms of noise include systematic striping or
banding and dropped lines. Both of these effects should be corrected before further
enhancement or classification is performed.
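One common cosmetic fix for dropped lines is to replace each bad scan line with the average of the lines immediately above and below it. A minimal sketch, assuming the bad row indices are already known (the image values here are invented):

```python
import numpy as np

def repair_dropped_lines(image, bad_rows):
    """Replace each dropped scan line with the average of the
    adjacent lines above and below (a purely cosmetic repair)."""
    fixed = image.astype(np.float64)
    for r in bad_rows:
        fixed[r] = (fixed[r - 1] + fixed[r + 1]) / 2.0
    return fixed.astype(image.dtype)

# Row 1 was dropped (all zeros) and is interpolated from rows 0 and 2
image = np.array([[10, 12], [0, 0], [20, 18]], dtype=np.uint8)
repaired = repair_dropped_lines(image, bad_rows=[1])
```

The interpolated line carries no real information; the repair only prevents the zero-valued line from distorting later enhancement or classification statistics.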

Geometric Correction
Geometric corrections include correcting for geometric distortions due to sensor–Earth geometry variations, and conversion of the data to real-world coordinates (e.g. latitude and longitude) on the Earth's surface. These distortions may be due to several factors, including: the perspective of the sensor optics; the motion of the scanning system; the motion of the platform; the platform altitude and velocity; and the curvature and rotation of the Earth. The intent of geometric correction is to compensate for the distortions introduced by these factors so that the geometric representation of the imagery is as close as possible to the real world. Systematic distortions are well understood and easily corrected by applying formulas derived by modeling the sources of distortion mathematically.
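A simple form of such a model is a first-order (affine) polynomial fitted from image coordinates to map coordinates using ground control points. A least-squares sketch; the GCP coordinates below are invented for illustration:

```python
import numpy as np

def fit_affine(image_xy, map_xy):
    """Fit an affine mapping (col, row) -> (E, N) to ground control
    points by least squares; returns a 3x2 coefficient matrix."""
    A = np.column_stack([np.asarray(image_xy, float),
                         np.ones(len(image_xy))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(map_xy, float), rcond=None)
    return coeffs

def apply_affine(coeffs, image_xy):
    """Transform image coordinates into map coordinates."""
    A = np.column_stack([np.asarray(image_xy, float),
                         np.ones(len(image_xy))])
    return A @ coeffs

# Four invented GCPs: 30 m pixels, origin at (500000 E, 4000000 N),
# row numbers increasing southward (hence northing decreases with row)
gcp_image = [(0, 0), (100, 0), (0, 100), (100, 100)]
gcp_map = [(500000, 4000000), (503000, 4000000),
           (500000, 3997000), (503000, 3997000)]
coeffs = fit_affine(gcp_image, gcp_map)
```

Higher-order polynomials, or rigorous sensor models, are used when an affine fit cannot capture the distortion; resampling of pixel values to the new grid is a separate step.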

4.3.2 Image Enhancement


The goal of image enhancement is to improve the visual interpretability of an image by increasing the apparent distinction between the features in the scene. Although radiometric corrections for illumination, atmospheric influences, and sensor characteristics may be done prior to distribution of data to the user, the image may still not be optimized for visual interpretation. Remote sensing devices, particularly those operated from satellite platforms, must be designed to cope with levels of target/background energy typical of all conditions likely to be encountered in routine use. With large variations in spectral response from a diverse range of targets (e.g. forests, deserts, snowfields, water), no generic radiometric correction can optimally display the brightness range and contrast for all targets at once. The most commonly applied digital enhancement techniques are contrast manipulation, spatial filtering, and convolution.
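Contrast manipulation can be illustrated with a minimal linear stretch, which maps a band's recorded minimum and maximum onto the full 0–255 display range (the sample values are assumptions, and the sketch assumes the band is not constant):

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linear contrast stretch: map the band's min/max onto the full
    display range so subtle brightness differences become visible."""
    lo, hi = float(band.min()), float(band.max())
    scaled = (band.astype(np.float64) - lo) / (hi - lo)
    return (scaled * (out_max - out_min) + out_min).astype(np.uint8)

# A low-contrast band occupying only values 60..108 of the 0..255 range
band = np.array([[60, 84], [108, 60]], dtype=np.uint8)
stretched = linear_stretch(band)
```

Variants such as percentile (saturation) stretches or histogram equalization redistribute the values non-linearly and often give better visual results on skewed histograms.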

4.3.3 Image Transformations


Image transformations typically involve the manipulation of multiple bands of data, whether
from a single multispectral image or from two or more images of the same area acquired at
different times (i.e. multitemporal image data). Either way, image transformations generate

"new" images from two or more sources which highlight particular features or properties of
interest, better than the original input images. Basic image transformations apply simple
arithmetic operations to the image data. Image subtraction is often used to identify changes that
have occurred between images collected on different dates.
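Change detection by subtraction can be sketched as differencing two co-registered bands and adding a mid-grey offset, so that "no change" plots as grey and gains and losses fall on either side (the pixel values and offset are invented for illustration):

```python
import numpy as np

def change_image(date1, date2, offset=127):
    """Difference two co-registered bands from different dates and
    shift by an offset so unchanged pixels map to mid-grey."""
    diff = date2.astype(np.int32) - date1.astype(np.int32) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)

# One pixel unchanged, one brighter on the second date
date1 = np.array([[100, 100]], dtype=np.uint8)
date2 = np.array([[100, 150]], dtype=np.uint8)
change = change_image(date1, date2)
```

Accurate co-registration of the two dates is essential; misregistration of even a pixel produces spurious "change" along every edge in the scene.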

Image division or spectral ratioing is one of the most common transforms applied to image
data. Image ratioing serves to highlight subtle variations in the spectral responses of various
surface covers. By ratioing the data from two different spectral bands, the resultant image
enhances variations in the slopes of the spectral reflectance curves between the two different
spectral ranges that may otherwise be masked by the pixel brightness variations in each of the
bands.
For example, healthy vegetation reflects strongly in the near-infrared portion of the spectrum while absorbing strongly in the visible red. Other surface types, such as soil and water, show near-equal reflectance in both the near-infrared and red portions. Thus, a ratio image of Landsat TM Band 4 (Near-Infrared, 0.8 to 1.1 µm) divided by Band 3 (Red, 0.6 to 0.7 µm) would result in ratios much greater than 1.0 for vegetation, and ratios around 1.0 for soil and water.
Thus the discrimination of vegetation from other surface cover types is significantly enhanced.
Also, we may be better able to identify areas of unhealthy or stressed vegetation, which show
low near-infrared reflectance, as the ratios would be lower than for healthy green vegetation. One
widely used image transform is the Normalized Difference Vegetation Index (NDVI) which
has been used to monitor vegetation conditions on continental and global scales using the
Advanced Very High Resolution Radiometer (AVHRR) sensor onboard the NOAA series of
satellites.
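The NDVI mentioned above is simply (NIR − Red) / (NIR + Red) computed per pixel. A small sketch with invented reflectance values; the zero-denominator guard is a practical assumption, not part of the index definition:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, (NIR - Red)/(NIR + Red).
    Values near +1 indicate healthy vegetation; bare soil and water
    fall near or below zero."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    total = nir + red
    # Guard against division by zero where both bands are zero
    return np.where(total == 0, 0.0,
                    (nir - red) / np.where(total == 0, 1.0, total))

# Invented reflectances: vegetation, bare soil, and a zero pixel
values = ndvi(nir=[0.5, 0.1, 0.0], red=[0.1, 0.1, 0.0])
```

Normalizing by the band sum makes the index less sensitive to overall illumination differences than the raw Band 4 / Band 3 ratio.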

4.4 Digital Image Classification and Analysis

A human analyst attempting to classify features in an image uses the elements of visual
interpretation to identify homogeneous groups of pixels which represent various features or land
cover classes of interest. Digital image classification uses the spectral information represented by
the digital numbers in one or more spectral bands, and attempts to classify each individual pixel
based on this spectral information. This type of classification is termed spectral pattern
recognition. In either case, the objective is to assign all pixels in the image to particular classes or themes (e.g. water, coniferous forest, deciduous forest, corn, wheat, etc.). The resulting classified image is comprised of a mosaic of pixels, each of which belongs to a particular theme, and is essentially a thematic "map" of the original image.

Spatial pattern recognition involves the categorization of image pixels on the basis of their
spatial relationship with pixels surrounding them. Spatial classifiers might consider such aspects
as image texture, pixel proximity, feature size, shape, directionality, repetition, and context.
These types of classifiers attempt to replicate the kind of spatial synthesis done by the human
analyst during the visual interpretation process. Accordingly, they tend to be much more
complex and computationally intensive than spectral pattern recognition procedures. Common
classification procedures can be broken down into two broad subdivisions based on the method
used: supervised classification and unsupervised classification.

Supervised Classification
In a supervised classification, the analyst identifies in the imagery homogeneous representative
samples of the different surface cover types (information classes) of interest. These samples are
referred to as training areas. The selection of appropriate training areas is based on the analyst's
familiarity with the geographical area and their knowledge of the actual surface cover types
present in the image. Thus, the analyst is "supervising" the categorization of a set of specific
classes.
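One of the simplest supervised classifiers, minimum-distance-to-means, illustrates the idea: a mean spectral vector is computed for each training class, and each pixel is assigned to the nearest mean. The class names and two-band training samples below are invented:

```python
import numpy as np

def train_means(training):
    """Compute the mean spectral vector of each training class.
    `training` maps a class name to its list of training samples."""
    return {name: np.mean(samples, axis=0)
            for name, samples in training.items()}

def classify(pixel, class_means):
    """Assign the pixel to the class whose mean is nearest (Euclidean)."""
    pixel = np.asarray(pixel, dtype=float)
    return min(class_means,
               key=lambda c: np.linalg.norm(pixel - class_means[c]))

# Invented two-band training samples (e.g. red, near-infrared)
training = {
    "water":      [(10, 5), (12, 7), (11, 6)],
    "vegetation": [(30, 80), (34, 76), (32, 78)],
}
means = train_means(training)
```

More capable supervised classifiers, such as maximum likelihood, also use the covariance of each training class rather than its mean alone.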

Unsupervised classification

An unsupervised classifier does not use training data as the basis for classification. Unsupervised
classification in essence reverses the supervised classification process. Spectral classes are
grouped first, based solely on the numerical information in the data, and are then matched by the
analyst to information classes (if possible). Programs, called clustering algorithms, are used to
determine the natural (statistical) groupings or structures in the data. Usually, the analyst
specifies how many groups or clusters are to be looked for in the data. In addition to specifying
the desired number of classes, the analyst may also specify parameters related to the separation
distance among the clusters and the variation within each cluster.
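A clustering algorithm of this kind can be sketched with a minimal k-means loop. The pixel values, number of clusters, and iteration count are illustrative; real work would use a tested library implementation:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Minimal k-means: repeatedly assign each pixel to the nearest
    cluster mean, then recompute the means from the assignments."""
    pixels = np.asarray(pixels, dtype=np.float64)
    rng = np.random.default_rng(seed)
    # Initialize cluster centers from k randomly chosen pixels
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Two invented spectral clusters; the algorithm should separate them
pixels = [(10, 5), (12, 7), (30, 80), (34, 76)]
labels, centers = kmeans(pixels, k=2)
```

The resulting clusters are purely statistical; the analyst must still label each one with an information class, merge clusters that belong together, or discard those with no clear meaning.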
