
Lecture NO-2-3-digital Image Processing

The document discusses digital image processing in remote sensing, explaining concepts such as digital images, preprocessing, enhancement, and classification. It details steps for image rectification, radiometric restoration, and geometric correction, as well as techniques for image enhancement and classification methods. Additionally, it covers accuracy assessment through confusion matrices and provides insights into various algorithms used in supervised and unsupervised classification.


Lecture NO-3

Digital image processing


Most remote sensing data can be represented
in two interchangeable forms:
a) Photograph-like imagery and
b) Digital images (arrays of digital
brightness values).
What is a Digital Image?
A digital image is a two-dimensional array of picture
elements called pixels, arranged in columns and
rows.
The pixel address in the image is given by:
a) The line number and the pixel number
(Line 5, Pixel 7) or
b) The row number and the column number
(Row 5, column 7).
The digital image bands can be displayed
individually as gray scale images or combined
to produce Color images by showing different
image bands in varying display combinations of
the three-primary colors (R, G, B).
Digital image processing:
Digital image processing is used to improve the
usability of image data and enhance its visual
quality, making the image interpretable for a
specific use.
The digital image processing steps are:
1- Preprocessing.
2- Enhancement and
3- Classification.
Image preprocessing:
Image preprocessing operations are used to
correct distorted or degraded image data and
comprise:
- Image restoration and
- Rectification operations (Orthophoto map).
a) Rectification: Production of orthophotos:
An orthophoto or orthoimage is an image that is free of distortion (it
has been ortho-rectified) and which is characterized by a uniform
scale over its entire surface (orthophoto map). The orthophoto maps
combine all the advantages of conventional line maps and aerial
photography. Unlike a conventional aerial photograph, accurate
measurements can be made on the orthophoto map. Cartographic
elements that cannot be derived from the photographic background
have been added, namely: a co-ordinate grid, contours and spot-
heights, place names. These maps are well suited for detail planning
and analysis of what exists on the ground.
Orthophoto production steps are:
a) Spatial filtering.
b) Radiometric restoration and
c) Geometric correction.
a) Spatial filtering:
Spatial filtering is the process of selectively
altering certain image spatial frequencies to
emphasize some image features and increase
the analyst’s ability to discriminate detail.
The spatial frequency:
Spatial frequency is the change in brightness
values per unit distance.
Types of spatial filters used in remote sensing
data processing.
a) Low pass filters (smoothing filter)
b) Band pass filters and
c) High pass filters (sharpening filter)
They are also called spatial masks,
kernels, templates and windows.
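The spatial masks listed above can be sketched as convolution of the image with small kernels. A minimal NumPy illustration, assuming 2-D single-band imagery and zero padding at the edges (the helper name `convolve2d` is illustrative, not from the lecture):

```python
import numpy as np

def convolve2d(image, kernel):
    """Apply a spatial filter (mask/kernel) to a 2-D image.

    Edges are handled by zero padding; the output has the
    same shape as the input.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)))
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Low pass (smoothing): a 3x3 mean kernel averages out
# high spatial frequencies.
low_pass = np.full((3, 3), 1 / 9)

# High pass (sharpening): emphasizes rapid changes in
# brightness values per unit distance.
high_pass = np.array([[-1, -1, -1],
                      [-1,  9, -1],
                      [-1, -1, -1]])
```

Both kernels sum to 1, so uniform (zero-frequency) areas pass through unchanged; the high-pass kernel amplifies any local deviation from the neighbourhood.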
b) Radiometric restoration:
Radiometric restoration (destriping) is the process
used to identify shadow areas in the image and
radiometrically restore their brightness values.
Cast shadows in optical images can be easily
misclassified as other dark objects such as water.
Radiometric restoration can be done using finer
resolution panchromatic images. Image restoration
is performed by reversing the process that degraded
the image, reducing noise and recovering lost
resolution.
Typical image restoration methods are:
a) Gamma correction method.
b) Linear-correlation method and
c) Histogram Matching Method.
Gamma correction: A point processing method that
adjusts the density of an image by raising each pixel
value to a power. It corrects for the non-linear
response of the imaging device.
Linear correlation: Measures the linear correlation
between two variables in image processing, used to
find similarities.
Histogram matching: A point processing method that
transforms the image histogram to match a desired
histogram. Used to improve the contrast or dynamic
range of an image.
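The gamma correction method described above can be sketched as a point operation on 8-bit imagery. This is an illustrative implementation, assuming the common out = in^(1/gamma) convention (the function name is hypothetical):

```python
import numpy as np

def gamma_correct(image, gamma):
    """Point-processing gamma correction for 8-bit imagery.

    Pixel values are normalized to [0, 1], raised to the power
    1/gamma, and rescaled to [0, 255]. gamma > 1 brightens
    dark areas; gamma < 1 darkens them.
    """
    normalized = image.astype(float) / 255.0
    corrected = normalized ** (1.0 / gamma)
    return np.round(corrected * 255.0).astype(np.uint8)
```

Because the operation depends only on each pixel's own value, it can be precomputed once as a 256-entry lookup table for large images.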
Radiometric restoration example: before and after restoration.
c) Geometric correction:
Geometric correction is used to remove
geometric distortions from a distorted
image and is achieved by establishing the
relationship between the image
coordinate system and the geographic
coordinate system using calibration data.
Geometric correction steps are:
a) Selection of method (the mathematical
model)
b) Determination of parameters using
ground control points (georeferencing).
c) Accuracy check and
d) Interpolation and resampling, geo-coded
image is produced by resampling and
interpolation (rectification).
The methods of geometric correction are:
a) Systematic correction, when the geometric
reference data are given or measured (focal
length, lens distortion parameters,
coordinates of fiducial marks etc.)
b) Non-systematic correction, use polynomials,
GCPs and least square method. And
c) Combined method, use systematic correction
first, plus lower order polynomial.
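The non-systematic correction above (polynomials, GCPs and least squares) can be illustrated with a first-order polynomial, i.e. an affine transform fitted to ground control points. A minimal NumPy sketch, with hypothetical function names:

```python
import numpy as np

def fit_affine(gcp_image_xy, gcp_map_xy):
    """First-order polynomial (affine) georeferencing.

    Solves, by least squares, for the coefficients of
        X = a0 + a1*x + a2*y
        Y = b0 + b1*x + b2*y
    where (x, y) are image coordinates and (X, Y) the map
    coordinates of the ground control points (GCPs).
    """
    x, y = gcp_image_xy[:, 0], gcp_image_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs_x, *_ = np.linalg.lstsq(A, gcp_map_xy[:, 0], rcond=None)
    coeffs_y, *_ = np.linalg.lstsq(A, gcp_map_xy[:, 1], rcond=None)
    return coeffs_x, coeffs_y

def apply_affine(coeffs_x, coeffs_y, x, y):
    """Transform one image coordinate to map coordinates."""
    return (coeffs_x[0] + coeffs_x[1] * x + coeffs_x[2] * y,
            coeffs_y[0] + coeffs_y[1] * x + coeffs_y[2] * y)
```

A first-order polynomial needs at least three non-collinear GCPs; using more GCPs than the minimum lets the least-squares residuals serve as the accuracy check in step c) above.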
Image rectification resampling methods are:
a) Nearest neighbor.
b) Bilinear interpolation.
c) Cubic convolution
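The first two resampling methods above can be sketched for a single output position. This is an illustrative NumPy version (function names are hypothetical), where (x, y) is a fractional column/row position in the source image:

```python
import numpy as np

def nearest_sample(image, x, y):
    """Nearest neighbour: take the value of the closest pixel.

    No new brightness values are created, which preserves the
    original radiometry.
    """
    return image[int(round(y)), int(round(x))]

def bilinear_sample(image, x, y):
    """Bilinear interpolation: distance-weighted average of the
    4 surrounding pixels, giving a smoother resampled image.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[1] - 1)
    y1 = min(y0 + 1, image.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bottom = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bottom
```

Cubic convolution follows the same idea over a 4x4 neighbourhood and is omitted here for brevity.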
Lecture-3

• Image Enhancement and Classification


Image enhancement:
Image enhancement is the process of improving the
quality of an image using various techniques.
Image enhancement techniques are used to:
1- Make images lighter or darker, or to increase or
decrease contrast,
2- Remove undesired characteristics of an image
such as color cast.
The aim of image enhancement is to improve the
interpretability or perception of information in images
for human viewers, or to provide `better' input for
other automated image processing techniques.
Image enhancement is generally used in the
following three cases:
1- Noise reduction from image.
2- Contrast enhancement of the very dark,
low contrast and bright image,
3- Highlight the edges of the objects in a
blurring image.
Contrast enhancement:
Contrast is the range of brightness values
present in an image.
Contrast enhancement is used to match the
range of collected reflectance values with the
capabilities of the display device to improve
the quality of the image. It allows image
features to stand out clearly by making optimal
use of the display device colors.
Methods of contrast enhancement are:
a) Linear contrast enhancement.
linear contrast methods include, linear
stretch method, Percentage linear stretch
method and Piecewise contrast technique.
b) Non-linear contrast enhancement methods.
Include Histogram equalization method,
Adaptive histogram equalization method,
Homomorphic Filter method and
Unsharp Mask.
Examples are:
a) Linear stretch.
b) Percentage linear stretch and
c) Histogram equalization.
Linear contrast stretch:
Minimum and maximum data values from the original scene
are stretched to the minimum and maximum of the display
device.
In the original scene, the range of reflectance
values is confined to a small portion of the display
device range (0-255). The improvement in image
quality is very clear after applying the linear
stretch contrast enhancement.
Percentage Linear Stretch:
A modified linear stretch in which the minimum and maximum data
values are defined not from the absolute minimum and maximum
but from a certain percentage of pixels below and above the mean of
the histogram. All intermediate values are scaled proportionately
between the new minimum and maximum values.
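The percentage linear stretch described above can be sketched with NumPy, using histogram percentiles as cut-offs (the function name and the 2%/98% defaults are illustrative choices, not from the lecture):

```python
import numpy as np

def percent_linear_stretch(band, lower_pct=2, upper_pct=98):
    """Percentage linear stretch to the 0-255 display range.

    The minimum and maximum are taken from histogram percentiles
    rather than the absolute extremes, so a few outlier pixels
    cannot compress the stretch. Intermediate values are scaled
    proportionately and values outside the cut-offs are clipped.
    """
    lo = np.percentile(band, lower_pct)
    hi = np.percentile(band, upper_pct)
    stretched = (band.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)
```

Setting lower_pct=0 and upper_pct=100 reduces this to the plain linear stretch, since the percentiles then equal the absolute minimum and maximum.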
Edge enhancement:
Edge enhancement filters enhance the local
discontinuities at the boundaries of different
objects (edges) in the image. An edge in a
signal is normally defined as the transition in
the intensity or amplitude of that signal.
Most of the edge enhancement filters are thus
based on first and second order derivatives and
different gradient filters are also common to
use.
The results for the Roberts filter applied in 5
orientations (N, E, S, SW and W).
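A first-order derivative edge filter of the kind described above can be sketched with the Roberts cross operator. This is an illustrative NumPy version (the function name is hypothetical):

```python
import numpy as np

def roberts_edges(image):
    """Roberts cross gradient: a first-order derivative
    edge detector.

    The two 2x2 kernels respond to diagonal transitions in
    intensity; the gradient magnitude combines both responses.
    The output is one pixel smaller in each dimension.
    """
    img = image.astype(float)
    gx = img[:-1, :-1] - img[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    return np.hypot(gx, gy)
```

Uniform regions produce zero response, while a step edge produces a strong magnitude, which is why such filters are combined with the original image for edge enhancement.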
Pan Sharpening:
Pan sharpening is the process of merging high resolution
panchromatic and low-resolution multispectral imagery to
create a single high-resolution color image of the overlapping
area.
Image classification:
Image classification refers to the process
of extracting information classes from a
multiband raster image (Biophysical
modeling) (corn, forest, clear water,
turbid water, hay/grass land, etc.). The
resulting raster can be used to create
thematic maps.
The digital image is a two-dimensional array of
numbers representing brightness values recorded by
the sensor, which requires processing and analysis
to extract useful information.
Image classification assumptions are:
a) Similar features will have similar spectral
responses.
b) The spectral response of a feature is unique
with respect to all other features of
interest.
c) If we quantify the spectral response of a
known feature, we can use this
information to find all occurrences of that
feature.
Image classification methods are:
a) Supervised classification uses
training areas (samples) to classify all
image pixels.
b) Unsupervised classification,
segments an image into spectral
classes based on the data's natural
groupings.
Image supervised classification
steps are:
a) Selection of training areas
(sample areas).
b) Classification of the image and
c) Accuracy assessment.
The most commonly used classification algorithms
are:
1- The minimum distance to means
2- Parallelepiped and
3- Maximum likelihood.
4- K-Nearest Neighbor classification.
5- Random Forest classification.
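The first of the algorithms listed above, minimum distance to means, can be sketched in a few lines of NumPy. Each pixel is a vector of band values and each class is represented by the mean vector of its training areas (the function name is illustrative):

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Minimum-distance-to-means supervised classifier.

    Each pixel is assigned to the class whose training-area
    mean vector is closest in Euclidean distance.

    pixels:      (n_pixels, n_bands) array of band values
    class_means: (n_classes, n_bands) array of per-class means
    returns:     (n_pixels,) array of class indices
    """
    # Pairwise differences between every pixel and every class
    # mean, via broadcasting: shape (n_pixels, n_classes, n_bands).
    diffs = pixels[:, None, :] - class_means[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return np.argmin(dists, axis=1)
```

Unlike maximum likelihood, this classifier ignores the variance and covariance of the training data, which keeps it fast but less accurate when class spreads differ.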
Unsupervised classification:
The unsupervised classification is the
process of automatically segmenting an
image into spectral classes based on the
data's natural groupings.
The unsupervised classification steps are:
a) Classify the image (clusters).
b) Identify clusters and
c) Accuracy assessment.
The unsupervised classification algorithms used most are:
1- K-Means Cluster Analysis.
2- Expectation Maximization Cluster Analysis.
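The K-Means algorithm listed above can be sketched as follows. This is a minimal illustrative implementation, assuming pixels are given as floating-point band-value vectors:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Plain k-means clustering for unsupervised classification.

    Pixels are grouped into k spectral classes by alternately
    (1) assigning each pixel to the nearest cluster centre and
    (2) moving each centre to the mean of its assigned pixels.

    pixels: (n_pixels, n_bands) float array
    returns: (labels, centres)
    """
    rng = np.random.default_rng(seed)
    # Initialise centres at k distinct randomly chosen pixels.
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(
            pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for c in range(k):
            if np.any(labels == c):   # leave empty clusters in place
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels, centres
```

The resulting spectral clusters still have to be identified (step b of the unsupervised workflow): the analyst labels each cluster as water, forest, etc. after the fact.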
Ground Truthing and Accuracy Assessment:
a) Field investigation.
b) GPS data analysis.
c) Statistical analysis for accuracy
assessment.
The classification accuracy indicators are:
a) Average accuracy.
b) Overall accuracy.
c) Kappa (κ) coefficient.
Classification accuracy assessment
sources of information are:
a) Remote sensing classification
map (produced map) and
b) The reference test
information.
Confusion or error matrix:
Confusion matrix (error matrix) is a square array of numbers that
expresses the number of sample pixels assigned to a particular
category as verified in the field.
It is used as a quantitative method of characterizing image
classification accuracy.
a) Columns represent reference data and
b) Rows represent classification results (user classified data).
c) Properly classified pixels are located along the major diagonal
of the matrix.
d) All non-diagonal elements of the matrix represent errors.
e) Errors of omission (exclusion) represented by non-diagonal
column elements.
f) Errors of commission (inclusion) represented by non-diagonal
row elements.
Classification  Water  Concrete  Buildings  Soil  Grass   Forest  Row
Results                                           Slopes          Total
Water              93      0         2        1      0       0      96
Concrete            0     65         4        6      0       0      75
Buildings           2      3       124        5      9      12     155
Soil                2      3        21      165     24      12     227
Grass Slopes        0      0         6       16    201      45     268
Forest              0      0         8        9     76     512     605
Column Total       97     71       165      202    310     581    1426

Producer's Accuracy:
W = 93/97 = 96%
C = 65/71 = 92%
B = 124/165 = 75%
S = 165/202 = 82%
G = 201/310 = 65%
F = 512/581 = 88%
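The accuracy indicators can be computed directly from the confusion matrix above. A NumPy sketch using the lecture's numbers (user's accuracy is added for completeness; variable names are illustrative):

```python
import numpy as np

# Confusion matrix from the lecture: rows = classification
# results, columns = reference data, in the order Water,
# Concrete, Buildings, Soil, Grass Slopes, Forest.
cm = np.array([
    [93,  0,   2,   1,   0,   0],
    [ 0, 65,   4,   6,   0,   0],
    [ 2,  3, 124,   5,   9,  12],
    [ 2,  3,  21, 165,  24,  12],
    [ 0,  0,   6,  16, 201,  45],
    [ 0,  0,   8,   9,  76, 512],
])

total = cm.sum()
diagonal = np.diag(cm)

# Producer's accuracy: correct pixels / column (reference) totals;
# its complement measures errors of omission.
producers = diagonal / cm.sum(axis=0)
# User's accuracy: correct pixels / row (classified) totals;
# its complement measures errors of commission.
users = diagonal / cm.sum(axis=1)
# Overall accuracy: all correctly classified pixels / all pixels.
overall = diagonal.sum() / total
# Kappa coefficient: agreement beyond what chance would produce,
# using the expected agreement from the row and column totals.
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (overall - expected) / (1 - expected)
```

With these numbers, the producer's accuracies match the percentages listed above (e.g. Water = 93/97 ≈ 96%).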
Report No. 2:
Write brief comments about the following image restoration methods:
a) Gamma correction method.
b) Linear-correlation method and
c) Histogram Matching Method.

Assignment-2:
1- Describe the following terms:
- Spatial filtering.
- Spatial frequency and
- Radiometric restoration.
2- The orthophoto maps combine all the advantages of
conventional line maps and aerial photography and accurate
measurements can be made on them. Briefly, specify and
describe the steps to rectify a digital image to produce an
orthophoto map.
