Module 2 Image Interpretation and Digital Image Processing
Prepared by,
Sujith Velloor S.
Assistant Professor,
Civil Engg Dept.
SCET Kalol.
Introduction
Digital image processing deals with
developing a digital system that performs
operations on a digital image.
An image sensor or imager is a sensor
that detects and conveys information used
to make an image.
What is an Image?
An image is nothing more
than a two-dimensional
signal.
It is defined by the
mathematical function
f(x,y), where x and y are
the two coordinates,
horizontal and vertical.
The value of f(x,y) at any
point gives the pixel
value at that point of the
image.
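The idea that an image is just the function f(x,y) sampled on a grid can be sketched with a small NumPy array; the 3x3 grey-level values below are purely illustrative:

```python
import numpy as np

# A grey-scale image is a 2-D array: f(x, y) maps a position to a brightness.
# Hypothetical 3x3 image of 8-bit grey levels (0 = black, 255 = white).
img = np.array([[  0,  64, 128],
                [ 32,  96, 160],
                [ 64, 128, 255]], dtype=np.uint8)

x, y = 2, 1           # column x = 2, row y = 1
pixel = img[y, x]     # f(x, y): the pixel value at that point
print(pixel)          # -> 160
```

Note that NumPy indexes as [row, column], i.e. [y, x], which is a common source of confusion when moving between the f(x,y) notation and array code.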
Type of images
Remote sensing images may be available
in hard copy (paper print) or soft
copy (digital form).
1. Panchromatic remote sensing images
2. Multispectral remote sensing images
3. Hyperspectral remote sensing images
4. Digital images
Type of Sensors
Imaging Sensors
Data Formats of Digital Image
The standard grey scale images use 256 shades of
grey from 0 (black) to 255 (white).
With color images, the situation is more complex.
For a given number of pixels, considerably more
data is required to represent the image and more
than one color model is used.
We will consider the following data formats:
GIF
JPEG
TIFF
BMP
PNG
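The storage cost mentioned above — colour images needing considerably more data than grey-scale for the same pixel count — can be made concrete with raw (uncompressed) array sizes; the 512x512 dimensions here are just an example:

```python
import numpy as np

h, w = 512, 512                             # hypothetical image dimensions

# Grey-scale: one 8-bit value (0-255) per pixel.
grey = np.zeros((h, w), dtype=np.uint8)

# RGB colour: three 8-bit values per pixel (one per channel).
rgb = np.zeros((h, w, 3), dtype=np.uint8)

print(grey.nbytes)   # -> 262144  bytes (256 KiB)
print(rgb.nbytes)    # -> 786432  bytes (768 KiB, three times as much)
```

Formats such as GIF, JPEG, TIFF, BMP, and PNG then compress this raw data in different ways (lossy for JPEG, lossless for PNG), so file sizes on disk are usually smaller than these raw figures.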
Display of Digital Image
Digital Image Processing
Rectification and Restoration
Image rectification and image restoration
techniques are used to correct image data
for distortions, to reduce noise, or to
reconstruct data.
Data reconstruction is necessitated by the
design of the sensor system, by the limitations
of one or more of its components, or by the
malfunction of components.
Image rectification is the process by
which the geometry of an image is made
planimetric.
Whenever accurate area, direction, and
distance measurements are required, image
rectification should be performed.
However, it may not remove all distortion
caused by topographic relief displacement
in images.
Digital Image Processing Steps
1. Preprocessing
Preprocessing functions involve those operations that
are normally required prior to the main data analysis
and extraction of information, and are generally
grouped as radiometric or geometric corrections.
Radiometric corrections include correcting the data
for sensor irregularities and unwanted sensor or
atmospheric noise, and converting the data so they
accurately represent the reflected or emitted radiation
measured by the sensor.
Geometric corrections include correcting for
geometric distortions due to sensor-Earth geometry
variations, and conversion of the data to real world
coordinates (e.g. latitude and longitude) on the Earth's
surface.
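One simple radiometric correction in this spirit is dark-object subtraction, which removes a constant haze offset from a band on the assumption that the darkest pixel in the scene should have near-zero reflectance; the band values below are hypothetical:

```python
import numpy as np

# Hypothetical digital numbers (DN) for one spectral band.
band = np.array([[12.0, 40.0, 80.0],
                 [15.0, 60.0, 90.0],
                 [13.0, 55.0, 70.0]])

# Dark-object subtraction: treat the minimum DN as atmospheric path
# radiance (haze) added uniformly to every pixel, and subtract it.
haze = band.min()
corrected = np.clip(band - haze, 0.0, None)   # clamp so no pixel goes negative

print(haze)              # -> 12.0
print(corrected.min())   # -> 0.0
```

This is only one of many radiometric corrections; converting DN to at-sensor radiance or surface reflectance requires the sensor's calibration coefficients, which are not modelled here.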
2. Image Enhancement
The objective of the second group of image
processing functions grouped under the term
of image enhancement, is solely to improve
the appearance of the imagery to assist in
visual interpretation and analysis.
Examples of enhancement functions include
contrast stretching to increase the tonal
distinction between various features in a
scene, and spatial filtering to enhance (or
suppress) specific spatial patterns in an
image.
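The contrast-stretching enhancement described above can be sketched as a linear stretch that maps a band's observed minimum and maximum onto the full 0-255 display range; the input values are illustrative:

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linearly rescale pixel values so band.min() -> out_min
    and band.max() -> out_max (a simple min-max contrast stretch)."""
    lo, hi = band.min(), band.max()
    scaled = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    return scaled.astype(np.uint8)

# Hypothetical low-contrast band: values crowded into 50-100.
band = np.array([[50.0, 60.0],
                 [70.0, 100.0]])
stretched = linear_stretch(band)
print(stretched)   # 50 -> 0, 60 -> 51, 70 -> 102, 100 -> 255
```

In practice a percentile stretch (e.g. clipping the darkest and brightest 2% of pixels first) is often preferred, since a single outlier pixel otherwise controls the whole mapping.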
3. Image transformations
Image transformations are operations
similar in concept to those for image
enhancement.
However, unlike image enhancement
operations which are normally applied
only to a single channel of data at a time,
image transformations usually involve
combined processing of data from
multiple spectral bands.
Arithmetic operations (i.e. subtraction,
addition, multiplication, division) are
performed to combine and transform the
original bands into "new" images which
better display or highlight certain features in
the scene.
We will look at some of these operations,
including various methods of spectral or
band ratioing, and a procedure
called principal components analysis, which
is used to represent the information in
multichannel imagery more efficiently.
4. Image classification and analysis
Image classification and
analysis operations are used to digitally
identify and classify pixels in the data.
Classification is usually performed on
multi-channel data sets (A) and this
process assigns each pixel in an image to
a particular class or theme (B) based on
statistical characteristics of the pixel
brightness values.
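The assignment of each pixel to a class based on the statistics of its brightness values can be sketched with a minimum-distance classifier: each pixel (a vector of brightness values across bands) is assigned to the class whose mean it lies closest to. The class means and pixel values here are hypothetical:

```python
import numpy as np

# Hypothetical class means in a 2-band feature space
# (e.g. band values for "vegetation" and "soil" training areas).
means = np.array([[0.10, 0.50],    # class 0
                  [0.40, 0.20]])   # class 1

# Two pixels, each a vector of brightness values in the same 2 bands.
pixels = np.array([[0.12, 0.48],
                   [0.38, 0.25]])

# Minimum-distance classification: Euclidean distance from every
# pixel to every class mean, then pick the nearest class.
dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
labels = dist.argmin(axis=1)
print(labels)   # -> [0 1]: first pixel -> class 0, second -> class 1
```

Operational classifiers (e.g. maximum likelihood, k-means clustering) use richer statistics than a single mean per class, but the core idea of assigning each multi-band pixel to the statistically nearest class is the same.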