15 April 2020 - Session 1 - Digital Image Enhancement - Mrs Minakshi Kumar

The document discusses digital image processing and image preprocessing. It begins with an introduction to digital images and digital image processing. The major goals of digital image processing are listed as data acquisition/restoration, image enhancement, and information extraction. Image preprocessing is then discussed, which aims to correct radiometric and geometric errors in images. Specific radiometric errors covered include line/column dropouts, banding, and atmospheric effects like haze. Methods for correcting different radiometric errors like line dropouts, banding, and atmospheric haze are also presented.


Digital Image Processing


Basic Concepts

Mrs. Minakshi Kumar


Scientist “SG”
Photogrammetry and Remote Sensing Department
Geospatial Technology and Outreach Programme Group
Indian Institute of Remote Sensing
Indian Space Research Organisation
Department of Space, Government of India

Presentation Outline
 Digital Image
 Digital Image Processing
 Image Preprocessing
 Radiometric Errors & Correction
 Line /Column Dropout / Banding
 Haze Correction
 Sun angle Correction
 Geometric Error & Correction
 Rectification
 Resampling

What is an Image ?
 An IMAGE is a Pictorial Representation of an object or a scene.
 Forms of Images
 Analog
 Digital


What is a Digital Image ?

• Produced by electro-optical sensors.
• Composed of tiny equal areas, or picture elements (abbreviated as pixels or pels), arranged in a rectangular array.
• With each pixel is associated a number known as the Digital Number (DN), Brightness Value (BV) or gray level, which is a record of the variation in radiant energy in discrete form.
• An object reflecting more energy records a higher number for itself on the digital image, and vice versa.
• The range of DN values depends on the radiometric resolution: 0 is darkest and 255 is brightest for an 8-bit radiometric resolution.
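As an illustration, a single band of a digital image can be held in memory as a two-dimensional array of DNs. The following minimal NumPy sketch (array size and values are invented) shows a pixel's brightness value and the DN range implied by an 8-bit radiometric resolution:

```python
import numpy as np

# A tiny 3 x 4 single-band "image" of 8-bit DNs (values are illustrative only).
band = np.array([[12, 45, 90, 200],
                 [10, 44, 95, 210],
                 [ 9, 40, 88, 255]], dtype=np.uint8)

rows, cols = band.shape      # pixels arranged in a rectangular array
dn = band[1, 2]              # DN / brightness value at row 1, column 2

bits = 8                     # radiometric resolution of the sensor
print("DN range: 0 (darkest) to", 2**bits - 1, "(brightest)")
print("Pixel at row 1, column 2 has DN =", dn)
```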

Multi Spectral Remotely Sensed Image

• Digital images of an area captured in different spectral ranges (bands).
• A pixel is referred to by its column, row and band number.


Digital Image Processing


Digital image processing can be defined as the computer manipulation of digital
values contained in an image for the purposes of image correction, image
enhancement and feature extraction.

A digital image processing system consists of computer hardware (e.g. a personal computer) and dedicated image processing software necessary to analyze digital image data.

Image Processing Software Functionalities

• Data Acquisition/Restoration - compensates for data errors, i.e. preprocessing (radiometric and geometric)
• Image Enhancement - alters the visual impact of the image on the interpreter to improve the information content
• Information Extraction - utilizes the decision-making capability of computers to recognize and classify pixels on the basis of their signatures


Major Digital Image Processing Systems


Commercial
• ERDAS IMAGINE
• ENVI
• IDRISI
• ER Mapper
• PCI Geomatica
• eCognition
• MATLAB
• Intergraph

Open Source
• ILWIS (http://www.ilwis.org/index.htm)
• Opticks (http://opticks.org/confluence/display/opticks/Welcome+To+Opticks)
• GRASS - Geographic Resources Analysis Support System (http://grass.osgeo.org/)
• OSSIM - Open Source Software Image Map (www.ossim.org)
• MultiSpec (https://engineering.purdue.edu/~biehl/MultiSpec/index.html)
• QGIS - A Free and Open Source Geographic Information System (http://www.qgis.org/en/site/)


Image Preprocessing
 Remote sensing systems may not function perfectly all the
time.
 The Earth’s atmosphere, land, and water are complex and
do not lend themselves well to being recorded by remote
sensing devices that have constraints such as spatial,
spectral, temporal, and radiometric resolution.
 Consequently, error may creep into the data acquisition
process and can degrade the quality of the remote sensor
data collected.
 The two most common types of error encountered in
remotely sensed data are radiometric and geometric.
 Radiometric and geometric correction of remotely sensed
data are normally referred to as preprocessing
operations because they are performed prior to information
extraction.

Radiometric Errors - Causes

• Radiometric errors are caused by detector imbalance and atmospheric deficiencies.
• Radiometric corrections are also called cosmetic corrections and are done to improve the visual appearance of the image.
• Common radiometric errors:
  • periodic line or column drop-outs
  • line or column striping
  • random bad pixels (shot noise)
  • partial line or column drop-outs
  • atmospheric errors

Line dropout

• If a detector fails to function, this can result in an entire line or column of data with no spectral information.
• The bad line or column is commonly called a line or column drop-out and contains brightness values equal to zero.

Periodic Line Dropout

Partial Line Dropout


Correction for Missing Lines / Columns

• It is first necessary to locate each bad line in the dataset.
• A simple thresholding algorithm makes a pass through the dataset and flags any scan line having a mean brightness value at or near zero.
• Once identified, it is then possible to evaluate the brightness value of the pixel in the preceding line (BVi-1,j,k) and in the succeeding line (BVi+1,j,k) and assign an output value to the pixel (BVi,j,k) in the drop-out line:
  • replacement by either the preceding or the succeeding line:
    BVi,j,k = BVi-1,j,k  or  BVi,j,k = BVi+1,j,k
  • averaging of the neighbouring line values:
    BVi,j,k = (BVi-1,j,k + BVi+1,j,k) / 2
  where BVi,j,k is the missing pixel value in drop-out line i, column j of band k
• Replacing the line with another, highly correlated band.
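A minimal sketch of the threshold-and-average procedure above, assuming a single-band NumPy array and an analyst-chosen near-zero threshold (both the threshold and the example array are illustrative):

```python
import numpy as np

def fix_line_dropouts(band, threshold=1.0):
    """Flag scan lines whose mean DN is at or near zero and replace each one
    with the average of the preceding and succeeding lines (assumes the
    drop-out lines are isolated)."""
    band = band.astype(np.float32)
    bad = np.where(band.mean(axis=1) <= threshold)[0]    # drop-out line indices
    for i in bad:
        if 0 < i < band.shape[0] - 1:
            band[i] = (band[i - 1] + band[i + 1]) / 2.0  # BVi,j = (BVi-1,j + BVi+1,j) / 2
        elif i == 0:
            band[i] = band[i + 1]                        # first line: copy the succeeding line
        else:
            band[i] = band[i - 1]                        # last line: copy the preceding line
    return band.astype(np.uint8)

# Example: a four-line band in which line 2 has dropped out (all zeros).
band = np.array([[10, 12, 14],
                 [11, 13, 15],
                 [ 0,  0,  0],
                 [13, 15, 17]], dtype=np.uint8)
print(fix_line_dropouts(band))
```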

Line Striping (Banding)

• The response of some detectors may shift towards the lower or higher end, causing a systematic horizontal/vertical banding pattern.
• Banding is a cosmetic defect and interferes with the visual appreciation of the patterns and features in the image.
• It is caused by variation in the gain and offset of each detector (linear sensor characteristic) as the sensor deteriorates over time.


Line striping correction

• A sensor is called ideal when there is a linear relationship between its input and output.
• The correction uses a linear expression to model the relationship between input and output values.
• It assumes that the mean and standard deviation of the data from each detector should be the same.
• Linear sensor model (n detectors):

  y = a·x + b

  where a = gain, b = offset, x = input, y = output (measured DN)

[Figure: measured DN plotted against input for the individual detectors D1(t), D2(t), ..., Dn(t), each with its own gain and offset.]
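A small sketch of such a gain/offset (destriping) adjustment, under the illustrative assumption that each image column comes from one detector and that every detector should match the mean and standard deviation of the whole band:

```python
import numpy as np

def destripe(band):
    """Adjust the gain (a) and offset (b) of each detector -- assumed here to be
    one image column -- so that its mean and standard deviation match those of
    the whole band: corrected = a * DN + b."""
    band = band.astype(np.float64)
    target_mean, target_std = band.mean(), band.std()
    out = np.empty_like(band)
    for j in range(band.shape[1]):                            # one detector per column (assumed)
        col = band[:, j]
        a = target_std / col.std() if col.std() > 0 else 1.0  # gain
        b = target_mean - a * col.mean()                      # offset
        out[:, j] = a * col + b
    return np.clip(out, 0, 255).astype(np.uint8)
```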


Atmosphere induced errors


HAZE
 Scattered light reaching the
sensor from the atmosphere
 Additive effect, reducing
CONTRAST
SKYLIGHT
 Scattered light reaching the
sensor after being reflected from
the Earth’s surface
 Multiplicative effect
SUN ANGLE
 Time/Seasonal effect changing
the atmospheric path
 Multiplicative effect


Atmospheric Haze Effect

[Figure: the same scene shown without haze and with haze]

DN values of objects in a single band:

              Without haze      With haze
Object 1:     DN = 20           DN = 20 + 20 = 40
Object 2:     DN = 40           DN = 40 + 20 = 60
Contrast:     40/20 = 2×        60/40 = 1.5×


Haze Correction- Dark Object Subtraction

Histogram Minimum Method


Assumption: infrared bands are not affected by Haze
 Identify black bodies: clear water and shadow zones with zero
reflectance in the infrared bands
 Identify DN values at shorter wavelength bands of the same pixel
positions. These DN are entirely due to haze
 Subtract the minimum of the DN values related to black bodies of a
particular band from all the pixel values of that band
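A minimal sketch of dark-object (histogram minimum) subtraction, simplified so that each band's haze estimate is its minimum DN; the (band, row, column) array ordering and the example values are assumptions for illustration:

```python
import numpy as np

def dark_object_subtraction(image):
    """Subtract each band's minimum DN (taken as the haze contribution, i.e. the
    DN recorded over a dark object such as clear water or deep shadow) from
    every pixel of that band."""
    image = image.astype(np.int16)                    # avoid uint8 underflow
    haze = image.min(axis=(1, 2), keepdims=True)      # per-band dark-object DN
    corrected = np.clip(image - haze, 0, 255)
    return corrected.astype(np.uint8), haze.squeeze()

# Example: two 2 x 2 bands; the first is offset by a constant haze DN of 20.
img = np.array([[[20, 40], [60, 20]],
                [[ 0, 30], [50, 10]]], dtype=np.uint8)
corrected, haze = dark_object_subtraction(img)
print(haze)        # estimated haze DN per band
print(corrected)
```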


Sun Angle Correction

• The position of the sun relative to the earth changes depending on the time of day and the day of year.
 Solar elevation angle: Time- and location dependent
 Sun elevation correction accounts for the seasonal position
of the sun relative to the earth
 Image data acquired under different solar illumination
angles need to be normalized to a constant solar position
 In the northern hemisphere the solar elevation angle is
smaller in winter than in summer
 The solar zenith angle is equal to 90 degree minus the
solar elevation angle
 Irradiance varies with the seasonal changes in solar
elevation angle and the changing distance between the
earth and sun
 Correction necessary for mosaicking and change detection

Sun Angle Correction


 Image data acquired under different solar illumination angles are normalized
by calculating pixel brightness values assuming the sun was at the zenith on
each date of sensing.
 The correction is usually applied by dividing each pixel value in a scene by
the sine of the solar elevation angle for the particular time and location of
imaging.

DN' = DN / sin(θ)

where θ is the solar elevation angle.

[Figure: two images acquired with different sun angles, and the corrected mosaic]
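A small sketch of this normalization, assuming the solar elevation angle is supplied in degrees (the angle and DNs below are invented):

```python
import numpy as np

def sun_angle_correction(band, solar_elevation_deg):
    """Normalize DNs to a sun-at-zenith geometry: DN' = DN / sin(elevation)."""
    theta = np.radians(solar_elevation_deg)
    corrected = band.astype(np.float64) / np.sin(theta)
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Example: a scene acquired at a 30-degree solar elevation angle,
# so every DN is divided by sin(30 deg) = 0.5.
band = np.array([[40, 80], [120, 60]], dtype=np.uint8)
print(sun_angle_correction(band, 30.0))
```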


Quiz Time
1. A digital image is composed of tiny equal areas called ___________ arranged in a rectangular array
i. Picture Element
ii. Pixel
iii. Pel
iv. All of above
2. The size of the tiny area in a digital image depends on the __________ and its value depends on the _________ of the sensor.
i. Spatial , Radiometric Resolution
ii. Radiometric, Spectral Resolution
iii. Spectral Resolution, Swath of Sensor
iv. None of the Above
3. A digital Image affected with Haze is corrected by Subtracting the minimum of the DN values
related to black bodies of a particular band from all the pixel values of that band
1. True
2. False
4. Scattered light reaching the sensor after being reflected from the Earth’s surface is called
1. Haze
2. Skylight
3. Irradiance
4. None of the above.

Geometric Errors and Corrections

• The transformation of remotely sensed images so that they have the scale and projection of a map is called geometric correction.
 It is concerned with placing the reflected, emitted, or back-
scattered measurements or derivative products in their proper
planimetric (map) location so they can be associated with
other spatial information in a geographic information system
(GIS)
 Include correcting for geometric distortions due to sensor-
Earth geometry variations, and conversion of the data to real
world coordinates (e.g. latitude and longitude) on the Earth's
surface


Earth Rotation Effect - Image Offset (skew)


 Sun-synchronous satellites are normally in
fixed orbits that collect a path (or swath) of
imagery as the satellite makes its way from
the north to the south in descending mode.
 Meanwhile, the Earth below rotates on its
axis from west to east making one
complete revolution every 24 hours. This
skews the geometry of the imagery
collected
• Dashed line indicates the shape of the distorted image
• Solid line indicates the restored image

Rectification

• Rectification is the process of geometrically correcting an image so that it can be represented on a planar surface, conform to other images, or conform to a map.
• That is, it is the process by which the geometry of an image is made planimetric.
• It is necessary when accurate area, distance and direction measurements are required to be made from the imagery.
 It is achieved by transforming the data from one grid system into
another grid system using a geometric transformation
 Grid transformation is achieved by establishing mathematical
relationship between the addresses of pixels in an image with
corresponding coordinates of those pixels on another image or
map or ground.

Image to Map Rectification Procedure

Two basic operations must be performed to geometrically rectify a remotely sensed image to a map coordinate system:

• Geometric transformation coefficient computation
  • The geometric relationship between an input pixel location (row & column) and the associated map coordinates of the same point (x, y) is identified.
  • This involves selecting Ground Control Points (GCPs) and fitting polynomial equations using the least squares technique.
• Intensity interpolation (resampling)
  • A pixel in the rectified image often requires a value from the input pixel grid that does not fall neatly on a row and column coordinate.
  • For this reason, a resampling mechanism is used to determine the pixel brightness value.


Ground Control Points (GCPs)


A ground control point (GCP) is a location on the
surface of the Earth (e.g., a road intersection) that can
be identified on the imagery and located accurately on a
map.
 There are two distinct sets of coordinates associated
with each GCP:
 source or image coordinates specified in i rows and j
columns, and
 Reference or map coordinates (e.g., x, y measured in
degrees of latitude and longitude, or meters in a Universal
Transverse Mercator projection).


Ground Control Points (GCPs)

• Accurate GCPs are essential for accurate rectification.
• Well-dispersed GCPs result in a more reliable rectification.
• GCPs for large-scale imagery:
  • road intersections, airport runways, towers, buildings, etc.
• GCPs for small-scale imagery:
  • larger features such as urban areas or geological features can be used.
• NOTE: landmarks that can vary (like lakes, other water bodies, vegetation, etc.) should not be used.
• A sufficiently large number of GCPs should be selected; a minimum number is required depending on the type of transformation.

Polynomial Coordinate Transformation

• Polynomial equations are used to convert the source file coordinates to rectified map coordinates.
• Depending upon the distortions in the imagery, the number of GCPs used and their locations relative to one another, polynomial equations of varying complexity are used.
• The degree of complexity of the polynomial is expressed as the ORDER of the polynomial.
• The order is simply the highest exponent used in the polynomial.


Mathematical Transformations
Linear Transformation / Affine Transformation / First-Order Transformation

X = a0 + a1x + a2y
Y = b0 + b1x + b2y
where
 X , Y are the Rectified coordinates (output)
 x, y are the source coordinates (input)
 A first order transformation can change
 Location in x and/or y
 Scale in x and/or y
 Skew in x and/or y
 Rotation


Polynomial Transformation

• If the coefficients a0, a1, a2, b0, b1 and b2 are known, then the above polynomials can be used to relate any point on the map to its corresponding point on the image, and vice versa (a short numerical sketch is given below). Hence six coefficients are required for this transformation (three for X and three for Y).
• So a minimum of THREE GCPs is required to solve the above equations.
• However, the error cannot be estimated with three GCPs alone; hence one additional GCP is taken.
 Before applying rectification to the entire set of the data, it is
important to determine how well the six coefficients derived from
the least square regression of the initial GCPs account for the
geometric distortion in the input image.
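As a worked sketch of applying a first-order polynomial whose six coefficients are already known, the coefficient values below are invented purely to show the mechanics (a 30 m pixel size, an origin shift, and no rotation or skew):

```python
# Illustrative first-order (affine) transformation from image (x, y) to map (X, Y).
def affine(x, y, a, b):
    X = a[0] + a[1] * x + a[2] * y
    Y = b[0] + b[1] * x + b[2] * y
    return X, Y

# Made-up coefficients: 30 m pixels, an origin shift, no rotation or skew.
a = (500000.0, 30.0,  0.0)     # a0, a1, a2
b = (310000.0,  0.0, -30.0)    # b0, b1, b2  (image y grows downwards)

print(affine(0, 0, a, b))      # image origin     -> (500000.0, 310000.0)
print(affine(10, 5, a, b))     # column 10, row 5 -> (500300.0, 309850.0)
```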

Accuracy of Transformation

• In this method, we check how well the selected points fit between the map and the image.
• To solve the linear polynomials, we first take four GCPs to compute the six coefficients. The source coordinates of a GCP in the original input image are, say, xi and yi. The position of the same point on the reference map, in degrees, feet or meters, is, say, x, y.
• Now, if we input the map x, y values for the first GCP back into the linear polynomial equation with all the coefficients in place, we get the computed or retransformed xr and yr values, which are supposed to be the location of this point in the input image.
• Ideally, the measured and computed values should be equal.
• In reality this does not happen.


Root Mean Square (RMS) Error

• Accuracy is measured by computing the Root Mean Square error (RMS error) for each ground control point.
• RMS error is the distance between the input (source or measured) location of a GCP and the retransformed (or computed) location for the same GCP.
• RMS error is computed with the Euclidean distance equation:

  RMS error = sqrt( (xr - xi)² + (yr - yi)² )

  where xi and yi are the input source coordinates and xr and yr are the retransformed coordinates.
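A minimal sketch of fitting the six coefficients to a set of GCPs by least squares and computing per-GCP and total RMS error. The GCP coordinates are invented, and the polynomial here maps reference (map) coordinates to image coordinates so that, as described above, the error is expressed in pixels:

```python
import numpy as np

# Invented GCPs: measured image coordinates (xi, yi) and reference map coordinates (x, y).
img_xy = np.array([[ 10.0,  12.0], [200.0,  15.0], [190.0, 180.0], [  8.0, 175.0]])
map_xy = np.array([[500.3, 999.6], [506.0, 999.5], [505.7, 994.6], [500.2, 994.7]])

# Fit the first-order polynomial that maps (map x, y) -> (image x, y) by least squares.
A = np.column_stack([np.ones(len(map_xy)), map_xy[:, 0], map_xy[:, 1]])
coef, *_ = np.linalg.lstsq(A, img_xy, rcond=None)    # 3 x 2 array: the six coefficients

# Retransform each GCP's map coordinates back into image space (xr, yr) ...
retransformed = A @ coef
# ... and compute the RMS error as the Euclidean distance to the measured (xi, yi).
rms = np.sqrt(((retransformed - img_xy) ** 2).sum(axis=1))
print("per-GCP RMS error (pixels):", rms)
# One common definition of the total RMS error over all GCPs:
print("total RMS error (pixels):", np.sqrt((rms ** 2).mean()))
```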


Acceptable RMS error


 The amount of RMS error that is tolerated can be thought of as a window
around each source coordinate, inside which a retransformed coordinate is
considered to be correct.
 Acceptable RMS error depends upon the
 End use of the data
 The type of data being used, and
 The accuracy of the GCP and the ancillary data.
 Normally an RMS error of less than 1 per GCP and a total RMS error of less
than half a pixel (0.5) is acceptable


Intensity Interpolation (Resampling)

• Once an image is warped, DNs must be assigned to the "new" pixels.
• Since the grid of pixels in the source image rarely matches the grid of the reference image, the pixels are resampled so that new data file values for the output file can be calculated.
• This process involves filling the rectified output grid with brightness values extracted from a location in the input image and reallocating them to the appropriate coordinate location in the rectified output image.
• This results in input line and column numbers that are real numbers (and not integers).
 When this occurs, methods of assigning Brightness values are
 Nearest Neighbour
 Bilinear
 Cubic

Nearest Neighbor
 The nearest neighbor approach uses the value of the closest
input pixel for the output pixel value.
 The pixel value occupying the closest image file coordinate to
the estimated coordinate will be used for the output pixel value
in the georeferenced image.
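A minimal sketch of nearest neighbor assignment, assuming the inverse mapping has already produced a fractional input (row, column) coordinate for the output pixel:

```python
import numpy as np

def nearest_neighbor(band, row_f, col_f):
    """Return the DN of the input pixel closest to the fractional input
    coordinate (row_f, col_f) produced by the inverse mapping."""
    r = int(round(row_f))
    c = int(round(col_f))
    r = min(max(r, 0), band.shape[0] - 1)   # clamp to the image extent
    c = min(max(c, 0), band.shape[1] - 1)
    return band[r, c]

band = np.array([[10, 20], [30, 40]], dtype=np.uint8)
print(nearest_neighbor(band, 0.4, 1.3))     # closest input pixel is (0, 1) -> 20
```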


Nearest Neighbor
ADVANTAGES:
 Output values are the original input values. Other methods of
resampling tend to average surrounding values. This may be
an important consideration when discriminating between
vegetation types or locating boundaries.
 Since original data are retained, this method is recommended
before classification.
 Easy to compute and therefore fastest to use.
DISADVANTAGES:
 Produces a choppy, "stair-stepped" effect. The image has a
rough appearance relative to the original unrectified data.
 Data values may be lost, while other values may be
duplicated.

Bilinear Interpolation
 The bilinear interpolation approach uses the weighted average of the nearest
four pixels to the output pixel.
BVwt = [ Σ (Zk / Dk²) ] / [ Σ (1 / Dk²) ],  summed over k = 1 … 4

where Zk are the values of the surrounding four data points, and Dk² are the squared distances from the point in question (x', y') to these data points. (A short sketch of this weighting is given below.)

• ADVANTAGES:
  • The stair-step effect caused by the nearest neighbor approach is reduced; the image looks smooth.
• DISADVANTAGES:
  • Alters the original data and reduces contrast by averaging neighboring values together.
  • Is computationally more intensive than nearest neighbor.
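A small sketch of the weighting shown above: an inverse-distance-squared average of the four input pixels surrounding the fractional coordinate (the example coordinate is arbitrary):

```python
import numpy as np

def weighted_four_neighbors(band, row_f, col_f):
    """BVwt = sum(Zk / Dk^2) / sum(1 / Dk^2) over the four pixels surrounding
    the fractional input coordinate (row_f, col_f)."""
    r0, c0 = int(np.floor(row_f)), int(np.floor(col_f))
    num, den = 0.0, 0.0
    for r in (r0, r0 + 1):
        for c in (c0, c0 + 1):
            d2 = (r - row_f) ** 2 + (c - col_f) ** 2
            if d2 == 0.0:                     # exactly on an input pixel
                return float(band[r, c])
            num += band[r, c] / d2
            den += 1.0 / d2
    return num / den

band = np.array([[10, 20], [30, 40]], dtype=np.float64)
print(weighted_four_neighbors(band, 0.5, 0.5))   # equidistant case -> plain average, 25.0
```

The same weighting extended to the sixteen surrounding pixels gives the cubic convolution form described on the next slide.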


Cubic Convolution
 The cubic convolution approach uses the weighted average of the nearest
sixteen pixels to the output pixel. The output is similar to bilinear
interpolation, but the smoothing effect caused by the averaging of
surrounding input pixel values is more dramatic.
BVwt = [ Σ (Zk / Dk²) ] / [ Σ (1 / Dk²) ],  summed over k = 1 … 16

where Zk are the values of the surrounding sixteen data points, and Dk² are the squared distances from the point in question (x', y') to these data points.
 ADVANTAGES:
 Stair-step effect caused by the nearest neighbor
approach is reduced. Image looks smooth.
 DISADVANTAGES:
 Alters original data and reduces contrast by averaging
neighboring values together.
 Is computationally more expensive than nearest
neighbor or bilinear interpolation.

[Figure: input image and rectified image]


Discussion / Query

Thank You
