GI23403 Unit3
Introduction
• The principal objective of image enhancement is to process a given
image so that the result is more suitable than the original image for a
specific application.
• It accentuates or sharpens image features such as edges, boundaries,
or contrast to make the image more useful for display and analysis.
• The enhancement doesn't increase the inherent information content
of the data, but it increases the dynamic range of the chosen features
so that they can be detected easily.
Enhancement Types
• RADIOMETRIC ENHANCEMENT: Modification of brightness
values of each pixel in an image data set independently
("global operations" affect a pixel based on information from
the entire image).
• SPECTRAL ENHANCEMENT: Enhancing images by
transforming the values of each pixel on a multiband basis.
• SPATIAL ENHANCEMENT: Modification of pixel values based
on the values of surrounding pixels. (Local operations)
Point, local and regional operation
Radiometric Enhancement
CONTRAST
• Amount of difference between average gray level of an object and
that of surroundings.
• Difference in illumination or grey-level values in an image;
intuitively, how vivid or washed-out an image appears
• Ratio of maximum intensity to minimum intensity:
CONTRAST = Max. Gray Value / Min. Gray Value
• The larger the ratio, the easier it is to interpret the image
Reason for Low Contrast
• Scene itself has low contrast ratio
• The individual objects and background that make up the terrain may
have a nearly uniform electromagnetic response at the wavelength
band of energy that is recorded by the remote sensing system
• Different materials often reflect similar amounts of radiant flux
throughout the visible, NIR, and MIR portions of the EM spectrum.
• Cultural factors, e.g., people in developing countries use natural
building materials (wood, soil) in the construction of urban areas.
• Sensitivity of the detector
• Atmospheric factors
Radiometric Enhancements
• Radiometric enhancement is a technique that improves the appearance of
an image by adjusting the brightness values of individual pixels.
• Radiometric enhancement: contrast (min-max) enhancement.
• Linear contrast stretch: input and output data values follow a linear
relationship.
  1. Min-max stretch
  2. % stretch
  3. Std. deviation stretch
  4. Piecewise linear stretch
  5. Saw-tooth stretch
• Non-linear contrast stretch: input and output data values do not
follow a linear relationship.
  1. Logarithmic
  2. Inverse log
  3. Exponential
  4. Square
  5. Square root
Minimum Maximum Stretch
• Expands the original input values to make use of the total range of
the sensitivity of the display device.
• The density values in a scene are literally pulled farther apart, that is,
expanded over a greater range.
• The effect is to increase the visual contrast between two areas of
different uniform densities.
• This enables the analyst to discriminate easily between areas initially
having a small difference in density.
• A DN in the low range of the original histogram is assigned to extreme
black, and a value at the high end is assigned to extreme white.
• The remaining pixel values are distributed linearly between these two
extremes
Linear Contrast Enhancement: Minimum Maximum
Stretch (Normalization)
• By expanding the original input values of
the image, the total range of sensitivity of
the display device can be utilized.
• Linear contrast enhancement also makes
subtle variations within the data more
obvious.
• These types of enhancements are best applied to remotely sensed
images with Gaussian or near-Gaussian histograms.
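A minimal sketch of the min-max stretch in code, assuming an 8-bit display range of 0-255; the function name and the sample DN range (60-108) are illustrative, not from the original slides:

```python
import numpy as np

def min_max_stretch(band, out_min=0, out_max=255):
    """Linearly expand the input DNs so the darkest pixel maps to out_min
    and the brightest to out_max (assumes the band is not constant)."""
    band = band.astype(np.float64)
    dn_min, dn_max = band.min(), band.max()
    stretched = (band - dn_min) / (dn_max - dn_min) * (out_max - out_min) + out_min
    return np.round(stretched).astype(np.uint8)

# Illustrative low-contrast band whose DNs span only 60-108
band = np.random.randint(60, 109, size=(4, 4))
print(min_max_stretch(band))  # DNs now spread across the full 0-255 range
```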
Linear Contrast Enhancement: Histogram Equalization
• Histogram equalization redistributes gray levels using the cumulative
distribution function (CDF) of the image histogram, so that each output
level contains an approximately equal number of pixels; each input level
r_k maps to s_k = round((L - 1) · CDF(r_k)).
Histogram Equalization - Practice
• Consider a 4 × 4 image with 2³ = 8 gray levels:

1 5 5 2
1 5 5 2
1 5 5 2
1 5 5 2

Gray level:          0  1  2  3  4  5  6  7
No. of pixels (nk):  0  4  4  0  0  8  0  0
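As a check on the practice exercise, a minimal sketch of equalization using the mapping s_k = round((L - 1) · CDF(r_k)); for this image the mapping works out to 1 → 2, 2 → 4, 5 → 7:

```python
import numpy as np

L = 8  # 2**3 gray levels

img = np.array([[1, 5, 5, 2],
                [1, 5, 5, 2],
                [1, 5, 5, 2],
                [1, 5, 5, 2]])

hist = np.bincount(img.ravel(), minlength=L)   # n_k for each gray level
cdf = np.cumsum(hist) / img.size               # cumulative distribution
mapping = np.round((L - 1) * cdf).astype(int)  # s_k = round((L-1) * CDF(r_k))
print(mapping[img])                            # equalized image: 1->2, 2->4, 5->7
```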
Piece-wise Stretching
• When the histogram of an image is bi- or tri-modal, an analyst may
stretch certain ranges of the histogram for increased enhancement in
selected areas.
• A piecewise linear contrast enhancement
involves the identification of a number of
linear enhancement steps that expands
the brightness ranges in the modes of the
histogram.
• In the piecewise stretch, a series of small
min-max stretches are set up within a
single histogram.
Piece-wise Stretching
• Segment slopes: α = 0.66 for 0 ≤ r < 3, β = 2 for 3 ≤ r < 5,
γ = 0.5 for 5 ≤ r ≤ 7, giving

S = αr (0 ≤ r < 3);  S = β(r − 3) + 2 (3 ≤ r < 5);  S = γ(r − 5) + 6 (5 ≤ r ≤ 7)

• Find the slope for each segment.

r   S                       Round-off
0   0.66 × 0 = 0            0
1   0.66 × 1 = 0.66         1
2   0.66 × 2 = 1.32         1
3   2(3 − 3) + 2 = 2        2
4   2(4 − 3) + 2 = 4        4
5   0.5(5 − 5) + 6 = 6      6
6   0.5(6 − 5) + 6 = 6.5    7
7   0.5(7 − 5) + 6 = 7      7
Piece-wise Stretching
r → S mapping: 0→0, 1→1, 2→1, 3→2, 4→4, 5→6, 6→7, 7→7

Input image:    Output image:
4 3 5 2         4 2 6 1
3 6 4 6         2 7 4 7
2 2 6 5         1 1 7 6
7 6 4 1         7 7 4 1
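A sketch of the same piecewise mapping using np.interp, with breakpoints (0, 0), (3, 2), (5, 6), (7, 7) read off the worked example; halves are rounded up to match the table:

```python
import numpy as np

# Breakpoints of the three segments: slope 0.66 on [0,3), 2 on [3,5), 0.5 on [5,7]
r_pts = [0, 3, 5, 7]
s_pts = [0, 2, 6, 7]

img = np.array([[4, 3, 5, 2],
                [3, 6, 4, 6],
                [2, 2, 6, 5],
                [7, 6, 4, 1]])

# floor(x + 0.5) rounds halves up (np.round would send 6.5 to 6)
out = np.floor(np.interp(img, r_pts, s_pts) + 0.5).astype(int)
print(out)  # matches the output image above
```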
Intensity Level Slicing
• With Background: gray levels inside the slice are raised to L − 1 = 7;
all other values are left unchanged.

S = L − 1 = 7 for 3 ≤ r ≤ 5;  S = r otherwise

Input image:    Output image:
4 3 5 2         7 7 7 2
3 6 4 6         7 6 7 6
2 2 6 5         2 2 6 7
7 6 4 1         7 6 7 1
Intensity Level Slicing
• Without Background (clipping or thresholding): gray levels inside the
slice are raised to L − 1 = 7; all other values are set to 0.

S = L − 1 = 7 for 3 ≤ r ≤ 5;  S = 0 otherwise

Input image:    Output image:
4 3 5 2         7 7 7 0
3 6 4 6         7 0 7 0
2 2 6 5         0 0 0 7
7 6 4 1         0 0 7 0
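Both slicing variants reduce to boolean indexing; a minimal sketch assuming L = 8 and the slice range 3-5 from the examples above:

```python
import numpy as np

L = 8
img = np.array([[4, 3, 5, 2],
                [3, 6, 4, 6],
                [2, 2, 6, 5],
                [7, 6, 4, 1]])

in_slice = (img >= 3) & (img <= 5)

with_bg = np.where(in_slice, L - 1, img)    # background values kept
without_bg = np.where(in_slice, L - 1, 0)   # background clipped to 0
print(with_bg)     # matches the "with background" output
print(without_bg)  # matches the "without background" output
```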
Spatial Filtering
• Spatial Filtering is the process of dividing the image into its constituent
spatial frequencies, and selectively altering certain spatial frequencies to
emphasize some image features.
• Process of suppressing (de-emphasizing) certain frequencies and passing
(emphasizing) others. It increases the analyst's ability to discriminate detail.
• A local operation, i.e., a pixel value is modified based on the values
surrounding it.
• Used for enhancing certain features, removing noise, and smoothing
the image.
Spatial Filtering
• Filtering algorithms are composed of:
  • a window mask / kernel / convolution mask, and
  • constants (weights given to the mask).
• Mask sizes: 3×3, 5×5, 7×7, 9×9, …
Convolution (Filtering)
• The brightness value BV(i,j,out) at location (i, j) in the output image is a
function of a weighted average of brightness values located in a
particular spatial pattern around the (i, j) location in the input image.
• Convolution is the process of evaluating the weighted neighbouring pixel
values located in a particular spatial pattern around the (i, j) location
in the input image.
Filter (3 × 3 convolution mask of weights):

C1 C2 C3
C4 C5 C6
C7 C8 C9
Convolution Process
• Step 1: The window mask is placed over part of the image.
• Step 2: The central pixel value is calculated based on its
neighbouring values.
• The mask is then shifted across the image one pixel at a time and the
calculation is repeated at each location.
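A minimal sketch of the window operation described above, applying a 3×3 mean (smoothing) mask by sliding the window one pixel at a time; leaving edge pixels unchanged is a simplifying choice, not from the slides:

```python
import numpy as np

def convolve3x3(img, kernel):
    """Replace each interior pixel with the weighted sum of its 3x3 neighbourhood."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            window = img[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(window * kernel)   # weighted average (C1..C9)
    return out

mean_kernel = np.ones((3, 3)) / 9.0               # equal weights: low-pass mask
img = np.random.randint(0, 256, size=(6, 6)).astype(float)
print(convolve3x3(img, mean_kernel))
```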
• The filters discussed above are spatial filters. With Fourier analysis
(the Fourier transform), the spatial domain is transferred to the
frequency domain.
• The frequency domain is represented as a 2D plot, known as the Fourier
domain, where low frequencies fall at the center and high frequencies lie
progressively outward.
• Thus the Fourier spectrum of an image can be used to enhance the quality
of an image with the help of low- and high-frequency block filters.
• After filtering, the inverse Fourier transformation converts the result
back to the spatial domain to give the output image.
Image Manipulation
• Band ratioing
• Indexing
• Principal Component Analysis
Band Ratioing and Indexing
• Radiance or reflectance of surface features differs depending on the
topography, shadows, and seasonal changes in solar illumination angle
and intensity.
• The band ratio of two bands removes much of the effect of illumination
in the analysis of spectral differences.
• A ratio image results from the division of DN values in one spectral
band by the corresponding values in another band.
(Figure: a) the blue band shows topography due to illumination
differences; b) the ratio of band3/band2 removes illumination and yields
different rock types.)
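A minimal sketch of band ratioing; the band3/band2 choice mirrors the figure caption, the array contents are illustrative, and a small epsilon guards against division by zero:

```python
import numpy as np

def band_ratio(numerator, denominator, eps=1e-6):
    """Divide DN values in one band by the corresponding values in another."""
    return numerator.astype(np.float64) / (denominator.astype(np.float64) + eps)

band2 = np.random.randint(1, 256, size=(4, 4))
band3 = np.random.randint(1, 256, size=(4, 4))
ratio = band_ratio(band3, band2)   # suppresses common illumination effects
print(ratio)
```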
Principal Component Analysis
• Spectral bands of multispectral data are highly correlated. Due to this
high correlation between bands, analysis of the data sometimes becomes
quite difficult; images obtained by combining different spectral bands
look similar.
• To decorrelate the spectral bands, Principal Component Analysis (PCA),
or Principal Component Transformation (PCT), is applied.
• PCA also has the capability of dimension reduction; thus PCA of a
dataset may be either an enhancement operation prior to visual
interpretation or a preprocessing procedure for further processing.
• Principal components analysis is a special case of transforming the
original data into a new coordinate system.
• It enhances subtle variation in the image data, so many features can be
identified that are not identifiable in the raw image.
(Figure: scatter plot of Band 1 vs. Band 2 DN values (0-255), showing the
correlated data cloud and the principal-axis vector with its magnitude
and direction.)
Feature Selection, Feature Extraction, and Feature Selection Problems
(Figures: band-to-band scatter plots; no relationship between bands is
also possible.)
PCA – Intro
• PCA seeks the directions of maximum variance; variance is the cost
function.

Eigen Vectors and Eigen Values
1. Compute the covariance matrix between the features.
2. The eigenvectors and eigenvalues are found from this covariance
matrix; they describe a linear transformation of the matrix, Av = λv.
3. The eigenvector whose eigenvalue has the maximum magnitude captures
the maximum variance: maximum eigenvalue → maximum-variance eigenvector
→ best PC.

Steps to Calculate Eigen Values and Vectors
Steps to Calculate Eigen Values and Vectors
Principal Components Analysis ( PCA)
• An exploratory technique used to reduce the
dimensionality of the data set to 2D or 3D
• Can be used to:
– Reduce number of dimensions in data
– Find patterns in high-dimensional data
– Visualize data of high dimensionality
• Example applications:
– Face recognition
– Image compression
– Gene expression analysis
Principal Component Analysis
(Figure: data scatter in the X1-X2 plane. Y1 is the first eigenvector,
the direction along which the variance is largest; Y2 is the second,
orthogonal to Y1 and largely ignorable.)
Principal Component Analysis: One Attribute First
• Sample variance of a single attribute, e.g. Temperature values
42, 40, 30, 35, 30, 40, 30:

s² = Σᵢ₌₁ⁿ (Xᵢ − X̄)² / (n − 1)
Now Consider Two Dimensions
• Covariance measures the correlation between X and Y:
  • cov(X, Y) = 0: independent
  • cov(X, Y) > 0: move in the same direction
  • cov(X, Y) < 0: move in opposite directions

cov(X, Y) = Σᵢ₌₁ⁿ (Xᵢ − X̄)(Yᵢ − Ȳ) / (n − 1)

X = Temperature: 40 40 40 30 15 15 15 30 15 30 30 30 40
Y = Humidity:    90 90 90 90 70 70 70 90 70 70 70 90 70
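Both formulas can be checked numerically; ddof=1 selects the sample (n − 1) denominator, and the arrays repeat the Temperature/Humidity values from the table above:

```python
import numpy as np

temp = np.array([40, 40, 40, 30, 15, 15, 15, 30, 15, 30, 30, 30, 40])
humid = np.array([90, 90, 90, 90, 70, 70, 70, 90, 70, 70, 70, 90, 70])

print(np.var(temp, ddof=1))               # sample variance, (n - 1) denominator
print(np.cov(temp, humid, ddof=1)[0, 1])  # positive: X and Y move together
```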
More Than Two Attributes: Covariance Matrix
• Contains covariance values between all possible pairs of dimensions
(= attributes):

C(n×n) = (cᵢⱼ | cᵢⱼ = cov(Dimᵢ, Dimⱼ))

• Example of the eigen relationship Av = λv (rows separated by semicolons):

[2 3; 2 1] [3; 2] = [12; 8] = 4 [3; 2]
Eigenvalues & Eigenvectors
• Ax = λx ⇔ (A − λI)x = 0
• How to calculate x and λ:
  – Calculate det(A − λI); this yields a polynomial of degree n.
  – Determine the roots of det(A − λI) = 0; the roots are the eigenvalues λ.
  – Solve (A − λI)x = 0 for each λ to obtain the eigenvectors x.
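The 2×2 worked example above can be verified with numpy.linalg.eig; note that NumPy normalizes eigenvectors to unit length and does not guarantee their order:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [2.0, 1.0]])

# det(A - lambda*I) = lambda^2 - 3*lambda - 4 = (lambda - 4)(lambda + 1)
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # 4 and -1 (order not guaranteed)
print(eigvecs)   # column for lambda = 4 is proportional to (3, 2)
```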
Principal Components
• First principal component (PC1)
  – The eigenvalue with the largest absolute value indicates that the
data have the largest variance along its eigenvector, the direction of
greatest variation.
• Second principal component (PC2)
  – The direction with the maximum variation left in the data,
orthogonal to PC1.
• In general, only a few directions capture most of the variability in
the data.
Steps of PCA
• Let X̄ be the mean vector (taking the mean of all rows).
• Adjust the original data by the mean: X′ = X − X̄.
• Compute the covariance matrix C of the adjusted X.
• Find the eigenvectors and eigenvalues of C:
  – For matrix C, the eigenvectors are column vectors e having the same
direction as Ce, i.e. Ce = λe; λ is called an eigenvalue of C.
  – Ce = λe ⇔ (C − λI)e = 0.
  – Most data mining packages do this for you.
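A sketch following these steps literally (mean-adjust, covariance, eigendecomposition, sort) on the Temperature/Humidity sample used earlier; the last line also computes the percentage-of-variance figures defined on the next slide:

```python
import numpy as np

X = np.column_stack([
    [40, 40, 40, 30, 15, 15, 15, 30, 15, 30, 30, 30, 40],   # Temperature
    [90, 90, 90, 90, 70, 70, 70, 90, 70, 70, 70, 90, 70],   # Humidity
]).astype(float)

X_adj = X - X.mean(axis=0)             # subtract the mean vector
C = np.cov(X_adj, rowvar=False)        # covariance matrix of adjusted data
eigvals, eigvecs = np.linalg.eigh(C)   # symmetric C: eigh, ascending order

order = np.argsort(eigvals)[::-1]      # sort by descending eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print(eigvecs[:, 0])                   # PC1 direction (largest variance)
print(100 * eigvals / eigvals.sum())   # V_j: % of total variance per component
```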
Eigenvalues
• Calculate the eigenvalues λ and eigenvectors x of the covariance
matrix.
• The eigenvalues λⱼ are used to calculate the percentage of total
variance Vⱼ captured by each component j:

Vⱼ = 100 · λⱼ / (λ₁ + λ₂ + … + λₙ)
Principal Components - Variance
(Figure: bar chart of the variance (%) explained by each component,
PC1 through PC10.)
Transformed Data
• The eigenvalue λⱼ corresponds to the variance on component j.
• Thus, sort the eigenvectors by λⱼ.
• Take the first p eigenvectors eᵢ, where p is the number of top
eigenvalues retained.
• These are the directions with the largest variances; the transformed
coordinates of a sample xᵢ are

yᵢₖ = eₖᵀ (xᵢ − x̄),  k = 1, …, p
Fourier transform - FFT
• In remote sensing, the Fast Fourier Transform (FFT) is used to analyze
images by converting them from the spatial domain to the frequency
domain, enabling operations such as noise reduction, edge enhancement,
and feature extraction, which are crucial for image classification and
object detection.
• The FFT is a highly efficient algorithm for computing the Discrete
Fourier Transform (DFT), which decomposes a signal (like a remote
sensing image) into its constituent frequencies.
• By applying FFT, we move from the spatial domain (where we see the
image pixels) to the frequency domain, where we can see the
patterns of frequencies present in the image.
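A minimal round-trip sketch with NumPy's FFT: shift the spectrum so low frequencies sit at the center, keep a centered block (a crude low-pass filter), and invert. The random image and the cutoff half-width of 16 are illustrative choices:

```python
import numpy as np

img = np.random.rand(128, 128)           # stand-in for one band of an image

F = np.fft.fftshift(np.fft.fft2(img))    # frequency domain, low freqs centered

rows, cols = img.shape
r, c = np.ogrid[:rows, :cols]
# Keep only a centered low-frequency block; zero the rest (low-pass)
mask = (np.abs(r - rows // 2) < 16) & (np.abs(c - cols // 2) < 16)

smoothed = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
print(smoothed.shape)                    # filtered image, back in spatial domain
```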
Fourier transform – FFT- Benefits
• Noise Reduction: High-frequency noise can be filtered out in the
frequency domain, leading to cleaner images.
• Edge Enhancement: FFT can emphasize edges and fine details by
selectively boosting or suppressing certain frequency components.
• Feature Extraction: Analyzing frequency content can reveal important
features, like textures or patterns, that might be difficult to spot in the
spatial domain.
• Image Classification and Object Detection: Frequency domain
analysis can help in developing algorithms for classifying land cover
types or detecting objects of interest in remote sensing images.
FFT Applications in Remote Sensing
• Super-Resolution:
Researchers have developed a network using FFT to improve the resolution of
remote sensing images.
• Crop Classification:
A convolutional neural network based on Fourier frequency domain learning has
been proposed to improve the accuracy of remote sensing crop classification.
• Change Detection:
A multi-scale remote sensing change detection network using the global filter in
the frequency domain has been developed.
• Urban Image Segmentation:
A boundary-aware spatial and frequency dual-domain transformer has been
proposed for remote sensing urban image segmentation.
• Atmospheric Remote Sensing:
Fourier transform spectroscopy is used in atmospheric remote sensing.
Multi-image fusion
• Image fusion techniques should allow the combination of images with
different spectral and spatial resolutions while keeping the radiometric
information.
• It is defined as "the set of methods, tools and means of using data from
two or more different images to improve the quality of information."
• The goal is to combine the higher spatial information in one dataset
(PAN) with the higher spectral information in another (MS) to create
"synthetic" higher-resolution multispectral datasets and images
(PAN + MS → fused image).
1. Sharper image resolution (display)
2. Improved classification (and others)
Fusion Methods
• Methods based on the IHS transform and Principal Components Analysis
(PCA) are probably the most popular approaches used to enhance the
spatial resolution of multispectral images with panchromatic images.
• However, both methods suffer from the problem that the radiometry of
the spectral channels is modified after fusion. This is because the
high-resolution panchromatic image usually has spectral characteristics
different from both the intensity component and the first principal
component.
• New techniques have been proposed, such as those that combine the
wavelet transform with the IHS model and the PCA transform, to manage
color and detail distortion in the fused image.
Fusion Methods
• Image fusion is performed at three different processing levels (pixel
level, feature level, and decision level), according to the stage at
which the fusion takes place.
• The objective of information fusion is to improve the accuracy of
image interpretation and analysis by making use of complementary
information.
• An ideal image fusion technique should have three essential qualities:
high computational efficiency, preservation of high spatial resolution,
and reduction of color distortion.
Fusion Methods
• Hue, saturation, and intensity (or brightness) are the three key
attributes that define a color: hue represents the color itself,
saturation indicates the purity or intensity of that color, and
intensity (or brightness) determines how light or dark the color
appears.
IHS Color Model
• The IHS method consists of transforming the R, G, and B bands of the
multispectral image into IHS components, replacing the intensity
component with the panchromatic image, and performing the inverse
transformation to obtain a high-spatial-resolution multispectral image.
• The three multispectral bands, R, G, and B, of a low-resolution image
are first transformed to the IHS color space, where the I, H, and S
components are intensity, hue, and saturation, and V1 and V2 are
intermediate variables. Fusion proceeds by replacing component I with
the panchromatic high-resolution image after matching its radiometric
information with component I.
IHS Color Model
• To reduce the color distortion, the PAN image is matched to the
intensity component before the replacement, or the hue and saturation
components are stretched before the reverse transform.
• The fused image, which has both rich spectral information and high
spatial resolution, is then obtained by performing the inverse
transformation from IHS back to the original RGB space.
• Although the IHS method has been widely used, it cannot decompose an
image into different frequencies in frequency space (e.g., higher or
lower frequencies). Hence the IHS method cannot be used to enhance
certain image characteristics.
• The problem with the IHS method is that spectral distortion may occur
during the merging process. A large difference between the values of Pan
and I appears to cause a large spectral distortion in fused images.
Indeed, this difference (Pan − I) causes an altered saturation component
in the RGB-IHS conversion model.
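A sketch of the substitution step using scikit-image's RGB-HSV conversion as a stand-in for the exact IHS (V1/V2) transform in the text; the pan band is assumed co-registered, resampled to the MS grid, and scaled to [0, 1], and mean/std matching is one simple way to match radiometry:

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def ihs_fuse(ms_rgb, pan):
    """Replace the intensity of an RGB composite with a matched pan band."""
    hsv = rgb2hsv(ms_rgb)                         # forward transform
    v = hsv[..., 2]                               # intensity component I
    # Match pan radiometry to I before the replacement
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-9) * v.std() + v.mean()
    hsv[..., 2] = np.clip(pan_matched, 0.0, 1.0)  # substitute intensity
    return hsv2rgb(hsv)                           # inverse transform to RGB

ms = np.random.rand(64, 64, 3)   # illustrative low-resolution MS composite
pan = np.random.rand(64, 64)     # illustrative pan band on the same grid
print(ihs_fuse(ms, pan).shape)
```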
Fusion Methods: Principal Components Analysis (PCA)
• PCA is applied to the multispectral bands, and the first principal
component (PC1) is replaced with the panchromatic band, which has higher
spatial resolution than the multispectral images. Afterwards, the
inverse PCA transformation is applied to obtain the image in the RGB
color model.
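A sketch of PC-substitution fusion under the same assumptions (co-registered, equally sized arrays); computing the PCs via SVD of the mean-adjusted data is an implementation choice, not prescribed by the slides:

```python
import numpy as np

def pca_fuse(ms_bands, pan):
    """ms_bands: (rows, cols, n_bands); pan: (rows, cols) on the same grid."""
    r, c, n = ms_bands.shape
    X = ms_bands.reshape(-1, n).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)  # PC directions
    pcs = (X - mean) @ Vt.T                                  # forward PCA

    pc1 = pcs[:, 0]
    p = pan.ravel().astype(float)
    # Match pan statistics to PC1, then substitute it for PC1
    pcs[:, 0] = (p - p.mean()) / (p.std() + 1e-9) * pc1.std() + pc1.mean()

    return (pcs @ Vt + mean).reshape(r, c, n)                # inverse PCA

ms = np.random.rand(64, 64, 4)   # illustrative multispectral stack
pan = np.random.rand(64, 64)     # illustrative pan band
print(pca_fuse(ms, pan).shape)
```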