
UNIT 3 - IMAGE ENHANCEMENT

Introduction
• The principal objective of image enhancement is to process a given
image so that the result is more suitable than the original image for a
specific application.
• It accentuates or sharpens image features such as edges, boundaries,
or contrast, making the image more useful for display and analysis.
• Enhancement does not increase the inherent information content
of the data, but it increases the dynamic range of the chosen features
so that they can be detected easily.
Enhancement Types
• RADIOMETRIC ENHANCEMENT: Modification of brightness
values of each pixel in an image data set independently
("global operations" affect a pixel based on information from
the entire image).
• SPECTRAL ENHANCEMENT: Enhancing images by
transforming the values of each pixel on a multiband basis.
• SPATIAL ENHANCEMENT: Modification of pixel values based
on the values of surrounding pixels. (Local operations)
Point, local and regional operation
Radiometric Enhancement
CONTRAST
• The amount of difference between the average gray level of an object and
that of its surroundings.
• The difference in illumination or gray-level values in an image;
intuitively, how vivid or washed-out an image appears.
• The ratio of maximum intensity to minimum intensity:
CONTRAST = Max. Gray Value / Min. Gray Value
• The larger the ratio, the easier it is to interpret the image.
Reasons for Low Contrast
• The scene itself has a low contrast ratio.
• The individual objects and background that make up the terrain may
have a nearly uniform electromagnetic response at the wavelength
band of energy that is recorded by the remote sensing system.
• Different materials often reflect similar amounts of radiant flux
throughout the visible, NIR and MIR portions of the EM spectrum.
• Cultural factors, e.g. people in developing countries use natural
building materials (wood, soil) in the construction of urban areas.
• Sensitivity of the detector.
• Atmospheric factors.
Radiometric Enhancements
• Radiometric enhancement is a technique that improves the appearance of
an image by adjusting the brightness values of individual pixels.
• Radiometric enhancement: contrast (min-max) enhancement.
• Linear Contrast Stretch: input and output data values follow a linear
relationship.
1. Min-Max Stretch
2. Percentage Stretch
3. Standard Deviation Stretch
4. Piecewise Linear Stretch
5. Saw-tooth Stretch
• Non-Linear Contrast Stretch: input and output data values do not follow
a linear relationship.
1. Logarithmic
2. Inverse Log
3. Exponential
4. Square
5. Square Root
Minimum Maximum Stretch
• Expands the original input values to make use of the total range of
the sensitivity of the display device.
• The density values in a scene are literally pulled farther apart, that is,
expanded over a greater range.
• The effect is to increase the visual contrast between two areas of
different uniform densities.
• This enables the analyst to discriminate easily between areas initially
having a small difference in density.
• A DN in the low range of the original histogram is assigned to extreme
black, and a value at the high end is assigned to extreme white.
• The remaining pixel values are distributed linearly between these two
extremes
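A min-max stretch reduces to one line of arithmetic per pixel. The sketch below is a minimal NumPy version, assuming an 8-bit display range of 0-255:

```python
import numpy as np

def min_max_stretch(band: np.ndarray) -> np.ndarray:
    """Linearly rescale DN values so they span the full 0-255 display range."""
    dn_min, dn_max = band.min(), band.max()
    stretched = (band.astype(float) - dn_min) / (dn_max - dn_min) * 255.0
    return stretched.round().astype(np.uint8)

# Example: a low-contrast band occupying only DNs 60-108
band = np.random.randint(60, 109, size=(100, 100))
out = min_max_stretch(band)
print(out.min(), out.max())       # 0 255
```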
Linear Contrast Enhancement: Minimum Maximum
Stretch (Normalization)
• By expanding the original input values of
the image, the total range of sensitivity of
the display device can be utilized.
• Linear contrast enhancement also makes
subtle variations within the data more
obvious.
• These types of enhancements are best
applied to remotely sensed images with
Gaussian or near-Gaussian histograms.

Linear Contrast Enhancement: Histogram
Equalization
Histogram Equalization
Histogram Equalization - Practice

Input image (4x4), 3-bit data: 2^3 = 8 gray levels (0-7)

1 5 5 2
1 5 5 2
1 5 5 2
1 5 5 2

Gray level:          0  1  2  3  4  5  6  7
No. of pixels (nx):  0  4  4  0  0  8  0  0

(Figure: histogram of the image, no. of pixels vs. gray level.)
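The exercise can be checked numerically. The sketch below applies the standard equalization mapping s = round((L-1) * CDF(r)) to the practice image; for these counts it maps 1 to 2, 2 to 4 and 5 to 7:

```python
import numpy as np

img = np.array([[1, 5, 5, 2]] * 4)            # the 4x4 practice image
L = 8                                         # 3-bit data: 2**3 = 8 gray levels

hist = np.bincount(img.ravel(), minlength=L)  # pixel count per gray level
cdf = np.cumsum(hist) / img.size              # cumulative distribution function
mapping = np.round((L - 1) * cdf).astype(int) # s = round((L-1) * CDF(r))

equalized = mapping[img]
print(mapping)     # lookup table: gray level r -> equalized level s
print(equalized)
```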
Piece-wise Stretching
• When the distribution of a histogram in an
image is bimodal or trimodal, an analyst may
stretch certain values of the histogram for
increased enhancement in selected areas.
• A piecewise linear contrast enhancement
involves the identification of a number of
linear enhancement steps that expands
the brightness ranges in the modes of the
histogram.
• In the piecewise stretch, a series of small
min-max stretches are set up within a
single histogram.
Piece-wise Stretching
α = 0.66, β = 2, γ = 0.5
• Find the slope for each segment. With breakpoints at r = 3 and r = 5,
the mapping is:
S = α·r for 0 ≤ r < 3
S = β·(r − 3) + 2 for 3 ≤ r ≤ 5
S = γ·(r − 5) + 6 for 5 < r ≤ 7

r   S                      Round-off
0   0.66 × 0 = 0           0
1   0.66 × 1 = 0.66        1
2   0.66 × 2 = 1.32        1
3   2(3 − 3) + 2 = 2       2
4   2(4 − 3) + 2 = 4       4
5   2(5 − 3) + 2 = 6       6
6   0.5(6 − 5) + 6 = 6.5   7
7   0.5(7 − 5) + 6 = 7     7
Piece-wise Stretching

Lookup table from the previous slide:
r: 0 1 2 3 4 5 6 7
S: 0 1 1 2 4 6 7 7

Input image:    Output image:
4 3 5 2         4 2 6 1
3 6 4 6         2 7 4 7
2 2 6 5         1 1 7 6
7 6 4 1         7 7 4 1
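A sketch of the same three-segment mapping in NumPy, using the breakpoints and slopes above (rounding half up so the lookup table is reproduced exactly):

```python
import numpy as np

def piecewise_stretch(r: np.ndarray) -> np.ndarray:
    """Three-segment stretch with alpha=0.66, beta=2, gamma=0.5 (see table)."""
    s = np.where(r < 3, 0.66 * r,
        np.where(r <= 5, 2.0 * (r - 3) + 2.0,
                 0.5 * (r - 5) + 6.0))
    return np.floor(s + 0.5).astype(int)   # round half up, as in the table

img = np.array([[4, 3, 5, 2],
                [3, 6, 4, 6],
                [2, 2, 6, 5],
                [7, 6, 4, 1]])
print(piecewise_stretch(img))              # reproduces the output image above
```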
Intensity Level Slicing
• With background: pixels inside the band of interest are set to L − 1;
all other pixels keep their original value.

S = L − 1 = 7 for 3 ≤ r ≤ 5
S = r otherwise

Input image:    Output image:
4 3 5 2         7 7 7 2
3 6 4 6         7 6 7 6
2 2 6 5         2 2 6 7
7 6 4 1         7 6 7 1
Intensity Level Slicing
• Without background (clipping or thresholding): pixels inside the band of
interest are set to L − 1; all other pixels are set to 0.

S = L − 1 = 7 for 3 ≤ r ≤ 5
S = 0 otherwise

Input image:    Output image:
4 3 5 2         7 7 7 0
3 6 4 6         7 0 7 0
2 2 6 5         0 0 0 7
7 6 4 1         0 0 7 0
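Both variants fit in one small NumPy function; a sketch assuming the band of interest 3 ≤ r ≤ 5 and 3-bit data (L = 8) as above:

```python
import numpy as np

def level_slice(img, lo=3, hi=5, L=8, keep_background=True):
    """Set pixels in [lo, hi] to L-1; keep or zero out the rest."""
    in_band = (img >= lo) & (img <= hi)
    background = img if keep_background else np.zeros_like(img)
    return np.where(in_band, L - 1, background)

img = np.array([[4, 3, 5, 2],
                [3, 6, 4, 6],
                [2, 2, 6, 5],
                [7, 6, 4, 1]])
print(level_slice(img, keep_background=True))   # with background
print(level_slice(img, keep_background=False))  # without background
```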
Spatial Filtering

• Spatial Filtering is the process of dividing the image into its constituent
spatial frequencies, and selectively altering certain spatial frequencies to
emphasize some image features.
• It is the process of suppressing (de-emphasizing) certain frequencies and
passing (emphasizing) others; it increases the analyst's ability to
discriminate detail.
• It is a local operation, i.e. a pixel value is modified based on the values
surrounding it.
• It is used for enhancing certain features, removing noise, and smoothing
the image.
Spatial Filtering
• Filtering algorithms are composed of:
– a window mask / kernel / convolution mask, and
– constants (weights given to the mask).
• Mask sizes: 3x3, 5x5, 7x7, 9x9, …
Convolution (Filtering)
• The brightness value BV(i,j,out) at location i,j in the output image is a
function of some weighted average of the brightness values located in a
particular spatial pattern around the i,j location in the input image.
• Convolution is the process of evaluating the weighted neighbouring pixel
values located in a particular spatial pattern around the i,j location in
the input image.

Filter kernel (weights):
C1 C2 C3
C4 C5 C6
C7 C8 C9
Convolution Process
Step 1: The window mask is placed over a part of the image.
Step 2: The central pixel value is calculated as the weighted sum of the
kernel coefficients C1…C9 and the image values they overlap.
Step 3: The central pixel value is replaced by the new value, the window is
shifted one pixel to the right, and the entire process is repeated.
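The three steps translate directly into a loop. The sketch below is a deliberately naive implementation, assuming a 3x3 kernel and leaving border pixels unchanged:

```python
import numpy as np

def convolve3x3(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 3x3 convolution; border pixels are left unchanged for brevity."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):        # Step 3: shift the window...
        for j in range(1, img.shape[1] - 1):    # ...one pixel at a time
            window = img[i-1:i+2, j-1:j+2]      # Step 1: place mask over image
            out[i, j] = np.sum(window * kernel) # Step 2: weighted sum -> centre
    return out

mean_kernel = np.full((3, 3), 1 / 9)            # C1..C9 all equal to 1/9
img = np.random.randint(0, 256, size=(8, 8))
smoothed = convolve3x3(img, mean_kernel)
```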
Low-pass Filter
• A low-pass filter emphasizes low-frequency or homogeneous areas and
reduces the smaller details in the image.
• It is one of the methods of smoothing; it reduces the noise in the
imagery.
• Low-pass filters are normally subdivided into average (mean), mode and
median filters.
Low-pass Filter
• Average filter: In the case of the average or mean filter, the output DN
value is obtained by multiplying each coefficient of the kernel with the
corresponding DN in the input image, summing all the products, and dividing
by the sum of the kernel values. The coefficients of the kernel range from
equal values to different weighted values.
• Mode filter: A kernel is fitted over the input image and the mode value
within this kernel is computed. The resultant value becomes the value of the
centre pixel of the kernel. This filter is often used to clean up noisy
satellite images, especially when dealing with cloud cover or subtle
variations in terrain.
• Median filter: The median filter is similar to the mode filter. In this
case the median value of the kernel is computed, which replaces the centre
pixel DN value. The median filter is very useful for removing random noise,
salt-and-pepper noise, and speckle noise in RADAR imagery. Median filtering
is commonly used in pre-processing steps for analyzing satellite imagery,
including land cover classification, change detection, and environmental
monitoring.
High-pass Filter
• The high-pass filter is the opposite of the low-pass filter: it enhances
the variations in the imagery.
• It increases the spatial frequency, thus sharpening the details in the
imagery.
• A high-frequency filter works in a similar way to the average filter,
using a kernel that may be uniform or weighted.
• Another way of getting the output of high-pass filtering is subtraction
of the low-pass filtered image from the original image.

Worked examples (3x3 window with centre value 24):
• Written in increasing order, the window values are 20, 24, 28, 34, 36, 45,
56, 62, 68. In this set 45 is the median value; a median filter would replace
the central value 24 with 45.
• If the high-pass filter is applied instead, the calculated value is 39,
which replaces the value 24.
• Similarly, if the values of a 3x3 window are 50, 62, 58, 20, 145, 19, 20,
96, 204, then the centre pixel of the mode-filtered output image is the mode
value of the set, i.e. 20.
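The subtraction route to a high-pass image mentioned above takes two lines; a sketch using a 3x3 mean filter as the low-pass stage:

```python
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, size=(100, 100)).astype(float)

low_pass = ndimage.uniform_filter(img, size=3)   # 3x3 mean filter
high_pass = img - low_pass                       # high-pass = original - low-pass
edge_enhanced = img + high_pass                  # add the detail back to the image
```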
High-pass Filters
• Edge Enhancement filter: Edge enhancement is quite similar to high-frequency
filtering. A high-frequency filter exaggerates local brightness variation or
contrast and de-emphasizes the low-frequency areas in the imagery. In
contrast, an edge enhancement filter emphasizes the high-frequency areas
along with the low-frequency areas; it enhances the boundaries of features.
Different kinds of kernels are used depending on the roughness or tonal
variation of the image.
• Edge Detection filter: Edge enhancement filters should not be confused with
edge detection filters, which highlight the boundaries of features and
entirely de-emphasize the low-contrast areas. Edge detection highlights
linear features such as roads, rail lines, canals, or feature boundaries.
Sobel, Prewitt and Laplacian filters are examples of edge detection filters.
Fig. 11.11 shows image filtering operations using low-pass, high-pass, edge
enhancement and edge detection filters.
a) Original image, output image applying: b) low pass filter, c) high pass filter,
d) edge enhancement filter, e) edge detection filter.
Fourier Transform

a) Original image, b) Fourier spectrum of the input image, c) Fourier
spectrum after application of a low-pass filter, d) Output image after
inverse Fourier transformation.

• The filters discussed above are spatial filters. In Fourier analysis
(the Fourier transform), the spatial domain is transformed to the frequency
domain.
• The frequency domain is represented as a 2D plot, known as the Fourier
domain, where low frequencies fall at the centre and high frequencies lie
progressively outward.
• Thus the Fourier spectrum of an image can be used to enhance the quality
of an image with the help of low- and high-frequency block filters.
• After filtering, the inverse Fourier transformation gives the output image.
Image Manipulation
• Band ratioing
• Indexing
• Principal Component Analysis
Band Ratioing and Indexing
• Radiance or reflectance of surface features differs depending on the
topography, shadows, and seasonal changes in solar illumination angle and
intensity.
• The band ratio of two bands removes much of the effect of illumination in
the analysis of spectral differences.
• A ratio image results from the division of DN values in one spectral band
by the corresponding values in another band.

(Figure: a) the blue band shows topography due to illumination differences;
b) the ratio band3/band2 removes illumination and reveals different rock
types.)
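A band ratio is a pixel-wise division; a minimal sketch (the epsilon guard against zero-valued pixels is an implementation choice, not part of the definition):

```python
import numpy as np

def band_ratio(numerator: np.ndarray, denominator: np.ndarray) -> np.ndarray:
    """Pixel-by-pixel ratio of two bands."""
    # The small epsilon guards against division by zero-valued pixels.
    return numerator.astype(float) / (denominator.astype(float) + 1e-6)

# Hypothetical bands standing in for band 3 and band 2 of the figure
band2 = np.random.randint(1, 256, size=(100, 100))
band3 = np.random.randint(1, 256, size=(100, 100))
ratio_image = band_ratio(band3, band2)
```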
Principal Component Analysis
• Spectral bands of multispectral data are highly correlated. Due to this
high correlation between bands, analysis of the data sometimes becomes quite
difficult; images obtained by combining different spectral bands look
similar.
• To decorrelate the spectral bands, Principal Component Analysis (PCA),
also called Principal Component Transformation (PCT), is applied.
• PCA also has the capability of dimension reduction; thus PCA may be either
an enhancement operation prior to visual interpretation or a preprocessing
procedure for further processing.
• Principal components analysis is a special case of transforming the
original data into a new coordinate system.
• It enhances the subtle variation in the image data, so many features can
be identified that are not identifiable in the raw image.

(Figure: a pixel value plotted as a vector, with magnitude and direction, in
Band 1 - Band 2 feature space; bands may be correlated or show no
relationship at all.)

Feature Selection
Feature Extraction
Feature Selection Problems
PCA – Intro
• PCA looks for the directions of maximum variance; variance is the cost
function being maximized.

Eigen Vectors and Eigen Values
1. Compute the covariance matrix between the features.
2. Find the eigenvectors and eigenvalues of this covariance matrix
(a linear transformation of the matrix: Av = λv).
3. The eigenvector whose eigenvalue has the largest magnitude captures the
maximum variance.
Eigen Vectors and Eigen Values

Av = λv

Example of a matrix acting on a vector:
[ 3 1 ] [ 1 ]   [ 4 ]
[ 0 2 ] [ 1 ] = [ 2 ]

• The eigenvector of maximum magnitude (largest eigenvalue) captures the
most variance and gives the best PC.
Steps to Calculate Eigen Values and Vectors
Principal Components Analysis ( PCA)
• An exploratory technique used to reduce the
dimensionality of the data set to 2D or 3D
• Can be used to:
– Reduce number of dimensions in data
– Find patterns in high-dimensional data
– Visualize data of high dimensionality
• Example applications:
– Face recognition
– Image compression
– Gene expression analysis
Principal Components Analysis Ideas (PCA)
• Does the data set 'span' the whole of d-dimensional space?
• For a matrix of m samples x n genes, create a new covariance matrix of
size n x n.
• Transform some large number of variables into a smaller number of
uncorrelated variables called principal components (PCs).
• PCs are developed to capture as much of the variation in the data as
possible.
Principal Component Analysis

(Figure: data points scattered in the X1-X2 plane. Y1 is the first
eigenvector and Y2 the second. Key observation: the variance along Y1 is
the largest; the variance along Y2 is small enough to be ignorable.)
Principal Component Analysis: one attribute first
• Question: how much spread is in the data along the axis
(distance to the mean)?
• Variance = (standard deviation)^2

s^2 = \sum_{i=1}^{n} (X_i - \bar{X})^2 / (n - 1)

Temperature: 42, 40, 24, 30, 15, 18, 15, 30, 15, 30, 35, 30, 40, 30
Now consider two dimensions

Covariance measures the correlation between X and Y:
• cov(X,Y) = 0: independent
• cov(X,Y) > 0: move in the same direction
• cov(X,Y) < 0: move in opposite directions

cov(X, Y) = \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y}) / (n - 1)

X = Temperature, Y = Humidity:
(40, 90), (40, 90), (40, 90), (30, 90), (15, 70), (15, 70), (15, 70),
(30, 90), (15, 70), (30, 70), (30, 70), (30, 90), (40, 70)
More than two attributes: covariance matrix
• Contains covariance values between all possible dimensions (= attributes):

C (n x n) = (c_ij | c_ij = cov(Dim_i, Dim_j))

• Example for three attributes (x, y, z):

C = [ cov(x,x)  cov(x,y)  cov(x,z) ]
    [ cov(y,x)  cov(y,y)  cov(y,z) ]
    [ cov(z,x)  cov(z,y)  cov(z,z) ]
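NumPy computes this matrix directly; a sketch with made-up data for three attributes:

```python
import numpy as np

# Rows are observations; columns are the attributes (x, y, z).
# The values here are made up purely for illustration.
data = np.array([[40.0, 90.0, 1.0],
                 [30.0, 90.0, 2.0],
                 [15.0, 70.0, 3.0],
                 [30.0, 70.0, 4.0]])

# rowvar=False treats each column as one attribute; the divisor is (n - 1).
C = np.cov(data, rowvar=False)
print(C)                          # 3x3 matrix laid out as above
```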
Eigenvalues & eigenvectors

• Vectors x having same direction as Ax are called


eigenvectors of A (A is an n by n matrix).
• In the equation Ax=x,  is called an eigenvalue of A.

 2 3   3  12   3
  x      4 x 
 2 1  2  8   2

59
Eigenvalues & eigenvectors

• Ax=x  (A-I)x=0
• How to calculate x and :
– Calculate det(A-I), yields a polynomial (degree n)
– Determine roots to det(A-I)=0, roots are eigenvalues 
– Solve (A- I) x=0 for each  to obtain eigenvectors x

60
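The same recipe, delegated to NumPy, for the 2x2 matrix of the earlier example:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                   # 4.0 and -1.0 for this matrix

# Verify A x = lambda x for the eigenvector belonging to lambda = 4
i = np.argmax(eigvals)
x = eigvecs[:, i]                # proportional to [3, 2], as on the slide
print(A @ x, eigvals[i] * x)     # equal up to floating-point error
```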
Principal components
• First principal component (PC1):
– The eigenvalue with the largest absolute value indicates that the data
have the largest variance along its eigenvector, the direction along which
there is the greatest variation.
• Second principal component (PC2):
– The direction with the maximum variation left in the data, orthogonal to
PC1.
• In general, only a few directions manage to capture most of the
variability in the data.
Steps of PCA
• Let X̄ be the mean vector (taking the mean of all rows).
• Adjust the original data by the mean: X′ = X − X̄
• Compute the covariance matrix C of the adjusted X.
• Find the eigenvectors and eigenvalues of C: for matrix C, the eigenvectors
are the column vectors e such that Ce = λe, where λ is an eigenvalue of C.
• Ce = λe ⇒ (C − λI)e = 0; most data mining packages do this for you.
Eigenvalues
• Calculate the eigenvalues λ and eigenvectors x of the covariance matrix.
• The eigenvalues λ_j are used to calculate the percentage of total variance
V_j captured by each component j:

V_j = 100 \cdot \lambda_j / \sum_{x=1}^{n} \lambda_x
Principal components - Variance

(Figure: bar chart of the variance (%) explained by each component,
decreasing from PC1 to PC10.)
Transformed Data
• Eigenvalue λ_j corresponds to the variance on component j.
• Thus, sort the eigenvectors by their eigenvalues λ_j.
• Take the first p eigenvectors e_i, where p is the number of top
eigenvalues.
• These are the directions with the largest variances.
• Each sample x_i is projected onto these directions after mean adjustment:

y_{ij} = e_j \cdot (x_i - \bar{x}),  j = 1, …, p
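Putting the last few slides together, a compact PCA sketch in NumPy; random data stands in for a stack of image bands:

```python
import numpy as np

X = np.random.rand(100, 6)               # 100 samples, 6 bands/attributes

# 1. Adjust the original data by the mean
X_adj = X - X.mean(axis=0)

# 2. Covariance matrix of the adjusted data
C = np.cov(X_adj, rowvar=False)

# 3. Eigenvectors / eigenvalues (eigh: C is symmetric)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]        # sort by eigenvalue, largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Percentage of total variance captured by each component
V = 100 * eigvals / eigvals.sum()
print(V)

# 4. Project onto the top-p eigenvectors: y_ij = e_j . (x_i - mean)
p = 2
Y = X_adj @ eigvecs[:, :p]
print(Y.shape)                           # (100, 2)
```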
Fourier transform - FFT
• In remote sensing, the Fast Fourier Transform (FFT) is used to analyze
images by converting them from the spatial domain to the frequency domain,
enabling noise reduction, edge enhancement, and feature extraction, which
are crucial for image classification and object detection.
• The FFT is a highly efficient algorithm for computing the Discrete
Fourier Transform (DFT), which decomposes a signal (like a remote
sensing image) into its constituent frequencies.
• By applying FFT, we move from the spatial domain (where we see the
image pixels) to the frequency domain, where we can see the
patterns of frequencies present in the image.
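A minimal frequency-domain low-pass filter with NumPy's FFT; the ideal circular mask and its cutoff radius of 30 are arbitrary choices for illustration:

```python
import numpy as np

img = np.random.rand(256, 256)           # stand-in for an image band

# Forward FFT; shift so low frequencies sit at the centre of the spectrum
F = np.fft.fftshift(np.fft.fft2(img))

# Ideal low-pass block filter: keep a circle of radius 30 around the centre
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
mask = (y - rows // 2) ** 2 + (x - cols // 2) ** 2 <= 30 ** 2

# Inverse transform of the filtered spectrum gives the smoothed image
smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```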
Fourier transform – FFT- Benefits
• Noise Reduction: High-frequency noise can be filtered out in the
frequency domain, leading to cleaner images.
• Edge Enhancement: FFT can emphasize edges and fine details by
selectively boosting or suppressing certain frequency components.
• Feature Extraction: Analyzing frequency content can reveal important
features, like textures or patterns, that might be difficult to spot in the
spatial domain.
• Image Classification and Object Detection: Frequency domain
analysis can help in developing algorithms for classifying land cover
types or detecting objects of interest in remote sensing images.
FFT Applications in Remote Sensing
• Super-Resolution:
Researchers have developed a network using FFT to improve the resolution of
remote sensing images.
• Crop Classification:
A convolutional neural network based on Fourier frequency domain learning has
been proposed to improve the accuracy of remote sensing crop classification.
• Change Detection:
A multi-scale remote sensing change detection network using the global filter in
the frequency domain has been developed.
• Urban Image Segmentation:
A boundary-aware spatial and frequency dual-domain transformer has been
proposed for remote sensing urban image segmentation.
• Atmospheric Remote Sensing:
Fourier transform spectroscopy is used in atmospheric remote sensing.
Multi-image fusion
• Image fusion techniques should allow the combination of images with
different spectral and spatial resolution while keeping the radiometric
information.
• It is defined as "the set of methods, tools and means of using data from
two or more different images to improve the quality of information".
• The goal is to combine the higher spatial information in one band with
the higher spectral information in another dataset to create 'synthetic'
higher-resolution multispectral datasets and images (PAN + MS → fused
image), giving:
1. Sharper image resolution (display)
2. Improved classification (and others)
Sharper image resolution (display)
Fusion Methods
• Methods based on IHS transform and Principal Components Analysis
(PCA) probably are the most popular approaches used to enhance the
spatial resolution of multispectral images with panchromatic images.
• However, both methods suffer from the problem that the radiometry
on the spectral channels is modified after fusion. This is because the
high-resolution panchromatic image usually has spectral
characteristics different from both the intensity and the first principal
components
• New techniques have been proposed such as those that combine
wavelet transform with IHS model and PCA transform to manage the
color and details information distortion in the fused image
Fusion Methods
• The image fusion is performed at three different processing levels
which are pixel level, feature level and decision level according to the
stage at which the fusion takes place.
• The objective of information fusion is to improve the accuracy of
image interpretation and analysis by making use of complementary
information.
• An ideal image fusion technique should have three essential factors,
i.e. high computational efficiency, preserving high spatial resolution
and reducing color distortion
Fusion Methods
Hue, saturation, and intensity (or
brightness) are the three key
attributes that define a color,
with hue representing the color
itself, saturation indicating the
purity or intensity of that color, and
intensity (or brightness)
determining how light or dark the
color appears.
IHS color model
• The IHS method consists of transforming the R, G and B bands of the
multispectral image into IHS components, replacing the intensity component
with the panchromatic image, and performing the inverse transformation to
obtain a high spatial resolution multispectral image.
• The three multispectral bands, R, G and B, of a low-resolution image are
first transformed to the IHS color space, where the I, H, S components are
intensity, hue and saturation, and V1 and V2 are the intermediate variables.
Fusion proceeds by replacing component I with the high-resolution
panchromatic image information, after matching its radiometric information
with the component I.
IHS color model
• To reduce the color distortion, the PAN image is matched to the intensity
component before the replacement, or the hue and saturation components are
stretched before the reverse transform.
• The fused image, which has both rich spectral information and high spatial
resolution, is then obtained by performing the inverse transformation from
IHS back to the original RGB space.
• Although the IHS method has been widely used, it cannot decompose an image
into different frequencies in frequency space (e.g. higher or lower
frequencies). Hence the IHS method cannot be used to enhance certain image
characteristics.
• Besides, the color distortion of the IHS technique is often significant.
The problem with the IHS method is that spectral distortion may occur during
the merging process: a large difference between the values of Pan and I
appears to cause a large spectral distortion of the fused images. Indeed,
this difference (Pan − I) alters the saturation component in the RGB-IHS
conversion model.
Fusion Methods: Principal Components Analysis
(PCA)
• The first principal component (PC1) is replaced with the panchromatic
band, which has higher spatial resolution than the multispectral
images. Afterwards, the inverse PCA transformation is applied to
obtain the image in the RGB color model.

Fusion Methods: Arithmetic Combination

• The spatial information is well preserved in this method, but it leads to
spectral distortion in the results.
Fusion Methods: Wavelets
• In image processing, wavelets
are mathematical functions used
to analyze and process images at
multiple scales and resolutions,
offering advantages over
traditional methods like Fourier
transforms by providing both
spatial and frequency localization,
enabling the capture of localized
features and details.
Fusion Methods: Wavelets
Re-projecting
• Reprojection, also known as spatial transformation or coordinate system
conversion, is the process of changing the coordinate reference system (CRS)
of a dataset (like a satellite image).
• It involves applying mathematical transformations to the coordinates of
each pixel or point in the image to align it with a different CRS.
Common Reprojection Methods
• Resampling: When reprojection changes the spatial resolution of the
data, resampling is used to create new pixels.
Resampling
• In remote sensing, resampling is the process of recalculating pixel
values when a raster grid is transformed to a different spatial
resolution or coordinate system.
• It's essential for tasks like image registration, mosaicking, and data
integration, ensuring that different datasets are aligned and
compatible.
• The resampling process calculates the new pixel values from the
original digital pixel values in the uncorrected image. There are three
common methods for resampling: nearest neighbour, bilinear
interpolation, and cubic convolution.
Resampling Techniques
• Nearest neighbor: The nearest
neighbour resampling uses the digital
value from the pixel in the original
image which is nearest to the new
pixel location in the corrected image.
• This is the fastest interpolation
method, which is primarily applied
for discrete (categorical) raster data
as it does not change the value of the
pixel, but may result in some pixel
values being duplicated while others
are lost.
Nearest Neighbor: Advantages and Disadvantages

Advantages:
• Simplicity and Speed: It's the simplest and fastest resampling method.
• Preserves Original Values: It doesn't calculate new values, so it
preserves the original pixel values.
• Suitable for Discrete Data: It's often used for categorical or integer
data, such as land use, soil, or forest type, where preserving the original
values is important.

Disadvantages:
• Blocky Appearance: It can lead to a noticeable blocky or pixelated effect,
especially when scaling up significantly.
• Spatial Errors: It can introduce noticeable position errors, particularly
along linear features.
Resampling Techniques
• Bilinear interpolation: Bilinear
interpolation resampling takes a
weighted average of four pixels in the
original image nearest to the new pixel
location.
• The averaging process alters the original
pixel values and creates entirely new
digital values in the output image.
• It is recommended for continuous data, and it causes some smoothing of
the data.
Resampling Techniques
• Cubic convolution: Cubic convolution
resampling is based on calculation of a
distance weighted average of a block of
sixteen pixels from the original image
which surround the new output pixel
location.
• As with bilinear interpolation, this
method results in completely new pixel
values.
• The disadvantage of the cubic method is that it requires more processing
time.
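The three methods can be compared with SciPy's zoom; note that order=3 gives cubic spline interpolation, which is closely related to, but not identical with, cubic convolution:

```python
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, size=(50, 50)).astype(float)

# Resample to twice the resolution with the three common methods.
nearest  = ndimage.zoom(img, 2, order=0)   # nearest neighbour (categorical data)
bilinear = ndimage.zoom(img, 2, order=1)   # bilinear interpolation (4 neighbours)
cubic    = ndimage.zoom(img, 2, order=3)   # cubic spline (akin to cubic convolution)
```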
