UNIT II PREPROCESSING
IMAGE CHARACTERISTICS:
1. Resolution: This refers to the number of pixels that make up the image. A
higher resolution image will have more pixels and therefore more detail than a
lower resolution image.
2. Color Depth: This is the number of bits used to represent the color of each
pixel in the image. The more bits used, the more colors can be represented, and
the more accurate the color representation will be.
3. Brightness and Contrast: These characteristics determine the overall
brightness and contrast of the image. Brightness refers to the overall lightness
or darkness of the image, while contrast refers to the difference between the
lightest and darkest parts of the image.
4. Noise: Noise refers to any unwanted variations in the image caused by factors
such as electronic interference, sensor noise, or image compression.
5. Sharpness: This refers to the clarity of the edges in the image. A sharper image
will have more distinct edges and be more visually appealing.
6. Compression: Digital images can be compressed to reduce their file size.
Compression can be lossless (where no information is lost) or lossy (where some
information is discarded to achieve a smaller file size).
7. Dynamic range: This is the ratio between the largest and smallest possible
values in the image. A larger dynamic range allows for more detail to be captured
in the highlights and shadows of the image.
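A minimal sketch of how several of these characteristics can be inspected programmatically is shown below (Python with NumPy and Pillow, both assumed to be available; "example.png" is a placeholder file name, not a file referenced in these notes):

import numpy as np
from PIL import Image

img = np.array(Image.open("example.png").convert("L"))   # 8-bit grayscale image

rows, cols = img.shape
bits = img.dtype.itemsize * 8
print("Resolution:", cols, "x", rows, "pixels")
print("Color depth:", bits, "bits per pixel (", 2 ** bits, "gray levels )")
print("Brightness (mean):", float(img.mean()))
print("Contrast (max - min):", int(img.max()) - int(img.min()))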
HISTOGRAMS:
The majority of the remote sensor data are quantized to 8 bits, with values ranging
from 0 to 255 (e.g., Landsat 5 Thematic Mapper and SPOT HRV data). Some sensor
systems such as IKONOS and Terra MODIS obtain data with 11 bits of precision. The
greater the quantization, the higher the probability that more subtle spectral
reflectance (or emission) characteristics may be extracted from the imagery.
Tabulating the frequency of occurrence of each brightness value within the image
provides statistical information that can be displayed graphically in a histogram (Hair
et al., 1998). The range of quantized values of a band of imagery, quant_k, is provided on
the abscissa (x-axis), while the frequency of occurrence of each of these values is
displayed on the ordinate (y-axis). For example, consider the histogram of the original
brightness values for a Landsat Thematic Mapper band 4 scene of Charleston, SC
(Figure 4-2). The peaks in the histogram correspond to dominant types of land cover in
the image, including a) open water pixels, b) coastal wetland, and c) upland. Also, note
how the Landsat Thematic Mapper band 4 data are compressed into only the lower
one-third of the 0 to 255 range, suggesting that the data are relatively low in contrast.
If the original Landsat Thematic Mapper band 4 brightness values were displayed on a
monitor screen or on the printed page they would be relatively dark and difficult to
interpret. Therefore, in order to see the wealth of spectral information in the scene,
the original brightness values were contrast stretched.
Histograms are useful for evaluating the quality of optical daytime multispectral data
and many other types of remote sensor data.
When an unusually large number of pixels have the same brightness value, the
traditional histogram display might not be the best way to communicate the information
content of the remote sensor data. When this occurs, it might be useful to scale the
frequency of occurrence (y-axis) according to the relative percentage of pixels within
the image at each brightness level along the x-axis.
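A minimal sketch of such a histogram, plotted both as raw frequencies and as the relative percentage of pixels at each brightness value, is given below (Python with NumPy and Matplotlib, assumed available; the low-contrast random band is a stand-in for a real image band):

import numpy as np
import matplotlib.pyplot as plt

band = np.random.randint(0, 90, size=(512, 512), dtype=np.uint8)  # low-contrast stand-in band

counts = np.bincount(band.ravel(), minlength=256)     # frequency per brightness value
percent = 100.0 * counts / counts.sum()               # relative percentage per value

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.bar(np.arange(256), counts, width=1.0)
ax1.set(xlabel="Brightness value", ylabel="Frequency")
ax2.bar(np.arange(256), percent, width=1.0)
ax2.set(xlabel="Brightness value", ylabel="Percent of pixels")
plt.show()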
A skewed distribution occurs when the data is not evenly distributed around the mean.
There are two types of skewness: positive skewness and negative skewness.
Skewed distributions can have an impact on image processing tasks, such as image
segmentation, since they can affect the accuracy of thresholding algorithms. If the
distribution is heavily skewed, it can be helpful to use histogram equalization to improve
the contrast and spread out the values more evenly across the histogram.
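The following is a minimal sketch of global histogram equalization for a skewed 8-bit band, implemented directly from the cumulative distribution function (NumPy only; "band" is any 2-D uint8 array, such as the stand-in band above):

import numpy as np

def equalize(band: np.ndarray) -> np.ndarray:
    counts = np.bincount(band.ravel(), minlength=256)
    cdf = counts.cumsum()
    cdf = cdf / cdf[-1]                           # normalize the CDF to the range 0..1
    lut = np.round(255 * cdf).astype(np.uint8)    # lookup table: old value -> new value
    return lut[band]                              # spread values across the full 0-255 range

# equalized = equalize(band)

The lookup table maps the original, compressed range of values onto the full 0 to 255 range, which spreads the values more evenly across the histogram as described above.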
Image Metadata:
Metadata is data or information about data. Most quality digital image processing
systems read, collect, and store metadata about a particular image or sub image. It is
important that the image analyst have access to this metadata. In the most
fundamental instance, metadata might include: the file name, date of last modification,
level of quantization (eg, 8 bits), number" of rows and columns, number of bands,
univariate statistics (minimum, maximum, mean, median, mode, standard deviation),
georeferencing performed (if any), and pixel dimension (e.g., 5 x 5 m). Utility programs
within the digital image processing system routinely provide this information.
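A minimal sketch of reading this kind of basic metadata is shown below (Python with Pillow and NumPy, assumed available; for georeferenced remote sensor data a library such as rasterio or GDAL would additionally expose the projection and pixel size; "scene.tif" is a placeholder file name):

import os
from datetime import datetime
import numpy as np
from PIL import Image

path = "scene.tif"
with Image.open(path) as im:
    arr = np.array(im)

print("File name:", os.path.basename(path))
print("Last modified:", datetime.fromtimestamp(os.path.getmtime(path)))
print("Rows x columns:", arr.shape[0], "x", arr.shape[1])
print("Bands:", 1 if arr.ndim == 2 else arr.shape[2])
print("Quantization:", arr.dtype.itemsize * 8, "bits")
print("Min / max / mean / std:", arr.min(), arr.max(), arr.mean(), arr.std())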
An ideal remote sensing metadata system would keep track of every type of processing
applied to each digital image. This 'image genealogy' or 'lineage' information can be
very valuable when the remote sensor data are subjected to intense scrutiny (e.g., in a
public forum) or used in litigation.
The histogram and metadata information help the analyst understand the content of
remotely sensed data. Sometimes, however, it is very useful to look at individual
brightness values at specific locations in the imagery.
SCATTEROGRAMS:
Image scatter plots are used to examine the association between image bands and their
relationship to features and materials of interest. The pixel values of one band
(variable 1) are displayed along the x-axis, and those of another band (variable 2) are
displayed along the y-axis. Features and materials in the image can be identified where
the two variables intersect in the distribution, or scatter plot.
They are commonly used for visualizing the distribution and correlation between pixel
values or image features.
To create a scattergram, you typically need two sets of data. In the context of digital
images, these data sets can be:
1. Pixel Intensity Values: For grayscale images, you can extract the pixel intensity
values from the image. Each pixel's intensity value represents the brightness or
darkness of that pixel. You can plot the intensity values of one image or band on the
x-axis against those of a second image or band on the y-axis of the scattergram to
analyze the relationship between them.
2. Feature Descriptors: In image processing, various features can be extracted
from an image, such as color histograms, texture descriptors, or edge strength
measurements. You can compute these features for different regions or objects
within an image and use their values as the data points for the scattergram.
The scattergram provides insights into the relationship between the two variables. It
helps visualize patterns, clusters, or trends that may exist in the data. By examining
the distribution of points, you can gain information about the correlation, linearity, or
randomness between the variables.
It's important to note that the strength of a correlation can vary. A strong correlation
indicates a clear and consistent relationship between the variables, while a weak
correlation suggests a more scattered or less predictable relationship. The closeness
of the points to a straight line in a scattergram can provide an indication of the
strength of the correlation.
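A minimal sketch of a two-band scattergram, with the Pearson correlation coefficient printed as a numeric summary of the relationship, is shown below (Python with NumPy and Matplotlib; the two synthetic bands are stand-ins for co-registered image bands):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
band_red = rng.integers(10, 60, size=(200, 200)).astype(float)
band_nir = 0.8 * band_red + rng.normal(0, 5, size=(200, 200)) + 40  # loosely correlated band

r = np.corrcoef(band_red.ravel(), band_nir.ravel())[0, 1]
print("Pearson correlation:", round(r, 3))

plt.scatter(band_red.ravel(), band_nir.ravel(), s=1, alpha=0.2)
plt.xlabel("Band 1 (red) brightness value")
plt.ylabel("Band 2 (near-infrared) brightness value")
plt.show()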
Initial statistics:
In image processing, initial statistics refer to the basic quantitative measures that can
be computed from an image to gain insights into its characteristics and properties.
These statistics provide a summary of the distribution, intensity, and spatial properties
of pixel values in the image. Here are some commonly used initial statistics in image
processing:
1. Mean: The mean represents the average value of all the pixel intensities in the
image. It provides an overall measure of the brightness or intensity level.
2. Standard Deviation: The standard deviation is a measure of the spread or
dispersion of pixel values around the mean. It quantifies the variability or
contrast in the image.
3. Minimum and Maximum: The minimum and maximum values represent the
smallest and largest pixel intensities in the image, respectively. They provide
information about the dynamic range of pixel values.
4. Histogram: A histogram is a graphical representation of the frequency
distribution of pixel intensities in an image. It shows the number of pixels at
each intensity level, allowing for an analysis of the image's contrast and
distribution.
5. Skewness: Skewness measures the asymmetry of the histogram distribution.
Positive skewness indicates that the tail of the histogram is skewed towards
higher pixel values, while negative skewness indicates a skew towards lower pixel
values.
6. Kurtosis: Kurtosis quantifies the peakedness or flatness of the histogram
distribution. High kurtosis indicates a sharper, more peaked distribution, while
low kurtosis indicates a flatter distribution.
7. Spatial Statistics: In addition to intensity statistics, spatial statistics can
provide information about the arrangement and spatial relationships of pixels in
the image. Examples of spatial statistics include autocorrelation, which
measures the similarity between pixel values at different spatial locations, and
edge density, which quantifies the presence of edges or sharp transitions in the
image.
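A minimal sketch computing these initial statistics for a single band is given below (Python with NumPy and SciPy, assumed available; the random band is a stand-in for real data):

import numpy as np
from scipy import stats

band = np.random.randint(0, 256, size=(256, 256)).astype(float)  # stand-in band
values = band.ravel()

print("Mean:", values.mean())
print("Standard deviation (n - 1):", values.std(ddof=1))
print("Min / max:", values.min(), values.max())
print("Skewness:", stats.skew(values))
print("Kurtosis (0 for a normal distribution):", stats.kurtosis(values))
hist, _ = np.histogram(values, bins=256, range=(0, 256))          # frequency distribution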
Most digital image processing systems can perform robust univariate and multivariate
statistical analyses of single-band and multiple-band remote sensor data. For example,
image analysts have at their disposal statistical measures of central tendency and
measures of dispersion that can be extracted from the imagery.
The mode is the value that occurs most frequently in a distribution and is usually the
highest point on the curve (histogram). It is common, however, to encounter more than
one mode in a remote sensing dataset. The histograms of the Landsat TM image of
Charleston, SC (see Figure 4-2) and the predawn thermal infrared image of the
Savannah River (see Figure 4-3) have multiple modes. They are nonsymmetrical
(skewed) distributions.
The median is the value midway in the frequency distribution (e.g., see Figure 4-1a).
One-half of the area below the distribution curve is to the right of the median, and
one-half is to the left. The mean (µ_k) is the arithmetic average and is defined as the
sum of all brightness value observations divided by the number of observations (Freund
and Wilson, 2003). It is the most commonly used measure of central tendency. The
mean of a single band of imagery, µ_k, composed of n brightness values (BV_ik) is
computed using the formula

µ_k = (1/n) Σ BV_ik

where the summation is taken over all n pixels i in band k.
The sample mean, µ_k, is an unbiased estimate of the population mean. For symmetrical
distributions the sample mean tends to be closer to the population mean than any other
unbiased estimate (such as the median or mode). Unfortunately, the sample mean is a
poor measure of central tendency when the set of observations is skewed or contains
an extreme value (outlier). As the peak (mode) becomes more extremely located to the
right or left of the mean, the frequency distribution is said to be skewed. A frequency
distribution curve (histogram) is said to be skewed in the direction of the longer tail.
Therefore, if a peak (mode) falls to the right of the mean, the frequency distribution
is negatively skewed. If the peak falls to the left of the mean, the frequency
distribution is positively skewed.
Measures of Dispersion:
Measures of the dispersion about the mean of a distribution also provide valuable
information about the image. For example, the range of a band of imagery (range_k) is
computed as the difference between the maximum (max_k) and minimum (min_k) values;
that is, range_k = max_k - min_k. Unfortunately, when the minimum or maximum values
are extreme or unusual observations (i.e., possibly data blunders), the range could be a
misleading measure of dispersion. Such extreme values are not uncommon because the
remote sensor data are often collected by detector systems with delicate electronics
that can experience spikes in voltage and other unfortunate malfunctions. When
unusual values are not encountered, the range is a very important statistic often used
in image enhancement functions such as min-max contrast stretching.
The variance of a sample is the average squared deviation of all possible observations
from the sample mean. The variance of a band of imagery, var_k, is computed using the
equation:

var_k = Σ (BV_ik - µ_k)^2 / (n - 1)

where the summation is taken over all n pixels i in the band.
The numerator of the expression, Σ (BV_ik - µ_k)^2, is the corrected sum of
squares (SS). If the sample mean (µ_k) were actually the population mean, this would be
an accurate measurement of the variance. Unfortunately, there is some
underestimation when variance is computed in this way, because the sample mean was
calculated in a manner that minimized the squared deviations about it. Therefore, the
denominator of the variance equation is reduced to n - 1, producing a somewhat larger,
unbiased estimate of the sample variance.
The standard deviation is the positive square root of the variance. The standard
deviation of the pixel brightness values in a band of imagery, s_k, is computed as:

s_k = sqrt(var_k)
A small standard deviation suggests that observations are clustered tightly around a
central value. Conversely, a large standard deviation indicates that values are scattered
widely about the mean. The total area underneath a distribution curve is equal to 1.00
(or 100%). For normal distributions, 68% of the observations lie within ±1 standard
deviation of the mean, 95.4% of all observations lie within ±2 standard deviations,
and 99.73% lie within ±3 standard deviations. The areas under the normal curve for
various standard deviations are shown in Figure 4-6. The standard deviation is a
statistic commonly used to perform digital image processing (e.g., linear contrast
enhancement, parallelepiped classification, and error evaluation). To interpret variance
and standard deviation, analysts should not attach a significance to each numerical
value but should compare one variance or standard deviation to another. The sample
having the largest variance or standard deviation has the greater spread among the
brightness values of the observations, provided all the measurements were made in the
same units.
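A small worked example of the n versus n - 1 denominator discussed above (Python with NumPy; the five brightness values are arbitrary):

import numpy as np

bv = np.array([36.0, 40.0, 44.0, 48.0, 52.0])   # hypothetical brightness values
print("Population variance (divide by n):  ", bv.var(ddof=0))   # 32.0
print("Sample variance (divide by n - 1):  ", bv.var(ddof=1))   # 40.0, the unbiased estimate
print("Sample standard deviation:          ", bv.std(ddof=1))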
Measures of Distribution (Histogram) Asymmetry and Peak Sharpness:
A histogram can be symmetric but have a peak that is very sharp or one that is subdued
when compared with a perfectly normal distribution. A perfectly normal frequency
distribution (histogram) has zero kurtosis. The greater the positive kurtosis value, the
sharper the peak in the distribution when compared with a normal histogram.
Conversely, a negative kurtosis value suggests that the peak in the histogram is less
sharp than that of a normal distribution. Kurtosis is computed using the formula:

kurtosis_k = [ (1/n) Σ ( (BV_ik - µ_k) / s_k )^4 ] - 3
Outliers or blunders in the remotely sensed data can have a serious impact on the
computation of skewness and kurtosis. Therefore, it is desirable to remove (or repair)
bad data values before computing skewness or kurtosis.
Remote sensing research is often concerned with the measurement of how much
radiant flux is reflected or emitted from an object in more than one band (e.g., in red
and near-infrared bands). It is useful to compute multivariate statistical measures such
as covariance and correlation among the several bands to determine how the
measurements covary. Later it will be shown that variance-covariance and correlation
matrices are used in remote sensing principal components analysis (PCA), feature
selection, classification, and accuracy assessment. For this reason, we will examine how the
variance-covariance between bands is computed and then proceed to compute the
correlation between bands. Although initially performed on a simple dataset consisting
of just five hypothetical pixels, this example provides insight into the utility of these
statistics for digital image processing purposes. Later, these statistics are computed
for a seven-band Charleston, SC, Thematic Mapper scene consisting of 240 x 256
pixels. Note that this scene is much larger than the 120 x 100 pixel example.
The following examples are based on an analysis of the first five pixels [(1, 1), (1,
2), (1, 3), (1, 4), and (1, 5)] in a four-band (green, red, near-infrared, near-infrared)
hypothetical multispectral dataset obtained over vegetated terrain. Thus, each pixel
consists of four spectral measurements (Table 4-1). Note the low brightness values in
band 2 caused by plant chlorophyll absorption of red light for photosynthetic purposes.
Increased reflectance of the incident near-infrared energy by the green plant results
in higher brightness values in the two near-infrared bands (3 and 4). Although it is a
small hypothetical sample dataset, it represents well the spectral characteristics of
healthy green vegetation.
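A minimal sketch of the band-to-band variance-covariance and correlation matrices for such a tiny sample is given below (Python with NumPy). The five pixel vectors are stand-in values, not those of Table 4-1, chosen only to mimic low red (band 2) and high near-infrared (bands 3 and 4) brightness values:

import numpy as np

# rows = pixels (1,1) ... (1,5); columns = bands 1-4 (green, red, NIR, NIR)
pixels = np.array([
    [120, 40, 180, 190],
    [125, 35, 185, 200],
    [118, 38, 175, 185],
    [130, 42, 190, 205],
    [122, 37, 182, 195],
], dtype=float)

cov = np.cov(pixels, rowvar=False)        # 4 x 4 variance-covariance matrix
corr = np.corrcoef(pixels, rowvar=False)  # 4 x 4 correlation matrix
print(np.round(cov, 2))
print(np.round(corr, 3))

Because np.cov uses the n - 1 denominator by default, the diagonal of the variance-covariance matrix matches the sample variances defined earlier.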
Methods of selecting the most useful bands for analysis are described in later sections.
Initial image display:
Initial image display in digital image processing refers to the process of presenting an
image on a display device for visual inspection and analysis. It involves loading the image
into an image processing software or application and rendering it on a computer monitor
or other display medium. The purpose of initial image display is to provide a visual
representation of the image content, allowing users to examine and interpret the image
data.
During the initial image display, the image is typically presented in its raw or
unprocessed form before any modifications or enhancements are applied. It allows
users to observe the image's colors, details, textures, and spatial properties. The
display may provide options for zooming, panning, and adjusting image settings like
brightness, contrast, and color balance to optimize the visual representation.
The software applications used for initial image display in digital image processing can
vary depending on the specific requirements and tasks involved. Some commonly used
software tools for image display and analysis include MATLAB, OpenCV, ImageJ, Adobe
Photoshop, and GIMP (GNU Image Manipulation Program). These applications provide
various features for image loading, visualization, manipulation, and analysis.
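A minimal sketch of an initial image display with a simple brightness/contrast adjustment is shown below (Python with Pillow, NumPy, and Matplotlib, assumed available; "scene.png" is a placeholder file name and the gain/offset values are arbitrary examples):

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = np.array(Image.open("scene.png").convert("RGB")).astype(float)

gain, offset = 1.2, 10.0                               # contrast and brightness controls
adjusted = np.clip(gain * img + offset, 0, 255).astype(np.uint8)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(img.astype(np.uint8)); ax1.set_title("Raw display")
ax2.imshow(adjusted);             ax2.set_title("Brightness/contrast adjusted")
for ax in (ax1, ax2):
    ax.axis("off")
plt.show()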
It's important to note that the initial image display is just the first step in the overall
image processing workflow. Once the image is displayed, it can be subjected to further
processing operations like filtering, segmentation, feature extraction, or object
recognition to achieve specific goals or tasks in digital image processing.
Ideal display:
An ideal display in digital image processing refers to a display device that accurately
reproduces the image content with high fidelity, preserving the colors, details, and
spatial characteristics of the original image. The ideal display aims to provide a
perceptually accurate representation of the image, minimizing any distortion or loss of
information.
1. High Color Accuracy: The display should have a wide color gamut and be capable
of reproducing a broad range of colors accurately. It should adhere to
established color standards, such as sRGB or Adobe RGB, to ensure consistency
in color representation.
2. High Dynamic Range: An ideal display should have a wide dynamic range, capable
of accurately rendering both shadow details and bright highlights without loss
of information. This ensures that the full range of intensities in the image is
faithfully displayed.
3. High Resolution: The display should have a high pixel density to provide sharp
and detailed image reproduction. The display's resolution should match or
exceed the resolution of the image being displayed to avoid any loss of fine
details.
4. Uniformity: An ideal display exhibits uniform brightness and color across its
entire surface, avoiding any variations or inconsistencies that could distort the
image perception.
5. Calibration and Profiling: An ideal display supports calibration and profiling to
ensure accurate color reproduction. Calibration involves adjusting the display's
settings to match a known reference standard, while profiling creates a color
profile that characterizes the display's specific color behavior.
The selection of an ideal display is often based on factors such as color accuracy,
resolution, dynamic range, and uniformity that best suit the intended purpose.
Sensor models:
Sensor models are essential in various image processing tasks, including image
correction, image registration, image fusion, color correction, and 3D reconstruction.
By understanding and compensating for the characteristics and limitations of image
acquisition sensors, these models enable accurate and reliable image processing and
analysis.
For example, the spatial resolution of an aerial photographic system can be measured
empirically by:
Placing calibrated, parallel black and white lines on tarps that are placed in the
field,
Obtaining aerial photography of the study area, and
Computing the number of resolvable line pairs per millimeter in the
photography.
Many satellite remote sensing systems use optics that have a constant
IFOV. Therefore, a sensor system's nominal spatial resolution is defined as the
dimension in meters (or feet) of the ground-projected IFOV, where the diameter of the
circle (D) on the ground is a function of the instantaneous field of view (β) times the
altitude (H) of the sensor above ground level (AGL); that is, D = β x H.
Generally, the smaller the nominal spatial resolution, the greater the spatial resolving
power of the remote sensing system.
Spatial resolution would appropriately describe the ground-projected laser pulse (e.g.,
15 cm) but sampling density (i.e., number of points per unit area) describes the
frequency of ground observations.
Because we have spatial information about the location of each pixel (x , y) in the
matrix, it is also possible to examine the spatial relationship between a pixel and its
neighbors. Therefore, the amount of spectral autocorrelation and other spatial
geostatistical measurements can be determined based on the spatial information
inherent in the imagery.
The temporal resolution of a remote sensing system generally refers to how often the
sensor records imagery of a particular area.
Obtaining imagery at a high temporal resolution is very important for many applications.
For example, the National Ocean and Atmospheric Administration (NOAA)
Geostationary Operational Environmental Satellites (GOES) are in geostationary orbits,
allowing them to obtain very high temporal resolution imagery (e.g., every half-hour).
This allows meteorologists to provide hourly updates on the location of frontal systems
and hurricanes and use this information along with other data to predict storm tracks.
Another aspect of temporal information is how many observations are recorded from a
single pulse of energy that is directed at the Earth by an active sensor such as LiDAR.
For example, most LiDAR sensors emit one pulse of laser energy and record
multiple responses from this pulse. Measuring the time differences between multiple
responses allows for the determination of object heights and terrain structure. Also,
the length of time required to emit an energy signal by an active sensor is referred to
as the pulse length. Short pulse lengths allow very precise distance (i.e., range)
measurement.
IFOV:
The IFOV is a measure of the spatial resolution of an imaging system and determines
the smallest distinguishable detail or feature that can be resolved by the system. A
smaller IFOV corresponds to higher spatial resolution, as it indicates that each pixel or
detector element captures a smaller portion of the scene.
IFOV (θ) = D / H

Where:
D is the diameter of the ground-projected circle (the ground-projected IFOV), and
H is the altitude of the sensor above ground level (AGL).

This is simply the relation D = β x H introduced above, rearranged to express the
angular IFOV in radians.
In practical terms, the IFOV can be thought of as the angular size of a pixel or
detector element when viewed from the object or scene. It determines how much
detail can be captured and resolved by the imaging system. Smaller IFOV values are
desirable in applications that require high-resolution imaging, such as aerial and
satellite imagery, where fine details need to be captured and analyzed.
GIFOV:
GIFOV stands for Ground Instantaneous Field of View
GIFOV represents the area on the ground that is imaged by a single pixel or detector
element in the imaging system. It is the projection of the IFOV onto the ground
surface. GIFOV is an important parameter in remote sensing as it determines the
spatial footprint or coverage of each pixel on the ground.
The calculation of GIFOV involves taking into account the IFOV of the imaging system
and the altitude or height at which the system is positioned. The formula for GIFOV
can be expressed as:

GIFOV = IFOV x H

Where:
IFOV is the instantaneous field of view in radians, and
H is the altitude of the sensor above ground level (AGL).
The GIFOV provides information about the spatial resolution and coverage of each
pixel on the ground. It helps in understanding the level of detail captured by the
imaging system and the ground area represented by each pixel.
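A minimal sketch of this relation, with the ground footprint approximated as the angular IFOV (in radians) times the altitude above ground level, is given below (Python; the numeric values are illustrative only):

def gifov(ifov_rad: float, altitude_m: float) -> float:
    """Ground instantaneous field of view (metres) for a nadir-looking sensor."""
    return ifov_rad * altitude_m           # D = beta x H

ifov_mrad = 0.043                          # illustrative angular IFOV in milliradians
print(gifov(ifov_mrad * 1e-3, 705_000.0))  # about 30 m footprint from a 705 km altitude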
GSI:
Ground Sampling Interval (GSI) is a measure used to describe the spatial resolution or
pixel size of remotely sensed images. It represents the physical distance on the ground
that is represented by each pixel in the image. GSI is typically expressed in units of
meters per pixel.
The GSI can be computed as:

GSI = (Pixel Size x Distance to the Object) / Focal Length

Where:
Pixel Size: The size of a single pixel in the image, usually provided by the imaging
system or sensor. It is typically given in units of meters per pixel.
Distance to the Object: The distance between the imaging system and the
object or scene being imaged. It is typically measured in meters.
Focal Length: The effective focal length of the imaging system, which is a
characteristic property of the camera or lens used for image capture. It is
usually given in units of meters.
By applying the formula, the GSI provides an estimation of the physical distance
represented by each pixel in the image. A smaller GSI indicates higher spatial
resolution, meaning smaller ground features can be resolved in the image. Conversely, a
larger GSI corresponds to lower spatial resolution, where larger ground features are
represented by each pixel.
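A minimal sketch of the GSI relation given above (Python; all input values are illustrative placeholders):

def gsi(pixel_size_m: float, distance_m: float, focal_length_m: float) -> float:
    """Ground sampling interval in metres per pixel."""
    return pixel_size_m * distance_m / focal_length_m

# e.g., a 5-micrometre detector element, 3,000 m flying height, 0.15 m focal length
print(gsi(5e-6, 3000.0, 0.15))   # -> 0.1 m per pixel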
Geometry and radiometry are two fundamental aspects of digital image processing that
play important roles in analyzing and manipulating digital images.
1. Geometry:
Geometry in digital image processing refers to the spatial properties and
relationships within an image. It involves the analysis and manipulation of image
content in terms of size, shape, position, and orientation. Key concepts in
geometric image processing include:
a. Image Resolution: Resolution refers to the level of detail in an
image, typically measured in pixels per unit length. High resolution provides more
details, while low resolution results in a more pixelated image.
b. Image Registration: Image registration is the process of aligning
different images or aligning an image to a specific coordinate system. It involves
finding correspondences between points or features in images and transforming
one image to match the other.
c. Geometric Transformations: Geometric transformations involve
modifying the shape, size, position, or orientation of an image. Common
transformations include translation (shifting), rotation, scaling, and shearing (a
rotation example appears in the sketch at the end of this section).
d. Image Warping: Image warping involves distorting or deforming an
image based on geometric transformations. It can be used for tasks such as
image rectification, perspective correction, or creating special effects.
e. Image Segmentation: Image segmentation is the process of
partitioning an image into meaningful regions or objects. It involves identifying
boundaries between objects and grouping pixels with similar properties
together.
2. Radiometry:
Radiometry in digital image processing deals with the measurement and
interpretation of electromagnetic radiation (light) in an image. It focuses on the
analysis of image content based on the intensity and spectral properties of light.
Key concepts in radiometric image processing include:
a. Pixel Intensity: Pixel intensity represents the brightness or gray
level of a pixel in an image. It is usually represented by a numerical value ranging
from 0 (black) to 255 (white) in an 8-bit grayscale image.
b. Contrast Enhancement: Contrast enhancement techniques aim to
improve the visual quality of an image by adjusting the distribution of pixel
intensities. Histogram equalization, contrast stretching, and gamma correction
are common methods used for contrast enhancement.
c. Color Spaces: Color spaces provide different ways of representing
colors in images. Common color spaces include RGB (Red, Green, Blue), CMYK
(Cyan, Magenta, Yellow, Black), and HSV (Hue, Saturation, Value). Color space
conversions are often used for image analysis and processing.
d. Radiometric Calibration: Radiometric calibration involves the
conversion of pixel values in an image to physical measurements, such as
reflectance or radiance. This process allows quantitative analysis and comparison
of images acquired under different conditions.
e. Spectral Analysis: Spectral analysis involves studying the spectral
characteristics of an image, which refers to the distribution of energy across
different wavelengths. It is useful for applications such as vegetation analysis,
mineral identification, and remote sensing.
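A minimal sketch combining one geometric operation (rotation about the image centre) with one radiometric operation (a min-max contrast stretch) is given below (Python with OpenCV and NumPy, assumed available; "scene.png" is a placeholder file name):

import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Geometric transformation: rotate 15 degrees about the image centre
rows, cols = img.shape
M = cv2.getRotationMatrix2D((cols / 2, rows / 2), 15, 1.0)
rotated = cv2.warpAffine(img, M, (cols, rows))

# Radiometric operation: min-max contrast stretch to the full 0-255 range
lo, hi = float(img.min()), float(img.max())
stretched = ((img.astype(np.float32) - lo) / max(hi - lo, 1.0) * 255.0).astype(np.uint8)

cv2.imwrite("rotated.png", rotated)
cv2.imwrite("stretched.png", stretched)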
Sources of image degradation and correction procedures :
Images can be subject to various sources of degradation, which can negatively impact
their quality and clarity
1. Noise:
Sensor noise: Introduced during image acquisition by the camera sensor,
resulting in random variations in pixel values.
Transmission noise: Introduced during image transmission or storage,
leading to errors or corruption in the image data.
Noise Correction:
Removal of sensor noise: Techniques such as median filtering, Gaussian
filtering, or adaptive filtering can be applied to reduce noise while
preserving image details (see the sketch at the end of this list).
Reduction of transmission noise: Error correction algorithms or image
interpolation methods can be utilized to address noise introduced during
image transmission or storage.
2. Blur:
Motion blur: Caused by the relative motion between the camera and the
subject during the exposure time, resulting in blurred or smudged details.
Out-of-focus blur: Occurs when the camera fails to focus accurately on
the subject, leading to a lack of sharpness and loss of fine details.
Lens aberrations: Imperfections in camera lenses, such as chromatic
aberration, spherical aberration, or distortion, which can cause blurring
or distortions in the image.
Blur Correction:
Motion blur correction: Motion deblurring techniques aim to restore
sharpness and recover fine details in images affected by motion blur.
These methods can employ deconvolution algorithms or utilize motion
estimation techniques.
Out-of-focus blur correction: Deconvolution algorithms or image
enhancement techniques like the blind deblurring approach can be used to
mitigate the effects of out-of-focus blur.
Lens aberration correction: Image calibration methods, such as distortion
correction algorithms or lens-specific profile adjustments, can be
employed to correct lens-related distortions.
3. Compression artifacts:
Lossy compression: Commonly used image compression algorithms like
JPEG introduce artifacts due to the irreversible loss of information
during compression, resulting in blocky distortions, ringing, or mosquito
noise.
Compression Artifact Removal:
Post-processing filters: Adaptive filters, deblocking filters, or noise
reduction filters can be applied to reduce compression artifacts
introduced by lossy compression algorithms.
Advanced compression artifact removal: Machine learning-based
approaches, such as deep neural networks, can be trained to specifically
target and remove compression artifacts.
4. Geometric distortions:
Lens distortions: Imperfections in camera lenses can cause geometric
distortions such as barrel distortion or pincushion distortion, resulting in
a non-uniform scaling of the image.
Perspective distortions: Occur when the camera is not parallel to the
subject, leading to objects appearing distorted, stretched, or skewed.
Geometric Distortion Correction:
Lens distortion correction: Calibration techniques or lens-specific
correction algorithms can be utilized to correct geometric distortions
caused by lens imperfections.
Perspective correction: Homography-based transformations or
geometric calibration methods can be employed to rectify perspective
distortions and restore the correct shape and proportions of objects.
5. Illumination variations:
Uneven lighting: Non-uniform lighting conditions across the image, such
as shadows, overexposed or underexposed areas, can cause poor visibility
and loss of detail.
Backlighting: When the subject is lit from behind, resulting in a
silhouette or reduced visibility of the subject's details.
Illumination Correction:
Histogram equalization: Adjusting the image histogram to improve
contrast and enhance visibility of details in different lighting conditions.
Local adaptive methods: Techniques such as adaptive histogram
equalization (AHE), contrast-limited adaptive histogram equalization
(CLAHE), or other local enhancement methods can be applied to correct
uneven lighting and improve image quality.
6. Color distortions:
White balance issues: Incorrectly set white balance can result in color
casts, where the image appears too warm (yellowish) or too cool (bluish).
Color cross-talk: In some imaging systems, there may be unintended
interaction between color channels, resulting in color bleeding or
inaccurate color representation.
Color Correction:
White balance adjustment: Techniques such as gray-world assumption,
color constancy algorithms, or manual adjustment can be used to correct
color casts and restore accurate color representation.
Color space transformations: Converting the image to a different color
space and applying correction operations, such as channel scaling or
chromatic adaptation, to achieve desired color accuracy.
Histogram-based methods: Color mapping techniques or histogram
matching can be employed to align the color distribution of an image to a
reference or desired color distribution.
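A minimal sketch applying three of the correction families listed above, median filtering for sensor noise, CLAHE for uneven illumination, and a gray-world white balance for color casts, is given below (Python with OpenCV and NumPy, assumed available; "degraded.png" is a placeholder file name):

import cv2
import numpy as np

img = cv2.imread("degraded.png")                       # BGR color image

# 1. Noise: median filtering suppresses impulse noise while keeping edges
denoised = cv2.medianBlur(img, 3)

# 5. Illumination: CLAHE applied to the lightness channel of the LAB representation
lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab = cv2.merge((clahe.apply(l), a, b))
corrected = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# 6. Color: gray-world assumption scales each channel toward a common mean
means = corrected.reshape(-1, 3).mean(axis=0)
gains = means.mean() / np.maximum(means, 1e-6)
balanced = np.clip(corrected * gains, 0, 255).astype(np.uint8)

cv2.imwrite("corrected.png", balanced)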
Even when the remote sensing system is functioning properly, radiometric error may be
introduced into the remote sensor data. The two most important sources of
environmental attenuation are 1) atmospheric attenuation caused by scattering and
absorption of energy in the atmosphere, and 2) topographic attenuation.
The general principle is that atmospheric correction is not necessary as long as the
training data are extracted from the image (or composite image) under investigation
and are not imported from another image obtained at another place or time.
Signature extension through space and time is becoming more important. The only way
to extend signatures through space and time is to atmospherically correct each
individual date of remotely sensed data.