
UNIT II PREPROCESSING

Image Characteristics – Histograms – Scattergrams – Initial statistics – Univariate and multivariate statistics – Initial image display – Ideal display, types – Sensor models – spatial, spectral, radiometric, temporal – IFOV, GIFOV & GSI – geometry and radiometry – Sources of image degradation and correction procedures – Atmospheric, Radiometric, Geometric corrections – Image geometry restoration – Interpolation methods and resampling techniques.

IMAGE CHARACTERISTICS:

In digital image processing, some of the most important characteristics of an image include:

1. Resolution: This refers to the number of pixels that make up the image. A
higher resolution image will have more pixels and therefore more detail than a
lower resolution image.
2. Color Depth: This is the number of bits used to represent the color of each
pixel in the image. The more bits used, the more colors can be represented, and
the more accurate the color representation will be.
3. Brightness and Contrast: These characteristics determine the overall
brightness and contrast of the image. Brightness refers to the overall lightness
or darkness of the image, while contrast refers to the difference between the
lightest and darkest parts of the image.
4. Noise: Noise refers to any unwanted variations in the image caused by factors
such as electronic interference, sensor noise, or image compression.
5. Sharpness: This refers to the clarity of the edges in the image. A sharper image
will have more distinct edges and be more visually appealing.
6. Compression: Digital images can be compressed to reduce their file size.
Compression can be lossless (where no information is lost) or lossy (where some
information is discarded to achieve a smaller file size).
7. Dynamic range: This is the ratio between the largest and smallest possible
values in the image. A larger dynamic range allows for more detail to be captured
in the highlights and shadows of the image.

Understanding these characteristics is important for image processing tasks such as enhancement, restoration, and segmentation.

HISTOGRAMS:

The histogram is a useful graphic representation of the information content of a remotely sensed image. Histograms for each band of imagery are often displayed and analyzed in many remote sensing investigations because they provide the analyst with an appreciation of the quality of the original data (e.g., whether it is low in contrast, high in contrast, or multimodal in nature). In fact, many analysts routinely provide before (original) and after histograms of the imagery to document the effects of applying an image enhancement technique. It is instructive to review how a histogram of a single band of imagery, k, composed of i rows and j columns with a brightness value BVijk at each pixel location, is constructed.

The majority of the remote sensor data are quantized to 8 bits, with values ranging
from 0 to 255 (e.g., Landsat 5 Thematic Mapper and SPOT HRV data). Some sensor
systems such as IKONOS and Terra MODIS obtain data with 11 bits of precision. The
greater the quantization, the higher the probability that more subtle spectral
reflectance (or emission) characteristics may be extracted from the imagery.

Tabulating the frequency of occurrence of each brightness value within the image
provides statistical information that can be displayed graphically in a histogram (Hair
et al., 1998). The range of quantized values of a band of imagery, quant k, is provided on
the abscissa (x axis), while the frequency of occurrence of each of these values is
displayed on the ordinate (y axis). For example, consider the histogram of the original
brightness values for a Landsat Thematic Mapper band 4 scene of Charleston, SC
(Figure 4-2). The peaks in the histogram correspond to dominant types of land cover in
the image, including a) open water pixels, b) coastal wetland, and c) upland. Also, note
how the Landsat Thematic Mapper band 4 data are compressed into only the lower one-third of the 0 to 255 range, suggesting that the data are relatively low in contrast. If the original Landsat Thematic Mapper band 4 brightness values were displayed on a monitor screen or on the printed page they would be relatively dark and difficult to interpret. Therefore, in order to see the wealth of spectral information in the scene, the original brightness values were contrast stretched.

Histograms are useful for evaluating the quality of optical daytime multispectral data
and many other types of remote sensor data.

When an unusually large number of pixels have the same brightness value, the traditional histogram display might not be the best way to communicate the information content of the remote sensor data. When this occurs, it might be useful to scale the frequency of occurrence (y-axis) according to the relative percentage of pixels within the image at each brightness level along the x-axis.
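As a rough illustration of how such a histogram can be tabulated and then rescaled to relative percentages, the short Python sketch below uses NumPy and Matplotlib; the synthetic low-contrast band is an assumption standing in for real imagery, not data from the text.

```python
# Minimal sketch: tabulate the histogram of one 8-bit band and also plot it as
# the relative percentage of pixels at each brightness value (x-axis 0-255).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
band = rng.normal(loc=60, scale=15, size=(512, 512)).clip(0, 255).astype(np.uint8)  # synthetic low-contrast band

values, counts = np.unique(band, return_counts=True)
percent = 100.0 * counts / band.size              # relative-percentage scaling of the y-axis

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.bar(values, counts, width=1.0)
ax1.set(xlabel="Brightness value", ylabel="Frequency", xlim=(0, 255))
ax2.bar(values, percent, width=1.0)
ax2.set(xlabel="Brightness value", ylabel="Percent of pixels", xlim=(0, 255))
plt.tight_layout()
plt.show()
```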

A skewed distribution occurs when the data is not evenly distributed around the mean.
There are two types of skewness: positive skewness and negative skewness.

 Positive skewness: In a positively skewed distribution, the tail of the histogram extends toward the higher (brighter) values, so most pixels have relatively low values and only a few are very bright. This can happen, for example, when an image contains small, very bright areas.
 Negative skewness: In a negatively skewed distribution, the tail of the histogram extends toward the lower (darker) values, so most pixels have relatively high values and only a few are very dark. This can happen, for example, when an image contains small, very dark areas.

Skewed distributions can have an impact on image processing tasks, such as image segmentation, since they can affect the accuracy of thresholding algorithms. If the distribution is heavily skewed, it can be helpful to use histogram equalization to improve the contrast and spread out the values more evenly across the histogram.
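The following sketch, assuming an 8-bit single-band image on disk (the file name is a placeholder), shows how histogram equalization can be applied with OpenCV to spread out a skewed distribution.

```python
# Sketch: histogram equalization of a skewed, low-contrast 8-bit band with OpenCV.
import cv2

band = cv2.imread("band.tif", cv2.IMREAD_GRAYSCALE)   # "band.tif" is a placeholder name

equalized = cv2.equalizeHist(band)                    # redistributes brightness values
cv2.imwrite("band_equalized.tif", equalized)
```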

Image Metadata:

Metadata is data or information about data. Most quality digital image processing systems read, collect, and store metadata about a particular image or subimage. It is important that the image analyst have access to this metadata. In the most fundamental instance, metadata might include: the file name, date of last modification, level of quantization (e.g., 8 bits), number of rows and columns, number of bands, univariate statistics (minimum, maximum, mean, median, mode, standard deviation), georeferencing performed (if any), and pixel dimension (e.g., 5 x 5 m). Utility programs within the digital image processing system routinely provide this information.

An ideal remote sensing metadata system would keep track of every type of processing applied to each digital image. This 'image genealogy' or 'lineage' information can be very valuable when the remote sensor data are subjected to intense scrutiny (e.g., in a public forum) or used in litigation.

The histogram and metadata information help the analyst understand the content of
remotely sensed data. Sometimes, however, it is very useful to look at individual
brightness values at specific locations in the imagery.

SCATTERGRAMS:

Image scatter plots are used to examine the association between image bands and their
relationship to features and materials of interest. The pixel values of one band
(variable 1) are displayed along the x-axis, and those of another band (variable 2) are
displayed along the y-axis. Features and materials in the image can be identified where
the two variables intersect in the distribution, or scatter plot.

They are commonly used for visualizing the distribution and correlation between pixel
values or image features.

To create a scattergram, you typically need two sets of data. In the context of digital
images, these data sets can be:

1. Pixel Intensity Values: For grayscale images, you can extract the pixel intensity
values from the image. Each pixel's intensity value represents the brightness or
darkness of that pixel. You can plot these intensity values on the x-axis and y-
axis of the scattergram to analyze the relationship between them.
2. Feature Descriptors: In image processing, various features can be extracted
from an image, such as color histograms, texture descriptors, or edge strength
measurements. You can compute these features for different regions or objects
within an image and use their values as the data points for the scattergram.
The scattergram provides insights into the relationship between the two variables. It
helps visualize patterns, clusters, or trends that may exist in the data. By examining
the distribution of points, you can gain information about the correlation, linearity, or
randomness between the variables.
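A minimal sketch of building such a scattergram follows; the two synthetic, correlated bands are assumptions standing in for real image bands.

```python
# Sketch: scattergram of two bands, with band 1 on the x-axis and band 2 on the y-axis.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
band1 = rng.integers(0, 256, size=(256, 256)).astype(float)            # stand-in for a red band
band2 = (0.6 * band1 + rng.normal(0, 20, band1.shape)).clip(0, 255)    # correlated stand-in for a NIR band

plt.figure(figsize=(5, 5))
plt.scatter(band1.ravel(), band2.ravel(), s=1, alpha=0.2)
plt.xlabel("Band 1 brightness value")
plt.ylabel("Band 2 brightness value")
plt.title("Scattergram of two bands")
plt.show()
```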

In scattergrams, correlations refer to the degree of relationship or association between the two variables being plotted. Correlations help determine if there is a systematic and predictable relationship between the variables or if they are unrelated. Correlations in scattergrams can be analyzed visually by observing the pattern of points on the graph. Here are some common types of correlations that can be identified:

1. Positive Correlation: In a positive correlation, the points on the scattergram tend to form a pattern that slopes upward from left to right. This indicates that as the values of one variable increase, the values of the other variable also tend to increase. The points may not fall exactly on a straight line, but there is a clear overall trend of increasing values.
2. Negative Correlation: In a negative correlation, the points on the scattergram
tend to form a pattern that slopes downward from left to right. This indicates
that as the values of one variable increase, the values of the other variable tend
to decrease. Again, the points may not fall exactly on a straight line, but there
is a clear overall trend of decreasing values.
3. No Correlation (or Weak Correlation): In cases where there is no clear pattern
or trend in the scattergram, it indicates little or no relationship between the
variables. The points are scattered randomly, and there is no discernible upward
or downward trend. This suggests that changes in one variable do not
systematically affect the other variable.

It's important to note that the strength of a correlation can vary. A strong correlation
indicates a clear and consistent relationship between the variables, while a weak
correlation suggests a more scattered or less predictable relationship. The closeness
of the points to a straight line in a scattergram can provide an indication of the
strength of the correlation.

In addition to visual analysis, numerical measures such as correlation coefficients can also be calculated to quantify the strength and direction of the correlation. Common correlation coefficients include Pearson's correlation coefficient and Spearman's rank correlation coefficient. These coefficients range from -1 to 1, where -1 represents a perfect negative correlation, 1 represents a perfect positive correlation, and 0 indicates no correlation.
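As a small example of quantifying what the scattergram shows visually, the sketch below computes Pearson's and Spearman's coefficients with SciPy; the two synthetic variables are assumptions for illustration only.

```python
# Sketch: Pearson and Spearman correlation coefficients for two variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(100, 25, 10_000)                 # stand-in for band 1 values
y = 0.8 * x + rng.normal(0, 10, x.size)         # stand-in for a correlated band 2

pearson_r, _ = stats.pearsonr(x, y)
spearman_rho, _ = stats.spearmanr(x, y)
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")
```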

Initial statistics:

In image processing, initial statistics refer to the basic quantitative measures that can
be computed from an image to gain insights into its characteristics and properties.
These statistics provide a summary of the distribution, intensity, and spatial properties
of pixel values in the image. Here are some commonly used initial statistics in image
processing:

1. Mean: The mean represents the average value of all the pixel intensities in the
image. It provides an overall measure of the brightness or intensity level.
2. Standard Deviation: The standard deviation is a measure of the spread or
dispersion of pixel values around the mean. It quantifies the variability or
contrast in the image.
3. Minimum and Maximum: The minimum and maximum values represent the
smallest and largest pixel intensities in the image, respectively. They provide
information about the dynamic range of pixel values.
4. Histogram: A histogram is a graphical representation of the frequency
distribution of pixel intensities in an image. It shows the number of pixels at
each intensity level, allowing for an analysis of the image's contrast and
distribution.
5. Skewness: Skewness measures the asymmetry of the histogram distribution.
Positive skewness indicates that the tail of the histogram is skewed towards
higher pixel values, while negative skewness indicates a skew towards lower pixel
values.
6. Kurtosis: Kurtosis quantifies the peakedness or flatness of the histogram
distribution. High kurtosis indicates a sharper, more peaked distribution, while
low kurtosis indicates a flatter distribution.
7. Spatial Statistics: In addition to intensity statistics, spatial statistics can
provide information about the arrangement and spatial relationships of pixels in
the image. Examples of spatial statistics include autocorrelation, which
measures the similarity between pixel values at different spatial locations, and
edge density, which quantifies the presence of edges or sharp transitions in the
image.

These initial statistics help in understanding the characteristics of an image, such as its overall intensity, contrast, dynamic range, and spatial properties. They serve as a starting point for further analysis and processing, such as image enhancement, segmentation, or classification.
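A short sketch of computing the initial statistics listed above for one band is given below; the synthetic band is an assumption standing in for real image data.

```python
# Sketch: mean, standard deviation, min/max, histogram, skewness and kurtosis of one band.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
band = rng.gamma(shape=2.0, scale=20.0, size=(512, 512)).clip(0, 255).astype(np.uint8)
values = band.ravel().astype(float)

print("mean     :", values.mean())
print("std dev  :", values.std(ddof=1))          # sample standard deviation (n - 1 denominator)
print("min, max :", values.min(), values.max())
print("skewness :", stats.skew(values))
print("kurtosis :", stats.kurtosis(values))      # excess kurtosis; 0 for a normal distribution
hist, bin_edges = np.histogram(values, bins=256, range=(0, 256))
```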

Univariate Descriptive Image Statistics:

Most digital image processing systems can perform robust univariate and multivariate statistical analyses of single-band and multiple-band remote sensor data. For example, image analysts have at their disposal statistical measures of central tendency and measures of dispersion that can be extracted from the imagery.

Measure of Central Tendency in Remote Sensor Data:

The mode is the value that occurs most frequently in a distribution and is usually the
highest point on the curve (histogram). It is common, however, to encounter more than
one mode in a remote sensing dataset. The histograms of the Landsat TM image of
Charleston, SC (see Figure 4-2) and the predawn thermal infrared image of the
Savannah River (see Figure 4-3) have multiple modes. They are nonsymmetrical (skewed) distributions.

The median is the value midway in the frequency distribution (e.g., see Figure 4-1a). One-half of the area below the distribution curve is to the right of the median, and one-half is to the left. The mean (µk) is the arithmetic average and is defined as the sum of all brightness value observations divided by the number of observations (Freund and Wilson, 2003). It is the most commonly used measure of central tendency. The mean of a single band of imagery, µk, composed of n brightness values (BVik), is computed using the formula

µk = ( Σ BVik ) / n

where the summation is taken over all n pixels in band k.

The sample mean, µk, is an unbiased estimate of the population mean. For symmetrical distributions the sample mean tends to be closer to the population mean than any other unbiased estimate (such as the median or mode). Unfortunately, the sample mean is a poor measure of central tendency when the set of observations is skewed or contains an extreme value (outlier). As the peak (mode) becomes more extremely located to the right or left of the mean, the frequency distribution is said to be skewed. A frequency distribution curve (histogram) is said to be skewed in the direction of the longer tail. Therefore, if a peak (mode) falls to the right of the mean, the frequency distribution is negatively skewed. If the peak falls to the left of the mean, the frequency distribution is positively skewed.

Measures of Dispersion:

Measures of the dispersion about the mean of a distribution also provide valuable information about the image. For example, the range of a band of imagery (range k) is computed as the difference between the maximum (max k) and minimum (min k) values; that is, range k = max k – min k. Unfortunately, when the minimum or maximum values are extreme or unusual observations (i.e., possibly data blunders), the range can be a misleading measure of dispersion. Such extreme values are not uncommon because remote sensor data are often collected by detector systems with delicate electronics that can experience spikes in voltage and other malfunctions. When unusual values are not encountered, the range is a very important statistic often used in image enhancement functions such as min-max contrast stretching.

The variance of a sample is the average squared deviation of all possible observations from the sample mean. The variance of a band of imagery, var k, is computed using the equation:

var k = Σ (BVik − µk)² / (n − 1)

The numerator of the expression, Σ (BVik − µk)², is the corrected sum of squares (SS). If the sample mean (µk) were actually the population mean, this would be an accurate measurement of the variance. Unfortunately, there is some underestimation when the variance is computed because the sample mean was calculated in a manner that minimized the squared deviations about it. Therefore, the denominator of the variance equation is reduced to n − 1, producing a somewhat larger, unbiased estimate of the sample variance.

The standard deviation is the positive square root of the variance. The standard deviation of the pixel brightness values in a band of imagery, sk, is computed as:

sk = √(var k)

A small standard deviation suggests that observations are clustered tightly around a central value. Conversely, a large standard deviation indicates that values are scattered widely about the mean. The total area underneath a distribution curve is equal to 1.00 (or 100%). For normal distributions, 68% of the observations lie within ±1 standard deviation of the mean, 95.4% of all observations lie within ±2 standard deviations, and 99.73% within ±3 standard deviations. The areas under the normal curve for various standard deviations are shown in Figure 4-6. The standard deviation is a statistic commonly used to perform digital image processing (e.g., linear contrast enhancement, parallelepiped classification, and error evaluation). To interpret variance and standard deviation, analysts should not attach significance to each numerical value but should compare one variance or standard deviation to another. The sample having the largest variance or standard deviation has the greater spread among the brightness values of the observations, provided all the measurements were made in the same units.
Measures of Distribution (Histogram) Asymmetry and Peak Sharpness:

Sometimes it is useful to compute additional statistical measures that describe in quantitative terms various characteristics of the distribution (histogram). Skewness is a measure of the asymmetry of a histogram and is computed using the formula

skewness k = [ Σ ((BVik − µk) / sk)³ ] / n

A perfectly symmetric histogram has a skewness value of zero.

A histogram can be symmetric but have a peak that is very sharp or one that is subdued when compared with a perfectly normal distribution. A perfectly normal frequency distribution (histogram) has zero kurtosis. The greater the positive kurtosis value, the sharper the peak in the distribution when compared with a normal histogram. Conversely, a negative kurtosis value suggests that the peak in the histogram is less sharp than that of a normal distribution. Kurtosis is computed using the formula

kurtosis k = [ Σ ((BVik − µk) / sk)⁴ ] / n − 3

Outliers or blunders in the remotely sensed data can have a serious impact on the computation of skewness and kurtosis. Therefore, it is desirable to remove (or repair) bad data values before computing skewness or kurtosis.

Multivariate Image Statistics:

Remote sensing research is often concerned with the measurement of how much radiant flux is reflected or emitted from an object in more than one band (e.g., in red and near-infrared bands). It is useful to compute multivariate statistical measures such as covariance and correlation among the several bands to determine how the measurements covary. Later it will be shown that variance-covariance and correlation matrices are used in remote sensing principal components analysis (PCA), feature selection, classification, and accuracy assessment. For this reason, we will examine how the variance-covariance between bands is computed and then proceed to compute the correlation between bands. Although initially performed on a simple dataset consisting of just five hypothetical pixels, this example provides insight into the utility of these statistics for digital image processing purposes. Later, these statistics are computed for a seven-band Charleston, SC, Thematic Mapper scene consisting of 240 × 256 pixels. Note that this scene is much larger than the 120 × 100 pixel example.

The following examples are based on an analysis of the first five pixels [(1, 1), (1, 2), (1, 3), (1, 4), and (1, 5)] in a four-band (green, red, near-infrared, near-infrared) hypothetical multispectral dataset obtained over vegetated terrain. Thus, each pixel consists of four spectral measurements (Table 4-1). Note the low brightness values in band 2 caused by plant chlorophyll absorption of red light for photosynthetic purposes. Increased reflectance of the incident near-infrared energy by the green plant results in higher brightness values in the two near-infrared bands (3 and 4). Although it is a small hypothetical sample dataset, it represents well the spectral characteristics of healthy green vegetation. Methods of selecting the most useful bands for analysis are described in later sections.
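To make the band-to-band computation concrete, here is a minimal sketch of a variance-covariance matrix and a correlation matrix for a small multiband dataset; the five pixel vectors are illustrative values only, not the actual Table 4-1 data.

```python
# Sketch: variance-covariance and correlation matrices between bands for five pixels.
import numpy as np

# rows = pixels, columns = bands (green, red, NIR, NIR); illustrative values only
pixels = np.array([
    [130, 57, 180, 205],
    [165, 35, 215, 255],
    [100, 25, 135, 195],
    [135, 50, 200, 220],
    [145, 65, 205, 235],
], dtype=float)

cov_matrix = np.cov(pixels, rowvar=False)        # unbiased (n - 1) variance-covariance matrix
corr_matrix = np.corrcoef(pixels, rowvar=False)  # band-to-band correlation matrix
print("Covariance matrix:\n", cov_matrix)
print("Correlation matrix:\n", corr_matrix)
```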

Initial image display:

Initial image display in digital image processing refers to the process of presenting an
image on a display device for visual inspection and analysis. It involves loading the image
into an image processing software or application and rendering it on a computer monitor
or other display medium. The purpose of initial image display is to provide a visual
representation of the image content, allowing users to examine and interpret the image
data.
During the initial image display, the image is typically presented in its raw or
unprocessed form before any modifications or enhancements are applied. It allows
users to observe the image's colors, details, textures, and spatial properties. The
display may provide options for zooming, panning, and adjusting image settings like
brightness, contrast, and color balance to optimize the visual representation.

The software applications used for initial image display in digital image processing can
vary depending on the specific requirements and tasks involved. Some commonly used
software tools for image display and analysis include MATLAB, OpenCV, ImageJ, Adobe
Photoshop, and GIMP (GNU Image Manipulation Program). These applications provide
various features for image loading, visualization, manipulation, and analysis.

It's important to note that the initial image display is just the first step in the overall
image processing workflow. Once the image is displayed, it can be subjected to further
processing operations like filtering, segmentation, feature extraction, or object
recognition to achieve specific goals or tasks in digital image processing.
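A minimal sketch of an initial display with OpenCV and Matplotlib follows; the file name is a placeholder.

```python
# Sketch: load an image and display it without any processing.
import cv2
import matplotlib.pyplot as plt

img = cv2.imread("scene.tif", cv2.IMREAD_COLOR)        # "scene.tif" is a placeholder; loaded as BGR
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)         # reorder channels for display

plt.imshow(img_rgb)
plt.title("Initial (unprocessed) image display")
plt.axis("off")
plt.show()
```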

Ideal display:

An ideal display in digital image processing refers to a display device that accurately
reproduces the image content with high fidelity, preserving the colors, details, and
spatial characteristics of the original image. The ideal display aims to provide a
perceptually accurate representation of the image, minimizing any distortion or loss of
information.

Characteristics of an ideal display include:

1. High Color Accuracy: The display should have a wide color gamut and be capable
of reproducing a broad range of colors accurately. It should adhere to
established color standards, such as sRGB or Adobe RGB, to ensure consistency
in color representation.
2. High Dynamic Range: An ideal display should have a wide dynamic range, capable
of accurately rendering both shadow details and bright highlights without loss
of information. This ensures that the full range of intensities in the image is
faithfully displayed.
3. High Resolution: The display should have a high pixel density to provide sharp
and detailed image reproduction. The display's resolution should match or
exceed the resolution of the image being displayed to avoid any loss of fine
details.
4. Uniformity: An ideal display exhibits uniform brightness and color across its
entire surface, avoiding any variations or inconsistencies that could distort the
image perception.
5. Calibration and Profiling: An ideal display supports calibration and profiling to
ensure accurate color reproduction. Calibration involves adjusting the display's
settings to match a known reference standard, while profiling creates a color
profile that characterizes the display's specific color behavior.

It enables reliable evaluation and interpretation of image data, ensuring that subsequent processing steps are based on an accurate representation of the original image content.
Types:

The ideal display aims to provide a perceptually accurate representation of the image,
minimizing any distortion or loss of information. Here are some types of ideal displays
commonly used in digital image processing:

1. High-Resolution Display: High-resolution displays have a high pixel density, allowing for the faithful reproduction of fine details in the image. They provide a clear and sharp representation of the image content, enabling accurate analysis and interpretation. Examples of high-resolution displays include 4K monitors, Retina displays, and high-density LCD screens.
2. Wide Color Gamut Display: Displays with a wide color gamut can accurately
reproduce a broad range of colors, providing vibrant and realistic color
representation. They cover a larger portion of the color spectrum, allowing for
more accurate color reproduction and minimizing color distortion. Examples of
displays with wide color gamut include those supporting Adobe RGB or DCI-P3
color spaces.
3. High Dynamic Range (HDR) Display: HDR displays are capable of rendering a
wider dynamic range of luminance levels, resulting in enhanced contrast and
more accurate representation of highlights and shadows. They can faithfully
reproduce details in both dark and bright areas of the image, providing a more
visually appealing and realistic display. HDR displays are commonly used in
applications such as photography, gaming, and video production.
4. Color-Calibrated Display: Color-calibrated displays are calibrated and profiled
to adhere to established color standards, ensuring accurate color reproduction.
They undergo a calibration process that adjusts the display's color settings to
match a known reference standard, resulting in consistent and reliable color
representation. Color calibration is crucial in industries such as graphic design,
print production, and medical imaging.
5. High-Uniformity Display: High-uniformity displays exhibit consistent
brightness and color across the entire display surface. They minimize variations
or inconsistencies in brightness and color representation, ensuring a visually
uniform image presentation. High-uniformity displays are important in
applications where accurate visual assessment and analysis of image content are
critical, such as medical imaging and scientific research.
6. Multi-Display Systems: In certain cases, an ideal display may involve a multi-
display setup where multiple monitors are used together to provide an expanded
viewing area. Multi-display systems allow for the simultaneous display of
different image views or analysis tools, providing more flexibility and enhancing
productivity in image processing tasks.

The selection of an ideal display is often based on factors such as color accuracy,
resolution, dynamic range, and uniformity that best suit the intended purpose.
Sensor models:

Sensor models in digital image processing refer to mathematical representations that describe the behavior and characteristics of image acquisition sensors, such as cameras or scanners. These models are used to understand and compensate for various factors that affect the image formation process, enabling accurate and reliable image processing. Some commonly used sensor models include:

1. Geometric Distortion Models: Geometric distortion models account for lens distortions in cameras, such as radial and tangential distortions. These models help correct the geometric distortions and achieve a more accurate geometric representation in the acquired image. Examples of geometric distortion models include the radial distortion model and Brown's distortion model.
2. Radiometric Calibration Models: Radiometric calibration models are used to
compensate for variations in sensor response, including sensitivity, gain, and
bias. These models ensure consistent and accurate radiometric representation
of the image data. Radiometric calibration models can involve linear
transformations, such as gain and offset corrections, or more complex models,
such as gamma correction or sensor-specific calibration curves.
3. Noise Models: Noise models describe the statistical characteristics of noise
present in the acquired image. They help in understanding and mitigating the
effects of noise, such as sensor noise, readout noise, or quantization noise.
Common noise models include Gaussian noise models, Poisson noise models, or
noise models based on specific sensor characteristics.
4. Color Transformation Models: Color transformation models are used to convert
the acquired sensor data into a specific color space representation, such as RGB,
CMYK, or LAB. These models take into account the spectral sensitivity of the
sensor and the color characteristics of the captured scene. Color
transformation models can involve matrix-based transformations or lookup
tables to map sensor data to the desired color space.
5. Sensor Response Models: Sensor response models describe the relationship
between the incident light and the sensor's response. These models capture the
non-linear characteristics of sensor response and can be used to map the
captured image data to a linear representation. Sensor response models can
involve gamma correction, logarithmic transformations, or camera-specific
calibration data.
6. Spectral Response Models: Spectral response models describe the sensitivity
of the image sensor to different wavelengths of light. These models capture the
spectral characteristics of the sensor and can be used in applications such as
multispectral or hyperspectral imaging. Spectral response models are typically
represented by spectral sensitivity curves that indicate the sensor's response
at different wavelengths.

Sensor models are essential in various image processing tasks, including image
correction, image registration, image fusion, color correction, and 3D reconstruction.
By understanding and compensating for the characteristics and limitations of image
acquisition sensors, these models enable accurate and reliable image processing and
analysis.
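As one possible example of applying a geometric distortion (lens) model in practice, the sketch below uses OpenCV's undistortion routine; the camera matrix and Brown-model distortion coefficients are hypothetical values that would normally come from a calibration procedure.

```python
# Sketch: correcting radial and tangential lens distortion with a Brown-style model.
import cv2
import numpy as np

img = cv2.imread("frame.png")                     # placeholder file name

fx = fy = 1200.0                                  # assumed focal lengths (pixels)
cx, cy = img.shape[1] / 2.0, img.shape[0] / 2.0   # assume the principal point is at the center
camera_matrix = np.array([[fx, 0, cx],
                          [0, fy, cy],
                          [0,  0,  1]], dtype=np.float64)
dist_coeffs = np.array([-0.25, 0.08, 0.001, 0.0005, 0.0])  # k1, k2, p1, p2, k3 (illustrative only)

undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("frame_undistorted.png", undistorted)
```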

Spectral Information and Resolution:

Most remote sensing investigations are based on developing a deterministic relationship (i.e., a model) between the amount of electromagnetic energy reflected, emitted, or back-scattered in specific bands or frequencies and the chemical, biological, and physical characteristics of the phenomena under investigation (e.g., a corn field canopy). Spectral resolution is the number and dimension (size) of specific wavelength intervals (referred to as bands or channels) in the electromagnetic spectrum to which a remote sensing instrument is sensitive.

Multispectral remote sensing systems record energy in multiple bands of the electromagnetic spectrum. Certain regions or spectral bands of the electromagnetic spectrum are optimum for obtaining information on biophysical parameters. The bands are normally selected to maximize the contrast between the object of interest and its background (i.e., object-to-background contrast). Careful selection of the spectral bands might improve the probability that the desired information will be extracted from the remote sensor data.

Spatial Information and Resolution:

There is a general relationship between the size of an object or area to be identified and the spatial resolution of the remote sensing system. Spatial resolution is a measure of the smallest angular or linear separation between two objects that can be resolved by the remote sensing system. The spatial resolution of aerial photography may be measured by

 Placing calibrated, parallel black and white lines on tarps that are placed in the
field,
 Obtaining aerial photography of the study area, and
 Computing the number of resolvable line pairs per millimeter in the photography.

Many satellite remote sensing systems use optics that have a constant IFOV. Therefore, a sensor system's nominal spatial resolution is defined as the dimension in meters (or feet) of the ground-projected IFOV, where the diameter of the circle (D) on the ground is a function of the instantaneous field of view (β) times the altitude (H) of the sensor above ground level (AGL); that is, D = β × H.

Pixels are normally represented on computer screens and in hard-copy images as rectangles with length and width. Therefore, we typically describe a sensor system's nominal spatial resolution as being 10 × 10 m or 30 × 30 m.

Generally, the smaller the nominal spatial resolution, the greater the spatial resolving
power of the remote sensing system.

Spatial resolution would appropriately describe the ground-projected laser pulse (e.g.,
15 cm) but sampling density (i.e., number of points per unit area) describes the
frequency of ground observations.

Because we have spatial information about the location of each pixel (x, y) in the matrix, it is also possible to examine the spatial relationship between a pixel and its neighbors. Therefore, the amount of spatial autocorrelation and other spatial geostatistical measurements can be determined based on the spatial information inherent in the imagery.

Temporal Information and Resolution:

The temporal resolution of a remote sensing system generally refers to how often the
sensor records imagery of a particular area.

Obtaining imagery at a high temporal resolution is very important for many applications.
For example, the National Oceanic and Atmospheric Administration (NOAA) Geostationary Operational Environmental Satellites (GOES) are in geostationary orbits, allowing them to obtain very high temporal resolution imagery (e.g., every half-hour). This allows meteorologists to provide hourly updates on the location of frontal systems and hurricanes and use this information along with other data to predict storm tracks.

Another aspect of temporal information is how many observations are recorded from a single pulse of energy that is directed at the Earth by an active sensor such as LiDAR. For example, most LiDAR sensors emit one pulse of laser energy and record multiple responses from this pulse. Measuring the time differences between multiple responses allows for the determination of object heights and terrain structure. Also, the length of time required to emit an energy signal by an active sensor is referred to as the pulse length. Short pulse lengths allow very precise distance (i.e., range) measurement.

Radiometric Information and Resolution:


Radiometric resolution is defined as the sensitivity of a remote sensing detector to differences in signal strength as it records the radiant flux reflected, emitted, or back-scattered from the terrain. It defines the number of just discriminable signal levels. Therefore, radiometric resolution can have a significant impact on our ability to measure the properties of scene objects.

The original Landsat 1 Multispectral Scanner launched in 1972 recorded reflected energy with a precision of 6 bits (values ranging from 0 to 63). Landsat 4 and 5 Thematic Mapper sensors, launched in 1982 and 1984, respectively, recorded data in 8 bits (values from 0 to 255). Thus, the Landsat TM sensors had improved radiometric resolution (sensitivity) when compared with the original Landsat MSS. QuickBird and IKONOS sensors record information in 11 bits (values from 0 to 2,047). Several new sensor systems have 12-bit radiometric resolution (values ranging from 0 to 4,095). Radiometric resolution is sometimes referred to as the level of quantization. High radiometric resolution generally increases the probability that phenomena will be remotely sensed more accurately.
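A small sketch relating bit depth to the number of quantization levels, and rescaling hypothetical 11-bit data to 8 bits for display, is shown below; the data values are synthetic.

```python
# Sketch: quantization levels for common radiometric resolutions, and an 11-bit to
# 8-bit rescaling of synthetic data for display purposes.
import numpy as np

for bits in (6, 8, 11, 12):
    print(f"{bits}-bit data: values 0 to {2 ** bits - 1}")

rng = np.random.default_rng(3)
data_11bit = rng.integers(0, 2048, size=(256, 256))               # stand-in for 11-bit imagery
data_8bit = (data_11bit / 2047.0 * 255.0).round().astype(np.uint8)
```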

IFOV:

IFOV stands for Instantaneous Field of View.

The IFOV is a measure of the spatial resolution of an imaging system and determines
the smallest distinguishable detail or feature that can be resolved by the system. A
smaller IFOV corresponds to higher spatial resolution, as it indicates that each pixel or
detector element captures a smaller portion of the scene.

The IFOV can be expressed as a simple ratio:

IFOV ≈ x / D

Where:

 IFOV is the Instantaneous Field of View (in radians)
 x is the linear size of the area viewed by a single pixel or detector element
 D is the corresponding distance: the focal length if x is the detector element size, or the distance from the imaging system to the object or scene if x is the ground footprint

In practical terms, the IFOV can be thought of as the angular size of a pixel or detector element when viewed from the object or scene. It determines how much detail can be captured and resolved by the imaging system. Smaller IFOV values are desirable in applications that require high-resolution imaging, such as aerial and satellite imagery, where fine details need to be captured and analyzed.
GIFOV:
GIFOV stands for Ground Instantaneous Field of View.
GIFOV represents the area on the ground that is imaged by a single pixel or detector
element in the imaging system. It is the projection of the IFOV onto the ground
surface. GIFOV is an important parameter in remote sensing as it determines the
spatial footprint or coverage of each pixel on the ground.

The calculation of GIFOV involves taking into account the IFOV of the imaging system and the altitude or height at which the system is positioned. With the IFOV expressed in radians, the GIFOV can be approximated as:

GIFOV = IFOV × altitude

which is equivalent to (detector element size × altitude) / focal length.

Where:

 GIFOV is the Ground Instantaneous Field of View
 IFOV is the Instantaneous Field of View (the angular size of a pixel or detector element, in radians)
 Altitude is the height or distance of the imaging system above the ground

The GIFOV provides information about the spatial resolution and coverage of each
pixel on the ground. It helps in understanding the level of detail captured by the
imaging system and the ground area represented by each pixel.

GSI:

Ground Sampling Interval (GSI) is a measure used to describe the spatial resolution or
pixel size of remotely sensed images. It represents the physical distance on the ground
that is represented by each pixel in the image. GSI is typically expressed in units of
meters per pixel.

GSI can be calculated using the following formula:

GSI = (Pixel Size × Distance to the Object) / Focal Length

Where:

 Pixel Size: The physical size of a single detector element (pixel pitch) on the sensor, usually provided by the imaging system manufacturer. It is typically given in micrometers or meters.
 Distance to the Object: The distance between the imaging system and the
object or scene being imaged. It is typically measured in meters.
 Focal Length: The effective focal length of the imaging system, which is a
characteristic property of the camera or lens used for image capture. It is
usually given in units of meters.
By applying the formula, the GSI provides an estimation of the physical distance
represented by each pixel in the image. A smaller GSI indicates higher spatial
resolution, meaning smaller ground features can be resolved in the image. Conversely, a
larger GSI corresponds to lower spatial resolution, where larger ground features are
represented by each pixel.
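The sketch below ties the three quantities together for a hypothetical sensor; the detector size, focal length, and altitude are illustrative assumptions, not values from the text.

```python
# Sketch: IFOV, GIFOV and GSI for a hypothetical spaceborne sensor.
detector_size = 10e-6          # detector element size: 10 micrometers (assumed)
focal_length = 0.5             # effective focal length: 0.5 m (assumed)
altitude = 700e3               # altitude above ground level: 700 km (assumed)

ifov = detector_size / focal_length              # angular IFOV in radians
gifov = ifov * altitude                          # ground-projected IFOV in meters
gsi = detector_size * altitude / focal_length    # ground sampling interval in meters per pixel

# With contiguous detector elements the GSI equals the GIFOV, as it does here.
print(f"IFOV  = {ifov:.2e} rad")
print(f"GIFOV = {gifov:.1f} m")
print(f"GSI   = {gsi:.1f} m/pixel")
```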

Geometry and Radiometry:

Geometry and radiometry are two fundamental aspects of digital image processing that
play important roles in analyzing and manipulating digital images.

1. Geometry:
Geometry in digital image processing refers to the spatial properties and
relationships within an image. It involves the analysis and manipulation of image
content in terms of size, shape, position, and orientation. Key concepts in
geometric image processing include:
a. Image Resolution: Resolution refers to the level of detail in an
image, typically measured in pixels per unit length. High resolution provides more
details, while low resolution results in a more pixelated image.
b. Image Registration: Image registration is the process of aligning
different images or aligning an image to a specific coordinate system. It involves
finding correspondences between points or features in images and transforming
one image to match the other.
c. Geometric Transformations: Geometric transformations involve
modifying the shape, size, position, or orientation of an image. Common
transformations include translation (shifting), rotation, scaling, and shearing.
d. Image Warping: Image warping involves distorting or deforming an
image based on geometric transformations. It can be used for tasks such as
image rectification, perspective correction, or creating special effects.
e. Image Segmentation: Image segmentation is the process of
partitioning an image into meaningful regions or objects. It involves identifying
boundaries between objects and grouping pixels with similar properties
together.
2. Radiometry:
Radiometry in digital image processing deals with the measurement and
interpretation of electromagnetic radiation (light) in an image. It focuses on the
analysis of image content based on the intensity and spectral properties of light.
Key concepts in radiometric image processing include:
a. Pixel Intensity: Pixel intensity represents the brightness or gray
level of a pixel in an image. It is usually represented by a numerical value ranging
from 0 (black) to 255 (white) in an 8-bit grayscale image.
b. Contrast Enhancement: Contrast enhancement techniques aim to
improve the visual quality of an image by adjusting the distribution of pixel
intensities. Histogram equalization, contrast stretching, and gamma correction
are common methods used for contrast enhancement.
c. Color Spaces: Color spaces provide different ways of representing
colors in images. Common color spaces include RGB (Red, Green, Blue), CMYK
(Cyan, Magenta, Yellow, Black), and HSV (Hue, Saturation, Value). Color space
conversions are often used for image analysis and processing.
d. Radiometric Calibration: Radiometric calibration involves the
conversion of pixel values in an image to physical measurements, such as
reflectance or radiance. This process allows quantitative analysis and comparison
of images acquired under different conditions.
e. Spectral Analysis: Spectral analysis involves studying the spectral
characteristics of an image, which refers to the distribution of energy across
different wavelengths. It is useful for applications such as vegetation analysis,
mineral identification, and remote sensing.
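To make the radiometric side of this list concrete, here is a brief sketch of a contrast stretch with gamma correction and a gain-offset radiometric calibration; the gain, offset, and gamma values are hypothetical.

```python
# Sketch: min-max contrast stretch, gamma correction, and DN-to-radiance calibration.
import numpy as np

rng = np.random.default_rng(4)
dn = rng.integers(20, 90, size=(256, 256)).astype(float)     # low-contrast stand-in digital numbers

# Contrast enhancement: min-max stretch to 0..1, then gamma correction
stretched = (dn - dn.min()) / (dn.max() - dn.min())
gamma_corrected = np.power(stretched, 1 / 2.2) * 255.0       # gamma = 2.2 (assumed)

# Radiometric calibration: L = gain * DN + offset (hypothetical coefficients)
gain, offset = 0.055, -0.2
radiance = gain * dn + offset
```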
Sources of image degradation and correction procedures :

Images can be subject to various sources of degradation, which can negatively impact their quality and clarity. Common sources and their correction procedures include:

1. Noise:
 Sensor noise: Introduced during image acquisition by the camera sensor,
resulting in random variations in pixel values.
 Transmission noise: Introduced during image transmission or storage,
leading to errors or corruption in the image data.
Noise Correction:
 Removal of sensor noise: Techniques such as median filtering, Gaussian
filtering, or adaptive filtering can be applied to reduce noise while
preserving image details.
 Reduction of transmission noise: Error correction algorithms or image
interpolation methods can be utilized to address noise introduced during
image transmission or storage.

2. Blur:
 Motion blur: Caused by the relative motion between the camera and the
subject during the exposure time, resulting in blurred or smudged details.
 Out-of-focus blur: Occurs when the camera fails to focus accurately on
the subject, leading to a lack of sharpness and loss of fine details.
 Lens aberrations: Imperfections in camera lenses, such as chromatic
aberration, spherical aberration, or distortion, which can cause blurring
or distortions in the image.
Blur Correction:
 Motion blur correction: Motion deblurring techniques aim to restore
sharpness and recover fine details in images affected by motion blur.
These methods can employ deconvolution algorithms or utilize motion
estimation techniques.
 Out-of-focus blur correction: Deconvolution algorithms or image
enhancement techniques like the blind deblurring approach can be used to
mitigate the effects of out-of-focus blur.
 Lens aberration correction: Image calibration methods, such as distortion
correction algorithms or lens-specific profile adjustments, can be
employed to correct lens-related distortions.

3. Compression artifacts:
 Lossy compression: Commonly used image compression algorithms like
JPEG introduce artifacts due to the irreversible loss of information
during compression, resulting in blocky distortions, ringing, or mosquito
noise.
Compression Artifact Removal:
 Post-processing filters: Adaptive filters, deblocking filters, or noise
reduction filters can be applied to reduce compression artifacts
introduced by lossy compression algorithms.
 Advanced compression artifact removal: Machine learning-based
approaches, such as deep neural networks, can be trained to specifically
target and remove compression artifacts.
4. Geometric distortions:
 Lens distortions: Imperfections in camera lenses can cause geometric
distortions such as barrel distortion or pincushion distortion, resulting in
a non-uniform scaling of the image.
 Perspective distortions: Occur when the camera is not parallel to the
subject, leading to objects appearing distorted, stretched, or skewed.
Geometric Distortion Correction:
 Lens distortion correction: Calibration techniques or lens-specific
correction algorithms can be utilized to correct geometric distortions
caused by lens imperfections.
 Perspective correction: Homography-based transformations or geometric calibration methods can be employed to rectify perspective distortions and restore the correct shape and proportions of objects.
5. Illumination variations:
 Uneven lighting: Non-uniform lighting conditions across the image, such
as shadows, overexposed or underexposed areas, can cause poor visibility
and loss of detail.
 Backlighting: When the subject is lit from behind, resulting in a
silhouette or reduced visibility of the subject's details.
Illumination Correction:
 Histogram equalization: Adjusting the image histogram to improve
contrast and enhance visibility of details in different lighting conditions.
 Local adaptive methods: Techniques such as adaptive histogram
equalization (AHE), contrast-limited adaptive histogram equalization
(CLAHE), or other local enhancement methods can be applied to correct
uneven lighting and improve image quality.
6. Color distortions:
 White balance issues: Incorrectly set white balance can result in color
casts, where the image appears too warm (yellowish) or too cool (bluish).
 Color cross-talk: In some imaging systems, there may be unintended
interaction between color channels, resulting in color bleeding or
inaccurate color representation.
Color Correction:
 White balance adjustment: Techniques such as gray-world assumption,
color constancy algorithms, or manual adjustment can be used to correct
color casts and restore accurate color representation.
 Color space transformations: Converting the image to a different color
space and applying correction operations, such as channel scaling or
chromatic adaptation, to achieve desired color accuracy.
 Histogram-based methods: Color mapping techniques or histogram
matching can be employed to align the color distribution of an image to a
reference or desired color distribution.
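The sketch below strings together a few of the correction procedures listed above (median filtering for noise, CLAHE for uneven illumination, and a gray-world white balance); the file names and parameter values are placeholders, and a real workflow would choose corrections to match the degradations actually present.

```python
# Sketch: noise, illumination and color corrections applied in sequence with OpenCV.
import cv2
import numpy as np

img = cv2.imread("degraded.jpg")                       # placeholder 8-bit BGR image

# Noise: a median filter suppresses impulse noise while preserving edges
denoised = cv2.medianBlur(img, 3)

# Illumination: CLAHE applied to the luminance channel only
lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
corrected = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# Color: gray-world white balance scales each channel toward a common mean
means = corrected.reshape(-1, 3).mean(axis=0)
balanced = np.clip(corrected * (means.mean() / means), 0, 255).astype(np.uint8)
cv2.imwrite("corrected.jpg", balanced)
```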

Remote Sensing Atmospheric Correction:

Even when the remote sensing system is functioning properly, radiometric error may be
introduced into the remote sensor data. The two most important sources of
environmental attenuation are

1) atmospheric attenuation caused by scattering and absorption in the atmosphere, and

2) topographic attenuation.

Unnecessary Atmospheric Correction:

Sometimes it is possible to ignore atmospheric effects in remote sensor data completely. For example, atmospheric correction is not always necessary for certain types of classification and change detection. It is not generally necessary to perform atmospheric correction on a single date of remotely sensed data that will be classified using a maximum likelihood classification algorithm. As long as the training data from the image to be classified have the same relative scale (corrected or uncorrected), atmospheric correction has little effect on classification accuracy.

The general principle is that atmospheric correction is not necessary as long as the training data are extracted from the image (or composite image) under investigation and are not imported from another image obtained at another place or time.

Necessary Atmospheric Correction:

Sometimes it is essential that the remotely sensed data be atmospherically corrected. For example, it is usually necessary to atmospherically correct the remote sensor data if biophysical parameters are going to be extracted from water bodies (e.g., chlorophyll a, suspended sediment, temperature) or vegetation (e.g., biomass, leaf area index, chlorophyll, percent canopy closure). If the data are not corrected, the subtle differences in reflectance (or emittance) among the important constituents may be lost. Furthermore, if the biophysical measurements extracted from one image (e.g., biomass) are to be compared with the same biophysical information extracted from other images obtained on different dates, then it is usually essential that the remote sensor data be atmospherically corrected.

Signature extension through space and time is becoming more important. The only way
to extend signatures through space and time is to atmospherically correct each
individual date of remotely sensed data.

Types of Atmospheric Correction:

 Absolute atmospheric correction, and
 Relative atmospheric correction.
