Resolution and Types, Elements of Aerial Photo and Satellite Image

The document discusses the various types of resolution in aerial photography and satellite imagery, including spatial, spectral, radiometric, and temporal resolution, each influencing the quality and applicability of remote sensing data. It outlines the characteristics, advantages, and limitations of aerial photography, emphasizing its role in detailed mapping and analysis despite competition from satellite imagery. Additionally, it covers the elements of photo interpretation and the processes involved in image analysis, including pre-processing and enhancement techniques.


Image Resolution

Resolution in remote sensing is broadly defined as the sensor's ability to capture and display details of ground features. It encapsulates how well a sensor can describe and map these features, with higher resolutions providing more detailed information.

Types of Resolution:

1. Coarse Resolution: Provides less detailed information; suitable for mapping large areas (small cartographic scale).
2. Fine Resolution: Offers detailed information; ideal for detailed mapping of small areas (large cartographic scale).

Influencing Factors:

● Sensor Characteristics: Different sensors have varying resolution capabilities.
● Scene Characteristics: The nature of the imaged scene affects perceived resolution.
● Environmental Conditions: Atmospheric conditions and illumination impact image clarity.
● Interpreter Expertise: The skill of the image interpreter influences how effectively the available resolution can be exploited.

Types of Image Resolution

Image resolution can be categorized based on various parameters, including spatial, spectral, temporal, and radiometric characteristics. Each type of resolution plays a distinct role in determining the suitability of remote sensing data for specific applications.

1. Spatial Resolution

Spatial resolution refers to the size of each pixel in an image, typically measured in units of distance such as centimeters (cm) or meters (m). It indicates the sensor's ability to capture closely spaced objects on the ground and distinguish them as separate entities.

Key Aspects:

● Pixel Size: Smaller pixels represent higher spatial resolution, allowing for
more detailed imagery.
● Sensor Parameters: The design and specifications of the sensor influence
spatial resolution.
● Platform Altitude: The altitude of the data collection platform (e.g.,
satellite, airplane) affects the spatial resolution.

Illustrative Example:

Consider an astronaut observing the Earth from a space shuttle compared to an airplane flying over a city:

● From Space Shuttle: The astronaut can view large areas, such as
provinces or countries, but cannot distinguish individual houses.
● From Airplane: The pilot can see individual houses and vehicles,
demonstrating higher spatial resolution.
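The pixel footprint on the ground (the ground sample distance, GSD) can be estimated from camera and platform geometry. The sketch below uses the standard frame-camera relation GSD = pixel pitch × flying height / focal length; the numbers are illustrative, not taken from the document:

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground sample distance (m/pixel) for a frame camera: GSD = p * H / f."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Illustrative values: 5 um detector pitch, 100 mm lens, 2,000 m flying height
print(ground_sample_distance(5e-6, 0.100, 2000))  # about 0.1 m per pixel
```

Halving the flying height (or doubling the focal length) halves the GSD, which is why airborne platforms resolve finer detail than satellites carrying comparable optics.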

Classification of Spatial Resolution:

● Coarse Resolution Sensors: Lower detail, suitable for regional or global mapping.
● Intermediate Resolution Sensors: Moderate detail, useful for
national-scale mapping.
● High Resolution Sensors: High detail, ideal for urban planning and small
area mapping.

2. Spectral Resolution

Spectral resolution refers to the ability of a sensor to define fine wavelength intervals. It is determined by the number and width of spectral bands that a sensor can capture within the electromagnetic spectrum.

Key Components:

● Spectral Bands: Specific wavelength ranges at which a sensor collects data.
● Central Wavelength: The midpoint of a spectral band.
● Bandwidth: The range of wavelengths encompassed by a spectral band.

Types of Spectral Resolution:

1. Panchromatic Sensors: Capture a wide range of wavelengths in a single band, typically covering the entire visible spectrum.
2. Multi-spectral Sensors: Capture data across multiple, distinct spectral
bands, each with a specific wavelength range.
3. Hyperspectral Sensors: Capture data across dozens or hundreds of very
narrow and contiguous spectral bands, providing detailed spectral
information.

Applications:

● Vegetation Analysis: Identifying different plant species based on their spectral signatures.
● Mineral Exploration: Distinguishing between various rock types.
● Water Quality Monitoring: Assessing the presence of pollutants or algae
based on spectral data.

3. Radiometric Resolution

Radiometric resolution refers to the ability of a sensor to distinguish between different levels of intensity or brightness in an image. It is determined by the number of discrete quantization levels used to represent the intensity of reflected or emitted radiation.

Key Concepts:

● Quantization Levels: The number of discrete values used to represent the intensity of each pixel.
● Bit Depth: Indicates the number of bits used for each pixel, determining the number of possible quantization levels.
○ 7-bit: 128 levels (0-127)
○ 8-bit: 256 levels (0-255)
○ 10-bit: 1024 levels (0-1023)
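The number of quantization levels follows directly from the bit depth (levels = 2^bits). A one-line check of the values listed above:

```python
def quantization_levels(bit_depth):
    # Number of discrete brightness values a sensor can record = 2 ** bit depth
    return 2 ** bit_depth

for bits in (7, 8, 10):
    print(f"{bits}-bit: {quantization_levels(bits)} levels")
# 7-bit: 128 levels, 8-bit: 256 levels, 10-bit: 1024 levels
```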

Impact on Image Quality:

● High Radiometric Resolution: Provides smooth gradations in brightness, allowing for detailed analysis of image features.
● Low Radiometric Resolution: Results in images with high contrast and fewer brightness levels, which may obscure subtle differences.

4. Temporal Resolution

Temporal resolution refers to the frequency at which a sensor can acquire images of the same area on the Earth's surface. It is typically expressed in terms of the number of days between successive image acquisitions of the same location.

Key Aspects:

● Repeat Coverage: The ability of a sensor to revisit and image the same
area at regular intervals.
● Application Requirements: Different applications require different
temporal resolutions based on the dynamics of the observed phenomena.

Factors Affecting Temporal Resolution:

● Swath Width: The width of the area covered by the sensor during each
pass affects how frequently the sensor can revisit the same location.
● Orbital Parameters: The satellite's orbit determines its coverage pattern
and revisit frequency.
● Sensor Scheduling: The operational schedule of the sensor influences
temporal resolution.

Examples:

● Agricultural Monitoring: Requires frequent imaging (e.g., every 10 days) to track crop growth and health.
● Disaster Response: High temporal resolution is crucial for assessing and
responding to events like floods or earthquakes.
● Urban Growth Analysis: Lower temporal resolution (e.g., yearly intervals)
may suffice to monitor long-term development patterns.

Classification of Temporal Resolution:


● High Temporal Resolution: Frequent revisits, suitable for dynamic and
rapidly changing environments.
● Low Temporal Resolution: Infrequent revisits, adequate for stable or
slowly changing areas.

Aerial Photography

Aerial photography is one of the most widespread and historically significant methods in remote sensing. Utilizing aircraft as platforms, it captures images of the Earth's surface, providing valuable data for mapping, surveying, and various analytical applications. Despite the emergence of satellite sensors since 1972, aerial photography remains a crucial tool, especially with advancements in digital imaging technologies.

Types of Aerial Photographs

1. Panchromatic Photography:

● Characteristics: Black and white images capturing a broad range of wavelengths in the visible spectrum.
● Advantages: High sensitivity and resolution, making it suitable for
detailed mapping.
● Limitations: Lack of color information can make interpretation less
intuitive.

2. Color Photography:

● Characteristics: Captures images in red, green, and blue (RGB) wavelengths.
● Advantages: Intuitive for human interpretation, as colors correspond to
natural appearances.
● Limitations: May not capture information beyond the visible spectrum,
limiting material differentiation.

3. Infrared (IR) Photography:

● Characteristics: Utilizes IR-sensitive film or digital sensors to capture NIR wavelengths.
● Advantages: Enhances contrast between different materials, such as
vegetation and soil.
● Applications: Useful in vegetation analysis, water body delineation, and
mineral exploration.

Wave Bands in Digital Aerial Photography:

● Blue Band: 0.45 - 0.52 μm
● Green Band: 0.52 - 0.60 μm
● Red Band: 0.63 - 0.69 μm
● Near-Infrared (NIR) Band: 0.76 - 0.90 μm

Digital Aerial Photography Workflow

From Image Capture to Mapping

1. Image Acquisition:
○ Platform: Aircraft equipped with digital cameras.
○ Flight Parameters: Altitude, speed, and overlap (front and side lap)
are carefully planned to ensure complete and consistent coverage.

2. Image Processing:
○ Geometric Correction: Adjusting images to correct distortions and
align with geographic coordinates.
○ Radiometric Correction: Enhancing image quality by adjusting
brightness and contrast.

3. Image Analysis and Interpretation:


○ Digitization: Extracting features from images using GIS or remote
sensing software.
○ Thematic Mapping: Creating maps that represent specific themes
such as land use, vegetation, or infrastructure.

4. Output Production:
○ Digital Maps: High-accuracy maps suitable for various applications.
○ Stereo-Pairs: Overlapping images used to create three-dimensional
representations of the terrain.

Stereo-Pairs and Photogrammetry

● Stereo-Pairs:
○ Definition: Two overlapping aerial photographs taken from slightly
different perspectives.
○ Purpose: Create a three-dimensional view of the terrain, allowing
for depth perception and accurate measurement of features.

● Parallax:
○ Definition: Apparent shift in the position of objects when viewed
from different angles.
○ Impact: Enhances the perception of elevation and depth in
stereo-pairs.
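Under standard photogrammetric assumptions (vertical photographs over a flat datum), parallax differences translate into object heights via h = H·Δp / (b + Δp), where H is the flying height above the datum and b the absolute (photo-base) parallax. A minimal sketch with illustrative values, not measurements from the document:

```python
def height_from_parallax(flying_height_m, base_parallax_mm, dparallax_mm):
    """h = H * dp / (b + dp), for vertical photos over a flat datum."""
    return flying_height_m * dparallax_mm / (base_parallax_mm + dparallax_mm)

# Illustrative: H = 1200 m, photo-base parallax 90 mm, parallax difference 2.3 mm
print(round(height_from_parallax(1200, 90, 2.3), 1))  # about 30 m
```

This is the relation digital stereo-plotters automate at scale when deriving elevation from overlapping photographs.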
Applications of Photogrammetry:

● Topographic Mapping: Creating detailed and accurate topographic maps.
● 3D Modeling: Constructing three-dimensional models of terrain and structures.
● Volume Calculations: Estimating the volume of earthworks, stockpiles,
and other features.

Modern Photogrammetry:

● Digital Stereo-Plotters: Computerized instruments that automate the creation and analysis of stereo-pairs.
● Software Algorithms: Advanced algorithms that enhance the accuracy
and efficiency of photogrammetric processes.

Advantages of Aerial Photography

High Ground Resolution

● Detail Identification: Capable of identifying and mapping very small objects, such as individual trees, vehicles, and buildings.
● Local Surveys: Superior to satellite imagery for detailed local studies due
to higher resolution.

Intuitive Interpretation

● Familiar Visuals: Black-and-white and color photographs resemble human vision, making them easier to interpret, especially for inexperienced users.
● Enhanced Understanding: Visual similarity to real-world scenes
facilitates intuitive analysis.

Flexibility in Image Acquisition


● Customizable Flight Paths: Ability to tailor flight parameters to specific
mapping needs.
● Timely Data Collection: Can be deployed quickly to capture images of
specific events or changes.

Limitations of Aerial Photography

Geometric Distortions

● Central Projection Distortion: Radial (relief) displacement of objects caused by the camera's central perspective projection and terrain relief.
● Platform Instability: Aircraft movement can introduce distortions that
require correction.

Cost and Accessibility

● Operational Costs: Higher costs associated with aircraft operation and maintenance compared to satellite imagery.
● Limited Coverage: Restricted to specific areas during each flight, unlike
satellites which cover broader regions.

Dependency on Weather and Lighting

● Weather Conditions: Cloud cover, fog, and precipitation can obstruct image acquisition.
● Lighting Variations: Shadows and varying illumination can affect image
quality and interpretation.

Comparative Analysis: Aerial Photography vs. Satellite Imagery

Aspect | Aerial Photography | Satellite Imagery
Ground Resolution | Higher (0.25 - 1 meter) | Lower (1 meter to several kilometers)
Coverage Area | Limited to flight path | Extensive, global coverage
Cost | Higher per unit area | Lower per unit area
Flexibility | Highly flexible in scheduling and area | Fixed orbital paths and schedules
Data Acquisition Speed | Faster for small areas | Slower for large areas
Weather Dependency | Sensitive to local weather conditions | Also affected by cloud cover but can cover larger areas quickly

Basic Elements of Photo Interpretation

The following sections detail the primary elements utilized in the identification
and interpretation of features on photographs.

1. Shape

Shape refers to the general form, structure, or outline of an object as seen from
a vertical view on a photograph.

Examples:

● Urban Features: Typically exhibit straight edges and regular geometric


shapes (e.g., buildings, roads).
● Natural Features: Often have irregular, organic shapes (e.g., forests,
rivers).
● Geological Features: Distinct shapes such as oxbow lakes and
meandering rivers aid in their identification (Refer to Figure 3).

2. Size

Size pertains to the scale of objects as depicted on photographs, influenced by the photograph's scale.

Examples:
● Water Bodies: Differentiates a small pond from a large lake.
● Roadways: Distinguishes between minor roads and major highways.
● Urban Structures: Identifies commercial properties (large buildings)
versus residential areas (smaller buildings).

3. Shadows

Shadows are the dark areas cast by objects blocking the light source, providing
clues about the object's profile and relative height.

Examples:

● Buildings: High-rise buildings cast longer shadows compared to low-rise structures.
● Natural Landforms: Shadows can indicate the slope and ruggedness of
terrain features.
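On flat ground, a shadow's length together with the sun's elevation angle gives a rough object height (h = L·tan θ). A small illustrative sketch, with assumptions noted in the comments:

```python
import math

def height_from_shadow(shadow_length_m, sun_elevation_deg):
    # h = L * tan(sun elevation); assumes flat ground and a fully visible shadow
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 20 m shadow under a 45-degree sun implies a roughly 20 m tall object
print(round(height_from_shadow(20.0, 45.0), 1))
```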

4. Tone

Tone refers to the relative whiteness or blackness of objects on a photograph, resulting from the reflectance of light.

Examples:

● Soil Classification: Light tones may represent dry, sandy soils, whereas dark tones indicate moist soils or water bodies.
● Forestry: Differentiates between hardwood (lighter tones) and coniferous
forests (darker tones).

5. Color

Color is the variation in hues observed when objects reflect specific wavelengths of light.

Examples:
● Vegetation: Appears green due to high reflectance in the green
wavelength.
● Water Bodies: Often appear dark blue or green, depending on depth and
composition.
● False Color Composites (FCC): Utilize non-visible wavelengths to
highlight specific features like vegetation health or soil moisture.

6. Texture

Texture describes the "smoothness" or "roughness" of surfaces in a photograph, based on the variation in tonal or color patterns.

Examples:

● Smooth Texture: Calm water surfaces or paved roads appear smooth with uniform tones.
● Rough Texture: Forest canopies or areas with tall grasses and shrubs
exhibit rough textures due to varied tonal changes.

7. Patterns

Patterns refer to the spatial arrangement of phenomena on the Earth's surface, providing clues for feature identification.

Examples:

● Natural Patterns: Randomly arranged unmanaged forests versus evenly spaced orchards.
● Cultural Patterns: Regularly spaced urban grids versus irregular rural
settlements.

8. Relationship/Association

Relationship/Association involves the contextual connections between different groups of objects, providing additional information for accurate interpretation.

Examples:

● Urban Settings: Residential areas are typically sited away from hazardous industrial facilities such as nuclear power plants.
● Natural Settings: Wetlands are often found near rivers, lakes, or
estuaries.
● Infrastructure: Commercial centers are usually located along major roads,
railways, or waterways.

Image Analysis in Remote Sensing

Image analysis involves processing and interpreting remote sensing data to extract meaningful information. Effective image analysis requires a systematic approach, encompassing pre-processing, enhancement, transformation, and classification.

1. Pre-processing

Pre-processing prepares raw remote sensing data for analysis by correcting distortions and enhancing data quality. It includes geometric correction, radiometric correction, and atmospheric correction.

Geometric Corrections

Geometric corrections rectify distortions in remote sensing images caused by sensor-Earth geometry variations and platform movements, ensuring data aligns with real-world coordinates.

● Accuracy: Essential for precise mapping and spatial analysis.
● Integration: Facilitates overlay with other geospatial data.
Correction Techniques:

● Ortho-rectification: Uses DEMs and sensor parameters to correct image geometry.
● Polynomial and Rational Function Models: Applied for rapid geometric correction of hyperspectral images, balancing precision and processing time.
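A first-order polynomial (affine) model of the kind mentioned above can be fitted to ground control points by least squares. The sketch below uses synthetic GCPs that follow an exact affine mapping, so the fit recovers it; all coordinates are made up for illustration:

```python
import numpy as np

# Synthetic ground control points: (col, row) pixel coords that map to (E, N)
# map coords through an exact affine relation, so the fit recovers it.
pixel = np.array([[10.0, 10.0], [500.0, 12.0], [15.0, 480.0], [505.0, 495.0]])
world = np.array([3000 + 0.5 * pixel[:, 0] - 0.1 * pixel[:, 1],
                  7000 - 0.5 * pixel[:, 1] + 0.05 * pixel[:, 0]]).T

# First-order polynomial (affine) model: [E, N] = [col, row, 1] @ coeffs
G = np.hstack([pixel, np.ones((len(pixel), 1))])
coeffs, *_ = np.linalg.lstsq(G, world, rcond=None)

# Georeference an arbitrary pixel
print(np.array([250.0, 250.0, 1.0]) @ coeffs)
```

Real workflows use more GCPs than unknowns and report the residuals (RMSE) as a quality check; higher-order polynomials or rational functions follow the same fitting pattern with more terms.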

Radiometric Corrections

Radiometric corrections adjust the digital numbers (DN) of remote sensing images to account for sensor irregularities and atmospheric conditions, ensuring accurate reflectance or irradiance values.

● Data Consistency: Ensures comparability across different images and sensors.
● Thematic Mapping: Critical for accurate biomass and land cover mapping.

Correction Methods:

● Holomorphic and Heteromorphic Calibration: Address terrain and canopy variations.
● Fmask Algorithm: Detects clouds and cloud shadows so that affected pixels can be masked or corrected.
● Tanré's Formulation: Models aerosol scattering for atmospheric adjustment.
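A common first step in radiometric correction is rescaling raw digital numbers to top-of-atmosphere reflectance with a sensor-specific gain and offset, followed by a simple sun-angle normalization. The gain/offset values below are placeholders, not real calibration constants (those come from the image metadata):

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, sun_elev_deg):
    """Linear DN-to-reflectance rescale, then a simple sun-angle correction."""
    rho = gain * np.asarray(dn, float) + offset
    return rho / np.sin(np.radians(sun_elev_deg))

# Placeholder gain/offset -- real values come from the sensor's metadata
dn = np.array([[7000, 7500], [8000, 9000]])
rho = dn_to_toa_reflectance(dn, gain=2.0e-5, offset=-0.1, sun_elev_deg=60.0)
print(rho.round(4))
```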

Atmospheric Corrections

Atmospheric corrections compensate for the absorption and scattering of electromagnetic radiation by the atmosphere, ensuring that the measured reflectance accurately represents the Earth's surface.

● Data Accuracy: Essential for precise spectral analysis and classification.
● Path Radiance Removal: Eliminates atmospheric path radiance to enhance surface feature detection.
Correction Techniques:

● Cloud-Shadow Atmospheric Correction: Corrects pixels with similar optical properties (e.g., clouds, shadows).
● Phase Delay Correction in SAR: Addresses atmospheric phase delay
effects in Synthetic Aperture Radar (SAR) imagery.
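One of the simplest path-radiance removal methods is dark object subtraction: the darkest pixel in a band is assumed to owe its entire signal to atmospheric path radiance, which is then subtracted from every pixel. A minimal numpy sketch with toy DN values:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the darkest DN, assumed to be pure atmospheric path radiance."""
    band = np.asarray(band, float)
    return np.clip(band - band.min(), 0, None)

band = np.array([[52, 60], [55, 120]])
print(dark_object_subtraction(band))  # darkest pixel drops to 0
```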

Image Enhancement

Image enhancement improves the visual quality of remote sensing images, making them easier to interpret and analyze. Enhancement techniques manipulate pixel values to highlight specific features or improve overall image clarity.

1. Radiometric Enhancement

Radiometric enhancement techniques adjust the brightness and contrast of images to emphasize certain features or details.

Techniques:

● Tone-Mapping Algorithms: Enhance bright and shadow regions to reveal minor details.
● Radiative Transfer Models: Improve image contrast by accounting for bidirectional reflectance distribution functions (BRDF).
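A basic radiometric enhancement is a linear contrast stretch that maps a chosen percentile range of pixel values onto the full display range. A sketch assuming 8-bit output:

```python
import numpy as np

def linear_stretch(img, low_pct=2, high_pct=98):
    """Map the low-high percentile range onto 0-255 to boost contrast."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (np.asarray(img, float) - lo) / (hi - lo)
    return np.clip(out * 255, 0, 255).astype(np.uint8)

img = np.arange(100).reshape(10, 10)  # a low-contrast toy "band"
print(linear_stretch(img).min(), linear_stretch(img).max())  # 0 255
```

Clipping the extreme 2% at each end keeps a few outlier pixels from compressing the rest of the histogram.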

2. Spatial Enhancement

Spatial enhancement modifies the spatial frequency components of an image to improve its sharpness and detail.

Techniques:

● Convolution Filters: Apply mathematical operations to emphasize edges and textures.
● Resolution Merging: Combine multiple images to enhance spatial
resolution.
● Edge and Texture Filters: Highlight specific spatial features like roads or
forest boundaries.
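The convolution filters above can be illustrated with a hand-rolled 3×3 convolution and a Laplacian kernel, which responds only where tones change (i.e., at edges); the arrays are toy data:

```python
import numpy as np

def convolve3x3(img, kernel):
    """Valid-mode 3x3 convolution, the core of spatial enhancement filters."""
    img = np.asarray(img, float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
flat = np.full((5, 5), 10.0)            # uniform area: no edges
edge = flat.copy(); edge[:, 3:] = 50.0  # vertical step edge
print(convolve3x3(flat, laplacian))     # all zeros -- nothing to enhance
```

Production code would use an optimized library convolution, but the arithmetic is exactly this.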

3. Spectral Enhancement

Spectral enhancement involves creating new spectral data from existing bands
to highlight specific features or properties.

Techniques:

● Spectral Ratios and Indices: Combine multiple bands to emphasize vegetation health (e.g., NDVI).
● Principal Components Analysis (PCA): Reduces dimensionality while
retaining essential spectral information.
● Tasseled Cap Transformation: Enhances contrast in brightness,
greenness, and wetness.
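NDVI, the index cited above, is computed as (NIR − Red) / (NIR + Red). A minimal sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red); healthy vegetation approaches +1."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero

print(ndvi(0.05, 0.45))  # dense vegetation, ~0.8
```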

4. Geometric Enhancement

Geometric enhancement modifies the spatial relationships and geometric details within an image to improve feature visibility and analysis.

Techniques:

● Edge Detection and Enhancement: Sharpen image boundaries to highlight structures.
● Smoothing and Filtering: Reduce noise while preserving essential
geometric features.
● Deconvolution Models: Enhance spatial features by reversing the effects
of image blurring.

Image Transformation

Image transformation involves mathematical manipulation of image data to
derive new information or highlight specific features. Transformations can be
applied to single or multiple bands and are fundamental for various analytical
tasks.

Key Techniques:

● Basic Arithmetic Operations:
○ Addition: Reduces noise by combining multiple images.
○ Subtraction: Detects changes by highlighting differences between
images.
○ Multiplication and Division: Extracts regions of interest or assesses
magnitude differences.

● Advanced Techniques:
○ Principal Components Analysis (PCA): Reduces dimensionality,
emphasizing variance in data.
○ Normalized Difference Vegetation Index (NDVI): Highlights
vegetation health by comparing red and NIR bands.
○ Spectral Unmixing: Separates mixed pixel signals into constituent
spectral signatures.
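Band subtraction for change detection, as described above, reduces to an array difference plus a threshold. The reflectance values and the 0.1 threshold below are illustrative:

```python
import numpy as np

# Two co-registered NIR bands of the same area at different dates (toy values)
date1 = np.array([[0.40, 0.42], [0.38, 0.41]])
date2 = np.array([[0.39, 0.15], [0.37, 0.40]])

diff = date2 - date1          # subtraction highlights differences
changed = np.abs(diff) > 0.1  # a threshold separates change from noise
print(changed)
```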

Image Classification and Analysis

Image classification categorizes all pixels in a remote sensing image into predefined classes based on their spectral and spatial characteristics. Classification is essential for creating thematic maps and conducting spatial analysis.

1. Supervised Classification

Supervised classification involves training a classifier using known sample sites to assign labels to all pixels in an image based on the learned parameters.

Process:

A. Training Phase:
a. Select representative sample sites for each class.
b. Extract spectral and spatial features from these samples.
c. Develop classification models using algorithms like Maximum
Likelihood Classification (MLC).
B. Classification Phase:
a. Apply the trained classifier to the entire image.
b. Assign class labels to each pixel based on the classifier's output.
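The two phases above can be sketched with a toy maximum-likelihood classifier. This simplified version assumes Gaussian classes with diagonal covariance (full MLC uses the complete covariance matrix); the class names and sample values are made up:

```python
import numpy as np

def train_ml(samples):
    """Training phase: per-class mean/variance from labeled sample pixels."""
    return {c: (x.mean(0), x.var(0) + 1e-9) for c, x in samples.items()}

def classify(pixels, model):
    """Classification phase: pick the class with the highest Gaussian log-likelihood."""
    labels, scores = list(model), []
    for mu, var in model.values():
        scores.append(-0.5 * np.sum((pixels - mu) ** 2 / var + np.log(var), axis=1))
    return [labels[i] for i in np.argmax(scores, axis=0)]

# Hypothetical (red, NIR) training samples for two classes
train = {"water": np.array([[0.05, 0.02], [0.06, 0.03]]),
         "veg":   np.array([[0.05, 0.45], [0.04, 0.50]])}
model = train_ml(train)
print(classify(np.array([[0.05, 0.48], [0.06, 0.02]]), model))  # ['veg', 'water']
```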

2. Unsupervised Classification

Unsupervised classification groups pixels into clusters based on their spectral properties without prior knowledge of class labels.

Process:

1. Clustering Phase:
○ Determine the number of clusters.
○ Group pixels into clusters based on spectral similarity using
algorithms like K-means or Expectation Maximization.
2. Post-Classification:
○ Assign class labels to clusters based on contextual and ancillary
information.
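The clustering phase can be sketched with a minimal K-means implementation; the post-classification step of naming each cluster remains an analyst-driven task. The pixel values are toy (red, NIR) pairs:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-means: group pixels into k clusters by spectral similarity."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(1)
        # Recompute centers as the mean of their assigned pixels
        centers = np.array([pixels[labels == c].mean(0) for c in range(k)])
    return labels

pix = np.array([[0.05, 0.02], [0.06, 0.03], [0.05, 0.45], [0.04, 0.50]])
labels = kmeans(pix, 2)
print(labels)  # two spectral clusters; the analyst names them afterwards
```

Library implementations add convergence checks and guard against empty clusters, but the assign/update loop is the whole algorithm.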
