
Unit 3. Digital Image Restoration and Registration

• Image rectification and restoration procedures are often termed preprocessing operations because they normally precede further manipulation and analysis of the image data to extract specific information.


• Image Restoration:
• A process which aims to reverse known degradation on images.
• Remove effects of the sensing environment.
• Remove distortion from the image, to go back to the "original" undegraded scene.


• ------Image rectification and restoration-------
• It involves the initial processing of raw image data to:
• correct geometric distortions,
• calibrate the data radiometrically, and
• eliminate noise present in the data.
• Thus, the nature of any particular image restoration process is highly dependent upon the characteristics of the sensor used to acquire the image data.


• The purpose of image restoration is to "compensate for" or "undo" defects which degrade an image.
• Degradation comes in many forms, such as:
• motion blur,
• noise, and
• camera mis-focus.

Figure: examples of image noise.
Image restoration and registration methods
1. Radiometric correction method,
2. Atmospheric correction method,
3. Geometric correction methods.


1. Radiometric correction method
• Radiometric errors are caused by imbalance of detected EME and atmospheric deficiencies.
• When image data are recorded by sensors on satellites and aircraft, they can contain errors in the measured brightness values of the pixels.
• Radiometric corrections are transformations on the data in order to remove these errors.


• They are done to improve the visual appearance of the image.
• Radiometric error in remotely sensed data may be introduced by the sensor system itself when the individual detectors do not function properly or are improperly calibrated.


• It includes correcting the data for:
• sensor irregularities,
• unwanted sensor noise, and
• atmospheric noise.
• Radiometric correction is a process that improves the quality and accuracy of remote sensing images by removing or reducing the effects of atmospheric, sensor, and illumination factors.
• Radiometric correction is done to calibrate the pixel values and/or correct for errors in the values. The process improves the interpretability and quality of remotely sensed data. Radiometric calibration and corrections are particularly important when comparing multiple data sets over a period of time.


• The recorded values get distorted due to one or more of the following factors:
– sensor ageing
– random malfunctioning of the sensor elements
– atmospheric interference at the time of image acquisition and
– topographic effects.
Some of the commonly observed systematic radiometric errors are (a repair sketch for the first follows this list):
– random bad pixels: sometimes an individual detector does not record the received signal for a pixel; this is called shot noise. These pixels are removed by identifying values (e.g., 0 or 255) that differ sharply from neighbouring pixel values.
– line or column drop-outs: a blank row containing no details of features on the ground.
– line start problems: the scanner fails to start recording as soon as a new row starts, and the sensor places pixel data at inappropriate locations along the scan line.
– n-line striping: sometimes a detector does not fail completely, but its calibration parameters (gain and offset/bias) are disturbed, producing bad lines.

• E.g. of radiometric error:
• Striping (thin line) noise is an anomaly commonly seen in remote-sensing imagery and other geospatial data sets in raster formats.
• Any image in which individual detectors appear lighter or darker than their neighboring detectors is said to have striping, e.g., Landsat 7 imagery from 2010.


• Radiance measured by a sensor at a given point is influenced by:
• changes in illumination
• atmospheric conditions (haze, clouds, …)
• angle of view
• objects' response characteristics
• elevation of the sun (seasonal change in sun angle)
• Earth–sun distance variation


• Radiometric correction includes:
– applying sensor calibration
– replacing missing scan lines
– de-striping
– applying atmospheric correction


Figure: random noise correction.
Figure: correction for periodic line striping (de-striping).
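A sketch of one simple de-striping approach (an assumed illustration; the slides do not prescribe a specific algorithm): each scan line's mean and standard deviation are adjusted to match the global image statistics, correcting a disturbed gain and offset:

import numpy as np

def destripe_rows(band):
    band = band.astype(np.float32)
    g_mean, g_std = band.mean(), band.std()        # global statistics
    row_mean = band.mean(axis=1, keepdims=True)    # per-line statistics
    row_std = band.std(axis=1, keepdims=True) + 1e-6
    gain = g_std / row_std                         # corrective gain
    offset = g_mean - gain * row_mean              # corrective offset/bias
    return np.clip(gain * band + offset, 0, 255)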
2. Atmospheric correction method
• Atmospheric effects refer to the presence of gas absorption and molecule and aerosol scattering, which can influence incident and reflected radiation; the atmosphere can have a high impact on the reflectance values of images (especially those taken from space).
• The value recorded at any pixel location on the remotely sensed image is not a record of the true ground-leaving radiance at that point.
• The signal is weakened due to absorption and scattering.
• The atmosphere has an effect on the measured brightness value of a pixel.
• Atmospheric path radiance introduces haze in the image, thereby decreasing the contrast of the data.
• Atmospheric correction is the process of removing the scattering and absorption effects of the atmosphere on the reflectance values of images taken by satellite or airborne sensors.


• The objective of atmospheric correction is to determine true surface reflectance values by removing atmospheric effects from images.
• Atmospheric correction removes the scattering and absorption effects of the atmosphere to obtain the surface reflectance properties.
Causes of atmospheric error:
• changes in the atmosphere,
• sun illumination, and
• viewing geometries during image capture.
• These can impact data accuracy, resulting in distortions that hinder automated information extraction and change detection processes.
• Haze (fog, and other atmospheric phenomena) is a main degradation of outdoor images, weakening both colors and contrasts.
• Haze removal algorithms are used to improve the visual quality of an image which is affected by light scattering through haze particles.


• Atmospheric correction algorithms use mathematical models to estimate the atmospheric effects and remove them from the imagery.
• The goal of atmospheric correction is to retrieve accurate and reliable information about the Earth's surface, such as land cover, vegetation, and water quality.
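As one concrete illustration, a sketch of dark-object subtraction, a classic simple haze correction (an assumed example, not a method the slides name): the darkest DN in a band is taken as an estimate of path radiance and subtracted everywhere:

import numpy as np

def dark_object_subtraction(band, percentile=0.1):
    # Estimate haze (path radiance) from the darkest pixels in the band.
    haze = np.percentile(band, percentile)
    # Subtract the haze estimate and clip negatives to zero.
    return np.clip(band.astype(np.float32) - haze, 0, None)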
Figure: atmospheric correction.
3. Geometric correction methods
• The transformation of a remotely sensed image into a map with scale and projection properties is called geometric correction.
• Geometric corrections attempt to correct for positional errors and to transform the original image into a new image that has the geometric characteristics of a map.


• It includes correcting for geometric distortions due to sensor–Earth geometry variations, and conversion of the data to real-world coordinates (e.g. latitude and longitude) on the Earth's surface.
• Geometric correction is undertaken to remove geometric distortions from a distorted image, and is achieved by establishing the relationship between the image coordinate system and the geographic coordinate system using calibration data of the sensor, measured data of position and attitude, and ground control points.


• Geometric Correction
• Raw digital images usually contain geometric distortions, so they cannot be used directly as a map base without subsequent processing.
• The sources of these distortions range from:
o variations in the altitude, attitude, and velocity of the sensor platform
o to factors such as earth curvature, atmospheric refraction, relief displacement, and nonlinearities in the sweep of a sensor's IFOV.


• Geometric errors may be due to a variety of factors, including one or more of the following, to name only a few:
• the perspective of the sensor optics,
• the motion of the scanning system,
• the motion and (in)stability of the platform,
• the platform altitude, attitude, and velocity,
• the terrain relief, and
• the curvature and rotation of the Earth.


• Geometric correction is essential because, if geometric distortions in an image are not removed, it may not be possible to:
• relate features of the image to field data,
• compare two images taken at different times and carry out change analysis,
• obtain accurate estimates of the area of different regions in the image, and
• relate, compare and integrate the image with any other spatial data.
• The geometric registration process involves identifying the image coordinates of several clearly visible points, called ground control points, in the distorted image and matching them to their true positions in ground coordinates (e.g. latitude, longitude).


Geometric Restoration Methods
• Georeferencing a digital image,
• Image-to-map rectification,
• Image-to-image registration,
• Spatial interpolation using coordinate transformations,
• Relief displacement,
• Geometric correction with ground control points (GCPs),
• Geocoding (resampling and interpolation).


1. Georeferencing a digital image
• Georeferencing is a method to define an image's existence in physical space.
• This process is completed by selecting pixels in the digital image and assigning them geographic coordinates.
• It is a method of assigning ground location values to an image.
• This involves the calculation of the appropriate transformation from image to ground coordinates.
• It is used when establishing the relation between raster or vector data and coordinates, and when determining the spatial location of other geographical features.
2. Image-to-map rectification refers to the transformation of an image coordinate system to a map coordinate system resulting from a particular map projection.
• It is the method of rectification in which the geometry of the imagery is made planimetric. The image-to-map rectification process normally involves selecting GCP image pixel coordinates (row and column) together with their map coordinate counterparts (e.g., meters northing and easting in a Universal Transverse Mercator map projection).


Figure: image-to-map rectification. On the left is the satellite imagery; on the right, a rectified toposheet is shown.
3. Image-to-image registration refers to transforming one image coordinate system into another image coordinate system.
• This technique includes the process by which two images of a common area are positioned coincident with respect to one another, so that corresponding elements of the same ground area appear in the same place on the registered images.
4. Spatial Interpolation Using Coordinate Transformations
• A spatial transformation of an image is a geometric transformation of the image coordinate system.
• A coordinate transformation brings spatial data into an Earth-based map coordinate system so that each data layer aligns with every other data layer.


5. Relief displacement
• This is the radial distance between where an object appears in an image and where it actually should be according to a planimetric coordinate system.
• The images of ground positions are shifted or displaced due to terrain relief in the central projection of an aerial photograph.
6. Geometric correction with ground control points (GCPs)
• Ground control points are large marked targets on the ground, spaced strategically throughout your area of interest.
• GCPs are defined as points on the surface of the earth of known location.
• GCPs help to ensure that the latitude and longitude of any point on your map correspond accurately with actual GPS coordinates.
A number of GCPs are defined on each of the images you want to correct (a transformation-fitting sketch follows this list). The best GCPs are:
– road intersections,
– airport runways,
– edges of dams or buildings,
– corners of agricultural fields, and
– other permanent features,
which are easily identifiable both in the image and on the ground.
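A minimal sketch of estimating a first-order (affine) image-to-map transformation from GCP pairs by least squares; the GCP coordinates below are hypothetical:

import numpy as np

def fit_affine(img_xy, map_xy):
    # Least-squares fit of map = [x, y, 1] @ A for each GCP (n >= 3).
    n = img_xy.shape[0]
    G = np.hstack([img_xy, np.ones((n, 1))])   # design matrix
    A, *_ = np.linalg.lstsq(G, map_xy, rcond=None)
    return A                                    # 3 x 2 coefficient matrix

# Hypothetical GCPs: image (col, row) vs. map (easting, northing).
img = np.array([[10, 20], [400, 35], [380, 500], [15, 480]], float)
utm = np.array([[500000, 4423000], [511700, 4422500],
                [511100, 4408600], [500300, 4409000]], float)
A = fit_affine(img, utm)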


7. Geocoding
• This step involves resampling the image to obtain a new image in which all pixels are correctly positioned within the terrain coordinate system.
• Resampling is used to determine the digital values to place in the new pixel locations of the corrected output image.
• Resampling
• The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image.
• There are three common methods for resampling:
1. Nearest Neighbour,
2. Bilinear Interpolation, and
3. Cubic Convolution.
• Nearest Neighbour
• Nearest neighbour resampling uses the digital value from the pixel in the original image which is nearest to the new pixel location in the corrected image.
• This is the simplest method and does not alter the original values, but may result in some pixel values being duplicated while others are lost.
• Bilinear Interpolation
• Bilinear interpolation resampling takes a weighted average of the four pixels in the original image nearest to the new pixel location.
• The averaging process alters the original pixel values and creates entirely new digital values in the output image.


• Cubic Convolution
• Cubic convolution resampling goes even further, calculating a distance-weighted average of a block of sixteen pixels from the original image which surround the new output pixel location.
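A sketch of the first two resampling rules, assuming (x, y) is a (column, row) location in the original band computed from the inverse geometric transformation:

import numpy as np

def nearest_neighbour(band, x, y):
    # Take the DN of the single closest original pixel (values unaltered).
    return band[int(round(y)), int(round(x))]

def bilinear(band, x, y):
    # Distance-weighted average of the four surrounding pixels (new DNs).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * band[y0, x0] +
            dx * (1 - dy) * band[y0, x0 + 1] +
            (1 - dx) * dy * band[y0 + 1, x0] +
            dx * dy * band[y0 + 1, x0 + 1])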


Figure: sample geometric distortions.
3.2. Image processing
Digital Image Processing
• In order to process remote sensing imagery digitally, the first requirement is that the data must be recorded and made available in a digital form, suitable for storage on a computer tape or disk.
• The other requirement for digital image processing is a computer system, sometimes referred to as an image analysis system, with the appropriate hardware and software to process the data.
• Several commercially available software systems have been developed specifically for remote sensing image processing and analysis.


 Digital image processing refers to processing digital images by means of a digital computer.
 Digital image processing is the manipulation of digital data with the help of computer hardware and software to produce digital maps in which specific information has been extracted and highlighted.


• Digital processing and analysis is carried out automatically, identifying targets and extracting information without manual intervention by a human interpreter. Often, it is done to supplement and assist the human analyst.
• Manual interpretation requires little specialized equipment, while digital analysis requires specialized and expensive equipment.
• Manual interpretation is often limited to analyzing only a single image at a time due to the difficulty in performing visual interpretation with multiple images.
• The computer environment is more amenable to handling complex images of many channels or from several dates.


• Digital image processing techniques are used extensively to manipulate satellite imagery for:
• terrain classification and
• meteorology.
Remotely sensed raw data generally contain errors and deficiencies received from the imaging sensor.
The correction of shortcomings and removal of errors in the data through such methods is termed pre-processing.
Raw remotely sensed image data contain faults, and correction is required prior to image processing.

Image Processing
• It is enhancing an image or extracting information from an image.
• It is analyzing and manipulating images with a computer for information extraction.
• Digital image processing is the manipulation of digital data with the help of computer hardware and software to produce digital maps in which specific information has been extracted and highlighted.
• Digital image processing is the task of processing and analyzing digital data using image processing algorithms.


• Digital image processing is a branch in which both the input and output of a process are images.
 Image processing generally involves three steps:
• Import an image with an optical scanner or directly through digital photography.
• Manipulate or analyze the image in some way.
 This stage can include image enhancement, or the image may be analyzed to find patterns that aren't visible to the human eye.
• Output the result.
 The result might be the image altered in some way, or it might be a report based on analysis of the image.


Advantages of Digital Image Processing
• Digital image processing has a number of advantages over the conventional visual interpretation of remote sensing imagery, such as increased efficiency and reliability, and a marked decrease in costs.


Efficiency
• Owing to the improvement in computing capability, a huge amount of data can be processed quickly and efficiently. A task that used to take days or even months for a human interpreter to complete can be finished by the machine in a matter of seconds. This process is sped up further if the processing is routinely set up.
• Computer-based processing is even more advantageous than visual interpretation for multiple bands of satellite data.


Flexibility
• Digital analysis of images offers high flexibility. The same processing can be carried out repeatedly using different parameters to explore the effect of alternative settings. If a classification is not satisfactory, it can be repeated with different algorithms or with updated inputs in a new trial. This process can continue until the results are satisfactory. Such flexibility makes it possible to produce results not only from satellite data that are recorded at one time only, but also from data that are obtained at multiple times or even from different sensors.


Reliability
• Unlike the human interpreter, the computer's performance in an image analysis is not affected by the working conditions and the duration of analysis. In contrast, the results obtained by a human interpreter are likely to deteriorate owing to mental fatigue after the user has been working for a long time, as the interpretation process is highly demanding mentally. By comparison, the computer can produce the same results with the same input no matter who is performing the analysis.


Portability
• As digital data are widely used in the geoinformatics community, the results obtained from digital analysis of remote sensing data are seldom an end product in themselves. Instead, they are likely to become a component in a vast database. Digital analysis means that all processed results are available in the digital format.
• Digital results can be shared readily with other users who are working in a different, but related, project.
• These results are fully compatible with other existent data that have been acquired and stored in the digital format already.
• The results of digital analysis can be easily exported to a GIS for further analysis, such as spatial modeling, land cover change detection, and studying the relationship between land cover change and socioeconomic factors (e.g., population growth).


Disadvantages of Digital Image Processing
• Digital image analysis has four major disadvantages, the critical ones being the initial high costs in setting up the system and limited classification accuracy:
• High setup costs
• Limited accuracy
• Complexity
• Limited choices: all image processing systems are tailored for a certain set of routine applications.


Purpose of Image Processing
The purpose of image processing is divided into 5 groups:
1. Visualization - observe the objects that are not visible.
2. Image sharpening and restoration - to create a better image.
3. Image retrieval - seek the image of interest.
4. Image recognition - distinguish objects in an image.
5. Measurement of pattern - measure various objects in an image.


Digital image processing functions
• Most of the common image processing functions available in image analysis systems can be categorized into the following 4 categories:
1. Preprocessing (image rectification and restoration)
2. Image Enhancement
3. Image Classification and Analysis
4. Data Merging and GIS Interpretation


1. Image Rectification and Restoration (or Preprocessing)
o These are corrections needed for distortions in the raw data; radiometric and geometric correction are applicable here.
Pre-processing is an operation which takes place before further manipulation and analysis of the image data to extract specific information.
These operations aim to correct distorted or degraded image data to create a correct representation of the original scene.
This process corrects the data for sensor irregularities by removing unwanted sensor distortion or atmospheric noise.


2. Image Enhancement
o This is used to improve the appearance of imagery and to assist visual interpretation and analysis. It involves techniques for increasing the visual distinction between features by improving the tone of various features in a scene.
3. Image Classification
• The objective of classification is to replace visual analysis of the image data with quantitative techniques for automating the identification of features in a scene.
4. Data Merging and GIS Interpretation
• These procedures are used to combine image data for a given geographic area with other geographically referenced data sets for the same area.


Generally, Image Processing Includes
• Image quality and statistical evaluation
• Radiometric correction
• Geometric correction
• Image enhancement and sharpening
• Image classification
• Pixel based
• Object-oriented based
• Accuracy assessment of classification
• Post-classification and GIS
• Change detection
 Why Image Processing?
For Human Perception: to make images more beautiful or understandable.
Automatic Perception of Images: we call it machine vision, computer vision, machine perception, or computer recognition.
For Storage and Transmission: smaller, faster, more effective.
For New Image Generation (new trends).

Fundamental steps in Digital Image Processing
Image acquisition
Image enhancement
Image restoration
Color image processing
Image compression
Image segmentation
Representation and description
Recognition
Chapter-4
Image Enhancement
4.1. Image Enhancement
• Image enhancement is the procedure of improving the quality and information content of the original data before processing.


• Image enhancement algorithms are commonly applied to remotely sensed data to improve the appearance of an image, and a new enhanced image is produced.
• Image enhancement is the modification of an image to alter its impact on the viewer.
• Enhancements are used to make an image easier for visual interpretation and understanding of imagery.
• The enhanced image is generally easier to interpret than the original image.

Examples: Image Enhancement
• One of the most common uses of DIP techniques: improve quality, remove noise, etc.


• The enhancement process does not increase the inherent information content in the data, but it does increase the dynamic range of the chosen features so that they can be detected easily.
• Image enhancement refers to sharpening of image features such as edges, boundaries, or contrast to make a graphic display more useful for analysis.


• Image enhancement refers to the process of highlighting certain information in an image, as well as weakening or removing any unnecessary information according to specific needs.
• For example:
• eliminating noise,
• sharpening or brightening an image,
• revealing blurred details, and
• adjusting levels to highlight features of an image.


• Generally, enhancement is employed to:
• emphasize,
• sharpen and
• smooth image features for display and analysis.


• Types of Image Enhancement Techniques:
• The aim of image enhancement is to improve the interpretability or perception of information in images for human viewers.
• Basic image enhancement methods are:
1. Contrast enhancement
2. Density slicing
3. Frequency filtering / spatial enhancement
4. Band ratioing / spectral enhancement


– Contrast Enhancement - maximizes the performance of the image for visual display.
– Spatial Enhancement - increases or decreases the level of spatial detail in the image.
– Spectral Enhancement - makes use of the spectral characteristics of different physical features to highlight specific features.
1. Contrast Enhancement
• Contrast enhancement involves increasing the contrast between targets and their backgrounds.
• Generally, the term "contrast" refers to the separation of dark and bright areas present in an image.
• In raw imagery, the useful data often populate only a small portion of the available range of digital values (8 bits or 256 levels).
• It stretches spectral reflectance values.
• Stretching is performed by a linear transformation expanding the original range of gray levels.


• The contrast enhancement technique plays a vital role in image processing, bringing out the information that exists within the low dynamic range of a gray level image.
• To improve the quality of an image, it is necessary to perform operations like contrast enhancement and reduction or removal of noise.
• The key to understanding contrast enhancement is to understand the concept of an image histogram.


• A histogram is a graphical representation of the brightness values that comprise an image.
• The brightness values (i.e. 0-255) are displayed along the x-axis of the graph.
• The frequency of occurrence of each of these values in the image is shown on the y-axis.


• A histogram shows the statistical frequency of data distribution in a dataset.
• In the case of remote sensing, the data distribution is the frequency of the pixels in the range of 0 to 255, which is the range of the 8-bit numbers used to store image information on computers.
• This histogram is a graph showing the number of pixels in an image at each different intensity value found in that image.


Techniques of contrast enhancement
• Linear contrast enhancement
• Histogram-equalized stretch
a. Linear contrast stretch
• It is the simplest type of enhancement technique.
• This involves identifying the lower and upper bounds from the histogram and applying a transformation to stretch this range to fill the full range.
Figure: image before and after the linear stretch.
------------Linear contrast stretch-------------
• This method enhances the contrast in the image, with light-toned areas appearing lighter and dark areas appearing darker, making visual interpretation much easier.
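A minimal sketch of a linear (percentile) stretch, assuming an 8-bit single-band NumPy array:

import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    # Identify lower and upper bounds from the histogram (percentiles).
    lo, hi = np.percentile(band, (low_pct, high_pct))
    # Linearly map [lo, hi] onto the full 0-255 display range.
    out = (band.astype(np.float32) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)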
b. Histogram-equalized stretch
• A uniform distribution of the input range of values across the full range.
• Histogram equalization is an effective image enhancement algorithm.
• Histogram equalization is a technique for adjusting image intensities to enhance contrast.
• This allows areas of lower local contrast to gain a higher contrast.
• Histogram Equalization
• Histogram equalization is a technique for adjusting image intensities to enhance contrast. It is not necessary that contrast will always be increased by this.
• The histogram of an image represents the relative frequency of occurrence of grey levels within the image.
• Histogram equalization is an image processing technique that adjusts the contrast of an image by using its histogram.
• To enhance the image's contrast, it spreads out the most frequent pixel intensity values, or stretches out the intensity range of the image.
• This allows areas of lower local contrast to gain a higher contrast.
• The original image and its histogram can be compared with the equalized versions; both images are quantized to the same grey levels.
• The histogram of an image represents the relative frequency of occurrence of grey levels within the image.
• To enhance the image's contrast, equalization spreads out the most frequent pixel intensity values, stretching the intensity range throughout the image.
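A sketch of histogram equalization via the cumulative histogram, assuming an 8-bit band:

import numpy as np

def equalize(band):
    # Histogram of the 256 possible DNs.
    hist = np.bincount(band.ravel(), minlength=256)
    # Normalized cumulative distribution function (CDF).
    cdf = hist.cumsum().astype(np.float32)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    # Map each DN through the CDF to spread frequent values apart.
    lut = (cdf * 255).astype(np.uint8)
    return lut[band]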


2. Density slicing
• This technique is normally applied to a single-band monochrome image for highlighting areas that appear to be uniform in an image.
• Density slicing converts the continuous gray tone range into a series of density intervals, each marked by a separate color or symbol to represent different features.
o Mapping a range of adjacent grey levels of a single band to a single level and color.
o Each range of levels is called a slice.
• Grayscale values (0-255) are converted into a series of intervals, or slices, and different colors are assigned to each slice.
• Density slicing is often used to highlight variations in features.
----Density slicing---
o The range of 0-255 is normally converted to several slices.
o Effective for highlighting different but homogeneous areas within an image.
o Effective if slice boundaries/colors are carefully chosen (a slicing sketch follows this list).
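A minimal slicing sketch: each pixel is assigned the index of the slice its DN falls in; the colour table below is hypothetical and maps slice indices to display colours:

import numpy as np

def density_slice(band, boundaries=(64, 128, 192)):
    # Slice index per pixel: 0 for DN < 64, 1 for 64-127, and so on.
    return np.digitize(band, boundaries)

# Hypothetical colour table: one RGB colour per slice.
colours = np.array([[0, 0, 128], [0, 128, 0],
                    [200, 200, 0], [160, 0, 0]], np.uint8)
# colour_image = colours[density_slice(band)]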


3. Spatial Filtering
• Filters are used to emphasize or deemphasize spatial information contained in the image.
• The processed value for the current pixel depends on both itself and the surrounding pixels.
• Hence filtering is a neighborhood operation, in which the value of any pixel in the output image is determined by applying some algorithm to the values of the pixels in the neighborhood of the corresponding input pixel.
• A pixel's neighborhood is some set of pixels, defined by their locations relative to that pixel.

• Image sharpening
• The main aim in image sharpening is to highlight fine detail in the image, or to enhance detail that has been blurred (perhaps due to noise or other effects, such as motion).
• With image sharpening, we want to enhance the high-frequency components.
• The basic filters that can be used in the frequency domain are low pass filters and high pass filters.
• A. Low pass filter: low pass filtering involves the elimination of high-frequency components from the image, reducing the sharp transitions that are associated with noise.
• Low pass filters can reduce the amplitude of high-frequency components and can also eliminate the effects of high-frequency noise.
• A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image.
• This serves to smooth the appearance of an image.
• (By analogy with electronics, a low-pass filter (LPF) is a circuit that only passes signals below its cutoff frequency while attenuating all signals above it. It is the complement of a high-pass filter, which only passes signals above its cutoff frequency and attenuates all signals below it.)
• Low pass filters emphasize large-area changes and de-emphasize local detail.
• Low pass filters are very useful for reducing random noise.
B. High pass filter
• These filters are basically used to make the image appear sharper.
• High pass filtering works in exactly the same way as low pass filtering but uses a different convolution kernel; it emphasizes the fine details of the image.
• High pass filters let the high-frequency content of the image pass through the filter and block the low-frequency content.
• (A high-pass filter is designed to pass all frequencies above its cut-off frequency; in an audio system, it allows high frequencies through while filtering or cutting low frequencies.)
• While a high pass filter can improve the image by sharpening, overdoing this filter can actually degrade the image quality.


• High pass filters emphasize local detail and deemphasize large-area changes.
• Directional, or edge detection, filters are designed to highlight linear features, such as roads or field boundaries. A convolution sketch follows.
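A sketch of both filter types as 3x3 convolution kernels (a mean kernel for low pass, a Laplacian-style kernel for high pass), assuming SciPy is available:

import numpy as np
from scipy.ndimage import convolve

low_pass = np.ones((3, 3)) / 9.0              # 3x3 mean: smooths detail
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], float)   # Laplacian-style: sharpens edges

def filter_band(band, kernel):
    # Neighborhood operation: each output pixel is a weighted sum
    # of the corresponding input pixel and its neighbours.
    return convolve(band.astype(np.float32), kernel, mode='nearest')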


Bandpass Filter
• Unlike the low pass filter, which only passes signals of a low frequency range, or the high pass filter, which passes signals of a higher frequency range, a band pass filter passes signals within a certain "band" or "spread" of frequencies without distorting the input signal or introducing extra noise.
• A band-pass filter is an arrangement of electronic components that allows only those electric waves lying within a certain range, or band, of frequencies to pass and blocks all others.
• Filter circuits can be designed to accomplish this task by combining the properties of low-pass and high-pass filters into a single filter. The result is called a band-pass filter.
4. Band Ratioing (Spectral)
• This often involves taking ratios or other mathematical combinations of multiple input bands to produce a derived index of some sort,
• e.g. the Normalized Difference Vegetation Index (NDVI),
• designed to contrast heavily-vegetated areas with areas containing little vegetation, by taking advantage of vegetation's strong absorption of red and reflection of near infrared:
– NDVI = (NIR - R) / (NIR + R)
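A minimal sketch of this band ratio, assuming the red and near-infrared bands are already loaded as NumPy arrays:

import numpy as np

def ndvi(nir, red):
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    # NDVI = (NIR - R) / (NIR + R); the small constant avoids division by zero.
    return (nir - red) / (nir + red + 1e-6)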


-----Band Ratioing-----
• Image division or spectral ratioing is one of the most common transforms applied to image data.
• Image ratioing serves to highlight variations in the spectral responses of various surface covers.
• Healthy vegetation reflects strongly in the near-infrared portion of the spectrum while absorbing strongly in the visible red.
• Other surface types, such as soil and water, show near-equal reflectances in both the near-infrared and red portions.


5. False Color Composite
• FCC is commonly used in remote sensing, compared to true color, because of the absence of a pure blue color band, since scattering is dominant in the blue wavelength.
• The FCC is standardized because it gives the maximum identifiable information about the objects on Earth. In FCC, vegetation looks red, because vegetation is very reflective in NIR and the color applied is red.
• Water bodies look dark if they are clear or deep, because IR is an absorption band for water. Water bodies give shades of blue depending on their turbidity or shallowness.


• Spatial Convolution Filtering
1. Edge enhancement is an image processing filter that enhances the edge contrast of an image in an attempt to improve its acutance (apparent sharpness).
• Edge enhancement can be either an analog or a digital process.
2. Fourier Transform
• The Fourier transform is a representation of an image as a sum of complex exponentials of varying magnitudes, frequencies, and phases. The Fourier transform plays a critical role in a broad range of image processing applications, including enhancement, analysis, restoration, and compression.
• The Fourier transform is a mathematical model which helps to transform signals between two different domains, such as from the frequency domain to the time domain or vice versa.


• The Fourier transform is a mathematical function that decomposes a waveform, which is a function of time, into the frequencies that make it up. The result produced by the Fourier transform is a complex-valued function of frequency.
• The Fourier transform has many applications in engineering and physics, such as signal processing, RADAR, and so on.


• The main advantage of Fourier analysis is that very little information is lost from the signal during the transformation. The Fourier transform maintains information on amplitude, harmonics, and phase, and uses all parts of the waveform to translate the signal into the frequency domain.
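A minimal sketch of the 2-D discrete Fourier transform of an image band with NumPy (the random array is a placeholder standing in for a real band):

import numpy as np

band = np.random.rand(256, 256).astype(np.float32)  # placeholder band

F = np.fft.fftshift(np.fft.fft2(band))   # spectrum, zero frequency centred
magnitude = np.log1p(np.abs(F))          # log magnitude for display
# Frequency-domain filtering could be applied to F here (e.g., zeroing
# high frequencies); the image is then recovered with the inverse transform:
restored = np.fft.ifft2(np.fft.ifftshift(F)).real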


• Image Magnification
• Digital image magnification is often referred to as zooming.
• This technique is most commonly employed for two purposes: to improve the scale of the image for enhanced visual interpretation, and to match the scale of another image.
• It can be looked upon as a scale transformation.


Unit 5: Digital image analysis and transformation

• Topics to be covered:
 Spatial transformations of an image
 Principal Component Analysis
 Texture transformation
 Image stacking and compositing
 Image mosaicking and sub-setting
 Spectral Vegetation Indices


5.1. Spatial Transformations of an Image
• A spatial transformation of an image is a geometric transformation of the image coordinate system.
• Spatial transformations refer to changes to coordinate systems that provide a new approach.
• In a spatial transformation, each point (x, y) of image A is mapped to a point (u, v) in a new coordinate system.
• A digital image array has an implicit grid that is mapped to discrete points in the new domain.


• It is often necessary to perform a spatial transformation to:
• align images that were taken at different times or by different sensors
• correct images for lens distortion
• correct effects of camera orientation
• perform image morphing or other special effects


• Principal Component Analysis
• PCA is a statistical procedure that allows us to summarize the information content in a large data set by means of a smaller set of "summary indices" that can be more easily visualized and analyzed.
• The new variables/dimensions are linear combinations of the original ones and are uncorrelated with one another.
• The new variables that capture as much of the original variance in the data as possible are called the principal components.


• PCA is a way of identifying patterns in data, and expressing the data so as to highlight their similarities and differences.
• PCA is a technique used to emphasize variation and bring out strong patterns in a dataset: to make data easy to explore and visualize.
• PCA should be used mainly for variables which are strongly correlated.
• If the relationship between variables is weak, PCA does not work well to reduce the data.


• PCA can also be used to compress an image.
• PCA is a technique which transforms the original, highly correlated image data into a new set of uncorrelated variables called principal components.
• Each principal component is called an eigenchannel.


• How do you do PCA? (a sketch follows this list)
• Standardize the range of the continuous initial variables.
• Compute the covariance matrix to identify correlations.
• Compute the eigenvectors and eigenvalues of the covariance matrix to identify the principal components.
• Create a feature vector to decide which principal components to keep.
• Recast the data along the principal component axes.
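A minimal sketch of these steps for a multi-band image, using the eigen-decomposition of the band covariance matrix (band-mean centring stands in for full standardization):

import numpy as np

def pca(bands):
    # bands: array of shape (n_bands, rows, cols).
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(np.float32)
    X -= X.mean(axis=1, keepdims=True)        # centre each band
    cov = np.cov(X)                           # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigen-decomposition
    order = np.argsort(eigvals)[::-1]         # largest variance first
    pcs = eigvecs[:, order].T @ X             # project onto components
    return pcs.reshape(n, r, c), eigvals[order]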


Interpreting the key results of Principal Component Analysis
• Step 1: Determine the number of principal components.
• Step 2: Interpret each principal component in terms of the original variables.
• Step 3: Identify outliers.


Texture Transformations
• In digital image processing, texture can be defined as a function of the spatial variation of the brightness intensity of the pixels.
• Texture is the main term used to define objects or concepts of a given image.

Identifying objects based on texture
• What are the different types of image texture?
• Texture consists of texture elements, sometimes called texels.
• Texture can be described as fine, coarse, grained, smooth, etc. Such features are found in the tone and structure of a texture.
• Image stacking and compositing
• Layer stacking is a process of combining multiple separate bands in order to produce a new multi-band image.
• In order to layer-stack, the image bands must have the same extent (number of rows and columns).
• This type of multi-band image is useful in visualizing and identifying the available Land Use Land Cover classes.
• Compositing is assigning a suitable band arrangement for better analysis (a stacking sketch follows).
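A minimal stacking sketch with NumPy; the false-colour band order in the comment is one common compositing choice:

import numpy as np

def layer_stack(*bands):
    # All bands must share the same extent (rows and columns).
    assert len({b.shape for b in bands}) == 1
    return np.stack(bands, axis=0)   # (n_bands, rows, cols)

# Compositing: choose the display band order, e.g. a false-colour
# composite maps (NIR, Red, Green) to the (R, G, B) display channels:
# composite = layer_stack(nir, red, green)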
Image mosaicking and sub-setting
• A mosaic is a combination or merge of two or more images.
• A mosaic combines two or more raster datasets together.
• Mosaics are used to create a continuous image surface across large areas.
• In ArcGIS, you can create a single raster dataset from multiple raster datasets by mosaicking them together.


• Image sub-setting
• A subset is a section of a larger downloaded image.
• Since satellite data downloads usually cover more area than you are interested in and approach 1 GB in size, you can select a portion of the larger image to work with.
• A subset function extracts a subgroup of variable data from a multidimensional raster object.


Spectral Vegetation Indices
• Remote sensing may be applied to a variety of vegetated landscapes, including:
 agriculture,
 forests,
 rangeland,
 wetland, and
 urban vegetation.
 The basic assumption behind the use of vegetation indices is that remotely sensed spectral bands can reveal valuable information such as:
 vegetation structure,
 state of vegetation cover,
 photosynthetic capacity,
 leaf density and distribution,
 water content in leaves,
 mineral deficiencies, and
 evidence of parasitic shocks or attacks.


 A vegetation index is formed from combinations of several spectral values that are added, divided, subtracted, or multiplied in a manner designed to yield a single value that indicates the amount or vigor of vegetation within a pixel.


Exploring Some Commonly Used Vegetation Indices
1. Simple Ratio (SR) Index (ratio vegetation index, RVI)
• Ratios are effective in revealing underlying information when there is an inverse relationship between the two spectral responses to the same biophysical phenomenon.
 SR is used to indicate the status of vegetation.
 SR is the earliest and simplest form of VI.
 SR/RVI is calculated as:
SR = NIR / R

2. Normalized Difference Vegetation Index (NDVI)
• NDVI is one of the earliest and most widely used indices in various applications.
• NDVI responds to changes in:
• the amount of green biomass,
• chlorophyll content, and
• canopy water stress.
• It is calculated as:
NDVI = (NIR - R) / (NIR + R)
• NDVI conveys the same kind of information as the SR/RVI but is constrained to vary within limits that preserve desirable statistical properties (-1 < NDVI < 1).
• Only positive values correspond to vegetated zones; the higher the index, the greater the chlorophyll content of the target.
• The time of maximum NDVI corresponds to the time of maximum photosynthesis.
Some Application Areas of NDVI
i. Land-Use and Land-Cover Change
o NDVI is helpful in identifying LULC changes such as deforestation, its rate, and the area affected.
o An NDVI differencing image is able to provide insights into the nature of the detected change.
o This method is much more accurate than other change detection methods in detecting vegetation changes.


 However, one potential problem with NDVI-based image differencing is the difficulty in setting the appropriate threshold for significant change, such as change from full vegetation to no vegetation.


ii. Drought and Drought Early Warning
 Generally, meteorological (dry weather patterns) and hydrological (low water supply) droughts would not be detected by NDVI before they impact the vegetation cover.
 But NDVI is found to be useful for detecting and monitoring drought effects on the vegetation cover, especially agricultural droughts, which in turn is useful for early warning.


iii. Soil Erosion
• NDVI has proved to be a useful indicator of land-cover condition and a reliable input for deriving the land-cover management factor in soil erosion models, used to determine the vulnerability of soils to erosion.


3. Green-Red Vegetation Index (GRVI)
 NDVI is not sensitive enough to leaf-color change from green to yellow or red, because green reflectance is not used in the calculation of NDVI.
 The Green-Red Vegetation Index (GRVI) is an indicator of plant phenology due to seasonal variations.
 It is calculated as:
GRVI = (Green - Red) / (Green + Red)
 Green vegetation, soils, and water/snow have positive, negative, and near-zero values of GRVI, respectively.


4. Soil-Adjusted Vegetation Index (SAVI)
 SAVI is used to eliminate the effects of background soils observed in NDVI and other VIs.
 It adjusts for the soil effect on reflectance under either dense or sparse canopies.
 It is calculated as:
SAVI = ((pn - pr) / (pn + pr + L)) × (1 + L)
• where pn and pr are reflectances in the near infrared and red, and
• L is a coefficient that should vary with vegetation density, ranging from 0 for very high vegetation cover to 1 for very low vegetation cover.
 It is obvious that if L = 0, then SAVI is equivalent to NDVI.

5. Normalized Difference Water Index (NDWI)
• The normalized difference water index (NDWI) is proposed to monitor moisture conditions of vegetation canopies over large areas from space.
 It is defined (using near-infrared and shortwave-infrared reflectances) as:
NDWI = (pNIR - pSWIR) / (pNIR + pSWIR)
where p represents radiance in reflectance units.


Operations Between Images
Arithmetic operations
• An image is an array of numbers, so mathematical operations can be performed on these numbers. In this section we consider 2D images, but the generalization to other dimensions is obvious.
• Image math includes addition, subtraction, multiplication, and division of each one of the pixels of one image with the corresponding pixel of the other image.
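A minimal sketch of pixel-by-pixel arithmetic between two co-registered bands (the random arrays are placeholders):

import numpy as np

a = np.random.randint(0, 256, (100, 100)).astype(np.float32)
b = np.random.randint(0, 256, (100, 100)).astype(np.float32)

added = a + b                 # pixel-by-pixel addition
difference = a - b            # subtraction (e.g., for change detection)
product = a * b               # multiplication
ratio = a / (b + 1e-6)        # division (band ratioing), avoiding /0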


Unit 6: Digital Image Classification
