PRACTICAL NOTEBOOK
Submitted by:
DEPARTMENT OF GEOGRAPHY
MANIPUR UNIVERSITY, CANCHIPUR-795003
2022
CONTENTS
2. CREATING SUBSET
3. GEOMETRIC CORRECTION
4. RADIOMETRIC CORRECTION
6. IMAGE CLASSIFICATION
CHAPTER 1
INTRODUCTION TO REMOTE SENSING
Remote sensing has transformed the understanding of natural processes. The analysis,
application and interpretation of remotely sensed images have brought significant advances
in a wide range of fields, from climatology, geology, agriculture and oceanography to urban
planning, environmental monitoring, etc. Remote sensing is broadly defined as the science
and art of collecting information about objects, areas or phenomena from a distance, without
being in physical contact with them. Remote sensing is broadly classified into two
categories:
1. Passive Remote Sensing
2. Active Remote Sensing
Satellite Remote Sensing: The term satellite remote sensing is commonly restricted to
methods that employ electromagnetic energy (such as light, heat, and microwaves) as a means
of detecting and measuring target characteristics. In a passive remote sensing system, the
naturally radiated or reflected energy from the earth's surface features is measured by
sensors operating in different selected spectral bands on board airborne/spaceborne
platforms. An active remote sensing system supplies its own source of energy to illuminate
the objects and measures the reflected energy returned to the system.
Stages in Remote Sensing
1. Source of energy
Steps:
1. Right Click>> Open Raster Layer>> Pop Up Window (PUW) >> Navigate to the desired
Folder>> Select .img file
After loading the image
2. Right click>> Fit to frame
3. Tab>> Multispectral>> Change band combination
The study area is often much smaller than the obtained data.
Therefore, a subset/AOI has to be extracted.
Creation of Subset
1. Tab>> Raster>> Subset & Chip PUW >> Input file (Main Image data) & Save Output file
(Subset file) >> AOI >> AOI File>> Select the created AOI
2. Right Click>> Open Raster Layer>> Select subset file
SUBSET using AOI Vector File (19th May 2022)
.shp is the file extension
The shapefile is the most common vector file format
The vector file has to be loaded on top of the image
CHAPTER 2
GEOMETRIC CORRECTION
There are various GIS tools available that can transform image data to a geographic
control framework, such as the commercial ArcMap, PCI Geomatica, TNT maps (MicroImages,
Inc.) or ERDAS Imagine. One can georeference a set of points, lines, polygons, images, or
3D structures. For instance, a GPS device will record latitude and longitude coordinates for
a given point of interest, effectively georeferencing this point. Each georeference must be
a unique identifier.
Rectifying satellite imagery to user coordinates
Open the multispectral image in a viewer.
Left click Raster>Geometric correction from the viewer menu bar.
A geometric model selection dialog box opens up.
Choose polynomial and left click ok.
A polynomial model properties window and a Geo Correction Tools window will load on
the viewer.
Left click projection option at the top
Make the Map Units "Others" and click on Add/Change Projection.
Click on custom.
Change the projection type to 'UTM'.
Change the spheroid name to "WGS84".
Change the UTM zone to 44.
Set north or south to "North", then click OK to accept the changes.
Click on set projection from GCP points.
GCP tool reference setup dialog box will appear.
So choose "Existing Viewer "and click OK. Now we will be asked to click inside viewer.
Now click in the viewer where multispectral .img is displayed.
Now just click Ok in reference map information.
Press this button to create a GCP (Ground Control Point). After choosing a point from the
image, enter the corresponding coordinates in the X Ref and Y Ref fields (X → Easting,
Y → Northing) using the keyboard. (A minimum of four points is required for the first order.)
Save the GCPs with the File > Save Input option. Also save the GCPs of reference coordinates
in another file.
Resampling: once all the GCPs are given, resample the image (i.e., convert the old image to
the new reference coordinate system).
Click on the resample button in the Geo Correction Tools.
The resample dialog box appears
Select a new output file
In the output cell size option the user can fill in the pixel size or use the default
calculated by the system
Click ok to start resampling
The processing window will appear
Open a new viewer, load the output image and check the locational values by moving the
cursor
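The first-order polynomial model fitted from the GCPs can be sketched as a least-squares solve in numpy. The GCP pixel and reference coordinates below are hypothetical values for illustration, not taken from any actual exercise:

```python
import numpy as np

# Hypothetical GCPs: (col, row) pixel coordinates and matching
# (easting, northing) map coordinates. Illustrative values only.
pixel = np.array([[10, 10], [90, 12], [15, 85], [88, 80]], dtype=float)
ref   = np.array([[500100.0, 2800900.0],
                  [500900.0, 2800880.0],
                  [500150.0, 2800150.0],
                  [500880.0, 2800200.0]])

# First-order (affine) polynomial: X = a0 + a1*col + a2*row, same for Y.
# Build the design matrix and solve both axes by least squares.
A = np.column_stack([np.ones(len(pixel)), pixel])
coef, *_ = np.linalg.lstsq(A, ref, rcond=None)

def to_map(col, row):
    """Transform a pixel coordinate to map coordinates."""
    return np.array([1.0, col, row]) @ coef

# RMS error of the fit over the GCPs, a standard quality check.
residuals = A @ coef - ref
rms = float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```

With more than three GCPs the system is overdetermined, so the least-squares residuals give the per-point and RMS errors that ERDAS reports in the GCP tool.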
CHAPTER 3
RADIOMETRIC CORRECTION
Brightness Inversion
Left click on the image interpreter/radiometric enhance/brightness inversion on ERDAS
main menu
The brightness inversion dialog box opens up. Type in or load the input file i.e.,
Multispectral and output file i.e., brightness and then click ok
The output image will be generated (Fig. 2: Brightness Inversion)
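For 8-bit data, the reverse variant of brightness inversion simply mirrors each DN about the top of the range, so dark pixels become bright and vice versa. A minimal numpy sketch on a synthetic band (ERDAS also offers an inverse, 1/DN-style variant, not shown here):

```python
import numpy as np

# A small synthetic 8-bit band; real data would come from the .img file.
band = np.array([[0, 60, 120], [180, 240, 255]], dtype=np.uint8)

# Brightness "reverse" inversion for 8-bit data: DNout = 255 - DNin.
inverted = (255 - band.astype(np.int32)).astype(np.uint8)
```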
Noise Reduction
Left click on image interpreter /Radiometric enhance/ Noise reduction on ERDAS main
menu
The noise reduction dialog box opens up. Type in or load the input file i.e., multispectral
.img and output file i.e., noise .img and click ok
The output will be generated
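ERDAS's noise reduction is its own adaptive algorithm; as a stand-in, a 3x3 median filter is a common simple way to suppress salt-and-pepper noise. A numpy sketch on a synthetic band with one noise spike:

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter with edge replication; a simple noise reducer."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.median(padded[r:r + 3, c:c + 3])
    return out

# Flat image with one salt-noise pixel; the filter removes the spike.
noisy = np.full((5, 5), 50, dtype=np.uint8)
noisy[2, 2] = 255
cleaned = median_filter_3x3(noisy)
```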
CHAPTER 4
IMAGE ENHANCEMENT TECHNIQUE
Image Enhancement: Contrast Manipulation
Contrast manipulation aims at enhancing the contrast in the image. It manipulates the gray
levels for effective display. The methods are gray level thresholding, level slicing and
contrast stretching.
CONTRAST STRETCHING
Contrast stretching by computer processing of digital data (DNs) is a common operation,
although it needs some user skill in selecting specific techniques and parameters (range
limits). The reassignment of DN values is based on the particular stretch algorithm chosen.
Values are accessed through a look-up table (LUT).
The objective is to understand the concept of contrast stretching and to apply a contrast
stretch to improve the visual interpretability of an image by adjusting its contrast.
Pre-requisite: display an image (multispectral .img) in a viewer. When images are
displayed in ERDAS IMAGINE, a linear contrast stretch is applied to the data file values, but
we can further enhance the image.
Left click Raster/Contrast/Brightness-Contrast in the viewer menu bar
The contrast dialog box appears
Change the numbers and/or use the side bars to adjust image brightness and contrast
Left click apply. The image is displayed with new brightness values
Left click Reset to undo any changes
Left click Close to close the contrast tool dialog box
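The linear stretch the viewer applies can be sketched as a 256-entry LUT built from the band's minimum and maximum (a min-max stretch, assuming 8-bit data; the band values are synthetic):

```python
import numpy as np

# Synthetic low-contrast band occupying only DNs 40..80 of 0..255.
band = np.array([[40, 50, 60], [70, 75, 80]], dtype=np.uint8)

# Build a 256-entry look-up table (LUT) for a linear min-max stretch:
# DNs at or below lo map to 0, DNs at or above hi map to 255.
lo, hi = int(band.min()), int(band.max())
dn = np.arange(256, dtype=float)
lut = np.clip((dn - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)

stretched = lut[band]   # apply the LUT to every pixel at once
```

Building the mapping once as a LUT and indexing with the band is exactly how display stretches stay fast regardless of image size.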
HISTOGRAM EQUALIZATION
It is a stretch that favorably expands some parts of the DN range at the expense of others by
dividing the histogram into classes containing equal numbers of pixels. For instance, if
most of the radiance variation occurs in the lower range of brightness, those DN values may
be selectively extended in greater proportion to higher (brighter) values. Here, we carry out
a Histogram Equalization stretch in ERDAS Imagine with the following steps
Left click on Image Interpreter/Radiometric enhance/Histogram equalization on ERDAS
main menu
The histogram equalization dialog box opens up
Input file: select stacked .img in the select dialog box
Subset definition: select this to define a rectangular area of the data to be used for the
output file. The default is the entire file
No. of Bins: the number of output grey level values
Ignore zero in statistics: when this check box is on, pixels with a zero file value will be
ignored when statistics are calculated for the output
Left click OK to run the program. Left click Close to close the dialog box
Open a new viewer and display the file histogram .img to see the histogram-equalized image
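The equalization mapping can be sketched in numpy using the standard CDF formula (assuming 8-bit data and 256 bins; the band values are synthetic):

```python
import numpy as np

def hist_equalize(band, n_bins=256):
    """Histogram equalization for an 8-bit band via the CDF mapping."""
    hist = np.bincount(band.ravel(), minlength=n_bins)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first nonzero CDF value
    # Standard equalization formula, rescaled to the 0..255 range.
    lut = np.round((cdf - cdf_min) / (band.size - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[band]

# Low-contrast example: DNs clustered in 100..103.
band = np.array([[100, 100, 101], [101, 102, 103]], dtype=np.uint8)
equalized = hist_equalize(band)
```

Frequent DN values get spread over a wider output range than rare ones, which is exactly the "expands some parts of the DN range at the expense of others" behaviour described above.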
FILTERING TECHNIQUES
Spatial Techniques
Spatial filtering is a processing procedure falling into the enhancement category that
often divulges valuable information of a different nature. Filtering has many applications,
like noise removal, data enhancement, data extraction, data compression and directional
enhancement. Filtering can be divided into many types:
Spatial and frequency domain
Low pass, bandpass, and high pass
Directional filters
Smoothing and edge enhancement filters
Gradient and Laplacian filter
Low Pass Filter
These are designed to enhance the low frequency details in an image. A low pass filter
smoothens the details in an image and reduces the gray level range. Strips of noise, which
have high frequency, are removed.
High Pass Filter
A high pass filter emphasizes high frequency details and de-emphasizes low frequency
features. It produces an image with a narrow histogram. High frequency features are
sharpened and low frequency features are subdued.
Application of a low pass filter
Left click on the Image Interpreter/spatial enhancement/convolution in the ERDAS main
menu
The convolution dialog box opens up. This dialog box allows us to perform image
enhancement operations such as averaging and high pass filtering. We can either define the
convolution kernel ourselves or choose one from the built-in kernel library.
MULTI IMAGE MANIPULATION
Band ratioing: a ratio image is an enhancement resulting from the division of DN values in
one spectral band by the corresponding DN values of another band; it suppresses the effects
of topographic slope on the reflectance curve. Principal and canonical component
transformation: extensive inter-band correlation is a problem frequently encountered in the
analysis of multispectral image data. Principal component analysis helps in removing the
redundancy in the data set. Vegetation component: vegetation indices are empirical formulae
to emphasize the spectral contrast between the red and the near infrared regions of the
electromagnetic spectrum. The higher the VI value, the higher the probability of good
vegetation on the ground.
The Normalized Difference Vegetation Index (NDVI) is the most commonly used index. It
indicates vegetation coverage, and its value ranges from -1 to +1.
NDVI= (NIR-RED)/(NIR+RED)
CALCULATE NDVI
LOW NDVI- NON-VEGETATED
HIGH NDVI - VEGETATED
STEPS
Open ArcGIS, click Add Data and load the image
Click Windows and select Image Analysis
The image analysis dialog box appears; click on the NDVI icon, the NDVI is calculated.
Save the NDVI image and provide its location, name, format type and click ok.
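The same NDVI computation can be sketched directly in numpy; the red and near-infrared band values below are hypothetical reflectances:

```python
import numpy as np

# Hypothetical red and near-infrared bands (reflectance-like values).
red = np.array([[0.10, 0.30], [0.20, 0.05]])
nir = np.array([[0.50, 0.30], [0.60, 0.45]])

# NDVI = (NIR - RED) / (NIR + RED); guard against division by zero.
denom = nir + red
ndvi = np.where(denom == 0, 0.0,
                (nir - red) / np.where(denom == 0, 1.0, denom))
```

Pixels where NIR greatly exceeds red (strong vegetation) approach +1, while equal red and NIR reflectance gives 0, matching the low/high NDVI interpretation above.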
CHAPTER 5
IMAGE CLASSIFICATION
Image classification techniques assign the picture elements into various categories,
utilizing the pixel-by-pixel spectral information for automated land cover classification.
Classification makes use of multiple images of the same scene, obtained over various
spectral regions, as input. It is based on statistical decision theory: the decision to
classify a pixel into a class depends upon statistical calculation. Classification extracts
information from the data. Patterns, i.e., the sets of radiance measurements obtained in
various wavelength bands for each pixel, are analyzed and image classification is done. In
spectral pattern recognition, categorization of image pixels is done based on their
spectral properties, whereas in spatial pattern recognition categorization is done based on
pixels' relationships with neighboring pixels. In temporal pattern recognition, multidate
data are analyzed to classify a particular category. Classification can be divided in many
ways.
Some of them are:
1. Supervised and unsupervised classification
2. Statistical and syntactic classification
3. Parametric and non-parametric classification
UNSUPERVISED CLASSIFICATION
Unsupervised classification is a process whereby numerical operations are performed that
search for "natural groups" in multispectral space. It does not utilize training data as the
basis for classification. The area is divided into a number of spectral classes, and the
analyst then specifies the thematic class pertaining to each spectral class. The clustering
algorithm operates in a two-pass mode. In the first pass the program reads through the data
set and sequentially builds clusters; a mean vector is associated with each cluster. In the
second pass a minimum-distance-to-mean algorithm is applied to the whole data set on a pixel
basis, in which each pixel is assigned to one of the mean vectors created in the first pass.
For the first pass the analyst has to give the following information:
R- Radius in spectral space used to determine when a new cluster should be formed.
C - a spectral space distance parameter used when merging clusters.
N- The number of pixels to be evaluated between each merging of clusters.
Cmax - The maximum number of clusters to be identified by the algorithm.
After a cluster is formed and pixels are included in it, its mean is updated. When a pixel's
distance from every already formed cluster is greater than R, a new cluster is formed. The
procedure continues in this manner.
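The two-pass procedure described above (sequential cluster building with radius R, then minimum distance to mean) can be sketched as follows; the pixel values and R are illustrative:

```python
import numpy as np

def two_pass_cluster(pixels, R):
    """Sketch of the two-pass clustering described above.

    Pass 1 scans pixels sequentially: a pixel farther than R from every
    existing cluster mean starts a new cluster; otherwise it joins the
    nearest cluster and that cluster's mean is updated.
    Pass 2 assigns every pixel to the nearest final mean.
    """
    means, counts = [], []
    for p in pixels:
        d = [np.linalg.norm(p - m) for m in means]
        if not means or min(d) > R:
            means.append(p.astype(float))
            counts.append(1)
        else:
            i = int(np.argmin(d))
            counts[i] += 1
            means[i] += (p - means[i]) / counts[i]   # running mean update
    means = np.array(means)
    # Pass 2: minimum distance to mean, per pixel.
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1), means

# Two well-separated spectral groups in a 2-band space (illustrative DNs).
pixels = np.array([[10, 10], [12, 11], [11, 9],
                   [80, 82], [82, 80], [81, 81]], dtype=float)
labels, means = two_pass_cluster(pixels, R=20.0)
```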
Steps for unsupervised classification
Click on the classifier icon in the ERDAS panel to start the classification utility.
Select Unsupervised Classification from the Classification menu to perform unsupervised
classification using the ISODATA algorithm
The unsupervised classification dialog box opens.
Input raster file: enter the name of the input image file to cluster or click on the file
selector button.
Output cluster file: click on this check box to generate and output a classified thematic
raster layer (.img).
File name: enter the name of the output cluster layer file (say unsupervised.img) or
click on the file selector button; the .img extension is automatically added.
Output signature set: turn on this box to generate and output a signature file.
File name: enter the name of the output signature set file (say unsupervised.sig) or
click on the file selector button; the .sig extension is automatically added.
Initialize from statistics: turn on this check box to generate arbitrary clusters from
the file statistics for the .img file.
Clustering options: we need to define how the clusters will be generated.
Number of classes: enter the number of classes to be created, say 5.
The ISODATA utility repeats the clustering of the image until either
o A maximum number of iterations has been performed, or
o A maximum percentage of unchanged pixels has been reached between two
iterations
Processing options: use the following number fields to specify the processing options.
Maximum iterations: enter the maximum number of times the ISODATA utility should
recluster the data. This parameter prevents the utility from running too long, or from
potentially getting "stuck" in a cycle without reaching the convergence threshold.
The convergence threshold is the maximum percentage of pixels whose cluster assignments can
go unchanged between iterations; this threshold prevents the ISODATA utility from running
indefinitely. By specifying a convergence threshold of 0.95, we specify that as soon as
95% or more of the pixels stay in the same cluster between one iteration and the next, the
utility should stop processing. In other words, as soon as 5% or fewer of the pixels change
clusters between iterations, the utility will stop processing. Classify zeros: include zeros
in the classification. Click OK to execute the program. Open a new viewer; load
unsupervised.img as pseudocolor. Using the raster attribute editor, change the color scheme.
Open unsupervised.img in GIS by loading the image with Add Data
Click on Layout, insert heading text, north arrow, scale and legends, and provide
coordinates
Go to file and click export map and save it in the desired location
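The two stopping rules, maximum iterations and the convergence threshold, can be sketched with a minimal k-means-style loop. This is a simplification: true ISODATA also merges and splits clusters, and the pixel values here are illustrative:

```python
import numpy as np

def isodata_like(pixels, k, max_iter=10, conv_threshold=0.95):
    """Minimal k-means-style loop showing the two ISODATA stopping rules:
    a maximum iteration count, and a convergence threshold (fraction of
    pixels whose cluster assignment stayed the same between iterations).
    Cluster merging/splitting of real ISODATA is omitted."""
    means = pixels[:k].astype(float)        # simple deterministic seeding
    labels = np.full(len(pixels), -1)
    for it in range(1, max_iter + 1):
        # Assign every pixel to its nearest cluster mean.
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        unchanged = np.mean(new_labels == labels)
        labels = new_labels
        # Recompute each cluster mean from its member pixels.
        for j in range(k):
            if np.any(labels == j):
                means[j] = pixels[labels == j].mean(axis=0)
        if unchanged >= conv_threshold:     # e.g. 95% of pixels stayed put
            break
    return labels, means, it

# Two well-separated groups in a 2-band spectral space (illustrative DNs).
pixels = np.array([[0, 0], [1, 1], [0, 1],
                   [9, 9], [10, 10], [9, 10]], dtype=float)
labels, means, iterations = isodata_like(pixels, k=2)
```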
SUPERVISED CLASSIFICATION
Supervised classification refers to the classification process where the image analyst
supervises the classification procedure by specifying the algorithm, training sites, etc.
Identification and location of some land cover types is a must before attempting supervised
classification. The steps involved in the classification process are
Appropriate classification scheme
Training site selection
Analysis of training sites
Selection of classification algorithm
Classification into number of classes
Evaluation of classification accuracy
Classification scheme
A well-defined classification scheme is necessary before attempting any classification.
The scheme takes into consideration the resolution of the sensor and the difference between
information classes and spectral classes. The classification scheme should be comparable
over different levels. It can be resource based or activity based.
Training site selection
Training sites are small representative areas, identified by the analyst, for which the
land cover type is known. The procedure assembles a set of statistics that describes the
spectral response pattern for each land cover type to be classified. A training site can be
given with a seed point. The number of training pixels should theoretically be at least n+1,
where n is the number of bands, for the statistical calculations. Ideally, 10n to 100n
training samples are sufficient.
Training site analysis
In training site analysis, refinement of the training sites is done and the spectral
separability between classes is determined. The purity of the data is checked and it is
ascertained that the data is normally distributed. Extraneous pixels are removed from the
sites. Training sites are merged or split if the need arises. The training refinement
procedure involves the following
graphical representation
quantitative expressions
self-classification of training set data
Classifier
Various algorithms exist to assign unknown pixels into a number of classes. The choice of a
particular classifier or decision rule depends upon the nature of the input data and the
desired output. The most commonly used classifiers are
Minimum distance
Parallelepiped and
Maximum likelihood
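The first of these, a minimum distance classifier, can be sketched in a few lines; the class mean vectors below are hypothetical signatures in a 2-band spectral space:

```python
import numpy as np

# Hypothetical class signatures: mean vectors in a 2-band spectral space.
class_means = {
    'water':      np.array([20.0, 15.0]),
    'vegetation': np.array([60.0, 120.0]),
    'urban':      np.array([110.0, 90.0]),
}

def minimum_distance(pixel, means):
    """Assign a pixel to the class with the nearest mean vector."""
    names = list(means)
    d = [np.linalg.norm(pixel - means[n]) for n in names]
    return names[int(np.argmin(d))]

label = minimum_distance(np.array([55.0, 110.0]), class_means)
```

This is the same decision rule used in the second pass of the unsupervised clustering above, except that here the mean vectors come from analyst-selected training sites.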
Steps to supervised classification
Supervised classification using maximum likelihood classifier.
Marking of training samples and signatures generation
Pre requisite: An image should be displayed in the viewer
Select File/Open/Raster Layer and select the image in the viewer menu bar. Select bands 4,
3, 2 and click on the Fit to Frame checkbox. Click OK to display the image to be classified
Click on classifier icon
Click on Signature Editor in ERDAS icon panel
A Signature Editor Dialog box opens up
Click on close in the classification Menu
In the Signature Editor select view/ columns. A view Signature Columns Dialog box
appears.
In the Signature Columns dialog box, Shift-click RED, GREEN and BLUE in column boxes
3, 4, 5 to deselect these rows. Click Apply (these columns will not be displayed in the
Signature Editor dialog box).
Click close in view signature columns dialog box.
Select AOI/Tools from the viewer menu bar. The AOI tool palette displays on the screen
Zoom in one of the areas in the viewer where the training site is to be marked by
selecting the zoom in icon in the viewer tool bar
Click on the polygon icon in the AOI tool palette
Draw a polygon in the magnified area: drag to draw the polygon and click to place
vertices. Double click to close the polygon
In the Signature Editor select Edit/Add from the menu bar to add this AOI as signature
Click in the Signature Name column in the signature editor for the signature just added
and type a class name.
Click and hold the Color column in the signature editor and change the color of the class.
Mark training samples for all the classes
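From the marked training samples, signature generation and Gaussian maximum likelihood classification can be sketched as follows; the training pixel values are hypothetical 2-band DNs, standing in for the statistics the Signature Editor collects from the AOI polygons:

```python
import numpy as np

# Hypothetical training pixels per class (2-band DN values), as would be
# collected from the AOI polygons drawn above.
training = {
    'water':      np.array([[20, 14], [22, 16], [19, 15], [21, 15]], float),
    'vegetation': np.array([[60, 118], [62, 122], [58, 120], [61, 119]], float),
}

# One signature per class: mean vector and covariance matrix.
signatures = {c: (s.mean(axis=0), np.cov(s, rowvar=False))
              for c, s in training.items()}

def max_likelihood(pixel, signatures):
    """Gaussian maximum likelihood: pick the class with the highest
    log-likelihood under its (mean, covariance) signature."""
    best, best_ll = None, -np.inf
    for c, (mu, cov) in signatures.items():
        diff = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov))
                     + diff @ np.linalg.inv(cov) @ diff)
        if ll > best_ll:
            best, best_ll = c, ll
    return best

label = max_likelihood(np.array([59.0, 121.0]), signatures)
```

Unlike the minimum distance rule, this decision accounts for each class's variance and band-to-band covariance, which is why maximum likelihood needs enough training pixels per class to estimate a stable covariance matrix.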