Guide
20USR471DOC
Restricted Rights Notice
The IDL®, IDL Advanced Math and Stats™, ENVI®, ENVI Zoom™, and ENVI® EX software programs and the accompanying procedures, functions,
and documentation described herein are sold under license agreement. Their use, duplication, and disclosure are subject to the restrictions stated in the
license agreement. ITT Visual Information Solutions reserves the right to make changes to this document at any time and without notice.
Limitation of Warranty
ITT Visual Information Solutions makes no warranties, either express or implied, as to any matter not expressly set forth in the license agreement,
including without limitation the condition of the software, merchantability, or fitness for any particular purpose.
ITT Visual Information Solutions shall not be liable for any direct, consequential, or other damages suffered by the Licensee or any others resulting
from use of the software packages or their documentation.
Permission to Reproduce this Manual
If you are a licensed user of these products, ITT Visual Information Solutions grants you a limited, nontransferable license to reproduce this particular
document provided such copies are for your use only and are not sold or distributed to third parties. All such copies must contain the title page and this
notice page in their entirety.
Export Control Information
This software and associated documentation are subject to U.S. export controls including the United States Export Administration Regulations. The
recipient is responsible for ensuring compliance with all applicable U.S. export control laws and regulations. These laws include restrictions on
destinations, end users, and end use.
Acknowledgments
ENVI® and IDL® are registered trademarks of ITT Corporation, registered in the United States Patent and Trademark Office. ION™, ION Script™,
ION Java™, and ENVI Zoom™ are trademarks of ITT Visual Information Solutions.
ESRI®, ArcGIS®, ArcView®, and ArcInfo® are registered trademarks of ESRI.
Portions of this work are Copyright © 2009 ESRI. All rights reserved.
PowerPoint® and Windows® are registered trademarks of Microsoft Corporation in the United States and/or other countries.
Macintosh® is a registered trademark of Apple Inc., registered in the U.S. and other countries.
UNIX® is a registered trademark of The Open Group.
Adobe Illustrator® and Adobe PDF® Print Engine are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States
and/or other countries.
Numerical Recipes™ is a trademark of Numerical Recipes Software. Numerical Recipes routines are used by permission.
GRG2™ is a trademark of Windward Technologies, Inc. The GRG2 software for nonlinear optimization is used by permission.
NCSA Hierarchical Data Format (HDF) Software Library and Utilities. Copyright © 1988-2001, The Board of Trustees of the University of Illinois. All
rights reserved.
NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities. Copyright © 1998-2002, by the Board of Trustees of the University of
Illinois. All rights reserved.
CDF Library. Copyright © 2002, National Space Science Data Center, NASA/Goddard Space Flight Center.
NetCDF Library. Copyright © 1993-1999, University Corporation for Atmospheric Research/Unidata.
HDF EOS Library. Copyright © 1996, Hughes and Applied Research Corporation.
SMACC. Copyright © 2000-2004, Spectral Sciences, Inc. and ITT Visual Information Solutions. All rights reserved.
This software is based in part on the work of the Independent JPEG Group.
Portions of this software are copyrighted by DataDirect Technologies, © 1991-2003.
BandMax®. Copyright © 2003, The Galileo Group Inc.
Portions of this computer program are copyright © 1995-2008 Celartem, Inc., doing business as LizardTech. All rights reserved. MrSID is protected by
U.S. Patent No. 5,710,835. Foreign Patents Pending.
Portions of this software were developed using Unisearch’s Kakadu software, for which ITT has a commercial license. Kakadu Software. Copyright ©
2001. The University of New South Wales, UNSW, Sydney NSW 2052, Australia, and Unisearch Ltd, Australia.
This product includes software developed by the Apache Software Foundation (www.apache.org/).
MODTRAN is licensed from the United States of America under U.S. Patent No. 5,315,513 and U.S. Patent No. 5,884,226.
QUAC and FLAASH are licensed from Spectral Sciences, Inc. under U.S. Patent No. 6,909,815 and U.S. Patent No. 7,046,859 B2.
Portions of this software are copyrighted by Merge Technologies Incorporated.
Support Vector Machine (SVM) is based on the LIBSVM library written by Chih-Chung Chang and Chih-Jen Lin (www.csie.ntu.edu.tw/~cjlin/libsvm),
adapted by ITT Visual Information Solutions for remote sensing image supervised classification purposes.
IDL Wavelet Toolkit Copyright © 2002, Christopher Torrence.
IMSL is a trademark of Visual Numerics, Inc. Copyright © 1970-2006 by Visual Numerics, Inc. All Rights Reserved.
Other trademarks and registered trademarks are the property of the respective trademark holders.
Contents
Chapter 1
Interactive Displays .............................................................................. 11
Display Group Menu Bar Options ................................................................................... 13
Managing Files from the Display Group ......................................................................... 14
Saving Images from Displays .......................................................................................... 15
Printing ............................................................................................................................ 22
Exporting Images to ArcMap .......................................................................................... 23
Creating QuickMaps ........................................................................................................ 24
Overlays ........................................................................................................................... 29
Annotating Images and Plots ........................................................................................... 31
Overlaying Classes .......................................................................................................... 44
Plotting Contour Lines .................................................................................................... 50
Interactive Density Slicing .............................................................................................. 58
Adding Grid Lines ........................................................................................................... 63
Creating Regions of Interest ............................................................................................ 69
Chapter 2
File Management ................................................................................ 159
The File Menu ................................................................................................................ 160
Opening Image Files and Vector Files ........................................................................... 161
Opening Remote Files .................................................................................................... 162
Opening External Files .................................................................................................. 163
Opening Previous Files .................................................................................................. 192
Starting ENVI Zoom ...................................................................................................... 193
Editing ENVI Headers ................................................................................................... 194
Generating Test Data ..................................................................................................... 210
Using the Data Viewer ................................................................................................... 212
Subsetting Data .............................................................................................................. 215
Chapter 3
Basic Tools .......................................................................................... 263
The Basic Tools Menu .................................................................................................. 264
Resizing Data (Spatial/Spectral) ................................................................................... 265
Subsetting Data via ROIs .............................................................................................. 266
Rotating Images ............................................................................................................. 267
Layer Stacking ............................................................................................................... 269
Converting Data (BSQ, BIL, BIP) ................................................................................ 271
Stretching Data .............................................................................................................. 272
Statistics ......................................................................................................................... 274
Spatial Statistics ............................................................................................................ 283
Change Detection Analysis ........................................................................................... 294
Measurement Tool ......................................................................................................... 303
Band Math ..................................................................................................................... 306
Spectral Math ................................................................................................................ 320
Segmenting Images ....................................................................................................... 321
Defining ROIs ............................................................................................................... 323
Mosaicking Images ........................................................................................................ 348
Creating Masks .............................................................................................................. 349
Preprocessing Utilities ................................................................................................... 357
Chapter 4
Classification Tools ............................................................................. 399
The Classification Menu ............................................................................................... 400
Supervised Classification .............................................................................................. 401
Unsupervised Classification .......................................................................................... 425
Decision Tree Classifier ................................................................................................ 430
Collecting Endmember Spectra ..................................................................................... 439
Creating Class Images from ROIs ................................................................................. 456
Chapter 5
Transform Tools .................................................................................. 491
The Transform Menu ..................................................................................................... 492
Image Sharpening .......................................................................................................... 493
Calculating Band Ratios ................................................................................................ 501
Principal Component Analysis ...................................................................................... 504
Independent Components Analysis ................................................................................ 509
Minimum Noise Fraction Transform ............................................................................. 516
Color Transforms ........................................................................................................... 527
Applying Decorrelation Stretch ..................................................................................... 531
Applying Photographic Stretch ...................................................................................... 532
Applying Saturation Stretch ........................................................................................... 533
Creating Synthetic Color Images ................................................................................... 534
Calculating Vegetation Indices ...................................................................................... 535
Chapter 6
Filter Tools ........................................................................................... 539
The Filter Menu ............................................................................................................. 540
Using Convolution and Morphology Filters .................................................................. 541
Using Texture Filters ..................................................................................................... 548
Using Adaptive Filters ................................................................................................... 553
Using Frequency Filters (FFTs) ..................................................................................... 561
Chapter 7
Spectral Tools ..................................................................................... 567
The Spectral Menu ......................................................................................................... 569
SPEAR Tools ................................................................................................................. 570
THOR Workflows .......................................................................................................... 653
Target Detection Wizard ................................................................................................ 723
Spectral Libraries ........................................................................................................... 739
Spectral Slices ................................................................................................................ 746
MNF Rotation ................................................................................................................ 748
Pixel Purity Index .......................................................................................................... 749
The n-D Visualizer ......................................................................................................... 754
Mapping Methods .......................................................................................................... 771
Chapter 8
Map Tools ............................................................................................. 875
The Map Menu .............................................................................................................. 876
Registration ................................................................................................................... 877
Orthorectification .......................................................................................................... 911
Image Mosaicking ......................................................................................................... 919
Georeferencing from Input Geometry ........................................................................... 934
Georeferencing SPOT Data ........................................................................................... 943
Georeferencing SeaWiFS Data ..................................................................................... 946
Georeferencing ASTER Data ........................................................................................ 949
Georeferencing AVHRR Data ....................................................................................... 952
Georeferencing ENVISAT ............................................................................................ 956
Georeferencing MODIS ................................................................................................ 957
Georeferencing COSMO-SkyMed Data ....................................................................... 967
Georeferencing RADARSAT ........................................................................................ 969
Building RPCs ............................................................................................................... 972
Selecting Map Projection Types .................................................................................... 990
Building Customized Map Projections .......................................................................... 992
Chapter 9
Vector Tools ...................................................................................... 1009
The Vector Menu ......................................................................................................... 1010
Opening Vector Files ................................................................................................... 1011
The Available Vectors List .......................................................................................... 1012
Working with Vectors .................................................................................................. 1013
Creating Vector Layers ................................................................................................ 1038
Creating World Boundary Layers ................................................................................ 1040
Extracting Linear Features with Intelligent Digitizer .................................................. 1043
Converting Raster Images ............................................................................................ 1056
Converting Classification Images ................................................................................ 1057
Rasterizing Point Data ................................................................................................. 1058
Converting ROIs to DXF Files .................................................................................... 1060
Converting Annotation Files to DXF Files .................................................................. 1061
Converting EVFs to DXF Files .................................................................................... 1062
Chapter 10
Topographic Tools ............................................................................ 1063
The Topographic Menu ................................................................................................ 1064
Opening Topographic Files .......................................................................................... 1065
Using Topographic Modeling ...................................................................................... 1066
Extracting Topographic Features ................................................................................. 1070
Extracting Digital Elevation Model Data .................................................................... 1073
Creating Hill Shade Images ......................................................................................... 1074
Replacing Bad Values .................................................................................................. 1077
Rasterizing Point Data ................................................................................................. 1078
Converting Vector Topo Maps into Raster DEMs ...................................................... 1079
Using 3D SurfaceView ................................................................................................ 1081
Chapter 11
Radar Tools ........................................................................................ 1099
The Radar Menu .......................................................................................................... 1100
Opening and Preparing Radar Files ............................................................................. 1101
Calibrating Radar Files ................................................................................................ 1103
Removing Antenna Gain Variations ........................................................................... 1105
Resampling to Ground Ranges .................................................................................... 1107
Generating Incidence Angle Images ........................................................................... 1109
Adaptive Filters ........................................................................................................... 1110
Texture Filters ............................................................................................................. 1111
Creating Synthetic Color Images ................................................................................ 1112
Using Polarimetric Tools ............................................................................................. 1113
Using TOPSAR Tools ................................................................................................. 1140
Appendix A
ENVI Preference Settings ................................................................. 1143
Configuration Parameter Descriptions ........................................................................ 1144
Editing System Graphics Colors ................................................................................. 1163
Editing System Color Tables ....................................................................................... 1165
Installing Other TrueType Fonts with ENVI .............................................................. 1169
Modifying IDL CPU Parameters ................................................................................. 1171
Appendix B
ENVI ASCII File Formats ................................................................... 1175
ENVI ASCII Files ....................................................................................................... 1176
Appendix C
ENVI Spectral Libraries .................................................................... 1183
ENVI Spectral Library Files ........................................................................................ 1184
USGS Spectral Library (Minerals) .............................................................................. 1185
USGS Spectral Library (Vegetation) .......................................................................... 1186
Additional Vegetation Libraries .................................................................................. 1187
JPL Spectral Library .................................................................................................... 1188
IGCP264 Spectral Library ........................................................................................... 1189
JHU Spectral Library ................................................................................................... 1192
Appendix D
ENVI Map Projections ...................................................................... 1197
Introduction .................................................................................................................. 1198
Map Projections ........................................................................................................... 1199
Non-Standard Projections ............................................................................................ 1206
Appendix E
Vegetation Indices ............................................................................ 1211
Understanding Vegetation and Its Reflectance Properties ........................................... 1212
Vegetation Indices ....................................................................................................... 1221
This chapter describes the components of ENVI’s interactive displays, including display group
windows, vector windows, and plot windows. It covers the following topics:
2. Set the output image size on the page by entering the values into the xsize and
ysize fields. To maintain the relative aspect between x and y when one
dimension changes, select the Aspect check box.
3. Set the position of the image origin on the page (with respect to the lower left
corner) using the xoff and yoff parameters. An outline of the image showing its
relative size and position on the page appears within the draw window in the
upper-right of the dialog.
• To position the image on the output page, left-click inside the image
outline in the draw window and drag the image to a new position.
• To center the image outline on the page, right-click anywhere on the output
page.
4. Click the toggle button to select Landscape or Portrait as the page
orientation.
5. To scale an image to a specified map scale, enter the value in the Map Scale 1
field. The xsize and ysize values change automatically based on the image
pixel size (a default pixel size of 30 meters is assumed if no pixel size is present
in the header). The sketch following these steps illustrates the arithmetic.
6. Enable or disable color with the Color check box.
7. To create Encapsulated PostScript output, select the Encapsulate check box.
8. To select the number of output bits for the PostScript image, use the Bits drop-
down list.
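The relationship between map scale, pixel size, and output size is simple arithmetic. The following IDL sketch illustrates it only; the sample count, pixel size, and map scale values below are assumed for the example and are not read from any file or dialog.

; Illustrative only: how a map scale and pixel size determine output size.
pro map_scale_size_sketch
  compile_opt idl2
  n_samples  = 1000        ; image width in pixels (assumed)
  pixel_size = 30.0        ; meters per pixel (the default assumed when the header has none)
  map_scale  = 100000.0    ; a 1:100,000 output map (assumed)
  ground_width = n_samples * pixel_size          ; ground distance covered, in meters
  xsize_cm = ground_width * 100.0 / map_scale    ; size on the page, in centimeters
  print, 'Output xsize: ', xsize_cm, ' cm (', xsize_cm / 2.54, ' inches)'
end

For these assumed values, a 1000-sample image of 30-meter pixels covers 30 km of ground, so a 1:100,000 scale yields an output width of 30 cm.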
3. To add graphics overlays, click Add Graphics Option and select the desired
option. With the exception of annotation, these overlays must be currently
displayed on the image or plot.
4. To add an annotation file that is not currently displayed to the graphics overlay,
click Add Graphics Option and select Additional Annotation File. Select
the annotation filename.
5. Click OK. The Output Display/Plot to PostScript File dialog appears.
Note
To clip all graphics at the edge of the page when multiple pages of output are
generated, select the Clip Graphics check box.
Applying Masks
To apply masking to the image, click Select Mask.
2. Select the output format and resolution using the Resolution drop-down list.
Applying Masks
To apply masking to the image, click Select Mask.
Printing
You can send output of display groups, plot windows, and Vector windows directly to
system printers. You can send output to any of your system printers or plotters
through your native system printer dialog.
For steps on how to print, see Printing in ENVI in Getting Started with ENVI.
Creating QuickMaps
Use the QuickMap feature to simplify the process of creating a map product in
ENVI. You can quickly add grid lines, scale bars, titles, north arrows, declination
diagrams, and logos to your image to make a quick output map. When the parameters
are set, you can save the settings as a QuickMap template that you can use on other
images. After the output map is created, you can make additional changes with the
interactive overlay capabilities (for example, annotation) found in the Display group
menu bar, and you can output the map to PostScript or to a standard printer.
The following figure shows an example QuickMap display.
Note
Extremely small images may result in QuickMap displays too small to
accommodate multiple annotation objects.
6. Click OK.
• To change the font used for any title, select a new font from the
corresponding drop-down button and enter the point size. For more
information on fonts, see “Installing Other TrueType Fonts with ENVI” on
page 1169.
• To set the justification for any title, select from the corresponding drop-
down button. Center is the default.
• To change the grid line parameters (thickness, style, color, and so forth),
click Additional Properties.
• To add scale bars and grid lines, leave the corresponding check boxes
selected and edit the parameters as needed.
• To add a logo and adjust its placement, click Edit Logo Files and
Placements. Locate the file containing the logo (the file must contain three
bands of byte data: red, green, and blue). Use the QuickMap Logo File
Parameters dialog to change the logo size and position. Click OK to return
to the QuickMap Parameters dialog.
• To add a North arrow, leave the corresponding check box selected and
select an arrow type from the North Arrow Type drop-down button.
• To add a declination diagram, click Declination Values. In the Declination
Diagram Values dialog, enter the values and click OK.
• To return to the QuickMap Image Selection window and edit the image
subset or map scale, click Change Mapping Parameters.
2. In the QuickMap Parameters dialog, click Apply. ENVI automatically adds a
virtual border to the QuickMap image, places your selected titles and
annotation onto the image, and produces a map, which displays as a new
display group. The QuickMap Parameters dialog remains open.
Note
If you added a logo, it appears as a red box with an RGB label on the map,
but appears correctly when printed.
3. Use the QuickMap Parameters dialog to edit the map. Click Apply to view the
changes.
Note
To edit the borders, add more annotation or overlay elements, or edit the
existing map elements, use ENVI interactive display functions.
Printing QuickMaps
1. From the QuickMap Image window menu bar, select File → Print. The ENVI
QuickMap Print dialog displays.
2. Choose Output QuickMap to Printer or Standard Printing.
3. Click OK. The Output QuickMap printing scales the output correctly for the
parameters that you entered at the start of QuickMap. Standard printing does
not take into consideration the page size and map scale you entered when you
created the QuickMap.
If you selected a large page size during QuickMap setup, you might first test the
output on a small scale by using the standard printing option, then use the QuickMap
printing option to print to a larger page.
QuickMap Templates
After creating a QuickMap, you can save the parameters in a template file to use on
other georeferenced images of the same dimension and pixel size.
In the QuickMap Parameters dialog, click Save Template, enter a filename (ENVI
adds a .qm extension), and click OK.
To open and apply an existing map template, from the georeferenced image, select
File → QuickMap → from Previous Template or click Restore Template in the
QuickMap Parameters dialog, then click Apply to apply the QuickMap to the image.
To edit template parameters, see “Setting QuickMap Parameters” on page 26.
Overlays
Use the options from the Display group Overlay menu to add the following to your
image:
• Annotations using text, polygons, symbols, and so forth. See “Annotating
Images and Plots” on page 31.
• Classifications. See “Overlaying Classes” on page 44.
• Contour lines. See “Plotting Contour Lines” on page 50.
• Density slices. See “Interactive Density Slicing” on page 58.
• Grid lines. See “Adding Grid Lines” on page 63 for details.
• Regions of interest. See “Creating Regions of Interest” on page 69.
• Vector overlays. See “Overlaying Vectors” on page 70.
You can also use ENVI’s QuickMap feature to quickly overlay grid lines, titles,
declination diagrams, North arrows, and borders on georeferenced images. For
details, see “Creating QuickMaps” on page 24.
You can append a virtual border to an image and place any type of annotation object
in the border. If you want to use virtual borders, append the border to the image
before annotating the image, as described in the next section.
Adding Annotations
1. Select one of the following options:
• From the Display group menu bar select Overlay → Annotation.
• From any plot window menu bar, select Options → Annotation.
• From the Vector window menu bar, select Options → Annotate Plot.
The Annotation dialog appears. The current annotation mode displays in the
dialog title bar (for example, Annotation: Text). To hide the Annotation dialog
at any time without erasing your annotations, see “Showing and Hiding
Overlay Dialogs and Layers” on page 29.
2. If you are adding the annotation to a display group, select the Window you
want to use for adding the annotation. The choices are Image, Scroll, and
Zoom. To disable annotation mode, select Off.
3. From the Annotation dialog menu bar, select Object → annotation_type.
Where annotation_type is one of the following:
• Text: (default) Type the text in the field provided mid-dialog. Left-click on
the image or plot to place the text.
• Symbol: The available symbols display for the Misc font. Left-click to
select the desired symbol, which ENVI highlights in red. Left-click on the
image or plot to place the symbol. Additional symbols are available when
you select Special or Math from the Font drop-down button.
• Rectangle: Left-click and drag on the image or plot to draw the rectangle
or middle-click to draw a square.
• Ellipse: Left-click and drag on the image or plot to draw the ellipse or
middle-click to draw a circle.
• Polygon: Left-click on the image or plot to add polygon vertices. Right-
click to complete the polygon.
• Polyline: Left-click on the image or plot to add polyline vertices. Right-
click to complete the polyline.
• Arrow: Left-click and drag on the image or plot to draw the arrow.
• Scale Bar: For data that is not georeferenced, the Input Display Pixel Size
dialog appears for you to enter an x,y pixel value and units. When the
value is set (or if the data was already georeferenced), left-click in the
display or plot window to place the scale bar. The sketch after these steps
shows how the pixel size relates a bar's ground length to image pixels.
• Color Ramp: Left-click in the display or plot to place a gray scale wedge
or color table. The color table or wedge represents the currently applied
ENVI color table. For a gray scale image the color ramp is a gray scale
wedge from the minimum gray scale value to the maximum gray scale
value. For a color image, the color ramp shows the distribution of the
selected color palette.
• Declination: Left-click in the display or plot to place the declination
diagram. The declination diagram includes any combination of arrows
pointing to true North (shown with a star), grid North (shown with GN),
and magnetic North (shown with MN).
• Map Key: Left-click in the display or plot to place a map key that consists
of colored squares and corresponding labels for each map item or class in a
classification image. You can define map keys interactively. ENVI
automatically creates them as class keys for classification images. Vector
keys are automatically created for vector layers in the appropriate colors
with the layer names as labels. Vector keys show the vector symbol used
for points, a line for polylines, and a square outline for polygons. To define
or change items in the map key, see “Editing Map Key Items” on page 38.
• Image: Left-click in the display or plot to place other images, such as
imported logos or subsampled images, inside the current image or plot.
ENVI keeps a copy of the image in memory so it is best not to use large
images. Note that this is not the same as mosaicking.
• Plot: Click Select New Plot to select the plot from the Select Plot Window
dialog. Left-click and drag in the display or plot to place the selected plot
overlay. If annotating multiple plots, use a test image to annotate and
output multiple plots on a single page. From the ENVI main menu bar,
select File → Generate Test Data and create a blank white image (see
“Generating Test Data” on page 210 for further details). The plot colors
are automatically reversed to black on white for output.
Annotation type parameter settings in the Annotation dialog vary depending on
the type you select. See “Changing Annotation Characteristics” on page 35 for
parameter descriptions.
Mouse button functions vary for annotation mode. See “Annotation Mouse
Button Functions” on page 34 for details.
4. Use the colored diamond-shaped handle to move the annotation to another
location as needed.
5. Right-click to accept the annotation placement.
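For scale bars, the pixel size ties a bar's ground length to a distance in image pixels. This IDL sketch shows only that arithmetic, with both arguments assumed example values; it does not reproduce how ENVI draws the bar or accounts for the display zoom factor.

; Sketch: number of image pixels spanned by a scale bar of a given ground length.
pro scale_bar_length_sketch, length_meters, pixel_size_meters
  compile_opt idl2
  n_pixels = length_meters / pixel_size_meters
  print, 'A ', length_meters, ' m scale bar spans ', n_pixels, ' image pixels'
end

For example, a 3000-meter bar over 30-meter pixels spans 100 image pixels.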
The following fields appear in the Annotation dialog, depending on the annotation type you select:
Head Angle: Sets the angle of the lines on the arrow head for arrow annotations, in degrees. The smaller the angle, the narrower the arrow head.
Head Size: Sets the size of the arrow head for arrow annotations.
Height: Sets the height of the scale bar, in pixels.
Inc: Sets the number of increments shown on the scale bar or on the color ramp. Use with the Sub Inc field.
Len: Sets the length of the color ramp.
Length: Sets the length of the scale bar, in scale bar units, or the length of the declination diagram, in pixels.
Line Style: Sets the line style (solid, dotted, and so forth) for rectangle, ellipse, polygon, and polyline annotations.
Magnetic North: Sets the angle of the magnetic North arrow for declination diagrams, in degrees. The North arrows are not drawn to scale, so the arrows appear separated.
Max: Sets the label for the maximum value to display on the color ramp. Use with the Min field.
Min: Sets the label for the minimum value to display on the color ramp. Use with the Max field.
Orien: Sets the orientation angle for the annotation. Enter the value of the angle in degrees (counterclockwise, with respect to the horizontal [0 degrees]).
Plot xsize: Sets the x size for plot annotations, in pixels.
Plot ysize: Sets the y size for plot annotations, in pixels.
Precision: Sets the number of significant figures for the precision label in the color ramp. For example, 0.25 is a precision of 2, and .03 is a precision of 1. ENVI places the labels at the bottom of horizontal ramps and to the right of vertical ramps.
Ramp: Sets the color ramp orientation either horizontally or vertically. For horizontal selections: Horz → shows low values to high values left to right; Horz ← shows high to low left to right. For vertical selections: Vert ← shows high values to low values bottom to top; Vert → shows low to high bottom to top.
Reset Items: For map key annotations, provides options to reset the parameters to either the default classification or the default vector window.
Rotation: Sets the orientation of the rectangle, ellipse, polygon, or polyline annotations.
Scale: Sets the scale type(s) to include in the scale bar. Select any combination of kilometers, miles, meters, and feet by selecting the scale type, then toggling the button next to the drop-down list to On (include in the scale bar) or Off (do not include).
Select New Image: For image annotation, opens the Annotation Image Input Bands dialog, where you can select the bands to include.
Select New Plot: For plot annotations, opens the Select Plot Window dialog, where you can select the plot to use.
Size: Sets the pixel size of the font specified in the Font field. The default is 12.
Spc: Sets the line spacing for rectangle, ellipse, polygon, polyline, and arrow annotations.
Sub Inc: Sets the number of sub-increments within the main increment shown on the scale bar. Use with the Inc field.
symbols: Symbol selection field. Click Font, then select Misc, Special, or Math to see all available symbols.
• For a vector polyline item, use the Line Style drop-down list to select
different line styles.
• For a vector point item, use the Symbol drop-down list to select the
symbol type to show in the key.
3. To change other parameters within the Annotation dialog, use the following:
• To set a background color, click Background and select a color.
• To change the text font for the key, the size, and the thickness of the letters,
select Object → Text.
• To change class colors, see “Mapping Class Colors” on page 123.
Editing Annotations
You can move and change the attributes of annotation objects that are fixed in the
image.
1. From the Annotation dialog menu bar, select Object → Selection/Edit.
2. Select Options → Show Object Corners.
3. Left-click and drag to draw a box around the corner of the object in order to
select it.
4. Left-click on the handle and drag the object to a new location.
5. Change the object’s attributes by entering the new parameters in the
Annotation dialog.
Selection options are:
• To select multiple objects at once, left-click and drag a box around the objects.
• When more than one object is selected, the previously selected objects remain
active.
• When many objects are selected, using a handle to move one moves them all;
changing attributes changes all the selected objects’ attributes.
• To edit selected vector objects (rectangle, ellipse, polygon, polyline, arrow),
left-click on a vertex and drag it to the desired location.
• To deselect an object, middle-click on the handle of the object.
• To deselect multiple objects, middle-click and drag to draw a box around the
objects.
• To deselect all of the annotation objects within an image, right-click within the
image.
• To select only those objects and deselect any previously active objects, right-
click and drag to draw a box around objects.
Additional options are available for selected annotation objects under the Selected
menu.
Annotation Options
In the Annotation dialog, use the Selected or Options menus to access annotation
options. Options available when working with annotation objects include undoing the
last action, joining polygons, swapping the positions of overlapping objects,
duplicating or deleting annotation objects, adding virtual borders to an image, turning
annotation mirroring on and off, and showing or hiding annotation object corners.
The options available from the Annotation dialog Selected menu are:
• Select All: Select all annotation objects.
• Join: Join the overlapping parts of two polygons.
• Swap: Bring an underlying object to the top.
• Duplicate: Duplicate existing annotation objects so you do not have to re-
create them.
• Delete: Delete all of the annotation objects within the image.
• Undo: Undo the last change made.
The options available from the Annotation dialog Options menu are:
• Turn Mirror On: Mirrors shapes and polygons around the center of the
image. Use mirroring with rectangles, ellipses, polygons, and polylines only. It
is intended for building custom filters for FFT filtering (see
“Defining FFT Filters” on page 562).
• Turn Mirror Off: Disable mirroring.
• Show Object Corners: Shows object corners. Use object corners to make it
easier to include corners in the selection box while in Selection mode. ENVI
can show object corners around all annotation objects except the vectors
(rectangle, ellipse, polygon, polyline, and arrow). It plots corners as small
asterisks around the annotation object.
• Hide Object Corners: Hide object corners.
2. In the Set Snap Value dialog, use the increase/decrease buttons to select a value
or enter the value in the Snap field.
3. Click OK.
Overlaying Classes
Use Classification to overlay classes on a gray scale or color image in a display
group; to control which classes display; to collect statistics; to edit the class colors
and names; to merge classes; and to edit classes by adding, deleting, or moving pixels
between classes.
Note
You must generate a classification image before using this function (see
“Classification Tools” on page 399).
1. From the Display group menu bar, select Overlay → Classification. The Input
File dialog appears. Select the classification file and click OK. The Interactive
Class Tool dialog appears, listing all the classes and class colors and names.
The active class displays at the top of the dialog. The Active Class is the class
to which to apply any operations, such as statistics or editing. To hide the
Interactive Class Tool dialog at any time without erasing your classes, see
“Showing and Hiding Overlay Dialogs and Layers” on page 29.
2. Select from the following options to edit and apply classes to your image.
• To change the active class, click the color box next to the class name.
• To display a class on the image select the On check box next to the class
name. You can display any number of classes at once.
• To display a single class and hide all other classes, double left-click on the
color box next to the class to display.
• To hide the currently displayed class and display all the other classes,
double left-click on the current class again.
• To hide all the classes, right-click on any color box.
• To display all the classes when all classes are hidden, right-click on any
color box.
Editing Classes
Use the Edit menu in the Interactive Class Tool dialog to edit classes by adding,
deleting, or moving pixels between classes and by drawing polygons, rectangles, or
ellipses.
Tip
Save your changes often by selecting File → Save. This way, if you make a mistake
you do not have to re-create everything.
When you are editing classes, make sure the active class is the class you want to edit.
Mouse button functions differ from the normal display mode when you use edit
mode. To return the mouse button functions to normal display mode, select Edit →
Mode: No Editing, or select Off from the Edit Window radio buttons.
Saving Changes
To save changes, select File → Save from the Interactive Class Tool dialog menu bar.
Class Options
Use the Interactive Class Tool Options menu to obtain statistics for classes and to
change class colors and names.
Merging Classes
To merge one or more classes into a selected base class:
1. From the Interactive Class Tool dialog menu bar, select Options → Merge
classes. The Interactive Merge Classes dialog appears.
2. Select the Base Class name.
3. Select the names of the Classes to Merge into the Base Class.
4. Click OK. The next time you open the Class Distributions dialog, it is updated.
• To plot the mean spectrum for all classes, select Options → Mean for all
classes.
2. If the Input File Associated with Classification Image dialog appears, select
the input file from which to calculate the statistics and click OK. This dialog appears
only if an input file has not already been set when using another Interactive Class
Tool dialog menu option.
ENVI computes the statistics and displays the mean spectra in a Class Means
plot window.
Plotting Statistics
1. Select one of the following from the Interactive Class Tool dialog menu bar:
• To plot the statistics for the active class, select Options → Stats for active
class.
• To plot the statistics for all classes, select Options → Stats for all classes.
2. If the Input File Associated with Classification Image dialog appears, select
the input file to calculate the statistics from. The dialog appears only if an input
file has not been set previously.
3. ENVI computes the statistics and displays a Class Statistics Results window.
The mean spectrum is in white, the +/- one standard deviation is in green, and
the minimum and maximum spectra are in red.
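These class statistics are ordinary per-band summaries over the pixels assigned to a class. The IDL sketch below shows the idea for one class; the [samples, lines, bands] array layout and the variable names are assumptions for illustration, not ENVI code.

; Sketch: per-band mean, standard deviation, minimum, and maximum for the
; pixels of one class. Assumes data is a [ns, nl, nb] array and class_image
; is a [ns, nl] classification array of the same spatial size.
pro class_stats_sketch, data, class_image, class_id
  compile_opt idl2
  dims = size(data, /dimensions)
  nb = dims[2]
  idx = where(class_image eq class_id, count)   ; pixels belonging to the class
  if count eq 0 then return
  for b = 0, nb - 1 do begin
    band = (data[*, *, b])[idx]
    print, 'Band', b + 1, ': mean=', mean(band), '  stddev=', stddev(band), $
           '  min=', min(band), '  max=', max(band)
  endfor
end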
Tip
To get a text report and more detailed statistics for your classes, select
Classification → Post Classification → Class Statistics from the ENVI main
menu bar.
2. In the ENVI Question dialog, click Yes. ENVI overwrites the current file with
the new classification image.
1. From the Display group menu bar, select Overlay → Contour Lines. The
Input Contour Band dialog appears.
2. Select the band from which to generate the contours. Only bands that are the
same size as the displayed image are available. The Contour Plot dialog
appears, listing eight default contour levels in the Defined Contour Levels
area. ENVI defines these levels from minimum and maximum data values
calculated using the Scroll window; the Min and Max fields show this range
(see the sketch after these steps). To hide or show the Contour Plot dialog at
any time without erasing your contours, see "Showing and Hiding Overlay Dialogs and Layers" on page 29.
3. Enter the minimum and maximum values in the appropriate fields. Click Reset
to return the range to its initial values.
4. By default, ENVI plots the contours in the Image, Scroll, and Zoom windows.
To plot the contours in fewer windows, clear the appropriate Window check
boxes at the bottom of the dialog. The check boxes that remain selected
indicate in which window to draw the contour.
5. Click Apply to plot the contours.
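Default levels of this kind are evenly spaced across the Min to Max range. The IDL sketch below shows one common spacing choice (levels placed strictly inside the range); it is illustrative arithmetic only and is not necessarily the rule ENVI applies.

; Sketch: evenly spaced contour levels between a data minimum and maximum.
pro contour_levels_sketch, data_min, data_max, n_levels
  compile_opt idl2
  step = (data_max - data_min) / float(n_levels + 1)
  levels = data_min + step * (findgen(n_levels) + 1.0)
  print, 'Contour levels: ', levels
end

For example, contour_levels_sketch, 0.0, 90.0, 8 prints levels at 10, 20, ..., 80.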
Removing Levels
To remove a level from the Defined Contours List:
1. Select a specific contour level.
2. Click Delete Level.
3. Click Apply to replot the contours.
Clearing Levels
To clear all the levels from the Defined Contours List in the Contour Plot dialog,
click Clear Levels.
8. Click the toggle button to select Use same color for each level (plot all
contours in the same color) or Increment colors for each level (plot each
contour level in a different color). The different colors of contour levels follow
the list of graphic colors.
9. Click OK to enter the new contours into the Defined Contour Levels list.
10. Click Apply to plot the contours to the display group.
1. From the Contour Plot dialog menu bar, select Options → Reset Contour
Labels.
2. Click Apply.
3. Adjust other parameters as needed. See “Saving to Image Files” on page 18 for
details.
4. Open the image in a new display group. It should contain the contours.
5. From the Display group menu bar, select File → QuickMap → New
QuickMap or from Previous Template. This QuickMap now contains the
contour overlays. This suggested workaround may not produce the best results
in all situations. During the vector-to-raster conversion (saving the contours to
the image), the vectors are pixelized and will affect the quality of the
QuickMap. See “Creating QuickMaps” on page 24 for more information.
3. Enter the minimum and maximum values in the appropriate fields to change
the density slice range. To reset the data range to the initial values, click Reset.
4. Select whether to apply the density slice colors to the Image window, Scroll
window, or both windows by selecting the desired check boxes next to
Windows at the bottom of the dialog.
5. Click Apply to apply the selected ranges and colors to the image.
• To remove a range from the list, select the data range and click Delete
Range.
• To clear the list of density slice ranges, click Clear Ranges.
Adding Ranges
To add new ranges to the list in the Density Slice dialog:
1. From the Density Slice dialog menu bar, select Options → Add New Ranges.
The Add Density Slice Ranges dialog appears.
2. Enter the Range Start, Range End, and # of Ranges values in the appropriate
fields (see the sketch after these steps).
3. Choose the starting color from the color button. The colors of multiple ranges
follow the list of graphics colors.
4. Click OK. The Density Slice dialog appears. ENVI lists the ranges in the
dialog, and you can edit them.
5. Click Apply to apply the density slicing colors to the image.
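Splitting a start-to-end interval into equal ranges is plain arithmetic, as the IDL sketch below shows. It is illustrative only and does not reproduce ENVI's exact range boundaries or color assignment.

; Sketch: divide a data interval into n equal density-slice ranges.
pro density_ranges_sketch, range_start, range_end, n_ranges
  compile_opt idl2
  width = (range_end - range_start) / float(n_ranges)
  for i = 0, n_ranges - 1 do begin
    lo = range_start + i * width
    hi = lo + width
    print, 'Range ', i + 1, ': ', lo, ' to ', hi
  endfor
end

For example, density_ranges_sketch, 0.0, 255.0, 5 prints five ranges, each 51 data units wide.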
Changing Bands
1. From the Density Slice dialog menu bar, select Options → Change Density
Slice Band. The Density Slice Band Choice dialog appears.
2. Change the band used for the density slice data ranges by selecting the
bandname.
3. Click OK. ENVI applies the new density slice coloring.
Setting Defaults
1. From the Density Slice dialog menu bar, select Options → Set Number of
Default Ranges. The Set Number of Ranges dialog appears.
2. Enter the number of default density slice ranges.
3. Click OK.
Resetting Defaults
To reset the Defined Density Slice Ranges list to the default ranges and colors
determined by the Min and Max values and number of set default ranges:
1. From the Density Slice dialog menu bar, select Options → Apply Default
Ranges.
2. Click Apply to apply these ranges and colors to the display.
Changing Borders
To change the display borders size and color:
1. From the Grid Line Parameters dialog menu bar, select Options → Set
Display Borders.
2. Enter the border size for the left, top, right and bottom in pixels.
3. To select the border color, click Border Color.
Overlaying Vectors
Use ENVI Vector Tools to view vector data such as USGS Digital Line Graphs
(DLG), USGS DLGs in Spatial Data Transfer Standard (SDTS) format, DXF files,
ARC/INFO Interchange files, and shapefiles. ENVI provides two ways of displaying
and working with vectors:
• Using the menu selections in the display group and the Vector Parameters
dialog (Figure 1-13).
Figure 1-13: Image Display Group (left) and Vector Parameters Dialog (right)
• Using the vectors by themselves in a Vector window with the same menu bar
as the Vector Parameters dialog and using the right-click menu selections (see
“Working with Vectors” on page 1013 for details).
The method for overlaying vectors onto an image display is discussed in the sections
that follow.
Tip
The Mouse Button Descriptions window is useful when working with vectors
because it provides information about the function of each mouse button at any
given cursor location.
Use Vectors to overlay vector layers on an image, to control the appearance of the
vectors, and to interact with the vector attributes. You can also use ENVI’s interactive
vector functions to edit and query attributes associated with shapefiles and to create
your own vector files and attributes.
If you have many display groups open and the Vector Parameters dialog associated
with the current display group is hidden behind other windows, you can use the
display group right-click menu to find its Vector Parameters dialog and bring it to the
front.
To do this, right-click anywhere in the display group and select <Find Vector
Parameters>.
When vectors display in a Vector window, the active/available layers are viewable
under the Vector window right-click menu.
Selecting the Active Vector Layer
The active vector layer is the layer to which all editing or queries are performed.
To select the active vector layer from the Image window, right-click inside the Image
window and select which layer is active by choosing Select Active Layer →
layer_name.
To select the active vector layer from the Vector Parameters dialog, select the layer
name in the Available Vector Layers list to make it the active layer.
You can optionally use a right-click menu to turn layers on and off:
1. Right-click on the layer name in the Available Vector Layers list, select
Select Active Layer, and click on a layer to make it the active layer.
2. Right-click again and select the Active Layer Off/On.
Matching Histograms
Use Histogram Matching to automatically match the histogram of one displayed
image to another displayed image. This function makes the brightness distribution of
the two images as close as possible. ENVI changes the histogram of the display in
which you start the function to match the current (source) histogram of the image
display that you select; the histogram source you choose serves as the input
histogram. You can use this feature on both gray scale and color images.
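Conceptually, histogram matching builds a lookup table that maps each gray level in the source display to the gray level in the reference whose cumulative histogram value is closest. The IDL sketch below shows that idea for byte data; it is a simplified illustration under the assumption of byte inputs, not the routine ENVI runs.

; Sketch: match the histogram of a byte image to that of a reference byte image
; by comparing cumulative distribution functions (CDFs).
function match_histogram_sketch, source, reference
  compile_opt idl2
  h_src = histogram(source, min=0, max=255, binsize=1)
  h_ref = histogram(reference, min=0, max=255, binsize=1)
  cdf_src = total(h_src, /cumulative) / total(h_src)
  cdf_ref = total(h_ref, /cumulative) / total(h_ref)
  lut = bytarr(256)
  for dn = 0, 255 do begin
    diff = abs(cdf_ref - cdf_src[dn])
    lut[dn] = (where(diff eq min(diff)))[0]   ; reference DN with the closest CDF value
  endfor
  return, lut[source]                          ; remapped image
end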
To perform Histogram Matching, you must have at least two images displayed.
1. From the Display group menu bar of the histogram that you want to change,
select Enhance → Histogram Matching. The Histogram Matching Input
Parameters dialog appears.
2. In the Match To list, select the display number of the image with the
histogram you want to match to.
3. Under Input Histogram, select the source of the input histogram by selecting
the appropriate toggle button: Image, Scroll (subsampled data), Zoom, Band
(all pixels), or a ROI (region of interest).
4. Click OK. The display stretch changes to match the selected histogram.
Note
To see how the histogram matched, select Enhance → Interactive
Stretching in the image where you applied the histogram match.
The resulting dialog shows two histograms in the Output Histogram plot: the
imported histogram in red and the matched output histogram in white.
When the values are entered, the output histogram updates to reflect the changes
made to the input histogram and shows the distribution of the data with the new
stretch applied.
Stretch Types
Use the Stretch_Type menu in the Interactive Contrast Stretching dialog to select
from a list of all available types of interactive stretches.
For information about interacting with the Interactive Histogram window, see
"Plotting Histograms in an ENVI Plot Window" on page 89.
3. Click Apply to apply the stretch to the displayed data. To re-display the
original stretch, select Options → Reset Stretch.
The selected stretch can also be permanently applied to the displayed image as
described in “Converting Stretched Data” on page 89.
Matching Histograms
Also use the Arbitrary Contrast Stretching function to match a histogram from one
image to the histogram of another.
1. Grab either the input or output histogram from one plot by left-clicking on the
Input Histogram or Output Histogram label at the top of the plot.
2. Drag the name into the other output histogram and release the button. The
imported histogram is plotted in red and the output histogram is stretched to
match the imported histogram.
3. Click Apply to apply the stretch to the displayed data.
2. In the Maximum Bins field, enter the maximum limit for the number of bins
used in the histogram.
Resetting Stretches
To reset the stretch to what it was initially, select Options → Reset Stretch from the
Interactive Histogram window menu bar.
Applying Stretches
If you selected Options → Auto Apply: Off from the Interactive Histogram window
menu bar, click Apply to apply the stretch parameters.
You can also set the default Interactive Stretch Auto Apply preference.
Saving Histograms
Standard ENVI output options include Image, PostScript, BMP, HDF, JPEG, PICT,
SRF, TIFF, and XWD formats.
From the Interactive Contrast Stretching dialog menu bar, select File → Save Plot
As → PostScript or Image File. To print the Interactive Histogram window, select
File → Print.
The interaction is similar to that for saving plots, but no annotation is allowed. For
details, see “Saving Images from Displays” on page 15.
To save an LUT, select File → Save Stretch to LUT → ASCII LUT or ENVI
Default LUT from the Interactive Contrast Stretching dialog menu bar.
• ASCII LUT saves the lookup table to a file as a single column of ASCII data, with the parameters Binsize and Data Min at the top of the file. The first value in the data column is the LUT value for the input data minimum, the next value is the LUT value for the input data minimum plus the binsize, and the remaining values in the column continue in the same manner up to the input data maximum value (see the sketch following this list).
• ENVI Default LUT saves the LUT to an ENVI binary format file. This file is
automatically named with the input filename and an .lut extension and saved
in the same directory as the input file (or in the alternate header directory).
When the data band displays in ENVI, this LUT is automatically used as the
default stretch.
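As a sketch of how the ASCII LUT format described above could be applied outside ENVI, the following IDL fragment maps image data to display values through the saved table. The variable names data, lut, binsize, and data_min are assumptions (lut holds the single column of values; binsize and data_min come from the parameters at the top of the file).

   index = long((data - data_min) / binsize)    ; bin number for each pixel
   index = index > 0 < (n_elements(lut) - 1)    ; clamp to the table range
   stretched = lut[index]                       ; stretched display values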
Linking Displays
Use Link to link and unlink images. When you link images, actions such as moving
the Zoom box in the Image window, the Image box in the Scroll window, changing
the zoom factor, or resizing one Image window are mirrored in all other linked Image
windows. Linking images works automatically in the following situations:
• One image is a spatial subset of the other, and the xstart and ystart fields in the header files are accurately set (if the subset was produced in ENVI, these settings are made automatically).
• The images are both georeferenced in the same projection and have identical pixel sizes.
• Both images are already exactly co-registered (that is, they have the same pixel size and orientation, and identical map coverage on the ground).
The link offset can be manually set to overcome these limitations if the images have
identical pixel sizes and are in a similar projection.
To use the link function, you must have at least two Image windows open. The Link
option is not available when only one image displays.
1. From the Display group menu bar, select Tools → Link → Link Displays.
The Link Displays dialog appears.
2. Select displays to link by using the toggle button for each available display to select Yes or No.
3. Specify the link pixel for each image by specifying the Link xoff (x offset) and
yoff (y offset) parameters in pixels measured from the upper-left (1,1) corner
of each image.
4. Select the base image for the link by choosing the appropriate display from the
Link Size/Position drop-down list.
5. Toggle the Dynamic Overlay to On or Off for your linked images. When
Dynamic Overlay is On, left-clicking in one display overlays the image from
the linked display.
6. Set the Transparency level of the overlay in the Link Displays dialog, from 0 to 100%. A value of 0% results in the second display completely masking the display in which the mouse button was clicked, and 100% results in a completely transparent overlay. A transparency of 50% blends the underlying image and the dynamic overlay equally (see the sketch following these steps).
7. Click OK. ENVI sizes and positions all other images to correspond to the base
image.
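As a sketch of the transparency setting described in step 6, the blend of the two linked displays can be thought of as a weighted sum, where t is the transparency value from 0 to 100 and overlay and base are the two displayed images (these names are assumptions, not ENVI code):

   blended = (1.0 - t/100.0) * overlay + (t/100.0) * base   ; t=0 shows only the overlay

With t = 50, the underlying image and the dynamic overlay are weighted equally.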
If you already have images linked and want to link another image, open the Link
Displays dialog and select Yes for that display using the toggle button.
To remove a single Image window from the link, select Tools → Link → Unlink
Display in that Image window. The other windows remain linked.
4. Select from the following options. See “Mouse Button Functions with
Dynamic Overlays On” and “Mouse Button Functions with Dynamic Overlays
Off” on page 95 for additional information.
• To show a second linked image (the overlay) in the first image (the base),
left-click in any of the linked images.
• To cause the multiple overlays to cycle, successively displaying each
linked image as an overlay on the base image, left-click and hold the
mouse button and simultaneously middle-click. You can also cycle through
all the linked images by left-clicking and holding the mouse button and
pressing the n key on your keyboard.
• To move the overlays around inside a specific image and compare the two
images, left-click and hold the mouse button and move the cursor in the
image.
• To change the size of the overlay, middle-click, drag the corner of the
overlay, and release the mouse button. After resizing, left-click to
reposition the overlay.
• To do a quick comparison of the images (flicker), repeatedly left-click and release the mouse button to activate the overlay effect. Select Tools → Link → Dynamic Overlay Off from the Display group menu bar to disable the flickering feature.
• When displays are linked, you can reposition the Zoom window by left-
clicking and dragging when the cursor is within the Zoom box in the
Image window.
• To save to memory the image displayed in the Image window with the
dynamic overlay shown, press the p key on your keyboard while left-
clicking and holding the mouse button. ENVI adds the resulting output to
the Available Bands List.
Table 1-3: Mouse Button Functions – Linked Images with Dynamic Overlays
Button          Description
Left            Click and drag the overlay (see the left mouse button description for Zoom box functions in the following table for an exception).
Middle          Resize the overlay.
Right           Click to display the right-click menu.
Left + Middle   Cycle multiple overlays.
3. When the x or y profile is extracted, left-click and hold the mouse button in the profile plot and drag the cursor to mark the current position on the profile with a crosshair cursor in the image. The Zoom box crosshair concurrently tracks the location in the profile on the Scroll, Image, and Zoom windows, and the Image and Zoom images are updated to match the position of the cursor along the profile.
Extracting Z Profiles
Use ENVI’s Z Profiles to interactively plot the spectrum (all bands) for the pixel
under the cursor. You can extract spectra from any multispectral dataset including
MSS, TM, and higher spectral dimension data such as GEOSCAN (24 bands),
GERIS (63 bands), and AVIRIS (224 bands).
Vertical Plot bars in the Z Profile window show which band or RGB bands are
currently displayed in the window. You can interactively change the bands shown in
the window by moving the plot bars to new band positions.
For datasets with fewer than approximately 50 spectral bands, the extraction and
plotting of spectra are fast enough that you can use a BSQ data file. For higher
spectral dimension datasets such as hyperspectral data, using a BIL or BIP file allows
real-time extraction of spectra (see “Converting Data (BSQ, BIL, BIP)” on page 271).
Note
It is recommended to use the BIL data format for hyperspectral datasets because it
produces a response similar to the BIP data for spectral plotting and browsing, yet it
is much faster than BIP format for image display.
Figure 1-21: Z (Spectral) Profile Plots: Landsat Thematic Mapper (left) and AVIRIS (right)
Note
When working with complex data types, a Z Profile reports different values than the Cursor Location/Value tool. This is because Z Profiles cannot plot both real and imaginary numbers the way the Cursor Location/Value tool reports them; Z Profiles show the log of the absolute data values. To show only the real part of an image without taking the log, you must write out the real portion of the image as a separate dataset.
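As a sketch of the value a Z Profile plots for a complex pixel, in IDL the plotted quantity and the real portion are (complex_band is an assumed variable name):

   plotted = alog(abs(complex_band))      ; log of the absolute (magnitude) value
   real_only = real_part(complex_band)    ; real portion, to save as a separate dataset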
Browsing Spectra
To perform spectral browsing, middle-click in the Zoom box, then left-click and drag
the box across the image.
5. Right-click to select the final vertex and complete the transect. A handle (color
diamond shape) is placed on the drawn transect.
• To move the transect, left-click and drag the handle.
• To delete the transect, middle-click.
6. To extract and display the profile in a plot window, right-click.
If the transect is extracted from a three band color composite image, then three
profiles display in the plot window. The red band profile is a solid line, the
green band profile is a dotted line, and the blue band profile is a dash-dot line.
7. Left-click in the profile plot window and move the cursor to mark the current
position on the profile with a crosshair cursor in the image.
The Zoom box concurrently tracks the location in the profile on the Scroll,
Image, and Zoom windows, and the Image and Zoom images are updated to
match the position of the cursor along the profile.
8. To define another arbitrary profile, left-click in the Image, Scroll, or Zoom
window again to define the new vertices. The new profile is drawn and plotted
in a new color in a new plot window.
4. Enter the name of the output file for the transect points and their related band
profile values.
5. Click OK to save the transect points to the specified ASCII file. The resulting
text file contains the following columns:
• Point: The index number describing the location of the point within the
transect.
• X and Y: The point location in pixel values.
• Euclidean Distance: This value describes the length of a straight line between a transect point and the transect origin. For a straight transect line, this value and the Cumulative Distance value are the same.
• Cumulative Distance: This value describes the total length of the transect up to the point specified. For a straight transect line, this value and the Euclidean Distance value are the same (see the sketch following these column descriptions).
• Band n: These columns contain the pixel values at each point in the
transect for each band selected.
If the image contains georeferencing information, the text file also includes the
following projection columns:
• Map X and Map Y: The point location in map projection units.
• Latitude and Longitude: The point location in geographic latitude and
longitude coordinates in decimal degrees.
• The rest of the columns contain the pixel values at each point in the
transect for each band selected.
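As a sketch of the two distance columns, the following IDL fragment computes them from the transect vertex locations; the vectors x and y (pixel coordinates of the transect points) are assumptions:

   eucl = sqrt((x - x[0])^2 + (y - y[0])^2)       ; straight line back to the transect origin
   seg  = sqrt((x[1:*] - x)^2 + (y[1:*] - y)^2)   ; length of each segment
   cum  = [0.0, total(seg, /cumulative)]          ; running length along the transect

For a straight transect, eucl and cum are identical, as noted above.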
Selecting the map-based option initiates the Enter Transect Endpoints dialog. When map information is available for the image, the default setting is map-based.
• First Map Coordinate: The starting geographic or map-based coordinate
of the transect.
• Second Map Coordinate: The ending geographic or map-based
coordinate of the transect.
2. Click OK. The transect produces a spatial profile that is listed in the Extracted
Profiles section of the Spatial Profile Tool dialog. The resulting spatial profile
displays in a new plot window.
2. Drag the corner to define the box, then release the mouse button to redraw the
enlarged profile subset.
3. To set the y plot range to encompass the full range of all plotted data, middle-
click to the left of the plot frame.
6. Click OK to load the spectrum or other (x,y) plot into the plot window. When
loaded, all of the other plot options are available.
5. Click OK. ENVI loads the spectra into the plot window.
3. To change the color of the plotted line, use the color button.
4. To select the style of the line (for example, dotted, dashed, solid), choose from the Line Style drop-down list.
5. To set the thickness of the line, use the Thick field to adjust the value or enter
a new value.
6. To set the number of points to average in the x direction (smoothing) when
plotting the data, enter the value in the Nsum field and press Enter.
7. To select the symbol type, select from the Symbol drop-down list.
8. To control the size of the displayed symbols, use the SymSize field.
9. To display the line along with the selected symbols or display only the
symbols, use the toggle button Symbol & Line or Symbol Only.
10. Click Apply.
Modifications applied with the Plot Parameters dialog only appear in the plot
window when the Plot Parameters dialog is open. If you close the Plot
Parameters dialog, the plot parameters return to their default values.
• To change the length of the major tick marks when they are present, enter a
value between 0 and 0.5 in the Tick Length field, and the number of minor
tick marks in the Minor Ticks field. Lengths are measured as a ratio of the
axis length normalized to 1.0. For example, a length of 0.02 results in ticks
that are 2% of the length of the entire axis. A length of 0.5 results in lines
drawn across 50% of the plot that meet in the middle (the equivalent of the
grid option).
• Select either Auto or Fixed next to the label Tick Marks. The Auto option
places a predetermined number of major and minor ticks on the axis.
Major tick marks are labeled. The Fixed option allows you to enter the
axis parameters. This includes the starting and ending major ticks, the tick
increment between the major ticks, and the number of minor ticks between
major ticks.
• To control the size of the margins around the plot axes, enter the margin
size (in characters) in the Left Margin and Right Margin (for the x axis)
and the Bottom Margin and Top Margin (for the y axis) fields.
8. Click Apply.
Clearing Plots
The Clear Plot selection appears under the Options menu in plot windows if they
were created using ENVI spectral library functions or using New Window.
To clear all of the displayed plots within one of these windows, select Options →
Clear Plot.
Profiles cannot be cleared because they contain the profile for the current pixel.
• For plots of images with wavelengths in the image header, to plot the wave number (1/wavelength) on the x axis, select Plot_Function → X Axis: 1/Wavelength.
• To replot the data displayed in the window with its continuum removed, select
Plot_Function → Continuum Removed. The continuum is the convex hull
that fits over the data and is divided into the original data values to produce the
continuum removed values (see “Using Continuum Removal” on page 785 for
details). The continuum is calculated using the first and last data points
displayed in the plot, so for plots that have been zoomed, the continuum is
calculated based on the displayed data range only.
• To replot the data displayed in the plot window as binary encoded plots (0s and 1s), select Plot_Function → Binary Encoding. Binary encoding calculates the mean of the data and encodes each value as a 0 if it is less than or equal to the mean and as a 1 if it is greater than the mean (see “Applying Binary Encoding Classification” on page 416 for details, and the sketch following this list).
• To replot the original data values, select Plot_Function → Normal.
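As a sketch of the binary encoding calculation described above (plain IDL, not the ENVI plot-function interface), a spectrum is reduced to 0s and 1s by comparing each value to the spectrum mean; spectrum is an assumed variable name:

   encoded = spectrum gt mean(spectrum)   ; 1 where greater than the mean, 0 otherwise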
You can add your own IDL plot function to this menu by entering the name of the
function into the useradd.txt file in the ENVI menu subdirectory (see “Plot
Functions” in the ENVI Programmer’s Guide) and adding a .pro or .sav file
containing the function code to the ENVI save_add subdirectory.
Color Mapping
Use Color Mapping to apply color tables to images, to create interactive density
sliced images, to control the RGB image planes, and to change classification color
mapping.
You can save a color image displayed using any method described in this section to
an RGB color image. See “Saving to Image Files” on page 18.
The dialog contains a gray scale wedge (or color wedge if a color table is
applied) and two sliders to control the contrast stretch. It also has two menus,
File and Options.
2. Select one of the following options from the ENVI Color Tables dialog menu
bar:
• To have any color table changes applied to your images automatically,
select Options → Auto Apply: On.
• To have changes applied manually, select Options → Auto Apply: Off.
After making changes, select Options → Apply. The Auto Apply option
is automatically set in 8-bit mode.
3. Move the Stretch Bottom and Stretch Top sliders to control the minimum and
maximum values to display. Moving the Stretch Bottom slider to the right
causes bright areas of the image to become darker, while moving the Stretch
Top slider to the left causes dark areas of the image to become brighter. To
invert the stretch move the Stretch Bottom slider all the way to the right and
the Stretch Top slider all the way to the left.
If Auto Apply is on, the new contrast stretch is applied to the image
immediately.
4. Apply a selected color table automatically to the current image by selecting the
color table name. ENVI provides a number of pre-saved color tables. The B-W
linear table provides a gray scale image. The RAINBOW color table provides
a cool-to-hot density slice. Other color tables let you apply your preferred color scheme.
To reset the original color tables and stretch, select Options → Reset Color
Table.
1. From the Display group menu bar, select Tools → Color Mapping → Class
Color Mapping.
If your classification is overlaid on a base image, you can access class color
mapping from the Interactive Class Tool dialog by selecting Options → Edit
class colors/names (see “Overlaying Classes” on page 44).
The Class Color Mapping dialog appears.
2. In the Selected Classes list, select the name of the class to change.
3. In the Class Name field, change the name.
4. Select the color system RGB, HSV, or HLS from the drop-down list (see
“Color Transforms” on page 527 for information about color spaces).
5. Select one of the following options to set the values of the class colors:
• Select a color by clicking Color.
• Move the three sliders (0-255 for the three colors in RGB) or click the
increase/decrease buttons to change the values or enter new values into the
fields and press Enter.
6. From the Class Color Mapping dialog menu bar, select Options → Save
Changes to save the changed classification names and colors to the classified
image header file.
To reset the original class colors and names, select Options → Reset Color
Mapping.
Color changes on 24-bit color displays are not automatically applied. Instead, select Options → Apply.
Collecting Points
Use Point Collection to collect points (both pixel locations and map locations) from
display groups. The points display in the ENVI Point Collection table. You can save
points from various display groups in a single table and export them to the Ground
Control Points (GCP) List, save them to ASCII or ENVI vector files (EVFs), and
restore them from ASCII files.
1. Select one of the following:
• From the Display group menu bar, select Tools → Point Collection.
• From the ENVI main menu bar, select Window → Point Collection.
The ENVI Point Collection table appears. To hide or show the ENVI Point
Collection table, see “Showing and Hiding Overlay Dialogs and Layers” on
page 29.
Click on a column heading to re-sort the table.
2. Middle-click in the Image window or the Zoom window to collect the point
under the cursor. The pixel locations appear in the ENVI Point Collection
table. If the data is georeferenced, map and geographic locations appear as
well.
3. To add a description of the point in the ENVI Point Collection table, left-click
on the Attribute Description cell and type a description. Do not use spaces in
the description text. Left-click on the cell to accept the description text.
2. Select the input filename and click Open. The Input ASCII File dialog
appears.
3. Select which columns of data contain the Image X and Y pixel locations, the
Map X and Y values and the Lat/Long values.
4. Select the projection type if you have map values.
5. From the Associated Image drop-down list, select the display group number
associated with the imported points. If no display is associated with the points,
select None.
6. Click OK. The points appear in the point collection dialog.
2. Select the Shape Type to use when saving the points by selecting the radio
button of the desired type: PointZ, PolylineZ, or PolygonZ.
3. If either PolylineZ or PolygonZ is chosen, the Record Description field
appears. This is an optional field for entering a descriptive phrase about the
point collection. The string entered in this text field is saved in the 3D
shapefile’s record description attribute when the file is saved.
For PointZ, the record descriptions are taken for each point from the Attribute
Description column of the ENVI Point Collection table.
4. Enter an output filename.
5. Click OK. When 3D shapefiles are read into ENVI, the elevation appears as an
attribute.
If the ENVI Point Collection table’s Attribute Description column contains a description for each point, ENVI saves the descriptions in the shapefile. If the column is empty for some points but contains a value for other points, ENVI adds a value of undefined for the attribute descriptions that are empty when it saves the file.
Building Masks
Use Build Masks to create image masks from specific data values (including the data
ignore value), ranges of values, finite or infinite values, ROIs, ENVI vector files
(EVFs), and annotation files. See “Building Masks” on page 350.
Measurement Tool
Use Measurement Tool to get a report on the distance between points in a polygon
or polyline, and to get perimeter and area measurements for polygons, rectangles, and
ellipses. For details, see “Measurement Tool” on page 303.
To measure ROIs while using the ROI function, see “Reporting ROI Measurements”
on page 333.
2. In the Zoom window, select a pixel or enter the Sample and Line coordinates of the pixel for which you want the line of sight calculated.
• To select a different pixel, use the solid black arrow buttons to move the
Zoom window crosshairs in single pixel increments in the corresponding
direction.
• To designate whether or not to apply x and y offsets for data that has
offsets, select Options → Use Image Offset: Yes or No from the Line of
Sight calculator dialog menu bar.
• To select pixels for georeferenced images, see “Selecting Pixels for Georeferenced Images” on page 135.
3. Click Apply. The Select Line of Sight Input DEM Band dialog appears.
4. Select the file that contains the DEM that is associated with the displayed
image.
5. Click OK. The Line of Sight Parameters dialog appears.
6. Enter the maximum distance (in meters) for the line of sight calculation. To
designate an elevation above the pixel, enter the value in the same units as the
DEM.
7. Click OK. ENVI creates an ROI that shows which pixels can be seen from the
designated pixel. ENVI labels the ROI LOS in the ROI Tool dialog and
overlays the ROI on your image.
Selecting Pixels
To select pixels and display their values in the Spatial Pixel Editor table, left-click in
the Image window and drag the Zoom box, or middle-click on a pixel to center the
Zoom window over it.
Undoing Changes
To undo all changes in pixel values, select Options → Undo all changes from the
Spatial Pixel Editor dialog menu bar.
Creating Animations
Use Animation to create a movie out of images from one or more open files.
Animation is performed in gray scale only.
3. Set the size of the animation window by entering values in the Window Size fields. The selected images are automatically resized to the selected window size. Reducing the spatial subset to animate, the size of the animation window, or both increases the speed of the animation.
The following table describes the available options in the Animation window:
Option    Description
Speed     To set the animation speed, enter a number from 0 to 100 into the Speed field, or use the arrow buttons to set the speed.
Saving Animations
You can save an animation as an MPEG (Moving Picture Experts Group) file.
Saving an animation as an MPEG file requires a special license. For more
information, contact your ENVI sales representative or Technical Support.
1. From the Animation menu bar, select File → Save Animation as MPEG. The
Output Sequence to MPEG Parameters dialog appears.
2. From the MPEG Frame Rate drop-down list, select a frame rate in frames per
second.
3. In the MPEG Quality field, enter a compression quality value between 0 and
100, or use the increase/decrease buttons to set the value.
The compression is lossy, where 0 is lowest quality and 100 is highest quality (no compression). Entering a compression quality factor less than 100 decreases the amount of disk space needed to store the MPEG output.
4. A duplication factor helps to make the MPEG output appear smoother. To
duplicate frames in the MPEG output, enter a Duplicate frames number.
5. Enter an output filename.
6. Click OK.
Depending upon the size of the image, there may be a brief delay while the DN
values are extracted and tabulated. As soon as the scatter plot appears, the interactive
scatter plot function is available for use.
To reset the window to the default size, select Options → Reset Size from the Scatter
Plot window menu bar.
3. Press and hold the middle mouse button while moving the cursor in the scatter
plot to cause real-time dancing pixels to appear in the Image window.
Editing Classes
To remove pixels from an existing class:
1. From the Scatter plot window menu bar, select Class → White.
2. Draw a polygon around the pixels to remove them. The deleted pixels return to
white.
Deleting Classes
To completely delete the selected class polygon, middle-click outside the scatter plot
axes.
Clearing Classes
To remove the ROIs and associated Image window highlighted pixels from the scatter
plot and Image window for the selected class color, select Options → Clear Class
from the Scatter plot window menu bar.
To remove all the ROIs and associated Image window highlighted pixels from the
scatter plot and Image window for all the classes, select Options → Clear All.
Exporting Classes
Use Exporting Classes to export the highlighted Image window pixels for the
selected class color or for all of the classes to an ENVI ROI. You can use the exported
ROI in other ENVI functions.
To export a selected class, select Options → Export Class from the Scatter plot
window menu bar. If the ROI Controls window is on the screen, the region is listed as
a Scatter Plot Import. The class color and number of pixels in the region are also listed.
The ROIs are retained in memory even when the ROI Controls window is not on
screen. The Scatter Plot Import region is listed in the ROI window the next time it is
started.
To export the highlighted Image window pixels for all of the classes, select
Options → Export All from the Scatter plot window menu bar.
Attaching Z Profiles
To associate a Z Profile window (spectral plot) with the scatter plot:
1. From the Scatter plot window menu bar, select Options → Z Profile. The
Input File Associated with 2D Scatter Plot dialog appears.
2. Select the corresponding input file.
3. Right-click inside the scatter plot to display the spectrum for the point nearest
the cursor.
See “Extracting Z Profiles” on page 98 for details.
Changing Bands
Use Change Bands to change the bands used in the scatter plot and to plot previously
defined classes on the new scatter plot. The corresponding Image window pixels are
highlighted.
1. From the Scatter plot window menu bar, select Options → Change Bands.
The Scatter Plot Band Choice dialog appears.
2. Choose new X and Y axes for the scatter plot by selecting the desired bands as
described in “Selecting Bands for Scatter Plots” on page 145.
Chapter 2
File Management
• GeoSPOT: Reads the primary format for SPOT data. ArcView® raster image
files have a similar format specification. The GeoSPOT format is described in
detail in documentation available from SPOT Image. While the GeoSPOT
format provides for a wide variety of both raster and vector data, ENVI
currently supports only GeoSPOT raster images, which have the .bil file
extension and an associated header file with the extension .hdr.
• ACRES SPOT: Reads SPOT Australian Centre for Remote Sensing CCRS
and SPIM. In the Enter Filenames dialog, select the Imag_xx.dat file.
• SPOT vegetation: Reads SPOT vegetation data. ENVI also creates a meta file.
In the Enter Filenames dialog, select the .hdf file.
• SPOT DIMAP: Reads SPOT DIMAP data. ENVI extracts the necessary
header information, including georeferencing information, band wavelengths,
gains, and offsets. If the data is multispectral SPOT, ENVI automatically
displays a color infrared image. In the Enter Filenames dialog, select any .dim file. It may take a long time to parse the header information from the
metadata.dim file. It is quicker to open the TIFF image file using File →
Open Image File directly if you do not plan to georeference the data. If you
plan to georeference the SPOT data using ENVI, you must read the data using
File → Open External File → Spot → DIMAP because the header
information is needed (see “Georeferencing SPOT Data” on page 944).
mosaic of the entire dataset. You can use the resulting dataset as a single entity within
ENVI.
• File → Open External File → QuickBird → Mosaic Tiled QuickBird
Product
• File → Open Image File
• Basic Tools → Mosaicking → Tiled QuickBird Product
• Map → Mosaicking → Tiled QuickBird Product
The resulting mosaic of the entire dataset appears in the Available Bands List.
When using the Mosaic Tiled QuickBird Product option, individual tiles do not appear in the Available Bands List. To open an individual tile in the Available Bands List, use one of the non-mosaicking options under File → Open External File → QuickBird.
If the QuickBird data tiles contain map information, ENVI reads in this map
information and applies it to the virtual mosaic. If map information is not provided
with the QuickBird data tiles, ENVI attempts to find geographic coordinates in the
tile file to calculate a warp that it can apply to the virtual mosaic. If no map
information exists in the tile file, ENVI attempts to read in the appropriate RPC
model and use it to emulate map information.
required. This method does not change the appearance of the image; it only
calculates a geolocation for each individual pixel. This georeferencing method
is less computationally- and disk-space intensive than a full orthorectification
process performed on the imagery; however, the full orthorectification process
provides greater accuracy. To automatically assign an RPC file to use as a
pseudo projection for a specific image, see “Emulating an RPC or RSM
Projection” on page 201.
• Mosaic Tiled WorldView Product: Reads WorldView-1 and WorldView-2
mosaic tiled data, described in the following section.
ENVI can read AVHRR files that do not have TBM or ARS headers. It can also read
AVHRR data files produced by the Quorum receiving station.
Although you can read data from the Quorum receiving station, it lacks the required
header information that enables you to calibrate or georeference the data.
From the ENVI main menu bar, select File → Open External File → AVHRR and
select one of the following options:
• Quorum: To open AVHRR data from the Quorum receiving station, select
File → Open External File → AVHRR → Quorum from the ENVI main
menu bar. ENVI reads 10-bit packed format as integer data, uncompressed
formats as integer data, and 8-bit format as byte data. To use the embedded
The first column is the x position and the second column is the y position. The
remaining columns are the bands of data. Column 3 is band 1, column 4 is band 2,
column 5 is band 3, column 6 is band 4, and column 7 is band 5. You can delimit the
file by space, comma, or tab. The file can also contain any optional header
information, provided that the header is commented out with a semicolon.
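A hypothetical example of the expected layout (the coordinate and band values shown are illustrative only):

   ; optional header lines must start with a semicolon
   496500.25  4423107.50  1612.4  1598.2  1604.7  1609.9  1611.3
   496501.25  4423107.50  1613.1  1599.0  1605.2  1610.4  1612.0
   496502.25  4423107.50  1612.8  1598.7  1604.9  1610.1  1611.6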
Following are the steps to import a TerraScan ASCII file into ENVI.
1. From the ENVI main menu bar, select Topographic → Rasterize Point Data.
The Enter ASCII Grid Points Filename dialog appears.
2. Select an ASCII output file from TerraScan, and click Open. The Input
Irregular Grid Points dialog appears. Verify that your file is shown as the
proper input file.
3. Set the X position column to 1, and set the Y position column to 2.
4. Set the Z data value column to 3. For a TerraScan ASCII file, the Z data
value column corresponds to the band of data.
5. Select an input projection. See “Selecting Map Projection Types” on page 990
for more information.
6. Click OK. The Gridding Output Parameters dialog appears.
Note
To open MODIS products M*D08*, M*D27HV, M*D27W, M*D43B1,
M*D43B1C, M*D43B2, M*D43B2C, M*D43B3, M*D43B3C, M*D43C2,
M*DATML2, use File → Open External File → Generic Formats →
HDF.
• From the ENVI main menu bar, select File → Open External File →
RapidEye. The Enter RapidEye XML Filenames dialog appears. Select a
RapidEye metadata file (.xml), and click Open.
• From the ENVI main menu bar, select File → Open External File →
Military → NITF → NITF and select one or more NITF files (.ntf) for the
bands you want to open. Use the Shift key to select more than one file.
Level-3A images are distributed as GeoTIFF files with associated metadata. Each
GeoTIFF file contains five bands of data. To open Level-3A data, you have two
options:
• From the ENVI main menu bar or Available Bands List menu bar, select
File → Open Image File. The Enter Data Filenames dialog appears. Select a
RapidEye metadata file (.xml), and click Open. ENVI loads all five bands of
data into the Available Bands List and displays the wavelength information for
each band. By reading the XML metadata file, ENVI has the band and
wavelength information needed to display a true-color image by default when
you click Load RGB in the Available Bands List.
• From the ENVI main menu bar, select File → Open External File →
RapidEye. The Enter RapidEye XML Filenames dialog appears. Select a
RapidEye metadata file (.xml), and click Open.
• From the ENVI main menu bar, select File → Open External File → Generic
Formats → TIFF/GeoTIFF. The Enter TIFF/GeoTIFF Filenames dialog
appears. Select a GeoTIFF file (.tif) and click Open. ENVI loads the five
bands of RapidEye data into the Available Bands List.
The XML files that come with Level-1B and Level-3A data are required if you want
to send RapidEye data from ENVI to ENVI Zoom (using the File → Launch ENVI
Zoom menu option).
• RADARSAT: Reads ERS-1 and ERS-2 format data from RADARSAT. ENVI
automatically extracts the needed header information (including UTM
georeferencing information) from the data file, leader file, and/or trailer file.
See “Opening Integer Format RADARSAT-1 Data” and “For Byte Scaling” on
page 180 for additional details.
• TOPSAR: Reads raw TOPSAR (AIRSAR Integrated Processor Data) format
data files (Cvv, Incidence Angle, Correlation Image, or the DEM). To read all
of the TOPSAR files and automatically convert them to physical units, see
“Converting TOPSAR Data” on page 1140. To synthesize AIRSAR images,
see “Synthesizing AIRSAR and SIR-C Data” on page 1102. You can also open
TOPSAR files using the Radar → TOPSAR Tools → Open TOPSAR File
menu option.
Opening Integer Format RADARSAT-1 Data
When the RADARSAT File Import dialog appears, select Import Data Type →
Unsigned Integer.
ENVI adds the image band to the Available Bands List.
For Byte Scaling
1. When the RADARSAT File Import dialog appears, select Import Data
Type → Scale to Byte.
2. Enter the scaling minimum and maximum data values, or keep the default values. The Min and Max values are automatically entered as the 2% points from the histogram in the CEOS header, if it is found. If the CEOS header is not available, you must enter these values (see the sketch following these steps).
3. Click OK to start the data reading. ENVI adds the image band to the Available
Bands List.
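As a sketch (not ENVI code) of the 2% scaling used for the default Min and Max values, the following IDL fragment finds the 2% and 98% histogram points of an integer band and scales it to byte; data is an assumed variable name:

   dmin = min(data, max=dmax)
   cdf  = total(histogram(data, min=dmin, max=dmax, binsize=1), /cumulative) / n_elements(data)
   lo   = dmin + min(where(cdf ge 0.02))   ; DN at the 2% point
   hi   = dmin + min(where(cdf ge 0.98))   ; DN at the 98% point
   scaled = bytscl(data, min=lo, max=hi)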
Complex RADARSAT-1 data are read into ENVI as follows:
• Raw data product: Two bands of byte data, one each for the Q and I Stokes
parameters.
• SLC data product: Two bands of signed integer, one each for the Q and I
Stokes parameters.
geographic coordinates, click Yes in the Mosaic Files? button. Select output to
File or Memory. A new file and standard ENVI header file are created from
the information in the embedded header.
Note
If you do not select mosaicking and output is to File, each DEM is converted
to its own image. In this case, enter a filename without an extension. The
output file for each separate image is automatically assigned a numerical
extension (for example, _1 for the first file, _2 for the second file, and so
forth).
• DEM: Reads United States Geological Survey Digital Elevation Model data.
You can also use File → Open External File → USGS → DEM to
open these files. In the Enter Filenames dialog, select an input file. ENVI
opens single USGS DEMs and converts them to ENVI image files. When
multiple DEMs are available, ENVI automatically mosaics them into one
ENVI image file. To open more than one DEM file, click Input Additional
File and select the new file in the Enter Filenames dialog; to automatically
mosaic the DEM files into one image based on their geographic coordinates,
click Yes in the Mosaic Files? button. Select output to File or Memory. A new
file and standard ENVI header file are created from the information in the
embedded header. (See the previous Note.)
Note
If your DEM data are in ASCII format, follow the steps in “Importing ASCII
DEMs” on page 1059.
• USGS SDTS DEM: Reads Spatial Data Transfer Standard Digital Elevation
Model data. You can also use the ENVI main menu bar option File → Open
External File → USGS → SDTS DEM to open these files. In the Enter
Filenames dialog, select an input file (typically the xxxxCATD.DDF file). To
open more than one DEM file, click Input Additional File and select the new
file in the Enter Filenames dialog. Select output to File or Memory. A new file
and standard ENVI header file are created from the information in the
embedded header. (See the previous Note.)
• SRTM DEM: Reads Shuttle Radar Topography Mission data. ENVI opens
single SRTM DEM files (.hgt) as ENVI image files; when multiple DEMs
are available, ENVI automatically mosaics them into one ENVI image file. In
the Enter Filenames dialog, select the desired input filename. If the file
contains missing data points, you are prompted to correct the missing values. If
you click Yes to replace missing values, the -32768 values (which indicate
2. Select the file with the .las extension and click Open. The Output Lidar
Parameters dialog appears.
3. Select an option for the data conversion from the Output Format drop-down
list:
• ENVI Raster File (default): Rasterizes the LAS data, creating output in a standard ENVI image file where the pixel values correspond to surface height or intensity.
• ENVI Vector File: Creates an ENVI vector file (.evf) in which each
vector point record corresponds to an (x,y) pair of the LIDAR data with the
elevation, intensity, and return number stored as corresponding attributes.
Each point is converted to a vector item and the options for model type,
output images, interpolation, pixel size, and output data type are
desensitized.
4. If you selected ENVI Raster File, select one of the following from the Model
Type drop-down list:
• Last Return (default): Also known as the bare Earth model, the last return
corresponds to a pulse return from the last (lowest) surface to return a
pulse. This can include solid materials, such as bare Earth, that are under
semi-transparent vegetation. Solid objects such as buildings that do not
have any transparency are also returned.
• Full Feature: This model returns an average value of all the returns at a
given (x,y) location.
• First Return: Also known as the first pulse return, this model corresponds
to a pulse return from the first (highest) surface. This model can include
returns from the top of any semi-solid object, such as vegetation.
5. Select the output image type from the Output Image(s) drop-down list. If your
Output Format setting is ENVI Vector File, then both elevation and intensity
are saved as attributes to the vector file. The following options are available:
• Elevation and Intensity (default): The output file contains a digital
elevation model (DEM) of the surface height, and an intensity image.
• Elevation: The output file contains only the DEM of the return height.
• Intensity: The output file contains only the intensity image.
6. To determine the data projection saved in the DEM or vector file, click Select
Output Image Projection. This option is available only if the LAS file has
• ER Mapper: Reads ER Mapper unsigned integer data, but does not read
signed 8-bit or ER Mapper algorithm files. In the Enter Filenames dialog,
select the data header file (.ers). ENVI extracts the header information,
including the UTM (unrotated) georeferencing information.
• ECW: Reads Enhanced Compressed Wavelet format. For Windows, the ECW
reader only works in 32-bit mode. If you have a 64-bit Windows PC, run ENVI
in 32-bit mode by selecting Start → Program Files → ENVI x.x → 32-bit →
ENVI or ENVI + IDL.
• PCI (.pix): Reads files stored in the PCI database file format. You cannot use
this option to directly read PCI files that contain multiple data types or that are
in file interleave format.
• ESRI® GRID: Reads ESRI GRID data format files on Windows platforms
only. ENVI opens an ESRI GRID file and displays projection information when the file contains a supported ESRI geographic coordinate system code, projected coordinate system code, or datum code.
The ability to read GRID datasets is only available if you have a licensed
version of ArcView® software or ArcGIS® version 8.x (or later) installed on
your system. If you have ArcView or ArcGIS 8.x software installed but not
licensed, do not attempt to read ESRI GRID datasets with ENVI. Doing so
may cause ENVI to exit immediately.
You can only read ESRI GRID files in Windows 32-bit mode. If you have a 64-
bit Windows PC, run ENVI in 32-bit mode by selecting Start → Program
Files → ENVI x.x → 32-bit → ENVI or ENVI + IDL.
In the Select Grid to Open dialog, select the GRID (single band data) or GRID
Stack 7.x (multiband data) dataset to open. Highlight the GRID or GRID
Stack, then click OK.
An ESRI GRID or ESRI GRID Stack is stored in a directory, not a file. When
you open a GRID directory in ENVI, it is opened as single band data. When a
GRID Stack directory is opened, it appears in the Available Bands List as an
image with multiple bands.
used to speed image display by reducing the resampling required when displaying
large portions of an image at low resolution. The Scroll window is accelerated by
reading data from the pyramid layers, if present.
Lines at the top of the file that have non-numeric characters or that start with a semicolon are
skipped. The image data in the file must be in the format of an image array. The
number of samples is determined by the number of values in a line, and the number of
lines is determined by the number of lines in the file.
1. From the ENVI main menu bar, select File → Open External File → Generic
Formats → ASCII. The Enter Filenames dialog appears.
2. Select an ASCII file. The number of samples and lines (columns and rows) are
automatically determined. The Input File dialog appears.
3. Select BSQ, BIL, or BIP from the Interleave drop-down list.
4. From the Data Type drop-down list, select the correct data type.
5. Enter the number of input bands by using the arrow buttons next to Number of
Bands or by typing a number into the box.
6. Click OK. The bands are read into memory and entered into the Available
Bands List.
To import a DEM that is in ASCII format, follow the steps in “Importing ASCII
DEMs” on page 1059.
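As a sketch of reading such an ASCII image array in plain IDL (not the ENVI reader), assuming the dimensions ns and nl are already known and any commented header lines have been removed; the filename is hypothetical:

   openr, lun, 'image_array.txt', /get_lun
   data = fltarr(ns, nl)     ; one value per sample, one row per image line
   readf, lun, data
   free_lun, lun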
viewed without ever being fully decompressed. The memory requirements and time
delays associated with opening a full image into memory are avoided and, regardless
of size, you can view an image quickly.
Note
For Windows, the MrSID reader only works in 32-bit mode. If you have a 64-bit
Windows PC, run ENVI in 32-bit mode by selecting Start → Program Files →
ENVI x.x → 32-bit → ENVI or ENVI + IDL.
1. From the ENVI main menu bar, select one of the following:
• File → Open External File → Generic Formats → MrSID
• File → Open Image File
2. In the Enter Filenames dialog, select the MrSID compressed file to read. ENVI
automatically extracts the needed header information, including any
georeferencing information, and adds the image bands to the Available Bands
List.
B. Click OK. If you have a GeoTIFF file and a TIFF world file, all projection
information is read directly from the GeoTIFF file. ENVI adds the bands
to the Available Bands List.
1. In the Header Info dialog, click Edit Attributes and select either:
• Band Names: The Edit Band Name values dialog appears.
• Spectral Library Names: The Edit Spectral Library Names values dialog
appears.
2. Select the band name or spectral library name to change in the list. The name
appears in the Edit Selected Item field.
3. Type the new name and press Enter.
4. Click OK.
Importing Header Data from ASCII Files
In some header editing dialogs you can import data from an ASCII file.
1. Click Import ASCII. The Enter ASCII Filename dialog appears.
2. Open the ASCII file. The Input ASCII File dialog appears with the first few values
from the ASCII file listed. If you are editing band or spectral library names, the
information from the ASCII file appears at the top of the Edit Band Name or
Edit Spectral Library Names dialogs.
The number of rows of the ASCII file must match the number of bands in the
image file. The ASCII file may have one or more columns of ASCII data;
however, the file used to import band names can only contain strings.
3. If available, in the Wavelength Column field enter the number of the ASCII
column that contains the wavelengths.
To scale the wavelength values on-the-fly, enter a multiplicative scale factor in
the Multiply Factor field. For example, to multiply the imported wavelength
values by 100, enter 100.
4. If available, in the FWHM Column field enter the number of the ASCII
column that contains the band width information (used in spectral resampling).
The ASCII file can also contain a Bad Bands List column. The Bad Bands List column specifies a good band with a 1 and a bad band with a 0.
5. If available, in the Data Gain Column field enter the column number of the
ASCII file that represents the gain.
6. If available, in the Data Offsets Column field enter the column number of the
ASCII file that represents the offsets.
7. If available, enter the number of the ASCII column that contains the Bad
Bands List.
8. Click OK.
9. Click OK in the Header Info dialog to write all of the changes to the header
file.
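A hypothetical example of an ASCII file for a four-band image, with wavelengths in column 1, FWHM values in column 2, and a Bad Bands List in column 3 (the values are illustrative only; the number of rows must match the number of bands):

   0.485   0.070   1
   0.560   0.080   1
   0.660   0.060   1
   0.830   0.140   0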
1. In the Header Info dialog, click Edit Attributes and select Bad Bands List.
The Edit Bad Bands List values dialog appears.
2. All bands in the list are highlighted by default as good. Deselect any desired
bands in order to designate them as bad bands.
To designate a range of bands, enter the beginning and ending band numbers in
the fields next to the Add Range button. Click Add Range.
3. Click OK.
2. In the Edit Map Information dialog, enter the reference pixel coordinates in the
Image Coordinate of Tie Points X and Y fields and the pixel size in the Pixel
Size and Rotation X and Y fields. Be sure to enter the pixel size in the units
appropriate for your selected projection.
3. If North is not up in the image, enter a rotation angle in degrees in the Map
Rotation field. Measure the angle in a clockwise direction where zero degrees
is straight up (see “Overlaying Grid Lines” on page 66).
4. Select the map projection by clicking on Change Proj and selecting the
appropriate projection from the list of projections (see “Selecting Map
Projection Types” on page 990).
3. In the RPC or RSM Parameters dialog, click the RPC or RSM Projection
Emulation Enabled toggle button, and select On (to enable the projection) or
Off (to use the native map information, if any).
Use the Select Projection section of the RPC or RSM Parameters dialog if you
need to modify the projection used to report the resulting geolocation
information. Different parameters are available depending on the selected
projection type.
• To build a customized map projection, click New and follow the
instructions under “Building Customized Map Projections” on page 992.
• To change the datum for a projection type, click Datum and select a datum
from the list in the Select Geographic Datum dialog.
• If you select UTM, click the N or S toggle button to indicate if the selected
latitude is north (N) or south (S) of the equator. Enter a zone, or click Set
Zone and enter the latitude and longitude values to automatically calculate
the zone.
• If you select a State Plane projection, enter the zone or click Set Zone and
select the zone name from the list.
Both NOS and USGS zone numbers are shown next to the zone name.
• To designate the units for a projection type, click Units and select a unit
type from the drop-down list.
4. Click OK. The Header Info dialog appears.
ENVI automatically adds a geoid offset and assumes that the DEM is in meters
above sea level, even for fixed-level DEMs.
1. In the Header Info dialog, click Edit Attributes and select Associate DEM
File. The Select DEM to associate with this file dialog appears. If an
association already exists, the Associate DEM with File dialog appears. Select
either:
• Edit existing DEM file association: The Select DEM Band to Associate
With This File dialog appears. The existing associated band is highlighted
by default. Edit the DEM file association by selecting another band to
associate. Click OK.
• Clear existing DEM file association: The Associate DEM with File
dialog closes, the association no longer exists, and the Header Info dialog
appears.
2. If the association of a DEM with this image is new, select a band as the
associated DEM.
3. Click OK.
When you select a DEM association, two fields (Table 2-1) are written to the ENVI header file.
Field                    Description
dem file = /path/file    Path and filename of the selected DEM file.
dem band = 2             Index (starting at 1) of the selected DEM band. The dem band field is not written if the DEM file contains a single band, or if the first band of an image was chosen; in these cases, the dem band value defaults to 0.
Note - Neither field is written if an in-memory band is selected as the associated
DEM band. In this case, the DEM association exists for the current ENVI session
only; it does not persist for subsequent sessions.
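An abridged, hypothetical ENVI header excerpt showing the two fields (the path and values are illustrative only):

   ENVI
   description = {Image with an associated DEM}
   samples = 1024
   lines = 1024
   bands = 6
   dem file = /data/site_dem.dat
   dem band = 2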
When you open an image which has an associated DEM, ENVI adds both the image
and the DEM to the Available Bands List. If ENVI cannot find the associated file,
ENVI displays an error message and adds only the base image to the Available Bands
List.
This DEM association also affects the following functions in ENVI.
• Cursor Location/Value Tool: The DEM value for a given pixel displays next to
the data value for areas where the DEM and the image share common
geographic coordinates; for example, Data:11 (DEM=1280). The DEM
value is not shown if either the image or the DEM are not georeferenced.
• 3D SurfaceView: ENVI uses the associated DEM file as the default and does not prompt you to select a DEM for the surface.
• RPC or RSM Projection Emulation: If an image displays using RPC or RSM
projection emulation and it has an associated DEM, ENVI uses the DEM to
refine the RPC or RSM solution, which improves its positional accuracy.
For RPC or RSM Projection Emulation, the DEM must contain map
information covering the area for the RPC or RSM image; otherwise, ENVI
uses the default elevation.
• RPC or RSM Orthorectification: If an image being orthorectified has an
associated DEM file, then ENVI uses it as the default elevation input to the
orthorectification process. You can change this default designation at any time,
if desired.
4. Select the class name of the region to change from the Selected Classes list.
• To change a selected class name, edit it in the Class Name text field. The
header file does not allow class names that include commas.
• To change the class color in the RGB color space (0-255 for the three
colors), move the three sliders for Red, Green, or Blue.
• To reset the original class colors and names, click Reset.
• To change the class colors in the HSV or HLS color spaces, select the
appropriate system from the System drop-down list. Move the Hue,
Saturation, Value or Hue, Lightness, Saturation sliders to the desired
values.
5. Click OK. On 24-bit color displays, ENVI does not automatically apply the
color changes. Instead, for 24-bit hardware, apply color changes to the image
by clicking Apply Changes, which appears only when 24-bit color is
available.
3. Click OK. ENVI saves the stretch setting in the .hdr file. Whenever you
display this image, this stretch setting overrides the global default stretch given
in the envi.cfg file.
Note
If Default Stretch is set to None, ENVI uses the Display Default Stretch
preference setting.
3. Click OK.
2. Select the output image type from the Output Image Value drop-down list:
• Constant: Generates an image with a constant value for every pixel. Enter
the desired DN value in the Value text box.
• Horiz Ramp or Vert Ramp: Generates an image with either a horizontal
or vertical linear ramp. Enter the desired minimum and maximum ramp
values in the Min Value and Max Value fields.
3. To select the starting byte to view from the file, enter the byte number in the
File Offset Byte(s) field.
4. To move the data view forward or backward one page respectively, click the
Next Page and Prev Page buttons.
5. If you suspect that the data may be in something other than byte format, select
View_Format from the Data Viewer menu bar and select a data format
(Hexadecimal, Byte, Unsigned Integer, Integer, Long Integer, Unsigned
Long Integer, or Floating Point). The number of columns of data values and
the representation of the listed data change accordingly.
6. To open and view a different file, select File → Open New File from the Data
Viewer menu bar.
7. To exit the data viewer, select File → Cancel from the Data Viewer menu bar.
8. To evaluate the swapping of bytes between Intel and IEEE formats for data
types with more than one byte per value (integer, long integer, and floating
point), select Byte_Swap from the Data Viewer menu bar and select one of the
following:
• None: No swapping
• Short Word: Swapping two bytes for an integer
• Long Word: Swapping byte pairs for long integer and floating-point data
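As a sketch of evaluating a byte swap outside the Data Viewer, plain IDL's SWAP_ENDIAN function reverses the byte order of each array element, converting between Intel (little-endian) and IEEE (big-endian) ordering; the array name values is an assumption:

   swapped = swap_endian(values)
   print, values[0:4], swapped[0:4]   ; compare a few values before and after the swap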
Subsetting Data
Many ENVI tools allow you to subset your data before processing. From the Input
File dialog (described in The Input File Dialog in Getting Started with ENVI), you
can perform spatial, spectral, or statistics subsetting, and, in some cases, mask the
input data.
Spatial Subsetting
Use spatial subsetting to limit application of a function to a spatial subset of the image.
You can select spatial subsets by using the following methods:
• Entering samples and line values
• Selecting interactively from the image
• Entering map coordinates
• Using the same spatial subset that was previously used on another file
• Using the image shown in the meta scroll window
• Using the bounding box around a region of interest
The options in the Spatial Subset dialog vary depending on whether the current data
is sample-line-based or georeferenced. Additionally, if the same image is open in
more than one display group, you can specify which display number to apply the
subset to.
Subsetting by Samples/Lines
The size of the original dataset and the size of the currently-selected subset appear
below the fields.
To subset by samples and lines:
1. In the Select Spatial Subset dialog, select the image to subset by from the
Subset by Image drop-down list.
2. Enter the starting and ending values of the Samples and/or Lines into the
appropriate fields, or enter the desired number of lines or pixels in the NS or
NL fields.
3. Click OK.
Subsetting by Images
To select the spatial subset interactively from the image:
1. In the Select Spatial Subset dialog, select the image to subset by from the
Subset by Image drop-down list, then click Image. The Subset by Image
dialog appears. A subsampled version of the selected image band is displayed.
A box on the image outlines the currently selected subset.
2. To change the subset size or location, select from the following options:
• In the Subset by Image dialog, click on one of the corners of the box and drag it to resize the subset.
• To move the box around the image, click on the box and drag it to the
desired location.
• Change the values in the Samples or Lines fields.
3. Click OK. The starting and ending sample and line coordinates appear in the
text boxes labeled Samples and Lines.
Spectral Subsetting
Use spectral subsetting to limit application of a function to selected bands of an
image.
Subsetting by Bands
1. In the Input File dialog, click Spectral Subset. The File Spectral Subset dialog
appears. The appearance of this dialog varies depending on whether the image
has a bad bands list. A bad band is not included in the processing. A list of
bands available for selection appears in the center of the dialog.
Subsetting by Ranges
When selecting a range of bands to subset, the dialog initially shows all bands
selected by default. To select a specific range of bands instead:
1. In the File Spectral Subset dialog, click Clear to reset the default setting.
2. Enter the starting and ending band numbers into the two fields next to the Add
Range button.
3. Click Add Range.
Statistics Subsetting
Use statistics subsetting to limit the calculation of statistics to a spatial subset or
to the area under an ROI.
1. When available in a dialog, click Stats Subset. The Select a Statistics Subset
dialog appears.
2. Select one of the following options from the Calculate Stats On field:
• Image Subset: To select a standard image spatial subset. See “Spatial
Subsetting” on page 215 to subset the data.
• ROI/EVF: To select an ROI or vector as the subset. A list of ROIs and
vectors appears in the Select ROI/EVF list; select an ROI or vector name,
and click OK.
Tip
To add a previously saved ROI or vector to the list, click Open in the Select
Statistics Subset dialog and select ROI File or an EVF File.
Figure 2-14: Select Statistics Subset Dialog for Image Subset (left) and for
ROI/EVF (right)
Masking
You can apply a spatial mask to your file to prevent ENVI from applying the selected
function to the masked portion of the image. You can only apply a mask that was
previously defined (see “Building Masks” on page 350).
Only certain ENVI functions allow spatial masking before processing. These
functions include statistics, classification, Linear Spectral Unmixing, Matched
Filtering, Continuum Removal, and Spectral Feature Fitting.
To apply a previously built spatial mask to your image:
1. Select either of the following:
• In the Input File dialog, click Select Mask Band. The Select Mask Input
Band dialog appears with a list of all bands that are the same spatial size as
the input image.
• In any dialog with the Select Mask button, click Select Mask. The Select
Mask Input Band dialog appears with a list of all bands that are the same
spatial size as the input image.
2. Select the band containing the mask.
3. Click OK.
To remove a mask applied to your image:
• In the Input File dialog, click Mask Options and select Clear Mask
Band.
• In any dialog with the Clear Mask button, click Clear Mask.
Saving Files
Use Save File As to create a new standard ENVI disk file or an ENVI meta file from
bands contained in the Available Bands List and to output image data to various
image processing formats. See Saving as Standard ENVI Files, Saving as ENVI Meta
Files, and Saving as ASCII Files in Getting Started with ENVI for details about
saving to those file formats. See the following for details on saving to all other file
formats.
You can also save images using the Save Image As → Image File option on the
Display group menu bar, as described in “Saving Images from Displays” on page 15.
Restrictions
• You must have an ArcView® license to save to a personal or file geodatabase
and an ArcEditor™ or ArcInfo® license to save to an enterprise geodatabase.
Contact your ESRI sales representative to purchase a license.
• Personal geodatabases store datasets within a Microsoft Access data file,
which is limited in size to 2GB.
4. The Block Width and Block Height fields default to an optimized block size
(in pixels) for writing from ENVI. You can change the default values as
desired, especially if you want to specify the optimal ERDAS block size (64 by
64 pixels). The input image may be of any interleave, but the image is written
as block sequential by band interleave.
Blocks are similar to tiles used by ENVI to manage data. The size of the block
dictates how much image data is processed at one time. ENVI’s performance is
enhanced using larger block sizes. When writing data in the IMAGINE format,
ENVI optimizes the block size to fit the most data in memory. Smaller block
sizes may result in slower writing performance.
5. The Enter Output IMAGINE Filename field defaults to the root filename,
but uses the .img extension for the IMAGINE format. You can change the
path and filename.
You can assign any name to the file, but it must have the .img extension. If the
file size is greater than the 2 GB limit, a second file using the root name and
the .ige extension is automatically generated in the same directory as the
.img file.
Note
If the filename contains a character which is not valid for the ERDAS
filename convention, it is changed to an underscore character (_) when the
file is saved.
6. Click OK. In addition to saving required header information, ENVI saves band
names, the bad bands list, classification information, and map information (for
georeferenced files) when creating ERDAS IMAGINE output.
Tape Utilities
Use Tape Utilities to read MSS, TM, SPOT, AVHRR, AVIRIS, NLAPS, and CEOS
format radar data (including SIR-C/X-SAR, RADARSAT-1 and ERS-1) from a
variety of computer compatible tape (CCT) formats and to read U.S. Geological
Survey DEMs and DLGs (Optional Format only). You can also use the tape utilities
to read BSQ, BIP, or BIL data directly from tape and to control SCSI tape drives (9
track, 8 mm, and 4 mm media) at the file and record levels.
Flexible tape tools are included with the tape utilities for adding other data types,
even when a specific format is not directly supported. Also included is a special tape
output utility for writing ENVI files to tape, which preserves header information and
file structure. A corresponding tape input utility is also available for reading ENVI
formatted tapes. Use the tape scan and dump utilities to diagnose tape structures,
build scripts for commonly used tape types, and dump tapes to disk.
Tape functions are automatically supported on UNIX. To install SCSI tape support
for Microsoft Windows 2000 and Windows XP platforms, open the aspi_v470.exe
self-extracting archive in the \tape32 directory on the ENVI for Windows
installation CD. Please see the included README.DOC file for installation
instructions. For Windows, tape functions only work in 32-bit mode. If you have a
64-bit Windows PC, run ENVI in 32-bit mode by selecting Start →
Program Files → ENVI x.x → 32-bit → ENVI or ENVI + IDL.
On UNIX Platforms
The tape devices on UNIX platforms are specified as the name of the tape device in
the /dev directory. For example, to specify device 0b in the /dev/rmt directory, use
the name /dev/rmt/0b as the ENVI Tape Device.
3. Click OK to have ENVI scan the tape header. If MSS format data are
identified, the MSS Tape Output Parameter dialog appears.
4. To subset the image being read from tape, enter the starting and ending lines
and/or samples in the SamplesTo and/or LinesTo fields, respectively.
5. Select bands to read by clicking the toggle buttons next to the desired band
names. To designate a range of bands, enter the beginning and ending band
numbers in the fields next to the Add Range button. Click Add Range.
6. Select output to File or Memory. File output is recommended.
7. Click OK to start the tape processing.
• To change the tape record size, enter the values into the MaxRecsize
fields.
3. Click OK to have ENVI scan the tape header. If the data are identified as one
of the supported SPOT data formats, the SPOT Tape Output Parameters dialog
appears.
4. To subset the image being read from tape, enter the starting and ending lines
and/or samples in the SamplesTo and/or LinesTo fields respectively.
5. Select the bands to read by clicking the toggle buttons next to the band names.
If SPOT PAN data are identified, one band is listed in the Select Output
Bands list. If SPOT XS data are identified, three bands are listed. To designate
a range of bands, enter the beginning and ending band numbers in the fields
next to the Add Range button. Click Add Range.
6. Select output to File or to Memory. File output is recommended.
7. Click OK to start the tape processing.
1. From the ENVI main menu bar, select File → Tape Utilities → Read Known
Tape Formats → AVIRIS. The AVIRIS - Load Tape dialog appears.
2. Select from the following options:
• To designate a different tape device, enter or choose a device. If you
designate a different tape device, allow your operating system enough time
to register the new device before proceeding.
• To change the tape record size, enter the values into the MaxRecsize
fields.
3. Click OK to have ENVI read the AVIRIS wavelength file and scan the AVIRIS
image header. If the data are identified as AVIRIS data, the AVIRIS Tape
Output Parameters dialog appears.
4. To subset the image being read from tape, enter the starting and ending lines
and/or samples in the SamplesTo and/or LinesTo fields respectively.
5. Choose the bands (and their corresponding wavelengths) to read by clicking
the toggle buttons next to the band names. To designate a range of bands, enter
the beginning and ending band numbers in the fields next to the Add Range
button. Click Add Range.
6. Select output to File or to Memory. File output is recommended.
7. Click OK to start the tape processing.
• To change the tape record size, enter the values into the MaxRecsize
fields.
6. Click OK. ENVI scans the data file to determine critical format and location
information and automatically identifies and reads the data.
7. Select output to File or to Memory. File output is recommended.
8. Click OK to start the tape processing.
Note
If you are using SIR-C files dumped directly from tape to disk, or if you are using
SIR-C data on disk that were not read from tape, see “Using the CEOS Header Tool
to Find Missing Information” on page 1120 for instructions for entering required
data parameters.
You can subset all data types directly from tape to conserve disk space. In addition,
you can multilook SLC datasets from tape with an integer or non-integer number of
looks.
Depending on the data type, the size of the SIR-C data scene, and the number of
datasets selected, tape reading and processing could take from less than one hour to
several hours to complete.
1. From the ENVI main menu bar, select File → Tape Utilities → Read Known
Tape Formats → SIR-C CEOS. The SIR-C Format - Load Tape dialog
appears.
2. Select from the following options:
• To designate a different tape device, enter or choose a device. If you
designate a different tape device, allow your operating system enough time
to register the new device before proceeding.
• To change the tape record size, enter the values into the MaxRecsize
fields.
3. Click OK to have ENVI scan the tape for standard format SIR-C data. When
the tape scan is completed, the SIR-C Tape File Selection dialog appears. A list
of the SIR-C data files on the tape appears in the Select Output Files list.
4. Select the box next to one or more of the desired datasets to choose the data to
read from the tape. To designate a range of data, enter the beginning and
ending numbers in the fields next to the Add Range button. Click Add Range.
5. Click OK. The SIR-C Tape Parameters dialog appears. The SIR-C datasets
selected in the previous dialog are listed in the Selected SIR-C Tape Files list.
Only select this option if insufficient disk space is available to read the entire dataset.
Multi-looking on disk is much more efficient and is the preferred processing option
(see “Multilooking SIR-C Compressed Data” on page 1135).
1. In the SIR-C Tape Parameters dialog, select a dataset.
2. Click Multi-Look. The SIR-C Multi-Look Parameters dialog appears.
2. Click Spatial Subset to use optional Spatial Subsetting, then click OK. The
RADARSAT Tape Parameters dialog appears.
3. Repeat steps 1 and 2 for each dataset listed in the dialog.
Entering RADARSAT-1 Output Filenames
1. In the RADARSAT Tape Parameters dialog, select a dataset in the Selected
RADARSAT Tape Files list.
2. In the Enter Output Filename field, enter a filename.
3. Repeat steps 1 and 2 for each dataset listed in the dialog. ENVI creates one
image output file for each dataset selected.
4. Click OK to begin reading the tape and processing the data. The processed
images appear in the Available Bands List.
Tip
To conserve disk space, subset the data directly from tape.
1. From the ENVI main menu bar, select File → Tape Utilities → Read Known
Tape Formats → Read Generic CEOS. The CEOS Format - Load Tape
dialog appears.
2. Select from the following options:
(Table: CEOS file and record identification — File ID, Rec ID, Data Type, Information)
The current tape device name appears at the top of the window. To designate a
different tape device, enter the device name in the Tape Device field.
If you designate a different tape device, allow your operating system enough
time to register the new device before proceeding.
2. In the ENVI Tape Dump Utility window, select Options → Scan Tape. The
ENVI Tape Information Scan window appears as the tape is scanned and the
current file number and the total number of bytes scanned on the tape are
listed. As each file is completed, the tape information is shown in the ENVI
Tape Dump Utility dialog.
To interrupt the tape scan at any time, click Interrupt Tape Scan at the bottom
of the window (the interrupt may take a few seconds to register with the
system). If the tape scan is interrupted, the information up to that point is
shown and the tape is automatically rewound. In either case, the file number,
the number of records, and the number of bytes per record are listed.
3. In the ENVI Tape Dump Utility dialog, select from the following options to
select records and bytes and to edit their values:
• To edit values, select an item in the list, change the values in the
corresponding fields, and press Enter. This allows combining files/records
and selecting subsets of bytes to read.
• To add new items to the dump list, enter new values in the fields and click
Add Entry.
• To delete an entry from the list, select the entry in the list and click Delete
Entry.
• To clear the list, select Clear Entries.
• To recall data from the previous scan, select Options → Restore Prev
Scan.
• To save the Tape Script to an ASCII text file, select File → Save Format
and enter the filename. The default file extension is .fmt. (See “Tape
Script Format (.fmt)” on page 1177).
• To recall previously saved Tape Scripts, select File → Restore Format
and select the file.
4. In the ENVI Tape Dump Utility dialog, select Options → Dump Tape to read
the tape. The Tape Dump Output Parameters dialog appears.
5. Use the toggle button to select either Dump tape records to a single output
file or Dump each item to a separate output file.
3. When the Selected Directories List is complete, click OK. ENVI displays a
status window as it scans the directories.
If a header does not match its ENVI file, ENVI displays a warning message.
Click OK. The Header Info dialog appears so that you may enter the correct
information (described in Creating Header Files in Getting Started with ENVI).
When scanning is finished, the Scanned ENVI Files dialog appears, listing all
of the ENVI files found in the specified directories.
4. To open a file, select a file in the Located Files List and select File → Open
File from the Scanned ENVI Files dialog menu bar. ENVI adds the image to
the Available Bands List.
To add a new directory and enable selecting files from within that directory, select
Options → Scan New Directory List from the Scanned ENVI Files dialog menu
bar.
To open a file based on geographic location, see “Opening Files with the Geo-
Browser” on page 256.
In the Geo-Browser window, the mouse cursor’s latitude and longitude appear in the
upper-left corner of the map.
ENVI Preferences
Use Preferences to view information about the current configuration of ENVI, or to
change the current configuration. For information about setting ENVI preferences,
see Setting ENVI Preferences in Getting Started with ENVI.
• If the xfac and yfac values are greater than or equal to 1, select Nearest
Neighbor, Bilinear, or Cubic Convolution from the Resampling drop-down
list (see “Warping and Resampling” on page 904).
• If the xfac and yfac values are less than 1, select from Nearest Neighbor
or Pixel Aggregate resampling only.
• You can control x and y scales independently. Enter values less than 1 to
reduce the image and values greater than 1 to enlarge the image. The
number of output samples and lines are updated.
• Nearest neighbor resampling uses the nearest pixel value as the output
pixel value; pixel aggregate resampling averages all of the pixel values that
contribute to the output pixel. For example, if you enter 0.5 for both xfac
and yfac, each output pixel value is calculated by averaging the four input
pixel values it covers (see the sketch after this procedure).
4. Select output to File or Memory.
5. Click OK.
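The difference between the two methods can be sketched with IDL's own array routines (an illustration only, not the code ENVI runs):

band = findgen(4, 4)                ; 4 x 4 test band with values 0 through 15
agg  = rebin(band, 2, 2)            ; pixel aggregate style: each output pixel is the
                                    ; average of a 2 x 2 block of input pixels
nn   = rebin(band, 2, 2, /SAMPLE)   ; nearest neighbor style: picks a single input
                                    ; value for each output pixel, with no averaging
big  = congrid(band, 8, 8)          ; nearest neighbor enlargement to 8 x 8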
Rotating Images
Use Rotate/Flip Data to perform several standard image rotations, including 0, 90,
180, and 270 degrees with or without transposition. (Here, transpose means that the
dimensions of the array are swapped.) Alternatively, you can specify the exact angle
of the desired rotation. Rotating images is useful for orienting images before
registration.
Tip
To flip an image vertically, where the pivot line is a horizontal line running through
the middle of the image, choose 270 degrees with transpose. To flip an image
horizontally, where the pivot line is a vertical line running through the middle of the
image, choose 90 degrees with transpose.
1. From the ENVI main menu bar, select Basic Tools → Rotate/Flip Data. The
Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Rotation Parameters dialog appears. This
dialog varies slightly depending on whether you use a standard IDL rotation or
an exact rotation angle.
Initially, some letters and numbers are printed horizontally (normal text
orientation) in the box in the upper-right corner of the dialog. The orientation
of the numbers shows schematically the orientation of the output image with
the selected rotation and/or transpose applied.
3. Select from the following rotation options:
• To apply a standard rotation (0, 90, 180, or 270 degrees), click Standard
and select the rotation.
To flip the x and y coordinates of the image, click the Transpose toggle
button to Yes.
• To specify the exact rotation angle desired, enter a value in the Angle field
(angles are measured clockwise from horizontal) and press Enter.
Select the resampling algorithm to use to calculate the output image from
the Resampling drop-down list. The choices are Nearest Neighbor,
Bilinear Interpolation, or Cubic Convolution (see “Warping and
Resampling” on page 904).
Figure 3-1: Rotation Parameters Dialog with Standard IDL Rotations and
Transposes (left) and with Arbitrary Angle Rotations (right)
Layer Stacking
Use Layer Stacking to build a new multiband file from georeferenced images of
various pixel sizes, extents, and projections. The input bands will be resampled and
re-projected to a common user-selected output projection and pixel size. The output
file will have a geographic extent that either encompasses all of the input file extents
or encompasses only the data extent where all of the files overlap.
1. Select one of the following options from the ENVI main menu bar:
• Basic Tools → Layer Stacking
• Map → Layer Stacking
The Layer Stacking Parameters dialog appears.
Stretching Data
Use Stretch Data to perform file-to-file contrast stretching. The data stretching
function is a flexible method for changing the data range of a given input file. You
have full control over both the input and output histograms and the output data type
(byte, integer, floating-point, and so forth). For more information, see “Using
Interactive Stretching” on page 80.
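Conceptually, a linear stretch by value maps a chosen input range onto the output data range. A minimal IDL sketch (using a hypothetical input band and byte output; this is not ENVI's implementation):

band = randomu(seed, 200, 200) * 1000.   ; hypothetical input band with values 0 - 1000
lo = 100. & hi = 900.                    ; input stretch range (the By Value case)
out = bytscl(band, MIN=lo, MAX=hi)       ; linear stretch to the byte range 0 - 255;
                                         ; values below lo clip to 0, values above hi to 255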
1. From the ENVI main menu bar, select Basic Tools → Stretch Data. The Input
File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Data Stretching dialog appears.
3. To calculate the statistics based on a statistics subset or the area under an ROI,
click Stats Subset. For subsetting details, see “Statistics Subsetting” on
page 223.
4. Select the Stretch Type (Linear, Equalize (histogram equalization), Gaussian,
or Square Root) using the appropriate radio button.
If you select Gaussian, enter a standard deviation in the Stdv field.
5. Select a Stretch Range value of By Percent or By Value using the appropriate
radio button.
6. Enter the minimum and maximum values in the Min and Max fields,
respectively, to control the input data range.
7. Set the Output Data Range in the Min and Max fields. The values must
match the ranges of the data type selected from the Data Type drop-down list
(see next). If out-of-range values are entered, low values are automatically
corrected to the minimum and high values are automatically corrected to the
maximum of the selected data type.
8. From the Data Type drop-down list, select the appropriate data type (byte,
integer, unsigned integer, long integer, unsigned long integer, 64-bit integer
and unsigned 64-bit integer, floating-point, double precision, complex, or
double complex).
9. Select output to File or Memory.
10. Click OK. If no statistics file exists for the selected input file, ENVI calculates
the image statistics before data stretching and an Image Statistics window
shows the percent processing complete as a slider that moves from 0 to 100%.
If a statistics file already exists (or when the image statistics are calculated) a
Data Stretching window shows the percentage of data stretching completed.
When complete, ENVI adds the resulting file to the Available Bands List.
Statistics
Use the Statistics option to generate statistical reports and display plots of
histograms, mean spectra, eigenvalues, and other statistic information for image files.
Computing Statistics
ENVI can calculate basic statistics and/or tabulated histogram information
(frequency distributions) for single-band or multi-band images. The minimum,
maximum, and mean spectra can only be calculated for multi-band images. Similarly,
covariance statistics, which include eigenvectors and a correlation matrix, can only be
calculated for multi-band images. The statistics are calculated in double-precision.
1. From the ENVI main menu bar, select Basic Tools → Statistics → Compute
Statistics. The Input File dialog appears.
2. In the Select Input File list, select the input file.
3. Perform optional Spatial Subsetting, Spectral Subsetting, and/or Masking.
4. Click OK. The Compute Statistics Parameters dialog appears.
B. Use the Floating Report toggle button to designate the format (Normal or
Scientific) for the numbers in the ASCII report. Normal numbers are in
decimal format (for example, 25.88). Scientific numbers are a single digit
followed by a decimal value, the letter e, and the exponential power (for
example, 2.588e+001).
11. Click OK. When the statistics are calculated, the Statistics Results plot
window appears.
• In the Enter Output Stats Filename[.sta] field, enter a filename. The default
file extension for statistics files is .sta. ENVI saves the statistics report to the
specified file when you click OK.
• Save results to text file: Saves the statistics report to a text file. When this
option is selected, the Save Results to Text File dialog appears. In the Enter
Output Text Filename[.txt] field, enter a filename. The statistics report is
saved to the specified file when you click OK.
Tip
The resulting text file is tab-delimited for easy import into external
spreadsheet programs, such as Excel.
B. Use the Floating Report toggle button to designate the format (Normal or
Scientific) for the numbers in the ASCII report. Normal numbers are in
decimal format (for example, 25.88). Scientific numbers are a single digit
followed by a decimal value, the letter e, and the exponential power (for
example, 2.588e+001).
4. From the Output Data Type drop-down list, select the output data type. If the
output data falls outside the data type range, the output data will be clipped to
the highest or lowest data type value (that is, byte output will have values only
between 0 and 255, and all negative values will be clipped to 0).
5. Select output to File or Memory.
6. Click OK. ENVI adds the resulting output to the Available Bands List.
ENVI calculates the various statistics as follows, where xj is the value of a pixel for
band j and N = number of bands:
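As a sketch of the standard per-band definitions, expressed here with IDL's built-in routines rather than ENVI's own code (an illustration of the usual formulas, using a hypothetical image cube):

cube = randomu(seed, 100, 100, 5)        ; hypothetical image cube [samples, lines, bands]
nb = (size(cube, /DIMENSIONS))[2]
for j = 0, nb - 1 do begin
  band = double(cube[*, *, j])           ; statistics are computed in double precision
  print, 'Band', j + 1, min(band), max(band), mean(band), stddev(band)
endfor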
Spatial Statistics
Use Spatial Statistics to calculate spatial autocorrelation and semivariance for
images. These statistics aid in analyzing the extent to which the occurrence of an
event in an area inhibits, or makes more likely, the occurrence of an event in a
neighboring area.
You can either calculate statistics for each pixel’s nearest neighbors only, or you can
calculate a correlogram, which is a plot of the autocorrelation value calculated at
multiple pixel separations.
ENVI provides options to calculate the following spatial statistics:
• Global Spatial Statistics: Provide a single value describing the overall
autocorrelation within a scene.
• Local Spatial Statistics: Result in an image output, where each pixel value
represents the autocorrelation of that pixel and its neighbors. The two types of
statistics are described in the following sections:
3. In the Global Spatial Statistics Parameters dialog, set the following choices:
4. From the Neighborhood Rule drop-down list, select which adjacency rule to
use in the calculation. This rule defines which adjacent pixels to compare to
the central pixel. The choices are:
• Rook’s Case (default): Selects the pixels on the top, bottom, left, and
right.
Figure 3-8: Global Spatial Statistics Results Dialog with Plot, Correlogram, and
Text Report Output
In the lower portion of the Global Spatial Statistics Results plot window, the View
Correlogram tab is the default display when the input file has multiple bands. When
the input file has a single band, the Text Report tab is the default display. You can
switch between the two views by selecting the appropriate tab. When you select the
Moran’s I, Geary’s C, or Semivariogram radio buttons, the correlogram data
updates to reflect the current plot view.
If you did not generate plot output, the View By Band and Text Report tabs display.
When the input file has a single band, the Text Report tab is the default display
(Figure 3-9).
Figure 3-9: Global Spatial Statistics Results Dialog with Text Output (No Plot)
When viewing information in the Text Report tab, you can quickly jump to statistics
for a specific band in the report. To do this, click Select Stat, then select the band to
view.
When the input file has multiple bands, the View By Band tab is the default display
(Figure 3-10). The View By Band tab shows the Moran’s I and Geary’s C indices,
and semivariance statistics by band.
Figure 3-10: Global Spatial Statistics Results Dialog with View By Band Output
References:
Daniel A. Griffith, 1987. Spatial Autocorrelation – A Primer. Association of
American Geographers, Washington D.C.
Curran, P.J., 1988. The Semivariogram in Remote Sensing: An Introduction. Remote
Sensing of Environment, 24:493-507.
Woodcock, C.E. and A.H. Strahler, 1987. The Factor of Scale in Remote Sensing.
Remote Sensing of Environment, 21:311-332.
3. From the Neighborhood Rule drop-down list, select which adjacency rule to
use in the calculation. This rule defines which adjacent pixels to compare to
the central pixel. The choices are:
• Rook’s Case (default): Selects the pixels on the top, bottom, left, and
right.
• Bishop’s Case: Selects four diagonal neighboring pixels.
• Queen’s Case: Selects all eight neighboring pixels.
• Horizontal: Selects two neighboring pixels in the same row.
• Vertical: Selects two neighboring pixels in the same column.
• Positive Slope: Selects two neighboring pixels in opposite corners in a
positive diagonal.
• Negative Slope: Selects two neighboring pixels in opposite corners in a
negative diagonal.
References:
Anselin, L., 1995. Local Indicators of Spatial Association – LISA. Geographical
Analysis 27(2):93-115
Getis, A. and J.K. Ord, 1992. The Analysis of Spatial Association by Use of Distance
Statistics. Geographical Analysis 24(3):189-206.
• Differences in the collection date and time: Seasonal changes can cause large
differences in scenes containing vegetation (due to plant senescence and
canopy architecture development). Differences in the season and time of day
will also affect the solar azimuth and elevation.
• Differences in Atmospheric Conditions: The dominant weather conditions
can affect atmospheric transmission and scattering. Consistent differences in
gross atmospheric conditions are often associated with seasonal changes. For
example, differences in the predominant wind direction can be important
(winds blowing in over the ocean contain different aerosols with different
scattering properties from those blowing in over an urban area). Another
common, yet consistent, atmospheric difference is the water content of the
atmosphere. Summer atmospheres tend to be wetter than winter atmospheres.
Atmospherically corrected images can reduce such influences.
• Differences in Image Calibrations: For the most accurate change detection
results, it is important to work with images that are calibrated into the same
units. If a calibration into physical units (such as radiance) is not possible, a
relative calibration may be better than none at all (especially if the instruments
that collected the images have different dynamic ranges).
• Differences in Image Resolution: Differing pixel sizes can lead to false
change detections. It is important that the original images (prior to resampling
or re-projection) have the same pixel resolution. For scenes with large swaths
(such as AVHRR, SeaWiFS, or MODIS) the actual pixel sizes differ across the
scene. In such cases, differences in the sensor viewing geometry can also be
important.
• Coregistration Accuracy: Accurately coregistered images are critical for
change detection analyses. While the Compute Difference Map routine will
automatically coregister the input images using the available map information,
if the differences in the image geometry are substantial, it is well worth the
effort to ensure that the coregistration is as accurate as possible before
performing a change detection.
The Compute Difference Map tool does not compensate for any of these (or other)
conditions. Its results are strictly dependent on pixel-for-pixel comparisons.
1. From the ENVI main menu bar, select Basic Tools → Change Detection →
Compute Difference Map. The Select the ‘Initial State’ Image dialog appears.
The input images must be georeferenced or coregistered. If the images are not
coregistered, then the available map information will be used to automatically
coregister the area common to both.
2. Select a single band image representing the initial state and perform optional
Spatial Subsetting, then click OK. The Select the ‘Final State’ Image dialog
appears.
3. Select a single band image representing the final state and perform optional
Spatial Subsetting, then click OK. The Compute Difference Map Input
Parameters dialog appears.
4. Enter the number of classes to use. Each class is defined by a difference
threshold that represents a varying amount of change between the two images.
The minimum number of classes is two. The default classification thresholds
are evenly spaced between (-1) and (+1) for simple differences, and (-100%)
and (+100%) for percent differences. The default class definitions attempt to
produce symmetric classes, with an equal number of positive and negative
change classes surrounding a No Change category. The order in which the
classes are defined is as follows:
• For n classes, where n is odd, the first (n/2) classes represent positive
changes, starting with the largest positive changes and ending with the
smallest.
• The middle class, (n/2) + 1, represents no change.
• The last (n/2) classes represent negative changes, starting with the smallest
negative changes and ending with the largest.
• For an even number of classes the definitions remain the same except that
the number of negative classes is reduced by one. In short, the default class
definitions range from positive to negative, with the magnitude of the
change increasing with distance from the middle No Change class.
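For example, following these rules with five classes, classes 1 and 2 hold the positive
changes (largest first), class 3 is the No Change class, and classes 4 and 5 hold the
negative changes (smallest first).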
5. To modify or view the classification thresholds, define names for the classes,
or import classification thresholds from a previous result, click Define Class
Thresholds. (If using default thresholds, this step is unnecessary.) The Define
Class Simple Difference Thresholds dialog appears. Each class is defined by
one line in the dialog.
While you are encouraged to customize the criteria used to define the change
thresholds, it is recommended that the classes retain their default symmetric
property, with an equal number of positive and negative classes surrounding a
No Change class. Retaining the default position (order) and type (negative or
positive) of the classes will make the results easier to interpret using the
classification color assignments.
A. To define class names, place the cursor in the field next to the class you
wish to rename and enter the new class name.
3. Select a classification image representing the final state and perform optional
Spatial Subsetting, then click OK. The Define Equivalent Classes dialog
appears.
4. Match the classes from the initial and final state images by selecting the
matching names in the two lists and clicking Add Pair.
Add only the classes you wish to include in the change detection analysis (you
do not have to pair all classes). The class combinations are shown in a list at
the bottom of the dialog. If the classes in each image have the same names,
they are automatically paired.
5. Click OK. The Change Detection Statistics Output dialog appears.
6. Select the Report Type. You may choose any combination of Pixels, Percent,
and Area.
7. Click the Output Classification Mask Images? toggle button to specify
whether or not to create class masks.
8. If the Output Classification Mask Images? toggle button is Yes, select output
to File or Memory.
9. Click OK. If an Area Report was requested but the initial state image does not
have pixel sizes defined, the Define Pixel Sizes for Area Statistics dialog
displays.
10. Enter the pixel sizes.
11. Click OK. ENVI adds the resulting output to the Available Bands List and
opens the statistics in the Change Detection Statistics window.
The report also includes general information about the analysis, such as the names of
the input images and the equivalent class pairings.
The statistics tables list the initial state classes in the columns and the final state
classes in the rows. However, the columns include only the selected (paired) initial
state classes, while the rows contain all of the final state classes. This is required for a
complete accounting of the distribution of pixels that changed classes. For each initial
state class (that is, each column), the table indicates how these pixels were classified
in the final state image. For example, the table in Figure 3-12 shows that 2,094 pixels
initially classified as Water changed into the Sediment class in the final state image.
• The Class Total row indicates the total number of pixels in each initial state
class, and the Class Total column indicates the total number of pixels in each
final state class. The table in Figure 3-12 shows that 35,989 pixels were
classified as Agriculture in the initial state image.
• The Row Total column is a class-by-class summation of all final state pixels
that fell into the selected initial state classes. Note that this may not be the
same as the Final State Class Totals because it is not required that all initial
state classes be included in the analysis.
• The Class Changes row indicates the total number of initial state pixels that
changed classes. In the table in Figure 3-12, the total Class Changes for
Agriculture is 30,923 pixels. In other words, 30,923 pixels that were initially
classified as Agriculture changed into final state classes other than agriculture.
• The Image Difference row is the difference in the total number of equivalently
classed pixels in the two images, computed by subtracting the Initial State
Class Totals from the Final State Class Totals. An Image Difference that is
positive indicates that the class size increased. For example, in the sample
analysis, the Water class grew by 67,618 pixels.
Select the tabs along the top of the Change Detection Statistics dialog to show
equivalent information for the class changes in terms of Percentage and Area. In the
Percent report (not shown here) the increase in the size of the Water class corresponds
to a growth of 21%:
(final state - initial state) / initial state = (390381 - 322763) / 322763 = 0.209
Additional Features of the Change Detection Statistics Report
Options from the Change Detection Statistics Report dialog menu bar are:
• To change the floating-point precision displayed in the report, select
Options → Set Report Precision.
• To convert the units for the Area report, select Options → Convert Area
Units.
• To save the statistics reports to an ASCII text file, select File → Save to Text
File. The Save Change Detection Stats to Text dialog appears; you can
optionally add a descriptive line of header text to the file being written. The
data is saved in a tab delimited format to facilitate importing into other
software programs.
ENVI saves the class mask images as a multi-band image with one mask for each
paired class. To help identify the class into which a pixel changed, the masks are
stored as ENVI classification images with the class assignments (names, colors, and
values) matching the final state. A value of zero in the mask indicates that no change
occurred from the initial to the final state; non-zero values indicate a change.
To differentiate pixels that did not change classes from those that changed into the
Unclassified class (which typically has a classification value of zero), pixels that
changed into the Unclassified class are assigned a value equal to the number of final
state classes plus one, and color coded white. For example, in the sample analysis
shown in Figure 3-12, the final state image contains 6 classes; therefore, any pixel in
the mask that changed into the Unclassified class would be assigned a value of 7.
Measurement Tool
Use Measurement Tool to get a report on the distance between points in a polygon
or polyline, and to get perimeter and area measurements for polygons, rectangles, and
ellipses.
To measure ROIs while using the ROI function, see “Reporting ROI Measurements”
on page 333.
1. Select one of the following options:
• From the ENVI main menu bar, select Basic Tools → Measurement Tool.
• From the Display group menu bar, select Tools → Measurement Tool.
The Display Measurement Tool dialog appears.
2. In the Display field, enter the number of the display that you want to take
measurements from.
3. Select the radio button for the display group window (Image, Scroll, Zoom) in
which you want to measure.
To disable the measurement function at any time, select the Off radio button.
4. From the Display Measurement Tool menu bar, select Type → area_shape you
want to measure.
5. From the Display Measurement Tool dialog menu bar, select Units →
unit_type. If the pixel size of the image is not stored in the header, and you
select any unit except Pixels, complete the following steps when the Input
Display Pixel Size dialog appears:
A. In the X Pixel Size and Y Pixel Size fields, type the size of the pixels in
your image.
B. From the Units drop-down list, select the unit type.
C. Click OK.
6. From the Display Measurement Tool dialog menu bar, select Area →
report_type to specify measuring the area in Units² (for example, meters²),
Acres, or Hectares.
If you select Acres or Hectares, the Input Display Pixel Size dialog appears.
Enter the X/Y Pixel Size, select the Units from the drop-down list, and click
OK.
7. From the Display Measurement Tool dialog menu bar, select Options →
unit_type to specify whether the measurement information is reported as line
segments (the default) or as point coordinates. The options are:
• Report as Points: To produce a listing of the vertex coordinates. The
coordinates are reported as a pixel location (Pixel (x,y)).
• Report as Segments: To produce a listing of the line segment distances.
• Georef Map (x,y) or Georef (Lat/Lon): For georeferenced images, you
can produce a listing of the coordinates as either map coordinates or as
latitude and longitude coordinates.
8. In the display group window, left-click and draw the shape.
• For Rectangle or Ellipse type, left-click, hold the button, and drag the
shape to the desired size. To draw a square or circle, middle-click, hold the
button, and drag.
• Right-click to close the polygon or complete the line.
• For Polygon type, the distances between the vertices are listed, and the
perimeter and total area are reported when the polygon is closed.
• For Polyline type, the distances between the vertices are listed, and the
total distance is given when the polyline is completed.
• For Rectangle type, the lengths of the side segments, the perimeter, and
the total area are reported.
• For Ellipse type, the circumference and total area are reported.
• To erase the shape, right-click again.
Band Math
ENVI Band Math is a flexible image processing tool with many capabilities not
available in any other image processing system. You can use ENVI’s Band Math
dialog to define bands or files used as input, to call a user Band Math function, and to
write the result to a file or memory. ENVI’s Band Math function accesses data
spatially by mapping variables to bands or files. Spatial data that are too large to read
entirely into memory are automatically accessed using ENVI’s data tiling.
The following figure depicts Band Math processing that adds three bands. Each band
in the expression is mapped to an input image band, summed, and output as the
resulting image data. You can map one or more of the expression’s variables to a file
instead of mapping each variable to a single band. The resulting output is a new
image file. For example, in the expression b1 + b2 + b3, if b1 is mapped to a file and
b2 and b3 are mapped to a single band, then the resulting image file contains the
bands of the b1 file summed with b2 and b3.
Some common image summing operations are easier to perform using the
Basic Tools → Statistics → Sum Data Bands selection (see “Summing Data
Bands” on page 280).
2. In the Band Math dialog, enter the desired mathematical description, including
variable names, into the Enter an expression field. Use variables in place of
band names or filenames (the variables will be assigned in the next step).
Variable names must begin with the character “b” or “B” followed by up to 5
numeric characters.
For example, to calculate the average of three bands, use the following
equation:
(float(b1)+float(b2)+float(b3))/3.0
Three variables are used in this expression: B1, B2, and B3. Note that, in this
example, the IDL function float() is used to prevent byte overflow errors
during calculation. See “Band Math Requirements” on page 310 for further
details.
2. Select the band in the Available Bands List. When the first band is selected,
only those bands with the same spatial dimensions are shown in the band list.
3. Continue to assign a value to B2, B3, and so forth in the same manner.
Mapping Variables to Multiband Images
You can assign a multiband image as one or all of the variables (using an image file as
a variable is considered File Math).
1. In the Variables to Bands Pairings dialog, select a variable in the Variables
used in expression field.
result = expression
In the Band Math dialog, enter only the expression part of the function.
Your expression can include any valid IDL function, including those that you
write yourself. If you are using your own custom IDL functions, be sure to
properly compile the function before using it in Band Math (see “Writing Band
Math User Functions” on page 319).
2. All input bands must have identical dimensions: The expression is applied
on a simple pixel-by-pixel basis. Therefore, the input bands (to which your
expression is applied) must all have the same spatial dimensions in samples
and lines. Furthermore, Band Math does not automatically coregister images
that are georeferenced. To automatically coregister images prior to using Band
Math, use the Basic Tools → Layer Stacking utility (see “Layer Stacking” on
page 269).
3. All variables in the expression must be named Bn (or bn): The variables in
the expression that represent input bands must begin with the character “b” or
“B” followed by up to 5 numeric characters. For example, all of the following
expressions are valid when adding three bands:
b1 + b2 + b3
B1 + B11 + B111
B1 + b2 + B3
4. The result must be a band of the same dimension as the input bands: The
expression must produce a result with the same spatial dimensions in samples
and lines as the input bands.
Tip
To find out the data type of your images, highlight them in the Available Bands List
and their data type will be listed in the DIMS box at the bottom of the dialog.
You might ask, why not just carry out all computations in a floating-point data type
since it can represent any value? The answer is disk space. The greater the dynamic
range a data type can represent, the more disk space it consumes. For example, byte
data types use only 1 byte for every pixel, integers use 2 bytes for every pixel, while
floating-point data types use 4 bytes for every pixel. Thus a floating-point result will
consume twice as much disk space as an integer result. See Table 3-1 to learn more
about the disk space usage and dynamic ranges of the IDL data types.
(Table 3-1: IDL data types — Data Type, Casting Function, Shortcut, Bytes per Pixel, Dynamic Range)
The order of precedence combined with the dynamic typing can also change the
outcome of your expression. Be sure to promote the data type in the proper place in
the expression to avoid data type overflow or integer division errors. For example,
consider the following case:
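One illustrative case (assuming b1 and b2 are integer bands):

b1 / b2 + 0.5

Here the integer division b1 / b2 is evaluated first, truncating any fractional part, and only then is 0.5 added. Writing

b1 / float(b2) + 0.5

promotes the division to floating point before the addition, so no precision is lost.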
The following table describes the order of precedence for each operator:
Order of Precedence   Operator   Description
Fifth                 EQ         Equal
                      NE         Not equal
                      LE         Less than or equal
                      LT         Less than
                      GE         Greater than or equal
                      GT         Greater than
Sixth                 AND        Boolean AND
                      OR         Boolean OR
                      XOR        Boolean exclusive OR
Seventh               ?:         Conditional expression (rarely used in Band Math).
                                 The following example expression compares two
                                 arrays, b1 and b2: (max(b1) eq 0) ? b2 : b1
Avoid Using IDL Functions That Require All of the Image Data
at Once
Like all other ENVI routines, the Band Math processing is tiled. This means that if
the images being processed are larger than the Image Tile Size (Mb) preference,
which is set to 1 MB by default, then the data is broken into smaller pieces, each piece
is processed separately, and the pieces are then reassembled. This can cause problems if you use an IDL
function that requires all of the image data at once, because the Band Math
expression is applied individually to each tile of data. For example, consider using the
IDL function MAX(), which determines the maximum value in an array:
b1 / max(b1)
If the Band Math processing is tiled, then each tile will be divided by the tile's
maximum value, instead of the maximum value of the whole band. If you find that
your Band Math result has broad horizontal stripes in it, tiling may be the cause of the
problem (because the tiles are horizontal sections of the image). IDL functions to
avoid include FFT, MAX, MIN, MEAN, MEDIAN, STDDEV, VARIANCE, and
TOTAL. In most cases it is also difficult to use the BYTSCL function, but if you
know beforehand the data range of your input bands then you can use BYTSCL as
long as you include the MIN and MAX keywords.
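For example, if you already know that b1 ranges from 0 to 1000 (an assumed range for illustration), an expression along these lines avoids the tiling problem because the scaling limits are supplied explicitly instead of being computed from each tile:

bytscl(b1, MIN=0, MAX=1000)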
The relational operators return a one for true and a zero for false, so the portion of the
expression that reads (b1 lt 0) will return an array of the same dimensions as b1 filled
with ones where b1 was negative and zeros everywhere else. Multiplying this by the
replacement value (-999) affects only those pixels that met the criterion of being
negative. The second relational operator (b1 ge 0) is the complement of the first: it
finds all of the pixels that are positive or zero, and these pixels are multiplied by their
original values and added to the replacement value array. Constructing Band Math expressions
with array operators like this provides a great deal of flexibility. See “Sample Band
Math Expressions” on page 317 for more examples.
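Written out, an expression of this form (shown here as an illustration of the description above) is:

(b1 lt 0) * (-999) + (b1 ge 0) * b1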
The following table describes selected IDL array handling functions. For a complete
listing, see the IDL Reference Guide.
If you want to keep the results of the division as an integer, it is usually better to carry
out the division as a floating-point operation and then convert the results back to your
desired data type. For example, if your input bands are both byte data type and you
want to round up the result and store it as an integer, use the following expression:
fix( ceil( b1/float(b2) ) )
To learn more about the dynamic range of IDL data types, see Table 3-1 in “IDL is
Dynamically Typed” on page 312.
Creating a Blended Image
Band Math is an easy way to experiment with blending multiple images together. For
example, if b1 and b2 are both byte data types, the following expression will produce a
new byte image that is weighted 80% by b2 and 20% by b1:
byte( round( (0.2 * b1) + (0.8 * b2) ) )
The next example is a slightly more complicated expression but its use of array
operators is quite similar to the previous example. This expression uses several
criteria to create a binary mask identifying pixels that are predominantly clouds. This
algorithm can actually be used to create cloud masks from calibrated daytime
imagery from the Advanced Very High Resolution Radiometer (AVHRR) sensor. In
the expression, b4 (a thermal band) must be negative or b2 (a reflectance band) must
exceed 0.65 and the difference between bands b3 and b4 (a mid IR and thermal band)
must exceed 15 degrees. Because relational operators return a one for true, the mask
will have a value of one where there are clouds and zeros elsewhere.
(b4 lt 0) or ( b2 gt 0.65 AND (b3 - b4) gt 15 )
In the next example, the use of both the minimum and maximum operators clips the
data values in b1 at zero and one. No value in b1 will exceed one or fall below zero.
0 > b1 < 1
Spectral Math
Use Spectral Math to apply mathematical expressions or IDL procedures to spectra
(and also to selected multiband images). The spectra can be from a multiband image
(that is, a Z Profile), a spectral library, or an ASCII file. For details, see “Spectral
Math” on page 857.
Segmenting Images
Use Segmentation Image to segment an image into areas of connected pixels based
on the pixel DN value. You can enter a single DN or a range of DN values to use in
the segmentation. Either four or eight adjacent pixels are considered for the
connectivity and you can specify the minimum number of pixels that must be
contained in a region. Each connected region, or segment, is given a unique DN value
in the output image.
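The underlying idea can be sketched in a few lines of IDL (an illustration of connected-region labeling with a hypothetical band, not the code behind Segmentation Image):

band = randomu(seed, 200, 200) * 255.         ; hypothetical input band
mask = (band ge 100) and (band le 200)        ; pixels that fall within the DN threshold range
labels = label_region(mask, /ALL_NEIGHBORS)   ; 8-neighbor connectivity; omit the keyword
                                              ; for 4-neighbor connectivity
pop = histogram(labels, MIN=1)                ; population (pixel count) of each segment
; segments whose population falls below the minimum could then be set back to 0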
1. From the ENVI main menu bar, select Basic Tools → Segmentation Image.
The Input File dialog appears.
2. Select an input band and perform optional Spatial Subsetting, then click OK.
The Segmentation Image Parameters dialog appears.
3. In the Min Thresh Value and Max Thresh Value threshold fields, enter a
minimum and/or maximum threshold value in DN. If you enter only one value,
the data minimum or maximum is used as the other end of the threshold.
To use a single DN value, enter that value in both the Min Thresh Value and
Max Thresh Value threshold fields. Only pixels that fall within the entered
DN range will be considered in making the segmentation image. All other
pixels will have an output value of 0.
4. In the Population Minimum field, enter the minimum number of pixels in a
segment.
5. Use the Number of Neighbors toggle button to select either 4 or 8 neighbors
to consider for the connectivity.
6. Select output to File or Memory.
7. Click OK. ENVI adds the resulting output to the Available Bands List.
Defining ROIs
Regions of interest (ROIs) are portions of images, either selected graphically or
selected by other means such as thresholding. The regions can be irregularly-shaped
and are typically used to extract statistics for classification, masking, and other
operations. To perform ROI definition, you use the ROI Tool dialog. The topic
Defining Regions of Interest in Getting Started with ENVI describes how to open the
ROI Tool dialog, and how to turn off ROI definition. The sections that follow describe
how to define ROIs.
Drawing ROIs
The following ROI types are available in ENVI:
• Polygon
• Polyline
• Point
• Rectangle and square
• Ellipse and circle
• Multi Part (donut)
A single region can contain any combination of the six ROI types. Each type has
different mouse button functions.
1. Select one of the following options for the active display group:
• From the Display group menu bar, select Overlay → Region of Interest.
• From the Display group menu bar, select Tools → Region of Interest →
ROI Tool.
• From the ENVI main menu bar, select Basic Tools → Region of
Interest → ROI Tool.
• In the display group, right-click and select ROI Tool.
The ROI Tool dialog appears. To hide the ROI Tool dialog at any time without
erasing your ROIs, see “Showing and Hiding Overlay Dialogs and Layers” on
page 29.
2. Select the Window you want to use for adding the ROI. The choices are
Image, Scroll, and Zoom. To disable ROI mode, select Off.
3. From the ROI Tool dialog menu bar, select ROI_Type and select one of the
following options.
You can change the settings for how each ROI appears in the display group.
See “Editing ROI Attributes” on page 326 for descriptions.
Mouse buttons functions vary when you are in ROI mode. See “ROI Mouse
Button Functions” on page 324 for details.
• Polygon: (default) Left-click on the image or plot to add polygon vertices.
Right-click to complete the polygon.
• Polyline: Left-click on the image to add polyline vertices. Right-click to
complete the polyline.
• Point: Left-click on the image to add points.
• Rectangle: Left-click and drag on the image to draw the rectangle, or
middle-click to draw a square.
• Ellipse: Left-click and drag on the image to draw the ellipse, or middle-
click to draw a circle.
• Multi Part: On/Off: Use multi part mode to draw ROIs with holes in
them, or donut ROIs. You cannot draw multi part ROIs using point or
polyline ROIs. Left-click and drag on the image to draw the base shape.
Right-click to fill the ROI. Draw any number of additional ROIs or parts
within the first ROI to create holes (holes cannot cross the path of any
other shape within its group). The ROIs do not fill in when you right-click
the second time.
• Input Points from ASCII: See “Adding ASCII Data into ROIs” on
page 331.
4. After adding parts to the ROI, right-click a third time to accept the multi part
ROI. ENVI fills in the base ROI and removes the parts to reveal holes in the
base ROI.
5. Use the colored diamond-shaped handle if you need to move the ROI to another
location.
6. Right-click to accept the ROI placement.
still apply in all other windows. To temporarily suspend drawing ROIs, select the
Off radio button in the ROI Tool dialog to return to normal mouse operations.
(Table: ROI mouse button functions — Mouse Button, Function)
Note
If you have multiple images of the same size displayed and their associated ROI
Tool dialogs open at the same time, any ROIs drawn in one image display also
display in the other images.
1. To start a new ROI, click New Region in the ROI Tool dialog. A new name
appears in the Available Regions of Interest table. The new region uses the next
color in the graphics colors list by default.
2. Select the ROI type and draw the ROI.
3. Edit the ROI as needed.
ROI Options
In the ROI Tool dialog, you have many options to choose from when working with
ROIs. You can input ASCII points into an ROI, create multiple ROIs, report ROI
statistics, measure distances and areas, report the areas of the ROIs, load, erase, and
delete ROIs, plot means, merge regions, reconcile ROIs, and apply band thresholds to ROIs.
If the ROI displays in more than one image (of the same spatial size), any edits are
reflected in all of those images.
Deleting ROIs
Use the Tools menu on the Display group menu bar, the Delete ROI or Delete Part
buttons in the ROI Tool dialog, or the Basic Tools menu on the ENVI main menu bar
to delete ROIs.
Note
If the ROI to delete displays in more than one image (for images of the same spatial
size), deleting it from one deletes it from all. Deleted ROIs cannot be recovered
unless they were previously saved to a file.
1. In the ROI Tool dialog table, select an ROI and click Goto.
2. Continue clicking Goto to move the Zoom window over each pixel contained
in that ROI.
• Save all ROI results to ENVI stats files: Saves the statistics reports for all the
ROIs to separate ENVI statistics files. The Save All ROI Results to ENVI Stats
Files dialog appears. Enter the root name of the statistics files (the default file
extension for statistics files is .sta) and click OK. The statistics report for
each class or region is saved to individual files. The individual files have the
same root name that you specified and are appended with their appropriate
ROI number.
• Save all ROI results to text file: Saves the statistics report for all the ROIs to a
text file. The Save All ROI Results to Text File dialog appears. Enter the name
of the text file and click OK.
Tip
The resulting text file is tab-delimited for easy import into external
spreadsheet programs, such as Excel.
The Stats for button contains a list of the available ROIs. The ROI Statistics Results
dialog reports the calculated statistics (in both the plot and text sections) of the ROI
specified by this menu. To compare statistics for different ROIs with the current ROI
shown in the Statistics Results dialog, use the Options → Copy results to new
window option to create a copy of the ROI Statistics Results dialog for the current
ROI, then use the Stats for menu to display a different ROI in the newly created
dialog.
The Select Plot drop-down button also contains the following additional options:
• Mean for all ROIs: Displays a plot of the means of all the ROIs.
• Stdev for all ROIs: Displays a plot of the standard deviations of all the ROIs.
• Eigenvalues for all ROIs: Displays a plot of the eigenvalues of all the ROIs.
• Histogram for all ROIs: Displays a plot of the histogram of all ROIs for a
chosen band of data.
Growing ROIs
You can grow ROIs to neighboring pixels using a specified threshold. The threshold
is determined by specifying a number of standard deviations away from the mean of
the drawn region. You can use either 4 or 8 neighboring pixels to determine the
growth pattern. It is calculated using the displayed band for a gray scale display, or
the red band for a color display.
Note
All grown ROIs are output as points, regardless of the starting ROI type.
1. In the ROI Tool dialog table, select the name of the ROI to grow.
Within the current Image window, neighboring pixels that fall within the
standard deviation threshold are included in the grown region. Adjacent pixels
outside the current Image window, regardless of pixel value, are not included
in the ROI.
2. Click Grow. The new grown ROI is shown in the Image window. A prompt
asks if you want to keep the resulting grown ROI.
3. Select Yes to grow the ROI with all of the points shown. Select No to return the
ROI to its original size.
If you select No, the Region Growing dialog appears. Change the values of the
standard deviation multiplier and the number of neighbors, if desired.
4. Click OK.
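The thresholding just described can be sketched outside of ENVI. The following Python sketch (illustrative only, not ENVI code; the array names and default multiplier are assumptions) grows a seed ROI into 4- or 8-connected neighbors whose values fall within the mean plus or minus k standard deviations of the drawn region; it ignores the Image-window restriction noted above.

```python
import numpy as np
from collections import deque

def grow_roi(band, seed_mask, stdev_mult=2.0, neighbors=8):
    """Grow a seed region into adjacent pixels within mean +/- k*stdev.

    band      -- 2D array of the displayed (gray scale) or red band
    seed_mask -- 2D boolean array marking the drawn ROI pixels
    """
    mean = band[seed_mask].mean()
    stdev = band[seed_mask].std()
    lo, hi = mean - stdev_mult * stdev, mean + stdev_mult * stdev

    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # 4-neighbor growth
    if neighbors == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # add diagonals

    grown = seed_mask.copy()
    queue = deque(zip(*np.nonzero(seed_mask)))
    while queue:
        y, x = queue.popleft()
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if (0 <= ny < band.shape[0] and 0 <= nx < band.shape[1]
                    and not grown[ny, nx] and lo <= band[ny, nx] <= hi):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown   # boolean mask of the grown region (points, as in ENVI output)
```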
Adding ASCII Data into ROIs
1. From the ROI Tool dialog menu bar, select ROI_Type → Input Points from
ASCII. The Enter ASCII Points Filename dialog appears.
2. Select an input ASCII filename. The Input ASCII File dialog appears.
3. Enter the column numbers for the x and y point coordinates.
4. Select the type of ROI that the points define from the These points comprise
drop-down list. For polygon and polyline ROIs, the ASCII points define the
vertices of the ROI.
5. Select whether the input coordinates are Pixel Based or Map Based. If you
select Map Based, select the projection type and enter the zone and datum
information as necessary by clicking Zone and Datum.
6. Click OK.
Merging Regions
To merge multiple defined ROIs into one:
1. From the ROI Tool dialog menu bar, select Options → Merge Regions. The
Merge ROIs dialog appears with two lists of all defined regions.
2. Under Choose Base ROI to Merge, select the name of a region.
3. Under the Choose ROIs to Merge list, select the names of the regions to
merge into the base region.
4. Click the Delete Merged ROIs? toggle button to select whether or not to
delete the individual regions after they are merged. The colors of the merged
ROIs change to that of the base ROI, and their names are removed from the
ROI Tool dialog table.
5. Click OK.
Intersecting Regions
Use Intersect Regions to create a point type region of interest that contains only the
points where two or more ROIs intersect in an image.
You can also calculate ROI intersections on-the-fly and use them when building a
mask. For more information, see “Including ROI Intersections” on page 353.
To intersect ROIs:
1. From the ROI Tool dialog menu bar, select Options → Intersect Regions. The
ROI Intersection dialog appears.
2. Select the names of the intersecting ROIs to include in the new ROI. Only
select regions that intersect. If a non-intersecting ROI is selected, an error
occurs.
3. Click OK. The new ROI appears in the Available Regions of Interest list. It is a
point type ROI and displays under any overlying polygon ROIs.
Tip
If you cannot see the new point ROI in the Image window, erase all other ROIs and
re-display the new ROI.
Tip
To take measurements of an image without using the ROI functions, see
“Measurement Tool” on page 303.
1. From the ROI Tool dialog menu bar, select Options → Measurement Report.
A blank ROI Measurement Report dialog appears. As you draw the ROI, the
ROI Measurement Report dialog lists the measurements, which differ
depending on the active ROI type.
2. Draw the ROIs as described in “Drawing ROIs” on page 323 for specific ROI
types.
• In Polygon mode, the report lists the distance between the vertices, the
perimeter, and the total area when the polygon is closed.
• In Polyline mode, the report lists the distance between the vertices and the
total distance when the polyline is completed.
• No distance measurements are given when in Point mode.
• In Rectangle mode, the report lists the lengths of the sides, the perimeter,
and total area.
• In Ellipse mode, the report lists the circumference and total area.
Selecting Measurement Units
In the ROI Measurement Report dialog, use the Units menu to select the unit the ROI
is measured in. The choices are pixels, meters, kilometers, feet, yards, miles, and
nautical miles.
1. Select Units → unit_type.
2. If the pixel size of the image is not stored in the header, and you select any unit
except pixel, complete these steps when the Input Display Pixel Size dialog
appears.
A. In the X Pixel Size and Y Pixel Size fields, type the size of the pixels in
your image.
B. From the Units menu, select the unit type.
C. Click OK.
Measuring ROI Area
In the ROI Measurement Report dialog, use the Area menu to measure the area of the
ROI in acres, hectares, or units² (for example, meters²). Select Area → Acres or
Hectares.
Reconciling ROIs
In the ROI Tool dialog, use Reconcile ROIs to apply ROIs defined in one image size
to different sized images.
Tip
When using Reconcile ROIs, ROIs can only be reconciled to images with the same
pixel size as the original image. To reconcile ROIs to an image with a different pixel
size, use Reconcile ROIs via Map.
Reference:
J.A. Richards, 1999, Remote Sensing Digital Image Analysis, Springer-Verlag,
Berlin, p. 240.
The following steps show how you can use the Compute ROI Separability option to
compute the spectral separability between selected ROI pairs.
1. Select one of the following options:
• From the ROI Tool dialog menu bar, select Options → Compute ROI
Separability.
• From the Display group menu bar, select Tools → Regions of Interest →
Compute ROI Separability.
• From the ENVI main menu bar, select Basic Tools → Region of
Interest → Compute ROI Separability.
The Input File dialog appears.
2. Select an input file and perform optional Spectral Subsetting, then click OK.
The ROI Separability Calculation dialog appears.
3. In the dialog, select the ROIs for the separability calculation.
4. Click OK. The separabilities are calculated and reported in a report dialog.
Both the Jeffries-Matusita and Transformed Divergence values are reported for
every ROI pair. The bottom of the report lists the ROI pair separability values
from the least separable pair to the most separable.
5. To save the report to an ASCII file, select File → Save Text to ASCII.
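For reference, the two measures reported here are standard pairwise statistics. A minimal sketch of how such values can be computed from ROI means and covariances is shown below (Python; not the ENVI implementation, using the common 0-2 scaled textbook definitions; the ROI arrays in the usage comment are hypothetical).

```python
import numpy as np

def jeffries_matusita(m1, c1, m2, c2):
    """Jeffries-Matusita distance (0..2) from two class means and covariances."""
    dm = (m1 - m2).reshape(-1, 1)
    c = (c1 + c2) / 2.0
    bhattacharyya = ((dm.T @ np.linalg.inv(c) @ dm).item() / 8.0
                     + 0.5 * np.log(np.linalg.det(c)
                                    / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2))))
    return 2.0 * (1.0 - np.exp(-bhattacharyya))

def transformed_divergence(m1, c1, m2, c2):
    """Transformed divergence (0..2) from two class means and covariances."""
    dm = (m1 - m2).reshape(-1, 1)
    i1, i2 = np.linalg.inv(c1), np.linalg.inv(c2)
    div = (0.5 * np.trace((c1 - c2) @ (i2 - i1))
           + 0.5 * np.trace((i1 + i2) @ dm @ dm.T))
    return 2.0 * (1.0 - np.exp(-div / 8.0))

# Hypothetical usage with ROI spectra arrays of shape (n_pixels, n_bands):
# jm = jeffries_matusita(roi_a.mean(0), np.cov(roi_a.T), roi_b.mean(0), np.cov(roi_b.T))
```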
Saving ROIs to a File
1. From the ENVI main menu bar, select Basic Tools → Region of
Interest → Save ROIs to File. The Save ROIs to File dialog appears.
2. Select the ROIs to save. Only ROIs that were defined in images with the same
dimensions as those in the current display appear in the ROI list. ROIs of other
dimensions remain in memory.
3. Enter a filename or choose an existing output filename (with the extension
.roi for consistency).
4. Click OK.
5. Use the Mask pixels outside of ROI? toggle button to select whether or not to
mask pixels that do not fall within the ROI. If you select Yes, enter a
background value.
6. Select output to File or Memory.
7. Click OK. ENVI adds the resulting output to the Available Bands List.
If the pixels overlap, edit the groups of pixels by selecting the appropriate
colors from the Class menu to add pixels to an ROI or by selecting White to
remove pixels from an ROI.
8. From the n-D Controls dialog menu bar, select Options → Export Class or
Export All to export the colored pixels back to the ROI Tool dialog so you can
import them into classifications.
For details about the n-D Visualizer, see “The n-D Visualizer” on page 754.
You can also output map information, latitude and longitudes, and band data values
for every ROI location. Prior to output, you can select which parameters to include in
the ASCII file. The output is formatted into columns for easy input into spread sheets.
For an example of an ROI ASCII file, see “Example of ASCII Output” on page 346.
1. Select one of the following options:
• From the ROI Tool dialog menu bar, select File → Output ROIs to
ASCII.
• From the Display group menu bar, select Tools → Region of Interest →
Output ROIs to ASCII.
• From the ENVI main menu bar, select Basic Tools → Region of
Interest → Output ROIs to ASCII.
The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Output ROIs to ASCII Parameters dialog
appears.
3. Select the ROIs to output.
4. Click Edit ASCII Output Form to set parameters. The Output ROI Values to
ASCII dialog appears.
5. Set the parameters by selecting/clearing the corresponding check box. By
default, all parameters are selected for output.
• Point #: Include a label with the output.
• ROI Location: Include the ROI location information in the output. Use
the ROI Location toggle button to select whether the ROI location is
output by 1D locations or by sample/line. Pointers to each of the pixels
contained in the selected ROIs are output to the ASCII file. The pointers
are the 1D addresses to the pixel locations in the file, where a 1D address
equals the line number times the number of samples plus the sample
number (see the sketch after this list).
• Map Location: Include geographic location information for
georeferenced data. Use the Map Location toggle button to designate
output of the geographic locations in normal or scientific notation. Use the
increase/decrease buttons to set the number of significant digits.
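The 1D addressing described for the ROI Location option is a simple formula. A minimal sketch of the conversion in both directions (Python; 0-based sample and line numbers are assumed here, so adjust if your output is 1-based):

```python
def to_1d_address(sample, line, n_samples):
    """1D pixel address = line number * number of samples + sample number."""
    return line * n_samples + sample

def from_1d_address(address, n_samples):
    """Recover (sample, line) from a 1D pixel address."""
    return address % n_samples, address // n_samples
```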
Mosaicking Images
Use Mosaicking to mosaic both pixel-based and georeferenced images.
You may also access this function from the Map menu. For details about using this
function, see “Image Mosaicking” on page 919.
1. From the ENVI main menu bar, select Basic Tools → Mosaicking → Pixel-
Based or Georeferenced.
2. From the Import menu, select images for the mosaic. The Input File dialog
appears.
3. Select the file. Load as many files as needed for the mosaic.
4. Position each one in the output file by either entering the upper-left corner
coordinate or by clicking and dragging the schematic image outline for each
image to the desired location (see “Image Mosaicking” on page 919 for a more
detailed description).
5. Click OK.
Creating Masks
Use Masking to create image masks. A mask is a binary image that consists of values
of 0 and 1. When a mask is used in a processing function, ENVI includes the areas
with values of 1 and ignores the masked 0 values in the calculations.
Masking is available for selected ENVI functions, including: statistics, classification,
Linear Spectral Unmixing, Matched Filtering, Continuum Removal and Spectral
Feature Fitting.
Figure 3-21: Example Mask Image from a Data Range and Imported ROI
Building Masks
Use Build Mask to build image masks from specific data values (including the data
ignore value), ranges of values, finite or infinite values, ROIs, ENVI vector files
(EVFs), and annotation files. You can use any combination of input to define a mask
and you can permanently apply a mask to an image.
If the input file has a data ignore value, then the dialog opens with the value
automatically added to the Selected Attributes for Mask list. The two
numbers shown with the filename indicate the data ignore value in the file, and
the bracketed word [All] refers to which bands in a multiband file must contain
the data ignore value in order for a pixel to be masked.
Masking Options
Options in the Mask Definition dialog include importing data values, importing
annotations, masking finite values, masking non-numbers and infinite data values,
using ROIs and EVFs with the mask, and selecting areas for masking.
2. Use this dialog to select the input file for the data range. The Input for Data
Range Mask dialog appears.
3. Click Select New Input if you need to change the input file.
4. Enter a minimum and/or maximum value in the Band Min Value and Band
Max Value fields. If you enter only a minimum or maximum value, the data’s
actual maximum or minimum, respectively, will be used as the other end value.
If the input file has a data ignore value, then the dialog opens with the value
automatically entered in the Band Min Value and Band Max Value fields.
5. Select either Mask pixel if ALL bands match range or Mask pixel if ANY
bands match range. The ALL option includes all pixels that are in the data
range for all bands (a logical AND operation). The ANY option includes all
pixels that are in the data range for any band (a logical OR operation). A short
sketch of this logic follows these steps.
6. Click OK to enter the range into the mask definition list.
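As a conceptual sketch of the ALL/ANY logic in step 5 (Python with NumPy; not ENVI code, and the cube layout is an assumption), a data-range mask can be built as follows:

```python
import numpy as np

def data_range_mask(cube, band_min, band_max, match_all=True):
    """Build a 0/1 mask from a data range.

    cube      -- array of shape (bands, lines, samples)
    match_all -- True:  pixel is included only if ALL bands are in range (AND)
                 False: pixel is included if ANY band is in range (OR)
    """
    in_range = (cube >= band_min) & (cube <= band_max)    # per-band test
    combine = np.all if match_all else np.any
    return combine(in_range, axis=0).astype(np.uint8)     # 1 = include, 0 = ignore
```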
Including Annotations
To include an annotation file in the mask, select Options → Import Annotation
from the Mask Definition dialog menu bar and select the input file.
To include the currently displayed annotation shapes in the mask, select Options →
Import Displayed Annotation.
Only rectangles, ellipses, and polygons are imported into the mask definition.
Or, if the ROI is a polygon, then you could create an annotation polygon with the
same shape as your ROI and import it into the Mask Definition dialog (instead of
importing the ROI). From the Display group menu bar, select Overlay →
Annotation. You will need to trace your annotation polygon over your ROI polygon
because it is not possible to convert an ROI polygon directly to an annotation
polygon.
Selecting Areas
Select from the following options to define mask areas:
• To set the defined areas in the mask to 1 (On) or to 0 (Off), select Options →
Selected Areas On/Off from the Mask Definition dialog menu bar. The mask
is built using a Logical OR or Logical AND operation between all of the items
in the list. The default, Logical OR, uses all the defined areas to make the
mask. Using the Logical AND masks only the areas where all of the defined
areas overlap.
Selected areas are those pixels that satisfy the masking criteria.
• To define the mask using only those areas where the listed data ranges,
annotation shapes, and/or ROIs overlap, select Options → Selected
Attributes [Logical AND] from the Mask Definition dialog menu bar.
• To use all the defined areas to make the mask, select Options → Selected
Attributes [Logical OR] from the Mask Definition dialog menu bar.
Deleting Attributes
To delete an item from the Select Attributes list in the Mask Definition dialog,
highlight the item and click Delete Item.
Saving Masks
1. In the Mask Definition dialog, select output to File or Memory.
2. Click Apply.
Applying Masks
Use Apply Mask to permanently apply a mask to an image; the masked-out pixels are
set to a value that you specify.
1. From the ENVI main menu bar, select Basic Tools → Masking → Apply
Mask. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK.
3. To create a mask, click Mask Options. This menu contains the following
options:
• Build Mask: Opens the Mask Definition dialog, which is described in
“Building Masks” on page 350.
• Mask Data Ignore Values [All Bands]: Build a mask that includes all the
pixels for which the data ignore value occurs in all bands (a logical AND
operation).
• Mask Data Ignore Values [Any Band]: Build a mask that includes all the
pixels for which the data ignore value occurs in any band (a logical OR
operation).
• Mask NaNs [All Bands]: Build a mask that includes all pixels which have
a value of NaN in all bands (a logical AND operation).
• Mask NaNs [Any Band]: Build a mask that includes all pixels which have
a value of NaN in any band (a logical OR operation).
Note
NaN and Infinity values are treated the same in ENVI. Infinity values are
masked along with NaNs. Moreover, the Mask NaNs options are only
available for files containing floating-point, double-precision floating,
complex floating, or double-precision complex data types.
When one of the latter four mask options is chosen, the mask is automatically
built and named either <basename>_iv_mask, or <basename>_nan_mask,
where <basename> is the name of the selected input file and the mask is made
from either ignore values (iv) or NaNs (nan). In the case where the selected
input file is in memory, then the mask is assigned a temporary filename.
4. Specify the mask by clicking Select Mask Band. See “Masking” on page 224.
5. Click OK. The Apply Mask Parameters dialog appears.
6. Enter the value in the Mask Value field. All areas in the input images where
the mask equals zero are set to this mask value (see the sketch after these steps).
7. Enter an output filename or select output to memory.
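A minimal sketch of step 6 (Python; illustrative names only): every pixel where the mask is zero is replaced with the chosen mask value in all bands.

```python
import numpy as np

def apply_mask(cube, mask, mask_value=0.0):
    """Set every pixel where mask == 0 to mask_value, in all bands.

    cube -- array of shape (bands, lines, samples); mask -- (lines, samples)
    """
    out = cube.astype(float)
    out[:, mask == 0] = mask_value
    return out
```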
Preprocessing Utilities
ENVI provides preprocessing utilities for calibration, general purpose tools, and data-
specific tools. These utilities are described in the following sections.
Calibration Utilities
Use Calibration Utilities to apply calibration factors to AVHRR, MSS, QuickBird,
TM, TIMS, and WorldView-1 and -2 data, and to use a variety of atmospheric
correction techniques.
AVHRR Calibration
Use the AVHRR calibration utility to calibrate AVHRR data from the NOAA-12,
-14, -15, -16, -17, and -18 satellites. Bands 1 and 2 are calibrated to percent
reflectance and bands 3, 4, and 5 are calibrated to brightness temperature, in degrees
Kelvin. For details, see “Calibrating AVHRR Data” on page 380.
Landsat Calibration
Use Landsat Calibration to convert Landsat MSS, TM, and ETM+ digital numbers
to spectral radiance or exoatmospheric reflectance (reflectance above the atmosphere)
using published post-launch gains and offsets.
The spectral radiance (Lλ) is calculated using the following equation:
Lλ = LMINλ + ((LMAXλ – LMINλ) / (QCALMAX – QCALMIN)) × (QCAL – QCALMIN)
Where:
• QCAL is the calibrated and quantized scaled radiance in units of digital
numbers
• LMINλ is the spectral radiance at QCAL = 0
• LMAXλ is the spectral radiance at QCAL = QCALMAX
LMINλ and LMAXλ are derived from values published in Chander, Markham,
and Helder (2009). See References.
• QCALMIN is the minimum quantized calibrated pixel value (corresponding to
LMINλ) in DN. Valid values are as follows:
1: LPGS products
Where:
• Lλ is the spectral radiance
• d is the Earth-Sun distance in astronomical units
• ESUNλ is the mean solar exoatmospheric irradiance. ENVI uses the ESUNλ
values from the Landsat 7 Science Data Users Handbook for Landsat 7 ETM+.
ENVI uses the ESUNλ values from Chander and Markham (2003) for Landsat
TM 4 and 5. See References.
• θs is the solar zenith angle in degrees.
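The radiance equation above translates directly to code. For the reflectance step, the standard relation ρ = (π · Lλ · d²) / (ESUNλ · cos θs) is assumed here, since only its variable list survives above; the sketch below (Python) is illustrative only.

```python
import numpy as np

def dn_to_radiance(qcal, lmin, lmax, qcalmin, qcalmax):
    """Spectral radiance from the quantized DN, per the equation above."""
    return lmin + (lmax - lmin) / (qcalmax - qcalmin) * (qcal - qcalmin)

def radiance_to_toa_reflectance(radiance, esun, earth_sun_dist_au, sun_zenith_deg):
    """Exoatmospheric reflectance: pi * L * d^2 / (ESUN * cos(theta_s))."""
    theta = np.radians(sun_zenith_deg)
    return np.pi * radiance * earth_sun_dist_au ** 2 / (esun * np.cos(theta))
```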
Using the Landsat Calibration Tool
Use Landsat Calibration to specify the calibration coefficients and other related
parameters for Landsat MSS, TM and ETM+ data.
1. Use File → Open External File → Landsat to open a Landsat file, as
described in “Opening Landsat Files” in ENVI Help. Landsat metadata files
are preferred since ENVI will use the metadata to automatically determine
calibration parameters. The Landsat Calibration tool only works with file
formats described in “Opening Landsat Files” in ENVI Help. You cannot
calibrate a meta file consisting of multiple Landsat files.
The Landsat Calibration tool also works with Landsat data that you previously
saved to ENVI raster format. Use File → Open Image File to open a Landsat
data file in ENVI raster format.
2. Select one of the following options from the ENVI main menu bar:
• Basic Tools → Preprocessing → Calibration Utilities → Landsat
Calibration
• Basic Tools → Preprocessing → Data-Specific Utilities →
Landsat TM → Landsat Calibration
• Basic Tools → Preprocessing → Data-Specific Utilities →
Landsat MSS → Landsat Calibration
• Spectral → Preprocessing → Calibration Utilities → Landsat
Calibration
• Spectral → Preprocessing → Data-Specific Utilities → Landsat TM →
Landsat Calibration
• Spectral → Preprocessing → Data-Specific Utilities → Landsat
MSS → Landsat Calibration
The ENVI Landsat Calibration Input File dialog appears.
3. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The TM Calibration Parameters dialog appears.
ENVI determines the calibration parameters from the metadata and populates
the dialog accordingly.
For Landsat 7 GeoTIFF files without metadata, the dialog defaults to a date of
January 1, 1984 and a sun elevation angle of 90 degrees. You can obtain
calibration parameters from the EROS Data Center as CPF files and MetaData
files. However, most Landsat GeoTIFF data now come with metadata, so if
you use File → Open External File → Landsat → GeoTIFF with Metadata
to open the data, ENVI will automatically determine the calibration
parameters.
4. You can edit the Data Acquisition Month/Day/Year and Sun Elevation (deg)
values if you want to override the metadata-derived values.
5. Select the desired Calibration Type using the Radiance or Reflectance radio
buttons. If you selected a thermal band for input (Bands 61 or 62), the
calibrated output will be temperature (in degrees Kelvin).
QuickBird Radiance Calibration
The QuickBird Radiance calibration utility converts QuickBird relative radiance into
absolute radiance using the calibration factors in the QuickBird metadata file (the
absCalFactor value in the .imd file). The units are converted from W/(m²·sr) into
(μW)/(cm²·nm·sr) using nominal bandpass widths for each band (a unit-conversion
sketch follows the steps below).
ENVI stores the gain factors that were applied in the ENVI header file of the
calibrated image.
1. From the ENVI main menu bar, select Basic Tools → Preprocessing →
Calibration Utilities → QuickBird Radiance. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. Select only an original unmodified QuickBird
image product.
If ENVI is unable to locate the associated QuickBird metadata file, you will be
prompted to select it.
The QuickBird Calibration Parameters dialog appears.
3. To scale the calibrated result into unsigned integers, set the Scale Output to
Integers toggle button to Yes and enter a scale factor. To output the result in
floating-point, set the toggle button to No.
Scaling the result into integers produces a file that is half the size (in bytes) of
the floating-point result; however, the precision is typically reduced to three
digits. The maximum value that an unsigned integer can hold is 65,535.
4. Select output to File or Memory.
5. Click OK.
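As a rough sketch of the unit conversion described above (Python; the bandpass width is a user-supplied assumption because the table of nominal widths is not reproduced here): the band-integrated radiance in W/(m²·sr) is divided by the bandpass width in nm and rescaled by 100, since 1 W/m² equals 100 μW/cm².

```python
def to_spectral_radiance(dn, abs_cal_factor, bandpass_nm):
    """Convert a QuickBird DN to spectral radiance in uW / (cm^2 * nm * sr).

    abs_cal_factor -- absCalFactor from the .imd file, W / (m^2 * sr)
    bandpass_nm    -- nominal bandpass width of the band in nm (user-supplied
                      here; the table of widths is not reproduced in this guide)
    """
    band_radiance = dn * abs_cal_factor          # band-integrated, W / (m^2 * sr)
    return band_radiance / bandpass_nm * 100.0   # 1 W/m^2 equals 100 uW/cm^2
```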
WorldView Radiance Calibration
The WorldView Radiance calibration utility converts WorldView-1 and WorldView-2
relative radiance into absolute radiance using the calibration factors in the WorldView
metadata file (the absCalFactor value in the .imd file). The units are converted from
W/(m²·sr) into (μW)/(cm²·nm·sr) using nominal bandpass widths for each band. The
gain factors that were applied are stored in the ENVI header file of the calibrated
image.
1. From the ENVI main menu bar, select Basic Tools → Preprocessing →
Calibration Utilities → WorldView Radiance. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. Select only an original unmodified WorldView-1 or
WorldView-2 image product.
If ENVI is unable to locate the associated WorldView metadata file, you will
be prompted to select it.
The WorldView Calibration Parameters dialog appears.
3. To scale the calibrated result into unsigned integers, set the Scale Output to
Integers toggle button to Yes and enter a scale factor. To output the result in
floating-point, set the toggle button to No.
Scaling the result into integers produces a file that is half the size (in bytes) of
the floating-point result; however, the precision is typically reduced to three
digits. The maximum value that an unsigned integer can hold is 65,535.
4. Select output to File or Memory.
5. Click OK.
FLAASH Calibration
Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) is an
atmospheric correction method in ENVI for retrieving spectral reflectance from
hyperspectral radiance images. FLAASH incorporates the MODTRAN4 radiation
transfer model to compensate for atmospheric effects.
FLAASH is part of the Atmospheric Correction Module: QUAC and FLAASH and is
available for purchase from ITT Visual Information Solutions or your ENVI
distributor. Contact your sales representative or ITT Visual Information Solutions
(303-786-9900, [email protected]) for more information.
If you have an Atmospheric Correction Module license, see the Atmospheric
Correction Module: QUAC and FLAASH User’s Guide for details.
Log Residuals
The new Log Residuals calibration tool is designed to remove solar irradiance,
atmospheric transmittance, instrument gain, topographic effects, and albedo effects
from radiance data. This transform creates a pseudo reflectance image that is useful
for analyzing mineral-related absorption features. Log residuals calibration is similar
to IARR calibration in that both tools use only in-scene statistics to produce a result.
The logarithmic residuals of a dataset are defined as the input spectrum divided by
the spectral geometric mean, then divided by the spatial geometric mean. The
geometric mean is used because the transmittance and other effects are considered
multiplicative; it is calculated using logarithms of the data values. The spectral mean
is the mean of all bands for each pixel and removes topographic effects. The spatial
mean is the mean of all pixels for each band and accounts for the solar irradiance,
atmospheric transmittance, and instrument gain.
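The definition above maps directly to a short computation. The sketch below (Python; not the ENVI implementation) works in log space, where the geometric means become arithmetic means, and assumes all radiance values are positive.

```python
import numpy as np

def log_residuals(cube):
    """Log residuals of a radiance cube of shape (bands, pixels).

    Divides each spectrum by its spectral geometric mean, then each band by
    its spatial geometric mean; geometric means become arithmetic means in
    log space.
    """
    logx = np.log(cube)
    logx -= logx.mean(axis=0, keepdims=True)   # remove per-pixel spectral mean
    logx -= logx.mean(axis=1, keepdims=True)   # remove per-band spatial mean
    return np.exp(logx)                        # pseudo reflectance image
```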
Figure 3-24 shows a comparison of input and output spectra.
Figure 3-24: Comparison of Input Spectra from Original Image (left) and Output
Spectra from Log Residual Calibrated Image (right)
1. From the ENVI main menu bar, select one of the following:
• Basic Tools → Preprocessing → Calibration Utilities → Log Residuals
• Spectral → Preprocessing → Calibration Utilities → Log Residuals
The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Log Residuals Calibration Parameters dialog
appears.
3. Select output to File or Memory.
4. Click OK. ENVI adds the resulting output to the Available Bands List.
Reference:
Green, A.A., M.D. Craig, “Analysis of aircraft spectrometer data, with logarithmic
residuals”, Proceedings of the Airborne Imaging Spectrometer Data Analysis
Workshop, April 8-10, 1985, G. Vane and A. Goetz editors, JPL, pp111-119.
ENVI’s empirical line calibration requires at least one field, laboratory, or other
reference spectrum; these can come from spectral profiles or plots, spectral libraries,
ROIs, statistics or from ASCII files. Input spectra will automatically be resampled to
match the selected data wavelengths. If more than one spectrum is used, then the
regression for each band will be calculated by fitting the regression line through all of
the spectra. If only one spectrum is used, then the regression line will be assumed to
pass through the origin (zero reflectance equals zero DN). The calibration can also be
performed on a dataset using existing factors.
Computing Factors and Calibrating
Typically, you should choose a dark and a bright region in the image for use in the
empirical line calibration (providing that reference spectra are available for these
regions). This provides a more accurate linear regression. Using as many paired
data/field spectra as you can will also improve the calibration. At least one spectral
pair is necessary.
To use spectra from ROIs, define the ROIs before running this function.
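The per-band regression described above can be sketched as follows (Python; array names are illustrative, not ENVI's). With two or more paired spectra it fits gain and offset by least squares; with a single pair it forces the line through the origin, as noted above.

```python
import numpy as np

def empirical_line_factors(image_spectra, field_spectra):
    """Per-band gain and offset mapping image values to reference reflectance.

    image_spectra, field_spectra -- arrays of shape (n_pairs, n_bands)
    """
    n_pairs, n_bands = image_spectra.shape
    gain, offset = np.empty(n_bands), np.zeros(n_bands)
    for b in range(n_bands):
        x, y = image_spectra[:, b], field_spectra[:, b]
        if n_pairs == 1:
            gain[b] = y[0] / x[0]                     # force line through origin
        else:
            gain[b], offset[b] = np.polyfit(x, y, 1)  # least-squares regression
    return gain, offset

# Hypothetical usage on a cube of shape (bands, lines, samples):
# calibrated = cube * gain[:, None, None] + offset[:, None, None]
```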
1. From the ENVI main menu bar, select Basic Tools → Preprocessing →
Calibration Utilities → Empirical Line → Compute Factors and
Calibrate. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Empirical Line Spectra dialog appears.
3. Collect image spectra and reference spectra, and pair spectra using the
procedures described in the following sections.
Collecting Data (Image) Spectra
Use the Data Spectra Collection dialog to collect the image spectra (un-calibrated
spectra), which can come from a plot or profile, a spectral library, ROI, or ASCII
spectrum. Use the Import menu and other interactive options to import and collect
spectra.
1. In the Empirical Line Spectra dialog, click Data Spectra: Import Spectra.
2. Collect spectra using the Import menu as described in “Importing Spectra” on
page 442 or using the black draw widget at the top of the dialog as described in
“Dragging-and-Dropping Spectra” on page 443.
3. After the data spectra are selected, click Apply. The spectra names are entered
into the Empirical Line Spectra dialog.
1. From the ENVI main menu bar, select Basic Tools → Preprocessing →
Calibration Utilities → Empirical Line → Calibrate Using Existing
Factors.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Enter Calibration Factors Filename dialog
appears.
3. Choose a calibration factors file (.cff) created during a previous Empirical
Line Calibration session.
4. Click OK. The Empirical Line Calibration Parameters dialog appears.
5. Select output to File or Memory.
6. Click OK.
Calculating Emissivity
Use Calculate Emissivity to use one of three techniques in ENVI to separate the
emissivity and temperature information in radiance data measured with thermal
infrared sensors. Both the Reference Channel and Emissivity Normalization
techniques assume a fixed emissivity value and produce emissivity and temperature
outputs. The Alpha Residuals technique does not provide temperature information.
Reference Channel Emissivity Calculation
Use Reference Channel to calculate emissivity and temperature values from thermal
infrared radiance data. For details, see “Using Reference Channel Emissivity” on
page 395.
Emissivity Normalization
Use Emissivity Normalization to calculate emissivity and temperature values from
thermal infrared radiance data. For details, see “Using Emissivity Normalization” on
page 396.
Alpha Residuals
Use Alpha Residuals to produce alpha residual spectra that approximate the shape of
emissivity spectra from thermal infrared radiance data. For details, see “Using Alpha
Residuals” on page 397.
1. From the ENVI main menu bar, select Basic Tools → Preprocessing →
General Purpose Utilities → Replace Bad Lines. The Input File dialog
appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Bad Lines Parameters dialog appears.
3. In the Enter Bad Line field, specify which bad lines to replace and press
Enter.
4. The line appears in the Selected Lines list.
• To remove that line from the list, select the line.
• To save the line coordinates to a file, click Save.
• To restore the coordinates from a previously saved file, click Restore.
• To clear the list of lines to replace, click Clear.
5. In the Half Width to Average field, enter the number of adjacent lines to
average when calculating the replacement line. The value is symmetrical
around the line to replace. For example, the value 2 means that two lines on
either side of the selected line are averaged to calculate the replacement (see
the sketch after these steps).
6. Click OK. The Bad Lines Output dialog appears.
7. Select output to File or Memory.
8. Click OK.
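A minimal sketch of the replacement described in step 5 (Python; illustrative only):

```python
import numpy as np

def replace_bad_line(band, bad_line, half_width=2):
    """Return a copy of band with one line replaced by the average of
    half_width good lines on either side of it."""
    out = band.astype(float)
    n_lines = out.shape[0]
    neighbors = [l for l in range(bad_line - half_width, bad_line + half_width + 1)
                 if 0 <= l < n_lines and l != bad_line]
    out[bad_line, :] = out[neighbors, :].mean(axis=0)
    return out
```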
1. From the ENVI main menu bar, select Basic Tools → Preprocessing →
General Purpose Utilities → Apply Gain and Offset.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Gain and Offset Values dialog appears.
3. In the Gain Values list, select a band name.
4. In the Edit Selected Item field, edit the gain value of the item.
5. In the Offset Values list, select a band name.
6. In the Edit Selected Item field, edit the offset value of the item.
7. Repeat this selection and assignment of values for each band to process.
To reset all of the bands to their original values, click Reset.
8. Select an output data type from the Output Data Type drop-down list.
9. Select output to File or Memory.
10. Click OK. ENVI adds the resulting output to the Available Bands List.
Destriping Data
Use Destripe data to remove periodic scan line striping in image data. This type of
striping is often seen in Landsat MSS data (every 6th line) and less commonly, in
Landsat TM data (every 16th line). When destriping the data, ENVI calculates the
mean of every nth line and normalizes each line to its respective mean. In order for
destriping to function properly, the data must be in the acquired format (horizontal
strips) and cannot be rotated or georeferenced.
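One reading of the normalization described above can be sketched as follows (Python; not the ENVI implementation): group the lines by detector, then rescale each group so its mean matches the overall band mean.

```python
import numpy as np

def destripe(band, n_detectors):
    """Normalize each n-th-line group to the overall band mean.

    band        -- 2D array (lines, samples) in the original acquisition order
    n_detectors -- striping period (for example, 6 for Landsat MSS)
    """
    out = band.astype(float)
    overall_mean = out.mean()
    for d in range(n_detectors):
        group = out[d::n_detectors, :]          # the lines seen by detector d
        group *= overall_mean / group.mean()    # rescale to the overall mean
    return out
```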
1. From the ENVI main menu bar, select Basic Tools → Preprocessing →
General Purpose Utilities → Destripe. The Destriping Data Input File
appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Destriping Parameters dialog appears.
3. Enter the number of detectors in the Number of Detectors field. The number
of detectors is the periodicity of the striping (for example, for Landsat MSS,
the value would be 6).
If the file type has been set in the header, the default is set automatically.
4. Select output to File or Memory.
5. Click OK.
1. From the ENVI main menu bar, select Basic Tools → Preprocessing →
General Purpose Utilities → Convert VAX to IEEE. The Input File dialog
appears.
2. Choose the file to convert from the list of available files.
3. Click OK.
4. When the VAX to IEEE Parameters dialog appears, enter the VAX header size
(bytes).
5. Select one of the following options:
• To copy the header information into the output file as an embedded ENVI
header, click Yes next to Copy Header.
• To copy only the data, click No.
6. Enter an output filename.
7. Click OK.
Data-Specific Utilities
Use Data-Specific Utilities to apply functions that are designed for your specific
data type.
ASTER Utilities
Use ASTER Utilities to extract and apply calibration information from HDF
attributes, compute sea surface temperatures, use information in the data for
georeferencing, and to orthorectify data.
Building ASTER Geometry Files
Use ASTER Build Geometry File to calculate the geometry values for each pixel.
You may select which values to calculate: latitude, longitude, solar zenith, and/or
sensor zenith angles.
For details, see “Building ASTER Geometry Files” on page 949.
Georeferencing ASTER Data
You can georeference the ASTER data, calibration results, and sea surface
temperature image using information from the ASTER data themselves. Each line of
data has 51 latitude and longitude values that you can use in the georeferencing.
For details, see “Georeferencing ASTER Data” on page 949.
Orthorectifying ASTER Data
Use Orthorectify ASTER or Orthorectify ASTER with Ground Control to
orthorectify ASTER data. For details, see “Orthorectify Using RPCs” on page 911.
AVHRR Utilities
Use AVHRR Utilities to read and display information from the AVHRR header,
calibrate AVHRR data to percent reflectance and brightness temperature, compute
sea surface temperatures (SSTs), and to use information in the data for
georeferencing. The AVHRR utilities support NOAA-12 through -19.
The calibration and sea surface temperatures should be calculated before
georeferencing.
References:
Di, L. and D. C. Rundquist, 1994. A one-step algorithm for correction and calibration
of AVHRR Level 1b data, Photogrammetric Engineering & Remote Sensing, Vol. 60,
No. 2, pp. 165-171.
Displaying AVHRR Header Information
1. From the ENVI main menu bar, select Basic Tools → Preprocessing → Data-
Specific Utilities → AVHRR → Display Header Information.
2. Select the input AVHRR data file.
3. Click OK. The AVHRR File Information dialog appears. The header
information displays.
Saving Header Info to ASCII Files
To save the header information to an ASCII file, select File → Save Text to ASCII
from the AVHRR File Information dialog, and enter an output filename.
Calibrating AVHRR Data
Use Calibrate Data to calibrate AVHRR data from the NOAA-12 through -18
satellites. Bands 1 and 2 are calibrated to percent reflectance, and bands 3, 4, and 5
are calibrated to brightness temperature, in degrees Kelvin.
1. From the ENVI main menu bar, select Basic Tools → Preprocessing → Data-
Specific Utilities → AVHRR → Calibrate Data.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The AVHRR Calibrate Parameters dialog appears.
3. Select the satellite number from the Satellite drop-down button.
4. Select output to File or Memory.
5. Click OK.
Output bands 1 and 2 are in % reflectance, and output bands 3, 4, and 5 are in
brightness temperature, in degrees Kelvin.
Building AVHRR Geometry Files
Use AVHRR Build Geometry File to calculate the geometry values for each pixel.
You may select which values to calculate: latitude, longitude, solar zenith, and/or
sensor zenith angles.
For details, see “Building AVHRR Geometry Files” on page 952.
Georeferencing AVHRR Data
You can georeference the AVHRR data, calibration results, and sea surface
temperature image using information from the AVHRR data themselves. Each line of
data has 51 latitude and longitude values that you can use in the georeferencing.
For details, see “Georeferencing AVHRR Data” on page 953.
Computing Sea Surface Temperature
1. From the ENVI main menu bar, select Basic Tools → Preprocessing → Data-
Specific Utilities → AVHRR → Compute Sea Surface Temperature.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The AVHRR Sea Surface Temperature Parameters
dialog appears.
The input file must contain AVHRR bands 3, 4, and 5.
3. From the Satellite drop-down button, select the satellite name.
4. From the SST Algorithm drop-down button, select the algorithm.
5. Select output to File or Memory.
6. Click OK.
The output sea surface temperature image is in degrees Celsius.
Note
AVHRR data that have been scaled to 8-bit depth cannot be used to compute SSTs
because NOAA does not modify the calibration coefficients stored in the file’s
Level 1B header.
ENVI computes SST images in degrees Celsius, using AVHRR bands 3, 4, and 5.
Currently, ENVI does not use a cloud or land mask in the sea surface temperature
calculation. ENVI uses the Multi-Channel Sea Surface Temperature (MCSST)
algorithms: one for daytime data and three for nighttime data:
• Day MCSST Split
• Night MCSST Split
• Night MCSST Dual
• Night MCSST Triple
These algorithms differ by which bands are used to correct for the atmosphere. Split-
window uses bands 4 and 5, dual-window uses bands 3 and 4, and triple-window uses
bands 3, 4, and 5.
ENVI uses the following SST equations for NOAA-12, -14, and -15:
Day MCSST Split
Ts = a0 + a1*band4 + a2(band4 - band5) + a3(band4 - band5)(sec(φ) -1)
Night MCSST Split
Ts = a0 + a1*band4 + a2(band4 - band5) + a3(band4 - band5)(sec(φ)-1)
Night MCSST Dual
Ts = a0 + a1*band4 + a2(band3 - band4) + a3(sec(φ) -1)
Day MCSST Split coefficients (NOAA-12): a0 = -263.006, a1 = 0.963563, a2 = 2.579211, a3 = 0.242598
Night MCSST Split coefficients (NOAA-12): a0 = -263.94, a1 = 0.967077, a2 = 2.384376, a3 = 0.480788
Night MCSST Dual coefficients (NOAA-12): a0 = -279.846, a1 = 1.031355, a2 = 1.288548, a3 = 2.265075
Night MCSST Triple coefficients (NOAA-12): a0 = -271.971, a1 = 1.000281, a2 = 0.911173, a3 = 1.710028
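A direct transcription of the Day MCSST Split equation (Python; the brightness temperatures and satellite zenith angle are inputs you would already have, and the NOAA-12 coefficients above are used as defaults for illustration):

```python
import numpy as np

def day_mcsst_split(t4, t5, sat_zenith_deg,
                    a0=-263.006, a1=0.963563, a2=2.579211, a3=0.242598):
    """Day MCSST split-window SST from band 4/5 brightness temperatures (K).

    Defaults are the NOAA-12 coefficients listed above; the zenith angle is
    the satellite zenith angle phi in the equation.
    """
    sec_term = 1.0 / np.cos(np.radians(sat_zenith_deg)) - 1.0
    return a0 + a1 * t4 + a2 * (t4 - t5) + a3 * (t4 - t5) * sec_term
```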
CARTOSAT-1 Utilities
Use Orthorectify CARTOSAT-1 or Orthorectify CARTOSAT-1 with Ground
Control to orthorectify CARTOSAT-1 data. For details, see “Orthorectify Using
RPCs” on page 911.
ENVISAT Utilities
Use Georeference AATSR, Georeference ASAR, or Georeference MERIS to
georeference your ENVISAT AATSR, ASAR, or MERIS data with the geolocation
information included in the ENVISAT file. ENVISAT imagery contains geolocation
tie points that correspond to specific pixels in the image. You can use these tie points
to automatically georeference the ENVISAT data without building a geometry file.
For details, see “Georeferencing ENVISAT” on page 956.
IKONOS Utilities
Use Orthorectify IKONOS or Orthorectify IKONOS with Ground Control to
orthorectify IKONOS data. For details, see “Orthorectify Using RPCs” on page 911.
Landsat TM Utilities
Use Landsat TM Calibration to convert Landsat TM digital numbers to radiance or
exoatmospheric reflectance (reflectance above the atmosphere) using published post-
launch gains and offsets. For details, see “Landsat Calibration” on page 357.
MODIS Utilities
Use Georeference Data to georeference your MODIS Level 1B and Level 2 datasets
and apply correction for the MODIS bow tie effect. ENVI extracts latitude and
longitude values from the header information to georeference the data. For details,
see “Georeferencing MODIS” on page 957.
OrbView-3 Utilities
Use Orthorectify OrbView-3 or Orthorectify OrbView-3 with Ground Control to
orthorectify OrbView-3 data. For details, see “Orthorectify Using RPCs” on page 911.
QuickBird Utilities
Use the QuickBird Radiance utility to convert QuickBird relative radiance into
absolute radiance in units of (μW)/(cm²·nm·sr). For details, see “QuickBird
Radiance Calibration” on page 361.
Use the Orthorectify QuickBird or Orthorectify QuickBird with Ground Control
to orthorectify QuickBird data. For details, see “Orthorectify Using RPCs” on
page 911.
WorldView Utilities
Use the WorldView Radiance utility to convert WorldView-1 and WorldView-2
relative radiance into absolute radiance in units of (μW)/(cm²·nm·sr). For
details, see “WorldView Radiance Calibration” on page 362.
Use the Orthorectify WorldView or Orthorectify WorldView with Ground
Control to orthorectify WorldView data. For details, see “Orthorectify Using RPCs”
on page 911.
SeaWiFS Utilities
Use SeaWiFS Utilities to calculate geometry information for and to georeference
HDF and CEOS format SeaWiFS data. Geometry information includes latitude,
longitude, sensor azimuth, sensor zenith, solar azimuth, solar zenith, and UTC time.
The georeferencing function produces a full precision geocoding based on a complete
geometry model of the earth and satellite orbits.
Build Geometry File
Use Build Geometry File to calculate the geometry for HDF and CEOS format
SeaWiFS data. For details see “Building SeaWiFS Geometry Files” on page 946.
Georeference SeaWiFS Data
Use Georeference Data to georeference your SeaWiFS data. For details, see
“Georeferencing SeaWiFS Data” on page 947.
SPOT Utilities
Build Geometry File
Use Build Geometry File to build a SPOT geometry file to calculate the x and y
coordinates for each pixel. For details, see “Building SPOT Geometry Files” on
page 943.
TIMS Utilities
Thermal IR Atmospheric Correction
Use Thermal Atm Correction to approximate and remove the atmospheric
contributions to thermal infrared data. TIMS data must be converted to radiance
before performing the Thermal Atm Correction. ENVI provides a tool for converting
TIMS data to radiance (see “Radiance Calibration” on page 390). For best results,
perform this correction before converting your data to emissivity. The atmospheric
correction algorithm used in ENVI is similar to the In-Scene Atmospheric
Compensation algorithm, ISAC. This algorithm assumes that the atmosphere is
uniform over the data scene and that there is an occurrence of a near-blackbody
surface within the scene. The location of the blackbody surface is not required. A
single layer approximation of the atmosphere is used and it is assumed that there is no
reflected downwelling radiance.
The algorithm first determines the wavelength that most often exhibits the maximum
brightness temperature. This wavelength is then used as the reference wavelength.
Only spectra that have their brightest temperature at this wavelength are used to
calculate the atmospheric compensation. At this point, for each wavelength, the
reference blackbody radiance values are plotted against the measured radiances. A
line is fitted to the highest points in these plotted data and the fit is weighted to assign
more weight to regions with denser sampling. The compensation for this band is then
applied as the slope and offset derived from the linear regression of these data with
their computed blackbody radiances at the reference wavelength.
Upwelling atmospheric radiance and atmospheric transmission are approximated
using the following method: first, the surface temperature of every pixel is estimated
from the data and used to estimate the brightness temperature using the Planck
function and assuming an emissivity of 1; next, a line is fitted, using one of two
methods, to a scatter plot of radiance versus brightness temperature. The atmospheric
upwelling and transmission are then derived from the slope and offset of this line.
Radiance Calibration
Use Radiance Calibration to calibrate raw data from the NASA Thermal Infrared
Multispectral Scanner (TIMS) to radiance in units of W/m²/μm/sr. Data from the
on-board black bodies (two internal reference sources) are stored within the first 60
bytes of each image line. You can smooth the reference data. ENVI calculates gain and
offset values for each TIMS spectral band using Planck's radiation law, and uses the
reference data to calibrate the raw DN values to radiance.
References:
Palluconi, F. D. and Meeks, G. R., 1985. “Thermal Infrared Multispectral Scanner
(TIMS): An Investigator’s Guide to TIMS Data,” JPL Publication 85-32, p. 14.
1. Select one of the following from the ENVI main menu bar:
Thermal IR Utilities
Use Thermal IR utilities to apply an atmospheric correction, and to convert the
dataset from radiance to emissivity and temperature using one of three methods:
Reference Channel Emissivity, Emissivity Normalization, and Alpha Residuals.
Thermal image data must be converted to radiance before performing the atmospheric
correction. Perform this correction before converting your data to emissivity for best
results.
Atmospheric Correction for Thermal IR Data
Use Thermal Atm Correction to approximate and remove the atmospheric
contributions from thermal infrared radiance data. Thermal image data must be
converted to radiance before performing the atmospheric correction. TIMS data
should be converted to radiance using the TIMS Radiance tool, which applies the
correct band coefficients to convert to radiance in the appropriate units, so no data
scale factor is required during the atmospheric correction. Perform the correction
before converting your data to emissivity for the best results.
Note
ENVI does not check to make sure the images are thermal infrared data. Be sure
that your data wavelengths measure between 8 and 14 μm before applying this
correction.
The atmospheric correction algorithm is similar to the In-Scene Atmospheric
Compensation (ISAC) algorithm. It assumes that the atmosphere is uniform over the
scene and that a near-blackbody surface occurs within the scene; the location of the
blackbody surface is not required for this correction. A single layer approximation of
the atmosphere is used, and it is assumed that there is no reflected downwelling
radiance.
The algorithm first determines the wavelength that most often exhibits the maximum
brightness temperature. This wavelength is then used as the reference wavelength.
Only spectra that have their brightest temperature at this wavelength are used to
calculate the atmospheric compensation. At this point, for each wavelength, the
reference blackbody radiance values are plotted against the measured radiances. A
line is fitted to the highest points in these plotted data and the fit is weighted to assign
more weight to regions with denser sampling. The compensation for this band is then
applied as the slope and offset derived from the linear regression of these data with
their computed blackbody radiances at the reference wavelength.
Upwelling atmospheric radiance and atmospheric transmission are approximated
using the following method. First, the surface temperature of every pixel is estimated
from the data and used to approximate the brightness temperature using the Planck
function and assuming an emissivity of 1. Next, a line is fitted (using one of two
methods) to a scatter plot of radiance vs. brightness temperature. The atmospheric
upwelling and transmission are then derived from the slope and offset of this line.
1. Select one of the following from the ENVI main menu bar:
• Basic Tools → Preprocessing → Calibration Utilities →
Thermal Atm Correction
• Basic Tools → Preprocessing → Data-Specific Utilities → Thermal
IR → Thermal Atm Correction
• Basic Tools → Preprocessing → Data-Specific Utilities → TIMS →
Thermal Atm Correction
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Thermal Atm Correction Parameters dialog
appears.
3. Enter the Data Scale Factor needed to scale your data to units of W/m2/μm/sr.
The image output from this atmospheric correction uses the same units as the
input image. For example, if the results of the correction are used to calculate
emissivity and temperature, then the same scale factor must be specified.
4. Click Wavelength Units and select the units.
5. Use the toggle button to select either All or Max Hit to determine which pixels
to use in the surface temperature estimation regression.
Selecting All will estimate the surface temperature for each pixel by using the
maximum value of the brightness temperatures found throughout the input
wavelengths. Selecting Max Hit will estimate the surface temperature for only
those pixels that have their maximum brightness temperatures at a particular
wavelength. The wavelength used is the wavelength that has the largest number
of pixels with a maximum brightness temperature value.
6. Use the toggle button to select either Top of Bins or Normalized Regression
for the scatter plot fitting technique.
• Top of Bins fits a line to the top of the scatter plot of radiance vs.
brightness temperature. The top of the scatter plot corresponds to those
pixels whose emissivity is closest to 1. This Top of Bins fit is achieved by
doing a standard least squares regression on the top 5% of the data in the
scatter plot. This technique is susceptible to sensor noise which may occur
at the top of the scatter plot.
• Normalized Regression first fits a line to the scatter plot of radiance vs.
brightness temperature by doing a standard least squares regression. The
residuals of this fit are then compared to a normal probability plot. Another
regression is done on the residuals in the normal plot. Points that are 3
times the noise equivalent sensor response (NESR) away from the
regression line are deemed outliers and are removed. A final regression is
done on the scatter plot using this reduced set of pixels. This method uses
all the points in the scatter plot that are not outliers and does not fit to only
the top of the scatter plot where the emissivity values are closest to 1.
• If you choose Normalized Regression, enter the Noise Equivalent
Sensor Response in the field.
7. Enter a gain and offset output filename, if desired.
8. Use the toggle button to select whether to plot the resulting atmospheric
transmission and upwelling spectra.
9. Enter an output filename.
10. Click OK. ENVI adds the resulting output to the Available Bands List.
References:
Johnson, B. R. and S. J. Young, “In-Scene Atmospheric Compensation: Application
to SEBASS Data Collected at the ARM Site,” Technical Report, Space and
Environment Technology Center, The Aerospace Corporation, May 1998.
Hernandez-Baquero, E., “Characterization of the Earth's Surface and Atmosphere
from Multispectral and Hyperspectral Thermal Imagery,” Ph.D. Dissertation,
Rochester Institute of Technology, Chester F. Carlson Center for Imaging Science,
Rochester, NY, 2000.
Converting to Emissivity and Temperature
The radiation emitted from a surface in the thermal infrared wavelengths is a function
of both the surface temperature and emissivity. The emissivity relates to the
composition of the surface and is often used for surface constituent mapping.
ENVI uses three techniques to separate the emissivity and temperature information in
radiance data measured with thermal infrared sensors. Both the Reference Channel
Emissivity and Emissivity Normalization techniques assume a fixed emissivity
value and produce emissivity and temperature outputs. The Alpha Residuals
technique does not provide temperature information.
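Both fixed-emissivity techniques rest on the Planck function and its inversion to brightness temperature. A compact sketch of those two building blocks (Python, SI units with wavelength in meters; not ENVI code) is:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 * sr * m)."""
    return (2.0 * H * C**2 / wavelength_m**5
            / (np.exp(H * C / (wavelength_m * K * temp_k)) - 1.0))

def brightness_temperature(wavelength_m, radiance):
    """Invert the Planck function: temperature giving the measured radiance."""
    return (H * C / (wavelength_m * K)
            / np.log(1.0 + 2.0 * H * C**2 / (wavelength_m**5 * radiance)))

# Fixed-emissivity idea: T = brightness_temperature(lam_ref, L_ref / emissivity)
# for the reference band, then emissivity_b = L_b / planck_radiance(lam_b, T)
# for the remaining bands.
```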
References:
Hook, S. J., A. R. Gabell, A. A. Green, and P. S. Kealy, 1992. A comparison of
techniques for extracting emissivity information from thermal infrared data for
geologic studies. Remote Sensing of Environment, Vol. 42, pp. 123-135.
Kealy, P. S. and S. J. Hook, 1993., Separating temperature and emissivity in thermal
infrared multispectral scanner data: Implications for recovering land surface
temperatures. IEEE Transactions on Geoscience and Remote Sensing, Vol. 31, No. 6,
pp.1155-1164.
Using Reference Channel Emissivity
Use Reference Channel Emissivity to calculate emissivity and temperature values
from thermal infrared radiance data. The reference channel emissivity technique
assumes that all the pixels in one channel (band) of the thermal infrared data have a
constant emissivity. Using this constant emissivity, a temperature image is calculated
and those temperatures are used to calculate the emissivity values in all the other
bands using the Planck function. You can select the band to keep constant and enter
the desired emissivity value for that band. See the previous references for more
information.
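The per-pixel calculation can be sketched as follows. This is a minimal IDL illustration, not ENVI's internal routine; it assumes radiance already scaled to W/m2/μm/sr, wavelengths in μm, and the standard Planck radiation constants.

; Minimal sketch of the reference channel calculation for one pixel spectrum.
; radiance: band radiances in W/m^2/um/sr; wl: band-center wavelengths in um;
; ref_band: index of the band assumed to have constant emissivity;
; ref_emis: the assumed emissivity for that band (for example, 0.96).
FUNCTION ref_channel_emissivity, radiance, wl, ref_band, ref_emis, temperature = t
   c1 = 1.191042d8                  ; 2*h*c^2 in W um^4 / (m^2 sr)
   c2 = 1.438777d4                  ; h*c/k in um K
   ; Invert the Planck function at the reference band to get the temperature
   lam = wl[ref_band]
   t = c2 / (lam * ALOG(ref_emis * c1 / (lam^5 * radiance[ref_band]) + 1d))
   ; Planck (blackbody) radiance at that temperature for every band
   b = c1 / (wl^5 * (EXP(c2 / (wl * t)) - 1d))
   ; Emissivity is the measured radiance divided by the blackbody radiance
   RETURN, radiance / b
END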
1. Select one of the following from the ENVI main menu bar:
• Basic Tools → Preprocessing → Data-Specific Utilities → Thermal
IR → Reference Channel Emissivity
• Basic Tools → Preprocessing → Calibration Utilities → Calculate
Emissivity → Reference Channel
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Calculate Emissivity parameters dialog
appears.
3. Enter a data scale factor to scale the radiance values into the units of
W/m2/μm/sr (for example, if your data is in microflicks (μW/cm2/μm/sr) enter
a scale factor of .01).
4. Enter a wavelength scale factor to scale the wavelengths that are read from the
header into units of μm.
5. From the Emissivity Band drop-down list, select which band to set to a
constant emissivity value.
6. In the Assumed Emissivity Value field, enter the emissivity value for the
constant band.
7. Click the Output Temperature Image? toggle button to designate whether or
not to output a temperature image. Enter an output filename.
8. Enter an output filename for the emissivity data.
9. Click OK. ENVI adds the temperature image (single band) and emissivity data
cube (same number of bands as input radiance data) to the Available Bands
List.
Using Emissivity Normalization
Use Emissivity Normalization to calculate emissivity and temperature values from
thermal infrared radiance data. The emissivity normalization technique calculates the
temperature for every pixel and band in the data using a fixed emissivity value. The
highest temperature for each pixel is used to calculate the emissivity values using the
Planck function. You can enter the desired fixed emissivity value. See the references
in the introduction to “Thermal IR Utilities” on page 391 for more information.
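The per-pixel calculation can be sketched in the same way as the reference channel case; again, this is only an IDL illustration under the same unit assumptions, not ENVI's internal routine.

; Minimal sketch of emissivity normalization for one pixel spectrum.
FUNCTION emissivity_normalization, radiance, wl, assumed_emis, temperature = t
   c1 = 1.191042d8                  ; 2*h*c^2 in W um^4 / (m^2 sr)
   c2 = 1.438777d4                  ; h*c/k in um K
   ; Brightness temperature of every band for the assumed emissivity
   t_band = c2 / (wl * ALOG(assumed_emis * c1 / (wl^5 * radiance) + 1d))
   ; The highest band temperature is taken as the pixel temperature
   t = MAX(t_band)
   ; Emissivity is the measured radiance over the Planck radiance at that temperature
   b = c1 / (wl^5 * (EXP(c2 / (wl * t)) - 1d))
   RETURN, radiance / b
END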
1. Select one of the following from the ENVI main menu bar:
• Basic Tools → Preprocessing → Data-Specific Utilities → Thermal
IR → Emissivity Normalization
• Basic Tools → Preprocessing → Calibration Utilities → Calculate
Emissivity → Emissivity Normalization
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Calculate Emissivity parameters dialog
appears.
3. Enter a data scale factor to scale the radiance values into the units of
W/m2/μm/sr (for example, if your data is in microflicks (μW/cm2/μm/sr) enter
a scale factor of .01).
4. Enter a wavelength scale factor to scale the wavelengths (read from the header)
into units of μm.
5. In the Assumed Emissivity Value field, enter the fixed emissivity value to use
to calculate the temperatures.
6. Click the Output Temperature Image? toggle button to designate whether or
not to output a temperature image. Enter an output filename.
Supervised Classification
Use Supervised classification to cluster pixels in a dataset into classes corresponding
to user-defined training classes.
Training classes are groups of pixels (ROIs) or individual spectra. Select them as
representative areas or materials that you want mapped in the output. You should try
to select ROIs that are homogenous. You can examine the separability of your ROIs
by exporting them to an n-D Visualizer and looking at the distribution of the points
within each ROI (they should cluster tightly together) and looking for overlap
between the classes (they should not overlap). For details, see “Exporting ROIs to the
n-D Visualizer” on page 344. You can also get a report of the separability values
between ROI pairs (see “Computing ROI Separability” on page 339).
Supervised classification techniques include parallelepiped, minimum distance,
Mahalanobis distance, maximum likelihood, Spectral Angle Mapper (SAM), Spectral
Information Divergence (SID), and binary encoding.
For all supervised classification methods, you have an option to create rule images
(which is recommended). Rule images show the classification results before final
assignment of classes. For example, the pixel values in the rule images (one per class)
for a minimum distance classification represent the distance between the class and
each pixel. You can use these rule images in the rule classifier to adjust thresholds
and generate new classification images (see “Classifying from Rule Images” on
page 458).
The supervised classification methods in ENVI (excluding Neural Net) differentiate
masked pixels from unclassified pixels. ENVI does not apply the algorithm to
masked pixels because they are already masked out. Conversely, unclassified pixels
are those to which ENVI applies the classification algorithm, but it is unable to assign
the pixels to one of the defined classes. If you choose a mask band upon input, ENVI
creates a class called Masked Pixels. While the unclassified pixels are given a pixel
value in the output image of 0, the output pixel value of the Masked Pixels class is
[max class value]+1, where max class value is the value given to the last valid class.
This new Masked Pixels class enables you to accurately apply a confusion matrix to a
classified image containing masked pixels.
You must define training classes before performing supervised classification. You can
define training classes by either using the Endmember Collection dialog to select
spectra (see “Collecting Endmember Spectra” on page 439), or by defining ROIs (see
“Defining ROIs” on page 323). You can define the training sites as multiple irregular
polygons, vectors, and/or individual pixels.
3. In the Select Classes from Regions list, select ROIs and/or vectors as training
classes. The ROIs listed are derived from the available ROIs in the ROI Tool
dialog. The vectors listed are derived from the open vectors in the Available
Vectors List.
4. Select one of the following thresholding options from the Set Max stdev from
Mean area:
• None: Use no standard deviation threshold.
• Single Value: Use a single threshold for all classes. Enter a value in the
Max stdev from Mean field to designate the number of standard
deviations to use around the mean.
• Multiple Values: Enter a different threshold for each class. Use this option
as follows:
A. In the list of classes, select the class or classes to which you want to assign
different threshold values and click Multiple Values. The Assign Max
stdev from Mean dialog appears.
B. Select a class, then enter a threshold value in the field at the bottom of the
dialog. Repeat for each class. Click OK when you are finished.
5. Select classification output to File or Memory.
6. Use the Output Rule Images? toggle button to select whether or not to create
rule images. Use rule images to create intermediate classification image results
before final assignment of classes. You can later use rule images in the Rule
Classifier to create a new classification image without having to recalculate the
entire classification (see “Classifying from Rule Images” on page 458).
7. If you selected Yes to output rule images, select output to File or Memory.
8. Click Preview to see a 256 x 256 spatial subset from the center of the output
classification image (see “Previewing the Output Classification Image” in the
next section for more information). Change the parameters as needed and click
Preview again to update the display.
9. Click OK. ENVI adds the resulting output to the Available Bands List. The
pixel values of the resulting rule images range from 0 to n (where n is the
number of bands) and represent the number of bands that satisfied the
parallelepiped criteria. There is one rule image for each selected class. Areas
that match all bands for a particular class are carried over as classified areas
into the classified image. If more than one match occurs, the first class to
evaluate (the first ROI from the selected list) carries over into the classified
image.
• Single Value: Use a single threshold for all classes. Enter a value in the
Max stdev from Mean and/or Set Max Distance Error fields. For Max
stdev from Mean, enter the number of standard deviations to use around
the mean. ENVI does not classify pixels outside this range. For Max
Distance Error, enter the value in DNs. ENVI does not classify pixels at a
distance greater than this value.
If you set values for both Set Max stdev from Mean and Set Max
Distance Error, the classification uses the smaller of the two to determine
which pixels to classify. If you select None for both parameters, then
ENVI classifies all pixels.
• Multiple Values: Enter a different threshold for each class. Use this option
as follows:
A. In the list of classes, select the class or classes to which you want to assign
different threshold values and click Multiple Values. The Assign Max
Distance Error dialog appears.
B. Select a class, then enter a threshold value in the field at the bottom of the
dialog. Repeat for each class. Click OK when you are finished.
9. Click OK. ENVI adds the resulting output to the Available Bands List. If you
selected to output rule images, ENVI creates one for each class with the pixel
values equal to the Euclidean distance from the class mean. Areas that satisfied
the minimum distance criteria are carried over as classified areas into the
classified image.
Maximum likelihood classification assumes that the statistics for each class in each
band are normally distributed and calculates the probability that a given pixel belongs
to a specific class. Unless you select a probability threshold, all pixels are classified.
Each pixel is assigned to the class that has the highest probability (that is, the
maximum likelihood). If the highest probability is smaller than a threshold you
specify, the pixel remains unclassified.
ENVI implements maximum likelihood classification by calculating the following
discriminant functions for each pixel in the image (Richards, 1999):
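In the standard Gaussian form given by Richards (1999), the discriminant function for class ωi, written in terms of the quantities defined below, is

g_i(x) = \ln p(\omega_i) - \frac{1}{2}\ln\left|\Sigma_i\right| - \frac{1}{2}(x - m_i)^{T}\,\Sigma_i^{-1}\,(x - m_i)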
Where:
i = class
x = n-dimensional data (where n is the number of bands)
p(ωi) = probability that class ωi occurs in the image and is assumed the same for
all classes
|Σi| = determinant of the covariance matrix of the data in class ωi
Σi^-1 = the inverse of that covariance matrix
mi = the mean vector of class ωi
Reference:
Richards, J.A., 1999, Remote Sensing Digital Image Analysis, Springer-Verlag,
Berlin, p. 240.
1. Select one of the following:
• From the ENVI main menu bar, select Classification → Supervised →
Maximum Likelihood.
• From the Endmember Collection dialog menu bar, select Algorithm →
Maximum Likelihood (see “Selecting Processing Techniques” on
page 452 for further details).
The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting, Spectral
Subsetting, and Masking, then click OK. The Maximum Likelihood
Parameters dialog appears.
9. If you selected Yes to output rule images, select output to File or Memory.
10. Click Preview to see a 256 x 256 spatial subset from the center of the output
classification image (see “Previewing the Output Classification Image” on
page 404 for more information about this class preview). Change the
parameters as needed and click Preview again to update the display.
11. Click OK. ENVI adds the resulting output to the Available Bands List. The
rule images, one per class, contain a maximum likelihood discriminant
function with a modified Chi Squared probability distribution. Higher rule
image values indicate higher probabilities. The final classification allocates
each pixel to the class with the highest probability.
To convert between the rule image’s data space and probability, use the Rule
Classifier. For the classification threshold, enter the probability threshold used in the
maximum likelihood classification as a percentage (for example, 95%) (see
“Classifying from Rule Images” on page 458 for more information). The Rule
Classifier automatically finds the corresponding rule image Chi Squared value.
Tip
If you specify an ROI as a training set for maximum likelihood classification, you
may receive a “Too Many Iterations in TQLI” error message if the ROI includes only
pixels that all have the same value in one band. A band with no variance at all
(every pixel in that band in the subset has the same value) leads to a singularity
problem where the band becomes a near-perfect linear combination of other bands
in the dataset, resulting in an error message.
Reference:
Kruse, F. A., A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J.
Barloon, and A. F. H. Goetz, 1993, “The Spectral Image Processing System (SIPS) -
Interactive Visualization and Analysis of Imaging Spectrometer Data,” Remote
Sensing of Environment, v. 44, pp. 145-163.
Tip
You can use ENVI’s Spectral Hourglass Wizard to guide you through the spectral
hourglass processing flow, which includes SAM classification, to find and map
image spectral endmembers from hyperspectral or multispectral data. See “Spectral
Hourglass Wizard” on page 829.
• Multiple Values: Enter a different threshold for each class. Use this option
as follows:
A. In the list of classes, select the class or classes to which you want to assign
different threshold values and click Multiple Values. The Assign
Maximum Angle (radians) dialog appears.
B. Select a class, then enter a threshold value in the field at the bottom of the
dialog. Repeat for each class. Click OK when you are finished.
6. Select classification output to File or Memory.
7. Use the Output Rule Images? toggle button to select whether or not to create
rule images. Use rule images to create intermediate classification image results
before final assignment of classes. You can later use rule images in the Rule
Classifier to create a new classification image without having to recalculate the
entire classification (see “Classifying from Rule Images” on page 458).
8. If you selected Yes to output rule images, select output to File or Memory.
9. Click Preview to see a 256 x 256 spatial subset from the center of the output
classification image (see “Previewing the Output Classification Image” on
page 404 for more information about this class preview). Change the
parameters as needed and click Preview again to update the display.
10. Click OK. ENVI adds the resulting output to the Available Bands List. The
output from SAM is a classified image and a set of rule images (one per
endmember). The pixel values of the rule images represent the spectral angle
in radians from the reference spectrum for each class. Lower spectral angles
represent better matches to the endmember spectra. Areas that satisfied the
selected radian threshold criteria are carried over as classified areas into the
classified image.
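The rule value for one pixel is the n-dimensional angle between the pixel spectrum and the endmember spectrum; a minimal IDL sketch of that calculation (an illustration, not ENVI's internal code) is:

; Spectral angle, in radians, between a pixel spectrum and an endmember
; spectrum (both vectors of band values).
FUNCTION spectral_angle, pixel, endmember
   num = TOTAL(DOUBLE(pixel) * endmember)
   den = SQRT(TOTAL(DOUBLE(pixel)^2)) * SQRT(TOTAL(DOUBLE(endmember)^2))
   RETURN, ACOS(num / den)
END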
Reference:
Du, H., C.-I. Chang, H. Ren, F. M. D’Amico, and J. O. Jensen, “New Hyperspectral
Discrimination Measure for Spectral Characterization,” Optical Engineering, Vol.
43, No. 8, 2004, pp. 1777-1786.
1. Select one of the following:
• From the ENVI main menu bar, select Classification → Supervised →
Spectral Information Divergence.
• From the Endmember Collection dialog menu bar, select Algorithm →
Spectral Information Divergence (see “Selecting Processing
Techniques” on page 452 for further details).
The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK. The Endmember Collection:SID
dialog appears.
3. From the Endmember Collection:SID dialog menu bar, select Import →
spectra_source and collect endmember spectra from a variety of sources. For
details, see “Importing Spectra” on page 442 and “Managing Endmember
Spectra” on page 453.
4. In the Endmember Collection:SID dialog, click Apply. The Spectral
Information Divergence Parameters dialog appears.
5. Select one of the following thresholding options from the Set Maximum
Divergence Threshold area:
• None: Use no threshold.
• Single Value: Use a single threshold for all classes. Enter a value in the
Maximum Divergence Threshold field. This is the minimum allowable
variation between the endmember spectrum vector and the pixel vector.
The default value is 0.05, but the appropriate threshold can vary substantially
with the nature of the similarity measure. A threshold that discriminates well
for one pair of spectral vectors may be either too sensitive or not sensitive
enough for another pair, depending on how similar or dissimilar their
probability distributions are.
• Multiple Values: Enter a different divergence to test each class against its
corresponding maximum spectral divergence. When selected, the Assign
Maximum Divergence Threshold dialog appears.
Reference:
Mazer, A. S., Martin, M., Lee, M., and Solomon, J. E., 1988, “Image Processing
Software for Imaging Spectrometry Analysis,” Remote Sensing of Environment, v.
24, no. 1, pp. 201-210.
1. Select one of the following:
• From the ENVI main menu bar, select Classification → Supervised →
Binary Encoding.
• From the Endmember Collection dialog menu bar, select Algorithm →
Binary Encoding (see “Selecting Processing Techniques” on page 452 for
further details).
The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK. The Binary Encoding Parameters
dialog appears.
3. In the Select Classes from Regions list, select ROIs and/or vectors as training
classes. The ROIs listed are derived from the available ROIs in the ROI Tool
dialog. The vectors listed are derived from the open vectors in the Available
Vectors List.
4. Select one of the following thresholding options from the Set Minimum
Encoding Threshold area:
• None: Use no threshold.
• Single Value: Use a single threshold for all classes. Enter a decimal value
(from 0.0 to 1.0) in the Minimum Encoding Threshold field. This value
represents the fraction of bands that must match for a pixel to be assigned
to a class.
• Multiple Values: Enter a different threshold for each class. Use this option
as follows:
A. In the list of classes, select the class or classes to which you want to assign
different threshold values and click Multiple Values. The Assign
Minimum Encoding Threshold dialog appears.
B. Select a class, then enter a threshold value in the field at the bottom of the
dialog. If you do not enter a minimum value, ENVI classifies all pixels.
Repeat for each class. Click OK when you are finished.
5. Select classification output to File or Memory.
6. Use the Output Rule Images? toggle button to select whether or not to create
rule images. Use rule images to create intermediate classification image results
before final assignment of classes. You can later use rule images in the Rule
Classifier to create a new classification image without having to recalculate the
entire classification (see “Classifying from Rule Images” on page 458).
7. If you selected Yes to output rule images, select output to File or Memory.
8. Click Preview to see a 256 x 256 spatial subset from the center of the output
classification image (see “Previewing the Output Classification Image” on
page 404 for more information about this class preview). Change the
parameters as needed and click Preview again to update the display.
9. Click OK. ENVI adds the resulting output to the Available Bands List. If you
selected to output rule images, ENVI creates rule images for each class with
the pixel values equal to the percentage (0-100%) of bands that matched that
class. Areas that satisfied the minimum threshold are carried over as classified
areas into the classified image.
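The match score for one pixel can be sketched as follows. This IDL illustration assumes the usual encoding scheme, in which each spectrum is coded as 1 where a band value lies above the spectrum mean and 0 otherwise; it is not ENVI's internal code.

; Fraction of bands whose binary codes match between a pixel spectrum and
; an endmember spectrum.
FUNCTION binary_encoding_match, pixel, endmember
   code_p = pixel GE MEAN(pixel)
   code_e = endmember GE MEAN(endmember)
   RETURN, TOTAL(code_p EQ code_e) / DOUBLE(N_ELEMENTS(pixel))
END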
could lead to better classifications but too many weights could also lead to
poor generalizations.
6. In the Training Rate field, enter a value from 0 to 1.0. The training rate
determines the magnitude of the adjustment of the weights. A higher rate will
speed up the training, but will also increase the risk of oscillations or non-
convergence of the training result.
7. In the Training Momentum field, enter a value from 0 to 1.0. Entering a
momentum rate greater than zero allows you to set a higher training rate
without oscillations. A higher momentum rate trains with larger steps than a
lower momentum rate. Its effect is to encourage weight changes along the
current direction (a sketch of the weight update appears after these steps).
8. In the Training RMS Exit Criteria field, enter the RMS error value at which
the training should stop.
If the RMS error, which is shown in the plot during training, falls below the
entered value, the training will stop, even if the number of iterations has not
been met. The classification will then be executed.
9. Enter the Number of Hidden Layers to use. For a linear classification, enter a
value of 0. With no hidden layers the different input regions must be linearly
separable with a single hyperplane. Non-linear classifications are performed by
setting the Number of Hidden Layers to a value of 1 or greater. When the
input regions are linearly inseparable and require two hyperplanes to separate
the classes, you must have at least one hidden layer to solve the problem. Two
hidden layers are used to classify input space where the different elements are
neither contiguous nor connected.
10. Enter the Number of Training Iterations.
11. To enter a minimum output activation threshold, enter a value in the Min
Output Activation Threshold field. If the activation value of the pixel being
classified is less than this threshold value, then that pixel will be labeled
unclassified in the output.
12. Select classification output to File or Memory.
13. Use the Output Rule Images? toggle button to select whether or not to create
rule images. Use rule images to create intermediate classification image results
before final assignment of classes. You can later use rule images in the Rule
Classifier to create a new classification image without having to recalculate the
entire classification (see “Classifying from Rule Images” on page 458).
14. If you selected Yes to output rule images, select output to File or Memory.
15. Click OK. ENVI adds the resulting output to the Available Bands List. During
the training, a plot window appears showing the RMS error at each iteration.
The error should decrease and approach a steady low value if proper training
occurs. If the errors are oscillating and not converging, try using a lower
training rate value or different ROIs. ENVI lists the resulting neural net
classification image, and rule images if output, in the Available Bands List.
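The roles of the training rate and momentum (steps 6 and 7) can be sketched with a generic back-propagation weight update; this is an illustration only, not ENVI's internal code:

; Generic back-propagation weight update with momentum: the new step is a
; gradient step scaled by the training rate plus a fraction (the momentum)
; of the previous step.
FUNCTION nn_weight_update, weights, gradient, prev_delta, rate, momentum
   delta = -rate * gradient + momentum * prev_delta
   prev_delta = delta        ; carried forward to the next iteration
   RETURN, weights + delta
END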
which works well in most cases. The mathematical representation of each kernel is
listed below:
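In the LIBSVM formulation cited in the references below, the four kernel types take the following standard forms, stated here in terms of the parameters defined next:

Linear: K(x_i, x_j) = x_i^T x_j
Polynomial: K(x_i, x_j) = (\gamma\, x_i^T x_j + r)^d, \gamma > 0
Radial Basis Function: K(x_i, x_j) = \exp(-\gamma\, \lVert x_i - x_j \rVert^2), \gamma > 0
Sigmoid: K(x_i, x_j) = \tanh(\gamma\, x_i^T x_j + r)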
where:
γ is the gamma term in the kernel function for all kernel types except linear.
d is the polynomial degree term in the kernel function for the polynomial
kernel.
r is the bias term in the kernel function for the polynomial and sigmoid kernels.
γ, d, and r are user-controlled parameters; choosing them correctly can
significantly increase the accuracy of the SVM solution.
Processing large images through the SVM classifier is time-consuming at high
resolution, so ENVI’s SVM provides a hierarchical, reduced-resolution classification
process that improves performance without significantly degrading results. It is most
effective when operating in areas that contain homogenous features, such as water
bodies, parking lots, and fields. The hierarchical classification process performs the
following steps:
1. ENVI resamples the image to the lowest-resolution level requested.
2. ENVI resamples the ROIs to the same resolution.
3. The SVM classifier trains and runs on the reduced resolution image and ROIs.
The classifier performs training at the lower-resolution level (instead of at full
resolution and applying the results at each level) because retraining at each
level provides higher-accuracy results for the resampled imagery.
4. SVM examines all of the rule image values to determine those that exceed the
reclassification probability threshold. The class information and probability
information associated with these pixels are stored for later application to the
result image.
5. The examination process continues at the next higher-resolution pyramid level,
except that SVM performs classification only for pixels that are not marked as
classified at the lower-level resolution. The process repeats until it reaches the
full-resolution layer.
References:
Chang, C.-C. and C.-J. Lin. (2001). “LIBSVM: a library for support vector
machines.”
Hsu, C.-W., Chang, C.-C., and Lin, C.-J. (2007). “A practical guide to support vector
classification.” National Taiwan University. URL
http://ntu.csie.org/~cjlin/papers/guide/guide.pdf.
Wu, T.-F., C.-J. Lin, and R. C. Weng. (2004). “Probability estimates for multi-class
classification by pairwise coupling.” Journal of Machine Learning Research, 5:975-
1005, URL http://www.csie.ntu.edu.tw/~cjlin/papers/svmprob/svmprob.pdf.
You must first have ROIs selected to use as training pixels for each class. The more
pixels, the better the results.
1. From the ENVI main menu bar, select Classification → Supervised →
Support Vector Machine. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Support Vector Machine Classification
Parameters dialog appears.
3. In the Select Classes from Regions list, select at least one ROI and/or vector
as training classes. The ROIs listed are derived from the available ROIs in the
ROI Tool dialog. The vectors listed are derived from the open vectors in the
Available Vectors List.
4. Select the Kernel Type to use in the SVM classifier from the drop-down list.
Options are Linear, Polynomial, Radial Basis Function, and Sigmoid.
Depending on the option you select, additional fields may appear.
5. If the Kernel Type is Polynomial, set the Degree of Kernel Polynomial to
specify the degree to use for the SVM classification. The minimum value is 1,
the default is 2, and the maximum value is 6.
6. If the Kernel Type is Polynomial or Sigmoid, specify the Bias in Kernel
Function for the kernel to use in the SVM algorithm. The default is 1.
7. If the kernel type is Polynomial, Radial Basis Function, or Sigmoid, use the
Gamma in Kernel Function field to set the gamma parameter used in the
kernel function. This value is a floating point value greater than 0. The default
is the inverse of the number of bands in the input image.
8. Specify the Penalty Parameter for the SVM algorithm to use. This value is a
floating point value greater than 0. The penalty parameter controls the trade-off
between allowing training errors and forcing rigid margins. Increasing the
value of the penalty parameter increases the cost of misclassifying points and
causes ENVI to create a more accurate model that may not generalize well.
The default is 100.
9. Use the Pyramid Levels field to set the number of hierarchical processing
levels to apply during the SVM training and classification process. If this value
is set to 0, ENVI processes the image at full resolution only. The default is 0.
The maximum value is dynamic; it varies with the size of the image you select.
The maximum value is determined by the criterion that the highest pyramid-
level image must be larger than 64 x 64 pixels. For example, for an image that
is 24000 x 24000, the maximum level is 8.
10. If the Pyramid Levels field is a value greater than zero, set the Pyramid
Reclassification Threshold to specify the probability threshold that a pixel
classified at a lower resolution level must meet to avoid being reclassified at a
finer resolution. The range is from 0 to 1. The default is 0.9.
11. Use the Classification Probability Threshold field to set the probability that
is required for the SVM classifier to classify a pixel. Pixels where all rule
probabilities are less than this threshold are unclassified. The range is from 0 to
1. The default is 0.0.
12. Select classification output to File or Memory.
13. Use the Output Rule Images? toggle button to select whether or not to create
rule images output. Use rule images to create intermediate classification image
results before final assignment of classes. You can later use rule images in the
Rule Classifier to create a new classification image without having to
recalculate the entire classification (see “Classifying from Rule Images” on
page 458).
14. If you selected Yes to output rule images, select output to File or Memory.
15. Click OK. ENVI adds the resulting output to the Available Bands List. If you
selected to output rule images, ENVI creates one rule image per class, with
pixel values that represent the estimated probability that each pixel belongs to
that class. Pixels whose rule probabilities all fall below the Classification
Probability Threshold remain unclassified in the classification image.
Unsupervised Classification
Use Unsupervised classification to cluster pixels in a dataset based on statistics only,
without any user-defined training classes. The unsupervised classification techniques
available are ISODATA and K-Means.
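Both techniques iterate the same basic clustering step: assign each pixel to the nearest class mean, then recompute the means. A minimal IDL sketch of one such K-Means iteration (an illustration only, not ENVI's internal code) is:

; Minimal sketch of one K-Means iteration. data is an [npix, nbands] array
; of pixel spectra and means is an [nclass, nbands] array of class means;
; each pixel is assigned to the nearest mean (Euclidean distance) and the
; means are recomputed from the new assignments.
FUNCTION kmeans_iteration, data, means, classes = classes
   npix   = (SIZE(data,  /DIMENSIONS))[0]
   nclass = (SIZE(means, /DIMENSIONS))[0]
   classes = LONARR(npix)
   FOR i = 0L, npix - 1 DO BEGIN
      dist = DBLARR(nclass)
      FOR k = 0, nclass - 1 DO $
         dist[k] = TOTAL((REFORM(data[i, *]) - REFORM(means[k, *]))^2)
      void = MIN(dist, kmin)
      classes[i] = kmin
   ENDFOR
   new_means = DOUBLE(means)
   FOR k = 0, nclass - 1 DO BEGIN
      idx = WHERE(classes EQ k, cnt)
      IF cnt GT 0 THEN new_means[k, *] = TOTAL(data[idx, *], 1) / cnt
   ENDFOR
   RETURN, new_means
END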
3. Enter the minimum and maximum Number Of Classes to define. ENVI uses a
range for the number of classes because the ISODATA algorithm splits and
merges classes based on input thresholds and does not keep a fixed number of
classes.
4. Enter the maximum number of iterations in the Maximum Iterations field and
a change threshold (0-100%) in the Change Threshold % field. ENVI uses
the change threshold to end the iterative process when the number of pixels in
each class changes by less than the threshold. The classification ends when
either this threshold is met or the maximum number of iterations is reached.
5. Enter the minimum number of pixels needed to form a class in the Minimum #
Pixels in Class field. If a class contains fewer than the minimum number of
pixels, ENVI deletes that class and places its pixels in the class(es) nearest to
them.
6. Enter the maximum class standard deviation (in DN) in the Maximum Class
Stdv field. If the standard deviation of a class is larger than this threshold then
the class is split into two classes.
7. Enter the minimum distance (in DN) between class means and the maximum
number of merge pairs in the fields provided.
If the distance between class means is less than the minimum value entered,
then ENVI merges the classes. The maximum number of class pairs to merge
is set by the maximum number of merge pairs parameter.
To set the optional standard deviation to use around the class mean and/or the
maximum allowable distance error (in DN), enter the values in the Maximum
Stdev From Mean or Maximum Distance Error fields, respectively.
If you enter values for both of these optional parameters, the classification uses
the smaller of the two to determine which pixels to classify. If you do not enter
a value for either parameter, then all pixels are classified.
8. Select output to File or Memory.
9. Click OK. The status bar cycles from 0 to 100% for each iteration of the
classifier. ENVI adds the resulting output to the Available Bands List. ENVI
computes the statistics for the initial class seeds with a skip factor of 2.5 for
both the sample and line directions.
3. Enter the number of classes and maximum number of iterations in the fields
provided.
4. Enter a Change Threshold % (0-100%) which ENVI uses to end the iterative
process when the number of pixels in each class changes by less than the
threshold. The classification ends when either this threshold is met or the
maximum number of iterations is reached.
To set the optional standard deviation to use around the class mean and/or the
maximum allowable distance error (in DN), enter the values in the Maximum
Stdev From Mean or Maximum Distance Error fields, respectively.
If you enter values for both of these optional parameters, the classification uses
the smaller of the two to determine which pixels to classify. If you do not
enter a value for either parameter, then all pixels are classified.
Input data can be from various sources and data types. For example, you can use
multispectral data in conjunction with digital elevation data to find pixels with low
vegetation and high slopes. You can use georeferenced files that are in different
projections with different pixel sizes in a single decision tree and ENVI reprojects
and resamples them on-the-fly. ENVI can calculate special variables, such as NDVI,
on-the-fly and use them in the expressions. The variables and expressions are
described in the following section.
Tip
See the ENVI Tutorials on the ITT Visual Information Solutions website (or on the
ENVI Resource DVD that shipped with your ENVI installation) for step-by-step
examples.
• Basic arithmetic: addition (+), subtraction (-), multiplication (*), and
division (/)
• Trigonometric functions: sin(x), cos(x), and tan(x); arcs: asin(x), acos(x),
and atan(x); hyperbolics: sinh(x), cosh(x), and tanh(x)
• Relational and logical operators: LT, LE, EQ, NE, GE, GT; AND, OR, NOT,
XOR; maximum (>) and minimum (<)
• {lmnf[n]}: Local minimum noise fraction. Uses only the surviving pixels in
the calculations.
• {mean[n]}: The mean for band n.
• {stdev[n]}: The standard deviation for band n.
• {min[n]}: The minimum of band n.
• {max[n]}: The maximum of band n.
• {lmean[n]}: Local mean. The mean of only the surviving pixels.
• {lstdev[n]}: Local standard deviation. The standard deviation of only the
surviving pixels.
• {lmin[n]}: Local minimum. The minimum of only the surviving pixels.
• {lmax[n]}: Local maximum. The maximum of only the surviving pixels.
You can use any number of different expressions within the decision tree classifier as
long as the expression results in a single, binary decision (true or false answer for
each pixel). The following examples show how to define various simple and complex
decisions:
• Pixels with values greater than 20 in band 1, and values less than or equal to 45
in band 2 are identified by the following expression:
(b1 GT 20) AND (b2 LE 45)
• Pixels with values in band 1 greater than the mean of band 2 plus twice the
standard deviation of band 2 are identified by the following expression:
b1 GT ({mean[2]} + 2*{stdev[2]})
• Pixels with a slope greater than 15 degrees and a northern aspect are identified
by the following expression:
({slope} GT 15) AND (({aspect} LT 90) OR ({aspect} GT 270))
Note
The local statistics variables can only be used on georeferenced files that have the
same projection as the selected base file, otherwise statistics are calculated using the
entire band.
Tip
You can also compile the functions as described in “Compiling” in the ENVI
Programmer’s Guide.
FUNCTION dt_choose_values, data, values
   ; Build a byte mask the same size as the input band
   info = SIZE(data)
   result = MAKE_ARRAY(/BYTE, SIZE = info)
   ; Set the mask to 1 wherever the band equals any of the requested values
   FOR index = 0L, (N_ELEMENTS(values) - 1) DO $
      result += (data EQ values[index])
   RETURN, result
END
To call this function from the Expression field to derive a mask for values of 20, 22,
24, and 26 in the first band (b1) in the bhtmref.img file in the data directory of the
ENVI distribution, use the following syntax:
dt_choose_values(b1, [20, 22, 24, 26])
4. To see details about the number of pixels at each node, right-click in the
background of the Decision Tree dialog and select Zoom In. Each node lists
how many pixels survived to reach that node. The status bar also provides
details when you position the cursor over the class.
5. To make any changes to the decision nodes, click the node button and change
the expression in the Edit Decision Properties dialog.
To change the output class color and name, click the class node button and
enter the new name and color in the Edit Class Properties dialog.
For other interactive options, see the following “Decision Tree Options”
section.
6. Execute the decision tree again and look at the results.
7. Repeat steps 4 through 6 until you are satisfied with the results.
1. From the ENVI main menu bar, select Classification → Decision Tree →
Edit Existing Decision Tree. The Enter Saved Decision Tree Filename dialog
appears.
2. Select the decision tree file to restore and click Open. The selected decision
tree appears in the Decision Tree dialog and you can edit it or execute it as
desired.
Use the Endmember Collection dialog to collect endmembers, select algorithm types,
and manage endmembers. The procedures to use the Endmember Collection dialog
are described in the following sections.
• Left-click in the cell to enter an RGB triplet (such as 255, 0, 0 for red). If
you provide an RGB triplet, ENVI uses the closest valid ENVI graphics
color.
You can also set the default ENVI colors for all the spectra by right-clicking in
the Color column title and selecting Assign default colors to all. You can set
all the spectra colors to <none> with Assign default colors to undefined, and
you can select Reset Colors to reset all the spectra to their original colors. All
of the right-click menu options are also available in the Endmember Collection
dialog Options menu.
• Source: Displays the source of each spectrum. The values in this column
cannot be edited. The following list shows the possible sources for the spectra
in the table.
ASCII file: Spectrum imported from ASCII column data file.
ASD file: Spectrum imported from ASD formatted file (which is output from
the Analytical Spectral Devices spectrometers).
Spec Lib: Spectrum imported from ENVI spectral library (.sli) file.
ROI mean: Mean spectrum from either an ROI or a vector.
Stats file: Spectrum imported from ENVI statistics (.sta) file.
Plot: Spectrum imported from a plot window. For more information on how to
import from the plot window, see “Using the Right-Click Menu” on page 442.
Unknown: Spectrum retrieved from any other source.
• Bands: Displays the number of spectral bands contained in each spectrum.
You cannot edit the values in this column.
• Wavelength: Contains the wavelength range of each spectrum. ENVI converts
the wavelengths of the spectra to the wavelengths of the image data when you
click either Plot or Apply in the dialog. This conversion does not occur if the
wavelength units of the spectra are unknown. You can change the wavelength
units by right-clicking in the cell of the Wavelength column.
To change an input file with unknown wavelength units to a file containing
known wavelength units, select the File → Change Input File option from the
Endmember Collection dialog menu bar.
• Status: Displays the status of each spectrum related to the input file. You
cannot edit the values in this column. The following list shows the possible
status values for the spectra in the table:
Match: The wavelengths of the input file and the spectrum match exactly and
no resampling is necessary when you click either Plot or Apply.
Resample: The wavelengths of the input file and the spectrum are different,
but ENVI is able to resample the spectrum to the wavelength space of the input
file when you click either Plot or Apply.
Invalid: The wavelengths of the input file and the spectrum are different, but
ENVI is not able to resample because the wavelength units of either the
spectrum or the input file are unknown. If a spectrum is invalid, it cannot be
plotted and is not used when you click Apply.
Importing Spectra
You can import endmember spectra into the table in several ways. You can import
from the following:
• Plot windows
• ASCII files
• ASD binary files
• Spectral libraries
• ROIs or vectors
• Statistics files
When using the Mahalanobis distance or maximum likelihood classifiers, you can
only import the endmember spectra from ROIs or statistics files because these
classifications use the endmember covariance statistics.
3. Click on the spectra you want to collect for the Endmember Spectra table.
4. In the Endmember Collection dialog, right-click on either an empty space in
the table or on the upper-left corner of the table. The resulting menu contains
all the available spectra from every displayed plot window.
If no spectra are available in any plot window, the right-click menu states
that no spectra are available.
5. Left-click on the spectra you want to import into the table. The selected spectra
appear in the Endmember Spectra table.
6. Use the right-click menu to import all of the spectra you want to include in
the table.
Dragging-and-Dropping Spectra
You can also drag-and-drop a spectrum from the Spectral Profile plot window key to
the Endmember Spectra table to collect it in the Endmember Collection dialog.
1. From the Spectral Profile plot window menu bar, select Options → Plot Key.
The key (legend) for the plot window appears to the right of the spectral plot.
2. Click and drag the key of a spectrum into the Endmember Spectra table. The
spectrum appears in the Endmember Collection table.
Using the Endmember Collection Dialog Menu Option
You can also import multiple spectra by using an Endmember Collection dialog menu
option.
1. From the Endmember Collection dialog menu bar, select Import → from Plot
Windows. The Import from Plot Windows dialog appears.
2. Select one or more spectra.
3. Click OK. The selected spectra appear in the Endmember Collection dialog.
3. Enter the X Axis Column number that contains the x axis data.
4. Select the endmember spectra to import in the Select Y Axis Columns area.
5. Change the Wavelength Units and Y Scale Factor parameters as needed.
6. Click OK to enter the selected endmember spectra into the list on the
Endmember Collection dialog.
Importing Additional ASCII Files
To select another ASCII file and read the data using the settings previously defined
in the Input ASCII File dialog:
1. From the Endmember Collection dialog menu bar, select Import → from
ASCII file (previous template). The Select ASCII Files to Import dialog
appears.
2. Select an ASCII file and click Open. This option reads the data directly into
the Endmember Collection dialog without the intermediate parameter dialog.
If the Reflectance Scale Factor parameter is set in both the spectral library
header and the image data header, then ENVI will automatically scale the
library data to match the image data. If one of the two data sources has no scale
factor in its header, then no scaling will be done.
6. Click OK to enter the selected spectra into the Endmember Spectra list.
Plotting Spectra
• To plot a spectrum from the Endmember Spectra table, select the row number
column of the spectrum to plot, then click Plot.
• To plot multiple spectra, use the Shift or Ctrl key as you select the row number
columns, then click Plot.
• To plot all the spectra in the table, click Select All, then click Plot.
Deleting Spectra
• To delete a spectrum from the Endmember Spectra table, select the row
number column of the spectrum to delete, then click Delete.
• To delete multiple spectra, use the Shift or Ctrl key as you select the row
number columns, then click Delete.
• To delete all the spectra in the table, click Select All, then click Delete.
Endmember Options
Use the Options menu in the Endmember Collection dialog to edit the endmember
names, edit the endmember colors, or suppress backgrounds (using BandMax).
3. Use the Select Background section of the dialog to collect any spectra you
want to use as backgrounds. This section contains an embedded version of the
Endmember Collection dialog. All the items in this section are the same as the
items provided in the Endmember Collection dialog.
The BandMax significance values are calculated whenever a spectrum is added
or deleted in the Select Background section.
4. Review the Significant Bands list, which shows the bands that BandMax
automatically determined were significant.
5. Modify the Band Significance Threshold value as needed. The threshold
ranges from 0 to 1. ENVI uses only bands with a significance value greater
than or equal to the significance threshold. Setting the Band Significance
Threshold to a higher value results in fewer selected bands in the subset.
The increase/decrease buttons change the threshold by increments of 0.01. An
increase in the Band Significance Threshold value decreases the Number of
Significant Bands value and updates the Significant Bands list. If a change of
0.01 is not enough to update the Number of Significant Bands, increase the
increment until it does.
6. You can also decrease the Number of Significant Bands value, which
increases the Band Significance Threshold value and updates the Significant
Bands list.
The increase/decrease buttons change the number of bands by at least 1. If two
or more bands have the same significance value, ENVI uses a greater
increment to include all of these bands.
7. Click Save Significant Bands to File if you want to save the band subset in the
Significant Bands list to an ASCII file. When you have derived the subset of
bands that effectively detects your targets, you may want to use this same band
subset to perform a series of classifications on a set of images from the same
sensor. You can use the output ASCII file as input when spectrally subsetting a
file.
8. Click OK. All the significant bands displayed in the Significant Bands section
form the band subset that is used on the input data when you click Apply in the
Endmember Collection dialog.
2. Select an input file and perform optional Spatial Subsetting, and/or Masking,
then click OK. ENVI resamples all the endmembers to match the new input
file.
You can change the spectral or spatial subset of your input file by selecting
File → Change Input File, selecting the same filename, and changing the
spectral subset.
Post Classification
Use Post Classification tools to classify rule images, to calculate class statistics and
confusion matrices, to apply majority or minority analysis to classification images, to
clump, sieve, and combine classes, to overlay classes on an image, to calculate buffer
zone images, to calculate segmentation images, and to output classes to vector layers.
Figure 4-11: Class Color Mapping Option from Display Group Menu Bar
1. From the Display group menu bar, select Tools → Color Mapping → Class
Color Mapping. The Classification Mapping dialog appears.
2. To change the color system for all classes, select RGB, HLS, or HSV from the
drop-down list.
3. To modify the class color, select a class name in the Selected Classes list and
use one of the following methods:
• Click on the Color button and select the new color from the resulting
menu.
• Enter new values into the Red, Green, and Blue fields and press Enter.
• Move the color adjustment slider bars.
4. To change the name of the selected class, edit it in the Class Name field.
5. To reset the colors and names to their original values, select Options → Reset
Color Mapping.
6. Select File → Save Changes to retain the new colors.
For more information, see “Mapping Class Colors” on page 123 and “Editing
Classification Information” on page 205.
3. Click the Classify By toggle button to select whether the image will be
classified by Minimum or Maximum values.
4. Select one of the following options to set a threshold value:
• To enter the same threshold value for all classes, enter a value in the Set
All Thresholds field and click Set All Thresholds. The threshold value
appears in the Thresh field for each class.
• To enter a different threshold for each class, enter a value in the Thresh
field for each class.
• To enter a threshold value based on histogram percentage, enter the
percent value (for example, 5%) in the Thresh field.
Tip
You can plot a histogram of a rule band to help you determine threshold
values. For details, see “Plotting Histograms” on page 460.
If using maximum likelihood rule images that were produced from ENVI 3.6
or later, enter the threshold using a percentage (for example, 95%), and the
corresponding Chi Square value will automatically be computed using the
histogram of the rule image.
5. Click Quick Apply. A classification image based on the current settings
displays in a new display group.
To remove a class from the display, select the On check box for that class to
deselect it. To display that class again, select the On check box again.
6. Click Quick Apply to see how any of your changes affect the classification
image.
Plotting Histograms
You can plot a histogram of a rule band to help you determine threshold values by
clicking Hist for that class. A plot window displays with the histogram of the selected
band.
For details about working with ENVI plots, see “Using Interactive Plot Functions” on
page 106.
3. Select an input file from which to calculate the statistics. Select the file that
will be used to calculate class statistics for the areas identified in the
classification image (usually the data file). You can also apply a mask to the
calculation through the Select Mask Band or Build Mask buttons on this
dialog.
4. Click OK. The Class Selection dialog appears.
5. Select the classes that you want to calculate statistics for.
6. Click OK. The Compute Statistics Parameters dialog appears.
7. Select the statistics options by selecting the check boxes. See “Computing
Statistics” on page 274 for detailed information about the options available in
the Compute Statistics Parameters dialog.
8. Click OK. The class statistics are calculated and the Class Statistics Results
dialog appears.
The Class Statistics Results dialog is very similar to the Statistics Results dialog (see
“Viewing Statistics Reports” on page 278) and contains the same functions. However,
the Class Statistics Results dialog also contains additional sections for reporting
multiple instances of statistical information.
The File menu contains the following additional options:
• Save current Class result to ENVI stats file: This option enables you to save
the statistics report for the class specified in the Stats for section of the Class
Statistics Results dialog to an ENVI statistics file. When this option is selected,
the Save Current Class Result to ENVI Stats File dialog appears. In the Enter
Output Stats Filename[.sta] section of this dialog, enter a filename. The
default file extension for ENVI statistics files is .sta. The statistics report is
saved to the specified file when you click OK.
• Save current Class result to text file: This option enables you to save the
statistics report for the class specified in the Stats for section of the Class
Statistics Results dialog to a text file. When this option is selected, the Save
Current Class Result to Text File dialog appears. In the Enter Output Text
Filename[.txt] section of this dialog, enter a filename. The statistics report is
saved to the specified file when you click OK.
Tip
The resulting text file is tab-delimited for easy import into external
spreadsheet programs, such as Excel.
• Save all Class results to ENVI stats files: This option enables you to save the
statistics reports for all the classes to separate ENVI statistics files. When this
option is selected, the Save All Class Results to ENVI Stats Files dialog
appears. In the Enter Output Root Filename section of this dialog, enter the
root name of the statistics files. The default file extension for statistics files is
.sta. The statistics report for each class or region is saved to individual files
when you click OK. The individual files have the same root name that you
specified and are appended with their appropriate class or ROI number.
• Save all Class results to text file: This option enables you to save the statistics
report for all the classes to a text file. When this option is selected, the Save All
Class Results to Text File dialog appears. In the Enter Output Text
Filename[.txt] section of this dialog, enter a filename. The statistics report is
saved to the specified file when you click OK.
Tip
The resulting text file is tab-delimited for easy import into external
spreadsheet programs, such as Excel.
If the input file has an associated pixel size (via georeferencing or explicit setting of
the size in the header file), the Class Summary Area Units submenu appears in the
Options menu. This submenu enables you to specify the units of the reported area.
The default area unit is Meter2. When a different unit is chosen, the Class
Distribution Summary text section is updated to show this change.
The Stats for button contains a list of the available classes. The Class Statistics
Results dialog reports the calculated statistics (in both the plot and text sections) of
the class specified by this menu. To compare statistics for different classes with the
current class shown in the Class Statistics Results dialog, use the Options → Copy
results to new window option to create a copy of the Class Statistics Results for the
current class, then use the Stats for menu to display a different class.
The Select Plot drop-down button also contains the following additional options:
• Mean for all Classes: This option displays a plot of the mean of all the classes.
• Stdev for all Classes: This option displays a plot of the standard deviation of
all the classes.
• Eigenvalues for all Classes: This option displays a plot of the eigenvalues of
all the classes.
• Histogram for all Classes: This option and its submenu display a plot of the
histogram of all the classes for each band of data.
6. Next to the Output Confusion Matrix in label, select the Pixels and/or the
Percent check boxes. If you select both check boxes, they will be reported in
the same window.
7. Next to the Report Accuracy Assessment label, select the Yes or No toggle.
8. Next to Output Error Images label, click the toggle button to select Yes or
No.
The output error images are mask images, one for each class, where all
correctly classified pixels have a value of 0 and incorrectly classified pixels
have a value of 1. The last error image band shows all the incorrectly classified
pixels for all the classes combined.
9. Select output to File or Memory.
10. Click OK.
The report shows the overall accuracy, kappa coefficient, confusion matrix, errors of
commission (percentage of extra pixels in class), errors of omission (percentage of
pixels left out of class), producer accuracy, and user accuracy for each class. Producer
accuracy is the probability that a pixel in the classification image is put into class x
given the ground truth class is x. User Accuracy is the probability that the ground
truth class is x given a pixel is put into class x in the classification image. The
confusion matrix output shows how each of these accuracy assessments is calculated.
For details, see “Confusion Matrix Example” on page 467.
Confusion Matrix Example
Producer accuracy is the probability that a pixel in the classification image is put into
class x given the ground truth class is x. User Accuracy is the probability that the
ground truth class is x given a pixel is put into class x in the classification image. The
confusion matrix output shows how the accuracy assessments are calculated.
(Confusion matrix example table: pixel counts for each class, with producer accuracy
(Prod. Acc.) and user accuracy (User Acc.) columns.)
Overall Accuracy
The overall accuracy is calculated by summing the number of pixels classified
correctly and dividing by the total number of pixels. The ground truth image or
ground truth ROIs define the true class of the pixels. The pixels classified correctly
are found along the diagonal of the confusion matrix table which lists the number of
pixels that were classified into the correct ground truth class. The total number of
pixels is the sum of all the pixels in all the ground truth classes.
Kappa Coefficient
The kappa coefficient (κ) is another measure of the accuracy of the classification. It is
calculated by multiplying the total number of pixels in all the ground truth classes (N)
by the sum of the confusion matrix diagonals (xkk), subtracting the sum of the ground
truth pixels in a class times the sum of the classified pixels in that class summed over
all classes (x_kΣ x_Σk), and dividing by the total number of pixels squared minus the sum
of the ground truth pixels in that class times the sum of the classified pixels in that
class summed over all classes:

\kappa = \frac{N \sum_k x_{kk} \;-\; \sum_k x_{k\Sigma}\, x_{\Sigma k}}{N^{2} \;-\; \sum_k x_{k\Sigma}\, x_{\Sigma k}}
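As a worked illustration, the overall accuracy and kappa coefficient can be computed from a square confusion matrix of pixel counts with a few lines of IDL (a sketch only, not ENVI's internal code):

; cm[i, j] holds the number of pixels of ground truth class j labelled as
; class i (raw counts, not percentages).
FUNCTION kappa_coefficient, cm, overall_accuracy = oa
   c    = DOUBLE(cm)
   nc   = (SIZE(c, /DIMENSIONS))[0]
   n    = TOTAL(c)                            ; total number of pixels
   diag = TOTAL(c[LINDGEN(nc) * (nc + 1)])    ; correctly classified pixels
   oa   = diag / n                            ; overall accuracy
   marg = TOTAL(TOTAL(c, 1) * TOTAL(c, 2))    ; sum of row total * column total
   RETURN, (n * diag - marg) / (n^2 - marg)
END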
Commission
Errors of commission represent pixels that belong to another class that are labelled as
belonging to the class of interest. The errors of commission are shown in the rows of
the confusion matrix. In the confusion matrix example, the Grass class has a total of
102,421 pixels where 64,516 pixels are classified correctly and 37,905 other pixels
are classified incorrectly as Grass (37,905 is the sum of all the other classes in the
Grass row of the confusion matrix). The ratio of the number of pixels classified
incorrectly to the total number of pixels classified into the class gives the error of
commission. For the Grass class the error of commission is 37,905/102,421, which
equals 37%.
Omission
Errors of omission represent pixels that belong to the ground truth class but the
classification technique has failed to classify them into the proper class. The errors of
omission are shown in the columns of the confusion matrix. In the confusion matrix
example, the Grass class has a total of 109,484 ground truth pixels where 64,516
pixels are classified correctly and 44,968 Grass ground truth pixels are classified
incorrectly (44,968 is the sum of all the other classes in the Grass column of the confusion
matrix). The ratio of the number of pixels classified incorrectly to the total number of
pixels in the ground truth class forms the error of omission. For the Grass class, the
error of omission is 44,968/109,484, which equals 41.1%.
Producer Accuracy
The producer accuracy is a measure indicating the probability that the classifier has
labelled an image pixel into Class A given that the ground truth is Class A. In the
confusion matrix example, the Grass class has a total of 109,484 ground truth pixels
where 64,516 pixels are classified correctly. The producer accuracy is the ratio
64,516/109,484 or 58.9%.
User Accuracy
User accuracy is a measure indicating the probability that a pixel is Class A given that
the classifier has labelled the pixel into Class A. In the confusion matrix example, the
classifier has labelled 102,421 pixels as the Grass class and a total of 64,516 pixels are
classified correctly. The user accuracy is the ratio 64,516/102,421 or 63.0%.
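The Grass-class figures quoted above can be verified with a few lines of arithmetic (the numbers come from the confusion matrix example; the snippet itself is illustrative, not ENVI code):

    correct        = 64516    # Grass pixels classified correctly
    classified_tot = 102421   # all pixels labelled Grass by the classifier (row total)
    truth_tot      = 109484   # all Grass ground truth pixels (column total)

    commission = (classified_tot - correct) / classified_tot   # ~0.370 -> 37.0%
    omission   = (truth_tot - correct) / truth_tot             # ~0.411 -> 41.1%
    producer   = correct / truth_tot                           # ~0.589 -> 58.9%
    user       = correct / classified_tot                      # ~0.630 -> 63.0%
    print(commission, omission, producer, user)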
Reference:
A. P. Bradley, 1997, “The use of the area under the ROC curve in the evaluation of
machine learning algorithms,” Pattern Recognition, Vol. 30, No. 7, pp. 1145-1159.
6. Click the Classify by toggle button to select whether to classify the rule image
by minimum value or maximum value. For example, if your rule images are
from the minimum distance or SAM classifier, classify by minimum value. If
your rule images are from the maximum likelihood classifier, classify by
maximum value.
7. In the Min and Max parameters fields, type minimum and maximum values
for the ROC curve threshold range. Rule images are classified at N (specified
by Points per ROC curve) evenly spaced thresholds between (and including)
the Min and Max values. Each of these classifications is compared to the
ground truth and becomes a single point on a ROC curve. For example, if your
rule images are from the maximum likelihood classifier, the best choice is to
enter a min value of 0 and max value of 1.
8. In the Points per ROC Curve field, enter the number of points in the ROC
curves.
9. In the ROC curve plots per window field, enter the number of plots per
window.
10. Select whether to output a probability of detection versus threshold plot by
selecting the Yes or No check box.
11. Click OK. The ROC curves and probability of detection curves appear in plot
windows.
To remove a class match from the list, select the combination name. The two
class names reappear in the lists at the top of the dialog.
4. Click OK. The ROC Curve Parameters dialog appears.
5. Click the Classify by toggle button to select whether to classify the rule image
by minimum value or maximum value. For example, if your rule images are
from the minimum distance or SAM classifier, classify by minimum value. If
your rule images are from the maximum likelihood classifier, classify by
maximum value.
6. In the Min and Max parameters fields, type minimum and maximum values
for the ROC curve threshold range. Rule images are classified at N (specified
by Points per ROC curve) evenly spaced thresholds between (and including)
the Min and Max values. Each of these classifications is compared to the
ground truth and becomes a single point on a ROC curve. For example, if your
rule images are from the maximum likelihood classifier, the best choice is to
enter a min value of 0 and max value of 1.
7. In the Points per ROC Curve field, enter the number of points in the ROC
curves.
8. In the ROC curve plots per window field, enter the number of plots per
window.
9. Select whether to output a probability of detection versus threshold plot by
selecting the Yes or No check box.
10. Click OK. The ROC curves and probability of detection curves appear in plot
windows.
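For reference, a minimal sketch of how a single rule image and a binary ground truth mask become ROC-curve points under the thresholding scheme described above; the function name and arguments are illustrative assumptions, not ENVI's implementation:

    import numpy as np

    def roc_points(rule, truth, min_val, max_val, n_points, classify_by="maximum"):
        """Return (false alarm rate, probability of detection) arrays, one point per threshold."""
        rule = np.asarray(rule, dtype=float)
        truth = np.asarray(truth, dtype=bool)
        thresholds = np.linspace(min_val, max_val, n_points)   # N evenly spaced thresholds
        fpr, pd = [], []
        for t in thresholds:
            # "maximum": e.g. maximum likelihood rule images (larger value = class);
            # "minimum": e.g. minimum distance or SAM rule images (smaller value = class).
            detected = rule >= t if classify_by == "maximum" else rule <= t
            pd.append((detected & truth).sum() / max(truth.sum(), 1))
            fpr.append((detected & ~truth).sum() / max((~truth).sum(), 1))
        return np.array(fpr), np.array(pd)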
For large sample sizes, the distribution of classes (or ROIs) in the sample will
approximate a Stratified Random sampling, but classes with small sizes may
be missed altogether in the random sample.
3. Select a class (or ROI) in the list at the top of the dialog. The class (or ROI)
will show up in the field at the bottom left of the dialog under the Edit Sample
Size for Selected Class label.
4. In the white box next to the class (or ROI) name at the bottom right of the
dialog, enter a sample size in pixels. To set the sample size to a percent of the
total class (or ROI) size, enter the percentage value (for example, 15%) then
press Enter. The sample size in pixels will automatically be calculated.
5. Click OK.
The total sample size displays to the right of the set ROI sample sizes button.
Random Sampling
1. Set the Sample Size in pixels or percent by clicking the toggle button. Entering
a value for one will automatically update the value for the other, making it easy
to see the relationship between the percentage sample size and the pixel sample
size.
2. The total sample size displays beneath the Sample Size parameter and updates
automatically when a new sample size is entered.
Majority/Minority Analysis
Use Majority/Minority Analysis to apply majority or minority analysis to a
classification image. Use majority analysis to change spurious pixels within a large
single class to that class. You enter a kernel size, and the center pixel in the kernel will
be replaced with the class value held by the majority of the pixels in the kernel. If you
select Minority analysis, the center pixel in the kernel will be replaced with the class
value held by the minority of the pixels in the kernel.
1. From the ENVI main menu bar, select Classification → Post
Classification → Majority/Minority Analysis. The Input File dialog appears.
2. Select the classification input file and any optional Spatial Subsetting and/or
Spectral Subsetting, then click OK. The Majority/Minority Parameters dialog
appears.
3. In the list of classes, select the classes that you want to apply the analysis to.
If the center pixel is from a class that you did not select in the Select Classes
list, ENVI does not change that pixel. However, if the unselected class is the
majority class in the kernel, ENVI can change center pixels of selected classes
into an unselected class.
4. Select the analysis method, by clicking the corresponding toggle button.
5. Enter a kernel size. Kernel sizes are odd and the kernels do not have to be
square. Larger kernel sizes produce more smoothing of the classification
image.
If you select Majority analysis, enter the Center Pixel Weight. The center
pixel weight is the weight used to determine how many times the class of the
center pixel is counted when determining which class is in the majority. For
example, if you enter a weight of 1, ENVI will count the center pixel class only
one time; if you enter 5, ENVI will count the center pixel class five times.
6. Select output to File or Memory.
7. Click OK. ENVI adds the resulting output to the Available Bands List.
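The kernel logic described above can be sketched as follows (a slow but explicit illustration in Python/NumPy under stated assumptions; it is not ENVI's implementation and ignores the selected-classes restriction):

    import numpy as np

    def majority_minority(classes, kernel=(3, 3), center_weight=1, mode="majority"):
        """classes: 2D integer array of class codes; returns a filtered copy."""
        out = classes.copy()
        ky, kx = kernel[0] // 2, kernel[1] // 2
        for y in range(ky, classes.shape[0] - ky):
            for x in range(kx, classes.shape[1] - kx):
                window = classes[y - ky:y + ky + 1, x - kx:x + kx + 1].ravel().tolist()
                # count the center pixel's class center_weight times in total
                window += [classes[y, x]] * (center_weight - 1)
                codes, counts = np.unique(window, return_counts=True)
                idx = counts.argmax() if mode == "majority" else counts.argmin()
                out[y, x] = codes[idx]
        return out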
Clumping Classes
Use Clump Classes to clump adjacent similar classified areas together using
morphological operators. Classified images often suffer from a lack of spatial
coherency (speckle or holes in classified areas). Low pass filtering could be used to
smooth these images, but the class information would be contaminated by adjacent
class codes. Clumping classes solves this problem. The selected classes are clumped
together by first performing a dilate operation then an erode operation on the
classified image using a kernel of the size specified in the parameters dialog.
1. From the ENVI main menu bar, select Classification → Post
Classification → Clump Classes. The Input File dialog appears.
2. Select a classified image and perform any optional Spatial Subsetting, then
click OK. You can select only classified images (based upon the file type
described in the image’s header). The Clump Parameters dialog appears with
all of the available classes in the image in the Select Classes list.
3. Select the classes on which to perform clumping.
4. Enter the morphological operator size in the Rows and Cols fields.
5. Select output to File or Memory.
6. Click OK.
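A minimal sketch of the dilate-then-erode sequence for a single selected class, using SciPy's binary morphology (illustrative only, not ENVI's implementation):

    import numpy as np
    from scipy import ndimage

    def clump_class(class_image, class_code, rows=3, cols=3):
        """Clump one class: dilate, then erode, with the same structuring element."""
        mask = (class_image == class_code)
        structure = np.ones((rows, cols), dtype=bool)
        dilated = ndimage.binary_dilation(mask, structure=structure)
        clumped = ndimage.binary_erosion(dilated, structure=structure)
        out = class_image.copy()
        out[clumped] = class_code
        return out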
Sieving Classes
Use Sieve Classes to solve the problem of isolated pixels occurring in classification
images. Sieving classes removes isolated classified pixels using blob grouping.
Again, low pass or other types of filtering could be used to remove these areas, but
the class information would be contaminated by adjacent class codes. The sieve
classes method looks at the neighboring 4 or 8 pixels to determine if a pixel is
grouped with pixels of the same class. If the number of pixels in a class that are
grouped is less than the value that you enter, those pixels will be removed from the
class. When pixels are removed from a class using sieving, black pixels (unclassified)
will be left.
Tip
Use the Clump Classes function (see “Clumping Classes” on page 482) after
sieving to replace the black pixels.
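A minimal sketch of sieving a single class with SciPy's connected-component labeling (illustrative only, not ENVI's implementation):

    import numpy as np
    from scipy import ndimage

    def sieve_class(class_image, class_code, min_pixels, neighbors=8, unclassified=0):
        # 4- or 8-neighbor connectivity for the blob grouping
        structure = ndimage.generate_binary_structure(2, 2 if neighbors == 8 else 1)
        labels, n_blobs = ndimage.label(class_image == class_code, structure=structure)
        sizes = np.bincount(labels.ravel())              # sizes[0] is the background
        small = np.isin(labels, np.where(sizes < min_pixels)[0]) & (labels > 0)
        out = class_image.copy()
        out[small] = unclassified                        # removed pixels become unclassified
        return out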
Combining Classes
Use Combine Classes to selectively combine classes in classified images. You can
also merge classes using Overlay → Classification from the Display group menu bar
(see “Merging Classes” on page 47 for details).
Combining classes or removing the unclassified class effectively deletes those
individual classes.
1. From the ENVI main menu bar, select Classification → Post
Classification → Combine Classes. The Classification Combine Classes
dialog appears.
2. Select a classified image and perform any optional Spatial Subsetting, then
click OK. The Combine Classes Parameters dialog appears.
3. Select a class for input from the Input Classes list. The selected class name
appears in the Input Class field.
4. Select an output class by clicking on a class name in the Output Classes list.
5. When both the input and output classes are selected, click Add Combination
to finalize the selection. The new, combined class to create is shown in the
Combined Classes list at the bottom of the dialog. For example, selecting
region 1 as the input and region 3 as the output causes the string region 1 ->
region 3 to appear in the Combined Classes list.
To deselect combined classes, select the name in the Combined Classes list.
6. Click OK. The Combine Classes Output dialog appears.
7. Select output to File or Memory. ENVI adds the resulting output to the
Available Bands List.
Overlaying Classes
Use Overlay Classes to produce an image map with a color composite or gray scale
background image and the classes overlaid in color. ENVI creates a three-band RGB
image.
You can also overlay classes using the Overlay menu on the Display group menu bar.
For details, see “Overlaying Classes” on page 44.
Due to the nature of the classification overlay, the background image should be
stretched and saved to byte output images prior to overlay.
1. From the ENVI main menu bar, select Classification → Post
Classification → Overlay Classes. The Input File dialog appears.
2. Select the classification image and any optional Spatial Subsetting, then click
OK. The Input Background RGB Input Bands dialog appears.
3. Click sequentially on the red, green, and blue bands to use for the background
image. The input files must be byte images (that is, files containing values
between 0 and 255). If a gray scale background is desired, select the same
spectral band for the RGB inputs.
4. Click OK. The Class Overlay to RGB Parameters dialog appears.
5. Select the classes to overlay on the background image by clicking the toggle
button associated with the class name in the list.
6. Select output to File or Memory.
7. Click OK. ENVI creates the class overlay image. If your display is set to 8-bit
color, the class overlay image may appear incorrect when displayed due to the
color quantization. However, on output, it will be correct.
with a distance larger than that value will be set to the maximum distance value. An
example of a buffer zone image is shown in the following figure.
output all of the classes to a single layer, attributes that include the class
number, polygon length, and area will be created for each polygon.
5. Select output to File or Memory.
6. Click OK. ENVI makes a polygon vector layer for each class selected. If you
select to output each class to a separate layer, each selected class is saved to a
separate vector file with an _1, _2, and so forth appended to the root name.
The vectors appear to be shifted one half pixel to the Southeast because ENVI
interprets the map coordinate of the vector node to be the upper left hand corner of
the raster pixel (versus treating the address as the center of the pixel).
Image Sharpening
Use Image Sharpening tools to automatically merge a low-resolution color, multi-,
or hyper-spectral image with a high-resolution gray scale image (with resampling to
the high-resolution pixel size). ENVI uses the following image sharpening techniques
for byte-scaled RGB imagery:
• An HSV transform.
• A color normalization (Brovey) transform.
The images must either be georeferenced or have the same image dimensions. The
RGB input bands for the sharpening should be stretched byte data or selected from an
open color display.
ENVI uses the following image sharpening techniques for spectral imagery:
• A Gram-Schmidt transform.
• A principal components (PC) transform.
• A color normalized (CN) transform.
more accurate because it uses the spectral response function of a given sensor to
estimate what the panchromatic data look like.
If you display a Gram-Schmidt pan-sharpened image and a PC pan-sharpened image,
the visual differences are very subtle. The differences are in the spectral information;
compare a Z Profile of the original image with that of the pan-sharpened image to see
the differences in spectral information, or calculate a covariance matrix for both
images. The effect of pan sharpening is best revealed in images with homogenous
surface features (flat deserts or water, for example).
The low spatial resolution spectral bands used to simulate the panchromatic band
must fall in the range of the high spatial resolution panchromatic band or they will not
be included in the resampling process.
ENVI performs Gram-Schmidt spectral sharpening by:
1. Simulating a panchromatic band from the lower spatial resolution spectral
bands.
2. Performing a Gram-Schmidt transformation on the simulated panchromatic
band and the spectral bands, using the simulated panchromatic band as the first
band.
3. Swapping the high spatial resolution panchromatic band with the first Gram-
Schmidt band.
4. Applying the inverse Gram-Schmidt transform to form the pan-sharpened
spectral bands.
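A heavily simplified, component-substitution style sketch of the same idea (simulate a pan band, then inject the real pan band's detail with per-band gains) is shown below. It assumes the multispectral bands are already co-registered and resampled to the pan pixel size, and it is not ENVI's patented Gram-Schmidt implementation:

    import numpy as np

    def gs_like_pansharpen(ms, pan, weights=None):
        """ms: (bands, rows, cols) float array; pan: (rows, cols) float array."""
        nb = ms.shape[0]
        weights = np.full(nb, 1.0 / nb) if weights is None else np.asarray(weights)
        pan_sim = np.tensordot(weights, ms, axes=1)      # simulated low-resolution pan band
        detail = pan - pan_sim                           # high spatial resolution detail
        var_sim = pan_sim.var()
        sharpened = np.empty_like(ms)
        for b in range(nb):
            # gain = covariance(band, simulated pan) / variance(simulated pan)
            gain = np.cov(ms[b].ravel(), pan_sim.ravel())[0, 1] / var_sim
            sharpened[b] = ms[b] + gain * detail
        return sharpened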
Reference:
Laben et al., Process for Enhancing the Spatial Resolution of Multispectral Imagery
Using Pan-Sharpening, US Patent 6,011,875.
The images you use must either be georeferenced or have the same image
dimensions. If the images are georeferenced, ENVI co-registers the images before
performing the sharpening.
Note
Ensure that you have adequate disk space before performing a Gram-Schmidt
transformation, because this process creates an output file and several temporary
files. An error message will appear during the process if you do not have adequate
disk space.
5. If you applied a mask to the low resolution input data, the Mask Output Value
field appears. Set a value for the pixels in the output masked area.
ENVI uses only non-masked pixels in the calculation of the low resolution
statistics. The mask is applied to the high resolution result and masked pixels
are set to the specified mask output value. The default value is 0.
6. Select the resampling method from the Resampling drop-down list.
7. Select output to File or Memory.
8. Click OK. ENVI adds the resulting output to the Available Bands List.
3. Select the high spatial resolution (low spectral resolution) sharpening image,
and perform optional Spatial Subsetting and/or Spectral Subsetting, then click
OK. The CN Spectral Sharpening Parameters dialog appears.
4. Enter a scale factor for the sharpening image in the Sharpening Image
Multiplicative Scale Factor field.
The sharpening image must be in the same units and have the same scale factor
as the input image. For example, if the input image is an integer hyperspectral
file calibrated into units of (reflectance * 10000), but the sharpening image is a
floating-point multispectral file calibrated into reflectance (zero to one), then
enter a scale factor of 10,000. If the input image is in units of radiance of
[μW/(cm² · nm · sr)], but the sharpening image is in units of radiance of
[μW/(cm² · μm · sr)], then enter a scale factor of 0.001.
5. If the input and sharpening images require resampling or warping to produce a
coregistered pair, and the input image is BIL or BIP interleave, the Output
Interleave option appears. Select an output interleave. (In all other cases the
output interleave is the same as the input image interleave.) Creating BSQ
output is faster, but is often inconvenient for further use of the sharpened
image.
All in-memory results must be BSQ interleave.
6. Select output to File or Memory.
7. Click OK. ENVI adds the resulting output to the Available Bands List. Bands
that were sharpened are easily identified by their band names. The units, data
type, dynamic range, and image geometry match that of the input image. The
pixel size matches that of the sharpening image.
3. Click Enter Pair to create a new band pair listed in the Selected Ratio Pairs
list. When you select the first pair, only bands with the same spatial size appear
in the Available Bands List. The Numerator and Denominator fields clear.
Create as many ratio combinations as needed by entering additional band pairs.
All ratios in the Selected Ratio Pairs list are output as multiple bands in a
single file.
4. Click OK. The Band Ratios Parameters dialog appears.
5. To select an optional spatial subset, click Spatial Subset. For subsetting
details, see “Spatial Subsetting” on page 215.
6. Select output to File or Memory.
7. To output the ratio values as byte or floating-point data, select from the Output
Data Type drop-down list. If you select Byte, ENVI stretches the output ratio
values by mapping the values entered in the Min and Max fields to 0 to 255.
8. If you selected Byte output, adjust the data range used for the byte stretch by
changing the Min and Max values.
9. Click OK. ENVI adds the resulting output to the Available Bands List.
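A minimal sketch of a single band ratio with the optional byte stretch described in step 7 (illustrative Python/NumPy, not ENVI code; zero-denominator pixels are simply set to 0 here):

    import numpy as np

    def band_ratio(numerator, denominator, as_byte=False, min_val=0.0, max_val=5.0):
        num = np.asarray(numerator, dtype=float)
        den = np.asarray(denominator, dtype=float)
        ratio = np.divide(num, den, out=np.zeros_like(num), where=den != 0)
        if not as_byte:
            return ratio
        # map [min_val, max_val] linearly to 0-255
        scaled = (np.clip(ratio, min_val, max_val) - min_val) / (max_val - min_val)
        return (scaled * 255).astype(np.uint8)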
2. Select the input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK. The Forward PC Parameters
dialog appears.
Tip
If you have any bad bands in your dataset, you should use spectral subsetting
to exclude them from PC analysis. When bad bands are set to the same values
as adjacent bands, when they are interpolated from nearby bands, when they
are set to a constant value, or when they contain zero variance or large
outliers, a “singularity” problem is encountered where the band becomes a
near-perfect linear combination of other bands. This may cause the “Too Many
Iterations in TQLI” error message to appear when you calculate the forward
PC rotation.
3. Click Stats Subset to calculate the statistics based on a spatial subset or the
area under an ROI. The calculated statistics are applied to the entire file or to a
spatial subset of the file. See “Statistics Subsetting” on page 223 for details.
4. Enter Stats X/Y Resize Factors of less than 1 in the appropriate fields to
subsample the data when calculating the statistics. This increases the speed of
the statistics calculations. For example, a resize factor of 0.1 uses every 10th
pixel in the statistics calculations.
5. In the Output Stats Filename [.sta] field, enter a filename for the noise
statistics.
6. Select whether to calculate the PCs based on the Covariance Matrix or
Correlation Matrix using the toggle button. Typically:
• Use Covariance Matrix when calculating the principal components.
• Use Correlation Matrix when the data range differs greatly between
bands and normalization is needed.
7. If you applied a mask, enter a value for the output results in the Output Mask
Value field. ENVI applies the mask for the statistics calculation and the
masked areas of the output dataset are set to the entered mask value.
8. Select output to File or Memory.
9. From the Output Data Type drop-down list, select the data type of the output
file.
10. To use a subset from eigenvalues, use the Select Subset from Eigenvalues
toggle button to select Yes or No.
11. If you chose No to selecting a subset from eigenvalues, select the Number of
Output PC Bands. The default number of output bands is equal to the number
of input bands.
12. Click OK.
If you chose No to selecting a subset from eigenvalues, ENVI performs the
transform and adds the output to the Available Bands List. The PC Eigenvalues
plot window also appears. For information on editing and other options in the
Eigenvalue plot window, see “Using Interactive Plot Functions” on page 106.
If you chose Yes to selecting a subset from eigenvalues, the Select Number of
Output Bands dialog appears. Each band is listed with its corresponding
eigenvalue and the cumulative percentage of data variance contained in each
PC band. Do the following:
A. Select the Number of Output PC Bands. PC bands with large eigenvalues
contain the largest amounts of data variance, while bands with lower
eigenvalues contain less data information and more noise. Sometimes, it is
best to output only those bands with large eigenvalues to save disk space.
B. Click OK. ENVI performs the transform and adds the output to the
Available Bands List. The output PC rotation contains only the number of
bands that you selected. For example, if you chose 4 as the number of
output bands, only the first four PC bands appear in your output file.
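For reference, a minimal sketch of a forward principal components rotation on an image cube, supporting either the covariance or the correlation matrix (illustrative only; it is not ENVI's implementation and writes no statistics file):

    import numpy as np

    def forward_pc(cube, use_correlation=False, n_output=None):
        """cube: (bands, rows, cols) array; returns (PC cube, eigenvalues)."""
        nb, nr, nc = cube.shape
        data = cube.reshape(nb, -1).astype(float)             # one row per band
        data -= data.mean(axis=1, keepdims=True)
        if use_correlation:
            data /= data.std(axis=1, keepdims=True)           # normalize differing band ranges
        eigvals, eigvecs = np.linalg.eigh(np.cov(data))
        order = np.argsort(eigvals)[::-1]                      # largest variance first
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        n_output = nb if n_output is None else n_output
        pcs = eigvecs[:, :n_output].T @ data                   # project onto the PCs
        return pcs.reshape(n_output, nr, nc), eigvals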
Inversing PC Rotations
Use Inverse PC Rotation to transform principal components images back into their
original data space.
1. From the ENVI main menu bar, select Transform → Principal
Components → Inverse PC Rotation. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Enter Statistics Filename dialog appears with
all of the existing statistics files in the current input data directory listed. The
statistics files appear with the default file extension .sta.
3. Select the statistics file saved from the forward PC rotation. The statistics file
must exist before you select the inverse PC rotation.
4. Select either Covariance Matrix or Correlation Matrix by clicking the
Calculate using toggle button.
If you want to inverse the images back to their original data space, select the
same calculate method that you used in the forward rotation.
5. Select output to File or Memory.
6. From the Output Data Type drop-down list, select the data type of the output
file.
7. Click OK. ENVI adds the resulting output to the Available Bands List.
component, the fixed-point algorithm runs first. If the algorithm does not
converge within the maximum number of iterations, the stabilized fixed-point algorithm
runs to improve convergence. The default value is 100. The lower limit is 0 (no
stabilization step). Enabling stabilization and increasing stabilization iterations
helps ENVI find the optimal components; however, each iteration adds to
processing time, depending on the CPU and system load.
9. Select one of the following from the Contrast Function drop-down list.
• LogCosh (default). If you select LogCosh, you must also enter a value in
the Coefficient field. LogCosh is a good general-purpose contrast
function.
• Kurtosis
• Gaussian
The contrast functions and their first-order derivatives are as follows:
• LogCosh: G_1(u) = \frac{1}{a_1} \log \cosh(a_1 u), \quad g_1(u) = \tanh(a_1 u)
• Kurtosis: G_2(u) = -\exp(-u^2 / 2), \quad g_2(u) = u \exp(-u^2 / 2)
• Gaussian: G_3(u) = \frac{1}{4} u^4, \quad g_3(u) = u^3
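Written out in code for reference (plain NumPy; a1 is the LogCosh coefficient with the 1.0 to 2.0 range noted below):

    import numpy as np

    def logcosh(u, a1=1.0):
        # returns (G1(u), g1(u))
        return np.log(np.cosh(a1 * u)) / a1, np.tanh(a1 * u)

    def kurtosis(u):
        # returns (G2(u), g2(u))
        return -np.exp(-u**2 / 2.0), u * np.exp(-u**2 / 2.0)

    def gaussian(u):
        # returns (G3(u), g3(u))
        return 0.25 * u**4, u**3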
10. If your Contrast Function value is LogCosh, enter the Coefficient. The
default is 1.0. The range is 1.0 to 2.0.
11. To use a subset from eigenvalues, use the Select Subset from Eigenvalues
toggle button to select Yes or No.
12. If you chose No to selecting a subset from eigenvalues, select the Number of
Output IC Bands. The default is the number of input bands.
13. Select output to File or Memory.
14. If you applied a mask, enter a value for the output results in the Output Mask
Value field. ENVI sets the masked areas of the output dataset to the entered
mask value.
15. Enable the Sort Output Bands by 2D Spatial Coherence check box if you
want the output bands to be sorted by decreasing spatial coherence. Use this
option if a noisy band could appear as the first IC.
16. Enter the Output Transform Filename. You can use this file for future IC
calculations on the same image, or on an image with similar features. See
“Rotating from an Existing Transform” on page 514.
17. Click OK.
If you chose No to selecting a subset from eigenvalues, ENVI performs the
transform and adds the output to the Available Bands List.
If you chose Yes to selecting a subset from eigenvalues, the Select Number of
Output Bands dialog appears. Each band is listed with its corresponding
eigenvalue and the cumulative percentage of data variance contained in each
PC whitened band. Do the following:
A. Select the Number of Output IC Bands. If the value is less than the
number of input bands, data dimension is reduced.
B. Click OK. ENVI performs the transform and adds the output to the
Available Bands List. The output IC rotation contains only the number of
bands that you selected, or that met the eigenvalue subset criteria. For
example, if you chose 4 as the number of output bands, only four IC bands
appear in your output file.
5. Select the Output Data Type from the drop-down list. You can save the output
as byte, floating-point, integer, long integer, or double precision values.
6. Click OK. ENVI adds the resulting output to the Available Bands List.
Tip
You can use ENVI’s Spectral Hourglass Wizard to guide you step-by-step through
the ENVI hourglass processing flow, including MNF transforms, to find and map
image spectral endmembers from hyperspectral or multispectral data. For details,
see “Spectral Hourglass Wizard” on page 829.
References:
Green, A. A., Berman, M., Switzer, P., and Craig, M. D., 1988, A transformation for
ordering multispectral data in terms of image quality with implications for noise
removal: IEEE Transactions on Geoscience and Remote Sensing, v. 26, no. 1, p. 65-
74.
Boardman, J. W., and Kruse, F. A., 1994, Automated spectral analysis: a geological
example using AVIRIS data, north Grapevine Mountains, Nevada: in Proceedings,
ERIM Tenth Thematic Conference on Geologic Remote Sensing, Environmental
Research Institute of Michigan, Ann Arbor, MI, pp. I-407 - I-418.
Statistics Files
Unlike principal components analysis, the forward MNF transform produces two
separate statistics files: MNF noise statistics and MNF statistics. While these files
appear to be ordinary ENVI statistics files, they contain information unique to MNF
and omit data typically found in ENVI statistics files.
During a forward MNF rotation, ENVI computes the following statistics:
• The mean for each band of the input image (to normalize the data)
• The covariance statistics of the noise (for the noise rotation and normalization)
• The covariance statistics of the noise-whitened and rescaled input image data
The first rotation stores its complete set of covariance statistics in the MNF noise
statistics file. However, this file contains only the noise covariance statistics, not the
data normally found in ENVI statistics files.
The second rotation stores its eigenvector matrix and eigenvalues in the MNF
statistics file. However, the rest of the covariance statistics for the second rotation are
not saved because the covariance placeholder in the MNF statistics file is used to
store a special “composite” MNF transformation matrix. This matrix describes the
net result of both principal components rotations as well as the band-independent
scaling introduced by the noise normalization. This non-orthogonal, non-unit length,
matrix allows an inverse MNF rotation to be applied in a single step.
The following table summarizes the contents of the statistics files generated by the
MNF transform:
(Table columns: Statistic, MNF Noise Statistics File, MNF Statistics File.)
gathered using the shift-difference statistics from a homogeneous area rather than
from the whole image. ENVI allows you to select the subset for statistics extraction.
1. From the ENVI main menu bar, select either:
• Transform → MNF Rotation → Forward MNF → Estimate Noise
Statistics From Data
• Spectral → MNF Rotation → Forward MNF → Estimate Noise
Statistics From Data
The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK. The Forward MNF Transform
Parameters dialog appears.
3. Click Shift Diff Subset to select a spatial subset or an area under an
ROI/EVF/and so forth on which to calculate the statistics. You can then apply
the calculated results to the entire file (or to the file subset if you selected one
when you selected the input file). For instructions, see “Statistics Subsetting”
on page 223.
When estimating noise from image data using the shift differencing approach,
you should strongly consider choosing a spatial subset that is spectrally
uniform. However, if any band in your dataset has zero variance, you may
encounter a “singularity” problem where the covariance matrix cannot be
inverted, and you may receive a “Too Many Iterations in TQLI” error message.
For example, if you use a deep, placid lake for your noise subset, most bands
will have very little variation in the subset, which is what you want. However,
if one band has the same pixel value throughout the entire subset, the band has
zero variance and will cause a singularity in the covariance matrix.
You can find bands that contain zero variance, large outliers, or statistics that
are identical to adjacent bands, by generating and reviewing a statistics report
on the same shift-difference subset of your input image that you plan to choose
for an MNF transform (see “Computing Statistics” on page 274). If you
determine that any band has zero variance, you should choose a slightly
different or larger subset for your noise estimate. The shift-difference spatial
subset must contain more pixels than you have bands in the image you are
transforming. Then, add the bad band(s) to the Bad Bands List in the ENVI
header as described in “Selecting Bad Bands” on page 198. ENVI excludes the
bad bands from numerical processing.
4. In the Enter Output Noise Stats Filename [.sta] field, enter a filename for the
noise statistics.
To select a homogeneous area for calculating the noise statistics, click Spatial
Subset to either manually enter a subset or to graphically indicate the area for
statistics extraction (see “Spatial Subsetting” on page 215).
5. In the Output MNF Stats Filename [.sta] field, enter an output file for the
MNF statistics. Be sure that the MNF and noise statistics files have different
names. See “Statistics Files” on page 517 for more information on these two
output files.
6. Select output to File or Memory.
7. Select the number of output MNF bands by using one of the following options:
• To select the number of output bands without examining the eigenvalues,
select No from the Select Subset from Eigenvalues toggle button, then set
the Number of Output MNF Bands.
• To select the number of output MNF bands by examining the eigenvalues,
use the following steps:
A. Select Yes from the Select Subset from Eigenvalues toggle button.
B. Click OK. ENVI calculates the statistics and the Select Output MNF
Bands dialog appears, with each band listed with its corresponding
eigenvalue. Also listed is the cumulative percentage of data variance
contained in each MNF band for all bands.
C. Set the Number of Output MNF Bands. For the best results, and to save
disk space, output only those bands with high eigenvalues. Images with
eigenvalues close to 1 are mostly noise.
8. Click OK. When ENVI finishes processing, the MNF Eigenvalues plot
window appears and the MNF bands are added to the Available Bands List.
See “Interpreting the MNF Eigenvalues Plot” on page 520. For information on
editing and other options in the plot window, see “Using Interactive Plot
Functions” on page 106.
Interpreting the MNF Eigenvalues Plot
When ENVI finishes processing, it adds the MNF bands to the Available Bands List
and displays the MNF Eigenvalues plot window. The output only contains the number
of bands you selected for output. For example, if your input data contains 224 bands,
but you selected only 50 bands for output, your output only contains the first 50
calculated MNF bands from your input file.
Bands with large eigenvalues (greater than 1) contain data, and bands with
eigenvalues near 1 contain noise. Display the MNF bands from the Available Bands
List and compare with the MNF Eigenvalue plot to determine which bands contain
data and which bands contain predominantly noise. In subsequent processing of this
data, spectrally subset the MNF bands to only include those bands where the images
appear spatially coherent and the eigenvalues are above the break in slope of the
MNF Eigenvalue plot. In the example shown in the following figure, you should only
include the first ten to twelve MNF bands.
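For reference, a minimal sketch of the forward MNF idea (shift-difference noise estimate, noise whitening, then a standard principal components rotation). It is illustrative only, not ENVI's implementation, and it produces no statistics files:

    import numpy as np

    def forward_mnf(cube):
        """cube: (bands, rows, cols) float array; returns (MNF cube, eigenvalues)."""
        nb, nr, nc = cube.shape
        data = cube.reshape(nb, -1).astype(float)
        data -= data.mean(axis=1, keepdims=True)

        # shift-difference noise estimate (difference with the pixel to the right)
        noise = (cube[:, :, 1:] - cube[:, :, :-1]).reshape(nb, -1) / np.sqrt(2.0)
        noise_cov = np.cov(noise)

        # first rotation: whiten the noise (guard against zero-variance bands)
        nvals, nvecs = np.linalg.eigh(noise_cov)
        nvals = np.maximum(nvals, 1e-12)
        whitener = nvecs @ np.diag(1.0 / np.sqrt(nvals)) @ nvecs.T
        whitened = whitener @ data

        # second rotation: principal components of the noise-whitened data
        evals, evecs = np.linalg.eigh(np.cov(whitened))
        order = np.argsort(evals)[::-1]
        mnf = evecs[:, order].T @ whitened
        return mnf.reshape(nb, nr, nc), evals[order]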
the existing statistics files in the current input data directory listed. The
statistics files appear with the default file extension .sta.
3. Select a noise statistics file from a previous MNF transform processing session
and click Open. The Forward MNF Transform Parameters dialog appears.
4. In the Enter Output MNF Stats Filename [.sta] field, enter an output
filename for the MNF statistics. See “Statistics Files” on page 517 for more
information on this file.
5. Select output to File or Memory.
6. Select the number of output MNF bands by using one of the following options:
• To select the number of output bands without examining the eigenvalues,
select No from the Select Subset from Eigenvalues toggle button, then set
the Number of Output MNF Bands.
• To select the number of output MNF bands by examining the eigenvalues,
use the following steps:
A. Select Yes from the Select Subset from Eigenvalues toggle button.
B. Click OK. ENVI calculates the noise statistics and the first rotation. The
Select Output MNF Bands dialog appears, with each band listed with its
corresponding eigenvalue. Also listed is the cumulative percentage of data
variance contained in each MNF band for all bands.
C. Set the Number of Output MNF Bands. For the best results, and to save
disk space, output only those bands with high eigenvalues. Images with
eigenvalues close to 1 are mostly noise.
7. Click OK. When ENVI finishes processing, the MNF Eigenvalues plot
window appears and the MNF bands are added to the Available Bands List.
See “Interpreting the MNF Eigenvalues Plot” on page 520. For information on
editing and other options in the plot window, see “Using Interactive Plot
Functions” on page 106.
Color Transforms
Use Color Transforms to convert three-band red, green, blue (RGB) images to one
of several specific color spaces and from the selected color space back to RGB.
Adjusting the contrast stretch between the two transforms, you can produce a color-
enhanced color composite image.
Additionally, you can replace the value or lightness band with another band (usually
of higher spatial resolution) to produce an image that merges the color characteristics
of one image with the spatial characteristics of another image. ENVI does this
automatically in HSV Sharpening (see “Image Sharpening” on page 493).
The color spaces supported by ENVI include the hue, saturation, value (HSV), the
hue, lightness, saturation (HLS) and the USGS Munsell.
The Munsell color system is used by soil scientists and geologists to characterize the
color of soils and rocks. This color system has been modified by the U. S. Geological
Survey to describe color in digital images. The transform converts RGB coordinates
into the color coordinates Hue, Saturation, and Value (HSV). Hue ranges from 0-360,
where 0 and 360 = blue, 120 = green, and 240 = red. Saturation ranges from 0 to 208
with higher numbers representing more pure colors. Value ranges from
approximately 0 to 512 with higher numbers representing brighter colors.
Color transforms require three bands for input. These bands should be stretched byte
data or selected from an open color display.
Reference:
Kruse and Raines, A technique for enhancing digital color images by contrast
stretching in Munsell color space, in Proceedings of the ERIM Third Thematic
Conference, Environmental Research Institute of Michigan, Ann Arbor, MI, 1994:
pp. 755-760.
• RGB to HLS: Transforms an RGB image into the HLS color space. The Hues
produced are in the range of 0 to 360, where 0 = red, 120 = green, and 240 =
blue; and lightness and saturation are in the range 0 to 1 (floating-point). You
must have either an input file with at least three bands or a color display open
to use this function. The input RGB values must be byte data in the range 0 to
255.
• USGS Munsell RGB to HSV: Transforms an RGB image into the USGS
Munsell HSV color space. The input RGB values must be byte data in the
range 0 to 255. You must have either an input file with at least three bands or a
color display open to use this function.
From the ENVI main menu bar, select one of the following:
• Transform → Color Transforms → RGB to HSV
If you do not have a display open, the RGB to HSV Input Bands dialog
appears. See “Selecting Bands from the Input Bands Dialog” on page 528 for
further steps.
If you do have a display open, the RGB to HSV Input dialog appears. See
“Selecting Bands from an Open Display Group” on page 529 for further steps.
• Transform → Color Transforms → RGB to HLS
If you do not have a display open, the RGB to HLS Input Bands dialog
appears. See “Selecting Bands from the Input Bands Dialog” on page 528 for
further steps.
If you do have a display open, the RGB to HLS Input dialog appears. See
“Selecting Bands from an Open Display Group” on page 529 for further steps.
• Transform → Color Transforms → RGB to HSV (USGS Munsell)
If you do not have a display open, the RGB to USGS Munsell HSV Input
Bands dialog appears. See “Selecting Bands from the Input Bands Dialog” on
page 528 for further steps.
If you do have a display open, the USGS Munsell HSV Input dialog appears.
See “Selecting Bands from an Open Display Group” on page 529 for further
steps.
Selecting Bands from the Input Bands Dialog
If you select bands from the Available Bands List in one of the Input Bands dialogs,
ENVI does not apply stretching, and all data is clipped to byte type.
1. In the Input Bands dialog, select the bands to transform from the Available
Bands List. The names of the bands appear in the H, S, and V, (or H, L, and S)
fields.
2. To spatially subset your data, click Spatial Subset. For spatial subsetting
details, see “Spatial Subsetting” on page 215.
3. Click OK. The Parameters dialog appears.
4. Select output to File or Memory.
5. Click OK. ENVI adds the resulting output to the Available Bands List.
Selecting Bands from an Open Display Group
1. In the Input dialog, select one of the following:
• Available Bands List: When you select this option and click OK, the
Input Bands dialog appears. Use the steps in “Selecting Bands from the
Input Bands Dialog” above to select the bands to transform.
• Display #n: This method selects your bands from a color display and uses
the displayed stretch number. When you select this option and click OK,
the Parameters dialog appears.
2. To spatially subset your data, click Spatial Subset. For spatial subsetting
details, see “Spatial Subsetting” on page 215.
3. Select output to File or Memory.
4. Click OK. ENVI adds the resulting output to the Available Bands List.
• USGS Munsell HSV to RGB: Transforms a USGS Munsell HSV image
into the RGB color space. The input H, S, and V bands must have the
following data ranges: Hue = 0 to 360, where 0 and 360 = blue, 120 = green,
and 240 = red; Saturation ranges from 0 to 208 with higher numbers
representing more pure colors; Value ranges from approximately 0 to 512 with
higher numbers representing brighter colors. The RGB values produced are
byte data in the range 0 to 255.
1. From the ENVI main menu bar, select one of the following:
• Transform → Color Transforms → HSV to RGB. The HSV to RGB
Input Bands dialog appears.
• Transform → Color Transforms → HLS to RGB. The HLS to RGB
Input Bands dialog appears.
• Transform → Color Transforms → HSV to RGB (USGS Munsell).
The USGS Munsell HSV to RGB Input Bands dialog appears.
2. Select bands to transform from the Available Bands List. The names of the
bands appear in the H, S, and V, (or H, L, and S) fields.
3. To spatially subset your data, click Spatial Subset. For spatial subsetting
details, see “Spatial Subsetting” on page 215.
4. Click OK. Depending on the transform you are performing, the HSV to RGB
Parameters dialog, the HLS to RGB Parameters dialog, or the USGS Munsell
HSV to RGB Parameters dialog appears.
5. Select output to File or Memory.
6. Click OK. ENVI adds the resulting output to the Available Bands List.
From the ENVI main menu bar, select Transform → Decorrelation Stretch.
If you do not have a display open, the Decorrelation Stretch Input dialog appears. See
“Selecting Bands from the Input Bands Dialog” on page 528 for further steps.
If you do have a display open, the Decorrelation Stretch Input dialog appears. See
“Selecting Bands from an Open Display Group” on page 529 for further steps.
Using NDVI
Use the NDVI to transform multispectral data into a single image band representing
vegetation distribution. The NDVI (Normalized Difference Vegetation Index) values
indicate the amount of green vegetation present in the pixel. Higher NDVI values
indicate more green vegetation. ENVI’s NDVI uses the standard algorithm:
\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red}}
Valid results fall between -1 and +1. ENVI has pre-set bands for AVHRR, Landsat
MSS, Landsat TM, SPOT, or AVIRIS data or you can enter the bands to use for other
data types.
Reference:
Jensen, J. R., 1986. Introductory Digital Image Processing, Prentice-Hall, New
Jersey, p. 379.
1. From the ENVI main menu bar, select Transform → NDVI. The Input File
dialog appears.
2. Select an input file and perform optional Spatial Subsetting, then click OK.
The NDVI Calculation Parameters dialog appears.
3. Specify the Input File Type (TM, MSS, AVHRR, and so on) from the drop-
down list. ENVI automatically enters the bands it uses to calculate the NDVI in
the Red and Near IR fields.
To calculate the NDVI for a sensor type not listed in the drop-down list, enter
the band numbers in the Red and Near IR fields.
4. Select output to File or Memory.
5. Select one of the following output types from the Output Data Type drop-
down list.
• Byte: Enter the Min and Max NDVI value in the fields supplied.
• Floating Point
6. Click OK. ENVI adds the resulting output to the Available Bands List.
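A minimal sketch of the NDVI calculation with the optional byte output described in step 5 (illustrative Python/NumPy, not ENVI code; pixels where NIR + Red is zero are set to 0):

    import numpy as np

    def ndvi(nir, red, as_byte=False, min_val=-1.0, max_val=1.0):
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        total = nir + red
        result = np.divide(nir - red, total, out=np.zeros_like(total), where=total != 0)
        if not as_byte:
            return result
        # map the chosen [min_val, max_val] range linearly to 0-255
        scaled = (np.clip(result, min_val, max_val) - min_val) / (max_val - min_val)
        return (scaled * 255).astype(np.uint8)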
• Median: Smooths an image, while preserving edges larger than the kernel
dimensions (good for removing salt and pepper noise or speckle). ENVI’s
Median filter replaces each center pixel with the median value (not to be
confused with the average) within the neighborhood specified by the filter size.
The default is a 3x3 kernel.
• Sobel: A special case, non-linear edge enhancement filter that uses a preset
3 x 3 approximation of the true Sobel function. The size of the filter cannot be
changed and no kernel editing is possible.
• Roberts: A non-linear edge detector filter similar to the Sobel. It is a special
case filter that uses a preset 2x2 approximation of the true Roberts function, a
simple, 2D differencing method for edge-sharpening and isolation. The size of
the filter cannot be changed and no kernel editing is possible.
• User-Defined Convolution: You can define custom convolution kernels
(including rectangular rather than square filters) by selecting and editing a user
kernel.
Mathematical morphology filtering is a non-linear method of processing digital
images on the basis of shape. Its primary goal is the quantification of geometrical
structures.
Reference:
Haralick, Sternberg, and Zhuang, Image Analysis Using Mathematical Morphology,
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-9, No.
4, July 1987, pp. 532-550.
Morphological kernels used in ENVI are just the structuring element and should not
be confused with convolution kernels. Morphology filter types include the following:
• Dilate: Commonly known as fill, expand, or grow, fills holes smaller than the
structural element (kernel) in a binary or gray scale image. Use only with
unsigned byte, unsigned long-integer, or unsigned integer data.
• Erode: Commonly known as shrink or reduce, removes islands of pixels
smaller than the structural element (kernel) in a binary or gray scale image.
Use only with unsigned byte, unsigned long-integer, or unsigned integer data.
• Opening: Smooths the contours, breaks narrow isthmuses, and eliminates small
islands and sharp peaks or capes in an image. The opening of an image is
defined as the erosion of the image followed by subsequent dilation using the
same structural element.
Tip
Using the Erode filter followed by using the Dilate filter produces the same
result as using an Opening filter.
• Closing: Smooths the contours, fuses narrow breaks and long thin gulfs,
eliminates small holes, and fills gaps in the contours of an image. The closing of
an image is defined as the dilation of the image followed by subsequent
erosion using the same structural element.
Tip
Using the Dilate filter followed by using the Erode filter produces the same
result as using the Closing filter.
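A minimal sketch of both composite operations with SciPy's gray scale morphology, using the same structuring element for both passes (illustrative only, not ENVI's implementation):

    import numpy as np
    from scipy import ndimage

    structure = np.ones((3, 3))        # structuring element (morphological kernel)

    def opening(image):
        # erosion followed by dilation with the same structuring element
        eroded = ndimage.grey_erosion(image, footprint=structure)
        return ndimage.grey_dilation(eroded, footprint=structure)

    def closing(image):
        # dilation followed by erosion with the same structuring element
        dilated = ndimage.grey_dilation(image, footprint=structure)
        return ndimage.grey_erosion(dilated, footprint=structure)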
4. Enter an add back value in the Image Add Back (0-100%) field. Adding back
part of the original image to the convolution filter results helps preserve the
spatial context and is typically done to sharpen an image. The Image Add
Back value is the percentage of the original image that is included in the final
output image. For example, if you enter 40%, then 40% of the original image is
added to 60% of the convolution filter image to produce the final result.
5. Select Quick Apply or Apply to File as described in “Applying Filter Results”
on page 545.
Editing Kernels
1. In the Convolutions and Morphology Tool dialog, double-click within the
Editable Kernel field of the value to edit. The line cursor appears.
2. Highlight the value and enter a new one.
3. Press Enter.
4. Enter the size of the processing window in the Rows (Y) and Cols (x) fields to
set the area to consider for the texture evaluation.
5. Select output to File or Memory.
6. Click OK. ENVI adds the resulting output to the Available Bands List.
for a 3x3 window. The pixels in the 3 x 3 base window and the pixels in a 3x3
window that was shifted by 1 pixel are used to create the co-occurrence matrix.
Reference:
Haralick, R. M., Shanmugan, K., and Dinstein, I., 1973, “Textural Features for Image
Classification,” IEEE Transactions on Systems, Man, and Cybernetics, Vol. 3, No. 6,
pp. 610-621.
Anys, H., A. Bannari, D. C. He, and D. Morin, 1994. “Texture analysis for the
mapping of urban areas using airborne MEIS-II images,” Proceedings of the First
International Airborne Remote Sensing Conference and Exhibition, Strasbourg,
France, Vol. 3, pp. 231-245.
1. From the ENVI main menu bar, select Filter → Texture → Co-occurrence
Measures. The Texture Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Co-occurrence Texture Parameters dialog
appears.
3. Specify the texture images to create by selecting the check boxes next to the
texture types in the Textures to Compute area of the dialog.
4. Enter the size of the processing window in the Rows (y) and Cols (x) fields.
5. Enter the x and y shift values to use to calculate the co-occurrence matrix (see
Figure 6-3).
6. Select the gray scale quantization levels (up to 64) to use to calculate the co-
occurrence matrix. This setting is useful if the gray values of the image are
spread over a broad range (versus being clustered in a certain range).
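For reference, a minimal sketch of building the co-occurrence matrix for one processing window from the quantization level and (x, y) shift described above; the texture measures themselves would then be computed from this matrix. The function and its defaults are illustrative assumptions, not ENVI's implementation:

    import numpy as np

    def cooccurrence_matrix(window, shift=(1, 1), levels=64):
        """window: 2D array of gray values; shift: (x shift, y shift) in pixels."""
        w = np.asarray(window, dtype=float)
        # quantize the window to the requested number of gray levels
        q = np.floor((w - w.min()) / (np.ptp(w) + 1e-12) * (levels - 1)).astype(int)
        dx, dy = shift
        base = q[:q.shape[0] - dy, :q.shape[1] - dx]       # base window pixels
        shifted = q[dy:, dx:]                              # shifted window pixels
        glcm = np.zeros((levels, levels), dtype=float)
        for a, b in zip(base.ravel(), shifted.ravel()):
            glcm[a, b] += 1
        return glcm / glcm.sum()                           # normalized co-occurrence matrix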
6. To change the Noise Variance value, enter a new value in the field. The Noise
Variance parameter is set to the additive noise variance when the Additive or
Both noise model is chosen. It is set to the multiplicative noise variance when
the Multiplicative noise model is chosen.
Tip
To estimate the noise variance, calculate the data variance over a flat area,
such as a lake or smooth playa, in the image. For multiplicative noise in radar
data, estimate the noise variance by 1/(number of looks).
5. Enter values that define the class cutoffs for the homogeneous (coefficient of
variation ≤ Cu), heterogeneous (Cu < coefficient of variation < Cmax), and point
target (coefficient of variation ≥ Cmax) classes.
Estimate the cutoff values based on the number of looks (L) of the radar image:

C_u \cong \frac{0.523}{\sqrt{L}} \quad \text{and} \quad C_{max} \cong \sqrt{1 + \frac{2}{L}}
Reference:
Zhenghao Shi and Ko B. Fung, “A Comparison of Digital Speckle Filters,”
Proceedings of IGARSS 94, August 8-12, 1994, pp. 2129-2133.
1. From the ENVI main menu bar, select Filter → Adaptive → Frost.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Frost Filter Parameters dialog appears.
3. Enter the Filter Size.
4. Enter the value in the Damping Factor field. The damping factor determines
the amount of exponential damping and the default value of 1 is sufficient for
most radar images. Larger damping values preserve edges better but smooth
less, and smaller values smooth more. A damping value of 0 results in the same
output as a low pass filter.
5. Select output to File or Memory.
6. Click OK. ENVI adds the resulting output to the Available Bands List.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Gamma Filter Parameters dialog appears.
3. Enter the Filter Size.
4. Enter the Number of Looks in the field provided. ENVI uses the Number of
Looks parameter to calculate the noise variance as 1/(number of looks).
5. Select output to File or Memory.
6. Click OK. ENVI adds the resulting output to the Available Bands List.
Reference:
Eliason, Eric M. and McEwen, Alfred S., “Adaptive Box Filters for Removal of
Random Noise from Digital Images,” Photogrammetric Engineering & Remote
Sensing, April, 1990, V56 No. 4, p. 453.
1. From the ENVI main menu bar, select Filter → Adaptive → Bit Errors.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Bit Error Removal Parameters dialog appears.
3. Enter the Filter Size in pixels.
4. Enter a Sigma Factor for the number of standard deviations to use for
determining valid pixels.
5. Enter a Tolerance (in data values). Pixels are only considered bad if they have
a value greater than the tolerance. A pixel is classified as a bit error when the
pixel value minus the filter box mean is greater than the Sigma Factor times
the localized standard deviation and greater than the tolerance. The bad pixels
will be replaced by the average of surrounding valid pixels by default.
6. To set bad pixels to zero instead of replacing them with an average, click Yes
for Zero Bit Errors?.
7. Optionally, enter the minimum and maximum values to consider as valid data
for the mean determination in the Valid Data Min and Valid Data Max fields.
8. Select output to File or Memory.
9. Click OK. ENVI adds the resulting output to the Available Bands List.
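A minimal sketch of the bit-error test described in step 5 (local box mean and standard deviation, the sigma factor, and the tolerance), with flagged pixels replaced by the box mean or set to zero. It is illustrative only, not ENVI's implementation, and it omits the valid-data range check of step 7:

    import numpy as np
    from scipy import ndimage

    def remove_bit_errors(image, filter_size=3, sigma_factor=6.0, tolerance=10.0,
                          zero_bit_errors=False):
        img = np.asarray(image, dtype=float)
        box_mean = ndimage.uniform_filter(img, size=filter_size)
        box_sq = ndimage.uniform_filter(img ** 2, size=filter_size)
        box_std = np.sqrt(np.maximum(box_sq - box_mean ** 2, 0.0))
        diff = img - box_mean
        # bit error: difference exceeds both sigma_factor * local std and the tolerance
        bad = (diff > sigma_factor * box_std) & (diff > tolerance)
        out = img.copy()
        out[bad] = 0.0 if zero_bit_errors else box_mean[bad]
        return out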
1. From the ENVI main menu bar, select Filter → FFT Filtering → Forward
FFT. The Forward FFT Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Forward FFT Parameters dialog appears.
To ensure your input dataset has an even number of samples and lines (see
Note above), you should select an even-dimensioned spatial subset of the
image.
3. Select output to File or Memory. Selecting file output is recommended.
4. Click OK. A status window displays the progress of the operation. ENVI
processes the entire FFT without tiling, so processing is limited by the system
memory of your machine. The status goes quickly from 0 to 100%. ENVI adds
the resulting output to the Available Bands List.
When you display a forward FFT image, ENVI displays the natural log of the
magnitude of the complex pixel values (see “Using the Complex Lookup Function”
on page 208).
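For reference, a minimal sketch of a forward FFT and the log-magnitude image used for display (NumPy, whole image in memory; not ENVI code):

    import numpy as np

    def forward_fft_display(image):
        fft = np.fft.fft2(np.asarray(image, dtype=float))    # complex FFT result
        return fft, np.log(np.abs(fft) + 1e-12)              # natural log of the magnitude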
• For User Defined Pass and User Defined Cut filters, you can load ENVI
annotation (polygons and shapes only) into the filter (see “Loading
Annotations into User-Defined Filters” on page 565).
5. Enter the Number of Border Pixels to use to taper the filter (smooth the edges
of the filter). A value of zero indicates no smoothing.
Figure 6-7: From left to right: Circular Pass (low pass), Circular Cut (high pass),
Band Pass, Band Cut, and User Defined Filters
The images above depict, from left to right, Circular Pass (low pass), Circular
Cut (high pass), Band Pass, Band Cut, and User Defined filters. The diagonal
lines in the first four images and the grid lines in the fifth image represent the
area that will be filtered out.
6. Select output to File or Memory.
7. Click Apply. ENVI adds the resulting output to the Available Bands List. The
filter is a single band image of the specified dimensions.
Tip
Because of the limited range of the filter DN values (0 or 1), a contrast stretch
without histogram clipping (for example, a quick linear stretch) must be used to
properly display the filter image.
SPEAR Tools
The Spectral Processing Exploitation and Analysis Resource (SPEAR) tools provide
a series of Wizards that walk you through the steps ENVI requires to process
imagery. Basic instructions are included on each Wizard dialog and can be displayed
or hidden using the arrow button on the left side of the dialog. A Show informational
dialogs between steps check box is located on the bottom of some Wizard dialogs.
By default, this check box is not enabled. If you enable it, information dialogs display
at certain times during the Wizard to provide guidance. To proceed in the Wizard, you
must close the information dialog. Detailed information on any process can be
accessed using the Help button at the bottom of each Wizard dialog.
The following Wizards are available:
• SPEAR Anomaly Detection
• SPEAR Change Detection
• Google Earth Bridge
• SPEAR Image to Map Registration
• SPEAR Independent Components Analysis
• Lines of Communication (LOC) - Roads
• Lines of Communication (LOC) - Water
• SPEAR Metadata Browser
• SPEAR Orthorectification
• SPEAR Pan Sharpening
• Relative Water Depth
• Spectral Analogues
• TERCAT
• SPEAR Vegetation Delineation
• Vertical Stripe Removal
• Watercraft Finder
(RXD) algorithm to detect and extract targets that are spectrally distinct from the
image background. For more information on this algorithm, see “RX Anomaly
Detection” on page 827.
The SPEAR Anomaly Detection Wizard steps you through the process of running the
RXD algorithm on your image and provides the following:
• Since vegetation is spectrally anomalous in regions like arid areas, the SPEAR
Anomaly Detection Wizard includes an option to suppress vegetation
anomalies.
• The SPEAR Anomaly Detection Wizard provides the ability to set thresholds
to minimize false positives.
• The SPEAR Anomaly Detection Wizard provides a way to filter, review, and
rate detected anomalies.
• Once you are satisfied with the detected anomalies, you can export them (by
rating) to vector shapefiles.
To run the SPEAR Anomaly Detection Wizard:
1. From the ENVI main menu bar, select Spectral → SPEAR Tools → Anomaly
Detection. The SPEAR Anomaly Detection Wizard displays the File Selection
panel.
2. Click Select Input File, choose a file, then click OK. The input image should
be a multispectral file in any format readable by ENVI.
3. To optionally process only a portion of the scene, click Select Subset. A small
Select Spatial Subset dialog appears.
4. Click Spatial Subset. The standard Select Spatial Subset dialog appears. See
“Spatial Subsetting” on page 215. When finished, click OK to return to the
File Selection panel.
5. By default, output files are saved to the same directory and use the same
rootname as the input file, minus any extension. Output files are appended with
a unique suffix. To change the directory and/or root filename, click Select
Output Root Name.
6. Click Next. The Select Parameters dialog appears.
7. Select an algorithm from the drop-down list provided. For detailed information
on each of these algorithms, see “RX Anomaly Detection” on page 827. The
following options are available:
• RXD: Standard RXD algorithm.
• UTD: Uniform Target Detector. UTD and RXD work exactly the same, but
instead of using a sample vector from the data (as with RXD), UTD uses
the unit vector. UTD extracts background signatures as anomalies and
provides a good estimate of the image background.
• RXD-UTD: A hybrid of the RXD and UTD methods. Subtracting UTD
from RXD suppresses the background and enhances the anomalies of
interest. The best condition to use RXD-UTD is when the anomalies have
an energy level that is comparable to, or less than, that of the background.
8. Using the Mean Source toggle button, specify whether the mean spectrum
should be derived from the full dataset (Global) or from a localized kernel
around the pixel (Local). If you choose Local, the Local Kernel Size field
appears. Specify a kernel size, in pixels, that will be used to create a mean
spectrum around a given pixel. The default value is 15.
9. Optionally, enable the Suppress vegetation anomalies checkbox to suppress
vegetative anomalies in the RXD results. This option is best used when
vegetation is a minor component of the image. It works by calculating a
Normalized Difference Vegetation Index (NDVI) for the input image, then
rescaling the RXD results inversely proportional to the NDVI. For more
information on NDVI, see “Normalized Difference Vegetation Index” on
page 1225.
10. Click Next. A processing status dialog appears and ENVI adds the resulting
output to the Available Bands List. The Filter Results dialog appears.
The original image in natural color and the RXD results in grayscale are
opened in the display and are dynamically linked for comparison. Bright pixels
in the output image represent targets that are spectrally distinct from the image
background.
11. Change the display of the reference image using the Reference Display drop-
down list.
12. If you selected to suppress vegetation anomalies, use the Anomaly Display
drop-down list to toggle between the RXD results and the vegetation-
suppressed results.
13. Optionally, select a threshold to segment the image into anomalous and non-
anomalous regions. The threshold should be set low enough to minimize false
positives without omitting real anomalies. Click on the dotted bar in the
histogram window or enter values in the fields at the top of the histogram
window to explore different thresholding options. Use Auto-Flicker to
examine your results (for detailed information on using this tool, see “Auto-
Flicker” on page 585). Once a suitable threshold has been determined, proceed
to the next step of the Wizard.
SPEAR Change Detection
In a two-color multiview (2CMV) composite, areas that are brighter from one image to the other appear in cyan, and areas that are darker from one image to the other appear in red. The colors can then be used to indicate potential areas of change.
For best results, use images with similar view geometries. Different view geometries
may cause objects such as trees and structures to “lean” in different directions.
Because these issues cannot be resolved in coregistration, they cause artifacts in the
results.
References:
Unsalan, C., and K. L. Boyer (2004). A system to detect houses and residential street networks in multispectral satellite images. Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), Volume 3.
The SPEAR Change Detection tool is intended for multispectral images. If
panchromatic images are selected, 2CMV will be the only change detection method
available.
1. From the ENVI main menu bar, select Spectral → SPEAR Tools → Change
Detection. The Change Detection Wizard displays the File Selection panel.
2. Time #1 and Time #2 images do not need to be in chronological order. One
will be used as the base image, and the other will be warped to match that
image. To preserve as much information as possible, use the highest resolution
image for the Time #1 image. Click Select Time #1 File, choose a file for the
base image, then click OK. The Auto Tie Points Matching Band dialog
appears.
3. Select the band to use for auto tie point matching, then click OK.
4. Click Select Time #2 File, choose a file for the warp image, then click OK.
The Auto Tie Points Matching Band dialog appears.
5. Select the band to use for auto tie point matching, then click OK.
6. To optionally process only a portion of the scene, click Select Subset. A small
Select Spatial Subset dialog appears. Subsetting is applied to both images. See
“Spatial Subsetting” on page 215 for detailed instructions on selecting a
subset. When finished, click OK to return to the File Selection panel.
7. By default, the output file is saved to the same directory and uses the same
filename as the Time #2 input file, minus any extension. The filename is
appended with change_pca. To change the directory and/or filename, click
Select Output Root Name.
8. Click Next. The Method Selection panel appears with coregistration
parameters. See “Coregistration” on page 580 for detailed instructions on
setting the coregistration parameters. Click Next. The next two dialogs, the Review Tie Points dialog and the Check Coregistration dialog, allow you to check the accuracy of the coregistration and fix tie points if needed. When complete, click Next on each dialog. The Method Selection dialog appears.
9. Enable the checkboxes for each change detection method you elect to use.
10. Click Show Advanced Options to set advanced parameters for the Image
Transform and/or Subtractive methods.
• For the Image Transform method, select the desired transform algorithm to
use: Principal Components (see “Principal Component Analysis” on
page 504 for more information), Minimum Noise Fraction (see
“Minimum Noise Fraction Transform” on page 516 for more information),
or Independent Components (see “Independent Components Analysis”
on page 509 for more information).
• For the Subtractive method, select whether you wish to perform dark object subtraction and/or radiometric normalization. Dark object subtraction removes haze and performs atmospheric correction. Radiometric normalization normalizes the overall brightness of the Time #2 image to the Time #1 image on a band-by-band basis. If a pixel is unchanged between the two images, the digital number (DN) difference will be zero. (A minimal sketch of this band arithmetic appears after this procedure.)
11. Click Next to begin processing. Processing may take several minutes; when it
is complete, the Examine Results dialog appears.
The change detection results and the input images (in natural color) are opened
in the display and are dynamically linked for comparison. For the Image
Transform and Subtractive methods, bright pixels in the output image indicate
changed areas. For the 2CMV method, red or cyan colors in the output image
indicate changed areas.
12. Change the display of the method results using the See results for drop-down
list. Changing the method displayed will provide different visualization
options in the Wizard. Each is described below.
Optionally, click on the dotted bar in the histogram window or enter values in
the fields at the top of the histogram window to aid in visualizing areas of
change. Use Auto-Flicker to examine your results (for detailed information on
using this tool, see “Auto-Flicker” on page 585).
• 2CMV: The display group on the left contains an RGB image composite in
which red or cyan colors indicate changed areas. The Time #1 image is
displayed in the red band, and the Time #2 image in the green and blue bands.
The color depends on the relative brightness of the changed object. Changed
areas are highlighted as bright and/or dark pixels in one of the transformed
bands.
The 2CMV image can only display one band at a time. To select another band,
use the See 2CMV for drop-down list, then click Load Image. Toggle the
colors displayed using the 2CMV Colors option.
Optionally save the results using Export to NITF and/or Save to Graphic.
Or, click Export Image to ArcGIS Geodatabase to save the results to a
geodatabase. The Select Output Geodatabase dialog appears. See “Selecting an
Output Geodatabase” on page 226 for instructions on saving images to a
geodatabase.
• Image Transform: The display group on the left contains one of the transformed bands in grayscale. The transform attempts to segregate different image features into different bands. One of the transform result bands highlights changes, but which band it is varies with scene content and the amount of change; typically it is the second or third band. The display groups are dynamically linked for ease of comparison. To explore different transformed bands, use the Transform band drop-down list, then click Load Image.
• Subtractive: The display group on the left contains one of the difference
images in grayscale. The display groups are dynamically linked for ease of
comparison. To explore difference images, use the See change for drop-down
list, then click Load Image.
Select new color tables using the Color Tables list, then click Apply Color
Table. Use the Stretch Bottom and Stretch Top slider bars to adjust how the
color table is applied.
Optionally save the file. See “Saving to Image Files” on page 18 for details.
Figure 7-1: Difference in NDVI between Time 1 and Time 2; Dark Areas Indicate Areas Stripped Bare of Vegetation (left); Time 1 Image (center); and Time 2 Image (right)
Figure 7-2: Bright Pixels (left) Show Changed Areas between Time 1 (center) and
Time 2 (right)
13. When you are finished examining results, click Next in the Examine Results
panel, then click Finish to exit the Wizard.
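For the Subtractive and 2CMV methods described above, the underlying operations are straightforward band arithmetic on the coregistered images. The numpy sketch below is a rough illustration of that arithmetic, not ENVI's implementation; it assumes both inputs are single-band arrays that are already coregistered and scaled to a common brightness range (for example, after radiometric normalization).

import numpy as np

def two_color_multiview(time1, time2):
    """Stack Time #1 into red and Time #2 into green and blue.
    Unchanged pixels appear gray; changed pixels appear red or cyan."""
    return np.dstack([time1, time2, time2])

def difference_image(time1, time2):
    """Subtractive change detection: unchanged pixels difference to about zero."""
    return time2.astype(float) - time1.astype(float)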
Coregistration
Coregistration occurs within the SPEAR Change Detection and SPEAR Pan
Sharpening tools. For change detection or pan sharpening to be effective, the images
of interest must be closely aligned. The native georeferencing information that comes
with the imagery is typically not accurate enough for this purpose. Instead, you must
select tie points marking the same features on both images. ENVI warps one image
based on these tie points to match the base image.
Using the Ground Control Points dialog to select tie points is the simplest method;
however, it can be challenging in areas with few obvious features, and it is time
consuming. To assist tie point selection, ENVI automatically scans both images to
locate common features. For best results, manually provide three to five seed points
to assist ENVI in finding tie points.
Though ENVI can select tie points much faster than a human operator, you should
check automatically chosen tie points before proceeding. Automatically generated tie
points may fall on clouds or cloud shadows, on rooftops, or on other elevated objects
and are not suitable. Slight time differences between image collections may generate
sub-optimal tie points.
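Conceptually, the warp is a geometric transform fit to the tie points, and the RMS error reported for each point is the distance between where the transform places the point and where it was marked. The sketch below fits a first-order (affine) warp by least squares with numpy and reports per-point and total RMS error; it is a simplified illustration, not the warping code ENVI uses.

import numpy as np

def fit_affine(base_xy, warp_xy):
    """Least-squares affine transform mapping warp-image coordinates to base
    coordinates. base_xy and warp_xy are (n, 2) arrays of tie point locations."""
    ones = np.ones((len(warp_xy), 1))
    A = np.hstack([warp_xy, ones])            # (n, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(A, base_xy, rcond=None)
    return coeffs                             # (3, 2) affine coefficients

def rms_errors(base_xy, warp_xy, coeffs):
    """Per-point residual distance and total RMS error of the fit."""
    ones = np.ones((len(warp_xy), 1))
    predicted = np.hstack([warp_xy, ones]) @ coeffs
    per_point = np.sqrt(((predicted - base_xy) ** 2).sum(axis=1))
    total = np.sqrt((per_point ** 2).mean())
    return per_point, total

Points with large per-point errors are the ones to review or turn off first.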
• If you let ENVI select tie points automatically, it opens the Time #1 and #2 images in display groups, and the Ground Control Points Selection dialog appears listing the automatically selected points.
• To manually select seed points, click Select Seed Points. ENVI opens the
Time #1 and #2 images in display groups and the Ground Control Points
Selection dialog appears. Select three to five seed points using the steps in
“Image-to-Image Ground Control Points” on page 878.
When manually selecting seed points, you can switch the Geographic
Link toggle to On to move the cursor to the same area in both images, then
switch the Geographic Link toggle to Off to fine tune the location.
4. If you manually selected seed points, click Retrieve Points. If you need to
clear the seed points and start again, click Clear Points.
5. Click Show Advanced Options if you want to set additional parameters for
area-based image matching methods. Typically, the default settings provide the
best results, but you can edit the parameters as needed. See “Area-Based
Matching Parameters” on page 896 for parameter descriptions.
• Number of Tie Points
• Search Window Size
• Moving Window Size
• Area Chip Size
• Minimum Correlation
• Point Oversampling
• Interest Operator
6. Click Next to continue. The following appears:
• One display group contains the base image.
• One display group contains the image to warp.
• The Ground Points Selection dialog.
• If you selected automatic tie points, the points show in the display groups,
the GCP Selection dialog shows how many points exist and the Root Mean
Square (RMS) error, and the Image to Image GCP List shows the
individual points.
• The Review Tie Points panel appears.
7. Review tie points, as described in “Reviewing Tie Points” on page 582.
8. When you are finished reviewing tie points, choose one of the following for the
warp Method in the Review Tie Points panel. “Warping and Resampling
Image-to-Image” on page 904 describes each parameter.
• Polynomial
• Triangulation
• RST
9. Choose one of the following for the warp Interpolation. The interpolation is
the manner in which the image is resampled to the warped grid. “Warping and
Resampling Image-to-Image” on page 904 describes the resampling choices.
• Nearest Neighbor
• Cubic Convolution
• Bilinear
For pan sharpening, visual quality is paramount, so Cubic Convolution is the best choice.
10. Click Next. The following appears:
• One display group contains the base image
• One display group contains the warped image.
• The two display groups are dynamically linked.
• The Check Co-Registration panel.
11. See “Checking Coregistration Accuracy” on page 584 for the final steps of
coregistration.
• When using an object with height above the ground as a tie point, place the point at the base of the structure rather than at the top. Parallax caused by different viewing geometries (even for “simultaneous” panchromatic/multispectral imagery) makes elevated tie points sub-optimal.
• Distribute tie points as evenly as possible across the entire scene. Try not to
leave large areas without any tie points.
• The more tie points, the better, although there is a point of diminishing returns. Fifty tie points is a good target, and you can obtain excellent results from as few as 10, depending on the scene.
• If you used manual tie point selection:
• Use the Ground Control Points Selection dialog to pick tie points.
• See “Image-to-Image Ground Control Points” on page 878 for assistance.
For tips on fixing tie points see “Fixing Tie Points” on page 583.
• If you used automatic tie point selection:
• The tie points display automatically and are color-coded to indicate good, satisfactory, and poor RMS error values. Check the RMS Error in the Ground Control Points Selection dialog; this number should be as small as possible. Less than one is excellent, and anything less than three should produce acceptable results.
• Enlarge the Image to Image GCP List dialog so you can see the RMS
column.
• From the Image to Image GCP List dialog menu bar, select Options →
Order Points by Error to sort the tie points by the highest RMS. For tips
on fixing tie points see “Fixing Tie Points” on page 583.
• Delete the tie point: Select the tie point in the Image to Image GCP List
dialog. Click Delete. The point is removed from the dialog list, and the RMS
error is updated.
• Use SPEAR to automatically turn off bad tie points: In the Review Tie
Points panel, set the Maximum allowable RMS per GCP parameter to the
desired RMS threshold, then click Apply. The tie points with the greatest RMS
errors are turned off until no tie point has an RMS greater than the set value.
This is a quick way to remove the greatest errors. To turn the points back on,
select them in the Image to Image GCP List dialog, then click On/Off.
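The Maximum allowable RMS per GCP behavior described in the last bullet amounts to repeatedly turning off the worst point and refitting until every remaining point is within the threshold. A self-contained numpy sketch of that loop, again using a simple affine fit in place of ENVI's warp model:

import numpy as np

def filter_by_max_rms(base_xy, warp_xy, max_rms):
    """Return a boolean mask of tie points kept after iteratively turning off
    the point with the largest residual until all residuals are <= max_rms."""
    keep = np.ones(len(base_xy), dtype=bool)
    while keep.sum() > 3:                      # an affine fit needs >= 3 points
        A = np.hstack([warp_xy[keep], np.ones((keep.sum(), 1))])
        coeffs, *_ = np.linalg.lstsq(A, base_xy[keep], rcond=None)
        resid = np.sqrt(((A @ coeffs - base_xy[keep]) ** 2).sum(axis=1))
        if resid.max() <= max_rms:
            break
        worst = np.flatnonzero(keep)[np.argmax(resid)]
        keep[worst] = False                    # turn off the worst point
    return keep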
Auto-Flicker
The SPEAR Auto-Flicker tool provides a way to compare two dynamically linked
images. Auto-flicker toggles between images at a desired speed using one of three
methods: blend, flicker, or swipe. The auto-flicker playback can be saved to an
animated GIF to examine and share results.
1. Select a display method using the provided drop-down list.
2. Control the operation using Play, Stop, and the speed slider.
3. Optionally, click Save to save the operation as an animated GIF.
• Because GIFs are saved in a maximum of 256 colors, animated GIFs may
experience color loss.
• The speed at which the saved GIF animation plays in ENVI may not match
its speed in other applications.
• Animated GIFs can contain many frames and therefore can be large in
size.
Google Earth Bridge
If image thumbnails are selected, ancillary data files associated with the .kml file will also be exported.
Note
Google Earth is not supported on Solaris.
Google Earth does not directly support georeferenced images. Images are placed on
the Earth according to four corner points and a rotation defined in the .kml file. To
ensure images overlay accurately, the Google Earth Bridge georectifies input images
to a North up orientation. If available, rational polynomial coefficients (RPCs) will be
applied (via orthorectification) using the mean elevation for the image as determined
by a global elevation dataset. If RPCs are not available, the four corner points of the
image will be used as ground control points to warp the image to a North up
orientation.
You can export vectors and/or images as thumbnails and/or footprints to Google
Earth using the SPEAR Google Earth Bridge.
the .kml file. This enables the display of images collected between user-
defined start and stop dates using the Google Earth time slider. The sensor and
solar geometry information will be used to create small line segments
extending from the center of the image towards the sensor and the sun,
respectively. The sensor line will be the same color as the footprint color, and
the sun line will be colored yellow. These “geometry stubs” may be turned on
or off in Google Earth independently from the footprints and thumbnails.
Once you have selected the images for export, click Next. The Image
Properties dialog appears.
4. Select thumbnails, footprints, or both from the Output Type drop-down list.
Exporting footprints is nearly instantaneous, while exporting thumbnails may
take several minutes depending on the image size.
5. Image footprints are vector polygons depicting the spatial extent of the
selected image. Select the desired color for the footprints using the Color
drop-down list. If more than one image is being exported, the Sequential
option will cycle through the colors in the list (other than black and white) for
each footprint.
6. Select a fill type using the Fill drop-down list. Type a number in the
Transparency field or use the arrow key adjacent to the field to set the
transparency for semi-transparent fills. For a solid fill, set the transparency to
0%.
7. The Line Width field controls the width for the line around the edge of the
footprint.
8. Select an available format from the Output Format drop-down list. For PNG,
black fill pixels are set to transparent. This option is only available for
thumbnails.
9. The Image Size drop-down list indicates the approximate size of the resulting
thumbnail.
10. Google Earth can import 8-bit grayscale or 24-bit color images. Set the stretch
parameters using the Thumbnail Parameters section of the dialog.
11. Enable the Do not export any vectors option if you do not want to export
vectors and click Next. To export vectors, select the vector layers to export and
click Next.
• Vector layers opened in ENVI will be available in the Select Layers to
Export list. Use Refresh List to refresh the list of vectors open in ENVI.
• Select the vector layers to export and click Next. The Vector Properties
dialog appears.
• Each vector layer selected for output is represented by one column in the
Vector Parameters table. Click on the table rows to set properties for each
vector layer, then click Next. The Select Output dialog appears.
12. Click Select Output File to set output options. If you export thumbnails, one
graphic per thumbnail will be written to the directory specified. Thumbnails
will need to be in the same directory as the .kml file that refers to them.
Enable the Open in Google Earth when done option to automatically load the
file(s) to Google Earth.
Note
Google Earth also supports the .kmz format, which is a zipped file containing the .kml file and the associated images. Using a zip file tool, zip the files, then replace the .zip extension with .kmz. Google Earth can open this file format directly, and it is a convenient way to keep multiple related files together. (A short scripting sketch of this packaging appears after this procedure.)
13. Processing is complete; click Finish to exit the Wizard. If exported images had
collection day and time metadata, you may need to adjust the time slider in
Google Earth to display all of the images.
Note
Because Google Earth super overlays tile images at multiple resolution
levels, they often generate a large number of files. It is recommended that
upon selecting the .kml output name, you create a new directory to contain
these overlay files.
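The earlier Note on .kmz packaging can also be scripted. Below is a minimal Python sketch using the standard zipfile module; the file names are placeholders for illustration only.

import zipfile

def make_kmz(kml_path, image_paths, kmz_path):
    """Bundle a .kml file and its thumbnail images into a single .kmz archive."""
    with zipfile.ZipFile(kmz_path, "w", zipfile.ZIP_DEFLATED) as kmz:
        kmz.write(kml_path)
        for img in image_paths:
            kmz.write(img)   # thumbnails must sit alongside the .kml in the archive

# Hypothetical file names for illustration:
make_kmz("export.kml", ["thumb_1.png", "thumb_2.png"], "export.kmz")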
setting the coregistration parameters. Click Next. The next two dialogs, the Review Tie Points dialog and the Check Coregistration dialog, allow you to check the accuracy of the coregistration and to fix tie points if needed. When complete, click Next on each dialog.
8. Processing is complete; click Finish to exit the Wizard.
SPEAR Independent Components Analysis
8. The Number of Output ICs field defaults to the number of input bands. Fewer
IC bands will speed up processing but may exclude minor features. It is
recommended that you accept the default for multispectral imagery. For
hyperspectral imagery, you can modify this number to speed up processing.
9. If you did not apply masking at file input, enter the Sample X/Y Resize
Factors in the appropriate fields to sub-sample the data when calculating the
IC transform. Sub-sampling reduces the IC sample size to fit into memory and
increases computational speed. This option is valid only on images with an x,y
size greater than 64. A setting of 1 (the default) does not change the data. For
example, on an image with an x,y size greater than 64, a resize factor of 0.5
will use every other pixel in the statistics calculations and the IC sample.
Setting this value to a small number could lose features of interest, as those pixels may be discarded. The upper limit is 1.0; the lower limit is whatever value keeps the x,y size after downsampling at 64 or greater. Required processing memory and available memory are displayed on the Wizard screen.
10. Optionally, click Show Advanced Parameters to define additional settings.
Detailed information on these settings can be found in “Independent
Components Analysis” on page 509.
11. Click Next. The Examine Results dialog appears.
The original image in natural color and the IC results in grayscale are opened
in the display and are dynamically linked for comparison.
12. Cycle through the IC bands using the ICA Result drop-down list or the Show
Prev and Show Next buttons. Entries appended with “Dark” and “Bright”
highlight the darkest or brightest pixels in the chosen IC band. Use the
Animate ICA Bands option to examine the results. For detailed information
on animation in ENVI, see “Creating Animations” on page 140.
13. Optionally, click on the dotted bar in the histogram window or enter values in
the fields at the top of the histogram window to explore different stretching
options. Use Auto-Flicker to examine your results (for detailed information on
using this tool, see “Auto-Flicker” on page 585).
14. Click Next after examining your results. Processing is complete; click Finish
to exit the Wizard.
Lines of Communication (LOC) - Roads
This tool highlights roads to aid manual digitizing. Depending on your level of knowledge of road locations within the scene, you can process the scene using one of two different workflows:
• Use the supervised spectral processing workflow if you can manually identify
roads of different types in the image. Road types that have been trained are
mapped throughout the scene using spectral algorithms.
• Use the unsupervised spectral processing workflow when you cannot manually
identify roads of different types in the image. This method uses Principal
Components Analysis and/or a red soil algorithm to highlight roads.
9. In the Digitize LOCs panel, select the result to display. For example, if you select Principal Components from the Examine Result for drop-down list, a list of PC images to select from appears (one per input spectral band).
10. Click Load image type to load the image. The following appear:
• One display group contains a natural color composite of the input image,
to use as reference.
• One display group contains the selected spectral processing result. This
display group is dynamically linked to the natural color window. Use the
dynamic overlay tool to see how the spectral processing result relates to
the natural color reference image.
• Optionally, click on the dotted bar in the histogram window or enter values
in the fields at the top of the histogram window to explore different
stretching options. For MF/SAM Ratio, MF, and Red Soil results, roads
are always bright. For SAM, roads are dark. For PC, you must manually
load each image to search for the one that highlights roads. The roads may
be either bright or dark. Use Auto-Flicker to examine your results (for
detailed information on using this tool, see “Auto-Flicker” on page 585).
• The Vector Parameters dialog, containing the three default vector layers
(Primary, Secondary, and Tertiary).
Figure 7-3: Unstretched Image (right) and Stretched Image (left) Highlighting Red Soil Roads (Imagery Courtesy of DigitalGlobe)
11. Use the Vector Parameters dialog to add new road vectors. See “Adding New
Vectors” on page 1019 for details. Assign the roads within the image to the
three default vector layers as desired.
• To create a new vector layer for digitization, click Create in the Digitize LOCs panel, then enter a descriptive layer name and a filename.
• To import an existing vector layer so that new LOCs may be added to it,
click Import. (The vector layer to import must already be open in ENVI.)
• To remove a vector layer, select the layer to remove, then click Remove.
You cannot remove the default layers.
• At any time during digitizing, you can use the histogram to stretch the
display group containing the spectrally processed result, or display a
different result using the controls in the Display Results dialog.
• Vectors are saved automatically each time the displayed image changes, or
before you move to the next step.
12. When digitizing LOCs is complete, click Next. The Export Vectors panel
appears.
13. Optionally, select vector layers to export to separate files from the Vector
Output list. The vectors you digitized are stored in separate files in ENVI
vector file (.evf) format. These vectors remain in the projection and datum of
the input image. Each exported vector will result in the following files being
created:
None/Already Corrected
Select this option if you do not want atmospheric correction, or if the input file is
already atmospherically corrected.
pixels, enter the value of these pixels for the Data ignore value and enable the
Use ignore data value check box.
• User Values: Click Edit Values and manually enter values to subtract from
each band in the Enter User Values dialog.
Log Residuals
The log residuals method produces a pseudo-reflectance dataset by dividing each
pixel’s spectrum by the spectral geometric mean and the spatial geometric mean. No
user input is required to run this method. Log residuals is generally effective for
analyzing absorption features present in hyperspectral data.
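In other words, each pixel's value is divided by the geometric mean over its bands and by the geometric mean of that band over all pixels, which is equivalent to subtracting both means in log space. A compact numpy sketch of that idea (a simplification, not ENVI's implementation) follows; it assumes the input values are positive radiances.

import numpy as np

def log_residuals(cube):
    """cube: (rows, cols, bands) array of positive values. Divide each spectrum
    by its spectral geometric mean (per pixel) and by the spatial geometric
    mean (per band), working in log space."""
    logs = np.log(cube.reshape(-1, cube.shape[2]).astype(float))
    spectral_mean = logs.mean(axis=1, keepdims=True)   # per-pixel geometric mean
    spatial_mean = logs.mean(axis=0, keepdims=True)    # per-band geometric mean
    residuals = logs - spectral_mean - spatial_mean
    return np.exp(residuals).reshape(cube.shape)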
3. Create an ROI for both a dark and bright target, then select pixels with
available ground truth spectra.
4. When the ROIs are defined for dark and bright targets, click Select Data ROI
on the Atmospheric Correction panel. The ROI Selection dialog appears.
5. Select the appropriate ROI from the list, then click OK.
6. In the Atmospheric Correction panel, click Select Library Spectrum. The
Library Selection dialog appears.
By default, the spectral libraries in ENVI are available. These libraries contain
generic spectra for many man-made materials, vegetation, soils, rocks, and
minerals. To use a custom spectral library, it must already be loaded in the Available Bands List.
7. Select the spectral library from the list of open files, then click OK. The
Spectrum Selection dialog displays.
8. Select the spectrum to use, then click OK. Select the spectral library containing ground truth spectra, then select the ground truth spectrum within the library for the designated target.
Figure 7-7: Unstretched Image (right) and Stretched Image (left) Highlighting
Water in the Spectral Processing (Imagery Courtesy of DigitalGlobe)
11. Use the Vector Parameters dialog to add new water vectors. See “Adding New
Vectors” on page 1019 for details.
• To create a new vector layer for digitization, click Create in the Digitize LOCs panel, then enter a descriptive layer name and a filename.
• To import an existing vector layer so that new LOCs may be added to it,
click Import. (The vector layer to import must already be open in ENVI.)
• To remove a vector layer, select the layer to remove, then click Remove.
You cannot remove the default layers.
• At any time during digitizing, you can use the histogram to stretch the
display group containing the spectrally processed result, or display a
different result using the controls in the Display Results dialog.
• Vectors are saved automatically each time the displayed image changes, or
before you move to the next step.
12. When digitizing LOCs is complete, click Next. The Export Vectors panel
appears.
13. Optionally, select vector layers to export to separate files from the Vector
Output list. The vectors you digitized are stored in separate files in ENVI
vector file (.evf) format. These vectors remain in the projection and datum of
the input image. Each exported vector will result in the following files being
created:
• ENVI vector file (.evf) in the native image projection
• ENVI vector file (.evf) in Geographic/WGS84 projection
SPEAR Metadata Browser
• Solar Az/Elev: The azimuth and elevation angle (degrees above horizon) to the sun from the center of the image.
5. The Difference table displays image pair suitability for change detection analysis. The table lists the angular difference between the sensor vectors and between the solar vectors (in that order). Cells are color-coded by the sum of the two angular differences (a sketch of such an angular difference calculation appears after this procedure):
• 0-30 degrees: Highly suitable for change detection (green).
• 30-60 degrees: Moderately suitable for change detection (yellow).
• 60+ degrees: Poorly suited for change detection (red).
6. Click on a cell in the Difference table to display the Time Difference between
two collections.
7. When a cell for a valid image pair is selected in the Difference table, a 3D
graphic at the bottom of the SPEAR dialog updates to show the relative sensor
and solar geometries. Click and drag the graphic to change the view
perspective. Green cubes indicate sensor positions, yellow spheres indicate
solar positions, and red and blue lines indicate which sensor and solar vector
pair belong to each other.
To view a single image, select the row or column header for that image.
8. Optionally, right-click and select Save Image to File to save the image to a
graphic file.
9. When you are finished viewing the metadata, click Finish to exit the Wizard.
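The angular differences in the table above compare pointing directions given as azimuth and elevation. A small Python sketch of that comparison, converting azimuth and elevation to unit vectors and taking the angle between them (a generic geometric computation, not code taken from the Wizard):

import numpy as np

def az_el_to_vector(az_deg, el_deg):
    """Unit vector for an azimuth (degrees from north, clockwise) and elevation."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.sin(az) * np.cos(el),    # east component
                     np.cos(az) * np.cos(el),    # north component
                     np.sin(el)])                # up component

def angular_difference(az1, el1, az2, el2):
    """Angle in degrees between two pointing directions."""
    v1, v2 = az_el_to_vector(az1, el1), az_el_to_vector(az2, el2)
    return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))

# Example: two sensor views 20 degrees apart in azimuth at the same elevation
print(angular_difference(120.0, 60.0, 140.0, 60.0))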
SPEAR Orthorectification
An orthorectified image (or orthophoto) is one where each pixel represents a true
ground location and all geometric, terrain, and sensor distortions have been removed
to within a specified accuracy. Orthorectification transforms the central perspective of
an aerial photograph or satellite-derived image to an orthogonal view of the ground,
which removes the effects of sensor tilt and terrain relief. Scale is constant throughout
the orthophoto, regardless of elevation, thus providing accurate measurements of
distance and direction. Geospatial professionals can easily combine orthophotos with
other spatial data in a geographic information system (GIS) for city planning,
resource management, and other related fields.
The SPEAR Orthorectification Wizard allows you to orthorectify images using
rational polynomial coefficients (RPCs), elevation and geoid information, and
optional ground control points (GCPs). RPCs and elevation information do not
provide enough details to build a rigorous model representing the path of light rays
from a ground object to the sensor. Use the ENVI Orthorectification Module for
rigorous orthorectification. This add-on module requires an additional license in your
installation; contact your ENVI sales representative to obtain a license.
1. From the ENVI main menu bar, select Spectral → SPEAR Tools →
Orthorectification. The SPEAR Orthorectification Wizard displays the File
Selection panel.
2. Click Select Image to Ortho, choose a file, then click OK. The input image
must contain RPCs and can be in any format readable by ENVI.
3. To optionally process only a portion of the scene, click Select Subset. A small
Select Spatial Subset dialog appears.
4. Click Spatial Subset. The standard Select Spatial Subset dialog appears. See
“Spatial Subsetting” on page 215. When finished, click OK to return to the
File Selection panel.
5. Select an elevation source. In flat areas or when only general correction is
needed, select Z Value. Use Auto Retrieve to retrieve the mean elevation for
the area covered by the input image. Click Select Elevation File to select a file
with elevation data. The elevation file must contain map information and can
be in any format readable by ENVI.
6. By default, output files are saved to the same directory and use the same
rootname as the input file, minus any extension. Output files are appended with
a unique suffix. To change the directory and/or root filename, click Select
Output Root Name.
7. Click Next. The Select Method dialog appears.
8. Select Normal or Fast mode. Normal mode is more accurate and calculates an orthorectified coordinate for every pixel in the image. Fast mode calculates an orthorectified coordinate for pixels lying on a grid with user-defined spacing, then triangulates the position of all of the points within each grid cell. In relatively flat areas, Fast mode will produce accurate results with faster processing.
9. Optionally, collect GCPs to refine the georeferencing accuracy of the
orthorectified image. For detailed information on collecting GCPs, see
“Collecting Ground Control Points (Image-to-Image)” on page 879.
10. Click Next. The Orthorectification Parameters dialog appears.
11. Click Show Ortho Params to optionally modify the following default
parameters:
• X Pixel Size and Y Pixel Size values default to the resolution of the input
file.
• The Image Resampling Method determines how pixel values are computed when the input image is converted from its current orientation into the new orientation. The choices are Nearest Neighbor, Bilinear, and Cubic Convolution. The default is Bilinear.
• The Image Background allows you to set a value for pixels in the output
image that are outside the boundary of the source image.
• DEM Resampling defines the method to be used to resample the DEM to
the resolution of the input or base image.
• DEM to Geoid Offset is a constant value that is added to every value in
the DEM to account for the difference between a spheroid mean sea level
(used in most available DEM data) and the constant geopotential surface
known as the geoid. The RPC coefficients are created based on geoid
height, and this information must be used to provide accurate
orthorectification.
For example, if the geoid is 10 m below mean sea level at the location of
your image, enter a value of -10.
Many institutions doing photogrammetric processing have their own
software for geoid height determination. You can also obtain software
from NGA, USGS, NOAA, and other sources. A geoid height calculator is
located at:
http://www.ngs.noaa.gov/cgi-bin/GEOID_STUFF/geoid99_prompt1.prl.
• Grid Spacing is available for Fast mode. Fast mode calculates an orthorectified coordinate for pixels lying on a grid with this spacing value, then triangulates the position of all of the points within each grid cell.
12. Click Next. The Examine Results dialog appears.
The orthorectified image is displayed. If GCPs were selected, the base image
will be displayed and geographically linked to the orthorectified image so that
you can examine the georeferencing accuracy.
13. When you are finished examining your results, click Finish to exit the Wizard.
SPEAR Pan Sharpening
The SPEAR Pan Sharpening tool sharpens a lower resolution multispectral image using a higher resolution dataset. The resulting product should only serve as an aid to literal analysis and not for further spectral analysis.
1. From the ENVI main menu bar, select Spectral → SPEAR Tools → Pan
Sharpening. The SPEAR Pan Sharpening Wizard displays the File Selection
panel.
2. Click Select High Res File, select a panchromatic image, select the input band
to use, then click OK.
3. Click Select Low Res File, select a multispectral image, select the input band
to use, then click OK. Simultaneous image collections are not required but will
produce better results.
4. To optionally process only a portion of the scene, select the High Res or Low
Res image as the subset source, then click Select Subset. A small Select
Spatial Subset dialog appears.
5. Click Spatial Subset. The standard Select Spatial Subset dialog appears. See
“Spatial Subsetting” on page 215. When finished, click OK to return to the
File Selection panel.
6. By default, output files are saved to the same directory and use the same
rootname as the input file, minus any extension. Output files are appended with
a unique suffix. To change the directory and/or root filename, click Select
Output Root Name.
Figure 7-10: Pan Sharpening Results using Different Methods (IKONOS Imagery
Courtesy of GeoEye, Copyright 2007)
Relative Water Depth
2. Click Select Input File, choose a file, then click OK. The input file must be multispectral, with at least blue, green, and near infrared bands.
3. If wavelengths are not embedded in the image header, a series of Select Band
dialogs appear. Select the Blue band, Green band, Red band, and NIR band,
then click OK after each selection.
4. To optionally process only a portion of the scene, click Select Subset. A small
Select Spatial Subset dialog appears.
5. Click Spatial Subset. The standard Select Spatial Subset dialog appears. See
“Spatial Subsetting” on page 215. When finished, click OK to return to the
File Selection panel.
6. By default, output files are saved to the same directory and use the same
rootname as the input file, minus any extension. Output files are appended with
a unique suffix. To change the directory and/or root filename, click Select
Output Root Name.
7. Click Next. The Atmospheric Correction panel appears.
8. Use the steps in “SPEAR Atmospheric Correction” on page 598 to optionally
perform atmospheric correction. For calculating relative water depths, it is
typically best to not perform atmospheric correction. Atmospheric correction
of littoral or marine areas often alters the data such that calculating water
depths may produce anomalous and unsatisfactory results. Unless there is a
specific need to perform atmospheric correction and the implications are
understood, it is best to skip this step.
9. Click Next. The Method Selection panel appears.
10. Select the desired Bathymetry method:
• Log Ratio Transform
• Principal Component: “Principal Component Analysis” on page 504
describes the PC transform
• Independent Components: “Independent Components Analysis” on
page 509 describes the IC transform
Log Ratio Transform typically produces better results (a rough sketch of a log-ratio depth estimate appears after this procedure). If you use Principal Components, you need to examine each resulting Principal Component image to find the one corresponding to water depth. Even then, water depth may not be entirely decorrelated from bottom albedo or other sources of error.
11. If you selected Log Ratio Transform, click Show Advanced Options to see
additional parameter settings. The following are available:
• Median Filter: The default setting is 3x3 to remove high frequency noise
that often occurs in the water depth results. If desired, select a different
kernel size for the filter or turn filtering off from the drop-down list.
Setting the median filter to larger sizes creates smoother results, but may
smooth over small submerged features.
• Calibrate to absolute depth: Enable this check box to calibrate the
relative depths to absolute depths by using ground truth information. A
display group opens and a Calibration Points table for the ground truth
points appears in the Method Selection panel. You need to add at least
three ground truth points.
To enter ground truth points manually, move the cursor to the pixel with
the known depth. In the Method Selection panel, click Add Current
Location as New Point. A new row is added to the Calibration Points
table with the column and row location of the selected pixel. Select the
value in the Depth column and enter the depth value, in meters. Repeat
this process for each ground truth point.
You can also import ground truth points from ASCII files. The ASCII file
must contain three columns: x coordinate, y coordinate, and depth. The x
and y coordinates may be in any defined map projection, such as
Geographic or UTM, and the columns may be in any order. To import a
file, click Import ASCII in the Method Selection panel. Select the text file
containing the ground truth data. A dialog appears asking you to identify the three columns and to select the map projection in which the x and y coordinates are defined. When finished, all the points in the ASCII file that fall within the image’s bounds are entered into the table.
Figure 7-11: Example Water Depths Using Different Median Kernel Sizes,
Left = None, Center = 5x5, Right = 13x13 (Imagery Courtesy of DigitalGlobe)
Figure 7-13: Exponential Models Using SQRT(Y) (left) and Equal (1.0) (right) Measurement Errors
13. Click OK in the Absolute Depths Calibration dialog. The Examine Results
panel appears. See “Examining the Relative Water Depth Results” for details.
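For readers curious about the arithmetic behind the Log Ratio Transform option, one common log-ratio formulation in the bathymetry literature divides the log of a blue band by the log of a green band to obtain a relative depth, which can then be calibrated to absolute depth with a linear fit to at least three ground truth points. The numpy sketch below illustrates that general approach; it is not necessarily the exact model ENVI applies, and the scaling constant n is a placeholder.

import numpy as np

def relative_depth_log_ratio(blue, green, n=1000.0):
    """Relative water depth from a log ratio of two bands (values must be > 0).
    Larger ratios generally correspond to deeper water."""
    return np.log(n * blue.astype(float)) / np.log(n * green.astype(float))

def calibrate_to_absolute(relative, samples):
    """Fit relative depths to known depths at >= 3 ground truth points.
    samples: list of (row, col, depth_in_meters) tuples."""
    rel = np.array([relative[r, c] for r, c, _ in samples])
    depth = np.array([d for _, _, d in samples])
    gain, offset = np.polyfit(rel, depth, 1)      # simple linear calibration
    return gain * relative + offset

A median filter, as described for the Median Filter option above, would typically be applied to the relative depth image before calibration.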
3. To load a water depth image with a color table applied, select the Color Table
tab. See “Color Table” on page 621 for details.
4. To load a density sliced image with the default parameters, select the Density
Slice tab. A new display group opens with the density sliced image. The new
display group is dynamically linked to the reference image display group. See
“Density Slice” on page 623 for details about the density slice settings.
5. Use Auto-Flicker to examine your results (for detailed information on using
this tool, see “Auto-Flicker” on page 585).
6. When you are finished examining results, click Next in the Examine Results
panel, then click Finish to exit the Wizard.
Color Table
1. Select a color table from the Color Tables list, then click Apply Color Table.
A new display group opens with the colorized image. The new display group is
dynamically linked to the reference image display group.
2. To preview new color tables, select the desired color table in the list, then click
Apply Color Table.
3. Use the Stretch Bottom and Stretch Top slider bars to change the way the
color table is applied. See Figure 7-14 for an example. Move the slider bar
positions to reverse the color table order if needed.
Figure 7-14: Changing Color Table Appearance with Slider Bars (Imagery
Courtesy of DigitalGlobe)
4. To save the displayed color table image to a graphics file suitable for use in a
briefing or report, click Save to File. The Output Display to Image File dialog
appears. See “Saving to Image Files” on page 18 for details.
5. To create an output file showing the applied color table overlaid on the
reference image, click Create Overlay Mosaic. Non-water pixels (black in the
color table image) are transparent, allowing the underlying reference image to
show through. ENVI prompts you to select an output filename, then adds the
mosaic to the Available Bands List.
6. To export the overlay mosaic to a graphics file suitable for use in a report or
brief, load the image into a display group, then select File → Save Image
As → Image File. The Output Display to Image File dialog appears. See
“Saving to Image Files” on page 18 for details.
7. To export the overlay mosaic to a geodatabase, click Export Mosaic to
ArcGIS Geodatabase. This button appears below the color table list. The
Select Output Geodatabase dialog appears. See “Selecting an Output
Geodatabase” on page 226 for further instructions.
Density Slice
1. Select the Base Image to use from the drop-down list, then click Load Image.
A new display group opens with the density slice, dynamically linked to the
base image type you selected.
2. There are four display ranges available for use. Default depth ranges are
general rules of thumb that apply in many, but not all, cases. Adjust the ranges
to suit your particular data by entering new values or using the up/down
arrows. The ranges are as follows:
• Very Shallow
• Shallow
• Moderate
• Deep
The numbers shown for each range indicate the bottom depth threshold. For example, the default Shallow range includes pixels with a depth between 3.0 and 10.0 meters when calibration to absolute depths is performed.
The default ranges differ depending on whether or not the Bathymetry was calibrated to absolute depths. If the depths are relative, the results range from 0 to 1; otherwise, the results span the range of the calibrated depths.
3. Adjust the color for each range by right-clicking on the color box and selecting
a new color.
4. To turn off a range to show the Base Image beneath it, clear the On/Off check
box for that particular range.
Figure 7-16: Natural Color Reference Image (left), Density Sliced Image (center), with
corresponding parameters (right) (Imagery Courtesy of DigitalGlobe)
5. To restore the Density Slice parameters to their original state, click Restore
Defaults in the Examine Results panel.
6. To save the density slice image to a graphics file suitable for use in a briefing
or report, click Save to File in the Examine Results panel. The Output Display
to Image File dialog appears. See “Saving to Image Files” on page 18 for
details. Density slices may be exported to shapefiles or regions of interest
(ROIs) by clicking the buttons provided.
Spectral Analogues
The Spectral Analogues tool maps the occurrence of a desired material throughout an image. Spectral Analogues uses spectral matching algorithms that compare the spectrum of each pixel to the mean spectrum of user-specified training pixels. You
must know at least one location in the image where the desired material exists,
usually through visual observation or prior knowledge, so that it can be used for
training. The Spectral Analogues tool is designed for use with multispectral data.
1. From the ENVI main menu bar, select Spectral → SPEAR Tools → Spectral
Analogues. The ENVI Spectral Analogues Wizard displays the File Selection
panel.
2. Click Select Input File, choose a file, then click OK. The input image should
be a multispectral file in any format readable by ENVI.
3. To optionally process only a portion of the scene, click Select Subset. A small
Select Spatial Subset dialog appears.
4. Click Spatial Subset. The standard Select Spatial Subset dialog appears. See
“Spatial Subsetting” on page 215. When finished, click OK to return to the
File Selection panel.
5. By default, output files are saved to the same directory and use the same
rootname as the input file, minus any extension. Output files are appended with
a unique suffix. To change the directory and/or root filename, click Select
Output Root Name.
6. Click Next. The Atmospheric Correction panel appears.
7. Use the steps in “SPEAR Atmospheric Correction” on page 598 to perform
atmospheric correction. Dark object subtraction provides good results with
minimal user input.
8. When atmospheric correction is complete, click Next. A natural color
composite image is loaded into a display group, the ROI dialog appears, and
the Select Training Pixels SPEAR dialog appears.
9. Use the ROI Tools dialog to add pixels to the ROIs. See “Defining ROIs” on
page 323 for details. You can create new ROIs as needed through the ROI Tool
dialog, and populate them with pixels that represent some other class. Choose
multiple pixels for each ROI to represent variability within that type.
10. Click Next. The ROI Selection dialog appears.
11. Select one or more ROIs to map, then click OK. The Spectral Processing panel
appears.
12. Select one or more Spectral Processing Parameters:
• Spectral Angle Mapper: “Applying Spectral Angle Mapper
Classification” on page 412 describes the SAM classification method.
• Matched Filter: “Using Matched Filtering” on page 776 describes the MF
classification method.
• MF/SAM Ratio: Suppresses false positives that may be present in one method but not the other, while enhancing true positives. For example, a pixel containing water has a high MF value and a low SAM value; dividing the high value by the low value yields a very large ratio, enhancing the positive result. Conversely, if the MF result has a high value for a false positive but SAM correctly maps it as non-water (a high SAM value), the high value divided by the high value yields a smaller ratio, suppressing the false positive.
• Normalized Euclidean Distance: Calculates the distance between two vectors in the same manner as Euclidean Distance, but normalizes the vectors first by dividing each vector by its mean. (A brief sketch of these measures appears at the end of this section.)
Note
You cannot disable the Spectral Angle Mapper and Matched Filter options
when MF/SAM Ratio is enabled.
Figure 7-17: Unstretched Rule Image (left), Stretched Rule Image (right) for
SAM Results Highlighting Aircraft and Spectrally Similar Materials
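As a rough numpy illustration of two of the measures listed in step 12 (not ENVI's implementation), the sketch below computes the spectral angle between a pixel spectrum and the training mean, and the normalized Euclidean distance in which each vector is first divided by its mean.

import numpy as np

def spectral_angle(pixel, target):
    """SAM: angle (radians) between a pixel spectrum and the target mean spectrum."""
    cos = np.dot(pixel, target) / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def normalized_euclidean_distance(pixel, target):
    """Normalize each vector by its mean, then take the Euclidean distance."""
    p = pixel / pixel.mean()
    t = target / target.mean()
    return np.linalg.norm(p - t)

Smaller angles and distances indicate closer matches. The MF/SAM Ratio option divides the matched filter score by the spectral angle, so a detection must score well on both measures to remain strong.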
TERCAT
The Terrain Categorization (TERCAT) tool creates an output product in which pixels
with similar spectral properties are clumped into classes. These classes may be either
user-defined, or automatically generated by the classification algorithm. The
TERCAT tool provides all of the standard ENVI classification algorithms, plus an
additional algorithm called Winner Takes All.
1. From the ENVI main menu bar, select Spectral → SPEAR Tools →
TERCAT. The ENVI TERCAT Wizard displays the File Selection panel.
2. Click Select Input File, choose a file, then click OK. The input image should
be a multispectral file in any format readable by ENVI.
3. To optionally process only a portion of the scene, click Select Subset. A small
Select Spatial Subset dialog appears.
4. Click Spatial Subset. The standard Select Spatial Subset dialog appears. See
“Spatial Subsetting” on page 215. When finished, click OK to return to the
File Selection panel.
5. By default, output files are saved to the same directory and use the same
rootname as the input file, minus any extension. Output files are appended with
a unique suffix. To change the directory and/or root filename, click Select
Output Root Name.
6. Click Next. The Atmospheric Correction panel appears.
7. Use the steps in “SPEAR Atmospheric Correction” on page 598 to perform
atmospheric correction.
8. When atmospheric correction is complete, click Next. The Method Selection
panel appears.
9. Select the classification methods to use:
• Unsupervised: These methods do not require training data to create a
TERCAT, though the classes will not be labelled. “Unsupervised
Classification” on page 425 provides descriptions for each method.
• Supervised: These methods require you to train the algorithms by creating
ROIs that include representative pixels of the desired classes. “Supervised
Classification” on page 401 provides descriptions for each method. The Winner Takes All (TERCAT) classification method assigns each pixel to the class chosen most often among all of the supervised methods performed. You must select at least two other supervised methods to run Winner Takes All.
10. When method selection is complete, click Next. The Select Training Pixels
panel appears.
• If you selected any supervised classification methods, a natural color
composite image is loaded into a display group, and the ROI dialog
appears.
• If you selected only unsupervised classification methods, you do not need
to select training pixels. Click Next to go to the TERCAT Parameters panel
(step 13).
11. For supervised classification, use the ROI Tools dialog to add pixels to the
ROI. See “Defining ROIs” on page 323 for details. Create a new ROI for each
class to map. Try to collect training pixels uniformly across the image. Give
the ROIs descriptive names and representative colors to make them easy to
identify.
12. For supervised classification, click Next. The ROI Selection dialog appears.
Select one or more ROIs to map, then click OK. The TERCAT Parameters
panel appears.
13. Each TERCAT method has its own set of advanced parameters that you can adjust to change how the algorithm runs. To view the advanced parameters, click Show Advanced Options, then click on the desired tab. In most cases, using the default values produces satisfactory results. For more information on the parameters, refer to “Classification Tools” on page 399 for the TERCAT classification method of interest.
For Winner Takes All, you can apply a weighting to each selected supervised classification method. By default, each method is set to 1.0, which means all methods are treated equally. Optionally, adjust the weights so that favored algorithms have larger values.
14. To save the rule images created during TERCAT processing, select the Output rule Images? check box, which is available when you show the advanced parameters. The rule images are opened in the Available Bands List before processing, but are not used again by the Wizard.
15. Click Next. The Examine Results panel appears. See “Examining the
TERCAT Results” on page 629 for details.
16. When you are finished examining results, click Next in the Examine Results
panel, then click Finish to exit the Wizard.
Figure 7-18: Reference Image (left) and Two Results Display Groups (center and
right) for comparison of Results (Imagery Courtesy of DigitalGlobe)
The Winner Takes All result is determined by choosing the class for each pixel by the
majority vote of all supervised TERCAT methods you chose to run. For example, if
three TERCATs determined a pixel was asphalt and two determined it was concrete,
the Winner Takes All result is asphalt. In the case of a tie, the majority class of the neighboring pixels is used to classify the pixel in question.
In addition to the TERCAT product, ENVI creates a Winner Takes All (Probability) layer. This layer indicates the level of confidence in each pixel’s classification. In the asphalt/concrete example, the probability would be 3 / (3 + 2) = 0.6.
The probability for a pixel in which all TERCATs agreed would be 1.0. Therefore,
brighter pixels in the probability layer indicate high confidence, and dark pixels
indicate low confidence. If many low confidence pixels remain, you could select
more training pixels to better represent the low confidence land cover, and you could
additionally add a new land cover class.
Figure 7-19: Reference Image (left), Winner Takes All (TERCAT) (center), and
Winner Takes All (Probability) (right) (Imagery Courtesy of DigitalGlobe)
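A compact numpy sketch of the vote and probability described above, assuming the class maps produced by the individual supervised methods are stacked into a single integer array. This is a simplification, not ENVI's code; in particular, ties here fall to the lowest class code, whereas the Wizard consults the neighboring pixels.

import numpy as np

def winner_takes_all(class_maps):
    """class_maps: (n_methods, rows, cols) array of integer class codes.
    Returns the majority-vote class and the fraction of methods that agreed."""
    n_methods = class_maps.shape[0]
    n_classes = int(class_maps.max()) + 1
    # Count, per pixel, how many methods voted for each class
    votes = np.stack([(class_maps == c).sum(axis=0) for c in range(n_classes)])
    winner = votes.argmax(axis=0)                        # majority class per pixel
    probability = votes.max(axis=0) / float(n_methods)   # e.g. 3 / (3 + 2) = 0.6
    return winner, probability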
• Sieve Classes: “Sieving Classes” on page 483 describes this process. Set
the Group Min Threshold value to the maximum sized cluster to sieve
from the results. The Number of Neighbors value indicates the type of
neighborhood examined for determining clusters. Sieved pixels are
assigned to the Unclassified class. The sieving results are appended to the
lists in the Display Results section.
• Class Statistics: “Statistics” on page 274 describes this process. This
method creates a report that shows general statistics about the selected
TERCAT(s) and classes, including mean spectra, area covered, and so
forth.
• Class Overlay: “Overlaying Classes” on page 485 describes this process.
The new image is loaded into a new display group.
4. Click Go to begin processing.
Figure 7-21: Class Statistics Dialog Showing Mean Spectrum for Each Class
and General Statistical Information, Such as Area Covered by Each Class
Figure 7-22: Example Class Overlay Image with Trees, Roads, and Reservoir
Class Overlaid on Natural Color Reference Image (Imagery Courtesy of
DigitalGlobe)
SPEAR Vegetation Delineation
5. By default, output files are saved to the same directory and use the same
rootname as the input file, minus any extension. Output files are appended with
a unique suffix. To change the directory and/or root filename, click Select
Output Root Name.
6. Click Next. The Atmospheric Correction panel appears.
7. Use the steps in “SPEAR Atmospheric Correction” on page 598 to perform
atmospheric correction. Dark object subtraction produces good results with
minimal user input.
8. When atmospheric correction is complete, click Next. The Examine Results
panel appears. See “Examining the Vegetation Delineation Results” for details.
9. When you are finished examining results, click Next in the Examine Results
panel, then click Finish to exit the Wizard.
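Dark object subtraction assumes that the darkest pixels in each band should be nearly zero, so any offset they carry is attributed to atmospheric scattering and subtracted from the whole band. The NumPy sketch below is a minimal illustration of that idea, not the SPEAR implementation; the radiance cube and the use of the 1st percentile as the dark value are assumptions.

```python
import numpy as np

# Hypothetical radiance cube with shape (bands, rows, cols).
rng = np.random.default_rng(0)
radiance_cube = rng.uniform(50, 500, size=(4, 100, 100))

# Estimate the "dark object" value per band. A low percentile is often used
# instead of the strict minimum to reduce sensitivity to bad pixels.
dark_value = np.percentile(radiance_cube, 1, axis=(1, 2))

# Subtract the per-band offset and clip negatives to zero.
corrected = np.clip(radiance_cube - dark_value[:, None, None], 0, None)

print(dark_value)
print(corrected.min(axis=(1, 2)))  # near zero in each band after correction
```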
3. Use the Stretch Bottom and Stretch Top slider bars to change the way the
color table is applied. See Figure 7-23 for an example. Move the slider bar
positions to reverse the color table order if needed.
Figure 7-23: Changing Color Table Appearance with Slider Bars (Imagery
Courtesy of DigitalGlobe)
4. To save the displayed color table image to a graphics file suitable for use in a
briefing or report, click Save to File. The Output Display to Image File dialog
appears. See “Saving to Image Files” on page 18 for details.
5. Pixels with no vegetation are masked using the Veg Mask parameter so that
analysis is focused on vegetated pixels. Change the Veg Mask threshold if
needed; higher values mask more pixels, and lower values mask fewer pixels.
You can change the color of the mask by right-clicking the color patch and
selecting a new color. Turn off the mask by clearing the On/Off check box.
Figure 7-24: Example of Adjusting Vegetation Mask by Changing the Veg Mask
Threshold (Imagery Courtesy of DigitalGlobe)
6. To create an output file showing the applied color table overlaid on the
reference image, click Create Overlay Mosaic. Pixels masked out with the
Veg Mask parameter are transparent, allowing the underlying reference image
to show through. ENVI prompts you to select an output filename, then adds the
mosaic to the Available Bands List.
7. To export the overlay mosaic to a graphics file suitable for use in a report or
brief, load the image into a display group, then select File → Save Image
As → Image File. The Output Display to Image File dialog appears. See
“Saving to Image Files” on page 18 for details.
Density Slice
1. Select the Base Image to use from the drop-down list, then click Load Image.
A new display group opens with the density slice, dynamically linked to the
base image type you selected.
2. There are four display ranges available for use. The default NDVI ranges are
general rules of thumb that apply in many, but not all cases. Adjust the ranges
to suit your particular data by entering new values or using the up/down
arrows. The ranges are as follows:
• No Veg
• Sparse Veg
• Moderate Veg
• Dense Veg
The numbers displayed for each range indicate the bottom NDVI threshold.
For example, the default No Veg range consists of pixels with an NDVI value
falling between -1.0 and 0.249.
The Normalized Difference Vegetation Index (NDVI) generates an image that
ranges from -1.0 to 1.0. Pixels with no vegetation tend towards -1.0, while
pixels with vigorous vegetation tend towards 1.0 (a simplified NDVI
classification sketch follows these steps).
3. Adjust the color for each range by right-clicking on the color box and selecting
a new color.
4. To turn off a range to show the Base Image beneath it, clear the On/Off check
box for that particular range.
Figure 7-26: Density Sliced Image (left) with Corresponding Parameters (right)
with No Veg range Turned Off to Display the Base Image (Natural Color)
Beneath (Imagery Courtesy of DigitalGlobe)
5. To restore the Density Slice parameters to their original state, click Restore
Defaults in the Examine Results panel.
6. To save the density slice image to a graphics file suitable for use in a briefing
or report, click Save to File in the Examine Results panel. The Output Display
to Image File dialog appears. See “Saving to Image Files” on page 18 for
details.
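The density slice ranges above are simple thresholds applied to the NDVI image. The following NumPy sketch is illustrative only; the band arrays are hypothetical, and while the 0.25 Sparse Veg bottom mirrors the default described above, the 0.45 and 0.65 thresholds are assumed stand-in values.

```python
import numpy as np

# Hypothetical red and near-infrared bands (reflectance, 0..1).
rng = np.random.default_rng(1)
red = rng.uniform(0.02, 0.4, size=(200, 200))
nir = rng.uniform(0.02, 0.7, size=(200, 200))

# NDVI ranges from -1.0 (no vegetation) to +1.0 (vigorous vegetation).
ndvi = (nir - red) / (nir + red + 1e-12)

# Bottom thresholds for the No Veg, Sparse Veg, Moderate Veg, and Dense Veg
# ranges. Adjust these to suit your particular data.
bottoms = np.array([-1.0, 0.25, 0.45, 0.65])
labels = ["No Veg", "Sparse Veg", "Moderate Veg", "Dense Veg"]

# np.digitize returns 1..4 for the four ranges defined by the bottom thresholds.
classes = np.digitize(ndvi, bottoms)

for i, name in enumerate(labels, start=1):
    print(name, (classes == i).sum(), "pixels")
```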
The SPEAR Vertical Stripe Removal tool is best used when the image background is
relatively homogeneous (consistent brightness level throughout the image). Because it
can cause artifacts, using this tool on heterogeneous images (such as coastal images
containing bright land and dark water) is not recommended.
1. From the ENVI main menu bar, select Spectral → SPEAR Tools → Vertical
Stripe Removal. The Vertical Stripe Removal Wizard displays the File
Selection panel.
2. Click Select Input File, choose a file, then click OK. The input image should
be a multispectral file in any format readable by ENVI. EO and thermal-
infrared files may also be used.
3. To optionally process only a portion of the scene, click Select Subset. A small
Select Spatial Subset dialog appears.
4. Click Spatial Subset. The standard Select Spatial Subset dialog appears. See
“Spatial Subsetting” on page 215. When finished, click OK to return to the
File Selection panel.
5. By default, output files are saved to the same directory and use the same
rootname as the input file, minus any extension. Output files are appended with
a unique suffix. To change the directory and/or root filename, click Select
Output Root Name.
6. Click Next. The Create Mask dialog appears.
7. Apply an optional mask to the data by clicking Input Band and selecting the
desired mask image. For details on mask selection, see “Masking” on
page 224.
The darkest and brightest 5% of the image will be masked by default. Dark
pixels are displayed as blue, and bright pixels as red. Modify the red and blue
percentages to include more or fewer pixels in the mask.
Right-click on the red or blue color boxes to change the colors used to
represent the dark and bright pixels. To remove the highlighted pixels from the
display, disable the Show checkboxes. The highlighted pixels will not be
displayed, but will be available for statistical calculations.
8. Click Next. The Examine Results dialog appears.
The original image and the image with vertical striping removed are opened in
the display and are dynamically linked for comparison. Use Auto-Flicker to
examine your results (for detailed information on using this tool, see “Auto-
Flicker” on page 585).
9. When you are finished examining your results, click Next in the Examine
Results panel, then click Finish to exit the Wizard.
Watercraft Finder
The Watercraft Finder tool streamlines processing to detect the presence of moving or
non-moving watercraft in open water environments using high-resolution
multispectral data. While the tool is also applicable for detecting watercraft in littoral
zones, the false positive rate will be elevated.
The premise of the tool is that watercraft, which reflect in the near infrared
wavelengths, will appear as anomalous clusters of pixels in near infrared absorbing
water. Several algorithms within the tool make use of this premise to aid you in
rapidly detecting watercraft. The algorithms ENVI uses depend upon which of the
following two processing workflows you choose:
• Use the texture-based processing workflow to work on all pixels in the scene
simultaneously, using the fact that the desired watercraft will occur as isolated
clusters of anomalous pixels in an otherwise uniform background (this is the
expected “texture” of the results). This is typically the most appropriate
workflow to use.
• Use the two-dimensional scatterplot workflow to operate only on the area of
the image currently visible in the Image window to manually select pixels for
occurrence of watercraft. This workflow is not efficient for exploitation of
large areas; use it instead for analysis of small areas or exploratory analysis of
larger areas.
Clouds or land surfaces remaining in the ROI could lead to false positives; much like
watercraft, clouds and land surfaces are relatively reflective in the near infrared
wavelengths.
5. Click Select Area of Interest to select the ROI containing the polygon you just
created. The AOI Selection dialog appears.
6. Select the ROI to use, then click OK.
7. In the File Selection panel, click Next. The Method Selection panel appears.
8. Select one of the following methods, then click Next:
• Texture Based Search. See “Texture Based Search” on page 645 for
steps.
• Two Band Scatterplot. See “Two Band Scatterplot” on page 649 for
steps.
9. Your results may still contain false positives after filtering if the false positives
are about the same size as watercraft, or if you did not perform filtering.
Optionally, click Display Eraser ROI.
10. The result loads into a new display group, and the ROI Tool dialog appears
with an Eraser ROI entry.
11. Select the Eraser ROI to make it active, then use standard ROI drawing
methods (described in “Defining ROIs” on page 323) to cover over any
observed false positives.
12. Click Next. The Eraser ROI Selection dialog displays.
13. Select the Eraser ROI to use, then click OK to remove the false positives from
the results. The Export Vectors panel appears.
Figure 7-27: Drawing Polygons to Cover False Positives to Remove from the Results
(right) (Imagery Courtesy of DigitalGlobe)
14. Optionally, select vector layers to export to separate files from the Vector
Output list. The vectors you digitized are stored in separate files in ENVI
vector file (.evf) format. These vectors remain in the projection and datum of
the input image. Each exported vector will result in the following files being
created:
• ENVI vector file (.evf) in the native image projection
Figure 7-29: Input Image with Detected Watercraft Overlaid (Imagery Courtesy
of DigitalGlobe)
Figure 7-30: Full Resolution Image with Detected Watercraft Overlaid (Imagery
Courtesy of DigitalGlobe)
17. When you are finished examining results, click Finish in the Processing
Complete panel to exit the Wizard.
Texture Based Search
1. When you select Texture Based Search, then click Next in the Method
Selection panel. The Processing Method panel appears.
2. Select one of the following:
• Perform PCA processing: “Principal Component Analysis” on page 504
describes the PCA transform. In this case, it creates a new dataset in which
the watercraft stand out from the water more than in the original dataset,
therefore leading to fewer false positives. Performing PCA over very dark
areas, such as open water, often enhances the noise, which may also lead to
false positives.
• Skip PCA processing: Because the contrast between watercraft and water
is typically fairly high in the original image, the default is not to use PCA.
Change this option if you find that the variance within the image is not
sufficient to accurately detect watercraft without numerous false positives.
3. Click Next. The Texture Processing panel appears. If you selected to perform
PCA processing, ENVI performs the processing.
4. Select the image band to use for texture processing.
• If PCA processing was not performed, select the NIR band (frequently
Band 4), as this band has the best contrast between watercraft and water.
• If PCA was performed, select the PCA band that shows the best contrast
between watercraft and water. This is typically PCA Band 2, but may
depend on the content of the scene.
5. Select one of the following texture measure types (a simplified sketch of the
data range measure appears after these steps):
• Data Range (typically the best choice)
• Variance
6. Click Create and Display Texture Image to create the texture image and to
load it into a new display group. A histogram window also appears.
7. Optionally, select a threshold to detect watercraft. The threshold should be set
low enough to minimize false positives without omitting real watercraft. Click
on the dotted bar in the histogram window or enter values in the fields at the
top of the histogram window to explore different thresholding options. When
you are satisfied with the threshold, click Retrieve Value in the Texture
Processing panel to register the selected threshold. Use Auto-Flicker to
examine your results (for detailed information on using this tool, see “Auto-
Flicker” on page 585).
Figure 7-31: Thresholding Set to Eliminate Noise, but Watercraft Remain (Imagery
Courtesy of DigitalGlobe)
Figure 7-32: Land and Cloud Areas Remaining in ROI, Highlighted in Texture
Image (Imagery Courtesy of DigitalGlobe)
Figure 7-33: Results Before Filtering (center), and After Filtering (right); some
false positives remain (Imagery Courtesy of DigitalGlobe)
10. Click Next. The Edit Results panel appears. See step 9 in “Running the
Watercraft Finder Wizard” on page 641 for details.
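For reference, the Data Range texture measure is simply the difference between the maximum and minimum values within a small moving window; isolated bright watercraft in otherwise uniform dark water produce large local ranges. The SciPy/NumPy sketch below is a rough illustration, not the SPEAR implementation; the NIR band, window size, and threshold are all hypothetical.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

# Hypothetical NIR band: dark, fairly uniform water with a few bright targets.
rng = np.random.default_rng(2)
nir = rng.normal(0.02, 0.005, size=(300, 300))
nir[150:153, 200:204] += 0.3   # simulated watercraft pixels

# Data range texture: max minus min within a moving window (3 x 3 here).
window = 3
texture = maximum_filter(nir, size=window) - minimum_filter(nir, size=window)

# Threshold the texture image: low enough to keep real watercraft,
# high enough to suppress background noise (the value here is arbitrary).
threshold = 0.1
detections = texture > threshold
print("Detected pixels:", int(detections.sum()))
```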
Two Band Scatterplot
1. When you select Two Band Scatterplot, then click Next in the Method
Selection panel, the 2-D Scatterplot panel appears.
2. Scatterplots plotting the near infrared band versus the red band are normally
the most effective for detecting watercraft. This is the default selection. To
change the bands to plot, click Show Band Selection and select the bands.
3. Click Load Scatterplot to load the image into a display group and to show the
scatterplot. The water pixels dominating the image in the display group form a
dense cloud of points in the lower left quadrant of the scatterplot. If any
watercraft (or other near infrared reflecting objects) are present, data points
extend upwards and to the right of the data cloud (see the sketch after these
steps).
4. To select data points, click and drag a circle around them in the scatterplot.
Right-click to close the circle and register the points. The circled points in the
scatterplot and their corresponding pixels in the display group turn red.
7. To select a watercraft, circle the extension of data points, right-click, and select
Export Class. Repeat this process for each portion of the image as it is viewed
at full resolution in the Image window.
Figure 7-35: Selecting the Entire Extension in the Scatterplot Highlights the
Entire Watercraft (right) (Imagery Courtesy of DigitalGlobe)
8. When all the watercraft are selected, click Next. The ROI Selection dialog
appears.
9. Select all the ROIs that were exported during the previous steps, then click
OK. SPEAR combines the results into a single layer. The Edit Results panel
appears. See step 9 in “Running the Watercraft Finder Wizard” on page 641 for
details.
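The two band scatterplot workflow relies on water pixels forming a tight cluster of low red and NIR values, with watercraft extending up and to the right. The Matplotlib/NumPy sketch below is illustrative only; the reflectance values and thresholds are hypothetical, and a simple box test stands in for the interactive circle selection.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
# Hypothetical water pixels: low red and NIR reflectance, tightly clustered.
red = rng.normal(0.03, 0.01, 5000)
nir = rng.normal(0.02, 0.01, 5000)
# A few watercraft pixels: brighter, especially in the NIR.
red = np.append(red, rng.uniform(0.10, 0.25, 20))
nir = np.append(nir, rng.uniform(0.20, 0.50, 20))

# Points well outside the water cloud are candidate watercraft
# (a box test standing in for the interactive circle selection).
candidates = (nir > 0.1) & (red > 0.06)

plt.scatter(red, nir, s=2, c=np.where(candidates, "red", "gray"))
plt.xlabel("Red band")
plt.ylabel("NIR band")
plt.title("NIR vs. red scatterplot; candidate watercraft in red")
plt.show()
```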
THOR Workflows
The Tactical Hyperspectral Operations Resource (THOR) workflows are designed to
process hyperspectral imagery. The following workflows are available:
• THOR Anomaly Detection
• THOR Atmospheric Correction
• THOR Change Detection
• THOR LOCs - Water and Trails
• THOR Stressed Vegetation
• THOR Target Detection
Each workflow uses a different combination of shared tools, referred to as panels.
Each panel includes basic instructions, which you can show or hide by using the
arrow button on the right side of the dialog. A Show informational dialogs between
steps check box is located on the bottom of some panels. By default, this check box is
not enabled. To proceed to the next step in each panel, you must close the information
dialog. Click the Help button at the bottom of each panel for more detailed
information on the current workflow.
Many of the panels also invoke a separate application called the THOR Viewer,
which is a graphical user interface for viewing input imagery and reviewing target
detections. See “Using the THOR Viewer” on page 663 for more information.
Click the Next button to proceed to the next panel in a workflow.
Note
Atmospheric Correction is also available as a panel in many workflows, in
addition to being available as a stand-alone tool.
THOR extracts a rough estimate of the image’s mean spectrum and compares it
to a generalized radiance spectrum using Spectral Angle Mapper (SAM) to
determine whether the input image is radiance or reflectance. If THOR
determines that the input is reflectance, it automatically selects None /
Already corrected as the correction method and warns you if you try to apply
a correction method. However, the initial screening is only an estimate, so you
can still proceed with an atmospheric correction method if needed.
2. Select a method from the Atmospheric Correction Method drop-down list,
and specify the required parameters (described in the following sections). Then
click the green Run Process button. You cannot proceed to the next step until
you run at least one correction method or select the None / Already corrected
option.
3. You can also run more than one correction method and compare the results
side-by-side. Click Compare Results, select which methods to compare, then
click OK. ENVI opens the resulting images from each correction method into
separate display groups, with the displays linked and Z-profiles displayed.
Log Residuals
The Log Residuals method produces a pseudo reflectance dataset by dividing each
pixel’s spectrum by the spectral geometric mean and the spatial geometric mean. No
user input is required to run this method. Log residuals is usually most effective for
analyzing absorption features present in hyperspectral data.
1. From the Atmospheric Correction Method drop-down list, select Log
Residuals.
2. Click Run Process.
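As a rough illustration of the idea (this is a common textbook formulation, not necessarily ENVI's exact computation), the log residual divides each pixel's value by that pixel's geometric mean across bands and by that band's geometric mean across pixels, with the grand geometric mean added back to preserve the overall scale. A NumPy sketch under those assumptions:

```python
import numpy as np

# Hypothetical radiance cube, shape (rows, cols, bands), strictly positive.
rng = np.random.default_rng(4)
cube = rng.uniform(100.0, 1000.0, size=(50, 50, 60))

log_cube = np.log(cube)

# Spectral geometric mean: per pixel, averaged over bands (in log space).
spectral_mean = log_cube.mean(axis=2, keepdims=True)
# Spatial geometric mean: per band, averaged over all pixels (in log space).
spatial_mean = log_cube.mean(axis=(0, 1), keepdims=True)
# Grand mean, added back so the overall scale is preserved.
grand_mean = log_cube.mean()

# Log residual: subtract both means in log space, then exponentiate to obtain
# a pseudo-reflectance value for each pixel and band.
pseudo_reflectance = np.exp(log_cube - spectral_mean - spatial_mean + grand_mean)

print(pseudo_reflectance.shape)   # (50, 50, 60)
```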
QUAC
The QUick Atmospheric Correction (QUAC) method is an in-scene based correction
method that can produce results as accurate as the first-principles based Fast Line-of-
sight Atmosphere Analysis of Spectral Hypercubes (FLAASH) method with no user
input. QUAC works best when the input image has a reasonably diverse set (at least
10) of endmembers. Refer to the Atmospheric Correction Module User’s Guide for
more information on QUAC. To use the QUAC method, you must purchase a separate
license for the Atmospheric Correction Module: QUAC and FLAASH.
1. From the Atmospheric Correction Method drop-down list, select QUAC.
2. From the Sensor Type drop-down list, select the sensor from which your
hyperspectral input image was captured. If you do not know the sensor, select
Unknown and QUAC will attempt to determine the sensor based on the bands
of the input imagery.
3. Click Run Process.
FLAASH
The Fast Line-of-sight Atmosphere Analysis of Spectral Hypercubes (FLAASH)
method is a physics-based approach to atmospheric correction that utilizes various
metadata about time, location, and many other parameters to generate a radiative
transfer model using MODTRAN4. FLAASH is capable of producing extremely
accurate surface reflectance results, but it requires substantial user input. THOR attempts
to simplify this process by filling in as many of the parameters as possible using the
input image's metadata.
1. From the Atmospheric Correction Method drop-down list, select FLAASH
(Rigorous).
2. THOR automatically populates the various parameter fields based on the
image metadata. You can edit these values as needed. Refer to the Atmospheric
Correction Module User’s Guide for more information on these parameters. To
use the FLAASH method, you must purchase a separate license for the
Atmospheric Correction Module: QUAC and FLAASH.
3. Click Run Process.
have the same number of spectral bands. THOR is intended for use with
hyperspectral data, but input files are only required to have at least two bands. The
input files must at least partially cover the same area on the ground, but they do not
need to be georeferenced. The two images will be co-registered later in the Change
Detection workflow.
1. From the ENVI main menu bar, select Spectral →THOR Workflows →
Change Detection. The Change Detection dialog appears.
2. Click Select Time #1 File. A file selection dialog appears.
3. Select a multi-band image and perform optional Spatial Subsetting.
4. Click Select Time #2 File. A file selection dialog appears.
5. Select a multi-band image to be compared with the Time #1 file. The Time #1
and #2 images do not necessarily need to be in chronological order. The Time
#1 file will serve as the base image that the Time #2 image is warped to for
purposes of co-registration. The desired spatial subset for change detection is
based on the Time #1 image.
6. You can choose to process a subset of spectral bands by clicking one of the
following buttons:
• Graphically: The THOR Spectral Subsetting dialog appears. See “Using
the THOR Spectral Subsetting Dialog” on page 672 for details.
• By List: The File Spectral Subset dialog appears. See “Spectral
Subsetting” on page 220 for further instructions.
7. By default, the output rootname will be change. Miscellaneous output files
produced during the workflow will use this rootname, plus other words as
needed, such as _subset. To change the rootname to something else, click
Select Output Root Name.
By default, the output file is written to the same directory as the input file. You
can choose another output directory, but if it is a read-only directory, ENVI
will default to the Temp Directory specified in your ENVI Preferences (see
“Default Directory Preference Settings” on page 1146).
8. Click Next.
9. Select tie points for the images. See “THOR Coregistration” on page 674.
10. Perform atmospheric correction on the Time #1 and Time #2 images. See
“THOR Atmospheric Correction” on page 654.
11. In the Change Detection Methods panel, select Spectral Angle. This is
currently the only option available. This method computes the spectral angle
on a pixel-by-pixel basis from the two co-registered input images. The larger
the spectral angle, the greater the change (a minimal sketch of this computation
follows these steps).
12. Perform rule thresholding. See “Rule Thresholding” on page 705.
13. Perform spatial filtering. See “Spatial Filtering” on page 708.
14. Export targets. See “Export Targets” on page 721.
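The spectral angle treats each pixel's spectrum as a vector and measures the angle between the Time #1 and Time #2 vectors, so a brightness change alone produces a small angle while a change in spectral shape produces a large one. The NumPy sketch below is illustrative, not THOR code; the co-registered cubes are hypothetical.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra of equal length."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical co-registered cubes, shape (rows, cols, bands).
rng = np.random.default_rng(5)
time1 = rng.uniform(0.1, 0.9, size=(40, 40, 30))
time2 = time1.copy()
time2[10:12, 10:12, :] = rng.uniform(0.1, 0.9, size=(2, 2, 30))  # changed pixels

rows, cols, _ = time1.shape
change = np.zeros((rows, cols))
for r in range(rows):
    for c in range(cols):
        change[r, c] = spectral_angle(time1[r, c], time2[r, c])

# The larger the spectral angle, the greater the change at that pixel.
print("Max change angle (radians):", change.max())
```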
already selected indices (described in Step 6), they will still be calculated as
needed.
10. Click Next.
11. Use the Review Veg Stress Results panel to review the vegetation indices and
colorized products that you just created. The results are displayed in the THOR
Viewer. (See “Using the THOR Viewer” on page 663 for instructions.) The
available images for display are listed in a tree view; images include the input
file, the atmospherically corrected image (if any), the individual vegetation
indices, and the colorized, application-specific products.
12. Click on the desired image to load it in the THOR Viewer. For colorized
products, a legend is displayed to help identify what the colors mean.
Following is an example of a colorized image:
Figure 7-36: Example showing the colorized product results for Agricultural
Stress.
The THOR Viewer consists of four areas, shown in the following figure:
Viewing Area
The Viewing Area is where the loaded image and target overlays (if any) are
displayed. Clicking in the Viewing Area performs various functions depending on
what is currently selected in the Control Bar.
Control Bar
The Control Bar has a number of different functions, described as follows:
(Table of Control Bar buttons and their descriptions.)
Status Bar
The Sample and Line fields in the Status Bar display the current pixel address
underneath the cursor. The upper-left corner of the first pixel is (1,1). If the image is
georeferenced, the Map X and Map Y fields will display the geographic position for
the pixel. The lower-right part of the Status Bar indicates the progress of any
processes running in the background, such as generating image statistics. You can
still interact with the THOR Viewer while background processes are running.
Target Controller
The Target Controller may or may not be visible, depending on your THOR
workflow tasks. When present, the Target Controller is used to control how targets are
defined through the use of one or more rule images. The upper part of the Target
Controller contains the list of current targets. The colored box next to the target
names indicates the color of the target layer overlaid on the image. Clicking on the
target name shows or hides its histogram panel. To change the order in which layers
are drawn in the Viewing Area, click and drag the list items into the desired order.
The top item in the list is drawn on top.
The Snapshot button below the Target Controller will create a JPEG image of the
current Viewing Area contents.
When two or more targets are being examined, select the Co-Targets option to see
where those targets co-occur. Use the M of N Targets slider to indicate how many
targets must co-occur in the same pixel to display it as a co-target. Use the ROI
Color box to set the color used to indicate co-targets. The Opacity slider adjusts the
opacity of the co-target layer (100 is completely opaque, 0 is completely transparent).
Figure 7-38: The Co-targets tool shows pixels (displayed in white here) that are
designated as valid targets by more than one target layer, potentially indicating
target confusion.
Histogram Panel
Use the Histogram Panel to adjust the target overlay. Each target has one target
overlay that is displayed in the Viewing Area. That target overlay may be generated
from one or more rule images. Each available rule image is represented by one
histogram in the Histogram Panel. Each target will have its own Histogram Panel.
Figure 7-39: The Histogram Panel controls each target layer’s appearance.
Use the Overlay Controls within the Histogram Panels to change the selected target
layer's appearance. Check the Hide box to hide the layer. Check the Flicker box to
flicker the layer on and off, making it easier to see its distribution. Use the M of N
Rule Images slider to control how the rule images are combined into a single target
overlay. Use the ROI Color box to control the layer's color. Clicking on the box will
cycle through the colors. Right-clicking on the box allows you to select the desired
color from a list. The Opacity slider controls the opacity of the layer (100 is
completely opaque, 0 is completely transparent).
Each histogram corresponds to one rule image for that target. An abbreviated name
for the algorithm that generated the rule image is displayed at the bottom of the
histogram. Each histogram has a colored Target Threshold bar that divides the
histogram into Included Areas and Excluded Areas. Included Areas are the parts of
the histogram where pixels are designated as valid targets, whereas Excluded Areas
indicate non-target pixels. Note that Excluded Areas are shaded on the histogram plot
so that they are easily distinguished from Included Areas. Some target detection
algorithms have better matches with lower scores (like Spectral Angle Mapper),
whereas other algorithms have better matches with higher scores (like Matched
Filter).
The Target Threshold is set by default to include the tail of the histogram. This
works well for sparse targets, but you will need to adjust it for targets that cover larger
portions of the image (such as land cover classes). You can adjust the Target
Threshold by left-clicking and dragging in the histogram plot. The target overlay in
the Viewing Area will be adjusted dynamically. Several numbers are displayed next
to the threshold bar. The top number indicates the threshold value in rule image
space. The percentage to the left of the bar indicates the portion of the image below
the threshold, and the percentage to the right indicates the portion of the image above
the threshold (these two numbers sum to 100%).
You can zoom into a histogram plot by right-clicking and dragging the area to zoom
into. Reset the plot to the original range by either middle-clicking in the plot area or
clicking the Reset button.
The M of N Rule Images slider determines how multiple rule images are
combined into a single target overlay. N represents the total number of included rule
images, and M represents the number of rule images in which a pixel must fall within
the Included Area to be considered a valid target. For example, if the slider is set to 7
when there are seven included rule images, pixels are only designated as valid targets
and displayed in the target overlay if they occur within the Included Area on all seven
rule image histograms. If the slider is adjusted to 1, targets will be included in the
overlay even if they are only in the Included Area of one histogram. By setting the
slider to higher values, you can be more confident that the targets in the overlay are
valid, and not false positives. By default, the M of N Rule Images slider is set at 50%
of the number of rule images.
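To make the combination explicit, the NumPy sketch below thresholds each rule image into an Included/Excluded mask, counts how many masks include each pixel, and keeps pixels included by at least M of the N rule images. It is illustrative only; the rule images, thresholds, and lower-is-better flags are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
n_rules, rows, cols = 7, 100, 100

# Hypothetical rule images (e.g., scores per pixel from different algorithms).
rule_images = rng.uniform(0.0, 1.0, size=(n_rules, rows, cols))

# Per-rule thresholds, and whether lower scores indicate a better match
# (as with Spectral Angle Mapper) or higher scores do (as with Matched Filter).
thresholds = np.full(n_rules, 0.9)
lower_is_better = np.zeros(n_rules, dtype=bool)   # all "higher is better" here

included = np.where(lower_is_better[:, None, None],
                    rule_images <= thresholds[:, None, None],
                    rule_images >= thresholds[:, None, None])

M = 4   # slider value: how many rule images must include a pixel
target_overlay = included.sum(axis=0) >= M
print("Valid target pixels for M =", M, ":", int(target_overlay.sum()))
```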
Figure 7-40: Example images showing the resulting target overlays for M of N
values of 1 (left), 4 (center), and 9 (right).
You can easily adjust the results for each rule image independently of other rule
images by selecting the Show This Rule Only option under the desired histogram.
The target overlay switches to show the results based on only that rule image. Adjust
the Target Threshold as desired, then uncheck the box to return to the target overlay
being determined by a combination of rule images. If the results for a rule image are
unsatisfactory, uncheck the Include box so that it will no longer be included in the
combination results.
Pop-Up Windows
Pop-up windows provide more information about a particular pixel or target. Right-
clicking on the image will bring up a pop-up window. When right-clicking on a
target, pop-up windows contain an ancillary image chip corresponding to the clicked
location (if an ancillary image was selected), a spectral plot, and text information
about the target ID, target size, and confidence level in the target.
The spectral plot contains the mean spectrum of the target blob in white, and the
reference spectrum used to find that target (in the relevant color). Double-clicking on the
ancillary image chip will load the ancillary image in an ENVI display group, centered
on the clicked location (or it will move the display group to that location if the image
is already displayed). Double-clicking the spectral plot will create an ENVI spectral
plot window with the spectra loaded. When right-clicking on a non-target pixel, the
pop-up window will display a spectral plot with the selected pixel's spectrum
displayed. To preserve an existing pop-up window while creating more windows,
hold down the Ctrl key while right-clicking.
Shared Tools
Each workflow uses a different combination of shared tools, referred to as panels.
File Selection
Use the File Selection panel to select input files for processing, to provide an output
rootname, and to select any ancillary files for processing.
1. Click Select Input File. The Select Input File dialog appears.
2. Select a hyperspectral image that has wavelength information defined for each
band. Click OK.
3. In the File Selection panel, click Spatial Subset to perform Spatial Subsetting.
4. You can choose to process a subset of spectral bands by clicking one of the
following buttons in the File Selection panel:
• Graphically: The THOR Spectral Subsetting dialog appears. See “Using
the THOR Spectral Subsetting Dialog” on page 672 for details.
• By List: The File Spectral Subset dialog appears. See “Spectral
Subsetting” on page 220 for further instructions.
5. By default, the output rootname will be the same as that of the input file.
Miscellaneous output files produced during the workflow will use this
rootname, plus other identifiers as needed (for example, _subset). Click
Select Output Root Name to change the rootname and directory for the
output files.
By default, the output file is written to the same directory as the input file. You
can choose another output directory, but if it is a read-only directory, ENVI
will default to the Temp Directory specified in your ENVI Preferences (see
“Default Directory Preference Settings” on page 1146).
6. In the context of THOR, an ancillary file is a high-resolution panchromatic
image of the same area covered by the hyperspectral image; it will be used
later in the Review Detections step. Ancillary files are not applicable to all
workflows, so the option to select one will not always be available from the
File Selection panel. You can only select an ancillary file if the input
hyperspectral image is georeferenced. The ancillary file must also be
georeferenced.
Figure 7-41: Example of the initial state of the THOR Spectral Subsetting dialog.
There are several ways to determine whether a band is “good” or “bad.” A vertical
blue line in the plot window (initially set at the first band on top of the y-axis)
indicates the band position of the thumbnail image displayed in the right part of the
dialog. Click and drag the blue bar to view thumbnails of other bands. Those bands
that are noisy or contain bad data should be marked as “bad.” You can also control the
displayed thumbnail image by using the Displayed Band slider. Mark the band
appropriately, using the Good Bands or Bad Bands button. This method is often
simpler and more precise than clicking and dragging in the plot area.
You can animate the thumbnail images by clicking the Play button. Click the Pause
button to pause the animation. The Speed field controls the speed of the animation.
You can export a snapshot of the good and bad bands to a bad-bands list (for use
elsewhere in ENVI) by clicking Output to Bad Bands List and selecting an output
filename.
You can modify the range of the plot’s y-axis by clicking Set Y Scale. Enter the new
minimum and maximum range values, and click OK.
Figure 7-42: Example of the THOR Spectral Subsetting dialog after marking bad
bands. The vertical blue bar at band number 196 indicates the band currently
displayed in the animation window (right).
THOR Coregistration
Coregistration occurs within the THOR Change Detection tool. For change detection
to be effective, the images of interest must be closely aligned. The native
georeferencing information that comes with the imagery is typically not accurate
enough for this purpose. Instead, you must select tie points marking the same features
on both images. ENVI warps one image based on these tie points to match the base
image.
Using the Ground Control Points dialog to select tie points is the simplest method;
however, it can be challenging in areas with few obvious features, and it is time
consuming. To assist tie point selection, ENVI automatically scans both images to
locate common features. For best results, manually provide three to five seed points
to assist ENVI in finding tie points.
Though ENVI can select tie points much faster than a human operator, you should
check automatically chosen tie points before proceeding. Automatically generated tie
points may fall on clouds or cloud shadows, on rooftops, or on other elevated objects;
such points are not suitable. Slight time differences between image collections may
also generate sub-optimal tie points.
Creating Tie Points
1. Select one of the following coregistration options:
• Select tie points manually: “Image-to-Image Ground Control Points” on
page 878 describes how to select tie points. When you are finished
selecting GCPs, click Next. The Review Tie Points dialog appears.
Proceed to “Reviewing Tie Points” on page 678 for details.
• Select tie points automatically: See “Automatic Tie Point Selection” on
page 677.
2. To use seed points, ensure that the Use Seed Points check box is enabled.
3. Select seed points either manually or automatically. Choose one of the
following:
• To automatically generate four seed points, click Auto-Generate Seed
Points. ENVI selects four seed points. The auto-generated seed points may
not fall precisely on the same feature. This is acceptable, as long as they
are relatively close. These seed points are discarded when tie point
selection is complete. Click Select Seed Points. ENVI opens the
Time #1 and #2 images in display groups, and the Ground Control Points
Selection dialog appears, listing the automatically selected points.
• To manually select seed points, click Select Seed Points. ENVI opens the
Time #1 and #2 images in display groups and the Ground Control Points
Selection dialog appears. Select three to five seed points using the steps in
“Image-to-Image Ground Control Points” on page 878.
When manually selecting seed points, you can switch the Geographic
Link toggle to On to move the cursor to the same area in both images, then
switch the Geographic Link toggle to Off to fine tune the location.
4. If you manually selected seed points, click Retrieve Points. If you need to
clear the seed points and start again, click Clear Points.
5. Click Show Advanced Options if you want to set additional parameters for
area-based image matching methods. Typically, the default settings provide the
best results, but you can edit the parameters as needed. See “Area-Based
Matching Parameters” on page 896 for parameter descriptions.
• Number of Tie Points
• Search Window Size
• Moving Window Size
• Area Chip Size
• Minimum Correlation
• Point Oversampling
• Interest Operator
6. Click Next to continue. The following appears:
• One display group contains the base image.
• One display group contains the image to warp.
• The Ground Control Points Selection dialog.
• If you selected automatic tie points, the points show in the display groups,
the GCP Selection dialog shows how many points exist and the Root Mean
Square (RMS) error, and the Image to Image GCP List shows the
individual points.
• The Review Tie Points panel appears.
7. Review tie points, as described in “Reviewing Tie Points” on page 678.
8. When you are finished reviewing tie points, choose one of the following for the
warp Method in the Review Tie Points panel. “Warping and Resampling
Image-to-Image” on page 904 describes each parameter.
• Polynomial
• Triangulation
• RST
9. Choose one of the following for the warp Interpolation. The interpolation is
the manner in which the image is resampled to the warped grid. “Warping and
Resampling Image-to-Image” on page 904 describes the resampling choices.
• Nearest Neighbor
• Cubic Convolution
• Bilinear
• From the Image to Image GCP List dialog menu bar, select Options →
Order Points by Error to sort the tie points by the highest RMS. For tips
on fixing tie points see “Fixing Tie Points” on page 679.
Fixing Tie Points
• Fix the tie point: Select the tie point in the Image to Image GCP List dialog,
and click Goto to jump to that point. Adjust the location of the crosshairs to the
desired location. Click Update in the Image to Image GCP List to update the
location of that point. ENVI immediately calculates and displays a new RMS
error.
• Turn the tie point off: Select the tie point in the Image to Image GCP List
dialog. Click On/Off. The point turns green to indicate that it is off and no
longer being used, and the RMS error is updated. Turn the point back on by
clicking On/Off again.
• Delete the tie point: Select the tie point in the Image to Image GCP List
dialog. Click Delete. The point is removed from the dialog list, and the RMS
error is updated.
• Use THOR to automatically turn off bad tie points: In the Review Tie
Points panel, set the Maximum allowable RMS per GCP parameter to the
desired RMS threshold, then click Apply. The tie points with the greatest RMS
errors are turned off until no tie point has an RMS greater than the set value.
This is a quick way to remove the greatest errors. To turn the points back on,
select them in the Image to Image GCP List dialog, then click On/Off.
Checking Coregistration Accuracy
When warping is complete, check the accuracy of the coregistration.
1. Roam around the dynamically linked images and check the accuracy of the
coregistration. When flickering between the images, there should be little shift
between features. If there is a shift, it could be due to several reasons:
• Inaccurate Tie Points: Inaccurate tie points cause poor coregistration. In
this case, re-review tie points for accuracy. Add more if needed.
• Elevated Features: Because of parallax caused by collection with
different view geometries (even for panchromatic and multispectral images
collected “simultaneously”), objects with height above the ground “lean”
in different directions on the two images. In this case, coregistration is
impossible to achieve. The taller the object, the greater the degree of error.
• Topography: Topography and elevated features are related. Topography
viewed from different directions “leans” different ways in the images, and
2. Use the ROI Tool to define ROIs for each desired target, and draw the ROIs as
described in “Drawing ROIs” on page 323. Name the ROIs so that you will
recognize them later in the workflow. As you create ROIs, they will appear in
the Available ROIs list in the Target Signature Selection panel.
3. Click Plot Signature(s) to plot the mean signatures for the selected ROIs.
Click ROI Options to create new ROIs, delete ROIs, or save the ROIs to a file.
You can also perform these functions through the ROI Tool. If you accidentally
close the image or the ROI Tool, click Reload Image and ROIs.
4. To designate ROIs as targets, select the desired ROIs in the Available ROIs list,
then click the arrow button to move them to the Target ROIs list. You can
remove ROIs from the Target ROIs list by clicking Remove ROIs or by right-
clicking the desired ROIs and selecting the appropriate option. This will
remove them from the Target ROIs list, but it will not delete them.
clicking the desired ROIs and selecting the appropriate option. This will remove them
from the Background ROIs list, but it will not delete them.
combination of every input band. The leading bands in the transformed data generally
contain the unique content in the image, while the latter bands contain mostly noise
and otherwise redundant information.
Band selection refers to selecting a discrete subset of input bands to continue
processing with. You can select bands manually, or you can let ENVI automatically
attempt to identify the most important bands for differentiating targets from
background signatures. Each of these methods is discussed in more detail below.
In the Dimensionality Reduction and Band Selection panel, select a dimensionality
reduction method from the Method drop-down list. The options are None (proceed
with all bands of the input or atmospherically corrected image), Image Transform,
Automated band selection, and Manual band selection. The area below the drop-
down list will change to show the relevant parameters for the selected method.
Note
The Automated band selection method is only available if you selected
background signatures using the Background Signature Selection panel.
Image Transform
Each of the available transforms creates an output dataset where each output band is a
linear combination of all the input bands. Follow these steps to continue:
1. From the Transform Method drop-down list, select a method. The choices are
as follows:
• Independent Component Analysis: The ICA method works well with
hyperspectral data because it is more likely to treat sparse targets as
important features, compared with the PCA or MNF methods. However,
ICA can take a significantly longer time to process.
Click Transform Params. The ICA Parameters dialog appears. See
“Independent Components Analysis” on page 509 for more information on
the parameters required for ICA. Reducing the Number of Output IC
Bands value will result in faster processing, but at the risk of omitting
some subtle features. ICA is a memory-intensive process. If you receive a
memory allocation error during processing, reduce the X Sampling and Y
Sampling values to reduce the memory usage. However, this can
potentially change the statistics such that you miss sparse targets.
• Principal Component Analysis: The PCA method creates a number of
PC bands, which are uncorrelated linear combinations of the original
spectral bands. You can calculate the same number of output PC bands
as input spectral bands. The first PC band contains the largest percentage
of data variance, the second PC band contains the second largest, and so
on (a simplified PCA sketch appears at the end of this section).
Click Transform Params. The PCA Parameters dialog appears. Refer to
“Principal Component Analysis” on page 504 for more information on the
required parameters for PCA.
• Minimum Noise Fraction: The MNF transform is a linear transformation
that uses separate PCA rotations to segregate noise in the data and to
reduce the dimensionality of the original dataset.
Click Transform Params. The MNF Parameters dialog appears. Refer to
“MNF Rotation” on page 748 for more information about the required
parameters for MNF.
2. The use of masks is recommended along with image transforms in cases where
the image contains large areas that you are not interested in processing (such as
water or black-fill pixels). In the Parameters dialog for each image transform
method is a Mask Information section. From the Masking Method drop-down
list, select one of the following options:
• None: Select this option if you do not want to use a mask.
• Mask user-specified value: Select this option to mask out certain pixels
within an image. In the Fill Value field, enter the pixel value for the pixels
you want to exclude from processing. When you apply a mask, the mask is
used when generating the transform coefficients; however, the coefficients are
applied to every pixel in the input image, whether a mask was used or not. Set the
Zero Out Masked Pixels toggle button to Yes to apply the mask to the
transform image so that masked values are 0.
• User-defined mask file: Select this option to apply a separate mask file to
the hyperspectral image. The mask file should be a binary image
consisting of values of 0 and 1, and it must have the same spatial extent
and projection as the hyperspectral image. Click Choose, and select a
mask file. Set the Zero Out Masked Pixels toggle button to Yes to apply
the mask to the transform image so that masked values are 0.
3. Click Run Transform. When processing is complete, the results appear in the
Dimensionality Reduction and Band Selection panel.
4. Review the transformed image bands to determine which bands contain useful
information. While the example discussed here is for ICA results, the same
applies to PCA and MNF.
The ICA Results section of the Dimensionality Reduction and Band Selection
panel lists every transformed band. Double-click a band or click Display Band
to open the selected band in a display group. Click Animate Bands to launch
ENVI's animation tool, which provides a convenient way to quickly review all
the bands.
For ICA, click Plot 2D Coherence to see a plot of the spatial coherence for
each transformed band. For PCA and MNF, click Plot Data Variance and
select a plot to view. The choices are: Plot Variance (ENVI), Plot Variance
(Log axis, IDL), Plot Eigenvalues (ENVI), and Plot Eigenvalues (Log axis,
IDL).
Advanced users may explore transformed results further by clicking Plot Band
Weights. The Band Weightings dialog appears. This dialog provides some
insight into how much each input spectral band contributes to each
transformed image band. For more information, see “Transform Band
Weights” on page 715.
5. Select the transformed bands you wish to keep by clicking one of the following
buttons:
• Select Graphically: The THOR Spectral Subsetting dialog appears. See
“Using the THOR Spectral Subsetting Dialog” on page 672 for details.
• Select by List: The File Spectral Subset dialog appears. See “Spectral
Subsetting” on page 220 for further instructions.
THOR will automatically apply the selected transform to the target and background
(if any) signatures before spectral matching is performed later in your workflow.
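For orientation, principal components can be computed from the band-to-band covariance matrix of the image: its eigenvectors define the linear combinations of the input bands, ordered so the first component carries the most variance. The NumPy sketch below is a generic PCA, not ENVI's routine; the cube and mask are hypothetical, and it also illustrates how a mask can restrict the statistics while the resulting coefficients are still applied to every pixel.

```python
import numpy as np

# Hypothetical hyperspectral cube, shape (rows, cols, bands), and a mask of
# valid pixels (True = use for statistics).
rng = np.random.default_rng(7)
cube = rng.normal(size=(60, 60, 20))
mask = np.ones((60, 60), dtype=bool)
mask[:10, :] = False                       # e.g., black-fill pixels excluded

pixels = cube.reshape(-1, cube.shape[2])   # (n_pixels, bands)
stats_pixels = pixels[mask.ravel()]        # statistics from unmasked pixels only

mean = stats_pixels.mean(axis=0)
cov = np.cov(stats_pixels - mean, rowvar=False)

# Eigenvectors of the covariance matrix, sorted by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The transform coefficients are applied to every pixel, masked or not.
pc = (pixels - mean) @ eigvecs
pc_bands = pc.reshape(cube.shape)

print("Variance carried by PC band 1: %.1f%%" % (100 * eigvals[0] / eigvals.sum()))
```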
Automated Band Selection
From the Method drop-down list in the Dimensionality Reduction and Band
Selection panel, select Automated band selection.
Note
The Automated band selection method is only available if you selected
background signatures using the Background Signature Selection panel. See
“Background Signature Selection” on page 682 for details.
Use automated band selection to identify spectral bands that best differentiate target
signatures from background signatures. You can use the BandMax or Advanced
Signature Optimization algorithm to perform automated band selection. Select one of
these methods from the Selection Method drop-down list.
BandMax:
When you select this option, a plot window opens with the target and background
signatures displayed. The grey plot curve (usually low in value) represents Band
Significance. Following is an example:
The red, horizontal line on the plot represents the threshold used to define which
bands are significant (shaded blue on the plot) and which are not. You can change the
threshold by clicking and dragging the threshold bar in the plot window, or by using
the Band Significance Threshold slider in the Dimensionality Reduction and Band
Selection panel.
Figure 7-47: Use the slider bars to set the threshold and number of significant
bands.
Adjusting the Significant Bands slider also controls the threshold position, setting it
so that a specific number of significant bands is selected. The bands deemed
significant are shaded blue in the plot window and listed in the Dimensionality
Reduction and Band Selection panel.
You can fine-tune the band selection by clicking Edit Band Selection and selecting
or unselecting specific bands in the list. If you change the threshold further after
editing these values, the threshold changes will be lost.
Click Output Bands to ASCII to save a bad bands list to a text file.
Advanced Signature Optimization:
The Advanced Signature Optimization method works differently from BandMax in
that you evaluate each target/background signature pair separately. After selecting
Advanced Signature Optimization from the Selection Method drop-down list, follow
these steps to continue:
1. Click Add. The Select Signature Pairs dialog appears.
Figure 7-48: Click and drag the red horizontal line to set the threshold.
You can change the threshold by clicking and dragging the threshold bar in the
plot window, or by using the Threshold slider in the Dimensionality Reduction
and Band Selection panel.
The bands deemed significant are shaded green in the plot window and listed
in the Dimensionality Reduction and Band Selection panel.
6. If you determine that certain target/background pairs are more important than
others, you can skew the band significance plot for that pair relative to the
others. First, select the desired pair in the Target/Background Signature
Pairs list. Then, adjust the Target Priority (%) slider to change the scale
factor applied to that pair's band significance plot.
7. You can fine-tune the band selection by clicking Edit Band Selection and
selecting or unselecting specific bands in the list. If you change the threshold
further after editing these values, the threshold changes will be lost.
8. Click Output Bands to ASCII to save a bad bands list to a text file.
Manual Band Selection
Manual band selection allows you to define which bands should be used for further
processing. At a minimum, you should remove bands in the 1400 nm and 1900 nm water vapor absorption regions.
Signature Matching
Use the Signature Matching panel to select the spectral matching methods used to
detect occurrences of the selected target signatures. If you selected one or more target
signatures from spectral libraries, this panel is also used to set the proper scale factor
needed to scale the library signatures into the same space as the input image.
The Spectral Matching Algorithms section lists the available spectral matching
methods. Select which method(s) you want to run:
• Spectral Angle Mapper (SAM): See “Spectral Angle Mapper Classification”
on page 771 for details.
• Matched Filter (MF): See “Using Matched Filtering” on page 776 for details.
• Adaptive Coherence Estimator (ACE): See “Using Adaptive Coherence
Estimator” on page 782 for details.
Figure 7-50: Example of scale factor that is correct (top), too low (middle), and
too high (bottom).
Import Signatures
Use the Import Signatures panel to import spectral signatures from multiple sources
in order to create a new spectral library for the Spectral Library Builder.
7. You can use metadata queries to determine which of the input SLEFs will be
imported. SLEF files are typically categorized by Group, Class, and Sub-Class;
and each has a Country of Origin. Select the desired metadata values from
these groups, and click Run Query. Only those signatures that contain
metadata matching the query will appear in the Spectra to Import list. You can
also enter search terms in the Keyword Search field. The query must find all
of the entered metadata query fields within a signature's metadata for it to
appear in the Spectra to Import list.
8. When you are finished selecting the signatures to import, click OK. The
signature data will be extracted from the SLEF files, and the signatures will be
added to the Imported Signatures list.
Importing Signatures from ENVI Spectral Libraries
The following are steps to import signatures from ENVI spectral library (also called
SLI) files.
1. From the Import signatures from drop-down list, select ENVI Spectral
Library (.sli).
2. Click Import Signatures. The Select Spectral Library dialog appears.
3. Select one or more spectral library files (.sli), and click OK. The SLI
Importer dialog appears.
Note
Your ENVI installation includes several spectral library files (.sli), located in
the spec_lib directory.
The SLI Importer dialog is divided into two areas: a list of available signatures
(left), and a list of signatures to import (right).
4. To plot a signature, select the signature in the Available Signatures List and
perform one of the following actions:
• Double-click
• Right-click and select Plot Selected Spectra
• Click the Plot Selected Entries button
5. To import the selected signatures, click the arrow button to move them to the
Signatures to Import list.
6. To remove signatures from the list so that they will not import, select one or
more signatures and click Remove Selected. To clear the list, click Clear All.
7. When finished selecting signatures to import, click OK. The signature data
will be extracted from the spectral library file(s) and the signatures will be
added to the Imported Signatures list in the Import Signatures panel.
8. You can use metadata queries to determine which of the input MRSL
signatures will be imported. MRSL files are typically categorized by Group,
Class, and Sub-Class; and each has a Country of Origin. Select the desired
metadata values from these groups, and click Run Query. Only those
signatures that contain metadata matching the query will appear in the Spectra
to Import list. You can also enter search terms in the Keyword Search field.
The query must find all of the entered metadata query fields within a
signature's metadata for it to appear in the Spectra to Import list.
9. When you are finished selecting the signatures to import, click OK. The
signature data will be extracted from the MRSL files, and the signatures will
be added to the Imported Signatures list.
Material Identification
Use the Material Identification tool to attempt to identify unknown spectral
signatures by comparing them to spectral libraries. Follow these steps to continue:
1. Choose one of the following options to run Material Identification:
• Click Identify Material in the Review Detections panel.
• Select Spectral → THOR Workflows → Tools → Material
Identification from the ENVI main menu bar.
The Material Identification panel appears.
Figure 7-52: Spectral angle thresholds used for coloring the results table.
The plot area displays the unknown signature in red, the currently selected
library signature in the results table in green, and the band importance in blue.
The band importance signifies how important each spectral band is relative to
the others for differentiating the unknown and library signatures. The larger the
band importance, the more effect that band has on enlarging the spectral angle
between the unknown and library signature. You can turn off the band
importance by unchecking the Plot band importance option.
5. Click Export to ENVI Plot to export the plots to a standard ENVI plot
window.
6. At times, you may want the Material Identification tool to ignore certain noisy
or otherwise bad bands when comparing signatures. To designate which bands
should be ignored, select the Edit bad bands option. The plot area will be
shaded green for "good" bands and red for "bad" (ignored) bands. Click the red
Bad Bands button, then click and drag in the plot area to designate those bands
as "bad." Click the green Good Bands button, then click and drag in the plot
area to designate those bands as "good."
7. After specifying bad bands, uncheck the Edit bad bands option to compare
signatures using the new set of good bands. Bad bands are blanked out in the
plot area.
Figure 7-54: Original band selection (left), new bad band selection (middle), and
final results with the bad bands blanked out (right).
Figure 7-55: Updated results for the same signature after specifying bad bands.
8. To view metadata for the selected signature in the results table, click View
Metadata. The Spectrum Metadata dialog appears.
9. You can export the results table to an ASCII text file by clicking Export Table.
Rule Thresholding
Use the Rule Thresholding panel to review rule images that result from processes
such as spectral matching and anomaly detection. The input image will load into the
THOR Viewer, which is used to adjust the target thresholds. (See “Using the THOR
Viewer” on page 663 for details.) Use the Rule Thresholding panel to change the
band combination or image displayed in the THOR Viewer.
Changing the Displayed Image
The Rule Thresholding panel lists the different images that are available to display in
the THOR Viewer. The images include the original input image and the
atmospherically corrected image. Other images are displayed, depending on which
module is running. For Target Detection, this includes the transform images and rule
images for each target signature.
For images with multiple spectral bands, you can load several default band
combinations with a single click. These include Natural Color Composite, False
Color Composite, and SWIR Color Composite.
Figure 7-57: Use the Rule Thresholding panel to change the band combination
or image displayed in the THOR Viewer.
If the selected image does not have the necessary bands to create that composite,
THOR will display a warning. To load a single band or color composite of your
choice, click Select Another Base Image. The Select Bands to View dialog appears.
Select the Gray Scale or RGB Color radio button, then select the desired bands.
When done, click OK to have the selected band(s) display in the THOR Viewer.
Figure 7-58: Table showing center wavelengths used for finding bands for the
default color composite.
Setting Thresholds
Use the THOR Viewer and associated histogram panels to set the thresholds as
needed. See “Using the THOR Viewer” on page 663 for more information.
Spatial Filtering
Use the Spatial Filtering panel to filter the detection overlays produced by the Rule
Thresholding panel. The THOR Viewer is displayed along with the existing detection
overlays. (See “Using the THOR Viewer” on page 663 for more information.)
1. To continue in a workflow without performing spatial filtering, select the Do
not perform spatial filtering option.
2. In the Select Layers to be Filtered list, select one or more layers to which the filter will apply. This list lets you apply one set of filtering parameters to some layers and a different set of parameters to others.
3. Spatial filtering consists of clumping and/or sieving. If both clumping and
sieving are selected, clumping occurs first, then sieving. Select the Clump
Results and/or Sieve Results options to indicate whether each will be
performed.
Clumping groups nearby isolated detections into one cohesive blob. The larger the Clump operator size parameter, the farther apart isolated detections can be and still be grouped together.
Sieving removes detections that are smaller than the Min Blob Size or larger than the Max Blob Size. The Pixel Neighbor Scheme determines how blob sizes are calculated: by default, all eight neighboring pixels are considered part of the blob; alternatively, only the four adjacent neighbors may be considered for blob grouping. (A conceptual sketch of clumping and sieving follows these steps.)
4. Click Apply Filter to preview the filtered results. Click Flicker Results to alternate between the original (unfiltered) detection overlay and the filtered overlay, which makes the changes easier to see.
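As a rough illustration of the clump-then-sieve sequence described in step 3, the sketch below applies a morphological closing (dilate, then erode) and then drops blobs whose size falls outside a range, using a 4- or 8-neighbor scheme for blob labeling. This is a conceptual approximation using SciPy, not THOR's implementation; the function and parameter names are illustrative.

import numpy as np
from scipy import ndimage

def clump_and_sieve(mask, clump_size=3, min_blob=4, max_blob=None, neighbors=8):
    # Clumping: morphological closing (dilate then erode) with a square operator.
    kernel = np.ones((clump_size, clump_size), bool)
    clumped = ndimage.binary_closing(mask, structure=kernel)

    # Sieving: label blobs with 4- or 8-neighbor connectivity, then keep only
    # blobs whose pixel count lies between min_blob and max_blob.
    if neighbors == 8:
        structure = np.ones((3, 3), bool)
    else:
        structure = ndimage.generate_binary_structure(2, 1)
    labels, nblobs = ndimage.label(clumped, structure=structure)
    sizes = ndimage.sum(clumped, labels, index=np.arange(1, nblobs + 1))

    keep = np.zeros(nblobs + 1, dtype=bool)
    for i, size in enumerate(sizes, start=1):
        keep[i] = size >= min_blob and (max_blob is None or size <= max_blob)
    return keep[labels]

detections = np.random.rand(50, 50) > 0.9            # toy detection overlay
filtered = clump_and_sieve(detections, clump_size=3, min_blob=4, neighbors=4)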
metadata cell displays its contents in the text box below the table, making it
easier to read long fields.
Figure 7-61: Click on signatures to plot them and view their metadata.
4. In some cases, you may want to compare library signatures to image signatures
that have different wavelength units or y-axis scaling. Enter the desired values
in the X Scale and/or Y Scale boxes. The next time you select a signature from
the list, these scale factors will be applied to the signature. You can then drag
these signatures into a plot window displaying an image signature for direct
comparison.
5. It is possible to list only those signatures in the library that match certain
metadata criteria. Click the Query Parameters tab to show the metadata query
parameters. See “Query Library” on page 681 for more information on queries.
Review Detections
Use the Review Detections panel to review and flag target detections resulting from
the Anomaly Methods panel. The Review Detections panel provides a simple and quick way to review each detection: you can plot its mean spectrum and attempt to identify it using the Material Identification tool. You can export targets of interest using the Export Targets panel. The following steps describe how to use
the Review Detections panel.
1. To skip reviewing detections, click the Do not review detections option in the
Review Detections panel. THOR will export all detected targets to a single
class in the Export Targets panel. Otherwise, leave this option unchecked.
2. The THOR Viewer is displayed with the detections overlaid. The THOR
Viewer target list will have four layers loaded: Pending, Good, Bad, and the
input detection layer (Anomalies in the example below). The input detection
layer is initially hidden. By default, all detections are placed in the Pending
layer. You can adjust the properties of these layers by clicking the layer name.
See “Using the THOR Viewer” on page 663 for more information on setting
layer properties.
Figure 7-62: Overview of the Review Detections panel. THOR Viewer is shown
at right.
3. You can change the reference image in the THOR Viewer using the Reference
Image drop-down list. Depending on the input image bands available, several
different color composites will be available in addition to rule images used to
create the detection overlay.
4. The detection list shows all of the detections (clusters of pixels) in the detection overlay, along with several properties. Detection list fields include ID (numeric
identifier for detections), Pixels (the number of pixels in the detection),
Strength (the mean value of the rule image used to generate the detection), and
State (Pending, Good, or Bad). Clicking on a row header will select the
detection and center the THOR Viewer on that detection. Clicking on a column
header will sort the table by that column. You can sort the table by selecting the
desired field from the Sort by drop-down list. Use the List Detections drop-
down list to view all or only certain types of flagged detections.
5. For each detection, determine whether it is Good (of interest) or Bad (not of
interest). You may examine the detections visually, use Plot Spectrum to view
the mean spectrum for the detection, or Identify Material to use the Material
Identification panel to attempt to identify the detection. Data must be in units
of reflectance to properly use the Material Identification panel. When you have
determined whether the detection is of interest or not, click on the
appropriately colored flag to mark it. The detection will change color both in
the THOR Viewer and in the table. The next detection will then automatically
be selected and the THOR Viewer will center over it, so you can review it. You
may also use the Next and Prev buttons to manually navigate through the
detection table.
Figure 7-63: Example of the Review Detections panel and THOR Viewer while
reviewing detections.
Edit Metadata
Use the Edit Metadata panel to add or edit metadata for spectral signatures being
imported into a spectral library via the THOR Spectral Library Builder.
The available signatures (derived from the Import Signatures panel) are displayed at
the top. Any available metadata for the selected signature are displayed at the bottom.
Above the metadata table is a row of buttons that perform various functions.
Adding Fields
1. To add a new metadata field, select the signature you want to add metadata to.
2. Click Add Field. You can add a new field for just the selected signature, or for
all signatures.
3. Enter the new field name (required) and field contents (optional) in the Add
Field to Metadata field.
4. Click Accept. The new field is added to the selected signature(s).
Removing Fields
1. To remove a metadata field, select the signature whose metadata you want to
edit.
2. Select the metadata field to delete.
3. Click Delete Field. You can delete the field for just the selected signature, or
for all signatures.
4. Click Accept.
Editing Field Names and Content
1. Select the signature whose metadata you want to edit.
2. Select the metadata field to edit.
3. Click View/Edit Field Contents. The View/Edit Field dialog appears.
4. Edit the field name and contents as desired, then click Accept.
5. To view the spectrum for any signature, select the signature name and click
Plot Spectrum.
Figure 7-65: Example of selecting a single transformed band to see how each
input spectral band contributed to it.
You can also select several transformed bands by clicking and dragging in the
list, or by holding down the Ctrl or Shift key while selecting from the list. The
resulting plot will be the mean of all the plots for the selected bands.
Figure 7-66: Example of selecting multiple transformed bands to see the mean
of all input spectral band contributions.
Figure 7-67: Transform band weights can reveal interesting information about the
input dataset.
Setting the first toggle button in the Transform Band Weightings dialog to
Create new window will display a new plot window each time you select a
new transform band. Otherwise, the currently active plot window will be used.
Leaving the bottom toggle button at Replace existing plot will replace the previous band selection with the new selection. If you select Accumulate plots, the new selection is overlaid on the previous selections in the same plot window.
Figure 7-68: Example of accumulating plots for multiple transform bands in the
same plot window.
Export Library
Use the Export Library panel to write a new spectral library to disk from within the
THOR Spectral Library Builder. You can export the library to Metadata Rich Spectral
Library (MRSL) format or to the standard ENVI spectral library (.sli) format.
When exporting to a standard ENVI spectral library, all signatures will be resampled
to the same spectral range and band spacing, and all metadata beyond signature
names will be removed.
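The resampling to a common spectral range and band spacing can be pictured as interpolating every signature onto one shared wavelength grid. The following is a minimal sketch of that idea using linear interpolation; ENVI's actual resampling method is not specified in this passage, so treat the code as illustrative only.

import numpy as np

def resample_to_grid(wl_in, refl_in, wl_out):
    # Linearly interpolate a signature onto a shared wavelength grid.
    return np.interp(wl_out, wl_in, refl_in)

wl_a = np.array([0.45, 0.55, 0.65, 0.85])        # wavelengths in micrometers
refl_a = np.array([0.08, 0.12, 0.10, 0.35])      # reflectance values
common_grid = np.linspace(0.45, 0.85, 9)         # grid shared by all signatures
print(resample_to_grid(wl_a, refl_a, common_grid))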
Exporting to MRSL Format
1. From the Output Format drop-down list, select Metadata Rich Spectral
Library (.msl).
2. Choose an output filename and location. The file will be appended with an
.msl extension. The output MRSL file will preserve each signature's spectral
range, band spacing, and metadata.
Note
ENVI cannot read MRSL files; only THOR can read these files. Use the
THOR Spectral Library Viewer to view MRSL files.
Export Targets
Use the Export Targets panel to export target detection overlays produced by the Rule
Thresholding panel to regions of interest (ROIs) or vector files. If you previously
used the Spatial Filtering panel, then the filtered results will be exported.
First, select the layers that you want to export from the Target List. Then select one of
the following options.
Exporting to ROIs
ENVI uses ROIs for a number of processes. Since ROIs are pixel-based, the input
image does not need to be georeferenced for this option to be available.
1. Click Export to ROIs. The input image is loaded into a display group, and the
new ROIs are added to the ROI Tool dialog. The new ROI names are appended
with (THOR derived target) to differentiate them from ROIs you may
have created as training sets.
2. The exported ROIs only exist in memory at this point. To save them to disk for
future use, select File → Save ROIs from the ROI Tool menu bar.
Exporting to ENVI Vector Format
1. Click Export to EVFs. This option is only available when the input image is
georeferenced.
2. Enter a root name for the output EVF files. One EVF file will be created per
layer with an extension of _.evf. If the input image projection is not
Geographic Lat/Lon, all vectors will be converted to Geographic Lat/Lon
when each EVF is created. Once processing is complete, the exported layers
appear in ENVI's Available Vectors List. From there, you can load the vectors
into a vector or image window.
Exporting to Shapefiles
1. Click Export to Shapefiles. This option is only available when the input
image is georeferenced.
2. Enter a root name for the output shapefiles. One shapefile (along with ancillary
files) will be created per layer with an extension of _.shp. If the input image
projection is not Geographic Lat/Lon, all vectors will be converted to
Geographic Lat/Lon when each shapefile is created. Once processing is
complete, the exported layers will be written to the output directory but will
not be loaded into ENVI (as this would require a conversion back to EVF).
C.-I Chang, J.-M. Liu, B.-C. Chieu, C.-M. Wang, C. S. Lo, P.-C. Chung, H. Ren,
C.-W. Yang, and D.-J. Ma, “A generalized constrained energy minimization approach
to subpixel target detection for multispectral imagery,” Optical Engineering, vol. 39,
no. 5, pp. 1275-1281, May 2000. (CEM)
H. Ren and C.-I Chang, “Target-constrained interference-minimized approach to
subpixel target detection for hyperspectral imagery,” Optical Engineering, vol. 39, no.
12, pp. 3138-3145, December 2000. (TCIMF)
S. Johnson, “Constrained energy minimization and the target-constrained
interference-minimized filter,” Optical Engineering, vol. 42, no. 6, pp. 1850-1854,
June 2003. (CEM and TCIMF)
S. Kraut, L. L. Scharf, and R. W. Butler, “The adaptive coherence estimator: a
uniformly most-powerful-invariant adaptive detection statistic,” IEEE Trans. on
Signal Processing, vol. 53, no. 2, pp. 427-438, 2005. (ACE)
D. Manolakis, D. Marden, and G. A. Shaw, “Hyperspectral image processing for
automatic target detection applications,” Lincoln Laboratory Journal, vol. 14, pp. 79-
116, 2003. (ACE)
J. C. Harsanyi and C.-I Chang, “Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach,” IEEE Trans. on Geoscience and Remote Sensing, vol. 32, no. 4, pp. 779-785, 1994. (OSP)
C.-I Chang, “Further results on relationship between spectral unmixing and subspace projection,” IEEE Trans. on Geoscience and Remote Sensing, vol. 36, pp. 1030-1032, May 1998. (OSP)
C.-I Chang, “Hyperspectral Imaging: Techniques for Spectral Detection and Classification,” Kluwer Academic Publishers, Dordrecht, 2003. (OSP, CEM, TCIMF)
J. W. Boardman, “Leveraging the high dimensionality of AVIRIS data for improved
sub-pixel target unmixing and rejection of false positives: mixture tuned matched
filtering,” In: 7th JPL Airborne Geoscience Workshop, pp. 55-56, 1998. (MTMF)
The first panel in the Target Detection Wizard, the Introduction panel, explains the
overall workflow of the Wizard.
Click Next to proceed to the next panel, where you provide input for the first step in
the workflow. When that panel appears, you will see that it shows Wizard settings
only; the Wizard description text is hidden, as in the following figure.
Figure 7-70: Select Input/Output Files Panel with Wizard Text Hidden
To show the text next to the settings, click Show Text. The following figure shows the
Select Input/Output Files panel with the Wizard text visible.
Figure 7-71: Select Input/Output Files Panel with Wizard Text Shown
3. Click Select Output Root Name. The Output Files dialog appears.
4. Enter a File name to use as the root name of all output files generated by the
target detection analysis, then click OK. ENVI appends a descriptive extension
to the output filename: _mnf for MNF transform results, _mf for Matched
Filtering rule images, _cem for Constrained Energy Minimization rule images,
_mf_target for Matched Filtering binary decision, _mf_target.shp for
Matched Filtering shapefile output, _mf_target.roi for Matched Filtering
ROI output, and so forth.
Note
If files with the specified root name already exist, ENVI prompts you to
delete the existing files before continuing. If you select No, you need to enter
a different output root filename.
one target spectra and you need to select non-target spectra in the next step in the
Wizard.
1. Enter target spectra from spectral libraries, individual spectral plots, text files,
ROIs, or statistics files. See “Importing Spectra” on page 442 for details.
Tip
If using radiance or uncalibrated data as input, or if you applied an
atmospheric correction that gives relative reflectance, spectra derived directly
from the image data usually produce better results than selecting library
targets. The image spectra more accurately account for any errors in
calibration or atmospheric correction, the scales of mixing that occur in your
data, and sensor response effects.
Tip
If you are importing ROIs as targets, the average calculated from each region
is used as the target spectrum.
2. When importing spectra is complete, choose the spectra to use by selecting one
or more rows in the table, or by clicking Select All. ENVI uses only the spectra
you select for target detection processing. If you do not select any rows, ENVI
uses all spectra.
Note
If you select only one target spectrum and do not specify non-target spectra in the next step, the OSP, TCIMF, and MTTCIMF methods will not be available as target detection methods in a later step.
3. Click Next in the Wizard. The Select Non-Target Spectra panel appears.
2. Enter non-target spectra from spectral libraries, individual spectral plots, text
files, ROIs, or statistics files. To import the non-target spectra, see “Importing
Spectra” on page 442.
Tip
If you are importing ROIs as non-targets, the average calculated from each region is
used as the non-target spectrum; therefore, you want to include each separate
material as a separate ROI.
3. When importing spectra is complete, choose the spectra to use by selecting one
or more rows in the table, or by clicking Select All. ENVI uses only the spectra
you select for target detection processing.
4. Click Next in the Wizard. The Apply MNF Transform panel appears.
1. To apply the MNF transform, leave the Apply MNF Transform? toggle at
Yes. If you toggle to No, click Next to proceed.
2. If you wish to set advanced parameters for MNF transformation, click Show
Advanced Options. Otherwise, click Next to proceed. The next steps are for
the advanced settings.
4. Click Noise Stats Shift Diff Spatial Subset to select a spatial subset, an area under an ROI/EVF, or another image or file on which to calculate the statistics. You can then apply the calculated results to the entire file (or to the file subset, if you selected one when you selected the input file).
5. Click Next in the Wizard. The Target Detection Methods panel appears.
only available if you provided more than one target spectrum in Step 3, or if you provided non-target spectra in Step 4.
• Mixture Tuned Target-Constrained Interference-Minimized Filter
(MTTCIMF): This method combines the Mixture Tuned technique and
TCIMF target detector. It uses a Minimum Noise Fraction (MNF)
transform input file to perform TCIMF, and it adds an infeasibility image
to the results. The infeasibility image is used to reduce the number of false
positives that are sometimes found when using TCIMF alone. The output
of MTTCIMF is a set of rule images corresponding to TCIMF scores and a
set of images corresponding to infeasibility values. The infeasibility results
are in noise sigma units and indicate the feasibility of the TCIMF result.
Correctly mapped pixels have a high TCIMF score and a low infeasibility
value. If non-target spectra were specified, MTTCIMF can potentially
reduce the number of false positives compared with MTMF. This method is only available if you provided more than one target spectrum in Step 3 or if you provided non-target spectra in Step 4, and you applied the MNF transform in Step 5.
• Mixture Tuned Matched Filtering (MTMF): MTMF uses an MNF
transform input file to perform MF, and it adds an infeasibility image to the
results. The infeasibility image is used to reduce the number of false
positives that are sometimes found when using MF alone. Pixels with a
high infeasibility are likely to be MF false positives. Correctly mapped
pixels will have an MF score above the background distribution around
zero and a low infeasibility value. The infeasibility values are in noise sigma units and vary in DN scale with the MF score. This method is only available if you applied the MNF transform in Step 5.
2. If you wish to set advanced parameters for the target detection methods, click
Show Advanced Options. Otherwise, click Next to proceed. The next steps
pertain to the advanced settings.
3. If you selected the CEM and/or TCIMF method(s), you need to model the
composite unknown background with statistics such as a covariance or
correlation matrix. The reference target signature is generally only present in a
small number of pixels in a scene, so the background statistics are usually
computed over the whole scene. Click the toggle button to choose whether the Covariance Matrix or the Correlation Matrix is used for the calculation.
4. If you selected the ACE, CEM, MF, TCIMF, MTMF, and/or MTTCIMF
method(s), you can model the scene background by removing anomalous
pixels before calculating background statistics. To model the subspace
background, enable the Use Subspace Background check box. By better
To display other output, do the following. Any selections you make are automatically
reflected in the display group.
1. Select one target from the Target column.
2. Select one method from the Method column.
The new target and/or method are loaded into the display group. When you
select MTMF and MTTCIMF results, a 2D Full Band Scatter Plot opens.
When viewing results that include a Full Band Scatter Plot, a good region to visualize
is one defined by high detection scores and low infeasibility values.
Though similar to the 2D Scatter Plot, the Full Band 2D Scatter Plot view is
constructed from the entire band, not just the data visible in the Image window. When
the view in the Image window changes, the view in the Full Band Scatter Plot does
not. When you select different regions in the Full Band Scatter Plot, the highlighted
pixels in the Image, Scroll, and Zoom windows update.
Dancing pixels, density slice, and menus are not available. Use the following mouse
buttons in the 2D plot:
• Draw polygons using the left mouse button. Left-click in the scatter plot to
define the vertices of a new polygon that will select pixels. Right-click to close
the polygon.
Note
Clicking the middle mouse button before you close the polygon erases it.
• Resize the scatter plot using the middle mouse button. Click the middle mouse
button, grab the corner of the window and drag to the desired size. To reset the
plot to its default size, middle-click anywhere on the plot.
Note
Resizing the scatter plot causes any drawn polygon(s) to be reset.
• Erase all drawn polygons by clicking the middle mouse button outside of the
scatter plot.
To further refine the display, you can do the following:
• To apply a different stretch to the display, select from the Default Stretch
drop-down list. Square Root is typically a good stretch when the target pixels
occupy a small population of the image.
• For MF, CEM, ACE, SAM, OSP, and TCIMF: ENVI uses a default
thresholding value. To apply a different Rule Threshold, move the slider or
enter a new value. If the rule value is larger than the threshold, the pixel is
highlighted in the display. Adjust the threshold to a higher value to select fewer
pixels, and vice versa.
Note
Threshold settings behave in the opposite manner for SAM since lower
values represent a high probability of being a target in the SAM rule image.
• To change the color of the detected target pixels in the display, see “Editing
ROI Attributes” on page 326.
When you have finished loading and previewing the output images, click Next. The
Target Filtering panel appears.
Filter Targets
In Step 8 of the Target Detection Wizard, you can use different filtering options to clean up mis-detected pixels and false positives. A single target object is sometimes split into multiple objects by just a few mis-detected pixels, and false positives are sometimes detected as blobs containing only one or a few pixels.
1. Select the target to filter from the Select Target drop-down list.
2. Choose from the following Filter Options:
• Clumping: Groups the separated pixels into one object. The target pixels are clumped together by first performing a morphological dilate operation, followed by an erode operation, using a kernel of the size specified in the clumping parameters.
• Sieving: Removes small isolated objects. It looks at the neighboring four
or eight pixels to determine if a target pixel is grouped with other target
pixels. If the number of target pixels that are grouped is less than the
Group Min Threshold value, ENVI removes those pixels from the output
image.
3. If you are using the Clumping filter, set the morphological Operator Size for
Rows and Columns.
4. If you are using the Sieving filter, set the minimum number of pixels contained
in a class group in the Group Min Threshold field. ENVI removes any groups
of pixels smaller than this value from the class.
5. For Sieving, toggle the Number of Neighbors button to select 4 or 8 as the
neighboring pixels to look at when determining the number of pixels in a class
group. The four-neighbor region around a pixel consists of the two adjacent
horizontal and two adjacent vertical neighbors. The eight-neighbor region
around a pixel consists of all the immediately adjacent pixels.
6. Click Next to filter the display. The Export Results panel appears.
Export Results
In Step 9 of the Target Detection Wizard, you select whether to export the target
detection results to one shapefile and/or ROI per target detection method.
For shapefile output, ENVI performs raster-to-vector conversion for each of the
selected target detection methods and lists the output in the Available Vectors List.
The shapefiles and ROIs are saved to the directory you specified with the output root
name at file input.
If you are exporting to ROIs, enable Display ROI to have ENVI automatically
display the ROI Tools dialog after the ROIs are created.
After you make your selection, click Next. When ENVI is done producing export
files, the View Statistics and Report panel appears.
If you exported to Shapefiles, the Available Vectors List appears, listing the
Shapefiles ENVI created.
If you exported to ROIs and enabled the Display ROIs check box, the ROI Tools
dialog appears. Use the ROI Tools dialog to find and delete false positives. Navigate
to target objects that are false positives and click Delete Part.
Spectral Libraries
Use Spectral Libraries to build and maintain personalized libraries of material
spectra, and to access several public-domain spectral libraries. ENVI provides
spectral libraries developed at the Jet Propulsion Laboratory for three different grain
sizes of approximately 160 “pure” minerals from 0.4 to 2.5 μm. ENVI also provides
public-domain U.S. Geological Survey (USGS) spectral libraries with nearly 500
spectra of well-characterized minerals and a few vegetation spectra, from a range of
0.4 to 2.5 μm. Spectral libraries from Johns Hopkins University contain spectra for
materials from 0.4 to 14 μm. The IGCP 264 spectral libraries were collected as part
of IGCP Project 264 during 1990. They consist of five libraries measured on five
different spectrometers for 26 well-characterized samples. Spectral libraries of
vegetation spectra were provided by Chris Elvidge, measured from 0.4 to 2.5 μm. See
“ENVI Spectral Libraries” on page 1183 for more information and references.
ENVI spectral libraries are stored in ENVI’s image format, with each line of the
image corresponding to an individual spectrum and each sample of the image
corresponding to an individual spectral measurement at a specific wavelength (see
“ENVI Spectral Libraries” on page 1183). You can display and enhance ENVI
spectral libraries.
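The storage layout described above can be pictured as a simple two-dimensional array, as in the following sketch (toy values only):

import numpy as np

# Lines = individual spectra; samples = measurements at specific wavelengths.
wavelengths = np.array([0.45, 0.55, 0.65, 0.75, 0.85])   # micrometers
library = np.array([
    [0.05, 0.08, 0.11, 0.30, 0.35],   # spectrum 1
    [0.10, 0.12, 0.15, 0.18, 0.20],   # spectrum 2
    [0.02, 0.03, 0.04, 0.05, 0.06],   # spectrum 3
])
second_spectrum = library[1, :]   # one line holds one complete signature
third_band = library[:, 2]        # one sample holds one wavelength across all spectra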
2. To plot multiple spectra at the same time, select multiple spectrum names.
Each new spectrum is plotted in the same Spectral Library plot window as a
new color.
3. To show the names and colors of the plotted spectra, right-click inside the
Spectral Library plot window and select Plot Key.
4. To scale the library spectra by a constant, from the Spectral Library Viewer
menu bar, select Options → Edit (x,y) Scale Factors, then enter a value in the
X Multiplier and Y Multiplier fields.
5. Click OK. The spectral library spectra are multiplied by that value. Scale
factors are used to compare library spectra to image spectra that are scaled into
integers, or to convert wavelength units between microns and nanometers.
Figure 7-72: Spectral Library Viewer Dialog (left) Showing Individual Spectra and
Spectral Library Plot Window (right)
3. Click OK. ENVI adds the resulting output to the Available Bands List.
If the input and output wavelength ranges do not overlap completely, and you
did not specify output wavelength units, ENVI prompts you to specify the
output wavelength units to perform the conversion.
Resampling from User-Defined Filter Functions
If you select User Defined Filter Function as the resampling method in the Spectral
Resampling Parameters dialog, the Input Filter Function Spectral Library dialog
appears.
The user-defined filter function must take the form of an ENVI spectral library with
each sample of the image representing a wavelength value and each line of the image
representing an individual filter function. The value at each wavelength must be a
weight between 0 and 1, which is used as a multiplicative factor when applied to the
library being resampled. To see an example filter function file, navigate to the
filt_func directory of your ENVI installation, open the Landsat TM file tm.sli
as a spectral library file, and plot the filter functions.
1. In the Input Filter Function Spectral Library dialog, select the filter function.
2. Click OK. ENVI adds the resulting output to the Available Bands List.
Note
You should not resample from a low-resolution to a high-resolution spectrum; the results may be invalid.
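The filter-function weighting described above can be sketched as follows: each output band is a weighted combination of the input spectrum, with the filter function values (0 to 1) as multiplicative weights. Normalizing by the sum of the weights is an assumption made here for readability; the exact normalization ENVI applies is not described in this passage.

import numpy as np

def apply_filter_functions(spectrum, filters):
    # filters: (n_output_bands, n_wavelengths) array of 0-1 weights, one line
    # per output band, sampled on the same wavelength grid as `spectrum`.
    weighted = filters * spectrum                      # multiplicative weighting
    return weighted.sum(axis=1) / filters.sum(axis=1)  # assumed normalization

spectrum = np.array([0.05, 0.10, 0.20, 0.25, 0.30, 0.32])
filters = np.array([
    [1.0, 0.5, 0.0, 0.0, 0.0, 0.0],   # response of output band 1
    [0.0, 0.0, 0.3, 1.0, 0.3, 0.0],   # response of output band 2
])
print(apply_filter_functions(spectrum, filters))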
Collecting Spectra
Use the Spectral Library Builder dialog to collect endmember spectra from a variety
of sources. All spectra are automatically resampled to the selected wavelengths. For
details, see “Importing Spectra” on page 442 and “Managing Endmember Spectra”
on page 453.
Spectral Slices
Use Spectral Slices to extract a combined spatial and spectral profile from a multi-band image. You can slice images in the following ways:
• Horizontally: All of the bands for a single line of the image
• Vertically: All of the bands for a single pixel column of the image
• Arbitrarily: You define the direction by selecting an ROI polyline (see
“Drawing ROIs” on page 323).
Slices in ENVI are saved as gray scale images, with the following characteristics:
• The line direction (y) corresponds to the spatial dimension of the image being sliced. For a horizontal slice, the number of lines in the slice equals the number of samples in the input image. For a vertical slice, the number of lines in the slice equals the number of lines in the input image. For an arbitrary slice, the number of lines equals the total number of pixels along the ROI polyline. (A sketch of this layout follows the list.)
• The sample direction (x) corresponds to the spectral dimension, or the number
of bands in the sliced image.
• The gray scale image shows the spectral intensity (reflectance, radiance, and so
forth), depending on the calibration of the data.
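A minimal sketch of the slice layout described above, assuming an image cube ordered as (bands, lines, samples); the array names and shapes are illustrative only.

import numpy as np

cube = np.random.rand(6, 100, 120)    # toy image: 6 bands, 100 lines, 120 samples

# Horizontal slice: all bands for one image line.
# Slice lines (y) = image samples, slice samples (x) = bands.
horizontal_slice = cube[:, 40, :].T   # shape (120, 6)

# Vertical slice: all bands for one pixel column.
# Slice lines (y) = image lines, slice samples (x) = bands.
vertical_slice = cube[:, :, 25].T     # shape (100, 6)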
MNF Rotation
Use MNF Rotation to determine the inherent dimensionality of image data, to
segregate noise in the data, and to reduce the computational requirements for
subsequent processing. For more information, see “Minimum Noise Fraction
Transform” on page 516.
3. Click OK. The Pixel Purity Index Parameters or FAST Pixel Purity Index
Parameters dialog appears, depending on which menu item you selected in
step 1.
Figure 7-74: Pixel Purity Index Parameters (left) and Fast Pixel Purity Index
Parameters (right) Dialogs
A processing status dialog appears with the PPI plot. This plot shows the total
number of extreme pixels satisfying the threshold criterion found by the PPI
processing as a function of the number of iterations. It should asymptotically
approach a flat line (zero slope) when all of the extreme pixels are found.
7. Click OK. A processing status dialog and Pixel Purity Index Plot appear.
ENVI adds the resulting output to the Available Bands List.
3. From the Display group menu bar, select Overlay → Region of Interest to
open the ROI Tool dialog.
4. From the ROI Tool menu bar, select Options → Band Threshold to ROI to create an ROI containing only the pixels with high PPI values (see “Converting Band Values to ROIs” on page 337).
You should specify a minimum threshold. For example, a minimum value of 10
includes all pixels with PPI values greater than 10 in the ROI. However, if bad data
points exist in the PPI image, you can use both a minimum and maximum threshold.
When you have created an ROI containing the high PPI values, you can use the n-D
Visualizer to interactively define the image endmembers.
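As a simple illustration of the thresholding described above (the threshold values here are arbitrary):

import numpy as np

ppi = np.random.randint(0, 500, size=(200, 200))   # toy PPI image
roi_mask = ppi > 10                                 # minimum threshold only
roi_mask_bounded = (ppi > 10) & (ppi < 400)         # minimum and maximum thresholds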
Background
This discussion provides a brief introduction to plotting spectra in n-D space. To
bypass this and to begin using the n-D Visualizer, skip to “Starting the n-D
Visualizer” on page 757.
Think of spectra as points in an n-D scatter plot, where n is the number of bands. The
coordinates of the points in n-D space consist of n spectral radiance or reflectance
values in each band for a given pixel. You can use the distribution of these points in n-
D space to estimate the number of spectral endmembers and their pure spectral
signatures.
Endmembers are pure spectrally unique materials that occur in a scene. Using a linear
unmixing model, you can reconstruct every spectrum in the image as some
combination of image endmember spectra.
The n-D Visualizer was designed to receive region of interest (ROI) input containing
the spectrally purest pixels in a scene, and to allow you to segregate these pure pixels
into their respective endmembers. ROIs are used because interactively rotating all of
the data in a full image is too computationally intensive. A pure pixel is one that is the
closest to containing only one spectrally unique endmember material. Defining an
ROI that contains the purest pixels in an image is easy when you use ENVI's PPI tool.
(See “Pixel Purity Index” on page 749.) You can also import an ROI to the n-D
Visualizer that is not based on the purest pixels in the image. However, doing so
means that some endmembers may not be represented in the resulting n-D scatter
plot.
Figure 7-76: Scatter Plot Showing Pure Pixels and Mixing Endmembers
Now consider a third pixel that is 100 percent filled with sand. This pixel creates a
third corner to the data cloud. Any pixel that contains a mixture of sand, water, and
grass will fall inside the triangle defined by connecting the three pure pixels together:
Figure 7-77: Pure Pixels Defining the Corners of the Scatter Plot
Any pixel that contains only two of the three materials falls on the edge of the
triangle, but only the pure pixels fall in the corners of the triangle. In this example, the
data cloud forms a triangle. This example considers only a 2D scatter plot with three
endmembers, but even in scatter plots using any number of dimensions with data
containing any number of endmembers, pure pixels always plot in the corners of the
data cloud, and mixed pixels will fall within the shape defined by these corners.
Once you have started the n-D Visualizer, you can display scatter plots with as many
dimensions as you have bands in your input image. When you display a scatter plot
with more than three dimensions, you may notice that as the data cloud rotates, it
occasionally folds in on itself in a manner that is difficult to comprehend. This is a
result of plotting more dimensions than you can visualize (since we live in a 3D world!). However, understanding the direction of the rotations is not necessary to derive useful information from the n-D Visualizer.
Unlike ENVI’s 2D Scatter Plots tool, the n-D Visualizer scatter plots are arranged so
that the mean of each image band falls in the center of the plot. The origin of the
scatter plot is therefore in the center of the data cloud.
5. A status box appears while the ROI is loaded. The n-D Visualizer and n-D
Controls dialogs appear.
The n-D Visualizer and n-D Controls dialogs appear. The precluster results are
shown as colored pixels in the n-D Visualizer.
5. Rotate the data cloud to assess the results and modify them as needed.
Figure 7-78: n-D Visualizer (left) and n-D Controls dialog (right)
• Clicking an individual band number in the n-D Controls dialog turns the band
number white and displays the corresponding band pixel data in the n-D scatter
plot. You must select at least two bands to view a scatter plot.
• Clicking the same band number again turns it black and turns off the band
pixel data in the n-D scatter plot.
• Selecting two bands in the n-D Controls dialog produces a 2D scatter plot;
selecting three bands produces a 3D scatter plot, and so on. You can select any
combination of bands at once.
1. In the n-D Controls dialog, click the band numbers (which determine the number of dimensions) you want to project in the n-D Visualizer. If you select only two dimensions, rotation is not possible. If you select 3D, you have the option of driving the axes or initiating automatic rotation. If you select more than 3D, only automatic random rotation is available.
2. Select from the following options.
• To drive the axes, select Options → 3D: Drive Axes from the n-D
Controls menu bar. Click and drag in the n-D Visualizer to manually spin
the axes of the 3D scatter plot.
• To display the axes themselves, select Options → Show Axes from the n-
D Controls menu bar.
• To start or stop the rotation, click Start or Stop in the n-D Controls dialog.
• To control the rotation speed, enter a Speed value in the n-D Controls
dialog. Higher values cause faster rotation with fewer steps between views.
• The information in the View status box in the n-D Controls dialog tells you
the number of steps you are moving through between the random
projection views.
• To move step-by-step through the projection views, click <- to go
backward and -> to go forward.
• To display a new random projection view, click New in the n-D Controls
dialog.
Identifying Endmembers
When using the n-D Visualizer, your goal is to visually identify and distinguish the
purest pixels in the image. The purest pixels always form the very tip of a corner in
the data cloud. Each corner corresponds to one spectrally unique material in the
image. Therefore, you should try to find all the corners of the data cloud and assign
each corner a different color. Once you separate the purest pixels into different
classes this way, you can use the pixel spectra from those classes as the endmembers
for spectral analysis (such as Linear Spectral Unmixing or Matched Filtering).
To determine which pixels correspond to different image endmembers, watch the data
cloud rotate until the pixels form a protrusion, or arm, out of the data cloud. When a
distinct corner becomes visible, stop the animation, select a class color from the
Class menu, and circle the most extreme corner pixels to signify that they represent
one endmember. It is best not to circle all of the pixels that cluster into a corner. If
possible, you should try to identify only the few pixels in that corner that form the
most extreme tip of the corner. These pixels contain the largest fraction of that
particular endmember material. The less extreme pixels contain larger fractions of
other materials. A corner may consist of tens of clustered pixels or only one or two
similar pixels.
After you have colored the corner pixels, watch the data cloud rotate, and make sure
that the pixels you selected stay together in all projections. You should change the
bands used in the scatter plot periodically, so that every band is ultimately included in
the scatter plot. This ensures that all of the pixels you have identified as being in the
same class really do have similar values at all wavelengths. If the pixels do not cluster
in all projections, they do not correspond to the same material. If you find that some
of the pixels separate from the rest of the class in some projections, then you can
delete those pixels from the class by choosing White as the class color and circling
the errant pixels.
Defining Classes
Typically, classes are defined when groups of pixels stay together during rotation and
are separated from the rest of the pixels. You can define multiple classes at once. Use
the Z Profile option to help define classes (see “n-D Visualizer/Controls Options” on
page 766).
1. Click Stop in the n-D Controls dialog to stop the rotation when a group of
pixels is isolated from the main body of pixels plotted in the n-D Visualizer.
Or, use the arrow buttons to go to a particular projection view.
2. Highlight the desired pixels on the n-D Visualizer by left-clicking to set
vertices, and right-clicking to close the polygon.
3. From the n-D Controls menu bar, select Class and choose a color for the class.
To automatically use the next available class color for the next ROI, select
Class → New from the n-D Controls menu bar (or right-click in the n-D
Visualizer and select New Class).
4. Click Start to rotate the scatter plot until additional groups of pixels are
isolated, and repeat the class definition process.
From the n-D Controls menu bar, select Options → Class Controls.
All of the defined classes appear in the dialog. The white class contains all of the
unclustered or unassigned points. The number of points in each class is shown in the
fields next to the colored squares.
Turning Classes On/Off
To turn a class off in the n-D Visualizer, de-select the On check box for that class in
the n-D Class Controls dialog. Click again to turn it back on.
To turn all but one of the classes off in the n-D Visualizer, double-click the colored
box at the bottom of the n-D Class Controls dialog representing the class that you
want to remain displayed. Double-click again to turn the other classes back on.
Selecting the Active Class
To designate a class as the active class, click once on the colored square (at the
bottom of the n-D Class Controls dialog) corresponding to that class.
The color appears next to the Active Class label in the n-D Class Controls dialog, and
any functions you execute from the n-D Class Controls dialog affect only that class.
You may designate a class as the active class even though it is not enabled in the n-D
Visualizer.
Changing Plot Symbols
In the n-D Class Controls dialog, click Symbol and select the desired symbol.
Producing Spectral Plots
To produce spectral plots for the active class:
1. Click the Stats, Mean, or Plot button on the n-D Class Controls dialog. The
Input File Associated with n-D Data dialog appears.
• Stats: Display the mean, minimum, maximum, and standard deviation
spectra of the current class in one plot. These should be derived from the
original reflectance or radiance data file.
• Mean: Display the mean spectrum of the current class alone. This should
be derived from the original reflectance or radiance data file.
• Plot: Display the spectrum of each pixel in the class together in one plot.
This should be derived from the original reflectance or radiance data file.
2. Select the input file that you want to calculate the spectra from.
If you select a file with different spatial dimensions than the file you used as input into the n-D Visualizer, enter the x and y offset values for the n-D subset when prompted.
Note
If you select Plot for a class that contains hundreds of points, the spectra for all the
points will be plotted and the plot may be unreadable.
Clearing Classes
To remove all points from a class, click Clear on the n-D Class Controls dialog, or
right-click in the n-D Visualizer and select Clear Class or Clear All.
Exporting Classes
To export the points to an ROI, click Export on the n-D Class Controls dialog, or
right-click in the n-D Visualizer and select Export Class or Export All.
Designating Classes to Collapse
To include the statistics from a class when calculating the projection used to collapse
the data, select the Clp check box next to that class name in the n-D Class Controls
dialog.
If the data are in a collapsed state, they will be recollapsed using the selected classes
when you select any of the Clp check boxes (see “Collapsing Classes” on page 764).
Collapsing Classes
You can collapse the classes by means or by variance to make class definition easier
when the dimensionality of a dataset is higher than four or five. With more than four
or five dimensions, interactively identifying and defining many classes becomes
difficult. Both methods iteratively collapse the data cloud based on the defined
classes.
To collapse the data, ENVI calculates a projection (based either on class means or covariance) that minimizes or hides the space spanned by the pre-defined classes and maximizes or enhances the remaining variation in the dataset. The data are subjected to this special projection, and the projected data replace the original data in the n-D Visualizer.
Additionally, an eigenvalue plot displays the residual spectral dimension of the
collapsed data. The collapsed classes should form a tight cluster so you can more
readily examine the remaining pixels. The dimensionality of the data, shown by the
eigenvalue plot, should decrease with each collapse.
1. From the n-D Controls menu bar, select Options → Collapse Classes by
Means or Collapse Classes by Variance (see the descriptions in the following
sections).
An eigenvalue plot displays, showing the remaining dimensionality of the data
and suggesting the number of remaining classes to define. The n-D Selected
Bands widget changes color to red to indicate that collapsed data are displayed
in the n-D Visualizer.
2. Use the low-numbered bands to rotate and to select additional classes.
3. From the n-D Controls menu bar, select Options → Collapse Classes by
Means or Collapse Classes by Variance again to collapse all of the defined
classes.
4. Repeat these steps until you select all of the desired classes.
Collapsing Classes by Means
You must define at least two classes before using this collapsing method. The space
spanned by the spectral mean of each class is derived through a modified Gram-
Schmidt process. The complementary, or null, space is also calculated. The dataset is
projected onto the null space, and the means of all classes are forced to have the same
location in the scatter plot. For example, if you have identified two classes in the data
cloud and you collapse the classes by their mean values, ENVI arranges the data
cloud so that the two means of the identified classes appear on top of each other in
one place. As the scatter plot rotates, ENVI only uses the orientations where these
two corners appear to be on top of each other.
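The following sketch shows one way to realize this projection with a QR factorization, which yields the same kind of orthonormal basis a Gram-Schmidt process would; it is a conceptual approximation, not ENVI's implementation, and the array names are illustrative.

import numpy as np

def collapse_by_means(data, class_means):
    # Project the data onto the null space of the span of the class-mean
    # differences, so every class mean lands at the same location.
    means = np.asarray(class_means, dtype=float)     # shape (k, n_bands)
    diffs = (means[1:] - means[0]).T                 # shape (n_bands, k - 1)
    q, _ = np.linalg.qr(diffs)                       # orthonormal basis of the span
    projector = np.eye(means.shape[1]) - q @ q.T     # null-space projector
    return data @ projector

data = np.random.rand(1000, 5)                       # 1000 pixels, 5 bands
means = [data[:100].mean(axis=0), data[100:200].mean(axis=0)]
collapsed = collapse_by_means(data, means)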
Collapsing Classes by Variance
With this method, ENVI calculates the band-by-band covariance matrix of the
classified pixels (lumped together regardless of class), along with eigenvectors and
eigenvalues. A standard principal components transformation is performed, packing
the remaining unexplained variance into the low-numbered bands of the collapsed
data. At each iterative collapsing, this process is repeated using all of the defined
classes. The eigenvalue plot shows the dimensionality of the transformed data,
suggesting the number of remaining classes to define.
The full dataset is projected onto the eigenvectors of the classified pixels. Each of
these projected bands is divided by the square root of the associated eigenvalue. This
transforms the classified data into a space where they have zero covariance between bands and unit standard deviation.
You should have at least nb * nb / 2 pixels classified (where nb is the number of bands in the dataset) so that ENVI can calculate the nb x nb covariance matrix.
ENVI calculates a whitening transform from the covariance matrix of the classified
pixels, and it applies the transform to all of the pixels. Whitening collapses the
colored pixels into a fuzzy ball in the center of the scatter plot, thereby hiding any
corners they may form. If any of the unclassified pixels contain mixtures of the
endmembers included among the classified pixels, those unclassified pixels also
collapse to the center of the data cloud. Any unclassified pixels that do not contain mixtures of the endmembers defined so far will stand out from the data cloud much more clearly after class collapsing, making them easier to distinguish.
Collapsing by variance is often used for partial unmixing work. For example, if you
are trying to distinguish very similar (but distinct) endmembers, you can put all of the
other pixels of the data cloud into one class and collapse this class by variance. The
subtle distinctions between the unclassified pixels are greatly enhanced in the
resulting scatter plot.
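A minimal sketch of the whitening step described above, assuming the classified pixels and the full dataset are available as arrays; the ordering of the output bands and any subsequent principal components step are omitted, and the function name is illustrative.

import numpy as np

def whiten_by_classified(data, classified):
    # Project all pixels onto the eigenvectors of the classified pixels'
    # covariance and divide each projected band by the square root of the
    # associated eigenvalue, so the classified pixels collapse to a fuzzy ball.
    mu = classified.mean(axis=0)
    cov = np.cov(classified, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    return (data - mu) @ evecs / np.sqrt(np.maximum(evals, 1e-12))

data = np.random.rand(2000, 6)        # all pixels, 6 bands
classified = data[:500]               # pixels already assigned to classes
collapsed = whiten_by_classified(data, classified)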
UnCollapsing Classes
To uncollapse the data and return to the original dataset, select Options →
UnCollapse from the n-D Controls menu bar.
All defined classes are shown in the n-D Visualizer, and the band numbers return to a
white color in the n-D Controls menu bar.
Adding Annotation
To add an annotation to the n-D Visualizer window, select Options → Annotate Plot
from the n-D Controls menu bar. See “Annotating Images and Plots” on page 31 for
further details. You cannot add borders to the n-D Visualizer.
Plotting Z Profiles
To open a plot window containing the spectrum of a point selected in the n-D
Visualizer:
1. Select Options → Z Profile from the n-D Controls menu bar. The Input File
Associated with n-D Data dialog appears.
2. Select the data file associated with the n-D data. Typically, this file is the
reflectance or original data. If you select an input file with different spatial
dimensions than the file used for input into the n-D Visualizer, you will be
prompted to enter the x and y offsets that point to the n-D subset.
The Z Profile plot window appears.
3. Select from the following options:
• To plot the Z Profile for the point nearest the cursor, middle-click in the
n-D Visualizer plot window.
• To add plots to the Z Profile plot window, right-click in the n-D Visualizer
plot window. The Z Profile corresponding to the point you selected is
added to the Z Profile plot window.
When the Z Profile plot window is open, the selected file is automatically used
to calculate the mean spectra when you select Options → Mean Class or
Mean All from the n-D Controls menu bar.
Importing Spectra
To import spectra from other sources such as spectral libraries:
1. Select Options → Import Library Spectra from the n-D Controls menu bar.
The n-D Visualizer Import Spectra dialog appears.
The input spectra must be in the same space as the n-D input data (that is, MNF
space).
Tip
To convert the spectra to MNF, select Transform → MNF Rotation →
Apply Forward MNF to Spectra from the ENVI main menu bar, and use
the statistics you saved when you ran the MNF on the original data (see
“Minimum Noise Fraction Transform” on page 516).
2. Drag spectra names from a Z Profile or Spectral Library plot window into the
n-D Visualizer Import Spectra dialog, or select Import from the n-D
Visualizer Import Spectra menu bar to import spectra from a spectral library,
ROI, or ASCII file. See “Collecting Endmember Spectra” on page 439 for details.
3. Click Apply. The Import Spectra Parameters dialog appears.
4. Select from the following options to set the spectra parameters.
• To edit a spectrum name, select the spectrum and make any changes in the
Name field.
• To edit the spectrum color, select the name and select a color from the
Color button.
• To plot the spectrum in the n-D Visualizer plot window, select the Show
Spectrum check box.
5. Click OK. The spectra appear in the n-D Visualizer as stars with labels. Note
that some spectra may fall outside the current projection and will not be visible
until you rotate the data.
Deleting Spectra
To delete imported library spectra from the n-D Visualizer:
1. Select Options → Delete Library Spectra from the n-D Controls menu bar.
2. Select the spectra to delete.
3. Click OK.
Editing Spectra Parameters
To change the color or name of imported library spectra and to turn spectra on or off
in the n-D Visualizer:
1. Select Options → Edit Library Spectra from the n-D Controls menu bar.
The Import Spectra Parameters dialog appears.
2. Select the spectra to edit.
3. Edit the color, change the names, and turn the Show Spectrum check box on
or off for the individual spectra.
4. Click OK.
Clearing Classes
To clear the currently selected class from the n-D Visualizer:
1. Select Options → Clear Class from the n-D Controls menu bar, or right-click
in the n-D Visualizer and select Clear Class. The currently selected class is
defined by the Active Class label in the n-D Class Controls dialog.
2. To clear all of the classes from the n-D Visualizer, select Options → Clear All
from the n-D Controls menu bar, or right-click in the n-D Visualizer and select
Clear All.
Each time you export n-D Visualizer classes, new ROIs are generated. For example, if
you choose Export All twice, two identical ROIs are generated for each class. For
this reason, you should consider waiting until you have selected all of the corners of
the data cloud before selecting Export All.
Saving States
To save the n-D Visualizer state, select File → Save State from the n-D Controls
menu bar and enter an output filename with the extension .ndv for consistency.
Mapping Methods
Binary Encoding
From the ENVI main menu bar, select Spectral → Mapping Methods → Binary
Encoding. See “Applying Binary Encoding Classification” on page 416 for a
complete description.
1. From the ENVI main menu bar, select Spectral → Mapping Methods →
Linear Spectral Unmixing. The Input File dialog appears.
5. To apply a unit-sum constraint in the unmixing, use the toggle button to select
Yes and enter a Weight value. This weight is added to the system of
simultaneous equations in the unmixing inversion process. Larger weights cause the unmixing to honor the unit-sum constraint more closely (see the sketch after these steps).
6. Select output to File or Memory.
7. Click OK.
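A rough sketch of the unit-sum constraint described in step 5: the constraint is appended to the linear system as one extra weighted equation before solving by least squares. The function name and example endmembers are illustrative only.

import numpy as np

def unmix_unit_sum(pixel, endmembers, weight=1.0):
    # endmembers: (n_bands, n_endmembers); append the weighted unit-sum
    # equation so larger weights enforce sum(abundances) ~= 1 more strictly.
    E = np.asarray(endmembers, dtype=float)
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(np.asarray(pixel, dtype=float), weight)
    abundances, *_ = np.linalg.lstsq(A, b, rcond=None)
    return abundances

endmembers = np.array([[0.1, 0.6],
                       [0.2, 0.5],
                       [0.4, 0.3],
                       [0.7, 0.2]])                  # 4 bands, 2 endmembers
pixel = 0.3 * endmembers[:, 0] + 0.7 * endmembers[:, 1]
print(unmix_unit_sum(pixel, endmembers, weight=10.0))   # approximately [0.3, 0.7]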
1. From the ENVI main menu bar, select Spectral → Mapping Methods →
Matched Filtering. The Input File dialog appears.
2. Select the input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK.
3. Click OK. The Endmember Collection:Matched Filter dialog appears.
4. Import spectra to match. For details, see “Importing Spectra” on page 442,
“Endmember Options” on page 450, and “Managing Endmember Spectra” on
page 453.
5. Click Apply. The Matched Filter Parameters dialog appears.
6. Use the toggle button to select Compute New Covariance Stats and enter an
output statistics filename, or toggle to Use Existing Stats File.
7. If you selected Compute New Covariance Stats: To remove anomalous pixels
before calculating background statistics, enable the Subspace Background
check box. Then, specify in the Background Threshold field the fraction of
the background in the anomalous image to use for calculating the subspace
background statistics. The threshold range is 0.500 to 1.000 (the entire image).
8. Select output to File or Memory.
9. From the Output Data Type drop-down list, select Byte or Floating Point.
10. If you select Byte, enter a Min and Max stretch value.
11. Click OK. If you selected Use Existing Stats File, select the statistics file that
corresponds to the input data file when the Input File dialog appears. This
statistics file must contain both the mean and covariance statistics for the input
data.
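For reference, one common formulation of the matched filter score is sketched below: a background-whitened projection onto the target spectrum, scaled so the background mean scores near 0 and a pure target pixel scores near 1. This is a generic illustration, not necessarily ENVI's exact implementation, and the anomaly-removal option is not shown.

import numpy as np

def matched_filter_scores(pixels, target):
    # pixels: (n_pixels, n_bands); the background is modeled here by the
    # scene mean and covariance.
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    d = target - mu
    w = cov_inv @ d / (d @ cov_inv @ d)
    return (pixels - mu) @ w

pixels = np.random.rand(5000, 8)    # toy scene, 8 bands
target = np.random.rand(8)
scores = matched_filter_scores(pixels, target)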
MTMF requires an MNF transform input file, or other data with isotropic, unit
variance noise (see “Applying Forward MNFs to Spectra” on page 525).
Tip
See “Spectral Hourglass Wizard” on page 829 for instructions on the ENVI
hourglass processing flow, including MNF transforms and MTMF, to find and map
image spectral endmembers from hyperspectral or multispectral data.
1. From the ENVI main menu bar, select Spectral → Mapping Methods →
Mixture Tuned Matched Filtering. The Input File dialog appears.
2. Select the input MNF file and perform optional Spatial Subsetting and/or
Spectral Subsetting, then click OK. The Endmember Collection:Mixture
Tuned MF dialog appears.
3. Import spectra to match. For details, see “Importing Spectra” on page 442,
“Endmember Options” on page 450, and “Managing Endmember Spectra” on
page 453.
The spectra must be in MNF space. You can calculate the spectra from ROIs in
the MNF input file or transform them into MNF space (see “Applying Forward
MNFs to Spectra” on page 525).
Tip
The input spectra must be pure and spectrally extreme endmembers for the MTMF results to be interpretable. You can identify these types of endmembers
using ENVI’s PPI and n-D Visualizer. See “Using PPI Images for
Endmember Selection” on page 753.
4. Click Apply. The Mixture Tuned Matched Filter Parameters dialog appears.
5. Use the toggle button to select Compute New Covariance Stats and enter an
output statistics filename, or toggle to Use Existing Stats File.
6. If you selected Compute New Covariance Stats: To remove anomalous pixels
before calculating background statistics, enable the Subspace Background
check box. Then, specify in the Background Threshold field the fraction of
the background in the anomalous image to use for calculating the subspace
background statistics. The threshold range is 0.500 to 1.000 (the entire image).
7. Select output to File or Memory.
8. From the Output Data Type drop-down list, select Byte or Floating Point.
9. If you select Byte, enter a Min and Max stretch value.
10. Click OK.
11. If you selected Use Existing Stats File, select the statistics file that
corresponds to the input data file when the Input File dialog appears. This
statistics file must contain both the mean and covariance statistics for the input
data.
1. In the Available Bands List, select an MF Score image, click the Gray Scale
radio button, and click Load Band.
2. From the Display group menu bar, select Tools → 2D Scatter Plots.
3. Select an MF Score image from the Choose Band X list, and select the
corresponding Infeasibility image from the Choose Band Y list.
4. Click OK. A Scatter plot window appears.
5. Draw an ROI in the scatter plot around pixels that satisfy the high-MF, low-
infeasibility criteria. The best matches to the selected endmember are
displayed as colored pixels in the image.
Pixels with high MF scores and high infeasibility are probably false alarms and
should be rejected. Ideally, you should see a horizontal arm of true detections
across the bottom of the scatter plot. You can retrieve reliable sub-pixel
abundance values from the MF scores for low-infeasibility pixels (see the
sketch following these steps).
6. From the Scatter Plot menu bar, select Options → Export Class to export the
selected pixels to an ROI. Repeat for each class to create ROIs of best-match
pixels.
7. You can also convert the ROIs to a classification image by selecting
Classification → Create Class Image from ROIs from the ENVI main menu
bar.
8. Once you select the best pixels from the scatter plot, the MF score for these
pixels corresponds to abundance, where a range of 0.0 to 1.0 equals percentage
abundance from 0% to 100%.
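For illustration only (not ENVI code), the following minimal NumPy sketch applies the high-MF, low-infeasibility selection described in step 5; the data and the two thresholds are hypothetical, and in practice you choose the region interactively in the scatter plot.

import numpy as np

rng = np.random.default_rng(0)
mf_score = rng.uniform(-0.2, 1.2, size=(100, 100))        # hypothetical MF Score image
infeasibility = rng.uniform(0.0, 30.0, size=(100, 100))   # hypothetical Infeasibility image

mf_min = 0.1        # hypothetical minimum MF score to keep
infeas_max = 5.0    # hypothetical maximum infeasibility to keep

detected = (mf_score > mf_min) & (infeasibility < infeas_max)
abundance = np.clip(mf_score[detected], 0.0, 1.0)   # MF scores of kept pixels approximate abundance (0-100%)
print(detected.sum(), "candidate pixels; mean abundance", abundance.mean())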
3. Import spectra to match. For details, see “Importing Spectra” on page 442,
“Endmember Options” on page 450, and “Managing Endmember Spectra” on
page 453.
4. Click Apply. The Constrained Energy Minimization Parameters dialog
appears.
5. Use the toggle button to select Compute New Covariance Stats and enter an
output statistics filename, or toggle to Use Existing Stats File.
6. If you selected Compute New Covariance Stats: To remove anomalous pixels
before calculating background statistics, enable the Subspace Background
check box. Then, specify in the Background Threshold field the fraction of
the background in the anomalous image to use for calculating the subspace
background statistics. The threshold range is 0.500 to 1.000 (the entire image).
7. Click the toggle button to select the Covariance Matrix or Correlation
Matrix method for the calculation.
8. Select output to File or Memory.
9. Click OK.
10. If you selected Use Existing Stats File, select the statistics file that
corresponds to the input data file when the Input File dialog appears. This
statistics file must contain both the mean and covariance statistics for the input
data.
Like CEM and MF, ACE does not require knowledge of all the endmembers within an
image scene.
1. From the ENVI main menu bar, select Spectral → Mapping Methods →
Adaptive Coherence Estimator. The Input File dialog appears.
2. Select the input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Endmember Collection:ACE dialog appears.
3. Import spectra to match. For details, see “Importing Spectra” on page 442,
“Endmember Options” on page 450, and “Managing Endmember Spectra” on
page 453.
4. Click Apply. The Adaptive Coherence Estimator Parameters dialog appears.
5. Use the toggle button to select Compute New Covariance Stats and enter an
output statistics filename, or toggle to Use Existing Stats File.
6. If you selected Compute New Covariance Stats: To remove anomalous pixels
before calculating background statistics, enable the Subspace Background
check box. Then, specify in the Background Threshold field the fraction of
the background in the anomalous image to use for calculating the subspace
background statistics. The threshold range is 0.500 to 1.000 (the entire image).
7. Select output to File or Memory.
8. Click OK.
9. If you selected Use Existing Stats File, select the statistics file that
corresponds to the input data file when the Input File dialog appears. This
statistics file must contain both the mean and covariance statistics for the input
data.
The continuum is removed by dividing it into the actual spectrum for each pixel in the
image:
Scr = (S / C)
Where:
Scr = Continuum-removed spectra
S = Original spectrum
C = Continuum curve
The resulting image spectra are equal to 1.0 where the continuum and the spectra
match, and less than 1.0 where absorption features occur. You can perform continuum
removal on data files or on individual spectra in a plot window. For references, see
“Spectral Tools References” on page 794.
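For illustration only (not ENVI code), a minimal NumPy sketch of the division above, using a hypothetical spectrum and continuum:

import numpy as np

# Hypothetical original spectrum S and its continuum C (the convex hull fitted over S)
spectrum = np.array([0.42, 0.40, 0.31, 0.28, 0.36, 0.41])
continuum = np.array([0.42, 0.41, 0.40, 0.39, 0.40, 0.41])

scr = spectrum / continuum   # Scr: 1.0 where S touches the continuum, < 1.0 in absorption features
print(np.round(scr, 3))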
1. From the ENVI main menu bar, select Spectral → Mapping Methods →
Continuum Removal. The Input File dialog appears.
2. Select the input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK. For the best results, spectrally
subset around the region containing the absorption features of interest.
3. Click OK. The Continuum Removal Parameters dialog appears.
4. Select output to File or Memory.
5. Click OK. ENVI adds the resulting output to the Available Bands List.
You can recover the continuum curve itself by dividing the original spectrum by the continuum-removed spectrum:
C = (S / Scr)
1. Display a hyperspectral image.
2. From the Display group menu bar, select Tools → Profiles → Z Profile
(Spectrum). A Spectral Profile window appears.
3. From the Spectral Profile menu bar, select Options → New Window: with
Plots. An ENVI Plot Window appears.
4. From the ENVI Plot Window menu bar, select Plot_Function → Continuum
Removed.
5. From the ENVI main menu bar, select Spectral → Spectral Math. The
Spectral Math dialog appears.
6. In the Enter an expression field, enter the following:
float(S1) / (S2)
10. Click OK. The continuum curve plots over the original spectrum. Following is
an example.
You may notice that there are some small local maxima that do not fall on the
continuum curve. It is not always advantageous to place every peak in a spectrum on
the continuum curve, especially if the spectrum contains noise or poorly defined
features. The algorithm that ENVI uses to define local maxima was chosen to
increase the likelihood of identifying real absorption features. Also, remember that in
ENVI, the continuum-removed spectra are used primarily for Spectral Feature Fitting
(SFF), where it is very important to spectrally subset the data to isolate the feature of
interest. If you are investigating SFF results, you need to zoom to the region of the
Z Profile that corresponds to the spectral subset before starting a new plot window
and recovering the continuum curve.
9. Choose which spectral ranges to use in the SFF by left-clicking in the plot. A
marker appears. Left-click and drag to position the marker on the starting
wavelength. Right-click and select Set as Start Range. The wavelength value
appears in the Start Range field. Left-click and move the marker to the ending
range wavelength position. Right-click and select Set as End Range. The
wavelength value appears in the End Range field.
Tip
Middle-click and drag to zoom in to the spectral plot. Use the right-click
menu to reset the zoom range.
You can also enter Start Range and End Range values directly. You can
adjust the position of the markers one wavelength increment at a time by
clicking the increase/decrease buttons next to the Start Range and End Range
fields. Pressing Enter after entering a wavelength value causes that value to
jump to the nearest library spectrum wavelength value.
10. Click Add Range, or right-click in the plot window and select Add Range.
The range is added to the Edit Multi Range SFF Endmember Ranges dialog,
and the continuum-removed selected feature is plotted. The number next to the
feature is its depth measure. This is a ratio of feature depth divided by
continuum level; lower values indicate a larger feature depth.
• To remove a spectral range, click Remove in the Edit Multi Range SFF
Endmember Ranges dialog.
• To remove the colored continuum graphic from the plot, select the Cont
check box in the Edit Multi Range SFF Endmember Ranges dialog.
• To remove the continuum-removed feature graphic from the plot, select the
CR check box in the Edit Multi Range SFF Endmember Ranges dialog.
• To weight one feature more than another for that endmember, enter the
weight factors. The weight factors are normalized by summing the entered
weights and dividing each weight by that sum (for example, entered weights
of 2 and 1 become 2/3 and 1/3).
11. Select other spectral ranges, if desired, for that endmember by repeating steps
6-7.
12. Select the next endmember and select its spectral ranges.
13. Save the ranges to a file by selecting Save Ranges in the Edit Multi Range SFF
Endmember Ranges dialog and entering an output filename.
14. Click OK. The Multi Range SFF Parameters dialog appears.
15. Select whether to output separate or combined scale and RMS images using
the toggle button (see “Spectral Feature Fitting Results” on page 789, which
describes these outputs).
16. Enter an output filename.
17. Click OK. ENVI adds the resulting output to the Available Bands List.
Spectral Libraries
Clark, R. N., Swayze, G. A., Gallagher, A., King, T. V. V., and Calvin, W. M., 1993,
The U. S. Geological Survey Digital Spectral Library: Version 1: 0.2 to 3.0 μm: U. S.
Geological Survey, Open File Report 93-592, p. 1340.
Grove, C. I., Hook, S. J., and Paylor II, E. D., 1992, Laboratory Reflectance Spectra
of 160 Minerals, 0.4 to 2.5 Micrometers: Jet Propulsion Laboratory Pub. 92-2.
See “ENVI Spectral Libraries” on page 1183 for more references.
n-D Visualizer
Boardman, J. W., 1993, Automated spectral unmixing of AVIRIS data using convex
geometry concepts: in Summaries, Fourth JPL Airborne Geoscience Workshop, JPL
Publication 93-26, v. 1, pp. 11 - 14.
Boardman J. W., and Kruse, F. A., 1994, Automated spectral analysis: A geologic
example using AVIRIS data, north Grapevine Mountains, Nevada: in Proceedings,
Tenth Thematic Conference on Geologic Remote Sensing, Environmental Research
Institute of Michigan, Ann Arbor, MI, pp. I-407 - I-418.
Binary Encoding
Goetz, A. F. H., Vane, G., Solomon, J. E., and Rock, B. N., 1985, Imaging
spectrometry for earth remote sensing: Science, v. 228, pp. 1147 - 1153.
Mazer, A. S., Martin, M., Lee, M., and Solomon, J. E. (1988). Image processing
software for imaging spectrometry data analysis. Remote Sensing of Environment
24(1): pp. 201 - 210.
Matched Filtering
Boardman, J. W., Kruse, F. A., and Green, R. O., 1995, Mapping target signatures via
partial unmixing of AVIRIS data: in Summaries, Fifth JPL Airborne Earth Science
Workshop, JPL Publication 95-1, v. 1, pp. 23-26.
Chen, J. Y. and I. S. Reed, 1987, A detection algorithm for optical targets in clutter,
IEEE Trans. on Aerosp. Electron. Syst., V. AES-23, No. 1.
Harsanyi, J. C., and C. I. Chang, 1994, Hyperspectral image classification and
dimensionality reduction: An orthogonal subspace projection approach, IEEE
Transactions on Geoscience and Remote Sensing, V. 32, pp. 779-785.
Stocker, A., I. S. Reed, and X. Yu, 1990, Multidimensional signal processing for
electrooptical target detection, Proc. SPIE Int. Soc. Opt. Eng., V. 1305.
Yu, X., I. S. Reed, and A. D. Stocker, Comparative performance analysis of adaptive
multispectral detectors, IEEE Trans. on Signal Processing, V. 41, No. 8.
Continuum Removal
Clark, R. N., and Roush, T. L., 1984, Reflectance spectroscopy: Quantitative analysis
techniques for remote sensing applications: Journal of Geophysical Research, v. 89,
no. B7, pp. 6329-6340.
Clark, R. N., King, T. V. V., and Gorelick, N. S., 1987, Automatic continuum analysis
of reflectance spectra: in Proceedings, Third AIS workshop, 2-4 June, 1987, JPL
Publication 87-30, Jet Propulsion Laboratory, Pasadena, California, pp. 138-142.
Green, A. A., and Craig, M. D., 1985, Analysis of aircraft spectrometer data with
logarithmic residuals: in Proceedings, AIS workshop, 8-10 April, 1985, JPL
Publication 85-41, Jet Propulsion Laboratory, Pasadena, California, pp. 111-119.
Kruse, F. A., Raines, G. L., and Watson, K., 1985, Analytical techniques for
extracting geologic information from multichannel airborne spectroradiometer and
airborne imaging spectrometer data: in Proceedings, International Symposium on
Remote Sensing of Environment, Thematic Conference on Remote Sensing for
Exploration Geology, 4th, Environmental Research Institute of Michigan, Ann Arbor,
pp. 309-324.
Kruse, F. A., Lefkoff, A. B., and Dietz, J. B., 1993, Expert System-Based Mineral
Mapping in northern Death Valley, California/Nevada using the Airborne
Visible/Infrared Imaging Spectrometer (AVIRIS): Remote Sensing of Environment,
Special issue on AVIRIS, May-June 1993, v. 44, pp. 309 - 336.
Kruse, F. A., and Lefkoff, A. B., 1993, Knowledge-based geologic mapping with
imaging spectrometers: Remote Sensing Reviews, Special Issue on NASA Innovative
Research Program (IRP) results, v. 8, pp. 3 - 28.
Clark, R. N., and Swayze, G. A., 1995, Mapping minerals, amorphous materials,
environmental materials, vegetation, water, ice, and snow, and other materials: The
USGS Tricorder Algorithm: in Summaries of the Fifth Annual JPL Airborne Earth
Science Workshop, JPL Publication 95-1, pp. 39 - 40.
Crowley, J. K., and Clark, R. N., 1992, AVIRIS study of Death Valley evaporite
deposits using least-squares band-fitting methods: in Summaries of the Third Annual
JPL Airborne Geoscience Workshop, JPL Publication 92-14, v 1, pp. 29-31.
Swayze, G. A., and Clark, R. N., 1995, Spectral identification of minerals using
imaging spectrometry data: evaluating the effects of signal to noise and spectral
resolution using the Tricorder Algorithm: in Summaries of the Fifth Annual JPL
Airborne Earth Science Workshop, JPL Publication 95-1, pp. 157 - 158.
Vegetation Analysis
Remote sensing offers an efficient way to estimate vegetation properties over large
geographic areas. You can use spectral vegetation indices (VIs) calculated in ENVI to
analyze vegetation properties. This requires the following:
1. An understanding of the structure and function of vegetation and its reflectance
properties (described in “Understanding Vegetation and Its Reflectance
Properties” on page 1212).
2. An understanding of the vegetation properties that you can estimate by
calculating VIs on hyperspectral reflectance data, and knowledge of how these
VIs work (described in “Vegetation Indices” on page 1221).
3. Using ENVI to calculate applicable VIs and analyzing the output to determine
the vegetation conditions in your data (described in “Vegetation Index
Calculator” on page 799).
4. Applying classifications to the various ecologies of the indices and analyzing
the classifications for specific conditions, such as agricultural stress, fire fuel
distribution, and overall forest health (described in “Vegetation Analysis
Tools” on page 802).
Shadowed or highly shaded areas in an input file may not have enough light to cause
the vegetation signal to register for the VIs. Atmospheric correction does not improve
the quality of these areas. Under these conditions, the VIs may provide inaccurate
representations of the vegetation conditions.
To generate ecosystem-specific classifications from your data, see “Vegetation
Analysis Tools” on page 802.
To calculate VIs:
1. From the ENVI main menu bar, select Spectral → Vegetation Analysis →
Vegetation Index Calculator. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK.
3. Click OK. The Vegetation Indices Parameters dialog appears (Figure 7-84).
If the ENVI header for the input file contains a bad bands list, the bad bands
are excluded from the VI calculation.
4. Select one or more VIs to calculate for the input file from the Select
Vegetation Indices list. This list shows the VIs that ENVI verified as
applicable to the input file. All applicable VIs are selected by default. You
must select at least one VI to continue. The number of selected VIs appears in
the Number of items selected field.
5. Use the Biophysical Cross Checking toggle button to specify whether or not
to perform biophysical cross-checking for this calculation. Biophysical cross-
checking compares VIs to validate their results. If conflicting values exist
between VIs (for example, a greenness index shows insufficient vegetation to
support the water content measurement from a canopy water index), those data
values are ignored. When biophysical cross-checking is turned off, masking is
not applied, and ENVI stores all output data in the result. The default value is
On.
Note
Do not set Biophysical Cross Checking to On if you plan to use the VI
calculations in the vegetation analysis tools.
• Does the VI display in ENVI with good contrast? Good contrast increases the
likelihood the analysis tool works effectively on that VI.
• Is the display from the VI similar to other VIs from the same category? This
indicates that the VI produces results similar to those of the other VIs.
Occasionally, one VI may produce drastically different results from similar
VIs, which may indicate the data in that particular VI is suspect.
• Does the VI have large masked areas? Masked areas in an input VI can pass
through to the classification result, leaving masked areas in the classification
where other input VIs may have data.
Note
Because urban materials such as buildings and roads have different spectral
signatures than vegetation, they can cause errant results for scenes with mixed
pixels. It is best to mask out or ignore urban areas or bodies of water when
performing a vegetation condition analysis.
Each vegetation analysis tool uses a different set of three VI categories that are
combined to produce a map with classifications showing some vegetative property or
state. At least one VI from each of the categories must exist for a tool to be available.
Though the VI categories are pre-defined by each tool, you can select any of the
available VIs within a category. This provides flexibility within the tool, as you can
run it with different combinations of VIs to provide different classification results.
The Agricultural Stress Tool divides the input scene into nine classes, from lowest
stress to highest stress. Following is the classification map for the Agricultural Stress
Tool output:
The classifications are relative to the particular input scene only and cannot be
generalized to other areas or other scenes. Field examination is essential to link the
classes provided by the tool with the real-world conditions they represent. The classes
cannot be compared between scenes, as the vegetative variability between scenes
could be significant, and the actual classification values may not match. For example,
a classification color of green in one scene could represent the same field conditions
as a classification color of orange in another.
If fire fuels are located beneath a closed canopy, they may not be detected by the Fire
Fuel Tool. The higher greenness values caused by canopy closure automatically
reduce the fire risk calculation. Additionally, dry or senescent VIs are only sensitive
to the top layer of the vegetation, causing the dry vegetation beneath it to be obscured
by the upper layer of green vegetation. As such, dry materials under a closed canopy
may not be properly detected.
To calculate fire fuel:
1. From the ENVI main menu bar, select Spectral → Vegetation Analysis →
Fire Fuel. The Input File dialog appears.
2. Select the input file. The file may be a VI output file you created previously
using the Vegetation Index Calculator, or you may select a hyperspectral data
file. The hyperspectral data file should be one that is atmospherically corrected
and calibrated to reflectance.
3. Select Spatial Subsetting by clicking Spatial Subset. Using spatial subsetting
with the Fire Fuel Tool may further refine the results.
4. Apply optional Masking to the data by clicking Select Mask Band and
selecting the desired mask image.
5. Click OK. The Fire Fuel Parameters dialog appears (Figure 7-86).
The Fire Fuel Tool divides the input scene into nine classes, from least fuel (lowest
apparent fire risk) to most fuel (highest apparent fire risk). Following is the
classification map for the Fire Fuel Tool output:
The classifications are relative to the particular input scene only and cannot be
generalized to other areas or other scenes. Field examination is essential to link the
classes provided by the tool with the real-world conditions they represent. You cannot
compare classes between scenes, as the vegetative variability between scenes could
be significant, and the actual classification values may not match. For example, a
classification color of green in one scene could represent the same field conditions as
a classification color of orange in another.
Vegetation Suppression
Use Vegetation Suppression to remove the vegetation spectral signature from
multispectral and hyperspectral imagery, using information from red and near-
infrared bands. This method helps you better interpret geologic and urban features
and works best with open-canopy vegetation in medium spatial resolution (30 m)
imagery.
The algorithm models the amount of vegetation per pixel using a vegetation
transform. The model calculates the relationship of each input band with vegetation,
then it decorrelates the vegetative component of the total signal on a pixel-by-pixel
basis for each band. You can use the results of vegetation suppression for qualitative
analysis, but not for subsequent spectral analysis.
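ENVI's implementation is not reproduced here. Purely as a conceptual illustration of removing a vegetation-correlated component band by band, the following NumPy sketch regresses each band against a simple NDVI-like vegetation measure and subtracts the correlated part; the data, band indices, and the use of NDVI are all assumptions, and this is not the Crippen and Blom algorithm.

import numpy as np

rng = np.random.default_rng(1)
cube = rng.uniform(0.05, 0.6, size=(6, 50, 50))   # hypothetical (bands, rows, columns) reflectance cube
red_band, nir_band = 2, 3                         # hypothetical red and near-infrared band indices

ndvi = (cube[nir_band] - cube[red_band]) / (cube[nir_band] + cube[red_band])
veg = ndvi.ravel() - ndvi.mean()                  # mean-removed vegetation measure

suppressed = np.empty_like(cube)
for b in range(cube.shape[0]):
    band = cube[b].ravel()
    slope = np.dot(veg, band - band.mean()) / np.dot(veg, veg)   # per-band relationship with vegetation
    suppressed[b] = (band - slope * veg).reshape(cube[b].shape)  # remove the vegetation-correlated part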
Vegetation suppression is most commonly used in lithologic mapping and linear
feature enhancement in areas with open canopies. For closed canopies in moderate-
resolution data, vegetation suppression is primarily used for linear feature
enhancement.
Reference:
Crippen, R. E., and R. G. Blom. 2001. Unveiling the lithology of vegetated terrains in
remotely sensed imagery. Photogrammetric Engineering & Remote Sensing 67(8):
935-943.
Perform the following steps to suppress vegetation:
1. From the ENVI main menu bar, select Spectral → Vegetation Suppression.
The Vegetation Suppression Input File dialog appears.
2. Select a multispectral file and perform optional Spatial Subsetting and/or
Spectral Subsetting.
• If the associated header contains wavelength information, ENVI
automatically determines the proper red and infrared bands to use for
vegetation suppression. ENVI uses the band closest to 0.66 μm as the red
band, and it uses the band closest to 0.83 μm as the near-infrared band.
• If the header does not contain wavelength information, the Select Near
Infrared Band dialog appears. Select the near-infrared band. Click OK.
The Select Red Band dialog appears. Select the red band.
• If the image file has wavelength information but does not have a near-
infrared or red band, ENVI issues an error message and ends vegetation
suppression. ENVI also issues an error if the input image has only one
band.
3. Click OK. The Vegetation Suppression Parameters dialog appears.
4. Output the result to File or Memory.
5. Click OK. The vegetation suppression algorithm runs. ENVI creates one
output band per input band, and it adds the resulting image to the Available
Bands List.
The BandMax algorithm was developed by the Galileo Group, Inc. The process is
based on the United States patent application titled Spectral image processing system
and method for target detection and identification. This document was submitted by
Timothy J. Pachter and Daniel Matthew Puchalski to the United States Patent Office
in 2002.
The SAM Target Finder with BandMax Wizard guides you through the following
steps:
1. Select Input Image: Select the input file and the root name of the output files
for the Wizard.
2. Select Targets: Select the spectra to use as targets.
3. Select Background: Select backgrounds to suppress.
4. Select Band Subset: Identify significant bands to use in a SAM analysis.
5. Select SAM Parameters: Define a SAM maximum angle parameter.
6. Examine SAM Results: Analyze and examine SAM results. You may exit the
Wizard at this time if the results are acceptable.
Figure 7-88: Workflow Process for the SAM Target Finder with BandMax Wizard
If examining the results from this mapping process shows that the classification
accuracy is not sufficient, the Wizard returns you to step 2, where you can refine your
targets and backgrounds before running BandMax and the SAM analysis again. You
may continue to refine targets, select backgrounds, use BandMax to find the
appropriate band subset, and re-classify your input data with SAM. When your
analysis is complete, you can save and view a report of the processing. ENVI adds the
resulting output to the Available Bands List. As with any image processing tool, poor
results may occur if you provide poor input data or inappropriate parameters.
The Introduction panel explains the overall workflow of the Wizard. The other
panels in the Wizard present the remaining six possible steps of this process.
These steps may occur multiple times and in a different order, depending upon
your analysis path.
The left part of each panel in the Wizard shows a brief description of the step
presented in that panel. This description contains the requirements of the step,
background information of the process used in the step, and useful analysis
techniques for the step. The right part of the panel contains an interface with
specific functions to perform for that step.
Figure 7-90: A Sample Panel from the SAM Target Finder with BandMax Wizard
You can hide the left part of a panel by clicking Hide Text to only show the
right part of the panel. If the left part is hidden, click Show Text to re-display
it. Each panel also contains a Next button for continuing to the next step and a
Prev button for reverting back to the previous step. Next is only enabled if you
provide enough information to continue in the workflow. In some cases, a
processing status dialog appears while the next process runs, before the
next panel is initialized.
Unlike other Wizards in ENVI, this Wizard is not linear. It has conditional
paths, where your input may induce a different direction in the workflow. You
can also use the Next and Prev buttons to loop back through the Wizard and
repeat a series of steps. The Prev button maintains a list of the steps that were
performed. This list enables the Prev button to still work even if you go
through a conditional step or a series of steps looping through the Wizard.
2. Click Next in the Wizard. The Select Input/Output Files panel appears.
Select Targets
The Target Selection panel represents step 2 in the workflow (Figure 7-88). Here, you
will select spectra to use as targets.
Figure 7-91: Target Selection Panel of the SAM Target Finder with BandMax
Wizard
The Target Spectra section also contains a table that displays the spectra you have
selected.
1. Collect spectra by importing them or by dragging-and-dropping them into the
table (see “Importing Spectra” on page 442 for more information). You must
collect at least one spectrum to continue in the Wizard.
2. To select a specific spectrum in the table, click its related row number column
on the far left. You must highlight the entire row. You cannot select a spectrum
by clicking its Spectrum Name column. You can select multiple spectra using
Shift or Ctrl. Or, click Select All to select all the spectra in the table.
3. Click Plot to display the selected spectra in a separate plot window.
4. Click Delete to delete any selected spectra.
5. Click Next in the Wizard. The Background Selection and Rejection panel
appears.
Select Background
The Background Selection and Rejection panel represents step 3 in the workflow
(Figure 7-88). Here, you can select as “background” the pixel spectra that were
incorrectly identified as targets in a SAM analysis. The BandMax algorithm will
attempt to identify the bands that are best able to distinguish your targets from these
backgrounds. This process works best when using a small set of target and
background spectra.
If you do not want to specify any backgrounds:
1. Toggle Select Backgrounds to Reject? to No.
2. Click Next. The Wizard skips to the Select SAM Mapping Parameters panel,
which represents step 5 in the workflow (see “Select SAM Parameters” on
page 823).
To specify backgrounds:
1. Toggle Select Backgrounds to Reject? to Yes. The Select Background section
appears in the right part of the panel. This section contains the same items as
the Target Spectra section in step 2 of the workflow.
2. Collect background spectra by importing them or by dragging-and-dropping
them into the table (see “Importing Spectra” on page 442 for more
information). You must select at least one spectrum to continue in the Wizard.
All of the spectra selected in this table are used as backgrounds when
calculating the significant bands.
Figure 7-92: Select Optimal Bands Panel of the SAM Target Finder with
BandMax Wizard
Here, you will customize the band subset used to suppress your backgrounds. The
BandMax algorithm calculates a significance value for each band in the input image.
This unitless value ranges from 0 to 1, where a higher value indicates the band has a
higher probability of distinguishing a target response from a background response.
BandMax compares the significance value to a significance threshold to derive a
subset of bands, based upon the current list of targets and backgrounds.
BandMax calculates the significance threshold by attempting to select 25% of the
input bands, but never fewer than six bands. However, in some cases, even using a
threshold of 0 (the lowest) does not provide an adequate number of bands. In this
case, it is best to modify your selection of targets or backgrounds (typically by
removing one or more) and run the BandMax algorithm again. See “Select Targets”
on page 819 and “Select Background” on page 820.
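For illustration only (not ENVI code), a minimal NumPy sketch of deriving a band subset from per-band significance values; the significance values and threshold are hypothetical.

import numpy as np

significance = np.array([0.05, 0.62, 0.71, 0.12, 0.88, 0.33, 0.90, 0.08])  # hypothetical, one value per band
threshold = 0.60                                                            # hypothetical significance threshold

subset = np.nonzero(significance >= threshold)[0]   # keep bands whose significance meets the threshold
print("Significant bands:", subset, "count:", subset.size)

Raising the threshold shrinks the subset; lowering it admits more bands.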
The results of the BandMax calculations are shown in the right part of the panel. You
can customize the number of bands that will be chosen for SAM analysis (step 3 of
the Wizard workflow) by modifying the values in the following steps.
1. Review the Significant Bands list, which shows the bands that BandMax
automatically determined were significant when you clicked Next in the
Background Selection and Rejection panel.
2. Modify the Band Significance Threshold value as needed. The threshold
ranges from 0 to 1. Only bands with a significance value greater than or equal
to the significance threshold will be used. Setting the Band Significance
Threshold to a higher value results in fewer selected bands in the subset.
The increase/decrease buttons change the threshold by increments of 0.01. An
increase in the Band Significance Threshold value decreases the Number of
Significant Bands value and updates the Significant Bands list. If a change of
0.01 is not enough to update the Number of Significant Bands, increase the
increment until it does.
3. You can also decrease the Number of Significant Bands value, which
increases the Band Significance Threshold value and updates the Significant
Bands list.
The increase/decrease buttons change the number of bands by at least 1. If two
or more bands have the same significance value, a greater increment is used to
include all of these bands.
4. Click Save Significant Bands to File if you want to save the band subset in the
Significant Bands list to an ASCII file. When you have derived the subset of
bands that effectively detects your targets, you may want to use this same band
subset to perform a series of classifications on a set of images from the same
sensor. You can use this output ASCII file as input when spectrally subsetting a
file.
5. Click Next in the Wizard. The Select SAM Maximum Angle Threshold panel
appears. The bands in the Significant Bands list form a band subset that is
used as input in the following SAM analysis.
A higher SAM maximum angle threshold generally produces a more spatially
coherent image; however, the overall pixel matches will not be as good as
for a lower threshold.
1. Modify the SAM Maximum Angle value as needed. The default value is 0.10
radians. The increase/decrease buttons increment the value by 0.05 radians.
2. Click Next in the Wizard. A processing status dialog shows the progress of the
SAM analysis, and the Investigate SAM Results panel appears. For more
information on the SAM classification and its parameters, see “Supervised
Classification” on page 401.
Figure 7-93: Investigate SAM Results Panel of the SAM Target Finder with
BandMax Wizard
The output from SAM is a classified image and a set of rule images corresponding to
the spectral angle calculated between each pixel and each target (one rule image per
target).
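For illustration only (not ENVI code), a minimal NumPy sketch of the spectral angle stored in a rule image, computed for one hypothetical pixel and one hypothetical target; a pixel is assigned to the class when its angle does not exceed the maximum angle threshold.

import numpy as np

target = np.array([0.21, 0.33, 0.45, 0.40, 0.28])   # hypothetical target (endmember) spectrum
pixel = np.array([0.19, 0.30, 0.47, 0.43, 0.25])    # hypothetical pixel spectrum

cos_angle = np.dot(target, pixel) / (np.linalg.norm(target) * np.linalg.norm(pixel))
angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))    # spectral angle in radians

max_angle = 0.10                                    # SAM maximum angle threshold (radians)
print(angle, "classified" if angle <= max_angle else "unclassified")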
1. Click Load SAM Class Result to examine the SAM classification image. If
the image does not show spatially coherent classes, then the classification does
not match the target spectra or what you know about the target. In this case,
you should review the SAM rule images.
2. Highlight the name of an image in the SAM Rule Images list and click Load
SAM Rule Image, or double-click the image name in the SAM Rule Images
list.
The rule image is loaded into a display group using an inverted color table.
(The ENVI Color Tables dialog also appears.) A smaller SAM angle means a
pixel is a better match to the target. Inverting the color table displays the best
matches as brighter pixels, which is more intuitive.
A histogram of the image also appears in a separate interactive stretching
dialog. This histogram explicitly shows the linear stretch applied to the display
of the rule image. A flat line in the left portion (lowest value) of the histogram
indicates a good separation of targets and backgrounds. Any data values to the
left of this flat line are significant matches for the target spectra.
3. The display group is linearly stretched to a maximum value specified by the
Default Stretch Max value in the SAM Rule Images section of the Wizard.
The initial value is 0.10. To change the default stretch maximum when you
display rule images, enter the desired Default Stretch Max value and double-
click on the rule image name in the SAM Rule Images list.
If the Default Stretch Max value is less than the minimum value in the rule
image band, the minimum value will be used instead.
4. Double-click in the display group to open the Cursor Location/Value tool.
Evaluate the SAM spectral angles for each pixel.
5. You can increase the maximum angle threshold to make the classification more
relaxed, or decrease it to make the classification more selective. Click Prev in
the Wizard and enter a new SAM Maximum Angle value if desired.
6. Click Show Report to view the results of the SAM Target Finder with
BandMax Wizard. See “Report the Results” on page 826. The SAM is not
recalculated if you clicked Prev to get back to this step.
7. If you are satisfied with the results, click Finish to exit the Wizard.
8. If you are not satisfied with the results, you can refine your targets,
backgrounds, or both. To return to step 2 of the Wizard workflow and repeat
these steps, click Next.
RX Anomaly Detection
RX Anomaly Detection uses the Reed-Xiaoli Detector (RXD) algorithm to detect
the spectral or color differences between a region to be tested and its neighboring
pixels or the entire dataset. This algorithm extracts targets that are spectrally distinct
from the image background. For RXD to be effective, the anomalous targets must be
sufficiently small, relative to the background. Results from RXD analysis are
unambiguous and have proven very effective in detecting subtle spectral features.
ENVI implements the standard RXD algorithm:
δ(r) = (r - μ)^T K_LxL^-1 (r - μ)
Where r is the sample vector, μ is the sample mean, and K_LxL is the L x L sample
covariance matrix (L is the number of bands).
RXD works with multispectral and hyperspectral images. Bad pixels or lines appear
as anomalous, but they do not affect the detection of other, valid anomalies. As with
any spectral algorithm, exclusion of bad bands increases the accuracy of results.
Currently, this algorithm does not differentiate detected anomalies from one another.
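For illustration only (not ENVI code), a minimal NumPy sketch of the RXD statistic above, evaluated for every pixel of a small hypothetical image using the global mean and covariance:

import numpy as np

rng = np.random.default_rng(2)
cube = rng.normal(size=(2500, 8))            # hypothetical image, one row per pixel, one column per band

mu = cube.mean(axis=0)                       # sample mean spectrum
K = np.cov(cube, rowvar=False)               # L x L sample covariance matrix
K_inv = np.linalg.inv(K)

d = cube - mu
rxd = np.einsum('ij,jk,ik->i', d, K_inv, d)  # (r - mu)^T K^-1 (r - mu) for every pixel
print(rxd.max())                             # large values flag spectrally anomalous pixels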
Reference:
Chang, Chein-I, and Shao-Shan Chiang, 2002. Anomaly detection and classification
for hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing,
Vol. 40, No. 6, pp. 1314-1325.
Reed, I. S., and X. Yu, 1990. Adaptive multiple-band CFAR detection of an optical
pattern with unknown spectral distribution. IEEE Transactions on Acoustics, Speech
and Signal Processing, Vol. 38, pp. 1760-1770.
4. Select an algorithm from the drop-down list provided. The following options
are available:
• RXD: Standard RXD algorithm
• UTD: Uniform Target Detector, in which the anomaly is defined using (1 -
μ) as the matched signature, rather than (r - μ). UTD and RXD work
exactly the same, but instead of using a sample vector from the data (as
with RXD), UTD uses the unit vector. UTD extracts background
signatures as anomalies and provides a good estimate of the image
background.
• RXD-UTD: A hybrid of the RXD and UTD methods, in which (r - 1) is
used as the matched signature. This is a variant of the UTD approach.
Subtracting UTD from RXD suppresses the background and enhances the
anomalies of interest. The best condition to use RXD-UTD is when the
anomalies have an energy level that is comparable to, or less than, that of
the background. In this case, using UTD by itself does not detect the
anomalies, but using RXD-UTD enhances them.
5. Using the Mean Source toggle button, specify whether the mean spectrum
should be derived from the full dataset (Global) or from a localized kernel
around the pixel (Local). If you choose Local, the Local Kernel Size field
appears. Specify a kernel size, in pixels, around a given pixel that will be used
to create a mean spectrum. The default value is 15.
6. Output the result to File or Memory.
7. Click OK. A processing status dialog appears while ENVI builds a covariance
matrix and calculates a mean spectrum. Then, RXD runs line-by-line. ENVI
adds the resulting output to the Available Bands List. A default square-root
stretch is applied to the resulting image to highlight the extreme anomalies.
Bright pixels in the output image represent targets that are spectrally distinct from the
image background.
Figure 7-96: Each Step Within the Spectral Hourglass Wizard Flow Chart Represents
a Panel in the Wizard. (Flow chart panels: Introduction; Input/Output File Selection;
Minimum Noise Transform (MNF); Data Dimensionality Determination; Derive
Endmembers from Data?; Mapping with SAM, Unmixing, and/or MTMF; Summary Report.)
Wizard Basics
To successfully use the Spectral Hourglass Wizard, you should become familiar with
the basic functions and concepts as shown in Figure 7-97:
Figure 7-97: Spectral Hourglass Wizard Screen with the Select Input/Output Files
Panels Displayed. (Callouts in the figure point out the function title and the important
information and instructions displayed in the upper panel.)
• Text in the top panel of the Wizard provides background information and
guidance for each step. It is very important that you read this information
before proceeding. Use the vertical and horizontal scroll bars to view all of the
text.
• Enter and select the parameters for that step using the buttons and fields near
the bottom of the panel.
• Use the Prev and Next buttons to step through the Wizard.
• Each processing step will execute when you select the Next button.
• If the Next button is not available for selection, be sure that you set all of the
necessary parameters.
• Use the Prev button to go back to a previous step to modify parameters and to
run that step again.
• ENVI adds the resulting output to the Available Bands List.
• You can examine the results of a function at any time.
• Various plots display during processing, and you can save them by selecting
File → Save Plot As in the plot window.
• To display only the title of each panel and not the text, right-click on the text
and select Display Title Only. To redisplay the text, right-click and select
Display Full Text.
3. Click Select Input File, choose a file, and perform optional Spatial Subsetting,
Spectral Subsetting and/or Masking.
As a guideline to help with spectral subsetting, vegetation analysis and iron-
oxide mineral mapping typically use wavelength ranges in the visible near-
infrared (VNIR) and shortwave infrared (SWIR) regions from 0.4 to 1.3 μm,
while the 2.0 to 2.5 μm range is used to map most other geologic materials.
A mask may help you to exclude selected pixels such as image borders, bad
pixels, or specific materials such as water or clouds.
Be careful with your choices of spatial and spectral subsets; avoid unnecessary
complications, data volumes, and scene complexity.
4. Click OK.
5. In the Select Input/Output Files panel, the Output Root Name defaults to the
root name of the selected input file. For example, if the input file is
boulder.img, then the output file for each process is boulder appended
with a function-dependent suffix (for example, boulder.mnf). Click Select
Output Root Name and enter or choose a different root name if desired.
6. Click Next. The Forward MNF Transform panel appears.
2. To use only a subset of a file to calculate noise statistics for the whole file,
click Shift Difference Spatial Subset and enter subsetting parameters. Noise
statistics are calculated from the data based on a shift difference method that
uses local pixel variance to estimate noise. A virtual noise image is created by
subtracting line-shifted and sample-shifted images from the original data and
averaging and scaling these two results (see the sketch following these steps).
You can select a subset of the data to speed the noise statistic calculations or
select a uniform area to improve noise estimates.
3. Click Next. The View MNF Results panel appears, and progress windows
display the processing status. After the processing is complete, the View MNF
Results panel and an MNF Eigenvalues plot window appear.
The MNF Eigenvalues plot shows the eigenvalue (y-axis) for each MNF-
transformed band (an eigenvalue number, shown in the x-axis). Larger
eigenvalues indicate higher data variance in the transformed band and may
help indicate data dimensionality. See “Data Dimensionality and Spatial
Coherence” on page 835. When the eigenvalues approach 1, only noise is left
in the transformed band, as the noise floor has been scaled to unity in each
output MNF band.
4. Click Load MNF Result to ENVI Display to display an RGB image of the
first three MNF bands. Use this image to quickly locate dominant spectral
materials, which are displayed in bright, pure colors. Or, select Load
Animation of MNF Bands to load the MNF result as a gray scale animation.
For details about ENVI animation, see “Creating Animations” on page 140.
5. After displaying and analyzing the MNF result, click Next in the Wizard. The
Determine Data Dimensionality panel appears.
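For illustration only (not ENVI code), here is a rough NumPy sketch of the shift-difference idea mentioned in step 2: differences with line-shifted and sample-shifted copies of a band are averaged to form a virtual noise image. The data are hypothetical, and the scaling ENVI applies is not reproduced.

import numpy as np

rng = np.random.default_rng(3)
band = rng.normal(loc=100.0, scale=5.0, size=(64, 64))   # hypothetical image band

line_diff = band - np.roll(band, 1, axis=0)     # difference with the band shifted by one line
sample_diff = band - np.roll(band, 1, axis=1)   # difference with the band shifted by one sample

virtual_noise = 0.5 * (line_diff + sample_diff) # average of the two shift differences
print(virtual_noise[1:, 1:].std())              # spread of the virtual noise image (wrapped edges ignored)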
Or, you can let the Wizard estimate data dimensionality using a spatial coherence
measure that is based on MNF-transformed images. The lower MNF bands are
expected to have spatial structure and contain most of the information, while higher
MNF bands are expected to have little spatial structure and contain most of the noise.
By retaining only the coherent MNF bands (those with variance above unity) and
discarding those that are indistinguishable from the noise (those with variance at or
below unity), the dataset is reduced to its inherent dimensionality. This should
improve spectral processing results.
A spatial coherence calculation consists of a correlation coefficient calculation
between each spectral band and a version of itself offset by one line. Noise, by
definition, should have a zero or very low correlation, while an image with a spatial
structure bigger than the pixel size has a much higher correlation.
If higher MNF images are correlated to their line-shifted versions, it means there is a
vertical structure in the noisy images (such as stripes), which may resemble a signal
in the higher MNF bands. Vertical striping noise could be caused by an area-array
instrument.
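For illustration only (not ENVI code), a minimal NumPy sketch of the spatial coherence measure described above: the correlation coefficient between each band and a copy of itself offset by one line, using hypothetical data.

import numpy as np

rng = np.random.default_rng(4)
mnf = rng.normal(size=(10, 64, 64))   # hypothetical MNF-transformed cube (bands, rows, columns)

coherence = np.empty(mnf.shape[0])
for b, band in enumerate(mnf):
    coherence[b] = np.corrcoef(band[:-1].ravel(), band[1:].ravel())[0, 1]   # band vs. its one-line offset

print(np.round(coherence, 3))   # near zero for noise-like bands, higher for spatially structured bands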
Perform the following steps to calculate data dimensionality using a spatial coherence
measure:
1. Click Calculate Dimensionality. A processing status dialog appears, followed
by a Spatial Coherence Threshold plot, which shows the Wizard's best guess
for the number of non-noise MNF bands.
While the Wizard is designed to give a reasonable default calculation of data
dimensionality, this process is often difficult and scene-dependent. You must
often resort to using a generous estimate of data dimensionality, by using all of
the MNF bands that have any reasonable image quality or eigenvalues well
above unity. Overestimating the dimensionality and including a few extra
MNF bands is much better than underestimating the dimensionality and
potentially discarding everything. If all of your MNF bands have decent image
quality and above-unity eigenvalues, then your data are not spectrally over-
determined and thus are not technically hyperspectral.
ENVI calculates the spatial coherence threshold based on the bands with
spatial coherence values greater than a given floor value, not greater than or
equal to a given floor value. If the threshold is set to 0, bands that score 0 will
not be included. For example, if you input 50 bands into the MNF
transformation and calculate the spatial coherence, it might return 36 out of 50
bands even though the threshold value is set to 0. This is because 14 of the
bands have a spatial coherence value of 0. Presumably, you should use a
threshold greater than 0, or manually select the number of bands.
Additionally, the threshold is not a count of all the bands above or below a
given value. ENVI selects the number of bands by finding the first band that
drops below the threshold and treating that band and all the bands to the right
of it as “no good.” The spatial coherence curve (as applied to MNF data)
should theoretically decrease toward higher MNF bands, since those bands
become progressively noisier. So, ENVI looks for the first band that drops
below the threshold and discards it and the remaining bands.
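For illustration only (not ENVI code), a short sketch of the band-count logic described above: keep bands up to, but not including, the first band whose coherence fails to exceed the threshold.

import numpy as np

coherence = np.array([0.95, 0.91, 0.80, 0.42, 0.05, 0.12, 0.0])   # hypothetical per-band coherence values
threshold = 0.3

failed = np.nonzero(coherence <= threshold)[0]          # bands must be strictly greater than the threshold
n_good = failed[0] if failed.size else coherence.size   # discard the first failing band and everything after it
print("data dimensionality estimate:", n_good)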
2. If you are satisfied with the dimensionality calculation, click OK in the Spatial
Coherence Threshold plot, then click Next in the Wizard. The Derive or Select
Endmembers panel appears.
3. Otherwise, manually change the threshold level by clicking and dragging the
horizontal red threshold line on the Spatial Coherence Threshold plot to a new
level. Click OK. You can also click Cancel in the Spatial Coherence Threshold
plot and enter a new Data Dimensionality value in the Determine Data
Dimensionality panel. Click Calculate Dimensionality and repeat the steps
above.
The PPI is calculated for the number of iterations you specified in the previous
panel. A Pixel Purity Index Plot appears, showing which iteration you are on
and the cumulative number of pixels that have been found to be extreme. The
curve in this plot usually starts steeply, as new pixels are found in each
iteration, and it should flatten out as all the extreme pixels are found.
3. When the iterations are complete, you can return to the Pixel Purity Index
panel and increase the Number of PPI Iterations if the plot has not flattened.
While it is difficult or impossible to precisely say how many iterations are
enough, you can never run too many iterations. Having many iterations (for
example, 20,000) gives a PPI result with an increased dynamic range and thus
the ability to find subtle, poorly expressed endmembers that might be
undetected if fewer iterations were completed.
4. When the iterations are complete, a PPI image is created in which the value of
each pixel corresponds to the number of times that pixel was recorded as
extreme. Bright pixels in the PPI image generally are image endmembers.
ENVI adds the resulting output to the Available Bands List. The pixels with the
highest values are input into the n-D Visualizer for the clustering process that
develops individual endmember spectra.
The PPI image is an important intermediate product in the spectral hourglass
process. It identifies and locates the purest pixels in the scene (often less than
1% of the total number of pixels). By understanding this small collection of the
purest pixels, you can have a full understanding of all the pixels in the image
via spectral mixture models. Furthermore, the PPI image maps type localities
and sites that should be visited for ground truth collection and spectral
measurements in the field.
5. In the Examine PPI Results panel, enter a value for Maximum PPI Pixels to
use in the n-D Visualizer. Smaller numbers animate faster and show only the
purest pixels; larger numbers give a better overall picture of the scatter plot, but
they animate more slowly and may hinder the selection of vertices. The Wizard
automatically applies a threshold to the PPI image to obtain the best PPI pixels
without exceeding the selected maximum. You can return to this page and
change the PPI maximum threshold so you can view both the overall scatter
plot and only the purest pixels.
6. Click Next in the Examine PPI Results panel to proceed to the n-Dimensional
Visualizer panel of the Wizard. The n-D Visualizer and n-D Controls dialogs
appear.
If you choose Unmixing, ENVI can optionally constrain the abundances for
each pixel to sum-to-one. You can define the weight of this constraint by
entering an Unmix Unit Sum Constraint Weight value. Larger weights
relative to the variance of the data cause the unmixing to honor the unit-sum
constraint more closely. To strictly honor the constraint, make the weight many
times the spectral variance of the data.
2. Click Next. Progress windows display the processing status. The Calculate
Mapping Methods panel briefly appears.
3. From the Examine Results For drop-down list, choose SAM, MTMF, or
Unmix to switch between the Investigate SAM Results, Investigate MTMF
Results, and Investigate Unmix Results panels.
Investigate SAM Results
The output from SAM is a classified image and a set of rule images (one per
endmember). The pixel values of the rule images represent the spectral angle in
radians from the reference spectrum for each class. Lower spectral angles represent
better matches to the endmember spectra. Areas that satisfied the selected radian
threshold criteria are carried over as classified areas into the classified image.
1. Click Load SAM Class Result to examine the SAM classification image. If
the image does not show spatially coherent classes, then the classification does
not match the target spectra or what you know about the target. In this case,
you should review the SAM rule images.
2. Highlight the name of an image in the SAM Rule Images list and click Load
SAM Rule Image, or double-click the image name in the SAM Rule Images
list.
The rule image is loaded into a display group using an inverted color table.
(The ENVI Color Tables dialog also appears.) A smaller SAM angle means a
pixel is a better match to the target. Inverting the color table displays the best
matches as brighter pixels, which is more intuitive.
A histogram of the image also appears in a separate interactive stretching
dialog. This histogram explicitly shows the linear stretch applied to the display
of the rule image. A flat line in the left portion (lowest value) of the histogram
indicates a good separation of targets and backgrounds. Any data values to the
left of this flat line are significant matches for the target spectra.
Summary Report
1. In the Investigate SAM Results, Investigate MTMF Results, or Investigate
Unmix Results panels, click Next. A Spectral Hourglass Wizard Summary
Report appears (see Figure 7-98).
2. This is the final panel of the Spectral Hourglass Wizard. The summary lists the
processing steps you performed and a list of the output files created. Save the
summary by selecting File → Save Text to ASCII from the Spectral
Hourglass Wizard Summary Report menu bar.
3. Click Finish to exit the Wizard.
The Automated Spectral Hourglass runs the spectral hourglass processing flow
automatically, including the SAM, MTMF, and Linear Spectral Unmixing mapping
methods. Specify the parameters required for these functions in the Automated
Hourglass Parameters dialog; each step runs without your interaction.
Note
You should not use the Automated Spectral Hourglass unless you are familiar with
the Spectral Hourglass Wizard processing flow and appropriate parameters. You
achieve better results if you run the Spectral Hourglass Wizard and interactively
select your parameters and endmembers.
1. From the ENVI main menu bar, select Spectral → Automated Spectral
Hourglass. The Automated Hourglass Processing Input Files dialog appears.
2. Select the input file and perform optional Spatial Subsetting, Spectral
Subsetting, and/or Masking, then click OK. The Automated Hourglass
Parameters dialog appears with default values for the parameters. See
“Spectral Hourglass Wizard” on page 829 for descriptions of these parameters.
3. Edit the parameters or accept the defaults.
4. Click OK. The Spectral Hourglass Wizard processing flow runs.
5. ENVI adds output from all of the intermediate steps to the Available Bands
List, spectral libraries, and plot windows, so that you can investigate each step
for accuracy. ROIs containing the pixels used in the mapping methods are
output, and the mean spectra used as endmembers are output to a text file and
spectral library.
Spectral Analyst
Use the Spectral Analyst to help identify materials based on their spectral
characteristics. The Spectral Analyst uses ENVI techniques such as Binary Encoding,
Spectral Angle Mapper, and Spectral Feature Fitting to rank the match of an unknown
spectrum to the materials in a spectral library (see “Binary Encoding” on page 771,
“Spectral Angle Mapper Classification” on page 771, and “Using Spectral Feature
Fitting” on page 788 for method descriptions). You can also define your own spectral
similarity techniques and add them to the Spectral Analyst (see “Spectral Analyst
Functions” in ENVI Help). The output of the Spectral Analyst is a list of the materials
in the input spectral library ranked in order of best-to-worst match. An overall
similarity score is reported, along with individual 0 to 1 scores for each method. For
more information, see “Tips for Successful Use of the Spectral Analyst” on page 848.
Note
This function does not identify spectra; it only recommends likely candidates for
identification. The results may change when the similarity methods or weights are
changed. You are still responsible for the actual identification.
• For Spectral Feature Fitting, enter the Min and Max values in RMS error
units. (The similarity is measured using the RMS fit error.)
A SAM or SFF result less than or equal to Min indicates a perfect match and
receives a score of 1. A SAM or SFF result greater than or equal to Max
receives a score of 0.
• For Binary Encoding, enter the Min and Max values as a percentage of
bands correctly matched (0 to 1). A Binary Encoding result less than or
equal to Min receives a score of 0, and a result greater than or equal to
Max receives a score of 1 (see the sketch following these steps).
7. In the Edit Identify Methods Weighting dialog, click OK.
8. In the Spectral Analyst dialog, click Apply to load a spectrum.
• If one spectrum is plotted, it is automatically entered into the Spectral
Analyst.
• If more than one spectrum is plotted, select the desired spectrum name
from the small dialog that appears. You can only select one spectrum at a
time. Click OK.
• You can also enter spectra directly from a Z Profile dialog.
9. The Spectral Analyst dialog lists the results of the similarity measures. ENVI
resamples the spectral library to match the spectral resolution of the input
spectrum. Proceed to “Understanding the Spectral Analyst Information” on
page 848.
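For illustration only (not ENVI code), a small sketch of how a method result could be mapped to a 0-to-1 score using the Min and Max values described in the bullets above. Only the endpoint behavior comes from the text; the linear ramp between Min and Max is an assumption made for this sketch.

import numpy as np

def sam_sff_score(result, min_val, max_val):
    # <= Min scores 1 (perfect match), >= Max scores 0; linear in between (assumed)
    return float(np.clip((max_val - result) / (max_val - min_val), 0.0, 1.0))

def binary_encoding_score(result, min_val, max_val):
    # <= Min scores 0, >= Max scores 1; linear in between (assumed)
    return float(np.clip((result - min_val) / (max_val - min_val), 0.0, 1.0))

print(sam_sff_score(0.05, min_val=0.0, max_val=0.1))          # hypothetical SAM angle and limits
print(binary_encoding_score(0.85, min_val=0.5, max_val=1.0))  # hypothetical fraction matched and limits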
Editing Weights
To edit the weights, minimum, and maximum values for each method:
1. From the Spectral Analyst menu bar, select Options → Edit Method
Weights. The Edit Identify Methods Weighting dialog appears.
2. Refer to steps 5-7 in “Opening the Spectral Analyst” on page 845 for
information about the Weight, Min, and Max fields.
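The exact formula ENVI uses to combine the methods is not spelled out here, but a rough way to picture how the Weight, Min, and Max fields interact is this: each method result is scaled linearly to a 0 to 1 score between Min and Max (inverted for Spectral Angle Mapper and Spectral Feature Fitting, where smaller results are better), and the weighted scores are summed into the overall similarity score. The IDL sketch below is illustrative only; the function name and the Min, Max, and weight values are hypothetical.

; Illustrative only: scale one method result to a 0-1 score between
; Min and Max. For SAM and SFF, smaller results are better, so the
; scale is inverted with the INVERT keyword.
function example_method_score, result, minval, maxval, invert=invert
  score = (result - minval) / (maxval - minval)
  score = 0.0 > score < 1.0              ; clamp to the 0-1 range
  if keyword_set(invert) then score = 1.0 - score
  return, score
end

; Example (from the IDL command line): combine the individual scores
; with hypothetical weights into an overall similarity score.
sam_score = example_method_score(0.08, 0.0, 0.20, /invert)
sff_score = example_method_score(0.02, 0.0, 0.10, /invert)
be_score  = example_method_score(0.85, 0.0, 1.00)
weights   = [0.34, 0.33, 0.33]
print, total(weights * [sam_score, sff_score, be_score])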
2. In the Spectral Analyst window, select Options → Auto Input via Z-profile.
3. In the Select Z Profiles dialog, select a Z Profile name.
4. Click OK.
5. In the Image or Zoom window, select a pixel to analyze.
The Z Profile is updated, and the spectral comparison information appears in
the Spectral Analyst dialog. As you move the Zoom box in the Image window,
the information in the Spectral Analyst changes accordingly.
Tip
If you display two images, you must select Options → Clear Auto Inputs before
using the Spectral Analyst in the second display.
Wavelength Ranges
Many materials are similar in one wavelength range, yet they are very different in
another range. For best results, use the wavelength range that contains the diagnostic
absorption features. When a spectrum displays, the Spectral Analyst works over the
range displayed in the corresponding plot being analyzed. To analyze a smaller range,
use the middle mouse button in the plot to zoom to the desired wavelength range
before clicking Apply in the Spectral Analyst.
Methods
Determine whether materials have absorption features. If so, Spectral Feature
Fitting is probably the best method. Otherwise, Spectral Angle Mapper or Binary
Encoding will yield better results.
Context
Examine the spectral ranking in the context of the image setting and known
information. If a suggested identification seems invalid with respect to the known
information, it is probably not the correct identification.
The Spectral Analyst tool is not foolproof. It should be used as a starting point to
identify the materials in an image scene. If you use it properly with a good spectral
library, it can provide excellent suggestions for identification. Used blindly, it can
produce erroneous results.
3. The Edit Multi Range SFF Endmember Ranges dialog appears with a list of
the selected endmembers and their corresponding spectral ranges.
4. Edit the spectral ranges using methods described in “Using Multi Range
Spectral Feature Fitting” on page 790. Click OK. The Save Changes to Multi
Range SFF Record dialog appears.
5. Click OK to update the .sff file.
material subsets. SMACC also provides abundance images to determine the fractions
of the total spectrally integrated radiance or reflectance of a pixel contributed by each
resulting endmember.
Mathematically, SMACC uses the following convex cone expansion for each pixel
spectrum (endmember), defined as H:
H(c, i) = Σ(k = 1 to N) R(c, k) A(k, j)
where:
• i is the pixel index
• j and k are the endmember indices from 1 to the expansion length, N
• R is a matrix that contains the endmember spectra as columns
• c is the spectral channel index
• A is a matrix that contains the fractional contribution (abundance) of each
endmember j in each endmember k for each pixel
The 2D matrix representation of a spectral image is factored into a convex 2D basis (a
span of a vector space) times a matrix of positive coefficients. In the image matrix
(R), the row elements represent individual pixels, and each column represents the
spectrum of that pixel. The coefficients in A are the fractional contributions or
abundances of the basis members of the original matrix. The basis forms an n-D
convex cone within its subset. The convex cone of the data is the set of all positive
linear combinations of the data vectors, while the convex hull is the set of all
weighted averages of the data. The factor matrices are then determined sequentially.
At each step, a new convex cone is formed by adding the selected vector from the
original matrix that lies furthest from the cone defined by the existing basis.
Reference:
Gruninger, J., A. J. Ratkowski, and M. L. Hoke, “The Sequential Maximum Angle
Convex Cone (SMACC) Endmember Model,” Proceedings SPIE, Algorithms for
Multispectral, Hyperspectral, and Ultraspectral Imagery, Vol. 5425-1, Orlando,
FL, April 2004.
Spectral Math
Use Spectral Math to apply mathematical expressions or IDL procedures to spectra
and to selected multi-band images. The spectra can be either from a multi-band image
(a Z Profile), a spectral library, or an ASCII file (see “Extracting Z Profiles” on
page 98, “Opening Spectral Libraries” on page 739, and “Importing Spectra from
Spectral Libraries” on page 446). Use Spectral Math to apply mathematical
expressions to all of the bands of multi-band images as long as the number of bands
and spectral channels match.
When using expressions in Spectral Math, the operations are performed in the input
data type (byte, integer, floating-point, and so forth). Use the conversion functions in
Table 7-2 to explicitly set the desired data type of each input band.
To apply Spectral Math, each spectrum you want to process must be open and
displayed in a plot window.
The following figure depicts Spectral Math processing that adds three spectra. Each
spectrum in the expression is mapped to an input spectrum and summed, and the
resulting spectrum is output to a plot window. You can map one or more of the
expression spectra to a file instead of mapping each input to a single spectrum. The
resulting output is a new image file. For example, in the expression s1 + s2 + s3, if s1
is mapped to a file and s2 and s3 are mapped to single spectra, then the resulting
image file contains the spectra of the s1 file summed with s2 and s3.
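For example, to sum the three spectra exactly as described above, you would enter s1 + s2 + s3 as the expression. To average them while forcing floating-point arithmetic (using the type conversion functions listed below), a hypothetical expression might look like this:

(float(s1) + float(s2) + float(s3)) / 3.0

Each variable in the expression (s1, s2, s3) is then mapped to an input spectrum, or to a file, in the Spectral Math dialog as described above.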
The following operators and functions are available in Spectral Math expressions:
• Arithmetic operators: Addition (+), Subtraction (-), Multiplication (*),
Division (/), Minimum operator (<), Maximum operator (>), Exponent (^)
• Other math functions: Absolute Value (abs(x)), Square Root (sqrt(x)),
Natural Exponent (exp(x)), Natural Logarithm (alog(x))
• Trigonometric and hyperbolic functions: Sine (sin(x)), Cosine (cos(x)),
Tangent (tan(x)), ArcSine (asin(x)), ArcCosine (acos(x)), ArcTangent (atan(x)),
Hyperbolic Sine (sinh(x)), Hyperbolic Cosine (cosh(x)), Hyperbolic Tangent (tanh(x))
• Relational operators (EQ, NE, LE, LT, GE, GT) and Boolean operators
(AND, OR, XOR, NOT)
• Type conversion functions (byte, fix, long, float, double, complex)
• IDL functions and procedures that return array results, and user IDL
functions and procedures
1. From the ENVI main menu bar, select Spectral → Spectral Math. The
Spectral Math dialog appears.
5. Click OK. When the processing is complete, the Spectral Math result spectrum
is plotted in the chosen plot window. The plot is available for additional
processing or saving to an output file using plot functions (see “Using
Interactive Plot Functions” on page 106).
PC Spectral Sharpening
Use PC Spectral Sharpening to sharpen a low spatial resolution multi-band image
using an associated high spatial resolution panchromatic band. See “Using PC
Spectral Sharpening” on page 497 for more information.
CN Spectral Sharpening
CN Spectral Sharpening is an extension of the Color Normalized algorithm often
used to pan-sharpen three-band RGB images. See “Using CN Spectral Sharpening”
on page 498 for more information.
EFFORT Polishing
Use EFFORT Polishing to run the Empirical Flat Field Optimal Reflectance
Transformation (EFFORT), which determines and applies mild adjustments to ATREM
apparent reflectance data so that the spectra appear more like spectra of real materials.
Consistent noise or error features may appear in hyperspectral apparent reflectance
data because of the limited accuracy of the standards, measurements, and models that
were used and the limited accuracy of calibrations performed along the data signal
processing chain. This cumulative error may be several percent in each spectral band,
leading to apparent reflectance data with absolute accuracies far less than the actual
precision of the original data.
EFFORT searches for a mild linear correction, bootstrapped from the data
themselves, that polishes out this error and attempts to improve the accuracy of the
apparent reflectance data. The EFFORT correction applies statistically mild
adjustments to every band (gains near 1 and offsets near 0) that make a visual
improvement in the apparent reflectance spectra. This removal of the cumulative
errors of calibration and atmospheric correction allows improved comparison of
EFFORT corrected spectra to library spectra.
References:
Huntington, J. F. and Boardman, J. W., 1995, Semi-quantitative Mineralogical
and geological mapping with 1995 AVIRIS data, Proc. Spectral Sensing
Research ‘95, ISSSR, Published by the AGPS, 26 Nov - 1 Dec, 1995,
Melbourne, Australia.
Boardman, J. W., 1997, Mineralogic and geochemical mapping at Virginia
City, Nevada using 1995 AVIRIS data, in Proceedings of the Twelfth Thematic
Conference on Geological Remote Sensing, Environmental Research Institute
of Michigan, Denver, CO, pp. 21-28.
Boardman, J. W., 1998, Post-ATREM polishing of AVIRIS apparent
reflectance data using EFFORT: a lesson in accuracy versus precision, in
Summaries of the Seventh JPL Airborne Earth Science Workshop, Vol. 1, p. 53.
The EFFORT process is similar to the Empirical Line method of data calibration,
which matches data spectra to field-measured spectra. EFFORT, however, uses no
ground truth data, and the EFFORT-calculated gains and offsets are applied to
ATREM or other atmospherically corrected apparent reflectance data. EFFORT uses
the data to generate “pseudo field” spectra by fitting each observed spectrum with a
parametric model of Legendre polynomials optionally augmented with real spectra.
Gains and offsets for every band are calculated by comparing the modeled spectra to
the data spectra, for pixels that are well-fit. A number of spectra are used that span the
entire albedo range to give good leverage for the linear regression process, and the
data values versus modeled values are fit with a line for every band. The slope and
offset of this line are used to correct the apparent reflectance data for the error
features. You can apply gain-only corrections to fix the model-to-data offset to 0.
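The exact EFFORT implementation is internal to ENVI, but the general idea described above can be sketched as follows: fit each spectrum with a smooth low-order model, then, for each band, regress the observed values against the modeled values and use the resulting offset and slope to correct that band. The IDL sketch below is illustrative only; it uses an ordinary polynomial fit as a stand-in for the Legendre basis, and it omits the screening of poorly fit pixels, the wavelength segments, and the reality boost spectra.

; Illustrative sketch: derive per-band offsets and gains by comparing
; smooth modeled spectra to the observed spectra. A plain polynomial
; fit stands in for the Legendre basis used by EFFORT.
; spectra: [n_pixels, n_bands] apparent reflectance; wl: [n_bands]
function example_effort_correction, spectra, wl, order
  dims    = size(spectra, /dimensions)
  n_pix   = dims[0]
  n_bands = dims[1]
  modeled = fltarr(n_pix, n_bands)
  for p = 0L, n_pix - 1 do begin
    coeffs = poly_fit(wl, reform(spectra[p, *]), order)
    modeled[p, *] = poly(wl, reform(coeffs))
  endfor
  offsets = fltarr(n_bands)
  gains   = fltarr(n_bands)
  for b = 0L, n_bands - 1 do begin
    fit = linfit(modeled[*, b], spectra[*, b])   ; data = offset + gain * model
    offsets[b] = fit[0]
    gains[b]   = fit[1]
  endfor
  ; A polished spectrum for pixel p would be (spectra[p, *] - offsets) / gains
  return, {offsets: offsets, gains: gains}
end

; Example call (from the IDL command line), using a tiny hypothetical
; cube of 4 pixels and 5 bands:
wl = [0.45, 0.55, 0.65, 0.75, 0.85]
spectra = [[0.20, 0.22, 0.18, 0.25], $   ; band 1 values for the 4 pixels
           [0.25, 0.27, 0.24, 0.30], $
           [0.31, 0.33, 0.29, 0.36], $
           [0.28, 0.30, 0.27, 0.33], $
           [0.26, 0.28, 0.25, 0.31]]
corr = example_effort_correction(spectra, wl, 2)
print, corr.gains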
You can use one or more reality boost spectra (spectra from spectral libraries or field
spectra) to help in the modeling. Using a few spectra that you know are characteristic
of your area as reality boost spectra can produce better-fitting modeled spectra. The
modeled spectra are created by a linear combination of the Legendre and reality boost
spectra. Therefore, reality boost spectra that contain sharp features, such as the
vegetation red edge, when used to augment the Legendre basis set, can produce a
better model, giving better EFFORT correction factors and/or offsets.
EFFORT works on one or more wavelength segments that you enter. Wavelength
ranges that contain only noise (for example, the 1.4 μm and 1.9 μm water vapor
absorption bands) should not be used in the calculation.
Typically, three segments are defined around the two large water vapor bands: bands
before the 1.4 μm water vapor band, bands between the 1.4 and 1.9 μm water vapor
bands, and bands past the 1.9 μm water vapor band. Each segment must start and end
with a valid band. You can set bands within a segment that contain large, known
errors but that are critical to further analysis as invalid, so they are not used in the
initial spectral modeling.
Invalid bands may include overlapping spectral bands, bands with ringing around the
0.94 and 1.14 μm water vapor bands, and O2 and CO2 under-corrected or over-
corrected bands. These invalid bands are not used in the modeling but will be
corrected on output. The order of the Legendre polynomial that is used to model the
spectra is set by you through trial and error (though the default value provided should
work in most cases). You can model each segment with a different order polynomial.
Tip
Select a polynomial order that will fit the real data features without fitting the error
features. Before running EFFORT, use spectral plots of the radiance data to select
the wavelength segments and invalid bands to input into the EFFORT dialog.
2. Select an input file and perform optional Spatial Subsetting, then click OK.
The EFFORT Input Parameters dialog appears.
3. Click Enter New Segment. A segment appears in the Segment Information
list.
4. Click Edit next to the new segment. The Segment Spectral Subset dialog
appears with all bands highlighted by default.
5. Select the starting and ending bands for this segment using Add Range; by
clicking and dragging over the list of bands; or by clicking on the first band,
holding down the Shift key, and clicking on the last band.
To set bands contained within a segment to an invalid state so that they will not
be used in computing the EFFORT correction, press the Ctrl key and click the
bands to toggle them off. Bands that are not included in any segment are not
adjusted and are set to 0 in the output. Invalid bands within a segment are not
used in the modeling, but they are still adjusted by EFFORT on output.
6. Click OK.
7. To change the order of the Legendre polynomial used to fit this segment, enter
the desired order in the Order field next to the segment information.
A lower-order polynomial produces a flatter spectrum, which gives more error
suppression. However, it may also remove some actual reflectance features. A
higher-order polynomial produces a spectrum that fits the data better but it also
may fit some error features, which leaves them in the resulting output. To find
a polynomial order that fits only real data, use a trial-and-error method.
8. Enter new segments until all the spectral segments are defined.
9. To remove the last segment entered, click Delete Last Segment.
• To incorporate the spectral features and not the overall spectral shape, use
continuum removal.
• If the overall shape of the spectrum is important (for example, vegetation
spectra), do not use continuum removal.
4. Click OK.
5. In the EFFORT Input Parameters dialog, click Edit to change the reality boost
spectra options, or click Delete to delete any of the input reality boost spectra.
Preprocessing Utilities
Preprocessing utilities for building geometry files, georeferencing data, and
orthorectifying data are described in “Map Tools” on page 875. All other
preprocessing utilities are described in “Preprocessing Utilities” on page 357.
Registration
Use Registration to reference images to geographic coordinates and/or correct them
to match base image geometry. You can select ground control points (GCPs)
interactively from Image windows and/or Vector windows. ENVI performs warping
using polynomial functions; Delaunay triangulation; or rotation, scaling, and
translation (RST). Resampling methods include nearest neighbor, bilinear, and cubic
convolution. Comparison of base and warped images using ENVI’s multiple
Dynamic Overlay capabilities allows quick assessment of registration accuracy.
Tip
See the ENVI Tutorials on the ITT Visual Information Solutions website (or on the
ENVI Resource DVD that shipped with your ENVI installation) for step-by-step
examples.
You can warp single-band or multi-band images. The Ground Points Selection dialog
allows you to prototype and test different GCPs and warp options.
1. Open the base image and warp image files and load them into two display
groups.
2. From the ENVI main menu bar, select Map → Registration → Select GCPs:
Image to Image. The Image to Image Registration dialog appears.
3. In the Base Image list, select the display group corresponding to the base
(reference) image.
4. In the Warp Image list, select the display group corresponding to the warp
image.
5. Click OK. The Ground Control Points Selection dialog (Figure 8-2) appears.
Tip
See the ENVI Tutorials on the ITT Visual Information Solutions website (or on the
ENVI Resource DVD that shipped with your ENVI installation) for step-by-step
examples.
2. Add individual GCPs by positioning the cursor in the two images to the same
ground location. Examine the locations in the two Zoom windows, and adjust
the locations as needed by left-clicking in each Zoom window. Subpixel
positioning is supported in the Zoom windows. The larger the zoom factor, the
finer the positioning.
The sample and line coordinates (in both images) appear in the Ground
Control Points Selection dialog, in the Base X, Y and Warp X, Y fields,
respectively. Subpixel coordinates are shown as floating-point values.
3. In the Ground Control Points Selection dialog, click Add Point to add the
GCP to the GCP list.
To view the list of GCPs, click Show List. The Image to Image GCP List
appears with the GCPs listed in a table. For a description of the GCP List, see
“Using the Image to Image GCP List” on page 882.
When the GCPs are added to the list, a marker is placed in the Image windows
of both the base and warp images. The GCP marker consists of an ID number
next to an encircled crosshair. The marker indicates the selected pixel (or
subpixel location). The center of the marker (located under the crosshair)
indicates the actual GCP location.
4. Add additional GCPs by following the same procedure.
When you select at least four GCPs, the predicted x,y coordinates for the
selected warp, the x and y error, and the RMS error are listed in the Image to
Image GCP List table (Figure 8-2).
Tip
For the best results, try to minimize the RMS error by refining the positions of the
pixels with the largest errors or by removing them (see “Minimizing RMS Error” on
page 881). You can reduce errors by adding more points. If you only have a few
points, place them near the image corners or widely scatter them throughout the
image.
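The error columns in the GCP list can be read as follows: for each point, the x and y errors are the differences between the predicted and actual warp coordinates, the per-point RMS error is the length of that error vector, and the total RMS error summarizes the per-point errors. The IDL sketch below illustrates that relationship with hypothetical coordinates; it is not how ENVI stores the values internally.

; Hypothetical GCP coordinates: predicted vs. actual warp-image locations
predicted_x = [100.2, 250.8, 410.5, 512.1]
warp_x      = [100.0, 251.5, 410.0, 511.8]
predicted_y = [ 80.9, 132.4, 300.1, 420.6]
warp_y      = [ 81.0, 131.0, 300.6, 421.0]

x_err = predicted_x - warp_x
y_err = predicted_y - warp_y
point_rms = sqrt(x_err^2 + y_err^2)          ; one error value per GCP
total_rms = sqrt(mean(x_err^2 + y_err^2))    ; single summary value
print, point_rms, total_rms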
Figure 8-2: Ground Control Points Selection Dialog (top) and Image to Image
GCP List (bottom)
Tip
To see which points have the highest RMS errors, select Options → Order Points
by Error from the Ground Control Points Selection dialog menu bar. The points in
the GCP List are reordered so that those with the highest errors appear at the top of
the list.
Select the GCP in the Image to Image GCP List and double-click a Base X, Base Y,
Warp X, or Warp Y value to edit it. Enter the new value and press the Enter key or
click Update. The changes are reflected in the Image to Image GCP List and in the
base and warp images.
Note
If you have already selected several GCPs, a significant delay may occur while the
GCPs are redrawn and the error is recalculated.
Warping Options
Use the Options menu in the Ground Control Points Selection dialog to warp from
the currently displayed band or to warp from a file. You can also select image-to-map
type warping if your base image is georeferenced. This allows you to change the
output pixel size and projection type of the warped image.
• To use the GCPs to perform registration, select either Options → Warp
Displayed Band or Options → Warp File from the Ground Control Points
Selection dialog menu bar.
• To select image-to-map type warping (if the base image is georeferenced),
select Options → Warp Displayed Band (as Image to Map) or Options →
Warp File (as Image to Map) from the Ground Control Points Selection
dialog menu bar.
The Registration Parameters dialog appears. The details of the various warp options
available in ENVI are discussed in “Warping and Resampling Image-to-Image” on
page 904 and “Warping and Resampling Image-to-Map” on page 907.
GCP Options
Use the Options menu in the Ground Control Points Selection dialog to control GCP
labels, colors, and order; to reverse the base and warp images; and to set other
preferences.
Reversing Base/Warp Images
To reverse the positions of the base and warp GCPs, select Options → Reverse
Base/Warp from the Ground Control Points Selection dialog menu bar.
Selecting RST Calculation
For first-order polynomials, you can calculate errors using a rotation, scaling, and
translation (RST) method. From the Ground Control Points Selection dialog menu
bar, select Options → 1st Degree (RST Only). A check mark next to the menu
option indicates it is enabled.
Predicting GCP Locations
To predict the location of a GCP in the warp image based on the warping determined
by the current GCPs and the selected polynomial degree, perform the following steps:
1. In the Zoom window of the base image, position the crosshairs over a pixel.
2. In the Ground Control Points Selection dialog, click Predict. The warp image
Zoom box and the crosshairs move to the predicted pixel.
3. Refine the prediction by selecting the correct pixel in the warp image.
4. Click Add Point to enter the GCP.
Automatically Predicting GCP Locations
1. Select Options → Auto Predict from the Ground Control Points Selection
dialog menu bar. A check mark next to the menu option indicates it is enabled.
When you adjust the pixel location using the Zoom window or the Zoom box
in the Image window, the Zoom box and crosshairs of the warp image Zoom
window move to the predicted pixel location.
2. Click Add Point to enter the GCP.
Turning Labels On/Off
To turn the GCP labels off or on, select Options → Label Points from the Ground
Control Points Selection dialog menu bar. A check mark next to the menu option
indicates it is enabled.
Managing GCPs
Use the File menu in the Ground Control Points Selection dialog to save and restore
GCP files.
Saving GCPs to ASCII Files
1. From the Ground Control Points Selection dialog menu bar or the Image to
Image GCP List menu bar, select File → Save GCPs to ASCII.
2. Enter an output filename with a .pts extension.
3. Click OK. See “GCP File Format (.pts)” on page 1180 for information on the
format.
x′ = a(x, y) = Σ(i = 0 to N) Σ(j = 0 to N) P[i, j] x^j y^i
y′ = b(x, y) = Σ(i = 0 to N) Σ(j = 0 to N) Q[i, j] x^j y^i
where x′ and y′ are the locations in the warp image, x and y are the locations in the
base image, N is the polynomial degree, and P and Q are the polynomial coefficients.
The P and Q polynomial coefficients matrices are written to the file by rows, one
element per line. For example, a 2 x 2 matrix of P and Q is written in the following
format:
P[0, 0] P[1, 0]
P[0, 1] P[1, 1]
Q[0, 0] Q[1, 0]
Q[0,1] Q[1, 1]
Restoring Saved GCPs
1. From the Ground Control Points Selection dialog menu bar, select File →
Restore GCPs from ASCII. An Enter The Ground Control Points Filename
dialog appears.
2. Select the desired GCP .pts filename and click Open.
When you enter enough points, the Image to Map GCP List shows the
predicted x,y coordinates for the selected warp, the x,y error, and the RMS
error.
The Ground Control Points Selection dialog shows the number of GCPs you selected.
When you select a sufficient number of GCPs to conduct a first-degree polynomial
warp, the total RMS error is also displayed.
Tip
For the best registration, try to minimize the RMS error by refining the positions of
the pixels with the largest errors or by removing them (see “Minimizing RMS
Error” on page 881). You can reduce errors by adding more points. If you only have
a few points, place them near the image corners or widely scatter them throughout
the image.
For details about editing, predicting, and positioning GCPs, see “GCP Options” on
page 884. For details about managing GCPs, see “Managing GCPs” on page 885.
Warping Options
To use the GCPs to perform registration, select either Options → Warp Displayed
Band or Options → Warp File from the Ground Control Points selection dialog
menu bar. The Registration Parameters dialog appears. For details, see “Warping and
Resampling Image-to-Map” on page 907.
GCP Options
See “GCP Options” on page 884 for details on the Ground Control Points Selection
dialog options.
The automatic image coregistration tool uses area-based matching and feature-based
matching algorithms to obtain tie points. These two approaches work as follows:
• Area-based image matching compares the gray scale values of patches of two
or more images and tries to find conjugate image locations based on similarity
in those gray scale value patterns. The results of area-based matching largely
depend upon the quality of the approximate relationship between the base
image and the warp image. This is determined through traditional or pseudo
(RPC or RSM) map information, or by using three or more tie points. If both
map information and three or more tie points exist, the three or more tie points
condition is used. In the case where the pseudo map information is not
sufficiently accurate, it is recommended that you select three or more tie points
to obtain good matching results.
• Feature-based image matching extracts distinct features from images then
identifies those features that correspond to one another. This is done by
comparing feature attributes and location.
Figure 8-5: Base Image Band Matching Choice (left) and Warp Image Band
Matching Choice (right) Dialogs
7. Click OK. If both images display in gray scale, step 6 is skipped. The
Automatic Tie Points Parameters dialog appears (Figure 8-6 and Figure 8-8).
8. You can set parameters for area-based matching (described in “Area-Based
Matching Parameters” on page 896) or feature-based matching (described in
“Feature-Based Matching Parameters” on page 901).
Area-based matching is disabled if both the base image and warp image do not
have traditional or pseudo (RPC or RSM) map information, or if three or more
tie points have not already been selected. If area-based matching is disabled
and you want to use this method, click Cancel, return to the Ground Control
Points Selection dialog and manually select three or more tie points, then select
Options → Automatically Generate Tie Points from the Ground Control
Points Selection dialog menu bar.
9. Click OK. When processing is complete, the Ground Control Points Selection
dialog displays.
10. In the Ground Control Points Selection dialog, click Show List to view the
GCPs. The Image to Image GCP List dialog appears. See “Using the Image to
Image GCP List” on page 882 for information on viewing and editing the GCP
list.
9. The final steps depend upon the Examine tie points before warping
parameter setting:
• If the Examine tie points before warping value was Yes, the Image to
Image GCP List dialog displays. See “Using the Image to Image GCP
List” on page 882 for information on viewing and editing the GCP list.
• If the Examine tie points before warping value was No, ENVI adds the
warped image to the Available Bands List.
• Number of Tie Points: Specify the number of tie points to generate. ENVI
uses this value to automatically filter out certain incorrect tie points. This value
can be as few as 9, but the recommended value is 25 (default setting). If the
Point Oversampling value is greater than 1, ENVI trades off between tie point
quality and point distribution. Therefore, if you want to obtain at least 25
points, you may want to enter a larger value in this field, such as 50, to allow
for further filtering.
• Search Window Size: Specify the search window size, in square pixels. The
search window is a defined subset of the image, within which the smaller
moving window scans to find a topographic feature match for a tie point
placement. The search window size can be any integer greater than or equal to
21, but it must be larger than the Moving Window Size. The default is 81.
This value depends upon the quality of the initial user-defined tie points (a
minimum of three points) or the correctness of traditional or pseudo (RPC or
RSM) map information for the base and warp image, and it also depends on the
roughness of terrain.
If both images have certain map information, then a good way to establish the
minimum search window size is to geographically link the base and warp
images. Click a feature point (point A) in the base image, then click a feature
point in the warp image. The cursor automatically moves to a point (point B)
that is close to the ground feature point (point C) that represents the same
ground feature as point A. Measure (in pixels) the distance between point B
and point C in the warp image and use 2*(distance+1) as the minimum search
window size. The search window size may vary considerably with different
images. For example, 81 may be sufficient for one image pair, while 781 may
be necessary for another image pair. Using a larger value results in a greater
chance of finding the conjugate point, but requires more processing time.
Setting an excessively large value may result in false matches because more
similar points may exist in a larger area.
• Moving Window Size: Specify the moving window size, in square pixels. The
moving window scans methodically in the image subset area defined by the
Search Window Size, looking for matches to a topographic feature. The
moving window size must be an odd integer. The smallest allowable value is 5.
The default is 11. Using a larger value results in a more reliable tie point
placement, but takes longer processing time; conversely, a smaller value takes
less processing time, but the tie points are less reliable. Determining a good
moving window size largely depends upon the image resolution and terrain
type. Some general guidelines follow:
• For a 10 meters or higher resolution image, use a range of 9-15.
• For a 5-10 meter resolution image, use a range of 11-21.
• For a 1-5 meter resolution image, use a range of 15-41.
• For a 1 meter or less resolution image, use a range of 21-81 or higher.
• Warp Parameters: These parameters are available when you select No with
the toggle button. Use these parameters to set warping and resampling values.
For details on these parameters, see “Warping and Resampling” on page 904.
• Output Parameters: These parameters are available when you select No with
the toggle button. Use these parameters to specify the Output Tie Points
Filename [.pts] and the output method for the results (File or Memory).
• Match Levels: Specify the number of image pyramid levels to use to conduct
image matching. The default is 1. Set this parameter to 2 or higher if the Pixel
Size Ratio (warp/base) setting is relatively inaccurate.
• Maximum Match Size: Specify the maximum size of the image pyramid used
for conducting image matching. The default is at a pyramid level that is close
to 512 x 512. The actual value varies with the base image size. The larger the
value of this parameter, the more tie points it may generate, the more accurate
the result could be, and the more processing time required.
Note
The Match Levels and Maximum Match Size values affect the number and
size of temporary files created during processing. Be sure your temporary
directory has enough available space to allow for these files, particularly
when setting match values larger than the default settings.
• Pixel Size Ratio (warp/base): Specify the ratio of the ground pixel size in the
warp image to the ground pixel size in the base image. The default is 1.0. This
parameter guides the matching process. For example, if a base image pixel
ground size is 20 m and the warp image pixel ground size is 10 m, the resulting
pixel size ratio will be 0.5 (10/20). If both base image and warp image have
traditional or pseudo (RPC or RSM) map information, a corresponding default
value is automatically set for the Pixel Size Ratio (warp/base).
• Secondary Reliability Check: Select Yes or No to perform a secondary
reliability check. Selecting Yes (default) results in fewer, yet more reliable tie
points. Selecting No results in more, yet less reliable, tie points. For typical
remote sensing imagery, you should select Yes. If image quality is poor and
you can extract only a few feature points, or if you want to extract as many tie
points as possible (at the expense of output tie points reliability), then select
No.
• Spatial Subset: If desired, select a spatial subset from the base or warp image,
specifying the approximate overlap region of the corresponding image. This is
useful when the two images only partially overlap. In such a case, you can
select the subsets of the images to guide the image matching process. If both
the base and warp images have traditional or pseudo (RPC or RSM) map
information, a reasonable default value will be set.
The following parameters are available on the Automatic Registration Parameters
dialog only (accessible by Map → Registration → Automatic Registration: Image
to Image):
• Examine tie points before warping: Click Yes (default) to allow the
examination of tie points before warping the image. It is recommended that
you select Yes so that you can review the tie points and edit those that are less
than optimal.
• Warp Parameters: These parameters are available when you select No with
the toggle button. Use these parameters to set warping and resampling values.
For details on these parameters, see “Warping and Resampling Image-to-
Image” on page 904.
• Output Parameters: These parameters are available when you select No with
the toggle button. Use these parameters to specify the Output Tie Points
Filename [.pts] and the output method for the results (File or Memory).
3. In the Registration Parameters dialog, select the warping method from the
Method drop-down list. The method you select determines the additional
Warp Parameters options available to you. The available warping methods
are as follows:
• RST: Rotation, scaling, and translation; this is the simplest method. To
perform RST warping, you need three or more GCPs. The RST warping
algorithm uses an affine transformation:
x = a1 + a2X + a3Y
y = b1 + b2X + b3Y
This algorithm does not allow for shearing in the image warp. To allow for
shearing, use a first-order polynomial warp. While the RST method is very
fast, you can usually achieve more accurate results with a first-order
polynomial warp.
• Polynomial: Polynomial warping is available from the 1st to nth degree.
The available degree depends on the number of GCPs selected, where
#GCPs > (degree + 1)².
A first-order polynomial warp includes an XY interaction term to account
for image shear:
x = a1 + a2X + a3Y + a4XY
y = b1 + b2X + b3Y + b4XY
(Both the RST and first-order polynomial transforms are applied in the
sketch following this procedure.)
• Triangulation: Delaunay triangulation warping fits triangles to the
irregularly spaced GCPs and interpolates values to the output grid. This is
the default option.
4. Depending on the Method you chose, set the following method-specific
options:
• For polynomial warping, enter the desired polynomial Degree. The available
degree depends on the number of GCPs defined, where #GCPs >
(degree + 1)².
• For triangulation warping, use the Zero Edge toggle button to select
whether or not you want a one-pixel border of background color at the
edge of the warp data.
By selecting this option, you will avoid a smearing effect that may appear
at the edges of warped images and that you often see when using ENVI’s
data-specific georeferencing functions.
5. From the Resampling drop-down list, select the resampling method:
• Nearest Neighbor: Uses the nearest pixel without any interpolation to
create the warped image.
• Bilinear: Performs a linear interpolation using four pixels to resample the
warped image.
• Cubic Convolution: Uses 16 pixels to approximate the sinc function using
cubic polynomials to resample the image. Cubic convolution resampling is
significantly slower than the other methods.
6. In the Background field, enter a digital number (DN) value to use to fill areas
where no image data appear in the warped image.
To override the output dimensions, enter the desired values in the Output
Image Extent fields for image-to-image registration. The output image
dimensions are automatically set to the size of the bounding rectangle that
contains the warped input image. Therefore, the output warp image size is
usually not the same size as the base image. The output size coordinates are
determined in base image coordinates. As a result, the upper-left corner values
are typically not (0,0), but they indicate the x and y offset from the base image
upper-left corner. These offset values are stored in the header and allow for
dynamic overlay of the base and warp images even when they are different
sizes.
Note
If you want the registration result to exactly match the coverage of the base
image, change the Upper Left X and Upper Left Y values to 1. Change the
Output Samples and Output Lines values so they are the same size as the
base image. If the coverage of your input image is smaller than that of the
base image, the result will have areas of “no data” where there is no
coverage.
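As a worked illustration of the RST and first-order polynomial transforms shown in step 3, the IDL sketch below maps one base-image location to its warp-image location. The coefficient values are hypothetical; in practice ENVI derives them from the selected GCPs.

; Hypothetical RST (affine) coefficients: x = a1 + a2*X + a3*Y, etc.
a = [10.0, 0.98, 0.02]            ; a1, a2, a3
b = [-5.0, -0.01, 1.01]           ; b1, b2, b3
x = 150.0 & y = 200.0             ; base-image location

x_rst = a[0] + a[1]*x + a[2]*y
y_rst = b[0] + b[1]*x + b[2]*y

; The first-order polynomial adds the XY shear term:
a4 = 1.0e-4 & b4 = -2.0e-4
x_poly = x_rst + a4*x*y
y_poly = y_rst + b4*x*y
print, x_rst, y_rst, x_poly, y_poly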
2. Select a .pts file and click OK. The Image to Map Registration dialog
appears (Figure 8-12).
3. Follow the instructions in “Selecting Map Projection Types” on page 990 for
details about the Image to Map Registration dialog parameters.
Orthorectification
Use Orthorectification to rectify data from specific pushbroom sensors (ALOS
PRISM and AVNIR-2, ASTER, FORMOSAT-2, GeoEye-1, IKONOS, KOMPSAT-2,
OrbView-3, QuickBird, RapidEye, WorldView-1 and -2, SPOT Level 1A and 1B, and
CARTOSAT-1), using a rational polynomial coefficients (RPC) model. Data from
each of these sensors typically include an ancillary RPC file or have the necessary
ephemeris data, which ENVI uses to compute RPCs.
Another option is to build RPCs for any generic pushbroom sensor, scanned aerial
photograph, or digital aerial photograph using ENVI’s Build RPCs function. This
option builds RPCs from ground control points (GCPs) or known exterior orientation
parameters (XS, YS, ZS, Omega, Phi, and Kappa). Then, use the Generic RPC and
RSM function to orthorectify the image. See “Building RPCs” on page 972 and
“Generic RPC and RSM” on page 918 for more information.
Note
To perform rigorous orthorectification, you must have the ENVI Orthorectification
Module installed (available with a separate license). See the ENVI
Orthorectification Module User’s Guide for details.
9. If you choose DEM, follow steps 10-13. If the input image has an associated
DEM file, it is used as the default DEM for the orthorectification process. For
more information on associating a DEM with an image, see “Associating a
DEM to a File” on page 202.
10. Click Select DEM File and specify the location of the DEM file.
11. Select an option from the DEM Resampling drop-down list. The resampling
technique converts the DEM from the source image coordinate system to
Geographic (WGS-84), which is required for input into the RPC algorithm.
ENVI performs a full projection to convert each DEM coordinate into the
correct coordinate system.
12. Enter a Geoid offset value. This is a constant value that is added to every value
in the DEM to account for the difference between a spheroid mean sea level
(used in most available DEM data) and the constant geopotential surface
known as the GEOID. The RPC coefficients are created based on geoid height,
and this information must be used to provide accurate orthorectification.
For example, if the geoid is 10 m below mean sea level at the location of your
image, enter a value of -10.
Many institutions doing photogrammetric processing have their own software
for geoid height determination. You can also obtain software from NGA,
USGS, NOAA, and other sources. A geoid height calculator is located at:
http://www.ngs.noaa.gov/cgi-bin/GEOID_STUFF/geoid99_prompt1.prl.
13. Select output to File or Memory.
14. On the right side of the Orthorectification Parameters dialog, verify the
parameters related to the extent and pixel size of the output image.
15. The default values are calculated from the georeferencing information of the
original image. If the input image has no georeferencing information, ENVI
estimates the extent of the image and a reasonable pixel size to derive the
default values.
16. To change the map projection, click Change Proj.
17. To change the pixel size or number of samples or lines, enter the information in
the X Pixel Size, Y Pixel Size, Output X Size, and Output Y Size fields.
18. Click Options and select from the menu to manage the output projection and
map extent settings.
19. Click OK. ENVI adds the resulting output to the Available Bands List.
Note
After ENVI adds the RPC or RSM information to the associated header file,
you may need to close the image, then reopen it so that ENVI will apply the
new header information.
2. Specify or restore one or more GCPs. For complete details on how to use the
dialog to select GCPs, see “Collecting Ground Control Points (Image-to-
Map)” on page 888.
The Elev value specified in the Ground Control Points Selection dialog must
be relative to the geoid height specified for the RPC orthorectification process.
3. From the Ground Control Points Selection dialog menu bar, select Options →
Orthorectify File. This option prompts you to select an input file. When the
input file is opened, the Orthorectification Parameters dialog appears; this
dialog allows you to customize the output parameters for the orthorectification
process.
You can also write a user function for reading a custom RPC file format, add it to the
save_add directory of your ENVI installation, add it to the Generic RPC and RSM
menu, and run it from the menu. To add the user function to the Generic RPC and
RSM menu, edit the useradd.txt file in the menu directory of your ENVI
installation. See “User-Defined RPC Reader” in the ENVI Programmer’s Guide for
more information.
Image Mosaicking
Use mosaicking to overlay two or more images that have overlapping areas (typically
georeferenced) or to put together a variety of non-overlapping images and/or plots for
presentation output (typically pixel-based). You can mosaic individual bands, entire
files, and multi-resolution georeferenced images. You can use your mouse or pixel-
or map-based coordinates to place images in mosaics and you can apply a feathering
technique to blend image boundaries. You can save the mosaicked images as a virtual
mosaic to avoid having to save an additional copy of the data to a disk file. Mosaic
templates can also be saved and restored for other input files.
Importing Images
Use the Import menu to choose input bands for the mosaic.
1. Select either Import → Import Files or Import → Import Files and Edit
Properties from the Pixel-Based Mosaic menu bar. Use the second option if
you want to enter a background see-through value, perform feathering,
position the image on input, select which bands appear in the mosaic display,
or perform color balancing. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. You can mosaic individual bands or entire files.
To select individual bands, click the Select By toggle button in the Input File
dialog to select Band.
3. Click OK.
4. If you selected Import Files and Edit Properties, an Entry dialog for each
selected file appears that allows you to set the following parameters:
• Enter a Data Value to Ignore. This is the background see-through value to
make pixels with that value transparent so that the underlying image is
visible. (Use for mosaicking of images with constant DN value borders).
The background value mask is built from the first band used in the file. If a
pixel in the first band used contains this background value, then that pixel
will be masked out for all bands in the mosaicking process.
• Enter feathering parameters as needed (For more information, see
“Feathering” on page 930).
• Enter the upper-left coordinates (in pixels) in the Xoffset and Yoffset
fields to position the image in the mosaic, or optionally click the Use
x/ystart in Positioning? toggle button to select Yes. This option uses the
x,y start values from the header to compose a relative start location.
• Click the Mosaic Display toggle button to select a Gray Scale or RGB
color image display in the mosaicking window. For RGB, enter the desired
band numbers to display in the Red, Green, and Blue fields. Enter a
Linear Stretch percentage.
• Select whether to apply color balancing to the images by clicking No,
Fixed, or Adjust (see “Color Balancing Images” on page 922).
• Click OK.
5. The Select Mosaic Size dialog appears. Enter the desired output size (in pixels)
of the mosaic image in the Mosaic Xsize and Mosaic Ysize fields, and click
OK. A subsampled image appears in the Pixel Mosaic window for each file
imported.
6. Import other files as needed. Each image name and the outline color are listed
below the mosaic image.
Positioning Images
The coordinates for the upper-left corner of the input image are listed in the X0 and
Y0 fields at the bottom of the Pixel Mosaic dialog. Images with xstart and ystart
values in their headers are automatically placed in the mosaic with the defined offset.
Select from the following options:
• To enable editing of the position coordinates, select any listed image in the
Pixel Mosaic dialog and enter the desired upper-left coordinates in the X0 and
Y0 fields. Press the Enter key.
The number of the currently selected image is shown in the # field. Any
changes to the X0 and Y0 fields are applied to this item.
• You can also position the images in the Pixel Mosaic dialog by left-clicking on
an image and dragging it to the desired location.
• To automatically place the images into a grid, select Options → Position
Entries into Grid from the Pixel Mosaic dialog menu bar. The Position
Images dialog appears. Enter values for Grid Columns, Grid Rows, Border
Pixels (the number of border pixels to place around the images), and
Separation Pixels (the number of pixels to place between the images). Click
OK. The mosaic image is resized to fit the grid.
• To center the images, keeping their relative positioning within the defined
mosaic size, select Options → Center Entries from the Pixel Mosaic dialog
menu bar.
• To lock the relative positions of the images and move them as a group, select
Options → Positioning Lock On from the Pixel Mosaic dialog menu bar.
Now click and drag the images to the desired position.
9. When creating a mosaic of gray scale and RGB images, you can apply single-
band images as RGB. Doing this, in addition to color balancing, creates an
RGB mosaic where the gray scale images blend well with the color images.
10. Enter an output filename.
11. Click OK. ENVI adds the resulting output to the Available Bands List.
Building Mosaics
Use Apply to build a mosaic after all of the images for the mosaic are positioned.
Building the mosaic outputs the mosaic to a file.
It is not necessary to build the mosaic to an output file unless you used feathering or
color balancing. You can save the mosaic as a Virtual Mosaic to save time and disk
space (see “Creating Virtual Mosaics” on page 924).
1. Select File → Apply from the Pixel Mosaic dialog menu bar. The Mosaic
Parameters dialog appears.
2. Edit the Output X/Y Pixel Size values as needed.
3. Select an interpolation method from the Resampling drop-down list. The
choices are Nearest Neighbor, Bilinear, or Cubic Convolution.
4. Select output to File or Memory.
5. Enter a Background Value (a DN value for areas outside of the mosaic).
The background value mask is built from the first band used in the file. If a
pixel in the first band used contains this background value, then that pixel will
be masked out for all bands in the mosaicking process (see the sketch
following these steps).
6. If color balancing is selected, use the toggle button to select whether to use the
entire image or to use only the overlapping areas for statistic calculations.
7. Click OK. ENVI adds the resulting output to the Available Bands List.
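The Data Value to Ignore and Background Value behavior described in these steps amounts to a per-file mask built from the first band: wherever the first band equals the ignore value, every band of that file is treated as transparent so the underlying image shows through. A minimal IDL sketch, assuming an ignore value of 0 and hypothetical 3 x 3 data:

; Minimal sketch of the see-through mask for one overlap region.
first_band = [[0, 0, 12], [0, 45, 50], [33, 41, 47]]
mask = first_band ne 0                    ; 1 where the top image is valid
top    = float(first_band)
bottom = fltarr(3, 3) + 20.0              ; underlying image DNs
overlap = mask * top + (1 - mask) * bottom
print, overlap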
Importing Images
Use the Import menu to select input files for the mosaic. The first image imported
into the mosaic must be a georeferenced image. The mosaic size will be set to the
georeferenced image size.
1. Select either Import → Import Files or Import → Import Files and Edit
Properties from the Map Based Mosaic dialog menu bar. Use the second
option if you wish to enter a transparent background value, to perform
feathering, to select which bands appear in the mosaic display, or to perform
color balancing. The Mosaic Input Files dialog appears.
The background value mask is built from the first band used in the file. If a
pixel in the first band used contains this background value, then that pixel will
be masked out for all bands in the mosaicking process.
2. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK.
You can mosaic individual bands or entire files. To select individual bands,
click the Select By toggle button in the Mosaic Input Files dialog to select
Band. Click OK.
When you select the first input file for the mosaic, a thumbnail image appears
and the mosaic size is set.
3. If you selected Import Files and Edit Properties, an Entry dialog for each
selected file appears that allows you to set the following parameters:
• Enter a Data Value to Ignore. This is the background see-through value to
make pixels with that value transparent so that the underlying image is
visible. (Use for mosaicking of images with constant DN value borders).
The background value mask is built from the first band used in the file. If a
pixel in the first band used contains this background value, then that pixel
will be masked out for all bands in the mosaicking process.
• Enter feathering parameters as needed (For more information, see
“Feathering” on page 930).
• Enter the upper-left coordinates (in pixels) in the Xoffset and Yoffset
fields to position the image in the mosaic, or optionally click the Use
x/ystart in Positioning? toggle button to select Yes. This option uses the
x,y start values from the header to compose a relative start location.
• Click the Mosaic Display toggle button to select a Gray Scale or RGB
color image display in the mosaicking window. For RGB, enter the desired
band numbers to display in the Red, Green, and Blue fields. Enter a
Linear Stretch percentage.
• Select whether to apply color balancing to the images by clicking No,
Fixed, or Adjust (see “Color Balancing Images” on page 922).
• Click OK.
4. Import additional georeferenced images into the mosaic as needed.
Georeferenced images are automatically positioned within the output mosaic
according to their geographic coordinates. New images are placed in front of
the other images, and the mosaic size is automatically adjusted to
accommodate the new images. If a properly georeferenced image is imported
with map coordinates that lay outside the current map extent of the mosaic, the
mosaic size is automatically changed to include the new image location.
For multi-resolution mosaicking, the output pixel size is entered on output and
ENVI automatically resamples the lower resolution images to match.
5. Import additional non-georeferenced images as needed. You can position these
images in the same manner as Pixel Based Mosaicking (for more information,
see “Positioning Images” on page 921). Each image name and the outline color
are listed below the mosaic image.
Note
You cannot adjust the mosaic positions of georeferenced images.
Building Mosaics
Use Apply to build a mosaic after all of the images for the mosaic are positioned.
Building the mosaic outputs the mosaic to a file.
It is not necessary to save the mosaic to an output file unless feathering,
multiresolution data, or color balancing were used. You can save the mosaic as a
virtual mosaic file to save time and disk space (see “Creating Virtual Mosaics” on
page 924).
1. Select File → Apply from the Map Based Mosaic dialog menu bar. The
Mosaic Parameters dialog appears with the output pixel size defaulted to the
highest resolution of input images. ENVI automatically resamples lower
resolution images to match. Non-georeferenced images in the mosaic are not
resampled.
2. Edit the Output X/Y Pixel Size values as needed.
3. Select an interpolation method from the Resampling drop-down list. The
choices are Nearest Neighbor, Bilinear, or Cubic Convolution. Nearest
Neighbor resampling is recommended.
4. Select output to File or Memory.
5. Enter a Background Value (a DN value for areas outside of the mosaic).
The background value mask is built from the first band used in the file. If a
pixel in the first band used contains this background value, then that pixel will
be masked out for all bands in the mosaicking process.
6. If color balancing is selected, use the toggle button to select whether to use the
entire image or to use only the overlapping areas for statistic calculations.
7. Click OK. ENVI adds the resulting output to the Available Bands List.
Feathering
Use Feathering to blend the edges of overlapping areas in input images for pixel-
based and georeferenced mosaicking. The two types of feathering in ENVI are edge
feathering and cutline feathering.
Tip
To use feathering when mosaicking images, import the bottom image without
feathering. Import the overlapping images with edge or cutline feathering as
desired.
The distance specified for edge feathering is used to create a linear ramp that
averages the two images across that distance from the edge inward. For example, if
the specified distance is 20 pixels, 0% of the top image is used in the blending at the
edge and 100% of the bottom image is used to make the output image. At the
specified distance (20 pixels) in from the edge, 100% of the top image is used to
make the output image and 0% of the bottom image is used. 50% of each image is
used to make the output at 10 pixels in from the edge.
1. From the Pixel Mosaic or Map Based Mosaic dialog menu bar, select
Import → Import Files and Edit Properties. A Mosaic Input Files dialog
appears. Select the image to feather and click OK. An Entry dialog appears.
• Or, right-click on the image and select Edit Entry. The Entry dialog
appears.
2. In the Feathering Distance field, enter the number of pixels over which to
blend the images.
The distance specified is used to create a linear ramp that averages the two images
across that distance from the cutline outwards. For example, if the specified distance
is 20 pixels, 100% of the top image is used in the blending at the cutline and 0% of
the bottom image is used to make the output image. At the specified distance (20
pixels) out from the cutline, 0% of the top image is used to make the output image
and 100% of the bottom image is used. 50% of each image is used to make the output
at 10 pixels out from the cutline.
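Both feathering modes reduce to the same linear blend: the weight given to the top image ramps across the feathering distance, starting at 0 at an image edge (edge feathering) or at 1 at the cutline (cutline feathering). A minimal IDL sketch, assuming a 20-pixel feathering distance and hypothetical DN values:

; Linear feathering ramp over a 20-pixel blending distance.
feather_dist = 20.0
dist = findgen(5) * 5.0                            ; 0, 5, 10, 15, 20 pixels
w_edge    = (dist < feather_dist) / feather_dist   ; edge: 0% top at the edge
w_cutline = 1.0 - w_edge                           ; cutline: 100% top at the cutline
top    = 100.0                                     ; sample top-image DN
bottom =  60.0                                     ; sample bottom-image DN
print, w_cutline * top + (1.0 - w_cutline) * bottom

At 10 pixels from the edge or cutline, both weights are 0.5 and the two images contribute equally, matching the 50% blending point described above.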
1. From the Pixel Mosaic or Map Based Mosaic dialog menu bar, select
Import → Import Files and Edit Properties. A Mosaic Input Files dialog
appears. Select the image to feather and click OK. An Entry dialog appears.
• Or, right-click on the image and select Edit Entry. The Entry dialog
appears.
2. Click Select Cutline Annotation File and select an annotation file.
3. In the Feathering Distance field that appears in the Cutline Feathering
Frame, specify the distance (in pixels) used to blend the image boundaries.
The process runs in two parts. ENVI adds the resulting output to the Available
Bands List.
2. Select a GLT file and click OK. The Input Data File dialog appears.
3. Select an input file and perform optional Spectral Subsetting, then click OK.
The Georeference from GLT Parameters dialog appears.
4. If you used a subset of the original data as the input file, click the Subset to
Output Image Boundary toggle button to select whether to output only the
warped subset region or whether to output that subset warped within the entire
output boundary.
5. In the Background Value field, enter the DN value to use as the background
value around the edges of the warped data.
6. Enter an output filename.
7. Click OK.
For instructions on selecting input and output projections, see “Selecting Map
Projection Types” on page 990.
7. Click OK.
A default output pixel size and rotation angle are calculated, and they appear in
the Build Geometry Lookup File Parameters dialog.
The default output pixel size is calculated based on the map coordinates in
output space. The default output rotation angle is used to minimize the output
file size. If the rotation angle is set to 0, then north will be up in the output
image. If it is set to another angle, then north will be at an angle and will not be
“up” in the output image. The rotation angle is stored in the ENVI header and
is used when overlaying grids, so the grid lines appear at an angle.
• To change the output pixel size, replace the value in the Output Pixel Size
field.
• To change the output rotation angle, replace the value in the Output
Rotation field.
Note
If you change a non-zero rotation angle to 0 so north is up, your resulting
image may contain a lot of background fill and may become very large.
The Super GLT process involves optimal radial resampling, where every input pixel
within a given radius that contributes to the value of an output pixel is considered in a
weighted fashion. In comparison, bilinear resampling arbitrarily averages a group of
2 x 2 neighbors, and cubic convolution resampling averages a group of 4 x 4 (16)
neighbors. Super GLT averages every relevant pixel in the file within a given
radius to determine the best possible output pixel value. Super GLT, therefore, is
much slower than a normal GLT process.
The Super GLT method is particularly useful for aircraft data where the sensor may
scan a given point on the ground multiple times, due to the roll, pitch, or yaw of the
aircraft. This means an output pixel might be best represented by an average of three
different input pixels located at three different positions in the input file, which is
why the Super GLT process takes longer to run.
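A minimal IDL sketch of the radially weighted idea follows. Inverse-distance weighting is used purely for illustration, since this guide does not specify ENVI's exact weighting scheme, and the function name and values are hypothetical.

; Illustrative sketch: one output pixel built from every input pixel
; that falls within a given radius, weighted by its distance from the
; output pixel center (inverse-distance weights used as an example).
function example_radial_resample, values, distances, radius
  inside = where(distances le radius, count)
  if count eq 0 then return, !values.f_nan
  w = 1.0 / (distances[inside] + 0.5)     ; avoid division by zero
  return, total(w * values[inside]) / total(w)
end

; Example call (from the IDL command line): three candidate input
; pixels near one output pixel, such as repeated aircraft scans of
; the same ground point.
print, example_radial_resample([0.31, 0.29, 0.35], [0.4, 0.9, 1.6], 1.5)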
Follow the steps below to build a Super GLT.
Tip
You can use Super Georeferencing from IGM to combine building the super GLT
and georeferencing from it into one step. For details, see “Super Georeferencing
from IGM Files” on page 941.
1. From the ENVI main menu bar, select Map → Georeference from Input
Geometry → Build Super GLT. The Input X Geometry Band dialog appears.
2. Select the band that contains the x geometry coordinates and click OK. The
Input Y Geometry Band dialog appears.
3. Select the band that contains the y geometry coordinates and click OK. The
Geometry Projection Information dialog appears.
4. In the Input Projection of Geometry Bands list, select the projection type.
5. In the Output Projection for Georeferencing list, select the projection for the
georeferencing.
For instructions on selecting input and output projections, see “Selecting Map
Projection Types” on page 990.
6. Click OK.
A default output pixel size and rotation angle are calculated, and they appear in
the Build Geometry Lookup File Parameters dialog.
The default output pixel size is calculated based on the map coordinates in
output space. The default output rotation angle is used to minimize the output
file size. If the rotation angle is set to 0, then north will be up in the output
image. If it is set to another angle, then north will be at an angle and will not be
“up” in the output image. The rotation angle is stored in the ENVI header and
is used when overlaying grids, so the grid lines appear at an angle.
• To change the output pixel size, replace the value in the Output Pixel Size
field.
• To change the output rotation angle, replace the value in the Output
Rotation field.
Note
If you change a non-zero rotation angle to 0 so north is up, your resulting
image may contain a lot of background fill and may become very large.
1. From the ENVI main menu bar, select Map → Georeference from Input
Geometry → Georeference from Super GLT. The Input File dialog appears.
2. Select an input file and perform optional Spectral Subsetting, then click OK.
The Select SGL Filename dialog appears.
3. Select the super GLT file and click OK. The Georeference from SGL
Parameters dialog appears.
4. In the Background Pixel Value field, enter the DN value to use as the
background value around the edges of the warped data.
5. In the Kernel Size Min and Max fields, enter minimum and maximum kernel
sizes, respectively.
The minimum kernel size is used for resampling unless the kernel contains
fewer than the minimum number of valid pixels (the Minimum Pixels to
Resample value). In that case, the kernel size is increased until either the
minimum number of valid pixels is reached or the maximum kernel size is
reached. If the maximum kernel still contains fewer than the minimum number
of valid pixels, the output pixel value is set to the background value (see the
sketch following this procedure).
6. Enter a value for Minimum Pixels to Resample.
If fewer than the minimum number of pixels are contained in the maximum
kernel size, then the output pixel value will be set to the background value.
7. Select output to File or Memory.
8. Click OK. ENVI adds the resulting output to the Available Bands List.
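A minimal sketch of the kernel-growth rule described above (illustrative only, not ENVI's resampler): starting at the minimum kernel size, the kernel is enlarged until it holds enough valid pixels or the maximum size is reached.

import numpy as np

def sgl_resample_pixel(valid_mask, values, row, col,
                       kmin, kmax, min_pixels, background):
    """Average the valid pixels inside the smallest kernel (sizes kmin,
    kmin+2, ... up to kmax) centered at (row, col) that contains at least
    min_pixels valid pixels; otherwise return the background value."""
    for k in range(kmin, kmax + 1, 2):       # grow by 2 each time (3, 5, 7, ...)
        half = k // 2
        r0, r1 = max(row - half, 0), row + half + 1
        c0, c1 = max(col - half, 0), col + half + 1
        window_mask = valid_mask[r0:r1, c0:c1]
        if window_mask.sum() >= min_pixels:
            return values[r0:r1, c0:c1][window_mask].mean()
    return background                         # not enough valid pixels anywhere

vals = np.random.rand(50, 50)
valid = np.random.rand(50, 50) > 0.7          # sparse valid pixels
print(sgl_resample_pixel(valid, vals, 25, 25, kmin=3, kmax=9,
                         min_pixels=4, background=0.0))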
If fewer than the minimum number of pixels are contained in the maximum
kernel size, then the output pixel value will be set to the background value.
10. Enter an output SGL filename.
11. In the Georeference Background Value field, enter the DN value to use as the
background value around the edges of the warped data.
12. Select output to File or Memory.
13. Click OK. ENVI adds the resulting output to the Available Bands List.
Note
Using a large number of base points will increase the processing time
considerably.
Note
Using a large number of warp points will increase the processing time
considerably.
3. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK.
4. If your input file is not in HDF or CEOS format, select the associated HDF or
CEOS annotation file to read the header information from.
5. Click OK. The SeaWiFS Geometry Parameters dialog appears.
6. Select the values to compute by selecting one of the following options:
• To select individual values, select the check boxes next to the desired
value.
• To select a range of values, enter the beginning and ending numbers into
the two fields next to the Add Range button and click Add Range.
• To select all values, click Select All.
• To de-select any selected items, click Clear.
7. Select output to File or Memory.
8. From the Output Data Type drop-down button, select Double or Floating
Point.
9. Click OK.
5. Select the desired map projection for the x,y coordinates by selecting a
projection type from the list (see “Selecting Map Projection Types” on
page 990).
6. Select output to File or Memory.
7. Click OK. ENVI extracts the needed information from the ASTER header file
and adds the resulting output to the Available Bands List.
Georeferencing ENVISAT
Use Georeference AATSR, Georeference ASAR, or Georeference MERIS to
georeference ENVISAT AATSR, ASAR, or MERIS data with the geolocation
information included in the ENVISAT file. ENVISAT imagery contains geolocation
tie points that correspond to specific pixels in the image. You can use these tie points
to automatically georeference the ENVISAT data without building a geometry file.
1. Open an ENVISAT file by selecting File → Open External File →
ENVISAT → sensor_type from the ENVI main menu bar.
2. Select one of the following menu options from the ENVI main menu bar,
depending on which type of ENVISAT imagery is being georeferenced:
• Basic Tools → Preprocessing → Data-Specific Utilities →
ENVISAT → Georeference sensor_type
• Spectral → Preprocessing → Data-Specific Utilities → ENVISAT →
Georeference sensor_type
• Map → Georeference ENVISAT → sensor_type
The Select ENVISAT File dialog appears.
3. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK. The Select Output Projection dialog appears.
4. Select the desired output map projection from the list (see “Selecting Map
Projection Types” on page 990 for more information), and enter any necessary
parameters.
5. Click OK. The Registration Parameters dialog appears.
6. Select any necessary parameters (see “Warping and Resampling” on page 904
for more information).
7. Click OK. ENVI adds the resulting output to the Available Bands List.
Georeferencing MODIS
Use Georeference MODIS to georeference both MODIS Level 1B and MODIS
Level 2 datasets to the output projection coordinates.
If latitude and longitude data are available as images named Latitude and Longitude
within the HDF dataset, the utility recognizes the images as containing geographic
coordinates. If latitude and longitude data are available as images named other than
Latitude and Longitude, ENVI prompts you to select the coordinate-containing
images. If no images from the dataset contain geocoordinates, the data cannot be
georeferenced.
To apply MODIS bow tie effect correction to the data, as described in “Applying the
Correction for the MODIS Bow Tie Effect” on page 963, the dataset must have
latitude and longitude information.
1. Open a MODIS Level 1B or MODIS Level 2 file by selecting File → Open
External File → EOS → MODIS from the ENVI main menu bar.
2. From the ENVI main menu bar, select one of the following to georeference the
MODIS file:
• Map → Georeference MODIS
• Basic Tools → Preprocessing → Data-Specific Utilities → MODIS →
Georeference Data
• Spectral → Preprocessing → Data-Specific Utilities → MODIS →
Georeference Data
The Input MODIS File dialog appears.
3. Select an input file and perform optional Spatial Subsetting and/or Spectral
Subsetting, then click OK.
If the dataset contains bands named Latitude and Longitude, the Georeference
MODIS Parameters dialog appears (Figure 8-20).
If the dataset does not contain bands named Latitude and Longitude, the
Lat/Lon Band Selection dialog appears (Figure 8-19). In this case:
• If the dataset contains geographic coordinate bands using names other than
the expected default, select the image names and click OK. The
Georeference MODIS Parameters dialog appears.
• If the dataset does not contain geographic coordinate bands, click Cancel
to close the utility. You will not be able to georeference the dataset.
Note
Using many warp points may increase the warping time, but can significantly
increase the accuracy of the georeferencing.
Figure 8-25: Georeferenced MODIS Radiance Image with Bow Tie Correction
Image and Scroll Windows
Georeferencing RADARSAT
Use Georeference RADARSAT to georeference RADARSAT data with embedded
geolocation point information. You can use the geolocation points associated with
each line of RADARSAT data to georeference the data to a specified map projection.
You can use all of the geolocation points from the file, or specify how many points to
extract from the data for georeferencing.
Note
Not all RADARSAT files contain embedded geolocation points. If your
RADARSAT data does not have geolocation points, you will be unable to
georeference the file.
Use the settings in this dialog to select an output projection and to specify how
to extract geolocation points from the RADARSAT file:
• Select Output Projection For Registration: Choose the projection into
which to warp the data. The default projection is UTM (if map information
is available). For details about selecting map projections, see “Selecting
Map Projection Types” on page 990.
• X Pixel Size/Y Pixel Size: Set the output data pixel size in units of
degrees, meters, or feet. The units to use depend on the units used for the
selected projection. If your input image does not have a pixel size defined,
the default values are 12.5 meters for both x and y. If pixel size data is
included in your file, those values appear in the X and Y Pixel Size fields.
• Number of Lines to Skip: Use this option to optimize either the accuracy
or the processing speed of the geolocation point extraction.
The larger the value in this field, the fewer points are extracted; processing
is faster, but the result is less accurate.
If you choose a smaller value for the number of lines to skip, the
processing time increases but the accuracy is improved. The default value
is 100.
• Enter Output GCP File Name: Allows you to optionally save the
extracted GCPs in a file with a specified name.
4. Click OK. The Registration Parameters dialog appears.
5. The Registration Parameters dialog allows you to further refine the warp
method, resampling technique, background value, and output parameters. For
more information about the Registration Parameters dialog, see “Warping and
Resampling Image-to-Map” on page 907.
In the Enter Output Filename field, enter the filename for your georegistered
image.
6. Click OK. ENVI adds the resulting output to the Available Bands List.
Building RPCs
Use Build RPCs to compute rational polynomial coefficient (RPC) information for
the following:
• Scanned aerial photographs from a frame camera.
• Digital aerial photographs with a frame central projection (including Vexcel
UltraCamD).
Note: ENVI automatically computes RPC map information for Leica ADS40
files if the necessary ancillary files are present. See “Opening ADS40 Files” on
page 174 for more information.
• Digital aerial photographs with a line central projection (including Leica
ADS40 and STARLABO TLS).
• Imagery from any generic pushbroom sensor (including ALOS PRISM and
AVINIR, ASTER, CARTOSAT-1, FORMOSAT-2, GeoEye-1, IKONOS, IRS-
C, KOMPSAT-2, MOMS, QuickBird, RapidEye, WorldView-1 and -2, and
SPOT) as long as GCPs are available.
Note
ASTER, SPOT, and FORMOSAT-2 data files contain RPC information, which you
can retain before orthorectification or DEM extraction and avoid using Build RPCs
altogether. See “Retaining RPC Information from ASTER, SPOT, and
FORMOSAT-2 Data” on page 989 for details.
RPCs are computed using a digital photogrammetry technique that uses a collinearity
equation to construct sensor geometry, where the object point, perspective center, and
image point are all on the same space line. The technique involves a series of
transformations involving pixel, camera, image-space, and ground coordinate
systems.
For single-image orthorectification, the technique includes two preprocessing steps to
build the sensor geometry: interior orientation (which transforms the pixel
coordinate system to the camera coordinate system), and exterior orientation (which
determines the position and angular orientation parameters associated with the
image). Once RPCs are computed, the RPC information is added to the input file
header so that you can use the file with ENVI’s generic RPC orthorectification and
DEM Extraction tools. See “Generic RPC and RSM” on page 918 and the DEM
Extraction Module User’s Guide for more information.
Before computing RPCs (see “Select Input File” on page 979), you should
understand the photogrammetry model that ENVI uses, since you will have an
opportunity to edit associated parameters.
Imagery from pushbroom sensors and aerial photography from line central digital
cameras use the line central projection. Each scan line has its own projection center,
as the following figure shows.
To compute RPCs, the angles of each axis in the object space system, along with the
location of the projection center, must also be determined. These are referred to as
exterior orientation parameters. ENVI can automatically calculate these parameters
based on GCPs that you select, but you can also edit them or manually enter them as
needed.
References:
Wang, Zhizhuo, 1990, Principles of Photogrammetry (with Remote Sensing),
Beijing: Publishing House of Surveying and Mapping.
McGlone, J. C., editor, 2004, Manual of Photogrammetry, Fifth Edition, American
Society for Photogrammetry and Remote Sensing.
f / PS = H / GSD
Where:
f is the focal length
H is the aircraft or satellite altitude
PS is the pixel size on the camera
GSD is the ground sample distance or ground resolution.
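As a quick check of the relationship, the following uses hypothetical values (not taken from any sensor table) to solve for GSD:

# f / PS = H / GSD  ->  GSD = PS * H / f
focal_length_mm = 150.0     # f, hypothetical camera
pixel_size_mm = 0.009       # PS, 9-micron detector pixels (hypothetical)
altitude_m = 5000.0         # H, flying height above ground (hypothetical)

gsd_m = pixel_size_mm * altitude_m / focal_length_mm
print(f"Ground sample distance: {gsd_m:.2f} m")   # 0.30 m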
The following table lists focal lengths and pixel sizes for some aerial cameras and
satellite pushbroom sensors:
along the side. If viewed from the ground point corresponding to the scene center, the
across track incidence angle has a positive value if the viewing direction is eastward.
Following are some guidelines for determining the incidence angles for different
pushbroom sensors.
ASTER
You can set both angles to 0.0 degrees. However, for Band 3B, you should set the
along track incidence angle to -27.6 degrees (descending orbit) or 27 degrees
(ascending orbit), and the across track incidence angle to 0.0 degrees.
Note
ASTER and SPOT data files contain RPC information, which you can retain before
orthorectification or DEM extraction and avoid using Build RPCs altogether. See
“Retaining RPC Information from ASTER, SPOT, and FORMOSAT-2 Data” on
page 989 for details.
IKONOS
The *_metadata.txt file associated with an IKONOS dataset lists Nominal
Collection Elevation and Nominal GSD (Cross Scan and Along Scan) values
for each source image. Use these values to compute the approximate along
track and across track incidence angles with the following equations:
∂_across_track = tan⁻¹ √[ (tan²(∂_nominal) + 1 – a²) / (1 + a²) ]
∂_along_track = tan⁻¹ √[ (a² (1 + tan²(∂_nominal)) – 1) / (1 + a²) ]
Where:
a = GSD_along_track / GSD_across_track
∂_nominal = 90 – Nominal Collection Elevation
You should set the signs of the incidence angles according to the actual pointing
direction, which you can determine from the Nominal Collection Azimuth
value in the *_metadata.txt file.
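A small Python sketch of this computation, using the equations as reconstructed above and hypothetical metadata values (verify the values and signs against your own *_metadata.txt file):

import math

def ikonos_incidence_angles(nominal_collection_elev_deg,
                            gsd_along_m, gsd_across_m):
    """Approximate along/across track incidence angles (degrees, unsigned)
    from the Nominal Collection Elevation and Nominal GSD metadata values."""
    a = gsd_along_m / gsd_across_m
    d_nom = math.radians(90.0 - nominal_collection_elev_deg)
    t2 = math.tan(d_nom) ** 2
    across = math.degrees(math.atan(
        math.sqrt(max(t2 + 1 - a**2, 0.0) / (1 + a**2))))
    along = math.degrees(math.atan(
        math.sqrt(max(a**2 * (1 + t2) - 1, 0.0) / (1 + a**2))))
    return along, across

# Hypothetical metadata values; apply signs from Nominal Collection Azimuth.
print(ikonos_incidence_angles(75.0, gsd_along_m=0.86, gsd_across_m=0.84))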
IRS-1C/1D
Set the along track incidence angle to 0.0 degrees. Set the across track incidence
angle according to the Input view angle (Deg) value in the leader file.
KOMPSAT-2
Set the approximate incidence angles using the
AUX_IMAGE_SATELLITE_INCIDENCE_DEG metadata field in the associated
ephemeris data file (.eph).
QuickBird
Set the approximate incidence angles (and signs) using the inTrackViewAngle and
crossTrackViewAngle values in the associated *.IMD file.
RapidEye
Set the across track incidence angle to the acrossTrackIncidenceAngle value in
the associated *_metadata.xml file. Set the along track incidence angle to 0.0
degrees.
SPOT
Incidence angles are available in the leader file (CAP format) or XML metadata file
(DIMAP format).
For SPOT-1 through SPOT-4 data, you can set the along track incidence angle to 0
because this type of viewing is not allowed. For SPOT-5 data, the XML metadata lists
the along track incidence angle in the <INCIDENCE_ANGLE> tag. Use the
<VIEW_ANGLE> tag to set the across track incidence angle.
For CAP-format data, the incidence angle is stored at byte offsets 453-468 of the
header record. You can use a simple text editor to view the header record. The format
for the incidence angle is <X>AA.A, for example, L12.7 or R18.1. If the prefix is L,
set the angle to a negative value. If the prefix is R, set the angle to a positive value.
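For example, a small parsing sketch for the <X>AA.A convention (a hypothetical helper, not part of ENVI):

def spot_cap_incidence(angle_field: str) -> float:
    """Convert a CAP-format incidence angle string such as 'L12.7' or
    'R18.1' to a signed value: L -> negative, R -> positive."""
    field = angle_field.strip()
    sign = -1.0 if field[0].upper() == "L" else 1.0
    return sign * float(field[1:])

print(spot_cap_incidence("L12.7"))   # -12.7
print(spot_cap_incidence("R18.1"))   # 18.1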
Frame Camera
1. From the Type drop-down list in the Build RPCs dialog, select Frame
Camera.
2. Enter a Focal Length (mm) value for the camera or sensor. This field is
required.
3. Enter Principal Point x0 (mm) and Principal Point y0 (mm) coordinates,
which are usually available from the camera calibration report. The default
value is 0 for both fields.
4. Click Select Fiducials in Display. The scanned aerial photograph is
automatically loaded to a new display group, and the Interior Orientation
Fiducials dialog appears.
• Restore GCPs from ASCII File: The Enter GCP Filename dialog
appears. Select a GCP file that contains projection information (with a
.pts extension). Click OK.
• Select Projection for GCPs: The Select GCPs in Display dialog appears.
See “Selecting Map Projection Types” on page 990 for more information
on choosing a projection. You can also select Restore GCPs from ASCII
File from this dialog. Click OK.
3. The Exterior Orientation GCPs dialog appears. This dialog is similar to the
Ground Control Points Selection dialog for image-to-map registration. See
“Collecting Ground Control Points (Image-to-Map)” on page 888 for more
information.
4. Center the crosshairs in the Zoom window over a GCP and click once. The
image coordinates appear in the Image X and Image Y fields of the Exterior
Orientation GCPs dialog.
5. Enter map coordinates for the GCP in the appropriate fields of the Exterior
Orientation GCPs dialog.
6. In the Elev field, enter an elevation for the selected ground point.
7. Click Add Point to add the location to the list of GCPs.
8. Click Show List to display the Ground Control Points List dialog. This dialog
is similar to the Image to Image GCP List dialog. See “Using the Image to
Image GCP List” on page 882 for more information.
9. Continue adding GCPs. You should spread the GCPs across the image,
including all four corners, for best results. Unlike the GCPs used in “warp”
registrations, the accuracy of each GCP used for the exterior orientation is
absolutely critical for locating the position of the aerial camera. If the exterior
orientation is not accurate, then the orthorectified image will be in error, even
if the interior orientation is perfect.
10. From the Exterior Orientation GCPs dialog menu bar, select Options →
Export GCPs to Build RPCs Widget to compute exterior orientation
parameters. An Exterior Orientation from GCPs Error Report dialog appears,
which shows a report of the individual RMS errors for each GCP, and the total
RMS error.
The Build RPCs dialog lists six exterior orientation parameters (XS, YS, ZS,
Omega, Phi, and Kappa), along with the units of the rotation angles, and the
rotation system used.
The rotation matrix associated with XS, YS, and ZS is calculated from three
rotation angles: Omega, Phi, and Kappa. While the rotation angles are different
among rotation systems, the rotation matrix is the same.
Note
You should only be concerned with the rotation system if you import existing
orientation parameters that come from a global positioning system (GPS),
inertial navigation system (INS), or inertial measurement unit (IMU); or
from block bundle adjustment results using third-party photogrammetry
software. If these do not apply to you, then accept the default calculated
exterior orientation parameters. Rotation angles are only used to compute the
RPCs.
11. To edit any of the exterior orientation parameters, click Edit Parameters. The
Edit Exterior Orientation Parameters dialog appears. This dialog also allows
you to edit the projection information associated with the GCPs.
12. From the Rotation System drop-down list in the Edit Exterior Orientation
Parameters dialog, select one of the following options.
• Omega/Phi/Kappa: SX is the primary axis, defined as a fixed axis whose
space direction does not change while the direction of the other axes
changes as the space is rotated. Omega is a rotation about the SX axis, Phi
is a rotation about the SY axis, and Kappa is a rotation about the SZ axis.
Figure 8-30 indicates the positive directions of all the rotation angles. This
is the default option since it is widely used in the U.S. and most of the
world.
about the SZ axis. Figure 8-31 indicates the positive directions of all the
rotation angles. This convention is commonly used in Germany.
13. Edit the XS, YS, ZS, Omega, Phi, and Kappa fields in the Edit Exterior
Orientation Parameters dialog as needed.
14. Click the Units toggle button to toggle between Degrees or Radians. This
field represents the units of Omega, Phi, and Kappa.
15. Click OK in the Edit Exterior Orientation Parameters dialog.
Compute RPCs
1. If you want to further improve the RMS error of the exterior orientation model,
click Select GCPs in Display again in the Build RPCs dialog. You can add
more GCPs or delete GCPs with large errors. See “Build Exterior Orientation”
on page 980.
2. When you are finished modifying the GCPs, click Recalculate Exterior
Orientation in the Build RPCs dialog.
3. Click OK in the Build RPCs dialog. The Scene Elevation in Meters dialog
appears.
4. The Minimum Elevation and Maximum Elevation fields are initially
populated with the range of global elevation values in world_dem (found in
the data directory of your ENVI installation path). If you know the elevation
range of your scene, you can enter new Minimum Elevation and Maximum
Elevation values. These values represent the height above the WGS-84
ellipsoid for the geographic region that the image covers.
5. Click OK. After processing is complete, an ENVI Message dialog appears:
“RPCs have been calculated for this file, and the header has been updated.”
Click OK.
Once RPCs are computed, the RPC information is added to the input file
header so that you can use the file with ENVI’s Generic RPC orthorectification
and DEM Extraction tools. See “Generic RPC and RSM” on page 918 and the
DEM Extraction Module User’s Guide for more information.
7. When you are finished modifying the GCPs, click Recalculate Exterior
Orientation in the Build RPCs dialog.
8. Click OK. The Scene Elevation in Meters dialog appears.
9. The Minimum Elevation and Maximum Elevation fields are initially
populated with the range of global elevation values in world_dem (found in
the data directory of your ENVI installation path). If you know the elevation
range of your scene, you can enter new Minimum Elevation and Maximum
Elevation values. These values represent the height above the WGS-84
ellipsoid for the geographic region that the image covers.
10. Click OK. After processing is complete, an ENVI Message dialog appears:
“RPCs have been calculated for this file, and the header has been updated.”
Click OK.
Once RPCs are computed, the RPC information is added to the input file
header so that you can use the file with ENVI’s Generic RPC orthorectification
and DEM Extraction tools. See “Generic RPC and RSM” on page 918 and the
DEM Extraction Module User’s Guide for more information.
Pushbroom Sensor
1. From the Type drop-down list in the Build RPCs dialog, select Pushbroom
Sensor.
2. Enter a Focal Length (mm) value for the camera or sensor. This field is
required. See “Focal Length and Pixel Size” on page 975.
3. Enter Principal Point x0 (mm) and Principal Point y0 (mm) coordinates.
The default value is 0 for both fields. See “Principal Point Coordinates” on
page 975.
4. Enter X Pixel Size (mm) and Y Pixel Size (mm) values. These are required
fields. See “Focal Length and Pixel Size” on page 975.
5. Enter Incidence Angle Along Track and Incidence Angle Across Track
values. See “Along Track and Across Track Incidence Angles” on page 976.
6. Select a Sensor Line Along Axis option. Each sensor line has one projective
center.
• X: The sensor line direction is along the image x-axis.
• Y: The sensor line direction is along the image y-axis.
7. Set the required Polynomial Orders for XS, YS, ZS, Omega, Phi, and Kappa.
• 0: The parameter is constant for the entire image.
• 1: The parameter has a linear relationship with the y camera coordinate,
for example: XS(i) = a0 + a1*y(i)
• 2: The parameter is modeled using a second-order polynomial, for
example: XS(i) = a0 + a1*y(i) + a2*y(i)^2
The default value is 1 for all six exterior orientation parameters. The
higher you set the polynomial order, the more GCPs you must select in the
image. Usually, a second-order polynomial is only needed to model a
nonlinear variation of the exterior orientation between sensor lines, which
implies an unstable flight path. Experiment with different polynomial
orders to select the optimal modeling strategy (a short sketch of these
polynomial forms follows this procedure).
8. Click Select GCPs in Display. A Select GCPs in Display dialog appears. See
“Build Exterior Orientation” on page 980 for the remaining steps.
9. If you want to further improve the RMS error of the exterior orientation model,
click Select GCPs in Display again. You can add more GCPs or delete GCPs
with large errors.
10. When you are finished modifying the GCPs, click Recalculate Exterior
Orientation in the Build RPCs dialog.
11. Click OK. The Scene Elevation in Meters dialog appears.
12. The Minimum Elevation and Maximum Elevation fields are initially
populated with the range of global elevation values in world_dem (found in
the data directory of your ENVI installation path). If you know the elevation
range of your scene, you can enter new Minimum Elevation and Maximum
Elevation values. These values represent the height above the WGS-84
ellipsoid for the geographic region that the image covers.
13. Click OK. After processing is complete, an ENVI Message dialog appears:
“RPCs have been calculated for this file, and the header has been updated.”
Click OK.
Once RPCs are computed, the RPC information is added to the input file
header so that you can use the file with ENVI’s Generic RPC orthorectification
and DEM Extraction tools. See “Generic RPC and RSM” on page 918 and the
DEM Extraction Module User’s Guide for more information.
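A brief sketch of what the polynomial orders mean (illustrative only; ENVI solves for the coefficients internally from your GCPs). Here a hypothetical exterior orientation parameter such as XS is evaluated per scan line i for orders 0, 1, and 2:

import numpy as np

def exterior_param(line_index, coeffs):
    """Evaluate one exterior orientation parameter (e.g. XS) at scan line i.
    coeffs = [a0]          -> order 0: constant for the entire image
    coeffs = [a0, a1]      -> order 1: XS(i) = a0 + a1*y(i)
    coeffs = [a0, a1, a2]  -> order 2: XS(i) = a0 + a1*y(i) + a2*y(i)**2
    """
    y = np.asarray(line_index, dtype=float)
    return sum(a * y**p for p, a in enumerate(coeffs))

lines = np.array([0, 500, 1000, 2000])
print(exterior_param(lines, [353200.0]))                # order 0 (constant)
print(exterior_param(lines, [353200.0, 0.7]))           # order 1 (linear drift)
print(exterior_param(lines, [353200.0, 0.7, -1.0e-5]))  # order 2 (nonlinear drift)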
• If you select UTM, click the N or S toggle button to indicate if the selected
latitude is north (N) or south (S) of the equator. Enter a zone, or click Set
Zone and enter the latitude and longitude values to automatically calculate
the zone.
• If you select a State Plane projection, enter the zone or click Set Zone and
select the zone name from the list.
Both NOS and USGS zone numbers are shown next to the zone name.
• To designate the units for a projection type, click Units and select a unit
type. Enter a pixel size (in the selected unit).
1. From the ENVI main menu bar, select Map → Customize Map Projections,
or click New in any dialog where map projection selection is available. The
Customized Map Projection Definition dialog appears.
2. Either proceed to step 3, or select Projection → Load Existing Projection
from the dialog menu bar to select from a list of standard projections contained
in the file map_proj.txt located in the ENVI directory structure (see “ENVI
Map Projections File” on page 994).
After the parameters are loaded, you can edit them as necessary.
3. Enter or modify the Projection Name, if desired.
4. Select the Projection Type from the list of supported projections (see “Map
Projections” on page 1199).
5. To change the ellipsoid, click the Projection Datum toggle button to select
Projection Ellipsoid. Select User Defined from the list. Enter the desired A
and B values to define the ellipsoid (See Snyder, 1982).
6. To change the datum, select the desired datum from the scrolling list. The
ellipsoid that corresponds with the selected datum is shown next to the
Ellipsoid label. Toggle back to the Projection Ellipsoid list and select this
ellipsoid.
7. If you want a false easting and northing (typically to keep map coordinates
from being negative), enter the values in the corresponding fields.
the projection origin. Lambert Conformal Conic and Albers Equal Area
projections require the projection origin and two standard parallels. See
Snyder, 1982 for details.
• Select Options → Toggle DMS <-> DD from the Customized Map
Projection Definition dialog menu bar to switch between
degrees/minutes/seconds and decimal degrees.
• You can select Options → Extract Projection Origin From Image from
the Customized Map Projection Definition dialog menu bar to set the
latitude/longitude to the center pixel location of an existing georeferenced
image. A Choose Georeferenced Image dialog appears. Select a
georeferenced image and click OK.
9. Select Projection → Add New Projection from the Customized Map
Projection Definition dialog to add the projection to the list of projections used
by ENVI.
The available projections are modified for the current ENVI session and you
will be asked if you want to save the changes to the map_proj.txt file when
you close the dialog (see “ENVI Map Projections File”).
To save the new or modified projection information, select File → Save
Projections from the Customized Map Projection Definition dialog menu bar.
The file map_proj.txt, located in the ENVI directory structure, is modified
to contain the new projection. You can edit this file using any text editor as an
alternative to the interactive definition above.
;
; 7 - Stereographic (ellipsoid)
; a, b, lat0, lon0, x0, y0, k0, [datum], name
;
; 9 - Albers Conical Equal Area
; a, b, lat0, lon0, x0, y0, sp1, sp2, [datum], name
;
; 10- Polyconic
; a, b, lat0, lon0, x0, y0, [datum], name
;
; 11- Lambert Azimuthal Equal Area
; a, b, lat0, lon0, x0, y0, [datum], name
;
; 12- Azimuthal Equidistant
; r, lat0, lon0, x0, y0, name
;
; 13- Gnomonic
; r, lat0, lon0, x0, y0, name
4. In the Output Projection and Map Extent area of the Convert Map
Projection Parameters dialog:
• Click Change Proj to change the projection of the upper-left coordinate
only. The Projection Selection dialog appears. Change the map coordinate
or latitude/longitude information and click OK.
• The X/Y Pixel Size and Output X/Y Size fields in the Convert Map
Projection Parameters dialog are automatically populated with information
from the input image.
• Click Options and select from the menu to manage the output projection
and map extent settings.
5. In the Conversion Parameters area of the Convert Map Projection
Parameters dialog, select the warping and resampling methods (see “Warping
and Resampling Image-to-Image” on page 904).
6. Select output to File or Memory.
7. Click OK. ENVI adds the resulting output to the Available Bands List.
Layer Stacking
Use Layer Stacking to build a new multiband file from georeferenced images of
various pixel sizes, extents, and projections. The input bands will be resampled and
re-projected to a common output projection and pixel size. The output file will have a
geographic extent that either encompasses all of the input file extents or encompasses
only the data extent where all of the files overlap.
For details, see “Layer Stacking” on page 269.
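A minimal sketch of the two extent options (illustrative only, with hypothetical map extents already in a common projection):

def stacked_extent(extents, inclusive=True):
    """Combine per-file map extents given as (xmin, ymin, xmax, ymax).
    inclusive=True  -> encompass all input extents (union bounding box)
    inclusive=False -> keep only the region where all inputs overlap
    Returns None if the exclusive (overlap) extent is empty."""
    xmins, ymins, xmaxs, ymaxs = zip(*extents)
    if inclusive:
        return min(xmins), min(ymins), max(xmaxs), max(ymaxs)
    ext = max(xmins), max(ymins), min(xmaxs), min(ymaxs)
    return ext if ext[0] < ext[2] and ext[1] < ext[3] else None

files = [(300000, 4200000, 320000, 4220000),
         (310000, 4210000, 335000, 4235000)]
print(stacked_extent(files, inclusive=True))   # encompasses both files
print(stacked_extent(files, inclusive=False))  # overlap region only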
Using GPS-Link
Use GPS-Link to read National Marine Electronics Association 0183 format
(NMEA 0183) data directly from a GPS unit. The GPS must be manually set to the
NMEA 0183 mode. ENVI supports a GPS-link on PCs running Microsoft
Windows 2000 or Windows XP. However, the SCSI tape support drivers are needed
for the link to work on Windows. To install SCSI tape support for Microsoft
Windows 2000 and Windows XP platforms, open the aspi_v470.exe self-
extracting archive in the \tape32 directory on the ENVI for Windows installation
CD. Please see the included README.DOC file for installation instructions. GPS-Link
only works in Windows 32-bit mode. If you have a 64-bit Windows PC, run ENVI in
32-bit mode by selecting Start → Program Files → ENVI x.x → 32-bit → ENVI
or ENVI + IDL.
GPS-Link Options
In the ENVI GPS-Link dialog, use the Options menu to clear points, collect points,
or set the display format (degrees) of the points.
• To erase all points in the list, select Options → Clear Points.
• To turn the automatic collection of points at a set time interval on or off, select
Options → Auto Update: On/Off.
• To set the time interval between automatic point collection, select Options →
Set Retrieval Rate. Enter the desired time interval in seconds.
• To turn the collection of points on and off, select Options → Collect Points:
On/Off.
• When in Auto Update mode, select Collect Points: On/Off to pause the
collection of points.
• To display the collected location in decimal degrees (DD) or in degrees,
minutes, seconds, select Options → Display Points: DD/DMS.
Sample Format
Following is a sample ASCII file of GPS data:
1, -107.7840409, 44.28326903
2, -107.7764449, 44.28345405
3, -107.7688487, 44.28363857
4, -107.7612525, 44.28382258
5, -107.7536563, 44.28400609
This file has five GPS point locations, identified by the ID number in the first
column. The second column contains longitudes, and the third column contains
latitudes. You can delimit the data values by space, comma, or tab. The file can also
contain optional header information, as long as the header is commented out with a
semicolon.
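A small parsing sketch for files in this layout (illustrative only; ENVI reads the file for you through the Point Collection dialogs):

import re

def read_gps_points(path):
    """Read ID, longitude, latitude triplets from an ASCII point file.
    Values may be separated by spaces, commas, or tabs; lines starting
    with a semicolon are treated as header comments and skipped."""
    points = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";"):
                continue
            pid, lon, lat = re.split(r"[,\s]+", line)[:3]
            points.append((int(pid), float(lon), float(lat)))
    return points

# Example: read_gps_points("gps_points.txt") on the sample above returns
# [(1, -107.7840409, 44.28326903), (2, -107.7764449, 44.28345405), ...]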
7. Select a map projection, datum, and units. See “Selecting Map Projection
Types” on page 990 for details.
8. Click OK. The longitude and latitude values are imported into ENVI and
appear in the ENVI Point Collection dialog.
You can access options to work with vector layers from either the Vector window
menu bar, the Vector Parameters dialog menu bar, or the right-click menu. Using the
menu options, you can add new vectors; export vector layer coordinates for use in
image-to-map registration; and view, edit, and query vector attributes, as described in
the next sections.
Tip
Use the Mouse Button Descriptions window while working with vectors to view
information about the function of each mouse button at any given cursor location.
See Vector Window Mouse Button Functions in Getting Started with ENVI.
• From either the Vector window or Vector Parameters dialog menu bar,
select Edit → Edit Layer Properties.
2. Select the vector to edit from the Layer Names list.
3. To edit the layer color, click the color button and select a new color.
4. To edit the appearance of the lines, enter a value in the Thick field, and select
the line style from the Style drop-down list.
5. Select a polygon fill type from the Polygon Fill drop-down list options:
• None: Leave the polygon unfilled.
• Solid: Fill the polygon with the polygon color.
• Line, Dashed, Dotted, and so forth: Fill the polygon with equally spaced
lines.
6. To change the orientation of the fill lines, enter the angle in degrees
(counterclockwise, with respect to the horizontal [0 degrees]) in the Orien
field.
7. To change the spacing of the lines, enter a value in the Space box.
8. Select the Point Symbol to use for plot points from the drop-down list.
9. Designate the size of the symbol, in the Symbol Size field.
10. To plot attribute names with vector points, see “Plotting Attribute Names” on
page 1017.
11. To plot vector points with different sizes for attribute values, see “Plotting
Vector Points Based on Attribute Values” on page 1018.
12. Click OK.
To create your own vector plot symbols, see “Creating Vector Plot Symbols” in the
ENVI Programmer’s Guide.
3. Use the Attribute drop-down list to select the column name to plot from the
Layer Attributes table.
4. Use the Align drop-down list to select left, middle, or right alignment, and
set the font type, text size, and orientation using the appropriate parameters.
5. Click OK. The Edit Vector Layers dialog appears.
For more information about attributes, see “Vector Attributes” on page 1029.
• Right-click in either the Vector window or the display group and select
Select Mode → vector_type.
4. Select the vector type from the right-click menu in the Vector window, or from
the Mode menu in the Vector Parameters dialog menu bar. The choices are
Polygon, Polyline, Rectangle, Ellipse, and Point.
To draw multi-part vectors, choose Multi Part: On from the Vector window
right-click menu. See “Drawing ROIs” on page 323 for details.
5. Depending on the vector type selected in the previous step, use one of the
following:
• Left-click to define points, polygons, or polyline vertices. To undo the last
point, middle-click.
• Left-click and drag to draw rectangles or ellipses. Middle-click to delete
the rectangle or ellipse.
• Middle-click and drag to draw squares or circles. Middle-click to delete
the square or circle.
• To undo the last point, middle-click.
6. Right-click to set the vector placement. For all vector types other than Point, a
diamond-shaped handle appears near the new vector. You can left-click on the
diamond and drag it to change vector placement.
For Point vector types, select one of the following:
• Accept as Individual Points: To accept each point as an individual record.
• Accept as Multi Point: To accept multiple points as a single record.
• Remove New Point: To remove the point.
7. For all vector types other than Point, right-click again and select one of the
following:
• Accept New Polyline/Polygon/Rectangle/Ellipse: Accepts the placement
of the vector.
• Remove New Polyline/Polygon/Rectangle/Ellipse: Deletes the vector.
• Snap Start Node to the Nearest Polyline: Accepts the placement of the
polyline and connects its start node with an existing nearby polyline.
• Snap End Node to the Nearest Polyline: Accepts the placement of the
polyline and connects its end node with an existing nearby polyline.
• Snap Both Ends to the Nearest Polylines: Accepts the placement of the
polyline and connects both its start and end nodes with existing nearby
polylines.
• Node Handles On: To enable the node handles for editing purposes.
You can remove accepted vectors from an unsaved vector layer.
1. Select one of the following:
• From the Vector window or Vector Parameters dialog menu bar, select
Edit.
• Right-click in the Vector window or the display group and select Edit.
2. From the resulting menu, select one of the following:
• Undo Last Edit: To remove only the last vector added.
• Undo All Edits: To remove all vectors added.
Tip
To toggle mouse button functions from Edit Existing Vectors mode to Cursor
Query mode (zoom, pan, and so forth), press and hold the Ctrl key while clicking
the mouse button.
1. Make the layer to which to add the points the active layer.
2. Select one of the following:
• From either the Vector window or the Vector Parameters dialog menu bar,
select Mode → Add New Vectors.
• Right-click in either the Vector window or the display group and select
Select Mode → Add New Vectors.
3. Right-click in either the Vector window or the display group and select Input
Points from ASCII. The Input ASCII Points Filename dialog appears.
4. Select a file and click Open. The Input ASCII File dialog appears.
5. Select the x and y column numbers.
6. Use the These points comprise drop-down list to select the type of vector the
points define: a polygon, a polyline, a group of points (as one vector), or
individual points.
7. If your vectors are georeferenced, select the projection of the ASCII points.
8. Click OK. ENVI adds the points to the vector layer.
3. Click OK. The selected layers appear in the right-click menu and display
automatically.
2. Enter the values (in pixels) in the four Plot Border Values fields, for the top,
right, bottom, and left sides of the window.
3. Select the background color from the Background button.
4. Enter the Number of X/Y tick intervals in the appropriate fields.
Tick marks in vector projection units are plotted in all the borders and labels
display in the left and bottom borders.
5. Click Apply to apply the border and ticks to the window.
To reset the plot range in the Vector window to the bounding box, click Reset
Ranges.
4. Click OK. The layer name appears in the Available Vector Layers list.
• Convert all records of an EVF layer to one ROI: To convert all vectors
to a single ROI record.
• Convert each record of an EVF layer to a new ROI: To convert each
vector to an individual ROI record.
4. Click OK. The ROI name(s) appear in the ROI Tool dialog.
Note
Exporting layers to ROIs can create very large ROIs.
• If your vector layer is in the Vector window, the Select Associated Data
File dialog appears. Select an input file and perform optional Spatial
Subsetting, then click OK. The Buffer Zone Image Parameters dialog
appears.
• If your vector layer is overlaid on the display group, the Buffer Zone
Image Parameters dialog appears.
• If more than one vector layer is open, the Buffer Zone Input Layers dialog
appears. Select the names of the layers to include in the buffer zone image
and click OK. The Select Associated Data File or Buffer Zone Image
Parameters dialog appears.
If you select more than one layer, the distance will be calculated from the
pixel to the nearest selected layer.
2. Set the Maximum Distance to measure. Any pixels with a distance larger than
this value are set to the maximum distance value +1.
3. From the Distance Kernel drop-down button, select Floating Point or Integer
output.
4. Select output to File or Memory.
5. Click OK.
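A rough sketch of a buffer zone (distance) image computed from a rasterized vector mask, using SciPy's Euclidean distance transform rather than ENVI's own routine; the clipping rule matches the Maximum Distance behavior described above:

import numpy as np
from scipy.ndimage import distance_transform_edt

def buffer_zone_image(vector_mask, max_distance):
    """Distance (in pixels) from every pixel to the nearest vector pixel.
    vector_mask : boolean array, True where a rasterized vector falls
    Pixels farther than max_distance are set to max_distance + 1."""
    # distance_transform_edt measures distance to the nearest zero element,
    # so invert the mask: vector pixels become 0, everything else nonzero.
    dist = distance_transform_edt(~vector_mask)
    dist[dist > max_distance] = max_distance + 1
    return dist

mask = np.zeros((100, 100), dtype=bool)
mask[50, 10:90] = True                                   # one rasterized polyline
print(buffer_zone_image(mask, max_distance=20).max())    # 21.0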
Vector Attributes
Vector layers may have attributes associated with them. ENVI reads shapefile and
MapInfo Interchange file attributes. The attributes are stored in a dBASE II table
(.dbf) for shapefiles and in a .mid file for MapInfo.
Use the Layer Attributes table to view, edit, sort, and save vector attribute data. Use
the Vector Attribute functions to create new vector layers based on attribute values, to
add new attributes to vectors, to plot point attribute names in Vector windows, and to
associate point symbol sizes with attribute values (see “Changing Vector Layer
Display Properties” on page 1016).
6. Enter a query value in the field. The value can be a string (case sensitive) or
numeric value depending on the attribute type.
7. To make a more complicated query expression using logical operators, choose
from the following options:
• Click AND and follow steps 4-6 to do a query that must satisfy both
entered mathematical expressions.
• Click OR and follow steps 4-6 to do a query that must satisfy one of the
entered mathematical expressions.
• Click Clear to clear the entire query expression.
• Click Delete to delete individual lines of the query expression.
8. Enter a query layer name in the appropriate field.
9. Select output to File or Memory.
10. Click OK.
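Conceptually, the query builds a boolean expression over the attribute table. A toy sketch with hypothetical attribute records (not ENVI's query engine):

records = [
    {"NAME": "Main St", "CLASS": "road", "LENGTH": 1250.0},
    {"NAME": "Elk Creek", "CLASS": "stream", "LENGTH": 820.5},
    {"NAME": "Route 9", "CLASS": "road", "LENGTH": 310.0},
]

# AND query: CLASS == "road" AND LENGTH > 500 (string matches are case sensitive)
road_and_long = [r for r in records
                 if r["CLASS"] == "road" and r["LENGTH"] > 500]

# OR query: CLASS == "stream" OR LENGTH > 1000
stream_or_long = [r for r in records
                  if r["CLASS"] == "stream" or r["LENGTH"] > 1000]

print([r["NAME"] for r in road_and_long])    # ['Main St']
print([r["NAME"] for r in stream_or_long])   # ['Main St', 'Elk Creek']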
8. Enter the number of digits to the right of the decimal in the Decimal Count
box for a numeric field.
9. Click OK. ENVI starts a Layer Attributes table (see “The Layer Attributes
Table” on page 1033).
The ASCII data appears in the Layer Attributes table. You can edit the
information in any of the fields.
Highlighting Vectors
To highlight the vector that corresponds to a selected attribute record, select the
attribute record number in the Layer Attributes table. You can highlight vectors that
correspond to multiple attribute records.
Depending on where the vectors are displayed, either the Vector window or the
display group centers on the corresponding vector and highlights it in the Current
Highlight color.
4. Select the column number (strings can only have one column), and starting and
ending row numbers.
5. Click OK to enter the data into the Layer Attributes table.
Saving Changes
To save changes made to the Layer Attributes table, select File → Save Changes
from the Layer Attributes table menu bar.
Note
This overwrites the existing attribute file.
Note
Building vector layers from the high resolution database creates very large
output files (~20 MB each).
If you need to create vector layers from a coastline database that was not installed in
the ENVI default installation, you can copy the database files from your installation
CD to your computer. The coastline database files are under the following directory
of your installation CD (Unix users must re-install the software):
...setup\x86\itt\IDLxx\resource\maps\
Then, follow the steps outlined above for building the vector layers in ENVI and you
should have access to the additional database. After you create the EVF files, you can
delete the extra database files to save disk space.
5. Click the Edit Attributes drop-down button and select Map Info. The Edit
Map Information dialog appears.
6. Fill in the fields according to your selected map projection and area of interest.
See “Entering Map Information for Georeferenced Files” on page 199 for
details.
7. Once you have defined the map information, click the name of your empty
image in the Available Bands List, and select Load Band.
8. From the Display group menu bar, select Overlay → Vectors. The Vector
parameters dialog appears.
9. From the Vector Parameters dialog menu bar, select Options → Import
Layers. The Import Vector Layers dialog appears.
10. Select the world boundary layer from the list and click OK.
11. From the Display group menu bar, select Overlay → Grid Lines. See “Adding
Grid Lines” on page 63 for details.
12. From the Display group menu bar, select Overlay → Annotation, and overlay
any further annotation. See “Annotating Images and Plots” on page 31 for
details.
5. Select the band, or bands, to use for linear extraction, or keep the default band
selection. If you want to use multiple bands, it is recommended that you select
six or fewer bands. Using more than six bands slows system performance.
6. Click OK. The image opens in color in a display group and the Vector
Parameters dialog appears with a layer named Intelligent Digitizer: New Layer
in the Available Vector Layers list.
7. Extract features using the steps in “Using Intelligent Digitizer” on page 1045.
If you are starting Intelligent Digitizer from the display group:
1. From the Display group menu bar, select Overlay → Vectors. The Vector
Parameters dialog appears.
2. Select File → Create New Layer. The New Vector Layer Parameters dialog
appears.
3. Enter a Layer Name.
4. Select output to File or Memory.
5. Click OK. The Vector Parameters dialog appears.
6. Select Mode → Add New Vectors, then select Mode → Intelligent Digitizer
Parameters. The Intelligent Digitizer Parameters dialog appears.
7. Click Select Intelligent Digitizer Input Bands. The File Spectral Subset
dialog appears, with the currently displayed band or bands selected by default.
8. Select the band, or bands, to use for linear extraction, or keep the default band
selection. If you want to use multiple bands, it is recommended that you select
six or fewer bands. Using more than six bands slows system performance.
9. Click OK.
10. Extract features using the steps in “Using Intelligent Digitizer”.
1. To use intelligent mode, ensure the following are enabled in the Vector
Parameters menu bar. (If you accessed Intelligent Digitizer using Vector →
Intelligent Digitizer, these options are already set.)
• Mode → Add New Vectors
• Mode → Polygon or Polyline
• Mode → Multi Part: Off
• Mode → Intelligent Digitizer
2. In the Window field, select which display group window to use for extracting
features. The choices are Image, Scroll, or Zoom. To disable feature
extraction in all windows, select Off.
3. Left-click to specify the first seed point of the feature to extract, and continue
to add seed points by left-clicking at intervals along the feature. ENVI
automatically connects the seed points as you go. See “Intelligent Digitizer
Mouse Button Functions” on page 1048 for information about feature
extraction mouse button behavior. Some general guidelines for extracting
features are as follows:
• For road centerline extraction, select seed points near the road centerline.
• For features that contain sharp curves, select seed points at the sharp curve.
• If the spectrum of the feature surface changes abruptly, select a seed point
a few pixels beyond the surface change.
• For polygon extraction, define the last seed point of the polygon on or near
the first seed point. If the first and last seed points are too far apart when
you accept the polygon, ENVI connects the nodes with a straight line,
rather than following the shape of the feature.
• If a seed point does not extract the feature as desired, middle-click to
delete the seed point and select a new one. Clicking on a location closer to
the previous seed point will likely give a better result.
• To extract polyline features that will eventually intersect with features you
have not yet extracted, extend the start or end node beyond where the
features will intersect. You can correct overshooting polylines afterward
using automatic post-processing.
• In areas that do not provide sufficient contrast between the feature and its
background, press and hold the Shift key to toggle to standard vector
mode, then left-click to define the feature through the area of low contrast.
When back to an area of higher contrast, release the Shift key to resume
using intelligent mode.
4. When there are enough seed points to extract the feature, right-click to set the
polygon or polyline placement. A diamond-shaped handle appears near the
polygon or polyline. If needed, you can left-click on the diamond and drag it to
change vector placement.
5. Right-click again and select one of the following:
• Accept New Polyline/Polygon: Accepts the placement of the polyline or
polygon.
• Remove New Polyline/Polygon: Deletes the polyline or polygon.
• Snap Start Node to the Nearest Polyline: Accepts the placement of the
polyline and connects its start node with an existing nearby polyline.
• Snap End Node to the Nearest Polyline: Accepts the placement of the
polyline and connects its end node with an existing nearby polyline.
• Snap Both Ends to the Nearest Polylines: Accepts the placement of the
polyline and connects both its start and end nodes with existing nearby
polylines.
Note
For snapping options, the polylines to snap must be within the Snap
Tolerance (Pixels) parameter in the Intelligent Digitizer Parameters dialog
(see “Setting Intelligent Digitizer Parameters” on page 1053).
• Make the layer to calculate active, then from the Vector Parameters menu
bar, select Options → Calculate Length Attribute.
2. If the input band does not have map information, the Input Display Pixel dialog
appears. Enter a pixel size and units, which ENVI uses to calculate the length.
3. ENVI calculates the EVF length of the features, then displays the results in the
evf_length field of the Layer Attributes table.
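The length calculation itself is straightforward: sum the segment lengths along each polyline. A minimal sketch with hypothetical vertices (ENVI stores its own result in the evf_length attribute):

import math

def polyline_length(vertices, pixel_size=1.0):
    """Length of a polyline given its vertices. If the vertices are pixel
    coordinates, supply the pixel size (e.g. meters per pixel) so the
    result is in map units; georeferenced vertices can use pixel_size=1."""
    segs = zip(vertices[:-1], vertices[1:])
    return pixel_size * sum(math.hypot(x2 - x1, y2 - y1)
                            for (x1, y1), (x2, y2) in segs)

road = [(0, 0), (30, 40), (60, 40)]          # pixel coordinates (hypothetical)
print(polyline_length(road, pixel_size=15))  # (50 + 30) px * 15 m/px = 1200 m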
7. From the Available Vector Layers list, select the layers to overlay on the
display group image.
8. Click Load Selected. The Load Vectors dialog displays.
9. Select the display number to load the vectors into from the Select Vector list.
10. Click OK. ENVI overlays the vectors on the display group image and the
Vector Parameters dialog appears.
2. The Linear Feature Width (Pixels) default is the average road width at the
current image resolution, in pixels (assuming a road width of 15 meters).
Though this setting is suitable for road extraction, you can change it for
features with widths that differ from the average road width. If needed, use the
Cursor Location/Value tool to help estimate the width. The lowest allowable
value is 0.0.
3. The Snap Tolerance (Pixels) specifies distance, in pixels, to allow between
two polylines for ENVI to join them when you use snapping operations.
Change the default as needed. The default is 30.00, which indicates that ENVI
will join polylines 30.00 or fewer pixels apart. The lowest allowable value is
0.0, which disables snapping.
4. The Smoothing default specifies how much smoothing ENVI applies during
feature extraction. High is the default setting, which is good for features that
are not highly curved. If you have highly curved features to extract, use a
setting of Low or Off to reduce the amount of curve smoothing.
5. To perform feature extraction on a different band, or bands, than what is in the
display group, see “Selecting Different Bands” on page 1051.
6. When all parameters are set, click OK.
2. Select the measurement units from the Length Units drop-down list. The
default is Meters. Other options include Km, Feet, Yards, Miles, and Nautical
Miles. If needed, you can add additional unit options to the useradd.txt file
in the ENVI menu directory.
3. Enter the Attribute Field Width to define the width of the evf_length
attribute in the output file.
4. Enter the Decimal Count to specify the number of digits to allow to the right
of the decimal in the evf_length attribute in the output file.
11. Select the type of interpolation by clicking the Interpolation toggle button.
The choices are Linear or Quintic (smooth quintic polynomial).
12. Select whether or not to extrapolate edges by clicking the Extrapolate Edges
toggle button.
If you select Yes, ENVI uses quintic extrapolation.
13. Select the Output Data Type from the drop-down list.
14. Select output to File or Memory.
15. Click OK. ENVI adds the resulting output to the Available Bands List.
Sample Format
Following is a sample ASCII file of DEM data:
1, -107.7840409, 44.28326903, 366.2
2, -107.7764449, 44.28345405, 225.2
3, -107.7688487, 44.28363857, 309.2
4, -107.7612525, 44.28382258, 608.7
5, -107.7536563, 44.28400609, 600.2
This file has five DEM locations and elevation values, identified by the ID number in
the first column. The second column contains longitudes, the third column contains
latitudes, and the fourth column contains elevations (in meters). You can delimit the
data values by space, comma, or tab. The file can also contain optional header
information, as long as the header is commented out with a semicolon.
Follow the rasterize point data steps in the previous section to open your ASCII
DEM. Set the X/Y Position Column fields to the appropriate longitude and latitude
columns, respectively. Set the Z Data Value Column to the appropriate elevation
column. When you click OK in the Gridding Output Parameters dialog, ENVI adds
the DEM data to the Available Bands List.
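A rough gridding sketch using SciPy's griddata (not ENVI's gridding routine), with hypothetical scattered elevation samples, to show what rasterizing point data to a regular grid involves:

import numpy as np
from scipy.interpolate import griddata

# Scattered (x, y, elevation) samples -- hypothetical values in map coordinates.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(200, 2))          # easting, northing in meters
elev = 500 + 0.05 * xy[:, 0] + 0.02 * xy[:, 1]    # a simple tilted surface

# Regular output grid with a 25 m pixel size (hypothetical).
gx, gy = np.meshgrid(np.arange(0, 1000, 25), np.arange(0, 1000, 25))

# Linear interpolation onto the grid; cells outside the convex hull of the
# input points come back as NaN (comparable to not extrapolating edges).
dem = griddata(xy, elev, (gx, gy), method="linear")
print(dem.shape)   # (40, 40)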
2. Select an input band and perform optional Spatial Subsetting, then click OK.
The Topo Model Parameters dialog appears.
3. Enter the kernel size.
Tip
Use various kernel sizes to extract multi-scale topographic information.
Larger kernel sizes run slower.
Figure 10-1: Topo Model Parameters Dialog (left) and Compute Sun Elevation
and Azimuth Dialog (right)
Figure 10-2: Example of Topo Modeling for a Simulated Pyramid DEM with an
Aspect of 135 Degrees and Elevation of 45 Degrees
References:
Wood, Joseph, 1996, The Geomorphological Characterization of Digital Elevation
Models, Ph.D. Thesis, University of Leicester, Department of Geography,
Leicester, UK.
If your DEM is noisy, striped, or stepped, you should smooth it before using this
function.
Tip
See the ENVI Tutorials on the ITT Visual Information Solutions website (or on the
ENVI Resource DVD that shipped with your ENVI installation) for step-by-step
examples.
Tip
You can change the class colors by selecting Tools → Color Mapping → Class
Color Mapping (see “Mapping Class Colors” on page 123).
3. Specify the Sun Elevation Angle and Sun Azimuth Angle in the
corresponding fields. (To compute sun elevation values, see “Computing Sun
Elevation Values” on page 1069.)
4. Select the name of a color table from the list.
5. From the Stretch drop-down list, select a stretch type.
• If you select % Linear, enter the percentage to clip.
• If you select Linear Range, enter the minimum and maximum values.
• If you select Gaussian, enter the number of standard deviations for the
data distribution.
6. Select output to File or Memory.
7. Click OK. ENVI adds the resulting output to the Available Bands List.
4. From the Elevation Attribute Column drop-down list, select the attribute that
contains the elevation of the vector contours.
5. If desired, enter a Valid Elevation Range in the same units as the elevation
attributes. ENVI omits vectors whose elevation falls outside of the valid range
when constructing the DEM.
6. Enter an Output Pixel Size and Output Data Type. You can change the pixel
size before you click OK to begin processing. It is recommended that you do
not set the output pixel size to a value smaller than the approximate sampling
distance between the nodes that define the vector contours.
7. Use the Gridding Interpolation Method and the Extrapolate Edge of Image
toggle buttons to set the gridding parameters.
8. To spatially subset the output DEM, click one of the following buttons:
• Map: To restrict the DEM output to an area defined in map coordinates.
The Spatial Subset by Map Coordinates dialog appears. See “Subsetting by
Map Coordinates” on page 218 for details.
• File: To restrict the output DEM to the same area as an existing
georeferenced file. The Choose a File on Which to Base the Spatial Subset
dialog appears.
9. In the Select Output Projection area, choose a map projection for the output
DEM. This does not have to be the same projection as the input vector data.
10. Click OK. The DEM Output Parameters dialog appears.
11. Examine the information displayed under the Gridded DEM Output Image
title to make sure it is accurate. If you need to change any of the output
parameters, including the pixel size, click Change Output Parameters.
12. Select output to File or Memory.
13. Click OK. ENVI adds the resulting output to the Available Bands List.
Using 3D SurfaceView
Use ENVI’s 3D SurfaceView window to visualize elevation or other surface data in
3D. You can use 3D SurfaceView to do the following:
• Display the surface data as a wire-frame, a ruled grid, or as points.
• Drape the surface data with a gray scale or color image, and overlay it with
ROIs and vectors.
• Rotate, translate, and zoom in and out of the surface in realtime using the
mouse cursor or the 3D SurfaceView Controls dialog. The cursor is also linked
to your draped image allowing cursor locations, values, and profiles to display
from the 3D view.
• Define a flight path (interactively or with a drawn annotation). You can
animate the flight path to produce 3D fly-throughs of your data. You can
control the vertical and horizontal view angles and fly through your data at a
constant height above the surface or at a constant altitude.
• Use perspective controls to place the visual perspective in the 3D SurfaceView
and rotate the surface around that perspective.
Displaying 3D Files
If you are running ENVI on a Windows system, you must be in 16-bit or 24-bit color
display mode.
1. Display the gray scale or color image you want to drape over your DEM (or
other 3D dataset). ENVI uses the entire image as the overlay image on the
DEM unless both the image and DEM files are georeferenced.
• If both files are georeferenced, then ENVI uses only the part of the image
that overlaps with your DEM.
• If the DEM is subset, then ENVI subsets the georeferenced image to
match. The spatial resolutions of the two files do not need to be the same.
• If both the image and the DEM are georeferenced, they do not need to be
in the same projection. ENVI reprojects the DEM on the fly to match the
image projection.
2. Select one of the following options:
• From the ENVI main menu bar, select Topographic → 3D SurfaceView.
• From the Display group menu bar, select Tools → 3D SurfaceView.
If more than one display group is open, select the display group that contains
the desired image. The Associated DEM Input File dialog appears.
3. Select an input file and perform optional Spatial Subsetting, then click OK.
The 3D SurfaceView Input Parameters dialog appears.
Note
If a DEM is already associated with the input image file in the image header,
ENVI uses the associated DEM by default (it does not prompt you to select a
DEM). For more information on associating a DEM with an image, see
“Associating a DEM to a File” on page 202.
4. Select the DEM Resolution (number of pixels) check boxes to use for the 3D
plot. You can select more than one resolution. Typically, use the lowest
resolution (64) while you are determining the best flight path, then use a higher
resolution to display your final fly-through sequence.
Note
Using higher DEM resolutions significantly slows the display; use higher
resolution settings only on powerful platforms.
10. To perform optional Spatial Subsetting, click Spatial Subset and select a
spatial subset of the image.
The spatial dimensions of the DEM and image do not need to be the same. ENVI
displays the full dimensions (or selected subsets) of both datasets.
11. Click OK. The 3D SurfaceView plot window appears (see Figure 10-6).
Mouse button actions in the 3D SurfaceView window:
Left: Click and drag horizontally to rotate the surface around the z axis. Click and drag vertically to rotate the surface around the y axis. Double left-click on the surface to go to a pixel in the Image window.
Middle: Click and drag to translate (move) the image.
Right: Click and drag to the right to zoom in. Click and drag to the left to zoom out.
At any time, you can return the surface to its original settings by clicking Reset View
in the 3D SurfaceView Controls dialog. The surface returns to its original orientation,
but the dialog settings are retained.
The view zooms to the new origin of the perspective which is set at a height
that is 0.05 normalized units (default) above the surface and the surface begins
to rotate. You can modify the height of the perspective origin above the surface
using the Translation controls in the 3D SurfaceView Controls dialog.
If a perspective rotation was not initiated, click Start to begin rotating the
surface around its center point.
Click Stop to pause the current rotation.
If the cursor is in select mode, but a perspective origin in the surface was not
chosen, clicking Stop toggles the cursor out of select mode.
3. Enter the value for the Rotation Delay. The delay value is the number of
seconds to wait between successive renderings of the rotating surface. The
default value is 0.05. Setting the value to 0.0 sets the rotation speed to your
computer’s limits for calculating the transformation matrix and rendering the
surface.
4. To change the direction of the rotation of the surface, click Direction and
select either Left or Right. Direction refers to the rotation direction of the
surface, not the viewer’s perspective. The default rotation direction is Right.
The following sections provide details about working in the two different modes.
Figure 10-8: SurfaceView Motion Controls Dialog in User Defined Mode (left)
and Annotation Mode (right)
5. Repeat the projection selection steps until you have selected as many projections as desired (only two are required).
• To replace a projection in the flight path list, select the path view number
and click Replace.
• To delete a projection in the flight path list, select the path view number
and click Delete.
• To clear the flight path list, click Clear.
6. Enter the number of frames to use in the fly-through animation. ENVI smoothly interpolates the flight path between the selected projections.
7. Click Play Sequence to animate the fly-through.
To control the speed and direction of the fly-through, select Options →
Animate Sequence in the SurfaceView Motion Controls dialog (see
“Animating Flight Paths” on page 1092).
Tip
If you are already in Annotation mode and want to input annotation, select File →
Input Annotation from Display or Input Annotation from File from the 3D
SurfaceView Motion Controls dialog menu bar.
3. To smooth the flight path by using a running average of points along the line,
enter the number of points to use in the average in the Flight Smooth Factor
field.
4. To fly over the surface at a constant height above the DEM terrain, click the
toggle button until Flight Clearance appears and enter the height (in the same
units as the DEM).
To fly over the surface at a constant elevation, click the toggle button until
Flight Elevation appears and enter the elevation above sea level (in the same
units as the DEM).
5. To adjust the vertical look angle, enter the angle, in degrees, in the Up/Down
field. A vertical look angle of -90 degrees looks straight down at the surface. A
look angle of 0 degrees looks straight ahead (horizontal).
To adjust the horizontal look angle, enter the angle, in degrees, in the
Left/Right field. A horizontal look angle of -90 degrees looks to the left, a
look angle of 0 degrees looks straight ahead, and a look angle of 90 degrees
looks to the right.
6. To turn the annotation flight line trace off and on, select Options →
Annotation Trace from either the 3D SurfaceView Motion Controls dialog or
the 3D SurfaceView window menu bar.
7. Enter the number of frames to use when animating the data fly-through in the
Frames field.
2. Click the toggle button to select Map Coord for georeferenced images, or
Pixel Coord for non-georeferenced images. The default is Pixel Coord.
• If you select Pixel Coord, enter the Sample and Line coordinates of
where you want to stand in the image.
If your image is a subset of a larger image, you can use the x and y pixel
offsets from the header file by clicking the Use Offset toggle button to Yes.
• If you select Map Coord, enter the map coordinates of the location where
you want to stand and click Change Proj to change the projection (if
desired).
3. In the Auto Apply field, select whether or not to apply changes automatically
by selecting Yes or No.
4. Select from the following options to change your position and viewing angle in
the surface view image.
• To change the location where you are standing, double left-click on a spot
in the Image window to automatically move to that location.
The location change is reflected in both the Image window and the 3D
SurfaceView window.
• To change the azimuth look direction (north = 0 and angles increase
clockwise), use the Azimuth slider bar or field next to the label in the 3D
SurfaceView Position Controls dialog. To change the angle in increments
of five degrees, middle-click on the Azimuth arrows.
• To change the elevation angle from which you are looking, use the
Elevation slider bar next to the label. To change the angle in increments of
five degrees, middle-click on the Elevation arrows.
An angle of 0 degrees is horizontal, and a negative angle looks down.
• To change the distance from which you are looking down on the image,
enter a number in the Height Above Ground field or use the
increase/decrease buttons to change the height.
The units used should be the same as the DEM elevation units.
5. Click Apply.
Smoothing 3D Images
To smooth 3D Surface View Images that appear pixelated, select Options →
Bilinear Interpolation.
To disable the smoothing effect, select Options → Interpolation: None.
Saving 3D SurfaceViews
Use the File option in the 3D SurfaceView window menu bar to save the plot to an
image file or a Virtual Reality Modeling (VRML) file, or export the plot to a printer.
Select one of the following from the 3D SurfaceView window menu bar:
• To save a surface view as an image file, select File → Save Surface As →
Image File.
• To save a surface view as a VRML 2.0 file, select File → Save Surface As →
VRML.
If you are using Cosmo Player to view the VRML file, set the following Cosmo
Player preferences. From the Cosmo Player menu:
• Select Preferences → Performance.
• Select Fastest for Image/Texture quality.
• Unselect the Enable specular and emissive color option.
• To print a 3D SurfaceView plot, select File → Print.
For more information and instructions, see “Saving Images from Displays” on
page 15.
Viewing Headers
Use View AIRSAR/TOPSAR Header, View COSMO-SkyMed Header, View
Generic CEOS Header, and View RADARSAT Header to produce reports of
header information.
1. From the ENVI main menu bar, select one of the following:
• Radar → Open/Prepare Radar File → View AIRSAR/TOPSAR
Header
• Radar → Open/Prepare Radar File → View COSMO-SkyMed
Header → Basic (simple information) or Extend (detailed information)
• Radar → Open/Prepare Radar File → View Generic CEOS Header
• Radar → Open/Prepare Radar File → View RADARSAT Header
1. From the ENVI main menu bar, select Radar → Calibration → Sigma
Nought. The Input File dialog appears.
2. Select an input file and perform optional Spatial Subsetting, then click OK.
The Select Input DEM File dialog appears.
3. Select the corresponding DEM data file and click Open. The DEM file must
be the same dimensions and pixel size as the input radar file.
4. If ENVI cannot find the leader file, the Enter Leader File dialog displays.
Select the file and click Open.
5. The Sigma Nought Parameters dialog appears. Enter an output filename and
click OK.
6. If you are using ERS data, the ERS Incidence Angle and ERS Absolute
Calibration values will be automatically read from the leader file. If they are
not found in the leader file, enter the center of image incidence angle and
absolute calibration constant in the appropriate fields.
7. If you are using RADARSAT data, ENVI will automatically read the Near
Range Angle and Far Range Angle from the leader file. If these values are
not found in the leader file, enter the near range angle and far range angle, in
degrees, in the appropriate fields.
8. Select output to File or Memory.
9. Click OK.
Figure 11-1: Antenna Pattern Correction Plot with First Order Polynomial Fit (in
White)
• To manually enter the necessary parameters, click Cancel and enter the
parameters in the Slant to Ground Range Parameters dialog that appears.
The Input File dialog appears.
3. Select an input file and perform optional Spatial Subsetting, then click OK.
The Slant to Ground Range Parameters dialog appears.
4. For SIR-C, AIRSAR, and RADARSAT input files, ENVI automatically
populates the Instrument height (km), Near range distance (km), and Slant
range pixel size (m) fields with information from the input file that contains
data acquisition parameters.
For generic input files:
• Enter the sensor altitude in the Instrument height (km) field.
• Enter a Near range distance (km) value from nadir for the input image.
• Enter a value for Slant range pixel size (m). This is not the pixel size of
the input image.
5. Enter a value for the desired Output pixel size (m) of the ground range data.
6. From the Near Range Location drop-down list, select the location of the near
range in the image: Top, Bottom, Left, or Right.
7. From the Resampling Method drop-down list, select Nearest Neighbor,
Bilinear, or Cubic Conv.
8. Output the result to File or Memory.
9. Click OK.
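The sketch below (Python, flat-earth geometry, not ENVI's implementation) illustrates how the parameters in steps 4 through 6 relate: each slant-range sample is mapped to a ground-range distance, and the line is then resampled to a uniform ground-range pixel size. All numeric values are placeholders.

import numpy as np

height_km = 792.0        # instrument height (placeholder)
near_ground_km = 300.0   # near range distance from nadir (placeholder)
slant_pix_m = 7.9        # slant range pixel size (placeholder)
out_pix_m = 12.5         # desired output (ground range) pixel size (placeholder)
n_samples = 4096         # samples per line (placeholder)

h = height_km * 1000.0
near_slant = np.hypot(h, near_ground_km * 1000.0)   # slant range to the first sample
slant = near_slant + np.arange(n_samples) * slant_pix_m
ground = np.sqrt(slant ** 2 - h ** 2)               # ground range of each sample

line = np.random.rand(n_samples)                    # one slant-range image line (placeholder)
out_ground = np.arange(ground[0], ground[-1], out_pix_m)
resampled = np.interp(out_ground, ground, line)     # linear resampling stand-in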
Adaptive Filters
ENVI includes several adaptive filters to use for SAR processing. The filters include
Lee, Enhanced Lee, Frost, Enhanced Frost, Gamma, Kuan, and Local Sigma to
reduce image speckle and a Bit Errors filter to remove bad pixels. Filters are
described in “Using Adaptive Filters” on page 553.
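As an illustration of how an adaptive speckle filter works, the following Python sketch implements a basic Lee filter, in which local statistics drive a weighted blend between each pixel and its window mean. The 3 x 3 window and the crude noise-variance estimate are assumptions, not ENVI defaults.

import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=3, noise_var=None):
    mean = uniform_filter(img, size)                 # local mean in each window
    mean_sq = uniform_filter(img * img, size)
    var = mean_sq - mean * mean                      # local variance in each window
    if noise_var is None:
        noise_var = float(np.mean(var))              # crude global noise estimate
    weight = np.clip(var - noise_var, 0.0, None) / np.maximum(var, 1e-12)
    return mean + weight * (img - mean)              # weighted blend toward the local mean

speckled = np.random.gamma(1.0, 1.0, (256, 256))     # placeholder SAR-like data
filtered = lee_filter(speckled, size=3)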
Texture Filters
ENVI includes several texture filters for extracting textural information from SAR
and other data types. These include filters based on the data range, RMS, 1st moment,
and 2nd moment of the data. Filters are described in “Using Texture Filters” on
page 548.
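A minimal sketch of windowed texture measures along the lines of those listed above follows; the definitions here are generic moving-window statistics and may not match ENVI's exact formulations.

import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def texture_measures(img, size=3):
    data_range = maximum_filter(img, size) - minimum_filter(img, size)
    first_moment = uniform_filter(img, size)            # local mean
    second_moment = uniform_filter(img * img, size)     # local mean of squared values
    return data_range, first_moment, second_moment

img = np.random.rand(128, 128)                          # placeholder data
rng_img, m1, m2 = texture_measures(img, size=5)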
For easier processing, name the three files with the following convention:
filename_p.stk, filename_l.stk, and filename_c.stk.
You can generate byte images to conserve disk space when quantitative analysis is not
an issue.
1. From the ENVI main menu bar, select Radar → Polarimetric Tools →
Synthesize AIRSAR Data. The Input Stokes Matrix Files dialog appears. Use
this dialog to read the compressed Stokes matrix files.
7. Click OK. All of the synthesized bands are placed in a single file. ENVI adds
the synthesized images to the Available Bands List.
1. From the Select Band Combinations to Synthesize list, select the check
boxes next to bands to include.
2. Enter the transmit and receive ellipticity and orientation angles in the
Transmit Ellip/Orien and Receive Ellip/Orien fields.
3. Click Add Combination. Ellipticity values range from -45 to 45 degrees with
an ellipticity of 0 producing linear polarizations. Orientation values range from
0 to 180 degrees with 0 representing horizontal and 90 degrees for vertical
polarizations.
4. Select the desired bands (C, L, and/or P) by selecting the box to the left of the
band name.
5. Click Add Combination. The selected images are listed in the Additional
Images field.
Note
If you are using SIR-C files dumped directly from tape to disk, or if you are using
SIR-C data on disk that were not read from tape, see “Using the CEOS Header Tool
to Find Missing Information” on page 1120 for instructions for entering required
data parameters.
For easier processing, name the files with the following convention:
filename_c.cdp and filename_l.cdp.
If your data are Multi-Look Detected (MLD), these are just 2-byte integer data and do
not need to be synthesized. For other data types, see “Using the CEOS Header Tool to
Find Missing Information” on page 1120 for instructions on finding the number of
lines, samples, and offset for the image, using the View CEOS Header tool. Then,
open the image by selecting File → Open Image File from the ENVI main menu bar,
and enter this information into the Header Info dialog that appears. (For complex
data, be sure to set the Strip Line Header? toggle button to Yes.)
Follow these steps to synthesize SIR-C data:
1. From the ENVI main menu bar, select one of the following options:
• Radar → Polarimetric Tools → Decompress-Synthesize Images →
Synthesize SIR-C Images
• Radar → Open/Prepare Radar File → Synthesize SIR-C Data
The Input Data Products Files dialog appears. Use this dialog to read the
SIR-C compressed scattering matrix files.
Note
Do not open the Stokes matrix files using the File menu selection from the
ENVI main menu bar.
directly convert the data into compressed data product (.cdp) format so they can be
synthesized into images. You need to use ENVI’s View CEOS Header tool to gather
the required information for synthesizing a SIR-C image.
Note
You can also find missing information in the quicklook prints that JPL provided
with the data, or by printing or viewing one of the ASCII CEOS header files that
comes on the SIR-C distribution tapes.
You should have at least four files for your SIR-C data. The largest file is usually the
data file, and the second largest is usually the SAR leader file.
1. From the ENVI main menu bar, select Radar → Open/Prepare Radar
File → View Generic CEOS Header. The Enter Compressed Data Products
Filename dialog appears.
2. Select the SAR leader file and click Open. The CEOS Header Report dialog
appears.
3. Note the Processed Scene Range value. Enter this value in the Width
(km) field of the SIR-C Header Parameters dialog when you later synthesize
the data.
4. Note the Processed Scene Azimuth value. Enter this value in the Length
(km) field of the SIR-C Header Parameters dialog when you later synthesize
the data.
5. Note the Product type value. Select the appropriate SAR Channel Type
radio button in the SIR-C Header Parameters dialog when you later synthesize
the data.
6. Close the CEOS Header Report dialog.
7. From the ENVI main menu bar, select Radar → Open/Prepare Radar
File → View Generic CEOS Header again. The Enter Compressed Data
Products Filename dialog appears.
8. Select the data file and click Open. The CEOS Header Report dialog appears.
9. Note the Number of Samples and Number of Lines values. Enter these
values in the Samples and Lines fields of the SIR-C Header Parameters dialog
when you later synthesize the data.
10. Note the Record Info value. The last number in the array is the size of the
image offset or embedded header. Enter this value in the Offset field of the
SIR-C Header Parameters dialog when you later synthesize the data. If there
are two fields labeled Record Info in the CEOS Header Report, then take the
two large numbers at the end of these lines and add them to determine the
image offset.
11. Now that you have the information you need to synthesize the SIR-C images,
follow the steps in “Synthesizing SIR-C Data” on page 1117. When you enter
information in the SIR-C Header Parameters dialog, be sure to click the Yes
radio button next to Strip Line Header?
Background
The first imaging SAR systems collected data with only one polarization state. For
example, the SEASAT satellite, launched in 1978, measured the backscatter return
for a horizontally polarized transmitted signal, and a horizontally polarized return
signal. Several more recent systems have also utilized single polarization SAR,
including ERS-1, JERS-1, SIR-A, and SIR-B. Only in the past decade have SAR
sensors been capable of measuring more than one polarization state while preserving
phase information. These systems, called POLSAR, transmit and receive both
vertically and horizontally polarized microwave signals. Currently, the most
commonly available POLSAR datasets are those generated from JPL’s AIRSAR or
SIR-C instruments.
and H axes are in a plane perpendicular to the direction at which the radar signal is
transmitted.
If the electromagnetic wave has a linear polarization, then the polarization ellipse will
be a straight line, which corresponds to an ellipticity angle of 0. For linear
polarizations, an orientation angle of 0 or 180 degrees represents horizontal
polarization, while an orientation angle of 90 degrees represents vertical polarization.
POLSAR sensors usually transmit and receive both vertical and horizontal linearly
polarized radiation. An excellent source for more information on ellipticity and
orientation angles is the chapter on Radar Fundamentals in Principles and
Applications of Imaging Radar.
For a SAR system that coherently transmits and receives both horizontal and vertical
polarizations, you can use the elements of the resulting scattering matrix to calculate
images representing any desired polarization state (that is, any valid combination of
ellipticity and orientation angles). Therefore, for POLSAR data, you are not limited
to synthesizing images of backscatter at VV, HH, or HV polarizations. You can
synthesize an image that shows what the backscatter would be with transmitted and
received signals of any polarization, including non-linear polarizations.
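The following Python sketch illustrates the idea for a single pixel: the backscattered power for any transmit/receive polarization state is computed from a 2 x 2 complex scattering matrix. The angle convention, the placeholder matrix, and the vector formulation are illustrative assumptions; ENVI works from the compressed Stokes or scattering matrix files described earlier.

import numpy as np

def pol_vector(ellipticity_deg, orientation_deg):
    # Unit polarization vector for a given ellipticity and orientation angle.
    chi = np.radians(ellipticity_deg)
    psi = np.radians(orientation_deg)
    rot = np.array([[np.cos(psi), -np.sin(psi)],
                    [np.sin(psi),  np.cos(psi)]])
    return rot @ np.array([np.cos(chi), 1j * np.sin(chi)])

def synthesized_power(S, tx_ellip, tx_orient, rx_ellip, rx_orient):
    p_t = pol_vector(tx_ellip, tx_orient)
    p_r = pol_vector(rx_ellip, rx_orient)
    return float(np.abs(p_r @ S @ p_t) ** 2)

S = np.array([[1.0 + 0.1j, 0.05j],                 # placeholder [Shh Shv; Svh Svv]
              [0.05j, 0.8 - 0.2j]])
hh = synthesized_power(S, 0, 0, 0, 0)              # linear horizontal transmit and receive
vv = synthesized_power(S, 0, 90, 0, 90)            # linear vertical transmit and receive

Evaluating the same expression over a grid of ellipticity (-45 to 45 degrees) and orientation (0 to 180 degrees) angles is what produces a polarization signature.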
a pixel or ROI, are commonly used for this purpose. Polarization signature plots show
the variation in scattering intensity, normalized scattering cross-section, or dB as a
function of ellipticity and orientation angles. In a polarization signature plot, linear
vertical polarization is shown in the center of the plot, while linear horizontal
polarization is shown at the center of the X axis, at both the maximum and minimum
of the Y axis. See Figure 11-7 for an example.
References:
Evans, D.L., T.G. Farr, J.J. van Zyl, and H.A. Zebker, 1988, Radar polarimetry:
analysis tools and applications. IEEE Transactions on Geosciences and Remote
Sensing, Vol. 26, No. 6, pp. 774-789.
Raney, R.K., 1998, Radar fundamentals: technical perspective, in Principles and
Application of Imaging Radar (F.M. Henderson and A.J. Lewis, eds.), Manual of
Remote Sensing, Third Edition, Volume 2, John Wiley and Sons, Inc.
van Zyl, J.J., H.A. Zebker, and C. Elachi, 1987, Imaging radar polarization
signatures: theory and observation. Radio Science, Vol. 22, No. 4, pp. 529-543.
van Zyl, J.J., 1989, Unsupervised classification of scattering behavior using radar
polarimetry data. IEEE Transactions on Geosciences and Remote Sensing, Vol. 27,
No. 1, pp. 36-45.
Zebker, H.A., J.J. van Zyl, and D.N. Held, 1987, Imaging radar polarimetry from
wave synthesis. Journal of Geophysical Research, Vol. 92, No. 31, pp. 683-701.
7. Select output to File or Memory. ENVI saves the polarization signatures (co-
or cross-polarized) as a multiband image, where each band of the image is a
separate 91 samples (-45 to 45 degrees ellipticity angle) by 181 lines (0 to 180
degrees orientation angle) polarization signature for one of the selected
frequencies with the intensity representing the z axis (intensity, normalized
intensity, or dB).
8. In the Load Bands to Polsig Viewers field, select whether or not to load each
signature into its own Polarization Signature Viewer after it is extracted.
Note
Selecting Yes may create numerous windows and use a significant amount of
resources. The preferred alternative is to select No and to use the Polarization
Signature Viewer function to view individual signatures (see the following
“Using the Polarization Signature Viewer” section).
When the Polarization Signature Viewer appears, a 3D wire mesh surface displays on
the left and a 2D gray scale image of the signature displays on the right of the Viewer.
The statistics for the current signature are listed below the plot.
The numbers next to the title in the ASCII file are the pixel locations of the pixel the
signature was extracted from.
The following figure shows polarization signatures and surface plots that are
annotated to clarify the relation between the images and the default orientation 3D
polarization signatures.
4. Enter the number of looks in the Samples (range) and Lines (az) directions or
select from the following options.
• To specify the number of pixels in the output image, enter the values in the
Pixels fields for the samples and lines.
• To specify the output pixel size in meters, enter the values in the Pixel Size
(m) fields for samples and lines.
When you enter a value for one of the parameters, ENVI automatically
calculates the others to match. For example, if you enter the Pixel Size as
30 m, then ENVI calculates the corresponding number of pixels and the looks
and changes them in the corresponding fields.
ENVI supports both integer and floating-point numbers.
5. To perform Spatial Subsetting on the data, click Spatial Subset.
6. Enter a base name in the Enter Base Filename field. ENVI uses this value as
the basis for multiple filenames, one for each frequency selected. For example,
if you enter sirc as the base name and both C-band and L-band data are being
processed, then ENVI creates two output files named sirc_c.cdp and
sirc_l.cdp.
7. Click OK.
Use the synthesize function to generate image data from the output multilooked
compressed data files (see “Synthesizing SIR-C Data” on page 1117).
Configuration Parameter Descriptions . . . . . . . . 1144
Editing System Graphics Colors . . . . . . . . . . . 1163
Editing System Color Tables . . . . . . . . . . . . 1165
Installing Other TrueType Fonts with ENVI . . . . . 1169
Modifying IDL CPU Parameters . . . . . . . . . . . 1171
Dialog Option (.cfg File Option): Description

Graphics Colors File (default graphic colors file): Optional. A file that defines the graphics colors used by ENVI. “ENVI Graphic Colors File” on page 1164 describes the file format, and “Editing System Graphics Colors” on page 1163 details how to edit the file through the ENVI interface.
Color Table File (default color table file): Optional. A file that defines the color tables used in ENVI. “Editing System Color Tables” on page 1165 details how to edit the color table through the ENVI interface.
ENVI Menu File (default envi menu file): Optional. A file that defines the ENVI main menu bar options available for the installation. The topic ENVI Main Menu Bar Definition File in Getting Started with ENVI describes the file format.
Display Menu File (default display menu file): Optional. A file that defines the Display group menu bar options available for the installation. The topic Display Group Menu Bar Definition File in Getting Started with ENVI describes the file format.
Shortcut Menu File (default display shortcut menu file): Optional. A file that defines the display group right-click menu options available for the installation. The topic Display Group Right-Click Menu Definition File in Getting Started with ENVI describes the file format.
Map Projection File (default map projection file): Optional. A file that defines map projections. “ENVI Map Projections File” on page 994 describes the file format.
Data Directory (default data directory): The directory for input images. This is the default data directory ENVI uses unless you specify another path when opening a file.
Temp Directory (default tmp directory): The directory used to store ENVI temporary files.
Output Directory (default output directory): The directory for output files. ENVI writes output files to this directory unless you specify another path when entering an output filename.
Image Window Xsize / Image Window Ysize (image window default xsize / image window default ysize): Sets the initial size of the full resolution Image window in pixels.
Image Window Scroll Bars (image window scroll bars): Specifies whether the Image window has scroll bars.
Scroll Window Xsize / Scroll Window Ysize (scroll window default xsize / scroll window default ysize): Sets the initial size of the Scroll window in pixels.
Zoom Window Xsize / Zoom Window Ysize (zoom window default xsize / zoom window default ysize): Sets the initial size of the Zoom window in pixels.
Zoom Window Scroll Bars (zoom window scroll bars): Specifies whether the Zoom window has scroll bars.
Zoom Window Zoom Factor (zoom window zoom factor): Sets the Zoom window zoom factor.
Display Retain Value (display retain value): Sets the control of the backing store. A retain value of 0 specifies no backing store. A retain value of 1 requests that the server or window system provide backing store. A retain value of 2 (default) specifies that IDL provide backing store.
Display Default Stretch (display default stretch): Sets the default stretch to use for images loaded into display groups. The options are: % Linear, Linear Range, Gaussian, Equalize (histogram equalization), and Square Root (square root stretch). For % Linear, Linear Range, and Gaussian, specify the value in the field next to the drop-down list.
Scroll/Zoom Position (Style 1) (scroll and zoom window default position): Sets the initial position (Left, Right, Above, Below, Within, Off) of the Zoom and Scroll windows relative to the Image window.
Zoom Position (Style 2) (zoom window default position): Sets the location of the Zoom window (Left, Right, Off), if only the Zoom and Scroll windows display.
RGB Colors Per Image (number of colors per rgb image): Specifies how many of the 256 colors in the color table to use for each RGB color quantized image display. ENVI runs on 8- or 24-bit color workstations; these settings are for 8-bit mode. Each Image window occupies a certain range of entries in the color table.
Gray Scale Colors Per Image (number of colors per gray scale image): Specifies how many of the 256 colors in the color table to use for each gray scale display. ENVI runs on 8- or 24-bit color workstations; these settings are for 8-bit mode. Each Image window occupies a certain range of entries in the color table.
Edit System Graphic Colors: Opens the Edit Graphic Colors dialog, which is detailed in “Editing System Graphics Colors” on page 1163.
Edit System Color Tables: Opens the ENVI Color Table Editor dialog, which is detailed in “Editing System Color Tables” on page 1165.
Y-Axis Labels (map grid y-axis labels): Sets whether the y-axis labels of the map grid display horizontally or vertically.
Font (map grid font): Sets the font of the map grid labels. For more information on fonts, see “Installing Other TrueType Fonts with ENVI” on page 1169.
Charsize (map grid labels size): Sets the font size of the map grid labels.
(color box) (map grid lines color): Sets the foreground color of the map grid lines.
(color box) (map grid box color): Sets the foreground color of the map grid box.
Thick (map grid box thick): Sets the thickness of the map grid box sides.
Style (map grid box line style): Sets the line style of the map grid box sides.
Y-Axis Labels (geo grid y-axis labels): Sets whether the y-axis labels of the geo grid display horizontally or vertically.
(color box) (geo grid labels color): Sets the foreground color of the geo grid labels.
(color box) (geo grid lines color): Sets the foreground color of the geo grid lines.
(color box) (geo grid box color): Sets the foreground color of the geo grid box.
Thick (geo grid box thick): Sets the thickness of the geo grid box sides.
Style (geo grid box line style): Sets the line style of the geo grid box sides.
You can also edit the Previous Files List by selecting File → Preferences and
selecting the Previous Files List tab to open the editing window (see “Previous Files
List Preference Settings” on page 1155).
Dialog Option (.cfg File Option): Description
Menu Orientation (main menu orientation): Sets whether the ENVI main menu bar displays horizontally or vertically.
Max Items for Multilist (max number of items for multilist): Sets the maximum number of items to list in a widget. If the number of items listed is greater than this value, ENVI uses an alternate check box mode to speed up list scrolling.
Max Histogram Bins (max number of histogram bins): Sets the maximum number of bins to use during histogram calculation.
Max Items in Pulldown (max number of items in pulldown menus): Sets the maximum number of items to list in a drop-down button. If the number of items is greater than this value, the drop-down button divides the items into sub-menus, each containing only this maximum number of items or fewer. This prevents long lists of items from running off the display.
Max Vertices for Memory (max number of vertices for memory): Sets the maximum number of polygon vertices for vector files loaded into ENVI. If the number of vertices is greater than this value, ENVI will not load the polygons into memory, but instead creates an ENVI vector (.evf) file.
Command Line Blocking (command line blocking): Sets whether you have access to IDL command line programming during an ENVI session.
Exit IDL on Exit from ENVI (exit idl on exit from envi): Determines whether to automatically exit IDL when exiting from ENVI, or if the IDL session remains open.
Status Window for Input Data (status window for input data reading): Sets whether to show a status window when loading data of BIL or BIP format into a display group.
Interactive Stretch Auto Apply (interactive stretch auto apply): Sets whether or not to automatically apply changes to the stretch in interactive stretching. You can also toggle this on/off from the Interactive Stretching dialog.
Cache Size (Mb) (total cache size (MB)): Sets a soft limit for the amount of system RAM you expect ENVI to use. ENVI has an internal memory management/cache scheme that works to limit the amount of memory used. This setting is designed to avoid Unable to Allocate Array errors in IDL and segmentation fault/core dumps back to system level from IDL on some platforms. The error results when ENVI attempts to use more memory than is available. Higher values for the cache size speed up spatial processing functions. Set this value to slightly less than the amount of available system RAM (less on multi-user systems) (see “Total Cache Size” on page 1159).
Image Tile Size (Mb) (image tile size (MB)): Sets the tile size for ENVI to use for processing. Tiling is an internal ENVI image segmentation technique that allows the system to work on images larger than available RAM. Set this value to about 1/10th of the total cache size (see “Image Tile Size” on page 1161). You can view images of any size in ENVI; however, this setting determines how much of that image is kept in memory at any given time. If you have an ENVI + IDL installation, see “Optimizing Thread Pool Elements and Image Tile Size Settings” on page 1172 for additional information before setting this value.
The Cache Size setting in the ENVI configuration file does not directly affect
memory allocation, but it helps ENVI determine how much RAM is likely to be
available. The cache size is specified because IDL does not have any way to know
how much RAM is available on your machine.
ENVI uses the cache size you specify to determine when that much RAM has been
used; at that point, ENVI starts to remove unnecessary items from RAM to make more
available. ENVI sometimes keeps certain kinds of data in RAM to speed
performance, although these are not required for ENVI to run and are released if
memory is low.
The first items removed are the raw data read from disk files (oldest-by-use first),
followed by byte-scaled display images (oldest-by-use first). If enough memory has
not been freed, the memory-only items are removed after requesting you to save or
remove them. If, however, the data in the current request is a memory-only item, you
are required to store (not remove) the item. If the request still exceeds the total cache
size with nothing in memory, the operation is still attempted. You should, however,
periodically store memory images to disk to avoid cases where the in-memory request
is greater than the total cache size.
The Cache Size setting is not a fixed upper limit on the amount of RAM that ENVI
will use; it is just a benchmark that ENVI uses to gauge whether the amount of RAM
you have used is small or large. It is configurable because of the wide variety of RAM
on different machines.
If your Cache Size setting is large, then ENVI will not start cleaning up RAM until
that large amount of RAM has been used. This means that less RAM will be left
available for further processing. To minimize your chances of running out of RAM in
the middle of a process (when ENVI cannot clear RAM), you should have a
reasonable amount of RAM available even after your cache is full. On single-user
machines, we recommend you set the cache to 50-75% of the available physical
RAM on the system. For example, if you have 512 MB of RAM, then a reasonable
cache size is 256-384 MB. For multi-user systems (such as UNIX workstations), the
cache size should be reduced to reflect the amount of physical RAM that will
typically be available. Generally, it is best to err on the side of a smaller cache size.
Note
These are only suggestions; ENVI will work with any numbers you supply.
[Figure: color editing dialog, showing the current color menu, color system, color sliders, color palette, positioning arrow, circle cursor, and color bar.]
3. In the Color Table File field, enter the new color table filename.
4. Click OK.
Fontname: This column contains the name of the font that is available when working with features such as QuickMap.
Filename: This column contains the TrueType font filename.
Direct: This column indicates the Direct Graphics scale, a correction factor that is applied when rendering the font on a Direct Graphics device.
Object: This column indicates the Object Graphics scale, a correction factor that is applied when rendering the font on an Object Graphics device.
For more information on scale factors, see the “Fonts” Appendix in the IDL
Reference Guide.
For example, a tile size setting of 1 MB, applied to an image containing double
precision complex data (16 bytes per array element) produces input arrays for
spatially tiled processes of 62,500 elements, which would fall quite short of the
default 100,000 element minimum required for multi-threading. Furthermore, multi-
threading performance gains are generally much better with larger input array sizes.
An unnecessarily small tile size might needlessly limit the benefits of multi-
threading.
While not all processes in ENVI are tiled and not all processes in IDL are multi-
threaded, it is best to set the tile size parameter on multi-processor machines with
multi-threading in mind so as to take full advantage of this feature.
Note
Do not just set the tile size to a large portion of your RAM. If the tile size parameter
is set to an amount greater than the amount of actually available memory (memory
still free after the operating system and any other applications have claimed what
they need) and ENVI is asked to use the full tile size for a large process, any number
of system and/or application errors could result.
Set the Image Tile Size (Mb) value to less than the smallest amount of free memory
that is expected on the machine where ENVI is installed, giving due consideration to
the memory demands of other applications and users at any given time. Depending on
how much memory the machine in question has and what demands are placed on it, a
safe tile size setting may be significantly larger than the 1 MB default value.
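The arithmetic behind the 62,500-element figure above, and behind choosing a larger tile, can be sketched as follows in Python; the 4 MB alternative is an arbitrary example, not a recommendation.

# Elements per tile = tile size in bytes / bytes per array element.
bytes_per_element = 16                      # double-precision complex data
default_tile_mb = 1
elements = default_tile_mb * 1_000_000 // bytes_per_element
print(elements)                             # 62500, below the 100,000-element threshold

larger_tile_mb = 4                          # an arbitrary larger tile size (assumed)
print(larger_tile_mb * 1_000_000 // bytes_per_element)   # 250000, above the threshold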
File : E:\DATA\bldr_reg\bldr4m.img
Bands: 1
Dims : 1-359,1-516
Info : (1,1) {0} [ {0}] {}
File : E:\DATA\bldr_reg\bldr_sp.img
Bands: 1
Dims : 1-1071,1-1390
Info : (1,1) {20} [ {0}] {0.00}
For image-to-map registration, coordinates are listed as map x, map y, image x, image
y. An example of a typical image-to-map .pts file is shown here:
ENVI Registration GCP File
359459.810 4143531.300 288.000 496.000
367681.530 4141772.000 232.000 23.000
366343.470 4138660.500 458.000 35.000
362337.840 4145969.500 71.000 388.000
361339.910 4138479.300 569.000 301.000
354457.530 4140550.300 591.000 714.000
354352.590 4145685.000 261.000 819.000
359918.310 4142412.000 351.000 448.000
364900.910 4141752.300 290.000 172.000
The full details of the spectral library and the descriptions of the samples are
available at: http://speclab.cr.usgs.gov/
Hardcopy is also available from the USGS. The reference is given here:
Clark, R.N., G.A. Swayze, A.J. Gallagher, T.V.V. King, and W.M. Calvin, 1993, The
U. S. Geological Survey, Digital Spectral Library: Version 1: 0.2 to 3.0 microns, U.S.
Geological Survey Open File Report 93-592, 1340 pages.
These spectra were measured at the Center for the Study of Earth from Space (CSES)
at the University of Colorado on a custom modified Beckman 5270 dual beam
spectrometer. This spectrometer was under direct computer control measuring
reflectance spectra at constant high resolution (3.8 nm) sampled at 1.0 nm intervals
throughout the 0.7 to 2.5 µm range. A tungsten light source was used and HALON
was the reference material. The reference and sample spectra were collected
simultaneously and ratioed in real time.
The full details of the spectral library and the descriptions of the samples are
available at: http://speclab.cr.usgs.gov/
The following is reproduced verbatim from the README file provided by JHU with
the data:
With the exception of man-made materials, all spectra in the Johns Hopkins
Library were measured under the direction of John W. Salisbury. Most
measurements were made by Dana M. D'Aria, either at Johns Hopkins
University in Baltimore, MD, or at the U.S. Geological Survey in Reston, VA.
This text is a general introduction to the library, with an overview of
measurement techniques, which do differ for different materials. There is a
separate introductory text for each kind of material (rocks, minerals, lunar
soils, terrestrial soils, meteorites, and so forth) that contains more detailed
information.
Two different kinds of spectral data are resident in this library. Spectra of
minerals and meteorites were measured in bidirectional (actually biconical)
reflectance (see two Salisbury et al., 1991 references below for details). These
spectra, recorded from 2.08-25 micrometers, cannot be used to quantitatively
predict emissivity because only hemispherical reflectance can be used in this
way. However, when recorded properly, as described in the meteorite paper,
curve shape is accurate and can be used for remote sensing applications.
All other spectral data, with the exception of portions of generic snow and
vegetation spectra (see the introductory text for each type of material), were
measured in directional hemispherical reflectance. Under most conditions, the
infrared portion of these data can be used to calculate emissivity using
Kirchhoff's Law (E=1-R), which has been verified by both laboratory and field
measurements (Salisbury et al., 1994; Korb et al., 1996). The unusual
circumstances (for example, the lunar environment) where thermal gradients
may cause significant departure from Kirchhoffian behavior are discussed in
Salisbury et al., 1994.
The apparently seamless reflectance spectra from 0.4 to 14 micrometers of
rocks and soils were generated using two different instruments, both equipped
with integrating spheres for measurement of directional hemispherical
reflectance, with source radiation impinging on the sample from a centerline
angle 10 degrees from the vertical.
Unless specified otherwise (see relevant introductory texts for generic snow
and vegetation spectra, and spectra of man-made materials), all visible/near-
infrared (VNIR) spectra were recorded using a Beckman Instruments model
UV 5240 dual-beam, grating spectrophotometer at the U.S. Geological Survey,
Reston, VA. The data were obtained digitally and corrected for both instrument
function and the reflectance of the HALON reference using standards
traceable to the U. S. National Institute of Science and Technology.
Measurements of such standards indicate an absolute reflectance accuracy of
plus or minus 3 percent. Wavelength accuracy was checked using a holmium
oxide reference filter and is reproducible and accurate to within plus or minus
0.004 micrometers, or 4 nm (one digitization step). Spectral resolution is
variable because the Beckman uses an automatic slit program to keep the
energy on the detector constant. The result is a spectral bandwidth typically
less than 0.008 micrometers over the 0.4 to 2.5 micrometers spectral range
measured, but slightly larger at the two extremes of the range of the lead
sulfide detector (0.8-0.9 micrometers and 2.4- 2.5 micrometers). This
instrument has a grating change at 0.8 micrometers, which sometimes results
in a spectral artifact (either a small, sharp absorption band, or a slight offset of
the spectral curve) at that wavelength.
Two similar instruments were used to record reflectance in the infrared range
(2.08 to 15 micrometers). Briefly, both are Nicolet FTIR spectrophotometers
and both have a reproducibility and absolute accuracy better than plus or minus
1 percent over most of the spectral range. Early measurements of igneous
rocks with an older detector were noisy in the 13.5-14 micrometers range and
do not quite meet this standard. Because FTIR instruments record spectral data
in frequency space, both wavelength accuracy and spectral resolution are given
Introduction
ENVI supports many different map projection types. Brief descriptions and lists of
ENVI supported map projections, coordinates, ellipsoids, and datums are given
below. You can define your own parameters for the supported ENVI projection types
and select the desired ellipsoid or datum (see “Building Customized Map
Projections” on page 992). You can also define new projection types by providing the
formula used to translate latitude and longitude coordinates into the new projection
coordinates (see “ENVI_CONVERT_PROJECTION_COORDINATES” in the ENVI
Reference Guide). You can also define your own new projection types by providing
the formula used to translate latitude and longitude coordinates into the new
projection coordinates (see Chapter 6, “User-Defined Map Projection Types” in the
ENVI Programmer’s Guide). More information about map projections is in the
following references.
References:
Snyder, 1982. Map Projections Used by the U. S. Geological Survey. USGS Bulletin
1532.
Peter H. Dana, The Geographer’s Craft Project, Department of Geography,
University of Colorado at Boulder
http://www.colorado.edu/geography/gcraft/notes/datum/datum_f.html
http://www.connect.net/jbanta/
Map Projections
Use map projections to represent all or part of the 3D surface of the earth in two-
dimensions. Distortion always occurs when projecting a spherical surface onto a
planar map and different projections cause different map characteristics to be
distorted. You must choose the projection that shows the characteristics important to
your goals accurately at the expense of other characteristics, which will be distorted.
Map projections can have properties such as equal area, conformal, equidistant, or
true azimuth (direction) characteristics. On an equal area map projection, circles of a
fixed diameter drawn on any part of the map will encompass the same geographic
area. This projection is useful for comparing land areas. However, the shapes, angles,
and scales of equal area maps may be distorted. On a conformal map projection (for
example, UTM), the local angles are correct and the local scale in every direction is
constant showing the true shape correctly. This projection is useful for measuring
distance and direction between relatively near points. An equidistant map projection
has a true scale between one or two points and every other point on the map.
Reference lines are called standard parallels or standard meridians. True direction
map projections show the correct directions or azimuths among all points on the map.
Many map projections use a compromise between these characteristics to reduce
distortion yet provide accurate measurements for local areas. Several map projections
are designed for specific uses, such as air and sea navigation, satellite mapping, and
so forth.
Map projections are typically projected onto one of three types of surface: cylinder,
cone, or plane. These surfaces wrap around or intersect with the globe and then are
cut and laid flat to produce a map.
• Cylindrical projections are made by wrapping a cylinder around the globe and
projecting the surface onto the cylinder. Often, the cylinder touches the globe
at the equator so the meridians of longitude are projected as equidistant
straight lines perpendicular to the equator and the parallels of latitude are
projected parallel to the equator (mathematically spaced). The Mercator
projection is an example of a cylindrical projection.
• Conical projections are made by placing a cone over the globe. Often, the apex
of the cone is along the polar axis of the globe and the cone touches the globe
at a parallel of latitude. In this case, the meridians are projected onto the cone
as equidistant straight lines and the parallels are lines around the
circumference of the cone that are circular when the map is laid flat.
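For a concrete example of a cylindrical projection, the spherical form of the Mercator projection mentioned above can be written in a few lines of Python; the sphere radius and the test coordinate are assumptions for illustration.

import numpy as np

def mercator(lon_deg, lat_deg, radius=6370997.0, lon0_deg=0.0):
    # Spherical Mercator: longitude maps linearly to x, latitude is
    # stretched toward the poles.
    lon = np.radians(lon_deg - lon0_deg)
    lat = np.radians(lat_deg)
    x = radius * lon
    y = radius * np.log(np.tan(np.pi / 4.0 + lat / 2.0))
    return x, y

x, y = mercator(-105.27, 40.02)   # approximate coordinates of Boulder, Colorado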
Each entry lists the projection name, the ENVI projection number in parentheses, and the projection parameters:

Albers Conical Equal Area (9): a, b, lat0, lon0, x0, y0, sp1, sp2, [datum], name
Arbitrary (0)
Azimuthal Equidistant (12): r, lat0, lon0, x0, y0, name
Equidistant Conic A (33): a, b, lat0, lon0, x0, y0, sp1, [datum], name
Equidistant Conic B (34): a, b, lat0, lon0, x0, y0, sp1, sp2, [datum], name
Equirectangular / Equidistant Cylindrical (17): r, lat0, lon0, x0, y0, name
General Vertical Nearside Perspective (15): r, lat0, lon0, x0, y0, height, [datum], name
Geographic (1)
Gnomonic (13): r, lat0, lon0, x0, y0, name
Hammer (27): r, lon0, x0, y0, name
Hotine Oblique Mercator B (6): a, b, lat0, lon0, azimuth, x0, y0, k0, [datum], name, where azimuth is the azimuth of the central line (degrees east of north)
Lambert Azimuthal Equal Area (sphere) (36): r, lat0, lon0, x0, y0, [datum], name
Lambert Azimuthal Equal Area (11): a, b, lat0, lon0, x0, y0, [datum], name
Lambert Conformal Conic (4): a, b, lat0, lon0, x0, y0, sp1, sp2, [datum], name
Mercator (20): a, b, lat0, lon0, x0, y0, [datum], name
Miller Cylindrical (18): r, lon0, x0, y0, name
Mollweide (25): r, lon0, x0, y0, name
New Zealand Map Grid (39): a, b, lat0, lon0, x0, y0, [datum], name
Orthographic (14): r, lat0, lon0, x0, y0, name
Polar Stereographic (31): a, b, lat0, lon0, x0, y0, [datum], name
Polyconic (10): a, b, lat0, lon0, x0, y0, [datum], name
Coordinate Systems
The position of a point on a globe is often represented in spherical coordinates by
degrees of latitude and longitude. The parallels of latitude run east-west and are
formed by 90 equally spaced circles around the globe from the equator to each pole
(north latitudes are positive and south latitudes are negative). The circle at the equator
is at 0 degrees and the numbers increase to the north and south poles which are at 90
degrees each. The meridians of longitude are defined by north-south lines passing
through each pole that intersect the equator at 360 equally spaced intervals. The
meridian that passes through Greenwich, England is defined as 0 degrees and is
called the prime meridian. Degrees of longitude are defined between 0 degrees and
180 degrees east (positive) or west (negative) of the prime meridian. ENVI uses map
projections to represent the latitude and longitude lines on a planar map.
The position of a point on a map is often represented in Cartesian (x, y) rectangular
coordinates. The x-axis coordinates typically increase to the east and the y-axis
coordinates increase to the north. The x and y coordinates are often called eastings
and northings and the origin may be defined with a false easting and false northing.
These coordinate grids are often divided into zones to reduce distortion. The
Universal Transverse Mercator (UTM) and State Plane projections are examples of
this type of coordinate system.
Ellipsoids
The shape of the Earth is often represented by an oblate ellipsoid, which is an ellipse
that is rotated about its shorter axis. The ellipsoids are described by two parameters,
the semi-major and semi-minor axes. Reference ellipsoids are used to represent the
shape of the Earth and many are based on surface measurements to give a regional
best fit and not an entire Earth best fit. Therefore, different ellipsoids are often used
for different regions of the Earth.
The ellipsoids available in ENVI include the following:
Airy, Australian National, Bessel 1841, Clarke 1858, Clarke 1866, IUGG, Krassovsky,
Mercury, Modified Airy, Modified Everest
Datums
A datum is a smooth, mathematical surface that closely fits the mean sea level surface
throughout the area of interest. It is created when an ellipsoid model is fixed to a base
point on the Earth. Since the ellipsoid models are approximate, as you move away
from the fixed point you get larger errors. Therefore, different datums exist for
different regional areas to reduce error. Because different datums are defined by
fixing an ellipsoid to different base points, changing datums changes the latitude and
longitude of a point on the surface of the Earth. Therefore, it is necessary to know
which datum is used when defining the coordinates of your points.
ENVI supports many datums which are listed in the datum.txt file in the
map_proj directory of the ENVI distribution. Contact ITT Visual Information
Solutions Technical Support for instructions on adding your own custom datum.
Non-Standard Projections
In addition to supporting standard map projections with known tie points and fixed
pixel sizes (see “Map Projections” on page 1199), ENVI also supports non-standard
projections such as affine map transformations, RPCs, and RSMs. These are not true
map projections, but they give a reliable estimate of geographic locations for each
pixel.
calculate geographic coordinates for each pixel. The pixel size varies in the rectified
image. This type of projection contains a high degree of variability and is not
geographically accurate; the (x,y) locations in the rectified image are only “best
guesses.”
When you only know the geographic coordinates and map projection of the four
corner points of an image, you can enter coordinates in the Geographic Corners
dialog (see “Entering Geographic Corners for Non-Georeferenced Files” on
page 204). ENVI uses this information to calculate an affine map transformation for
the image.
Images that ENVI rectifies using an affine map transformation are designated by the
word “pseudo” under the Map Info icon in the Available Bands List (Figure D-18).
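A least-squares fit is one way to derive such an affine transformation from four corner points. The Python sketch below uses made-up coordinates and is only an illustration of the idea, not ENVI's implementation.

import numpy as np

# Each row: pixel x, pixel y, map x, map y (hypothetical corner points).
gcps = np.array([[0.0,   0.0,   359000.0, 4146000.0],
                 [999.0, 0.0,   369000.0, 4146000.0],
                 [0.0,   999.0, 359000.0, 4136000.0],
                 [999.0, 999.0, 369000.0, 4136000.0]])

# Solve map = [px, py, 1] * A in a least-squares sense for the 3 x 2 matrix A.
design = np.column_stack([gcps[:, 0], gcps[:, 1], np.ones(len(gcps))])
affine, *_ = np.linalg.lstsq(design, gcps[:, 2:4], rcond=None)

def pixel_to_map(px, py):
    return np.array([px, py, 1.0]) @ affine

print(pixel_to_map(499.5, 499.5))   # map coordinate of the image center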
Sensor Models
Sensor models are another type of map information used to define the physical
relationship between image coordinates and ground coordinates. A replacement sensor
model is a mathematical model that replaces the rigorous (physical) sensor model
associated with a specific image by representing that model's ground-to-image
relationship. It is used to map a 3D ground point to a 2D image point. Various
commercial spaceborne image providers use sensor models, particularly RPCs, which are
the most common type in current use (McGlone, 2004).
RPCs
Rational polynomial coefficients (RPCs) model the ground-to-image relationship as a
ratio of third-order polynomials. See McGlone (2004) for the theory behind the RPC
model.
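Conceptually, an RPC model normalizes the ground coordinates (longitude, latitude,
height) by stored offsets and scales and then evaluates a ratio of polynomials for each
image coordinate. The IDL sketch below shows only that general form; the coefficient
values are hypothetical, the polynomial is truncated (a full RPC model uses 20
coefficients per polynomial, with terms up to third order), and the term ordering does
not follow the RPC00A/RPC00B layout:

   function rpc_poly, c, x, y, z
     ; truncated polynomial in normalized ground coordinates (illustration only)
     return, c[0] + c[1]*x + c[2]*y + c[3]*z + c[4]*x*y + c[5]*x*z + $
             c[6]*y*z + c[7]*x^2 + c[8]*y^2 + c[9]*z^2
   end

   ; normalized ground coordinates (offsets and scales come from the RPC metadata)
   xn = 0.12 & yn = -0.30 & zn = 0.05
   line_num = [0.02d, 1.01d, -0.15d, 0.002d, 0d, 0d, 0d, 0d, 0d, 0d]
   line_den = [1.00d, 0.001d, 0.0003d, 0d, 0d, 0d, 0d, 0d, 0d, 0d]
   norm_line = rpc_poly(line_num, xn, yn, zn) / rpc_poly(line_den, xn, yn, zn)
   print, norm_line   ; de-normalize with the line offset and scale to get the row
   end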
You can let ENVI compute RPCs prior to single-image orthorectification by building
interior and exterior orientation models. (See “Building RPCs” on page 972.) ENVI
uses an iterative, converging solution to compute RPCs and adds the RPC
information to the input file header so that you can use the file with ENVI’s generic
RPC and RSM orthorectification and DEM Extraction tools.
Some sensors, such as QuickBird, IKONOS, OrbView-3, and CARTOSAT-1, provide
pre-computed RPCs along with the respective imagery. If your file has associated
RPC information, you can automatically derive RPC-based geolocation information
for individual pixels in an image. See “Emulating an RPC or RSM Projection” on
page 201 for more information. This method is not as geographically accurate as
performing a full orthorectification, but it is less computationally and disk-space
intensive than orthorectification.
Note
If your image file has pre-computed RPC information, ENVI uses a single mean
elevation value in the RPC transformation for the entire image. However, you
can significantly increase the geolocation accuracy of the image if you associate the
image with a DEM. See “Associating a DEM to a File” on page 202. Once
initialized, this technique will result in accuracy equivalent to performing a full
orthorectification.
For NITF files, RPC information is contained in the RPC00A and RPC00B Tagged
Record Extensions (TREs). If either of these TREs exists in a NITF file, ENVI and
ENVI Zoom use the RPC model to emulate a projection by default. See Map
Information in the NITF/NSIF Module User’s Guide for details.
RSM
RSM is a more recent sensor model (Dowman and Dolloff, 2000) that corrects the
deficiencies of RPC-based sensor models. An RSM contains a variety of
enhancements over the RPC model, including:
• Increased accuracy over images with a large number of rows or columns (such
as image strips) by breaking the image into tiles with separate models.
• The ability to store varying degrees of complexity in the polynomial
representation used.
• The ability to store error estimation information so that the precision of
individual solutions can be computed.
If your file has associated RSM information, ENVI can automatically derive RSM-
based geolocation information for individual pixels in an image. (See “Emulating an
RPC or RSM Projection” on page 201 for more information.) Use of this model has
the same behavior and limitations as using an RPC model in ENVI. In particular,
associating a DEM with an image will result in significantly higher accuracy.
For more information on using RSMs with NITF data, see Map Information in the
NITF/NSIF Module User’s Guide.
References
Dowman, I., and J. T. Dolloff. (2000). An evaluation of rational functions for
photogrammetric restitution. International Archives of Photogrammetry and Remote
Sensing 33(B3): pp. 254-266.
McGlone, J. C., editor. (2004). Manual of Photogrammetry, Fifth Edition, American
Society for Photogrammetry and Remote Sensing.
This appendix describes the vegetation indices (VIs) used by ENVI’s vegetation
analysis tool. The descriptions were provided by Dr. Gregory P. Asner of the Carnegie
Institution of Washington, Department of Global Ecology, and related sources. See
“Reference:” on page 1220 for additional information.
Vegetation is divided into the following general categories:
• Plant Foliage
• Canopies
• Non-Photosynthetic Vegetation
Plant Foliage
Plant foliage, including leaves, needles, and other green materials, often looks similar
to the casual observer, but it varies widely in both shape and chemical composition.
The chemical composition of leaves can often be estimated using VIs, but doing so
requires some knowledge of the basic composition of leaves and how they change
under different environmental conditions. The most important leaf components that
affect their spectral properties are:
• Pigments
• Water
• Carbon
• Nitrogen
These components are described in the sections that follow. Other components (such
as phosphorus, calcium, and so forth) are significant to plant function, but they do not
directly contribute to the spectral properties of leaves, and therefore cannot be
directly measured using remotely sensed data.
Pigments
There are three main categories of leaf pigments in plants: chlorophyll, carotenoids,
and anthocyanins. These pigments serve a variety of purposes, and are critical to the
function and health of vegetation, though the relative concentrations of these
pigments in vegetation can vary significantly. Vegetation with a high concentration of
chlorophyll is generally very healthy, as chlorophyll is linked to greater light use
efficiency or photosynthetic rates. Conversely, carotenoid and anthocyanin pigments
often appear in higher concentrations in vegetation that is less healthy, typically due
to stress or the onset of senescence (dormant or dying vegetation that appears red,
yellow, or brown).
Chlorophyll, the most well-known and most important pigment, causes the green
color of healthy plant leaves. It is primarily responsible for photosynthesis, the
process by which plants take up carbon dioxide (CO2) from the atmosphere and
convert it into organic forms such as sugar and starch. Chlorophyll concentrations in
leaves are broadly correlated with photosynthetic rates. Chlorophyll-a and -b are the
pigments most closely associated with photosynthesis.
Carotenoids are a group of pigments containing alpha-carotene, beta-carotene, and
xanthophyll pigments (for example, zeaxanthin). Carotene is the yellow-orange
pigment found in tree leaves as they change from green to brown (as seen during
autumn). Carotenoid pigments have multiple functions, but they are generally found
in higher concentrations in plant leaves that are either stressed (seen in drought or
nutrient depletion), senescent, or dead. Carotenoids assist the process of light
absorption in plants, and help protect plants from the harmful effects of very high
light conditions.
Anthocyanins also have multiple functions, but are typically related to changes in
foliage. Anthocyanins are reddish pigments abundant in both newly forming leaves
and leaves undergoing senescence. Anthocyanins also serve to protect leaves from
damage due to ultraviolet radiation.
As a group, leaf pigments only affect the visible portion of the shortwave spectrum
(400 nm to 700 nm), though the effects vary depending upon the type of pigment.
Figure E-19 shows the absorption of each pigment type as a function of wavelength
throughout the visible range.
Water
Plants of different species inherently contain different amounts of water based on
their leaf geometry, canopy architecture, and water requirements. Among plants of
one species, there is still significant variation, depending upon leaf thickness, water
availability, and plant health. Water is critical for many plant processes, in particular,
photosynthesis. Generally, vegetation of the same type with greater water content is
more productive and less prone to burn.
Leaf water affects plant reflectance in the near-infrared and shortwave infrared
regions of the spectrum (see Figure E-20). Water has maximum absorptions centered
near 1400 and 1900 nm, but these spectral regions usually cannot be observed from
airborne or space-based sensors due to atmospheric water absorption, preventing their
practical use in the creation of VIs. Water features centered around 970 nm and 1190
nm are pronounced and can be readily measured from hyperspectral sensors. These
spectral regions are generally not sampled by multispectral sensors.
Carbon
Plants contain carbon in many forms, including sugars, starch, cellulose, and lignin.
Sugars and starch are immediate products of photosynthesis; they are moved to other
locations in plants to construct cellulose and lignin. Cellulose is primarily used in the
construction of cell walls in plant tissues. Lignin is used for the most structurally
robust portions of plants, such as leaf vacuoles, veins, woody tissue, and roots.
Cellulose and lignin display spectral features in the shortwave infrared range of the
shortwave optical spectrum (Figure E-20).
Figure E-20: Relative Light Absorption Intensity of Leaf Water and Carbon
(Cellulose and Lignin)
Nitrogen
Leaves contain nitrogen bound in the chlorophyll pigment, proteins, and other
molecules. Nitrogen concentrations in foliage are linked to maximum photosynthetic
rate and net primary production. VIs sensitive to chlorophyll content (which is
approximately 6% nitrogen) are often broadly sensitive to nitrogen content as well.
Some proteins that contain nitrogen affect the spectral properties of leaves in the
1500 nm to 1720 nm range.
Canopies
Leaf reflectance properties, controlled by properties of pigments, water, and carbon,
play a significant role in reflectance at the canopy level. Additionally, the amount of
foliage and the architecture of the canopy are also meaningful in determining the
scattering and absorption properties of vegetation canopies. Different ecosystems,
whether they be forest, grassland, or agricultural field, have different reflectance
properties, even though the properties of individual leaves are usually quite similar.
Vegetation with mostly vertical foliage, such as grass, reflects light differently than
vegetation with more horizontally oriented foliage, which is seen frequently in trees
and tropical forest plants. The variation in reflectance caused by different canopy structures, much
like individual leaf reflectance, is highly variable with wavelength.
There are a variety of vegetation properties of interest to scientists at the canopy level,
and many of these have direct effects on canopy reflectance properties. The two most
significant are leaf area index (LAI) and leaf angle distribution (LAD). The LAI is the
green leaf area per unit ground area, which represents the total amount of green
vegetation present in the canopy. The LAI is an important property of vegetation, and
has the strongest effect on overall canopy reflectance. The LAD describes the overall
variety of directions in which the leaves are oriented, but is often simplified by
specifying the mean leaf angle (MLA) and making assumptions about the actual
distribution. The MLA is the average of the differences between the angle of each
leaf in a canopy and horizontal.
Figure E-22 shows the effects of LAI and LAD on reflectance of radiation by
vegetation canopies. In this plot, MLA is the parameter that represents LAD.
Figure E-22: Example Effects of Increasing LAI (A) and Decreasing MLA (B) on
Canopy Reflectance
Non-Photosynthetic Vegetation
The previous sections primarily discuss live, green vegetation, but many ecosystems
contain as much, or more, senescent or dead vegetation (also known as non-
photosynthetic vegetation, or NPV) as they do green vegetation. Examples include
grasslands, shrublands, savannas, and open woodlands, which collectively cover over
half of the global vegetated land surface. This material is often called non-
photosynthetic vegetation because it could be truly dead or simply dormant (such as
some grasses between rainfall events). Also included in the NPV category are woody
structures in many plants, including tree trunks, stems, and branches.
NPV is composed largely of the carbon-based molecules lignin, cellulose, and starch.
As such, it has a similar reflectance signature to these materials, with most of the
variation in the shortwave infrared range. In many canopies, much of the NPV is
obscured below a potentially closed leaf canopy; the wavelengths used to measure
NPV (shortwave infrared) are often unable to penetrate through the upper canopy to
interact with this NPV. As such, only exposed NPV has a significant effect on the
spectral reflectance of vegetated ecosystems. When exposed, NPV scatters photons
very efficiently in the shortwave infrared range, in direct contrast to green vegetation
which absorbs strongly in the shortwave infrared range.
In general, photons in the visible wavelength region are efficiently absorbed by live,
green vegetation. Likewise, photons in the SWIR-2 region of the spectrum are
efficiently absorbed by water. In contrast to live vegetation, dead, dry, or senescent
vegetation scatters photons very efficiently throughout the spectrum, with the most
scattering occurring in the SWIR-1 and SWIR-2 ranges. The change in canopy
reflectance due to increasing amounts of NPV is shown in Figure E-23.
Reference:
Asner, G.P., 1998, Biophysical and Biochemical Sources of Variability in Canopy
Reflectance, Remote Sensing of Environment, 64:234-253.
Vegetation Indices
Vegetation Indices (VIs) are combinations of surface reflectance at two or more
wavelengths designed to highlight a particular property of vegetation. They are
derived using the reflectance properties of vegetation described in “Plant Foliage” on
page 1213. Each of the VIs is designed to accentuate a particular vegetation property.
More than 150 VIs have been published in the scientific literature, but only a small
subset has a substantial biophysical basis or has been systematically tested. ENVI
provides 27 vegetation indices for detecting the presence and relative abundance of
pigments, water, and carbon as expressed in the solar-reflected optical spectrum
(400 nm to 2500 nm).
Selection of the most important vegetation categories and the best representative
indices within each category was performed by Dr. Gregory P. Asner of the Carnegie
Institution of Washington, Department of Global Ecology. The selections were based
upon robustness, scientific basis, and general applicability. Many of these indices are
currently unknown or under-used in the commercial, government, and scientific
communities.
The indices are grouped into categories that calculate similar properties. The
categories and indices are:
• Broadband Greenness (5 indices):
• Normalized Difference Vegetation Index
• Simple Ratio Index
• Enhanced Vegetation Index
• Atmospherically Resistant Vegetation Index
• Sum Green Index
• Narrowband Greenness (7 indices):
• Red Edge Normalized Difference Vegetation Index
• Modified Red Edge Simple Ratio Index
• Modified Red Edge Normalized Difference Vegetation Index
• Vogelmann Red Edge Index 1
• Vogelmann Red Edge Index 2
• Vogelmann Red Edge Index 3
• Red Edge Position Index
Note
The VIs provided in ENVI are not designed to quantify the exact concentration or
abundance of any given vegetation component. Instead, they are intended for use in
geographically mapping relative amounts of vegetation components, which can then
be interpreted in terms of ecosystem conditions.
Broadband Greenness
The broadband greenness VIs are among the simplest measures of the general
quantity and vigor of green vegetation. They are combinations of reflectance
measurements that are sensitive to the combined effects of foliage chlorophyll
concentration, canopy leaf area, foliage clumping, and canopy architecture. These
VIs are designed to provide a measure of the overall amount and quality of
photosynthetic material in vegetation, which is essential for understanding the state
of vegetation for any purpose. These VIs are an integrative measurement of these
factors and are well correlated with the fractional absorption of photosynthetically
active radiation (fAPAR) in plant canopies and vegetated pixels. They do not provide
quantitative information on any one biological or environmental factor contributing to
the fAPAR, but broad correlations have been found between the broadband greenness
VIs and canopy LAI.
Broadband greenness VIs compare reflectance measurements from the reflectance
peak of vegetation in the near-infrared range to another measurement taken in the red
range, where chlorophyll absorbs photons to store energy through
photosynthesis. Use of near-infrared measurements, with much greater penetration
depth through the canopy than red, allows sounding of the total amount of green
vegetation in the column until the signal saturates at very high levels. Because these
features are spectrally quite broad, many of the broadband greenness indices can
work effectively, even with image data collected from broadband multispectral
sensors, such as AVHRR, Landsat TM, and QuickBird. Applications include
vegetation phenology (growth) studies, land-use and climatological impact
assessments, and vegetation productivity modeling.
The broadband greenness equations in the next sections represent the surface
reflectance in an image band with a center wavelength as follows: ρNIR = 800 nm,
ρRED = 680 nm, and ρBLUE = 450 nm. Increases in leaf chlorophyll concentration or
leaf area, decreases in foliage clumping, and changes in canopy architecture each can
contribute to ρNIR increases and ρRED decreases, thereby causing an increase in the
broadband greenness values.
NDVI = (ρNIR - ρRED) / (ρNIR + ρRED)
The value of this index ranges from -1 to 1. The common range for green vegetation
is 0.2 to 0.8.
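As a minimal IDL illustration of the equation above (the reflectance values are
hypothetical, scaled 0 to 1):

   nir = 0.45                        ; reflectance near 800 nm
   red = 0.06                        ; reflectance near 680 nm
   ndvi = (nir - red) / (nir + red)
   print, ndvi                       ; about 0.76, typical of dense green vegetation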
References:
Rouse, J.W., R.H. Haas, J.A. Schell, and D.W. Deering, 1973. Monitoring Vegetation
Systems in the Great Plains with ERTS. Third ERTS Symposium, NASA SP-351 I:
309-317.
Tucker, C.J., 1979. Red and Photographic Infrared Linear Combinations for
Monitoring Vegetation. Remote Sensing of Environment 8:127-150.
Jackson, R.D., P.N. Slater, and P.J. Pinter, 1983. Discrimination of Growth and Water
Stress in Wheat by Various Vegetation Indices Through Clear and Turbid
Atmospheres. Remote Sensing of Environment 15:187-208.
Sellers, P.J., 1985. Canopy Reflectance, Photosynthesis and Transpiration.
International Journal of Remote Sensing 6:1335-1372.
SR = ρNIR / ρRED
The value of this index ranges from 0 to more than 30. The common range for green
vegetation is 2 to 8.
References:
Rouse, J.W., R.H. Haas, J.A. Schell, and D.W. Deering, 1973. Monitoring Vegetation
Systems in the Great Plains with ERTS. Third ERTS Symposium, NASA SP-351 I:
309-317.
Tucker, C.J., 1979. Red and Photographic Infrared Linear Combinations for
Monitoring Vegetation. Remote Sensing of Environment 8:127-150.
Sellers, P.J., 1985. Canopy Reflectance, Photosynthesis and Transpiration.
International Journal of Remote Sensing 6:1335-1372.
EVI = 2.5 * [(ρNIR - ρRED) / (ρNIR + 6ρRED - 7.5ρBLUE + 1)]
The value of this index ranges from -1 to 1. The common range for green vegetation
is 0.2 to 0.8.
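The same kind of minimal IDL sketch applies to the EVI equation above; the reflectance
values are again hypothetical:

   nir = 0.45  & red = 0.06  & blue = 0.04        ; hypothetical reflectances
   evi = 2.5 * (nir - red) / (nir + 6.0*red - 7.5*blue + 1.0)
   print, evi                                     ; about 0.65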
Reference:
Huete, A.R., H. Liu, K. Batchily, and W. van Leeuwen, 1997. A Comparison of
Vegetation Indices Over a Global Set of TM Images for EOS-MODIS. Remote
Sensing of Environment 59(3):440-451.
The value of this index ranges from -1 to 1. The common range for green vegetation
is 0.2 to 0.8.
Reference:
Kaufman, Y.J. and D. Tanre, 1996. Strategy for Direct and Indirect Methods for
Correcting the Aerosol Effect on Remote Sensing: from AVHRR to EOS-MODIS.
Remote Sensing of Environment 55:65-79.
Reference:
Lobell, D.B. and G.P. Asner, 2003. Hyperion studies of crop stress in Mexico.
Proceedings of the 12th Annual JPL Airborne Earth Science Workshop. Pasadena,
CA. (ftp://popo.jpl.nasa.gov/pub/docs/workshops/aviris.proceedings.html).
Narrowband Greenness
Narrowband greenness VIs are a combination of reflectance measurements sensitive
to the combined effects of foliage chlorophyll concentration, canopy leaf area, foliage
clumping, and canopy architecture. Similar to the broadband greenness VIs,
narrowband greenness VIs are designed to provide a measure of the overall amount
and quality of photosynthetic material in vegetation, which is essential for
understanding the state of vegetation. These VIs use reflectance measurements in the
red and near-infrared regions to sample the red edge portion of the reflectance curve.
The red edge is a name used to describe the steeply sloped region of the vegetation
reflectance curve between 690 nm and 740 nm that is caused by the transition from
chlorophyll absorption to near-infrared leaf scattering. Use of near-infrared
measurements, with much greater penetration depth through the canopy than red
measurements, allows estimation of the total amount of green material in the column.
Narrowband greenness VIs are more sophisticated measures of general quantity and
vigor of green vegetation than the broadband greenness VIs. Making narrowband
measurements in the red edge allows these indices to be more sensitive to smaller
changes in vegetation health than the broadband greenness VIs, particularly in
conditions of dense vegetation where the broadband measures can saturate.
Narrowband greenness VIs are intended for use with high spectral resolution imaging
data, such as that acquired by hyperspectral sensors.
• Red Edge Normalized Difference Vegetation Index: A modification of the NDVI using
reflectance measurements along the red edge.
• Modified Red Edge Simple Ratio Index: A ratio of reflectance along the red edge with
blue reflection correction.
• Modified Red Edge Normalized Difference Vegetation Index: A modification of the Red
Edge NDVI using blue to compensate for scattered light.
• Vogelmann Red Edge Index 1: A shoulder of the red-to-NIR transition that is
indicative of canopy stress.
• Vogelmann Red Edge Index 2: A shape of the near-infrared transition that is
indicative of the onset of canopy stress and senescence.
• Vogelmann Red Edge Index 3: A shape of the near-infrared transition that is
indicative of the onset of canopy stress and senescence.
• Red Edge Position Index: The location of the maximum derivative in the near-infrared
transition, which is sensitive to chlorophyll concentration.
NDVI705 = (ρ750 - ρ705) / (ρ750 + ρ705)
The value of this index ranges from -1 to 1. The common range for green vegetation
is 0.2 to 0.9.
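Because the narrowband indices are intended for hyperspectral data, the bands closest
to the required center wavelengths must be identified first. A minimal IDL sketch,
assuming a hypothetical wavelength vector (in nanometers) and a single-pixel
reflectance spectrum:

   wl   = [695.0, 704.9, 714.8, 724.7, 734.6, 744.5, 754.4]   ; band centers, nm
   refl = [0.05,  0.08,  0.15,  0.25,  0.35,  0.42,  0.45 ]   ; reflectance, 0-1
   void = min(abs(wl - 705.0), i705)    ; index of the band nearest 705 nm
   void = min(abs(wl - 750.0), i750)    ; index of the band nearest 750 nm
   ndvi705 = (refl[i750] - refl[i705]) / (refl[i750] + refl[i705])
   print, ndvi705                       ; about 0.70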
References:
Gitelson, A.A. and M.N. Merzlyak, 1994. Spectral Reflectance Changes Associated
with Autumn Senescence of Aesculus Hippocastanum L. and Acer Platanoides L.
mSR705 = (ρ750 - ρ445) / (ρ705 - ρ445)
The value of this index ranges from 0 to 30. The common range for green vegetation
is 2 to 8.
References:
Sims, D.A. and J.A. Gamon, 2002. Relationships Between Leaf Pigment Content and
Spectral Reflectance Across a Wide Range of Species, Leaf Structures and
Developmental Stages. Remote Sensing of Environment 81:337-354.
Datt, B., 1999. A New Reflectance Index for Remote Sensing of Chlorophyll Content
in Higher Plants: Tests Using Eucalyptus Leaves. Journal of Plant Physiology
154:30-36.
mNDVI705 = (ρ750 - ρ705) / (ρ750 + ρ705 - 2ρ445)
The value of this index ranges from -1 to 1. The common range for green vegetation
is 0.2 to 0.7.
References:
Datt, B., 1999. A New Reflectance Index for Remote Sensing of Chlorophyll Content
in Higher Plants: Tests Using Eucalyptus Leaves. Journal of Plant Physiology
154:30-36.
Sims, D.A. and J.A. Gamon, 2002. Relationships Between Leaf Pigment Content and
Spectral Reflectance Across a Wide Range of Species, Leaf Structures and
Developmental Stages. Remote Sensing of Environment 81:337-354.
VOG1 = ρ740 / ρ720
The value of this index ranges from 0 to 20. The common range for green vegetation
is 4 to 8.
Reference:
Vogelmann, J.E., B.N. Rock, and D.M. Moss, 1993. Red Edge Spectral
Measurements from Sugar Maple Leaves. International Journal of Remote Sensing
14:1563-1575.
VOG2 = (ρ734 - ρ747) / (ρ715 + ρ726)
The value of this index ranges from 0 to 20. The common range for green vegetation
is 4 to 8.
Reference:
Vogelmann, J.E., B.N. Rock, and D.M. Moss, 1993. Red Edge Spectral
Measurements from Sugar Maple Leaves. International Journal of Remote Sensing
14:1563-1575.
VOG3 = (ρ734 - ρ747) / (ρ715 + ρ720)
The value of this index ranges from 0 to 20. The common range for green vegetation
is 4 to 8.
Reference:
Vogelmann, J.E., B.N. Rock, and D.M. Moss, 1993. Red Edge Spectral
Measurements from Sugar Maple Leaves. International Journal of Remote Sensing
14:1563-1575.
• Photochemical Reflectance Index: Useful to estimate absorption by leaf carotenoid
(especially xanthophyll) pigments, leaf stress, and carbon dioxide uptake.
• Structure Insensitive Pigment Index: An indicator of leaf pigment concentrations
normalized for variations in overall canopy structure and foliage content.
• Red Green Ratio Index: The ratio of red to green reflectance, sensitive to the ratio
of anthocyanin to chlorophyll.
PRI = (ρ531 - ρ570) / (ρ531 + ρ570)
The value of this index ranges from -1 to 1. The common range for green vegetation
is -0.2 to 0.2.
References:
Gamon, J.A., J. Penuelas, and C.B. Field, 1992. A Narrow-Waveband Spectral Index
That Tracks Diurnal Changes in Photosynthetic Efficiency. Remote Sensing of
Environment 41:35-44.
Gamon, J.A., L. Serrano, and J.S. Surfus, 1997. The Photochemical Reflectance
Index: An Optical Indicator of Photosynthetic Radiation Use Efficiency Across
Species, Functional Types and Nutrient Levels. Oecologia 112:492-501.
SIPI = (ρ800 - ρ445) / (ρ800 - ρ680)
The value of this index ranges from 0 to 2. The common range for green vegetation is
0.8 to 1.8.
Reference:
Penuelas, J., F. Baret, and I. Filella, 1995. Semi-Empirical Indices to Assess
Carotenoids/Chlorophyll-a Ratio from Leaf Spectral Reflectance. Photosynthetica
31:221-230.
Applications include plant growth cycle (phenology) studies, canopy stress detection,
and crop yield prediction. Results are reported as the mean of all bands in the red
range divided by the mean of all bands in the green range. The value of this index
ranges from 0.1 to more than 8. The common range for green vegetation is 0.7 to 3.
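A minimal IDL sketch of that band-averaging step, assuming hypothetical band centers
and reflectances, and assuming for illustration only that the red range spans 600 to
700 nm and the green range 500 to 600 nm:

   wl   = [510.0, 530.0, 550.0, 570.0, 610.0, 640.0, 670.0, 690.0]   ; band centers, nm
   refl = [0.060, 0.080, 0.100, 0.080, 0.060, 0.065, 0.065, 0.070]   ; reflectance, 0-1
   red   = where(wl ge 600.0 and wl lt 700.0)     ; bands in the red range
   green = where(wl ge 500.0 and wl lt 600.0)     ; bands in the green range
   rgri  = mean(refl[red]) / mean(refl[green])
   print, rgri                                    ; about 0.81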
Reference:
Gamon, J.A. and J.S. Surfus, 1999. Assessing Leaf Pigment Content and Activity With
a Reflectometer. New Phytologist 143:105-117.
Canopy Nitrogen
The Canopy Nitrogen VI is designed to provide a measure of nitrogen concentration
of remotely sensed foliage. Nitrogen is an important component of chlorophyll and is
generally present in high concentration in vegetation that is growing quickly.
“Nitrogen” on page 1216 provides more information on the importance of nitrogen in
vegetation. This VI uses reflectance measurements in the shortwave infrared range to
measure relative amounts of nitrogen contained in vegetation canopies.
The value of this index ranges from 0 to 1. The common range for green vegetation is
0.02 to 0.1.
References:
Serrano, L., J. Penuelas, and S.L. Ustin, 2002. Remote Sensing of Nitrogen and
Lignin in Mediterranean Vegetation from AVIRIS Data: Decomposing Biochemical
from Structural Signals. Remote Sensing of Environment 81:355-364.
Fourty, T., F. Baret, S. Jacquemoud, G. Schmuck, and J. Verdebout, 1996. Leaf
Optical Properties with Explicit Description of Its Biochemical Composition: Direct
and Inverse Problems. Remote Sensing of Environment 56:104-117.
The value of this index ranges from 0 to 1. The common range for green vegetation is
0.005 to 0.05.
References:
Serrano, L., J. Penuelas, and S.L. Ustin, 2002. Remote Sensing of Nitrogen and
Lignin in Mediterranean Vegetation from AVIRIS Data: Decomposing Biochemical
from Structural Signals. Remote Sensing of Environment 81:355-364.
Fourty, T., F. Baret, S. Jacquemoud, G. Schmuck, and J. Verdebout, 1996. Leaf
Optical Properties with Explicit Description of Its Biochemical Composition: Direct
and Inverse Problems. Remote Sensing of Environment 56:104-117.
Melillo, J.M., J.D. Aber, and J.F. Muratore, 1982. Nitrogen and Lignin Control of
Hardwood Leaf Litter Decomposition Dynamics. Ecology 63:621-626.
The value of this index ranges from -3 to more than 4. The common range for green
vegetation is -2 to 4.
References:
Daughtry, C.S.T., 2001. Discriminating Crop Residues from Soil by Short-Wave
Infrared Reflectance. Agronomy Journal 93:125-131.
Daughtry, C.S.T., E.R. Hunt Jr., and J.E. McMurtrey III. 2004. Assessing Crop
Residue Cover Using Shortwave Infrared Reflectance. Remote Sensing of
Environment 90:126-134.
PSRI = (ρ680 - ρ500) / ρ750
The value of this index ranges from -1 to 1. The common range for green vegetation
is -0.1 to 0.2.
Reference:
Merzlyak, M.N., A.A. Gitelson, O.B. Chivkunova, and V.Y. Rakitin, 1999. Non-
destructive Optical Detection of Pigment Changes During Leaf Senescence and Fruit
Ripening. Physiologia Plantarum 106:135-141.
Leaf Pigments
The leaf pigment VIs are designed to provide a measure of stress-related pigments
present in vegetation. Stress-related pigments include carotenoids and anthocyanins,
which are present in higher concentrations in weakened vegetation. These VIs do not
measure chlorophyll, which is measured using the greenness indices. Carotenoids
function in light absorption processes in plants, as well as in protecting plants from
the harmful effects of high light conditions. Anthocyanins are water-soluble pigments
abundant in newly forming leaves and leaves undergoing senescence. Applications
for leaf pigment VIs include crop monitoring, ecosystem studies, analyses of canopy
stress, and precision agriculture. Stress pigments can indicate the presence of
vegetation stress, often before it is observable using the unaided eye. The pigments
are described in greater detail in “Pigments” on page 1213. The VIs use reflectance
measurements in the visible spectrum to take advantage of the absorption signatures
of stress-related pigments. For leaf pigment indices, reflectance must be scaled
between 0 and 1. If your reflectance data are scaled to a different range, use the
Reflectance Scale Factor field in the Header Info dialog instead of creating a
rescaled copy of your data. See “Entering
a Reflectance Scale Factor” on page 207 for details.
CRI1 = (1 / ρ510) - (1 / ρ550)
The value of this index ranges from 0 to more than 15. The common range for green
vegetation is 1 to 12.
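If reflectance is stored as scaled integers, the scale factor must be applied (or
declared in the header as described above) before a leaf pigment index is computed. A
minimal IDL sketch, with a hypothetical scale factor of 10,000:

   scale = 10000.0
   r510  = 620 / scale          ; scaled reflectance at 510 nm -> 0.062
   r550  = 980 / scale          ; scaled reflectance at 550 nm -> 0.098
   cri1  = (1.0/r510) - (1.0/r550)
   print, cri1                  ; about 5.9, within the common 1-12 range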
Reference:
Gitelson, A.A., Y. Zur, O.B. Chivkunova, and M.N. Merzlyak, 2002. Assessing
Carotenoid Content in Plant Leaves with Reflectance Spectroscopy. Photochemistry
and Photobiology 75:272-281.
CRI2 = (1 / ρ510) - (1 / ρ700)
The value of this index ranges from 0 to more than 15. The common range for green
vegetation is 1 to 11.
Reference:
Gitelson, A.A., Y. Zur, O.B. Chivkunova, and M.N. Merzlyak, 2002. Assessing
Carotenoid Content in Plant Leaves with Reflectance Spectroscopy. Photochemistry
and Photobiology 75:272-281.
ARI1 = (1 / ρ550) - (1 / ρ700)
The value of this index ranges from 0 to more than 0.2. The common range for green
vegetation is 0.001 to 0.1.
Reference:
Gitelson, A.A., M.N. Merzlyak, and O.B. Chivkunova, 2001. Optical Properties and
Nondestructive Estimation of Anthocyanin Content in Plant Leaves. Photochemistry
and Photobiology 71:38-45.
ARI2 = ρ800 * [(1 / ρ550) - (1 / ρ700)]
The value of this index ranges from 0 to more than 0.2. The common range for green
vegetation is 0.001 to 0.1.
Reference:
Gitelson, A.A., M.N. Merzlyak, and O.B. Chivkunova, 2001. Optical Properties and
Nondestructive Estimation of Anthocyanin Content in Plant Leaves. Photochemistry
and Photobiology 71:38-45.
• Water Band Index: Absorption intensity at 900 nm increases with canopy water
content.
• Normalized Difference Water Index: The rate of increase of absorption at 857 nm
relative to 1241 nm is a direct metric of the total volumetric water content of
vegetation.
• Moisture Stress Index: Detects changes in absorption at 1599 nm that are sensitive
to the onset of moisture stress in vegetation.
• Normalized Difference Infrared Index: Absorption intensity at 1649 nm increases with
canopy water content.
WBI = ρ900 / ρ970
References:
Penuelas, J., I. Filella, C. Biel, L. Serrano, and R. Save, 1995. The Reflectance at the
950-970 Region as an Indicator of Plant Water Status. International Journal of
Remote Sensing 14:1887-1905.
Champagne, C., E. Pattey, A. Bannari, and I.B. Stratchan, 2001. Mapping Crop Water
Status: Issues of Scale in the Detection of Crop Water Stress Using Hyperspectral
Indices. Proceedings of the 8th International Symposium on Physical Measurements
and Signatures in Remote Sensing, Aussois, France. Pp.79-84.
NDWI = (ρ857 - ρ1241) / (ρ857 + ρ1241)
The value of this index ranges from -1 to 1. The common range for green vegetation
is -0.1 to 0.4.
Reference:
Gao, B.C., 1995. Normalized Difference Water Index for Remote Sensing of
Vegetation Liquid Water from Space. Proceedings of SPIE 2480: 225-236.
The MSI is inverted relative to the other water VIs; higher values indicate greater
water stress and less water content. The MSI is defined by the following equation:
MSI = ρ1599 / ρ819
The value of this index ranges from 0 to more than 3. The common range for green
vegetation is 0.4 to 2.
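A minimal IDL sketch with two hypothetical canopy spectra illustrates the inverted
sense of this index; the drier canopy returns the larger MSI value:

   r819_wet = 0.45  & r1599_wet = 0.18    ; well-watered canopy (hypothetical)
   r819_dry = 0.40  & r1599_dry = 0.34    ; moisture-stressed canopy (hypothetical)
   print, r1599_wet / r819_wet            ; 0.40
   print, r1599_dry / r819_dry            ; 0.85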
References:
Hunt Jr., E.R. and B.N. Rock, 1989. Detection of Changes in Leaf Water Content
Using Near- And Middle-Infrared Reflectances. Remote Sensing of Environment
30:43-54.
Ceccato, P., S. Flasse, S. Tarantola, S. Jacquemoud, and J.M. Gregoire, 2001.
Detecting Vegetation Leaf Water Content Using Reflectance in the Optical Domain.
Remote Sensing of Environment 77:22-33.
NDII = (ρ819 - ρ1649) / (ρ819 + ρ1649)
The value of this index ranges from -1 to 1. The common range for green vegetation
is 0.02 to 0.6.
References:
Hardisky, M.A., V. Klemas, and R.M. Smart, 1983. The Influences of Soil Salinity,
Growth Form, and Leaf Moisture on the Spectral Reflectance of Spartina Alterniflora
Canopies. Photogrammetric Engineering and Remote Sensing 49:77-83.