ERDAS Field Guide

Copyright (c) 2005 Leica Geosystems Geospatial Imaging, LLC


All rights reserved.
Printed in the United States of America.
The information contained in this document is the exclusive property of Leica Geosystems Geospatial Imaging, LLC.
This work is protected under United States copyright law and other international copyright treaties and conventions.
No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying and recording, or by any information storage or retrieval system, except as expressly
permitted in writing by Leica Geosystems Geospatial Imaging, LLC. All requests should be sent to: Manager of
Technical Documentation, Leica Geosystems Geospatial Imaging, LLC, 5051 Peachtree Corners Circle, Suite 100,
Norcross, GA, 30092, USA.
The information contained in this document is subject to change without notice.
Government Reserved Rights. MrSID technology incorporated in the Software was developed in part through a
project at the Los Alamos National Laboratory, funded by the U.S. Government, managed under contract by the
University of California (University), and is under exclusive commercial license to LizardTech, Inc. It is used under
license from LizardTech. MrSID is protected by U.S. Patent No. 5,710,835. Foreign patents pending. The U.S.
Government and the University have reserved rights in MrSID technology, including without limitation: (a) The U.S.
Government has a non-exclusive, nontransferable, irrevocable, paid-up license to practice or have practiced
throughout the world, for or on behalf of the United States, inventions covered by U.S. Patent No. 5,710,835 and has
other rights under 35 U.S.C. 200-212 and applicable implementing regulations; (b) If LizardTech's rights in the
MrSID Technology terminate during the term of this Agreement, you may continue to use the Software. Any provisions
of this license which could reasonably be deemed to do so would then protect the University and/or the U.S.
Government; and (c) The University has no obligation to furnish any know-how, technical assistance, or technical data
to users of MrSID software and makes no warranty or representation as to the validity of U.S. Patent 5,710,835 nor
that the MrSID Software will not infringe any patent or other proprietary right. For further information about these
provisions, contact LizardTech, 1008 Western Ave., Suite 200, Seattle, WA 98104.
ERDAS, ERDAS IMAGINE, IMAGINE OrthoBASE, Stereo Analyst and IMAGINE VirtualGIS are registered trademarks;
IMAGINE OrthoBASE Pro is a trademark of Leica Geosystems Geospatial Imaging, LLC.
SOCET SET is a registered trademark of BAE Systems Mission Solutions.
Other companies and products mentioned herein are trademarks or registered trademarks of their respective owners.
Table of Contents
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Conventions Used in this Book . . . . . . . . . . . . . . xxv
Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Image Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Remote Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Absorption / Reflection Spectra . . . . . . . . . . . . . . . . . . . 5
Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Spectral Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Spatial Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Radiometric Resolution . . . . . . . . . . . . . . . . . . . . . . . 16
Temporal Resolution . . . . . . . . . . . . . . . . . . . . . . . . . 17
Data Correction . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Line Dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Data Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Storage Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Storage Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Calculating Disk Space . . . . . . . . . . . . . . . . . . . . . . . . 23
ERDAS IMAGINE Format (.img) . . . . . . . . . . . . . . . . . . 24
Image File Organization . . . . . . . . . . . . . . . . . . . . 27
Consistent Naming Convention . . . . . . . . . . . . . . . . . . 27
Keeping Track of Image Files . . . . . . . . . . . . . . . . . . . 28
Geocoded Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Using Image Data in GIS . . . . . . . . . . . . . . . . . . . 29
Subsetting and Mosaicking . . . . . . . . . . . . . . . . . . . . . 29
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Multispectral Classification . . . . . . . . . . . . . . . . . . . . . 30
Editing Raster Data . . . . . . . . . . . . . . . . . . . . . . . 31
Editing Continuous (Athematic) Data . . . . . . . . . . . . . . 31
Interpolation Techniques . . . . . . . . . . . . . . . . . . . . . . 32
Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Vertex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Vector Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Attribute Information . . . . . . . . . . . . . . . . . . . . . 39
Displaying Vector Data . . . . . . . . . . . . . . . . . . . . 40
Color Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Symbolization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Vector Data Sources . . . . . . . . . . . . . . . . . . . . . . 42
Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Tablet Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Screen Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Imported Vector Data . . . . . . . . . . . . . . . . . . . . . 44
Raster to Vector Conversion . . . . . . . . . . . . . . . . 45
Other Vector Data Types . . . . . . . . . . . . . . . . . . . 45
Shapefile Vector Format . . . . . . . . . . . . . . . . . . . . . . . 46
SDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
SDTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
ArcGIS Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Raster and Vector Data Sources . . . . . . . . . . . . . . . . . . . . 49
Importing and Exporting . . . . . . . . . . . . . . . . . . . 49
Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Raster Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Annotation Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Generic Binary Data . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Satellite Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Satellite System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Satellite Characteristics . . . . . . . . . . . . . . . . . . . . . . . . 57
IKONOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
IRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Landsat 1-5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Landsat 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
NLAPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
NOAA Polar Orbiter Data . . . . . . . . . . . . . . . . . . . . . . . 67
OrbView-3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
SeaWiFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
SPOT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
SPOT4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Advantages of Using Radar Data . . . . . . . . . . . . . . . . . 73
Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Speckle Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Applications for Radar Data . . . . . . . . . . . . . . . . . . . . . 76
Current Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . 77
Future Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . 82
Image Data from Aircraft . . . . . . . . . . . . . . . . . . 83
AIRSAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
AVIRIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Daedalus TMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Image Data from Scanning . . . . . . . . . . . . . . . . . . 84
Photogrammetric Scanners . . . . . . . . . . . . . . . . . . . . . 85
Desktop Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Aerial Photography . . . . . . . . . . . . . . . . . . . . . . . . . . 86
DOQs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
ADRG Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
ARC System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
ADRG File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
.OVR (overview) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
.IMG (scanned image data) . . . . . . . . . . . . . . . . . . . . 89
.Lxx (legend data) . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
ADRG File Naming Convention . . . . . . . . . . . . . . . . . . 91
ADRI Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
.OVR (overview) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
.IMG (scanned image data) . . . . . . . . . . . . . . . . . . . . 94
ADRI File Naming Convention . . . . . . . . . . . . . . . . . . . 94
Raster Product Format . . . . . . . . . . . . . . . . . . . . . 95
CIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
CADRG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . 97
DEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
DTED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Using Topographic Data . . . . . . . . . . . . . . . . . . . . . . . 99
GPS Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100
Satellite Position . . . . . . . . . . . . . . . . . . . . . . . . . . . .100
Differential Correction . . . . . . . . . . . . . . . . . . . . . . . .101
Applications of GPS Data . . . . . . . . . . . . . . . . . . . . . .101
Ordering Raster Data . . . . . . . . . . . . . . . . . . . . . 103
Addresses to Contact . . . . . . . . . . . . . . . . . . . . . . . . .103
Raster Data from Other Software Vendors . . . . . 106
ERDAS Ver. 7.X . . . . . . . . . . . . . . . . . . . . . . . . . . . .106
GRID and GRID Stacks . . . . . . . . . . . . . . . . . . . . . . .107
JFIF (JPEG) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
MrSID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .108
SDTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .108
SUN Raster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109
TIFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109
GeoTIFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .110
Vector Data from Other Software Vendors . . . . . 111
ARCGEN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .112
AutoCAD (DXF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . .112
DLG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
ETAK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
IGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
TIGER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
Image Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Display Memory Size . . . . . . . . . . . . . . . . . . . . . . . . .117
Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118
Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Colormap and Colorcells . . . . . . . . . . . . . . . . . . . . . . 119
Display Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
8-bit PseudoColor . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
24-bit DirectColor . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
24-bit TrueColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
PC Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Displaying Raster Layers. . . . . . . . . . . . . . . . . . .125
Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . 125
Thematic Raster Layers . . . . . . . . . . . . . . . . . . . . . . . 130
Using the Viewer . . . . . . . . . . . . . . . . . . . . . . . .133
Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Dithering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Viewing Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Viewing Multiple Layers . . . . . . . . . . . . . . . . . . . . . . . 139
Linking Viewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Zoom and Roam . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Geographic Information . . . . . . . . . . . . . . . . . . . . . . 142
Enhancing Continuous Raster Layers . . . . . . . . . . . . . 142
Creating New Image Files . . . . . . . . . . . . . . . . . . . . . 143
Mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .145
Input Image Mode . . . . . . . . . . . . . . . . . . . . . . .146
Exclude Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Image Dodging . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Color Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . 148
Intersection Mode. . . . . . . . . . . . . . . . . . . . . . . .149
Set Overlap Function . . . . . . . . . . . . . . . . . . . . . . . . 150
Automatically Generate Cutlines For Intersection . . . . . 150
Geometry-based Cutline Generation . . . . . . . . . . . . . . 151
Output Image Mode . . . . . . . . . . . . . . . . . . . . . .152
Output Image Options . . . . . . . . . . . . . . . . . . . . . . . 152
Run Mosaic To Disk . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .155
Display vs. File Enhancement . . . . . . . . . . . . . . . . . . 156
Spatial Modeling Enhancements . . . . . . . . . . . . . . . . . 156
Correcting Data . . . . . . . . . . . . . . . . . . . . . . . . .159
Radiometric Correction: Visible/Infrared Imagery . . . . . 160
Atmospheric Effects . . . . . . . . . . . . . . . . . . . . . . . . . 161
Geometric Correction . . . . . . . . . . . . . . . . . . . . . . . . 162
Radiometric Enhancement . . . . . . . . . . . . . . . . . .162
Contrast Stretching . . . . . . . . . . . . . . . . . . . . . . . . . 163
Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . 168
Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . 171
Brightness Inversion . . . . . . . . . . . . . . . . . . . . . . . . . 172
Spatial Enhancement . . . . . . . . . . . . . . . . . . . . .172
Convolution Filtering . . . . . . . . . . . . . . . . . . . . . . . . . 173
Crisp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Resolution Merge . . . . . . . . . . . . . . . . . . . . . . . . . . .178
Adaptive Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180
Wavelet Resolution Merge . . . . . . . . . . . . . . . . . 181
Wavelet Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . .182
Algorithm Theory . . . . . . . . . . . . . . . . . . . . . . . . . . .185
Prerequisites and Limitations . . . . . . . . . . . . . . . . . . .187
Spectral Transform . . . . . . . . . . . . . . . . . . . . . . . . . .188
Spectral Enhancement . . . . . . . . . . . . . . . . . . . . 189
Principal Components Analysis . . . . . . . . . . . . . . . . . .190
Decorrelation Stretch . . . . . . . . . . . . . . . . . . . . . . . . .194
Tasseled Cap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .194
RGB to IHS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .196
IHS to RGB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .198
Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .199
Hyperspectral Image Processing. . . . . . . . . . . . . 202
Normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .203
IAR Reflectance . . . . . . . . . . . . . . . . . . . . . . . . . . . .204
Log Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .204
Rescale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .204
Processing Sequence . . . . . . . . . . . . . . . . . . . . . . . . .205
Spectrum Average . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Signal to Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Mean per Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Profile Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Wavelength Axis . . . . . . . . . . . . . . . . . . . . . . . . . . . .208
Spectral Library . . . . . . . . . . . . . . . . . . . . . . . . . . . .208
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
System Requirements . . . . . . . . . . . . . . . . . . . . . . . .209
Fourier Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 209
FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Fourier Magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . .212
IFFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .215
Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .216
Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218
Fourier Noise Removal . . . . . . . . . . . . . . . . . . . . . . . .221
Homomorphic Filtering . . . . . . . . . . . . . . . . . . . . . . . .222
Radar Imagery Enhancement . . . . . . . . . . . . . . . 223
Speckle Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .224
Edge Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . .231
Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .234
Radiometric Correction: Radar Imagery . . . . . . . . . . . .237
Slant-to-Ground Range Correction . . . . . . . . . . . . . . .239
Merging Radar with VIS/IR Imagery . . . . . . . . . . . . . .240
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
The Classification Process . . . . . . . . . . . . . . . . . 243
Pattern Recognition . . . . . . . . . . . . . . . . . . . . . . . . . .243
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .243
Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .244
Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .245
Output File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .245
Classification Tips. . . . . . . . . . . . . . . . . . . . . . . .246
Classification Scheme . . . . . . . . . . . . . . . . . . . . . . . . 246
Iterative Classification . . . . . . . . . . . . . . . . . . . . . . . 246
Supervised vs. Unsupervised Training . . . . . . . . . . . . . 247
Classifying Enhanced Data . . . . . . . . . . . . . . . . . . . . 247
Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Supervised Training . . . . . . . . . . . . . . . . . . . . . .248
Training Samples and Feature Space Objects . . . . . . . 249
Selecting Training Samples . . . . . . . . . . . . . . . . .249
Evaluating Training Samples . . . . . . . . . . . . . . . . . . . 251
Selecting Feature Space Objects . . . . . . . . . . . . .252
Unsupervised Training . . . . . . . . . . . . . . . . . . . .254
ISODATA Clustering . . . . . . . . . . . . . . . . . . . . . . . . . 255
RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Signature Files . . . . . . . . . . . . . . . . . . . . . . . . . .261
Evaluating Signatures . . . . . . . . . . . . . . . . . . . . .262
Alarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Contingency Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Separability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Signature Manipulation . . . . . . . . . . . . . . . . . . . . . . . 269
Classification Decision Rules . . . . . . . . . . . . . . . .269
Nonparametric Rules . . . . . . . . . . . . . . . . . . . . . . . . 270
Parametric Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Minimum Distance . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Mahalanobis Distance . . . . . . . . . . . . . . . . . . . . . . . . 277
Maximum Likelihood/Bayesian . . . . . . . . . . . . . . . . . . 278
Fuzzy Methodology . . . . . . . . . . . . . . . . . . . . . . .280
Fuzzy Classification . . . . . . . . . . . . . . . . . . . . . . . . . 280
Fuzzy Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Expert Classification . . . . . . . . . . . . . . . . . . . . . .281
Knowledge Engineer . . . . . . . . . . . . . . . . . . . . . . . . . 282
Knowledge Classifier . . . . . . . . . . . . . . . . . . . . . . . . . 284
Evaluating Classification . . . . . . . . . . . . . . . . . . .284
Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Accuracy Assessment . . . . . . . . . . . . . . . . . . . . . . . . 287
Photogrammetric Concepts . . . . . . . . . . . . . . . . . . . . . . .291
What is Photogrammetry? . . . . . . . . . . . . . . . . . . . . . 291
Types of Photographs and Images . . . . . . . . . . . . . . . 292
Why use Photogrammetry? . . . . . . . . . . . . . . . . . . . . 293
Photogrammetry vs. Conventional Geometric Correction 293
Single Frame Orthorectification vs. Block Triangulation . 294
Image and Data Acquisition . . . . . . . . . . . . . . . .296
Photogrammetric Scanners . . . . . . . . . . . . . . . . . . . . 297
Desktop Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Scanning Resolutions . . . . . . . . . . . . . . . . . . . . . . . . 298
Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . .299
Terrestrial Photography . . . . . . . . . . . . . . . . . . . . . . .302
Interior Orientation . . . . . . . . . . . . . . . . . . . . . . 303
Principal Point and Focal Length . . . . . . . . . . . . . . . . .304
Fiducial Marks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .304
Lens Distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . .306
Exterior Orientation . . . . . . . . . . . . . . . . . . . . . . 307
The Collinearity Equation . . . . . . . . . . . . . . . . . . . . . .309
Photogrammetric Solutions . . . . . . . . . . . . . . . . 310
Space Resection . . . . . . . . . . . . . . . . . . . . . . . . . . . .311
Space Forward Intersection . . . . . . . . . . . . . . . . . . . .311
Bundle Block Adjustment . . . . . . . . . . . . . . . . . . . . . .312
Least Squares Adjustment . . . . . . . . . . . . . . . . . . . . .315
Self-calibrating Bundle Adjustment . . . . . . . . . . . . . . .318
Automatic Gross Error Detection . . . . . . . . . . . . . . . . .318
GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
GCP Requirements . . . . . . . . . . . . . . . . . . . . . . . . . .320
Processing Multiple Strips of Imagery . . . . . . . . . . . . .321
Tie Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Automatic Tie Point Collection . . . . . . . . . . . . . . . . . . .322
Image Matching Techniques . . . . . . . . . . . . . . . . 323
Area Based Matching . . . . . . . . . . . . . . . . . . . . . . . . .324
Feature Based Matching . . . . . . . . . . . . . . . . . . . . . . .326
Relation Based Matching . . . . . . . . . . . . . . . . . . . . . .326
Image Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . .327
Satellite Photogrammetry. . . . . . . . . . . . . . . . . . 327
SPOT Interior Orientation . . . . . . . . . . . . . . . . . . . . . .329
SPOT Exterior Orientation . . . . . . . . . . . . . . . . . . . . .330
Collinearity Equations and Satellite Block Triangulation .334
Radar Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
IMAGINE OrthoRadar Theory . . . . . . . . . . . . . . . 339
Parameters Required for Orthorectification . . . . . . . . . .339
Algorithm Description . . . . . . . . . . . . . . . . . . . . . . . .342
IMAGINE StereoSAR DEM Theory . . . . . . . . . . . . 347
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .347
Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .350
Despeckle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .351
Degrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .351
Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .351
Constrain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .352
Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .352
Degrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .357
Height . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .357
IMAGINE IFSAR DEM Theory . . . . . . . . . . . . . . . . 358
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .358
Electromagnetic Wave Background . . . . . . . . . . . . . . .358
The Interferometric Model . . . . . . . . . . . . . . . . . . . . .361
Image Registration . . . . . . . . . . . . . . . . . . . . . . . . . .366
Phase Noise Reduction . . . . . . . . . . . . . . . . . . . . . . . 368
Phase Flattening . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Phase Unwrapping . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .375
Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Georeferencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Latitude/Longitude . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
When to Rectify . . . . . . . . . . . . . . . . . . . . . . . . .376
When to Georeference Only . . . . . . . . . . . . . . . . . . . . 377
Disadvantages of Rectification . . . . . . . . . . . . . . . . . . 378
Rectification Steps . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Ground Control Points . . . . . . . . . . . . . . . . . . . . .379
GCPs in ERDAS IMAGINE . . . . . . . . . . . . . . . . . . . . . . 379
Entering GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
GCP Prediction and Matching . . . . . . . . . . . . . . . . . . . 380
Polynomial Transformation . . . . . . . . . . . . . . . . .382
Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . 383
Nonlinear Transformations . . . . . . . . . . . . . . . . . . . . 385
Effects of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Minimum Number of GCPs . . . . . . . . . . . . . . . . . . . . . 391
Rubber Sheeting . . . . . . . . . . . . . . . . . . . . . . . . .392
Triangle-Based Finite Element Analysis . . . . . . . . . . . . 392
Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Triangle-based rectification . . . . . . . . . . . . . . . . . . . . 393
Linear transformation . . . . . . . . . . . . . . . . . . . . . . . . 393
Nonlinear transformation . . . . . . . . . . . . . . . . . . . . . 393
Check Point Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 394
RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .394
Residuals and RMS Error Per GCP . . . . . . . . . . . . . . . . 394
Total RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Error Contribution by Point . . . . . . . . . . . . . . . . . . . . 396
Tolerance of RMS Error . . . . . . . . . . . . . . . . . . . . . . . 396
Evaluating RMS Error . . . . . . . . . . . . . . . . . . . . . . . . 396
Resampling Methods . . . . . . . . . . . . . . . . . . . . . .397
Rectifying to Lat/Lon . . . . . . . . . . . . . . . . . . . . . . . . 399
Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Bilinear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . 400
Cubic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Bicubic Spline Interpolation . . . . . . . . . . . . . . . . . . . . 406
Map-to-Map Coordinate Conversions . . . . . . . . . .408
Conversion Process . . . . . . . . . . . . . . . . . . . . . . . . . 408
Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Terrain Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .411
Terrain Data . . . . . . . . . . . . . . . . . . . . . . . . . . . .412
Slope Images . . . . . . . . . . . . . . . . . . . . . . . . . . .413
Aspect Images . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Shaded Relief . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Topographic Normalization . . . . . . . . . . . . . . . . . 418
Lambertian Reflectance Model . . . . . . . . . . . . . . . . . .419
Non-Lambertian Model . . . . . . . . . . . . . . . . . . . . . . . .419
Geographic Information Systems . . . . . . . . . . . . . . . . . . . 421
Information vs. Data . . . . . . . . . . . . . . . . . . . . . . . . .422
Data Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Continuous Layers . . . . . . . . . . . . . . . . . . . . . . . 425
Thematic Layers . . . . . . . . . . . . . . . . . . . . . . . . . 426
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .427
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
Attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
Raster Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . .429
Vector Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . .430
Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
ERDAS IMAGINE Analysis Tools . . . . . . . . . . . . . . . . . .431
Analysis Procedures . . . . . . . . . . . . . . . . . . . . . . . . . .432
Proximity Analysis . . . . . . . . . . . . . . . . . . . . . . . 433
Contiguity Analysis. . . . . . . . . . . . . . . . . . . . . . . 434
Neighborhood Analysis . . . . . . . . . . . . . . . . . . . . 435
Recoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Overlaying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Matrix Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . 441
Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
Graphical Modeling . . . . . . . . . . . . . . . . . . . . . . . 442
Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .446
Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .447
Output Parameters . . . . . . . . . . . . . . . . . . . . . . . . . .448
Using Attributes in Models . . . . . . . . . . . . . . . . . . . . .448
Script Modeling . . . . . . . . . . . . . . . . . . . . . . . . . 449
Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .453
Vector Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 453
Editing Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . .453
Constructing Topology . . . . . . . . . . . . . . . . . . . . 454
Building and Cleaning Coverages . . . . . . . . . . . . . . . .455
Cartography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Types of Maps . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Thematic Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . .461
Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Legends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Neatlines, Tick Marks, and Grid Lines . . . . . . . . .468
Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .469
Labels and Descriptive Text . . . . . . . . . . . . . . . .470
Typography and Lettering . . . . . . . . . . . . . . . . . . . . . 471
Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . .474
Properties of Map Projections . . . . . . . . . . . . . . . . . . . 474
Projection Types . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
Geographical and Planar Coordinates . . . . . . . . .478
Available Map Projections . . . . . . . . . . . . . . . . . .479
Choosing a Map Projection . . . . . . . . . . . . . . . . .486
Map Projection Uses in a GIS . . . . . . . . . . . . . . . . . . . 486
Deciding Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Spheroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .487
Map Composition . . . . . . . . . . . . . . . . . . . . . . . .492
Learning Map Composition . . . . . . . . . . . . . . . . . . . . 492
Plan the Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
Map Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . .493
US National Map Accuracy Standard . . . . . . . . . . . . . . 493
USGS Land Use and Land Cover Map Guidelines . . . . . 494
USDA SCS Soils Maps Guidelines . . . . . . . . . . . . . . . . 494
Digitized Hardcopy Maps . . . . . . . . . . . . . . . . . . . . . . 494
Hardcopy Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .495
Printing Maps . . . . . . . . . . . . . . . . . . . . . . . . . . .495
Scaled Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
Printing Large Maps . . . . . . . . . . . . . . . . . . . . . . . . . 495
Scale and Resolution . . . . . . . . . . . . . . . . . . . . . . . . 496
Map Scaling Examples . . . . . . . . . . . . . . . . . . . . . . . 497
Mechanics of Printing . . . . . . . . . . . . . . . . . . . . .499
Halftone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Continuous Tone Printing . . . . . . . . . . . . . . . . . . . . . 500
Contrast and Color Tables . . . . . . . . . . . . . . . . . . . . . 500
RGB to CMY Conversion . . . . . . . . . . . . . . . . . . . . . . 501
Math Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .503
Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .503
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .503
Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
Bin Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . 507
Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Covariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Covariance Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Dimensionality of Data . . . . . . . . . . . . . . . . . . . .511
Measurement Vector . . . . . . . . . . . . . . . . . . . . . . . . .511
Mean Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .512
Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . .513
Feature Space Images . . . . . . . . . . . . . . . . . . . . . . . .513
n-Dimensional Histogram . . . . . . . . . . . . . . . . . . . . . .514
Spectral Distance . . . . . . . . . . . . . . . . . . . . . . . . . . .515
Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .515
Transformation Matrix . . . . . . . . . . . . . . . . . . . . . . . .516
Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Matrix Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .517
Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . .517
Transposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .518
Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
Bibliography. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
Works Cited . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
Related Reading . . . . . . . . . . . . . . . . . . . . . . . . . 725
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
List of Figures
Figure 1: Pixels and Bands in a Raster Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Figure 2: Typical File Coordinates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Figure 3: Electromagnetic Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Figure 4: Sun Illumination Spectral Irradiance at the Earth's Surface . . . . . . . . . . . . . 7
Figure 5: Factors Affecting Radiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Figure 6: Reflectance Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region . . . . . . . . . . . . 12
Figure 8: IFOV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Figure 9: Brightness Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Figure 10: Landsat TM Band 2 (Four Types of Resolution) . . . . . . . . . . . . . . . . . . . . 17
Figure 11: Band Interleaved by Line (BIL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Figure 12: Band Sequential (BSQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Figure 13: Image Files Store Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Figure 14: Example of a Thematic Raster Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Figure 15: Examples of Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Figure 16: Vector Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Figure 17: Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Figure 18: Workspace Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Figure 19: Attribute CellArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Figure 20: Symbolization Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Figure 21: Digitizing Tablet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Figure 22: Raster Format Converted to Vector Format . . . . . . . . . . . . . . . . . . . . . . 45
Figure 23: Multispectral Imagery Comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Figure 24: Landsat MSS vs. Landsat TM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Figure 25: SPOT Panchromatic vs. SPOT XS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Figure 26: SLAR Radar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Figure 27: Received Radar Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Figure 28: Radar Reflection from Different Sources and Distances . . . . . . . . . . . . . . 75
Figure 29: ADRG Overview File Displayed in a Viewer . . . . . . . . . . . . . . . . . . . . . . . 88
Figure 30: Subset Area with Overlapping ZDRs . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Figure 31: Seamless Nine Image DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Figure 32: ADRI Overview File Displayed in a Viewer . . . . . . . . . . . . . . . . . . . . . . . . 94
Figure 33: Arc/second Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Figure 34: Common Uses of GPS Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Figure 35: Example of One Seat with One Display and Two Screens . . . . . . . . . . . . . 117
Figure 36: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . 121
Figure 37: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . 123
Figure 38: Transforming Data File Values to Screen Values. . . . . . . . . . . . . . . . . . . 124
Figure 39: Contrast Stretch and Colorcell Values . . . . . . . . . . . . . . . . . . . . . . . . . 127
Figure 40: Stretching by Min/Max vs. Standard Deviation . . . . . . . . . . . . . . . . . . . 128
Figure 41: Continuous Raster Layer Display Process . . . . . . . . . . . . . . . . . . . . . . 129
Figure 42: Thematic Raster Layer Display Process . . . . . . . . . . . . . . . . . . . . . . . . 132
Figure 43: Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Figure 44: Example of Dithering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Figure 45: Example of Color Patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Figure 46: Linked Viewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Figure 47: Histograms of Radiometrically Enhanced Data . . . . . . . . . . . . . . . . . . . . 163
Figure 48: Graph of a Lookup Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Figure 49: Enhancement with Lookup Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Figure 50: Nonlinear Radiometric Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Figure 51: Piecewise Linear Contrast Stretch . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Figure 52: Contrast Stretch Using Lookup Tables, and Effect on Histogram . . . . . . . . 168
Figure 53: Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Figure 54: Histogram Equalization Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Figure 55: Equalized Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Figure 56: Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Figure 57: Spatial Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Figure 58: Applying a Convolution Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Figure 59: Output Values for Convolution Kernel . . . . . . . . . . . . . . . . . . . . . . . . . 175
Figure 60: Local Luminance Intercept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Figure 61: Schematic Diagram of the Discrete Wavelet Transform - DWT . . . . . . . . . 184
Figure 62: Inverse Discrete Wavelet Transform - DWT-1 . . . . . . . . . . . . . . . . . . . . 185
Figure 63: Wavelet Resolution Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Figure 64: Two Band Scatterplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Figure 65: First Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Figure 66: Range of First Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Figure 67: Second Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Figure 68: Intensity, Hue, and Saturation Color Coordinate System . . . . . . . . . . . . . 197
Figure 69: Hyperspectral Data Axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Figure 70: Rescale Graphical User Interface (GUI) . . . . . . . . . . . . . . . . . . . . . . . . 205
Figure 71: Spectrum Average GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Figure 72: Spectral Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Figure 73: Two-Dimensional Spatial Profile. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Figure 74: Three-Dimensional Spatial Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Figure 75: Surface Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Figure 76: One-Dimensional Fourier Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Figure 77: Example of Fourier Magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Figure 78: The Padding Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Figure 79: Comparison of Direct and Fourier Domain Processing . . . . . . . . . . . . . . . 217
Figure 80: An Ideal Cross Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Figure 81: High-Pass Filtering Using the Ideal Window . . . . . . . . . . . . . . . . . . . . . . 219
Figure 82: Filtering Using the Bartlett Window . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Figure 83: Filtering Using the Butterworth Window . . . . . . . . . . . . . . . . . . . . . . . . 220
Figure 84: Homomorphic Filtering Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Figure 85: Effects of Mean and Median Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Figure 86: Regions of Local Region Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Figure 87: One-dimensional, Continuous Edge, and Line Models . . . . . . . . . . . . . . . 231
Figure 88: A Noisy Edge Superimposed on an Ideal Edge . . . . . . . . . . . . . . . . . . . . 232
Figure 89: Edge and Line Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Figure 90: Adjust Brightness Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Figure 91: Range Lines vs. Lines of Constant Range . . . . . . . . . . . . . . . . . . . . . . . 239
Figure 92: Slant-to-Ground Range Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Figure 93: Example of a Feature Space Image . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Figure 94: Process for Defining a Feature Space Object . . . . . . . . . . . . . . . . . . . . . 253
Figure 95: ISODATA Arbitrary Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Figure 96: ISODATA First Pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Figure 97: ISODATA Second Pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Figure 98: RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Figure 99: Ellipse Evaluation of Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Figure 100: Classification Flow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Figure 101: Parallelepiped Classification Using Two Standard Deviations as Limits . . 272
Figure 102: Parallelepiped Corners Compared to the Signature Ellipse . . . . . . . . . . . 274
Figure 103: Feature Space Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Figure 104: Minimum Spectral Distance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Figure 105: Knowledge Engineer Editing Window . . . . . . . . . . . . . . . . . . . . . . . . . 282
Figure 106: Example of a Decision Tree Branch . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Figure 107: Split Rule Decision Tree Branch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Figure 108: Knowledge Classifier Classes of Interest . . . . . . . . . . . . . . . . . . . . . . . 284
Figure 109: Histogram of a Distance Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Figure 110: Interactive Thresholding Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Figure 111: Exposure Stations Along a Flight Path . . . . . . . . . . . . . . . . . . . . . . . . 296
Figure 112: A Regular Rectangular Block of Aerial Photos . . . . . . . . . . . . . . . . . . . 297
Figure 113: Pixel Coordinates and Image Coordinates . . . . . . . . . . . . . . . . . . . . . . 300
Figure 114: Image Space and Ground Space Coordinate System . . . . . . . . . . . . . . . 301
Figure 115: Terrestrial Photography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Figure 116: Internal Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Figure 117: Pixel Coordinate System vs. Image Space Coordinate System . . . . . . . . 305
Figure 118: Radial vs. Tangential Lens Distortion . . . . . . . . . . . . . . . . . . . . . . . . . 306
Figure 119: Elements of Exterior Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Figure 120: Space Forward Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Figure 121: Photogrammetric Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Figure 122: GCP Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Figure 123: GCPs in a Block of Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Figure 124: Point Distribution for Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Figure 125: Tie Points in a Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Figure 126: Image Pyramid for Matching at Coarse to Full Resolution . . . . . . . . . . . . 327
Figure 127: Perspective Centers of SPOT Scan Lines . . . . . . . . . . . . . . . . . . . . . . . 328
Figure 128: Image Coordinates in a Satellite Scene. . . . . . . . . . . . . . . . . . . . . . . . 329
Figure 129: Interior Orientation of a SPOT Scene . . . . . . . . . . . . . . . . . . . . . . . . . 330
Figure 130: Inclination of a Satellite Stereo-Scene (View from North to South) . . . . . 332
Figure 131: Velocity Vector and Orientation Angle of a Single Scene . . . . . . . . . . . . 333
Figure 132: Ideal Point Distribution Over a Satellite Scene for Triangulation . . . . . . . 334
Figure 133: Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Figure 134: Digital Orthophoto - Finding Gray Values . . . . . . . . . . . . . . . . . . . . . 336
Figure 135: Doppler Cone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Figure 136: Sparse Mapping and Output Grids . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Figure 137: IMAGINE StereoSAR DEM Process Flow . . . . . . . . . . . . . . . . . . . . . . . . 347
Figure 138: SAR Image Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Figure 139: UL Corner of the Reference Image . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Figure 140: UL Corner of the Match Image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Figure 141: Image Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Figure 142: Electromagnetic Wave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Figure 143: Variation of Electric Field in Time . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Figure 144: Effect of Time and Distance on Energy . . . . . . . . . . . . . . . . . . . . . . . . 360
Figure 145: Geometric Model for an Interferometric SAR System. . . . . . . . . . . . . . . 361
Figure 146: Differential Collection Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Figure 147: Interferometric Phase Image without Filtering . . . . . . . . . . . . . . . . . . . 368
Figure 148: Interferometric Phase Image with Filtering . . . . . . . . . . . . . . . . . . . . . 369
Figure 149: Interferometric Phase Image without Phase Flattening . . . . . . . . . . . . . 370
Figure 150: Electromagnetic Wave Traveling through Space . . . . . . . . . . . . . . . . . . 370
Figure 151: One-dimensional Continuous vs. Wrapped Phase Function . . . . . . . . . . . 371
Figure 152: Sequence of Unwrapped Phase Images . . . . . . . . . . . . . . . . . . . . . . . . 372
Figure 153: Wrapped vs. Unwrapped Phase Images . . . . . . . . . . . . . . . . . . . . . . . . 373
Figure 154: Polynomial Curve vs. GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Figure 155: Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
Figure 156: Nonlinear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
Figure 157: Transformation Example - 1st-Order . . . . . . . . . . . . . . . . . . . . . . . . 388
Figure 158: Transformation Example - 2nd GCP Changed . . . . . . . . . . . . . . . . . . . . 388
Figure 159: Transformation Example - 2nd-Order . . . . . . . . . . . . . . . . . . . . . . . . 389
Figure 160: Transformation Example - 4th GCP Added . . . . . . . . . . . . . . . . . . . . . . 389
Figure 161: Transformation Example - 3rd-Order . . . . . . . . . . . . . . . . . . . . . . . . 390
Figure 162: Transformation Example - Effect of a 3rd-Order Transformation . . . . . . . . . . 390
Figure 163: Triangle Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Figure 164: Residuals and RMS Error Per Point . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Figure 165: RMS Error Tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Figure 166: Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Figure 167: Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Figure 168: Bilinear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Figure 169: Linear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Figure 170: Cubic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Figure 171: Regularly Spaced Terrain Data Points . . . . . . . . . . . . . . . . . . . . . . . . 412
Figure 172: 3 × 3 Window Calculates the Slope at Each Pixel . . . . . . . . . . . . . . . . 413
Figure 173: Slope Calculation Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
Figure 174: 3 × 3 Window Calculates the Aspect at Each Pixel . . . . . . . . . . . . . . . . 415
Figure 175: Aspect Calculation Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Figure 176: Shaded Relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Figure 177: Data Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Figure 178: Raster Attributes for lnlandc.img . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Figure 179: Vector Attributes CellArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Figure 180: Proximity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Figure 181: Contiguity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Figure 182: Using a Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
Figure 183: Sum Option of Neighborhood Analysis (Image Interpreter) . . . . . . . . . . . 437
Figure 184: Overlay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Figure 185: Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Figure 186: Graphical Model for Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . 443
Figure 187: Graphical Model Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
Figure 188: Modeling Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Figure 189: Graphical and Script Models For Tasseled Cap Transformation . . . . . . . . 451
Figure 190: Layer Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Figure 191: Sample Scale Bars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Figure 192: Sample Legend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Figure 193: Sample Neatline, Tick Marks, and Grid Lines . . . . . . . . . . . . . . . . . . . . 469
Figure 194: Sample Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Figure 195: Sample Sans Serif and Serif Typefaces with Various Styles Applied . . . . . 472
Figure 196: Good Lettering vs. Bad Lettering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Figure 197: Projection Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
Figure 198: Tangent and Secant Cones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Figure 199: Tangent and Secant Cylinders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
Figure 200: Ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
Figure 201: Layout for a Book Map and a Paneled Map . . . . . . . . . . . . . . . . . . . . . 496
Figure 202: Sample Map Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Figure 203: Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
Figure 204: Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
Figure 205: Measurement Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
Figure 206: Mean Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
Figure 207: Two Band Plot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
Figure 208: Two-band Scatterplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
Figure 209: Albers Conical Equal Area Projection. . . . . . . . . . . . . . . . . . . . . . . . . . 529
Figure 210: Polar Aspect of the Azimuthal Equidistant Projection . . . . . . . . . . . . . . . 532
Figure 211: Behrmann Cylindrical Equal-Area Projection . . . . . . . . . . . . . . . . . . . . . 534
Figure 212: Bonne Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Figure 213: Cassini Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Figure 214: Eckert I Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
Figure 215: Eckert II Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
Figure 216: Eckert III Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Figure 217: Eckert IV Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
Figure 218: Eckert V Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
Figure 219: Eckert V Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
Figure 220: Eckert VI Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Figure 221: Equidistant Conic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
Figure 222: Equirectangular Projection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
Figure 223: Geographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
Figure 224: Hammer Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
Figure 225: Interrupted Goode Homolosine Projection . . . . . . . . . . . . . . . . . . . . . . 567
Figure 226: Interrupted Mollweide Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
Figure 227: Lambert Azimuthal Equal Area Projection . . . . . . . . . . . . . . . . . . . . . . 572
Figure 228: Lambert Conformal Conic Projection. . . . . . . . . . . . . . . . . . . . . . . . . . 575
Figure 229: Loximuthal Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Figure 230: Mercator Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
Figure 231: Miller Cylindrical Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
Figure 232: Mollweide Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
Figure 233: Oblique Mercator Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
Figure 234: Orthographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
Figure 235: Plate Carrée Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
Figure 236: Polar Stereographic Projection and its Geometric Construction . . . . . . . . 598
Figure 237: Polyconic Projection of North America . . . . . . . . . . . . . . . . . . . . . . . . 600
Figure 238: Quartic Authalic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
Figure 239: Robinson Projection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
Figure 240: Sinusoidal Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
Figure 241: Space Oblique Mercator Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
Figure 242: Zones of the State Plane Coordinate System . . . . . . . . . . . . . . . . . . . . 612
Figure 243: Stereographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
Figure 244: Two Point Equidistant Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
Figure 245: Zones of the Universal Transverse Mercator Grid in the United States . . . 630
Figure 246: Van der Grinten I Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
Figure 247: Wagner IV Projection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Figure 248: Wagner VII Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
Figure 249: Winkel I Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
Figure 250: Winkel's Tripel Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
List of Tables
Table 1: Bandwidths Used in Remote Sensing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Table 2: Description of File Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Table 3: Raster Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Table 4: Annotation Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Table 5: Vector Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Table 6: IKONOS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Table 7: LISS-III Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Table 8: Panchromatic Band and Wavelength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Table 9: WiFS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Table 10: MSS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Table 11: TM Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Table 12: Landsat 7 Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Table 13: AVHRR Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Table 14: OrbView-3 Bands and Spectral Ranges . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Table 15: SeaWiFS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Table 16: SPOT XS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Table 17: SPOT4 Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Table 18: Commonly Used Bands for Radar Imaging . . . . . . . . . . . . . . . . . . . . . . . . 75
Table 19: Current Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Table 20: JERS-1 Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Table 21: RADARSAT Beam Mode Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Table 22: SIR-C/X-SAR Bands and Frequencies. . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Table 23: Daedalus TMS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Table 24: ARC System Chart Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Table 25: Legend Files for the ARC System Chart Types . . . . . . . . . . . . . . . . . . . . . . 91
Table 26: Common Raster Data Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Table 27: File Types Created by Screendump . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Table 28: The Most Common TIFF Format Elements. . . . . . . . . . . . . . . . . . . . . . . . 110
Table 29: Conversion of DXF Entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Table 30: Conversion of IGES Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Table 31: Colorcell Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Table 32: Commonly Used RGB Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Table 33: Overview of Zoom Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Table 34: Description of Modeling Functions Available for Enhancement . . . . . . . . . . 157
Table 35: Theoretical Coefficient of Variation Values . . . . . . . . . . . . . . . . . . . . . . . 228
Table 36: Parameters for Sigma Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Table 37: Pre-Classification Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Table 38: Training Sample Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Table 39: Feature Space Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Table 40: ISODATA Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Table 41: RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Table 42: Parallelepiped Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Table 43: Feature Space Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Table 44: Minimum Distance Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Table 45: Mahalanobis Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Table 46: Maximum Likelihood/Bayesian Decision Rule . . . . . . . . . . . . . . . . . . . . . 279
Table 47: Scanning Resolutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Table 48: SAR Parameters Required for Orthorectification . . . . . . . . . . . . . . . . . . . 339
Table 49: STD_LP_HD Correlator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Table 50: Number of GCPs per Order of Transformation . . . . . . . . . . . . . . . . . . . . . 391
Table 51: Nearest Neighbor Resampling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Table 52: Bilinear Interpolation Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Table 53: Cubic Convolution Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Table 54: Bicubic Spline Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Table 55: Example of a Recoded Land Cover Layer . . . . . . . . . . . . . . . . . . . . . . . . 438
Table 56: Model Maker Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Table 57: Attribute Information for parks.img. . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
Table 58: General Editing Operations and Supporting Feature Types. . . . . . . . . . . . . 454
Table 59: Comparison of Building and Cleaning Coverages . . . . . . . . . . . . . . . . . . . 455
Table 60: Common Map Scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Table 61: Pixels per Inch. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Table 62: Acres and Hectares per Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Table 63: Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
Table 64: Projection Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Table 65: Spheroids for use with ERDAS IMAGINE . . . . . . . . . . . . . . . . . . . . . . . . . 490
Table 66: Alaska Conformal Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Table 67: Albers Conical Equal Area Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Table 68: Azimuthal Equidistant Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Table 69: Behrmann Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
Table 70: Bonne Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Table 71: Cassini Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
Table 72: Eckert I Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
Table 73: Eckert II Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
Table 74: Eckert III Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Table 75: Eckert IV Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
Table 76: Eckert VI Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
Table 77: Equidistant Conic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
Table 78: Equirectangular (Plate Carrée) Summary . . . . . . . . . . . . . . . . . . . . . . . 555
Table 79: Gall Stereographic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Table 80: General Vertical Near-side Perspective Summary . . . . . . . . . . . . . . . . . . . 559
Table 81: Gnomonic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
Table 82: Hammer Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
Table 83: Interrupted Goode Homolosine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
Table 84: Lambert Azimuthal Equal Area Summary . . . . . . . . . . . . . . . . . . . . . . . . 570
Table 85: Lambert Conformal Conic Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
Table 86: Loximuthal Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
Table 87: Mercator Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
Table 88: Miller Cylindrical Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
Table 89: Modified Transverse Mercator Summary . . . . . . . . . . . . . . . . . . . . . . . . . 583
Table 90: Mollweide Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Table 91: New Zealand Map Grid Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
Table 92: Oblique Mercator (Hotine) Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
Table 93: Orthographic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Table 94: Polar Stereographic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
Table 95: Polyconic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Table 96: Quartic Authalic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
Table 97: Robinson Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Table 98: RSO Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
Table 99: Sinusoidal Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
Table 100: Space Oblique Mercator Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
Table 101: NAD27 State Plane Coordinate System for the United States . . . . . . . . . . 612
Table 102: NAD83 State Plane Coordinate System for the United States . . . . . . . . . . 616
Table 103: Stereographic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
Table 104: Transverse Mercator Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
Table 105: Two Point Equidistant Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
Table 106: UTM Zones, Central Meridians, and Longitude Ranges . . . . . . . . . . . . . . 630
Table 107: Van der Grinten I Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
Table 108: Wagner IV Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
Table 109: Wagner VII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
Table 110: Winkel I Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
Table 111: Bipolar Oblique Conic Conformal Summary . . . . . . . . . . . . . . . . . . . . . . 642
Table 112: Cassini-Soldner Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Table 113: Modified Polyconic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
Table 114: Modified Stereographic Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
Table 115: Mollweide Equal Area Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
Table 116: Robinson Pseudocylindrical Summary . . . . . . . . . . . . . . . . . . . . . . . . . 652
Table 117: Winkel's Tripel Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Preface
Introduction The purpose of the ERDAS Field Guide is to provide background
information on why one might use particular geographic information
system (GIS) and image processing functions and how the software
is manipulating the data, rather than what buttons to push to
actually perform those functions. This book is also aimed at a diverse
audience: from those who are new to geoprocessing to those savvy
users who have been in this industry for years. For the novice, the
ERDAS Field Guide provides a brief history of the field, an extensive
glossary of terms, and notes about applications for the different
processes described. For the experienced user, the ERDAS Field
Guide includes the formulas and algorithms that are used in the
code, so that he or she can see exactly how each operation works.
Although the ERDAS Field Guide is primarily a reference to basic
image processing and GIS concepts, it is geared toward ERDAS
IMAGINE users and the functions within ERDAS IMAGINE software,
such as GIS analysis, image processing, cartography and map
projections, graphics display hardware, statistics, and remote
sensing. However, in some cases, processes and functions are
described that may not be in the current version of the software, but
planned for a future release. There may also be functions described
that are not available on your system, due to the actual package that
you are using.
The enthusiasm with which the first four editions of the ERDAS Field
Guide were received has been extremely gratifying, both to the
authors and to Leica Geosystems GIS & Mapping, LLC as a whole.
First conceived as a helpful manual for users, the ERDAS Field Guide
is now being used as a textbook, lab manual, and training guide
throughout the world.
The ERDAS Field Guide will continue to expand and improve to keep
pace with the profession. Suggestions and ideas for future editions
are always welcome, and should be addressed to the Technical
Writing department of Engineering at Leica Geosystems, in Norcross,
Georgia.
Conventions Used in this Book The following paragraphs are used throughout the ERDAS Field
Guide and other ERDAS IMAGINE documentation.
These paragraphs contain strong warnings or important tips.
These paragraphs provide software-specific information.
These paragraphs lead you to other chapters in the ERDAS Field
Guide or other manuals for additional information.
These paragraphs give you additional information.
NOTE: Notes give additional instruction.
Raster Data
Introduction The ERDAS IMAGINE system incorporates the functions of both
image processing and GIS. These functions include importing,
viewing, altering, and analyzing raster and vector data sets.
This chapter is an introduction to raster data, including:
remote sensing
data storage formats
different types of resolution
radiometric correction
geocoded data
raster data in GIS
See Vector Data for more information on vector data.
Image Data In general terms, an image is a digital picture or representation of
an object. Remotely sensed image data are digital representations of
the Earth. Image data are stored in data files, also called image files,
on magnetic tapes, computer disks, or other media. The data consist
only of numbers. These representations form images when they are
displayed on a screen or are output to hardcopy.
Each number in an image file is a data file value. Data file values are
sometimes referred to as pixels. The term pixel is abbreviated from
picture element. A pixel is the smallest part of a picture (the area
being scanned) with a single value. The data file value is the
measured brightness value of the pixel at a specific wavelength.
Raster image data are laid out in a grid similar to the squares on a
checkerboard. Each cell of the grid is represented by a pixel, also
known as a grid cell.
In remotely sensed image data, each pixel represents an area of the
Earth at a specific location. The data file value assigned to that pixel
is the record of reflected radiation or emitted heat from the Earth's
surface at that location.
Data file values may also represent elevation, as in digital elevation
models (DEMs).
NOTE: DEMs are not remotely sensed image data, but are currently
being produced from stereo points in radar imagery.
The terms pixel and data file value are not interchangeable in
ERDAS IMAGINE. Pixel is used as a broad term with many
meanings, one of which is data file value. One pixel in a file may
consist of many data file values. When an image is displayed or
printed, other types of values are represented by a pixel.
See Image Display for more information on how images are
displayed.
Bands Image data may include several bands of information. Each band is
a set of data file values for a specific portion of the electromagnetic
spectrum of reflected light or emitted heat (red, green, blue, near-
infrared, infrared, thermal, etc.) or some other user-defined
information created by combining or enhancing the original bands,
or creating new bands from other sources.
ERDAS IMAGINE programs can handle an unlimited number of bands
of image data in a single file.
Figure 1: Pixels and Bands in a Raster Image
See Enhancement for more information on combining or
enhancing bands of data.
Bands vs. Layers
In ERDAS IMAGINE, bands of data are occasionally referred to as
layers. Once a band is imported into a GIS, it becomes a layer of
information which can be processed in various ways. Additional
layers can be created and added to the image file (.img extension)
in ERDAS IMAGINE, such as layers created by combining existing
layers. Read more about image files in ERDAS IMAGINE Format
(.img).
Layers vs. Viewer Layers
The Viewer permits several images to be layered, in which case each
image (including a multiband image) may be a layer.
Numeral Types
The range and the type of numbers used in a raster layer determine
how the layer is displayed and processed. For example, a layer of
elevation data with values ranging from -51.257 to 553.401 would
be treated differently from a layer using only two values to show land
and water.
The data file values in raster layers generally fall into these
categories:
Nominal data file values are simply categorized and named. The
actual value used for each category has no inherent meaning; it
is simply a class value. An example of a nominal raster layer
would be a thematic layer showing tree species.
Ordinal data are similar to nominal data, except that the file
values put the classes in a rank or order. For example, a layer
with classes numbered and named
1 - Good, 2 - Moderate, and 3 - Poor is an ordinal system.
Interval data file values have an order, but the intervals between
the values are also meaningful. Interval data measure some
characteristic, such as elevation or degrees Fahrenheit, which
does not necessarily have an absolute zero. (The difference
between two values in interval data is meaningful.)
Ratio data measure a condition that has a natural zero, such as
electromagnetic radiation (as in most remotely sensed data),
rainfall, or slope.
Nominal and ordinal data lend themselves to applications in which
categories, or themes, are used. Therefore, these layers are
sometimes called categorical or thematic.
Likewise, interval and ratio layers are more likely to measure a
condition, causing the file values to represent continuous gradations
across the layer. Such layers are called continuous.
Coordinate Systems The location of a pixel in a file or on a displayed or printed image is
expressed using a coordinate system. In two-dimensional coordinate
systems, locations are organized in a grid of columns and rows. Each
location on the grid is expressed as a pair of coordinates known as X
and Y. The X coordinate specifies the column of the grid, and the Y
coordinate specifies the row. Image data organized into such a grid
are known as raster data.
There are two basic coordinate systems used in ERDAS IMAGINE:
file coordinates: indicate the location of a pixel within the image (data file)
map coordinates: indicate the location of a pixel in a map
File Coordinates
File coordinates refer to the location of the pixels within the image
(data) file. File coordinates for the pixel in the upper left corner of
the image always begin at 0, 0.
Figure 2: Typical File Coordinates
Map Coordinates
Map coordinates may be expressed in one of a number of map
coordinate or projection systems. The type of map coordinates used
by a data file depends on the method used to create the file (remote
sensing, scanning an existing map, etc.). In ERDAS IMAGINE, a data
file can be converted from one map coordinate system to another.
For more information on map coordinates and projection
systems, see Cartography or Map Projections. See
Rectification for more information on changing the map
coordinate system of a data file.
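To make the relationship between the two systems concrete, the following Python sketch converts file coordinates to map coordinates for a simple north-up image. It assumes a known upper-left map coordinate and a square pixel size; the numeric values are hypothetical and are not taken from any particular data file.

# Minimal sketch: file coordinates (column, row) to map coordinates for a
# north-up image. The upper-left coordinate and pixel size are assumed values.
UL_X, UL_Y = 450000.0, 3750000.0   # map coordinates of the upper-left corner (hypothetical)
PIXEL_SIZE = 30.0                  # ground distance covered by one pixel, in map units (hypothetical)

def file_to_map(col, row):
    """Return the map coordinate of the center of the pixel at (col, row)."""
    x = UL_X + (col + 0.5) * PIXEL_SIZE
    y = UL_Y - (row + 0.5) * PIXEL_SIZE   # file rows increase downward; map Y increases upward
    return x, y

print(file_to_map(0, 0))   # the upper left pixel, file coordinates 0, 0
print(file_to_map(3, 1))   # a pixel three columns over and one row down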
Remote Sensing Remote sensing is the acquisition of data about an object or scene
by a sensor that is far from the object (Colwell, 1983). Aerial
photography, satellite imagery, and radar are all forms of remotely
sensed data.
Usually, remotely sensed data refer to data of the Earth collected
from sensors on satellites or aircraft. Most of the images used as
input to the ERDAS IMAGINE system are remotely sensed. However,
you are not limited to remotely sensed data.
This section is a brief introduction to remote sensing. There are
many books available for more detailed information, including
Colwell, 1983, Swain and Davis, 1978; and Slater, 1980 (see
Bibliography).
Electromagnetic Radiation Spectrum
The sensors on remote sensing platforms usually record
electromagnetic radiation. Electromagnetic radiation (EMR) is
energy transmitted through space in the form of electric and
magnetic waves (Star and Estes, 1990). Remote sensors are made
up of detectors that record specific wavelengths of the
electromagnetic spectrum. The electromagnetic spectrum is the
range of electromagnetic radiation extending from cosmic waves to
radio waves (Jensen, 1996).
All types of land cover (rock types, water bodies, etc.) absorb a
portion of the electromagnetic spectrum, giving a distinguishable
signature of electromagnetic radiation. Armed with the knowledge of
which wavelengths are absorbed by certain features and the
intensity of the reflectance, you can analyze a remotely sensed
image and make fairly accurate assumptions about the scene. Figure
3 illustrates the electromagnetic spectrum (Suits, 1983; Star and
Estes, 1990).
Figure 3: Electromagnetic Spectrum
SWIR and LWIR
The near-infrared and middle-infrared regions of the
electromagnetic spectrum are sometimes referred to as the short
wave infrared region (SWIR). This is to distinguish this area from the
thermal or far infrared region, which is often referred to as the long
wave infrared region (LWIR). The SWIR is characterized by reflected
radiation whereas the LWIR is characterized by emitted radiation.
Absorption / Reflection
Spectra
When radiation interacts with matter, some wavelengths are
absorbed and others are reflected. To enhance features in image
data, it is necessary to understand how vegetation, soils, water, and
other land covers reflect and absorb radiation. The study of the
absorption and reflection of EMR waves is called spectroscopy.
(Figure 3 lays out the spectrum in micrometers (µm, one millionth of a meter): ultraviolet; the visible region (blue 0.4 to 0.5, green 0.5 to 0.6, red 0.6 to 0.7); near-infrared (0.7 to 2.0); middle-infrared (2.0 to 5.0); far-infrared (8.0 to 15.0); and radar wavelengths, with the reflected (SWIR) and thermal (LWIR) regions marked.)
Spectroscopy
Most commercial sensors, with the exception of imaging radar
sensors, are passive solar imaging sensors. Passive solar imaging
sensors can only receive radiation waves; they cannot transmit
radiation. (Imaging radar sensors are active sensors that emit a
burst of microwave radiation and receive the backscattered
radiation.)
The use of passive solar imaging sensors to characterize or identify
a material of interest is based on the principles of spectroscopy.
Therefore, to fully utilize a visible/infrared (VIS/IR) multispectral
data set and properly apply enhancement algorithms, it is necessary
to understand these basic principles. Spectroscopy reveals the:
absorption spectra: the EMR wavelengths that are absorbed by specific materials of interest
reflection spectra: the EMR wavelengths that are reflected by specific materials of interest
Absorption Spectra
Absorption is based on the molecular bonds in the (surface) material.
Which wavelengths are absorbed depends upon the chemical
composition and crystalline structure of the material. For pure
compounds, these absorption bands are so specific that the SWIR
region is often called an infrared fingerprint.
Atmospheric Absorption
In remote sensing, the sun is the radiation source for passive
sensors. However, the sun does not emit the same amount of
radiation at all wavelengths. Figure 4 shows the solar irradiation
curve, which is far from linear.
Figure 4: Sun Illumination Spectral Irradiance at the Earth's Surface
Source: Modified from Chahine et al, 1983
Solar radiation must travel through the Earth's atmosphere before it
reaches the Earth's surface. As it travels through the atmosphere,
radiation is affected by four phenomena (Elachi, 1987):
absorption: the amount of radiation absorbed by the atmosphere
scattering: the amount of radiation scattered away from the field of view by the atmosphere
scattering source: divergent solar irradiation scattered into the field of view
emission source: radiation re-emitted after absorption
(Figure 4 plots spectral irradiance (W m⁻² µm⁻¹) against wavelength (µm) across the UV, visible, and infrared regions, comparing the solar irradiation curve outside the atmosphere with the curve at sea level; the peaks show absorption by H2O, CO2, and O3.)
Figure 5: Factors Affecting Radiation
Source: Elachi, 1987
Absorption is not a linear phenomenon; it is logarithmic with
concentration (Flaschka, 1969). In addition, the concentration of
atmospheric gases, especially water vapor, is variable. The other
major gases of importance are carbon dioxide (CO2) and ozone (O3),
which can vary considerably around urban areas. Thus, the extent of
atmospheric absorbance varies with humidity, elevation, proximity
to (or downwind of) urban smog, and other factors.
Scattering is modeled as Rayleigh scattering with a commonly used
algorithm that accounts for the scattering of short wavelength
energy by the gas molecules in the atmosphere (Pratt, 1991), for
example, ozone. Scattering is variable with both wavelength and
atmospheric aerosols. Aerosols differ regionally (ocean vs. desert)
and daily (for example, Los Angeles smog has different
concentrations daily).
Scattering source and emission source may account for only 5% of
the variance. These factors are minor, but they must be considered
for accurate calculation. After interaction with the target material,
the reflected radiation must travel back through the atmosphere and
be subjected to these phenomena a second time to arrive at the
satellite.
The mathematical models that attempt to quantify the total
atmospheric effect on the solar illumination are called radiative
transfer equations. Some of the most commonly used are Lowtran
(Kneizys et al, 1988) and Modtran (Berk et al, 1989).
See Enhancement for more information on atmospheric
modeling.
Reflectance Spectra
After rigorously defining the incident radiation (solar irradiation at
target), it is possible to study the interaction of the radiation with the
target material. When an electromagnetic wave (solar illumination in
this case) strikes a target surface, three interactions are possible
(Elachi, 1987):
reflection
transmission
scattering
It is the reflected radiation, generally modeled as bidirectional
reflectance (Clark and Roush, 1984), that is measured by the remote
sensor.
Remotely sensed data are made up of reflectance values. The
resulting reflectance values translate into discrete digital numbers
(or values) recorded by the sensing device. These gray scale values
fit within a certain bit range (such as 0 to 255, which is 8-bit data)
depending on the characteristics of the sensor.
Each satellite sensor detector is designed to record a specific portion
of the electromagnetic spectrum. For example, Landsat Thematic
Mapper (TM) band 1 records the 0.45 to 0.52 µm portion of the
spectrum and is designed for water body penetration, making it
useful for coastal water mapping. It is also useful for soil/vegetation
discriminations, forest type mapping, and cultural features
identification (Lillesand and Kiefer, 1987).
The characteristics of each sensor provide the first level of
constraints on how to approach the task of enhancing specific
features, such as vegetation or urban areas. Therefore, when
choosing an enhancement technique, one should pay close attention
to the characteristics of the land cover types within the constraints
imposed by the individual sensors.
The use of VIS/IR imagery for target discrimination, whether the
target is mineral, vegetation, man-made, or even the atmosphere
itself, is based on the reflectance spectrum of the material of interest
(see Figure 6). Every material has a characteristic spectrum based
on the chemical composition of the material. When sunlight (the
illumination source for VIS/IR imagery) strikes a target, certain
wavelengths are absorbed by the chemical bonds; the rest are
reflected back to the sensor. It is, in fact, the wavelengths that are
not returned to the sensor that provide information about the
imaged area.
Specific wavelengths are also absorbed by gases in the atmosphere
(H2O vapor, CO2, O2, etc.). If the atmosphere absorbs a large
percentage of the radiation, it becomes difficult or impossible to use
that particular wavelength(s) to study the Earth. For the present
Landsat and Système Pour l'Observation de la Terre (SPOT) sensors,
only the water vapor bands are considered strong enough to exclude
the use of their spectral absorption region. Figure 6 shows how
Landsat TM bands 5 and 7 were carefully placed to avoid these
regions. Absorption by other atmospheric gases was not extensive
enough to eliminate the use of the spectral region for present day
broad band sensors.
Figure 6: Reflectance Spectra
(Figure 6 plots reflectance (%) against wavelength (µm) for green vegetation, silt loam, and kaolinite, with the Landsat MSS and TM band positions, the kaolinite absorption feature, and the atmospheric absorption bands marked.)
Source: Modified from Fraser, 1986; Crist et al, 1986; Sabins, 1987
NOTE: This chart is for comparison purposes only. It is not meant to
show actual values. The spectra are offset to better display the lines.
An inspection of the spectra reveals the theoretical basis of some of
the indices in the ERDAS IMAGINE Image Interpreter. Consider the
vegetation index TM4/TM3. It is readily apparent that for vegetation
this value could be very large. For soils, the value could be much
smaller, and for clay minerals, the value could be near zero.
Conversely, when the clay ratio TM5/TM7 is considered, the opposite
applies.
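As a hedged illustration of such band ratios (a sketch only, not an excerpt from the Image Interpreter), the following Python/NumPy example computes the vegetation index TM4/TM3 and the clay ratio TM5/TM7 on small hypothetical arrays standing in for Landsat TM bands.

import numpy as np

# Hypothetical data file values for four Landsat TM bands over a 2 x 2 area.
tm3 = np.array([[40.0, 35.0], [60.0, 20.0]])    # red
tm4 = np.array([[120.0, 90.0], [65.0, 22.0]])   # near-infrared
tm5 = np.array([[80.0, 70.0], [55.0, 50.0]])    # SWIR
tm7 = np.array([[30.0, 28.0], [50.0, 48.0]])    # SWIR, near the clay absorption feature

eps = 1e-6                        # guard against division by zero
veg_ratio = tm4 / (tm3 + eps)     # large over green vegetation, smaller over soil or clay
clay_ratio = tm5 / (tm7 + eps)    # relatively high where clay minerals absorb in band 7

print(veg_ratio)
print(clay_ratio)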
Hyperspectral Data
As remote sensing moves toward the use of more and narrower
bands (for example, AVIRIS with 224 bands each only 10 nm wide),
absorption by specific atmospheric gases must be considered. These
multiband sensors are called hyperspectral sensors. As more and
more of the incident radiation is absorbed by the atmosphere, the
digital number (DN) values of that band get lower, eventually
becoming useless, unless one is studying the atmosphere. Someone
wanting to measure the atmospheric content of a specific gas could
utilize the bands of specific absorption.
NOTE: Hyperspectral bands are generally measured in nanometers
(nm).
Figure 6 shows the spectral bandwidths of the channels for the
Landsat sensors plotted above the absorption spectra of some
common natural materials (kaolin clay, silty loam soil, and green
vegetation). Note that while the spectra are continuous, the Landsat
channels are segmented or discontinuous. We can still use the
spectra in interpreting the Landsat data. For example, a Normalized
Difference Vegetation Index (NDVI) ratio for the three would be very
different and, therefore, could be used to discriminate between the
three materials. Similarly, the ratio TM5/TM7 is commonly used to
measure the concentration of clay minerals. Evaluation of the
spectra shows why.
Figure 7 shows detail of the absorption spectra of three clay
minerals. Because of the wide bandpass (2080 to 2350 nm) of TM
band 7, it is not possible to discern between these three minerals
with the Landsat sensor. As mentioned, the AVIRIS hyperspectral
sensor has a large number of approximately 10 nm wide bands. With
the proper selection of band ratios, mineral identification becomes
possible. With this data set, it would be possible to discriminate
between these three clay minerals, again using band ratios. For
example, a color composite image prepared from RGB =
2160nm/2190nm, 2220nm/2250nm, 2350nm/2488nm could
produce a color-coded clay mineral image-map.
The commercial airborne multispectral scanners are used in a similar
fashion. The Airborne Imaging Spectrometer from the Geophysical &
Environmental Research Corp. (GER) has 79 bands in the UV, visible,
SWIR, and thermal-infrared regions. The Airborne Multispectral
Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the visible,
SWIR, and thermal-infrared regions. To properly utilize these
hyperspectral sensors, you must understand the phenomenon
involved and have some idea of the target materials being sought.
Figure 7: Laboratory Spectra of Clay Minerals in the Infrared
Region
Source: Modified from Sabins, 1987
NOTE: Spectra are offset vertically for clarity.
The characteristics of Landsat, AVIRIS, and other data types are
discussed in Raster and Vector Data Sources. See
Enhancement for more information on the NDVI ratio.
(Figure 7 plots reflectance (%) against wavelength (nm) for kaolinite, montmorillonite, and illite; the 2080 to 2350 nm width of Landsat TM band 7 is marked.)
Imaging Radar Data
Radar remote sensors can be broken into two broad categories:
passive and active. The passive sensors record the very low
intensity, microwave radiation naturally emitted by the Earth.
Because of the very low intensity, these images have low spatial
resolution (i.e., large pixel size).
It is the active sensors, termed imaging radar, that are introducing
a new generation of satellite imagery to remote sensing. To produce
an image, these satellites emit a directed beam of microwave energy
at the target, and then collect the backscattered (reflected) radiation
from the target scene. Because they must emit a powerful burst of
energy, these satellites require large solar collectors and storage
batteries. For this reason, they cannot operate continuously; some
satellites are limited to 10 minutes of operation per hour.
The microwave energy emitted by an active radar sensor is coherent
and defined by a narrow bandwidth. The following table summarizes
the bandwidths used in remote sensing.
A key element of a radar sensor is the antenna. For a given position
in space, the resolution of the resultant image is a function of the
antenna size. This is termed a real-aperture radar (RAR). At some
point, it becomes impossible to make a large enough antenna to
create the desired spatial resolution. To get around this problem,
processing techniques have been developed which combine the
signals received by the sensor as it travels over the target. Thus, the
antenna is perceived to be as long as the sensor path during
backscatter reception. This is termed a synthetic aperture and the
sensor a synthetic aperture radar (SAR).
Table 1: Bandwidths Used in Remote Sensing

Band Designation*      Wavelength (λ), cm    Frequency (ν), GHz (10⁹ cycles sec⁻¹)
Ka (0.86 cm)           0.8 to 1.1            40.0 to 26.5
K                      1.1 to 1.7            26.5 to 18.0
Ku                     1.7 to 2.4            18.0 to 12.5
X (3.0 cm, 3.2 cm)     2.4 to 3.8            12.5 to 8.0
C                      3.8 to 7.5            8.0 to 4.0
S                      7.5 to 15.0           4.0 to 2.0
L (23.5 cm, 25.0 cm)   15.0 to 30.0          2.0 to 1.0
P                      30.0 to 100.0         1.0 to 0.3

*Wavelengths commonly used in imaging radars are shown in parentheses.
The received signal is termed a phase history or echo hologram. It
contains a time history of the radar signal over all the targets in the
scene, and is itself a low resolution RAR image. In order to produce
a high resolution image, this phase history is processed through a
hardware/software system called an SAR processor. The SAR
processor software requires operator input parameters, such as
information about the sensor flight path and the radar sensor's
characteristics, to process the raw signal data into an image. These
input parameters depend on the desired result or intended
application of the output imagery.
One of the most valuable advantages of imaging radar is that it
creates images from its own energy source and therefore is not
dependent on sunlight. Thus one can record uniform imagery any
time of the day or night. In addition, the microwave frequencies at
which imaging radars operate are largely unaffected by the
atmosphere. This allows image collection through cloud cover or rain
storms. However, the backscattered signal can be affected. Radar
images collected during heavy rainfall are often seriously
attenuated, which decreases the signal-to-noise ratio (SNR). In
addition, the atmosphere does cause perturbations in the signal
phase, which decreases resolution of output products, such as the
SAR image or generated DEMs.
Resolution Resolution is a broad term commonly used to describe:
the number of pixels you can display on a display device, or
the area on the ground that a pixel represents in an image file.
These broad definitions are inadequate when describing remotely
sensed data. Four distinct types of resolution must be considered:
spectral: the specific wavelength intervals that a sensor can record
spatial: the area on the ground represented by each pixel
radiometric: the number of possible data file values in each band (indicated by the number of bits into which the recorded energy is divided)
temporal: how often a sensor obtains imagery of a particular area
These four domains contain separate information that can be
extracted from the raw data.
Spectral Resolution Spectral resolution refers to the specific wavelength intervals in the
electromagnetic spectrum that a sensor can record (Simonett et al,
1983). For example, band 1 of the Landsat TM sensor records energy
between 0.45 and 0.52 µm in the visible part of the spectrum.
Wide intervals in the electromagnetic spectrum are referred to as
coarse spectral resolution, and narrow intervals are referred to as
fine spectral resolution. For example, the SPOT panchromatic sensor
is considered to have coarse spectral resolution because it records
EMR between 0.51 and 0.73 µm. On the other hand, band 3 of the
Landsat TM sensor has fine spectral resolution because it records
EMR between 0.63 and 0.69 µm (Jensen, 1996).
NOTE: The spectral resolution does not indicate how many levels the
signal is broken into.
Spatial Resolution Spatial resolution is a measure of the smallest object that can be
resolved by the sensor, or the area on the ground represented by
each pixel (Simonett et al, 1983). The finer the resolution, the lower
the number. For instance, a spatial resolution of 79 meters is coarser
than a spatial resolution of 10 meters.
Scale
The terms large-scale imagery and small-scale imagery often refer
to spatial resolution. Scale is the ratio of distance on a map as
related to the true distance on the ground (Star and Estes, 1990).
Large-scale in remote sensing refers to imagery in which each pixel
represents a small area on the ground, such as SPOT data, with a
spatial resolution of 10 m or 20 m. Small scale refers to imagery in
which each pixel represents a large area on the ground, such as
Advanced Very High Resolution Radiometer (AVHRR) data, with a
spatial resolution of 1.1 km.
This terminology is derived from the fraction used to represent the
scale of the map, such as 1:50,000. Small-scale imagery is
represented by a small fraction (one over a very large number).
Large-scale imagery is represented by a larger fraction (one over a
smaller number). Generally, anything smaller than 1:250,000 is
considered small-scale imagery.
NOTE: Scale and spatial resolution are not always the same thing.
An image always has the same spatial resolution, but it can be
presented at different scales (Simonett et al, 1983).
Instantaneous Field of View
Spatial resolution is also described as the instantaneous field of view
(IFOV) of the sensor, although the IFOV is not always the same as
the area represented by each pixel. The IFOV is a measure of the
area viewed by a single detector in a given instant in time (Star and
Estes, 1990). For example, Landsat MSS data have an IFOV of 79 ×
79 meters, but there is an overlap of 11.5 meters in each pass of the
scanner, so the actual area represented by each pixel is 56.5 × 79
meters (usually rounded to 57 × 79 meters).
Even though the IFOV is not the same as the spatial resolution, it is
important to know the number of pixels into which the total field of
view for the image is broken. Objects smaller than the stated pixel
size may still be detectable in the image if they contrast with the
background, such as roads, drainage patterns, etc.
On the other hand, objects the same size as the stated pixel size (or
larger) may not be detectable if there are brighter or more dominant
objects nearby. In Figure 8, a house sits in the middle of four pixels.
If the house has a reflectance similar to its surroundings, the data
file values for each of these pixels reflect the area around the house,
not the house itself, since the house does not dominate any one of
the four pixels. However, if the house has a significantly different
reflectance than its surroundings, it may still be detectable.
Figure 8: IFOV
Radiometric Resolution Radiometric resolution refers to the dynamic range, or number of
possible data file values in each band. This is referred to by the
number of bits into which the recorded energy is divided.
For instance, in 8-bit data, the data file values range from 0 to 255
for each pixel, but in 7-bit data, the data file values for each pixel
range from 0 to 127.
In Figure 9, 8-bit and 7-bit data are illustrated. The sensor measures
the EMR in its range. The total intensity of the energy from 0 to the
maximum amount the sensor measures is broken down into 256
brightness values for 8-bit data, and 128 brightness values for 7-bit
data.
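The relationship between bit depth and the number of possible data file values, and the effect of compressing 8-bit values into a 7-bit range, can be sketched in a few lines of Python; the sample values below are arbitrary.

import numpy as np

# The number of possible data file values is 2 raised to the bit depth.
for bits in (4, 7, 8, 10, 16):
    print(bits, "bits ->", 2 ** bits, "possible values")

# Rescaling arbitrary 8-bit values (0 to 255) into a 7-bit range (0 to 127).
dn_8bit = np.array([0, 63, 127, 128, 200, 255], dtype=np.uint8)
dn_7bit = (dn_8bit.astype(np.uint16) * 127 // 255).astype(np.uint8)
print(dn_7bit)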
(Figure 8 shows a house straddling four 20 m × 20 m pixels.)
Figure 9: Brightness Values
Temporal Resolution Temporal resolution refers to how often a sensor obtains imagery of
a particular area. For example, the Landsat satellite can view the
same area of the globe once every 16 days. SPOT, on the other hand,
can revisit the same area every three days.
NOTE: Temporal resolution is an important factor to consider in
change detection studies.
Figure 10 illustrates all four types of resolution:
Figure 10: Landsat TM, Band 2 (Four Types of Resolution)
Source: EOSAT
Data Correction There are several types of errors that can be manifested in remotely
sensed data. Among these are line dropout and striping. These
errors can be corrected to an extent in GIS by radiometric and
geometric correction functions.
NOTE: Radiometric errors are usually already corrected in data from
EOSAT or SPOT.
(Figure 9 shows the 0 to maximum-intensity range divided into 256 brightness values (0 to 255) for 8-bit data and 128 brightness values (0 to 127) for 7-bit data.)
(Figure 10 annotates a Landsat TM band 2 image with its four resolutions: spatial, 1 pixel = 79 m × 79 m; temporal, the same area viewed every 16 days; radiometric, 8-bit (0 to 255); spectral, 0.52 to 0.60 µm.)
See Enhancement for more information on radiometric and
geometric correction.
Line Dropout Line dropout occurs when a detector either completely fails to
function or becomes temporarily saturated during a scan (like the
effect of a camera flash on a human retina). The result is a line or
partial line of data with higher data file values, creating a horizontal
streak until the detector(s) recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line
of estimated data file values. The estimated line is based on the lines
above and below it.
You can correct line dropout using the 5 × 5 Median Filter from
the Radar Speckle Suppression function. The Convolution and
Focal Analysis functions in the ERDAS IMAGINE Image
Interpreter also correct for line dropout.
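A minimal Python/NumPy sketch of the replacement described above (not the ERDAS IMAGINE implementation) estimates the dropped line from the lines immediately above and below it; the band values and the bad-line index are hypothetical.

import numpy as np

# Hypothetical band with one saturated (dropped) scan line.
band = np.array([[10, 12, 11, 13],
                 [11, 13, 12, 14],
                 [255, 255, 255, 255],   # the dropped line
                 [12, 14, 13, 15]], dtype=np.float32)
bad_row = 2

# Replace the bad line with the average of the lines above and below it.
band[bad_row] = (band[bad_row - 1] + band[bad_row + 1]) / 2.0
print(band)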
Striping Striping or banding occurs if a detector goes out of adjustment; that
is, it provides readings consistently greater than or less than the
other detectors for the same band over the same ground cover.
Use ERDAS IMAGINE Image Interpreter or ERDAS IMAGINE
Spatial Modeler for implementing algorithms to eliminate
striping. The ERDAS IMAGINE Spatial Modeler editing
capabilities allow you to adapt the algorithms to best address
the data.
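The text above does not prescribe a particular destriping algorithm. One common approach, shown here only as an illustrative Python sketch under the assumption that the detectors record alternating lines, adjusts each detector's lines so that their mean and standard deviation match the statistics of the whole band.

import numpy as np

def destripe(band, n_detectors):
    """Match each detector's line statistics to the overall band statistics."""
    out = band.astype(np.float64).copy()
    target_mean, target_std = out.mean(), out.std()
    for d in range(n_detectors):
        lines = out[d::n_detectors]        # every n-th line belongs to detector d
        m, s = lines.mean(), lines.std()
        if s > 0:
            out[d::n_detectors] = (lines - m) / s * target_std + target_mean
    return out

# Hypothetical 6-detector band in which one detector reads consistently high.
rng = np.random.default_rng(0)
band = rng.normal(100, 10, (60, 80))
band[2::6] += 25
print(round(destripe(band, 6)[2::6].mean(), 1))   # biased detector pulled back toward the band mean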
Data Storage Image data can be stored on a variety of media (tapes, CD-ROMs,
or floppy diskettes, for example), but how the data are stored (e.g.,
structure) is more important than on what they are stored.
All computer data are in binary format. The basic unit of binary data
is a bit. A bit can have two possible values: 0 and 1, or off and on,
respectively.
depending on the number of bits used. The number of values that
can be expressed by a set of bits is 2 to the power of the number of
bits used.
A byte is 8 bits of data. Generally, file size and disk space are
referred to by number of bytes. For example, a PC may have 640
kilobytes (1,024 bytes = 1 kilobyte) of RAM (random access
memory), or a file may need 55,698 bytes of disk space. A megabyte
(Mb) is about one million bytes. A gigabyte (Gb) is about one billion
bytes.
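The arithmetic behind these statements is straightforward; the following short sketch simply restates the examples above in Python.

# A set of n bits can express 2 to the power of n values.
for n_bits in (1, 4, 8, 16):
    print(n_bits, "bits can express", 2 ** n_bits, "values")

kilobyte = 1024   # bytes
print(640 * kilobyte, "bytes in 640 kilobytes of RAM")
print(round(55698 / kilobyte, 1), "kilobytes for a 55,698-byte file")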
Storage Formats Image data can be arranged in several ways on a tape or other
media. The most common storage formats are:
BIL (band interleaved by line)
BSQ (band sequential)
BIP (band interleaved by pixel)
For a single band of data, all formats (BIL, BIP, and BSQ) are
identical, as long as the data are not blocked.
Blocked data are discussed under Storage Media.
BIL
In BIL (band interleaved by line) format, each record in the file
contains a scan line (row) of data for one band (Slater, 1980). All
bands of data for a given line are stored consecutively within the file
as shown in Figure 11.
Figure 11: Band Interleaved by Line (BIL)
NOTE: Although a header and trailer file are shown in this diagram,
not all BIL data contain header and trailer files.
BSQ
In BSQ (band sequential) format, each entire band is stored
consecutively in the same file (Slater, 1980). This format is
advantageous, in that:
one band can be read and viewed easily, and
multiple bands can be easily loaded in any order.
Figure 12: Band Sequential (BSQ)
Landsat TM data are stored in a type of BSQ format known as fast
format. Fast format data have the following characteristics:
Files are not split between tapes. If a band starts on the first
tape, it ends on the first tape.
An end-of-file (EOF) marker follows each band.
An end-of-volume marker marks the end of each volume (tape).
An end-of-volume marker consists of three end-of-file markers.
There is one header file per tape.
There are no header records preceding the image data.
Regular products (not geocoded) are normally unblocked.
Geocoded products are normally blocked (EOSAT).
ERDAS IMAGINE imports all of the header and image file information.
[Figure 12 shows a BSQ file as header file(s), then one image file per band (Band 1 through Band x), each containing Lines 1 through n of that band and followed by an end-of-file marker, then trailer file(s).]
See Geocoded Data for more information on geocoded data.
BIP
In BIP (band interleaved by pixel) format, the values for each band
are ordered within a given pixel. The pixels are arranged sequentially
on the tape (Slater, 1980). The sequence for BIP format is:
Pixel 1, Band 1
Pixel 1, Band 2
Pixel 1, Band 3
.
.
.
Pixel 2, Band 1
Pixel 2, Band 2
Pixel 2, Band 3
.
.
.
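The practical difference between the three interleavings is where a given sample lands in the file. The sketch below computes that position for an unblocked, headerless file with one byte per sample; the function name and the zero-based indexing are assumptions made for illustration.

    def sample_offset(row, col, band, n_rows, n_cols, n_bands, layout):
        # Byte offset of one sample in an unblocked, headerless file,
        # assuming one byte per sample and zero-based indices.
        if layout == "BIL":    # all bands of one line are stored together
            return (row * n_bands + band) * n_cols + col
        if layout == "BSQ":    # each band is stored as a complete image
            return (band * n_rows + row) * n_cols + col
        if layout == "BIP":    # all bands of one pixel are stored together
            return (row * n_cols + col) * n_bands + band
        raise ValueError("unknown layout: " + layout)

    # Sample at row 2, column 5, band 1 of a 100 x 100, 3-band image
    for layout in ("BIL", "BSQ", "BIP"):
        print(layout, sample_offset(2, 5, 1, 100, 100, 3, layout))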
Storage Media Today, most raster data are available on a variety of storage media
to meet the needs of users, depending on the system hardware and
devices available. When ordering data, it is sometimes possible to
select the type of media preferred. The most common forms of
storage media are discussed in the following section:
9-track tape
4 mm tape
8 mm tape
1/4" cartridge tape
CD-ROM/optical disk
Other types of storage media are:
floppy disk (3.5" or 5.25")
film, photograph, or paper
videotape
Tape
The data on a tape can be divided into logical records and physical
records. A record is the basic storage unit on a tape.
A logical record is a series of bytes that form a unit. For example,
all the data for one line of an image may form a logical record.
A physical record is a consecutive series of bytes on a magnetic
tape, followed by a gap, or blank space, on the tape.
Blocked Data
For reasons of efficiency, data can be blocked to fit more on a tape.
Blocked data are sequenced so that there are more logical records in
each physical record. The number of logical records in each physical
record is the blocking factor. For instance, a record may contain
28,000 bytes, but only 4000 columns due to a blocking factor of 7.
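Using the numbers from that example, the blocking factor simply relates the physical record length to the logical record (line) length:

    physical_record_bytes = 28000
    blocking_factor = 7
    logical_record_bytes = physical_record_bytes // blocking_factor
    print(logical_record_bytes)   # 4000 bytes: one line of 4000 one-byte columns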
Tape Contents
Tapes are available in a variety of sizes and storage capacities. To
obtain information about the data on a particular tape, read the tape
label or box, or read the header file. Often, there is limited
information on the outside of the tape. Therefore, it may be
necessary to read the header files on each tape for specific
information, such as:
number of tapes that hold the data set
number of columns (in pixels)
number of rows (in pixels)
data storage format: BIL, BSQ, or BIP
pixel depth: 4-bit, 8-bit, 10-bit, 12-bit, or 16-bit
number of bands
blocking factor
number of header files and header records
4 mm Tapes
The 4 mm tape is a relative newcomer in the world of GIS. This tape
is a mere 2" × .75" in size, but it can hold up to 2 Gb of data. This
petite cassette offers an obvious shipping and storage advantage
because of its size.
8 mm Tapes
The 8 mm tape offers the advantage of storing vast amounts of data.
Tapes are available in 5 and 10 Gb storage capacities (although
some tape drives cannot handle the 10 Gb size). The 8 mm tape is a
2.5" × 4" cassette, which makes it easy to ship and handle.
1/4" Cartridge Tapes
This tape format falls between the 8 mm and 9-track in physical size
and storage capacity. The tape is approximately 4" × 6" in size and
stores up to 150 Mb of data.
9-Track Tapes
A 9-track tape is an older format that was the standard for two
decades. It is a large circular tape approximately 10" in diameter. It
requires a 9-track tape drive as a peripheral device for retrieving
data. The size and storage capability make 9-track less convenient
than 8 mm or 1/4" tapes. However, 9-track tapes are still widely
used.
A single 9-track tape may be referred to as a volume. The complete
set of tapes that contains one image is referred to as a volume set.
The storage format of a 9-track tape in binary format is described by
the number of bits per inch, bpi, on the tape. The tapes most
commonly used have either 1600 or 6250 bpi. The number of bits
per inch on a tape is also referred to as the tape density. Depending
on the length of the tape, 9-tracks can store between 120 and 150 Mb of
data.
CD-ROM
Data such as ADRG and Digital Line Graphs (DLG) are most often
available on CD-ROM, although many types of data can be requested
in CD-ROM format. A CD-ROM is an optical read-only storage device
which can be read with a CD player. CD-ROMs offer the advantage
of storing large amounts of data in a small, compact device. Up to
644 Mb can be stored on a CD-ROM. Also, since this device is read-
only, it protects the data from accidentally being overwritten,
erased, or changed from its original integrity. This is the most stable
of the current media storage types and data stored on CD-ROM are
expected to last for decades without degradation.
Calculating Disk Space To calculate the amount of disk space a raster file requires on an
ERDAS IMAGINE system, use the following formula:

output file size = [(x × y × b) × n] × 1.4
Where:
y = rows
x = columns
b = number of bytes per pixel
n = number of bands
1.4 adds 30% to the file size for pyramid layers and 10% for
miscellaneous adjustments, such as histograms, lookup tables, etc.
NOTE: This output file size is approximate. See Pyramid Layers on
page 134 for more information.
For example, to load a 3-band, 16-bit file with 500 rows and 500
columns, about 2,100,000 bytes of disk space is needed:

[(500 × 500 × 2) × 3] × 1.4 = 2,100,000 bytes, or about 2.1 Mb
Bytes Per Pixel
The number of bytes per pixel is listed below:
4-bit data: .5
8-bit data: 1.0
16-bit data: 2.0
NOTE: On the PC, disk space is shown in bytes. On the workstation,
disk space is shown as kilobytes (1,024 bytes).
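A minimal sketch of this calculation, restating the formula with the bytes-per-pixel values listed above (an illustration of the arithmetic only, not the routine ERDAS IMAGINE itself runs; the function name is hypothetical):

    BYTES_PER_PIXEL = {4: 0.5, 8: 1.0, 16: 2.0}   # bits per pixel -> bytes per pixel

    def approximate_img_size(rows, cols, bit_depth, n_bands):
        # [(rows x columns x bytes per pixel) x number of bands] x 1.4
        return rows * cols * BYTES_PER_PIXEL[bit_depth] * n_bands * 1.4

    # 3-band, 16-bit file with 500 rows and 500 columns
    print(approximate_img_size(500, 500, 16, 3))   # 2100000.0 bytes, about 2.1 Mb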
ERDAS IMAGINE Format
(.img)
In ERDAS IMAGINE, file name extensions identify the file type. When
data are imported into ERDAS IMAGINE, they are converted to the
ERDAS IMAGINE file format and stored in image files. ERDAS
IMAGINE image files (.img) can contain two types of raster layers:
thematic
continuous
An image file can store a combination of thematic and continuous
layers, or just one type.
Figure 13: Image Files Store Raster Layers
ERDAS Version 7.5 Users
For Version 7.5 users, when importing a GIS file from Version 7.5, it
becomes an image file with one thematic raster layer. When
importing a LAN file, each band becomes a continuous raster layer
within an image file.
Thematic Raster Layer
Thematic data are raster layers that contain qualitative, categorical
information about an area. A thematic layer is contained within an
image file. Thematic layers lend themselves to applications in which
categories or themes are used. Thematic raster layers are used to
represent data measured on a nominal or ordinal scale, such as:
soils
land use
land cover
roads
hydrology
NOTE: Thematic raster layers are displayed as pseudo color layers.
Figure 14: Example of a Thematic Raster Layer
See Image Display for information on displaying thematic
raster layers.
Continuous Raster Layer
Continuous data are raster layers that contain quantitative
(measuring a characteristic on an interval or ratio scale) and related,
continuous values. Continuous raster layers can be multiband (e.g.,
Landsat TM data) or single band (e.g., SPOT panchromatic data).
The following types of data are examples of continuous raster layers:
Landsat
SPOT
digitized (scanned) aerial photograph
DEM
slope
temperature
NOTE: Continuous raster layers can be displayed as either a gray
scale raster layer or a true color raster layer.
Figure 15: Examples of Continuous Raster Layers
Tiled Data
Data in the .img format are tiled data. Tiled data are stored in tiles
that can be set to any size.
The default tile size for image files is 64 × 64 pixels.
Image File Contents
The image files contain the following additional information about the
data:
the data file values
statistics
lookup tables
map coordinates
map projection
This additional information can be viewed using the Image
Information function located on the Viewer's tool bar.
Statistics
In ERDAS IMAGINE, the file statistics are generated from the data
file values in the layer and incorporated into the image file. This
statistical information is used to create many program defaults, and
helps you make processing decisions.
Pyramid Layers
Sometimes a large image takes longer than normal to display in the
Viewer. The pyramid layer option enables you to display large
images faster. Pyramid layers are image layers which are
successively reduced by the power of 2 and resampled.
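Because each level is reduced by a power of 2, the level dimensions fall off quickly; the following sketch prints the sizes for a hypothetical 4,096 × 4,096 image (the stopping size and the exact reduction rule ERDAS IMAGINE applies may differ).

    rows, cols = 4096, 4096
    level = 0
    while rows >= 64 and cols >= 64:     # stop at an arbitrary small size
        print("level", level, ":", rows, "x", cols)
        rows, cols = rows // 2, cols // 2
        level += 1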
The Pyramid Layer option is available in the Image Information
function located on the Viewer's tool bar, and from the Import
function.
See Image Display for more information on pyramid layers.
See the On-Line Help for detailed information on ERDAS
IMAGINE file formats.
Image File
Organization
Data are easy to locate if the data files are well organized. Well
organized files also make data more accessible to anyone who uses
the system. Using consistent naming conventions and the ERDAS
IMAGINE Image Catalog helps keep image files well organized and
accessible.
Consistent Naming
Convention
Many processes create an output file, and every time a file is created,
it is necessary to assign a file name. The name that is used can either
cause confusion about the process that has taken place, or it can
clarify and give direction. For example, if the name of the output file
is image.img, it is difficult to determine the contents of the file. On
the other hand, if a standard nomenclature is developed in which the
file name refers to a process or contents of the file, it is possible to
determine the progress of a project and contents of a file by
examining the directory.
Develop a naming convention that is based on the contents of the
file. This helps everyone involved know what the file contains. For
example, in a project to create a map composition for Lake Lanier, a
directory for the files may look similar to the one below:
lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img
From this listing, one can make some educated guesses about the
contents of each file based on naming conventions used. For
example, lanierTM.img is probably a Landsat TM scene of Lake
Lanier. The file lanier.map is probably a map composition that has
map frames with lanierTM.img and lanierSPOT.img data in them. The
file lanierUTM.img was probably created when lanierTM.img was
rectified to a UTM map projection.
Keeping Track of Image
Files
Using a database to store information about images enables you to
track image files (.img) without having to know the name or location
of the file. The database can be queried for specific parameters (e.g.,
size, type, map projection) and the database returns a list of image
files that match the search criteria. This file information helps to
quickly determine which image(s) to use, where it is located, and its
ancillary data. An image database is especially helpful when there
are many image files and even many on-going projects. For
example, you could use the database to search for all of the image
files of Georgia that have a UTM map projection.
Use the ERDAS IMAGINE Image Catalog to track and store
information for image files (.img) that are imported and created
in ERDAS IMAGINE.
NOTE: All information in the Image Catalog database, except archive
information, is extracted from the image file header. Therefore, if
this information is modified in the Image Information utility, it is
necessary to recatalog the image in order to update the information
in the Image Catalog database.
ERDAS IMAGINE Image Catalog
The ERDAS IMAGINE Image Catalog database is designed to serve
as a library and information management system for image files
(.img) that are imported and created in ERDAS IMAGINE. The
information for the image files is displayed in the Image Catalog
CellArray. This CellArray enables you to view all of the ancillary
data for the image files in the database. When records are queried
based on specific criteria, the image files that match the criteria are
highlighted in the CellArray. It is also possible to graphically view the
coverage of the selected image files on a map in a canvas window.
When it is necessary to store some data on a tape, the ERDAS
IMAGINE Image Catalog database enables you to archive image files
to external devices. The Image Catalog CellArray shows which tape
the image file is stored on, and the file can be easily retrieved from
the tape device to a designated disk directory. The archived image
files are copies of the files on disk; nothing is removed from the disk.
Once the file is archived, it can be removed from the disk, if you like.
Geocoded Data Geocoding, also known as georeferencing, is the geographical
registration or coding of the pixels in an image. Geocoded data are
images that have been rectified to a particular map projection and
pixel size.
Raw, remotely-sensed image data are gathered by a sensor on a
platform, such as an aircraft or satellite. In this raw form, the image
data are not referenced to a map projection. Rectification is the
process of projecting the data onto a plane and making them
conform to a map projection system.
It is possible to geocode raw image data with the ERDAS IMAGINE
rectification tools. Geocoded data are also available from Space
Imaging EOSAT and SPOT.
See Map Projections for detailed information on the different
projections available. See Rectification for information on
geocoding raw imagery with ERDAS IMAGINE.
Using Image Data
in GIS
ERDAS IMAGINE provides many tools designed to extract the
necessary information from the images in a database. The following
chapters in this book describe many of these processes.
This section briefly describes some basic image file techniques that
may be useful for any application.
Subsetting and
Mosaicking
Within ERDAS IMAGINE, there are options available to make
additional image files from those acquired from EOSAT, SPOT, etc.
These options involve combining files, mosaicking, and subsetting.
ERDAS IMAGINE programs allow image data with an unlimited
number of bands, but the most common satellite data types,
Landsat and SPOT, have seven or fewer bands. Image files can be
created with more than seven bands.
It may be useful to combine data from two different dates into one
file. This is called multitemporal imagery. For example, a user may
want to combine Landsat TM from one date with TM data from a later
date, then perform a classification based on the combined data. This
is particularly useful for change detection studies.
You can also incorporate elevation data into an existing image file as
another band, or create new bands through various enhancement
techniques.
To combine two or more image files, each file must be
georeferenced to the same coordinate system, or to each other.
See Rectification for information on georeferencing images.
Subset
Subsetting refers to breaking out a portion of a large file into one or
more smaller files. Often, image files contain areas much larger than
a particular study area. In these cases, it is helpful to reduce the size
of the image file to include only the area of interest (AOI). This not
only eliminates the extraneous data in the file, but it speeds up
processing due to the smaller amount of data to process. This can be
important when dealing with multiband data.
The ERDAS IMAGINE Import option often lets you define a
subset area of an image to preview or import. You can also use
the Subset option from ERDAS IMAGINE Image Interpreter to
define a subset area.
Mosaic
On the other hand, the study area in which you are interested may
span several image files. In this case, it is necessary to combine the
images to create one large file. This is called mosaicking.
To create a mosaicked image, use the Mosaic Images option
from the Data Preparation menu.
Enhancement Image enhancement is the process of making an image more
interpretable for a particular application (Faust, 1989).
Enhancement can make important features of raw, remotely sensed
data and aerial photographs more interpretable to the human eye.
Enhancement techniques are often used instead of classification for
extracting useful information from images.
There are many enhancement techniques available. They range in
complexity from a simple contrast stretch, where the original data
file values are stretched to fit the range of the display device, to
principal components analysis, where the number of image file bands
can be reduced and new bands created to account for the most
variance in the data.
See Enhancement for more information on enhancement
techniques.
Multispectral
Classification
Image data are often used to create thematic files through
multispectral classification. This entails using spectral pattern
recognition to identify groups of pixels that represent a common
characteristic of the scene, such as soil type or vegetation.
See Classification for a detailed explanation of classification
procedures.
Editing Raster
Data
ERDAS IMAGINE provides raster editing tools for editing the data
values of thematic and continuous raster data. This is primarily a
correction mechanism that enables you to correct bad data values
which produce noise, such as spikes and holes in imagery. The raster
editing functions can be applied to the entire image or a user-
selected area of interest (AOI).
With raster editing, data values in thematic data can also be recoded
according to class. Recoding is a function that reassigns data values
to a region or to an entire class of pixels.
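Conceptually, a recode is a lookup that maps each original class value to a new one; the NumPy sketch below uses a hypothetical thematic layer and class mapping, and is not the ERDAS IMAGINE recode function itself.

    import numpy as np

    # Hypothetical thematic layer: 1 = forest, 2 = water, 3 = urban, 4 = bare soil
    thematic = np.array([[1, 2, 2],
                         [3, 4, 1],
                         [4, 4, 2]], dtype=np.uint8)

    # Recode: collapse classes 3 and 4 into a single class 3
    recode_table = {1: 1, 2: 2, 3: 3, 4: 3}
    recoded = np.vectorize(recode_table.get)(thematic).astype(np.uint8)
    print(recoded)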
See Geographic Information Systems for information about
recoding data. See Enhancement for information about
reducing data noise using spatial filtering.
The ERDAS IMAGINE raster editing functions allow the use of focal
and global spatial modeling functions for computing the values to
replace noisy pixels or areas in continuous or thematic data.
Focal operations are filters that calculate the replacement value
based on a window (3 × 3, 5 × 5, etc.), and replace the pixel of
interest with the replacement value. Therefore this function affects
one pixel at a time, and the number of surrounding pixels that
influence the value is determined by the size of the moving window.
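A focal operation can be pictured as a moving-window calculation; the sketch below applies a 3 × 3 median to a small, hypothetical array with SciPy, as a stand-in for the general idea rather than the ERDAS IMAGINE implementation.

    import numpy as np
    from scipy.ndimage import median_filter

    # Hypothetical continuous layer with one noisy spike at row 1, column 1
    data = np.array([[10., 11., 10., 12.],
                     [11., 95., 12., 11.],
                     [10., 12., 11., 10.],
                     [11., 10., 12., 11.]])

    # 3 x 3 focal median: each pixel is replaced by the median of its window
    print(median_filter(data, size=3))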
Global operations calculate the replacement value for an entire area
rather than affecting one pixel at a time. These functions, specifically
the Majority option, are more applicable to thematic data.
See the ERDAS IMAGINE On-Line Help for information about
using and selecting AOIs.
The raster editing tools are available in the Viewer.
Editing Continuous
(Athematic) Data
Editing DEMs
DEMs occasionally contain spurious pixels or bad data. These spikes,
holes, and other noise caused by automatic DEM extraction can be
corrected by editing the raster data values and replacing them with
meaningful values. This discussion of raster editing focuses on DEM
editing.
The ERDAS IMAGINE Raster Editing functionality was originally
designed to edit DEMs, but it can also be used with images of
other continuous data sources, such as radar, SPOT, Landsat,
and digitized photographs.
When editing continuous raster data, you can modify or replace
original pixel values with the following:
a constant value: enter a known constant value for areas such
as lakes.
the average of the buffering pixels: replace the original pixel
value with the average of the pixels in a specified buffer area
around the AOI. This is used where the constant values of the
AOI are not known, but the area is flat or homogeneous with little
variation (for example, a lake).
the original data value plus a constant value: add a negative
constant value to the original data values to compensate for the
height of trees and other vertical features in the DEM. This
technique is commonly used in forested areas.
spatial filtering: filter data values to eliminate noise such as
spikes or holes in the data.
interpolation techniques (discussed below).
Interpolation Techniques While the previously listed raster editing techniques are perfectly
suitable for some applications, the following interpolation techniques
provide the best methods for raster editing:
2-D polynomial: surface approximation
multisurface functions: with least squares prediction
distance weighting
Each pixel's data value is interpolated from the reference points in
the data file. These interpolation techniques are described below:
2-D Polynomial
This interpolation technique provides faster interpolation calculations
than distance weighting and multisurface functions. The following
equation is used:
V = a0 + a1x + a2y + a3x² + a4xy + a5y² + ...
Where:
V = data value (elevation value for DEM)
a = polynomial coefficients
x = x coordinate
y = y coordinate
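For illustration, the coefficients of such a surface can be estimated from reference points by least squares and then evaluated at any location. The sketch below uses NumPy and hypothetical reference elevations; it is not the solver used by the ERDAS IMAGINE raster editing tools.

    import numpy as np

    def fit_poly2d(x, y, v):
        # Least-squares fit of V = a0 + a1*x + a2*y + a3*x**2 + a4*x*y + a5*y**2
        A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
        coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
        return coeffs

    def eval_poly2d(c, x, y):
        return c[0] + c[1]*x + c[2]*y + c[3]*x**2 + c[4]*x*y + c[5]*y**2

    # Hypothetical reference elevations on a small grid
    x = np.array([0., 1., 2., 0., 1., 2., 0., 1., 2.])
    y = np.array([0., 0., 0., 1., 1., 1., 2., 2., 2.])
    v = np.array([10., 11., 13., 11., 12., 14., 13., 14., 16.])

    coeffs = fit_poly2d(x, y, v)
    print(eval_poly2d(coeffs, 1.5, 1.5))   # interpolated value at (1.5, 1.5)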
Multisurface Functions
The multisurface technique provides the most accurate results for
editing DEMs that have been created through automatic extraction.
The following equation is used:

V = Σ (Wi × Qi), summed over the reference points i
Where:
V = output data value (elevation value for DEM)
Wi = coefficients which are derived by the least squares method
Qi = distance-related kernels which are actually interpretable as continuous, single-value surfaces

Source: Wang, Z., 1990
Distance Weighting
The weighting function determines how the output data values are
interpolated from a set of reference data points. For each pixel, the
values of all reference points are weighted by a value corresponding
with the distance between each point and the pixel.
The weighting function used in ERDAS IMAGINE is:

W = (S/D - 1)²

Where:
S = normalization factor
D = distance between the output data point and the reference point
The value for any given pixel is calculated by taking the sum of
weighting factors for all reference points multiplied by the data
values of those points, and dividing by the sum of the weighting
factors:
V = [ Σ (Wi × Vi) ] / [ Σ Wi ], with both sums taken over the reference points i = 1 to n

Where:
V = output data value (elevation value for DEM)
i = the ith reference point
Wi = weighting factor of point i
Vi = data value of point i
n = number of reference points

Source: Wang, Z., 1990
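A sketch of this distance-weighting calculation on hypothetical reference points, assuming the normalization factor S is taken as the largest reference distance (ERDAS IMAGINE may choose S differently, and a real implementation would also guard against a zero distance):

    import numpy as np

    def distance_weighted_value(px, py, ref_x, ref_y, ref_v):
        # W = (S/D - 1)^2 for each reference point, V = sum(W * v) / sum(W)
        d = np.hypot(ref_x - px, ref_y - py)
        s = d.max()                  # assumed normalization factor
        w = (s / d - 1.0) ** 2       # reference points at distance S get zero weight
        return np.sum(w * ref_v) / np.sum(w)

    # Hypothetical reference elevations around the pixel at (5, 4)
    ref_x = np.array([0., 10., 0., 10.])
    ref_y = np.array([0., 0., 10., 10.])
    ref_v = np.array([100., 110., 120., 130.])
    print(distance_weighted_value(5., 4., ref_x, ref_y, ref_v))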
Vector Data
Introduction ERDAS IMAGINE is designed to integrate two data types, raster and
vector, into one system. While the previous chapter explored the
characteristics of raster data, this chapter is focused on vector data.
The vector data structure in ERDAS IMAGINE is based on the ArcInfo
data model (developed by ESRI, Inc.). This chapter describes vector
data, attribute information, and symbolization.
You do not need ArcInfo software or an ArcInfo license to use the
vector capabilities in ERDAS IMAGINE. Since the ArcInfo data
model is used in ERDAS IMAGINE, you can use ArcInfo
coverages directly without importing them.
See Geographic Information Systems for information on
editing vector layers and using vector data in a GIS.
Vector data consist of:
points
lines
polygons
Each is illustrated in Figure 16.
Figure 16: Vector Elements
Points A point is represented by a single x, y coordinate pair. Points can
represent the location of a geographic feature or a point that has no
area, such as a mountain peak. Label points are also used to identify
polygons (see Figure 17).
Lines A line (polyline) is a set of line segments and represents a linear
geographic feature, such as a river, road, or utility line. Lines can
also represent nongeographical boundaries, such as voting districts,
school zones, contour lines, etc.
Polygons A polygon is a closed line or closed set of lines defining a
homogeneous area, such as soil type, land use, or water body.
Polygons can also be used to represent nongeographical features,
such as wildlife habitats, state borders, commercial districts, etc.
Polygons also contain label points that identify the polygon. The label
point links the polygon to its attributes.
Vertex The points that define a line are vertices. A vertex is a point that
defines an element, such as the endpoint of a line segment or a
location in a polygon where the line segment defining the polygon
changes direction. The ending points of a line are called nodes. Each
line has two nodes: a from-node and a to-node. The from-node is the
first vertex in a line. The to-node is the last vertex in a line. Lines
join other lines only at nodes. A series of lines in which the from-
node of the first line joins the to-node of the last line is a polygon.
Figure 17: Vertices
In Figure 17, the line and the polygon are each defined by three
vertices.
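These relationships can be pictured with a small data structure in which a line is an ordered list of vertices whose first and last entries are its from-node and to-node. The sketch below is purely illustrative and does not reflect how ArcInfo coverages store this information.

    from dataclasses import dataclass
    from typing import List, Tuple

    Coord = Tuple[float, float]

    @dataclass
    class Line:
        vertices: List[Coord]

        @property
        def from_node(self) -> Coord:
            return self.vertices[0]    # first vertex of the line

        @property
        def to_node(self) -> Coord:
            return self.vertices[-1]   # last vertex of the line

    # A line defined by three vertices, as in Figure 17
    road = Line(vertices=[(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)])
    print(road.from_node, road.to_node)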
Coordinates Vector data are expressed by the coordinates of vertices. The
vertices that define each element are referenced with x, y (Cartesian)
coordinates. In some instances, those coordinates may be
inches [as in some computer-aided design (CAD) applications], but
often the coordinates are map coordinates, such as State Plane,
Universal Transverse Mercator (UTM), or Lambert Conformal Conic.
Vector data digitized from an ungeoreferenced image are expressed
in file coordinates.
Tics
Vector layers are referenced to coordinates or a map projection
system using tic files that contain geographic control points for the
layer. Every vector layer must have a tic file. Tics are not
topologically linked to other features in the layer and do not have
descriptive data associated with them.
Vector Layers Although it is possible to have points, lines, and polygons in a single
layer, a layer typically consists of one type of feature. It is possible
to have one vector layer for streams (lines) and another layer for
parcels (polygons). A vector layer is defined as a set of features
where each feature has a location (defined by coordinates and
topological pointers to other features) and, possibly attributes
(defined as a set of named items or variables) (ESRI 1989). Vector
layers contain both the vector features (points, lines, polygons) and
the attribute information.
Usually, vector layers are also divided by the type of information
they represent. This enables the user to isolate data into themes,
similar to the themes used in raster layers. Political districts and soil
types would probably be in separate layers, even though both are
represented with polygons. If the project requires that the
coincidence of features in two or more layers be studied, the user
can overlay them or create a new layer.
See Geographic Information Systems for more information
about analyzing vector layers.
Topology The spatial relationships between features in a vector layer are
defined using topology. In topological vector data, a mathematical
procedure is used to define connections between features, identify
adjacent polygons, and define a feature as a set of other features
(e.g., a polygon is made of connecting lines) (Environmental
Systems Research Institute, 1990).
Topology is not automatically created when a vector layer is created.
It must be added later using specific functions. Topology must also be
updated after a layer is edited.
Digitizing describes how topology is created for a new or edited
vector layer.
Vector Files As mentioned above, the ERDAS IMAGINE vector structure is based
on the ArcInfo data model used for ARC coverages. This
georelational data model is actually a set of files using the
computer's operating system for file management and input/output.
An ERDAS IMAGINE vector layer is stored in subdirectories on the
disk. Vector data are represented by a set of logical tables of
information, stored as files within the subdirectory. These files may
serve the following purposes:
define features
provide feature attributes
cross-reference feature definition files
provide descriptive information for the coverage as a whole
A workspace is a location that contains one or more vector layers.
Workspaces provide a convenient means for organizing layers into
related groups. They also provide a place for the storage of tabular
data not directly tied to a particular layer. Each workspace is
completely independent. It is possible to have an unlimited number
of workspaces and an unlimited number of vector layers in a
workspace. Table 2 summarizes the types of files that are used to
make up vector layers.
Figure 18 illustrates how a typical vector workspace is set up
(Environmental Systems Research Institute, 1992).
Figure 18: Workspace Structure
Table 2: Description of File Types

Feature Definition Files:
ARC - Line coordinates and topology
CNT - Polygon centroid coordinates
LAB - Label point coordinates and topology
TIC - Tic coordinates

Feature Attribute Files:
AAT - Line (arc) attribute table
PAT - Polygon or point attribute table

Feature Cross-Reference File:
PAL - Polygon/line/node cross-reference file

Layer Description Files:
BND - Coordinate extremes
LOG - Layer history file
PRJ - Coordinate definition file
TOL - Layer tolerance file
[Figure 18 shows an example workspace named georgia containing the INFO directory and the layer subdirectories demo, parcels, roads, streets, and testdata.]
Because vector layers are stored in directories rather than in
simple files, you MUST use the utilities provided in ERDAS
IMAGINE to copy and rename them. A utility is also provided to
update path names that are no longer correct due to the use of
regular system commands on vector layers.
See the ESRI documentation for more detailed information
about the different vector files.
Attribute
Information
Along with points, lines, and polygons, a vector layer can have a
wealth of associated descriptive, or attribute, information associated
with it. Attribute information is displayed in CellArrays. This is the
same information that is stored in the INFO database of ArcInfo.
Some attributes are automatically generated when the layer is
created. Custom fields can be added to each attribute table.
Attribute fields can contain numerical or character data.
The attributes for a roads layer may look similar to the example in
Figure 19. You can select features in the layer based on the attribute
information. Likewise, when a row is selected in the attribute
CellArray, that feature is highlighted in the Viewer.
Figure 19: Attribute CellArray
Using Imported Attribute Data
When external data types are imported into ERDAS IMAGINE, only
the required attribute information is imported into the attribute
tables (AAT and PAT files) of the new vector layer. The rest of the
attribute information is written to one of the following INFO files:
<layer name>.ACODE: arc attribute information
<layer name>.PCODE: polygon attribute information
<layer name>.XCODE: point attribute information
To utilize all of this attribute information, the INFO files can be
merged into the PAT and AAT files. Once this attribute information
has been merged, it can be viewed in CellArrays and edited as
desired. This new information can then be exported back to its
original format.
The complete path of the file must be specified when establishing an
INFO file name in a Viewer application, such as exporting attributes
or merging attributes, as shown in the following example:
/georgia/parcels/info!arc!parcels.pcode
Use the Attributes option in the Viewer to view and manipulate
vector attribute data, including merging and exporting. (The
Raster Attribute Editor is for raster attributes only and cannot be
used to edit vector attributes.)
See the ERDAS IMAGINE On-Line Help for more information
about using CellArrays.
Displaying Vector
Data
Vector data are displayed in Viewers, as are other data types in
ERDAS IMAGINE. You can display a single vector layer, overlay
several layers in one Viewer, or display a vector layer(s) over a
raster layer(s).
In layers that contain more than one feature (a combination of
points, lines, and polygons), you can select which features to
display. For example, if you are studying parcels, you may want to
display only the polygons in a layer that also contains street
centerlines (lines).
Color Schemes Vector data are usually assigned class values in the same manner as
the pixels in a thematic raster file. These class values correspond to
different colors on the display screen. As with a pseudo color image,
you can assign a color scheme for displaying the vector classes.
See Image Display for a thorough discussion of how images
are displayed.
Symbolization Vector layers can be displayed with symbolization, meaning that the
attributes can be used to determine how points, lines, and polygons
are rendered. Points, lines, polygons, and nodes are symbolized
using styles and symbols similar to annotation. For example, if a
point layer represents cities and towns, the appropriate symbol could
be used at each point based on the population of that area.
Field Guide Displaying Vector Data / 41
Points
Point symbolization options include symbol, size, and color. The
symbols available are the same symbols available for annotation.
Lines
Lines can be symbolized with varying line patterns, composition,
width, and color. The line styles available are the same as those
available for annotation.
Polygons
Polygons can be symbolized as lines or as filled polygons. Polygons
symbolized as lines can have varying line styles (see Lines). For
filled polygons, either a solid fill color or a repeated symbol can be
selected. When symbols are used, you select the symbol to use, the
symbol size, symbol color, background color, and the x- and y-
separation between symbols. Figure 20 illustrates a pattern fill.
Figure 20: Symbolization Example
See the ERDAS IMAGINE Tour Guides or On-Line Help for
information about selecting features and using CellArrays.
The vector layer reflects
the symbolization that is
defined in the Symbology dialog.
Vector Data
Sources
Vector data are created by:
tablet digitizing: maps, photographs, or other hardcopy data can
be digitized using a digitizing tablet
screen digitizing: create new vector layers by using the mouse
to digitize on the screen
using other software packages: many external vector data types
can be converted to ERDAS IMAGINE vector layers
converting raster layers: raster layers can be converted to
vector layers
Each of these options is discussed in a separate section.
Digitizing In the broadest sense, digitizing refers to any process that converts
nondigital data into numbers. However, in ERDAS IMAGINE, the
digitizing of vectors refers to the creation of vector data from
hardcopy materials or raster images that are traced using a digitizer
keypad on a digitizing tablet or a mouse on a displayed image.
Any image not already in digital format must be digitized before it
can be read by the computer and incorporated into the database.
Most Landsat, SPOT, or other satellite data are already in digital
format upon receipt, so it is not necessary to digitize them. However,
you may also have maps, photographs, or other nondigital data that
contain information you want to incorporate into the study. Or, you
may want to extract certain features from a digital image to include
in a vector layer. Tablet digitizing and screen digitizing enable you to
digitize certain features of a map or photograph, such as roads,
bodies of water, voting districts, and so forth.
Tablet Digitizing Tablet digitizing involves the use of a digitizing tablet to transfer
nondigital data such as maps or photographs to vector format. The
digitizing tablet contains an internal electronic grid that transmits
data to ERDAS IMAGINE on cue from a digitizer keypad operated by
you.
Figure 21: Digitizing Tablet
Digitizer Setup
The map or photograph to be digitized is secured on the tablet, and
a coordinate system is established with a setup procedure.
Digitizer Operation
The handheld digitizer keypad features a small window with a
crosshair and keypad buttons. Position the intersection of the
crosshair directly over the point to be digitized. Depending on the
type of equipment and the program being used, one of the input
buttons is pushed to tell the system which function to perform, such
as:
digitize a point (i.e., transmit map coordinate data),
connect a point to previous points,
assign a particular value to the point or polygon, or
measure the distance between points, etc.
Move the puck along the desired polygon boundaries or lines,
digitizing points at appropriate intervals (where lines curve or
change direction), until all the points are collected.
Newly created vector layers do not contain topological data. You
must create topology using the Build or Clean options. This is
discussed further in Geographic Information Systems.
Digitizing Modes
There are two modes used in digitizing:
point mode: one point is generated each time a keypad button
is pressed
stream mode: points are generated continuously at specified
intervals, while the puck is in proximity to the surface of the
digitizing tablet
You can create a new vector layer from the Viewer. Select the
Tablet Input function from the Viewer to use a digitizing tablet
to enter new information into that layer.
Measurement
The digitizing tablet can also be used to measure both linear and
areal distances on a map or photograph. The digitizer puck is used
to outline the areas to measure. You can measure:
lengths and angles by drawing a line
perimeters and areas using a polygonal, rectangular, or elliptical
shape
positions by specifying a particular point
Measurements can be saved to a file, printed, and copied. These
operations can also be performed with screen digitizing.
Select the Measure function from the Viewer or click on the Ruler
tool in the Viewer tool bar to enable tablet or screen
measurement.
Screen Digitizing In screen digitizing, vector data are drawn with a mouse in the
Viewer using the displayed image as a reference. These data are
then written to a vector layer.
Screen digitizing is used for the same purposes as tablet digitizing,
such as:
digitizing roads, bodies of water, political boundaries
selecting training samples for input to the classification programs
outlining an area of interest for any number of applications
Create a new vector layer from the Viewer.
Imported Vector
Data
Many types of vector data from other software packages can be
incorporated into the ERDAS IMAGINE system. These data formats
include:
ArcInfo GENERATE format files from ESRI, Inc.
ArcInfo INTERCHANGE files from ESRI, Inc.
ArcView Shapefiles from ESRI, Inc.
Digital Line Graphs (DLG) from U.S.G.S.
Digital Exchange Files (DXF) from Autodesk, Inc.
ETAK MapBase files from ETAK, Inc.
Initial Graphics Exchange Standard (IGES) files
Intergraph Design (DGN) files from Intergraph
Spatial Data Transfer Standard (SDTS) vector files
Topologically Integrated Geographic Encoding and Referencing
System
(TIGER) files from the U.S. Census Bureau
Vector Product Format (VPF) files from the Defense Mapping
Agency
See Raster and Vector Data Sources for more information on
these data.
Raster to Vector
Conversion
A raster layer can be converted to a vector layer and used as another
layer in a vector database. The following diagram illustrates a
thematic file in raster format that has been converted to vector
format.
Figure 22: Raster Format Converted to Vector Format (a raster soils layer converted to a vector polygon layer)
Most commonly, thematic raster data rather than continuous data
are converted to vector format, since converting continuous layers
may create more vector features than are practical or even
manageable.
Convert vector data to raster data, and vice versa, using
IMAGINE Vector.
Other Vector Data
Types
While this chapter has focused mainly on the ArcInfo coverage
format, there are other types of vector formats that you can use in
ERDAS IMAGINE. The two primary types are:
shapefile
Spatial Database Engine (SDE)
Shapefile Vector Format The shapefile vector format was designed by ESRI. You can use the
shapefile format (extension .shp) in ERDAS IMAGINE. You can:
display shapefiles
create shapefiles
edit shapefiles
attribute shapefiles
symbolize shapefiles
print shapefiles
The shapefile contains spatial data, such as boundary information.
SDE Like the shapefile format, the Spatial Database Engine (SDE) is a
vector format designed by ESRI. The data layers are stored in a
relational database management system (RDBMS) such as Oracle, or
SQL Server. Some of the features of SDE include:
storage of large, untiled spatial layers for fast retrieval
powerful and flexible query capabilities using the SQL where
clause
operation in a client-server environment
multiuser access to the data
ERDAS IMAGINE has the capability to act as a client to access SDE
vector layers stored in a database. To do this, it uses a wizard
interface to connect ERDAS IMAGINE to an SDE database and select
one of the vector layers. Additionally, it can join business tables with
the vector layer, and generate a subset of features by imposing
attribute constraints (e.g., SQL where clause).
The definition of the vector layer as extracted from an SDE database
is stored in a <layername>.sdv file, and can be loaded as a regular
ERDAS IMAGINE data file. ERDAS IMAGINE supports the SDE
projection systems. Currently, ERDAS IMAGINE's SDE capability is
read-only. For example, features can be queried and AOIs can be
created, but not edited.
SDTS SDTS stands for Spatial Data Transfer Standard. SDTS is used to
transfer spatial data between computer systems. Such data includes
attribute, georeferencing, data quality report, data dictionary, and
supporting metadata.
According to the USGS, the
implementation of SDTS is of significant interest to users and
producers of digital spatial data because of the potential for
increased access to and sharing of spatial data, the reduction of
information loss in data exchange, the elimination of the
duplication of data acquisition, and the increase in the quality
and integrity of spatial data (United States Geological Survey,
1999c).
The components of SDTS are broken down into six parts. The first
three parts are related, but independent, and are concerned with the
transfer of spatial data. The last three parts provide definitions for
rules and formats for applying SDTS to the exchange of data. The
parts of SDTS are as follows:
Part 1: Logical Specifications
Part 2: Spatial Features
Part 3: ISO 8211 Encoding
Part 4: Topological Vector Profile
Part 5: Raster Profile
Part 6: Point Profile
ArcGIS Integration ArcGIS Integration is the method you use to access the data in a
geodatabase. The term geodatabase is the short form of geographic
database. The geodatabase is hosted inside of a relational database
management system that provides services for managing
geographic data. The services include validation rules, relationships,
and topological associations. ERDAS IMAGINE has always supported
ESRI data formats such as coverages and shapefiles, and now, using
ArcGIS Vector Integration, ERDAS IMAGINE can also access CAD and
VPF data on the internet.
There are two types of geodatabases: personal and enterprise. The
personal geodatabases are for use by an individual or small group,
and the enterprise geodatabases are for use by large groups.
Industrial strength host systems such as Oracle support the
organizational structure of enterprise geodatabases. The
organization of both personal and enterprise geodatabases starts
with a workspace that contains both spatial and non-spatial datasets
such as feature classes, raster datasets, and tables. An example of
a feature dataset would be U.S. Agriculture. Within the datasets are
feature classes. An example of a feature class would be U.S.
Hydrology. Within every feature class are particular features like
wells and lakes. Each feature class will be symbolized by only one
type of geometry such as points symbolizing wells or polygons
symbolizing lakes.
It is important to remember when you delete a personal database
connection, the entire database is deleted from disk. When you
delete a database connection on an enterprise database, only the
connection is broken, and nothing in the geodatabase is deleted.
Raster and Vector Data Sources
Introduction This chapter is an introduction to the most common raster and vector
data types that can be used with the ERDAS IMAGINE software
package. The raster data types covered include:
visible/infrared satellite data
radar imagery
airborne sensor data
scanned or digitized maps and photographs
digital terrain models (DTMs)
The vector data types covered include:
ArcInfo GENERATE format files
AutoCAD Digital Exchange Files (DXF)
United States Geological Survey (USGS) Digital Line Graphs
(DLG)
MapBase digital street network files (ETAK)
U.S. Department of Commerce Initial Graphics Exchange
Standard files (IGES)
U.S. Census Bureau Topologically Integrated Geographic
Encoding and Referencing System files (TIGER)
Importing and
Exporting
Raster Data There is an abundance of data available for use in GIS today. In
addition to satellite and airborne imagery, raster data sources
include digital x-rays, sonar, microscopic imagery, video digitized
data, and many other sources.
Because of the wide variety of data formats, ERDAS IMAGINE
provides two options for importing data:
import for specific formats
generic import for general formats
Import
Table 3 lists some of the raster data formats that can be imported to,
exported from, directly read from, and directly written to ERDAS
IMAGINE.
There is a distinct difference between import and direct read.
Import means that the data is converted from its original format
into another format (e.g. IMG, TIFF, or GRID Stack), which can
be read directly by ERDAS IMAGINE. Direct read formats are
those formats which the Viewer and many of its associated tools
can read immediately without any conversion process.
NOTE: Annotation and Vector data formats are listed separately.
Table 3: Raster Data Formats

ADRG
ADRI
ARCGEN
Arc Coverage
ArcInfo & Space Imaging BIL, BIP, BSQ
Arc Interchange
ASCII
ASRP
ASTER (EOS HDF Format)
AVHRR (NOAA)
AVHRR (Dundee Format)
AVHRR (Sharp)
BIL, BIP, BSQ (Generic Binary) (a, b)
CADRG (Compressed ADRG)
CIB (Controlled Image Base)
DAEDALUS
USGS DEM
DOQ
DOQ (JPEG)
DTED
ER Mapper
ERS (I-PAF CEOS)
ERS (Conae-PAF CEOS)
ERS (Tel Aviv-PAF CEOS)
ERS (D-PAF CEOS)
ERS (UK-PAF CEOS)
FIT
Generic Binary (BIL, BIP, BSQ) (a, b)
GeoTIFF
GIS (Erdas 7.x)
GRASS
GRID
GRID Stack
GRID Stack 7.x
GRD (Surfer: ASCII/Binary)
IRS-1C/1D (EOSAT Fast Format C)
IRS-1C/1D (EUROMAP Fast Format C)
IRS-1C/1D (Super Structured Format)
JFIF (JPEG)
Landsat-7 Fast-L7A ACRES
Landsat-7 Fast-L7A EROS
Landsat-7 Fast-L7A Eurimage
LAN (Erdas 7.x)
MODIS (EOS HDF Format)
MrSID
MSS Landsat
NLAPS Data Format (NDF)
NASDA CEOS
PCX
RADARSAT (Vancouver CEOS)
RADARSAT (Acres CEOS)
RADARSAT (West Freugh CEOS)
Raster Product Format
SDE
SDTS
SeaWiFS L1B and L2A (OrbView)
Shapefile
SPOT
SPOT CCRS
SPOT (GeoSpot)
SPOT SICORP MetroView
SUN Raster
TIFF
TM Landsat Acres Fast Format
TM Landsat Acres Standard Format
TM Landsat EOSAT Fast Format
TM Landsat EOSAT Standard Format
TM Landsat ESA Fast Format
TM Landsat ESA Standard Format
TM Landsat-7 Eurimage CEOS (Multispectral)
TM Landsat-7 Eurimage CEOS (Panchromatic)
TM Landsat-7 HDF Format
TM Landsat IRS Fast Format
TM Landsat IRS Standard Format
TM Landsat-7 Fast-L7A ACRES
TM Landsat-7 Fast-L7A EROS
TM Landsat-7 Fast-L7A Eurimage
TM Landsat Radarsat Fast Format
TM Landsat Radarsat Standard Format
USRP

a. See "Generic Binary Data".
b. Direct read of generic binary data requires an accompanying header file in the ESRI ArcInfo, Space Imaging, or ERDAS IMAGINE formats.
The import function converts raster data to the ERDAS IMAGINE file
format (.img), or other formats directly writable by ERDAS IMAGINE.
The import function imports the data file values that make up the
raster image, as well as the ephemeris or additional data inherent to
the data structure. For example, when the user imports Landsat
data, ERDAS IMAGINE also imports the georeferencing data for the
image.
Raster data formats cannot be exported as vector data formats
unless they are converted with the Vector utilities.
Each direct function is programmed specifically for that type of
data and cannot be used to import other data types.
Raster Data Sources
NITFS
NITFS stands for the National Imagery Transmission Format
Standard. NITFS is designed to pack numerous image compositions
with complete annotation, text attachments, and imagery-
associated metadata.
According to Jordan and Beck,
NITFS is an unclassified format that is based on ISO/IEC 12087-
5, Basic Image Interchange Format (BIIF). The NITFS
implementation of BIIF is documented in U.S. Military Standard
2500B, establishing a standard data format for digital imagery
and imagery-related products.
NITFS was first introduced in 1990 and was for use by the
government and intelligence agencies. NITFS is now the standard for
military organizations as well as commercial industries.
Jordan and Beck list the following attributes of NITF files:
provide a common basis for storage and digital interchange of
images and associated data among existing and future systems
support interoperability by simultaneously providing a data
format for shared access applications while also serving as a
standard message format for dissemination of images and
associated data (text, symbols, labels) via digital
communications
require minimal preprocessing and post-processing of
transmitted data
support variable image sizes and resolution
minimize formatting overhead, particularly for those users
transmitting only a small amount of data or with limited
bandwidth
provide universal features and functions without requiring
commonality of hardware or proprietary software
Moreover, NITF files support the following:
multiple images
annotation on images
ASCII text files to accompany imagery and annotation
metadata to go with imagery, annotation and text
The process of translating NITFS files is a cross-translation process.
One system's internal representation for the files and their
associated data is processed and put into the NITF format. The
receiving system reformats the NITF file, and converts it for the
receiving system's internal representation of the files and associated
data.
In ERDAS IMAGINE, the IMAGINE NITF software accepts such
information and assembles it into one file in the standard NITF
format.
Source: Jordan and Beck, 1999
Annotation Data Annotation data can also be imported directly. Table 4 lists the
Annotation formats.
There is a distinct difference between import and direct read. Import
means that the data is converted from its original format into
another format (e.g. IMG, TIFF, or GRID Stack), which can be read
directly by ERDAS IMAGINE. Direct read formats are those formats
which the Viewer and many of its associated tools can read
immediately without any conversion process.
Table 4: Annotation Data Formats
Data Type Import Export
Direct
Read
Direct
Write
ANT (Erdas 7.x)
ASCII To Point Annotation
DXF To Annotation
Generic Binary Data The Generic Binary import option is a flexible program which enables
the user to define the data structure for ERDAS IMAGINE. This
program allows the import of BIL, BIP, and BSQ data that are stored
in left to right, top to bottom row order. Data formats from unsigned
1-bit up to 64-bit floating point can be imported. This program
imports only the data file values; it does not import ephemeris data,
such as georeferencing information. However, this ephemeris data
can be viewed using the Data View option (from the Utility menu or
the Import dialog).
Complex data cannot be imported using this program; however, they
can be imported as two real images and then combined into one
complex image using the Spatial Modeler.
You cannot import tiled or compressed data using the Generic
Binary import option.
Vector Data Vector layers can be created within ERDAS IMAGINE by digitizing
points, lines, and polygons using a digitizing tablet or the computer
screen. Several vector data types, which are available from a variety
of government agencies and private companies, can also be
imported. Table 5 lists some of the vector data formats that can be
imported to, and exported from, ERDAS IMAGINE:
There is a distinct difference between import and direct read.
Import means that the data is converted from its original format
into another format (e.g. IMG, TIFF, or GRID Stack), which can
be read directly by ERDAS IMAGINE. Direct read formats are
those formats which the Viewer and many of its associated tools
can read immediately without any conversion process.
Table 5: Vector Data Formats

ARCGEN
Arc Interchange
Arc_Interchange to Coverage
Arc_Interchange to Grid
ASCII To Point Coverage
DFAD
DGN (Intergraph IGDS)
DIG Files (Erdas 7.x)
DLG
DXF to Annotation
DXF to Coverage
ETAK
IGDS (Intergraph .dgn File)
IGES
MIF/MID (MapInfo) to Coverage
SDE
SDTS
Shapefile
Terramodel
TIGER
VPF
Once imported, the vector data are automatically converted to
ERDAS IMAGINE vector layers.
These vector formats are discussed in more detail in "Vector
Data from Other Software Vendors". See Vector Data for more
information on ERDAS IMAGINE vector layers.
Import and export vector data with the Import/Export function.
You can also convert vector layers to raster format, and vice
versa, with the IMAGINE Vector utilities.
Satellite Data There are several data acquisition options available including
photography, aerial sensors, and sophisticated satellite scanners.
However, a satellite system offers these advantages:
Digital data gathered by a satellite sensor can be transmitted
over radio or microwave communications links and stored on
magnetic tapes, so they are easily processed and analyzed by a
computer.
Many satellites orbit the Earth, so the same area can be covered
on a regular basis for change detection.
Once the satellite is launched, the cost for data acquisition is less
than that for aircraft data.
Satellites have very stable geometry, meaning that there is less
chance for distortion or skew in the final image.
Use the Import/Export function to import a variety of satellite
data.
Satellite System A satellite system is composed of a scanner with sensors and a
satellite platform. The sensors are made up of detectors.
The scanner is the entire data acquisition system, such as the
Landsat TM scanner or the SPOT panchromatic scanner (Lillesand
and Kiefer, 1987). It includes the sensor and the detectors.
A sensor is a device that gathers energy, converts it to a signal,
and presents it in a form suitable for obtaining information about
the environment (Colwell, 1983).
A detector is the device in a sensor system that records
electromagnetic radiation. For example, in the sensor system on
the Landsat TM scanner there are 16 detectors for each
wavelength band (except band 6, which has 4 detectors).
In a satellite system, the total width of the area on the ground
covered by the scanner is called the swath width, or width of the total
field of view (FOV). FOV differs from IFOV in that the IFOV is a
measure of the field of view of each detector. The FOV is a measure
of the field of view of all the detectors combined.
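As a rough illustration of the relationship, the ground footprint of one detector at nadir is approximately the orbital altitude multiplied by the angular IFOV (in radians), and the swath (total FOV) is that footprint multiplied by the number of samples across the track. The values below are hypothetical and are chosen only to show the arithmetic:

# Hypothetical values, for illustration of the FOV/IFOV relationship only.
altitude_m = 705_000            # orbital altitude (705 km, assumed)
ifov_rad = 42.5e-6              # angular IFOV of a single detector (assumed)
samples_across_track = 6000     # samples per scan line (assumed)

ground_ifov_m = altitude_m * ifov_rad                  # about 30 m per detector
swath_width_m = ground_ifov_m * samples_across_track   # about 180 km total FOV
print(ground_ifov_m, swath_width_m)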
Satellite Characteristics
The U.S. Landsat and the French SPOT satellites are two important
data acquisition satellites. These systems provide the majority of
remotely-sensed digital images in use today. The Landsat and SPOT
satellites have several characteristics in common:
Both scanners can produce nadir views. Nadir is the area on the
ground directly beneath the scanner's detectors.
They have sun-synchronous orbits, meaning that the orbital plane
keeps a nearly constant orientation relative to the Sun as the Earth
rotates, so data are always collected at the same local time of day
over the same region.
They both record electromagnetic radiation in one or more
bands. Multiband data are referred to as multispectral imagery.
Single band, or monochrome, imagery is called panchromatic.
NOTE: The current SPOT system has the ability to collect off-nadir
stereo imagery.
Image Data Comparison
Figure 23 shows a comparison of the electromagnetic spectrum
recorded by Landsat TM, Landsat MSS, SPOT, and National Oceanic
and Atmospheric Administration (NOAA) AVHRR data. These data
are described in detail in the following sections.
Figure 23: Multispectral Imagery Comparison
(The figure compares the wavelength ranges, in micrometers, covered by the Landsat MSS, Landsat TM, SPOT XS, SPOT Panchromatic, and NOAA AVHRR bands. NOAA AVHRR band 5 is not on the NOAA 10 satellite, but is on NOAA 11.)
IKONOS
The IKONOS satellite was launched in September of 1999 by the
Athena II rocket.
The resolution of the panchromatic sensor is 1 m. The resolution of
the multispectral scanner is 4 m. The swath width is 13 km at nadir.
The accuracy without ground control is 12 m horizontally, and 10 m
vertically; with ground control it is 2 m horizontally, and 3 m
vertically.
IKONOS orbits at an altitude of 423 miles, or 681 kilometers. The
revisit time is 2.9 days at 1 m resolution, and 1.5 days at 1.5 m
resolution.

Table 6: IKONOS Bands and Wavelengths
Band            Wavelength (microns)
1, Blue         0.45 to 0.52 μm
2, Green        0.52 to 0.60 μm
3, Red          0.63 to 0.69 μm
4, NIR          0.76 to 0.90 μm
Panchromatic    0.45 to 0.90 μm
Source: Space Imaging, 1999a; Center for Health Applications of
Aerospace Related Technologies, 2000a
IRS
IRS-1C
The IRS-1C sensor was launched in December of 1995.
The repeat coverage of IRS-1C is every 24 days. The sensor has a
744 km swath width.
The IRS-1C satellite has three sensors on board with which to
capture images of the Earth. Those sensors are as follows:
LISS-III
LISS-III has a spatial resolution of 23 m, with the exception of the
SW Infrared band, which is 70 m. Bands 2, 3, and 4 have a swath
width of 142 kilometers; band 5 has a swath width of 148 km.
Repeat coverage occurs every 24 days at the Equator.
Source: National Remote Sensing Agency, 1998
Table 7: LISS-III Bands and Wavelengths
Band         Wavelength (microns)
1, Blue      ---
2, Green     0.52 to 0.59 μm
3, Red       0.62 to 0.68 μm
4, NIR       0.77 to 0.86 μm
5, SW IR     1.55 to 1.70 μm

Panchromatic Sensor
The panchromatic sensor has 5.8 m spatial resolution, as well as
stereo capability. Its swath width is 70 km. Repeat coverage is every
24 days at the Equator. The revisit time is every five days, with
26° off-nadir viewing.

Table 8: Panchromatic Band and Wavelength
Band    Wavelength (microns)
Pan     0.5 to 0.75 μm
Wide Field Sensor (WiFS)
WiFS has a 188 m spatial resolution, and repeat coverage every five
days at the Equator. The swath width is 774 km.

Table 9: WiFS Bands and Wavelengths
Band      Wavelength (microns)
1, Red    0.62 to 0.68 μm
2, NIR    0.77 to 0.86 μm
3, MIR    1.55 to 1.75 μm
Source: Space Imaging, 1999b; Center for Health Applications of
Aerospace Related Technologies, 1998
IRS-1D
IRS-1D was launched in September of 1997. It collects imagery at a
spatial resolution of 5.8 m. IRS-1D's sensors are copies of those on
IRS-1C, which was launched in December 1995.
Imagery collected by IRS-1D is distributed in black and white format.
The panchromatic imagery reveals objects on the Earth's surface,
such as transportation networks, large ships, parks and open
space, and built-up urban areas (Space Imaging, 1999b). This
information can be used to classify land cover in applications such as
urban planning and agriculture. The Space Imaging facility located in
Norman, Oklahoma has been obtaining IRS-1D data since 1997.
For band and wavelength data on IRS-1D, see "IRS".
Source: Space Imaging, 1998
Landsat 1-5
In 1972, the National Aeronautics and Space Administration (NASA)
initiated the first civilian program specializing in the acquisition of
remotely sensed digital satellite data. The first system was called
ERTS (Earth Resources Technology Satellites), and later renamed to
Landsat. There have been several Landsat satellites launched since
1972. Landsats 1, 2, and 3 are no longer operating, but Landsats 4
and 5 are still in orbit gathering data.
Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and
Landsats 4 and 5 collect MSS and TM data. MSS and TM are
discussed in more detail in the following sections.
NOTE: Landsat data are available through the Earth Observation
Satellite Company (EOSAT) or the Earth Resources Observation
Systems (EROS) Data Center. See "Ordering Raster Data" for more
information.
MSS
The MSS from Landsats 4 and 5 has a swath width of approximately
185 × 170 km from a height of approximately 900 km for Landsats
1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data are widely
used for general geologic studies as well as vegetation inventories.
The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m
IFOV. A typical scene contains approximately 2340 rows and 3240
columns. The radiometric resolution is 6-bit, but it is stored as 8-bit
(Lillesand and Kiefer, 1987).
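The figures above translate directly into an approximate scene size; the short calculation below multiplies the quoted dimensions, using one byte per pixel because the 6-bit values are stored as 8-bit data:

# Approximate size of one Landsat MSS scene, using the figures quoted above.
rows, cols, bands, bytes_per_pixel = 2340, 3240, 4, 1
scene_bytes = rows * cols * bands * bytes_per_pixel
print(scene_bytes / 1e6)    # roughly 30 MB, uncompressed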
Detectors record electromagnetic radiation (EMR) in four bands:
Bands 1 and 2 are in the visible portion of the spectrum and are
useful in detecting cultural features, such as roads. These bands
also show detail in water.
Bands 3 and 4 are in the near-infrared portion of the spectrum
and can be used in land/water and vegetation discrimination.
Table 10: MSS Bands and Wavelengths
Band 1, Green (0.50 to 0.60 μm): This band scans the region between the blue and red chlorophyll absorption bands. It corresponds to the green reflectance of healthy vegetation, and it is also useful for mapping water bodies.
Band 2, Red (0.60 to 0.70 μm): This is the red chlorophyll absorption band of healthy green vegetation and represents one of the most important bands for vegetation discrimination. It is also useful for determining soil boundary and geological boundary delineations and cultural features.
Band 3, Red, NIR (0.70 to 0.80 μm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.
Band 4, NIR (0.80 to 1.10 μm): This band is useful for vegetation surveys and for penetrating haze (Jensen, 1996).
Source: Center for Health Applications of Aerospace Related
Technologies, 2000b
TM
The TM scanner is a multispectral scanning system much like the
MSS, except that the TM sensor records reflected/emitted
electromagnetic energy from the visible, reflective-infrared, middle-
infrared, and thermal-infrared regions of the spectrum. TM has
higher spatial, spectral, and radiometric resolution than MSS.
TM has a swath width of approximately 185 km from a height of
approximately 705 km. It is useful for vegetation type and health
determination, soil moisture, snow and cloud differentiation, rock
type discrimination, etc.
The spatial resolution of TM is 28.5 × 28.5 m for all bands except the
thermal (band 6), which has a spatial resolution of 120 × 120 m. The
larger pixel size of this band is necessary for adequate signal
strength. However, the thermal band is resampled to 28.5 × 28.5 m
to match the other bands. The radiometric resolution is 8-bit,
meaning that each pixel has a possible range of data values from 0
to 255.
Detectors record EMR in seven bands:
Bands 1, 2, and 3 are in the visible portion of the spectrum and
are useful in detecting cultural features such as roads. These
bands also show detail in water.
Bands 4, 5, and 7 are in the reflective-infrared portion of the
spectrum and can be used in land/water discrimination.
Band 6 is in the thermal portion of the spectrum and is used for
thermal mapping (Jensen, 1996; Lillesand and Kiefer, 1987).
Source: Center for Health Applications of Aerospace Related
Technologies, 2000b
Table 11: TM Bands and Wavelengths
Band 1, Blue (0.45 to 0.52 μm): This band is useful for mapping coastal water areas, differentiating between soil and vegetation, forest type mapping, and detecting cultural features.
Band 2, Green (0.52 to 0.60 μm): This band corresponds to the green reflectance of healthy vegetation. Also useful for cultural feature identification.
Band 3, Red (0.63 to 0.69 μm): This band is useful for discriminating between many plant species. It is also useful for determining soil boundary and geological boundary delineations as well as cultural features.
Band 4, NIR (0.76 to 0.90 μm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.
Band 5, MIR (1.55 to 1.75 μm): This band is sensitive to the amount of water in plants, which is useful in crop drought studies and in plant health analyses. This is also one of the few bands that can be used to discriminate between clouds, snow, and ice.
Band 6, TIR (10.40 to 12.50 μm): This band is useful for vegetation and crop stress detection, heat intensity, insecticide applications, and for locating thermal pollution. It can also be used to locate geothermal activity.
Band 7, MIR (2.08 to 2.35 μm): This band is important for the discrimination of geologic rock type and soil boundaries, as well as soil and vegetation moisture content.
Figure 24: Landsat MSS vs. Landsat TM
(The figure contrasts Landsat MSS, with 4 bands, 57 × 79 m pixels, and a 0 to 127 radiometric range, against Landsat TM, with 7 bands, 30 × 30 m pixels, and a 0 to 255 radiometric range.)
Band Combinations for Displaying TM Data
Different combinations of the TM bands can be displayed to create
different composite effects. The following combinations are
commonly used to display images:
NOTE: The order of the bands corresponds to the Red, Green, and
Blue (RGB) color guns of the monitor.
Bands 3, 2, 1 create a true color composite. True color means
that objects look as they would to the naked eye, similar to a
color photograph.
Bands 4, 3, 2 create a false color composite. False color
composites appear similar to an infrared photograph where
objects do not have the same colors or contrasts as they would
naturally. For instance, in an infrared image, vegetation appears
red, water appears navy or black, etc.
Bands 5, 4, 2 create a pseudo color composite. (A thematic
image is also a pseudo color image.) In pseudo color, the colors
do not reflect the features in natural colors. For instance, roads
may be red, water yellow, and vegetation blue.
Different color schemes can be used to bring out or enhance the
features under study. These are by no means all of the useful
combinations of these seven bands. The bands to be used are
determined by the particular application.
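For readers assembling composites outside ERDAS IMAGINE, the band-to-color-gun assignment can be sketched in a few lines of NumPy. The band arrays, how they are read, and the simple min-max stretch are assumptions made only for illustration; this is not the IMAGINE display pipeline:

import numpy as np

def stretch(band):
    # Linear min-max stretch of one band to the 0-255 display range.
    band = band.astype(np.float32)
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo + 1e-9) * 255.0).astype(np.uint8)

def composite(red_band, green_band, blue_band):
    # Stack three stretched bands into a rows x cols x 3 RGB display array.
    return np.dstack([stretch(red_band), stretch(green_band), stretch(blue_band)])

# Example: a 4, 3, 2 false color composite, assuming tm_band4, tm_band3, and
# tm_band2 are 2-D arrays of equal shape: composite(tm_band4, tm_band3, tm_band2)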
See Image Display for more information on how images are
displayed, Enhancement for more information on how images
can be enhanced, and "Ordering Raster Data" for information on
types of Landsat data available.
Landsat 7
The Landsat 7 satellite, launched in 1999, uses Enhanced Thematic
Mapper Plus (ETM+) to observe the Earth. The capabilities new to
Landsat 7 include the following:
15m spatial resolution panchromatic band
5% radiometric calibration with full aperture
60m spatial resolution thermal IR channel
The primary receiving station for Landsat 7 data is located in Sioux
Falls, South Dakota at the USGS EROS Data Center (EDC). ETM+
data is transmitted using X-band direct downlink at a rate of 150
Mbps. Landsat 7 is capable of capturing scenes without cloud
obstruction, and the receiving stations can obtain this data in real
time using the X-band. Stations located around the globe, however,
are only able to receive data for the portion of the ETM+ ground
track where the satellite can be seen by the receiving station.
Landsat 7 Data Types
One type of data available from Landsat 7 is browse data. Browse
data is a lower resolution image for determining image location,
quality and information content. The other type of data is metadata,
which is descriptive information on the image. This information is
available via the internet within 24 hours of being received by the
primary ground station. Moreover, EDC processes the data to Level
0r. This data has been corrected for scan direction and band
alignment errors only. Level 1G data, which is radiometrically and
geometrically corrected, is also available.
Landsat 7 Specifications
Information about the spectral range and ground resolution of the
bands of the Landsat 7 satellite is provided in the following table:
Table 12: Landsat 7 Characteristics
Band Number        Wavelength (microns)   Resolution (m)
1                  0.45 to 0.52 μm        30
2                  0.52 to 0.60 μm        30
3                  0.63 to 0.69 μm        30
4                  0.76 to 0.90 μm        30
5                  1.55 to 1.75 μm        30
6                  10.4 to 12.5 μm        60
7                  2.08 to 2.35 μm        30
Panchromatic (8)   0.50 to 0.90 μm        15
Landsat 7 has a swath width of 185 kilometers. The repeat coverage
interval is 16 days, or 233 orbits. The satellite orbits the Earth at 705
kilometers.
Source: National Aeronautics and Space Administration, 1998;
National Aeronautics and Space Administration, 2001
NLAPS
The National Landsat Archive Production System (NLAPS) is the
Landsat processing system used by EROS. The NLAPS system is able
to produce systematically corrected and terrain corrected products
(United States Geological Survey, n.d.).
Landsat data received from satellites is generated into TM corrected
data using the NLAPS by:
correcting and validating the mirror scan and payload correction
data
providing for image framing by generating a series of scene
center parameters
synchronizing telemetry data with video data
estimating linear motion deviation of scan mirror/scan line
corrections
generating benchmark correction matrices for specified map
projections
producing along- and across-scan high-frequency line matrices
According to the USGS, the products provided by NLAPS include the
following:
image data and the metadata describing the image
processing procedure, which contains information describing the
process by which the image data were produced
DEM data and the metadata describing them (available only with
terrain corrected products)
For information about the Landsat data processed by NLAPS, see
"Landsat 1-5" and "Landsat 7".
Source: United States Geological Survey, n.d.
NOAA Polar Orbiter Data
NOAA has sponsored several polar orbiting satellites to collect data
of the Earth. These satellites were originally designed for
meteorological applications, but the data gathered have been used
in many fields, from agronomy to oceanography (Needham, 1986).
The first of these satellites to be launched was the TIROS-N in 1978.
Since the TIROS-N, five additional NOAA satellites have been
launched. Of these, the last three are still in orbit gathering data.
AVHRR
The NOAA AVHRR data are small-scale data and often cover an entire
country. The swath width is 2700 km and the satellites orbit at a
height of approximately 833 km (Kidwell, 1988; Needham, 1986).
The AVHRR system allows for direct transmission in real-time of data
called High Resolution Picture Transmission (HRPT). It also allows for
about ten minutes of data to be recorded over any portion of the
world on two recorders on board the satellite. These recorded data
are called Local Area Coverage (LAC). LAC and HRPT have identical
formats; the only difference is that HRPT are transmitted directly and
LAC are recorded.
There are three basic formats for AVHRR data which can be imported
into ERDAS IMAGINE:
LAC: data recorded on board the sensor with a spatial resolution
of approximately 1.1 × 1.1 km,
HRPT: direct transmission of AVHRR data in real-time with the
same resolution as LAC, and
GAC: data produced from LAC data by using only 1 out of every
3 scan lines. GAC data have a spatial resolution of approximately
4 × 4 km.
AVHRR data are available in 10-bit packed and 16-bit unpacked
format. The term packed refers to the way in which the data are
written to the tape. Packed data are compressed to fit more data on
each tape (Kidwell, 1988).
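As an illustration of what "packed" means, the sketch below unpacks 10-bit samples that have been stored three to a 32-bit big-endian word. That layout is an assumption made for the example; consult the format documentation of a specific AVHRR product for its exact packing scheme:

import numpy as np

def unpack_10bit(raw_bytes):
    # Assumes three 10-bit samples packed into each 32-bit big-endian word.
    words = np.frombuffer(raw_bytes, dtype=">u4")
    first = (words >> 20) & 0x3FF      # bits 29-20
    second = (words >> 10) & 0x3FF     # bits 19-10
    third = words & 0x3FF              # bits 9-0
    return np.column_stack([first, second, third]).ravel()   # values 0-1023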
AVHRR images are useful for snow cover mapping, flood monitoring,
vegetation mapping, regional soil moisture analysis, wildfire fuel
mapping, fire detection, dust and sandstorm monitoring, and various
geologic applications (Lillesand and Kiefer, 1987). The entire globe
can be viewed in 14.5 days. There may be four or five bands,
depending on when the data were acquired.
AVHRR data have a radiometric resolution of 10-bits, meaning that
each pixel has a possible data file value between 0 and 1023. AVHRR
scenes may contain one band, a combination of bands, or all bands.
All bands are referred to as a full set, and selected bands are referred
to as an extract.
See "Ordering Raster Data" for information on the types of NOAA
data available.
Use the Import/Export function to import AVHRR data.
Table 13: AVHRR Bands and Wavelengths
Band 1, Visible (0.58 to 0.68 μm): This band corresponds to the green reflectance of healthy vegetation and is important for vegetation discrimination.
Band 2, NIR (0.725 to 1.10 μm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.
Band 3, TIR (3.55 to 3.93 μm): This is a thermal band that can be used for snow and ice discrimination. It is also useful for detecting fires.
Band 4, TIR (10.50 to 11.50 μm on NOAA 6, 8, 10; 10.30 to 11.30 μm on NOAA 7, 9, 11): This band is useful for vegetation and crop stress detection. It can also be used to locate geothermal activity.
Band 5, TIR (10.50 to 11.50 μm on NOAA 6, 8, 10; 11.50 to 12.50 μm on NOAA 7, 9, 11): See Band 4, above.

OrbView-3
OrbView is a high-resolution satellite scheduled for launch by
ORBIMAGE in the year 2000.
The OrbView-3 satellite will provide both 1 m panchromatic imagery
and 4 m multispectral imagery. One-meter imagery will enable the
viewing of houses, automobiles and aircraft, and will make it possible
to create highly precise digital maps and three-dimensional fly-
through scenes. Four-meter multispectral imagery will provide color
and infrared information to further characterize cities, rural areas
and undeveloped land from space (ORBIMAGE, 1999). Specific
applications include telecommunications and utilities, agriculture
and forestry.
OrbView-3's swath width is 8 km, with an image area of 64 km². The
revisit time is less than 3 days. OrbView-3 orbits the Earth at an
altitude of 470 km.

Table 14: OrbView-3 Bands and Spectral Ranges
Bands           Spectral Range
1               450 to 520 nm
2               520 to 600 nm
3               625 to 695 nm
4               760 to 900 nm
Panchromatic    450 to 900 nm

Source: ORBIMAGE, 1999; ORBIMAGE, 2000
SeaWiFS
The Sea-viewing Wide Field-of-View Sensor (SeaWiFS) instrument is
on-board the SeaStar spacecraft, which was launched in 1997. The
SeaStar spacecraft's orbit is circular, at an altitude of 705 km. The
satellite uses an attitude control system (ACS), which maintains
orbit, as well as performs solar and lunar calibration maneuvers. The
ACS also provides attitude information within one SeaWiFS pixel.
The SeaWiFS instrument is made up of an optical scanner and an
electronics module. The swath width is 2,801 km LAC/HRPT (±58.3
degrees) and 1,502 km GAC (±45 degrees). The spatial resolution is
1.1 km LAC and 4.5 km GAC. The revisit time is one day.
Table 15: SeaWiFS Bands and Wavelengths
Band        Wavelength (nanometers)
1, Blue     402 to 422 nm
2, Blue     433 to 453 nm
3, Cyan     480 to 500 nm
4, Green    500 to 520 nm
5, Green    545 to 565 nm
6, Red      660 to 680 nm
7, NIR      745 to 785 nm
8, NIR      845 to 885 nm
Source: National Aeronautics and Space Administration, 1999;
Center for Health Applications of Aerospace Related Technologies,
1998
SPOT
The first SPOT satellite, developed by the French Centre National
d'Etudes Spatiales (CNES), was launched in early 1986. The second
SPOT satellite was launched in 1990 and the third was launched in
1993. The sensors operate in two modes, multispectral and
panchromatic. SPOT is commonly referred to as a pushbroom
scanner meaning that all scanning parts are fixed, and scanning is
accomplished by the forward motion of the scanner. SPOT pushes
3000/6000 sensors along its orbit. This is different from Landsat
which scans with 16 detectors perpendicular to its orbit.
The SPOT satellite can observe the same area on the globe once
every 26 days. The SPOT scanner normally produces nadir views, but
it does have off-nadir viewing capability. Off-nadir refers to any point
that is not directly beneath the detectors, but off to an angle. Using
this off-nadir capability, one area on the Earth can be viewed as
often as every 3 days.
This off-nadir viewing can be programmed from the ground control
station, and is quite useful for collecting data in a region not directly
in the path of the scanner or in the event of a natural or man-made
disaster, where timeliness of data acquisition is crucial. It is also very
useful in collecting stereo data from which elevation data can be
extracted.
The width of the swath observed varies between 60 km for nadir
viewing and 80 km for off-nadir viewing at a height of 832 km
(Jensen, 1996).
Panchromatic
SPOT Panchromatic (meaning sensitive to all visible colors) has 10 ×
10 m spatial resolution, contains 1 band (0.51 to 0.73 μm), and is
similar to a black and white photograph. It has a radiometric
resolution of 8 bits (Jensen, 1996).
XS
SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit
radiometric resolution, and contains 3 bands (Jensen, 1996).
Figure 25: SPOT Panchromatic vs. SPOT XS
(The figure contrasts the single-band SPOT Panchromatic image, with 10 × 10 m pixels, against the three-band SPOT XS image, with 20 × 20 m pixels; both have an 8-bit, 0 to 255 radiometric range.)
See "Ordering Raster Data" for information on the types of SPOT
data available.
Table 16: SPOT XS Bands and Wavelengths
Band 1, Green (0.50 to 0.59 μm): This band corresponds to the green reflectance of healthy vegetation.
Band 2, Red (0.61 to 0.68 μm): This band is useful for discriminating between plant species. It is also useful for soil boundary and geological boundary delineations.
Band 3, Reflective IR (0.79 to 0.89 μm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.
Stereoscopic Pairs
Two observations can be made by the panchromatic scanner on
successive days, so that the two images are acquired at angles on
either side of the vertical, resulting in stereoscopic imagery.
Stereoscopic imagery can also be achieved by using one vertical
scene and one off-nadir scene. This type of imagery can be used to
produce a single image, or topographic and planimetric maps
(Jensen, 1996).
Topographic maps indicate elevation. Planimetric maps correctly
represent horizontal distances between objects (Star and Estes,
1990).
See "Topographic Data" and Terrain Analysis for more
information about topographic data and how SPOT stereopairs
and aerial photographs can be used to create elevation data and
orthographic images.
SPOT4
The SPOT4 satellite was launched in 1998. SPOT4 carries High
Resolution Visible Infrared (HR VIR) instruments that obtain
information in the visible and near-infrared spectral bands.
The SPOT4 satellite orbits the Earth at 822 km at the Equator. The
SPOT4 satellite has two sensors on board: a multispectral sensor,
and a panchromatic sensor. The multispectral scanner has a pixel
size of 20 × 20 m, and a swath width of 60 km. The panchromatic
scanner has a pixel size of 10 × 10 m, and a swath width of 60 km.
Source: SPOT Image, 1998; SPOT Image, 1999; Center for Health
Applications of Aerospace Related Technologies, 2000c.
Table 17: SPOT4 Bands and Wavelengths
Band           Wavelength
1, Green       0.50 to 0.59 μm
2, Red         0.61 to 0.68 μm
3, (near-IR)   0.78 to 0.89 μm
4, (mid-IR)    1.58 to 1.75 μm
Panchromatic   0.61 to 0.68 μm

Radar Data
Simply put, radar data are produced when:
a radar transmitter emits a beam of micro or millimeter waves,
the waves reflect from the surfaces they strike, and
the backscattered radiation is detected by the radar system's
receiving antenna, which is tuned to the frequency of the
transmitted waves.
The resultant radar data can be used to produce radar images.
While there is a specific importer for data from RADARSAT and
others, most types of radar image data can be imported into
ERDAS IMAGINE with the Generic import option of
Import/Export. The Generic SAR Node of the IMAGINE Radar
Mapping Suite can be used to create or edit the radar
ephemeris.
A radar system can be airborne, spaceborne, or ground-based.
Airborne radar systems have typically been mounted on civilian and
military aircraft, but in 1978, the radar satellite Seasat-1 was
launched. The radar data from that mission and subsequent
spaceborne radar systems have been a valuable addition to the data
available for use in GIS processing. Researchers are finding that a
combination of the characteristics of radar data and visible/infrared
data is providing a more complete picture of the Earth. In the last
decade, the importance and applications of radar have grown
rapidly.
Advantages of Using Radar Data
Radar data have several advantages over other types of remotely
sensed imagery:
Radar microwaves can penetrate the atmosphere day or night
under virtually all weather conditions, providing data even in the
presence of haze, light rain, snow, clouds, or smoke.
Under certain circumstances, radar can partially penetrate arid
and hyperarid surfaces, revealing subsurface features of the
Earth.
Although radar does not penetrate standing water, it can reflect
the surface action of oceans, lakes, and other bodies of water.
Surface eddies, swells, and waves are greatly affected by the
bottom features of the water body, and a careful study of surface
action can provide accurate details about the bottom features.
Radar Sensors
Radar images are generated by two different types of sensors:
SLAR (Side-looking Airborne Radar): uses an antenna which is
fixed below an aircraft and pointed to the side to transmit and
receive the radar signal. (See Figure 26.)
SAR: uses a side-looking, fixed antenna to create a synthetic
aperture. SAR sensors are mounted on satellites and the NASA
Space Shuttle. The sensor transmits and receives as it is moving.
The signals received over a time interval are combined to create
the image.
Both SLAR and SAR systems use side-looking geometry. Figure 26
shows a representation of an airborne SLAR system.
Figure 26: SLAR Radar
(The diagram shows the side-looking geometry, labeling the range direction, azimuth direction, azimuth resolution, beam width, previous image lines, and the sensor height at nadir.)
Source: Lillesand and Kiefer, 1987
Figure 27 shows a graph of the data received from the radiation
transmitted in Figure 26. Notice how the data correspond to the
terrain in Figure 26. These data can be used to produce a radar
image of the target area. A target is any object or feature that is the
subject of the radar scan.
Figure 27: Received Radar Signal
(The graph plots received signal strength (DN) against time for a ground profile of trees, a valley, a hill, the hill's shadow, and more trees.)
Active and Passive Sensors
An active radar sensor gives off a burst of coherent radiation that
reflects from the target, unlike a passive microwave sensor which
simply receives the low-level radiation naturally emitted by targets.
Like the coherent light from a laser, the waves emitted by active
sensors travel in phase and interact minimally on their way to the
target area. After interaction with the target area, these waves are
no longer in phase. This is due to the different distances they travel
from different targets, or single versus multiple bounce scattering.
Figure 28: Radar Reflection from Different Sources and Distances
(The diagram shows diffuse, specular, and corner reflectors. Radar waves are transmitted in phase; once reflected, they are out of phase, interfering with each other and producing speckle noise.)
Source: Lillesand and Kiefer, 1987
Currently, these bands are commonly used for radar imaging
systems:
More information about these radar systems is given later in this
chapter.
Table 18: Commonly Used Bands for Radar Imaging
Band   Frequency Range       Wavelength Range   Radar System
X      5.20 to 10.90 GHz     5.77 to 2.75 cm    USGS SLAR
C      3.9 to 6.2 GHz        3.8 to 7.6 cm      ERS-1, RADARSAT
L      0.39 to 1.55 GHz      76.9 to 19.3 cm    SIR-A, B; Almaz; FUYO-1 (JERS-1)
P      0.225 to 0.391 GHz    40.0 to 76.9 cm    AIRSAR
Radar bands were named arbitrarily when radar was first developed
by the military. The letter designations have no special meaning.
NOTE: The C band overlaps the X band. Wavelength ranges may
vary slightly between sensors.
Speckle Noise
Once out of phase, the radar waves can interfere constructively or
destructively to produce light and dark pixels known as speckle
noise. Speckle noise in radar data must be reduced before the data
can be utilized. However, the radar image processing programs used
to reduce speckle noise also produce changes to the image. This
consideration, combined with the fact that different applications and
sensor outputs necessitate different speckle removal models, has
led ERDAS to offer several speckle reduction algorithms.
When processing radar data, the order in which the image
processing programs are implemented is crucial. This is
especially true when considering the removal of speckle noise.
Since any image processing done before removal of the speckle
results in the noise being incorporated into and degrading the
image, do not rectify, correct to ground range, or in any way
resample the pixel values before removing speckle noise. A
rotation using nearest neighbor might be permissible.
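To make the idea concrete, the following is a minimal local-statistics (Lee-style) speckle filter written with NumPy and SciPy. It is only a sketch of the general principle; it is not one of the speckle removal models in the IMAGINE Radar Interpreter, and the window size and noise-variance estimate are assumptions:

import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, window=7, noise_var=None):
    # Adaptive filter: smooth strongly in flat areas, weakly near edges.
    image = image.astype(np.float64)
    local_mean = uniform_filter(image, window)
    local_sq_mean = uniform_filter(image * image, window)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    if noise_var is None:
        noise_var = np.mean(local_var)        # crude global noise estimate
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (image - local_mean)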
The IMAGINE Radar Interpreter allows you to:
import radar data into the GIS as a stand-alone source or as an
additional layer with other imagery sources
remove speckle noise
enhance edges
perform texture analysis
perform radiometric and slant-to-ground range correction
IMAGINE OrthoRadar allows you to orthorectify radar imagery.
The IMAGINE IFSAR DEM module allows you to generate DEMs
from SAR data using interferometric techniques.
The IMAGINE StereoSAR DEM module allows you to generate DEMs
from SAR data using stereoscopic techniques.
See Enhancement and Radar Concepts for more information
on radar imagery enhancement.
Applications for Radar Data
Radar data can be used independently in GIS applications or
combined with other satellite data, such as Landsat, SPOT, or
AVHRR. Possible GIS applications for radar data include:
Geology: radar's ability to partially penetrate land cover and
sensitivity to micro relief makes radar data useful in geologic
mapping, mineral exploration, and archaeology.
Classification: a radar scene can be merged with visible/infrared
data as an additional layer(s) in vegetation classification for
timber mapping, crop monitoring, etc.
Glaciology: the ability to provide imagery of ocean and ice
phenomena makes radar an important tool for monitoring
climatic change through polar ice variation.
Oceanography: radar is used for wind and wave measurement,
sea-state and weather forecasting, and monitoring ocean
circulation, tides, and polar oceans.
Hydrology: radar data are proving useful for measuring soil
moisture content and mapping snow distribution and water
content.
Ship monitoring: the ability to provide day/night all-weather
imaging, as well as detect ships and associated wakes, makes
radar a tool that can be used for ship navigation through frozen
ocean areas such as the Arctic or North Atlantic Passage.
Offshore oil activities: radar data are used to provide ice updates
for offshore drilling rigs, determining weather and sea conditions
for drilling and installation operations, and detecting oil spills.
Pollution monitoring: radar can detect oil on the surface of water
and can be used to track the spread of an oil spill.
Current Radar Sensors
Table 19 gives a brief description of currently available radar
sensors. This is not a complete list of such sensors, but it does
represent the ones most useful for GIS applications.
Table 19: Current Radar Sensors
Sensor      Availability   Resolution   Revisit Time   Scene Area                 Bands
ERS-1, 2    operational    12.5 m       35 days        100 × 100 km               C band
JERS-1      defunct        18 m         44 days        75 × 100 km                L band
SIR-A, B    1981, 1984     25 m         NA             30 × 60 km                 L band
SIR-C       1994           25 m         NA             variable swath             L, C, X bands
RADARSAT    operational    10-100 m     3 days         50 × 50 to 500 × 500 km    C band
Almaz-1     1991-1992      15 m         NA             40 × 100 km                C band
Almaz-1
Almaz was launched by the Soviet Union in 1987. It operated with a
single frequency SAR, which was attached to a spacecraft.
Almaz-1 provided optically-processed data. The Almaz mission was
largely kept secret.
Almaz-1 was launched in 1991, and provides S-band information. It
also includes a single polarization SAR as well as a sounding
radiometric scanner (RMS) system and several infrared bands
(Atlantis Scientific, Inc., 1997).
The swath width of Almaz-1 is 20-45 km, the range resolution is 15-
30 m, and the azimuth resolution is 15 m.
Source: National Aeronautics and Space Administration, 1996;
Atlantis Scientific, Inc., 1997
ERS-1
ERS-1, a radar satellite, was launched by ESA in July of 1991. One
of its primary instruments is the Along-Track Scanning Radiometer
(ATSR). The ATSR monitors changes in vegetation of the Earth's
surface.
The instruments aboard ERS-1 include: SAR Image Mode, SAR Wave
Mode, Wind Scatterometer, Radar Altimeter, and Along Track
Scanning Radiometer-1 (European Space Agency, 1997).
ERS-1 receiving stations are located all over the world, in countries
such as Sweden, Norway, and Canada.
Some of the information that is obtained from the ERS-1 (as well as
ERS-2, to follow) includes:
maps of the surface of the Earth through clouds
physical ocean features and atmospheric phenomena
maps and ice patterns of polar regions
database information for use in modeling
surface elevation changes
According to ESA,
. . .ERS-1 provides both global and regional views of the Earth,
regardless of cloud coverage and sunlight conditions. An
operational near-real-time capability for data acquisition,
processing and dissemination, offering global data sets within
three hours of observation, has allowed the development of time-
critical applications particularly in weather, marine and ice
forecasting, which are of great importance for many industrial
activities (European Space Agency, 1995).
Source: European Space Agency, 1995
ERS-2
ERS-2, a radar satellite, was launched by ESA in April of 1995. It has
an instrument called GOME, which stands for Global Ozone
Monitoring Experiment. This instrument is designed to evaluate
atmospheric chemistry. ERS-2, like ERS-1, makes use of the ATSR.
The instruments aboard ERS-2 include: SAR Image Mode, SAR Wave
Mode, Wind Scatterometer, Radar Altimeter, Along Track Scanning
Radiometer-2, and the Global Ozone Monitoring Experiment.
ERS-2 receiving stations are located all over the world. Facilities that
process and archive ERS-2 data are also located around the globe.
One of the benefits of the ERS-2 satellite is that, along with ERS-1,
it can provide data from the exact same type of synthetic aperture
radar (SAR).
ERS-2 provides many different types of information. See ERS-1 for
some of the most common types. Data obtained from ERS-2 used in
conjunction with that from ERS-1 enables you to perform
interferometric tasks. Using the data from the two sensors, DEMs
can be created.
For information about ERDAS IMAGINE's interferometric
software, IMAGINE IFSAR DEM, see "IMAGINE IFSAR DEM
Theory".
Source: European Space Agency, 1995
JERS-1
JERS stands for Japanese Earth Resources Satellite. The JERS-1
satellite was launched in February of 1992, with an SAR instrument
and a 4-band optical sensor aboard. The SAR sensor's ground
resolution is 18 m, and the optical sensor's ground resolution is
roughly 18 m across-track and 24 m along-track. The revisit time of
the satellite is every 44 days. The satellite travels at an altitude of
568 km, at an inclination of 97.67°.
Table 20: JERS-1 Bands and Wavelengths
Band   Wavelength
1      0.52 to 0.60 μm
2      0.63 to 0.69 μm
3      0.76 to 0.86 μm
4*     0.76 to 0.86 μm
5      1.60 to 1.71 μm
6      2.01 to 2.12 μm
7      2.13 to 2.25 μm
8      2.27 to 2.40 μm
* Viewing 15.3° forward
Source: Earth Remote Sensing Data Analysis Center, 2000.
JERS-1 data comes in two different formats: European and
Worldwide. The European data format consists mainly of coverage
for Europe and Antarctica. The Worldwide data format has images
that were acquired from stations around the globe. According to
NASA, a reduction in transmitter power has limited the use of JERS-
1 data (National Aeronautics and Space Administration, 1996).
Source: Eurimage, 1998; National Aeronautics and Space
Administration, 1996.
RADARSAT
RADARSAT satellites carry SARs, which are capable of transmitting
signals that can be received through clouds and during nighttime
hours. RADARSAT satellites have multiple imaging modes for
collecting data, which include Fine, Standard, Wide, ScanSAR
Narrow, ScanSAR Wide, Extended (H), and Extended (L). The
resolution and swath width varies with each one of these modes, but
in general, Fine offers the best resolution: 8 m.
The types of RADARSAT image products include: Single Data, Single
Look Complex, Path Image, Path Image Plus, Map Image, Precision
Map Image, and Orthorectified. You can obtain this data in forms
ranging from CD-ROM to print.
Table 21: RADARSAT Beam Mode Resolution
Beam Mode                   Resolution
Fine Beam Mode              8 m
Standard Beam Mode          25 m
Wide Beam Mode              30 m
ScanSAR Narrow Beam Mode    50 m
ScanSAR Wide Beam Mode      100 m
Extended High Beam Mode     25 m
Extended Low Beam Mode      35 m
The RADARSAT satellite uses a single frequency, C-band. The
altitude of the satellite is 496 miles, or 798 km. The satellite is able
to image the entire Earth, and its path is repeated every 24 days.
The swath width is 500 km. Daily coverage is available of the Arctic,
and any area of Canada can be obtained within three days.
Source: RADARSAT, 1999; Space Imaging, 1999c
SIR-A
SIR stands for Spaceborne Imaging Radar. SIR-A was launched and
began collecting data in 1981. The SIR-A mission built on the Seasat
SAR mission that preceded it by increasing the incidence angle with
which it captured images. The primary goal of the SIR-A mission was
to collect geological information. This information did not have as
pronounced a layover effect as previous imagery.
An important achievement of SIR-A data is that it is capable of
penetrating surfaces to obtain information. For example, NASA says
that the L-band capability of SIR-A enabled the discovery of dry river
beds in the Sahara Desert.
SIR-A uses L-band, has a swath width of 50 km, a range resolution
of 40 m, and an azimuth resolution of 40 m (Atlantis Scientific, Inc.,
1997).
For information on the ERDAS IMAGINE software that reduces
layover effect, IMAGINE OrthoRadar, see "IMAGINE OrthoRadar
Theory".
Source: National Aeronautics and Space Administration, 1995a;
National Aeronautics and Space Administration, 1996; Atlantis
Scientific, Inc., 1997.
SIR-B
SIR-B was launched and began collecting data in 1984. SIR-B
improved over SIR-A by using an articulating antenna. This antenna
allowed the incidence angle to range between 15 and 60 degrees.
This enabled the mapping of surface features using multiple-
incidence angle backscatter signatures (National Aeronautics and
Space Administration, 1996).
SIR-B uses L-band, has a swath width of 10-60 km, a range
resolution of 60-10 m, and an azimuth resolution of 25 m (Atlantis
Scientific, Inc., 1997).
Source: National Aeronautics and Space Administration, 1995a,
National Aeronautics and Space Administration, 1996; Atlantis
Scientific, Inc., 1997.
SIR-C
SIR-C is part of a radar system, SIR-C/X-SAR, which was launched
in 1994. The system is able to . . .measure, from space, the radar
signature of the surface at three different wavelengths, and to make
measurements for different polarizations at two of those
wavelengths (National Aeronautics and Space Administration,
1997). Moreover, it can supply . . .images of the magnitude of radar
backscatter for four polarization combinations (National Aeronautics
and Space Administration, 1995a).
The data provided by SIR-C/X-SAR allows measurement of the
following:
vegetation type, extent, and deforestation
soil moisture content
ocean dynamics, wave and surface wind speeds and directions
volcanism and tectonic activity
soil erosion and desertification
The antenna of the system is composed of three antennas: one at L-
band, one at C-band, and one at X-band. The antenna was
assembled by the Jet Propulsion Laboratory. The acquisition of data
at three different wavelengths makes SIR-C/X-SAR data very useful.
The SIR-C and X-SAR do not have to be operated together: they can
also be operated independent of one another.
SIR-C/X-SAR data come in resolutions from 10 to 200 m. The swath
width of the sensor varies from 15 to 90 km, which depends on the
direction the antenna is pointing. The system orbits the Earth at 225
km above the surface.
Source: National Aeronautics and Space Administration, 1995a,
National Aeronautics and Space Administration, 1997.
Table 22: SIR-C/X-SAR Bands and Frequencies
Bands     Wavelength
L-Band    0.235 m
C-Band    0.058 m
X-Band    0.031 m

Future Radar Sensors
Several radar satellites are planned for launch within the next
several years, but only a few programs will be successful. Following
are two scheduled programs which are known to be highly
achievable.
Light SAR
NASA and Jet Propulsion Laboratories (JPL) are currently designing
a radar satellite called Light SAR. Present plans are for this to be a
multipolar sensor operating at L-band.
Radarsat-2
The Canadian Space Agency is working on the follow-on system to
Radarsat 1. Present plans are to include multipolar, C-band imagery.
Image Data from Aircraft
Image data can also be acquired from multispectral scanners or
radar sensors aboard aircraft, as well as satellites. This is useful if
there is not time to wait for the next satellite to pass over a particular
area, or if it is necessary to achieve a specific spatial or spectral
resolution that cannot be attained with satellite sensors.
For example, this type of data can be beneficial in the event of a
natural or man-made disaster, because there is more control over
when and where the data are gathered.
Two common types of airborne image data are:
Airborne Synthetic Aperture Radar (AIRSAR)
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)
AIRSAR
AIRSAR is an experimental airborne radar sensor developed by JPL,
Pasadena, California, under a contract with NASA. AIRSAR data have
been available since 1983.
This sensor collects data at three frequencies:
C-band
L-band
P-band
Because this sensor measures at three different wavelengths,
different scales of surface roughness are obtained. The AIRSAR
sensor has an IFOV of 10 m and a swath width of 12 km.
AIRSAR data have been used in many applications such as
measuring snow wetness, classifying vegetation, and estimating soil
moisture.
NOTE: These data are distributed in a compressed format. They
must be decompressed before loading with an algorithm available
from JPL. See "Addresses to Contact" for contact information.
AVIRIS
The AVIRIS was also developed by JPL under a contract with NASA.
AVIRIS data have been available since 1987.
This sensor produces multispectral data that have 224 narrow
bands. These bands are 10 nm wide and cover the spectral range of
0.4 to 2.4 μm. The swath width is 11 km, and the spatial resolution is
20 m. This sensor is flown at an altitude of approximately 20 km. The
data are recorded at 10-bit radiometric resolution.
Daedalus TMS
Daedalus is a thematic mapper simulator (TMS), which simulates the
characteristics, such as spatial and radiometric resolution, of the TM
sensor on Landsat spacecraft.
The Daedalus TMS is flown at 65,000 feet, and has a ground resolution
of 25 meters. The total scan angle is 43 degrees, and the swath
width is 15.6 km. Daedalus TMS is flown aboard the NASA ER-2
aircraft.
The Daedalus TMS spectral bands are as follows:

Table 23: Daedalus TMS Bands and Wavelengths
Daedalus Channel   TM Band   Wavelength
1                  A         0.42 to 0.45 μm
2                  1         0.45 to 0.52 μm
3                  2         0.52 to 0.60 μm
4                  B         0.60 to 0.62 μm
5                  3         0.63 to 0.69 μm
6                  C         0.69 to 0.75 μm
7                  4         0.76 to 0.90 μm
8                  D         0.91 to 1.05 μm
9                  5         1.55 to 1.75 μm
10                 7         2.08 to 2.35 μm
11                 6         8.5 to 14.0 μm (low gain)
12                 6         8.5 to 14.0 μm (high gain)

Source: National Aeronautics and Space Administration, 1995b

Image Data from Scanning
Hardcopy maps and photographs can be incorporated into the
ERDAS IMAGINE environment through the use of a scanning device
to transfer them into a digital (raster) format.
In scanning, the map, photograph, transparency, or other object to
be scanned is typically placed on a flat surface, and the scanner
scans across the object to record the image. The image is then
transferred from analog to digital data.
There are many commonly used scanners for GIS and other desktop
applications, such as Eikonix (Eikonix Corp., Huntsville, Alabama) or
Vexcel (Vexcel Imaging Corp., Boulder, Colorado). Many scanners
produce a Tagged Image File Format (TIFF) file, which can be used
directly by ERDAS IMAGINE.
Use the Import/Export function to import scanned data. Eikonix
data can be obtained in the ERDAS IMAGINE .img format using
the XSCAN Tool by Ektron and then imported directly into
ERDAS IMAGINE.
Photogrammetric Scanners
There are photogrammetric quality scanners and desktop
scanners. Photogrammetric quality scanners are special devices
capable of high image quality and excellent positional accuracy. Use
of this type of scanner results in geometric accuracies similar to
traditional analog and analytical photogrammetric instruments.
These scanners are necessary for digital photogrammetric
applications that have high accuracy requirements.
These units usually scan only film because film is superior to paper,
both in terms of image detail and geometry. These units usually have
a Root Mean Square Error (RMSE) positional accuracy of 4 microns
or less, and are capable of scanning at a maximum resolution of 5 to
10 microns.
The required pixel resolution varies depending on the application.
Aerial triangulation and feature collection applications often scan in
the 10 to 15 micron range. Orthophoto applications often use 15- to
30-micron pixels. Color film is less sharp than panchromatic,
therefore color ortho applications often use 20- to 40-micron pixels.
Desktop Scanners
Desktop scanners are general purpose devices. They lack the image
detail and geometric accuracy of photogrammetric quality units, but
they are much less expensive. When using a desktop scanner, you
should make sure that the active area is at least 9 × 9 inches (i.e.,
A3-type scanners), enabling you to capture the entire photo frame.
Desktop scanners are appropriate for less rigorous uses, such as
digital photogrammetry in support of GIS or remote sensing
applications. Calibrating these units improves geometric accuracy,
but the results are still inferior to photogrammetric units. The image
correlation techniques that are necessary for automatic tie point
collection and elevation extraction are often sensitive to scan quality.
Therefore, errors can be introduced into the photogrammetric
solution that are attributable to scanning errors.
Aerial Photography
Aerial photographs, such as NAPP photos, are the most widely used
data sources in photogrammetry. They cannot be utilized in softcopy or
digital photogrammetric applications until scanned. The standard
dimensions of the aerial photos are 9 × 9 inches or 230 × 230 mm.
The ground area covered by the photo depends on the scale. The
scanning resolution determines the digital image file size and pixel
size.
For example, for a 1:40,000 scale standard black and white aerial
photo scanned at 25 microns (1016 dots per inch), the ground pixel
size is 1 × 1 m². The resulting file size is about 85 MB. It is not
recommended to scan a photo with a scanning resolution less than
5 microns or larger than 5080 dpi.
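The numbers in this example follow directly from the scan resolution and the photo scale; the short calculation below (assuming a single-band, 8-bit scan) reproduces them:

# Ground pixel size and file size for a 230 x 230 mm photo at 1:40,000 scale,
# scanned at 25 microns, assuming one band stored at one byte per pixel.
scale = 40_000
scan_resolution_mm = 0.025          # 25 microns
photo_side_mm = 230

ground_pixel_m = scan_resolution_mm / 1000 * scale     # 1.0 m on the ground
pixels_per_side = photo_side_mm / scan_resolution_mm   # 9,200 pixels
file_size_mb = pixels_per_side ** 2 / 1e6              # about 85 MB
print(ground_pixel_m, pixels_per_side, file_size_mb)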
DOQs
DOQ stands for digital orthophoto quadrangle. USGS defines a DOQ
as a computer-generated image of an aerial photo, which has been
orthorectified to give it map coordinates. DOQs can provide accurate
map measurements.
The format of the DOQ is a grayscale image that covers 3.75 minutes
of latitude by 3.75 minutes of longitude. DOQs use the North
American Datum of 1983, and the Universal Transverse Mercator
projection. Each pixel of a DOQ represents a square meter. 3.75-
minute quarter quadrangles have a 1:12,000 scale. 7.5-minute
quadrangles have a 1:24,000 scale. Some DOQs are available in
color-infrared, which is especially useful for vegetation monitoring.
DOQs can be used in land use and planning, management of natural
resources, environmental impact assessments, and watershed
analysis, among other applications. A DOQ can also be used as a
cartographic base on which to overlay any number of associated
thematic layers for displaying, generating, and modifying planimetric
data or associated data files (United States Geological Survey,
1999b).
According to the USGS:
DOQ production begins with an aerial photo and requires four
elements: (1) at least three ground positions that can be
identified within the photo; (2) camera calibration specifications,
such as focal length; (3) a digital elevation model (DEM) of the
area covered by the photo; (4) and a high-resolution digital
image of the photo, produced by scanning. The photo is
processed pixel by pixel to produce an image with features in true
geographic positions (United States Geological Survey, 1999b).
Source: United States Geological Survey, 1999b.
ADRG Data
ADRG (ARC Digitized Raster Graphic) data come from the National
Imagery and Mapping Agency (NIMA), which was formerly known as
the Defense Mapping Agency (DMA). ADRG data are primarily used
for military purposes by defense contractors. The data are in 128 ×
128 pixel tiled, 8-bit format stored on CD-ROM. ADRG data provide
large amounts of hardcopy graphic data without having to store and
maintain the actual hardcopy graphics.
ADRG data consist of digital copies of NIMA hardcopy graphics
transformed into the ARC system and accompanied by ASCII
encoded support files. These digital copies are produced by scanning
each hardcopy graphic into three images: red, green, and blue. The
data are scanned at a nominal collection interval of 100 microns (254
lines per inch). When these images are combined, they provide a 3-
band digital representation of the original hardcopy graphic.
ARC System
The ARC system (Equal Arc-Second Raster Chart/Map) provides a
rectangular coordinate and projection system at any scale for the
Earth's ellipsoid, based on the World Geodetic System 1984 (WGS
84). The ARC System divides the surface of the ellipsoid into 18
latitudinal bands called zones. Zones 1 - 9 cover the Northern
hemisphere and zones 10 - 18 cover the Southern hemisphere. Zone
9 is the North Polar region. Zone 18 is the South Polar region.
Distribution Rectangles
For distribution, ADRG are divided into geographic data sets called
Distribution Rectangles (DRs). A DR may include data from one or
more source charts or maps. The boundary of a DR is a geographic
rectangle that typically coincides with chart and map neatlines.
Zone Distribution Rectangles
Each DR is divided into Zone Distribution Rectangles (ZDRs). There
is one ZDR for each ARC System zone covering any part of the DR.
The ZDR contains all the DR data that fall within that zone's limits.
ZDRs typically overlap by 1,024 rows of pixels, which allows for
easier mosaicking. Each ZDR is stored on the CD-ROM as a single
raster image file (.IMG). Included in each image file are all raster
data for a DR from a single ARC System zone, and padding pixels
needed to fulfill format requirements. The padding pixels are black
and have a zero value.
The padding pixels are not imported by ERDAS IMAGINE, nor are
they counted when figuring the pixel height and width of each
image.
ADRG File Format
Each CD-ROM contains up to eight different file types which make up
the ADRG format. ERDAS IMAGINE imports three types of ADRG data
files:
.OVR (Overview)
.IMG (Image)
.Lxx (Legend or marginalia data)
NOTE: Compressed ADRG (CADRG) is a different format, which may
be imported or read directly.
The ADRG .IMG and .OVR file formats are different from the
ERDAS IMAGINE .img and .ovr file formats.
.OVR (overview)
The overview file contains a 16:1 reduced resolution image of the
whole DR. There is an overview file for each DR on a CD-ROM.
Importing ADRG Subsets
Since DRs can be rather large, it may be beneficial to import a subset
of the DR data for the application. ERDAS IMAGINE enables you to
define a subset of the data from the preview image (see Figure 30).
You can import from only one ZDR at a time. If a subset covers
multiple ZDRs, they must be imported separately and mosaicked
with the Mosaic option.
Figure 29: ADRG Overview File Displayed in a Viewer
The white rectangle in Figure 30 represents the DR. The subset area
in this illustration would have to be imported as three files: one for
each zone in the DR.
Notice how the ZDRs overlap. Therefore, the .IMG files for Zones 2
and 4 would also be included in the subset area.
Figure 30: Subset Area with Overlapping ZDRs
.IMG (scanned image data)
The .IMG files are the data files containing the actual scanned
hardcopy graphic(s). Each .IMG file contains one ZDR plus padding
pixels. The Import function converts the .IMG data files on the CD-
ROM to the ERDAS IMAGINE file format (.img). The image file can
then be displayed in a Viewer.
.Lxx (legend data)
Legend files contain a variety of diagrams and accompanying
information. This is information that typically appears in the margin
or legend of the source graphic.
This information can be imported into ERDAS IMAGINE and
viewed. It can also be added to a map composition with the
ERDAS IMAGINE Map Composer.
Each legend file contains information based on one of these diagram
types:
Index (IN): shows the approximate geographical position of the
graphic and its relationship to other graphics in the region.
Elevation/Depth Tint (EL): depicts the colors or tints using a
multicolored graphic that represent different elevations or depth
bands on the printed map or chart.
Slope (SL): represents the percent and degree of slope
appearing in slope bands.
Boundary (BN): depicts the geopolitical boundaries included on
the map or chart.
Accuracy (HA, VA, AC): depicts the horizontal and vertical
accuracies of selected map or chart areas. AC represents a
combined horizontal and vertical accuracy diagram.
Geographic Reference (GE): depicts the positioning information
as referenced to the World Geographic Reference System.
Grid Reference (GR): depicts specific information needed for
positional determination with reference to a particular grid
system.
Glossary (GL): gives brief lists of foreign geographical names
appearing on the map or chart with their English-language
equivalents.
Landmark Feature Symbols (LS): depict navigationally-
prominent entities.
ARC System Charts
The ADRG data on each CD-ROM are based on one of these chart
types from the ARC system:
Each ARC System chart type has certain legend files associated with
the image(s) on the CD-ROM. The legend files associated with each
chart type are checked in Table 25.
Table 24: ARC System Chart Types

ARC System Chart Type                        Scale
GNC (Global Navigation Chart)                1:5,000,000
JNC-A (Jet Navigation Chart - Air)           1:3,000,000
JNC (Jet Navigation Chart)                   1:2,000,000
ONC (Operational Navigation Chart)           1:1,000,000
TPC (Tactical Pilot Chart)                   1:500,000
JOG-A (Joint Operations Graphic - Air)       1:250,000
JOG-G (Joint Operations Graphic - Ground)    1:250,000
JOG-C (Joint Operations Graphic - Combined)  1:250,000
JOG-R (Joint Operations Graphic - Radar)     1:250,000
ATC (Series 200 Air Target Chart)            1:200,000
TLM (Topographic Line Map)                   1:50,000
ADRG File Naming
Convention
The ADRG file naming convention is based on a series of codes:
ssccddzz
ss = the chart series code (see the table of ARC System charts)
cc = the country code
dd = the DR number on the CD-ROM (01-99). DRs are numbered
beginning with 01 for the northwesternmost DR and increasing
sequentially west to east, then north to south.
zz = the zone rectangle number (01-18)
For example, in the ADRG filename JNUR0101.IMG:
JN = Jet Navigation. This ADRG file is taken from a Jet Navigation
chart.
UR = Europe. The data provide coverage of the European continent.
01 = This is the first DR on the CD-ROM, providing coverage of
the northwestern edge of the image area.
01 = This is the first zone rectangle of the DR.
.IMG = This file contains the actual scanned image data for a
ZDR.
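The naming convention above lends itself to simple string slicing. The following sketch (Python) decodes an ssccddzz-style name; the two-letter series lookup is an illustrative guess based on the JN example above, not an official code list.

    # Decode an ADRG-style file name of the form ssccddzz.IMG.
    # The series table is abbreviated and illustrative; see Table 24.
    ARC_SERIES = {
        "GN": "Global Navigation Chart",
        "JN": "Jet Navigation Chart",
        "ON": "Operational Navigation Chart",
        "TP": "Tactical Pilot Chart",
    }

    def parse_adrg_name(filename):
        stem, _, extension = filename.partition(".")
        if len(stem) != 8:
            raise ValueError("expected an 8-character ssccddzz stem")
        return {
            "series": ARC_SERIES.get(stem[0:2], stem[0:2]),   # ss
            "country_code": stem[2:4],                        # cc
            "distribution_rectangle": int(stem[4:6]),         # dd (01-99)
            "zone_rectangle": int(stem[6:8]),                 # zz (01-18)
            "extension": extension,
        }

    print(parse_adrg_name("JNUR0101.IMG"))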
Table 25: Legend Files for the ARC System Chart Types

ARC System Chart   IN  EL  SL  BN  VA  HA  AC  GE  GR  GL  LS
GNC
JNC / JNC-A
ONC
TPC
JOG-A
JOG-G / JOG-C
JOG-R
ATC
TLM
You may change this name when the file is imported into ERDAS
IMAGINE. If you do not specify a file name, ERDAS IMAGINE
uses the ADRG file name for the image.
Legend File Names
Legend file names include a code to designate the type of diagram
information contained in the file (see the previous legend file
description). For example, the file JNUR01IN.L01 means:
JN = Jet Navigation. This ADRG file is taken from a Jet Navigation
chart.
UR = Europe. The data provide coverage of the European continent.
01 = This is the first DR on the CD-ROM, providing coverage of
the northwestern edge of the image area.
IN = This indicates that this file is an index diagram from the
original hardcopy graphic.
.L01 = This legend file contains information for the source
graphic 01. The source graphics in each DR are numbered
beginning with 01 for the northwesternmost source graphic,
increasing sequentially west to east, then north to south. Source
directories and their files include this number code within their
names.
For more detailed information on ADRG file naming conventions,
see the National Imagery and Mapping Agency Product
Specifications for ARC Digitized Raster Graphics (ADRG),
published by the NIMA Aerospace Center.
ADRI Data
ADRI (ARC Digital Raster Imagery) data, like ADRG data, are also
from NIMA and are currently available only to Department of Defense
contractors. The data are in 128 × 128 tiled, 8-bit format, stored on
8 mm tape in band sequential format.
ADRI consists of SPOT panchromatic satellite imagery transformed
into the ARC system and accompanied by ASCII encoded support
files.
Like ADRG, ADRI data are stored in the ARC system in DRs. Each DR
consists of all or part of one or more images mosaicked to meet the
ARC bounding rectangle, which encloses a 1 degree by 1 degree
geographic area. (See Figure 31.) Source images are orthorectified
to mean sea level using NIMA Level I Digital Terrain Elevation Data
(DTED) or equivalent data (Air Force Intelligence Support Agency,
1991).
See the previous section on ADRG data for more information on
the ARC system. See "DTED" for more information.
Figure 31: Seamless Nine Image DR
In ADRI data, each DR contains only one ZDR. Each ZDR is stored as
a single raster image file, with no overlapping areas.
There are six different file types that make up the ADRI format: two
types of data files, three types of header files, and a color test patch
file. ERDAS IMAGINE imports two types of ADRI data files:
.OVR (Overview)
.IMG (Image)
The ADRI .IMG and .OVR file formats are different from the
ERDAS IMAGINE .img and .ovr file formats.
.OVR (overview)
The overview file (.OVR) contains a 16:1 reduced resolution image
of the whole DR. There is an overview file for each DR on a tape. The
.OVR images show the mosaicking from the source images and the
dates when the source images were collected. (See Figure 32.) This
does not appear on the ZDR image.
Figure 32: ADRI Overview File Displayed in a Viewer
.IMG (scanned image data)
The .IMG files contain the actual mosaicked images. Each .IMG file
contains one ZDR plus any padding pixels needed to fit the ARC
boundaries. Padding pixels are black and have a zero data value. The
ERDAS IMAGINE Import function converts the .IMG data files to the
ERDAS IMAGINE file format (.img). The image file can then be
displayed in a Viewer. Padding pixels are not imported, nor are they
counted in image height or width.
ADRI File Naming
Convention
The ADRI file naming convention is based on a series of codes:
ssccddzz
ss = the image source code:
- SP (SPOT panchromatic)
- SX (SPOT multispectral) (not currently available)
- TM (Landsat Thematic Mapper) (not currently available)
cc = the country code
dd = the DR number on the tape (01-99). DRs are numbered
beginning with 01 for the northwesternmost DR and increasing
sequentially west to east, then north to south.
zz = the zone rectangle number (01-18)
For example, in the ADRI filename SPUR0101.IMG:
SP = SPOT 10 m panchromatic image
UR = Europe. The data provide coverage of the European continent.
01 = This is the first Distribution Rectangle on the tape,
providing coverage of the northwestern edge of the image area.
01 = This is the first zone rectangle of the Distribution Rectangle.
.IMG = This file contains the actual scanned image data for a
ZDR.
You may change this name when the file is imported into ERDAS
IMAGINE. If you do not specify a file name, ERDAS IMAGINE
uses the ADRI file name for the image.
Raster Product
Format
The Raster Product Format (RPF), from NIMA, is primarily used for
military purposes by defense contractors. RPF data are organized in
1536 × 1536 frames, with an internal tile size of 256 × 256 pixels.
RPF data are stored in an 8-bit format, with or without a pseudocolor
lookup table, on CD-ROM.
RPF Data are projected to the ARC system, based on the World
Geodetic System 1984 (WGS 84). The ARC System divides the
surface of the ellipsoid into 18 latitudinal bands called zones. Zones
1-9 cover the Northern hemisphere and zones A-J cover the
Southern hemisphere. Zone 9 is the North Polar region. Zone J is the
South Polar region.
Polar data is projected to the Azimuthal Equidistant projection. In
nonpolar zones, data is in the Equirectangular projection, which is
proportional to latitude and longitude. ERDAS IMAGINE includes the
option to use either Equirectangular or Geographic coordinates for
nonpolar RPF data. The aspect ratio of projected RPF data is nearly
1; frames appear to be square, and measurement is possible.
Unprojected RPFs seldom have an aspect ratio of 1, but may be
easier to combine with other data in Geographic coordinates.
Two military products are currently based upon the general RPF
specification:
Controlled Image Base (CIB)
Compressed ADRG (CADRG)
RPF employs Vector Quantization (VQ) to compress the frames. A
vector is a 4 × 4 tile of 8-bit pixel values. VQ evaluates all of the
vectors within the image, and reduces each vector into a single 12-
bit lookup value. Since only 4096 unique vector values are possible,
VQ is lossy, but the space savings are substantial. Most of the
processing effort of VQ is incurred in the compression stage,
permitting fast decompression by the users of the data in the field.
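The following sketch illustrates the general idea of vector quantization on an 8-bit image: cut the image into 4 × 4 tiles, build a codebook by clustering, and replace each tile with the index of its nearest codebook entry. It uses scikit-learn's KMeans and a reduced codebook size purely for illustration; it is not the RPF codec itself.

    import numpy as np
    from sklearn.cluster import KMeans

    def vq_compress(image, codebook_size=256):
        # Cut an 8-bit image into 4 x 4 tiles (vectors). RPF uses a
        # 4096-entry (12-bit) codebook; a smaller one is used here so
        # the example runs quickly.
        h, w = image.shape
        h, w = h - h % 4, w - w % 4
        tiles = (image[:h, :w]
                 .reshape(h // 4, 4, w // 4, 4)
                 .swapaxes(1, 2)
                 .reshape(-1, 16)
                 .astype(np.float32))
        km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(tiles)
        codebook = km.cluster_centers_.round().astype(np.uint8)
        indices = km.labels_.reshape(h // 4, w // 4)   # one lookup value per tile
        return indices, codebook

    def vq_decompress(indices, codebook):
        # Decompression is only a table lookup, which is why it is fast.
        tiles = codebook[indices].reshape(*indices.shape, 4, 4)
        return tiles.swapaxes(1, 2).reshape(indices.shape[0] * 4,
                                            indices.shape[1] * 4)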
RPF data are stored on CD-ROM, with the following structure:
The root of the CD-ROM contains an RPF directory. This RPF
directory is often referred to as the root of the product.
The RPF directory contains a table-of-contents file, named
A.TOC, which describes the location of all of the frames in the
product, and
The RPF directory contains one or more subdirectories containing
RPF frame files. RPF frame file names typically encode the map
zone and location of the frame within the map series.
Overview images may appear at various points in the directory
tree. Overview images illustrate the location of a set of frames
with respect to political and geographic boundaries. Overview
images typically have an .OVx file extension, such as .OVR or
.OV1.
All RPF frames, overview images, and table-of-contents files are
physically formatted within an NITF message. Since an RPF image is
broken up into several NITF messages, ERDAS IMAGINE treats RPF
and NITF as distinct formats.
Loading RPF Data
RPF frames may be imported or read directly. The direct read
feature, included in ERDAS IMAGINE, is generally preferable since
multiple frames with the same resolution can be read as a single
image. Import may still be desirable if you wish to examine the
metadata provided by a specific frame. ERDAS IMAGINE supplies
four image types related to RPF:
RPF Product: combines the entire contents of an RPF CD,
excluding overview images, as a single image, provided all
frames are within the same ARC map zone and resolution. The
RPF directory at the root of the CD-ROM is the image to be
loaded.
RPF Cell: combines all of the frames within a given subdirectory,
provided they all have the same resolution and reside within the
same ARC map zone. The directory containing frames is the
image to be read as an RPF cell.
RPF Frame: reads a single frame file.
RPF Overview: reads a single overview frame file.
CIB
CIB is grayscale imagery produced from rectified imagery and
physically formatted as a compressed RPF. CIB offers a compression
ratio of 8:1 over its predecessor, ADRI. CIB is often based upon
SPOT panchromatic data or reformatted ADRI data, but can be
produced from other sources of imagery.
CADRG
CADRG data consist of digital copies of NIMA hardcopy graphics
transformed into the ARC system. The data are scanned at a nominal
collection interval of 150 microns. The resulting image is 8-bit
pseudocolor, which is physically formatted as a compressed RPF.
CADRG is a successor to ADRG, Compressed Aeronautical Chart
(CAC), and Compressed Raster Graphics (CRG). CADRG offers a
compression ratio of 55:1 over ADRG, due to the coarser collection
interval, VQ compression, and the encoding as 8-bit pseudocolor,
instead of 24-bit truecolor.
Topographic Data
Satellite data can also be used to create elevation, or topographic,
data through the use of stereoscopic pairs, as discussed above under
SPOT. Radar sensor data can also be a source of topographic
information, as discussed in Terrain Analysis. However, most
available elevation data are created with stereo photography and
topographic maps.
ERDAS IMAGINE software can load and use:
USGS DEMs
DTED
Arc/second Format
Most elevation data are in arc/second format. Arc/second refers to
data in the Latitude/Longitude (Lat/Lon) coordinate system. The
data are not rectangular, but follow the arc of the Earth's latitudinal
and longitudinal lines.
Each degree of latitude and longitude is made up of 60 minutes. Each
minute is made up of 60 seconds. Arc/second data are often referred
to by the number of seconds in each pixel. For example, 3
arc/second data have pixels which are 3 × 3 seconds in size. The
actual area represented by each pixel is a function of its latitude.
Figure 33 illustrates a 1° × 1° area of the Earth.
A row of data file values from a DEM or DTED file is called a profile.
The profiles of DEM and DTED run south to north, that is, the first
pixel of the record is the southernmost pixel.
Figure 33: Arc/second Format
In Figure 33, there are 1201 pixels in the first row and 1201 pixels
in the last row, but the area represented by each pixel increases in
size from the top of the file to the bottom of the file. The extracted
section in the example above has been exaggerated to illustrate this
point.
Arc/second data used in conjunction with other image data, such as
TM or SPOT, must be rectified or projected onto a planar coordinate
system such as UTM.
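A rough worked example of how much ground an arc/second pixel covers, assuming a spherical Earth of radius 6,371 km (an approximation; the exact figures depend on the ellipsoid used):

    import math

    EARTH_RADIUS_M = 6_371_000  # mean spherical radius (approximation)

    def arcsecond_pixel_size(latitude_deg, seconds=3.0):
        # North-south extent is essentially constant; east-west extent
        # shrinks with the cosine of latitude.
        meters_per_arcsec = (math.pi / 180.0 / 3600.0) * EARTH_RADIUS_M
        ns = seconds * meters_per_arcsec
        ew = ns * math.cos(math.radians(latitude_deg))
        return ns, ew

    print(arcsecond_pixel_size(0))    # roughly (92.7, 92.7) m at the equator
    print(arcsecond_pixel_size(60))   # roughly (92.7, 46.3) m at 60 degrees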
DEM
DEMs are digital elevation model data. DEM was originally a term
reserved for elevation data provided by the USGS, but it is now used
to describe any digital elevation data.
DEMs can be:
purchased from USGS (for US areas only)
created from stereopairs (derived from satellite data or aerial
photographs)
See Terrain Analysis for more information on using DEMs. See
"Ordering Raster Data" for information on ordering DEMs.
USGS DEMs
There are two types of DEMs that are most commonly available from
USGS:
1:24,000 scale, also called 7.5-minute DEM, is usually referenced
to the UTM coordinate system. It has a spatial resolution of
30 × 30 m.
1:250,000 scale is available only in Arc/second format.
Both types have a 16-bit range of elevation values, meaning each
pixel can have a possible elevation of -32,768 to 32,767.
DEM data are stored in ASCII format. The data file values in ASCII
format are stored as ASCII characters rather than as zeros and ones
like the data file values in binary data.
DEM data files from USGS are initially oriented so that North is on
the right side of the image instead of at the top. ERDAS IMAGINE
rotates the data 90° counterclockwise as part of the Import process
so that coordinates read with any ERDAS IMAGINE program are
correct.
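Conceptually, the 90° rotation described above is the same operation as numpy's rot90 on an elevation array; the tiny grid below is a stand-in, not the actual importer code.

    import numpy as np

    # np.rot90 rotates counterclockwise by default, turning a grid with
    # North on the right side into a North-up grid.
    raw = np.array([[1, 2, 3],
                    [4, 5, 6]])        # stand-in for profile-ordered elevations
    north_up = np.rot90(raw)
    print(north_up)                    # shape (3, 2) after rotation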
DTED
DTED data are produced by the National Imagery and Mapping
Agency (NIMA) and are available only to US government agencies
and their contractors. DTED data are distributed on 9-track tapes
and on CD-ROM.
There are two types of DTED data available:
DTED 1: a 1° × 1° area of coverage
DTED 2: a 1° × 1° or less area of coverage
Both are in Arc/second format and are distributed in cells. A cell is a
1° × 1° area of coverage. Both have a 16-bit range of elevation
values.
Like DEMs, DTED data files are also oriented so that North is on the
right side of the image instead of at the top. ERDAS IMAGINE rotates
the data 90° counterclockwise as part of the Import process so that
coordinates read with any ERDAS IMAGINE program are correct.
Using Topographic Data
Topographic data have many uses in a GIS. For example,
topographic data can be used in conjunction with other data to:
calculate the shortest and most navigable path over a mountain
range
assess the visibility from various lookout points or along roads
simulate travel through a landscape
determine rates of snow melt
orthocorrect satellite or airborne images
create aspect and slope layers
provide ancillary data for image classification
See Terrain Analysis for more information about using
topographic and elevation data.
GPS Data
Introduction
Global Positioning System (GPS) data has been in existence since the
launch of the first satellite in the US Navigation System with Time
and Ranging (NAVSTAR) system on February 22, 1978, and the
availability of a full constellation of satellites since 1994. Initially, the
system was available to US military personnel only, but from 1993
onwards the system started to be used (in a degraded mode) by the
general public. There is also a Russian GPS system called GLONASS
with similar capabilities.
The US NAVSTAR GPS consists of a constellation of 24 satellites
orbiting the Earth, broadcasting data that allows a GPS receiver to
calculate its spatial position.
Satellite Position
Positions are determined through the traditional ranging technique.
The satellites orbit the Earth (at an altitude of 20,200 km) in such a
manner that several are always visible at any location on the Earth's
surface. A GPS receiver with line of sight to a GPS satellite can
determine how long the signal broadcast by the satellite has taken
to reach its location, and therefore can determine the distance to the
satellite. Thus, if the GPS receiver can see three or more satellites
and determine the distance to each, the GPS receiver can calculate
its own position based on the known positions of the satellites (i.e.,
the intersection of the spheres of distance from the satellite
locations). Theoretically, only three satellites should be required to
find the 3D position of the receiver, but various inaccuracies (largely
based on the quality of the clock within the GPS receiver that is used
to time the arrival of the signal) mean that at least four satellites are
generally required to determine a three-dimensional (3D) x, y, z
position.
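A small numerical sketch of this idea: given four or more satellite positions and measured ranges, the receiver position and clock bias can be found by iterative least squares (Gauss-Newton). The setup below is generic and illustrative; actual receivers use considerably more elaborate models.

    import numpy as np

    def solve_position(sat_positions, pseudoranges, iterations=10):
        # sat_positions: (n, 3) satellite coordinates in meters, n >= 4
        # pseudoranges:  (n,) measured ranges in meters (include the
        #                receiver clock error, expressed as a distance)
        sat_positions = np.asarray(sat_positions, dtype=float)
        pseudoranges = np.asarray(pseudoranges, dtype=float)
        x = np.zeros(4)                       # (x, y, z, clock bias)
        for _ in range(iterations):
            diffs = sat_positions - x[:3]
            ranges = np.linalg.norm(diffs, axis=1)
            predicted = ranges + x[3]
            # Jacobian of the predicted ranges: unit vectors plus bias term
            J = np.hstack([-diffs / ranges[:, None],
                           np.ones((len(ranges), 1))])
            dx, *_ = np.linalg.lstsq(J, pseudoranges - predicted, rcond=None)
            x += dx
        return x[:3], x[3]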
The explanation above is an over-simplification of the technique
used, but does show the concept behind the use of the GPS system
for determining position. The accuracy of that position is affected by
several factors, including the number of satellites that can be seen
by a receiver, but especially for commercial users by Selective
Availability. Each satellite actually sends two signals at different
frequencies. One is for civilian use and one for military use. The
signal used for commercial receivers has an error introduced to it
called Selective Availability. Selective Availability introduces a
positional inaccuracy of up to 100m to commercial GPS receivers.
This is mainly intended to limit the use of highly accurate GPS
positioning by hostile users, but the errors can be ameliorated
through various techniques, such as keeping the GPS receiver
stationary, thereby allowing it to average out the errors, or through
more advanced techniques discussed in the following sections.
Differential Correction
Differential Correction (or Differential GPS - DGPS) can be used to
remove the majority of the effects of Selective Availability. The
technique works by using a second GPS unit (or base station) that is
stationary at a precisely known position. As this GPS knows where it
actually is, it can compare this location with the position it calculates
from GPS satellites at any particular time and calculate an error
vector for that time (i.e., the distance and direction that the GPS
reading is in error from the real position). A log of such error vectors
can then be compared with GPS readings taken from the first, mobile
unit (the field unit that is actually taking GPS location readings of
features). Under the assumption that the field unit had line of site to
the same GPS satellites to acquire its position as the base station,
each field-read position (with an appropriate time stamp) can be
compared to the error vector for that time and the position corrected
using the inverse of the vector. This is generally performed using
specialist differential correction software.
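A toy sketch of the post-processed correction step described above: each field reading is matched with the base station's error vector for the nearest timestamp and the inverse of that vector is applied. Real differential correction software also interpolates, works in three dimensions, and accounts for which satellites were used.

    def differential_correct(rover_fixes, base_errors):
        # rover_fixes: list of (timestamp, x, y) field readings
        # base_errors: dict of timestamp -> (dx, dy), where (dx, dy) is the
        #              base station's GPS reading minus its known position
        corrected = []
        for t, x, y in rover_fixes:
            nearest = min(base_errors, key=lambda tb: abs(tb - t))
            dx, dy = base_errors[nearest]
            corrected.append((t, x - dx, y - dy))
        return corrected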
Real Time Differential GPS (RDGPS) takes this technique one step
further by having the base station communicate the error vector via
radio to the field unit in real time. The field unit can then
automatically update its own location in real time. The main
disadvantage of this technique is that the range that a GPS base
station can broadcast over is generally limited, thereby restricting
the range the mobile unit can be used away from the base station.
One of the biggest uses of this technique is for ocean navigation in
coastal areas, where base stations have been set up along coastlines
and around ports so that the GPS systems on board ships can get
accurate real time positional information to help in shallow-water
navigation.
Applications of GPS Data
GPS data finds many uses in remote sensing and GIS applications,
such as:
Collection of ground truth data, even spectral properties of real-
world conditions at known geographic positions, for use in image
classification and validation. The user in the field identifies a
homogeneous area of identifiable land cover or use on the
ground and records its location using the GPS receiver. These
locations can then be plotted over an image to either train a
supervised classifier or to test the validity of a classification.
Moving map applications take the concept of relating the GPS
positional information to your geographic data layers one step
further by having the GPS position displayed in real time over the
geographical data layers. Thus you take a computer out into the
field and connect the GPS receiver to the computer, usually via
the serial port. Remote sensing and GIS data layers are then
displayed on the computer and the positional signal from the GPS
receiver is plotted on top of them.
GPS receivers can be used for the collection of positional
information for known point features on the ground. If these can
be identified in an image, the positional data can be used as
Ground Control Points (GCPs) for geocorrecting the imagery to a
map projection system. If the imagery is of high resolution, this
generally requires differential correction of the positional data.
DGPS data can be used to directly capture GIS data and survey
data for direct use in a GIS or CAD system. In this regard the GPS
receiver can be compared to using a digitizing tablet to collect
data, but instead of pointing and clicking at features on a paper
document, you are pointing and clicking on the real features to
capture the information.
Precision agriculture uses GPS extensively in conjunction with
Variable Rate Technology (VRT). VRT relies on the use of a VRT
controller box connected to a GPS and the pumping mechanism
for a tank full of fertilizers/pesticides/seeds/water/etc. A digital
polygon map (often derived from remotely sensed data) in the
controller specifies a predefined amount to dispense for each
polygonal region. As the tractor pulls the tank around the field
the GPS logs the position that is compared to the map position in
memory. The correct amount is then dispensed at that location.
The aim of this process is to maximize yields without causing any
environmental damage.
GPS is often used in conjunction with airborne surveys. The
aircraft, as well as carrying a camera or scanner, has on board
one or more GPS receivers tied to an inertial navigation system.
As each frame is exposed, precise information is captured (or
calculated in post-processing) on the x, y, z and roll, pitch, yaw
of the aircraft. Each image in the aerial survey block thus has
initial exterior orientation parameters, which minimizes the need
for ground control in a block triangulation process.
Figure 34 shows some additional uses for GPS coordinates.
Figure 34: Common Uses of GPS Data

GPS (available 24 hours per day, worldwide):
Navigation on land, at sea, in the air, and in space
Harbor and river navigation
Navigation of recreational vehicles
High precision kinematic surveys on the ground
Guidance of robots and other machines
Cadastral surveying
Geodetic network densification
High precision aircraft positioning
Photogrammetry without ground control
Monitoring deformation
Hydrographic surveys
Active control stations
Source: Leick, 1990
Ordering Raster
Data
Table 26 describes the different Landsat, SPOT, AVHRR, and DEM
products that can be ordered. Information in this chart does not
reflect all the products that are available, but only the most common
types that can be imported into ERDAS IMAGINE.
Addresses to Contact
For more information about these and related products, contact the
following agencies:
Landsat MSS, TM, and ETM data:
Space Imaging
12076 Grant Street
Thornton, CO 80241 USA
Telephone (US): 800/425-2997
Telephone: 303/254-2000
Fax: 303/254-2215
Internet: www.spaceimage.com
SPOT data:
SPOT Image Corporation
1897 Preston White Dr.
Reston, VA 22091-4368 USA
Telephone: 703-620-2200
Fax: 703-648-1813
Internet: www.spot.com
Table 26: Common Raster Data Products

Data Type                  Ground Covered    Pixel Size      # of Bands   Format Available            Geocoded
Landsat TM Full Scene      185 × 170 km      28.5 m          7            Fast (BSQ)
Landsat TM Quarter Scene   92.5 × 80 km      28.5 m          7            Fast (BSQ)
Landsat MSS Full Scene     185 × 170 km      79 × 56 m       4            BSQ, BIL
SPOT                       60 × 60 km        10 m and 20 m   1 - 3        BIL
NOAA AVHRR (LAC)           2700 × 2700 km    1.1 km          1 - 5        10-bit packed or unpacked
NOAA AVHRR (GAC)           4000 × 4000 km    4 km            1 - 5        10-bit packed or unpacked
USGS DEM 1:24,000          7.5' × 7.5'       30 m            1            ASCII                       UTM
USGS DEM 1:250,000         1° × 1°           3 × 3 seconds   1            ASCII
NOAA AVHRR data:
Office of Satellite Operations
NOAA/National Environment Satellite, Data, and Information
Service
World Weather Building, Room 100
Washington, DC 20233 USA
Internet: www.nesdis.noaa.gov
AVHRR Dundee Format
NERC Satellite Station
University of Dundee
Dundee, Scotland DD1 4HN
Telephone: 44/1382-34-4409
Fax: 44/1382-202-575
Internet: www.sat.dundee.ac.uk
Cartographic data including, maps, airphotos, space images,
DEMs, planimetric data, and related information from federal,
state, and private agencies:
National Mapping Division
U.S. Geological Survey, National Center
12201 Sunrise Valley Drive
Reston, VA 20192 USA
Telephone: 703/648-4000
Internet: mapping.usgs.gov
ADRG data (available only to defense contractors):
NIMA (National Imagery and Mapping Agency)
ATTN: PMSC
Combat Support Center
Washington, DC 20315-0010 USA
ADRI data (available only to defense contractors):
Rome Laboratory/IRRP
Image Products Branch
Griffiss AFB, NY 13440-5700 USA
Landsat data:
Customer Services
U.S. Geological Survey
EROS Data Center
47914 252nd Street
Sioux Falls, SD 57198 USA
Telephone: 800/252-4547
Fax: 605/594-6589
Internet: edcwww.cr.usgs.gov/eros-home.html
ERS-1 radar data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-6413
Fax: 613-238-5425
Internet: www.rsi.ca
JERS-1 (Fuyo 1) radar data:
National Space Development Agency of Japan (NASDA)
Earth Observation Research Center
Roppongi - First Bldg., 1-9-9
Roppongi, Minato-ku
Tokyo 106-0032, Japan
Telephone: 81/3-3224-7040
Fax: 81/3-3224-7051
Internet: yyy.tksc.nasda.go.jp
SIR-A, B, C radar data:
Jet Propulsion Laboratories
California Institute of Technology
4800 Oak Grove Dr.
Pasadena, CA 91109-8099 USA
Telephone: 818/354-4321
Internet: www.jpl.nasa.gov
RADARSAT data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-5424
Fax: 613-238-5425
Internet: www.rsi.ca
Almaz radar data:
NPO Mashinostroenia
Scientific Engineering Center Almaz
33 Gagarin Street
Reutov, 143952, Russia
Telephone: 7/095-538-3018
Fax: 7/095-302-2001
E-mail: [email protected]
U.S. Government RADARSAT sales:
Joel Porter
Lockheed Martin Astronautics
M/S: DC4001
12999 Deer Creek Canyon Rd.
Littleton, CO 80127
Telephone: 303-977-3233
Fax: 303-971-9827
E-mail: [email protected]
Raster Data from
Other Software
Vendors
ERDAS IMAGINE also enables you to import data created by other
software vendors. This way, if another type of digital data system is
currently in use, or if data is received from another system, it can
easily be converted to the ERDAS IMAGINE file format for use in
ERDAS IMAGINE.
Data from other vendors may come in that specific vendor's format,
or in a standard format which can be used by several vendors. The
Import and/or Direct Read function handles these raster data types
from other software systems:
ERDAS Ver. 7.X
GRID and GRID Stacks
JFIF (JPEG)
MrSID
SDTS
Sun Raster
TIFF and GeoTIFF
Other data types might be imported using the Generic Binary
import option.
Vector to Raster Conversion
Vector data can also be a source of raster data by converting it to
raster format.
Convert a vector layer to a raster layer, or vice versa, by using
IMAGINE Vector.
ERDAS Ver. 7.X
The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE
software. The two basic types of ERDAS Ver. 7.X data files are
indicated by the file name extensions:
.LAN: a multiband continuous image file (the name is derived
from the Landsat satellite)
.GIS: a single-band thematic data file in which pixels are divided
into discrete categories (the name is derived from geographic
information system)
.LAN and .GIS image files are stored in the same format. The image
data are arranged in a BIL format and can be 4-bit, 8-bit, or 16-bit.
The ERDAS Ver. 7.X file structure includes:
a header record at the beginning of the file
the data file values
a statistics or trailer file
When you import a .GIS file, it becomes an image file with one
thematic raster layer. When you import a .LAN file, each band
becomes a continuous raster layer within the image file.
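Reading BIL-ordered pixel data is mostly a matter of reshaping. The sketch below is generic: the number of rows, columns, bands, the data type, and the header length are placeholders that would come from the file's header record; it is not the ERDAS importer.

    import numpy as np

    def read_bil(path, rows, cols, bands, dtype=np.uint8, header_bytes=0):
        # In BIL order each image row holds one line for band 1, then one
        # line for band 2, and so on. Returns an array of shape
        # (bands, rows, cols).
        data = np.fromfile(path, dtype=dtype, offset=header_bytes,
                           count=rows * cols * bands)
        return data.reshape(rows, bands, cols).transpose(1, 0, 2)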
GRID and GRID Stacks
GRID is a raster geoprocessing program distributed by
Environmental Systems Research Institute, Inc. (ESRI) in Redlands,
California. GRID is designed to complement the vector data model
of ArcInfo, a well-known vector GIS that is also distributed by
ESRI. The name GRID is taken from the raster data format of
presenting information in a grid of cells.
The data format for GRID is a compressed tiled raster data structure.
Like ArcInfo Coverages, a GRID is stored as a set of files in a
directory, including files to keep the attributes of the GRID.
Each GRID represents a single layer of continuous or thematic
imagery, but it is also possible to combine GRID files into a
multilayer image. A GRID Stack (.stk) file names multiple GRIDs to
be treated as a multilayer image. Starting with ArcInfo version 7.0,
ESRI introduced the STK format, referred to in ERDAS software as
GRID Stack 7.x, which contains multiple GRIDs. The GRID Stack 7.x
format keeps attribute tables for the entire stack in a separate
directory, in a manner similar to that of GRIDs and Coverages.
JFIF (JPEG)
JPEG is a set of compression techniques established by the Joint
Photographic Experts Group (JPEG). The most commonly used form
of JPEG involves Discrete Cosine Transformation (DCT),
thresholding, followed by Huffman encoding. Since the output image
is not exactly the same as the input image, this form of JPEG is
considered to be lossy. JPEG can compress monochrome imagery,
but achieves compression ratios of 20:1 or higher with color (RGB)
imagery, by taking advantage of the fact that the data being
compressed is a visible image. The integrity of the source image is
preserved by focussing its compression on aspects of the image that
are less noticeable to the human eye. JPEG cannot be used on
thematic imagery, due to the change in pixel values.
There is a lossless form of JPEG compression that uses DCT followed
by nonlossy encoding, but it is not frequently used since it only yields
an approximate compression ratio of 2:1. ERDAS IMAGINE only
handles the lossy form of JPEG.
While JPEG compression is used by other file formats, including TIFF,
the JPEG File Interchange Format (JFIF) is a standard file format
used to store JPEG-compressed imagery.
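To see the lossy size/quality trade-off directly, a small experiment with the third-party Pillow library (not part of ERDAS IMAGINE) writes the same continuous image at several quality settings; photo.tif is a placeholder path.

    import os
    from PIL import Image

    img = Image.open("photo.tif").convert("RGB")       # placeholder image
    for quality in (95, 75, 50):
        out = "photo_q%d.jpg" % quality
        img.save(out, format="JPEG", quality=quality)  # lossy DCT-based JPEG
        print(quality, os.path.getsize(out), "bytes")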
The ISO JPEG committee is currently working on a new enhancement
to the JPEG standard known as JPEG 2000, which will incorporate
wavelet compression techniques and more flexibility in JPEG
compression.
MrSID
Multiresolution Seamless Image Database (MrSID, pronounced
Mister Sid) is a wavelet transform-based compression algorithm
designed by LizardTech, Inc. in Seattle, Washington
(http://www.lizardtech.com). The novel developments in MrSID
include a memory efficient implementation and automatic inclusion
of pyramid layers in every data set, both of which make MrSID well-
suited to provide efficient storage and retrieval of very large digital
images.
The underlying wavelet-based compression methodology used in
MrSID yields high compression ratios while satisfying stringent
image quality requirements. The compression technique used in
MrSID is lossy (i.e., the compression-decompression process does
not reproduce the source data pixel-for-pixel). Lossy compression is
not appropriate for thematic imagery, but is essential for large
continuous images since it allows much higher compression ratios
than lossless methods (e.g., the Lempel-Ziv-Welch, LZW, algorithm
used in the GIF and TIFF image formats). At standard compression
ratios, MrSID encoded imagery is visually lossless. On typical
remotely sensed imagery, lossless methods provide compression
ratios of perhaps 2:1, whereas MrSID provides excellent image
quality at compression ratios of 30:1 or more.
SDTS
The Spatial Data Transfer Standard (SDTS) was developed by the
USGS to promote and facilitate the transfer of georeferenced data
and its associated metadata between dissimilar computer systems
without loss of fidelity. To achieve these goals, SDTS uses a flexible,
self-describing method of encoding data, which has enough structure
to permit interoperability.
For metadata, SDTS requires a number of statements regarding data
accuracy. In addition to the standard metadata, the producer may
supply detailed attribute data correlated to any image feature.
SDTS Profiles
The SDTS standard is organized into profiles. Profiles identify a
restricted subset of the standard needed to solve a certain problem
domain. Two subsets of interest to ERDAS IMAGINE users are:
Topological Vector Profile (TVP), which covers attributed vector
data. This is imported via the SDTS (Vector) title.
SDTS Raster Profile and Extensions (SRPE), which covers gridded
raster data. This is imported as SDTS Raster.
For more information on SDTS, consult the SDTS web page at
http://mcmcweb.er.usgs.gov/sdts.
SUN Raster
A SUN Raster file is an image captured from a monitor display. In
addition to GIS, SUN Raster files can be used in desktop publishing
applications or any application where a screen capture would be
useful.
There are two basic ways to create a SUN Raster file on a SUN
workstation:
use the OpenWindows Snapshot application
use the UNIX screendump command
Both methods read the contents of a frame buffer and write the
display data to a user-specified file. Depending on the display
hardware and options chosen, screendump can create any of the file
types listed in Table 27.
The data are stored in BIP format.
TIFF
TIFF was developed by Aldus Corp. (Seattle, Washington) in 1986 in
conjunction with major scanner vendors who needed an easily
portable file format for raster image data. Today, the TIFF format is
a widely supported format used in video, fax transmission, medical
imaging, satellite imaging, document storage and retrieval, and
desktop publishing applications. In addition, the GeoTIFF extensions
permit TIFF files to be geocoded.
The TIFF format's main appeal is its flexibility. It handles black and
white line images, as well as gray scale and color images, which can
be easily transported between different operating systems and
computers.
TIFF File Formats
TIFF's great flexibility can also cause occasional problems in
compatibility. This is because TIFF is really a family of file formats
that comprises a variety of elements within the format.
Table 28 shows key Baseline TIFF format elements and the values
for those elements supported by ERDAS IMAGINE.
Any TIFF file that contains an unsupported value for one of these
elements may not be compatible with ERDAS IMAGINE.
Table 27: File Types Created by Screendump

File Type                            Available Compression
1-bit black and white                None, RLE (run-length encoded)
8-bit color paletted (256 colors)    None, RLE
24-bit RGB true color                None, RLE
32-bit RGB true color                None, RLE
GeoTIFF
According to the GeoTIFF Format Specification, Revision 1.0, "The
GeoTIFF spec defines a set of TIFF tags provided to describe all
Cartographic information associated with TIFF imagery that
originates from satellite imaging systems, scanned aerial
photography, scanned maps, digital elevation models, or as a result
of geographic analysis" (Ritter and Ruth, 1995).
The GeoTIFF format separates cartographic information into two
parts: georeferencing and geocoding.
Table 28: The Most Common TIFF Format Elements

Byte Order          Intel (LSB/MSB), Motorola (MSB/LSB)
Image Type          Black and white, Gray scale, Inverted gray scale, Color palette, RGB (3-band)
Configuration       BIP, BSQ
Bits Per Plane (a)  1 (b), 2 (b), 4, 8, 16 (c), 32 (c), 64 (c)
Compression (d)     None, CCITT G3 (B&W only), CCITT G4 (B&W only), Packbits, LZW (e), LZW with horizontal differencing (e)

(a) All bands must contain the same number of bits (i.e., 4, 4, 4 or 8, 8, 8). Multiband
data with bit depths differing per band cannot be imported into ERDAS IMAGINE.
(b) Must be imported and exported as 4-bit data.
(c) Direct read/write only.
(d) Compression supported on import and direct read/write only.
(e) LZW is governed by patents and is not supported by the basic version of ERDAS
IMAGINE.
Georeferencing
Georeferencing is the process of linking the raster space of an image
to a model space (i.e., a map system). Raster space defines how the
coordinate system grid lines are placed relative to the centers of the
pixels of the image. In ERDAS IMAGINE, the grid lines of the
coordinate system always intersect at the center of a pixel. GeoTIFF
allows the raster space to be defined either as having grid lines
intersecting at the centers of the pixels (PixelIsPoint) or as having
grid lines intersecting at the upper left corner of the pixels
(PixelIsArea). ERDAS IMAGINE converts the georeferencing values
for PixelIsArea images so that they conform to its raster space
definition.
GeoTIFF allows georeferencing via a scale and an offset, a full affine
transformation, or a set of tie points. ERDAS IMAGINE currently
ignores GeoTIFF georeferencing in the form of multiple tie points.
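Georeferencing via a scale and an offset amounts to a simple mapping from pixel (row, column) indices to map coordinates. The sketch below uses placeholder numbers and ignores the rotation terms of a full affine transformation; it assumes the grid lines intersect at pixel centers, as in the ERDAS IMAGINE raster space definition described above.

    def pixel_to_map(row, col, origin_x, origin_y, scale_x, scale_y):
        # origin_x, origin_y: map coordinates of the upper-left pixel center
        # scale_x, scale_y:   pixel size in map units (y decreases with row)
        return origin_x + col * scale_x, origin_y - row * scale_y

    # Placeholder values: 30 m pixels, upper-left center at (500000, 4200000)
    print(pixel_to_map(0, 0, 500000.0, 4200000.0, 30.0, 30.0))
    print(pixel_to_map(100, 250, 500000.0, 4200000.0, 30.0, 30.0))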
Geocoding
Geocoding is the process of linking coordinates in model space to the
Earth's surface. Geocoding allows for the specification of projection,
datum, ellipsoid, etc. ERDAS IMAGINE interprets the GeoTIFF
geocoding to determine the latitude and longitude of the map
coordinates for GeoTIFF images. This interpretation also allows the
GeoTIFF image to be reprojected.
In GeoTIFF, the units of the map coordinates are obtained from the
geocoding, not from the georeferencing. In addition, GeoTIFF
defines a set of standard projected coordinate systems. The use of a
standard projected coordinate system in GeoTIFF constrains the
units that can be used with that standard system. Therefore, if the
units used with a projection in ERDAS IMAGINE are not equal to the
implied units of an equivalent GeoTIFF geocoding, ERDAS IMAGINE
transforms the georeferencing to conform to the implied units so that
the standard projected coordinate system code can be used. The
alternative (preserving the georeferencing as is and producing a
nonstandard projected coordinate system) is regarded as less
interoperable.
Vector Data from
Other Software
Vendors
It is possible to directly import several common vector formats into
ERDAS IMAGINE. These files become vector layers when imported.
These data can then be used for analysis and, in most cases,
exported back to their original format (if desired).
Although data can be converted from one type to another by
importing a file into ERDAS IMAGINE and then exporting the ERDAS
IMAGINE file into another format, the import and export routines
were designed to work together. For example, if you have
information in AutoCAD that you would like to use in the GIS, you
can import a Drawing Interchange File (DXF) into ERDAS IMAGINE,
do the analysis, and then export the data back to DXF format.
In most cases, attribute data are also imported into ERDAS
IMAGINE. Each of the following sections lists the types of attribute
data that are imported.
Use Import/Export to import vector data from other software
vendors into ERDAS IMAGINE vector layers. These routines are
based on ArcInfo data conversion routines.
See Vector Data for more information on ERDAS IMAGINE
vector layers. See Geographic Information Systems for more
information about using vector data in a GIS.
ARCGEN
ARCGEN files are ASCII files created with the ArcInfo UNGENERATE
command. The import ARCGEN program is used to import features
to a new layer. Topology is not created or maintained, therefore the
coverage must be built or cleaned after it is imported into ERDAS
IMAGINE.
ARCGEN files must be properly prepared before they are
imported into ERDAS IMAGINE. If there is a syntax error in the
data file, the import process may not work. If this happens, you
must kill the process, correct the data file, and then try
importing again.
See the ArcInfo documentation for more information about
these files.
AutoCAD (DXF)
AutoCAD is a vector software package distributed by Autodesk, Inc.
(Sausalito, California). AutoCAD is a computer-aided design program
that enables the user to draw two- and three-dimensional models.
This software is frequently used in architecture, engineering, urban
planning, and many other applications.
AutoCAD DXF is the standard interchange format used by most CAD
systems. The AutoCAD program DXFOUT creates a DXF file that can
be converted to an ERDAS IMAGINE vector layer. AutoCAD files can
also be output to IGES format using the AutoCAD program IGESOUT.
See "IGES" for more information about IGES files.
DXF files can be converted in the ASCII or binary format. The binary
format is an optional format for AutoCAD Releases 10 and 11. It is
structured just like the ASCII format, only the data are in binary
format.
DXF files are composed of a series of related layers. Each layer
contains one or more drawing elements or entities. An entity is a
drawing element that can be placed into an AutoCAD drawing with a
single command. When converted to an ERDAS IMAGINE vector
layer, each entity becomes a single feature. Table 29 describes how
various DXF entities are converted to ERDAS IMAGINE.
The ERDAS IMAGINE import process also imports line and point
attribute data (if they exist) and creates an INFO directory with the
appropriate ACODE (arc attributes) and XCODE (point attributes)
files. If an imported DXF file is exported back to DXF format, this
information is also exported.
Refer to an AutoCAD manual for more information about the
format of DXF files.
DLG
DLGs are furnished by the U.S. Geological Survey and provide
planimetric base map information, such as transportation,
hydrography, contours, and public land survey boundaries. DLG files
are available for the following USGS map series:
7.5- and 15-minute topographic quadrangles
1:100,000-scale quadrangles
1:2,000,000-scale national atlas maps
Table 29: Conversion of DXF Entities

DXF Entity              ERDAS IMAGINE Feature   Comments
Line, 3DLine            Line                    These entities become two point lines. The initial Z value of 3D entities is stored.
Trace, Solid, 3DFace    Line                    These entities become four or five point lines. The initial Z value of 3D entities is stored.
Circle, Arc             Line                    These entities form lines. Circles are composed of 361 points, one vertex for each degree. The first and last point is at the same location.
Polyline                Line                    These entities can be grouped to form a single line having many vertices.
Point, Shape            Point                   These entities become point features in a layer.
DLGs are topological files that contain nodes, lines, and areas
(similar to the points, lines, and polygons in an ERDAS IMAGINE
vector layer). DLGs also store attribute information in the form of
major and minor code pairs. Code pairs are encoded in two integer
fields, each containing six digits. The major code describes the class
of the feature (road, stream, etc.) and the minor code stores more
specific information about the feature.
DLGs can be imported in standard format (144 bytes per record) and
optional format (80 bytes per record). You can export to DLG-3
optional format. Most DLGs are in the Universal Transverse Mercator
(UTM) map projection. However, the 1:2,000,000 scale series is in
geographic coordinates.
The ERDAS IMAGINE import process also imports point, line, and
polygon attribute data (if they exist) and creates an INFO directory
with the appropriate ACODE, PCODE (polygon attributes), and
XCODE files. If an imported DLG file is exported back to DLG format,
this information is also exported.
To maintain the topology of a vector layer created from a DLG
file, you must Build or Clean it. See Geographic Information
Systems for information on this process.
ETAK
ETAK's MapBase is an ASCII digital street centerline map product
available from ETAK, Inc. (Menlo Park, California). ETAK files are
similar in content to the Dual Independent Map Encoding (DIME)
format used by the U.S. Census Bureau. Each record represents a
single linear feature with address and political, census, and ZIP code
boundary information. ETAK has also included road class
designations and, in some areas, major landmark features.
There are four possible types of ETAK features:
DIME or D types: if the feature type is D, a line is created along
with a corresponding ACODE (arc attribute) record. The
coordinates are stored in Lat/Lon decimal degrees.
Alternate address or A types: each record contains an alternate
address record for a line. These records are written to the
attribute file, and are useful for building address coverages.
Shape features or S types: shape records are used to add
vertices to the lines. The coordinates for these features are in
Lat/Lon decimal degrees.
Landmark or L types: if the feature type is L and you opt to
output a landmark layer, then a point feature is created along
with an associated PCODE record.
ERDAS IMAGINE vector data cannot be exported to ETAK
format.
IGES
IGES files are often used to transfer CAD data between systems.
IGES Version 3.0 format, published by the U.S. Department of
Commerce, is in uncompressed ASCII format only.
IGES files can be produced in AutoCAD using the IGESOUT
command. The following IGES entities can be converted:
The ERDAS IMAGINE import process also imports line and point
attribute data (if they exist) and creates an INFO directory with the
appropriate ACODE and XCODE files. If an imported IGES file is
exported back to IGES format, this information is also exported.
TIGER
TIGER files are line network products of the U.S. Census Bureau. The
Census Bureau is using the TIGER system to create and maintain a
digital cartographic database that covers the United States, Puerto
Rico, Guam, the Virgin Islands, American Samoa, and the Trust
Territories of the Pacific.
TIGER/Line is the line network product of the TIGER system. The
cartographic base is taken from Geographic Base File/Dual
Independent Map Encoding (GBF/DIME), where available, and from
the USGS 1:100,000-scale national map series, SPOT imagery, and
a variety of other sources in all other areas, in order to have
continuous coverage for the entire United States. In addition to line
segments, TIGER files contain census geographic codes and, in
metropolitan areas, address ranges for the left and right sides of
each segment. TIGER files are available in ASCII format on both CD-
ROM and tape media. All released versions after April 1989 are
supported.
There is a great deal of attribute information provided with
TIGER/Line files. Line and point attribute information can be
converted into ERDAS IMAGINE format. The ERDAS IMAGINE import
process creates an INFO directory with the appropriate ACODE and
XCODE files. If an imported TIGER file is exported back to TIGER
format, this information is also exported.
TIGER attributes include the following:
Version numbers: TIGER/Line file version number.
Table 30: Conversion of IGES Entities

IGES Entity                                ERDAS IMAGINE Feature
IGES Entity 100 (Circular Arc Entities)    Lines
IGES Entity 106 (Copious Data Entities)    Lines
IGES Entity 106 (Line Entities)            Lines
IGES Entity 116 (Point Entities)           Points
Permanent record numbers: each line segment is assigned a
permanent record number that is maintained throughout all
versions of TIGER/Line files.
Source codes: each line and landmark point feature is assigned
a code to specify the original source.
Census feature class codes: line segments representing physical
features are coded based on the USGS classification codes in
DLG-3 files.
Street attributes: includes street address information for
selected urban areas.
Legal and statistical area attributes: legal areas include states,
counties, townships, towns, incorporated cities, Indian
reservations, and national parks. Statistical areas are areas used
during the census-taking, where legal areas are not adequate for
reporting statistics.
Political boundaries: the election precincts or voting districts
may contain a variety of areas, including wards, legislative
districts, and election districts.
Landmarks: landmark area and point features include schools,
military installations, airports, hospitals, mountain peaks,
campgrounds, rivers, and lakes.
TIGER files for major metropolitan areas outside of the United
States (e.g., Puerto Rico, Guam) do not have address ranges.
Disk Space Requirements
TIGER/Line files are partitioned into counties ranging in size from
less than a megabyte to almost 120 megabytes. The average size is
approximately 10 megabytes. To determine the amount of disk
space required to convert a set of TIGER/Line files, use this rule: the
size of the converted layers is approximately the same size as the
files used in the conversion. The amount of additional scratch space
needed depends on the largest file and whether it needs to be sorted.
The amount usually required is about double the size of the file being
sorted.
The information presented in this section, "Vector Data from
Other Software Vendors", was obtained from the Data
Conversion and the 6.0 ARC Command References manuals,
both published by ESRI, Inc., 1992.
Image Display
Introduction
This section defines some important terms that are relevant to image
display. Most of the terminology and definitions used in this chapter
are based on the X Window System (Massachusetts Institute of
Technology) terminology. This may differ from other systems, such
as Microsoft Windows NT.
A seat is a combination of an X-server and a host workstation.
A host workstation consists of a CPU, keyboard, mouse, and a
display.
A display may consist of multiple screens. These screens work
together, making it possible to move the mouse from one screen
to the next.
The display hardware contains the memory that is used to
produce the image. This hardware determines which types of
displays are available (e.g., true color or pseudo color) and the
pixel depth (e.g., 8-bit or 24-bit).
Figure 35: Example of One Seat with One Display and Two
Screens
Display Memory Size
The size of memory varies for different displays. It is expressed in
terms of:
display resolution, which is expressed as the horizontal and
vertical dimensions of memory, the number of pixels that can be
viewed on the display screen. Some typical display resolutions
are 1152 × 900, 1280 × 1024, and 1024 × 780. For the PC,
typical resolutions are 640 × 480, 800 × 600, 1024 × 768, and
1280 × 1024, and
the number of bits for each pixel or pixel depth, as explained
below.
Bits for Image Plane
A bit is a binary digit, meaning a number that can have two possible
values0 and 1, or off and on. A set of bits, however, can have many
more values, depending upon the number of bits used. The number
of values that can be expressed by a set of bits is 2 to the power of
the number of bits used. For example, the number of values that can
be expressed by 3 bits is 8 (2^3 = 8).
Displays are referred to in terms of a number of bits, such as 8-bit
or 24-bit. These bits are used to determine the number of possible
brightness values. For example, in a 24-bit display, 24 bits per pixel
breaks down to eight bits for each of the three color guns per pixel.
The number of possible values that can be expressed by eight bits is
2^8, or 256. Therefore, on a 24-bit display, each color gun of a pixel
can have any one of 256 possible brightness values, expressed by
the range of values 0 to 255.
The combination of the three color guns, each with 256 possible
brightness values, yields 256^3 (or 2^24, for the 24-bit image display),
or 16,777,216 possible colors for each pixel on a 24-bit display. If
the display being used is not 24-bit, the same calculation gives
the number of possible brightness values and colors that can be
displayed.
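The arithmetic above can be checked directly:

    for bits in (1, 3, 8, 24):
        print(bits, "bits can express", 2 ** bits, "values")

    # A 24-bit display: 8 bits per color gun, so 256 levels per gun and
    # 256 ** 3 = 16,777,216 possible colors per pixel.
    print((2 ** 8) ** 3)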
Pixel
The term pixel is abbreviated from picture element. As an element,
a pixel is the smallest part of a digital picture (image). Raster image
data are divided by a grid, in which each cell of the grid is
represented by a pixel. A pixel is also called a grid cell.
Pixel is a broad term that is used for both:
the data file value(s) for one data unit in an image (file pixels), or
one grid location on a display or printout (display pixels).
Usually, one pixel in a file corresponds to one pixel in a display or
printout. However, an image can be magnified or reduced so that
one file pixel no longer corresponds to one pixel in the display or
printout. For example, if an image is displayed with a magnification
factor of 2, then one file pixel takes up 4 (2 × 2) grid cells on the
display screen.
To display an image, a file pixel that consists of one or more numbers
must be transformed into a display pixel with properties that can be
seen, such as brightness and color. Whereas the file pixel has values
that are relevant to data (such as wavelength of reflected light), the
displayed pixel must have a particular color or gray level that
represents these data file values.
Colors
Human perception of color comes from the relative amounts of red,
green, and blue light that are measured by the cones (sensors) in
the eye. Red, green, and blue light can be added together to produce
a wide variety of colors, a wider variety than can be formed from the
combinations of any three other colors. Red, green, and blue are
therefore the additive primary colors.
A nearly infinite number of shades can be produced when red, green,
and blue light are combined. On a display, different colors
(combinations of red, green, and blue) allow you to perceive changes
across an image. Color displays that are available today yield 2^24,
or 16,777,216 colors. Each color has a possible 256 different values
(2^8).
Color Guns
On a display, color guns direct electron beams that fall on red, green,
and blue phosphors. The phosphors glow at certain frequencies to
produce different colors. Color monitors are often called RGB
monitors, referring to the primary colors.
The red, green, and blue phosphors on the picture tube appear as
tiny colored dots on the display screen. The human eye integrates
these dots together, and combinations of red, green, and blue are
perceived. Each pixel is represented by an equal number of red,
green, and blue phosphors.
Brightness Values
Brightness values (or intensity values) are the quantities of each
primary color to be output to each displayed pixel. When an image
is displayed, brightness values are calculated for all three color guns,
for every pixel.
All of the colors that can be output to a display can be expressed with
three brightness values, one for each color gun.
Colormap and Colorcells
A color on the screen is created by a combination of red, green, and
blue values, where each of these components is represented as an
8-bit value. Therefore, 24 bits are needed to represent a color. Since
many systems have only an 8-bit display, a colormap is used to
translate the 8-bit value into a color. A colormap is an ordered set of
colorcells, which is used to perform a function on a set of input
values. To display or print an image, the colormap translates data file
values in memory into brightness values for each color gun.
Colormaps are not limited to 8-bit displays.
Colormap vs. Lookup Table
The colormap is a function of the display hardware, whereas a lookup
table is a function of ERDAS IMAGINE. When a contrast adjustment
is performed on an image in ERDAS IMAGINE, lookup tables are
used. However, if the auto-update function is being used to view the
adjustments in near real-time, then the colormap is being used to
map the image through the lookup table. This process allows the
colors on the screen to be updated in near real-time. This chapter
explains how the colormap is used to display imagery.
Colorcells
There is a colorcell in the colormap for each data file value. The red,
green, and blue values assigned to the colorcell control the
brightness of the color guns for the displayed pixel (Nye 1990). The
number of colorcells in a colormap is determined by the number of
bits in the display (e.g., 8-bit, 24-bit).
For example, if a pixel with a data file value of 40 was assigned a
display value (colorcell value) of 24, then this pixel uses the
brightness values for the 24th colorcell in the colormap. In the
colormap below (Table 31), this pixel is displayed as blue.
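As an illustration of this lookup (plain Python, not ERDAS IMAGINE code), the colormap can be pictured as an indexed set of RGB triples; the entries below are the hypothetical values from Table 31:

# A minimal sketch of a colormap lookup, assuming the example entries
# from Table 31. Each colorcell holds the red, green, and blue
# brightness values used by the color guns for that cell.
colormap = {
    1: (255, 0, 0),
    2: (0, 170, 90),
    3: (0, 0, 255),
    24: (0, 0, 255),   # the colorcell assigned to the pixel in the example
}

def brightness_for(colorcell_value):
    # Return the (red, green, blue) brightness values for a displayed pixel.
    return colormap[colorcell_value]

print(brightness_for(24))   # (0, 0, 255) -- the pixel is displayed as blue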
The colormap is controlled by the X Windows system. There are 256
colorcells in a colormap with an 8-bit display. This means that 256
colors can be displayed simultaneously on the display. With a 24-bit
display, there are 256 colorcells for each color: red, green, and blue.
This offers 256 × 256 × 256, or 16,777,216 different colors.
When an application requests a color, the server specifies which
colorcell contains that color and returns the color. Colorcells can be
read-only or read/write.
Read-only Colorcells
The color assigned to a read-only colorcell can be shared by other
application windows, but it cannot be changed once it is set. With
read-only colorcells, the color of a displayed pixel cannot be changed
by changing the color of the corresponding colorcell. Instead, the pixel
value itself would have to be changed and the image redisplayed. For this
reason, it is not possible to use auto-update operations in ERDAS
IMAGINE with read-only colorcells.
Read/Write Colorcells
The color assigned to a read/write colorcell can be changed, but it
cannot be shared by other application windows. An application can
easily change the color of displayed pixels by changing the color for
the colorcell that corresponds to the pixel value. This allows
applications to use auto update operations. However, this colorcell
cannot be shared by other application windows, and all of the
colorcells in the colormap could quickly be utilized.
Table 31: Colorcell Example
Colorcell Index Red Green Blue
1 255 0 0
2 0 170 90
3 0 0 255
24 0 0 255
Changeable Colormaps
Some colormaps can have both read-only and read/write colorcells.
This type of colormap allows applications to utilize the type of
colorcell that would be most preferred.
Display Types The possible range of different colors is determined by the display
type. ERDAS IMAGINE supports the following types of displays:
8-bit PseudoColor
15-bit HiColor (for Windows NT)
24-bit DirectColor
24-bit TrueColor
The above display types are explained in more detail below.
A display may offer more than one visual type and pixel depth.
See the ERDAS IMAGINE Configuration Guide for more
information on specific display hardware.
32-bit Displays
A 32-bit display is a combination of an 8-bit PseudoColor and 24-bit
DirectColor, or TrueColor display. Whether or not it is DirectColor or
TrueColor depends on the display hardware.
8-bit PseudoColor An 8-bit PseudoColor display has a colormap with 256 colorcells.
Each cell has a red, green, and blue brightness value, giving 256
combinations of red, green, and blue. The data file value for the pixel
is transformed into a colorcell value. The brightness values for the
colorcell that is specified by this colorcell value are used to define the
color to be displayed.
Figure 36: Transforming Data File Values to a Colorcell Value
(Figure 36 shows the red, green, and blue data file values for one pixel being combined into a single colorcell value of 4; colorcell 4 in the colormap holds the brightness values 0, 0, 255, so the pixel is displayed as blue.)
In Figure 36, data file values for a pixel of three continuous raster
layers (bands) are transformed to a colorcell value. Since the colorcell
value is four, the pixel is displayed with the brightness values of the
fourth colorcell (blue).
This display grants a small number of colors to ERDAS IMAGINE. It
works well with thematic raster layers containing less than 200
colors and with gray scale continuous raster layers. For image files
with three continuous raster layers (bands), the colors are severely
limited because, under ideal conditions, 256 colors are available on
an 8-bit display, while 8-bit, 3-band image files can contain over
16,000,000 different colors.
Auto Update
An 8-bit PseudoColor display has read-only and read/write colorcells,
allowing ERDAS IMAGINE to perform near real-time color
modifications using Auto Update and Auto Apply options.
24-bit DirectColor A 24-bit DirectColor display enables you to view up to three bands of
data at one time, creating displayed pixels that represent the
relationships between the bands by their colors. Since this is a 24-
bit display, it offers up to 256 shades of red, 256 shades of green,
and 256 shades of blue, which is approximately 16 million different
colors (256^3). The data file values for each band are transformed
into colorcell values. The colorcell that is specified by these values is
used to define the color to be displayed.
Figure 37: Transforming Data File Values to a Colorcell Value
In Figure 37, data file values for a pixel of three continuous raster
layers (bands) are transformed to separate colorcell values for each
band. Since the colorcell value is 1 for the red band, 2 for the green
band, and 6 for the blue band, the RGB brightness values are 0, 90,
200. This displays the pixel as a blue-green color.
This type of display grants a very large number of colors to ERDAS
IMAGINE and it works well with all types of data.
Auto Update
A 24-bit DirectColor display has read-only and read/write colorcells,
allowing ERDAS IMAGINE to perform real-time color modifications
using the Auto Update and Auto Apply options.
24-bit TrueColor A 24-bit TrueColor display enables you to view up to three
continuous raster layers (bands) of data at one time, creating
displayed pixels that represent the relationships between the bands
by their colors. The data file values for the pixels are transformed
into screen values and the colors are based on these values.
Therefore, the color for the pixel is calculated without querying the
server and the colormap. The colormap for a 24-bit TrueColor display
is not available for ERDAS IMAGINE applications. Once a color is
assigned to a screen value, it cannot be changed, but the color can
be shared by other applications.
The screen values are used as the brightness values for the red,
green, and blue color guns. Since this is a 24-bit display, it offers 256
shades of red, 256 shades of green, and 256 shades of blue, which
is approximately 16 million different colors (256^3).
Figure 38: Transforming Data File Values to Screen Values
In Figure 38, data file values for a pixel of three continuous raster
layers (bands) are transformed to separate screen values for each
band. Since the screen value is 0 for the red band, 90 for the green
band, and 200 for the blue band, the RGB brightness values are 0,
90, and 200. This displays the pixel as a blue-green color.
Auto Update
The 24-bit TrueColor display does not use the colormap in ERDAS
IMAGINE, and thus does not provide ERDAS IMAGINE with any real-
time color changing capability. Each time a color is changed, the
screen values must be calculated and the image must be redrawn.
Color Quality
The 24-bit TrueColor visual provides the best color quality possible
with standard equipment. There is no color degradation under any
circumstances with this display.
PC Displays ERDAS IMAGINE for Microsoft Windows NT supports the following
visual type and pixel depths:
8-bit PseudoColor
15-bit HiColor
24-bit TrueColor
8-bit PseudoColor
An 8-bit PseudoColor display for the PC uses the same type of
colormap as the X Windows 8-bit PseudoColor display, except that
each colorcell has a range of 0 to 63 on most video display adapters,
instead of 0 to 255. Therefore, each colorcell has a red, green, and
blue brightness value, giving 64 different combinations of red, green,
and blue. The colormap, however, is the same as the X Windows 8-
bit PseudoColor display. It has 256 colorcells allowing 256 different
colors to be displayed simultaneously.
15-bit HiColor
A 15-bit HiColor display for the PC assigns colors the same way as
the X Windows 24-bit TrueColor display, except that it offers 32
shades of red, 32 shades of green, and 32 shades of blue, for a total
of 32,768 possible color combinations. Some video display adapters
allocate 6 bits to the green color gun, allowing 64,000 colors. These
adapters use a 16-bit color scheme.
24-bit TrueColor
A 24-bit TrueColor display for the PC assigns colors the same way as
the X Windows 24-bit TrueColor display.
Displaying Raster Layers
Image files (.img) are raster files in the ERDAS IMAGINE format.
There are two types of raster layers:
continuous
thematic
Thematic raster layers require a different display process than
continuous raster layers. This section explains how each raster layer
type is displayed.
Continuous Raster Layers An image file (.img) can contain several continuous raster layers;
therefore, each pixel can have multiple data file values. When
displaying an image file with continuous raster layers, it is possible
to assign which layers (bands) are to be displayed with each of the
three color guns. The data file values in each layer are input to the
assigned color gun. The most useful color assignments are those that
allow for an easy interpretation of the displayed image. For example:
A natural-color image approximates the colors that would appear
to a human observer of the scene.
A color-infrared image shows the scene as it would appear on
color-infrared film, which is familiar to many analysts.
Band assignments are often expressed in R,G,B order. For example,
the assignment 4, 2, 1 means that band 4 is assigned to red, band
2 to green, and band 1 to blue. Below are some widely used band to
color gun assignments (Faust, 1989):
Landsat TM, natural color: 3, 2, 1
This is natural color because band 3 is red and is assigned to the
red color gun, band 2 is green and is assigned to the green color
gun, and band 1 is blue and is assigned to the blue color gun.
Landsat TM, color-infrared: 4, 3, 2
This is infrared because band 4 = infrared.
SPOT Multispectral, color-infrared: 3, 2, 1
This is infrared because band 3 = infrared.
Contrast Table
When an image is displayed, ERDAS IMAGINE automatically creates
a contrast table for continuous raster layers. The red, green, and
blue brightness values for each band are stored in this table.
Since the data file values in continuous raster layers are quantitative
and related, the brightness values in the colormap are also
quantitative and related. The screen pixels represent the
relationships between the values of the file pixels by their colors. For
example, a screen pixel that is bright red has a high brightness value
in the red color gun, and a high data file value in the layer assigned
to red, relative to other data file values in that layer.
The brightness values often differ from the data file values, but they
usually remain in the same order of lowest to highest. Some
meaningful relationships between the values are usually maintained.
Contrast Stretch
Different displays have different ranges of possible brightness
values. The range of most displays is 0 to 255 for each color gun.
Since the data file values in a continuous raster layer often represent
raw data (such as elevation or an amount of reflected light), the
range of data file values is often not the same as the range of
brightness values of the display. Therefore, a contrast stretch is
usually performed, which stretches the range of the values to fit the
range of the display.
For example, Figure 39 shows a layer that has data file values from
30 to 40. When these values are used as brightness values, the
contrast of the displayed image is poor. A contrast stretch simply
stretches the range between the lower and higher data file values,
so that the contrast of the displayed image is higher; that is, lower
data file values are displayed with the lowest brightness values, and
higher data file values are displayed with the highest brightness
values.
The colormap stretches the range of colorcell values from 30 to 40,
to the range 0 to 255. Because the output values are incremented at
regular intervals, this stretch is a linear contrast stretch. (The
numbers in Figure 39 are approximations and do not show an exact
linear relationship.)
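As a rough sketch (plain Python, not ERDAS IMAGINE code), the linear stretch described above can be written as a simple rescaling of the input range to the output range:

def linear_stretch(value, in_min=30, in_max=40, out_min=0, out_max=255):
    # Map a data file value from [in_min, in_max] to [out_min, out_max]
    # at regular intervals (a linear contrast stretch).
    scale = (out_max - out_min) / (in_max - in_min)
    return round(out_min + (value - in_min) * scale)

# Input values 30 through 40 are stretched across the full 0 to 255 range.
print([linear_stretch(v) for v in range(30, 41)])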
Figure 39: Contrast Stretch and Colorcell Values
See Enhancement for more information about contrast
stretching. Contrast stretching is performed the same way for
display purposes as it is for permanent image enhancement.
A two standard deviation linear contrast stretch is applied to
stretch pixel values of all image files from 0 to 255 before they
are displayed in the Viewer, unless a saved contrast stretch
exists (the file is not changed). This often improves the initial
appearance of the data in the Viewer.
Statistics Files
To perform a contrast stretch, certain statistics are necessary, such
as the mean and the standard deviation of the data file values in
each layer.
Use the Image Information utility to create and view statistics
for a raster layer.
Usually, not all of the data file values are used in the contrast stretch
calculations. The minimum and maximum data file values of each
band are often too extreme to produce good results. When the
minimum and maximum are extreme in relation to the rest of the
data, then the majority of data file values are not stretched across a
very wide range, and the displayed image has low contrast.
Figure 40: Stretching by Min/Max vs. Standard Deviation
The mean and standard deviation of the data file values for each
band are used to locate the majority of the data file values. The
number of standard deviations above and below the mean can be
entered, which determines the range of data used in the stretch.
See Math Topics for more information on mean and standard
deviation.
Use the Contrast Tools dialog, which is accessible from the
Lookup Table Modification dialog, to enter the number of
standard deviations to be used in the contrast stretch.
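A rough sketch of a standard deviation stretch, assuming the stretch range is the mean plus or minus the chosen number of standard deviations and that values outside that range are clipped (illustrative Python/NumPy, not ERDAS IMAGINE code):

import numpy as np

def std_dev_stretch(band, n_std=2.0):
    # Stretch so that (mean - n_std * sigma) maps to 0 and
    # (mean + n_std * sigma) maps to 255; values outside are clipped.
    mean, sigma = band.mean(), band.std()
    low, high = mean - n_std * sigma, mean + n_std * sigma
    stretched = (band - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

band = np.random.normal(loc=120, scale=15, size=(100, 100))
stretched = std_dev_stretch(band)
print(stretched.min(), stretched.max())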
24-bit DirectColor and TrueColor Displays
Figure 41 illustrates the general process of displaying three
continuous raster layers on a 24-bit DirectColor display. The process
is similar on a TrueColor display except that the colormap is not
used.
Figure 41: Continuous Raster Layer Display Process
8-bit PseudoColor Display
When displaying continuous raster layers on an 8-bit PseudoColor
display, the data file values from the red, green, and blue bands are
combined and transformed to a colorcell value in the colormap. This
colorcell then provides the red, green, and blue brightness values.
Since there are only 256 colors available, a continuous raster layer
looks different when it is displayed on an 8-bit display than on a 24-bit
display that offers 16 million different colors. However, the Viewer
performs dithering with the available colors in the colormap to let a
smaller set of colors appear to be a larger set of colors.
See "Dithering" for more information.
Thematic Raster Layers A thematic raster layer generally contains pixels that have been
classified, or put into distinct categories. Each data file value is a
class value, which is simply a number for a particular category. A
thematic raster layer is stored in an image (.img) file. Only one data
file value, the class value, is stored for each pixel.
Since these class values are not necessarily related, the gradations
that are possible in true color mode are not usually useful in pseudo
color. The class system gives the thematic layer a discrete look, in
which each class can have its own color.
Color Table
When a thematic raster layer is displayed, ERDAS IMAGINE
automatically creates a color table. The red, green, and blue
brightness values for each class are stored in this table.
RGB Colors
Individual color schemes can be created by combining red, green,
and blue in different combinations, and assigning colors to the
classes of a thematic layer.
Colors can be expressed numerically, as the brightness values for
each color gun. Brightness values of a display generally range from
0 to 255; however, ERDAS IMAGINE translates the values to a range from 0 to
1. The maximum brightness value for the display device is scaled to
1. The colors listed in Table 32 are based on the range that is used
to assign brightness values in ERDAS IMAGINE.
Table 32 contains only a partial listing of commonly used colors. Over
16 million colors are possible on a 24-bit display.
NOTE: Black is the absence of all color (0,0,0) and white is created
from the highest values of all three colors (1, 1, 1). To lighten a
color, increase all three brightness values. To darken a color,
decrease all three brightness values.
Use the Raster Attribute Editor to create your own color scheme.
24-bit DirectColor and TrueColor Displays
Figure 42 illustrates the general process of displaying thematic
raster layers on a 24-bit DirectColor display. The process is similar
on a TrueColor display except that the colormap is not used.
Table 32: Commonly Used RGB Colors
Color Red Green Blue
Red 1 0 0
Red-Orange 1 .392 0
Orange .608 .588 0
Yellow 1 1 0
Yellow-Green .490 1 0
Green 0 1 0
Cyan 0 1 1
Blue 0 0 1
Blue-Violet .392 0 .471
Violet .588 0 .588
Black 0 0 0
White 1 1 1
Gray .498 .498 .498
Brown .373 .227 0
Figure 42: Thematic Raster Layer Display Process
Display a thematic raster layer from the Viewer.
8-bit PseudoColor Display
The colormap is a limited resource that is shared among all of the
applications that are running concurrently. Because of the limited
resources, ERDAS IMAGINE does not typically have access to the
entire colormap.
Using the Viewer The Viewer is a window for displaying raster, vector, and annotation
layers. You can open as many Viewer windows as your window
manager supports.
NOTE: The more Viewers that are opened simultaneously, the more
RAM is required.
The Viewer not only makes digital images visible quickly, but it can
also be used as a tool for image processing and raster GIS modeling.
The uses of the Viewer are listed briefly in this section, and described
in greater detail in other chapters of the ERDAS Field Guide.
Colormap
ERDAS IMAGINE does not use the entire colormap because there are
other applications that also need to use it, including the window
manager, terminal windows, ArcView, or a clock. Therefore, there
are some limitations to the number of colors that the Viewer can
display simultaneously, and flickering may occur as well.
Color Flickering
If an application requests a new color that does not exist in the
colormap, the server assigns that color to an empty colorcell.
However, if there are not any available colorcells and the application
requires a private colorcell, then a private colormap is created for the
application window. Since this is a private colormap, when the cursor
is moved out of the window, the server uses the main colormap and
the brightness values assigned to the colorcells. Therefore, the
colors in the private colormap are not applied and the screen flickers.
Once the cursor is moved into the application window, the correct
colors are applied for that window.
Resampling
When a raster layer(s) is displayed, the file pixels may be resampled
for display on the screen. Resampling is used to calculate pixel
values when one raster grid must be fitted to another. In this case,
the raster grid defined by the file must be fit to the grid of screen
pixels in the Viewer.
All Viewer operations are file-based. So, any time an image is
resampled in the Viewer, the Viewer uses the file as its source. If the
raster layer is magnified or reduced, the Viewer refits the file grid to
the new screen grid.
The resampling methods available are:
Nearest Neighbor: uses the value of the closest pixel to assign to
the output pixel value.
Bilinear Interpolation: uses the data file values of four pixels in
a 2 × 2 window to calculate an output value with a bilinear
function.
Cubic Convolution: uses the data file values of 16 pixels in a
4 × 4 window to calculate an output value with a cubic function.
These are discussed in detail in Rectification.
The default resampling method is Nearest Neighbor.
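The sketch below illustrates the idea behind the first two methods for a single band (illustrative Python/NumPy only; the actual ERDAS IMAGINE implementations are described in Rectification):

import numpy as np

def nearest_neighbor(band, row, col):
    # Use the value of the closest file pixel.
    return band[int(round(row)), int(round(col))]

def bilinear(band, row, col):
    # Weight the four surrounding file pixels (a 2 x 2 window)
    # by their distance from the requested location.
    r0, c0 = int(np.floor(row)), int(np.floor(col))
    dr, dc = row - r0, col - c0
    top = band[r0, c0] * (1 - dc) + band[r0, c0 + 1] * dc
    bottom = band[r0 + 1, c0] * (1 - dc) + band[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bottom * dr

band = np.arange(16, dtype=float).reshape(4, 4)
print(nearest_neighbor(band, 1.4, 2.6), bilinear(band, 1.4, 2.6))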
Preference Editor
The Preference Editor enables you to set parameters for the Viewer
that affect the way the Viewer operates.
See the ERDAS IMAGINE On-Line Help for the Preference Editor
for information on how to set preferences for the Viewer.
Pyramid Layers Sometimes a large image file may take a long time to display in the
Viewer or to be resampled by an application. The Pyramid Layer
option enables you to display large images faster and allows certain
applications to rapidly access the resampled data. Pyramid layers are
image layers which are copies of the original layer successively
reduced by a factor of 2 and then resampled. If the raster layer is
thematic, then it is resampled using the Nearest Neighbor method.
If the raster layer is continuous, it is resampled by a method that is
similar to Cubic Convolution. The data file values for sixteen pixels in
a 4 × 4 window are used to calculate an output data file value with
a filter function.
See Rectification for more information on Nearest Neighbor.
The number of pyramid layers created depends on the size of the
original image. A larger image produces more pyramid layers. When
the Create Pyramid Layer option is selected, ERDAS IMAGINE
automatically creates successively reduced layers until the final
pyramid layer can be contained in one block. The default block size
is 64 × 64 pixels.
See Raster Data for information on block size.
Pyramid layers are added as additional layers in the image file.
However, these layers cannot be accessed for display. The file size is
increased by approximately one-third when pyramid layers are
created. The actual increase in file size can be determined by
multiplying the layer size by the following formula:

\sum_{i=0}^{n} \frac{1}{4^{i}}
Where:
n = number of pyramid layers
NOTE: This equation is applicable to all types of pyramid layers:
internal and external.
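As an illustration of both ideas (plain Python, not ERDAS IMAGINE code), the sketch below halves the image dimensions until a level fits within one 64 × 64 block and evaluates the size factor from the formula above:

def pyramid_sizes(width, height, block=64):
    # Halve the dimensions until a level fits within one block.
    sizes = []
    while width > block or height > block:
        width, height = max(1, width // 2), max(1, height // 2)
        sizes.append((width, height))
    return sizes

def size_factor(n):
    # Sum of 1/4**i for i = 0..n, from the formula above.
    return sum(1.0 / 4 ** i for i in range(n + 1))

levels = pyramid_sizes(4096, 4096)
print(levels)                            # [(2048, 2048), (1024, 1024), ..., (64, 64)]
print(round(size_factor(len(levels)), 3))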
Pyramid layers do not appear as layers which can be processed: they
are for viewing purposes only. Therefore, they do not appear as
layers in other parts of the ERDAS IMAGINE system (e.g., the
Arrange Layers dialog).
The Image Files (General) section of the Preference Editor
contains a preference for the Initial Pyramid Layer Number. By
default, the value is set to 2. This means that the first pyramid
layer generated is discarded. In Figure 43 below, the 2K × 2K
layer is discarded. If you wish to keep that layer, then set the
Initial Pyramid Layer Number to 1.
Pyramid layers can be deleted through the Image Information
utility. However, when pyramid layers are deleted, they are not
deleted from the image file; therefore, the image file size does
not change, but ERDAS IMAGINE utilizes this file space, if
necessary. Pyramid layers are deleted from viewing and
resampling access only - that is, they can no longer be viewed
or used in an application.
Figure 43: Pyramid Layers
For example, a file that is 4K × 4K pixels could take a long time to
display when the image is fit to the Viewer. The Compute Pyramid
Layers option creates additional layers successively reduced from
4K × 4K, to 2K × 2K, 1K × 1K, 512 × 512, 128 × 128, down to
64 × 64. ERDAS IMAGINE then selects the pyramid layer size most
appropriate for display in the Viewer window when the image is
displayed.
The Compute Pyramid Layers option is available from Import
and the Image Information utility.
For more information about the .img format, see Raster Data
and the On-Line Help.
External Pyramid Layers
Pyramid layers can be either internal or external. If you choose
external pyramid layers, they are stored with the same name in the
same directory as the image with which they are associated, but with
the .rrd extension. For example, an image named tm_image1.img
has external pyramid layers contained in a file named
tm_image1.rrd.
The extension .rrd stands for reduced resolution data set. You can
delete the external pyramid layers associated with an image by
accessing the Image Information dialog. Unlike internal pyramid
layers, external pyramid layers do not affect the size of the
associated image.
Dithering A display is capable of showing only a limited number of colors
simultaneously. For example, an 8-bit display has a colormap with
256 colorcells; therefore, a maximum of 256 colors can be displayed
at the same time. If some colors are being used for auto update color
adjustment while other colors are still being used for other imagery,
the color quality degrades.
Dithering lets a smaller set of colors appear to be a larger set of
colors. If the desired display color is not available, a dithering
algorithm mixes available colors to provide something that looks like
the desired color.
For a simple example, assume the system can display only two
colors: black and white, and you want to display gray. This can be
accomplished by alternating the display of black and white pixels.
Figure 44: Example of Dithering
In Figure 44, dithering is used between a black pixel and a white
pixel to obtain a gray pixel.
The colors that the Viewer dithers between are similar to each other,
and are dithered on the pixel level. Using similar colors and dithering
on the pixel level makes the image appear smooth.
Dithering allows multiple images to be displayed in different
Viewers without refreshing the currently displayed image(s)
each time a new image is displayed.
Color Patches
When the Viewer performs dithering, it uses patches of 2 × 2 pixels.
If the desired color has an exact match, then all of the values in the
patch match it. If the desired color is halfway between two of the
usable colors, the patch contains two pixels of each of the
surrounding usable colors. If it is 3/4 of the way between two usable
colors, the patch contains 3 pixels of the color it is closest to, and 1
pixel of the color that is second closest. Figure 45 shows what the
color patches would look like if the usable colors were black and
white and the desired color was gray.
Figure 45: Example of Color Patches
If the desired color is not an even multiple of 1/4 of the way between
two allowable colors, it is rounded to the nearest 1/4. The Viewer
separately dithers the red, green, and blue components of a desired
color.
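A sketch of this patch logic for one color component, assuming the position between the two usable colors has already been expressed as a fraction between 0 and 1 (illustrative Python only):

def dither_patch(fraction):
    # fraction is how far the desired color lies between the nearer
    # usable color (0.0) and the next usable color (1.0).
    # Round to the nearest 1/4 and fill a 2 x 2 patch accordingly.
    quarters = int(round(fraction * 4))            # 0, 1, 2, 3, or 4
    patch = ['near'] * (4 - quarters) + ['next'] * quarters
    return [patch[0:2], patch[2:4]]                # 2 x 2 patch

print(dither_patch(0.5))    # two pixels of each usable color
print(dither_patch(0.75))   # three pixels of the next color, one of the nearer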
Color Artifacts
Since the Viewer requires 2 × 2 pixel patches to represent a color,
and actual images typically have a different color for each pixel,
artifacts may appear in an image that has been dithered. Usually, the
difference in color resolution is insignificant, because adjacent pixels
are normally similar to each other. Similarity between adjacent
pixels usually smooths out artifacts that appear.
Viewing Layers The Viewer displays layers as one of the following types of view
layers:
annotation
vector
pseudo color
gray scale
true color
Annotation View Layer
When an annotation layer (xxx.ovr) is displayed in the Viewer, it is
displayed as an annotation view layer.
Vector View Layer
A Vector layer is displayed in the Viewer as a vector view layer.
Pseudo Color View Layer
When a raster layer is displayed as a pseudo color layer in the
Viewer, the colormap uses the RGB brightness values for the one
layer in the RGB table. This is most appropriate for thematic layers.
If the layer is a continuous raster layer, the layer would initially
appear gray, since there are not any values in the RGB table.
Gray Scale View Layer
When a raster layer is displayed as a gray scale layer in the Viewer,
the colormap uses the brightness values in the contrast table for one
layer. This layer is then displayed in all three color guns, producing
a gray scale image. A continuous raster layer may be displayed as a
gray scale view layer.
True Color View Layer
Continuous raster layers should be displayed as true color layers in
the Viewer. The colormap uses the RGB brightness values for three
layers in the contrast table: one for each color gun to display the set
of layers.
Viewing Multiple Layers It is possible to view as many layers of all types (with the exception
of vector layers, which have a limit of 10) at one time in a single
Viewer.
To overlay multiple layers in one Viewer, they must all be referenced
to the same map coordinate system. The layers are positioned
geographically within the window, and resampled to the same scale
as previously displayed layers. Therefore, raster layers in one Viewer
can have different cell sizes.
When multiple layers are magnified or reduced, raster layers are
resampled from the file to fit to the new scale.
Display multiple layers from the Viewer. Be sure to turn off the
Clear Display check box when you open subsequent layers.
Overlapping Layers
When layers overlap, the order in which the layers are opened is very
important. The last layer that is opened always appears to be on top
of the previously opened layers.
In a raster layer, it is possible to make values of zero transparent in
the Viewer, meaning that they have no opacity. Thus, if a raster layer
with zeros is displayed over other layers, the areas with zero values
allow the underlying layers to show through.
Opacity is a measure of how opaque, or solid, a color is displayed in
a raster layer. Opacity is a component of the color scheme of
categorical data displayed in pseudo color.
100% opacity means that a color is completely opaque, and
cannot be seen through.
50% opacity lets some color show, and lets some of the
underlying layers show through. The effect is like looking at the
underlying layers through a colored fog.
0% opacity allows underlying layers to show completely.
By manipulating opacity, you can compare two or more layers of
raster data that are displayed in a Viewer. Opacity can be set at
any value in the range of 0% to 100%. Use the Arrange Layers
dialog to restack layers in a Viewer so that they overlap in a
different order, if needed.
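Opacity behaves like a weighted blend of the top layer over whatever lies beneath it. A minimal sketch (illustrative Python/NumPy, not ERDAS IMAGINE code):

import numpy as np

def blend(top_rgb, under_rgb, opacity):
    # opacity is 0.0 (fully transparent) to 1.0 (fully opaque).
    return opacity * top_rgb + (1.0 - opacity) * under_rgb

top = np.array([255.0, 0.0, 0.0])      # red layer
under = np.array([0.0, 0.0, 255.0])    # blue layer underneath
print(blend(top, under, 1.0))   # top layer completely hides the blue
print(blend(top, under, 0.5))   # half of each layer shows through
print(blend(top, under, 0.0))   # underlying blue shows completely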
Non-Overlapping Layers
Multiple layers that are opened in the same Viewer do not have to
overlap. Layers that cover distinct geographic areas can be opened
in the same Viewer. The layers are automatically positioned in the
Viewer window according to their map coordinates, and are
positioned relative to one another geographically. The map
coordinate systems for the layers must be the same.
Linking Viewers Linking Viewers is appropriate when two Viewers cover the same
geographic area (at least partially), and are referenced to the same
map units. When two Viewers are linked:
Either the same geographic point is displayed in the centers of
both Viewers, or a box shows where one view fits inside the
other.
Scrolling one Viewer affects the other.
You can manipulate the zoom ratio of one Viewer from another.
Any inquire cursors in one Viewer appear in the other, for
multiple-Viewer pixel inquiry.
The auto-zoom is enabled, if the Viewers have the same zoom
ratio and nearly the same window size.
It is often helpful to display a wide view of a scene in one Viewer,
and then a close-up of a particular area in another Viewer. When two
such Viewers are linked, a box opens in the wide view window to
show where the close-up view lies.
Any image that is displayed at a magnification (higher zoom ratio) of
another image in a linked Viewer is represented in the other Viewer
by a box. If several Viewers are linked together, there may be
multiple boxes in that Viewer.
Figure 46 shows how one view fits inside the other linked Viewer. The
link box shows the extent of the larger-scale view.
Figure 46: Linked Viewers
Zoom and Roam Zooming enlarges an image on the display. When an image is
zoomed, it can be roamed (scrolled) so that the desired portion of
the image appears on the display screen. Any image that does not
fit entirely in the Viewer can be roamed and/or zoomed. Roaming
and zooming have no effect on how the image is stored in the file.
The zoom ratio describes the size of the image on the screen in terms
of the number of file pixels used to store the image. It is the ratio of
the number of screen pixels in the X or Y dimension to the number
that are used to display the corresponding file pixels.
A zoom ratio greater than 1 is a magnification, which makes the
image features appear larger in the Viewer. A zoom ratio less than 1
is a reduction, which makes the image features appear smaller in the
Viewer.
NOTE: ERDAS IMAGINE allows floating point zoom ratios, so that
images can be zoomed at virtually any scale (i.e., continuous
fractional zoom). Resampling is necessary whenever an image is
displayed with a new pixel grid. The resampling method used when
an image is zoomed is the same one used when the image is
displayed, as specified in the Open Raster Layer dialog. The default
resampling method is Nearest Neighbor.
Zoom the data in the Viewer via the Viewer menu bar, the
Viewer tool bar, or the Quick View right-button menu.
Geographic Information To prepare to run many programs, it may be necessary to determine
the data file coordinates, map coordinates, or data file values for a
particular pixel or a group of pixels. By displaying the image in the
Viewer and then selecting the pixel(s) of interest, important
information about the pixel(s) can be viewed.
The Quick View right-button menu gives you options to view
information about a specific pixel. Use the Raster Attribute
Editor to access information about classes in a thematic layer.
See Geographic Information Systems for information about
attribute data.
Enhancing Continuous Raster Layers
Working with the brightness values in the colormap is useful for
image enhancement. Often, a trial and error approach is needed to
produce an image that has the right contrast and highlights the right
features. By using the tools in the Viewer, it is possible to quickly
view the effects of different enhancement techniques, undo
enhancements that are not helpful, and then save the best results to
disk.
Table 33: Overview of Zoom Ratio
A zoom ratio of 1 means... each file pixel is displayed with 1 screen pixel in the Viewer.
A zoom ratio of 2 means... each file pixel is displayed with a block of 2 × 2 screen pixels. Effectively, the image is displayed at 200%.
A zoom ratio of 0.5 means... each block of 2 × 2 file pixels is displayed with 1 screen pixel. Effectively, the image is displayed at 50%.
Use the Raster options from the Viewer to enhance continuous
raster layers.
See Enhancement for more information on enhancing
continuous raster layers.
Creating New Image Files It is easy to create a new image file (.img) from the layer(s)
displayed in the Viewer. The new image file contains three
continuous raster layers (RGB), regardless of how many layers are
currently displayed. The Image Information utility must be used to
create statistics for the new image file before the file is enhanced.
Annotation layers can be converted to raster format, and written to
an image file. Or, vector data can be gridded into an image,
overwriting the values of the pixels in the image plane, and
incorporated into the same band as the image.
Use the Viewer to .img function to create a new image file from
the currently displayed raster layers.
Mosaic
Introduction The Mosaic process offers you the capability to stitch images
together so one large, cohesive image of an area can be created.
Using the features of the Mosaic Tool, you can smooth these images
before mosaicking them together, color balance them, or adjust the
histogram of each image in order to present a better overall picture.
It is necessary for the images to
contain map and projection information, but they do not need to be
in the same projection or have the same cell sizes. The input images
must have the same number of layers.
In addition to Mosaic Tool, Mosaic Wizard and Mosaic Direct are
features designed to make the Mosaic process easier for you. The
Mosaic Wizard will take you through the steps of creating a Mosaic
project. Mosaic Direct is designed to simplify the mosaic process by
gathering important information regarding the mosaic project from
you and then building the project without a lot of pre-processing by
you. The difference between the two is that Mosaic Wizard is a simplified
interface with minimal options for the beginning Mosaic user while
Mosaic Direct allows the regular or advanced Mosaic user to easily
and quickly set up Mosaic projects.
Mosaic Tool still offers you the most options and allows the most
input from you. There are a number of features included with the
Mosaic Tool to aid you in creating a better mosaicked image from
many separate images. In this chapter, the following features will be
discussed as part of the Mosaic Tool input image options followed by
an overview of Mosaic Wizard and Mosaic Direct. In Input Image
Mode for Mosaic Tool:
Exclude Areas
Image Dodging
Color Balancing
Histogram Matching
You can choose from the following when using Intersection Mode:
Set Overlap Function
Weighted Cutline Generation
Geometry-based Cutline Generation
Different options for choosing a cutline source
These options are available as part of the Output Image Mode:
Output Image Options
Preview the Mosaic
Run the Mosaic Process to disk
Input Image Mode
Exclude Areas When you decide to mosaic images together, you will probably want
to use Image Dodging, Color Balancing, or Histogram Matching to
give the finished mosaicked image a smoother look devoid of bright
patches or shadowy areas that can appear on images. Many of the
color differences are caused by camera angle or cloud cover.
Before applying any of those features, you can use the Exclude Areas
feature to mark any types of areas you do not want to be taken into
account during a Color Balancing, Image Dodging, or Histogram
Matching process. Areas like dark water or bright urban areas can be
excluded so as not to throw off the process.
The Exclude Areas function works on the principle of defining an AOI
(area of interest) in a particular image, and excluding that area if you
wish. The feature makes it very easy to pinpoint and draw a polygon
around specific areas by featuring two viewers, one with a shot of the
entire image, and one zoomed to the AOI you have selected with the
Link cursor.
If you right-click your mouse while your cursor is in the viewer, you
will notice several options offered to help you better view your
images by fitting the image to the viewer window, changing the Link
cursor color, zooming in or out, rotating the image, changing band
combinations, and so on.
At the bottom of the Set Exclude Areas dialog, there is a tool bar with
options for creating a polygon for your AOI, using the Region
Growing tool for your AOI, selecting multiple AOIs, displaying AOI
styles, and finding and removing similar areas to your chosen AOI.
Image Dodging The Image Dodging feature of the Mosaic Tool applies a filter and
global statistics across each image you are mosaicking in order to
smooth out light imbalance over the image. The outcome of Image
Dodging is very similar to that of Color Balancing, but if you wish to
perform both functions on your images before mosaicking, you need
to do Image Dodging first. Unlike Color Balancing, Image Dodging
uses blocks instead of pixels to balance the image.
When you bring up the Image Dodging dialog, you will see several
different sections. Options for Current Image, Options for All
Images, and Display Setting are all above the viewer area showing
the image and a place for previewing the dodged image. If you want
to skip dodging for a certain image, you can check the Don't do
dodging on this image box and skip to the next image you want to
mosaic.
In the area titled Statistics Collection, you can change the Grid Size,
Skip Factor X, and Skip Factor Y. If you want a specific number to
apply to all of your images, you can click that button so you don't
have to reenter the information with each new image.
In Options For All Images, you can first choose whether the image
should be dodged by each band or as one. You then decide if you
want the dodging performed across all of the images you intend to
mosaic or just one image. This is helpful if you have a set of images
that all look smooth except for one that may show a shadow or bright
spot in it. If you click Edit Correction Settings, you will get a prompt
to Compute Settings first. If you want to, go ahead and compute the
settings you have stipulated in the dialog. After the settings are
computed, you will see a dialog titled Set Dodging Correction
Parameters. In this dialog you are able to change and reset the
brightness and contrast and the constraints of the image.
Use Display Setting to choose either an RGB image or a Single Band
image. If using an RGB image, you can change those bands to
whatever combination you wish. After you compute the settings a
final time, preview the dodged image in the dialog viewer so you will
know if you need to do anything further to it before mosaicking.
Color Balancing When you click Use Color Balancing, you are given the option of
Automatic Color Balancing. If you choose this option, the method will
be chosen for you. If you want to manually choose the surface
method and display options, choose Manual Color Manipulation in the
Set Color Balancing dialog.
Mosaic Color Balancing gives you several options to balance any
color disparities in your images before mosaicking them together
into one large image. When you choose to use Color Balancing in the
Color Corrections dialog, you will be asked if you want to color
balance your images automatically or manually. For more control
over how the images are color balanced, you should choose the
manual color balancing option. Once you choose this option, you will
have access to the Mosaic Color Balancing tool where you can choose
different surface methods, display options, and surface settings for
color balancing your images.
Surface Methods
When choosing a surface method you should concentrate on how the
light abnormality in your image is dispersed. Depending on the
shape of the bright or shadowed area you want to correct, you
should choose one of the following:
Parabolic - The color difference is elliptical and does not darken at
an equal rate on all sides.
Conic - The color difference will peak in brightness in the center
and darken at an equal rate on all sides.
Linear - The color difference is graduated across the image.
Exponential - The color difference is very bright in the center and
slowly, but not always evenly, darkens on all sides.
It may be necessary to experiment a bit when trying to decide what
surface method to use. It can sometimes be particularly difficult to
tell the difference right away between parabolic, conic, and
exponential. Conic is usually best for hot spots found in aerial
photography although linear may be necessary in those situations
due to the correction of flight line variations. The linear method is
also useful for images with a large fall off in illumination along the
look direction, especially with SAR images, and also with off-nadir
viewing sensors.
In the same area, you will see a check box for Common center for all
layers. If you check this option, all layers in the current image will
have their center points set to that of the current layer. Whenever
the selector is moved, the text box updated, or the reset button
clicked, all of the layers will be updated. If you move the center
point, and you wish to bring it back to the middle of the image, you
can click Reset Center Point in the Surface Method area.
Display Setting
The Display Setting area of the Mosaic Color Balancing tool lets you
choose between RGB images and Single Band images. You can also
alter which layer in an RGB image is the red, green, or blue.
Surface Settings
When you choose a Surface Method, the Surface Settings become
the parameters used in that method's formula. The parameters
define the surface, and the surface will then be used to flatten the
brightness variation throughout the image. You can change the
following Surface Settings:
Offset
Scale
Center X
Center Y
Axis Ratio
As you change the settings, you can see the Image Profile graph
change as well. If you want to preview the color balanced image
before accepting it, you can click Preview at the bottom of the Mosaic
Color Balancing tool. This is helpful because you can change any
disparities that still exist in the image.
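The exact formulas behind each surface method are internal to the tool, but as a rough illustration of how settings such as Offset, Scale, Center X, Center Y, and Axis Ratio could define a parabolic brightness surface that is then divided out of an image, consider the following sketch (illustrative Python/NumPy only; the parameter roles are assumptions):

import numpy as np

def parabolic_surface(shape, offset, scale, center_x, center_y, axis_ratio):
    # An assumed form of a parabolic brightness surface: brightest near
    # the center, falling off with the squared distance from it.
    rows, cols = np.indices(shape, dtype=float)
    d2 = (cols - center_x) ** 2 + axis_ratio * (rows - center_y) ** 2
    return offset + scale * d2

image = np.full((200, 200), 100.0)
surface = parabolic_surface(image.shape, offset=1.0, scale=-5e-6,
                            center_x=100, center_y=100, axis_ratio=1.0)
hot_spot = image * surface        # simulate a brightness hot spot
flattened = hot_spot / surface    # dividing the surface back out flattens it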
Histogram Matching Histogram Matching is used in other facets of IMAGINE, but it is
particularly useful to the mosaicking process. You should use the
Histogram Matching option to match data of the same or adjacent
scenes that was captured on different days, or data that is slightly
different because of sun or atmospheric effects.
By choosing Histogram Matching through the Color Corrections
dialog in Mosaic Tool, you have the options of choosing the Matching
Method, the Histogram Type, and whether or not to use an external
reference file. When choosing a Matching Method, decide if you want
your images to be matched according to all the other images you
want to mosaic or just matched to the overlapping areas between
the images. For Histogram Type you can choose to match images
band by band or by the intensity (RGB) of the images.
If you check Use external reference, you will get the choice of using
an image file or parameters as your Histogram Source. If you have
an image that contains the characteristics you would like to see in
the image you are running through Histogram Matching, then you
should use it.
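The general idea of histogram matching, independent of the IMAGINE implementation, is to remap one image's values so that its cumulative histogram follows that of a reference. A generic sketch in Python/NumPy:

import numpy as np

def match_histogram(source, reference, bins=256):
    # Remap source values so the cumulative distribution of the output
    # approximates the cumulative distribution of the reference band.
    s_hist, _ = np.histogram(source, bins=bins, range=(0, bins))
    r_hist, r_edges = np.histogram(reference, bins=bins, range=(0, bins))
    s_cdf = np.cumsum(s_hist) / source.size
    r_cdf = np.cumsum(r_hist) / reference.size
    # For each source level, find the reference level with a similar CDF value.
    mapping = np.interp(s_cdf, r_cdf, r_edges[:-1])
    return mapping[np.clip(source.astype(int), 0, bins - 1)]

source = np.random.randint(0, 128, (100, 100))
reference = np.random.randint(64, 256, (100, 100))
matched = match_histogram(source, reference)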
Intersection Mode When you mosaic images, you will have overlapping areas. For those
overlapping areas, you can specify a cutline so that the pixels on one
side of a particular cutline take the value of one overlapping image,
while the pixels on the other side of the cutline take the value of
another overlapping image. The cutlines can be generated manually
or automatically.
When you choose the Set Mode for Intersection button on the Mosaic
Tool toolbar, you have several different options for handling the
overlapping of your images. The features for dealing with image
overlap include:
Loading cutlines from a vector file (a shapefile or arc coverage
file)
Editing cutlines as vectors in the viewer
Automatic clipping, extending, and merging of cutlines that cross
multiple image intersections
Loading images and calibration information from triangulated
block files as well as setting the elevation source
Selecting mosaic output areas with ASCII files containing corner
coordinates of sheets that may be rotated. The ASCII import tool
is used to try to parse ASCII files that do not conform to a
predetermined format.
Allowing users to save cutlines and intersections to a pair of
shapefiles
Loading clip boundary output regions from AOI or vector files.
This boundary applies to all output regions. Pixels outside the clip
boundary will be set to the background color.
Set Overlap Function When you are using more than one image, you need to define how
they should overlap. Set Overlap Function gives you the options of
no cutline existing, and if one does not exist, how to handle the
overlap of images as well as if a cutline exists, then what smoothing
or feathering options to use concerning the cutline.
No Cutline Exists
When no cutline exists between overlapping images, you will need to
choose how to handle the overlap. You are given the following
choices:
Overlay
Average
Minimum
Maximum
Feather
Cutline Exists
When a cutline does exist between images, you will need to decide
on smoothing and feathering options to cover the overlap area in the
vicinity of the cutline. The Smoothing Options area allows you to
choose both the Distance and the Smoothing Filter. The Feathering
Options given are No Feathering, Feathering, and Feathering by
Distance. If you choose Feathering by Distance, you will be able to
enter a specific distance.
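For a single overlap pixel, the no-cutline options can be sketched as follows (illustrative Python/NumPy; the feathering shown is a simple distance-based weighting and is an assumption about how the option behaves):

import numpy as np

def combine_overlap(a, b, method, distance_a=None, distance_b=None):
    # a and b are the two images' values for the same overlap pixel.
    if method == 'overlay':
        return a                        # top image wins
    if method == 'average':
        return (a + b) / 2.0
    if method == 'minimum':
        return np.minimum(a, b)
    if method == 'maximum':
        return np.maximum(a, b)
    if method == 'feather':
        # Weight each image by its distance from its own edge, so the
        # transition blends gradually across the overlap (assumed behavior).
        w = distance_a / (distance_a + distance_b)
        return w * a + (1.0 - w) * b
    raise ValueError(method)

print(combine_overlap(100.0, 140.0, 'average'))
print(combine_overlap(100.0, 140.0, 'feather', distance_a=3.0, distance_b=1.0))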
Automatically Generate Cutlines For Intersection
The current implementation of Automatic Cutline Generation is
geometry-based. The method uses the centerlines of the overlapping
polygons as cutlines. While this is a very straightforward approach,
it is not recommended for images containing buildings, bridges,
rivers, and so on, because the method may make
the mosaicked images look obviously inaccurate near the cutline
area. For example, if the cutline crosses a bridge, the bridge may
look broken at the point where the cutline crosses it.
Weighted Cutline Generation
When your overlapping images contain buildings, bridges, rivers,
roads, or anything else where it is very important that the cutline not
break, you should use the Weighted Cutline Generation option. The
Weighted Cutline Generation option generates the most nadir cutline
first. The most nadir cutline is divided into sections where a section
is a collection of continuous cutline segments shared by the two
images. The starting and ending points of these sections are called
nodes. Between nodes, a cutline is refined based on a cost function.
The point with the smallest cost will be picked as a new cutline
vertex.
Cutline Refining Parameters
The Weighted Cutline Generation dialog's first section is Cutline
Refining Parameters. In this section you can choose the Segment
Length, which specifies the segment length of the refined cutline.
The smaller the segment length, the smaller the search area for the
next vertex will be, and the chances are reduced of the cutline
cutting through features such as roads or bridges. This is an
especially important consideration for dense urban areas with many
buildings. Smaller segment lengths will usually slow down the finding
of edges, but the chances of cutting through important features
remain small.
The Bounding Width specifies the constraint to the new vertices in
the vertical direction of the segment between two nodes. More
specifically, the distance from a new vertex to the segment between
the two nodes must be no more than half of the value specified
by the Bounding Width field.
Cost Function Weighting Factors
The Cost Function used in cutline refinement is a weighted
combination of direction, standard deviation, and difference in gray
value. The weighting is in favor of high standard deviation, a low
difference in gray value, and direction that is closest to the direction
between the two nodes. The default value is one for all three
weighting factors. When left at the default value, all three
components play the same role. When you increment one weighting
factor, that factor will play a larger role. If you set a weighting factor
to zero, the corresponding component will not play a role at all. If
you set all three weighting factors to zero, the cutline refinement will
not be done, and the weighted cutline will be reduced to the most
nadir cutline.
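The exact cost function is not given here, but a weighted combination along the lines described might be sketched as follows (illustrative Python only; the normalization and sign conventions are assumptions):

def cutline_cost(direction_deviation, local_std_dev, gray_difference,
                 w_dir=1.0, w_std=1.0, w_gray=1.0):
    # Lower cost is better. The weighting favors a direction close to the
    # node-to-node direction (small deviation), a high local standard
    # deviation, and a small difference in gray value between the images.
    return (w_dir * direction_deviation
            - w_std * local_std_dev
            + w_gray * gray_difference)

# Candidate vertices: the one with the smallest cost is chosen.
candidates = [
    {'direction_deviation': 5.0, 'local_std_dev': 20.0, 'gray_difference': 3.0},
    {'direction_deviation': 2.0, 'local_std_dev': 4.0, 'gray_difference': 1.0},
]
best = min(candidates, key=lambda c: cutline_cost(**c))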
Geometry-based Cutline Generation
Geometry-based Cutline Generation is simpler because
it is based only on the geometry of the overlapping region between
images. Pixel values of the involved images are not used. For an
overlapping region that only involves two images, the geometry-
based cutline can be seen as a center line of the overlapping area
that cuts the region into two equal halves. One half is closer to the
center of the first image, and the other half is closer to the center of
the second image. Geometry-based Cutline Generation runs very
quickly compared to Weighted Cutline Generation. Geometry-based
generation does not have to factor in pixels from the images. Use the
geometry-based method when your images contain homogeneous
areas like grasses or lakes, but use Weighted Cutline Generation for
images containing features that the cutline must not break, such as
buildings, roads, rivers, and urban areas.
Output Image Mode
After you have chosen images to be mosaicked, gone through any
color balancing or histogram matching, or image dodging, and
checked overlapping images for possible cutline needs, you are
ready to output the images to an actual mosaic file. When you select
the Set Mode for Output portion of the Mosaic Tool, the first feature
you will want to use is Output Image Options. After choosing those
options, you can preview the mosaic and then run it to disc.
Output Image Options This dialog lets you define your output map areas and change output
map projection if you wish. You will be given the choice of using
Union of All Inputs, User-defined AOI, Map Series File, USGS Maps
Database, or ASCII Sheet File as your defining feature for an output
map area. The default is Union of All Inputs.
Different choices yield different options to further modify the output
image. For instance, if you select User-defined AOI, then you are
given the choice of outputting multiple AOI objects to either multiple
files or a single file. If you choose Map Series File, you will be able to
enter the filename you want to use and choose whether to treat the
map extent as pixel centers or pixel edges.
If you choose ASCII Sheet File to define the Output Map Area, you
will need to supply a text file. If you need to create an ASCII file, you
should do so according to the following definitions:
ASCII Sheet File Definition:
The ASCII Sheet File may have one or more records in the following
format. Fields are white space delimited.
Field 1: Sheet name.
Field 2: One of UL, UR, LL, or LR to identify which coordinate the
following field represents.
Field 3: X coordinate
Field 4: Y coordinate
Fields 2-4 may be repeated for any two of the coordinates or for all
four. If all four coordinates are present, the sheet will be treated as
a rotated orthoimage. Otherwise, it will be treated as a north-up
orthoimage.
Last Field: -99. This terminates the record.
Here is an example of an ASCII Sheet File Definition:
some_sheet_name
UL 0 0
LR 10 10
-99
Or, another example:
some_other_sheet
UL 0 0
LL 3 5
LR 5 5
UR 3 3
-99
Alternate ASCII Sheet File Definition
North-up Orthoimages
Each line represents one sheet. A line consists of four floating point
values representing two corners of the output sheet. The name of the
sheet may also be present.
Rotated Orthoimages
Each line represents one sheet. A line consists of eight floating-point
values representing four corners of the output sheet. The name of
the sheet may also be present.
Examples:
0 0 10 10
OR
some_name 0 0 10 10
OR
0 0 3 0 3 3 0 3
OR
some_name 0 0 3 0 3 3 0 3
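As a rough illustration of the alternate north-up format shown above, the following Python sketch parses one sheet per line. It is not part of ERDAS IMAGINE, and the handling of the optional sheet name is an assumption based on the examples.

def parse_north_up_sheet(line):
    # A line holds four floating point values (two corners of the sheet),
    # optionally preceded by a sheet name, e.g. "some_name 0 0 10 10".
    tokens = line.split()
    name = None
    if len(tokens) == 5:
        name, tokens = tokens[0], tokens[1:]
    x1, y1, x2, y2 = (float(t) for t in tokens)
    return name, (x1, y1), (x2, y2)

print(parse_north_up_sheet("some_name 0 0 10 10"))
# ('some_name', (0.0, 0.0), (10.0, 10.0))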
Also part of Output Image Options is the option of choosing a Clip Boundary. If you choose Clip Boundary, any area outside of the Clip Boundary will be designated as the background value in your output
image. This differs from the User-defined AOI because Clip Boundary
applies to all output images. You can also click Change Output Map
Projection to bring up the Projection Chooser. The Projection
Chooser lets you choose a particular projection to use from
categories and projections around the world. If you want to choose
a customized map projection, you can do that as well.
You are also given the option of changing the Output Cell Size from
the default of 8.0, and you can choose a particular Output Data Type
from a dropdown list instead of the default Unsigned 8 bit.
When you are done selecting Output Image Options, you can
preview the mosaicked image before saving it as a file.
Run Mosaic To Disc When you are ready to process the mosaicked image to disc, you can
click this icon and open the Output File Name dialog. From this
dialog, browse to the directory where you want to store your
mosaicked image, and enter the file name for the image. There are
several options on the Output Options tab such as Output to a
Common Look Up Table, Ignore Input Values, Output Background
Value, and Create Output in Batch mode. You can choose from any
of these according to your desired outcome.
Enhancement
Introduction Image enhancement is the process of making an image more
interpretable for a particular application (Faust, 1989).
Enhancement makes important features of raw, remotely sensed
data more interpretable to the human eye. Enhancement techniques
are often used instead of classification techniques for feature
extraction: studying and locating areas and objects on the ground
and deriving useful information from images.
The techniques to be used in image enhancement depend upon:
Your data - the different bands of Landsat, SPOT, and other
imaging sensors are selected to detect certain features. You must
know the parameters of the bands being used before performing
any enhancement. (See Raster Data for more details.)
Your objective - for example, sharpening an image to identify
features that can be used for training samples requires a
different set of enhancement techniques than reducing the
number of bands in the study. You must have a clear idea of the
final product desired before enhancement is performed.
Your expectations - what you think you are going to find.
Your background - your experience in performing enhancement.
This chapter discusses these enhancement techniques available with
ERDAS IMAGINE:
Data correction - radiometric and geometric correction

Radiometric enhancement - enhancing images based on the values of individual pixels

Spatial enhancement - enhancing images based on the values of individual and neighboring pixels

Spectral enhancement - enhancing images by transforming the values of each pixel on a multiband basis

Hyperspectral image processing - an extension of the techniques used for multispectral data sets

Fourier analysis - techniques for eliminating periodic noise in imagery

Radar imagery enhancement - techniques specifically designed for enhancing radar imagery
See Bibliography to find current literature that provides a more
detailed discussion of image processing enhancement
techniques.
Display vs. File
Enhancement
With ERDAS IMAGINE, image enhancement may be performed:
temporarily, upon the image that is displayed in the Viewer (by
manipulating the function and display memories), or
permanently, upon the image data in the data file.
Enhancing a displayed image is much faster than enhancing an
image on disk. If one is looking for certain visual effects, it may be
beneficial to perform some trial and error enhancement techniques
on the display. Then, when the desired results are obtained, the
values that are stored in the display device memory can be used to
make the same changes to the data file.
For more information about displayed images and the memory
of the display device, see Image Display.
Spatial Modeling
Enhancements
Two types of models for enhancement can be created in ERDAS
IMAGINE:
Graphical models - use Model Maker (Spatial Modeler) to easily, and with great flexibility, construct models that can be used to enhance the data.

Script models - for even greater flexibility, use the Spatial Modeler Language (SML) to construct models in script form. SML scripts can be written, edited, and run from the Spatial Modeler component or directly from the command line. You can edit models created with Model Maker using SML or Model Maker.
Although a graphical model and a script model look different, they
produce the same results when applied.
Image Interpreter
ERDAS IMAGINE supplies many algorithms constructed as models,
which are ready to be applied with user-input parameters at the
touch of a button. These graphical models, created with Model
Maker, are listed as menu functions in the Image Interpreter. These
functions are mentioned throughout this chapter. Just remember,
these are modeling functions which can be edited and adapted as
needed with Model Maker or the SML.
See Geographic Information Systems for more information on
Raster Modeling.
The modeling functions available for enhancement in Image
Interpreter are briefly described in Table 34.
Table 34: Description of Modeling Functions Available for
Enhancement
Function Description
SPATIAL
ENHANCEMENT
These functions enhance the
image using the values of
individual and surrounding
pixels.
Convolution Uses a matrix to average small sets of
pixels across an image.
Non-directional Edge Averages the results from two orthogonal
1st derivative edge detectors.
Focal Analysis Enables you to perform one of several
analyses on class values in an image file
using a process similar to convolution
filtering.
Texture Defines texture as a quantitative
characteristic in an image.
Adaptive Filter Varies the contrast stretch for each pixel
depending upon the DN values in the
surrounding moving window.
Statistical Filter Produces the pixel output DN by
averaging pixels within a moving window
that fall within a statistically defined
range.
Resolution Merge Merges imagery of differing spatial
resolutions.
Crisp Sharpens the overall scene luminance
without distorting the thematic content
of the image.
RADIOMETRIC
ENHANCEMENT
These functions enhance the
image using the values of
individual pixels within each
band.
LUT (Lookup Table) Stretch Creates an output image that contains
the data values as modified by a lookup
table.
Histogram Equalization Redistributes pixel values with a
nonlinear contrast stretch so that there
are approximately the same number of
pixels with each value within a range.
Histogram Match Mathematically determines a lookup
table that converts the histogram of one
image to resemble the histogram of
another.
Brightness Inversion Allows both linear and nonlinear reversal
of the image intensity range.
Haze Reduction* Dehazes Landsat 4 and 5 TM data and
panchromatic data.
Noise Reduction* Removes noise using an adaptive filter.
Destripe TM Data Removes striping from a raw TM4 or TM5
data file.
SPECTRAL
ENHANCEMENT
These functions enhance the
image by transforming the
values of each pixel on a
multiband basis.
Principal Components Compresses redundant data values into
fewer bands, which are often more
interpretable than the source data.
Inverse Principal Components Performs an inverse principal
components analysis.
Decorrelation Stretch Applies a contrast stretch to the principal
components of an image.
Tasseled Cap Rotates the data structure axes to
optimize data viewing for vegetation
studies.
RGB to IHS Transforms red, green, blue values to
intensity, hue, saturation values.
IHS to RGB Transforms intensity, hue, saturation
values to red, green, blue values.
Indices Performs band ratios that are commonly
used in mineral and vegetation studies.
Natural Color Simulates natural color for TM data.
FOURIER ANALYSIS
These functions enhance the
image by applying a Fourier
Transform to the data. NOTE:
These functions are currently
view only - no manipulation is
allowed.
Fourier Transform* Enables you to utilize a highly efficient
version of the Discrete Fourier Transform
(DFT).
Fourier Transform Editor* Enables you to edit Fourier images using
many interactive tools and filters.
Inverse Fourier Transform* Computes the inverse two-dimensional Fast Fourier Transform (FFT) of the spectrum stored.
Fourier Magnitude* Converts the Fourier Transform image into the more familiar Fourier Magnitude image.
Periodic Noise Removal* Automatically removes striping and other periodic noise from images.
Homomorphic Filter* Enhances imagery using an illumination/reflectance model.
* Indicates functions that are not graphical models.
NOTE: There are other Image Interpreter functions that do not
necessarily apply to image enhancement.
Correcting Data Each generation of sensors shows improved data acquisition and
image quality over previous generations. However, some anomalies
still exist that are inherent to certain sensors and can be corrected
by applying mathematical formulas derived from the distortions
(Lillesand and Kiefer, 1987). In addition, the natural distortion that
results from the curvature and rotation of the Earth in relation to the
sensor platform produces distortions in the image data, which can
also be corrected.
Radiometric Correction
Generally, there are two types of data correction: radiometric and
geometric. Radiometric correction addresses variations in the pixel
intensities (DNs) that are not caused by the object or scene being
scanned. These variations include:
differing sensitivities or malfunctioning of the detectors
topographic effects
atmospheric effects
Geometric Correction
Geometric correction addresses errors in the relative positions of
pixels. These errors are induced by:
sensor viewing geometry
terrain variations
Because of the differences in radiometric and geometric
correction between traditional, passively detected
visible/infrared imagery and actively acquired radar imagery,
the two are discussed separately. See "Radar Imagery
Enhancement".
Radiometric Correction:
Visible/Infrared Imagery
Striping
Striping or banding occurs if a detector goes out of adjustment - that
is, it provides readings consistently greater than or less than the
other detectors for the same band over the same ground cover.
Some Landsat 1, 2, and 3 data have striping every sixth line,
because of improper calibration of some of the 24 detectors that
were used by the MSS. The stripes are not constant data values, nor
is there a constant error factor or bias. The differing response of the
errant detector is a complex function of the data value sensed.
This problem has been largely eliminated in the newer sensors.
Various algorithms have been advanced in current literature to help
correct this problem in the older data. Among these algorithms are
simple along-line convolution, high-pass filtering, and forward and
reverse principal component transformations (Crippen, 1989a).
Data from airborne multispectral or hyperspectral imaging scanners
also shows a pronounced striping pattern due to varying offsets in
the multielement detectors. This effect can be further exacerbated
by unfavorable sun angle. These artifacts can be minimized by
correcting each scan line to a scene-derived average (Kruse, 1988).
Use the Image Interpreter or the Spatial Modeler to implement
algorithms to eliminate striping. The Spatial Modeler editing
capabilities allow you to adapt the algorithms to best address
the data. The IMAGINE Radar Interpreter Adjust Brightness
function also corrects some of these problems.
Line Dropout
Another common remote sensing device error is line dropout. Line
dropout occurs when a detector either completely fails to function,
or becomes temporarily saturated during a scan (like the effect of a
camera flash on the retina). The result is a line or partial line of data
with higher data file values, creating a horizontal streak until the
detector(s) recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line
of estimated data file values, which is based on the lines above and
below it.
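As a minimal NumPy sketch of this repair (illustrative only, not ERDAS IMAGINE code), the dropped line is simply replaced by the average of the line above and the line below; the band array and the index of the bad line are assumed to be known.

import numpy as np

def repair_line_dropout(band, bad_row):
    # Replace the bad scan line with the mean of its two neighbors.
    fixed = band.astype(np.float64)
    fixed[bad_row] = (fixed[bad_row - 1] + fixed[bad_row + 1]) / 2.0
    return fixed.astype(band.dtype)

band = np.array([[10, 12, 11],
                 [255, 255, 255],   # dropped (saturated) line
                 [14, 13, 12]], dtype=np.uint8)
print(repair_line_dropout(band, bad_row=1))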
Atmospheric Effects The effects of the atmosphere upon remotely-sensed data are not
considered errors, since they are part of the signal received by the
sensing device (Bernstein, 1983). However, it is often important to
remove atmospheric effects, especially for scene matching and
change detection analysis.
Over the past 30 years, a number of algorithms have been developed
to correct for variations in atmospheric transmission. Four categories
are mentioned here:
dark pixel subtraction
radiance to reflectance conversion
linear regressions
atmospheric modeling
Use the Spatial Modeler to construct the algorithms for these
operations.
Dark Pixel Subtraction
The dark pixel subtraction technique assumes that the pixel of lowest
DN in each band should really be zero, and hence its radiometric
value (DN) is the result of atmosphere-induced additive errors
(Crane, 1971; Chavez et al, 1977). These assumptions are very
tenuous and recent work indicates that this method may actually
degrade rather than improve the data (Crippen, 1987).
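In its simplest form the technique subtracts each band's minimum DN from every pixel in that band, treating that minimum as the atmosphere-induced additive offset. The NumPy sketch below is illustrative only (not ERDAS IMAGINE code), and using the band minimum as the dark object value is itself one of the tenuous assumptions noted above.

import numpy as np

def dark_pixel_subtraction(image):
    # image has shape (bands, rows, cols); subtract each band's minimum DN.
    offsets = image.reshape(image.shape[0], -1).min(axis=1)
    return image - offsets[:, None, None]

img = np.array([[[12, 14], [13, 20]],
                [[30, 35], [32, 40]]])
print(dark_pixel_subtraction(img))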
Radiance to Reflectance Conversion
Radiance to reflectance conversion requires knowledge of the true
ground reflectance of at least two targets in the image. These can
come from either at-site reflectance measurements, or they can be
taken from a reflectance table for standard materials. The latter
approach involves assumptions about the targets in the image.
Linear Regressions
A number of methods using linear regressions have been tried.
These techniques use bispectral plots and assume that the position
of any pixel along that plot is strictly a result of illumination. The
slope then equals the relative reflectivities for the two spectral
bands. At an illumination of zero, the regression plots should pass
through the bispectral origin. Offsets from this represent the additive
extraneous components, due to atmosphere effects (Crippen, 1987).
Atmospheric Modeling
Atmospheric modeling is computationally complex and requires
either assumptions or inputs concerning the atmosphere at the time
of imaging. The atmospheric model used to define the computations
is frequently Lowtran or Modtran (Kneizys et al, 1988). This model
requires inputs such as atmospheric profile (e.g., pressure,
temperature, water vapor, ozone), aerosol type, elevation, solar
zenith angle, and sensor viewing angle.
Accurate atmospheric modeling is essential in preprocessing
hyperspectral data sets where bandwidths are typically 10 nm or
less. These narrow bandwidth corrections can then be combined to
simulate the much wider bandwidths of Landsat or SPOT sensors
(Richter, 1990).
Geometric Correction As previously noted, geometric correction is applied to raw sensor
data to correct errors of perspective due to the Earth's curvature and
sensor motion. Today, some of these errors are commonly removed
at the sensor's data processing center. In the past, some data from
Landsat MSS 1, 2, and 3 were not corrected before distribution.
Many visible/infrared sensors are not nadir-viewing: they look to the
side. For some applications, such as stereo viewing or DEM
generation, this is an advantage. For other applications, it is a
complicating factor.
In addition, even a nadir-viewing sensor is viewing only the scene
center at true nadir. Other pixels, especially those on the view
periphery, are viewed off-nadir. For scenes covering very large
geographic areas (such as AVHRR), this can be a significant problem.
This and other factors, such as Earth curvature, result in geometric
imperfections in the sensor image. Terrain variations have the same
distorting effect, but on a smaller (pixel-by-pixel) scale. These
factors can be addressed by rectifying the image to a map.
See Rectification for more information on geometric correction
using rectification and Photogrammetric Concepts for more
information on orthocorrection.
A more rigorous geometric correction utilizes a DEM and sensor
position information to correct these distortions. This is
orthocorrection.
Radiometric
Enhancement
Radiometric enhancement deals with the individual values of the
pixels in the image. It differs from spatial enhancement (discussed
in "Spatial Enhancement"), which takes into account the values of
neighboring pixels.
Depending on the points and the bands in which they appear,
radiometric enhancements that are applied to one band may not be
appropriate for other bands. Therefore, the radiometric
enhancement of a multiband image can usually be considered as a
series of independent, single-band enhancements (Faust, 1989).
Radiometric enhancement usually does not bring out the contrast of
every pixel in an image. Contrast can be lost between some pixels,
while gained on others.
Figure 47: Histograms of Radiometrically Enhanced Data
In Figure 47, the range between j and k in the histogram of the
original data is about one third of the total range of the data. When
the same data are radiometrically enhanced, the range between j
and k can be widened. Therefore, the pixels between j and k gain
contrast - it is easier to distinguish different brightness values in
these pixels.
However, the pixels outside the range between j and k are more
grouped together than in the original histogram to compensate for
the stretch between j and k. Contrast among these pixels is lost.
Contrast Stretching When radiometric enhancements are performed on the display
device, the transformation of data file values into brightness values
is illustrated by the graph of a lookup table.
For example, Figure 48 shows the graph of a lookup table that
increases the contrast of data file values in the middle range of the
input data (the range within the brackets). Note that the input range
within the bracket is narrow, but the output brightness values for the
same pixels are stretched over a wider range. This process is called
contrast stretching.
(Figure 47 shows frequency histograms of the original data and of the enhanced data, each over the data file value range 0 to 255; j and k are reference points.)
Figure 48: Graph of a Lookup Table
Notice that the graph line with the steepest (highest) slope brings
out the most contrast by stretching output values farther apart.
Linear and Nonlinear
The terms linear and nonlinear, when describing types of spectral
enhancement, refer to the function that is applied to the data to
perform the enhancement. A piecewise linear stretch uses a polyline
function to increase contrast to varying degrees over different
ranges of the data, as in Figure 49.
Figure 49: Enhancement with Lookup Tables
Linear Contrast Stretch
A linear contrast stretch is a simple way to improve the visible
contrast of an image. It is often necessary to contrast-stretch raw
image data, so that they can be seen on the display.
In most raw data, the data file values fall within a narrow range -
usually a range much narrower than the display device is capable of
displaying. That range can be expanded to utilize the total range of
the display device (usually 0 to 255).
(Figures 48 and 49 plot output brightness values, 0 to 255, against input data file values, 0 to 255; Figure 49 shows linear, nonlinear, and piecewise linear functions.)
A two standard deviation linear contrast stretch is automatically
applied to images displayed in the Viewer.
Nonlinear Contrast Stretch
A nonlinear spectral enhancement can be used to gradually increase
or decrease contrast over a range, instead of applying the same
amount of contrast (slope) across the entire image. Usually,
nonlinear enhancements bring out the contrast in one range while
decreasing the contrast in other ranges. The graph of the function in
Figure 50 shows one example.
Figure 50: Nonlinear Radiometric Enhancement
Piecewise Linear Contrast Stretch
A piecewise linear contrast stretch allows for the enhancement of a
specific portion of data by dividing the lookup table into three
sections: low, middle, and high. It enables you to create a number
of straight line segments that can simulate a curve. You can enhance
the contrast or brightness of any section in a single color gun at a
time. This technique is very useful for enhancing image areas in
shadow or other areas of low contrast.
In ERDAS IMAGINE, the Piecewise Linear Contrast function is set
up so that there are always pixels in each data file value from 0
to 255. You can manipulate the percentage of pixels in a
particular range, but you cannot eliminate a range of data file
values.
input data file values
o
u
t
p
u
t

b
r
i
g
h
t
n
e
s
s

v
a
l
u
e
s
255
255 0
0
A piecewise linear contrast stretch normally follows two rules:
1) The data values are continuous; there can be no
break in the values between High, Middle, and Low.
Range specifications adjust in relation to any
changes to maintain the data value range.
2) The data values specified can go only in an upward,
increasing direction, as shown in Figure 51.
Figure 51: Piecewise Linear Contrast Stretch
The contrast value for each range represents the percent of the
available output range that particular range occupies. The brightness
value for each range represents the middle of the total range of
brightness values occupied by that range. Since rules 1 and 2 above
are enforced, as the contrast and brightness values are changed,
they may affect the contrast and brightness of other ranges. For
example, if the contrast of the low range increases, it forces the
contrast of the middle to decrease.
Contrast Stretch on the Display
Usually, a contrast stretch is performed on the display device only,
so that the data file values are not changed. Lookup tables are
created that convert the range of data file values to the maximum
range of the display device. You can then edit and save the contrast
stretch values and lookup tables as part of the raster data image file.
These values are loaded into the Viewer as the default display values
the next time the image is displayed.
In ERDAS IMAGINE, you can permanently change the data file
values to the lookup table values. Use the Image Interpreter LUT
Stretch function to create an .img output file with the same data
values as the displayed contrast stretched image.
(Figure 51 plots the LUT value, as a percentage of the output range, against the data value range 0 to 255, divided into Low, Middle, and High sections.)
See Raster Data for more information on the data contained in
image files.
The statistics in the image file contain the mean, standard deviation,
and other statistics on each band of data. The mean and standard
deviation are used to determine the range of data file values to be
translated into brightness values or new data file values. You can
specify the number of standard deviations from the mean that are to
be used in the contrast stretch. Usually the data file values that are
two standard deviations above and below the mean are used. If the
data have a normal distribution, then this range represents
approximately 95 percent of the data.
The mean and standard deviation are used instead of the minimum
and maximum data file values because the minimum and maximum
data file values are usually not representative of most of the data. A
notable exception occurs when the feature being sought is in
shadow. The shadow pixels are usually at the low extreme of the
data file values, outside the range of two standard deviations from
the mean.
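A minimal NumPy sketch of a two standard deviation stretch follows; it is illustrative only (not ERDAS IMAGINE code) and applies the stretch directly to the data rather than through a display lookup table.

import numpy as np

def std_dev_stretch(band, n_std=2.0):
    # Map mean - n_std*sd .. mean + n_std*sd linearly onto 0 .. 255 and clip.
    mean, sd = band.mean(), band.std()
    low, high = mean - n_std * sd, mean + n_std * sd
    stretched = (band.astype(np.float64) - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

band = np.random.randint(90, 110, size=(100, 100))
out = std_dev_stretch(band)
print(out.min(), out.max())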
The use of these statistics in contrast stretching is discussed and
illustrated in Image Display. Statistical terms are discussed in
Math Topics.
Varying the Contrast Stretch
There are variations of the contrast stretch that can be used to
change the contrast of values over a specific range, or by a specific
amount. By manipulating the lookup tables as in Figure 52, the
maximum contrast in the features of an image can be brought out.
Figure 52 shows how the contrast stretch manipulates the histogram
of the data, increasing contrast in some areas and decreasing it in
others. This is also a good example of a piecewise linear contrast
stretch, which is created by adding breakpoints to the histogram.
Figure 52: Contrast Stretch Using Lookup Tables, and Effect
on Histogram
Histogram Equalization Histogram equalization is a nonlinear stretch that redistributes pixel
values so that there is approximately the same number of pixels with
each value within a range. The result approximates a flat histogram.
Therefore, contrast is increased at the peaks of the histogram and
lessened at the tails.
Histogram equalization can also separate pixels into distinct groups
if there are few output values over a wide range. This can have the
visual effect of a crude classification.
Figure 53: Histogram Equalization
(Figure 52 shows four lookup-table graphs of output brightness values versus input data file values, each with its input and output histogram: 1. A linear stretch; values are clipped at 255. 2. A breakpoint is added to the linear function, redistributing the contrast. 3. Another breakpoint is added; contrast at the peak of the histogram continues to increase. 4. The breakpoint at the top of the function is moved so that values are not clipped. Figure 53 compares the original histogram with the histogram after equalization: pixels at the peak are spread apart and gain contrast, while pixels at the tails are grouped together and lose contrast.)
To perform a histogram equalization, the pixel values of an image
(either data file values or brightness values) are reassigned to a
certain number of bins, which are simply numbered sets of pixels.
The pixels are then given new values, based upon the bins to which
they are assigned.
The following parameters are entered:
N - the number of bins to which pixel values can be assigned. If
there are many bins or many pixels with the same value(s), some
bins may be empty.
M - the maximum of the range of the output values. The range
of the output values is from 0 to M.
The total number of pixels is divided by the number of bins, equaling
the number of pixels per bin, as shown in the following equation:

A = T / N

Where:
N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin
The pixels of each input value are assigned to bins, so that the
number of pixels in each bin is as close to A as possible. Consider
Figure 54:
Figure 54: Histogram Equalization Example
There are 240 pixels represented by this histogram. To equalize this
histogram to 10 bins, there would be:
240 pixels / 10 bins = 24 pixels per bin = A
(Figure 54 shows a histogram of 240 pixels over data file values 0 through 9, with frequencies 5, 5, 10, 15, 60, 60, 40, 30, 10, and 5; A = 24.)
To assign pixels to bins, the following equation is used:

B i = int [ ( Σ(k = 1 to i - 1) H k + H i / 2 ) / A ]

Where:
A = equalized number of pixels per bin (see above)
H i = the number of pixels with the value i (histogram)
int = integer function (truncating real numbers to integer)
B i = bin number for pixels with value i
Source: Modified from Gonzalez and Wintz, 1977
The 10 bins are rescaled to the range 0 to M. In this example, M =
9, because the input values ranged from 0 to 9, so that the equalized
histogram can be compared to the original. The output histogram of
this equalized image looks like Figure 55:
Figure 55: Equalized Histogram
Effect on Contrast
By comparing the original histogram of the example data with the
one above, you can see that the enhanced image gains contrast in
the peaks of the original histogram. For example, the input range of
3 to 7 is stretched to the range 1 to 8. However, data values at the
tails of the original histogram are grouped together. Input values 0
through 2 all have the output value of 0. So, contrast among the tail
pixels, which usually make up the darkest and brightest regions of
the input image, is lost.
(Figure 55 shows the equalized histogram over output data file values 0 through 9, with A = 24; the numbers inside the bars are the input data file values. For example, input values 0, 1, and 2 all fall in output value 0.)
The resulting histogram is not exactly flat, since the pixels can rarely
be grouped together into bins with an equal number of pixels. Sets
of pixels with the same value are never split up to form equal bins.
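The binning can be sketched in a few lines of NumPy (illustrative only, not ERDAS IMAGINE code). It follows the equation for B i above, and with the histogram from the example it reproduces the mapping described in the text (input values 0 through 2 all fall in bin 0, input 3 in bin 1, and so on).

import numpy as np

def equalize_bins(hist, n_bins):
    # hist[i] = number of pixels with input value i; A = pixels per bin.
    A = hist.sum() / n_bins
    cum_before = np.concatenate(([0], np.cumsum(hist)[:-1]))
    bins = ((cum_before + hist / 2.0) / A).astype(int)
    return np.clip(bins, 0, n_bins - 1)

hist = np.array([5, 5, 10, 15, 60, 60, 40, 30, 10, 5])
print(equalize_bins(hist, n_bins=10))
# [0 0 0 1 2 5 7 8 9 9]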
Level Slice
A level slice is similar to a histogram equalization in that it divides
the data into equal amounts. A level slice on a true color display
creates a stair-stepped lookup table. The effect on the data is that
input file values are grouped together at regular intervals into a
discrete number of levels, each with one output brightness value.
To perform a true color level slice, you must specify a range for the
output brightness values and a number of output levels. The lookup
table is then stair-stepped so that there is an equal number of input
pixels in each of the output levels.
Histogram Matching Histogram matching is the process of determining a lookup table that
converts the histogram of one image to resemble the histogram of
another. Histogram matching is useful for matching data of the same
or adjacent scenes that were scanned on separate days, or are
slightly different because of sun angle or atmospheric effects. This is
especially useful for mosaicking or change detection.
To achieve good results with histogram matching, the two input
images should have similar characteristics:
The general shape of the histogram curves should be similar.
Relative dark and light features in the image should be the same.
For some applications, the spatial resolution of the data should
be the same.
The relative distributions of land covers should be about the
same, even when matching scenes that are not of the same area.
If one image has clouds and the other does not, then the clouds
should be removed before matching the histograms. This can be
done using the AOI function. The AOI function is available from
the Viewer menu bar.
In ERDAS IMAGINE, histogram matching is performed band-to-
band (e.g., band 2 of one image is matched to band 2 of the
other image).
To match the histograms, a lookup table is mathematically derived,
which serves as a function for converting one histogram to the other,
as illustrated in Figure 56.
Figure 56: Histogram Matching

(The source histogram (a), mapped through the lookup table (b), approximates the model histogram (c); each is plotted as frequency against input values 0 to 255.)
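One common way to derive such a lookup table is to match cumulative histograms: each input value is mapped to the output value at which the reference image's cumulative frequency first reaches that of the source. The NumPy sketch below is a generic illustration of this idea, not the ERDAS IMAGINE implementation.

import numpy as np

def match_histogram_lut(source, reference, levels=256):
    src_hist, _ = np.histogram(source, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(reference, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source level, pick the first reference level whose CDF
    # reaches the source CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)
    return lut.astype(np.uint8)

src = np.random.randint(0, 100, (50, 50))
ref = np.random.randint(50, 200, (50, 50))
matched = match_histogram_lut(src, ref)[src]
print(matched.min(), matched.max())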
Brightness Inversion The brightness inversion functions produce images that have the
opposite contrast of the original image. Dark detail becomes light,
and light detail becomes dark. This can also be used to invert a
negative image that has been scanned to produce a positive image.
Brightness inversion has two options: inverse and reverse. Both
options convert the input data range (commonly 0 - 255) to 0 - 1.0.
A min-max remapping is used to simultaneously stretch the image
and handle any input bit format. The output image is in floating point
format, so a min-max stretch is used to convert the output image
into 8-bit format.
Inverse is useful for emphasizing detail that would otherwise be lost
in the darkness of the low DN pixels. This function applies the
following algorithm:
DN out = 1.0 if 0.0 < DN in < 0.1
DN out = 0.1 / DN in if 0.1 < DN in < 1

Reverse is a linear function that simply reverses the DN values:

DN out = 1.0 - DN in
Source: Pratt, 1991
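A minimal NumPy sketch of the two options follows (illustrative only, not ERDAS IMAGINE code); the input is first remapped to the 0 - 1.0 range with a min-max stretch, and the floating point result is remapped back to 8 bits, as described above.

import numpy as np

def brightness_inversion(band, mode="inverse"):
    dn = (band.astype(np.float64) - band.min()) / (band.max() - band.min())
    if mode == "inverse":
        # Emphasize detail otherwise lost in the darkness of low DN pixels.
        out = np.where(dn < 0.1, 1.0, 0.1 / np.maximum(dn, 1e-6))
    else:
        # "reverse": a linear function that simply reverses the DN values.
        out = 1.0 - dn
    return np.round(255 * (out - out.min()) / (out.max() - out.min())).astype(np.uint8)

band = np.array([[0, 10, 50], [100, 200, 255]], dtype=np.uint8)
print(brightness_inversion(band))
print(brightness_inversion(band, mode="reverse"))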
Spatial
Enhancement
While radiometric enhancements operate on each pixel individually,
spatial enhancement modifies pixel values based on the values of
surrounding pixels. Spatial enhancement deals largely with spatial
frequency, which is the difference between the highest and lowest
values of a contiguous set of pixels. Jensen (Jensen, 1986) defines
spatial frequency as the number of changes in brightness value per
unit distance for any particular part of an image.
Consider the examples in Figure 57:
zero spatial frequency - a flat image, in which every pixel has the same value

low spatial frequency - an image consisting of a smoothly varying gray scale

highest spatial frequency - an image consisting of a checkerboard of black and white pixels
Figure 57: Spatial Frequencies
This section contains a brief description of the following:
Convolution, Crisp, and Adaptive filtering
Resolution merging
See "Radar Imagery Enhancement" for a discussion of Edge
Detection and Texture Analysis. These spatial enhancement
techniques can be applied to any type of data.
Convolution Filtering Convolution filtering is the process of averaging small sets of pixels
across an image. Convolution filtering is used to change the spatial
frequency characteristics of an image (Jensen, 1996).
A convolution kernel is a matrix of numbers that is used to average
the value of each pixel with the values of surrounding pixels in a
particular way. The numbers in the matrix serve to weight this
average toward particular pixels. These numbers are often called
coefficients, because they are used as such in the mathematical
equations.
In ERDAS IMAGINE, there are four ways you can apply
convolution filtering to an image:
1) The kernel filtering option in the Viewer
2) The Convolution function in Image Interpreter
3) The IMAGINE Radar Interpreter Edge Enhancement function
4) The Convolution function in Model Maker
Filtering is a broad term, which refers to the altering of spatial or
spectral features for image enhancement (Jensen, 1996).
Convolution filtering is one method of spatial filtering. Some texts
may use the terms synonymously.
Convolution Example
To understand how one pixel is convolved, imagine that the
convolution kernel is overlaid on the data file values of the image (in
one band), so that the pixel to be convolved is in the center of the
window.
Figure 58: Applying a Convolution Kernel
Figure 58 shows a 3 × 3 convolution kernel being applied to the pixel
in the third column, third row of the sample data (the pixel that
corresponds to the center of the kernel).
To compute the output value for this pixel, each value in the
convolution kernel is multiplied by the image pixel value that
corresponds to it. These products are summed, and the total is
divided by the sum of the values in the kernel, as shown here:
integer [ ((-1 × 8) + (-1 × 6) + (-1 × 6) + (-1 × 2) + (16 × 8) + (-1 × 6) + (-1 × 2) + (-1 × 2) + (-1 × 8)) / (-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1) ]
= int [(128 - 40) / (16 - 8)]
= int (88 / 8) = int (11) = 11
In order to convolve the pixels at the edges of an image, pseudo data
must be generated in order to provide values on which the kernel can
operate. In the example below, the pseudo data are derived by
reflection. This means the top row is duplicated above the first data
row and the left column is duplicated left of the first data column. If
a second row or column is needed (for a 5 × 5 kernel, for example),
the second data row or column is copied above or left of the first copy
and so on. An alternative to reflection is to create background value
(usually zero) pseudo data; this is called Fill.
Kernel:

-1 -1 -1
-1 16 -1
-1 -1 -1

Input Data:

2 8 6 6 6
2 8 6 6 6
2 2 8 6 6
2 2 2 8 6
2 2 2 2 8
When the pixels in this example image are convolved, output values
cannot be calculated for the last row and column; here we have used
?s to show the unknown values. In practice, the last row and column
of an image are either reflected or filled just like the first row and
column.
Figure 59: Output Values for Convolution Kernel

Input Data (the first row and column are reflected pseudo data):

2 2 8 6 6 6
2 2 8 6 6 6
2 2 8 6 6 6
2 2 2 8 6 6
2 2 2 2 8 6
2 2 2 2 2 8

Output Data:

0 11 5 6 ?
1 11 5 5 ?
1 0 11 6 ?
2 1 0 11 ?
? ? ? ? ?
The kernel used in this example is a high frequency kernel, as
explained below. It is important to note that the relatively lower
values become lower, and the higher values become higher, thus
increasing the spatial frequency of the image.
Convolution Formula
The following formula is used to derive an output data file value for the pixel being convolved (in the center):

V = ( Σ(i = 1 to q) Σ(j = 1 to q) ( f ij × d ij ) ) / F

Where:
f ij = the coefficient of a convolution kernel at position i,j (in the kernel)
d ij = the data value of the pixel that corresponds to f ij
q = the dimension of the kernel, assuming a square kernel (if q = 3, the kernel is 3 × 3)
F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is 0
V = the output pixel value
In cases where V is less than 0, V is clipped to 0.
Source: Modified from Jensen, 1996; Schowengerdt, 1983
The sum of the coefficients (F) is used as the denominator of the
equation above, so that the output values are in relatively the same
range as the input values. Since F cannot equal zero (division by zero
is not defined), F is set to 1 if the sum is zero.
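A direct NumPy translation of the formula is sketched below (illustrative only, not ERDAS IMAGINE code). Edge pixels are handled by duplicating the first and last rows and columns, matching the reflection described for the example above, and the zero-sum case falls back to F = 1.

import numpy as np

def convolve(band, kernel):
    q = kernel.shape[0]
    pad = q // 2
    F = kernel.sum() or 1                    # use 1 when the coefficients sum to zero
    padded = np.pad(band.astype(np.float64), pad, mode="symmetric")
    out = np.zeros(band.shape, dtype=np.float64)
    for r in range(band.shape[0]):
        for c in range(band.shape[1]):
            window = padded[r:r + q, c:c + q]
            out[r, c] = (kernel * window).sum() / F
    return np.clip(out, 0, None).astype(int)  # V < 0 is clipped to 0

kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]])
band = np.array([[2, 8, 6, 6, 6],
                 [2, 8, 6, 6, 6],
                 [2, 2, 8, 6, 6],
                 [2, 2, 2, 8, 6],
                 [2, 2, 2, 2, 8]])
print(convolve(band, kernel))   # the center pixel evaluates to 11, as in the worked example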
Zero-Sum Kernels
Zero-sum kernels are kernels in which the sum of all coefficients in
the kernel equals zero. When a zero-sum kernel is used, then the
sum of the coefficients is not used in the convolution equation, as
above. In this case, no division is performed (F = 1), since division
by zero is not defined.
This generally causes the output values to be:
zero in areas where all input values are equal (no edges)
low in areas of low spatial frequency
extreme in areas of high spatial frequency (high values become
much higher, low values become much lower)
Therefore, a zero-sum kernel is an edge detector, which usually
smooths out or zeros out areas of low spatial frequency and creates
a sharp contrast where spatial frequency is high, which is at the
edges between homogeneous (homogeneity is low spatial
frequency) groups of pixels. The resulting image often consists of
only edges and zeros.
Zero-sum kernels can be biased to detect edges in a particular direction. For example, this 3 × 3 kernel is biased to the south (Jensen, 1996):

-1 -1 -1
1 -2 1
1 1 1

See the section on "Edge Detection" for more detailed information.

High-Frequency Kernels
A high-frequency kernel, or high-pass kernel, has the effect of increasing spatial frequency. An example is:

-1 -1 -1
-1 16 -1
-1 -1 -1

High-frequency kernels serve as edge enhancers, since they bring out the edges between homogeneous groups of pixels. Unlike edge detectors (such as zero-sum kernels), they highlight edges and do not necessarily eliminate other features.

When this kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, like this...

BEFORE            AFTER
204 200 197       204 200 197
201 106 209       201   9 209
198 200 210       198 200 210

...the low value gets lower. Inversely, when the kernel is used on a set of pixels in which a relatively high value is surrounded by lower values...

BEFORE            AFTER
64 60 57          64 60 57
61 125 69         61 187 69
58 60 70          58 60 70

...the high value becomes higher. In either case, spatial frequency is increased by this kernel.

Low-Frequency Kernels
Below is an example of a low-frequency kernel, or low-pass kernel, which decreases spatial frequency:

1 1 1
1 1 1
1 1 1

This kernel simply averages the values of the pixels, causing them to be more homogeneous. The resulting image looks either more smooth or more blurred.

For information on applying filters to thematic layers, see Geographic Information Systems.
Crisp The Crisp filter sharpens the overall scene luminance without
distorting the interband variance content of the image. This is a
useful enhancement if the image is blurred due to atmospheric haze,
rapid sensor motion, or a broad point spread function of the sensor.
The algorithm used for this function is:
1) Calculate principal components of multiband input image.
2) Convolve PC-1 with summary filter.
3) Retransform to RGB space.
The logic of the algorithm is that the first principal component (PC-
1) of an image is assumed to contain the overall scene luminance.
The other PCs represent intra-scene variance. Thus, you can sharpen
only PC-1 and then reverse the principal components calculation to
reconstruct the original image. Luminance is sharpened, but
variance is retained.
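A rough NumPy sketch of this idea follows; it is illustrative only, not the ERDAS IMAGINE implementation. An unsharp-style boost of PC-1 stands in for the summary filter, and the principal components are computed directly from the band covariance matrix.

import numpy as np

def mean3x3(a):
    # 3 x 3 moving average with duplicated (symmetric) edges.
    p = np.pad(a, 1, mode="symmetric")
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def crisp(image):
    # image: (bands, rows, cols). Sharpen only PC-1, then reverse the
    # principal components transform so interband variance is retained.
    bands, rows, cols = image.shape
    X = image.reshape(bands, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    _, vecs = np.linalg.eigh(np.cov(X - mean))
    vecs = vecs[:, ::-1]                          # largest eigenvalue first
    pcs = vecs.T @ (X - mean)
    pc1 = pcs[0].reshape(rows, cols)
    pcs[0] = (2.0 * pc1 - mean3x3(pc1)).ravel()   # boost the high frequencies of PC-1
    return (vecs @ pcs + mean).reshape(bands, rows, cols)

img = np.random.rand(3, 64, 64) * 255
print(crisp(img).shape)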
Resolution Merge The resolution of a specific sensor can refer to radiometric, spatial,
spectral, or temporal resolution.
See Raster Data for a full description of resolution types.
Landsat TM sensors have seven bands with a spatial resolution of
28.5 m. SPOT panchromatic has one broad band with very good
spatial resolution - 10 m. Combining these two images to yield a
seven-band data set with 10 m resolution provides the best
characteristics of both sensors.
A number of models have been suggested to achieve this image
merge. Welch and Ehlers
(Welch and Ehlers, 1987) used forward-reverse RGB to IHS
transforms, replacing I (from transformed TM data) with the SPOT
panchromatic image. However, this technique is limited to three
bands (R, G, B).
Chavez (Chavez et al, 1991), among others, uses the forward-
reverse principal components transforms with the SPOT image,
replacing PC-1.
In the above two techniques, it is assumed that the intensity
component (PC-1 or I) is spectrally equivalent to the SPOT
panchromatic image, and that all the spectral information is
contained in the other PCs or in H and S. Since SPOT data do not
cover the full spectral range that TM data do, this assumption does
not strictly hold. It is unacceptable to resample the thermal band
(TM6) based on the visible (SPOT panchromatic) image.
Another technique (Schowengerdt, 1980) combines a high frequency
image derived from the high spatial resolution data (i.e., SPOT
panchromatic) additively with the high spectral resolution Landsat
TM image.
The Resolution Merge function has two different options for
resampling low spatial resolution data to a higher spatial resolution
while retaining spectral information:
forward-reverse principal components transform
multiplicative
Principal Components Merge
Because a major goal of this merge is to retain the spectral
information of the six TM bands (1 - 5, 7), this algorithm is
mathematically rigorous. It is assumed that:
PC-1 contains only overall scene luminance; all interband
variation is contained in the other 5 PCs, and
scene luminance in the SWIR bands is identical to visible scene
luminance.
With the above assumptions, the forward transform into PCs is
made. PC-1 is removed and its numerical range (min to max) is
determined. The high spatial resolution image is then remapped so
that its histogram shape is kept constant, but it is in the same
numerical range as PC-1. It is then substituted for PC-1 and the
reverse transform is applied. This remapping is done so that the
mathematics of the reverse transform do not distort the thematic
information (Welch and Ehlers, 1987).
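The remapping step can be sketched as a linear rescale of the high resolution image into PC-1's numerical range (NumPy, illustrative only; the panchromatic image is assumed to have already been resampled to the multispectral grid).

import numpy as np

def remap_to_range(pan, pc1):
    # Rescale the high resolution image so its minimum and maximum match
    # PC-1 while the shape of its histogram stays constant.
    pan = pan.astype(np.float64)
    scaled = (pan - pan.min()) / (pan.max() - pan.min())
    return scaled * (pc1.max() - pc1.min()) + pc1.min()

pan = np.random.randint(0, 256, (100, 100))
pc1 = np.random.randn(100, 100) * 40.0 + 120.0
substitute = remap_to_range(pan, pc1)
print(round(float(substitute.min()), 2), round(float(pc1.min()), 2))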
Multiplicative
The second technique in the Image Interpreter uses a simple
multiplicative algorithm:
(DN TM1) × (DN SPOT) = DN new TM1
The algorithm is derived from the four component technique of
Crippen (Crippen, 1989a). In this paper, it is argued that of the four
possible arithmetic methods to incorporate an intensity image into a
chromatic image (addition, subtraction, division, and multiplication),
only multiplication is unlikely to distort the color.
However, in his study Crippen first removed the intensity component
via band ratios, spectral indices, or PC transform. The algorithm
shown above operates on the original image. The result is an
increased presence of the intensity component. For many
applications, this is desirable. People involved in urban or suburban
studies, city planning, and utilities routing often want roads and
cultural features (which tend toward high reflection) to be
pronounced in the image.
Brovey Transform
In the Brovey Transform method, three bands are used according to
the following formula:
[DN B1 / (DN B1 + DN B2 + DN B3)] × [DN high res. image] = DN B1_new

[DN B2 / (DN B1 + DN B2 + DN B3)] × [DN high res. image] = DN B2_new

[DN B3 / (DN B1 + DN B2 + DN B3)] × [DN high res. image] = DN B3_new
Where:
B(n) = band (number)
The Brovey Transform was developed to visually increase contrast in
the low and high ends of an image's histogram (i.e., to provide
contrast in shadows, water and high reflectance areas such as urban
features). Consequently, the Brovey Transform should not be used if
preserving the original scene radiometry is important. However, it is
good for producing RGB images with a higher degree of contrast in
the low and high ends of the image histogram and for producing
visually appealing images.
Since the Brovey Transform is intended to produce RGB images, only
three bands at a time should be merged from the input multispectral
scene, such as bands 3, 2, 1 from a SPOT or Landsat TM image or 4,
3, 2 from a Landsat TM image. The resulting merged image should
then be displayed with bands 1, 2, 3 to RGB.
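A minimal NumPy sketch of the three-band formula above follows (illustrative only; the three multispectral bands and the high resolution image are assumed to be co-registered and resampled to the same grid).

import numpy as np

def brovey(b1, b2, b3, high_res):
    total = b1 + b2 + b3
    total = np.where(total == 0, 1, total)   # guard against division by zero
    return (b1 / total * high_res,
            b2 / total * high_res,
            b3 / total * high_res)

b1, b2, b3 = (np.random.rand(50, 50) * 255 for _ in range(3))
pan = np.random.rand(50, 50) * 255
r, g, b = brovey(b1, b2, b3, pan)
print(r.shape, g.shape, b.shape)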
Adaptive Filter Contrast enhancement (image stretching) is a widely applicable
standard image processing technique. However, even adjustable
stretches like the piecewise linear stretch act on the scene globally.
There are many circumstances where this is not the optimum
approach. For example, coastal studies where much of the water
detail is spread through a very low DN range and the land detail is
spread through a much higher DN range would be such a
circumstance. In these cases, a filter that adapts the stretch to the
region of interest (the area within the moving window) would
produce a better enhancement. Adaptive filters attempt to achieve
this (Fahnestock and Schowengerdt, 1983; Peli and Lim, 1982;
Schwartz and Soha, 1977).
ERDAS IMAGINE supplies two adaptive filters with user-
adjustable parameters. The Adaptive Filter function in Image
Interpreter can be applied to undegraded images, such as SPOT,
Landsat, and digitized photographs. The Image Enhancement
function in IMAGINE Radar Interpreter is better for degraded or
difficult images.
Scenes to be adaptively filtered can be divided into three broad and
overlapping categories:
Undegraded - these scenes have good and uniform illumination
overall. Given a choice, these are the scenes one would prefer to
obtain from imagery sources such as Space Imaging or SPOT.
Low luminance - these scenes have an overall or regional less
than optimum intensity. An underexposed photograph (scanned)
or shadowed areas would be in this category. These scenes need
an increase in both contrast and overall scene luminance.
High luminance - these scenes are characterized by overall
excessively high DN values. Examples of such circumstances
would be an over-exposed (scanned) photograph or a scene with
a light cloud cover or haze. These scenes need a decrease in
luminance and an increase in contrast.
No single filter with fixed parameters can address this wide variety
of conditions. In addition, multiband images may require different
parameters for each band. Without the use of adaptive filters, the
different bands would have to be separated into one-band files,
enhanced, and then recombined.
For this function, the image is separated into high and low frequency
component images. The low frequency image is considered to be
overall scene luminance. These two component parts are then
recombined in various relative amounts using multipliers derived
from LUTs. These LUTs are driven by the overall scene luminance:
DN out = K (DN Hi) + DN LL

Where:
K = user-selected contrast multiplier
Hi = high luminance (derived from the LUT)
LL = local luminance (derived from the LUT)
Figure 60: Local Luminance Intercept
Figure 60 shows the local luminance intercept, which is the output
luminance value that an input luminance value of 0 would be
assigned.
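The recombination can be sketched as follows (NumPy, illustrative only). A simple moving-average low-pass stands in for the frequency separation, and the lookup tables are reduced to using the low frequency image directly as the local luminance term, so this is not the ERDAS IMAGINE implementation.

import numpy as np

def adaptive_stretch(band, K=1.5, box=7):
    # Split the band into low frequency (local luminance) and high
    # frequency components, then recombine: DNout = K * DNHi + DNLL.
    band = band.astype(np.float64)
    pad = box // 2
    p = np.pad(band, pad, mode="symmetric")
    low = sum(p[i:i + band.shape[0], j:j + band.shape[1]]
              for i in range(box) for j in range(box)) / (box * box)
    high = band - low
    return np.clip(K * high + low, 0, 255)

band = (np.random.rand(128, 128) * 60).astype(np.uint8)   # a dark, low-contrast scene
print(round(float(band.std()), 1), round(float(adaptive_stretch(band).std()), 1))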
Wavelet
Resolution Merge
The ERDAS IMAGINE Wavelet Resolution Merge allows multispectral
images of relatively low spatial resolution to be sharpened using a
co-registered panchromatic image of relatively higher resolution. A
primary intended target dataset is Landsat 7 ETM+. Increasing the
spatial resolution of multispectral imagery in this fashion is, in fact,
the rationale behind the Landsat 7 sensor design.
The ERDAS IMAGINE algorithm is a modification of the work of King
and Wang (King et al, 2001) with extensive input from Lemeshewsky
(Lemeshewsky, 1999, Lemeshewsky, 2002a, Lemeshewsky,
2002b). Aside from traditional Pan-Multispectral image sharpening,
this algorithm can be used to merge any two images, for example,
radar with SPOT Pan.
Fusing information from several sensors into one composite image
can take place on four levels; signal, pixel, feature, and symbolic.
This algorithm works at the pixel level. The results of pixel-level
fusion are primarily for presentation to a human observer/analyst
(Rockinger and Fechner, 1998). However, in the case of
pan/multispectral image sharpening, it must be considered that
computer-based analysis (e.g., supervised classification) could be a
logical follow-on. Thus, it is vital that the algorithm preserve the
spectral fidelity of the input dataset.
Wavelet Theory Wavelet-based image reduction is similar to Fourier transform
analysis. In the Fourier transform, long continuous (sine and cosine)
waves are used as the basis. The wavelet transform uses short,
discrete wavelets instead of a long wave. Thus the new transform
is much more local (Strang et al, 1997). In image processing terms,
the wavelet can be parameterized as a finite size moving window.
A key element of using wavelets is selection of the base waveform to
be used; the mother wavelet or basis. The basis is the basic
waveform to be used to represent the image. The input signal
(image) is broken down into successively smaller multiples of this
basis.
Wavelets are derived waveforms that have a lot of mathematically
useful characteristics that make them preferable to simple sine or
cosine functions. For example, wavelets are discrete; that is, they
have a finite length as opposed to sine waves which are continuous
and infinite in length. Once the basis waveform is mathematically
defined, a family of multiples can be created with incrementally
increasing frequency. For example, related wavelets of twice the
frequency, three times the frequency, four times the frequency, etc.
can be created.
Once the waveform family is defined, the image can be decomposed
by applying coefficients to each of the waveforms. Given a sufficient
number of waveforms in the family, all the detail in the image can be
defined by coefficient multiples of the ever-finer waveforms.
In practice, the coefficients of the discrete high-pass filter are of
more interest than the wavelets themselves. The wavelets are rarely
even calculated (Shensa, 1992). In image processing, we do not
want to get deeply involved in mathematical waveform
decomposition; we want relatively rapid processing kernels (moving
windows). Thus, we use the above theory to derive moving window,
high-pass kernels which approximate the waveform decomposition.
For image processing, orthogonal and biorthogonal transforms are of
interest. With orthogonal transforms, the new axes are mutually
perpendicular and the output signal has the same length as the input
signal. The matrices are unitary and the transform is lossless. The
same filters are used for analysis and reconstruction.
In general, biorthogonal (and symmetrical) wavelets are more
appropriate than orthogonal wavelets for image processing
applications (Strang et al, 1997, p. 362-363). Biorthogonal wavelets
are ideal for image processing applications because of their
symmetry and perfect reconstruction properties. Each biorthogonal
wavelet has a reconstruction order and a decomposition order
associated with it. For example, biorthogonal 3.3 denotes a
biorthogonal wavelet with reconstruction order 3 and decomposition
order 3. For biorthogonal transforms, the lengths of and angles
between the new axes may change. The new axes are not
necessarily perpendicular. The analysis and reconstruction filters are
not required to be the same. They are, however, mathematically
constrained so that no information is lost, perfect reconstruction is
possible and the matrices are invertible.
The signal processing properties of the Discrete Wavelet Transform
(DWT) are strongly determined by the choice of high-pass
(bandpass) filter (Shensa, 1992). Although biorthogonal wavelets
are phase linear, they are shift variant due to the decimation
process, which saves only even-numbered averages and differences.
This means that the resultant subimage changes if the starting point
is shifted (translated) by one pixel. For the commonly used, fast
(Mallat, 1989) discrete wavelet decomposition algorithm, a shift of
the input image can produce large changes in the values of the
wavelet decomposition coefficients. One way to overcome this is to
use an average of each average and difference pair.
Once selected, the wavelets are applied to the input image
recursively via a pyramid algorithm or filter bank. This is commonly
implemented as a cascading series of highpass and lowpass filters,
based on the mother wavelet, applied sequentially to the low-pass
image of the previous recursion. After filtering at any level, the low-
pass image (commonly termed the approximation image) is
passed to the next finer filtering in the filter bank. The high-pass
images (termed horizontal, vertical, and diagonal) are retained
for later image reconstruction. In practice, three or four recursions
are sufficient.
2-D Discrete Wavelet Transform
A 2-D Discrete Wavelet Transform of an image yields four
components:
approximation coefficients

horizontal coefficients - variations along the columns

vertical coefficients - variations along the rows

diagonal coefficients - variations along the diagonals

(Gonzalez and Woods, 2001)
Figure 61: Schematic Diagram of the Discrete Wavelet
Transform - DWT
Decomposition uses two wavelet filters, one low-pass and one high-pass. The rows of the image are
convolved with the low-pass and high-pass filters and the result is
downsampled along the columns. This yields two subimages whose
horizontal resolutions are reduced by a factor of 2. The high-pass or
detailed coefficients characterize the images high frequency
information with vertical orientation while the low-pass component
contains its low frequency, vertical information. Both subimages are
again filtered columnwise with the same low-pass and high-pass
filters and downsampled along rows.
Thus, for each input image, we have four subimages each reduced
by a factor of 4 compared to the original image; , , , and
.
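To make the preceding description concrete, the following sketch runs one level of a 2-D DWT and prints the four subimages it produces. It is an illustration only, assuming the open-source PyWavelets (pywt) package and a synthetic array; it is not the ERDAS IMAGINE implementation.

import numpy as np
import pywt  # PyWavelets; assumed available, not part of ERDAS IMAGINE

# A synthetic 256 x 256 array stands in for one image band.
image = np.random.rand(256, 256)

# One level of the 2-D DWT with a biorthogonal 3.3 wavelet.
# cA is the approximation (low-pass) subimage; cH, cV, cD are the
# horizontal, vertical, and diagonal detail subimages.
cA, (cH, cV, cD) = pywt.dwt2(image, 'bior3.3')
print(image.shape, cA.shape, cH.shape, cV.shape, cD.shape)

# Perfect reconstruction: the inverse transform rebuilds the input.
rebuilt = pywt.idwt2((cA, (cH, cV, cD)), 'bior3.3')
print(np.allclose(image, rebuilt[:image.shape[0], :image.shape[1]]))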
2-D Inverse Discrete Wavelet Transform
The reduced components of the input images are passed as input to
the low-pass and high-pass reconstruction filters (different from the ones used for decomposition), as shown in Figure 62.
Figure 62: Inverse Discrete Wavelet Transform - DWT-1
The sequence of steps is the opposite of that in the DWT: the subimages are upsampled along rows (since the last step in the DWT was downsampling along rows) and convolved with the low-pass and high-pass filters columnwise (in the DWT the columns were filtered last). These intermediate outputs are combined, upsampled along columns, then filtered rowwise, and finally combined to yield the original image.
Algorithm Theory
The basic theory of the decomposition is that an image can be separated into high-frequency and low-frequency components. For example, a low-pass filter can be used to create a low-frequency image. Subtracting this low-frequency image from the original image would create the corresponding high-frequency image. These two images contain all of the information in the original image. If they were added together, the result would be the original image.
The same could be done by high-pass filtering an image, and the
corresponding low-frequency image could be derived. Again, adding
the two together would yield the original image. Any image can be
broken into various high- and low-frequency components using
various high- and low-pass filters. The wavelet family can be thought
of as a high-pass filter. Thus wavelet-based high- and low-frequency
images can be created from any input image. By definition, the low-
frequency image is of lower resolution and the high-frequency image
contains the detail of the image.
This process can be repeated recursively. The created low-frequency
image could be again processed with the kernels to create new
images with even lower resolution. Thus, starting with a 5-meter
image, a 10-meter low-pass image and the corresponding high-pass
image could be created. A second iteration would create 20-meter low-pass and corresponding high-pass images. A third recursion would create 40-meter low- and high-frequency images, and so on.
Consider two images taken on the same day of the same area: one
a 5-meter panchromatic, the other 40-meter multispectral. The 5-
meter has better spatial resolution, but the 40-meter has better
spectral resolution. It would be desirable to take the high-pass
information from the 5-meter image and combine it with the 40-
meter multispectral image yielding a 5-meter multispectral image.
Using wavelets, one can decompose the 5-meter image through
several iterations until a 40-meter low-pass image is generated plus
all the corresponding high-pass images derived during the recursive
decomposition. This 40-meter low-pass image, derived from the
original 5-meter pan image, can be replaced with the 40-meter
multispectral image and the whole wavelet decomposition process
reversed, using the high-pass images derived during the
decomposition, to reconstruct a 5-meter resolution multispectral
image. The approximation component of the high spectral resolution
image and the horizontal, vertical, and diagonal components of the
high spatial resolution image are fused into a new output image.
If all of the above calculations are done in a mathematically rigorous way (histogram match and resample before substitution, etc.), one can derive a multispectral image that has the high-pass (high-frequency) details from the 5-meter image.
In the above scenario, it should be noted that the high-resolution
image (panchromatic, perhaps) is a single band and so the
substitution image, from the multispectral image, must also be a
single band. There are tools available to compress the multispectral
image into a single band for substitution using the IHS transform or
PC transform. Alternatively, single bands can be processed sequentially.
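As a rough sketch of the merge sequence just described, the function below decomposes a high spatial resolution band, substitutes the (already histogram-matched) low spatial resolution band for the approximation image, and reconstructs. It assumes PyWavelets (pywt) and scikit-image for resampling, and the array names pan and ms_band are illustrative; it is not the ERDAS IMAGINE Wavelet Resolution Merge code.

import numpy as np
import pywt
from skimage.transform import resize  # assumed available for the resampling step

def wavelet_merge_band(pan, ms_band, wavelet='bior3.3', levels=3):
    """Substitute the multispectral band for the approximation image of the
    decomposed panchromatic band, then reconstruct (simplified sketch)."""
    coeffs = pywt.wavedec2(pan, wavelet, level=levels)
    # coeffs[0] is the approximation image after `levels` recursions.
    coeffs[0] = resize(ms_band, coeffs[0].shape, preserve_range=True)
    fused = pywt.waverec2(coeffs, wavelet)
    return fused[:pan.shape[0], :pan.shape[1]]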
Figure 63: Wavelet Resolution Merge
Prerequisites and
Limitations
Precise Coregistration
A first prerequisite is that the two images be precisely co-registered. For some sensors (e.g., Landsat 7 ETM+) this co-registration is inherent in the dataset. If this is not the case, a greatly over-defined 2nd-order polynomial transform should be used to coregister one image to the other. By over-defining the transform (that is, by having far more than the minimum number of tie points), it is possible to reduce the random RMS error to the subpixel level. This is easily accomplished by using the Point Prediction option in the GCP Tool. In practice, well-distributed tie points are collected until the predicted point consistently falls exactly where it should. At that point, the transform can be considered correct. This may require 30-60 tie points for a typical Landsat TM-SPOT Pan co-registration.
When doing the coregistration, it is generally preferable to register
the lower resolution image to the higher resolution image, i.e., the
high resolution image is used as the Reference Image. This will allow
the greatest accuracy of registration. However, if the lower resolution image has georeferencing that is to be retained, it may be desirable to use it as the Reference Image. A larger number of tie
points and more attention to precise work would then be required to
attain the same registration accuracy. Evaluation of the X- and Y-
Residual and the RMS Error columns in the ERDAS IMAGINE GCP Tool
will indicate the accuracy of registration.
It is preferable to store the high and low resolution images as
separate image files rather than Layerstacking them into a single
image file. In ERDAS IMAGINE, stacked image layers are resampled
to a common pixel size. Since the Wavelet Resolution Merge
algorithm does the pixel resampling at an optimal stage in the
calculation, this avoids multiple resamplings.
After creating the coregistered images, they should be codisplayed
in an ERDAS IMAGINE Viewer. Then the Fade, Flicker, and Swipe
Tools can be used to visually evaluate the precision of the
coregistration.
Identical Spectral Range
Secondly, an underlying assumption of resolution merge algorithms
is that the two images are spectrally identical. Thus, while a SPOT
Panchromatic image can be used to sharpen TM bands 1-4, it would
be questionable to use it for TM bands 5 and 7 and totally
inappropriate for TM band 6 (thermal emission). If the datasets are
not spectrally identical, the spectral fidelity of the MS dataset will be
lost.
It has been noted (Lemeshewsky, 2002b) that there can be
spectrally-induced contrast reversals between visible and NIR bands
at, for example, soil-vegetation boundaries. This can produce
degraded edge definition or artifacts.
Temporal Considerations
A trivial corollary is that the two images must have no temporally-
induced differences. If a crop has been harvested, trees have
dropped their foliage, lakes have grown or shrunk, etc., then
merging of the two images in that area is inappropriate. If the areas
of change are small, the merge can proceed and those areas
removed from evaluation. If, however, the areas of change are large,
the histogram matching step may introduce data distortions.
Theoretical Limitations
As described in the discussion of the discrete wavelet transform, the
algorithm downsamples the high spatial resolution input image by a
factor of two with each iteration. This produces approximation (a)
images with pixel sizes reduced by a factor of two with each
iteration. The low (spatial) resolution image will substitute exactly
for the a image only if the input images have relative pixel sizes
differing by a multiple of 2. Any other pixel size ratio will require
resampling of the low (spatial) resolution image prior to substitution.
Certain ratios can result in a degradation of the substitution image
that may not be fully overcome by the subsequent wavelet
sharpening. This will result in a less than optimal enhancement. For
the most common scenarios, Landsat ETM+, IKONOS and QuickBird,
this is not a problem.
Although the mathematics of the algorithm are precise for any pixel size ratio, a resolution increase of greater than two or three becomes theoretically questionable. For example, all images are degraded due to atmospheric refraction and scattering of the returning signal; this is termed point spread. Thus, both images in a resolution merge operation have, to some unknown extent, already been smeared. It is not reasonable to assume that each multispectral pixel can be precisely devolved into nine or more subpixels.
Spectral Transform
Three merge scenarios are possible. The simplest is when the input low (spatial) resolution image is only one band; a single band of a multispectral image, for example. In this case, the only option is to select which band to use. If the low resolution image to be processed is a multispectral image, two methods are offered for creating the grayscale representation of the multispectral image intensity: IHS and PC.
The IHS method accepts only 3 input bands. It has been suggested
that this technique produces an output image that is the best for
visual interpretation. Thus, this technique would be appropriate
when producing a final output product for map production. Since a
visual product is likely to be only an R, G, B image, the 3-band
limitation on this method is not a distinct limitation. Clearly, if one
wished to sharpen more data layers, the bands could be done as
separate groups of 3 and then the whole dataset layerstacked back
together.
Lemeshewsky (Lemeshewsky, 2002b) discusses some theoretical
limitations on IHS sharpening that suggest that sharpening of the
bands individually (as discussed above) may be preferable. Yocky
(Yocky, 1995) demonstrates that the IHS transform can distort
colors, particularly red, and discusses theoretical explanations.
The PC Method will accept any number of input data layers. It has
been suggested (Lemeshewsky, 2002a) that this technique produces
an output image that better preserves the spectral integrity of the
input dataset. Thus, this method would be most appropriate if
further processing of the data is intended; for example, if the next
step was a classification operation. Note, however, that Zhang
(Zhang, 1999) has found equivocal results with the PC versus IHS
approaches.
The wavelet, IHS, and PC calculations produce single precision
floating point output. Consequently, the resultant image must
undergo a data compression to get it back to 8 bit format.
Spectral
Enhancement
The enhancement techniques that follow require more than one band
of data. They can be used to:
compress bands of data that are similar
extract new bands of data that are more interpretable to the eye
apply mathematical transforms and algorithms
display a wider variety of information in the three available color
guns (R, G, B)
In this documentation, some examples are illustrated with two-
dimensional graphs. However, you are not limited to two-
dimensional (two-band) data. ERDAS IMAGINE programs allow
an unlimited number of bands to be used. Keep in mind that
processing such data sets can require a large amount of
computer swap space. In practice, the principles outlined below
apply to any number of bands.
Some of these enhancements can be used to prepare data for
classification. However, this is a risky practice unless you are
very familiar with your data and the changes that you are
making to it. Anytime you alter values, you risk losing some
information.
Principal Components Analysis
Principal components analysis (PCA) is often used as a method of data compression. It allows redundant data to be compacted into fewer bands; that is, the dimensionality of the data is reduced. The bands of PCA data are noncorrelated and independent, and are often more interpretable than the source data (Jensen, 1996; Faust, 1989).
The process is easily explained graphically with an example of data
in two bands. Below is an example of a two-band scatterplot, which
shows the relationships of data file values in two bands. The values
of one band are plotted against those of the other. If both bands
have normal distributions, an ellipse shape results.
Scatterplots and normal distributions are discussed in Math
Topics.
Figure 64: Two Band Scatterplot
Ellipse Diagram
In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid
(3 dimensions), or hyperellipsoid (more than 3 dimensions) is
formed if the distributions of each input band are normal or near
normal. (The term ellipse is used for general purposes here.)
To perform PCA, the axes of the spectral space are rotated, changing
the coordinates of each pixel in spectral space, as well as the data
file values. The new axes are parallel to the axes of the ellipse.
First Principal Component
The length and direction of the widest transect of the ellipse are
calculated using matrix algebra in a process explained below. The
transect, which corresponds to the major (longest) axis of the
ellipse, is called the first principal component of the data. The
direction of the first principal component is the first eigenvector, and
its length is the first eigenvalue (Taylor, 1977).
A new axis of the spectral space is defined by this first principal
component. The points in the scatterplot are now given new
coordinates, which correspond to this new axis. Since, in spectral
space, the coordinates of the points are the data file values, new
data file values are derived from this process. These values are
stored in the first principal component band of a new data file.
Figure 65: First Principal Component
The first principal component shows the direction and length of the
widest transect of the ellipse. Therefore, as an axis in spectral space,
it measures the highest variation within the data. In Figure 66 it is
easy to see that the first eigenvalue is always greater than the
ranges of the input bands, just as the hypotenuse of a right triangle
must always be longer than the legs.
Figure 66: Range of First Principal Component
Successive Principal Components
The second principal component is the widest transect of the ellipse
that is orthogonal (perpendicular) to the first principal component.
As such, the second principal component describes the largest
amount of variance in the data that is not already described by the
first principal component (Taylor, 1977). In a two-dimensional
analysis, the second principal component corresponds to the minor
axis of the ellipse.
Figure 67: Second Principal Component
In n dimensions, there are n principal components. Each successive
principal component:
is the widest transect of the ellipse that is orthogonal to the
previous components in the n-dimensional space of the
scatterplot (Faust, 1989), and
accounts for a decreasing amount of the variation in the data
which is not already accounted for by previous principal
components (Taylor, 1977).
Although there are n output bands in a PCA, the first few bands account for a high proportion of the variance in the data; in some cases, almost 100%. Therefore, PCA is useful for compressing data into fewer bands.
In other applications, useful information can be gathered from the
principal component bands with the least variance. These bands can
show subtle details in the image that were obscured by higher
contrast in the original image. These bands may also show regular
noise in the data (for example, the striping in old MSS data) (Faust,
1989).
Computing Principal Components
To compute a principal components transformation, a linear
transformation is performed on the data. This means that the
coordinates of each pixel in spectral space (the original data file
values) are recomputed using a linear equation. The result of the
transformation is that the axes in n-dimensional spectral space are
shifted and rotated to be relative to the axes of the ellipse.
To perform the linear transformation, the eigenvectors and
eigenvalues of the n principal components must be mathematically
derived from the covariance matrix, as shown in the following
equation:
E Cov E^T = V
Where:
Cov = the covariance matrix
E = the matrix of eigenvectors
T = the transposition function
V = a diagonal matrix of eigenvalues, in which all nondiagonal
elements are zeros
V is computed so that its nonzero elements are ordered from greatest to least, so that v_1 > v_2 > v_3 > ... > v_n:

$$V = \begin{bmatrix} v_1 & 0 & 0 & \cdots & 0 \\ 0 & v_2 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & v_n \end{bmatrix}$$
Source: Faust, 1989
A full explanation of this computation can be found in Gonzalez
and Wintz, 1977.
The matrix V is the covariance matrix of the output principal component file. The zeros represent the covariance between bands (there is none), and the eigenvalues are the variance values for each band. Because the eigenvalues are ordered from v_1 to v_n, the first eigenvalue is the largest and represents the most variance in the data.
Each column of the resulting eigenvector matrix, E, describes a unit-
length vector in spectral space, which shows the direction of the
principal component (the ellipse axis). The numbers are used as
coefficients in the following equation, to transform the original data
file values into the principal component values.
$$P_e = \sum_{k=1}^{n} d_k E_{ke}$$
Where:
e = the number of the principal component (first, second, etc.)
P_e = the output principal component value for principal component number e
k = a particular input band
n = the total number of bands
d_k = an input data file value in band k
E_ke = the eigenvector matrix element at row k, column e
Source: Modified from Gonzalez and Wintz, 1977
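The two equations above can be exercised with a short numeric sketch. The example below, assuming numpy and an image held as a (rows, cols, bands) array, derives the eigenvalues and eigenvectors from the covariance matrix and applies the transformation; it is illustrative only, not the Image Interpreter implementation.

import numpy as np

def principal_components(image):
    """image: (rows, cols, bands) array of data file values."""
    rows, cols, n = image.shape
    d = image.reshape(-1, n).astype(np.float64)   # pixels as rows, bands as columns

    cov = np.cov(d, rowvar=False)                 # n x n covariance matrix

    # Eigen-decomposition; columns of E are unit-length eigenvectors,
    # v holds the eigenvalues (the variances of the new bands).
    v, E = np.linalg.eigh(cov)

    # Order from greatest to least, so that v1 > v2 > ... > vn.
    order = np.argsort(v)[::-1]
    v, E = v[order], E[:, order]

    # P_e = sum over k of d_k * E_ke, applied to every pixel at once.
    # (In practice the data are often mean-centered before this step.)
    P = d @ E
    return P.reshape(rows, cols, n), v, E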
Decorrelation Stretch
The purpose of a contrast stretch is to:
alter the distribution of the image DN values within the 0 - 255 range of the display device, and
utilize the full range of values in a linear fashion.
The decorrelation stretch applies this stretch to the principal components of an image, not to the original image.
A principal components transform converts a multiband image into a
set of mutually orthogonal images portraying inter-band variance.
Depending on the DN ranges and the variance of the individual input
bands, these new images (PCs) occupy only a portion of the possible
0 - 255 data range.
Each PC is separately stretched to fully utilize the data range. The new stretched PC composite image is then retransformed to the original data axes.
Either the original PCs or the stretched PCs may be saved as a
permanent image file for viewing after the stretch.
NOTE: Storage of PCs as floating point, single precision is probably
appropriate in this case.
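As a rough sketch of the procedure (assuming numpy and the illustrative principal_components helper above, neither of which is the ERDAS IMAGINE implementation), each PC is min-max stretched to the display range and the stretched components are rotated back to the original band space:

import numpy as np

def decorrelation_stretch(image):
    """Stretch each principal component to 0-255, then transform back
    to the original band axes (illustrative sketch only)."""
    P, v, E = principal_components(image)        # hypothetical helper defined above
    rows, cols, n = P.shape
    flat = P.reshape(-1, n)

    lo = flat.min(axis=0)
    hi = flat.max(axis=0)
    stretched = (flat - lo) / (hi - lo) * 255.0   # per-PC min-max stretch

    # E is orthogonal, so its transpose undoes the rotation.
    back = stretched @ E.T
    return back.reshape(rows, cols, n)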
Tasseled Cap
The different bands in a multispectral image can be visualized as defining an N-dimensional space where N is the number of bands. Each pixel, positioned according to its DN value in each band, lies within the N-dimensional space. This pixel distribution is determined by the absorption/reflection spectra of the imaged material. This clustering of the pixels is termed the data structure (Crist and Kauth, 1986).
See Raster Data for more information on absorption/reflection
spectra. See the discussion on "Principal Components Analysis".
The data structure can be considered a multidimensional
hyperellipsoid. The principal axes of this data structure are not
necessarily aligned with the axes of the data space (defined as the
bands of the input image). They are more directly related to the
absorption spectra. For viewing purposes, it is advantageous to
rotate the N-dimensional space such that one or two of the data
structure axes are aligned with the Viewer X and Y axes. In
particular, you could view the axes that are largest for the data
structure produced by the absorption peaks of special interest for the
application.
For example, a geologist and a botanist are interested in different
absorption features. They would want to view different data
structures and therefore, different data structure axes. Both would
benefit from viewing the data in a way that would maximize visibility
of the data structure of interest.
The Tasseled Cap transformation offers a way to optimize data
viewing for vegetation studies. Research has produced three data
structure axes that define the vegetation information content (Crist
et al, 1986, Crist and Kauth, 1986):
Brightness: a weighted sum of all bands, defined in the direction of the principal variation in soil reflectance.
Greenness: orthogonal to brightness, a contrast between the near-infrared and visible bands. Strongly related to the amount of green vegetation in the scene.
Wetness: relates to canopy and soil moisture (Lillesand and Kiefer, 1987).
A simple calculation (linear combination) then rotates the data space
to present any of these axes to you.
These rotations are sensor-dependent, but once defined for a
particular sensor (say Landsat 4 TM), the same rotation works for
any scene taken by that sensor. The increased dimensionality
(number of bands) of TM vs. MSS allowed Crist et al (Crist et al,
1986) to define three additional axes, termed Haze, Fifth, and Sixth.
Lavreau (Lavreau, 1991) has used this haze parameter to devise an
algorithm to dehaze Landsat imagery.
The Tasseled Cap algorithm implemented in the Image Interpreter provides the correct coefficients for MSS, TM4, and TM5 imagery. For TM4, the calculations are:
Brightness = .3037 (TM1) + .2793 (TM2) + .4743 (TM3) + .5585 (TM4) + .5082 (TM5) + .1863 (TM7)
Greenness = -.2848 (TM1) - .2435 (TM2) - .5436 (TM3) + .7243 (TM4) + .0840 (TM5) - .1800 (TM7)
Wetness = .1509 (TM1) + .1973 (TM2) + .3279 (TM3) + .3406 (TM4) - .7112 (TM5) - .4572 (TM7)
Haze = .8832 (TM1) - .0819 (TM2) - .4580 (TM3) - .0032 (TM4) - .0563 (TM5) + .0130 (TM7)
Source: Modified from Crist et al, 1986, Jensen, 1996
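Expressed as code, each Tasseled Cap axis is simply a weighted sum of the input bands. The sketch below applies the TM4 Brightness, Greenness, and Wetness coefficients listed above to a numpy array holding TM bands 1-5 and 7; the band ordering and array layout are assumptions of the example.

import numpy as np

# Coefficients for Landsat 4 TM bands 1, 2, 3, 4, 5, 7 (from the equations above).
TC_TM4 = np.array([
    [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],   # Brightness
    [-0.2848, -0.2435, -0.5436,  0.7243,  0.0840, -0.1800],   # Greenness
    [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],   # Wetness
])

def tasseled_cap_tm4(image):
    """image: (rows, cols, 6) array of TM bands 1-5 and 7.
    Returns a (rows, cols, 3) array of Brightness, Greenness, Wetness."""
    rows, cols, _ = image.shape
    flat = image.reshape(-1, 6).astype(np.float64)
    return (flat @ TC_TM4.T).reshape(rows, cols, 3)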
RGB to IHS
The color monitors used for image display on image processing
systems have three color guns. These correspond to red, green, and
blue (R,G,B), the additive primary colors. When displaying three
bands of a multiband data set, the viewed image is said to be in
R,G,B space.
However, it is possible to define an alternate color space that uses
intensity (I), hue (H), and saturation (S) as the three positioned
parameters (in lieu of R,G, and B). This system is advantageous in
that it presents colors more nearly as perceived by the human eye.
Intensity is the overall brightness of the scene (like PC-1) and
varies from 0 (black) to 1 (white).
Saturation represents the purity of color and also varies linearly
from 0 to 1.
Hue is representative of the color or dominant wavelength of the
pixel. It varies from 0 at the red midpoint through green and blue
back to the red midpoint at 360. It is a circular dimension (see
Figure 68). In Figure 68, 0 to 255 is the selected range; it could
be defined as any data range. However, hue must vary from 0 to
360 to define the entire sphere (Buchanan, 1979).
Figure 68: Intensity, Hue, and Saturation Color Coordinate
System
Source: Buchanan, 1979
To use the RGB to IHS transform, use the RGB to IHS function
from Image Interpreter.
The algorithm used in the Image Interpreter RGB to IHS transform
is (Conrac Corporation, 1980):
R = (M - r) / (M - m)
G = (M - g) / (M - m)
B = (M - b) / (M - m)

Where:
R, G, B are each in the range of 0 to 1.0.
r, g, b are each in the range of 0 to 1.0.
M = largest value, r, g, or b
m = least value, r, g, or b

NOTE: At least one of the R, G, or B values is 0, corresponding to the color with the largest value, and at least one of the R, G, or B values is 1, corresponding to the color with the least value.

The equation for calculating intensity in the range of 0 to 1.0 is:

I = (M + m) / 2

The equations for calculating saturation in the range of 0 to 1.0 are:

If M = m, S = 0
If I ≤ 0.5, S = (M - m) / (M + m)
If I > 0.5, S = (M - m) / (2 - M - m)
The equations for calculating hue in the range of 0 to 360 are:
If M = m, H = 0
If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)
Where:
R,G,B are each in the range of 0 to 1.0.
M = largest value, R, G, or B
m = least value, R, G, or B
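The equations above translate almost directly into code. The sketch below assumes numpy arrays red, green, and blue already scaled to 0 to 1, and reads the conditions "R = M" and so on as selecting whichever input channel is the maximum, with the normalized values used inside the hue terms; it mirrors the formulation given here rather than calling the Image Interpreter function.

import numpy as np

def rgb_to_ihs(red, green, blue):
    """red, green, blue: arrays in 0..1.  Returns I (0..1), H (0..360), S (0..1)."""
    M = np.maximum(np.maximum(red, green), blue)     # largest input value
    m = np.minimum(np.minimum(red, green), blue)     # least input value
    span = np.where(M == m, 1.0, M - m)              # guard against division by zero

    # Normalized values: 0 for the largest channel, 1 for the least.
    r = (M - red)   / span
    g = (M - green) / span
    b = (M - blue)  / span

    I = (M + m) / 2.0

    S = np.where(M == m, 0.0,
        np.where(I <= 0.5, (M - m) / (M + m), (M - m) / (2.0 - M - m)))

    H = np.where(M == m, 0.0,
        np.where(red == M,   60.0 * (2.0 + b - g),
        np.where(green == M, 60.0 * (4.0 + r - b),
                             60.0 * (6.0 + g - r))))
    return I, H, S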
IHS to RGB
The IHS to RGB transform is intended as a complement to the standard RGB to IHS transform.
The values for hue (H), a circular dimension, are 0 to 360. Intensity (I) and saturation (S) lie in the 0 to 1 range; however, depending on the dynamic range of the DN values of the input image, it is possible that I or S or both occupy only a part of that range. In this model, a min-max stretch is applied to either I, S, or both, so that they more fully utilize the 0 to 1 value range. After stretching, the full IHS image is retransformed back to the original RGB space. Because the parameter hue is not modified, and hue largely defines what we perceive as color, the resultant image looks very much like the input image.
It is not essential that the input parameters (IHS) to this transform
be derived from an RGB to IHS transform. You could define I and/or
S as other parameters, set Hue at 0 to 360, and then transform to
RGB space. This is a method of color coding other data sets.
In another approach (Daily, 1983), H and I are replaced by low- and
high-frequency radar imagery. You can also replace I with radar
intensity before the IHS to RGB transform (Croft (Holcomb), 1993).
Chavez evaluates the use of the IHS to RGB transform to resolution
merge Landsat TM with SPOT panchromatic imagery (Chavez et al,
1991).
NOTE: Use the Spatial Modeler for this analysis.
See the previous section on RGB to IHS transform for more
information.
The algorithm used by ERDAS IMAGINE for the IHS to RGB function
is (Conrac Corporation, 1980):
Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0

If I ≤ 0.5, M = I (1 + S)
If I > 0.5, M = I + S - I (S)
m = 2 * I - M

The equations for calculating R in the range of 0 to 1.0 are:

If H < 60, R = m + (M - m)(H / 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m)((240 - H) / 60)
If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m)((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m)((360 - H) / 60)

Equations for calculating B in the range of 0 to 1.0:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m)((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m)((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M
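Transcribed literally into code (a sketch assuming numpy arrays I, H, S in the ranges given above; it is not the Image Interpreter function):

import numpy as np

def ihs_to_rgb(I, H, S):
    """I, S in 0..1; H in 0..360.  Literal transcription of the equations above."""
    M = np.where(I <= 0.5, I * (1.0 + S), I + S - I * S)
    m = 2.0 * I - M

    def ramp(h):                     # m rising to M over a 60 degree interval
        return m + (M - m) * (h / 60.0)

    R = np.select([H < 60, H < 180, H < 240, H <= 360],
                  [ramp(H), M, ramp(240.0 - H), m])
    G = np.select([H < 120, H < 180, H < 300, H <= 360],
                  [m, ramp(H - 120.0), M, ramp(360.0 - H)])
    B = np.select([H < 60, H < 120, H < 240, H < 300, H <= 360],
                  [M, ramp(120.0 - H), m, ramp(H - 240.0), M])
    return R, G, B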
Indices
Indices are used to create output images by mathematically combining the DN values of different bands. These may be simplistic:

(Band X - Band Y)

or more complex:

(Band X - Band Y) / (Band X + Band Y)

In many instances, these indices are ratios of band DN values:

Band X / Band Y

These ratio images are derived from the absorption/reflection spectra of the material of interest. The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often gives information on the chemical composition of the target.
See Raster Data for more information on the
absorption/reflection spectra.
Applications
Indices are used extensively in mineral exploration and
vegetation analysis to bring out small differences between
various rock types and vegetation classes. In many cases,
judiciously chosen indices can highlight and enhance differences
that cannot be observed in the display of the original color bands.
Indices can also be used to minimize shadow effects in satellite
and aircraft multispectral images. Black and white images of
individual indices or a color combination of three ratios may be
generated.
Certain combinations of TM ratios are routinely used by
geologists for interpretation of Landsat imagery for mineral type.
For example: Red 5/7, Green 5/4, Blue 3/1.
Integer Scaling Considerations
The output images obtained by applying indices are generally
created in floating point to preserve all numerical precision. If there
are two bands, A and B, then:
ratio = A/B
If A>>B (much greater than), then a normal integer scaling would
be sufficient. If A>B and A is never much greater than B, scaling
might be a problem in that the data range might only go from 1 to 2
or from 1 to 3. In this case, integer scaling would give very little
contrast.
For cases in which A<B or A<<B, integer scaling would always
truncate to 0. All fractional data would be lost. A multiplication
constant factor would also not be very effective in seeing the data
contrast between 0 and 1, which may very well be a substantial part
of the data image. One approach to handling the entire ratio range
is to actually process the function:
ratio = atan(A/B)
This would give a better representation for A/B < 1 as well as for A/B
> 1.
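A minimal sketch of this idea, assuming numpy arrays a and b for the two bands: the arctangent maps every possible ratio into a bounded range, which can then be stretched to 8-bit without losing the contrast between 0 and 1.

import numpy as np

def scaled_ratio(a, b):
    """8-bit image of the band ratio a/b using the atan mapping."""
    ratio = np.arctan(np.divide(a, b, out=np.zeros_like(a, dtype=float),
                                where=(b != 0)))
    # For non-negative ratios, arctan output lies in [0, pi/2);
    # stretch it linearly into the 0-255 display range.
    return np.uint8(ratio / (np.pi / 2.0) * 255.0)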
Index Examples
The following are examples of indices that have been
preprogrammed in the Image Interpreter in ERDAS IMAGINE:
IR/R (infrared/red)
SQRT (IR/R)
Vegetation Index = IR - R

Normalized Difference Vegetation Index (NDVI) = (IR - R) / (IR + R)

Transformed NDVI (TNDVI) = sqrt((IR - R) / (IR + R) + 0.5)

Iron Oxide = TM 3/1

Clay Minerals = TM 5/7

Ferrous Minerals = TM 5/4

Mineral Composite = TM 5/7, 5/4, 3/1

Hydrothermal Composite = TM 5/7, 3/1, 4/3

Source: Modified from Sabins, 1987; Jensen, 1996; Tucker, 1979

The following table shows the infrared (IR) and red (R) band for some common sensors (Tucker, 1979; Jensen, 1996):

Sensor        IR Band    R Band
Landsat MSS   7          5
SPOT XS       3          2
Landsat TM    4          3
NOAA AVHRR    2          1
Image Algebra
Image algebra is a general term used to describe operations that
combine the pixels of two or more raster layers in mathematical
combinations. For example, the calculation:
(infrared band) - (red band)
DNir - DNred
yields a simple, yet very useful, measure of the presence of
vegetation. At the other extreme is the Tasseled Cap calculation
(described in the following pages), which uses a more complicated
mathematical combination of as many as six bands to define
vegetation.
Band ratios, such as:

TM5 / TM7 = clay minerals
are also commonly used. These are derived from the absorption
spectra of the material of interest. The numerator is a baseline of
background absorption and the denominator is an absorption peak.
See Raster Data for more information on absorption/reflection
spectra.
NDVI is a combination of addition, subtraction, and division:

NDVI = (IR - R) / (IR + R)
Hyperspectral Image Processing
Hyperspectral image processing is, in many respects, simply an
extension of the techniques used for multispectral data sets; indeed,
there is no set number of bands beyond which a data set is
hyperspectral. Thus, many of the techniques or algorithms currently
used for multispectral data sets are logically applicable, regardless
of the number of bands in the data set (see the discussion of Figure
7 of this manual). What is of relevance in evaluating these data sets
is not the number of bands per se, but the spectral bandwidth of the
bands (channels). As the bandwidths get smaller, it becomes
possible to view the data set as an absorption spectrum rather than
a collection of discontinuous bands. Analysis of the data in this
fashion is termed imaging spectrometry.
A hyperspectral image data set is recognized as a three-dimensional
pixel array. As in a traditional raster image, the x-axis is the column
indicator and the y-axis is the row indicator. The z-axis is the band
number or, more correctly, the wavelength of that band (channel).
A hyperspectral image can be visualized as shown in Figure 69.
Figure 69: Hyperspectral Data Axes
A data set with narrow contiguous bands can be plotted as a
continuous spectrum and compared to a library of known spectra
using full profile spectral pattern fitting algorithms. A serious
complication in using this approach is assuring that all spectra are
corrected to the same background.
At present, it is possible to obtain spectral libraries of common
materials. The JPL and USGS mineral spectra libraries are included
in ERDAS IMAGINE. These are laboratory-measured reflectance
spectra of reference minerals, often of high purity and defined
particle size. The spectrometer is commonly purged with pure
nitrogen to avoid absorbance by atmospheric gases. Conversely, the
remote sensor records an image after the sunlight has passed
through the atmosphere (twice) with variable and unknown amounts
of water vapor, CO2. (This atmospheric absorbance curve is shown in Figure 4.) The unknown atmospheric absorbances superimposed upon the Earth's surface reflectances make comparison to
laboratory spectra or spectra taken with a different atmosphere
inexact. Indeed, it has been shown that atmospheric composition
can vary within a single scene. This complicates the use of spectral
signatures even within one scene. Atmospheric absorption and
scattering is discussed in Atmospheric Absorption.
A number of approaches have been advanced to help compensate for
this atmospheric contamination of the spectra. These are introduced
briefly in "Atmospheric Effects" for the general case. Two specific
techniques, Internal Average Relative Reflectance (IARR) and Log
Residuals, are implemented in ERDAS IMAGINE. These have the
advantage of not requiring auxiliary input information; the correction
parameters are scene-derived. The disadvantage is that they
produce relative reflectances (i.e., they can be compared to
reference spectra in a semi-quantitative manner only).
Normalize
Pixel albedo is affected by sensor look angle and local topographic effects. For airborne sensors, this look angle effect can be large across a scene. It is less pronounced for satellite sensors. Some scanners look to both sides of the aircraft. For these data sets, the difference in average scene luminance between the two half-scenes can be large.
To help minimize these effects, an equal area normalization
algorithm can be applied (Zamudio and Atkinson, 1990). This
calculation shifts each (pixel) spectrum to the same overall average
brightness. This enhancement must be used with a consideration of
whether this assumption is valid for the scene. For an image that
contains two (or more) distinctly different regions (e.g., half ocean
and half forest), this may not be a valid assumption. Correctly
applied, this normalization algorithm helps remove albedo variations
and topographic effects.
IAR Reflectance
As discussed above, it is desired to convert the spectra recorded by
the sensor into a form that can be compared to known reference
spectra. This technique calculates a relative reflectance by dividing
each spectrum (pixel) by the scene average spectrum (Kruse, 1988).
The algorithm is based on the assumption that this scene average
spectrum is largely composed of the atmospheric contribution and
that the atmosphere is uniform across the scene. However, these
assumptions are not always valid. In particular, the average
spectrum could contain absorption features related to target
materials of interest. The algorithm could then overcompensate for
(i.e., remove) these absorbance features. The average spectrum
should be visually inspected to check for this possibility. Properly
applied, this technique can remove the majority of atmospheric
effects.
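Conceptually the IARR calculation is a single division. A minimal sketch, assuming a numpy data cube of shape (rows, cols, bands):

import numpy as np

def iar_reflectance(cube):
    """Divide each pixel spectrum by the scene average spectrum (IARR sketch)."""
    avg_spectrum = cube.reshape(-1, cube.shape[2]).mean(axis=0)
    avg_spectrum = np.where(avg_spectrum == 0, 1.0, avg_spectrum)   # avoid divide by zero
    return cube / avg_spectrum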
Log Residuals
The Log Residuals technique was originally described by Green and
Craig (Green and Craig, 1985), but has been variously modified by
researchers. The version implemented here is similar to the
approach of Lyon (Lyon, 1987). The algorithm can be conceptualized
as:
Output Spectrum = (input spectrum) - (average spectrum) -
(pixel brightness) + (image brightness)
All parameters in the above equation are in logarithmic space, hence
the name.
This algorithm corrects the image for atmospheric absorption,
systemic instrumental variation, and illuminance differences
between pixels.
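A compact sketch of that conceptual equation, assuming a numpy data cube of shape (rows, cols, bands) with positive values; it follows the formula as stated above rather than any particular published variant:

import numpy as np

def log_residuals(cube):
    """Output = input - average spectrum - pixel brightness + image brightness,
    all terms in logarithmic space (sketch only)."""
    log_cube = np.log(np.clip(cube, 1e-6, None))          # guard against log(0)
    avg_spectrum = log_cube.mean(axis=(0, 1))             # per-band scene average
    pixel_brightness = log_cube.mean(axis=2, keepdims=True)
    image_brightness = log_cube.mean()
    return log_cube - avg_spectrum - pixel_brightness + image_brightness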
Rescale
Many hyperspectral scanners record the data in a format larger than
8-bit. In addition, many of the calculations used to correct the data
are performed with a floating point format to preserve precision. At
some point, it is advantageous to compress the data back into an 8-
bit range for effective storage and/or display. However, when
rescaling data to be used for imaging spectrometry analysis, it is
necessary to consider all data values within the data cube, not just
within the layer of interest. This algorithm is designed to maintain
the 3-dimensional integrity of the data values. Any bit format can be
input. The output image is always 8-bit.
When rescaling a data cube, a decision must be made as to which
bands to include in the rescaling. Clearly, a bad band (i.e., a low S/N
layer) should be excluded. Some sensors image in different regions
of the electromagnetic (EM) spectrum (e.g., reflective and thermal
infrared or long- and short-wave reflective infrared). When rescaling
these data sets, it may be appropriate to rescale each EM region
separately. These can be input using the Select Layer option in the
Viewer.
Figure 70: Rescale Graphical User Interface (GUI)
NOTE: Bands 26 through 28, and 46 through 55 have been deleted from the calculation. The deleted bands are still rescaled, but they
are not factored into the rescale calculation.
Processing Sequence
The above (and other) processing steps are utilized to convert the raw image into a form that is easier to interpret. This interpretation often involves comparing the imagery, either visually or automatically, to laboratory spectra or other known end-member spectra. At present there is no widely accepted standard processing sequence to achieve this, although some have been advanced in the scientific literature (Zamudio and Atkinson, 1990; Kruse, 1988; Green and Craig, 1985; Lyon, 1987). Two common processing sequences have been programmed as single automatic enhancements, as follows:
Automatic Relative Reflectance: implements the following algorithms: Normalize, IAR Reflectance, and Rescale.
Automatic Log Residuals: implements the following algorithms: Normalize, Log Residuals, and Rescale.
Spectrum Average
In some instances, it may be desirable to average together several
pixels. This is mentioned above under "IAR Reflectance" as a test for
applicability. In preparing reference spectra for classification, or to
save in the Spectral Library, an average spectrum may be more
representative than a single pixel. Note that to implement this
function it is necessary to define which pixels to average using the
AOI tools. This enables you to average any set of pixels that is
defined; the pixels do not need to be contiguous and there is no limit
on the number of pixels averaged. Note that the output from this
program is a single pixel with the same number of input bands as the
original image.
Figure 71: Spectrum Average GUI
Signal to Noise
The signal-to-noise (S/N) ratio is commonly used to evaluate the usefulness or validity of a particular band. In this implementation, S/N is defined as Mean/Std.Dev. in a 3 × 3 moving window. After
running this function on a data set, each layer in the output image
should be visually inspected to evaluate suitability for inclusion into
the analysis. Layers deemed unacceptable can be excluded from the
processing by using the Select Layers option of the various Graphical
User Interfaces (GUIs). This can be used as a sensor evaluation tool.
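A sketch of this definition for one band, assuming numpy and scipy.ndimage; how the window is handled at the image edges is an assumption of the example, not of the ERDAS IMAGINE implementation.

import numpy as np
from scipy import ndimage

def snr_band(band):
    """Per-pixel S/N: local mean divided by local standard deviation
    in a 3 x 3 moving window."""
    band = band.astype(float)
    mean = ndimage.uniform_filter(band, size=3)
    mean_sq = ndimage.uniform_filter(band ** 2, size=3)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return np.where(std == 0, 0.0, mean / std)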
Mean per Pixel
This algorithm outputs a single band, regardless of the number of input bands. By visually inspecting this output image, it is possible to see if particular pixels are outside the norm. While this does not mean that these pixels are incorrect, they should be evaluated in this context. For example, a CCD detector could have several sites (pixels) that are dead or have an anomalous response; these would be revealed in the mean-per-pixel image. This can be used as a sensor evaluation tool.
Profile Tools
To aid in visualizing this three-dimensional data cube, three basic tools have been designed:
Spectral Profile: a display that plots the reflectance spectrum of a designated pixel, as shown in Figure 72.
Figure 72: Spectral Profile
Spatial Profile: a display that plots spectral information along a user-defined polyline. The data can be displayed two-dimensionally for a single band, as in Figure 73.
Figure 73: Two-Dimensional Spatial Profile
The data can also be displayed three-dimensionally for multiple
bands, as in Figure 74.
Figure 74: Three-Dimensional Spatial Profile
Surface Profile: a display that allows you to designate an x,y area and view any selected layer, z.
Figure 75: Surface Profile
Wavelength Axis
Data tapes containing hyperspectral imagery commonly designate the bands as a simple numerical sequence. When plotted using the profile tools, this yields an x-axis labeled as 1, 2, 3, 4, etc. Elsewhere on the tape or in the accompanying documentation is a file that lists the center frequency and width of each band. This information should be linked to the image intensity values for accurate analysis or comparison to other spectra, such as the Spectral Libraries.
Spectral Library
Two spectral libraries are presently included in the software package
(JPL and USGS). In addition, it is possible to extract spectra (pixels)
from a data set or prepare average spectra from an image and save
these in a user-derived spectral library. This library can then be used
for visual comparison with other image spectra, or it can be used as
input signatures in a classification.
Classification
The advent of data sets with very large numbers of bands has
pressed the limits of the traditional classifiers such as Isodata,
Maximum Likelihood, and Minimum Distance, but has not obviated
their usefulness. Much research has been directed toward the use of
Artificial Neural Networks (ANN) to more fully utilize the information
content of hyperspectral images (Merenyi et al, 1996). To date,
however, these advanced techniques have proven to be only
marginally better at a considerable cost in complexity and
computation. For certain applications, both Maximum Likelihood
(Benediktsson et al, 1990) and Minimum Distance (Merenyi et al,
1996) have proven to be appropriate. Classification contains a
detailed discussion of these classification techniques.
A second category of classification techniques utilizes the imaging
spectroscopy model for approaching hyperspectral data sets. This
approach requires a library of possible end-member materials. These
can be from laboratory measurements using a scanning
spectrometer and reference standards (Clark et al, 1990). The JPL
and USGS libraries are compiled this way. The reference spectra
(signatures) can also be scene-derived from either the scene under
study or another similar scene (Adams et al, 1989).
System Requirements
Because of the large number of bands, a hyperspectral data set can be surprisingly large. For example, an AVIRIS scene is only 512 × 614 pixels in dimension, which seems small. However, when
multiplied by 224 bands (channels) and 16 bits, it requires over 140
megabytes of data storage space. To process this scene requires
corresponding large swap and temp space. In practice, it has been
found that a 48 Mb memory board and 100 Mb of swap space is a
minimum requirement for efficient processing. Temporary file space
requirements depend upon the process being run.
Fourier Analysis
Image enhancement techniques can be divided into two basic
categories: point and neighborhood. Point techniques enhance the
pixel based only on its value, with no concern for the values of
neighboring pixels. These techniques include contrast stretches
(nonadaptive), classification, and level slices. Neighborhood
techniques enhance a pixel based on the values of surrounding
pixels. As a result, these techniques require the processing of a
possibly large number of pixels for each output pixel. The most
common way of implementing these enhancements is via a moving
window convolution. However, as the size of the moving window
increases, the number of requisite calculations becomes enormous.
An enhancement that requires a convolution operation in the spatial
domain can be implemented as a simple multiplication in frequency space, a much faster calculation.
In ERDAS IMAGINE, the FFT is used to convert a raster image from
the spatial (normal) domain into a frequency domain image. The FFT
calculation converts the image into a series of two-dimensional sine
waves of various frequencies. The Fourier image itself cannot be
easily viewed, but the magnitude of the image can be calculated,
which can then be displayed either in the Viewer or in the FFT Editor.
Analysts can edit the Fourier image to reduce noise or remove
periodic features, such as striping. Once the Fourier image is edited,
it is then transformed back into the spatial domain by using an IFFT.
The result is an enhanced version of the original image.
This section focuses on the Fourier editing techniques available in the
FFT Editor. Some rules and guidelines for using these tools are
presented in this document. Also included are some examples of
techniques that generally work for specific applications, such as
striping.
NOTE: You may also want to refer to the works cited at the end of
this section for more information.
The basic premise behind a Fourier transform is that any one-
dimensional function, f(x) (which might be a row of pixels), can be
represented by a Fourier series consisting of some combination of
sine and cosine terms and their associated coefficients. For example,
a line of pixels with a high spatial frequency gray scale pattern might
be represented in terms of a single coefficient multiplied by a sin(x)
function. High spatial frequencies are those that represent frequent
gray scale changes in a short pixel distance. Low spatial frequencies
represent infrequent gray scale changes that occur gradually over a
relatively large number of pixel distances. A more complicated
function, f(x), might have to be represented by many sine and cosine
terms with their associated coefficients.
Figure 76: One-Dimensional Fourier Analysis
Figure 76 shows how a function f(x) can be represented as a linear
combination of sine and cosine. In this example the function is a
square wave whose cosine coefficients are zero leaving only sine
terms. The first three terms of the Fourier series are plotted in the
upper right graph and the plot of the sum is shown below it. After
nine iterations, the Fourier series is approaching the original
function.
A Fourier transform is a linear transformation that allows calculation
of the coefficients necessary for the sine and cosine terms to
adequately represent the image. This theory is used extensively in electronics and signal processing, where electrical signals are continuous and not discrete; for sampled data such as digital images, the Discrete Fourier Transform (DFT) has been developed.
Because of the computational load in calculating the values for all the
sine and cosine terms along with the coefficient multiplications, a
highly efficient version of the DFT was developed and called the FFT.
To handle images which consist of many one-dimensional rows of
pixels, a two-dimensional FFT has been devised that incrementally
uses one-dimensional FFTs in each direction and then combines the
result. These images are symmetrical about the origin.
Applications
Fourier transformations are typically used for the removal of noise
such as striping, spots, or vibration in imagery by identifying
periodicities (areas of high spatial frequency). Fourier editing can be
used to remove regular errors in data such as those caused by
sensor anomalies (e.g., striping). This analysis technique can also be
used across bands as another form of pattern/feature recognition.
FFT
The FFT calculation is:

$$F(u,v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j2\pi \left( ux/M \,+\, vy/N \right)}$$
Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u,v = spatial frequency variables
e = 2.71828, the natural logarithm base
j = the imaginary component of a complex number
The number of pixels horizontally and vertically must each be a
power of two. If the dimensions of the input image are not a power
of two, they are padded up to the next highest power of two. There
is more information about this later in this section.
Source: Modified from Oppenheim and Schafer, 1975; Press et al,
1988.
Images computed by this algorithm are saved with an .fft file
extension.
You should run a Fourier Magnitude transform on an .fft file
before viewing it in the Viewer. The FFT Editor automatically
displays the magnitude without further processing.
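The same operations can be tried with a general-purpose FFT. The sketch below, assuming numpy, pads an image to the next power of two with its mean value (the padding technique described later in this section) and computes the forward transform; the .fft file handling of ERDAS IMAGINE is not reproduced.

import numpy as np

def fft_power_of_two(image):
    """Pad to power-of-two dimensions with the image mean, then return
    the complex 2-D forward FFT."""
    rows, cols = image.shape
    new_rows = 1 << int(np.ceil(np.log2(rows)))
    new_cols = 1 << int(np.ceil(np.log2(cols)))
    padded = np.full((new_rows, new_cols), image.mean(), dtype=float)
    padded[:rows, :cols] = image
    return np.fft.fft2(padded)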
Fourier Magnitude
The raster image generated by the FFT calculation is not an optimum image for viewing or editing. Each pixel of a Fourier image is a complex number (i.e., it has two components: real and imaginary). For display as a single image, these components are combined in a root-sum of squares operation. Also, since the dynamic range of Fourier spectra vastly exceeds the range of a typical display device, the Fourier Magnitude calculation involves a logarithmic function.
Finally, a Fourier image is symmetric about the origin (u, v = 0, 0). If the origin is plotted at the upper left corner, the symmetry is more difficult to see than if the origin is at the center of the image. Therefore, in the Fourier magnitude image, the origin is shifted to the center of the raster array.
In this transformation, each .fft layer is processed twice. First, the maximum magnitude, |x|max, is computed. Then, the following computation is performed for each FFT element magnitude x:
$$y(x) = 255.0 \cdot \ln\!\left[ \left( \frac{|x|}{|x|_{max}} \right) (e - 1) + 1 \right]$$
Where:
x = input FFT element
y = the normalized log magnitude of the FFT element
|x|max = the maximum magnitude
e = 2.71828, the natural logarithm base
| | = the magnitude operator
This function was chosen so that y would be proportional to the logarithm of a linear function of x, with y(0) = 0 and y(|x|max) = 255.
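A sketch of that scaling in code, assuming numpy and a complex FFT array such as the one produced earlier; the shift of the origin to the center uses the standard fftshift convention:

import numpy as np

def fourier_magnitude(fft_image):
    """Log-scaled magnitude image in 0..255 with the origin at the center."""
    mag = np.abs(np.fft.fftshift(fft_image))
    y = 255.0 * np.log((mag / mag.max()) * (np.e - 1.0) + 1.0)
    return np.uint8(y)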
In Figure 77, Image A is one band of a badly striped Landsat TM
scene. Image B is the Fourier Magnitude image derived from the
Landsat image.
Figure 77: Example of Fourier Magnitude
Note that, although Image A has been transformed into Image B,
these raster images are very different symmetrically. The origin of
Image A is at (x, y) = (0, 0) in the upper left corner. In Image B, the
origin (u, v) = (0, 0) is in the center of the raster. The low
frequencies are plotted near this origin while the higher frequencies
are plotted further out. Generally, the majority of the information in
an image is in the low frequencies. This is indicated by the bright
area at the center (origin) of the Fourier image.
It is important to realize that a position in a Fourier image,
designated as (u, v), does not always represent the same frequency,
because it depends on the size of the input raster image. A large
spatial domain image contains components of lower frequency than
a small spatial domain image. As mentioned, these lower frequencies
are plotted nearer to the center (u, v = 0, 0) of the Fourier image.
Note that the units of spatial frequency are inverse length, e.g., m⁻¹.
The sampling increments in the spatial and frequency domains are related by:

$$\Delta u = \frac{1}{M \, \Delta x} \qquad \Delta v = \frac{1}{N \, \Delta y}$$

Where:
M = horizontal image size in pixels
N = vertical image size in pixels
Δx = pixel size
Δy = pixel size

For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5 m) into a Fourier image:

$$\Delta u = \Delta v = \frac{1}{512 \times 28.5} = 6.85 \times 10^{-5} \, m^{-1}$$

u or v    Frequency
0         0
1         6.85 × 10⁻⁵ m⁻¹
2         13.7 × 10⁻⁵ m⁻¹

If the Landsat TM image was 1024 × 1024:

$$\Delta u = \Delta v = \frac{1}{1024 \times 28.5} = 3.42 \times 10^{-5} \, m^{-1}$$

u or v    Frequency
0         0
1         3.42 × 10⁻⁵ m⁻¹
2         6.85 × 10⁻⁵ m⁻¹

So, as noted above, the frequency represented by a (u, v) position depends on the size of the input image.
For the above calculation, the sample images are 512 × 512 and 1024 × 1024 (powers of two). These were selected because the FFT calculation requires that the height and width of the input image be a power of two (although the image need not be square). In practice, input images usually do not meet this criterion. Three possible solutions are available in ERDAS IMAGINE:

Subset the image.

Pad the image: the input raster is increased in size to the next power of two by imbedding it in a field of the mean value of the entire input image.

Resample the image so that its height and width are powers of two.
Figure 78: The Padding Technique
The padding technique is automatically performed by the FFT program. It produces a minimum of artifacts in the output Fourier image. If the image is subset using a power of two (i.e., 64 × 64, 128 × 128, 64 × 128), no padding is used.
IFFT
The IFFT computes the inverse two-dimensional FFT of the spectrum stored.

The input file must be in the compressed .fft format described earlier (i.e., output from the FFT or FFT Editor).

If the original image was padded by the FFT program, the padding is automatically removed by IFFT.

This program creates (and deletes, upon normal termination) a temporary file large enough to contain one entire band of .fft data. The specific expression calculated by this program is:
$$f(x,y) = \frac{1}{N_1 N_2} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v)\, e^{+j2\pi \left( ux/M \,+\, vy/N \right)} \qquad 0 \le x \le M-1,\; 0 \le y \le N-1$$
Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u, v = spatial frequency variables
e = 2.71828, the natural logarithm base
Source: Modified from Oppenheim and Schafer, 1975 and Press et
al, 1988.
Images computed by this algorithm are saved with an .ifft.img
file extension by default.
Filtering
Operations performed in the frequency (Fourier) domain can be
visualized in the context of the familiar convolution function. The
mathematical basis of this interrelationship is the convolution
theorem, which states that a convolution operation in the spatial
domain is equivalent to a multiplication operation in the frequency
domain:
g(x, y) = h(x, y) * f(x, y)
is equivalent to
G(u, v) = H(u, v) × F(u, v)
Where:
f(x, y) = input image
h(x, y) = position invariant operation (convolution kernel)
g(x, y) = output image
G, F, H = Fourier transforms of g, f, h
The names high-pass, low-pass, high-frequency indicate that these
convolution functions derive from the frequency domain.
Low-Pass Filtering
The simplest example of this relationship is the low-pass kernel. The
name, low-pass kernel, is derived from a filter that would pass low
frequencies and block (filter out) high frequencies. In practice, this
is easily achieved in the spatial domain by the M = N = 3 kernel:
1 1 1
1 1 1
1 1 1

Obviously, as the size of the image and, particularly, the size of the
low-pass kernel increases, the calculation becomes more time-
consuming. Depending on the size of the input image and the size of
the kernel, it can be faster to generate a low-pass image via Fourier
processing.
Figure 79 compares Direct and Fourier domain processing for finite
area convolution.
Figure 79: Comparison of Direct and Fourier Domain
Processing
Source: Pratt, 1991
In the Fourier domain, the low-pass operation is implemented by
attenuating the pixels (frequencies) that satisfy:

u² + v² > D₀²

D₀ is often called the cutoff frequency.
As mentioned, the low-pass information is concentrated toward the
origin of the Fourier image. Thus, a smaller radius (r) has the same
effect as a larger N (where N is the size of a kernel) in a spatial
domain low-pass convolution.
As was pointed out earlier, the frequency represented by a particular
u, v (or r) position depends on the size of the input image. Thus, a
low-pass operation of r = 20 is equivalent to a spatial low-pass of
various kernel sizes, depending on the size of the input image.
For example:

Image Size    Fourier Low-Pass (r =)    Convolution Low-Pass (N =)
64 × 64       50                        3
              30                        3.5
              20                        5
              10                        9
              5                         14
128 × 128     20                        13
              10                        22
256 × 256     20                        25
              10                        42

This table shows that using a window on a 64 × 64 Fourier image
with a radius of 50 as the cutoff is the same as using a 3 × 3 low-
pass kernel on a 64 × 64 spatial domain image.

High-Pass Filtering
Just as images can be smoothed (blurred) by attenuating the high-
frequency components of an image using low-pass filters, images
can be sharpened and edge-enhanced by attenuating the low-
frequency components using high-pass filters. In the Fourier
domain, the high-pass operation is implemented by attenuating the
pixels (frequencies) that satisfy:

u² + v² < D₀²

Windows The attenuation discussed above can be done in many different
ways. In ERDAS IMAGINE Fourier processing, five window functions
are provided to achieve different types of attenuation:
Ideal
Bartlett (triangular)
Butterworth
Gaussian
Hanning (cosine)
Each of these windows must be defined when a frequency domain
process is used. This application is perhaps easiest understood in the
context of the high-pass and low-pass filter operations. Each window
is discussed in more detail:
Ideal
The simplest low-pass filtering is accomplished using the ideal
window, so named because its cutoff point is absolute. Note that in
Figure 80 the cross section is ideal.
Figure 80: An Ideal Cross Section
H(u, v) = 1   if D(u, v) ≤ D₀
H(u, v) = 0   if D(u, v) > D₀

All frequencies inside a circle of a radius D₀ are retained completely
(passed), and all frequencies outside the radius are completely
attenuated. The point D₀ is termed the cutoff frequency.
High-pass filtering using the ideal window looks like the following
illustration:
Figure 81: High-Pass Filtering Using the Ideal Window
H(u, v) = 0   if D(u, v) ≤ D₀
H(u, v) = 1   if D(u, v) > D₀

All frequencies inside a circle of a radius D₀ are completely
attenuated, and all frequencies outside the radius are retained
completely (passed).
A major disadvantage of the ideal filter is that it can cause ringing
artifacts, particularly if the radius (r) is small. The smoother
functions (e.g., Butterworth and Hanning) minimize this effect.
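The ideal window can be illustrated with a short NumPy sketch that
builds the D(u, v) ≤ D₀ mask and applies it to the Fourier transform of
an image (an illustration only; here the cutoff is expressed in cycles
per pixel rather than in pixels of radius):

import numpy as np

def ideal_window(shape, cutoff, high_pass=False):
    # Binary mask: 1 inside radius D0 of the Fourier origin, 0 outside
    # (reversed for high-pass filtering).
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None]     # vertical spatial frequencies
    v = np.fft.fftfreq(cols)[None, :]     # horizontal spatial frequencies
    distance = np.sqrt(u**2 + v**2)       # D(u, v)
    mask = distance <= cutoff
    return ~mask if high_pass else mask

image = np.random.rand(256, 256)
spectrum = np.fft.fft2(image)
low_passed = np.fft.ifft2(spectrum * ideal_window(image.shape, 0.1)).real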
Bartlett
Filtering using the Bartlett window is a triangular function, as shown
in the following low- and high-pass cross sections:
Figure 82: Filtering Using the Bartlett Window
Butterworth, Gaussian, and Hanning
The Butterworth, Gaussian, and Hanning windows are all smooth and
greatly reduce the effect of ringing. The differences between them
are minor and are of interest mainly to experts. For most normal
types of Fourier image enhancement, they are essentially
interchangeable.
The Butterworth window reduces the ringing effect because it does
not contain abrupt changes in value or slope. The following low- and
high-pass cross sections illustrate this:
Figure 83: Filtering Using the Butterworth Window
The equation for the low-pass Butterworth window is:

H(u, v) = 1 / (1 + [D(u, v) / D₀]^(2n))

NOTE: The Butterworth window approaches its window center gain
asymptotically.

The equation for the Gaussian low-pass window is:

H(u, v) = e^(−(x / D₀)²)

The equation for the Hanning low-pass window is:

H(u, v) = (1/2) [1 + cos(πx / (2 D₀))]   for 0 ≤ x ≤ 2D₀
H(u, v) = 0   otherwise
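For comparison, the three low-pass window formulas can be evaluated
directly with a short sketch. Here the distance from the Fourier origin
stands in for both D(u, v) and x, and D₀ = 1 and n = 2 are arbitrary
choices, not ERDAS IMAGINE defaults:

import numpy as np

def butterworth(d, d0, n=2):
    return 1.0 / (1.0 + (d / d0) ** (2 * n))

def gaussian(d, d0):
    return np.exp(-((d / d0) ** 2))

def hanning(d, d0):
    return np.where(d <= 2 * d0,
                    0.5 * (1 + np.cos(np.pi * d / (2 * d0))),
                    0.0)

d = np.linspace(0, 3, 7)   # distances from the origin, in units of D0 = 1
for name, window in (("Butterworth", butterworth),
                     ("Gaussian", gaussian),
                     ("Hanning", hanning)):
    print(name, np.round(window(d, 1.0), 3))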
Fourier Noise Removal Occasionally, images are corrupted by noise that is periodic in
nature. An example of this is the scan lines that are present in some
TM images. When these images are transformed into Fourier space,
the periodic line pattern becomes a radial line. The Fourier Analysis
functions provide two main tools for reducing noise in images:
editing
automatic removal of periodic noise
Editing
In practice, it has been found that radial lines centered at the Fourier
origin (u, v = 0, 0) are best removed using back-to-back wedges
centered at (0, 0). It is possible to remove these lines using very
narrow wedges with the Ideal window. However, the sudden
transitions resulting from zeroing-out sections of a Fourier image
cause ringing of the image when it is transformed back into the
spatial domain. This effect can be lessened by using a less abrupt
window, such as Butterworth.
Other types of noise can produce artifacts, such as lines not centered
at u,v = 0,0 or circular spots in the Fourier image. These can be
removed using the tools provided in the FFT Editor. As these artifacts
are always symmetrical in the Fourier magnitude image, editing
tools operate on both components simultaneously. The FFT Editor
contains tools that enable you to attenuate a circular or rectangular
region anywhere on the image.
Automatic Periodic Noise Removal
The use of the FFT Editor, as described above, enables you to
selectively and accurately remove periodic noise from any image.
However, operator interaction and a bit of trial and error are
required. The automatic periodic noise removal algorithm has been
devised to address images degraded uniformly by striping or other
periodic anomalies. Use of this algorithm requires a minimum of
input from you.
The image is first divided into 128 × 128 pixel blocks. The Fourier
Transform of each block is calculated and the log-magnitudes of each
FFT block are averaged. The averaging removes all frequency
domain quantities except those that are present in each block (i.e.,
some sort of periodic interference). The average power spectrum is
then used as a filter to adjust the FFT of the entire image. When the
IFFT is performed, the result is an image that should have any
periodic noise eliminated or significantly reduced. This method is
partially based on the algorithms outlined in Cannon (Cannon, 1983)
and Srinivasan et al (Srinivasan et al, 1988).
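A simplified sketch of the block-averaging idea follows (illustrative
only; the block size, the log-magnitude accumulation, and the
construction of the final filter all differ in detail from the actual
ERDAS IMAGINE algorithm):

import numpy as np

def average_block_spectrum(image, block=128):
    # Average the log-magnitude spectra of non-overlapping blocks.
    # Frequencies present in every block (periodic interference) survive
    # the averaging, while ordinary scene content averages out.
    rows, cols = image.shape
    accumulated = np.zeros((block, block))
    count = 0
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            tile = image[r:r + block, c:c + block]
            accumulated += np.log1p(np.abs(np.fft.fft2(tile)))
            count += 1
    return accumulated / count

average_spectrum = average_block_spectrum(np.random.rand(512, 512))
# The averaged spectrum can then be turned into a filter and applied to
# the FFT of the entire image before the inverse transform.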
Select the Periodic Noise Removal option from Image Interpreter
to use this function.
Homomorphic Filtering Homomorphic filtering is based upon the principle that an image may
be modeled as the product of illumination and reflectance
components:
I(x, y) = i(x, y) × r(x, y)
Where:
I(x, y) = image intensity (DN) at pixel x, y
i(x, y) = illumination of pixel x, y
r(x, y) = reflectance at pixel x, y
The illumination image is a function of lighting conditions and
shadows. The reflectance image is a function of the object being
imaged. A log function can be used to separate the two components
(i and r) of the image:
ln I(x, y) = ln i(x, y) + ln r(x, y)
This transforms the image from multiplicative to additive
superposition. With the two component images separated, any linear
operation can be performed. In this application, the image is now
transformed into Fourier space. Because the illumination component
usually dominates the low frequencies, while the reflectance
component dominates the higher frequencies, the image may be
effectively manipulated in the Fourier domain.
By using a filter on the Fourier image, which increases the high-
frequency components, the reflectance image (related to the target
material) may be enhanced, while the illumination image (related to
the scene illumination) is de-emphasized.
Select the Homomorphic Filter option from Image Interpreter to
use this function.
By applying an IFFT followed by an exponential function, the
enhanced image is returned to the normal spatial domain. The flow
chart in Figure 84 summarizes the homomorphic filtering process in
ERDAS IMAGINE.
Figure 84: Homomorphic Filtering Process
(The process flows from the Input Image through a Log step to the Log
Image, through the FFT to the Fourier Image, through a Butterworth
Filter to the Filtered Fourier Image, and through the IFFT and an
Exponential step to the Enhanced Image. The illumination component i
dominates the low frequencies and is decreased; the reflectance
component r dominates the high frequencies and is increased.)
As mentioned earlier, if an input image is not a power of two, the
ERDAS IMAGINE Fourier analysis software automatically pads the
image to the next largest size to make it a power of two. For manual
editing, this causes no problems. However, in automatic processing,
such as the homomorphic filter, the artifacts induced by the padding
may have a deleterious effect on the output image. For this reason,
it is recommended that images that are not a power of two be subset
before being used in an automatic process.
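The sequence in Figure 84 can be sketched end to end as follows (an
illustration that uses a Butterworth-style high-frequency emphasis;
the cutoff, gains, and order are arbitrary values, not ERDAS IMAGINE
defaults):

import numpy as np

def homomorphic_filter(image, cutoff=0.05, low_gain=0.5, high_gain=1.5, order=2):
    # Log -> FFT -> high-frequency emphasis -> IFFT -> exponential.
    log_image = np.log1p(image)                 # multiplicative -> additive
    spectrum = np.fft.fft2(log_image)
    u = np.fft.fftfreq(image.shape[0])[:, None]
    v = np.fft.fftfreq(image.shape[1])[None, :]
    d = np.sqrt(u**2 + v**2)
    high_emphasis = 1.0 - 1.0 / (1.0 + (d / cutoff) ** (2 * order))
    gain = low_gain + (high_gain - low_gain) * high_emphasis
    filtered = np.fft.ifft2(spectrum * gain).real
    return np.expm1(filtered)                   # back to the spatial domain

enhanced = homomorphic_filter(np.random.rand(256, 256) + 0.1)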
A detailed description of the theory behind Fourier series and
Fourier transforms is given in Gonzalez and Wintz (Gonzalez and
Wintz, 1977). See also Oppenheim (Oppenheim and Schafer,
1975) and Press (Press et al, 1988).
Radar Imagery
Enhancement
The nature of the surface phenomena involved in radar imaging is
inherently different from that of visible/infrared (VIS/IR) images.
When VIS/IR radiation strikes a surface it is either absorbed,
reflected, or transmitted. The absorption is based on the molecular
bonds in the (surface) material. Thus, this imagery provides
information on the chemical composition of the target.
When radar microwaves strike a surface, they are reflected
according to the physical and electrical properties of the surface,
rather than the chemical composition. The strength of radar return
is affected by slope, roughness, and vegetation cover. The
conductivity of a target area is related to the porosity of the soil and
its water content. Consequently, radar and VIS/IR data are
complementary; they provide different information about the target
area. An image in which these two data types are intelligently
combined can present much more information than either image by
itself.
See Raster Data and Raster and Vector Data Sources for
more information on radar data.
This section describes enhancement techniques that are particularly
useful for radar imagery. While these techniques can be applied to
other types of image data, this discussion focuses on the special
requirements of radar imagery enhancement. The ERDAS IMAGINE
Radar Interpreter provides a sophisticated set of image processing
tools designed specifically for use with radar imagery. This section
describes the functions of the ERDAS IMAGINE Radar Interpreter.
For information on the Radar Image Enhancement function, see
the section on "Radiometric Enhancement".
Speckle Noise Speckle noise is commonly observed in radar (microwave or
millimeter wave) sensing systems, although it may appear in any
type of remotely sensed image utilizing coherent radiation. An active
radar sensor gives off a burst of coherent radiation that reflects from
the target, unlike a passive microwave sensor that simply receives
the low-level radiation naturally emitted by targets.
Like the light from a laser, the waves emitted by active sensors
travel in phase and interact minimally on their way to the target
area. After interaction with the target area, these waves are no
longer in phase. This is because of the different distances they travel
from targets, or single versus multiple bounce scattering.
Once out of phase, radar waves can interact to produce light and
dark pixels known as speckle noise. Speckle noise must be reduced
before the data can be effectively utilized. However, the image
processing programs used to reduce speckle noise produce changes
in the image.
Because any image processing done before removal of the
speckle results in the noise being incorporated into and
degrading the image, you should not rectify, correct to ground
range, or in any way resample, enhance, or classify the pixel
values before removing speckle noise. Functions using Nearest
Neighbor are technically permissible, but not advisable.
Since different applications and different sensors necessitate
different speckle removal models, ERDAS IMAGINE Radar
Interpreter includes several speckle reduction algorithms:
Mean filter
Median filter
Lee-Sigma filter
Local Region filter
Lee filter
Frost filter
Gamma-MAP filter
NOTE: Speckle noise in radar images cannot be completely removed.
However, it can be reduced significantly.
These filters are described in the following sections:
Mean Filter
The Mean filter is a simple calculation. The pixel of interest (center
of window) is replaced by the arithmetic average of all values within
the window. This filter does not remove the aberrant (speckle)
value; it averages it into the data.
In theory, a bright and a dark pixel within the same window would
cancel each other out. This consideration would argue in favor of a
large window size (e.g., 7 × 7). However, averaging results in a loss
of detail, which argues for a small window size.
In general, this is the least satisfactory method of speckle reduction.
It is useful for applications where loss of resolution is not a problem.
Median Filter
A better way to reduce speckle, but still simplistic, is the Median
filter. This filter operates by arranging all DN values in sequential
order within the window that you define. The pixel of interest is
replaced by the value in the center of this distribution. A Median filter
is useful for removing pulse or spike noise. Pulse functions of less
than one-half of the moving window width are suppressed or
eliminated. In addition, step functions or ramp functions are
retained.
The effect of Mean and Median filters on various signals is shown (for
one dimension) in Figure 85.
Figure 85: Effects of Mean and Median Filters
(The figure compares the original, Mean filtered, and Median filtered
versions of one-dimensional step, ramp, single pulse, and double pulse
signals.)
The Median filter is useful for noise suppression in any image. It does
not affect step or ramp functions; it is an edge preserving filter
(Pratt, 1991). It is also applicable in removing pulse function noise,
which results from the inherent pulsing of microwaves. An example
of the application of the Median filter is the removal of dead-detector
striping, as found in Landsat 4 TM data (Crippen, 1989a).
Local Region Filter
The Local Region filter divides the moving window into eight regions
based on angular position (North, South, East, West, NW, NE, SW,
and SE). Figure 86 shows a 5 × 5 moving window and the regions of
the Local Region filter.
Figure 86: Regions of Local Region Filter
(The figure shows a 5 × 5 moving window with the pixel of interest at
the center and shaded examples of the North, NE, and SW regions.)
For each region, the variance is calculated as follows:

Variance = Σ (DN_x,y − Mean)² / (n − 1)

Source: Nagao and Matsuyama, 1978
The algorithm compares the variance values of the regions
surrounding the pixel of interest. The pixel of interest is replaced by
the mean of all DN values within the region with the lowest variance
(i.e., the most uniform region). A region with low variance is
assumed to have pixels minimally affected by wave interference, yet
very similar to the pixel of interest. A region of low variance is
probably such for several surrounding pixels.
The result is that the output image is composed of numerous uniform
areas, the size of which is determined by the moving window size. In
practice, this filter can be utilized sequentially 2 or 3 times,
increasing the window size. The resultant output image is an
appropriate input to a classification application.
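The Local Region logic at a single window position can be sketched as
follows (for brevity this illustration uses only four corner sub-regions
of a 5 × 5 window rather than the eight angular regions described
above):

import numpy as np

def local_region_pixel(window):
    # Replace the center pixel with the mean of the most uniform region.
    regions = [
        window[:3, :3],   # NW
        window[:3, 2:],   # NE
        window[2:, :3],   # SW
        window[2:, 2:],   # SE
    ]
    variances = [region.var(ddof=1) for region in regions]
    most_uniform = regions[int(np.argmin(variances))]
    return most_uniform.mean()

print(local_region_pixel(np.random.rand(5, 5) * 100))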
Sigma and Lee Filters
The Sigma and Lee filters utilize the statistical distribution of the DN
values within the moving window to estimate what the pixel of
interest should be.
Speckle in imaging radar can be mathematically modeled as
multiplicative noise with a mean of 1. The standard deviation of the
noise can be mathematically defined as:
Standard Deviation = sigma (σ) = Coefficient of Variation = √VARIANCE / MEAN
The coefficient of variation, as a scene-derived parameter, is used as
an input parameter in the Sigma and Lee filters. It is also useful in
evaluating and modifying VIS/IR data for input to a 4-band
composite image, or in preparing a 3-band ratio color composite
(Crippen, 1989a).
It can be assumed that imaging radar data noise follows a Gaussian
distribution. This would yield a theoretical value for Standard
Deviation (SD) of .52 for 1-look radar data and SD = .26 for 4-look
radar data.
Table 35 gives theoretical coefficient of variation values for various
look-average radar scenes:

Table 35: Theoretical Coefficient of Variation Values
# of Looks (scenes)    Coef. of Variation Value
1                      .52
2                      .37
3                      .30
4                      .26
6                      .21
8                      .18

The Lee filters are based on the assumption that the mean and
variance of the pixel of interest are equal to the local mean and
variance of all pixels within the moving window you select.
The actual calculation used for the Lee filter is:

DN_out = [Mean] + K [DN_in − Mean]

Where:
Mean = average of pixels in a moving window
K = Var(x) / ([Mean]² σ² + Var(x))

The variance of x [Var(x)] is defined as:

Var(x) = ([Variance within window] + [Mean within window]²) / ([Sigma]² + 1) − [Mean within window]²

Source: Lee, 1981
The Sigma filter is based on the probability of a Gaussian
distribution. It is assumed that 95.5% of random samples are within
a 2 standard deviation (2 sigma) range. This noise suppression filter
replaces the pixel of interest with the average of all DN values within
the moving window that fall within the designated range.
As with all the radar speckle filters, you must specify a moving
window size. The center pixel of the moving window is the pixel of
interest.
As with the Statistics filter, a coefficient of variation specific to the
data set must be input. Finally, you must specify how many standard
deviations to use (2, 1, or 0.5) to define the accepted range.
The statistical filters (Sigma and Statistics) are logically applicable to
any data set for preprocessing. Any sensor system has various
sources of noise, resulting in a few erratic pixels. In VIS/IR imagery,
most natural scenes are found to follow a normal distribution of DN
values, thus filtering at 2 standard deviations should remove this
noise. This is particularly true of experimental sensor systems that
frequently have significant noise problems.
These speckle filters can be used iteratively. You must view and
evaluate the resultant image after each pass (the data histogram is
useful for this), and then decide if another pass is appropriate and
what parameters to use on the next pass. For example, three passes
of the Sigma filter with the following parameters are very effective
when used with any type of data:
Table 36: Parameters for Sigma Filter
Pass    Sigma Value    Sigma Multiplier    Window Size
1       0.26           0.5                 3 × 3
2       0.26           1                   5 × 5
3       0.26           2                   7 × 7

Similarly, there is no reason why successive passes must be of the
same filter. The following sequence is useful prior to a classification:

Table 37: Pre-Classification Sequence
Filter          Pass    Sigma Value    Sigma Multiplier    Window Size
Lee             1       0.26           NA                  3 × 3
Lee             2       0.26           NA                  5 × 5
Local Region    3       NA             NA                  5 × 5 or 7 × 7
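A single Sigma-filter pass at one window position can be sketched as
follows (an illustration of the accept-and-average logic; the sigma
value and multiplier correspond to the columns of Table 36, and the
accepted range is taken as a fraction of the center pixel value because
the noise model is multiplicative):

import numpy as np

def sigma_pixel(window, sigma=0.26, multiplier=2.0):
    # Average only the window values within the designated sigma range
    # of the center pixel.
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    spread = multiplier * sigma * center
    accepted = window[np.abs(window - center) <= spread]
    return accepted.mean()

print(sigma_pixel(np.random.rand(5, 5) * 100))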
With all speckle reduction filters there is a trade-off between noise
reduction and loss of resolution. Each data set and each
application has a different acceptable balance between these
two factors. The ERDAS IMAGINE filters have been designed to
be versatile and gentle in reducing noise (and resolution).
Frost Filter
The Frost filter is a minimum mean square error algorithm that
adapts to the local statistics of the image. The local statistics serve
as weighting parameters for the impulse response of the filter
(moving window). This algorithm assumes that noise is
multiplicative with stationary statistics.
The formula used is:

DN = Σ(n × n) K α e^(−α |t|)

Where:

α = (4 / (n σ̄²)) (σ² / Ī²)

and

K = normalization constant
Ī = local mean
σ̄² = local variance
σ = image coefficient of variation value
|t| = |X − X₀| + |Y − Y₀|
n = moving window size
Source: Lopes et al, 1990
Gamma-MAP Filter
The Maximum A Posteriori (MAP) filter attempts to estimate the
original pixel DN, which is assumed to lie between the local average
and the degraded (actual) pixel DN. MAP logic maximizes the a
posteriori probability density function with respect to the original
image.
Many speckle reduction filters (e.g., Lee, Lee-Sigma, Frost) assume
a Gaussian distribution for the speckle noise. Recent work has shown
this to be an invalid assumption. Natural vegetated areas have been
shown to be more properly modeled as having a Gamma distributed
cross section. This algorithm incorporates this assumption. The exact
formula used is the cubic equation:
Î³ − Ī Î² + σ̄ (Î − DN) = 0
Where:
Î = sought value
Ī = local mean
DN = input value
σ̄ = original image variance
Source: Frost et al, 1982
Edge Detection Edge and line detection are important operations in digital image
processing. For example, geologists are often interested in mapping
lineaments, which may be fault lines or bedding structures. For this
purpose, edge and line detection are major enhancement
techniques.
In selecting an algorithm, it is first necessary to understand the
nature of what is being enhanced. Edge detection could imply
amplifying an edge, a line, or a spot (see Figure 87).
Figure 87: One-dimensional, Continuous Edge, and Line Models
Ramp edge: an edge modeled as a ramp, increasing in DN value
from a low to a high level, or vice versa. Distinguished by DN
change, slope, and slope midpoint.
Step edge: a ramp edge with a slope angle of 90 degrees.
Line: a region bounded on each end by an edge; width must be
less than the moving window size.
Roof edge: a line with a width near zero.
The models in Figure 87 represent ideal theoretical edges. However,
real data values vary to produce a more distorted edge due to sensor
noise or vibration (see Figure 88). There are no perfect edges in
raster data, hence the need for edge detection algorithms.
Figure 88: A Noisy Edge Superimposed on an Ideal Edge
Edge detection algorithms can be broken down into 1st-order
derivative and 2nd-order derivative operations. Figure 89 shows
ideal one-dimensional edge and line intensity curves with the
associated 1st-order and 2nd-order derivatives.
Figure 89: Edge and Line Derivatives
The 1st-order derivative kernel(s) derives from the simple Prewitt
kernel, and the 2nd-order derivative kernel(s) derives from Laplacian
operators. Both sets of kernels are shown below:
1st-order (Prewitt):

∂/∂x =
  1  1  1
  0  0  0
 -1 -1 -1
and
∂/∂y =
 -1  0  1
 -1  0  1
 -1  0  1

2nd-order (Laplacian):

∂²/∂x² =
 1 -2  1
 1 -2  1
 1 -2  1
and
∂²/∂y² =
  1  1  1
 -2 -2 -2
  1  1  1
1st-Order Derivatives (Prewitt)
The ERDAS IMAGINE Radar Interpreter utilizes sets of template
matching operators. These operators approximate to the eight
possible compass orientations (North, South, East, West, Northeast,
Northwest, Southeast, Southwest). The compass names indicate the
slope direction creating maximum response. (Gradient kernels with
zero weighting, i.e., the sum of the kernel coefficient is zero, have
no output in uniform regions.) The detected edge is orthogonal to the
gradient direction.
To avoid positional shift, all operating windows are odd number
arrays, with the center pixel being the pixel of interest. Extension of
the 3 × 3 impulse response arrays to a larger size is not clear cut;
different authors suggest different lines of rationale. For example, it
may be advantageous to extend the 3-level (Prewitt, 1970) to:

-1 -1  0  1  1
-1 -1  0  1  1
-1 -1  0  1  1
-1 -1  0  1  1
-1 -1  0  1  1

or the following might be beneficial:

-2 -1  0  1  2        -4 -2  0  2  4
-2 -1  0  1  2        -4 -2  0  2  4
-2 -1  0  1  2   or   -4 -2  0  2  4
-2 -1  0  1  2        -4 -2  0  2  4
-2 -1  0  1  2        -4 -2  0  2  4

Larger template arrays provide a greater noise immunity, but are
computationally more demanding.
Zero-Sum Filters
A common type of edge detection kernel is a zero-sum filter. For this
type of filter, the coefficients are designed to add up to zero.
Following are examples of two zero-sum filters:
Sobel =
 -1  0  1          1  2  1
 -2  0  2          0  0  0
 -1  0  1         -1 -2 -1
 (vertical)       (horizontal)

Prewitt =
 -1  0  1          1  1  1
 -1  0  1          0  0  0
 -1  0  1         -1 -1 -1
 (vertical)       (horizontal)
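The behavior of a zero-sum kernel can be illustrated by convolving a
synthetic step edge with the vertical Sobel kernel shown above (a
sketch using scipy.ndimage, not the Radar Interpreter code):

import numpy as np
from scipy.ndimage import convolve

sobel_vertical = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=float)

image = np.zeros((8, 8))
image[:, 4:] = 100.0                    # a vertical step edge

response = convolve(image, sobel_vertical)
# Uniform regions give zero output; only the columns straddling the
# edge produce a nonzero response.
print(np.unique(response))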
Prior to edge enhancement, you should reduce speckle noise by
using the ERDAS IMAGINE Radar Interpreter Speckle
Suppression function.
2nd-Order Derivatives (Laplacian Operators)
The second category of edge enhancers is 2nd-order derivative or
Laplacian operators. These are best for line (or spot) detection as
distinct from ramp edges. ERDAS IMAGINE Radar Interpreter offers
two such arrays:
Unweighted line:

-1  2 -1
-1  2 -1
-1  2 -1

Weighted line:

-1  2 -1
 2 -4  2
-1  2 -1

Source: Pratt, 1991
Some researchers have found that a combination of 1st- and
2nd-order derivative images produces the best output. See
Eberlein and Weszka (Eberlein and Weszka, 1975) for
information about subtracting the 2nd-order derivative
(Laplacian) image from the 1st-order derivative image
(gradient).
Texture According to Pratt (Pratt, 1991), "Many portions of images of natural
scenes are devoid of sharp edges over large areas. In these areas
the scene can often be characterized as exhibiting a consistent
structure analogous to the texture of cloth." Image texture
measurements can be used to segment an image and classify its
segments.
As an enhancement, texture is particularly applicable to radar data,
although it may be applied to any type of data with varying results.
For example, it has been shown (Blom and Daily, 1982) that a three-
layer variance image using 15 × 15, 31 × 31, and 61 × 61 windows
can be combined into a three-color RGB (red, green, blue) image
that is useful for geologic discrimination. The same could apply to a
vegetation classification.
You could also prepare a three-color image using three different
functions operating through the same (or different) size moving
window(s). However, each data set and application would need
different moving window sizes and/or texture measures to maximize
the discrimination.
Radar Texture Analysis
While texture analysis has been useful in the enhancement of VIS/IR
image data, it is showing even greater applicability to radar imagery.
In part, this stems from the nature of the imaging process itself.
The interaction of the radar waves with the surface of interest is
dominated by reflection involving the surface roughness at the
wavelength scale. In VIS/IR imaging, the phenomena involved is
absorption at the molecular level. Also, as we know from array-type
antennae, radar is especially sensitive to regularity that is a multiple
of its wavelength. This provides for a more precise method for
quantifying the character of texture in a radar return.
The ability to use radar data to detect texture and provide
topographic information about an image is a major advantage over
other types of imagery where texture is not a quantitative
characteristic.
The texture transforms can be used in several ways to enhance the
use of radar imagery. Adding the radar intensity image as an
additional layer in a (vegetation) classification is fairly
straightforward and may be useful. However, the proper texture
image (function and window size) can greatly increase the
discrimination. Using known test sites, one can experiment to
discern which texture image best aids the classification. For
example, the texture image could then be added as an additional
layer to the TM bands.
As radar data come into wider use, other mathematical texture
definitions may prove useful and will be added to the ERDAS
IMAGINE Radar Interpreter. In practice, you interactively decide
which algorithm and window size is best for your data and
application.
Texture Analysis Algorithms
While texture has typically been a qualitative measure, it can be
enhanced with mathematical algorithms. Many algorithms appear in
literature for specific applications (Haralick, 1979; Iron and
Petersen, 1981).
The algorithms incorporated into ERDAS IMAGINE are those which
are applicable in a wide variety of situations and are not
computationally over-demanding. This latter point becomes critical as
the moving window size increases. Research has shown that very
large moving windows are often needed for proper enhancement.
For example, Blom (Blom and Daily, 1982) uses up to a 61 × 61
window.
Four algorithms are currently utilized for texture enhancement in
ERDAS IMAGINE:
mean Euclidean distance (1st-order)
variance (2nd-order)
skewness (3rd-order)
kurtosis (4th-order)
Mean Euclidean Distance
These algorithms are shown below (Iron and Petersen, 1981):

Mean Euclidean Distance = Σ [ Σλ (x_cλ − x_ijλ)² ]^(1/2) / (n − 1)

Where:
x_ijλ = DN value for spectral band λ and pixel (i,j) of a
multispectral image
x_cλ = DN value for spectral band λ of a window's center
pixel
n = number of pixels in a window

Variance

Variance = Σ (x_ij − M)² / (n − 1)

Where:
x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window, where:

Mean = Σ x_ij / n
Skewness

Skew = Σ (x_ij − M)³ / ((n − 1) V^(3/2))

Where:
x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)

Kurtosis

Kurtosis = Σ (x_ij − M)⁴ / ((n − 1) V²)

Where:
x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)
Texture analysis is available from the Texture function in Image
Interpreter and from the ERDAS IMAGINE Radar Interpreter
Texture Analysis function.
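For a single window position, the statistics defined above can be
computed directly (a plain illustration of the formulas, not the Texture
function itself):

import numpy as np

def texture_measures(window):
    # Variance, skewness, and kurtosis of one moving-window position.
    x = window.ravel().astype(np.float64)
    n = x.size
    mean = x.sum() / n
    variance = ((x - mean) ** 2).sum() / (n - 1)
    skewness = ((x - mean) ** 3).sum() / ((n - 1) * variance ** 1.5)
    kurtosis = ((x - mean) ** 4).sum() / ((n - 1) * variance ** 2)
    return variance, skewness, kurtosis

print(texture_measures(np.random.rand(7, 7) * 255))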
Radiometric Correction:
Radar Imagery
The raw radar image frequently contains radiometric errors due to:
imperfections in the transmit and receive pattern of the radar
antenna
errors due to the coherent pulse (i.e., speckle)
the inherently stronger signal from a near range (closest to the
sensor flight path) than a far range (farthest from the sensor
flight path) target
Many imaging radar systems use a single antenna that transmits the
coherent radar burst and receives the return echo. However, no
antenna is perfect; it may have various lobes, dead spots, and
imperfections. This causes the received signal to be slightly distorted
radiometrically. In addition, range fall-off causes far range targets to
be darker (less return signal).
These two problems can be addressed by adjusting the average
brightness of each range line to a constant, usually the average
overall scene brightness (Chavez and Berlin, 1986). This requires
that each line of constant range be long enough to reasonably
approximate the overall scene brightness (see Figure 90). This
approach is generic; it is not specific to any particular radar sensor.
The Adjust Brightness function in ERDAS IMAGINE works by
correcting each range line average. For this to be a valid
approach, the number of data values must be large enough to
provide good average values. Be careful not to use too small an
image. This depends upon the character of the scene itself.
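The per-line adjustment can be sketched as follows (an illustration
that assumes lines of constant range run along image rows and scales
each line by the ratio of the overall average to its own average):

import numpy as np

def adjust_brightness(image):
    # Scale each line of constant range so its average matches the
    # overall scene average.
    line_averages = image.mean(axis=1)
    overall_average = line_averages.mean()
    coefficients = overall_average / line_averages
    return image * coefficients[:, None]

radar = np.random.rand(300, 400) * 100
corrected = adjust_brightness(radar)
print(corrected.mean(axis=1)[:3])   # row averages now equal the scene average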
Figure 90: Adjust Brightness Function
Range Lines/Lines of Constant Range
Lines of constant range are not the same thing as range lines:
Range lines: lines that are perpendicular to the flight of the
sensor
Lines of constant range: lines that are parallel to the flight of the
sensor
Range direction: same as range lines
(Figure 90 illustrates the correction: the average data value a of each
line of constant range is computed, the line averages a1 + a2 + a3 + a4
... ax are combined into an overall scene average, and each line
average ax is compared with the overall average to derive a
calibration coefficient for that line. A small subset of the scene would
not give an accurate average for correcting the entire scene.)
Because radiometric errors are a function of the imaging geometry,
the image must be correctly oriented during the correction process.
For the algorithm to correctly address the data set, you must tell
ERDAS IMAGINE whether the lines of constant range are in columns
or rows in the displayed image.
Figure 91 shows the lines of constant range in columns, parallel to
the sides of the display screen:
Figure 91: Range Lines vs. Lines of Constant Range
Slant-to-Ground Range
Correction
Radar images also require slant-to-ground range correction, which is
similar in concept to orthocorrecting a VIS/IR image. By design, an
imaging radar is always side-looking. In practice, the depression
angle is usually 75° at most. In operation, the radar sensor
determines the range (distance to) each target, as shown in Figure
92.
Figure 92: Slant-to-Ground Range Correction
(Figure 91 diagrams the flight (azimuth) direction, the range lines, and
the lines of constant range relative to the display screen. Figure 92
diagrams the across-track antenna geometry, labeling points A, B, and
C, the sensor height H, the depression angle θ, the slant range Dist_s,
the ground range Dist_g, and arcs of constant range.)
Assuming that angle ACB is a right angle, you can approximate:

Dist_s ≅ Dist_g (cos θ), that is, cos θ = Dist_s / Dist_g

Where:
Dist_s = slant range distance
Dist_g = ground range distance
θ = depression angle
Source: Leberl, 1990
This has the effect of compressing the near range areas more than
the far range areas. For many applications, this may not be
important. However, to geocode the scene or to register radar to
infrared or visible imagery, the scene must be corrected to a ground
range format. To do this, the following parameters relating to the
imaging geometry are needed:
Depression angle (θ): angular distance between sensor horizon
and scene center
Sensor height (H): elevation of sensor (in meters) above its
nadir point
Beam width: angular distance between near range and far range
for entire scene
Pixel size (in meters): range input image cell size
This information is usually found in the header file of data. Use
the Data View option to view this information. If it is not
contained in the header file, you must obtain this information
from the data supplier.
Once the scene is range-format corrected, pixel size can be changed
for coregistration with other data sets.
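Using the approximation above, a slant range distance can be
converted to an approximate ground range distance (an illustration
only; a full correction also resamples the image onto a regular ground
range grid):

import numpy as np

def ground_range(slant_range, depression_angle_deg):
    # Approximate ground range from slant range: Dist_g = Dist_s / cos(theta).
    return slant_range / np.cos(np.radians(depression_angle_deg))

# A target 5000 m away in slant range, seen at a 30 degree depression angle:
print(ground_range(5000.0, 30.0))   # roughly 5774 m of ground range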
Merging Radar with
VIS/IR Imagery
As aforementioned, the phenomenon involved in radar imaging is
quite different from that in VIS/IR imaging. Because these two
sensor types give different information about the same target
(chemical vs. physical), they are complementary data sets. If the
two images are correctly combined, the resultant image conveys
both chemical and physical information and could prove more useful
than either image alone.
The methods for merging radar and VIS/IR data are still
experimental and open for exploration. The following methods are
suggested for experimentation:
Codisplaying in a Viewer
RGB to IHS transforms
Principal components transform
Multiplicative
The ultimate goal of enhancement is not mathematical or logical
purity; it is feature extraction. There are currently no rules to
suggest which options yield the best results for a particular
application; you must experiment. The option that proves to be
most useful depends upon the data sets (both radar and
VIS/IR), your experience, and your final objective.
Codisplaying
The simplest and most frequently used method of combining radar
with VIS/IR imagery is codisplaying on an RGB color monitor. In this
technique, the radar image is displayed with one (typically the red)
gun, while the green and blue guns display VIS/IR bands or band
ratios. This technique follows from no logical model and does not
truly merge the two data sets.
Use the Viewer with the Clear Display option disabled for this
type of merge. Select the color guns to display the different
layers.
RGB to IHS Transforms
Another common technique uses the RGB to IHS transforms. In this
technique, an RGB color composite of bands (or band derivatives,
such as ratios) is transformed into IHS color space. The intensity
component is replaced by the radar image, and the scene is reverse
transformed. This technique integrally merges the two data types.
For more information, see "RGB to IHS".
Principal Components Transform
A similar image merge involves utilizing the PC transformation of the
VIS/IR image. With this transform, more than three components can
be used. These are converted to a series of principal components.
The first PC, PC-1, is generally accepted to correlate with overall
scene brightness. This value is replaced by the radar image and the
reverse transform is applied.
For more information, see "Principal Components Analysis".
Multiplicative
A final method to consider is the multiplicative technique. This
requires several chromatic components and a multiplicative
component, which is assigned to the image intensity. In practice, the
chromatic components are usually band ratios or PCs; the radar
image is input multiplicatively as intensity (Croft (Holcomb), 1993).
The two sensor merge models using transforms to integrate the two
data sets (PC and RGB to IHS) are based on the assumption that the
radar intensity correlates with the intensity that the transform
derives from the data inputs. However, the logic of mathematically
merging radar with VIS/IR data sets is inherently different from the
logic of the SPOT/TM merges (as discussed in "Resolution Merge").
It cannot be assumed that the radar intensity is a surrogate for, or
equivalent to, the VIS/IR intensity. The acceptability of this
assumption depends on the specific case.
For example, Landsat TM imagery is often used to aid in mineral
exploration. A common display for this purpose is RGB = TM5/TM7,
TM5/TM4, TM3/TM1; the logic being that if all three ratios are high,
the sites suited for mineral exploration are bright overall. If the
target area is accompanied by silicification, which results in an area
of dense angular rock, this should be the case. However, if the
alteration zone is basaltic rock to kaolinite/alunite, then the radar
return could be weaker than the surrounding rock. In this case, radar
would not correlate with high 5/7, 5/4, 3/1 intensity and the
substitution would not produce the desired results (Croft (Holcomb),
1993).
Classification
Introduction Multispectral classification is the process of sorting pixels into a finite
number of individual classes, or categories of data, based on their
data file values. If a pixel satisfies a certain set of criteria, the pixel
is assigned to the class that corresponds to that criteria. This process
is also referred to as image segmentation.
Depending on the type of information you want to extract from the
original data, classes may be associated with known features on the
ground or may simply represent areas that look different to the
computer. An example of a classified image is a land cover map,
showing vegetation, bare land, pasture, urban, etc.
The Classification
Process
Pattern Recognition Pattern recognition is the science (and art) of finding meaningful
patterns in data, which can be extracted through classification. By
spatially and spectrally enhancing an image, pattern recognition can
be performed with the human eye; the human brain automatically
sorts certain textures and colors into categories.
In a computer system, spectral pattern recognition can be more
scientific. Statistics are derived from the spectral characteristics of
all pixels in an image. Then, the pixels are sorted based on
mathematical criteria. The classification process breaks down into
two parts: training and classifying (using a decision rule).
Training First, the computer system must be trained to recognize patterns in
the data. Training is the process of defining the criteria by which
these patterns are recognized (Hord, 1982). Training can be
performed with either a supervised or an unsupervised method, as
explained below.
Supervised Training
Supervised training is closely controlled by the analyst. In this
process, you select pixels that represent patterns or land cover
features that you recognize, or that you can identify with help from
other sources, such as aerial photos, ground truth data, or maps.
Knowledge of the data, and of the classes desired, is required before
classification.
By identifying patterns, you can instruct the computer system to
identify pixels with similar characteristics. If the classification is
accurate, the resulting classes represent the categories within the
data that you originally identified.
Unsupervised Training
Unsupervised training is more computer-automated. It enables you
to specify some parameters that the computer uses to uncover
statistical patterns that are inherent in the data. These patterns do
not necessarily correspond to directly meaningful characteristics of
the scene, such as contiguous, easily recognized areas of a particular
soil type or land use. They are simply clusters of pixels with similar
spectral characteristics. In some cases, it may be more important to
identify groups of pixels with similar spectral characteristics than it
is to sort pixels into recognizable categories.
Unsupervised training is dependent upon the data itself for the
definition of classes. This method is usually used when less is known
about the data before classification. It is then the analyst's
responsibility, after classification, to attach meaning to the resulting
classes (Jensen, 1996). Unsupervised classification is useful only if
the classes can be appropriately interpreted.
Signatures The result of training is a set of signatures that defines a training
sample or cluster. Each signature corresponds to a class, and is used
with a decision rule (explained below) to assign the pixels in the
image file to a class. Signatures in ERDAS IMAGINE can be
parametric or nonparametric.
A parametric signature is based on statistical parameters (e.g.,
mean and covariance matrix) of the pixels that are in the training
sample or cluster. Supervised and unsupervised training can
generate parametric signatures. A set of parametric signatures can
be used to train a statistically-based classifier (e.g., maximum
likelihood) to define the classes.
A nonparametric signature is not based on statistics, but on discrete
objects (polygons or rectangles) in a feature space image. These
feature space objects are used to define the boundaries for the
classes. A nonparametric classifier uses a set of nonparametric
signatures to assign pixels to a class based on their location either
inside or outside the area in the feature space image. Supervised
training is used to generate nonparametric signatures (Kloer, 1994).
ERDAS IMAGINE enables you to generate statistics for a
nonparametric signature. This function allows a feature space object
to be used to create a parametric signature from the image being
classified. However, since a parametric classifier requires a normal
distribution of data, the only feature space object for which this
would be mathematically valid would be an ellipse (Kloer, 1994).
When both parametric and nonparametric signatures are used to
classify an image, you are more able to analyze and visualize the
class definitions than either type of signature provides independently
(Kloer, 1994).
See Math Topics for information on feature space images and
how they are created.
Decision Rule After the signatures are defined, the pixels of the image are sorted
into classes based on the signatures by use of a classification
decision rule. The decision rule is a mathematical algorithm that,
using data contained in the signature, performs the actual sorting of
pixels into distinct class values.
Parametric Decision Rule
A parametric decision rule is trained by the parametric signatures.
These signatures are defined by the mean vector and covariance
matrix for the data file values of the pixels in the signatures. When
a parametric decision rule is used, every pixel is assigned to a class
since the parametric decision space is continuous (Kloer, 1994).
Nonparametric Decision Rule
A nonparametric decision rule is not based on statistics; therefore, it
is independent of the properties of the data. If a pixel is located
within the boundary of a nonparametric signature, then this decision
rule assigns the pixel to the signature's class. Basically, a
nonparametric decision rule determines whether or not the pixel is
located inside the nonparametric signature boundary.
Output File When classifying an image file, the output file is an image file with a
thematic raster layer. This file automatically contains the following
data:
class values
class names
color table
statistics
histogram
The image file also contains any signature attributes that were
selected in the ERDAS IMAGINE Supervised Classification utility.
The class names, values, and colors can be set with the
Signature Editor or the Raster Attribute Editor.
Classification Tips
Classification Scheme Usually, classification is performed with a set of target classes in
mind. Such a set is called a classification scheme (or classification
system). The purpose of such a scheme is to provide a framework
for organizing and categorizing the information that can be extracted
from the data (Jensen et al, 1983). The proper classification scheme
includes classes that are both important to the study and discernible
from the data on hand. Most schemes have a hierarchical structure,
which can describe a study area in several levels of detail.
A number of classification schemes have been developed by
specialists who have inventoried a geographic region. Some
references for professionally-developed schemes are listed below:
Anderson, J.R., et al. 1976. A Land Use and Land Cover
Classification System for Use with Remote Sensor Data. U.S.
Geological Survey Professional Paper 964.
Cowardin, Lewis M., et al. 1979. Classification of Wetlands and
Deepwater Habitats of the United States. Washington, D.C.: U.S.
Fish and Wildlife Service.
Florida Topographic Bureau, Thematic Mapping Section. 1985.
Florida Land Use, Cover and Forms Classification System. Florida
Department of Transportation, Procedure No. 550-010-001-a.
Michigan Land Use Classification and Reference Committee.
1975. Michigan Land Cover/Use Classification System. Lansing,
Michigan: State of Michigan Office of Land Use.
Other states or government agencies may also have specialized land
use/cover studies.
It is recommended that the classification process is begun by
defining a classification scheme for the application, using previously
developed schemes, like those above, as a general framework.
Iterative Classification A process is iterative when it repeats an action. The objective of the
ERDAS IMAGINE system is to enable you to iteratively create and
refine signatures and classified image files to arrive at a desired final
classification. The ERDAS IMAGINE classification utilities are tools to
be used as needed, not a numbered list of steps that must always be
followed in order.
The total classification can be achieved with either the supervised or
unsupervised methods, or a combination of both. Some examples
are below:
Signatures created from both supervised and unsupervised
training can be merged and appended together.
Signature evaluation tools can be used to indicate which
signatures are spectrally similar. This helps to determine which
signatures should be merged or deleted. These tools also help
define optimum band combinations for classification. Using the
optimum band combination may reduce the time required to run
a classification process.
Since classifications (supervised or unsupervised) can be based
on a particular area of interest (either defined in a raster layer or
an .aoi layer), signatures and classifications can be generated
from previous classification results.
Supervised vs.
Unsupervised Training
In supervised training, it is important to have a set of desired classes
in mind, and then create the appropriate signatures from the data.
You must also have some way of recognizing pixels that represent
the classes that you want to extract.
Supervised classification is usually appropriate when you want to
identify relatively few classes, when you have selected training sites
that can be verified with ground truth data, or when you can identify
distinct, homogeneous regions that represent each class.
On the other hand, if you want the classes to be determined by
spectral distinctions that are inherent in the data so that you can
define the classes later, then the application is better suited to
unsupervised training. Unsupervised training enables you to define
many classes easily, and identify classes that are not in contiguous,
easily recognized regions.
NOTE: Supervised classification also includes using a set of classes
that is generated from an unsupervised classification. Using a
combination of supervised and unsupervised classification may yield
optimum results, especially with large data sets (e.g., multiple
Landsat scenes). For example, unsupervised classification may be
useful for generating a basic set of classes, then supervised
classification can be used for further definition of the classes.
Classifying Enhanced
Data
For many specialized applications, classifying data that have been
merged, spectrally merged or enhanced (with principal components,
image algebra, or other transformations) can produce very specific
and meaningful results. However, without understanding the data
and the enhancements used, it is recommended that only the
original, remotely-sensed data be classified.
Dimensionality Dimensionality refers to the number of layers being classified. For
example, a data file with 3 layers is said to be 3-dimensional, since
3-dimensional feature space is plotted to analyze the data.
Feature space and dimensionality are discussed in Math
Topics.
Adding Dimensions
Using programs in ERDAS IMAGINE, you can add layers to existing
image files. Therefore, you can incorporate data (called ancillary
data) other than remotely-sensed data into the classification. Using
ancillary data enables you to incorporate variables into the
classification from, for example, vector layers, previously classified
data, or elevation data. The data file values of the ancillary data
become an additional feature of each pixel, thus influencing the
classification (Jensen, 1996).
Limiting Dimensions
Although ERDAS IMAGINE allows an unlimited number of layers of
data to be used for one classification, it is usually wise to reduce the
dimensionality of the data as much as possible. Often, certain layers
of data are redundant or extraneous to the task at hand.
Unnecessary data take up valuable disk space, and cause the
computer system to perform more arduous calculations, which slows
down processing.
Use the Signature Editor to evaluate separability to calculate the
best subset of layer combinations. Use the Image Interpreter
functions to merge or subset layers. Use the Image Information
tool (on the Viewer's tool bar) to delete a layer(s).
Supervised
Training
Supervised training requires a priori (already known) information
about the data, such as:
What type of classes need to be extracted? Soil type? Land use?
Vegetation?
What classes are most likely to be present in the data? That is,
which types of land cover, soil, or vegetation (or whatever) are
represented by the data?
In supervised training, you rely on your own pattern recognition
skills and a priori knowledge of the data to help the system
determine the statistical criteria (signatures) for data classification.
To select reliable samples, you should know some information,
either spatial or spectral, about the pixels that you want to classify.
The location of a specific characteristic, such as a land cover type,
may be known through ground truthing. Ground truthing refers to
the acquisition of knowledge about the study area from field work,
analysis of aerial photography, personal experience, etc. Ground
truth data are considered to be the most accurate (true) data
available about the area of study. They should be collected at the
same time as the remotely sensed data, so that the data correspond
as much as possible (Star and Estes, 1990). However, some ground
data may not be very accurate due to a number of errors and
inaccuracies.
Training Samples and
Feature Space Objects
Training samples (also called samples) are sets of pixels that
represent what is recognized as a discernible pattern, or potential
class. The system calculates statistics from the sample pixels to
create a parametric signature for the class.
The following terms are sometimes used interchangeably in
reference to training samples. For clarity, they are used in this
documentation as follows:
Training sample, or sample, is a set of pixels selected to
represent a potential class. The data file values for these pixels
are used to generate a parametric signature.
Training field, or training site, is the geographical AOI in the
image represented by the pixels in a sample. Usually, it is
previously identified with the use of ground truth data.
Feature space objects are user-defined AOIs in a feature space
image. The feature space signature is based on these objects.
Selecting Training
Samples
It is important that training samples be representative of the class
that you are trying to identify. This does not necessarily mean that
they must contain a large number of pixels or be dispersed across a
wide region of the data. The selection of training samples depends
largely upon your knowledge of the data, of the study area, and of
the classes that you want to extract.
ERDAS IMAGINE enables you to identify training samples using one
or more of the following methods:
using a vector layer
defining a polygon in the image
identifying a training sample of contiguous pixels with similar
spectral characteristics
identifying a training sample of contiguous pixels within a certain
area, with or without similar spectral characteristics
using a class from a thematic raster layer from an image file of
the same area (i.e., the result of an unsupervised classification)
Digitized Polygon
Training samples can be identified by their geographical location
(training sites, using maps, ground truth data). The locations of the
training sites can be digitized from maps with the ERDAS IMAGINE
Vector or AOI tools. Polygons representing these areas are then
stored as vector layers. The vector layers can then be used as input
to the AOI tools and used as training samples to create signatures.
Use the Vector and AOI tools to digitize training samples from a
map. Use the Signature Editor to create signatures from training
samples that are identified with digitized polygons.
User-defined Polygon
Using your pattern recognition skills (with or without supplemental
ground truth information), you can identify samples by examining a
displayed image of the data and drawing a polygon around the
training site(s) of interest. For example, if it is known that oak trees
reflect certain frequencies of green and infrared light according to
ground truth data, you may be able to base your sample selections
on the data (taking atmospheric conditions, sun angle, time, date,
and other variations into account). The area within the polygon(s)
would be used to create a signature.
Use the AOI tools to define the polygon(s) to be used as the
training sample. Use the Signature Editor to create signatures
from training samples that are identified with the polygons.
Identify Seed Pixel
With the Seed Properties dialog and AOI tools, the cursor (crosshair)
can be used to identify a single pixel (seed pixel) that is
representative of the training sample. This seed pixel is used as a
model pixel, against which the pixels that are contiguous to it are
compared based on parameters specified by you.
When one or more of the contiguous pixels is accepted, the mean of
the sample is calculated from the accepted pixels. Then, the pixels
contiguous to the sample are compared in the same way. This
process repeats until no pixels that are contiguous to the sample
satisfy the spectral parameters. In effect, the sample grows outward
from the model pixel with each iteration. These homogenous pixels
are converted from individual raster pixels to a polygon and used as
an AOI layer.
Select the Seed Properties option in the Viewer to identify
training samples with a seed pixel.
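The seed-growing procedure described above can be sketched in a few lines. This is only an illustration, not the ERDAS IMAGINE Seed Properties implementation: it assumes the image is a NumPy array of shape (rows, cols, bands), and it uses a single hypothetical max_distance parameter (a Euclidean spectral-distance threshold to the running sample mean) as the acceptance criterion.

```python
import numpy as np
from collections import deque

def grow_region(image, seed_row, seed_col, max_distance):
    """Grow a training sample outward from a seed pixel (sketch).

    A contiguous neighbor is accepted when its Euclidean spectral
    distance to the current sample mean is within max_distance.
    Returns a boolean mask of the accepted pixels.
    """
    rows, cols, _ = image.shape
    mask = np.zeros((rows, cols), dtype=bool)
    mask[seed_row, seed_col] = True
    sample_sum = image[seed_row, seed_col].astype(float).copy()
    count = 1
    frontier = deque([(seed_row, seed_col)])
    while frontier:
        r, c = frontier.popleft()
        mean = sample_sum / count                      # current sample mean
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc]:
                if np.linalg.norm(image[nr, nc] - mean) <= max_distance:
                    mask[nr, nc] = True                # accept contiguous pixel
                    sample_sum += image[nr, nc]
                    count += 1
                    frontier.append((nr, nc))
    return mask
```

The mask produced this way plays the role of the AOI layer mentioned above; the growing stops once no contiguous pixel satisfies the spectral parameter.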
Seed Pixel Method with Spatial Limits
The training sample identified with the seed pixel method can be
limited to a particular region by defining the geographic distance and
area.
Vector layers (polygons or lines) can be displayed as the top
layer in the Viewer, and the
boundaries can then be used as an AOI for training samples
defined under Seed Properties.
Thematic Raster Layer
A training sample can be defined by using class values from a
thematic raster layer (see Table 38). The data file values in the
training sample are used to create a signature. The training sample
can be defined by as many class values as desired.
NOTE: The thematic raster layer must have the same coordinate
system as the image file being classified.
Evaluating Training
Samples
Selecting training samples is often an iterative process. To generate
signatures that accurately represent the classes to be identified, you
may have to repeatedly select training samples, evaluate the
signatures that are generated from the samples, and then either
take new samples or manipulate the signatures as necessary.
Signature manipulation may involve merging, deleting, or appending
from one file to another. It is also possible to perform a classification
using the known signatures, then mask out areas that are not
classified to use in gathering more signatures.
Table 38: Training Sample Comparison

Digitized Polygon
- Advantages: precise map coordinates, represents known ground information
- Disadvantages: may overestimate class variance, time-consuming

User-defined Polygon
- Advantages: high degree of user control
- Disadvantages: may overestimate class variance, time-consuming

Seed Pixel
- Advantages: auto-assisted, less time
- Disadvantages: may underestimate class variance

Thematic Raster Layer
- Advantages: allows iterative classifying
- Disadvantages: must have previously defined thematic layer
See "Evaluating Signatures" for methods of determining the
accuracy of the signatures created from your training samples.
Selecting Feature
Space Objects
The ERDAS IMAGINE Feature Space tools enable you to interactively
define feature space objects (AOIs) in the feature space image(s). A
feature space image is simply a graph of the data file values of one
band of data against the values of another band (often called a
scatterplot). In ERDAS IMAGINE, a feature space image has the
same data structure as a raster image; therefore, feature space
images can be used with other ERDAS IMAGINE utilities, including
zoom, color level slicing, virtual roam, Spatial Modeler, and Map
Composer.
Figure 93: Example of a Feature Space Image
The transformation of a multilayer raster image into a feature space
image is done by mapping the input pixel values to a position in the
feature space image. This transformation defines only the pixel
position in the feature space image. It does not define the pixel's
value.
The pixel values in the feature space image can be the accumulated
frequency, which is calculated when the feature space image is
defined. The pixel values can also be provided by a thematic raster
layer of the same geometry as the source multilayer image. Mapping
a thematic layer into a feature space image can be useful for
evaluating the validity of the parametric and nonparametric decision
boundaries of a classification (Kloer, 1994).
When you display a feature space image file (.fsp.img) in a
Viewer, the colors reflect the density of points for both bands.
The bright tones represent a high density and the dark tones
represent a low density.
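As a rough illustration of the accumulated-frequency case described above, a feature space image is a two-dimensional histogram of two bands. The sketch below is not the .fsp.img format or the IMAGINE Feature Space tools; it assumes two co-registered 8-bit bands held as integer NumPy arrays.

```python
import numpy as np

def feature_space_image(band_a, band_b, size=256):
    """Build a feature space (scatterplot) image for two 8-bit bands.

    Each input pixel maps to the cell (band_a value, band_b value);
    the cell value is the accumulated frequency of pixels mapping there.
    """
    fsp = np.zeros((size, size), dtype=np.int64)
    np.add.at(fsp, (band_a.ravel(), band_b.ravel()), 1)
    return fsp

# Bright cells correspond to a high density of points and dark cells to a
# low density, matching the display behavior described above.
```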
Create Nonparametric Signature
You can define a feature space object (AOI) in the feature space
image and use it directly as a nonparametric signature. Since the
Viewers for the feature space image and the image being classified
are both linked to the ERDAS IMAGINE Signature Editor, it is possible
to mask AOIs from the image being classified to the feature space
image, and vice versa. You can also directly link a cursor in the
image Viewer to the feature space Viewer. These functions help
determine a location for the AOI in the feature space image.
A single feature space image, but multiple AOIs, can be used to
define the signature. This signature is taken within the feature space
image, not the image being classified. The pixels in the image that
correspond to the data file values in the signature (i.e., feature space
object) are assigned to that class.
One fundamental difference between using the feature space image
to define a training sample and the other traditional methods is that
it is a nonparametric signature. The decisions made in the
classification process have no dependency on the statistics of the
pixels. This helps improve classification accuracies for specific
nonnormal classes, such as urban and exposed rock (Faust et al,
1991).
See Math Topics for information on feature space images.
Figure 94: Process for Defining a Feature Space Object
1. Display the image file to be classified in a Viewer (layers 3, 2, 1).
2. Create a feature space image from the image file being classified (layer 1 vs. layer 2).
3. Draw an AOI (feature space object) around the desired area in the feature space image. Once you have a desired AOI, it can be used as a signature.
4. A decision rule is used to analyze each pixel in the image file being classified, and the pixels with the corresponding data file values are assigned to the feature space class.
Evaluate Feature Space Signatures
Using the Feature Space tools, it is also possible to use a feature
space signature to generate a mask. Once it is defined as a mask,
the pixels under the mask are identified in the image file and
highlighted in the Viewer. The image displayed in the Viewer must
be the image from which the feature space image was created. This
process helps you to visually analyze the correlations between
various spectral bands to determine which combination of bands
brings out the desired features in the image.
You can have as many feature space images with different band
combinations as desired. Any polygon or rectangle in these feature
space images can be used as a nonparametric signature. However,
only one feature space image can be used per signature. The
polygons in the feature space image can be easily modified and/or
masked until the desired regions of the image have been identified.
Use the Feature Space tools in the Signature Editor to create a
feature space image and mask the signature. Use the AOI tools
to draw polygons.
Unsupervised
Training
Unsupervised training requires only minimal initial input from you.
However, you have the task of interpreting the classes that are
created by the unsupervised training algorithm.
Unsupervised training is also called clustering, because it is based on
the natural groupings of pixels in image data when they are plotted
in feature space. According to the specified parameters, these
groups can later be merged, disregarded, otherwise manipulated, or
used as the basis of a signature.
Feature space is explained in Math Topics.
Table 39: Feature Space Signatures

Advantages:
- Provide an accurate way to classify a class with a nonnormal distribution (e.g., residential and urban).
- Certain features may be more visually identifiable in a feature space image.
- The classification decision process is fast.

Disadvantages:
- The classification decision process allows overlap and unclassified pixels.
- The feature space image may be difficult to interpret.
Clusters
Clusters are defined with a clustering algorithm, which often uses all
or many of the pixels in the input data file for its analysis. The
clustering algorithm has no regard for the contiguity of the pixels
that define each cluster.
The Iterative Self-Organizing Data Analysis Technique
(ISODATA) (Tou and Gonzalez, 1974) clustering method uses
spectral distance as in the sequential method, but iteratively
classifies the pixels, redefines the criteria for each class, and
classifies again, so that the spectral distance patterns in the data
gradually emerge.
The RGB clustering method is more specialized than the
ISODATA method. It applies to three-band, 8-bit data. RGB
clustering plots pixels in three-dimensional feature space, and
divides that space into sections that are used to define clusters.
Each of these methods is explained below, along with its advantages
and disadvantages.
Some of the statistics terms used in this section are explained in
Math Topics.
ISODATA Clustering
ISODATA is iterative in that it repeatedly performs an entire
classification (outputting a thematic raster layer) and recalculates
statistics. Self-Organizing refers to the way in which it locates
clusters with minimum user input.
The ISODATA method uses minimum spectral distance to assign a
cluster for each candidate pixel. The process begins with a specified
number of arbitrary cluster means or the means of existing
signatures, and then it processes repetitively, so that those means
shift to the means of the clusters in the data.
Because the ISODATA method is iterative, it is not biased to the top
of the data file, as are the one-pass clustering algorithms.
Use the Unsupervised Classification utility in the Signature
Editor to perform ISODATA clustering.
ISODATA Clustering Parameters
To perform ISODATA clustering, you specify:
N - the maximum number of clusters to be considered. Since
each cluster is the basis for a class, this number becomes the
maximum number of classes to be formed. The ISODATA process
begins by determining N arbitrary cluster means. Some clusters
with too few pixels can be eliminated, leaving fewer than N
clusters.
T - a convergence threshold, which is the maximum percentage
of pixels whose class values are allowed to be unchanged
between iterations.
M - the maximum number of iterations to be performed.
Initial Cluster Means
On the first iteration of the ISODATA algorithm, the means of N
clusters can be arbitrarily determined. After each iteration, a new
mean for each cluster is calculated, based on the actual spectral
locations of the pixels in the cluster, instead of the initial arbitrary
calculation. Then, these new means are used for defining clusters in
the next iteration. The process continues until there is little change
between iterations (Swain, 1973).
The initial cluster means are distributed in feature space along a
vector that runs between the point at spectral coordinates
(μ_1 - σ_1, μ_2 - σ_2, μ_3 - σ_3, ... μ_n - σ_n) and the coordinates
(μ_1 + σ_1, μ_2 + σ_2, μ_3 + σ_3, ... μ_n + σ_n).
Such a vector in two dimensions is illustrated in Figure 95. The initial
cluster means are evenly distributed between
(μ_A - σ_A, μ_B - σ_B) and (μ_A + σ_A, μ_B + σ_B).
Figure 95: ISODATA Arbitrary Clusters
Pixel Analysis
Pixels are analyzed beginning with the upper left corner of the image
and going left to right, block by block.
The spectral distance between the candidate pixel and each cluster
mean is calculated. The pixel is assigned to the cluster whose mean
is the closest. The ISODATA function creates an output image file
with a thematic raster layer and/or a signature file (.sig) as a result
of the clustering. At the end of each iteration, an image file exists
that shows the assignments of the pixels to the clusters.
Considering the regular, arbitrary assignment of the initial cluster
means, the first iteration of the ISODATA algorithm always gives
results similar to those in Figure 96.
Figure 96: ISODATA First Pass
For the second iteration, the means of all clusters are recalculated,
causing them to shift in feature space. The entire process is
repeated: each candidate pixel is compared to the new cluster
means and assigned to the closest cluster mean.
Figure 97: ISODATA Second Pass
Percentage Unchanged
After each iteration, the normalized percentage of pixels whose
assignments are unchanged since the last iteration is displayed in
the dialog. When this number reaches T (the convergence
threshold), the program terminates.
It is possible for the percentage of unchanged pixels to never
converge or reach T (the convergence threshold). Therefore, it may
be beneficial to monitor the percentage, or specify a reasonable
maximum number of iterations, M, so that the program does not run
indefinitely.
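The iteration described above can be sketched as follows. This is a simplified illustration, not the ERDAS IMAGINE ISODATA implementation: it assumes the input pixels are stacked in a NumPy array, spaces N initial means evenly along the mean ± standard deviation vector, and stops when the fraction of unchanged assignments reaches T (expressed here as a fraction, e.g. 0.95) or after M iterations. Cluster elimination and other refinements are omitted.

```python
import numpy as np

def isodata(pixels, n_clusters, convergence_threshold, max_iterations):
    """Minimal ISODATA-style clustering sketch.

    pixels: array of shape (num_pixels, num_bands) of data file values.
    Returns the final cluster labels and cluster means.
    """
    mu, sigma = pixels.mean(axis=0), pixels.std(axis=0)
    weights = np.linspace(-1.0, 1.0, n_clusters)[:, None]
    means = mu + weights * sigma                     # initial arbitrary means
    labels = np.full(len(pixels), -1)
    for _ in range(max_iterations):
        # spectral (Euclidean) distance from every pixel to every mean
        distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = distances.argmin(axis=1)
        unchanged = np.mean(new_labels == labels)    # fraction of unchanged pixels
        labels = new_labels
        for k in range(n_clusters):                  # recompute each cluster mean
            members = pixels[labels == k]
            if len(members):
                means[k] = members.mean(axis=0)
        if unchanged >= convergence_threshold:       # convergence threshold T reached
            break
    return labels, means
```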
Principal Component Method
Whereas clustering creates signatures depending on pixels' spectral
reflectance by adding pixels together, the principal component
method actually subtracts pixels. Principal Components Analysis
(PCA) is a method of data compression. With it, you can eliminate
data that is redundant by compacting it into fewer bands.
The resulting bands are noncorrelated and independent. You may
find these bands more interpretable than the source data. PCA can
be performed on up to 256 bands with ERDAS IMAGINE. As a type of
spectral enhancement, you are required to specify the number of
components you want output from the original data.
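A minimal sketch of the principal component computation (projection onto the eigenvectors of the band covariance matrix) is shown below. It is not the Image Interpreter Principal Components function; it assumes the pixel data are already stacked as a (pixels x bands) NumPy array.

```python
import numpy as np

def principal_components(pixels, n_components):
    """Compute principal components of multiband pixel data (sketch).

    pixels: array of shape (num_pixels, num_bands).  The bands are
    decorrelated by projecting onto the eigenvectors of the band
    covariance matrix, ordered by decreasing eigenvalue.
    """
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigenvalues)[::-1]
    components = eigenvectors[:, order[:n_components]]
    return centered @ components                      # transformed, noncorrelated bands
```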
Recommended Decision Rule
Although the ISODATA algorithm is the most similar to the minimum
distance decision rule, the signatures can produce good results with
any type of classification. Therefore, no particular decision rule is
recommended over others.
In most cases, the signatures created by ISODATA are merged,
deleted, or appended to other signature sets. The image file created
by ISODATA is the same as the image file that is created by a
minimum distance classification, except for the nonconvergent pixels
(100-T% of the pixels).
Table 40: ISODATA Clustering

Advantages:
- Because it is iterative, clustering is not geographically biased to the top or bottom pixels of the data file.
- This algorithm is highly successful at finding the spectral clusters that are inherent in the data. It does not matter where the initial arbitrary cluster means are located, as long as enough iterations are allowed.
- A preliminary thematic raster layer is created, which gives results similar to using a minimum distance classifier (as explained below) on the signatures that are created. This thematic raster layer can be used for analyzing and manipulating the signatures before actual classification takes place.

Disadvantages:
- The clustering process is time-consuming, because it can repeat many times.
- Does not account for pixel spatial homogeneity.
Use the Merge and Delete options in the Signature Editor to
manipulate signatures.
Use the Unsupervised Classification utility in the Signature
Editor to perform ISODATA clustering, generate signatures, and
classify the resulting signatures.
RGB Clustering
The RGB Clustering and Advanced RGB Clustering functions in
Image Interpreter create a thematic raster layer. However, no
signature file is created and no other classification decision rule
is used. In practice, RGB Clustering differs greatly from the
other clustering methods, but it does employ a clustering
algorithm.
RGB clustering is a simple classification and data compression
technique for three bands of data. It is a fast and simple algorithm
that quickly compresses a three-band image into a single band
pseudocolor image, without necessarily classifying any particular
features.
The algorithm plots all pixels in 3-dimensional feature space and
then partitions this space into clusters on a grid. In the more
simplistic version of this function, each of these clusters becomes a
class in the output thematic raster layer.
The advanced version requires that a minimum threshold on the
clusters be set so that only clusters at least as large as the threshold
become output classes. This allows for more color variation in the
output file. Pixels that do not fall into any of the remaining clusters
are assigned to the cluster with the smallest city-block distance from
the pixel. In this case, the city-block distance is calculated as the
sum of the distances in the red, green, and blue directions in 3-
dimensional space.
Along each axis of the three-dimensional scatterplot, each input
histogram is scaled so that the partitions divide the histograms
between specified limitseither a specified number of standard
deviations above and below the mean, or between the minimum and
maximum data values for each band.
The default number of divisions per band is listed below:
Red is divided into 7 sections (32 for advanced version)
Green is divided into 6 sections (32 for advanced version)
Blue is divided into 6 sections (32 for advanced version)
Figure 98: RGB Clustering
Partitioning Parameters
It is necessary to specify the number of R, G, and B sections in each
dimension of the 3-dimensional scatterplot. The number of sections
should vary according to the histograms of each band. Broad
histograms should be divided into more sections, and narrow
histograms should be divided into fewer sections (see Figure 98).
It is possible to interactively change these parameters in the
RGB Clustering function in the Image Interpreter. The number
of classes is calculated based on the current parameters, and it
displays on the command screen.
(Figure 98 shows the R, G, and B input histograms, with frequency on the vertical axis, partitioned into sections; the highlighted cluster contains pixels between 16 and 34 in red, between 35 and 55 in green, and between 0 and 16 in blue.)
Table 41: RGB Clustering

Advantages:
- The fastest classification method. It is designed to provide a fast, simple classification for applications that do not require specific classes.
- Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.
- (Advanced version only) A highly interactive function, allowing an iterative adjustment of the parameters until the number of clusters and the thresholds are satisfactory for analysis.

Disadvantages:
- Exactly three bands must be input, which is not suitable for all applications.
- Does not always create thematic classes that can be analyzed for informational purposes.
Tips
Some starting values that usually produce good results with the
simple RGB clustering are:
R = 7
G = 6
B = 6
which results in 7 × 6 × 6 = 252 classes.
To decrease the number of output colors/classes or to darken the
output, decrease these values.
For the Advanced RGB clustering function, start with higher values
for R, G, and B. Adjust by raising the threshold parameter and/or
decreasing the R, G, and B parameter values until the desired
number of output classes is obtained.
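The simple partitioning can be sketched as below. This is an illustration only, not the Image Interpreter RGB Clustering function: it scales each band between its minimum and maximum data values (the standard-deviation option is not shown) and turns every (R, G, B) section combination into a class value.

```python
import numpy as np

def rgb_cluster(red, green, blue, sections=(7, 6, 6)):
    """Simple RGB clustering sketch for three-band data.

    Each band's histogram is divided into the given number of sections
    between its minimum and maximum data values; every pixel falls into
    one (R, G, B) cell, and each cell becomes an output class.
    """
    def section_index(band, n):
        lo, hi = float(band.min()), float(band.max())
        idx = (band - lo) / (hi - lo + 1e-9) * n      # scale band to [0, n)
        return np.clip(idx.astype(int), 0, n - 1)

    r_idx = section_index(red, sections[0])
    g_idx = section_index(green, sections[1])
    b_idx = section_index(blue, sections[2])
    # combine the three section indices into a single class value
    return (r_idx * sections[1] + g_idx) * sections[2] + b_idx
```

With the default 7, 6, and 6 sections this yields the 252 possible classes noted above.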
Signature Files
A signature is a set of data that defines a training sample, feature
space object (AOI), or cluster. The signature is used in a
classification process. Each classification decision rule (algorithm)
requires some signature attributes as input; these are stored in the
signature file (.sig). Signatures in ERDAS IMAGINE can be
parametric or nonparametric.
The following attributes are standard for all signatures (parametric
and nonparametric):
name: identifies the signature and is used as the class name in
the output thematic raster layer. The default signature name is
Class <number>.
color: the color for the signature and the color for the class in the
output thematic raster layer. This color is also used with other
signature visualization functions, such as alarms, masking,
ellipses, etc.
value: the output class value for the signature. The output class
value does not necessarily need to be the class number of the
signature. This value should be a positive integer.
order: the order to process the signatures for order-dependent
processes, such as signature alarms and parallelepiped
classifications.
parallelepiped limits: the limits used in the parallelepiped
classification.
Parametric Signature
A parametric signature is based on statistical parameters (e.g.,
mean and covariance matrix) of the pixels that are in the training
sample or cluster. A parametric signature includes the following
attributes in addition to the standard attributes for signatures:
the number of bands in the input image (as processed by the
training program)
the minimum and maximum data file value in each band for each
sample or cluster (minimum vector and maximum vector)
the mean data file value in each band for each sample or cluster
(mean vector)
the covariance matrix for each sample or cluster
the number of pixels in the sample or cluster
Nonparametric Signature
A nonparametric signature is based on an AOI that you define in the
feature space image for the image file being classified. A
nonparametric classifier uses a set of nonparametric signatures to
assign pixels to a class based on their location, either inside or
outside the area in the feature space image.
The format of the .sig file is described in the On-Line Help.
Information on these statistics can be found in Math Topics.
Evaluating
Signatures
Once signatures are created, they can be evaluated, deleted,
renamed, and merged with signatures from other files. Merging
signatures enables you to perform complex classifications with
signatures that are derived from more than one training method
(supervised and/or unsupervised, parametric and/or
nonparametric).
Use the Signature Editor to view the contents of each signature,
manipulate signatures, and perform your own mathematical
tests on the statistics.
Using Signature Data
There are tests to perform that can help determine whether the
signature data are a true representation of the pixels to be classified
for each class. You can evaluate signatures that were created either
from supervised or unsupervised training. The evaluation methods in
ERDAS IMAGINE include:
Alarm: using your own pattern recognition ability, you view the
estimated classified area for a signature (using the parallelepiped
decision rule) against a display of the original image.
Ellipse: view ellipse diagrams and scatterplots of data file values
for every pair of bands.
Contingency matrix: do a quick classification of the pixels in a
set of training samples to see what percentage of the sample
pixels are actually classified as expected. These percentages are
presented in a contingency matrix. This method is for supervised
training only, for which polygons of the training samples exist.
Divergence: measure the divergence (statistical distance)
between signatures and determine band subsets that maximize
the classification.
Statistics and histograms: analyze statistics and histograms of
the signatures to make evaluations and comparisons.
NOTE: If the signature is nonparametric (i.e., a feature space
signature), you can use only the alarm evaluation method.
After analyzing the signatures, it would be beneficial to merge or
delete them, eliminate redundant bands from the data, add new
bands of data, or perform any other operations to improve the
classification.
Alarm The alarm evaluation enables you to compare an estimated
classification of one or more signatures against the original data, as
it appears in the Viewer. According to the parallelepiped decision
rule, the pixels that fit the classification criteria are highlighted in the
displayed image. You also have the option to indicate an overlap by
having it appear in a different color.
With this test, you can use your own pattern recognition skills, or
some ground truth data, to determine the accuracy of a signature.
Use the Signature Alarm utility in the Signature Editor to
perform n-dimensional alarms on the image in the Viewer, using
the parallelepiped decision rule. The alarm utility creates a
functional layer, and the Viewer allows you to toggle between
the image layer and the functional layer.
Ellipse In this evaluation, ellipses of concentration are calculated with the
means and standard deviations stored in the signature file. It is also
possible to generate parallelepiped rectangles, means, and labels.
In this evaluation, the mean and the standard deviation of every
signature are used to represent the ellipse in 2-dimensional feature
space. The ellipse is displayed in a feature space image.
Ellipses are explained and illustrated in Math Topics under the
discussion of Scatterplots.
When the ellipses in the feature space image show extensive
overlap, then the spectral characteristics of the pixels represented
by the signatures cannot be distinguished in the two bands that are
graphed. In the best case, there is no overlap. Some overlap,
however, is expected.
Figure 99 shows how ellipses are plotted and how they can overlap.
The first graph shows how the ellipses are plotted based on the range
of 2 standard deviations from the mean. This range can be altered,
changing the ellipse plots. Analyzing the plots with differing numbers
of standard deviations is useful for determining the limits of a
parallelepiped classification.
Figure 99: Ellipse Evaluation of Signatures
By analyzing the ellipse graphs for all band pairs, you can determine
which signatures and which bands provide accurate classification
results.
(Figure 99 contains two band-pair plots: "Signature Overlap," in which the ellipses for signature 1 and signature 2 overlap in Band A versus Band B data file values, and "Distinct Signatures," in which the two ellipses are separate in Band C versus Band D. Each ellipse is centered on the signature's band means and extends two standard deviations from the mean in each band.)
Use the Signature Editor to create a feature space image and to
view an ellipse(s) of signature data.
Contingency Matrix NOTE: This evaluation classifies all of the pixels in the selected AOIs
and compares the results to the pixels of a training sample.
The pixels of each training sample are not always so homogeneous
that every pixel in a sample is actually classified to its corresponding
class. Each sample pixel only weights the statistics that determine
the classes. However, if the signature statistics for each sample are
distinct from those of the other samples, then a high percentage of
each sample's pixels is classified as expected.
In this evaluation, a quick classification of the sample pixels is
performed using the minimum distance, maximum likelihood, or
Mahalanobis distance decision rule. Then, a contingency matrix is
presented, which contains the number and percentages of pixels that
are classified as expected.
Use the Signature Editor to perform the contingency matrix
evaluation.
Separability
Signature separability is a statistical measure of distance between
two signatures. Separability can be calculated for any combination of
bands that is used in the classification, enabling you to rule out any
bands that are not useful in the results of the classification.
For the distance (Euclidean) evaluation, the spectral distance
between the mean vectors of each pair of signatures is computed. If
the spectral distance between two samples is not significant for any
pair of bands, then they may not be distinct enough to produce a
successful classification.
The spectral distance is also the basis of the minimum distance
classification (as explained below). Therefore, computing the
distances between signatures helps you predict the results of a
minimum distance classification.
Use the Signature Editor to compute signature separability and
distance and automatically generate the report.
The formulas used to calculate separability are related to the
maximum likelihood decision rule. Therefore, evaluating signature
separability helps you predict the results of a maximum likelihood
classification. The maximum likelihood decision rule is explained
below.
There are three options for calculating the separability. All of these
formulas take into account the covariances of the signatures in the
bands being compared, as well as the mean vectors of the
signatures.
Refer to Math Topics for information on the mean vector and
covariance matrix.
Divergence
The formula for computing Divergence (D_ij) is as follows:

D_{ij} = \frac{1}{2} \, \mathrm{tr}\left( (C_i - C_j)(C_i^{-1} - C_j^{-1}) \right) + \frac{1}{2} \, \mathrm{tr}\left( (C_i^{-1} + C_j^{-1})(\mu_i - \mu_j)(\mu_i - \mu_j)^T \right)

Where:
i and j = the two signatures (classes) being compared
C_i = the covariance matrix of signature i
μ_i = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function
Source: Swain and Davis, 1978

Transformed Divergence
The formula for computing Transformed Divergence (TD) is as follows:

TD_{ij} = 2000 \left( 1 - \exp\left( \frac{-D_{ij}}{8} \right) \right)

Where:
i and j = the two signatures (classes) being compared
D_ij = the divergence between signatures i and j, computed as above
Source: Swain and Davis, 1978
According to Jensen, the transformed divergence gives an
exponentially decreasing weight to increasing distances between the
classes. The scale of the divergence values can range from 0 to
2,000. Interpreting your results after applying transformed
divergence requires you to analyze those numerical divergence
values. As a general rule, if the result is greater than 1,900, then the
classes can be separated. Between 1,700 and 1,900, the separation
is fairly good. Below 1,700, the separation is poor (Jensen, 1996).
Jeffries-Matusita Distance
The formula for computing Jeffries-Matusita Distance (JM) is as
follows:

\alpha = \frac{1}{8} (\mu_i - \mu_j)^T \left( \frac{C_i + C_j}{2} \right)^{-1} (\mu_i - \mu_j) + \frac{1}{2} \ln \left( \frac{ \left| \frac{C_i + C_j}{2} \right| }{ \sqrt{ |C_i| \, |C_j| } } \right)

JM_{ij} = \sqrt{ 2 \left( 1 - e^{-\alpha} \right) }

Where:
i and j = the two signatures (classes) being compared
C_i = the covariance matrix of signature i
μ_i = the mean vector of signature i
ln = the natural logarithm function
|C_i| = the determinant of C_i (matrix algebra)
Source: Swain and Davis, 1978
According to Jensen, "The JM distance has a saturating behavior with
increasing class separation like transformed divergence. However, it
is not as computationally efficient as transformed divergence"
(Jensen, 1996).
Separability
Both transformed divergence and Jeffries-Matusita distance have
upper and lower bounds. If the calculated divergence is equal to the
appropriate upper bound, then the signatures can be said to be
totally separable in the bands being studied. A calculated divergence
of zero means that the signatures are inseparable.
TD is between 0 and 2000.
JM is between 0 and 1414.
A separability listing is a report of the computed divergence for every
class pair and one band combination. The listing contains every
divergence value for the bands studied for every possible pair of
signatures.
The separability listing also contains the average divergence and the
minimum divergence for the band set. These numbers can be
compared to other separability listings (for other band
combinations), to determine which set of bands is the most useful
for classification.
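A sketch of how such a listing might be computed for a chosen band subset is shown below, using the Jeffries-Matusita formula given above. It is not the Signature Editor report; the signature list and band indexing are assumptions, and the values follow the formula directly (0 to √2 ≈ 1.414), whereas the report described above uses a 0 to 1414 scale.

```python
import numpy as np

def jeffries_matusita(mean_i, cov_i, mean_j, cov_j):
    """Jeffries-Matusita distance between two parametric signatures."""
    d = (mean_i - mean_j)[:, None]
    cov_avg = 0.5 * (cov_i + cov_j)
    alpha = (0.125 * (d.T @ np.linalg.inv(cov_avg) @ d).item()
             + 0.5 * np.log(np.linalg.det(cov_avg)
                            / np.sqrt(np.linalg.det(cov_i) * np.linalg.det(cov_j))))
    return np.sqrt(2.0 * (1.0 - np.exp(-alpha)))

def separability_listing(signatures, bands):
    """Pairwise JM distances for a chosen band subset (0-based indices).

    signatures: list of (mean_vector, covariance_matrix) tuples.
    Returns the pairwise values plus the minimum and average, which can
    be compared across band subsets as described above.
    """
    report = {}
    sub = np.ix_(bands, bands)
    for i in range(len(signatures)):
        for j in range(i + 1, len(signatures)):
            mi, ci = signatures[i]
            mj, cj = signatures[j]
            report[(i, j)] = jeffries_matusita(mi[bands], ci[sub], mj[bands], cj[sub])
    values = list(report.values())
    return report, min(values), sum(values) / len(values)
```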
Weight Factors
As with the Bayesian classifier (explained below with maximum
likelihood), weight factors may be specified for each signature.
These weight factors are based on a priori probabilities that any
given pixel is assigned to each class. For example, if you know that
twice as many pixels should be assigned to Class A as to Class B,
then Class A should receive a weight factor that is twice that of Class
B.
NOTE: The weight factors do not influence the divergence equations
(for TD or JM), but they do influence the report of the best average
and best minimum separability.
The weight factors for each signature are used to compute a
weighted divergence with the following calculation:

W_{ij} = \frac{ \sum_{i=1}^{c-1} \left( f_i \sum_{j=i+1}^{c} f_j U_{ij} \right) }{ \frac{1}{2} \left[ \left( \sum_{i=1}^{c} f_i \right)^2 - \sum_{i=1}^{c} f_i^2 \right] }

Where:
i and j = the two signatures (classes) being compared
U_ij = the unweighted divergence between i and j
W_ij = the weighted divergence between i and j
c = the number of signatures (classes)
f_i = the weight factor for signature i
Probability of Error
The Jeffries-Matusita distance is related to the pairwise probability of
error, which is the probability that a pixel assigned to class i is
actually in class j. Within a range, this probability can be estimated
according to the expression below:
\frac{1}{16} \left( 2 - JM_{ij}^2 \right)^2 \;\le\; P_e \;\le\; 1 - \frac{1}{2} \left( 1 + \frac{1}{2} JM_{ij}^2 \right)
Where:
i and j = the signatures (classes) being compared
JM_ij = the Jeffries-Matusita distance between i and j
P_e = the probability that a pixel is misclassified from i to j
Source: Swain and Davis, 1978
Signature Manipulation
In many cases, training must be repeated several times before the
desired signatures are produced. Signatures can be gathered from
different sources: different training samples, feature space images,
and different clustering programs, all using different techniques.
After each signature file is evaluated, you may merge, delete, or
create new signatures. The desired signatures can finally be moved
to one signature file to be used in the classification.
The following operations upon signatures and signature files are
possible with ERDAS IMAGINE:
View the contents of the signature statistics
View histograms of the samples or clusters that were used to
derive the signatures
Delete unwanted signatures
Merge signatures together, so that they form one larger class
when classified
Append signatures from other files. You can combine signatures
that are derived from different training methods for use in one
classification.
Use the Signature Editor to view statistics and histogram listings
and to delete, merge, append, and rename signatures within a
signature file.
Classification
Decision Rules
Once a set of reliable signatures has been created and evaluated, the
next step is to perform a classification of the data. Each pixel is
analyzed independently. The measurement vector for each pixel is
compared to each signature, according to a decision rule, or
algorithm. Pixels that pass the criteria that are established by the
decision rule are then assigned to the class for that signature. ERDAS
IMAGINE enables you to classify the data both parametrically with
statistical representation, and nonparametrically as objects in
feature space. Figure 100 shows the flow of an image pixel through
the classification decision making process in ERDAS IMAGINE (Kloer,
1994).
If a nonparametric rule is not set, then the pixel is classified using
only the parametric rule. All of the parametric signatures are tested.
If a nonparametric rule is set, the pixel is tested against all of the
signatures with nonparametric definitions. This rule results in the
following conditions:
If the nonparametric test results in one unique class, the pixel is
assigned to that class.
If the nonparametric test results in zero classes (i.e., the pixel
lies outside all the nonparametric decision boundaries), then the
unclassified rule is applied. With this rule, the pixel is either
classified by the parametric rule or left unclassified.
If the pixel falls into more than one class as a result of the
nonparametric test, the overlap rule is applied. With this rule, the
pixel is either classified by the parametric rule, processing order,
or left unclassified.
Nonparametric Rules ERDAS IMAGINE provides these decision rules for nonparametric
signatures:
parallelepiped
feature space
Unclassified Options
ERDAS IMAGINE provides these options if the pixel is not classified
by the nonparametric rule:
parametric rule
unclassified
Overlap Options
ERDAS IMAGINE provides these options if the pixel falls into more
than one feature space object:
parametric rule
by order
unclassified
Parametric Rules ERDAS IMAGINE provides these commonly-used decision rules for
parametric signatures:
minimum distance
Mahalanobis distance
maximum likelihood (with Bayesian variation)
Figure 100: Classification Flow Diagram
Parallelepiped In the parallelepiped decision rule, the data file values of the
candidate pixel are compared to upper and lower limits. These limits
can be either:
the minimum and maximum data file values of each band in the
signature,
the mean of each band, plus and minus a number of standard
deviations, or
any limits that you specify, based on your knowledge of the data
and signatures. This knowledge may come from the signature
evaluation techniques discussed above.
These limits can be set using the Parallelepiped Limits utility in
the Signature Editor.
There are high and low limits for every signature in every band.
When a pixel's data file values are between the limits for every band
in a signature, then the pixel is assigned to that signature's class.
Figure 101 is a two-dimensional example of a parallelepiped
classification.
Figure 101: Parallelepiped Classification Using Two
Standard Deviations as Limits
The large rectangles in Figure 101 are called parallelepipeds. They
are the regions within the limits for each signature.
Overlap Region
In cases where a pixel may fall into the overlap region of two or more
parallelepipeds, you must define how the pixel can be classified.
The pixel can be classified by the order of the signatures. If one
of the signatures is first and the other signature is fourth, the
pixel is assigned to the first signature's class. This order can be
set in the Signature Editor.
The pixel can be classified by the defined parametric decision
rule. The pixel is tested against the overlapping signatures only.
If neither of these signatures is parametric, then the pixel is left
unclassified. If only one of the signatures is parametric, then the
pixel is automatically assigned to that signature's class.
The pixel can be left unclassified.
Regions Outside of the Boundaries
If the pixel does not fall into one of the parallelepipeds, then you
must define how the pixel can be classified.
The pixel can be classified by the defined parametric decision
rule. The pixel is tested against all of the parametric signatures.
If none of the signatures is parametric, then the pixel is left
unclassified.
The pixel can be left unclassified.
Use the Supervised Classification utility in the Signature Editor
to perform a parallelepiped classification.
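A minimal sketch of the parallelepiped test is shown below; it is not the ERDAS IMAGINE classifier. It assumes hypothetical per-class low and high limit vectors (for example, signature minimum/maximum vectors or mean ± 2 standard deviations), resolves overlap by signature order, and leaves pixels outside all parallelepipeds unclassified.

```python
import numpy as np

def parallelepiped_classify(image, low_limits, high_limits, unclassified=0):
    """Parallelepiped classification sketch.

    low_limits, high_limits: arrays of shape (num_classes, bands).
    A pixel is assigned to the first signature (processing order) whose
    limits contain its data file values in every band.
    """
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands)
    labels = np.full(pixels.shape[0], unclassified, dtype=int)
    # iterate in reverse so earlier signatures overwrite later ones in overlaps
    for c in range(len(low_limits) - 1, -1, -1):
        inside = np.all((pixels >= low_limits[c]) & (pixels <= high_limits[c]), axis=1)
        labels[inside] = c + 1          # class values start at 1; 0 = unclassified
    return labels.reshape(rows, cols)
```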
Table 42: Parallelepiped Decision Rule

Advantages:
- Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.
- Often useful for a first-pass, broad classification, this decision rule quickly narrows down the number of possible classes to which each pixel can be assigned before the more time-consuming calculations are made, thus cutting processing time (e.g., minimum distance, Mahalanobis distance, or maximum likelihood).
- Not dependent on normal distributions.

Disadvantages:
- Since parallelepipeds have corners, pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 102.
Figure 102: Parallelepiped Corners Compared to the
Signature Ellipse
Feature Space The feature space decision rule determines whether or not a
candidate pixel lies within the nonparametric signature in the feature
space image. When a pixel's data file values are in the feature space
signature, then the pixel is assigned to that signature's class. Figure
103 is a two-dimensional example of a feature space classification.
The polygons in this figure are AOIs used to define the feature space
signatures.
Figure 103: Feature Space Classification
Overlap Region
In cases where a pixel may fall into the overlap region of two or more
AOIs, you must define how the pixel can be classified.
The pixel can be classified by the order of the feature space
signatures. If one of the signatures is first and the other
signature is fourth, the pixel is assigned to the first signature's
class. This order can be set in the Signature Editor.
The pixel can be classified by the defined parametric decision
rule. The pixel is tested against the overlapping signatures only.
If neither of these feature space signatures is parametric, then
the pixel is left unclassified. If only one of the signatures is
parametric, then the pixel is assigned automatically to that
signature's class.
The pixel can be left unclassified.
Regions Outside of the AOIs
If the pixel does not fall into one of the AOIs for the feature space
signatures, then you must define how the pixel can be classified.
The pixel can be classified by the defined parametric decision
rule. The pixel is tested against all of the parametric signatures.
If none of the signatures is parametric, then the pixel is left
unclassified.
The pixel can be left unclassified.
Use the Decision Rules utility in the Signature Editor to perform
a feature space classification.
Table 43: Feature Space Decision Rule

Advantages:
- Often useful for a first-pass, broad classification.
- Provides an accurate way to classify a class with a nonnormal distribution (e.g., residential and urban).
- Certain features may be more visually identifiable, which can help discriminate between classes that are spectrally similar and hard to differentiate with parametric information.
- The feature space method is fast.

Disadvantages:
- The feature space decision rule allows overlap and unclassified pixels.
- The feature space image may be difficult to interpret.

Minimum Distance
The minimum distance decision rule (also called spectral distance)
calculates the spectral distance between the measurement vector for
the candidate pixel and the mean vector for each signature.
Figure 104: Minimum Spectral Distance
In Figure 104, spectral distance is illustrated by the lines from the
candidate pixel to the means of the three signatures. The candidate
pixel is assigned to the class with the closest mean.
The equation for classifying by spectral distance is based on the
equation for Euclidean distance:
SD_{xyc} = \sqrt{ \sum_{i=1}^{n} \left( \mu_{ci} - X_{xyi} \right)^2 }

Where:
n = number of bands (dimensions)
i = a particular band
c = a particular class
X_xyi = data file value of pixel x,y in band i
μ_ci = mean of data file values in band i for the sample for class c
SD_xyc = spectral distance from pixel x,y to the mean of class c
Source: Swain and Davis, 1978
When spectral distance is computed for all possible values of c (all
possible classes), the class of the candidate pixel is assigned to the
class for which SD is the lowest.

Table 44: Minimum Distance Decision Rule

Advantages:
- Since every pixel is spectrally closer to either one sample mean or another, there are no unclassified pixels.
- The fastest decision rule to compute, except for parallelepiped.

Disadvantages:
- Pixels that should be unclassified (i.e., they are not spectrally close to the mean of any sample, within limits that are reasonable to you) become classified. However, this problem is alleviated by thresholding out the pixels that are farthest from the means of their classes. (See the discussion on "Thresholding".)
- Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Inversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes to their means.
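A sketch of the minimum distance rule, following the equation above, might look like this; the array shapes are assumptions and the code is illustrative rather than the ERDAS IMAGINE implementation.

```python
import numpy as np

def minimum_distance_classify(image, class_means):
    """Minimum (spectral) distance classification sketch.

    image: array of shape (rows, cols, bands) of data file values.
    class_means: array of shape (num_classes, bands) of signature mean vectors.
    Each pixel is assigned to the class whose mean vector is closest in
    Euclidean spectral distance.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    # distances has shape (num_pixels, num_classes)
    distances = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return distances.argmin(axis=1).reshape(image.shape[:2])
```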
Mahalanobis Distance
The Mahalanobis distance algorithm assumes that the
histograms of the bands have normal distributions. If this is not
the case, you may have better results with the parallelepiped or
minimum distance decision rule, or by performing a first-pass
parallelepiped classification.
Mahalanobis distance is similar to minimum distance, except that the
covariance matrix is used in the equation. Variance and covariance
are figured in so that clusters that are highly varied lead to similarly
varied classes, and vice versa. For example, when classifying urban
areas (typically a class whose pixels vary widely), correctly classified
pixels may be farther from the mean than those of a class for water,
which is usually not a highly varied class (Swain and Davis, 1978).
The equation for the Mahalanobis distance classifier is as follows:
D = (X - M_c)^T \, (Cov_c)^{-1} \, (X - M_c)
Where:
D = Mahalanobis distance
c = a particular class
X = the measurement vector of the candidate pixel
M_c = the mean vector of the signature of class c
Cov_c = the covariance matrix of the pixels in the signature of class c
Cov_c^{-1} = inverse of Cov_c
T = transposition function
The pixel is assigned to the class, c, for which D is the lowest.

Table 45: Mahalanobis Decision Rule

Advantages:
- Takes the variability of classes into account, unlike minimum distance or parallelepiped.
- May be more useful than minimum distance in cases where statistical criteria (as expressed in the covariance matrix) must be taken into account, but the weighting factors that are available with the maximum likelihood/Bayesian option are not needed.

Disadvantages:
- Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature contains large values.
- Slower to compute than parallelepiped or minimum distance.
- Mahalanobis distance is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
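A sketch of the Mahalanobis rule, following the equation and definitions above, is given below; it is illustrative only, and the array shapes are assumptions.

```python
import numpy as np

def mahalanobis_classify(image, class_means, class_covs):
    """Mahalanobis distance classification sketch.

    class_means: (num_classes, bands); class_covs: (num_classes, bands, bands).
    D = (X - Mc)^T Covc^-1 (X - Mc) is evaluated for every class and the
    pixel is assigned to the class with the lowest D.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    distances = np.empty((pixels.shape[0], len(class_means)))
    for c, (mean, cov) in enumerate(zip(class_means, class_covs)):
        diff = pixels - mean
        cov_inv = np.linalg.inv(cov)
        # row-wise quadratic form (X - Mc)^T Covc^-1 (X - Mc)
        distances[:, c] = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return distances.argmin(axis=1).reshape(image.shape[:2])
```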
Maximum
Likelihood/Bayesian
The maximum likelihood algorithm assumes that the histograms
of the bands of data have normal distributions. If this is not the
case, you may have better results with the parallelepiped or
minimum distance decision rule, or by performing a first-pass
parallelepiped classification.
The maximum likelihood decision rule is based on the probability that
a pixel belongs to a particular class. The basic equation assumes that
these probabilities are equal for all classes, and that the input bands
have normal distributions.
Bayesian Classifier
If you have a priori knowledge that the probabilities are not equal for
all classes, you can specify weight factors for particular classes. This
variation of the maximum likelihood decision rule is known as the
Bayesian decision rule (Hord, 1982). Unless you have a priori
knowledge of the probabilities, it is recommended that they not be
specified. In this case, these weights default to 1.0 in the equation.
The equation for the maximum likelihood/Bayesian classifier is as
follows:
D = \ln(a_c) - \left[ 0.5 \ln(|Cov_c|) \right] - \left[ 0.5 \, (X - M_c)^T \, (Cov_c)^{-1} \, (X - M_c) \right]
Where:
D = weighted distance (likelihood)
c = a particular class
X = the measurement vector of the candidate pixel
M_c = the mean vector of the sample of class c
a_c = percent probability that any candidate pixel is a member of class c (defaults to 1.0, or is entered from a priori knowledge)
Cov_c = the covariance matrix of the pixels in the sample of class c
|Cov_c| = determinant of Cov_c (matrix algebra)
Cov_c^{-1} = inverse of Cov_c (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)
The inverse and determinant of a matrix, along with the difference
and transposition of vectors, would be explained in a textbook of
matrix algebra.
The pixel is assigned to the class, c, for which D is the lowest.
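A sketch of the maximum likelihood/Bayesian rule is shown below. It evaluates the expression above for each class; because that expression is a log likelihood, the sketch selects the class that maximizes it (equivalently, the class with the smallest weighted distance, the negative of the score). The priors argument plays the role of the a_c weight factors and defaults to 1.0; the array shapes are assumptions.

```python
import numpy as np

def maximum_likelihood_classify(image, class_means, class_covs, priors=None):
    """Maximum likelihood/Bayesian classification sketch.

    Evaluates D = ln(ac) - 0.5 ln|Covc| - 0.5 (X-Mc)^T Covc^-1 (X-Mc)
    for every class and assigns the pixel to the best-scoring class.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    n_classes = len(class_means)
    if priors is None:
        priors = np.ones(n_classes)          # equal probabilities for all classes
    scores = np.empty((pixels.shape[0], n_classes))
    for c in range(n_classes):
        diff = pixels - class_means[c]
        cov_inv = np.linalg.inv(class_covs[c])
        quad = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        scores[:, c] = (np.log(priors[c])
                        - 0.5 * np.log(np.linalg.det(class_covs[c]))
                        - 0.5 * quad)
    # largest likelihood score = smallest weighted distance
    return scores.argmax(axis=1).reshape(image.shape[:2])
```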
Table 46: Maximum Likelihood/Bayesian Decision Rule

Advantages:
- The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.
- Takes the variability of classes into account by using the covariance matrix, as does Mahalanobis distance.

Disadvantages:
- An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.
- Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
- Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature contains large values.
Fuzzy Methodology
Fuzzy Classification
The Fuzzy Classification method takes into account that there are
pixels of mixed make-up, that is, a pixel cannot be definitively
assigned to one category. Jensen notes that, "Clearly, there needs
to be a way to make the classification algorithms more sensitive to
the imprecise (fuzzy) nature of the real world" (Jensen, 1996).
Fuzzy classification is designed to help you work with data that may
not fall into exactly one category or another. Fuzzy classification
works using a membership function, wherein a pixel's value is
determined by whether it is closer to one class than another. A fuzzy
classification does not have definite boundaries, and each pixel can
belong to several different classes (Jensen, 1996).
Like traditional classification, fuzzy classification still uses training,
but "the biggest difference is that it is also possible to obtain
information on the various constituent classes found in a mixed
pixel..." (Jensen, 1996). Jensen goes on to explain that the process
of collecting training sites in a fuzzy classification is not as strict as
in a traditional classification. In the fuzzy method, the training sites
do not have to have pixels that are exactly the same.
Once you have a fuzzy classification, the fuzzy convolution utility
allows you to perform a moving window convolution on a fuzzy
classification with multiple output class assignments. Using the
multilayer classification and distance file, the convolution creates a
new single class output file by computing a total weighted distance
for all classes in the window.
Fuzzy Convolution
The Fuzzy Convolution operation creates a single classification layer
by calculating the total weighted inverse distance of all the classes
in a window of pixels. Then, it assigns the center pixel in the class
with the largest total inverse distance summed over the entire set of
fuzzy classification layers.
This has the effect of creating a context-based classification to
reduce the speckle or salt and pepper in the classification. Classes
with a very small distance value remain unchanged while classes
with higher distance values may change to a neighboring value if
there is a sufficient number of neighboring pixels with class values
and small corresponding distance values. The following equation is
used in the calculation:

T[k] = \sum_{i=0}^{s} \sum_{j=0}^{s} \sum_{l=0}^{n} \frac{w_{ij}}{D_{ijl}[k]}
Where:
i = row index of window
j = column index of window
s = size of window (3, 5, or 7)
l = layer index of fuzzy set
n = number of fuzzy layers used
W = weight table for window
k = class value
D[k] = distance file value for class k
T[k] = total weighted distance of window for class k
The center pixel is assigned the class with the maximum T[k].
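A sketch of the fuzzy convolution calculation, following the T[k] equation above, is given below. It is illustrative only, not the IMAGINE implementation: it assumes a uniform weight table, arrays holding the multiple class assignments and their distances per layer, and a small guard value to avoid dividing by zero distances.

```python
import numpy as np

def fuzzy_convolution(class_layers, distance_layers, window=3):
    """Fuzzy convolution sketch.

    class_layers, distance_layers: arrays of shape (layers, rows, cols)
    holding the multiple class assignments and their distance values.
    For every window position, the weighted inverse distances of all
    classes present are summed, and the center pixel is assigned the
    class with the largest total (T[k]).
    """
    n_layers, rows, cols = class_layers.shape
    half = window // 2
    output = np.zeros((rows, cols), dtype=class_layers.dtype)
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            totals = {}
            for l in range(n_layers):
                for i in range(r - half, r + half + 1):
                    for j in range(c - half, c + half + 1):
                        k = class_layers[l, i, j]
                        d = distance_layers[l, i, j]
                        # uniform weight of 1; guard against zero distance
                        totals[k] = totals.get(k, 0.0) + 1.0 / max(d, 1e-6)
            output[r, c] = max(totals, key=totals.get)
    return output
```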
Expert Classification
Expert classification can be performed using the IMAGINE Expert
Classifier. The expert classification software provides a rules-based
approach to multispectral image classification, post-classification
refinement, and GIS modeling. In essence, an expert classification
system is a hierarchy of rules, or a decision tree, that describes the
conditions under which a set of low level constituent information gets
abstracted into a set of high level informational classes. The
constituent information consists of user-defined variables and
includes raster imagery, vector coverages, spatial models, external
programs, and simple scalars.
A rule is a conditional statement, or list of conditional statements,
about a variable's data values and/or attributes that determine an
informational component, or hypothesis. Multiple rules and
hypotheses can be linked together into a hierarchy that ultimately
describes a final set of target informational classes or terminal
hypotheses. Confidence values associated with each condition are
also combined to provide a confidence image corresponding to the
final output classified image.
The IMAGINE Expert Classifier is composed of two parts: the
Knowledge Engineer and the Knowledge Classifier. The Knowledge
Engineer provides the interface for an expert with first-hand
knowledge of the data and the application to identify the variables,
rules, and output classes of interest and create the hierarchical
decision tree. The Knowledge Classifier provides an interface for a
nonexpert to apply the knowledge base and create the output
classification.
Knowledge Engineer With the Knowledge Engineer, you can open knowledge bases, which
are presented as decision trees in editing windows.
Figure 105: Knowledge Engineer Editing Window
In Figure 105, the upper left corner of the editing window is an
overview of the entire decision tree with a green box indicating the
position within the knowledge base of the currently displayed portion
of the decision tree. This box can be dragged to change the view of
the decision tree graphic in the display window on the right. The
branch containing the currently selected hypotheses, rule, or
condition is highlighted in the overview.
The decision tree grows in depth when the hypothesis of one rule is
referred to by a condition of another rule. The terminal hypotheses
of the decision tree represent the final classes of interest.
Intermediate hypotheses may also be flagged as being a class of
interest. This may occur when there is an association between
classes.
Figure 106 represents a single branch of a decision tree depicting a
hypothesis, its rule, and conditions.
Figure 106: Example of a Decision Tree Branch
In this example, the rule, which is Gentle Southern Slope,
determines the hypothesis, Good Location. The rule has four
conditions depicted on the right side (Aspect > 135, Aspect <= 225,
Slope < 12, and Slope > 0), all of which must be satisfied for the
rule to be true.
However, the rule may be split so that either a Southern Slope rule
or a Gentle Slope rule defines the Good Location hypothesis. While
both conditions of a rule must still be true to fire that rule, only one
rule must be true to satisfy the hypothesis.
Figure 107: Split Rule Decision Tree Branch
Variable Editor
The Knowledge Engineer also makes use of a Variable Editor when
classifying images. The Variable Editor provides for the definition of
the variable objects to be used in the rules' conditions.
The two types of variables are raster and scalar. Raster variables
may be defined by imagery, feature layers (including vector layers),
graphic spatial models, or by running other programs. Scalar
variables may be defined with an explicit value, or defined as the
output from a model or external program.
Evaluating the Output of the Knowledge Engineer
The task of creating a useful, well-constructed knowledge base
requires numerous iterations of trial, evaluation, and refinement. To
facilitate this process, two options are provided. First, you can use
the Test Classification to produce a test classification using the
current knowledge base. Second, you can use the Classification
Pathway Cursor to evaluate the results. This tool allows you to move
a crosshair over the image in a Viewer to establish a confidence level
for areas in the image.
Knowledge Classifier The Knowledge Classifier is composed of two parts: an application
with a user interface, and a command line executable. The user
interface application allows you to input a limited set of parameters
to control the use of the knowledge base. The user interface is
designed as a wizard to lead you through pages of input parameters.
After selecting a knowledge base, you are prompted to select
classes. The following is an example classes dialog:
Figure 108: Knowledge Classifier Classes of Interest
After you select the input data for classification, the classification
output options, output files, output area, output cell size, and output
map projection, the Knowledge Classifier process can begin. An
inference engine then evaluates all hypotheses at each location
(calculating variable values, if required), and assigns the hypothesis
with the highest confidence. The output of the Knowledge Classifier
is a thematic image, and optionally, a confidence image.
Evaluating
Classification
After a classification is performed, these methods are available for
testing the accuracy of the classification:
Thresholding: Use a probability image file to screen out
misclassified pixels.
Accuracy Assessment: Compare the classification to ground
truth or other data.
Thresholding Thresholding is the process of identifying the pixels in a classified
image that are the most likely to be classified incorrectly. These
pixels are put into another class (usually class 0). These pixels are
identified statistically, based upon the distance measures that were
used in the classification decision rule.
Distance File
When a minimum distance, Mahalanobis distance, or maximum
likelihood classification is performed, a distance image file can be
produced in addition to the output thematic raster layer. A distance
image file is a one-band, 32-bit offset continuous raster layer in
which each data file value represents the result of a spectral distance
equation, depending upon the decision rule used.
In a minimum distance classification, each distance value is the
Euclidean spectral distance between the measurement vector of
the pixel and the mean vector of the pixel's class.
In a Mahalanobis distance or maximum likelihood classification,
the distance value is the Mahalanobis distance between the
measurement vector of the pixel and the mean vector of the
pixel's class.
The brighter pixels (with the higher distance file values) are
spectrally farther from the signature means for the classes to which
they are assigned. They are more likely to be misclassified.
The darker pixels are spectrally nearer, and more likely to be
classified correctly. If supervised training was used, the darkest
pixels are usually the training samples.
Figure 109: Histogram of a Distance Image
Figure 109 shows how the histogram of the distance image usually
appears. This distribution is called a chi-square distribution, as
opposed to a normal distribution, which is a symmetrical bell curve.
Threshold
The pixels that are the most likely to be misclassified have the higher
distance file values at the tail of this histogram. At some point that
you define (either mathematically or visually), the tail of this
histogram is cut off. The cutoff point is the threshold.
(Figure 109 plots the number of pixels against the distance value.)
To determine the threshold:
interactively change the threshold with the mouse, when a
distance histogram is displayed while using the threshold
function. This option enables you to select a chi-square value by
selecting the cut-off value in the distance histogram, or
input a chi-square parameter or distance measurement, so that
the threshold can be calculated statistically.
In both cases, thresholding has the effect of cutting the tail off of the
histogram of the distance image file, representing the pixels with the
highest distance values.
Figure 110: Interactive Thresholding Tips
Figure 110 shows some example distance histograms. With each
example is an explanation of what the curve might mean, and how
to threshold it.
Smooth chi-square shape: try to find the breakpoint where
the curve becomes more horizontal, and cut off the tail.
Minor mode(s) (peaks) in the curve probably indicate that
the class picked up other features that were not represented
in the signature. You probably want to threshold these
features out.
Not a good class. The signature for this class probably
represented a polymodal (multipeaked) distribution.
Peak of the curve is shifted from 0. Indicates that the
signature mean is off-center from the pixels it represents.
You may need to take a new signature and reclassify.
Chi-square Statistics
If the minimum distance classifier is used, then the threshold is
simply a certain spectral distance. However, if Mahalanobis or
maximum likelihood are used, then chi-square statistics are used to
compare probabilities (Swain and Davis, 1978).
When statistics are used to calculate the threshold, the threshold is
more clearly defined as follows:
T is the distance value at which C% of the pixels in a class have a
distance value greater than or equal to T.
Where:
T = the threshold for a class
C% = the percentage of pixels that are believed to be
misclassified, known as the confidence level
T is related to the distance values by means of chi-square statistics.
The value χ² (chi-squared) is used in the equation. χ² is a function
of:
the number of bands of data used, known in chi-square statistics
as the number of degrees of freedom
the confidence level
When classifying an image in ERDAS IMAGINE, the classified image
automatically has the degrees of freedom (i.e., number of bands)
used for the classification. The chi-square table is built into the
threshold application.
NOTE: In this application of chi-square statistics, the value of χ² is
an approximation. Chi-square statistics are generally applied to
independent variables (having no covariance), which is not usually
true of image data.
A further discussion of chi-square statistics can be found in a
statistics text.
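As a minimal sketch of the statistical threshold described above (assuming SciPy is available; this is not the built-in Classification Threshold utility), the chi-square value for a given confidence level and number of bands can be obtained as follows.

```python
from scipy.stats import chi2

def distance_threshold(confidence_level, n_bands):
    """Chi-square distance threshold T.

    confidence_level: fraction of pixels believed to be misclassified (C%),
    e.g. 0.05 cuts off the 5% of pixels with the largest distance values.
    n_bands: number of bands used in the classification (degrees of freedom).
    """
    return chi2.ppf(1.0 - confidence_level, df=n_bands)

# Example: a 6-band classification, cutting off the worst 5% of distances
print(round(distance_threshold(0.05, 6), 2))    # approximately 12.59
```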
Use the Classification Threshold utility to perform the
thresholding.
Accuracy Assessment Accuracy assessment is a general term for comparing the
classification to geographical data that are assumed to be true, in
order to determine the accuracy of the classification process.
Usually, the assumed-true data are derived from ground truth data.
It is usually not practical to ground truth or otherwise test every pixel
of a classified image. Therefore, a set of reference pixels is usually
used. Reference pixels are points on the classified image for which
actual data are (or will be) known. The reference pixels are randomly
selected (Congalton, 1991).
NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility
to perform an accuracy assessment for any thematic layer. This layer
does not have to be classified by ERDAS IMAGINE (e.g., you can run
an accuracy assessment on a thematic layer that was classified in
ERDAS Version 7.5 and imported into ERDAS IMAGINE).
Random Reference Pixels
When reference pixels are selected by the analyst, it is often
tempting to select the same pixels for testing the classification that
were used in the training samples. This biases the test, since the
training samples are the basis of the classification. By allowing the
reference pixels to be selected at random, the possibility of bias is
lessened or eliminated (Congalton, 1991).
The number of reference pixels is an important factor in determining
the accuracy of the classification. It has been shown that more than
250 reference pixels are needed to estimate the mean accuracy of a
class to within plus or minus five percent (Congalton, 1991).
ERDAS IMAGINE uses a square window to select the reference pixels.
The size of the window can be defined by you. Three different types
of distribution are offered for selecting the random pixels:
random: no rules are used
stratified random: the number of points is stratified to the
distribution of thematic layer classes
equalized random: each class has an equal number of random
points
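A minimal sketch of the three distribution types, assuming the classified layer is available as a NumPy array of class values; the function and argument names are illustrative and this is not the Accuracy Assessment utility itself.

```python
import numpy as np

def sample_reference_pixels(class_image, n_points, scheme="random", seed=0):
    """Pick (row, col) reference pixel coordinates from a classified image.

    scheme: "random" (no rules), "stratified" (proportional to class
    frequency), or "equalized" (same number of points per class).
    """
    rng = np.random.default_rng(seed)
    rows, cols = np.indices(class_image.shape)
    coords = np.column_stack([rows.ravel(), cols.ravel()])
    labels = class_image.ravel()

    if scheme == "random":
        return coords[rng.choice(len(coords), size=n_points, replace=False)]

    classes, counts = np.unique(labels, return_counts=True)
    if scheme == "stratified":
        per_class = np.maximum(1, np.round(n_points * counts / counts.sum())).astype(int)
    else:  # "equalized"
        per_class = np.full(len(classes), n_points // len(classes))

    picks = []
    for c, n in zip(classes, per_class):
        idx = np.flatnonzero(labels == c)
        picks.append(coords[rng.choice(idx, size=min(n, len(idx)), replace=False)])
    return np.vstack(picks)
```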
Use the Accuracy Assessment utility to generate random
reference points.
Accuracy Assessment CellArray
An Accuracy Assessment CellArray is created to compare the
classified image with reference data. This CellArray is simply a list of
class values for the pixels in the classified image file and the class
values for the corresponding reference pixels. The class values for
the reference pixels are input by you. The CellArray data reside in an
image file.
Use the Accuracy Assessment CellArray to enter reference pixels
for the class values.
Error Reports
From the Accuracy Assessment CellArray, two kinds of reports can
be derived.
The error matrix simply compares the reference points to the
classified points in a c × c matrix, where c is the number of
classes (including class 0).
The accuracy report calculates statistics of the percentages of
accuracy, based upon the results of the error matrix.
When interpreting the reports, it is important to observe the
percentage of correctly classified pixels and to determine the nature
of errors of the producer and yourself.
Use the Accuracy Assessment utility to generate the error matrix
and accuracy reports.
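The following sketch shows how an error matrix and the basic accuracy percentages could be derived from paired reference and classified class values. It is illustrative only; the convention that rows hold reference classes and columns hold classified classes is an assumption.

```python
import numpy as np

def error_matrix(reference, classified, n_classes):
    """c x c error matrix; rows = reference class, columns = classified class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        m[ref, cls] += 1
    return m

def accuracy_report(m):
    """Overall, producer's, and user's accuracy percentages from an error matrix."""
    overall = np.trace(m) / m.sum()
    producers = np.diag(m) / np.maximum(m.sum(axis=1), 1)   # reflects errors of omission
    users = np.diag(m) / np.maximum(m.sum(axis=0), 1)       # reflects errors of commission
    return 100 * overall, 100 * producers, 100 * users
```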
Kappa Coefficient
The Kappa coefficient expresses the proportionate reduction in error
generated by a classification process compared with the error of a
completely random classification. For example, a value of .82 implies
that the classification process is avoiding 82 percent of the errors
that a completely random classification generates (Congalton,
1991).
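A minimal sketch of the Kappa computation from an error matrix, using the same rows = reference, columns = classified convention assumed above; a result of 0.82 would be read as avoiding 82 percent of the errors of a completely random classification.

```python
import numpy as np

def kappa_coefficient(m):
    """Kappa from an error matrix m (rows = reference, columns = classified)."""
    n = m.sum()
    observed = np.trace(m) / n                                   # overall agreement
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / (n * n)   # chance agreement
    return (observed - expected) / (1.0 - expected)
```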
Photogrammetric Concepts
Introduction
What is
Photogrammetry?
Photogrammetry is "the art, science and technology of obtaining
reliable information about physical objects and the environment
through the process of recording, measuring and interpreting
photographic images and patterns of electromagnetic radiant
imagery and other phenomena" (American Society of
Photogrammetry, 1980).
Photogrammetry was invented in 1851 by Laussedat, and has
continued to develop over the last 140 years. Over time, the
development of photogrammetry has passed through the phases of
plane table photogrammetry, analog photogrammetry, analytical
photogrammetry, and has now entered the phase of digital
photogrammetry (Konecny, 1994).
The traditional, and largest, application of photogrammetry is to
extract topographic information (e.g., topographic maps) from aerial
images. However, photogrammetric techniques have also been
applied to process satellite images and close range images in order
to acquire topographic or nontopographic information of
photographed objects.
Prior to the invention of the airplane, photographs taken on the
ground were used to extract the relationships between objects using
geometric principles. This was during the phase of plane table
photogrammetry.
In analog photogrammetry, starting with stereomeasurement in
1901, optical or mechanical instruments were used to reconstruct
three-dimensional geometry from two overlapping photographs. The
main product during this phase was topographic maps.
In analytical photogrammetry, the computer replaces some
expensive optical and mechanical components. The resulting devices
were analog/digital hybrids. Analytical aerotriangulation, analytical
plotters, and orthophoto projectors were the main developments
during this phase. Outputs of analytical photogrammetry can be
topographic maps, but can also be digital products, such as digital
maps and DEMs.
Digital photogrammetry is photogrammetry as applied to digital
images that are stored and processed on a computer. Digital images
can be scanned from photographs or can be directly captured by
digital cameras. Many photogrammetric tasks can be highly
automated in digital photogrammetry (e.g., automatic DEM
extraction and digital orthophoto generation). Digital
photogrammetry is sometimes called softcopy photogrammetry. The
output products are in digital form, such as digital maps, DEMs, and
digital orthophotos saved on computer storage media. Therefore,
they can be easily stored, managed, and applied by you. With the
development of digital photogrammetry, photogrammetric
techniques are more closely integrated into remote sensing and GIS.
Digital photogrammetric systems employ sophisticated software to
automate the tasks associated with conventional photogrammetry,
thereby minimizing the extent of manual interaction required to
perform photogrammetric operations. LPS Project Manager is such a
photogrammetric system.
Photogrammetry can be used to measure and interpret information
from hardcopy photographs or images. Sometimes the process of
measuring information from photography and satellite imagery is
considered metric photogrammetry, such as creating DEMs.
Interpreting information from photography and imagery is
considered interpretative photogrammetry, such as identifying and
discriminating between various tree types as represented on a
photograph or image (Wolf, 1983).
Types of Photographs and
Images
The types of photographs and images that can be processed within
IMAGINE LPS Project Manager include aerial, terrestrial, close range,
and oblique. Aerial or vertical (near vertical) photographs and
images are taken from a high vantage point above the Earth's
surface. The camera axis of aerial or vertical photography is
commonly directed vertically (or near vertically) down. Aerial
photographs and images are commonly used for topographic and
planimetric mapping projects. Aerial photographs and images are
commonly captured from an aircraft or satellite.
Terrestrial or ground-based photographs and images are taken with
the camera stationed on or close to the Earth's surface. Terrestrial
and close range photographs and images are commonly used for
applications involved with archeology, geomorphology, civil
engineering, architecture, industry, etc.
Oblique photographs and images are similar to aerial photographs
and images, except the camera axis is intentionally inclined at an
angle with the vertical. Oblique photographs and images are
commonly used for reconnaissance and corridor mapping
applications.
Digital photogrammetric systems use digitized photographs or digital
images as the primary source of input. Digital imagery can be
obtained from various sources. These include:
Digitizing existing hardcopy photographs
Using digital cameras to record imagery
Using sensors on board satellites such as Landsat and SPOT to
record imagery
This document uses the term imagery in reference to
photography and imagery obtained from various sources. This
includes aerial and terrestrial photography, digital and video
camera imagery, 35 mm photography, medium to large format
photography, scanned photography, and satellite imagery.
Why use
Photogrammetry?
As stated in the previous section, raw aerial photography and
satellite imagery have large geometric distortion that is caused by
various systematic and nonsystematic factors. The photogrammetric
modeling based on collinearity equations eliminates these errors
most efficiently, and creates the most reliable orthoimages from the
raw imagery. It is unique in terms of considering the image-forming
geometry, utilizing information between overlapping images, and
explicitly dealing with the third dimension: elevation.
In addition to orthoimages, photogrammetry can also provide other
geographic information such as a DEM, topographic features, and
line maps reliably and efficiently. In essence, photogrammetry
produces accurate and precise geographic information from a wide
range of photographs and images. Any measurement taken on a
photogrammetrically processed photograph or image reflects a
measurement taken on the ground. Rather than constantly going to the
field to measure distances, areas, angles, and point positions on the
Earths surface, photogrammetric tools allow for the accurate
collection of information from imagery. Photogrammetric
approaches for collecting geographic information save time and
money, and maintain the highest accuracies.
Photogrammetry vs.
Conventional Geometric
Correction
Conventional techniques of geometric correction such as polynomial
transformation are based on general functions not directly related to
the specific distortion or error sources. They have been successful in
the field of remote sensing and GIS applications, especially when
dealing with low resolution and narrow field of view satellite imagery
such as Landsat and SPOT data (Yang, 1997). General functions
have the advantage of simplicity. They can provide a reasonable
geometric modeling alternative when little is known about the
geometric nature of the image data.
However, conventional techniques generally process the images one
at a time. They cannot provide an integrated solution for multiple
images or photographs simultaneously and efficiently. It is very
difficult, if not impossible, for conventional techniques to achieve a
reasonable accuracy without a great number of GCPs when dealing
with large-scale imagery, images having severe systematic and/or
nonsystematic errors, and images covering rough terrain.
Misalignment is more likely to occur when mosaicking separately
rectified images. This misalignment could result in inaccurate
geographic information being collected from the rectified images.
Furthermore, it is impossible for a conventional technique to create
a three-dimensional stereo model or to extract the elevation
information from two overlapping images. There is no way for
conventional techniques to accurately derive geometric information
about the sensor that captured the imagery.
Photogrammetric techniques overcome all the problems mentioned
above by using least squares bundle block adjustment. This solution
is integrated and accurate.
IMAGINE LPS Project Manager can process hundreds of images or
photographs with very few GCPs, while at the same time eliminating
the misalignment problem associated with creating image mosaics.
In short, less time, less money, less manual effort, but more
geographic fidelity can be realized using the photogrammetric
solution.
Single Frame
Orthorectification vs.
Block Triangulation
Single frame orthorectification techniques orthorectify one image at
a time using a technique known as space resection. In this respect,
a minimum of three GCPs is required for each image. For example,
in order to orthorectify 50 aerial photographs, a minimum of 150
GCPs is required. This includes manually identifying and measuring
each GCP for each image individually. Once the GCPs are measured,
space resection techniques compute the camera/sensor position and
orientation as it existed at the time of data capture. This information,
along with a DEM, is used to account for the negative impacts
associated with geometric errors. Additional variables associated
with systematic error are not considered.
Single frame orthorectification techniques do not utilize the internal
relationship between adjacent images in a block to minimize and
distribute the errors commonly associated with GCPs, image
measurements, DEMs, and camera/sensor information. Therefore,
during the mosaic procedure, misalignment between adjacent
images is common since error has not been minimized and
distributed throughout the block.
Aerial or block triangulation is the process of establishing a
mathematical relationship between the images contained in a
project, the camera or sensor model, and the ground. The
information resulting from aerial triangulation is required as input for
the orthorectification, DEM, and stereopair creation processes. The
term aerial triangulation is commonly used when processing aerial
photography and imagery. The term block triangulation, or simply
triangulation, is used when processing satellite imagery. The
techniques differ slightly as a function of the type of imagery being
processed.
Classic aerial triangulation using optical-mechanical analog and
analytical stereo plotters is primarily used for the collection of GCPs
using a technique known as control point extension. Since the cost
of collecting GCPs is very large, photogrammetric techniques are
accepted as the ideal approach for collecting GCPs over large areas
using photography rather than conventional ground surveying
techniques. Control point extension involves the manual photo
measurement of ground points appearing on overlapping images.
These ground points are commonly referred to as tie points. Once
the points are measured, the ground coordinates associated with the
tie points can be determined using photogrammetric techniques
employed by analog or analytical stereo plotters. These points are
then referred to as ground control points (GCPs).
With the advent of digital photogrammetry, classic aerial
triangulation has been extended to provide greater functionality.
IMAGINE LPS Project Manager uses a mathematical technique known
as bundle block adjustment for aerial triangulation. Bundle block
adjustment provides three primary functions:
To determine the position and orientation for each image in a
project as they existed at the time of photographic or image
exposure. The resulting parameters are referred to as exterior
orientation parameters. In order to estimate the exterior
orientation parameters, a minimum of three GCPs is required for
the entire block, regardless of how many images are contained
within the project.
To determine the ground coordinates of any tie points manually
or automatically measured on the overlap areas of multiple
images. The highly precise ground point determination of tie
points is useful for generating control points from imagery in lieu
of ground surveying techniques. Additionally, if a large number
of ground points is generated, then a DEM can be interpolated
using the Create Surface tool in ERDAS IMAGINE.
To minimize and distribute the errors associated with the
imagery, image measurements, GCPs, and so forth. The bundle
block adjustment processes information from an entire block of
imagery in one simultaneous solution (i.e., a bundle) using
statistical techniques (i.e., adjustment component) to
automatically identify, distribute, and remove error.
Because the images are processed in one step, the misalignment
issues associated with creating mosaics are resolved.
Image and Data
Acquisition
During photographic or image collection, overlapping images are
exposed along a direction of flight. Most photogrammetric
applications involve the use of overlapping images. In using more
than one image, the geometry associated with the camera/sensor,
image, and ground can be defined to greater accuracies and
precision.
During the collection of imagery, each point in the flight path at
which the camera exposes the film, or the sensor captures the
imagery, is called an exposure station (see Figure 111).
Figure 111: Exposure Stations Along a Flight Path
Each photograph or image that is exposed has a corresponding
image scale associated with it. The image scale expresses the
average ratio between a distance in the image and the same distance
on the ground. It is computed as focal length divided by the flying
height above the mean ground elevation. For example, with a flying
height of 1000 m and a focal length of 15 cm, the image scale (SI)
would be 1:6667.
NOTE: The flying height above ground is used, versus the altitude
above sea level.
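As a small worked check of the image scale computation (illustrative only; units follow the example in the text):

```python
def image_scale_denominator(focal_length_m, flying_height_above_ground_m):
    """Denominator x of the average image scale 1:x."""
    return flying_height_above_ground_m / focal_length_m

# Example from the text: 15 cm focal length and 1000 m flying height -> 1:6667
print(round(image_scale_denominator(0.15, 1000.0)))    # 6667
```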
A strip of photographs consists of images captured along a flight line,
normally with an overlap of 60%. All photos in the strip are assumed
to be taken at approximately the same flying height and with a
constant distance between exposure stations. Camera tilt relative to
the vertical is assumed to be minimal.
The photographs from several flight paths can be combined to form
a block of photographs. A block of photographs consists of a number
of parallel strips, normally with a sidelap of 20-30%. Block
triangulation techniques are used to transform all of the images in a
block and ground points into a homologous coordinate system.
A regular block of photos is a rectangular block in which the number
of photos in each strip is the same. Figure 112 shows a block of
5 × 2 photographs.
Figure 112: A Regular Rectangular Block of Aerial Photos
Photogrammetric
Scanners
Photogrammetric quality scanners are special devices capable of
high image quality and excellent positional accuracy. Use of this type
of scanner results in geometric accuracies similar to traditional
analog and analytical photogrammetric instruments. These scanners
are necessary for digital photogrammetric applications that have
high accuracy requirements.
These units usually scan only film because film is superior to paper,
both in terms of image detail and geometry. These units usually have
a Root Mean Square Error (RMSE) positional accuracy of 4 microns
or less, and are capable of scanning at a maximum resolution of 5 to
10 microns (5 microns is equivalent to approximately 5,000 pixels
per inch).
The required pixel resolution varies depending on the application.
Aerial triangulation and feature collection applications often scan in
the 10- to 15-micron range. Orthophoto applications often use 15-
to 30-micron pixels. Color film is less sharp than panchromatic,
therefore color ortho applications often use 20- to 40-micron pixels.
Desktop Scanners Desktop scanners are general purpose devices. They lack the image
detail and geometric accuracy of photogrammetric quality units, but
they are much less expensive. When using a desktop scanner, you
should make sure that the active area is at least 9 × 9 inches (i.e.,
A3 type scanners), enabling you to capture the entire photo frame.
Desktop scanners are appropriate for less rigorous uses, such as
digital photogrammetry in support of GIS or remote sensing
applications. Calibrating these units improves geometric accuracy,
but the results are still inferior to photogrammetric units. The image
correlation techniques that are necessary for automatic tie point
collection and elevation extraction are often sensitive to scan quality.
Therefore, errors can be introduced into the photogrammetric
solution that are attributable to scanning errors. IMAGINE LPS
Project Manager accounts for systematic errors attributed to
scanning errors.
Scanning Resolutions One of the primary factors contributing to the overall accuracy of
block triangulation and orthorectification is the resolution of the
imagery being used. Image resolution is commonly determined by
the scanning resolution (if film photography is being used), or by the
pixel resolution of the sensor. In order to optimize the attainable
accuracy of a solution, the scanning resolution must be considered.
The appropriate scanning resolution is determined by balancing the
accuracy requirements versus the size of the mapping project and
the time required to process the project. Table 47 lists the scanning
resolutions associated with various scales of photography and image
file size.
Table 47: Scanning Resolutions (ground coverage per pixel, in meters)

Photo Scale (1 to) | 12 microns (2117 dpi*) | 16 microns (1588 dpi) | 25 microns (1016 dpi) | 50 microns (508 dpi) | 85 microns (300 dpi)
1800  | 0.0216 | 0.0288 | 0.045 | 0.09 | 0.153
2400  | 0.0288 | 0.0384 | 0.06  | 0.12 | 0.204
3000  | 0.036  | 0.048  | 0.075 | 0.15 | 0.255
3600  | 0.0432 | 0.0576 | 0.09  | 0.18 | 0.306
4200  | 0.0504 | 0.0672 | 0.105 | 0.21 | 0.357
4800  | 0.0576 | 0.0768 | 0.12  | 0.24 | 0.408
5400  | 0.0648 | 0.0864 | 0.135 | 0.27 | 0.459
6000  | 0.072  | 0.096  | 0.15  | 0.3  | 0.51
6600  | 0.0792 | 0.1056 | 0.165 | 0.33 | 0.561
7200  | 0.0864 | 0.1152 | 0.18  | 0.36 | 0.612
7800  | 0.0936 | 0.1248 | 0.195 | 0.39 | 0.663
8400  | 0.1008 | 0.1344 | 0.21  | 0.42 | 0.714
9000  | 0.108  | 0.144  | 0.225 | 0.45 | 0.765
9600  | 0.1152 | 0.1536 | 0.24  | 0.48 | 0.816
10800 | 0.1296 | 0.1728 | 0.27  | 0.54 | 0.918
12000 | 0.144  | 0.192  | 0.3   | 0.6  | 1.02
15000 | 0.18   | 0.24   | 0.375 | 0.75 | 1.275
18000 | 0.216  | 0.288  | 0.45  | 0.9  | 1.53
24000 | 0.288  | 0.384  | 0.6   | 1.2  | 2.04
30000 | 0.36   | 0.48   | 0.75  | 1.5  | 2.55
40000 | 0.48   | 0.64   | 1     | 2    | 3.4
50000 | 0.6    | 0.8    | 1.25  | 2.5  | 4.25
60000 | 0.72   | 0.96   | 1.5   | 3    | 5.1
B/W File Size (MB)   | 363  | 204 | 84  | 21 | 7
Color File Size (MB) | 1089 | 612 | 252 | 63 | 21

* dpi = dots per inch
The ground coverage column refers to the ground coverage per
pixel. Thus, a 1:40000 scale photograph scanned at 25 microns
[1016 dots per inch (dpi)] has a ground coverage per pixel of 1 m ×
1 m. The resulting file size is approximately 85 MB, assuming a
square 9 × 9 inch photograph.
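The ground coverage and approximate file size figures in Table 47 follow from simple arithmetic, as in the sketch below; it assumes a square 9 × 9 inch photograph and one byte per pixel for B/W imagery.

```python
def ground_coverage_per_pixel_m(photo_scale_denominator, scan_microns):
    """Ground coverage of one scanned pixel, in meters."""
    return photo_scale_denominator * scan_microns * 1e-6

def bw_file_size_mb(scan_microns, photo_size_inches=9.0):
    """Approximate 8-bit black-and-white file size for a square photograph."""
    pixels_per_side = photo_size_inches * 25.4 / (scan_microns * 1e-3)
    return pixels_per_side ** 2 / 1e6               # one byte per pixel

# Example from the text: a 1:40000 photograph scanned at 25 microns
print(round(ground_coverage_per_pixel_m(40000, 25), 3))   # 1.0 m per pixel
print(round(bw_file_size_mb(25)))                         # about 84 MB
```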
Coordinate Systems Conceptually, photogrammetry involves establishing the relationship
between the camera or sensor used to capture imagery, the imagery
itself, and the ground. In order to understand and define this
relationship, each of the three variables associated with the
relationship must be defined with respect to a coordinate space and
coordinate system.
Pixel Coordinate System
The file coordinates of a digital image are defined in a pixel
coordinate system. A pixel coordinate system is usually a coordinate
system with its origin in the upper-left corner of the image, the x-
axis pointing to the right, the y-axis pointing downward, and the unit
in pixels, as shown by axis c and r in Figure 113. These file
coordinates (c, r) can also be thought of as the pixel column and row
number. This coordinate system is referenced as pixel coordinates
(c, r) in this chapter.
Figure 113: Pixel Coordinates and Image Coordinates
Image Coordinate System
An image coordinate system or an image plane coordinate system is
usually defined as a two-dimensional coordinate system occurring on
the image plane with its origin at the image center, normally at the
principal point or at the intersection of the fiducial marks as
illustrated by axis x and y in Figure 113. Image coordinates are used
to describe positions on the film plane. Image coordinate units are
usually millimeters or microns. This coordinate system is referenced
as image coordinates (x, y) in this chapter.
Image Space Coordinate System
An image space coordinate system (Figure 114) is identical to image
coordinates, except that it adds a third axis (z). The origin of the
image space coordinate system is defined at the perspective center
S as shown in Figure 114. Its x-axis and y-axis are parallel to the x-
axis and y-axis in the image plane coordinate system. The z-axis is
the optical axis, therefore the z value of an image point in the image
space coordinate system is usually equal to -f (focal length). Image
space coordinates are used to describe positions inside the camera
and usually use units in millimeters or microns. This coordinate
system is referenced as image space coordinates (x, y, z) in this
chapter.
Figure 114: Image Space and Ground Space Coordinate
System
Ground Coordinate System
A ground coordinate system is usually defined as a three-
dimensional coordinate system that utilizes a known map projection.
Ground coordinates (X,Y,Z) are usually expressed in feet or meters.
The Z value is elevation above mean sea level for a given vertical
datum. This coordinate system is referenced as ground coordinates
(X,Y,Z) in this chapter.
Geocentric and Topocentric Coordinate System
Most photogrammetric applications account for the Earth's curvature
in their calculations. This is done by adding a correction value or by
computing geometry in a coordinate system which includes
curvature. Two such systems are geocentric and topocentric
coordinates.
A geocentric coordinate system has its origin at the center of the
Earth ellipsoid. The Z-axis equals the rotational axis of the Earth, and
the X-axis passes through the Greenwich meridian. The Y-axis is
perpendicular to both the Z-axis and X-axis, so as to create a three-
dimensional coordinate system that follows the right hand rule.
A topocentric coordinate system has its origin at the center of the
image projected on the Earth ellipsoid. The three perpendicular
coordinate axes are defined on a tangential plane at this center
point. The plane is called the reference plane or the local datum. The
x-axis is oriented eastward, the y-axis northward, and the z-axis is
vertical to the reference plane (up).
For simplicity of presentation, the remainder of this chapter does not
explicitly reference geocentric or topocentric coordinates. Basic
photogrammetric principles can be presented without adding this
additional level of complexity.
Terrestrial Photography Photogrammetric applications associated with terrestrial or ground-
based images utilize slightly different image and ground space
coordinate systems. Figure 115 illustrates the two coordinate
systems associated with image space and ground space.
Figure 115: Terrestrial Photography
The image and ground space coordinate systems are right-handed
coordinate systems. Most terrestrial applications use a ground space
coordinate system that was defined using a localized Cartesian
coordinate system.
The image space coordinate system directs the z-axis toward the
imaged object and the y-axis north up. The image x-axis is
similar to that used in aerial applications. The XL, YL, and ZL
coordinates define the position of the perspective center as it existed
at the time of image capture. The ground coordinates of ground point
A (XA, YA, and ZA) are defined within the ground space coordinate
system (XG, YG, and ZG). With this definition, the rotation angles ω,
φ, and κ are still defined as in the aerial photography conventions. In
IMAGINE LPS Project Manager, you can also use the ground (X, Y, Z)
coordinate system to directly define GCPs. Thus, GCPs do not need
to be transformed. Then the definition of the rotation angles ω', φ',
and κ' is different, as shown in Figure 115.
Interior
Orientation
Interior orientation defines the internal geometry of a camera or
sensor as it existed at the time of data capture. The variables
associated with image space are defined during the process of
interior orientation. Interior orientation is primarily used to
transform the image pixel coordinate system or other image
coordinate measurement system to the image space coordinate
system.
Figure 116 illustrates the variables associated with the internal
geometry of an image captured from an aerial camera, where o
represents the principal point and a represents an image point.
Figure 116: Internal Geometry
The internal geometry of a camera is defined by specifying the
following variables:
Principal point
Focal length
Fiducial marks
Lens distortion
Principal Point and Focal
Length
The principal point is mathematically defined as the intersection of
the perpendicular line through the perspective center with the image
plane. The length from the principal point to the perspective center
is called the focal length (Wang, Z., 1990).
The image plane is commonly referred to as the focal plane. For
wide-angle aerial cameras, the focal length is approximately 152
mm, or 6 inches. For some digital cameras, the focal length is 28
mm. Prior to conducting photogrammetric projects, the focal length
of a metric camera is accurately determined or calibrated in a
laboratory environment.
This mathematical definition is the basis of triangulation, but difficult
to determine optically. The optical definition of principal point is the
image position where the optical axis intersects the image plane. In
the laboratory, this is calibrated in two forms: principal point of
autocollimation and principal point of symmetry, which can be seen
from the camera calibration report. Most applications prefer to use
the principal point of symmetry since it can best compensate for the
lens distortion.
Fiducial Marks As stated previously, one of the steps associated with interior
orientation involves determining the image position of the principal
point for each image in the project. Therefore, the image positions
of the fiducial marks are measured on the image, and subsequently
compared to the calibrated coordinates of each fiducial mark.
Since the image space coordinate system has not yet been defined
for each image, the measured image coordinates of the fiducial
marks are referenced to a pixel or file coordinate system. The pixel
coordinate system has an x coordinate (column) and a y coordinate
(row). The origin of the pixel coordinate system is the upper left
corner of the image having a row and column value of 0 and 0,
respectively. Figure 117 illustrates the difference between the pixel
coordinate system and the image space coordinate system.
Figure 117: Pixel Coordinate System vs. Image Space
Coordinate System
Using a two-dimensional affine transformation, the relationship
between the pixel coordinate system and the image space coordinate
system is defined. The following two-dimensional affine
transformation equations can be used to determine the coefficients
required to transform pixel coordinate measurements to the image
coordinates:

x = a1 + a2 X + a3 Y
y = b1 + b2 X + b3 Y
The x and y image coordinates associated with the calibrated fiducial
marks and the X and Y pixel coordinates of the measured fiducial
marks are used to determine six affine transformation coefficients.
The resulting six coefficients can then be used to transform each set
of row (y) and column (x) pixel coordinates to image coordinates.
The quality of the two-dimensional affine transformation is
represented using a root mean square (RMS) error. The RMS error
represents the degree of correspondence between the calibrated
fiducial mark coordinates and their respective measured image
coordinate values. Large RMS errors indicate poor correspondence.
This can be attributed to film deformation, poor scanning quality,
out-of-date calibration information, or image mismeasurement.
The affine transformation also defines the translation between the
origin of the pixel coordinate system and the image coordinate
system (xo-file and yo-file). Additionally, the affine transformation
takes into consideration rotation of the image coordinate system by
considering angle Θ (see Figure 117). A scanned image of an aerial
photograph is normally rotated due to the scanning procedure.
The degree of variation between the x- and y-axis is referred to as
nonorthogonality. The two-dimensional affine transformation also
considers the extent of nonorthogonality. The scale difference
between the x-axis and the y-axis is also considered using the affine
transformation.
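A minimal sketch of fitting the six affine coefficients, and an RMS error, from measured and calibrated fiducial coordinates using a NumPy least-squares solution; the function name and array layout are assumptions, not the LPS Project Manager interface.

```python
import numpy as np

def solve_affine(pixel_xy, image_xy):
    """Fit x = a1 + a2*X + a3*Y and y = b1 + b2*X + b3*Y by least squares.

    pixel_xy: (n, 2) measured fiducial positions in pixel coordinates (X, Y).
    image_xy: (n, 2) calibrated fiducial positions in image coordinates (x, y).
    Requires at least three fiducial marks; extra marks are adjusted.
    Returns (a, b, rms) where a and b hold the two coefficient triplets.
    """
    X, Y = pixel_xy[:, 0], pixel_xy[:, 1]
    design = np.column_stack([np.ones_like(X), X, Y])
    a, *_ = np.linalg.lstsq(design, image_xy[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(design, image_xy[:, 1], rcond=None)
    residuals = design @ np.column_stack([a, b]) - image_xy
    rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return a, b, rms
```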
Lens Distortion Lens distortion deteriorates the positional accuracy of image points
located on the image plane. Two types of lens distortion exist:
radial and tangential lens distortion. Lens distortion occurs when
light rays passing through the lens are bent, thereby changing
directions and intersecting the image plane at positions deviant from
the norm. Figure 118 illustrates the difference between radial and
tangential lens distortion.
Figure 118: Radial vs. Tangential Lens Distortion
Radial lens distortion causes imaged points to be distorted along
radial lines from the principal point o. The effect of radial lens
distortion is represented as Δr. Radial lens distortion is also
commonly referred to as symmetric lens distortion. Tangential lens
distortion occurs at right angles to the radial lines from the principal
point. The effect of tangential lens distortion is represented as Δt.
Since tangential lens distortion is much smaller in magnitude than
radial lens distortion, it is considered negligible.
The effects of lens distortion are commonly determined in a
laboratory during the camera calibration procedure.
The effects of radial lens distortion throughout an image can be
approximated using a polynomial. The following polynomial is used
to determine coefficients associated with radial lens distortion:

Δr = k0 r + k1 r³ + k2 r⁵

Δr represents the radial distortion along a radial distance r from the
principal point (Wolf, 1983). In most camera calibration reports, the
lens distortion value is provided as a function of radial distance from
the principal point or field angle. IMAGINE LPS Project Manager
accommodates radial lens distortion parameters in both scenarios.
Three coefficients (k0, k1, and k2) are computed using statistical
techniques. Once the coefficients are computed, each measurement
taken on an image is corrected for radial lens distortion.
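As an illustration only, a measured image coordinate could be corrected for radial lens distortion as sketched below. The principal point is assumed to be at the origin, and the sign convention for Δr depends on the camera calibration report, so treat the subtraction here as an assumption.

```python
def correct_radial_distortion(x, y, k0, k1, k2):
    """Shift a measured image point (x, y) to remove radial lens distortion.

    delta_r = k0*r + k1*r**3 + k2*r**5 is the distortion at radial distance r
    from the principal point (assumed here to be the origin); the measured
    point is moved toward the principal point by that amount.
    """
    r = (x ** 2 + y ** 2) ** 0.5
    if r == 0.0:
        return x, y
    delta_r = k0 * r + k1 * r ** 3 + k2 * r ** 5
    scale = (r - delta_r) / r
    return x * scale, y * scale
```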
Exterior
Orientation
Exterior orientation defines the position and angular orientation
associated with an image. The variables defining the position and
orientation of an image are referred to as the elements of exterior
orientation. The elements of exterior orientation define the
characteristics associated with an image at the time of exposure or
capture. The positional elements of exterior orientation include Xo,
Yo, and Zo. They define the position of the perspective center (O)
with respect to the ground space coordinate system (X, Y, and Z).
Zo is commonly referred to as the height of the camera above sea
level, which is commonly defined by a datum.
The angular or rotational elements of exterior orientation describe
the relationship between the ground space coordinate system (X, Y,
and Z) and the image space coordinate system (x, y, and z). Three
rotation angles are commonly used to define angular orientation.
They are omega (ω), phi (φ), and kappa (κ). Figure 119 illustrates
the elements of exterior orientation.
Figure 119: Elements of Exterior Orientation
Omega is a rotation about the photographic x-axis, phi is a rotation
about the photographic y-axis, and kappa is a rotation about the
photographic z-axis, which are defined as being positive if they are
counterclockwise when viewed from the positive end of their
respective axis. Different conventions are used to define the order
and direction of the three rotation angles (Wang, Z., 1990). The
ISPRS recommends the use of the ω, φ, κ convention. The
photographic z-axis is equivalent to the optical axis (focal length).
The x', y', and z' coordinates are parallel to the ground space
coordinate system.
Using the three rotation angles, the relationship between the image
space coordinate system (x, y, and z) and ground space coordinate
system (X, Y, and Z or x', y', and z') can be determined. A 3 × 3
matrix defining the relationship between the two systems is used.
This is referred to as the orientation or rotation matrix, M. The
rotation matrix can be defined as follows:

        | m11  m12  m13 |
    M = | m21  m22  m23 |
        | m31  m32  m33 |
The rotation matrix is derived by applying a sequential rotation of
omega about the x-axis, phi about the y-axis, and kappa about the
z-axis.
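A sketch of building M from the three rotation angles under one common photogrammetric convention (sequential rotations about the x-, y-, and z-axes); the exact multiplication order and signs vary between texts, so this is an assumption rather than the LPS Project Manager convention.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Orientation matrix M from omega, phi, kappa (in radians).

    Sequential rotations about the x-axis (omega), y-axis (phi), and
    z-axis (kappa); one common sign convention is used here.
    """
    m_omega = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(omega), np.sin(omega)],
                        [0.0, -np.sin(omega), np.cos(omega)]])
    m_phi = np.array([[np.cos(phi), 0.0, -np.sin(phi)],
                      [0.0, 1.0, 0.0],
                      [np.sin(phi), 0.0, np.cos(phi)]])
    m_kappa = np.array([[np.cos(kappa), np.sin(kappa), 0.0],
                        [-np.sin(kappa), np.cos(kappa), 0.0],
                        [0.0, 0.0, 1.0]])
    return m_kappa @ m_phi @ m_omega
```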
The Collinearity Equation The following section defines the relationship between the
camera/sensor, the image, and the ground. Most photogrammetric
tools utilize the following formulations in one form or another.
With reference to Figure 119, an image vector a can be defined as
the vector from the exposure station O to the image point p. A
ground space or object space vector A can be defined as the vector
from the exposure station O to the ground point P. The image vector
and ground vector are collinear, inferring that a line extending from
the exposure station to the image point and to the ground is linear.
The image vector and ground vector are only collinear if one is a
scalar multiple of the other. Therefore, the following statement can
be made:

a = kA

Where k is a scalar multiple. The image and ground vectors must be
within the same coordinate system. Therefore, image vector a is
comprised of the following components:

        | xp - xo |
    a = | yp - yo |
        |   -f    |

Where xo and yo represent the image coordinates of the principal
point.
Similarly, the ground vector can be formulated as follows:

        | Xp - Xo |
    A = | Yp - Yo |
        | Zp - Zo |

In order for the image and ground vectors to be within the same
coordinate system, the ground vector must be multiplied by the
rotation matrix M. The following equation can be formulated:

a = kMA
Where:

    | xp - xo |        | Xp - Xo |
    | yp - yo |  =  kM | Yp - Yo |
    |   -f    |        | Zp - Zo |
The above equation defines the relationship between the perspective
center of the camera/sensor exposure station and ground point P
appearing on an image with an image point location of p. This
equation forms the basis of the collinearity condition that is used in
most photogrammetric operations. The collinearity condition
specifies that the exposure station, ground point, and its
corresponding image point location must all lie along a straight line,
thereby being collinear. Two equations comprise the collinearity
condition:

xp - xo = -f [ m11(Xp - Xo1) + m12(Yp - Yo1) + m13(Zp - Zo1) ] /
          [ m31(Xp - Xo1) + m32(Yp - Yo1) + m33(Zp - Zo1) ]

yp - yo = -f [ m21(Xp - Xo1) + m22(Yp - Yo1) + m23(Zp - Zo1) ] /
          [ m31(Xp - Xo1) + m32(Yp - Yo1) + m33(Zp - Zo1) ]
One set of equations can be formulated for each ground point
appearing on an image. The collinearity condition is commonly used
to define the relationship between the camera/sensor, the image,
and the ground.
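The collinearity equations above can be applied directly to project a ground point into an image, as in the following illustrative sketch (the function and argument names are assumptions).

```python
import numpy as np

def ground_to_image(ground_point, exposure_station, m, focal_length,
                    principal_point=(0.0, 0.0)):
    """Project a ground point (X, Y, Z) into image coordinates (x, y).

    exposure_station: (Xo, Yo, Zo); m: 3 x 3 rotation matrix M;
    focal_length and principal_point in image coordinate units (e.g. mm).
    """
    delta = np.asarray(ground_point, dtype=float) - np.asarray(exposure_station, dtype=float)
    u, v, w = m @ delta                    # ground vector rotated into image space
    xo, yo = principal_point
    x = xo - focal_length * u / w
    y = yo - focal_length * v / w
    return x, y
```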
Photogrammetric
Solutions
As stated previously, digital photogrammetry is used for many
applications, ranging from orthorectification, automated elevation
extraction, stereopair creation, feature collection, highly accurate
point determination, and control point extension.
For any of the aforementioned tasks to be undertaken, a relationship
between the camera/sensor, the image(s) in a project, and the
ground must be defined. The following variables are used to define
the relationship:
Exterior orientation parameters for each image
Interior orientation parameters for each image
Accurate representation of the ground
Well-known obstacles in photogrammetry include defining the
interior and exterior orientation parameters for each image in a
project using a minimum number of GCPs. Due to the costs and labor
intensive procedures associated with collecting ground control, most
photogrammetric applications do not have an abundant number of
GCPs. Additionally, the exterior orientation parameters associated
with an image are normally unknown.
Depending on the input data provided, photogrammetric techniques
such as space resection, space forward intersection, and bundle
block adjustment are used to define the variables required to
perform orthorectification, automated DEM extraction, stereopair
creation, highly accurate point determination, and control point
extension.
Space Resection Space resection is a technique that is commonly used to determine
the exterior orientation parameters associated with one image or
many images based on known GCPs. Space resection is based on the
collinearity condition. Space resection using the collinearity condition
specifies that, for any image, the exposure station, the ground point,
and its corresponding image point must lie along a straight line.
If a minimum number of three GCPs is known in the X, Y, and Z
direction, space resection techniques can be used to determine the
six exterior orientation parameters associated with an image. Space
resection assumes that camera information is available.
Space resection is commonly used to perform single frame
orthorectification, where one image is processed at a time. If
multiple images are being used, space resection techniques require
that a minimum of three GCPs be located on each image being
processed.
Using the collinearity condition, the positions of the exterior
orientation parameters are computed. Light rays originating from at
least three GCPs pass through the image plane at the
image positions of the GCPs and resect at the perspective center of
the camera or sensor. Using least squares adjustment techniques,
the most probable positions of exterior orientation can be computed.
Space resection techniques can be applied to one image or multiple
images.
Space Forward
Intersection
Space forward intersection is a technique that is commonly used to
determine the ground coordinates X, Y, and Z of points that appear
in the overlapping areas of two or more images based on known
interior orientation and known exterior orientation parameters. The
collinearity condition is enforced, stating that the corresponding light
rays from the two exposure stations pass through the corresponding
image points on the two images and intersect at the same ground
point. Figure 120 illustrates the concept associated with space
forward intersection.
Figure 120: Space Forward Intersection
Space forward intersection techniques assume that the exterior
orientation parameters associated with the images are known. Using
the collinearity equations, the exterior orientation parameters along
with the image coordinate measurements of point p on Image 1 and
Image 2 are input to compute the Xp, Yp, and Zp coordinates of
ground point p.
Space forward intersection techniques can be used for applications
associated with collecting GCPs, cadastral mapping using airborne
surveying techniques, and highly accurate point determination.
Bundle Block Adjustment For mapping projects having more than two images, the use of space
intersection and space resection techniques is limited. This can be
attributed to the lack of information required to perform these tasks.
For example, it is fairly uncommon for the exterior orientation
parameters to be highly accurate for each photograph or image in a
project, since these values are generated photogrammetrically.
Airborne GPS and INS techniques normally provide initial
approximations to exterior orientation, but the final values for these
parameters must be adjusted to attain higher accuracies.
Similarly, rarely are there enough accurate GCPs for a project of 30
or more images to perform space resection (i.e., a minimum of 90 is
required). In the case that there are enough GCPs, the time required
to identify and measure all of the points would be costly.
The costs associated with block triangulation and orthorectification
are largely dependent on the number of GCPs used. To minimize the
costs of a mapping project, fewer GCPs are collected and used. To
ensure that high accuracies are attained, an approach known as
bundle block adjustment is used.
A bundle block adjustment is best defined by examining the
individual words in the term. A bundled solution is computed
including the exterior orientation parameters of each image in a
block and the X, Y, and Z coordinates of tie points and adjusted
GCPs. A block of images contained in a project is simultaneously
processed in one solution. A statistical technique known as least
squares adjustment is used to estimate the bundled solution for the
entire block while also minimizing and distributing error.
Block triangulation is the process of defining the mathematical
relationship between the images contained within a block, the
camera or sensor model, and the ground. Once the relationship has
been defined, accurate imagery and information concerning the
Earth's surface can be created.
When processing frame camera, digital camera, videography, and
nonmetric camera imagery, block triangulation is commonly referred
to as aerial triangulation (AT). When processing imagery collected
with a pushbroom sensor, block triangulation is commonly referred
to as triangulation.
There are several models for block triangulation. The common
models used in photogrammetry are block triangulation with the
strip method, the independent model method, and the bundle
method. Among them, the bundle block adjustment is the most
rigorous of the above methods, considering the minimization and
distribution of errors. Bundle block adjustment uses the collinearity
condition as the basis for formulating the relationship between image
space and ground space. IMAGINE LPS Project Manager uses bundle
block adjustment techniques.
In order to understand the concepts associated with bundle block
adjustment, an example comprising two images with three GCPs
with X, Y, and Z coordinates that are known is used. Additionally, six
tie points are available. Figure 121 illustrates the photogrammetric
configuration.
Figure 121: Photogrammetric Configuration
Forming the Collinearity Equations
For each measured GCP, there are two corresponding image
coordinates (x and y). Thus, two collinearity equations can be
formulated to represent the relationship between the ground point
and the corresponding image measurements. In the context of
bundle block adjustment, these equations are known as observation
equations.
If a GCP has been measured on the overlapping areas of two images,
four equations can be written: two for image measurements on the
left image comprising the pair and two for the image measurements
made on the right image comprising the pair. Thus, GCP A measured
on the overlap areas of image left and image right has four
collinearity formulas.
One image measurement of GCP A on Image 1: x_a1, y_a1
One image measurement of GCP A on Image 2: x_a2, y_a2
Positional elements of exterior orientation on Image 1: X_o1, Y_o1, Z_o1
Positional elements of exterior orientation on Image 2: X_o2, Y_o2, Z_o2

The four collinearity equations for GCP A are:

\[
x_{a_1} - x_o = -f\,\frac{m_{11}(X_A - X_{o_1}) + m_{12}(Y_A - Y_{o_1}) + m_{13}(Z_A - Z_{o_1})}{m_{31}(X_A - X_{o_1}) + m_{32}(Y_A - Y_{o_1}) + m_{33}(Z_A - Z_{o_1})}
\]

\[
y_{a_1} - y_o = -f\,\frac{m_{21}(X_A - X_{o_1}) + m_{22}(Y_A - Y_{o_1}) + m_{23}(Z_A - Z_{o_1})}{m_{31}(X_A - X_{o_1}) + m_{32}(Y_A - Y_{o_1}) + m_{33}(Z_A - Z_{o_1})}
\]

\[
x_{a_2} - x_o = -f\,\frac{m_{11}(X_A - X_{o_2}) + m_{12}(Y_A - Y_{o_2}) + m_{13}(Z_A - Z_{o_2})}{m_{31}(X_A - X_{o_2}) + m_{32}(Y_A - Y_{o_2}) + m_{33}(Z_A - Z_{o_2})}
\]

\[
y_{a_2} - y_o = -f\,\frac{m_{21}(X_A - X_{o_2}) + m_{22}(Y_A - Y_{o_2}) + m_{23}(Z_A - Z_{o_2})}{m_{31}(X_A - X_{o_2}) + m_{32}(Y_A - Y_{o_2}) + m_{33}(Z_A - Z_{o_2})}
\]

where x_o, y_o are the image coordinates of the principal point, f is the focal length, X_A, Y_A, Z_A are the ground coordinates of GCP A, and the rotation matrix elements m_11 through m_33 for each image are formed from that image's omega, phi, and kappa angles.
If three GCPs have been measured on the overlap areas of two
images, twelve equations can be formulated, which includes four
equations for each GCP (refer to Figure 121).
Additionally, if six tie points have been measured on the overlap
areas of the two images, twenty-four equations can be formulated,
which includes four for each tie point. This is a total of 36 observation
equations (refer to Figure 121).
The previous example has the following unknowns:
Six exterior orientation parameters for the left image (i.e., X, Y,
Z, omega, phi, kappa)
Six exterior orientation parameters for the right image (i.e., X, Y,
Z, omega, phi and kappa), and
X, Y, and Z coordinates of the tie points. Thus, for six tie points,
this includes eighteen unknowns (six tie points times three X, Y,
Z coordinates).
The total number of unknowns is 30. The overall quality of a bundle
block adjustment is largely a function of the quality and redundancy
in the input data. In this scenario, the redundancy in the project can
be computed by subtracting the number of unknowns (30) from the
number of observations (36). The resulting redundancy is six. This term is
commonly referred to as the degrees of freedom in a solution.
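The observation and unknown counts above can be checked with a few lines of arithmetic; the sketch below (illustrative only, assuming fixed GCP coordinates) reproduces the example's redundancy of six.

```python
def redundancy(num_images, num_gcps, num_tie_points, images_per_point=2):
    """Degrees of freedom for a small block with fixed (non-adjusted) GCP coordinates."""
    # Each measured point contributes two collinearity equations per image it appears on.
    observations = 2 * images_per_point * (num_gcps + num_tie_points)
    # Unknowns: six exterior orientation parameters per image plus X, Y, Z per tie point.
    unknowns = 6 * num_images + 3 * num_tie_points
    return observations - unknowns

print(redundancy(num_images=2, num_gcps=3, num_tie_points=6))   # prints 6
```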
Once each observation equation is formulated, the collinearity
condition can be solved using an approach referred to as least
squares adjustment.
Least Squares
Adjustment
Least squares adjustment is a statistical technique that is used to
estimate the unknown parameters associated with a solution while
also minimizing error within the solution. With respect to block
triangulation, least squares adjustment techniques are used to:
Estimate or adjust the values associated with exterior orientation
Estimate the X, Y, and Z coordinates associated with tie points
Estimate or adjust the values associated with interior orientation
Minimize and distribute data error through the network of
observations
Data error is attributed to the inaccuracy associated with the input
GCP coordinates, measured tie point and GCP image positions,
camera information, and systematic errors.
The least squares approach requires iterative processing until a
solution is attained. A solution is obtained when the residuals
associated with the input data are minimized.
The least squares approach involves determining the corrections to
the unknown parameters based on the criteria of minimizing input
measurement residuals. The residuals are derived from the
difference between the measured (i.e., user input) and computed
value for any particular measurement in a project. In the block
triangulation process, a functional model can be formed based upon
the collinearity equations.
The functional model refers to the specification of an equation that
can be used to relate measurements to parameters. In the context
of photogrammetry, measurements include the image locations of
GCPs and GCP coordinates, while the exterior orientations of all the
images are important parameters estimated by the block
triangulation process.
The residuals, which are minimized, include the image coordinates of
the GCPs and tie points along with the known ground coordinates of
the GCPs. A simplified version of the least squares condition can be
broken down into a formulation as follows:

\[
V = AX - L, \quad \text{including a weight matrix } P
\]

Where:
V = the matrix containing the image coordinate
residuals
A = the matrix containing the partial derivatives with
respect to the unknown parameters, including
exterior orientation, interior orientation, XYZ tie
point, and GCP coordinates
X = the matrix containing the corrections to the
unknown parameters
L = the matrix containing the input observations (i.e.,
image coordinates and GCP coordinates)
The components of the least squares condition are directly related to
the functional model based on collinearity equations. The A matrix is
formed by differentiating the functional model, which is based on
collinearity equations, with respect to the unknown parameters such
as exterior orientation, etc. The L matrix is formed by subtracting the
initial results obtained from the functional model with newly
estimated results determined from a new iteration of processing. The
X matrix contains the corrections to the unknown exterior orientation
parameters. The X matrix is calculated in the following manner:
\[
X = (A^{t} P A)^{-1} A^{t} P L
\]
Where:
X = the matrix containing the corrections to the unknown parameters
A = the matrix containing the partial derivatives with
respect to the unknown parameters
t = the matrix transposed
P = the matrix containing the weights of the
observations
L = the matrix containing the observations
Once a least squares iteration of processing is completed, the
corrections to the unknown parameters are added to the initial
estimates. For example, if initial approximations to exterior
orientation are provided from Airborne GPS and INS information, the
estimated corrections computed from the least squares adjustment
are added to the initial value to compute the updated exterior
orientation values. This iterative process of least squares adjustment
continues until the corrections to the unknown parameters are less
than a user-specified threshold (commonly referred to as a
convergence value).
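The update formula can be sketched as a small iterative solver; the following is an illustrative loop with hypothetical build_A and build_L callbacks supplying the design matrix and the (measured minus computed) observations. It is a sketch of the general technique, not the LPS Project Manager solver.

```python
import numpy as np

def iterate_least_squares(params, build_A, build_L, P, tol=1e-6, max_iter=20):
    """Generic iterative least squares adjustment.

    params  : initial estimates of the unknown parameters (1-D array)
    build_A : function(params) -> design matrix of partial derivatives
    build_L : function(params) -> vector of (measured - computed) observations
    P       : weight matrix of the observations
    """
    V = None
    for _ in range(max_iter):
        A = build_A(params)
        L = build_L(params)
        # Corrections to the unknowns: X = (A^t P A)^-1 A^t P L
        X = np.linalg.solve(A.T @ P @ A, A.T @ P @ L)
        V = A @ X - L               # residuals for this iteration (V = AX - L)
        params = params + X         # add the corrections to the current estimates
        if np.max(np.abs(X)) < tol:   # user-specified convergence value
            break
    return params, V
```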
The V residual matrix is computed at the end of each iteration of
processing. Once an iteration is completed, the new estimates for
the unknown parameters are used to recompute the input
observations such as the image coordinate values. The difference
between the initial measurements and the new estimates is obtained
to provide the residuals. Residuals provide preliminary indications of
the accuracy of a solution. The residual values indicate the degree to
which a particular observation (input) fits with the functional model.
For example, the image residuals can reflect the quality of
GCP collection in the field. After each successive iteration of
processing, the residuals become smaller until they are satisfactorily
minimized.
Once the least squares adjustment is completed, the block
triangulation results include:
Final exterior orientation parameters of each image in a block
and their accuracy
Final interior orientation parameters of each image in a block and
their accuracy
X, Y, and Z tie point coordinates and their accuracy
Adjusted GCP coordinates and their residuals
Image coordinate residuals
The results from the block triangulation are then used as the primary
input for the following tasks:
Stereo pair creation
Feature collection
Highly accurate point determination
DEM extraction
Orthorectification
Self-calibrating Bundle
Adjustment
Normally, there are systematic errors of varying magnitude related to the
imaging and processing system, such as lens distortion, film
distortion, atmospheric refraction, scanner errors, etc. These errors
reduce the accuracy of triangulation results, especially in dealing
with large-scale imagery and high accuracy triangulation. There are
several ways to reduce the influences of the systematic errors, like
a posteriori-compensation, test-field calibration, and the most
common approach: self-calibration (Konecny and Lehmann, 1984;
Wang, Z., 1990).
The self-calibrating methods use additional parameters in the
triangulation process to eliminate the systematic errors. How well it
works depends on many factors such as the strength of the block
(overlap amount, crossing flight lines), the GCP and tie point
distribution and amount, the size of systematic errors versus random
errors, the significance of the additional parameters, the correlation
between additional parameters, and other unknowns.
There was intensive research and development for additional
parameter models in photogrammetry in the 1970s and 1980s, and
many research results are available (e.g., Bauer and Müller, 1972;
Brown, 1975; Ebner, 1976; Grün, 1978; Jacobsen, 1980; Jacobsen,
1982; Li, 1985; Wang, Y., 1988a; Stojic et al., 1998). Based on these
scientific reports, IMAGINE LPS Project Manager provides four
groups of additional parameters for you to choose for different
triangulation circumstances. In addition, IMAGINE LPS Project
Manager allows the interior orientation parameters to be analytically
calibrated within its self-calibrating bundle block adjustment
capability.
Automatic Gross Error
Detection
Normal random errors are subject to statistical normal distribution.
In contrast, gross errors refer to errors that are large and are not
subject to normal distribution. The gross errors among the input data
for triangulation can lead to unreliable results. Research during the
1980s in the photogrammetric community resulted in significant
achievements in automatic gross error detection in the triangulation
process (e.g., Kubik, 1982; Li, 1983; Li, 1985; Jacobsen, 1984; El-
Hakim and Ziemann, 1984; Wang, Y., 1988a).
Methods for gross error detection began with residual checking using
data-snooping and were later extended to robust estimation (Wang,
Z., 1990). The most common robust estimation method is the
iteration with selective weight functions. Based on the scientific
research results from the photogrammetric community, IMAGINE
LPS Project Manager offers two robust error detection methods
within the triangulation process.
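A common form of robust estimation is iteration with a selective weight function, in which observations with large standardized residuals are progressively down-weighted so that gross errors lose their influence. The sketch below uses a generic down-weighting rule for illustration; the specific weight functions offered by IMAGINE LPS Project Manager may differ, and all names here are assumptions.

```python
import numpy as np

def selective_weights(residuals, sigma0, k=3.0):
    """Down-weight observations whose standardized residual exceeds k."""
    w = np.abs(residuals) / sigma0
    weights = np.ones_like(w)
    suspect = w > k
    weights[suspect] = (k / w[suspect]) ** 2   # rapidly decreasing weight for suspects
    return weights

def robust_adjustment(solve_weighted, residuals_of, weights, sigma0=1.0, n_iter=5):
    """Alternate a weighted least squares solve with reweighting from the residuals."""
    solution = None
    for _ in range(n_iter):
        solution = solve_weighted(weights)            # e.g., one bundle iteration
        weights = selective_weights(residuals_of(solution), sigma0)
    return solution, weights
```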
It is worth mentioning that the effect of the automatic error detection
depends not only on the mathematical model, but also depends on
the redundancy in the block. Therefore, more tie points in more
overlap areas contribute to better gross error detection. In addition,
inaccurate GCPs can distribute their errors to otherwise correct tie points;
therefore, the ground and image coordinates of GCPs should have
better accuracy than the tie points when compared within the
same scale space.
GCPs The instrumental component of establishing an accurate relationship
between the images in a project, the camera/sensor, and the ground
is GCPs. GCPs are identifiable features located on the Earth's surface
that have known ground coordinates in X, Y, and Z. A full GCP has
X, Y, and Z (elevation of the point) coordinates associated with it.
Horizontal control only specifies the X and Y, while vertical control only
specifies the Z. The following features on the Earth's surface are
commonly used as GCPs:
Intersection of roads
Utility infrastructure (e.g., fire hydrants and manhole covers)
Intersection of agricultural plots of land
Survey benchmarks
Depending on the type of mapping project, GCPs can be collected
from the following sources:
Theodolite survey (millimeter to centimeter accuracy)
Total station survey (millimeter to centimeter accuracy)
Ground GPS (centimeter to meter accuracy)
Planimetric and topographic maps (accuracy varies as a function
of map scale, approximate accuracy between several meters to
40 meters or more)
Digital orthorectified images (X and Y coordinates can be
collected to an accuracy dependent on the resolution of the
orthorectified image)
DEMs (for the collection of vertical GCPs having Z coordinates
associated with them, where accuracy is dependent on the
resolution of the DEM and the accuracy of the input DEM)
When imagery or photography is exposed, GCPs are recorded and
subsequently displayed on the photography or imagery. During GCP
measurement in IMAGINE LPS Project Manager, the image positions
of GCPs appearing on an image or on the overlap areas of the images
are collected.
It is highly recommended that a greater number of GCPs be available
than are actually used in the block triangulation. Additional GCPs can
be used as check points to independently verify the overall quality
and accuracy of the block triangulation solution. A check point
analysis compares the photogrammetrically computed ground
coordinates of the check points to the original values. The result of
the analysis is an RMSE that defines the degree of correspondence
between the computed values and the original values. Lower RMSE
values indicate better results.
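The check point RMSE can be computed directly from the two coordinate sets; the helper below is an illustrative sketch only.

```python
import numpy as np

def check_point_rmse(computed_xyz, reference_xyz):
    """RMSE per coordinate between photogrammetrically computed check points and
    their independently surveyed values (both arrays of shape [n_points, 3])."""
    diff = np.asarray(computed_xyz, dtype=float) - np.asarray(reference_xyz, dtype=float)
    return np.sqrt(np.mean(diff ** 2, axis=0))   # (RMSE_X, RMSE_Y, RMSE_Z)
```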
GCP Requirements The minimum GCP requirements for an accurate mapping project
vary with respect to the size of the project. With respect to
establishing a relationship between image space and ground space,
the theoretical minimum number of GCPs is two GCPs having X, Y,
and Z coordinates and one GCP having a Z coordinate associated
with it. This is a total of seven observations.
In establishing the mathematical relationship between image space
and object space, seven parameters defining the relationship must
be determined. The seven parameters include a scale factor
(describing the scale difference between image space and ground
space); X, Y, Z (defining the positional differences between image
space and object space); and three rotation angles (omega, phi, and
kappa) that define the rotational relationship between image space
and ground space.
In order to compute a unique solution, at least seven known
parameters must be available. In using the two X, Y, Z GCPs and one
vertical (Z) GCP, the relationship can be defined. However, to
increase the accuracy of a mapping project, using more GCPs is
highly recommended.
The following descriptions are provided for various projects:
Processing One Image
If processing one image for the purpose of orthorectification (i.e., a
single frame orthorectification), the minimum number of GCPs
required is three. Each GCP must have an X, Y, and Z coordinate
associated with it. The GCPs should be evenly distributed to ensure
that the camera/sensor is accurately modeled.
Processing a Strip of Images
If processing a strip of adjacent images, two GCPs for every third
image is recommended. To increase the quality of orthorectification,
measuring three GCPs at the corner edges of a strip is advantageous.
Thus, during block triangulation a stronger geometry can be
enforced in areas where there is less redundancy such as the corner
edges of a strip or a block.
Figure 122 illustrates the GCP configuration for a strip of images
having 60% overlap. The triangles represent the GCPs. Thus, the
image positions of the GCPs are measured on the overlap areas of
the imagery.
Figure 122: GCP Configuration
Processing Multiple
Strips of Imagery
Figure 123 depicts the standard GCP configuration for a block of
images, comprising four strips of images, each containing eight
overlapping images.
Figure 123: GCPs in a Block of Images
In this case, the GCPs form a strong geometric network of
observations. As a general rule, it is advantageous to have at least
one GCP on every third image of a block. Additionally, whenever
possible, locate GCPs that lie on multiple images, around the outside
edges of a block, and at certain distances from one another within
the block.
Tie Points A tie point is a point that has ground coordinates that are not known,
but is visually recognizable in the overlap area between two or more
images. The corresponding image positions of tie points appearing
on the overlap areas of multiple images are identified and measured.
Ground coordinates for tie points are computed during block
triangulation. Tie points can be measured both manually and
automatically.
Tie points should be visually well-defined in all images. Ideally, they
should show good contrast in two directions, like the corner of a
building or a road intersection. Tie points should also be well
distributed over the area of the block. Typically, nine tie points in
each image are adequate for block triangulation. Figure 124 depicts
the placement of tie points.
Figure 124: Point Distribution for Triangulation
In a block of images with 60% overlap and 25-30% sidelap, nine
points are sufficient to tie together the block as well as individual
strips (see Figure 125).
Figure 125: Tie Points in a Block
Automatic Tie Point
Collection
Selecting and measuring tie points is very time-consuming and
costly. Therefore, in recent years, one of the major focal points of
research and development in photogrammetry has concentrated on
the automated triangulation where the automatic tie point collection
is the main issue.
The other part of the automated triangulation is the automatic
control point identification, which is still unsolved due to the
complexity of the problem. There are several valuable research
results available for automated triangulation (e.g., Agouris and
Schenk, 1996; Heipke, 1996; Krzystek, 1998; Mayr, 1995; Schenk,
1997; Tang et al, 1997; Tsingas, 1995; Wang, Y., 1998b).
After investigating the advantages and the weaknesses of the
existing methods, IMAGINE LPS Project Manager was designed to
incorporate an advanced method for automatic tie point collection. It
is designed to work with a variety of digital images such as aerial
images, satellite images, digital camera images, and close range
images. It also supports the processing of multiple strips including
adjacent, diagonal, and cross-strips.
Automatic tie point collection within IMAGINE LPS Project Manager
successfully performs the following tasks:
Automatic block configuration. Based on the initial input
requirements, IMAGINE LPS Project Manager automatically
detects the relationship of the block with respect to image
adjacency.
Automatic tie point extraction. The feature point extraction
algorithms are used here to extract the candidates of tie points.
Point transfer. Feature points appearing on multiple images are
automatically matched and identified.
Gross error detection. Erroneous points are automatically
identified and removed from the solution.
Tie point selection. The intended number of tie points defined by
you is automatically selected as the final number of tie points.
The image matching strategies incorporated in IMAGINE LPS Project
Manager for automatic tie point collection include the coarse-to-fine
matching; feature-based matching with geometrical and topological
constraints, which is simplified from the structural matching
algorithm (Wang, Y., 1998b); and least squares matching for the
high accuracy of tie points.
Image Matching
Techniques
Image matching refers to the automatic identification and
measurement of corresponding image points that are located on the
overlapping area of multiple images. The various image matching
methods can be divided into three categories including:
Area based matching
Feature based matching
Relation based matching
Area Based Matching Area based matching is also called signal based matching. This
method determines the correspondence between two image areas
according to the similarity of their gray level values. The cross
correlation and least squares correlation techniques are well-known
methods for area based matching.
Correlation Windows
Area based matching uses correlation windows. These windows
consist of a local neighborhood of pixels. One example of correlation
windows is square neighborhoods (for example, 3 × 3, 5 × 5, 7 × 7
pixels). In practice, the windows vary in shape and dimension based
on the matching technique. Area correlation uses the characteristics
of these windows to match ground feature locations in one image to
ground features on the other.
A reference window is the source window on the first image, which
remains at a constant location. Its dimensions are usually square in
size (for example, 3 × 3, 5 × 5, and so on). Search windows are
candidate windows on the second image that are evaluated relative
to the reference window. During correlation, many different search
windows are examined until a location is found that best matches the
reference window.
Correlation Calculations
Two correlation calculations are described below: cross correlation
and least squares correlation. Most area based matching
calculations, including these methods, normalize the correlation
windows. Therefore, it is not necessary to balance the contrast or
brightness prior to running correlation. Cross correlation is more
robust in that it requires a less accurate a priori position than least
squares. However, its precision is limited to one pixel. Least squares
correlation can achieve precision levels of one-tenth of a pixel, but
requires an a priori position that is accurate to about two pixels. In
practice, cross correlation is often followed by least squares for high
accuracy.
Cross Correlation
Cross correlation computes the correlation coefficient of the gray
values between the template window and the search window
according to the following equation:

\[
\rho = \frac{\displaystyle\sum_{i,j}\bigl[g_1(c_1,r_1)-\overline{g_1}\bigr]\bigl[g_2(c_2,r_2)-\overline{g_2}\bigr]}{\sqrt{\displaystyle\sum_{i,j}\bigl[g_1(c_1,r_1)-\overline{g_1}\bigr]^2\;\sum_{i,j}\bigl[g_2(c_2,r_2)-\overline{g_2}\bigr]^2}}
\]

with

\[
\overline{g_1} = \frac{1}{n}\sum_{i,j} g_1(c_1,r_1) \qquad\qquad \overline{g_2} = \frac{1}{n}\sum_{i,j} g_2(c_2,r_2)
\]
Where:
ρ = the correlation coefficient
g(c,r) = the gray value of the pixel (c,r)
c1, r1 = the pixel coordinates on the left image
c2, r2 = the pixel coordinates on the right image
n = the total number of pixels in the window
i, j = pixel index into the correlation window
When using the area based cross correlation, it is necessary to have
a good initial position for the two correlation windows. If the exterior
orientation parameters of the images being matched are known, a
good initial position can be determined. Also, if the contrast in the
windows is very poor, the correlation can fail.
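A minimal implementation of the correlation coefficient above might look as follows (an illustrative sketch; extracting the windows and looping over candidate search positions are omitted, and the names are assumptions).

```python
import numpy as np

def correlation_coefficient(reference, search):
    """Normalized cross correlation between two equally sized gray-value windows."""
    g1 = reference.astype(float).ravel()
    g2 = search.astype(float).ravel()
    g1 -= g1.mean()
    g2 -= g2.mean()
    denom = np.sqrt(np.sum(g1 ** 2) * np.sum(g2 ** 2))
    if denom == 0.0:        # zero-contrast window: correlation is undefined
        return 0.0
    return float(np.sum(g1 * g2) / denom)
```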
Least Squares Correlation
Least squares correlation uses the least squares estimation to derive
parameters that best fit a search window to a reference window. This
technique has been investigated thoroughly in photogrammetry
(Ackermann, 1983; Grün and Baltsavias, 1988; Helava, 1988). It
accounts for both gray scale and geometric differences, making it
especially useful when ground features on one image look somewhat
different on the other image (differences which occur when the
surface terrain is quite steep or when the viewing angles are quite
different).
Least squares correlation is iterative. The parameters calculated
during the initial pass are used in the calculation of the second pass
and so on, until an optimum solution is determined. Least squares
matching can result in high positional accuracy (about 0.1 pixels).
However, it is sensitive to initial approximations. The initial
coordinates for the search window prior to correlation must be
accurate to about two pixels or better.
When least squares correlation fits a search window to the reference
window, both radiometric (pixel gray values) and geometric
(location, size, and shape of the search window) transformations are
calculated.
For example, suppose the change in gray values between two
correlation windows is represented as a linear relationship. Also
assume that the change in the window's geometry is represented by
an affine transformation.
\[
g_2(c_2, r_2) = h_0 + h_1\, g_1(c_1, r_1)
\]

\[
c_2 = a_0 + a_1 c_1 + a_2 r_1
\]

\[
r_2 = b_0 + b_1 c_1 + b_2 r_1
\]
Where:
c1, r1 = the pixel coordinate in the reference window
c2, r2 = the pixel coordinate in the search window
g1(c1, r1) = the gray value of pixel (c1, r1)
g2(c2, r2) = the gray value of pixel (c2, r2)
h0, h1 = linear gray value transformation parameters
a0, a1, a2 = affine geometric transformation parameters
b0, b1, b2 = affine geometric transformation parameters
Based on this assumption, the error equation for each pixel is
derived, as shown in the following equation:

\[
v = (a_1 + a_2 c_1 + a_3 r_1)\,g_c + (b_1 + b_2 c_1 + b_3 r_1)\,g_r - h_1 - h_2\, g_1(c_1, r_1) + \Delta g
\]

\[
\text{with} \quad \Delta g = g_2(c_2, r_2) - g_1(c_1, r_1)
\]

Where g_c and g_r are the gradients of g_2(c_2, r_2).
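One way to prototype this is to estimate the affine and radiometric parameters with a general-purpose nonlinear least squares solver, as sketched below with scipy; production implementations linearize the error equation above and iterate explicitly. All names, the window size, and the parameterization are illustrative assumptions, not the LPS code.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import least_squares

def lsm_refine(reference, search, c0, r0, half=10):
    """Refine a match by least squares: reference is a (2*half+1) x (2*half+1)
    gray-value window; (c0, r0) is the approximate matching position in the search image."""
    cc, rr = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    g1 = reference.astype(float).ravel()

    def residuals(p):
        a0, a1, a2, b0, b1, b2, h0, h1 = p
        c2 = c0 + a0 + a1 * cc + a2 * rr          # affine geometric transformation
        r2 = r0 + b0 + b1 * cc + b2 * rr
        g2 = map_coordinates(search.astype(float),
                             [r2.ravel(), c2.ravel()], order=1)
        return g2 - (h0 + h1 * g1)                # linear radiometric model residual

    p0 = np.array([0, 1, 0, 0, 0, 1, 0.0, 1.0])   # start from the identity transform
    fit = least_squares(residuals, p0)
    return c0 + fit.x[0], r0 + fit.x[3], fit      # refined column and row of the match
```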
Feature Based Matching Feature based matching determines the correspondence between
two image features. Most feature based techniques match extracted
point features (this is called feature point matching), as opposed to
other features, such as lines or complex objects. The feature points
are also commonly referred to as interest points. Poor contrast areas
can be avoided with feature based matching.
In order to implement feature based matching, the image features
must initially be extracted. There are several well-known operators
for feature point extraction. Examples include the Moravec Operator,
the Dreschler Operator, and the Förstner Operator (Förstner and
Gülch, 1987; Lü, 1988).
After the features are extracted, the attributes of the features are
compared between two images. The feature pair having the
attributes with the best fit is recognized as a match. IMAGINE LPS
Project Manager utilizes the Förstner interest operator to extract
feature points.
Relation Based Matching Relation based matching is also called structural matching
(Vosselman and Haala, 1992; Wang, Y., 1994; and Wang, Y., 1995).
This kind of matching technique uses the image features and the
relationship between the features. With relation based matching, the
corresponding image structures can be recognized automatically,
without any a priori information. However, the process is time-
consuming since it deals with varying types of information. Relation
based matching can also be applied for the automatic recognition of
control points.
Image Pyramid Because of the large amount of image data, the image pyramid is
usually adopted during the image matching techniques to reduce the
computation time and to increase the matching reliability. The
pyramid is a data structure consisting of the same image
represented several times, at a decreasing spatial resolution each
time. Each level of the pyramid contains the image at a particular
resolution.
The matching process is performed at each level of resolution. The
search is first performed at the lowest resolution level and
subsequently at each higher level of resolution. Figure 126 shows a
four-level image pyramid.
Figure 126: Image Pyramid for Matching at Coarse to Full Resolution (in the example shown, matching begins on Level 4, 64 × 64 pixels at 1:8 resolution, continues through Level 3, 128 × 128 pixels at 1:4, and Level 2, 256 × 256 pixels at 1:2, and finishes on Level 1, 512 × 512 pixels at full 1:1 resolution)
There are different resampling methods available for generating an
image pyramid. Theoretical and practical investigations show that
the resampling methods based on the Gaussian filter, which are
approximated by a binomial filter, have the superior properties
concerning preserving the image contents and reducing the
computation time (Wang, Y., 1994). Therefore, IMAGINE LPS Project
Manager uses this kind of pyramid layer instead of those currently
available under ERDAS IMAGINE, which are overwritten
automatically by LPS Project Manager.
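Such a pyramid can be generated by repeatedly smoothing with the binomial kernel [1, 2, 1]/4, a discrete approximation to a Gaussian, and decimating by a factor of two. The sketch below illustrates the idea with assumed names and padding behavior; it is not the actual LPS pyramid-layer code.

```python
import numpy as np

def binomial_reduce(image):
    """Smooth with the separable binomial kernel [1, 2, 1] / 4 and downsample by two."""
    k0, k1, k2 = 0.25, 0.5, 0.25
    padded = np.pad(image.astype(float), 1, mode='edge')
    rows = k0 * padded[:-2, 1:-1] + k1 * padded[1:-1, 1:-1] + k2 * padded[2:, 1:-1]
    rows = np.pad(rows, ((0, 0), (1, 1)), mode='edge')
    smooth = k0 * rows[:, :-2] + k1 * rows[:, 1:-1] + k2 * rows[:, 2:]
    return smooth[::2, ::2]

def build_pyramid(image, levels=4):
    """Level 1 is the full-resolution image; each higher level halves the resolution."""
    pyramid = [np.asarray(image)]
    for _ in range(levels - 1):
        pyramid.append(binomial_reduce(pyramid[-1]))
    return pyramid   # matching runs from pyramid[-1] (coarsest) down to pyramid[0]
```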
Satellite
Photogrammetry
Satellite photogrammetry has slight variations compared to
photogrammetric applications associated with aerial frame cameras.
This document makes reference to the SPOT and IRS-1C satellites.
The SPOT satellite provides 10-meter panchromatic imagery and 20-
meter multispectral imagery (four multispectral bands of
information).
The SPOT satellite carries two high resolution visible (HRV) sensors,
each of which is a pushbroom scanner that takes a sequence of line
images while the satellite circles the Earth. The focal length of the
camera optic is 1084 mm, which is very large relative to the length
of the camera (78 mm). The field of view is 4.1 degrees. The satellite
orbit is circular, North-South and South-North, about 830 km above
the Earth, and sun-synchronous. A sun-synchronous orbit is one in
which the orbital plane keeps a constant orientation relative to the Sun, so the satellite crosses a given latitude at the same local solar time on each pass.
The Indian Remote Sensing (IRS-1C) satellite utilizes a pushbroom
sensor consisting of three individual CCDs. The ground resolution of
the imagery ranges between 5 to 6 meters. The focal length of the
optic is approximately 982 mm. The pixel size of the CCD is 7
microns. The images captured from the three CCDs are processed
independently or merged into one image and system corrected to
account for the systematic error associated with the sensor.
Both the SPOT and IRS-1C satellites collect imagery by scanning
along a line. This line is referred to as the scan line. For each line
scanned within the SPOT and IRS-1C sensors, there is a unique
perspective center and a unique set of rotation angles. The location
of the perspective center relative to the line scanner is constant for
each line (interior orientation and focal length). Since the motion of
the satellite is smooth and practically linear over the length of a
scene, the perspective centers of all scan lines of a scene are
assumed to lie along a smooth line. Figure 127 illustrates the
scanning technique.
Figure 127: Perspective Centers of SPOT Scan Lines
The satellite exposure station is defined as the perspective center in
ground coordinates for the center scan line. The image captured by
the satellite is called a scene. For example, a SPOT Pan 1A scene is
composed of 6000 lines. For SPOT Pan 1A imagery, each of these
lines consists of 6000 pixels. Each line is exposed for 1.5
milliseconds, so it takes 9 seconds to scan the entire scene. (A scene
from SPOT XS 1A is composed of only 3000 lines and 3000 columns
and has 20-meter pixels, while Pan has 10-meter pixels.)
NOTE: The following section addresses only the 10 meter SPOT Pan
scenario.
A pixel in the SPOT image records the light detected by one of the
6000 light sensitive elements in the camera. Each pixel is defined by
file coordinates (column and row numbers). The physical dimension
of a single, light-sensitive element is 13 × 13 microns. This is the
pixel size in image coordinates. The center of the scene is the center
pixel of the center scan line. It is the origin of the image coordinate
system. Figure 128 depicts image coordinates in a satellite scene:
Figure 128: Image Coordinates in a Satellite Scene
Where:
A = origin of file coordinates
A-X_F, A-Y_F = file coordinate axes
C = origin of image coordinates (center of scene)
C-x, C-y = image coordinate axes
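Converting between file coordinates and image coordinates for such a scene is a simple shift and scale; the sketch below assumes the 6000 × 6000 SPOT Pan geometry and the 13-micron element size described above, with illustrative names.

```python
def file_to_image(column, row, n_cols=6000, n_rows=6000, pixel_size_mm=0.013):
    """Convert file coordinates (column, row; origin A at the upper-left corner)
    to image coordinates (x, y in mm; origin C at the center of the scene)."""
    x = (column - (n_cols - 1) / 2.0) * pixel_size_mm
    y = ((n_rows - 1) / 2.0 - row) * pixel_size_mm   # image y axis points up
    return x, y

print(file_to_image(2999.5, 2999.5))   # the scene center maps to (0.0, 0.0)
```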
SPOT Interior Orientation Figure 129 shows the interior orientation of a satellite scene. The
transformation between file coordinates and image coordinates is
constant.
Figure 129: Interior Orientation of a SPOT Scene
For each scan line, a separate bundle of light rays is defined, where:
P_k = image point
x_k = x value of image coordinates for scan line k
f = focal length of the camera
O_k = perspective center for scan line k, aligned along the orbit
PP_k = principal point for scan line k
l_k = light rays for scan line, bundled at perspective center O_k
SPOT Exterior Orientation SPOT satellite geometry is stable and the sensor parameters, such
as focal length, are well-known. However, the triangulation of SPOT
scenes is somewhat unstable because of the narrow, almost parallel
bundles of light rays.
Ephemeris data for the orbit are available in the header file of SPOT
scenes. They give the satellite's position in three-dimensional,
geocentric coordinates at 60-second increments. The velocity vector
and some rotational velocities relating to the attitude of the camera
are given, as well as the exact time of the center scan line of the
scene. The header of the data file of a SPOT scene contains
ephemeris data, which provides information about the recording of
the data and the satellite orbit.
Ephemeris data that can be used in satellite triangulation include:
Position of the satellite in geocentric coordinates (with the origin
at the center of the Earth) to the nearest second
Velocity vector, which is the direction of the satellite's travel
Attitude changes of the camera
Time of exposure (exact) of the center scan line of the scene
The geocentric coordinates included with the ephemeris data are
converted to a local ground system for use in triangulation. The
center of a satellite scene is interpolated from the header data.
Light rays in a bundle defined by the SPOT sensor are almost
parallel, lessening the importance of the satellite's position. Instead,
the inclination angles (incidence angles) of the cameras on board the
satellite become the critical data.
The scanner can produce a nadir view. Nadir is the point directly
below the camera. SPOT has off-nadir viewing capability. Off-nadir
refers to any point that is not directly beneath the satellite, but is off
to an angle (i.e., East or West of the nadir).
A stereo scene is achieved when two images of the same area are
acquired on different days from different orbits, one taken East of
the other. For this to occur, there must be significant differences in
the inclination angles.
Inclination is the angle between a vertical on the ground at the
center of the scene and a light ray from the exposure station. This
angle defines the degree of off-nadir viewing when the scene was
recorded. The cameras can be tilted in increments of a minimum of
0.6 to a maximum of 27 degrees to the East (negative inclination) or
West (positive inclination). Figure 130 illustrates the inclination.
Figure 130: Inclination of a Satellite Stereo-Scene (View from
North to South)
Where:
C = center of the scene
I- = eastward inclination
I+ = westward inclination
O_1, O_2 = exposure stations (perspective centers of imagery)
The orientation angle of a satellite scene is the angle between a
perpendicular to the center scan line and the North direction. The
spatial motion of the satellite is described by the velocity vector. The
real motion of the satellite above the ground is further distorted by
the Earth's rotation.
The velocity vector of a satellite is the satellite's velocity if measured
as a vector through a point on the spheroid. It provides a technique
to represent the satellite's speed as if the imaged area were flat
instead of being a curved surface (see Figure 131).
Figure 131: Velocity Vector and Orientation Angle of a Single
Scene
Where:
O = orientation angle
C = center of the scene
V = velocity vector
Satellite block triangulation provides a model for calculating the
spatial relationship between a satellite sensor and the ground
coordinate system for each line of data. This relationship is
expressed as the exterior orientation, which consists of
the perspective center of the center scan line (i.e., X, Y, and Z),
the change of perspective centers along the orbit,
the three rotations of the center scan line (i.e., omega, phi, and
kappa), and
the changes of angles along the orbit.
In addition to fitting the bundle of light rays to the known points,
satellite block triangulation also accounts for the motion of the
satellite by determining the relationship of the perspective centers
and rotation angles of the scan lines. It is assumed that the satellite
travels in a smooth motion as a scene is being scanned. Therefore,
once the exterior orientation of the center scan line is determined,
the exterior orientation of any other scan line is calculated based on
the distance of that scan line from the center, and the changes of the
perspective center location and rotation angles.
Bundle adjustment for triangulating a satellite scene is similar to the
bundle adjustment used for aerial images. A least squares
adjustment is used to derive a set of parameters that comes the
closest to fitting the control points to their known ground
coordinates, and to intersecting tie points.
The resulting parameters of satellite bundle adjustment are:
Ground coordinates of the perspective center of the center scan
line
Rotation angles for the center scan line
Coefficients, from which the perspective center and rotation
angles of all other scan lines are calculated
Ground coordinates of all tie points
Collinearity Equations
and Satellite Block
Triangulation
Modified collinearity equations are used to compute the exterior
orientation parameters associated with the respective scan lines in
the satellite scenes. Each scan line has a unique perspective center
and individual rotation angles. When the satellite moves from one
scan line to the next, these parameters change. Due to the smooth
motion of the satellite in orbit, the changes are small and can be
modeled by low order polynomial functions.
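In other words, the exterior orientation of an arbitrary scan line can be evaluated from the center scan line values plus small polynomial corrections in the along-orbit distance. The sketch below assumes a simple per-parameter polynomial model with illustrative names; it is not the actual triangulation model.

```python
import numpy as np

def scan_line_exterior_orientation(center_eo, poly_coeffs, line, center_line):
    """Exterior orientation of an arbitrary scan line.

    center_eo   : (X, Y, Z, omega, phi, kappa) of the center scan line
    poly_coeffs : dict mapping a parameter name to polynomial coefficients
                  (constant term first) for its change along the orbit
    """
    d = line - center_line                       # distance from the center scan line
    names = ('X', 'Y', 'Z', 'omega', 'phi', 'kappa')
    adjusted = []
    for value, name in zip(center_eo, names):
        coeffs = np.asarray(poly_coeffs.get(name, [0.0]), dtype=float)
        correction = sum(c * d ** i for i, c in enumerate(coeffs))
        adjusted.append(value + correction)
    return tuple(adjusted)
```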
Control for Satellite Block Triangulation
Both GCPs and tie points can be used for satellite block triangulation
of a stereo scene. For triangulating a single scene, only GCPs are
used. In this case, space resection techniques are used to compute
the exterior orientation parameters associated with the satellite as
they existed at the time of image capture. A minimum of six GCPs is
necessary. Ten or more GCPs are recommended to obtain a good
triangulation result.
The best locations for GCPs in the scene are shown below in Figure
132.
Figure 132: Ideal Point Distribution Over a Satellite Scene for
Triangulation
Orthorectification As stated previously, orthorectification is the process of removing
geometric errors inherent within photography and imagery. The
variables contributing to geometric errors include, but are not limited
to:
Camera and sensor orientation
Systematic error associated with the camera or sensor
Topographic relief displacement
Earth curvature
By performing block triangulation or single frame resection, the
parameters associated with camera and sensor orientation are
defined. Utilizing least squares adjustment techniques during block
triangulation minimizes the errors associated with camera or sensor
instability. Additionally, the use of self-calibrating bundle adjustment
(SCBA) techniques along with Additional Parameter (AP) modeling
accounts for the systematic errors associated with camera interior
geometry. The effects of the Earth's curvature are significant if a
large photo block or satellite imagery is involved. They are
accounted for during the block triangulation procedure by setting the
relevant option. The effects of topographic relief displacement are
accounted for by utilizing a DEM during the orthorectification
procedure.
The orthorectification process takes the raw digital imagery and
applies a DEM and triangulation results to create an orthorectified
image. Once an orthorectified image is created, each pixel within the
image possesses geometric fidelity. Thus, measurements taken off
an orthorectified image represent the corresponding measurements
as if they were taken on the Earth's surface (see Figure 133).
Figure 133: Orthorectification
An image or photograph with an orthographic projection is one for
which every point looks as if an observer were looking straight down
at it, along a line of sight that is orthogonal (perpendicular) to the
Earth. The resulting orthorectified image is known as a digital
orthoimage (see Figure 134).
Relief displacement is corrected by taking each pixel of a DEM and
finding the equivalent position in the satellite or aerial image. A
brightness value is determined for this location based on resampling
of the surrounding pixels. The brightness value, elevation, and
exterior orientation information are used to calculate the equivalent
location in the orthoimage file.
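Conceptually, each orthoimage cell is filled by projecting its ground position, at the DEM elevation, back into the source image through the collinearity equations and resampling there. The sketch below is a simplified frame-camera illustration (nearest neighbor resampling, principal point offset omitted, and all names, including the mm_to_pixel interior orientation callback, are assumptions rather than product code).

```python
import numpy as np

def ground_to_image(ground_xyz, perspective_center, M, f):
    """Collinearity projection of a ground point into image coordinates (mm).
    M is a 3x3 ground-to-image rotation matrix; the principal point offset is omitted."""
    d = np.asarray(ground_xyz, dtype=float) - np.asarray(perspective_center, dtype=float)
    u = M @ d
    return -f * u[0] / u[2], -f * u[1] / u[2]

def orthorectify_cell(image, ground_x, ground_y, dem_elevation,
                      perspective_center, M, f, mm_to_pixel):
    """Brightness value for one orthoimage cell: project (X, Y, Z) into the source
    image and resample (nearest neighbor here; bilinear or cubic could be used)."""
    x_mm, y_mm = ground_to_image((ground_x, ground_y, dem_elevation),
                                 perspective_center, M, f)
    col, row = mm_to_pixel(x_mm, y_mm)        # interior orientation: mm -> pixel
    row, col = int(round(row)), int(round(col))
    if 0 <= row < image.shape[0] and 0 <= col < image.shape[1]:
        return image[row, col]
    return 0                                   # cell falls outside the source image
```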
Figure 134: Digital Orthophoto - Finding Gray Values
Where:
P = ground point
P_1 = image point
O = perspective center (origin)
X,Z = ground coordinates (in DTM file)
f = focal length
In contrast to conventional rectification techniques,
orthorectification relies on the digital elevation data, unless the
terrain is flat. Various sources of elevation data exist, such as the
USGS DEM and a DEM automatically created from stereo image
pairs. They are subject to data uncertainty, due in part to the
generalization or imperfections in the creation process. The quality
of the digital orthoimage is significantly affected by this uncertainty.
For different image data, different accuracy levels of DEMs are
required to limit the uncertainty-related errors within a controlled
limit. While the near-vertical viewing SPOT scene can use very
coarse DEMs, images with large incidence angles need better
elevation data such as USGS level-1 DEMs. For aerial photographs
with a scale larger than 1:60000, elevation data accurate to 1 meter
is recommended. The 1 meter accuracy reflects the accuracy of the
Z coordinates in the DEM, not the DEM resolution or posting.
Detailed discussion of DEM requirements for orthorectification
can be found in Yang and Williams (Yang and Williams, 1997).
See Bibliography.
Resampling methods used are nearest neighbor, bilinear
interpolation, and cubic convolution. Generally, when the cell sizes
of orthoimage pixels are selected, they should be similar or larger
than the cell sizes of the original image. For example, if the image
was scanned at 25 microns (1016 dpi) producing an image of 9K × 9K
pixels, one pixel would represent 0.025 mm on the image.
Assuming that the image scale of this photo is 1:40000, then the cell
size on the ground is about 1 m. For the orthoimage, it is appropriate
to choose a pixel spacing of 1 m or larger. Choosing a smaller pixel
size oversamples the original image.
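The ground cell size in that example follows directly from the scanning resolution and the photo scale; the following is an illustrative arithmetic check.

```python
scan_resolution_mm = 0.025          # 25 microns per pixel on the film
photo_scale = 40000                 # 1:40000 photography
ground_cell_m = scan_resolution_mm * photo_scale / 1000.0
print(ground_cell_m)                # 1.0 m, so a 1 m or larger orthoimage cell is appropriate
```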
For information, see the scanning resolutions table, Table 47.
For SPOT Pan images, a cell size of 10 meters is appropriate. Any
further enlargement from the original scene to the orthophoto does
not improve the image detail. For IRS-1C images, a cell size of 6
meters is appropriate.
Radar Concepts
Introduction Radar images are quite different from other remotely sensed
imagery you might use with ERDAS IMAGINE software. For example,
radar images may have speckle noise. Radar images, do, however,
contain a great deal of information. ERDAS IMAGINE has many radar
packages, including IMAGINE Radar Interpreter, IMAGINE
OrthoRadar, IMAGINE StereoSAR DEM, IMAGINE IFSAR DEM, and
the Generic SAR Node with which you can analyze your radar
imagery.You have already learned about the various methods of
speckle suppressionthose are IMAGINE Radar Interpreter
functions.
This chapter tells you about the advanced radar processing packages
that ERDAS IMAGINE has to offer. The following sections go into
detail about the geometry and functionality of those modules of the
IMAGINE Radar Mapping Suite.
IMAGINE
OrthoRadar
Theory
Parameters Required for
Orthorectification
SAR image orthorectification requires certain information about the
sensor and the SAR image. Different sensors (RADARSAT, ERS, etc.)
express these parameters in different ways and in different units. To
simplify the design of our SAR tools and easily support future
sensors, all SAR images and sensors are described using our Generic
SAR model. The sensor-specific parameters are converted to a
Generic SAR model on import.
The following table lists the parameters of the Generic SAR model
and their units. These parameters can be viewed in the SAR
Parameters tab on the main Generic SAR Model Properties (IMAGINE
OrthoRadar) dialog.
Table 48: SAR Parameters Required for Orthorectification
(each entry lists Parameter: Description [Units])

sensor: The sensor that produced the image (RADARSAT, ERS, etc.)
coord_sys: Coordinate system for ephemeris (I = inertial, F = fixed body or Earth rotating)
year: Year of data collection
month: Month of data collection
day: Day of data collection
doy: GMT day of year of data collection
num_samples: Number of samples in each line in the image (= number of samples in range)
num_lines: Number of lines in the image (= number of lines in azimuth)
image_start_time: Image start time in seconds of day [sec]
first_pt_secs_of_day: Time of first ephemeris point provided in seconds of day; this parameter is updated during orbit adjustment [sec]
first_pt_org: The original time of the first ephemeris point provided in seconds of day
time_interval: Time interval between ephemeris points [sec]
time_interval_org: Time interval between ephemeris points; this parameter is updated during orbit adjustment [sec]
image_duration: Slow time duration of image [sec]
image_end_time: Same as image_start_time + image_duration [sec]
semimajor: Semimajor axis of Earth model used during SAR processing [m]
semiminor: Semiminor axis of Earth model used during SAR processing [m]
target_height: Assumed height of scene above Earth model used during SAR processing [m]
look_side: Side to which sensor is pointed (left or right); sometimes called sensor clock angle, where -90 deg is left-looking and 90 deg is right-looking [deg]
wavelength: Wavelength of sensor [m]
sampling_rate: Range sampling rate [Hz]
range_pix_spacing: Slant or ground range pixel spacing [m]
near_slant_range: Slant range to near range pixel [m]
num_pos_pts: Number of ephemeris points provided
projection: Slant or ground range projection
gnd2slt_coeffs[6]: Coefficients used in polynomial transform from ground to slant range; used when the image is in ground range
time_dir_pixels: Time direction in the pixel (range) direction
time_dir_lines: Time direction in the line (azimuth) direction
rsx, rsy, rsz: Array of spacecraft positions (x, y, z) in an Earth Fixed Body coordinate system [m]
vsx, vsy, vsz: Array of spacecraft velocities (x, y, z) in an Earth Fixed Body coordinate system [m/sec]
orbitState: Flag indicating if the orbit has been adjusted
rs_coeffs[9]: Coefficients used to model the sensor orbit positions as a function of time
vs_coeffs[9]: Coefficients used to model the sensor orbit velocities as a function of time
sub_unity_subset: Flag indicating that the entire image is present
sub_range_start: Indicates the starting range sample of the current raster relative to the original image
sub_range_end: Indicates the ending range sample of the current raster relative to the original image
sub_range_degrade: Indicates the range sample degrade factor of the current raster relative to the original image
sub_range_num_samples: Indicates the range number of samples of the current raster
sub_azimuth_start: Indicates the starting azimuth line of the current raster relative to the original image
sub_azimuth_end: Indicates the ending azimuth line of the current raster relative to the original image
sub_azimuth_degrade: Indicates the azimuth degrade factor of the current raster relative to the original image
sub_azimuth_num_lines: Indicates the azimuth number of lines
Algorithm Description
Overview
The orthorectification process consists of several steps:
ephemeris modeling and refinement (if GCPs are provided)
sparse mapping grid generation
output formation (including terrain corrections)
Each of these steps is described in detail in the following sections.
Ephemeris Coordinate System
The positions and velocities of the spacecraft are internally assumed
to be in an Earth Fixed Body coordinate system. If the ephemeris are
provided in inertial coordinate system, IMAGINE OrthoRadar
converts them from inertial to Earth Fixed Body coordinates.
The Earth Fixed Body coordinate system is an Earth-centered
Cartesian coordinate system that rotates with the Earth. The x-axis
radiates from the center of the Earth through the 0 longitude point
on the equator. The z-axis radiates from the center of the Earth
through the geographic North Pole. The y-axis completes the right-
handed Cartesian coordinate system.
Ephemeris Modeling
The platform ephemeris is described by three or more platform
locations and velocities. To predict the platform position and velocity
at some time (t):
\[
\begin{aligned}
R_{s,x} &= a_1 + a_2 t + a_3 t^2 \\
R_{s,y} &= b_1 + b_2 t + b_3 t^2 \\
R_{s,z} &= c_1 + c_2 t + c_3 t^2 \\
V_{s,x} &= d_1 + d_2 t + d_3 t^2 \\
V_{s,y} &= e_1 + e_2 t + e_3 t^2 \\
V_{s,z} &= f_1 + f_2 t + f_3 t^2
\end{aligned}
\]

Where R_s is the sensor position and V_s is the sensor velocity:

\[
R_s = [\,R_{s,x}\;\;R_{s,y}\;\;R_{s,z}\,]^T \qquad V_s = [\,V_{s,x}\;\;V_{s,y}\;\;V_{s,z}\,]^T
\]
To determine the model coefficients {a_i, b_i, c_i} and {d_i, e_i, f_i}, first
do some preprocessing. Select the best three consecutive data
points prior to fitting (if more than three points are available). The
best three data points must span the entire image in time. If more
than one set of three data points spans the image, then select the
set of three that has a center time closest to the center time of the
image.
Once a set of three consecutive data points is found, model the
ephemeris with an exact solution.
Form matrix A:

\[
A = \begin{bmatrix}
1.0 & t_1 & t_1^2 \\
1.0 & t_2 & t_2^2 \\
1.0 & t_3 & t_3^2
\end{bmatrix}
\]

Where t_1, t_2, and t_3 are the times associated with each platform position. Select t such that t = 0.0 corresponds to the time of the second position point. Form vector b:

\[
b = [\,R_{s,x}(1)\;\;R_{s,x}(2)\;\;R_{s,x}(3)\,]^T
\]

Where R_s,x(i) is the x-coordinate of the i-th platform position (i = 1:3). We wish to solve Ax = b where x is:

\[
x = [\,a_1\;\;a_2\;\;a_3\,]^T
\]

To do so, use LU decomposition. The process is repeated for R_s,y, R_s,z, V_s,x, V_s,y, and V_s,z.
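With exactly three ephemeris points the quadratic model has an exact solution; the sketch below solves the same 3 × 3 system for every position and velocity component using numpy (numpy.linalg.solve performs an LU-based solve internally). All names are illustrative, not the OrthoRadar implementation.

```python
import numpy as np

def fit_ephemeris(times, positions, velocities):
    """Exact quadratic fit of R_s and V_s through three consecutive ephemeris points.

    times      : three sample times; t = 0 is taken at the second point
    positions  : 3 x 3 array, one (x, y, z) platform position per row
    velocities : 3 x 3 array, one (x, y, z) platform velocity per row
    Returns 3 x 3 coefficient arrays: row 0 = constant, row 1 = linear, row 2 = quadratic
    terms for the x, y, and z components.
    """
    t = np.asarray(times, dtype=float) - times[1]
    A = np.column_stack([np.ones(3), t, t ** 2])        # the matrix A from the text
    pos_coeffs = np.linalg.solve(A, np.asarray(positions, dtype=float))
    vel_coeffs = np.linalg.solve(A, np.asarray(velocities, dtype=float))
    return pos_coeffs, vel_coeffs

def evaluate(coeffs, t):
    """Evaluate a fitted component model at time t (relative to the second point)."""
    return coeffs[0] + coeffs[1] * t + coeffs[2] * t ** 2
```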
SAR Imaging Model
Before discussing the ephemeris adjustment, it is important to
understand how to get from a pixel in the SAR image (as specified
by a range line and range pixel) to a target position on the Earth
[specified in Earth Centered System (ECS) coordinates or x, y, z].
This process is used throughout the ephemeris adjustment and the
orthorectification itself.
For each range line and range pixel in the SAR image, the
corresponding target location (R_t) is determined. The target location
can be described as (lat, lon, elev) or (x, y, z) in ECS. The target can
either lie on a smooth Earth ellipsoid or on a smooth Earth ellipsoid
plus an elevation model.
In either case, the location of R_t is determined by finding the
intersection of the Doppler cone, range sphere, and Earth model. In
order to do this, first find the Doppler centroid and slant range for a
given SAR image pixel.
Let i = range pixel and j = range line.
Time

Time T(j) is thus:

\[
T(j) = T(0) + \frac{j - 1}{N_a - 1}\, t_{dur}
\]

Where T(0) is the image start time, N_a is the number of range lines, and t_dur is the image duration time.

Doppler Centroid

The computation of the Doppler centroid f_D to use with the SAR imaging model depends on how the data was processed. If the data was deskewed, this value is always 0. If the data is skewed, then this value may be a nonzero constant or may vary with i.

Slant Range

The computation of the slant range to the pixel i depends on the projection of the image. If the data is in a slant range projection, then the computation of slant range is straightforward:

\[
R_{sl}(i) = r_{sl} + (i - 1)\, r_{sr}
\]

Where R_sl(i) is the slant range to pixel i, r_sl is the near slant range, and r_sr is the slant range pixel spacing.

If the projection is a ground range projection, then this computation is potentially more complicated and depends on how the data was originally projected into a ground range projection by the SAR processor.

Intersection of Doppler Cone, Range Sphere, and Earth Model

To find the location of the target R_t corresponding to a given range pixel and range line, the intersection of the Doppler cone, range sphere, and Earth model must be found. For an ellipsoid, these may be described as follows.

Doppler cone (f_D > 0 for forward squint):

\[
f_D = \frac{2}{\lambda\, R_{sl}}\,(R_s - R_t) \cdot (V_s - V_t)
\]

Range sphere:

\[
R_{sl} = \lvert R_s - R_t \rvert
\]

Earth model:

\[
\frac{R_t(x)^2 + R_t(y)^2}{(R_e + h_{targ})^2} + \frac{R_t(z)^2}{(R_m + h_{targ})^2} = 1
\]
Where R_s and V_s are the platform position and velocity respectively,
V_t is the target velocity (= 0 in this coordinate system), R_e is the
Earth semimajor axis, and R_m is the Earth semiminor axis. The
platform position and velocity vectors R_s and V_s can be found as a
function of time T(j) using the ephemeris equations developed
previously.
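Numerically, R_t can be found by driving the three conditions to zero with a general-purpose root finder; the sketch below expresses them as residual functions for scipy.optimize.fsolve, assuming deskewed data (zero Doppler centroid). All names and the exact parameterization are illustrative assumptions, not the OrthoRadar implementation.

```python
import numpy as np
from scipy.optimize import fsolve

def geolocation_residuals(Rt, Rs, Vs, slant_range, wavelength, Re, Rm, h_targ):
    """Residuals of the Doppler cone (deskewed data, so f_D = 0), the range sphere,
    and the Earth model, for a candidate target position Rt."""
    Rt = np.asarray(Rt, dtype=float)
    delta = Rs - Rt
    doppler = (2.0 / (wavelength * slant_range)) * np.dot(delta, Vs)   # V_t = 0
    range_residual = np.linalg.norm(delta) - slant_range
    earth_residual = ((Rt[0] ** 2 + Rt[1] ** 2) / (Re + h_targ) ** 2
                      + Rt[2] ** 2 / (Rm + h_targ) ** 2 - 1.0)
    return [doppler, range_residual, earth_residual]

def locate_target(Rs, Vs, slant_range, wavelength, Re, Rm, h_targ, initial_guess):
    """Solve for the target position for one image pixel."""
    return fsolve(geolocation_residuals, initial_guess,
                  args=(np.asarray(Rs, dtype=float), np.asarray(Vs, dtype=float),
                        slant_range, wavelength, Re, Rm, h_targ))
```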
Figure 135 graphically illustrates the solution for the target location
given the sensor ephemeris, doppler cone, range sphere, and flat
Earth model.
Figure 135: Doppler Cone
Ephemeris Adjustment
There are three possible adjustments that can be made: along track,
cross track, and radial. In IMAGINE OrthoRadar, the along track
adjustment is performed separately. The cross track and radial
adjustments are made simultaneously. These adjustments are made
using residuals associated with GCPs. Each GCP has a map
coordinate (such as lat, lon) and an elevation. Also, an SAR image
range line and range pixel must be given. The SAR image range line
and range pixel are converted to R_t using the method described previously (substituting h_targ = elevation of the GCP above the ellipsoid used in SAR processing).
The along track adjustment is computed first, followed by the cross
track and radial adjustments. The two adjustment steps are then
repeated.
For more information, consult SAR Geocoding: Data and
Systems, Gunter Schreier, Ed.
Orthorectification
The ultimate goal in orthorectification is to determine, for a given
target location on the ground, the associated range line and range
pixel from the input SAR image, including the effects of terrain.
To do this, there are several steps. First, take the target location and
locate the associated range line and range pixel from the input SAR
image assuming smooth terrain. This places you in approximately
the correct range line. Next, look up the elevation at the target from
the input DEM. The elevation, in combination with the known slant
range to the target, is used to determine the correct range pixel. The
data can now be interpolated from the input SAR image.
Sparse Mapping Grid
Select a block size M. For every Mth range line and Mth range pixel, compute R_t on a smooth ellipsoid (using the SAR Earth model), and save these values in an array. Smaller M implies less distortion between grid points.
Regardless of M and the total number of samples and lines in the input SAR image, always compute R_t at the end of every line and for the very last line. The spacing between points in the sparse mapping grid is regular except at the far edges of the grid.
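A sparse mapping grid of this kind can be sketched as follows. The helper line_pixel_to_target is assumed to return R_t for a given range line and range pixel (for example, a solver like the one sketched earlier); everything else here is an illustrative assumption, not ERDAS code.

    import numpy as np

    def sparse_mapping_grid(n_lines, n_pixels, M, line_pixel_to_target):
        """Compute R_t on the smooth ellipsoid for every Mth range line and
        pixel, always including the last pixel of every sampled line and the
        very last line, so the grid reaches the image edges."""
        lines = sorted(set(list(range(1, n_lines + 1, M)) + [n_lines]))
        pixels = sorted(set(list(range(1, n_pixels + 1, M)) + [n_pixels]))
        grid = np.zeros((len(lines), len(pixels), 3))
        for a, j in enumerate(lines):
            for b, i in enumerate(pixels):
                grid[a, b, :] = line_pixel_to_target(j, i)
        return np.array(lines), np.array(pixels), grid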
Output Formation
For each point in the output grid, there is an associated R_t. This target should fall on the surface of the Earth model used for SAR processing, so a conversion is made between the Earth model used for the output grid and the Earth model used during SAR processing.
The process of orthorectification starts with a location on the ground. The line and pixel location in the input SAR image that corresponds to this map location is determined from the map location and the sparse mapping grid. The value at this pixel location is then assigned to the map location.
Figure 136 illustrates this process.
Figure 136: Sparse Mapping and Output Grids
IMAGINE StereoSAR DEM Theory
Introduction
This chapter details the theory that supports IMAGINE StereoSAR DEM processing.
To understand the way IMAGINE StereoSAR DEM works to create
DEMs, it is first helpful to look at the process from beginning to end.
Figure 137 shows a stylized process for basic operation of the
IMAGINE StereoSAR DEM module.
Figure 137: IMAGINE StereoSAR DEM Process Flow
The following discussion includes algorithm descriptions as well as
discussion of various processing options and selection of processing
parameters.
(The Figure 137 flow is: Import of Image 1 and Image 2 → Registration using GCPs and tie points (affine, producing a coregistered Image 2) → Automatic Image Correlation → Parallax File → Range/Doppler Stereo Intersection → sensor-based DEM → Resample and Reproject → Digital Elevation Model.)
Input
There are many elements to consider in the Input step. These
include beam mode selection, importing files, orbit correction, and
ephemeris data.
Beam Mode Selection
Final accuracy and precision of the DEM produced by the IMAGINE
StereoSAR DEM module is predicated on two separate calculation
sequences. These are the automatic image correlation and the
sensor position/triangulation calculations. These two calculation
sequences are joined in the final step: Height.
The two initial calculation sequences have disparate beam mode
demands. Automatic correlation works best with images acquired
with as little angular divergence as possible. This is because different
imaging angles produce different-looking images, and the automatic
correlator is looking for image similarity. The requirement of image
similarity is the same reason images acquired at different times can
be hard to correlate. For example, images taken of agricultural areas
during different seasons can be extremely different and, therefore,
difficult or impossible for the automatic correlator to process
successfully.
Conversely, the triangulation calculation is most accurate when
there is a large intersection angle between the two images (see
Figure 138). This results in images that are truly different due to
geometric distortion. The ERDAS IMAGINE automatic image
correlator has proven sufficiently robust to match images with
significant distortion if the proper correlator parameters are used.
Figure 138: SAR Image Intersection
NOTE: IMAGINE StereoSAR DEM has built-in checks that assure the
sensor associated with the Reference image is closer to the imaged
area than the sensor associated with the Match image.
A third factor, cost effectiveness, must also often be evaluated. First,
select either Fine or Standard Beam modes. Fine Beam images with
a pixel size of six meters would seem, at first glance, to offer a much
better DEM than Standard Beam with 12.5-meter pixels. However, a
Fine Beam image covers only one-fourth the area of a Standard
Beam image and produces a DEM only minimally better.
Various Standard Beam combinations, such as an S3/S6 or an
S3/S7, cover a larger area per scene, but only for the overlap area
which might be only three-quarters of the scene area. Testing at
both ERDAS and RADARSAT has indicated that a stereopair
consisting of a Wide Beam mode 2 image and a Standard Beam
mode 7 image produces the most cost-effective DEM at a resolution
consistent with the resolution of the instrument and the technique.
Import
The imagery required for the IMAGINE StereoSAR DEM module can
be imported using the ERDAS IMAGINE radar-specific importers for
either RADARSAT or ESA (ERS-1, ERS-2). These importers
automatically extract data from the image header files and store it
in an Hfa file attached to the image. In addition, they abstract key
parameters necessary for sensor modeling and attach these to the
image as a Generic SAR Node Hfa file. Other radar imagery (e.g.,
SIR-C) can be imported using the Generic Binary Importer. The
Generic SAR Node can then be used to attach the Generic SAR Node
Hfa file.
Orbit Correction
Extensive testing of both the IMAGINE OrthoRadar and IMAGINE
StereoSAR DEM modules has indicated that the ephemeris data from
the RADARSAT and the ESA radar satellites is very accurate (see
appended accuracy reports). However, the accuracy does vary with
each image, and there is no a priori way to determine the accuracy
of a particular data set.
The modules of the IMAGINE Radar Mapping Suite: IMAGINE
OrthoRadar, IMAGINE StereoSAR DEM, and IMAGINE IFSAR DEM,
allow for correction of the sensor model using GCPs. Since the
supplied orbit ephemeris is very accurate, orbit correction should
only be attempted if you have very good GCPs. In practice, it has
been found that GCPs from 1:24 000 scale maps or a handheld GPS
are the minimum acceptable accuracy. In some instances, a single
accurate GCP has been found to result in a significant increase in
accuracy.
As with image warping, a uniform distribution of GCPs results in a
better overall result and a lower RMS error. Again, accurate GCPs are
an essential requirement. If your GCPs are questionable, you are
probably better off not using them. Similarly, the GCP must be
recognizable in the radar imagery to within plus or minus one to two
pixels. Road intersections, reservoir dams, airports, or similar man-
made features are usually best. Lacking one very accurate and
locatable GCP, it would be best to utilize several good GCPs
dispersed throughout the image as would be done for a rectification.
Ellipsoid vs. Geoid Heights
The IMAGINE Radar Mapping Suite is based on the World Geodetic
System (WGS) 84 Earth ellipsoid. The sensor model uses this
ellipsoid for the sensor geometry. For maximum accuracy, all GCPs
used to refine the sensor model for all IMAGINE Radar Mapping Suite
modules (IMAGINE OrthoRadar, IMAGINE StereoSAR DEM, or
IMAGINE IFSAR DEM) should be converted to this ellipsoid in all
three dimensions: latitude, longitude, and elevation.
Note that, while ERDAS IMAGINE reprojection converts latitude and
longitude to UTM WGS 84 for many input projections, it does not
modify the elevation values. To do this, it is necessary to determine
the elevation offset between WGS 84 and the datum of the input
GCPs. For some input datums this can be accomplished using the
Web site: www.ngs.noaa.gov/GEOID/geoid.html. This offset must
then be added to, or subtracted from, the input GCP. Many handheld
GPS units can be set to output in WGS 84 coordinates.
One elegant feature of the IMAGINE StereoSAR DEM module is that
orbit refinement using GCPs can be applied at any time in the
process flow without losing the processing work to that stage. The
stereopair can even be processed all the way through to a final DEM
and then you can go back and refine the orbit. This refined orbit is
transferred through all the intermediate files (Subset, Despeckle,
etc.). Only the final step Height would need to be rerun using the
new refined orbit model.
The ephemeris normally received with RADARSAT, or ERS-1 and
ERS-2 imagery is based on an extrapolation of the sensor orbit from
previous positions. If the satellite received an orbit correction
command, this effect might not be reflected in the previous position
extrapolation. The receiving stations for both satellites also do
ephemeris calculations that include post image acquisition sensor
positions. These are generally more accurate. They are not,
unfortunately, easy to acquire and attach to the imagery.
Refined Ephemeris
For information, see IMAGINE IFSAR DEM Theory.
Subset
Use of the Subset option is straightforward. It is not necessary that the two subsets define exactly the same area: an approximation is acceptable. This option is normally used in two circumstances. First, it can be used to define a small subset for testing correlation parameters prior to running a full scene. Also, it can be used to constrain the two input images to only the overlap area. Constraining the input images is useful for saving data space, but is not necessary for the functioning of IMAGINE StereoSAR DEM; it is purely optional.
Despeckle
The functions to despeckle the images prior to automatic correlation are optional. The rationale for despeckling at this time is twofold. First, image speckle noise is not correlated between the two images: it is randomly distributed in both. Thus, it only serves to confuse the automatic correlation calculation. Presence of speckle noise could contribute to false positives during the correlation process.
Second, as discussed under Beam Mode Selection, the two images the software is trying to match are different due to viewing geometry differences. The slight low-pass character of the despeckle algorithm may actually move both images toward a more uniform appearance, which aids automatic correlation.
Functionally, the despeckling algorithms presented here are identical to those available in the IMAGINE Radar Interpreter. In practice, a 3 × 3 or 5 × 5 kernel has been found to work acceptably. Note that all
ERDAS IMAGINE speckle reduction algorithms allow the kernel to be
tuned to the image being processed via the Coefficient of Variation.
Calculation of this parameter is accessed through the IMAGINE
Radar Interpreter Speckle Suppression interface.
See the ERDAS IMAGINE Tour Guides for the IMAGINE Radar
Interpreter tour guide.
Degrade
The Degrade option offered at this step in the processing is
commonly used for two purposes. If the input imagery is Single Look
Complex (SLC), the pixels are not square (this is shown as the Range
and Azimuth pixel spacing sizes). It may be desirable at this time to
adjust the Y scale factor to produce pixels that are more square. This
is purely an option; the software accurately processes undegraded
SLC imagery.
Secondly, if data space or processing time is limited, it may be useful
to reduce the overall size of the image file while still processing the
full images. Under those circumstances, a reduction of two or three
in both X and Y might be appropriate. Note that the processing flow
recommended for maximum accuracy processes the full resolution
scenes and correlates for every pixel. Degrade is used subsequent to
Match to lower DEM variance (LE90) and increase pixel size to
approximately the desired output posting.
Rescale
This operation converts the input imagery bit format, commonly
unsigned 16-bit, to unsigned 8-bit using a two standard deviations
stretch. This is done to reduce the overall data file sizes. Testing has
not shown any advantage to retaining the original 16-bit format, and
use of this option is routinely recommended.
Register
Register is the first of the Process Steps (other than Input) that must be done. This operation serves two important functions: proper user input at this processing level affects the speed of subsequent processing and may affect the accuracy of the final output DEM.
The registration operation uses an affine transformation to rotate the
Match image so that it more closely aligns with the Reference image.
The purpose is to adjust the images so that the elevation-induced
pixel offset (parallax) is mostly in the range (x-axis) direction (i.e.,
the images are nearly epipolar). Doing this greatly reduces the
required size of the search window in the Match step.
One output of this step is the minimum and maximum parallax
offsets, in pixels, in both the x- and y-axis directions. These values
must be recorded by the operator and are used in the Match step to
tune the IMAGINE StereoSAR DEM correlator parameter file (.ssc).
These values are critical to this tuning operation and, therefore,
must be correctly extracted from the Register step.
Two basic guidelines define the selection process for the tie points
used for the registration. First, as with any image-to-image
registration, a better result is obtained if the tie points are uniformly
distributed throughout the images. Second, since you want the
calculation to output the minimum and maximum parallax offsets in
both the x- and y-axis directions, the tie points selected must be
those that have the minimum and maximum parallax.
In practice, the following procedure has been found successful. First,
select a fairly uniform grid of about eight tie points that defines the
lowest elevation within the image. Coastlines, river flood plains,
roads, and agricultural fields commonly meet this criterion. Use of the
Solve Geometric Model icon on the StereoSAR Registration Tool
should yield values in the -5 to +5 range at this time. Next, identify
and select three or four of the highest elevations within the image.
After selecting each tie point, click the Solve Geometric Model icon
and note the effect of each tie point on the minimum and maximum
parallax values. When you feel you have quantified these values,
write them down and apply the resultant transform to the image.
Constrain
This option is intended to allow you to define areas where it is not
necessary to search the entire search window area. A region of lakes
would be such an area. This reduces processing time and also
minimizes the likelihood of finding false positives. This option is not
implemented at present.
Match
An essential component, and the major time-saver, of the IMAGINE StereoSAR DEM software is automatic image correlation.
In automatic image correlation, a small subset (image chip) of the Reference image, termed the template (see Figure 139), is compared to various regions of the Match image's search area (Figure 140) to find the best Match point. The center pixel of the template is then said to be correlated with the center pixel of the Match region. The software then proceeds to the next pixel of interest, which becomes the center pixel of the new template.
Figure 139 shows the upper left (UL) corner of the Reference image.
An 11 × 11 pixel template is shown centered on the pixel of interest:
X = 8, Y = 8.
Figure 139: UL Corner of the Reference Image
Figure 140 shows the UL corner of the Match image. The 11 × 11 pixel template is shown centered on the initial estimated correlation pixel X = 8, Y = 8. The 15 × 7 pixel search area is shown in a dashed
line. Since most of the parallax shift is in the range direction (x-axis),
the search area should always be a rectangle to minimize search
time.
Figure 140: UL Corner of the Match Image
The ERDAS IMAGINE automatic image correlator works on the
hierarchical pyramid technique. This means that the image is
successively reduced in resolution to provide a coregistered set of
images of increasing pixel size (see Figure 141). The automatic
correlation software starts at the top of the resolution pyramid with
the lowest resolution image being processed first. The results of this
process are filtered and interpolated before being passed to the next
highest resolution layer as the initial estimated correlation point.
From this estimated point, the search is performed on this higher
resolution layer.
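A resolution pyramid of this kind can be approximated by repeatedly averaging 2 × 2 pixel blocks, as in the illustrative Python sketch below; the ERDAS IMAGINE pyramid generation may differ in its reduction filter.

    import numpy as np

    def build_pyramid(image, levels=4):
        """Return a list of images from full resolution (level 1) up to the
        coarsest level, each level half the resolution of the one below."""
        pyramid = [image.astype(np.float64)]
        for _ in range(levels - 1):
            img = pyramid[-1]
            h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
            img = img[:h, :w]
            # Average each 2 x 2 block to form the next (coarser) level.
            coarser = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
            pyramid.append(coarser)
        return pyramid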
Figure 141: Image Pyramid
Template Size
The size of the template directly affects computation time: a larger
image chip takes more time. However, too small of a template could
contain insufficient image detail to allow accurate matching. A
balance must be struck between these two competing criteria, and is
somewhat image-dependent. A suitable template for a suburban
area with roads, fields, and other features could be much smaller
than the required template for a vast region of uniform ground cover.
Because of viewing geometry-induced differences in the Reference
and Match images, the template from the Reference image is never
identical to any area of the Match image. The template must be large
enough to minimize this effect.
The IMAGINE StereoSAR DEM correlator parameters shown in Table
49 are for the library file Std_LP_HD.ssc. These parameters are
appropriate for a RADARSAT Standard Beam mode (Std) stereopair
with low parallax (LP) and high density of detail (HD). The low
parallax parameters are appropriate for images of low to moderate
topography. The high density of detail (HD) parameters are
appropriate for the suburban area discussed above.
(Figure 141 shows the image pyramid: Level 1 is the full resolution image (1:1, 512 × 512 pixels), Level 2 is at 1:2 resolution (256 × 256 pixels), Level 3 at 1:4 (128 × 128 pixels), and Level 4 at 1:8 (64 × 64 pixels). Matching starts on level 4 and ends on level 1.)
Table 49: STD_LP_HD Correlator

Level  Average  Size X  Size Y  Search -X  Search +X  Search -Y  Search +Y
  1       1       20      20        2          2          1          1
  2       2       60      60        3          4          1          1
  3       3       90      90        8         20          2          3
  4       4      120     120       10         30          2          5
  5       5      180     180       20         60          2          8
  6       6      220     220       25         70          3         10

Level  Step X  Step Y  Threshold  Value     Vector X  Vector Y  Applied
  1       2       2     0.30000   0.00000   0.00000   0.00000      0
  2       8       8     0.20000   0.00000   0.00000   0.00000      0
  3      20      20     0.20000   0.00000   0.00000   0.00000      0
  4      50      50     0.20000   0.00000   0.00000   0.00000      0
  5      65      65     0.20000   0.00000   0.00000   0.00000      0
  6      80      80     0.10000   0.00000   0.00000   0.00000      0
Note that the size of the template (Size X and Size Y) increases as
you go up the resolution pyramid. This size is the effective size if it
were on the bottom of the pyramid (i.e., the full resolution image).
Since they are actually on reduced resolution levels of the pyramid,
they are functionally smaller. Thus, the 220 × 220 template on Level 6 is actually only 36 × 36 during the actual search. By stating the
template size relative to the full resolution image, it is easy to display
a box of approximate size on the input image to evaluate the amount
of detail available to the correlator, and thus optimize the template
sizes.
Search Area
Considerable computer time is expended in searching the Match
image for the exact Match point. Thus, this search area should be
minimized. (In addition, searching too large of an area increases the
possibility of a false match.) For this reason, the software first
requires that the two images be registered. This gives the software
a rough idea of where the Match point might be. In stereo DEM
generation, you are looking for the offset of a point in the Match
image from its corresponding point in the Reference image
(parallax). The minimum and maximum displacement is quantified
in the Register step and is used to restrain the search area.
In Figure 140, the search area is defined by four parameters: -X, +X,
-Y, and +Y. Most of the displacement in radar imagery is a function
of the look angle and is in the range or x-axis direction. Thus, the
search area is always a rectangle emphasizing the x-axis. Because
the total search area (and, therefore, the total time) is X times Y, it
is important to keep these values to a minimum. Careful use at the
Register step easily achieves this.
Step Size
Because a radar stereopair typically contains millions of pixels, it is
not desirable to correlate every pixel at every level of the hierarchical
pyramid, nor is this even necessary to achieve an accurate result.
The density at which the automatic correlator is to operate at each
level in the resolution pyramid is determined by the step size
(posting). The approach used is to keep posting tighter (smaller step
size) as the correlator works down the resolution pyramid. For
maximum accuracy, it is recommended to correlate every pixel at
the full resolution level. This result is then compressed by the
Degrade step to the desired DEM cell size.
Threshold
The degree of similarity between the Reference template and each
possible Match region within the search area must be quantified by
a mathematical metric. IMAGINE StereoSAR DEM uses the widely
accepted normalized correlation coefficient. The range of possible
values extends from -1 to +1, with +1 being an identical match. The
algorithm uses the maximum value within the search area as the
correlation point.
The threshold in Table 49 is the minimum numerical value of the
normalized correlation coefficient that is accepted as a correlation
point. If no value within the entire search area attains this minimum,
there is not a Match point for that level of the resolution pyramid. In
this case, the initial estimated position, passed from the previous
level of the resolution pyramid, is retained as the Match point.
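For illustration, the normalized correlation coefficient and the search for its maximum within a search area can be written as the following sketch; it is a brute-force reference implementation under assumed array shapes, not the optimized ERDAS correlator.

    import numpy as np

    def normalized_correlation(template, candidate):
        """Normalized correlation coefficient between a Reference template and
        a Match-image region of the same shape; ranges from -1 to +1, with +1
        indicating an identical match."""
        t = template - template.mean()
        c = candidate - candidate.mean()
        denom = np.sqrt((t * t).sum() * (c * c).sum())
        return 0.0 if denom == 0 else float((t * c).sum() / denom)

    def best_match(template, search_area, threshold):
        """Slide the template over the search area and return the offset of
        the highest correlation, or None if no score reaches the threshold."""
        th, tw = template.shape
        best, best_offset = threshold, None
        for dy in range(search_area.shape[0] - th + 1):
            for dx in range(search_area.shape[1] - tw + 1):
                score = normalized_correlation(
                    template, search_area[dy:dy + th, dx:dx + tw])
                if score >= best:
                    best, best_offset = score, (dy, dx)
        return best_offset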
Correlator Library
To aid both the novice and the expert in rapidly selecting and refining
an IMAGINE StereoSAR DEM correlator parameter file for a specific
image pair, a library of tested parameter files has been assembled
and is included with the software. These files are labeled using the
following syntax: (RADARSAT Beam mode)_(Magnitude of
Parallax)_(Density of Detail).
RADARSAT Beam Mode
Correlator parameter files are available for both Standard (Std) and
Fine (Fine) Beam modes. An essential difference between these two
categories is that, with the Fine Beam mode, more pixels are
required (i.e., a larger template) to contain the same number of
image features than with a Standard Beam image of the same area.
Magnitude of Parallax
The magnitude of the parallax is divided into high parallax (_HP) and
low parallax (_LP) options. This determination is based upon the
elevation changes and slopes within the images and is somewhat
subjective. This parameter determines the size of the search area.
Density of Detail
The level of detail within each template is divided into high density
(_HD) and low density (_LD) options. The density of detail for a
suburban area with roads, fields, and other features would be much
higher than the density of detail for a vast region of uniform ground
cover. This parameter, in conjunction with beam mode, determines
the required template sizes.
Quick Tests
It is often advantageous to quickly produce a low resolution DEM to
verify that the automatic image correlator is optimum before
correlating on every pixel to produce the final DEM.
For this purpose, a Quick Test (_QT) correlator parameter file has
been provided for each of the full resolution correlator parameter
files in the .ssc library. These correlators process the image only
through resolution pyramid Level 3. Processing time up to this level
has been found to be acceptably fast, and testing has shown that if
the image is successfully processed to this level, the correlator
parameter file is probably appropriate.
Evaluation of the parallax files produced by the Quick Test
correlators and subsequent modification of the correlator parameter
file is discussed in "IMAGINE StereoSAR DEM Application" in the
IMAGINE Radar Mapping Suite Tour Guide.
Degrade
The second Degrade step compresses the final parallax image file
(Level 1). While not strictly necessary, it is logical and has proven
advantageous to reduce the pixel size at this time to approximately
the intended posting of the final output DEM. Doing so at this time
decreases the variance (LE90) of the final DEM through averaging.
Height
This step combines the information from the above processing steps
to derive surface elevations. The sensor models of the two input
images are combined to derive the stereo intersection geometry. The
parallax values for each pixel are processed through this geometric
relationship to derive a DEM in sensor (pixel) coordinates.
Comprehensive testing of the IMAGINE StereoSAR DEM module has
indicated that, with reasonable data sets and careful work, the
output DEM falls between DTED Level I and DTED Level II. This
corresponds to between USGS 30-meter and USGS 90-meter DEMs.
Thus, an output pixel size of 40 to 50 meters is consistent with this
expected precision.
The final step is to resample and reproject this sensor DEM into the
desired final output DEM. The entire ERDAS IMAGINE reprojection
package is accessed within the IMAGINE StereoSAR DEM module.
IMAGINE IFSAR DEM Theory
Introduction
Terrain height extraction is one of the most important applications
for SAR images. There are two basic techniques for extracting height
from SAR images: stereo and interferometry. Stereo height
extraction is much like the optical process and is discussed in
IMAGINE StereoSAR DEM Theory. The subject of this section is SAR
interferometry (IFSAR).
Height extraction from IFSAR takes advantage of one of the unique
qualities of SAR images: distance information from the sensor to the
ground is recorded for every pixel in the SAR image. Unlike optical
and IR images, which contain only the intensity of the energy
received at the sensor, SAR images contain distance information in
the form of phase. This distance is simply the number of wavelengths
of the source radiation from the sensor to a given point on the
ground. SAR sensors can record this information because, unlike
optical and IR sensors, their radiation source is active and coherent.
Unfortunately, this distance phase information in a single SAR image
is mixed with phase noise from the ground and other effects. For this
reason, it is impossible to extract just the distance phase from the
total phase in a single SAR image. However, if two SAR images are
available that cover the same area from slightly different vantage
points, the phase of one can be subtracted from the phase of the
other to produce the distance difference of the two SAR images
(hence the term interferometry). This is because the other phase
effects for the two images are approximately equal and cancel out
each other when subtracted. What is left is a measure of the distance
difference from one image to the other. From this difference and the
orbit information, the height of every pixel can be calculated.
This chapter covers basic concepts and processing steps needed to
extract terrain height from a pair of interferometric SAR images.
Electromagnetic Wave Background
In order to understand the SAR interferometric process, you must
have a general understanding of electromagnetic waves and how
they propagate. An electromagnetic wave is a changing electric field
that produces a changing magnetic field that produces a changing
electric field, and so on. As this process repeats, energy is
propagated through empty space at the speed of light.
Figure 142 gives a description of the type of electromagnetic wave
that we are interested in. In this diagram, E indicates the electric
field and H represents the magnetic field. The directions of E and H
are mutually perpendicular everywhere. In a uniform plane wave, E and H lie in a plane and have the same value everywhere in that plane.
A wave of this type with both E and H transverse to the direction of
propagation is called a Transverse ElectroMagnetic (TEM) wave. If
the electric field E has only a component in the y direction and the
magnetic field H has only a component in the z direction, then the
wave is said to be polarized in the y direction (vertically polarized).
Polarization is generally defined as the direction of the electric field
component with the understanding that the magnetic field is
perpendicular to it.
Figure 142: Electromagnetic Wave
The electromagnetic wave described above is the type that is sent
and received by an SAR. The SAR, like most equipment that uses
electromagnetic waves, is only sensitive to the electric field
component of the wave; therefore, we restrict our discussion to it.
The electric field of the wave has two main properties that we must
understand in order to understand SAR and interferometry. These
are the magnitude and phase of the wave. Figure 143 shows that the
electric field varies with time.
Figure 143: Variation of Electric Field in Time
The figure shows how the wave phase varies with time at three different moments. In the figure, λ is the wavelength and T is the time required for the wave to travel one full wavelength. P is a point of constant phase and moves to the right as time progresses. The wave has a specific phase value at any given moment in time and at a specific point along its direction of travel. The wave can be expressed in the form of Equation 1.
Equation 1
    E_y = cos(ωt - βx)
Where:
    ω = 2π / T
    β = 2π / λ
Equation 1 is expressed in Cartesian coordinates and assumes that the maximum magnitude of E_y is unity. It is more useful to express this equation in exponential form and include a maximum term as in Equation 2.
Equation 2
    E_y = E_0 · e^(j(ωt - βx))
So far we have described the definition and behavior of the
electromagnetic wave phase as a function of time and distance. It is
also important to understand how the strength or magnitude
behaves with time and distance from the transmitter. As the wave
moves away from the transmitter, its total energy stays the same
but is spread over a larger distance. This means that the energy at
any one point (or its energy density) decreases with time and
distance as shown in Figure 144.
Figure 144: Effect of Time and Distance on Energy
The magnitude of the wave decreases exponentially as the distance
from the transmitter increases. Equation 2 represents the general
form of the electromagnetic wave that we are interested in for SAR
and IFSAR applications. Later, we further simplify this expression
given certain restrictions of an SAR sensor.
The Interferometric Model
Most uses of SAR imagery involve a display of the magnitude of the
image reflectivity and discard the phase when the complex image is
magnitude-detected. The phase of an image pixel representing a
single scatterer is deterministic; however, the phase of an image pixel that represents multiple scatterers (in the same resolution cell) is made up of both a deterministic part and a nondeterministic, statistical part. For this reason, pixel phase in a single SAR image is generally
not useful. However, with proper selection of an imaging geometry,
two SAR images can be collected that have nearly identical
nondeterministic phase components. These two SAR images can be
subtracted, leaving only a useful deterministic phase difference of
the two images.
Figure 145 provides the basic geometric model for an interferometric
SAR system.
Figure 145: Geometric Model for an Interferometric SAR
System
Where:
    A1 = antenna 1
    A2 = antenna 2
    B_i = baseline
    R_1 = vector from antenna 1 to point of interest
    R_2 = vector from antenna 2 to point of interest
    ψ = angle between R_1 and the baseline vector (depression angle)
    Z_ac = antenna 1 height
A rigid baseline B_i separates two antennas, A1 and A2. This separation causes the two antennas to illuminate the scene at slightly different depression angles relative to the baseline. Here, ψ is the nominal depression angle from A1 to the scatterer relative to the baseline. The model assumes that the platform travels at constant velocity in the X direction while the baseline remains parallel to the Y axis at a constant height Z_ac above the XY plane.
The electromagnetic wave Equation 2 describes the signal data collected by each antenna. The two sets of signal data differ primarily because of the small differences in the data collection geometry. Complex images are generated from the signal data received by each antenna.
As stated earlier, the phase of an image pixel represents the phase of multiple scatterers in the same resolution cell and consists of both deterministic and unknown random components. A data collection for SAR interferometry adheres to special conditions to ensure that the random component of the phase is nearly identical in the two images. The deterministic phase in a single image is due to the two-way propagation path between the associated antenna and the target.
From our previously derived equation for an electromagnetic wave, and assuming the standard SAR configuration in which the perpendicular distance from the SAR to the target does not change, we can write the complex quantities representing a corresponding pair of image pixels, P_1 and P_2, from image 1 and image 2 as Equation 3 and Equation 4.
Equation 3
    P_1 = a_1 · e^(j(θ_1 + φ_1))
and Equation 4
    P_2 = a_2 · e^(j(θ_2 + φ_2))
The quantities a_1 and a_2 represent the magnitudes of each image pixel. Generally, these magnitudes are approximately equal. The quantities θ_1 and θ_2 are the random components of pixel phase. They represent the vector summations of returns from all unresolved scatterers within the resolution cell and include contributions from receiver noise. With proper system design and collection geometry, they are nearly equal. The quantities φ_1 and φ_2 are the deterministic contribution to the phase of the image pixel. The desired function of the interferometer is to provide a measure of the phase difference, φ_1 - φ_2.
Next, we must relate the phase value to the distance vector from
each antenna to the point of interest. This is done by recognizing that
phase and the wavelength of the electromagnetic wave represent
distance in number of wavelengths. Equation 5 relates phase to
distance and wavelength.
Equation 5
    φ_i = (4π · R_i) / λ
Multiplication of one image and the complex conjugate of the second
image on a pixel-by-pixel basis yields the phase difference between
corresponding pixels in the two images. This complex product
produces the interferogram I with:
Equation 6
    I = P_1 · P_2*
Where * denotes the complex conjugate operation. With θ_1 and θ_2 nearly equal and a_1 and a_2 nearly equal, the two images differ primarily in how the slight difference in collection depression angles affects φ_1 and φ_2. Ideally then, each pixel in the interferogram has the form:
Equation 7
    I = a² · e^(j · (4π/λ) · (R_2 - R_1)) = a² · e^(j · φ_12)
using a_1 = a_2 = a. The amplitude a² of the interferogram corresponds to image intensity. The phase φ_12 of the interferogram becomes
Equation 8
    φ_12 = 4π · (R_2 - R_1) / λ
which is the quantity used to derive the depression angle to the point
of interest relative to the baseline and, eventually, information about
the scatterer height relative to the XY plane. Using the following
approximation allows us to arrive at an equation relating the
interferogram phase to the nominal depression angle.
Equation 9
    R_2 - R_1 ≈ B_i · cos(ψ)
Equation 10
    φ_12 ≈ 4π · B_i · cos(ψ) / λ
In Equation 9 and Equation 10, ψ is the nominal depression angle from the center of the baseline to the scatterer relative to the baseline. No phase difference indicates that ψ = 90 degrees and the scatterer is in the plane through the center of, and orthogonal to, the baseline. The interferometric phase involves many radians of phase for scatterers at other depression angles, since the range difference R_2 - R_1 is many wavelengths. In practice, however, an interferometric system does not measure the total pixel phase difference. Rather, it measures only the phase difference that remains after subtracting all full 2π intervals present (modulo 2π).
To estimate the actual depression angle to a particular scatterer, the interferometer must measure the total pixel phase difference of many cycles. This information is available, for instance, by unwrapping the raw interferometric phase measurements beginning at a known scene location. Phase unwrapping is discussed in further detail in Phase Unwrapping.
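In code, forming the interferogram of Equation 6 through Equation 8 and reading off its wrapped (modulo 2π) phase amounts to a pixel-by-pixel complex multiplication. The NumPy sketch below is illustrative only and assumes the two coregistered single look complex images are available as complex arrays.

    import numpy as np

    def form_interferogram(slc1, slc2):
        """Multiply image 1 by the complex conjugate of image 2, pixel by
        pixel. The magnitude corresponds to image intensity; the angle is the
        wrapped interferometric phase difference (modulo 2*pi)."""
        interferogram = slc1 * np.conj(slc2)
        intensity = np.abs(interferogram)
        wrapped_phase = np.angle(interferogram)   # in (-pi, pi]
        return interferogram, intensity, wrapped_phase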
Because of the ambiguity imposed by the wrapped phase problem, it is necessary to seek the relative depression angle and relative height among scatterers within a scene rather than their absolute depression angle and height. The differential of Equation 10 with respect to ψ provides this relative measure. This differential is
Equation 11
    Δφ_12 = -(4π · B_i · sin(ψ) / λ) · Δψ
or
Equation 12
    Δψ = -(λ / (4π · B_i · sin(ψ))) · Δφ_12
This result indicates that two pixels in the interferogram that differ in phase by Δφ_12 represent scatterers differing in depression angle by Δψ. Figure 146 shows the differential collection geometry.
Figure 146: Differential Collection Geometry
From this geometry, a change Δψ in depression angle is related to a change Δh in height (at the same range from mid-baseline) by Equation 13.
Equation 13
    Δh = (Z_ac - h) · [1 - sin(ψ + Δψ) / sin(ψ)]
Using a useful small-angle approximation to Equation 13 and substituting Equation 12 into Equation 13 provides the result Equation 14 for Δh.
Equation 14
    Δh ≈ -Z_ac · cot(ψ) · Δψ = (λ · Z_ac · cot(ψ) / (4π · B_i · sin(ψ))) · Δφ_12
Equation 14
Note that, because we are calculating differential height, we need at
least one known height value in order to calculate absolute height.
This translates into a need for at least one GCP in order to calculate
absolute heights from the IMAGINE IFSAR DEM process.
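Once the phase has been unwrapped, Equation 12 and Equation 14 can be applied pixel by pixel. The sketch below is a deliberately simplified illustration that assumes a single nominal depression angle ψ, baseline B_i, and antenna height Z_ac for the whole scene; a real implementation would use the full collection geometry.

    import numpy as np

    def phase_to_relative_height(dphi_12, wavelength, baseline, psi, z_ac):
        """Convert an unwrapped interferometric phase difference (radians)
        into a relative height difference using Equations 12 and 14."""
        # Equation 12: change in depression angle for the phase difference.
        dpsi = -wavelength * dphi_12 / (4.0 * np.pi * baseline * np.sin(psi))
        # Equation 14: relative height change for that angle change.
        dh = -z_ac / np.tan(psi) * dpsi
        return dh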
In this section, we have derived the mathematical model needed to
calculate height from interferometric phase information. In order to
put this model into practice, there are several important processes
that must be performed. These processes are image registration,
phase noise reduction, phase flattening, and phase unwrapping.
These processes are discussed in the following sections.
Image Registration
In the discussion of the interferometric model of the last section, we
assumed that the pixels had been identified in each image that
contained the phase information for the scatterer of interest.
Aligning the images from the two antennas is the purpose of the
image registration step. For interferometric systems that employ two
antennas attached by a fixed boom and collect data simultaneously,
this registration is simple and deterministic. Given the collection
geometry, the registration can be calculated without referring to the
data. For repeat pass systems, the registration is not quite so simple.
Since the collection geometry cannot be precisely known, we must
use the data to help us achieve image registration.
The registration process for repeat pass interferometric systems is
generally broken into two steps: pixel and sub-pixel registration.
Pixel registration involves using the magnitude (visible) part of each
image to remove the image misregistration down to around a pixel.
This means that, after pixel registration, the two images are
registered to within one or two pixels of each other in both the range
and azimuth directions.
Pixel registration is best accomplished using a standard window
correlator to compare the magnitudes of the two images over a
specified window. You usually specify a starting point in the two
images, a window size, and a search range for the correlator to
search over. The process identifies the pixel offset that produces the
highest match between the two images, and therefore the best
interferogram. One offset is enough to pixel register the two images.
Pixel registration, in general, produces a reasonable interferogram,
but not the best possible. This is because of the nature of the phase
function for each of the images. In order to form an image from the
original signal data collected for each image, it is required that the
phase functions in range and azimuth be Nyquist sampled.
Nyquist sampling simply means that the original continuous function
can be reconstructed from the sampled data. This means that, while
the magnitude resolution is limited to the pixel sizes (often less than
that), the phase function can be reconstructed to much higher
resolutions. Because it is the phase functions that ultimately provide
the height information, it is important to register them as closely as
possible. This fine registration of the phase functions is the goal of
the sub-pixel registration step.
Sub-pixel registration is achieved by starting at the pixel registration
offset and searching over upsampled versions of the phase functions
for the best possible interferogram. When this best interferogram is
found, the sub-pixel offset has been identified. In order to
accomplish this, we must construct higher resolution phase functions
from the data. In general this is done using the relation from signal
processing theory shown in Equation 15.
Equation 15
    i(r + Δr, a + Δa) = F⁻¹[ I(u, v) · e^(j(u·Δr + v·Δa)) ]
Where:
    r = range independent variable
    a = azimuth independent variable
    i(r, a) = interferogram in the spatial domain
    I(u, v) = interferogram in the frequency domain
    Δr = sub-pixel range offset (e.g., 0.25)
    Δa = sub-pixel azimuth offset (e.g., 0.75)
    F⁻¹ = inverse Fourier transform
Applying this relation directly requires two-dimensional (2D) Fourier transforms and inverse Fourier transforms for each window tested. This is impractical given the computing requirements of Fourier transforms. Fortunately, we can achieve the upsampled phase functions we need using 2D sinc interpolation, which involves convolving a 2D sinc function of a given size over our search region. Equation 16 defines the sinc function for one dimension.
Equation 16
    sinc(n) = sin(πn) / (πn)
Using sinc interpolation is a fast and efficient method of reconstructing parts of the phase functions which are at sub-pixel locations.
In general, one sub-pixel offset is not enough to sub-pixel register
two SAR images over the entire collection. Unlike the pixel
registration, sub-pixel registration is dependent on the pixel location,
especially the range location. For this reason, it is important to
generate a sub-pixel offset function that varies with range position.
Two sub-pixel offsets, one at the near range and one at the far
range, are enough to generate this function. This sub-pixel registration function provides the weights for the sinc interpolator needed to register one image to the other during the formation of the interferogram.
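A one-dimensional version of this sinc reconstruction can be sketched as follows; the kernel half-width and the edge handling are illustrative choices, not values taken from the software.

    import numpy as np

    def sinc_interpolate(samples, position, half_width=8):
        """Estimate the value of a Nyquist-sampled function (a NumPy array)
        at a fractional sample position by convolving a truncated sinc kernel
        (Equation 16) with the neighboring samples."""
        center = int(np.floor(position))
        idx = np.arange(center - half_width + 1, center + half_width + 1)
        idx = np.clip(idx, 0, len(samples) - 1)   # clamp at the edges
        weights = np.sinc(position - idx)         # np.sinc(x) = sin(pi x)/(pi x)
        return np.sum(samples[idx] * weights)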
Phase Noise Reduction
We mentioned in The Interferometric Model that it is necessary to
unwrap the phase of the interferogram before it can be used to
calculate heights. From a practical and implementational point of
view, the phase unwrapping step is the most difficult. We discuss
phase unwrapping more in Phase Unwrapping.
Before unwrapping, we can do a few things to the data that make the
phase unwrapping easier. The first of these is to reduce the noise in
the interferometric phase function. Phase noise is introduced by
radar system noise, image misregistration, and speckle effects
caused by the complex nature of the imagery. Reducing this noise is
done by applying a coherent average filter of a given window size
over the entire interferogram. This filter is similar to the more
familiar averaging filter, except that it operates on the complex
function instead of just the magnitudes. The form of this filter is
given in Equation 17.
Equation 17
    î(r, a) = [ Σ_{j=0..M} Σ_{i=0..N} ( Re[i(r + i, a + j)] + j · Im[i(r + i, a + j)] ) ] / (M · N)
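Such a coherent average filter can be expressed as a uniform convolution applied separately to the real and imaginary parts of the complex interferogram, as in the illustrative sketch below; the window size is an assumption.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def coherent_average(interferogram, window=5):
        """Average the real and imaginary parts of the complex interferogram
        over a window x window neighborhood, smoothing phase noise while
        preserving the underlying fringe pattern."""
        real_part = uniform_filter(interferogram.real, size=window)
        imag_part = uniform_filter(interferogram.imag, size=window)
        return real_part + 1j * imag_part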
Figure 147 shows an interferometric phase image without filtering;
Figure 148 shows the same phase image with filtering.
Figure 147: Interferometric Phase Image without Filtering
Figure 148: Interferometric Phase Image with Filtering
The sharp ridges that look like contour lines in Figure 147 and Figure
148 show where the phase functions wrap. The goal of the phase
unwrapping step is to make this one continuous function. This is
discussed in greater detail in Phase Unwrapping. Notice how the
filtered image of Figure 148 is much cleaner than that of Figure 147.
This filtering makes the phase unwrapping much easier.
Phase Flattening
The phase function of Figure 148 is fairly well behaved and is ready
to be unwrapped. There are relatively few wrap lines and they are
distinct. Notice in the areas where the elevation is changing more
rapidly (mountain regions) the frequency of the wrapping increases.
In general, the higher the wrapping frequency, the more difficult the
area is to unwrap. Once the wrapping frequency exceeds the spatial
sampling of the phase image, information is lost. An important
technique in reducing this wrapping frequency is phase flattening.
Phase flattening involves removing high frequency phase wrapping
caused by the collection geometry. This high frequency wrapping is
mainly in the range direction, and is because of the range separation
of the antennas during the collection. Recall that it is this range
separation that gives the phase difference and therefore the height
information. The phase function of Figure 148 has already had phase
flattening applied to it. Figure 149 shows this same phase function
without phase flattening applied.
Figure 149: Interferometric Phase Image without Phase
Flattening
Phase flattening is achieved by removing the phase function that
would result if the imaging area was flat from the actual phase
function recorded in the interferogram. It is possible, using the
equations derived in The Interferometric Model, to calculate this
flat Earth phase function and subtract it from the data phase
function.
It should be obvious that the phase function in Figure 148 is easier
to unwrap than the phase function of Figure 149.
Phase Unwrapping
We stated in The Interferometric Model that we must unwrap the
interferometric phase before we can use it to calculate height values.
In Phase Noise Reduction and Phase Flattening, we develop
methods of making the phase unwrapping job easier. This section
further defines the phase unwrapping problem and describes how to
solve it.
As an electromagnetic wave travels through space, it cycles through
its maximum and minimum phase values many times as shown in
Figure 150.
Figure 150: Electromagnetic Wave Traveling through Space
(In Figure 150, point P_1 lies at a phase of φ_1 = 3π/2 and point P_2 at a phase of φ_2 = 11π/2.)
The phase difference between points P_1 and P_2 is given by Equation 18.
Equation 18
    φ_2 - φ_1 = 11π/2 - 3π/2 = 4π
Equation 18
Recall from Equation 8 that finding the phase difference at two points
is the key to extracting height from interferometric phase.
Unfortunately, an interferometric system does not measure the total
pixel phase difference. Rather, it measures only the phase difference
that remains after subtracting all full 2π intervals present (modulo 2π). This results in the following value for the phase difference of Equation 18.
Equation 19
    (φ_2 - φ_1) mod 2π = (11π/2) mod 2π - (3π/2) mod 2π = 3π/2 - 3π/2 = 0
Figure 151 further illustrates the difference between a one-
dimensional continuous and wrapped phase function. Notice that
when the phase value of the continuous function reaches 2π, the
wrapped phase function returns to 0 and continues from there. The
job of the phase unwrapping is to take a wrapped phase function and
reconstruct the continuous function from it.
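For the one-dimensional case, unwrapping simply means adding the appropriate multiple of 2π wherever the wrapped function jumps by more than π between neighboring samples. The short NumPy example below illustrates this for a noise-free ramp; two-dimensional unwrapping of real interferograms is far more difficult, as discussed below.

    import numpy as np

    # A continuous ramp of phase, then the wrapped version an instrument measures.
    continuous = np.linspace(0.0, 10.0 * np.pi, 200)
    wrapped = np.mod(continuous, 2.0 * np.pi)

    # Reconstruct the continuous function from the wrapped one (1D case only).
    unwrapped = np.unwrap(wrapped)

    print(np.allclose(unwrapped, continuous))   # True for this noise-free example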
Figure 151: One-dimensional Continuous vs. Wrapped Phase Function
There has been much research and many different methods derived
for unwrapping the 2D phase function of an interferometric SAR
phase image. A detailed discussion of all or any one of these methods
is beyond the scope of this chapter. The most successful approaches
employ algorithms which unwrap the easy or good areas first and
then move on to more difficult areas. Good areas are regions in
which the phase function is relatively flat and the correlation is high.
This prevents errors in the tough areas from corrupting good
regions. Figure 152 shows a sequence of unwrapped phase images
for the phase function of Figure 148.
Figure 152: Sequence of Unwrapped Phase Images
Figure 153 shows the wrapped phase compared to the unwrapped
phase image.
(The panels of Figure 152 show the phase image from 10% to 100% unwrapped, in 10% increments.)
Figure 153: Wrapped vs. Unwrapped Phase Images
The unwrapped phase values can now be combined with the
collection position information to calculate height values for each
pixel in the interferogram.
Conclusions
SAR interferometry uses the unique properties of SAR images to
extract height information from SAR interferometric image pairs.
Given a good image pair and good information about the collection
geometry, IMAGINE IFSAR DEM can produce very high quality
results. The best IMAGINE IFSAR DEM results are acquired with dual
antenna systems that collect both images at once. It is also possible
to do IFSAR processing on repeat pass systems. These systems have
the advantage of only requiring one antenna, and therefore are
cheaper to build. However, the quality of repeat pass IFSAR is very
sensitive to the collection conditions because of the fact that the
images were not collected at the same time. Weather and terrain
changes that occur between the collection of the two images can
greatly degrade the coherence of the image pair. This reduction in
coherence makes each part of the IMAGINE IFSAR DEM process
more difficult.
Rectification
Introduction
Raw, remotely sensed image data gathered by a satellite or aircraft
are representations of the irregular surface of the Earth. Even
images of seemingly flat areas are distorted by both the curvature of
the Earth and the sensor being used. This chapter covers the
processes of geometrically correcting an image so that it can be
represented on a planar surface, conform to other images, and have
the integrity of a map.
A map projection system is any system designed to represent the
surface of a sphere or spheroid (such as the Earth) on a plane. There
are a number of different map projection methods. Since flattening
a sphere to a plane causes distortions to the surface, each map
projection system compromises accuracy between certain
properties, such as conservation of distance, angle, or area. For
example, in equal area map projections, a circle of a specified
diameter drawn at any location on the map represents the same total
area. This is useful for comparing land use area, density, and many
other applications. However, to maintain equal area, the shapes,
angles, and scale in parts of the map may be distorted (Jensen,
1996).
There are a number of map coordinate systems for determining
location on an image. These coordinate systems conform to a grid,
and are expressed as X,Y (column, row) pairs of numbers. Each map
projection system is associated with a map coordinate system.
Rectification is the process of transforming the data from one grid
system into another grid system using a geometric transformation.
While polynomial transformation and triangle-based methods are
described in this chapter, discussion about various rectification
techniques can be found in Yang (Yang, 1997). Since the pixels of
the new grid may not align with the pixels of the original grid, the
pixels must be resampled. Resampling is the process of extrapolating
data values for the pixels on the new grid from the values of the
source pixels.
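As a small illustration of this grid-to-grid idea, the sketch below fits a first-order (affine) polynomial transformation from source image coordinates to map coordinates using GCPs and resamples by nearest neighbor. It is a toy example under those assumptions, not the ERDAS rectification code.

    import numpy as np

    def fit_affine(file_xy, map_xy):
        """Least-squares fit of a first-order polynomial (affine) transform
        that maps file (column, row) coordinates to map (X, Y) coordinates."""
        cols, rows = file_xy[:, 0], file_xy[:, 1]
        A = np.column_stack([np.ones_like(cols), cols, rows])
        coeff_x, _, _, _ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
        coeff_y, _, _, _ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)
        return coeff_x, coeff_y

    def nearest_neighbor_resample(image, inverse_transform, out_shape):
        """Fill each output grid cell with the value of the closest source
        pixel, given an inverse transform from output (col, row) to source
        (col, row) coordinates."""
        out = np.zeros(out_shape, dtype=image.dtype)
        for r in range(out_shape[0]):
            for c in range(out_shape[1]):
                sc, sr = inverse_transform(c, r)
                sc, sr = int(round(sc)), int(round(sr))
                if 0 <= sr < image.shape[0] and 0 <= sc < image.shape[1]:
                    out[r, c] = image[sr, sc]
        return out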
Registration
In many cases, images of one area that are collected from different
sources must be used together. To be able to compare separate
images pixel by pixel, the pixel grids of each image must conform to
the other images in the data base. The tools for rectifying image data
are used to transform disparate images to the same coordinate
system.
Registration is the process of making an image conform to another
image. A map coordinate system is not necessarily involved. For
example, if image A is not rectified and it is being used with image
B, then image B must be registered to image A so that they conform
to each other. In this example, image A is not rectified to a particular
map projection, so there is no need to rectify image B to a map
projection.
Georeferencing
Georeferencing refers to the process of assigning map coordinates to
image data. The image data may already be projected onto the
desired plane, but not yet referenced to the proper coordinate
system. Rectification, by definition, involves georeferencing, since all
map projection systems are associated with map coordinates.
Image-to-image registration involves georeferencing only if the
reference image is already georeferenced. Georeferencing, by itself,
involves changing only the map coordinate information in the image
file. The grid of the image does not change.
Geocoded data are images that have been rectified to a particular
map projection and pixel size, and usually have had radiometric
corrections applied. It is possible to purchase image data that is
already geocoded. Geocoded data should be rectified only if they
must conform to a different projection system or be registered to
other rectified data.
Latitude/Longitude
Lat/Lon is a spherical coordinate system that is not associated with
a map projection. Lat/Lon expresses locations in the terms of a
spheroid, not a plane. Therefore, an image is not usually rectified to
Lat/Lon, although it is possible to convert images to Lat/Lon, and
some tips for doing so are included in this chapter.
You can view map projection information for a particular file
using the Image Information utility. Image Information allows
you to modify map information that is incorrect. However, you
cannot rectify data using Image Information. You must use the
Rectification tools described in this chapter.
The properties of map projections and of particular map
projection systems are discussed in Cartography and Map
Projections.
Orthorectification
Orthorectification is a form of rectification that corrects for terrain
displacement and can be used if there is a DEM of the study area. It
is based on collinearity equations, which can be derived by using 3D
GCPs. In relatively flat areas, orthorectification is not necessary, but
in mountainous areas (or on aerial photographs of buildings), where
a high degree of accuracy is required, orthorectification is
recommended.
See Photogrammetric Concepts for more information on
orthocorrection.
When to Rectify Rectification is necessary in cases where the pixel grid of the image
must be changed to fit a map projection system or a reference
image. There are several reasons for rectifying image data:
comparing pixels scene to scene in applications, such as change
detection or thermal inertia mapping (day and night comparison)
developing GIS data bases for GIS modeling
identifying training samples according to map coordinates prior
to classification
creating accurate scaled photomaps
overlaying an image with vector data, such as ArcInfo
comparing images that are originally at different scales
extracting accurate distance and area measurements
mosaicking images
performing any other analyses requiring precise geographic
locations
Before rectifying the data, you must determine the appropriate
coordinate system for the data base. To select the optimum map
projection and coordinate system, the primary use for the data base
must be considered.
If you are doing a government project, the projection may be
predetermined. A commonly used projection in the United States
government is State Plane. Use an equal area projection for thematic
or distribution maps and conformal or equal area projections for
presentation maps. Before selecting a map projection, consider the
following:
How large or small is the area being mapped? Different projections
are intended for different size areas.
Where on the globe is the study area? Polar regions and
equatorial regions require different projections for maximum
accuracy.
What is the extent of the study area? Circular, north-south, east-
west, and oblique areas may all require different projection
systems (Environmental Systems Research Institute, 1992).
When to Georeference Only Rectification is not necessary if there is no distortion in the image.
For example, if an image file is produced by scanning or digitizing a
paper map that is in the desired projection system, then that image
is already planar and does not require rectification unless there is
some skew or rotation of the image. Scanning and digitizing produce
images that are planar, but do not contain any map coordinate
information. These images need only to be georeferenced, which is
a much simpler process than rectification. In many cases, the image
header can simply be updated with new map coordinate information.
This involves redefining:
the map coordinate of the upper left corner of the image
the cell size (the area represented by each pixel)
This information is usually the same for each layer of an image file,
although it could be different. For example, the cell size of band 6 of
Landsat TM data is different than the cell size of the other bands.
Use the Image Information utility to modify image file header
information that is incorrect.
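As a sketch of the arithmetic behind georeferencing-only, the following Python function (the corner coordinates and cell size are hypothetical values, and this is not an ERDAS IMAGINE call) shows how a pixel's column and row map to map coordinates once the upper left corner and the cell size are defined:

    # Minimal sketch of georeferencing arithmetic: pixel (col, row) to map (x, y).
    # The upper-left corner coordinates and cell sizes below are hypothetical.

    def pixel_to_map(col, row, ul_x, ul_y, cell_x, cell_y):
        """Return the map coordinate of the center of pixel (col, row).

        ul_x, ul_y - map coordinates of the upper left corner of the image
        cell_x     - cell size in the X direction (map units per pixel)
        cell_y     - cell size in the Y direction (map units per pixel)
        """
        map_x = ul_x + (col + 0.5) * cell_x
        map_y = ul_y - (row + 0.5) * cell_y   # Y decreases as the row number increases
        return map_x, map_y

    # Example: a 30 m image whose upper left corner is at (350000 E, 3880000 N)
    print(pixel_to_map(0, 0, 350000.0, 3880000.0, 30.0, 30.0))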
Disadvantages of
Rectification
During rectification, the data file values of rectified pixels must be
resampled to fit into a new grid of pixel rows and columns. Although
some of the algorithms for calculating these values are highly
reliable, some spectral integrity of the data can be lost during
rectification. If map coordinates or map units are not needed in the
application, then it may be wiser not to rectify the image. An
unrectified image is more spectrally correct than a rectified image.
Classification
Some analysts recommend classification before rectification, since
the classification is then based on the original data values. Another
benefit is that a thematic file has only one band to rectify instead of
the multiple bands of a continuous file. On the other hand, it may be
beneficial to rectify the data first, especially when using GPS data for
the GCPs. Since these data are very accurate, the classification may
be more accurate if the new coordinates help to locate better training
samples.
Thematic Files
Nearest neighbor is the only appropriate resampling method for
thematic files, which may be a drawback in some applications. The
available resampling methods are discussed in detail later in this
chapter.
Rectification Steps NOTE: Registration and rectification involve similar sets of
procedures. Throughout this documentation, many references to
rectification also apply to image-to-image registration.
Usually, rectification is the conversion of data file coordinates to
some other grid and coordinate system, called a reference system.
Rectifying or registering image data on disk involves the following
general steps, regardless of the application:
1. Locate GCPs.
2. Compute and test a transformation.
3. Create an output image file with the new coordinate information in
the header. The pixels must be resampled to conform to the new
grid.
Images can be rectified on the display (in a Viewer) or on the disk.
Display rectification is temporary, but disk rectification is permanent,
because a new file is created. Disk rectification involves:
rearranging the pixels of the image onto a new grid, which
conforms to a plane in the new map projection and coordinate
system
inserting new information to the header of the file, such as the
upper left corner map coordinates and the area represented by
each pixel
Ground Control
Points
GCPs are specific pixels in an image for which the output map
coordinates (or other output coordinates) are known. GCPs consist
of two X,Y pairs of coordinates:
source coordinates: usually data file coordinates in the image
being rectified
reference coordinates: the coordinates of the map or reference
image to which the source image is being registered
The term map coordinates is sometimes used loosely to apply to
reference coordinates and rectified coordinates. These coordinates
are not limited to map coordinates. For example, in image-to-image
registration, map coordinates are not necessary.
GCPs in ERDAS IMAGINE Any ERDAS IMAGINE image can have one GCP set associated with it.
The GCP set is stored in the image file along with the raster layers.
If a GCP set exists for the top file that is displayed in the Viewer, then
those GCPs can be displayed when the GCP Tool is opened.
In the CellArray of GCP data that displays in the GCP Tool, one
column shows the point ID of each GCP. The point ID is a name given
to GCPs in separate files that represent the same geographic
location. Such GCPs are called corresponding GCPs.
A default point ID string is provided (such as GCP #1), but you can
enter your own unique ID strings to set up corresponding GCPs as
needed. Even though only one set of GCPs is associated with an
image file, one GCP set can include GCPs for a number of
rectifications by changing the point IDs for different groups of
corresponding GCPs.
Entering GCPs Accurate GCPs are essential for an accurate rectification. From the
GCPs, the rectified coordinates for all other points in the image are
extrapolated. Select many GCPs throughout the scene. The more
dispersed the GCPs are, the more reliable the rectification is. GCPs
for large-scale imagery might include the intersection of two roads,
airport runways, utility corridors, towers, or buildings. For small-
scale imagery, larger features such as urban areas or geologic
features may be used. Landmarks that can vary (e.g., the edges of
lakes or other water bodies, vegetation, etc.) should not be used.
The source and reference coordinates of the GCPs can be entered in
the following ways:
They may be known a priori, and entered at the keyboard.
Use the mouse to select a pixel from an image in the Viewer. With
both the source and destination Viewers open, enter source
coordinates and reference coordinates for image-to-image
registration.
Use a digitizing tablet to register an image to a hardcopy map.
Information on the use and setup of a digitizing tablet is
discussed in Vector Data.
Digitizing Tablet Option
If GCPs are digitized from a hardcopy map and a digitizing tablet,
accurate base maps must be collected. You should try to match the
resolution of the imagery with the scale and projection of the source
map. For example, 1:24,000 scale USGS quadrangles make good
base maps for rectifying Landsat TM and SPOT imagery. Avoid using
maps over 1:250,000, if possible. Coarser maps (i.e., 1:250,000)
are more suitable for imagery of lower resolution (i.e., AVHRR) and
finer base maps (i.e., 1:24,000) are more suitable for imagery of
finer resolution (i.e., Landsat and SPOT).
Mouse Option
When entering GCPs with the mouse, you should try to match
coarser resolution imagery to finer resolution imagery (i.e., Landsat
TM to SPOT), and avoid stretching resolution spans greater than a
cubic convolution radius (a 4 × 4 area). In other words, you should
not try to match Landsat MSS to SPOT or Landsat TM to an aerial
photograph.
How GCPs are Stored
GCPs entered with the mouse are stored in the image file, and those
entered at the keyboard or digitized using a digitizing tablet are
stored in a separate file with the extension .gcc.
GCP Prediction and
Matching
Automated GCP prediction enables you to pick a GCP in either
coordinate system and automatically locate that point in the other
coordinate system based on the current transformation parameters.
Automated GCP matching is a step beyond GCP prediction. For
image-to-image rectification, a GCP selected in one image is
precisely matched to its counterpart in the other image using the
spectral characteristics of the data and the geometric
transformation. GCP matching enables you to fine tune a rectification
for highly accurate results.
Both of these methods require an existing transformation which
consists of a set of coefficients used to convert the coordinates from
one system to another.
GCP Prediction
GCP prediction is a useful technique to help determine if enough
GCPs have been gathered. After selecting several GCPs, select a
point in either the source or the destination image, then use GCP
prediction to locate the corresponding GCP on the other image
(map). This point is determined based on the current transformation
derived from existing GCPs. Examine the automatically generated
point and see how accurate it is. If it is within an acceptable range
of accuracy, then there may be enough GCPs to perform an accurate
rectification (depending upon how evenly dispersed the GCPs are).
If the automatically generated point is not accurate, then more GCPs
should be gathered before rectifying the image.
GCP prediction can also be used when applying an existing
transformation to another image in a data set. This saves time in
selecting another set of GCPs by hand. Once the GCPs are
automatically selected, those that do not meet an acceptable level of
error can be edited.
GCP Matching
In GCP matching, you can select which layers from the source and
destination images to use. Since the matching process is based on
the reflectance values, select layers that have similar spectral
wavelengths, such as two visible bands or two infrared bands. You
can perform histogram matching to ensure that there is no offset
between the images. You can also select the radius from the
predicted GCP from which the matching operation searches for a
spectrally similar pixel. The search window can be any odd size
between 5 × 5 and 21 × 21.
Histogram matching is discussed in Enhancement.
A correlation threshold is used to accept or discard points. The
correlation ranges from -1.000 to +1.000. The threshold is an
absolute value threshold ranging from 0.000 to 1.000. A value of
0.000 indicates a bad match and a value of 1.000 indicates an exact
match. Values above 0.8000 or 0.9000 are recommended. If a
match cannot be made because the absolute value of the correlation
is less than the threshold, you have the option to discard points.
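The correlation value being thresholded can be illustrated with a short sketch. The following Python function is an illustration of the general idea, not the ERDAS IMAGINE matching algorithm; it computes the normalized correlation between a source chip and a candidate window of the same size:

    import numpy as np

    def normalized_correlation(chip, window):
        """Correlation coefficient (-1.0 to +1.0) between two equally sized arrays."""
        a = chip.astype(float).ravel()
        b = window.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        if denom == 0:
            return 0.0          # flat data; treat as no match
        return float((a * b).sum() / denom)

    # A match is accepted if the absolute correlation exceeds the threshold.
    chip = np.random.randint(0, 255, (9, 9))
    window = chip + np.random.normal(0, 5, chip.shape)   # spectrally similar window
    threshold = 0.8
    print(abs(normalized_correlation(chip, window)) >= threshold)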
Polynomial
Transformation
Polynomial equations are used to convert source file coordinates to
rectified map coordinates. Depending upon the distortion in the
imagery, the number of GCPs used, and their locations relative to
one another, complex polynomial equations may be required to
express the needed transformation. The degree of complexity of the
polynomial is expressed as the order of the polynomial. The order is
simply the highest exponent used in the polynomial.
The order of transformation is the order of the polynomial used in the
transformation. ERDAS IMAGINE allows 1st- through nth-order
transformations. Usually, 1st-order or 2nd-order transformations
are used.
You can specify the order of the transformation you want to use
in the Transform Editor.
A discussion of polynomials and order is included in Math
Topics.
Transformation Matrix
A transformation matrix is computed from the GCPs. The matrix
consists of coefficients that are used in polynomial equations to
convert the coordinates. The size of the matrix depends upon the
order of transformation. The goal in calculating the coefficients of the
transformation matrix is to derive the polynomial equations for which
there is the least possible amount of error when they are used to
transform the reference coordinates of the GCPs into the source
coordinates. It is not always possible to derive coefficients that
produce no error. For example, in Figure 154, GCPs are plotted on a
graph and compared to the curve that is expressed by a polynomial.
Figure 154: Polynomial Curve vs. GCPs
(graph: GCPs plotted against the polynomial curve; the axes are the source X coordinate and the reference X coordinate)
Every GCP influences the coefficients, even if there is not a perfect
fit of each GCP to the polynomial that the coefficients represent. The
distance between the GCP reference coordinate and the curve is
called RMS error, which is discussed later in this chapter. The least
squares regression method is used to calculate the transformation
matrix from the GCPs. This common method is discussed in statistics
textbooks.
Linear Transformations A 1st-order transformation is a linear transformation. It can change:
location in X and/or Y
scale in X and/or Y
skew in X and/or Y
rotation
First-order transformations can be used to project raw imagery to a
planar map projection, convert a planar map projection to another
planar map projection, and when rectifying relatively small image
areas. You can perform simple linear transformations to an image
displayed in a Viewer or to the transformation matrix itself. Linear
transformations may be required before collecting GCPs on the
displayed image. You can reorient skewed Landsat TM data, rotate
scanned quad sheets according to the angle of declination stated in
the legend, and rotate descending data so that north is up.
A 1st-order transformation can also be used for data that are already
projected onto a plane. For example, SPOT and Landsat Level 1B
data are already transformed to a plane, but may not be rectified to
the desired map projection. When doing this type of rectification, it
is not advisable to increase the order of transformation if at first a
high RMS error occurs. Examine other factors first, such as the GCP
source and distribution, and look for systematic errors.
ERDAS IMAGINE provides the following options for 1st-order
transformations:
scale
offset
rotate
reflect
Scale
Scale is the same as the zoom option in the Viewer, except that you
can specify different scaling factors for X and Y.
If you are scaling an image in the Viewer, the zoom option
undoes any changes to the scale that you do, and vice versa.
Offset
Offset moves the image by a user-specified number of pixels in the
X and Y directions. For rotation, you can specify any positive or
negative number of degrees for clockwise and counterclockwise
rotation. Rotation occurs around the center pixel of the image.
Reflection
Reflection options enable you to perform the following operations:
left to right reflection
top to bottom reflection
left to right and top to bottom reflection (equal to a 180°
rotation)
Linear adjustments are available from the Viewer or from the
Transform Editor. You can perform linear transformations in the
Viewer and then load that transformation to the Transform
Editor, or you can perform the linear transformations directly on
the transformation matrix.
Figure 155 illustrates how the data are changed in linear
transformations.
Figure 155: Linear Transformations
(Figure 155 panels: original image; change of scale in X; change of scale in Y; change of skew in X; change of skew in Y; rotation)
The transformation matrix for a 1st-order transformation consists of
six coefficients: three for each coordinate (X and Y):

    a_0   a_1   a_2
    b_0   b_1   b_2
Coefficients are used in a 1st-order polynomial as follows:

    x_o = a_0 + a_1 x + a_2 y
    y_o = b_0 + b_1 x + b_2 y

Where:
x and y are source coordinates (input)
x_o and y_o are rectified coordinates (output)
the coefficients of the transformation matrix are as above

The position of the coefficients in the matrix and the assignment
of the coefficients in the polynomial is an ERDAS IMAGINE
convention. Other representations of a 1st-order transformation
matrix may take a different form.

Nonlinear Transformations Transformations of the 2nd-order or higher are nonlinear
transformations. These transformations can correct nonlinear
distortions. The process of correcting nonlinear distortions is also
known as rubber sheeting. Figure 156 illustrates the effects of some
nonlinear transformations.

Figure 156: Nonlinear Transformations
(panels: original image; some possible outputs)

Second-order transformations can be used to convert Lat/Lon data
to a planar projection, for data covering a large area (to account for
the Earth's curvature), and with distorted data (for example, due to
camera lens distortion). Third-order transformations are used with
distorted aerial photographs, on scans of warped maps, and with
radar imagery. Fourth-order transformations can be used on very
distorted aerial photographs.

The transformation matrix for a transformation of order t contains
this number of coefficients:

    2 \sum_{i=1}^{t+1} i

It is multiplied by two for the two sets of coefficients: one set for X,
one for Y.

An easier way to arrive at the same number is:

    (t + 1)(t + 2)

Clearly, the size of the transformation matrix increases with the
order of the transformation.

Higher Order Polynomials
The polynomial equations for a t-order transformation take this
form:

    x_o = \sum_{i=0}^{t} \sum_{j=0}^{i} a_k \, x^{i-j} \, y^{j}
    y_o = \sum_{i=0}^{t} \sum_{j=0}^{i} b_k \, x^{i-j} \, y^{j}

Where:
t is the order of the polynomial
a_k and b_k are coefficients
the subscript k in a_k and b_k is determined by:

    k = \frac{i (i + 1)}{2} + j

An example of 3rd-order transformation equations for X and Y, using
numbers, is:

    x_o = 5 + 4x + 6y + 10x^2 + 5xy + 1y^2 + 3x^3 + 7x^2 y + 11xy^2 + 4y^3
    y_o = 13 + 12x + 4y + 1x^2 + 21xy + 11y^2 + 1x^3 + 2x^2 y + 5xy^2 + 12y^3

These equations use a total of 20 coefficients, or (3 + 1) (3 + 2).
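To make the transformation matrix concrete, here is a short Python sketch (using NumPy, with hypothetical GCP coordinates) that derives the six 1st-order coefficients by least squares regression and then applies them to a source coordinate:

    import numpy as np

    # Hypothetical GCPs: source (x, y) file coordinates and reference map coordinates.
    src = np.array([[10.0, 10.0], [400.0, 20.0], [30.0, 380.0], [420.0, 400.0]])
    ref = np.array([[5000.0, 8000.0], [6170.0, 7980.0], [5040.0, 6900.0], [6200.0, 6870.0]])

    # Design matrix for a 1st-order polynomial: one row of [1, x, y] per GCP.
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])

    # Least squares solution gives (a0, a1, a2) for X and (b0, b1, b2) for Y.
    a, _, _, _ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
    b, _, _, _ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

    def transform(x, y):
        """Apply xo = a0 + a1*x + a2*y and yo = b0 + b1*x + b2*y."""
        return a[0] + a[1] * x + a[2] * y, b[0] + b[1] * x + b[2] * y

    print(transform(200.0, 200.0))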
Effects of Order The computation and output of a higher-order polynomial equation
are more complex than that of a lower-order polynomial equation.
Therefore, higher-order polynomials are used to perform more
complicated image rectifications. To understand the effects of
different orders of transformation in image rectification, it is helpful
to see the output of various orders of polynomials.
The following example uses only one coordinate (X), instead of two
(X,Y), which are used in the polynomials for rectification. This
enables you to draw two-dimensional graphs that illustrate the way
that higher orders of transformation affect the output image.
NOTE: Because only the X coordinate is used in these examples, the
number of GCPs used is less than the number required to actually
perform the different orders of transformation.
Coefficients like those presented in this example would generally be
calculated by the least squares regression method. Suppose GCPs
are entered with these X coordinates:

    Source X Coordinate (input)    Reference X Coordinate (output)
    1                              17
    2                              9
    3                              1

These GCPs allow a 1st-order transformation of the X coordinates,
which is satisfied by this equation (the coefficients are in
parentheses):

    x_r = (25) + (-8) x_i

Where:
x_r = the reference X coordinate
x_i = the source X coordinate

This equation takes on the same format as the equation of a line (y
= mx + b). In mathematical terms, a 1st-order polynomial is linear.
Therefore, a 1st-order transformation is also known as a linear
transformation. This equation is graphed in Figure 157.
Figure 157: Transformation Example: 1st-Order
(graph of x_r = (25) + (-8) x_i: reference X coordinate plotted against source X coordinate)

However, what if the second GCP were changed as follows?

    Source X Coordinate (input)    Reference X Coordinate (output)
    1                              17
    2                              7
    3                              1

These points are plotted against each other in Figure 158.

Figure 158: Transformation Example: 2nd GCP Changed
(graph: the three GCPs plotted as reference X coordinate against source X coordinate)

A line cannot connect these points, which illustrates that they cannot
be expressed by a 1st-order polynomial, like the one above. In this
case, a 2nd-order polynomial equation expresses these points:

    x_r = (31) + (-16) x_i + (2) x_i^2

Polynomials of the 2nd-order or higher are nonlinear. The graph of
this curve is drawn in Figure 159.
Figure 159: Transformation Example: 2nd-Order
(graph of x_r = (31) + (-16) x_i + (2) x_i^2: reference X coordinate plotted against source X coordinate)

What if one more GCP were added to the list?

    Source X Coordinate (input)    Reference X Coordinate (output)
    1                              17
    2                              7
    3                              1
    4                              5

Figure 160: Transformation Example: 4th GCP Added
(graph: the same 2nd-order curve, with the new GCP at (4,5) falling off the curve)

As illustrated in Figure 160, this fourth GCP does not fit on the curve
of the 2nd-order polynomial equation. To ensure that all of the GCPs
fit, the order of the transformation could be increased to 3rd-order.
The equation and graph in Figure 161 could then result.
Figure 161: Transformation Example: 3rd-Order
(graph of x_r = (25) + (-5) x_i + (-4) x_i^2 + (1) x_i^3: reference X coordinate plotted against source X coordinate)

    Source X Coordinate (input)    Reference X Coordinate (output)
    1                              x_o(1) = 17
    2                              x_o(2) = 7
    3                              x_o(3) = 1
    4                              x_o(4) = 5

Figure 161 illustrates a 3rd-order transformation. However, this
equation may be unnecessarily complex. Performing a coordinate
transformation with this equation may cause unwanted distortions in
the output image for the sake of a perfect fit for all the GCPs. In this
example, a 3rd-order transformation probably would be too high,
because the output pixels would be arranged in a different order than
the input pixels, in the X direction:

    x_o(1) > x_o(2) > x_o(4) > x_o(3)
    17 > 7 > 5 > 1

Figure 162: Transformation Example: Effect of a 3rd-Order Transformation
(diagram: input image X coordinates 1, 2, 3, 4 end up in the output image in the order 3, 4, 2, 1)

In this case, a higher order of transformation would probably not
produce the desired results.
Minimum Number of GCPs Higher orders of transformation can be used to correct more
complicated types of distortion. However, to use a higher order of
transformation, more GCPs are needed. For instance, three points
define a plane. Therefore, to perform a 1st-order transformation,
which is expressed by the equation of a plane, at least three GCPs
are needed. Similarly, the equation used in a 2nd-order
transformation is the equation of a paraboloid. Six points are
required to define a paraboloid. Therefore, at least six GCPs are
required to perform a 2nd-order transformation. The minimum
number of points required to perform a transformation of order t
equals:

    \frac{(t + 1)(t + 2)}{2}
Use more than the minimum number of GCPs whenever possible.
Although it is possible to get a perfect fit, it is rare, no matter how
many GCPs are used.
For 1st- through 10th-order transformations, the minimum number
of GCPs required to perform a transformation is listed in the following
table:
For the best rectification results, you should always use more
than the minimum number of GCPs, and they should be well-
distributed.
Table 50: Number of GCPs per Order of Transformation
Order of Transformation Minimum GCPs Required
1 3
2 6
3 10
4 15
5 21
6 28
7 36
8 45
9 55
10 66
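The table values follow directly from the formula above, as a short Python check (illustrative only):

    def min_gcps(t):
        """Minimum number of GCPs for a transformation of order t: (t+1)(t+2)/2."""
        return (t + 1) * (t + 2) // 2

    print([(t, min_gcps(t)) for t in range(1, 11)])   # [(1, 3), (2, 6), ..., (10, 66)]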
Rubber Sheeting
Triangle-Based Finite Element Analysis
Finite element analysis is a powerful tool for solving complicated
computation problems by breaking them into small, simpler pieces.
It has been widely used as a local interpolation technique in
geographic applications.
geographic applications. For image rectification, the known control
points can be triangulated into many triangles. Each triangle has
three control points as its vertices. Then, the polynomial
transformation can be used to establish mathematical relationships
between source and destination systems for each triangle. Because
the transformation passes exactly through each control point and is
not uniform across the image, finite element analysis is also called
rubber sheeting. It is also known as triangle-based rectification
because the transformation and resampling for image rectification
are performed on a triangle-by-triangle basis.
This triangle-based technique should be used when other
rectification methods such as polynomial transformation and
photogrammetric modeling cannot produce acceptable results.
Triangulation To perform the triangle-based rectification, it is necessary to
triangulate the control points into a mesh of triangles. Watson
(1992) summarized four kinds of triangulation: arbitrary, optimal,
Greedy, and Delaunay. Of the four, the Delaunay triangulation is the
most widely used and is adopted here because of the smaller angle
variations of the resulting triangles.
The Delaunay triangulation can be constructed by the empty
circumcircle criterion: the circumcircle formed from the three points
of any triangle contains no other control point. The triangles defined
this way are as equiangular as possible.
Figure 163 shows an example of the triangle network formed by 13
control points.
Figure 163: Triangle Network
(diagram: a Delaunay triangle network connecting the 13 control points p0 through p12)
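As a sketch of how such a mesh can be generated outside of ERDAS IMAGINE, the following Python code applies SciPy's Delaunay triangulation to a set of hypothetical control point coordinates:

    import numpy as np
    from scipy.spatial import Delaunay

    # Hypothetical control point coordinates (e.g., source file coordinates of GCPs).
    points = np.array([[0, 0], [10, 1], [20, 0], [3, 8], [12, 9],
                       [21, 10], [1, 18], [11, 19], [19, 17], [7, 14],
                       [15, 4], [5, 3], [17, 13]], dtype=float)

    tri = Delaunay(points)

    # Each row of tri.simplices holds the indices of the three control points
    # that form one triangle of the mesh.
    print(len(tri.simplices), "triangles")
    print(tri.simplices[0])

    # find_simplex locates the triangle containing an arbitrary point, i.e. the
    # triangle whose local transformation would be used to rectify that pixel.
    print(tri.find_simplex(np.array([[8.0, 7.0]])))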
Triangle-based
rectification
Once the triangle mesh has been generated and the spatial order of
the control points is available, the geometric rectification can be
done on a triangle-by-triangle basis. This triangle-based method is
appealing because it breaks the entire region into smaller subsets. If
the geometric problem of the entire region is very complicated, the
geometry of each subset can be much simpler and modeled through
simple transformation.
For each triangle, the polynomials can be used as the general
transformation form between source and destination systems.
Linear transformation The easiest and fastest is the linear transformation with the first
order polynomials:

    x_o = a_0 + a_1 x + a_2 y
    y_o = b_0 + b_1 x + b_2 y

There is no need for extra information because there are three
known conditions in each triangle and three unknown coefficients for
each polynomial.

Nonlinear transformation Even though the linear transformation is easy and fast, it has one
disadvantage. The transitions between triangles are not always
smooth. This phenomenon is obvious when shaded relief or contour
lines are derived from the DEM which is generated by the linear
rubber sheeting. It is caused by incorporating the slope change of
the control data at the triangle edges and vertices. In order to
distribute the slope change smoothly across triangles, the nonlinear
transformation with polynomial order larger than one is used by
considering the gradient information.
The fifth order or quintic polynomial transformation is used here as
the nonlinear rubber sheeting technique. It is a smooth function. The
transformation function and its first order partial derivatives are
continuous. It is not difficult to construct (Akima, 1978). The formula
is as follows:

    x_o = \sum_{i=0}^{5} \sum_{j=0}^{i} a_k \, x^{i-j} \, y^{j}
    y_o = \sum_{i=0}^{5} \sum_{j=0}^{i} b_k \, x^{i-j} \, y^{j}
It has 21 coefficients for each polynomial to be determined. To solve
for these unknowns, 21 conditions should be available. For each
vertex of the triangle, one point value is given, and two first order
and three second order partial derivatives can easily be derived by
establishing a second order polynomial using vertices in the
neighborhood of the vertex. This yields a total of 18 conditions. Three
more conditions can be obtained by assuming that the normal partial
derivative on each edge of the triangle is a cubic polynomial, which
means that the sum of the polynomial terms beyond the third order
in the normal partial derivative is zero.
Check Point Analysis It should be emphasized that the independent check point analysis
is critical for determining the accuracy of rubber sheeting modeling.
For an exact modeling method like rubber sheeting, the ground
control points, which are used in the modeling process, retain little
geometric residual. To evaluate the geometric transformation
between source and destination coordinate systems, an accuracy
assessment using independent check points is recommended.
RMS Error RMS error is the distance between the input (source) location of a
GCP and the retransformed location for the same GCP. In other
words, it is the difference between the desired output coordinate for
a GCP and the actual output coordinate for the same point, when the
point is transformed with the geometric transformation.
RMS error is calculated with a distance equation:

    RMS error = \sqrt{ (x_r - x_i)^2 + (y_r - y_i)^2 }

Where:
x_i and y_i are the input source coordinates
x_r and y_r are the retransformed coordinates

RMS error is expressed as a distance in the source coordinate
system. If data file coordinates are the source coordinates, then the
RMS error is a distance in pixel widths. For example, an RMS error of
2 means that the reference pixel is 2 pixels away from the
retransformed pixel.

Residuals and RMS Error Per GCP
The GCP Tool contains columns for the X and Y residuals. Residuals
are the distances between the source and retransformed coordinates
in one direction. They are shown for each GCP. The X residual is the
distance between the source X coordinate and the retransformed X
coordinate. The Y residual is the distance between the source Y
coordinate and the retransformed Y coordinate.
If the GCPs are consistently off in either the X or the Y direction,
more points should be added in that direction. This is a common
problem in off-nadir data.
RMS Error Per GCP
The RMS error of each point is reported to help you evaluate the
GCPs. This is calculated with a distance formula:

    R_i = \sqrt{ XR_i^2 + YR_i^2 }

Where:
R_i = the RMS error for GCP i
XR_i = the X residual for GCP i
YR_i = the Y residual for GCP i

Figure 164 illustrates the relationship between the residuals and the
RMS error per point.

Figure 164: Residuals and RMS Error Per Point
(diagram: the source GCP and the retransformed GCP, separated by the X residual, the Y residual, and the RMS error distance)

Total RMS Error From the residuals, the following calculations are made to determine
the total RMS error, the X RMS error, and the Y RMS error:

    R_x = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} XR_i^2 }

    R_y = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} YR_i^2 }

    T = \sqrt{ R_x^2 + R_y^2 }    or    T = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} ( XR_i^2 + YR_i^2 ) }

Where:
R_x = X RMS error
R_y = Y RMS error
T = total RMS error
n = the number of GCPs
i = GCP number
XR_i = the X residual for GCP i
YR_i = the Y residual for GCP i
Error Contribution by Point
A normalized value representing each point's RMS error in relation to
the total RMS error is also reported. This value is listed in the
Contribution column of the GCP Tool:

    E_i = \frac{R_i}{T}

Where:
E_i = error contribution of GCP i
R_i = the RMS error for GCP i
T = total RMS error

Tolerance of RMS Error In most cases, it is advantageous to tolerate a certain amount of
error rather than take a more complex transformation. The amount
of RMS error that is tolerated can be thought of as a window around
each source coordinate, inside which a retransformed coordinate is
considered to be correct (that is, close enough to use). For example,
if the RMS error tolerance is 2, then the retransformed pixel can be
2 pixels away from the source pixel and still be considered accurate.

Figure 165: RMS Error Tolerance
(diagram: a source pixel with a 2-pixel RMS error tolerance radius; retransformed coordinates within this range are considered correct)

Acceptable RMS error is determined by the end use of the data base,
the type of data being used, and the accuracy of the GCPs and
ancillary data being used. For example, GCPs acquired from GPS
should have an accuracy of about 10 m, but GCPs from 1:24,000-
scale maps should have an accuracy of about 20 m.
It is important to remember that RMS error is reported in pixels.
Therefore, if you are rectifying Landsat TM data and want the
rectification to be accurate to within 30 meters, the RMS error should
not exceed 1.00. Acceptable accuracy depends on the image area
and the particular project.

Evaluating RMS Error To determine the order of polynomial transformation, you can assess
the relative distortion in going from image to map or map to map.
One should start with a 1st-order transformation unless it is known
that it does not work. It is possible to repeatedly compute
transformation matrices until an acceptable RMS error is reached.
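The residual and RMS calculations defined above are easy to check numerically. The following Python sketch uses hypothetical residual values (not output from the GCP Tool) to compute the per-GCP RMS error, the X and Y RMS errors, the total RMS error, and each point's contribution:

    import numpy as np

    # Hypothetical X and Y residuals for five GCPs, in pixels.
    xr = np.array([0.5, -1.2, 0.3, 0.8, -0.4])
    yr = np.array([-0.2, 0.9, 0.1, -1.1, 0.6])

    per_gcp_rms = np.sqrt(xr**2 + yr**2)          # R_i for each GCP
    rx = np.sqrt(np.mean(xr**2))                  # X RMS error
    ry = np.sqrt(np.mean(yr**2))                  # Y RMS error
    total = np.sqrt(rx**2 + ry**2)                # total RMS error T
    contribution = per_gcp_rms / total            # E_i, as listed in the Contribution column

    print(per_gcp_rms, rx, ry, total, contribution)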
Most rectifications are either 1st-order or 2nd-order. The danger
of using higher order rectifications is that the more complicated
the equation for the transformation, the less regular and
predictable the results are. To fit all of the GCPs, there may be
very high distortion in the image.
After each computation of a transformation and RMS error, there are
four options:
Throw out the GCP with the highest RMS error, assuming that
this GCP is the least accurate. Another transformation can then
be computed from the remaining GCPs. A closer fit should be
possible. However, if this is the only GCP in a particular region of
the image, it may cause greater error to remove it.
Tolerate a higher amount of RMS error.
Increase the complexity of transformation, creating more
complex geometric alterations in the image. A transformation
can then be computed that can accommodate the GCPs with less
error.
Select only the points for which you have the most confidence.
Resampling
Methods
The next step in the rectification/registration process is to create the
output file. Since the grid of pixels in the source image rarely
matches the grid for the reference image, the pixels are resampled
so that new data file values for the output file can be calculated.
Figure 166: Resampling
The following resampling methods are supported in ERDAS
IMAGINE:
Nearest Neighbor: uses the value of the closest pixel to assign
to the output pixel value.
Bilinear Interpolation: uses the data file values of four pixels in
a 2 × 2 window to calculate an output value with a bilinear
function.
Cubic Convolution: uses the data file values of sixteen pixels in
a 4 × 4 window to calculate an output value with a cubic function.
Bicubic Spline Interpolation: fits a cubic spline surface through
the current block of points.
In all methods, the number of rows and columns of pixels in the
output is calculated from the dimensions of the output map, which is
determined by the geometric transformation and the cell size. The
output corners (upper left and lower right) of the output file can be
specified. The default values are calculated so that the entire source
file is resampled to the destination file.
If an image to image rectification is being performed, it may be
beneficial to specify the output corners relative to the reference file
system, so that the images are coregistered. In this case, the upper
left X and upper left Y coordinate are 0,0 and not the defaults.
If the output units are pixels, then the origin of the image is the
upper left corner. Otherwise, the origin is the lower left corner.
(Figure 166 steps: 1. The input image with source GCPs. 2. The output grid, with reference GCPs shown. 3. To compare the two grids, the input image is laid over the output grid, so that the GCPs of the two grids fit together. 4. Using a resampling method, the pixel values of the input image are assigned to pixels in the output grid.)
Rectifying to Lat/Lon You can specify the nominal cell size if the output coordinate system
is Lat/Lon. The output cell size for a geographic projection (i.e.,
Lat/Lon) is always in angular units of decimal degrees. However, if
you want the cell to be a specific size in meters, you can enter meters
and calculate the equivalent size in decimal degrees. For example, if
you want the output file cell size to be 30 × 30 meters, then the
program would calculate what this size would be in decimal degrees
and automatically update the output cell size. Since the
transformation between angular (decimal degrees) and nominal
(meters) measurements varies across the image, the transformation
is based on the center of the output file.
Enter the nominal cell size in the Nominal Cell Size dialog.
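The meters-to-degrees conversion can be approximated with a short calculation. This Python sketch uses a simple spherical-Earth approximation (it is not the exact formula used by the software) to estimate the angular cell size for a nominal 30-meter cell at the latitude of the output file center:

    import math

    def nominal_cell_size_degrees(cell_meters, center_lat_degrees):
        """Approximate angular cell size (decimal degrees) for a nominal size in meters."""
        meters_per_degree_lat = 111320.0                       # roughly constant
        meters_per_degree_lon = 111320.0 * math.cos(math.radians(center_lat_degrees))
        return (cell_meters / meters_per_degree_lon,           # longitude (X) direction
                cell_meters / meters_per_degree_lat)           # latitude (Y) direction

    print(nominal_cell_size_degrees(30.0, 34.0))   # about (0.000325, 0.000269) degrees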
Nearest Neighbor To determine an output pixel's nearest neighbor, the rectified
coordinates (x_o, y_o) of the pixel are retransformed back to the source
coordinate system using the inverse of the transformation. The
retransformed coordinates (x_r, y_r) are used in bilinear interpolation
and cubic convolution as well. The pixel that is closest to the
retransformed coordinates (x_r, y_r) is the nearest neighbor. The data
file value(s) for that pixel become the data file value(s) of the pixel
in the output image.

Figure 167: Nearest Neighbor
(diagram: the source pixel nearest to the retransformed coordinates (x_r, y_r) supplies the output value)
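A minimal Python sketch of this rule, assuming the inverse transformation has already produced the retransformed coordinates (x_r, y_r), and using a hypothetical source array:

    import numpy as np

    def nearest_neighbor(source, xr, yr):
        """Pick the value of the source pixel closest to the retransformed (xr, yr)."""
        row = int(round(yr))
        col = int(round(xr))
        row = min(max(row, 0), source.shape[0] - 1)   # clamp to the image
        col = min(max(col, 0), source.shape[1] - 1)
        return source[row, col]

    source = np.arange(25).reshape(5, 5)
    print(nearest_neighbor(source, 2.7, 1.2))   # closest pixel is (row 1, col 3) -> 8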
Bilinear Interpolation In bilinear interpolation, the data file value of the rectified pixel is
based upon the distances between the retransformed coordinate
location (x_r, y_r) and the four closest pixels in the input (source)
image (see Figure 168). In this example, the neighbor pixels are
numbered 1, 2, 3, and 4. Given the data file values of these four
pixels on a grid, the task is to calculate a data file value for r (V_r).
Table 51: Nearest Neighbor Resampling
Advantages:
Transfers original data values without averaging them as the other
methods do; therefore, the extremes and subtleties of the data
values are not lost. This is an important consideration when
discriminating between vegetation types, locating an edge associated
with a lineament, or determining different levels of turbidity or
temperatures in a lake (Jensen, 1996).
Suitable for use before classification.
The easiest of the three methods to compute and the fastest to use.
Appropriate for thematic files, which can have data file values based
on a qualitative (nominal or ordinal) system or a quantitative
(interval or ratio) system. The averaging that is performed with
bilinear interpolation and cubic convolution is not suited to a
qualitative class value system.
Disadvantages:
When this method is used to resample from a larger to a smaller
grid size, there is usually a stair-stepped effect around diagonal lines
and curves.
Data values may be dropped, while other values may be duplicated.
Using it on linear thematic data (e.g., roads, streams) may result in
breaks or gaps in a network of linear data.
Figure 168: Bilinear Interpolation
(diagram: the retransformed coordinate (x_r, y_r) falls at point r among the four closest input pixels 1, 2, 3, and 4; m and n lie on the columns through pixels 1,3 and 2,4; dx and dy are the offsets of r from pixel 1, and D is the distance between pixels)

To calculate V_r, first V_m and V_n are considered. By interpolating V_m
and V_n, you can perform linear interpolation, which is a simple
process to illustrate. If the data file values are plotted in a graph
relative to their distances from one another, then a visual linear
interpolation is apparent. The data file value of m (V_m) is a function
of the change in the data file value between pixels 3 and 1 (that is,
V_3 - V_1).

Figure 169: Linear Interpolation
(graph: data file values plotted against the data file coordinates Y_1, Y_m, Y_3, showing a data file value calculated as a function of spatial distance between two pixels)

The equation for calculating V_m from V_1 and V_3 is:

    V_m = \frac{V_3 - V_1}{D} \, dy + V_1
Where:
Y_i = the Y coordinate for pixel i
V_i = the data file value for pixel i
dy = the distance between Y_1 and Y_m in the source coordinate system
D = the distance between Y_1 and Y_3 in the source coordinate system

If one considers that (V_3 - V_1)/D is the slope of the line in the graph
above, then this equation translates to the equation of a line in
y = mx + b form.

Similarly, the equation for calculating the data file value for n (V_n) in
the pixel grid is:

    V_n = \frac{V_4 - V_2}{D} \, dy + V_2

From V_n and V_m, the data file value for r, which is at the
retransformed coordinate location (x_r, y_r), can be calculated in the
same manner:

    V_r = \frac{V_n - V_m}{D} \, dx + V_m

The following is attained by plugging in the equations for V_m and V_n
into this final equation for V_r:

    V_r = \frac{ \left( \frac{V_4 - V_2}{D} dy + V_2 \right) - \left( \frac{V_3 - V_1}{D} dy + V_1 \right) }{D} \, dx + \frac{V_3 - V_1}{D} \, dy + V_1

    V_r = \frac{ V_1 (D - dx)(D - dy) + V_2 (dx)(D - dy) + V_3 (D - dx)(dy) + V_4 (dx)(dy) }{ D^2 }

In most cases D = 1, since data file coordinates are used as the
source coordinates and data file coordinates increment by 1.

Some equations for bilinear interpolation express the output data file
value as:

    V_r = \sum w_i V_i

Where:
w_i is a weighting factor
The equation above could be expressed in a similar format, in which
the calculation of w_i is apparent:

    V_r = \sum_{i=1}^{4} \frac{ (D - \Delta x_i)(D - \Delta y_i) }{ D^2 } \, V_i

Where:
\Delta x_i = the change in the X direction between (x_r, y_r) and the data file
coordinate of pixel i
\Delta y_i = the change in the Y direction between (x_r, y_r) and the data file
coordinate of pixel i
V_i = the data file value for pixel i
D = the distance between pixels (in X or Y) in the source coordinate
system

For each of the four pixels, the data file value is weighted more if the
pixel is closer to (x_r, y_r).
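The same weighting can be written compactly in Python. This sketch assumes D = 1 (data file coordinates) and uses hypothetical data file values; it illustrates the equations above rather than the ERDAS IMAGINE implementation itself:

    import numpy as np

    def bilinear(source, xr, yr):
        """Bilinear interpolation of the source image at retransformed (xr, yr)."""
        col0, row0 = int(np.floor(xr)), int(np.floor(yr))   # upper left of the 2 x 2 window
        dx, dy = xr - col0, yr - row0
        v1 = source[row0,     col0    ]
        v2 = source[row0,     col0 + 1]
        v3 = source[row0 + 1, col0    ]
        v4 = source[row0 + 1, col0 + 1]
        # Each value counts more the closer its pixel is to (xr, yr).
        return (v1 * (1 - dx) * (1 - dy) + v2 * dx * (1 - dy)
                + v3 * (1 - dx) * dy + v4 * dx * dy)

    source = np.array([[10.0, 20.0], [30.0, 40.0]])
    print(bilinear(source, 0.5, 0.5))   # 25.0, the average of the four values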
See Enhancement for more about convolution filtering.

Table 52: Bilinear Interpolation Resampling
Advantages:
Results in output images that are smoother, without the stair-stepped
effect that is possible with nearest neighbor.
More spatially accurate than nearest neighbor.
This method is often used when changing the cell size of the data,
such as in SPOT/TM merges within the 2 × 2 resampling matrix limit.
Disadvantages:
Since pixels are averaged, bilinear interpolation has the effect of a
low-frequency convolution. Edges are smoothed, and some extremes
of the data file values are lost.

Cubic Convolution Cubic convolution is similar to bilinear interpolation, except that:
a set of 16 pixels, in a 4 × 4 array, are averaged to determine
the output data file value, and
an approximation of a cubic function, rather than a linear
function, is applied to those 16 input values.
To identify the 16 pixels in relation to the retransformed coordinate
(x_r, y_r), the pixel (i,j) is used, such that:

    i = int(x_r)
    j = int(y_r)

This assumes that (x_r, y_r) is expressed in data file coordinates
(pixels). The pixels around (i,j) make up a 4 × 4 grid of input pixels,
as illustrated in Figure 170.

Figure 170: Cubic Convolution
(diagram: a 4 × 4 grid of input pixels surrounding pixel (i,j) and the retransformed coordinate (x_r, y_r))

Since a cubic, rather than a linear, function is used to weight the 16
input pixels, the pixels farther from (x_r, y_r) have exponentially less
weight than those closer to (x_r, y_r).
Several versions of the cubic convolution equation are used in the
field. Different equations have different effects upon the output data
file values. Some convolutions may have more of the effect of a low-
frequency filter (like bilinear interpolation), serving to average and
smooth the values. Others may tend to sharpen the image, like a
high-frequency filter. The cubic convolution used in ERDAS IMAGINE
is a compromise between low-frequency and high-frequency. The
general effect of the cubic convolution depends upon the data.
The formula used in ERDAS IMAGINE is:

    V_r = \sum_{n=1}^{4} [ V(i-1, j+n-2) \, f( d(i-1, j+n-2) + 1 )
                         + V(i,   j+n-2) \, f( d(i,   j+n-2) )
                         + V(i+1, j+n-2) \, f( d(i+1, j+n-2) - 1 )
                         + V(i+2, j+n-2) \, f( d(i+2, j+n-2) - 2 ) ]
Where:
i = int(x_r)
j = int(y_r)
d(i,j) = the distance between a pixel with coordinates (i,j) and (x_r, y_r)
V(i,j) = the data file value of pixel (i,j)
V_r = the output data file value
a = -1 (a constant)
f(x) = the following function:

    f(x) = (a + 2)|x|^3 - (a + 3)|x|^2 + 1        if |x| < 1
    f(x) = a|x|^3 - 5a|x|^2 + 8a|x| - 4a          if 1 < |x| < 2
    f(x) = 0                                      otherwise

Source: Atkinson, 1985

Table 53: Cubic Convolution Resampling
Advantages:
Uses 4 × 4 resampling. In most cases, the mean and standard
deviation of the output pixels match the mean and standard
deviation of the input pixels more closely than any other resampling
method.
The effect of the cubic curve weighting can both sharpen the image
and smooth out noise (Atkinson, 1985). The actual effects depend
upon the data being used.
This method is recommended when you are dramatically changing
the cell size of the data, such as in TM/aerial photo merges (i.e.,
matches the 4 × 4 window more closely than the 2 × 2 window).
Disadvantages:
Data values may be altered.
This method is extremely slow.
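The weighting function f(x) translates directly into code. The following Python sketch evaluates the kernel with the constant a = -1, as stated above; the sample distances are only to illustrate the shape of the curve:

    def cubic_kernel(x, a=-1.0):
        """Cubic convolution weighting function f(x) with constant a."""
        x = abs(x)
        if x < 1:
            return (a + 2) * x**3 - (a + 3) * x**2 + 1
        elif x < 2:
            return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
        return 0.0

    # Weights fall from 1 at distance 0 to 0 at distance 2 and beyond.
    print([round(cubic_kernel(d), 3) for d in (0.0, 0.5, 1.0, 1.5, 2.0)])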
Bicubic Spline
Interpolation
Bicubic Spline Interpolation is based on fitting a cubic spline surface
through the current block of points. The output value is derived from
the fitting surface that will retain the values of the known points. This
algorithm is much slower than other methods of interpolation, but it
has the advantage of giving a more exact fit to the curve without the
oscillations that other interpolation methods can create. Bicubic
Spline Interpolation is so similar to Bilinear Interpolation that unless
you have the need to maximize surface smoothness, you should use
Bilinear Interpolation.
Data Points
The known data points are an m × n array of raster cells:

(diagram: the raster grid, with columns x_1, x_2, ..., x_{m-1}, x_m, rows y_1, y_2, ..., y_{n-1}, y_n, and cell values V_{1,1} through V_{m,n})

Where:
1 ≤ i ≤ m
1 ≤ j ≤ n
d is the cell size of the raster
x_{i+1} = x_i + d
y_{j+1} = y_j + d
V_{i,j} is the cell value at (x_i, y_j)

Equations
A bicubic polynomial function V(x, y) is constructed as follows in each
cell

    R_{ij}(x, y) = \{ x_i \le x \le x_{i+1}, \; y_j \le y \le y_{j+1} \},   i = 1, 2, ..., m;  j = 1, 2, ..., n

    V(x, y) = \sum_{p=0}^{3} \sum_{q=0}^{3} a_{p,q}^{(i,j)} (x - x_i)^p (y - y_j)^q

The functions and their first and second derivatives must be
continuous across the interval and equal at the endpoints, and the
fourth derivatives of the equations should be zero. The function
satisfies the conditions

    V(x_i, y_j) = V_{i,j},   i = 1, 2, ..., m;  j = 1, 2, ..., n

i.e., the spline must interpolate all data points.
Coefficients can be obtained by resolving the known points
together with the selection of the boundary condition type.
Please refer to Shikin and Plis (1995) for the boundary conditions
and the mathematical details for solving the equations. IMAGINE
uses the first type of boundary condition. Because in IMAGINE the
input raster grid has been expanded two cells around the boundary,
the boundary condition has no significant effects on the resampling.

Calculate value for unknown point
The value for point (x_r, y_r) can be calculated by the following
formula:

    V(x_r, y_r) = \sum_{p=0}^{3} \sum_{q=0}^{3} a_{p,q}^{(i_r, j_r)} (x_r - x_{i_r})^p (y_r - y_{j_r})^q

(diagram: the cell (i_r, j_r), of size d × d, that contains the point (x_r, y_r))

The value is determined by 16 coefficients. Because the coefficients
are resolved by using all other known points, all other points
contribute to the value. The nearer points contribute more whereas
the farther points contribute less.
Source: Shikin and Plis, 1995
Map-to-Map
Coordinate
Conversions
There are many instances when you may need to change a map that
is already registered to a planar projection to another projection.
Some examples of when this is required are as follows
(Environmental Systems Research Institute, 1992):
When combining two maps with different projection
characteristics.
When the projection used for the files in the data base does not
produce the desired properties of a map.
When it is necessary to combine data from more than one zone
of a projection, such as UTM or State Plane.
A change in the projection is a geometric change: distances, areas,
and scale are represented differently. Therefore, the conversion
process requires that pixels be resampled.
Resampling causes some of the spectral integrity of the data to be
lost (see the disadvantages of the resampling methods explained
previously). So, it is not usually wise to resample data that have
already been resampled if the accuracy of data file values is
important to the application. If the original unrectified data are
available, it is usually wiser to rectify that data to a second map
projection system than to lose a generation by converting rectified
data and resampling it a second time.
Conversion Process To convert the map coordinate system of any georeferenced image,
ERDAS IMAGINE provides a shortcut to the rectification process. In
this procedure, GCPs are generated automatically along the
intersections of a grid that you specify. The program calculates the
reference coordinates for the GCPs with the appropriate conversion
formula and a transformation that can be used in the regular
rectification process.
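Outside of ERDAS IMAGINE, the same kind of map-to-map coordinate conversion can be sketched with the third-party pyproj library; the EPSG codes and the sample point below are illustrative assumptions, independent of the procedure described above:

    from pyproj import Transformer

    # Geographic (Lat/Lon, WGS 84) to UTM zone 16N; the EPSG codes are illustrative.
    transformer = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)

    lon, lat = -84.39, 33.75                      # a point near Atlanta, Georgia
    easting, northing = transformer.transform(lon, lat)
    print(easting, northing)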
Table 54: Bicubic Spline Interpolation
Advantages:
Results in the smoothest output images.
More spatially accurate than nearest neighbor.
This method is often used when upsampling.
Disadvantages:
The most computationally intensive resampling method, and is
therefore the slowest.
Vector Data Converting the map coordinates of vector data is much easier than
converting raster data. Since vector data are stored by the
coordinates of nodes, each coordinate is simply converted using the
appropriate conversion formula. There are no coordinates between
nodes to extrapolate.
Terrain Analysis
Introduction Terrain analysis involves the processing and graphic simulation of
elevation data. Terrain analysis software functions usually work with
topographic data (also called terrain data or elevation data), in which
an elevation (or Z value) is recorded at each X,Y location. However,
terrain analysis functions are not restricted to topographic data. Any
series of values, such as population densities, ground water pressure
values, magnetic and gravity measurements, and chemical
concentrations, may be used.
Topographic data are essential for studies of trafficability, route
design, nonpoint source pollution, intervisibility, siting of recreation
areas, etc. (Welch, 1990). Especially useful are products derived
from topographic data. These include:
slope images: illustrate changes in elevation over distance.
Slope images are usually color-coded according to the steepness
of the terrain at each pixel.
aspect images: illustrate the prevailing direction that the slope
faces at each pixel.
shaded relief images: illustrate variations in terrain by
differentiating areas that would be illuminated or shadowed by a
light source simulating the sun.
Topographic data and its derivative products have many
applications, including:
calculating the shortest and most navigable path over a
mountain range for constructing a road or routing a transmission
line
determining rates of snow melt based on variations in sun
shadow, which is influenced by slope, aspect, and elevation
Terrain data are often used as a component in complex GIS modeling
or classification routines. They can, for example, be a key to
identifying wildlife habitats that are associated with specific
elevations. Slope and aspect images are often an important factor in
assessing the suitability of a site for a proposed use. Terrain data can
also be used for vegetation classification based on species that are
terrain-sensitive (e.g., Alpine vegetation).
Although this chapter mainly discusses the use of topographic
data, the ERDAS IMAGINE terrain analysis functions can be used
on data types other than topographic data.
See Geographic Information Systems for more information
about GIS modeling.
Terrain Data Terrain data are usually expressed as a series of points with X,Y, and
Z values. When terrain data are collected in the field, they are
surveyed at a series of points including the extreme high and low
points of the terrain along features of interest that define the
topography such as streams and ridge lines, and at various points in
between.
DEM and DTED are expressed as regularly spaced points. To create
DEM and DTED files, a regular grid is overlaid on the topographic
contours. Elevations are read at each grid intersection point, as
shown in Figure 171.
Figure 171: Regularly Spaced Terrain Data Points
Elevation data are derived from ground surveys and through manual
photogrammetric methods. Elevation points can also be generated
through digital orthographic methods.
See Raster and Vector Data Sources for more details on DEM
and DTED data. See Photogrammetric Concepts for more
information on the digital orthographic process.
To make topographic data usable in ERDAS IMAGINE, they must
be represented as a surface, or DEM. A DEM is a one-band image
file where the value of each pixel is a specific elevation value. A
gray scale is used to differentiate variations in terrain.
DEMs can be edited with the Raster Editing capabilities of ERDAS
IMAGINE. See Raster Data for more information.
(Figure 171: a topographic image with a grid overlay, and the resulting DEM of regularly spaced terrain data points (Z values) read at each grid intersection)
Slope Images Slope is expressed as the change in elevation over a certain distance.
In this case, the certain distance is the size of the pixel. Slope is most
often expressed as a percentage, but can also be calculated in
degrees.
Use the Slope function in Image Interpreter to generate a slope
image.
In ERDAS IMAGINE, the relationship between percentage and degree
expressions of slope is as follows:
a 45° angle is considered a 100% slope
a 90° angle is considered a 200% slope
slopes less than 45° fall within the 1 - 100% range
slopes between 45° and 90° are expressed as 100 - 200% slopes
A 3 × 3 pixel window is used to calculate the slope at each pixel. For
a pixel at location X,Y, the elevations around it are used to calculate
the slope as shown in Figure 172. In Figure 172, each pixel has a
ground resolution of 30 × 30 meters.
Figure 172: 3 3 Window Calculates the Slope at Each Pixel
First, the average elevation changes per unit of distance in the x and
y direction (x and y) are calculated as:
a b c
d e f
g h i
Pixel X,Y has
a,b,c,d,f,g,h, and i are the elevations of
elevation e.
the pixels around it in a 3 X 3 window.
10 m 20 m 25 m
22 m
30 m
25 m
20 m 24 m
18 m
x
1
c a =
x
2
f d =
x
3
i g =
y
1
a g =
y
2
b h =
y
3
c i =
x x
1
x
2
x
3
+ + ( ) 3 x
s
=
Slope Images / 414 Field Guide
Where:
a...i = elevation values of pixels in a 3 × 3 window, as shown above
xs = x pixel size = 30 meters
ys = y pixel size = 30 meters
The slope at pixel x,y is calculated as:
s = √((Δx)² + (Δy)²) / 2
if s ≤ 1, percent slope = s × 100
if s > 1, percent slope = 200 - (100 / s)
slope in degrees = tan⁻¹ (s) × (180 / π)
Example
Slope images are often used in road planning. For example, if the Department of Transportation specifies a maximum of 15% slope on any road, it would be possible to recode all slope values that are greater than 15% as unsuitable for road building.
A hypothetical example is given in Figure 173, which shows how the slope is calculated for a single pixel.
Figure 173: Slope Calculation Example
[Figure 173 content: a 3 × 3 window of elevations: 10 m, 20 m, 25 m in the top row; 22 m, (shaded center pixel), 25 m in the middle row; 20 m, 24 m, 18 m in the bottom row. The pixel for which slope is being calculated is shaded. The elevations of the neighboring pixels are given in meters.]
So, for the hypothetical example:
Δx1 = 25 - 10 = 15        Δy1 = 10 - 20 = -10
Δx2 = 25 - 22 = 3         Δy2 = 20 - 24 = -4
Δx3 = 18 - 20 = -2        Δy3 = 25 - 18 = 7
Δx = (15 + 3 - 2) / (30 × 3) = 0.177
Δy = (-10 - 4 + 7) / (30 × 3) = -0.078
For the example, the slope is:
s = √((0.177)² + (0.078)²) / 2 = 0.0967
percent slope = 0.0967 × 100 = 9.67%
slope in degrees = tan⁻¹ (0.0967) × 57.30 = 5.54
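For readers who want to experiment with the calculation outside of ERDAS IMAGINE, the following Python/NumPy fragment is a minimal sketch of the 3 × 3 slope method described above. It assumes the DEM is a NumPy array of elevations with square pixels; the function name slope_3x3 is illustrative and is not an ERDAS IMAGINE function.

import numpy as np

def slope_3x3(dem, pixel_size=30.0):
    """Percent and degree slope for each interior pixel of a DEM,
    using the averaged 3 x 3 elevation differences described above."""
    a = dem[:-2, :-2]; b = dem[:-2, 1:-1]; c = dem[:-2, 2:]
    d = dem[1:-1, :-2];                    f = dem[1:-1, 2:]
    g = dem[2:, :-2];  h = dem[2:, 1:-1];  i = dem[2:, 2:]

    # average elevation change per unit of distance in x and y
    dx = ((c - a) + (f - d) + (i - g)) / (3 * pixel_size)
    dy = ((a - g) + (b - h) + (c - i)) / (3 * pixel_size)

    s = np.sqrt(dx**2 + dy**2) / 2.0
    percent = np.where(s <= 1, s * 100.0, 200.0 - 100.0 / np.maximum(s, 1e-9))
    degrees = np.degrees(np.arctan(s))
    return percent, degrees

# The hypothetical window from Figure 173 (the center elevation is not used)
window = np.array([[10.0, 20.0, 25.0],
                   [22.0, 30.0, 25.0],
                   [20.0, 24.0, 18.0]])
pct, deg = slope_3x3(window)
print(pct[0, 0], deg[0, 0])   # approximately 9.7 percent and 5.5 degrees

This reproduces the 9.67% and 5.54 degree values above to within rounding of the intermediate results.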
Aspect Images
An aspect image is an image file that is gray scale coded according to the prevailing direction of the slope at each pixel. Aspect is expressed in degrees from north, clockwise, from 0 to 360. Due north is 0 degrees. A value of 90 degrees is due east, 180 degrees is due south, and 270 degrees is due west. A value of 361 degrees is used to identify flat surfaces such as water bodies.
Use the Aspect function in Image Interpreter to generate an
aspect image.
As with slope calculations, aspect uses a 3 × 3 window around each pixel to calculate the prevailing direction it faces. For pixel x,y with the following elevation values around it, the average changes in elevation in both x and y directions are calculated first. Each pixel is 30 × 30 meters in the following example:
Figure 174: 3 × 3 Window Calculates the Aspect at Each Pixel
[Figure 174 content: a 3 × 3 window of pixels labeled
    a  b  c
    d  e  f
    g  h  i
Pixel X,Y has elevation e; a, b, c, d, f, g, h, and i are the elevations of the pixels around it in a 3 × 3 window.]
Δx1 = c - a        Δy1 = a - g
Δx2 = f - d        Δy2 = b - h
Δx3 = i - g        Δy3 = c - i
Δx = (Δx1 + Δx2 + Δx3) / 3
Δy = (Δy1 + Δy2 + Δy3) / 3
Where:
a...i = elevation values of pixels in a 3 × 3 window, as shown above
If Δx = 0 and Δy = 0, then the aspect is flat (coded to 361 degrees). Otherwise, θ is calculated as:
θ = tan⁻¹ (Δx / Δy)
Note that θ is calculated in radians.
Then, aspect is 180 + θ (in degrees).
Example
Aspect files are used in many of the same applications as slope files.
In transportation planning, for example, north facing slopes are
often avoided. Especially in northern climates, these would be
exposed to the most severe weather and would hold snow and ice
the longest. It would be possible to recode all pixels with north facing
aspects as undesirable for road building.
A hypothetical example is given in Figure 175, which shows how the
aspect is calculated for a single pixel.
Figure 175: Aspect Calculation Example
[Figure 175 content: the same 3 × 3 window of elevations used in the slope example: 10 m, 20 m, 25 m in the top row; 22 m, (shaded center pixel), 25 m in the middle row; 20 m, 24 m, 18 m in the bottom row. The pixel for which aspect is being calculated is shaded. The elevations of the neighboring pixels are given in meters.]
So, for the hypothetical example:
Δx = (15 + 3 - 2) / 3 = 5.33
Δy = (-10 - 4 + 7) / 3 = -2.33
θ = tan⁻¹ (5.33 / -2.33) = 1.98 radians
1.98 radians = 113.6 degrees
aspect = 180 + 113.6 = 293.6 degrees
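A comparable Python/NumPy sketch of the aspect calculation, under the same assumptions as the slope example (illustrative function name, NumPy array of elevations), is shown below. The quadrant of θ is resolved with arctan2, which is one way to reproduce the 1.98-radian value used above.

import numpy as np

def aspect_3x3(dem):
    """Aspect in degrees from north (0-360), or 361 for flat pixels,
    for each interior pixel of a DEM, following the 3 x 3 method above."""
    a = dem[:-2, :-2]; b = dem[:-2, 1:-1]; c = dem[:-2, 2:]
    d = dem[1:-1, :-2];                    f = dem[1:-1, 2:]
    g = dem[2:, :-2];  h = dem[2:, 1:-1];  i = dem[2:, 2:]

    dx = ((c - a) + (f - d) + (i - g)) / 3.0
    dy = ((a - g) + (b - h) + (c - i)) / 3.0

    # theta = tan-1(dx / dy), resolved to the correct quadrant
    theta = np.degrees(np.arctan2(dx, dy))
    aspect = 180.0 + theta
    return np.where((dx == 0) & (dy == 0), 361.0, aspect)

window = np.array([[10.0, 20.0, 25.0],
                   [22.0, 30.0, 25.0],
                   [20.0, 24.0, 18.0]])
print(aspect_3x3(window)[0, 0])   # approximately 293.6 degrees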
Shaded Relief
A shaded relief image provides an illustration of variations in elevation. Based on a user-specified position of the sun, areas that would be in sunlight are highlighted and areas that would be in shadow are shaded. Shaded relief images are generated from an elevation surface, alone or in combination with an image file draped over the terrain.
It is important to note that the relief program identifies shadowed areas, i.e., those that are not in direct sun. It does not calculate the shadow that is cast by topographic features onto the surrounding surface.
For example, a high mountain with sunlight coming from the
northwest would be symbolized as follows in shaded relief. Only the
portions of the mountain that would be in shadow from a northwest
light would be shaded. The software would not simulate a shadow
that the mountain would cast on the southeast side.
Figure 176: Shaded Relief
Shaded relief images are an effective graphic tool. They can also be
used in analysis, e.g., snow melt over an area spanned by an
elevation surface. A series of relief images can be generated to
simulate the movement of the sun over the landscape. Snow melt
rates can then be estimated for each pixel based on the amount of
time it spends in sun or shadow. Shaded relief images can also be
used to enhance subtle detail in gray scale images such as
aeromagnetic, radar, gravity maps, etc.
Use the Shaded Relief function in Image Interpreter to generate
a relief image.
In calculating relief, the software compares the user-specified sun
position and angle with the angle each pixel faces. Each pixel is
assigned a value between -1 and +1 to indicate the amount of light
reflectance at that pixel.
Negative numbers and zero values represent shadowed areas.
Positive numbers represent sunny areas, with +1 assigned to the
areas of highest reflectance.
The reflectance values are then applied to the original pixel values to
get the final result. All negative values are set to 0 or to the minimum
light level specified by you. These indicate shadowed areas. Light
reflectance in sunny areas falls within a range of values depending
on whether the pixel is directly facing the sun or not. (In the example
above, pixels facing northwest would be the brightest. Pixels facing
north-northwest and west-northwest would not be quite as bright.)
In a relief file, which is a DEM that shows surface relief, the surface
reflectance values are multiplied by the color lookup values for the
image file.
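The general idea can be sketched as follows, assuming slope and aspect rasters in degrees and a Python/NumPy environment. This is only an illustration of the reflectance-scaling concept described above, not the exact algorithm used by the Shaded Relief function.

import numpy as np

def shade_image(image, slope_deg, aspect_deg, sun_elev_deg, sun_azim_deg,
                min_light=0.0):
    """Scale image values by a cosine-of-incidence shading term.
    Reflectance falls between -1 and +1; values at or below zero
    (shadowed areas) are raised to the minimum light level."""
    sun_elev = np.radians(sun_elev_deg)
    sun_azim = np.radians(sun_azim_deg)
    slope = np.radians(slope_deg)
    aspect = np.radians(aspect_deg)

    reflectance = (np.cos(np.pi / 2 - sun_elev) * np.cos(slope)
                   + np.sin(np.pi / 2 - sun_elev) * np.sin(slope)
                   * np.cos(sun_azim - aspect))
    reflectance = np.clip(reflectance, min_light, 1.0)
    return image * reflectance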
Topographic Normalization
Digital imagery from mountainous regions often contains a radiometric distortion known as topographic effect. Topographic effect results from the differences in illumination due to the angle of the sun and the angle of the terrain. This causes a variation in the image brightness values. Topographic effect is a combination of:
incident illumination: the orientation of the surface with respect to the rays of the sun
exitance angle: the amount of reflected energy as a function of the slope angle
surface cover characteristics: rugged terrain with high mountains or steep slopes (Hodgson and Shelley, 1994)
One way to reduce topographic effect in digital imagery is by
applying transformations based on the Lambertian or Non-
Lambertian reflectance models. These models normalize the
imagery, which makes it appear as if it were a flat surface.
The Topographic Normalize function in Image Interpreter uses a
Lambertian Reflectance model to normalize topographic effect in
VIS/IR imagery.
When using the Topographic Normalization model, the following
information is needed:
solar elevation and azimuth at time of image acquisition
DEM file
original imagery file (after atmospheric corrections)
Lambertian Reflectance Model
The Lambertian Reflectance model assumes that the surface reflects incident solar energy uniformly in all directions, and that variations in reflectance are due to the amount of incident radiation.
The following equation produces normalized brightness values
(Colby, 1991; Smith et al, 1980):
BVnormal = BVobserved / cos i
Where:
BVnormal = normalized brightness values
BVobserved = observed brightness values
cos i = cosine of the incidence angle
Incidence Angle
The incidence angle is defined from:
cos i = cos (90 - θs) cos θn + sin (90 - θs) sin θn cos (φs - φn)
Where:
i = the angle between the solar rays and the normal to the surface
θs = the elevation of the sun
φs = the azimuth of the sun
θn = the slope of each surface element
φn = the aspect of each surface element
If the surface has a slope of 0 degrees, then aspect is undefined and i is simply 90 - θs.
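A minimal Python/NumPy sketch of the Lambertian correction is given below, assuming the solar angles and the slope and aspect rasters derived from the DEM are all available in degrees. The small clamp applied to cos i is an illustrative safeguard against division by very small numbers, not part of the published model.

import numpy as np

def lambertian_normalize(bv_observed, slope_deg, aspect_deg,
                         sun_elev_deg, sun_azim_deg):
    """BVnormal = BVobserved / cos i, with cos i from the incidence
    angle equation above (all angles converted to radians)."""
    theta_s = np.radians(sun_elev_deg)   # elevation of the sun
    phi_s = np.radians(sun_azim_deg)     # azimuth of the sun
    theta_n = np.radians(slope_deg)      # slope of each surface element
    phi_n = np.radians(aspect_deg)       # aspect of each surface element

    cos_i = (np.cos(np.pi / 2 - theta_s) * np.cos(theta_n)
             + np.sin(np.pi / 2 - theta_s) * np.sin(theta_n)
             * np.cos(phi_s - phi_n))

    # keep cos i away from zero so shadowed slopes do not blow up
    cos_i = np.maximum(cos_i, 0.01)
    return bv_observed / cos_i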
Non-Lambertian Model
Minnaert (Minnaert and Szeicz, 1961) proposed that the observed surface does not reflect incident solar energy uniformly in all directions. Instead, he formulated the Non-Lambertian model, which takes into account variations in the terrain. This model, although more computationally demanding than the Lambertian model, may present more accurate results.
In a Non-Lambertian Reflectance model, the following equation is
used to normalize the brightness values in the image (Colby, 1991;
Smith et al, 1980):
BVnormal = (BVobserved cos e) / (cos^k i cos^k e)
Where:
BVnormal = normalized brightness values
BVobserved = observed brightness values
cos i = cosine of the incidence angle
cos e = cosine of the exitance angle, or slope angle
k = the empirically derived Minnaert constant
Minnaert Constant
The Minnaert constant (k) may be found by regressing a set of observed brightness values from the remotely sensed imagery with known slope and aspect values, provided that all the observations in this set are the same type of land cover. The k value is the slope of the regression line (Hodgson and Shelley, 1994):
log (BVobserved cos e) = log BVnormal + k log (cos i cos e)
Use the Spatial Modeler to create a model based on the Non-
Lambertian model.
NOTE: The Non-Lambertian model does not detect surfaces that are
shadowed by intervening topographic features between each pixel
and the sun. For these areas, a line-of-sight algorithm can identify
such shadowed pixels.
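The regression and correction described above can be sketched as follows, assuming samples of observed brightness values with their cos i and cos e values for a single land cover type, all strictly positive. The code is a Python/NumPy illustration; the function names are not part of ERDAS IMAGINE.

import numpy as np

def minnaert_k(bv_observed, cos_i, cos_e):
    """Estimate the Minnaert constant k as the slope of the regression
    of log(BV cos e) on log(cos i cos e) for one land cover type."""
    y = np.log(bv_observed * cos_e)
    x = np.log(cos_i * cos_e)
    k, _intercept = np.polyfit(x, y, 1)
    return k

def minnaert_normalize(bv_observed, cos_i, cos_e, k):
    """Apply the Non-Lambertian correction:
    BVnormal = (BVobserved cos e) / (cos^k i cos^k e)."""
    return (bv_observed * cos_e) / (cos_i**k * cos_e**k)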
Geographic Information Systems
Introduction
The dawning of GIS can legitimately be traced back to the beginning
of the human race. The earliest known map dates back to 2500
B.C.E., but there were probably maps before that time. Since then,
humans have been continually improving the methods of conveying
spatial information. The mid-eighteenth century brought the use of
map overlays to show troop movements in the Revolutionary War.
This could be considered an early GIS. The first British census in
1825 led to the science of demography, another application for GIS.
During the 1800s, many different cartographers and scientists were
all discovering the power of overlays to convey multiple levels of
information about an area (Star and Estes, 1990).
Frederick Law Olmstead has long been considered the father of
Landscape Architecture for his pioneering work in the early 20th
century. Many of the methods Olmstead used in Landscape
Architecture also involved the use of hand-drawn overlays. This type
of analysis was beginning to be used for a much wider range of
applications, such as change detection, urban planning, and
resource management (Rado, 1992).
The first system to be called a GIS was the Canadian Geographic
Information System, developed in 1962 by Roger Tomlinson of the
Canada Land Inventory. Unlike earlier systems that were developed
for a specific application, this system was designed to store digitized
map data and land-based attributes in an easily accessible format for
all of Canada. This system is still in operation today (Parent and
Church, 1987).
In 1969, Ian McHarg's influential work, Design with Nature, was
published. This work on land suitability/capability analysis (SCA), a
system designed to analyze many data layers to produce a plan map,
discussed the use of overlays of spatially referenced data layers for
resource planning and management (Star and Estes, 1990).
The era of modern GIS really started in the 1970s, as analysts began
to program computers to automate some of the manual processes.
Software companies like ESRI and ERDAS developed software
packages that could input, display, and manipulate geographic data
to create new layers of information. The steady advances in features
and power of the hardware over the last ten years, and the decrease
in hardware costs, have made GIS technology accessible to a wide
range of users. The growth rate of the GIS industry in the last several
years has exceeded even the most optimistic projections.
Today, a GIS is a unique system designed to input, store, retrieve,
manipulate, and analyze layers of geographic data to produce
interpretable information. A GIS should also be able to create reports
and maps (Marble, 1990). The GIS database may include computer
images, hardcopy maps, statistical data, or any other data that is
needed in a study. Although the term GIS is commonly used to
describe software packages, a true GIS includes knowledgeable
staff, a training program, budgets, marketing, hardware, data, and
software (Walker and Miller, 1990). GIS technology can be used in
almost any geography-related discipline, from Landscape
Architecture to natural resource management to transportation
routing.
The central purpose of a GIS is to turn geographic data into useful information: the answers to real-life questions such as:
How can we monitor the influence of global climatic changes on the Earth's resources?
How should political districts be redrawn in a growing
metropolitan area?
Where is the best place for a shopping center that is most
convenient to shoppers and least harmful to the local ecology?
What areas should be protected to ensure the survival of
endangered species?
How can communities be better prepared to face natural
disasters, such as earthquakes, tornadoes, hurricanes, and
floods?
Information vs. Data
Information, as opposed to data, is independently meaningful. It is relevant to a particular problem or question:
"The land cover at coordinate N875250, E757261 has a data file value 8," is data.
"Land cover with a value of 8 are on slopes too steep for development," is information.
You can input data into a GIS and output information. The
information you wish to derive determines the type of data that must
be input. For example, if you are looking for a suitable refuge for bald
eagles, zip code data is probably not needed, while land cover data
may be useful.
For this reason, the first step in any GIS project is usually an
assessment of the scope and goals of the study. Once the project is
defined, you can begin the process of building the database.
Although software and data are commercially available, a custom
database must be created for the particular project and study area.
The database must be designed to meet the needs of the
organization and objectives. ERDAS IMAGINE provides tools required
to build and manipulate a GIS database.
Successful GIS implementation typically includes two major steps:
data input
analysis
Data input involves collecting the necessary data layers into a GIS
database. In the analysis phase, these data layers are combined and
manipulated in order to create new layers and to extract meaningful
information from them. This chapter discusses these steps in detail.
Data Input Acquiring the appropriate data for a project involves creating a
database of layers that encompasses the study area. A database
created with ERDAS IMAGINE can consist of:
continuous layers (satellite imagery, aerial photographs,
elevation data, etc.)
thematic layers (land use, vegetation, hydrology, soils, slope,
etc.)
vector layers (streets, utility and communication lines, parcels,
etc.)
statistics (frequency of an occurrence, population demographics,
etc.)
attribute data (characteristics of roads, land, imagery, etc.)
The ERDAS IMAGINE software package employs a hierarchical,
object-oriented architecture that utilizes both raster imagery and
topological vector data. Raster images are stored in image files, and
vector layers are coverages or shapefiles based on the ESRI ArcInfo
and ArcView data models. The seamless integration of these two
types of data enables you to reap the benefits of both data formats
in one system.
Figure 177: Data Input
[Figure 177 content: raster data input (Landsat TM, SPOT panchromatic, aerial photograph, soils data, land cover) with raster attributes, and vector data input (roads, census data, ownership parcels, political boundaries, landmarks) with vector attributes, brought together by a GIS analyst using ERDAS IMAGINE.]
Raster data might be more appropriate in the following applications:
site selection
natural resource management
petroleum exploration
mission planning
change detection
On the other hand, vector data may be better suited for these
applications:
urban planning
tax assessment and planning
traffic engineering
facilities management
The advantage of an integrated raster and vector system such as
ERDAS IMAGINE is that one data structure does not have to be
chosen over the other. Both data formats can be used and the
functions of both types of systems can be accessed. Depending upon
the project, only raster or vector data may be needed, but most
applications benefit from using both.
Themes and Layers
A database usually consists of files with data of the same
geographical area, with each file containing different types of
information. For example, a database for the city recreation
department might include files of all the parks in the area. These files
might depict park boundaries, county and municipal boundaries,
vegetation types, soil types, drainage basins, slope, roads, etc. Each
of these files contains different information; each is a different
theme. The concept of themes has evolved from early GISs, in which
transparent overlays were created for each theme and combined
(overlaid) in different ways to derive new information.
A single theme may require more than a simple raster or vector file
to fully describe it. In addition to the image, there may be attribute
data that describe the information, a color scheme, or meaningful
annotation for the image. The full collection of data that describe a
certain theme is called a layer.
Depending upon the goals of a project, it may be helpful to combine
several themes into one layer. For example, if you want to propose
a new park site, you might create one layer that shows roads, land
cover, land ownership, slope, etc., and indicate through the use of
colors and/or annotation which areas would be best for the new site.
This one layer would then include many separate themes. Much of
GIS analysis is concerned with combining individual themes into one
or more layers that answer the questions driving the analysis. This
chapter explores these analysis techniques.
Continuous Layers
Continuous raster layers are quantitative (measuring a
characteristic) and have related, continuous values. Continuous
raster layers can be multiband (e.g., Landsat TM) or single band
(e.g., SPOT panchromatic).
Satellite images, aerial photographs, elevation data, scanned maps,
and other continuous raster layers can be incorporated into a
database and provide a wealth of information that is not available in
thematic layers or vector layers. In fact, these layers often form the
foundation of the database. Extremely accurate base maps can be
created from rectified satellite images or aerial photographs. Then,
all other layers that are added to the database can be registered to
this base map.
Once used only for image processing, continuous data are now being
incorporated into GIS databases and used in combination with
thematic data to influence processing algorithms or as backdrop
imagery on which to display the results of analyses. Current satellite
data and aerial photographs are also effective in updating outdated
vector data. The vectors can be overlaid on the raster backdrop and
updated dynamically to reflect new or changed features, such as
roads, utility lines, or land use. This chapter explores the many uses
of continuous data in a GIS.
See Raster Data for more information on continuous data.
Thematic Layers
Thematic data are typically represented as single layers of
information stored as image files and containing discrete classes.
Classes are simply categories of pixels which represent the same
condition. An example of a thematic layer is a vegetation
classification with discrete classes representing coniferous forest,
deciduous forest, wetlands, agriculture, urban, etc.
A thematic layer is sometimes called a variable, because it
represents one of many characteristics about the study area. Since
thematic layers usually have only one band, they are usually
displayed in pseudo color mode, where particular colors are often
assigned to help visualize the information. For example, blues are
usually used for water features, greens for healthy vegetation, etc.
See Image Display for information on pseudo color display.
Class Numbering Systems
As opposed to the data file values of continuous raster layers, which
are generally multiband and statistically related, the data file values
of thematic raster layers can have a nominal, ordinal, interval, or
ratio relationship (Star and Estes, 1990).
Nominal classes represent categories with no particular order.
Usually, these are characteristics that are not associated with
quantities (e.g., soil type or political area).
Ordinal classes are those that have a sequence, such as poor,
good, better, and best. An ordinal class numbering system is
often created from a nominal system, in which classes have been
ranked by some criteria. In the case of the recreation department
database used in the previous example, the final layer may rank
the proposed park sites according to their overall suitability.
Interval classes also have a natural sequence, but the distance
between each value is meaningful as well. This numbering
system might be used for temperature data.
Ratio classes differ from interval classes only in that ratio classes
have a natural zero point, such as rainfall amounts.
The variable being analyzed, and the way that it contributes to the
final product, determines the class numbering system used in the
thematic layers. Layers that have one numbering system can easily
be recoded to a new system. This is discussed in detail under
"Recoding".
Classification
Thematic layers can be generated from remotely sensed data (e.g.,
Landsat TM, SPOT) by using the ERDAS IMAGINE Image Interpreter,
Classification, and Spatial Modeler tools. A frequent and popular
application is the creation of land cover classification schemes
through the use of both supervised (user-assisted) and unsupervised
(automatic) pattern-recognition algorithms contained within ERDAS
IMAGINE. The output is a single thematic layer that represents
specific classes based on the approach selected.
See Classification for more information.
Vector Data Converted to Raster Format
Vector layers can be converted to raster format if the raster format
is more appropriate for an application. Typical vector layers, such as
communication lines, streams, boundaries, and other linear
features, can easily be converted to raster format within ERDAS
IMAGINE for further analysis. Spatial Modeler automatically converts
vector layers to raster for processing.
Use the Vector Utilities menu from the Vector icon in the ERDAS
IMAGINE icon panel to convert vector layers to raster format, or
use the vector layers directly in Spatial Modeler.
Other sources of raster data are discussed in Raster and Vector
Data Sources.
Statistics
Both continuous and thematic layers include statistical information.
Thematic layers contain the following information:
a histogram of the data values, which is the total number of
pixels in each class
a list of class names that correspond to class values
a list of class values
a color table, stored as brightness values in red, green, and blue,
which make up the colors of each class when the layer is
displayed
For thematic data, these statistics are called attributes and may be
accompanied by many other types of information, as described in
"Attributes".
Use the Image Information option on the Viewer's tool bar to
generate or update statistics for image files.
See Raster Data for more information about the statistics
stored with continuous layers.
Vector Layers
The vector layers used in ERDAS IMAGINE are based on the ArcInfo
data model and consist of points, lines, and polygons. These layers
are topologically complete, meaning that the spatial relationships
between features are maintained. Vector layers can be used to
represent transportation routes, utility corridors, communication
lines, tax parcels, school zones, voting districts, landmarks,
population density, etc. Vector layers can be analyzed independently
or in combination with continuous and thematic raster layers.
In ERDAS IMAGINE, vector layers may also be shapefiles based on
the ArcView data model.
Vector data can be acquired from several private and governmental
agencies. Vector data can also be created in ERDAS IMAGINE by
digitizing on the screen, using a digitizing tablet, or converting other
data types to vector format.
See Vector Data for more information on the characteristics of
vector data.
Attributes
Text and numerical data that are associated with the classes of a
thematic layer or the features in a vector layer are called attributes.
This information can take the form of character strings, integer
numbers, or floating point numbers. Attributes work much like the
data that are handled by database management software. You may
define fields, which are categories of information about each class. A
record is the set of all attribute data for one class. Each record is like
an index card, containing information about one class or feature in a
file of many index cards, which contain similar information for the
other classes or features.
Attribute information for raster layers is stored in the image file.
Vector attribute information is stored in either an INFO file, dbf file,
or SDE database. In both cases, there are fields that are
automatically generated by the software, but more fields can be
added as needed to fully describe the data. Both are viewed in
CellArrays, which allow you to display and manipulate the
information. However, raster and vector attributes are handled
slightly differently, so a separate section on each follows.
Raster Attributes
In ERDAS IMAGINE, raster attributes for image files are accessible
from the Raster Attribute Editor. The Raster Attribute Editor contains
a CellArray, which is similar to a table or spreadsheet that not only
displays the information, but also includes options for importing,
exporting, copying, editing, and other operations.
Figure 178 shows the attributes for a land cover classification layer.
Figure 178: Raster Attributes for lnlandc.img
Most thematic layers contain the following attribute fields:
Class Name
Class Value
Color table (red, green, and blue values)
Opacity percentage
Histogram (number of pixels in the file that belong to the class)
As many additional attribute fields as needed can be defined for each
class.
See Classification for more information about the attribute
information that is automatically generated when new thematic
layers are created in the classification process.
Viewing Raster Attributes
Simply viewing attribute information can be a valuable analysis tool.
Depending on the type of information associated with the layers of a
database, processing may be further refined by comparing the
attributes of several files. When both the raster layer and its
associated attribute information are displayed, you can select
features in one using the other. For example, to locate the class
name associated with a particular area in a displayed image, simply
click in that area with the mouse and the associated row is
highlighted in the Raster Attribute Editor.
Attribute information is accessible in several places throughout
ERDAS IMAGINE. In some cases it is read-only and in other cases it
is a fully functioning editor, allowing the information to be modified.
Manipulating Raster Attributes
The applications for manipulating attributes are as varied as the
applications for GIS. The attribute information in a database depends
on the goals of the project. Some of the attribute editing capabilities
in ERDAS IMAGINE include:
import/export ASCII information to and from other software
packages, such as spreadsheets and word processors
cut, copy, and paste individual cells, rows, or columns to and
from the same Raster Attribute Editor or among several Raster
Attribute Editors
generate reports that include all or a subset of the information in
the Raster Attribute Editor
use formulas to populate cells
directly edit cells by entering new information
The Raster Attribute Editor in ERDAS IMAGINE also includes a color
cell column, so that class (object) colors can be viewed or changed.
In addition to direct manipulation, attributes can be changed by
other programs. For example, some of the Image Interpreter
functions calculate statistics that are automatically added to the
Raster Attribute Editor. Models that read and/or modify attribute
information can also be written.
See Enhancement for more information on the Image
Interpreter. There is more information on GIS modeling in
"Graphical Modeling".
Vector Attributes
Vector attributes are stored in the Vector Attributes CellArrays. You
can simply view attributes or use them to:
select features in a vector layer for further processing
determine how vectors are symbolized
label features
Figure 179 shows the attributes for a roads layer.
Figure 179: Vector Attributes CellArray
See Vector Data for more information about vector attributes.
Analysis
ERDAS IMAGINE Analysis Tools
In ERDAS IMAGINE, GIS analysis functions and algorithms are
accessible through three main tools:
script models created with SML
graphical models created with Model Maker
prepackaged functions in Image Interpreter
Spatial Modeler Language
SML is the basis for all ERDAS IMAGINE GIS functions. It is a
modeling language that enables you to create script (text) models
for a variety of applications. Models may be used to create custom
algorithms that best suit your data and objectives.
Model Maker
Model Maker is essentially SML linked to a graphical interface. This
enables you to create graphical models using a palette of easy-to-
use tools. Graphical models can be run, edited, saved in libraries, or
converted to script form and edited further, using SML.
NOTE: References to the Spatial Modeler in this chapter mean that
the named procedure can be accomplished using both Model Maker
and SML.
Image Interpreter
The Image Interpreter houses a set of common functions that were
all created using either Model Maker or SML. They have been given
a dialog interface to match the other processes in ERDAS IMAGINE.
In most cases, these processes can be run from a single dialog.
However, the actual models are also provided with the software to
enable customized processing.
Many of the functions described in the following sections can be
accomplished using any of these tools. Model Maker is also easy to
use and utilizes many of the same steps that would be performed
when drawing a flow chart of an analysis. SML is intended for more
advanced analyses, and has been designed using natural language
commands and simple syntax rules. Some applications may require
a combination of these tools.
Customizing ERDAS IMAGINE Tools
ERDAS Macro Language (EML) enables you to create and add new
and/or customized dialogs. If new capabilities are needed, they can
be created with the IMAGINE Developers Toolkit. Using these
tools, a GIS that is completely customized to a specific application
and its preferences can be created.
See the ERDAS IMAGINE On-Line Help for more information
about EML and the IMAGINE Developers Toolkit.
Analysis Procedures
Once the database (layers and attribute data) is assembled, the
layers can be analyzed and new information extracted. Some
information can be extracted simply by looking at the layers and
visually comparing them to other layers. However, new information
can be retrieved by combining and comparing layers using the
following procedures:
Proximity analysis: the process of categorizing and evaluating pixels based on their distances from other pixels in a specified class or classes.
Contiguity analysis: enables you to identify regions of pixels in the same class and to filter out small regions.
Neighborhood analysis: any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning. This is similar to the convolution filtering performed on continuous data. Several types of analyses can be performed, such as boundary, density, mean, sum, etc.
Recoding: enables you to assign new class values to all or a subset of the classes in a layer.
Overlaying: creates a new file with either the maximum or minimum value of the input layers.
Indexing: adds the values of the input layers.
Matrix analysis: outputs the coincidence values of the input layers.
Graphical modeling: enables you to combine data layers in an unlimited number of ways. For example, an output layer created from modeling can represent the desired combination of themes from many input layers.
Script modeling: offers all of the capabilities of graphical modeling with the ability to perform more complex functions, such as conditional looping.
Using an Area of Interest
Any of these functions can be performed on a single layer or multiple
layers. You can also select a particular AOI that is defined in a
separate file (AOI layer, thematic raster layer, or vector layer) or an
AOI that is selected immediately preceding the operation by entering
specific coordinates or by selecting the area in a Viewer.
Proximity Analysis
Many applications require some measurement of distance or
proximity. For example, a real estate developer would be concerned
with the distance between a potential site for a shopping center and
an interchange to a major highway.
Proximity analysis determines which pixels of a layer are located at
specified distances from pixels in a certain class or classes. A new
thematic layer (image file) is created, which is categorized by the
distance of each pixel from specified classes of the input layer. This
new file then becomes a new layer of the database and provides a
buffer zone around the specified class(es). In further analysis, it may
be beneficial to weight other factors, based on whether they fall
inside or outside the buffer zone.
Figure 180 shows a layer containing lakes and streams and the
resulting layer after a proximity analysis is run to create a buffer
zone around all of the water features.
Use the Search (GIS Analysis) function in Image Interpreter or
Spatial Modeler to perform a proximity analysis.
Figure 180: Proximity Analysis
[Figure 180 content: the original layer containing a lake and streams, and the layer after proximity analysis showing buffer zones around the water features.]
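Outside of ERDAS IMAGINE, the same idea can be sketched with NumPy and SciPy, assuming a thematic array of class values and a pixel size in map units. The function name, class values, and distance breaks below are illustrative only.

import numpy as np
from scipy import ndimage

def proximity_classes(layer, target_classes, pixel_size, breaks):
    """Categorize pixels by their distance from any pixel belonging to
    target_classes. breaks lists the buffer-zone boundaries in map units."""
    target = np.isin(layer, target_classes)
    # Euclidean distance from every pixel to the nearest target pixel
    dist = ndimage.distance_transform_edt(~target, sampling=pixel_size)
    # class 0 = closer than breaks[0] (including the targets themselves),
    # class 1 = between breaks[0] and breaks[1], and so on
    return np.digitize(dist, bins=breaks)

# e.g., buffers = proximity_classes(land_cover, [5, 6], 30.0, [90, 300, 900])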
Contiguity Analysis
Contiguity analysis is concerned with the ways in which pixels of a
class are grouped together. Groups of contiguous pixels in the same
class, called raster regions, or clumps, can be identified by their sizes
and manipulated. One application of this tool would be an analysis
for locating helicopter landing zones that require at least 250
contiguous pixels at 10-meter resolution.
Contiguity analysis can be used to: 1) divide a large class into
separate raster regions, or 2) eliminate raster regions that are too
small to be considered for an application.
Filtering Clumps
In cases where very small clumps are not useful, they can be filtered
out according to their sizes. This is sometimes referred to as
eliminating the salt and pepper effects, or sieving. In Figure 181, all
of the small clumps in the original (clumped) layer are eliminated.
Figure 181: Contiguity Analysis
[Figure 181 content: a clumped layer and the resulting sieved layer with the small clumps removed.]
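A minimal sketch of clumping and sieving with NumPy and SciPy follows, assuming a thematic array of non-negative class values; the function name and threshold are illustrative, not the ERDAS IMAGINE implementation.

import numpy as np
from scipy import ndimage

def clump_and_sieve(class_layer, class_value, min_pixels):
    """Identify contiguous regions (clumps) of one class and keep only
    those with at least min_pixels pixels; everything else becomes 0."""
    mask = class_layer == class_value
    clumps, n = ndimage.label(mask)                       # clump identification
    sizes = ndimage.sum(mask, clumps, index=np.arange(1, n + 1))
    big = np.flatnonzero(sizes >= min_pixels) + 1         # labels of clumps to keep
    return np.where(np.isin(clumps, big), class_value, 0)

# e.g., landing_zones = clump_and_sieve(land_cover, class_value=3, min_pixels=250)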
Use the Clump and Sieve (GIS Analysis) function in Image
Interpreter or Spatial Modeler to perform contiguity analysis.
Neighborhood Analysis
With a process similar to the convolution filtering of continuous
raster layers, thematic raster layers can also be filtered. The GIS
filtering process is sometimes referred to as scanning, but is not to
be confused with data capture via a digital camera. Neighborhood
analysis is based on local or neighborhood characteristics of the data
(Star and Estes, 1990).
Every pixel is analyzed spatially, according to the pixels that
surround it. The number and the location of the surrounding pixels
is determined by a scanning window, which is defined by you. These
operations are known as focal operations. The scanning window can
be of any size in SML. In Model Maker, it has the following
constraints:
circular, with a maximum diameter of 512 pixels
doughnut-shaped, with a maximum outer radius of 256 pixels
rectangular, up to 512 × 512 pixels, with the option to mask-out certain pixels
certain pixels
Use the Neighborhood (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform neighborhood analysis. The scanning window used in Image Interpreter can be 3 × 3, 5 × 5, or 7 × 7. The scanning window in Model Maker is defined by you and can be up to 512 × 512. The scanning window in SML can be of any size.
Defining Scan Area
You may define the area of the file to be scanned. The scanning
window moves only through this area as the analysis is performed.
Define the area in one or all of the following ways:
Specify a rectangular portion of the file to scan. The output layer
contains only the specified area.
Specify an area that is defined by an existing AOI layer, an
annotation overlay, or a vector layer. The area(s) within the
polygon are scanned, and the other areas remain the same. The
output layer is the same size as the input layer or the selected
rectangular portion.
Specify a class or classes in another thematic layer to be used as
a mask. The pixels in the scanned layer that correspond to the
pixels of the selected class or classes in the mask layer are
scanned, while the other pixels remain the same.
Figure 182: Using a Mask
In Figure 182, class 2 in the mask layer was selected for the mask. Only the corresponding (shaded) pixels in the target layer are scanned; the other values remain unchanged.
Neighborhood analysis creates a new thematic layer. There are
several types of analysis that can be performed upon each window
of pixels, as described below:
Boundary: detects boundaries between classes. The output layer contains only boundary pixels. This is useful for creating boundary or edge lines from classes, such as a land/water interface.
Density: outputs the number of pixels that have the same class value as the center (analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the analyzed pixel. This is often useful in assessing vegetation crown closure.
Diversity: outputs the number of class values that are present within the window. Diversity is also a measure of heterogeneity (difference).
Majority: outputs the class value that represents the majority of the class values in the window. The value is defined by you. This option operates like a low-frequency filter to clean up a salt and pepper layer.
Maximum: outputs the greatest class value within the window. This can be used to emphasize classes with the higher class values or to eliminate linear features or boundaries.
Mean: averages the class values. If class values represent quantitative data, then this option can work like a convolution filter. This is mostly used on ordinal or interval data.
Median: outputs the statistical median of the class values in the window. This option may be useful if class values represent quantitative data.
Minimum: outputs the least or smallest class value within the window. The value is defined by you. This can be used to emphasize classes with the low class values.
Minority: outputs the least common of the class values that are within the window. This option can be used to identify the least common classes. It can also be used to highlight disconnected linear features.
Rank: outputs the number of pixels in the scan window whose value is less than the center pixel.
Standard deviation: outputs the standard deviation of class values in the window.
Sum: totals the class values. In a file where class values are ranked, totaling enables you to further rank pixels based on their proximity to high-ranking pixels.
Figure 183: Sum Option of Neighborhood Analysis (Image
Interpreter)
In Figure 183, the Sum option of Neighborhood (Image Interpreter) is applied to a 3 × 3 window of pixels in the input layer. In the output layer, the analyzed pixel is given a value based on the total of all of the pixels in the window: 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48.
The analyzed pixel is always the center pixel of the scanning
window. In this example, only the pixel in the third column and
third row of the file is summed.
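A focal operation of this kind can be sketched with SciPy's ndimage filters, assuming a thematic NumPy array of non-negative class values. The edge handling and function names here are illustrative and may differ from the Image Interpreter implementation.

import numpy as np
from scipy import ndimage

def focal_sum(layer, size=3):
    """Neighborhood Sum: each output pixel is the total of the class
    values in the size x size scanning window centered on it."""
    kernel = np.ones((size, size))
    return ndimage.convolve(layer.astype(float), kernel, mode="nearest")

def focal_majority(layer, size=3):
    """Neighborhood Majority: the most common class value in the window."""
    return ndimage.generic_filter(
        layer, lambda w: np.bincount(w.astype(int)).argmax(), size=size)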
Recoding
Class values can be recoded to new values. Recoding involves the assignment of new values to one or more classes. Recoding is used to:
reduce the number of classes
combine classes
assign different class values to existing classes
When an ordinal, ratio, or interval class numbering system is used,
recoding can be used to assign classes to appropriate values.
Recoding is often performed to make later steps easier. For example,
in creating a model that outputs good, better, and best areas, it may
be beneficial to recode the input layers so all of the best classes have
the highest class values.
In the following example (Table 55), a land cover layer is recoded so that the most environmentally sensitive areas (Riparian and Wetlands) have higher class values.
Table 55: Example of a Recoded Land Cover Layer
Value   New Value   Class Name
0       0           Background
1       4           Riparian
2       1           Grassland and Scrub
3       1           Chaparral
4       4           Wetlands
5       1           Emergent Vegetation
6       1           Water
Use the Recode (GIS Analysis) function in Image Interpreter or
Spatial Modeler to recode layers.
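In a scripting environment, the recode in Table 55 amounts to a simple lookup on the class values. A minimal NumPy sketch follows, assuming non-negative integer class values; the function name is illustrative.

import numpy as np

# Recode table from Table 55: old class value -> new class value
RECODE = {0: 0, 1: 4, 2: 1, 3: 1, 4: 4, 5: 1, 6: 1}

def recode(layer, table):
    """Assign new class values through a lookup array indexed by the
    old class values (layer must contain non-negative integers)."""
    lut = np.arange(max(int(layer.max()), max(table)) + 1)
    for old, new in table.items():
        lut[old] = new
    return lut[layer]

# recoded = recode(land_cover, RECODE)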
Overlaying
Thematic data layers can be overlaid to create a composite layer.
The output layer contains either the minimum or the maximum class
values of the input layers. For example, if an area was in class 5 in
one layer, and in class 3 in another, and the maximum class value
dominated, then the same area would be coded to class 5 in the
output layer, as shown in Figure 184.
Figure 184: Overlay
[Figure 184 content: Basic Overlay Application Example. An Original Slope layer (1-5 = flat slopes, 6-9 = steep slopes) is recoded to a Recoded Slope layer (0 = flat slopes, 9 = steep slopes), then overlaid with a Land Use layer (1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands). The Overlay Composite keeps the land use classes and codes steep slopes to 9, masking the land use there.]
The application example in Figure 184 shows the result of combining
two layers, slope and land use. The slope layer is first recoded to
combine all steep slopes into one value. When overlaid with the land
use layer, the highest data file values (the steep slopes) dominate in
the output layer.
Use the Overlay (GIS Analysis) function in Image Interpreter or
Spatial Modeler to overlay layers.
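The overlay rule itself reduces to a per-pixel maximum (or minimum) across the input layers. A minimal NumPy sketch, assuming the layers are co-registered arrays of the same shape (function and variable names are illustrative):

import numpy as np

def overlay(layers, rule="max"):
    """Composite thematic layers so that either the maximum or the
    minimum class value wins at each pixel."""
    stack = np.stack(layers)
    return stack.max(axis=0) if rule == "max" else stack.min(axis=0)

# composite = overlay([recoded_slope, land_use])   # steep slopes (9) dominate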
Indexing
Thematic layers can be indexed (added) to create a composite layer.
The output layer contains the sums of the input layer values. For
example, the intersection of class 3 in one layer and class 5 in
another would result in class 8 in the output layer, as shown in Figure
185.
Figure 185: Indexing
[Figure 185 content: Basic Index Application Example. Soils, Slope, and Access layers (9 = good, 5 = fair, 1 = poor) are each multiplied by a weighting factor (2 for Slope, 1 for Soils and Access) and added together to calculate the output values.]
The application example in Figure 185 shows the result of indexing.
In this example, you want to develop a new subdivision, and the
most likely sites are where there is the best combination (highest
value) of good soils, good slope, and good access. Because good
slope is a more critical factor to you than good soils or good access,
a weighting factor is applied to the slope layer. A weighting factor
has the effect of multiplying all input values by some constant. In
this example, slope is given a weight of 2.
Use the Index (GIS Analysis) function in the Image Interpreter
or Spatial Modeler to index layers.
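The weighted sum in Figure 185 can be sketched the same way, assuming co-registered arrays; the weights follow the example above (2 for slope, 1 for soils and access), and the names are illustrative.

import numpy as np

def index_layers(layers, weights):
    """Index (add) thematic layers after multiplying each one by its
    weighting factor."""
    return sum(weight * layer for layer, weight in zip(layers, weights))

# composite = index_layers([soils, slope, access], weights=[1, 2, 1])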
Matrix Analysis
Matrix analysis produces a thematic layer that contains a separate class for every coincidence of classes in two layers. The output is best described with a matrix diagram:

                        input layer 2 data values (columns)
                         0    1    2    3    4    5
input layer 1 data   0   0    0    0    0    0    0
values (rows)        1   0    1    2    3    4    5
                     2   0    6    7    8    9   10
                     3   0   11   12   13   14   15
In this diagram, the classes of the two input layers represent the
rows and columns of the matrix. The output classes are assigned
according to the coincidence of any two input classes.
All combinations of 0 and any other class are coded to 0,
because 0 is usually the background class, representing an area
that is not being studied.
Unlike overlaying or indexing, the resulting class values of a matrix
operation are unique for each coincidence of two input class values.
In this example, the output class value at column 1, row 3 is 11, and
the output class at column 3, row 1 is 3. If these files were indexed
(summed) instead of matrixed, both combinations would be coded
to class 4.
Use the Matrix (GIS Analysis) function in Image Interpreter or
Spatial Modeler to matrix layers.
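One way to reproduce the coincidence numbering in the diagram above is sketched below, assuming input layer 2 holds class values 1 through n_classes2 (with 0 as background); the function name is illustrative.

import numpy as np

def matrix_analysis(layer1, layer2, n_classes2=5):
    """Assign a unique output class to every coincidence of nonzero
    input classes; any combination involving 0 is coded to 0.
    With n_classes2 = 5, layer1 = 3 and layer2 = 1 gives class 11."""
    out = (layer1 - 1) * n_classes2 + layer2
    return np.where((layer1 == 0) | (layer2 == 0), 0, out)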
Modeling
Modeling is a powerful and flexible analysis tool. Modeling is the process of creating new layers from combining or operating upon existing layers. Modeling enables you to create a small set of layers, perhaps even a single layer, which, at a glance, contains many types of information about the study area.
For example, if you want to find the best areas for a bird sanctuary,
taking into account vegetation, availability of water, climate, and
distance from highly developed areas, you would create a thematic
layer for each of these criteria. Then, each of these layers would be
input to a model. The modeling process would create one thematic
layer, showing only the best areas for the sanctuary.
The set of procedures that define the criteria is called a model. In
ERDAS IMAGINE, models can be created graphically and resemble a
flow chart of steps, or they can be created using a script language.
Although these two types of models look different, they are
essentially the same: input files are defined, functions and/or
operators are specified, and outputs are defined. The model is run
and a new output layer(s) is created. Models can utilize analysis
functions that have been previously defined, or new functions can be
created by you.
Use the Model Maker function in Spatial Modeler to create
graphical models and SML to create script models.
Data Layers
In modeling, the concept of layers is especially important. Before
computers were used for modeling, the most widely used approach
was to overlay registered maps on paper or transparencies, with
each map corresponding to a separate theme. Today, digital files
replace these hardcopy layers and allow much more flexibility for
recoloring, recoding, and reproducing geographical information
(Steinitz et al, 1976).
In a model, the corresponding pixels at the same coordinates in all
input layers are addressed as if they were physically overlaid like
hardcopy maps.
Graphical Modeling
Graphical modeling enables you to draw models using a palette of
tools that defines inputs, functions, and outputs. This type of
modeling is very similar to drawing flowcharts, in that you identify a
logical flow of steps needed to perform the desired action. Through
the extensive functions and operators available in the ERDAS
IMAGINE graphical modeling program, you can analyze many layers
of data in very few steps without creating intermediate files that
occupy extra disk space. Modeling is performed using a graphical
editor that eliminates the need to learn a programming language.
Complex models can be developed easily and then quickly edited and
re-run on another data set.
Use the Model Maker function in Spatial Modeler to create
graphical models.
Image Processing and GIS
In ERDAS IMAGINE, the traditional GIS functions (e.g.,
neighborhood analysis, proximity analysis, recode, overlay, index,
etc.) can be performed in models, as well as image processing
functions. Both thematic and continuous layers can be input into
models that accomplish many objectives at once.
For example, suppose there is a need to assess the environmental
sensitivity of an area for development. An output layer can be
created that ranks most to least sensitive regions based on several
factors, such as slope, land cover, and floodplain. To visualize the
location of these areas, the output thematic layer can be overlaid
onto a high resolution, continuous raster layer (e.g., SPOT
panchromatic) that has had a convolution filter applied. All of this
can be accomplished in a single model (as shown in Figure 186).
Figure 186: Graphical Model for Sensitivity Analysis
See the ERDAS IMAGINE Tour Guides manual for step-by-step
instructions on creating the environmental sensitivity model in
Figure 186. Descriptions of all of the graphical models delivered
with ERDAS IMAGINE are available in the On-Line Help.
Model Structure
A model created with Model Maker is essentially a flow chart that
defines:
the input image(s), vector(s), matrix(ces), table(s), and
scalar(s) to be analyzed
calculations, functions, or operations to be performed on the
input data
the output image(s), matrix(ces), table(s), and scalars to be
created
The graphical models created in Model Maker all have the same basic
structure: input, function, output. The number of inputs, functions,
and outputs can vary, but the overall form remains constant. All
components must be connected to one another before the model can
be executed. The model on the left in Figure 187 is the most basic
form. The model on the right is more complex, but it retains the
same input/function/output flow.
Figure 187: Graphical Model Structure
Graphical models are stored in ASCII files with the .gmd extension.
There are several sample graphical models delivered with ERDAS
IMAGINE that can be used as is or edited for more customized
processing.
See the On-Line Help for instructions on editing existing models.
[Figure 187 content: the Basic Model is two Inputs connected through a Function to an Output; the Complex Model chains additional Inputs, Functions, and Outputs but keeps the same input/function/output flow.]
Model Maker Functions
The functions available in Model Maker are divided into the following categories:
Table 56: Model Maker Functions
Analysis: Includes convolution filtering, histogram matching, contrast stretch, principal components, and more.
Arithmetic: Perform basic arithmetic functions including addition, subtraction, multiplication, division, factorial, and modulus.
Bitwise: Use bitwise and, or, exclusive or, and not.
Boolean: Perform logical functions including and, or, and not.
Color: Manipulate colors to and from RGB (red, green, blue) and IHS (intensity, hue, saturation).
Conditional: Run logical tests using conditional statements and either...if...or...otherwise.
Data Generation: Create raster layers from map coordinates, column numbers, or row numbers. Create a matrix or table from a list of scalars.
Descriptor: Read attribute information and map a raster through an attribute column.
Distance: Perform distance functions, including proximity analysis.
Exponential: Use exponential operators, including natural and common logarithmic, power, and square root.
Focal (Scan): Perform neighborhood analysis functions, including boundary, density, diversity, majority, mean, minority, rank, standard deviation, sum, and others.
Focal Use Opts: Constraints on which pixel values to include in calculations for the Focal (Scan) function.
Focal Apply Opts: Constraints on which pixel values to apply the results of calculations for the Focal (Scan) function.
Global: Analyze an entire layer and output one value, such as diversity, maximum, mean, minimum, standard deviation, sum, and more.
Matrix: Multiply, divide, and transpose matrices, as well as convert a matrix to a table and vice versa.
Other: Includes over 20 miscellaneous functions for data type conversion, various tests, and other utilities.
Relational: Includes equality, inequality, greater than, less than, greater than or equal, less than or equal, and others.
Size: Measure cell X and Y size, layer width and height, number of rows and columns, etc.
Stack Statistics: Perform operations over a stack of layers including diversity, majority, max, mean, median, min, minority, standard deviation, and sum.
Statistical: Includes density, diversity, majority, mean, rank, standard deviation, and more.
String: Manipulate character strings.
Surface: Calculate aspect and degree/percent slope and produce shaded relief.
Trigonometric: Use common trigonometric functions, including sine/arcsine, cosine/arccosine, tangent/arctangent, hyperbolic arcsine, arccosine, cosine, sine, and tangent.
Zonal: Perform zonal operations including summary, diversity, majority, max, mean, min, range, and standard deviation.
These functions are also available for script modeling.
See the ERDAS IMAGINE Tour Guides and the On-Line SML
manual for complete instructions on using Model Maker, and
more detailed information about the available functions and
operators.
Objects
Within Model Maker, an object is an input to or output from a function. The five basic object types used in Model Maker are:
raster
vector
matrix
table
scalar
Raster
A raster object is a single layer or multilayer array of pixel data.
Rasters are typically used to specify and manipulate data from image
files.
Vector
Vector data in either a vector coverage, shapefile, or annotation layer can be read directly into Model Maker, converted from vector to raster, then processed similarly to raster data; Model Maker cannot write to coverages, shapefiles, or annotation layers.
Matrix
A matrix object is a set of numbers arranged in a two-dimensional
array. A matrix has a fixed number of rows and columns. Matrices
may be used to store convolution kernels or the neighborhood
definition used in neighborhood functions. They can also be used to
store covariance matrices, eigenvector matrices, or matrices of
linear combination coefficients.
Table
A table object is a series of numeric values, colors, or character
strings. A table has one column and a fixed number of rows. Tables
are typically used to store columns from the Raster Attribute Editor
or a list of values that pertains to the individual layers of a set of
layers. For example, a table with four rows could be used to store the
maximum value from each layer of a four layer image file. A table
may consist of up to 32,767 rows. Information in the table can be
attributes, calculated (e.g., histograms), or defined by you.
Scalar
A scalar object is a single numeric value, color, or character string.
Scalars are often used as weighting factors.
The graphics used in Model Maker to represent each of these objects
are shown in Figure 188.
Figure 188: Modeling Objects
Data Types
The five object types described above may be any of the following data types:
Binary: either 0 (false) or 1 (true)
Integer: integer values from -2,147,483,648 to 2,147,483,647 (signed 32-bit integer)
Float: floating point data (double precision)
String: a character string (for table objects only)
Input and output data types do not have to be the same. Using SML,
you can change the data type of input files before they are
processed.
Output Parameters
Since it is possible to have several inputs in one model, you can
optionally define the working window and the pixel cell size of the
output data along with the output map projection.
Working Window
Raster layers of differing areas can be input into one model.
However, the image area, or working window, must be specified in
order to use it in the model calculations. Either of the following
options can be selected:
Union: the model operates on the union of all input rasters. (This is the default.)
Intersection: the model uses only the area of the rasters that is common to all input rasters.
Pixel Cell Size
Input rasters may also be of differing resolution (pixel size), so you
must select the output cell size as either:
Minimum: the minimum cell size of the input layers is used (this is the default setting).
Maximum: the maximum cell size of the input layers is used.
Other: specify a new cell size.
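The working window and cell size rules above are easy to express outside Model Maker. The following minimal Python sketch (not SML; the extent tuples and function names are assumptions made for illustration) derives a union or intersection window and a minimum or maximum output cell size from several input rasters.

def working_window(extents, mode="union"):
    """Return the output window (xmin, ymin, xmax, ymax) for a set of inputs."""
    xmins, ymins, xmaxs, ymaxs = zip(*extents)
    if mode == "union":            # default: cover all input rasters
        return min(xmins), min(ymins), max(xmaxs), max(ymaxs)
    if mode == "intersection":     # only the area common to all inputs
        return max(xmins), max(ymins), min(xmaxs), min(ymaxs)
    raise ValueError("mode must be 'union' or 'intersection'")

def output_cell_size(cell_sizes, mode="minimum", other=None):
    """Minimum (default), maximum, or an explicit 'other' cell size."""
    if mode == "minimum":
        return min(cell_sizes)
    if mode == "maximum":
        return max(cell_sizes)
    return other                   # mode == "other": use the value supplied by the user

# Two overlapping inputs with 30 m and 10 m pixels
extents = [(0, 0, 3000, 3000), (1500, 1500, 4500, 4500)]
print(working_window(extents, "intersection"))   # (1500, 1500, 3000, 3000)
print(output_cell_size([30, 10]))                # 10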
Map Projection
The output map projection defaults to that of the first input; alternatively, it may be set to match any chosen input, or selected from a projection library.
Using Attributes in
Models
With the criteria function in Model Maker, attribute data can be used
to determine output values. The criteria function simplifies the
process of creating a conditional statement. The criteria function can
be used to build a table of conditions that must be satisfied to output
a particular row value for an attribute (or cell value) associated with
the selected raster.
The inputs to a criteria function are rasters or vectors. The columns
of the criteria table represent either attributes associated with a
raster layer or the layer itself, if the cell values are of direct interest.
Criteria which must be met for each output column are entered in a
cell in that column (e.g., >5). Multiple sets of criteria may be entered
in multiple rows. The output raster contains the first row number of
a set of criteria that were met for a raster cell.
Example
For example, consider the sample thematic layer, parks.img, that contains the following attribute information:
Table 57: Attribute Information for parks.img
Class Name | Histogram | Acres | Path Condition | Turf Condition | Car Spaces
Grant Park | 2456 | 403.45 | Fair | Good | 127
Piedmont Park | 5167 | 547.88 | Good | Fair | 94
Candler Park | 763 | 128.90 | Excellent | Excellent | 65
Springdale Park | 548 | 46.33 | None | Excellent | 0
A simple model could create one output layer that shows only the
parks in need of repairs. The following logic would therefore be coded
into the model:
If Turf Condition is not Good or Excellent, and if Path Condition
is not Good or Excellent, then the output class value is 1.
Otherwise, the output class value is 2.
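The same conditional logic can be written in a few lines of ordinary code. The Python sketch below is only an illustration of the rule (it is not SML and not the criteria dialog itself); the attribute values come from the parks.img table shown above.

# Attribute rows: (Class Name, Path Condition, Turf Condition)
parks = [
    ("Grant Park",      "Fair",      "Good"),
    ("Piedmont Park",   "Good",      "Fair"),
    ("Candler Park",    "Excellent", "Excellent"),
    ("Springdale Park", "None",      "Excellent"),
]

GOOD = {"Good", "Excellent"}

for name, path, turf in parks:
    # Output class 1 = park in need of repairs, 2 = otherwise
    out_class = 1 if (turf not in GOOD and path not in GOOD) else 2
    print(name, out_class)
# With this sample table every park has a Good or Excellent turf or path
# condition, so all four classes receive output value 2.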
More than one input layer can also be used. For example, a model
could be created, using the input layers parks.img and soils.img, that
shows the soil types for parks with either fair or poor turf condition.
Attributes can be used from every input file.
The following is a slightly more complex example:
If you have a land cover file and you want to create a file of pine
forests larger than 10 acres, the criteria function could be used to
output values only for areas that satisfy the conditions of being both
pine forest and larger than 10 acres. The output file would have two
classes: pine forests larger than 10 acres and background. If you
want the output file to show varying sizes of pine forest, you would
simply add more conditions to the criteria table.
Comparisons of attributes can also be combined with mathematical
and logical functions on the class values of the input file(s). With
these capabilities, highly complex models can be created.
See the ERDAS IMAGINE Tour Guides or the On-Line Help for
specific instructions on using the criteria function.
Script Modeling SML is a script language used internally by Model Maker to execute
the operations specified in the graphical models that are created.
SML can also be used to write models directly. It includes all of the functions available in Model Maker, plus:
conditional branching and looping
the ability to use complex data types
Graphical models created with Model Maker can be output to a script
file (text only) in SML. These scripts can then be edited with a text
editor using SML syntax and rerun or saved in a library. Script
models can also be written from scratch in the text editor. They are
stored in ASCII .mdl files.
The Text Editor is available from the Tools menu located on the
ERDAS IMAGINE menu bar and from the Model Librarian (Spatial
Modeler).
In Figure 189, both the graphical and script models are shown for a
tasseled cap transformation. Notice how even the annotation on the
graphical model is included in the automatically generated script
model. Generating script models from graphical models may aid in
learning SML.
Figure 189: Graphical and Script Models For Tasseled Cap Transformation
Convert graphical models to scripts using Model Maker. Open
existing script models from the Model Librarian (Spatial
Modeler).
Script Model:
# TM Tasseled Cap Transformation
# of Lake Lanier, Georgia
#
# declarations
#
INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR
"/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE
"/usr/imagine/examples/lntassel.img";
#
# set cell size for the model
#
SET CELLSIZE MIN;
#
# set window for the model
#
SET WINDOW UNION;
#
# load matrix n2_Custom_Matrix
#
n2_Custom_Matrix = MATRIX(3, 7:
0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
-0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);
#
# function definitions
#
n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;
QUIT;
Statements A script model consists primarily of one or more statements. Each
statement falls into one of the following categories:
Declaration: defines objects to be manipulated within the model
Assignment: assigns a value to an object
Show and View: enables you to see and interpret results from the model
Set: defines the scope of the model or establishes default values used by the Modeler
Macro Definition: defines substitution text associated with a macro name
Quit: ends execution of the model
SML also includes flow control structures so that you can utilize
conditional branching and looping in the models and statement block
structures, which cause a set of statements to be executed as a
group.
Declaration Example
In the script model in Figure 189, the following lines form the
declaration portion of the model:
INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR
"/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT
SINGLE "/usr/imagine/examples/lntassel.img";
Set Example
The following set statements are used:
SET CELLSIZE MIN;
SET WINDOW UNION;
Assignment Example
The following assignment statements are used:
n2_Custom_Matrix = MATRIX(3, 7:
0.331830, 0.331210, 0.551770, 0.425140, 0.480870,
0.000000, 0.252520,
-0.247170, -0.162630, -0.406390, 0.854680, 0.054930,
0.000000, -0.117490,
0.139290, 0.224900, 0.403590, 0.251780, -0.701330,
0.000000, -0.457320);
n4_lntassel = LINEARCOMB ( $n1_tm_lanier ,
$n2_Custom_Matrix ) ;
Data Types In addition to the data types utilized by Graphical Modeling, script
model objects can store data in the following data types:
Complex: complex data (double precision)
Color: three floating point numbers in the range of 0.0 to 1.0, representing intensity of red, green, and blue
Variables Variables are objects in the Modeler that have been associated with
names using Declaration Statements. The declaration statement
defines the data type and object type of the variable. The declaration
may also associate a raster variable with certain layers of an image
file or a table variable with an attribute table. Assignment
Statements are used to set or change the value of a variable.
For script model syntax rules, descriptions of all available
functions and operators, and sample models, see the On-Line
SML manual.
Vector Analysis Most of the operations discussed in the previous pages of this
chapter focus on raster data. However, in a complete GIS database,
both raster and vector layers are present. One of the most common
applications involving the combination of raster and vector data is
the updating of vector layers using current raster imagery as a
backdrop for vector editing. For example, if a vector database is
more than one or two years old, then there are probably errors due
to changes in the area (new roads, moved roads, new development,
etc.). When displaying existing vector layers over a raster layer, you
can dynamically update the vector layer by digitizing new or changed
features on the screen.
Vector layers can also be used to indicate an AOI for further
processing. Assume you want to run a site suitability model on only
areas designated for commercial development in the zoning
ordinances. By selecting these zones in a vector polygon layer, you
could restrict the model to only those areas in the raster input files.
Vector layers can also be used as inputs to models. Updated or new
attributes may also be written to vector layers in models.
Editing Vector Layers Editable features are polygons (as lines), lines, label points, and
nodes. There can be multiple features selected with a mixture of any
and all feature types. Editing operations and commands can be
performed on multiple or single selections. In addition to the basic
editing operations (e.g., cut, paste, copy, delete), you can also
perform the following operations on the line features in multiple or
single selections:
spline: smooths or generalizes all currently selected lines using a specified grain tolerance
generalize: weeds vertices from selected lines using a specified tolerance (a generic sketch of tolerance-based weeding follows below)
split/unsplit: makes two lines from one by adding a node, or joins two lines by removing a node
densify: adds vertices to selected lines at a tolerance you specify
reshape (for single lines only): enables you to move the vertices of a line
Reshaping (adding, deleting, or moving a vertex or node) can be
done on a single selected line. Table 58 details general editing
operations and the feature types that support each of those
operations.
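Weeding vertices with a tolerance is commonly done with a Douglas-Peucker style rule: keep a vertex only if it lies farther from the trend line than the tolerance. The Python sketch below is a generic illustration of that idea, not the ERDAS IMAGINE implementation; coordinates and the tolerance are in layer units.

import math

def point_line_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    if (ax, ay) == (bx, by):
        return math.hypot(px - ax, py - ay)
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def generalize(points, tolerance):
    """Recursively drop vertices that lie within 'tolerance' of the trend line."""
    if len(points) < 3:
        return points
    dists = [point_line_distance(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1   # farthest vertex
    if dists[i - 1] <= tolerance:
        return [points[0], points[-1]]          # everything in between is weeded
    left = generalize(points[:i + 1], tolerance)
    right = generalize(points[i:], tolerance)
    return left[:-1] + right                    # avoid duplicating the split vertex

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(generalize(line, tolerance=0.5))          # keeps the corner, drops near-straight detail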
The Undo utility may be applied to any edits. The software stores all
edits in sequential order, so that continually pressing Undo reverses
the editing.
For more information on vectors, see Raster and Vector Data
Sources.
Constructing
Topology
Either the Build or Clean option can be used to construct topology.
To create spatial relationships between features in a vector layer, it
is necessary to create topology. After a vector layer is edited, the
topology must be constructed to maintain the topological
relationships between features. When topology is constructed, each
feature is assigned an internal number. These numbers are then
used to determine line connectivity and polygon contiguity. Once
calculated, these values are recorded and stored in that layer's
associated attribute table.
You must also reconstruct the topology of vector layers imported
into ERDAS IMAGINE.
Table 58: General Editing Operations and Supporting Feature Types
Feature | Add | Delete | Move | Reshape
Points | yes | yes | yes | no
Lines | yes | yes | yes | yes
Polygons | yes | yes | yes | no
Nodes | yes | yes | yes | no
When topology is constructed, feature attribute tables are created
with several automatically created fields. Different fields are stored
for the different types of layers. The automatically generated fields
for a line layer are:
FNODE#: the internal node number for the beginning of a line (from-node)
TNODE#: the internal number for the end of a line (to-node)
LPOLY#: the internal number for the polygon to the left of the line (zero for layers containing only lines and no polygons)
RPOLY#: the internal number for the polygon to the right of the line (zero for layers containing only lines and no polygons)
LENGTH: length of each line, measured in layer units
Cover#: internal line number (values assigned by ERDAS IMAGINE)
Cover-ID: user-ID (values modified by you)
The automatically generated fields for a point or polygon layer are:
AREA: area of each polygon, measured in layer units (zero for layers containing only points and no polygons)
PERIMETER: length of each polygon boundary, measured in layer units (zero for layers containing only points and no polygons)
Cover#: internal polygon number (values assigned by ERDAS IMAGINE)
Cover-ID: user-ID (values modified by you)
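The from-node and to-node bookkeeping behind fields such as FNODE# and TNODE# can be illustrated with a short sketch. The Python code below is a conceptual illustration only, not the Build or Clean algorithm; it assigns internal node numbers to line endpoints and counts how many lines meet at each node.

# Each line is a list of (x, y) vertices; only its endpoints become nodes here.
lines = [
    [(0, 0), (5, 0)],        # line 1
    [(5, 0), (5, 5)],        # line 2
    [(5, 0), (9, -2)],       # line 3
]

node_ids = {}   # (x, y) coordinate -> internal node number
valence = {}    # node number -> number of line ends meeting there
records = []    # per-line (FNODE#, TNODE#) style records

def node(coord):
    if coord not in node_ids:
        node_ids[coord] = len(node_ids) + 1
        valence[node_ids[coord]] = 0
    valence[node_ids[coord]] += 1
    return node_ids[coord]

for verts in lines:
    records.append((node(verts[0]), node(verts[-1])))

print(records)   # [(1, 2), (2, 3), (2, 4)]
print(valence)   # node 2 has three line ends; nodes 1, 3, and 4 have one each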
Building and Cleaning
Coverages
The Build option processes points, lines, and polygons, but the Clean
option processes only lines and polygons. Build recognizes only
existing intersections (nodes), whereas Clean creates intersections
(nodes) wherever lines cross one another. The differences in these
two options are summarized in Table 59 (Environmental Systems
Research Institute, 1990).
Table 59: Comparison of Building and Cleaning Coverages
Capabilities | Build | Clean
Processes polygons | Yes | Yes
Processes lines | Yes | Yes
Processes points | Yes | No
Numbers features | Yes | Yes
Calculates spatial measurements | Yes | Yes
Creates intersections | No | Yes
Processing speed | Faster | Slower
Errors
Constructing topology also helps to identify errors in the layer. Some
of the common errors found are:
Lines with less than two nodes
Polygons that are not closed
Polygons that have no label point or too many label points
User-IDs that are not unique
Constructing topology can identify the errors mentioned above.
When topology is constructed, line intersections are created, the
lines that make up each polygon are identified, and a label point is
associated with each polygon. Until topology is constructed, no
polygons exist and lines that cross each other are not connected at
a node, because there is no intersection.
Construct topology using the Vector Utilities menu from the
Vector icon in the ERDAS IMAGINE icon panel.
You should not build or clean a layer that is displayed in a
Viewer, nor should you try to display a layer that is being built
or cleaned.
When the Build or Clean options are used to construct the topology
of a vector layer, two kinds of potential node errors may be
observed: pseudo nodes and dangling nodes. These are identified in
the Viewer with special symbols. The default symbols used by
IMAGINE are shown in Figure 190 below but may be changed in the
Vector Properties dialog.
Pseudo nodes occur where a single line connects with itself (an
island) or where only two lines intersect. Pseudo nodes do not
necessarily indicate an error or a problem. Acceptable pseudo nodes
may represent an island (a spatial pseudo node) or the point where
a road changes from pavement to gravel (an attribute pseudo node).
A dangling node refers to the unconstructed node of a dangling line.
Every line begins and ends at a node point. So if a line does not close
properly, or was digitized past an intersection, it registers as a
dangling node. In some cases, a dangling node may be acceptable.
For example, in a street centerline map, cul-de-sacs or dead-ends
are often represented by dangling nodes.
In polygon layers there may be label errors, usually no label point
for a polygon, or more than one label point for a polygon. In the
latter case, two or more points may have been mistakenly digitized
for a polygon, or it may be that a line does not intersect another line,
resulting in an open polygon.
Figure 190: Layer Errors
Errors detected in a layer can be corrected by changing the
tolerances set for that layer and building or cleaning again, or by
editing the layer manually, then running Build or Clean.
Refer to the ERDAS IMAGINE Tour Guides manual for step-by-
step instructions on editing vector layers.
Cartography
Introduction Maps and mapping are the subject of the art and science known as
cartography: creating two-dimensional representations of our
three-dimensional Earth. These representations were once hand-
drawn with paper and pen. But now, map production is largely
automated, and the final output is not always paper. The capabilities
of a computer system are invaluable to map users, who often need
to know much more about an area than can be reproduced on paper,
no matter how large that piece of paper is or how small the
annotation is. Maps stored on a computer can be queried, analyzed,
and updated quickly.
As the veteran GIS and image processing authority, Roger F. Tomlinson, said: "Mapped and related statistical data do form the greatest storehouse of knowledge about the condition of the living space of mankind." With this thought in mind, it only makes sense
that maps be created as accurately as possible and be as accessible
as possible.
In the past, map making was carried out by mapping agencies who
took the analysts' (be they surveyors, photogrammetrists, or
draftsmen) information and created a map to illustrate that
information. But today, in many cases, the analyst is the
cartographer and can design his maps to best suit the data and the
end user.
This chapter defines some basic cartographic terms and explains
how maps are created within the ERDAS IMAGINE environment.
Use the Map Composer to create hardcopy and softcopy maps
and presentation graphics.
This chapter concentrates on the production of digital maps. See
Hardcopy Output for information about printing hardcopy
maps.
Types of Maps A map is a graphic representation of spatial relationships on the
Earth or other planets. Maps can take on many forms and sizes,
depending on the intended use of the map. Maps no longer refer only
to hardcopy output. In this manual, the maps discussed begin as
digital files and may be printed later as desired.
Some of the different types of maps are defined in the following table.
Map Purpose
Aspect A map that shows the prevailing direction that a slope faces at each pixel.
Aspect maps are often color-coded to show the eight major compass
directions, or any of 360 degrees.
Base A map portraying background reference information onto which other
information is placed. Base maps usually show the location and extent of
natural Earth surface features and permanent human-made objects. Raster
imagery, orthophotos, and orthoimages are often used as base maps.
Bathymetric A map portraying the shape of a water body or reservoir using isobaths
(depth contours).
Cadastral A map showing the boundaries of the subdivisions of land for purposes of
describing and recording ownership or taxation.
Choropleth A map portraying properties of a surface using area symbols. Area symbols
usually represent categorized classes of the mapped phenomenon.
Composite A map on which the combined information from different thematic maps is
presented.
Contour A map in which lines are used to connect points of equal elevation. Lines
are often spaced in increments of ten or twenty feet or meters.
Derivative A map created by altering, combining, or analyzing other maps.
Index A reference map that outlines the mapped area, identifies all of the
component maps for the area if several map sheets are required, and
identifies all adjacent map sheets.
Inset A map that is an enlargement of some congested area of a smaller scale
map, and that is usually placed on the same sheet with the smaller scale
main map.
Isarithmic A map that uses isarithms (lines connecting points of the same value for any of the characteristics used in the representation of surfaces) to represent a statistical surface. Also called an isometric map.
Isopleth A map on which isopleths (lines representing quantities that cannot exist at
a point, such as population density) are used to represent some selected
quantity.
Morphometric A map representing morphological features of the Earth's surface.
Outline A map showing the limits of a specific set of mapping entities, such as
counties, NTS quads, etc. Outline maps usually contain a very small
number of details over the desired boundaries with their descriptive codes.
Planimetric A map showing only the horizontal position of geographic objects, without
topographic features or elevation contours.
Relief Any map that appears to be, or is, three-dimensional. Also called a shaded
relief map.
Slope A map that shows changes in elevation over distance. Slope maps are usually color-coded according to the steepness of the terrain at each pixel.
Thematic A map illustrating the class characterizations of a particular spatial variable (e.g., soils, land cover, hydrology, etc.)
Topographic A map depicting terrain relief.
Viewshed A map showing only those areas visible (or invisible) from a specified point(s). Also called a line-of-sight map or a visibility map.
In ERDAS IMAGINE, maps are stored as a map file with a .map
extension.
Thematic Maps Thematic maps comprise a large portion of the maps that many
organizations create. For this reason, this map type is explored in
more detail.
Thematic maps may be subdivided into two groups:
qualitative
quantitative
A qualitative map shows the spatial distribution or location of a kind
of nominal data. For example, a map showing corn fields in the
United States would be a qualitative map. It would not show how
much corn is produced in each location, or production relative to the
other areas.
A quantitative map displays the spatial aspects of numerical data. A
map showing corn production (volume) in each area would be a
quantitative map. Quantitative maps show ordinal (less than/greater
than) and interval/ratio (difference) scale data (Dent, 1985).
You can create thematic data layers from continuous data (aerial
photography and satellite images) using the ERDAS IMAGINE
classification capabilities. See Classification for more
information.
Base Information
Thematic maps should include a base of information so that the
reader can easily relate the thematic data to the real world. This base
may be as simple as an outline of counties, states, or countries, to
something more complex, such as an aerial photograph or satellite
image. In the past, it was difficult and expensive to produce maps
that included both thematic and continuous data, but technological
advances have made this easy.
For example, in a thematic map showing flood plains in the
Mississippi River valley, you could overlay the thematic data onto a
line coverage of state borders or a satellite image of the area. The
satellite image can provide more detail about the areas bordering the
flood plains. This may be valuable information when planning
emergency response and resource management efforts for the area.
Satellite images can also provide very current information about an
area, and can assist you in assessing the accuracy of a thematic
image.
In ERDAS IMAGINE, you can include multiple layers in a single
map composition. See Map Composition for more information
about creating maps.
Color Selection
The colors used in thematic maps may or may not have anything to
do with the class or category of information shown. Cartographers
usually try to use a color scheme that highlights the primary purpose
of the map. The map reader's perception of colors also plays an
important role. Most people are more sensitive to red, followed by
green, yellow, blue, and purple. Although color selection is left
entirely up to the map designer, some guidelines have been
established (Robinson and Sale, 1969).
When mapping interval or ordinal data, the higher ranks and
greater amounts are generally represented by darker colors.
Use blues for water.
When mapping elevation data, start with blues for water, greens
in the lowlands, ranging up through yellows and browns to reds
in the higher elevations. This progression should not be used for
series other than elevation.
In temperature mapping, use red, orange, and yellow for warm
temperatures and blue, green, and gray for cool temperatures.
In land cover mapping, use yellows and tans for dryness and
sparse vegetation and greens for lush vegetation.
Use browns for land forms.
Use the Raster Attributes option in the Viewer to select and
modify class colors.
Annotation A map is more than just an image(s) on a background. Since a map
is a form of communication, it must convey information that may not
be obvious by looking at the image. Therefore, maps usually contain
several annotation elements to explain the map. Annotation is any
explanatory material that accompanies a map to denote graphical
features on the map. This annotation may take the form of:
scale bars
legends
neatlines, tick marks, and grid lines
symbols (north arrows, etc.)
labels (rivers, mountains, cities, etc.) and descriptive text (title,
copyright, credits, production notes, etc.)
The annotation listed above is made up of single elements. The basic
annotation elements in ERDAS IMAGINE include:
rectangles (including squares)
ellipses (including circles)
polygons and polylines
text
These elements can be used to create more complex annotation,
such as legends, scale bars, etc. These annotation components are
actually groups of the basic elements and can be ungrouped and
edited like any other graphic. You can also create your own groups
to form symbols that are not in the ERDAS IMAGINE symbol library.
(Symbols are discussed in more detail under Symbols.)
Create annotation using the Annotation tool palette in the
Viewer or in a map composition.
How Annotation is Stored
An annotation layer is a set of annotation elements that is drawn in
a Viewer or Map Composer window and stored in a file. Annotation
that is created in a Viewer window is stored in a separate file from
the other data in the Viewer. These annotation files are called
overlay files (.ovr extension). Map annotation that is created in a
Map Composer window is also stored in an .ovr file, which is named
after the map composition. For example, the annotation for a file
called lanier.map would be lanier.map.ovr.
Scale Map scale is a statement that relates distance on a map to distance
on the Earth's surface. It is perhaps the most important information
on a map, since the level of detail and map accuracy are both factors
of the map scale. Scale is directly related to the map extent, or the
area of the Earth's surface to be mapped. If a relatively small area is
to be mapped, such as a neighborhood or subdivision, then the scale
can be larger. If a large area is to be mapped, such as an entire
continent, the scale must be smaller. Generally, the smaller the
scale, the less detailed the map can be. As a rule, anything smaller
than 1:250,000 is considered small-scale.
Scale can be reported in several ways, including:
representative fraction
verbal statement
scale bar
Representative Fraction
Map scale is often noted as a simple ratio or fraction called a
representative fraction. A map in which one inch on the map equals
24,000 inches on the ground could be described as having a scale of
1:24,000 or 1/24,000. The units on both sides of the ratio must be
the same.
Verbal Statement
A verbal statement of scale describes the distance on the map to the
distance on the ground. A verbal statement describing a scale of
1:1,000,000 is approximately 1 inch to 16 miles. The units on the
map and on the ground do not have to be the same in a verbal
statement. One-inch and 6-inch maps of the British Ordnance
Survey are often referred to by this method (1 inch to 1 mile, 6
inches to 1 mile) (Robinson and Sale, 1969).
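The arithmetic behind both forms of scale is a single division, as the Python sketch below shows (an illustration only, not part of ERDAS IMAGINE).

INCHES_PER_MILE = 63360
CM_PER_KM = 100000

def miles_per_map_inch(scale_denominator):
    """Ground distance, in miles, represented by one inch on the map."""
    return scale_denominator / INCHES_PER_MILE

def km_per_map_cm(scale_denominator):
    """Ground distance, in kilometers, represented by one centimeter on the map."""
    return scale_denominator / CM_PER_KM

print(round(miles_per_map_inch(24000), 3))     # 0.379 mi for a 1:24,000 map
print(round(miles_per_map_inch(1000000), 2))   # 15.78 mi ("approximately 1 inch to 16 miles")
print(round(km_per_map_cm(24000), 2))          # 0.24 km per centimeter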
Scale Bars
A scale bar is a graphic annotation element that describes map scale.
It shows the distance on paper that represents a geographical
distance on the map. Maps often include more than one scale bar to
indicate various measurement systems, such as kilometers and
miles.
Figure 191: Sample Scale Bars
Use the Scale Bar tool in the Annotation tool palette to
automatically create representative fractions and scale bars.
Use the Text tool to create a verbal statement.
Common Map Scales
You can create maps with an unlimited number of scales; however,
there are some commonly used scales. Table 60 lists these scales
and their equivalents (Robinson and Sale, 1969).
Table 61 shows the number of pixels per inch for selected scales and
pixel sizes.
Table 60: Common Map Scales
Map Scale | 1/40 inch represents | 1 inch represents | 1 centimeter represents | 1 mile is represented by | 1 kilometer is represented by
1:2,000 4.200 ft 56.000 yd 20.000 m 31.680 in 50.00 cm
1:5,000 10.425 ft 139.000 yd 50.000 m 12.670 in 20.00 cm
1:10,000 6.952 yd 0.158 mi 0.100 km 6.340 in 10.00 cm
1:15,840 11.000 yd 0.250 mi 0.156 km 4.000 in 6.25 cm
1:20,000 13.904 yd 0.316 mi 0.200 km 3.170 in 5.00 cm
1:24,000 16.676 yd 0.379 mi 0.240 km 2.640 in 4.17 cm
1:25,000 17.380 yd 0.395 mi 0.250 km 2.530 in 4.00 cm
1:31,680 22.000 yd 0.500 mi 0.317 km 2.000 in 3.16 cm
1:50,000 34.716 yd 0.789 mi 0.500 km 1.270 in 2.00 cm
1:62,500 43.384 yd 0.986 mi 0.625 km 1.014 in 1.60 cm
1:63,360 0.025 mi 1.000 mi 0.634 km 1.000 in 1.58 cm
1:75,000 0.030 mi 1.180 mi 0.750 km 0.845 in 1.33 cm
1:80,000 0.032 mi 1.260 mi 0.800 km 0.792 in 1.25 cm
1:100,000 0.040 mi 1.580 mi 1.000 km 0.634 in 1.00 cm
1:125,000 0.050 mi 1.970 mi 1.250 km 0.507 in 8.00 mm
1:250,000 0.099 mi 3.950 mi 2.500 km 0.253 in 4.00 mm
1:500,000 0.197 mi 7.890 mi 5.000 km 0.127 in 2.00 mm
1:1,000,000 0.395 mi 15.780 mi 10.000 km 0.063 in 1.00 mm
Courtesy of D. Cunningham and D. Way, The Ohio State University
Table 61: Pixels per Inch
Pixel Size (m) | 1"=100' (1:1200) | 1"=200' (1:2400) | 1"=500' (1:6000) | 1"=1000' (1:12000) | 1"=1500' (1:18000) | 1"=2000' (1:24000) | 1"=4167' (1:50000) | 1"=1 mile (1:63360)
1 30.49 60.96 152.40 304.80 457.20 609.60 1270.00 1609.35
2 15.24 30.48 76.20 152.40 228.60 304.80 635.00 804.67
2.5 12.13 24.38 60.96 121.92 182.88 243.84 508.00 643.74
5 6.10 12.19 30.48 60.96 91.44 121.92 254.00 321.87
10 3.05 6.10 15.24 30.48 45.72 60.96 127.00 160.93
15 2.03 4.06 10.16 20.32 30.48 40.64 84.67 107.29
20 1.52 3.05 7.62 15.24 22.86 30.48 63.50 80.47
25 1.22 2.44 6.10 12.19 18.29 24.38 50.80 64.37
30 1.02 2.03 5.08 10.16 15.240 20.32 42.33 53.64
35 .87 1.74 4.35 8.71 13.08 17.42 36.29 45.98
40 .76 1.52 3.81 7.62 11.43 15.24 31.75 40.23
45 .68 1.35 3.39 6.77 10.16 13.55 28.22 35.76
50 .61 1.22 3.05 6.10 9.14 12.19 25.40 32.19
75 .41 .81 2.03 4.06 6.10 8.13 16.93 21.46
100 .30 .61 1.52 3.05 4.57 6.10 12.70 16.09
150 .20 .41 1.02 2.03 3.05 4.06 8.47 10.73
200 .15 .30 .76 1.52 2.29 3.05 6.35 8.05
250 .12 .24 .61 1.22 1.83 2.44 5.08 6.44
300 .10 .20 .51 1.02 1.52 2.03 4.23 5.36
350 .09 .17 .44 .87 1.31 1.74 3.63 4.60
400 .08 .15 .38 .76 1.14 1.52 3.18 4.02
450 .07 .14 .34 .68 1.02 1.35 2.82 3.58
500 .06 .12 .30 .61 .91 1.22 2.54 3.22
600 .05 .10 .25 .51 .76 1.02 2.12 2.69
700 .04 .09 .22 .44 .65 .87 1.81 2.30
800 .04 .08 .19 .38 .57 .76 1.59 2.01
900 .03 .07 .17 .34 .51 .68 1.41 1.79
1000 .03 .06 .15 .30 .46 .61 1.27 1.61
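The values in Table 61 follow directly from the scale and the pixel size: one map inch covers (scale denominator x 0.0254) meters on the ground, and dividing by the pixel size gives pixels per inch. A small Python check (illustrative only):

METERS_PER_INCH = 0.0254

def pixels_per_inch(scale_denominator, pixel_size_m):
    """Number of pixels spanned by one inch of map at the given scale."""
    ground_m_per_map_inch = scale_denominator * METERS_PER_INCH
    return ground_m_per_map_inch / pixel_size_m

print(round(pixels_per_inch(24000, 30), 2))   # 20.32, as in Table 61
print(round(pixels_per_inch(63360, 1), 2))    # 1609.34 (the table rounds the mile to 1609.35 m)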
Table 62 lists the number of acres and hectares per pixel for various
pixel sizes.
Courtesy of D. Cunningham and D. Way, The Ohio State University
Table 62: Acres and Hectares per Pixel
Pixel Size (m) Acres Hectares
1 0.0002 0.0001
2 0.0010 0.0004
2.5 0.0015 0.0006
5 0.0062 0.0025
10 0.0247 0.0100
15 0.0556 0.0225
20 0.0988 0.0400
25 0.1544 0.0625
30 0.2224 0.0900
35 0.3027 0.1225
40 0.3954 0.1600
45 0.5004 0.2025
50 0.6178 0.2500
75 1.3900 0.5625
100 2.4710 1.0000
150 5.5598 2.2500
200 9.8842 4.0000
250 15.4440 6.2500
300 22.2394 9.0000
350 30.2703 12.2500
400 39.5367 16.0000
450 50.0386 20.2500
500 61.7761 25.0000
600 88.9576 36.0000
700 121.0812 49.0000
800 158.1468 64.0000
900 200.1546 81.0000
1000 247.1044 100.0000
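Table 62 is a direct area conversion: a square pixel of side s meters covers s² square meters, which is s²/10,000 hectares or s²/4,046.856 acres. A short Python check (illustrative only):

SQ_M_PER_HECTARE = 10000
SQ_M_PER_ACRE = 4046.856

def area_per_pixel(pixel_size_m):
    """Return (acres, hectares) covered by one square pixel."""
    sq_m = pixel_size_m ** 2
    return sq_m / SQ_M_PER_ACRE, sq_m / SQ_M_PER_HECTARE

acres, hectares = area_per_pixel(30)
print(round(acres, 4), round(hectares, 4))   # 0.2224 0.09, matching Table 62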
Legends A legend is a key to the colors, symbols, and line styles that are used
in a map. Legends are especially useful for maps of categorical data
displayed in pseudo color, where each color represents a different
feature or category. A legend can also be created for a single layer
of continuous data, displayed in gray scale. Legends are likewise
used to describe all unknown or unique symbols utilized. Symbols in
legends should appear exactly the same size and color as they
appear on the map (Robinson and Sale, 1969).
Figure 192: Sample Legend
Use the Legend tool in the Annotation tool palette to
automatically create color legends. Symbol legends are not
created automatically, but can be created manually.
Neatlines, Tick
Marks, and Grid
Lines
Neatlines, tick marks, and grid lines serve to provide a
georeferencing system for map detail and are based on the map
projection of the image shown.
A neatline is a rectangular border around the image area of a
map. It differs from the map border in that the border usually
encloses the entire map, not just the image area.
Tick marks are small lines along the edge of the image area or
neatline that indicate regular intervals of distance.
Grid lines are intersecting lines that indicate regular intervals of
distance, based on a coordinate system. Usually, they are an
extension of tick marks. It is often helpful to place grid lines over
the image area of a map. This is becoming less common on
thematic maps, but is really up to the map designer. If the grid
lines help readers understand the content of the map, they
should be used.
Figure 193: Sample Neatline, Tick Marks, and Grid Lines
Grid lines may also be referred to as a graticule.
Graticules are discussed in more detail in Projections.
Use the Grid/Tick tool in the Annotation tool palette to create
neatlines, tick marks, and grid lines. Tick marks and grid lines
can also be created over images displayed in a Viewer. See the
On-Line Help for instructions.
Symbols Since maps are a greatly reduced version of the real-world, objects
cannot be depicted in their true shape or size. Therefore, a set of
symbols is devised to represent real-world objects. There are two
major classes of symbols:
replicative
abstract
Replicative symbols are designed to look like their real-world
counterparts; they represent tangible objects, such as coastlines,
trees, railroads, and houses. Abstract symbols usually take the form
of geometric shapes, such as circles, squares, and triangles. They
are traditionally used to represent amounts that vary from place to
place, such as population density, amount of rainfall, etc. (Dent,
1985).
Both replicative and abstract symbols are composed of one or more
of the following annotation elements:
point
line
area
Symbol Types
These basic elements can be combined to create three different
types of replicative symbols:
planformed after the basic outline of the object it represents.
For example, the symbol for a house might be a square, because
most houses are rectangular.
profileformed like the profile of an object. Profile symbols
generally represent vertical objects, such as trees, windmills, oil
wells, etc.
functionformed after the activity that a symbol represents. For
example, on a map of a state park, a symbol of a tent would
indicate the location of a camping area.
Figure 194: Sample Symbols
Symbols can have different sizes, colors, and patterns to indicate
different meanings within a map. The use of size, color, and pattern
generally shows qualitative or quantitative differences among areas
marked. For example, if a circle is used to show cities and towns,
larger circles would be used to show areas with higher population. A
specific color could be used to indicate county seats. Since symbols
are not drawn to scale, their placement is crucial to effective
communication.
Use the Symbol tool in the Annotation tool palette and the
symbol library to place symbols in maps.
Labels and
Descriptive Text
Place names and other labels convey important information to the
reader about the features on the map. Any features that help orient
the reader or are important to the content of the map should be
labeled. Descriptive text on a map can include the map title and
subtitle, copyright information, captions, credits, production notes,
or other explanatory material.
Title
The map title usually draws attention by virtue of its size. It focuses
the reader's attention on the primary purpose of the map. The title
may be omitted, however, if captions are provided outside of the
image area (Dent, 1985).
Credits
Map credits (or source information) can include the data source and
acquisition date, accuracy information, and other details that are
required or helpful to readers. For example, if you include data that
you do not own in a map, you must give credit to the owner.
Use the Text tool in the Annotation tool palette to add labels and
descriptive text to maps.
Typography and Lettering The choice of type fonts and styles and how names are lettered can
make the difference between a clear and attractive map and a
jumble of imagery and text. As with many other aspects of map
design, this is a very subjective area and many organizations already
have guidelines to use. This section is intended as an introduction to
the concepts involved and to convey traditional guidelines, where
available.
If your organization does not have a set of guidelines for the
appearance of maps and you plan to produce many in the future, it
would be beneficial to develop a style guide specifically for mapping.
This ensures that all of the maps produced follow the same
conventions, regardless of who actually makes the map.
ERDAS IMAGINE enables you to make map templates to
facilitate the development of map standards within your
organization.
Type Styles
Type style refers to the appearance of the text and may include font,
size, and style (bold, italic, underline, etc.). Although the type styles
used in maps are purely a matter of the designer's taste, the
following techniques help to make maps more legible (Robinson and
Sale, 1969; Dent, 1985).
Do not use too many different typefaces in a single map.
Generally, one or two styles are enough when also using the
variations of those type faces (e.g., bold, italic, underline, etc.).
When using two typefaces, use a serif and a sans serif, rather
than two different serif fonts or two different sans serif fonts
[e.g., Sans (sans serif) and Roman (serif) could be used together
in one map].
Avoid ornate text styles because they can be difficult to read.
Exercise caution in using very thin letters that may not reproduce
well. On the other hand, using letters that are too bold may
obscure important information in the image.
Use different sizes of type for showing varying levels of
importance. For example, on a map with city and town labels,
city names are usually in a larger type size than the town names.
Use no more than four to six different type sizes.
Put more important text, such as labels, titles, and names, in all capital letters, and less important text in lowercase with initial capitals.
This is a matter of personal preference, although names in which
the letters must be spread out across a large area are better in
all capital letters. (Studies have found that capital letters are
more difficult to read; therefore, lowercase letters might improve
the legibility of the map.)
In the past, hydrology, landform, and other natural features were
labeled in italic. However, this is not strictly adhered to by map
makers today, although water features are still nearly always
labeled in italic.
Figure 195: Sample Sans Serif and Serif Typefaces with
Various Styles Applied
Use the Styles dialog to adjust the style of text.
Lettering
Lettering refers to the way in which place names and other labels are
added to a map. Letter spacing, orientation, and position are the
three most important factors in lettering. Here again, there are no
set rules for how lettering is to appear. Much is determined by the
purpose of the map and the end user. Many organizations have
developed their own rules for lettering. Here is a list of guidelines
that have been used by cartographers in the past (Robinson and
Sale, 1969; Dent, 1985).
Names should be either entirely on land or water, not overlapping both.
Lettering should generally be oriented to match the orientation
structure of the map. In large-scale maps this means parallel
with the upper and lower edges, and in small-scale maps, this
means in line with the parallels of latitude.
Type should not be curved (i.e., different from preceding bullet)
unless it is necessary to do so.
If lettering must be disoriented, it should never be set in a
straight line, but should always have a slight curve.
Names should be letter spaced (i.e., space between individual
letters, or kerning) as little as necessary.
Where the continuity of names and other map data, such as lines
and tones, conflicts with the lettering, the data, but not the
names, should be interrupted.
Lettering should never be upside-down in any respect.
Lettering that refers to point locations should be placed above or
below the point, preferably above and to the right.
The letters identifying linear features (roads, rivers, railroads,
etc.) should not be spaced. The word(s) should be repeated along
the feature as often as necessary to facilitate identification.
These labels should be placed above the feature and river names
should slant in the direction of the river flow (if the label is italic).
For geographical names, use the native language of the intended
map user. For an English-speaking audience, the name Germany
should be used, rather than Deutschland.
Figure 196: Good Lettering vs. Bad Lettering
Text Color
Many cartographers argue that all lettering on a map should be
black. However, the map may be well-served by incorporating color
into its design. In fact, studies have shown that coding labels by
color can improve a reader's ability to find information (Dent, 1985).
Projections
This section is adapted from Map Projections for Use with the
Geographic Information System by Lee and Walsh (Lee and
Walsh, 1984).
A map projection is the manner in which the spherical surface of the
Earth is represented on a flat (two-dimensional) surface. This can be
accomplished by direct geometric projection or by a mathematically
derived transformation. There are many kinds of projections, but all
involve transfer of the distinctive global patterns of parallels of
latitude and meridians of longitude onto an easily flattened surface,
or developable surface.
The three most common developable surfaces are the cylinder, cone,
and plane (Figure 197). A plane is already flat, while a cylinder or
cone may be cut and laid out flat, without stretching. Thus, map
projections may be classified into three general families: cylindrical,
conical, and azimuthal or planar.
Map projections are selected in the Projection Chooser. For more
information about the Projection Chooser, see the ERDAS
IMAGINE On-Line Help.
Properties of Map
Projections
Regardless of what type of projection is used, it is inevitable that
some error or distortion occurs in transforming a spherical surface
into a flat surface. Ideally, a distortion-free map has four valuable
properties:
conformality
equivalence
equidistance
true direction
Each of these properties is explained below. No map projection can
be true in all of these properties. Therefore, each projection is
devised to be true in selected properties, or most often, a
compromise among selected properties. Projections that
compromise in this manner are known as compromise projections.
Conformality is the characteristic of true shape, wherein a projection
preserves the shape of any small geographical area. This is
accomplished by exact transformation of angles around points. One
necessary condition is the perpendicular intersection of grid lines as
on the globe. The property of conformality is important in maps
which are used for analyzing, guiding, or recording motion, as in
navigation. A conformal map or projection is one that has the
property of true shape.
Equivalence is the characteristic of equal area, meaning that areas
on one portion of a map are in scale with areas in any other portion.
Preservation of equivalence involves inexact transformation of
angles around points and thus, is mutually exclusive with
conformality except along one or two selected lines. The property of
equivalence is important in maps that are used for comparing
density and distribution data, as in populations.
Equidistance is the characteristic of true distance measuring. The
scale of distance is constant over the entire map. This property can
be fulfilled on any given map from one, or at most two, points in any
direction or along certain lines. Equidistance is important in maps
that are used for analyzing measurements (i.e., road distances).
Typically, reference lines such as the equator or a meridian are
chosen to have equidistance and are termed standard parallels or
standard meridians.
True direction is characterized by a direction line between two points
that crosses reference lines (e.g., meridians) at a constant angle or
azimuth. An azimuth is an angle measured clockwise from a
meridian, going north to east. The line of constant or equal direction
is termed a rhumb line.
The property of constant direction makes it comparatively easy to
chart a navigational course. However, on a spherical surface, the
shortest surface distance between two points is not a rhumb line, but
a great circle, being an arc of a circle whose center is the center of
the Earth. Along a great circle, azimuths constantly change (unless
the great circle is the equator or a meridian).
Thus, a more desirable property than true direction may be where
great circles are represented by straight lines. This characteristic is
most important in aviation. Note that all meridians are great circles,
but the only parallel that is a great circle is the equator.
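The difference between a rhumb line and a great circle can be made concrete with a distance calculation. The Python sketch below uses the standard haversine formula on a spherical Earth; it is an illustration only, not part of ERDAS IMAGINE, and the 6,371 km radius is an assumed mean value.

import math

EARTH_RADIUS_KM = 6371.0   # assumed mean spherical radius

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two Lat/Lon points, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Two points on the 40 N parallel, 60 degrees of longitude apart: the rhumb line
# runs due east along the parallel (about 5,111 km), but the great circle between
# the same points is shorter (about 5,009 km).
rhumb_km = EARTH_RADIUS_KM * math.cos(math.radians(40)) * math.radians(60)
print(round(rhumb_km), round(great_circle_km(40, 0, 40, 60)))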
Figure 197: Projection Types
Projection Types Although a great number of projections have been devised, the
majority of them are geometric or mathematical variants of the basic
direct geometric projection families described below. Choice of the
projection to be used depends upon the true property or combination
of properties desired for effective cartographic analysis.
Azimuthal Projections
Azimuthal projections, also called planar projections, are
accomplished by drawing lines from a given perspective point
through the globe onto a tangent plane. This is conceptually
equivalent to tracing a shadow of a figure cast by a light source. A
tangent plane intersects the global surface at only one point and is
perpendicular to a line passing through the center of the sphere.
Thus, these projections are symmetrical around a chosen center or
central meridian. Choice of the projection center determines the
aspect, or orientation, of the projection surface.
Azimuthal projections may be centered:
on the poles (polar aspect)
at a point on the equator (equatorial aspect)
at any other orientation (oblique aspect)
The origin of the projection lines, that is, the perspective point, may also assume various positions. For example, it may be:
the center of the Earth (gnomonic)
an infinite distance away (orthographic)
on the Earth's surface, opposite the projection plane
(stereographic)
Conical Projections
Conical projections are accomplished by intersecting, or touching, a
cone with the global surface and mathematically projecting lines
onto this developable surface.
A tangent cone intersects the global surface to form a circle. Along
this line of intersection, the map is error-free and possesses
equidistance. Usually, this line is a parallel, termed the standard
parallel.
Cones may also be secant, and intersect the global surface, forming
two circles that possess equidistance. In this case, the cone slices
underneath the global surface, between the standard parallels. Note
that the use of the word secant, in this instance, is only conceptual
and not geometrically accurate. Conceptually, the conical aspect
may be polar, equatorial, or oblique. Only polar conical projections
are supported in ERDAS IMAGINE.
Figure 198: Tangent and Secant Cones
Cylindrical Projections
Cylindrical projections are accomplished by intersecting, or touching,
a cylinder with the global surface. The surface is mathematically
projected onto the cylinder, which is then cut and unrolled.
A tangent cylinder intersects the global surface on only one line to
form a circle, as with a tangent cone. This central line of the
projection is commonly the equator and possesses equidistance.
If the cylinder is rotated 90 degrees from the vertical (i.e., the long
axis becomes horizontal), then the aspect becomes transverse,
wherein the central line of the projection becomes a chosen standard
meridian as opposed to a standard parallel. A secant cylinder, one
slightly less in diameter than the globe, has two lines possessing
equidistance.
Figure 199: Tangent and Secant Cylinders
Perhaps the most famous cylindrical projection is the Mercator,
which became the standard navigational map. Mercator possesses
true direction and conformality.
Other Projections
The projections discussed so far are projections that are created by
projecting from a sphere (the Earth) onto a plane, cone, or cylinder.
Many other projections cannot be created so easily.
Modified projections are modified versions of another projection. For
example, the Space Oblique Mercator projection is a modification of
the Mercator projection. These modifications are made to reduce
distortion, often by including additional standard lines or a different
pattern of distortion.
Pseudo projections have only some of the characteristics of another
class of projection. For example, the Sinusoidal is called a
pseudocylindrical projection because all lines of latitude are straight
and parallel, and all meridians are equally spaced. However, it
cannot truly be a cylindrical projection, because all meridians except
the central meridian are curved. This results in the Earth appearing
oval instead of rectangular (Environmental Systems Research
Institute, 1991).
Geographical and
Planar
Coordinates
Map projections require a point of reference on the Earth's surface.
Most often this is the center, or origin, of the projection. This point
is defined in two coordinate systems:
geographical
planar
Geographical
Geographical, or spherical, coordinates are based on the network of
latitude and longitude (Lat/Lon) lines that make up the graticule of
the Earth. Within the graticule, lines of longitude are called
meridians, which run north/south, with the prime meridian at 0° (Greenwich, England). Meridians are designated as 0° to 180°, east or west of the prime meridian. The 180° meridian (opposite the
prime meridian) is the International Dateline.
Lines of latitude are called parallels, which run east/west. Parallels
are designated as 0° at the equator to 90° at the poles. The equator
is the largest parallel. Latitude and longitude are defined with
respect to an origin located at the intersection of the equator and the
prime meridian. Lat/Lon coordinates are reported in degrees,
minutes, and seconds. Map projections are various arrangements of
the Earth's latitude and longitude lines onto a plane.
Planar
Planar, or Cartesian, coordinates are defined by a column and row
position on a planar grid (X,Y). The origin of a planar coordinate
system is typically located south and west of the origin of the
projection. Coordinates increase from 0,0 going east and north. The
origin of the projection, being a false origin, is defined by values of
false easting and false northing. Grid references always contain an
even number of digits, and the first half refers to the easting and the
second half the northing.
In practice, this eliminates negative coordinate values and allows
locations on a map projection to be defined by positive coordinate
pairs. Values of false easting are read first and may be in meters or
feet.
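False easting and false northing are simply offsets added to the projected coordinates so that the mapped area has positive values, as the short Python sketch below illustrates (a generic illustration; the 1,000,000 m false northing is an arbitrary example value, while 500,000 m is the familiar UTM false easting).

def apply_false_origin(x, y, false_easting, false_northing):
    """Shift projected coordinates by the false easting and northing."""
    return x + false_easting, y + false_northing

# A point 150,000 m west and 30,000 m south of the projection origin
# becomes a positive (easting, northing) pair after the shift.
print(apply_false_origin(-150000, -30000, 500000, 1000000))   # (350000, 970000)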
Available Map
Projections
In ERDAS IMAGINE, map projection information appears in the
Projection Chooser, which is used to georeference images and to
convert map coordinates from one type of projection to another. The
Projection Chooser provides the following projections:
USGS Projections
Alaska Conformal
Albers Conical Equal Area
Azimuthal Equidistant
Behrmann
Bonne
Cassini
Eckert I
Eckert II
Eckert III
Eckert IV
Eckert V
Eckert VI
EOSAT SOM
Equidistant Conic
Equidistant Cylindrical
Equirectangular (Plate Carre)
Gall Stereographic
Gauss Kruger
General Vertical Near-side Perspective
Geographic (Lat/Lon)
Gnomonic
Hammer
Interrupted Goode Homolosine
Interrupted Mollweide
Lambert Azimuthal Equal Area
Lambert Conformal Conic
Loximuthal
Mercator
Miller Cylindrical
Modified Transverse Mercator
Mollweide
New Zealand Map Grid
Oblated Equal Area
Oblique Mercator (Hotine)
Orthographic
Plate Carre
Polar Stereographic
Polyconic
Quartic Authalic
Robinson
RSO
Sinusoidal
Space Oblique Mercator
Space Oblique Mercator (Formats A & B)
State Plane
Stereographic
Stereographic (Extended)
Transverse Mercator
Two Point Equidistant
UTM
Van der Grinten I
Wagner IV
Wagner VII
Winkel I
External Projections
Albers Equal Area (see Albers Conical Equal Area on page 527)
Azimuthal Equidistant (see Azimuthal Equidistant on page 530)
Bipolar Oblique Conic Conformal
Cassini-Soldner
Conic Equidistant (see Equidistant Conic on page 552)
Laborde Oblique Mercator
Lambert Azimuthal Equal Area (see Lambert Azimuthal Equal
Area on page 570)
Lambert Conformal Conic (see Lambert Conformal Conic on
page 573)
Mercator (see Mercator on page 578)
Minimum Error Conformal
Modified Polyconic
Modified Stereographic
Mollweide Equal Area (see Mollweide on page 585)
Oblique Mercator (see Oblique Mercator (Hotine) on page 589)
Orthographic (see Orthographic on page 592)
Plate Carre (see Equirectangular (Plate Carre) on page 555)
Rectified Skew Orthomorphic (see RSO on page 605)
Regular Polyconic (see Polyconic on page 599)
Robinson Pseudocylindrical (see Robinson on page 603)
Sinusoidal (see Sinusoidal on page 606)
Southern Orientated Gauss Conformal
Stereographic (see Stereographic on page 621)
Swiss Cylindrical
Stereographic (Oblique) (see Stereographic on page 621)
Transverse Mercator (see Transverse Mercator on page 625)
Universal Transverse Mercator (see UTM on page 629)
Van der Grinten (see Van der Grinten I on page 632)
Winkel's Tripel
Choice of the projection to be used depends upon the desired major
property and the region to be mapped (see Table 63). After choosing
the desired map projection, several parameters are required for its
definition (see Table 64). These parameters fall into three general
classes: (1) definition of the spheroid, (2) definition of the surface
viewing window, and (3) definition of scale.
For each map projection, a menu of spheroids displays, along with
appropriate prompts that enable you to specify these parameters.
Units
Use the units of measure that are appropriate for the map projection
type.
Lat/Lon coordinates are expressed in decimal degrees. When
prompted, you can use the DD function to convert coordinates in
degrees, minutes, seconds format to decimal. For example, for
30°51'12":
dd(30,51,12) = 30.85333
-dd(30,51,12) = -30.85333
or
30:51:12 = 30.85333
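A quick way to sanity-check such conversions outside of ERDAS IMAGINE is a few lines of Python; the function name dms_to_dd below is invented for this sketch and is not part of the software:

    def dms_to_dd(degrees, minutes, seconds):
        """Convert degrees, minutes, seconds to decimal degrees."""
        sign = -1.0 if degrees < 0 else 1.0
        return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

    print(dms_to_dd(30, 51, 12))     # 30.85333...
    print(-dms_to_dd(30, 51, 12))    # -30.85333...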
You can also enter Lat/Lon coordinates in radians.
State Plane coordinates are expressed in feet or meters.
All other coordinates are expressed in meters.
Note also that values for longitude west of Greenwich, England,
and values for latitude south of the equator are to be entered as
negatives.
Table 63: Map Projections

# | Map projection | Construction | Property | Use
0 | Geographic | N/A | N/A | Data entry, spherical coordinates
1 | Universal Transverse Mercator | Cylinder (see #9) | Conformal | Data entry, plane coordinates
2 | State Plane | (see #4, 7, 9, 20) | Conformal | Data entry, plane coordinates
3 | Albers Conical Equal Area | Cone | Equivalent | Middle latitudes, E-W expanses
4 | Lambert Conformal Conic | Cone | Conformal, True Direction | Middle latitudes, E-W expanses, flight (straight great circles)
5 | Mercator | Cylinder | Conformal, True Direction | Nonpolar regions, navigation (straight rhumb lines)
6 | Polar Stereographic | Plane | Conformal | Polar regions
7 | Polyconic | Cone | Compromise | N-S expanses
8 | Equidistant Conic | Cone | Equidistant | Middle latitudes, E-W expanses
9 | Transverse Mercator | Cylinder | Conformal | N-S expanses
10 | Stereographic | Plane | Conformal | Hemispheres, continents
11 | Lambert Azimuthal Equal Area | Plane | Equivalent | Square or round expanses
12 | Azimuthal Equidistant | Plane | Equidistant | Polar regions, radio/seismic work (straight great circles)
13 | Gnomonic | Plane | Compromise | Navigation, seismic work (straight great circles)
14 | Orthographic | Plane | Compromise | Globes, pictorial
15 | General Vertical Near-Side Perspective | Plane | Compromise | Hemispheres or less
16 | Sinusoidal | Pseudo-Cylinder | Equivalent | N-S expanses or equatorial regions
17 | Equirectangular | Cylinder | Compromise | City maps, computer plotting (simplistic)
18 | Miller Cylindrical | Cylinder | Compromise | World maps
19 | Van der Grinten I | N/A | Compromise | World maps
20 | Oblique Mercator | Cylinder | Conformal | Oblique expanses (e.g., Hawaiian islands), satellite tracking
21 | Space Oblique Mercator | Cylinder | Conformal | Mapping of Landsat imagery
22 | Modified Transverse Mercator | Cylinder | Conformal | Alaska
Table 64: Projection Parameters

Table 64 indicates, for each projection type numbered 3 through 22 (the numbers correspond to those used in Table 63), which of the following parameters must be supplied. The parameters fall into three groups:

Definition of Spheroid:
spheroid selections

Definition of Surface Viewing Window:
false easting
false northing
longitude of central meridian
latitude of origin of projection
longitude of center of projection
latitude of center of projection
latitude of first standard parallel
latitude of second standard parallel
latitude of true scale
longitude below pole

Definition of Scale:
scale factor at central meridian
height of perspective point above sphere
scale factor at center of projection

NOTE: Parameters for definition of map projection types 0-2 are not applicable and are described in the text. Additional parameters required for definition of some map projections are described in the text of Map Projections.
Choosing a Map
Projection
Map Projection Uses in a
GIS
Selecting a map projection for the GIS database enables you to
(Maling, 1992):
decide how to best display the area of interest or illustrate the
results of analysis
register all imagery to a single coordinate system for easier
comparisons
test the accuracy of the information and perform measurements
on the data
Deciding Factors Depending on your applications and the uses for the maps created,
one or several map projections may be used. Many factors must be
weighed when selecting a projection, including:
type of map
special properties that must be preserved
types of data to be mapped
map accuracy
scale
If you are mapping a relatively small area, virtually any map
projection is acceptable. In mapping large areas (entire countries,
continents, and the world), the choice of map projection becomes
more critical. In small areas, the amount of distortion in a particular
projection is barely, if at all, noticeable. In large areas, there may be
little or no distortion in the center of the map, but distortion
increases outward toward the edges of the map.
Guidelines Since the sixteenth century, there have been three fundamental
rules regarding map projection use (Maling, 1992):
if the country to be mapped lies in the tropics, use a cylindrical
projection
if the country to be mapped lies in the temperate latitudes, use
a conical projection
if the map is required to show one of the polar regions, use an
azimuthal projection
These rules are no longer held so strongly. There are too many
factors to consider in map projection selection for broad
generalizations to be effective today. The purpose of a particular
map and the merits of the individual projections must be examined
before an educated choice can be made. However, there are some
guidelines that may help you select a projection (Pearson, 1990):
Statistical data should be displayed using an equal area
projection to maintain proper proportions (although shape may
be sacrificed).
Equal area projections are well-suited to thematic data.
Where shape is important, use a conformal projection.
Spheroids The previous discussion of direct geometric map projections
assumes that the Earth is a sphere, and for many maps this is
satisfactory. However, due to rotation of the Earth around its axis,
the planet bulges slightly at the equator. This flattening of the sphere
makes it an oblate spheroid, which is an ellipse rotated around its
shorter axis.
Figure 200: Ellipse
An ellipse is defined by its semi-major (long) and semi-minor (short)
axes.
The amount of flattening of the Earth is expressed as the ratio:

f = (a - b) / a

Where:
a = the equatorial radius (semi-major axis)
b = the polar radius (semi-minor axis)

Most map projections use eccentricity (e²) rather than flattening. The relationship is:

e² = 2f - f²
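As an informal sketch of how these two quantities relate, the following Python lines compute f and e² from the Clarke 1866 axes listed in Table 65 (the variable names are illustrative only):

    # Clarke 1866 semi-major and semi-minor axes, in meters (see Table 65)
    a = 6378206.4
    b = 6356583.8

    f = (a - b) / a        # flattening
    e2 = 2 * f - f ** 2    # eccentricity squared

    print(f, e2)           # f is roughly 1/295, e2 is roughly 0.00677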
The flattening of the Earth is about 1 part in 300, and becomes
significant in map accuracy at a scale of 1:100,000 or larger.
Calculation of a map projection requires definition of the spheroid (or
ellipsoid) in terms of the length of axes and eccentricity squared (or
radius of the reference sphere). Several principal spheroids are in
use by one or more countries. Differences are due primarily to
calculation of the spheroid for a particular region of the Earth's
surface. Only recently have satellite tracking data provided spheroid
determinations for the entire Earth. However, these spheroids may
not give the best fit for a particular region. In North America, the
spheroid in use is the Clarke 1866 for NAD27 and GRS 1980 for
NAD83 (State Plane).
If other regions are to be mapped, different projections should be
used. Upon choosing a desired projection type, you have the option
to choose from the following list of spheroids:
Airy
Australian National
Bessel
Clarke 1866
Clarke 1880
Everest
GRS 1980
Helmert
Hough
International 1909
Krasovsky
Mercury 1960
Modified Airy
Modified Everest
Modified Mercury 1968
New International 1967
Southeast Asia
Sphere of Nominal Radius of Earth
Sphere of Radius 6370997 m
Walbeck
WGS 66
WGS 72
WGS 84
The spheroids listed above are the most commonly used. There
are many other spheroids available, and they are listed in the
Projection Chooser. These additional spheroids are not
documented in this manual. You can use the IMAGINE
Developers Toolkit to add your own map projections and
spheroids to ERDAS IMAGINE.
The semi-major and semi-minor axes of all supported spheroids are
listed in Table 65, as well as the principal uses of these spheroids.
Table 65: Spheroids for use with ERDAS IMAGINE

Spheroid | Semi-Major Axis | Semi-Minor Axis | Use
165 | 6378165.0 | 6356783.0 | Global
Airy (1940) | 6377563.0 | 6356256.91 | England
Airy Modified (1849) | | | Ireland
Australian National (1965) | 6378160.0 | 6356774.719 | Australia
Bessel (1841) | 6377397.155 | 6356078.96284 | Central Europe, Chile, and Indonesia
Bessell (Namibia) | 6377483.865 | 6356165.383 | Namibia
Clarke 1858 | 6378293.0 | 6356619.0 | Global
Clarke 1866 | 6378206.4 | 6356583.8 | North America and the Philippines
Clarke 1880 | 6378249.145 | 6356514.86955 | France and Africa
Clarke 1880 IGN | 6378249.2 | 6356515.0 | Global
Everest (1830) | 6377276.3452 | 6356075.4133 | India, Burma, and Pakistan
Everest (1956) | 6377301.243 | 6356100.2284 | India, Nepal
Everest (1969) | 6377295.664 | 6356094.6679 | Global
Everest (Malaysia & Singapore) | 6377304.063 | 6356103.038993 | Global
Everest (Pakistan) | 6377309.613 | 6356108.570542 | Pakistan
Everest (Sabah & Sarawak) | 6377298.556 | 6356097.5503 | Brunei, East Malaysia
Fischer (1960) | 6378166.0 | 6356784.2836 | Global
Fischer (1968) | 6378150.0 | 6356768.3372 | Global
GRS 1980 (Geodetic Reference System) | 6378137.0 | 6356752.31414 | Adopted in North America for 1983 Earth-centered coordinate system (satellite)
Hayford | 6378388.0 | 6356911.946128 | Global
Helmert | 6378200.0 | 6356818.16962789092 | Egypt
Hough | 6378270.0 | 6356794.343479 | As International 1909 above, with modification of ellipse axes
IAU 1965 | 6378160.0 | 6356775.0 | Global
Indonesian 1974 | 6378160.0 | 6356774.504086 | Global
International 1909 (= Hayford) | 6378388.0 | 6356911.94613 | Remaining parts of the world not listed here
IUGG 1967 | 6378160.0 | 6356774.516 | Hungary
Krasovsky (1940) | 6378245.0 | 6356863.0188 | Former Soviet Union and some East European countries
Mercury 1960 | 6378166.0 | 6356794.283666 | Early satellite, rarely used
Modified Airy | 6377341.89 | 6356036.143 | As Airy above, more recent version
Modified Everest | 6377304.063 | 6356103.039 | As Everest above, more recent version
Modified Mercury 1968 | 6378150.0 | 6356768.337303 | As Mercury 1960 above, more recent calculation
Modified Fischer (1960) | 6378155.0 | 6356773.3205 | Singapore
New International 1967 | 6378157.5 | 6356772.2 | As International 1909 below, more recent calculation
SGS 85 (Soviet Geodetic System 1985) | 6378136.0 | 6356751.3016 | Soviet Union
South American (1969) | 6378160.0 | 6356774.7192 | South America
Southeast Asia | 6378155.0 | 6356773.3205 | As named
Sphere | 6371000.0 | 6371000.0 | Global
Sphere of Nominal Radius of Earth | 6370997.0 | 6370997.0 | A perfect sphere
Sphere of Radius 6370997 m | 6370997.0 | 6370997.0 | A perfect sphere with the same surface area as the Clarke 1866 spheroid
Walbeck (1819) | 6376896.0 | 6355834.8467 | Soviet Union, up to 1910
WGS 60 (World Geodetic System 1960) | 6378165.0 | 6356783.287 | Global
WGS 66 (World Geodetic System 1966) | 6378145.0 | 6356759.769356 | As WGS 72 above, older version
WGS 72 (World Geodetic System 1972) | 6378135.0 | 6356750.519915 | NASA (satellite)
WGS 84 (World Geodetic System 1984) | 6378137.0 | 6356752.31424517929 | As WGS 72, more recent calculation
Map Composition
Learning Map
Composition
Cartography and map composition may seem like an entirely new
discipline to many GIS and image processing analysts, and that is
partly true. But, by learning the basics of map design, the results of
your analyses can be communicated much more effectively. Map
composition is also much easier than in the past when maps were
hand drawn. Many GIS analysts may already know more about
cartography than they realize, simply because they have access to
map-making software. Perhaps the first maps you made were
imitations of existing maps, but that is how we learn. This chapter is
certainly not a textbook on cartography; it is merely an overview of
some of the issues involved in creating cartographically-correct
products.
Plan the Map After your analysis is complete, you can begin map composition. The
first step in creating a map is to plan its contents and layout. The
following questions may aid in the planning process:
How is this map going to be used?
Will the map have a single theme or many?
Is this a single map, or is it part of a series of similar maps?
Who is the intended audience? What is the level of their
knowledge about the subject matter?
Will it remain in digital form and be viewed on the computer
screen or will it be printed?
If it is going to be printed, how big will it be? Will it be printed in
color or black and white?
Are there map guidelines already set up by your organization?
The answers to these questions can help to determine the type of
information that must go into the composition and the layout of that
information. For example, suppose you are going to do a series of
maps about global deforestation for presentation to Congress, and
you are going to print these maps in color on an inkjet printer. This
scenario might lead to the following conclusions:
A format (layout) should be developed for the series, so that all
the maps produced have the same style.
The colors used should be chosen carefully, since the maps are
printed in color.
Political boundaries might need to be included, since they
influence the types of actions that can be taken in each
deforested area.
The typeface size and style to be used for titles, captions, and
labels have to be larger than for maps printed on 8.5 × 11.0
sheets. The type styles selected should be the same for all maps.
Select symbols that are widely recognized, and make sure they
are all explained in a legend.
Cultural features (roads, urban centers, etc.) may be added for
locational reference.
Include a statement about the accuracy of each map, since these
maps may be used in very high-level decisions.
Once this information is in hand, you can actually begin sketching
the look of the map on a sheet of paper. It is helpful for you to know
how you want the map to look before starting the ERDAS IMAGINE
Map Composer. Doing so ensures that all of the necessary data
layers are available, and makes the composition phase go quickly.
See the tour guide about Map Composer in the ERDAS IMAGINE
Tour Guides for step-by-step instructions on creating a map.
Refer to the On-Line Help for details about how Map Composer
works.
Map Accuracy Maps are often used to influence legislation, promote a cause, or
enlighten a particular group before decisions are made. In these
cases, especially, map accuracy is of the utmost importance. There
are many factors that influence map accuracy: the projection used,
scale, base data, generalization, etc. The analyst/cartographer must
be aware of these factors before map production begins. The
accuracy of the map, in a large part, determines its usefulness. It is
usually up to individual organizations to perform accuracy
assessment and decide how those findings are reflected in the
products they produce. However, several agencies have established
guidelines for map makers.
US National Map Accuracy
Standard
The United States Bureau of the Budget has developed the US
National Map Accuracy Standard in an effort to standardize accuracy
reporting on maps. These guidelines are summarized below (Fisher,
1991):
On scales smaller than 1:20,000, not more than 10 percent of
points tested should be more than 1/50 inch in horizontal error,
where points refer only to points that can be well-defined on the
ground.
On maps with scales larger than 1:20,000, the corresponding
error term is 1/30 inch.
At no more than 10 percent of the elevations tested can contours
be in error by more than one half of the contour interval.
Accuracy should be tested by comparison of actual map data with
survey data of higher accuracy (not necessarily with ground
truth).
If maps have been tested and do meet these standards, a
statement should be made to that effect in the legend.
Maps that have been tested but fail to meet the requirements
should omit all mention of the standards on the legend.
USGS Land Use and Land
Cover Map Guidelines
The USGS has set standards of their own for land use and land cover
maps (Fisher, 1991):
The minimum level of accuracy in identifying land use and land
cover categories is 85%.
The several categories shown should have about the same
accuracy.
Accuracy should be maintained between interpreters and times
of sensing.
USDA SCS Soils Maps
Guidelines
The United States Department of Agriculture (USDA) has set
standards for Soil Conservation Service (SCS) soils maps (Fisher,
1991):
Up to 25% of the pedons may be of other soil types than those
named if they do not present a major hindrance to land
management.
Up to only 10% of pedons may be of other soil types than those
named if they do present a major hindrance to land
management.
No single included soil type may occupy more than 10% of the
area of the map unit.
Digitized Hardcopy Maps Another method of expanding the database is by digitizing existing
hardcopy maps. Although this may seem like an easy way to gather
more information, care must be taken in pursuing this avenue if it is
necessary to maintain a particular level of accuracy. If the hardcopy
maps that are digitized are outdated, or were not produced using the
same accuracy standards that are currently in use, the digitized map
may negatively influence the overall accuracy of the database.
Hardcopy Output
Introduction Hardcopy output refers to any output of image data to paper. These
topics are covered in this chapter:
printing maps
the mechanics of printing
For additional information, see the chapter about Windows
printing in the ERDAS IMAGINE Configuration Guide.
Printing Maps ERDAS IMAGINE enables you to create and output a variety of types
of hardcopy maps, with several referencing features.
Scaled Maps A scaled map is a georeferenced map that has been projected to a
map projection, and is accurately laid-out and referenced to
represent distances and locations. A scaled map usually has a
legend that includes a scale, such as 1 inch = 1000 feet. The scale
is often expressed as a ratio, like 1:12,000, where 1 inch on the map
represents 12,000 inches on the ground.
See Rectification for information on rectifying and
georeferencing images and Cartography for information on
creating maps.
Printing Large Maps Some scaled maps do not fit on the paper that is used by the printer.
These methods are used to print and store large maps:
A book map is laid out like the pages of a book. Each page fits on
the paper used by the printer. There is a border, but no tick
marks on every page.
A paneled map is designed to be spliced together into a large
paper map; therefore, borders and tick marks appear on the
outer edges of the large map.
Figure 201: Layout for a Book Map and a Paneled Map
Scale and Resolution The following scales and resolutions are noticeable during the
process of creating a map composition and sending the composition
to a hardcopy device:
spatial resolution of the image
display scale of the map composition
map scale of the image(s)
map composition to paper scale
device resolution
Spatial Resolution
Spatial resolution is the area on the ground represented by each raw
image data pixel.
Display Scale
Display scale is the distance on the screen as related to one unit on
paper. For example, if the map composition is 24 inches by 36
inches, it would not be possible to view the entire composition on the
screen. Therefore, the scale could be set to 1:0.25 so that the entire
map composition would be in view.
Map Scale
The map scale is the distance on a map as related to the true
distance on the ground, or the area that one pixel represents
measured in map units. The map scale is defined when you create
an image area in the map composition. One map composition can
have multiple image areas set at different scales. These areas may
need to be shown at different scales for different applications.
Map Composition to Paper Scale
This scale is the original size of the map composition as related to
the desired output size on paper.
Device Resolution
The number of dots that are printed per unit, for example, 300 dots
per inch (DPI).
Use the ERDAS IMAGINE Map Composer to define the above
scales and resolutions.
Map Scaling Examples The ERDAS IMAGINE Map Composer enables you to define a map
size, as well as the size and scale for the image area within the map
composition. The examples in this section focus on the relationship
between these factors and the output file created by Map Composer
for the specific hardcopy device or file format. Figure 202 is the map
composition that is used in the examples. This composition was
originally created using the ERDAS IMAGINE Map Composer at a size
of 22 × 34 inches, and the hardcopy output must be in two different
formats.
It must be output to a PostScript printer on an 8.5 × 11 inch piece
of paper.
A TIFF file must be created and sent to a film recorder having a
1,000 dpi resolution.
Figure 202: Sample Map Composition
Output to PostScript Printer
Since the map was created at 22 × 34 inches, the map composition to paper scale needs to be calculated so that the composition fits on an 8.5 × 11 inch piece of paper. If this scale is set for a one to one ratio, then the composition is paneled.
To determine the map composition to paper scale factor, it is necessary to calculate the most limiting direction. Since the printable area for the printer is approximately 8.1 × 8.6 inches, these numbers are used in the calculation:
8.1 / 22 = 0.36 (horizontal direction)
8.6 / 34 = 0.25 (vertical direction)
The vertical direction is the most limiting; therefore, the map composition to paper scale would be set to 0.25.
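The most-limiting-direction test amounts to taking the smaller of the two ratios; a minimal Python sketch, assuming the printable area and composition size quoted above, is:

    # map composition size and printable area, in inches
    comp_width, comp_height = 22.0, 34.0
    print_width, print_height = 8.1, 8.6

    horizontal = print_width / comp_width     # about 0.37
    vertical = print_height / comp_height     # about 0.25

    # the smaller ratio is the most limiting direction
    scale = min(horizontal, vertical)
    print(round(scale, 2))                    # 0.25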
If the specified size of the map (width and height) is greater than
the printable area for the printer, the output hardcopy map is
paneled. See the hardware manual of the hardcopy device for
information about the printable area of the device.
Use the Print Map Composition dialog to output a map
composition to a PostScript printer.
Output to TIFF
The limiting factor in this example is not page size, but disk space
(600 MB total). A three-band image file must be created in order to
convert the map composition to a .tif file. Due to the three bands and
the high resolution, the image file could be very large. The .tif file is
output to a film recorder with a 1,000 DPI device resolution.
To determine the number of megabytes for the map composition, the
X and Y dimensions need to be calculated:
X = 22 inches × 1,000 dots/inch = 22,000
Y = 34 inches × 1,000 dots/inch = 34,000
22,000 × 34,000 × 3 = 2,244 MB (multiplied by 3 since there are 3 bands)
Although this appears to be an unmanageable file size, it is possible
to reduce the file size with little image degradation. The image file
created from the map composition must be less than half to
accommodate the .tif file, because the total disk space is only 600
megabytes. Dividing the map composition by three in both X and Y
directions (2,244 MB / 3 /3) results in approximately a 250
megabyte file. This file size is small enough to process and leaves
enough room for the image to TIFF conversion. This division is
accomplished by specifying a 1/3 or 0.333 map composition to paper
scale when outputting the map composition to an image file.
Once the image file is created and exported to TIFF format, it can be
sent to a film recorder that accepts .tif files. Remember, the file must
be enlarged three times to compensate for the reduction during the
image file creation.
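The file size arithmetic in this example can be summarized in a few lines of Python; the sketch assumes one byte per band per pixel and, like the text, counts 1 MB as 1,000,000 bytes:

    dpi = 1000                       # film recorder resolution, dots per inch
    width_in, height_in = 22, 34     # map composition size, in inches
    bands = 3

    x_dots = width_in * dpi          # 22,000
    y_dots = height_in * dpi         # 34,000

    full_size_mb = x_dots * y_dots * bands / 1e6
    print(full_size_mb)              # 2244.0

    # a 1/3 map composition to paper scale shrinks both dimensions,
    # so the file size drops by a factor of about nine
    print(round(full_size_mb / 9))   # about 249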
See the hardware manual of the hardcopy device for information
about the DPI device resolution.
Use the ERDAS IMAGINE Print Map Composition dialog to output
a map composition to an image file.
Mechanics of
Printing
This section describes the mechanics of transferring an image or
map composition from a data file to a hardcopy map.
Halftone Printing Halftoning is the process of converting a continuous tone image into
a pattern of dots. A newspaper photograph is a common example of
halftoning.
To make a color illustration, halftones in the primary colors (cyan,
magenta, and yellow), plus black, are overlaid. The halftone dots of
different colors, in close proximity, create the effect of blended colors
in much the same way that phosphorescent dots on a color computer
monitor combine red, green, and blue to create other colors. By
using different patterns of dots, colors can have different intensities.
The dots for halftoning are a fixed density; either a dot is there or it
is not there.
For scaled maps, each output pixel may contain one or more dot
patterns. If a very large image file is being printed onto a small piece
of paper, data file pixels are skipped to accommodate the reduction.
Hardcopy Devices
The following hardcopy devices use halftoning to output an image or
map composition:
Tektronix Inkjet Printer
Tektronix Phaser Printer
See the user's manual for the hardcopy device for more
information about halftone printing.
Continuous Tone Printing Continuous tone printing enables you to output color imagery using
the four process colors (cyan, magenta, yellow, and black). By using
varying percentages of these colors, it is possible to create a wide
range of colors. The printer converts digital data from the host
computer into a continuous tone image. The quality of the output
picture is similar to a photograph. The output is smoother than
halftoning because the dots for continuous tone printing can vary in
density.
Example
There are different processes by which continuous tone printers
generate a map. One example is a process called thermal dye
transfer. The entire image or map composition is loaded into the
printers memory. While the paper moves through the printer, heat
is used to transfer the dye from a ribbon, which has the dyes for all
of the four process colors, to the paper. The density of the dot
depends on the amount of heat applied by the printer to transfer the
dye. The amount of heat applied is determined by the brightness
values of the input image. This allows the printer to control the
amount of dye that is transferred to the paper to create a continuous
tone image.
Hardcopy Devices
The following hardcopy device uses continuous toning to output an
image or map composition:
Tektronix Phaser II SD
NOTE: The above printers do not necessarily use the thermal dye
transfer process to generate a map.
See the user's manual for the hardcopy device for more
information about continuous tone printing.
Contrast and Color Tables ERDAS IMAGINE contrast and color tables are used for some printing
processes, just as they are used in displaying an image. For
continuous raster layers, they are loaded from the ERDAS IMAGINE
contrast table. For thematic layers, they are loaded from the color
table. The translation of data file values to brightness values is
performed entirely by the software program.
RGB to CMY Conversion
Colors
Since a printer uses ink instead of light to create a visual image, the
primary colors of pigment (cyan, magenta, and yellow) are used in
printing, instead of the primary colors of light (red, green, and blue).
Cyan, magenta, and yellow can be combined to make black through
a subtractive process, whereas the primary colors of light are
additive; red, green, and blue combine to make white (Gonzalez and
Wintz, 1977).
The data file values that are sent to the printer and the contrast and
color tables that accompany the data file are all in the RGB color
scheme. The RGB brightness values in the contrast and color tables
must be converted to cyan, magenta, and yellow (CMY) values.
The RGB primary colors are the opposites of the CMY colors,
meaning, for example, that the presence of cyan in a color means an
equal lack of red. To convert the values, each RGB brightness value
is subtracted from the maximum brightness value to produce the
brightness value for the opposite color. The following equation shows
this relationship:
C = MAX - R
M = MAX - G
Y = MAX - B
Where:
MAX = the maximum brightness value
R = red value from lookup table
G = green value from lookup table
B = blue value from lookup table
C = calculated cyan value
M = calculated magenta value
Y = calculated yellow value
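A minimal sketch of this conversion, assuming 8-bit brightness values so that the maximum value is 255 (the function name is illustrative, not an ERDAS IMAGINE call):

    def rgb_to_cmy(r, g, b, max_value=255):
        """Convert RGB brightness values to CMY by subtracting each from the maximum."""
        return max_value - r, max_value - g, max_value - b

    # pure red contains no cyan but full magenta and yellow
    print(rgb_to_cmy(255, 0, 0))    # (0, 255, 255)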
Black Ink
Although, theoretically, cyan, magenta, and yellow combine to
create black ink, the color that results is often a dark, muddy brown.
Many printers also use black ink for a truer black.
NOTE: Black ink may not be available on all printers. Consult the
user's manual for your printer.
Images often appear darker when printed than they do when
displayed on the display device. Therefore, it may be beneficial to
improve the contrast and brightness of an image before it is printed.
Use the programs discussed in Enhancement to brighten or
enhance an image before it is printed.
Math Topics
Introduction This appendix is a cursory overview of some of the basic
mathematical concepts that are applicable to image processing. Its
purpose is to educate the novice reader, and to put these formulas
and concepts into the context of image processing and remote
sensing applications.
Summation A commonly used notation throughout this and other discussions is the Sigma (Σ), used to denote a summation of values.

For example, the notation

Σ (i=1 to 10) i

is the sum of all values of i, ranging from 1 to 10, which equals:

1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55

Similarly, the value i may be a subscript, which denotes an ordered set of values. For example,

Σ (i=1 to 4) Q_i = 3 + 5 + 7 + 2 = 17

Where:
Q_1 = 3
Q_2 = 5
Q_3 = 7
Q_4 = 2
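Both summations can be checked directly in Python; the list Q below simply holds the four example values:

    print(sum(range(1, 11)))    # 1 + 2 + ... + 10 = 55

    Q = [3, 5, 7, 2]
    print(sum(Q))               # 17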
Statistics
Histogram In ERDAS IMAGINE image data files, each data file value (defined by
its row, column, and band) is a variable. ERDAS IMAGINE supports
the following data types:
1, 2, and 4-bit
8, 16, and 32-bit signed
8, 16, and 32-bit unsigned
32 and 64-bit floating point
64 and 128-bit complex floating point
Distribution, as used in statistics, is the set of frequencies with which
an event occurs, or that a variable has a particular value.
A histogram is a graph of data frequency or distribution. For a single
band of data, the horizontal axis of a histogram is the range of all
possible data file values. The vertical axis is the number of pixels that
have each data value.
Figure 203: Histogram
Figure 203 shows the histogram for a band of data in which Y pixels
have data value X. For example, in this graph, 300 pixels (y) have
the data file value of 100 (x).
Bin Functions Bins are used to group ranges of data values together for better
manageability. Histograms and other descriptor columns for 1, 2, 4,
and 8-bit data are easy to handle since they contain a maximum of
256 rows. However, to have a row in a descriptor table for every
possible data value in floating point, complex, and 32-bit integer
data would yield an enormous amount of information. Therefore, the
bin function is provided to serve as a data reduction tool.
Example of a Bin Function
Suppose you have a floating point data layer with values ranging
from 0.0 to 1.0. You could set up a descriptor table of 100 rows, with
each row or bin corresponding to a data range of .01 in the layer.
The bins would look like the following:
Bin Number | Data Range
0 | X < 0.01
1 | 0.01 ≤ X < 0.02
2 | 0.02 ≤ X < 0.03
. | .
98 | 0.98 ≤ X < 0.99
99 | 0.99 ≤ X
Then, for example, row 23 of the histogram table would contain the number of pixels in the layer whose value fell between 0.23 and 0.24.
Types of Bin Functions
The bin function establishes the relationship between data values
and rows in the descriptor table. There are four types of bin functions
used in ERDAS IMAGINE image layers:
DIRECT: one bin per integer value. Used by default for 1, 2, 4, and 8-bit integer data, but may be used for other data types as well. The direct bin function may include an offset for negative data or data in which the minimum value is greater than zero. For example, a direct bin with 900 bins and an offset of -601 would look like the following:
Bin Number | Data Range
0 | X ≤ -600.5
1 | -600.5 < X ≤ -599.5
. | .
599 | -2.5 < X ≤ -1.5
600 | -1.5 < X ≤ -0.5
601 | -0.5 < X < 0.5
602 | 0.5 ≤ X < 1.5
603 | 1.5 ≤ X < 2.5
. | .
898 | 296.5 ≤ X < 297.5
899 | 297.5 ≤ X
LINEAR: establishes a linear mapping between data values and bin numbers, as in our first example, mapping the data range 0.0 to 1.0 to bin numbers 0 to 99.
The bin number is computed by:
bin = numbins * (x - min) / (max - min)
if (bin < 0) bin = 0
if (bin >= numbins) bin = numbins - 1
Where:
bin = resulting bin number
numbins = number of bins
x = data value
min = lower limit (usually minimum data value)
max = upper limit (usually maximum data value)
LOG: establishes a logarithmic mapping between data values and bin numbers. The bin number is computed by:
bin = numbins * (ln (1.0 + ((x - min)/(max - min)))/ ln (2.0))
if (bin < 0) bin = 0
if (bin >= numbins) bin = numbins - 1
EXPLICIT: explicitly defines the mapping between each bin number and data range.
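The DIRECT, LINEAR, and LOG rules can be sketched in ordinary Python as follows; this is an illustration of the formulas above, not the implementation used inside ERDAS IMAGINE:

    import math

    def linear_bin(x, numbins, lo, hi):
        """LINEAR: map a value to a bin number by linear scaling between lo and hi."""
        b = int(numbins * (x - lo) / (hi - lo))
        return max(0, min(b, numbins - 1))

    def log_bin(x, numbins, lo, hi):
        """LOG: map a value to a bin number on a logarithmic scale."""
        b = int(numbins * math.log(1.0 + (x - lo) / (hi - lo)) / math.log(2.0))
        return max(0, min(b, numbins - 1))

    def direct_bin(x, offset=0):
        """DIRECT: one bin per integer value, with an optional offset for negative data."""
        return int(round(x)) + offset

    print(linear_bin(0.235, 100, 0.0, 1.0))   # 23
    print(log_bin(0.5, 100, 0.0, 1.0))        # 58
    print(direct_bin(-600, offset=601))       # 1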
Mean The mean (μ) of a set of values is its statistical average, such that, if Q_i represents a set of k values:

μ_Q = (Q_1 + Q_2 + Q_3 + ... + Q_k) / k

or

μ_Q = Σ (i=1 to k) (Q_i / k)

The mean of data with a normal distribution is the value at the peak of the curve, the point where the distribution balances.
Normal Distribution Our general ideas about an average, whether it be average age,
average test score, or the average amount of spectral reflectance
from oak trees in the spring, are made visible in the graph of a
normal distribution, or bell curve.
Figure 204: Normal Distribution
Average usually refers to a central value on a bell curve, although all
distributions have averages. In a normal distribution, most values
are at or near the middle, as shown by the peak of the bell curve.
Values that are more extreme are more rare, as shown by the tails
at the ends of the curve.
The Normal Distributions are a family of bell shaped distributions
that turn up frequently under certain special circumstances. For
example, a normal distribution would occur if you were to compare
the bands in a desert image. The bands would be very similar, but
would vary slightly.
Each Normal Distribution uses just two parameters, μ and σ, to control the shape and location of the resulting probability graph through the equation:

f(x) = ( 1 / (σ √(2π)) ) · e^( -(x - μ)² / (2σ²) )

Where:
x = the quantity's distribution that is being approximated
π and e = famous mathematical constants

The parameter μ controls how much the bell is shifted horizontally so that its average matches the average of the distribution of x, while σ adjusts the width of the bell to try to encompass the spread of the given distribution. In choosing to approximate a distribution by the nearest of the Normal Distributions, we describe the many values in the bin function of its distribution with just two parameters. It is a significant simplification that can greatly ease the computational burden of many operations, but like all simplifications, it reduces the accuracy of the conclusions we can draw.
The normal distribution is the most widely encountered model for
probability. Many natural phenomena can be predicted or estimated
according to the law of averages that is implied by the bell curve
(Larsen and Marx, 1981).
A normal distribution in remotely sensed data is meaningfulit is a
sign that some characteristic of an object can be measured by the
average amount of electromagnetic radiation that the object reflects.
This relationship between the data and a physical scene or object is
what makes image processing applicable to various types of land
analysis.
The mean and standard deviation are often used by computer
programs that process and analyze image data.
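As a sketch of the formula itself, the following Python function evaluates the normal density for a given mean and standard deviation (the example values are arbitrary):

    import math

    def normal_pdf(x, mu, sigma):
        """Evaluate the normal (bell curve) density at x for mean mu and standard deviation sigma."""
        coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
        return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

    # a band with mean 100 and standard deviation 20: the density peaks at the mean
    print(normal_pdf(100, 100, 20))    # about 0.0199
    print(normal_pdf(140, 100, 20))    # about 0.0027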
Variance The mean of a set of values locates only the average value; it does not adequately describe the set of values by itself. It is helpful to know how much the data varies from its mean. However, a simple average of the differences between each value and the mean equals zero in every case, by definition of the mean. Therefore, the squares of these differences are averaged so that a meaningful number results (Larsen and Marx, 1981).

In theory, the variance is calculated as follows:

Var_Q = E( (Q - μ_Q)² )

Where:
E = expected value (weighted average)
( )² = squared to make the distance a positive number

In practice, the use of this equation for variance does not usually reflect the exact nature of the values that are used in the equation. These values are usually only samples of a large data set, and therefore, the mean and variance of the entire data set are estimated, not known.

The equation used in practice follows. This is called the minimum variance unbiased estimator of the variance, or the sample variance (notated σ_Q²):

σ_Q² = [ Σ (i=1 to k) (Q_i - μ_Q)² ] / (k - 1)

Where:
i = a particular pixel
k = the number of pixels (the higher the number, the better the approximation)
The theory behind this equation is discussed in chapters on point
estimates and sufficient statistics, and covered in most statistics
texts.
NOTE: The variance is expressed in units squared (e.g., square
inches, square data values, etc.), so it may result in a number that
is much higher than any of the original values.
Standard Deviation Since the variance is expressed in units squared, a more useful value is the square root of the variance, which is expressed in units and can be related back to the original values (Larsen and Marx, 1981). The square root of the variance is the standard deviation.

Based on the equation for sample variance (σ_Q²), the sample standard deviation (σ_Q) for a set of values Q is computed as follows:

σ_Q = √( [ Σ (i=1 to k) (Q_i - μ_Q)² ] / (k - 1) )

In any distribution:

approximately 68% of the values are within one standard deviation of μ, that is, between μ - σ and μ + σ

more than 1/2 of the values are between μ - 2σ and μ + 2σ

more than 3/4 of the values are between μ - 3σ and μ + 3σ

Source: Mendenhall and Scheaffer, 1973

An example of a simple application of these rules is seen in the ERDAS IMAGINE Viewer. When 8-bit data are displayed in the Viewer, ERDAS IMAGINE automatically applies a 2 standard deviation stretch that remaps all data file values between μ - 2σ and μ + 2σ (more than 1/2 of the data) to the range of possible brightness values on the display device.

Standard deviations are used because the lowest and highest data file values may be much farther from the mean than 2σ.

For more information on contrast stretch, see Enhancement.

Parameters As described above, the standard deviation describes how a fixed percentage of the data varies from the mean. The mean and standard deviation are known as parameters, which are sufficient to describe a normal curve (Johnston, 1980).
When the mean and standard deviation are known, they can be used
to estimate other calculations about the data. In computer
programs, it is much more convenient to estimate calculations with
a mean and standard deviation than it is to repeatedly sample the
actual data.
Algorithms that use parameters are parametric. The closer that the
distribution of the data resembles a normal curve, the more accurate
the parametric estimates of the data are. ERDAS IMAGINE
classification algorithms that use signature files (.sig) are
parametric, since the mean and standard deviation of each sample
or cluster are stored in the file to represent the distribution of the
values.
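A small sketch of the sample statistics defined above, written in plain Python for a handful of arbitrary data file values:

    import math

    Q = [3, 5, 7, 2]      # sample data file values
    k = len(Q)

    mean = sum(Q) / k
    variance = sum((q - mean) ** 2 for q in Q) / (k - 1)    # sample variance
    std_dev = math.sqrt(variance)                           # sample standard deviation

    print(mean, variance, std_dev)    # 4.25, about 4.92, about 2.22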
Covariance In many image processing procedures, the relationships between
two bands of data are important. Covariance measures the
tendencies of data file values in the same pixel, but in different
bands, to vary with each other, in relation to the means of their
respective bands. These bands must be linear.
Theoretically speaking, whereas variance is the average square of the differences between values and their mean in one band, covariance is the average product of the differences of corresponding values in two different bands from their respective means. Compare the following equation for covariance to the previous one for variance:

Cov_QR = E( (Q - μ_Q)(R - μ_R) )

Where:
Q and R = data file values in two bands
E = expected value

In practice, the sample covariance is computed with this equation:

C_QR = [ Σ (i=1 to k) (Q_i - μ_Q)(R_i - μ_R) ] / k

Where:
i = a particular pixel
k = the number of pixels

Like variance, covariance is expressed in units squared.
Covariance Matrix The covariance matrix is an n × n matrix that contains all of the variances and covariances within n bands of data. Below is an example of a covariance matrix for four bands of data:

         band A    band B    band C    band D
band A   Var_A     Cov_BA    Cov_CA    Cov_DA
band B   Cov_AB    Var_B     Cov_CB    Cov_DB
band C   Cov_AC    Cov_BC    Var_C     Cov_DC
band D   Cov_AD    Cov_BD    Cov_CD    Var_D

The covariance matrix is symmetrical; for example, Cov_AB = Cov_BA.

The covariance of one band of data with itself is the variance of that band:

C_QQ = [ Σ (i=1 to k) (Q_i - μ_Q)(Q_i - μ_Q) ] / (k - 1) = [ Σ (i=1 to k) (Q_i - μ_Q)² ] / (k - 1)

Therefore, the diagonal of the covariance matrix consists of the band variances.

The covariance matrix is an organized format for storing variance and covariance information on a computer system, so that it needs to be computed only once. Also, the matrix itself can be used in matrix equations, as in principal components analysis.
See Matrix Algebra for more information on matrices.
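A minimal sketch of building a covariance matrix for two short bands of data; the band values are arbitrary, and the sketch uses a k - 1 denominator throughout so that the diagonal matches the sample variance:

    def sample_cov(x, y):
        """Sample covariance of two equally long value sequences (k - 1 denominator)."""
        k = len(x)
        mx = sum(x) / k
        my = sum(y) / k
        return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (k - 1)

    band_a = [10, 12, 11, 15]
    band_b = [20, 24, 21, 29]

    # 2 x 2 covariance matrix: variances on the diagonal, covariances off the diagonal
    matrix = [[sample_cov(band_a, band_a), sample_cov(band_a, band_b)],
              [sample_cov(band_b, band_a), sample_cov(band_b, band_b)]]
    print(matrix)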
Dimensionality of
Data
Spectral Dimensionality is determined by the number of sets of
values being used in a process. In image processing, each band of
data is a set of values. An image with four bands of data is said to
be four-dimensional (Jensen, 1996).
NOTE: The letter n is used consistently in this documentation to
stand for the number of dimensions (bands) of image data.
Measurement Vector The measurement vector of a pixel is the set of data file values for
one pixel in all n bands. Although image data files are stored band-
by-band, it is often necessary to extract the measurement vectors
for individual pixels.
Figure 205: Measurement Vector
According to Figure 205, if:

i = a particular band
V_i = the data file value of the pixel in band i

then the measurement vector for this pixel is:

[ V_1 ]
[ V_2 ]
[ V_3 ]

See Matrix Algebra for an explanation of vectors.

Mean Vector When the measurement vectors of several pixels are analyzed, a mean vector is often calculated. This is the vector of the means of the data file values in each band. It has n elements.

Figure 206: Mean Vector

According to Figure 206, if:

i = a particular band
μ_i = the mean of the data file values of the pixels being studied, in band i

then the mean vector for this training sample is:

[ μ_1 ]
[ μ_2 ]
[ μ_3 ]
Feature Space Many algorithms in image processing compare the values of two or
more bands of data. The programs that perform these functions
abstractly plot the data file values of the bands being studied against
each other. An example of such a plot in two dimensions (two bands)
is illustrated in Figure 207.
Figure 207: Two Band Plot
NOTE: If the image is 2-dimensional, the plot does not always have
to be 2-dimensional.
In Figure 207, the pixel that is plotted has a measurement vector of (180, 85); that is, its data file value is 180 in band A and 85 in band B.
The graph above implies physical dimensions for the sake of
illustration. Actually, these dimensions are based on spectral
characteristics represented by the digital image data. As opposed to
physical space, the pixel above is plotted in feature space. Feature
space is an abstract space that is defined by spectral units, such as
an amount of electromagnetic radiation.
Feature Space Images Several techniques for the processing of multiband data make use of
a two-dimensional histogram, or feature space image. This is simply
a graph of the data file values of one band of data against the values
of another band.
Figure 208: Two-band Scatterplot
The scatterplot pictured in Figure 208 can be described as a
simplification of a two-dimensional histogram, where the data file
values of one band have been plotted against the data file values of
another band. This figure shows that when the values in the bands
being plotted have jointly normal distributions, the feature space
forms an ellipse.
This ellipse is used in several algorithms; specifically, for evaluating
training samples for image classification. Also, two-dimensional
feature space images with ellipses are helpful to illustrate principal
components analysis.
See Enhancement for more information on principal
components analysis, Classification for information on training
sample evaluation, and Rectification for more information on
orders of transformation.
n-Dimensional Histogram If two-dimensional data can be plotted on a two-dimensional
histogram, as above, then n-dimensional data can, abstractly, be
plotted on an n-dimensional histogram, defining n-dimensional
spectral space.
Each point on an n-dimensional scatterplot has n coordinates in that
spectral space: a coordinate for each axis. The n coordinates are the
elements of the measurement vector for the corresponding pixel.
In some image enhancement algorithms (most notably, principal
components), the points in the scatterplot are replotted, or the
spectral space is redefined in such a way that the coordinates are
changed, thus transforming the measurement vector of the pixel.
When all data sets (bands) have jointly normal distributions, the
scatterplot forms a hyperellipsoid. The prefix hyper refers to an
abstract geometrical shape, which is defined in more than three
dimensions.
NOTE: In this documentation, 2-dimensional examples are used to
illustrate concepts that apply to any number of dimensions of data.
The 2-dimensional examples are best suited for creating illustrations
to be printed.
Spectral Distance Euclidean spectral distance is distance in n-dimensional spectral space. It is a number that allows two measurement vectors to be compared for similarity. The spectral distance between two pixels can be calculated as follows:

D = √( Σ (i=1 to n) (d_i - e_i)² )

Where:
D = spectral distance
n = number of bands (dimensions)
i = a particular band
d_i = data file value of pixel d in band i
e_i = data file value of pixel e in band i

This is the equation for Euclidean distance; in two dimensions (when n = 2), it can be simplified to the Pythagorean Theorem (c² = a² + b²), or in this case:

D² = (d_i - e_i)² + (d_j - e_j)²
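A sketch of the spectral distance calculation for two pixels, each given as a measurement vector of n data file values; the first vector is the (180, 85) pixel from Figure 207 and the second is arbitrary:

    import math

    def spectral_distance(d, e):
        """Euclidean distance between two measurement vectors of equal length."""
        return math.sqrt(sum((di - ei) ** 2 for di, ei in zip(d, e)))

    print(spectral_distance([180, 85], [100, 50]))    # about 87.3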
Polynomials A polynomial is a mathematical expression consisting of variables
and coefficients. A coefficient is a constant, which is multiplied by a
variable in the expression.
Order The variables in polynomial expressions can be raised to exponents.
The highest exponent in a polynomial determines the order of the
polynomial.
A polynomial with one variable, x, takes this form:

A + Bx + Cx² + Dx³ + .... + Ωx^t
Where:
A, B, C, D ... = coefficients
t = the order of the polynomial
NOTE: If one or all of A, B, C, D ... are 0, then the nature, but not the complexity, of the transformation is changed. Mathematically, Ω cannot be 0.
A polynomial with two variables, x and y, takes this form:

x_o = Σ (i=0 to t) Σ (j=0 to i) a_k · x^(i-j) · y^j

y_o = Σ (i=0 to t) Σ (j=0 to i) b_k · x^(i-j) · y^j

Where:
t is the order of the polynomial
a_k and b_k are coefficients
the subscript k in a_k and b_k is determined by:

k = [ i × (i + 1) ] / 2 + j
A numerical example of 3rd-order transformation equations for x and y is:

x_o = 5 + 4x - 6y + 10x² - 5xy + 1y² + 3x³ + 7x²y - 11xy² + 4y³
y_o = 13 + 12x + 4y + 1x² - 21xy + 1y² - 1x³ + 2x²y + 5xy² + 12y³
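Plugging a specific source coordinate into these two expressions makes the idea concrete; the sketch below uses an arbitrary (x, y) and simply evaluates the example equations as written:

    def third_order_example(x, y):
        """Evaluate the sample 3rd-order transformation equations given above."""
        x_o = (5 + 4*x - 6*y + 10*x**2 - 5*x*y + 1*y**2
               + 3*x**3 + 7*x**2*y - 11*x*y**2 + 4*y**3)
        y_o = (13 + 12*x + 4*y + 1*x**2 - 21*x*y + 1*y**2
               - 1*x**3 + 2*x**2*y + 5*x*y**2 + 12*y**3)
        return x_o, y_o

    print(third_order_example(2.0, 1.0))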
Polynomial equations are used in image rectification to transform the
coordinates of an input file to the coordinates of another system. The
order of the polynomial used in this process is the order of
transformation.
Transformation Matrix In the case of first order image rectification, the variables in the
polynomials (x and y) are the source coordinates of a GCP. The
coefficients are computed from the GCPs and stored as a
transformation matrix.
A detailed discussion of GCPs, orders of transformation, and
transformation matrices is included in Rectification.
Matrix Algebra A matrix is a set of numbers or values arranged in a rectangular
array. If a matrix has i rows and j columns, it is said to be an i by j
matrix.
A one-dimensional matrix, having one column (i by 1) is one of many
kinds of vectors. For example, the measurement vector of a pixel is
an n-element vector of the data file values of the pixel, where n is
equal to the number of bands.
See Enhancement for information on eigenvectors.
Matrix Notation Matrices and vectors are usually designated with a single capital letter, such as M. For example:

M = [ 2.2   4.6  ]
    [ 6.1   8.3  ]
    [ 10.0  12.4 ]

One value in the matrix M would be specified by its position, which is its row and column (in that order) in the matrix. One element of the array (one value) is designated with a lower case letter and its position:

m_3,2 = 12.4

With column vectors, it is simpler to use only one number to designate the position:

G = [ 2.8  ]
    [ 6.5  ]
    [ 10.1 ]

G_2 = 6.5
Matrix Multiplication A simple example of the application of matrix multiplication is a 1st-order transformation matrix. The coefficients are stored in a 2 × 3 matrix:

C = [ a1  a2  a3 ]
    [ b1  b2  b3 ]
Then, where:

x_o = a1 + a2·x_i + a3·y_i
y_o = b1 + b2·x_i + b3·y_i

x_i and y_i = source coordinates
x_o and y_o = rectified coordinates

The coefficients of the transformation matrix are as above.
The above could be expressed by a matrix equation:

R = CS, or

[ x_o ]   [ a1  a2  a3 ]   [ 1   ]
[ y_o ] = [ b1  b2  b3 ] × [ x_i ]
                           [ y_i ]
Where:
S = a matrix of the source coordinates (3 by 1)
C = the transformation matrix (2 by 3)
R = the matrix of rectified coordinates (2 by 1)
The sizes of the matrices are shown above to demonstrate a rule of
matrix multiplication. To multiply two matrices, the first matrix must
have the same number of columns as the second matrix has rows.
For example, if the first matrix is a by b, and the second matrix is m
by n, then b must equal m, and the product matrix has the size a by
n.
The formula for multiplying two matrices is:

(fg)_ij = Σ (k=1 to m) f_ik · g_kj

for every i from 1 to a
for every j from 1 to n
Where:
i = a row in the product matrix
j = a column in the product matrix
f = an (a by b) matrix
g = an (m by n) matrix (b must equal m)
fg is an a by n matrix.
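The multiplication rule can be sketched in a few lines of Python; the transformation coefficients below are arbitrary placeholders, not values from any real rectification:

    def matmul(f, g):
        """Multiply an (a by b) matrix by a (b by n) matrix, row by column."""
        a, b, n = len(f), len(g), len(g[0])
        return [[sum(f[i][k] * g[k][j] for k in range(b)) for j in range(n)]
                for i in range(a)]

    # 2 by 3 transformation matrix C with example coefficients a1..a3, b1..b3
    C = [[10.0, 0.5, 0.0],
         [20.0, 0.0, 0.5]]

    # source coordinates as a 3 by 1 matrix S = [1, x_i, y_i]
    S = [[1.0], [100.0], [200.0]]

    R = matmul(C, S)    # rectified coordinates [x_o, y_o]
    print(R)            # [[60.0], [120.0]]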
Transposition The transposition of a matrix is derived by interchanging its rows and
columns. Transposition is denoted by T, as in the following example
(Cullen, 1972).
G = [ 2   3  ]
    [ 6   4  ]
    [ 10  12 ]

G^T = [ 2  6  10 ]
      [ 3  4  12 ]

For more information on transposition, see Computing Principal Components and Classification Decision Rules.
Map Projections
Introduction This appendix is an alphabetical listing of the map projections
supported in ERDAS IMAGINE. It is divided into two sections:
USGS Projections, and
External Projections
The external projections were implemented outside of ERDAS
IMAGINE so that you could add to these using the IMAGINE
Developers Toolkit. The projections in each section are presented in
alphabetical order.
The information in this appendix is adapted from:
Map Projections for Use with the Geographic Information System
(Lee and Walsh, 1984)
Map Projections: A Working Manual (Snyder, 1987)
ArcInfo HELP (Environmental Systems Research Institute, 1997)
Other sources are noted in the text.
For general information about map projection types, refer to
Cartography.
Rectify an image to a particular map projection using the ERDAS
IMAGINE Rectification tools. View, add, or change projection
information using the Image Information option.
NOTE: You cannot rectify to a new map projection using the Image
Information option. You should change map projection information
using Image Information only if you know the information to be
incorrect. Use the rectification tools to actually georeference an
image to a new map projection system.
USGS Projections The following USGS map projections are supported in ERDAS
IMAGINE and are described in this section:
Alaska Conformal
Albers Conical Equal Area
Azimuthal Equidistant
Behrmann
Bonne
Cassini
Eckert I
Eckert II
Eckert III
Eckert IV
Eckert V
Eckert VI
EOSAT SOM
Equidistant Conic
Equidistant Cylindrical
Equirectangular (Plate Carre)
Gall Stereographic
Gauss Kruger
General Vertical Near-side Perspective
Geographic (Lat/Lon)
Gnomonic
Hammer
Interrupted Goode Homolosine
Interrupted Mollweide
Lambert Azimuthal Equal Area
Lambert Conformal Conic
Loximuthal
Mercator
Miller Cylindrical
Modified Transverse Mercator
Mollweide
New Zealand Map Grid
Oblated Equal Area
Oblique Mercator (Hotine)
Orthographic
Plate Carre
Polar Stereographic
Polyconic
Quartic Authalic
Robinson
RSO
Sinusoidal
Space Oblique Mercator
Space Oblique Mercator (Formats A & B)
State Plane
Stereographic
Stereographic (Extended)
Transverse Mercator
Two Point Equidistant
UTM
Van der Grinten I
Wagner IV
Wagner VII
Winkel I
Alaska Conformal Use of this projection results in a conformal map of Alaska. It has
little scale distortion as compared to other conformal projections.
The method of projection is modified planar. [It is] a sixth-order-equation modification of an oblique Stereographic conformal projection on the Clarke 1866 spheroid. The origin is at 64°N, 152°W (Environmental Systems Research Institute, 1997).
Source: Environmental Systems Research Institute, 1997
Table 66: Alaska Conformal Summary

Construction: Modified planar
Property: Conformal
Meridians: N/A
Parallels: N/A
Graticule spacing: N/A
Linear scale: The minimum scale factor is 0.997 at roughly 62.5°N, 156°W. Scale increases outward from these coordinates. Most of Alaska and the Aleutian Islands (with the exception of the panhandle) is bounded by a line of true scale. The scale factor for Alaska is from 0.997 to 1.003. That is one quarter the range for a corresponding conic projection (Snyder, 1987).
Uses: This projection is useful for mapping the complete state of Alaska on the Clarke 1866 spheroid or NAD27, but not with other datums and spheroids. Distortion increases as distance from Alaska increases.

Prompts

The following prompts display in the Projection Chooser once Alaska Conformal is selected. Respond to the prompts as described.

Spheroid Name
Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 65 on page 490.

False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Albers Conical Equal
Area
The Albers Conical Equal Area projection is mathematically based on
a cone that is conceptually secant on two parallels. There is no areal
deformation. The North or South Pole is represented by an arc. It
retains its properties at various scales, and individual sheets can be
joined along their edges.
This projection produces very accurate area and distance
measurements in the middle latitudes (Figure 209). Thus, Albers
Conical Equal Area is well-suited to countries or continents where
north-south depth is about 3/5 the breadth of east-west. When this
projection is used for the continental US, the two standard parallels
are 29.5° and 45.5° North.
This projection possesses the property of equal-area, and the
standard parallels are correct in scale and in every direction. Thus,
there is no angular distortion (i.e., meridians intersect parallels at
right angles), and conformality exists along the standard parallels.
Like other conics, Albers Conical Equal Area has concentric arcs for
parallels and equally spaced radii for meridians. Parallels are not
equally spaced, but are farthest apart between the standard parallels
and closer together on the north and south edges.
Table 67: Albers Conical Equal Area Summary
Construction Cone
Property Equal-area
Meridians Meridians are straight lines converging on the polar axis, but not at the pole.
Parallels Parallels are arcs of concentric circles concave toward a pole.
Graticule spacing Meridian spacing is equal on the standard parallels and decreases toward the poles. Parallel spacing decreases away from the standard parallels and increases between them. Meridians and parallels intersect each other at right angles. The graticule spacing preserves the property of equivalence of area. The graticule is symmetrical.
Linear scale Linear scale is true on the standard parallels. Maximum scale error is 1.25% on a map of the United States (48 states) with standard parallels of 29.5°N and 45.5°N.
Uses Used for thematic maps. Used for large countries with an east-west orientation. Maps based on the Albers Conical Equal Area for Alaska use standard parallels 55°N and 65°N; for Hawaii, the standard parallels are 8°N and 18°N. The National Atlas of the United States, United States Base Map (48 states), and the Geologic map of the United States are based on the standard parallels of 29.5°N and 45.5°N.
Albers Conical Equal Area is the projection exclusively used by the
USGS for sectional maps of all 50 states of the US in the National
Atlas of 1970.
Prompts
The following prompts display in the Projection Chooser once Albers
Conical Equal Area is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Latitude of 1st standard parallel
Latitude of 2nd standard parallel
Enter two values for the desired control lines of the projection (i.e.,
the standard parallels). Note that the first standard parallel is the
southernmost.
Then, define the origin of the map projection in both spherical and
rectangular coordinates.
Longitude of central meridian
Latitude of origin of projection
Enter values for longitude of the desired central meridian and
latitude of the origin of projection.
False easting at central meridian
False northing at origin
Enter values of false easting and false northing, corresponding to the
intersection of the central meridian and the latitude of the origin of
projection. These values must be in meters. It is often convenient to
make them large enough to prevent negative coordinates from
occurring within the region of the map projection. That is, the origin
of the rectangular coordinate system should fall outside of the map
projection to the south and west.
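As an illustration of how these prompts correspond to projection parameters, the following sketch uses the open-source pyproj/PROJ library, which is not part of ERDAS IMAGINE; the central meridian, latitude of origin, datum, and sample point are assumptions chosen only for the example.

# Hypothetical sketch using the open-source pyproj/PROJ library (not part of
# ERDAS IMAGINE) to show how the Projection Chooser prompts map to Albers
# Conical Equal Area parameters. All parameter values are assumptions.
from pyproj import CRS, Transformer

albers = CRS.from_proj4(
    "+proj=aea "                 # Albers Equal Area
    "+lat_1=29.5 +lat_2=45.5 "   # 1st and 2nd standard parallels (southernmost first)
    "+lon_0=-96 +lat_0=23 "      # longitude of central meridian, latitude of origin
    "+x_0=0 +y_0=0 "             # false easting / false northing, in meters
    "+datum=NAD27"               # spheroid/datum (Clarke 1866 via NAD27)
)

to_albers = Transformer.from_crs("EPSG:4267", albers, always_xy=True)  # NAD27 lat/lon in
x, y = to_albers.transform(-84.39, 33.75)   # a sample point (lon, lat)
print(round(x), round(y))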
Figure 209: Albers Conical Equal Area Projection
In Figure 209, the standard parallels are 20°N and 60°N. Note the
change in spacing of the parallels.
Azimuthal Equidistant
The Azimuthal Equidistant projection is mathematically based on a
plane tangent to the Earth. The entire Earth can be represented, but
generally less than one hemisphere is portrayed; the other hemisphere
can be shown, though it is greatly distorted. The projection has true
direction and true distance scaling from the point of tangency.
Table 68: Azimuthal Equidistant Summary
Construction Plane
Property Equidistant
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are complex curves concave toward the point of tangency. Equatorial aspect: the meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.
Parallels Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are complex curves. Equatorial aspect: the parallels are complex curves concave toward the nearest pole; the Equator is straight.
Graticule spacing Polar aspect: the meridian spacing is equal and increases away from the point of tangency. Parallel spacing is equidistant. Angular and area deformation increase away from the point of tangency.
Linear scale Polar aspect: linear scale is true from the point of tangency along the meridians only. Oblique and equatorial aspects: linear scale is true from the point of tangency. In all aspects, the projection shows distances true to scale when measured between the point of tangency and any other point on the map.
Uses The Azimuthal Equidistant projection is used for radio and seismic work, as every place in the world is shown at its true distance and direction from the point of tangency. The USGS uses the oblique aspect in the National Atlas and for large-scale mapping of Micronesia. The polar aspect is used as the emblem of the United Nations.
This projection is used mostly for polar projections because latitude
rings divide meridians at equal intervals with a polar aspect (Figure
210). Linear scale distortion is moderate and increases toward the
periphery. Meridians are equally spaced, and all distances and
directions are shown accurately from the central point.
This projection can also be used to center on any point on the Earth
(e.g., a city) and distance measurements are true from that central
point. Distances are not correct or true along parallels, and the
projection is neither equal-area nor conformal. Also, straight lines
radiating from the center of this projection represent great circles.
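The distance property can be checked numerically. The sketch below uses the open-source pyproj library (not part of ERDAS IMAGINE) on a sphere; the center point (Paris) and the test point (New York) are assumptions chosen only for the example.

# Hypothetical sketch with the open-source pyproj library (not ERDAS IMAGINE):
# for an Azimuthal Equidistant projection centered on a city, the planar
# distance from the center equals the along-the-ground distance to any point.
import math
from pyproj import Geod, Transformer

center_lon, center_lat = 2.35, 48.86            # assumed center (Paris)
aeqd = f"+proj=aeqd +lat_0={center_lat} +lon_0={center_lon} +R=6371000"
to_aeqd = Transformer.from_crs("+proj=longlat +R=6371000", aeqd, always_xy=True)

lon, lat = -74.0, 40.7                          # test point (New York)
x, y = to_aeqd.transform(lon, lat)
planar = math.hypot(x, y)                       # straight-line map distance from center

_, _, geodesic = Geod(a=6371000, b=6371000).inv(center_lon, center_lat, lon, lat)
print(round(planar), round(geodesic))           # the two distances agree on the sphere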
Prompts
The following prompts display in the Projection Chooser if Azimuthal
Equidistant is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Define the center of the map projection in both spherical and
rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of
the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 210: Polar Aspect of the Azimuthal Equidistant
Projection
This projection is commonly used in atlases for polar maps.
Behrmann
With the exception of compression in the horizontal direction and
expansion in the vertical direction, the Behrmann projection is the
same as the Lambert Cylindrical Equal-area projection. These
changes prevent distortion at latitudes 30°N and S instead of at the
Equator.
Source: Snyder and Voxland, 1989
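For reference, the standard spherical cylindrical equal-area formulas with a standard parallel of 30° reproduce this behavior. They come from general cartographic references (for example, Snyder) rather than from this guide, so the sketch below is illustrative only.

# Sketch of the standard spherical cylindrical equal-area formulas with a
# standard parallel of 30 degrees (the Behrmann case). Formulas from general
# cartographic references, not from this guide.
import math

R = 6371000.0                      # sphere radius, meters
phi_s = math.radians(30.0)         # standard parallels: 30 N and S

def behrmann(lon_deg, lat_deg, lon0_deg=0.0):
    lam = math.radians(lon_deg - lon0_deg)
    phi = math.radians(lat_deg)
    x = R * lam * math.cos(phi_s)            # horizontal compression by cos(30)
    y = R * math.sin(phi) / math.cos(phi_s)  # vertical expansion by 1/cos(30)
    return x, y

print(behrmann(30.0, 30.0))        # scale is true at the standard parallels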
Prompts
The following prompts display in the Projection Chooser once
Behrmann is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 69: Behrmann Summary
Construction Cylindrical
Property Equal-area
Meridians Straight parallel lines that are equally spaced and
0.42 times the length of the Equator.
Parallels Straight lines that are unequally spaced and
farthest apart near the Equator, perpendicular to
meridians.
Graticule spacing See Meridians and Parallels. Poles are straight lines
the same length as the Equator. Symmetry is
present about any meridian or the Equator.
Linear scale Scale is true along latitudes 30°N and S.
Uses Used for creating world maps.
Figure 211: Behrmann Cylindrical Equal-Area Projection
Source: Snyder and Voxland, 1989
Bonne
The Bonne projection is an equal-area projection. True scale is
achievable along the central meridian and all parallels. Although it
was used in the 1800s and early 1900s, Bonne was replaced by
Lambert Azimuthal Equal Area by the mapping companies Rand
McNally & Co. and Hammond, Inc. (see Lambert Azimuthal Equal
Area on page 570).
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Bonne
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Latitude of standard parallel
Longitude of central meridian
Enter values of the latitude of standard parallel and the longitude of
central meridian.
False easting
False northing
Table 70: Bonne Summary
Construction Pseudocone
Property Equal-area
Meridians N/A
Parallels Parallels are concentric arcs that are equally spaced.
Graticule spacing The central meridian is a linear graticule.
Linear scale Scale is true along the central meridian and all parallels.
Uses This projection is best used on maps of continents and small areas. There is some distortion.
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 212: Bonne Projection
Source: Snyder and Voxland, 1989
Cassini
The Cassini projection is a transverse cylindrical projection and is
neither equal-area nor conformal. It is best used for areas that are
mostly in the north-south direction.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Cassini
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Scale Factor
Enter the scale factor.
Longitude of central meridian
Latitude of origin of projection
Enter the values for longitude of central meridian and latitude of
origin of projection.
False easting
False northing
Table 71: Cassini Summary
Construction Cylinder
Property Compromise
Meridians N/A
Parallels N/A
Graticule spacing Linear graticules are located at the Equator, the central meridian, and the meridians 90° from the central meridian.
Linear scale With increasing distance from the central meridian, scale distortion increases. Scale is true along the central meridian and lines perpendicular to the central meridian.
Uses Cassini is used for large maps of areas near the central meridian. The extent is 5 degrees to either side of the central meridian.
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 213: Cassini Projection
Source: Snyder and Voxland, 1989
Eckert I
There is a great amount of distortion at the Equator because of the
break in the meridians at the Equator.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Eckert
I is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 72: Eckert I Summary
Construction Pseudocylinder
Property Neither conformal nor equal-area
Meridians Meridians are converging straight lines that are
equally spaced and broken at the Equator.
Parallels Parallels are perpendicular to the central meridian,
equally spaced straight parallel lines.
Graticule spacing See Meridians and Parallels. Poles are lines one half
the length of the Equator. Symmetry exists about
the central meridian or the Equator.
Linear scale Scale is true along latitudes 47°10' N and S. Scale
is constant at any latitude (and latitude of opposite
sign) and any meridian.
Uses This projection is used as a novelty to show a
straight-line graticule.
Figure 214: Eckert I Projection
Source: Snyder and Voxland, 1989
Eckert II
The break at the Equator creates a great amount of distortion there.
Eckert II is similar to the Eckert I projection. The Eckert I projection
has meridians positioned identically to Eckert II, but the Eckert I
projection has equidistant parallels.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Eckert
II is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 73: Eckert II Summary
Construction Pseudocylinder
Property Equal-area
Meridians Meridians are straight lines that are equally spaced
and broken at the Equator. Central meridian is one
half as long as the Equator.
Parallels Parallels are straight parallel lines that are
unequally spaced. The greatest separation is close
to the Equator. Parallels are perpendicular to the
central meridian.
Graticule spacing See Meridians and Parallels. Pole lines are half the
length of the Equator. Symmetry exists at the
central meridian or the Equator.
Linear scale Scale is true along latitudes 55°10' N and S. Scale
is constant along any latitude.
Uses This projection is used as a novelty to show
straight-line equal-area graticule.
Figure 215: Eckert II Projection
Source: Snyder and Voxland, 1989
Eckert III
In the Eckert III projection, no point is free of all scale distortion,
but the Equator is free of angular distortion (Snyder and Voxland,
1989).
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Eckert
III is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 74: Eckert III Summary
Construction Pseudocylinder
Property Area is not preserved.
Meridians Meridians are equally spaced elliptical curves. The
meridians +/- 180° from the central meridian are
semicircles. The poles and the central meridian are
straight lines one half the length of the Equator.
Parallels Parallels are equally spaced straight lines.
Graticule spacing See Meridians and Parallels. Pole lines are half the
length of the Equator. Symmetry exists at the
central meridian or the Equator.
Linear scale Scale is correct only along 37° and 55° N and S.
Features close to poles are compressed in the
north-south direction.
Uses Used for mapping the world.
Figure 216: Eckert III Projection
Source: Snyder and Voxland, 1989
Eckert IV
The Eckert IV projection is best used for thematic maps of the globe.
An example of a thematic map is one depicting land cover.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Eckert
IV is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 75: Eckert IV Summary
Construction Pseudocylinder
Property Equal-area
Meridians Meridians are elliptical arcs that are equally spaced.
Parallels Parallels are straight lines that are unequally spaced and closer together at the poles.
Graticule spacing See Meridians and Parallels. The poles and the central meridian are straight lines one half the length of the Equator.
Linear scale Scale is distorted north-south 40 percent along the Equator relative to the east-west dimension. This distortion decreases to zero at 40°30' N and S and at the central meridian. Scale is correct only along these parallels. Nearer the poles, features are compressed in the north-south direction (Environmental Systems Research Institute, 1997).
Uses Use for world maps only.
Figure 217: Eckert IV Projection
Source: Snyder and Voxland, 1989
Eckert V
The Eckert V projection is only supported on a sphere. Like Eckert
III, no point is free of all scale distortion, but the Equator is free of
angular distortion (Snyder and Voxland, 1989).
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Eckert
V is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 218: Eckert V Summary
Construction Pseudocylinder
Property Area is not preserved.
Meridians Meridians are sinusoidal curves that are equally
spaced. The poles and the central meridian are
straight lines one half as long as the Equator.
Parallels Parallels are straight lines that are equally spaced.
Graticule spacing See Meridians and Parallels.
Linear scale Scale is correct only along 37°55' N and S.
Features near the poles are compressed in the
north-south direction.
Uses This projection is best used for thematic world
maps.
Figure 219: Eckert V Projection
Source: Snyder and Voxland, 1989
Eckert VI
The Eckert VI projection is best used for thematic maps. An example
of a thematic map is one depicting land cover.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Eckert
VI is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value of the longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 76: Eckert VI Summary
Construction Pseudocylinder
Property Equal-area
Meridians Meridians are sinusoidal curves that are equally spaced.
Parallels Parallels are unequally spaced straight lines, closer together at the poles.
Graticule spacing See Meridians and Parallels. The poles and the central meridian are straight lines one half the length of the Equator.
Linear scale Scale is distorted north-south 29 percent along the Equator relative to the east-west dimension. This distortion decreases to zero at 49°16' N and S at the central meridian. Scale is correct only along these parallels. Nearer the poles, features are compressed in the north-south direction (Environmental Systems Research Institute, 1997).
Uses Use for world maps only.
Figure 220: Eckert VI Projection
Source: Snyder and Voxland, 1989
EOSAT SOM
The EOSAT SOM projection is similar to the Space Oblique Mercator
projection. The main exception to the similarity is that the EOSAT
SOM projection's X and Y coordinates are switched.
For information, see Space Oblique Mercator on page 608.
Prompts
The following prompts display in the Projection Chooser once EOSAT
SOM is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Landsat vehicle ID (1-5)
Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Orbital path number (1-251 or 1-233)
For Landsats 1, 2, and 3, the path range is from 1 to 251. For
Landsats 4 and 5, the path range is from 1 to 233.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Equidistant Conic
With Equidistant Conic (Simple Conic) projections, correct distance
is achieved along the line(s) of contact with the cone, and parallels
are equidistantly spaced. It can be used with either one (A) or two
(B) standard parallels.
This projection is neither conformal nor equal-area, but the north-
south scale along meridians is correct. The North or South Pole is
represented by an arc. Because scale distortion increases with
increasing distance from the line(s) of contact, the Equidistant Conic
is used mostly for mapping regions predominantly east-west in
extent. The USGS uses the Equidistant Conic in an approximate form
for a map of Alaska.
Prompts
The following prompts display in the Projection Chooser if Equidistant
Conic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Table 77: Equidistant Conic Summary
Construction Cone
Property Equidistant
Meridians Meridians are straight lines converging on a polar axis but not at the pole.
Parallels Parallels are arcs of concentric circles concave toward a pole.
Graticule spacing Meridian spacing is true on the standard parallels and decreases toward the pole. Parallels are placed at true scale along the meridians. Meridians and parallels intersect each other at right angles. The graticule is symmetrical.
Linear scale Linear scale is true along all meridians and along the standard parallel or parallels.
Uses The Equidistant Conic projection is used in atlases for portraying mid-latitude areas. It is good for representing regions with a few degrees of latitude lying on one side of the Equator. It was used in the former Soviet Union for mapping the entire country (Environmental Systems Research Institute, 1992).
Define the origin of the projection in both spherical and rectangular
coordinates.
Longitude of central meridian
Latitude of origin of projection
Enter values for the longitude of the desired central meridian and the
latitude of the origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
intersection of the central meridian and the latitude of the origin of
projection. These values must be in meters. It is often convenient to
make them large enough so that no negative coordinates occur
within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map
projection to the south and west.
One or two standard parallels?
Latitude of standard parallel
Enter one or two values for the desired control line(s) of the
projection, i.e., the standard parallel(s). Note that if two standard
parallels are used, the first is the southernmost.
Figure 221: Equidistant Conic Projection
Source: Snyder and Voxland, 1989
Equidistant Cylindrical
The Equidistant Cylindrical projection is similar to the
Equirectangular projection.
For information, see Equirectangular (Plate Carrée) on page 555.
Prompts
The following prompts display in the Projection Chooser if Equidistant
Cylindrical is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of standard parallel
Latitude of true scale
Enter a value for longitude of the standard parallel and the latitude
of true scale.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Equirectangular (Plate Carrée)
Also called Simple Cylindrical, Equirectangular is composed of
equally spaced, parallel meridians and latitude lines that cross at
right angles on a rectangular map. Each rectangle formed by the grid
is equal in area, shape, and size.
Equirectangular is neither conformal nor equal-area, but it does contain
less distortion than the Mercator in polar regions. Scale is true on all
meridians and on the central parallel. Directions due north, south,
east, and west are true, but all other directions are distorted. The
Equator is the standard parallel, true to scale and free of distortion.
However, this projection may be centered anywhere.
This projection is valuable for its ease in computer plotting. It is
useful for mapping small areas, such as city maps, because of its
simplicity. The USGS uses Equirectangular for index maps of the
conterminous US with insets of Alaska, Hawaii, and various islands.
However, neither scale nor projection is marked to avoid implying
that the maps are suitable for normal geographic information.
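The ease of computation comes from the simplicity of the spherical Equirectangular formulas, sketched below. The formulas are taken from general cartographic references rather than from this guide, and the sample values are arbitrary.

# Sketch of the standard spherical Equirectangular formulas (from general
# cartographic references, not this guide): x and y are direct linear
# functions of longitude and latitude, which is why plotting is so simple.
import math

R = 6371000.0                         # sphere radius, meters

def equirectangular(lon_deg, lat_deg, lon0_deg=0.0, lat_ts_deg=0.0):
    """lat_ts_deg is the latitude of true scale (the Equator for Plate Carree)."""
    x = R * math.radians(lon_deg - lon0_deg) * math.cos(math.radians(lat_ts_deg))
    y = R * math.radians(lat_deg)
    return x, y

print(equirectangular(10.0, 50.0))    # with the Equator as standard parallel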
Prompts
The following prompts display in the Projection Chooser if
Equirectangular is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Table 78: Equirectangular (Plate Carrée) Summary
Construction Cylinder
Property Compromise
Meridians All meridians are straight lines.
Parallels All parallels are straight lines.
Graticule spacing Equally spaced parallel meridians and latitude lines cross at right angles.
Linear scale The scale is correct along all meridians and along the standard parallels (Environmental Systems Research Institute, 1992).
Uses Best used for city maps, or other small areas with map scales small enough to reduce the obvious distortion. Used for simple portrayals of the world or regions with minimal geographic data, such as index maps (Environmental Systems Research Institute, 1992).
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Latitude of true scale
Enter a value for longitude of the desired central meridian to center
the projection and the latitude of true scale.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 222: Equirectangular Projection
Source: Snyder and Voxland, 1989
Gall Stereographic
The Gall Stereographic projection was created in 1855. The two
standard parallels are located at 45°N and 45°S. This projection is
used for world maps.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Gall
Stereographic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value for longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 79: Gall Stereographic Summary
Construction Cylinder
Property Compromise
Meridians Meridians are straight lines that are equally spaced.
Parallels Parallels are straight lines that have increased space with distance from the Equator.
Graticule spacing All meridians and parallels are linear.
Linear scale Scale is true in all directions along latitudes 45°N and S. Scale is constant along parallels and is symmetrical around the Equator. Distances are compressed between latitudes 45°N and S, and expanded beyond them (Environmental Systems Research Institute, 1997).
Uses Use for world maps only.
Gauss Kruger
The Gauss Kruger projection is the same as the Transverse Mercator
projection, with the exception that Gauss Kruger uses a fixed scale
factor of 1. Gauss Kruger is available only in ellipsoidal form.
Many countries such as China and Germany use Gauss Kruger in 3-
degree zones instead of 6-degree zones for UTM.
For more information, see Transverse Mercator on page 625.
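As an illustration of the 3-degree zoning convention mentioned above, the following sketch computes a zone number and its central meridian. Zone numbering conventions vary by country, so treat this as an assumption-based example rather than a definitive rule.

# Illustrative sketch (conventions vary by country): in a 3-degree Gauss Kruger
# zone system, central meridians fall on multiples of 3 degrees of longitude,
# so a zone number and its central meridian follow directly from the longitude.
def gauss_kruger_3deg_zone(lon_deg: float) -> tuple[int, float]:
    zone = round(lon_deg / 3.0)          # zone number counted from Greenwich
    central_meridian = zone * 3.0        # degrees of longitude
    return zone, central_meridian

print(gauss_kruger_3deg_zone(117.3))     # -> (39, 117.0), e.g., eastern China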
Prompts
The following prompts display in the Projection Chooser once Gauss
Kruger is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Scale factor
Designate the desired scale factor. This parameter is used to modify
scale distortion. A value of one indicates true scale only along the
central meridian. It may be desirable to have true scale along two
lines equidistant from and parallel to the central meridian, or to
lessen scale distortion away from the central meridian. A factor of
less than, but close to, one is often used.
Longitude of central meridian
Enter a value for the longitude of the desired central meridian to
center the projection.
Latitude of origin of projection
Enter the value for the latitude of origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
General Vertical Near-side Perspective
General Vertical Near-side Perspective presents a picture of the
Earth as if a photograph were taken at some distance less than
infinity. The map user simply identifies area of coverage, distance of
view, and angle of view. It is a variation of the General Perspective
projection in which the camera precisely faces the center of the
Earth.
Central meridian and a particular parallel (if shown) are straight
lines. Other meridians and parallels are usually arcs of circles or
ellipses, but some may be parabolas or hyperbolas. Like all
perspective projections, General Vertical Near-side Perspective
cannot illustrate the entire globe on one map; it can represent only
part of one hemisphere.
Table 80: General Vertical Near-side Perspective Summary
Construction Plane
Property Compromise
Meridians The central meridian is a straight line in all aspects. In the polar aspect all meridians are straight. In the equatorial aspect the Equator is straight (Environmental Systems Research Institute, 1992).
Parallels Parallels on vertical polar aspects are concentric circles. Nearly all other parallels are elliptical arcs, except that certain angles of tilt may cause some parallels to be shown as parabolas or hyperbolas.
Graticule spacing Polar aspect: parallels are concentric circles that are not evenly spaced. Meridians are evenly spaced and spacing increases from the center of the projection. Equatorial and oblique aspects: parallels are elliptical arcs that are not evenly spaced. Meridians are elliptical arcs that are not evenly spaced, except for the central meridian, which is a straight line.
Linear scale Radial scale decreases from true scale at the center to zero on the projection edge. The scale perpendicular to the radii decreases, but not as rapidly (Environmental Systems Research Institute, 1992).
Uses Often used to show the Earth or other planets and satellites as seen from space. Used as an aesthetic presentation, rather than for technical applications (Environmental Systems Research Institute, 1992).
Prompts
The following prompts display in the Projection Chooser if General
Vertical Near-side Perspective is selected. Respond to the prompts
as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Height of perspective point
Enter a value for the desired height of the perspective point above
the sphere in the same units as the radius.
Then, define the center of the map projection in both spherical and
rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of
the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Geographic (Lat/Lon)
The Geographic is a spherical coordinate system composed of
parallels of latitude (Lat) and meridians of longitude (Lon) (Figure
223). Both divide the circumference of the Earth into 360 degrees.
Degrees are further subdivided into minutes and seconds (60 sec =
1 minute, 60 min = 1 degree).
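A short worked example of this subdivision, converting degrees, minutes, and seconds to decimal degrees, is shown below; the sample coordinates are arbitrary.

# Small sketch of the degree-minute-second subdivision described above:
# 60 seconds = 1 minute, 60 minutes = 1 degree.
def dms_to_decimal(degrees: int, minutes: int, seconds: float, negative: bool = False) -> float:
    dd = degrees + minutes / 60.0 + seconds / 3600.0
    return -dd if negative else dd      # negative for south latitude or west longitude

# 33 degrees 45 minutes 0 seconds North -> 33.75
print(dms_to_decimal(33, 45, 0.0))
# 84 degrees 23 minutes 24 seconds West -> -84.39
print(dms_to_decimal(84, 23, 24.0, negative=True))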
Because the Earth spins on an axis between the North and South
Poles, concentric, parallel circles can be constructed, with a
reference line exactly at the north-south center, termed the Equator.
The series of circles north of the Equator is termed north latitudes
and runs from 0° latitude (the Equator) to 90° North latitude (the
North Pole), and similarly southward. Position in an east-west
direction is determined from lines of longitude. These lines are not
parallel, and they converge at the poles. However, they intersect
lines of latitude perpendicularly.
Unlike the Equator in the latitude system, there is no natural zero
meridian. In 1884, it was finally agreed that the meridian of the
Royal Observatory in Greenwich, England, would be the prime
meridian. Thus, the origin of the geographic coordinate system is the
intersection of the Equator and the prime meridian. Note that the
180° meridian is the international date line.
If you choose Geographic from the Projection Chooser, the following
prompts display:
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Note that in responding to prompts for other projections, values
for longitude are negative west of Greenwich and values for
latitude are negative south of the Equator.
Figure 223: Geographic Projection
Figure 223 shows the graticule of meridians and parallels on the
global surface.
Gnomonic
Gnomonic is a perspective projection that projects onto a tangent
plane from a position in the center of the Earth. Because of the close
perspective, this projection is limited to less than a hemisphere.
However, it is the only projection which shows all great circles as
straight lines. With a polar aspect, the latitude intervals increase
rapidly from the center outwards.
With an equatorial or oblique aspect, the Equator is straight.
Meridians are straight and parallel, while intervals between parallels
increase rapidly from the center and parallels are convex to the
Equator.
Because great circles are straight, this projection is useful for air and
sea navigation. Rhumb lines are curved, which is the opposite of the
Mercator projection.
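The great circle property can be verified numerically. The sketch below uses the open-source pyproj library (not part of ERDAS IMAGINE) on a sphere; the projection center and the New York to London route are assumptions chosen only for the example.

# Hypothetical sketch with the open-source pyproj library (not ERDAS IMAGINE):
# sample points along a great circle on a sphere and confirm they project to
# (nearly) collinear points under a Gnomonic projection covering the route.
from pyproj import Geod, Transformer

sphere = "+proj=longlat +R=6371000"
gnom = "+proj=gnom +lat_0=50 +lon_0=-60 +R=6371000"     # center near the route
to_gnom = Transformer.from_crs(sphere, gnom, always_xy=True)

# Intermediate points on the great circle from New York to London (sphere geodesic)
geod = Geod(a=6371000, b=6371000)
pts = geod.npts(-74.0, 40.7, -0.1, 51.5, 5)
xy = [to_gnom.transform(lon, lat) for lon, lat in pts]

(x0, y0), (x1, y1) = xy[0], xy[-1]
for x, y in xy[1:-1]:
    cross = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)       # 2-D cross product
    print(abs(cross) / ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5)  # offset from the line, meters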
Prompts
The following prompts display in the Projection Chooser if Gnomonic
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Table 81: Gnomonic Summary
Construction Plane
Property Compromise
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are straight lines.
Parallels Polar aspect: the parallels are concentric circles. Oblique and equatorial aspects: parallels are ellipses, parabolas, or hyperbolas concave toward the poles (except for the Equator, which is straight).
Graticule spacing Polar aspect: the meridian spacing is equal and increases away from the pole. The parallel spacing increases rapidly from the pole. Oblique and equatorial aspects: the graticule spacing increases very rapidly away from the center of the projection.
Linear scale Linear scale and angular and areal deformation are extreme, rapidly increasing away from the center of the projection.
Uses The Gnomonic projection is used in seismic work because seismic waves travel in approximately great circles. It is used with the Mercator projection for navigation.
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Define the center of the map projection in both spherical and
rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of
the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
Hammer
The Hammer projection is useful for mapping the world. In
particular, the Hammer projection is suited for thematic maps of the
world, such as land cover.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once
Hammer is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value for longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 82: Hammer Summary
Construction Modified Azimuth
Property Equal-area
Meridians The central meridian is half as long as the Equator and a straight line. Others are curved and concave toward the central meridian and unequally spaced.
Parallels With the exception of the Equator, all parallels are complex curves that have a concave shape toward the nearest pole.
Graticule spacing Only the Equator and central meridian are straight lines.
Linear scale Scale decreases along the Equator and central meridian as distance from the origin increases.
Uses Use for world maps only.
Figure 224: Hammer Projection
Source: Snyder and Voxland, 1989
Interrupted Goode Homolosine
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once
Interrupted Goode Homolosine is selected. Respond to the prompts
as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Figure 225: Interrupted Goode Homolosine Projection
Table 83: Interrupted Goode Homolosine
Construction Pseudocylindrical
Property Equal-area
Meridians In the interrupted form, there are six central
meridians, each a straight line 0.22 as long as the
Equator but not crossing the Equator. Other
meridians are equally spaced sinusoidal curves
between latitudes 40°44' N and S, and elliptical
arcs elsewhere, all concave toward the central
meridian. There is a slight bend in meridians at the
40°44' latitudes (Snyder and Voxland, 1989).
Parallels Parallels are straight parallel lines, which are
perpendicular to the central meridians. Between
latitudes 40°44' N and S, they are equally spaced.
Parallels gradually get closer together toward the
poles.
Graticule spacing See Meridians and Parallels. Poles are points.
Symmetry is nonexistent in the interrupted form.
Linear scale Scale is true at each latitude between 40°44' N and
S along the central meridian within the same
latitude range. Scale varies with increased latitudes.
Uses This projection is useful for world maps.
Source: Snyder and Voxland, 1989
Interrupted Mollweide
The interrupted Mollweide projection reduces the distortion of the
Mollweide projection. It is interrupted into six regions with fixed
parameters for each region.
Source: Snyder and Voxland, 1989
For more information, see Mollweide on page 585.
Prompts
The following prompts display in the Projection Chooser once
Interrupted Mollweide is selected. Respond to the prompts as
described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Figure 226: Interrupted Mollweide Projection
Source: Snyder and Voxland, 1989
Lambert Azimuthal Equal Area
The Lambert Azimuthal Equal Area projection is mathematically
based on a plane tangent to the Earth. It is the only projection that
can accurately represent both area and true direction from the
center of the projection (Figure 227). This central point can be
located anywhere. Concentric circles are closer together toward the
edge of the map, and the scale distorts accordingly. This projection
is well-suited to square or round land masses. This projection
generally represents only one hemisphere.
In the polar aspect, latitude rings decrease their intervals from the
center outwards. In the equatorial aspect, parallels are curves
flattened in the middle. Meridians are also curved, except for the
central meridian, and spacing decreases toward the edges.
Table 84: Lambert Azimuthal Equal Area Summary
Construction Plane
Property Equal-area
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.
Parallels Polar aspect: parallels are concentric circles. Oblique and equatorial aspects: the parallels are complex curves. The Equator on the equatorial aspect is a straight line.
Graticule spacing Polar aspect: the meridian spacing is equal and increases, and the parallel spacing is unequal and decreases toward the periphery of the projection. The graticule spacing, in all aspects, retains the property of equivalence of area.
Linear scale Linear scale is better than most azimuthals, but not as good as the equidistant. Angular deformation increases toward the periphery of the projection. Scale decreases radially toward the periphery of the map projection. Scale increases perpendicular to the radii toward the periphery.
Uses The polar aspect is used by the USGS in the National Atlas. The polar, oblique, and equatorial aspects are used by the USGS for the Circum-Pacific Map.
Prompts
The following prompts display in the Projection Chooser if Lambert
Azimuthal Equal Area is selected. Respond to the prompts as
described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Define the center of the map projection in both spherical and
rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of
the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
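The equal-area property can be checked numerically. The following sketch uses the open-source pyproj library (not part of ERDAS IMAGINE); the test polygon and projection center are assumptions chosen only for the example.

# Hypothetical sketch with the open-source pyproj library (not ERDAS IMAGINE):
# the planar area of a small polygon projected with Lambert Azimuthal Equal
# Area closely matches its area computed on the ellipsoid, illustrating the
# equal-area property.
from pyproj import Geod, Transformer

lons = [-1.0, 1.0, 1.0, -1.0]             # a 2 x 2 degree quadrilateral
lats = [49.0, 49.0, 51.0, 51.0]

geodesic_area, _ = Geod(ellps="WGS84").polygon_area_perimeter(lons, lats)

laea = "+proj=laea +lat_0=50 +lon_0=0 +datum=WGS84"   # centered on the polygon
to_laea = Transformer.from_crs("EPSG:4326", laea, always_xy=True)
xy = [to_laea.transform(lon, lat) for lon, lat in zip(lons, lats)]

# Shoelace formula for the planar polygon area
planar_area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                            for (x1, y1), (x2, y2) in zip(xy, xy[1:] + xy[:1])))
print(abs(geodesic_area), planar_area)    # nearly identical, in square meters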
In Figure 227, three views of the Lambert Azimuthal Equal Area
projection are shown: A) Polar aspect, showing one hemisphere; B)
Equatorial aspect, frequently used in old atlases for maps of the
eastern and western hemispheres; C) Oblique aspect, centered on
40°N.
Figure 227: Lambert Azimuthal Equal Area Projection
Lambert Conformal Conic
This projection is very similar to Albers Conical Equal Area, described
previously. It is mathematically based on a cone that is tangent at
one parallel or, more often, that is conceptually secant on two
parallels (Figure 228). Areal distortion is minimal, but increases
away from the standard parallels. North or South Pole is represented
by a point; the other pole cannot be shown. Great circle lines are
approximately straight. It retains its properties at various scales,
and sheets can be joined along their edges. This projection, like
Albers, is most valuable in middle latitudes, especially in a country
sprawling east to west like the US. The standard parallels for the US
are 33° and 45°N.
The major property of this projection is its conformality. At all
coordinates, meridians and parallels cross at right angles. The
correct angles produce correct shapes. Also, great circles are
approximately straight. The conformal property of Lambert
Conformal Conic and the straightness of great circles make it
valuable for landmark flying.
Table 85: Lambert Conformal Conic Summary
Construction Cone
Property Conformal
Meridians Meridians are straight lines converging at a pole.
Parallels Parallels are arcs of concentric circles concave toward a pole and centered at a pole.
Graticule spacing Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule spacing retains the property of conformality. The graticule is symmetrical.
Linear scale Linear scale is true on standard parallels. Maximum scale error is 2.5% on a map of the United States (48 states) with standard parallels at 33°N and 45°N.
Uses Used for large countries in the mid-latitudes having an east-west orientation. The United States (50 states) Base Map uses standard parallels at 37°N and 65°N. Some of the National Topographic Map Series 7.5-minute and 15-minute quadrangles, and the State Base Map series are constructed on this projection. The latter series uses standard parallels of 33°N and 45°N. Aeronautical charts for Alaska use standard parallels at 55°N and 65°N. The National Atlas of Canada uses standard parallels at 49°N and 77°N.
Lambert Conformal Conic is the State Plane coordinate system
projection for states of predominant east-west expanse. Since 1962,
Lambert Conformal Conic has been used for the International Map of
the World between 84°N and 80°S.
In comparison with Albers Conical Equal Area, Lambert Conformal
Conic possesses true shape of small areas, whereas Albers possesses
equal-area. Unlike Albers, parallels of Lambert Conformal Conic are
spaced at increasing intervals the farther north or south they are
from the standard parallels.
Prompts
The following prompts display in the Projection Chooser if Lambert
Conformal Conic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Latitude of 1st standard parallel
Latitude of 2nd standard parallel
Enter two values for the desired control lines of the projection, i.e.,
the standard parallels. Note that the first standard parallel is the
southernmost.
Then, define the origin of the map projection in both spherical and
rectangular coordinates.
Longitude of central meridian
Latitude of origin of projection
If you only have one standard parallel, you should enter that
same value into all three latitude fields.
Enter values for longitude of the desired central meridian and
latitude of the origin of projection.
False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the
intersection of the central meridian, and the latitude of the origin of
projection. These values must be in meters. It is often convenient to
make them large enough to ensure that there are no negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
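As with Albers Conical Equal Area, the prompts correspond directly to projection parameters. The sketch below uses the open-source pyproj/PROJ library (not part of ERDAS IMAGINE) with the United States standard parallels of 33°N and 45°N; the central meridian, latitude of origin, and datum are assumptions chosen only for the example.

# Hypothetical sketch using the open-source pyproj/PROJ library (not part of
# ERDAS IMAGINE), showing how the prompts above map to a Lambert Conformal
# Conic definition. Parameter values other than the standard parallels are assumed.
from pyproj import CRS, Transformer

lcc = CRS.from_proj4(
    "+proj=lcc "                 # Lambert Conformal Conic
    "+lat_1=33 +lat_2=45 "       # 1st and 2nd standard parallels (southernmost first)
    "+lon_0=-96 +lat_0=39 "      # central meridian and latitude of origin (assumed)
    "+x_0=0 +y_0=0 "             # false easting / false northing, meters
    "+datum=NAD83"
)

to_lcc = Transformer.from_crs("EPSG:4269", lcc, always_xy=True)   # NAD83 lat/lon in
print(to_lcc.transform(-96.0, 39.0))    # the projection origin maps to (0, 0)

Only the projection keyword and the standard parallels distinguish this definition from the Albers Conical Equal Area sketch shown earlier; the remaining prompts play the same roles in both projections.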
Figure 228: Lambert Conformal Conic Projection
In Figure 228, the standard parallels are 20°N and 60°N. Note the
change in spacing of the parallels.
Loximuthal
The distortion of the Loximuthal projection is average to
pronounced. Distortion is not present at the central latitude on the
central meridian. What is most noteworthy about the Loximuthal
projection is that its loxodromes are straight, true to scale, and
correct in azimuth from the center (Snyder and Voxland, 1989).
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser if Loximuthal
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Latitude of central parallel
Table 86: Loximuthal Summary
Construction Pseudocylindrical
Property Neither conformal nor equal-area
Meridians The central meridian is a straight line generally
over half as long as the Equator, depending on the
central latitude. If the central latitude is the
Equator, the ratio is 0.5; if it is 40°N or S, the ratio
is 0.65. Other meridians are equally spaced
complex curves intersecting at the poles and
concave toward the central meridian (Snyder and
Voxland, 1989).
Parallels Parallels are straight parallel lines that are equally
spaced. They are perpendicular to the central
meridian.
Graticule spacing See Meridians and Parallels. Poles are points.
Symmetry exists about the central meridian.
Symmetry also exists at the Equator if it is
designated as the central latitude.
Linear scale Scale is true along the central meridian. Scale is
also constant along any given latitude, but different
for the latitude of opposite sign.
Uses Used for world maps where loxodromes (rhumb
lines) are emphasized.
Enter a value for longitude of the desired central meridian to center
the projection and the latitude of central parallel.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 229: Loximuthal Projection
Source: Snyder and Voxland, 1989
Mercator
This famous cylindrical projection was originally designed by Flemish
map maker Gerhardus Mercator in 1569 to aid navigation (Figure
230). Meridians and parallels are straight lines and cross at 90°
angles. Angular relationships are preserved. However, to preserve
conformality, parallels are placed increasingly farther apart with
increasing distance from the Equator. Due to extreme scale
distortion in high latitudes, the projection is rarely extended beyond
80°N or S unless the latitude of true scale is other than the Equator.
Distance scales are usually furnished for several latitudes.
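The increasing spacing of the parallels follows from the standard spherical Mercator ordinate, y = R ln tan(45° + φ/2), which comes from general cartographic references rather than from this guide. The sketch below shows how much map distance a 10-degree band of latitude occupies at different latitudes.

# Sketch of the standard spherical Mercator ordinate, y = R*ln(tan(45 + lat/2)),
# taken from general cartographic references (not stated in this guide). It
# shows why parallels are placed increasingly farther apart away from the Equator.
import math

R = 6371000.0   # sphere radius, meters

def mercator_y(lat_deg):
    return R * math.log(math.tan(math.radians(45.0 + lat_deg / 2.0)))

for lo, hi in [(0, 10), (40, 50), (70, 80)]:
    print(f"{lo}-{hi} deg: {(mercator_y(hi) - mercator_y(lo)) / 1000:.0f} km on the map")
# The 10-degree band near 75 N is several times taller than the same band at the Equator.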
This projection can be thought of as being mathematically based on
a cylinder tangent at the Equator. Any straight line is a constant-
azimuth (rhumb) line. Areal enlargement is extreme away from the
Equator; poles cannot be represented. Shape is true only within any
small area. It is a reasonably accurate projection within a 15° band
along the line of tangency.
Rhumb lines, which show constant direction, are straight. For this
reason, a Mercator map was very valuable to sea navigators.
However, rhumb lines are not the shortest path; great circles are the shortest path. Most great circles appear as long arcs when drawn
on a Mercator map.
Table 87: Mercator Summary
Construction Cylinder
Property Conformal
Meridians Meridians are straight and parallel.
Parallels Parallels are straight and parallel.
Graticule spacing Meridian spacing is equal and the parallel spacing increases away from the Equator. The graticule spacing retains the property of conformality. The graticule is symmetrical. Meridians intersect parallels at right angles.
Linear scale Linear scale is true along the Equator only (line of tangency), or along two parallels equidistant from the Equator (the secant form). Scale can be determined by measuring one degree of latitude, which equals 60 nautical miles, 69 statute miles, or 111 kilometers.
Uses An excellent projection for equatorial regions. Otherwise, the Mercator is a special-purpose map projection best suited for navigation. Secant constructions are used for large-scale coastal charts. The use of the Mercator map projection as the base for nautical charts is universal. Examples are the charts published by the National Ocean Survey, US Dept. of Commerce.
Prompts
The following prompts display in the Projection Chooser if Mercator
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Define the origin of the map projection in both spherical and
rectangular coordinates.
Longitude of central meridian
Latitude of true scale
Enter values for longitude of the desired central meridian and
latitude at which true scale is desired. Selection of a parameter other
than the Equator can be useful for making maps in extreme north or
south latitudes.
False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the
intersection of the central meridian and the latitude of true scale.
These values must be in meters. It is often convenient to make them
large enough so that no negative coordinates occur within the region
of the map projection. That is, the origin of the rectangular
coordinate system should fall outside of the map projection to the
south and west.
Figure 230: Mercator Projection
In Figure 230, all angles are shown correctly; therefore, small shapes are true (i.e., the map is conformal). Rhumb lines are straight, which makes the projection useful for navigation.
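As an illustration only, the prompts above correspond closely to the parameters of a PROJ-style Mercator definition; the sketch below uses the open-source pyproj library (an assumption for this example, not part of the Projection Chooser), with hypothetical values for the central meridian, latitude of true scale, spheroid, and false easting/northing.

    from pyproj import CRS, Transformer

    # Hypothetical Mercator parameters: central meridian 75 W, true scale at the
    # Equator, false easting 500,000 m, false northing 0 m, Clarke 1866 spheroid.
    mercator = CRS.from_proj4(
        "+proj=merc +lon_0=-75 +lat_ts=0 +x_0=500000 +y_0=0 +ellps=clrk66 +units=m"
    )
    to_mercator = Transformer.from_crs("EPSG:4326", mercator, always_xy=True)

    x, y = to_mercator.transform(-80.0, 35.0)  # longitude, latitude in degrees
    print(x, y)                                # easting, northing in meters

Setting the latitude of true scale to a value other than zero corresponds to the secant form described above.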
Miller Cylindrical Miller Cylindrical is a modification of the Mercator projection (Figure
231). It is similar to the Mercator from the Equator to 45°, but latitude line intervals are modified so that the distance between them increases less rapidly. Thus, beyond 45°, Miller Cylindrical
lessens the extreme exaggeration of the Mercator. Miller Cylindrical
also includes the poles as straight lines whereas the Mercator does
not.
Meridians and parallels are straight lines intersecting at right angles.
Meridians are equidistant, while parallels are spaced farther apart
the farther they are from the Equator. Miller Cylindrical is not equal-
area, equidistant, or conformal. Miller Cylindrical is used for world
maps and in several atlases.
Prompts
The following prompts display in the Projection Chooser if Miller
Cylindrical is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Table 88: Miller Cylindrical Summary
Construction Cylinder
Property Compromise
Meridians All meridians are straight lines.
Parallels All parallels are straight lines.
Graticule spacing Meridians are parallel and equally spaced, the lines of latitude are parallel, and the distance between them increases toward the poles. Both poles are represented as straight lines. Meridians and parallels intersect at right angles (Environmental Systems Research Institute, 1992).
Linear scale While the standard parallels (lines that are true to scale and free of distortion) are at latitudes 45°N and S, only the Equator is standard.
Uses Useful for world maps.
Enter a value for the longitude of the desired central meridian to
center the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
Figure 231: Miller Cylindrical Projection
This projection resembles the Mercator, but has less distortion in
polar regions. Miller Cylindrical is neither conformal nor equal-area.
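A comparable illustrative sketch for Miller Cylindrical, again assuming the open-source pyproj library and hypothetical parameter values:

    from pyproj import CRS, Transformer

    # Hypothetical Miller Cylindrical parameters: central meridian at Greenwich,
    # zero false easting/northing, spherical earth model.
    miller = CRS.from_proj4("+proj=mill +lon_0=0 +x_0=0 +y_0=0 +ellps=sphere +units=m")
    to_miller = Transformer.from_crs("EPSG:4326", miller, always_xy=True)

    print(to_miller.transform(100.0, 60.0))  # (easting, northing) in meters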
Modified Transverse
Mercator
In 1972, the USGS devised a projection specifically for the revision
of a 1954 map of Alaska which, like its predecessors, was based on
the Polyconic projection. This projection was drawn to a scale of
1:2,000,000 and published at 1:2,500,000 (map E) and
1:1,584,000 (map B). Graphically prepared by adapting
coordinates for the UTM projection, it is identified as the Modified
Transverse Mercator projection. It resembles the Transverse
Mercator in a very limited manner and cannot be considered a
cylindrical projection. It resembles the Equidistant Conic projection
for the ellipsoid in actual construction. The projection was also used
in 1974 for a base map of the Aleutian-Bering Sea Region published
at 1:2,500,000 scale.
It is found to be most closely equivalent to the Equidistant Conic for
the Clarke 1866 ellipsoid, with the scale along the meridians reduced
to 0.9992 of true scale, and the standard parallels at latitudes 66.09°N and 53.50°N.
Prompts
The following prompts display in the Projection Chooser if Modified
Transverse Mercator is selected. Respond to the prompts as
described.
False easting
False northing
Table 89: Modified Transverse Mercator Summary
Construction Cone
Property Equidistant
Meridians On pre-1973 editions of the Alaska Map E, meridians are curved concave toward the center of the projection. On post-1973 editions, the meridians are straight.
Parallels Parallels are arcs concave to the pole.
Graticule spacing Meridian spacing is approximately equal and decreases toward the pole. Parallels are approximately equally spaced. The graticule is symmetrical on post-1973 editions of the Alaska Map E.
Linear scale Linear scale is more nearly correct along the meridians than along the parallels.
Uses USGS's Alaska Map E at the scale of 1:2,500,000. The Bathymetric Maps Eastern Continental Margin U.S.A., published by the American Association of Petroleum Geologists, uses the straight meridians on its Modified Transverse Mercator and is more equivalent to the Equidistant Conic map projection.
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
Mollweide Carl B. Mollweide designed the projection in 1805. It is an equal-area
pseudocylindrical projection. The Mollweide projection is used
primarily for thematic maps of the world.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once
Mollweide is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value for longitude of the desired central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 90: Mollweide Summary
Construction Pseudocylinder
Property Equal-area
Meridians Meridians are elliptical arcs that are equally spaced. The exception is the central meridian, which is a straight line.
Parallels All parallels are straight lines.
Graticule spacing The Equator and the central meridian are linear graticules.
Linear scale Scale is accurate along latitudes 40°44' N and S at the central meridian. Distortion becomes more pronounced farther from these lines, and is severe at the extremes of the projection.
Uses Use for world maps only.
Figure 232: Mollweide Projection
Source: Snyder and Voxland, 1989
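The Mollweide prompts map onto a very small parameter set. The sketch below is an illustration under the same assumptions as the earlier examples (open-source pyproj library, hypothetical values):

    from pyproj import CRS, Transformer

    # Hypothetical Mollweide parameters: central meridian 0, no false easting/northing.
    mollweide = CRS.from_proj4("+proj=moll +lon_0=0 +x_0=0 +y_0=0 +ellps=sphere +units=m")
    to_mollweide = Transformer.from_crs("EPSG:4326", mollweide, always_xy=True)

    print(to_mollweide.transform(-90.0, 45.0))  # (easting, northing) in meters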
New Zealand Map
Grid
This projection is used only for mapping New Zealand.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once New
Zealand Map Grid is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
The Spheroid Name defaults to International 1909. The Datum Name
defaults to Geodetic Datum 1949. These fields are not editable.
Easting Shift
Northing Shift
The Easting and Northing shifts are reported in meters.
Table 91: New Zealand Map Grid Summary
Construction Modified cylindrical
Property Conformal
Meridians N/A
Parallels N/A
Graticule spacing None
Linear scale Scale is within 0.02 percent of actual scale for the
country of New Zealand.
Uses This projection is useful only for maps of New
Zealand.
Oblated Equal Area
Prompts
The following prompts display in the Projection Chooser once
Oblated Equal Area is selected. Respond to the prompts as
described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Parameter M
Parameter N
Enter parameters M and N, which define the shape of the Oblated Equal Area oval.
Longitude of center of projection
Latitude of center of projection
Enter the longitude and latitude of the center of the projection.
Rotation angle
Enter the rotation angle of the Oblated Equal Area oval.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Oblique Mercator
(Hotine)
Oblique Mercator is a cylindrical, conformal projection that intersects
the global surface along a great circle. It is equivalent to a Mercator
projection that has been altered by rotating the cylinder so that the
central line of the projection is a great circle path instead of the
Equator. Shape is true only within any small area. Areal enlargement
increases away from the line of tangency. Projection is reasonably
accurate within a 15° band along the line of tangency.
The USGS uses the Hotine version of Oblique Mercator. The Hotine
version is based on a study of conformal projections published by
British geodesist Martin Hotine in 1946-47. Prior to the
implementation of the Space Oblique Mercator, the Hotine version
was used for mapping Landsat satellite imagery.
Prompts
The following prompts display in the Projection Chooser if Oblique
Mercator (Hotine) is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Table 92: Oblique Mercator (Hotine) Summary
Construction Cylinder
Property Conformal
Meridians Meridians are complex curves concave toward the line of tangency, except each 180th meridian is straight.
Parallels Parallels are complex curves concave toward the nearest pole.
Graticule spacing Graticule spacing increases away from the line of tangency and retains the property of conformality.
Linear scale Linear scale is true along the line of tangency, or along two lines of equidistance from and parallel to the line of tangency.
Uses Useful for plotting linear configurations that are situated along a line oblique to the Earth's Equator. Examples are: NASA Surveyor Satellite tracking charts, ERTS flight indexes, strip charts for navigation, and the National Geographic Society's maps "West Indies," "Countries of the Caribbean," "Hawaii," and "New Zealand."
The list of available spheroids is located in Table 65 on
page 490.
Scale factor at center
Designate the desired scale factor along the central line of the
projection. This parameter may be used to modify scale distortion
away from this central line. A value of 1.0 indicates true scale only
along the central line. A value of less than, but close to one is often
used to lessen scale distortion away from the central line.
Latitude of point of origin
False easting
False northing
The center of the projection is defined by rectangular coordinates of
false easting and false northing. The origin of rectangular
coordinates on this projection occurs at the nearest intersection of
the central line with the Earth's Equator. To shift the origin to the
intersection of the latitude of the origin entered above and the
central line of the projection, compute coordinates of the latter point
with zero false eastings and northings, reverse the signs of the
coordinates obtained, and use these for false eastings and northings.
These values must be in meters.
It is often convenient to add additional values so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Do you want to enter either:
A) Azimuth East of North for central line and the
longitude of the point of origin
B) The latitude and longitude of the first and second
points defining the central line
These formats differ slightly in definition of the central line of the
projection.
Format A
For format A, the additional prompts are:
Azimuth east of north for central line
Longitude of point of origin
Format A defines the central line of the projection by the angle east
of north to the desired great circle path and by the latitude and
longitude of the point along the great circle path from which the
angle is measured. Appropriate values should be entered.
Format B
For format B, the additional prompts are:
Longitude of 1st point
Latitude of 1st point
Longitude of 2nd point
Latitude of 2nd point
Format B defines the central line of the projection by the latitude of
a point on the central line which has the desired scale factor entered
previously and by the longitude and latitude of two points along the
desired great circle path. Appropriate values should be entered.
Figure 233: Oblique Mercator Projection
Source: Snyder and Voxland, 1989
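For Format A, the angle east of north, the point of origin, and the scale factor roughly correspond to the alpha, lat_0/lonc, and k_0 parameters of PROJ's omerc projection. The following sketch is an illustration only, assuming the open-source pyproj library and hypothetical parameter values.

    from pyproj import CRS, Transformer

    # Hypothetical Format A parameters: point of origin at 40 N, 75 W; central line
    # azimuth 30 degrees east of north; scale factor 0.9996; zero false offsets.
    hotine = CRS.from_proj4(
        "+proj=omerc +lat_0=40 +lonc=-75 +alpha=30 +k_0=0.9996 "
        "+x_0=0 +y_0=0 +ellps=clrk66 +units=m"
    )
    to_hotine = Transformer.from_crs("EPSG:4326", hotine, always_xy=True)

    print(to_hotine.transform(-74.0, 41.0))  # (easting, northing) in meters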
Orthographic The Orthographic projection is geometrically based on a plane
tangent to the Earth, and the point of projection is at infinity (Figure
234). The Earth appears as it would from outer space. Light rays that
cast the projection are parallel and intersect the tangent plane at
right angles. This projection is a truly graphic representation of the
Earth, and is a projection in which distortion becomes a visual aid. It
is the most familiar of the azimuthal map projections. Directions
from the center of the projection are true.
This projection is limited to one hemisphere and shrinks those areas
toward the periphery. In the polar aspect, latitude ring intervals
decrease from the center outwards at a much greater rate than with
Lambert Azimuthal. In the equatorial aspect, the central meridian
and parallels are straight, with spaces closing up toward the outer
edge.
The Orthographic projection seldom appears in atlases. Its utility is
more pictorial than technical. Orthographic has been used as a basis
for maps by Rand McNally and the USGS.
Table 93: Orthographic Summary
Construction Plane
Property Compromise
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are ellipses, concave toward the center of the projection. Equatorial aspect: the meridians are ellipses concave toward the straight central meridian.
Parallels Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are ellipses concave toward the poles. Equatorial aspect: the parallels are straight and parallel.
Graticule spacing Polar aspect: meridian spacing is equal; parallel spacing decreases away from the point of tangency. Oblique and equatorial aspects: the graticule spacing decreases away from the center of the projection.
Linear scale Scale is true on the parallels in the polar aspect and on all circles centered at the pole of the projection in all aspects. Scale decreases along lines radiating from the center of the projection.
Uses USGS uses the Orthographic map projection in the National Atlas.
Prompts
The following prompts display in the Projection Chooser if
Orthographic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Define the center of the map projection in both spherical and
rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of
the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west. Three views of the Orthographic projection are shown in Figure 234: A) Polar aspect; B) Equatorial aspect; and C) Oblique aspect, centered at 40°N and showing the classic globe-like view.
Figure 234: Orthographic Projection
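As with the other projections, an illustrative Orthographic definition can be written in PROJ terms; the sketch below assumes the open-source pyproj library and hypothetical center coordinates. Points on the hemisphere facing away from the center of projection cannot be represented.

    from pyproj import CRS, Transformer

    # Hypothetical Orthographic parameters: center of projection at 40 N, 100 W,
    # zero false easting/northing, spherical earth model.
    ortho = CRS.from_proj4(
        "+proj=ortho +lat_0=40 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m"
    )
    to_ortho = Transformer.from_crs("EPSG:4326", ortho, always_xy=True)

    print(to_ortho.transform(-96.0, 42.0))  # (easting, northing) in meters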
Plate Carre The parameters for the Plate Carre projection are identical to those of the Equirectangular projection.
For more information, see Equirectangular (Plate Carre) on
page 555.
Prompts
The following prompts display in the Projection Chooser if Plate
Carre is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Latitude of true scale
Enter a value for longitude of the desired central meridian to center
the projection and the latitude of true scale.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 235: Plate Carre Projection
Source: Snyder and Voxland, 1989
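Because Plate Carre shares its parameters with the Equirectangular projection, an illustrative PROJ definition uses the eqc projection name; pyproj and all values below are assumptions for this example.

    from pyproj import CRS, Transformer

    # Hypothetical Plate Carre parameters: central meridian 0, latitude of true
    # scale 0, zero false easting/northing.
    plate_carree = CRS.from_proj4(
        "+proj=eqc +lon_0=0 +lat_ts=0 +x_0=0 +y_0=0 +ellps=sphere +units=m"
    )
    to_plate_carree = Transformer.from_crs("EPSG:4326", plate_carree, always_xy=True)

    print(to_plate_carree.transform(30.0, 30.0))  # (easting, northing) in meters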
Polar Stereographic The Polar Stereographic may be used to accommodate all regions not included in the UTM coordinate system: regions north of 84°N and south of 80°S. This form is called Universal Polar Stereographic (UPS).
The projection is equivalent to the polar aspect of the Stereographic
projection on a spheroid. The central point is either the North Pole or
the South Pole. Of all the polar aspect planar projections, this is the
only one that is conformal.
The point of tangency is a single point: either the North Pole or the
South Pole. If the plane is secant instead of tangent, the point of
global contact is a line of latitude (Environmental Systems Research
Institute, 1992).
Polar Stereographic is an azimuthal projection obtained by projecting
from the opposite pole (Figure 236). All of either the northern or
southern hemispheres can be shown, but not both. This projection
produces a circular map with one of the poles at the center.
Polar Stereographic stretches areas toward the periphery, and scale
increases for areas farther from the central pole. Meridians are
straight and radiating; parallels are concentric circles. Even though
scale and area are not constant with Polar Stereographic, this
projection, like all stereographic projections, possesses the property
of conformality.
The Astrogeology Center of the Geological Survey at Flagstaff,
Arizona, has been using the Polar Stereographic projection for the
mapping of polar areas of every planet and satellite for which there
is sufficient information.
Table 94: Polar Stereographic Summary
Construction Plane
Property Conformal
Meridians Meridians are straight.
Parallels Parallels are concentric circles.
Graticule spacing The distance between parallels increases with distance from the central pole.
Linear scale The scale increases with distance from the center. If a standard parallel is chosen rather than one of the poles, this latitude represents the true scale, and the scale nearer the pole is reduced.
Uses Polar regions (conformal). In the Universal Polar Stereographic (UPS) system, the scale factor at the pole is made 0.994, thus making the standard parallel (latitude of true scale) approximately 81°07' N or S.
Prompts
The following prompts display in the Projection Chooser if Polar
Stereographic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Define the origin of the map projection in both spherical and
rectangular coordinates. Ellipsoid projections of the polar regions
normally use the International 1909 spheroid (Environmental
Systems Research Institute, 1992).
Longitude below pole of map
Enter a value for longitude directed straight down below the pole for
a north polar aspect, or straight up from the pole for a south polar
aspect. This is equivalent to centering the map with a desired
meridian.
Latitude of true scale
Enter a value for latitude at which true scale is desired. For secant
projections, specify the latitude of true scale as any line of latitude
other than 90°N or S. For tangential projections, specify the latitude of true scale as the North Pole, 90° 00' 00", or the South Pole, -90° 00' 00" (Environmental Systems Research Institute, 1992).
False easting
False northing
Enter values of false easting and false northing corresponding to the
pole. These values must be in meters. It is often convenient to make
them large enough to prevent negative coordinates within the region
of the map projection. That is, the origin of the rectangular
coordinate system should fall outside of the map projection to the
south and west. This projection is conformal and is the most scientific
projection for polar regions.
Figure 236: Polar Stereographic Projection and its Geometric
Construction
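A north-polar example in the same illustrative style (open-source pyproj library, hypothetical values; the latitude of true scale shown approximates the UPS value discussed above):

    from pyproj import CRS, Transformer

    # Hypothetical north-polar Stereographic parameters: longitude below the pole 0,
    # latitude of true scale about 81.1 N, zero false offsets, International spheroid.
    polar_stereo = CRS.from_proj4(
        "+proj=stere +lat_0=90 +lat_ts=81.1 +lon_0=0 +x_0=0 +y_0=0 +ellps=intl +units=m"
    )
    to_polar = Transformer.from_crs("EPSG:4326", polar_stereo, always_xy=True)

    print(to_polar.transform(45.0, 85.0))  # (easting, northing) in meters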
Polyconic Polyconic was developed in 1820 by Ferdinand Hassler specifically
for mapping the eastern coast of the US (Figure 237). Polyconic
projections are made up of an infinite number of conic projections
tangent to an infinite number of parallels. These conic projections
are placed in relation to a central meridian. Polyconic projections
compromise properties such as equal-area and conformality,
although the central meridian is held true to scale.
This projection is used mostly for north-south oriented maps.
Distortion increases greatly the farther east and west an area is from
the central meridian.
Prompts
The following prompts display in the Projection Chooser if Polyconic
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Table 95: Polyconic Summary
Construction Cone
Property Compromise
Meridians The central meridian is a straight line, but all other meridians are complex curves.
Parallels Parallels (except the Equator) are nonconcentric circular arcs. The Equator is a straight line.
Graticule spacing All parallels are arcs of circles, but not concentric. All meridians, except the central meridian, are concave toward the central meridian. Parallels cross the central meridian at equal intervals but get farther apart at the east and west peripheries.
Linear scale The scale along each parallel and along the central meridian of the projection is accurate. It increases along the meridians as the distance from the central meridian increases (Environmental Systems Research Institute, 1992).
Uses Used for 7.5-minute and 15-minute topographic USGS quad sheets, from 1886 to about 1957 (Environmental Systems Research Institute, 1992). Used almost exclusively in slightly modified form for large-scale mapping in the United States until the 1950s.
Define the origin of the map projection in both spherical and
rectangular coordinates.
Longitude of central meridian
Latitude of origin of projection
Enter values for longitude of the desired central meridian and
latitude of the origin of projection.
False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the
intersection of the central meridian and the latitude of the origin of
projection. These values must be in meters. It is often convenient to
make them large enough so that no negative coordinates occur
within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map
projection to the south and west.
Figure 237: Polyconic Projection of North America
In Figure 237, the central meridian is 100°W. This projection is used
by the USGS for topographic quadrangle maps.
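An illustrative Polyconic definition, under the same assumptions (open-source pyproj library, hypothetical central meridian, origin, and spheroid):

    from pyproj import CRS, Transformer

    # Hypothetical Polyconic parameters: central meridian 100 W, latitude of origin
    # 40 N, zero false easting/northing, Clarke 1866 spheroid.
    polyconic = CRS.from_proj4(
        "+proj=poly +lon_0=-100 +lat_0=40 +x_0=0 +y_0=0 +ellps=clrk66 +units=m"
    )
    to_polyconic = Transformer.from_crs("EPSG:4326", polyconic, always_xy=True)

    print(to_polyconic.transform(-95.0, 45.0))  # (easting, northing) in meters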
Quartic Authalic
Outer meridians at high latitudes have great distortion. If the Quartic
Authalic projection is interrupted, distortion can be reduced.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Quartic
Authalic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter the value of the longitude of the central meridian.
False easting
False northing
Table 96: Quartic Authalic
Construction Pseudocylindrical
Property Equal-area
Meridians The central meridian is a straight line 0.45 times as long as the Equator. The other meridians are equally spaced curves; they fit a fourth-order (quartic) equation and are concave toward the central meridian (Snyder and Voxland, 1989).
Parallels Parallels are straight parallel lines that are unequally spaced. The distance between parallels is greatest near the Equator. Parallel spacing changes slowly, and parallels are perpendicular to the central meridian.
Graticule spacing See Meridians and Parallels. Poles are points.
Symmetry exists about the central meridian or the
Equator.
Linear scale Scale is accurate along the Equator. Scale is
constant along each latitude, and is the same for
the latitude of opposite sign.
Uses The McBryde-Thomas Flat-Polar Quartic projection
uses Quartic Authalic as its base (Snyder and
Voxland, 1989). Used for world maps.
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 238: Quartic Authalic Projection
Source: Snyder and Voxland, 1989
Robinson According to ESRI, the Robinson central meridian is a straight line
0.51 times the length of the Equator. Parallels are equally spaced
straight lines between 38°N and S; spacing decreases beyond these
limits. The poles are 0.53 times the length of the Equator. The
projection is based upon tabular coordinates instead of mathematical
formulas (Environmental Systems Research Institute, 1997).
This projection has been used both by Rand McNally and the National
Geographic Society.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once
Robinson is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter the value of the longitude of the central meridian.
False easting
False northing
Table 97: Robinson Summary
Construction Pseudocylinder
Property Neither conformal nor equal-area
Meridians Meridians are equally spaced, concave toward the central meridian, and resemble elliptical arcs (Environmental Systems Research Institute, 1997).
Parallels Parallels are equally spaced straight lines between 38°N and S.
Graticule spacing The central meridian and all parallels are linear.
Linear scale Scale is true along latitudes 38°N and S. Scale is constant along any specific latitude, and for the latitude of opposite sign.
Uses Useful for thematic and common world maps.
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Figure 239: Robinson Projection
Source: Snyder and Voxland, 1989
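An illustrative Robinson definition (again assuming the open-source pyproj library and hypothetical values); only the central meridian and false offsets are required, consistent with the prompts above.

    from pyproj import CRS, Transformer

    # Hypothetical Robinson parameters: central meridian at Greenwich, no false offsets.
    robinson = CRS.from_proj4("+proj=robin +lon_0=0 +x_0=0 +y_0=0 +ellps=sphere +units=m")
    to_robinson = Transformer.from_crs("EPSG:4326", robinson, always_xy=True)

    print(to_robinson.transform(-60.0, -30.0))  # (easting, northing) in meters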
RSO The acronym RSO stands for Rectified Skewed Orthomorphic. This
projection is used to map areas of Brunei and Malaysia, and is each country's national projection.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once RSO is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
RSO Type
Select the RSO Type. You can choose from Borneo or Malaysia.
Table 98: RSO Summary
Construction Cylinder
Property Conformal
Meridians Two meridians are 180 degrees apart.
Parallels N/A
Graticule spacing Graticules are two meridians 180 degrees apart.
Linear scale A line of true scale is drawn at an angle to the
central meridian (Environmental Systems Research
Institute, 1997).
Uses This projection should be used to map areas of
Brunei and Malaysia.
Sinusoidal Sometimes called the Sanson-Flamsteed, Sinusoidal is a projection with some characteristics of a cylindrical projection; it is often called a pseudocylindrical type. The central meridian is the only straight meridian; all others become sinusoidal curves. All parallels are
straight and the correct length. Parallels are also the correct distance
from the Equator, which, for a complete world map, is twice as long
as the central meridian.
Sinusoidal maps achieve the property of equal-area but not
conformality. The Equator and central meridian are distortion free,
but distortion becomes pronounced near outer meridians, especially
in polar regions.
Interrupting a Sinusoidal world or hemisphere map can lessen
distortion. The interrupted Sinusoidal contains less distortion
because each interrupted area can be constructed to contain a
separate central meridian. Central meridians may be different for the
northern and southern hemispheres and may be selected to
minimize distortion of continents or oceans.
Sinusoidal is particularly suited for less than world areas, especially
those bordering the Equator, such as South America or Africa.
Sinusoidal is also used by the USGS as a base map for showing
prospective hydrocarbon provinces and sedimentary basins of the
world.
Table 99: Sinusoidal Summary
Construction Pseudocylinder
Property Equal-area
Meridians Meridians are sinusoidal curves, curved toward a straight central meridian.
Parallels All parallels are straight, parallel lines.
Graticule spacing Meridian spacing is equal and decreases toward the poles. Parallel spacing is equal. The graticule spacing retains the property of equivalence of area.
Linear scale Linear scale is true on the parallels and the central meridian.
Uses Used as an equal-area projection to portray areas that have a maximum extent in a north-south direction. Used as a world equal-area projection in atlases to show distribution patterns. Used by the USGS as the base for maps showing prospective hydrocarbon provinces of the world, and sedimentary basins of the world.
Prompts
The following prompts display in the Projection Chooser if Sinusoidal
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value for the longitude of the desired central meridian to
center the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
Figure 240: Sinusoidal Projection
Source: Snyder and Voxland, 1989
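Because the spherical Sinusoidal has a particularly simple form, the illustrative sketch below (open-source pyproj library, hypothetical values) also evaluates the standard spherical forward equations for comparison.

    import math

    from pyproj import CRS, Transformer

    # Hypothetical Sinusoidal parameters: central meridian 20 E, no false offsets.
    sinusoidal = CRS.from_proj4("+proj=sinu +lon_0=20 +x_0=0 +y_0=0 +ellps=sphere +units=m")
    to_sinusoidal = Transformer.from_crs("EPSG:4326", sinusoidal, always_xy=True)

    x, y = to_sinusoidal.transform(30.0, -10.0)  # longitude, latitude in degrees

    # Standard spherical forward equations: x = R*(lon - lon_0)*cos(lat), y = R*lat,
    # with angles in radians and R the sphere radius (6370997 m for +ellps=sphere).
    R = 6370997.0
    x_eq = R * math.radians(30.0 - 20.0) * math.cos(math.radians(-10.0))
    y_eq = R * math.radians(-10.0)
    print(x, y, x_eq, y_eq)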
Space Oblique
Mercator
The Space Oblique Mercator (SOM) projection is nearly conformal
and has little scale distortion within the sensing range of an orbiting
mapping satellite such as Landsat. It is the first projection to
incorporate the Earth's rotation with respect to the orbiting satellite.
The method of projection used is the modified cylindrical, for which
the central line is curved and defined by the groundtrack of the orbit
of the satellite. The line of tangency is conceptual and there are no
graticules.
The SOM projection is defined by USGS. According to USGS, the X
axis passes through the descending node for each daytime scene.
The Y axis is perpendicular to the X axis, to form a Cartesian
coordinate system. The direction of the X axis in a daytime Landsat
scene is in the direction of the satellite motion, which is south. The Y axis is
directed east. For SOM projections used by EOSAT, the axes are
switched; the X axis is directed east and the Y axis is directed south.
The SOM projection is specifically designed to minimize distortion
within sensing range of a mapping satellite as it orbits the Earth. It
can be used for the rectification of, and continuous mapping from,
satellite imagery. It is the standard format for data from Landsats 4
and 5. Plots for adjacent paths do not match without transformation
(Environmental Systems Research Institute, 1991).
Table 100: Space Oblique Mercator Summary
Construction Cylinder
Property Conformal
Meridians All meridians are curved lines except for the meridian crossed by the groundtrack at each polar approach.
Parallels All parallels are curved lines.
Graticule spacing There are no graticules.
Linear scale Scale is true along the groundtrack, and varies approximately 0.01% within sensing range (Environmental Systems Research Institute, 1992).
Uses Used for georectification of, and continuous mapping from, satellite imagery. Standard format for data from Landsats 4 and 5 (Environmental Systems Research Institute, 1992).
Prompts
The following prompts display in the Projection Chooser if Space
Oblique Mercator is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Landsat vehicle ID (1-5)
Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Orbital path number (1-251 or 1-233)
For Landsats 1, 2, and 3, the path range is from 1 to 251. For
Landsats 4 and 5, the path range is from 1 to 233.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
Figure 241: Space Oblique Mercator Projection
Source: Snyder and Voxland, 1989
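The open-source PROJ library includes a Landsat-specific form of this projection under the name lsat, which takes the satellite number and path directly. The sketch below is an illustration only, assuming the pyproj library and hypothetical satellite and path values; it is not the Projection Chooser's own implementation.

    from pyproj import CRS, Transformer

    # Hypothetical Space Oblique Mercator (Landsat) parameters: Landsat 5, path 123.
    som = CRS.from_proj4("+proj=lsat +lsat=5 +path=123 +ellps=GRS80 +units=m")
    to_som = Transformer.from_crs("EPSG:4326", som, always_xy=True)

    print(to_som.transform(-75.0, 40.0))  # (x, y) in meters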
Space Oblique
Mercator (Formats A
& B)
The Space Oblique Mercator (Formats A&B) projection is similar to
the Space Oblique Mercator projection.
For more information, see Space Oblique Mercator on page
608.
Prompts
The following prompts display in the Projection Chooser once Space
Oblique Mercator (Formats A & B) is selected. Respond to the
prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Format A (Generic Satellite)
Inclination of orbit at ascending node
Period of satellite revolution in minutes
Longitude of ascending orbit at equator
Landsat path flag
If you select Format A of the Space Oblique Mercator projection, you
need to supply the information listed above.
Format B (Landsat)
Landsat vehicle ID (1-5)
Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Path number (1-251 or 1-233)
For Landsats 1, 2, and 3, the path range is from 1 to 251. For
Landsats 4 and 5, the path range is from 1 to 233.
State Plane The State Plane is an X,Y coordinate system (not a map projection);
its zones divide the US into over 130 sections, each with its own
projection surface and grid network (Figure 242). With the exception
of very narrow states, such as Delaware, New Jersey, and New
Hampshire, most states are divided into between two and ten zones.
The Lambert Conformal projection is used for zones extending
mostly in an east-west direction. The Transverse Mercator projection
is used for zones extending mostly in a north-south direction. Alaska,
Florida, and New York use either Transverse Mercator or Lambert
Conformal for different areas. The southeastern panhandle of Alaska is mapped on the Oblique Mercator projection.
Zone boundaries follow state and county lines, and, because each
zone is small, distortion is less than one in 10,000. Each zone has a
centrally located origin and a central meridian that passes through
this origin. Two zone numbering systems are currently in use: the
USGS code system and the National Ocean Service (NOS) code
system (Table 101 on page 612 and Table 102 on page 616), but
other numbering systems exist.
Prompts
The following prompts appear in the Projection Chooser if State Plane
is selected. Respond to the prompts as described.
State Plane Zone
Enter either the USGS zone code number as a positive value, or the
NOS zone code number as a negative value.
NAD27 or NAD83 or HARN
Either North America Datum 1927 (NAD27), North America Datum
1983 (NAD83), or High Accuracy Reference Network (HARN) may be
used to perform the State Plane calculations.
NAD27 is based on the Clarke 1866 spheroid.
NAD83 and HARN are based on the GRS 1980 spheroid. Some
zone numbers have been changed or deleted from NAD27.
Tables for both NAD27 and NAD83 zone numbers follow (Table 101
on page 612 and Table 102 on page 616). These tables include both
USGS and NOS code systems.
Figure 242: Zones of the State Plane Coordinate System
The following abbreviations are used in Table 101 on page 612 and
Table 102 on page 616:
Tr Merc = Transverse Mercator
Lambert = Lambert Conformal Conic
Oblique = Oblique Mercator (Hotine)
Polycon = Polyconic
Table 101: NAD27 State Plane Coordinate System for the United States
State Zone Name Type USGS Code NOS Code
Alabama East Tr Merc 3101 -101
West Tr Merc 3126 -102
Alaska 1 Oblique 6101 -5001
2 Tr Merc 6126 -5002
3 Tr Merc 6151 -5003
4 Tr Merc 6176 -5004
5 Tr Merc 6201 -5005
6 Tr Merc 6226 -5006
7 Tr Merc 6251 -5007
8 Tr Merc 6276 -5008
9 Tr Merc 6301 -5009
10 Lambert 6326 -5010
American Samoa ------- Lambert ------ -5302
Arizona East Tr Merc 3151 -201
Central Tr Merc 3176 -202
West Tr Merc 3201 -203
Arkansas North Lambert 3226 -301
South Lambert 3251 -302
California I Lambert 3276 -401
II Lambert 3301 -402
III Lambert 3326 -403
IV Lambert 3351 -404
V Lambert 3376 -405
VI Lambert 3401 -406
VII Lambert 3426 -407
Colorado North Lambert 3451 -501
Central Lambert 3476 -502
South Lambert 3501 -503
Connecticut -------- Lambert 3526 -600
Delaware -------- Tr Merc 3551 -700
District of Columbia Use Maryland or Virginia North
Florida East Tr Merc 3601 -901
West Tr Merc 3626 -902
North Lambert 3576 -903
Georgia East Tr Merc 3651 -1001
West Tr Merc 3676 -1002
Guam ------- Polycon ------- -5400
Hawaii 1 Tr Merc 5876 -5101
2 Tr Merc 5901 -5102
3 Tr Merc 5926 -5103
4 Tr Merc 5951 -5104
5 Tr Merc 5976 -5105
Idaho East Tr Merc 3701 -1101
Central Tr Merc 3726 -1102
West Tr Merc 3751 -1103
Illinois East Tr Merc 3776 -1201
West Tr Merc 3801 -1202
Indiana East Tr Merc 3826 -1301
West Tr Merc 3851 -1302
Iowa North Lambert 3876 -1401
South Lambert 3901 -1402
Kansas North Lambert 3926 -1501
South Lambert 3951 -1502
Kentucky North Lambert 3976 -1601
South Lambert 4001 -1602
Louisiana North Lambert 4026 -1701
South Lambert 4051 -1702
Offshore Lambert 6426 -1703
Maine East Tr Merc 4076 -1801
West Tr Merc 4101 -1802
Maryland ------- Lambert 4126 -1900
Massachusetts Mainland Lambert 4151 -2001
Island Lambert 4176 -2002
Michigan (Tr Merc) East Tr Merc 4201 -2101
Central Tr Merc 4226 -2102
West Tr Merc 4251 -2103
Michigan (Lambert) North Lambert 6351 -2111
Central Lambert 6376 -2112
South Lambert 6401 -2113
Minnesota North Lambert 4276 -2201
Central Lambert 4301 -2202
South Lambert 4326 -2203
Mississippi East Tr Merc 4351 -2301
West Tr Merc 4376 -2302
Missouri East Tr Merc 4401 -2401
Central Tr Merc 4426 -2402
West Tr Merc 4451 -2403
Montana North Lambert 4476 -2501
Central Lambert 4501 -2502
South Lambert 4526 -2503
Nebraska North Lambert 4551 -2601
South Lambert 4576 -2602
Nevada East Tr Merc 4601 -2701
Central Tr Merc 4626 -2702
West Tr Merc 4651 -2703
New Hampshire --------- Tr Merc 4676 -2800
New Jersey --------- Tr Merc 4701 -2900
New Mexico East Tr Merc 4726 -3001
Central Tr Merc 4751 -3002
West Tr Merc 4776 -3003
New York East Tr Merc 4801 -3101
Central Tr Merc 4826 -3102
West Tr Merc 4851 -3103
Long Island Lambert 4876 -3104
North Carolina -------- Lambert 4901 -3200
North Dakota North Lambert 4926 -3301
South Lambert 4951 -3302
Ohio North Lambert 4976 -3401
South Lambert 5001 -3402
Oklahoma North Lambert 5026 -3501
South Lambert 5051 -3502
Oregon North Lambert 5076 -3601
South Lambert 5101 -3602
Pennsylvania North Lambert 5126 -3701
South Lambert 5151 -3702
Puerto Rico -------- Lambert 6001 -5201
Rhode Island -------- Tr Merc 5176 -3800
South Carolina North Lambert 5201 -3901
South Lambert 5226 -3902
South Dakota North Lambert 5251 -4001
South Lambert 5276 -4002
St. Croix --------- Lambert 6051 -5202
Tennessee --------- Lambert 5301 -4100
Texas North Lambert 5326 -4201
North Central Lambert 5351 -4202
Central Lambert 5376 -4203
South Central Lambert 5401 -4204
South Lambert 5426 -4205
Utah North Lambert 5451 -4301
Central Lambert 5476 -4302
South Lambert 5501 -4303
Vermont -------- Tr Merc 5526 -4400
Virginia North Lambert 5551 -4501
South Lambert 5576 -4502
Virgin Islands -------- Lambert 6026 -5201
Washington North Lambert 5601 -4601
South Lambert 5626 -4602
West Virginia North Lambert 5651 -4701
South Lambert 5676 -4702
Wisconsin North Lambert 5701 -4801
Central Lambert 5726 -4802
South Lambert 5751 -4803
Wyoming East Tr Merc 5776 -4901
East Central Tr Merc 5801 -4902
West Central Tr Merc 5826 -4903
West Tr Merc 5851 -4904
Table 102: NAD83 State Plane Coordinate System for the United States
State Zone Name Type USGS Code NOS Code
Alabama East Tr Merc 3101 -101
West Tr Merc 3126 -102
Alaska 1 Oblique 6101 -5001
2 Tr Merc 6126 -5002
3 Tr Merc 6151 -5003
4 Tr Merc 6176 -5004
5 Tr Merc 6201 -5005
6 Tr Merc 6226 -5006
7 Tr Merc 6251 -5007
8 Tr Merc 6276 -5008
9 Tr Merc 6301 -5009
10 Lambert 6326 -5010
Arizona East Tr Merc 3151 -201
Central Tr Merc 3176 -202
West Tr Merc 3201 -203
Arkansas North Lambert 3226 -301
South Lambert 3251 -302
California I Lambert 3276 -401
II Lambert 3301 -402
III Lambert 3326 -403
IV Lambert 3351 -404
V Lambert 3376 -405
VI Lambert 3401 -406
Colorado North Lambert 3451 -501
Central Lambert 3476 -502
South Lambert 3501 -503
Connecticut -------- Lambert 3526 -600
Delaware -------- Tr Merc 3551 -700
District of Columbia Use Maryland or Virginia North
Florida East Tr Merc 3601 -901
West Tr Merc 3626 -902
North Lambert 3576 -903
Georgia East Tr Merc 3651 -1001
West Tr Merc 3676 -1002
Hawaii 1 Tr Merc 5876 -5101
2 Tr Merc 5901 -5102
3 Tr Merc 5926 -5103
4 Tr Merc 5951 -5104
5 Tr Merc 5976 -5105
Idaho East Tr Merc 3701 -1101
Central Tr Merc 3726 -1102
West Tr Merc 3751 -1103
Illinois East Tr Merc 3776 -1201
West Tr Merc 3801 -1202
Indiana East Tr Merc 3826 -1301
West Tr Merc 3851 -1302
Iowa North Lambert 3876 -1401
South Lambert 3901 -1402
Kansas North Lambert 3926 -1501
South Lambert 3951 -1502
Kentucky North Lambert 3976 -1601
South Lambert 4001 -1602
Louisiana North Lambert 4026 -1701
South Lambert 4051 -1702
Offshore Lambert 6426 -1703
Maine East Tr Merc 4076 -1801
West Tr Merc 4101 -1802
Maryland ------- Lambert 4126 -1900
Massachusetts Mainland Lambert 4151 -2001
Island Lambert 4176 -2002
Michigan North Lambert 6351 -2111
Central Lambert 6376 -2112
South Lambert 6401 -2113
Minnesota North Lambert 4276 -2201
Central Lambert 4301 -2202
South Lambert 4326 -2203
Mississippi East Tr Merc 4351 -2301
West Tr Merc 4376 -2302
Missouri East Tr Merc 4401 -2401
Central Tr Merc 4426 -2402
West Tr Merc 4451 -2403
Montana --------- Lambert 4476 -2500
Nebraska --------- Lambert 4551 -2600
Nevada East Tr Merc 4601 -2701
Central Tr Merc 4626 -2702
West Tr Merc 4651 -2703
New Hampshire --------- Tr Merc 4676 -2800
New Jersey --------- Tr Merc 4701 -2900
New Mexico East Tr Merc 4726 -3001
Central Tr Merc 4751 -3002
West Tr Merc 4776 -3003
New York East Tr Merc 4801 -3101
Central Tr Merc 4826 -3102
West Tr Merc 4851 -3103
Long Island Lambert 4876 -3104
North Carolina --------- Lambert 4901 -3200
North Dakota North Lambert 4926 -3301
South Lambert 4951 -3302
Ohio North Lambert 4976 -3401
South Lambert 5001 -3402
Oklahoma North Lambert 5026 -3501
South Lambert 5051 -3502
Oregon North Lambert 5076 -3601
South Lambert 5101 -3602
Pennsylvania North Lambert 5126 -3701
South Lambert 5151 -3702
Puerto Rico --------- Lambert 6001 -5201
Rhode Island --------- Tr Merc 5176 -3800
South Carolina --------- Lambert 5201 -3900
South Dakota North Lambert 5251 -4001
South Lambert 5276 -4002
Tennessee --------- Lambert 5301 -4100
Texas North Lambert 5326 -4201
North Central Lambert 5351 -4202
Central Lambert 5376 -4203
South Central Lambert 5401 -4204
South Lambert 5426 -4205
Utah North Lambert 5451 -4301
Central Lambert 5476 -4302
South Lambert 5501 -4303
Vermont --------- Tr Merc 5526 -4400
Virginia North Lambert 5551 -4501
South Lambert 5576 -4502
Virgin Islands --------- Lambert 6026 -5201
Washington North Lambert 5601 -4601
South Lambert 5626 -4602
West Virginia North Lambert 5651 -4701
South Lambert 5676 -4702
Wisconsin North Lambert 5701 -4801
Central Lambert 5726 -4802
South Lambert 5751 -4803
Wyoming East Tr Merc 5776 -4901
East Central Tr Merc 5801 -4902
West Central Tr Merc 5826 -4903
West Tr Merc 5851 -4904
Stereographic Stereographic is a perspective projection in which points are
projected from a position on the opposite side of the globe onto a
plane tangent to the Earth (Figure 243 on page 623). All of one
hemisphere can easily be shown, but it is impossible to show both
hemispheres in their entirety from one center. It is the only
azimuthal projection that preserves truth of angles and local shape.
Scale increases and parallels become more widely spaced farther
from the center.
In the equatorial aspect, all parallels except the Equator are circular
arcs. In the polar aspect, latitude rings are spaced farther apart, with
increasing distance from the pole.
Table 103: Stereographic Summary
Construction Plane
Property Conformal
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are arcs of circles concave toward a straight central meridian. In the equatorial aspect, the outer meridian of the hemisphere is a circle centered at the projection center.
Parallels Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are nonconcentric arcs of circles concave toward one of the poles, with one parallel being a straight line. Equatorial aspect: parallels are nonconcentric arcs of circles concave toward the poles; the Equator is straight.
Graticule spacing The graticule spacing increases away from the center of the projection in all aspects, and it retains the property of conformality.
Linear scale Scale increases toward the periphery of the projection.
Uses The Stereographic projection is the most widely used azimuthal projection, mainly used for portraying large, continent-sized areas of similar extent in all directions. It is used in geophysics for solving problems in spherical geometry. The polar aspect is used for topographic maps and navigational charts. The American Geographical Society uses this projection as the basis for its Map of the Arctic. The USGS uses it as the basis for maps of Antarctica.
Prompts
The following prompts display in the Projection Chooser if
Stereographic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Define the center of the map projection in both spherical and
rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of
the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
The Stereographic is the only azimuthal projection which is
conformal. Figure 243 shows two views: A) Equatorial aspect, often
used in the 16th and 17th centuries for maps of hemispheres; and
B) Oblique aspect, centered on 40°N.
Figure 243: Stereographic Projection
Stereographic
(Extended)
The Stereographic (Extended) projection has the same attributes as
the Stereographic projection, with the exception of the ability to
define scale factors.
For details about the Stereographic projection, see
Stereographic on page 621.
Prompts
The following prompts display in the Projection Chooser once
Stereographic (Extended) is selected. Respond to the prompts as
described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Scale factor
Designate the desired scale factor. This parameter is used to modify
scale distortion. A value of one indicates true scale only along the
central meridian. It may be desirable to have true scale along two
lines equidistant from and parallel to the central meridian, or to
lessen scale distortion away from the central meridian. A factor of
less than, but close to one is often used.
Longitude of origin of projection
Latitude of origin of projection
Enter the values for longitude of origin of projection and latitude of
origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
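An illustrative oblique Stereographic definition that includes a scale factor, mirroring the Stereographic (Extended) prompts; the use of the open-source pyproj library and the specific values are assumptions for this example.

    from pyproj import CRS, Transformer

    # Hypothetical Stereographic (Extended) parameters: origin at 40 N, 100 W, scale
    # factor 0.9999 at the origin, zero false easting/northing, Clarke 1866 spheroid.
    stereo = CRS.from_proj4(
        "+proj=stere +lat_0=40 +lon_0=-100 +k_0=0.9999 +x_0=0 +y_0=0 +ellps=clrk66 +units=m"
    )
    to_stereo = Transformer.from_crs("EPSG:4326", stereo, always_xy=True)

    print(to_stereo.transform(-98.0, 42.0))  # (easting, northing) in meters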
Transverse Mercator Transverse Mercator is similar to the Mercator projection except that
the axis of the projection cylinder is rotated 90° from the vertical
(polar) axis. The contact line is then a chosen meridian instead of the
Equator, and this central meridian runs from pole to pole. It loses the
properties of straight meridians and straight parallels of the standard
Mercator projection (except for the central meridian, the two
meridians 90° away, and the Equator).
Transverse Mercator also loses the straight rhumb lines of the
Mercator map, but it is a conformal projection. Scale is true along the
central meridian or along two straight lines equidistant from, and
parallel to, the central meridian. It cannot be edge-joined in an east-
west direction if each sheet has its own central meridian.
In the United States, Transverse Mercator is the projection used in
the State Plane coordinate system for states with predominant
north-south extent. The entire Earth from 84°N to 80°S is mapped
with a system of projections called the Universal Transverse
Mercator.
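The spherical form of the projection (following Snyder, 1987) illustrates the geometry described above. The Python sketch below is an approximation only; the ellipsoidal series actually used for State Plane and UTM coordinates carries several additional terms.

import math

def transverse_mercator_forward(lon, lat, lon0, lat0=0.0,
                                radius=6370997.0, k0=1.0):
    # Spherical Transverse Mercator. lon0 is the central meridian and
    # lat0 the latitude of origin, both in decimal degrees. Returns
    # (x, y) in meters, before false easting/northing are applied.
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi0 = math.radians(lon0), math.radians(lat0)
    b = math.cos(phi) * math.sin(lam - lam0)
    x = 0.5 * radius * k0 * math.log((1.0 + b) / (1.0 - b))
    y = radius * k0 * (math.atan2(math.tan(phi), math.cos(lam - lam0)) - phi0)
    return x, y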
Prompts
The following prompts display in the Projection Chooser if Transverse
Mercator is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Table 104: Transverse Mercator Summary
Construction Cylinder
Property Conformal
Meridians Meridians are complex curves concave toward a straight central meridian that is tangent to the globe. The straight central meridian intersects the Equator and one meridian at a 90° angle.
Parallels Parallels are complex curves concave toward the nearest pole; the Equator is straight.
Graticule spacing Parallels are spaced at their true distances on the straight central meridian. Graticule spacing increases away from the tangent meridian. The graticule retains the property of conformality.
Linear scale Linear scale is true along the line of tangency, or along two lines equidistant from, and parallel to, the line of tangency.
Uses Used where the north-south dimension is greater than the east-west dimension. Used as the base for the USGS 1:250,000-scale series, and for some of the 7.5-minute and 15-minute quadrangles of the National Topographic Map Series.
The list of available spheroids is located in Table 65 on
page 490.
Scale factor at central meridian
Designate the desired scale factor at the central meridian. This
parameter is used to modify scale distortion. A value of one indicates
true scale only along the central meridian. It may be desirable to
have true scale along two lines equidistant from and parallel to the
central meridian, or to lessen scale distortion away from the central
meridian. A factor of less than, but close to one is often used.
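On the sphere, the point scale factor grows with distance from the central meridian, which is why a central scale factor slightly below one balances distortion across a zone (the UTM system, for example, uses 0.9996). A rough illustration, using the spherical relation k = k0 / sqrt(1 - cos²φ · sin²(λ - λ0)) from Snyder (1987):

import math

def point_scale(lon, lat, lon0, k0=0.9996):
    # Approximate Transverse Mercator point scale factor on a sphere.
    b = math.cos(math.radians(lat)) * math.sin(math.radians(lon - lon0))
    return k0 / math.sqrt(1.0 - b * b)

# Scale equals k0 on the central meridian, returns to 1.0 along two lines
# to either side, and exceeds 1.0 toward the zone edges.
print(point_scale(lon=-87.0, lat=40.0, lon0=-87.0))   # 0.9996
print(point_scale(lon=-84.5, lat=40.0, lon0=-87.0))   # slightly above 1.0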
Finally, define the origin of the map projection in both spherical and
rectangular coordinates.
Longitude of central meridian
Latitude of origin of projection
Enter values for longitude of the desired central meridian and
latitude of the origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
intersection of the central meridian and the latitude of the origin of
projection. These values must be in meters. It is often convenient to
make them large enough so that there are no negative coordinates
within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map
projection to the south and west.
Two Point Equidistant
The Two Point Equidistant projection is used to show the distance
from either of two chosen points to any other point on a map
(Environmental Systems Research Institute, 1997). Note that the
first point has to be west of the second point. This projection has
been used by the National Geographic Society to map areas of Asia.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Two
Point Equidistant is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Longitude of 1st point
Latitude of 1st point
Table 105: Two Point Equidistant Summary
Construction Modified planar
Property Compromise
Meridians N/A
Parallels N/A
Graticule spacing N/A
Linear scale N/A
Uses The Two Point Equidistant projection does not represent great circle paths (Environmental Systems Research Institute, 1997). There is little distortion if two chosen points are within 45 degrees of each other.
Enter the longitude and latitude values of the first point.
Longitude of 2nd point
Latitude of 2nd point
Enter the longitude and latitude values of the second point.
Figure 244: Two Point Equidistant Projection
Source: Snyder and Voxland, 1989
UTM UTM is an international plane (rectangular) coordinate system
developed by the US Army that extends around the world from 84°N
to 80°S. The world is divided into 60 zones each covering six degrees
longitude. Each zone extends three degrees eastward and three
degrees westward from its central meridian. Zones are numbered
consecutively west to east from the 180° meridian (Figure 245,
Table 106 on page 630).
The Transverse Mercator projection is then applied to each UTM
zone. Transverse Mercator is a transverse form of the Mercator
cylindrical projection. The projection cylinder is rotated 90° from the
vertical (polar) axis and can then be placed to intersect at a chosen
central meridian. The UTM system specifies the central meridian of
each zone. With a separate projection for each UTM zone, a high
degree of accuracy is possible (one part in 1000 maximum distortion
within each zone). If the map to be projected extends beyond the
border of the UTM zone, the entire map may be projected for any UTM zone you specify.
See Transverse Mercator on page 625 for more information.
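Because the zone layout is regular, the zone number and central meridian for any longitude can be computed directly and checked against Table 106. The sketch below is a minimal illustration; it ignores the special zone boundaries conventionally used around Norway and Svalbard.

def utm_zone(lon):
    # UTM zone number (1-60) for a longitude in decimal degrees.
    lon = ((lon + 180.0) % 360.0) - 180.0      # normalize to [-180, 180)
    return int((lon + 180.0) // 6.0) + 1

def utm_central_meridian(zone):
    # Central meridian of a UTM zone, in decimal degrees (negative = west).
    return -183.0 + 6.0 * zone

# Example: 84.4W falls in zone 16, whose central meridian is 87W
# (compare Table 106).
zone = utm_zone(-84.4)
print(zone, utm_central_meridian(zone))        # 16 -87.0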
Prompts
The following prompts display in the Projection Chooser if UTM is
chosen.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
UTM Zone
North or South
Figure 245: Zones of the Universal Transverse Mercator Grid
in the United States
All values in Table 106 are in full degrees east (E) or west (W) of the Greenwich prime meridian (0°).
Table 106: UTM Zones, Central Meridians, and Longitude Ranges
Zone Central Meridian Range Zone Central Meridian Range
1 177W 180W-174W 31 3E 0-6E
2 171W 174W-168W 32 9E 6E-12E
3 165W 168W-162W 33 15E 12E-18E
4 159W 162W-156W 34 21E 18E-24E
5 153W 156W-150W 35 27E 24E-30E
6 147W 150W-144W 36 33E 30E-36E
7 141W 144W-138W 37 39E 36E-42E
8 135W 138W-132W 38 45E 42E-48E
9 129W 132W-126W 39 51E 48E-54E
10 123W 126W-120W 40 57E 54E-60E
11 117W 120W-114W 41 63E 60E-66E
12 111W 114W-108W 42 69E 66E-72E
13 105W 108W-102W 43 75E 72E-78E
14 99W 102W-96W 44 81E 78E-84E
15 93W 96W-90W 45 87E 84E-90E
16 87W 90W-84W 46 93E 90E-96E
17 81W 84W-78W 47 99E 96E-102E
18 75W 78W-72W 48 105E 102E-108E
19 69W 72W-66W 49 111E 108E-114E
20 63W 66W-60W 50 117E 114E-120E
21 57W 60W-54W 51 123E 120E-126E
22 51W 54W-48W 52 129E 126E-132E
23 45W 48W-42W 53 135E 132E-138E
24 39W 42W-36W 54 141E 138E-144E
25 33W 36W-30W 55 147E 144E-150E
26 27W 30W-24W 56 153E 150E-156E
27 21W 24W-18W 57 159E 156E-162E
28 15W 18W-12W 58 165E 162E-168E
29 9W 12W-6W 59 171E 168E-174E
30 3W 6W-0 60 177E 174E-180E
Van der Grinten I The Van der Grinten I projection produces a map that is neither
conformal nor equal-area (Figure 246 on page 633). It compromises
all properties, and represents the Earth within a circle.
All lines are curved except the central meridian and the Equator.
Parallels are spaced farther apart toward the poles. Meridian spacing
is equal at the Equator. Scale is true along the Equator, but increases
rapidly toward the poles, which are usually not represented.
Van der Grinten I avoids the excessive stretching of the Mercator and
the shape distortion of many of the equal-area projections. It has
been used to show distribution of mineral resources on the ocean
floor.
Prompts
The following prompts display in the Projection Chooser if Van der
Grinten I is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Table 107: Van der Grinten I Summary
Construction Miscellaneous
Property Compromise
Meridians Meridians are circular arcs concave toward a straight central meridian.
Parallels Parallels are circular arcs concave toward the poles, except for a straight Equator.
Graticule spacing Meridian spacing is equal at the Equator. The parallels are spaced farther apart toward the poles. The central meridian and Equator are straight lines. The poles are commonly not represented. The graticule spacing results in a compromise of all properties.
Linear scale Linear scale is true along the Equator. Scale increases rapidly toward the poles.
Uses The Van der Grinten projection is used by the National Geographic Society for world maps. Used by the USGS to show distribution of mineral resources on the sea floor.
Enter a value for the longitude of the desired central meridian to
center the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
Figure 246: Van der Grinten I Projection
The Van der Grinten I projection resembles the Mercator, but it is not
conformal.
Wagner IV
The Wagner IV Projection has distortion primarily in the polar
regions.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser if Wagner IV
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value for the longitude of the desired central meridian to
center the projection.
False easting
False northing
Table 108: Wagner IV Summary
Construction Pseudocylinder
Property Equal-area
Meridians The central meridian is a straight line one half as
long as the Equator. The other meridians are
portions of ellipses that are equally spaced. They
are concave towards the central meridian. The
meridians at 103°55' E and W of the central
meridian are circular arcs.
Parallels Parallels are unequally spaced. Parallels have the
widest space between them at the Equator, and are
perpendicular to the central meridian.
Graticule spacing See Meridians and Parallels. Poles are lines one half
as long as the Equator. Symmetry exists around the
central meridian or the Equator.
Linear scale Scale is accurate along latitudes 42°59' N and S.
Scale is constant along any specific latitude as well
as the latitude of opposite sign.
Uses Useful for world maps.
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
Figure 247: Wagner IV Projection
Source: Snyder and Voxland, 1989
Wagner VII The Wagner VII projection is a modification of the Hammer projection. The poles correspond to the 65th parallels on the Hammer projection, and meridians are repositioned (Snyder and Voxland, 1989).
Distortion is prevalent in polar areas.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser if Wagner VII is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Longitude of central meridian
Enter a value for the longitude of the desired central meridian to
center the projection.
False easting
False northing
Table 109: Wagner VII
Construction Modified azimuthal
Property Equal-area
Meridians Central meridian is straight and half the Equator's
length. Other meridians are unequally spaced
curves. They are concave toward the central
meridian.
Parallels The Equator is straight; the other parallels are
curved. Other parallels are unequally spaced
curves, which are concave toward the closest pole.
Graticule spacing See Meridians and Parallels. Poles are curved lines.
Symmetry exists about the central meridian or the
Equator.
Linear scale Scale decreases along the central meridian and the
Equator relative to distance from the center of the
Wagner VII projection.
Uses Used for world maps.
Enter values of false easting and false northing corresponding to the
center of the projection. These values must be in meters. It is often
convenient to make them large enough to prevent negative
coordinates within the region of the map projection. That is, the
origin of the rectangular coordinate system should fall outside of the
map projection to the south and west.
Figure 248: Wagner VII Projection
Source: Snyder and Voxland, 1989
Winkel I
The Winkel I projection is not free of distortion at any point (Snyder
and Voxland, 1989).
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Winkel
I is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The list of available spheroids is located in Table 65 on
page 490.
Latitude of standard parallel
Longitude of central meridian
Enter values of the latitude of standard parallel and the longitude of
central meridian.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Table 110: Winkel I Summary
Construction Pseudocylinder
Property Neither conformal nor equal-area
Meridians Central meridian is a straight line 0.61 times the length of
the Equator. The other meridians are sinusoidal
curves that are equally spaced and concave toward
the central meridian.
Parallels Parallels are equally spaced.
Graticule spacing See Meridians and Parallels. Pole lines are 0.61 times the
length of the Equator. Symmetry exists about the
central meridian or the Equator.
Linear scale Scale is true along latitudes 50°28' N and S. Scale
is constant along any given latitude as well as the
latitude of the opposite sign.
Uses Used for world maps.
Figure 249: Winkel I Projection
Source: Snyder and Voxland, 1989
External Projections The following external projections are supported in ERDAS IMAGINE
and are described in this section. Some of these projections were
discussed in the previous section. Those descriptions are not
repeated here. Simply refer to the page number in parentheses for
more information.
NOTE: ERDAS IMAGINE does not support datum shifts for these
external projections.
Albers Equal Area (see Albers Conical Equal Area on page 527)
Azimuthal Equidistant (see Azimuthal Equidistant on page 530)
Bipolar Oblique Conic Conformal
Cassini-Soldner
Conic Equidistant (see Equidistant Conic on page 552)
Laborde Oblique Mercator
Lambert Azimuthal Equal Area (see Lambert Azimuthal Equal
Area on page 570)
Lambert Conformal Conic (see Lambert Conformal Conic on
page 573)
Mercator (see Mercator on page 578)
Minimum Error Conformal
Modified Polyconic
Modified Stereographic
Mollweide Equal Area (see Mollweide on page 585)
Oblique Mercator (see Oblique Mercator (Hotine) on page 589)
Orthographic (see Orthographic on page 592)
Plate Carrée (see Equirectangular (Plate Carrée) on page 555)
Rectified Skew Orthomorphic (see RSO on page 605)
Regular Polyconic (see Polyconic on page 599)
Robinson Pseudocylindrical (see Robinson on page 603)
Sinusoidal (see Sinusoidal on page 606)
Southern Orientated Gauss Conformal
Stereographic (see Stereographic on page 621)
Swiss Cylindrical
Stereographic (Oblique) (see Stereographic on page 621)
Transverse Mercator (see Transverse Mercator on page 625)
Universal Transverse Mercator (see UTM on page 629)
Van der Grinten (see Van der Grinten I on page 632)
Winkel's Tripel
Bipolar Oblique Conic Conformal
The Bipolar Oblique Conic Conformal projection was developed by
O.M. Miller and William A. Briesemeister in 1941 specifically for
mapping North and South America, and maintains conformality for
these regions. It is based upon the Lambert Conformal Conic, using
two oblique conic projections side-by-side.
The two oblique conics are joined with the poles 104° apart. A great circle arc 104° long begins at 20°S and 110°W, cuts through Central America, and terminates at 45°N and approximately 19°59'36"W. The scale of the map is then increased by approximately 3.5%. The origin of the coordinates is made 17°15'N, 73°02'W.
Refer to Lambert Conformal Conic on page 573 for more
information.
Prompts
The following prompts display in the Projection Chooser if Bipolar
Oblique Conic Conformal is selected.
Projection Name
Spheroid Type
Datum Name
Table 111: Bipolar Oblique Conic Conformal Summary
Construction Cone
Property Conformal
Meridians Meridians are complex curves concave toward the center of the projection.
Parallels Parallels are complex curves concave toward the nearest pole.
Graticule spacing Graticule spacing increases away from the lines of true scale and retains the property of conformality.
Linear scale Linear scale is true along two lines that do not lie along any meridian or parallel. Scale is compressed between these lines and expanded beyond them. Linear scale is generally good, but there is as much as a 10% error at the edge of the projection as used.
Uses Used to represent one or both of the American continents. Examples are the Basement map of North America and the Tectonic map of North America.
Cassini-Soldner The Cassini projection was devised by C. F. Cassini de Thury in 1745
for the survey of France. Mathematical analysis by J. G. von Soldner
in the early 19th century led to more accurate ellipsoidal formulas.
Today, it has largely been replaced by the Transverse Mercator
projection, although it is still in limited use outside of the United
States. It was one of the major topographic mapping projections
until the early 20th century.
The spherical form of the projection bears the same relation to the
Equidistant Cylindrical, or Plate Carrée, projection that the spherical
Transverse Mercator bears to the regular Mercator. Instead of having
the straight meridians and parallels of the Equidistant Cylindrical, the
Cassini has complex curves for each, except for the Equator, the
central meridian, and each meridian 90° away from the central
meridian, all of which are straight.
There is no distortion along the central meridian if it is maintained at true scale, which is the usual case. If the central meridian is given a reduced scale factor, the lines of true scale are instead two straight lines on the map, parallel to and equidistant from the central meridian, and there is no distortion along them.
Table 112: Cassini-Soldner Summary
Construction Cylinder
Property Compromise
Meridians Central meridian, each meridian 90° from the central meridian, and the Equator are straight lines. Other meridians are complex curves.
Parallels Parallels are complex curves.
Graticule spacing Complex curves for all meridians and parallels, except for the Equator, the central meridian, and each meridian 90° away from the central meridian, all of which are straight.
Linear scale Scale is true along the central meridian, and along lines perpendicular to the central meridian. Scale is constant but not true along lines parallel to the central meridian on the spherical form, and nearly so for the ellipsoid.
Uses Used for topographic mapping, formerly in England and currently in a few other countries, such as Denmark, Germany, and Malaysia.
The scale is correct along the central meridian, and also along any
straight line perpendicular to the central meridian. It gradually
increases in a direction parallel to the central meridian as the
distance from that meridian increases, but the scale is constant
along any straight line on the map that is parallel to the central
meridian. Therefore, Cassini-Soldner is more suitable for regions
that are predominantly north-south in extent, such as Great Britain,
than regions extending in other directions. The projection is neither
equal-area nor conformal, but it has a compromise of both features.
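The scale behavior described above follows from the simple spherical form of the projection (following Snyder, 1987). The sketch below covers the spherical case only; the ellipsoidal formulas used for actual survey grids include further terms.

import math

def cassini_forward(lon, lat, lon0, lat0=0.0, radius=6370997.0):
    # Spherical Cassini projection. x is the distance from the central
    # meridian (true to scale along lines perpendicular to it); y is
    # measured along the central meridian from the latitude of origin.
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi0 = math.radians(lon0), math.radians(lat0)
    x = radius * math.asin(math.cos(phi) * math.sin(lam - lam0))
    y = radius * (math.atan2(math.tan(phi), math.cos(lam - lam0)) - phi0)
    return x, y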
The Cassini-Soldner projection was adopted by the Ordnance Survey
for the official survey of Great Britain during the second half of the
19th century. A system equivalent to the oblique Cassini-Soldner
projection was used in early coordinate transformations for ERTS
(now Landsat) satellite imagery, but it was changed to Oblique
Mercator (Hotine) in 1978, and to the Space Oblique Mercator in
1982.
Prompts
The following prompts display in the Projection Chooser if Cassini-
Soldner is selected.
Projection Name
Spheroid Type
Datum Name
Laborde Oblique Mercator
In 1928, Laborde combined a conformal sphere with a complex-
algebra transformation of the Oblique Mercator projection for the
topographic mapping of Madagascar. This variation is now known as
the Laborde Oblique Mercator. The central line is a great circle arc.
See Oblique Mercator (Hotine) on page 589 for more
information.
Prompts
The following prompts display in the Projection Chooser if Laborde
Oblique Mercator is selected.
Projection Name
Spheroid Type
Datum Name
Minimum Error Conformal
The Minimum Error Conformal projection is the same as the New
Zealand Map Grid projection.
For more information, see New Zealand Map Grid on page 587.
Modified Polyconic The Modified Polyconic projection was devised by Lallemand of
France, and in 1909 it was adopted by the International Map
Committee (IMC) in London as the basis for the 1:1,000,000-scale
International Map of the World (IMW) series.
The projection differs from the ordinary Polyconic in two principal features: all meridians are straight, and two meridians are made true to scale. Adjacent sheets fit together exactly not only north to south, but also east to west. However, when sheets are mosaicked in all directions, a gap remains between each diagonal sheet and one or the other of its adjacent sheets.
In 1962, a U.N. conference on the IMW adopted the Lambert
Conformal Conic and the Polar Stereographic projections to replace
the Modified Polyconic.
See Polyconic on page 599 for more information.
Prompts
The following prompts display in the Projection Chooser if Modified
Polyconic is selected.
Projection Name
Spheroid Type
Datum Name
Table 113: Modified Polyconic Summary
Construction Cone
Property Compromise
Meridians All meridians are straight.
Parallels Parallels are circular arcs. The top and bottom parallels of each sheet are nonconcentric circular arcs.
Graticule spacing The top and bottom parallels of each sheet are nonconcentric circular arcs. The two parallels are spaced from each other according to the true scale along the central meridian, which is slightly reduced.
Linear scale Scale is true along each parallel and along two meridians, but no parallel is standard.
Uses Used for the International Map of the World (IMW) series until 1962.
Modified Stereographic
The meridians and parallels of the Modified Stereographic projection
are generally curved, and there is usually no symmetry about any
point or line. There are limitations to these transformations. Most of
them can only be used within a limited range. As the distance from
the projection center increases, the meridians, parallels, and
shorelines begin to exhibit loops, overlapping, and other undesirable
curves. A world map using the GS50 (50-State) projection is almost
illegible with meridians and parallels intertwined like wild vines.
Prompts
The following prompts display in the Projection Chooser if Modified
Stereographic is selected.
Projection Name
Spheroid Type
Datum Name
Table 114: Modified Stereographic Summary
Construction Plane
Property Conformal
Meridians All meridians are normally complex curves, although some may be straight under certain conditions.
Parallels All parallels are complex curves, although some may be straight under certain conditions.
Graticule spacing The graticule is normally not symmetrical about any axis or point.
Linear scale Scale is true along irregular lines, but the map is usually designed to minimize scale variation throughout a selected region.
Uses Used for maps of continents in the Eastern Hemisphere, for the Pacific Ocean, and for maps of Alaska and the 50 United States.
Mollweide Equal Area The second oldest pseudocylindrical projection that is still in use
(after the Sinusoidal) was presented by Carl B. Mollweide (1774-
1825) of Halle, Germany, in 1805. It is an equal-area projection of
the Earth within an ellipse. It has had a profound effect on world map
projections in the 20th century, especially as an inspiration for other
important projections, such as the Van der Grinten.
The Mollweide is normally used for world maps and occasionally for
a very large region, such as the Pacific Ocean. This is because only
two points on the Mollweide are completely free of distortion unless
the projection is interrupted. These are the points at latitudes
40°44'12"N and S on the central meridian(s).
The world is shown in an ellipse with the Equator, its major axis,
twice as long as the central meridian, its minor axis. The meridians
90° east and west of the central meridian form a complete circle. All
other meridians are elliptical arcs which, with their opposite numbers
on the other side of the central meridian, form complete ellipses that
meet at the poles.
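This geometry leads to a simple construction: an auxiliary angle is found by iteration so that area is preserved, and the coordinates then follow directly. The Python sketch below shows the spherical case (following Snyder, 1987); convergence of the iteration is slow near the poles.

import math

def mollweide_forward(lon, lat, lon0=0.0, radius=6370997.0):
    # Spherical Mollweide (equal-area) projection.
    lam, phi = math.radians(lon), math.radians(lat)
    lam0 = math.radians(lon0)
    if abs(phi) >= math.pi / 2.0 - 1e-12:
        theta = math.copysign(math.pi / 2.0, phi)  # the poles map to points
    else:
        # Solve 2*theta + sin(2*theta) = pi*sin(phi) by Newton-Raphson.
        theta = phi
        for _ in range(100):
            delta = -((2.0 * theta + math.sin(2.0 * theta) - math.pi * math.sin(phi))
                      / (2.0 + 2.0 * math.cos(2.0 * theta)))
            theta += delta
            if abs(delta) < 1e-12:
                break
    x = radius * (2.0 * math.sqrt(2.0) / math.pi) * (lam - lam0) * math.cos(theta)
    y = radius * math.sqrt(2.0) * math.sin(theta)
    return x, y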
Table 115: Mollweide Equal Area Summary
Construction Pseudocylinder
Property Equal-area
Meridians All of the meridians are ellipses. The central meridian is a straight line, and 90° meridians are circular arcs (Pearson, 1990).
Parallels The Equator and parallels are straight lines perpendicular to the central meridian, but they are not equally spaced.
Graticule spacing Linear graticules include the central meridian and the Equator (Environmental Systems Research Institute, 1992). Meridians are equally spaced along the Equator and along all other parallels. The parallels are straight parallel lines, but they are not equally spaced. The poles are points.
Linear scale Scale is true along latitudes 40°44'N and S. Distortion increases with distance from these lines and becomes severe at the edges of the projection (Environmental Systems Research Institute, 1992).
Uses Often used for world maps (Pearson, 1990). Suitable for thematic or distribution mapping of the entire world, frequently in interrupted form (Environmental Systems Research Institute, 1992).
Prompts
The following prompts display in the Projection Chooser if Mollweide
Equal Area is selected.
Projection Name
Spheroid Type
Datum Name
Rectified Skew Orthomorphic
Martin Hotine (1898 - 1968) called the Oblique Mercator projection
the Rectified Skew Orthomorphic projection.
See Oblique Mercator (Hotine) for more information.
Prompts
The following prompts display in the Projection Chooser if Rectified
Skew Orthomorphic is selected.
Projection Name
Spheroid Type
Datum Name
Robinson Pseudocylindrical
The Robinson Pseudocylindrical projection provides a means of
showing the entire Earth in an uninterrupted form. The continents
appear as units and are in relatively correct size and location. Poles
are represented as lines.
Meridians are equally spaced and resemble elliptical arcs, concave
toward the central meridian. The central meridian is a straight line
0.51 times the length of the Equator. Parallels are equally spaced
straight lines between 38°N and S, and then the spacing decreases
beyond these limits. The poles are 0.53 times the length of the
Equator. The projection is based upon tabular coordinates instead of
mathematical formulas (Environmental Systems Research Institute,
1992).
Prompts
The following prompts display in the Projection Chooser if Robinson
Pseudocylindrical is selected.
Projection Name
Spheroid Type
Datum Name
Table 116: Robinson Pseudocylindrical Summary
Construction Pseudocylinder
Property Compromise
Meridians Meridians are elliptical arcs, equally spaced, and concave toward the central meridian.
Parallels Parallels are straight lines.
Graticule spacing Parallels are straight lines and are parallel. The individual parallels are evenly divided by the meridians (Pearson, 1990).
Linear scale Generally, scale is made true along latitudes 38°N and S. Scale is constant along any given latitude, and for the latitude of opposite sign (Environmental Systems Research Institute, 1992).
Uses Developed for use in general and thematic world maps. Used by Rand McNally since the 1960s and by the National Geographic Society since 1988 for general and thematic world maps (Environmental Systems Research Institute, 1992).
Southern Orientated Gauss Conformal
Southern Orientated Gauss Conformal is another name for the
Transverse Mercator projection, after mathematician Friedrich Gauss
(1777-1855). It is also called the Gauss-Krüger projection.
See Transverse Mercator on page 625 for more information.
Prompts
The following prompts display in the Projection Chooser if Southern
Orientated Gauss Conformal is selected.
Projection Name
Spheroid Type
Datum Name
Swiss Cylindrical The Swiss Cylindrical projection is a cylindrical projection used by the
Swiss Landestopographie, which is a form of the Oblique Mercator
projection.
For more information, see Oblique Mercator (Hotine) on page
589.
Winkel's Tripel Winkel's Tripel was formulated in 1921 by Oswald Winkel of Germany. It is a combined projection that is the arithmetic mean of the Plate Carrée and Aitoff's projection (Maling, 1992).
Prompts
The following prompts display in the Projection Chooser if Winkel's Tripel is selected. Respond to the prompts as described.
Projection Name
Spheroid Type
Datum Name
Figure 250: Winkel's Tripel Projection
Source: Snyder and Voxland, 1989
Table 117: Winkel's Tripel Summary
Construction Modified azimuthal
Property Neither conformal nor equal-area
Meridians Central meridian is straight. Other meridians are curved and are equally spaced along the Equator, and concave toward the central meridian.
Parallels Equidistant spacing of parallels. Equator and the poles are straight. Other parallels are curved and concave toward the nearest pole.
Graticule spacing Symmetry is maintained along the central meridian or the Equator.
Linear scale Scale is true along the central meridian and constant along the Equator.
Uses Used for world maps.
Glossary
Numerics 2Dtwo-dimensional.
3Dthree-dimensional.
A absorption spectrathe electromagnetic radiation wavelengths
that are absorbed by specific materials of interest.
abstract symbolan annotation symbol that has a geometric
shape, such as a circle, square, or triangle. These symbols
often represent amounts that vary from place to place, such as
population density, yearly rainfall, etc.
a priorialready or previously known.
accuracy assessmentthe comparison of a classification to
geographical data that is assumed to be true. Usually, the
assumed-true data are derived from ground truthing.
accuracy reportin classification accuracy assessment, a list of the
percentages of accuracy, which is computed from the error
matrix.
ACSsee attitude control system.
active sensorsthe solar imaging sensors that both emit and
receive radiation.
ADRGsee ARC Digitized Raster Graphic.
ADRIsee ARC Digital Raster Imagery.
aerial stereopairtwo photos taken at adjacent exposure stations.
Airborne Synthetic Aperture Radaran experimental airborne
radar sensor developed by Jet Propulsion Laboratories (JPL),
Pasadena, California, under a contract with NASA. AIRSAR data
have been available since 1983.
Airborne Visible/Infrared Imaging Spectrometer(AVIRIS) a
sensor developed by JPL (Pasadena, California) under a
contract with NASA that produces multispectral data with 224
narrow bands. These bands are 10 nm wide and cover the
spectral range of 0.4-2.4 µm. AVIRIS data have been available
since 1987.
AIRSARsee Airborne Synthetic Aperture Radar.
alarma test of a training sample, usually used before the signature
statistics are calculated. An alarm highlights an area on the
display that is an approximation of the area that would be
classified with a signature. The original data can then be
compared to the highlighted area.
Almaz a Russian radar satellite that completed its mission in 1992.
Along-Track Scanning Radiometer(ATSR) instrument aboard the European Space Agency's ERS-1 and ERS-2 satellites, which detects changes in the amount of vegetation on the Earth's surface.
American Standard Code for Information Interchange
(ASCII) a basis of character sets. . .to convey some control
codes, space, numbers, most basic punctuation, and
unaccented letters a-z and A-Z (Free On-Line Dictionary of
Computing, 1999a).
analog photogrammetryoptical or mechanical instruments used
to reconstruct three-dimensional geometry from two
overlapping photographs.
analytical photogrammetrythe computer replaces optical and
mechanical components by substituting analog measurement
and calculation with mathematical computation.
ancillary datathe data, other than remotely sensed data, that are
used to aid in the classification process.
ANNsee Artificial Neural Networks.
annotationthe explanatory material accompanying an image or
map. In ERDAS IMAGINE, annotation consists of text, lines,
polygons, ellipses, rectangles, legends, scale bars, grid lines,
tick marks, neatlines, and symbols that denote geographical
features.
annotation layera set of annotation elements that is drawn in a
Viewer or Map Composer window and stored in a file (.ovr
extension).
AOIsee area of interest.
arcsee line.
ARC system (Equal Arc-Second Raster Chart/Map)a system
that provides a rectangular coordinate and projection system at
any scale for the Earths ellipsoid, based on the World Geodetic
System 1984 (WGS 84).
ARC Digital Raster ImageryDefense Mapping Agency (DMA)
data that consist of SPOT panchromatic, SPOT multispectral, or
Landsat TM satellite imagery transformed into the ARC system
and accompanied by ASCII encoded support files. These data
are available only to Department of Defense contractors.
ARC Digitized Raster Graphicdata from the Defense Mapping
Agency (DMA) that consist of digital copies of DMA hardcopy
graphics transformed into the ARC system and accompanied by
ASCII encoded support files. These data are primarily used for
military purposes by defense contractors.
ARC GENERATE datavector data created with the ArcInfo
UNGENERATE command.
arc/seconda unit of measure that can be applied to data in the
Lat/Lon coordinate system. Each pixel represents the distance
covered by one second of latitude or longitude. For example, in
3 arc/second data, each pixel represents an area three seconds
latitude by three seconds longitude.
areaa measurement of a surface.
area based matchingan image matching technique that
determines the correspondence between two image areas
according to the similarity of their gray level values.
area of interest (AOI) a point, line, or polygon that is selected as
a training sample or as the image area to be used in an
operation. AOIs can be stored in separate .aoi files.
Artificial Neural Networks(ANN) data classifiers that may
process hyperspectral images with a large number of bands.
ASCIIsee American Standard Code for Information
Interchange.
aspectthe orientation, or the direction that a surface faces, with
respect to the directions of the compass: north, south, east,
west.
aspect imagea thematic raster image that shows the prevailing
direction that each pixel faces.
aspect mapa map that is color coded according to the prevailing
direction of the slope at each pixel.
ATSRsee Along-Track Scanning Radiometer.
attitude control system(ACS) system used by SeaWiFS
instrument to sustain orbit, conduct lunar and solar calibration
procedures, and supply attitude information within one
SeaWiFS pixel (National Aeronautics and Space Administration,
1999).
attributethe tabular information associated with a raster or vector
layer.
averagethe statistical mean; the sum of a set of values divided by
the number of values in the set.
AVHRRAdvanced Very High Resolution Radiometer data. Small-
scale imagery produced by an NOAA polar orbiting satellite. It
has a spatial resolution of 1.1 × 1.1 km or 4 × 4 km.
AVIRISsee Airborne Visible/Infrared Imaging
Spectrometer.
azimuthan angle measured clockwise from a meridian, going
north to east.
azimuthal projectiona map projection that is created from
projecting the surface of the Earth to the surface of a plane.
B banda set of data file values for a specific portion of the
electromagnetic spectrum of reflected light or emitted heat
(red, green, blue, near-infrared, infrared, thermal, etc.), or
some other user-defined information created by combining or
enhancing the original bands, or creating new bands from other
sources. Sometimes called channel.
bandingsee striping.
base mapa map portraying background reference information
onto which other information is placed. Base maps usually
show the location and extent of natural surface features and
permanent human-made features.
Basic Image Interchange Format(BIIF) the basis for the NITFS
format.
batch filea file that is created in the Batch mode of ERDAS
IMAGINE. All steps are recorded for a later run. This file can be
edited.
batch modea mode of operating ERDAS IMAGINE in which steps
are recorded for later use.
bathymetric mapa map portraying the shape of a water body or
reservoir using isobaths (depth contours).
Bayesiana variation of the maximum likelihood classifier, based
on the Bayes Law of probability. The Bayesian classifier allows
the application of a priori weighting factors, representing the
probabilities that pixels are assigned to each class.
BIIFsee Basic Image Interchange Format.
BILband interleaved by line. A form of data storage in which each
record in the file contains a scan line (row) of data for one band.
All bands of data for a given line are stored consecutively within
the file.
bilinear interpolationa resampling method that uses the data
file values of four pixels in a 2 × 2 window to calculate an
output data file value by computing a weighted average of the
input data file values with a bilinear function.
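A minimal sketch of the computation for one output pixel, assuming the four surrounding input values and the fractional offsets of the output location within the 2 × 2 window are already known:

def bilinear(v00, v10, v01, v11, dx, dy):
    # v00, v10, v01, v11 are the upper-left, upper-right, lower-left, and
    # lower-right input data file values; dx and dy are the fractional
    # offsets (0-1) of the output location from the upper-left pixel.
    top = v00 * (1.0 - dx) + v10 * dx        # interpolate along the top row
    bottom = v01 * (1.0 - dx) + v11 * dx     # interpolate along the bottom row
    return top * (1.0 - dy) + bottom * dy    # then blend the two rows

# Example: one quarter of the way right and halfway down the window.
print(bilinear(10, 20, 30, 40, dx=0.25, dy=0.5))   # 22.5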
bin functiona mathematical function that establishes the
relationship between data file values and rows in a descriptor
table.
binsordered sets of pixels. Pixels are sorted into a specified
number of bins. The pixels are then given new values based
upon the bins to which they are assigned.
BIPband interleaved by pixel. A form of data storage in which the
values for each band are ordered within a given pixel. The
pixels are arranged sequentially on the tape.
bita binary digit, meaning a number that can have two possible
values 0 and 1, or off and on. A set of bits, however, can have
many more values, depending upon the number of bits used.
The number of values that can be expressed by a set of bits is
2 to the power of the number of bits used. For example, the
number of values that can be expressed by 3 bits is 8 (2³ = 8).
block of photographsformed by the combined exposures of a
flight. The block consists of a number of parallel strips with a
sidelap of 20-30%.
blockeda method of storing data on 9-track tapes so that there
are more logical records in each physical record.
blocking factorthe number of logical records in each physical
record. For instance, a record may contain 28,000 bytes, but
only 4,000 columns due to a blocking factor of 7.
book mapa map laid out like the pages of a book. Each page fits
on the paper used by the printer. There are neatlines and tick
marks on all sides of every page.
Booleanlogical, based upon, or reducible to a true or false
condition.
borderon a map, a line that usually encloses the entire map, not
just the image area as does a neatline.
boundarya neighborhood analysis technique that is used to detect
boundaries between thematic classes.
bpibits per inch. A measure of data storage density for magnetic
tapes.
breaklinean elevation polyline in which each vertex has its own X,
Y, Z value.
brightness valuethe quantity of a primary color (red, green,
blue) to be output to a pixel on the display device. Also called
intensity value, function memory value, pixel value, display
value, and screen value.
BSQband sequential. A data storage format in which each band is
contained in a separate file.
buffer zonea specific area around a feature that is isolated for or
from further analysis. For example, buffer zones are often
generated around streams in site assessment studies so that
further analyses exclude these areas that are often unsuitable
for development.
buildthe process of constructing the topology of a vector layer by
processing points, lines, and polygons. See clean.
bundlethe unit of photogrammetric triangulation after each point
measured in an image is connected with the perspective center
by a straight light ray. There is one bundle of light rays for each
image.
bundle attitudedefined by a spatial rotation matrix consisting of
three angles (ω, φ, κ).
bundle locationdefined by the perspective center, expressed in
units of the specified map projection.
byte8 bits of data.
C CACsee Compressed Aeronautical Chart.
CADsee computer-aided design.
cadastral mapa map showing the boundaries of the subdivisions
of land for purposes of describing and recording ownership or
taxation.
CADRGsee Compressed ADRG.
calibration certificate/reportin aerial photography, the
manufacturer of the camera specifies the interior orientation in
the form of a certificate or report.
Cartesiana coordinate system in which data are organized on a
grid and points on the grid are referenced by their X,Y
coordinates.
cartographythe art and science of creating maps.
categorical datasee thematic data.
CCTsee computer compatible tape.
CD-ROMa read-only storage device read by a CD-ROM player.
cell1. a 1° × 1° area of coverage. DTED (Digital Terrain Elevation
Data) are distributed in cells. 2. a pixel; grid cell.
cell sizethe area that one pixel represents, measured in map
units. For example, one cell in the image may represent an area
30 × 30 meters on the ground. Sometimes called pixel size.
center of the scenethe center pixel of the center scan line; the
center of a satellite image.
central processing unit(CPU) the part of a computer which
controls all the other parts. . .the CPU consists of the control
unit, the arithmetic and logic unit (ALU) and memory
(registers, cache, RAM and ROM) as well as various temporary
buffers and other logic (Free On-Line Dictionary of Computing,
1999b).
charactera number, letter, or punctuation symbol. One character
usually occupies one byte when stored on a computer.
check pointadditional ground points used to independently verify
the degree of accuracy of a triangulation.
check point analysisthe act of using check points to
independently verify the degree of accuracy of a triangulation.
chi-square distributiona nonsymmetrical data distribution: its
curve is characterized by a tail that represents the highest and
least frequent data values. In classification thresholding, the
tail represents the pixels that are most likely to be classified
incorrectly.
choropleth mapa map portraying properties of a surface using
area symbols. Area symbols usually represent categorized
classes of the mapped phenomenon.
CIBsee Controlled Image Base.
city-block distancethe physical or spectral distance that is
measured as the sum of distances that are perpendicular to one
another.
classa set of pixels in a GIS file that represents areas that share
some condition. Classes are usually formed through
classification of a continuous raster layer.
class valuea data file value of a thematic file that identifies a pixel
as belonging to a particular class.
classificationthe process of assigning the pixels of a continuous
raster image to discrete categories.
classification accuracy tablefor accuracy assessment, a list of
known values of reference pixels, supported by some ground
truth or other a priori knowledge of the true class, and a list of
the classified values of the same pixels, from a classified file to
be tested.
classification scheme(or classification system) a set of target
classes. The purpose of such a scheme is to provide a
framework for organizing and categorizing the information that
can be extracted from the data.
cleanthe process of constructing the topology of a vector layer by
processing lines and polygons. See build.
clienton a computer on a network, a program that accesses a
server utility that is on another machine on the network.
clumpa contiguous group of pixels in one class. Also called raster
region.
clusteringunsupervised training; the process of generating
signatures based on the natural groupings of pixels in image
data when they are plotted in spectral space.
clustersthe natural groupings of pixels when plotted in spectral
space.
CMYcyan, magenta, yellow. Primary colors of pigment used by
printers, whereas display devices use RGB.
CNESCentre National d'Etudes Spatiales. The corporation was founded in 1961. It provides support for ESA. CNES suggests and executes programs (Centre National d'Etudes Spatiales, 1998).
coefficientone number in a matrix, or a constant in a polynomial
expression.
coefficient of variationa scene-derived parameter that is used
as input to the Sigma and Local Statistics radar enhancement
filters.
collinearitya nonlinear mathematical model that
photogrammetric triangulation is based upon. Collinearity
equations describe the relationship among image coordinates,
ground coordinates, and orientation parameters.
colorcellthe location where the data file values are stored in the
colormap. The red, green, and blue values assigned to the
colorcell control the brightness of the color guns for the
displayed pixel.
color gunson a display device, the red, green, and blue phosphors
that are illuminated on the picture tube in varying brightnesses
to create different colors. On a color printer, color guns are the
devices that apply cyan, yellow, magenta, and sometimes black
ink to paper.
colormapan ordered set of colorcells, which is used to perform a
function on a set of input values.
color printera printer that prints color or black and white
imagery, as well as text. ERDAS IMAGINE supports several
color printers.
color schemea set of lookup tables that assigns red, green, and
blue brightness values to classes when a layer is displayed.
composite mapa map on which the combined information from
different thematic maps is presented.
Compressed ADRG(CADRG) a military data product based upon
the general RPF specification.
Compressed Aeronautical Chart(CAC) precursor to CADRG.
Compressed Raster Graphics(CRG) precursor to CADRG.
compromise projectiona map projection that compromises
among two or more of the map projection properties of
conformality, equivalence, equidistance, and true direction.
computer-aided design(CAD) computer application used for
design and GPS survey.
computer compatible tape(CCT) a magnetic tape used to
transfer and store digital data.
confidence levelthe percentage of pixels that are believed to be
misclassified.
conformala map or map projection that has the property of
conformality, or true shape.
conformalitythe property of a map projection to represent true
shape, wherein a projection preserves the shape of any small
geographical area. This is accomplished by exact
transformation of angles around points.
conic projectiona map projection that is created from projecting
the surface of the Earth to the surface of a cone.
connectivity radiusthe distance (in pixels) that pixels can be
from one another to be considered contiguous. The
connectivity radius is used in connectivity analysis.
contiguity analysisa study of the ways in which pixels of a class
are grouped together spatially. Groups of contiguous pixels in
the same class, called raster regions, or clumps, can be
identified by their sizes and manipulated.
contingency matrixa matrix that contains the number and
percentages of pixels that were classified as expected.
continuousa term used to describe raster data layers that contain
quantitative and related values. See continuous data.
continuous dataa type of raster data that are quantitative
(measuring a characteristic) and have related, continuous
values, such as remotely sensed images (e.g., Landsat, SPOT,
etc.).
contour mapa map in which a series of lines connects points of
equal elevation.
contrast stretchthe process of reassigning a range of values to
another range, usually according to a linear function. Contrast
stretching is often used in displaying continuous raster layers,
since the range of data file values is usually much narrower
than the range of brightness values on the display device.
control pointa point with known coordinates in the ground
coordinate system, expressed in the units of the specified map
projection.
Controlled Image Base(CIB) a military data product based upon
the general RPF specification.
convolution filteringthe process of averaging small sets of pixels
across an image. Used to change the spatial frequency
characteristics of an image.
convolution kernela matrix of numbers that is used to average
the value of each pixel with the values of surrounding pixels in
a particular way. The numbers in the matrix serve to weight
this average toward particular pixels.
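A minimal sketch of how a 3 × 3 kernel weights one pixel and its neighbors; edge handling and normalization conventions vary between implementations.

def convolve_pixel(image, row, col, kernel):
    # image is a list of rows of data file values; kernel is a 3 x 3 list
    # of weights. Interior pixels only (no edge handling in this sketch).
    total = 0.0
    weight_sum = 0.0
    for i in range(3):
        for j in range(3):
            w = kernel[i][j]
            total += w * image[row + i - 1][col + j - 1]
            weight_sum += w
    # Dividing by the sum of the weights keeps smoothing (averaging)
    # kernels within the range of the input values.
    return total / weight_sum if weight_sum else total

average_kernel = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # simple low-frequency kernel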
coordinate systema method of expressing location. In two-
dimensional coordinate systems, locations are expressed by a
column and row, also called x and y.
correlation thresholda value used in rectification to determine
whether to accept or discard GCPs. The threshold is an absolute
value threshold ranging from 0.000 to 1.000.
correlation windowswindows that consist of a local
neighborhood of pixels. One example is square neighborhoods
(e.g., 3 × 3, 5 × 5, 7 × 7 pixels).
corresponding GCPsthe GCPs that are located in the same
geographic location as the selected GCPs, but are selected in
different files.
covariancemeasures the tendencies of data file values for the
same pixel, but in different bands, to vary with each other in
relation to the means of their respective bands. These bands
must be linear. Covariance is defined as the average product of
the differences between the data file values in each band and
the mean of each band.
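The definition can be written directly. In the following minimal sketch, each band is a list of data file values for the same set of pixels:

def covariance(band_a, band_b):
    # Average product of each band's deviations from its own mean.
    n = len(band_a)
    mean_a = sum(band_a) / n
    mean_b = sum(band_b) / n
    return sum((a - mean_a) * (b - mean_b)
               for a, b in zip(band_a, band_b)) / n

# Bands whose values rise and fall together give a positive covariance.
print(covariance([10, 20, 30], [12, 22, 35]))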
covariance matrixa square matrix that contains all of the
variances and covariances within the bands in a data file.
CPU see central processing unit.
creditson maps, the text that can include the data source and
acquisition date, accuracy information, and other details that
are required for or helpful to readers.
CRGsee Compressed Raster Graphics.
crisp filtera filter used to sharpen the overall scene luminance
without distorting the interband variance content of the image.
cross correlationa calculation that computes the correlation
coefficient of the gray values between the template window and
the search window.
cubic convolutiona method of resampling that uses the data file
values of sixteen pixels in a 4 × 4 window to calculate an output
data file value with a cubic function.
current directoryalso called default directory, it is the directory
that you are in. It is the default path.
cylindrical projectiona map projection that is created from
projecting the surface of the Earth to the surface of a cylinder.
D dangling nodea line that does not close to form a polygon, or that
extends past an intersection.
data1. in the context of remote sensing, a computer file containing
numbers that represent a remotely sensed image, and can be
processed to display that image. 2. a collection of numbers,
strings, or facts that requires some processing before it is
meaningful.
database (one word)a relational data structure usually used to
store tabular information. Examples of popular databases
include SYBASE, dBase, Oracle, INFO, etc.
data base (two words)in ERDAS IMAGINE, a set of continuous and
thematic raster layers, vector layers, attribute information, and
other kinds of data that represent one area of interest. A data
base is usually part of a GIS.
data filea computer file that contains numbers that represent an
image.
data file valueeach number in an image file. Also called file value,
image file value, DN, brightness value, pixel.
datumsee reference plane.
DCTsee Discrete Cosine Transformation.
decision rulean equation or algorithm that is used to classify
image data after signatures have been created. The decision
rule is used to process the data file values based upon the
signature statistics.
decorrelation stretcha technique used to stretch the principal
components of an image, not the original image.
default directorysee current directory.
Defense Mapping Agency(DMA) agency that supplies VPF, ARC
digital raster, DRG, ADRG, and DTED files.
degrees of freedomwhen chi-square statistics are used in
thresholding, the number of bands in the classified file.
DEMsee digital elevation model.
densifythe process of adding vertices to selected lines at a user-
specified tolerance.
density1. the number of bits per inch on a magnetic tape. 9-track
tapes are commonly stored at 1600 and 6250 bpi. 2. a
neighborhood analysis technique that outputs the number of
pixels that have the same value as the analyzed pixel in a user-
specified window.
derivative mapa map created by altering, combining, or
analyzing other maps.
descriptorsee attribute.
desktop scannersgeneral purpose devices that lack the image
detail and geometric accuracy of photogrammetric quality
units, but are much less expensive.
detectorthe device in a sensor system that records
electromagnetic radiation.
developable surfacea flat surface, or a surface that can be easily
flattened by being cut and unrolled, such as the surface of a
cone or a cylinder.
DFTsee Discrete Fourier Transform.
DGPSsee Differential Correction.
Differential Correction(DGPS) can be used to remove the
majority of the effects of Selective Availability.
digital elevation model(DEM) continuous raster layers in which
data file values represent elevation. DEMs are available from
the USGS at 1:24,000 and 1:250,000 scale, and can be
produced with terrain analysis programs, IMAGINE IFSAR DEM,
IMAGINE OrthoMAX, and IMAGINE StereoSAR DEM.
Digital Number(DN) variation in pixel intensity due to
composition of what it represents. For example, the DN of
water is different from that of land. DN is expressed in a value
typically from 0-255.
digital orthophotoan aerial photo or satellite scene that has been
transformed by the orthogonal projection, yielding a map that
is free of most significant geometric distortions.
digital orthophoto quadrangle(DOQ) a computer-generated
image of an aerial photo (United States Geological Survey,
1999b).
digital photogrammetryphotogrammetry as applied to digital
images that are stored and processed on a computer. Digital
images can be scanned from photographs or can be directly
captured by digital cameras.
Digital Line Graph(DLG) a vector data format created by the
USGS.
Digital Terrain Elevation Data(DTED) data produced by the
DMA. DTED data comes in two types, both in Arc/second
format: DTED 1, a 1° × 1° area of coverage, and DTED 2, a
1° × 1° or less area of coverage.
digital terrain model(DTM) a discrete expression of topography
in a data array, consisting of a group of planimetric coordinates
(X,Y) and the elevations of the ground points and breaklines.
digitized raster graphic(DRG) a digital replica of DMA hardcopy
graphic products. See also ADRG.
digitizingany process that converts nondigital data into numeric
data, usually to be stored on a computer. In ERDAS IMAGINE,
digitizing refers to the creation of vector data from hardcopy
materials or raster images that are traced using a digitizer
keypad on a digitizing tablet, or a mouse on a display device.
DIMEsee Dual Independent Map Encoding.
dimensionalitya term referring to the number of bands being
classified. For example, a data file with three bands is said to
be three-dimensional, since three-dimensional spectral space
is plotted to analyze the data.
directoryan area of a computer disk that is designated to hold a
set of files. Usually, directories are arranged in a tree structure,
in which directories can also contain many levels of
subdirectories.
Discrete Cosine Transformation(DCT) an element of a
commonly used form of JPEG, which is a compression
technique.
Discrete Fourier Transform(DFT) method of removing striping
and other noise from radar images. See also Fast Fourier
Transform.
displacementthe degree of geometric distortion for a point that
is not on the nadir line.
display devicethe computer hardware consisting of a memory
board and a monitor. It displays a visible image from a data file
or from some user operation.
display driverthe ERDAS IMAGINE utility that interfaces between
the computer running ERDAS IMAGINE software and the
display device.
display memorythe subset of image memory that is actually
viewed on the display screen.
display pixelone grid location on a display device or printout.
display resolutionthe number of pixels that can be viewed on the
display device monitor, horizontally and vertically (i.e., 512 × 512
or 1024 × 1024).
distancesee Euclidean distance, spectral distance.
distance image filea one-band, 16-bit file that can be created in
the classification process, in which each data file value
represents the result of the distance equation used in the
program. Distance image files generally have a chi-square
distribution.
distributionthe set of frequencies with which an event occurs, or
the set of probabilities that a variable has a particular value.
distribution rectangles(DR) the geographic data sets into which
ADRG data are divided.
ditheringa display technique that is used in ERDAS IMAGINE to
allow a smaller set of colors to appear to be a larger set of colors.
divergencea statistical measure of distance between two or more
signatures. Divergence can be calculated for any combination
of bands used in the classification; bands that diminish the
results of the classification can be ruled out.
diversitya neighborhood analysis technique that outputs the
number of different values within a user-specified window.
DLGsee Digital Line Graph.
DMAsee Defense Mapping Agency.
DNsee Digital Number.
DOQsee digital orthophoto quadrangle.
dot patternsthe matrices of dots used to represent brightness
values on hardcopy maps and images.
dots per inch(DPI) when referring to the resolution of an output
device, such as a printer, the number of dots that are printed
per unit; for example, 300 dots per inch.
double precisiona measure of accuracy in which fifteen
significant digits can be stored for a coordinate.
downsamplingthe skipping of pixels during the display or
processing of the scanning process.
DPIsee dots per inch.
DRsee distribution rectangles.
DTEDsee Digital Terrain Elevation Data.
DTMsee digital terrain model.
Dual Independent Map Encoding(DIME) a type of ETAK feature
wherein a line is created along with a corresponding ACODE
(arc attribute) record. The coordinates are stored in Lat/Lon
decimal degrees. Each record represents a single linear
feature.
DXFData Exchange Format. A format for storing vector data in
ASCII files, used by AutoCAD software.
dynamic rangesee radiometric resolution.
E
Earth Observation Satellite Company(EOSAT) a private
company that directs the Landsat satellites and distributes
Landsat imagery.
Earth Resources Observation Systems(EROS) a division of the
USGS National Mapping Division. EROS is involved with
managing data and creating systems, as well as research
(USGS, 1999a).
Earth Resources Technology Satellites(ERTS) in 1972, NASA's
first civilian program to acquire remotely sensed digital satellite
data, later renamed to Landsat.
EDCsee EROS Data Center.
edge detectora convolution kernel, which is usually a zero-sum
kernel, that smooths out or zeros out areas of low spatial
frequency and creates a sharp contrast where spatial frequency
is high. High spatial frequency is at the edges between
homogeneous groups of pixels.
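A minimal sketch of this idea, assuming NumPy and SciPy are available (neither is part of ERDAS IMAGINE, and the array names are illustrative): a zero-sum kernel is convolved over one band.

    import numpy as np
    from scipy import ndimage

    # A 3 x 3 zero-sum kernel: its coefficients sum to zero, so areas of
    # low spatial frequency are zeroed out while edges are emphasized.
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)

    band = np.random.randint(0, 256, (512, 512)).astype(float)  # stand-in for one band
    edges = ndimage.convolve(band, kernel, mode="nearest")      # convolution filtering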
edge enhancera high-frequency convolution kernel that brings
out the edges between homogeneous groups of pixels. Unlike
an edge detector, it only highlights edges, it does not
necessarily eliminate other features.
eigenvaluethe length of a principal component that measures the
variance of a principal component band. See also principal
components.
eigenvectorthe direction of a principal component represented as
coefficients in an eigenvector matrix which is computed from
the eigenvalues. See also principal components.
electromagnetic(EM) type of spectrum consisting of different
regions such as thermal infrared and long-wave and short-
wave reflective.
electromagnetic radiation(EMR) the energy transmitted
through space in the form of electric and magnetic waves.
electromagnetic spectrumthe range of electromagnetic
radiation extending from cosmic waves to radio waves,
characterized by frequency or wavelength.
elementan entity of vector data, such as a point, line, or polygon.
elevation datasee terrain data, DEM.
ellipsea two-dimensional figure that is formed in a two-
dimensional scatterplot when both bands plotted have normal
distributions. The ellipse is defined by the standard deviations
of the input bands. Ellipse plots are often used to test
signatures before classification.
EMsee electromagnetic.
EMLsee ERDAS Macro Language.
EMRsee electromagnetic radiation.
end-of-file mark(EOF) usually a half-inch strip of blank tape that
signifies the end of a file that is stored on magnetic tape.
end-of-volume mark(EOV) usually three EOFs marking the end
of a tape.
Enhanced Thematic Mapper Plus(ETM+) the observing
instrument on Landsat 7.
enhancementthe process of making an image more interpretable
for a particular application. Enhancement can make important
features of raw, remotely sensed data more interpretable to
the human eye.
entityan AutoCAD drawing element that can be placed in an
AutoCAD drawing with a single command.
Environmental Systems Research Institute(ESRI) company
based in Redlands, California, which produces software such as
ArcInfo and ArcView. ESRI has created many data formats,
including GRID and GRID Stack.
EOFsee end-of-file mark.
EOSATsee Earth Observation Satellite Company.
EOV see end-of-volume mark.
ephemeris datacontained in the header of the data file of a SPOT
scene, provides information about the recording of the data and
the satellite orbit.
epipolar stereopaira stereopair without y-parallax.
equal areasee equivalence.
equatorial aspecta map projection that is centered around the
equator or a point on the equator.
equidistancethe property of a map projection to represent true
distances from an identified point.
equivalencethe property of a map projection to represent all
areas in true proportion to one another.
ERDAS Macro Language(EML) computer language that can be
used to create custom dialogs in ERDAS IMAGINE, or to edit
existing dialogs and functions for your specific application.
EROSsee Earth Resources Observation Systems.
EROS Data Center(EDC) a division of USGS, located in Sioux
Falls, SD, which is the primary receiving center for Landsat 7
data.
error matrixin classification accuracy assessment, a square
matrix showing the number of reference pixels that have the
same values as the actual classified points.
ERS-1the European Space Agency's (ESA) radar satellite, launched
in July 1991, which currently provides the most comprehensive
radar data available. ERS-2 was launched in 1995.
ERTSsee Earth Resources Technology Satellites.
ESAsee European Space Agency.
ESRIsee Environmental Systems Research Institute.
ETAK MapBasean ASCII digital street centerline map product
available from ETAK, Inc. (Menlo Park, California).
ETM+see Enhanced Thematic Mapper Plus.
Euclidean distancethe distance, either in physical or abstract
(e.g., spectral) space, that is computed based on the equation
of a straight line.
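A minimal NumPy sketch of spectral Euclidean distance between a pixel's measurement vector and a signature mean vector (the values are illustrative, not from any real data set):

    import numpy as np

    pixel = np.array([78, 112, 64, 201], dtype=float)        # data file values in four bands
    mean_vector = np.array([80, 108, 70, 190], dtype=float)  # signature mean in the same bands

    # straight-line distance in spectral space: the square root of the
    # sum of the squared per-band differences
    distance = np.sqrt(np.sum((pixel - mean_vector) ** 2))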
exposure stationduring image acquisition, each point in the flight
path at which the camera exposes the film.
extendthe process of moving selected dangling lines up a
specified distance so that they intersect existing lines.
extensionthe three letters after the period in a file name that
usually identify the type of file.
extent1. the image area to be displayed in a Viewer. 2. the area
of the Earth's surface to be mapped.
exterior orientationall images of a block of aerial photographs in
the ground coordinate system are computed during
photogrammetric triangulation using a limited number of points
with known coordinates. The exterior orientation of an image
consists of the exposure station and the camera attitude at this
moment.
exterior orientation parametersthe perspective center's
ground coordinates in a specified map projection, and three
rotation angles around the coordinate axes.
European Space Agency(ESA) agency operating two satellites,
ERS-1 and ERS-2, that collect radar data. For more
information, visit the ESA web site at http://www.esa.int.
extractselected bands of a complete set of NOAA AVHRR data.
F
false colora color scheme in which features have expected colors.
For instance, vegetation is green, water is blue, etc. These are
not necessarily the true colors of these features.
false eastingan offset between the x-origin of a map projection
and the x-origin of a map. Typically used so that no x-
coordinates are negative.
false northingan offset between the y-origin of a map projection
and the y-origin of a map. Typically used so that no y-
coordinates are negative.
fast formata type of BSQ format used by EOSAT to store Landsat
TM data.
Fast Fourier Transform(FFT) a type of Fourier Transform faster
than the DFT. Designed to remove noise and periodic features
from radar images. It converts a raster image from the spatial
domain into a frequency domain image.
feature based matchingan image matching technique that
determines the correspondence between two image features.
feature collectionthe process of identifying, delineating, and
labeling various types of natural and human-made phenomena
from remotely-sensed images.
feature extractionthe process of studying and locating areas and
objects on the ground and deriving useful information from
images.
feature spacean abstract space that is defined by spectral units
(such as an amount of electromagnetic radiation).
feature space area of interesta user-selected area of interest
(AOI) that is selected from a feature space image.
feature space imagea graph of the data file values of one band
of data against the values of another band (often called a
scatterplot).
FFTsee Fast Fourier Transform.
fiducial centerthe center of an aerial photo.
fiducialsfour or eight reference markers fixed on the frame of an
aerial metric camera and visible in each exposure. Fiducials are
used to compute the transformation from data file to image
coordinates.
fieldin an attribute database, a category of information about each
class or feature, such as Class name and Histogram.
field of view(FOV) in perspective views, an angle that defines
how far the view is generated to each side of the line of sight.
file coordinatesthe location of a pixel within the file in x,y
coordinates. The upper left file coordinate is usually 0,0.
file pixelthe data file value for one data unit in an image file.
file specification or filespecthe complete file name, including
the drive and path, if necessary. If a drive or path is not
specified, the file is assumed to be in the current drive and
directory.
filledreferring to polygons; a filled polygon is solid or has a
pattern, but is not transparent. An unfilled polygon is simply a
closed vector that outlines the area of the polygon.
filteringthe removal of spatial or spectral features for data
enhancement. Convolution filtering is one method of spatial
filtering. Some texts may use the terms filtering and spatial
filtering synonymously.
flipthe process of reversing the from-to direction of selected lines
or links.
focal lengththe orthogonal distance from the perspective center
to the image plane.
focal operationsfilters that use a moving window to calculate
new values for each pixel in the image based on the values of
the surrounding pixels.
focal planethe plane of the film or scanner used in obtaining an
aerial photo.
Fourier analysisan image enhancement technique that was
derived from signal processing.
FOVsee field of view.
from-nodethe first vertex in a line.
full setall bands of a NOAA AVHRR data set.
function memoriesareas of the display device memory that store
the lookup tables, which translate image memory values into
brightness values.
function symbolan annotation symbol that represents an
activity. For example, on a map of a state park, a symbol of a
tent would indicate the location of a camping area.
Fuyo 1 (JERS-1)the Japanese radar satellite launched in
February 1992.
G
GACsee global area coverage.
GBFsee Geographic Base File.
GCPsee ground control point.
GCP matchingfor image-to-image rectification, a GCP selected in
one image is precisely matched to its counterpart in the other
image using the spectral characteristics of the data and the
transformation matrix.
GCP predictionthe process of picking a GCP in either coordinate
system and automatically locating that point in the other
coordinate system based on the current transformation
parameters.
generalizethe process of weeding vertices from selected lines
using a specified tolerance.
geocentric coordinate systema coordinate system that has its
origin at the center of the Earth ellipsoid. The Z_G-axis equals
the rotational axis of the Earth, and the X_G-axis passes through
the Greenwich meridian. The Y_G-axis is perpendicular to both
the Z_G-axis and the X_G-axis, so as to create a three-dimensional
coordinate system that follows the right-hand rule.
geocoded dataan image(s) that has been rectified to a particular
map projection and cell size and has had radiometric
corrections applied.
Geographic Base File(GBF) along with DIME, sometimes
provides the cartographic base for TIGER/Line files, which
cover the US, Puerto Rico, Guam, the Virgin Islands, American
Samoa, and the Trust Territories of the Pacific.
geographic information system(GIS) a unique system
designed for a particular application that stores, enhances,
combines, and analyzes layers of geographic data to produce
interpretable information. A GIS may include computer images,
hardcopy maps, statistical data, and any other data needed for
a study, as well as computer software and human knowledge.
GISs are used for solving complex geographic planning and
management problems.
geographical coordinatesa coordinate system for explaining the
surface of the Earth. Geographical coordinates are defined by
latitude and by longitude (Lat/Lon), with respect to an origin
located at the intersection of the equator and the prime
(Greenwich) meridian.
geometric correctionthe correction of errors of skew, rotation,
and perspective in raw, remotely sensed data.
georeferencingthe process of assigning map coordinates to
image data and resampling the pixels of the image to conform
to the map projection grid.
GeoTIFF TIFF files that are geocoded.
gigabyte(Gb) about one billion bytes.
GISsee geographic information system.
GIS filea single-band ERDAS Ver. 7.X data file in which pixels are
divided into discrete categories.
global area coverage(GAC) a type of NOAA AVHRR data with a
spatial resolution of 4 × 4 km.
global operationsfunctions that calculate a single value for an
entire area, rather than for each pixel like focal functions.
GLObal NAvigation Satellite System(GLONASS) a satellite-
based navigation system produced by the Russian Space
Forces. It provides three-dimensional locations, velocity, and
time measurements for both civilian and military applications.
GLONASS started its mission in 1993 (Magellan Corporation,
1999).
Global Ozone Monitoring Experiment(GOME) instrument
aboard ESA's ERS-2 satellite, which studies atmospheric
chemistry (European Space Agency, 1995).
Global Positioning System(GPS) system used for the collection
of GCPs, which uses orbiting satellites to pinpoint precise
locations on the Earth's surface.
GLONASSsee GLObal NAvigation Satellite System.
.gmd filethe ERDAS IMAGINE graphical model file created with
Model Maker (Spatial Modeler).
gnomonican azimuthal projection obtained from a perspective at
the center of the Earth.
GOMEsee Global Ozone Monitoring Experiment.
GPSsee Global Positioning System.
graphical modelinga technique used to combine data layers in
an unlimited number of ways using icons to represent input
data, functions, and output data. For example, an output layer
created from modeling can represent the desired combination
of themes from many input layers.
graphical modela model created with Model Maker (Spatial
Modeler). Graphical models are put together like flow charts
and are stored in .gmd files.
Graphical User Interface(GUI) the dialogs and menus of ERDAS
IMAGINE that enable you to execute commands to analyze
your imagery.
graticulethe network of parallels of latitude and meridians of
longitude applied to the global surface and projected onto
maps.
gray scalea color scheme with a gradation of gray tones ranging
from black to white.
great circlean arc of a circle for which the center is the center of
the Earth. A great circle is the shortest possible surface route
between two points on the Earth.
GRIDa compressed tiled raster data structure that is stored as a
set of files in a directory, including files to keep the attributes
of the GRID.
grid cella pixel.
grid linesintersecting lines that indicate regular intervals of
distance based on a coordinate system. Sometimes called a
graticule.
GRID Stackmultiple GRIDs to be treated as a multilayer image.
ground control point(GCP) specific pixel in image data for which
the output map coordinates (or other output coordinates) are
known. GCPs are used for computing a transformation matrix,
for use in rectifying an image.
ground coordinate systema three-dimensional coordinate
system which utilizes a known map projection. Ground
coordinates (X,Y,Z) are usually expressed in feet or meters.
ground truthdata that are taken from the actual area being
studied.
ground truthingthe acquisition of knowledge about the study
area from field work, analysis of aerial photography, personal
experience, etc. Ground truth data are considered to be the
most accurate (true) data available about the area of study.
GUIsee Graphical User Interface.
H
halftoningthe process of using dots of varying size or
arrangements (rather than varying intensity) to form varying
degrees of a color.
hardcopy outputany output of digital computer (softcopy) data
to paper.
HARNsee High Accuracy Reference Network.
header filea file usually found before the actual image data on
tapes or CD-ROMs that contains information about the data,
such as number of bands, upper left coordinates, map
projection, etc.
header recordthe first part of an image file that contains general
information about the data in the file, such as the number of
columns and rows, number of bands, database coordinates of
the upper left corner, and the pixel depth. The contents of
header records vary depending on the type of data.
HFAsee Hierarchal File Architecture System.
Hierarchal File Architecture System(HFA) a format that allows
different types of information about a file to be stored in a tree-
structured fashion. The tree is made of nodes that contain
information such as ephemeris data.
High Accuracy Reference Network(HARN) HARN is based on
the GRS 1980 spheroid, and can be used to perform State Plane
calculations.
high-frequency kernela convolution kernel that increases the
spatial frequency of an image. Also called high-pass kernel.
High Resolution Picture Transmission(HRPT) the direct
transmission of AVHRR data in real-time with the same
resolution as LAC.
High Resolution Visible Infrared(HR VIR) a pushbroom scanner
on the SPOT 4 satellite, which captures information in the
visible and near-infrared bands (SPOT Image, 1999).
High Resolution Visible sensor(HRV) a pushbroom scanner on
a SPOT satellite that takes a sequence of line images while the
satellite circles the Earth.
histograma graph of data distribution, or a chart of the number
of pixels that have each possible data file value. For a single
band of data, the horizontal axis of a histogram graph is the
range of all possible data file values. The vertical axis is a
measure of pixels that have each data value.
histogram equalizationthe process of redistributing pixel values
so that there are approximately the same number of pixels with
each value within a range. The result is a nearly flat histogram.
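A simplified NumPy sketch of histogram equalization for one 8-bit band; it illustrates the idea only and is not the ERDAS IMAGINE implementation:

    import numpy as np

    def equalize(band):
        # histogram of the 256 possible data file values
        hist, _ = np.histogram(band, bins=256, range=(0, 256))
        cdf = hist.cumsum()                                     # cumulative pixel count
        cdf = (cdf - cdf.min()) / float(cdf.max() - cdf.min())  # scale to 0..1
        lut = np.round(cdf * 255).astype(np.uint8)              # lookup table
        return lut[band]                                        # map every pixel through the table

    band = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
    equalized = equalize(band)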
histogram matchingthe process of determining a lookup table
that converts the histogram of one band of an image or one
color gun to resemble another histogram.
horizontal controlthe horizontal distribution of GCPs in aerial
triangulation (x,y - planimetry).
host workstationa CPU, keyboard, mouse, and a display.
HRPTsee High Resolution Picture Transmission.
HRVsee High Resolution Visible sensor.
HR VIRsee High Resolution Visible Infrared.
huea component of IHS (intensity, hue, saturation) that is
representative of the color or dominant wavelength of the pixel.
It varies from 0 to 360. Blue = 0 (and 360), magenta = 60, red
= 120, yellow = 180, green = 240, and cyan = 300.
hyperspectral sensorsthe imaging sensors that record multiple
bands of data, such as the AVIRIS with 224 bands.
I
IARRsee Internal Average Relative Reflectance.
IFFTsee Inverse Fast Fourier Transform.
IFOVsee instantaneous field of view.
IGESsee Initial Graphics Exchange Standard files.
IHSintensity, hue, saturation. An alternate color space from RGB
(red, green, blue). This system is advantageous in that it
presents colors more nearly as perceived by the human eye.
See intensity, hue, and saturation.
imagea picture or representation of an object or scene on paper,
or a display screen. Remotely sensed images are digital
representations of the Earth.
image algebraany type of algebraic function that is applied to the
data file values in one or more bands.
image centerthe center of the aerial photo or satellite scene.
image coordinate systemthe coordinate system in which the
location of each point in the image is expressed for purposes of
photogrammetric triangulation.
image datadigital representations of the Earth that can be used
in computer image processing and GIS analyses.
image filea file containing raster image data. Image files in
ERDAS IMAGINE have the extension .img. Image files from the
ERDAS Ver. 7.X series software have the extension .LAN or
.GIS.
image matchingthe automatic acquisition of corresponding
image points on the overlapping area of two images.
image memorythe portion of the display device memory that
stores data file values (which may be transformed or processed
by the software that accesses the display device).
image pairsee stereopair.
image processingthe manipulation of digital image data,
including (but not limited to) enhancement, classification, and
rectification operations.
image pyramida data structure consisting of the same image
represented several times, at a decreasing spatial resolution
each time. Each level of the pyramid contains the image at a
particular resolution.
image scale(SI) expresses the average ratio between a distance
in the image and the same distance on the ground. It is
computed as focal length divided by the flying height above the
mean ground elevation.
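For example (illustrative numbers only, not taken from the text): a camera with a 152 mm focal length flown 7,600 m above the mean ground elevation gives SI = 0.152 m / 7,600 m = 1/50,000, that is, an image scale of about 1:50,000.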
image space coordinate systemidentical to image coordinates,
except that it adds a third axis (z) that is used to describe
positions inside the camera. The units are usually in millimeters
or microns.
IMCsee International Map Committee.
.img file(also, image file) an ERDAS IMAGINE file that stores
continuous or thematic raster layers.
IMWsee International Map of the World.
inclinationthe angle between a vertical on the ground at the
center of the scene and a light ray from the exposure station,
which defines the degree of off-nadir viewing when the scene
was recorded.
indexinga function applied to thematic layers that adds the data
file values of two or more layers together, creating a new
output layer. Weighting factors can be applied to one or more
layers to add more importance to those layers in the final sum.
index mapa reference map that outlines the mapped area,
identifies all of the component maps for the area if several map
sheets are required, and identifies all adjacent map sheets.
Indian Remote Sensing Satellite(IRS) satellites operated by
Space Imaging, including IRS-1A, IRS-1B, IRS-1C, and IRS-
1D.
indicesthe process used to create output images by
mathematically combining the DN values of different bands.
informationsomething that is independently meaningful, as
opposed to data, which are not independently meaningful.
Initial Graphics Exchange Standard files(IGES) files often
used to transfer CAD data between systems. IGES Version 3.0
format, published by the U.S. Department of Commerce, is in
uncompressed ASCII format only.
initializationa process that ensures all values in a file or in
computer memory are equal until additional information is
added or processed to overwrite these values. Usually the
initialization value is 0. If initialization is not performed on a
data file, there could be random data values in the file.
inset mapa map that is an enlargement of some congested area
of a smaller scale map, and that is usually placed on the same
sheet with the smaller scale main map.
instantaneous field of view(IFOV) a measure of the area viewed
by a single detector on a scanning system in a given instant in
time.
intensitya component of IHS (intensity, hue, saturation), which is
the overall brightness of the scene and varies from 0 (black) to
1 (white).
interferometrymethod of subtracting the phase of one SAR
image from another to derive height information.
interior orientationdefines the geometry of the sensor that
captured a particular image.
Internal Average Relative Reflectance(IARR) a technique
designed to compensate for atmospheric contamination of the
spectra.
International Map Committee(IMC) located in London, the
committee responsible for creating the International Map of the
World series.
International Map of the World(IMW) a series of maps
produced by the International Map Committee. Maps are in
1:1,000,000 scale.
intersectionthe area or set that is common to two or more input
areas or sets.
interval dataa type of data in which thematic class values have a
natural sequence, and in which the distances between values
are meaningful.
Inverse Fast Fourier Transform(IFFT) used after the Fast
Fourier Transform to transform a Fourier image back into the
spatial domain. See also Fast Fourier Transform.
IRinfrared portion of the electromagnetic spectrum. See also
electromagnetic spectrum.
IRSsee Indian Remote Sensing Satellite.
isarithmic mapa map that uses isarithms (lines connecting points
of the same value for any of the characteristics used in the
representation of surfaces) to represent a statistical surface.
Also called an isometric map.
ISODATA clusteringsee Iterative Self-Organizing Data
Analysis Technique.
islanda single line that connects with itself.
isopleth mapa map on which isopleths (lines representing
quantities that cannot exist at a point, such as population
density) are used to represent some selected quantity.
iterativea term used to describe a process in which some
operation is performed repeatedly.
Iterative Self-Organizing Data Analysis Technique(ISODATA
clustering) a method of clustering that uses spectral distance
as in the sequential method, but iteratively classifies the pixels,
redefines the criteria for each class, and classifies again, so that
the spectral distance patterns in the data gradually emerge.
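A heavily simplified NumPy sketch of the core classify-and-redefine loop described above; the full ISODATA algorithm also merges, splits, and deletes clusters, which this sketch omits, and all names are illustrative:

    import numpy as np

    def isodata_core(pixels, n_classes=5, iterations=10):
        # pixels: an (n, bands) array of measurement vectors
        # start with arbitrary cluster means spread across the data range
        means = np.linspace(pixels.min(axis=0), pixels.max(axis=0), n_classes)
        for _ in range(iterations):
            # classify: assign each pixel to the nearest cluster mean (spectral distance)
            d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # redefine the criteria: recompute each cluster mean from its pixels
            for k in range(n_classes):
                if np.any(labels == k):
                    means[k] = pixels[labels == k].mean(axis=0)
        return labels, means

    pixels = np.random.rand(1000, 4) * 255     # 1,000 pixels, four bands (illustrative)
    labels, means = isodata_core(pixels)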
J JERS-1 (Fuyo 1)the Japanese radar satellite launched in
February 1992.
Jet Propulsion Laboratories(JPL) the lead U.S. center for
robotic exploration of the solar system. JPL is managed for
NASA by the California Institute of Technology. For more
information, see the JPL web site at http://www.jpl.nasa.gov
(National Aeronautics and Space Administration, 1999).
JFIFsee JPEG File Interchange Format.
jointhe process of interactively entering the side lot lines when the
front and rear lines have already been established.
Joint Photographic Experts Group(JPEG) 1. a group
responsible for creating a set of compression techniques. 2.
Compression techniques are also called JPEG.
JPEGsee Joint Photographic Experts Group.
JPEG File Interchange Format(JFIF) standard file format used
to store JPEG-compressed imagery.
JPLsee Jet Propulsion Laboratories.
K
Kappa coefficienta number that expresses the proportionate
reduction in error generated by a classification process
compared with the error of a completely random classification.
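A minimal NumPy sketch of the Kappa computation from an error matrix (the counts are made up): it compares observed agreement with the agreement expected from a completely random classification.

    import numpy as np

    # rows = reference classes, columns = classified classes (illustrative counts)
    error_matrix = np.array([[50,  3,  2],
                             [ 4, 60,  6],
                             [ 1,  5, 45]], dtype=float)

    total = error_matrix.sum()
    observed = np.trace(error_matrix) / total          # observed (overall) agreement
    expected = (error_matrix.sum(axis=0) * error_matrix.sum(axis=1)).sum() / total ** 2
    kappa = (observed - expected) / (1 - expected)     # proportionate reduction in error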
kernelsee convolution kernel.
L
labelin annotation, the text that conveys important information to
the reader about map features.
label pointa point within a polygon that defines that polygon.
LACsee local area coverage.
.LAN filesmultiband ERDAS Ver. 7.X image files (the name
originally derived from the Landsat satellite). LAN files usually
contain raw or enhanced remotely sensed data.
land cover mapa map of the visible ground features of a scene,
such as vegetation, bare land, pasture, urban areas, etc.
Landsata series of Earth-orbiting satellites that gather MSS and
TM imagery, operated by EOSAT.
large-scalea description used to represent a map or data file
having a large ratio between the area on the map (such as
inches or pixels), and the area that is represented (such as
feet). In large-scale image data, each pixel represents a small
area on the ground, such as SPOT data, with a spatial
resolution of 10 or 20 meters.
Lat/LonLatitude/Longitude, a map coordinate system.
layer1. a band or channel of data. 2. a single band or set of three
bands displayed using the red, green, and blue color guns of
the ERDAS IMAGINE Viewer. A layer could be a remotely
sensed image, an aerial photograph, an annotation layer, a
vector layer, an area of interest layer, etc. 3. a component of a
GIS data base that contains all of the data for one theme. A
layer consists of a thematic image file, and may also include
attributes.
least squares correlationuses the least squares estimation to
derive parameters that best fit a search window to a reference
window.
least squares regressionthe method used to calculate the
transformation matrix from the GCPs. This method is discussed
in statistics textbooks.
legendthe reference that lists the colors, symbols, line patterns,
shadings, and other annotation that is used on a map, and their
meanings. The legend often includes the map's title, scale,
origin, and other information.
letteringthe manner in which place names and other labels are
added to a map, including letter spacing, orientation, and
position.
level 1A (SPOT)an image that corresponds to raw sensor data to
which only radiometric corrections have been applied.
level 1B (SPOT)an image that has been corrected for the Earth's
rotation and to make all pixels 10 × 10 meters on the ground. Pixels
are resampled from the level 1A sensor data by cubic
polynomials.
level slicethe process of applying a color scheme by equally
dividing the input values (image memory values) into a certain
number of bins, and applying the same color to all pixels in
each bin. Usually, a ROYGBIV (red, orange, yellow, green, blue,
indigo, violet) color scheme is used.
line1. a vector data element consisting of a line (the set of pixels
directly between two points), or an unclosed set of lines. 2. a
row of pixels in a data file.
line dropouta data error that occurs when a detector in a satellite
either completely fails to function or becomes temporarily
overloaded during a scan. The result is a line, or partial line of
data with incorrect data file values creating a horizontal streak
until the detector(s) recovers, if it recovers.
lineara description of a function that can be graphed as a straight
line or a series of lines. Linear equations (transformations) can
generally be expressed in the form of the equation of a line or
plane. Also called 1st-order.
linear contrast stretchan enhancement technique that outputs
new values at regular intervals.
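A minimal NumPy sketch of a linear contrast stretch that maps the input minimum and maximum to 0 and 255 (illustrative only; the software offers many variations on this):

    import numpy as np

    band = np.random.randint(30, 180, (512, 512)).astype(float)  # a low-contrast band
    lo, hi = band.min(), band.max()
    stretched = np.round((band - lo) / (hi - lo) * 255).astype(np.uint8)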
linear transformationa 1st-order rectification. A linear
transformation can change location in X and/or Y, scale in X
and/or Y, skew in X and/or Y, and rotation.
line of sightin perspective views, the point(s) and direction from
which the viewer is looking into the image.
local area coverage(LAC) a type of NOAA AVHRR data with a
spatial resolution of 1.1 × 1.1 km.
logical recorda series of bytes that form a unit on a 9-track tape.
For example, all the data for one line of an image may form a
logical record. One or more logical records make up a physical
record on a tape.
long wave infrared region(LWIR) the thermal or far-infrared
region of the electromagnetic spectrum.
lookup table(LUT) an ordered set of numbers that is used to
perform a function on a set of input values. To display or print
an image, lookup tables translate data file values into
brightness values.
lossya term describing a data compression algorithm which
actually reduces the amount of information in the data, rather
than just the number of bits used to represent that information
(Free On-Line Dictionary of Computing, 1999c).
low-frequency kernela convolution kernel that decreases spatial
frequency. Also called low-pass kernel.
LUTsee lookup table.
LWIRsee long wave infrared region.
M
Machine Independent Format(MIF) a format designed to store
data in a way that it can be read by a number of different
machines.
magnifythe process of displaying one file pixel over a block of
display pixels. For example, if the magnification factor is 3,
then each file pixel takes up a block of 3 × 3 display pixels.
Magnification differs from zooming in that the magnified image
is loaded directly to image memory.
magnitudean element of an electromagnetic wave. Magnitude of
a wave decreases exponentially as the distance from the
transmitter increases.
Mahalanobis distancea classification decision rule that is similar
to the minimum distance decision rule, except that a
covariance matrix is used in the equation.
majoritya neighborhood analysis technique that outputs the most
common value of the data file values in a user-specified
window.
MAPsee Maximum A Posteriori.
mapa graphic representation of spatial relationships on the Earth
or other planets.
map coordinatesa system of expressing locations on the Earth's
surface using a particular map projection, such as UTM, State
Plane, or Polyconic.
map framean annotation element that indicates where an image
is placed in a map composition.
map projectiona method of representing the three-dimensional
spherical surface of a planet on a two-dimensional map surface.
All map projections involve the transfer of latitude and
longitude onto an easily flattened surface.
matrixa set of numbers arranged in a rectangular array. If a
matrix has i rows and j columns, it is said to be an i × j matrix.
matrix analysisa method of combining two thematic layers in
which the output layer contains a separate class for every
combination of two input classes.
matrix objectin Model Maker (Spatial Modeler), a set of numbers
in a two-dimensional array.
maximuma neighborhood analysis technique that outputs the
greatest value of the data file values in a user-specified
window.
Maximum A Posteriori(MAP) a filter (Gamma-MAP) that is
designed to estimate the original DN value of a pixel, which it
assumes is between the local average and the degraded DN.
maximum likelihooda classification decision rule based on the
probability that a pixel belongs to a particular class. The basic
equation assumes that these probabilities are equal for all
classes, and that the input bands have normal distributions.
.mdl filean ERDAS IMAGINE script model created with the Spatial
Modeler Language.
mean1. the statistical average; the sum of a set of values divided
by the number of values in the set. 2. a neighborhood analysis
technique that outputs the mean value of the data file values in
a user-specified window.
mean vectoran ordered set of means for a set of variables
(bands). For a data file, the mean vector is the set of means for
all bands in the file.
measurement vectorthe set of data file values for one pixel in all
bands of a data file.
median1. the central value in a set of data such that an equal
number of values are greater than and less than the median.
2. a neighborhood analysis technique that outputs the median
value of the data file values in a user-specified window.
megabyte(Mb) about one million bytes.
memory residenta term referring to the occupation of a part of a
computer's RAM (random access memory), so that a program
is available for use without being loaded into memory from
disk.
mensurationthe measurement of linear or areal distance.
meridiana line of longitude, going north and south. See
geographical coordinates.
MIFsee Machine Independent Format.
minimuma neighborhood analysis technique that outputs the
least value of the data file values in a user-specified window.
minimum distancea classification decision rule that calculates
the spectral distance between the measurement vector for
each candidate pixel and the mean vector for each signature.
Also called spectral distance.
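A minimal NumPy sketch of the minimum distance decision rule for a single pixel (the signature means are made up; real ones come from training samples):

    import numpy as np

    mean_vectors = np.array([[ 40,  35,  30],     # e.g., water
                             [ 90, 110,  60],     # e.g., vegetation
                             [150, 140, 130]],    # e.g., bare soil
                            dtype=float)

    pixel = np.array([95, 105, 58], dtype=float)  # measurement vector for one candidate pixel

    # spectral (Euclidean) distance from the pixel to each signature mean
    distances = np.sqrt(((mean_vectors - pixel) ** 2).sum(axis=1))
    assigned_class = int(np.argmin(distances))    # index of the closest signature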
minoritya neighborhood analysis technique that outputs the least
common value of the data file values in a user-specified
window.
modethe most commonly-occurring value in a set of data. In a
histogram, the mode is the peak of the curve.
modelin a GIS, the set of expressions, or steps, that defines your
criteria and creates an output layer.
modelingthe process of creating new layers from combining or
operating upon existing layers. Modeling allows the creation of
new classes from existing classes and the creation of a small
set of images, perhaps even a single image, which, at a
glance, contains many types of information about a scene.
modified projectiona map projection that is a modified version
of another projection. For example, the Space Oblique Mercator
projection is a modification of the Mercator projection.
monochrome imagean image produced from one band or layer,
or contained in one color gun of the display device.
morphometric mapa map representing morphological features of
the Earth's surface.
mosaickingthe process of piecing together images side by side,
to create a larger image.
MrSIDsee Multiresolution Seamless Image Database.
MSSsee multispectral scanner.
Multiresolution Seamless Image Database(MrSID) a wavelet
transform-based compression algorithm designed by
LizardTech, Inc.
multispectral classificationthe process of sorting pixels into a
finite number of individual classes, or categories of data, based
on data file values in multiple bands. See also classification.
multispectral imagerysatellite imagery with data recorded in
two or more bands.
multispectral scanner(MSS) Landsat satellite data acquired in
four bands with a spatial resolution of 57 × 79 meters.
multitemporaldata from two or more different dates.
N
NAD27see North America Datum 1927.
NAD83see North America Datum 1983.
nadirthe area on the ground directly beneath a scanner's
detectors.
nadir linethe average of the left and right edge lines of a Landsat
image.
nadir pointthe center of the nadir line in vertically viewed
imagery.
NASAsee National Aeronautics and Space Administration.
National Aeronautics and Space Administration(NASA) an
organization that studies outer space. For more information,
visit the NASA web site at http://www.nasa.gov.
National Imagery and Mapping Agency(NIMA) formerly DMA.
The agency was formed in October of 1996. NIMA supplies
current imagery and geospatial data (National Imagery and
Mapping Agency, 1998).
National Imagery Transmission Format Standard(NITFS) a
format designed to package imagery with complete annotation,
text attachments, and imagery-associated metadata (Jordan
and Beck, 1999).
National Ocean Service(NOS) the organization that created a
zone numbering system for the State Plane coordinate system.
A division of NOAA. For more information, visit the NOS web
site at http://www.nos.noaa.gov.
National Oceanic and Atmospheric Administration(NOAA) an
organization that studies weather, water bodies, and
encourages conservation. For more information, visit the NOAA
web site at http://www.noaa.gov.
Navigation System with Time and Ranging(NAVSTAR)
satellite launched in 1978 for collection of GPS data.
NAVSTARsee Navigation System with Time and Ranging.
NDVIsee Normalized Difference Vegetation Index.
nearest neighbora resampling method in which the output data
file value is equal to the input pixel that has coordinates closest
to the retransformed coordinates of the output pixel.
neatlinea rectangular border printed around a map. On scaled
maps, neatlines usually have tick marks that indicate intervals
of map coordinates or distance.
negative inclinationthe sensors are tilted in increments of 0.6°
to a maximum of 27° to the east.
neighborhood analysisany image processing technique that
takes surrounding pixels into consideration, such as
convolution filtering and scanning.
NIMAsee National Imagery and Mapping Agency.
9-trackCCTs that hold digital data.
NITFSsee National Imagery Transmission Format Standard.
NOAAsee National Oceanic and Atmospheric
Administration.
nodethe ending points of a line. See from-node and to-node.
nominal dataa type of data in which classes have no inherent
order, and therefore are qualitative.
nonlineardescribing a function that cannot be expressed as the
graph of a line or in the form of the equation of a line or plane.
Nonlinear equations usually contain expressions with
exponents. Second-order (2nd-order) or higher-order
equations and transformations are nonlinear.
nonlinear transformationa 2nd-order or higher rectification.
nonparametric signaturea signature for classification that is
based on polygons or rectangles that are defined in the feature
space image for the image file. There is no statistical basis for
a nonparametric signature; it is simply an area in a feature
space image.
normalthe state of having a normal distribution.
normal distributiona symmetrical data distribution that can be
expressed in terms of the mean and standard deviation of the
data. The normal distribution is the most widely encountered
model for probability, and is characterized by the bell curve.
Also called Gaussian distribution.
normalizea process that makes an image appear as if it were a
flat surface. This technique is used to reduce topographic
effect.
Normalized Difference Vegetation Index(NDVI) the formula
for NDVI is (IR - R) / (IR + R), where IR stands for the infrared portion
of the electromagnetic spectrum, and R stands for the red
portion of the electromagnetic spectrum. NDVI finds areas of
vegetation in imagery.
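A minimal NumPy sketch of the formula above, with a small constant added to the denominator to avoid division by zero (the band arrays are illustrative placeholders):

    import numpy as np

    red = np.random.randint(0, 256, (512, 512)).astype(float)  # red band
    ir = np.random.randint(0, 256, (512, 512)).astype(float)   # infrared band

    ndvi = (ir - red) / (ir + red + 1e-6)   # values near +1 indicate vegetation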
North America Datum 1927(NAD27) a datum created in 1927
that is based on the Clarke 1866 spheroid. Commonly used in
conjunction with the State Plane coordinate system.
North America Datum 1983(NAD83) a datum created in 1983
that is based on the GRS 1980 spheroid. Commonly used in
conjunction with the State Plane coordinate system.
NOSsee National Ocean Service.
NPO Mashinostroeniaa company based in Russia that develops
satellites, such as Almaz 1-B, for GIS application.
number mapsmaps that output actual data file values or
brightness values, allowing the analysis of the values of every
pixel in a file or on the display screen.
numeric keypadthe set of numeric and/or mathematical operator
keys (+, -, etc.) that is usually on the right side of the
keyboard.
Nyquistin image registration, the principle that the original
continuous function can be reconstructed from the sampled data,
and that the phase function can be reconstructed to much higher
resolution.
O
objectin models, an input to or output from a function. See
matrix object, raster object, scalar object, table object.
oblique aspecta map projection that is not oriented around a pole
or the Equator.
observationin photogrammetric triangulation, a grouping of the
image coordinates for a GCP.
off-nadirany point that is not directly beneath a scanner's
detectors, but off to an angle. The SPOT scanner allows off-
nadir viewing.
1:24,0001:24,000 scale data, also called 7.5-minute DEM,
available from USGS. It is usually referenced to the UTM
coordinate system and has a spatial resolution of 30 × 30
meters.
1:250,0001:250,000 scale DEM data available from USGS.
Available only in arc/second format.
opacitya measure of how opaque, or solid, a color is displayed in
a raster layer.
operating system(OS) the most basic means of communicating
with the computer. It manages the storage of information in
files and directories, input from devices such as the keyboard
and mouse, and output to devices such as the monitor.
orbita circular, north-south and south-north path that a satellite
travels above the Earth.
orderthe complexity of a function, polynomial expression, or
curve. In a polynomial expression, the order is simply the
highest exponent used in the polynomial. See also linear,
nonlinear.
ordinal dataa type of data that includes discrete lists of classes
with an inherent order, such as classes of streams: first order,
second order, third order, etc.
orientation anglethe angle between a perpendicular to the
center scan line and the North direction in a satellite scene.
orthographican azimuthal projection with an infinite perspective.
orthocorrectionsee orthorectification.
orthoimagesee digital orthophoto.
orthomapan image map product produced from orthoimages, or
orthoimage mosaics, that is similar to a standard map in that it
usually includes additional information, such as map coordinate
grids, scale bars, north arrows, and other marginalia.
orthorectificationa form of rectification that corrects for terrain
displacement and can be used if a DEM of the study area is
available.
OSsee operating system.
outline mapa map showing the limits of a specific set of mapping
entities such as counties. Outline maps usually contain a very
small number of details over the desired boundaries with their
descriptive codes.
overlay1. a function that creates a composite file containing either
the minimum or the maximum class values of the input files.
Overlay sometimes refers generically to a combination of
layers. 2. the process of displaying a classified file over the
original image to inspect the classification.
overlay filean ERDAS IMAGINE annotation file (.ovr extension).
.ovr filean ERDAS IMAGINE annotation file.
P
packto store data in a way that conserves tape or disk space.
panchromatic imagerysingle-band or monochrome satellite
imagery.
paneled mapa map designed to be spliced together into a large
paper map. Therefore, neatlines and tick marks appear on the
outer edges of the large map.
pairwise modean operation mode in rectification that allows the
registration of one image to an image in another Viewer, a map
on a digitizing tablet, or coordinates entered at the keyboard.
parallaxdisplacement of a GCP appearing in a stereopair as a
function of the position of the sensors at the time of image
capture. You can adjust parallax in both the X and the Y
direction so that the image point in both images appears in the
same image space.
parallela line of latitude, going east and west.
parallelepiped1. a classification decision rule in which the data
file values of the candidate pixel are compared to upper and
lower limits. 2. the limits of a parallelepiped classification,
especially when graphed as rectangles.
parameter1. any variable that determines the outcome of a
function or operation. 2. the mean and standard deviation of
data, which are sufficient to describe a normal curve.
parametric signaturea signature that is based on statistical
parameters (e.g., mean and covariance matrix) of the pixels
that are in the training sample or cluster.
passive sensorssolar imaging sensors that can only receive
radiation waves and cannot transmit radiation.
paththe drive, directories, and subdirectories that specify the
location of a file.
pattern recognitionthe science and art of finding meaningful
patterns in data, which can be extracted through classification.
PCsee principal components.
PCAsee principal components analysis.
perspective center1. a point in the image coordinate system
defined by the x and y coordinates of the principal point and the
focal length of the sensor. 2. after triangulation, a point in the
ground coordinate system that defines the sensor's position
relative to the ground.
perspective projectionthe projection of points by straight lines
from a given perspective point to an intersection with the plane
of projection.
phasean element of an electromagnetic wave.
phase flatteningin IMAGINE IFSAR DEM, the process of subtracting,
from the actual phase function recorded in the interferogram, the
phase function that would result if the imaging area were flat.
phase unwrappingIn IMAGINE IFSAR DEM, the process of taking
a wrapped phase function and reconstructing the continuous
function from it.
photogrammetric quality scannersspecial devices capable of
high image quality and excellent positional accuracy. Use of
this type of scanner results in geometric accuracies similar to
traditional analog and analytical photogrammetric instruments.
photogrammetrythe "art, science and technology of obtaining
reliable information about physical objects and the
environment through the process of recording, measuring and
interpreting photographic images and patterns of
electromagnetic radiant imagery and other phenomena"
(American Society of Photogrammetry, 1980).
physical recorda consecutive series of bytes on a 9-track tape
followed by a gap, or blank space, on the tape.
piecewise linear contrast stretcha spectral enhancement
technique used to enhance a specific portion of data by dividing
the lookup table into three sections: low, middle, and high.
pixelabbreviated from picture element; the smallest part of a
picture (image).
pixel coordinate systema coordinate system with its origin in
the upper left corner of the image, the x-axis pointing to the
right, the y-axis pointing downward, and the units in pixels.
pixel depththe number of bits required to store all of the data file
values in a file. For example, data with a pixel depth of 8, or 8-
bit data, have 256 values (2^8 = 256), ranging from 0 to 255.
pixel sizethe physical dimension of a single light-sensitive
element (13 × 13 microns).
planar coordinatescoordinates that are defined by a column and
row position on a grid (x,y).
planar projectionsee azimuthal projection.
plane table photogrammetryprior to the invention of the
airplane, photographs taken on the ground were used to
extract the geometric relationships between objects using the
principles of Descriptive Geometry.
planimetric mapa map that correctly represents horizontal
distances between objects.
plan symbolan annotation symbol that is formed after the basic
outline of the object it represents. For example, the symbol for
a house might be a square, since most houses are rectangular.
point1. an element consisting of a single (x,y) coordinate pair.
Also called grid cell. 2. a vertex of an element. Also called a
node.
point IDin rectification, a name given to GCPs in separate files
that represent the same geographic location.
point modea digitizing mode in which one vertex is generated
each time a keypad button is pressed.
polar aspecta map projection that is centered around a pole.
polarizationthe direction of the electric field component with the
understanding that the magnetic field is perpendicular to it.
polygona set of closed line segments defining an area.
polynomiala mathematical expression consisting of variables and
coefficients. A coefficient is a constant that is multiplied by a
variable in the expression.
positive inclinationthe sensors are tilted in increments of 0.6° to
a maximum of 27° to the west.
primary colorscolors from which all other available colors are
derived. On a display monitor, the primary colors red, green,
and blue are combined to produce all other colors. On a color
printer, cyan, yellow, and magenta inks are combined.
principal components(PC) the transects of a scatterplot of two
or more bands of data that represent the widest variance and
successively smaller amounts of variance that are not already
represented. Principal components are orthogonal
(perpendicular) to one another. In principal components
analysis, the data are transformed so that the principal
components become the axes of the scatterplot of the output
data.
principal components analysis(PCA) a method of data
compression that allows redundant data to be compressed into
fewer bands (Jensen, 1996; Faust, 1989).
principal component banda band of data that is output by
principal components analysis. Principal component bands are
uncorrelated and nonredundant, since each principal
component describes different variance within the original
data.
principal components analysisthe process of calculating
principal components and outputting principal component
bands. It allows redundant data to be compacted into fewer
bands (i.e., the dimensionality of the data is reduced).
principal point (X_p, Y_p)the point in the image plane onto which
the perspective center is projected, located directly beneath
the interior orientation.
printera device that prints text, full color imagery, and/or
graphics. See color printer, text printer.
profilea row of data file values from a DEM or DTED file. The
profiles of DEM and DTED run south to north (i.e., the first pixel
of the record is the southernmost pixel).
profile symbolan annotation symbol that is formed like the profile
of an object. Profile symbols generally represent vertical
objects such as trees, windmills, oil wells, etc.
proximity analysisa technique used to determine which pixels of
a thematic layer are located at specified distances from pixels
in a class or classes. A new layer is created that is classified by
the distance of each pixel from specified classes of the input
layer.
pseudo colora method of displaying an image (usually a thematic
layer) that allows the classes to have distinct colors. The class
values of the single band file are translated through all three
function memories that store a color scheme for the image.
pseudo nodea single line that connects with itself (an island), or
where only two lines intersect.
pseudo projectiona map projection that has only some of the
characteristics of another projection.
pushbrooma scanner in which all scanning parts are fixed, and
scanning is accomplished by the forward motion of the scanner,
such as the SPOT scanner.
pyramid layersimage layers which are successively reduced by
the power of 2 and resampled. Pyramid layers enable large
images to display faster.
Q
quadrangle1. any of the hardcopy maps distributed by USGS
such as the 7.5-minute quadrangle or the 15-minute
quadrangle. 2. one quarter of a full Landsat TM scene.
Commonly called a quad.
qualitative mapa map that shows the spatial distribution or
location of a kind of nominal data. For example, a map showing
corn fields in the US would be a qualitative map. It would not
show how much corn is produced in each location, or
production relative to other areas.
quantitative mapa map that displays the spatial aspects of
numerical data. A map showing corn production (volume) in
each area would be a quantitative map.
R
radar datathe remotely sensed data that are produced when a
radar transmitter emits a beam of micro or millimeter waves,
the waves reflect from the surfaces they strike, and the
backscattered radiation is detected by the radar system's
receiving antenna, which is tuned to the frequency of the
transmitted waves.
RADARSATa Canadian radar satellite.
radiative transfer equationsthe mathematical models that
attempt to quantify the total atmospheric effect of solar
illumination.
radiometric correctionthe correction of variations in data that
are not caused by the object or scene being scanned, such as
scanner malfunction and atmospheric interference.
radiometric enhancementan enhancement technique that deals
with the individual values of pixels in an image.
radiometric resolutionthe dynamic range, or number of possible
data file values, in each band. This is referred to by the number
of bits into which the recorded energy is divided. See pixel
depth.
RAMsee random-access memory.
random-access memory(RAM) memory used for applications
and data storage on a CPU (Free On-Line Dictionary of
Computing, 1999d).
ranka neighborhood analysis technique that outputs the number
of values in a user-specified window that are less than the
analyzed value.
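The following is a minimal Python sketch of the rank calculation for one window position, assuming a square moving window of data file values; the window contents are hypothetical.

    import numpy as np

    def rank_of_center(window):
        # Count how many values in the window are less than the center (analyzed) value.
        window = np.asarray(window, dtype=float)
        center = window[window.shape[0] // 2, window.shape[1] // 2]
        return int(np.sum(window < center))

    # 3 x 3 window: five neighbors are less than the center value of 14.
    print(rank_of_center([[10, 12, 20],
                          [ 9, 14, 15],
                          [13, 11, 18]]))   # prints 5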
RARsee Real-Aperture Radar.
raster datadata that are organized in a grid of columns and rows.
Raster data usually represent a planar graph or geographical
area. Raster data in ERDAS IMAGINE are stored in image files.
raster objectin Model Maker (Spatial Modeler), a single raster
layer or set of layers.
Raster Product Format(RPF) Data from NIMA, used primarily for
military purposes. Organized in 1536 × 1536 frames, with an
internal tile size of 256 × 256 pixels.
raster regiona contiguous group of pixels in one GIS class. Also
called clump.
ratio dataa data type in which thematic class values have the
same properties as interval values, except that ratio values
have a natural zero or starting point.
RDBMSsee relational database management system.
RDGPSsee Real Time Differential GPS.
Real-Aperture Radar(RAR) a radar sensor that uses its side-
looking, fixed antenna to transmit and receive the radar
impulse. For a given position in space, the resolution of the
resultant image is a function of the antenna size. The signal is
processed independently of subsequent return signals.
Real Time Differential GPS(RDGPS) takes the Differential
Correction technique one step further by having the base
station communicate the error vector via radio to the field unit
in real time.
recodingthe assignment of new values to one or more classes.
record1. the set of all attribute data for one class of feature. 2.
the basic storage unit on a 9-track tape.
rectificationthe process of making image data conform to a map
projection system. In many cases, the image must also be
oriented so that the north direction corresponds to the top of
the image.
rectified coordinatesthe coordinates of a pixel in a file that has
been rectified, which are extrapolated from the GCPs. Ideally,
the rectified coordinates for the GCPs are exactly equal to the
reference coordinates. Because there is often some error
tolerated in the rectification, this is not always the case.
reducethe process of skipping file pixels when displaying an image
so that a larger area can be represented on the display screen.
For example, a reduction factor of 3 would cause only the pixel
at every third row and column to be displayed, so that each
displayed pixel represents a 3 × 3 block of file pixels.
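The following is a minimal Python sketch, assuming the skip is a plain subsample by row and column; it illustrates the effect only and is not the Viewer's display code.

    import numpy as np

    def reduce_for_display(image, factor):
        # Keep only the pixel at every factor-th row and column.
        return image[::factor, ::factor]

    full = np.arange(36).reshape(6, 6)
    small = reduce_for_display(full, 3)
    print(small.shape)   # (2, 2); each kept pixel stands for a 3 x 3 block of file pixels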
reference coordinatesthe coordinates of the map or reference
image to which a source (input) image is being registered.
GCPs consist of both input coordinates and reference
coordinates for each point.
reference pixelsin classification accuracy assessment, pixels for
which the correct GIS class is known from ground truth or other
data. The reference pixels can be selected by you, or randomly
selected.
reference planeIn a topocentric coordinate system, the
tangential plane at the center of the image on the Earth
ellipsoid, on which the three perpendicular coordinate axes are
defined.
reference systemthe map coordinate system to which an image
is registered.
reference windowthe source window on the first image of an
image pair, which remains at a constant location. See also
correlation windows and search windows.
reflection spectrathe electromagnetic radiation wavelengths
that are reflected by specific materials of interest.
registrationthe process of making image data conform to another
image. A map coordinate system is not necessarily involved.
regular block of photosa rectangular block in which the number
of photos in each strip is the same; this includes a single strip
or a single stereopair.
relational database management system(RDBMS) system
that stores SDE database layers.
relation based matchingan image matching technique that uses
the image features and the relation among the features to
automatically recognize the corresponding image structures
without any a priori information.
relief mapa map that appears to be or is three-dimensional.
remote sensingthe measurement or acquisition of data about an
object or scene by a satellite or other instrument above or far
from the object. Aerial photography, satellite imagery, and
radar are all forms of remote sensing.
replicative symbolan annotation symbol that is designed to look
like its real-world counterpart. These symbols are often used to
represent trees, railroads, houses, etc.
representative fractionthe ratio or fraction used to denote map
scale.
resamplingthe process of extrapolating data file values for the
pixels in a new grid when data have been rectified or registered
to another image.
rescalingthe process of compressing data from one format to
another. In ERDAS IMAGINE, this typically means compressing
a 16-bit file to an 8-bit file.
reshapethe process of redigitizing a portion of a line.
residualsin rectification, the distances between the source and
retransformed coordinates in one direction. In ERDAS
IMAGINE, they are shown for each GCP. The X residual is the
distance between the source X coordinate and the
retransformed X coordinate. The Y residual is the distance
between the source Y coordinate and the retransformed Y
coordinate.
resolutiona level of precision in data. For specific types of
resolution see display resolution, radiometric resolution,
spatial resolution, spectral resolution, and temporal
resolution.
resolution mergingthe process of sharpening a lower-resolution
multiband image by merging it with a higher-resolution
monochrome image.
retransformedin the rectification process, a coordinate in the
reference (output) coordinate system that has been transformed
back into the input coordinate system. The amount of error in
the transformation can be determined by computing the
difference between the original coordinates and the
retransformed coordinates. See RMS error.
RGBred, green, blue. The primary additive colors that are used on
most display hardware to display imagery.
RGB clusteringa clustering method for 24-bit data (three 8-bit
bands) that plots pixels in three-dimensional spectral space,
and divides that space into sections that are used to define
clusters. The output color scheme of an RGB-clustered image
resembles that of the input file.
rhumb linea line of true direction that crosses meridians at a
constant angle.
right-hand rulea convention in three-dimensional coordinate
systems (X,Y,Z) that determines the location of the positive Z
axis. If you place your right hand fingers on the positive X axis
and curl your fingers toward the positive Y axis, the direction
your thumb is pointing is the positive Z axis direction.
RMS errorthe distance between the input (source) location of a
GCP and the retransformed location for the same GCP. RMS
error is calculated with a distance equation.
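For a single GCP, the distance equation combines the two residuals; as a sketch of the standard form, with XR and YR the X and Y residuals for that GCP:

    RMS error = sqrt(XR^2 + YR^2)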
RMSE(Root Mean Square Error) used to measure how well a
specific calculated solution fits the original data. For each
observation of a phenomena, a variation can be computed
between the actual observation and a calculated value. (The
method of obtaining a calculated value is application-specific.)
Each variation is then squared. The sum of these squared
values is divided by the number of observations and then the
square root is taken. This is the RMSE value.
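The following is a minimal Python sketch of that recipe; the observed and calculated values are hypothetical.

    import numpy as np

    def rmse(observed, calculated):
        # Square each variation, average the squares, then take the square root.
        observed = np.asarray(observed, dtype=float)
        calculated = np.asarray(calculated, dtype=float)
        return float(np.sqrt(np.mean((observed - calculated) ** 2)))

    print(rmse([10.0, 12.5, 9.8], [10.4, 12.0, 10.1]))   # about 0.41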
roamthe process of moving across a display so that different areas
of the image appear on the display screen.
rootthe first part of a file name, which usually identifies the file's
specific contents.
ROYGBIVa color scheme ranging through red, orange, yellow,
green, blue, indigo, and violet at regular intervals.
RPFsee Raster Product Format.
rubber sheetingthe application of a nonlinear rectification (2nd-
order or higher).
S
samplesee training sample.
SARsee Synthetic Aperture Radar.
saturationa component of IHS which represents the purity of
color and also varies linearly from 0 to 1.
SCAsee suitability/capability analysis.
scale1. the ratio of distance on a map as related to the true
distance on the ground. 2. cell size. 3. the processing of values
through a lookup table.
scale bara graphic annotation element that describes map scale.
It shows the distance on paper that represents a geographical
distance on the map.
scalar objectin Model Maker (Spatial Modeler), a single numeric
value.
scaled mapa georeferenced map that is accurately arranged and
referenced to represent distances and locations. A scaled map
usually has a legend that includes a scale, such as 1 inch =
1000 feet. The scale is often expressed as a ratio like 1:12,000
where 1 inch on the map equals 12,000 inches on the ground.
scannerthe entire data acquisition system, such as the Landsat
TM scanner or the SPOT panchromatic scanner.
scanning1. the transfer of analog data, such as photographs,
maps, or another viewable image, into a digital (raster) format.
2. a process similar to convolution filtering that uses a kernel
for specialized neighborhood analyses, such as total, average,
minimum, maximum, boundary, and majority.
scatterplota graph, usually in two dimensions, in which the data
file values of one band are plotted against the data file values
of another band.
scenethe image captured by a satellite.
screen coordinatesthe location of a pixel on the display screen,
beginning with 0,0 in the upper left corner.
screen digitizingthe process of drawing vector graphics on the
display screen with a mouse. A displayed image can be used as
a reference.
script modelingthe technique of combining data layers in an
unlimited number of ways. Script modeling offers all of the
capabilities of graphical modeling with the ability to perform
more complex functions, such as conditional looping.
script modela model that is comprised of text only and is created
with the SML. Script models are stored in .mdl files.
SCSsee Soil Conservation Service.
SDsee standard deviation.
SDEsee Spatial Database Engine.
SDTSsee spatial data transfer standard.
SDTS Raster Profile and Extensions(SRPE) an SDTS profile
that covers gridded raster data.
search radiusin surfacing routines, the distance around each
pixel within which the software searches for terrain data points.
search windowscandidate windows on the second image of an
image pair that are evaluated relative to the reference window.
seata combination of an X-server and a host workstation.
Sea-viewing Wide Field-of-View Sensor(SeaWiFS) a sensor
located on many different satellites such as ORBVIEW's
OrbView-2, and NASA's SeaStar.
SeaWiFSsee Sea-viewing Wide Field-of-View Sensor.
secantthe intersection of two points or lines. In the case of conic
or cylindrical map projections, a secant cone or cylinder
intersects the surface of a globe at two circles.
Selective Availabilityintroduces a positional inaccuracy of up to
100 m to commercial GPS receivers.
sensora device that gathers energy, converts it to a digital value,
and presents it in a form suitable for obtaining information
about the environment.
separabilitya statistical measure of distance between two
signatures.
separability listinga report of signature divergence which lists
the computed divergence for every class pair and one band
combination. The listing contains every divergence value for
the bands studied for every possible pair of signatures.
sequential clusteringa method of clustering that analyzes pixels
of an image line by line and groups them by spectral distance.
Clusters are determined based on relative spectral distance and
the number of pixels per cluster.
serveron a computer in a network, a utility that makes some
resource or service available to the other machines on the
network (such as access to a tape drive).
shaded relief imagea thematic raster image that shows
variations in elevation based on a user-specified position of the
sun. Areas that would be in sunlight are highlighted and areas
that would be in shadow are shaded.
shaded relief mapa map of variations in elevation based on a
user-specified position of the sun. Areas that would be in
sunlight are highlighted and areas that would be in shadow are
shaded.
shapefilean ESRI vector format that contains spatial data.
Shapefiles have the .shp extension.
short wave infrared region(SWIR) the near-infrared and
middle-infrared regions of the electromagnetic spectrum.
SIsee image scale.
Side-looking Airborne Radar(SLAR) a radar sensor that uses an
antenna which is fixed below an aircraft and pointed to the side
to transmit and receive the radar signal.
signal based matchingsee area based matching.
Signal-to-Noise ratio(S/N) in hyperspectral image processing, a
ratio used to evaluate the usefulness or validity of a particular
band of data.
signaturea set of statistics that defines a training sample or
cluster. The signature is used in a classification process. Each
signature corresponds to a GIS class that is created from the
signatures with a classification decision rule.
skewa condition in satellite data, caused by the rotation of the
Earth eastward, which causes the position of the satellite
relative to the Earth to move westward. Therefore, each line of
data represents terrain that is slightly west of the data in the
previous line.
SLARsee Side-looking Airborne Radar.
slopethe change in elevation over a certain distance. Slope can be
reported as a percentage or in degrees.
slope imagea thematic raster image that shows changes in
elevation over distance. Slope images are usually color coded
to show the steepness of the terrain at each pixel.
slope mapa map that is color coded to show changes in elevation
over distance.
small-scalefor a map or data file, having a small ratio between
the area of the imagery (such as inches or pixels) and the area
that is represented (such as feet). In small-scale image data,
each pixel represents a large area on the ground, such as NOAA
AVHRR data, with a spatial resolution of 1.1 km.
SMLsee Spatial Modeler Language.
S/Nsee Signal-to-Noise ratio.
softcopy photogrammetrysee digital photogrammetry.
Soil Conservation Service(SCS) an organization that produces
soil maps (Fisher, 1991) with guidelines provided by the USDA.
SOMsee Space Oblique Mercator.
source coordinatesin the rectification process, the input
coordinates.
Spaceborne Imaging Radar(SIR-A, SIR-B, and SIR-C) the radar
sensors that fly aboard NASA space shuttles. SIR-A flew aboard
the 1981 NASA Space Shuttle Columbia. That data and SIR-B
data from a later Space Shuttle mission are still valuable
sources of radar data. The SIR-C sensor was launched in 1994.
Space Oblique Mercator(SOM) a projection available in ERDAS
IMAGINE that is nearly conformal and has little scale distortion
within the sensing range of an orbiting mapping satellite such
as Landsat.
spatial data transfer standard(SDTS) a robust way of
transferring Earth-referenced spatial data between dissimilar
computer systems with the potential for no information loss
(United States Geological Survey, 1999c).
Spatial Database Engine(SDE) An ESRI vector format that
manages a database theme. SDE allows you to access
databases that may contain large amounts of information
(Environmental Systems Research Institute, 1996).
spatial enhancementthe process of modifying the values of
pixels in an image relative to the pixels that surround them.
spatial frequencythe difference between the highest and lowest
values of a contiguous set of pixels.
Spatial Modeler Language(SML) a script language used
internally by Model Maker (Spatial Modeler) to execute the
operations specified in the graphical models you create. SML
can also be used to write application-specific models.
spatial resolutiona measure of the smallest object that can be
resolved by the sensor, or the area on the ground represented
by each pixel.
speckle noisethe light and dark pixel noise that appears in radar
data.
spectral distancethe distance in spectral space computed as
Euclidean distance in n-dimensions, where n is the number of
bands.
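The following is a minimal Python sketch, assuming the usual Euclidean form over the band values of two measurement vectors; the example vectors are hypothetical.

    import numpy as np

    def spectral_distance(pixel, mean_vector):
        # Euclidean distance between two n-band vectors in spectral space.
        pixel = np.asarray(pixel, dtype=float)
        mean_vector = np.asarray(mean_vector, dtype=float)
        return float(np.sqrt(np.sum((pixel - mean_vector) ** 2)))

    # Hypothetical 4-band pixel compared against a signature's mean vector.
    print(spectral_distance([87, 64, 110, 53], [90, 60, 102, 50]))   # about 9.9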
spectral enhancementthe process of modifying the pixels of an
image based on the original values of each pixel, independent
of the values of surrounding pixels.
spectral resolutionthe specific wavelength intervals in the
electromagnetic spectrum that a sensor can record.
spectral spacean abstract space that is defined by spectral units
(such as an amount of electromagnetic radiation). The notion
of spectral space is used to describe enhancement and
classification techniques that compute the spectral distance
between n-dimensional vectors, where n is the number of
bands in the data.
spectroscopythe study of the absorption and reflection of
electromagnetic radiation (EMR) waves.
spliced mapa map that is printed on separate pages, but intended
to be joined together into one large map. Neatlines and tick
marks appear only on the pages which make up the outer
edges of the whole map.
splinethe process of smoothing or generalizing all currently
selected lines using a specified grain tolerance during vector
editing.
splitthe process of making two lines from one by adding a node.
SPOTa series of Earth-orbiting satellites operated by the Centre
National d'Etudes Spatiales (CNES) of France.
SRPEsee SDTS Raster Profile and Extensions.
STAsee statistics file.
standard deviation(SD) 1. the square root of the variance of a
set of values which is used as a measurement of the spread of
the values. 2. a neighborhood analysis technique that outputs
the standard deviation of the data file values of a user-specified
window.
standard meridiansee standard parallel.
standard parallelthe line of latitude where the surface of a globe
conceptually intersects with the surface of the projection
cylinder or cone.
statementin script models, properly formatted lines that perform
a specific task in a model. Statements fall into the following
categories: declaration, assignment, show, view, set, macro
definition, and quit.
statistical clusteringa clustering method that tests 3 × 3 sets of
pixels for homogeneity, and builds clusters only from the
statistics of the homogeneous sets of pixels.
statistics file(STA) an ERDAS IMAGINE Ver. 7.X trailer file for
LAN data that contains statistics about the data.
stereographic1. the process of projecting onto a tangent plane
from the opposite side of the Earth. 2. the process of acquiring
images at angles on either side of the vertical.
stereopaira set of two remotely-sensed images that overlap,
providing two views of the terrain in the overlap area.
stereo-sceneachieved when two images of the same area are
acquired on different days from different orbits, one taken east
of the vertical, and the other taken west of the nadir.
stream modea digitizing mode in which vertices are generated
continuously while the digitizer keypad is in proximity to the
surface of the digitizing tablet.
stringa line of text. A string usually has a fixed length (number of
characters).
strip of photographsconsists of images captured along a flight-
line, normally with an overlap of 60% for stereo coverage. All
photos in the strip are assumed to be taken at approximately
the same flying height and with a constant distance between
exposure stations. Camera tilt relative to the vertical is
assumed to be minimal.
stripinga data error that occurs if a detector on a scanning system
goes out of adjustment; that is, it provides readings
consistently greater than or less than the other detectors for
the same band over the same ground cover. Also called
banding.
structure based matchingsee relation based matching.
subsettingthe process of breaking out a portion of a large image
file into one or more smaller files.
suitability/capability analysis(SCA) a system designed to
analyze many data layers to produce a plan map. Discussed in
McHarg's book Design with Nature (Star and Estes, 1990).
suma neighborhood analysis technique that outputs the total of
the data file values in a user-specified window.
Sun raster dataimagery captured from a Sun monitor display.
sun-synchronousa term used to describe Earth-orbiting satellites
that rotate around the Earth at the same rate as the Earth
rotates on its axis.
supervised trainingany method of generating signatures for
classification, in which the analyst is directly involved in the
pattern recognition process. Usually, supervised training
requires the analyst to select training samples from the data
that represent patterns to be classified.
surfacea one-band file in which the value of each pixel is a specific
elevation value.
swath widthin a satellite system, the total width of the area on
the ground covered by the scanner.
SWIRsee short wave infrared region.
symbolan annotation element that consists of other elements
(sub-elements). See plan symbol, profile symbol, and
function symbol.
symbolizationa method of displaying vector data in which
attribute information is used to determine how features are
rendered. For example, points indicating cities and towns can
appear differently based on the population field stored in the
attribute database for each of those areas.
Synthetic Aperture Radar(SAR) a radar sensor that uses its
side-looking, fixed antenna to create a synthetic aperture. SAR
sensors are mounted on satellites, aircraft, and the NASA
Space Shuttle. The sensor transmits and receives as it is
moving. The signals received over a time interval are combined
to create the image.
T
table objectin Model Maker (Spatial Modeler), a series of numeric
values or character strings.
tablet digitizingthe process of using a digitizing tablet to transfer
nondigital data such as maps or photographs to vector format.
Tagged Imaged File Formatsee TIFF data.
tangentan intersection at one point or line. In the case of conic or
cylindrical map projections, a tangent cone or cylinder
intersects the surface of a globe in a circle.
Tasseled Cap transformationan image enhancement technique
that optimizes data viewing for vegetation studies.
TEMsee transverse electromagnetic wave.
temporal resolutionthe frequency with which a sensor obtains
imagery of a particular area.
terrain analysisthe processing and graphic simulation of
elevation data.
terrain dataelevation data expressed as a series of x, y, and z
values that are either regularly or irregularly spaced.
text printera device used to print characters onto paper, usually
used for lists, documents, and reports. If a color printer is not
necessary or is unavailable, images can be printed using a text
printer. Also called a line printer.
thematic dataraster data that are qualitative and categorical.
Thematic layers often contain classes of related information,
such as land cover, soil type, slope, etc. In ERDAS IMAGINE,
thematic data are stored in image files.
thematic layersee thematic data.
thematic mapa map illustrating the class characterizations of a
particular spatial variable such as soils, land cover, hydrology,
etc.
Thematic Mapper(TM) Landsat data acquired in seven bands
with a spatial resolution of 30 × 30 meters.
thematic mapper simulator(TMS) an instrument designed to
simulate spectral, spatial, and radiometric characteristics of the
Thematic Mapper sensor on the Landsat-4 and 5 spacecraft
(National Aeronautics and Space Administration, 1995b).
themea particular type of information, such as soil type or land
use, that is represented in a layer.
3D perspective viewa simulated three-dimensional view of
terrain.
thresholda limit, or cutoff point, usually a maximum allowable
amount of error in an analysis. In classification, thresholding is
the process of identifying a maximum distance between a pixel
and the mean of the signature to which it was classified.
tick markssmall lines along the edge of the image area or neatline
that indicate regular intervals of distance.
tie pointa point whose ground coordinates are not known, but which
can be recognized visually in the overlap or sidelap area between
two images.
TIFF dataTagged Image File Format data is a raster file format
developed by Aldus Corp. (Seattle, Washington) in 1986 for
the easy transportation of data.
TIGERsee Topologically Integrated Geographic Encoding
and Referencing System.
tiled datathe storage format of ERDAS IMAGINE image files.
TINsee triangulated irregular network.
TMsee Thematic Mapper.
TMSsee thematic mapper simulator.
TNDVIsee Transformed Normalized Distribution Vegetative
Index.
to-nodethe last vertex in a line.
topocentric coordinate systema coordinate system that has its
origin at the center of the image on the Earth ellipsoid. The
three perpendicular coordinate axes are defined on a tangential
plane at this center point. The x-axis is oriented eastward, the
y-axis northward, and the z-axis is vertical to the reference
plane (up).
topographica term indicating elevation.
topographic dataa type of raster data in which pixel values
represent elevation.
topographic effecta distortion found in imagery from
mountainous regions that results from the differences in
illumination due to the angle of the sun and the angle of the
terrain.
topographic mapa map depicting terrain relief.
Topologically Integrated Geographic Encoding and
Referencing System(TIGER) files are line network products
of the US Census Bureau.
Topological Vector Profile(TVP) a profile of SDTS that covers
attributed vector data.
topologya term that defines the spatial relationships between
features in a vector layer.
total RMS errorthe total root mean square (RMS) error for an
entire image. Total RMS error takes into account the RMS error
of each GCP.
trailer file1. an ERDAS IMAGINE Ver. 7.X file with a .TRL
extension that accompanies a GIS file and contains information
about the GIS classes. 2. a file following the image data on a
9-track tape.
trainingthe process of defining the criteria by which patterns in
image data are recognized for the purpose of classification.
training fieldthe geographical area represented by the pixels in a
training sample. Usually, it is previously identified with the use
of ground truth data or aerial photography. Also called training
site.
training samplea set of pixels selected to represent a potential
class. Also called sample.
transformation matrixa set of coefficients that is computed from
GCPs, and used in polynomial equations to convert coordinates
from one system to another. The size of the matrix depends
upon the order of the transformation.
Transformed Normalized Distribution Vegetative Index
(TNDVI) adds 0.5 to the NDVI equation, then takes the square
root. Created by Deering et al in 1975 (Jensen, 1996).
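Assuming the usual NDVI ratio (NIR - Red) / (NIR + Red), the definition above corresponds to TNDVI = sqrt(NDVI + 0.5). The following is a minimal Python sketch with hypothetical band values.

    import numpy as np

    def tndvi(nir, red):
        # Add 0.5 to NDVI, then take the square root.
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        ndvi = (nir - red) / (nir + red)
        return np.sqrt(ndvi + 0.5)

    print(tndvi(120.0, 60.0))   # NDVI is about 0.33, so TNDVI is about 0.91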
transpositionthe interchanging of the rows and columns of a
matrix, denoted with a superscript T.
transverse aspectthe orientation of a map in which the central
line of the projection, which is normally the equator, is rotated
90 degrees so that it follows a meridian.
transverse electromagnetic wave(TEM) a wave where both E
(electric field) and H (magnetic field) are transverse to the
direction of propagation.
triangulated irregular network(TIN) a specific representation
of DTMs in which elevation points can occur at irregular
intervals.
triangulationestablishes the geometry of the camera or sensor
relative to objects on the Earth's surface.
true colora method of displaying an image (usually from a
continuous raster layer) that retains the relationships between
data file values and represents multiple bands with separate
color guns. The image memory values from each displayed
band are translated through the function memory of the
corresponding color gun.
true directionthe property of a map projection to represent the
direction between two points with a straight rhumb line, which
crosses meridians at a constant angle.
TVPsee Topological Vector Profile.
U
unionthe area or set that is the combination of two or more input
areas or sets without repetition.
United States Department of Agriculture(USDA) an
organization regulating the agriculture of the US. For more
information, visit the web site www.usda.gov.
United States Geological Survey(USGS) an organization
dealing with biology, geology, mapping, and water. For more
information, visit the web site www.usgs.gov.
Universal Polar Stereographic(UPS) a mapping system used in
conjunction with the Polar Stereographic projection that makes
the scale factor at the pole 0.994.
Universal Transverse Mercator(UTM) UTM is an international
plane (rectangular) coordinate system developed by the US
Army that extends around the world from 84°N to 80°S. The
world is divided into 60 zones each covering six degrees
longitude. Each zone extends three degrees eastward and
three degrees westward from its central meridian. Zones are
numbered consecutively west to east from the 180° meridian.
unscaled mapa hardcopy map that is not referenced to any
particular scale in which one file pixel is equal to one printed
pixel.
unsplitthe process of joining two lines by removing a node.
unsupervised traininga computer-automated method of pattern
recognition in which some parameters are specified by the user
and are used to uncover statistical patterns that are inherent in
the data.
UPSsee Universal Polar Stereographic.
USDAsee United States Department of Agriculture.
USGSsee United States Geological Survey.
UTMsee Universal Transverse Mercator.
V
variable1. a numeric value that is changeable, usually
represented with a letter. 2. a thematic layer. 3. one band of a
multiband image. 4. in models, objects which have been
associated with a name using a declaration statement.
variable rate technology(VRT) in precision agriculture, used
with GPS data. VRT relies on the use of a VRT controller box
connected to a GPS and the pumping mechanism for a tank full
of fertilizers/pesticides/seeds/water/etc.
variancea measure of the spread of a set of values about their
mean; the average of the squared deviations from the mean. See
standard deviation.
vector1. a line element. 2. a one-dimensional matrix, having
either one row (1 by j), or one column (i by 1). See also mean
vector, measurement vector.
vector datadata that represent physical forms (elements) such as
points, lines, and polygons. Only the vertices of vector data are
stored, instead of every point that makes up the element.
ERDAS IMAGINE vector data are based on the ArcInfo data
model and are stored in directories, rather than individual files.
See workspace.
vector layera set of vector features and their associated
attributes.
Vector Quantization(VQ) used to compress frames of RPF data.
velocity vectorthe satellite's velocity if measured as a vector
through a point on the spheroid.
verbal statementa statement that describes the distance on the
map to the distance on the ground. A verbal statement
describing a scale of 1:1,000,000 is approximately 1 inch to 16
miles. The units on the map and on the ground do not have to
be the same in a verbal statement.
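As a quick check of that example, 1 map inch at 1:1,000,000 covers 1,000,000 ground inches; dividing by 63,360 inches per mile gives roughly 15.8 miles, which rounds to the stated 1 inch to 16 miles.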
vertexa point that defines an element, such as a point where a line
changes direction.
vertical controlthe vertical distribution of GCPs in aerial
triangulation (z - elevation).
verticesplural of vertex.
viewshed analysisthe calculation of all areas that can be seen
from a particular viewing point or path.
viewshed mapa map showing only those areas visible (or
invisible) from a specified point(s).
VIS/IRsee visible/infrared imagery.
visible/infrared imagery(VIS/IR) a type of multispectral data
set that is based on the reflectance spectrum of the material of
interest.
volumea medium for data storage, such as a magnetic disk or a
tape.
volume setthe complete set of tapes that contains one image.
VPFsee vector product format.
VQsee Vector Quantization.
VRTsee variable rate technology.
W
waveleta waveform that is bounded in both frequency and
duration (Free On-Line Dictionary of Computing, 1999e).
weightthe number of values in a set; particularly, in clustering
algorithms, the weight of a cluster is the number of pixels that
have been averaged into it.
weighting factora parameter that increases the importance of an
input variable. For example, in GIS indexing, one input layer
can be assigned a weighting factor that multiplies the class
values in that layer by that factor, causing that layer to have
more importance in the output file.
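The following is a minimal Python sketch of that kind of weighted index overlay; the layers, class values, and weight of 2 on the slope layer are hypothetical.

    import numpy as np

    # Two thematic layers of class values covering the same area.
    slope_classes = np.array([[1, 3], [2, 4]])
    soil_classes  = np.array([[2, 2], [1, 3]])

    # Weighting the slope layer by 2 doubles its contribution to the output file.
    suitability = 2 * slope_classes + 1 * soil_classes   # each cell is 2*slope + soil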
weighting functionin surfacing routines, a function applied to
elevation values for determining new output values.
WGSsee World Geodetic System.
Wide Field Sensor(WiFS) sensor aboard IRS-1C with 188m
spatial resolution.
WiFSsee Wide Field Sensor.
working windowthe image area to be used in a model. This can
be set to either the union or intersection of the input layers.
workspacea location that contains one or more vector layers. A
workspace is made up of several directories.
World Geodetic System(WGS) an Earth ellipsoid (spheroid) with
multiple versions, including WGS 66, WGS 72, and WGS 84.
write ringa protection device that allows data to be written to a
9-track tape when the ring is in place, but not when it is
removed.
X
X residualin RMS error reports, the distance between the source
X coordinate and the retransformed X coordinate.
X RMS errorthe root mean square (RMS) error in the X direction.
Y
Y residualin RMS error reports, the distance between the source
Y coordinate and the retransformed Y coordinate.
Y RMS errorthe root mean square (RMS) error in the Y direction.
Z
ZDRsee zone distribution rectangles.
zero-sum kernela convolution kernel in which the sum of all the
coefficients is zero. Zero-sum kernels are usually edge
detectors.
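One common example (not necessarily the kernel used by any particular function) is the 3 x 3 edge detector below, whose coefficients sum to zero, so flat areas of an image convolve to zero while edges stand out.

    import numpy as np

    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]])
    print(kernel.sum())   # 0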
zone distribution rectangles(ZDRs) the images into which each
distribution rectangle (DR) is divided in ADRG data.
zoomthe process of expanding displayed pixels on an image so
they can be more closely studied. Zooming is similar to
magnification, except that it changes the display only
temporarily, leaving image memory the same.
Bibliography
Works Cited
Ackermann, 1983
Ackermann, F., 1983. High precision digital image correlation. Paper presented at 39th
Photogrammetric Week, Institute of Photogrammetry, University of Stuttgart, 231-243.
Adams et al, 1989
Adams, J.B., M. O. Smith, and A. R. Gillespie. 1989. Simple Models for Complex Natural Surfaces: A
Strategy for the Hyperspectral Era of Remote Sensing. Paper presented at Institute of Electrical
and Electronics Engineers, Inc. (IEEE) International Geosciences and Remote Sensing (IGARSS)
12th Canadian Symposium on Remote Sensing, Vancouver, British Columbia, Canada, July
1989, I:16-21.
Agouris and Schenk, 1996
Agouris, P., and T. Schenk. 1996. Automated Aerotriangulation Using Multiple Image Multipoint
Matching. Photogrammetric Engineering and Remote Sensing 62 (6): 703-710.
Akima, 1978
Akima, H. 1978. A Method of Bivariate Interpolation and Smooth Surface Fitting for Irregularly
Distributed Data Points. Association for Computing Machinery (ACM) Transactions on
Mathematical Software 4 (2): 148-159.
American Society of Photogrammetry, 1980
American Society of Photogrammetry (ASP). 1980. Photogrammetric Engineering and Remote
Sensing XLVI:10:1249.
Atkinson, 1985
Atkinson, P. 1985. Preliminary Results of the Effect of Resampling on Thematic Mapper Imagery.
1985 ACSM-ASPRS Fall Convention Technical Papers. Falls Church, Virginia: American Society
for Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.
Atlantis Scientific, Inc., 1997
Atlantis Scientific, Inc. 1997. Sources of SAR Data. Retrieved October 2, 1999, from
http://www.atlsci.com/library/sar_sources.html
Bauer and Müller, 1972
Bauer, H., and J. Müller. 1972. Height accuracy of blocks and bundle block adjustment with additional
parameters. International Society for Photogrammetry and Remote Sensing (ISPRS) 12th
Congress, Ottawa.
Benediktsson et al, 1990
Benediktsson, J.A., P. H. Swain, O. K. Ersoy, and D. Hong 1990. Neural Network Approaches Versus
Statistical Methods in Classification of Multisource Remote Sensing Data. Institute of Electrical
and Electronics Engineers, Inc. (IEEE) Transactions on Geoscience and Remote Sensing 28 (4):
540-551.
Berk et al, 1989
Berk, A., L. S. Bernstein, and D. C. Robertson. 1989. MODTRAN: A Moderate Resolution Model for
LOWTRAN 7. Air Force Geophysics Laboratory Technical Report GL-TR-89-0122, Hanscom AFB,
MA.
Bernstein, 1983
Bernstein, R. 1983. Image Geometry and Rectification. Chapter 21 in Manual of Remote Sensing. Ed.
R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Blom and Daily, 1982
Blom, R. G., and M. Daily. 1982. Radar Image Processing for Rock-Type Discrimination. Institute of
Electrical and Electronics Engineers, Inc. (IEEE) Transactions on Geoscience and Remote
Sensing 20 (3).
Buchanan, 1979
Buchanan, M. D. 1979. Effective Utilization of Color in Multidimensional Data Presentation. Paper
presented at the Society of Photo-Optical Engineers, 199:9-19.
Cannon, 1983
Cannon, T. M. 1983. Background Pattern Removal by Power Spectral Filtering. Applied Optics 22 (6):
777-779.
Center for Health Applications of Aerospace Related Technologies, 1998
Center for Health Applications of Aerospace Related Technologies (CHAART), The. 1998. Sensor
Specifications: SeaWiFS. Retrieved December 28, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/seastar.html
Center for Health Applications of Aerospace Related Technologies, 2000a
. 2000a. Sensor Specifications: Ikonos. Retrieved December 28, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/ikonos.html
Center for Health Applications of Aerospace Related Technologies, 2000b
. 2000b. Sensor Specifications: Landsat. Retrieved December 31, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/landsat.html
Center for Health Applications of Aerospace Related Technologies, 2000c
. 2000c. Sensor Specifications: SPOT. Retrieved December 31, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/spot.html
Centre National D'Etudes Spatiales, 1998
Centre National D'Etudes Spatiales (CNES). 1998. CNES: Centre National D'Etudes Spatiales.
Retrieved October 25, 1999, from http://sads.cnes.fr/ceos/cdrom-
98/ceos1/cnes/gb/lecnes.htm
Chahine et al, 1983
Chahine, M. T., D. J. McCleese, P. W. Rosenkranz, and D. H. Staelin. 1983. Interaction Mechanisms
within the Atmosphere. Chapter 5 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church,
Virginia: American Society of Photogrammetry.
Chavez et al, 1977
Chavez, P. S., Jr., G. L. Berlin, and W. B. Mitchell. 1977. Computer Enhancement Techniques of
Landsat MSS Digital Images for Land Use/Land Cover Assessments. Remote Sensing Earth
Resource. 6:259.
Chavez and Berlin, 1986
Chavez, P. S., Jr., and G. L. Berlin. 1986. Restoration Techniques for SIR-B Digital Radar Images.
Paper presented at the Fifth Thematic Conference: Remote Sensing for Exploration Geology,
Reno, Nevada, September/October 1986.
Chavez et al, 1991
Chavez, P. S., Jr., S. C. Sides, and J. A. Anderson. 1991. Comparison of Three Different Methods to
Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic.
Photogrammetric Engineering & Remote Sensing 57 (3): 295-303.
Clark and Roush, 1984
Clark, R. N., and T. L. Roush. 1984. Reflectance Spectroscopy: Quantitative Analysis Techniques for
Remote Sensing Applications. Journal of Geophysical Research 89 (B7): 6329-6340.
Clark et al, 1990
Clark, R. N., A. J. Gallagher, and G. A. Swayze. 1990. Material Absorption Band Depth Mapping of
Imaging Spectrometer Data Using a Complete Band Shape Least-Squares Fit with Library
Reference Spectra. Paper presented at the Second Airborne Visible/Infrared Imaging
Spectrometer (AVIRIS) Conference, Pasadena, California, June 1990. Jet Propulsion Laboratory
Publication 90-54:176-186.
Colby, 1991
Colby, J. D. 1991. Topographic Normalization in Rugged Terrain. Photogrammetric Engineering &
Remote Sensing 57 (5): 531-537.
Colwell, 1983
Colwell, R. N., ed. 1983. Manual of Remote Sensing. 2d ed. Falls Church, Virginia: American Society
of Photogrammetry.
Congalton, 1991
Congalton, R. 1991. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data.
Remote Sensing of Environment 37: 35-46.
Conrac Corporation, 1980
Conrac Corporation. 1980. Raster Graphics Handbook. New York: Van Nostrand Reinhold.
Crane, 1971
Crane, R. B. 1971. Preprocessing Techniques to Reduce Atmospheric and Sensor Variability in
Multispectral Scanner Data. Proceedings of the 7th International Symposium on Remote
Sensing of Environment. Ann Arbor, Michigan, p. 1345.
Crippen, 1987
Crippen, R. E. 1987. The Regression Intersection Method of Adjusting Image Data for Band Ratioing.
International Journal of Remote Sensing 8 (2): 137-155.
Crippen, 1989a
. 1989a. A Simple Spatial Filtering Routine for the Cosmetic Removal of Scan-Line Noise from
Landsat TM P-Tape Imagery. Photogrammetric Engineering & Remote Sensing 55 (3): 327-331.
Crippen, 1989b
. 1989b. Development of Remote Sensing Techniques for the Investigation of Neotectonic
Activity, Eastern Transverse Ranges and Vicinity, Southern California. Ph.D. diss., University of
California, Santa Barbara.
Crist et al, 1986
Crist, E. P., R. Laurin, and R. C. Cicone. 1986. Vegetation and Soils Information Contained in
Transformed Thematic Mapper Data. Paper presented at International Geosciences and Remote
Sensing Symposium (IGARSS) 86 Symposium, ESA Publications Division, ESA SP-254.
Crist and Kauth, 1986
Crist, E. P., and R. J. Kauth. 1986. The Tasseled Cap De-Mystified. Photogrammetric Engineering &
Remote Sensing 52 (1): 81-86.
Croft (Holcomb), 1993
Croft, F. C., N. L. Faust, and D. W. Holcomb. 1993. Merging Radar and VIS/IR Imagery. Paper
presented at the Ninth Thematic Conference on Geologic Remote Sensing, Pasadena, California,
February 1993.
Cullen, 1972
Cullen, C. G. 1972. Matrices and Linear Transformations. 2d ed. Reading, Massachusetts: Addison-
Wesley Publishing Company.
Daily, 1983
Daily, M. 1983. Hue-Saturation-Intensity Split-Spectrum Processing of Seasat Radar Imagery.
Photogrammetric Engineering & Remote Sensing 49 (3): 349-355.
Dent, 1985
Dent, B. D. 1985. Principles of Thematic Map Design. Reading, Massachusetts: Addison-Wesley
Publishing Company.
Earth Remote Sensing Data Analysis Center, 2000
Earth Remote Sensing Data Analysis Center (ERSDAC). 2000. JERS-1 OPS. Retrieved December 28,
2001, from http://www.ersdac.or.jp/Projects/JERS1/JOPS/JOPS_E.html
Eberlein and Weszka, 1975
Eberlein, R. B., and J. S. Weszka. 1975. Mixtures of Derivative Operators as Edge Detectors.
Computer Graphics and Image Processing 4: 180-183.
Ebner, 1976
Ebner, H. 1976. Self-calibrating block adjustment. Bildmessung und Luftbildwesen 44: 128-139.
Elachi, 1987
Elachi, C. 1987. Introduction to the Physics and Techniques of Remote Sensing. New York: John Wiley
& Sons.
El-Hakim and Ziemann, 1984
El-Hakim, S.F. and H. Ziemann. 1984. A Step-by-Step Strategy for Gross-Error Detection.
Photogrammetric Engineering & Remote Sensing 50 (6): 713-718.
Environmental Systems Research Institute, 1990
Environmental Systems Research Institute, Inc. 1990. Understanding GIS: The ArcInfo Method.
Redlands, California: ESRI, Incorporated.
Environmental Systems Research Institute, 1992
. 1992. ARC Command References 6.0. Redlands. California: ESRI, Incorporated.
Environmental Systems Research Institute, 1992
. 1992. Data Conversion: Supported Data Translators. Redlands, California: ESRI,
Incorporated.
Environmental Systems Research Institute, 1992
. 1992. Managing Tabular Data. Redlands, California: ESRI, Incorporated.
Environmental Systems Research Institute, 1992
. 1992. Map Projections & Coordinate Management: Concepts and Procedures. Redlands,
California: ESRI, Incorporated.
Environmental Systems Research Institute, 1996
. 1996. Using ArcView GIS. Redlands, California: ESRI, Incorporated.
Environmental Systems Research Institute, 1997
. 1997. ArcInfo. Version 7.2.1. ArcInfo HELP. Redlands, California: ESRI, Incorporated.
Eurimage, 1998
Eurimage. 1998. JERS-1. Retrieved October 1, 1999, from
http://www.eurimage.com/Products/JERS_1.html
European Space Agency, 1995
European Space Agency (ESA). 1995. ERS-2: A Continuation of the ERS-1 Success, by G. Duchossois
and R. Zobl. Retrieved October 1, 1999, from
http://esapub.esrin.esa.it/bulletin/bullet83/ducho83.htm
European Space Agency, 1997
. 1997. SAR Mission Planning for ERS-1 and ERS-2, by S. D'Elia and S. Jutz. Retrieved October
1, 1999, from http://esapub.esrin.esa.it/bulletin/bullet90/b90delia.htm
Fahnestock and Schowengerdt, 1983
Fahnestock, J. D., and R. A. Schowengerdt. 1983. Spatially Variant Contrast Enhancement Using
Local Range Modification. Optical Engineering 22 (3): 378-381.
Faust, 1989
Faust, N. L. 1989. Image Enhancement. Volume 20, Supplement 5 of Encyclopedia of Computer
Science and Technology. Ed. A. Kent and J. G. Williams. New York: Marcel Dekker, Inc.
Faust et al, 1991
Faust, N. L., W. Sharp, D. W. Holcomb, P. Geladi, and K. Esbenson. 1991. Application of Multivariate
Image Analysis (MIA) to Analysis of TM and Hyperspectral Image Data for Mineral Exploration.
Paper presented at the Eighth Thematic Conference on Geologic Remote Sensing, Denver,
Colorado, April/May 1991.
Fisher, 1991
Fisher, P. F. 1991. Spatial Data Sources and Data Problems. In Geographical Information Systems:
Principles and Applications. Ed. D. J. Maguire, M. F. Goodchild, and D. W. Rhind. New York:
Longman Scientific & Technical.
Flaschka, 1969
Flaschka, H. A. 1969. Quantitative Analytical Chemistry: Vol 1. New York: Barnes & Noble, Inc.
Förstner and Gülch, 1987
Förstner, W. and E. Gülch. 1987. A fast operator for detection and precise location of distinct points,
corners and centers of circular features. Paper presented at the Intercommission Conference on
Fast Processing of Photogrammetric Data, Interlaken, Switzerland, June 1987, 281-305.
Fraser, 1986
Fraser, S. J., et al. 1986. Targeting Epithermal Alteration and Gossans in Weathered and Vegetated
Terrains Using Aircraft Scanners: Successful Australian Case Histories. Paper presented at the
fifth Thematic Conference: Remote Sensing for Exploration Geology, Reno, Nevada.
Free On-Line Dictionary of Computing, 1999a
Free On-Line Dictionary Of Computing. 1999a. American Standard Code for Information Interchange.
Retrieved October 25, 1999, from http://foldoc.doc.ic.ac.uk/foldoc
Free On-Line Dictionary of Computing, 1999b
. 1999b. central processing unit. Retrieved October 25, 1999, from
http://foldoc.doc.ic.ac.uk/foldoc
Free On-Line Dictionary of Computing, 1999c
. 1999c. lossy. Retrieved November 11, 1999, from http://foldoc.doc.ic.ac.uk/foldoc
Free On-Line Dictionary of Computing, 1999d
. 1999d. random-access memory. Retrieved November 11, 1999, from
http://foldoc.doc.ic.ac.uk/foldoc
Free On-Line Dictionary of Computing, 1999e
. 1999e. wavelet. Retrieved November 11, 1999, from http://foldoc.doc.ic.ac.uk/foldoc
Frost et al, 1982
Frost, V. S., J. A. Stiles, K. S. Shanmugan, and J. C. Holtzman. 1982. A Model for Radar Images and
Its Application to Adaptive Digital Filtering of Multiplicative Noise. Institute of Electrical and
Electronics Engineers, Inc. (IEEE) Transactions on Pattern Analysis and Machine Intelligence
PAMI-4 (2): 157-166.
Gonzalez and Wintz, 1977
Gonzalez, R. C., and P. Wintz. 1977. Digital Image Processing. Reading, Massachusetts: Addison-
Wesley Publishing Company.
Gonzalez and Woods, 2001
Gonzalez, R. and Woods, R., Digital Image Processing. Prentice Hall, NJ, 2001.
Green and Craig, 1985
Green, A. A., and M. D. Craig. 1985. Analysis of Aircraft Spectrometer Data with Logarithmic
Residuals. Paper presented at the AIS Data Analysis Workshop, Pasadena, California, April
1985. Jet Propulsion Laboratory (JPL) Publication 85 (41): 111-119.
Grün, 1978
Grün, A., 1978. Experiences with self calibrating bundle adjustment. Paper presented at the American
Congress on Surveying and Mapping/American Society of Photogrammetry (ACSM-ASP)
Convention, Washington, D.C., February/March 1978.
Grün and Baltsavias, 1988
Grün, A., and E. P. Baltsavias. 1988. Geometrically constrained multiphoto matching.
Photogrammetric Engineering and Remote Sensing 54 (5): 633-641.
Haralick, 1979
Haralick, R. M. 1979. Statistical and Structural Approaches to Texture. Paper presented at meeting
of the Institute of Electrical and Electronics Engineers, Inc. (IEEE), Seattle, Washington, May
1979, 67 (5): 786-804.
Heipke, 1996
Heipke, C. 1996. Automation of interior, relative and absolute orientation. International Archives of
Photogrammetry and Remote Sensing 31(B3): 297-311.
Helava, 1988
Helava, U.V. 1988. Object space least square correlation. International Archives of Photogrammetry
and Remote Sensing 27 (B3): 321-331.
Hodgson and Shelley, 1994
Hodgson, M. E., and B. M. Shelley. 1994. Removing the Topographic Effect in Remotely Sensed
Imagery. ERDAS Monitor, 6 (1): 4-6.
Hord, 1982
Hord, R. M. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic Press.
Iron and Petersen, 1981
Iron, J. R., and G. W. Petersen. 1981. Texture Transforms of Remote Sensing Data. Remote Sensing
of Environment 11:359-370.
Jacobsen, 1980
Jacobsen, K. 1980. Vorschläge zur Konzeption und zur Bearbeitung von Bündelblockausgleichungen.
Ph.D. dissertation, wissenschaftliche Arbeiten der Fachrichtung Vermessungswesen der
Universität Hannover, No. 102.
Jacobsen, 1982
. 1982. Programmgesteuerte Auswahl zusätzlicher Parameter. Bildmessung und
Luftbildwesen, p. 213-217.
Jacobsen, 1984
. 1984. Experiences in blunder detection for Aerial Triangulation. Paper presented at
International Society for Photogrammetry and Remote Sensing (ISPRS) 15th Congress, Rio de
Janeiro, Brazil, June 1984.
Jensen, 1986
Jensen, J. R. 1986. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood
Cliffs, New Jersey: Prentice-Hall.
Jensen, 1996
Jensen, J. R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. 2d ed.
Englewood Cliffs, New Jersey: Prentice-Hall.
Jensen et al, 1983
Jensen, J. R., et al. 1983. Urban/Suburban Land Use Analysis. Chapter 30 in Manual of Remote
Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Johnston, 1980
Johnston, R. J. 1980. Multivariate Statistical Analysis in Geography: A Primer on the General Linear
Model. Essex, England: Longman Group Ltd.
Jordan and Beck, 1999
Jordan, L. E., III, and L. Beck. 1999. NITFS: The National Imagery Transmission Format Standard.
Atlanta, Georgia: ERDAS, Inc.
Kidwell, 1988
Kidwell, K. B., ed. 1988. NOAA Polar Orbiter Data (TIROS-N, NOAA-6, NOAA-7, NOAA-8, NOAA-9,
NOAA-10, and NOAA-11) Users Guide. Washington, DC: National Oceanic and Atmospheric
Administration.
King et al, 2001
King, Roger and Wang, Jianwen, A Wavelet Based Algorithm for Pan Sharpening Landsat 7 Imagery,
2001.
Kloer, 1994
Kloer, B. R. 1994. Hybrid Parametric/Non-Parametric Image Classification. Paper presented at the
ACSM-ASPRS Annual Convention, Reno, Nevada, April 1994.
Kneizys et al, 1988
Kneizys, F. X., E. P. Shettle, L. W. Abreu, J. H. Chettwynd, G. P. Anderson, W. O. Gallery, J. E. A.
Selby, and S. A. Clough. 1988. Users Guide to LOWTRAN 7. Hanscom AFB, Massachusetts: Air
Force Geophysics Laboratory. AFGL-TR-88-0177.
Konecny, 1994
Konecny, G. 1994. New Trends in Technology, and their Application: Photogrammetry and Remote
SensingFrom Analog to Digital. Paper presented at the Thirteenth United Nations Regional
Cartographic Conference for Asia and the Pacific, Beijing, China, May 1994.
Konecny and Lehmann, 1984
Konecny, G., and G. Lehmann. 1984. Photogrammetrie. Walter de Gruyter Verlag, Berlin.
Kruse, 1988
Kruse, F. A. 1988. Use of Airborne Imaging Spectrometer Data to Map Minerals Associated with
Hydrothermally Altered Rocks in the Northern Grapevine Mountains, Nevada and California.
Remote Sensing of the Environment 24 (1): 31-51.
Krzystek, 1998
Krzystek, P. 1998. On the use of matching techniques for automatic aerial triangulation. Paper
presented at meeting of the International Society for Photogrammetry and Remote Sensing
(ISPRS) Commission III Conference, Columbus, Ohio, July 1998.
Kubik, 1982
Kubik, K. 1982. An error theory for the Danish method. Paper presented at International Society for
Photogrammetry and Remote Sensing (ISPRS) Commission III Symposium, Helsinki, Finland,
June 1982.
Larsen and Marx, 1981
Larsen, R. J., and M. L. Marx. 1981. An Introduction to Mathematical Statistics and Its Applications.
Englewood Cliffs, New Jersey: Prentice-Hall, Inc.
Lavreau, 1991
Lavreau, J. 1991. De-Hazing Landsat Thematic Mapper Images. Photogrammetric Engineering &
Remote Sensing 57 (10): 1297-1302.
Leberl, 1990
Leberl, F. W. 1990. Radargrammetric Image Processing. Norwood, Massachusetts: Artech House, Inc.
Lee and Walsh, 1984
Lee, J. E., and J. M. Walsh. 1984. Map Projections for Use with the Geographic Information System.
U.S. Fish and Wildlife Service, FWS/OBS-84/17.
Lee, 1981
Lee, J. S. 1981. Speckle Analysis and Smoothing of Synthetic Aperture Radar Images. Computer
Graphics and Image Processing 17 (1): 24-32.
Leick, 1990
Leick, A. 1990. GPS Satellite Surveying. New York, New York: John Wiley & Sons.
Lemeshewsky, 1999
Lemeshewsky, George P, Multispectral multisensor image fusion using wavelet transforms, in Visual
Image Processing VIII, S. K. Park and R. Juday, Ed., Proc SPIE 3716, pp214-222, 1999.
Lemeshewsky, 2002a
Lemeshewsky, George P, personal communication, 2002a.
Lemeshewsky, 2002b
Lemeshewsky, George P, Multispectral Image sharpening Using a Shift-Invariant Wavelet Transform
and Adaptive Processing of Multiresolution Edges in Visual Information Processing XI, Z.
Rahman and R.A. Schowengerdt, Eds., Proc SPIE, v. 4736, 2002b.
Li, 1983
Li, D. 1983. Ein Verfahren zur Aufdeckung grober Fehler mit Hilfe der a posteriori-Varianzschätzung.
Bildmessung und Luftbildwesen 5.
Li, 1985
. 1985. Theorie und Untersuchung der Trennbarkeit von groben Paßpunktfehlern und
systematischen Bildfehlern bei der photogrammetrischen Punktbestimmung. Ph.D. dissertation,
Deutsche Geodätische Kommission, Reihe C, No. 324.
Lillesand and Kiefer, 1987
Lillesand, T. M., and R. W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John
Wiley & Sons, Inc.
Lopes et al, 1990
Lopes, A., E. Nezry, R. Touzi, and H. Laur. 1990. Maximum A Posteriori Speckle Filtering and First
Order Textural Models in SAR Images. Paper presented at the International Geoscience and
Remote Sensing Symposium (IGARSS), College Park, Maryland, May 1990, 3:2409-2412.
Lü, 1988
Lü, Y. 1988. Interest operator and fast implementation. IASPRS 27 (B2), Kyoto, 1988.
Lyon, 1987
Lyon, R. J. P. 1987. Evaluation of AIS-2 Data over Hydrothermally Altered Granitoid Rocks.
Proceedings of the Third AIS Data Analysis Workshop. JPL Pub. 87-30:107-119.
Magellan Corporation, 1999
Magellan Corporation. 1999. GLONASS and the GPS+GLONASS Advantage. Retrieved October 25,
1999, from http://www.magellangps.com/geninfo/glonass.htm
Maling, 1992
Maling, D. H. 1992. Coordinate Systems and Map Projections. 2d ed. New York: Pergamon Press.
Mallat, 1989
Mallat, S. G. 1989. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation.
IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (7).
Marble, 1990
Marble, D. F. 1990. Geographic Information Systems: An Overview. In Introductory Readings in
Geographic Information Systems. Ed. D. J. Peuquet and D. F. Marble. Bristol, Pennsylvania:
Taylor & Francis, Inc.
Mayr, 1995
Mayr, W. 1995. Aspects of automatic aerotriangulation. Paper presented at the 45th
Photogrammetric Week, Wichmann Verlag, Karlsruhe, September 1995, 225-234.
Mendenhall and Scheaffer, 1973
Mendenhall, W., and R. L. Scheaffer. 1973. Mathematical Statistics with Applications. North Scituate,
Massachusetts: Duxbury Press.
Merenyi et al, 1996
Merenyi, E., J. V. Taranik, T. Monor, and W. Farrand. March 1996. Quantitative Comparison of Neural
Network and Conventional Classifiers for Hyperspectral Imagery. Paper presented at the Sixth
AVIRIS Conference. JPL Pub.
Minnaert and Szeicz, 1961
Minnaert, J. L., and G. Szeicz. 1961. The Reciprocity Principle in Lunar Photometry. Astrophysics
Journal 93:403-410.
Nagao and Matsuyama, 1978
Nagao, M., and T. Matsuyama. 1978. Edge Preserving Smoothing. Computer Graphics and Image
Processing 9:394-407.
National Aeronautics and Space Administration, 1995a
National Aeronautics and Space Administration (NASA). 1995a. Mission Overview. Retrieved October
2, 1999, from http://southport.jpl.nasa.gov/science/missiono.html
National Aeronautics and Space Administration, 1995b
. 1995b. Thematic Mapper Simulators (TMS). Retrieved October 2, 1999, from
http://geo.arc.nasa.gov/esdstaff/jskiles/top-down/OTTER/OTTER_docs/DAEDALUS.html
National Aeronautics and Space Administration, 1996
. 1996. SAR Development. Retrieved October 2, 1999, from
http://southport.jpl.nasa.gov/reports/iwgsar/3_SAR_Development.html
National Aeronautics and Space Administration, 1997
. 1997. What is SIR-C/X-SAR? Retrieved October 2, 1999, from
http://southport.jpl.nasa.gov/desc/SIRCdesc.html
National Aeronautics and Space Administration, 1998
. 1998. Landsat 7. Retrieved September 30, 1999, from
http://geo.arc.nasa.gov/sge/landsat/landsat.html
National Aeronautics and Space Administration, 1999
. 1999. An Overview of SeaWiFS and the SeaStar Spacecraft. Retrieved September 30, 1999,
from http://seawifs.gsfc.nasa.gov/SEAWIFS/SEASTAR/SPACECRAFT.html
National Aeronautics and Space Administration, 2001
. 2001. Landsat 7 Mission Specifications. Retrieved December 28, 2001, from
http://landsat.gsfc.nasa.gov/project/L7_Specifications.html
National Imagery and Mapping Agency, 1998
National Imagery and Mapping Agency (NIMA). 1998. The National Imagery and Mapping Agency Fact
Sheet. Retrieved November 11, 1999, from
http://164.214.2.59/general/factsheets/nimafs.html
National Remote Sensing Agency, 1998
National Remote Sensing Agency, Department of Space, Government of India. 1998. Table 3.
Specifications of IRS-1D LISS-III camera. Retrieved December 28, 2001, from
http://202.54.32.164/interface/inter/v8n4/v8n4t_3.html
Needham, 1986
Needham, B. H. 1986. Availability of Remotely Sensed Data and Information from the U.S. National
Oceanic and Atmospheric Administration's Satellite Data Services Division. Chapter 9 in Satellite
Remote Sensing for Resources Development, edited by Karl-Heinz Szekielda. Gaithersburg,
Maryland: Graham & Trotman, Inc.
Oppenheim and Schafer, 1975
Oppenheim, A. V., and R. W. Schafer. 1975. Digital Signal Processing. Englewood Cliffs, New Jersey:
Prentice-Hall, Inc.
ORBIMAGE, 1999
ORBIMAGE. 1999. OrbView-3: High-Resolution Imagery in Real-Time. Retrieved October 1, 1999,
from http://www.orbimage.com/satellite/orbview3/orbview3.html
ORBIMAGE, 2000
. 2000. OrbView-3: High-Resolution Imagery in Real-Time. Retrieved December 31, 2000, from
http://www.orbimage.com/corp/orbimage_system/ov3/
Parent and Church, 1987
Parent, P., and R. Church. 1987. Evolution of Geographic Information Systems as Decision Making
Tools. Fundamentals of Geographic Information Systems: A Compendium. Ed. W. J. Ripple.
Bethesda, Maryland: American Society for Photogrammetry and Remote Sensing and American
Congress on Surveying and Mapping.
Pearson, 1990
Pearson, F. 1990. Map Projections: Theory and Applications. Boca Raton, Florida: CRC Press, Inc.
Peli and Lim, 1982
Peli, T., and J. S. Lim. 1982. Adaptive Filtering for Image Enhancement. Optical Engineering 21 (1):
108-112.
Pratt, 1991
Pratt, W. K. 1991. Digital Image Processing. 2d ed. New York: John Wiley & Sons, Inc.
Press et al, 1988
Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. 1988. Numerical Recipes in C.
New York, New York: Cambridge University Press.
Prewitt, 1970
Prewitt, J. M. S. 1970. Object Enhancement and Extraction. In Picture Processing and Psychopictorics.
Ed. B. S. Lipkin and A. Resenfeld. New York: Academic Press.
RADARSAT, 1999
RADARSAT. 1999. RADARSAT Specifications. Retrieved September 14, 1999 from
http://radarsat.space.gc.ca/
Rado, 1992
Rado, B. Q. 1992. An Historical Analysis of GIS. Mapping Tomorrow's Resources. Logan, Utah: Utah
State University.
Richter, 1990
Richter, R. 1990. A Fast Atmospheric Correction Algorithm Applied to Landsat TM Images.
International Journal of Remote Sensing 11 (1): 159-166.
Ritter and Ruth, 1995
Ritter, N., and M. Ruth. 1995. GeoTIFF Format Specification Rev. 1.0. Retrieved October 4, 1999,
from http://www.remotesensing.org/geotiff/spec/geotiffhome.html
Robinson and Sale, 1969
Robinson, A. H., and R. D. Sale. 1969. Elements of Cartography. 3d ed. New York: John Wiley & Sons,
Inc.
Rockinger and Fechner, 1998
Rockinger, O., and T. Fechner. 1998. Pixel-Level Image Fusion. In Signal Processing, Sensor Fusion
and Target Recognition, ed. I. Kadar. Proc. SPIE 3374: 378-388.
Sabins, 1987
Sabins, F. F., Jr. 1987. Remote Sensing Principles and Interpretation. 2d ed. New York: W. H.
Freeman & Co.
Schenk, 1997
Schenk, T., 1997. Towards automatic aerial triangulation. International Society for Photogrammetry
and Remote Sensing (ISPRS) Journal of Photogrammetry and Remote Sensing 52 (3): 110-121.
Schowengerdt, 1980
Schowengerdt, R. A. 1980. Reconstruction of Multispatial, Multispectral Image Data Using Spatial
Frequency Content. Photogrammetric Engineering & Remote Sensing 46 (10): 1325-1334.
Schowengerdt, 1983
. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York:
Academic Press.
Schwartz and Soha, 1977
Schwartz, A. A., and J. M. Soha. 1977. Variable Threshold Zonal Filtering. Applied Optics 16 (7).
Shensa, 1992
Shensa, M. 1992. The Discrete Wavelet Transform. IEEE Transactions on Signal Processing 40 (10):
2464-2482.
Shikin and Plis, 1995
Shikin, E. V., and A. I. Plis. 1995. Handbook on Splines for the User. Boca Raton: CRC Press, LLC.
Simonett et al, 1983
Simonett, D. S., et al. 1983. The Development and Principles of Remote Sensing. Chapter 1 in Manual
of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of
Photogrammetry.
Slater, 1980
Slater, P. N. 1980. Remote Sensing: Optics and Optical Systems. Reading, Massachusetts: Addison-
Wesley Publishing Company, Inc.
Smith et al, 1980
Smith, J. A., T. L. Lin, and K. J. Ranson. 1980. The Lambertian Assumption and Landsat Data.
Photogrammetric Engineering & Remote Sensing 46 (9): 1183-1189.
Snyder, 1987
Snyder, J. P. 1987. Map Projections--A Working Manual. Geological Survey Professional Paper 1395.
Washington, DC: United States Government Printing Office.
Snyder and Voxland, 1989
Snyder, J. P., and P. M. Voxland. 1989. An Album of Map Projections. U.S. Geological Survey
Professional Paper 1453. Washington, DC: United States Government Printing Office.
Space Imaging, 1998
Space Imaging. 1998. IRS-1D Satellite Imagery Available for Sale Worldwide. Retrieved October 1,
1999, from http://www.spaceimage.com/newsroom/releases/1998/IRS1Dworldwide.html
Space Imaging, 1999a
. 1999a. IKONOS. Retrieved September 30, 1999, from
http://www.spaceimage.com/aboutus/satellites/IKONOS/ikonos.html
Space Imaging, 1999b
. 1999b. IRS (Indian Remote Sensing Satellite). Retrieved September 17, 1999, from
http://www.spaceimage.com/aboutus/satellites/IRS/IRS.html
Space Imaging, 1999c
. 1999c. RADARSAT. Retrieved September 17, 1999, from
http://www.spaceimage.com/aboutus/satellites/RADARSAT/radarsat.htm
SPOT Image, 1998
SPOT Image. 1998. SPOT 4 In Service! Retrieved September 30, 1999, from
http://www.spot.com/spot/home/news/press/Commish.htm
SPOT Image, 1999
. 1999. SPOT System Technical Data. Retrieved September 30, 1999, from
http://www.spot.com/spot/home/system/introsat/seltec/seltec.htm
Srinivasan et al, 1988
Srinivasan, R., M. Cannon, and J. White. 1988. Landsat Destriping Using Power Spectral Filtering.
Optical Engineering 27 (11): 939-943.
Star and Estes, 1990
Star, J., and J. Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New
Jersey: Prentice-Hall.
Steinitz et al, 1976
Steinitz, C., P. Parker, and L. E. Jordan, III. 1976. Hand Drawn Overlays: Their History and
Perspective Uses. Landscape Architecture 66:444-445.
Stojić et al, 1998
Stojić, M., J. Chandler, P. Ashmore, and J. Luce. 1998. The assessment of sediment transport rates
by automated digital photogrammetry. Photogrammetric Engineering & Remote Sensing 64 (5):
387-395.
Strang et al, 1997
Strang, G., and T. Nguyen. 1997. Wavelets and Filter Banks. Wellesley-Cambridge Press.
Suits, 1983
Suits, G. H. 1983. The Nature of Electromagnetic Radiation. Chapter 2 in Manual of Remote Sensing.
Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Swain, 1973
Swain, P. H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information
Note 111572). West Lafayette, Indiana: The Laboratory for Applications of Remote Sensing,
Purdue University.
Swain and Davis, 1978
Swain, P. H., and S. M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw
Hill Book Company.
Tang et al, 1997
Tang, L., J. Braun, and R. Debitsch. 1997. Automatic Aerotriangulation - Concept, Realization and
Results. Photogrammetry & Remote Sensing 52 (3): 122-131.
Taylor, 1977
Taylor, P. J. 1977. Quantitative Methods in Geography: An Introduction to Spatial Analysis. Boston,
Massachusetts: Houghton Mifflin Company.
Tou and Gonzalez, 1974
Tou, J. T., and R. C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts:
Addison-Wesley Publishing Company.
Tsingas, 1995
Tsingas, V. 1995. Operational use and empirical results of automatic aerial triangulation. Paper
presented at the 45th Photogrammetric Week, Wichmann Verlag, Karlsruhe, September 1995,
207-214.
Tucker, 1979
Tucker, C. J. 1979. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation.
Remote Sensing of Environment 8:127-150.
USGS, 1999a
United States Geological Survey (USGS). 1999a. About the EROS Data Center. Retrieved October 25,
1999, from http://edcwww.cr.usgs.gov/content_about.html
United States Geological Survey, 1999b
. 1999b. Digital Orthophoto Quadrangles. Retrieved October 2, 1999, from
http://mapping.usgs.gov/digitalbackyard/doqs.html
United States Geological Survey, 1999c
. 1999c. What is SDTS? Retrieved October 2, 1999, from
http://mcmcweb.er.usgs.gov/sdts/whatsdts.html
United States Geological Survey, n.d.
. n.d. National Landsat Archive Production System (NLAPS). Retrieved September 30, 1999,
from http://edc.usgs.gov/glis/hyper/guide/nlaps.html
Vosselman and Haala, 1992
Vosselman, G., and N. Haala. 1992. Erkennung topographischer Paßpunkte durch relationale
Zuordnung. Zeitschrift für Photogrammetrie und Fernerkundung 60 (6): 170-176.
Walker and Miller, 1990
Walker, T. C., and R. K. Miller. 1990. Geographic Information Systems: An Assessment of
Technology, Applications, and Products. Madison, Georgia: SEAI Technical Publications.
Wang, Y., 1988a
Wang, Y. 1988a. A combined adjustment program system for close range photogrammetry. Journal
of Wuhan Technical University of Surveying and Mapping 12 (2).
Wang, Y., 1998b
. 1998b. Principles and applications of structural image matching. International Society for
Photogrammetry and Remote Sensing (ISPRS) Journal of Photogrammetry and Remote Sensing
53:154-165.
Wang, Y., 1994
. 1994. Strukturzuordnung zur automatischen Oberflächenrekonstruktion. Ph.D. dissertation,
Wissenschaftliche Arbeiten der Fachrichtung Vermessungswesen der Universität Hannover.
Wang, Y., 1995
. 1995. A New Method for Automatic Relative Orientation of Digital Images. Zeitschrift fuer
Photogrammetrie und Fernerkundung (ZPF) 3: 122-130.
Wang, Z., 1990
Wang, Z. 1990. Principles of Photogrammetry (with Remote Sensing). Beijing, China: Press of Wuhan
Technical University of Surveying and Mapping, and Publishing House of Surveying and
Mapping.
Watson, 1992
Watson, D. 1992. Contouring: A Guide to the Analysis and Display of Spatial Data. Tarrytown, New
York: Elsevier Science.
Welch, 1990
Welch, R. 1990. 3-D Terrain Modeling for GIS Applications. GIS World 3 (5): 26-30.
Welch and Ehlers, 1987
Welch, R., and W. Ehlers. 1987. Merging Multiresolution SPOT HRV and Landsat TM Data.
Photogrammetric Engineering & Remote Sensing 53 (3): 301-303.
Wolf, 1983
Wolf, P. R. 1983. Elements of Photogrammetry. New York: McGraw-Hill, Inc.
Yang, 1997
Yang, X. 1997. Georeferencing CAMS Data: Polynomial Rectification and Beyond. Ph. D. dissertation,
University of South Carolina.
Yang and Williams, 1997
Yang, X., and D. Williams. 1997. The Effect of DEM Data Uncertainty on the Quality of Orthoimage
Generation. Paper presented at Geographic Information Systems/Land Information Systems
(GIS/LIS) 97, Cincinnati, Ohio, October 1997, 365-371.
Yocky, 1995
Yocky, D. A. 1995. Image Merging and Data Fusion by Means of the Two-Dimensional Wavelet
Transform. Journal of the Optical Society of America 12 (9): 1834-1845.
Zamudio and Atkinson, 1990
Zamudio, J. A., and W. W. Atkinson. 1990. Analysis of AVIRIS data for Spectral Discrimination of
Geologic Materials in the Dolly Varden Mountains. Paper presented at the Second Airborne
Visible/Infrared Imaging Spectrometer (AVIRIS) Conference, Pasadena, California, June 1990,
Jet Propulsion Laboratories (JPL) Publication 90-54:162-66.
Zhang, 1999
Zhang, Y. 1999. A New Merging Method and Its Spectral and Spatial Effects. International Journal of
Remote Sensing 20 (10): 2003-2014.
Related Reading
Battrick, B., and L. Proud, eds. 1992. ERS-1 User Handbook. Noordwijk, The Netherlands: European
Space Agency Publications Division, c/o ESTEC.
Billingsley, F. C., et al. 1983. Data Processing and Reprocessing. Chapter 17 in Manual of Remote
Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of
Photogrammetry.
Burrus, C. S., and T. W. Parks. 1985. DFT/FFT and Convolution Algorithms: Theory and
Implementation. New York: John Wiley & Sons, Inc.
Carter, J. R. 1989. On Defining the Geographic Information System. Fundamentals of Geographic
Information Systems: A Compendium. Ed. W. J. Ripple. Bethesda, Maryland: American Society
for Photogrammetric Engineering and Remote Sensing and the American Congress on Surveying
and Mapping.
Center for Health Applications of Aerospace Related Technologies (CHAART), The. 1998. Sensor
Specifications: IRS-P3. Retrieved December 28, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/irsp3.html
Dangermond, J. 1989. A Review of Digital Data Commonly Available and Some of the Practical
Problems of Entering Them into a GIS. Fundamentals of Geographic Information Systems: A
Compendium. Ed. W. J. Ripple. Bethesda, Maryland: American Society for Photogrammetry and
Remote Sensing and American Congress on Surveying and Mapping.
Defense Mapping Agency Aerospace Center. 1989. Defense Mapping Agency Product Specifications
for ARC Digitized Raster Graphics (ADRG). St. Louis, Missouri: Defense Mapping Agency
Aerospace Center.
Duda, R. O., and P. E. Hart. 1973. Pattern Classification and Scene Analysis. New York: John Wiley
& Sons, Inc.
Elachi, C. 1992. Radar Images of the Earth from Space. Exploring Space.
Elachi, C. 1988. Spaceborne Radar Remote Sensing: Applications and Techniques. New York:
Institute of Electrical and Electronics Engineers, Inc. (IEEE) Press.
Elassal, A. A., and V. M. Caruso. 1983. USGS Digital Cartographic Data Standards: Digital Elevation
Models. Circular 895-B. Reston, Virginia: U.S. Geological Survey.
Federal Geographic Data Committee (FGDC). 1997. Content Standards for Digital Orthoimagery.
Federal Geographic Data Committee, Washington, DC.
Freden, S. C., and F. Gordon, Jr. 1983. Landsat Satellites. Chapter 12 in Manual of Remote Sensing.
Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Geological Remote Sensing Group. 1992. Geological Remote Sensing Group Newsletter 5.
Wallingford, United Kingdom: Institute of Hydrology.
Gonzalez, R. C., and R. E. R. Woods. 1992. Digital Image Processing. Reading, Massachusetts:
Addison-Wesley Publishing Company.
Guptill, S. C., ed. 1988. A Process for Evaluating Geographic Information Systems. U.S. Geological
Survey Open-File Report 88-105.
Jacobsen, K. 1994. Combined Block Adjustment with Precise Differential GPS Data. International
Archives of Photogrammetry and Remote Sensing 30 (B3): 422.
Jordan, L. E., III, B. Q. Rado, and S. L. Sperry. 1992. Meeting the Needs of the GIS and Image
Processing Industry in the 1990s. Photogrammetric Engineering & Remote Sensing 58 (8):
1249-1251.
Keates, J. S. 1973. Cartographic Design and Production. London: Longman Group Ltd.
Kennedy, M. 1996. The Global Positioning System and GIS: An Introduction. Chelsea, Michigan: Ann
Arbor Press, Inc.
Knuth, D. E. 1987. Digital Halftones by Dot Diffusion. Association for Computing Machinery
Transactions on Graphics 6:245-273.
Kraus, K. 1984. Photogrammetrie. Band II. Dümmler Verlag, Bonn.
Lue, Y., and K. Novak. 1991. Recursive Grid - Dynamic Window Matching for Automatic DEM
Generation. 1991 ACSM-ASPRS Fall Convention Technical Papers.
Menon, S., P. Gao, and C. Zhan. 1991. GRID: A Data Model and Functional Map Algebra for Raster
Geo-processing. Paper presented at Geographic Information Systems/Land Information
Systems (GIS/LIS) 91, Atlanta, Georgia, October 1991, 2:551-561.
Moffitt, F. H., and E. M. Mikhail. 1980. Photogrammetry. 3d ed. New York: Harper & Row Publishers.
Nichols, D., J. Frew et al. 1983. Digital Hardware. Chapter 20 in Manual of Remote Sensing. Ed. R.
N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Sader, S. A., and J. C. Winne. 1992. RGB-NDVI Colour Composites For Visualizing Forest Change
Dynamics. International Journal of Remote Sensing 13 (16): 3055-3067.
Short, N. M., Jr. 1982. The Landsat Tutorial Workbook: Basics of Satellite Remote Sensing.
Washington, DC: National Aeronautics and Space Administration.
Space Imaging. 1999. LANDSAT TM. Retrieved September 17, 1999, from
http://www.spaceimage.com/aboutus/satellites/Landsat/landsat.html
Stimson, G. W. 1983. Introduction to Airborne Radar. El Segundo, California: Hughes Aircraft
Company.
TIFF Developers Toolkit. 1990. Seattle, Washington: Aldus Corp.
United States Geological Survey (USGS). 1999. Landsat Thematic Mapper Data. Retrieved September
30, 1999, from http://edc.usgs.gov/glis/hyper/guide/landsat_tm
Wolberg, G. 1990. Digital Image Warping. Los Alamitos, California: Institute of Electrical and
Electronics Engineers, Inc. (IEEE) Computer Society Press.
Wolf, P. R. 1980. Definitions of Terms and Symbols used in Photogrammetry. Manual of
Photogrammetry. Ed. C. C. Slama. Falls Church, Virginia: American Society of Photogrammetry.
Wong, K. W. 1980. Basic Mathematics of Photogrammetry. Chapter II in Manual of Photogrammetry.
Ed. C. C. Slama. Falls Church, Virginia: American Society of Photogrammetry.
Yang, X., R. Robinson, H. Lin, and A. Zusmanis. 1993. Digital Ortho Corrections Using Pre-
transformation Distortion Adjustment. 1993 ASPRS Technical Papers 3:425-434.
Index
Symbols
.OV1 (overview image) 96
.OVR (overview image) 96
.stk (GRID Stack file) 107
Numerics
1:24,000 scale 98
1:250,000 scale 98
2D affine transformation 305
4 mm tape 21, 22
7.5-minute DEM 98
8 mm tape 21, 22
9-track tape 21, 23
A
a priori 248, 380
Absorption 7, 10
spectra 6, 11
Absorption spectra 199
Accuracy assessment 284, 287
Accuracy report 289
Active sensor 6
Adaptive filter 180
Additional Parameter modeling (AP) 335
ADRG 23, 50, 86
file naming convention 91
ordering 104
ADRI 50, 92
file naming convention 94
ordering 104
Aerial photography 86
Aerial photos 4, 25, 292
Aerial triangulation (AT) 295, 313
Airborne GPS 312
Airborne imagery 49
Airborne Imaging Spectrometer 12
Airborne Multispectral Scanner Mk2 12
AIRSAR 75, 83
Aitoff 655
Albers Conical Equal Area 527, 573
Almaz 75
Almaz 1-B 77
Almaz-1 77
Analog photogrammetry 291
Analytical photogrammetry 291
Annotation 54, 133, 138, 463
element 463
in script models 450
layer 463
ANT (Erdas 7.x) (annotation) 54
AP 335
Arc Coverage 50
Arc Interchange (raster) 50
Arc Interchange (vector) 55
ARC system 87, 95
Arc/second format 97
Arc_Interchange to Coverage 55
Arc_Interchange to Grid 55
ARCGEN 50, 55, 112
ArcInfo 49, 50, 107, 112, 377, 423, 428
coverages 35
data model 35, 37
UNGENERATE 112
ArcInfo GENERATE 44
ArcInfo INTERCHANGE 44
ArcView 44
Area based matching 324
Area of interest 31, 171, 433
ASCII 99, 430
ASCII Raster 50
ASCII To Point Annotation (annotation) 54
ASCII To Point Coverage (vector) 55
Aspect 411, 415, 476
calculating 415
equatorial 477
oblique 477
polar 477
transverse 478
ASTER 50
AT 313
Atmospheric correction 171
Atmospheric effect 161
Atmospheric modeling 162
Attribute
imported 39
in model 448
information 35, 37, 39
raster 429
thematic 427, 428
vector 428, 430
viewing 429
Auto update 122, 123, 124
AutoCAD 44, 49, 112
Automatic image correlation 352
Automatic tie point collection 322
Average 507
AVHRR 15, 50, 58, 67, 162, 201, 380
extract 68
full set 68
ordering 103
AVHRR (Dundee Format) 50
AVHRR (NOAA) 50
AVIRIS 11, 83
Azimuth 475
Azimuthal Equidistant 530
B
Band 2, 61, 62, 67, 70
displaying 126
Banding 18, 160
see also striping
Bartlett window 219
Basic Image Interchange Format 53
Bayesian classifier 268
Beam mode selection 348
Behrmann 533
BIL 19, 50
Bin 169
Binary format 18
BIP 19, 21, 50
Bipolar Oblique Conic Conformal 642
Bit 18
display device 118
in files (depth) 16, 18, 22
Block of images 321
Block triangulation 295, 313
Blocking factor 20, 22
Bonne 535
Border 468
Bpi 23
Brightness inversion 172
Brightness value 16, 118, 119, 126, 130,
163
Brovey Transform 179
BSQ 19, 50
Buffer zone 433
Bundle block adjustment 295, 312
Butterworth window 220
Byte 18
C
CADRG (Compressed ADRG) 50
Canadian Geographic Information System 421
Cartesian coordinate 36
Cartography 459
Cartridge tape 21, 23
Cassini 537, 643
Cassini-Soldner 643
Categorical
data 3
CCD 328
CD-ROM 18, 21, 23
Cell 99
Change detection 17, 29, 56, 377
Chi-square
distribution 285
statistics 287
CIB 96
CIB (Controlled Image Base) 50
Class 243, 426
name 427, 429
value 130, 429
numbering systems 426, 438
Classical aerial triangulation 295
Classification 30, 77, 227, 243, 427
and enhanced data 189, 247
and rectified data 378
and terrain analysis 411
evaluating 284
flow chart 271
iterative 246, 251, 269
scheme 246
Clump 434
Clustering 254
ISODATA 255
RGB 255, 259
Coefficient 515
in convolution 173
of variation 228
Collinearity condition 310
Collinearity equations 314, 334
Color gun 118, 119, 126
Color scheme 40, 427
Color table 130, 429
for printing 500
Colorcell 120
read-only 120
Color-infrared 125
Colormap 119, 133
Complex image 55
Confidence level 287
Conformality 475
Contiguity analysis 432
Contingency matrix 263, 265
Continuous
data 3
Continuous data
see data
Contrast stretch 30, 163
for display 126, 166, 509
linear 164
min/max vs. standard deviation 128, 167
nonlinear 165
piecewise linear 165
Contrast table 126
Control point extension 295
Controlled Image Base 96
Convergence value 317
Convolution 18
cubic 403
filtering 173, 403, 404, 435
kernel
crisp 178
edge detector 176
edge enhancer 177
gradient 233
high frequency 175, 176
low frequency 177, 403
Prewitt 232
zero-sum 176
Convolution kernel 173
high frequency 404
low frequency 404
Prewitt 233
Coordinate
Cartesian 36, 479
conversion 408
file 3, 36, 142
geographic 479, 561
map 3, 4, 36, 375, 378, 379
planar 479
reference 378, 379
retransformed 400
source 379
spherical 479
Coordinate system 3, 299
ground space 299
image space 299
Correlation calculations 324
Correlation coefficient 356
Correlation threshold 381
Correlation windows 324
Correlator library 356
Covariance 265, 277, 510
sample 510
Covariance matrix 192, 262, 267, 279, 511
Cross correlation 324
D
DAEDALUS 50, 84
DAEDALUS TMS
bands/frequencies 84
Data 422
airborne sensor 49
ancillary 248
categorical 3
complex 55
compression 190, 259
continuous 3, 24, 125, 425
displaying 129
creating 143
elevation 248, 411
enhancement 156
from aircraft 83
geocoded 20, 29, 376
gray scale 139
hyperspectral 11
interval 3
nominal 3
ordering 103
ordinal 3
packed 67
pseudo color 138
radar 49, 72
applications 76
bands 75
merging 240
raster 3, 133
converting to vector 106
editing 31
formats (BIL, etc.) 22
importing and exporting 50
in GIS 424
sources 49
ratio 3
satellite 49
structure 194
thematic 3, 24, 130, 251, 427
displaying 132
tiled 26, 55
topographic 97, 411
using 99
true color 139
vector 133, 138, 377, 409, 428
converting to raster 106, 427
copying 39
displaying 40
editing 453
densify 454
generalize 454
reshape 454
spline 453
split 454
unsplit 454
from raster data 42
importing 42, 55
in GIS 424
renaming 39
sources 42, 44, 49
structure 37
viewing 138
multiple layers 139
overlapping layers 139
Data correction 17, 31, 155, 159
geometric 159, 162, 375
radiometric 159, 237, 376
Data file value 1, 30
display 126, 142, 164
in classification 243
Data storage 18
Database
image 28
Decision rule 245, 269
Bayesian 279
feature space 274
Mahalanobis distance 277
maximum likelihood 278, 285
minimum distance 275, 278, 285, 287
non-parametric 270
parallelepiped 271
parametric 270
Decision tree 283
Decorrelation stretch 194
Degrees of freedom 287
DEM 1, 25, 97, 98, 162, 376, 412
editing 31
interpolation 32
ordering 103
Density 23
Density of detail 357
Descriptive information
see attribute information 39
Desktop Scanners 85
Desktop scanners 297
Detector 57, 159
DFAD 55
DGN 44
DGN (Intergraph IGDS) (vector) 55
Differential collection geometry 365
Differential correction 101
DIG Files (Erdas 7.x) (vector) 55
Digital image 42
Digital orthophoto 336
Digital orthophotography 412
Digital photogrammetry 292
Digital picture
see image 118
Digital terrain model (DTM) 49
Digitizing 42, 377
GCPs 380
operation modes 43
point mode 43
screen 42, 44
stream mode 43
tablet 42
DIME 114
Dimensionality 190, 247, 248, 514
Direction of flight 296
Discrete Cosine Transformation 107
Disk space 23
Diskette 18, 21
Display
32-bit 121
DirectColor 121, 122, 128, 131
HiColor 121, 124
PC 125
PseudoColor 121, 124, 129, 132
TrueColor 121, 123, 124, 128, 131
Display device 117, 126, 163, 166, 196
Display memory 156
Display resolution 117
Distance image file 285, 286
Distribution 504
Distribution Rectangle 87
Dithering 137
color artifacts 138
color patch 137
Divergence 263
signature 266
transformed 266
DLG 23, 44, 49, 55, 113
Doppler centroid 344
Doppler cone 344
DOQ 50
DOQ (JPEG) 50
DOQs 86
DTED 50, 92, 97, 412
DTM 49
DXF 44, 49, 54, 112
DXF to Annotation 56
DXF to Coverage 56
Dynamic range 16
E
Earth Centered System 343
Earth Fixed Body coordinate system 342
Earth model 344
Eckert I 539
Eckert II 541
Eckert III 543
Eckert IV 545
Eckert V 547
Eckert VI 549
Edge detection 176, 231
Edge enhancement 177
Eigenvalue 190
Eigenvector 190, 192
Eikonix 85
Electric field 359
Electromagnetic radiation 5, 57
Electromagnetic spectrum 2, 5
long wave infrared region 5
short wave infrared region 5
Electromagnetic wave 358, 370
Elements of exterior orientation 307
Ellipse 190, 263
Ellipsoid 350
Enhancement 30, 64, 155, 243
linear 127, 164
nonlinear 164
on display 133, 142, 156
radar data 155, 224
radiometric 155, 162, 172
spatial 155, 162, 172
spectral 155, 189
Entity (AutoCAD) 113
EOF (end of file) 20
EOSAT 17, 29, 61, 103
EOSAT SOM 551
EOV (end of volume) 20
Ephemeris adjustment 345
Ephemeris coordinate system 342
Ephemeris data 331
Ephemeris modeling 342
equal area
see equivalence
Equidistance 475
Equidistant Conic 552, 583
Equidistant Cylindrical 554, 643
Equirectangular 555
Equivalence 475
ER Mapper 50
ERDAS macro language (EML) 432
ERDAS Version 7.X 24, 106
EROS Data Center 104
Error matrix 289
ERS (Conae-PAF CEOS) 51
ERS (D-PAF CEOS) 51
ERS (I-PAF CEOS) 51
ERS (Tel Aviv-PAF CEOS) 51
ERS (UK-PAF CEOS) 51
ERS-1 75, 77, 78
ordering 105
ERS-2 77, 79
ESRI 35, 421
ETAK 44, 49, 56, 114
Euclidean distance 276, 285, 515
Expected value 508
Expert classification 281
Exposure station 296
Extent 464
Exterior orientation 307
SPOT 330
F
.fsp.img file 252
False color 64
False easting 479
False northing 479
Fast format 20
Fast Fourier Transform 210
Feature based matching 326
Feature extraction 155
Feature point matching 326
Feature space 513
area of interest 249, 253
image 252, 513
Fiducial marks 304
Field 428
File
.fsp.img 252
.gcc 380
.GIS 24, 106
.gmd 444
.img 2, 24, 125
.LAN 24, 106
.mdl 450
.ovr 463
.sig 510
archiving 28
header 22
output 23, 27, 397, 398, 449
.img 166
classification 245
pixel 118
tic 36
File coordinate 3
File name 27
Film recorder 499
Filter
adaptive 180
Fourier image 216
Frost 225, 230
Gamma-MAP 225, 230
high-pass 218
homomorphic 222
Lee 225, 227
Lee-Sigma 224
local region 224, 226
low-pass 216
mean 224, 225
median 18, 224, 225, 226
periodic noise removal 221
Sigma 227
zero-sum 233
Filtering 174
see also convolution filtering
FIT 51
Flattening 369
Flight path 296
Focal analysis 18
Focal length 304
Focal operation 31, 435
Focal plane 304
Fourier analysis 155, 210
Fourier magnitude 210
calculating 212
Fourier Transform
calculation 211
Editor
window functions 218
filtering 216
high-pass filtering 218
inverse
calculating 215
low-pass filtering 216
neighborhood techniques 209
noise removal 221
point techniques 209
Frequency
statistical 504
Frost filter 225, 230
Function memory 156
Fuyo 1 75
ordering 105
Fuzzy convolution 280
Fuzzy methodology 280
G
.gcc file 380
.GIS file 24, 106
.gmd file 444
GAC (Global Area Coverage) 67
Gamma-MAP filter 225, 230
Gauss Kruger 558
Gaussian distribution 229
Gauss-Krüger 653
GCP 379, 516
corresponding 379
digitizing 380
matching 380, 381
minimum required 391
prediction 380, 381
selecting 379
GCP configuration 321
GCP requirements 320
GCPs 319
General Vertical Near-side Perspective 559
Generic Binary 51
Geocentric coordinate system 301
Geocoded data 20
Geocoding
GeoTIFF 111
Geographic Information System
see GIS
Geoid 350
Geology 77
Georeference 376, 495
Georeferencing
GeoTIFF 111
GeoTIFF 51, 110
geocoding 111
georeferencing 111
Gigabyte 18
GIS 1
database 423
defined 422
history 421
GIS (Erdas 7.x) 51
Glaciology 77
Global operation 31
GLONASS 100
Gnomonic 563
GOME 79
GPS data 100
GPS data applications 101
GPS satellite position 100
Gradient kernel 233
Graphical model 156, 431
convert to script 450
create 444
Graphical modeling 433, 442
GRASS 51
Graticule 469, 479
Gray scale 412
Gray values 336
GRD 51
Great circle 475
GRID 51, 106, 107
GRID (Surfer ASCII/Binary) 51
Grid cell 1, 118
Grid line 468
GRID Stack 51
GRID Stack7x 51
GRID Stacks 107
Ground Control Point
see GCP
Ground coordinate system 301
Ground space 299
Ground truth 243, 249, 250, 263
Ground truth data 101
Ground-based photographs 292
H
Halftone 499
Hammer 565
Hardcopy 495
Hardware 117
HD 354
Header
file 20, 22
record 20
HFA file 349
Hierarchical pyramid technique 353
High density (HD) 354
High Resolution Visible sensors (HRV) 328
Histogram 163, 259, 427, 429, 503, 504,
513, 514
breakpoint 167
signature 263, 269
Histogram equalization
formula 170
Histogram match 171
Homomorphic filtering 222
Host workstation 117
Hotine 589
HRPT (High Resolution Picture Transmission)
67
HRV 328
Hue 196
Huffman encoding 107
Hydrology 77
Hyperspectral data 11
Hyperspectral image processing 155
I
.img file 2, 24
Ideal window 219
IFOV (instantaneous field of view) 15
IGDS 56
IGES 44, 49, 56, 115
IHS to RGB 198
IKONOS 58
bands/frequencies 59
Image 1, 118, 155
airborne 49
complex 55
digital 42
microscopic 49
pseudo color 40
radar 49
raster 42
ungeoreferenced 36
Image algebra 201, 247
Image Catalog 27, 28
Image coordinate system 300
Image data 1
Image display 2, 117
Image file 1, 125, 412
statistics 503
Image Information 127, 135, 376, 378, 428
Image Interpreter 11, 18, 156, 166, 179,
180, 195, 237, 259, 413, 415, 418, 427,
430, 431
functions 157
Image processing 1
Image pyramid 327
Image registration 366
Image scale (SI) 296, 337
Image space 299, 304
Image space coordinate system 300
IMAGINE Developers Toolkit 432, 489, 521
IMAGINE Expert Classifier 281
IMAGINE OrthoRadar algorithm description
342
IMAGINE Radar Interpreter 224
IMAGINE StereoSAR DEM
Constrain
theory 352
Degrade
theory 351, 357
Despeckle
theory 351
Height
theory 357
Input
theory 348
Match
theory 352
Register
theory 351
Rescale
theory 351
Subset
theory 350
IMAGINE StereoSAR DEM process flow 347
Import
direct 50
generic 55
Incidence angles 331
Inclination 332
Inclination angles 331
Index 199, 433, 439
application 200
vegetation 11
Inertial navigation system 102
INFO 39, 113, 114, 115, 428
path 40
see also ArcInfo
Information (vs. data) 422
INS 312
Intensity 196
Interferometric model 361
Interior orientation 303
SPOT 329
International Dateline 479
Interpretative photogrammetry 292
Interrupted Goode Homolosine 567
Interrupted Mollweide 569
Interval
classes 400, 426
data 3
IRS 59
IRS-1C 59
IRS-1C (EOSAT Fast Format C) 51
IRS-1C (EUROMAP Fast Format C) 51
IRS-1C/1D 51
IRS-1D 60
J
Jeffries-Matusita distance 267
JERS-1 77, 79
bands/frequencies 79
ordering 105
JFIF (JPEG) 51, 107
JPEG File Interchange Format 107
K
Kappa 308
Kappa coefficient 289
Knowledge Classifier 284
Knowledge Engineer 282
Kurtosis 236
L
.LAN file 24, 106
Laborde Oblique Mercator 645
LAC (Local Area Coverage) 67
Lambert Azimuthal Equal Area 570
Lambert Conformal 611
Lambert Conformal Conic 573, 642, 647
Lambertian reflectance model 418, 419
LAN (Erdas 7.x) 51
Landsat 10, 17, 25, 42, 51, 57, 58, 155,
162, 180, 608
description 57, 61
history 61
MSS 15, 61, 62, 160, 162, 201
ordering 103
TM 9, 10, 14, 20, 62, 178, 198, 201,
226, 242, 378, 380
displaying 126
Landsat 7 65
characteristics 65
data types 65
Laplacian operator 232, 234
Latitude/Longitude 97, 376, 479, 561
rectifying 399
Layer 2, 425, 442
Least squares adjustment 315
Least squares condition 316
Least squares correlation 325
Least squares regression 383, 387
Lee filter 225, 227
Lee-Sigma filter 224
Legend 468
Lens distortion 306
Level 1B data 383
Level slice 171
Light SAR 83
Line 36, 41, 55, 114
Line detection 231
Line dropout 18, 160
Linear regression 161
Linear transformation 383, 387
Lines of constant range 239
LISS-III 59
bands/frequencies 59
Local region filter 224, 226
Long wave infrared region 5
Lookup table 119, 163
display 167
Low parallax (LP) 354
Lowtran 9, 162
Loximuthal 576
LP 354
M
.mdl file 450
Magnification 118, 140, 141
Magnitude of parallax 356
Mahalanobis distance 285
Map 459
accuracy 486, 493
aspect 460
base 460
bathymetric 460
book 495
cadastral 460
choropleth 460
colors in 462
composite 460
composition 492
contour 460
credit 471
derivative 460
hardcopy 494
index 460
inset 460
isarithmic 460
isopleth 460
label 470
land cover 243
lettering 472
morphometric 460
outline 460
output to TIFF 498
paneled 495
planimetric 72, 460
printing 495
continuous tone 500
with black ink 501
qualitative 461
quantitative 461
relief 460
scale 486, 496
scaled 495, 499
shaded relief 460
slope 460
thematic 461
title 471
topographic 72, 461
typography 471
viewshed 461
Map Composer 459, 497
Map coordinate 3, 4, 378, 379
conversion 408
Map projection 375, 377, 474, 521
azimuthal 474, 476, 486
compromise 474
conformal 487
conical 474, 477, 486
cylindrical 474, 477, 486
equal area 487
external 481, 640
gnomonic 477
modified 478
orthographic 477
planar 474, 476
pseudo 478
see also specific projection
selecting 486
stereographic 477
types 476
units 483
USGS 479, 522
MapBase
see ETAK
Mapping 459
Mask 436
Matrix 517
analysis 433, 441
contingency 263, 265
covariance 192, 262, 267, 279, 511
error 289
transformation 378, 380, 397, 516
Matrix algebra
and transformation matrix 517
multiplication 517
notation 517
transposition 518
Maximum likelihood
classification decision rule 265
Mean 127, 167, 264, 506, 508, 509
of ISODATA clusters 256
vector 267, 512
Mean Euclidean distance 236
Mean filter 224, 225
Measurement 43
Measurement vector 511, 513, 517
Median filter 224, 225, 226
Megabyte 18
Mercator 478, 578, 581, 589, 625, 632, 643
Meridian 479
Metric photogrammetry 292
Microscopic imagery 49
Microsoft Windows NT 117, 124
MIF/MID (MapInfo) to Coverage 56
Miller Cylindrical 581
Minimum distance
classification decision rule 258, 265
Minimum Error Conformal 646
Minnaert constant 420
Model 442, 443
Model Maker 156, 431, 442
criteria function 448
functions 445
object 446
data type 447
matrix 447
raster 446
scalar 447
table 447
working window 448
Modeling 31, 441
and image processing 442
and terrain analysis 411
using conditional statements 448
Modified Polyconic 647
Modified Stereographic 648
Modified Transverse Mercator 583
MODIS 51
Modtran 9, 162
Mollweide 585
Mollweide Equal Area 649
Mosaic 29, 30, 377
MrSID 51, 108
MSS Landsat 51
Multiplicative algorithm 179
Multispectral imagery 57, 70
Multitemporal imagery 29
N
Nadir 57, 331
NASA 61, 73, 83
NASA ER-2 84
NASA/JPL 83
NASDA CEOS 51
Natural-color 125
NAVSTAR 100
Nearest neighbor
see resample
Neatline 468
Negative inclination 331
Neighborhood analysis 433, 435
boundary 436
density 436
diversity 436
majority 436
maximum 436
mean 436
median 437
minimum 437
minority 437
rank 437
standard deviation 437
sum 437
New Zealand Map Grid 587
NITFS 53
NLAPS 51, 66
NOAA 67
Node 36
dangling 457
from-node 36
pseudo 457
to-node 36
Noise reduction 368
Noise removal 221
Nominal
classes 400, 426
data 3
Non-Lambertian reflectance model 418, 419
Nonlinear transformation 385, 388, 390
Normal distribution 190, 277, 278, 279, 506,
507, 510, 514
Normalized correlation coefficient 356
Normalized Difference Vegetation Index (NDVI) 11, 201, 202
O
.OV1 (overview image) 96
.OVR (overview image) 96
.ovr file 463
Oblated Equal Area 588
Oblique Mercator 589, 611, 645, 651
Oblique photographs 292
Observation equations 314
Oceanography 77
Off-nadir 70, 331
Offset 383
Oil exploration 77
Omega 308
Opacity 139, 429
Optical disk 21
Orb View-3
bands/frequencies 69
Orbit correction 349
OrbView3 68
Order
of polynomial 515
of transformation 516
Ordinal
classes 400, 426
data 3
Orientation 308
Orientation angle 333
Orthocorrection 99, 162, 239, 376
Orthographic 592
Orthorectification 335, 346, 376
Output file 23, 397, 398, 449
.img 166
classification 245
Output formation 346
Overlay 433, 438
Overlay file 463
Overview images 96
P
Panchromatic
band/frequency 60
Panchromatic imagery 57, 70
Panchromatic sensor 59
Parallax 355
Parallel 479
Parallelepiped
alarm 263
Parameter 509
Parametric 278, 279, 510
Passive sensor 6
Pattern recognition 243, 263
PCX 51
Periodic noise removal 221
Phase 368, 369, 370, 372
Phase function 371
Phi 308
Photogrammetric configuration 313
Photogrammetric scanners 85, 297
Photogrammetry 291
Photograph 49, 56, 180
aerial 83
ordering 104
Pixel 1, 118
depth 117
display 118
file vs. display 118
Pixel coordinate system 299, 304
Plane table photogrammetry 291
Plate Carrée 555, 595, 643, 655
Point 35, 41, 55, 114
label 36
Point ID 379
Polar Stereographic 596, 647
Pollution monitoring 77
Polyconic 583, 599, 647
Polygon 36, 41, 55, 114, 206, 249, 435, 453
Polynomial 386, 515
Positive inclination 331
Posting 356
PostScript 497
Preference Editor 134
Prewitt kernel 232, 233
Primary color 118, 501
RGB vs. CMY 501
Principal component band 191
Principal components 30, 178, 179, 190,
194, 247, 511, 514
computing 192
Principal point 304
Printer
PostScript 497
Tektronix Inkjet 499
Tektronix Phaser 499
Tektronix Phaser II SD 500
Probability 268, 278
Processing a strip of images 320
Processing one image 320
Profile 97
Projection
external
Bipolar Oblique Conic Conformal 642
Cassini-Soldner 643
Laborde Oblique Mercator 645
Modified Polyconic 647
Modified Stereographic 648
Mollweide Equal Area 649
Rectified Skew Orthomorphic 651
Robinson Pseudocylindrical 652
Southern Orientated Gauss Conformal
653
Winkel's Tripel 655
perspective 621
USGS
Alaska Conformal 525
Albers Conical Equal Area 527
Azimuthal Equidistant 530
Bonne 535
Cassini 537
Conic Equidistant 552
Eckert IV 545
Eckert VI 549
Equirectangular (Plate Carrée) 555
Gall Stereographic 557
General Vertical Nearside Perspective
559
Geographic (Lat/Lon) 561
Gnomonic 563
Hammer 565
Lambert Azimuthal Equal Area 570
Lambert Conformal Conic 573
Mercator 578
Miller Cylindrical 581
Modified Transverse Mercator 583
Mollweide 585
Oblique Mercator (Hotine) 589
Orthographic 592
Polar Stereographic 596
Polyconic 599
Sinusoidal 606
Space Oblique Mercator 608
State Plane 611
Stereographic 621
Transverse Mercator 625
Two Point Equidistant 627
UTM 629
Van der Grinten I 632
Projection Chooser 479
Proximity analysis 432
Pseudo color 64
display 426
Pseudo color image 40
Pushbroom scanner 70
Pyramid layer 27, 134
Pyramid technique 353
Pythagorean Theorem 515
Q
Quartic Authalic 601
Quick tests 357
R
Radar 4, 18, 180, 233, 234, 235
Radar imagery 49
RADARSAT 51, 77, 80
beam mode resolution 80
ordering 105
RADARSAT (Acres CEOS) 51
RADARSAT (Vancouver CEOS) 51
RADARSAT Beam mode 356
Radial lens distortion 306
Radiative transfer equation 9
RAM 133
Range line 238
Range sphere 344
Raster
data 3
Raster Attribute Editor 131, 429
Raster editing 31
Raster image 42
Raster Product Format 52, 95
Raster region 434
Ratio
classes 400, 426
data 3
Rayleigh scattering 8
Real time differential GPS 101
Recode 31, 433, 437, 442
Record 21, 428
logical 22
physical 22
Rectification 29, 375
process 378
Rectified Skew Orthomorphic 651
Reduction 141
Reference coordinate 378, 379
Reference pixel 288
Reference plane 302
Refined ephemeris 350
Reflect 383, 384
Reflection spectra 6
see absorption spectra
Registration 375
vs. rectification 378
Relation based matching 326
Remote sensing 375
Report
generate 430
Resample 375, 378, 397
Bicubic Spline 398
Bilinear Interpolation 133, 398, 400
Cubic Convolution 134, 398, 403
for display 133
Nearest Neighbor 133, 398, 399, 403
Residuals 394
Resolution 14, 298
display 117
merge 178
radiometric 14, 16, 17, 61
spatial 15, 17, 171, 496
spectral 14, 17
temporal 14, 17
Resolution merge
Brovey Transform 179
Multiplicative 179
Principal Components Merge 179
Retransformed coordinate 400
RGB monitor 119
RGB to IHS 241
Rhumb line 475
Right hand rule 301
RMS error 305, 380, 383, 394, 396
tolerance 396
total 395
RMSE 297
Roam 141
Robinson 603
Robinson Pseudocylindrical 652
Root mean square error 305
Root Mean Square Error (RMSE) 297
Rotate 383
Rotation matrix 308
RPF 95
RPF Cell 96
RPF Frame 96
RPF Overview 96
RPF Product 96
RSO 605
Rubber sheeting 385
S
.sig file 510
.stk (GRID Stack file) 107
Sanson-Flamsteed 606
SAR 73
SAR image intersection 348
SAR imaging model 343
Satellite 56
imagery 4
system 56
Satellite photogrammetry 327
Satellite scene 329
Saturation 196
Scale 15, 383, 464, 495
display 496
equivalents 465
large 15
map 496
paper 497
determining 498
pixels per inch 465
representative fraction 464
small 15
verbal statement 464
Scale bar 464
Scaled map 499
Scan line 328
Scanner 57
Scanners 85
Scanning 84, 377
Scanning resolutions 298
Scanning window 435
Scattering 7
Rayleigh 8
Scatterplot 190, 259, 514
feature space 252
SCBA 335
Screendump command 109
Script model 156, 431
data type 453
library 450
statement 452
Script modeling 433
SDE 52
SDTS 44, 108
SDTS (raster) 52
SDTS (vector) 56
SDTS profiles 108
SDTS Raster Profile and Extensions 108
Search area 352, 355
Seasat-1 73
Seat 117
SeaWiFS 69
bands/frequencies 69
SeaWiFS L1B and L2A 52
Secant 477
Seed properties 250
Self-calibrating bundle adjustment (SCBA)
335
Sensor 57
active 6, 74
passive 6, 74
radar 77
Separability
listing 267
signature 265
Shaded relief 411, 417
calculating 418
Shadow
enhancing 165
Shapefile 52
Ship monitoring 77
Short wave infrared region 5
SI 337
Sigma filter 227
Sigma notation 503
Signal based matching 324
Signature 244, 245, 248, 251, 261
alarm 263
append 269
contingency matrix 265
delete 269
divergence 263, 266
ellipse 263, 264
evaluating 262
file 510
histogram 263, 269
manipulating 251, 269
merge 269
non-parametric 244, 253, 262
parametric 244, 262
separability 265, 267
statistics 263, 269
transformed divergence 266
Simple Conic 552
Simple Cylindrical 555
Single frame orthorectification 294
Sinusoidal 478, 606
SIR-A 75, 77, 81
ordering 105
SIR-B 75, 77, 81
ordering 105
SIR-C 77, 82
ordering 105
SIR-C/X-SAR 82
bands/frequencies 82
Skew 383
Skewness 236
Slant range 344
Slant-to-ground range correction 239
SLAR 73, 75
Slope 411, 413
calculating 413
Softcopy photogrammetry 292
Source coordinate 379
Southern Orientated Gauss Conformal 653
Space forward intersection 311
Space Oblique Mercator 478, 589, 608
Space Oblique Mercator (Formats A & B) 610
Space resection 294, 311
Sparse mapping grid 346
Spatial frequency 172
Spatial Modeler 18, 55, 156, 420, 427, 442
Spatial Modeler Language 156, 431, 442, 449
Speckle noise 76, 224
removing 225
Speckle suppression 18
local region filter 226
mean filter 225
median filter 225, 226
Sigma filter 227
Spectral dimensionality 511
Spectral distance 265, 275, 285, 287, 515
in ISODATA clustering 255
Spectral space 190, 192, 513
Spectroscopy 5
Spheroid 487, 488
SPOT 10, 15, 17, 25, 29, 42, 52, 57, 58, 70,
92, 115, 155, 162, 180, 380
ordering 103
panchromatic 15, 178, 198
XS 70, 201
displaying 126
SPOT (GeoSpot) 52
SPOT 4
bands/frequencies 72
SPOT CCRS 52
SPOT exterior orientation 330
SPOT interior orientation 329
SPOT SICORP MetroView 52
SPOT4 72
SRPE 108
Standard Beam Mode 354
Standard deviation 128, 167, 264, 508
sample 509
Standard meridian 475, 478
Standard parallel 475, 477
State Plane 483, 488, 574, 611, 625
Statistics 26, 503
signature 263
Step size 356
Stereographic 596, 621
Stereographic (Extended) 624
Stereoscopic imagery 72
Striping 18, 192, 226
Subset 29
Summation 503
Sun angle 417
SUN Raster 52
Sun Raster 106, 109
Surface generation
weighting function 33
Swath width 57
Swiss Cylindrical 654
Symbol 469
abstract 469
function 470
plan 470
profile 470
replicative 469
Symbolization 35
Symbology 40
Symmetric lens distortion 306
T
Tangent 477
Tangential lens distortion 306
Tape 18
Tasseled Cap transformation 195, 201
Tektronix
Inkjet Printer 499
Phaser II SD 500
Phaser Printer 499
Template 352
Template size 354
Terramodel 56
Terrestrial photography 292, 302
Texture analysis 235
Thematic
data 3
Thematic data
see data
Theme 425
Threshold 285, 356
Thresholding 285
Thresholding (classification) 277
Tick mark 468
Tie point distribution 322
Tie points 295, 322
TIFF 52, 85, 106, 109, 497, 498
TIGER 45, 49, 56, 115
disk space requirement 116
Time 344
TM Landsat Acres Fast Format 52
TM Landsat Acres Standard Format 52
TM Landsat EOSAT Fast Format 52
TM Landsat EOSAT Standard Format 52
TM Landsat ESA Fast Format 52
TM Landsat ESA Standard Format 52
TM Landsat IRS Fast Format 52
TM Landsat IRS Standard Format 52
TM Landsat Radarsat Fast Format 52
TM Landsat Radarsat Standard Format 52
TM Landsat-7 Fast-L7A EROS 52
Topocentric coordinate system 302
Topographic effect 418
Topological Vector Profile 108
Topology 37, 454
constructing 455
Total field of view 57
Total RMS error 395
Training 243
supervised 243, 247
supervised vs. unsupervised 247
unsupervised 244, 247, 254
Training field 249
Training sample 249, 252, 288, 377, 514
defining 250
evaluating 251
Training site
see training field
Transformation
1st-order 383
linear 383, 387
matrix 382, 385
nonlinear 385, 388, 390
order 382
Transformation matrix 378, 380, 382, 385,
397, 516
Transposition 279, 518
Transposition function 266, 278, 279
Transverse Mercator 583, 611, 625, 629,
643, 653
True color 64
True direction 475
Two Point Equidistant 627
Type style
on maps 471
U
Ungeoreferenced image 36
Universal Polar Stereographic 596
Universal Transverse Mercator 583, 625, 629
Unwrapped phase 372
Unwrapping 370
USGS DEM 50
USRP 52
UTM
see Universal Transverse Mercator
V
V residual matrix 317
Van der Grinten I 632, 649
Variable 426
in models 453
Variable rate technology 102
Variance 236, 277, 508, 509, 510, 511
sample 508
Vector 517
Vector data
see data
Vector layer 37
Vector Quantization 95
Vegetation index 11, 201
Velocity vector 333
Vertex 36
Viewer 133, 134, 156, 433
dithering 137
linking 140
Volume 23
set 23
VPF 45, 56
VQ 95
W
Wagner IV 634
Wagner VII 636
Wavelet 108
Weight factor 440
classification
separability 268
Weighting function (surfacing) 33
Wide Field Sensor 60
WiFS 60
bands/frequencies 60
Winkel I 638
Winkel II 638
Winkel's Tripel 655
Workspace 38
X
X matrix 316
X residual 394
X RMS error 395
X Window 117
XSCAN 85
Y
Y residual 394
Y RMS error 395
Z
Zero-sum filter 176, 233
Zone 87, 95
Zone distribution rectangle (ZDR) 87
Zoom 140, 141