ERDAS Field Guide
ERDAS®, Inc.
Atlanta, Georgia
Copyright 1982 - 1997 by ERDAS, Inc. All rights reserved.
ERDAS, Inc.
ERDAS International
Acknowledgments
The ERDAS Field Guide was originally researched, written, edited, and designed by
Chris Smith and Nicki Brown of ERDAS, Inc. The Second Edition was produced by
Chris Smith, Nicki Brown, Nancy Pyden, and Dana Wormer of ERDAS, Inc., with
assistance from Diana Margaret and Susanne Strater. The Third Edition was written and
edited by Chris Smith, Nancy Pyden, and Pam Cole of ERDAS, Inc. The Fourth Edition
was written and edited by Stacey Schrader and Russ Pouncey of ERDAS, Inc. Many,
many thanks go to David Sawyer, ERDAS Engineering Director, and the ERDAS
Software Engineers for their significant contributions to this and previous editions.
Without them this manual would not have been possible. Thanks also to Derrold
Holcomb for lending his expertise on the Enhancement chapter. Many others at ERDAS
provided valuable comments and suggestions in an extensive review process.
A special thanks to those industry experts who took time out of their hectic schedules
to review previous editions of the ERDAS Field Guide. Of these “external” reviewers,
Russell G. Congalton, D. Cunningham, Thomas Hack, Michael E. Hodgson, David
McKinsey, and D. Way deserve recognition for their contributions to previous editions.
Cover image: The image on the front cover of the ERDAS IMAGINE Ver. 8.3 manuals is
Global Relief Data from the National Geophysical Data Center (National Oceanic and
Atmospheric Administration, U.S. Department of Commerce).
Trademarks
ERDAS and ERDAS IMAGINE are registered trademarks of ERDAS, Inc. IMAGINE Essentials,
IMAGINE Advantage, IMAGINE Professional, IMAGINE Vista, IMAGINE Production, Model
Maker, CellArray, ERDAS Field Guide, and ERDAS IMAGINE Tour Guides are trademarks of
ERDAS, Inc. OrthoMAX is a trademark of Autometric, Inc. Restoration is a trademark of
Environmental Research Institute of Michigan. Other brands and product names are trademarks
of their respective owners. ERDAS IMAGINE Ver. 8.3. January, 1997. Part No. SWE-MFG4-8.3.0-ALLP.
Table of Contents
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Conventions Used in this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xiv
CHAPTER 1
Raster Data
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Image Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Remote Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Absorption/Reflection Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Spectral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Spatial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Radiometric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Temporal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Data Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Line Dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Data Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Storage Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Storage Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Calculating Disk Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
ERDAS IMAGINE Format (.img) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Image File Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Consistent Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Keeping Track of Image Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Geocoded Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Using Image Data in GIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Subsetting and Mosaicking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Multispectral Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Editing Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Editing Continuous (Athematic) Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Interpolation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
CHAPTER 2
Vector Layers
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Vector Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Attribute Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Displaying Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Symbolization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Vector Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Tablet Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Screen Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Imported Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Raster to Vector Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
CHAPTER 3
Raster and Vector Data Sources
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Importing and Exporting Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Importing and Exporting Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Satellite Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Satellite System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Satellite Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Landsat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
SPOT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
NOAA Polar Orbiter Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Advantages of Using Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Speckle Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Applications for Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Current Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Future Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Image Data from Aircraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
AIRSAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
AVIRIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Image Data from Scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
ADRG Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
ARC System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
ADRG File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
.OVR (overview) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
.IMG (scanned image data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
.Lxx (legend data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
ADRG File Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
ADRI Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
.OVR (overview) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
.IMG (scanned image data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
ADRI File Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
DEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
DTED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Using Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Ordering Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Addresses to Contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Raster Data from Other Software Vendors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
ERDAS Ver. 7.X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
GRID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Sun Raster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
TIFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Vector Data from Other Software Vendors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
ARCGEN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
AutoCAD (DXF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
DLG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
ETAK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
IGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
TIGER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
CHAPTER 4
Image Display
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Display Memory Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Colormap and Colorcells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Display Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8-bit PseudoColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
24-bit DirectColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
24-bit TrueColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
PC Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Displaying Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Thematic Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Using the IMAGINE Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Dithering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Viewing Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Viewing Multiple Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Linking Viewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Zoom and Roam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Geographic Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Enhancing Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Creating New Image Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
CHAPTER 5
Enhancement
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Display vs. File Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Spatial Modeling Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Merging Radar with VIS/IR Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
CHAPTER 6
Classification
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
The Classification Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Classification Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Classification Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Iterative Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Supervised vs. Unsupervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Classifying Enhanced Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Supervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Training Samples and Feature Space Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Selecting Training Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Evaluating Training Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Selecting Feature Space Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Unsupervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
ISODATA Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Signature Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Evaluating Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Alarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Contingency Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Separability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Signature Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Classification Decision Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Non-parametric Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Parametric Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Minimum Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Mahalanobis Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Maximum Likelihood/Bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Evaluating Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Accuracy Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Output File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
CHAPTER 7
Photogrammetric Concepts
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
CHAPTER 8
Rectification
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
When to Rectify . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
When to Georeference Only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Disadvantages of Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Rectification Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Ground Control Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
GCPs in ERDAS IMAGINE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Entering GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Orders of Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Nonlinear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Effects of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Minimum Number of GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
GCP Prediction and Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Residuals and RMS Error Per GCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Total RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Error Contribution by Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Tolerance of RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Evaluating RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Resampling Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
“Rectifying” to Lat/Lon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Bilinear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Cubic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Map to Map Coordinate Conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
CHAPTER 9
Terrain Analysis
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Slope Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Aspect Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
Shaded Relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Topographic Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Lambertian Reflectance Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Non-Lambertian Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
CHAPTER 10
Geographic Information Systems
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Data Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
CHAPTER 11
Cartography
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Types of Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Thematic Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Legends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Neatlines, Tick Marks, and Grid Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Labels and Descriptive Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Typography and Lettering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Properties of Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Projection Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
Geographical and Planar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
Available Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Choosing a Map Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Spheroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Map Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Map Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
CHAPTER 12
Hardcopy Output
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Printing Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Scale and Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Map Scaling Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Mechanics of Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
Halftone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
Continuous Tone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
Contrast and Color Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
RGB to CMY Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
APPENDIX A
Math Topics
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Bin Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Covariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Covariance Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Dimensionality of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Measurement Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Mean Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Feature Space Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
n-Dimensional Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Spectral Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Transformation Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Matrix Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Transposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
APPENDIX B
File Formats and Extensions
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
ERDAS IMAGINE File Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
ERDAS IMAGINE .img Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Sensor Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
Raster Layer Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Attribute Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Map Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Map Projection Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
Machine Independent Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
MIF Data Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
MIF Data Dictionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
ERDAS IMAGINE HFA File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Hierarchical File Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Pre-defined HFA File Object Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
Basic Objects of an HFA File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
HFA Object Directory for .img files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
APPENDIX C
Map Projections
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
USGS Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
Albers Conical Equal Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Azimuthal Equidistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
Conic Equidistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Equirectangular (Plate Carrée) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
General Vertical Near-side Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Geographic (Lat/Lon) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Gnomonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
Lambert Azimuthal Equal Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Lambert Conformal Conic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
Miller Cylindrical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Modified Transverse Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
Oblique Mercator (Hotine) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
Orthographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
Polar Stereographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
Polyconic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Sinusoidal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Space Oblique Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
State Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
Stereographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
Transverse Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
UTM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
Van der Grinten I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
External Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Bipolar Oblique Conic Conformal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
Cassini-Soldner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
Laborde Oblique Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
Modified Polyconic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
Modified Stereographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
Mollweide Equal Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
Rectified Skew Orthomorphic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Robinson Pseudocylindrical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Southern Orientated Gauss Conformal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
Winkel’s Tripel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
List of Figures
Figure 1: Pixels and Bands in a Raster Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Figure 2: Typical File Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Figure 3: Electromagnetic Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Figure 4: Sun Illumination Spectral Irradiance at the Earth’s Surface . . . . . . . . . . . . . . .7
Figure 5: Factors Affecting Radiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
Figure 6: Reflectance Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region . . . . . . . . . . . . . . 12
Figure 8: IFOV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Figure 9: Brightness Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Figure 10: Landsat TM - Band 2 (Four Types of Resolution). . . . . . . . . . . . . . . . . . . . . . 18
Figure 11: Band Interleaved by Line (BIL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Figure 12: Band Sequential (BSQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Figure 13: Image Files Store Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Figure 14: Example of a Thematic Raster Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 15: Examples of Continuous Raster Layers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Figure 16: Vector Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Figure 17: Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Figure 18: Workspace Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Figure 19: Attribute Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Figure 20: Symbolization Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Figure 21: Digitizing Tablet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Figure 22: Raster Format Converted to Vector Format . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Figure 23: Multispectral Imagery Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Figure 24: Landsat MSS vs. Landsat TM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Figure 25: SPOT Panchromatic vs. SPOT XS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Figure 26: SLAR Radar (Lillesand and Kiefer 1987) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Figure 27: Received Radar Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Figure 28: Radar Reflection from Different Sources and Distances
(Lillesand and Kiefer 1987) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Figure 29: ADRG Overview File Displayed in ERDAS IMAGINE Viewer . . . . . . . . . . . . . . 73
Figure 30: Subset Area with Overlapping ZDRs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Figure 31: Seamless Nine Image DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Figure 32: ADRI Overview File Displayed in ERDAS IMAGINE Viewer. . . . . . . . . . . . . . . 79
Figure 33: ARC/Second Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Figure 34: Example of One Seat with One Display and Two Screens . . . . . . . . . . . . . . . 97
Figure 35: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . . . 102
Figure 36: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . . . 103
Figure 37: Transforming Data File Values to Screen Values . . . . . . . . . . . . . . . . . . . . . 104
Figure 38: Contrast Stretch and Colorcell Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Figure 39: Stretching by Min/Max vs. Standard Deviation . . . . . . . . . . . . . . . . . . . . . . 108
Figure 40: Continuous Raster Layer Display Process . . . . . . . . . . . . . . . . . . . . . . . . . 109
Figure 41: Thematic Raster Layer Display Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Figure 42: Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Figure 43: Example of Dithering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Figure 44: Example of Color Patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Figure 45: Linked Viewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Figure 46: Histograms of Radiometrically Enhanced Data . . . . . . . . . . . . . . . . . . . . . . 132
Figure 47: Graph of a Lookup Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Figure 48: Enhancement with Lookup Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Figure 49: Nonlinear Radiometric Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Figure 50: Piecewise Linear Contrast Stretch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Figure 51: Contrast Stretch By Manipulating Lookup Tables
and the Effect on the Output Histogram . . . . . . . . . . . . . . . . . . . . . . . . . 137
Figure 52: Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Figure 53: Histogram Equalization Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Figure 54: Equalized Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Figure 55: Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Figure 56: Spatial Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Figure 57: Applying a Convolution Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Figure 58: Output Values for Convolution Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Figure 59: Local Luminance Intercept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Figure 60: Two Band Scatterplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Figure 61: First Principal Component. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Figure 62: Range of First Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Figure 63: Second Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Figure 64: Intensity, Hue, and Saturation Color Coordinate System . . . . . . . . . . . . . . 161
Figure 65: Hyperspectral Data Axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Figure 66: Rescale GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Figure 67: Spectrum Average GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Figure 68: Spectral Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Figure 69: Two-Dimensional Spatial Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Figure 70: Three-Dimensional Spatial Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Figure 71: Surface Profile. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Figure 72: One-Dimensional Fourier Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Figure 73: Example of Fourier Magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Figure 74: The Padding Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Figure 75: Comparison of Direct and Fourier Domain Processing . . . . . . . . . . . . . . . . 183
Figure 76: An Ideal Cross Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Figure 77: High-Pass Filtering Using the Ideal Window . . . . . . . . . . . . . . . . . . . . . . . . 186
Figure 78: Filtering Using the Bartlett Window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Figure 79: Filtering Using the Butterworth Window . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Figure 80: Homomorphic Filtering Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Figure 81: Effects of Mean and Median Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Figure 82: Regions of Local Region Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Figure 83: One-dimensional, Continuous Edge, and Line Models . . . . . . . . . . . . . . . . 200
Figure 84: A Very Noisy Edge Superimposed on an Ideal Edge . . . . . . . . . . . . . . . . . . 201
Figure 85: Edge and Line Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Figure 86: Adjust Brightness Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Figure 87: Range Lines vs. Lines of Constant Range . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Figure 88: Slant-to-Ground Range Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Figure 89: Example of a Feature Space Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Figure 90: Process for Defining a Feature Space Object . . . . . . . . . . . . . . . . . . . . . . . 225
Figure 91: ISODATA Arbitrary Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Figure 92: ISODATA First Pass. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Figure 93: ISODATA Second Pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Figure 94: RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Figure 95: Ellipse Evaluation of Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Figure 96: Classification Flow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Figure 97: Parallelepiped Classification Using Plus or Minus
Two Standard Deviations as Limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Figure 98: Parallelepiped Corners Compared to the Signature Ellipse . . . . . . . . . . . . . 248
Figure 99: Feature Space Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Figure 100: Minimum Spectral Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Figure 101: Histogram of a Distance Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Figure 102: Interactive Thresholding Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Figure 103: Pixel Coordinates and Image Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . 263
Figure 104: Sample Photogrammetric Work Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Figure 105: Exposure Stations along a Flight Path . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Figure 106: A Regular (Rectangular) Block of Aerial Photos . . . . . . . . . . . . . . . . . . . . 267
Figure 107: Triangulation Work Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Figure 108: Focal and Image Plane. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Figure 109: Image Coordinates, Fiducials, and Principal Point . . . . . . . . . . . . . . . . . . 273
Figure 110: Exterior Orientation of an Aerial Photo . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Figure 111: Control Points in Aerial Photographs
(block of 8 X 4 photos) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Figure 112: Ideal Point Distribution Over a Photograph for Aerial Triangulation . . . . . 278
Figure 113: Tie Points in a Block of Photos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Figure 114: Perspective Centers of SPOT Scan Lines . . . . . . . . . . . . . . . . . . . . . . . . . 280
Figure 115: Image Coordinates in a Satellite Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Figure 116: Interior Orientation of a SPOT Scene. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Figure 117: Inclination of a Satellite Stereo-Scene
(View from North to South) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Figure 118: Velocity Vector and Orientation Angle of a Single Scene . . . . . . . . . . . . . 285
Figure 119: Ideal Point Distribution Over a Satellite Scene for Triangulation . . . . . . . 286
Figure 120: Aerial Stereopair (60% Overlap). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Figure 121: SPOT Stereopair (80% Overlap) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Figure 122: Epipolar Stereopair Creation Work Flow . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Figure 123: Generate Elevation Models Work Flow (Method 1) . . . . . . . . . . . . . . . . . . . 291
Figure 124: Generate Elevation Models Work Flow (Method 2) . . . . . . . . . . . . . . . . . . . 291
Figure 125: Image Pyramid for Matching at Coarse to Full Resolution . . . . . . . . . . . . . 293
Figure 126: Orthorectification Work Flow (Method 1) . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Figure 127: Orthorectification Work Flow (Method 2) . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Figure 128: Orthographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Figure 129: Digital Orthophoto - Finding Gray Values . . . . . . . . . . . . . . . . . . . . . . . . . 300
Figure 130: Image Displacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Figure 131: Feature Collection Work Flow (Method 1) . . . . . . . . . . . . . . . . . . . . . . . . . 307
Figure 132: Feature Collection Work Flow (Method 2) . . . . . . . . . . . . . . . . . . . . . . . . . 307
Figure 133: Polynomial Curve vs. GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Figure 134: Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Figure 135: Nonlinear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Figure 136: Transformation Example—1st-Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Figure 137: Transformation Example—2nd GCP Changed . . . . . . . . . . . . . . . . . . . . . . 325
Figure 138: Transformation Example—2nd-Order. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Figure 139: Transformation Example—4th GCP Added. . . . . . . . . . . . . . . . . . . . . . . . . 326
Figure 140: Transformation Example—3rd-Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Figure 141: Transformation Example—Effect of a 3rd-Order Transformation . . . . . . . . 327
Figure 142: Residuals and RMS Error Per Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Figure 143: RMS Error Tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Figure 144: Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Figure 145: Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Figure 146: Bilinear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Figure 147: Linear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Figure 148: Cubic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Figure 149: Regularly Spaced Terrain Data Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Figure 150: 3 × 3 Window Calculates the Slope at Each Pixel. . . . . . . . . . . . . . . . . . . . 349
Figure 151: 3 × 3 Window Calculates the Aspect at Each Pixel. . . . . . . . . . . . . . . . . . . 352
Figure 152: Shaded Relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Figure 153: Data Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
Figure 154: Raster Attributes for lnlandc.img . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Figure 155: Vector Attributes CellArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Figure 156: Proximity Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Figure 157: Contiguity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Figure 158: Using a Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Figure 159: Sum Option of Neighborhood Analysis (Image Interpreter) . . . . . . . . . . . . 377
Figure 160: Overlay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Figure 161: Indexing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Figure 162: Graphical Model for Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 384
Figure 163: Graphical Model Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
Figure 164: Modeling Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Figure 165: Graphical and Script Models For Tasseled Cap Transformation . . . . . . . . 391
Figure 166: Layer Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Figure 167: Sample Scale Bars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Figure 168: Sample Legend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Figure 169: Sample Neatline, Tick Marks, and Grid Lines . . . . . . . . . . . . . . . . . . . . . . 410
Figure 170: Sample Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Figure 171: Sample Sans Serif and Serif Typefaces with Various Styles Applied. . . . . 413
Figure 172: Good Lettering vs. Bad Lettering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Figure 173: Projection Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Figure 174: Tangent and Secant Cones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Figure 175: Tangent and Secant Cylinders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Figure 176: Ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Figure 177: Layout for a Book Map and a Paneled Map . . . . . . . . . . . . . . . . . . . . . . . . 438
Figure 178: Sample Map Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Figure 179: Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Figure 180: Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Figure 181: Measurement Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Figure 182: Mean Vector. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Figure 183: Two Band Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Figure 184: Two Band Scatterplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Figure 185: Examples of Objects Stored in an .img File . . . . . . . . . . . . . . . . . . . . . . . . 468
Figure 186: Example of a 512 x 512 Layer with a Block Size of 64 x 64 Pixels . . . . . . . 471
Figure 187: HFA File Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Figure 188: HFA File Structure Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Figure 189: Albers Conical Equal Area Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Figure 190: Polar Aspect of the Azimuthal Equidistant Projection . . . . . . . . . . . . . . . . 524
Figure 191: Geographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Figure 192: Lambert Azimuthal Equal Area Projection . . . . . . . . . . . . . . . . . . . . . . . . . 537
Figure 193: Lambert Conformal Conic Projection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
Figure 194: Mercator Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Figure 195: Miller Cylindrical Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
Figure 196: Orthographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
Figure 197: Polar Stereographic Projection and its Geometric Construction . . . . . . . . 556
Figure 198: Polyconic Projection of North America . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
Figure 199: Zones of the State Plane Coordinate System . . . . . . . . . . . . . . . . . . . . . . . 564
Figure 200: Stereographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Figure 201: Zones of the Universal Transverse Mercator Grid in the United States . . . 581
Figure 202: Van der Grinten I Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
List of Tables
Table 1: Description of File Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Table 2: Raster Data Formats for Direct Import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Table 3: Vector Data Formats for Import and Export . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Table 4: Commonly Used Bands for Radar Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Table 5: Current Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Table 6: ARC System Chart Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Table 7: Legend Files for the ARC System Chart Types . . . . . . . . . . . . . . . . . . . . . . . . . 76
Table 8: Common Raster Data Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Table 9: File Types Created by Screendump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Table 10: The Most Common TIFF Format Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Table 11: Conversion of DXF Entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Table 12: Conversion of IGES Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Table 13: Colorcell Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Table 14: Commonly Used RGB Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Table 15: Overview of Zoom Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Table 16: Description of Modeling Functions Available for Enhancement . . . . . . . . . . 127
Table 17: Theoretical Coefficient of Variation Values. . . . . . . . . . . . . . . . . . . . . . . . . . 195
Table 18: Parameters for Sigma Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Table 19: Pre-Classification Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Table 20: Training Sample Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Table 21: Example of a Recoded Land Cover Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Table 22: Attribute Information for parks.img . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Table 23: General Editing Operations and Supporting Feature Types . . . . . . . . . . . . . 394
Table 24: Comparison of Building and Cleaning Coverages . . . . . . . . . . . . . . . . . . . . . 396
Table 25: Common Map Scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Table 26: Pixels per Inch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Table 27: Acres and Hectares per Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Table 28: Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Table 29: Projection Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Table 30: Spheroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Table 31: ERDAS IMAGINE File Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Table 32: Usage of Binning Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
Table 33: NAD27 State Plane coordinate system zone numbers, projection types,
and zone code numbers for the United States . . . . . . . . . . . . . . . . . . . . 565
Table 34: NAD83 State Plane coordinate system zone numbers, projection types,
and zone code numbers for the United States . . . . . . . . . . . . . . . . . . . . 570
Table 35: UTM zones, central meridians, and longitude ranges . . . . . . . . . . . . . . . . . . 582
Preface
Introduction
The purpose of the ERDAS Field Guide is to provide background information on why
one might use particular GIS and image processing functions and how the software is
manipulating the data, rather than what buttons to push to actually perform those
functions. This book is also aimed at a diverse audience: from those who are new to
geoprocessing to those savvy users who have been in this industry for years. For the
novice, the ERDAS Field Guide provides a brief history of the field, an extensive glossary
of terms, and notes about applications for the different processes described. For the
experienced user, the ERDAS Field Guide includes the formulas and algorithms that are
used in the code, so that he or she can see exactly how each operation works.
Although the ERDAS Field Guide is primarily a reference to basic image processing and
GIS concepts, it is geared toward ERDAS IMAGINE users and the functions within
ERDAS IMAGINE software, such as GIS analysis, image processing, cartography and
map projections, graphics display hardware, statistics, and remote sensing. However,
in some cases, processes and functions are described that may not be in the current
version of the software, but planned for a future release. There may also be functions
described that are not available on your system, due to the actual package that you are
using.
The enthusiasm with which the first three editions of the ERDAS Field Guide were
received has been extremely gratifying, both to the authors and to ERDAS as a whole.
First conceived as a helpful manual for ERDAS users, the ERDAS Field Guide is now
being used as a textbook, lab manual, and training guide throughout the world.
The ERDAS Field Guide will continue to expand and improve to keep pace with the
profession. Suggestions and ideas for future editions are always welcome, and should
be addressed to the Technical Writing division of Engineering at ERDAS, Inc., in
Atlanta, Georgia.
Conventions Used in this Book
The following paragraphs are used throughout the ERDAS Field Guide and other
ERDAS IMAGINE documentation.
These paragraphs direct you to the ERDAS IMAGINE software function that accomplishes the
described task.
These paragraphs lead you to other chapters in the ERDAS Field Guide or other manuals for
additional information.
CHAPTER 1
Raster Data
Introduction
The ERDAS IMAGINE system incorporates the functions of both image processing and
geographic information systems (GIS). These functions include importing, viewing,
altering, and analyzing raster and vector data sets. This chapter is an introduction to
raster data, and covers topics such as:
• remote sensing
• radiometric correction
• geocoded data
Image Data
In general terms, an image is a digital picture or representation of an object. Remotely
sensed image data are digital representations of the earth. Image data are stored in data
files, also called image files, on magnetic tapes, computer disks, or other media. The
data consist only of numbers. These representations form images when they are
displayed on a screen or are output to hardcopy.
Each number in an image file is a data file value. Data file values are sometimes
referred to as pixels. The term pixel is abbreviated from picture element. A pixel is the
smallest part of a picture (the area being scanned) with a single value. The data file
value is the measured brightness value of the pixel at a specific wavelength.
Raster image data are laid out in a grid similar to the squares on a checkerboard. Each
cell of the grid is represented by a pixel, also known as a grid cell.
In remotely sensed image data, each pixel represents an area of the earth at a specific
location. The data file value assigned to that pixel is the record of reflected radiation or
emitted heat from the earth’s surface at that location.
Data file values may also represent elevation, as in digital elevation models (DEMs).
NOTE: DEMs are not remotely sensed image data, but are currently being produced from stereo
points in radar imagery.
The terms “pixel” and “data file value” are not interchangeable in ERDAS IMAGINE. Pixel is
used as a broad term with many meanings, one of which is data file value. One pixel in a file may
consist of many data file values. When an image is displayed or printed, other types of values are
represented by a pixel.
See "CHAPTER 4: Image Display" for more information on how images are displayed.
Bands
Image data may include several bands of information. Each band is a set of data file
values for a specific portion of the electromagnetic spectrum of reflected light or
emitted heat (red, green, blue, near-infrared, infrared, thermal, etc.) or some other user-
defined information created by combining or enhancing the original bands, or creating
new bands from other sources.
ERDAS IMAGINE programs can handle an unlimited number of bands of image data
in a single file.
Figure 1: Pixels and Bands in a Raster Image (one pixel shown across 3 bands)
Numeral Types
The range and the type of numbers used in a raster layer determine how the layer is
displayed and processed. For example, a layer of elevation data with values ranging
from -51.257 to 553.401 would be treated differently from a layer using only two values
to show land and water.
The data file values in raster layers will generally fall into these categories:
• Nominal data file values are simply categorized and named. The actual value used
for each category has no inherent meaning—it is simply a class value. An example
of a nominal raster layer would be a thematic layer showing tree species.
• Ordinal data are similar to nominal data, except that the file values put the classes
in a rank or order. For example, a layer with classes numbered and named “1 -
Good,” “2 - Moderate,” and “3 - Poor” is an ordinal system.
• Interval data file values have an order, but the intervals between the values are also
meaningful. Interval data measure some characteristic, such as elevation or degrees
Fahrenheit, which does not necessarily have an absolute zero. (The difference
between two values in interval data is meaningful.)
• Ratio data measure a condition that has a natural zero, such as electromagnetic
radiation (as in most remotely sensed data), rainfall, or slope.
Nominal and ordinal data lend themselves to thematic, or categorical, layers. Interval
and ratio layers are more likely to measure a condition, causing the
file values to represent continuous gradations across the layer. Such layers are called
continuous.
Coordinate Systems
The location of a pixel in a file or on a displayed or printed image is expressed using a
coordinate system. In two-dimensional coordinate systems, locations are organized in
a grid of columns and rows. Each location on the grid is expressed as a pair of coordi-
nates known as X and Y. The X coordinate specifies the column of the grid, and the Y
coordinate specifies the row. Image data organized into such a grid are known as raster
data.
There are two types of coordinates in common use:
• file coordinates — indicate the location of a pixel within the image (data file)
• map coordinates — indicate the location of a pixel in a map
File Coordinates
File coordinates refer to the location of the pixels within the image (data) file. File
coordinates for the pixel in the upper left corner of the image always begin at 0,0.
Figure 2: Typical File Coordinates (columns are x, rows are y; the highlighted pixel at column 3, row 1 has file coordinates 3,1)
Map Coordinates
Map coordinates may be expressed in one of a number of map coordinate or projection
systems. The type of map coordinates used by a data file depends on the method used
to create the file (remote sensing, scanning an existing map, etc.). In ERDAS IMAGINE,
a data file can be converted from one map coordinate system to another.
For more information on map coordinates and projection systems, see "CHAPTER 11:
Cartography" or "APPENDIX C: Map Projections.". See "CHAPTER 8: Rectification" for
more information on changing the map coordinate system of a data file.
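For a north-up image, the relationship between file coordinates and map coordinates is a simple offset-and-scale calculation based on the map position of the upper left corner and the ground size of each pixel. The following sketch is illustrative only (it is not ERDAS IMAGINE code); the function name, corner coordinates, and cell size are hypothetical.

def file_to_map(col, row, ul_x, ul_y, cell_width, cell_height):
    """Convert file coordinates (column, row), with 0,0 at the upper left pixel,
    to the map coordinates of that pixel's center in a north-up image."""
    map_x = ul_x + (col + 0.5) * cell_width
    map_y = ul_y - (row + 0.5) * cell_height   # map y decreases as row number increases
    return map_x, map_y

# Hypothetical 30 m grid whose upper left corner is at (350000 E, 3885000 N).
print(file_to_map(3, 1, 350000.0, 3885000.0, 30.0, 30.0))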
Remote Sensing
Remote sensing is the acquisition of data about an object or scene by a sensor that is far
from the object (Colwell 1983). Aerial photography, satellite imagery, and radar are all
forms of remotely sensed data.
Usually, remotely sensed data refer to data of the earth collected from sensors on satel-
lites or aircraft. Most of the images used as input to the ERDAS IMAGINE system are
remotely sensed. However, the user is not limited to remotely sensed data.
This section is a brief introduction to remote sensing. There are many books available for more
detailed information, including Colwell 1983, Swain and Davis 1978, and Slater 1980 (see
“Bibliography”).
All types of land cover—rock types, water bodies, etc.—absorb a portion of the electro-
magnetic spectrum, giving a distinguishable “signature” of electromagnetic radiation.
Armed with the knowledge of which wavelengths are absorbed by certain features and
the intensity of the reflectance, the user can analyze a remotely sensed image and make
fairly accurate assumptions about the scene. Figure 3 illustrates the electromagnetic
spectrum (Suits 1983; Star and Estes 1990).
Figure 3: Electromagnetic Spectrum (wavelengths in micrometers, µm): Ultraviolet; Visible (0.4 - 0.7), subdivided into Blue (0.4 - 0.5), Green (0.5 - 0.6), and Red (0.6 - 0.7); Near-infrared (0.7 - 2.0); Middle-infrared (2.0 - 5.0); Far-infrared (8.0 - 15.0); reflected SWIR; thermal LWIR; Radar
SWIR and LWIR
The near-infrared and middle-infrared regions of the electromagnetic spectrum are
sometimes referred to as the short wave infrared region (SWIR). This is to distinguish
this area from the thermal or far infrared region, which is often referred to as the long
wave infrared region (LWIR). The SWIR is characterized by reflected radiation
whereas the LWIR is characterized by emitted radiation.
Absorption/Reflection Spectra
When radiation interacts with matter, some wavelengths are absorbed and others are
reflected. To enhance features in image data, it is necessary to understand how
vegetation, soils, water, and other land covers reflect and absorb radiation. The study
of the absorption and reflection of EMR waves is called spectroscopy.
Spectroscopy
Most commercial sensors, with the exception of imaging radar sensors, are passive
solar imaging sensors. Passive solar imaging sensors can only receive radiation waves;
they cannot transmit radiation. (Imaging radar sensors are active sensors which emit a
burst of microwave radiation and receive the backscattered radiation.)
The use of passive solar imaging sensors to characterize or identify a material of interest
is based on the principles of spectroscopy. Therefore, to fully utilize a visible/infrared
(VIS/IR) multispectral data set and properly apply enhancement algorithms, it is
necessary to understand these basic principles. Spectroscopy reveals the:
• absorption spectra — the EMR wavelengths that are absorbed by specific materials
of interest
• reflection spectra — the EMR wavelengths that are reflected by specific materials
of interest
Absorption Spectra
Absorption is based on the molecular bonds in the (surface) material. Which
wavelengths are absorbed depends upon the chemical composition and crystalline
structure of the material. For pure compounds, these absorption bands are so specific
that the SWIR region is often called “an infrared fingerprint.”
Atmospheric Absorption
In remote sensing, the sun is the radiation source for passive sensors. However, the sun
does not emit the same amount of radiation at all wavelengths. Figure 4 shows the solar
irradiation curve—which is far from linear.
Figure 4: Solar irradiation curve (spectral irradiance plotted against wavelength from 0.0 to 3.0 µm, spanning the UV, visible, and infrared regions)
Solar radiation must travel through the earth’s atmosphere before it reaches the earth’s
surface. As it travels through the atmosphere, radiation is affected by four phenomena
(Elachi 1987):
• scattering — the amount of radiation scattered by the atmosphere away from the
field of view
• scattering source — divergent solar irradiation scattered into the field of view
• absorption — the amount of radiation absorbed by the atmosphere
• emission source — radiation emitted by the atmosphere itself into the field of view
Scattering source and emission source may account for only 5% of the variance. These
factors are minor, but they must be considered for accurate calculation. After inter-
action with the target material, the reflected radiation must travel back through the
atmosphere and be subjected to these phenomena a second time to arrive at the satellite.
The mathematical models that attempt to quantify the total atmospheric effect on the
solar illumination are called radiative transfer equations. Some of the most commonly
used are Lowtran (Kneizys 1988) and Modtran (Berk 1989).
Reflectance Spectra
After rigorously defining the incident radiation (solar irradiation at target), it is possible
to study the interaction of the radiation with the target material. When an electromag-
netic wave (solar illumination in this case) strikes a target surface, three interactions are
possible (Elachi 1987):
• reflection
• transmission
• scattering
Remotely sensed data are made up of reflectance values. The resulting reflectance
values translate into discrete digital numbers (or values) recorded by the sensing
device. These gray scale values will fit within a certain bit range (such as 0-255, which
is 8-bit data) depending on the characteristics of the sensor.
Each satellite sensor detector is designed to record a specific portion of the electromag-
netic spectrum. For example, Landsat TM band 1 records the 0.45 to 0.52 µm portion of
the spectrum and is designed for water body penetration, making it useful for coastal
water mapping. It is also useful for soil/vegetation discriminations, forest type
mapping, and cultural features identification (Lillesand and Kiefer 1987).
The characteristics of each sensor provide the first level of constraints on how to
approach the task of enhancing specific features, such as vegetation or urban areas.
Therefore, when choosing an enhancement technique, one should pay close attention to
the characteristics of the land cover types within the constraints imposed by the
individual sensors.
The use of VIS/IR imagery for target discrimination, whether the target is mineral,
vegetation, man-made, or even the atmosphere itself, is based on the reflectance
spectrum of the material of interest (see Figure 6). Every material has a characteristic
spectrum based on the chemical composition of the material. When sunlight (the illumi-
nation source for VIS/IR imagery) strikes a target, certain wavelengths are absorbed by
the chemical bonds; the rest are reflected back to the sensor. It is, in fact, the
wavelengths that are not returned to the sensor that provide information about the
imaged area.
Specific wavelengths are also absorbed by gases in the atmosphere (H2O vapor, CO2, O2,
etc.). If the atmosphere absorbs a large percentage of the radiation, it becomes difficult
or impossible to use those particular wavelengths to study the earth. For the present
Landsat and SPOT sensors, only the water vapor bands were considered strong enough
to exclude the use of their spectral absorption region. Figure 6 shows how Landsat TM
bands 5 and 7 were carefully placed to avoid these regions. Absorption by other
atmospheric gases was not extensive enough to eliminate the use of the spectral region
for present day broad band sensors.
Figure 6: Reflectance spectra of kaolinite, green vegetation, and silt loam (reflectance % vs. wavelength, 0.4 - 2.4 µm), with Landsat TM bands 1-5 and 7 and the atmospheric absorption bands marked. NOTE: This chart is for comparison purposes only; the spectra are offset for clarity and are not meant to show actual values. Modified from Fraser 1986, Crist 1986, Sabins 1987.
An inspection of the spectra reveals the theoretical basis of some of the indices in the
ERDAS IMAGINE Image Interpreter. Consider the vegetation index TM4/TM3. It is
readily apparent that for vegetation this value could be very large; for soils, much
smaller; and for clay minerals, near zero. Conversely, when the clay ratio TM5/TM7 is
considered, the opposite applies.
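Band ratios such as TM4/TM3 are simple pixel-by-pixel arithmetic once the bands are held as arrays. The sketch below shows that arithmetic in NumPy, assuming the two bands have already been read into arrays; it is not the Image Interpreter implementation, and the array names and test values are placeholders.

import numpy as np

def band_ratio(numerator, denominator):
    """Ratio two bands pixel by pixel, returning 0 where the denominator is 0."""
    num = numerator.astype(np.float32)
    den = denominator.astype(np.float32)
    out = np.zeros_like(num)
    np.divide(num, den, out=out, where=(den != 0))
    return out

# Hypothetical 8-bit Landsat TM bands: TM3 (red) and TM4 (near-infrared).
tm3 = np.random.randint(1, 256, (100, 100)).astype(np.uint8)
tm4 = np.random.randint(1, 256, (100, 100)).astype(np.uint8)

vegetation_index = band_ratio(tm4, tm3)   # large over vegetation, smaller over soil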
Hyperspectral Data
As remote sensing moves toward the use of more and narrower bands (for example,
AVIRIS with 224 bands, each only 10 nm wide), absorption by specific atmospheric gases
must be considered. These multiband sensors are called hyperspectral sensors. As
more and more of the incident radiation is absorbed by the atmosphere, the digital
number (DN) values of that band get lower, eventually becoming useless—unless one
is studying the atmosphere. Someone wanting to measure the atmospheric content of a
specific gas could utilize the bands of specific absorption.
Figure 6 shows the spectral bandwidths of the channels for the Landsat sensors plotted
above the absorption spectra of some common natural materials (kaolin clay, silty loam
soil and green vegetation). Note that while the spectra are continuous, the Landsat
channels are segmented or discontinuous. We can still use the spectra in interpreting
the Landsat data. For example, an NDVI ratio for the three would be very different and,
hence, could be used to discriminate between the three materials. Similarly, the ratio
TM5/TM7 is commonly used to measure the concentration of clay minerals. Evaluation
of the spectra shows why.
Figure 7 shows detail of the absorption spectra of three clay minerals. Because of the
wide bandpass (2080-2350 nm) of TM band 7, it is not possible to discern between these
three minerals with the Landsat sensor. As mentioned, the AVIRIS hyperspectral sensor
has a large number of approximately 10 nm wide bands. With the proper selection of
band ratios, it becomes possible to identify and discriminate between these three clay
minerals. For example, a color composite image prepared from RGB = 2160nm/2190nm,
2220nm/2250nm, 2350nm/2488nm could produce a color-coded clay mineral image
map, as sketched below.
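Such a ratio color composite can be assembled by computing the three ratios and stretching each one into a displayable 0 - 255 range before stacking them as red, green, and blue. The sketch below is a conceptual illustration only; the band arrays, the wavelengths used as dictionary keys, and the linear stretch are hypothetical stand-ins, not a reader for any particular hyperspectral product.

import numpy as np

def stretch_to_byte(arr):
    """Linearly stretch an array into the 0-255 display range."""
    arr = arr.astype(np.float32)
    lo, hi = float(arr.min()), float(arr.max())
    if hi == lo:
        return np.zeros(arr.shape, dtype=np.uint8)
    return ((arr - lo) / (hi - lo) * 255.0).astype(np.uint8)

def ratio(a, b):
    """Pixel-by-pixel band ratio, with zeros wherever the denominator is zero."""
    a = a.astype(np.float32)
    b = b.astype(np.float32)
    out = np.zeros_like(a)
    np.divide(a, b, out=out, where=(b != 0))
    return out

# Hypothetical narrow hyperspectral bands keyed by center wavelength in nm.
bands = {w: np.random.rand(64, 64).astype(np.float32)
         for w in (2160, 2190, 2220, 2250, 2350, 2488)}

# RGB = 2160/2190, 2220/2250, 2350/2488 as described in the text.
composite = np.dstack([
    stretch_to_byte(ratio(bands[2160], bands[2190])),   # red
    stretch_to_byte(ratio(bands[2220], bands[2250])),   # green
    stretch_to_byte(ratio(bands[2350], bands[2488])),   # blue
])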
The commercial airborne multispectral scanners are used in a similar fashion. The
Airborne Imaging Spectrometer from the Geophysical & Environmental Research
Corp. (GER) has 79 bands in the UV, visible, SWIR, and thermal-infrared regions. The
Airborne Multispectral Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the
visible, SWIR, and thermal-infrared regions. To properly utilize these hyperspectral
sensors, the user must understand the phenomenon involved and have some idea of the
target materials being sought.
Figure 7: Absorption spectra of the clay minerals kaolinite, montmorillonite, and illite (reflectance % vs. wavelength), with the Landsat TM band 7 bandpass (2080 - 2350 nm) marked
The characteristics of Landsat, AVIRIS, and other data types are discussed in "CHAPTER 3:
Raster and Vector Data Sources". See page 166 of "CHAPTER 5: Enhancement" for more
information on the NDVI ratio.
It is the active sensors, termed imaging radar, that are introducing a new generation of
satellite imagery to remote sensing. To produce an image, these satellites emit a
directed beam of microwave energy at the target and then collect the backscattered
(reflected) radiation from the target scene. Because they must emit a powerful burst of
energy, these satellites require large solar collectors and storage batteries. For this
reason, they cannot operate continuously; some satellites are limited to 10 minutes of
operation per hour.
The microwave energy emitted by an active radar sensor is coherent and defined by a
narrow bandwidth. The following table summarizes the bandwidths used in remote
sensing.
A key element of a radar sensor is the antenna. For a given position in space, the
resolution of the resultant image is a function of the antenna size. This is termed a real-
aperture radar (RAR). At some point, it becomes impossible to make a large enough
antenna to create the desired spatial resolution. To get around this problem, processing
techniques have been developed which combine the signals received by the sensor as it
travels over the target. Thus the antenna is perceived to be as long as the sensor path
during backscatter reception. This is termed a synthetic aperture and the sensor a
synthetic aperture radar (SAR).
The received signal is termed a phase history or echo hologram. It contains a time
history of the radar signal over all the targets in the scene and is itself a low resolution
RAR image. In order to produce a high resolution image, this phase history is processed
through a hardware/software system called a SAR processor. The SAR processor
software requires operator input parameters, such as information about the sensor
flight path and the radar sensor's characteristics, to process the raw signal data into an
image. These input parameters depend on the desired result or intended application of
the output imagery.
One of the most valuable advantages of imaging radar is that it creates images from its
own energy source and therefore is not dependent on sunlight. Thus one can record
uniform imagery any time of the day or night. In addition, the microwave frequencies
at which imaging radars operate are largely unaffected by the atmosphere. This allows
image collection through cloud cover or rain storms. However, the backscattered signal
can be affected. Radar images collected during heavy rainfall will often be seriously
attenuated, which decreases the signal-to-noise ratio (SNR). In addition, the
atmosphere does cause perturbations in the signal phase, which decreases resolution of
output products, such as the SAR image or generated DEMs.
Resolution
Broad, general definitions of resolution are inadequate when describing remotely
sensed data. Four distinct types of resolution must be considered:
• spectral — the specific wavelength intervals that a sensor can record
• spatial — the area on the ground represented by each pixel
• radiometric — the number of possible data file values in each band (indicated by the
number of bits into which the recorded energy is divided)
• temporal — how often a sensor obtains imagery of a particular area
These four domains contain separate information that can be extracted from the raw
data.
Spectral
Spectral resolution refers to the specific wavelength intervals in the electromagnetic
spectrum that a sensor can record (Simonett 1983). For example, band 1 of the Landsat
Thematic Mapper sensor records energy between 0.45 and 0.52 µm in the visible part of
the spectrum.
NOTE: The spectral resolution does not indicate how many levels into which the signal is broken
down.
Spatial
Spatial resolution is a measure of the smallest object that can be resolved by the sensor,
or the area on the ground represented by each pixel (Simonett 1983). The finer the
resolution, the lower the number. For instance, a spatial resolution of 79 meters is
coarser than a spatial resolution of 10 meters.
Scale
The terms large-scale imagery and small-scale imagery often refer to spatial resolution.
Scale is the ratio of distance on a map as related to the true distance on the ground (Star
and Estes 1990).
Large scale in remote sensing refers to imagery in which each pixel represents a small
area on the ground, such as SPOT data, with a spatial resolution of 10 m or 20 m. Small
scale refers to imagery in which each pixel represents a large area on the ground, such
as AVHRR data, with a spatial resolution of 1.1 km.
This terminology is derived from the fraction used to represent the scale of the map,
such as 1:50,000. Small-scale imagery is represented by a small fraction (one over a very
large number). Large-scale imagery is represented by a larger fraction (one over a
smaller number). Generally, anything smaller than 1:250,000 is considered small-scale
imagery.
NOTE: Scale and spatial resolution are not always the same thing. An image always has the
same spatial resolution but it can be presented at different scales (Simonett 1983).
IFOV
Spatial resolution is also described as the instantaneous field of view (IFOV) of the
sensor, although the IFOV is not always the same as the area represented by each pixel.
The IFOV is a measure of the area viewed by a single detector in a given instant in time
(Star and Estes 1990). For example, Landsat MSS data have an IFOV of 79 × 79 meters,
but there is an overlap of 11.5 meters in each pass of the scanner, so the actual area
represented by each pixel is 56.5 × 79 meters (usually rounded to 57 × 79 meters).
Even though the IFOV is not the same as the spatial resolution, it is important to know
the number of pixels into which the total field of view for the image is broken. Objects
smaller than the stated pixel size may still be detectable in the image if they contrast
with the background, such as roads, drainage patterns, etc.
On the other hand, objects the same size as the stated pixel size (or larger) may not be
detectable if there are brighter or more dominant objects nearby. In Figure 8, a house
sits in the middle of four pixels. If the house has a reflectance similar to its
surroundings, the data file values for each of these pixels will reflect the area around
the house, not the house itself, since the house does not dominate any one of the four
pixels. However, if the house has a significantly different reflectance than its
surroundings, it may still be detectable.
Figure 8: IFOV (a house centered at the corner shared by four adjacent 20 m × 20 m pixels)
Radiometric
Radiometric resolution refers to the dynamic range, or number of possible data file
values in each band. This is referred to by the number of bits into which the recorded
energy is divided.
For instance, in 8-bit data, the data file values range from 0 to 255 for each pixel, but in
7-bit data, the data file values for each pixel range from 0 to 127.
In Figure 9, 8-bit and 7-bit data are illustrated. The sensor measures the EMR in its
range. The total intensity of the energy from 0 to the maximum amount the sensor
measures is broken down into 256 brightness values for 8-bit data and 128 brightness
values for 7-bit data.
Figure 9: Brightness values (the range from 0 to the maximum intensity measured by the sensor is divided into 256 brightness values for 8-bit data and 128 for 7-bit data)
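Because the number of brightness values is two raised to the number of bits, converting between radiometric resolutions amounts to rescaling the value range. The snippet below is a minimal illustration of that arithmetic; the variable names are arbitrary.

import numpy as np

def brightness_levels(bits):
    """Number of possible data file values for a given radiometric resolution."""
    return 2 ** bits

print(brightness_levels(8), brightness_levels(7))    # 256 and 128

# Rescale hypothetical 8-bit values (0 - 255) into the 7-bit range (0 - 127).
data_8bit = np.arange(256, dtype=np.uint8)
data_7bit = np.round(data_8bit.astype(np.float32) * 127.0 / 255.0).astype(np.uint8)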
Temporal
Temporal resolution refers to how often a sensor obtains imagery of a particular area.
For example, the Landsat satellite can view the same area of the globe once every 16
days. SPOT, on the other hand, can revisit the same area every three days.
Figure 10: Landsat TM - Band 2 (Four Types of Resolution). Spatial resolution: 79 m (1 pixel = 79 m × 79 m); radiometric resolution: 8-bit (0 - 255); spectral resolution: 0.52 - 0.60 µm; temporal resolution: the same area is viewed every 16 days. Source: EOSAT
Data Correction
There are several types of errors that can be manifested in remotely sensed data. Among
these are line dropout and striping. These errors can be corrected to an extent in GIS by
radiometric and geometric correction functions.
NOTE: Radiometric errors are usually already corrected in data from EOSAT or SPOT.
Line Dropout
Line dropout occurs when a detector either completely fails to function or becomes
temporarily saturated during a scan (like the effect of a camera flash on a human retina).
The result is a line or partial line of data with higher data file values, creating a
horizontal streak until the detector(s) recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line of estimated data
file values, based on the lines above and below it.
You can correct line dropout using the 5 x 5 Median Filter from the Radar Speckle Suppression
function. The Convolution and Focal Analysis functions in Image Interpreter will also correct
for line dropout.
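A minimal sketch of the estimation described above (replacing the dropped line with the average of the lines immediately above and below it) might look like the following. This is a generic NumPy illustration, not the Radar Speckle Suppression or Image Interpreter code, and the function name and test data are hypothetical.

import numpy as np

def repair_line_dropout(band, bad_row):
    """Replace one dropped scan line with the average of the lines above and below."""
    fixed = band.astype(np.float32).copy()
    above = max(bad_row - 1, 0)
    below = min(bad_row + 1, band.shape[0] - 1)
    fixed[bad_row, :] = (fixed[above, :] + fixed[below, :]) / 2.0
    return fixed.astype(band.dtype)

# Hypothetical 8-bit band with a saturated line at row 10.
band = np.random.randint(0, 256, (50, 50)).astype(np.uint8)
band[10, :] = 255
repaired = repair_line_dropout(band, 10)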
Striping
Striping or banding will occur if a detector goes out of adjustment—that is, it provides
readings consistently greater than or less than the other detectors for the same band
over the same ground cover.
Use Image Interpreter or Spatial Modeler for implementing algorithms to eliminate striping.
The Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the
data.
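One common destriping approach is to adjust the lines belonging to each detector so that their mean and standard deviation match those of the whole band. The sketch below assumes that every k-th scan line comes from the same detector; it is only an illustration of the idea, not the algorithm used by Image Interpreter or Spatial Modeler, and the detector count and offset are invented test values.

import numpy as np

def destripe(band, n_detectors):
    """Match each detector's line statistics to the whole-band mean and standard deviation."""
    out = band.astype(np.float32).copy()
    target_mean, target_std = out.mean(), out.std()
    for d in range(n_detectors):
        rows = out[d::n_detectors, :]          # every n-th scan line: one detector
        std = rows.std()
        if std == 0:
            std = 1.0
        out[d::n_detectors, :] = (rows - rows.mean()) / std * target_std + target_mean
    return out

# Hypothetical band from a 16-detector scanner, with one detector reading high.
band = np.random.randint(0, 200, (160, 200)).astype(np.float32)
band[3::16, :] += 20.0
corrected = destripe(band, 16)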
Data Storage
Image data can be stored on a variety of media—tapes, CD-ROMs, or floppy diskettes,
for example—but how the data are stored (e.g., structure) is more important than on
what they are stored.
All computer data are in binary format. The basic unit of binary data is a bit. A bit can
have two possible values—0 and 1, or “off” and “on” respectively. A set of bits,
however, can have many more values, depending on the number of bits used. The
number of values that can be expressed by a set of bits is 2 to the power of the number
of bits used.
A byte is 8 bits of data. Generally, file size and disk space are referred to by number of
bytes. For example, a PC may have 640 kilobytes (1,024 bytes = 1 kilobyte) of RAM
(random access memory), or a file may need 55,698 bytes of disk space. A megabyte
(Mb) is about one million bytes. A gigabyte (Gb) is about one billion bytes.
Storage Formats
Image data can be arranged in several ways on a tape or other media. The most common
storage formats are:
• BIL (band interleaved by line)
• BSQ (band sequential)
• BIP (band interleaved by pixel)
For a single band of data, all formats (BIL, BIP, and BSQ) are identical, as long as the
data are not blocked.
BIL
In BIL (band interleaved by line) format, each record in the file contains a scan line (row)
of data for one band (Slater 1980). All bands of data for a given line are stored consecu-
tively within the file as shown in Figure 11.
Figure 11: Band Interleaved by Line (BIL). Header; Image: Line 1 Band 1, Line 1 Band 2, ..., Line 1 Band x; Line 2 Band 1, Line 2 Band 2, ..., Line 2 Band x; ...; Line n Band 1, Line n Band 2, ..., Line n Band x; Trailer
NOTE: Although a header and trailer file are shown in this diagram, not all BIL data contain
header and trailer files.
BSQ
In BSQ (band sequential) format, each entire band is stored consecutively in the same
file (Slater 1980). This format is advantageous, in that:
• one band can be read and viewed easily
• multiple bands can be loaded in any order
Figure 12: Band Sequential (BSQ). Header file(s); Image file for Band 1: Line 1, Line 2, Line 3, ..., Line n, end-of-file; Image file for Band 2: Line 1, Line 2, Line 3, ..., Line n, end-of-file; ...; Image file for Band x: Line 1, Line 2, Line 3, ..., Line n; Trailer file(s)
Landsat Thematic Mapper (TM) data are stored in a type of BSQ format known as fast
format. Fast format data have the following characteristics:
• Files are not split between tapes. If a band starts on the first tape, it will end on the
first tape.
• Regular products (not geocoded) are normally unblocked. Geocoded products are
normally blocked (EOSAT).
ERDAS IMAGINE will import all of the header and image file information.
BIP
In BIP (band interleaved by pixel) format, the values for each band are ordered within
a given pixel. The pixels are arranged sequentially on the tape (Slater 1980). The
sequence for BIP format is:
Pixel 1, Band 1
Pixel 1, Band 2
Pixel 1, Band 3
.
.
.
Pixel 2, Band 1
Pixel 2, Band 2
Pixel 2, Band 3
.
.
.
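The practical difference between BIL, BSQ, and BIP is simply where a given data file value falls in the file. Assuming an unblocked file with no header or trailer, the byte offset of the value for a particular line, column, and band can be computed as sketched below; this is a conceptual illustration, not a reader for any specific product, and the example image dimensions are hypothetical.

def value_offset(fmt, line, col, band, n_lines, n_cols, n_bands, bytes_per_value=1):
    """Byte offset of one data file value in an unblocked, headerless file."""
    if fmt == "BIL":     # all bands of line 0, then all bands of line 1, ...
        index = (line * n_bands + band) * n_cols + col
    elif fmt == "BSQ":   # all of band 0, then all of band 1, ...
        index = (band * n_lines + line) * n_cols + col
    elif fmt == "BIP":   # all bands of pixel 0, then all bands of pixel 1, ...
        index = (line * n_cols + col) * n_bands + band
    else:
        raise ValueError("unknown format")
    return index * bytes_per_value

# The same value sits at three different positions in a 3-band, 100 x 100 image:
for fmt in ("BIL", "BSQ", "BIP"):
    print(fmt, value_offset(fmt, line=10, col=25, band=1,
                            n_lines=100, n_cols=100, n_bands=3))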
Storage Media
Today, most raster data are available on a variety of storage media to meet the needs of
users, depending on the system hardware and devices available. When ordering data,
it is sometimes possible to select the type of media preferred. The most common forms
of storage media are discussed in the following section:
• 9-track tape
• 4 mm tape
• 8 mm tape
• CD-ROM/optical disk
Other types of storage media are:
• videotape
Tape
The data on a tape can be divided into logical records and physical records. A record is
the basic storage unit on a tape.
• A logical record is a series of bytes that form a unit. For example, all the data for
one line of an image may form a logical record.
• A physical record is a consecutive series of bytes on the tape, followed by a gap, or
blank space, on the tape.
Blocked Data
For reasons of efficiency, data can be blocked to fit more on a tape. Blocked data are
sequenced so that there are more logical records in each physical record. The number
of logical records in each physical record is the blocking factor. For instance, a record
may contain 28,000 bytes, but only 4000 columns due to a blocking factor of 7.
Tape Contents
Tapes are available in a variety of sizes and storage capacities. To obtain information
about the data on a particular tape, read the tape label or box, or read the header file.
Often, there is limited information on the outside of the tape. Therefore, it may be
necessary to read the header files on each tape for specific information, such as:
• number of bands
• blocking factor
4 mm Tapes
The 4 mm tape is a relative newcomer in the world of GIS. This tape is a mere
2” × 1.75” in size, but it can hold up to 2 Gb of data. This petite cassette offers an
obvious shipping and storage advantage because of its size.
8 mm Tapes
The 8 mm tape offers the advantage of storing vast amounts of data. Tapes are available
in 5 and 10 Gb storage capacities (although some tape drives cannot handle the 10 Gb
size). The 8 mm tape is a 2.5” × 4” cassette, which makes it easy to ship and handle.
9-Track Tapes
A 9-track tape is an older format that was the standard for two decades. It is a large
circular tape approximately 10” in diameter. It requires a 9-track tape drive as a
peripheral device for retrieving data. The size and storage capability make 9-track less
convenient than 8 mm or 1/4” tapes. However, 9-track tapes are still widely used.
A single 9-track tape may be referred to as a volume. The complete set of tapes that
contains one image is referred to as a volume set.
The storage format of a 9-track tape in binary format is described by the number of bits
per inch, bpi, on the tape. The tapes most commonly used have either 1600 or 6250 bpi.
The number of bits per inch on a tape is also referred to as the tape density. Depending
on the length of the tape, 9-tracks can store between 120-150 Mb of data.
CD-ROM
Data such as ADRG and DLG are most often available on CD-ROM, although many
types of data can be requested in CD-ROM format. A CD-ROM is an optical read-only
storage device which can be read with a CD player. CD-ROMs offer the advantage of
storing large amounts of data in a small, compact device. Up to 644 Mb can be stored
on a CD-ROM. Also, since this device is read-only, it protects the data from accidentally
being overwritten, erased, or changed from its original integrity. This is the most stable
of the current media storage types and data stored on CD-ROM are expected to last for
decades without degradation.
Calculating Disk Space
To calculate the amount of disk space a raster file will require on an ERDAS IMAGINE
system, use the following formula:
(x × y × b × n) × 1.4 = output file size in bytes
where:
y = rows
x = columns
b = number of bytes per pixel
n = number of bands
1.4 adds 30% to the file size for pyramid layers and 10% for miscellaneous adjust-
ments, such as histograms, lookup tables, etc.
For example, to load a 3 band, 16-bit file with 500 rows and 500 columns, about
2,100,000 bytes of disk space would be needed (500 × 500 × 2 × 3 × 1.4 = 2,100,000).
For 4-bit data, use b = 0.5 bytes per pixel.
NOTE: On the PC, disk space is shown in bytes. On the workstation, disk space is shown as
kilobytes (1,024 bytes).
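When planning disk usage for many files, the same calculation is easy to script. The function below simply applies the formula given above, with the 1.4 overhead factor from the text; the function name is arbitrary.

def imagine_file_size(rows, cols, bytes_per_pixel, bands, overhead=1.4):
    """Estimate .img disk space: rows x columns x bytes per pixel x bands,
    plus roughly 30% for pyramid layers and 10% for statistics, lookup tables, etc."""
    return rows * cols * bytes_per_pixel * bands * overhead

# 3-band, 16-bit (2 bytes per pixel), 500 x 500 image:
print(imagine_file_size(500, 500, 2, 3))      # 2,100,000 bytes
# For 4-bit data, use 0.5 bytes per pixel:
print(imagine_file_size(500, 500, 0.5, 1))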
ERDAS IMAGINE Format (.img)
In ERDAS IMAGINE, file name extensions identify the file type. When data are
imported into IMAGINE, they are converted to the ERDAS IMAGINE file format and
stored in .img files. ERDAS IMAGINE image files (.img) can contain two types of raster
layers:
• thematic
• continuous
An image file can store a combination of thematic and continuous layers or just one
type.
Raster Layer(s)
Examples of thematic raster layers include:
• soils
• land use
• land cover
• roads
• hydrology
Figure 14: Example of a Thematic Raster Layer (soils)
See "CHAPTER 4: Image Display" for information on displaying thematic raster layers.
Examples of continuous raster layers include:
• Landsat
• SPOT
• slope
• temperature
NOTE: Continuous raster layers can be displayed as either a gray scale raster layer or a true
color raster layer.
Figure 15: Examples of Continuous Raster Layers (Landsat TM, DEM)
Tiled Data
Data in the .img format are tiled data. Tiled data are stored in tiles that can be set to any
size.
Along with the raster layers, image files (.img) also contain additional information, such as:
• statistics
• lookup tables
• map coordinates
• map projection
This additional information can be viewed in the Image Information function from the ERDAS
IMAGINE icon panel.
Statistics
In ERDAS IMAGINE, the file statistics are generated from the data file values in the
layer and incorporated into the .img file. This statistical information is used to create
many program defaults, and helps the user make processing decisions.
Pyramid Layers
Sometimes a large image will take longer than normal to display in the ERDAS
IMAGINE Viewer. The pyramid layer option enables the user to display large images
faster. Pyramid layers are image layers which are successively reduced by the power of
2 and resampled.
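As a rough sketch of the idea of successive reduction by powers of 2, the following lists the dimensions of each pyramid level for a given image. The function, the minimum size of 64 pixels, and the example dimensions are illustrative assumptions, not the ERDAS IMAGINE implementation:

    def pyramid_levels(rows, cols, min_size=64):
        """List the (rows, cols) of each pyramid level, halving the image
        until both dimensions are at or below min_size."""
        levels = [(rows, cols)]
        while rows > min_size or cols > min_size:
            rows, cols = max(1, rows // 2), max(1, cols // 2)
            levels.append((rows, cols))
        return levels

    print(pyramid_levels(8000, 6000))
    # [(8000, 6000), (4000, 3000), (2000, 1500), (1000, 750), (500, 375),
    #  (250, 187), (125, 93), (62, 46)]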
The Pyramid Layer option is available in the Image Information function from the ERDAS
IMAGINE icon panel and from the Import function.
See "CHAPTER 4: Image Display" for more information on pyramid layers. See "APPENDIX
B: File Formats and Extensions" for detailed information on ERDAS IMAGINE file formats.
Image File Organization

Data is easy to locate if the data files are well organized. Well organized files will also make data more accessible to anyone who uses the system. Using consistent naming conventions and the ERDAS IMAGINE Image Catalog will help keep image files well organized and accessible.
Consistent Naming Convention

Many processes create an output file, and every time a file is created, it will be necessary to assign a file name. The name which is used can either cause confusion about the
process that has taken place, or it can clarify and give direction. For example, if the
name of the output file is “junk,” it is difficult to determine the contents of the file. On
the other hand, if a standard nomenclature is developed in which the file name refers
to a process or contents of the file, it is possible to determine the progress of a project
and contents of a file by examining the directory.
Develop a naming convention that is based on the contents of the file. This will help
everyone involved know what the file contains. For example, in a project to create a
map composition for Lake Lanier, a directory for the files may look similar to the one
below:
lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img
From this listing, one can make some educated guesses about the contents of each file
based on naming conventions used. For example, “lanierTM.img” is probably a
Landsat TM scene of Lake Lanier. “lanier.map” is probably a map composition that has
map frames with lanierTM.img and lanierSPOT.img data in them. “lanierUTM.img”
was probably created when lanierTM.img was rectified to a UTM map projection.
Keeping Track of Image Files (.img)

Using a database to store information about images enables the user to track image files without having to know the name or location of the file. The database can be queried for specific parameters (e.g., size, type, map projection) and the database will return a list of image files that match the search criteria. This file information helps to quickly determine which image(s) to use, where they are located, and what ancillary data are available.
An image database is especially helpful when there are many image files and even
many on-going projects. For example, one could use the database to search for all of the
image files of Georgia that have a UTM map projection.
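The following toy sketch illustrates this kind of attribute-based query in Python. The record fields, the flood88.img entry, and the query function are invented for illustration; they are not the Image Catalog itself:

    # Each record holds a few ancillary fields, in the spirit of a catalog entry.
    catalog = [
        {"name": "lanierTM.img",   "projection": "UTM",         "state": "Georgia"},
        {"name": "lanierSPOT.img", "projection": "UTM",         "state": "Georgia"},
        {"name": "flood88.img",    "projection": "State Plane", "state": "Georgia"},
    ]

    def query(records, **criteria):
        """Return the records whose fields match every keyword criterion."""
        return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

    # All image files of Georgia that have a UTM map projection:
    for record in query(catalog, state="Georgia", projection="UTM"):
        print(record["name"])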
Use the Image Catalog to track and store information for image files (.img) that are imported and
created in IMAGINE.
NOTE: All information in the Image Catalog database, except archive information, is extracted
from the image file header. Therefore, if this information is modified in the Image Info utility, it
will be necessary to re-catalog the image in order to update the information in the Image Catalog
database.
ERDAS IMAGINE Image Catalog
The ERDAS IMAGINE Image Catalog database is designed to serve as a library and
information management system for image files (.img) that are imported and created in
ERDAS IMAGINE. The information for the .img files is displayed in the Image Catalog
CellArray. This CellArray enables the user to view all of the ancillary data for the image
files in the database. When records are queried based on specific criteria, the .img files
that match the criteria will be highlighted in the CellArray. It is also possible to graphically view the coverage of the selected .img files on a map in a canvas window.
When it is necessary to store some data on a tape, the Image Catalog database enables
the user to archive .img files to external devices. The Image Catalog CellArray will
show which tape the .img file is stored on, and the file can be easily retrieved from the
tape device to a designated disk directory. The archived .img files are copies of the files
on disk—nothing is removed from the disk. Once the file is archived, it can be removed
from the disk, if desired.
Geocoded Data

Geocoding, also known as georeferencing, is the geographical registration or coding of
the pixels in an image. Geocoded data are images that have been rectified to a particular
map projection and pixel size.
Raw, remotely sensed image data are gathered by a sensor on a platform, such as an
aircraft or satellite. In this raw form, the image data are not referenced to a map
projection. Rectification is the process of projecting the data onto a plane and making
them conform to a map projection system.
It is possible to geocode raw image data with the ERDAS IMAGINE rectification tools.
Geocoded data are also available from EOSAT and SPOT.
See "APPENDIX C: Map Projections" for detailed information on the different projections
available. See "CHAPTER 8: Rectification" for information on geocoding raw imagery with
ERDAS IMAGINE.
Using Image Data in GIS

ERDAS IMAGINE provides many tools designed to extract the necessary information from the images in a data base. The following chapters in this book describe many of
these processes.
This section briefly describes some basic image file techniques that may be useful for
any application.
Subsetting and Mosaicking

Within ERDAS IMAGINE, there are options available to make additional image files from those acquired from EOSAT, SPOT, etc. These options involve combining files,
mosaicking, and subsetting.
ERDAS IMAGINE programs allow image data with an unlimited number of bands, but
the most common satellite data types—Landsat and SPOT—have seven or fewer bands.
Image files can be created with more than seven bands.
It may be useful to combine data from two different dates into one file. This is called
multitemporal imagery. For example, a user may want to combine Landsat TM from
one date with TM data from a later date, then perform a classification based on the
combined data. This is particularly useful for change detection studies.
The user can also incorporate elevation data into an existing image file as another band,
or create new bands through various enhancement techniques.
To combine two or more image files, each file must be georeferenced to the same coordinate
system, or to each other. See "CHAPTER 8: Rectification" for information on georeferencing
images.
Subset
Subsetting refers to breaking out a portion of a large file into one or more smaller files.
Often, image files contain areas much larger than a particular study area. In these cases,
it is helpful to reduce the size of the image file to include only the area of interest. This
not only eliminates the extraneous data in the file, but it speeds up processing due to
the smaller amount of data to process. This can be important when dealing with
multiband data.
The Import option lets you define a subset area of an image to preview or import. You can also
use the Subset option from Image Interpreter to define a subset area.
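Conceptually, subsetting is just extracting a block of rows and columns from the full image. A minimal sketch with NumPy (the array shape and the row/column ranges are arbitrary examples; in practice the subset is defined through the Import or Subset dialogs):

    import numpy as np

    # A hypothetical 3-band image stored as (bands, rows, columns).
    image = np.zeros((3, 7000, 8000), dtype=np.uint8)

    # Keep only the study area: rows 2000-3999 and columns 4500-6499.
    subset = image[:, 2000:4000, 4500:6500]
    print(subset.shape)    # (3, 2000, 2000)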
Mosaic
On the other hand, the study area in which the user is interested may span several
image files. In this case, it is necessary to combine the images to create one large file.
This is called mosaicking.
To create a mosaicked image, use the Mosaic Images option from the Data Prep menu. All of the
images to be mosaicked must be georeferenced to the same coordinate system.
Enhancement

Image enhancement is the process of making an image more interpretable for a
particular application (Faust 1989). Enhancement can make important features of raw,
remotely sensed data and aerial photographs more interpretable to the human eye.
Enhancement techniques are often used instead of classification for extracting useful
information from images.
There are many enhancement techniques available. They range in complexity from a
simple contrast stretch, where the original data file values are stretched to fit the range
of the display device, to principal components analysis, where the number of image
file bands can be reduced and new bands created to account for the most variance in the
data.
Multispectral Classification

Image data are often used to create thematic files through multispectral classification. This entails using spectral pattern recognition to identify groups of pixels that represent
a common characteristic of the scene, such as soil type or vegetation.
Editing Raster Data

ERDAS IMAGINE provides raster editing tools for editing the data values of thematic and continuous raster data. This is primarily a correction mechanism that enables the
user to correct bad data values which produce noise, such as spikes and holes in
imagery. The raster editing functions can be applied to the entire image or a user-
selected area of interest (AOI).
With raster editing, data values in thematic data can also be recoded according to class.
Recoding is a function which reassigns data values to a region or to an entire class of
pixels.
See "CHAPTER 10: Geographic Information Systems" for information about recoding data. See
"CHAPTER 5: Enhancement" for information about reducing data noise using spatial filtering.
The ERDAS IMAGINE raster editing functions allow the use of focal and global spatial
modeling functions for computing the values to replace noisy pixels or areas in
continuous or thematic data.
Focal operations are filters which calculate the replacement value based on a window
(3 × 3, 5 × 5, etc.) and replace the pixel of interest with the replacement value. Therefore
this function affects one pixel at a time, and the number of surrounding pixels which
influence the value is determined by the size of the moving window.
Global operations calculate the replacement value for an entire area rather than
affecting one pixel at a time. These functions, specifically the Majority option, are more
applicable to thematic data.
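As a sketch of the difference, the following computes a 3 × 3 focal mean (typical for continuous data) and a 3 × 3 focal majority (typical for thematic data) for one pixel of interest. It is a simplified illustration, not the ERDAS IMAGINE functions, and it assumes the window lies entirely inside the image:

    import numpy as np
    from collections import Counter

    def focal_mean(data, row, col, size=3):
        """Replacement value for (row, col): mean of the size x size moving window."""
        half = size // 2
        window = data[row - half:row + half + 1, col - half:col + half + 1]
        return window.mean()

    def focal_majority(data, row, col, size=3):
        """Replacement value for (row, col): most common class in the moving window."""
        half = size // 2
        window = data[row - half:row + half + 1, col - half:col + half + 1]
        return Counter(window.ravel().tolist()).most_common(1)[0][0]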
See the ERDAS IMAGINE On-Line Help for information about using and selecting AOIs.
Editing Continuous (Athematic) Data

Editing DEMs
DEMs will occasionally contain spurious pixels or bad data. These spikes, holes, and other noise caused by automatic DEM extraction can be corrected by editing the raster data values and replacing them with meaningful values. This discussion of raster editing will focus on DEM editing.
The ERDAS IMAGINE Raster Editing functionality was originally designed to edit DEMs, but
it can also be used with images of other continuous data sources, such as radar, SPOT, Landsat,
and digitized photographs.
When editing continuous raster data, the user can modify or replace original pixel
values with the following:
• a constant value — enter a known constant value for areas such as lakes
• the average of the buffering pixels — replace the original pixel value with the average of the pixels in a specified buffer area around the AOI. This is used where the constant values of the AOI are not known, but the area is flat or homogeneous with little variation (for example, a lake); a sketch of this operation follows this list.
• the original data value plus a constant value — add a negative constant value to the
original data values to compensate for the height of trees and other vertical features
in the DEM. This technique is commonly used in forested areas.
• spatial filtering — filter data values to eliminate noise such as spikes or holes in the
data
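A minimal sketch of the buffer-average replacement mentioned above, assuming a rectangular AOI well inside the image and a fixed-width buffer (ERDAS IMAGINE AOIs can be any shape; the function and parameters here are illustrative only):

    import numpy as np

    def fill_aoi_with_buffer_average(dem, r0, r1, c0, c1, buffer=5):
        """Replace the rectangular AOI dem[r0:r1, c0:c1] with the mean of the
        pixels in a buffer of the given width surrounding the AOI."""
        region = dem[r0 - buffer:r1 + buffer, c0 - buffer:c1 + buffer]
        mask = np.ones(region.shape, dtype=bool)
        mask[buffer:-buffer, buffer:-buffer] = False     # exclude the AOI itself
        dem[r0:r1, c0:c1] = region[mask].mean()
        return dem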
Interpolation Techniques

While the previously listed raster editing techniques are perfectly suitable for some applications, the following interpolation techniques provide the best methods for raster
editing:
• 2-D polynomial
• Multi-surface functions
• Distance weighting
Each pixel’s data value is interpolated from the reference points in the data file. These
interpolation techniques are described below.
2-D Polynomial
This interpolation technique provides faster interpolation calculations than distance
weighting and multi-surface functions. The following equation is used:
where:
Multi-surface Functions
The multi-surface technique provides the most accurate results for editing DEMs which
have been created through automatic extraction. The following equation is used:
V = Σ Wi Qi
where:
Distance Weighting
The weighting function determines how the output data values will be interpolated
from a set of reference data points. For each pixel, the values of all reference points are
weighted by a value corresponding with the distance between each point and the pixel.
W = (S / D – 1)²

where:
S = normalization factor
D = distance between the reference point and the pixel being calculated
The value for any given pixel is calculated by taking the sum of weighting factors for all
reference points multiplied by the data values of those points, and dividing by the sum
of the weighting factors:
V = ( Σ Wi × Vi ) / ( Σ Wi ), with both sums taken over i = 1 to n

where:
Wi = weighting factor for reference point i
Vi = data value of reference point i
n = number of reference points
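A minimal sketch of distance weighting, using the weighting function and weighted average given above; the reference points are assumed to be (x, y, value) triples and the function name is illustrative:

    import math

    def distance_weighted_value(px, py, reference_points, s):
        """Interpolate the value at pixel (px, py) from (x, y, value) reference
        points: W = (S / D - 1)**2, then a weighted average of the values."""
        weights, values = [], []
        for x, y, v in reference_points:
            d = math.hypot(px - x, py - y)
            if d == 0:
                return v                   # the pixel coincides with a reference point
            weights.append((s / d - 1) ** 2)
            values.append(v)
        return sum(w * v for w, v in zip(weights, values)) / sum(weights)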
CHAPTER 2
Vector Layers
Introduction

ERDAS IMAGINE is designed to integrate two data types into one system: raster and
vector. While the previous chapter explored the characteristics of raster data, this
chapter is focused on vector data. The vector data structure in ERDAS IMAGINE is
based on the ARC/INFO data model (developed by ESRI, Inc.). This chapter describes
vector data, attribute information, and symbolization.
You do not need ARC/INFO software or an ARC/INFO license to use the vector capabilities in
ERDAS IMAGINE. Since the ARC/INFO data model is used in ERDAS IMAGINE, you can
use ARC/INFO coverages directly without importing them.
See "CHAPTER 10: Geographic Information Systems" for information on editing vector layers
and using vector data in a GIS.
Vector data consist of:
• points
• lines
• polygons
Figure 16: Vector Elements (points, lines, and polygons, with their vertices, nodes, and label points)
Points
A point is represented by a single x,y coordinate pair. Points can represent the location
of a geographic feature or a point that has no area, such as a mountain peak. Label
points are also used to identify polygons (see below).
Lines
A line (polyline) is a set of line segments and represents a linear geographic feature,
such as a river, road, or utility line. Lines can also represent non-geographical boundaries, such as voting districts, school zones, contour lines, etc.
Polygons
A polygon is a closed line or closed set of lines defining a homogeneous area, such as
soil type, land use, or water body. Polygons can also be used to represent non-
geographical features, such as wildlife habitats, state borders, commercial districts, etc.
Polygons also contain label points that identify the polygon. The label point links the
polygon to its attributes.
Vertex
The points that define a line are vertices. A vertex is a point that defines an element,
such as the endpoint of a line segment or a location in a polygon where the line segment
defining the polygon changes direction. The ending points of a line are called nodes.
Each line has two nodes: a from-node and a to-node. The from-node is the first vertex
in a line. The to-node is the last vertex in a line. Lines join other lines only at nodes. A
series of lines in which the from-node of the first line joins the to-node of the last line is
a polygon.
Figure 17: Vertices (a line and a polygon defined by vertices, with a label point marking the polygon)
In Figure 17, the line and the polygon are each defined by three vertices.
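To make these relationships concrete, here is a minimal sketch of how the elements might be represented in Python. The class names and fields are purely illustrative; the actual vector structure is the set of ARC/INFO coverage files described later in this chapter.

    from dataclasses import dataclass
    from typing import List, Tuple

    Coordinate = Tuple[float, float]        # an x,y pair

    @dataclass
    class Line:
        vertices: List[Coordinate]          # first vertex = from-node, last = to-node

        def from_node(self) -> Coordinate:
            return self.vertices[0]

        def to_node(self) -> Coordinate:
            return self.vertices[-1]

    @dataclass
    class Polygon:
        lines: List[Line]                   # a closed set of lines
        label_point: Coordinate             # links the polygon to its attributes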
Coordinates

Vector data are expressed by the coordinates of vertices. The vertices that define each
element are referenced with x,y, or Cartesian, coordinates. In some instances, those
coordinates may be inches (as in some CAD applications), but often the coordinates are
map coordinates, such as State Plane, Universal Transverse Mercator (UTM), or
Lambert Conformal Conic. Vector data digitized from an ungeoreferenced image are
expressed in file coordinates.
Tics
Vector layers are referenced to coordinates or a map projection system using tic files
that contain geographic control points for the layer. Every vector layer must have a tic
file. Tics are not topologically linked to other features in the layer and do not have
descriptive data associated with them.
Vector Layers

Although it is possible to have points, lines, and polygons in a single layer, a layer
typically consists of one type of feature. It is possible to have one vector layer for
streams (lines) and another layer for parcels (polygons). A vector layer is defined as a
set of features where each feature has a location (defined by coordinates and topological
pointers to other features) and, possibly attributes (defined as a set of named items or
variables) (ESRI 1989). Vector layers contain both the vector features (points, lines,
polygons) and the attribute information.
Usually, vector layers are also divided by the type of information they represent. This
enables the user to isolate data into themes, similar to the themes used in raster layers.
Political districts and soil types would probably be in separate layers, even though both
are represented with polygons. If the project requires that the coincidence of features in
two or more layers be studied, the user can overlay them or create a new layer.
See "CHAPTER 10: Geographic Information Systems" for more information about analyzing
vector layers.
Topology

The spatial relationships between features in a vector layer are defined using topology.
In topological vector data, a mathematical procedure is used to define connections
between features, identify adjacent polygons, and define a feature as a set of other
features (e.g., a polygon is made of connecting lines) (ESRI 1990).
Topology is not automatically created when a vector layer is created. It must be added
later using specific functions. Topology must also be updated after a layer is edited.
"Digitizing" on page 47 describes how topology is created for a new or edited vector layer.
Vector Files

As mentioned above, the ERDAS IMAGINE vector structure is based on the ARC/INFO data model used for ARC coverages. This georelational data model is actually a set of files using the computer’s operating system for file management and input/output. An ERDAS IMAGINE vector layer is stored in subdirectories on the disk. Vector data are represented by a set of logical tables of information, stored as files within the subdirectory. These files may serve the following purposes:
• define features
Figure 18: A workspace directory (georgia) containing the vector layer subdirectories parcels and testdata
Because vector layers are stored in directories rather than in simple files, you MUST use the
utilities provided in ERDAS IMAGINE to copy and rename them. A utility is also provided to
update path names that are no longer correct due to the use of regular system commands on
vector layers.
See the ESRI documentation for more detailed information about the different vector files.
Attribute Information

Along with points, lines, and polygons, a vector layer can have a wealth of descriptive, or attribute, information associated with it. Attribute information is displayed in ERDAS IMAGINE CellArrays. This is the same information that is stored
in the INFO database of ARC/INFO. Some attributes are automatically generated when
the layer is created. Custom fields can be added to each attribute table. Attribute fields
can contain numerical or character data.
The attributes for a roads layer may look similar to the example in Figure 19. The user
can select features in the layer based on the attribute information. Likewise, when a row
is selected in the attribute CellArray, that feature is highlighted in the Viewer.
To utilize all of this attribute information, the INFO files can be merged into the PAT
and AAT files. Once this attribute information has been merged, it can be viewed in
IMAGINE CellArrays and edited as desired. This new information can then be
exported back to its original format.
The complete path of the file must be specified when establishing an INFO file name in
an ERDAS IMAGINE Viewer application, such as exporting attributes or merging
attributes, as shown in the example below:
/georgia/parcels/info!arc!parcels.pcode
Use the Attributes option in the IMAGINE Viewer to view and manipulate vector attribute
data, including merging and exporting. (The Raster Attribute Editor is for raster attributes only
and cannot be used to edit vector attributes.)
See the ERDAS IMAGINE On-Line Help for more information about using CellArrays.
Displaying Vector Data

Vector data are displayed in Viewers, as are other data types in ERDAS IMAGINE. The user can display a single vector layer, overlay several layers in one Viewer, or display a vector layer(s) over a raster layer(s).
In layers that contain more than one feature (a combination of points, lines, and
polygons), the user can select which features to display. For example, if a user is
studying parcels, he or she may want to display only the polygons in a layer that also
contains street centerlines (lines).
Color Schemes
Vector data are usually assigned class values in the same manner as the pixels in a
thematic raster file. These class values correspond to different colors on the display
screen. As with a pseudo color image, the user can assign a color scheme for displaying
the vector classes.
See "CHAPTER 4: Image Display" for a thorough discussion of how images are displayed.
Symbolization

Vector layers can be displayed with symbolization, meaning that the attributes can be
used to determine how points, lines, and polygons are rendered. Points, lines,
polygons, and nodes are symbolized using styles and symbols similar to annotation.
For example, if a point layer represents cities and towns, the appropriate symbol could
be used at each point based on the population of that area.
Points
Point symbolization options include symbol, size, and color. The symbols available are
the same symbols available for annotation.
Lines
Lines can be symbolized with varying line patterns, composition, width, and color. The
line styles available are the same as those available for annotation.
Polygons
Polygons can be symbolized as lines or as filled polygons. Polygons symbolized as lines
can have varying line styles (see Lines above). For filled polygons, either a solid fill
color or a repeated symbol can be selected. When symbols are used, the user selects the
symbol to use, the symbol size, symbol color, background color, and the x- and y-
separation between symbols. Figure 20 illustrates a pattern fill.
The vector layer will reflect the symbolization that is defined in the Symbology dialog.
See the ERDAS IMAGINE Tour Guides or On-Line Help for information about selecting
features and using CellArrays.
New vector layers can be created in several ways, including:
• screen digitizing—create new vector layers by using the mouse to digitize on the screen
• using other software packages—many external vector data types can be converted
to ERDAS IMAGINE vector layers
Digitizing

In the broadest sense, digitizing refers to any process that converts non-digital data into
numbers. However, in ERDAS IMAGINE, the digitizing of vectors refers to the creation
of vector data from hardcopy materials or raster images that are traced using a digitizer
keypad on a digitizing tablet or a mouse on a displayed image.
Any image not already in digital format must be digitized before it can be read by the
computer and incorporated into the data base. Most Landsat, SPOT, or other satellite
data are already in digital format upon receipt, so it is not necessary to digitize them.
However, the user may also have maps, photographs, or other non-digital data that
contain information they want to incorporate into the study. Or, the user may want to
extract certain features from a digital image to include in a vector layer. Tablet
digitizing and screen digitizing enable the user to digitize certain features of a map or
photograph, such as roads, bodies of water, voting districts, and so forth.
Tablet Digitizing

Tablet digitizing involves the use of a digitizing tablet to transfer non-digital data such
as maps or photographs to vector format. The digitizing tablet contains an internal
electronic grid that transmits data to ERDAS IMAGINE on cue from a digitizer keypad
operated by the user.
Digitizer Set-Up
The map or photograph to be digitized is secured on the tablet, and a coordinate system
is established with a set-up procedure.
Digitizer Operation
The hand-held digitizer keypad features a small window with cross-hairs and keypad
buttons. Position the intersection of the cross-hairs directly over the point to be
digitized. Depending on the type of equipment and the program being used, one of the
input buttons is pushed to tell the system which function to perform, such as:
Move the puck along the desired polygon boundaries or lines, digitizing points at
appropriate intervals (where lines curve or change direction), until all the points are
satisfactorily completed.
Newly created vector layers do not contain topological data. You must create topology using the Build or Clean options. This is discussed further in "CHAPTER 10: Geographic Information Systems".
Digitizing Modes
There are two modes used in digitizing:
• point mode — one point is generated each time a keypad button is pressed
• stream mode — points are generated continuously at specified intervals, while the
puck is in proximity to the surface of the digitizing tablet
You can create a new vector layer from the IMAGINE Viewer. Select the Tablet Input function
from the Viewer to use a digitizing tablet to enter new information into that layer.
Measurement
The digitizing tablet can also be used to measure both linear and areal distances on a
map or photograph. The digitizer puck is used to outline the areas to measure. The user
can measure:
Measurements can be saved to a file, printed, and copied. These operations can also be
performed with screen digitizing.
Select the Measure function from the IMAGINE Viewer or click on the Ruler tool in the Viewer
tool bar to enable tablet or screen measurement.
Screen Digitizing

In screen digitizing, vector data are drawn in the Viewer with a mouse using the
displayed image as a reference. These data are then written to a vector layer.
Screen digitizing is used for the same purposes as tablet digitizing, such as:
Imported Vector Data

Many types of vector data from other software packages can be incorporated into the ERDAS IMAGINE system. These data formats include:
• Vector Product Format (VPF) files from the Defense Mapping Agency
See "CHAPTER 3: Raster and Vector Data Sources" for more information on these data.
Raster to Vector Conversion

A raster layer can be converted to a vector layer and used as another layer in a vector data base. The diagram below illustrates a thematic file in raster format that has been converted to vector format.
Most commonly, thematic raster data rather than continuous data are converted to
vector format, since converting continuous layers may create more vector features than
are practical or even manageable.
Convert vector data to raster data, and vice versa, using ERDAS IMAGINE Vector.
CHAPTER 3
Raster and Vector Data Sources
Introduction

This chapter is an introduction to the most common raster and vector data types that
can be used with the ERDAS IMAGINE software package. The raster data types
covered include:
• radar imagery
Importing and Exporting Raster Data

There is an abundance of data available for use in GIS today. In addition to satellite and airborne imagery, raster data sources include digital x-rays, sonar, microscopic imagery, video digitized data, and many other sources.
Because of the wide variety of data formats, ERDAS IMAGINE provides two options
for importing data:
Direct Import
Table 2 lists some of the raster data formats that can be directly imported to and
exported from ERDAS IMAGINE:
Table 2: Raster Data Formats for Direct Import
Once imported, the raster data are converted to the ERDAS IMAGINE file format
(.img). The direct import function will import the data file values that make up the
raster image, as well as the ephemeris or additional data inherent to the data structure.
For example, when the user imports Landsat data, ERDAS IMAGINE also imports the
georeferencing data for the image.
Raster data formats cannot be exported as vector data formats, unless they are converted with
the Vector utilities.
Each direct function is programmed specifically for that type of data and cannot be used to
import other data types.
Generic Import
The Generic import option is a flexible program which enables the user to define the
data structure for ERDAS IMAGINE. This program allows the import of BIL, BIP, and
BSQ data that are stored in left to right, top to bottom row order. Data formats from
unsigned 1-bit up to 64-bit floating point can be imported. This program imports only
the data file values—it does not import ephemeris data, such as georeferencing infor-
mation. However, this ephemeris data can be viewed using the Data View option (from
the Utility menu or the Import dialog).
Complex data cannot be imported using this program; however, they can be imported
as two real images and then combined into one complex image using the IMAGINE
Spatial Modeler.
You cannot import tiled or compressed data using the Generic import option.
Importing and Exporting Vector Data

Vector layers can be created within ERDAS IMAGINE by digitizing points, lines, and polygons using a digitizing tablet or the computer screen. Several vector data types,
which are available from a variety of government agencies and private companies, can
also be imported. Table 3 lists some of the vector data formats that can be imported to,
and exported from, ERDAS IMAGINE:
Table 3: Vector Data Formats for Import and Export
Once imported, the vector data are automatically converted to ERDAS IMAGINE
vector layers.
These vector formats are discussed in more detail in "Vector Data from Other Software
Vendors" on page 90. See "CHAPTER 2: Vector Layers" for more information on ERDAS
IMAGINE vector layers.
Import and export vector data with the Import/Export function. You can also convert vector
layers to raster format, and vice versa, with the ERDAS IMAGINE Vector utilities.
Satellite Data

There are several data acquisition options available including photography, aerial
sensors, and sophisticated satellite scanners. However, a satellite system offers these
advantages:
• Many satellites orbit the earth, so the same area can be covered on a regular basis
for change detection.
• Once the satellite is launched, the cost for data acquisition is less than that for
aircraft data.
• Satellites have very stable geometry, meaning that there is less chance for distortion
or skew in the final image.
Satellite System

A satellite system is composed of a scanner with sensors and a satellite platform. The
sensors are made up of detectors.
• The scanner is the entire data acquisition system, such as the Landsat Thematic
Mapper scanner or the SPOT panchromatic scanner (Lillesand and Kiefer 1987). It
includes the sensor and the detectors.
In a satellite system, the total width of the area on the ground covered by the scanner is
called the swath width, or width of the total field of view (FOV). FOV differs from IFOV
(instantaneous field of view) in that the IFOV is a measure of the field of view of each
detector. The FOV is a measure of the field of view of all the detectors combined.
Satellite Characteristics

The U. S. Landsat and the French SPOT satellites are two important data acquisition
satellites. These systems provide the majority of remotely sensed digital images in use
today. The Landsat and SPOT satellites have several characteristics in common:
• They have sun-synchronous orbits, meaning that each orbit crosses the equator at the same local solar time, so data are always collected at the same local time of day over the same region.
• They both record electromagnetic radiation in one or more bands. Multiband data
are referred to as multispectral imagery. Single band, or monochrome, imagery is
called panchromatic.
• Both scanners can produce nadir views. Nadir is the area on the ground directly
beneath the scanner’s detectors.
NOTE: The current SPOT system has the ability to collect off-nadir stereo imagery, as will the
future Landsat 7 system.
Figure: Spectral ranges, in micrometers, of the Landsat MSS (Landsats 1, 2, 3, 4), Landsat TM (Landsats 4, 5), SPOT XS, SPOT Panchromatic, and NOAA AVHRR bands.
NOTE: NOAA AVHRR band 5 is not on the NOAA 10 satellite, but is on NOAA 11.
Landsat

In 1972, the National Aeronautics and Space Administration (NASA) initiated the first
civilian program specializing in the acquisition of remotely sensed digital satellite data.
The first system was called ERTS (Earth Resources Technology Satellites), and was later renamed Landsat. There have been several Landsat satellites launched since 1972.
Landsats 1, 2, and 3 are no longer operating, but Landsats 4 and 5 are still in orbit
gathering data.
Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and Landsats 4 and 5
collect MSS and Thematic Mapper (TM) data. MSS and TM are discussed in more detail
below.
NOTE: Landsat data are available through the Earth Observation Satellite Company (EOSAT)
or the EROS Data Center. See "Ordering Raster Data" on page 84 for more information.
MSS
The MSS (multispectral scanner) covers a ground area of approximately 185 × 170 km per scene, acquired from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data are widely used for general geologic studies as well as vegetation inventories.
• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting
cultural features, such as roads. These bands also show detail in water.
• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in
land/water and vegetation discrimination.
1 =Green, 0.50-0.60 µm
This band scans the region between the blue and red chlorophyll absorption bands. It
corresponds to the green reflectance of healthy vegetation, and it is also useful for
mapping water bodies.
2 =Red, 0.60-0.70 µm
This is the red chlorophyll absorption band of healthy green vegetation and represents
one of the most important bands for vegetation discrimination. It is also useful for
determining soil boundary and geological boundary delineations and cultural features.
TM
The TM (thematic mapper) scanner is a multispectral scanning system much like the
MSS, except that the TM sensor records reflected/emitted electromagnetic energy from
the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the
spectrum. TM has higher spatial, spectral, and radiometric resolution than MSS.
TM has a swath width of approximately 185 km from a height of approximately 705 km.
It is useful for vegetation type and health determination, soil moisture, snow and cloud
differentiation, rock type discrimination, etc.
The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6),
which has a spatial resolution of 120 × 120 m. The larger pixel size of this band is
necessary for adequate signal strength. However, the thermal band is resampled to 28.5
× 28.5 m to match the other bands. The radiometric resolution is 8-bit, meaning that each
pixel has a possible range of data values from 0 to 255.
• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in
detecting cultural features such as roads. These bands also show detail in water.
• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be
used in land/water discrimination.
• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping
(Jensen 1996; Lillesand and Kiefer 1987).
1 =Blue, 0.45-0.52 µm
Useful for mapping coastal water areas, differentiating between soil and vegetation,
forest type mapping, and detecting cultural features.
2 =Green, 0.52-0.60 µm
Corresponds to the green reflectance of healthy vegetation. Also useful for cultural
feature identification.
3 =Red, 0.63-0.69 µm
Useful for discriminating between many plant species. It is also useful for determining
soil boundary and geological boundary delineations as well as cultural features.
4 =Reflective-infrared, 0.76-0.90 µm
This band is especially responsive to the amount of vegetation biomass present in a
scene. It is useful for crop identification and emphasizes soil/crop and land/water
contrasts.
5 =Mid-infrared, 1.55-1.74 µm
This band is sensitive to the amount of water in plants, which is useful in crop drought
studies and in plant health analyses. This is also one of the few bands that can be used
to discriminate between clouds, snow, and ice.
6 =Thermal-infrared, 10.40-12.50 µm
This band is useful for vegetation and crop stress detection, heat intensity, insecticide
applications, and for locating thermal pollution. It can also be used to locate geothermal
activity.
7 =Mid-infrared, 2.08-2.35 µm
This band is important for the discrimination of geologic rock type and soil boundaries,
as well as soil and vegetation moisture content.
Figure: Landsat MSS and TM compared. MSS has 4 bands, a pixel size of 57 × 79 m, and a radiometric resolution of 0-127; TM has 7 bands, a pixel size of 30 × 30 m, and a radiometric resolution of 0-255.
NOTE: The order of the bands corresponds to the Red, Green, and Blue (RGB) color guns of the
monitor.
• Bands 3,2,1 create a true color composite. True color means that objects look as they
would to the naked eye—similar to a color photograph.
• Bands 4,3,2 create a false color composite. False color composites appear similar to
an infrared photograph where objects do not have the same colors or contrasts as
they would naturally. For instance, in an infrared image, vegetation appears red,
water appears navy or black, etc.
• Bands 5,4,2 create a pseudo color composite. (A thematic image is also a pseudo
color image.) In pseudo color, the colors do not reflect the features in natural colors.
For instance, roads may be red, water yellow, and vegetation blue.
Different color schemes can be used to bring out or enhance the features under study.
These are by no means all of the useful combinations of these seven bands. The bands
to be used are determined by the particular application.
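A minimal sketch of building such composites from a band-interleaved array (the random data, the stacking order, and the composite function are illustrative assumptions, not an ERDAS IMAGINE function):

    import numpy as np

    # A hypothetical 7-band TM scene stored as (band, row, column), 8-bit data.
    tm = np.random.randint(0, 256, size=(7, 512, 512), dtype=np.uint8)

    def composite(scene, r, g, b):
        """Stack three 1-based band numbers into a (row, col, 3) RGB display array."""
        return np.dstack([scene[r - 1], scene[g - 1], scene[b - 1]])

    true_color   = composite(tm, 3, 2, 1)   # bands 3, 2, 1
    false_color  = composite(tm, 4, 3, 2)   # bands 4, 3, 2
    pseudo_color = composite(tm, 5, 4, 2)   # bands 5, 4, 2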
See "CHAPTER 4: Image Display" for more information on how images are displayed,
"CHAPTER 5: Enhancement" for more information on how images can be enhanced, and
"Ordering Raster Data" on page 84 for information on types of Landsat data available.
SPOT

The first Systeme Pour l’observation de la Terre (SPOT) satellite, developed by the
French Centre National d’Etudes Spatiales (CNES), was launched in early 1986. The
second SPOT satellite was launched in 1990 and the third was launched in 1993. The
sensors operate in two modes, multispectral and panchromatic. SPOT is commonly
referred to as a pushbroom scanner meaning that all scanning parts are fixed and
scanning is accomplished by the forward motion of the scanner. SPOT pushes
a line of 3000 (multispectral mode) or 6000 (panchromatic mode) sensors along its orbit. This is different from Landsat, which scans with 16 detectors perpendicular to its orbit.
The SPOT satellite can observe the same area on the globe once every 26 days. The SPOT
scanner normally produces nadir views, but it does have off-nadir viewing capability.
Off-nadir refers to any point that is not directly beneath the detectors, but off to an
angle. Using this off-nadir capability, one area on the earth can be viewed as often as
every 3 days.
This off-nadir viewing can be programmed from the ground control station, and is quite
useful for collecting data in a region not directly in the path of the scanner or in the
event of a natural or man-made disaster, where timeliness of data acquisition is crucial.
It is also very useful in collecting stereo data from which elevation data can be extracted.
The width of the swath observed varies between 60 km for nadir viewing and 80 km for
off-nadir viewing at a height of 832 km (Jensen 1996).
Panchromatic
SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial
resolution, contains 1 band—0.51 to 0.73 µm—and is similar to a black and white photograph. It has a radiometric resolution of 8 bits (Jensen 1996).
XS
SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit radiometric resolution,
and contains 3 bands (Jensen 1996).
1 =Green, 0.50-0.59 µm
Corresponds to the green reflectance of healthy vegetation.
2 =Red, 0.61-0.68 µm
Useful for discriminating between plant species. It is also useful for soil boundary and
geological boundary delineations.
Figure: SPOT Panchromatic and XS compared. Panchromatic has 1 band and a pixel size of 10 × 10 m; XS has 3 bands and a pixel size of 20 × 20 m; both have a radiometric resolution of 0-255.
See "Ordering Raster Data" on page 84 for information on the types of SPOT data available.
Stereoscopic Pairs
Two observations can be made by the panchromatic scanner on successive days, so that
the two images are acquired at angles on either side of the vertical, resulting in stereoscopic imagery. Stereoscopic imagery can also be achieved by using one vertical scene
and one off-nadir scene. This type of imagery can be used to produce a single image, or
topographic and planimetric maps (Jensen 1996).
See "Topographic Data" on page 81 and "CHAPTER 9: Terrain Analysis" for more information
about topographic data and how SPOT stereopairs and aerial photographs can be used to create
elevation data and orthographic images.
NOAA Polar Orbiter Data

The National Oceanic and Atmospheric Administration (NOAA) has sponsored several
polar orbiting satellites to collect data of the earth. These satellites were originally
designed for meteorological applications, but the data gathered have been used in
many fields—from agronomy to oceanography (Needham 1986).
The first of these satellites to be launched was the TIROS-N in 1978. Since the TIROS-N,
five additional NOAA satellites have been launched. Of these, the last three are still in
orbit gathering data.
AVHRR
The NOAA AVHRR (Advanced Very High Resolution Radiometer) data are small-scale
data and often cover an entire country. The swath width is 2700 km and the satellites
orbit at a height of approximately 833 km (Kidwell 1988; Needham 1986).
The AVHRR system allows for direct transmission in real-time of data called High
Resolution Picture Transmission (HRPT). It also allows for about ten minutes of data
to be recorded over any portion of the world on two recorders on board the satellite.
This recorded data are called Local Area Coverage (LAC). LAC and HRPT have
identical formats; the only difference is that HRPT are transmitted directly and LAC are
recorded.
There are three basic formats for AVHRR data which can be imported into ERDAS
IMAGINE:
• Local Area Coverage (LAC) - data recorded on board the sensor with a spatial
resolution of approximately 1.1 × 1.1 km.
• Global Area Coverage (GAC) - data produced from LAC data by using only 1 out
of every 3 scan lines. GAC data have a spatial resolution of approximately 4 × 4 km.
• High Resolution Picture Transmission (HRPT) - data transmitted directly in real-time, with the same format and spatial resolution as LAC.
AVHRR data are available in 10-bit packed and 16-bit unpacked format. The term
packed refers to the way in which the data are written to the tape. Packed data are
compressed to fit more data on each tape (Kidwell 1988).
AVHRR images are useful for snow cover mapping, flood monitoring, vegetation
mapping, regional soil moisture analysis, wildfire fuel mapping, fire detection, dust
and sandstorm monitoring, and various geologic applications (Lillesand and Kiefer
1987). The entire globe can be viewed in 14.5 days. There may be four or five bands,
depending on when the data were acquired.
1 =Visible, 0.58-0.68 µm
This band corresponds to the green reflectance of healthy vegetation and is important
for vegetation discrimination.
2 =Near-infrared, 0.725-1.10 µm
This band is especially responsive to the amount of vegetation biomass present in a
scene. It is useful for crop identification and emphasizes soil/crop and land/water
contrasts.
3 =Thermal-infrared, 3.55-3.93 µm
This is a thermal band that can be used for snow and ice discrimination. It is also useful
for detecting fires.
AVHRR data have a radiometric resolution of 10-bits, meaning that each pixel has a
possible data file value between 0 and 1023. AVHRR scenes may contain one band, a
combination of bands, or all bands. All bands are referred to as a full set, and selected
bands are referred to as an extract.
See "Ordering Raster Data" on page 84 for information on the types of NOAA data available.
Radar Data

Simply put, radar data are produced when:
While there is a specific importer for RADARSAT data, most types of radar image data can be
imported into ERDAS IMAGINE with the Generic import option of Import/Export.
Advantages of Using Radar Data

Radar data have several advantages over other types of remotely sensed imagery:
• Radar microwaves can penetrate the atmosphere day or night under virtually all
weather conditions, providing data even in the presence of haze, light rain, snow,
clouds, or smoke.
• Under certain circumstances, radar can partially penetrate arid and hyperarid
surfaces, revealing sub-surface features of the earth.
• Although radar does not penetrate standing water, it can reflect the surface action
of oceans, lakes, and other bodies of water. Surface eddies, swells, and waves are
greatly affected by the bottom features of the water body, and a careful study of
surface action can provide accurate details about the bottom features.
Radar Sensors

Radar images are generated by two different types of sensors:
• SLAR (Side-Looking Airborne Radar)
• SAR (Synthetic Aperture Radar)
Both SLAR and SAR systems use side-looking geometry. Figure 26 shows a represen-
tation of an airborne SLAR system. Figure 27 shows a graph of the data received from
the radiation transmitted in Figure 26. Notice how the data correspond to the terrain in
Figure 26. These data can be used to produce a radar image of the target area. (A target
is any object or feature that is the subject of the radar scan.)
Figure 27: Received Radar Signal (signal strength, in DN, plotted against time in the range direction, showing returns from trees, a hill and its shadow, and a river within the beam width)
Active and Passive Sensors
An active radar sensor gives off a burst of coherent radiation that reflects from the
target, unlike a passive microwave sensor which simply receives the low-level
radiation naturally emitted by targets.
Like the coherent light from a laser, the waves emitted by active sensors travel in phase
and interact minimally on their way to the target area. After interaction with the target
area, these waves are no longer in phase. This is due to the different distances they
travel from different targets, or single versus multiple bounce scattering.
Figure: Radar waves are transmitted in phase; after interacting with the target area, they are no longer in phase.
At present, these bands are commonly used for radar imaging systems:
Band   Frequency Range     Wavelength Range   Radar System
X      5.20-10.90 GHz      5.77-2.75 cm       USGS SLAR
C      3.9-6.2 GHz         3.8-7.6 cm         ERS-1, Fuyo 1
L      0.39-1.55 GHz       76.9-19.3 cm       SIR-A, B, Almaz
P      0.225-0.391 GHz     40.0-76.9 cm       AIRSAR
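The frequency and wavelength columns are related by wavelength = c / frequency. A quick sketch of the conversion (the example frequencies of 5.3 GHz for ERS-1 and 1.275 GHz for SIR-A/B are approximate values, not taken from this table):

    C_CM_PER_SEC = 3.0e10                       # speed of light in cm per second

    def wavelength_cm(frequency_ghz):
        """Convert a radar frequency in GHz to its wavelength in centimeters."""
        return C_CM_PER_SEC / (frequency_ghz * 1.0e9)

    print(round(wavelength_cm(5.3), 2))         # C band, e.g. ERS-1: ~5.66 cm
    print(round(wavelength_cm(1.275), 2))       # L band, e.g. SIR-A/B: ~23.53 cm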
More information about these radar systems is given later in this chapter.
Radar bands were named arbitrarily when radar was first developed by the military.
The letter designations have no special meaning.
NOTE: The C band overlaps the X band. Wavelength ranges may vary slightly between sensors.
Speckle Noise

Once out of phase, the radar waves can interfere constructively or destructively to produce light and dark pixels known as speckle noise. Speckle noise in radar data must be reduced before the data can be utilized. However, the radar image processing programs used to reduce speckle noise also produce changes to the image. This consideration, combined with the fact that different applications and sensor outputs necessitate different speckle removal models, has led ERDAS to offer several speckle reduction algorithms in ERDAS IMAGINE Radar.
When processing radar data, the order in which the image processing programs are implemented
is crucial. This is especially true when considering the removal of speckle noise. Since any image
processing done before removal of the speckle results in the noise being incorporated into and
degrading the image, do not rectify, correct to ground range, or in any way resample the pixel
values before removing speckle noise. A rotation using nearest neighbor might be permissible.
• import radar data into the GIS as a stand-alone source or as an additional layer with
other imagery sources
• enhance edges
Applications for Radar Data

Radar data can be used independently in GIS applications or combined with other satellite data, such as Landsat, SPOT, or AVHRR. Possible GIS applications for radar
data include:
• Geology — radar’s ability to partially penetrate land cover and sensitivity to micro
relief makes radar data useful in geologic mapping, mineral exploration, and
archaeology.
• Glaciology — the ability to provide imagery of ocean and ice phenomena makes
radar an important tool for monitoring climatic change through polar ice variation.
• Oceanography — radar is used for wind and wave measurement, sea-state and
weather forecasting, and monitoring ocean circulation, tides, and polar oceans.
• Hydrology — radar data are proving useful for measuring soil moisture content
and mapping snow distribution and water content.
• Offshore Oil Activities — radar data are used to provide ice updates for offshore
drilling rigs, determining weather and sea conditions for drilling and installation
operations, and detecting oil spills.
• Pollution monitoring — radar can detect oil on the surface of water and can be used
to track the spread of an oil spill.
Current Radar Sensors

Table 5 gives a brief description of currently available radar sensors. This is not a complete list of such sensors, but it does represent the ones most useful for GIS applications.
Future Radar Sensors

Several radar satellites are planned for launch within the next several years, but only a few programs will be successful. Following are two scheduled programs which are considered highly achievable.
Almaz 1-b
NPO Mashinostroenia plans to launch and operate Almaz-1b as a commercial program
in 1998. The Almaz-1b system will include a unique, complex multisensor payload
consisting of eight high resolution sensors which can operate in various sensor combinations, including high resolution, two-pass radar stereo and single-pass stereo
coverage in the optical and multispectral bandwidths. Almaz-1b will feature three
synthetic aperture radars (SAR) that can collect multipolar, multifrequency (X, P, S
band) imagery in high resolution (5-7m spatial; 20-30 km swath), intermediate (5-15m
spatial; 60-70km swath), or survey (20-40m spatial; 120-170km swath) modes.
Light SAR
NASA/JPL is currently designing a radar satellite called Light SAR. Present plans are
for this to be a multi-polar sensor operating at L-band.
Image Data from Aircraft

Image data can also be acquired from multispectral scanners or radar sensors aboard aircraft, as well as satellites. This is useful if there isn’t time to wait for the next satellite
to pass over a particular area, or if it is necessary to achieve a specific spatial or spectral
resolution that cannot be attained with satellite sensors.
For example, this type of data can be beneficial in the event of a natural or man-made
disaster, because there is more control over when and where the data are gathered.
• AIRSAR
• AVIRIS
AIRSAR

AIRSAR (Airborne Synthetic Aperture Radar) is an experimental airborne radar sensor
developed by Jet Propulsion Laboratories (JPL), Pasadena, California, under a contract
with NASA. AIRSAR data have been available since 1983.
• C-band
• L-band
• P-band
Because this sensor measures at three different wavelengths, different scales of surface
roughness are obtained. The AIRSAR sensor has an IFOV of 10 m and a swath width of
12 km.
AIRSAR data have been used in many applications such as measuring snow wetness,
classifying vegetation, and estimating soil moisture.
NOTE: These data are distributed in a compressed format. They must be decompressed before
loading with an algorithm available from JPL. See "Addresses to Contact" on page 85 for contact
information.
AVIRIS

The AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) was also developed by JPL (Pasadena, California) under a contract with NASA. AVIRIS data have been available since 1987.
This sensor produces multispectral data that have 224 narrow bands. These bands are 10 nm wide and cover the spectral range of 0.4 - 2.4 µm. The swath width is 11 km and
the spatial resolution is 20 m. This sensor is flown at an altitude of approximately 20 km.
The data are recorded at 10-bit radiometric resolution.
Image Data from Scanning

Hardcopy maps and photographs can be incorporated into the ERDAS IMAGINE system through the use of a scanning camera. Scanning is remote sensing in a manner
of speaking, but the term “remote sensing” is usually reserved for satellite or aerial data
collection. In GIS, scanning refers to the transfer of analog data, such as photographs,
maps, or other viewable images, into a digital (raster) format.
There are many commonly used scanning cameras for GIS and other desktop applica-
tions, such as Eikonix (Eikonix Corp., Huntsville, Alabama) or Vexcel (Vexcel Imaging
Corp., Boulder, Colorado). Many scanners produce a TIFF file, which can be read
directly by ERDAS IMAGINE.
Eikonix data can be obtained in the ERDAS IMAGINE .img format using the XSCAN™ Tool
by Ektron and then imported directly into ERDAS IMAGINE.
ADRG Data

ADRG (ARC Digitized Raster Graphic) data, from the Defense Mapping Agency
(DMA), are primarily used for military purposes by defense contractors. The data are
in 128 × 128 pixel tiled, 8-bit format stored on CD-ROM. ADRG data provide large
amounts of hardcopy graphic data without having to store and maintain the actual
hardcopy graphics.
ADRG data consist of digital copies of DMA hardcopy graphics transformed into the
ARC system and accompanied by ASCII encoded support files. These digital copies are
produced by scanning each hardcopy graphic into three images: red, green, and blue.
The data are scanned at a nominal collection interval of 100 microns (254 lines per inch).
When these images are combined, they provide a 3-band digital representation of the
original hardcopy graphic.
ARC System

The ARC system (Equal Arc-Second Raster Chart/Map) provides a rectangular
coordinate and projection system at any scale for the earth’s ellipsoid, based on the
World Geodetic System 1984 (WGS 84). The ARC System divides the surface of the
ellipsoid into 18 latitudinal bands called zones. Zones 1 - 9 cover the Northern
hemisphere and zones 10 - 18 cover the Southern hemisphere. Zone 9 is the North Polar
region. Zone 18 is the South Polar region.
Distribution Rectangles
For distribution, ADRG are divided into geographic data sets called Distribution
Rectangles (DRs). A DR may include data from one or more source charts or maps. The
boundary of a DR is a geographic rectangle which typically coincides with chart and
map neatlines.
Each DR is divided into Zone Distribution Rectangles (ZDRs), one for each ARC zone that the DR covers. Each ZDR is stored as a separate scanned image, padded with background pixels to fit the ARC boundaries.
The padding pixels are not imported by ERDAS IMAGINE, nor are they counted when figuring the pixel height and width of each image.
ADRG File Format Each CD-ROM contains up to eight different file types which make up the ADRG format. ERDAS IMAGINE imports three types of ADRG data files:
• .OVR (Overview)
• .IMG (Image)
• .Lxx (Legend)
NOTE: Compressed ADRG (CADRG) is a different format, with its own importer.
The ADRG .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr
file formats.
.OVR (overview) The overview file contains a 16:1 reduced resolution image of the whole DR. There is an
overview file for each DR on a CD.
You can import from only one ZDR at a time. If a subset covers multiple ZDRs, they must be
imported separately and mosaicked with the ERDAS IMAGINE Mosaic option.
The white rectangle in Figure 30 represents the DR. The subset area in this illustration
would have to be imported as three files, one for each zone in the DR.
Notice how the ZDRs overlap. Therefore, the .IMG files for Zones 2 and 4 would also
be included in the subset area.
Figure 30: A Distribution Rectangle spanning Zones 2, 3, and 4, with overlap areas between adjacent zones and a subset area that crosses all three ZDRs
.IMG (scanned image data)
The .IMG files are the data files containing the actual scanned hardcopy graphic(s). Each .IMG file contains one ZDR plus padding pixels. The ERDAS IMAGINE Import function converts the .IMG data files on the CD-ROM to the IMAGINE file format (.img). The .img file can then be displayed in a Viewer.
.Lxx (legend data) Legend files contain a variety of diagrams and accompanying information. This is infor-
mation which typically appears in the margin or legend of the source graphic.
This information can be imported into ERDAS IMAGINE and viewed. It can also be added to a
map composition with the ERDAS IMAGINE Map Composer.
Each legend file contains information based on one of these diagram types:
• Index (IN) — shows the approximate geographical position of the graphic and its
relationship to other graphics in the region.
• Slope (SL) — represents the percent and degree of slope appearing in slope bands.
• Boundary (BN) — depicts the geopolitical boundaries included on the map or chart.
• Accuracy (HA, VA, AC) — depicts the horizontal and vertical accuracies of selected
map or chart areas. AC represents a combined horizontal and vertical accuracy
diagram.
• Glossary (GL) — gives brief lists of foreign geographical names appearing on the
map or chart with their English-language equivalents.
• Landmark Feature Symbols (LS) — landmark feature symbols are used to depict
navigationally-prominent entities.
Each ARC System chart type has certain legend files associated with the image(s) on the
CD-ROM. The legend files associated with each chart type are checked in Table 7.
ADRG File Naming Convention
The ADRG file naming convention is based on a series of codes: ssccddzz
• ss = the chart series code (see the table of ARC System charts)
• dd = the DR number on the CD-ROM (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south.
• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.
• .IMG = This file contains the actual scanned image data for a ZDR.
You may change this name when the file is imported into ERDAS IMAGINE. If you do not
specify a file name, IMAGINE will use the ADRG file name for the image.
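For illustration only, the sketch below splits an ADRG image file name of the form ssccddzz.IMG into its two-character code fields. Only the ss and dd fields are described above; the cc and zz labels, and the sample file name, are assumptions made purely for this example.

```python
def parse_adrg_name(filename):
    """Split an ADRG image file name (ssccddzz.IMG) into its code fields."""
    stem, _, extension = filename.partition(".")
    if len(stem) != 8:
        raise ValueError("expected an 8-character ssccddzz name")
    return {
        "ss": stem[0:2],         # chart series code (e.g., JN = Jet Navigation)
        "cc": stem[2:4],         # assumed: next two-character code in the name
        "dd": stem[4:6],         # DR number on the CD-ROM (01-99)
        "zz": stem[6:8],         # assumed: last two-character code in the name
        "extension": extension,  # IMG = scanned image data for a ZDR
    }

# Hypothetical file name, used only to show the field positions:
print(parse_adrg_name("JNXX0103.IMG"))
```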
• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.
• IN = This indicates that this file is an index diagram from the original hardcopy
graphic.
• .L01 = This legend file contains information for the source graphic 01. The source
graphics in each DR are numbered beginning with 01 for the northwesternmost
source graphic, increasing sequentially west to east, then north to south. Source
directories and their files include this number code within their names.
For more detailed information on ADRG file naming conventions, see the Defense Mapping
Agency Product Specifications for ARC Digitized Raster Graphics (ADRG), published by
DMA Aerospace Center.
ADRI Data ADRI (ARC Digital Raster Imagery), like ADRG data, are also from the DMA and are
currently available only to Department of Defense contractors. The data are in 128 × 128
tiled, 8-bit format, stored on 8 mm tape in band sequential format.
ADRI consists of SPOT panchromatic satellite imagery transformed into the ARC
system and accompanied by ASCII encoded support files.
Like ADRG, ADRI data are stored in the ARC system in DRs. Each DR consists of all or
part of one or more images mosaicked to meet the ARC bounding rectangle, which
encloses a 1 degree by 1 degree geographic area. (See Figure 31.) Source images are
orthorectified to mean sea level using DMA Level I Digital Terrain Elevation Data
(DTED) or equivalent data (Air Force Intelligence Support Agency 1991).
See the previous section on ADRG data for more information on the ARC system. See more
about DTED data on page 83.
Figure 31: Source images (Image 1 through Image 9) mosaicked to fill the ARC bounding rectangle of a Distribution Rectangle
In ADRI data, each DR contains only one ZDR. Each ZDR is stored as a single raster
image file, with no overlapping areas.
There are six different file types that make up the ADRI format: two types of data files,
three types of header files, and a color test patch file. ERDAS IMAGINE imports the two
types of ADRI data files:
• .OVR (Overview)
• .IMG (Image)
The ADRI .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr
file formats.
.OVR (overview) The overview file (.OVR) contains a 16:1 reduced resolution image of the whole DR.
There is an overview file for each DR on a tape. The .OVR images show the mosaicking
from the source images and the dates when the source images were collected. (See
Figure 32.) This does not appear on the ZDR image.
.IMG (scanned image data)
The .IMG files contain the actual mosaicked images. Each .IMG file contains one ZDR plus any padding pixels needed to fit the ARC boundaries. Padding pixels are black and have a zero data value. The ERDAS IMAGINE Import function converts the .IMG data files to the IMAGINE file format (.img). The .img file can then be displayed in a Viewer.
Padding pixels are not imported, nor are they counted in image height or width.
ADRI File Naming Convention
The ADRI file naming convention is based on a series of codes: ssccddzz
• ss = the image source code:
- SP (SPOT panchromatic)
• dd = the DR number on the tape (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south.
• .IMG = This file contains the actual scanned image data for a ZDR.
You may change this name when the file is imported into ERDAS IMAGINE. If you do not
specify a file name, IMAGINE will use the ADRI file name for the image.
Topographic Data Satellite data can also be used to create elevation, or topographic, data through the use
of stereoscopic pairs, as discussed above under SPOT. Radar sensor data can also be a
source of topographic information, as discussed in “CHAPTER 9: Terrain Analysis.”
However, most available elevation data are created with stereo photography and
topographic maps.
Arc/Second Format
Most elevation data are in arc/second format. Arc/second refers to data in the
Latitude/Longitude (Lat/Lon) coordinate system. The data are not rectangular, but
follow the arc of the earth’s latitudinal and longitudinal lines.
Each degree of latitude and longitude is made up of 60 minutes. Each minute is made
up of 60 seconds. Arc/second data are often referred to by the number of seconds in
each pixel. For example, 3 arc/second data have pixels which are 3 × 3 seconds in size.
The actual area represented by each pixel is a function of its latitude. Figure 33 illus-
trates a 1° × 1° area of the earth.
A row of data file values from a DEM or DTED file is called a profile. The profiles of
DEM and DTED run south to north, that is, the first pixel of the record is the southern-
most pixel.
Figure 33: A 1° × 1° area of the earth in arc/second format (longitude across the top, latitude along the side); each row and each column contains 1201 pixels
In Figure 33 there are 1201 pixels in the first row and 1201 pixels in the last row, but the
area represented by each pixel increases in size from the top of the file to the bottom of
the file. The extracted section in the example above has been exaggerated to illustrate
this point.
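To make the latitude dependence concrete, the sketch below estimates the approximate ground dimensions of an N × N arc-second pixel at a given latitude. It assumes a spherical earth of radius 6,371 km, an approximation used only for this illustration.

```python
import math

def arcsecond_pixel_size(latitude_deg, seconds=3.0, earth_radius_m=6371000.0):
    """Approximate ground size (meters) of an N x N arc-second pixel.

    Assumes a spherical earth: the north-south size is constant, while the
    east-west size shrinks with the cosine of the latitude.
    """
    meters_per_degree = math.pi * earth_radius_m / 180.0   # roughly 111 km
    north_south = meters_per_degree * seconds / 3600.0
    east_west = north_south * math.cos(math.radians(latitude_deg))
    return east_west, north_south

# A 3 arc-second pixel near the equator versus at 60 degrees latitude:
print(arcsecond_pixel_size(0.0))    # roughly (93 m, 93 m)
print(arcsecond_pixel_size(60.0))   # roughly (46 m, 93 m)
```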
Arc/second data used in conjunction with other image data, such as TM or SPOT, must
be rectified or projected onto a planar coordinate system such as UTM.
DEM DEMs are digital elevation model data. DEM was originally a term reserved for
elevation data provided by the United States Geological Survey (USGS), but it is now
used to describe any digital elevation data.
See "CHAPTER 9: Terrain Analysis" for more information on using DEMs. See "Ordering
Raster Data" on page 84 for information on ordering DEMs.
USGS DEMs
There are two types of DEMs that are most commonly available from USGS:
• 1:24,000 scale, also called 7.5-minute DEM, is usually referenced to the UTM coordinate system. It has a spatial resolution of 30 × 30 m.
• 1:250,000 scale, also called 1-degree DEM, is referenced to the Lat/Lon coordinate system. It has a spatial resolution of 3 × 3 arc seconds.
Both types have a 16-bit range of elevation values, meaning each pixel can have a possible elevation of -32,768 to 32,767.
DEM data are stored in ASCII format. The data file values in ASCII format are stored as
ASCII characters rather than as zeros and ones like the data file values in binary data.
DEM data files from USGS are initially oriented so that North is on the right side of the
image instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as
part of the Import process so that coordinates read with any IMAGINE program will be
correct.
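The orientation change described above can be pictured with a small NumPy example: a toy grid whose rows are south-to-north profiles (so north lies along the right edge) is rotated 90° counterclockwise, putting north at the top. This illustrates the rotation only; it is not the ERDAS IMAGINE Import code.

```python
import numpy as np

# Toy 3 x 3 grid standing in for a DEM as delivered: each row is a
# south-to-north profile, so north lies along the right-hand edge.
dem_as_delivered = np.array([
    [10, 11, 12],   # westernmost profile (south to north)
    [20, 21, 22],
    [30, 31, 32],   # easternmost profile
])

# Rotating 90 degrees counterclockwise puts north at the top of the array,
# the orientation expected when the grid is displayed as an image.
north_up = np.rot90(dem_as_delivered)
print(north_up)
# [[12 22 32]
#  [11 21 31]
#  [10 20 30]]
```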
DTED DTED data are produced by the Defense Mapping Agency (DMA) and are available
only to US government agencies and their contractors. DTED data are distributed on 9-
track tapes and on CD-ROM.
Both are in arc/second format and are distributed in cells. A cell is a 1° × 1° area of
coverage. Both have a 16-bit range of elevation values.
Like DEMs, DTED data files are also oriented so that North is on the right side of the
image instead of at the top. IMAGINE rotates the data 90° counterclockwise as part of
the Import process so that coordinates read with any ERDAS IMAGINE program will
be correct.
Using Topographic Data Topographic data have many uses in a GIS. For example, topographic data can be used
in conjunction with other data to:
• calculate the shortest and most navigable path over a mountain range
See "CHAPTER 9: Terrain Analysis" for more information about using topographic and
elevation data.
Ordering Raster Data
Table 8 describes the different Landsat, SPOT, AVHRR, and DEM products that can be ordered. Information in this chart does not reflect all the products that are available, but only the most common types that can be imported into ERDAS IMAGINE.
Table 8 columns: Data Type, Ground Covered, Pixel Size, # of Bands, Format, Available Geocoded
Addresses to Contact For more information about these and related products, contact the agencies below:
• SPOT data:
SPOT Image Corporation
1897 Preston White Dr.
Reston, VA 22091-4368 USA
Telephone: 703-620-2200
Fax: 703-648-1813
Internet: www.spot.com
• Landsat data:
EROS Data Center
Sioux Falls, SD 57198 USA
• ERS-1 radar data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-6413
Fax: 613-238-5425
Internet: www.rsi.ca
• RADARSAT data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-5424
Fax: 613-238-5425
Internet: www.rsi.ca
Raster Data from Other Software Vendors
ERDAS IMAGINE also enables the user to import data created by other software vendors. This way, if another type of digital data system is currently in use, or if data are received from another system, they can easily be converted to the ERDAS IMAGINE file format for use in ERDAS IMAGINE. The Import function will directly import these raster data types from other software systems:
• GRID
• Sun Raster
• TIFF
Other data types might be imported using the Generic import option.
Convert a vector layer to a raster layer, or vice versa, by using ERDAS IMAGINE Vector.
ERDAS Ver. 7.X The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE software. The two
basic types of ERDAS Ver. 7.X data files are indicated by the file name extensions:
• .LAN — a multiband continuous image file (the name is derived from the Landsat
satellite)
• .GIS — a single-band thematic data file in which pixels are divided into discrete
categories (the name is derived from geographic information system)
.LAN and .GIS image files are stored in the same format. The image data are arranged
in a BIL format and can be 4-bit, 8-bit, or 16-bit. The ERDAS Ver. 7.X file structure
includes:
When you import a .GIS file, it becomes an .img file with one thematic raster layer. When you
import a .LAN file, each band becomes a continuous raster layer within the .img file.
GRID GRID is a raster geoprocessing program distributed by Environmental Systems
Research Institute, Inc. (Redlands, California). GRID is a spatial analysis and modeling
language that enables the user to perform per-cell, per-neighborhood, per-zone, and
per-layer analyses. It was designed to function as a complement to the vector data
model system, ARC/INFO, a well-known vector GIS which is also distributed by ESRI.
GRID files are in a compressed tiled raster data structure. The name is taken from the
raster data format of presenting information in a grid of cells.
Sun Raster A Sun raster file is an image captured from a monitor display. In addition to GIS, Sun
Raster files can be used in desktop publishing applications or any application where a
screen capture would be useful.
There are two basic ways to create a Sun raster file on a Sun workstation:
Both methods read the contents of a frame buffer and write the display data to a user-
specified file. Depending on the display hardware and options chosen, screendump can
create any of the file types listed in Table 9.
TIFF The Tagged Image File Format (TIFF) was developed by Aldus Corp. (Seattle,
Washington) in 1986 in conjunction with major scanner vendors who needed an easily
portable file format for raster image data. Today, the TIFF format is a widely supported
format used in video, fax transmission, medical imaging, satellite imaging, document
storage and retrieval, and desktop publishing applications. In addition, the GEOTIFF
extensions permit TIFF files to be geocoded.
The TIFF format’s main appeal is its flexibility. It handles black and white line images,
as well as gray scale and color images, which can be easily transported between
different operating systems and computers.
Table 10 shows the most common TIFF format elements. The elements supported in
ERDAS IMAGINE are checked.
Any TIFF format that contains an unsupported element may not be compatible with ERDAS
IMAGINE.
Table 10: Common TIFF Format Elements
• Byte order: Motorola (MSB/LSB) ✓
• Image type: Black and white ✓; Gray scale ✓; Color palette ✓; RGB (3-band) ✓
• Configuration: BIP ✓; BSQ
• Bits per plane: 3, 5, 6, 7
• Compression: None ✓; CCITT G3 (B&W only) ✓; Packbits ✓; LZW****
**All bands must contain the same number of bits (i.e., 4,4,4 or 8,8,8). Multi-band data assigned to different
bits cannot be imported into IMAGINE.
****LZW is governed by patents and is not supported by the basic version of IMAGINE.
Vector Data from Other Software Vendors
It is possible to directly import several common vector formats into ERDAS IMAGINE. These files become vector layers when imported. These data can then be used for analyses and, in most cases, exported back to their original format (if desired).
Although data can be converted from one type to another by importing a file into
IMAGINE and then exporting the IMAGINE file into another format, the import and
export routines were designed to work together. For example, if a user has information
in AutoCAD that they would like to use in the GIS, they can import a DXF file into
ERDAS IMAGINE, do the analysis, and then export the data back to DXF format.
In most cases, attribute data are also imported into ERDAS IMAGINE. Each section
below lists the types of attribute data that are imported.
Use Import/Export to import vector data from other software vendors into ERDAS IMAGINE
vector layers. These routines are based on ARC/INFO data conversion routines.
See "CHAPTER 2: Vector Layers" for more information on ERDAS IMAGINE vector layers.
See"CHAPTER 10: Geographic Information Systems" for more information about using vector
data in a GIS.
ARCGEN ARCGEN files are ASCII files created with the ARC/INFO UNGENERATE command.
The import ARCGEN program is used to import features to a new layer. Topology is not created or maintained; therefore, the coverage must be built or cleaned after it is imported into ERDAS IMAGINE.
ARCGEN files must be properly prepared before they are imported into ERDAS IMAGINE. If
there is a syntax error in the data file, the import process may not work. If this happens, you must
kill the process, correct the data file, and then try importing again.
See the ARC/INFO documentation for more information about these files.
AutoCAD (DXF) AutoCAD is a vector software package distributed by Autodesk, Inc. (Sausalito,
California). AutoCAD is a computer-aided design program that enables the user to
draw two- and three-dimensional models. This software is frequently used in archi-
tecture, engineering, urban planning, and many other applications.
The AutoCAD Drawing Interchange File (DXF) is the standard interchange format used
by most CAD systems. The AutoCAD program DXFOUT will create a DXF file that can
be converted to an ERDAS IMAGINE vector layer. AutoCAD files can also be output to
IGES format using the AutoCAD program IGESOUT.
DXF files can be converted from either ASCII or binary format. The binary format is an
optional format for AutoCAD Releases 10 and 11. It is structured just like the ASCII
format, only the data are in binary format.
DXF files are composed of a series of related layers. Each layer contains one or more
drawing elements or entities. An entity is a drawing element that can be placed into an
AutoCAD drawing with a single command. When converted to an ERDAS IMAGINE
vector layer, each entity becomes a single feature. Table 11 describes how various DXF
entities are converted to IMAGINE.
Table 11: Conversion of DXF Entities
• Line, 3DLine: converted to Line features. These entities become two point lines. The initial Z value of 3D entities is stored.
• Trace, Solid, 3DFace: converted to Line features. These entities become four or five point lines. The initial Z value of 3D entities is stored.
• Circle, Arc: converted to Line features. These entities form lines. Circles are composed of 361 points—one vertex for each degree. The first and last points are at the same location.
• Polyline: converted to Line features. These entities can be grouped to form a single line having many vertices.
• Point, Shape: converted to Point features. These entities become point features in a layer.
The ERDAS IMAGINE import process also imports line and point attribute data (if they
exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and
XCODE (point attributes) files. If an imported DXF file is exported back to DXF format,
this information will also be exported.
Refer to an AutoCAD manual for more information about the format of DXF files.
DLG Digital Line Graphs (DLG) are furnished by the U.S. Geological Survey and provide
planimetric base map information, such as transportation, hydrography, contours, and
public land survey boundaries. DLG files are available for the following USGS map series:
• 1:24,000-scale (7.5-minute) quadrangles
• 1:100,000-scale quadrangles
• 1:2,000,000-scale national atlas maps
DLGs are topological files that contain nodes, lines, and areas (similar to the points,
lines, and polygons in an ERDAS IMAGINE vector layer). DLGs also store attribute
information in the form of major and minor code pairs. Code pairs are encoded in two
integer fields, each containing six digits. The major code describes the class of the
feature (road, stream, etc.) and the minor code stores more specific information about
the feature.
DLGs can be imported in standard format (144 bytes per record) and optional format
(80 bytes per record). The user can export to DLG-3 optional format. Most DLGs are in
the Universal Transverse Mercator map projection. However, the 1:2,000,000 scale
series are in geographic coordinates.
The ERDAS IMAGINE import process also imports point, line, and polygon attribute
data (if they exist) and creates an INFO directory with the appropriate ACODE (arc
attributes), PCODE (polygon attributes) and XCODE (point attributes) files. If an
imported DLG file is exported back to DLG format, this information will also be
exported.
To maintain the topology of a vector layer created from a DLG file, you must Build or Clean it.
See “CHAPTER 10: Geographic Information Systems” for information on this process.
ETAK ETAK’s MapBase is an ASCII digital street centerline map product available from
ETAK, Inc. (Menlo Park, California). ETAK files are similar in content to the Dual
Independent Map Encoding (DIME) format used by the U.S. Census Bureau. Each
record represents a single linear feature with address and political, census, and ZIP
code boundary information. ETAK has also included road class designations and, in
some areas, major landmark features.
Records in an ETAK file include the following types:
• Alternate address or A types — each record contains an alternate address record for a line. These records are written to the attribute file, and are useful for building address coverages.
• Shape features or S types — shape records are used to add vertices to the lines. The
coordinates for these features are in Lat/Lon decimal degrees.
• Landmark or L types — if the feature type is L and the user opts to output a
landmark layer, then a point feature is created along with an associated PCODE
record.
IGES Initial Graphics Exchange Standard (IGES) files are often used to transfer CAD data
between systems. IGES Version 3.0 format, published by the U.S. Department of
Commerce, is in uncompressed ASCII format only.
IGES files can be produced in AutoCAD using the IGESOUT command. The following
IGES entities can be converted:
The ERDAS IMAGINE import process also imports line and point attribute data (if they
exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and
XCODE (point attributes) files. If an imported IGES file is exported back to IGES format,
this information will also be exported.
TIGER Topologically Integrated Geographic Encoding and Referencing System (TIGER) files
are line network products of the U.S. Census Bureau. The Census Bureau is using the
TIGER system to create and maintain a digital cartographic database that covers the
United States, Puerto Rico, Guam, the Virgin Islands, American Samoa, and the Trust
Territories of the Pacific.
TIGER/Line is the line network product of the TIGER system. The cartographic base is
taken from Geographic Base File/Dual Independent Map Encoding (GBF/DIME),
where available, and from the USGS 1:100,000-scale national map series, SPOT imagery,
and a variety of other sources in all other areas, in order to have continuous coverage
for the entire United States. In addition to line segments, TIGER files contain census
geographic codes and, in metropolitan areas, address ranges for the left and right sides
of each segment. TIGER files are available in ASCII format on both CD-ROM and tape
media. All released versions after April 1989 are supported.
There is a great deal of attribute information provided with TIGER/Line files. Line and
point attribute information can be converted into ERDAS IMAGINE format. The
ERDAS IMAGINE import process creates an INFO directory with the appropriate
ACODE (arc attributes) and XCODE (point attributes) files. If an imported TIGER file
is exported back to TIGER format, this information will also be exported.
Attribute information in TIGER/Line files includes:
• Source codes—each line and landmark point feature is assigned a code to specify the original source
• Census feature class codes—line segments representing physical features are coded
based on the USGS classification codes in DLG-3 files
• Legal and statistical area attributes—legal areas include states, counties, townships,
towns, incorporated cities, Indian reservations, and national parks. Statistical areas
are areas used during the census-taking, where legal areas are not adequate for
reporting statistics.
TIGER files for major metropolitan areas outside of the United States (e.g., Puerto Rico, Guam)
do not have address ranges.
Disk Space Requirements
TIGER/Line files are partitioned into counties ranging in size from less than a
megabyte to almost 120 megabytes. The average size is approximately 10 megabytes.
To determine the amount of disk space required to convert a set of TIGER/Line files, use this rule of thumb: the converted layers are approximately the same size as the files used in the conversion. The amount of additional scratch space needed depends on the largest file and whether it needs to be sorted; the scratch space required is usually about double the size of the file being sorted.
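The rule of thumb above can be written as a small helper: the converted layers take roughly the combined size of the input files, plus scratch space of about twice the largest file when it must be sorted. The function below is only a sketch of that estimate.

```python
def estimate_tiger_conversion_space(file_sizes_mb, needs_sort=True):
    """Rough disk-space estimate (megabytes) for converting TIGER/Line files.

    Converted layers are about the same size as the input files; scratch space
    is about double the largest file when it has to be sorted.
    """
    converted = sum(file_sizes_mb)
    scratch = 2 * max(file_sizes_mb) if needs_sort else 0
    return converted + scratch

# Example: three county files of 8, 25, and 110 MB.
print(estimate_tiger_conversion_space([8, 25, 110]))   # 143 + 220 = 363 MB
```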
The information presented in this section, "Vector Data from Other Software Vendors", was
obtained from the Data Conversion and the 6.0 ARC Command References manuals, both
published by ESRI, Inc., 1992.
CHAPTER 4
Image Display
Introduction This section defines some important terms that are relevant to image display. Most of
the terminology and definitions used in this chapter are based on the X Window System
(Massachusetts Institute of Technology) terminology. This may differ from other
systems, such as Microsoft Windows NT.
• A display may consist of multiple screens. These screens work together, making it
possible to move the mouse from one screen to the next.
• The display hardware contains the memory that is used to produce the image. This
hardware determines which types of displays are available (e.g., true color or
pseudo color) and the pixel depth (e.g., 8-bit or 24-bit).
Figure 34: Example of One Seat with One Display and Two Screens
Display Memory Size The size of memory varies for different displays. It is expressed in terms of:
• the number of bits for each pixel or pixel depth, as explained below.
Bits for Image Plane
A bit is a binary digit, meaning a number that can have two possible values—0 and 1,
or “off” and “on.” A set of bits, however, can have many more values, depending upon
the number of bits used. The number of values that can be expressed by a set of bits is
2 to the power of the number of bits used. For example, the number of values that can
be expressed by 3 bits is 8 (2³ = 8).
Displays are referred to in terms of a number of bits, such as 8-bit or 24-bit. These bits
are used to determine the number of possible brightness values. For example, in a 24-
bit display, 24 bits per pixel breaks down to eight bits for each of the three color guns
per pixel. The number of possible values that can be expressed by eight bits is 2⁸, or 256.
Therefore, on a 24-bit display, each color gun of a pixel can have any one of 256 possible
brightness values, expressed by the range of values 0 to 255.
The combination of the three color guns, each with 256 possible brightness values, yields 256³ (or 2²⁴ for the 24-bit image display), or 16,777,216 possible colors for each pixel on a 24-bit display. If the display being used is not 24-bit, the same arithmetic can be used to calculate the number of possible brightness values and colors that can be displayed.
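The arithmetic above can be restated in a few lines. This sketch simply applies the 2-to-the-power-of-bits rule to compute the brightness values per color gun and the total number of displayable colors.

```python
def display_capacity(bits_per_gun, guns=3):
    """Return (brightness values per color gun, total displayable colors)."""
    values_per_gun = 2 ** bits_per_gun
    total_colors = values_per_gun ** guns
    return values_per_gun, total_colors

print(display_capacity(8))   # (256, 16777216) -- a 24-bit display
```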
Pixel The term pixel is abbreviated from picture element. As an element, a pixel is the
smallest part of a digital picture (image). Raster image data are divided by a grid, in
which each cell of the grid is represented by a pixel. A pixel is also called a grid cell.
The term pixel may refer to either:
• the data file value(s) for one data unit in an image (file pixels), or
• one grid location on a display or printout (display pixels).
Usually, one pixel in a file corresponds to one pixel in a display or printout. However,
an image can be magnified or reduced so that one file pixel no longer corresponds to
one pixel in the display or printout. For example, if an image is displayed with a magni-
fication factor of 2, then one file pixel will take up 4 (2 × 2) grid cells on the display
screen.
To display an image, a file pixel that consists of one or more numbers must be trans-
formed into a display pixel with properties that can be seen, such as brightness and
color. Whereas the file pixel has values that are relevant to data (such as wavelength of
reflected light), the displayed pixel must have a particular color or gray level that repre-
sents these data file values.
Colors Human perception of color comes from the relative amounts of red, green, and blue
light that are measured by the cones (sensors) in the eye. Red, green, and blue light can
be added together to produce a wide variety of colors—a wider variety than can be
formed from the combinations of any three other colors. Red, green, and blue are
therefore the additive primary colors.
A nearly infinite number of shades can be produced when red, green, and blue light are
combined. On a display, different colors (combinations of red, green, and blue) allow
the user to perceive changes across an image. Color displays that are available today
yield 2²⁴, or 16,777,216 colors. Each color has a possible 256 different values (2⁸).
Color Guns
On a display, color guns direct electron beams that fall on red, green, and blue
phosphors. The phosphors glow at certain frequencies to produce different colors.
Color monitors are often called RGB monitors, referring to the primary colors.
The red, green, and blue phosphors on the picture tube appear as tiny colored dots on
the display screen. The human eye integrates these dots together, and combinations of
red, green, and blue are perceived. Each pixel is represented by an equal number of red,
green, and blue phosphors.
Brightness Values
Brightness values (or intensity values) are the quantities of each primary color to be
output to each displayed pixel. When an image is displayed, brightness values are
calculated for all three color guns, for every pixel.
All of the colors that can be output to a display can be expressed with three brightness
values—one for each color gun.
Colormap and Colorcells A color on the screen is created by a combination of red, green, and blue values, where
each of these components is represented as an 8-bit value. Therefore, 24 bits are needed
to represent a color. Since many systems have only an 8-bit display, a colormap is used
to translate the 8-bit value into a color. A colormap is an ordered set of colorcells, which
is used to perform a function on a set of input values. To display or print an image, the
colormap translates data file values in memory into brightness values for each color
gun. Colormaps are not limited to 8-bit displays.
Colorcells
There is a colorcell in the colormap for each data file value. The red, green, and blue
values assigned to the colorcell control the brightness of the color guns for the displayed
pixel (Nye 1990). The number of colorcells in a colormap is determined by the number
of bits in the display (e.g., 8-bit, 24-bit).
For example, if a pixel with a data file value of 40 was assigned a display value (colorcell
value) of 24, then this pixel would use the brightness values for the 24th colorcell in the
colormap. In the colormap below (Table 13), this pixel would be displayed as blue.
Table 13: Example Colorcell Values

Colorcell Index    Red    Green    Blue
       1           255      0        0
       2             0    170       90
       3             0      0      255
      24             0      0      255
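The lookup just described can be sketched as follows: a data file value is mapped to a colorcell index, and that colorcell supplies the red, green, and blue brightness values. The tiny colormap below mirrors Table 13 and is purely illustrative.

```python
# Illustrative colormap: colorcell index -> (red, green, blue) brightness values.
colormap = {
    1: (255, 0, 0),
    2: (0, 170, 90),
    3: (0, 0, 255),
    24: (0, 0, 255),
}

# Illustrative lookup of data file values to colorcell indices.
data_value_to_colorcell = {40: 24}

data_file_value = 40
colorcell_index = data_value_to_colorcell[data_file_value]
print(colorcell_index, colormap[colorcell_index])   # 24 (0, 0, 255): displayed as blue
```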
The colormap is controlled by the Windows system. There are 256 colorcells in a
colormap with an 8-bit display. This means that 256 colors can be displayed simulta-
neously on the display. With a 24-bit display, there are 256 colorcells for each color: red,
green, and blue. This offers 256 × 256 × 256 or 16,777,216 different colors.
When an application requests a color, the server will specify which colorcell contains
that color and will return the color. Colorcells can be read-only or read/write.
Read-Only Colorcells
The color assigned to a read-only colorcell can be shared by other application windows,
but it cannot be changed once it is set. Because of this, the color of a displayed pixel cannot be changed by changing the color of its corresponding colorcell; instead, the pixel value itself would have to be changed and the image redisplayed. For this reason, it is
not possible to use auto update operations in ERDAS IMAGINE with read-only
colorcells.
Read/Write Colorcells
The color assigned to a read/write colorcell can be changed, but it cannot be shared by
other application windows. An application can easily change the color of displayed
pixels by changing the color for the colorcell that corresponds to the pixel value. This
allows applications to use auto update operations. However, this colorcell cannot be
shared by other application windows, and all of the colorcells in the colormap could
quickly be utilized.
Changeable Colormaps
Some colormaps can have both read-only and read/write colorcells. This type of
colormap allows applications to utilize the type of colorcell that would be most
preferred.
Display Types The possible range of different colors is determined by the display type. ERDAS
IMAGINE supports the following types of displays:
• 8-bit PseudoColor
• 24-bit DirectColor
• 24-bit TrueColor
A display may offer more than one visual type and pixel depth. See “ERDAS IMAGINE 8.3
Installing and Configuring” for more information on specific display hardware.
32-bit Displays
A 32-bit display is a combination of an 8-bit PseudoColor and 24-bit DirectColor or
TrueColor display. Whether or not it is DirectColor or TrueColor depends on the
display hardware.
8-bit PseudoColor
In Figure 35, data file values for a pixel of three continuous raster layers (bands) are transformed to a colorcell value. Since the colorcell value is four, the pixel is displayed with the brightness values of the fourth colorcell (blue).
This display grants a small number of colors to ERDAS IMAGINE. It works well with
thematic raster layers containing less than 200 colors and with gray scale continuous
raster layers. For image files with three continuous raster layers (bands), the colors will
be severely limited because, under ideal conditions, 256 colors are available on an 8-bit
display, while 8-bit, 3-band image files can contain over 16,000,000 different colors.
Auto Update
An 8-bit PseudoColor display has read-only and read/write colorcells, allowing
ERDAS IMAGINE to perform near real-time color modifications using Auto Update
and Auto Apply options.
24-bit DirectColor A 24-bit DirectColor display enables the user to view up to three bands of data at one
time, creating displayed pixels that represent the relationships between the bands by
their colors. Since this is a 24-bit display, it offers up to 256 shades of red, 256 shades of
green, and 256 shades of blue, which is approximately 16 million different colors (256³).
The data file values for each band are transformed into colorcell values. The colorcell
that is specified by these values is used to define the color to be displayed.
Figure 36: Transforming Data File Values to Separate Colorcell Values on a 24-bit DirectColor Display (in this example, the red, green, and blue colorcells supply brightness values of 0, 90, and 200, producing a blue-green pixel)
In Figure 36, data file values for a pixel of three continuous raster layers (bands) are
transformed to separate colorcell values for each band. Since the colorcell value is 1 for
the red band, 2 for the green band, and 6 for the blue band, the RGB brightness values
are 0, 90, 200. This displays the pixel as a blue-green color.
This type of display grants a very large number of colors to ERDAS IMAGINE and it
works well with all types of data.
Auto Update
A 24-bit DirectColor display has read-only and read/write colorcells, allowing ERDAS
IMAGINE to perform real-time color modifications using the Auto Update and Auto
Apply options.
24-bit TrueColor
A 24-bit TrueColor display transforms the data file values for each band into screen values, which are used directly as the brightness values for the red, green, and blue color guns. Since this is a 24-bit display, it offers 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³).
In Figure 37, data file values for a pixel of three continuous raster layers (bands) are
transformed to separate screen values for each band. Since the screen value is 0 for the
red band, 90 for the green band and 200 for the blue band, the RGB brightness values
are 0, 90, and 200. This displays the pixel as a blue-green color.
Auto Update
The 24-bit TrueColor display does not use the colormap in ERDAS IMAGINE, and thus
does not provide IMAGINE with any real-time color changing capability. Each time a
color is changed, the screen values must be calculated and the image must be re-drawn.
Color Quality
The 24-bit TrueColor visual provides the best color quality possible with standard
equipment. There is no color degradation under any circumstances with this display.
PC Displays ERDAS IMAGINE for Microsoft Windows NT supports the following visual types and pixel depths:
• 8-bit PseudoColor
• 15-bit HiColor
• 24-bit TrueColor
8-bit PseudoColor
An 8-bit PseudoColor display for the PC uses the same type of colormap as the X
Windows 8-bit PseudoColor display, except that each colorcell has a range of 0 to 63 on
most video display adapters, instead of 0 to 255. Therefore, each colorcell has a red,
green, and blue brightness value, giving 64 different combinations of red, green, and
blue. The colormap, however, is the same as the X Windows 8-bit PseudoColor display.
It has 256 colorcells allowing 256 different colors to be displayed simultaneously.
15-bit HiColor
A 15-bit HiColor display for the PC assigns colors the same way as the X Windows 24-
bit TrueColor display, except that it offers 32 shades of red, 32 shades of green, and 32
shades of blue, for a total of 32,768 possible color combinations. Some video display
adapters allocate 6 bits to the green color gun, allowing 65,536 colors. These
adapters use a 16-bit color scheme.
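The difference between the 15-bit (5-5-5) and 16-bit (5-6-5) schemes comes down to how the red, green, and blue values are packed into a single word. The sketch below shows that packing; it illustrates the color schemes only and is not an ERDAS IMAGINE routine.

```python
def pack_555(r, g, b):
    """Pack 8-bit RGB into a 15-bit HiColor value: 5 bits (32 shades) per gun."""
    return ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)

def pack_565(r, g, b):
    """Pack 8-bit RGB into a 16-bit value: 6 bits (64 shades) for the green gun."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(32 * 32 * 32)   # 32768 possible colors with 5-5-5
print(32 * 64 * 32)   # 65536 possible colors with 5-6-5
print(hex(pack_555(255, 128, 64)), hex(pack_565(255, 128, 64)))   # 0x7e08 0xfc08
```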
24-bit TrueColor
A 24-bit TrueColor display for the PC assigns colors the same way as the X Windows
24-bit TrueColor display.
Displaying Raster Layers
Raster layers in ERDAS IMAGINE image files (.img) are of two types:
• continuous
• thematic
Thematic raster layers require a different display process than continuous raster layers. This section explains how each raster layer type is displayed.
Continuous Raster Layers
An image file (.img) can contain several continuous raster layers, and therefore, each pixel can have multiple data file values. When displaying an image file with continuous raster layers, it is possible to assign which layers (bands) are to be displayed with each of the three color guns. The data file values in each layer are input to the assigned color gun. The most useful color assignments are those that allow for an easy interpretation of the displayed image. For example:
• a natural-color image will approximate the colors that would appear to a human
observer of the scene.
Band assignments are often expressed in R,G,B order. For example, the assignment 4,2,1
means that band 4 is assigned to red, band 2 to green, and band 1 to blue. Below are
some widely used band to color gun assignments (Faust 1989):
Contrast Table
When an image is displayed, ERDAS IMAGINE automatically creates a contrast table
for continuous raster layers. The red, green, and blue brightness values for each band
are stored in this table.
Since the data file values in continuous raster layers are quantitative and related, the
brightness values in the colormap are also quantitative and related. The screen pixels
represent the relationships between the values of the file pixels by their colors. For
example, a screen pixel that is bright red has a high brightness value in the red color
gun, and a high data file value in the layer assigned to red, relative to other data file
values in that layer.
The brightness values often differ from the data file values, but they usually remain in
the same order of lowest to highest. Some meaningful relationships between the values
are usually maintained.
Contrast Stretch
Different displays have different ranges of possible brightness values. The range of
most displays is 0 to 255 for each color gun.
Since the data file values in a continuous raster layer often represent raw data (such as
elevation or an amount of reflected light), the range of data file values is often not the
same as the range of brightness values of the display. Therefore, a contrast stretch is
usually performed, which stretches the range of the values to fit the range of the
display.
For example, Figure 38 shows a layer that has data file values from 30 to 40. When these
values are used as brightness values, the contrast of the displayed image is poor. A
contrast stretch simply “stretches” the range between the lower and higher data file
values, so that the contrast of the displayed image is higher—that is, lower data file
values are displayed with the lowest brightness values, and higher data file values are
displayed with the highest brightness values.
The colormap stretches the range of colorcell values from 30 to 40 to the range 0 to 255.
Since the output values are incremented at regular intervals, this stretch is a linear
contrast stretch. (The numbers in Figure 38 are approximations and do not show an
exact linear relationship.)
Figure 38: Contrast Stretch. Input colorcell values (30 to 40 range) are mapped to output brightness values (0 to 255):
30 ➛ 0, 31 ➛ 25, 32 ➛ 51, 33 ➛ 76, 34 ➛ 102, 35 ➛ 127, 36 ➛ 153, 37 ➛ 178, 38 ➛ 204, 39 ➛ 229, 40 ➛ 255
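The linear stretch in Figure 38 can be written as a single mapping from the input range to the output range. The sketch below reproduces the 30-to-40 example, assuming NumPy; it is not the lookup-table code used by ERDAS IMAGINE.

```python
import numpy as np

def linear_stretch(values, in_min, in_max, out_min=0, out_max=255):
    """Linearly map values in [in_min, in_max] onto [out_min, out_max]."""
    scaled = (values - in_min) / float(in_max - in_min)
    return (scaled * (out_max - out_min) + out_min).astype(int)

data = np.arange(30, 41)             # data file values 30 through 40
print(linear_stretch(data, 30, 40))  # [  0  25  51  76 102 127 153 178 204 229 255]
```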
See "CHAPTER 5: Enhancement" for more information about contrast stretching. Contrast
stretching is performed the same way for display purposes as it is for permanent image
enhancement.
Statistics Files
To perform a contrast stretch, certain statistics are necessary, such as the mean and the
standard deviation of the data file values in each layer.
Use the Image Information utility to create and view statistics for a raster layer.
Usually, not all of the data file values are used in the contrast stretch calculations. The
minimum and maximum data file values of each band are often too extreme to produce
good results. When the minimum and maximum are extreme in relation to the rest of
the data, then the majority of data file values are not stretched across a very wide range,
and the displayed image has low contrast.
Histograms of the data file values before and after a contrast stretch: the stretch range is defined by the number of standard deviations above and below the mean, and stretched values that fall below 0 or above 255 are not displayed.
The mean and standard deviation of the data file values for each band are used to locate
the majority of the data file values. The number of standard deviations above and below
the mean can be entered, which determines the range of data used in the stretch.
See "APPENDIX A: Math Topics" for more information on mean and standard deviation.
Use the Contrast Tools dialog, which is accessible from the Lookup Table Modification dialog, to
enter the number of standard deviations to be used in the contrast stretch.
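A sketch of the standard-deviation approach described above: the band mean and standard deviation define the stretch range (for example, mean ± 2 standard deviations), values outside that range saturate at 0 or 255, and the values in between are stretched linearly. This assumes NumPy and is only an outline of the idea.

```python
import numpy as np

def stddev_stretch(band, num_std=2.0):
    """Stretch a band using mean +/- num_std standard deviations as the input range."""
    mean, std = band.mean(), band.std()
    low, high = mean - num_std * std, mean + num_std * std
    clipped = np.clip(band, low, high)   # extreme values saturate at the range limits
    return ((clipped - low) / (high - low) * 255).astype(np.uint8)

band = np.random.normal(loc=120.0, scale=30.0, size=(4, 4))
print(stddev_stretch(band))
```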
Contrast stretch by band: the histogram of each band determines the range of data file values to be displayed; the colormap maps those data file values to output brightness values (0 to 255) for the red, green, and blue color guns, which combine to form the color display.
Thematic Raster Layers
A thematic raster layer generally contains pixels that have been classified, or put into
distinct categories. Each data file value is a class value, which is simply a number for a
particular category. A thematic raster layer is stored in an image (.img) file. Only one
data file value—the class value—is stored for each pixel.
Since these class values are not necessarily related, the gradations that are possible in
true color mode are not usually useful in pseudo color. The class system gives the
thematic layer a discrete look, in which each class can have its own color.
Color Table
When a thematic raster layer is displayed, ERDAS IMAGINE automatically creates a
color table. The red, green, and blue brightness values for each class are stored in this
table.
RGB Colors
Individual color schemes can be created by combining red, green, and blue in different
combinations, and assigning colors to the classes of a thematic layer.
Colors can be expressed numerically, as the brightness values for each color gun.
Brightness values of a display generally range from 0 to 255; however, ERDAS IMAGINE translates the values to the range 0 to 1. The maximum brightness value for the display device is
scaled to 1. The colors listed in Table 14 are based on the range that would be used to
assign brightness values in ERDAS IMAGINE.
Table 14 contains only a partial listing of commonly used colors. Over 16 million colors
are possible on a 24-bit display.
NOTE: Black is the absence of all color (0,0,0) and white is created from the highest values of all
three colors (1,1,1). To lighten a color, increase all three brightness values. To darken a color,
decrease all three brightness values.
Use the Raster Attribute Editor to create your own color scheme.
Brightness values of 0, 128, and 255 in each color gun are combined to assign a distinct color (such as red, orange, yellow, green, or violet) to each class in the displayed thematic layer.
Using the IMAGINE Viewer
The ERDAS IMAGINE Viewer is a window for displaying raster, vector, and annotation layers. The user can open as many Viewer windows as their window manager supports.
NOTE: The more Viewers that are opened simultaneously, the more memory is necessary.
The ERDAS IMAGINE Viewer not only makes digital images visible quickly, but it can
also be used as a tool for image processing and raster GIS modeling. The uses of the
Viewer are listed briefly in this section, and described in greater detail in other chapters
of the ERDAS Field Guide.
Colormap
ERDAS IMAGINE does not use the entire colormap because there are other applica-
tions that also need to use it, including the window manager, terminal windows, ArcView, or a clock. Therefore, there are some limitations to the number of colors that the
Viewer can display simultaneously, and flickering may occur as well.
Color Flickering
If an application requests a new color that does not exist in the colormap, the server will
assign that color to an empty colorcell. However, if there are not any available colorcells
and the application requires a private colorcell, then a private colormap will be created
for the application window. Since this is a private colormap, when the cursor is moved
out of the window, the server will use the main colormap and the brightness values
assigned to the colorcells. Therefore, the colors in the private colormap will not be
applied and the screen will flicker. Once the cursor is moved into the application
window, the correct colors will be applied for that window.
Resampling
When a raster layer(s) is displayed, the file pixels may be resampled for display on the
screen. Resampling is used to calculate pixel values when one raster grid must be fitted
to another. In this case, the raster grid defined by the file must be fit to the grid of screen
pixels in the Viewer.
All Viewer operations are file-based. So, any time an image is resampled in the Viewer,
the Viewer uses the file as its source. If the raster layer is magnified or reduced, the
Viewer re-fits the file grid to the new screen grid.
Resampling methods include:
• Nearest Neighbor - uses the value of the closest pixel to assign to the output pixel value.
• Bilinear Interpolation - uses the data file values of four pixels in a 2 × 2 window to
calculate an output value with a bilinear function.
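A minimal sketch of the two resampling methods listed above, assuming NumPy: nearest neighbor takes the value of the closest file pixel, while bilinear interpolation blends the four file pixels in a 2 × 2 window according to the fractional position. The Viewer's own file-based resampling is more involved; this only illustrates the arithmetic.

```python
import numpy as np

def nearest_neighbor(image, row, col):
    """Assign the value of the closest file pixel to the output pixel."""
    return image[int(round(row)), int(round(col))]

def bilinear(image, row, col):
    """Blend the four file pixels in the surrounding 2 x 2 window."""
    r0, c0 = int(np.floor(row)), int(np.floor(col))
    dr, dc = row - r0, col - c0
    window = image[r0:r0 + 2, c0:c0 + 2].astype(float)
    top = window[0, 0] * (1 - dc) + window[0, 1] * dc
    bottom = window[1, 0] * (1 - dc) + window[1, 1] * dc
    return top * (1 - dr) + bottom * dr

image = np.array([[10, 20], [30, 40]])
print(nearest_neighbor(image, 0.4, 0.6))   # 20 (closest to row 0, column 1)
print(bilinear(image, 0.5, 0.5))           # 25.0 (average of the 2 x 2 window)
```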
Preference Editor
The ERDAS IMAGINE Preference Editor enables the user to set parameters for the
ERDAS IMAGINE Viewer that affect the way the Viewer operates.
See the ERDAS IMAGINE On-Line Help for the Preference Editor for information on how to
set preferences for the Viewer.
Pyramid Layers Sometimes a large .img file may take a long time to display in the ERDAS IMAGINE
Viewer or to be resampled by an application. The Pyramid Layer option enables the
user to display large images faster and allows certain applications to rapidly access the
resampled data. Pyramid layers are image layers which are copies of the original layer
successively reduced by powers of 2 and then resampled. If the raster layer is
thematic, then it is resampled using the Nearest Neighbor method. If the raster layer is
continuous, it is resampled by a method that is similar to cubic convolution. The data
file values for sixteen pixels in a 4 × 4 window are used to calculate an output data file
value with a filter function.
The number of pyramid layers created depends on the size of the original image. A
larger image will produce more pyramid layers. When the Create Pyramid Layer
option is selected, ERDAS IMAGINE automatically creates successively reduced layers
until the final pyramid layer can be contained in one block. The default block size is 64
× 64 pixels.
Pyramid layers are added as additional layers in the .img file. However, these layers
cannot be accessed for display. The file size is increased by approximately one-third
when pyramid layers are created. The actual increase in file size can be determined by
multiplying the layer size by this formula:

∑_{i=1}^{n} 1/4^i

where n is the number of pyramid layers created. This sum approaches 1/3, which is why the file size increases by approximately one-third.
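The layer reduction and the size formula can be illustrated as follows: each pyramid level halves the image dimensions until a level fits within a single 64 × 64 block, and the total extra storage approaches one third of the original layer. This is a sketch of the arithmetic only, not of how pyramid layers are actually written to the .img file.

```python
def pyramid_levels(width, height, block_size=64):
    """Halve the image dimensions until a level fits within a single block."""
    levels = []
    while width > block_size or height > block_size:
        width, height = max(1, width // 2), max(1, height // 2)
        levels.append((width, height))
    return levels

def size_increase_fraction(num_levels):
    """Sum of 1/4**i for i = 1..n: the fractional growth in file size."""
    return sum(1.0 / 4 ** i for i in range(1, num_levels + 1))

levels = pyramid_levels(4096, 4096)
print(levels)                                # [(2048, 2048), ..., (64, 64)]
print(size_increase_fraction(len(levels)))   # about 0.333, roughly one third
```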
Pyramid layers do not appear as layers which can be processed; they are for viewing
purposes only. Therefore, they will not appear as layers in other parts of the ERDAS
IMAGINE system (e.g., the Arrange Layers dialog).
Pyramid layers can be deleted through the Image Information utility. However, when pyramid
layers are deleted, they will not be deleted from the .img file - so the .img file size will not change,
but ERDAS IMAGINE will utilize this file space, if necessary. Pyramid layers are deleted from
viewing and resampling access only - that is, they can no longer be viewed or used in an appli-
cation.
Pyramid layers are stored in the .img file, from the original image (4K × 4K in this example) down through successively reduced layers (such as 128 × 128 and 64 × 64); IMAGINE selects the pyramid layer that will display the fastest in the Viewer.
For example, a file which is 4K × 4K pixels could take a long time to display when the
image is fit to the Viewer. The Pyramid Layer option creates additional layers successively reduced from 4K × 4K, to 2K × 2K, 1K × 1K, 512 × 512, 256 × 256, 128 × 128, down to 64 × 64.
ERDAS IMAGINE then selects the pyramid layer size most appropriate for display in
the Viewer window when the image is displayed.
The pyramid layer option is available from Import and the Image Information utility.
Dithering A display is capable of viewing only a limited number of colors simultaneously. For
example, an 8-bit display has a colormap with 256 colorcells, therefore, a maximum of
256 colors can be displayed at the same time. If some colors are being used for auto
update color adjustment while other colors are still being used for other imagery, the
color quality will degrade.
Dithering lets a smaller set of colors appear to be a larger set of colors. If the desired
display color is not available, a dithering algorithm mixes available colors to provide
something that looks like the desired color.
For a simple example, assume the system can display only two colors, black and white,
and the user wants to display gray. This can be accomplished by alternating the display
of black and white pixels.
In Figure 43, dithering is used between a black pixel and a white pixel to obtain a gray
pixel.
The colors that the ERDAS IMAGINE Viewer will dither between will be similar to each
other, and will be dithered on the pixel level. Using similar colors and dithering on the
pixel level makes the image appear smooth.
Dithering allows multiple images to be displayed in different Viewers without refreshing the
currently displayed image(s) each time a new image is displayed.
Color Patches
When the Viewer performs dithering, it uses patches of 2 × 2 pixels. If the desired color
has an exact match, then all of the values in the patch will match it. If the desired color
is halfway between two of the usable colors, the patch will contain two pixels of each of
the surrounding usable colors. If it is 3/4 of the way between two usable colors, the
patch will contain 3 pixels of the color it is closest to and 1 pixel of the color that is
second closest. Figure 44 shows what the color patches would look like if the usable
colors were black and white and the desired color was gray.
If the desired color is not an even multiple of 1/4 of the way between two allowable
colors, it is rounded to the nearest 1/4. The Viewer separately dithers the red, green,
and blue components of a desired color.
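The 2 × 2 patch idea can be sketched directly: the desired intensity between two usable colors is rounded to the nearest quarter, and that fraction decides how many of the four patch pixels take the nearer usable color. Each of the red, green, and blue components would be dithered this way separately; the sketch below handles a single component and is illustrative only.

```python
def dither_patch(desired, low, high):
    """Build a 2 x 2 patch approximating `desired` using only `low` and `high`."""
    fraction = (desired - low) / float(high - low)   # position between the usable colors
    quarters = int(round(fraction * 4))              # rounded to the nearest 1/4
    cells = [high] * quarters + [low] * (4 - quarters)
    return [cells[0:2], cells[2:4]]

# Desired gray roughly halfway between black (0) and white (255):
print(dither_patch(128, 0, 255))   # [[255, 255], [0, 0]]: two pixels of each color
```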
Color Artifacts
Since the Viewer requires 2 × 2 pixel patches to represent a color, and actual images
typically have a different color for each pixel, artifacts may appear in an image that has
been dithered. Usually, the difference in color resolution is insignificant, because
adjacent pixels are normally similar to each other. Similarity between adjacent pixels
usually smooths out artifacts that would appear.
Viewing Layers The ERDAS IMAGINE Viewer displays layers as one of the following types of view
layers:
• annotation
• vector
• pseudo color
• gray scale
• true color
Viewing Multiple Layers It is possible to view as many layers of all types (with the exception of vector layers, which have a limit of 10) at one time in a single Viewer.
To overlay multiple layers in one Viewer, they must all be referenced to the same map
coordinate system. The layers are positioned geographically within the window, and
resampled to the same scale as previously displayed layers. Therefore, raster layers in
one Viewer can have different cell sizes.
When multiple layers are magnified or reduced, raster layers are resampled from the
file to fit to the new scale.
Display multiple layers from the Viewer. Be sure to turn off the Clear Display check box when
you open subsequent layers.
Overlapping Layers
When layers overlap, the order in which the layers are opened is very important. The
last layer that is opened will always appear to be “on top” of the previously opened
layers.
In a raster layer, it is possible to make values of zero transparent in the Viewer, meaning
that they have no opacity. Thus, if a raster layer with zeros is displayed over other
layers, the areas with zero values will allow the underlying layers to show through.
Opacity is a measure of how opaque, or solid, a color is displayed in a raster layer:
• 0% opacity means that a color is completely transparent, so the underlying layers show through.
• 100% opacity means that a color is completely opaque, and cannot be seen through.
• 50% opacity lets some color show, and lets some of the underlying layers show
through. The effect is like looking at the underlying layers through a colored fog.
By manipulating opacity, you can compare two or more layers of raster data that are displayed
in a Viewer. Opacity can be set at any value in the range of 0% to 100%. Use the Arrange Layers
dialog to re-stack layers in a Viewer so that they overlap in a different order, if needed.
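Opacity can be pictured as a simple blend between a layer's color and whatever lies beneath it. The sketch below mixes two colors by an opacity value between 0% and 100%; it shows the concept only, not the Viewer's rendering.

```python
def blend(top_color, bottom_color, opacity_percent):
    """Blend a top color over a bottom color; 100 is fully opaque, 0 fully transparent."""
    alpha = opacity_percent / 100.0
    return tuple(round(alpha * t + (1 - alpha) * b) for t, b in zip(top_color, bottom_color))

red, gray = (255, 0, 0), (120, 120, 120)
print(blend(red, gray, 100))   # (255, 0, 0): underlying layer completely hidden
print(blend(red, gray, 50))    # (188, 60, 60): like viewing through a colored fog
```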
Non-Overlapping Layers
Multiple layers that are opened in the same Viewer do not have to overlap. Layers that
cover distinct geographic areas can be opened in the same Viewer. The layers will be
automatically positioned in the Viewer window according to their map coordinates,
and will be positioned relative to one another geographically. The map coordinate
systems for the layers must be the same.
Linking Viewers
When two Viewers are linked:
• either the same geographic point is displayed in the centers of both Viewers, or a
box shows where one view fits inside the other
• the user can manipulate the zoom ratio of one Viewer from another
• any inquire cursors in one Viewer appear in the other, for multiple-Viewer pixel
inquiry
• the auto-zoom is enabled, if the Viewers have the same zoom ratio and nearly the
same window size
It is often helpful to display a wide view of a scene in one Viewer, and then a close-up
of a particular area in another Viewer. When two such Viewers are linked, a box opens
in the wide view window to show where the close-up view lies.
Any image that is displayed at a magnification (higher zoom ratio) of another image in
a linked Viewer is represented in the other Viewer by a box. If several Viewers are
linked together, there may be multiple boxes in that Viewer.
Figure 45 shows how one view fits inside the other linked Viewer. The link box shows
the extent of the larger-scale view.
Zoom and Roam
Zooming enlarges an image on the display. When an image is zoomed, it can be roamed
(scrolled) so that the desired portion of the image appears on the display screen. Any
image that does not fit entirely in the Viewer can be roamed and/or zoomed. Roaming
and zooming have no effect on how the image is stored in the file.
The zoom ratio describes the size of the image on the screen in terms of the number of
file pixels used to store the image. It is the ratio of the number of screen pixels used to
the number of file pixels that they display, in the X or Y dimension.
A zoom ratio greater than 1 is a magnification, which makes the image features appear
larger in the Viewer. A zoom ratio less than 1 is a reduction, which makes the image
features appear smaller in the Viewer.
NOTE: ERDAS IMAGINE allows floating point zoom ratios, so that images can be zoomed at
virtually any scale (e.g., continuous fractional zoom). Resampling is necessary whenever an
image is displayed with a new pixel grid. The resampling method used when an image is zoomed
is the same one used when the image is displayed, as specified in the Open Raster Layer dialog.
The default resampling method is Nearest Neighbor.
Zoom the data in the Viewer via the Viewer menu bar, the Viewer tool bar, or the Quick View
right-button menu.
The Quick View right-button menu gives you options to view information about a specific pixel.
Use the Raster Attribute Editor to access information about classes in a thematic layer.
See "CHAPTER 10: Geographic Information Systems" for information about attribute data.
Enhancing Continuous Raster Layers
Working with the brightness values in the colormap is useful for image enhancement.
Often, a trial-and-error approach is needed to produce an image that has the right
contrast and highlights the right features. By using the tools in the Viewer, it is possible
to quickly view the effects of different enhancement techniques, undo enhancements
that aren’t helpful, and then save the best results to disk.
Use the Raster options from the Viewer to enhance continuous raster layers.
See "CHAPTER 5: Enhancement" for more information on enhancing continuous raster layers.
Creating New Image Files
It is easy to create a new image file (.img) from the layer(s) displayed in the Viewer. The
new .img file will contain three continuous raster layers (RGB), regardless of how many
layers are currently displayed. The IMAGINE Image Info utility must be used to create
statistics for the new .img file before the file is enhanced.
Annotation layers can be converted to raster format, and written to an .img file. Or,
vector data can be gridded into an image, overwriting the values of the pixels in the
image plane, and incorporated into the same band as the image.
Use the Viewer to .img function to create a new .img file from the currently displayed raster
layers.
CHAPTER 5
Enhancement
Introduction
Image enhancement is the process of making an image more interpretable for a
particular application (Faust 1989). Enhancement makes important features of raw,
remotely sensed data more interpretable to the human eye. Enhancement techniques
are often used instead of classification techniques for feature extraction—studying and
locating areas and objects on the ground and deriving useful information from images.
The techniques to be used in image enhancement depend upon:
• The user’s data — the different bands of Landsat, SPOT, and other imaging sensors
were selected to detect certain features. The user must know the parameters of the
bands being used before performing any enhancement. (See "CHAPTER 1: Raster
Data" for more details.)
• The user’s objective — for example, sharpening an image to identify features that
can be used for training samples will require a different set of enhancement
techniques than reducing the number of bands in the study. The user must have a
clear idea of the final product desired before enhancement is performed.
• The user’s expectations — what the user thinks he or she will find.
This chapter will briefly discuss the following enhancement techniques available with
ERDAS IMAGINE: data correction, radiometric enhancement, spatial enhancement,
spectral enhancement, hyperspectral image processing, Fourier analysis, and radar
imagery enhancement.
See "Bibliography" on page 635 to find current literature which will provide a more detailed
discussion of image processing enhancement techniques.
Display vs. File Enhancement
With ERDAS IMAGINE, image enhancement may be performed:
• temporarily, upon the image that is displayed in the Viewer (by manipulating the
function and display memories), or
• permanently, upon the image data in the data file.
Enhancing a displayed image is much faster than enhancing an image on disk. If one is
looking for certain visual effects, it may be beneficial to perform some trial-and-error
enhancement techniques on the display. Then, when the desired results are obtained,
the values that are stored in the display device memory can be used to make the same
changes to the data file.
For more information about displayed images and the memory of the display device, see
"CHAPTER 4: Image Display".
Spatial Modeling Enhancements
Two types of models for enhancement can be created in ERDAS IMAGINE:
• Graphical models — use Model Maker (Spatial Modeler) to easily, and with great
flexibility, construct models which can be used to enhance the data.
• Script models — for even greater flexibility, use the Spatial Modeler Language to
construct models in script form. The Spatial Modeler Language (SML) enables the
user to write scripts which can be edited and run from the Spatial Modeler
component or directly from the command line. The user can edit models created
with Model Maker using the Spatial Modeling Language or Model Maker.
Although a graphical model and a script model look different, they will produce the
same results when applied.
Image Interpreter
ERDAS IMAGINE supplies many algorithms constructed as models, ready to be
applied with user-input parameters at the touch of a button. These graphical models,
created with Model Maker, are listed as menu functions in the Image Interpreter. These
functions are mentioned throughout this chapter. Just remember, these are modeling
functions which can be edited and adapted as needed with Model Maker or the Spatial
Modeler Language.
The modeling functions available for enhancement in Image Interpreter are briefly
described in Table 16.
Function                        Description
Non-directional Edge            Averages the results from two orthogonal 1st derivative edge detectors.
Focal Analysis                  Enables the user to perform one of several analyses on class values in an .img file using a process similar to convolution filtering.
Texture                         Defines texture as a quantitative characteristic in an image.
Adaptive Filter                 Varies the contrast stretch for each pixel depending upon the DN values in the surrounding moving window.
Statistical Filter              Produces the pixel output DN by averaging pixels within a moving window that fall within a statistically defined range.
Resolution Merge                Merges imagery of differing spatial resolutions.
LUT (Lookup Table) Stretch      Creates an output image that contains the data values as modified by a lookup table.
Destripe TM Data                Removes striping from a raw TM4 or TM5 data file.
Principal Components            Compresses redundant data values into fewer bands which are often more interpretable than the source data.
Inverse Principal Components    Performs an inverse principal components analysis.
Decorrelation Stretch           Applies a contrast stretch to the principal components of an image.
Tasseled Cap                    Rotates the data structure axes to optimize data viewing for vegetation studies.
RGB to IHS                      Transforms red, green, blue values to intensity, hue, saturation values.
IHS to RGB                      Transforms intensity, hue, saturation values to red, green, blue values.
Indices                         Performs band ratios that are commonly used in mineral and vegetation studies.
Natural Color                   Simulates natural color for TM data.
Fourier Transform*              Enables the user to utilize a highly efficient version of the Discrete Fourier Transform (DFT).
Fourier Transform Editor*       Enables the user to edit Fourier images using many interactive tools and filters.
Fourier Magnitude*              Converts the Fourier Transform image into the more familiar Fourier Magnitude image.
Periodic Noise Removal*         Automatically removes striping and other periodic noise from images.
NOTE: There are other Image Interpreter functions that do not necessarily apply to image
enhancement.
Correcting Data
Each generation of sensors shows improved data acquisition and image quality over
previous generations. However, some anomalies still exist that are inherent to certain
sensors and can be corrected by applying mathematical formulas derived from the
distortions (Lillesand and Kiefer 1979). In addition, the natural distortion that results
from the curvature and rotation of the earth in relation to the sensor platform produces
distortions in the image data, which can also be corrected.
Radiometric Correction
Generally, there are two types of data correction: radiometric and geometric.
Radiometric correction addresses variations in the pixel intensities (digital numbers, or
DNs) that are not caused by the object or scene being scanned. These variations include:
• topographic effects
• atmospheric effects
Geometric Correction
Geometric correction addresses errors in the relative positions of pixels. These errors
are induced by:
• terrain variations
Because of the differences in radiometric and geometric correction between traditional, passively
detected visible/infrared imagery and actively acquired radar imagery, the two will be discussed
separately. See "Radar Imagery Enhancement" on page 191.
Radiometric Correction - Visible/Infrared Imagery
Striping
Striping or banding will occur if a detector goes out of adjustment—that is, it provides
readings consistently greater than or less than the other detectors for the same band
over the same ground cover.
Some Landsat 1, 2, and 3 data have striping every sixth line, due to improper calibration
of some of the 24 detectors that were used by the MSS. The stripes are not constant data
values, nor is there a constant error factor or bias. The differing response of the errant
detector is a complex function of the data value sensed.
This problem has been largely eliminated in the newer sensors. Various algorithms
have been advanced in current literature to help correct this problem in the older data.
Among these algorithms are simple along-line convolution, high-pass filtering, and
forward and reverse principal component transformations (Crippen 1989).
Use the Image Interpreter or the Spatial Modeler to implement algorithms to eliminate
striping. The Spatial Modeler editing capabilities allow you to adapt the algorithms to best
address the data. The Radar Adjust Brightness function will also correct some of these problems.
Line Dropout
Another common remote sensing device error is line dropout. Line dropout occurs
when a detector either completely fails to function, or becomes temporarily saturated
during a scan (like the effect of a camera flash on the retina). The result is a line or partial
line of data with higher data file values, creating a horizontal streak until the detector(s)
recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line of estimated data
file values, based on the lines above and below it.
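A minimal sketch of that kind of repair, assuming the dropped lines have already been identified by row index. The function name and the sample values are hypothetical.

import numpy as np

def repair_dropped_lines(band, bad_rows):
    """Replace each dropped scan line with the average of the lines
    immediately above and below it (a simple estimate)."""
    fixed = band.astype(np.float64).copy()
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < band.shape[0] - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0
    return fixed.astype(band.dtype)

# example: a small band with a saturated line at row 2
band = np.array([[10, 12, 11],
                 [11, 13, 12],
                 [255, 255, 255],   # dropped / saturated line
                 [12, 14, 13]], dtype=np.uint8)
print(repair_dropped_lines(band, bad_rows=[2]))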
Atmospheric Effects
The effects of the atmosphere upon remotely-sensed data are not considered “errors,”
since they are part of the signal received by the sensing device (Bernstein 1983).
However, it is often important to remove atmospheric effects, especially for scene
matching and change detection analysis.
Over the past 20 years a number of algorithms have been developed to correct for varia-
tions in atmospheric transmission. Four categories will be mentioned here:
• linear regressions
• atmospheric modeling
Use the Spatial Modeler to construct the algorithms for these operations.
Linear Regressions
A number of methods using linear regressions have been tried. These techniques use
bispectral plots and assume that the position of any pixel along that plot is strictly a
result of illumination. The slope then equals the relative reflectivities for the two
spectral bands. At an illumination of zero, the regression plots should pass through the
bispectral origin. Offsets from this represent the additive extraneous components, due
to atmospheric effects (Crippen 1987).
Atmospheric Modeling
Atmospheric modeling is computationally complex and requires either assumptions or
inputs concerning the atmosphere at the time of imaging. The atmospheric model used
to define the computations is frequently Lowtran or Modtran (Kneizys et al 1988). This
model requires inputs such as atmospheric profile (pressure, temperature, water vapor,
ozone, etc.), aerosol type, elevation, solar zenith angle, and sensor viewing angle.
Geometric Correction
As previously noted, geometric correction is applied to raw sensor data to correct errors
of perspective due to the earth’s curvature and sensor motion. Today, some of these
errors are commonly removed at the sensor’s data processing center. But in the past,
some data from Landsat MSS 1, 2, and 3 were not corrected before distribution.
Many visible/infrared sensors are not nadir-viewing; they look to the side. For some
applications, such as stereo viewing or DEM generation, this is an advantage. For other
applications, it is a complicating factor.
In addition, even a nadir-viewing sensor is viewing only the scene center at true nadir.
Other pixels, especially those on the view periphery, are viewed off-nadir. For scenes
covering very large geographic areas (such as AVHRR), this can be a significant
problem.
This and other factors, such as earth curvature, result in geometric imperfections in the
sensor image. Terrain variations have the same distorting effect but on a smaller (pixel-
by-pixel) scale. These factors can be addressed by rectifying the image to a map.
Radiometric Enhancement
Radiometric enhancement deals with the individual values of the pixels in the image.
It differs from spatial enhancement (discussed on page 143), which takes into account
the values of neighboring pixels.
Depending on the points and the bands in which they appear, radiometric enhance-
ments that are applied to one band may not be appropriate for other bands. Therefore,
the radiometric enhancement of a multiband image can usually be considered as a
series of independent, single-band enhancements (Faust 1989).
Radiometric enhancement usually does not bring out the contrast of every pixel in an
image. Contrast can be lost between some pixels, while gained on others.
[Figure 46: histograms of the original and radiometrically enhanced data, with frequency plotted against data file values 0 to 255 and points j and k marked on each histogram]
In Figure 46, the range between j and k in the histogram of the original data is about one
third of the total range of the data. When the same data are radiometrically enhanced,
the range between j and k can be widened. Therefore, the pixels between j and k gain
contrast—it is easier to distinguish different brightness values in these pixels.
However, the pixels outside the range between j and k are more grouped together than
in the original histogram, to compensate for the stretch between j and k. Contrast among
these pixels is lost.
Contrast Stretching
When radiometric enhancements are performed on the display device, the transfor-
mation of data file values into brightness values is illustrated by the graph of a lookup
table.
For example, Figure 47 shows the graph of a lookup table that increases the contrast of
data file values in the middle range of the input data (the range within the brackets).
Note that the input range within the bracket is narrow, but the output brightness values
for the same pixels are stretched over a wider range. This process is called contrast
stretching.
[Figure 47: Graph of a Lookup Table, with output brightness values (0 to 255) plotted against input data file values (0 to 255)]
Notice that the graph line with the steepest (highest) slope brings out the most contrast
by stretching output values farther apart.
[Figure 48: Enhancement with Lookup Tables, showing linear, nonlinear, and piecewise linear functions plotted as output brightness values against input data file values]
In most raw data, the data file values fall within a narrow range—usually a range much
narrower than the display device is capable of displaying. That range can be expanded
to utilize the total range of the display device (usually 0 to 255).
A two standard deviation linear contrast stretch is automatically applied to images displayed in
the IMAGINE Viewer.
[Figure 49: Nonlinear Radiometric Enhancement, with output brightness values (0 to 255) plotted against input data file values (0 to 255)]
In ERDAS IMAGINE, the Piecewise Linear Contrast function is set up so that there are always
pixels in each data file value from 0 to 255. You can manipulate the percentage of pixels in a
particular range but you cannot eliminate a range of data file values.
1) The data values are continuous; there can be no break in the values between
High, Middle, and Low. Range specifications will adjust in relation to any
changes to maintain the data value range.
The contrast value for each range represents the percent of the available output range
that particular range occupies. The brightness value for each range represents the
middle of the total range of brightness values occupied by that range. Since rules 1 and
2 above are enforced, as the contrast and brightness values are changed, they may affect
the contrast and brightness of other ranges. For example, if the contrast of the low range
increases, it forces the contrast of the middle to decrease.
In ERDAS IMAGINE, you can permanently change the data file values to the lookup table
values. Use the Image Interpreter LUT Stretch function to create an .img output file with the
same data values as the displayed contrast stretched image.
See "CHAPTER 1: Raster Data" for more information on the data contained in .img files.
The mean and standard deviation are used instead of the minimum and maximum data
file values, because the minimum and maximum data file values are usually not repre-
sentative of most of the data. (A notable exception occurs when the feature being sought
is in shadow. The shadow pixels are usually at the low extreme of the data file values,
outside the range of two standard deviations from the mean.)
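A minimal sketch of a two standard deviation linear stretch of this kind, with values outside the stretched range clipped to 0 and 255. This is an illustration only, not the Viewer's internal implementation.

import numpy as np

def two_sd_stretch(band):
    """Linearly stretch the range (mean - 2*std, mean + 2*std) of the input
    data file values to output brightness values 0 - 255, clipping values
    that fall outside that range."""
    mean, std = band.mean(), band.std()
    low, high = mean - 2 * std, mean + 2 * std
    stretched = (band.astype(np.float64) - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# a band whose values occupy only a narrow part of the 0 - 255 range
band = np.random.normal(loc=60, scale=10, size=(100, 100))
out = two_sd_stretch(band)
print(out.min(), out.max())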
The use of these statistics in contrast stretching is discussed and illustrated in "CHAPTER 4:
Image Display". Statistics terms are discussed in "APPENDIX A: Math Topics".
Figure 51 shows how the contrast stretch manipulates the histogram of the data,
increasing contrast in some areas and decreasing it in others. This is also a good
example of a piecewise linear contrast stretch, created by adding breakpoints to the
histogram.
[Figure 51: a piecewise linear contrast stretch built by manipulating breakpoints; four graphs of output brightness values against input data file values, with the input histogram overlaid:
1. Linear stretch. Values are clipped at 255.
2. A breakpoint is added to the linear function, redistributing the contrast.
3. Another breakpoint is added. Contrast at the peak of the histogram continues to increase.
4. The breakpoint at the top of the function is moved so that values are not clipped.]
Histogram Equalization
Histogram equalization can also separate pixels into distinct groups, if there are few
output values over a wide range. This can have the visual effect of a crude classification.
[Figure 52: Histogram Equalization; pixels at the tail of the histogram are grouped together and contrast is lost, while pixels at the peak are spread apart and contrast is gained]
To perform a histogram equalization, the pixel values of an image (either data file
values or brightness values) are reassigned to a certain number of bins, which are
simply numbered sets of pixels. The pixels are then given new values, based upon the
bins to which they are assigned.
The following parameters are specified:
• N - the number of bins to which pixel values can be assigned. If there are many bins
or many pixels with the same value(s), some bins may be empty.
• M - the maximum of the range of the output values. The range of the output values
will be from 0 to M.
The total number of pixels is divided by the number of bins, equaling the number of
pixels per bin, as shown in the following equation:
A = T / N          (Equation 1)

where:
A = the equalized number of pixels per bin
T = the total number of pixels in the image
N = the number of bins
The pixels of each input value are assigned to bins, so that the number of pixels in each
bin is as close to A as possible. Consider Figure 53:
[Figure 53: Histogram Equalization Example; a histogram of 240 pixels over data file values 0 to 9, with A = 24 pixels per bin]
There are 240 pixels represented by this histogram. To equalize this histogram to 10
bins, there would be A = 240 / 10 = 24 pixels per bin. The output bin for each input
value is computed as:
B_i = int [ ( Σ(k=1 to i-1) H_k  +  H_i / 2 )  /  A ]          (Equation 2)

where:
B_i = the output bin number assigned to pixels with input value i
H_k = the number of pixels with value k (the histogram count)
A = the equalized number of pixels per bin
[Figure 54: Equalized Histogram; the same 240 pixels redistributed over output data file values 0 to 9, with A = 24]
Effect on Contrast
By comparing the original histogram of the example data with the one above, one can
see that the enhanced image gains contrast in the “peaks” of the original histogram—
for example, the input range of 3 to 7 is stretched to the range 1 to 8. However, data
values at the “tails” of the original histogram are grouped together—input values 0
through 2 all have the output value of 0. So, contrast among the “tail” pixels, which
usually make up the darkest and brightest regions of the input image, is lost.
The resulting histogram is not exactly flat, since the pixels can rarely be grouped
together into bins with an equal number of pixels. Sets of pixels with the same value are
never split up to form equal bins.
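The calculation can be sketched in a few lines. The histogram counts below are an assumption chosen to be consistent with the worked example (240 pixels, A = 24, values 0 - 2 mapping to 0, value 3 to 1, and value 7 to 8):

import numpy as np

# Assumed histogram of the example data over data file values 0 - 9
H = np.array([5, 5, 10, 15, 60, 60, 40, 30, 10, 5])
N = 10                      # number of output bins
T = H.sum()                 # total number of pixels (240)
A = T / N                   # equalized pixels per bin (24), Equation 1

# Equation 2: cumulative count of all lower values, plus half the count
# of the current value, divided by A and truncated to an integer.
cumulative = np.concatenate(([0], np.cumsum(H)[:-1]))
B = ((cumulative + H / 2.0) / A).astype(int)
print(B)        # output value assigned to each input value 0 - 9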
Level Slice
A level slice is similar to a histogram equalization in that it divides the data into equal
amounts. A level slice on a true color display creates a “stair-stepped” lookup table. The
effect on the data is that input file values are grouped together at regular intervals into
a discrete number of levels, each with one output brightness value.
To perform a true color level slice, the user must specify a range for the output
brightness values and a number of output levels. The lookup table is then “stair-
stepped” so that there is an equal number of input pixels in each of the output levels.
Histogram Matching
Histogram matching is the process of determining a lookup table that will convert the
histogram of one image to resemble the histogram of another. Histogram matching is
useful for matching data of the same or adjacent scenes that were scanned on separate
days, or are slightly different because of sun angle or atmospheric effects. This is
especially useful for mosaicking or change detection.
To achieve good results with histogram matching, the two input images should have
similar characteristics:
• Relative dark and light features in the image should be the same.
• For some applications, the spatial resolution of the data should be the same.
• The relative distributions of land covers should be about the same, even when
matching scenes that are not of the same area. If one image has clouds and the other
does not, then the clouds should be “removed” before matching the histograms.
This can be done using the Area of Interest (AOI) function. The AOI function is
available from the Viewer menu bar.
In ERDAS IMAGINE, histogram matching is performed band to band (e.g., band 2 of one image
is matched to band 2 of the other image, etc.).
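A minimal sketch of one common way to build such a lookup table, by matching cumulative histograms band to band. This is a generic approach, not necessarily the exact ERDAS IMAGINE implementation.

import numpy as np

def match_histogram(source, reference):
    """Build a lookup table that reshapes the histogram of `source` (one
    8-bit band) to resemble the histogram of `reference`, by matching
    their cumulative distributions, then apply it."""
    src_hist = np.bincount(source.ravel(), minlength=256)
    ref_hist = np.bincount(reference.ravel(), minlength=256)
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # for each source value, find the reference value with the closest CDF
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]

rng = np.random.default_rng(0)
src = rng.integers(30, 120, size=(64, 64), dtype=np.uint8)
ref = rng.integers(80, 200, size=(64, 64), dtype=np.uint8)
matched = match_histogram(src, ref)
print(src.mean(), ref.mean(), matched.mean())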
Brightness Inversion
Brightness inversion has two options: inverse and reverse. Both options convert the
input data range (commonly 0 - 255) to 0 - 1.0. A min-max remapping is used to simul-
taneously stretch the image and handle any input bit format. The output image is in
floating point format, so a min-max stretch is used to convert the output image into 8-
bit format.
Inverse is useful for emphasizing detail that would otherwise be lost in the darkness of
the low DN pixels. This function applies the following algorithm:
DNout = 0.1 / DNin          if 0.1 < DNin < 1
Spatial Enhancement
While radiometric enhancement operates on each pixel individually, spatial enhancement
modifies pixel values based on the values of neighboring pixels; it deals largely with
spatial frequency. For example, an image with zero spatial frequency is a flat image, in
which every pixel has the same value. The spatial enhancement techniques discussed in
this section include convolution filtering, the Crisp filter, resolution merging, and
adaptive filtering.
See "Radar Imagery Enhancement" on page 191 for a discussion of Edge Detection and Texture
Analysis. These spatial enhancement techniques can be applied to any type of data.
Convolution Filtering
A convolution kernel is a matrix of numbers that is used to average the value of each
pixel with the values of surrounding pixels in a particular way. The numbers in the
matrix serve to weight this average toward particular pixels. These numbers are often
called coefficients, because they are used as such in the mathematical equations.
In ERDAS IMAGINE, there are four ways you can apply convolution filtering to an image:
Filtering is a broad term, referring to the altering of spatial or spectral features for
image enhancement (Jensen 1996). Convolution filtering is one method of spatial
filtering. Some texts may use the terms synonymously.
Convolution Example
To understand how one pixel is convolved, imagine that the convolution kernel is
overlaid on the data file values of the image (in one band), so that the pixel to be
convolved is in the center of the window.
Data                      Kernel
2  8  6  6  6             -1  -1  -1
2  8  6  6  6             -1  16  -1
2  2  8  6  6             -1  -1  -1
2  2  2  8  6
2  2  2  2  8
Figure 57 shows a 3 × 3 convolution kernel being applied to the pixel in the third
column, third row of the sample data (the pixel that corresponds to the center of the
kernel).
To compute the output value for this pixel, each value in the convolution kernel is
multiplied by the image pixel value that corresponds to it. These products are summed,
and the total is divided by the sum of the values in the kernel, as shown here:
output value = int [ ( (-1 × 8) + (-1 × 6) + (-1 × 6) +
                       (-1 × 2) + (16 × 8) + (-1 × 6) +
                       (-1 × 2) + (-1 × 2) + (-1 × 8) )
                     / (-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1) ]
             = int [ 88 / 8 ] = 11
When the 2 × 2 set of pixels in the center of this 5 x 5 image is convolved, the output
values are:
     1   2   3   4   5
1    2   8   6   6   6
2    2  11   5   6   6
3    2   0  11   6   6
4    2   2   2   8   6
5    2   2   2   2   8
The kernel used in this example is a high frequency kernel, as explained below. It is
important to note that the relatively lower values become lower, and the higher values
become higher, thus increasing the spatial frequency of the image.
The output value V for each convolved pixel is:

V = [ Σ(i=1 to q) Σ(j=1 to q) f_ij × d_ij ] / F

where:
f_ij = the coefficient of the convolution kernel at position i,j (in the kernel)
d_ij = the data file value of the pixel that corresponds to f_ij
q = the dimension of the kernel, assuming a square kernel (if q = 3, the kernel is 3 × 3)
F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is zero
V = the output pixel value
The sum of the coefficients (F) is used as the denominator of the equation above, so that
the output values will be in relatively the same range as the input values. Since F cannot
equal zero (division by zero is not defined), F is set to 1 if the sum is zero.
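A minimal sketch that applies this formula to the sample data and high frequency kernel shown earlier, for the interior pixels only. Edge handling is omitted for brevity; this is an illustration, not the Image Interpreter code.

import numpy as np

data = np.array([[2, 8, 6, 6, 6],
                 [2, 8, 6, 6, 6],
                 [2, 2, 8, 6, 6],
                 [2, 2, 2, 8, 6],
                 [2, 2, 2, 2, 8]], dtype=float)

kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]], dtype=float)

# F is the sum of the kernel coefficients, or 1 if that sum is zero
F = kernel.sum() or 1.0

output = data.copy()
for i in range(1, data.shape[0] - 1):          # skip the border pixels
    for j in range(1, data.shape[1] - 1):
        window = data[i - 1:i + 2, j - 1:j + 2]
        output[i, j] = int((kernel * window).sum() / F)

print(output)   # the pixel at row 3, column 3 becomes 11, as in the example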
Zero-Sum Kernels
Zero-sum kernels are kernels in which the sum of all coefficients in the kernel equals
zero. When a zero-sum kernel is used, then the sum of the coefficients is not used in the
convolution equation, as above. In this case, no division is performed (F = 1), since
division by zero is not defined.
Zero-sum kernels generally cause the output values to be:
• zero in areas where all input values are equal (no edges)
• extreme (high values become much higher, low values become much lower) in
areas of high spatial frequency
Therefore, a zero-sum kernel is an edge detector, which usually smooths out or zeros
out areas of low spatial frequency and creates a sharp contrast where spatial frequency
is high, which is at the edges between homogeneous groups of pixels. The resulting
image often consists of only edges and zeros.
An example of a zero-sum kernel is:

-1  -1  -1
 1  -2   1
 1   1   1
See the section on "Edge Detection" on page 200 for more detailed information.
High-Frequency Kernels
A high-frequency kernel, or high-pass kernel, has the effect of increasing spatial
frequency.
High-frequency kernels serve as edge enhancers, since they bring out the edges
between homogeneous groups of pixels. Unlike edge detectors (such as zero-sum
kernels), they highlight edges and do not necessarily eliminate other features.
-1 -1 -1
-1 16 -1
-1 -1 -1
When this kernel is used on a set of pixels in which a relatively low value is surrounded
by higher values, like this...
BEFORE AFTER
...the low value gets lower. Inversely, when the kernel is used on a set of pixels in which
a relatively high value is surrounded by lower values...
BEFORE                AFTER
64   60   57          64   60   57
61  125   69          61  187   69
58   60   70          58   60   70
...the high value becomes higher. In either case, spatial frequency is increased by this
kernel.
Low-Frequency Kernels
Below is an example of a low-frequency kernel, or low-pass kernel, which decreases
spatial frequency.
1 1 1
1 1 1
1 1 1
This kernel simply averages the values of the pixels, causing them to be more homoge-
neous (homogeneity is low spatial frequency). The resulting image looks either
smoother or more blurred.
For information on applying filters to thematic layers, see "CHAPTER 10: Geographic Information Systems".
Crisp
The Crisp filter sharpens the overall scene luminance without distorting the interband
variance content of the image. This is a useful enhancement if the image is blurred due
to atmospheric haze, rapid sensor motion, or a broad point spread function of the
sensor.
The logic of the algorithm is that the first principal component (PC-1) of an image is
assumed to contain the overall scene luminance. The other PC’s represent intra-scene
variance. Thus, the user can sharpen only PC-1 and then reverse the principal compo-
nents calculation to reconstruct the original image. Luminance is sharpened, but
variance is retained.
Resolution Merge
Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT
panchromatic has one broad band with very good spatial resolution—10 m. Combining
these two images to yield a seven band data set with 10 m resolution would provide the
best characteristics of both sensors.
A number of models have been suggested to achieve this image merge. Welch and
Ehlers (1987) used forward-reverse RGB to IHS transforms, replacing I (from trans-
formed TM data) with the SPOT panchromatic image. However, this technique is
limited to three bands (R,G,B).
Chavez (1991), among others, uses the forward-reverse principal components trans-
forms with the SPOT image, replacing PC-1.
In the above two techniques, it is assumed that the intensity component (PC-1 or I) is
spectrally equivalent to the SPOT panchromatic image, and that all the spectral infor-
mation is contained in the other PC’s or in H and S. Since SPOT data do not cover the
full spectral range that TM data do, this assumption does not strictly hold. It is
unacceptable to resample the thermal band (TM6) based on the visible (SPOT panchro-
matic) image.
The Resolution Merge function has two different options for resampling low spatial
resolution data to a higher spatial resolution while retaining spectral information:
• forward-reverse principal components transform
• multiplicative
The principal components option assumes that:
• PC-1 contains only overall scene luminance; all interband variation is contained in
the other 5 PCs
With the above assumptions, the forward transform into principal components is made.
PC-1 is removed and its numerical range (min to max) is determined. The high spatial
resolution image is then remapped so that its histogram shape is kept constant, but it is
in the same numerical range as PC-1. It is then substituted for PC-1 and the reverse
transform is applied. This remapping is done so that the mathematics of the reverse
transform do not distort the thematic information (Welch and Ehlers 1987).
Multiplicative
The second technique in the Image Interpreter uses a simple multiplicative algorithm:
each input band is multiplied, pixel by pixel, by the higher resolution (intensity) image.
The algorithm is derived from the four component technique of Crippen (Crippen
1989). In this paper, it is argued that of the four possible arithmetic methods to incor-
porate an intensity image into a chromatic image (addition, subtraction, division, and
multiplication), only multiplication is unlikely to distort the color.
However, in his study Crippen first removed the intensity component via band ratios,
spectral indices, or PC transform. The algorithm shown above operates on the original
image. The result is an increased presence of the intensity component. For many appli-
cations, this is desirable. Users involved in urban or suburban studies, city planning,
utilities routing, etc., often want roads and cultural features (which tend toward high
reflection) to be pronounced in the image.
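A minimal sketch of a multiplicative-style merge under simple assumptions: nearest-neighbor replication for resampling, a panchromatic grid that is an integer multiple of the band grid, and a naive rescale back to 8-bit. This is an illustration, not the Resolution Merge function itself.

import numpy as np

def multiplicative_merge(low_res_band, pan):
    """Resample one low-resolution band to the panchromatic grid by
    nearest-neighbor replication, multiply by the pan image, and rescale
    the product to 0 - 255 for display."""
    zoom = pan.shape[0] // low_res_band.shape[0]
    resampled = np.kron(low_res_band, np.ones((zoom, zoom)))
    merged = resampled * pan.astype(np.float64)
    merged = merged / merged.max() * 255.0
    return merged.astype(np.uint8)

# a 3 x 3 "TM-like" band merged with a 9 x 9 "SPOT-like" pan band
tm = np.arange(9, dtype=np.float64).reshape(3, 3) + 1
pan = np.random.default_rng(1).integers(50, 200, size=(9, 9))
print(multiplicative_merge(tm, pan).shape)    # (9, 9)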
Adaptive Filter
Contrast enhancement (image stretching) is a widely applicable standard image
processing technique. However, even adjustable stretches like the piecewise linear
stretch act on the scene globally. There are many circumstances where this is not the
optimum approach. For example, coastal studies where much of the water detail is
spread through a very low DN range and the land detail is spread through a much
higher DN range would be such a circumstance. In these cases, a filter that “adapts” the
stretch to the region of interest (the area within the moving window) would produce a
better enhancement. Adaptive filters attempt to achieve this (Fahnestock and
Schowengerdt 1983, Peli and Lim 1982, Schwartz 1977).
ERDAS IMAGINE supplies two adaptive filters with user-adjustable parameters. The Adaptive
Filter function in Image Interpreter can be applied to undegraded images, such as SPOT,
Landsat, and digitized photographs. The Image Enhancement function in Radar is better for
degraded or difficult images.
Scenes to be filtered fall into broad and overlapping categories, for example:
• Undegraded — these scenes have good and uniform illumination overall. Given a
choice, these are the scenes one would prefer to obtain from imagery sources such
as EOSAT or SPOT.
No one filter with fixed parameters can address this wide variety of conditions. In
addition, multiband images may require different parameters for each band. Without
the use of adaptive filters, the different bands would have to be separated into one-band
files, enhanced, and then recombined.
For this function, the image is separated into high and low frequency component
images. The low frequency image is considered to be overall scene luminance. These
two component parts are then recombined in various relative amounts using multi-
pliers derived from look-up tables. These LUTs are driven by the overall scene
luminance.

[Figure 59: Local Luminance Intercept; output LUT value (0 to 255) plotted against local luminance, showing the intercept I]
Figure 59 shows the local luminance intercept, which is the output luminance value that
an input luminance value of 0 would be assigned.
Spectral Enhancement
The enhancement techniques that follow require more than one band of data. They can
be used to:
• extract new bands of data that are more interpretable to the eye
• display a wider variety of information in the three available color guns (R,G,B)
In this documentation, some examples are illustrated with two-dimensional graphs. However,
you are not limited to two-dimensional (two-band) data. ERDAS IMAGINE programs allow an
unlimited number of bands to be used.
Keep in mind that processing such data sets can require a large amount of computer swap space.
In practice, the principles outlined below apply to any number of bands.
Some of these enhancements can be used to prepare data for classification. However, this is a
risky practice unless you are very familiar with your data, and the changes that you are making
to it. Anytime you alter values, you risk losing some information.
Principal Components Analysis
Principal components analysis (or PCA) is often used as a method of data compression.
It allows redundant data to be compacted into fewer bands—that is, the dimensionality
of the data is reduced. The bands of PCA data are non-correlated and independent, and
are often more interpretable than the source data (Jensen 1996; Faust 1989).
The process is easily explained graphically with an example of data in two bands.
Below is an example of a two-band scatterplot, which shows the relationships of data
file values in two bands. The values of one band are plotted against those of the other.
If both bands have normal distributions, an ellipse shape results.
[Figure 60: Two Band Scatterplot; data file values of Band A (x-axis, 0 to 255) plotted against Band B (y-axis), with the histogram of each band drawn along its axis]
Ellipse Diagram
In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid (3 dimensions) or
hyperellipsoid (more than 3) is formed if the distributions of each input band are
normal or near normal. (The term “ellipse” will be used for general purposes here.)
To perform principal components analysis, the axes of the spectral space are rotated,
changing the coordinates of each pixel in spectral space, and the data file values as well.
The new axes are parallel to the axes of the ellipse.
A new axis of the spectral space is defined by this first principal component. The points
in the scatterplot are now given new coordinates, which correspond to this new axis.
Since, in spectral space, the coordinates of the points are the data file values, new data
file values are derived from this process. These values are stored in the first principal
component band of a new data file.
[Figure: the first principal component forms a new axis (0 to 255) through the data ellipse of the two-band scatterplot]
The first principal component shows the direction and length of the widest transect of
the ellipse. Therefore, as an axis in spectral space, it measures the highest variation
within the data. In Figure 62 it is easy to see that the first eigenvalue will always be
greater than the ranges of the input bands, just as the hypotenuse of a right triangle
must always be longer than the legs.
[Figure 62: Range of First Principal Component; the range of PC 1, the widest transect of the ellipse, exceeds the ranges of Band A and Band B in the scatterplot of data file values]
[Figure: the second principal component (PC 2) lies at a 90° angle (orthogonal) to PC 1]
Each successive principal component:
• is the widest transect of the ellipse that is orthogonal to the previous components
in the n-dimensional space of the scatterplot (Faust 1989)
• accounts for a decreasing amount of the variation in the data which is not already
accounted for by previous principal components (Taylor 1977)
Although there are n output bands in a principal components analysis, the first few
bands account for a high proportion of the variance in the data—in some cases, almost
100%. Therefore, principal components analysis is useful for compressing data into
fewer bands.
In other applications, useful information can be gathered from the principal component
bands with the least variance. These bands can show subtle details in the image that
were obscured by higher contrast in the original image. These bands may also show
regular noise in the data (for example, the striping in old MSS data) (Faust 1989).
To compute principal components, the eigenvalues and eigenvectors of the covariance
matrix of the input bands are calculated:

E  Cov  E^T  =  V

where:
Cov = the covariance matrix of the input bands
E = the matrix of eigenvectors of Cov
E^T = the transpose of E
V = a diagonal matrix of eigenvalues, in which all non-diagonal elements are zero:

    | v1   0   0  ...  0  |
    |  0  v2   0  ...  0  |
V = |  .   .   .  ...  .  |
    |  0   0   0  ...  vn |

V is computed so that its non-zero elements are ordered from greatest to least, so that
v1 > v2 > v3 ... > vn.
A full explanation of this computation can be found in Gonzalez and Wintz 1977.
The matrix V is the covariance matrix of the output principal component file. The zeros
represent the covariance between bands (there is none), and the eigenvalues are the
variance values for each band. Because the eigenvalues are ordered from v1 to vn, the
first eigenvalue is the largest and represents the most variance in the data.
Each principal component value is then computed from a pixel's data file values as:

P_e = Σ(k=1 to n) d_k × E_ke

where:
e = the number of the principal component (first, second, and so on)
P_e = the output value of principal component e for the pixel
d_k = the input data file value of the pixel in band k
n = the number of bands
E = the eigenvector matrix, such that E_ke = the element of that matrix at
row k, column e
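The same computation can be sketched with numpy for a two-band case: the covariance matrix is diagonalized and each pixel is projected onto the eigenvectors. This is a small illustration using synthetic, correlated bands, not the Principal Components function itself.

import numpy as np

rng = np.random.default_rng(0)
band_a = rng.normal(100, 20, 10000)
band_b = 0.8 * band_a + rng.normal(0, 5, 10000)
d = np.column_stack([band_a, band_b])          # data file values d_k

cov = np.cov(d, rowvar=False)                  # covariance matrix Cov
eigenvalues, E = np.linalg.eigh(cov)           # eigenvalues and eigenvectors

# order the components from greatest to least variance (v1 > v2 > ...)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, E = eigenvalues[order], E[:, order]

# P_e = sum over k of d_k * E_ke; the data are mean-centered here so that
# the component variances match the eigenvalues
P = (d - d.mean(axis=0)) @ E
print(eigenvalues)                 # variance of each principal component
print(P.var(axis=0))               # matches the eigenvalues (approximately)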
Decorrelation Stretch
The purpose of a contrast stretch is to alter the distribution of the image DN values within
the 0 - 255 range of the display device. The decorrelation stretch applies such a stretch to
the principal components of an image, rather than to the original image.
Each PC is separately stretched to fully utilize the data range. The new stretched PC
composite image is then retransformed to the original data areas.
Either the original PCs or the stretched PCs may be saved as a permanent image file for
viewing after the stretch.
NOTE: Storage of PCs as floating point-single precision is probably appropriate in this case.
Tasseled Cap
The different bands in a multispectral image can be visualized as defining an N-dimen-
sional space where N is the number of bands. Each pixel, positioned according to its DN
value in each band, lies within the N-dimensional space. This pixel distribution is deter-
mined by the absorption/reflection spectra of the imaged material. This clustering of
the pixels is termed the data structure (Crist & Kauth 1986).
See "CHAPTER 1: Raster Data" for more information on absorption/reflection spectra. See the
discussion on "Principal Components Analysis" on page 153.
For example, a geologist and a botanist are interested in different absorption features.
They would want to view different data structures and therefore, different data
structure axes. Both would benefit from viewing the data in a way that would maximize
visibility of the data structure of interest.
The Tasseled Cap transformation offers a way to optimize data viewing for vegetation
studies. Research has produced three data structure axes which define the vegetation
information content (Crist et al 1986, Crist & Kauth 1986):
• Brightness — a weighted sum of all bands, defined in the direction of the principal
variation in soil reflectance.
• Greenness — relates to the amount of green vegetation in the scene.
• Wetness — relates to canopy and soil moisture (Lillesand and Kiefer 1987).
A simple calculation (linear combination) then rotates the data space to present any of
these axes to the user.
These rotations are sensor-dependent, but once defined for a particular sensor (say
Landsat 4 TM), the same rotation will work for any scene taken by that sensor. The
increased dimensionality (number of bands) of TM vs. MSS allowed Crist et al to define
three additional axes, termed Haze, Fifth, and Sixth. Laurin (1986) has used this haze
parameter to devise an algorithm to de-haze Landsat imagery.
RGB to IHS
The color monitors used for image display on image processing systems have three
color guns. These correspond to red, green, and blue (R,G,B), the additive primary
colors. When displaying three bands of a multiband data set, the viewed image is said
to be in R,G,B space.
However, it is possible to define an alternate color space that uses Intensity (I), Hue (H),
and Saturation (S) as the three positioned parameters (in lieu of R,G, and B). This system
is advantageous in that it presents colors more nearly as perceived by the human eye.
• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black)
to 1 (white).
• Saturation represents the purity of color and also varies linearly from 0 to 1.
[Figure: the intensity, hue, saturation (IHS) color coordinate system; intensity varies along the central axis, hue is the angular position around that axis (with red, green, and blue spaced around the circle), and saturation is the distance from the axis]
To use the RGB to IHS transform, use the RGB to IHS function from Image Interpreter.
The algorithm used in the Image Interpreter RGB to IHS transform is (Conrac 1980):
R = (M - R) / (M - m)
G = (M - G) / (M - m)
B = (M - B) / (M - m)

where:
R, G, B = the input values, each in the range 0 to 1.0
M = the largest of the R, G, B values for the pixel
m = the least of the R, G, B values for the pixel
NOTE: At least one of the R, G, or B values is 0, corresponding to the color with the largest
value, and at least one of the R, G, or B values is 1, corresponding to the color with the least value.
I = (M + m) / 2

If M = m, S = 0
If I < 0.5, S = (M - m) / (M + m)
If I > 0.5, S = (M - m) / (2 - M - m)

If M = m, H = 0
If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)

where:
r, g, b = the normalized values calculated in the equations above
M = the largest of the input R, G, B values
m = the least of the input R, G, B values
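A small sketch of these relationships for a single pixel, assuming R, G, and B have already been scaled to the 0 - 1.0 range:

import numpy as np

def rgb_to_ihs(R, G, B):
    """Convert one pixel's R, G, B values (each in the range 0 - 1.0)
    to intensity, hue, and saturation using the relationships above."""
    M, m = max(R, G, B), min(R, G, B)
    I = (M + m) / 2.0
    if M == m:
        return I, 0.0, 0.0                      # a gray pixel: H = S = 0
    # at I = 0.5 both saturation expressions give the same value
    S = (M - m) / (M + m) if I <= 0.5 else (M - m) / (2.0 - M - m)
    # normalized distances of each component from the maximum
    r, g, b = [(M - c) / (M - m) for c in (R, G, B)]
    if R == M:
        H = 60.0 * (2.0 + b - g)
    elif G == M:
        H = 60.0 * (4.0 + r - b)
    else:
        H = 60.0 * (6.0 + g - r)
    return I, H, S

print(rgb_to_ihs(0.8, 0.4, 0.2))    # I, H, S for one pixel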
IHS to RGB
The family of IHS to RGB functions is intended as a complement to the standard RGB to IHS
transform.
In the IHS to RGB algorithm, the values for hue (H), a circular dimension, range from
0 to 360, while intensity (I) and saturation (S) nominally range from 0 to 1.0. However,
depending on the dynamic range of the DN values of the input image, it is possible that
I or S or both will occupy only a part of the 0 - 1 range. In this model, a min-max stretch
is applied to either I, S, or both, so that they more fully utilize the 0 - 1 value range. After
stretching, the full IHS image is retransformed back to the original RGB space. Because
the parameter hue is not modified, and hue largely defines what we perceive as color, the
resultant image looks very much like the input image.
It is not essential that the input parameters (IHS) to this transform be derived from an
RGB to IHS transform. The user could define I and/or S as other parameters, set Hue at
0-360, and then transform to RGB space. This is a method of color coding other data sets.
In another approach (Daily 1983), H and I are replaced by low- and high-frequency
radar imagery. The user can also replace I with radar intensity before the IHS to RGB
transform (Holcomb 1993). Chavez evaluates the use of the IHS to RGB transform to
resolution merge Landsat TM with SPOT panchromatic imagery (Chavez 1991).
NOTE: Use the Spatial Modeler for this analysis.
See the previous section on RGB to IHS transform for more information.
The algorithm used by ERDAS IMAGINE for the IHS to RGB function is (Conrac 1980):
If I ≤ 0.5, M = I (1 + S)
If I > 0.5, M = I + S - I S
m = 2 I - M

The equations for calculating R in the range of 0 to 1.0 are:

If H < 60, R = m + (M - m) (H / 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m) ((240 - H) / 60)
If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m) ((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m) ((360 - H) / 60)

The equations for calculating B in the range of 0 to 1.0 are:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m) ((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m) ((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M
Indices
Indices are used to create output images by mathematically combining the DN values of
different bands. These may be simplistic:

(Band X - Band Y)

or more complex:

(Band X - Band Y) / (Band X + Band Y)

Band X / Band Y
These ratio images are derived from the absorption/reflection spectra of the material
of interest. The absorption is based on the molecular bonds in the (surface) material.
Thus, the ratio often gives information on the chemical composition of the target.
See "CHAPTER 1: Raster Data" for more information on the absorption/reflection spectra.
Applications
• Indices are used extensively in mineral exploration and vegetation analyses to
bring out small differences between various rock types and vegetation classes. In
many cases, judiciously chosen indices can highlight and enhance differences
which cannot be observed in the display of the original color bands.
• Indices can also be used to minimize shadow effects in satellite and aircraft
multispectral images. Black and white images of individual indices or a color
combination of three ratios may be generated.
Scaling the output of an index is an important consideration. Consider the simple ratio:

ratio = A/B
If A>>B (much greater than), then a normal integer scaling would be sufficient. If A>B
and A is never much greater than B, scaling might be a problem in that the data range
might only go from 1 to 2 or 1 to 3. Integer scaling in this case would give very little
contrast.
For cases in which A<B or A<<B, integer scaling would always truncate to 0. All
fractional data would be lost. A multiplication constant factor would also not be very
effective in seeing the data contrast between 0 and 1, which may very well be a
substantial part of the data image. One approach to handling the entire ratio range is to
actually process the function:
ratio = atan(A/B)
This would give a better representation for A/B < 1 as well as for A/B > 1 (Faust 1992).
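A small sketch of that approach, scaling the arctangent output (0 to π/2 for non-negative inputs) to the 0 - 255 display range. The scaling choice is illustrative.

import numpy as np

def atan_ratio(band_a, band_b):
    """Compute atan(A/B) for each pixel and rescale the result to 0 - 255
    for display, so that ratios both below and above 1 retain contrast."""
    ratio = np.arctan(band_a / band_b.astype(np.float64))
    return (ratio / (np.pi / 2.0) * 255.0).astype(np.uint8)

a = np.array([[10.0, 200.0], [50.0, 1.0]])
b = np.array([[100.0, 10.0], [50.0, 3.0]])
print(atan_ratio(a, b))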
Index Examples
The following are examples of indices which have been preprogrammed in the Image
Interpreter in ERDAS IMAGINE:
• IR/R (infrared/red)
• SQRT (IR/R)
• (IR - R) / (IR + R)
• (IR - R) / (IR + R) + 0.5
Sensor          IR Band    R Band
Landsat MSS     7          5
SPOT XS         3          2
Landsat TM      4          3
NOAA AVHRR      2          1
Image Algebra
Image algebra is a general term used to describe operations that combine the pixels of
two or more raster layers in mathematical combinations. For example, the calculation:
DNir - DNred
yields a simple, yet very useful, measure of the presence of vegetation. At the other
extreme is the Tasseled Cap calculation (described in the following pages), which uses
a more complicated mathematical combination of as many as six bands to define
vegetation.
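A minimal sketch of both forms for Landsat TM, taking IR as band 4 and R as band 3 per the table above. The array values are made up, and reading the bands from an .img file is not shown.

import numpy as np

def ndvi(ir, red):
    """Normalized difference vegetation index, (IR - R) / (IR + R),
    computed per pixel; the small constant avoids division by zero."""
    ir = ir.astype(np.float64)
    red = red.astype(np.float64)
    return (ir - red) / (ir + red + 1e-10)

ir = np.array([[120, 90], [60, 30]], dtype=np.uint8)
red = np.array([[40, 50], [55, 29]], dtype=np.uint8)
print(ir.astype(int) - red.astype(int))   # simple difference, DNir - DNred
print(ndvi(ir, red))                      # higher values indicate more vegetation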
Band ratios, such as:

TM5 / TM7 = clay minerals
are also commonly used. These are derived from the absorption spectra of the material
of interest; the numerator is a baseline of background absorption and the denominator
is an absorption peak.
Hyperspectral Image Processing
Hyperspectral image processing is in many respects simply an extension of the
techniques used for multi-spectral datasets; indeed, there is no set number of bands
beyond which a dataset is hyperspectral. Thus, many of the techniques or algorithms
currently used for multi-spectral datasets are logically applicable, regardless of the
number of bands in the dataset (see the discussion of Figure 7 on page 12 of this
manual). What is of relevance in evaluating these datasets is not the number of bands
per se, but the spectral band-width of the bands (channels). As the bandwidths get
smaller, it becomes possible to view the dataset as an absorption spectrum rather than
a collection of discontinuous bands. Analysis of the data in this fashion is termed
"imaging spectrometry".
A dataset with narrow contiguous bands can be plotted as a continuous spectrum and
compared to a library of known spectra using full profile spectral pattern fitting
algorithms. A serious complication in using this approach is assuring that all spectra are
corrected to the same background.
At present, it is possible to obtain spectral libraries of common materials. The JPL and
USGS mineral spectra libraries are included in IMAGINE. These are laboratory
measured reflectance spectra of reference minerals, often of high purity and defined
particle size. The spectrometer is commonly purged with pure nitrogen to avoid absor-
bance by atmospheric gases. Conversely, the remote sensor records an image after the
sunlight has (twice) passed through the atmosphere with variable and unknown
amounts of water vapor, CO2, etc. (This atmospheric absorbance curve is shown in
Figure 4.) The unknown atmospheric absorbances superimposed upon the Earth
surface reflectances make comparison to laboratory spectra or spectra taken with a
different atmosphere inexact. Indeed, it has been shown that atmospheric composition
can vary within a single scene. This complicates the use of spectral signatures even
within one scene. Atmospheric absorption and scattering is discussed on pages 6
through 10 of this manual.
Normalize
Pixel albedo is affected by sensor look angle and local topographic effects. For airborne
sensors this look angle effect can be large across a scene; it is less pronounced for
satellite sensors. Some scanners look to both sides of the aircraft. For these datasets, the
difference in average scene luminance between the two half-scenes can be large. To help minimize
these effects, an "equal area normalization" algorithm can be applied (Zamudio and
Atkinson 1990). This calculation shifts each (pixel) spectrum to the same overall average
brightness. This enhancement must be used with a consideration of whether this
assumption is valid for the scene. For an image which contains 2 (or more) distinctly
different regions (e.g., half ocean and half forest), this may not be a valid assumption.
Correctly applied, this normalization algorithm helps remove albedo variations and
topographic effects.
IAR Reflectance
As discussed above, it is desired to convert the spectra recorded by the sensor into a
form that can be compared to known reference spectra. This technique calculates a
relative reflectance by dividing each spectrum (pixel) by the scene average spectrum
(Kruse 1988). The algorithm is based on the assumption that this scene average
spectrum is largely composed of the atmospheric contribution and that the atmosphere
is uniform across the scene. However, these assumptions are not always valid. In
particular, the average spectrum could contain absorption features related to target
materials of interest. The algorithm could then overcompensate for (i.e., remove) these
absorbance features. The average spectrum should be visually inspected to check for
this possibility. Properly applied, this technique can remove the majority of
atmospheric effects.
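A minimal sketch of that calculation for an image cube held as a (bands, rows, columns) array. The data here are synthetic, and the cube layout is an assumption.

import numpy as np

def iar_reflectance(cube):
    """Divide each pixel's spectrum by the scene average spectrum to
    produce relative (IAR) reflectance.  `cube` has shape (bands, rows, cols)."""
    average_spectrum = cube.mean(axis=(1, 2))              # one value per band
    return cube / average_spectrum[:, np.newaxis, np.newaxis]

cube = np.random.default_rng(2).random((224, 8, 8)) + 0.1  # small synthetic cube
relative = iar_reflectance(cube)
print(relative.shape, relative.mean())                     # each band mean is now 1.0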
Log Residuals
The Log Residuals technique was originally described by Green and Craig (1985), but
has been variously modified by researchers. The version implemented here is similar to
the approach of Lyon (1987). The algorithm can be conceptualized as:
All parameters in the above equation are in logarithmic space, hence the name.
This algorithm corrects the image for atmospheric absorption, systemic instrumental
variation, and illuminance differences between pixels.
Rescale
Many hyperspectral scanners record the data in a format larger than 8-bit. In addition,
many of the calculations used to correct the data will be performed with a floating point
format to preserve precision. At some point, it will be advantageous to compress the
data back into an 8-bit range for effective storage and/or display. However, when
rescaling data to be used for imaging spectrometry analysis, it is necessary to consider
all data values within the data cube, not just within the layer of interest. This algorithm
is designed to maintain the 3-dimensional integrity of the data values. Any bit format
can be input. The output image will always be 8-bit.
When rescaling a data cube, a decision must be made as to which bands to include in
the rescaling. Clearly, a “bad” band (i.e., a low S/N layer) should be excluded. Some
sensors image in different regions of the electromagnetic (EM) spectrum (e.g., reflective
and thermal infra-red or long- and short-wave reflective infra-red). When rescaling
these data sets, it may be appropriate to rescale each EM region separately. These can
be input using the Select Layer option in the IMAGINE Viewer.
NOTE: Bands 26 through 28 and 46 through 55 have been deleted from the calculation. The
deleted bands will still be rescaled, but they will not be factored into the rescale calculation.
Processing Sequence
The above (and other) processing steps are utilized to convert the raw image into a form
that is easier to interpret. This interpretation often involves comparing the imagery,
either visually or automatically, to laboratory spectra or other known "end-member"
spectra. At present there is no widely accepted standard processing sequence to achieve
this, although some have been advanced in the scientific literature (Zamudio and
Atkinson 1990; Kruse 1988; Green and Craig 1985; Lyon 1987). Two common processing
sequences have been programmed as single automatic enhancements, as follows:
Spectrum Average
In some instances, it may be desirable to average together several pixels. This is
mentioned above under IAR Reflectance as a test for applicability. In preparing
reference spectra for classification, or to save in the Spectral Library, an average
spectrum may be more representative than a single pixel. Note that to implement this
function it is necessary to define which pixels to average using the IMAGINE AOI tools.
This enables the user to average any set of pixels which are defined; they do not need
to be contiguous and there is no limit on the number of pixels averaged. Note that the
output from this program is a single pixel with the same number of input bands as the
original image.
[Figure: the IMAGINE rescale dialog, with callouts indicating where to enter an Area of Interest (AOI polygon)]
Mean per Pixel
This algorithm outputs a single band, regardless of the number of input bands. By
visually inspecting this output image, it is possible to see if particular pixels are "outside
the norm". While this does not mean that these pixels are incorrect, they should be
evaluated in this context. For example, a CCD detector could have several sites (pixels)
that are dead or have an anomalous response; these would be revealed in the Mean per
Pixel image. This can be used as a sensor evaluation tool.
Profile Tools
To aid in visualizing this three-dimensional data cube, three basic tools have been
designed:
The data can also be displayed three-dimensionally for multiple bands, as in Figure 70.
Wavelength Axis Data tapes containing hyperspectral imagery commonly designate the bands as a
simple numerical sequence. When plotted using the profile tools, this yields an x-axis
labeled as 1,2,3,4, etc. Elsewhere on the tape or in the accompanying documentation is
a file which lists the center frequency and width of each band. This information should
be linked to the image intensity values for accurate analysis or comparison to other
spectra, such as the Spectral Libraries.
Spectral Library As discussed on page 167, two spectral libraries are presently included in the software
package (JPL and USGS). In addition, it is possible to extract spectra (pixels) from a data
set or prepare average spectra from an image and save these in a user-derived spectral
library. This library can then be used for visual comparison with other image spectra,
or it can be used as input signatures in a classification.
Classification The advent of datasets with very large numbers of bands has pressed the limits of the
"traditional classifiers" such as Isodata, Maximum Likelihood, and Minimum Distance,
but has not obviated their usefulness. Much research has been directed toward the use
of Artificial Neural Networks (ANN) to more fully utilize the information content of
hyperspectral images (Merenyi, Taranik, Monor, and Farrand 1996). To date, however,
these advanced techniques have proven to be only marginally better at a considerable
cost in complexity and computation. For certain applications, both Maximum
Likelihood (Benediktsson, Swain, Ersoy, and Hong 1990) and Minimum Distance
(Merenyi, Taranik, Monor, and Farrand 1996) have proven to be appropriate.
"CHAPTER 6: Classification" contains a detailed discussion of these classification
techniques.
System Requirements Because of the large number of bands, a hyperspectral dataset can be surprisingly large.
For example, an AVIRIS scene is only 512 × 614 pixels in dimension, which seems small.
However, when multiplied by 224 bands (channels) at 16 bits per sample, it requires over 140
megabytes of data storage space. Processing this scene requires correspondingly large
swap and temp space. In practice, it has been found that a 48 MB memory board and
100 MB of swap space are a minimum requirement for efficient processing. Temporary
file space requirements will, of course, depend upon the process being run.
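The storage figure follows directly from the scene dimensions; a quick check of the arithmetic (assuming 2 bytes per 16-bit sample and ignoring any file overhead):

    rows, cols, bands, bytes_per_sample = 512, 614, 224, 2   # 16-bit AVIRIS samples
    size_bytes = rows * cols * bands * bytes_per_sample
    print(size_bytes)   # 140,836,864 bytes, i.e., over 140 megabytes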
In ERDAS IMAGINE, the Fast Fourier Transform (FFT) is used to convert a raster image
from the spatial (normal) domain into a frequency domain image. The FFT calculation
converts the image into a series of two-dimensional sine waves of various frequencies.
The Fourier image itself cannot be easily viewed, but the magnitude of the image can
be calculated, which can then be displayed either in the IMAGINE Viewer or in the FFT
Editor. Analysts can edit the Fourier image to reduce noise or remove periodic features,
such as striping. Once the Fourier image is edited, it is then transformed back into the
spatial domain by using an inverse Fast Fourier Transform. The result is an enhanced
version of the original image.
This section focuses on the Fourier editing techniques available in the ERDAS
IMAGINE FFT Editor. Some rules and guidelines for using these tools are presented in
this document. Also included are some examples of techniques that will generally work
for specific applications, such as striping.
NOTE: You may also want to refer to the works cited at the end of this section for more
information.
The basic premise behind a Fourier transform is that any one-dimensional function, f(x)
(which might be a row of pixels), can be represented by a Fourier series consisting of
some combination of sine and cosine terms and their associated coefficients. For
example, a line of pixels with a high spatial frequency gray scale pattern might be repre-
sented in terms of a single coefficient multiplied by a sin(x) function. High spatial
frequencies are those that represent frequent gray scale changes in a short pixel
distance. Low spatial frequencies represent infrequent gray scale changes that occur
gradually over a relatively large number of pixel distances. A more complicated
function, f(x), might have to be represented by many sine and cosine terms with their
associated coefficients.
Figure 72: One-Dimensional Fourier Analysis (sine and cosine components plotted over 0 to 2π, together with the transform of the example function)
Figure 72 shows how a function f(x) can be represented as a linear combination of sine
and cosine. The Fourier transform of that same function is also shown.
Applications
Fourier transformations are typically used for the removal of noise such as striping,
spots, or vibration in imagery by identifying periodicities (areas of high spatial
frequency). Fourier editing can be used to remove regular errors in data such as those
caused by sensor anomalies (e.g., striping). This analysis technique can also be used
across bands as another form of pattern/feature recognition.
The FFT computes, for each band, the two-dimensional discrete Fourier transform of the image:

F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j2\pi(ux/M + vy/N)} \qquad 0 \le u \le M-1, \; 0 \le v \le N-1

where:
M = the number of pixels horizontally
N = the number of pixels vertically
u, v = spatial frequency variables
f(x, y) = the image value at pixel (x, y)
j = the imaginary component of a complex number
The number of pixels horizontally and vertically must each be a power of two. If the
dimensions of the input image are not a power of two, they are padded up to the next
highest power of two. There is more information about this later in this section.
Images computed by this algorithm are saved with an .fft file extension.
You should run a Fourier Magnitude transform on an .fft file before viewing it in the ERDAS
IMAGINE Viewer. The FFT Editor automatically displays the magnitude without further
processing.
Fourier Magnitude The raster image generated by the FFT calculation is not an optimum image for viewing
or editing. Each pixel of a Fourier image is a complex number (i.e., it has two compo-
nents—real and imaginary). For display as a single image, these components are
combined in a root-sum of squares operation. Also, since the dynamic range of Fourier
spectra vastly exceeds the range of a typical display device, the Fourier Magnitude
calculation involves a logarithmic function.
Finally, a Fourier image is symmetric about the origin (u,v = 0,0). If the origin is plotted
at the upper left corner, the symmetry is more difficult to see than if the origin is at the
center of the image. Therefore, in the Fourier magnitude image, the origin is shifted to
the center of the raster array.
In this transformation, each .fft layer is processed twice. First, the maximum magnitude,
|X|max, is computed. Then, the following computation is performed for each FFT
element magnitude x:
y(x) = 255.0 \, \ln\left( \frac{x}{|x|_{max}} (e - 1) + 1 \right)

where:
x = the input FFT element magnitude
|x|_{max} = the maximum magnitude in the layer
e = the natural logarithm base
This function was chosen so that y would be proportional to the logarithm of a linear
function of x, with y(0)=0 and y (|x|max) = 255.
Source: ERDAS
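A sketch of this scaling in NumPy (the helper name is an assumption; the mapping is the y(x) function given above, applied to the root-sum-of-squares magnitude, with the origin shifted to the center for display):

    import numpy as np

    def fourier_magnitude(fft_layer):
        mag = np.abs(fft_layer)                      # sqrt(real^2 + imaginary^2)
        y = 255.0 * np.log(mag / mag.max() * (np.e - 1.0) + 1.0)
        return np.fft.fftshift(y).astype(np.uint8)   # origin moved to the center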
In Figure 73, Image A is one band of a badly striped Landsat TM scene. Image B is the
Fourier Magnitude image derived from the Landsat image.
Figure 73: Example of Fourier Magnitude (Image A is a striped Landsat TM band; Image B is its Fourier Magnitude image; the origin of each image is marked)
It is important to realize that a position in a Fourier image, designated as (u,v), does not
always represent the same frequency, because it depends on the size of the input raster
image. A large spatial domain image contains components of lower frequency than a
small spatial domain image. As mentioned, these lower frequencies are plotted nearer
to the center (u,v = 0,0) of the Fourier image. Note that the units of spatial frequency are
inverse length, e.g., m⁻¹.
The sampling increments in the spatial and frequency domain are related by:
\Delta u = \frac{1}{M \Delta x} \qquad \Delta v = \frac{1}{N \Delta y}

where:
\Delta u, \Delta v = the sampling increments in the frequency domain
\Delta x, \Delta y = the sampling increments (pixel size) in the spatial domain
M, N = the horizontal and vertical dimensions of the image, in pixels
For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5m) into a
Fourier image:
\Delta u = \Delta v = \frac{1}{512 \times 28.5\ \mathrm{m}} = 6.85 \times 10^{-5}\ \mathrm{m}^{-1}
u or v    Frequency
0         0
1         6.85 × 10⁻⁵ m⁻¹
2         1.37 × 10⁻⁴ m⁻¹
If the image is 1024 × 1024 with the same pixel size:

\Delta u = \Delta v = \frac{1}{1024 \times 28.5\ \mathrm{m}} = 3.42 \times 10^{-5}\ \mathrm{m}^{-1}
u or v    Frequency
0         0
1         3.42 × 10⁻⁵ m⁻¹
2         6.85 × 10⁻⁵ m⁻¹
So, as noted above, the frequency represented by a (u,v) position depends on the size of
the input image.
For the above calculation, the sample images are 512 × 512 and 1024 × 1024—powers of
two. These were selected because the FFT calculation requires that the height and width
of the input image be a power of two (although the image need not be square). In
practice, input images will usually not meet this criterion. Three possible solutions are
available in ERDAS IMAGINE:
• Pad the image — the input raster is increased in size to the next power of two by
embedding it in a field of the mean value of the entire input image.
• Resample the image so that its height and width are powers of two.
• Subset the image so that the portion used has a height and width that are powers of two.
For example, Figure 74 shows a 512 × 300 image padded up to 512 × 512.

Figure 74: The Padding Technique
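A simple sketch of the padding approach (NumPy; padding with the image mean follows the description above, and the remaining names are assumptions):

    import numpy as np

    def next_power_of_two(n):
        p = 1
        while p < n:
            p *= 2
        return p

    def fft_with_padding(image):
        # Embed the image in a field of its mean value, sized up to the next
        # powers of two, then compute the 2-D FFT.
        rows, cols = image.shape
        prows, pcols = next_power_of_two(rows), next_power_of_two(cols)
        padded = np.full((prows, pcols), image.mean(), dtype=np.float64)
        padded[:rows, :cols] = image
        return np.fft.fft2(padded)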
• The input file must be in the compressed .fft format described earlier (i.e., output
from the Fast Fourier Transform or FFT Editor).
• If the original image was padded by the FFT program, the padding will
automatically be removed by IFFT.
• This program creates (and deletes, upon normal termination) a temporary file large
enough to contain one entire band of .fft data. The specific expression calculated by
this program is:
f(x, y) \leftarrow \frac{1}{N_1 N_2} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v)\, e^{\,j2\pi(ux/M + vy/N)} \qquad 0 \le x \le M-1, \; 0 \le y \le N-1

where:
M, N = the horizontal and vertical dimensions of the image, in pixels
F(u, v) = the Fourier image values
f(x, y) = the reconstructed spatial domain image values
Images computed by this algorithm are saved with an .ifft.img file extension by default.
Filtering Operations performed in the frequency (Fourier) domain can be visualized in the
context of the familiar convolution function. The mathematical basis of this interrela-
tionship is the convolution theorem, which states that a convolution operation in the
spatial domain is equivalent to a multiplication operation in the frequency domain:

g(x, y) = h(x, y) * f(x, y) \quad \Longleftrightarrow \quad G(u, v) = H(u, v) \times F(u, v)

where:
f(x, y) = the input image
h(x, y) = the convolution kernel
g(x, y) = the output image
F, H, G = the Fourier transforms of f, h, and g, respectively
The names high-pass, low-pass, high-frequency, etc., indicate that these convolution
functions derive from the frequency domain.
Low-Pass Filtering
The simplest example of this relationship is the low-pass kernel. The name, low-pass
kernel, is derived from a filter that would pass low frequencies and block (filter out)
high frequencies. In practice, this is easily achieved in the spatial domain by the
M = N = 3 kernel:
1 1 1
1 1 1
1 1 1
Obviously, as the size of the image and, particularly, the size of the low-pass kernel
increases, the calculation becomes more time-consuming. Depending on the size of the
input image and the size of the kernel, it can be faster to generate a low-pass image via
Fourier processing.
Figure 75 compares Direct and Fourier domain processing for finite area convolution.
In the Fourier domain, the low-pass operation is implemented by attenuating the pixels
whose frequencies satisfy:
u^2 + v^2 > D_0^2
For example:

Image Size    Window Radius (D0)    Equivalent Low-Pass Kernel
64 × 64       50                    3 × 3
64 × 64       30                    3.5 × 3.5
64 × 64       20                    5 × 5
64 × 64       10                    9 × 9
64 × 64       5                     14 × 14
128 × 128     20                    13 × 13
128 × 128     10                    22 × 22
256 × 256     20                    25 × 25
256 × 256     10                    42 × 42
This table shows that using a window on a 64 × 64 Fourier image with a radius of 50 as
the cutoff is the same as using a 3 × 3 low-pass kernel on a 64 × 64 spatial domain image.
High-Pass Filtering
Just as images can be smoothed (blurred) by attenuating the high-frequency compo-
nents of an image using low-pass filters, images can be sharpened and edge-enhanced
by attenuating the low-frequency components using high-pass filters. In the Fourier
domain, the high-pass operation is implemented by attenuating pixels whose
frequencies satisfy:
u^2 + v^2 < D_0^2
Windows The attenuation discussed above can be done in many different ways. In ERDAS
IMAGINE Fourier processing, five window functions are provided to achieve different
types of attenuation:
• Ideal
• Bartlett (triangular)
• Butterworth
• Gaussian
• Hanning (cosine)
Each of these windows must be defined when a frequency domain process is used. This
application is perhaps easiest understood in the context of the high-pass and low-pass
filter operations. Each window is discussed in more detail below.
Ideal
The simplest low-pass filtering is accomplished using the ideal window, so named
because its cutoff point is absolute. Note that in Figure 76 the cross section is “ideal.”
Figure 76: Low-Pass Filtering Using the Ideal Window (gain H(u,v) plotted against frequency D(u,v), with the cutoff at D0)
H(u, v) = \begin{cases} 1 & \text{if } D(u, v) \le D_0 \\ 0 & \text{if } D(u, v) > D_0 \end{cases}
All frequencies inside a circle of a radius D0 are retained completely (passed), and all
frequencies outside the radius are completely attenuated. The point D0 is termed the
cutoff frequency.
High-pass filtering using the ideal window looks like the illustration below.
Figure 77: High-Pass Filtering Using the Ideal Window (gain H(u,v) plotted against frequency D(u,v), with the cutoff at D0)
H(u, v) = \begin{cases} 0 & \text{if } D(u, v) \le D_0 \\ 1 & \text{if } D(u, v) > D_0 \end{cases}
All frequencies inside a circle of a radius D0 are completely attenuated, and all
frequencies outside the radius are retained completely (passed).
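As a sketch of how such a window can be applied (NumPy; the function name and the use of fft2/ifft2 are assumptions, not the FFT Editor implementation), the ideal low-pass retains frequencies with u² + v² ≤ D0² and the ideal high-pass retains the complement:

    import numpy as np

    def ideal_window_filter(image, d0, low_pass=True):
        # d0 is the cutoff radius in frequency samples.
        F = np.fft.fftshift(np.fft.fft2(image))
        rows, cols = F.shape
        v, u = np.ogrid[-(rows // 2):rows - rows // 2, -(cols // 2):cols - cols // 2]
        inside = (u * u + v * v) <= d0 * d0
        H = inside if low_pass else ~inside
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))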
A major disadvantage of the ideal filter is that it can cause “ringing” artifacts, particu-
larly if the radius (r) is small. The smoother functions (i.e., Butterworth, Hanning, etc.)
minimize this effect.
Bartlett
Filtering using the Bartlett window is a triangular function, as shown in the low- and
high-pass cross-sections below.
Figure 78: Filtering Using the Bartlett Window (low-pass and high-pass cross sections of gain H(u,v) versus frequency D(u,v), with the cutoff at D0)
The Butterworth window reduces the ringing effect because it does not contain abrupt
changes in value or slope. The low- and high-pass cross sections below illustrate this.
Figure 79: Filtering Using the Butterworth Window (low-pass and high-pass cross sections; the gain falls to 0.5 at the cutoff D0)
NOTE: The Butterworth window approaches its window center gain asymptotically.
The Hanning (cosine) window is defined by:

H(x) = \frac{1}{2}\left(1 + \cos\frac{\pi x}{2 D_0}\right) \quad \text{for } 0 \le x \le 2D_0, \qquad 0 \text{ otherwise}
Editing
In practice, it has been found that radial lines centered at the Fourier origin (u,v = 0,0)
are best removed using back-to-back wedges centered at (0,0). It is possible to remove
these lines using very narrow wedges with the Ideal window. However, the sudden
transitions resulting from zeroing out sections of a Fourier image will cause a ringing
of the image when it is transformed back into the spatial domain. This effect can be
lessened by using a less abrupt window, such as Butterworth.
Other types of noise can produce artifacts, such as lines not centered at u,v = 0,0 or
circular spots in the Fourier image. These can be removed using the tools provided in
the IMAGINE FFT Editor. As these artifacts are always symmetrical in the Fourier
magnitude image, editing tools operate on both components simultaneously. The
Fourier Editor contains tools that enable the user to attenuate a circular or rectangular
region anywhere on the image.
Periodic Noise Removal

The image is first divided into 128 × 128 pixel blocks. The Fourier Transform of each
block is calculated and the log-magnitudes of each FFT block are averaged. The
averaging removes all frequency domain quantities except those which are present in
each block (i.e., some sort of periodic interference). The average power spectrum is then
used as a filter to adjust the FFT of the entire image. When the inverse Fourier
Transform is performed, the result is an image which should have any periodic noise
eliminated or significantly reduced. This method is partially based on the algorithms
outlined in Cannon et al. 1983 and Srinivasan et al. 1988.
Select the Periodic Noise Removal option from Image Interpreter to use this function.
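A rough sketch of the block-averaging idea (NumPy; here the notch filter is applied block by block and the spike threshold is an arbitrary choice, so this illustrates the principle rather than the Image Interpreter algorithm):

    import numpy as np

    def remove_periodic_noise(image, block=128, k=3.0):
        rows, cols = image.shape
        out = image.astype(np.float64).copy()
        tiles = [(r, c) for r in range(0, rows - block + 1, block)
                        for c in range(0, cols - block + 1, block)]
        ffts = [np.fft.fft2(image[r:r + block, c:c + block]) for r, c in tiles]
        # Average of the log-magnitude spectra: only components present in
        # every block (periodic interference) stand out strongly.
        avg = np.mean([np.log1p(np.abs(F)) for F in ffts], axis=0)
        spikes = avg > avg.mean() + k * avg.std()
        spikes[0, 0] = False                      # never remove the mean (DC) term
        for (r, c), F in zip(tiles, ffts):
            F = F.copy()
            F[spikes] = 0.0                       # notch out the periodic components
            out[r:r + block, c:c + block] = np.real(np.fft.ifft2(F))
        return out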
Homomorphic Filtering Homomorphic filtering is based upon the principle that an image may be modeled as
the product of illumination and reflectance components:

x = i × r

where:
x = the image value
i = the illumination component
r = the reflectance component
The illumination image is a function of lighting conditions, shadows, etc. The reflec-
tance image is a function of the object being imaged. A log function can be used to
separate the two components (i and r) of the image:

ln x = ln i + ln r
This transforms the image from multiplicative to additive superposition. With the two
component images separated, any linear operation can be performed. In this appli-
cation, the image is now transformed into Fourier space. Because the illumination
component usually dominates the low frequencies, while the reflectance component
dominates the higher frequencies, the image may be effectively manipulated in the
Fourier domain.
By using a filter on the Fourier image, which increases the high-frequency components,
the reflectance image (related to the target material) may be enhanced, while the illumi-
nation image (related to the scene illumination) is de-emphasized.
Select the Homomorphic Filter option from Image Interpreter to use this function.
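A compact sketch of the homomorphic sequence (NumPy; the smooth high-frequency-emphasis gain curve and its limits are illustrative assumptions):

    import numpy as np

    def homomorphic_filter(image, d0=30.0, low_gain=0.5, high_gain=2.0):
        # log -> FFT -> suppress low frequencies (illumination), boost high
        # frequencies (reflectance) -> inverse FFT -> exp
        log_img = np.log1p(image.astype(np.float64))
        F = np.fft.fftshift(np.fft.fft2(log_img))
        rows, cols = F.shape
        v, u = np.ogrid[-(rows // 2):rows - rows // 2, -(cols // 2):cols - cols // 2]
        D2 = (u * u + v * v).astype(np.float64)
        H = low_gain + (high_gain - low_gain) * (1.0 - np.exp(-D2 / (2.0 * d0 * d0)))
        out = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
        return np.expm1(out)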
As mentioned earlier, if an input image is not a power of two, the ERDAS IMAGINE
Fourier analysis software will automatically pad the image to the next largest size to
make it a power of two. For manual editing, this causes no problems. However, in
automatic processing, such as the homomorphic filter, the artifacts induced by the
padding may have a deleterious effect on the output image. For this reason, it is recom-
mended that images that are not a power of two be subset before being used in an
automatic process.
A detailed description of the theory behind Fourier series and Fourier transforms is given in
Gonzales and Wintz (1977). See also Oppenheim (1975) and Press (1988).
Radar Imagery Enhancement

The nature of the surface phenomena involved in radar imaging is inherently different
from that of VIS/IR images. When VIS/IR radiation strikes a surface it is either
absorbed, reflected, or transmitted. The absorption is based on the molecular bonds in
the (surface) material. Thus, this imagery provides information on the chemical compo-
sition of the target.
When radar microwaves strike a surface, they are reflected according to the physical
and electrical properties of the surface, rather than the chemical composition. The
strength of radar return is affected by slope, roughness, and vegetation cover. The
conductivity of a target area is related to the porosity of the soil and its water content.
Consequently, radar and VIS/IR data are complementary; they provide different infor-
mation about the target area. An image in which these two data types are intelligently
combined can present much more information than either image by itself.
See "CHAPTER 1: Raster Data" and "CHAPTER 3: Raster and Vector Data Sources" for more
information on radar data.
This section describes enhancement techniques that are particularly useful for radar
imagery. While these techniques can be applied to other types of image data, this
discussion will focus on the special requirements of radar imagery enhancement.
ERDAS IMAGINE Radar provides a sophisticated set of image processing tools
designed specifically for use with radar imagery. This section will describe the
functions of ERDAS IMAGINE Radar.
For information on the Radar Image Enhancement function, see the section on "Radiometric
Enhancement" on page 132.
Speckle Noise Speckle noise is commonly observed in radar (microwave or millimeter wave) sensing
systems, although it may appear in any type of remotely sensed image utilizing
coherent radiation. An active radar sensor gives off a burst of coherent radiation that
reflects from the target, unlike a passive microwave sensor that simply receives the low-
level radiation naturally emitted by targets.
Like the light from a laser, the waves emitted by active sensors travel in phase and
interact minimally on their way to the target area. After interaction with the target area,
these waves are no longer in phase. This is due to the different distances they travel
from targets, or single versus multiple bounce scattering.
Once out of phase, radar waves can interact to produce light and dark pixels known as
speckle noise. Speckle noise must be reduced before the data can be effectively utilized.
However, the image processing programs used to reduce speckle noise produce
changes in the image.
Since any image processing done before removal of the speckle results in the noise being incor-
porated into and degrading the image, you should not rectify, correct to ground range, or in any
way resample, enhance, or classify the pixel values before removing speckle noise. Functions
using nearest neighbor are technically permissible, but not advisable.
Several filters are available for reducing speckle noise, including the following:

• Mean filter
• Median filter
• Lee-Sigma filter
• Lee filter
• Frost filter
• Gamma-MAP filter
NOTE: Speckle noise in radar images cannot be completely removed. However, it can be reduced
significantly.
Mean Filter
The Mean filter is a simple calculation. The pixel of interest (center of window) is
replaced by the arithmetic average of all values within the window. This filter does not
remove the aberrant (speckle) value—it averages it into the data.
In theory, a bright and a dark pixel within the same window would cancel each other
out. This consideration would argue in favor of a large window size (e.g.,
7 × 7). However, averaging results in a loss of detail, which argues for a small window
size.
In general, this is the least satisfactory method of speckle reduction. It is useful for
“quick and dirty” applications or those where loss of resolution is not a problem.
Median Filter
A better way to reduce speckle, but still simplistic, is the Median filter. This filter
operates by arranging all DN (digital number) values within the user-defined window
in sequential order. The pixel of interest is replaced by the value in the center of this
distribution. A Median filter is useful for removing pulse or spike noise. Pulse functions
of less than one-half of the moving window width are suppressed or eliminated. In
addition, step functions or ramp functions are retained.
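The mechanics of the Median filter are simple enough to sketch in a few lines of NumPy (a library routine such as scipy.ndimage.median_filter would normally be used instead; the reflective border handling here is an arbitrary choice):

    import numpy as np

    def median_filter(image, size=3):
        # Replace each pixel by the median of the size x size window around it.
        pad = size // 2
        padded = np.pad(image, pad, mode="reflect")
        out = np.empty(image.shape, dtype=np.float64)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j] = np.median(padded[i:i + size, j:j + size])
        return out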
The effect of Mean and Median filters on various signals is shown (for one dimension)
in Figure 81.
Figure 81: Effects of Mean and Median Filters (on step, ramp, single pulse, and double pulse signals)
The Median filter is useful for noise suppression in any image. It does not affect step or
ramp functions—it is an edge preserving filter (Pratt 1991). It is also applicable in
removing pulse function noise which results from the inherent pulsing of microwaves.
An example of the application of the Median filter is the removal of dead-detector
striping, such as is found in Landsat 4 TM data (Crippen 1989).
Local Region Filter

Figure 82: Regions Surrounding the Pixel of Interest (for example, the North, NE, and SW regions)
\text{Variance} = \frac{\sum (DN_{x, y} - \text{Mean})^2}{n - 1}
Source: Nagao 1979
The algorithm compares the variance values of the regions surrounding the pixel of
interest. The pixel of interest is replaced by the mean of all DN values within the region
with the lowest variance, i.e., the most uniform region. A region with low variance is
assumed to have pixels minimally affected by wave interference yet very similar to the
pixel of interest. A region of low variance will probably be such for several surrounding
pixels.
The result is that the output image is composed of numerous uniform areas, the size of
which is determined by the moving window size. In practice, this filter can be utilized
sequentially 2 or 3 times, increasing the window size. The resultant output image is an
appropriate input to a classification application.
It can be assumed that imaging radar data noise follows a Gaussian distribution. This
would yield a theoretical value for Standard Deviation (SD) of .52 for 1-look radar data
and SD = .26 for 4-look radar data.
Table 17 gives theoretical coefficient of variation values for various look-average radar
scenes:
The Lee filters are based on the assumption that the mean and variance of the pixel of
interest are equal to the local mean and variance of all pixels within the user-selected
moving window.
The Lee filter output is computed as:

DN_{out} = \text{Mean} + K (DN_{in} - \text{Mean})

where:
Mean = the average of all DN values within the moving window
σ = the coefficient of variation of the noise (the sigma value)

K = \frac{\text{Var}(x)}{[\text{Mean}]^2 \sigma^2 + \text{Var}(x)}

and the variance of x is estimated from the window statistics as:

\text{Var}(x) = \frac{[\text{Variance within window}] + [\text{Mean within window}]^2}{\sigma^2 + 1} - [\text{Mean within window}]^2
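A NumPy sketch of the Lee filter calculation above (the uniform_filter-based local statistics, the small stabilizing constant, and the default sigma value are assumptions):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(image, size=7, sigma=0.26):
        # sigma: coefficient of variation of the speckle (e.g., about 0.26
        # for 4-look radar data).
        img = image.astype(np.float64)
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img * img, size)
        win_var = mean_sq - mean * mean
        var_x = (win_var + mean * mean) / (sigma * sigma + 1.0) - mean * mean
        var_x = np.maximum(var_x, 0.0)
        k = var_x / (mean * mean * sigma * sigma + var_x + 1e-12)
        return mean + k * (img - mean)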
As with all the Radar speckle filters, the user must specify a moving window size, the
center pixel of which is the pixel of interest.
As with the Statistics filter, a coefficient of variation specific to the data set must be
input. Finally, the user must specify how many standard deviations to use (2, 1, or 0.5)
to define the accepted range.
The statistical filters (Sigma and Statistics) are logically applicable to any data set for
preprocessing. Any sensor system has various sources of noise, resulting in a few erratic
pixels. In VIS/IR imagery, most natural scenes are found to follow a normal distri-
bution of DN values, thus filtering at 2 standard deviations should remove this noise.
This is particularly true of experimental sensor systems that frequently have significant
noise problems.
These speckle filters can be used iteratively. The user must view and evaluate the
resultant image after each pass (the data histogram is useful for this), and then decide
if another pass is appropriate and what parameters to use on the next pass. For example,
three passes of the Sigma filter with the following parameters is very effective when
used with any type of data:
Pass    Sigma Multiplier    Sigma Value    Window Size
Similarly, there is no reason why successive passes must be of the same filter. The
following sequence is useful prior to a classification:
With all speckle reduction filters there is a playoff between noise reduction and loss of resolution.
Each data set and each application will have a different acceptable balance between these two
factors. The ERDAS IMAGINE filters have been designed to be versatile and gentle in reducing
noise (and resolution).
Frost Filter

DN = \sum_{n \times n} K \alpha\, e^{-\alpha t}

where:
K = normalization constant
I = local mean
σ = local variance
n = moving window size
t = distance from the pixel of interest

and

\alpha = \left(\frac{4}{n\sigma^2}\right)\left(\frac{\sigma^2}{I^2}\right)
Gamma-MAP Filter
The Maximum A Posteriori (MAP) filter attempts to estimate the original pixel DN,
which is assumed to lie between the local average and the degraded (actual) pixel DN.
MAP logic maximizes the a posteriori probability density function with respect to the
original image.
Many speckle reduction filters (e.g., Lee, Lee-Sigma, Frost) assume a Gaussian distri-
bution for the speckle noise. Recent work has shown this to be an invalid assumption.
Natural vegetated areas have been shown to be more properly modeled as having a
Gamma distributed cross section. This algorithm incorporates this assumption. The
exact formula used is the cubic equation:
\hat{I}^3 - \bar{I}\hat{I}^2 + \sigma(\hat{I} - DN) = 0

where:
Î = the sought value
Ī = the local mean
DN = the input value
Edge Detection

Figure 83: One-Dimensional Edge Models (step edge, ramp edge, line, and roof edge, each shown as DN value versus position x or y)
• Line — a region bounded on each end by an edge; width must be less than the
moving window size.
The models in Figure 83 represent ideal theoretical edges. However, real data values
will vary to produce a more distorted edge, due to sensor noise, vibration, etc. (see
Figure 84). There are no perfect edges in raster data, hence the need for edge detection
algorithms.
Edge detection algorithms can be broken down into 1st-order derivative and 2nd-order
derivative operations. Figure 85 shows ideal one-dimensional edge and line intensity
curves with the associated 1st-order and 2nd-order derivatives.
Figure 85: Edge and Line Intensity Curves with Their 1st-Order and 2nd-Order Derivatives (the original feature g(x), its 1st derivative ∂g/∂x, and its 2nd derivative ∂²g/∂x², each plotted against x)
In two dimensions, these derivatives can be approximated by convolution kernels, for example:

\frac{\partial}{\partial x} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix} \quad\text{and}\quad \frac{\partial}{\partial y} = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix}

\frac{\partial^2}{\partial x^2} = \begin{bmatrix} -1 & 2 & -1 \\ -1 & 2 & -1 \\ -1 & 2 & -1 \end{bmatrix} \quad\text{and}\quad \frac{\partial^2}{\partial y^2} = \begin{bmatrix} -1 & -1 & -1 \\ 2 & 2 & 2 \\ -1 & -1 & -1 \end{bmatrix}
To avoid positional shift, all operating windows are odd number arrays, with the center
pixel being the pixel of interest. Extension of the 3 × 3 impulse response arrays to a
larger size is not clear cut— different authors suggest different lines of rationale. For
example, it may be advantageous to extend the 3-level (Prewitt 1970) to:
\begin{bmatrix} 1 & 1 & 0 & -1 & -1 \\ 1 & 1 & 0 & -1 & -1 \\ 1 & 1 & 0 & -1 & -1 \\ 1 & 1 & 0 & -1 & -1 \\ 1 & 1 & 0 & -1 & -1 \end{bmatrix}

\begin{bmatrix} 2 & 1 & 0 & -1 & -2 \\ 2 & 1 & 0 & -1 & -2 \\ 2 & 1 & 0 & -1 & -2 \\ 2 & 1 & 0 & -1 & -2 \\ 2 & 1 & 0 & -1 & -2 \end{bmatrix} \quad\text{or}\quad \begin{bmatrix} 4 & 2 & 0 & -2 & -4 \\ 4 & 2 & 0 & -2 & -4 \\ 4 & 2 & 0 & -2 & -4 \\ 4 & 2 & 0 & -2 & -4 \\ 4 & 2 & 0 & -2 & -4 \end{bmatrix}
Larger template arrays provide a greater noise immunity, but are computationally
more demanding.
Zero-Sum Filters
A common type of edge detection kernel is a zero-sum filter. For this type of filter, the
coefficients are designed to add up to zero. Examples of two zero-sum filters are given
below:
Sobel:

\text{horizontal} = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \qquad \text{vertical} = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix}

Prewitt:

\text{horizontal} = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} \qquad \text{vertical} = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix}
Prior to edge enhancement, you should reduce speckle noise by using the Radar Speckle
Suppression function.
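A minimal sketch of applying the zero-sum Sobel kernels and combining the two directional responses into a gradient magnitude (NumPy; the kernel-application helper is written out so the window arithmetic is visible):

    import numpy as np

    SOBEL_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)
    SOBEL_V = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=np.float64)

    def apply_kernel3x3(image, kernel):
        padded = np.pad(image.astype(np.float64), 1, mode="reflect")
        out = np.zeros(image.shape, dtype=np.float64)
        for di in range(3):
            for dj in range(3):
                out += kernel[di, dj] * padded[di:di + image.shape[0],
                                               dj:dj + image.shape[1]]
        return out

    def sobel_edges(image):
        gh = apply_kernel3x3(image, SOBEL_H)
        gv = apply_kernel3x3(image, SOBEL_V)
        return np.hypot(gh, gv)   # flat areas give 0 because the kernels sum to zero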
Unweighted line:
–1 2 –1
–1 2 –1
–1 2 –1
Weighted line:
–1 2 –1
–2 4 –2
–1 2 –1
Texture According to Pratt (1991), “Many portions of images of natural scenes are devoid of
sharp edges over large areas. In these areas the scene can often be characterized as
exhibiting a consistent structure analogous to the texture of cloth. Image texture
measurements can be used to segment an image and classify its segments.”
The user could also prepare a three-color image using three different functions
operating through the same (or different) size moving window(s). However, each data
set and application would need different moving window sizes and/or texture
measures to maximize the discrimination.
The interaction of the radar waves with the surface of interest is dominated by reflection
involving the surface roughness at the wavelength scale. In VIS/IR imaging, the
phenomenon involved is absorption at the molecular level. Also, as we know from array-
type antennae, radar is especially sensitive to regularity that is a multiple of its
wavelength. This provides for a more precise method for quantifying the character of
texture in a radar return.
The ability to use radar data to detect texture and provide topographic information
about an image is a major advantage over other types of imagery where texture is not a
quantitative characteristic.
The texture transforms can be used in several ways to enhance the use of radar imagery.
Adding the radar intensity image as an additional layer in a (vegetation) classification
is fairly straightforward and may be useful. However, the proper texture image
(function and window size) can greatly increase the discrimination. Using known test
sites, one can experiment to discern which texture image best aids the classification. For
example, the texture image could then be added as an additional layer to the TM bands.
As radar data come into wider use, other mathematical texture definitions will prove useful and
will be added to ERDAS IMAGINE Radar. In practice, you will interactively decide which
algorithm and window size is best for your data and application.
The algorithms incorporated into ERDAS IMAGINE are those which are applicable in
a wide variety of situations and are not computationally over-demanding. This latter
point becomes critical as the moving window size increases. Research has shown that
very large moving windows are often needed for proper enhancement. For example,
Blom (Blom et al 1982) uses up to a 61 × 61 window.
Four algorithms are currently utilized for texture enhancement in ERDAS IMAGINE:
• mean Euclidean distance (1st-order)
• variance (2nd-order)
• skewness (3rd-order)
• kurtosis (4th-order)
\text{Mean Euclidean Distance} = \frac{\sum_{ij} \left[ \sum_{\lambda} (x_{c\lambda} - x_{ij\lambda})^2 \right]^{1/2}}{n - 1}
where:
xijλ= DN value for spectral band λ and pixel (i,j) of a multispectral image
xcλ= DN value for spectral band λ of a window’s center pixel
n = number of pixels in a window
Variance
\text{Variance} = \frac{\sum (x_{ij} - M)^2}{n - 1}
where:
x_{ij} = DN value of pixel (i, j)
n = number of pixels in a window
M = Mean of the moving window, where

M = \frac{\sum x_{ij}}{n}
Skewness

\text{Skew} = \frac{\sum (x_{ij} - M)^3}{(n - 1)\, V^{3/2}}

where:
M = Mean of the moving window
V = Variance (as defined above)
Kurtosis
\text{Kurtosis} = \frac{\sum (x_{ij} - M)^4}{(n - 1)\, V^2}

where:
M = Mean of the moving window
V = Variance (as defined above)
Texture analysis is available from the Texture function in Image Interpreter and from the Radar
Texture Analysis function.
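A sketch of these moving-window texture measures (pure NumPy loops for clarity; the reflective border handling and the small constant guarding against zero variance are assumptions):

    import numpy as np

    def texture_measures(image, size=3):
        pad = size // 2
        padded = np.pad(image.astype(np.float64), pad, mode="reflect")
        n = size * size
        var = np.zeros(image.shape)
        skew = np.zeros(image.shape)
        kurt = np.zeros(image.shape)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                w = padded[i:i + size, j:j + size]
                m = w.mean()
                v = ((w - m) ** 2).sum() / (n - 1)
                var[i, j] = v
                skew[i, j] = ((w - m) ** 3).sum() / ((n - 1) * v ** 1.5 + 1e-12)
                kurt[i, j] = ((w - m) ** 4).sum() / ((n - 1) * v ** 2 + 1e-12)
        return var, skew, kurt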
Radiometric Correction - Radar Imagery

The raw radar image frequently contains radiometric errors due to:
• imperfections in the transmit and receive pattern of the radar antenna
• the inherently stronger signal from a near range (closest to the sensor flight path)
than a far range (farthest from the sensor flight path) target
Many imaging radar systems use a single antenna that transmits the coherent radar
burst and receives the return echo. However, no antenna is perfect; it may have various
lobes, dead spots, and imperfections. This causes the received signal to be slightly
distorted radiometrically. In addition, range fall-off will cause far range targets to be
darker (less return signal).
These two problems can be addressed by adjusting the average brightness of each
range line to a constant— usually the average overall scene brightness (Chavez 1986).
This requires that each line of constant range be long enough to reasonably approx-
imate the overall scene brightness (see Figure 86). This approach is generic; it is not
specific to any particular radar sensor.
The Adjust Brightness function in ERDAS IMAGINE works by correcting each range line
average. For this to be a valid approach, the number of data values must be large enough to
provide good average values. Be careful not to use too small an image. This will depend upon the
character of the scene itself.
Figure 86: Adjusting the brightness of each range line to the overall scene average

a = average data value of each row of data

\text{Overall average} = \frac{a_1 + a_2 + a_3 + a_4 + \ldots + a_x}{x}

\frac{\text{Overall average}}{a_x} = \text{calibration coefficient of line } x
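A sketch of the range-line adjustment (NumPy; here the lines of constant range are assumed to run down the image columns, so each column mean is brought to the overall scene mean):

    import numpy as np

    def adjust_brightness(image):
        img = image.astype(np.float64)
        overall = img.mean()
        line_means = img.mean(axis=0)            # one mean per line of constant range
        coeff = overall / (line_means + 1e-12)   # calibration coefficient per line
        return img * coeff[np.newaxis, :]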
• Range lines — lines that are perpendicular to the flight of the sensor
• Lines of constant range — lines that are parallel to the flight of the sensor
Because radiometric errors are a function of the imaging geometry, the image must be
correctly oriented during the correction process. For the algorithm to correctly address
the data set, the user must tell ERDAS IMAGINE whether the lines of constant range
are in columns or rows in the displayed image.
Figure 87 shows the lines of constant range in columns, parallel to the sides of the
display screen:
Figure 87: Lines of Constant Range (shown in columns, with the range direction across the display)
Slant-to-Ground Range Correction

Radar images also require slant-to-ground range correction, which is similar in concept
to orthocorrecting a VIS/IR image. By design, an imaging radar is always side-looking.
In practice, the depression angle is usually 75° at most. In operation, the radar sensor
determines the range (distance to) each target, as shown in Figure 88.
Figure 88: Slant Range and Ground Range Geometry (θ = depression angle; Dists = slant range distance; Distg = ground range distance; angle ACB is a right angle)
Assuming that angle ACB is a right angle, the user can approximate:
\cos \theta = \frac{\mathrm{Dist}_s}{\mathrm{Dist}_g}

where:
Dist_s = the slant range distance
Dist_g = the ground range distance
Source: Leberl 1990
• Depression angle (θ) — angular distance between sensor horizon and scene center
• Sensor height (H) — elevation of sensor (in meters) above its nadir point
• Beam width— angular distance between near range and far range for entire scene
This information is usually found in the header file of data. Use the Data View... option to view
this information. If it is not contained in the header file, you must obtain this information from
the data supplier.
Once the scene is range-format corrected, pixel size can be changed for coregistration
with other data sets.
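A minimal sketch of the relation (treating the depression angle as a single constant for the whole scene is a simplification; in practice it varies from near range to far range):

    import math

    def ground_range(slant_range_m, depression_angle_deg):
        # Dist_g = Dist_s / cos(theta), from the right-triangle approximation above
        return slant_range_m / math.cos(math.radians(depression_angle_deg))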
Merging Radar with VIS/IR Imagery

As mentioned above, the phenomena involved in radar imaging are quite different from
those in VIS/IR imaging. Because these two sensor types give different information
about the same target (chemical vs. physical), they are complementary data sets. If the
two images are correctly combined, the resultant image will convey both chemical and
physical information and could prove more useful than either image alone.
The methods for merging radar and VIS/IR data are still experimental and open for
exploration. The following methods are suggested for experimentation:
• Co-displaying in a Viewer
• RGB to IHS transforms
• Principal components transform
• Multiplicative
The ultimate goal of enhancement is not mathematical or logical purity - it is feature extraction.
There are currently no rules to suggest which options will yield the best results for a particular
application; you must experiment. The option that proves to be most useful will depend upon the
data sets (both radar and VIS/IR), your experience, and your final objective.
Co-Displaying
The simplest and most frequently used method of combining radar with VIS/IR
imagery is co-displaying on an RGB color monitor. In this technique the radar image is
displayed with one (typically the red) gun, while the green and blue guns display
VIS/IR bands or band ratios. This technique follows from no logical model and does not
truly merge the two data sets.
Use the Viewer with the Clear Display option disabled for this type of merge. Select the color
guns to display the different layers.
Multiplicative
A final method to consider is the multiplicative technique. This requires several
chromatic components and a multiplicative component, which is assigned to the image
intensity. In practice, the chromatic components are usually band ratios or PCs; the
radar image is input multiplicatively as intensity (Holcomb 1993).
The two sensor merge models using transforms to integrate the two data sets (Principal
Components and RGB to IHS) are based on the assumption that the radar intensity
correlates with the intensity that the transform derives from the data inputs. However,
the logic of mathematically merging radar with VIS/IR data sets is inherently different
from the logic of the SPOT/TM merges (as discussed under the section in this chapter
on Resolution Merge). It cannot be assumed that the radar intensity is a surrogate for,
or equivalent to, the VIS/IR intensity. The acceptability of this assumption will depend
on the specific case.
CHAPTER 6
Classification
Introduction Multispectral classification is the process of sorting pixels into a finite number of
individual classes, or categories of data, based on their data file values. If a pixel
satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to those
criteria. This process is also referred to as image segmentation.
Depending on the type of information the user wants to extract from the original data,
classes may be associated with known features on the ground or may simply represent
areas that “look different” to the computer. An example of a classified image is a land
cover map, showing vegetation, bare land, pasture, urban, etc.
In a computer system, spectral pattern recognition can be more scientific. Statistics are
derived from the spectral characteristics of all pixels in an image. Then, the pixels are
sorted based on mathematical criteria. The classification process breaks down into two
parts—training and classifying (using a decision rule).
Training First, the computer system must be trained to recognize patterns in the data. Training
is the process of defining the criteria by which these patterns are recognized (Hord
1982). Training can be performed with either a supervised or an unsupervised method,
as explained below.
Supervised Training
Supervised training is closely controlled by the analyst. In this process, the user selects
pixels that represent patterns or landcover features that they recognize, or that they can
identify with help from other sources, such as aerial photos, ground truth data, or maps.
Knowledge of the data, and of the classes desired, is required before classification.
By identifying patterns, the user can “train” the computer system to identify pixels with
similar characteristics. If the classification is accurate, the resulting classes represent the
categories within the data that the user originally identified.
Unsupervised Training

Unsupervised training is dependent upon the data itself for the definition of classes.
This method is usually used when less is known about the data before classification. It
is then the analyst’s responsibility, after classification, to attach meaning to the resulting
classes (Jensen 1996). Unsupervised classification is useful only if the classes can be
appropriately interpreted.
Signatures The result of training is a set of signatures that defines a training sample or cluster. Each
signature corresponds to a class, and is used with a decision rule (explained below) to
assign the pixels in the image file (.img) to a class. Signatures in ERDAS IMAGINE can
be parametric or non-parametric.
ERDAS IMAGINE enables the user to generate statistics for a non-parametric signature.
This function will allow a feature space object to be used to create a parametric
signature from the image being classified. However, since a parametric classifier
requires a normal distribution of data, the only feature space object for which this
would be mathematically valid would be an ellipse (Kloer 1994).
When both parametric and non-parametric signatures are used to classify an image, the
user is more able to analyze and visualize the class definitions than either type of
signature provides independently (Kloer 1994).
See "APPENDIX A: Math Topics" for information on feature space images and how they are
created.
Decision Rule After the signatures are defined, the pixels of the image are sorted into classes based on
the signatures, by use of a classification decision rule. The decision rule is a mathe-
matical algorithm that, using data contained in the signature, performs the actual
sorting of pixels into distinct class values.
• Anderson, J.R., et al. 1976. “A Land Use and Land Cover Classification System for
Use with Remote Sensor Data.” U.S. Geological Survey Professional Paper 964.
• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of
the United States. Washington, D.C.: U.S. Fish and Wildlife Service.
• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use,
Cover and Forms Classification System. Florida Department of Transportation,
Procedure No. 550-010-001-a.
• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land
Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land
Use.
Other states or government agencies may also have specialized land use/cover studies.
It is recommended that the user begin the classification process by defining a
classification scheme for the application, using previously developed schemes, like those
above, as a general framework.
Iterative Classification A process is iterative when it repeats an action. The objective of the ERDAS IMAGINE
system is to enable the user to iteratively create and refine signatures and classified .img
files to arrive at a desired final classification. The IMAGINE classification utilities are a
“tool box” to be used as needed, not a numbered list of steps that must always be
followed in order.
The total classification can be achieved with either the supervised or unsupervised
methods, or a combination of both. Some examples are below:
• Signatures created from both supervised and unsupervised training can be merged
and appended together.
• Signature evaluation tools can be used to indicate which signatures are spectrally
similar. This will help to determine which signatures should be merged or deleted.
These tools also help define optimum band combinations for classification. Using
the optimum band combination may reduce the time required to run a classification
process.
Supervised vs. Unsupervised Training

In supervised training, it is important to have a set of desired classes in mind, and then
create the appropriate signatures from the data. The user must also have some way of
recognizing pixels that represent the classes that he or she wants to extract.
On the other hand, if the user wants the classes to be determined by spectral distinctions
that are inherent in the data, so that he or she can define the classes later, then the appli-
cation is better suited to unsupervised training. Unsupervised training enables the user
to define many classes easily, and identify classes that are not in contiguous, easily
recognized regions.
NOTE: Supervised classification also includes using a set of classes that was generated from an
unsupervised classification. Using a combination of supervised and unsupervised classification
may yield optimum results, especially with large data sets (e.g., multiple Landsat scenes). For
example, unsupervised classification may be useful for generating a basic set of classes, then
supervised classification could be used for further definition of the classes.
Classifying Enhanced Data

For many specialized applications, classifying data that have been merged, spectrally
merged or enhanced—with principal components, image algebra, or other transforma-
tions—can produce very specific and meaningful results. However, without under-
standing the data and the enhancements used, it is recommended that only the original,
remotely-sensed data be classified.
Adding Dimensions
Using programs in ERDAS IMAGINE, the user can add layers to existing .img files.
Therefore, the user can incorporate data (called ancillary data) other than remotely-
sensed data into the classification. Using ancillary data enables the user to incorporate
variables into the classification from, for example, vector layers, previously classified
data, or elevation data. The data file values of the ancillary data become an additional
feature of each pixel, thus influencing the classification (Jensen 1996).
Limiting Dimensions
Although ERDAS IMAGINE allows an unlimited number of layers of data to be used
for one classification, it is usually wise to reduce the dimensionality of the data as much
as possible. Often, certain layers of data are redundant or extraneous to the task at hand.
Unnecessary data take up valuable disk space, and cause the computer system to
perform more arduous calculations, which slows down processing.
Use the Signature Editor to evaluate separability to calculate the best subset of layer combina-
tions. Use the Image Interpreter functions to merge or subset layers. Use the Image Information
tool (in the ERDAS IMAGINE icon panel) to delete a layer(s).
Supervised Training Supervised training requires a priori (already known) information about the data, such
as:
• What type of classes need to be extracted? Soil type? Land use? Vegetation?
• What classes are most likely to be present in the data? That is, which types of land
cover, soil, or vegetation (or whatever) are represented by the data?
In supervised training, the user relies on his or her own pattern recognition skills and a
priori knowledge of the data to help the system determine the statistical criteria (signa-
tures) for data classification.
To select reliable samples, the user should know some information—either spatial or
spectral—about the pixels that they want to classify.
The location of a specific characteristic, such as a land cover type, may be known
through ground truthing. Ground truthing refers to the acquisition of knowledge
about the study area from field work, analysis of aerial photography, personal
experience, etc. Ground truth data are considered to be the most accurate (true) data
available about the area of study. They should be collected at the same time as the
remotely sensed data, so that the data correspond as much as possible (Star and Estes
1990). However, some ground data may not be very accurate due to a number of errors,
inaccuracies, and human shortcomings.
Training Samples and Feature Space Objects

Training samples (also called samples) are sets of pixels that represent what is recognized
as a discernible pattern, or potential class. The system will calculate statistics from
the sample pixels to create a parametric signature for the class.
• Training field, or training site, is the geographical area(s) of interest (AOI) in the
image represented by the pixels in a sample. Usually, it is previously identified
with the use of ground truth data.
Feature space objects are user-defined areas of interest (AOIs) in a feature space image.
The feature space signature is based on these object(s).
ERDAS IMAGINE enables the user to identify training samples via one or more of the
following methods:
• using a class from a thematic raster layer from an image file of the same area (i.e.,
the result of an unsupervised classification)
Digitized Polygon
Training samples can be identified by their geographical location (training sites, using
maps, ground truth data). The locations of the training sites can be digitized from maps
with the ERDAS IMAGINE Vector or AOI tools. Polygons representing these areas are
then stored as vector layers. The vector layers can then be used as input to the AOI tools
and used as training samples to create signatures.
Use the Vector and AOI tools to digitize training samples from a map. Use the Signature Editor
to create signatures from training samples that are identified with digitized polygons.
User-Defined Polygon
Using his or her pattern recognition skills (with or without supplemental ground truth
information), the user can identify samples by examining a displayed image of the data
and drawing a polygon around the training site(s) of interest. For example, if it is
known that oak trees reflect certain frequencies of green and infrared light according to
ground truth data, the user may be able to base his or her sample selections on the data
(taking atmospheric conditions, sun angle, time, date, and other variations into
account). The area within the polygon(s) would be used to create a signature.
Use the AOI tools to define the polygon(s) to be used as the training sample. Use the Signature
Editor to create signatures from training samples that are identified with the polygons.
When one or more of the contiguous pixels is accepted, the mean of the sample is calcu-
lated from the accepted pixels. Then, the pixels contiguous to the sample are compared
in the same way. This process repeats until no pixels that are contiguous to the sample
satisfy the spectral parameters. In effect, the sample “grows” outward from the model
pixel with each iteration. These homogeneous pixels will be converted from individual
raster pixels to a polygon and used as an area of interest (AOI) layer.
Select the Seed Properties option in the Viewer to identify training samples with a seed pixel.
Vector layers (polygons or lines) can be displayed as the top layer in the Viewer, and the
boundaries can then be used as an AOI for training samples defined under Seed Properties.
NOTE: The thematic raster layer must have the same coordinate system as the image file being
classified.
Training Samples
See "Evaluating Signatures" on page 236 for methods of determining the accuracy of the
signatures created from your training samples.
Selecting Feature Space Objects

The ERDAS IMAGINE Feature Space tools enable the user to interactively define
feature space objects (AOIs) in the feature space image(s). A feature space image is
simply a graph of the data file values of one band of data against the values of another
band (often called a scatterplot). In ERDAS IMAGINE, a feature space image has the
same data structure as a raster image; therefore, feature space images can be used with
other IMAGINE utilities, including zoom, color level slicing, virtual roam, Spatial
Modeler, and Map Composer.
Figure 89: Example of a Feature Space Image (data file values of band 1 plotted against band 2)
The transformation of a multilayer raster image into a feature space image is done by
mapping the input pixel values to a position in the feature space image. This transfor-
mation defines only the pixel position in the feature space image. It does not define the
pixel’s value. The pixel values in the feature space image can be the accumulated
frequency, which is calculated when the feature space image is defined. The pixel
values can also be provided by a thematic raster layer of the same geometry as the
source multilayer image. Mapping a thematic layer into a feature space image can be
useful for evaluating the validity of the parametric and non-parametric decision bound-
aries of a classification (Kloer 1994).
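In essence, the accumulated-frequency case is a two-dimensional histogram of two bands; a NumPy sketch (the 0 to 255 axis range assumes 8-bit input bands):

    import numpy as np

    def feature_space_image(band_a, band_b, bins=256):
        # Count how many pixels fall at each (band_a, band_b) value pair.
        fs, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(),
                                  bins=bins, range=[[0, bins], [0, bins]])
        return fs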
When you display a feature space image file (.fsp.img) in an ERDAS IMAGINE Viewer, the
colors reflect the density of points for both bands. The bright tones represent a high density and
the dark tones represent a low density.
A single feature space image, but multiple AOIs, can be used to define the signature.
This signature is taken within the feature space image, not the image being classified.
The pixels in the image that correspond to the data file values in the signature (i.e.,
feature space object) will be assigned to that class.
One fundamental difference between using the feature space image to define a training
sample and the other traditional methods is that it is a non-parametric signature. The
decisions made in the classification process have no dependency on the statistics of the
pixels. This helps improve classification accuracies for specific non-normal classes, such
as urban and exposed rock (Faust, et al 1991).
The user can have as many feature space images with different band combinations as
desired. Any polygon or rectangle in these feature space images can be used as a non-
parametric signature. However, only one feature space image can be used per
signature. The polygons in the feature space image can be easily modified and/or
masked until the desired regions of the image have been identified.
Use the Feature Space tools in the Signature Editor to create a feature space image and mask the
signature. Use the AOI tools to draw polygons.
Advantages:
• Provide an accurate way to classify a class with a non-normal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable in a feature space image.
• The classification decision process is fast.

Disadvantages:
• The classification decision process allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.
Unsupervised Training

Unsupervised training requires only minimal initial input from the user. However, the
user will have the task of interpreting the classes that are created by the unsupervised
training algorithm.
Clusters
Clusters are defined with a clustering algorithm, which often uses all or many of the
pixels in the input data file for its analysis. The clustering algorithm has no regard for
the contiguity of the pixels that define each cluster.
• The RGB clustering method is more specialized than the ISODATA method. It
applies to three-band, 8-bit data. RGB clustering plots pixels in three-dimensional
feature space, and divides that space into sections that are used to define clusters.
Each of these methods is explained below, along with its advantages and disadvan-
tages.
Some of the statistics terms used in this section are explained in "APPENDIX A: Math Topics".
The ISODATA method uses minimum spectral distance to assign a cluster for each
candidate pixel. The process begins with a specified number of arbitrary cluster means
or the means of existing signatures, and then it processes repetitively, so that those
means will shift to the means of the clusters in the data.
Because the ISODATA method is iterative, it is not biased to the top of the data file, as
are the one-pass clustering algorithms.
Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA
clustering.
• N - the maximum number of clusters to be considered. Since each cluster is the basis
for a class, this number becomes the maximum number of classes to be formed. The
ISODATA process begins by determining N arbitrary cluster means. Some clusters
with too few pixels can be eliminated, leaving less than N clusters.
The initial cluster means are distributed in feature space along a vector that runs
between the point at spectral coordinates (µ1-s1, µ2-s2, µ3-s3, ... µn-sn) and the coordi-
nates (µ1+s1, µ2+s2, µ3+s3, ... µn+sn). Such a vector in two dimensions is illustrated in
Figure 91. The initial cluster means are evenly distributed between (µ1-s1, µn-sn) and
(µ1+s1, µn+sn).
Figure 91: ISODATA Arbitrary Clusters (initial cluster means distributed along the vector from (µA − σA, µB − σB) to (µA + σA, µB + σB) in a plot of Band A versus Band B data file values)
Pixel Analysis
Pixels are analyzed beginning with the upper-left corner of the image and going left to
right, block by block.
The spectral distance between the candidate pixel and each cluster mean is calculated.
The pixel is assigned to the cluster whose mean is the closest. The ISODATA function
creates an output .img file with a thematic raster layer and/or a signature file (.sig) as a
result of the clustering. At the end of each iteration, an .img file exists that shows the
assignments of the pixels to the clusters.
[Figure 92 shows the pixels assigned to Clusters 1 through 5 in Band A/Band B feature space after the first pass.]
Figure 92: ISODATA First Pass
For the second iteration, the means of all clusters are recalculated, causing them to shift
in feature space. The entire process is repeated—each candidate pixel is compared to
the new cluster means and assigned to the closest cluster mean.
[Figure 93 shows the shifted cluster means and the new pixel assignments in Band A/Band B feature space after the second pass.]
Figure 93: ISODATA Second Pass
Percentage Unchanged
After each iteration, the normalized percentage of pixels whose assignments are
unchanged since the last iteration is displayed in the dialog. When this number reaches
T (the convergence threshold), the program terminates.
It is possible for the percentage of unchanged pixels to never converge or reach T (the
convergence threshold). Therefore, it may be beneficial to monitor the percentage, or
specify a reasonable maximum number of iterations, M, so that the program will not run
indefinitely.
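The iterative assign-and-update loop described above can be illustrated with a short Python/NumPy sketch. This is a simplified illustration only (the cluster elimination, splitting, and merging rules of the full ISODATA algorithm are omitted), and the function and variable names are illustrative, not part of ERDAS IMAGINE.

    import numpy as np

    def isodata_sketch(image, n_clusters, max_iterations=20, convergence=0.95):
        """Simplified ISODATA-style clustering.

        image: array (rows, cols, bands) of data file values.
        n_clusters: N, the maximum number of clusters to consider.
        max_iterations: M, the maximum number of iterations.
        convergence: T, stop when this fraction of pixels keeps its cluster.
        """
        pixels = image.reshape(-1, image.shape[-1]).astype(float)
        mean, std = pixels.mean(axis=0), pixels.std(axis=0)
        # Initial cluster means are spread evenly along the vector from
        # (mean - std) to (mean + std) in feature space.
        steps = np.linspace(0.0, 1.0, n_clusters)[:, None]
        means = (mean - std) + steps * (2.0 * std)
        labels = np.zeros(len(pixels), dtype=int)
        for _ in range(max_iterations):
            # Assign each pixel to the cluster with the minimum spectral distance.
            distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
            new_labels = distances.argmin(axis=1)
            unchanged = np.mean(new_labels == labels)
            labels = new_labels
            # Recalculate the cluster means, which shift in feature space.
            for k in range(n_clusters):
                members = pixels[labels == k]
                if len(members):
                    means[k] = members.mean(axis=0)
            if unchanged >= convergence:
                break
        return labels.reshape(image.shape[:2]), means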
ISODATA Clustering
Advantages:
• Clustering is not geographically biased to the top or bottom pixels of the data file, because it is iterative.
• This algorithm is highly successful at finding the spectral clusters that are inherent in the data. It does not matter where the initial arbitrary cluster means are located, as long as enough iterations are allowed.
• A preliminary thematic raster layer is created, which gives results similar to using a minimum distance classifier (as explained below) on the signatures that are created. This thematic raster layer can be used for analyzing and manipulating the signatures before actual classification takes place.

Disadvantages:
• The clustering process is time-consuming, because it can repeat many times.
• Does not account for pixel spatial homogeneity.
Use the Merge and Delete options in the Signature Editor to manipulate signatures.
Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA
clustering, generate signatures, and classify the resulting signatures.
The RGB Clustering and Advanced RGB Clustering functions in Image Interpreter create a
thematic raster layer. However, no signature file is created and no other classification decision
rule is used. In practice, RGB Clustering differs greatly from the other clustering methods, but
it does employ a clustering algorithm and, therefore, it is explained here.
RGB clustering is a simple classification and data compression technique for three
bands of data. It is a fast and simple algorithm that quickly compresses a 3-band image
into a single band pseudocolor image, without necessarily classifying any particular
features.
The algorithm plots all pixels in 3-dimensional feature space and then partitions this
space into clusters on a grid. In the more simplistic version of this function, each of these
clusters becomes a class in the output thematic raster layer.
The advanced version requires that a minimum threshold on the clusters be set, so that
only clusters at least as large as the threshold will become output classes. This allows
for more color variation in the output file. Pixels which do not fall into any of the
remaining clusters are assigned to the cluster with the smallest city-block distance from
the pixel. In this case, the city-block distance is calculated as the sum of the distances
in the red, green, and blue directions in 3-dimensional space.
Along each axis of the 3-dimensional scatterplot, each input histogram is scaled so that
the partitions divide the histograms between specified limits— either a specified
number of standard deviations above and below the mean, or between the minimum
and maximum data values for each band.
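As an illustration of this partitioning, the following Python/NumPy sketch bins a three-band, 8-bit image into a grid of R x G x B sections and assigns every pixel the class value of its grid cell. The simple minimum-to-maximum scaling and the names used here are illustrative; they are not the exact ERDAS IMAGINE implementation.

    import numpy as np

    def rgb_cluster_sketch(red, green, blue, r_sections=7, g_sections=6, b_sections=6):
        """Simple RGB clustering: partition 3-dimensional feature space on a grid.

        red, green, blue: 2-D arrays of 8-bit data file values.
        Returns a single-band thematic layer of class values.
        """
        indices = []
        for band, n in zip((red, green, blue), (r_sections, g_sections, b_sections)):
            lo, hi = int(band.min()), int(band.max())
            # Scale the histogram between its minimum and maximum data value,
            # then divide it into n sections.
            section = ((band.astype(float) - lo) / max(hi - lo, 1) * n).astype(int)
            indices.append(np.clip(section, 0, n - 1))
        r_idx, g_idx, b_idx = indices
        # Each (R, G, B) grid cell becomes one class in the output layer.
        return (r_idx * g_sections + g_idx) * b_sections + b_idx + 1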
[Figure 94: RGB Clustering — the R, G, and B input histograms are each divided into sections; for example, one cluster contains all pixels whose data file values fall between 16 and 34 in red, between 35 and 55 in green, and between 0 and 16 in blue.]
Partitioning Parameters
It is necessary to specify the number of R, G, and B sections in each dimension of the 3-
dimensional scatterplot. The number of sections should vary according to the histo-
grams of each band. Broad histograms should be divided into more sections, and
narrow histograms should be divided into fewer sections (see Figure 94).
It is possible to interactively change these parameters in the RGB Clustering function in the
Image Interpreter. The number of classes is calculated based on the current parameters, and it
displays on the command screen.
Advantages:
• The fastest classification method. It is designed to provide a fast, simple classification for applications that do not require specific classes.
• Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.
• (Advanced version only) A highly interactive function, allowing an iterative adjustment of the parameters until the number of clusters and the thresholds are satisfactory for analysis.

Disadvantages:
• Exactly three bands must be input, which is not suitable for all applications.
• Does not always create thematic classes that can be analyzed for informational purposes.
Tips
Some starting values that usually produce good results with the simple RGB clustering
are:
R=7
G=6
B=6
To decrease the number of output colors/classes or to darken the output, decrease these
values.
For the Advanced RGB clustering function, start with higher values for R, G, and B.
Adjust by raising the threshold parameter and/or decreasing the R, G, and B parameter
values until the desired number of output classes is obtained.
Signature Files
A signature is a set of data that defines a training sample, feature space object (AOI), or cluster. The signature is used in a classification process. Each classification decision rule (algorithm) requires some signature attributes as input—these are stored in the signature file (.sig). Signatures in ERDAS IMAGINE can be parametric or non-parametric.
The following attributes are standard for all signatures (parametric and non-
parametric):
• name — identifies the signature and is used as the class name in the output
thematic raster layer. The default signature name is Class <number>.
• color — the color for the signature and is used as the color for the class in the output
thematic raster layer. This color is also used with other signature visualization
functions, such as alarms, masking, ellipses, etc.
• value — the output class value for the signature. The output class value does not
necessarily need to be the class number of the signature. This value should be a
positive integer.
• order — the order to process the signatures for order-dependent processes, such as
signature alarms and parallelepiped classifications.
Parametric Signature
A parametric signature is based on statistical parameters (e.g., mean and covariance
matrix) of the pixels that are in the training sample or cluster. A parametric signature
includes the following attributes in addition to the standard attributes for signatures:
• the number of bands in the input image (as processed by the training program)
• the minimum and maximum data file value in each band for each sample or cluster
(minimum vector and maximum vector)
• the mean data file value in each band for each sample or cluster (mean vector)
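For illustration, the statistics that make up a parametric signature can be derived from the data file values of the pixels in a training sample, for example with the following Python/NumPy sketch (the function and field names are illustrative only, not the .sig file layout):

    import numpy as np

    def parametric_signature_sketch(sample_pixels, name="Class 1"):
        """Compute parametric statistics for a training sample.

        sample_pixels: array (n_pixels, n_bands) of data file values.
        """
        sample = np.asarray(sample_pixels, dtype=float)
        return {
            "name": name,
            "n_bands": sample.shape[1],
            "minimum_vector": sample.min(axis=0),
            "maximum_vector": sample.max(axis=0),
            "mean_vector": sample.mean(axis=0),
            # Covariance matrix of the bands (n_bands x n_bands).
            "covariance_matrix": np.cov(sample, rowvar=False),
        }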
Non-parametric Signature
A non-parametric signature is based on an AOI that the user defines in the feature
space image for the .img file being classified. A non-parametric classifier will use a set
of non-parametric signatures to assign pixels to a class based on their location, either
inside or outside the area in the feature space image.
The format of the .sig file is described in "APPENDIX B: File Formats and Extensions".
Information on these statistics can be found in "APPENDIX A: Math Topics".
Use the Signature Editor to view the contents of each signature, manipulate signatures, and
perform your own mathematical tests on the statistics.
Evaluating Signatures
Signatures can be evaluated with methods such as the following:
• Alarm — using his or her own pattern recognition ability, the user views the
estimated classified area for a signature (using the parallelepiped decision rule)
against a display of the original image.
• Ellipse — view ellipse diagrams and scatterplots of data file values for every pair
of bands.
NOTE: If the signature is non-parametric (i.e., a feature space signature), you can use only the
alarm evaluation method.
After analyzing the signatures, it would be beneficial to merge or delete them, eliminate
redundant bands from the data, add new bands of data, or perform any other opera-
tions to improve the classification.
Alarm
The alarm evaluation enables the user to compare an estimated classification of one or more signatures against the original data, as it appears in the ERDAS IMAGINE Viewer. According to the parallelepiped decision rule, the pixels that fit the classification criteria are highlighted in the displayed image. The user also has the option to indicate an overlap by having it appear in a different color.
With this test, the user can use his or her own pattern recognition skills, or some
ground-truth data, to determine the accuracy of a signature.
Use the Signature Alarm utility in the Signature Editor to perform n-dimensional alarms on the
image in the Viewer, using the parallelepiped decision rule. The alarm utility creates a functional
layer, and the IMAGINE Viewer allows you to toggle between the image layer and the functional
layer.
Ellipse
In this evaluation, ellipses of concentration are calculated with the means and standard deviations stored in the signature file. It is also possible to generate parallelepiped rectangles, means, and labels.
In this evaluation, the mean and the standard deviation of every signature are used to
represent the ellipse in 2-dimensional feature space. The ellipse is displayed in a feature
space image.
Ellipses are explained and illustrated in "APPENDIX A: Math Topics" under the discussion of
Scatterplots.
When the ellipses in the feature space image show extensive overlap, then the spectral
characteristics of the pixels represented by the signatures cannot be distinguished in the
two bands that are graphed. In the best case, there is no overlap. Some overlap,
however, is expected.
Figure 95 shows how ellipses are plotted and how they can overlap. The first graph
shows how the ellipses are plotted based on the range of 2 standard deviations from the
mean. This range can be altered, changing the ellipse plots. Analyzing the plots with
differing numbers of standard deviations is useful for determining the limits of a paral-
lelepiped classification.
[Figure 95 shows signature 1 and signature 2 plotted as ellipses in two feature space graphs: Band A against Band B, and Band C against Band D. The ellipses are drawn at 2 standard deviations around the signature means (for example, µA2 ± 2s and µB2 ± 2s for signature 2).]
Figure 95: Ellipse Evaluation of Signatures
Use the Signature Editor to create a feature space image and to view an ellipse(s) of signature
data.
Contingency Matrix
NOTE: This evaluation classifies all of the pixels in the selected AOIs and compares the results to the pixels of a training sample.
The pixels of each training sample are not always so homogeneous that every pixel in a
sample will actually be classified to its corresponding class. Each sample pixel only
weights the statistics that determine the classes. However, if the signature statistics for
each sample are distinct from those of the other samples, then a high percentage of each
sample’s pixels will be classified as expected.
In this evaluation, a quick classification of the sample pixels is performed using the
minimum distance, maximum likelihood, or Mahalanobis distance decision rule. Then,
a contingency matrix is presented, which contains the number and percentages of
pixels that were classified as expected.
For the distance (Euclidean) evaluation, the spectral distance between the mean vectors
of each pair of signatures is computed. If the spectral distance between two samples is
not significant for any pair of bands, then they may not be distinct enough to produce
a successful classification.
The spectral distance is also the basis of the minimum distance classification (as
explained below). Therefore, computing the distances between signatures will help the
user predict the results of a minimum distance classification.
Use the Signature Editor to compute signature separability and distance and automatically
generate the report.
The formulas used to calculate separability are related to the maximum likelihood
decision rule. Therefore, evaluating signature separability helps the user predict the
results of a maximum likelihood classification. The maximum likelihood decision rule
is explained below.
There are three options for calculating the separability. All of these formulas take into
account the covariances of the signatures in the bands being compared, as well as the
mean vectors of the signatures.
Refer to "APPENDIX A: Math Topics" for information on the mean vector and covariance
matrix.
Divergence
The formula for computing Divergence (Dij) is as follows:
Dij = 0.5 tr[(Ci − Cj)(Cj⁻¹ − Ci⁻¹)] + 0.5 tr[(Ci⁻¹ + Cj⁻¹)(µi − µj)(µi − µj)^T]

where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function
Transformed Divergence
The formula for computing Transformed Divergence (TD) is as follows:
Dij = 0.5 tr[(Ci − Cj)(Cj⁻¹ − Ci⁻¹)] + 0.5 tr[(Ci⁻¹ + Cj⁻¹)(µi − µj)(µi − µj)^T]

TDij = 2 [1 − exp(−Dij / 8)]

where:
Dij = the divergence between signatures i and j, computed as shown above

Jeffries-Matusita Distance
The formula for computing Jeffries-Matusita Distance (JM) is as follows:

α = (1/8) (µi − µj)^T [(Ci + Cj)/2]⁻¹ (µi − µj) + (1/2) ln( |(Ci + Cj)/2| / sqrt(|Ci| × |Cj|) )

JMij = sqrt( 2 (1 − e^(−α)) )

where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
|Ci| = the determinant of Ci (matrix algebra)
ln = the natural logarithm function
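Assuming the formulas above, the two separability measures for a single pair of parametric signatures can be sketched in Python/NumPy as follows (the mean vectors and covariance matrices are NumPy arrays; function names are illustrative):

    import numpy as np

    def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
        """Divergence and transformed divergence between two signatures."""
        d = (np.asarray(mean_i, float) - np.asarray(mean_j, float)).reshape(-1, 1)
        inv_i, inv_j = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
        divergence = (0.5 * np.trace((cov_i - cov_j) @ (inv_j - inv_i))
                      + 0.5 * np.trace((inv_i + inv_j) @ (d @ d.T)))
        return 2.0 * (1.0 - np.exp(-divergence / 8.0))

    def jeffries_matusita(mean_i, cov_i, mean_j, cov_j):
        """Jeffries-Matusita distance between two signatures."""
        d = (np.asarray(mean_i, float) - np.asarray(mean_j, float)).reshape(-1, 1)
        cov_avg = (cov_i + cov_j) / 2.0
        alpha = (0.125 * (d.T @ np.linalg.inv(cov_avg) @ d).item()
                 + 0.5 * np.log(np.linalg.det(cov_avg)
                                / np.sqrt(np.linalg.det(cov_i) * np.linalg.det(cov_j))))
        return np.sqrt(2.0 * (1.0 - np.exp(-alpha)))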
Separability
Both transformed divergence and Jeffries-Matusita distance have upper and lower
bounds. If the calculated divergence is equal to the appropriate upper bound, then the
signatures can be said to be totally separable in the bands being studied. A calculated
divergence of zero means that the signatures are inseparable.
A separability listing is a report of the computed divergence for every class pair and
one band combination. The listing contains every divergence value for the bands
studied for every possible pair of signatures.
The separability listing also contains the average divergence and the minimum diver-
gence for the band set. These numbers can be compared to other separability listings
(for other band combinations), to determine which set of bands is the most useful for
classification.
Weight Factors
As with the Bayesian classifier (explained below with maximum likelihood), weight
factors may be specified for each signature. These weight factors are based on a priori
(already known) probabilities that any given pixel will be assigned to each class. For
example, if the user knows that twice as many pixels should be assigned to Class A as
to Class B, then Class A should receive a weight factor that is twice that of Class B.
NOTE: The weight factors do not influence the divergence equations (for TD or JM), but they do
influence the report of the best average and best minimum separability.
The weight factors for each signature are used to compute a weighted divergence with
the following calculation:
Wij = [ Σ(i=1 to c−1) Σ(j=i+1 to c) fi fj Uij ] / [ (1/2) ( (Σ(i=1 to c) fi)² − Σ(i=1 to c) fi² ) ]

where:
i and j = the two signatures (classes) being compared
Uij = the unweighted divergence between i and j
Wij = the weighted divergence between i and j
c = the number of signatures (classes)
fi = the weight factor for signature i
Probability of Error
The Jeffries-Matusita distance is related to the pairwise probability of error, which is the
probability that a pixel assigned to class i is actually in class j. Within a range, this
probability can be bounded above and below by functions of the computed JM distance.
The following operations upon signatures and signature files are possible with ERDAS
IMAGINE:
• View histograms of the samples or clusters that were used to derive the signatures
• Merge signatures together, so that they form one larger class when classified
• Append signatures from other files. The user can combine signatures that are
derived from different training methods for use in one classification.
Use the Signature Editor to view statistics and histogram listings and to delete, merge, append,
and rename signatures within a signature file.
Classification Decision Rules
Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for each pixel is compared to each signature, according to a decision rule, or algorithm. Pixels that pass the criteria that are established by the decision rule are then assigned to the class for that signature. ERDAS IMAGINE enables the user to classify the data both parametrically with statistical representation, and non-parametrically as objects in feature space. Figure 96 shows the flow of an image pixel through the classification decision making process in ERDAS IMAGINE (Kloer 1994).
If a non-parametric rule is not set, then the pixel is classified using only the parametric
rule. All of the parametric signatures will be tested. If a non-parametric rule is set, the
pixel will be tested against all of the signatures with non-parametric definitions. This
rule results in the following conditions:
• If the non-parametric test results in one unique class, the pixel will be assigned to
that class.
• If the non-parametric test results in zero classes (i.e., the pixel lies outside all the
non-parametric decision boundaries), then the unclassified rule will be applied.
With this rule, the pixel will either be classified by the parametric rule or left
unclassified.
• If the pixel falls into more than one class as a result of the non-parametric test, the
overlap rule will be applied. With this rule, the pixel will either be classified by the
parametric rule, processing order, or left unclassified.
Non-parametric Rules
ERDAS IMAGINE provides these decision rules for non-parametric signatures:
• parallelepiped
• feature space
Unclassified Options
ERDAS IMAGINE provides these options if the pixel is not classified by the non-
parametric rule:
• parametric rule
• unclassified
Overlap Options
ERDAS IMAGINE provides these options if the pixel falls into more than one feature
space object:
• parametric rule
• by order
• unclassified
Parametric Rules
ERDAS IMAGINE provides these commonly-used decision rules for parametric signatures:
• minimum distance
• Mahalanobis distance
• maximum likelihood (with the Bayesian variation)
[Figure 96: flow of a candidate pixel through the classification decision making process — the pixel is first tested against the non-parametric rule, if one is set; depending on whether it falls within zero, one, or more than one non-parametric signature, it is passed to the unclassified rule, assigned to a class, or passed to the overlap rule (parametric rule, by order, or unclassified), with the parametric rule producing either a class assignment or an unclassified assignment.]
Parallelepiped
In the parallelepiped decision rule, the data file values of the candidate pixel are compared to upper and lower limits. These limits can be:
• the minimum and maximum data file values of each band in the signature,
• the mean of each band, plus and minus a number of standard deviations, or
• any limits that the user specifies, based on his or her knowledge of the data and
signatures. This knowledge may come from the signature evaluation techniques
discussed above.
These limits can be set using the Parallelepiped Limits utility in the Signature Editor.
There are high and low limits for every signature in every band. When a pixel’s data file
values are between the limits for every band in a signature, then the pixel is assigned to
that signature’s class. Figure 97 is a two-dimensional example of a parallelepiped classi-
fication.
[Figure 97 plots pixels in Band A/Band B feature space (● = pixels in class 1, ▲ = pixels in class 2, ◆ = pixels in class 3, ? = unclassified pixels); the class 2 limits are drawn at µA2 ± 2s and µB2 ± 2s, where µA2 and µB2 are the class 2 means of Band A and Band B.]
Figure 97: Parallelepiped Classification Using Plus or Minus Two Standard Deviations as Limits
The large rectangles in Figure 97 are called parallelepipeds. They are the regions within
the limits for each signature.
Overlap Region
In cases where a pixel may fall into the overlap region of two or more parallelepipeds,
the user must define how the pixel will be classified.
• The pixel can be classified by the order of the signatures. If one of the signatures is
first and the other signature is fourth, the pixel will be assigned to the first
signature’s class. This order can be set in the ERDAS IMAGINE Signature Editor.
• The pixel can be classified by the defined parametric decision rule. The pixel will be
tested against the overlapping signatures only. If neither of these signatures is
parametric, then the pixel will be left unclassified. If only one of the signatures is
parametric, then the pixel will be assigned automatically to that signature’s class.
• The pixel can be classified by the defined parametric decision rule. The pixel will be
tested against all of the parametric signatures. If none of the signatures is
parametric, then the pixel will be left unclassified.
Use the Supervised Classification utility in the Signature Editor to perform a parallelepiped
classification.
Advantages:
• Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.
• Often useful for a first-pass, broad classification, this decision rule quickly narrows down the number of possible classes to which each pixel can be assigned before the more time-consuming calculations are made (e.g., minimum distance, Mahalanobis distance, or maximum likelihood), thus cutting processing time.
• Not dependent on normal distributions.

Disadvantages:
• Since parallelepipeds have “corners,” pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 98.
[Figure 98 shows a signature ellipse centered at (µA, µB) with its rectangular parallelepiped boundary; a candidate pixel near a corner lies inside the parallelepiped but far from the signature mean.]
Figure 98: Parallelepiped Corners Compared to the Signature Ellipse
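A minimal Python/NumPy sketch of the parallelepiped test for one candidate pixel is shown below. It assumes each signature stores a low and a high limit vector (for example, the mean of each band plus and minus two standard deviations) and reduces overlap handling to processing order; the names are illustrative.

    import numpy as np

    def parallelepiped_classify(pixel, signatures):
        """Return the class value of the first signature whose limits contain the
        pixel, or 0 (unclassified) if the pixel lies outside every parallelepiped.

        pixel: 1-D array of data file values, one per band.
        signatures: list of dicts with 'value', 'low', and 'high' limit vectors,
        tested in processing order.
        """
        for sig in signatures:
            if np.all(pixel >= sig["low"]) and np.all(pixel <= sig["high"]):
                return sig["value"]
        return 0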
Feature Space
The feature space decision rule determines whether or not a candidate pixel lies within the non-parametric signature in the feature space image. When a pixel’s data file values are in the feature space signature, then the pixel is assigned to that signature’s class.
Figure 99 is a two-dimensional example of a feature space classification. The polygons
in this figure are AOIs used to define the feature space signatures.
[Figure 99 plots pixels in Band A/Band B feature space (● = pixels in class 1, ▲ = pixels in class 2, ◆ = pixels in class 3, ? = unclassified pixels); the AOI polygons drawn in the feature space image define classes 1, 2, and 3.]
Figure 99: Feature Space Classification
Overlap Region
In cases where a pixel may fall into the overlap region of two or more AOIs, the user
must define how the pixel will be classified.
• The pixel can be classified by the order of the feature space signatures. If one of the
signatures is first and the other signature is fourth, the pixel will be assigned to the
first signature’s class. This order can be set in the ERDAS IMAGINE Signature
Editor.
• The pixel can be classified by the defined parametric decision rule. The pixel will be
tested against the overlapping signatures only. If neither of these feature space
signatures is parametric, then the pixel will be left unclassified. If only one of the
signatures is parametric, then the pixel will be assigned automatically to that
signature’s class.
• The pixel can be classified by the defined parametric decision rule. The pixel will be
tested against all of the parametric signatures. If none of the signatures is
parametric, then the pixel will be left unclassified.
Advantages:
• Often useful for a first-pass, broad classification.
• Provides an accurate way to classify a class with a non-normal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable, which can help discriminate between classes that are spectrally similar and hard to differentiate with parametric information.
• The feature space method is fast.

Disadvantages:
• The feature space decision rule allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.
Use the Decision Rules utility in the Signature Editor to perform a feature space classification.
Minimum Distance
The minimum distance decision rule (also called spectral distance) calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each signature.
[Figure 100 plots a candidate pixel and the signature means µ1, µ2, and µ3 (with band means µA1, µA2, µA3 and µB1, µB3) in Band A/Band B feature space.]
Figure 100: Minimum Spectral Distance
In Figure 100, spectral distance is illustrated by the lines from the candidate pixel to the
means of the three signatures. The candidate pixel is assigned to the class with the
closest mean.
The equation for classifying by spectral distance is based on the equation for Euclidean
distance:
SDxyc = sqrt( Σ(i=1 to n) (µci − Xxyi)² )

where:
n = the number of bands (dimensions)
i = a particular band
c = a particular class
Xxyi = the data file value of pixel x,y in band i
µci = the mean of data file values in band i for the sample for class c
SDxyc = the spectral distance from pixel x,y to the mean of class c
When spectral distance is computed for all possible values of c (all possible classes), the
class of the candidate pixel is assigned to the class for which SD is the lowest.
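A Python/NumPy sketch of this rule, assuming the mean vectors of all signatures are stacked into one array (names illustrative), might look like this:

    import numpy as np

    def minimum_distance_classify(pixels, mean_vectors):
        """Assign each pixel to the class whose mean vector is spectrally closest.

        pixels: array (n_pixels, n_bands) of measurement vectors.
        mean_vectors: array (n_classes, n_bands) of signature means.
        Returns 1-based class values and the distance to each winning mean.
        """
        # Euclidean spectral distance from every pixel to every class mean.
        distances = np.linalg.norm(pixels[:, None, :] - mean_vectors[None, :, :], axis=2)
        return distances.argmin(axis=1) + 1, distances.min(axis=1)

The second returned array corresponds to the distance file values discussed under Thresholding below.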
Advantages:
• Since every pixel is spectrally closer to either one sample mean or another, there are no unclassified pixels.
• The fastest decision rule to compute, except for parallelepiped.

Disadvantages:
• Pixels which should be unclassified (that is, they are not spectrally close to the mean of any sample, within limits that are reasonable to the user) will become classified. However, this problem is alleviated by thresholding out the pixels that are farthest from the means of their classes. (See the discussion of Thresholding on page 254.)
• Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Inversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes to their means.
Mahalanobis Distance
The Mahalanobis distance algorithm assumes that the histograms of the bands have normal
distributions. If this is not the case, you may have better results with the parallelepiped or
minimum distance decision rule, or by performing a first-pass parallelepiped classification.
The equation for classifying by Mahalanobis distance is:

D = (X − Mc)^T (Covc⁻¹) (X − Mc)

where:
D= Mahalanobis distance
c= a particular class
X= the measurement vector of the candidate pixel
Mc= the mean vector of the signature of class c
Covc= the covariance matrix of the pixels in the signature of class c
Covc-1= inverse of Covc
T= transposition function
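Using the covariance matrix and mean vector defined above, the Mahalanobis distance of one candidate pixel from one signature can be sketched in Python/NumPy as follows (names illustrative):

    import numpy as np

    def mahalanobis_distance(pixel, mean_vector, covariance):
        """D = (X - Mc)^T (Covc^-1) (X - Mc) for one pixel and one signature."""
        difference = np.asarray(pixel, dtype=float) - mean_vector
        return float(difference @ np.linalg.inv(covariance) @ difference)

The candidate pixel is then assigned to the class whose signature yields the smallest D.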
Advantages:
• Takes the variability of classes into account, unlike minimum distance or parallelepiped.
• May be more useful than minimum distance in cases where statistical criteria (as expressed in the covariance matrix) must be taken into account, but the weighting factors that are available with the maximum likelihood/Bayesian option are not needed.

Disadvantages:
• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature will contain large values.
• Slower to compute than parallelepiped or minimum distance.
• Mahalanobis distance is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
Maximum Likelihood/Bayesian
The maximum likelihood algorithm assumes that the histograms of the bands of data have normal
distributions. If this is not the case, you may have better results with the parallelepiped or
minimum distance decision rule, or by performing a first-pass parallelepiped classification.
The maximum likelihood decision rule is based on the probability that a pixel belongs
to a particular class. The basic equation assumes that these probabilities are equal for all
classes, and that the input bands have normal distributions.
Bayesian Classifier
If the user has a priori knowledge that the probabilities are not equal for all classes, he
or she can specify weight factors for particular classes. This variation of the maximum
likelihood decision rule is known as the Bayesian decision rule (Hord 1982). Unless the
user has a priori knowledge of the probabilities, it is recommended that they not be
specified. In this case, these weights default to 1.0 in the equation.
The inverse and determinant of a matrix, along with the difference and transposition of
vectors, would be explained in a textbook of matrix algebra.
The pixel is assigned to the class, c, for which D, the weighted distance, is the lowest.
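Because the equation itself is not reproduced here, the following Python/NumPy sketch uses the standard negative Gaussian log-likelihood, with an optional a priori weight for the Bayesian variation, as the weighted distance D to be minimized. It illustrates the idea of the rule, not necessarily the exact ERDAS IMAGINE expression; the names are illustrative.

    import numpy as np

    def weighted_distance(pixel, mean_vector, covariance, prior=1.0):
        """Negative Gaussian log-likelihood used as a weighted distance D.

        The constant 0.5 * n * ln(2 * pi) term is omitted because it does not
        affect the comparison between classes.
        """
        difference = np.asarray(pixel, dtype=float) - mean_vector
        return (0.5 * np.log(np.linalg.det(covariance))
                + 0.5 * float(difference @ np.linalg.inv(covariance) @ difference)
                - np.log(prior))

The candidate pixel is assigned to the class with the smallest weighted distance; a weight (prior) that is larger relative to the other classes lowers D and favors that class.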
Advantages:
• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.
• Takes the variability of classes into account by using the covariance matrix, as does Mahalanobis distance.

Disadvantages:
• An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.
• Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature will contain large values.
Thresholding
Thresholding is the process of identifying the pixels in a classified image that are the most likely to be classified incorrectly. These pixels are put into another class (usually class 0). These pixels are identified statistically, based upon the distance measures that were used in the classification decision rule.
Distance File
When a minimum distance, Mahalanobis distance, or maximum likelihood classifi-
cation is performed, a distance image file can be produced in addition to the output
thematic raster layer. A distance image file is a one-band, 32-bit offset continuous
raster layer in which each data file value represents the result of a spectral distance
equation, depending upon the decision rule used.
The brighter pixels (with the higher distance file values) are spectrally farther from the
signature means for the classes to which they were assigned. They are more likely to be
misclassified.
The darker pixels are spectrally nearer, and more likely to be classified correctly. If
supervised training was used, the darkest pixels are usually the training samples.
[Figure 101 plots the frequency of pixels against distance value: the histogram rises from zero and tails off toward the higher distance values.]
Figure 101: Histogram of a Distance Image
Figure 101 shows how the histogram of the distance image usually appears. This distri-
bution is called a chi-square distribution, as opposed to a normal distribution, which
is a symmetrical bell curve.
Threshold
The pixels that are the most likely to be misclassified have the higher distance file values
at the tail of this histogram. At some point that the user defines—either mathematically
or visually—the “tail” of this histogram is cut off. The cutoff point is the threshold.
To define the threshold, the user can either:
• interactively change the threshold with the mouse, when a distance histogram is displayed while using the threshold function. This option enables the user to select a chi-square value by selecting the cut-off value in the distance histogram, or
• enter a critical chi-square value (or, for a minimum distance classification, a spectral distance value) directly.
In both cases, thresholding has the effect of cutting the tail off of the histogram of the
distance image file, representing the pixels with the highest distance values.
Figure 102 shows some example distance histograms. With each example is an expla-
nation of what the curve might mean, and how to threshold it.
Chi-square Statistics
If the minimum distance classifier was used, then the threshold is simply a certain
spectral distance. However, if Mahalanobis or maximum likelihood were used, then
chi-square statistics are used to compare probabilities (Swain and Davis 1978).
When statistics are used to calculate the threshold, the threshold is more clearly defined
as follows:
T is the distance value at which C% of the pixels in a class have a distance value greater
than or equal to T.
where:
T = the threshold for a class
C% = the percentage of pixels that are believed to be misclassified (the confidence level)

T is related to the distance values by means of chi-square statistics. The value X² (chi-square) is used in the equation. X² is a function of:
• the number of degrees of freedom, and
• the confidence level.
When classifying an image in ERDAS IMAGINE, the classified image automatically has
the degrees of freedom (i.e., number of bands) used for the classification. The chi-square
table is built into the threshold application.
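For illustration, assuming the distance values from a Mahalanobis or maximum likelihood classification follow a chi-square distribution with one degree of freedom per band, a threshold for a chosen tail percentage could be computed with SciPy as sketched below (names illustrative):

    import numpy as np
    from scipy.stats import chi2

    def threshold_mask(distance_image, n_bands, percent_to_cut=5.0):
        """Return True where a pixel's distance value exceeds the chi-square
        threshold and should therefore be moved to class 0.

        distance_image: array of distance file values for the classified pixels.
        n_bands: degrees of freedom = number of bands used in the classification.
        """
        threshold = chi2.ppf(1.0 - percent_to_cut / 100.0, df=n_bands)
        return np.asarray(distance_image) > threshold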
It is usually not practical to ground truth or otherwise test every pixel of a classified
image. Therefore, a set of reference pixels is usually used. Reference pixels are points
on the classified image for which actual data are (or will be) known. The reference pixels
are randomly selected (Congalton 1991).
NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility to perform an
accuracy assessment for any thematic layer. The layer does not have to be classified by IMAGINE
(e.g., you can run an accuracy assessment on a thematic layer that was classified in ERDAS
Version 7.5 and imported into IMAGINE).
The number of reference pixels is an important factor in determining the accuracy of the
classification. It has been shown that more than 250 reference pixels are needed to
estimate the mean accuracy of a class to within plus or minus five percent (Congalton
1991).
ERDAS IMAGINE uses a square window to select the reference pixels. The size of the
window can be defined by the user. Three different types of distribution are offered for
selecting the random pixels:
• equalized random — each class will have an equal number of random points
Use the Accuracy Assessment CellArray to enter reference pixels for the class values.
Error Reports
From the Accuracy Assessment CellArray, two kinds of reports can be derived.
• The error matrix simply compares the reference points to the classified points in a
c × c matrix, where c is the number of classes (including class 0).
• The accuracy report calculates statistics of the percentages of accuracy, based upon
the results of the error matrix.
Use the Accuracy Assessment utility to generate the error matrix and accuracy reports.
Kappa Coefficient
The Kappa coefficient expresses the proportionate reduction in error generated by a
classification process, compared with the error of a completely random classification.
For example, a value of .82 would imply that the classification process was avoiding 82
percent of the errors that a completely random classification would generate
(Congalton 1991).
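For illustration, the error matrix, overall accuracy, and Kappa coefficient can be computed from paired reference and classified class values with a Python/NumPy sketch like the following (names illustrative):

    import numpy as np

    def error_matrix_and_kappa(reference, classified, n_classes):
        """Build a c x c error matrix and compute overall accuracy and Kappa.

        reference, classified: 1-D arrays of class values (0 .. n_classes - 1)
        for the same set of reference pixels.
        """
        matrix = np.zeros((n_classes, n_classes), dtype=int)
        for ref, cls in zip(reference, classified):
            matrix[ref, cls] += 1
        total = matrix.sum()
        observed = np.trace(matrix) / total                      # overall accuracy
        expected = (matrix.sum(axis=0) @ matrix.sum(axis=1)) / total ** 2
        kappa = (observed - expected) / (1.0 - expected)
        return matrix, observed, kappa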
Output File
When classifying an .img file, the output file is an .img file with a thematic raster layer. This file will automatically contain the following data:
• class values
• class names
• color table
• statistics
• histogram
The .img file will also contain any signature attributes that were selected in the ERDAS
IMAGINE Supervised Classification utility.
The class names, values, and colors can be set with the Signature Editor or the Raster Attribute
Editor.
Introduction
There are numerous sources of image data for both traditional and digital photogram-
metry. This document focuses on three main sources: aerial photographs (metric frame
cameras), SPOT satellite imagery, and Landsat satellite data. Many of the concepts
presented for aerial photographs also pertain to most imagery which has a single
perspective center. Likewise, the SPOT concepts have much in common with other
sensors that also use a linear Charged Coupled Device (CCD) in a pushbroom fashion.
Finally, a significantly different geometric model and approach is discussed for the
Landsat satellite, an across-track scanning device.
Prior to the invention of the airplane, photographs taken on the ground were used to
extract the relationships between objects using geometric principles. This was during
the phase of Plane Table Photogrammetry.
Coordinate Systems
There are a variety of coordinate systems used in photogrammetry. This chapter will reference these systems as described below.
Pixel Coordinates
The file coordinates of a digital image are defined in a pixel coordinate system. A pixel coordinate system is usually a coordinate system with its origin in the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the unit in pixels, as shown by axis c and r in Figure 103. These file coordinates (c,r) can also be thought of as the pixel column and row number. This coordinate system is referenced as pixel coordinates (c,r) in this chapter.
Image Space Coordinates
An image space coordinate system is identical to image coordinates, except that it adds
a third axis (z). Image space coordinates are used to describe positions inside the camera
and usually use units in millimeters or microns. This coordinate system is referenced as
image space coordinates (x,y,z) in this chapter.
[Figure 103 illustrates the pixel coordinate axes (c, r) and the image coordinate axes (x, y).]
Figure 103: Pixel Coordinates and Image Coordinates
Geocentric and Topocentric Coordinates
Most photogrammetric applications account for earth curvature in their calculations. This is done by adding a correction value or by computing geometry in a coordinate system which includes curvature. Two such systems are geocentric and topocentric coordinates.
A geocentric coordinate system has its origin at the center of the earth ellipsoid. The
ZG-axis equals the rotational axis of the earth, and the XG-axis passes through the
Greenwich meridian. The YG-axis is perpendicular to both the ZG-axis and XG-axis, so
as to create a three-dimensional coordinate system that follows the right hand rule.
A topocentric coordinate system has its origin at the center of the image projected on
the earth ellipsoid. The three perpendicular coordinate axes are defined on a tangential
plane at this center point. The plane is called the reference plane or the local datum. The
x-axis is oriented eastward, the y-axis northward, and the z-axis is vertical to the
reference plane (up).
For simplicity of presentation, the remainder of this chapter will not explicitly reference
geocentric or topocentric coordinates. Basic photogrammetric principles can be
presented without adding this additional level of complexity.
Work Flow
The work flow of photogrammetry can be summarized in three steps: image acquisition, photogrammetric processing, and product output.
[Figure 104: the photogrammetric work flow — image acquisition; image preprocessing (scan aerial film, import digital imagery); photogrammetric processing (triangulation, stereopair creation, generation of elevation models); product output.]
The remainder of this chapter is presented in the same sequence as the items in Figure
104. For each section, the aerial model is presented first, followed by the SPOT model,
when appropriate. A Landsat satellite model is described at the end of the orthorectifi-
cation section.
Exposure Station
Each point in the flight path at which the camera exposes the film is called an exposure station.
[Figure: the flight path of the airplane consists of parallel flight lines (Flight Line 1, 2, 3), with an exposure station at each point where the camera exposes the film.]
Image Scale
The image scale expresses the average ratio between a distance in the image and the same distance on the ground. It is computed as focal length divided by the flying height above the mean ground elevation. For example, with an altitude of 1,000 m and a focal length of 15 cm, the image scale (SI) would be 1:6667.
NOTE: The flying height above ground is used, versus the altitude above sea level.
Strip of Photographs
A strip of photographs consists of images captured along a flight-line, normally with an overlap of 60%. All photos in the strip are assumed to be taken at approximately the same flying height and with a constant distance between exposure stations. Camera tilt relative to the vertical is assumed to be minimal.
Block of Photographs
The photographs from the flight path can be combined to form a block. A block of photographs consists of a number of parallel strips, normally with a sidelap of 20-30%. Photogrammetric triangulation is performed on the whole block of photographs to transform images and ground points into a homologous coordinate system.
A regular block of photos is a rectangular block in which the number of photos in each strip is the same. The figure below shows a block of 5 X 2 photographs.
[Figure: a block of 5 X 2 photographs, with 60% overlap between photos along each strip and 20-30% sidelap between strip 1 and strip 2.]
Correction Levels for SPOT Imagery
SPOT scenes are delivered at different levels of correction. For example, SPOT Image Corporation provides two correction levels that are of interest:
• Level 1A images are essentially raw camera data, to which only radiometric corrections have been applied.
• Level 1B images have been corrected for the earth’s rotation and viewing angle, producing roughly the same ground pixel size throughout the scene. Pixels are resampled from the level 1A camera data by cubic polynomials. This data is internally transformed to level 1A before the triangulation calculations are applied.
Refer to "CHAPTER 3: Raster and Vector Data Sources" for more information on satellite
remote sensing and the characteristics of satellite data that can be read into ERDAS.
Image Preprocessing
Images must be read into the computer before processing can begin. Usually the images are not digitally enhanced prior to photogrammetric processing. Most digital photogrammetric software packages have basic image enhancement tools. The common practice is to perform more sophisticated enhancements on the end products (e.g., orthoimages or orthomosaics).
Scanning Aerial Film
Aerial film must be scanned (digitized) to create a digital image. Once scanned, the digital image can be imported into a digital photogrammetric system.
Scanning Resolution
The storage requirement for digital image data can be huge. Therefore, obtaining the
optimal pixel size (or scanning density) is often a trade-off between capturing
maximum image information and the digital storage burden. For example, a standard
panchromatic image is 9 by 9 inches (23 x 23 cm). Scanning at 25 microns (roughly 1000
pixels per inch) results in a file with 9000 rows and 9000 columns. Assuming 8 bits per
pixel and no image compression, this file occupies about 81 megabytes. Photogram-
metric projects often have hundreds or even thousands of photographs.
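A small Python sketch of this storage arithmetic, assuming an uncompressed scan (the default values reproduce the example above):

    def scan_size(film_size_cm=23.0, pixel_size_microns=25.0, bits_per_pixel=8):
        """Approximate pixel dimensions and uncompressed size of a scanned photo."""
        pixels_per_side = int(film_size_cm * 10000.0 / pixel_size_microns)  # cm -> microns
        megabytes = pixels_per_side ** 2 * bits_per_pixel / 8 / 1e6
        return pixels_per_side, megabytes

    # scan_size() gives roughly 9200 pixels per side and about 85 megabytes;
    # at exactly 9000 x 9000 pixels the file occupies about 81 megabytes.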
Photogrammetric Scanners
Photogrammetric quality scanners are special devices capable of high image quality
and excellent positional accuracy. Use of this type of scanner results in geometric
accuracies similar to traditional analog and analytical photogrammetric instruments.
These scanners are necessary for digital photogrammetric applications which have high
accuracy requirements. These units usually scan only film (either positive or negative),
because film is superior to paper, both in terms of image detail and geometry. These
units usually have an RMSE (Root Mean Square Error) positional accuracy of 4 microns
or less, and are capable of scanning at a maximum resolution of 5 to 10 microns (5
microns is equivalent to approximately 5,000 pixels per inch). The needed pixel
resolution varies depending on the application. Aerial triangulation and feature
collection applications often scan in the 10 to 15 micron range. Orthophoto applications
often use 15 to 30 micron pixels. Color film is less sharp than panchromatic, therefore
color orthoapplications often use 20 to 40 micron pixels.
Desktop Scanners
Desktop scanners are general purpose devices. They lack the image detail and
geometric accuracy of photogrammetric quality units, but they are much less
expensive. When using a desktop scanner, the user should make sure that the active
area is at least 9 X 9 inches, enabling the entire photo frame to be captured. Desktop
scanners are appropriate for less rigorous uses, such as digital photogrammetry in
support of GIS or remote sensing applications. Calibrating these units improves
geometric accuracy, but the results are still inferior to photogrammetric units. The
image correlation techniques which are necessary for automatic elevation extraction are
often sensitive to scan quality. Therefore, elevation extraction can become problematic
if the scan quality is only marginal.
Triangulation
Triangulation establishes the geometry of the camera or sensor relative to objects on the earth’s surface. It is the first and most critical step of photogrammetric processing. Figure 107 illustrates the triangulation work flow. First, the interior orientation establishes the geometry inside the camera or sensor. For aerial photographs, fiducial marks are measured on the digital imagery and camera calibration information is entered. The interior orientation information for SPOT is already known (they are fixed values). The final step is to calculate the exterior orientation, which establishes the location and attitude (rotation angles) of the camera or sensor during the time of image acquisition. Ground control points aid this process.
[Figure 107: the triangulation work flow — uncorrected digital imagery and camera or sensor information are used to calculate the interior orientation; ground control points are then used to calculate the exterior orientation, producing the triangulation results.]
Aerial Triangulation
The following discussion assumes that a standard metric aerial camera is being used, in which the fiducial marks are readily visible on the scanned images and the camera calibration information is available from an external source.
In bundle block adjustment, there are usually image coordinate observations, ground
coordinate point observations, and possibly observations from GPS and satellite orbit
information. The observation equation can be represented as follows:
V = AX − L

Where
V = the vector of residuals of the observations
A = the design (coefficient) matrix
X = the vector of unknown parameters
L = the vector of observations
The equations can be solved using the iterative least squares adjustment:

X = (A^T P A)⁻¹ A^T P L

Where
P = the weight matrix of the observations
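A generic Python/NumPy sketch of one solution step of this form is shown below; in practice the matrices A, P, and L are rebuilt from the linearized observation equations at every iteration (names illustrative):

    import numpy as np

    def least_squares_step(A, L, P=None):
        """Solve X = (A^T P A)^-1 A^T P L and return the unknowns and residuals.

        A: design matrix, L: observation vector, P: weight matrix (identity if None).
        """
        A = np.asarray(A, dtype=float)
        L = np.asarray(L, dtype=float)
        P = np.eye(len(L)) if P is None else np.asarray(P, dtype=float)
        X = np.linalg.solve(A.T @ P @ A, A.T @ P @ L)
        V = A @ X - L
        return X, V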
Before the triangulation can be computed, the user should acquire images that overlap
in the block, measure the tie points on the images and digitize some control points.
To record an image, light rays reflected by an object on the ground are projected
through a lens. Ideally, all light rays are straight and intersect at the perspective center.
The light rays are then projected onto the film.
The plane of the film is called the focal plane. A virtual focal plane exists between the
perspective center and the terrain. The virtual focal plane is the same distance (focal
length) from the perspective center as is the plane of the film or scanner. The light rays
intersect both planes in the same manner. Virtual focal planes are often more conve-
nient to diagram, and therefore are often used in place of focal planes in photogram-
metric diagrams.
NOTE: In the discussion following, the virtual focal plane is called the “image plane,” and is
used to describe photogrammetric concepts.
[Figure: light rays from the terrain intersect at the perspective center; the virtual focal plane (camera image plane) lies between the perspective center and the terrain.]
The perspective center is projected onto a point in the image plane that lies directly
beneath it. This point is called the principal point. The orthogonal distance from the
perspective center to the image plane is the focal length of the lens.
[Figure 109 shows the file coordinate axes A-XF and A-YF with origin A in the upper-left corner, the four fiducial marks F1 through F4, the principal point P, and the image coordinate axis x.]
Fiducials are four or eight reference markers fixed on the frame of an aerial metric
camera and visible in each exposure as illustrated by points F1, F2, F3, and F4 in Figure
109. The image coordinates of the fiducials are provided in a camera calibration report.
Fiducials are used to compute the transformation from file coordinates to image coordi-
nates.
The file coordinates of a digital image are defined in a pixel coordinate system. For
example, in digital photogrammetry, it is usually a coordinate system with its origin in
the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing
downward, and the unit in pixels, as shown by A-XF and A-YF in Figure 109. These file
coordinates (XF, YF) can also be thought of as the pixel column and row number.
Once the file coordinates of fiducials are measured, the transformation from file coordi-
nates to image coordinates can be carried out. Usually the six-parameter affine transfor-
mation is used here:
x = a0 + a1·XF + a2·YF
y = b0 + b1·XF + b2·YF

Where
x, y = image coordinates
XF, YF = file coordinates
a0, a1, a2, b0, b1, b2 = the six affine transformation coefficients
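The six coefficients can be estimated by least squares from the measured file coordinates of the fiducials and their calibrated image coordinates, as in the following Python/NumPy sketch (names illustrative):

    import numpy as np

    def fit_affine(file_xy, image_xy):
        """Fit x = a0 + a1*XF + a2*YF and y = b0 + b1*XF + b2*YF.

        file_xy: (n, 2) measured file coordinates (XF, YF) of the fiducials.
        image_xy: (n, 2) calibrated image coordinates (x, y) of the fiducials.
        """
        file_xy = np.asarray(file_xy, dtype=float)
        image_xy = np.asarray(image_xy, dtype=float)
        design = np.column_stack([np.ones(len(file_xy)), file_xy])  # [1, XF, YF]
        a, *_ = np.linalg.lstsq(design, image_xy[:, 0], rcond=None)
        b, *_ = np.linalg.lstsq(design, image_xy[:, 1], rcond=None)
        return a, b  # (a0, a1, a2) and (b0, b1, b2)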
Exterior Orientation
The exterior orientation determines the relationship of an image to the ground
coordinate system. Each aerial camera image has six exterior orientation parameters,
the three coordinates of the perspective center (Xo,Yo,Zo) in the ground coordinate
system, and three rotation angles of (ω, ϕ, κ), as shown in Figure 110.
[Figure 110 shows the exterior orientation of an aerial photograph: the perspective center O with ground coordinates (XO, YO, ZO), the image space axes x, y, z, the local axes X', Y', Z' parallel to the ground system, the rotation angles ω, ϕ, κ, the principal point PP, and an image point PI at (x, y, -f).]
Where
PP = principal point
O = perspective center with ground coordinates (XO, YO, ZO)
O-x, O-y, O-z = image space coordinate system with origin in the
perspective center and the x,y-axis parallel to the image
coordinate system axis
XG,YG,ZG = ground coordinates
O-X', O-Y', O-Z' = a local coordinate system which is parallel to the ground
coordinate system, but has its origin at the perspective
center. Used for expressing rotation angles (ω, ϕ, κ).
ω = omega rotation angle around the X'-axis
ϕ = phi rotation angle around the Y'-axis
κ = kappa rotation angle around the Z'-axis
PI = point in the image plane
PG = point on the ground
Collinearity Equations
The relationship among image coordinates, ground coordinates, and orientation
parameters is described by the following collinearity equations:
x = −f · [r11(X − XO) + r21(Y − YO) + r31(Z − ZO)] / [r13(X − XO) + r23(Y − YO) + r33(Z − ZO)]

y = −f · [r12(X − XO) + r22(Y − YO) + r32(Z − ZO)] / [r13(X − XO) + r23(Y − YO) + r33(Z − ZO)]
Where:
x, y = image coordinates
X, Y, Z = ground coordinates
f = focal length
XO,YO,ZO = ground coordinates of perspective center
r11 - r33 = coefficients of a 3 X 3 rotation matrix defined by angles ω, ϕ,κ, that
transforms the image system to the ground system
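Given the exterior orientation, the collinearity equations project a ground point into image coordinates. The Python/NumPy sketch below follows the indexing of the equations above; the rotation matrix is built here with one common omega-phi-kappa convention, which may differ from the convention used in a particular system (names illustrative):

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """A 3 x 3 rotation matrix from the angles omega, phi, kappa (radians)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        r_x = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        r_y = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        r_z = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        return r_x @ r_y @ r_z

    def ground_to_image(ground, perspective_center, r, focal_length):
        """Apply the collinearity equations (r is the 3 x 3 rotation matrix)."""
        dX, dY, dZ = np.asarray(ground, float) - np.asarray(perspective_center, float)
        denom = r[0, 2] * dX + r[1, 2] * dY + r[2, 2] * dZ
        x = -focal_length * (r[0, 0] * dX + r[1, 0] * dY + r[2, 0] * dZ) / denom
        y = -focal_length * (r[0, 1] * dX + r[1, 1] * dY + r[2, 1] * dZ) / denom
        return x, y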
A control point is a point with known coordinates in the ground coordinate system,
expressed in the units of the specified map projection. Control points are used to
establish a reference frame for the photogrammetric triangulation of a block of images.
These ground coordinates are typically three-dimensional. They consist of X,Y coordi-
nates in a map projection system and Z coordinates, which are elevation values
expressed in units above datum that are consistent with the map coordinate system.
The user selects these points based on their relation to clearly defined and visible
ground features. Ground control points serve as stable (known) values, so their
accuracy determines the accuracy of the triangulation.
In triangulation, there can be several types of control points. A full control point
specifies map X,Y coordinates along with a Z (elevation of the point). Horizontal control
only specifies the X,Y, while vertical control only specifies the Z.
Optimizing control distribution is part art and part science, and goes beyond the scope
of this document. However, the example presented in Figure 111 illustrates a specific
case.
▲ = control (along all edges of the block and after the 3rd photo of each strip)
Figure 111: Control Points in Aerial Photographs
(block of 8 X 4 photos)
For optimal results, control points should be measured by geodetic techniques with an
accuracy that corresponds to about 0.1 to 0.5 pixels in the image. Digitization of existing
maps often does not yield this degree of accuracy.
For example, if a photograph was scanned with a resolution of 1000 dpi (9000 X 9000
pixels), the pixel size in the image is 25 microns (0.025mm). For an image scale of
1:40,000, each pixel covers approximately 1.0 X 1.0 meters on the ground. Applying the
above rule, the ground control points should be accurate to about 0.1 to 0.5 meters.
A greater number of known ground points should be available than will actually be
used in the triangulation. These additional points become check points, and can be
used to independently verify the degree of accuracy of the triangulation. This verifi-
cation, called check point analysis, is discussed on page 287.
Tie points should be visually well-defined in all images. Ideally, they should show good
contrast in two directions, like the corner of a building or a road intersection. Tie points
should also be well distributed over the area of the block. Typically, nine tie points in
each image are adequate for photogrammetric triangulation of aerial photographs. If a
control point already exists in the candidate location of a tie point, the tie point can be
omitted.
[Figure 112 shows the tie points (x) distributed in a regular pattern over a single image.]
Figure 112: Ideal Point Distribution Over a Photograph for Aerial Triangulation
In a block of aerial photographs with 60% overlap and 25-30% sidelap, nine points are
sufficient to tie together the block as well as individual strips.
In summary:
• A control point must be visually identifiable in one or more images and have
known ground coordinates. If, in later processing, the ground coordinates for a
control point are found to have low reliability, the control point can be changed to
a tie point.
• If the ground coordinates of a control point are not used in the triangulation, they
can serve as a check point for independent analysis of the accuracy of the
triangulation.
• A tie point is a point that is visually identifiable in at least two images for which
ground coordinates are unknown.
The focal length of the camera optic is 1,084 mm, which is very large relative to the
length of the camera (78 mm). The field of view is 4.12°.
The satellite orbit is circular, north-south and south-north, about 830 km above the earth, and sun-synchronous. A sun-synchronous orbit is one in which the orbital plane rotates at the same rate as the earth revolves around the sun, so that the satellite passes over the same area at approximately the same local solar time.
For each line scanned, there is a unique perspective center and a unique set of rotation
angles. The location of the perspective center relative to the line scanner is constant for
each line (interior orientation and focal length). Since the motion of the satellite is
smooth and practically linear over the length of a scene, the perspective centers of all
scan lines of a scene are assumed to lie along a smooth line.
[Figure: the perspective centers of the scan lines lie along the direction of motion of the satellite; the scan lines on the image correspond to lines on the ground.]
The satellite exposure station is defined as the perspective center in ground coordinates
for the center scan line.
The image captured by the satellite is called a scene. A scene (SPOT Pan 1A) is
composed of 6,000 lines. Each of these lines consists of 6000 pixels. Each line is exposed
for 1.5 milliseconds, so it takes 9 seconds to scan the entire scene. (A scene from SPOT
XS 1A is composed of only 3000 lines and 3000 columns and has 20 meter pixels, while
Pan has 10 meter pixels.)
NOTE: This section will address only the 10 meter Pan scenario.
A pixel in the SPOT image records the light detected by one of the 6,000 light-sensitive
elements in the camera. Each pixel is defined by file coordinates (column and row
numbers).
The center of the scene is the center pixel of the center scan line. It is the origin of the
image coordinate system.
[Figure: file coordinates of a satellite scene — the axes A-XF and A-YF have their origin A in the upper-left corner, the scene has 6,000 lines (rows), and the scene center C is the origin of the image coordinate axis x.]

[Figure: the perspective centers O1 ... Ok ... On of the scan lines are aligned along the orbiting direction (north to south); each scan line k lies in the image plane at the focal length f from its perspective center Ok.]

Where
Pk = image point
xk = x value of image coordinates for scan line k
f = focal length of the camera
Ok = perspective center for scan line k, aligned along the orbit
PPk = principal point for scan line k
lk = light rays for scan line, bundled at perspective center Ok
Ephemeris data for the orbit are available in the header file of SPOT scenes. They give
the satellite’s position in three-dimensional, geocentric coordinates at 60-second
increments. The velocity vector and some rotational velocities relating to the attitude of
the camera are given, as well as the exact time of the center scan line of the scene.
The header of the data file of a SPOT scene contains ephemeris data, which provides
information about the recording of the data and the satellite orbit.
• the position of the satellite in geocentric coordinates (with the origin at the center
of the earth) to the nearest second,
• the exact time of exposure of the center scan line of the scene.
The geocentric coordinates included with the ephemeris data are converted to a local
ground system for use in triangulation. The center of a satellite scene is interpolated
from the header data.
Light rays in a bundle defined by the SPOT sensor are almost parallel, lessening the
importance of the satellite’s position. Instead, the inclination angles of the cameras
become the critical data.
The scanner can produce a nadir view. Nadir is the point directly below the camera.
SPOT has off-nadir viewing capability. Off-nadir refers to any point that is not directly
beneath the satellite, but is off to an angle (i.e., east or west of the nadir).
A stereo-scene is achieved when two images of the same area are acquired on different
days from different orbits, one taken east of the other. For this to occur, there must be
significant differences in the inclination angles.
Inclination is the angle between a vertical on the ground at the center of the scene and
a light ray from the exposure station. This angle defines the degree of off-nadir viewing
when the scene was recorded. The cameras can be tilted in increments of 0.6° to a maximum of 27° to the east (negative inclination) or west (positive inclination).
[Figure: inclination of the SPOT camera; negative inclination (I−) views to the east of nadir, positive inclination (I+) views to the west]
The velocity vector of a satellite is the satellite’s velocity if measured as a vector through
a point on the spheroid. It provides a technique to represent the satellite’s speed as if
the imaged area were flat instead of being a curved surface.
[Figure: the orbital path, velocity vector V, and orientation angle O of a satellite scene relative to North]
where
O = orientation angle
C = center of the scene
V = velocity vector
Satellite triangulation provides a model for calculating the spatial relationship between the SPOT sensor and the ground coordinate system for each line of data. This relationship is expressed as the exterior orientation, which consists of:
• the ground coordinates of the perspective center of the center scan line, and
• the coefficients, from which the perspective center and rotation angles of all other scan lines can be calculated.
In addition to fitting the bundle of light rays to the known points, satellite triangulation also accounts for the motion of the satellite by determining the relationship of the perspective centers and rotation angles of the scan lines. It is assumed that the satellite travels in a smooth motion as a scene is being scanned. Therefore, once the exterior orientation of the center scan line is determined, the exterior orientation of any other scan line is calculated based on the distance of that scan line from the center and the changes of the perspective center location and rotation angles.
Collinearity Equations
Modified collinearity equations are applied to analyze the exterior orientation of
satellite scenes. Each scan line has a unique perspective center and individual rotation
angles. When the satellite moves from one scan line to the next, these parameters
change. Due to the smooth motion of the satellite in orbit, the changes are small and can
be modeled by low order polynomial functions.
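As a rough, hypothetical sketch of this idea (not the actual adjustment used by any particular package), the fragment below evaluates one exterior orientation parameter as a low-order polynomial of the distance from the center scan line; the coefficient values shown are invented for illustration.

```python
# Hypothetical sketch: the change of an exterior orientation parameter
# (e.g., a rotation angle) is modeled as a low-order polynomial of the
# distance of a scan line from the center scan line.

def orientation_at_line(center_value, coeffs, line, center_line):
    """Evaluate center_value + a1*d + a2*d**2 + ..., where d is the distance
    of the scan line from the center scan line and the a_i are coefficients
    that would come from the triangulation adjustment."""
    d = line - center_line
    value = center_value
    for i, a in enumerate(coeffs, start=1):
        value += a * d ** i
    return value

# Illustrative numbers only: rotation angle (radians) for scan line 4500
# of a 6,000-line scene whose center scan line is 3000.
kappa = orientation_at_line(0.0021, [1.5e-7, -2.0e-12], line=4500, center_line=3000)
```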
The best locations for control points in the scene are shown below.
Figure 119: Ideal Point Distribution Over a Satellite Scene for Triangulation (control points spread evenly across and along the scan lines)
In some cases, there are no reliable control points available in the area for which a DEM
or orthophoto is to be created. In this instance, a local coordinate system may be
defined. The coordinate center is the center of the scene expressed in the longitude and
latitude taken from the header. When a local coordinate system is defined, the satellite
positions, velocity vectors, and rotation angles from the image header are used to define
a datum.
In such a case, the ground coordinates of the tie points are computed. The resulting DEM displays relative elevations, and the coordinate system corresponds only approximately to the real coordinate system of the area; it is limited by the accuracy of the ephemeris information.
This might be especially useful for remote islands, in which case points along the shore-
line can be very easily detected as tie points.
Triangulation Accuracy Measures
The triangulation solution usually provides the standard deviation, the covariance matrix of unknowns, the residuals of observations, and check point analysis to aid in determining the accuracy of triangulation.
Standard Deviation (σ0)
Each time the triangulation program completes one iteration, the σ0 value (square root
of variance of unit weight) is calculated. It gives the mean error of the image coordinate
measurements used in the adjustment. This value decreases as the bundle fits better to
the control and tie points.
NOTE: The σ0 value usually should not be larger than 0.25 to 0.75 pixels.
NOTE: In the case of aerial mapping, the vertical accuracy is usually lower than the horizontal
accuracy by a factor of 1.5. For satellite stereo-scenes, the vertical accuracy depends on the
dimension of the inclination angles (the separation of the two scenes).
Stereo Imagery
To perform photogrammetric stereo operations, two views of the same ground area captured from different locations are required. A stereopair is a set of two images that overlap, providing two views of the terrain in the overlap area. The relief displacement in a stereopair is required to extract three-dimensional information about the terrain.
Though digital photogrammetric principles can be applied to any type of imagery, this
document focuses on two main sources: aerial photographs (metric frame cameras) and
SPOT satellite imagery. Many of the concepts presented for aerial photographs also
pertain to most imagery that has a single perspective center. Likewise, the SPOT
concepts have much in common with other sensors that also use a linear Charge Coupled Device (CCD) in a pushbroom fashion.
Aerial Stereopairs
For decades, aerial photographs have been used to create topographic maps in analog and analytical stereoplotters. Aerial photographs are taken by specialized cameras, mounted so that the lens is close to vertical, pointing out of a hole in the bottom of an airplane. Photos are taken in sequence at regular intervals. Neighboring photos along the flight line usually overlap by 60% or more. A stereopair can be constructed from any two overlapping photographs that share a common area on the ground, most commonly along the flight line.
SPOT Stereopairs
Satellite stereopairs are created by two scenes of the same terrain that are recorded from different viewpoints. Because of its off-nadir viewing capability, it is easy to acquire stereopairs from the SPOT satellite. The SPOT stereopairs are recorded from different orbits on different days.
Epipolar Stereopairs
Epipolar stereopairs are created from triangulated, overlapping imagery using the process in Figure 122. Digital photogrammetry creates a new set of digital images by resampling the overlap region into a stereo orientation. This orientation, called epipolar geometry, is characterized by relief displacement only occurring in one dimension (along the flight line). A feature unique to digital photogrammetry is that there is no need to create a relative stereo model before proceeding to absolute map coordinates.
Generate Elevation Models
Elevation models are generated from overlapping imagery (Figure 123). There are two methods in digital photogrammetry. Method 1 uses the original images and triangulation results. Method 2 uses only the epipolar stereopairs (which are assumed to include geometric information derived from the triangulation results).
Traditional Methods
The traditional method of deriving elevations was to visualize the stereopair in three dimensions using an analog or analytical stereo plotter. The user would then place points and breaklines at critical terrain locations. An alternative method was to set the pointer to a fixed elevation and then proceed to trace contour lines.
Digital Methods
Both of the traditional methods described above can also be used in digital photogrammetry utilizing specialized stereo viewing hardware. However, a powerful new method is introduced with the advent of all-digital systems: image correlation. The general idea is to use pattern matching algorithms to locate the same ground features on two overlapping photographs. The triangulation information is then used to calculate ground (X,Y,Z) values for each correlated feature.
DTMs
A digital terrain model (DTM) is a discrete expression of terrain surface in a data array,
consisting of a group of planimetric coordinates (X,Y) and the elevations of the ground
points and breaklines. A DTM can be in the regular grid form, or it can be represented
with irregular points. Consider DTM as being a general term for elevation models, with
DEMs and TINs (defined below) as specific representations.
A DTM can be extracted from stereo imagery based on the automatic matching of points
in the overlap areas of a stereo model. The stereo model can be a satellite stereo scene
or a pair of digitized aerial photographs.
The resulting DTM can be used as input to geoprocessing software. In particular, it can
be utilized to produce an orthophoto or used in an appropriate 3-D viewing package
(e.g., IMAGINE Virtual GIS).
DEMs
A digital elevation model (DEM) is a specific representation of DTMs in which the
elevation points consist of a regular grid. Often, DEMs are stored as raster files in which each grid cell value contains an elevation value.
TINs
A triangulated irregular network (TIN) is a specific representation of DTMs in which
elevation points can occur at irregular intervals. In addition to elevation points, break-
lines are often included in TINs. A breakline is an elevation polyline, in which each
vertex has its own X, Y, Z value.
DEM Interpolation
The direct results from most image matching techniques are irregular and discrete object surface points. In order to generate a DEM, this irregular set of object points needs to be interpolated. For each grid point, the elevation is computed by a surface interpolation method.
There are many algorithms for DEM interpolation. Some of them have been introduced
in "CHAPTER 1: Raster Data." Other methods, including Least Square Collocation and
Finite Elements, are also used for DEM interpolation.
Often, to describe the terrain surface more accurately, breaklines should be added and
used in the DEM interpolation. TIN based interpolation methods can deal with break-
lines more efficiently.
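As one deliberately simple illustration of gridding irregular object points, the sketch below uses inverse distance weighting; this is only one possible choice (the text names others, such as Least Square Collocation and Finite Elements), and the function and parameter names here are hypothetical.

```python
import numpy as np

def idw_grid(points_xy, elevations, xmin, ymin, cell, ncols, nrows, power=2.0):
    """Interpolate irregular (X, Y, Z) object points onto a regular DEM grid
    using inverse distance weighting (one simple choice among many)."""
    dem = np.empty((nrows, ncols))
    for r in range(nrows):
        for c in range(ncols):
            gx = xmin + cell * (c + 0.5)          # grid node center, X
            gy = ymin + cell * (r + 0.5)          # grid node center, Y
            d = np.hypot(points_xy[:, 0] - gx, points_xy[:, 1] - gy)
            if d.min() < 1e-9:                    # node coincides with a point
                dem[r, c] = elevations[d.argmin()]
            else:
                w = 1.0 / d ** power              # closer points weigh more
                dem[r, c] = np.sum(w * elevations) / np.sum(w)
    return dem

# Five matched object points interpolated onto a 10 x 10 grid of 10 m cells
pts = np.array([[10.0, 5.0], [42.0, 17.0], [80.0, 60.0], [15.0, 90.0], [70.0, 95.0]])
z = np.array([102.3, 98.7, 110.2, 105.5, 99.1])
dem = idw_grid(pts, z, xmin=0.0, ymin=0.0, cell=10.0, ncols=10, nrows=10)
```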
Image Matching
Image matching refers to the automatic acquisition of corresponding image points on the overlapping area of two images.
For more information on image matching, see "Image Matching Techniques" on page 294.
Image Pyramid
Because of the large amounts of image data, the image pyramid is usually adopted in
the image matching techniques to reduce the computation time and to increase the
matching reliability. The pyramid is a data structure consisting of the same image
represented several times, at a decreasing spatial resolution each time. Each level of the
pyramid contains the image at a particular resolution.
The matching process is performed at each level of resolution. The search is first
performed at the lowest resolution level and subsequently at each higher level of
resolution. Figure 125 shows a four-level image pyramid.
[Figure 125: a four-level image pyramid; for example, Level 2 is 256 × 256 pixels at 1:2 resolution and Level 3 is 128 × 128 pixels at 1:4 resolution]
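A minimal sketch of building such a pyramid by repeated 2 × 2 block averaging is shown below; the exact reduction filter used by a given matching implementation may differ.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Return a list of images; level 0 is the full-resolution image and each
    following level is the previous one reduced by 2 x 2 block averaging."""
    pyramid = [np.asarray(image, dtype=np.float64)]
    for _ in range(1, levels):
        prev = pyramid[-1]
        rows = (prev.shape[0] // 2) * 2           # trim an odd row/column if needed
        cols = (prev.shape[1] // 2) * 2
        prev = prev[:rows, :cols]
        reduced = (prev[0::2, 0::2] + prev[1::2, 0::2] +
                   prev[0::2, 1::2] + prev[1::2, 1::2]) / 4.0
        pyramid.append(reduced)
    return pyramid

# Matching starts on pyramid[-1] (lowest resolution) and the result is
# refined level by level down to pyramid[0].
levels = build_pyramid(np.random.rand(512, 512), levels=4)
```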
Area Based Matching
Area based matching can also be called signal based matching. This method determines the correspondence between two image areas according to the similarity of their gray level values. The cross correlation and least squares correlation techniques are well-known methods for area based matching.
Correlation Windows
Area based matching uses correlation windows. These windows consist of a local
neighborhood of pixels. One example of correlation windows is square neighborhoods
(e.g., 3 × 3, 5 × 5, 7 × 7 pixels). In practice, the windows vary in shape and dimension,
based on the matching technique. Area correlation uses the characteristics of these
windows to match ground feature locations in one image to ground features on the
other.
A reference window is the source window on the first image, which remains at a
constant location. Its dimensions are usually square in size (e.g., 3 × 3, 5 × 5, etc.). Search
windows are candidate windows on the second image that are evaluated relative to the
reference window. During correlation, many different search windows are examined
until a location is found that best matches the reference window.
Correlation Calculations
Two correlation calculations are described below: cross correlation and least squares
correlation. Most area based matching calculations, including these methods,
normalize the correlation windows. Therefore, it is not necessary to balance the contrast
or brightness prior to running correlation. Cross correlation is more robust in that it
requires a less accurate a priori position than least squares. However, its precision is
limited to 1.0 pixels. Least squares correlation can achieve precision levels of 0.1 pixels,
but requires an a priori position that is accurate to about 2 pixels. In practice, cross corre-
lation is often followed by least squares.
Cross Correlation
Cross correlation computes the correlation coefficient of the gray values between the template window and the search window, according to the following equation:

$$\rho = \frac{\sum_{i,j}\left[g_1(c_1,r_1) - \bar{g}_1\right]\left[g_2(c_2,r_2) - \bar{g}_2\right]}{\sqrt{\sum_{i,j}\left[g_1(c_1,r_1) - \bar{g}_1\right]^2 \; \sum_{i,j}\left[g_2(c_2,r_2) - \bar{g}_2\right]^2}}$$

with

$$\bar{g}_1 = \frac{1}{n}\sum_{i,j} g_1(c_1,r_1) \qquad\qquad \bar{g}_2 = \frac{1}{n}\sum_{i,j} g_2(c_2,r_2)$$

where
ρ = the correlation coefficient
g1, g2 = the gray values of the template window and the search window
c1, r1 = the pixel coordinates in the template window
c2, r2 = the pixel coordinates in the corresponding search window
n = the total number of pixels in a window
i, j = the pixel index within a window
When using the area based cross correlation, it is necessary to have a good initial
position for the two correlation windows. Also, if the contrast in the windows is very
poor, the correlation will fail.
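A minimal sketch of the calculation (hypothetical function names; equally sized windows at interior pixel positions are assumed): it evaluates the correlation coefficient above for candidate search windows around an a priori position and keeps the best one.

```python
import numpy as np

def correlation_coefficient(template, search):
    """Normalized cross correlation between two equally sized windows."""
    g1 = template - template.mean()
    g2 = search - search.mean()
    denom = np.sqrt((g1 ** 2).sum() * (g2 ** 2).sum())
    if denom == 0.0:                 # flat, zero-contrast window: matching fails
        return 0.0
    return float((g1 * g2).sum() / denom)

def best_match(reference, image, row, col, half=3, radius=5):
    """Search a (2*radius+1)^2 neighborhood of the a priori position (row, col)
    in the second image for the window that best matches the reference window."""
    template = reference[row - half:row + half + 1,
                         col - half:col + half + 1].astype(np.float64)
    best_rho, best_offset = -1.0, (0, 0)
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = row + dr, col + dc
            window = image[r - half:r + half + 1,
                           c - half:c + half + 1].astype(np.float64)
            rho = correlation_coefficient(template, window)
            if rho > best_rho:
                best_rho, best_offset = rho, (dr, dc)
    return best_rho, best_offset
```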
Least Squares Correlation
Least squares correlation is iterative. The parameters calculated during the initial pass
are used in the calculation of the second pass and so on, until an optimum solution has
been determined. Least squares matching can result in high positional accuracy (about
0.1 pixels). However, it is sensitive to initial approximations. The initial coordinates for
the search window prior to correlation must be accurate to about 2 pixels or better.
When least squares correlation fits a search window to the reference window, both
radiometric (pixel gray values) and geometric (location, size, and shape of the search
window) transformations are calculated.
NOTE: The following formulas do not follow the coordinate system nomenclature
established elsewhere in this chapter. The pixel coordinate values are presented as (x,y) instead
of (c,r).
$$g_2(c_2, r_2) = h_0 + h_1\, g_1(c_1, r_1)$$
$$c_2 = a_0 + a_1 c_1 + a_2 r_1$$
$$r_2 = b_0 + b_1 c_1 + b_2 r_1$$
where the h coefficients describe the radiometric (gray value) transformation and the a and b coefficients describe the geometric transformation of the search window.
$$v = (a_1 + a_2 c_1 + a_3 r_1)\,g_x + (b_1 + b_2 c_1 + b_3 r_1)\,g_y - h_1 - h_2\, g_1(c_1, r_1) + \Delta g$$
with  $\Delta g = g_2(c_2, r_2) - g_1(c_1, r_1)$
Feature Based Matching
Feature based matching determines the correspondence between two image features. Most feature based techniques match extracted point features (this is called feature point matching), as opposed to other features, such as lines or complex objects. Poor contrast areas can be avoided with feature based matching.
In order to implement feature based matching, the image features must initially be
extracted. There are several well-known operators for feature point extraction.
Examples include:
• Moravec Operator
• Dreschler Operator
• Förstner Operator
After the features are extracted, the attributes of the features are compared between two
images. The feature pair with the attributes which are the best fit will be recognized as
a match.
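As an illustration of feature point extraction, the sketch below is a minimal version of the Moravec operator; the window size and threshold used here are arbitrary, and operational implementations differ in detail (for example, in the shifts used and in non-maximum suppression).

```python
import numpy as np

def moravec_points(image, window=3, threshold=100.0):
    """Minimal Moravec interest operator: the interest value of a pixel is the
    minimum, over four shift directions, of the sum of squared gray value
    differences between a window and the shifted window. Pixels whose interest
    value exceeds the threshold are returned as feature point candidates."""
    img = np.asarray(image, dtype=np.float64)
    half = window // 2
    rows, cols = img.shape
    interest = np.zeros((rows, cols))
    shifts = [(1, 0), (0, 1), (1, 1), (1, -1)]
    for r in range(half + 1, rows - half - 1):
        for c in range(half + 1, cols - half - 1):
            w = img[r - half:r + half + 1, c - half:c + half + 1]
            ssd = []
            for dr, dc in shifts:
                ws = img[r - half + dr:r + half + 1 + dr,
                         c - half + dc:c + half + 1 + dc]
                ssd.append(((w - ws) ** 2).sum())
            interest[r, c] = min(ssd)
    return np.argwhere(interest > threshold)      # (row, col) candidates
```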
Relation Based Matching
Relation based matching is also called structure based matching. This kind of matching technique uses not only the image features, but also the relation among the features. With relation based matching, the corresponding image structures can be recognized automatically, without any a priori information. However, the process is time-consuming, since it deals with varying types of information. Relation based matching can also be applied for the automatic recognition of control points.
Orthorectification
[Figures: an orthoimage can be generated either from a DTM generated from the imagery or from an external DTM]
An image or photograph with an orthographic projection is one for which every point
looks as if an observer were looking straight down at it, along a line of sight that is
orthogonal (perpendicular) to the earth.
Figure 128: Orthographic Projection (the terrain is projected orthographically onto a reference plane at elevation zero)
Geometric Distortions
When a remotely-sensed image or an aerial photograph is recorded, there is inherent geometric distortion caused by terrain and by the angle of the sensor or camera to the ground. In addition, there are distortions caused by earth curvature, atmospheric diffraction, the camera or sensor itself (e.g., radial lens distortion), and the mechanics of acquisition (e.g., for SPOT, earth rotation and change in orbital position during acquisition). In the following material, only the most significant of these distortions are presented. These are terrain, sensor position, and rotation angles, as well as earth curvature for small-scale images.
• a digital terrain model (DTM) of the area covered by the image, and
In overlap regions of orthoimage mosaics, the digital orthophoto can be used, which
minimizes problems with contrast, cloud cover, occlusions, and reflections from water
and snow.
Relief displacement is corrected for by taking each pixel of a DTM and finding the equiv-
alent position in the satellite or aerial image. A brightness value is determined for this
location based on resampling of the surrounding pixels. The brightness value, elevation,
and orientation are used to calculate the equivalent location in the orthoimage file.
Figure 129: Digital Orthophoto - Finding Gray Values
Where
P = ground point
P1 = image point
O = perspective center (origin)
X,Z = ground coordinates (in DTM file)
f = focal length
The resulting orthoimages have similar basic characteristics to images created by other
means of rectification, such as polynomial warping or rubber sheeting. On any rectified
image, a map ground coordinate can be quickly calculated for a pixel position. The
orthorectification process almost always explicitly models the ground terrain and
sensor attitude (position and rotation angles), which makes it much more accurate for
off-nadir imagery, larger image scales, and mountainous regions. Also, orthorectification often requires fewer control points than other methods.
Resampling methods used are nearest neighbor, bilinear interpolation, and cubic
convolution.
Generally, when the cell sizes of orthoimage pixels are selected, they should be similar to or larger than the cell sizes of the original image. For example, if the image was scanned 9K × 9K, one pixel would represent 0.025 mm on the image. Assuming that the image scale (SI) of this photo is 1:40,000, then the cell size on the ground is about 1 m. For the orthoimage, it would be appropriate to choose a pixel spacing of 1 m or larger. Choosing a smaller pixel size would oversample the original image.
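The arithmetic behind this example can be written out as below; the 230 mm photograph width is an assumption (a standard 23 cm frame), consistent with the 0.025 mm pixel quoted above.

```python
# Assumed inputs: a 23 cm (230 mm) aerial photograph scanned to 9,000 pixels,
# at an image scale of 1:40,000.
photo_width_mm = 230.0
scan_pixels = 9000
scale_denominator = 40000

pixel_size_mm = photo_width_mm / scan_pixels                   # ~0.026 mm on the film
ground_cell_m = pixel_size_mm / 1000.0 * scale_denominator     # ~1.0 m on the ground
print(round(pixel_size_mm, 4), "mm per pixel ->", round(ground_cell_m, 2), "m on the ground")
```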
For SPOT Pan images, a cell size of 10 x 10 meters is appropriate. Any further
enlargement from the original scene to the orthophoto would not improve the image
detail.
Landsat Orthorectification
Landsat TM or Landsat MSS sensor systems have a complex geometry which includes factors such as a rotating mirror inside the sensor, changes in orbital position during acquisition, and earth rotation. The resulting “zero level” imagery requires sophisticated rectification that is beyond the capabilities of many end users. For this reason, almost all Landsat data formats have already been preprocessed to minimize these distortions. Applying simple polynomial rectification techniques to these formats usually fulfills most georeferencing needs when the terrain is relatively flat. However, imagery of mountainous regions needs to account for relief displacement for effective georeferencing. A solution to this problem is discussed below.
This section illustrates just one example of how to correct the relief distortion by using the
polynomial formulation.
Vertical imagery taken by a line scanner, such as Landsat TM or Landsat MSS, can be
used with an existing DTM to yield an orthoimage. No information about the sensor or
the orbit of the satellite is needed. Instead, a transformation is calculated using infor-
mation about the ground region and image capture. The correction takes into account:
• elevation
The edges of a Landsat image can be determined by a search based on a gray level
threshold between the image and background fill values. Sample points are determined
along the edges of the image. Each edge line is then obtained by a least squares
regression. The nadir line is found by averaging the left and right edge lines.
For simplicity, each line (the four edges and the nadir line) can be approximated by a
straight line without losing generality.
NOTE: The following formulas do not follow the coordinate system nomenclature
established elsewhere in this chapter. The pixel coordinate values are presented as (x,y) instead
of (c,r).
The distance d of an image point (x, y) from the nadir line x = c0 + c1y, measured along the scan line, is:
$$d = \frac{\sqrt{1 + g_1^2}}{1 - c_1 g_1}\,(x - c_0 - c_1 y) \qquad (1)$$
Where g1 is the slope of the scan line (y = g0 +g1x), which can be obtained based on the
top and bottom edges with a method similar to the one described for the left and right
edges.
Figure 130: Image Displacement (the displacement ∆d on the image plane of a point above the datum, given the flying height H of the exposure station above the datum, the earth ellipsoid radius R, and the angles α and β at the earth's center)
$$\frac{(R + Z)\sin\beta}{(R + H) - (R + Z)\cos\beta} = \frac{R\sin\alpha}{R + H - R\cos\alpha} \qquad (2)$$

$$\frac{\Delta d}{d} = 1 - \frac{\tan\beta}{\tan\alpha} \qquad (3)$$
Considering that α and β are very small angles, the following approximations can be used with sufficient accuracy:
$$\cos\alpha \approx 1 \qquad\qquad \cos\beta \approx 1$$
Then, an explicit approximate equation can be derived from equations (1), (2), and (3):
$$\Delta d = \frac{\sqrt{1 + g_1^2}}{1 - c_1 g_1} \cdot \frac{Z}{H} \cdot \frac{R + H}{R + Z}\,(x - c_0 - c_1 y) \qquad (4)$$
where Z is the elevation of the ground point, H is the height of the sensor above the datum, and R is the radius of the earth.
$$(1.0 - p)\,x + c_1 p\, y = F_1(X, Y) - c_0\, p$$
$$g_1 p\, x + (1.0 - c_1 g_1 p)\,y = F_2(X, Y) - c_0 g_1 p$$
where
Map Feature Collection
Feature collection is the process of identifying, delineating, and labeling various types of natural and man-made phenomena from remotely-sensed images. The features are represented by attribute points, lines, and polygons. General categories of features include elevation models, hydrology, infrastructure, and land cover. There can be many different elements within a general category. For instance, infrastructure can be broken down into roads, utilities, and buildings. To achieve high levels of positional accuracy, photogrammetric processing is applied to the imagery prior to collecting features (see Figures 131 and 132).
Stereoscopic Collection
Method 1 (Figure 131), which uses a stereopair as the image backdrop, is the most common approach. Viewing the stereopair in three dimensions provides greater image content and the ability to obtain three-dimensional feature ground coordinates (X,Y,Z).
Monoscopic Collection
Method 2 (Figure 132), which uses an orthoimage as the image backdrop, works well for non-urban areas and/or smaller image scales. The features are collected from orthoimages while viewing them in mono. Therefore, only X and Y ground coordinates can be obtained. Monoscopic collection from orthoimages has no special hardware requirements, making orthoimages an ideal image source for many applications.
Orthoimages
Orthoimages are the end product of orthorectification. Once created, these digital images can be enhanced, merged with other data sources, and mosaicked with adjacent orthoimages. The resulting digital file makes an ideal image backdrop for many applications, including feature collection, visualization, and input into GIS/Remote sensing systems. Orthoimages have very good positional accuracy, making them an excellent primary data source for all types of mapping.
Topographic Database
Features obtained from elevation extraction and feature collection can serve as primary inputs into a topographic database. This database can then be utilized by GIS and map publishing systems.
Topographic Maps
Topographic maps are the traditional end product of the photogrammetric process. In the digital era, topographic maps are often produced by map publishing systems which utilize a topographic database.
CHAPTER 8
Rectification
Introduction
Raw, remotely sensed image data gathered by a satellite or aircraft are representations of the irregular surface of the earth. Even images of seemingly “flat” areas are distorted by both the curvature of the earth and the sensor being used. This chapter covers the processes of geometrically correcting an image so that it can be represented on a planar surface, conform to other images, and have the integrity of a map.
A map projection system is any system designed to represent the surface of a sphere or
spheroid (such as the earth) on a plane. There are a number of different map projection
methods. Since flattening a sphere to a plane causes distortions to the surface, each map
projection system compromises accuracy between certain properties, such as conser-
vation of distance, angle, or area. For example, in equal area map projections, a circle of
a specified diameter drawn at any location on the map will represent the same total
area. This is useful for comparing land use area, density, and many other applications.
However, to maintain equal area, the shapes, angles, and scale in parts of the map may
be distorted (Jensen 1996).
There are a number of map coordinate systems for determining location on an image.
These coordinate systems conform to a grid, and are expressed as X,Y (column, row)
pairs of numbers. Each map projection system is associated with a map coordinate
system.
Rectification is the process of transforming the data from one grid system into another
grid system using an nth order polynomial. Since the pixels of the new grid may not
align with the pixels of the original grid, the pixels must be resampled. Resampling is
the process of extrapolating data values for the pixels on the new grid from the values
of the source pixels.
Registration
In many cases, images of one area that are collected from different sources must be used
together. To be able to compare separate images pixel by pixel, the pixel grids of each
image must conform to the other images in the data base. The tools for rectifying image
data are used to transform disparate images to the same coordinate system.
Registration is the process of making an image conform to another image. A map
coordinate system is not necessarily involved. For example, if image A is not rectified
and it is being used with image B, then image B must be registered to image A, so that
they conform to each other. In this example, image A is not rectified to a particular map
projection, so there is no need to rectify image B to a map projection.
Geocoded data are images that have been rectified to a particular map projection and
pixel size, and usually have had radiometric corrections applied. It is possible to
purchase image data that is already geocoded. Geocoded data should be rectified only
if they must conform to a different projection system or be registered to other rectified
data.
Latitude/Longitude
Latitude/Longitude is a spherical coordinate system that is not associated with a map
projection. Lat/Lon expresses locations in the terms of a spheroid, not a plane.
Therefore, an image is not usually “rectified” to Lat/Lon, although it is possible to
convert images to Lat/Lon, and some tips for doing so are included in this chapter.
You can view map projection information for a particular file using the ERDAS IMAGINE
Image Information utility. Image Information allows you to modify map information that is
incorrect. However, you cannot rectify data using Image Information. You must use the Recti-
fication tools described in this chapter.
The properties of map projections and of particular map projection systems are discussed in
"CHAPTER 11: Cartography" and "APPENDIX C: Map Projections."
Orthorectification
Orthorectification is a form of rectification that corrects for terrain displacement and can be used if there is a digital elevation model (DEM) of the study area. In relatively flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is required, orthorectification is recommended.
When to Rectify
Rectification is necessary in cases where the pixel grid of the image must be changed to fit a map projection system or a reference image. There are several reasons for rectifying image data:
• mosaicking images
Before rectifying the data, one must determine the appropriate coordinate system for
the data base. To select the optimum map projection and coordinate system, the
primary use for the data base must be considered.
• How large or small an area will be mapped? Different projections are intended for
different size areas.
• Where on the globe is the study area? Polar regions and equatorial regions require
different projections for maximum accuracy.
• What is the extent of the study area? Circular, north-south, east-west, and oblique
areas may all require different projection systems (ESRI 1992).
• the map coordinate of the upper left corner of the image, and
This information is usually the same for each layer of an image (.img) file, although it
could be different. For example, the cell size of band 6 of Landsat TM data is different
than the cell size of the other bands.
Use the Image Information utility to modify image file header information that is incorrect.
Disadvantages of Rectification
During rectification, the data file values of rectified pixels must be resampled to fit into a new grid of pixel rows and columns. Although some of the algorithms for calculating these values are highly reliable, some spectral integrity of the data can be lost during rectification. If map coordinates or map units are not needed in the application, then it may be wiser not to rectify the image. An unrectified image is more spectrally correct than a rectified image.
Classification
Some analysts recommend classification before rectification, since the classification will
then be based on the original data values. Another benefit is that a thematic file has only
one band to rectify instead of the multiple bands of a continuous file. On the other hand,
it may be beneficial to rectify the data first, especially when using Global Positioning
System (GPS) data for the ground control points. Since these data are very accurate, the
classification may be more accurate if the new coordinates help to locate better training
samples.
Thematic Files
Nearest neighbor is the only appropriate resampling method for thematic files, which
may be a drawback in some applications. The available resampling methods are
discussed in detail later in this chapter.
Rectification Steps
NOTE: Registration and rectification involve similar sets of procedures. Throughout this documentation, many references to rectification also apply to image-to-image registration.
Usually, rectification is the conversion of data file coordinates to some other grid and coordinate system, called a reference system. Rectifying or registering image data on disk involves the following general steps, regardless of the application:
3. Create an output image file with the new coordinate information in the header. The pix-
els must be resampled to conform to the new grid.
Images can be rectified on the display (in a Viewer) or on the disk. Display rectification
is temporary, but disk rectification is permanent, because a new file is created. Disk
rectification involves:
• rearranging the pixels of the image onto a new grid, which conforms to a plane in
the new map projection and coordinate system, and
• inserting new information to the header of the file, such as the upper left corner
map coordinates and the area represented by each pixel.
• source coordinates — usually data file coordinates in the image being rectified
The term “map coordinates” is sometimes used loosely to apply to reference coordi-
nates and rectified coordinates. These coordinates are not limited to map coordinates.
For example, in image-to-image registration, map coordinates are not necessary.
GCPs in ERDAS IMAGINE
Any ERDAS IMAGINE image can have one GCP set associated with it. The GCP set is stored in the image file (.img) along with the raster layers. If a GCP set exists for the top file that is displayed in the Viewer, then those GCPs can be displayed when the GCP Tool is opened.
In the CellArray of GCP data that displays in the GCP Tool, one column shows the point
ID of each GCP. The point ID is a name given to GCPs in separate files that represent
the same geographic location. Such GCPs are called corresponding GCPs.
A default point ID string is provided (such as “GCP #1”), but the user can enter his or
her own unique ID strings to set up corresponding GCPs as needed. Even though only
one set of GCPs is associated with an image file, one GCP set can include GCPs for a
number of rectifications by changing the point IDs for different groups of corre-
sponding GCPs.
Entering GCPs
Accurate ground control points are essential for an accurate rectification. From the ground control points, the rectified coordinates for all other points in the image are extrapolated. Select many GCPs throughout the scene. The more dispersed the GCPs
are, the more reliable the rectification will be. GCPs for large-scale imagery might
include the intersection of two roads, airport runways, utility corridors, towers, or
buildings. For small-scale imagery, larger features such as urban areas or geologic
features may be used. Landmarks that can vary (e.g., the edges of lakes or other water
bodies, vegetation, etc.) should not be used.
The source and reference coordinates of the ground control points can be entered in the
following ways:
• Use the mouse to select a pixel from an image in the Viewer. With both the source
and destination Viewers open, enter source coordinates and reference coordinates
for image-to-image registration.
Information on the use and setup of a digitizing tablet is discussed in "CHAPTER 2: Vector
Layers."
Mouse Option
When entering GCPs with the mouse, the user should try to match coarser resolution
imagery to finer resolution imagery (i.e., Landsat TM to SPOT) and avoid stretching
resolution spans greater than a cubic convolution radius (a 4 × 4 area). In other words,
the user should not try to match Landsat MSS to SPOT or Landsat TM to an aerial
photograph.
Refer to "APPENDIX B: File Formats and Extensions" for more information on the format of
.img and .gcc files.
The order of transformation is the order of the polynomial used in the transformation.
ERDAS IMAGINE allows 1st- through nth-order transformations. Usually, 1st-order or
2nd-order transformations are used.
You can specify the order of the transformation you want to use in the Transform Editor.
Transformation Matrix
A transformation matrix is computed from the GCPs. The matrix consists of coeffi-
cients which are used in polynomial equations to convert the coordinates. The size of
the matrix depends upon the order of transformation. The goal in calculating the coeffi-
cients of the transformation matrix is to derive the polynomial equations for which
there is the least possible amount of error when they are used to transform the reference
coordinates of the GCPs into the source coordinates. It is not always possible to derive
coefficients that produce no error. For example, in Figure 133, GCPs are plotted on a
graph and compared to the curve that is expressed by a polynomial.
Figure 133: Polynomial Curve vs. GCPs (GCPs plotted on a graph of source X coordinate versus reference X coordinate, compared with the polynomial curve)
Every GCP influences the coefficients, even if there is not a perfect fit of each GCP to the
polynomial that the coefficients represent. The distance between the GCP reference
coordinate and the curve is called RMS error, which is discussed later in this chapter.
The least squares regression method is used to calculate the transformation matrix
from the GCPs. This common method is discussed in statistics textbooks.
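A minimal sketch of that idea for a 1st-order transformation, fitted with ordinary least squares; the coefficient naming here is illustrative only and does not follow the ERDAS IMAGINE matrix convention described below.

```python
import numpy as np

def fit_first_order(source_xy, reference_xy):
    """Least squares fit of the six coefficients of a 1st-order transformation
    that maps reference coordinates of the GCPs into source coordinates:
        x_source = ax0 + ax1*X_ref + ax2*Y_ref
        y_source = ay0 + ay1*X_ref + ay2*Y_ref
    source_xy and reference_xy are N x 2 arrays of corresponding GCPs."""
    A = np.column_stack([np.ones(len(reference_xy)),
                         reference_xy[:, 0], reference_xy[:, 1]])
    coeff_x, *_ = np.linalg.lstsq(A, source_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, source_xy[:, 1], rcond=None)
    return coeff_x, coeff_y

# Four GCPs (one more than the minimum of three for a 1st-order fit)
src = np.array([[10.0, 12.0], [250.0, 15.0], [20.0, 240.0], [245.0, 250.0]])
ref = np.array([[500000.0, 3800000.0], [507200.0, 3800100.0],
                [500300.0, 3793100.0], [507050.0, 3792900.0]])
coeff_x, coeff_y = fit_first_order(src, ref)
```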
A 1st-order transformation is a linear transformation. It can change:
• location in X and/or Y
• scale in X and/or Y
• skew in X and/or Y
• rotation
A 1st-order transformation can also be used for data that are already projected onto a
plane. For example, SPOT and Landsat Level 1B data are already transformed to a
plane, but may not be rectified to the desired map projection. When doing this type of
rectification, it is not advisable to increase the order of transformation if at first a high
RMS error occurs. Examine other factors first, such as the GCP source and distribution,
and look for systematic errors.
The following linear adjustments can be applied to a 1st-order transformation:
• scale
• offset
• rotate
• reflect
Scale
Scale is the same as the zoom option in the Viewer, except that the user can specify
different scaling factors for X and Y.
If you are scaling an image in the Viewer, the zoom option will undo any changes to the scale
that you do, and vice versa.
Offset
Offset moves the image by a user-specified number of pixels in the X and Y directions.
Rotation
For rotation, the user can specify any positive or negative number of degrees for clockwise and counterclockwise rotation. Rotation occurs around the center pixel of the image.
Linear adjustments are available from the Viewer or from the Transform Editor. You can
perform linear transformations in the Viewer and then load that transformation to the
Transform Editor, or you can perform the linear transformations directly on the transformation
matrix.
Figure 134 illustrates how the data are changed in linear transformations.
320 ERDAS
Orders of Transformation
A 1st-order transformation matrix has the coefficients:
$$\begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{pmatrix}$$
which are used in a pair of 1st-order polynomials (EQUATION 3):
$$x_o = b_1 + b_2 x_i + b_3 y_i$$
$$y_o = a_1 + a_2 x_i + a_3 y_i$$
where xi and yi are the source (input) coordinates and xo and yo are the rectified (output) coordinates.
The position of the coefficients in the matrix and the assignment of the coefficients in the
polynomial is an ERDAS IMAGINE convention. Other representations of a 1st-order transfor-
mation matrix may take a different form.
The number of coefficients in a transformation matrix of order t is:
$$2\sum_{i=1}^{t+1} i \qquad \text{EQUATION 4}$$
It is multiplied by two for the two sets of coefficients—one set for X, one for Y. This sum is equivalent to:
$$(t + 1) \times (t + 2) \qquad \text{EQUATION 5}$$
Clearly, the size of the transformation matrix increases with the order of the transformation.
The polynomial for the output X coordinate of a transformation of order t has the general form:
$$x_o = A + Bx + Cy + Dx^2 + Exy + Fy^2 + \ldots + Qx^i y^j + \ldots + \Omega y^t \qquad \text{EQUATION 6}$$
where A, B, C, ... Ω are the coefficients calculated from the GCPs. All combinations of $x^i y^j$ are used in the polynomial expression, such that:
$$i + j \leq t \qquad \text{EQUATION 7}$$
The equation for yo takes the same format, with different coefficients. For example, a 3rd-order transformation matrix contains (3 + 1) × (3 + 2) = 20 coefficients (EQUATION 8).
The example below uses only one coordinate (X), instead of two (X,Y), which are used
in the polynomials for rectification. This enables the user to draw two-dimensional
graphs that illustrate the way that higher orders of transformation affect the output
image.
NOTE: Because only the X coordinate is used in these examples, the number of GCPs used is less
than the numbers required to actually perform the different orders of transformation.
Coefficients like those presented in this example would generally be calculated by the
least squares regression method. Suppose GCPs are entered with these X coordinates:
Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              9
3                              1
$$x_r = (25) + (-8)x_i \qquad \text{EQUATION 9}$$
where xr is the reference (output) X coordinate and xi is the source (input) X coordinate.
This equation takes on the same format as the equation of a line (y = mx + b). In mathe-
matical terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is
also known as a linear transformation. This equation is graphed in Figure 136.
Figure 136: Transformation Example—1st-Order (the GCPs plotted with the line xr = (25) + (−8)xi)
Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1
Figure 137: Transformation Example—2nd GCP Changed (the three GCPs no longer fall on a straight line)
A line cannot connect these points, which illustrates that they cannot be expressed by
a 1st-order polynomial, like the one above. In this case, a 2nd-order polynomial equation
will express these points:
$$x_r = (31) + (-16)x_i + (2)x_i^2 \qquad \text{EQUATION 10}$$
Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn
in Figure 138.
Figure 138: Transformation Example—2nd-Order (the curve xr = (31) + (−16)xi + (2)xi² passes through all three GCPs)
Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1
4                              5
Figure 139: Transformation Example—4th GCP Added (the new GCP at (4, 5) does not fall on the 2nd-order curve)
As illustrated in Figure 139, this fourth GCP does not fit on the curve of the 2nd-order
polynomial equation. So that all of the GCPs would fit, the order of the transformation
could be increased to 3rd-order. The equation and graph in Figure 140 would then
result.
Figure 140: Transformation Example—3rd-Order (the curve xr = (25) + (−5)xi + (−4)xi² + (1)xi³ fits all four GCPs)
Source X Coordinate (input)    Reference X Coordinate (output)
1                              xo(1) = 17
2                              xo(2) = 7
3                              xo(3) = 1
4                              xo(4) = 5
[Figure: after the 3rd-order transformation above, the four input pixels are reordered along the output axis; sorted by their output X coordinates (1 through 18), they appear in the order 3, 4, 2, 1]
Minimum Number of GCPs
Higher orders of transformation can be used to correct more complicated types of distortion. However, to use a higher order of transformation, more GCPs are needed. For instance, three points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-order transformation is the equation of a paraboloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation. The minimum number of points required to perform a transformation of order t equals:
$$\frac{(t + 1)(t + 2)}{2}$$
For 1st- through 10th-order transformations, the minimum number of GCPs required
to perform a transformation is listed in the table below.
Order of Transformation    Minimum GCPs Required
1                          3
2                          6
3                          10
4                          15
5                          21
6                          28
7                          36
8                          45
9                          55
10                         66
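A small sketch that reproduces the table from the formula above:

```python
def minimum_gcps(order):
    """Minimum number of GCPs for a transformation of order t:
    ((t + 1)(t + 2)) / 2, i.e., half the number of coefficients, since each
    GCP supplies one X equation and one Y equation."""
    return (order + 1) * (order + 2) // 2

# Prints [3, 6, 10, 15, 21, 28, 36, 45, 55, 66], matching the table above.
print([minimum_gcps(t) for t in range(1, 11)])
```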
For the best rectification results, you should always use more than the minimum number of
GCPs and they should be well distributed.
GCP Prediction and Matching
Automated GCP prediction enables the user to pick a GCP in either coordinate system and automatically locate that point in the other coordinate system based on the current transformation parameters.
Automated GCP matching is a step beyond GCP prediction. For image to image recti-
fication, a GCP selected in one image is precisely matched to its counterpart in the other
image using the spectral characteristics of the data and the transformation matrix. GCP
matching enables the user to fine tune a rectification for highly accurate results.
GCP Prediction
GCP prediction is a useful technique to help determine if enough ground control points
have been gathered. After selecting several GCPs, select a point in either the source or
the destination image, then use GCP prediction to locate the corresponding GCP on the
other image (map). This point is determined based on the current transformation
matrix. Examine the automatically generated point and see how accurate it is. If it is
within an acceptable range of accuracy, then there may be enough GCPs to perform an
accurate rectification (depending upon how evenly dispersed the GCPs are). If the
automatically generated point is not accurate, then more GCPs should be gathered
before rectifying the image.
GCP prediction can also be used when applying an existing transformation matrix to
another image in a data set. This saves time in selecting another set of GCPs by hand.
Once the GCPs are automatically selected, those that do not meet an acceptable level of
error can be edited.
GCP Matching
In GCP matching the user can select which layers from the source and destination
images to use. Since the matching process is based on the reflectance values, select
layers that have similar spectral wavelengths, such as two visible bands or two infrared
bands. The user can perform histogram matching to ensure that there is no offset
between the images. The user can also select the radius from the predicted GCP from
which the matching operation will search for a spectrally similar pixel. The search
window can be any odd size between 5 × 5 and 21 × 21.
A correlation threshold is used to accept or discard points. The correlation ranges from
-1.000 to +1.000. The threshold is an absolute value threshold ranging from 0.000 to
1.000. A value of 0.000 indicates a bad match and a value of 1.000 indicates an exact
match. Values above 0.8000 or 0.9000 are recommended. If a match cannot be made because the absolute value of the correlation is less than the threshold, the user has the option to discard points.
RMS Error
RMS error (root mean square) is the distance between the input (source) location of a GCP and the retransformed location for the same GCP. In other words, it is the difference between the desired output coordinate for a GCP and the actual output coordinate for the same point, when the point is transformed with the transformation matrix.
$$\text{RMS error} = \sqrt{(x_r - x_i)^2 + (y_r - y_i)^2} \qquad \text{EQUATION 14}$$
where xi and yi are the input source coordinates and xr and yr are the retransformed coordinates.
RMS error is expressed as a distance in the source coordinate system. If data file coordi-
nates are the source coordinates, then the RMS error is a distance in pixel widths. For
example, an RMS error of 2 means that the reference pixel is 2 pixels away from the
retransformed pixel.
Residuals and RMS Error Per GCP
The ERDAS IMAGINE GCP Tool contains columns for the X and Y residuals. Residuals are the distances between the source and retransformed coordinates in one direction. They are shown for each GCP. The X residual is the distance between the source X coordinate and the retransformed X coordinate. The Y residual is the distance between the source Y coordinate and the retransformed Y coordinate.
If the GCPs are consistently off in either the X or the Y direction, more points should be
added in that direction. This is a common problem in off-nadir data.
The RMS error of each point is calculated with a distance equation:
$$R_i = \sqrt{XR_i^{\,2} + YR_i^{\,2}} \qquad \text{EQUATION 15}$$
where Ri is the RMS error of GCPi, and XRi and YRi are its X and Y residuals.
Figure 142 illustrates the relationship between the residuals and the RMS error per
point.
[Figure 142: the X and Y residuals of a retransformed GCP form the two sides of a right triangle whose hypotenuse is the RMS error of that point]
The X RMS error Rx, the Y RMS error Ry, and the total RMS error T are calculated from the residuals of all n GCPs:
$$R_x = \sqrt{\frac{1}{n}\sum_{i=1}^{n} XR_i^{\,2}} \qquad\qquad R_y = \sqrt{\frac{1}{n}\sum_{i=1}^{n} YR_i^{\,2}}$$
$$T = \sqrt{R_x^{\,2} + R_y^{\,2}} \quad\text{or}\quad T = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(XR_i^{\,2} + YR_i^{\,2}\right)}$$
where n is the number of GCPs, and XRi and YRi are the X and Y residuals of GCPi.
Error Contribution by Point
A normalized value representing each point’s RMS error in relation to the total RMS error is also reported. This value is listed in the Contribution column of the GCP Tool.
$$E_i = \frac{R_i}{T} \qquad \text{EQUATION 16}$$
where Ei is the error contribution of GCPi, Ri is the RMS error of GCPi, and T is the total RMS error.
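A sketch of this bookkeeping (the formulas above, not the GCP Tool itself):

```python
import numpy as np

def gcp_rms_report(source_xy, retransformed_xy):
    """Residuals, RMS error per GCP, X/Y/total RMS errors, and the error
    contribution of each point, following the equations above."""
    src = np.asarray(source_xy, dtype=float)
    ret = np.asarray(retransformed_xy, dtype=float)
    residuals = ret - src                                # X and Y residuals
    r_i = np.hypot(residuals[:, 0], residuals[:, 1])     # RMS error per GCP
    n = len(r_i)
    rx = np.sqrt(np.sum(residuals[:, 0] ** 2) / n)       # X RMS error
    ry = np.sqrt(np.sum(residuals[:, 1] ** 2) / n)       # Y RMS error
    total = np.sqrt(rx ** 2 + ry ** 2)                   # total RMS error T
    contribution = r_i / total                           # E_i = R_i / T
    return residuals, r_i, rx, ry, total, contribution
```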
Tolerance of RMS Error
In most cases, it will be advantageous to tolerate a certain amount of error rather than take a higher order of transformation. The amount of RMS error that is tolerated can be thought of as a window around each source coordinate, inside which a retransformed coordinate is considered to be correct (that is, close enough to use). For example, if the RMS error tolerance is 2, then the retransformed pixel can be 2 pixels away from the source pixel and still be considered accurate.
[Figure: the RMS error tolerance forms a window around each source coordinate; retransformed coordinates within this range are considered correct]
Acceptable RMS error is determined by the end use of the data base, the type of data
being used, and the accuracy of the GCPs and ancillary data being used. For example,
GCPs acquired from GPS should have an accuracy of about 10 m, but GCPs from
1:24,000-scale maps should have an accuracy of about 20 m.
It is important to remember that RMS error is reported in pixels. Therefore, if the user
is rectifying Landsat TM data and wants the rectification to be accurate to within 30
meters, the RMS error should not exceed 0.50. If the user is rectifying AVHRR data, an
RMS error of 1.50 might be acceptable. Acceptable accuracy will depend on the image
area and the particular project.
Most rectifications are either 1st-order or 2nd-order. The danger of using higher order rectifica-
tions is that the more complicated the equation for the transformation, the less regular and
predictable the results will be. To fit all of the GCPs, there may be very high distortion in the
image.
After each computation of a transformation matrix and RMS error, there are four
options:
• Throw out the GCP with the highest RMS error, assuming that this GCP is the least
accurate. Another transformation matrix can then be computed from the remaining
GCPs. A closer fit should be possible. However, if this is the only GCP in a
particular region of the image, it may cause greater error to remove it.
• Select only the points for which you have the most confidence.
Resampling Methods
The next step in the rectification/registration process is to create the output file. Since the grid of pixels in the source image rarely matches the grid for the reference image, the pixels are resampled so that new data file values for the output file can be calculated.
• Nearest neighbor — uses the value of the closest pixel to assign to the output pixel
value.
• Bilinear interpolation — uses the data file values of four pixels in a 2 × 2 window
to calculate an output value with a bilinear function.
• Cubic convolution — uses the data file values of sixteen pixels in a 4 × 4 window
to calculate an output value with a cubic function.
If the output units are pixels, then the origin of the image is the upper left corner.
Otherwise, the origin is the lower left corner.
“Rectifying” to Lat/Lon
The user can specify the nominal cell size if the output coordinate system is Lat/Lon. The output cell size for a geographic projection (i.e., Lat/Lon) is always in angular units of decimal degrees. However, if the user wants the cell to be a specific size in meters, he
or she can enter meters and calculate the equivalent size in decimal degrees. For
example, if the user wants the output file cell size to be 30 × 30 meters, then the program
would calculate what this size would be in decimal degrees and automatically update
the output cell size. Since the transformation between angular (decimal degrees) and
nominal (meters) measurements varies across the image, the transformation is based on
the center of the output file.
Enter the nominal cell size in the Nominal Cell Size dialog.
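A rough sketch of the meters-to-decimal-degrees conversion at the center of the output file, using a spherical-earth approximation (the actual conversion depends on the spheroid used):

```python
import math

def nominal_cell_size_in_degrees(cell_m, center_lat_deg, earth_radius_m=6371000.0):
    """Approximate a nominal cell size given in meters as decimal degrees of
    longitude and latitude at the latitude of the output file center."""
    deg_lat = math.degrees(cell_m / earth_radius_m)
    deg_lon = math.degrees(cell_m /
                           (earth_radius_m * math.cos(math.radians(center_lat_deg))))
    return deg_lon, deg_lat

# A 30 x 30 m cell near 34 degrees N is roughly 0.00033 x 0.00027 degrees.
print(nominal_cell_size_in_degrees(30.0, 34.0))
```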
Nearest Neighbor
To determine an output pixel’s nearest neighbor, the rectified coordinates (xo,yo) of the pixel are retransformed back to the source coordinate system using the inverse of the transformation matrix. The retransformed coordinates (xr,yr) are used in bilinear interpolation and cubic convolution as well. The pixel that is closest to the retransformed coordinates (xr,yr) is the nearest neighbor. The data file value(s) for that pixel become the data file value(s) of the pixel in the output image.
Advantages:
• Transfers original data values without averaging them, as the other methods do; therefore, the extremes and subtleties of the data values are not lost. This is an important consideration when discriminating between vegetation types, locating an edge associated with a lineament, or determining different levels of turbidity or temperatures in a lake (Jensen 1996).
• Suitable for use before classification.
• The easiest of the three methods to compute and the fastest to use.
• Appropriate for thematic files, which can have data file values based on a qualitative (nominal or ordinal) system or a quantitative (interval or ratio) system. The averaging that is performed with bilinear interpolation and cubic convolution is not suited to a qualitative class value system.
Disadvantages:
• When this method is used to resample from a larger to a smaller grid size, there is usually a “stair stepped” effect around diagonal lines and curves.
• Data values may be dropped, while other values may be duplicated.
• Using on linear thematic data (e.g., roads, streams) may result in breaks or gaps in a network of linear data.
[Figure: bilinear interpolation; the retransformed coordinate (xr, yr) at point r lies among pixels 1, 2, 3, and 4, with interpolated points m and n, offsets dx and dy, and pixel spacing D]
To calculate Vr, first Vm and Vn are considered. By interpolating Vm and Vn, the user can
perform linear interpolation, which is a simple process to illustrate. If the data file
values are plotted in a graph relative to their distances from one another, then a visual
linear interpolation is apparent. The data file value of m (Vm) is a function of the change
in the data file value between pixels 3 and 1 (that is, V3 - V1).
Figure 147: Linear Interpolation (the data file value Vm lies on the line between V1 and V3, plotted against the data file coordinate Y, with slope (V3 − V1)/D)
$$V_m = \frac{V_3 - V_1}{D} \times dy + V_1 \qquad \text{EQUATION 17}$$
where V1 and V3 are the data file values of pixels 1 and 3, dy is the distance of m from pixel 1 in the Y direction, and D is the distance between pixels 1 and 3.
If one considers that (V3 - V1 / D) is the slope of the line in the graph above, then this
equation translates to the equation of a line in y = mx + b form.
Similarly, the equation for calculating the data file value for n (Vn) in the pixel grid is:
$$V_n = \frac{V_4 - V_2}{D} \times dy + V_2 \qquad \text{EQUATION 18}$$
From Vn and Vm, the data file value for r, which is at the retransformed coordinate location (xr, yr), can be calculated in the same manner:
$$V_r = \frac{V_n - V_m}{D} \times dx + V_m \qquad \text{EQUATION 19}$$
Substituting Equations 17 and 18 into Equation 19 gives:
$$V_r = \frac{\left(\dfrac{V_4 - V_2}{D}\,dy + V_2\right) - \left(\dfrac{V_3 - V_1}{D}\,dy + V_1\right)}{D}\times dx \;+\; \frac{V_3 - V_1}{D}\times dy + V_1$$
which simplifies to:
$$V_r = \frac{V_1(D - dx)(D - dy) + V_2(dx)(D - dy) + V_3(D - dx)(dy) + V_4(dx)(dy)}{D^2}$$
In most cases D = 1, since data file coordinates are used as the source coordinates and
data file coordinates increment by 1.
Some equations for bilinear interpolation express the output data file value as:
$$V_r = \sum_i w_i V_i \qquad \text{EQUATION 20}$$
where:
wi is a weighting factor
The equation above could be expressed in a similar format, in which the calculation of
wi is apparent:
$$V_r = \sum_{i=1}^{4} \frac{(D - \Delta x_i)(D - \Delta y_i)}{D^2} \times V_i \qquad \text{EQUATION 21}$$
where:
∆xi = the change in the X direction between (xr,yr) and the data file coordinate
of pixel i
∆yi = the change in the Y direction between (xr,yr) and the data file coordinate
of pixel i
For each of the four pixels, the data file value is weighted more if the pixel is closer to
(xr,yr).
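A minimal sketch of the calculation for one output pixel, assuming D = 1 (data file coordinates) and a retransformed coordinate that falls in the interior of the image:

```python
import numpy as np

def bilinear_value(image, xr, yr):
    """Bilinear interpolation of the data file value at the retransformed
    coordinate (xr, yr) from the four surrounding pixels (Equation 21, D = 1)."""
    img = np.asarray(image, dtype=np.float64)
    x0 = int(np.floor(xr))            # column of pixel 1 (upper left of the 2 x 2)
    y0 = int(np.floor(yr))            # row of pixel 1
    dx, dy = xr - x0, yr - y0
    v1 = img[y0, x0]                  # pixel 1 (upper left)
    v2 = img[y0, x0 + 1]              # pixel 2 (upper right)
    v3 = img[y0 + 1, x0]              # pixel 3 (lower left)
    v4 = img[y0 + 1, x0 + 1]          # pixel 4 (lower right)
    return (v1 * (1 - dx) * (1 - dy) + v2 * dx * (1 - dy) +
            v3 * (1 - dx) * dy + v4 * dx * dy)

# Example: value at the retransformed coordinate (2.4, 5.7) of a small array
print(bilinear_value(np.arange(100.0).reshape(10, 10), 2.4, 5.7))
```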
Advantages:
• Results in output images that are smoother, without the “stair stepped” effect that is possible with nearest neighbor.
• More spatially accurate than nearest neighbor.
• This method is often used when changing the cell size of the data, such as in SPOT/TM merges within the 2 × 2 resampling matrix limit.
Disadvantages:
• Since pixels are averaged, bilinear interpolation has the effect of a low-frequency convolution. Edges are smoothed, and some extremes of the data file values are lost.
Cubic Convolution
Cubic convolution is similar to bilinear interpolation, except that:
• a set of 16 pixels, in a 4 × 4 array, are averaged to determine the output data file value, and
• a cubic function, rather than a linear function, is used to weight the 16 input values.
To identify the 16 pixels in relation to the retransformed coordinate (xr,yr), the pixel (i,j) is used, such that...
i = int (xr)
j = int (yr)
...and assuming that (xr,yr) is expressed in data file coordinates (pixels). The pixels
around (i,j) make up a 4 × 4 grid of input pixels, as illustrated in Figure 148.
[Figure 148: the 4 × 4 grid of input pixels surrounding the retransformed coordinate (xr, yr)]
Since a cubic, rather than a linear, function is used to weight the 16 input pixels, the
pixels farther from (xr,yr) have exponentially less weight than those closer to (xr,yr).
Several versions of the cubic convolution equation are used in the field. Different
equations have different effects upon the output data file values. Some convolutions
may have more of the effect of a low-frequency filter (like bilinear interpolation),
serving to average and smooth the values. Others may tend to sharpen the image, like
a high-frequency filter. The cubic convolution used in ERDAS IMAGINE is a
compromise between low-frequency and high-frequency. The general effect of the
cubic convolution will depend upon the data.
    Vr = Σ (n = 1 to 4) [ V(i - 1, j + n - 2) × f(d(i - 1, j + n - 2) + 1)
                          + V(i, j + n - 2) × f(d(i, j + n - 2))
                          + V(i + 1, j + n - 2) × f(d(i + 1, j + n - 2) - 1)
                          + V(i + 2, j + n - 2) × f(d(i + 2, j + n - 2) - 2) ]
where:
i = int (xr)
j = int (yr)
d(i,j) = the distance between a pixel with coordinates (i,j) and (xr,yr)
    f(x) = (a + 2)|x|³ - (a + 3)|x|² + 1        if |x| < 1
    f(x) = a|x|³ - 5a|x|² + 8a|x| - 4a          if 1 ≤ |x| < 2
    f(x) = 0                                    otherwise
Source: Atkinson 1985
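As an illustration of how the cubic weighting falls off with distance, the following Python sketch applies the weighting function to a 4 × 4 neighborhood in the standard separable form; the constant a = -0.5 is an assumption (the text does not state the value used by ERDAS IMAGINE), and the code is a sketch of the technique rather than a line-for-line transcription of the equation above.

def f(x, a=-0.5):
    # Cubic weighting function from the piecewise definition above; a is assumed.
    x = abs(x)
    if x < 1:
        return (a + 2) * x ** 3 - (a + 3) * x ** 2 + 1
    if x < 2:
        return a * x ** 3 - 5 * a * x ** 2 + 8 * a * x - 4 * a
    return 0.0

def cubic_convolution(v, dx, dy):
    # v[m][n] holds the data file value of pixel (i - 1 + n, j - 1 + m);
    # (dx, dy) is the sub-pixel offset of (xr, yr) from pixel (i, j).
    total = 0.0
    for m in range(4):
        for n in range(4):
            total += v[m][n] * f(n - 1 - dx) * f(m - 1 - dy)
    return total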
Advantages:
• Uses 4 × 4 resampling. In most cases, the mean and standard deviation of the output pixels match the mean and standard deviation of the input pixels more closely than any other resampling method.
• The effect of the cubic curve weighting can both sharpen the image and smooth out noise (Atkinson 1985). The actual effects will depend upon the data being used.
• This method is recommended when the user is dramatically changing the cell size of the data, such as in TM/aerial photo merges (i.e., it matches the 4 × 4 window more closely than the 2 × 2 window).

Disadvantages:
• Data values may be altered.
• The most computationally intensive resampling method, and therefore the slowest.
Map to Map Coordinate Conversions
There are many instances when the user will need to change a map that is already registered to a planar projection to another projection. Some examples of when this is required are listed below (ESRI 1992).
• When the projection used for the files in the data base does not produce the desired
properties of a map.
• When it is necessary to combine data from more than one zone of a projection, such
as UTM or State Plane.
A change in the projection is a geometric change—distances, areas, and scale are repre-
sented differently. Therefore, the conversion process requires that pixels be resampled.
Resampling causes some of the spectral integrity of the data to be lost (see the disadvan-
tages of the resampling methods explained previously). So, it is not usually wise to
resample data that have already been resampled if the accuracy of data file values is
important to the application. If the original unrectified data are available, it is usually
wiser to rectify that data to a second map projection system than to “lose a generation”
by converting rectified data and resampling it a second time.
Conversion Process
To convert the map coordinate system of any georeferenced image, ERDAS IMAGINE
provides a shortcut to the rectification process. In this procedure, GCPs are generated
automatically along the intersections of a grid that the user specifies. The program
calculates the reference coordinates for the GCPs with the appropriate conversion
formula and derives a transformation that can be used in the regular rectification process.
Vector Data
Converting the map coordinates of vector data is much easier than converting raster
data. Since vector data are stored by the coordinates of nodes, each coordinate is simply
converted using the appropriate conversion formula. There are no coordinates between
nodes to extrapolate.
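As a sketch of the idea, converting the coordinates of vector data amounts to running each node through a projection transformation. The example below uses the pyproj library purely as an illustration; it is not part of ERDAS IMAGINE, and the coordinate values are made up.

from pyproj import Transformer

# Example: convert nodes from UTM zone 16N (WGS84) to geographic lat/lon.
transformer = Transformer.from_crs("EPSG:32616", "EPSG:4326", always_xy=True)

nodes = [(740000.0, 3735000.0), (741250.0, 3736100.0)]
converted = [transformer.transform(x, y) for x, y in nodes]
print(converted)   # list of (longitude, latitude) pairs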
CHAPTER 9
Terrain Analysis
Introduction Terrain analysis involves the processing and graphic simulation of elevation data.
Terrain analysis software functions usually work with topographic data (also called
terrain data or elevation data), in which an elevation (or Z value) is recorded at each X,Y
location. Terrain analysis functions are not restricted to topographic data, however.
Any series of values, such as population densities, ground water pressure values,
magnetic and gravity measurements, and chemical concentrations, may be used.
Topographic data are essential for studies of trafficability, route design, non-point
source pollution, intervisibility, siting of recreation areas, etc. (Welch 1990). Especially
useful are products derived from topographic data. These include:
• slope images — illustrating changes in elevation over distance. Slope images are
usually color-coded according to the steepness of the terrain at each pixel.
• aspect images — illustrating the prevailing direction that the slope faces at each
pixel.
Topographic data and its derivative products have many applications, including:
• calculating the shortest and most navigable path over a mountain range for
constructing a road or routing a transmission line
Terrain data are often used as a component in complex GIS modeling or classification
routines. They can, for example, be a key to identifying wildlife habitats that are
associated with specific elevations. Slope and aspect images are often an important
factor in assessing the suitability of a site for a proposed use. Terrain data can also be
used for vegetation classification based on species that are terrain-sensitive (i.e., Alpine
vegetation).
Although this chapter mainly discusses the use of topographic data, the ERDAS IMAGINE
terrain analysis functions can be used on data types other than topographic data.
Topographic Data Topographic data are usually expressed as a series of points with X,Y, and Z values.
When topographic data are collected in the field, they are surveyed at a series of points
including the extreme high and low points of the terrain, along features of interest that
define the topography such as streams and ridge lines, and at various points in
between.
DEM (digital elevation models) and DTED (Digital Terrain Elevation Data) are
expressed as regularly spaced points. To create DEM and DTED files, a regular grid is
overlaid on the topographic contours. Elevations are read at each grid intersection
point, as shown in Figure 149.
Figure 149: A regular grid is overlaid on topographic contour lines (at 20, 30, 40, and 50), and an elevation value (e.g., 20, 22, 29, 34, 31, 39, 38, 45, 48, 41) is read at each grid intersection.
Elevation data are derived from ground surveys and through manual photogrammetric
methods. Elevation points can also be generated through digital orthographic methods.
See "CHAPTER 3: Raster and Vector Data Sources" for more details on DEM and DTED data.
See "CHAPTER 7: Photogrammetric Concepts" for more information on the digital
orthographic process.
To make topographic data usable in ERDAS IMAGINE, they must be represented as a surface,
or DEM. A DEM is a one band .img file where the value of each pixel is a specific elevation value.
A gray scale is used to differentiate variations in terrain.
DEMs can be edited with the Raster Editing capabilities of ERDAS IMAGINE. See “Chapter 1:
Raster Layers” for more information.
Slope Images
Slope is expressed as the change in elevation over a certain distance; in this case, that
distance is the size of the pixel. Slope is most often expressed as a percentage, but can
also be calculated in degrees.
• slopes between 45° and 90° are expressed as 100 - 200% slopes
Slope images are often used in road planning. For example, if the Department of Trans-
portation specifies a maximum of 15% slope on any road, it would be possible to recode
all slope values that are greater than 15% as unsuitable for road building.
A 3 × 3 pixel window is used to calculate the slope at each pixel. For a pixel at location
X,Y, the elevations around it are used to calculate the slope as shown below. A
hypothetical example is shown with the slope calculation formulas. In Figure 150, each
pixel is 30 × 30 meters.
Figure 150 shows the 3 × 3 window of elevations used in the hypothetical example (each pixel is 30 × 30 meters):

    a  b  c        10 m  20 m  25 m
    d  e  f        22 m  30 m  25 m
    g  h  i        20 m  24 m  18 m

First, the average changes in elevation in the x and y directions are calculated:

    Δx1 = c - a        Δy1 = a - g
    Δx2 = f - d        Δy2 = b - h
    Δx3 = i - g        Δy3 = c - i

    Δx = (Δx1 + Δx2 + Δx3) / (3 × xs)
    Δy = (Δy1 + Δy2 + Δy3) / (3 × ys)

where:
a...i = the elevation values of the pixels in the 3 × 3 window, as shown above
xs = the x pixel size (30 meters in this example)
ys = the y pixel size (30 meters in this example)

For the hypothetical example:

    Δx1 = 25 - 10 = 15        Δy1 = 10 - 20 = -10
    Δx2 = 25 - 22 = 3         Δy2 = 20 - 24 = -4
    Δx3 = 18 - 20 = -2        Δy3 = 25 - 18 = 7

    Δx = (15 + 3 - 2) / (30 × 3) = 0.177
    Δy = (-10 - 4 + 7) / (30 × 3) = -0.078
The slope (s) is then calculated as:

    s = √((Δx)² + (Δy)²) / 2 = 0.0967

    percent slope = s × 100             if s ≤ 1
    percent slope = 200 - (100 / s)     if s > 1

    slope in degrees = tan⁻¹(s) × (180 / π)

For this example:

    slope in degrees = tan⁻¹(0.0967) × 57.30 = 5.54
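A short Python sketch of the calculation, using the hypothetical 3 × 3 window above (the variable names follow Figure 150; using the unrounded Δx and Δy gives s ≈ 0.097, while the text's 0.0967 comes from rounding Δx and Δy first):

import math

a, b, c = 10, 20, 25          # elevations in meters, as in Figure 150
d, e, f = 22, 30, 25
g, h, i = 20, 24, 18
xs = ys = 30                  # pixel size in meters

dx = ((c - a) + (f - d) + (i - g)) / (3 * xs)
dy = ((a - g) + (b - h) + (c - i)) / (3 * ys)

s = math.hypot(dx, dy) / 2
percent = s * 100 if s <= 1 else 200 - 100 / s
degrees = math.degrees(math.atan(s))
print(round(s, 4), round(percent, 2), round(degrees, 2))   # 0.097 9.7 5.54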
Aspect Images
Aspect files are used in many of the same applications as slope files. In transportation
planning, for example, north facing slopes are often avoided. Especially in northern
climates, these would be exposed to the most severe weather and would hold snow and
ice the longest. It would be possible to recode all pixels with north facing aspects as
undesirable for road building.
As with slope calculations, aspect uses a 3x3 window around each pixel to calculate the
prevailing direction it faces. For pixel x,y with the following elevation values around it,
the average changes in elevation in both x and y directions are calculated first. Each
pixel is 30x30 meters in the following example:
    a  b  c        10 m  20 m  25 m
    d  e  f        22 m  30 m  25 m
    g  h  i        20 m  24 m  18 m

a, b, c, d, f, g, h, and i are the elevations of the pixels around it in a 3 × 3 window.
    Δx1 = c - a        Δy1 = a - g
    Δx2 = f - d        Δy2 = b - h
    Δx3 = i - g        Δy3 = c - i

    Δx = (Δx1 + Δx2 + Δx3) / 3
    Δy = (Δy1 + Δy2 + Δy3) / 3
For the example:

    Δx = (15 + 3 - 2) / 3 = 5.33
    Δy = (-10 - 4 + 7) / 3 = -2.33

If Δx = 0 and Δy = 0, then the aspect is flat (coded to 361 degrees). Otherwise, θ is calculated as:

    θ = tan⁻¹(Δx / Δy)
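A Python sketch of the aspect calculation using the same window; converting θ into a 0 to 360 degree compass bearing requires a quadrant adjustment that the text does not show, so only θ and the flat test are reproduced here.

import math

a, b, c = 10, 20, 25
d, e, f = 22, 30, 25
g, h, i = 20, 24, 18

dx = ((c - a) + (f - d) + (i - g)) / 3     # 5.33
dy = ((a - g) + (b - h) + (c - i)) / 3     # -2.33

if dx == 0 and dy == 0:
    aspect = 361.0                              # flat areas are coded to 361 degrees
else:
    aspect = math.degrees(math.atan(dx / dy))   # quadrant adjustment omitted
print(round(dx, 2), round(dy, 2), round(aspect, 2))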
Shaded Relief
It is important to note that the relief program identifies shadowed areas, i.e., those that
are not in direct sun. It does not calculate the shadow that is cast by topographic
features onto the surrounding surface.
For example, a high mountain with sunlight coming from the northwest would be
symbolized as follows in shaded relief. Only the portions of the mountain that would
be in shadow from a northwest light would be shaded. The software would not
simulate a shadow that the mountain would cast on the southeast side.
In the accompanying illustration, a mountain is shown with contour lines at 30, 40, and 50; the northwest-facing slopes are in sun, and the slopes facing away from the northwest light are shaded.
Shaded relief images are an effective graphic tool. They can also be used in analysis,
e.g., of snow melt over an area spanned by an elevation surface. A series of relief images
can be generated to simulate the movement of the sun over the landscape. Snow melt
rates can then be estimated for each pixel based on the amount of time it spends in sun
or shadow. Shaded relief images can also be used to enhance subtle detail in gray scale
images such as aeromagnetic, radar, gravity maps, etc.
Use the Shaded Relief function in Image Interpreter to generate a relief image.
In calculating relief, the software compares the user-specified sun position and angle
with the angle each pixel faces. Each pixel is assigned a value between -1 and +1 to
indicate the amount of light reflectance at that pixel.
• Positive numbers represent sunny areas, with +1 assigned to the areas of highest
reflectance.
The reflectance values are then applied to the original pixel values to get the final result.
All negative values are set to 0 or to the minimum light level specified by the user. These
indicate shadowed areas. Light reflectance in sunny areas falls within a range of values
depending on whether the pixel is directly facing the sun or not. (In the example above,
pixels facing northwest would be the brightest. Pixels facing north-northwest and west-
northwest would not be quite as bright.)
In a relief file that includes an .img file along with the elevation surface, the surface
reflectance values are multiplied by the color lookup values for the .img file.
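As an illustration of the reflectance idea (not the exact ERDAS IMAGINE shaded relief algorithm), the following Python sketch computes a conventional hillshade: the cosine of the angle between the sun direction and the surface normal, which ranges from -1 to +1 and is clipped at the minimum light level. The parameter names are illustrative.

import numpy as np

def hillshade(dem, cellsize, sun_azimuth=315.0, sun_elevation=45.0, min_light=0.0):
    az = np.radians(sun_azimuth)
    alt = np.radians(sun_elevation)
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)          # sign conventions for aspect vary
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, min_light, 1.0)       # negative (shadowed) values clipped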
Topographic Normalization
The topographic effect on recorded image brightness depends on factors such as:
• incident illumination — the orientation of the surface with respect to the rays of the
sun
• exitance angle — the amount of reflected energy as a function of the slope angle
• surface cover characteristics — rugged terrain with high mountains or steep slopes
(Hodgson and Shelley 1993)
The Topographic Normalize function in Image Interpreter uses a Lambertian Reflectance model
to normalize topographic effect in VIS/IR imagery.
• DEM file
Lambertian Reflectance Model
The Lambertian Reflectance model assumes that the surface reflects incident solar
energy uniformly in all directions, and that variations in reflectance are due to the
amount of incident radiation.
The following equation, the standard cosine correction, produces normalized brightness values (Colby 1991, Smith et al 1980):

    BVnormal λ = BVobserved λ / cos i

where:
BVnormal λ = normalized brightness values
BVobserved λ = observed brightness values
i = the incidence angle, defined below
Incidence Angle
The incidence angle is defined from:
cos i = cos (90 - θs) cos θn + sin (90 - θs) sin θn cos (φs - φn)
where:
i= the angle between the solar rays and the normal to the surface
θs= the elevation of the sun
φs= the azimuth of the sun
θn= the slope of each surface element
φn= the aspect of each surface element
If the surface has a slope of 0 degrees, then aspect is undefined and i is simply
90 - θs.
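A Python sketch of the incidence angle formula above and of the cosine correction; the function names and the small cut-off value used to avoid division by near-zero cosines are illustrative assumptions.

import numpy as np

def cos_incidence(sun_elev, sun_azimuth, slope, aspect):
    # All angles in degrees; implements cos i from the equation above.
    te = np.radians(90.0 - sun_elev)             # 90 - theta_s
    tn = np.radians(slope)                       # theta_n
    dphi = np.radians(sun_azimuth - aspect)      # phi_s - phi_n
    return np.cos(te) * np.cos(tn) + np.sin(te) * np.sin(tn) * np.cos(dphi)

def lambertian_normalize(bv_observed, cos_i):
    cos_i = np.maximum(cos_i, 1e-6)              # avoid division by values near zero
    return bv_observed / cos_i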
Non-Lambertian Model
Minnaert (1961) proposed that the observed surface does not reflect incident solar
energy uniformly in all directions. Instead, he formulated the Non-Lambertian model,
which takes into account variations in the terrain. This model, although more
computationally demanding than the Lambertian model, may present more accurate results.
where:
Minnaert Constant
The Minnaert constant (k) may be found by regressing a set of observed brightness
values from the remotely sensed imagery with known slope and aspect values,
provided that all the observations in this set are the same type of land cover. The k value
is the slope of the regression line (Hodgson and Shelley 1993):
Use the Spatial Modeler to create a model based on the Non-Lambertian Model.
NOTE: The Non-Lambertian model does not detect surfaces that are shadowed by intervening
topographic features between each pixel and the sun. For these areas, a line-of-sight algorithm
will identify such shadowed pixels.
CHAPTER 10
Geographic Information Systems
Introduction The beginnings of geographic information systems (GIS) can legitimately be traced
back to the beginnings of man. The earliest known map dates back to 2500 B.C., but there
were probably maps before that. Since then, man has been continually improving the
methods of conveying spatial information. The late eighteenth century brought the use
of map overlays to show troop movements in the Revolutionary War. This could be
considered an early GIS. The first British census in 1825 led to the science of demog-
raphy, another application for GIS. During the 1800s, many different cartographers and
scientists were discovering the power of overlays to convey multiple levels of information about an area (Star and Estes 1990).
Frederick Law Olmsted has long been considered the father of Landscape Architecture
for his pioneering work in the late 19th century. Many of the methods Olmsted used
in Landscape Architecture also involved the use of hand-drawn overlays. This type of
analysis was beginning to be used for a much wider range of applications, such as
change detection, urban planning, and resource management (Rado 1992).
The first system to be called a GIS was the Canadian Geographic Information System,
developed in 1962 by Roger Tomlinson of the Canada Land Inventory. Unlike earlier
systems that were developed for a specific application, this system was designed to
store digitized map data and land-based attributes in an easily accessible format for all
of Canada. This system is still in operation today (Parent and Church 1987).
In 1969, Ian McHarg’s influential work, Design with Nature, was published. This work
on land suitability/capability analysis (SCA), a system designed to analyze many data
layers to produce a plan map, discussed the use of overlays of spatially referenced data
layers for resource planning and management (Star and Estes 1990).
The era of modern GIS really started in the 1970s, as analysts began to program
computers to automate some of the manual processes. Software companies like ESRI
(Redlands, CA) and ERDAS developed software packages that could input, display,
and manipulate geographic data to create new layers of information. The steady
advances in features and power of the hardware over the last ten years and the decrease
in hardware costs have made GIS technology accessible to a wide range of users. The
growth rate of the GIS industry in the last several years has exceeded even the most
optimistic projections.
The central purpose of a GIS is to turn geographic data into useful information—the
answers to real-life questions—questions such as:
• How will we monitor the influence of global climatic changes on the earth’s
resources?
• Where is the best place for a shopping center that will be most convenient to
shoppers and least harmful to the local ecology?
• “The land cover at coordinate N875250, E757261 has a data file value 8,” is data.
• “Land cover with a value of 8 is on slopes too steep for development,” is
information.
The user can input data into a GIS and output information. The information the user
wishes to derive determines the type of data that must be input. For example, if one is
looking for a suitable refuge for bald eagles, zip code data is probably not needed, while
land cover data may be useful.
For this reason, the first step in any GIS project is usually an assessment of the scope
and goals of the study. Once the project is defined, the user can begin the process of
building the data base. Although software and data are commercially available, a
custom data base must be created for the particular project and study area. It must be
designed to meet the needs of the organization and objectives. ERDAS IMAGINE
provides all the tools required to build and manipulate a GIS data base.
• data input
• analysis
Data input involves collecting the necessary data layers into the image data base. In the
analysis phase, these data layers will be combined and manipulated in order to create
new layers and to extract meaningful information from them. This chapter discusses
these steps in detail.
Data Input Acquiring the appropriate data for a project involves creating a data base of layers that
encompass the study area. A data base created with ERDAS IMAGINE can consist of:
Landsat TM Roads
SPOT panchromatic Census data
Aerial photograph Ownership parcels
Soils data Political boundaries
Land cover Landmarks
Raster data may be better suited for these applications:
• site selection
• petroleum exploration
• mission planning
• change detection
On the other hand, vector data may be better suited for these applications:
• urban planning
• traffic engineering
• facilities management
The advantage of an integrated raster and vector system such as ERDAS IMAGINE is
that one data structure does not have to be chosen over the other. Both data formats can
be used and the functions of both types of systems can be accessed. Depending upon
the project, only raster or vector data may be needed, but most applications benefit from
using both.
A single theme may require more than a simple raster or vector file to fully describe it.
In addition to the image, there may be attribute data that describe the information, a
color scheme, or meaningful annotation for the image. The full collection of data that
describe a certain theme is called a layer.
Depending upon the goals of a project, it may be helpful to combine several themes into
one layer. For example, if a user wanted to propose a new park site, he or she might
create one layer that shows roads, land cover, land ownership, slope, etc., and indicate
through the use of colors and/or annotation which areas would be best for the new site.
This one layer would then include many separate themes. Much of GIS analysis is
concerned with combining individual themes into one or more layers that answer the
questions driving the analysis. This chapter explores these analysis techniques.
Satellite images, aerial photographs, elevation data, scanned maps, and other
continuous raster layers can be incorporated into a data base and provide a wealth of
information that is not available in thematic layers or vector layers. In fact, these layers
often form the foundation of the data base. Extremely accurate base maps can be created
from rectified satellite images or aerial photographs. Then, all other layers that are
added to the data base can be registered to this base map.
Once used only for image processing, continuous data are now being incorporated into
GIS data bases and used in combination with thematic data to influence processing
algorithms or as backdrop imagery on which to display the results of analyses. Current
satellite data and aerial photographs are also effective in updating outdated vector data.
The vectors can be overlaid on the raster backdrop and updated dynamically to reflect
new or changed features, such as roads, utility lines, or land use. This chapter will
explore the many uses of continuous data in a GIS.
Thematic Layers Thematic data are typically represented as single layers of information stored as .img
files and containing discrete classes. Classes are simply categories of pixels which
represent the same condition. An example of a thematic layer is a vegetation classifi-
cation with discrete classes representing coniferous forest, deciduous forest, wetlands,
agriculture, urban, etc.
• Nominal classes represent categories with no particular order. Usually, these are
characteristics that are not associated with quantities (e.g., soil type or political
area).
• Ordinal classes are those that have a sequence, such as “poor,” “good,” “better,”
and “best.” An ordinal class numbering system is often created from a nominal
system, in which classes have been ranked by some criteria. In the case of the
recreation department data base used in the previous example, the final layer may
rank the proposed park sites according to their overall suitability.
• Interval classes also have a natural sequence, but the distance between each value
is meaningful as well. This numbering system might be used for temperature data.
• Ratio classes differ from interval classes only in that ratio classes have a natural
zero point, such as rainfall amounts.
The variable being analyzed and the way that it contributes to the final product deter-
mines the class numbering system used in the thematic layers. Layers that have one
numbering system can easily be recoded to a new system. This is discussed in detail
under "Recoding" on page 378.
Classification
Thematic layers can be generated from remotely sensed data (e.g., Landsat TM, SPOT)
by using the ERDAS IMAGINE Image Interpreter, Classification, and Spatial Modeler
tools. A frequent and popular application is the creation of land cover classification
schemes through the use of both supervised (user-assisted) and unsupervised
(automatic) pattern-recognition algorithms contained within ERDAS IMAGINE. The
output is a single thematic layer which represents specific classes based on the
approach selected.
Use the Vector Utilities menu from the Vector icon in the IMAGINE icon panel to convert
vector layers to raster format.
Statistics Both continuous and thematic layers include statistical information. Thematic layers
contain the following information:
• a histogram of the data values, which is the total number of pixels in each class
• a color table, stored as brightness values in red, green, and blue, which make up the
colors of each class when the layer is displayed
For thematic data, these statistics are called attributes and may be accompanied by
many other types of information, as described below.
Use the Image Information option in the ERDAS IMAGINE icon panel to generate or update
statistics for .img files.
See "CHAPTER 1: Raster Data" for more information about the statistics stored with
continuous layers.
Vector Layers The vector layers used in ERDAS IMAGINE are based on the ARC/INFO data model
and consist of points, lines, and polygons. These layers are topologically complete,
meaning that the spatial relationships between features are maintained. Vector layers
can be used to represent transportation routes, utility corridors, communication lines,
tax parcels, school zones, voting districts, landmarks, population density, etc. Vector
layers can be analyzed independently or in combination with continuous and thematic
raster layers.
Vector data can be acquired from several private and governmental agencies. Vector
data can also be created in ERDAS IMAGINE by digitizing on the screen, using a
digitizing tablet, or converting other data types to vector format.
See "CHAPTER 2: Vector Layers" for more information on the characteristics of vector data.
Attributes Text and numerical data that are associated with the classes of a thematic layer or
the features in a vector layer are called attributes. This information can take the form of
character strings, integer numbers, or floating point numbers. Attributes work much
like the data that are handled by data base management software. The user may define
fields, which are categories of information about each class. A record is the set of all
attribute data for one class. Each record is like an index card, containing information
about one class or feature in a file of many index cards, which contain similar infor-
mation for the other classes or features.
Attribute information for raster layers is stored in the image (.img) file. Vector attribute
information is stored in an INFO file. In both cases, there are fields that are automati-
cally generated by the software, but more fields can be added as needed to fully
describe the data. Both are viewed in ERDAS IMAGINE CellArrays, which allow the
user to display and manipulate the information. However, raster and vector attributes
are handled slightly differently, so a separate section on each follows.
Raster Attributes In ERDAS IMAGINE, raster attributes for .img files are accessible from the Raster
Attribute Editor. The Raster Attribute Editor contains a CellArray, which is similar to a
table or spreadsheet that not only displays the information, but includes options for
importing, exporting, copying, editing, and other operations.
Figure 154 shows the attributes for a land cover classification layer.
• Class Name
• Class Value
• Opacity percentage
As many additional attribute fields as needed can be defined for each class.
See "CHAPTER 6: Classification" for more information about the attribute information that is
automatically generated when new thematic layers are created in the classification process.
• cut, copy, and paste individual cells, rows, or columns to and from the same Raster
Attribute Editor or among several Raster Attribute Editors
• generate reports that include all or a subset of the information in the Raster
Attribute Editor
The Raster Attribute Editor in ERDAS IMAGINE also includes a color cell column, so
that class (object) colors can be viewed or changed. In addition to direct user manipu-
lation, attributes can be changed by other programs. For example, some of the Image
Interpreter functions calculate statistics that are automatically added to the Raster
Attribute Editor. Models that read and/or modify attribute information can also be
written.
See "CHAPTER 5: Enhancement" for more information on the Image Interpreter. There is more
information on GIS modeling, starting on page 383.
• label features
See "CHAPTER 2: Vector Layers" for more information about vector attributes.
Analysis
ERDAS IMAGINE Analysis Tools
In ERDAS IMAGINE, GIS analysis functions and algorithms are accessible through
three main tools:
Model Maker
Model Maker is essentially the Spatial Modeler Language linked to a graphical
interface. This enables the user to create graphical models using a palette of easy-to-use
tools. Graphical models can be run, edited, saved in libraries, or converted to script
form and edited further, using the Spatial Modeler Language.
NOTE: References to the Spatial Modeler in this chapter mean that the named procedure can be
accomplished using both Model Maker and the Spatial Modeler Language.
Image Interpreter
The Image Interpreter houses a set of common functions that were all created using
either Model Maker or the Spatial Modeler Language. They have been given a dialog
interface to match the other processes in ERDAS IMAGINE. In most cases, these
processes can be run from a single dialog. However, the actual models are also
provided with the software to enable customized processing.
Many of the functions described in the following sections can be accomplished using
any of these tools. Model Maker is also easy to use and requires many of the same steps
that would be performed when drawing a flow chart of an analysis. The Spatial
Modeler Language is intended for more advanced analyses, and has been designed
using natural language commands and simple syntax rules. Some applications may
require a combination of these tools.
The ERDAS Macro Language and the C Programmers’ Toolkit are part of the ERDAS
IMAGINE Developers’ Toolkit.
Analysis Procedures Once the data base (layers and attribute data) is assembled, the layers can be analyzed
and new information extracted. Some information can be extracted simply by looking
at the layers and visually comparing them to other layers. However, new information
can be retrieved by combining and comparing layers using the procedures outlined
below:
• Contiguity analysis — enables the user to identify regions of pixels in the same
class and to filter out small regions.
• Recoding — enables the user to assign new class values to all or a subset of the
classes in a layer.
• Overlaying — creates a new file with either the maximum or minimum value of the
input layers.
• Script modeling — offers all of the capabilities of graphical modeling with the
ability to perform more complex functions, such as conditional looping.
Proximity Analysis Many applications require some measurement of distance, or proximity. For example,
a real estate developer would be concerned with the distance between a potential site
for a shopping center and an interchange to a major highway.
Figure 156 shows a layer containing lakes and streams and the resulting layer after a
proximity analysis is run to create a buffer zone around all of the water features.
Figure 156: Proximity Analysis
Use the Search (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform a
proximity analysis.
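As a sketch of what a raster proximity analysis does (an illustration of the idea only, not the Search function itself), the following Python code builds a buffer zone around a water mask using a distance transform; the function name and output class coding are assumptions.

import numpy as np
from scipy.ndimage import distance_transform_edt

def buffer_zone(water_mask, pixel_size, buffer_distance):
    # Output coding: 0 = background, 1 = buffer zone, 2 = water.
    dist = distance_transform_edt(~water_mask) * pixel_size
    out = np.zeros(water_mask.shape, dtype=np.uint8)
    out[(dist > 0) & (dist <= buffer_distance)] = 1
    out[water_mask] = 2
    return out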
Contiguity Analysis
Contiguity analysis identifies regions of contiguous pixels in the same class, called raster regions or clumps. Among other uses, this makes it possible to:
• eliminate raster regions that are too small to be considered for an application.
Filtering Clumps
In cases where very small clumps are not useful, they can be filtered out according to
their sizes. This is sometimes referred to as eliminating the “salt and pepper” effects, or
“sieving.” In Figure 157, all of the small clumps in the original (clumped) layer are
eliminated.
Use the Clump and Sieve (GIS Analysis) function in Image Interpreter or Spatial Modeler to
perform contiguity analysis.
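The following Python sketch illustrates the clump-then-sieve idea (it is not the Clump and Sieve function itself): contiguous pixels of a class are numbered, and clumps smaller than a minimum size are set to zero.

import numpy as np
from scipy import ndimage

def clump_and_sieve(class_mask, min_pixels):
    labels, _ = ndimage.label(class_mask)        # clump: number each contiguous region
    sizes = np.bincount(labels.ravel())          # pixels per clump (index 0 = background)
    keep = sizes >= min_pixels
    keep[0] = False
    return np.where(keep[labels], labels, 0)     # sieve: drop the small clumps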
Neighborhood Analysis
With a process similar to the convolution filtering of continuous raster layers, thematic
raster layers can also be filtered. The GIS filtering process is sometimes referred to as
“scanning,” but is not to be confused with data capture via a digital camera.
Neighborhood analysis is based on local or neighborhood characteristics of the data
(Star and Estes 1990).
Every pixel is analyzed spatially, according to the pixels that surround it. The number
and the location of the surrounding pixels is determined by a scanning window, which
is defined by the user. These operations are known as focal operations. The scanning
window can be:
• rectangular, up to 512 × 512 pixels, with the option to mask out certain pixels
Use the Neighborhood (GIS Analysis) function in Image Interpreter or Spatial Modeler to
perform neighborhood analysis. The scanning window used in Image Interpreter can be
3 × 3, 5 × 5, or 7 × 7. The scanning window in Model Maker is user-defined and can be up to
512 × 512.
• Specify a rectangular portion of the file to scan. The output layer will contain only
the specified area.
• Specify a class or classes in another thematic layer to be used as a mask. The pixels
in the scanned layer that correspond to the pixels of the selected class or classes in
the mask layer will be scanned, while the other pixels will remain the same.
In Figure 158, class 2 in the mask layer was selected for the mask. Only the corre-
sponding (shaded) pixels in the target layer will be scanned—the other values will
remain unchanged.
Neighborhood analysis creates a new thematic layer. There are several types of analysis
that can be performed upon each window of pixels, as described below:
• Boundary — detects boundaries between classes. The output layer contains only
boundary pixels. This is useful for creating boundary or edge lines from classes,
such as a land/water interface.
• Density — outputs the number of pixels that have the same class value as the center
(analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the
analyzed pixel. This is often useful in assessing vegetation crown closure.
• Diversity — outputs the number of class values that are present within the
window. Diversity is also a measure of heterogeneity (difference).
• Majority — outputs the class value that represents the majority of the class values
in the window. The value is user-defined. This option operates like a low-frequency
filter to clean up a “salt and pepper” layer.
• Maximum — outputs the greatest class value within the window. This can be used
to emphasize classes with the higher class values or to eliminate linear features or
boundaries.
• Mean — averages the class values. If class values represent quantitative data, then
this option can work like a convolution filter. This is mostly used on ordinal or
interval data.
• Median — outputs the statistical median of the class values in the window. This
option may be useful if class values represent quantitative data.
• Minimum — outputs the least or smallest class value within the window. The
value is user-defined. This can be used to emphasize classes with the low class
values.
• Minority — outputs the least common of the class values that are within the
window. This option can be used to identify the least common classes. It can also
be used to highlight disconnected linear features.
• Rank — outputs the number of pixels in the scan window whose value is less than
the center pixel.
• Sum — totals the class values. In a file where class values are ranked, totaling
enables the user to further rank pixels based on their proximity to high-ranking
pixels.
For example, applying the sum operation with a 3 × 3 window to the pixel in the third
row and third column of this input layer:

    2  8  6  6  6
    2  8  6  6  6
    2  2  8  6  6
    2  2  2  8  6
    2  2  2  2  8

outputs 48 for that pixel, since 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48.
The analyzed pixel is always the center pixel of the scanning window. In this example, only the
pixel in the third column and third row of the file is “summed.”
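A Python sketch of that single focal operation, reproducing the 48 for the pixel in the third row and third column (the function name and indexing convention are illustrative):

import numpy as np

def focal_sum(layer, row, col, size=3):
    half = size // 2
    window = layer[row - half:row + half + 1, col - half:col + half + 1]
    return int(window.sum())

layer = np.array([[2, 8, 6, 6, 6],
                  [2, 8, 6, 6, 6],
                  [2, 2, 8, 6, 6],
                  [2, 2, 2, 8, 6],
                  [2, 2, 2, 2, 8]])
print(focal_sum(layer, row=2, col=2))   # 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48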
Recoding
Class values can be recoded, that is, assigned new values. Among other things, recoding can be used to:
• combine classes
When an ordinal, ratio, or interval class numbering system is used, recoding can be
used to assign classes to appropriate values. Recoding is often performed to make later
steps easier. For example, in creating a model that will output “good,” “better,” and
“best” areas, it may be beneficial to recode the input layers so all of the “best” classes
have the highest class values.
In the following example (Table 21), a land cover layer is recoded so that the most
environmentally sensitive areas (Riparian and Wetlands) have higher class values.
Table 21: Example of a recoded land cover layer

Original Value    New Value    Class Name
0                 0            Background
1                 4            Riparian
3                 1            Chaparral
4                 4            Wetlands
5                 1            Emergent Vegetation
6                 1            Water
Use the Recode (GIS Analysis) function in Image Interpreter or Spatial Modeler to recode layers.
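A Python sketch of recoding with a lookup table built from Table 21; classes not listed keep their original values, and the helper name is illustrative. The layer is assumed to hold integer class values.

import numpy as np

recode_table = {0: 0,   # Background
                1: 4,   # Riparian
                3: 1,   # Chaparral
                4: 4,   # Wetlands
                5: 1,   # Emergent Vegetation
                6: 1}   # Water

def recode(layer, table):
    lut = np.arange(int(layer.max()) + 1)        # identity for unlisted classes
    for old, new in table.items():
        lut[old] = new
    return lut[layer]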
Overlaying Thematic data layers can be overlaid to create a composite layer. The output layer
contains either the minimum or the maximum class values of the input layers. For
example, if an area was in class 5 in one layer, and in class 3 in another, and the
maximum class value dominated, then the same area would be coded to class 5 in the
output layer, as shown in Figure 160.
Figure 160: Overlay. A recoded slope layer (9 = steep slopes) is combined with a land use layer (1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands); in the composite, the steep slope value masks the underlying land use wherever it occurs.
The application example in Figure 160 shows the result of combining two layers—slope
and land use. The slope layer is first recoded to combine all steep slopes into one value.
When overlaid with the land use layer, the highest data file values (the steep slopes)
dominate in the output layer.
Use the Overlay (GIS Analysis) function in Image Interpreter or Spatial Modeler to overlay
layers.
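A Python sketch of a maximum overlay in the spirit of Figure 160; the small sample arrays are made up for illustration.

import numpy as np

land_use = np.array([[1, 2, 3],
                     [4, 5, 3],
                     [2, 2, 1]])
slope = np.array([[0, 9, 0],                 # 9 = steep slopes (recoded), 0 elsewhere
                  [0, 9, 9],
                  [0, 0, 0]])

composite = np.maximum(land_use, slope)      # the higher class value dominates
print(composite)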
Indexing
The application example in Figure 161 shows the result of indexing. In this example, the
user wants to develop a new subdivision, and the most likely sites are where there is
the best combination (highest value) of good soils, good slope, and good access. Since
good slope is a more critical factor to the user than good soils or good access, a
weighting factor is applied to the slope layer. A weighting factor has the effect of multi-
plying all input values by some constant. In this example, slope is given a weight of 2.
Use the Index (GIS Analysis) function in the Image Interpreter or Spatial Modeler to index
layers.
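A Python sketch of indexing with a weighting factor, in which slope counts twice as much as soils or access; the sample ratings are made up.

import numpy as np

soils = np.array([[1, 2], [3, 1]])           # higher value = better rating
slope = np.array([[2, 3], [1, 2]])
access = np.array([[3, 1], [2, 2]])

index = soils + 2 * slope + access           # weighting factor of 2 applied to slope
print(index)                                 # the highest values mark the best sites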
Matrix Analysis
Matrix analysis produces a thematic layer that contains a separate class for every coinci-
dence of classes in two layers. The output is best described with a matrix diagram.
                               input layer 2 data values (columns)
                                0    1    2    3    4    5
  input layer 1           0    0    0    0    0    0    0
  data values             1    0    1    2    3    4    5
  (rows)                  2    0    6    7    8    9   10
                          3    0   11   12   13   14   15
In this diagram, the classes of the two input layers represent the rows and columns of
the matrix. The output classes are assigned according to the coincidence of any two
input classes.
All combinations of 0 and any other class are coded to 0, because 0 is usually the background
class, representing an area that is not being studied.
Unlike overlaying or indexing, the resulting class values of a matrix operation are
unique for each coincidence of two input class values. In this example, the output class
value at column 1, row 3 is 11, and the output class at column 3, row 1 is 3. If these files
were indexed (summed) instead of matrixed, both combinations would be coded to
class 4.
Use the Matrix (GIS Analysis) function in Image Interpreter or Spatial Modeler to matrix
layers.
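A Python sketch that reproduces the matrix diagram above (rows are layer 1 values 0 to 3, columns are layer 2 values 0 to 5); the function name is illustrative.

import numpy as np

def matrix_analysis(layer1, layer2, n_cols=5):
    # n_cols is the number of nonzero classes in layer 2 (5 in the diagram).
    out = (layer1 - 1) * n_cols + layer2
    out[(layer1 == 0) | (layer2 == 0)] = 0   # any coincidence with 0 stays 0
    return out

l1 = np.array([[3, 1], [2, 0]])
l2 = np.array([[1, 3], [5, 4]])
print(matrix_analysis(l1, l2))               # [[11  3] [10  0]]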
Modeling
Modeling combines data layers according to a set of criteria in order to answer the questions driving an analysis. For example, if a user wants to find the best areas for a bird sanctuary, taking into
account vegetation, availability of water, climate, and distance from highly developed
areas, he or she would create a thematic layer for each of these criteria. Then, each of
these layers would be input to a model. The modeling process would create one
thematic layer, showing only the best areas for the sanctuary.
The set of procedures that define the criteria is called a model. In ERDAS IMAGINE,
models can be created graphically and resemble a flow chart of steps, or they can be
created using a script language. Although these two types of models look different, they
are essentially the same—input files are defined, functions and/or operators are
specified, and outputs are defined. The model is run and a new output layer(s) is
created. Models can utilize analysis functions that have been previously defined, or
new functions can be created by the user.
Use the Model Maker function in Spatial Modeler to create graphical models and the Spatial
Modeler Language to create script models.
Data Layers
In modeling, the concept of layers is especially important. Before computers were used
for modeling, the most widely used approach was to overlay registered maps on paper
or transparencies, with each map corresponding to a separate theme. Today, digital
files replace these hardcopy layers and allow much more flexibility for recoloring,
recoding, and reproducing geographical information (Steinitz, Parker, and Jordan
1976).
In a model, the corresponding pixels at the same coordinates in all input layers are
addressed as if they were physically overlaid like hardcopy maps.
Graphical Modeling Graphical modeling enables the user to “draw” models using a palette of tools that
defines inputs, functions, and outputs. This type of modeling is very similar to drawing
flowcharts, in that the user identifies a logical flow of steps needed to perform the
desired action. Through the extensive functions and operators available in the ERDAS
IMAGINE graphical modeling program, the user can analyze many layers of data in
very few steps, without creating intermediate files that occupy extra disk space.
Modeling is performed using a graphical editor that eliminates the need to learn a
programming language. Complex models can be developed easily and then quickly
edited and re-run on another data set.
Use the Model Maker function in Spatial Modeler to create graphical models.
For example, suppose there is a need to assess the environmental sensitivity of an area
for development. An output layer can be created that ranks most to least sensitive
regions based on several factors, such as slope, land cover, and floodplain. To visualize
the location of these areas, the output thematic layer can be overlaid onto a high
resolution, continuous raster layer (e.g., SPOT panchromatic) that has had a convo-
lution filter applied. All of this can be accomplished in a single model (as shown in
Figure 162).
See the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating the
environmental sensitivity model in Figure 162. Descriptions of all of the graphical models
delivered with ERDAS IMAGINE are available in the On-Line Help.
Model Structure
A model created with Model Maker is essentially a flow chart that defines the input
data, the function(s) to be performed, and the output(s) to be created.
The graphical models created in Model Maker all have the same basic structure: input,
function, output. The number of inputs, functions, and outputs can vary, but the overall
form remains constant. All components must be connected to one another before the
model can be executed. The model on the left in Figure 163 is the most basic form. The
model on the right is more complex, but it retains the same input/function/output
flow.
Figure 163: Graphical model structures. The model on the left is the most basic input/function/output form; the model on the right chains several inputs and functions but keeps the same flow.
Graphical models are stored in ASCII files with the .gmd extension. There are several
sample graphical models delivered with ERDAS IMAGINE that can be used as is or
edited for more customized processing.
Category            Description
Color               Manipulate colors to and from RGB (red, green, blue) and IHS (intensity, hue, saturation).
Conditional         Run logical tests using conditional statements and either...if...or...otherwise.
Data Generation     Create raster layers from map coordinates, column numbers, or row numbers. Create a matrix or table from a list of scalars.
See the ERDAS IMAGINE Tour Guides manual and the on-line Spatial Modeler Language
manual for complete instructions on using Model Maker and more detailed information about
the available functions and operators.
Objects Within Model Maker, an object is an input to or output from a function. The four basic
object types used in Model Maker are:
• raster
• scalar
• matrix
• table
Raster
A raster object is a single layer or set of layers. Rasters are typically used to specify and
manipulate data from image (.img) files.
Scalar
A scalar object is a single numeric value. Scalars are often used as weighting factors.
Matrix
A matrix object is a set of numbers arranged in a two-dimensional array. A matrix has
a fixed number of rows and columns. Matrices may be used to store convolution kernels
or the neighborhood definition used in neighborhood functions. They can also be used
to store covariance matrices, eigenvector matrices, or matrices of linear combination
coefficients.
Table
A table object is a series of numeric values or character strings. A table has one column
and a fixed number of rows. Tables are typically used to store columns from the Raster
Attribute Editor or a list of values which pertains to the individual layers of a set of
layers. For example, a table with four rows could be used to store the maximum value
from each layer of a four layer image file. A table may consist of up to 32,767 rows.
Information in the table can be attributes, calculated (e.g., histograms), or user-defined.
The graphics used in Model Maker to represent each of these objects are shown in
Figure 164.
Figure 164: The graphics used in Model Maker to represent raster, scalar, matrix, and table objects.
Data Types The four object types described above may be any of the following data types:
Input and output data types do not have to be the same. Using the Spatial Modeler
Language, the user can change the data type of input files before they are processed.
Output Parameters Since it is possible to have several inputs in one model, one can optionally define the
working window and the pixel cell size of the output data.
Working Window
Raster layers of differing areas can be input into one model. However, the image area,
or working window, must be specified in order to be used in the model calculations. Either
of the following options can be selected:
• Union — the model will operate on the union of all input rasters. (This is the
default.)
• Intersection — the model will use only the area of the rasters that is common to all
input rasters.
Input layers must be referenced to the same coordinate system (i.e., Lat/Lon, UTM, State Plane,
etc.).
Pixel Cell Size
Input raster layers can also have differing cell sizes; the cell size of the output data can be set to either of the following:
• Minimum — the minimum cell size of the input layers will be used (this is the
default setting).
• Maximum — the maximum cell size of the input layers will be used.
Using Attributes in Models
With the criteria function in Model Maker, attribute data can be used to determine
output values. The criteria function simplifies the process of creating a conditional
statement. The criteria function can be used to build a table of conditions that must be
satisfied to output a particular row value for an attribute (or cell value) associated with
the selected raster.
The inputs to a criteria function are rasters. The columns of the criteria table represent
either attributes associated with a raster layer or the layer itself, if the cell values are of
direct interest. Criteria which must be met for each output column are entered in a cell
in that column (e.g., >5). Multiple sets of criteria may be entered in multiple rows. The
output raster will contain the first row number of a set of criteria that were met for a
raster cell.
Example
For example, consider the sample thematic layer, parks.img, that contains the following
attribute information:
Class Name         Histogram    Acres     Path Condition    Turf Condition    Car Spaces
Grant Park         2456         403.45    Fair              Good              127
Piedmont Park      5167         547.88    Good              Fair              94
Candler Park       763          128.90    Excellent         Excellent         65
Springdale Park    548          46.33     None              Excellent         0
A simple model could create one output layer that showed only the parks in need of
repairs. The following logic would be coded into the model:
“If Turf Condition is not Good or Excellent, and if Path Condition is not Good or
Excellent, then the output class value is 1. Otherwise, the output class value is 2.”
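A Python sketch of that logic applied to the attribute rows above; in Model Maker the same test would be entered as rows of a criteria table rather than code, and the dictionary keys are illustrative.

parks = [
    {"name": "Grant Park",      "path": "Fair",      "turf": "Good"},
    {"name": "Piedmont Park",   "path": "Good",      "turf": "Fair"},
    {"name": "Candler Park",    "path": "Excellent", "turf": "Excellent"},
    {"name": "Springdale Park", "path": "None",      "turf": "Excellent"},
]

acceptable = {"Good", "Excellent"}
for park in parks:
    needs_repair = (park["turf"] not in acceptable and
                    park["path"] not in acceptable)
    output_class = 1 if needs_repair else 2
    print(park["name"], output_class)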
More than one input layer could also be used. For example, a model could be created,
using the input layers parks.img and soils.img, which would show the soil types for
parks with Fair or Poor turf condition. Attributes can be used from every input file.
If a user had a land cover file and wanted to create a file of pine forests larger than 10
acres, the criteria function could be used to output values only for areas that satisfied
the conditions of being both pine forest and larger than 10 acres. The output file would
have two classes: pine forests larger than 10 acres and background. If the user wanted
the output file to show varying sizes of pine forest, he or she would simply add more
conditions to the criteria table.
See the ERDAS IMAGINE Tour Guides manual or the On-Line Help for specific instructions
on using the criteria function.
Script Modeling
The Spatial Modeler Language is a script language used internally by Model Maker to
execute the operations specified in the graphical models that are created. The Spatial
Modeler Language can also be used to write user-created models directly. It includes
all of the functions available in Model Maker, plus additional data types and flow
control structures (described below).
Graphical models created with Model Maker can be output to a script file (text only) in
the Spatial Modeler Language. These scripts can then be edited with a text editor using
the Spatial Modeler Language syntax and re-run or saved in a library. Script models can
also be written from scratch in the text editor. They are stored in ASCII .mdl files.
The Text Editor is available from the ERDAS IMAGINE icon panel and from the Script Library
(Spatial Modeler).
In Figure 165, both the graphical and script models are shown for a tasseled cap trans-
formation. Notice how even the annotation on the graphical model is included in the
automatically generated script model. Generating script models from graphical models
may aid in learning the Spatial Modeler Language.
Figure 165: Graphical and Script Models for Tasseled Cap Transformation
Convert graphical models to scripts using Model Maker. Open existing script models from the
Script Librarian (Spatial Modeler).
• Show and View — enables the user to see and interpret results from the model
• Set — defines the scope of the model or establishes default values used by the
Modeler
The Spatial Modeler Language also includes flow control structures, so that the user can
utilize conditional branching and looping in the models and statement block structures,
which cause a set of statements to be executed as a group.
Declaration Example
In the script model in Figure 165, the following lines form the declaration portion of the
model:
Set Example
The following set statements are used:
Assignment Example
The following assignment statements are used:
n2_Custom_Matrix = MATRIX(3, 7:
Data Types In addition to the data types utilized by Graphical Modeling, script model objects can
store data in the following data types:
• Color — three floating point numbers in the range of 0.0 to 1.0, representing
intensity of red, green, and blue
Variables Variables are objects in the Modeler which have been associated with a name using a
declaration statement. The declaration statement defines the data type and object type
of the variable. The declaration may also associate a raster variable with certain layers
of an image file or a table variable with an attribute table. Assignment statements are
used to set or change the value of a variable.
For script model syntax rules, descriptions of all available functions and operators, and sample
models, see the on-line Spatial Modeler Language manual.
Vector layers can also be used to indicate an area of interest (AOI) for further
processing. Assume the user wants to run a site suitability model on only areas desig-
nated for commercial development in the zoning ordinances. By selecting these zones
in a vector polygon layer, the user could restrict the model to only those areas in the
raster input files.
Editing Vector Coverages
Editable features are polygons (as lines), lines, label points, and nodes. There can be
multiple features selected with a mixture of any and all feature types. Editing operations and commands can be performed on multiple or single selections. In addition to
the basic editing operations (e.g., cut, paste, copy, delete), the user can also perform the
following operations on the line features in multiple or single selections:
• spline — smooths or generalizes all currently selected lines using a specified grain
tolerance
• split/unsplit — makes two lines from one by adding a node or joins two lines by
removing a node
• reshape (for single lines only) — enables the user to move the vertices of a line
The Undo utility may be applied to any edits. The software stores all edits in sequential
order, so that continually pressing Undo will reverse the editing.
Constructing Topology
Either the build or clean option can be used to construct topology. To create spatial
relationships between features in a vector layer, it is necessary to create topology. After
a vector layer is edited, the topology must be constructed to maintain the topological
relationships between features. When topology is constructed, each feature is assigned
an internal number. These numbers are then used to determine line connectivity and
polygon contiguity. Once calculated, these values are recorded and stored in that
layer’s associated attribute table.
You must also reconstruct the topology of vector layers imported into ERDAS IMAGINE.
When topology is constructed, feature attribute tables are created with several automat-
ically created fields. Different fields are stored for the different types of layers. The
automatically generated fields for a line layer are:
• FNODE# — the internal node number for the beginning of a line (from-node)
• LPOLY# — the internal number for the polygon to the left of the line (will be zero
for layers containing only lines and no polygons)
• RPOLY# — the internal number for the polygon to the right of the line (will be zero
for layers containing only lines and no polygons)
• AREA — area of each polygon, measured in layer units (will be zero for layers
containing only points and no polygons)
Building and Cleaning Coverages
Build processes points, lines, and polygons, but clean processes only lines and
polygons. Build recognizes only existing intersections (nodes), whereas clean creates
intersections (nodes) wherever lines cross one another. The differences in these two
options are summarized in Table 24 (ESRI 1990).
Table 24: Build versus Clean

                 Build    Clean
Processes:
  Polygons       Yes      Yes
  Lines          Yes      Yes
  Points         Yes      No
Errors
Constructing topology also helps to identify errors in the layer. Some of the common
errors found are:
Constructing topology can identify the errors mentioned above. When topology is
constructed, line intersections are created, the lines that make up each polygon are
identified, and a label point is associated with each polygon. Until topology is
constructed, no polygons exist and lines that cross each other are not connected at a
node, since there is no intersection.
Construct topology using the Vector Utilities menu from the Vector icon in the IMAGINE icon
panel.
You should not build or clean a layer that is displayed in a Viewer, nor should you try to display
a layer that is being built or cleaned.
When the build or clean options are used to construct the topology of a vector layer,
potential node errors are marked with special symbols. These symbols are listed below
(ESRI 1990).
Pseudo nodes, drawn with a diamond symbol, occur where a single line connects
with itself (an island) or where only two lines intersect. Pseudo nodes do not neces-
sarily indicate an error or a problem. Acceptable pseudo nodes may represent an island
(a spatial pseudo node) or the point where a road changes from pavement to gravel (an
attribute pseudo node).
In polygon layers there may be label errors—usually no label point for a polygon, or
more than one label point for a polygon. In the latter case, two or more points may have
been mistakenly digitized for a polygon, or it may be that a line does not intersect
another line, resulting in an open polygon.
The accompanying illustration shows examples of these errors: a pseudo node where a line connects with itself to form an island, a polygon with no label point, and dangling nodes.
Errors detected in a layer can be corrected by changing the tolerances set for that layer
and building or cleaning again, or by editing the layer manually, then running build or
clean.
Refer to the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on editing
vector layers.
CHAPTER 11
Cartography
Introduction
Maps and mapping are the subject of the art and science known as cartography:
creating 2-dimensional representations of our 3-dimensional Earth. These representa-
tions were once hand-drawn with paper and pen. But now, map production is largely
automated—and the final output is not always paper. The capabilities of a computer
system are invaluable to map users, who often need to know much more about an area
than can be reproduced on paper, no matter how large that piece of paper is or how
small the annotation is. Maps stored on a computer can be queried, analyzed, and
updated quickly.
As the veteran GIS and image processing authority Roger F. Tomlinson said, “Mapped
and related statistical data do form the greatest storehouse of knowledge about the
condition of the living space of mankind.” With this thought in mind, it only makes
sense that maps be created as accurately as possible and be as accessible as possible.
In the past, map making was carried out by mapping agencies who took the analyst’s
(be they surveyors, photogrammetrists, or draftsmen) information and created a map
to illustrate that information. But today, in many cases, the analyst is the cartographer
and can design his maps to best suit the data and the end user.
This chapter defines some basic cartographic terms and explains how maps are created
within the ERDAS IMAGINE environment.
Use the ERDAS IMAGINE Map Composer to create hardcopy and softcopy maps and presen-
tation graphics.
This chapter concentrates on the production of digital maps. See "CHAPTER 12: Hardcopy
Output" for information about printing hardcopy maps.
Types of Maps
The following are some of the basic types of maps and their purposes:

Aspect: A map that shows the prevailing direction that a slope faces at each pixel. Aspect maps are often color-coded to show the eight major compass directions or any of 360 degrees.
Base: A map portraying background reference information onto which other information is placed. Base maps usually show the location and extent of natural earth surface features and permanent man-made objects. Raster imagery, orthophotos, and orthoimages are often used as base maps.
Bathymetric: A map portraying the shape of a water body or reservoir using isobaths (depth contours).
Cadastral: A map showing the boundaries of the subdivisions of land for purposes of describing and recording ownership or taxation.
Choropleth: A map portraying properties of a surface using area symbols. Area symbols usually represent categorized classes of the mapped phenomenon.
Composite: A map on which the combined information from different thematic maps is presented.
Contour: A map in which lines are used to connect points of equal elevation. Lines are often spaced in increments of ten or twenty feet or meters.
Derivative: A map created by altering, combining, or through the analysis of other maps.
Index: A reference map that outlines the mapped area, identifies all of the component maps for the area if several map sheets are required, and identifies all adjacent map sheets.
Inset: A map that is an enlargement of some congested area of a smaller scale map, and that is usually placed on the same sheet with the smaller scale main map.
Isarithmic: A map that uses isorithms (lines connecting points of the same value for any of the characteristics used in the representation of surfaces) to represent a statistical surface. Also called an isometric map.
Isopleth: A map on which isopleths (lines representing quantities that cannot exist at a point, such as population density) are used to represent some selected quantity.
Morphometric: A map representing morphological features of the earth's surface.
Outline: A map showing the limits of a specific set of mapping entities, such as counties, NTS quads, etc. Outline maps usually contain a very small number of details over the desired boundaries with their descriptive codes.
Planimetric: A map showing only the horizontal position of geographic objects, without topographic features or elevation contours.
Relief: Any map that appears to be, or is, 3-dimensional. Also called a shaded relief map.
Slope: A map which shows changes in elevation over distance. Slope maps are usually color-coded according to the steepness of the terrain at each pixel.
Thematic: A map illustrating the class characterizations of a particular spatial variable such as soils, land cover, hydrology, etc.
Topographic: A map depicting terrain relief.
Viewshed: A map showing only those areas visible (or invisible) from a specified point(s). Also called a line-of-sight map or a visibility map.
In ERDAS IMAGINE, maps are stored as a map file with a .map extension.
See "APPENDIX B: File Formats and Extensions" for information on the format of the .map file.
Maps are often classified into two broad groups:
• qualitative
• quantitative
A qualitative map shows the spatial distribution or location of a kind of nominal data.
For example, a map showing corn fields in the United States would be a qualitative
map. It would not show how much corn is produced in each location, or production
relative to the other areas.
A quantitative map displays the spatial aspects of numerical data. A map showing corn
production (volume) in each area would be a quantitative map. Quantitative maps
show ordinal (less than/greater than) and interval/ratio (how much different) scale
data (Dent 1985).
You can create thematic data layers from continuous data (aerial photography and satellite
images) using the ERDAS IMAGINE classification capabilities. See “Chapter 6: Classification”
for more information.
Base Information
Thematic maps should include a base of information so that the reader can easily relate
the thematic data to the real world. This base may range from something as simple as an outline of counties, states, or countries to something more complex, such as an aerial photograph or
both thematic and continuous data, but technological advances have made this easy.
For example, in a thematic map showing flood plains in the Mississippi River valley,
the user could overlay the thematic data onto a line coverage of state borders or a
satellite image of the area. The satellite image can provide more detail about the areas
bordering the flood plains. This may be valuable information when planning
emergency response and resource management efforts for the area. Satellite images can
also provide very current information about an area, and can assist the user in assessing
the accuracy of a thematic image.
In ERDAS IMAGINE, you can include multiple layers in a single map composition. See Map
Composition on page 432 for more information about creating maps.
Color Selection
The colors used in thematic maps may or may not have anything to do with the class or
category of information shown. Cartographers usually try to use a color scheme that
highlights the primary purpose of the map. The map reader’s perception of colors also
plays an important role. Most people are more sensitive to red, followed by green,
yellow, blue, and purple. Although color selection is left entirely up to the map
designer, some guidelines have been established (Robinson and Sale 1969).
• When mapping interval or ordinal data, the higher ranks and greater amounts are
generally represented by darker colors.
• When mapping elevation data, start with blues for water, greens in the lowlands,
ranging up through yellows and browns to reds in the higher elevations. This
progression should not be used for series other than elevation.
• In temperature mapping, use red, orange, and yellow for warm temperatures and
blue, green, and gray for cool temperatures.
• In land cover mapping, use yellows and tans for dryness and sparse vegetation and
greens for lush vegetation.
Use the Raster Attributes option in the Viewer to select and modify class colors.
The annotation on a map may include elements such as:
• scale bars
• legends
The annotation listed above is made up of single elements. The basic annotation elements in ERDAS IMAGINE include:
• text
These elements can be used to create more complex annotation, such as legends, scale
bars, etc. These annotation components are actually groups of the basic elements and
can be ungrouped and edited like any other graphic. The user can also create his or her
own groups to form symbols that are not in the IMAGINE symbol library. (Symbols are
discussed in more detail under "Symbols" on page 411.)
Create annotation using the Annotation tool palette in the Viewer or in a map composition.
See "APPENDIX B: File Formats and Extensions" for information on the format of the .ovr file.
Scale
Map scale is a statement that relates distance on a map to distance on the earth's surface. It is perhaps the most important information on a map, since both the level of detail and the map accuracy depend on the map scale. Scale is directly related to the map
extent, or the area of the earth’s surface to be mapped. If a relatively small area is to be
mapped, such as a neighborhood or subdivision, then the scale can be larger. If a large
area is to be mapped, such as an entire continent, the scale must be smaller. Generally,
the smaller the scale, the less detailed the map can be. As a rule, anything smaller than
1:250,000 is considered small-scale.
Map scale can be expressed in several ways, including:
• representative fraction
• verbal statement
• scale bar
Representative Fraction
Map scale is often noted as a simple ratio or fraction called a representative fraction. A
map in which one inch on the map equals 24,000 inches on the ground could be
described as having a scale of 1:24,000 or 1/24,000. The units on both sides of the ratio
must be the same.
Verbal Statement
A verbal statement of scale expresses, in words, the relationship between distance on the map and distance on the ground. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to
16 miles. The units on the map and on the ground do not have to be the same in a verbal
statement. One-inch and 6-inch maps of the British Ordnance Survey are often referred
to by this method (1 inch to 1 mile, 6 inches to 1 mile) (Robinson and Sale 1969).
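Since a representative fraction is simply a ratio, converting it to an approximate verbal statement is straightforward arithmetic. The sketch below is illustrative only (the function and the 63,360 inches-per-mile constant are not part of ERDAS IMAGINE):

```python
# Convert a representative fraction 1:denominator into an approximate
# verbal statement of the form "1 inch to about X miles".
INCHES_PER_MILE = 63360  # 5,280 feet x 12 inches per foot

def verbal_statement(denominator):
    miles_per_inch = denominator / INCHES_PER_MILE
    return "1 inch to about %.1f miles" % miles_per_inch

print(verbal_statement(63360))     # 1 inch to about 1.0 miles
print(verbal_statement(1000000))   # 1 inch to about 15.8 miles
```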
Scale Bars
A scale bar is a graphic annotation element that describes map scale. It shows the
distance on paper that represents a geographical distance on the map. Maps often
include more than one scale bar to indicate various measurement systems, such as
kilometers and miles.
Figure 167: Sample Scale Bars (one bar in kilometers, one in miles)
Use the Scale Bar tool in the Annotation tool palette to automatically create representative
fractions and scale bars. Use the Text tool to create a verbal statement.
The table below lists, for each map scale, the ground distance represented by 1/40 inch, 1 inch, and 1 centimeter on the map, and the map distance that represents 1 mile and 1 kilometer on the ground.

Map Scale   1/40 inch represents   1 inch represents   1 centimeter represents   1 mile is represented by   1 kilometer is represented by
1:2,000 4.200 ft 56.000 yd 20.000 m 31.680 in 50.00 cm
1:5,000 10.425 ft 139.000 yd 50.000 m 12.670 in 20.00 cm
1:10,000 6.952 yd 0.158 mi 0.100 km 6.340 in 10.00 cm
1:15,840 11.000 yd 0.250 mi 0.156 km 4.000 in 6.25 cm
1:20,000 13.904 yd 0.316 mi 0.200 km 3.170 in 5.00 cm
1:24,000 16.676 yd 0.379 mi 0.240 km 2.640 in 4.17 cm
1:25,000 17.380 yd 0.395 mi 0.250 km 2.530 in 4.00 cm
1:31,680 22.000 yd 0.500 mi 0.317 km 2.000 in 3.16 cm
1:50,000 34.716 yd 0.789 mi 0.500 km 1.270 in 2.00 cm
1:62,500 43.384 yd 0.986 mi 0.625 km 1.014 in 1.60 cm
1:63,360 0.025 mi 1.000 mi 0.634 km 1.000 in 1.58 cm
1:75,000 0.030 mi 1.180 mi 0.750 km 0.845 in 1.33 cm
1:80,000 0.032 mi 1.260 mi 0.800 km 0.792 in 1.25 cm
1:100,000 0.040 mi 1.580 mi 1.000 km 0.634 in 1.00 cm
1:125,000 0.050 mi 1.970 mi 1.250 km 0.507 in 8.00 mm
1:250,000 0.099 mi 3.950 mi 2.500 km 0.253 in 4.00 mm
1:500,000 0.197 mi 7.890 mi 5.000 km 0.127 in 2.00 mm
1:1,000,000 0.395 mi 15.780 mi 10.000 km 0.063 in 1.00 mm
Table 26 shows the number of pixels per inch for selected scales and pixel sizes.
Pixel Size (m), followed by the pixels per inch at each map scale:
1”=100’ (1:1200)   1”=200’ (1:2400)   1”=500’ (1:6000)   1”=1000’ (1:12000)   1”=1500’ (1:18000)   1”=2000’ (1:24000)   1”=4167’ (1:50000)   1”=1 mile (1:63360)
1 30.49 60.96 152.40 304.80 457.20 609.60 1270.00 1609.35
2 15.24 30.48 76.20 152.40 228.60 304.80 635.00 804.67
2.5 12.13 24.38 60.96 121.92 182.88 243.84 508.00 643.74
5 6.10 12.19 30.48 60.96 91.44 121.92 254.00 321.87
10 3.05 6.10 15.24 30.48 45.72 60.96 127.00 160.93
15 2.03 4.06 10.16 20.32 30.48 40.64 84.67 107.29
20 1.52 3.05 7.62 15.24 22.86 30.48 63.50 80.47
25 1.22 2.44 6.10 12.19 18.29 24.38 50.80 64.37
30 1.02 2.03 5.08 10.16 15.24 20.32 42.33 53.64
35 .87 1.74 4.35 8.71 13.08 17.42 36.29 45.98
40 .76 1.52 3.81 7.62 11.43 15.24 31.75 40.23
45 .68 1.35 3.39 6.77 10.16 13.55 28.22 35.76
50 .61 1.22 3.05 6.10 9.14 12.19 25.40 32.19
75 .41 .81 2.03 4.06 6.10 8.13 16.93 21.46
100 .30 .61 1.52 3.05 4.57 6.10 12.70 16.09
150 .20 .41 1.02 2.03 3.05 4.06 8.47 10.73
200 .15 .30 .76 1.52 2.29 3.05 6.35 8.05
250 .12 .24 .61 1.22 1.83 2.44 5.08 6.44
300 .10 .20 .51 1.02 1.52 2.03 4.23 5.36
350 .09 .17 .44 .87 1.31 1.74 3.63 4.60
400 .08 .15 .38 .76 1.14 1.52 3.18 4.02
450 .07 .14 .34 .68 1.02 1.35 2.82 3.58
500 .06 .12 .30 .61 .91 1.22 2.54 3.22
600 .05 .10 .25 .51 .76 1.02 2.12 2.69
700 .04 .09 .22 .44 .65 .87 1.81 2.30
800 .04 .08 .19 .38 .57 .76 1.59 2.01
900 .03 .07 .17 .34 .51 .68 1.41 1.79
1000 .03 .06 .15 .30 .46 .61 1.27 1.61
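The values in Table 26 follow directly from the definition of scale: one printed inch covers (scale denominator × 0.0254) meters on the ground, and dividing that ground distance by the pixel size gives pixels per inch. A minimal sketch of the arithmetic (not an ERDAS IMAGINE function):

```python
# Number of image pixels that fall within one printed inch at a given
# map scale (1:scale_denominator) and ground pixel size in meters.
METERS_PER_INCH = 0.0254

def pixels_per_inch(scale_denominator, pixel_size_m):
    ground_meters_per_inch = scale_denominator * METERS_PER_INCH
    return ground_meters_per_inch / pixel_size_m

print(round(pixels_per_inch(24000, 30), 2))   # 20.32, as in Table 26
print(round(pixels_per_inch(63360, 10), 2))   # 160.93 (1 inch = 1 mile, 10 m pixels)
```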
Legends
A legend is a key to the colors, symbols, and line styles that are used in a map. Legends
are especially useful for maps of categorical data displayed in pseudo color, where each
color represents a different feature or category. A legend can also be created for a single
layer of continuous data, displayed in gray scale. Legends are likewise used to describe
all unknown or unique symbols utilized. Symbols in legends should appear exactly the
same size and color as they appear on the map (Robinson and Sale 1969).
[Figure: a sample map legend with color patches for the classes pasture, forest, swamp, and developed]
Use the Legend tool in the Annotation tool palette to automatically create color legends. Symbol
legends are not created automatically, but can be created manually.
• A neatline is a rectangular border around the image area of a map.
• Tick marks are small lines along the edge of the image area or neatline that indicate regular intervals of distance.
• Grid lines are intersecting lines that indicate regular intervals of distance, based on a coordinate system. Usually, they are an extension of tick marks. It is often helpful to place grid lines over the image area of a map. This is becoming less common on thematic maps, but is really up to the map designer. If the grid lines will help readers understand the content of the map, they should be used.
[Figure: a map image area bordered by a neatline, with tick marks along the neatline and grid lines extending across the image]
Use the Grid/Tick tool in the Annotation tool palette to create neatlines, tick marks, and grid
lines. Tick marks and grid lines can also be created over images displayed in a Viewer. See the
On-Line Help for instructions.
Symbols
Since maps are a greatly reduced version of the real world, objects cannot be depicted in their true shape or size. Therefore, a set of symbols is devised to represent real-world objects. There are two major classes of symbols:
• replicative
• abstract
Replicative symbols are designed to look like their real-world counterparts; they
represent tangible objects, such as coastlines, trees, railroads, and houses. Abstract
symbols usually take the form of geometric shapes, such as circles, squares, and
triangles. They are traditionally used to represent amounts that vary from place to
place, such as population density, amount of rainfall, etc. (Dent 1985).
Both replicative and abstract symbols are composed of one or more of the following
annotation elements:
• point
• line
• area
Symbol Types
These basic elements can be combined to create three different types of replicative
symbols:
• plan — formed after the basic outline of the object it represents. For example, the
symbol for a house might be a square, since most houses are rectangular.
• profile — formed like the profile of an object. Profile symbols generally represent
vertical objects, such as trees, windmills, oil wells, etc.
• function — formed after the activity that a symbol represents. For example, on a
map of a state park, a symbol of a tent would indicate the location of a camping
area.
Use the Symbol tool in the Annotation tool palette and the symbol library to place symbols in
maps.
Labels and Descriptive Text
Place names and other labels convey important information to the reader about the features on the map. Any features that will help orient the reader or are important to
the content of the map should be labeled. Descriptive text on a map can include the map
title and subtitle, copyright information, captions, credits, production notes, or other
explanatory material.
Title
The map title usually draws attention by virtue of its size. It focuses the reader’s
attention on the primary purpose of the map. The title may be omitted, however, if
captions are provided outside of the image area (Dent 1985).
Credits
Map credits (or source) can include the data source and acquisition date, accuracy
information, and other details that are required or helpful to readers. For example, if the
user includes data which they do not own in a map, they must give credit to the owner.
Use the Text tool in the Annotation tool palette to add labels and descriptive text to maps.
Typography and Lettering
The choice of type fonts and styles and how names are lettered can make the difference between a clear and attractive map and a jumble of imagery and text. As with many
other aspects of map design, this is a very subjective area and many organizations
already have guidelines to use. This section is intended as an introduction to the
concepts involved and to convey traditional guidelines, where available.
If your organization does not have a set of guidelines for the appearance of maps and
you plan to produce many in the future, it would be beneficial to develop a style guide
specifically for mapping. This will ensure that all of the maps produced follow the same
conventions, regardless of who actually makes the map.
ERDAS IMAGINE enables you to make map templates to facilitate the development of map
standards within your organization.
Type Styles
Type style refers to the appearance of the text and may include font, size, and style
(bold, italic, underline, etc.). Although the type styles used in maps are purely a matter
of the designer’s taste, the following techniques help to make maps more legible
(Robinson and Sale 1969; Dent 1985).
• Do not use too many different typefaces in a single map. Generally, one or two
styles are enough when also using the variations of those type faces (e.g., bold,
italic, underline, etc.). When using two typefaces, use a serif and a sans serif, rather
than two different serif fonts or two different sans serif fonts [e.g., Sans (sans serif)
and Roman (serif) could be used together in one map].
• Exercise caution in using very thin letters that may not reproduce well. On the other
hand, using letters that are too bold may obscure important information in the
image.
• Use different sizes of type for showing varying levels of importance. For example,
on a map with city and town labels, city names will usually be in a larger type size
than the town names. Use no more than four to six different type sizes.
• Put more important labels, titles, and names in all capital letters and less important text in lowercase with initial capitals. This is a matter of personal
preference, although names in which the letters must be spread out across a large
area are better in all capital letters. (Studies have found that capital letters are more
difficult to read, therefore lowercase letters might improve the legibility of the
map.)
• In the past, hydrology, landform, and other natural features were labeled in italic.
However, this is not strictly adhered to by map makers today, although water
features are still nearly always labeled in italic.
Figure 171: Sample Sans Serif and Serif Typefaces with Various Styles Applied
• Type should not be curved unless it is necessary to do so.
• If lettering must be disoriented, it should never be set in a straight line, but should
always have a slight curve.
• Names should be letter spaced (space between individual letters - kerning) as little
as necessary.
• Where the continuity of names and other map data, such as lines and tones,
conflicts with the lettering, the data, not the names, should be interrupted.
• Lettering that refers to point locations should be placed above or below the point,
preferably above and to the right.
• The letters identifying linear features (roads, rivers, railroads, etc.) should not be
spaced. The word(s) should be repeated along the feature as often as necessary to
facilitate identification. These labels should be placed above the feature and river
names should slant in the direction of the river flow (if the label is italic).
• For geographical names, use the native language of the intended map user. For an English-speaking audience, the name “Germany” should be used, rather than “Deutschland.”
[Figure: examples of better versus worse lettering placement and spacing, using the labels Atlanta, GEORGIA, and Savannah]
Text Color
Many cartographers argue that all lettering on a map should be black. However, the
map may be well served by incorporating color into its design. In fact, studies have
shown that coding labels by color can improve a reader’s ability to find information
(Dent 1985).
Map Projections
This section is adapted from Map Projections for Use with the Geographic Information System by Lee and Walsh, 1984.
A map projection is the manner in which the spherical surface of the earth is repre-
sented on a flat (two-dimensional) surface. This can be accomplished by direct
geometric projection or by a mathematically derived transformation. There are many
kinds of projections, but all involve transfer of the distinctive global patterns of parallels
of latitude and meridians of longitude onto an easily flattened surface, or developable
surface.
The three most common developable surfaces are the cylinder, cone, and plane (Figure
173). A plane is already flat, while a cylinder or cone may be cut and laid out flat,
without stretching. Thus, map projections may be classified into three general families:
cylindrical, conical, and azimuthal or planar.
Map projections are selected in the Projection Chooser. The Projection Chooser is accessible from
the ERDAS IMAGINE icon panel, and from several other locations.
Properties of Map Projections
Regardless of what type of projection is used, it is inevitable that some error or distortion will occur in transforming a spherical surface into a flat surface. Ideally, a distortion-free map has four valuable properties:
• conformality
• equivalence
• equidistance
• true direction
Each of these properties is explained below. No map projection can be true in all of these
properties. Therefore, each projection is devised to be true in selected properties, or
most often, a compromise among selected properties. Projections that compromise in
this manner are known as compromise projections.
Equivalence is the characteristic of equal area, meaning that areas on one portion of a
map are in scale with areas in any other portion. Preservation of equivalence involves
inexact transformation of angles around points and thus, is mutually exclusive with
conformality except along one or two selected lines. The property of equivalence is
important in maps that are used for comparing density and distribution data, as in
populations.
True direction is characterized by a direction line between two points that crosses
reference lines, for example, meridians, at a constant angle or azimuth. An azimuth is
an angle measured clockwise from a meridian, going north to east. The line of constant
or equal direction is termed a rhumb line.
Thus, a more desirable property than true direction may be where great circles are
represented by straight lines. This characteristic is most important in aviation. Note that
all meridians are great circles, but the only parallel that is a great circle is the equator.
[Figure: oblique aspects of an azimuthal (planar) projection and a cylindrical projection]
Projection Types
Although a great number of projections have been devised, the majority of them are geometric or mathematical variants of the basic direct geometric projection families described below. Choice of the projection to be used will depend upon the true property
or combination of properties desired for effective cartographic analysis.
Azimuthal Projections
Azimuthal projections, also called planar projections, are accomplished by drawing
lines from a given perspective point through the globe onto a tangent plane. This is
conceptually equivalent to tracing a shadow of a figure cast by a light source. A tangent
plane intersects the global surface at only one point and is perpendicular to a line
passing through the center of the sphere. Thus, these projections are symmetrical
around a chosen center or central meridian. Choice of the projection center determines
the aspect, or orientation, of the projection surface.
The origin of the projection lines—that is, the perspective point—may also assume various positions. For example, it may be located at the center of the globe (gnomonic), on the surface of the globe opposite the point of tangency (stereographic), or at an infinite distance away (orthographic).
Conical Projections
Conical projections are accomplished by intersecting, or touching, a cone with the
global surface and mathematically projecting lines onto this developable surface.
A tangent cone intersects the global surface to form a circle. Along this line of inter-
section, the map will be error-free and possess equidistance. Usually, this line is a
parallel, termed the standard parallel.
Cones may also be secant, and intersect the global surface, forming two circles that will
possess equidistance. In this case, the cone slices underneath the global surface,
between the standard parallels. Note that the use of the word “secant,” in this instance,
is only conceptual and not geometrically accurate. Conceptually, the conical aspect may
be polar, equatorial, or oblique. Only polar conical projections are supported in ERDAS
IMAGINE.
Cylindrical Projections
Cylindrical projections are accomplished by intersecting, or touching, a cylinder with
the global surface. The surface is mathematically projected onto the cylinder, which is
then “cut” and “unrolled.”
A tangent cylinder will intersect the global surface on only one line to form a circle, as
with a tangent cone. This central line of the projection is commonly the equator and will
possess equidistance.
If the cylinder is rotated 90 degrees from the vertical (i.e., the long axis becomes
horizontal), then the aspect becomes transverse, wherein the central line of the
projection becomes a chosen standard meridian as opposed to a standard parallel. A
secant cylinder, one slightly less in diameter than the globe, will have two lines
possessing equidistance.
[Figure: a tangent projection surface with one standard parallel and a secant projection surface with two standard parallels]
Perhaps the most famous cylindrical projection is the Mercator, which became the
standard navigational map, possessing true direction and conformality.
Other Projections
The projections discussed so far are projections that are created by projecting from a
sphere (the earth) onto a plane, cone, or cylinder. Many other projections cannot be
created so easily.
Modified projections are modified versions of another projection. For example, the
Space Oblique Mercator projection is a modification of the Mercator projection. These
modifications are made to reduce distortion, often by including additional standard
lines or a different pattern of distortion.
Pseudo projections have only some of the characteristics of another class of projection.
For example, the Sinusoidal is called a pseudocylindrical projection because all lines of
latitude are straight and parallel, and all meridians are equally spaced. However, it
cannot truly be a cylindrical projection, because all meridians except the central
meridian are curved. This results in the Earth appearing oval instead of rectangular
(ESRI 1991).
Map coordinates may be expressed in one of two coordinate types:
• geographical
• planar
Geographical
Geographical, or spherical, coordinates are based on the network of latitude and
longitude (Lat/Lon) lines that make up the graticule of the earth. Within the graticule,
lines of longitude are called meridians, which run north/south, with the prime
meridian at 0˚ (Greenwich, England). Meridians are designated as 0˚ to 180˚, east or
west of the prime meridian. The 180˚ meridian (opposite the prime meridian) is the
International Dateline.
Lines of latitude are called parallels, which run east/west. Parallels are designated as
0˚ at the equator to 90˚ at the poles. The equator is the largest parallel. Latitude and
longitude are defined with respect to an origin located at the intersection of the equator
and the prime meridian. Lat/Lon coordinates are reported in degrees, minutes, and
seconds. Map projections are various arrangements of the earth’s latitude and
longitude lines onto a plane.
Planar
Planar, or Cartesian, coordinates are defined by a column and row position on a planar
grid (X,Y). The origin of a planar coordinate system is typically located south and west
of the origin of the projection. Coordinates increase from 0,0 going east and north. The
origin of the projection, being a “false” origin, is defined by values of false easting and
false northing. Grid references always contain an even number of digits, and the first
half refers to the easting and the second half the northing.
In practice, this eliminates negative coordinate values and allows locations on a map
projection to be defined by positive coordinate pairs. Values of false easting are read
first and may be in meters or feet.
Available Map Projections
In ERDAS IMAGINE, map projection information appears in the Projection Chooser, which is used to georeference images and to convert map coordinates from one type of projection to another. The Projection Chooser provides the following projections:
USGS Projections
Albers Conical Equal Area
Azimuthal Equidistant
Equidistant Conic
Equirectangular
General Vertical Near-Side Perspective
Geographic (Lon/Lat)
Gnomonic
Lambert Azimuthal Equal Area
Lambert Conformal Conic
Mercator
Miller Cylindrical
Modified Transverse Mercator
Oblique Mercator (Hotine)
Orthographic
Polar Stereographic
Polyconic
Sinusoidal
Space Oblique Mercator
State Plane
Stereographic
Transverse Mercator
UTM
Van Der Grinten I
External Projections
Bipolar Oblique Conic Conformal
Cassini-Soldner
Laborde Oblique Mercator
Modified Polyconic
Modified Stereographic
Mollweide Equal Area
Plate Carrée
Rectified Skew Orthomorphic
Robinson Pseudocylindrical
Southern Orientated Gauss Conformal
Winkel’s Tripel
For each map projection, a menu of spheroids displays, along with appropriate prompts that enable the user to specify the parameters that define the projection, such as the definition of scale.
Units
Use the units of measure that are appropriate for the map projection type.
• Lat/Lon coordinates are expressed in decimal degrees. When prompted, the user
can use the DD function to convert coordinates in degrees, minutes, seconds format
to decimal. For example, for 30˚51’12’’:
dd(30,51,12) = 30.85333
-dd(30,51,12) = -30.85333
or
30:51:12 = 30.85333
Note also that values for longitude west of Greenwich, England, and values for latitude south of
the equator are to be entered as negatives.
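The DD conversion is ordinary sexagesimal arithmetic; the following sketch reproduces the calculation in Python and is not the IMAGINE dd function itself:

```python
# Convert degrees, minutes, seconds to decimal degrees.
# Western longitudes and southern latitudes are entered as negative degrees.
def dd(degrees, minutes, seconds):
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

print(round(dd(30, 51, 12), 5))    # 30.85333
print(round(dd(-30, 51, 12), 5))   # -30.85333
```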
[Table: projection parameters required by each USGS map projection type. The parameters are: longitude of central meridian; latitude of origin of projection; longitude of center of projection; latitude of center of projection; latitude of first standard parallel; latitude of second standard parallel; latitude of true scale; longitude below pole; and, under definition of scale, scale factor at central meridian, height of perspective point above sphere, and scale factor at center of projection.]
a. Numbers are used for reference only and correspond to the numbers used in Table 1. Parameters for definition of map
projection types 0-2 are not applicable and are described in the text.
b. Additional parameters required for definition of the map projection are described in the text of Appendix C.
Choosing a Map Projection
Selecting a map projection for a GIS database enables the user to:
• decide how to best display the area of interest or illustrate the results of analysis
• test the accuracy of the information and perform measurements on the data
Deciding Factors
Depending on the user’s applications and the uses for the maps created, one or several
map projections may be used. Many factors must be weighed when selecting a
projection, including:
• type of map
• map accuracy
• scale
If the user is mapping a relatively small area, virtually any map projection will do. It is
in mapping large areas (entire countries, continents, and the world) that the choice of
map projection becomes more critical. In small areas, the amount of distortion in a
particular projection is barely, if at all, noticeable. In large areas, there may be little or
no distortion in the center of the map, but distortion will increase outward toward the
edges of the map.
Guidelines
Since the sixteenth century, there have been three fundamental rules regarding map
projection use (Maling 1992):
• if the country to be mapped lies in the tropics, use a cylindrical projection
• if the country to be mapped lies in the temperate latitudes, use a conical projection
• if the map is required to show one of the polar regions, use an azimuthal projection
Spheroids
The previous discussion of direct geometric map projections assumes that the earth is a
sphere, and for many maps this is satisfactory. However, due to rotation of the earth
around its axis, the planet bulges slightly at the equator. This flattening of the sphere
makes it an oblate spheroid, which is an ellipse rotated around its shorter axis.
[Figure: an ellipse, showing the major and minor axes and the semi-major axis (a) and semi-minor axis (b)]
f = (a – b) / a     (Equation 23)

where a is the semi-major axis and b is the semi-minor axis of the ellipse.

Most map projections use eccentricity (e²) rather than flattening. The relationship is:

e² = 2f – f²     (Equation 24)
The flattening of the earth is about 1 part in 300 and becomes significant in map
accuracy at a scale of 1:100,000 or larger.
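Given the semi-major and semi-minor axes of a spheroid (see Table 30), Equations 23 and 24 give its flattening and eccentricity directly. A minimal sketch, using the WGS 84 axes as an example:

```python
# Flattening (Equation 23) and eccentricity squared (Equation 24)
# of a spheroid with semi-major axis a and semi-minor axis b.
def flattening(a, b):
    return (a - b) / a

def eccentricity_squared(a, b):
    f = flattening(a, b)
    return 2 * f - f ** 2

a, b = 6378137.0, 6356752.31424517929          # WGS 84 axes, in meters
print(1.0 / flattening(a, b))                  # about 298.257 (roughly 1 part in 300)
print(eccentricity_squared(a, b))              # about 0.00669438
```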
The following spheroids are among those supported in ERDAS IMAGINE:
Clarke 1866
Clarke 1880
Bessel
New International 1967
International 1909
WGS 72
Everest
WGS 66
GRS 1980
Airy
Modified Everest
Modified Airy
Walbeck
Southeast Asia
Australian National
Krasovsky
Hough
Mercury 1960
Modified Mercury 1968
Sphere of Radius 6370977m
WGS 84
Helmert
Sphere of Nominal Radius of Earth
The spheroids listed above are the most commonly used. There are many other spheroids
available, and they are listed in the Projection Chooser. These additional spheroids are not
documented in this manual. You can use the ERDAS IMAGINE Developers’ Toolkit to add
your own map projections and spheroids to IMAGINE.
The semi-major and semi-minor axes of all supported spheroids are listed in Table 30,
as well as the principal uses of these spheroids.
Spheroid   Semi-Major Axis (m)   Semi-Minor Axis (m)   Use
Clarke 1866 6378206.4 6356583.8 North America and the
Philippines
Clarke 1880 6378249.145 6356514.86955 France and Africa
Bessel (1841) 6377397.155 6356078.96284 Central Europe, Chile, and
Indonesia
New International 1967 6378157.5 6356772.2 As International 1909 below,
more recent calculation
International 1909 6378388.0 6356911.94613 Remaining parts of the world not
(= Hayford) listed here
WGS 72 (World 6378135.0 6356750.519915 NASA (satellite)
Geodetic System 1972)
Everest (1830) 6377276.3452 6356075.4133 India, Burma, and Pakistan
WGS 66 (World 6378145.0 6356759.769356 As WGS 72 above, older version
Geodetic System 1966)
GRS 1980 (Geodetic 6378137.0 6356752.31414 Expected to be adopted in North
Reference System) America for 1983 earth-centered
coordinate system (satellite)
Airy (1940) 6377563.0 6356256.91 England
Modified Everest 6377304.063 6356103.039 As Everest above, more recent
version
Modified Airy 6377341.89 6356036.143 As Airy above, more recent
version
Walbeck (1819) 6376896.0 6355834.8467 Soviet Union, up to 1910
Southeast Asia 6378155.0 6356773.3205 As named
Australian National (1965) 6378160.0 6356774.719 Australia
Krasovsky (1940) 6378245.0 6356863.0188 Former Soviet Union and some
East European countries
Hough 6378270.0 6356794.343479 As International 1909 above, with
modification of ellipse axes
Mercury 1960 6378166.0 6356794.283666 Early satellite, rarely used
Modified Mercury 1968 6378150.0 6356768.337303 As Mercury 1960 above, more
recent calculation
Sphere of Radius 6370997 m   6370997.0   6370997.0   A perfect sphere with the same surface area as the Clarke 1866 spheroid
WGS 84 6378137.0 6356752.31424517929 As WGS 72, more recent
calculation
Helmert 6378200.0 6356818.16962789092 Egypt
Sphere of Nominal Radius 6370997.0 6370997.0 A perfect sphere
of Earth
Map Composition
Before creating a map composition, there are several questions to consider, such as:
• Who is the intended audience? What is the level of their knowledge about the subject matter?
• Will it remain in digital form and be viewed on the computer screen or will it be
printed?
• If it is going to be printed, how big will it be? Will it be printed in color or black and
white?
The answers to these questions will help to determine the type of information that must
go into the composition and the layout of that information. For example, suppose you
are going to do a series of maps about global deforestation for presentation to Congress,
and you are going to print these maps in color on an electrostatic printer. This scenario
might lead to the following conclusions:
• A format (layout) should be developed for the series, so that all the maps produced
have the same style.
• The colors used should be chosen carefully, since the maps will be printed in color.
• Political boundaries might need to be included, since they will influence the types
of actions that can be taken in each deforested area.
• The typeface size and style to be used for titles, captions, and labels will have to be
larger than for maps printed on 8.5” x 11.0” sheets. The type styles selected should
be the same for all maps.
• Select symbols that are widely recognized, and make sure they are all explained in
a legend.
• Cultural features (roads, urban centers, etc.) may be added for locational reference.
• Include a statement about the accuracy of each map, since these maps may be used
in very high-level decisions.
Once this information is in hand, the user can actually begin sketching the look of the
map on a sheet of paper. It is helpful for the user to know how they want the map to
look before starting the ERDAS IMAGINE Map Composer. Doing so will ensure that all
of the necessary data layers are available, and will make the composition phase go
quickly.
See the Map Composer section of the ERDAS IMAGINE Tour Guides manual for step-by-step
instructions on creating a map. Refer to the On-Line Help for details about how Map Composer
works.
Map Accuracy
• On scales smaller than 1:20,000, not more than 10 percent of points tested should be more than 1/50 inch in horizontal error, where points refer only to points that can be well-defined on the ground.
• On maps with scales larger than 1:20,000, the corresponding error term is 1/30 inch.
• At no more than 10 percent of the elevations tested will contours be in error by more
than one half of the contour interval.
• Accuracy should be tested by comparison of actual map data with survey data of
higher accuracy (not necessarily with ground truth).
• If maps have been tested and do meet these standards, a statement should be made
to that effect in the legend.
• Maps that have been tested but fail to meet the requirements should omit all
mention of the standards on the legend.
• The minimum level of accuracy in identifying land use and land cover categories is
85%.
• The several categories shown should have about the same accuracy.
• Up to 25% of the pedons may be of other soil types than those named if they do not
present a major hindrance to land management.
• Up to only 10% of pedons may be of other soil types than those named if they do
present a major hindrance to land management.
• No single included soil type may occupy more than 10% of the area of the map unit.
CHAPTER 12
Hardcopy Output
Introduction
Hardcopy output refers to any output of image data to paper. These topics are covered in this chapter:
• printing maps
• the mechanics of printing
Printing Maps
ERDAS IMAGINE enables the user to create and output a variety of types of hardcopy maps, with several referencing features.
Scaled Maps
A scaled map is a georeferenced map that has been projected to a map projection, and
is accurately laid out and referenced to represent distances and locations. A scaled map usually has a legend that includes a scale, such as “1 inch = 1000 feet”. The scale is often
expressed as a ratio, like 1:12,000, where 1 inch on the map represents 12,000 inches on
the ground.
See "CHAPTER 8: Rectification" for information on rectifying and georeferencing images and
"CHAPTER 11: Cartography" for information on creating maps.
• A book map is laid out like the pages of a book. Each page fits on the paper used
by the printer. There is a border, but no tick marks on every page.
[Figure: sample map layouts showing neatlines and tick marks]
Scale and Resolution
The following scales and resolutions will be noticeable during the process of creating a map composition and sending the composition to a hardcopy device:
• spatial resolution
• display scale
• map scale
• device resolution
Spatial Resolution
Spatial resolution is the area on the ground represented by each raw image data pixel.
Display Scale
Display scale is the distance on the screen as related to one unit on paper. For example,
if the map composition is 24 inches by 36 inches, it would not be possible to view the
entire composition on the screen. Therefore, the scale could be set to 1:0.25 so that the
entire map composition would be in view.
Map Scale
The map scale is the distance on a map as related to the true distance on the ground; or
the area that one pixel represents, measured in map units. The map scale is defined
when the user creates an image area in the map composition. One map composition can
have multiple image areas set at different scales. These areas may need to be shown at
different scales for different applications.
Device Resolution
The number of dots that are printed per unit—for example, 300 dots per inch (DPI).
Use the ERDAS IMAGINE Map Composer to define the above scales and resolutions.
Map Scaling Examples
The ERDAS IMAGINE Map Composer enables the user to define a map size, as well as the size and scale for the image area within the map composition.
section focus on the relationship between these factors and the output file created by
Map Composer for the specific hardcopy device or file format. Figure 178 is the map
composition that will be used in the examples. This composition was originally created
using IMAGINE Map Composer at a size of 22” × 34” and the hardcopy output must
be in two different formats.
• The composition must be printed on a PostScript printer with a printable area of approximately 8.1” × 8.6”.
• A TIFF file must be created and sent to a film recorder having a 1,000 DPI resolution.
Output to PostScript Printer
To determine the map composition to paper scale factor, it is necessary to calculate the
most limiting direction. Since the printable area for the printer is approximately
8.1” × 8.6”, these numbers will be used in the calculation.
The vertical direction is the most limiting, therefore the map composition to paper scale
would be set for 0.23.
If the specified size of the map (width and height) is greater than the printable area for the printer,
the output hardcopy map will be paneled. See the hardware manual of the hardcopy device for
information about the printable area of the device.
Use the Print Map Composition dialog to output a map composition to a PostScript printer.
Output to TIFF
The limiting factor in this example is not page size, but disk space (600 MB total). A
three-band .img file must be created in order to convert the map composition to a .tif
file. Due to the three bands and the high resolution, the .img file could be very large.
The .tif file will be output to a film recorder with a 1,000 DPI device resolution.
To determine the number of megabytes for the map composition, the X and Y dimensions need to be calculated:
• X = 22 * 1,000 = 22,000
• Y = 34 * 1,000 = 34,000
Although this appears to be an unmanageable file size, it is possible to reduce the file
size with little image degradation. The .img file created from the map composition must
be less than half to accommodate the .tif file, since the total disk space is only 600
megabytes. Dividing the map composition by three in both X and Y directions (2,244
MB / 3 /3) results in approximately a 250 megabyte file. This file size is small enough
to process and leaves enough room for the .img to .tif conversion. This division is
accomplished by specifying a 1/3 or 0.333 map composition to paper scale when
outputting the map composition to an .img file.
Once the .img file is created and exported to TIFF format, it can be sent to a film recorder
that accepts .tif files. Remember, the file must be enlarged three times to compensate for
the reduction during the .img file creation.
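The arithmetic behind this example can be written out directly. A minimal sketch, assuming one byte per pixel per band and using 1 MB = 1,000,000 bytes:

```python
# Size of the three-band .img file for a 22" x 34" composition at 1,000 DPI,
# before and after reducing the composition to 1/3 scale in both X and Y.
width_in, height_in, dpi, bands = 22, 34, 1000, 3

full_bytes = (width_in * dpi) * (height_in * dpi) * bands
print(full_bytes / 1e6)                 # 2244.0 MB -- too large for a 600 MB disk

scaled_bytes = full_bytes / (3 * 3)     # divide both dimensions by 3
print(round(scaled_bytes / 1e6))        # about 249 MB, leaving room for the .tif copy
```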
See the hardware manual of the hardcopy device for information about the DPI device resolution.
Use the ERDAS IMAGINE Print Map Composition dialog to output a map composition to an
.img file.
Mechanics of Printing
This section describes the mechanics of transferring an image or map composition from a data file to a hardcopy map.
Halftone Printing
Halftoning is the process of converting a continuous tone image into a pattern of dots. A newspaper photograph is a common example of halftoning.
To make a color illustration, halftones in the primary colors (cyan, magenta, and
yellow), plus black, are overlaid. The halftone dots of different colors, in close
proximity, create the effect of blended colors in much the same way that phospho-
rescent dots on a color computer monitor combine red, green, and blue to create other
colors. By using different patterns of dots, colors can have different intensities. The dots
for halftoning are a fixed density—either a dot is there or it is not there.
For scaled maps, each output pixel may contain one or more dot patterns. If a very large
image file is being printed onto a small piece of paper, data file pixels will be skipped
to accommodate the reduction.
Hardcopy Devices
The following hardcopy devices use halftoning to output an image or map composition:
• Linotronic Imagesetter
See the user’s manual for the hardcopy device for more information about halftone printing.
Continuous Tone Printing
Example
There are different processes by which continuous tone printers generate a map. One
example is a process called thermal dye transfer. The entire image or map composition
is loaded into the printer’s memory. While the paper moves through the printer, heat is
used to transfer the dye from a ribbon, which has the dyes for all of the four process
colors, to the paper. The density of the dot depends on the amount of heat applied by
the printer to transfer the dye. The amount of heat applied is determined by the
brightness values of the input image. This allows the printer to control the amount of
dye that is transferred to the paper to create a continuous tone image.
Hardcopy Devices
The following hardcopy devices use continuous toning to output an image or map
composition:
• Tektronix Phaser II SD
NOTE: The above printers do not necessarily use the thermal dye transfer process to generate a
map.
See the user’s manual for the hardcopy device for more information about continuous tone
printing.
Contrast and Color Tables
ERDAS IMAGINE contrast and color tables are used for some printing processes, just as they are used in displaying an image. For continuous raster layers, they are loaded
from the IMAGINE contrast table. For thematic layers, they are loaded from the color
table. The translation of data file values to brightness values is performed entirely by
the software program.
Colors
Since a printer uses ink instead of light to create a visual image, the primary colors of
pigment (cyan, magenta, and yellow) are used in printing, instead of the primary colors
of light (red, green, and blue). Cyan, magenta, and yellow can be combined to make
black through a subtractive process, whereas the primary colors of light are additive—
red, green, and blue combine to make white (Gonzalez and Wintz 1977).
The data file values that are sent to the printer and the contrast and color tables that
accompany the data file are all in the RGB color scheme. The RGB brightness values in
the contrast and color tables must be converted to CMY values.
The RGB primary colors are the opposites of the CMY colors—meaning, for example,
that the presence of cyan in a color means an equal lack of red. To convert the values,
each RGB brightness value is subtracted from the maximum brightness value to
produce the brightness value for the opposite color.
C = MAX – R
M = MAX – G
Y = MAX – B

where MAX is the maximum brightness value (for example, 255 for 8-bit data), R, G, and B are the red, green, and blue brightness values, and C, M, and Y are the resulting cyan, magenta, and yellow values.
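A minimal sketch of this subtraction for 8-bit data (the function name is illustrative; it is not part of the IMAGINE printing software):

```python
# Convert RGB brightness values to CMY by subtracting each value from the
# maximum brightness value (255 for 8-bit data).
def rgb_to_cmy(r, g, b, max_value=255):
    return max_value - r, max_value - g, max_value - b

print(rgb_to_cmy(255, 0, 0))     # (0, 255, 255): pure red contains no cyan
print(rgb_to_cmy(40, 200, 90))   # (215, 55, 165)
```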
Black Ink
Although, theoretically, cyan, magenta, and yellow combine to create black ink, the
color that results is often a dark, muddy brown. Many printers also use black ink for a
truer black.
NOTE: Black ink is not available on all printers. Consult the user’s manual for your printer.
Images often appear darker when printed than they do when displayed on the display
device. Therefore, it may be beneficial to improve the contrast and brightness of an
image before it is printed.
APPENDIX A
Math Topics
Introduction
This appendix is a cursory overview of some of the basic mathematical concepts that are applicable to image processing. Its purpose is to educate the novice reader, and to put
these formulas and concepts into the context of image processing and remote sensing
applications.
Summation
A commonly used notation throughout this and other discussions is the Sigma (Σ), used to denote a summation of values. For example,

$\sum_{i=1}^{10} i = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55$

Similarly, a list of values can be summed:

$\sum_{i=1}^{4} Q_i = 3 + 5 + 7 + 2 = 17$

where:
Q1 = 3
Q2 = 5
Q3 = 7
Q4 = 2
Statistics
Histogram
In ERDAS IMAGINE image data files (.img), each data file value (defined by its row, column, and band) is a variable. IMAGINE supports the following data types:
• 1, 2, and 4-bit
Distribution, as used in statistics, is the set of frequencies with which an event occurs,
or that a variable will have a particular value.
A histogram is a graph of data frequency or distribution. For a single band of data, the
horizontal axis of a histogram is the range of all possible data file values. The vertical
axis is the number of pixels that have each data value.
[Figure 179: Histogram. The x axis shows data file values (0 to 255) and the y axis shows the number of pixels (0 to 1,000); the point (100, 300) is marked.]
Figure 179 shows the histogram for a band of data in which Y pixels have data value X.
For example, in this graph, 300 pixels (y) have the data file value of 100 (x).
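A histogram like the one in Figure 179 can be sketched with NumPy; the random array below simply stands in for a real band of 8-bit data:

```python
import numpy as np

# A stand-in for one band of 8-bit image data (rows x columns).
band = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)

# One count for each possible data file value 0-255; counts[100] is the
# number of pixels whose data file value is 100 (the y axis of Figure 179).
counts, edges = np.histogram(band, bins=256, range=(0, 256))
print(counts[100])
```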
Bin Functions
Bins are used to group ranges of data values together for better manageability. Histograms and other descriptor columns for 1, 2, 4, and 8-bit data are easy to handle, since
they contain a maximum of 256 rows. However, to have a row in a descriptor table for
every possible data value in floating point, complex, and 32-bit integer data would yield
an enormous amount of information. Therefore, the bin function is provided to serve as
a data reduction tool.
For example, with bins defined over a data range of 0.0 to 1.0, row 23 of the histogram table might contain the number of pixels in the layer whose values fell between .023 and .024.
• DIRECT — one bin per integer value. Used by default for 1, 2, 4, and 8-bit integer
data, but may be used for other data types as well. The direct bin function may
include an offset for negative data or data in which the minimum value is greater
than zero.
For example, a direct bin with 900 bins and an offset of -601 would look like the fol-
lowing:
0 X ≤ -600.5
1 -600.5 < X ≤ -599.5
.
.
.
599 -2.5 < X ≤ -1.5
600 -1.5 < X ≤ -0.5
601 -0.5 < X < 0.5
602 0.5 ≤ X < 1.5
603 1.5 ≤ X < 2.5
.
.
.
898 296.5 ≤ X < 297.5
899 297.5 ≤ X
• LINEAR — establishes a linear mapping between data values and bin numbers, as in our first example, mapping the data range 0.0 to 1.0 to bin numbers 0 to 99. The bin number for a data value x is found by linearly rescaling x from the data range onto the range of bin numbers (see the sketch after this list).
• LOG — establishes a logarithmic mapping between data values and bin numbers.
• EXPLICIT — explicitly defines mapping between each bin number and data range.
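The sketch below illustrates the DIRECT mapping (using the 900-bin, -601-offset example above) and a LINEAR mapping of the range 0.0 to 1.0 onto 100 bins. It is illustrative only and is not the ERDAS IMAGINE bin function code; behavior at exact bin boundaries is simplified.

```python
def direct_bin(x, offset=-601, num_bins=900):
    # DIRECT: one bin per integer value, shifted by an offset so that
    # negative data values still map to non-negative bin numbers.
    b = int(round(x)) - offset
    return max(0, min(num_bins - 1, b))

def linear_bin(x, lo=0.0, hi=1.0, num_bins=100):
    # LINEAR: rescale the data range [lo, hi] onto bin numbers 0..num_bins-1.
    b = int((x - lo) / (hi - lo) * num_bins)
    return max(0, min(num_bins - 1, b))

print(direct_bin(0.2))      # 601: values near zero fall in bin 601
print(direct_bin(-600.0))   # 1
print(linear_bin(0.235))    # 23
```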
Mean
The mean (µ) of a set of k values Q1, Q2, ..., Qk is their statistical average:

$\mu = \frac{Q_1 + Q_2 + Q_3 + \dots + Q_k}{k}$

or

$\mu = \sum_{i=1}^{k} \frac{Q_i}{k}$
The mean of data with a normal distribution is the value at the peak of the curve—the
point where the distribution balances.
Normal Distribution
Our general ideas about an average, whether it be average age, average test score, or the average amount of spectral reflectance from oak trees in the spring, are usually based on the bell curve of a normal distribution, shown below.

[Figure: a normal (bell-shaped) distribution; x axis: data file values, 0 to 255; y axis: number of pixels, 0 to 1,000]
Average usually refers to a central value on a bell curve, although all distributions have
averages. In a normal distribution, most values are at or near the middle, as shown by
the peak of the bell curve. Values that are more extreme are more rare, as shown by the
tails at the ends of the curve.
The Normal Distributions are a family of bell shaped distributions that turn up
frequently under certain special circumstances. For example, a normal distribution
would occur if one were to compare the bands in a desert image. The bands would be
very similar, but would vary slightly.
Each Normal Distribution uses just two parameters, σ and µ, to control the shape and location of the resulting probability graph through the equation:

$f(x) = \frac{e^{-\frac{(x-\mu)^2}{2\sigma^2}}}{\sigma\sqrt{2\pi}}$

where x is the quantity whose distribution is being approximated, µ is the mean, σ is the standard deviation, and e is the base of natural logarithms.
The parameter, µ, controls how much the bell is shifted horizontally so that its average
will match the average of the distribution of x, while σ adjusts the width of the bell to
try to encompass the spread of the given distribution. In choosing to approximate a
distribution by the nearest of the Normal Distributions, we describe the many values in
the bin function of its distribution with just two parameters. It is a significant simplifi-
cation that can greatly ease the computational burden of many operations, but like all
simplifications, it reduces the accuracy of the conclusions we can draw.
The normal distribution is the most widely encountered model for probability. Many
natural phenomena can be predicted or estimated according to “the law of averages”
that is implied by the bell curve (Larsen and Marx 1981).
The mean and standard deviation are often used by computer programs that process
and analyze image data.
Variance
The variance of a set of values Q is defined as:

$\mathrm{Var}\,Q = E\left[(Q - \mu_Q)^2\right]$

where E is the expected value operator and µQ is the mean of Q.

In practice, the use of this equation for variance does not usually reflect the exact nature of the values that are used in the equation. These values are usually only samples of a large data set, and therefore, the mean and variance of the entire data set are estimated, not known.
The equation used in practice is shown below. This is called the “minimum variance
unbiased estimator” of the variance, or the sample variance (notated σ2).
$\sigma_Q^2 \approx \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)^2}{k - 1}$
where:
i = a particular pixel
k = the number of pixels (the higher the number, the better the approximation)
The theory behind this equation is discussed in chapters on “Point Estimates” and
“Sufficient Statistics,” and covered in most statistics texts.
NOTE: The variance is expressed in units squared (e.g., square inches, square data values, etc.),
so it may result in a number that is much higher than any of the original values.
Standard Deviation
Since the variance is expressed in units squared, a more useful value is the square root of the variance, which is expressed in units and can be related back to the original
values (Larsen and Marx 1981). The square root of the variance is the standard
deviation.
Based on the equation for sample variance (s2), the sample standard deviation (sQ) for
a set of values Q is computed as follows:
$s_Q = \sqrt{\frac{\sum_{i=1}^{k} (Q_i - \mu_Q)^2}{k - 1}}$
In a normal distribution, approximately 68% of the values are within one standard deviation of µ, that is, between µ-s and µ+s. In any distribution:
• more than 1/2 of the values are between µ-2s and µ+2s
• more than 3/4 of the values are between µ-3s and µ+3s
Standard deviations are used because the lowest and highest data file values may be
much farther from the mean than 2s.
When the mean and standard deviation are known, they can be used to estimate other
calculations about the data. In computer programs, it is much more convenient to
estimate calculations with a mean and standard deviation than it is to repeatedly
sample the actual data.
Algorithms that use parameters are parametric. The closer that the distribution of the
data resembles a normal curve, the more accurate the parametric estimates of the data
will be. ERDAS IMAGINE classification algorithms that use signature files (.sig) are
parametric, since the mean and standard deviation of each sample or cluster are stored
in the file to represent the distribution of the values.
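The sample statistics described above can be computed directly; a minimal NumPy sketch (ddof=1 selects the k - 1 denominator used by the sample variance):

```python
import numpy as np

q = np.array([3.0, 5.0, 7.0, 2.0])   # a small set of data file values

mean = q.mean()                       # the mean, mu
sample_var = q.var(ddof=1)            # sum((Qi - mu)**2) / (k - 1)
sample_std = q.std(ddof=1)            # square root of the sample variance

print(mean, sample_var, sample_std)   # 4.25 4.9166... 2.2173...
```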
Covariance
In many image processing procedures, the relationships between two bands of data are important. Covariance measures the tendencies of data file values in the same pixel, but in different bands, to vary with each other, in relation to the means of their respective bands. These bands must be linear.
$\mathrm{Cov}_{QR} = E\left[(Q - \mu_Q)(R - \mu_R)\right]$

where:
E = expected value
$C_{QR} \approx \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)(R_i - \mu_R)}{k}$
where:
i = a particular pixel
k = the number of pixels
Covariance Matrix The covariance matrix is an n × n matrix that contains all of the variances and covari-
ances within n bands of data. For example, the covariance matrix for 4 bands of data is:

        C1,1  C1,2  C1,3  C1,4
Cov  =  C2,1  C2,2  C2,3  C2,4
        C3,1  C3,2  C3,3  C3,4
        C4,1  C4,2  C4,3  C4,4

where Ci,j is the covariance of band i with band j.
The covariance of one band of data with itself is the variance of that band:
CQQ = Σ(i=1 to k) (Qi − µQ)(Qi − µQ) / (k − 1) = Σ(i=1 to k) (Qi − µQ)² / (k − 1)
Therefore, the diagonal of the covariance matrix consists of the band variances.
The covariance matrix is an organized format for storing variance and covariance infor-
mation on a computer system, so that it needs to be computed only once. Also, the
matrix itself can be used in matrix equations, as in principal components analysis.
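As a sketch of how such a matrix could be assembled (illustrative code, not the ERDAS IMAGINE implementation), the following computes an n × n covariance matrix from n bands of pixel values, using the same k − 1 divisor as the sample variance so that the diagonal reproduces the band variances; the sample values are hypothetical.

```python
# Hypothetical data file values: each inner list is one band, all bands cover the same k pixels
bands = [
    [110, 115, 100, 98, 123],   # band 1
    [ 60,  66,  55, 52,  70],   # band 2
    [200, 190, 210, 215, 185],  # band 3
]

k = len(bands[0])
n = len(bands)
means = [sum(b) / k for b in bands]

# Covariance of band q with band r, estimated from the k sample pixels
def cov(q, r):
    return sum((bands[q][i] - means[q]) * (bands[r][i] - means[r]) for i in range(k)) / (k - 1)

# n x n covariance matrix; the diagonal entries cov(q, q) are the band variances
cov_matrix = [[cov(q, r) for r in range(n)] for q in range(n)]

for row in cov_matrix:
    print(row)
```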
NOTE: The letter n is used consistently in this documentation to stand for the number of
dimensions (bands) of image data.
Measurement Vector The measurement vector of a pixel is the set of data file values for one pixel in all n
bands. Although image data files are stored band-by-band, it is often necessary to
extract the measurement vectors for individual pixels.
[Figure: a single pixel shown across Band 1, Band 2, and Band 3 (n = 3), with data file values V1, V2, and V3.]
If:
i = a particular band
Vi = the data file value of the pixel in band i

then the measurement vector for this pixel is:

V1
V2
V3
Mean Vector When the measurement vectors of several pixels are analyzed, a mean vector is often
calculated. This is the vector of the means of the data file values in each band. It has n
elements.
[Figure: a training sample shown in Bands 1-3; the mean of the values in the sample in band 1 is µ1.]
If:
i = a particular band
µi = the mean of the data file values of the pixels being studied, in band i

then the mean vector for this training sample is:

µ1
µ2
µ3
[Figure 183: a pixel plotted in feature space; the horizontal axis shows Band A data file values (0 to 255) and the vertical axis shows the data file values of a second band (0 to 255); the plotted pixel lies at 180 on the Band A axis and 85 on the other axis.]
NOTE: Although this plot is 2-dimensional, feature space plots are not limited to two dimensions; a
measurement vector with n bands can be plotted in n dimensions.
In Figure 183, the pixel that is plotted has a measurement vector of:
180
85
The graph above implies physical dimensions for the sake of illustration. Actually,
these dimensions are based on spectral characteristics, represented by the digital image
data. As opposed to physical space, the pixel above is plotted in feature space. Feature
space is an abstract space that is defined by spectral units, such as an amount of electro-
magnetic radiation.
Feature Space Images Several techniques for the processing of multiband data make use of a two-dimensional
histogram, or feature space image. This is simply a graph of the data file values of one
band of data against the values of another band.
When all data sets (bands) have jointly normal distributions, the scatterplot forms a
hyperellipsoid. The prefix “hyper” refers to an abstract geometrical shape, which is
defined in more than three dimensions.
NOTE: In this documentation, 2-dimensional examples are used to illustrate concepts that apply
to any number of dimensions of data. The 2-dimensional examples are best suited for creating
illustrations to be printed.
Spectral Distance Euclidean spectral distance is distance in n-dimensional spectral space. It is a number
that allows two measurement vectors to be compared for similarity. The spectral
distance between two pixels can be calculated as follows:
D = √( Σ(i=1 to n) (di − ei)² )
where:
D = spectral distance
n = number of bands (dimensions)
i = a particular band
di = data file value of pixel d in band i
ei = data file value of pixel e in band i
This is the equation for Euclidean distance—in two dimensions (when n = 2), it can be
simplified to the Pythagorean Theorem (c² = a² + b²), or in this case:

D² = (d1 − e1)² + (d2 − e2)²
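A minimal sketch of this calculation (illustrative only, with hypothetical measurement vectors) is:

```python
import math

# Measurement vectors of two pixels in n bands (hypothetical values)
d = [180, 85, 45]
e = [160, 70, 60]

# Euclidean spectral distance: square root of the summed squared band differences
D = math.sqrt(sum((di - ei) ** 2 for di, ei in zip(d, e)))
print(D)
```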
Polynomials
Order The variables in polynomial expressions can be raised to exponents. The highest
exponent in a polynomial determines the order of the polynomial.

A polynomial of order t in one variable, x, has the form:

A + Bx + Cx² + Dx³ + ... + Ωx^t

where:
A, B, C, D ... Ω = coefficients
t = the order of the polynomial

NOTE: If one or all of A, B, C, D ... are 0, then the nature, but not the complexity, of the
transformation is changed. Mathematically, Ω cannot be 0.

In image rectification, polynomial expressions in two variables, x and y, are used. All
combinations of x^i times y^j are used in the polynomial expression, such that:

i + j ≤ t
Transformation Matrix In the case of first order image rectification, the variables in the polynomials (x and y)
are the source coordinates of a ground control point (GCP). The coefficients are
computed from the GCPs and stored as a transformation matrix.
Matrix Notation Matrices and vectors are usually designated with a single capital letter, such as M. For
example:
2.2 4.6
M = 6.1 8.3
10.0 12.4
One value in the matrix M would be specified by its position, which is its row and
column (in that order) in the matrix. One element of the array (one value) is designated
with a lower case letter and its position:
m3,2 = 12.4
With column vectors, it is simpler to use only one number to designate the position:
2.8
G = 6.5
10.1
G2 = 6.5
Matrix Multiplication A simple example of the application of matrix multiplication is a 1st-order transfor-
mation matrix. The coefficients are stored in a 2 by 3 matrix:
a1 a2 a3
C =
b1 b2 b3
Then, the transformed (output) coordinates xo and yo are computed from the source
coordinates xi and yi as:

xo = a1 + a2·xi + a3·yi
yo = b1 + b2·xi + b3·yi

or, in matrix form:

xo      a1 a2 a3      1
    =              ×  xi
yo      b1 b2 b3      yi
This can be written as R = CS,

where:
R = the matrix of transformed (output) coordinates
C = the transformation matrix
S = the matrix of source coordinates
The sizes of the matrices are shown above to demonstrate a rule of matrix multipli-
cation. To multiply two matrices, the first matrix must have the same number of
columns as the second matrix has rows. For example, if the first matrix is a by b, and the
second matrix is m by n, then b must equal m, and the product matrix will have the size
a by n.
(fg)ij = Σ(k=1 to m) fik gkj
where:
fg is an a by n matrix.
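A small sketch (illustrative only; the coefficient values are hypothetical) showing the general rule above and its use for a 1st-order transformation:

```python
# Multiply an (a x b) matrix F by a (b x n) matrix G using the rule above
def matmul(F, G):
    a, b, n = len(F), len(G), len(G[0])
    return [[sum(F[i][k] * G[k][j] for k in range(b)) for j in range(n)] for i in range(a)]

# Hypothetical 1st-order transformation matrix C (2 x 3)
C = [[10.0, 0.5, 0.1],    # a1, a2, a3
     [20.0, 0.2, 0.6]]    # b1, b2, b3

xi, yi = 100.0, 250.0     # source coordinates
S = [[1.0], [xi], [yi]]   # 3 x 1 source matrix

R = matmul(C, S)          # 2 x 1 result: [[xo], [yo]]
print(R)
```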
Transposition The transposition of a matrix is derived by interchanging its rows and columns. Trans-
position is denoted by T, as in the example below (Cullen 1972).
2 3
G = 6 4
10 12
Gᵀ = 2 6 10
     3 4 12
APPENDIX B
File Formats and Extensions
Introduction This appendix describes all of the file formats and extensions that are used within
ERDAS IMAGINE software. However, this does not include files that are introduced
into IMAGINE by third party products. Please refer to the product's documentation for
information on those files.
Topics include:
• ERDAS IMAGINE file extensions
• .img Files
• Machine Independent Format (MIF)
• ERDAS IMAGINE HFA file format
• Vector layers
ERDAS IMAGINE File Extensions A file name extension is a suffix, usually preceded by a period, that often identifies the
type of data in a file. ERDAS IMAGINE automatically assigns the default extension
when the user is prompted to enter a file name. The part of the file name before the
extension can be used in a manner that is helpful to the user and others. The files used
within the ERDAS IMAGINE system, their extensions, and their formats are conven-
tions of ERDAS, Inc.
All of the types of files used within IMAGINE are listed in Table 31 by their extensions.
Files with an ASCII format are simply text files which can be viewed with the IMAGINE
Text Editor utility. IMAGINE HFA (hierarchical file architecture) files can be viewed with
the IMAGINE HfaView utility. The list in Table 31 does not include files that are used
by third party products. Please refer to the product's documentation for information on
those files.
The information stored in an .img file can be used to help the user visualize how
different processes change the data. For example, if the user runs a filter over the file
and creates a new file, the statistics for the two files can be compared to see how the
filter changed the data.
[Figure: the structure of an .img file and the objects it contains.]
The objects of an .img file are described in more detail on the following pages.
Use the IMAGINE Image Information utility or the HfaView utility to view the information
that is stored in an .img file.
The information in the Image Info and HfaView utilities should be modified with caution because
IMAGINE programs use this information for data input. If it is incorrect, there will be errors in
the output data for these programs.
Sensor Information When importing satellite imagery, there is usually a header file on the tape or CD-ROM
that is separate from the data. This object contains ephemeris information about the
sensor, such as:
• number of bands
The data presented are dependent upon the sensor. Each sensor provides different
types of information. The sensor object is named:
<format type>_Header
Some examples of the various sensor types are listed in the chart below.
These parameters are defined when the raster layer is created or imported into IMAGINE. Use
the Image Information utility to view the parameters.
Compression
When importing a file into IMAGINE, the user has the option to compress the data.
Currently, IMAGINE uses the run-length compression method. The amount by which the
data are compressed depends on the data in the layer. For example, if the layer contains
large, homogeneous areas (e.g., blocks of water), then compressing the layer would save
on disk space. However, if the layer is very heterogeneous, run-length compression
would not save much disk space.
Use the Import function to compress data when it is imported into IMAGINE.
Block Size
IMAGINE software uses a tiled format to store raster layers. The tiled format allows
raster layers to be displayed and resampled quickly. The raster layer is divided into tiles
(i.e., blocks) when IMAGINE creates or imports an .img file. The size of this block can
be defined when the user either creates the file or imports it. The default block size is 64
pixels by 64 pixels.
NOTE: The default block size is acceptable for most applications and should not need to be
changed.
Figure 186: Example of a 512 x 512 Layer with a Block Size of 64 x 64 Pixels
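As an illustration of the arithmetic involved (not ERDAS IMAGINE code), the following sketch computes how many 64 × 64 blocks a 512 × 512 layer contains and which block a given pixel falls in:

```python
import math

rows, cols = 512, 512
block_size = 64                                # default block (tile) size in pixels

blocks_across = math.ceil(cols / block_size)   # 8
blocks_down = math.ceil(rows / block_size)     # 8
print(blocks_across * blocks_down, "blocks in the layer")

# Which block contains the pixel at row 200, column 453?
row, col = 200, 453
print("block row", row // block_size, "block column", col // block_size)
```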
Attribute Data Attribute data for a raster layer can include:
• histogram
• contrast table
• class names
• class values
Attribute data can also include additional information for thematic raster layers, such
as the area, opacity, and attributes for each class.
Use the Raster Attribute Editor to view or modify the contents of these attribute tables.
Statistics The following statistics are calculated for each raster layer:
• minimum
• maximum
• mean
• median
• mode
• standard deviation
These statistics are based on the data file values of the pixels in the layer. Knowing the
statistics for a layer will aid the user in determining the process to be used in extracting
the features that are of the most interest. For example, if a user is planning to use the
ISODATA classifier, the statistics could be used to see if the layer has a normal distri-
bution of data, which is preferred.
If they do not exist, statistics should be created for a layer. Certain Viewer functions
(e.g., contrast tools) will not run without layer statistics. Rebuilding statistics for a raster
layer may be necessary. For example, if the user does not want to include zero file
values in the statistics calculation (and they are currently included), the statistics could
be rebuilt without zero file values.
Use the Image Information utility to view, create, or rebuild statistics for a raster layer. If the
statistics do not exist, the information in the Image Information utility will be inactive and
shaded.
Map Information Map information for a raster layer will be created only when the layer has been georef-
erenced. If the layer has been georeferenced, the following information will be stored in
the raster layer:
• map coordinates of the upper left and lower right pixels
• pixel size
• map units
The user should add or change the map information only when he or she has valid map
information to enter. If incorrect information is entered, then the data for this file will
no longer be valid. Since IMAGINE programs use these data, they must be correct.
When you import a file, the map information may not have imported correctly. If this occurs, use
the Image Info utility to update the information.
Use the Image Information utility to view, add, or change map information for a raster layer in
an .img file.
The map projection information stored for a georeferenced layer includes:
• map projection
• spheroid
• zone number
Do not add or change the map projection unless the projection listed in the Image Info utility is
incorrect or missing. This may occur when you import a file.
Changing the map projection with the Projections Editor dialog will not rectify the
layer. If the user enters incorrect information, then the data for this layer will no longer
be valid. Since IMAGINE programs use these data, they need to be correct.
Use the Image Information utility to view, add, or change the map projection for a raster layer
in an .img file. If the layer has not been georeferenced, the information in the Image Information
utility will be inactive and shaded.
Pyramid Layers IMAGINE gives the user the option to “pyramid” large raster layers for faster
processing and display in the Viewer. When the user generates pyramid layers,
reduced subsampled raster layers are created from the original raster layer. The
number of pyramid layers that are created depends on the size of the raster layer and
the block size.
For example, a raster layer that is 4k × 4k pixels could take a long time to display when
using the Fit To Window option in the Viewer. Using the Create Pyramid Layers
option, IMAGINE would create additional raster layers successively reduced from 4k
× 4k, to 2k × 2k, 1k × 1k, 512 × 512, 256 × 256, 128 × 128, down to 64 × 64. Then IMAGINE
would select the pyramid layer size most appropriate for display in the Viewer window.
Pyramid layers can be created using the Image Information utility or when the raster layer is
imported. You can also use the Image Information utility to delete pyramid layers.
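A sketch of the reduction sequence (illustrative only; the stopping rule shown simply halves the layer until it reaches the 64 × 64 block size, matching the example above):

```python
def pyramid_sizes(rows, cols, block_size=64):
    """Successively halve the layer until it fits within one block."""
    sizes = []
    while rows > block_size or cols > block_size:
        rows = max(block_size, rows // 2)
        cols = max(block_size, cols // 2)
        sizes.append((rows, cols))
    return sizes

print(pyramid_sizes(4096, 4096))
# [(2048, 2048), (1024, 1024), (512, 512), (256, 256), (128, 128), (64, 64)]
```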
Machine Independent Format
MIF Data Elements ERDAS IMAGINE uses the Machine Independent Format (MIF) to store data in a
fashion which can be read by a variety of machines. This format provides support for
converting data between the IMAGINE standard data format and that of the specific
host's architecture. Files created using this package on one machine will be readable
from another machine with no explicit data translation.
Each MIF file is made up of one or more of the data elements explained below.
[Bit-layout diagrams: EMIF_T_U1 packs eight 1-bit values into each byte, EMIF_T_U2 packs four 2-bit values into each byte, and EMIF_T_U4 packs two 4-bit values into each byte; the 1-byte character types occupy a single byte (byte 0), and the 2-byte integer types occupy bytes 0-1.]
[Bit-layout diagrams: the 2-byte integer types occupy bytes 0-1, and the 4-byte integer types (including EMIF_T_PTR, described in the note below) occupy bytes 0-3.]
NOTE: Currently, this element appears in the data dictionary as an EMIF_T_ULONG element.
In future versions of the file format, the EMIF_T_PTR will be expanded to an 8-byte format,
allowing indexing with 64 bits and addressing of 16 billion gigabytes of file space.
[Bit-layout diagrams: additional 4-byte integer types; the floating-point types, stored as a sign bit, exponent, and fraction (bits 31, 30-23, and 22-0 for 4-byte values; bits 63, 62-52, and 51-0 for 8-byte values), used singly for EMIF_T_FLOAT and EMIF_T_DOUBLE and as real/imaginary pairs for EMIF_T_COMPLEX and EMIF_T_DCOMPLEX; and the two leading 4-byte integers of an EMIF_T_BASEDATA element, which give its number of rows and number of columns. The remaining fields of EMIF_T_BASEDATA are described below.]
datatype: This indicates the type of data stored here. The types are:

DataType               BytesPerObject
0   EMIF_T_U1          1/8
1   EMIF_T_U2          1/4
3   EMIF_T_U4          1/2
4   EMIF_T_UCHAR       1
5   EMIF_T_CHAR        1
6   EMIF_T_USHORT      2
7   EMIF_T_SHORT       2
8   EMIF_T_ULONG       4
9   EMIF_T_LONG        4
10  EMIF_T_FLOAT       4
11  EMIF_T_DOUBLE      8
12  EMIF_T_COMPLEX     8
13  EMIF_T_DCOMPLEX    16
[Diagram: a 2-byte integer field occupying bytes 8-9 of the element.]
objecttype: This indicates the object type of the data. This is used in the IMAGINE
Spatial Modeler. The valid values are:
0 SCALAR. This will not normally be the case, since a scalar has a
single value.
1 TABLE: This indicates that the object is an array. The numcolumns should be
1.
2 MATRIX: This indicates the number of rows and columns is greater than one.
This is used for Coefficient matrices, etc.
3 RASTER: This indicates that the number of rows and columns is greater than
one and the data are just a part of a larger raster object. This would be the
case for blocks of images which are written to the file.
[Diagram: a 2-byte integer field occupying bytes 10-11 of the element.]
data: This is the actual data. The number of bytes is given by the number of rows, times
the number of columns, times the BytesPerObject for the datatype.

[Diagrams: 4-byte integer fields. For the indirect forms of an element, the next four bytes
provide the file pointer which points to the data comprising the object; for variable-size
indirect elements, a 4-byte count of items precedes the pointer.]
MIF Data Dictionary IMAGINE HFA files have a data dictionary that describes the contents of each of the
different types of nodes. The dictionary is a compact ASCII string which is usually
placed at the end of the file, with a pointer to the start of the dictionary stored in the
header of the file.
Each object is defined like a structure in C, and consists of one or more items. Each item
is composed of an ItemType and a name. The ItemType indicates the type of data and
the name indicates the name by which the item will be known.
Dictionary ObjectDefinition[ObjectDefinition...] .
ItemType    Interpretation       Number of Bytes
1           EMIF_T_U1            1
2           EMIF_T_U2            1
4           EMIF_T_U4            1
c           EMIF_T_UCHAR         1
C           EMIF_T_CHAR          1
e           EMIF_T_ENUM          2
s           EMIF_T_USHORT        2
S           EMIF_T_SHORT         2
t           EMIF_T_TIME          4
l           EMIF_T_ULONG         4
L           EMIF_T_LONG          4
f           EMIF_T_FLOAT         4
d           EMIF_T_DOUBLE        8
m           EMIF_T_COMPLEX       8
M           EMIF_T_DCOMPLEX      16
b           EMIF_T_BASEDATA      dynamic
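For illustration only (a sketch, not the IMAGINE implementation), the table can be expressed as a lookup that a dictionary parser might use to determine how many bytes each item occupies; the packed bit types and the dynamic size of EMIF_T_BASEDATA are treated as special cases:

```python
# Bytes occupied by each dictionary ItemType character (from the table above).
# The packed types '1', '2', and '4' store 8, 4, and 2 values per byte,
# and 'b' (EMIF_T_BASEDATA) has a size that depends on its own header fields.
ITEM_SIZES = {
    '1': 1, '2': 1, '4': 1,          # one byte holds several packed values
    'c': 1, 'C': 1,
    'e': 2, 's': 2, 'S': 2,
    't': 4, 'l': 4, 'L': 4, 'f': 4,
    'd': 8, 'm': 8,
    'M': 16,
    'b': None,                       # dynamic
}

def item_size(item_type, count=1):
    size = ITEM_SIZES[item_type]
    if size is None:
        raise ValueError("EMIF_T_BASEDATA size depends on its rows, columns, and datatype")
    return size * count
```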
ERDAS IMAGINE HFA File Format Many of the files created and used by ERDAS IMAGINE are stored in a hierarchical file
architecture (HFA). This format allows any number of different types of data elements
to be stored in the file in a tree structured fashion. This tree is built of nodes which
contain a variety of types of data. The contents of the nodes (as well as the structural
information) are saved in the file in a machine independent format (MIF), which allows
the files to be shared between computers of differing architectures.
Hierarchical File Architecture The hierarchical file architecture maintains an object-oriented representation of data in
an IMAGINE disk file through use of a tree structure. Each object is called an entry and
occupies one node in the tree. Each object has a name and a type. The type refers to a
description of the data contained by that object. Additionally, each object may contain a
pointer to a subtree of more nodes. All entries are stored in MIF and can be accessed
directly by name.
Use the IMAGINE HfaView utility to view the objects of a file that uses the HFA format.
[Figure: an HFA file tree — the file header points to a root node, and each node can contain data and point to child nodes (e.g., Node_4 and Node_5, each with data).]
Figure 188 is an example of an HFA file structure for a thematic raster layer in an .img
file. If there were more attributes in the IMAGINE Raster Attribute Editor, then they
would appear as objects under the Descriptor Table object.
[Figure 188: HFA file structure for a thematic raster layer in an .img file; the root has a child named Layer_1 of type Eimg_Layer, with its own child objects.]
Pre-defined HFA File Object Types There are three categories of pre-defined HFA File Object Types found in .img files:

• Basic HFA File Object Types
These sections list each object with two different detailed definitions. The first
definition shows how the object appears in the data dictionary in the HFA file. The
second definition is a table that shows the type, name, and description of each item in
the object. An item within an object can be an element or another object.
If an item is an element, then the item type is one of the basic types previously given
with the EMIF_T_ prefix omitted. For example, the item type for EMIF_T_CHAR would
be shown as CHAR.
If an item is a previously defined object type, then the type is simply the name of the
previously defined item.
If the item is an array, then the number of elements is given in square brackets [n] after
the type. For example, the type for an item with an array of 16 EMIF_T_CHAR would
appear as CHAR[16]. If the item is an indirect item of fixed size (it is a pointer to an
item), then the type is followed by an asterisk “*.” For example, a pointer to an item with
an array of 16 EMIF_T_CHAR would appear as CHAR[16] *. If the item is an indirect
item of variable size (it is a pointer to an item and the number of items), then the type
is followed by a “p.” For example, a pointer to an item with a variable sized array of
characters would look like CHAR p.
NOTE: If the item type is shown as PTR, then this item will be encoded in the data dictionary
as a ULONG element.
• Ehfa_HeaderTag
• Ehfa_File
• Ehfa_Entry
Ehfa_HeaderTag
The Ehfa_HeaderTag is used as a unique signature at the beginning of an ERDAS
IMAGINE HFA file. It must always occupy the first 20 bytes of the file.
{16:clabel,1:lheaderPtr,}Ehfa_HeaderTag,
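As an illustration only (a sketch that assumes MIF integers are stored least significant byte first and that the 16-character label in an IMAGINE file reads EHFA_HEADER_TAG; neither assumption is stated in this appendix), the first 20 bytes could be read as:

```python
import struct

# Read the Ehfa_HeaderTag: a 16-character label followed by a 4-byte pointer
# to the Ehfa_File record (assumed byte order: least significant byte first).
with open("example.img", "rb") as f:            # hypothetical file name
    label, header_ptr = struct.unpack("<16sI", f.read(20))

print(label.rstrip(b"\x00"), header_ptr)
```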
Ehfa_File
The Ehfa_File is composed of several main parts, including the free list, the dictionary,
and the object tree. This entry is used to keep track of these items in the file, since they
may begin anywhere in the file.
{1:Lversion,1:lfreeList,1:lrootEntryPtr,1:SentryHeaderLength,1:ldictionaryPtr,}
Ehfa_File,
Ehfa_Entry
The Ehfa_Entry contains the header information for each node in the object tree,
including the name and type of the node as well as the parent/child information.
{1:lnext,1:lprev,1:lparent,1:lchild,1:ldata,1:ldataSize,64:cname,32:ctype,
1:tmodTime,}Ehfa_Entry,
Eimg_Layer
An Eimg_Layer object is the base node for a single layer of imagery. This object
describes the basic information for the layer, including its width and height in pixels,
its data type, and the width and height of the blocks used to store the image. Other
information such as the actual pixel data, map information, projection information, etc.,
are stored as child objects under this node. The child objects that are usually found
under the Eimg_Layer include:
• Descriptor_Table (an Edsc_Table object which contains the histogram and other
pixel value related data)
• Ehfa_Layer (an Ehfa_Layer object which describes the type of data in the layer)
{1:oEmif_String,dependent,}Eimg_DependentFile,
Eimg_DependentLayerName
The Eimg_DependentLayerName object normally exists as the child of an Eimg_Layer
in an .aux file. It contains the original name of the layer of which it is a child in the
original imagery file being served by this .aux. It only exists in .aux files serving
imagery files of a format supported by a RasterFormats DLL Instance which does not
define a FileLayerNamesSet interface function (because these DLL Instances are
obviously incapable of supporting layer name changes).
{1:oEmif_String,ImageLayerName,}Eimg_DependentLayerName,
Eimg_Layer_SubSample
An Eimg_Layer_SubSample object is a node which contains a subsampled version of
the layer defined by the parent node. Nodes of this form are named _ss_2, _ss_4,
_ss_8, etc. This stands for SubSampled by 2, SubSampled by 4, etc. This node will have
an Edms_State node called RasterDMS and an Ehfa_Layer node called Ehfa_layer
under it. This will be present if pyramid layers have been computed.
[Table fragment: items of the Eimg_Layer object]
ENUM layerType    The type of layer:
0 = ”thematic”
1 = ”athematic”
ENUM pixelType    The type of the pixels:
0 = ”u1”
1 = ”u2”
2 = ”u4”
3 = ”u8”
4 = ”s8”
5 = ”u16”
6 = ”s16”
7 = ”u32”
8 = ”s32”
9 = ”f32”
10 = ”f64”
11 = ”c64”
12 = ”c128”
LONG blockWidth    The width of each block in the layer.
LONG blockHeight   The height of each block in the layer.
{1:*bvalueBD,}Eimg_NonInitializedValue,
Eimg_MapInformation
The Eimg_MapInformation object contains the map projection system and the map
units applicable to the MapToPixelXForm object that is its sibling. As a child of an
Eimg_Layer, it will have the name MapInformation.
{1:oEmif_String,projection,1:oEmif_String,units,}Eimg_MapInformation,
Eimg_RRDNamesList
The Eimg_RRDNamesList object contains a list of layers of a resolution different
(reduced) than the original. As a child of an Eimg_Layer, it will have the name
RRDNamesList.
{1:oEmif_String,algorithm,0:poEmif_String,nameList,}Eimg_RRDNamesList,
Eimg_StatisticsParameters830
The Eimg_StatisticsParameters830 object contains statistics parameters that control the
computation of certain statistics. The parameters can apply to the computation of
Covariance, scalar Statistics of a layer, or the Histogram of a layer. In these cases, the
object will be named CovarianceParameters, StatisticsParameters, and HistogramPa-
rameters. The CovarianceParameters will exist as a sibling of the Covariance, and the
StatisticsParameters and HistogramParameters will be children of the Eimg_Layer to
which they apply.
{0:poEmif_String,LayerNames,1:*bExcludedValues,1:oEmif_String,AOIname,
1:lSkipFactorX,1:lSkipFactorY,1:*oEdsc_BinFunction,BinFunction,}
Eimg_StatisticsParameters830,
Ehfa_Layer
The Ehfa_Layer is used to indicate the type of layer. The initial design for the IMAGINE
files allowed for both raster and vector layers. Currently, the vector layers have not
been implemented.
{1:e2:raster,vector,type,1:ldictionaryPtr,}Ehfa_Layer,
0=”raster”
1=”vector”
ULONG dictionaryPtr This points to a dictionary entry which
describes the data. In the case of raster data, it
points to a dictionary pointer which describes
the contents of each block via the RasterDMS
definition given below.
{<n>:<t>data,}RasterDMS,

where <n> is the number of pixel values in the block and <t> is one of the following type characters:
1 - Unsigned 1-bit
2 - Unsigned 2-bit
4 - Unsigned 4-bit
c - Unsigned 8-bit
C - Signed 8-bit
s - Unsigned 16-bit
S - Signed 16-bit
l - Unsigned 32-bit
L - Signed 32-bit
Edms_VirtualBlockInfo
An Edms_VirtualBlockInfo object describes a single raster data block of a layer. It
describes where to find the data in the file, how many bytes are in the data block, and
how to unpack the data from the block. For uncompressed data the unpacking is
straight forward. The scheme for compressed data is described below.
0=”false”
1=”true”
ENUM compressionType This indicates the type of compression used for
this block.
0=”no compression”
For uncompressed blocks, the data are simply packed into the block one pixel value at
a time. Each pixel is read from the block as indicated by its data type. All non-integer
data are uncompressed.
• The byte size of the output pixels is determined by examining the difference
between the maximum and the minimum. If the difference is less than or equal to 256,
then 8-bit data are used. If the difference is less than 65,536, then 16-bit data are
used; otherwise, 32-bit data are used.
• A run-length encoding scheme is used to encode runs of the same pixel value. The
data minimum value occupies the first 4 bytes of the block. The number of run-
length segments occupies the next 4 bytes, and the next 4 bytes are an offset into the
block which indicates where the compressed pixel values begin. The next byte
indicates the number of bits per pixel (1,2,4,8,16,32). These four values are encoded
in the standard MIF format (unsigned long, or ULONG). Following this is the list
of segment counts, following the segment counts are the pixel values. There is one
segment count per pixel value.
There may be 1, 2, 3, or 4 bytes per count. The first two bits of the first count byte
contain 0, 1, 2, or 3, indicating that the count is contained in 1, 2, 3, or 4 bytes. The rest
of the first byte (6 bits) represents the six most significant bits of the count. Each following
byte, if present, represents bits of decreasing significance.
NOTE: This order is different than the rest of the package. This was done so that the high byte
with the encoded byte count would be first in the byte stream. This pattern is repeated as many
times as indicated by the numsegments field.
The data values are compressed into the remaining space packed into as many bits per
pixel as indicated by the numbitpervalue field.
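A sketch of decoding one variable-length segment count as described above (illustrative only, not taken from the IMAGINE source):

```python
def decode_segment_count(block, pos):
    """Decode one run-length segment count starting at byte offset pos.

    The top two bits of the first byte give how many bytes (1-4) hold the
    count; the remaining six bits are the most significant bits, and each
    following byte supplies eight less significant bits.
    """
    first = block[pos]
    nbytes = (first >> 6) + 1
    count = first & 0x3F
    for i in range(1, nbytes):
        count = (count << 8) | block[pos + i]
    return count, pos + nbytes

# Example: bytes 0x41, 0x2C encode a two-byte count of (1 << 8) | 0x2C = 300
print(decode_segment_count(bytes([0x41, 0x2C]), 0))
```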
Edms_FreeIDList
An Edms_FreeIDList is used to track blocks which have been freed from the layer. The
freelist consists of an array of min/max pairs which indicate unused contiguous blocks
of data which lie within the allocated layer space. Currently this object is unused and
reserved for future expansion.
{1:Lmin,1:Lmax,}Edms_FreeIDList,
Edms_State
The Edms_State describes the location of each of the blocks of a single layer of imagery.
Basically, this object is an index of all of the blocks in the layer.
{1:lnumvirtualblocks,1:lnumobjectsperblock,1:lnextobjectnum,
1:e2:no compression,RLC compression,compressionType,
0:poEdms_VirtualBlockInfo,blockinfo,0:poEdms_FreeIDList,freelist,
1:tmodTime,}Edms_State
0=”no compression”
1=”RLC compression”
Edsc_Table
An Edsc_Table is a base node used to store columns of information. This serves simply
as a parent node for each of the columns which are a part of the table.
{1:lnumRows,} Edsc_Table,
Edsc_BinFunction
The Edsc_BinFunction describes how pixel values from the associated layer are to be
mapped into an index for the columns.
{1:lnumBins,1:e4:direct,linear,logarithmic,explicit,binFunctionType,
1:dminLimit,1:dmaxLimit,1:*bbinLimits,} Edsc_BinFunction,
0=”direct”
1=”linear”
2=” exponential”
3=”explicit”
DOUBLE minLimit The lowest value defined by the bin function.
DOUBLE maxLimit The highest value defined by the bin func-
tion.
BASEDATA binLimits The limits used to define the bins.
Edsc_Column
The columns of information which are stored in a table are stored in this format.
{1:lnumRows,1:LcolumnDataPtr,1:e4:integer,real,complex,string,dataType,
1:lmaxNumChar,} Edsc_Column,
0=”integer” (EMIF_T_LONG)
1=”real” (EMIF_T_DOUBLE)
2=”complex” (EMIF_T_DCOMPLEX)
3=”string” (EMIF_T_CHAR)
LONG maxNumChars The maximum string length (for string data
only). It is 0 if the type is not a String.
The types of information stored in columns are given in the following table.
Eded_ColumnAttributes_1
The Eded_ColumnAttributes_1 stores the descriptor column properties which are used
by the Raster Attribute Editor for the format and layout of the descriptor column
display in the Raster Attribute Editor CellArray. The properties include the position of
the descriptor column within the CellArray, the name, alignment, format, and width of
the column, whether the column is editable, the formula (if any) for the column, the
units (for numeric data), and whether the column is a component of a color column.
Each Eded_ColumnAttributes_1 is a child of the Edsc_Column containing the data for
the descriptor column. The properties for a color column are stored as a child of the
Eded_ColumnAttributes_1 for the red component of the color column.
{1:lposition,0:pcname,1:e2:FALSE,TRUE,editable,
1:e3:LEFT,CENTER,RIGHT,alignment,0:pcformat,
1:e3:DEFAULT,APPLY,AUTO-APPLY,formulamode,0:pcformula,1:dcolumnwidth,
0:pcunits,1:e5:NO_COLOR,RED,GREEN,BLUE,COLOR,colorflag,0:pcgreenname,
0:pcbluename,}Eded_ColumnAttributes_1,
Esta_Statistics
The Esta_Statistics is used to describe the statistics for a layer.
{1:dminimum,1:dmaximum,1:dmean,1:dmedian,1:dmode,1:dstddev,}
Esta_Statistics,
Esta_Covariance
The Esta_Covariance object is used to record the covariance matrix for the layers in an
.img file.
{1:bcovariance,}Esta_Covariance,
Esta_SkipFactors
The Esta_SkipFactors object is used to record the skip factors that were used when the
statistics or histogram was calculated for a raster layer or when the covariance was
calculated for an .img file.
{1:LskipFactorX,1:LskipFactorY,}Esta_SkipFactors,
Esta_ExcludedValues
The Esta_ExcludedValues object is used to record the values that were excluded from
consideration when the statistics or histogram was calculated for a raster layer or when
the covariance was calculated for an .img file.
{1:*bvalueBD,}Esta_ExcludedValues,
{0:pcdatumname,1:e3:EPRJ_DATUM_PARAMETRIC,EPRJ_DATUM_GRID,
EPRJ_DATUM_REGRESSION,type,0:pdparams,0:pcgridname,}Eprj_Datum,
Eprj_Spheroid
The Eprj_Spheroid is used to describe the spheroid parameters that define the shape
of the earth.
{0:pcsphereName,1:da,1:db,1:deSquared,1:dradius,}Eprj_Spheroid,
Eprj_ProParameters
The Eprj_ProParameters is used to define the map projection for a layer.
{1:e2:EPRJ_INTERNAL,EPRJ_EXTERNAL,proType,1:lproNumber,
0:pcproExeName,0:pcproName,1:lproZone,0:pdproParams,
1:*oEprj_Spheroid,proSpheroid,}Eprj_ProParameters.
The following table defines the contents of the proParams array which is defined above.
The Parameters column defines the meaning of the various elements of the proParams
array for the different projections. Each one is described by one or more statements of
the form n: Description. n is the index into the array.
Name Parameters
0 ”Geographic(Latitude/Longitude)” None Used
1 ”UTM” 3: 1=North, -1=South
2 ”State Plane” 0: 0=NAD27, 1=NAD83
3 ”Albers Conical Equal Area” 2: Latitude of 1st standard parallel
6: False Easting
7: False Northing
4 ”Lambert Conformal Conic” 2: Latitude of 1st standard parallel
6: False Easting
7: False Northing
5 ”Mercator” 4: Longitude of central meridian
6: False Easting
7: False Northing
6 ”Polar Stereographic” 4: Longitude directed straight down below
pole of map.
6: False Easting
7: False Northing.
7 ”Polyconic” 4: Longitude of central meridian
6: False Easting
7: False Northing
8 ”Equidistant Conic” 2: Latitude of standard parallel (Case 0)
6: False Easting
7: False Northing
8: 0=Case 0, 1=Case 1.
9 ”Transverse Mercator” 2: Scale Factor at Central Meridian
6: False Easting
7: False Northing
10 ”Stereographic” 4: Longitude of center of projection
6: False Easting
7: False Northing
11 ”Lambert Azimuthal Equal-area” 4: Longitude of center of projection
6: False Easting
7: False Northing
12 ”Azimuthal Equidistant” 6: False Easting
7: False Northing
13 ”Gnomonic” 4: Longitude of center of projection
6: False Easting
7: False Northing
14 ”Orthographic” 4: Longitude of center of projection
6: False Easting
7: False Northing
15 ”General Vertical Near-Side Perspective” 2: Height of perspective point above sphere.
4: Longitude of center of projection
6: False Easting
7: False Northing
16 ”Sinusoidal” 4: Longitude of central meridian
6: False Easting
7: False Northing
17 ”Equirectangular” 4: Longitude of central meridian
6: False Easting
7: False Northing
18 ”Miller Cylindrical” 4: Longitude of central meridian
6: False Easting
7: False Northing
19 ”Van der Grinten I” 4: Longitude of central meridian
6: False Easting
7: False Northing
20 ”Oblique Mercator (Hotine)” 2: Scale Factor at center of projection
6: False Easting
7: False Northing.
6: False Easting
7: False Northing
22 ”Modified Transverse Mercator” 6: False Easting
7: False Northing
Eprj_Coordinate
The Eprj_Coordinate is a pair of doubles used to define a map coordinate.
{1:dx,1:dy,}Eprj_Coordinate,
Eprj_Size
The Eprj_Size is a pair of doubles used to define a rectangular size.
{1:dx,1:dy,}Eprj_Size,
Eprj_MapInfo
The Eprj_MapInfo object is used to define the basic map information for a layer. It
defines the map coordinates for the center of the upper left and lower right pixels, as
well as the cell size and the name of the map projection.
{0:pcproName,1:*oEprj_Coordinate,upperLeftCenter,
1:*oEprj_Coordinate,lowerRightCenter,1:*oEprj_Size,pixelSize,
0:pcunits,}Eprj_MapInfo,
Efga_Polynomial
The Efga_Polynomial is used to store transformation coefficients created by the
IMAGINE GCP Tool.
{1:Lorder,1:Lnumdimtransforms,1:numdimpolynomial,1:Ltermcount,
1:*exponentList,1:bpolycoefmtx,1:bpolycoefvector,}Efga_Polynomial,
Exfr_GenericXFormHeader
The Exfr_GenericXFormHeader contains a list of GeometricModels titles for the
component XForms making up a composite Exfr_XForm. The components are written
as children of the Exfr_GenericXFormHeader with names XForm0, XForm1, ..., XFormi.
where i is the number of components listed by the Exfr_GenericXFormHeader. The
design of component XFormi is defined by the specific GeometricModels DLL instance
that controls XForms of the title specified as the ith title string in the
Exfr_GenericXFormHeader unless XFormi is of type Exfr_ASCIIXform (see below). As
a child of an Eimg_Layer, it will have the name MapToPixelXForm.
{0:poEmif_String,titleList,}Exfr_GenericXFormHeader,
Exfr_ASCIIXForm
{0:pcxForm,}Exfr_ASCIIXForm,
Calibration_Node
An object of type Calibration_Node is an empty object — it contains no data. A node of
this type simply serves as the parent node of four related child objects. The children of
the Calibration_Node are used to provide information which converts pixel coordinates
to map coordinates and vice versa. There is no dictionary definition for this object type.
A node of this type will be a child of the root node and will be named “Calibration.” The
“Calibration” node will have the four children described below.
Vector Layers The vector data structure in ERDAS IMAGINE is based on the ARC/INFO data model
(developed by ESRI, Inc.).
See "CHAPTER 2: Vector Layers" for more information on vector layers. Refer to the
ARC/INFO users manuals for detailed information on the vector data structure.
APPENDIX C
Map Projections
Introduction This appendix is an alphabetical listing of the map projections supported in ERDAS
IMAGINE. It is divided into two sections:

• USGS projections
• External projections

The external projections were implemented outside of ERDAS IMAGINE so that users
could add to these using the Developers’ Toolkit. The projections in each section are
presented in alphabetical order.
• Map Projections for Use with the Geographic Information System (Lee and Walsh 1984)
For general information about map projection types, refer to "CHAPTER 11: Cartography".
Rectify an image to a particular map projection using the ERDAS IMAGINE Rectification
tools. View, add, or change projection information using the Image Information option.
NOTE: You cannot rectify to a new map projection using the Image Information option. You
should change map projection information using Image Information only if you know the
information to be incorrect. Use the rectification tools to actually georeference an image to a new
map projection system.
Azimuthal Equidistant
Equidistant Conic
Equirectangular
Geographic (Lat/Lon)
Gnomonic
Mercator
Miller Cylindrical
Orthographic
Polar Stereographic
Polyconic
Sinusoidal
State Plane
Stereographic
Transverse Mercator
UTM
USGS Projections
Albers Conical Equal Area

Summary

Construction    Cone
Meridians       Meridians are straight lines converging on the polar axis, but not at the pole.
The Albers Conical Equal Area projection is mathematically based on a cone that is
conceptually secant on two parallels. There is no areal deformation. The North or South
Pole is represented by an arc. It retains its properties at various scales, and individual
sheets can be joined along their edges.
This projection produces very accurate area and distance measurements in the middle
latitudes (Figure 189). Thus, Albers Conical Equal Area is well-suited to countries or
continents where north-south depth is about 3/5 the breadth of east-west. When this
projection is used for the continental U.S., the two standard parallels are 29.5˚ and 45.5˚
North.
This projection possesses the property of equal area, and the standard parallels are
correct in scale and in every direction. Thus, there is no angular distortion (i.e.,
meridians intersect parallels at right angles) and conformality exists along the standard
parallels. Like other conics, Albers Conical Equal Area has concentric arcs for parallels
and equally spaced radii for meridians. Parallels are not equally spaced, but are farthest
apart between the standard parallels and closer together on the north and south edges.
Albers Conical Equal Area is the projection exclusively used by the USGS for sectional
maps of all 50 states of the U.S. in the National Atlas of 1970.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Enter two values for the desired control lines of the projection, i.e., the standard
parallels. Note that the first standard parallel is the southernmost.
Then, define the origin of the map projection in both spherical and rectangular coordi-
nates.
Enter values for longitude of the desired central meridian and latitude of the origin of
projection.
Enter values of false easting and false northing, corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough to prevent negative
coordinates from occurring within the region of the map projection. That is, the origin
of the rectangular coordinate system should fall outside of the map projection to the
south and west.
In Figure 189, the standard parallels are 20˚N and 60˚N. Note the change in spacing of
the parallels.
Azimuthal Equidistant

Summary

Construction     Plane
Property         Equidistant
Meridians        Polar aspect: the meridians are straight lines radiating from the point of tangency.
Linear scale     Oblique and equatorial aspects: linear scale is true from the point of tangency. In all
                 aspects, the projection shows distances true to scale when measured between the
                 point of tangency and any other point on the map.
Uses             The Azimuthal Equidistant projection is used for radio and seismic work, as every
                 place in the world will be shown at its true distance and direction from the point of
                 tangency. The U.S. Geological Survey uses the oblique aspect in the National Atlas
                 and for large-scale mapping of Micronesia. The polar aspect is used as the emblem
                 of the United Nations.
This projection is used mostly for polar projections because latitude rings divide
meridians at equal intervals with a polar aspect (Figure 190). Linear scale distortion is
moderate and increases toward the periphery. Meridians are equally spaced, and all
distances and directions are shown accurately from the central point.
This projection can also be used to center on any point on the earth—a city, for
example—and distance measurements will be true from that central point. Distances
are not correct or true along parallels, and the projection is neither equal area nor
conformal. Also, straight lines radiating from the center of this projection represent
great circles.
Prompts
The following prompts display in the Projection Chooser if Azimuthal Equidistant is
selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Define the center of the map projection in both spherical and rectangular coordinates.
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.
Equidistant Conic
Summary
Construction Cone
Property Equidistant
With Equidistant Conic (Simple Conic) projections, correct distance is achieved along
the line(s) of contact with the cone, and parallels are equidistantly spaced. It can be used
with either one (A) or two (B) standard parallels. This projection is neither conformal
nor equal area, but the north-south scale along meridians is correct. The North or South
Pole is represented by an arc. Because scale distortion increases with increasing
distance from the line(s) of contact, the Equidistant Conic is used mostly for mapping
regions predominantly east-west in extent. The USGS uses the Equidistant Conic in an
approximate form for a map of Alaska.
Prompts
The following prompts display in the Projection Chooser if Equidistant Conic is
selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Enter values for the longitude of the desired central meridian and the latitude of the
origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough so that no negative
coordinates will occur within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south
and west.
Enter one or two values for the desired control line(s) of the projection, i.e., the standard
parallel(s). Note that if two standard parallels are used, the first is the southernmost.
Equirectangular (Plate Carrée)
Summary
Construction Cylinder
Property Compromise
This projection is valuable for its ease in computer plotting. It is useful for mapping
small areas, such as city maps, because of its simplicity. The USGS uses Equirectangular
for index maps of the conterminous U.S. with insets of Alaska, Hawaii, and various
islands. However, neither scale nor projection is marked to avoid implying that the
maps are suitable for normal geographic information.
Prompts
The following prompts display in the Projection Chooser if Equirectangular is selected.
Respond to the prompts as described.
Spheroid Name:
Datum Name:
Enter a value for longitude of the desired central meridian to center the projection and
the latitude of true scale.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.
General Vertical Near-side Perspective

Summary

Construction         Plane
Property             Compromise
Graticule spacing    Equatorial and oblique aspects: parallels are elliptical arcs that are not evenly
                     spaced. Meridians are elliptical arcs that are not evenly spaced, except for the
                     central meridian, which is a straight line.
Linear scale         Radial scale decreases from true scale at the center to zero on the projection edge.
                     The scale perpendicular to the radii decreases, but not as rapidly (ESRI 1992).
Uses                 Often used to show the earth or other planets and satellites as seen from space.
                     Used as an aesthetic presentation, rather than for technical applications (ESRI 1992).
Central meridian and a particular parallel (if shown) are straight lines. Other meridians
and parallels are usually arcs of circles or ellipses, but some may be parabolas or hyper-
bolas. Like all perspective projections, General Vertical Near-side Perspective cannot
illustrate the entire globe on one map—it can represent only part of one hemisphere.
Prompts
The following prompts display in the Projection Chooser if General Vertical Near-side
Perspective is selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
Enter a value for desired height of the perspective point above the sphere in the same
units as the radius.
Then, define the center of the map projection in both spherical and rectangular coordi-
nates.
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.
Geographic (Lat/Lon) The Geographic is a spherical coordinate system composed of parallels of latitude (Lat)
and meridians of longitude (Lon) (Figure 191). Both divide the circumference of the
earth into 360 degrees, which are further subdivided into minutes and seconds (60 sec
= 1 minute, 60 min = 1 degree).
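As a small illustration of this subdivision (not ERDAS IMAGINE code), a degrees-minutes-seconds value can be converted to decimal degrees as follows; the sample coordinate is hypothetical.

```python
def dms_to_decimal(degrees, minutes, seconds):
    """Convert degrees, minutes, seconds to decimal degrees (60 sec = 1 min, 60 min = 1 degree)."""
    return degrees + minutes / 60.0 + seconds / 3600.0

# 33 degrees 45 minutes 30 seconds North
print(dms_to_decimal(33, 45, 30))   # 33.7583...
```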
Because the earth spins on an axis between the North and South Poles, concentric,
parallel circles can be constructed, with a reference line exactly at the north-south
center, termed the equator. The series of circles north of the equator are termed
north latitudes and run from 0˚ latitude (the equator) to 90˚ North latitude (the North
Pole), and similarly southward. Position in an east-west direction is determined from
lines of longitude. These lines are not parallel and they converge at the poles. However,
they intersect lines of latitude perpendicularly.
Unlike the equator in the latitude system, there is no natural zero meridian. In 1884, it
was finally agreed that the meridian of the Royal Observatory in Greenwich, England,
would be the prime meridian. Thus, the origin of the geographic coordinate system is
the intersection of the equator and the prime meridian. Note that the 180˚ meridian is
the international date line.
If the user chooses Geographic from the Projection Chooser, the following prompts will
display:
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Note that in responding to prompts for other projections, values for longitude are negative west
of Greenwich and values for latitude are negative south of the equator.
Figure 191 shows the graticule of meridians and parallels on the global surface.
Gnomonic
Summary

Construction         Plane
Property             Compromise
Meridians            Polar aspect: the meridians are straight lines radiating from the point of tangency.
Parallels            Oblique and equatorial aspects: parallels are ellipses, parabolas, or hyperbolas
                     concave toward the poles (except for the equator, which is straight).
Graticule spacing    Polar aspect: the meridian spacing is equal and increases away from the pole. The
                     parallel spacing increases very rapidly from the pole.
Gnomonic is a perspective projection that projects onto a tangent plane from a position
in the center of the earth. Because of the close perspective, this projection is limited to
less than a hemisphere. However, it is the only projection which shows all great circles
as straight lines. With a polar aspect, the latitude intervals increase rapidly from the
center outwards.
With an equatorial or oblique aspect, the equator is straight. Meridians are straight and
parallel, while intervals between parallels increase rapidly from the center and parallels
are convex to the equator.
Because great circles are straight, this projection is useful for air and sea navigation.
Rhumb lines are curved, which is the opposite of the Mercator projection.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Define the center of the map projection in both spherical and rectangular coordinates.
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.
Lambert Azimuthal Equal Area

Summary

Construction         Plane
Meridians            Polar aspect: the meridians are straight lines radiating from the point of tangency.
                     Oblique and equatorial aspects: meridians are complex curves concave toward a
                     straight central meridian, except the outer meridian of a hemisphere, which is a
                     circle.
Parallels            Polar aspect: parallels are concentric circles. Oblique and equatorial aspects: the
                     parallels are complex curves. The equator on the equatorial aspect is a straight line.
Graticule spacing    Polar aspect: the meridian spacing is equal and increases, and the parallel spacing
                     is unequal and decreases toward the periphery of the projection. The graticule
                     spacing, in all aspects, retains the property of equivalence of area.
Linear scale         Linear scale is better than most azimuthals, but not as good as the equidistant.
                     Angular deformation increases toward the periphery of the projection. Scale
                     decreases radially toward the periphery of the map projection. Scale increases
                     perpendicular to the radii toward the periphery.
Uses                 The polar aspect is used by the U.S. Geological Survey in the National Atlas. The
                     polar, oblique, and equatorial aspects are used by the U.S. Geological Survey for
                     the Circum-Pacific Map.
In the polar aspect, latitude rings decrease their intervals from the center outwards. In
the equatorial aspect, parallels are curves flattened in the middle. Meridians are also
curved, except for the central meridian, and spacing decreases toward the edges.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Define the center of the map projection in both spherical and rectangular coordinates.
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.
In Figure 192, three views of the Lambert Azimuthal Equal Area projection are shown:
A) Polar aspect, showing one hemisphere; B) Equatorial aspect, frequently used in old
atlases for maps of the eastern and western hemispheres; C) Oblique aspect, centered
on 40˚N.
Lambert Conformal Conic

Summary

Construction    Cone
Property        Conformal
This projection is very similar to Albers Conical Equal Area, described previously. It is
mathematically based on a cone that is tangent at one parallel or, more often, that is
conceptually secant on two parallels (Figure 193). Areal distortion is minimal, but
increases away from the standard parallels. North or South Pole is represented by a
point—the other pole cannot be shown. Great circle lines are approximately straight. It
retains its properties at various scales, and sheets can be joined along their edges. This
projection, like Albers, is most valuable in middle latitudes, especially in a country
sprawling east to west like the U.S. The standard parallels for the U.S. are 33˚ and 45˚N.
The major property of this projection is its conformality. At all coordinates, meridians
and parallels cross at right angles. The correct angles produce correct shapes. Also,
great circles are approximately straight. The conformal property of Lambert Conformal
Conic and the straightness of great circles makes it valuable for landmark flying.
Lambert Conformal Conic is the State Plane coordinate system projection for states of
predominant east-west expanse. Since 1962, Lambert Conformal Conic has been used
for the International Map of the World between 84˚N and 80˚S.
In comparison with Albers Conical Equal Area, Lambert Conformal Conic possesses
true shape of small areas, whereas Albers possesses equal area. Unlike Albers, parallels
of Lambert Conformal Conic are spaced at increasing intervals the farther north or
south they are from the standard parallels.
Prompts
The following prompts display in the Projection Chooser if Lambert Conformal Conic
is selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Enter two values for the desired control lines of the projection, i.e., the standard
parallels. Note that the first standard parallel is the southernmost.
Then, define the origin of the map projection in both spherical and rectangular coordi-
nates.
Enter values for longitude of the desired central meridian and latitude of the origin of
projection.
Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough to ensure that there will
be no negative coordinates within the region of the map projection. That is, the origin
of the rectangular coordinate system should fall outside of the map projection to the
south and west.
In Figure 193, the standard parallels are 20˚N and 60˚N. Note the change in spacing of
the parallels.
Mercator
Summary
Construction Cylinder
Property Conformal
This famous cylindrical projection was originally designed by Flemish map maker
Gerhardus Mercator in 1569 to aid navigation (Figure 194). Meridians and parallels are
straight lines and cross at 90˚ angles. Angular relationships are preserved. However, to
preserve conformality, parallels are placed increasingly farther apart with increasing
distance from the equator. Due to extreme scale distortion in high latitudes, the
projection is rarely extended beyond 80˚N or S unless the latitude of true scale is other
than the equator. Distance scales are usually furnished for several latitudes.
Rhumb lines, which show constant direction, are straight. For this reason a Mercator
map was very valuable to sea navigators. However, rhumb lines are not the shortest
path; great circles are the shortest path. Most great circles appear as long arcs when
drawn on a Mercator map.
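The increasing separation of the parallels follows from the spherical Mercator equation for y, shown in the short sketch below (the sphere radius is an example value only).

# Spherical Mercator spacing of the parallels: y = R * ln(tan(pi/4 + lat/2)).
from math import radians, tan, log, pi

R = 6370997.0   # example sphere radius in meters

def mercator_y(lat_deg):
    return R * log(tan(pi/4 + radians(lat_deg) / 2))

for lat in (0, 20, 40, 60, 80):
    print(f"{lat:2d}N  y = {mercator_y(lat) / 1000.0:10.1f} km")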
Prompts
The following prompts display in the Projection Chooser if Mercator is selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Define the origin of the map projection in both spherical and rectangular coordinates.
Enter values for longitude of the desired central meridian and latitude at which true
scale is desired. Selection of a parameter other than the equator can be useful for making
maps in extreme north or south latitudes.
Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of true scale. These values must be in meters. It is very
often convenient to make them large enough so that no negative coordinates will occur
within the region of the map projection. That is, the origin of the rectangular coordinate
system should fall outside of the map projection to the south and west.
In Figure 194, all angles are shown correctly; therefore, small shapes are true (i.e., the
map is conformal). Rhumb lines are straight, which makes it useful for navigation.
Miller Cylindrical
Summary
Construction Cylinder
Property Compromise
Graticule spacing Meridians are parallel and equally spaced, the lines of latitude are parallel, and the distance between them increases toward the poles. Both poles are represented as straight lines. Meridians and parallels intersect at right angles (ESRI 1992).
Linear scale While the standard parallels, or lines true to scale and free of distortion, are at latitudes 45˚N and S, only the equator is standard.
Meridians and parallels are straight lines intersecting at right angles. Meridians are
equidistant, while parallels are spaced farther apart the farther they are from the
equator. Miller Cylindrical is not equal-area, equidistant, or conformal. Miller Cylin-
drical is used for world maps and in several atlases.
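The following sketch gives the standard spherical Miller Cylindrical equations (radius and central meridian are example values); note that, unlike Mercator, the poles plot at a finite distance from the equator.

# Spherical Miller Cylindrical equations (a sketch of the standard formulation):
#   x = R * (lon - lon0),   y = R * ln(tan(pi/4 + 0.4 * lat)) / 0.8
from math import radians, tan, log, pi

R = 6370997.0    # example sphere radius in meters
lon0 = 0.0       # example central meridian

def miller(lat_deg, lon_deg):
    lat = radians(lat_deg)
    x = R * radians(lon_deg - lon0)
    y = R * log(tan(pi/4 + 0.4 * lat)) / 0.8
    return x, y

print(round(miller(90, 0)[1] / 1000), "km from the equator to the pole")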
Prompts
The following prompts display in the Projection Chooser if Miller Cylindrical is
selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Enter a value for the longitude of the desired central meridian to center the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.
This projection resembles the Mercator, but has less distortion in polar regions. Miller
Cylindrical is neither conformal nor equal area.
Modified Transverse Mercator
Summary
Construction Cone
Property Equidistant
In 1972, the USGS devised a projection specifically for the revision of a 1954 map of
Alaska which, like its predecessors, was based on the Polyconic projection. This
projection was drawn to a scale of 1:2,000,000 and published at 1:2,500,000 (map “E”)
and 1:1,584,000 (map “B”). Graphically prepared by adapting coordinates for the
Universal Transverse Mercator projection, it is identified as the Modified Transverse
Mercator projection. It resembles the Transverse Mercator in a very limited manner and
cannot be considered a cylindrical projection. It resembles the Equidistant Conic
projection for the ellipsoid in actual construction. The projection was also used in 1974
for a base map of the Aleutian-Bering Sea Region published at 1:2,500,000 scale.
It is found to be most closely equivalent to the Equidistant Conic for the Clarke 1866
ellipsoid, with the scale along the meridians reduced to 0.9992 of true scale and the
standard parallels at latitude 66.09˚ and 53.50˚N.
Prompts
The following prompts display in the Projection Chooser if Modified Transverse
Mercator is selected. Respond to the prompts as described.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.
Oblique Mercator (Hotine)
Summary
Construction Cylinder
Property Conformal
Parallels Parallels are complex curves concave toward the nearest pole.
Oblique Mercator is a cylindrical, conformal projection that intersects the global surface
along a great circle. It is equivalent to a Mercator projection that has been altered by
rotating the cylinder so that the central line of the projection is a great circle path instead
of the equator. Shape is true only within any small area. Areal enlargement increases
away from the line of tangency. The projection is reasonably accurate within a 15˚ band along the line of tangency.
The USGS uses the Hotine version of Oblique Mercator. The Hotine version is based on
a study of conformal projections published by British geodesist Martin Hotine in 1946-
47. Prior to the implementation of the Space Oblique Mercator, the Hotine version was
used for mapping Landsat satellite imagery.
Prompts
The following prompts display in the Projection Chooser if Oblique Mercator (Hotine)
is selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Designate the desired scale factor along the central line of the projection. This parameter
may be used to modify scale distortion away from this central line. A value of 1.0
indicates true scale only along the central line. A value of less than, but close to, one is
often used to lessen scale distortion away from the central line.
False easting
False northing
The center of the projection is defined by rectangular coordinates of false easting and
false northing. The origin of rectangular coordinates on this projection occurs at the
nearest intersection of the central line with the earth’s equator. To shift the origin to the
intersection of the latitude of the origin entered above and the central line of the
projection, compute coordinates of the latter point with zero false eastings and
northings, reverse the signs of the coordinates obtained, and use these for false eastings
and northings. These values must be in meters. It is very often convenient to add
additional values so that no negative coordinates will occur within the region of the
map projection. That is, the origin of the rectangular coordinate system should fall
outside of the map projection to the south and west.
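A sketch of that sign-reversal procedure is shown below using PROJ's Hotine Oblique Mercator ('omerc') through pyproj. This assumes pyproj is installed and uses PROJ parameter names rather than the Projection Chooser prompts; all parameter values are illustrative only.

# 1. Project the desired origin point with zero false easting and northing.
# 2. Reverse the signs of the coordinates and supply them as x_0 and y_0.
from pyproj import Proj

base = ("+proj=omerc +lat_0=40 +lonc=-75 +alpha=30 "
        "+k_0=0.9996 +ellps=clrk66 +no_off")

p = Proj(base)
x0, y0 = p(-75.0, 40.0)          # pyproj Proj objects take (longitude, latitude)

p_shifted = Proj(f"{base} +x_0={-x0} +y_0={-y0}")
print(p_shifted(-75.0, 40.0))    # the chosen origin now maps to roughly (0, 0)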
Two formats, A and B, are available; they differ slightly in the definition of the central line of the projection.
Format A
For format A the additional prompts are:
Format A defines the central line of the projection by the angle east of north to the
desired great circle path and by the latitude and longitude of the point along the great
circle path from which the angle is measured. Appropriate values should be entered.
Format B
Format B defines the central line of the projection by the latitude of a point on the central
line which has the desired scale factor entered previously and by the longitude and
latitude of two points along the desired great circle path. Appropriate values should be
entered.
Orthographic
Summary
Construction Plane
Property Compromise
Graticule spacing Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the graticule spacing decreases away from the center of the projection.
Linear scale Scale is true on the parallels in the polar aspect and on all circles centered at the pole of the projection in all aspects. Scale decreases along lines radiating from the center of the projection.
Uses The U.S. Geological Survey uses the Orthographic map projection in the National Atlas.
The Orthographic projection is geometrically based on a plane tangent to the earth, and
the point of projection is at infinity (Figure 196). The earth appears as it would from
outer space. Light rays that cast the projection are parallel and intersect the tangent
plane at right angles. This projection is a truly graphic representation of the earth and
is a projection in which distortion becomes a visual aid. It is the most familiar of the
azimuthal map projections. Directions from the center of the projection are true.
The Orthographic projection seldom appears in atlases. Its utility is more pictorial than
technical. Orthographic has been used as a basis for artistic maps by Rand McNally and
the USGS.
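The following sketch gives the standard spherical Orthographic equations for the oblique aspect; the sphere radius and projection center are example values.

# Spherical Orthographic equations, oblique aspect (a sketch of the standard
# formulation). (lat1, lon0) is the center of the projection.
from math import radians, sin, cos

R = 6370997.0                                  # example sphere radius in meters
lat1, lon0 = radians(40.0), radians(-100.0)    # example projection center

def ortho(lat_deg, lon_deg):
    phi, lam = radians(lat_deg), radians(lon_deg)
    cos_c = sin(lat1) * sin(phi) + cos(lat1) * cos(phi) * cos(lam - lon0)
    if cos_c < 0:
        return None                            # point is on the hidden hemisphere
    x = R * cos(phi) * sin(lam - lon0)
    y = R * (cos(lat1) * sin(phi) - sin(lat1) * cos(phi) * cos(lam - lon0))
    return x, y

print(ortho(40.0, -100.0))                     # the center maps to (0.0, 0.0)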
Prompts
The following prompts display in the Projection Chooser if Orthographic is selected.
Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Define the center of the map projection in both spherical and rectangular coordinates.
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.
Three views of the Orthographic projection are shown in Figure 196: A) Polar aspect; B)
Equatorial aspect; C) Oblique aspect, centered at 40˚N and showing the classic globe-
like view.
Polar Stereographic
Summary
Construction Plane
Property Conformal
The Polar Stereographic may be used to accommodate all regions not included in the
UTM coordinate system, regions north of 84˚N and south of 80˚S. This form is called Universal
Polar Stereographic (UPS). The projection is equivalent to the polar aspect of the Stereo-
graphic projection on a spheroid. The central point is either the North Pole or the South
Pole. Of all the polar aspect planar projections, this is the only one that is conformal.
The point of tangency is a single point—either the North Pole or the South Pole. If the
plane is secant instead of tangent, the point of global contact is a line of latitude (ESRI
1992).
Polar Stereographic stretches areas toward the periphery, and scale increases for areas
farther from the central pole. Meridians are straight and radiating; parallels are
concentric circles. Even though scale and area are not constant with Polar Stereo-
graphic, this projection, like all stereographic projections, possesses the property of
conformality.
The Astrogeology Center of the Geological Survey at Flagstaff, Arizona, has been using
the Polar Stereographic projection for the mapping of polar areas of every planet and
satellite for which there is sufficient information.
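The standard spherical equations for the north polar aspect are sketched below; the central meridian and latitude of true scale are example values only.

# Spherical Polar Stereographic, north polar aspect (a sketch of the standard
# formulation). lat_ts is the latitude of true scale (90 for a tangent plane);
# lon0 is the meridian directed straight down below the pole.
from math import radians, sin, cos, tan, pi

R = 6370997.0               # example sphere radius in meters
lon0 = radians(-45.0)       # example central meridian
lat_ts = radians(71.0)      # example latitude of true scale (secant case)
k0 = (1 + sin(lat_ts)) / 2  # equals 1.0 when true scale is at the pole itself

def polar_stereo(lat_deg, lon_deg):
    phi, lam = radians(lat_deg), radians(lon_deg)
    rho = 2 * R * k0 * tan(pi/4 - phi/2)
    return rho * sin(lam - lon0), -rho * cos(lam - lon0)

print(polar_stereo(90.0, 0.0))   # the pole projects to the origin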
Prompts
The following prompts display in the Projection Chooser if Polar Stereographic is
selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Define the origin of the map projection in both spherical and rectangular coordinates.
Ellipsoid projections of the polar regions normally use the International 1909 spheroid
(ESRI 1992).
Enter a value for longitude directed straight down below the pole for a north polar
aspect, or straight up from the pole for a south polar aspect. This is equivalent to
centering the map with a desired meridian.
Enter a value for latitude at which true scale is desired. For secant projections, specify
the latitude of true scale as any line of latitude other than 90˚N or S. For tangential
projections, specify the latitude of true scale as the North Pole, 90 00 00, or the South
Pole, -90 00 00 (ESRI 1992).
False easting
False northing
Enter values of false easting and false northing corresponding to the pole. These values
must be in meters. It is very often convenient to make them large enough to prevent
negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south
and west.
This projection is conformal and is the most scientific projection for polar regions.
Polyconic
Summary
Construction Cone
Property Compromise
Meridians The central meridian is a straight line, but all other meridians are complex curves.
Parallels Parallels (except the equator) are nonconcentric circular arcs. The equator is a straight line.
Graticule spacing All parallels are arcs of circles, but not concentric. All meridians, excepting the central meridian, are concave toward the central meridian. Parallels cross the central meridian at equal intervals but get farther apart at the east and west peripheries.
Linear scale The scale along each parallel and along the central meridian of the projection is accurate. It increases along the meridians as the distance from the central meridian increases (ESRI 1992).
Uses Used for 7.5-minute and 15-minute topographic USGS quad sheets, from 1886 to about 1957 (ESRI 1992). Used almost exclusively in slightly modified form for large-scale mapping in the United States until the 1950s.
Polyconic was developed in 1820 by Ferdinand Hassler specifically for mapping the
eastern coast of the U.S. (Figure 198). Polyconic projections are made up of an infinite
number of conic projections tangent to an infinite number of parallels. These conic
projections are placed in relation to a central meridian. Polyconic projections
compromise properties such as equal area and conformality, although the central
meridian is held true to scale.
This projection is used mostly for north-south oriented maps. Distortion increases
greatly the farther east and west an area is from the central meridian.
Prompts
The following prompts display in the Projection Chooser if Polyconic is selected.
Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Enter values for longitude of the desired central meridian and latitude of the origin of
projection.
Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough so that no negative
coordinates will occur within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south
and west.
In Figure 198, the central meridian is 100˚W. This projection is used by the U.S.
Geological Survey for topographic quadrangle maps.
Sinusoidal
Summary
Construction Pseudo-cylinder
Linear scale Linear scale is true on the parallels and the central meridian.
Sinusoidal maps achieve the property of equal area but not conformality. The equator
and central meridian are distortion free, but distortion becomes pronounced near outer
meridians, especially in polar regions.
Interrupting a Sinusoidal world or hemisphere map can lessen distortion. The inter-
rupted Sinusoidal contains less distortion because each interrupted area can be
constructed to contain a separate central meridian. Central meridians may be different
for the northern and southern hemispheres and may be selected to minimize distortion
of continents or oceans.
Sinusoidal is particularly suited to regions smaller than the entire world, especially those bordering
the equator, such as South America or Africa. Sinusoidal is also used by the USGS as a
base map for showing prospective hydrocarbon provinces and sedimentary basins of
the world.
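The spherical Sinusoidal equations are simple enough to state directly, as in the sketch below (example radius and central meridian); the equal area property follows because x shrinks by the cosine of latitude exactly as the parallels shorten.

# Spherical Sinusoidal equations (a sketch of the standard formulation).
from math import radians, cos

R = 6370997.0    # example sphere radius in meters
lon0 = -60.0     # example central meridian (roughly suits South America)

def sinusoidal(lat_deg, lon_deg):
    lat = radians(lat_deg)
    x = R * radians(lon_deg - lon0) * cos(lat)
    y = R * lat
    return x, y

print(sinusoidal(-10.0, -50.0))   # 10 degrees south, 10 degrees east of center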
Prompts
The following prompts display in the Projection Chooser if Sinusoidal is selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Enter a value for the longitude of the desired central meridian to center the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.
Space Oblique Mercator
Summary
Construction Cylinder
Property Conformal
Meridians All meridians are curved lines except for the meridian crossed by the groundtrack at each polar approach.
The Space Oblique Mercator (SOM) projection is nearly conformal and has little scale
distortion within the sensing range of an orbiting mapping satellite such as Landsat. It
is the first projection to incorporate the earth’s rotation with respect to the orbiting
satellite.
The method of projection used is the modified cylindrical, for which the central line is
curved and defined by the groundtrack of the orbit of the satellite. The line of tangency
is conceptual and there are no graticules.
The Space Oblique Mercator projection is defined by USGS. According to USGS, the X
axis passes through the descending node for each daytime scene. The Y axis is perpen-
dicular to the X axis, to form a Cartesian coordinate system. The direction of the X axis
in a daytime Landsat scene is in the direction of the satellite motion — south. The Y axis
is directed east. For SOM projections used by EOSAT, the axes are switched; the X axis
is directed east and the Y axis is directed south.
Prompts
The following prompts display in the Projection Chooser if Space Oblique Mercator is selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path
range is from 1 to 233.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.
State Plane
The State Plane is an X,Y coordinate system (not a map projection) whose zones divide
the U.S. into over 130 sections, each with its own projection surface and grid network
(Figure 199). With the exception of very narrow States, such as Delaware, New Jersey,
and New Hampshire, most States are divided into two to ten zones. The Lambert
Conformal projection is used for zones extending mostly in an east-west direction. The
Transverse Mercator projection is used for zones extending mostly in a north-south
direction. Alaska, Florida, and New York use either Transverse Mercator or Lambert
Conformal for different areas. The panhandle of Alaska is prepared on the
Oblique Mercator projection.
Zone boundaries follow state and county lines, and, because each zone is small,
distortion is less than one in 10,000. Each zone has a centrally located origin and a
central meridian which passes through this origin. Two zone numbering systems are
currently in use—the U.S. Geological Survey (USGS) code system and the National
Ocean Service (NOS) code system (Tables 33 and 34)—but other numbering systems exist.
Prompts
The following prompts will appear in the Projection Chooser if State Plane is selected.
Respond to the prompts as described.
Enter either the USGS zone code number as a positive value, or the NOS zone code
number as a negative value.
NAD27 or 83
Either North America Datum 1927 (NAD27) or North America Datum 1983 (NAD83)
may be used to perform the State Plane calculations.
• NAD83 is based on the GRS 1980 spheroid. Some zone numbers have been changed
or deleted from NAD27.
Tables for both NAD27 and NAD83 zone numbers follow (Tables 33 and 34). These tables
include both USGS and NOS code systems.
Table 33: NAD27 State Plane coordinate system zone numbers, projection types, and zone code numbers for the United States
Table 34: NAD83 State Plane coordinate system zone numbers, projection types, and zone code numbers for the United States
Stereographic
Summary
Construction Plane
Property Conformal
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are arcs of circles concave toward a straight central meridian. In the equatorial aspect, the outer meridian of the hemisphere is a circle centered at the projection center.
Parallels Polar aspect: the parallels are concentric circles.
In the equatorial aspect, all parallels except the equator are circular arcs. In the polar
aspect, latitude rings are spaced farther apart, with increasing distance from the pole.
Prompts
The following prompts display in the Projection Chooser if Stereographic is selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Define the center of the map projection in both spherical and rectangular coordinates.
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.
The Stereographic is the only azimuthal projection which is conformal. Figure 200
shows two views: A) Equatorial aspect, often used in the 16th and 17th centuries for
maps of hemispheres; B) Oblique aspect, centered on 40˚N.
Transverse Mercator
Summary
Construction Cylinder
Property Conformal
Transverse Mercator is similar to the Mercator projection except that the axis of the
projection cylinder is rotated 90˚ from the vertical (polar) axis. The contact line is then
a chosen meridian instead of the equator and this central meridian runs from pole to
pole. It loses the properties of straight meridians and straight parallels of the standard
Mercator projection (except for the central meridian, the two meridians 90˚ away, and
the equator).
Transverse Mercator also loses the straight rhumb lines of the Mercator map, but it is a
conformal projection. Scale is true along the central meridian or along two straight lines
equidistant from, and parallel to, the central meridian. It cannot be edge-joined in an
east-west direction if each sheet has its own central meridian.
In the United States, Transverse Mercator is the projection used in the State Plane
coordinate system for states with predominant north-south extent. The entire earth
from 84˚N to 80˚S is mapped with a system of projections called the Universal Trans-
verse Mercator.
Prompts
The following prompts display in the Projection Chooser if Transverse Mercator is
selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Designate the desired scale factor at the central meridian. This parameter is used to
modify scale distortion. A value of one indicates true scale only along the central
meridian. It may be desirable to have true scale along two lines equidistant from and
parallel to the central meridian, or to lessen scale distortion away from the central
meridian. A factor of less than, but close to, one is often used.
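The effect of a central scale factor slightly less than one can be sketched with the spherical approximation below; the values are illustrative (0.9996 is the familiar UTM choice), and the exact ellipsoidal behavior differs slightly.

# For the sphere, the Transverse Mercator scale factor at map distance x from
# the central meridian is k = k0 * cosh(x / (R * k0)). With k0 < 1 there are two
# lines of true scale, one on each side of the central meridian.
from math import cosh, acosh

R = 6371000.0    # example sphere radius in meters
k0 = 0.9996      # example central scale factor (the UTM value)

def scale_factor(x_meters):
    return k0 * cosh(x_meters / (R * k0))

x_true = R * k0 * acosh(1.0 / k0)      # distance at which k returns to 1
print(round(x_true / 1000), "km")      # roughly 180 km for k0 = 0.9996
print(round(scale_factor(0), 4), round(scale_factor(x_true), 4))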
Finally, define the origin of the map projection in both spherical and rectangular coordi-
nates.
Enter values for longitude of the desired central meridian and latitude of the origin of
projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough so that there will be no
negative coordinates within the region of the map projection. That is, origin of the
rectangular coordinate system should fall outside of the map projection to the south
and west.
UTM
The Transverse Mercator projection is then applied to each UTM zone. Transverse
Mercator is a transverse form of the Mercator cylindrical projection. The projection
cylinder is rotated 90˚ from the vertical (polar) axis and can then be placed to intersect
at a chosen central meridian. The UTM system specifies the central meridian of each
zone. With a separate projection for each UTM zone, a high degree of accuracy is
possible (one part in 1000 maximum distortion within each zone).
If the map to be projected extends beyond the border of the UTM zone, the entire map
may be projected for any UTM zone specified by the user.
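Because the zones are a regular 6˚ division of longitude, the zone number and central meridian can also be computed directly, as in the sketch below (longitudes in decimal degrees, west negative); the results can be checked against Table 35.

# Deriving the UTM zone number and central meridian from longitude.
def utm_zone(lon_deg):
    zone = int((lon_deg + 180.0) // 6) + 1
    zone = min(zone, 60)                  # longitude 180 E belongs to zone 60
    central_meridian = zone * 6 - 183     # in degrees east of Greenwich
    return zone, central_meridian

print(utm_zone(-84.4))   # zone 16, central meridian -87 (87 W)
print(utm_zone(3.0))     # zone 31, central meridian 3 (3 E)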
Prompts
The following prompts display in the Projection Chooser if UTM is chosen.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
UTM Zone
All values in Table 35 are in full degrees east (E) or west (W) of the Greenwich prime
meridian (0).
Figure 201: Zones of the Universal Transverse Mercator Grid in the United States
Table 35: UTM zones, central meridians, and ranges
Zone Central Meridian Range Zone Central Meridian Range
1 177W 180W-174W 31 3E 0-6E
2 171W 174W-168W 32 9E 6E-12E
3 165W 168W-162W 33 15E 12E-18E
4 159W 162W-156W 34 21E 18E-24E
5 153W 156W-150W 35 27E 24E-30E
6 147W 150W-144W 36 33E 30E-36E
7 141W 144W-138W 37 39E 36E-42E
8 135W 138W-132W 38 45E 42E-48E
9 129W 132W-126W 39 51E 48E-54E
10 123W 126W-120W 40 57E 54E-60E
11 117W 120W-114W 41 63E 60E-66E
12 111W 114W-108W 42 69E 66E-72E
13 105W 108W-102W 43 75E 72E-78E
14 99W 102W-96W 44 81E 78E-84E
15 93W 96W-90W 45 87E 84E-90E
16 87W 90W-84W 46 93E 90E-96E
17 81W 84W-78W 47 99E 96E-102E
18 75W 78W-72W 48 105E 102E-108E
19 69W 72W-66W 49 111E 108E-114E
20 63W 66W-60W 50 117E 114E-120E
21 57W 60W-54W 51 123E 120E-126E
22 51W 54W-48W 52 129E 126E-132E
23 45W 48W-42W 53 135E 132E-138E
24 39W 42W-36W 54 141E 138E-144E
25 33W 36W-30W 55 147E 144E-150E
26 27W 30W-24W 56 153E 150E-156E
27 21W 24W-18W 57 159E 156E-162E
28 15W 18W-12W 58 165E 162E-168E
29 9W 12W-6W 59 171E 168E-174E
30 3W 6W-0 60 177E 174E-180E
Van der Grinten I
Summary
Construction Miscellaneous
Property Compromise
The Van der Grinten I projection produces a map that is neither conformal nor equal
area (Figure 202). It compromises all properties, and represents the earth within a circle.
All lines are curved except the central meridian and the equator. Parallels are spaced
farther apart toward the poles. Meridian spacing is equal at the equator. Scale is true
along the equator, but increases rapidly toward the poles, which are usually not repre-
sented.
Van der Grinten I avoids the excessive stretching of the Mercator and the shape
distortion of many of the equal area projections. It has been used to show distribution
of mineral resources on the ocean floor.
Prompts
The following prompts display in the Projection Chooser if Van der Grinten I is
selected. Respond to the prompts as described.
Spheroid Name:
Datum Name:
The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."
Enter a value for the longitude of the desired central meridian to center the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.
The Van der Grinten I projection resembles the Mercator, but it is not conformal.
External Projections
The following external projections are supported in ERDAS IMAGINE and are
described in this section. Some of these projections were discussed in the previous
section. Those descriptions are not repeated here. Simply refer to the page number in
parentheses for more information.
NOTE: ERDAS IMAGINE does not support datum shifts for these external projections.
• Bipolar Oblique Conic Conformal
• Cassini-Soldner
• Laborde Oblique Mercator
• Modified Polyconic
• Modified Stereographic
• Mollweide Equal Area
• Rectified Skew Orthomorphic
• Robinson Pseudocylindrical
• Southern Orientated Gauss Conformal
• Winkel’s Tripel
Bipolar Oblique Conic Conformal
Summary
Construction Cone
Property Conformal
Parallels Parallels are complex curves concave toward the nearest pole.
Graticule spacing Graticule spacing increases away from the lines of true scale and retains the property of conformality.
Linear scale Linear scale is true along two lines that do not lie along any meridian or parallel. Scale is compressed between these lines and expanded beyond them. Linear scale is generally good, but there is as much as a 10% error at the edge of the projection as used.
Uses Used to represent one or both of the American continents. Examples are the Basement map of North America and the Tectonic map of North America.
The Bipolar Oblique Conic Conformal projection was developed by O.M. Miller and
William A. Briesemeister in 1941 specifically for mapping North and South America,
and maintains conformality for these regions. It is based upon the Lambert Conformal
Conic, using two oblique conic projections side-by-side. The two oblique conics are
joined with the poles 104˚ apart. A great circle arc 104˚ long begins at 20˚S and 110˚W,
cuts through Central America, and terminates at 45˚N and approximately 19˚59’36”W.
The scale of the map is then increased by approximately 3.5%. The origin of the coordi-
nates is made 17˚15’N, 73˚02’W.
Prompts
The following prompts display in the Projection Chooser if Bipolar Oblique Conic
Conformal is selected.
Projection Name
Spheroid Type
Datum Name
Cassini-Soldner
Summary
Construction Cylinder
Property Compromise
Graticule spacing Complex curves for all meridians and parallels, except for the equator, the central meridian, and each meridian 90˚ away from the central meridian, all of which are straight.
Linear scale Scale is true along the central meridian, and along lines perpendicular to the central meridian. Scale is constant but not true along lines parallel to the central meridian on the spherical form and nearly so for the ellipsoid.
Uses Used for topographic mapping, formerly in England and currently in a few other countries, such as Denmark, Germany, and Malaysia.
The Cassini projection was devised by C. F. Cassini de Thury in 1745 for the survey of
France. Mathematical analysis by J. G. von Soldner in the early 19th century led to more
accurate ellipsoidal formulas. Today, it has largely been replaced by the Transverse
Mercator projection, although it is still in limited use outside of the United States. It was
one of the major topographic mapping projections until the early 20th century.
The spherical form of the projection bears the same relation to the Equidistant Cylin-
drical or Plate Carrée projection that the spherical Transverse Mercator bears to the
regular Mercator. Instead of having the straight meridians and parallels of the
Equidistant Cylindrical, the Cassini has complex curves for each, except for the equator,
the central meridian, and each meridian 90˚ away from the central meridian, all of
which are straight.
There is no distortion along the central meridian if it is maintained at true scale, which
is the usual case. If it is given a reduced scale factor, the lines of true scale are two
straight lines on the map, parallel to and equidistant from the central meridian; distortion then vanishes along those two lines rather than along the central meridian.
The scale is correct along the central meridian and also along any straight line perpen-
dicular to the central meridian. It gradually increases in a direction parallel to the
central meridian as the distance from that meridian increases, but the scale is constant
along any straight line on the map that is parallel to the central meridian. Therefore,
Cassini-Soldner is more suitable for regions that are predominantly north-south in
extent, such as Great Britain, than regions extending in other directions. The projection
is neither equal area nor conformal, but is a compromise of both.
Prompts
The following prompts display in the Projection Chooser if Cassini-Soldner is selected.
Projection Name
Spheroid Type
Datum Name
Laborde Oblique Mercator
In 1928, Laborde combined a conformal sphere with a complex-algebra transformation of the Oblique Mercator projection for the topographic mapping of Madagascar. This
variation is now known as the Laborde Oblique Mercator. The central line is a great
circle arc.
Prompts
The following prompts display in the Projection Chooser if Laborde Oblique Mercator
is selected.
Projection Name
Spheroid Type
Datum Name
Modified Polyconic
Summary
Construction Cone
Property Compromise
Parallels Parallels are circular arcs. The top and bottom parallels of each sheet are nonconcentric circular arcs.
Graticule spacing The top and bottom parallels of each sheet are nonconcentric circular arcs. The two parallels are spaced from each other according to the true scale along the central meridian, which is slightly reduced.
Linear scale Scale is true along each parallel and along two meridians, but no parallel is “standard.”
Uses Used for the International Map of the World series until 1962.
The Modified Polyconic projection was devised by Lallemand of France, and in 1909 it
was adopted by the International Map Committee (IMC) in London as the basis for the
1:1,000,000-scale International Map of the World (IMW) series.
The projection differs from the ordinary Polyconic in two principal features: all
meridians are straight, and there are two meridians that are made true to scale.
Adjacent sheets fit together exactly, not only north to south but also east to west. When sheets are mosaicked in all directions, however, a gap remains between each sheet and its diagonal neighbor on one side or the other.
In 1962, a U.N. conference on the IMW adopted the Lambert Conformal Conic and the
Polar Stereographic projections to replace the Modified Polyconic.
Prompts
The following prompts display in the Projection Chooser if Modified Polyconic is
selected.
Projection Name
Spheroid Type
Datum Name
Modified Stereographic
Summary
Construction Plane
Property Conformal
The meridians and parallels of the Modified Stereographic projection are generally
curved, and there is usually no symmetry about any point or line. There are limitations
to these transformations. Most of them can only be used within a limited range. As the
distance from the projection center increases, the meridians, parallels, and shorelines
begin to exhibit loops, overlapping, and other undesirable curves. A world map using
the GS50 (50-State) projection is almost illegible with meridians and parallels inter-
twined like wild vines.
Prompts
The following prompts display in the Projection Chooser if Modified Stereographic is
selected.
Projection Name
Spheroid Type
Datum Name
Mollweide Equal Area
Summary
Construction Pseudo-cylinder
The second oldest pseudo-cylindrical projection that is still in use (after the Sinusoidal)
was presented by Carl B. Mollweide (1774 - 1825) of Halle, Germany, in 1805. It is an
equal area projection of the earth within an ellipse. It has had a profound effect on
world map projections in the 20th century, especially as an inspiration for other
important projections, such as the Van der Grinten.
The Mollweide is normally used for world maps and occasionally for a very large
region, such as the Pacific Ocean. This is because only two points on the Mollweide are
completely free of distortion unless the projection is interrupted. These are the points at
latitudes 40˚44’12”N and S on the central meridian(s).
The world is shown in an ellipse with the equator, its major axis, twice as long as the
central meridian, its minor axis. The meridians 90˚ east and west of the central meridian
form a complete circle. All other meridians are elliptical arcs which, with their opposite
numbers on the other side of the central meridian, form complete ellipses that meet at
the poles.
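The standard spherical Mollweide equations are sketched below; the auxiliary angle is found iteratively, and the radius and central meridian are example values.

# Spherical Mollweide equations (a sketch of the standard formulation). The
# auxiliary angle theta satisfies 2*theta + sin(2*theta) = pi * sin(lat).
from math import radians, sin, cos, sqrt, pi

R = 6370997.0    # example sphere radius in meters
lon0 = 0.0       # example central meridian

def mollweide(lat_deg, lon_deg):
    phi = radians(lat_deg)
    lam = radians(lon_deg - lon0)
    if abs(phi) > pi/2 - 1e-9:
        theta = phi                        # avoid division by zero at the poles
    else:
        t = 2.0 * phi                      # t plays the role of 2 * theta
        for _ in range(50):                # Newton-Raphson iteration
            dt = -(t + sin(t) - pi * sin(phi)) / (1.0 + cos(t))
            t += dt
            if abs(dt) < 1e-12:
                break
        theta = t / 2.0
    x = (sqrt(8.0) / pi) * R * lam * cos(theta)
    y = sqrt(2.0) * R * sin(theta)
    return x, y

# The equator (major axis) is twice as long as the central meridian (minor axis):
print(round(mollweide(0.0, 180.0)[0] / mollweide(90.0, 0.0)[1], 6))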
Prompts
The following prompts display in the Projection Chooser if Mollweide Equal Area is
selected.
Projection Name
Spheroid Type
Datum Name
Rectified Skew Orthomorphic
Prompts
The following prompts display in the Projection Chooser if Rectified Skew Ortho-
morphic is selected.
Projection Name
Spheroid Type
Datum Name
Robinson
Pseudocylindrical
Summary
Construction Pseudo-cylinder
Property Compromise
Graticule spacing Parallels are straight lines and are parallel. The individual parallels are evenly divided by the meridians (Pearson 1990).
Linear scale Generally, scale is made true along latitudes 38˚N and S. Scale is constant along any given latitude, and for the latitude of opposite sign (ESRI 1992).
Uses Developed for use in general and thematic world maps. Used by Rand McNally since the 1960s and by the National Geographic Society since 1988 for general and thematic world maps (ESRI 1992).
Meridians are equally spaced and resemble elliptical arcs, concave toward the central
meridian. The central meridian is a straight line 0.51 times the length of the equator.
Parallels are equally spaced straight lines between 38˚N and S, and then the spacing
decreases beyond these limits. The poles are 0.53 times the length of the equator. The
projection is based upon tabular coordinates instead of mathematical formulas (ESRI
1992).
Prompts
The following prompts display in the Projection Chooser if Robinson Pseudocylindrical
is selected.
Projection Name
Spheroid Type
Datum Name
Southern Orientated Gauss Conformal
Southern Orientated Gauss Conformal is another name for the Transverse Mercator projection, after mathematician Friedrich Gauss (1777 - 1855). It is also called the Gauss-
Krüger projection.
Prompts
The following prompts display in the Projection Chooser if Southern Orientated Gauss
Conformal is selected.
Projection Name
Spheroid Type
Datum Name
Winkel’s Tripel
Prompts
The following prompts display in the Projection Chooser if Winkel’s Tripel is selected.
Projection Name
Spheroid Type
Datum Name
Glossary
average - the statistical mean; the sum of a set of values divided by the number of values
in the set.
AVHRR - Advanced Very High Resolution Radiometer data. Small-scale imagery
produced by an NOAA polar orbiting satellite. It has a spatial resolution of 1.1×
1.1 km or 4 × 4 km.
azimuth - an angle measured clockwise from a meridian, going north to east.
azimuthal projection - a map projection that is created from projecting the surface of
the earth to the surface of a plane.
band - a set of data file values for a specific portion of the electromagnetic spectrum of
reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal,
etc.) or some other user-defined information created by combining or enhancing
the original bands, or creating new bands from other sources. Sometimes called
“channel.”
banding - see striping.
base map - a map portraying background reference information onto which other infor-
mation is placed. Base maps usually show the location and extent of natural
surface features and permanent man-made features.
batch file - a file that is created in the batch mode of ERDAS IMAGINE. All steps are
recorded for a later run. This file can be edited.
batch mode - a mode of operating ERDAS IMAGINE in which steps are recorded for
later use.
bathymetric map - a map portraying the shape of a water body or reservoir using
isobaths (depth contours).
Bayesian - a variation of the maximum likelihood classifier, based on the Bayes Law of
probability. The Bayesian classifier allows the application of a priori weighting
factors, representing the probabilities that pixels will be assigned to each class.
BIL - band interleaved by line. A form of data storage in which each record in the file
contains a scan line (row) of data for one band. All bands of data for a given line
are stored consecutively within the file.
bilinear interpolation - a resampling method that uses the data file values of four pixels
in a 2 by 2 window to calculate an output data file value by computing a weighted
average of the input data file values with a bilinear function.
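A minimal sketch of the 2 by 2 weighted average, assuming the neighboring data file values are held in a small dictionary keyed by integer (column, row) pairs:

# Bilinear interpolation over a 2 x 2 window; (col, row) may be fractional.
def bilinear(values, col, row):
    c0, r0 = int(col), int(row)
    dc, dr = col - c0, row - r0
    return ((1 - dc) * (1 - dr) * values[(c0,     r0)] +
            dc       * (1 - dr) * values[(c0 + 1, r0)] +
            (1 - dc) * dr       * values[(c0,     r0 + 1)] +
            dc       * dr       * values[(c0 + 1, r0 + 1)])

window = {(10, 20): 50, (11, 20): 60, (10, 21): 70, (11, 21): 80}
print(bilinear(window, 10.5, 20.5))   # 65.0, the average of the four neighbors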
bin function - a mathematical function that establishes the relationship between data
file values and rows in a descriptor table.
bins - ordered sets of pixels. Pixels are sorted into a specified number of bins. The pixels
are then given new values based upon the bins to which they are assigned.
BIP - band interleaved by pixel. A form of data storage in which the values for each
band are ordered within a given pixel. The pixels are arranged sequentially on the
tape.
cadastral map - a map showing the boundaries of the subdivisions of land for purposes
of describing and recording ownership or taxation.
calibration certificate/report - in aerial photography, the manufacturer of the camera
specifies the interior orientation in the form of a certificate or report.
Cartesian - a coordinate system in which data are organized on a grid and points on the
grid are referenced by their X,Y coordinates.
cartography - the art and science of creating maps.
categorical data - see thematic data.
CCT - see computer compatible tape.
CD-ROM - a read-only storage device read by a CD-ROM player.
cell - 1. a 1˚ × 1˚ area of coverage. DTED (Digital Terrain Elevation Data) are distributed
in cells. 2. a pixel; grid cell.
cell size - the area that one pixel represents, measured in map units. For example, one
cell in the image may represent an area 30 feet by 30 feet on the ground.
Sometimes called “pixel size.”
center of the scene - the center pixel of the center scan line; the center of a satellite
image.
character - a number, letter, or punctuation symbol. One character usually occupies one
byte when stored on a computer.
check point - additional ground points used to independently verify the degree of
accuracy of a triangulation.
check point analysis - the act of using check points to independently verify the degree
of accuracy of a triangulation.
chi-square distribution - a non-symmetrical data distribution, whose curve is charac-
terized by a “tail” that represents the highest and least frequent data values. In
classification thresholding, the “tail” represents the pixels that are most likely to
be classified incorrectly.
choropleth map - a map portraying properties of a surface using area symbols. Area
symbols usually represent categorized classes of the mapped phenomenon.
city-block distance - the physical or spectral distance that is measured as the sum of
distances that are perpendicular to one another.
class - a set of pixels in a GIS file which represent areas that share some condition.
Classes are usually formed through classification of a continuous raster layer.
class value - a data file value of a thematic file which identifies a pixel as belonging to
a particular class.
classification - the process of assigning the pixels of a continuous raster image to
discrete categories.
computer compatible tape (CCT) - a magnetic tape used to transfer and store digital
data.
confidence level - the percentage of pixels that are believed to be misclassified.
conformal - a map or map projection that has the property of conformality, or true
shape.
conformality - the property of a map projection to represent true shape, wherein a
projection preserves the shape of any small geographical area. This is accom-
plished by exact transformation of angles around points.
conic projection - a map projection that is created from projecting the surface of the
earth to the surface of a cone.
connectivity radius - the distance (in pixels) that pixels can be from one another to be
considered contiguous. The connectivity radius is used in connectivity analysis.
contiguity analysis - a study of the ways in which pixels of a class are grouped together
spatially. Groups of contiguous pixels in the same class, called raster regions, or
“clumps,” can be identified by their sizes and manipulated.
contingency matrix - a matrix which contains the number and percentages of pixels
that were classified as expected.
continuous - a term used to describe raster data layers that contain quantitative and
related values. See continuous data.
continuous data - a type of raster data that are quantitative (measuring a characteristic)
and have related, continuous values, such as remotely sensed images (e.g.,
Landsat, SPOT, etc.).
contour map - a map in which a series of lines connects points of equal elevation.
contrast stretch - the process of reassigning a range of values to another range, usually
according to a linear function. Contrast stretching is often used in displaying
continuous raster layers, since the range of data file values is usually much
narrower than the range of brightness values on the display device.
control point - a point with known coordinates in the ground coordinate system,
expressed in the units of the specified map projection.
convolution filtering - the process of averaging small sets of pixels across an image.
Used to change the spatial frequency characteristics of an image.
convolution kernel - a matrix of numbers that is used to average the value of each pixel
with the values of surrounding pixels in a particular way. The numbers in the
matrix serve to weight this average toward particular pixels.
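A minimal sketch of convolution filtering with a 3 by 3 averaging kernel, using plain nested lists for the image (border pixels are simply left unchanged):

# Each output pixel is a weighted average of the input pixel and its neighbors.
def convolve(image, kernel):
    rows, cols, k = len(image), len(image[0]), len(kernel)
    half = k // 2
    out = [row[:] for row in image]              # copy; border left unchanged
    weight = sum(sum(r) for r in kernel) or 1    # kernel sum (1 if zero-sum)
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            total = sum(kernel[m][n] * image[i + m - half][j + n - half]
                        for m in range(k) for n in range(k))
            out[i][j] = total / weight
    return out

kernel = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]       # simple low-frequency kernel
image = [[10, 10, 10, 10], [10, 50, 50, 10], [10, 50, 50, 10], [10, 10, 10, 10]]
print(convolve(image, kernel)[1][1])             # smoothed value at row 1, col 1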
coordinate system - a method for expressing location. In two-dimensional coordinate
systems, locations are expressed by a column and row, also called x and y.
correlation threshold - a value used in rectification to determine whether to accept or
discard ground control points. The threshold is an absolute value threshold
ranging from 0.000 to 1.000.
dangling node - a line that does not close to form a polygon, or that extends past an
intersection.
data - 1. in the context of remote sensing, a computer file containing numbers which
represent a remotely sensed image, and can be processed to display that image.
2. a collection of numbers, strings, or facts that require some processing before
they are meaningful.
database (one word) - a relational data structure usually used to store tabular infor-
mation. Examples of popular databases include SYBASE, dBase, Oracle, INFO,
etc.
data base (two words) - in ERDAS IMAGINE, a set of continuous and thematic raster
layers, vector layers, attribute information, and other kinds of data which
represent one area of interest. A data base is usually part of a geographic infor-
mation system.
data file - a computer file that contains numbers which represent an image.
data file value - each number in an image file. Also called “file value,” “image file
value,” “digital number (DN),” “brightness value,” “pixel.”
datum - see reference plane.
decision rule - an equation or algorithm that is used to classify image data after signa-
tures have been created. The decision rule is used to process the data file values
based upon the signature statistics.
decorrelation stretch - a technique used to stretch the principal components of an
image, not the original image.
default directory - see current directory.
degrees of freedom - when chi-square statistics are used in thresholding, the number
of bands in the classified file.
DEM - see digital elevation model.
densify - the process of adding vertices to selected lines at a user-specified tolerance.
density - 1. the number of bits per inch on a magnetic tape. 9-track tapes are commonly
stored at 1600 and 6250 bpi. 2. a neighborhood analysis technique that outputs the
number of pixels that have the same value as the analyzed pixel in a user-
specified window.
derivative map - a map created by altering, combining, or analyzing other maps.
descriptor - see attribute.
desktop scanners - general purpose devices which lack the image detail and geometric
accuracy of photogrammetric quality units, but are much less expensive.
detector - the device in a sensor system that records electromagnetic radiation.
developable surface - a flat surface, or a surface that can be easily flattened by being cut
and unrolled, such as the surface of a cone or a cylinder.
digital elevation model (DEM) - continuous raster layers in which data file values represent elevation. DEMs are available from the USGS at 1:24,000 and 1:250,000 scale, and can be produced with terrain analysis programs such as IMAGINE OrthoMAX.
digital orthophoto - An aerial photo or satellite scene which has been transformed by
the orthogonal projection, yielding a map that is free of most significant
geometric distortions.
Digital Photogrammetry - photogrammetry as applied to digital images that are stored
and processed on a computer. Digital images can be scanned from photographs
or can be directly captured by digital cameras.
digital terrain model (DTM) - a discrete expression of topography in a data array,
consisting of a group of planimetric coordinates (X,Y) and the elevations of the
ground points and breaklines.
digitized raster graphic (DRG) - a digital replica of Defense Mapping Agency
hardcopy graphic products. See also ADRG.
dot patterns - the matrices of dots used to represent brightness values on hardcopy
maps and images.
double precision - a measure of accuracy in which 15 significant digits can be stored for
a coordinate.
downsampling - the skipping of pixels during display or during the scanning process.
DTM - see digital terrain model.
DXF - Data Exchange Format. A format for storing vector data in ASCII files, used by
AutoCAD software.
dynamic range - see radiometric resolution.
edge detector - a convolution kernel, usually a zero-sum kernel, which smooths out or
zeros out areas of low spatial frequency and creates a sharp contrast where spatial
frequency is high, which is at the edges between homogeneous groups of pixels.
edge enhancer - a high-frequency convolution kernel that brings out the edges between
homogeneous groups of pixels. Unlike an edge detector, it only highlights edges; it does not necessarily eliminate other features.
eigenvalue - the length of a principal component which measures the variance of a
principal component band. See also principal components.
eigenvector - the direction of a principal component represented as coefficients in an
eigenvector matrix which is computed from the eigenvalues. See also principal
components.
electromagnetic radiation - the energy transmitted through space in the form of electric
and magnetic waves.
electromagnetic spectrum - the range of electromagnetic radiation extending from
cosmic waves to radio waves, characterized by frequency or wavelength.
element - an entity of vector data, such as a point, a line, or a polygon.
elevation data - see terrain data, DEM.
ellipse - a two-dimensional figure that is formed in a two-dimensional scatterplot when
both bands plotted have normal distributions. The ellipse is defined by the
standard deviations of the input bands. Ellipse plots are often used to test signa-
tures before classification.
end-of-file mark (EOF) - usually a half-inch strip of blank tape which signifies the end
of a file that is stored on magnetic tape.
end-of-volume mark (EOV) - usually three EOFs marking the end of a tape.
enhancement - the process of making an image more interpretable for a particular
application. Enhancement can make important features of raw, remotely sensed
data more interpretable to the human eye.
false color - a color scheme in which features have “expected” colors. For instance,
vegetation is green, water is blue, etc. These are not necessarily the true colors of
these features.
false easting - an offset between the x-origin of a map projection and the x-origin of a map. Usually used so that no x-coordinates are negative.
false northing - an offset between the y-origin of a map projection and the y-origin of a map. Usually used so that no y-coordinates are negative.
fast format - a type of BSQ format used by EOSAT to store Landsat TM (Thematic
Mapper) data.
feature based matching - an image matching technique that determines the correspon-
dence between two image features.
feature collection - the process of identifying, delineating, and labeling various types
of natural and man-made phenomena from remotely-sensed images.
feature extraction - the process of studying and locating areas and objects on the
ground and deriving useful information from images.
feature space - an abstract space that is defined by spectral units (such as an amount of
electromagnetic radiation).
feature space area of interest - a user-selected area of interest (AOI) that is selected
from a feature space image.
feature space image - a graph of the data file values of one band of data against the
values of another band (often called a scatterplot).
fiducial center - the center of an aerial photo.
fiducials - four or eight reference markers fixed on the frame of an aerial metric camera
and visible in each exposure. Fiducials are used to compute the transformation
from data file to image coordinates.
field - in an attribute data base, a category of information about each class or feature,
such as “Class name” and “Histogram.”
field of view - in perspective views, an angle which defines how far the view will be
generated to each side of the line of sight.
file coordinates - the location of a pixel within the file in x,y coordinates. The upper left
file coordinate is usually 0,0.
file pixel - the data file value for one data unit in an image file.
file specification or filespec - the complete file name, including the drive and path, if
necessary. If a drive or path is not specified, the file is assumed to be in the current
drive and directory.
filled - referring to polygons; a filled polygon is solid or has a pattern, but is not trans-
parent. An unfilled polygon is simply a closed vector which outlines the area of
the polygon.
halftoning - the process of using dots of varying size or arrangements (rather than
varying intensity) to form varying degrees of a color.
hardcopy output - any output of digital computer (softcopy) data to paper.
header file - a file usually found before the actual image data on tapes or CD-ROMs that
contains information about the data, such as number of bands, upper left coordi-
nates, map projection, etc.
header record - the first part of an image file that contains general information about
the data in the file, such as the number of columns and rows, number of bands,
data base coordinates of the upper left corner, and the pixel depth. The contents
of header records vary depending on the type of data.
high-frequency kernel - a convolution kernel that increases the spatial frequency of an
image. Also called “high-pass kernel.”
High Resolution Picture Transmission (HRPT) - the direct transmission of AVHRR
data in real-time with the same resolution as Local Area Coverage (LAC).
High Resolution Visible (HRV) sensor - a pushbroom scanner on a SPOT satellite that
takes a sequence of line images while the satellite circles the earth.
histogram - a graph of data distribution, or a chart of the number of pixels that have
each possible data file value. For a single band of data, the horizontal axis of a
histogram graph is the range of all possible data file values. The vertical axis is
the number of pixels that have each data value.
histogram equalization - the process of redistributing pixel values so that there are
approximately the same number of pixels with each value within a range. The
result is a nearly flat histogram.
histogram matching - the process of determining a lookup table that will convert the
histogram of one band of an image or one color gun to resemble another
histogram.
IGES - Initial Graphics Exchange Standard files are often used to transfer CAD data
between systems. IGES Version 3.0 format, published by the U.S. Department of
Commerce, is in uncompressed ASCII format only.
IHS - intensity, hue, saturation. An alternate color space from RGB (red, green, blue).
This system is advantageous in that it presents colors more nearly as perceived
by the human eye. See intensity, hue, and saturation.
image - a picture or representation of an object or scene on paper or a display screen.
Remotely sensed images are digital representations of the earth.
image algebra - any type of algebraic function that is applied to the data file values in
one or more bands.
image center - the center of the aerial photo or satellite scene.
image coordinate system - the coordinate system in which the location of each point in
the image is expressed for the purposes of photogrammetric triangulation.
image data - digital representations of the earth that can be used in computer image
processing and geographic information system (GIS) analyses.
image file - a file containing raster image data. Image files in ERDAS IMAGINE have
the extension .img. Image files from the ERDAS Ver. 7.X series software have the
extension .LAN or .GIS.
image matching - the automatic acquisition of corresponding image points on the
overlapping area of two images.
image memory - the portion of the display device memory that stores data file values
(which may be transformed or processed by the software that accesses the display
device).
image pair - see stereopair.
image processing - the manipulation of digital image data, including (but not limited
to) enhancement, classification, and rectification operations.
image pyramid - a data structure consisting of the same image represented several
times, at a decreasing spatial resolution each time. Each level of the pyramid
contains the image at a particular resolution.
label - in annotation, the text that conveys important information to the reader about
map features.
label point - a point within a polygon that defines that polygon.
LAC - see local area coverage.
.LAN files - multiband ERDAS Ver. 7.X image files (the name originally derived from
the Landsat satellite). LAN files usually contain raw or enhanced remotely sensed
data.
land cover map - a map of the visible ground features of a scene, such as vegetation,
bare land, pasture, urban areas, etc.
Landsat - a series of earth-orbiting satellites that gather Multispectral Scanner (MSS)
and Thematic Mapper (TM) imagery, operated by EOSAT.
large scale - a description used to represent a map or data file having a large ratio
between the area on the map (such as inches or pixels) and the area that is repre-
sented (such as feet). In large-scale image data, each pixel represents a small area
on the ground, such as SPOT data, with a spatial resolution of 10 or 20 meters.
local area coverage (LAC) - a type of NOAA AVHRR data with a spatial resolution of
1.1 × 1.1 km.
logical record - a series of bytes that form a unit on a 9-track tape. For example, all the
data for one line of an image may form a logical record. One or more logical
records make up a physical record on a tape.
long wave infrared region (LWIR) - the thermal or far-infrared region of the electro-
magnetic spectrum.
lookup table (LUT) - an ordered set of numbers which is used to perform a function on
a set of input values. To display or print an image, lookup tables translate data
file values into brightness values.
low-frequency kernel - a convolution kernel that decreases spatial frequency. Also
called “low-pass kernel.”
LUT - see lookup table.
magnify - the process of displaying one file pixel over a block of display pixels. For
example, if the magnification factor is 3, then each file pixel will take up a block
of 3 × 3 display pixels. Magnification differs from zooming in that the magnified
image is loaded directly to image memory.
Mahalanobis distance - a classification decision rule that is similar to the minimum
distance decision rule, except that a covariance matrix is used in the equation.
majority - a neighborhood analysis technique that outputs the most common value of
the data file values in a user-specified window.
map - a graphic representation of spatial relationships on the earth or other planets.
map coordinates - a system of expressing locations on the earth’s surface using a
particular map projection, such as Universal Transverse Mercator (UTM), State
Plane, or Polyconic.
map frame - an annotation element that indicates where an image will be placed in a
map composition.
map projection - a method of representing the three-dimensional spherical surface of a
planet on a two-dimensional map surface. All map projections involve the
transfer of latitude and longitude onto an easily flattened surface.
matrix - a set of numbers arranged in a rectangular array. If a matrix has i rows and j
columns, it is said to be an i by j matrix.
matrix analysis - a method of combining two thematic layers in which the output layer
contains a separate class for every combination of two input classes.
matrix object - in Model Maker (Spatial Modeler), a set of numbers in a two-dimen-
sional array.
maximum - a neighborhood analysis technique that outputs the greatest value of the
data file values in a user-specified window.
monochrome image - an image produced from one band or layer, or contained in one
color gun of the display device.
morphometric map - a map representing morphological features of the earth’s surface.
mosaicking - the process of piecing together images side by side, to create a larger
image.
multispectral classification - the process of sorting pixels into a finite number of
individual classes, or categories of data, based on data file values in multiple
bands. See also classification.
multispectral imagery - satellite imagery with data recorded in two or more bands.
multispectral scanner (MSS) - Landsat satellite data acquired in 4 bands with a spatial
resolution of 57 × 79 meters.
multitemporal - data from two or more different dates.
object - in models, an input to or output from a function. See matrix object, raster
object, scalar object, table object.
oblique aspect - a map projection that is not oriented around a pole or the equator.
observation - in photogrammetric triangulation, a grouping of the image coordinates
for a control point.
off-nadir - any point that is not directly beneath a scanner’s detectors, but off to an
angle. The SPOT scanner allows off-nadir viewing.
1:24,000 - 1:24,000 scale data, also called “7.5-minute DEM” (Digital Elevation Model),
available from USGS. It is usually referenced to the UTM coordinate system and
has a spatial resolution of 30 × 30 meters.
1:250,000 - 1:250,000 scale DEM (Digital Elevation Model) data available from USGS.
Available only in arc/second format.
opacity - a measure of how opaque, or solid, a color is displayed in a raster layer.
operating system - the most basic means of communicating with the computer. It
manages the storage of information in files and directories, input from devices
such as the keyboard and mouse, and output to devices such as the monitor.
orbit - a circular, north-south and south-north path that a satellite travels above the
earth.
order - the complexity of a function, polynomial expression, or curve. In a polynomial
expression, the order is simply the highest exponent used in the polynomial. See
also linear, nonlinear.
ordinal data - a type of data that includes discrete lists of classes with an inherent order,
such as classes of streams—first order, second order, third order, etc.
orientation angle - the angle between a perpendicular to the center scan line and the
North direction in a satellite scene.
orthographic - an azimuthal projection with an infinite perspective.
orthocorrection - see orthorectification.
point ID - in rectification, a name given to GCPs in separate files that represent the same
geographic location.
point mode - a digitizing mode in which one vertex is generated each time a keypad
button is pressed.
polar aspect - a map projection that is centered around a pole.
polygon - a set of closed line segments defining an area.
polynomial - a mathematical expression consisting of variables and coefficients. A
coefficient is a constant, which is multiplied by a variable in the expression.
positive inclination - the sensors are tilted in increments of 0.6° to a maximum of 27°
to the west.
primary colors - colors from which all other available colors are derived. On a display
monitor, the primary colors red, green, and blue are combined to produce all
other colors. On a color printer, cyan, yellow, and magenta inks are combined.
principal components - the transects of a scatterplot of two or more bands of data,
which represent the widest variance and successively smaller amounts of
variance that are not already represented. Principal components are orthogonal
(perpendicular) to one another. In principal components analysis, the data are
transformed so that the principal components become the axes of the scatterplot
of the output data.
principal component band - a band of data that is output by principal components
analysis. Principal component bands are uncorrelated and non-redundant, since
each principal component describes different variance within the original data.
principal components analysis - the process of calculating principal components and
outputting principal component bands. It allows redundant data to be compacted
into fewer bands; that is, the dimensionality of the data is reduced.
principal point (Xp,Yp) - the point in the image plane onto which the perspective center
is projected, located directly beneath the perspective center.
printer - a device that prints text, full color imagery, and/or graphics. See color printer,
text printer.
profile - a row of data file values from a DEM (Digital Elevation Model) or DTED
(Digital Terrain Elevation Data) file. The profiles of DEM and DTED run south to
north, that is, the first pixel of the record is the southernmost pixel.
profile symbol - an annotation symbol that is formed like the profile of an object. Profile
symbols generally represent vertical objects such as trees, windmills, oil wells,
etc.
proximity analysis - a technique used to determine which pixels of a thematic layer are
located at specified distances from pixels in a class or classes. A new layer is
created which is classified by the distance of each pixel from specified classes of
the input layer.
quadrangle - 1. any of the hardcopy maps distributed by USGS such as the 7.5-minute
quadrangle or the 15-minute quadrangle. 2. one quarter of a full Landsat TM
scene. Commonly called a “quad.”
qualitative map - a map that shows the spatial distribution or location of a kind of
nominal data. For example, a map showing corn fields in the United States would
be a qualitative map. It would not show how much corn is produced in each
location, or production relative to the other areas.
quantitative map - a map that displays the spatial aspects of numerical data. A map
showing corn production (volume) in each area would be a quantitative map.
radar data - the remotely sensed data that are produced when a radar transmitter emits
a beam of micro or millimeter waves, the waves reflect from the surfaces they
strike, and the backscattered radiation is detected by the radar system’s receiving
antenna which is tuned to the frequency of the transmitted waves.
RADARSAT - a Canadian radar satellite launched in November 1995.
radiative transfer equations - the mathematical models that attempt to quantify the
total atmospheric effect of solar illumination.
radiometric correction - the correction of variations in data that are not caused by the
object or scene being scanned, such as scanner malfunction and atmospheric
interference.
radiometric enhancement - an enhancement technique that deals with the individual
values of pixels in an image.
radiometric resolution - the dynamic range, or number of possible data file values, in
each band. This is referred to by the number of bits into which the recorded
energy is divided. See pixel depth.
rank - a neighborhood analysis technique that outputs the number of values in a user-
specified window that are less than the analyzed value.
raster data - data that are organized in a grid of columns and rows. Raster data usually
represent a planar graph or geographical area. Raster data in ERDAS IMAGINE
are stored in .img files.
raster object - in Model Maker (Spatial Modeler), a single raster layer or set of layers.
raster region - a contiguous group of pixels in one GIS class. Also called clump.
ratio data - a data type in which thematic class values have the same properties as
interval values, except that ratio values have a natural zero or starting point.
Real-Aperture Radar (RAR) - a radar sensor that uses its side-looking, fixed antenna to
transmit and receive the radar impulse. For a given position in space, the
resolution of the resultant image is a function of the antenna size. The signal is
processed independently of subsequent return signals.
recoding - the assignment of new values to one or more classes.
record - 1. the set of all attribute data for one class of feature. 2. the basic storage unit on
a 9-track tape.
rectification - the process of making image data conform to a map projection system. In
many cases, the image must also be oriented so that the north direction corre-
sponds to the top of the image.
rectified coordinates - the coordinates of a pixel in a file that has been rectified, which
are extrapolated from the ground control points. Ideally, the rectified coordinates
for the ground control points are exactly equal to the reference coordinates. Since
there is often some error tolerated in the rectification, this is not always the case.
reduce - the process of skipping file pixels when displaying an image, so that a larger
area can be represented on the display screen. For example, a reduction factor of
3 would cause only the pixel at every third row and column to be displayed, so
that each displayed pixel represents a 3 × 3 block of file pixels.
reference coordinates - the coordinates of the map or reference image to which a source
(input) image is being registered. Ground control points consist of both input
coordinates and reference coordinates for each point.
reference pixels - in classification accuracy assessment, pixels for which the correct GIS
class is known from ground truth or other data. The reference pixels can be
selected by you, or randomly selected.
reference plane - in a topocentric coordinate system, the tangential plane at the center
of the image on the earth ellipsoid, on which the three perpendicular coordinate
axes are defined.
reference system - the map coordinate system to which an image is registered.
reference window - the source window on the first image of an image pair, which
remains at a constant location. See also correlation windows and search
windows.
reflection spectra - the electromagnetic radiation wavelengths that are reflected by
specific materials of interest.
rhumb line - a line of true direction, which crosses meridians at a constant angle.
right hand rule - a convention in three-dimensional coordinate systems (X,Y,Z) which
determines the location of the positive Z axis. If you place your right hand fingers
on the positive X axis and curl your fingers toward the positive Y axis, the
direction your thumb is pointing is the positive Z axis direction.
RMS error - the distance between the input (source) location of a GCP and the retrans-
formed location for the same GCP. RMS error is calculated with a distance
equation.
RMSE (Root Mean Square Error) - used to measure how well a specific calculated
solution fits the original data. For each observation of a phenomenon, a variation
can be computed between the actual observation and a calculated value. (The
method of obtaining a calculated value is application-specific.) Each variation is
then squared. The sum of these squared values is divided by the number of
observations, and then the square root is taken. This is the RMSE value.
roam - the process of moving across a display so that different areas of the image appear
on the display screen.
root - the first part of a file name, which usually identifies the file’s specific contents.
ROYGBIV - a color scheme ranging through red, orange, yellow, green, blue, indigo,
and violet at regular intervals.
rubber sheeting - the application of a nonlinear rectification (2nd-order or higher).
Shuttle Imaging Radar (SIR-A, SIR-B, and SIR-C) - the radar sensors that fly aboard
NASA space shuttles. SIR-A flew aboard the 1981 NASA Space Shuttle Columbia.
Those data and SIR-B data from a later Space Shuttle mission are still valuable
sources of radar data. The SIR-C sensor flew aboard Space Shuttle missions in 1994.
Side-looking Airborne Radar (SLAR) - a radar sensor that uses an antenna which is
fixed below an aircraft and pointed to the side to transmit and receive the radar
signal.
signal based matching - see area based matching.
signature - a set of statistics that defines a training sample or cluster. The signature is
used in a classification process. Each signature corresponds to a GIS class that is
created from the signatures with a classification decision rule.
skew - a condition in satellite data, caused by the rotation of the earth eastward, which
causes the position of the satellite relative to the earth to move westward.
Therefore, each line of data represents terrain that is slightly west of the data in
the previous line.
slope - the change in elevation over a certain distance. Slope can be reported as a
percentage or in degrees.
slope image - a thematic raster image which shows changes in elevation over distance.
Slope images are usually color-coded to show the steepness of the terrain at each
pixel.
slope map - a map that is color-coded to show changes in elevation over distance.
small scale - for a map or data file, having a small ratio between the area of the imagery
(such as inches or pixels) and the area that is represented (such as feet). In small-
scale image data, each pixel represents a large area on the ground, such as NOAA
AVHRR (Advanced Very High Resolution Radiometer) data, with a spatial
resolution of 1.1 km.
Softcopy Photogrammetry - see Digital Photogrammetry.
source coordinates - in the rectification process, the input coordinates.
spatial enhancement - the process of modifying the values of pixels in an image relative
to the pixels that surround them.
spatial frequency - the difference between the highest and lowest values of a
contiguous set of pixels.
Spatial Modeler Language - a script language used internally by Model Maker (Spatial
Modeler) to execute the operations specified in the graphical models you create.
The Spatial Modeler Language can also be used to write application-specific
models.
spatial resolution - a measure of the smallest object that can be resolved by the sensor,
or the area on the ground represented by each pixel.
speckle noise - the light and dark pixel noise that appears in radar data.
stereo-scene - achieved when two images of the same area are acquired on different
days from different orbits, one taken east of nadir and the other taken west of
nadir.
stream mode - a digitizing mode in which vertices are generated continuously while the
digitizer keypad is in proximity to the surface of the digitizing tablet.
string - a line of text. A string usually has a fixed length (number of characters).
strip of photographs - consists of images captured along a flight-line, normally with an
overlap of 60% for stereo coverage. All photos in the strip are assumed to be taken
at approximately the same flying height and with a constant distance between
exposure stations. Camera tilt relative to the vertical is assumed to be minimal.
striping - a data error that occurs if a detector on a scanning system goes out of
adjustment; that is, it provides readings consistently greater than or less than the
other detectors for the same band over the same ground cover. Also called
“banding.”
structure based matching - see relation based matching.
subsetting - the process of breaking out a portion of a large image file into one or more
smaller files.
sum - a neighborhood analysis technique that outputs the total of the data file values in
a user-specified window.
Sun raster data - imagery captured from a Sun monitor display.
sun-synchronous - a term used to describe earth-orbiting satellites whose orbital plane
keeps a constant angle to the sun, so that the satellite crosses the equator at the
same local solar time on each pass.
supervised training - any method of generating signatures for classification, in which
the analyst is directly involved in the pattern recognition process. Usually, super-
vised training requires the analyst to select training samples from the data, which
represent patterns to be classified.
surface - a one-band file in which the value of each pixel is a specific elevation value.
swath width - in a satellite system, the total width of the area on the ground covered by
the scanner.
symbol - an annotation element that consists of other elements (sub-elements). See plan
symbol, profile symbol, and function symbol.
symbolization - a method of displaying vector data in which attribute information is
used to determine how features are rendered. For example, points indicating
cities and towns can appear differently based on the population field stored in the
attribute database for each of those areas.
Synthetic Aperture Radar (SAR) - a radar sensor that uses its side-looking, fixed
antenna to create a synthetic aperture. SAR sensors are mounted on satellites,
aircraft, and the NASA Space Shuttle. The sensor transmits and receives as it is
moving. The signals received over a time interval are combined to create the
image.
table object - in Model Maker (Spatial Modeler), a series of numeric values or character
strings.
tablet digitizing - the process of using a digitizing tablet to transfer non-digital data
such as maps or photographs to vector format.
Tagged Image File Format - see TIFF data.
tangent - an intersection at one point or line. In the case of conic or cylindrical map
projections, a tangent cone or cylinder intersects the surface of a globe in a circle.
Tasseled Cap transformation - an image enhancement technique that optimizes data
viewing for vegetation studies.
temporal resolution - the frequency with which a sensor obtains imagery of a particular
area.
terrain analysis - the processing and graphic simulation of elevation data.
terrain data - elevation data expressed as a series of x, y, and z values that are either
regularly or irregularly spaced.
text printer - a device used to print characters onto paper, usually used for lists,
documents, and reports. If a color printer is not necessary or is unavailable,
images can be printed using a text printer. Also called a “line printer.”
thematic data - raster data that are qualitative and categorical. Thematic layers often
contain classes of related information, such as land cover, soil type, slope, etc. In
ERDAS IMAGINE, thematic data are stored in .img files.
thematic layer - see thematic data.
thematic map - a map illustrating the class characterizations of a particular spatial
variable such as soils, land cover, hydrology, etc.
Thematic Mapper (TM) - Landsat data acquired in 7 bands with a spatial resolution of
30 × 30 meters.
theme - a particular type of information, such as soil type or land use, that is repre-
sented in a layer.
3D perspective view - a simulated three-dimensional view of terrain.
threshold - a limit, or “cutoff point,” usually a maximum allowable amount of error in
an analysis. In classification, thresholding is the process of identifying a
maximum distance between a pixel and the mean of the signature to which it was
classified.
tick marks - small lines along the edge of the image area or neatline that indicate regular
intervals of distance.
tie point - a point whose ground coordinates are not known, but can be recognized
visually in the overlap or sidelap area between two images.
TIFF data - Tagged Image File Format data is a raster file format developed by Aldus
Corp. (Seattle, Washington) in 1986 for the easy transportation of data.
union - the area or set that is the combination of two or more input areas or sets, without
repetition.
unscaled map - a hardcopy map that is not referenced to any particular scale, in which
one file pixel is equal to one printed pixel.
unsplit - the process of joining two lines by removing a node.
unsupervised training - a computer-automated method of pattern recognition in which
some parameters are specified by the user and are used to uncover statistical
patterns that are inherent in the data.
viewshed analysis - the calculation of all areas that can be seen from a particular
viewing point or path.
viewshed map - a map showing only those areas visible (or invisible) from a specified
point(s).
volume - a medium for data storage, such as a magnetic disk or a tape.
volume set - the complete set of tapes that contains one image.
weight - the number of values in a set; particularly, in clustering algorithms, the weight
of a cluster is the number of pixels that have been averaged into it.
weighting factor - a parameter that increases the importance of an input variable. For
example, in GIS indexing, one input layer can be assigned a weighting factor
which multiplies the class values in that layer by that factor, causing that layer to
have more importance in the output file.
weighting function - in surfacing routines, a function applied to elevation values for
determining new output values.
working window - the image area to be used in a model. This can be set to either the
union or intersection of the input layers.
workspace - a location which contains one or more vector layers. A workspace is made
up of several directories.
write ring - a protection device that allows data to be written to a 9-track tape when the
ring is in place, but not when it is removed.
X residual - in RMS error reports, the distance between the source X coordinate and the
retransformed X coordinate.
X RMS error - the root mean square error (RMS) in the X direction.
Y residual - in RMS error reports, the distance between the source Y coordinate and the
retransformed Y coordinate.
Y RMS error - the root mean square error (RMS) in the Y direction.
zero-sum kernel - a convolution kernel in which the sum of all the coefficients is zero.
Zero-sum kernels are usually edge detectors.
zone distribution rectangles (ZDRs) - the images into which each distribution
rectangle (DR) is divided in ADRG data.
zoom - the process of expanding displayed pixels on an image so that they can be more
closely studied. Zooming is similar to magnification, except that it changes the
display only temporarily, leaving image memory the same.
Bibliography
Adams, J.B., Smith, M.O., and Gillespie, A.R. 1989. “Simple Models for Complex
Natural Surfaces: A Strategy for the Hyperspectral Era of Remote Sensing.”
Proceedings IEEE Intl. Geosciences and Remote Sensing Symposium.
1:16-21.
Battrick, Bruce, and Lois Proud, eds. May 1992. ERS-1 User Handbook. Noordwijk, The
Netherlands: European Space Agency, ESA Publications Division, c/o ESTEC.
Benediktsson, J.A., Swain, P.H., Ersoy, O.K., and Hong, D. 1990. “Neural Network
Approaches Versus Statistical Methods in Classification of Multisource Remote
Sensing Data.” IEEE Transactions on Geoscience and Remote Sensing 28:4:540-51.
Berk, A., et al. 1989. MODTRAN: A Moderate Resolution Model for LOWTRAN 7.
Hanscom Air Force Base, Massachusetts: U.S. Air Force Geophysical Laboratory
(AFGL).
Bernstein, Ralph, et al. 1983. “Image Geometry and Rectification.” Chapter 21 in Manual
of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American
Society of Photogrammetry.
Billingsley, Fred C., et al. 1983. “Data Processing and Reprocessing.” Chapter 17 in
Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia:
American Society of Photogrammetry.
Blom, Ronald G., and Michael Daily. July 1982. “Radar Image Processing for Rock-Type
Discrimination.” IEEE Transactions on Geoscience and Remote Sensing, Vol. GE-20,
No. 3.
Burrus, C. S., and T. W. Parks. 1985. DFT/FFT and Convolution Algorithms: Theory and
Implementation. New York: John Wiley & Sons, Inc.
Cannon, Michael, Alex Lehar, and Fred Preston. 1983. “Background Pattern Removal
by Power Spectral Filtering.” Applied Optics, Vol. 22, No. 6: 777-779.
Carter, James R. 1989. “On Defining the Geographic Information System.” Fundamentals
of Geographic Information Systems: A Compendium, edited by William J. Ripple.
Bethesda, Maryland: American Society for Photogrammetric Engineering and
Remote Sensing and the American Congress on Surveying and Mapping.
Chahine, Moustafa T., et al. 1983. “Interaction Mechanisms within the Atmosphere.”
Chapter 5 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church,
Virginia: American Society of Photogrammetry.
Chavez, Pat S., Jr., et al. 1991. “Comparison of Three Different Methods to Merge Multi-
resolution and Multispectral Data: Landsat TM and SPOT Panchromatic.” Photo-
grammetric Engineering & Remote Sensing, Vol. 57, No. 3: 295-303.
Chavez, Pat S., Jr., and Graydon L. Berlin. 1986. “Restoration Techniques for SIR-B
Digital Radar Images.” Paper presented at the Fifth Thematic Conference:
Remote Sensing for Exploration Geology, Reno, Nevada.
Clark, Roger N., and Ted L. Roush. 1984. “Reflectance Spectroscopy: Quantitative
Analysis Techniques for Remote Sensing Applications.” Journal of Geophysical
Research, Vol. 89, No. B7: 6329-6340.
Clark, R.N., Gallagher, A.J., and Swayze, G.A. 1990. “Material Absorption Band Depth
Mapping of Imaging Spectrometer Data Using a Complete Band Shape Least-
Squares Fit with Library Reference Spectra.” Proceedings of the Second AVIRIS
Conference. JPL Pub. 90-54.
Colwell, Robert N., ed. 1983. Manual of Remote Sensing. Falls Church, Virginia: American
Society of Photogrammetry.
Conrac Corp., Conrac Division. 1980. Raster Graphics Handbook. Covina, California:
Conrac Corp.
Crippen, Robert E. July 1989. “Development of Remote Sensing Techniques for the
Investigation of Neotectonic Activity, Eastern Transverse Ranges and Vicinity,
Southern California.” Ph.D. Diss., University of California, Santa Barbara.
Crippen, Robert E. 1989. “A Simple Spatial Filtering Routine for the Cosmetic Removal
of Scan-Line Noise from Landsat TM P-Tape Imagery.” Photogrammetric
Engineering & Remote Sensing, Vol. 55, No. 3: 327-331.
Crippen, Robert E. 1987. “The Regression Intersection Method of Adjusting Image Data
for Band Ratioing.” International Journal of Remote Sensing, Vol. 8, No. 2: 137-155.
Crist, E. P., et al. 1986. “Vegetation and Soils Information Contained in Transformed
Thematic Mapper Data.” Proceedings of IGARSS’ 86 Symposium, ESA Publications
Division, ESA SP-254.
Crist, E. P., and R. J. Kauth. 1986. “The Tasseled Cap De-Mystified.” Photogrammetric
Engineering & Remote Sensing, Vol. 52, No. 1: 81-86.
Dangermond, Jack. 1988. “A Review of Digital Data Commonly Available and Some of
the Practical Problems of Entering Them into a GIS.” Fundamentals of Geographic
Information Systems: A Compendium, edited by William J. Ripple. Bethesda,
Maryland: American Society for Photogrammetric Engineering and Remote
Sensing and the American Congress on Surveying and Mapping.
Defense Mapping Agency Aerospace Center. 1989. Defense Mapping Agency Product
Specifications for ARC Digitized Raster Graphics (ADRG). St. Louis, Missouri: DMA
Aerospace Center.
Duda, Richard O., and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. New
York: John Wiley & Sons, Inc.
Elachi, Charles. 1987. Introduction to the Physics and Techniques of Remote Sensing. New
York: John Wiley & Sons.
Elachi, Charles. 1992. “Radar Images of the Earth from Space.” Exploring Space.
Elachi, Charles. 1987. Spaceborne Radar Remote Sensing: Applications and Techniques. New
York: IEEE Press.
Elassal, Atef A., and Vincent M. Caruso. 1983. USGS Digital Cartographic Data Standards:
Digital Elevation Models. Circular 895-B. Reston, Virginia: U.S. Geological Survey.
ESRI. 1992. ARC Command References 6.0. Redlands, California: ESRI, Inc.
ESRI. 1992. Data Conversion: Supported Data Translators. Redlands, California: ESRI, Inc.
ESRI. 1992. Map Projections & Coordinate Management: Concepts and Procedures. Redlands,
California: ESRI, Inc.
ESRI. 1990. Understanding GIS: The ARC/INFO Method. Redlands, California: ESRI, Inc.
Fahnestock, James D., and Robert A. Schowengerdt. 1983. “Spatially Variant Contrast
Enhancement Using Local Range Modification.” Optical Engineering, Vol. 22, No.
3.
Fisher, P. F. 1991. “Spatial Data Sources and Data Problems.” Geographical Information
Systems: Principles and Applications, edited by David J. Maguire, Michael F.
Goodchild, and David W. Rhind. New York: Longman Scientific & Technical.
Flaschka, H. A. 1969. Quantitative Analytical Chemistry: Vol 1. New York: Barnes &
Noble, Inc.
Fraser, S. J., et al. 1986. “Targeting Epithermal Alteration and Gossans in Weathered
and Vegetated Terrains Using Aircraft Scanners: Successful Australian Case
Histories.” Paper presented at the Fifth Thematic Conference: Remote Sensing for
Exploration Geology, Reno, Nevada.
Freden, Stanley C., and Frederick Gordon, Jr. 1983. “Landsat Satellites.” Chapter 12 in
Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia:
American Society of Photogrammetry.
Frost, Victor S., Stiles, Josephine A., Shanmugan, K. S., and Holtzman, Julian C. 1982.
“A Model for Radar Images and Its Application to Adaptive Digital Filtering of
Multiplicative Noise.” IEEE Transactions on Pattern Analysis and Machine Intelli-
gence, Vol. PAMI-4, No. 2, March 1982.
Geological Remote Sensing Group Newsletter. May 1992. No. 5. Institute of Hydrology,
Wallingford, OX10, United Kingdom.
Gonzalez, Rafael C., and Paul Wintz. 1977. Digital Image Processing. Reading, Massachu-
setts: Addison-Wesley Publishing Company.
Gonzalez, Rafael C., and Richard E. Woods. 1992. Digital Image Processing. Reading,
Massachusetts: Addison-Wesley Publishing Company.
Green, A.A. and Craig, M.D. 1985. “Analysis of Aircraft Spectrometer Data with
Logarithmic Residuals.” Proceedings of the AIS Data Analysis Workshop. JPL Pub.
85-41:111-119.
Guptill, Stephen C., ed. 1988. A Process for Evaluating Geographic Information Systems.
U.S. Geological Survey Open-File Report 88-105.
Haralick, Robert M. 1979. “Statistical and Structural Approaches to Texture.”
Proceedings of the IEEE, Vol. 67, No. 5: 786-804. Seattle, Washington.
Hodgson, Michael E., and Bill M. Shelley. 1993. “Removing the Topographic Effect in
Remotely Sensed Imagery.” ERDAS Monitor, Fall 1993. Contact Dr. Hodgson,
Dept. of Geography, University of Colorado, Boulder, CO 80309-0260.
Holcomb, Derrold W. 1993. “Merging Radar and VIS/IR Imagery.” Paper submitted to
the 1993 ERIM Conference, Pasadena, California.
Hord, R. Michael. 1982. Digital Image Processing of Remotely Sensed Data. New York:
Academic Press.
Irons, James R., and Gary W. Petersen. 1981. “Texture Transforms of Remote Sensing
Data,” Remote Sensing of Environment, Vol. 11: 359-370.
Jensen, John R., et al. 1983. “Urban/Suburban Land Use Analysis.” Chapter 30 in
Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia:
American Society of Photogrammetry.
Jensen, John R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective.
Englewood Cliffs, New Jersey: Prentice-Hall.
Jordan, III, Lawrie E., Bruce Q. Rado, and Stephen L. Sperry. 1992. “Meeting the Needs
of the GIS and Image Processing Industry in the 1990s.” Photogrammetric
Engineering & Remote Sensing, Vol. 58, No. 8: 1249-1251.
Keates, J. S. 1973. Cartographic Design and Production. London: Longman Group Ltd.
Kidwell, Katherine B., ed. 1988. NOAA Polar Orbiter Data (TIROS-N, NOAA-6, NOAA-
7, NOAA-8, NOAA-9, and NOAA-10) Users Guide. Washington, DC: National
Oceanic and Atmospheric Administration.
Kneizys, F. X., et al. 1988. Users Guide to LOWTRAN 7. Hanscom AFB, Massachusetts:
Air Force Geophysics Laboratory.
Larsen, Richard J., and Morris L. Marx. 1981. An Introduction to Mathematical Statistics
and Its Applications. Englewood Cliffs, New Jersey: Prentice-Hall.
Lee, J. E., and J. M. Walsh. 1984. Map Projections for Use with the Geographic Information
System. U.S. Fish and Wildlife Service, FWS/OBS-84/17.
Lee, Jong-Sen. 1981. “Speckle Analysis and Smoothing of Synthetic Aperture Radar
Images.” Computer Graphics and Image Processing, Vol. 17:24-32.
Lillesand, Thomas M., and Ralph W. Kiefer. 1987. Remote Sensing and Image Interpre-
tation. New York: John Wiley & Sons, Inc.
Lopes, A., Nezry, E., Touzi, R., and Laur, H. 1990. “Maximum A Posteriori Speckle
Filtering and First Order Texture Models in SAR Images.” International Geoscience
and Remote Sensing Symposium (IGARSS).
Lue, Yan and Kurt Novak. 1991. “Recursive Grid - Dynamic Window Matching for
Automatic DEM Generation.” 1991 ACSM-ASPRS Fall Convention Technical
Papers.
Lyon, R.J.P. 1987. “Evaluation of AIS-2 Data over Hydrothermally Altered Granitoid
Rocks.” Proceedings of the Third AIS Data Analysis Workshop. JPL Pub. 87-30:107-
119.
Maling, D. H. 1992. Coordinate Systems and Map Projections. 2nd ed. New York:
Pergamon Press.
Mendenhall, William, and Richard L. Scheaffer. 1973. Mathematical Statistics with Appli-
cations. North Scituate, Massachusetts: Duxbury Press.
Menon, Sudhakar, Peng Gao, and CiXiang Zhan. 1991. “GRID: A Data Model and
Functional Map Algebra for Raster Geo-processing.” GIS/LIS ‘91 Proceedings, Vol.
2: 551-561. Bethesda, Maryland: American Society for Photogrammetry and
Remote Sensing.
Merenyi, E., Taranik, J.V., Minor, Tim, and Farrand, W. March 1996. “Quantitative
Comparison of Neural Network and Conventional Classifiers for Hyperspectral
Imagery.” Proceedings of the Sixth AVIRIS Conference. JPL Pub.
Minnaert, J. L., and G. Szeicz. 1961. “The Reciprocity Principle in Lunar Photometry.”
Astrophysical Journal, Vol. 93: 403-410.
Nagao, Makoto, and Takashi Matsuyama. 1978. “Edge Preserving Smoothing.”
Computer Graphics and Image Processing, Vol. 9: 394-407.
Needham, Bruce H. 1986. “Availability of Remotely Sensed Data and Information from
the U.S. National Oceanic and Atmospheric Administration’s Satellite Data
Services Division.” Chapter 9 in Satellite Remote Sensing for Resources Development,
edited by Karl-Heinz Szekielda. Gaithersburg, Maryland: Graham & Trotman,
Inc.
Nichols, David, et al. 1983. “Digital Hardware.” Chapter 20 in Manual of Remote Sensing,
edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photo-
grammetry.
Oppenheim, Alan V., and Ronald W. Schafer. 1975. Digital Signal Processing. Englewood
Cliffs, New Jersey: Prentice-Hall, Inc.
Pearson, Frederick. 1990. Map Projections: Theory and Applications. Boca Raton, Florida:
CRC Press, Inc.
Peli, Tamar, and Jae S. Lim. 1982. “Adaptive Filtering for Image Enhancement.” Optical
Engineering, Vol. 21, No. 1.
Pratt, William K. 1991. Digital Image Processing. New York: John Wiley & Sons, Inc.
Press, William H., et al. 1988. Numerical Recipes in C. New York, New York: Cambridge
University Press.
Rado, Bruce Q. 1992. “An Historical Analysis of GIS.” Mapping Tomorrow’s Resources.
Logan, Utah: Utah State University.
Robinson, Arthur H., and Randall D. Sale. 1969. Elements of Cartography. 3rd ed. New
York: John Wiley & Sons, Inc.
Sabins, Floyd F., Jr. 1987. Remote Sensing Principles and Interpretation. New York: W. H.
Freeman and Co.
Sader, S. A., and J. C. Winne. 1992. “RGB-NDVI Colour Composites For Visualizing
Forest Change Dynamics.” International Journal of Remote Sensing, Vol. 13, No. 16:
3055-3067.
Schowengerdt, Robert A. 1983. Techniques for Image Processing and Classification in Remote
Sensing. New York: Academic Press.
Schwartz, A. A., and J. M. Soha. 1977. “Variable Threshold Zonal Filtering.” Applied
Optics, Vol. 16, No. 7.
Short, Nicholas M. 1982. The Landsat Tutorial Workbook: Basics of Satellite Remote Sensing.
Washington, DC: National Aeronautics and Space Administration.
Simonett, David S., et al. 1983. “The Development and Principles of Remote Sensing.”
Chapter 1 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church,
Virginia: American Society of Photogrammetry.
Slater, P. N. 1980. Remote Sensing: Optics and Optical Systems. Reading, Massachusetts:
Addison-Wesley Publishing Company, Inc.
Smith, J., T. Lin, and K. Ranson. 1980. “The Lambertian Assumption and Landsat Data.”
Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 9: 1183-1189.
Snyder, John P. 1987. Map Projections--A Working Manual. U.S. Geological Survey
Professional Paper 1395. Washington, DC: United States Government Printing Office.
Snyder, John P., and Philip M. Voxland. 1989. An Album of Map Projections. U.S.
Geological Survey Professional Paper 1453. Washington, DC: United States
Government Printing Office.
Srinivasan, Ram, Michael Cannon, and James White. 1988. “Landsat Destriping Using
Power Spectral Filtering.” Optical Engineering, Vol. 27, No. 11: 939-943.
Star, Jeffrey, and John Estes. 1990. Geographic Information Systems: An Introduction.
Englewood Cliffs, New Jersey: Prentice-Hall.
Steinitz, Carl, Paul Parker, and Lawrie E. Jordan, III. 1976. “Hand Drawn Overlays:
Their History and Perspective Uses.” Landscape Architecture, Vol. 66.
Swain, Philip H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis
(LARS Information Note 111572). West Lafayette, Indiana: The Laboratory for
Applications of Remote Sensing, Purdue University.
Swain, Philip H., and Shirley M. Davis. 1978. Remote Sensing: The Quantitative Approach.
New York: McGraw Hill Book Company.
Tou, Julius T., and Rafael C. Gonzalez. 1974. Pattern Recognition Principles. Reading,
Massachusetts: Addison-Wesley Publishing Company.
Tucker, Compton J. 1979. “Red and Photographic Infrared Linear Combinations for
Monitoring Vegetation.” Remote Sensing of Environment, Vol. 8: 127-150.
Walker, Terri C., and Richard K. Miller. 1990. Geographic Information Systems: An
Assessment of Technology, Applications, and Products. Madison, Georgia: SEAI
Technical Publications.
Welch, Roy. 1990. “3-D Terrain Modeling for GIS Applications.” GIS World, Vol. 3, No.
5.
Welch, R., and W. Ehlers. 1987. “Merging Multiresolution SPOT HRV and Landsat TM
Data.” Photogrammetric Engineering & Remote Sensing, Vol. 53, No. 3: 301-303.
Wolberg, George. 1990. Digital Image Warping. IEEE Computer Society Press
Monograph.
Yang, Xinghe, R. Robinson, H. Lin, and A. Zusmanis. 1993. “Digital Ortho Corrections
Using Pre-transformation Distortion Adjustment.” 1993 ASPRS Technical Papers.
New Orleans. Vol. 3: 425-434.
Zamudio, J.A. and Atkinson, W.W. 1990. “Analysis of AVIRIS data for Spectral
Discrimination of Geologic Materials in the Dolly Varden Mountains.”
Proceedings of the Second AVIRIS Conference. JPL Pub. 90-54:162-66.
    for display 107, 135, 453
    linear 133, 134
    min/max vs. standard deviation 108, 136
    nonlinear 134
    piecewise linear 134
contrast table 106
control point 276
convolution 19
    cubic 341
    filtering 144, 341, 342, 375
    kernel
        crisp 149
        edge detector 147
        edge enhancer 148
        gradient 202
        high frequency 145, 148
        low frequency 149, 341
        Prewitt 202
        zero-sum 146
convolution kernel 144
    high frequency 342
    low frequency 342
    Prewitt 202
coordinate
    Cartesian 41, 422
    conversion 345
    file 4, 41, 122
    geographic 422, 531
    map 4, 41, 311, 314, 316
    planar 422
    reference 315, 316
    retransformed 330, 338
    source 316
    spherical 422
coordinate system 4
correlation calculations 294
correlation threshold 329
correlation windows 294
covariance 238, 251, 454
    sample 454
covariance matrix 157, 235, 240, 253, 455
cross correlation 295
D
data 360
    airborne sensor 51
    ancillary 220
    categorical 3
    complex 53, 479
    compression 153, 232
    continuous 3, 27, 106, 364, 472
        displaying 109
    creating 122
    elevation 220, 347
    enhancement 126
    floating point 478
    from aircraft 70
    geocoded 22, 32, 312
    gray scale 118
    hyperspectral 11
    interval 3
    nominal 3
    ordering 84
    ordinal 3
    packed 62
    pseudo color 118
    radar 51, 64
        applications 68
        bands 66
        merging 211
    raster 4, 113
        converting to vector 87
        editing 35
        formats (BIL, etc.) 24
        importing and exporting 51
        in GIS 362
        sources 51
    ratio 3
    satellite 51
    structure 159
    thematic 3, 27, 110, 223, 366, 472
        displaying 112
    tiled 29, 53
    topographic 81, 347
        using 83
    true color 118
    vector 113, 118, 313, 345, 367
        converting to raster 87, 365
        copying 43
        displaying 45
        editing 394
            densify 394
            generalize 394
            reshape 394
            spline 394
            split 394
            unsplit 394
        from raster data 47
        importing 47, 53
        in GIS 362
        renaming 43
        sources 47, 49, 51
        structure 42
        viewing 117
            multiple layers 119
            overlapping layers 119
data correction 19, 35, 125, 129
    geometric 129, 131, 311
    radiometric 129, 207, 312
data file value 1, 34
    display 107, 122, 134
    in classification 215
data storage 20
database
    image 31
decision rule 217, 243
    Bayesian 252
    feature space 248
    Mahalanobis distance 251
    maximum likelihood 252, 254
    minimum distance 250, 252, 254, 257
    non-parametric 244
    parallelepiped 246
    parametric 244
decorrelation stretch 158
degrees of freedom 257
DEM 2, 28, 52, 81, 82, 131, 292, 312, 348
    editing 36
    interpolation 37, 292
    ordering 84
density 25
descriptive information
    see attribute information 44
Design with Nature (by Ian McHarg) 359
desktop scanners 269
detector 54, 129
Developers’ Toolkit 371, 430
DGN 49
digital elevation model (DEM) 292
digital image 47
digital orthophoto 299
    cell sizes 301
    creation 300
digital orthophotography 348
Digital Photogrammetry 262
digital picture
    see image 98
digital terrain model (DTM) 51, 292
digitizing 47, 314
    GCPs 316
    operation modes 48
    point mode 48
    screen 47, 49
    stream mode 48
    tablet 47
DIME 93
dimensionality 153, 220, 460
disk space 26
diskette 20, 24
displacement 304
N
NPO Mashinostroenia 69
nadir 55, 283, 302
nadir line 302
nadir point 302
NASA 57, 64, 70
NASA/JPL 69
natural-color 106
nearest neighbor
    see resample
neatline 410
neighborhood analysis 372, 375
    boundary 376
    density 376
    diversity 376
    majority 376
    maximum 376
    mean 376
    median 376
    minimum 376
    minority 377
    rank 377
    standard deviation 377
    sum 377
9-track tape 23, 25
NOAA 62
node 40
    dangling 397
    from-node 40
    pseudo 397
    to-node 40
noise removal 188
nominal
    classes 337, 365
    data 3
Non-Lambertian reflectance model 356, 357
nonlinear transformation 321, 325, 327
normal distribution 153, 251, 252, 253, 450, 454, 459
Normalized Difference Vegetation Index (NDVI) 11, 166
O
.ovr file 404
Oblique Mercator 548, 563, 588, 592
oceanography 68
off-nadir 60, 283
offset 319
oil exploration 68
1:24,000 scale 82
1:250,000 scale 82
opacity 119, 368
optical disk 23
orbit 280
order
    of polynomial 461
    of transformation 461
ordinal
    classes 337, 365
    data 3
orientation angle 284
orthocorrection 83, 132, 209, 298, 306, 308, 312
orthogonal 298
orthogonal distance 272
Orthographic 551
orthographic projection 298, 299
orthoimage 299
orthomap 308
orthorectification 298, 308, 312
output file 26, 335, 336, 390
    .img 135
    classification 259
overlay 372, 379
overlay file 404
P
panchromatic imagery 55, 60
parallel 422
parallelepiped
    alarm 236
SPOT 10, 15, 18, 19, 28, 32, 47, 52, 55, 60, 78, 95, 125, 131, 151, 317
    ordering 84
    panchromatic 15, 150
    XS 60
        displaying 106
SPOT bundle adjustment 286
standard deviation 108, 136, 237, 287, 451
    sample 453
standard meridian 417, 420
standard parallel 417, 419
State Plane 424, 429, 538, 563, 578
statistics 30, 446, 472
    signature 236
Stereographic 554, 575
stereopair 288
    aerial 288
    epipolar 289
    SPOT 289
stereo-scene 283
stereoscopic collection 307
stereoscopic imagery 61
strip of photographs 266
striping 19, 156, 193
subset 33
summation 445
sun angle 354
Sun Raster 52, 87, 88
sun-synchronous orbit 280
surface generation
    weighting function 38
swath width 54
symbol 411
    abstract 411
    function 411
    plan 411
    profile 411
    replicative 411
symbolization 39
symbology 45
T
.tif file 440
tangent 419
tape 20
Tasseled Cap transformation 159, 166
Tektronix
    Inkjet Printer 441
    Phaser II SD 442
    Phaser Printer 441
texture analysis 204
thematic data
    see data
theme 363
threshold 255
thresholding 254
thresholding (classification) 251
tick mark 410
tie point 278, 287
TIFF 52, 71, 87, 88, 439, 440
TIGER 49, 51, 53, 95
    disk space requirement 96
tiled format 470
TIN 292
topocentric coordinate system 264
topographic database 308
topographic effect 356
topographic map 308
topology 41, 395
    build 395
    clean 395
    constructing 395
total field of view 54
total RMS error 332
training 215
    supervised 215, 219
    supervised vs. unsupervised 219
    unsupervised 216, 219, 227
training field 221
training sample 221, 224, 258, 313, 459
    defining 222