ERDAS Field Guide

Fourth Edition, Revised and Expanded

ERDAS®, Inc.
Atlanta, Georgia
Copyright © 1982 - 1997 by ERDAS, Inc. All rights reserved.

First Edition published 1990. Second Edition published 1991.

Third Edition reprinted 1995. Fourth Edition printed 1997.

Printed in the United States of America.

ERDAS proprietary - copying and disclosure prohibited without express permission from ERDAS, Inc.

ERDAS, Inc.

2801 Buford Highway, NE
Atlanta, Georgia 30329-2137 USA
Phone: 404/248-9000
Fax: 404/248-9400
User Support: 404/248-9777

ERDAS International

Telford House, Fulbourn

Cambridge CB1 5HB England

Phone: 011 44 1223 881 774

Fax: 011 44 1223 880 160

The information in this document is subject to change without notice.

Acknowledgments
The ERDAS Field Guide was originally researched, written, edited, and designed by
Chris Smith and Nicki Brown of ERDAS, Inc. The Second Edition was produced by
Chris Smith, Nicki Brown, Nancy Pyden, and Dana Wormer of ERDAS, Inc., with
assistance from Diana Margaret and Susanne Strater. The Third Edition was written and
edited by Chris Smith, Nancy Pyden, and Pam Cole of ERDAS, Inc. The Fourth Edition
was written and edited by Stacey Schrader and Russ Pouncey of ERDAS, Inc. Many,
many thanks go to David Sawyer, ERDAS Engineering Director, and the ERDAS
Software Engineers for their significant contributions to this and previous editions.
Without them this manual would not have been possible. Thanks also to Derrold
Holcomb for lending his expertise on the Enhancement chapter. Many others at ERDAS
provided valuable comments and suggestions in an extensive review process.

A special thanks to those industry experts who took time out of their hectic schedules
to review previous editions of the ERDAS Field Guide. Of these “external” reviewers,
Russell G. Congalton, D. Cunningham, Thomas Hack, Michael E. Hodgson, David
McKinsey, and D. Way deserve recognition for their contributions to previous editions.

Cover image: The image on the front cover of the ERDAS IMAGINE Ver. 8.3 manuals is
Global Relief Data from the National Geophysical Data Center (National Oceanic and
Atmospheric Administration, U.S. Department of Commerce).

Trademarks
ERDAS and ERDAS IMAGINE are registered trademarks of ERDAS, Inc. IMAGINE Essentials,
IMAGINE Advantage, IMAGINE Professional, IMAGINE Vista, IMAGINE Production, Model
Maker, CellArray, ERDAS Field Guide, and ERDAS IMAGINE Tour Guides are trademarks of
ERDAS, Inc. OrthoMAX is a trademark of Autometric, Inc. Restoration is a trademark of
Environmental Research Institute of Michigan. Other brands and product names are trademarks
of their respective owners. ERDAS IMAGINE Ver. 8.3. January, 1997. Part No. SWE-MFG4-8.3.0-ALLP.
Table of Contents
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Conventions Used in this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xiv

CHAPTER 1
Raster Data
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Image Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Remote Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Absorption/Reflection Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Spectral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Spatial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Radiometric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Temporal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Data Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Line Dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Data Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Storage Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Storage Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Calculating Disk Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
ERDAS IMAGINE Format (.img) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Image File Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Consistent Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Keeping Track of Image Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Geocoded Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Using Image Data in GIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Subsetting and Mosaicking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Multispectral Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Editing Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Editing Continuous (Athematic) Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Interpolation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

CHAPTER 2
Vector Layers
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Vector Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Attribute Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Displaying Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Symbolization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Vector Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Tablet Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Screen Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Imported Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Raster to Vector Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

CHAPTER 3
Raster and Vector Data Sources
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Importing and Exporting Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Importing and Exporting Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Satellite Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Satellite System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Satellite Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Landsat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
SPOT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
NOAA Polar Orbiter Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Advantages of Using Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Speckle Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Applications for Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Current Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Future Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Image Data from Aircraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
AIRSAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
AVIRIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Image Data from Scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
ADRG Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
ARC System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
ADRG File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
.OVR (overview) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
.IMG (scanned image data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
.Lxx (legend data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
ADRG File Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
ADRI Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
.OVR (overview) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
.IMG (scanned image data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
ADRI File Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
DEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

DTED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Using Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Ordering Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Addresses to Contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Raster Data from Other Software Vendors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
ERDAS Ver. 7.X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
GRID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Sun Raster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
TIFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Vector Data from Other Software Vendors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
ARCGEN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
AutoCAD (DXF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
DLG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
ETAK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
IGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
TIGER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

CHAPTER 4
Image Display
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Display Memory Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Colormap and Colorcells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Display Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8-bit PseudoColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
24-bit DirectColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
24-bit TrueColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
PC Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Displaying Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Thematic Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Using the IMAGINE Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Dithering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Viewing Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Viewing Multiple Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Linking Viewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Zoom and Roam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Geographic Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Enhancing Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Creating New Image Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

CHAPTER 5
Enhancement
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Display vs. File Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Spatial Modeling Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

Correcting Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Radiometric Correction - Visible/Infrared Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Atmospheric Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Geometric Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Radiometric Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Contrast Stretching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Brightness Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Spatial Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Convolution Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Crisp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Resolution Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Adaptive Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Spectral Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Principal Components Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Decorrelation Stretch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Tasseled Cap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
RGB to IHS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
IHS to RGB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Hyperspectral Image Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
IAR Reflectance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Log Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Rescale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Processing Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Spectrum Average . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Signal to Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Mean per Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Profile Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Wavelength Axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Spectral Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Fourier Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Fast Fourier Transform (FFT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Fourier Magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Inverse Fast Fourier Transform (IFFT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Fourier Noise Removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Homomorphic Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Radar Imagery Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Speckle Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Edge Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Radiometric Correction - Radar Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Slant-to-Ground Range Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

Merging Radar with VIS/IR Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

CHAPTER 6
Classification
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
The Classification Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Classification Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Classification Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Iterative Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Supervised vs. Unsupervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Classifying Enhanced Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Supervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Training Samples and Feature Space Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Selecting Training Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Evaluating Training Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Selecting Feature Space Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Unsupervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
ISODATA Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Signature Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Evaluating Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Alarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Contingency Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Separability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Signature Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Classification Decision Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Non-parametric Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Parametric Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Minimum Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Mahalanobis Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Maximum Likelihood/Bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Evaluating Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Accuracy Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Output File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

CHAPTER 7
Photogrammetric Concepts
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262

Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Pixel Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Image Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Ground Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Geocentric and Topocentric Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Work Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Image Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Aerial Camera Film . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Exposure Station . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Image Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Strip of Photographs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Block of Photographs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Digital Imagery from Satellites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Correction Levels for SPOT Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Image Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Scanning Aerial Film . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Photogrammetric Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Aerial Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
SPOT Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Triangulation Accuracy Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Stereo Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Aerial Stereopairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
SPOT Stereopairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Epipolar Stereopairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Generate Elevation Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Traditional Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Digital Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Elevation Model Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
DEM Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Image Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Image Matching Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Area Based Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Feature Based Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Relation Based Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Geometric Distortions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Aerial and SPOT Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Landsat Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Map Feature Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Stereoscopic Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Monoscopic Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Product Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Orthoimages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Orthomaps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Topographic Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Topographic Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308

CHAPTER 8
Rectification
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
When to Rectify . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
When to Georeference Only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Disadvantages of Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Rectification Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Ground Control Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
GCPs in ERDAS IMAGINE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Entering GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Orders of Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Nonlinear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Effects of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Minimum Number of GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
GCP Prediction and Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Residuals and RMS Error Per GCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Total RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Error Contribution by Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Tolerance of RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Evaluating RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Resampling Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
“Rectifying” to Lat/Lon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Bilinear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Cubic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Map to Map Coordinate Conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345

CHAPTER 9
Terrain Analysis
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Slope Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Aspect Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
Shaded Relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Topographic Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Lambertian Reflectance Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Non-Lambertian Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357

CHAPTER 10
Geographic Information Systems
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Data Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361

Continuous Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Thematic Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Raster Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Vector Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
ERDAS IMAGINE Analysis Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Analysis Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Proximity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Contiguity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Neighborhood Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Recoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Overlaying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Matrix Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Graphical Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
Model Maker Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Output Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Using Attributes in Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Script Modeling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Vector Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Editing Vector Coverages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Constructing Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Building and Cleaning Coverages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395

CHAPTER 11
Cartography
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Types of Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Thematic Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Legends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Neatlines, Tick Marks, and Grid Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411

Labels and Descriptive Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Typography and Lettering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Properties of Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Projection Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
Geographical and Planar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
Available Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Choosing a Map Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Spheroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Map Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Map Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434

CHAPTER 12
Hardcopy Output
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Printing Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Scale and Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Map Scaling Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Mechanics of Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
Halftone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
Continuous Tone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
Contrast and Color Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
RGB to CMY Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443

APPENDIX A
Math Topics
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Bin Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Covariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Covariance Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Dimensionality of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Measurement Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Mean Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Feature Space Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
n-Dimensional Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Spectral Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461


Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Transformation Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Matrix Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Transposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464

APPENDIX B
File Formats and Extensions
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
ERDAS IMAGINE File Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
ERDAS IMAGINE .img Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Sensor Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
Raster Layer Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Attribute Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Map Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Map Projection Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
Machine Independent Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
MIF Data Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
MIF Data Dictionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
ERDAS IMAGINE HFA File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Hierarchical File Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Pre-defined HFA File Object Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
Basic Objects of an HFA File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
HFA Object Directory for .img files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515

APPENDIX C
Map Projections
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
USGS Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
Albers Conical Equal Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Azimuthal Equidistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
Conic Equidistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Equirectangular (Plate Carrée) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
General Vertical Near-side Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Geographic (Lat/Lon) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Gnomonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
Lambert Azimuthal Equal Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Lambert Conformal Conic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
Miller Cylindrical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Modified Transverse Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
Oblique Mercator (Hotine) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
Orthographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
Polar Stereographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554

Polyconic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Sinusoidal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Space Oblique Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
State Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
Stereographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
Transverse Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
UTM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
Van der Grinten I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
External Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Bipolar Oblique Conic Conformal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
Cassini-Soldner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
Laborde Oblique Mercator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
Modified Polyconic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
Modified Stereographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
Mollweide Equal Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
Rectified Skew Orthomorphic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Robinson Pseudocylindrical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
Southern Orientated Gauss Conformal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
Winkel’s Tripel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645


List of Figures
Figure 1: Pixels and Bands in a Raster Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Figure 2: Typical File Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Figure 3: Electromagnetic Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Figure 4: Sun Illumination Spectral Irradiance at the Earth’s Surface . . . . . . . . . . . . . . .7
Figure 5: Factors Affecting Radiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
Figure 6: Reflectance Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region . . . . . . . . . . . . . . 12
Figure 8: IFOV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Figure 9: Brightness Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Figure 10: Landsat TM - Band 2 (Four Types of Resolution). . . . . . . . . . . . . . . . . . . . . . 18
Figure 11: Band Interleaved by Line (BIL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Figure 12: Band Sequential (BSQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Figure 13: Image Files Store Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Figure 14: Example of a Thematic Raster Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 15: Examples of Continuous Raster Layers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Figure 16: Vector Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Figure 17: Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Figure 18: Workspace Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Figure 19: Attribute Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Figure 20: Symbolization Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Figure 21: Digitizing Tablet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Figure 22: Raster Format Converted to Vector Format . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Figure 23: Multispectral Imagery Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Figure 24: Landsat MSS vs. Landsat TM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Figure 25: SPOT Panchromatic vs. SPOT XS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Figure 26: SLAR Radar (Lillesand and Kiefer 1987) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Figure 27: Received Radar Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Figure 28: Radar Reflection from Different Sources and Distances
(Lillesand and Kiefer 1987) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Figure 29: ADRG Overview File Displayed in ERDAS IMAGINE Viewer . . . . . . . . . . . . . . 73
Figure 30: Subset Area with Overlapping ZDRs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Figure 31: Seamless Nine Image DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Figure 32: ADRI Overview File Displayed in ERDAS IMAGINE Viewer. . . . . . . . . . . . . . . 79
Figure 33: ARC/Second Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Figure 34: Example of One Seat with One Display and Two Screens . . . . . . . . . . . . . . . 97
Figure 35: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . . . 102
Figure 36: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . . . 103
Figure 37: Transforming Data File Values to Screen Values . . . . . . . . . . . . . . . . . . . . . 104
Figure 38: Contrast Stretch and Colorcell Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Figure 39: Stretching by Min/Max vs. Standard Deviation . . . . . . . . . . . . . . . . . . . . . . 108
Figure 40: Continuous Raster Layer Display Process . . . . . . . . . . . . . . . . . . . . . . . . . 109
Figure 41: Thematic Raster Layer Display Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Figure 42: Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Figure 43: Example of Dithering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Figure 44: Example of Color Patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Figure 45: Linked Viewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Figure 46: Histograms of Radiometrically Enhanced Data . . . . . . . . . . . . . . . . . . . . . . 132
Figure 47: Graph of a Lookup Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Figure 48: Enhancement with Lookup Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Figure 49: Nonlinear Radiometric Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Figure 50: Piecewise Linear Contrast Stretch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Figure 51: Contrast Stretch By Manipulating Lookup Tables
and the Effect on the Output Histogram . . . . . . . . . . . . . . . . . . . . . . . . . 137
Figure 52: Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Figure 53: Histogram Equalization Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Figure 54: Equalized Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Figure 55: Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Figure 56: Spatial Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Figure 57: Applying a Convolution Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Figure 58: Output Values for Convolution Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Figure 59: Local Luminance Intercept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Figure 60: Two Band Scatterplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Figure 61: First Principal Component. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Figure 62: Range of First Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Figure 63: Second Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Figure 64: Intensity, Hue, and Saturation Color Coordinate System . . . . . . . . . . . . . . 161
Figure 65: Hyperspectral Data Axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Figure 66: Rescale GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Figure 67: Spectrum Average GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Figure 68: Spectral Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Figure 69: Two-Dimensional Spatial Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Figure 70: Three-Dimensional Spatial Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Figure 71: Surface Profile. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Figure 72: One-Dimensional Fourier Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Figure 73: Example of Fourier Magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Figure 74: The Padding Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Figure 75: Comparison of Direct and Fourier Domain Processing . . . . . . . . . . . . . . . . 183
Figure 76: An Ideal Cross Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Figure 77: High-Pass Filtering Using the Ideal Window . . . . . . . . . . . . . . . . . . . . . . . . 186
Figure 78: Filtering Using the Bartlett Window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Figure 79: Filtering Using the Butterworth Window . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Figure 80: Homomorphic Filtering Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Figure 81: Effects of Mean and Median Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Figure 82: Regions of Local Region Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Figure 83: One-dimensional, Continuous Edge, and Line Models . . . . . . . . . . . . . . . . 200
Figure 84: A Very Noisy Edge Superimposed on an Ideal Edge . . . . . . . . . . . . . . . . . . 201
Figure 85: Edge and Line Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Figure 86: Adjust Brightness Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Figure 87: Range Lines vs. Lines of Constant Range . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Figure 88: Slant-to-Ground Range Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Figure 89: Example of a Feature Space Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Figure 90: Process for Defining a Feature Space Object . . . . . . . . . . . . . . . . . . . . . . . 225
Figure 91: ISODATA Arbitrary Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Figure 92: ISODATA First Pass. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Figure 93: ISODATA Second Pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Figure 94: RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Figure 95: Ellipse Evaluation of Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Figure 96: Classification Flow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Figure 97: Parallelepiped Classification Using Plus or Minus
Two Standard Deviations as Limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Figure 98: Parallelepiped Corners Compared to the Signature Ellipse . . . . . . . . . . . . . 248
Figure 99: Feature Space Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Figure 100: Minimum Spectral Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Figure 101: Histogram of a Distance Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Figure 102: Interactive Thresholding Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Figure 103: Pixel Coordinates and Image Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . 263
Figure 104: Sample Photogrammetric Work Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Figure 105: Exposure Stations along a Flight Path . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Figure 106: A Regular (Rectangular) Block of Aerial Photos . . . . . . . . . . . . . . . . . . . . 267
Figure 107: Triangulation Work Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Figure 108: Focal and Image Plane. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Figure 109: Image Coordinates, Fiducials, and Principal Point . . . . . . . . . . . . . . . . . . 273
Figure 110: Exterior Orientation of an Aerial Photo . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Figure 111: Control Points in Aerial Photographs
(block of 8 X 4 photos) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Figure 112: Ideal Point Distribution Over a Photograph for Aerial Triangulation . . . . . 278
Figure 113: Tie Points in a Block of Photos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Figure 114: Perspective Centers of SPOT Scan Lines . . . . . . . . . . . . . . . . . . . . . . . . . 280
Figure 115: Image Coordinates in a Satellite Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Figure 116: Interior Orientation of a SPOT Scene. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Figure 117: Inclination of a Satellite Stereo-Scene
(View from North to South) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Figure 118: Velocity Vector and Orientation Angle of a Single Scene . . . . . . . . . . . . . 285
Figure 119: Ideal Point Distribution Over a Satellite Scene for Triangulation . . . . . . . 286
Figure 120: Aerial Stereopair (60% Overlap). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Figure 121: SPOT Stereopair (80% Overlap) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Figure 122: Epipolar Stereopair Creation Work Flow . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Figure 123: Generate Elevation Models Work Flow (Method 1) . . . . . . . . . . . . . . . . . . . 291
Figure 124: Generate Elevation Models Work Flow (Method 2) . . . . . . . . . . . . . . . . . . . 291
Figure 125: Image Pyramid for Matching at Coarse to Full Resolution . . . . . . . . . . . . . 293
Figure 126: Orthorectification Work Flow (Method 1) . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Figure 127: Orthorectification Work Flow (Method 2) . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Figure 128: Orthographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Figure 129: Digital Orthophoto - Finding Gray Values . . . . . . . . . . . . . . . . . . . . . . . . . 300
Figure 130: Image Displacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Figure 131: Feature Collection Work Flow (Method 1) . . . . . . . . . . . . . . . . . . . . . . . . . 307
Figure 132: Feature Collection Work Flow (Method 2) . . . . . . . . . . . . . . . . . . . . . . . . . 307
Figure 133: Polynomial Curve vs. GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Figure 134: Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Figure 135: Nonlinear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Figure 136: Transformation Example—1st-Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Figure 137: Transformation Example—2nd GCP Changed . . . . . . . . . . . . . . . . . . . . . . 325
Figure 138: Transformation Example—2nd-Order. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Figure 139: Transformation Example—4th GCP Added. . . . . . . . . . . . . . . . . . . . . . . . . 326
Figure 140: Transformation Example—3rd-Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Figure 141: Transformation Example—Effect of a 3rd-Order Transformation . . . . . . . . 327
Figure 142: Residuals and RMS Error Per Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Figure 143: RMS Error Tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Figure 144: Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Figure 145: Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Figure 146: Bilinear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Figure 147: Linear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Figure 148: Cubic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Figure 149: Regularly Spaced Terrain Data Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Figure 150: 3 × 3 Window Calculates the Slope at Each Pixel. . . . . . . . . . . . . . . . . . . . 349
Figure 151: 3 × 3 Window Calculates the Aspect at Each Pixel. . . . . . . . . . . . . . . . . . . 352
Figure 152: Shaded Relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Figure 153: Data Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
Figure 154: Raster Attributes for lnlandc.img . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Figure 155: Vector Attributes CellArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Figure 156: Proximity Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Figure 157: Contiguity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Figure 158: Using a Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Figure 159: Sum Option of Neighborhood Analysis (Image Interpreter) . . . . . . . . . . . . 377
Figure 160: Overlay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Figure 161: Indexing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Figure 162: Graphical Model for Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 384
Figure 163: Graphical Model Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
Figure 164: Modeling Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Figure 165: Graphical and Script Models For Tasseled Cap Transformation . . . . . . . . 391
Figure 166: Layer Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Figure 167: Sample Scale Bars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Figure 168: Sample Legend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Figure 169: Sample Neatline, Tick Marks, and Grid Lines . . . . . . . . . . . . . . . . . . . . . . 410
Figure 170: Sample Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Figure 171: Sample Sans Serif and Serif Typefaces with Various Styles Applied. . . . . 413
Figure 172: Good Lettering vs. Bad Lettering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Figure 173: Projection Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Figure 174: Tangent and Secant Cones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Figure 175: Tangent and Secant Cylinders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Figure 176: Ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Figure 177: Layout for a Book Map and a Paneled Map . . . . . . . . . . . . . . . . . . . . . . . . 438
Figure 178: Sample Map Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Figure 179: Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Figure 180: Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Figure 181: Measurement Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Figure 182: Mean Vector. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Figure 183: Two Band Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Figure 184: Two Band Scatterplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Figure 185: Examples of Objects Stored in an .img File . . . . . . . . . . . . . . . . . . . . . . . . 468
Figure 186: Example of a 512 x 512 Layer with a Block Size of 64 x 64 Pixels . . . . . . . 471
Figure 187: HFA File Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Figure 188: HFA File Structure Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Figure 189: Albers Conical Equal Area Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Figure 190: Polar Aspect of the Azimuthal Equidistant Projection . . . . . . . . . . . . . . . . 524
Figure 191: Geographic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Figure 192: Lambert Azimuthal Equal Area Projection . . . . . . . . . . . . . . . . . . . . . . . . . 537
Figure 193: Lambert Conformal Conic Projection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
Figure 194: Mercator Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Figure 195: Miller Cylindrical Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
Figure 196: Orthographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
Figure 197: Polar Stereographic Projection and its Geometric Construction . . . . . . . . 556
Figure 198: Polyconic Projection of North America . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
Figure 199: Zones of the State Plane Coordinate System . . . . . . . . . . . . . . . . . . . . . . . 564
Figure 200: Stereographic Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Figure 201: Zones of the Universal Transverse Mercator Grid in the United States . . . 581
Figure 202: Van der Grinten I Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
List of Tables
Table 1: Description of File Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Table 2: Raster Data Formats for Direct Import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Table 3: Vector Data Formats for Import and Export . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Table 4: Commonly Used Bands for Radar Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Table 5: Current Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Table 6: ARC System Chart Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Table 7: Legend Files for the ARC System Chart Types . . . . . . . . . . . . . . . . . . . . . . . . . 76
Table 8: Common Raster Data Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Table 9: File Types Created by Screendump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Table 10: The Most Common TIFF Format Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Table 11: Conversion of DXF Entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Table 12: Conversion of IGES Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Table 13: Colorcell Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Table 14: Commonly Used RGB Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Table 15: Overview of Zoom Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Table 16: Description of Modeling Functions Available for Enhancement . . . . . . . . . . 127
Table 17: Theoretical Coefficient of Variation Values. . . . . . . . . . . . . . . . . . . . . . . . . . 195
Table 18: Parameters for Sigma Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Table 19: Pre-Classification Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Table 20: Training Sample Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Table 21: Example of a Recoded Land Cover Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Table 22: Attribute Information for parks.img . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Table 23: General Editing Operations and Supporting Feature Types . . . . . . . . . . . . . 394
Table 24: Comparison of Building and Cleaning Coverages . . . . . . . . . . . . . . . . . . . . . 396
Table 25: Common Map Scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Table 26: Pixels per Inch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Table 27: Acres and Hectares per Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Table 28: Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Table 29: Projection Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Table 30: Spheroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Table 31: ERDAS IMAGINE File Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Table 32: Usage of Binning Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
Table 33: NAD27 State Plane coordinate system zone numbers, projection types,
and zone code numbers for the United States . . . . . . . . . . . . . . . . . . . . 565
Table 34: NAD83 State Plane coordinate system zone numbers, projection types,
and zone code numbers for the United States . . . . . . . . . . . . . . . . . . . . 570
Table 35: UTM zones, central meridians, and longitude ranges . . . . . . . . . . . . . . . . . . 582
Preface

Introduction
The purpose of the ERDAS Field Guide is to provide background information on why
one might use particular GIS and image processing functions and how the software is
manipulating the data, rather than what buttons to push to actually perform those
functions. This book is also aimed at a diverse audience: from those who are new to
geoprocessing to those savvy users who have been in this industry for years. For the
novice, the ERDAS Field Guide provides a brief history of the field, an extensive glossary
of terms, and notes about applications for the different processes described. For the
experienced user, the ERDAS Field Guide includes the formulas and algorithms that are
used in the code, so that he or she can see exactly how each operation works.

Although the ERDAS Field Guide is primarily a reference to basic image processing and
GIS concepts, it is geared toward ERDAS IMAGINE users and the functions within
ERDAS IMAGINE software, such as GIS analysis, image processing, cartography and
map projections, graphics display hardware, statistics, and remote sensing. However,
in some cases, processes and functions are described that may not be in the current
version of the software, but planned for a future release. There may also be functions
described that are not available on your system, due to the actual package that you are
using.

The enthusiasm with which the first three editions of the ERDAS Field Guide were
received has been extremely gratifying, both to the authors and to ERDAS as a whole.
First conceived as a helpful manual for ERDAS users, the ERDAS Field Guide is now
being used as a textbook, lab manual, and training guide throughout the world.

The ERDAS Field Guide will continue to expand and improve to keep pace with the
profession. Suggestions and ideas for future editions are always welcome, and should
be addressed to the Technical Writing division of Engineering at ERDAS, Inc., in
Atlanta, Georgia.


Conventions Used in this Book
The following paragraphs are used throughout the ERDAS Field Guide and other ERDAS IMAGINE documentation.

These paragraphs contain strong warnings or important tips.

These paragraphs direct you to the ERDAS IMAGINE software function that accomplishes the
described task.

These paragraphs lead you to other chapters in the ERDAS Field Guide or other manuals for
additional information.


CHAPTER 1
Raster Data

Introduction
The ERDAS IMAGINE system incorporates the functions of both image processing and
geographic information systems (GIS). These functions include importing, viewing,
altering, and analyzing raster and vector data sets.

This chapter is an introduction to raster data, including:

• remote sensing

• data storage formats

• different types of resolution

• radiometric correction

• geocoded data

• using raster data in GIS

See "CHAPTER 2: Vector Layers" for more information on vector data.

Image Data
In general terms, an image is a digital picture or representation of an object. Remotely
sensed image data are digital representations of the earth. Image data are stored in data
files, also called image files, on magnetic tapes, computer disks, or other media. The
data consist only of numbers. These representations form images when they are
displayed on a screen or are output to hardcopy.

Each number in an image file is a data file value. Data file values are sometimes
referred to as pixels. The term pixel is abbreviated from picture element. A pixel is the
smallest part of a picture (the area being scanned) with a single value. The data file
value is the measured brightness value of the pixel at a specific wavelength.

Raster image data are laid out in a grid similar to the squares on a checkerboard. Each
cell of the grid is represented by a pixel, also known as a grid cell.

In remotely sensed image data, each pixel represents an area of the earth at a specific
location. The data file value assigned to that pixel is the record of reflected radiation or
emitted heat from the earth’s surface at that location.

Data file values may also represent elevation, as in digital elevation models (DEMs).

NOTE: DEMs are not remotely sensed image data, but are currently being produced from stereo
points in radar imagery.

The terms “pixel” and “data file value” are not interchangeable in ERDAS IMAGINE. Pixel is
used as a broad term with many meanings, one of which is data file value. One pixel in a file may
consist of many data file values. When an image is displayed or printed, other types of values are
represented by a pixel.

See "CHAPTER 4: Image Display" for more information on how images are displayed.

Bands
Image data may include several bands of information. Each band is a set of data file
values for a specific portion of the electromagnetic spectrum of reflected light or
emitted heat (red, green, blue, near-infrared, infrared, thermal, etc.) or some other user-
defined information created by combining or enhancing the original bands, or creating
new bands from other sources.

ERDAS IMAGINE programs can handle an unlimited number of bands of image data
in a single file.

Figure 1: Pixels and Bands in a Raster Image (a raster with 3 bands; each cell of the grid is 1 pixel)
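
The grid-of-pixels-and-bands organization can be sketched with a small array. The following Python snippet is only an illustration (it is not ERDAS IMAGINE code, and the array values are invented); it treats a raster as a rows x columns x bands array in which each element is one data file value.

    import numpy as np

    # A small 3-band raster: 4 rows x 5 columns x 3 bands.
    # Each element is one data file value (a brightness value for one band).
    image = np.random.randint(0, 256, size=(4, 5, 3), dtype=np.uint8)

    rows, columns, bands = image.shape
    print(rows, columns, bands)      # 4 5 3

    # The three data file values recorded for the pixel in row 1, column 3:
    print(image[1, 3, :])            # one value per band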

See "CHAPTER 5: Enhancement" for more information on combining or enhancing bands of


data.

Bands vs. Layers


In ERDAS IMAGINE, bands of data are usually referred to as layers. Once a band is
imported into a GIS, it becomes a layer of information which can be processed in
various ways. Additional layers can be created and added to the image file (.img
extension) in ERDAS IMAGINE, such as layers created by combining existing layers.
Read more about .img files in ERDAS IMAGINE Format (.img) on page 27.


Layers vs. Viewer Layers


The IMAGINE Viewer permits several images to be layered, in which case each image
(including a multi-band image) may be a layer.

Numeral Types
The range and the type of numbers used in a raster layer determine how the layer is
displayed and processed. For example, a layer of elevation data with values ranging
from -51.257 to 553.401 would be treated differently from a layer using only two values
to show land and water.

The data file values in raster layers will generally fall into these categories:

• Nominal data file values are simply categorized and named. The actual value used
for each category has no inherent meaning—it is simply a class value. An example
of a nominal raster layer would be a thematic layer showing tree species.

• Ordinal data are similar to nominal data, except that the file values put the classes
in a rank or order. For example, a layer with classes numbered and named “1 -
Good,” “2 - Moderate,” and “3 - Poor” is an ordinal system.

• Interval data file values have an order, but the intervals between the values are also
meaningful. Interval data measure some characteristic, such as elevation or degrees
Fahrenheit, which does not necessarily have an absolute zero. (The difference
between two values in interval data is meaningful.)

• Ratio data measure a condition that has a natural zero, such as electromagnetic
radiation (as in most remotely sensed data), rainfall, or slope.

Nominal and ordinal data lend themselves to applications in which categories, or themes, are used. Therefore, these layers are sometimes called categorical or thematic.

Likewise, interval and ratio layers are more likely to measure a condition, causing the
file values to represent continuous gradations across the layer. Such layers are called
continuous.
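
As a brief illustration of why the distinction matters, the sketch below (hypothetical class codes and elevation values, not from any ERDAS data set) treats a nominal land cover layer categorically, counting cells per class, while treating a continuous elevation layer numerically.

    import numpy as np

    # Thematic (nominal) layer: the numbers are class labels only,
    # e.g. 1 = water, 2 = forest, 3 = urban.
    landcover = np.array([[1, 1, 2],
                          [2, 3, 3],
                          [1, 2, 2]])

    # Counting cells per class is meaningful; averaging the class numbers is not.
    classes, counts = np.unique(landcover, return_counts=True)
    print(dict(zip(classes.tolist(), counts.tolist())))   # {1: 3, 2: 4, 3: 2}

    # Continuous (interval/ratio) layer: elevation in meters.
    elevation = np.array([[ 10.2,  12.5,  30.0],
                          [ 55.1,  80.7, 120.3],
                          [-51.3, 553.4, 200.0]])

    # Arithmetic on the values themselves is meaningful here.
    print(elevation.mean())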

Coordinate Systems
The location of a pixel in a file or on a displayed or printed image is expressed using a
coordinate system. In two-dimensional coordinate systems, locations are organized in
a grid of columns and rows. Each location on the grid is expressed as a pair of coordi-
nates known as X and Y. The X coordinate specifies the column of the grid, and the Y
coordinate specifies the row. Image data organized into such a grid are known as raster
data.

There are two basic coordinate systems used in ERDAS IMAGINE:

• file coordinates — indicate the location of a pixel within the image (data file)

• map coordinates — indicate the location of a pixel in a map

File Coordinates
File coordinates refer to the location of the pixels within the image (data) file. File
coordinates for the pixel in the upper left corner of the image always begin at 0,0.

Figure 2: Typical File Coordinates (columns are counted along x and rows along y, both starting at 0; the marked pixel lies at x,y = 3,1)

Map Coordinates
Map coordinates may be expressed in one of a number of map coordinate or projection
systems. The type of map coordinates used by a data file depends on the method used
to create the file (remote sensing, scanning an existing map, etc.). In ERDAS IMAGINE,
a data file can be converted from one map coordinate system to another.

For more information on map coordinates and projection systems, see "CHAPTER 11:
Cartography" or "APPENDIX C: Map Projections.". See "CHAPTER 8: Rectification" for
more information on changing the map coordinate system of a data file.
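
For a north-up image, the relationship between file coordinates and map coordinates can be modeled with the map position of the upper left pixel and the pixel (cell) size. The function below is a minimal sketch under those assumptions, with an invented origin and cell size; it is not the ERDAS IMAGINE conversion routine.

    def file_to_map(col, row, ul_x, ul_y, cell_size):
        # Assumes a north-up image with square pixels: easting grows with the
        # column index, northing decreases with the row index.
        easting = ul_x + col * cell_size
        northing = ul_y - row * cell_size
        return easting, northing

    # Hypothetical example: upper left corner at (350000 E, 3885000 N), 30 m pixels.
    print(file_to_map(col=100, row=200, ul_x=350000.0, ul_y=3885000.0, cell_size=30.0))
    # -> (353000.0, 3879000.0)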


Remote Sensing
Remote sensing is the acquisition of data about an object or scene by a sensor that is far
from the object (Colwell 1983). Aerial photography, satellite imagery, and radar are all
forms of remotely sensed data.

Usually, remotely sensed data refer to data of the earth collected from sensors on satel-
lites or aircraft. Most of the images used as input to the ERDAS IMAGINE system are
remotely sensed. However, the user is not limited to remotely sensed data.

This section is a brief introduction to remote sensing. There are many books available for more
detailed information, including Colwell 1983, Swain and Davis 1978, and Slater 1980 (see
“Bibliography”).

Electromagnetic Radiation Spectrum


The sensors on remote sensing platforms usually record electromagnetic radiation.
Electromagnetic radiation (EMR) is energy transmitted through space in the form of
electric and magnetic waves (Star and Estes 1990). Remote sensors are made up of
detectors that record specific wavelengths of the electromagnetic spectrum. The
electromagnetic spectrum is the range of electromagnetic radiation extending from
cosmic waves to radio waves (Jensen 1996).

All types of land cover—rock types, water bodies, etc.—absorb a portion of the electro-
magnetic spectrum, giving a distinguishable “signature” of electromagnetic radiation.
Armed with the knowledge of which wavelengths are absorbed by certain features and
the intensity of the reflectance, the user can analyze a remotely sensed image and make
fairly accurate assumptions about the scene. Figure 3 illustrates the electromagnetic
spectrum (Suits 1983; Star and Estes 1990).

Reflected Thermal

SWIR LWIR

Ultraviolet

Radar

0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 11.0 12.0 13.0 14.0 15.0 16.0 17.0
Near-infrared Middle-infrared Far-infrared
(0.7 - 2.0) (2.0 - 5.0) (8.0 - 15.0)
Visible
(0.4 - 0.7)
Blue (0.4 - 0.5)
Green (0.5 - 0.6) micrometers µm (one millionth of a meter)
Red (0.6 - 0.7)

Figure 3: Electromagnetic Spectrum

SWIR and LWIR
The near-infrared and middle-infrared regions of the electromagnetic spectrum are
sometimes referred to as the short wave infrared region (SWIR). This is to distinguish
this area from the thermal or far infrared region, which is often referred to as the long
wave infrared region (LWIR). The SWIR is characterized by reflected radiation
whereas the LWIR is characterized by emitted radiation.

Absorption/Reflection Spectra
When radiation interacts with matter, some wavelengths are absorbed and others are reflected. To enhance features in image data, it is necessary to understand how vegetation, soils, water, and other land covers reflect and absorb radiation. The study of the absorption and reflection of EMR waves is called spectroscopy.

Spectroscopy
Most commercial sensors, with the exception of imaging radar sensors, are passive
solar imaging sensors. Passive solar imaging sensors can only receive radiation waves;
they cannot transmit radiation. (Imaging radar sensors are active sensors which emit a
burst of microwave radiation and receive the backscattered radiation.)

The use of passive solar imaging sensors to characterize or identify a material of interest
is based on the principles of spectroscopy. Therefore, to fully utilize a visible/infrared
(VIS/IR) multispectral data set and properly apply enhancement algorithms, it is
necessary to understand these basic principles. Spectroscopy reveals the:

• absorption spectra — the EMR wavelengths that are absorbed by specific materials
of interest

• reflection spectra — the EMR wavelengths that are reflected by specific materials
of interest

Absorption Spectra
Absorption is based on the molecular bonds in the (surface) material. Which
wavelengths are absorbed depends upon the chemical composition and crystalline
structure of the material. For pure compounds, these absorption bands are so specific
that the SWIR region is often called “an infrared fingerprint.”

Atmospheric Absorption
In remote sensing, the sun is the radiation source for passive sensors. However, the sun
does not emit the same amount of radiation at all wavelengths. Figure 4 shows the solar
irradiation curve—which is far from linear.


Figure 4: Sun Illumination Spectral Irradiance at the Earth’s Surface (spectral irradiance, Wm-2 µm-1, plotted against wavelength, µm, from the UV through the visible into the infrared; the figure shows the solar irradiation curve outside the atmosphere above the solar irradiation curve at sea level, with absorption features caused by H2O, CO2, and O3. Modified from Chahine, et al 1983)

Solar radiation must travel through the earth’s atmosphere before it reaches the earth’s
surface. As it travels through the atmosphere, radiation is affected by four phenomena
(Elachi 1987):

• absorption — the amount of radiation absorbed by the atmosphere

• scattering — the amount of radiation scattered by the atmosphere away from the
field of view

• scattering source — divergent solar irradiation scattered into the field of view

• emission source — radiation re-emitted after absorption

Figure 5: Factors Affecting Radiation (illustrates the four phenomena listed above: absorption, scattering, scattering source, and emission source. Source: Elachi 1987)

Absorption is not a linear phenomenon; it is logarithmic with concentration (Flaschka 1969). In addition, the concentration of atmospheric gases, especially water vapor, is variable. The other major gases of importance are carbon dioxide (CO2) and ozone (O3), which can vary considerably around urban areas. Thus, the extent of atmospheric absorbance will vary with humidity, elevation, proximity to (or downwind of) urban smog, and other factors.

Scattering is modeled as Rayleigh scattering with a commonly used algorithm that
accounts for the scattering of short wavelength energy by the gas molecules in the
atmosphere (Pratt 1991)—for example, ozone. Scattering is variable with both
wavelength and atmospheric aerosols. Aerosols differ regionally (ocean vs. desert) and
daily (for example, Los Angeles smog has different concentrations daily).

Scattering source and emission source may account for only 5% of the variance. These
factors are minor, but they must be considered for accurate calculation. After inter-
action with the target material, the reflected radiation must travel back through the
atmosphere and be subjected to these phenomena a second time to arrive at the satellite.


The mathematical models that attempt to quantify the total atmospheric effect on the
solar illumination are called radiative transfer equations. Some of the most commonly
used are Lowtran (Kneizys 1988) and Modtran (Berk 1989).

See "CHAPTER 5: Enhancement" for more information on atmospheric modeling.

Reflectance Spectra
After rigorously defining the incident radiation (solar irradiation at target), it is possible
to study the interaction of the radiation with the target material. When an electromag-
netic wave (solar illumination in this case) strikes a target surface, three interactions are
possible (Elachi 1987):

• reflection

• transmission

• scattering

It is the reflected radiation, generally modeled as bidirectional reflectance (Clark 1984),
that is measured by the remote sensor.

Remotely sensed data are made up of reflectance values. The resulting reflectance
values translate into discrete digital numbers (or values) recorded by the sensing
device. These gray scale values will fit within a certain bit range (such as 0-255, which
is 8-bit data) depending on the characteristics of the sensor.

Each satellite sensor detector is designed to record a specific portion of the electromag-
netic spectrum. For example, Landsat TM band 1 records the 0.45 to 0.52 µm portion of
the spectrum and is designed for water body penetration, making it useful for coastal
water mapping. It is also useful for soil/vegetation discriminations, forest type
mapping, and cultural features identification (Lillesand and Kiefer 1987).

The characteristics of each sensor provide the first level of constraints on how to
approach the task of enhancing specific features, such as vegetation or urban areas.
Therefore, when choosing an enhancement technique, one should pay close attention to
the characteristics of the land cover types within the constraints imposed by the
individual sensors.

The use of VIS/IR imagery for target discrimination, whether the target is mineral,
vegetation, man-made, or even the atmosphere itself, is based on the reflectance
spectrum of the material of interest (see Figure 6). Every material has a characteristic
spectrum based on the chemical composition of the material. When sunlight (the illumi-
nation source for VIS/IR imagery) strikes a target, certain wavelengths are absorbed by
the chemical bonds; the rest are reflected back to the sensor. It is, in fact, the
wavelengths that are not returned to the sensor that provide information about the
imaged area.

Specific wavelengths are also absorbed by gases in the atmosphere (H2O vapor, CO2, O2,
etc.). If the atmosphere absorbs a large percentage of the radiation, it becomes difficult
or impossible to use that particular wavelength(s) to study the earth. For the present
Landsat and SPOT sensors, only the water vapor bands were considered strong enough
to exclude the use of their spectral absorption region. Figure 6 shows how Landsat TM
bands 5 and 7 were carefully placed to avoid these regions. Absorption by other
atmospheric gases was not extensive enough to eliminate the use of the spectral region
for present day broad band sensors.

Figure 6: Reflectance Spectra. The figure plots reflectance (%) against wavelength (0.4 to
2.4 µm) for kaolinite, green vegetation, and silt loam, with the Landsat MSS bands (4-7),
the Landsat TM bands (1-5, 7), and the atmospheric absorption bands marked along the
wavelength axis. Modified from Fraser 1986, Crist 1986, Sabins 1987.

NOTE: This chart is for comparison purposes only. It is not meant to show actual values. The
spectra are offset vertically for clarity and scale.

An inspection of the spectra reveals the theoretical basis of some of the indices in the
ERDAS IMAGINE Image Interpreter. Consider the vegetation index TM4/TM3. It is
readily apparent that for vegetation this value could be very large; for soils, much
smaller; and for clay minerals, near zero. Conversely, when the clay ratio TM5/TM7 is
considered, the opposite applies.
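
These ratios are simple to compute once the bands are co-registered. The following sketch,
assuming two NumPy arrays holding Landsat TM band 3 (red) and band 4 (near-infrared)
values, computes the TM4/TM3 ratio and the related normalized difference vegetation index
(NDVI) mentioned below; the array values and the small epsilon guarding against division by
zero are illustrative choices, not part of any ERDAS IMAGINE function.

    import numpy as np

    def band_ratio(numerator, denominator, eps=1e-6):
        # Element-wise ratio of two co-registered band arrays.
        return numerator / (denominator + eps)

    def ndvi(nir, red, eps=1e-6):
        # Normalized difference vegetation index: (NIR - red) / (NIR + red).
        return (nir - red) / (nir + red + eps)

    # Hypothetical reflectance values for TM band 3 (red) and TM band 4 (NIR)
    tm3 = np.array([[0.08, 0.10], [0.25, 0.30]])
    tm4 = np.array([[0.45, 0.50], [0.28, 0.32]])

    print(band_ratio(tm4, tm3))   # large for vegetation, closer to 1 for soil
    print(ndvi(tm4, tm3))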


Hyperspectral Data
As remote sensing moves toward the use of more and narrower bands (for example,
AVIRIS, with 224 bands that are only 10 nm wide), absorption by specific atmospheric gases
must be considered. These multiband sensors are called hyperspectral sensors. As
more and more of the incident radiation is absorbed by the atmosphere, the digital
number (DN) values of that band get lower, eventually becoming useless—unless one
is studying the atmosphere. Someone wanting to measure the atmospheric content of a
specific gas could utilize the bands of specific absorption.

NOTE: Hyperspectral bands are generally measured in nanometers (nm).

Figure 6 shows the spectral bandwidths of the channels for the Landsat sensors plotted
above the absorption spectra of some common natural materials (kaolin clay, silty loam
soil and green vegetation). Note that while the spectra are continuous, the Landsat
channels are segmented or discontinuous. We can still use the spectra in interpreting
the Landsat data. For example, an NDVI ratio for the three would be very different and,
hence, could be used to discriminate between the three materials. Similarly, the ratio
TM5/TM7 is commonly used to measure the concentration of clay minerals. Evaluation
of the spectra shows why.

Figure 7 shows detail of the absorption spectra of three clay minerals. Because of the
wide bandpass (2080-2350 nm) of TM band 7, it is not possible to discern between these
three minerals with the Landsat sensor. As mentioned, the AVIRIS hyperspectral sensor
has a large number of approximately 10 nm wide bands. With the proper selection of
band ratios, mineral identification becomes possible. With this dataset, it would be
possible to discriminate between these three clay minerals, again using band ratios. For
example, a color composite image prepared from RGB = 2160nm/2190nm,
2220nm/2250nm, 2350nm/2488nm could produce a color coded clay mineral image-
map.

The commercial airborne multispectral scanners are used in a similar fashion. The
Airborne Imaging Spectrometer from the Geophysical & Environmental Research
Corp. (GER) has 79 bands in the UV, visible, SWIR, and thermal-infrared regions. The
Airborne Multispectral Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the
visible, SWIR, and thermal-infrared regions. To properly utilize these hyperspectral
sensors, the user must understand the phenomenon involved and have some idea of the
target materials being sought.

Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region. The figure plots
reflectance (%) against wavelength (2000 to 2600 nm) for kaolinite, montmorillonite, and
illite, with the Landsat TM band 7 bandpass (2080 to 2350 nm) marked. Modified from
Sabins 1987.

NOTE: Spectra are offset vertically for clarity.

The characteristics of Landsat, AVIRIS, and other data types are discussed in "CHAPTER 3:
Raster and Vector Data Sources". See page 166 of "CHAPTER 5: Enhancement" for more
information on the NDVI ratio.


Imaging Radar Data


Radar remote sensors can be broken into two broad categories: passive and active. The
passive sensors record the very low intensity microwave radiation naturally emitted
by the Earth. Because of this very low intensity, these images have low spatial resolution
(i.e., large pixel size).

It is the active sensors, termed imaging radar, that are introducing a new generation of
satellite imagery to remote sensing. To produce an image, these satellites emit a
directed beam of microwave energy at the target and then collect the backscattered
(reflected) radiation from the target scene. Because they must emit a powerful burst of
energy, these satellites require large solar collectors and storage batteries. For this
reason, they cannot operate continuously; some satellites are limited to 10 minutes of
operation per hour.

The microwave energy emitted by an active radar sensor is coherent and defined by a
narrow bandwidth. The following table summarizes the bandwidths used in remote
sensing.

Band Designation*       Wavelength (λ), cm      Frequency (ν), GHz (10^9 cycles ⋅ sec^-1)

Ka (0.86 cm)            0.8 to 1.1              40.0 to 26.5
K                       1.1 to 1.7              26.5 to 18.0
Ku                      1.7 to 2.4              18.0 to 12.5
X (3.0 cm, 3.2 cm)      2.4 to 3.8              12.5 to 8.0
C                       3.8 to 7.5              8.0 to 4.0
S                       7.5 to 15.0             4.0 to 2.0
L (23.5 cm, 25.0 cm)    15.0 to 30.0            2.0 to 1.0
P                       30.0 to 100.0           1.0 to 0.3

*Wavelengths commonly used in imaging radars are shown in parentheses.
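
Wavelength and frequency are related by λ = c / ν, so the values in the table can be converted
from one form to the other. A minimal sketch in Python (the example frequencies and the
function names are illustrative only):

    # Convert between radar frequency in GHz and wavelength in cm (lambda = c / nu).
    C_CM_PER_SEC = 3.0e10   # speed of light in centimeters per second

    def ghz_to_cm(freq_ghz):
        return C_CM_PER_SEC / (freq_ghz * 1.0e9)

    def cm_to_ghz(wavelength_cm):
        return C_CM_PER_SEC / wavelength_cm / 1.0e9

    print(round(ghz_to_cm(9.4), 2))    # about 3.2 cm, an X-band wavelength
    print(round(cm_to_ghz(23.5), 2))   # about 1.28 GHz, an L-band frequency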

A key element of a radar sensor is the antenna. For a given position in space, the
resolution of the resultant image is a function of the antenna size. This is termed a real-
aperture radar (RAR). At some point, it becomes impossible to make a large enough
antenna to create the desired spatial resolution. To get around this problem, processing
techniques have been developed which combine the signals received by the sensor as it
travels over the target. Thus the antenna is perceived to be as long as the sensor path
during backscatter reception. This is termed a synthetic aperture and the sensor a
synthetic aperture radar (SAR).

The received signal is termed a phase history or echo hologram. It contains a time
history of the radar signal over all the targets in the scene and is itself a low resolution
RAR image. In order to produce a high resolution image, this phase history is processed
through a hardware/software system called a SAR processor. The SAR processor
software requires operator input parameters, such as information about the sensor
flight path and the radar sensor's characteristics, to process the raw signal data into an
image. These input parameters depend on the desired result or intended application of
the output imagery.

One of the most valuable advantages of imaging radar is that it creates images from its
own energy source and therefore is not dependent on sunlight. Thus one can record
uniform imagery any time of the day or night. In addition, the microwave frequencies
at which imaging radars operate are largely unaffected by the atmosphere. This allows
image collection through cloud cover or rain storms. However, the backscattered signal
can be affected. Radar images collected during heavy rainfall will often be seriously
attenuated, which decreases the signal-to-noise ratio (SNR). In addition, the
atmosphere does cause perturbations in the signal phase, which decreases resolution of
output products, such as the SAR image or generated DEMs.


Resolution
Resolution is a broad term commonly used to describe:

• the number of pixels the user can display on a display device, or

• the area on the ground that a pixel represents in an image file.

These broad definitions are inadequate when describing remotely sensed data. Four
distinct types of resolution must be considered:

• spectral - the specific wavelength intervals that a sensor can record

• spatial - the area on the ground represented by each pixel

• radiometric - the number of possible data file values in each band (indicated by the
number of bits into which the recorded energy is divided)

• temporal - how often a sensor obtains imagery of a particular area

These four domains contain separate information that can be extracted from the raw
data.

Spectral
Spectral resolution refers to the specific wavelength intervals in the electromagnetic
spectrum that a sensor can record (Simonett 1983). For example, band 1 of the Landsat
Thematic Mapper sensor records energy between 0.45 and 0.52 µm in the visible part of
the spectrum.

Wide intervals in the electromagnetic spectrum are referred to as coarse spectral
resolution, and narrow intervals are referred to as fine spectral resolution. For example,
the SPOT panchromatic sensor is considered to have coarse spectral resolution because
it records EMR between 0.51 and 0.73 µm. On the other hand, band 3 of the Landsat TM
sensor has fine spectral resolution because it records EMR between 0.63 and 0.69 µm
(Jensen 1996).

NOTE: The spectral resolution does not indicate how many levels into which the signal is broken
down.

Spatial
Spatial resolution is a measure of the smallest object that can be resolved by the sensor,
or the area on the ground represented by each pixel (Simonett 1983). The finer the
resolution, the lower the number. For instance, a spatial resolution of 79 meters is
coarser than a spatial resolution of 10 meters.

Scale
The terms large-scale imagery and small-scale imagery often refer to spatial resolution.
Scale is the ratio of distance on a map as related to the true distance on the ground (Star
and Estes 1990).

Large scale in remote sensing refers to imagery in which each pixel represents a small
area on the ground, such as SPOT data, with a spatial resolution of 10 m or 20 m. Small
scale refers to imagery in which each pixel represents a large area on the ground, such
as AVHRR data, with a spatial resolution of 1.1 km.

This terminology is derived from the fraction used to represent the scale of the map,
such as 1:50,000. Small-scale imagery is represented by a small fraction (one over a very
large number). Large-scale imagery is represented by a larger fraction (one over a
smaller number). Generally, anything smaller than 1:250,000 is considered small-scale
imagery.

NOTE: Scale and spatial resolution are not always the same thing. An image always has the
same spatial resolution but it can be presented at different scales (Simonett 1983).

IFOV
Spatial resolution is also described as the instantaneous field of view (IFOV) of the
sensor, although the IFOV is not always the same as the area represented by each pixel.
The IFOV is a measure of the area viewed by a single detector in a given instant in time
(Star and Estes 1990). For example, Landsat MSS data have an IFOV of 79 × 79 meters,
but there is an overlap of 11.5 meters in each pass of the scanner, so the actual area
represented by each pixel is 56.5 × 79 meters (usually rounded to 57 × 79 meters).

Even though the IFOV is not the same as the spatial resolution, it is important to know
the number of pixels into which the total field of view for the image is broken. Objects
smaller than the stated pixel size may still be detectable in the image if they contrast
with the background, such as roads, drainage patterns, etc.

On the other hand, objects the same size as the stated pixel size (or larger) may not be
detectable if there are brighter or more dominant objects nearby. In Figure 8, a house
sits in the middle of four pixels. If the house has a reflectance similar to its
surroundings, the data file values for each of these pixels will reflect the area around
the house, not the house itself, since the house does not dominate any one of the four
pixels. However, if the house has a significantly different reflectance than its
surroundings, it may still be detectable.


Figure 8: IFOV. The figure shows a house centered where four 20 m × 20 m pixels meet,
so that the house does not dominate any one of the four pixels.

Radiometric
Radiometric resolution refers to the dynamic range, or number of possible data file
values in each band. This is referred to by the number of bits into which the recorded
energy is divided.

For instance, in 8-bit data, the data file values range from 0 to 255 for each pixel, but in
7-bit data, the data file values for each pixel range from 0 to 127.

In Figure 9, 8-bit and 7-bit data are illustrated. The sensor measures the EMR in its
range. The total intensity of the energy from 0 to the maximum amount the sensor
measures is broken down into 256 brightness values for 8-bit data and 128 brightness
values for 7-bit data.

Figure 9: Brightness Values. The figure shows the same 0-to-maximum intensity range
divided into 256 brightness values (0 to 255) for 8-bit data and into 128 brightness values
(0 to 127) for 7-bit data.
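
Because the number of brightness values is 2 raised to the number of bits, the dynamic range
for any bit depth is easy to compute, and data can be linearly rescaled from one bit depth to
another. The sketch below is a generic linear requantization shown only for illustration, not a
specific ERDAS IMAGINE function:

    import numpy as np

    def brightness_levels(bits):
        # Number of possible data file values for a given bit depth.
        return 2 ** bits

    def requantize(data, in_bits, out_bits):
        # Linearly rescale integer data from one bit depth to another.
        in_max = 2 ** in_bits - 1
        out_max = 2 ** out_bits - 1
        return np.round(data.astype(float) * out_max / in_max).astype(int)

    print(brightness_levels(8), brightness_levels(7))   # 256 and 128 levels
    pixels_8bit = np.array([0, 64, 128, 255])
    print(requantize(pixels_8bit, 8, 7))                # 0, 32, 64, 127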

Temporal
Temporal resolution refers to how often a sensor obtains imagery of a particular area.
For example, the Landsat satellite can view the same area of the globe once every 16
days. SPOT, on the other hand, can revisit the same area every three days.

NOTE: Temporal resolution is an important factor to consider in change detection studies.

Figure 10 illustrates all four types of resolution:

Figure 10: Landsat TM - Band 2 (Four Types of Resolution). The figure annotates a
Landsat TM band 2 image with its spatial resolution (79 m; 1 pixel = 79 m × 79 m),
radiometric resolution (8-bit, 0 - 255), spectral resolution (0.52 - 0.60 µm), and temporal
resolution (the same area viewed every 16 days). Source: EOSAT.


Data Correction
There are several types of errors that can be manifested in remotely sensed data. Among
these are line dropout and striping. These errors can be corrected to an extent in GIS by
radiometric and geometric correction functions.

NOTE: Radiometric errors are usually already corrected in data from EOSAT or SPOT.

See "CHAPTER 5: Enhancement" for more information on radiometric and geometric


correction.

Line Dropout
Line dropout occurs when a detector either completely fails to function or becomes
temporarily saturated during a scan (like the effect of a camera flash on a human retina).
The result is a line or partial line of data with higher data file values, creating a
horizontal streak until the detector(s) recovers, if it recovers.

Line dropout is usually corrected by replacing the bad line with a line of estimated data
file values, based on the lines above and below it.
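
One simple way to express this correction is to overwrite each dropped line with the mean of
the scan lines immediately above and below it. The sketch below, which assumes the band is
held in a NumPy array and the bad row indices are already known, only illustrates the idea;
within ERDAS IMAGINE the correction is performed with the filtering functions noted below.

    import numpy as np

    def fix_line_dropout(band, bad_rows):
        # Replace each bad scan line with the average of its neighboring lines.
        fixed = band.astype(float).copy()
        nrows = band.shape[0]
        for r in bad_rows:
            above = fixed[r - 1] if r > 0 else fixed[r + 1]
            below = fixed[r + 1] if r < nrows - 1 else fixed[r - 1]
            fixed[r] = (above + below) / 2.0
        return fixed

    band = np.array([[10, 12, 11],
                     [ 0,  0,  0],    # dropped line
                     [14, 15, 13]])
    print(fix_line_dropout(band, bad_rows=[1]))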

You can correct line dropout using the 5 x 5 Median Filter from the Radar Speckle Suppression
function. The Convolution and Focal Analysis functions in Image Interpreter will also correct
for line dropout.

Striping
Striping or banding will occur if a detector goes out of adjustment—that is, it provides
readings consistently greater than or less than the other detectors for the same band
over the same ground cover.

Use Image Interpreter or Spatial Modeler for implementing algorithms to eliminate striping.
The Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the
data.

Data Storage
Image data can be stored on a variety of media—tapes, CD-ROMs, or floppy diskettes,
for example—but how the data are stored (e.g., structure) is more important than on
what they are stored.

All computer data are in binary format. The basic unit of binary data is a bit. A bit can
have two possible values—0 and 1, or “off” and “on” respectively. A set of bits,
however, can have many more values, depending on the number of bits used. The
number of values that can be expressed by a set of bits is 2 to the power of the number
of bits used.

A byte is 8 bits of data. Generally, file size and disk space are referred to by number of
bytes. For example, a PC may have 640 kilobytes (1,024 bytes = 1 kilobyte) of RAM
(random access memory), or a file may need 55,698 bytes of disk space. A megabyte
(Mb) is about one million bytes. A gigabyte (Gb) is about one billion bytes.

Storage Formats
Image data can be arranged in several ways on a tape or other media. The most common
storage formats are:

• BIL (band interleaved by line)

• BSQ (band sequential)

• BIP (band interleaved by pixel)

For a single band of data, all formats (BIL, BIP, and BSQ) are identical, as long as the
data are not blocked.

Blocked data are discussed under "Storage Media" on page 23.

BIL
In BIL (band interleaved by line) format, each record in the file contains a scan line (row)
of data for one band (Slater 1980). All bands of data for a given line are stored consecu-
tively within the file as shown in Figure 11.


Figure 11: Band Interleaved by Line (BIL). The file is arranged as follows:

Header
Line 1, Band 1
Line 1, Band 2
. . .
Line 1, Band x
Line 2, Band 1
Line 2, Band 2
. . .
Line 2, Band x
. . .
Line n, Band 1
Line n, Band 2
. . .
Line n, Band x
Trailer

NOTE: Although a header and trailer file are shown in this diagram, not all BIL data contain
header and trailer files.

BSQ
In BSQ (band sequential) format, each entire band is stored consecutively in the same
file (Slater 1980). This format is advantageous, in that:

• one band can be read and viewed easily

• multiple bands can be easily loaded in any order

Figure 12: Band Sequential (BSQ). The file set is arranged as follows:

Header File(s)
Image File, Band 1: Line 1, Band 1; Line 2, Band 1; Line 3, Band 1; . . . Line n, Band 1
end-of-file
Image File, Band 2: Line 1, Band 2; Line 2, Band 2; Line 3, Band 2; . . . Line n, Band 2
end-of-file
Image File, Band x: Line 1, Band x; Line 2, Band x; Line 3, Band x; . . . Line n, Band x
Trailer File(s)

Landsat Thematic Mapper (TM) data are stored in a type of BSQ format known as fast
format. Fast format data have the following characteristics:

• Files are not split between tapes. If a band starts on the first tape, it will end on the
first tape.

• An end-of-file (EOF) marker follows each band.

• An end-of-volume marker marks the end of each volume (tape). An end-of-volume
marker consists of three end-of-file markers.

• There is one header file per tape.

• There are no header records preceding the image data.

• Regular products (not geocoded) are normally unblocked. Geocoded products are
normally blocked (EOSAT).

ERDAS IMAGINE will import all of the header and image file information.

See Geocoded Data on page 32 for more information on geocoded data.


BIP
In BIP (band interleaved by pixel) format, the values for each band are ordered within
a given pixel. The pixels are arranged sequentially on the tape (Slater 1980). The
sequence for BIP format is:

Pixel 1, Band 1

Pixel 1, Band 2

Pixel 1, Band 3

.
.
.
Pixel 2, Band 1

Pixel 2, Band 2

Pixel 2, Band 3

.
.
.
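
The three formats differ only in how the byte offset of a given (row, column, band) sample is
computed within the image data. A minimal sketch, assuming an unblocked file with no
header records and a fixed number of bytes per pixel; row, column, and band indices are
zero-based here, and the function names are illustrative:

    def offset_bil(row, col, band, ncols, nbands, bpp):
        # Band interleaved by line: all bands of a line are stored consecutively.
        return ((row * nbands + band) * ncols + col) * bpp

    def offset_bsq(row, col, band, ncols, nrows, bpp):
        # Band sequential: each entire band is stored consecutively.
        return ((band * nrows + row) * ncols + col) * bpp

    def offset_bip(row, col, band, ncols, nbands, bpp):
        # Band interleaved by pixel: all bands of a pixel are stored consecutively.
        return ((row * ncols + col) * nbands + band) * bpp

    # Byte offset of row 2, column 10, band 1 in a 7-band, 512 x 512, 8-bit image
    print(offset_bil(2, 10, 1, ncols=512, nbands=7, bpp=1))
    print(offset_bsq(2, 10, 1, ncols=512, nrows=512, bpp=1))
    print(offset_bip(2, 10, 1, ncols=512, nbands=7, bpp=1))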

Storage Media
Today, most raster data are available on a variety of storage media to meet the needs of
users, depending on the system hardware and devices available. When ordering data,
it is sometimes possible to select the type of media preferred. The most common forms
of storage media are discussed in the following section:

• 9-track tape

• 4 mm tape

• 8 mm tape

• 1/4” cartridge tape

• CD-ROM/optical disk

Other types of storage media are:

• floppy disk (3.5” or 5.25”)

• film, photograph, or paper

• videotape

Tape
The data on a tape can be divided into logical records and physical records. A record is
the basic storage unit on a tape.

• A logical record is a series of bytes that form a unit. For example, all the data for
one line of an image may form a logical record.

• A physical record is a consecutive series of bytes on a magnetic tape, followed by
a gap, or blank space, on the tape.

Blocked Data
For reasons of efficiency, data can be blocked to fit more on a tape. Blocked data are
sequenced so that there are more logical records in each physical record. The number
of logical records in each physical record is the blocking factor. For instance, a record
may contain 28,000 bytes, but only 4000 columns due to a blocking factor of 7.
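
Working backwards from a physical record, the logical record length is simply the physical
record length divided by the blocking factor. A small sketch of that arithmetic (the function
name is illustrative):

    def logical_record_bytes(physical_record_bytes, blocking_factor):
        # Number of bytes in each logical record packed into one physical record.
        return physical_record_bytes // blocking_factor

    # A 28,000-byte physical record with a blocking factor of 7
    print(logical_record_bytes(28000, 7))   # 4000 bytes, e.g., 4000 one-byte columns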

Tape Contents
Tapes are available in a variety of sizes and storage capacities. To obtain information
about the data on a particular tape, read the tape label or box, or read the header file.
Often, there is limited information on the outside of the tape. Therefore, it may be
necessary to read the header files on each tape for specific information, such as:

• number of tapes that hold the data set

• number of columns (in pixels)

• number of rows (in pixels)

• data storage format—BIL, BSQ, BIP

• pixel depth—4-bit, 8-bit, 10-bit, 12-bit, or 16-bit

• number of bands

• blocking factor

• number of header files and header records


4 mm Tapes
The 4 mm tape is a relative newcomer in the world of GIS. This tape is a mere
2” × 1.75” in size, but it can hold up to 2 Gb of data. This petite cassette offers an
obvious shipping and storage advantage because of its size.

8 mm Tapes
The 8 mm tape offers the advantage of storing vast amounts of data. Tapes are available
in 5 and 10 Gb storage capacities (although some tape drives cannot handle the 10 Gb
size). The 8 mm tape is a 2.5” × 4” cassette, which makes it easy to ship and handle.

1/4” Cartridge Tapes


This tape format falls between the 8 mm and 9-track in physical size and storage
capacity. The tape is approximately 4” × 6” in size and stores up to 150 Mb of data.

9-Track Tapes
A 9-track tape is an older format that was the standard for two decades. It is a large
circular tape approximately 10” in diameter. It requires a 9-track tape drive as a
peripheral device for retrieving data. The size and storage capability make 9-track less
convenient than 8 mm or 1/4” tapes. However, 9-track tapes are still widely used.

A single 9-track tape may be referred to as a volume. The complete set of tapes that
contains one image is referred to as a volume set.

The storage format of a 9-track tape in binary format is described by the number of bits
per inch, bpi, on the tape. The tapes most commonly used have either 1600 or 6250 bpi.
The number of bits per inch on a tape is also referred to as the tape density. Depending
on the length of the tape, 9-track tapes can store between 120 and 150 Mb of data.

CD-ROM
Data such as ADRG and DLG are most often available on CD-ROM, although many
types of data can be requested in CD-ROM format. A CD-ROM is an optical read-only
storage device which can be read with a CD player. CD-ROMs offer the advantage of
storing large amounts of data in a small, compact device. Up to 644 Mb can be stored
on a CD-ROM. Also, since this device is read-only, it protects the data from accidentally
being overwritten, erased, or changed from its original integrity. This is the most stable
of the current media storage types and data stored on CD-ROM are expected to last for
decades without degradation.

Calculating Disk Space
To calculate the amount of disk space a raster file will require on an ERDAS IMAGINE
system, use the following formula:

[ ( ( x × y × b ) × n ) ] × 1.4 = output file size

where:

y = rows
x = columns
b = number of bytes per pixel
n = number of bands
1.4 adds 30% to the file size for pyramid layers and 10% for miscellaneous adjust-
ments, such as histograms, lookup tables, etc.

NOTE: This output file size is approximate.

For example, to load a 3 band, 16-bit file with 500 rows and 500 columns, about
2,100,000 bytes of disk space would be needed.

[ ( ( 500 × 500 ) × 2 ) × 3 ] × 1.4 = 2,100,000 bytes, or 2.1 Mb
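
The same estimate is easy to script. A minimal sketch of the formula above (the function name
is illustrative; the 1.4 overhead factor and the bytes-per-pixel value follow the text):

    def output_file_size(cols, rows, bytes_per_pixel, bands, overhead=1.4):
        # Approximate .img size in bytes, including the 40% allowance for
        # pyramid layers and miscellaneous adjustments described above.
        return cols * rows * bytes_per_pixel * bands * overhead

    # 3-band, 16-bit (2 bytes per pixel) file with 500 rows and 500 columns
    size = output_file_size(500, 500, 2, 3)
    print(size)              # 2100000.0 bytes
    print(size / 1e6, "Mb")  # about 2.1 Mb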

Bytes Per Pixel


The number of bytes per pixel is listed below:

4-bit data: .5

8-bit data: 1.0

16-bit data: 2.0

NOTE: On the PC, disk space is shown in bytes. On the workstation, disk space is shown as
kilobytes (1,024 bytes).


ERDAS IMAGINE Format (.img)
In ERDAS IMAGINE, file name extensions identify the file type. When data are
imported into IMAGINE, they are converted to the ERDAS IMAGINE file format and
stored in .img files. ERDAS IMAGINE image files (.img) can contain two types of raster
layers:

• thematic

• continuous

An image file can store a combination of thematic and continuous layers or just one
type.

Figure 13: Image Files Store Raster Layers. The figure shows that an image file (.img)
contains raster layers, which may be thematic raster layers, continuous raster layers, or
both.

ERDAS Version 7.5 Users


For Version 7.5 users, when importing a GIS file from Version 7.5, it becomes an image
file with one thematic raster layer. When importing a LAN file, each band becomes a
continuous raster layer within an image file.

Thematic Raster Layer


Thematic data are raster layers that contain qualitative, categorical information about
an area. A thematic layer is contained within an .img file. Thematic layers lend
themselves to applications in which categories or themes are used. Thematic raster
layers are used to represent data measured on a nominal or ordinal scale, such as:

• soils

• land use

• land cover

• roads

• hydrology

NOTE: Thematic raster layers are displayed as pseudo color layers.

Figure 14: Example of a Thematic Raster Layer (a soils layer)

See "CHAPTER 4: Image Display" for information on displaying thematic raster layers.

Continuous Raster Layer


Continuous data are raster layers that contain quantitative (measuring a characteristic
on an interval or ratio scale) and related, continuous values. Continuous raster layers
can be multiband (e.g., Landsat Thematic Mapper data) or single band (e.g., SPOT
panchromatic data). The following types of data are examples of continuous raster
layers:

• Landsat

• SPOT

• digitized (scanned) aerial photograph

• digital elevation model (DEM)

• slope

• temperature

NOTE: Continuous raster layers can be displayed as either a gray scale raster layer or a true
color raster layer.


Figure 15: Examples of Continuous Raster Layers (Landsat TM and DEM)

Tiled Data
Data in the .img format are tiled data. Tiled data are stored in tiles that can be set to any
size.

The default tile size for .img files is 64 x 64 pixels.

Image File Contents


The .img files contain the following additional information about the data:

• the data file values

• statistics

• lookup tables

• map coordinates

• map projection

This additional information can be viewed in the Image Information function from the ERDAS
IMAGINE icon panel.

Statistics
In ERDAS IMAGINE, the file statistics are generated from the data file values in the
layer and incorporated into the .img file. This statistical information is used to create
many program defaults, and helps the user make processing decisions.

Pyramid Layers
Sometimes a large image will take longer than normal to display in the ERDAS
IMAGINE Viewer. The pyramid layer option enables the user to display large images
faster. Pyramid layers are image layers which are successively reduced by the power of
2 and resampled.
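
Because each pyramid level is reduced by a power of 2, the sizes of the successive levels are
easy to enumerate. The sketch below is only an illustration of that reduction, not the ERDAS
IMAGINE implementation; the stopping size of 64 pixels is an arbitrary choice here:

    def pyramid_levels(ncols, nrows, min_size=64):
        # List the (columns, rows) of successive pyramid levels, halving each
        # time, until a level fits within min_size in both dimensions.
        levels = [(ncols, nrows)]
        while ncols > min_size or nrows > min_size:
            ncols = max(1, ncols // 2)
            nrows = max(1, nrows // 2)
            levels.append((ncols, nrows))
        return levels

    print(pyramid_levels(8000, 8000))
    # [(8000, 8000), (4000, 4000), (2000, 2000), (1000, 1000),
    #  (500, 500), (250, 250), (125, 125), (62, 62)]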

The Pyramid Layer option is available in the Image Information function from the ERDAS
IMAGINE icon panel and from the Import function.

See "CHAPTER 4: Image Display" for more information on pyramid layers. See "APPENDIX
B: File Formats and Extensions" for detailed information on ERDAS IMAGINE file formats.


Image File Organization
Data are easy to locate if the data files are well organized. Well organized files will also
make data more accessible to anyone who uses the system. Using consistent naming
conventions and the ERDAS IMAGINE Image Catalog will help keep image files well
organized and accessible.

Consistent Naming Convention
Many processes create an output file, and every time a file is created, it will be necessary
to assign a file name. The name which is used can either cause confusion about the
process that has taken place, or it can clarify and give direction. For example, if the
name of the output file is “junk,” it is difficult to determine the contents of the file. On
the other hand, if a standard nomenclature is developed in which the file name refers
to a process or contents of the file, it is possible to determine the progress of a project
and contents of a file by examining the directory.

Develop a naming convention that is based on the contents of the file. This will help
everyone involved know what the file contains. For example, in a project to create a
map composition for Lake Lanier, a directory for the files may look similar to the one
below:

lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img

From this listing, one can make some educated guesses about the contents of each file
based on naming conventions used. For example, “lanierTM.img” is probably a
Landsat TM scene of Lake Lanier. “lanier.map” is probably a map composition that has
map frames with lanierTM.img and lanierSPOT.img data in them. “lanierUTM.img”
was probably created when lanierTM.img was rectified to a UTM map projection.

Keeping Track of Image Files (.img)
Using a database to store information about images enables the user to track image files
without having to know the name or location of the file. The database can be
queried for specific parameters (e.g., size, type, map projection) and the database will
return a list of image files that match the search criteria. This file information will help
to quickly determine which image(s) to use, where it is located, and its ancillary data.
An image database is especially helpful when there are many image files and even
many on-going projects. For example, one could use the database to search for all of the
image files of Georgia that have a UTM map projection.

Use the Image Catalog to track and store information for image files (.img) that are imported and
created in IMAGINE.

NOTE: All information in the Image Catalog database, except archive information, is extracted
from the image file header. Therefore, if this information is modified in the Image Info utility, it
will be necessary to re-catalog the image in order to update the information in the Image Catalog
database.

ERDAS IMAGINE Image Catalog
The ERDAS IMAGINE Image Catalog database is designed to serve as a library and
information management system for image files (.img) that are imported and created in
ERDAS IMAGINE. The information for the .img files is displayed in the Image Catalog
CellArray. This CellArray enables the user to view all of the ancillary data for the image
files in the database. When records are queried based on specific criteria, the .img files
that match the criteria will be highlighted in the CellArray. It is also possible to graph-
ically view the coverage of the selected .img files on a map in a canvas window.

When it is necessary to store some data on a tape, the Image Catalog database enables
the user to archive .img files to external devices. The Image Catalog CellArray will
show which tape the .img file is stored on, and the file can be easily retrieved from the
tape device to a designated disk directory. The archived .img files are copies of the files
on disk—nothing is removed from the disk. Once the file is archived, it can be removed
from the disk, if desired.

Geocoded Data
Geocoding, also known as georeferencing, is the geographical registration or coding of
the pixels in an image. Geocoded data are images that have been rectified to a particular
map projection and pixel size.

Raw, remotely sensed image data are gathered by a sensor on a platform, such as an
aircraft or satellite. In this raw form, the image data are not referenced to a map
projection. Rectification is the process of projecting the data onto a plane and making
them conform to a map projection system.

It is possible to geocode raw image data with the ERDAS IMAGINE rectification tools.
Geocoded data are also available from EOSAT and SPOT.

See "APPENDIX C: Map Projections" for detailed information on the different projections
available. See "CHAPTER 8: Rectification" for information on geocoding raw imagery with
ERDAS IMAGINE.


Using Image Data in GIS
ERDAS IMAGINE provides many tools designed to extract the necessary information
from the images in a data base. The following chapters in this book describe many of
these processes.

This section briefly describes some basic image file techniques that may be useful for
any application.

Subsetting and Mosaicking
Within ERDAS IMAGINE, there are options available to make additional image files
from those acquired from EOSAT, SPOT, etc. These options involve combining files,
mosaicking, and subsetting.

ERDAS IMAGINE programs allow image data with an unlimited number of bands, but
the most common satellite data types—Landsat and SPOT—have seven or fewer bands.
Image files can be created with more than seven bands.

It may be useful to combine data from two different dates into one file. This is called
multitemporal imagery. For example, a user may want to combine Landsat TM from
one date with TM data from a later date, then perform a classification based on the
combined data. This is particularly useful for change detection studies.

The user can also incorporate elevation data into an existing image file as another band,
or create new bands through various enhancement techniques.

To combine two or more image files, each file must be georeferenced to the same coordinate
system, or to each other. See "CHAPTER 8: Rectification" for information on georeferencing
images.

Subset
Subsetting refers to breaking out a portion of a large file into one or more smaller files.

Often, image files contain areas much larger than a particular study area. In these cases,
it is helpful to reduce the size of the image file to include only the area of interest. This
not only eliminates the extraneous data in the file, but it speeds up processing due to
the smaller amount of data to process. This can be important when dealing with
multiband data.

The Import option lets you define a subset area of an image to preview or import. You can also
use the Subset option from Image Interpreter to define a subset area.

Mosaic
On the other hand, the study area in which the user is interested may span several
image files. In this case, it is necessary to combine the images to create one large file.
This is called mosaicking.

To create a mosaicked image, use the Mosaic Images option from the Data Prep menu. All of the
images to be mosaicked must be georeferenced to the same coordinate system.

Enhancement
Image enhancement is the process of making an image more interpretable for a
particular application (Faust 1989). Enhancement can make important features of raw,
remotely sensed data and aerial photographs more interpretable to the human eye.
Enhancement techniques are often used instead of classification for extracting useful
information from images.

There are many enhancement techniques available. They range in complexity from a
simple contrast stretch, where the original data file values are stretched to fit the range
of the display device, to principal components analysis, where the number of image
file bands can be reduced and new bands created to account for the most variance in the
data.

See "CHAPTER 5: Enhancement" for more information on enhancement techniques.

Multispectral Classification
Image data are often used to create thematic files through multispectral classification.
This entails using spectral pattern recognition to identify groups of pixels that represent
a common characteristic of the scene, such as soil type or vegetation.

See "CHAPTER 6: Classification" for a detailed explanation of classification procedures.


Editing Raster Data
ERDAS IMAGINE provides raster editing tools for editing the data values of thematic
and continuous raster data. This is primarily a correction mechanism that enables the
user to correct bad data values which produce noise, such as spikes and holes in
imagery. The raster editing functions can be applied to the entire image or a user-
selected area of interest (AOI).

With raster editing, data values in thematic data can also be recoded according to class.
Recoding is a function which reassigns data values to a region or to an entire class of
pixels.

See "CHAPTER 10: Geographic Information Systems" for information about recoding data. See
"CHAPTER 5: Enhancement" for information about reducing data noise using spatial filtering.

The ERDAS IMAGINE raster editing functions allow the use of focal and global spatial
modeling functions for computing the values to replace noisy pixels or areas in
continuous or thematic data.

Focal operations are filters which calculate the replacement value based on a window
(3 × 3, 5 × 5, etc.) and replace the pixel of interest with the replacement value. Therefore
this function affects one pixel at a time, and the number of surrounding pixels which
influence the value is determined by the size of the moving window.

Global operations calculate the replacement value for an entire area rather than
affecting one pixel at a time. These functions, specifically the Majority option, are more
applicable to thematic data.

See the ERDAS IMAGINE On-Line Help for information about using and selecting AOIs.

The raster editing tools are available in the IMAGINE Viewer.

Editing Continuous (Athematic) Data

Editing DEMs
DEMs will occasionally contain spurious pixels or bad data. These spikes, holes, and
other noise caused by automatic DEM extraction can be corrected by editing the raster
data values and replacing them with meaningful values. This discussion of raster
editing will focus on DEM editing.

The ERDAS IMAGINE Raster Editing functionality was originally designed to edit DEMs, but
it can also be used with images of other continuous data sources, such as radar, SPOT, Landsat,
and digitized photographs.

When editing continuous raster data, the user can modify or replace original pixel
values with the following:

• a constant value — enter a known constant value for areas such as lakes

• the average of the buffering pixels — replace the original pixel value with the
average of the pixels in a specified buffer area around the AOI. This is used where
the constant values of the AOI are not known, but the area is flat or homogeneous
with little variation (for example, a lake).

• the original data value plus a constant value — add a negative constant value to the
original data values to compensate for the height of trees and other vertical features
in the DEM. This technique is commonly used in forested areas.

• spatial filtering — filter data values to eliminate noise such as spikes or holes in the
data

• interpolation techniques (discussed below)


Interpolation Techniques
While the previously listed raster editing techniques are perfectly suitable for some
applications, the following interpolation techniques provide the best methods for raster
editing:

• 2-D polynomial — surface approximation

• Multi-surface functions — with least squares prediction

• Distance weighting

Each pixel’s data value is interpolated from the reference points in the data file. These
interpolation techniques are described below.

2-D Polynomial
This interpolation technique provides faster interpolation calculations than distance
weighting and multi-surface functions. The following equation is used:

V = a0 + a1x + a2y + a3x^2 + a4xy + a5y^2 + . . .

where:

V = data value (elevation value for DEM)


a = polynomial coefficients
x = x coordinate
y = y coordinate
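
Evaluating such a surface is a matter of summing the polynomial terms; fitting the coefficients
a0 through a5 to the reference points (for example, by least squares) is a separate step not
shown here. A minimal sketch with hypothetical coefficients:

    def eval_poly2d(coeffs, x, y):
        # Evaluate V = a0 + a1*x + a2*y + a3*x**2 + a4*x*y + a5*y**2.
        a0, a1, a2, a3, a4, a5 = coeffs
        return a0 + a1 * x + a2 * y + a3 * x ** 2 + a4 * x * y + a5 * y ** 2

    # Hypothetical coefficients for an elevation surface
    coeffs = (120.0, 0.5, -0.3, 0.01, 0.002, -0.004)
    print(eval_poly2d(coeffs, x=10.0, y=20.0))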

Multi-surface Functions
The multi-surface technique provides the most accurate results for editing DEMs which
have been created through automatic extraction. The following equation is used:

V = Σ Wi Qi
where:

V = output data value (elevation value for DEM)


Wi = coefficients which are derived by the least squares method
Qi = distance-related kernels which are actually interpretable as
continuous single value surfaces

Source: Wang 1990

Distance Weighting
The weighting function determines how the output data values will be interpolated
from a set of reference data points. For each pixel, the values of all reference points are
weighted by a value corresponding with the distance between each point and the pixel.

The weighting function used in ERDAS IMAGINE is:

W = (S/D - 1)^2

where:

S = normalization factor

D = distance from the output data point to the reference point

The value for any given pixel is calculated by taking the sum of the weighting factors for all
reference points multiplied by the data values of those points, and dividing by the sum
of the weighting factors:

V = ( Σ (i = 1 to n) Wi × Vi ) / ( Σ (i = 1 to n) Wi )
where:

V = output data value (elevation value for DEM)

i = ith reference point

Wi = weighting factor of point i

Vi = data value of point i

n = number of reference points

Source: Wang 1990
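
Putting the two equations together, each output value is a weighted average of the reference
values, with weights that fall to zero as the distance approaches S. The sketch below assumes
the reference points are given as (x, y, value) tuples; skipping points at or beyond the
normalization distance is an implementation choice made here for illustration, not part of the
published method:

    import math

    def distance_weighted_value(x, y, reference_points, s):
        # Interpolate a value at (x, y) from (xr, yr, value) reference points
        # using W = (S/D - 1)**2 and V = sum(W * V) / sum(W).
        weight_sum = 0.0
        weighted_value_sum = 0.0
        for xr, yr, value in reference_points:
            d = math.hypot(x - xr, y - yr)
            if d == 0.0:
                return value              # exactly on a reference point
            if d >= s:
                continue                  # beyond the normalization distance
            w = (s / d - 1.0) ** 2
            weight_sum += w
            weighted_value_sum += w * value
        return weighted_value_sum / weight_sum if weight_sum > 0.0 else None

    refs = [(0.0, 0.0, 100.0), (10.0, 0.0, 110.0), (0.0, 10.0, 95.0)]
    print(distance_weighted_value(3.0, 3.0, refs, s=20.0))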


CHAPTER 2
Vector Layers

Introduction
ERDAS IMAGINE is designed to integrate two data types into one system: raster and
vector. While the previous chapter explored the characteristics of raster data, this
chapter is focused on vector data. The vector data structure in ERDAS IMAGINE is
based on the ARC/INFO data model (developed by ESRI, Inc.). This chapter describes
vector data, attribute information, and symbolization.

You do not need ARC/INFO software or an ARC/INFO license to use the vector capabilities in
ERDAS IMAGINE. Since the ARC/INFO data model is used in ERDAS IMAGINE, you can
use ARC/INFO coverages directly without importing them.

See "CHAPTER 10: Geographic Information Systems" for information on editing vector layers
and using vector data in a GIS.

Vector data consist of:

• points

• lines

• polygons

Each is illustrated in Figure 16.

vertices node

polygons

line

label point
node

points
Figure 16: Vector Elements

Points
A point is represented by a single x,y coordinate pair. Points can represent the location
of a geographic feature or a point that has no area, such as a mountain peak. Label
points are also used to identify polygons (see below).

Lines
A line (polyline) is a set of line segments and represents a linear geographic feature,
such as a river, road, or utility line. Lines can also represent non-geographical bound-
aries, such as voting districts, school zones, contour lines, etc.

Polygons
A polygon is a closed line or closed set of lines defining a homogeneous area, such as
soil type, land use, or water body. Polygons can also be used to represent non-
geographical features, such as wildlife habitats, state borders, commercial districts, etc.
Polygons also contain label points that identify the polygon. The label point links the
polygon to its attributes.

Vertex
The points that define a line are vertices. A vertex is a point that defines an element,
such as the endpoint of a line segment or a location in a polygon where the line segment
defining the polygon changes direction. The ending points of a line are called nodes.
Each line has two nodes: a from-node and a to-node. The from-node is the first vertex
in a line. The to-node is the last vertex in a line. Lines join other lines only at nodes. A
series of lines in which the from-node of the first line joins the to-node of the last line is
a polygon.

Figure 17: Vertices. The figure shows a line defined by three vertices and a polygon
whose boundary is defined by three vertices, with its label point.

In Figure 17, the line and the polygon are each defined by three vertices.
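
To make the terminology concrete, the sketch below shows one minimal way lines, nodes, and
polygons could be represented in code. It is an illustrative structure only, not the ARC/INFO
or ERDAS IMAGINE vector format:

    from dataclasses import dataclass
    from typing import List, Tuple

    Coordinate = Tuple[float, float]        # an x, y pair

    @dataclass
    class Line:
        vertices: List[Coordinate]          # first vertex = from-node, last = to-node

        @property
        def from_node(self) -> Coordinate:
            return self.vertices[0]

        @property
        def to_node(self) -> Coordinate:
            return self.vertices[-1]

    @dataclass
    class Polygon:
        boundary: List[Coordinate]          # closed set of vertices
        label_point: Coordinate             # links the polygon to its attributes

    road = Line(vertices=[(0.0, 0.0), (5.0, 2.0), (9.0, 2.5)])
    parcel = Polygon(boundary=[(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 0.0)],
                     label_point=(2.0, 1.0))
    print(road.from_node, road.to_node, parcel.label_point)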


Coordinates
Vector data are expressed by the coordinates of vertices. The vertices that define each
element are referenced with x,y, or Cartesian, coordinates. In some instances, those
coordinates may be inches (as in some CAD applications), but often the coordinates are
map coordinates, such as State Plane, Universal Transverse Mercator (UTM), or
Lambert Conformal Conic. Vector data digitized from an ungeoreferenced image are
expressed in file coordinates.

Tics
Vector layers are referenced to coordinates or a map projection system using tic files
that contain geographic control points for the layer. Every vector layer must have a tic
file. Tics are not topologically linked to other features in the layer and do not have
descriptive data associated with them.

Vector Layers
Although it is possible to have points, lines, and polygons in a single layer, a layer
typically consists of one type of feature. It is possible to have one vector layer for
streams (lines) and another layer for parcels (polygons). A vector layer is defined as a
set of features where each feature has a location (defined by coordinates and topological
pointers to other features) and, possibly attributes (defined as a set of named items or
variables) (ESRI 1989). Vector layers contain both the vector features (points, lines,
polygons) and the attribute information.

Usually, vector layers are also divided by the type of information they represent. This
enables the user to isolate data into themes, similar to the themes used in raster layers.
Political districts and soil types would probably be in separate layers, even though both
are represented with polygons. If the project requires that the coincidence of features in
two or more layers be studied, the user can overlay them or create a new layer.

See "CHAPTER 10: Geographic Information Systems" for more information about analyzing
vector layers.

Topology
The spatial relationships between features in a vector layer are defined using topology.
In topological vector data, a mathematical procedure is used to define connections
between features, identify adjacent polygons, and define a feature as a set of other
features (e.g., a polygon is made of connecting lines) (ESRI 1990).

Topology is not automatically created when a vector layer is created. It must be added
later using specific functions. Topology must also be updated after a layer is edited.

"Digitizing" on page 47 describes how topology is created for a new or edited vector layer.

Vector Files
As mentioned above, the ERDAS IMAGINE vector structure is based on the ARC/INFO
data model used for ARC coverages. This georelational data model is actually a set of
files using the computer’s operating system for file management and input/output. An
ERDAS IMAGINE vector layer is stored in subdirectories on the disk. Vector data are
represented by a set of logical tables of information, stored as files within the subdi-
rectory. These files may serve the following purposes:

• define features

• provide feature attributes

• cross-reference feature definition files

• provide descriptive information for the coverage as a whole

A workspace is a location which contains one or more vector layers. Workspaces
provide a convenient means for organizing layers into related groups. They also
provide a place for the storage of tabular data not directly tied to a particular layer. Each
workspace is completely independent. It is possible to have an unlimited number of
workspaces and an unlimited number of vector layers in a workspace. Table 1 summa-
rizes the types of files that are used to make up vector layers.

Table 1: Description of File Types

File Type                       File    Description
Feature Definition Files        ARC     Line coordinates and topology
                                CNT     Polygon centroid coordinates
                                LAB     Label point coordinates and topology
                                TIC     Tic coordinates
Feature Attribute Files         AAT     Line (arc) attribute table
                                PAT     Polygon or point attribute table
Feature Cross-Reference File    PAL     Polygon/line/node cross-reference file
Layer Description Files         BND     Coordinate extremes
                                LOG     Layer history file
                                PRJ     Coordinate definition file
                                TOL     Layer tolerance file


Figure 18 illustrates how a typical vector workspace is set up (ESRI 1992).

Figure 18: Workspace Structure. The figure diagrams a typical directory tree for a vector
workspace, with subdirectories such as parcels, testdata, demo, INFO, roads, and streets
under a workspace directory named georgia.

Because vector layers are stored in directories rather than in simple files, you MUST use the
utilities provided in ERDAS IMAGINE to copy and rename them. A utility is also provided to
update path names that are no longer correct due to the use of regular system commands on
vector layers.

See the ESRI documentation for more detailed information about the different vector files.

Attribute Information
Along with points, lines, and polygons, a vector layer can have a wealth of descriptive,
or attribute, information associated with it. Attribute information is
displayed in ERDAS IMAGINE CellArrays. This is the same information that is stored
in the INFO database of ARC/INFO. Some attributes are automatically generated when
the layer is created. Custom fields can be added to each attribute table. Attribute fields
can contain numerical or character data.

The attributes for a roads layer may look similar to the example in Figure 19. The user
can select features in the layer based on the attribute information. Likewise, when a row
is selected in the attribute CellArray, that feature is highlighted in the Viewer.

Figure 19: Attribute Example

Using Imported Attribute Data


When external data types are imported into ERDAS IMAGINE, only the required
attribute information is imported into the attribute tables (AAT and PAT files) of the
new vector layer. The rest of the attribute information is written to one of the following
INFO files:

• <layer name>.ACODE - arc attribute information

• <layer name>.PCODE - polygon attribute information

• <layer name>.XCODE - point attribute information

To utilize all of this attribute information, the INFO files can be merged into the PAT
and AAT files. Once this attribute information has been merged, it can be viewed in
IMAGINE CellArrays and edited as desired. This new information can then be
exported back to its original format.

The complete path of the file must be specified when establishing an INFO file name in
an ERDAS IMAGINE Viewer application, such as exporting attributes or merging
attributes, as shown in the example below:

/georgia/parcels/info!arc!parcels.pcode


Use the Attributes option in the IMAGINE Viewer to view and manipulate vector attribute
data, including merging and exporting. (The Raster Attribute Editor is for raster attributes only
and cannot be used to edit vector attributes.)

See the ERDAS IMAGINE On-Line Help for more information about using CellArrays.

Displaying Vector Data
Vector data are displayed in Viewers, as are other data types in ERDAS IMAGINE. The
user can display a single vector layer, overlay several layers in one Viewer, or display
a vector layer(s) over a raster layer(s).

In layers that contain more than one feature (a combination of points, lines, and
polygons), the user can select which features to display. For example, if a user is
studying parcels, he or she may want to display only the polygons in a layer that also
contains street centerlines (lines).

Color Schemes
Vector data are usually assigned class values in the same manner as the pixels in a
thematic raster file. These class values correspond to different colors on the display
screen. As with a pseudo color image, the user can assign a color scheme for displaying
the vector classes.

See "CHAPTER 4: Image Display" for a thorough discussion of how images are displayed.

Symbolization
Vector layers can be displayed with symbolization, meaning that the attributes can be
used to determine how points, lines, and polygons are rendered. Points, lines,
polygons, and nodes are symbolized using styles and symbols similar to annotation.
For example, if a point layer represents cities and towns, the appropriate symbol could
be used at each point based on the population of that area.

Points
Point symbolization options include symbol, size, and color. The symbols available are
the same symbols available for annotation.

Lines
Lines can be symbolized with varying line patterns, composition, width, and color. The
line styles available are the same as those available for annotation.

Polygons
Polygons can be symbolized as lines or as filled polygons. Polygons symbolized as lines
can have varying line styles (see Lines above). For filled polygons, either a solid fill
color or a repeated symbol can be selected. When symbols are used, the user selects the
symbol to use, the symbol size, symbol color, background color, and the x- and y-
separation between symbols. Figure 20 illustrates a pattern fill.

Figure 20: Symbolization Example. The vector layer reflects the symbolization that is
defined in the Symbology dialog.

See the ERDAS IMAGINE Tour Guides or On-Line Help for information about selecting
features and using CellArrays.


Vector Data Sources


Vector data are created by:

• tablet digitizing—maps, photographs or other hardcopy data can be digitized
using a digitizing tablet

• screen digitizing—create new vector layers by using the mouse to digitize on the
screen

• using other software packages—many external vector data types can be converted
to ERDAS IMAGINE vector layers

• converting raster layers—raster layers can be converted to vector layers

Each of these options is discussed in a separate section below.

Digitizing
In the broadest sense, digitizing refers to any process that converts non-digital data into
numbers. However, in ERDAS IMAGINE, the digitizing of vectors refers to the creation
of vector data from hardcopy materials or raster images that are traced using a digitizer
keypad on a digitizing tablet or a mouse on a displayed image.

Any image not already in digital format must be digitized before it can be read by the
computer and incorporated into the data base. Most Landsat, SPOT, or other satellite
data are already in digital format upon receipt, so it is not necessary to digitize them.
However, the user may also have maps, photographs, or other non-digital data that
contain information they want to incorporate into the study. Or, the user may want to
extract certain features from a digital image to include in a vector layer. Tablet
digitizing and screen digitizing enable the user to digitize certain features of a map or
photograph, such as roads, bodies of water, voting districts, and so forth.

Tablet Digitizing
Tablet digitizing involves the use of a digitizing tablet to transfer non-digital data such
as maps or photographs to vector format. The digitizing tablet contains an internal
electronic grid that transmits data to ERDAS IMAGINE on cue from a digitizer keypad
operated by the user.

Figure 21: Digitizing Tablet

Digitizer Set-Up
The map or photograph to be digitized is secured on the tablet, and a coordinate system
is established with a set-up procedure.

Digitizer Operation
The hand-held digitizer keypad features a small window with cross-hairs and keypad
buttons. Position the intersection of the cross-hairs directly over the point to be
digitized. Depending on the type of equipment and the program being used, one of the
input buttons is pushed to tell the system which function to perform, such as:

• digitize a point (i.e., transmit map coordinate data),

• connect a point to previous points,

• assign a particular value to the point or polygon,

• measure the distance between points, etc.

Move the puck along the desired polygon boundaries or lines, digitizing points at
appropriate intervals (where lines curve or change direction), until all the points are
satisfactorily completed.

Newly created vector layers do not contain topological data. You must create topology using the
Build or Clean options. This is discussed further in “Chapter 9: Geographic Information
Systems.”

Digitizing Modes
There are two modes used in digitizing:

• point mode — one point is generated each time a keypad button is pressed

• stream mode — points are generated continuously at specified intervals, while the
puck is in proximity to the surface of the digitizing tablet
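
For illustration only, the difference between the two modes can be sketched in a few lines of Python (Python is used here only for the sketch; it is not part of the digitizing software). In point mode every button press becomes a vertex; in stream mode, positions are sampled continuously and a vertex is kept only after the puck has moved a chosen interval. The interval and coordinates below are hypothetical.

import math

def stream_mode(positions, interval):
    """Keep a vertex only after the puck has moved at least `interval` map units."""
    kept = [positions[0]]
    for p in positions[1:]:
        if math.dist(p, kept[-1]) >= interval:
            kept.append(p)
    return kept

# Hypothetical puck positions sampled while tracing a line
positions = [(0, 0), (0.4, 0.1), (1.1, 0.2), (2.3, 0.5), (2.4, 0.6), (3.6, 1.0)]
print(stream_mode(positions, interval=1.0))
# [(0, 0), (1.1, 0.2), (2.3, 0.5), (3.6, 1.0)]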

You can create a new vector layer from the IMAGINE Viewer. Select the Tablet Input function
from the Viewer to use a digitizing tablet to enter new information into that layer.

Measurement
The digitizing tablet can also be used to measure both linear and areal distances on a
map or photograph. The digitizer puck is used to outline the areas to measure. The user
can measure:

• lengths and angles by drawing a line

• perimeters and areas using a polygonal, rectangular, or elliptical shape

• positions by specifying a particular point


Measurements can be saved to a file, printed, and copied. These operations can also be
performed with screen digitizing.
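
The geometry behind these measurements is straightforward. As a point of reference, the following minimal Python sketch (not an ERDAS IMAGINE function) computes the perimeter and area of a digitized polygon, assuming its vertices are already in planar map coordinates; the vertex values are hypothetical.

import math

# Hypothetical digitized polygon vertices in map coordinates (meters)
vertices = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)]

def perimeter(pts):
    """Sum of the distances between successive vertices, closing the ring."""
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

def area(pts):
    """Polygon area by the shoelace formula."""
    s = sum(pts[i][0] * pts[(i + 1) % len(pts)][1] - pts[(i + 1) % len(pts)][0] * pts[i][1]
            for i in range(len(pts)))
    return abs(s) / 2.0

print(perimeter(vertices))   # 300.0 meters
print(area(vertices))        # 5000.0 square meters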

Select the Measure function from the IMAGINE Viewer or click on the Ruler tool in the Viewer
tool bar to enable tablet or screen measurement.

Screen Digitizing
In screen digitizing, vector data are drawn in the Viewer with a mouse using the
displayed image as a reference. These data are then written to a vector layer.

Screen digitizing is used for the same purposes as tablet digitizing, such as:

• digitizing roads, bodies of water, political boundaries

• selecting training samples for input to the classification programs

• outlining an area of interest for any number of applications

Create a new vector layer from the IMAGINE Viewer.

Imported Vector Data
Many types of vector data from other software packages can be incorporated into the ERDAS IMAGINE system. These data formats include:

• ARC/INFO GENERATE format files from ESRI, Inc.

• ARC/INFO INTERCHANGE files from ESRI, Inc.

• ARCVIEW Shape files from ESRI, Inc.

• Digital Line Graphs (DLG) from U.S.G.S.

• Digital Exchange Files (DXF) from Autodesk, Inc.

• ETAK MapBase files from ETAK, Inc.

• Initial Graphics Exchange Standard (IGES) files

• Intergraph Design (DGN) files from Intergraph

• Spatial Data Transfer Standard (SDTS) vector files

• Topologically Integrated Geographic Encoding and Referencing System


(TIGER) files from the U.S. Census Bureau

• Vector Product Format (VPF) files from the Defense Mapping Agency

See "CHAPTER 3: Raster and Vector Data Sources" for more information on these data.

Raster to Vector Conversion
A raster layer can be converted to a vector layer and used as another layer in a vector data base. The diagram below illustrates a thematic file in raster format that has been converted to vector format.

Figure 22: Raster Format Converted to Vector Format (a raster soils layer and the same layer converted to a vector polygon layer)

Most commonly, thematic raster data rather than continuous data are converted to
vector format, since converting continuous layers may create more vector features than
are practical or even manageable.

Convert vector data to raster data, and vice versa, using ERDAS IMAGINE Vector.
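
To illustrate the idea of the conversion (this is not the IMAGINE Vector utility itself), the following Python sketch turns a tiny thematic array into polygons, one per contiguous group of identically valued pixels. It assumes the open-source rasterio and shapely packages are available; the soils values are hypothetical.

import numpy as np
import rasterio.features
from shapely.geometry import shape

# A tiny stand-in for a thematic soils raster (class values 1 and 2)
soils = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2],
                  [1, 2, 2, 2]], dtype=np.int32)

# Each contiguous group of identically valued pixels becomes one polygon
for geom, class_value in rasterio.features.shapes(soils):
    polygon = shape(geom)
    print(int(class_value), polygon.area, polygon.wkt)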


CHAPTER 3
Raster and Vector Data Sources

Introduction
This chapter is an introduction to the most common raster and vector data types that
can be used with the ERDAS IMAGINE software package. The raster data types
covered include:

• visible/infrared satellite data

• radar imagery

• airborne sensor data

• digital terrain models

• scanned or digitized maps and photographs

The vector data types covered include:

• ARC/INFO GENERATE format files

• USGS Digital Line Graphs (DLG)

• AutoCAD Digital Exchange Files (DXF)

• MapBase digital street network files (ETAK)

• U.S. Department of Commerce Initial Graphics Exchange Standard files (IGES)

• U.S. Census Bureau Topologically Integrated Geographic Encoding and


Referencing System files (TIGER)

Importing and Exporting Raster Data
There is an abundance of data available for use in GIS today. In addition to satellite and airborne imagery, raster data sources include digital x-rays, sonar, microscopic imagery, video digitized data, and many other sources.

Because of the wide variety of data formats, ERDAS IMAGINE provides two options
for importing data:

• Direct import for specific formats

• Generic import for general formats

Direct Import
Table 2 lists some of the raster data formats that can be directly imported to and
exported from ERDAS IMAGINE:

Table 2: Raster Data Formats for Direct Import

Data Type Import Export


ADRG ✓
ADRI ✓
ASCII ✓
AVHRR ✓
BIL, BIP, BSQ (a) ✓ ✓
DTED ✓
ERDAS 7.X (.LAN, .GIS, .ANT) ✓ ✓
GRID ✓ ✓
Landsat ✓
RADARSAT ✓
SPOT ✓
Sun Raster ✓ ✓
TIFF ✓ ✓
USGS DEM ✓ ✓

a. See Generic Import on page 52.

Once imported, the raster data are converted to the ERDAS IMAGINE file format
(.img). The direct import function will import the data file values that make up the
raster image, as well as the ephemeris or additional data inherent to the data structure.
For example, when the user imports Landsat data, ERDAS IMAGINE also imports the
georeferencing data for the image.

Raster data formats cannot be exported as vector data formats, unless they are converted with
the Vector utilities.

Each direct function is programmed specifically for that type of data and cannot be used to
import other data types.

Generic Import
The Generic import option is a flexible program which enables the user to define the
data structure for ERDAS IMAGINE. This program allows the import of BIL, BIP, and
BSQ data that are stored in left to right, top to bottom row order. Data formats from
unsigned 1-bit up to 64-bit floating point can be imported. This program imports only
the data file values—it does not import ephemeris data, such as georeferencing infor-
mation. However, this ephemeris data can be viewed using the Data View option (from
the Utility menu or the Import dialog).
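
As an illustration of what the Generic import option must be told, the following NumPy sketch (not the IMAGINE importer) shows how the same left-to-right, top-to-bottom stream of data file values is interpreted under each of the three layouts. The file name and dimensions are hypothetical, and a real file follows only one of the layouts.

import numpy as np

rows, cols, bands = 1024, 1024, 3        # hypothetical dimensions
dtype = np.uint8                         # e.g. unsigned 8-bit data

raw = np.fromfile("scene.raw", dtype=dtype, count=rows * cols * bands)

# BSQ: all of band 1, then all of band 2, and so on
bsq = raw.reshape(bands, rows, cols)

# BIL: row 1 of band 1, row 1 of band 2, ..., then row 2 of each band
bil = raw.reshape(rows, bands, cols)

# BIP: every band value for pixel 1, then pixel 2, and so on
bip = raw.reshape(rows, cols, bands)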


Complex data cannot be imported using this program; however, they can be imported
as two real images and then combined into one complex image using the IMAGINE
Spatial Modeler.

You cannot import tiled or compressed data using the Generic import option.

Importing and Exporting Vector Data
Vector layers can be created within ERDAS IMAGINE by digitizing points, lines, and polygons using a digitizing tablet or the computer screen. Several vector data types, which are available from a variety of government agencies and private companies, can also be imported. Table 3 lists some of the vector data formats that can be imported to, and exported from, ERDAS IMAGINE:
Table 3: Vector Data Formats for Import and Export

Data Type Import Export


GENERATE ✓ ✓
DXF ✓ ✓
DLG ✓ ✓
ETAK ✓
IGES ✓ ✓
TIGER ✓ ✓

Once imported, the vector data are automatically converted to ERDAS IMAGINE
vector layers.

These vector formats are discussed in more detail in "Vector Data from Other Software
Vendors" on page 90. See "CHAPTER 2: Vector Layers" for more information on ERDAS
IMAGINE vector layers.

Import and export vector data with the Import/Export function. You can also convert vector
layers to raster format, and vice versa, with the ERDAS IMAGINE Vector utilities.

Satellite Data
There are several data acquisition options available including photography, aerial
sensors, and sophisticated satellite scanners. However, a satellite system offers these
advantages:

• Digital data gathered by a satellite sensor can be transmitted over radio or


microwave communications links and stored on magnetic tapes, so they are easily
processed and analyzed by a computer.

• Many satellites orbit the earth, so the same area can be covered on a regular basis
for change detection.

• Once the satellite is launched, the cost for data acquisition is less than that for
aircraft data.

• Satellites have very stable geometry, meaning that there is less chance for distortion
or skew in the final image.

Use the Import/Export function to import a variety of satellite data.

Satellite System
A satellite system is composed of a scanner with sensors and a satellite platform. The
sensors are made up of detectors.

• The scanner is the entire data acquisition system, such as the Landsat Thematic
Mapper scanner or the SPOT panchromatic scanner (Lillesand and Kiefer 1987). It
includes the sensor and the detectors.

• A sensor is a device that gathers energy, converts it to a signal and presents it in a


form suitable for obtaining information about the environment (Colwell 1983).

• A detector is the device in a sensor system that records electromagnetic radiation.


For example, in the sensor system on the Landsat Thematic Mapper scanner there
are 16 detectors for each wavelength band (except band 6, which has 4 detectors).

In a satellite system, the total width of the area on the ground covered by the scanner is
called the swath width, or width of the total field of view (FOV). FOV differs from IFOV
(instantaneous field of view) in that the IFOV is a measure of the field of view of each
detector. The FOV is a measure of the field of view of all the detectors combined.


Satellite Characteristics
The U. S. Landsat and the French SPOT satellites are two important data acquisition
satellites. These systems provide the majority of remotely sensed digital images in use
today. The Landsat and SPOT satellites have several characteristics in common:

• They have sun-synchronous orbits, meaning that the orbit keeps a nearly constant orientation relative to the sun, so data are always collected at the same local time of day over the same region.

• They both record electromagnetic radiation in one or more bands. Multiband data
are referred to as multispectral imagery. Single band, or monochrome, imagery is
called panchromatic.

• Both scanners can produce nadir views. Nadir is the area on the ground directly
beneath the scanner’s detectors.

NOTE: The current SPOT system has the ability to collect off-nadir stereo imagery, as will the
future Landsat 7 system.

Image Data Comparison


Figure 23 shows a comparison of the electromagnetic spectrum recorded by Landsat
TM, Landsat MSS, SPOT, and NOAA AVHRR data. These data are described in detail
in the following sections.

Figure 23: Multispectral Imagery Comparison. The chart compares, on a micrometer scale, the spectral ranges of the bands recorded by Landsat MSS (Landsats 1, 2, 3, 4), Landsat TM (Landsats 4 and 5), SPOT XS, SPOT Panchromatic, and NOAA AVHRR. (NOAA AVHRR band 5 is not on the NOAA 10 satellite, but is on NOAA 11.)


Landsat
In 1972, the National Aeronautics and Space Administration (NASA) initiated the first
civilian program specializing in the acquisition of remotely sensed digital satellite data.
The first system was called ERTS (Earth Resources Technology Satellites), and later
renamed to Landsat. There have been several Landsat satellites launched since 1972.
Landsats 1, 2, and 3 are no longer operating, but Landsats 4 and 5 are still in orbit
gathering data.

Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and Landsats 4 and 5
collect MSS and Thematic Mapper (TM) data. MSS and TM are discussed in more detail
below.

NOTE: Landsat data are available through the Earth Observation Satellite Company (EOSAT)
or the EROS Data Center. See "Ordering Raster Data" on page 84 for more information.

MSS
The MSS (multispectral scanner) covers a ground scene of approximately 185 × 170 km, from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data are widely used for general geologic studies as well as vegetation inventories.

The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m IFOV. A typical scene


contains approximately 2340 rows and 3240 columns. The radiometric resolution is 6-
bit, but it is stored as 8-bit (Lillesand and Kiefer 1987).

Detectors record electromagnetic radiation (EMR) in four bands:

• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting
cultural features, such as roads. These bands also show detail in water.

• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in
land/water and vegetation discrimination.

1 =Green, 0.50-0.60 µm
This band scans the region between the blue and red chlorophyll absorption bands. It
corresponds to the green reflectance of healthy vegetation, and it is also useful for
mapping water bodies.

2 =Red, 0.60-0.70 µm
This is the red chlorophyll absorption band of healthy green vegetation and represents
one of the most important bands for vegetation discrimination. It is also useful for
determining soil boundary and geological boundary delineations and cultural features.

3 =Reflective infrared, 0.70-0.80 µm


This band is especially responsive to the amount of vegetation biomass present in a
scene. It is useful for crop identification and emphasizes soil/crop and land/water
contrasts.

4 =Reflective infrared, 0.80-1.10 µm


This band is useful for vegetation surveys and for penetrating haze (Jensen 1996).

TM
The TM (thematic mapper) scanner is a multispectral scanning system much like the
MSS, except that the TM sensor records reflected/emitted electromagnetic energy from
the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the
spectrum. TM has higher spatial, spectral, and radiometric resolution than MSS.

TM has a swath width of approximately 185 km from a height of approximately 705 km.
It is useful for vegetation type and health determination, soil moisture, snow and cloud
differentiation, rock type discrimination, etc.

The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6),
which has a spatial resolution of 120 × 120 m. The larger pixel size of this band is
necessary for adequate signal strength. However, the thermal band is resampled to 28.5
× 28.5 m to match the other bands. The radiometric resolution is 8-bit, meaning that each
pixel has a possible range of data values from 0 to 255.

Detectors record EMR in seven bands:

• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in
detecting cultural features such as roads. These bands also show detail in water.

• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be
used in land/water discrimination.

• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping
(Jensen 1996; Lillesand and Kiefer 1987).

1 =Blue, 0.45-0.52 µm
Useful for mapping coastal water areas, differentiating between soil and vegetation,
forest type mapping, and detecting cultural features.

2 =Green, 0.52-0.60 µm
Corresponds to the green reflectance of healthy vegetation. Also useful for cultural
feature identification.

3 =Red, 0.63-0.69 µm
Useful for discriminating between many plant species. It is also useful for determining
soil boundary and geological boundary delineations as well as cultural features.

4 =Reflective-infrared, 0.76-0.90 µm
This band is especially responsive to the amount of vegetation biomass present in a
scene. It is useful for crop identification and emphasizes soil/crop and land/water
contrasts.

5 =Mid-infrared, 1.55-1.74 µm
This band is sensitive to the amount of water in plants, which is useful in crop drought
studies and in plant health analyses. This is also one of the few bands that can be used
to discriminate between clouds, snow, and ice.


6 =Thermal-infrared, 10.40-12.50 µm
This band is useful for vegetation and crop stress detection, heat intensity, insecticide
applications, and for locating thermal pollution. It can also be used to locate geothermal
activity.

7 =Mid-infrared, 2.08-2.35 µm
This band is important for the discrimination of geologic rock type and soil boundaries,
as well as soil and vegetation moisture content.

Figure 24: Landsat MSS vs. Landsat TM (MSS: 4 bands, one pixel = 57 × 79 m, radiometric resolution 0-127; TM: 7 bands, one pixel = 30 × 30 m, radiometric resolution 0-255)

Band Combinations for Displaying TM Data


Different combinations of the TM bands can be displayed to create different composite
effects. The following combinations are commonly used to display images:

NOTE: The order of the bands corresponds to the Red, Green, and Blue (RGB) color guns of the
monitor.

• Bands 3,2,1 create a true color composite. True color means that objects look as they
would to the naked eye—similar to a color photograph.

• Bands 4,3,2 create a false color composite. False color composites appear similar to
an infrared photograph where objects do not have the same colors or contrasts as
they would naturally. For instance, in an infrared image, vegetation appears red,
water appears navy or black, etc.

• Bands 5,4,2 create a pseudo color composite. (A thematic image is also a pseudo
color image.) In pseudo color, the colors do not reflect the features in natural colors.
For instance, roads may be red, water yellow, and vegetation blue.

Different color schemes can be used to bring out or enhance the features under study.
These are by no means all of the useful combinations of these seven bands. The bands
to be used are determined by the particular application.
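
In array terms, building such a composite simply stacks the chosen bands into the red, green, and blue planes of a display array. The following minimal NumPy sketch (not the IMAGINE display pipeline) illustrates this; the random array stands in for a real 7-band TM scene.

import numpy as np

# Stand-in for a 7-band, 8-bit TM scene (band, row, column)
tm = np.random.randint(0, 256, size=(7, 512, 512), dtype=np.uint8)

def composite(scene, red, green, blue):
    """Stack three 1-based band numbers into an RGB display array."""
    return np.stack([scene[red - 1], scene[green - 1], scene[blue - 1]], axis=-1)

true_color = composite(tm, 3, 2, 1)    # bands 3, 2, 1
false_color = composite(tm, 4, 3, 2)   # bands 4, 3, 2 (vegetation appears red)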

See "CHAPTER 4: Image Display" for more information on how images are displayed,
"CHAPTER 5: Enhancement" for more information on how images can be enhanced, and
"Ordering Raster Data" on page 84 for information on types of Landsat data available.

SPOT
The first Systeme Pour l’observation de la Terre (SPOT) satellite, developed by the
French Centre National d’Etudes Spatiales (CNES), was launched in early 1986. The
second SPOT satellite was launched in 1990 and the third was launched in 1993. The
sensors operate in two modes, multispectral and panchromatic. SPOT is commonly
referred to as a pushbroom scanner meaning that all scanning parts are fixed and
scanning is accomplished by the forward motion of the scanner. SPOT pushes a line of 3000 (multispectral) or 6000 (panchromatic) detectors along its orbit. This is different from Landsat, which scans with 16 detectors perpendicular to its orbit.

The SPOT satellite can observe the same area on the globe once every 26 days. The SPOT
scanner normally produces nadir views, but it does have off-nadir viewing capability.
Off-nadir refers to any point that is not directly beneath the detectors, but off to an
angle. Using this off-nadir capability, one area on the earth can be viewed as often as
every 3 days.

This off-nadir viewing can be programmed from the ground control station, and is quite
useful for collecting data in a region not directly in the path of the scanner or in the
event of a natural or man-made disaster, where timeliness of data acquisition is crucial.
It is also very useful in collecting stereo data from which elevation data can be extracted.

The width of the swath observed varies between 60 km for nadir viewing and 80 km for
off-nadir viewing at a height of 832 km (Jensen 1996).

Panchromatic
SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial
resolution, contains 1 band—0.51 to 0.73 µm—and is similar to a black and white photo-
graph. It has a radiometric resolution of 8 bits (Jensen 1996).

XS
SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit radiometric resolution,
and contains 3 bands (Jensen 1996).

1 =Green, 0.50-0.59 µm
Corresponds to the green reflectance of healthy vegetation.

2 =Red, 0.61-0.68 µm
Useful for discriminating between plant species. It is also useful for soil boundary and
geological boundary delineations.


3 =Reflective infrared, 0.79-0.89 µm


This band is especially responsive to the amount of vegetation biomass present in a
scene. It is useful for crop identification and emphasizes soil/crop and land/water
contrasts.

Figure 25: SPOT Panchromatic vs. SPOT XS (Panchromatic: 1 band, one pixel = 10 × 10 m; XS: 3 bands, one pixel = 20 × 20 m; radiometric resolution 0-255)

See "Ordering Raster Data" on page 84 for information on the types of SPOT data available.

Stereoscopic Pairs
Two observations can be made by the panchromatic scanner on successive days, so that
the two images are acquired at angles on either side of the vertical, resulting in stereo-
scopic imagery. Stereoscopic imagery can also be achieved by using one vertical scene
and one off-nadir scene. This type of imagery can be used to produce a single image, or
topographic and planimetric maps (Jensen 1996).

Topographic maps indicate elevation. Planimetric maps correctly represent horizontal


distances between objects (Star and Estes 1990).

See "Topographic Data" on page 81 and "CHAPTER 9: Terrain Analysis" for more information
about topographic data and how SPOT stereopairs and aerial photographs can be used to create
elevation data and orthographic images.

NOAA Polar Orbiter Data
The National Oceanic and Atmospheric Administration (NOAA) has sponsored several
polar orbiting satellites to collect data of the earth. These satellites were originally
designed for meteorological applications, but the data gathered have been used in
many fields—from agronomy to oceanography (Needham 1986).

The first of these satellites to be launched was the TIROS-N in 1978. Since the TIROS-N,
five additional NOAA satellites have been launched. Of these, the last three are still in
orbit gathering data.

AVHRR
The NOAA AVHRR (Advanced Very High Resolution Radiometer) data are small-scale
data and often cover an entire country. The swath width is 2700 km and the satellites
orbit at a height of approximately 833 km (Kidwell 1988; Needham 1986).

The AVHRR system allows for direct transmission in real-time of data called High
Resolution Picture Transmission (HRPT). It also allows for about ten minutes of data
to be recorded over any portion of the world on two recorders on board the satellite.
This recorded data are called Local Area Coverage (LAC). LAC and HRPT have
identical formats; the only difference is that HRPT are transmitted directly and LAC are
recorded.

There are three basic formats for AVHRR data which can be imported into ERDAS
IMAGINE:

• Local Area Coverage (LAC) - data recorded on board the sensor with a spatial
resolution of approximately 1.1 × 1.1 km.

• High Resolution Picture Transmission (HRPT) - direct transmission of AVHRR


data in real-time with the same resolution as LAC.

• Global Area Coverage (GAC) - data produced from LAC data by using only 1 out
of every 3 scan lines. GAC data have a spatial resolution of approximately 4 × 4 km.

AVHRR data are available in 10-bit packed and 16-bit unpacked format. The term
packed refers to the way in which the data are written to the tape. Packed data are
compressed to fit more data on each tape (Kidwell 1988).
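
As an illustration of what packing means, the following Python sketch unpacks one common arrangement in which three 10-bit samples are stored in each 32-bit word. The exact sample order, byte order, and header layout are described in the NOAA Level 1B documentation, so the details below should be treated as assumptions; the file name is hypothetical.

import numpy as np

def unpack_10bit(words):
    """Unpack three 10-bit samples from each 32-bit word (values 0-1023)."""
    w = words.astype(np.uint32)
    s1 = (w >> 20) & 0x3FF   # assumed first sample in the high bits
    s2 = (w >> 10) & 0x3FF
    s3 = w & 0x3FF
    return np.stack([s1, s2, s3], axis=-1).ravel()

words = np.fromfile("avhrr_scene.l1b", dtype=">u4")   # hypothetical file name
samples = unpack_10bit(words)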

AVHRR images are useful for snow cover mapping, flood monitoring, vegetation
mapping, regional soil moisture analysis, wildfire fuel mapping, fire detection, dust
and sandstorm monitoring, and various geologic applications (Lillesand and Kiefer
1987). The entire globe can be viewed in 14.5 days. There may be four or five bands,
depending on when the data were acquired.

1 =Visible, 0.58-0.68 µm
This band corresponds to the green reflectance of healthy vegetation and is important
for vegetation discrimination.

2 =Near-infrared, 0.725-1.10 µm
This band is especially responsive to the amount of vegetation biomass present in a
scene. It is useful for crop identification and emphasizes soil/crop and land/water
contrasts.


3 =Thermal-infrared, 3.55-3.93 µm
This is a thermal band that can be used for snow and ice discrimination. It is also useful
for detecting fires.

4 =Thermal-infrared, 10.50-11.50 µm (NOAA-6, 8, 10),


10.30-11.30 µm (NOAA-7, 9, 11)
This band is useful for vegetation and crop stress detection. It can also be used to locate
geothermal activity.

5 =Thermal-infrared, 10.50-11.50 µm (NOAA-6, 8, 10),


11.50-12.50 µm (NOAA-7, 9, 11)
See band 4.

AVHRR data have a radiometric resolution of 10-bits, meaning that each pixel has a
possible data file value between 0 and 1023. AVHRR scenes may contain one band, a
combination of bands, or all bands. All bands are referred to as a full set, and selected
bands are referred to as an extract.

See "Ordering Raster Data" on page 84 for information on the types of NOAA data available.

Use the Import/Export function to import AVHRR data.

Radar Data
Simply put, radar data are produced when:

• a radar transmitter emits a beam of micro or millimeter waves,

• the waves reflect from the surfaces they strike, and

• the backscattered radiation is detected by the radar system’s receiving antenna,


which is tuned to the frequency of the transmitted waves.

The resultant radar data can be used to produce radar images.

While there is a specific importer for RADARSAT data, most types of radar image data can be
imported into ERDAS IMAGINE with the Generic import option of Import/Export.

A radar system can be airborne, spaceborne, or ground-based. Airborne radar systems


have typically been mounted on civilian and military aircraft, but in 1978, the radar
satellite Seasat-1 was launched. The radar data from that mission and subsequent
spaceborne radar systems have been a valuable addition to the data available for use in
GIS processing. Researchers are finding that a combination of the characteristics of
radar data and visible/infrared data is providing a more complete picture of the earth.
In the last decade, the importance and applications of radar have grown rapidly.

Advantages of Using Radar Data
Radar data have several advantages over other types of remotely sensed imagery:
• Radar microwaves can penetrate the atmosphere day or night under virtually all
weather conditions, providing data even in the presence of haze, light rain, snow,
clouds, or smoke.

• Under certain circumstances, radar can partially penetrate arid and hyperarid
surfaces, revealing sub-surface features of the earth.

• Although radar does not penetrate standing water, it can reflect the surface action
of oceans, lakes, and other bodies of water. Surface eddies, swells, and waves are
greatly affected by the bottom features of the water body, and a careful study of
surface action can provide accurate details about the bottom features.

Radar Sensors
Radar images are generated by two different types of sensors:

• SLAR (Side-looking Airborne Radar) — uses an antenna which is fixed below an


aircraft and pointed to the side to transmit and receive the radar signal. (See Figure
26.)

• SAR (Synthetic Aperture Radar) — uses a side-looking, fixed antenna to create a


synthetic aperture. SAR sensors are mounted on satellites and the NASA Space
Shuttle. The sensor transmits and receives as it is moving. The signals received over
a time interval are combined to create the image.


Both SLAR and SAR systems use side-looking geometry. Figure 26 shows a represen-
tation of an airborne SLAR system. Figure 27 shows a graph of the data received from
the radiation transmitted in Figure 26. Notice how the data correspond to the terrain in
Figure 26. These data can be used to produce a radar image of the target area. (A target
is any object or feature that is the subject of the radar scan.)

Figure 26: SLAR Radar (Lillesand and Kiefer 1987). The diagram shows the side-looking geometry, labeling the range and azimuth directions, beam width, azimuth resolution, sensor height at nadir, and previous image lines.


Figure 27: Received Radar Signal (signal strength in DN plotted against time for the terrain in Figure 26, with returns from trees, a hill and its shadow, and a river)

Active and Passive Sensors
An active radar sensor gives off a burst of coherent radiation that reflects from the
target, unlike a passive microwave sensor which simply receives the low-level
radiation naturally emitted by targets.

Like the coherent light from a laser, the waves emitted by active sensors travel in phase
and interact minimally on their way to the target area. After interaction with the target
area, these waves are no longer in phase. This is due to the different distances they
travel from different targets, or single versus multiple bounce scattering.

Figure 28: Radar Reflection from Different Sources and Distances (Lillesand and Kiefer 1987). Radar waves are transmitted in phase; once reflected from diffuse, specular, and corner reflectors, they are out of phase, interfering with each other and producing speckle noise.

At present, these bands are commonly used for radar imaging systems:

Table 4: Commonly Used Bands for Radar Imaging

Band    Frequency Range     Wavelength Range    Radar System
X       5.20-10.90 GHz      5.77-2.75 cm        USGS SLAR
C       3.9-6.2 GHz         3.8-7.6 cm          ERS-1, Fuyo 1
L       0.39-1.55 GHz       76.9-19.3 cm        SIR-A,B, Almaz
P       0.225-0.391 GHz     40.0-76.9 cm        AIRSAR

More information about these radar systems is given later in this chapter.


Radar bands were named arbitrarily when radar was first developed by the military.
The letter designations have no special meaning.

NOTE: The C band overlaps the X band. Wavelength ranges may vary slightly between sensors.
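
The frequency and wavelength columns in Table 4 are related by wavelength = speed of light / frequency, as this small Python check of the X band endpoints shows:

C_LIGHT = 2.998e8   # speed of light, m/s

def wavelength_cm(frequency_ghz):
    """Convert a radar frequency in GHz to wavelength in centimeters."""
    return C_LIGHT / (frequency_ghz * 1e9) * 100.0

print(round(wavelength_cm(5.20), 2))    # 5.77 cm
print(round(wavelength_cm(10.90), 2))   # 2.75 cm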

Speckle Noise
Once out of phase, the radar waves can interfere constructively or destructively to
produce light and dark pixels known as speckle noise. Speckle noise in radar data must
be reduced before the data can be utilized. However, the radar image processing
programs used to reduce speckle noise also produce changes to the image. This consid-
eration, combined with the fact that different applications and sensor outputs neces-
sitate different speckle removal models, has led ERDAS to offer several speckle
reduction algorithms in ERDAS IMAGINE Radar.

When processing radar data, the order in which the image processing programs are implemented
is crucial. This is especially true when considering the removal of speckle noise. Since any image
processing done before removal of the speckle results in the noise being incorporated into and
degrading the image, do not rectify, correct to ground range, or in any way resample the pixel
values before removing speckle noise. A rotation using nearest neighbor might be permissible.
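
For illustration only, the following Python sketch applies a simple 3 × 3 median filter, one of the simplest ways to suppress speckle. It is not one of the IMAGINE Radar speckle reduction algorithms; it assumes the SciPy package, and the random array stands in for a real radar scene.

import numpy as np
from scipy.ndimage import median_filter

# Stand-in for a speckled radar scene (real data would be read from file)
scene = np.random.gamma(shape=1.0, scale=100.0, size=(512, 512)).astype(np.float32)

# Replace each pixel with the median of its 3 x 3 neighborhood
despeckled = median_filter(scene, size=3)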

ERDAS IMAGINE Radar enables the user to:

• import radar data into the GIS as a stand-alone source or as an additional layer with
other imagery sources

• remove speckle noise

• enhance edges

• perform texture analysis

• perform radiometric and slant-to-ground range correction

See "CHAPTER 5: Enhancement" for more information on radar imagery enhancement.

Applications for Radar Data
Radar data can be used independently in GIS applications or combined with other satellite data, such as Landsat, SPOT, or AVHRR. Possible GIS applications for radar data include:

• Geology — radar’s ability to partially penetrate land cover and sensitivity to micro
relief makes radar data useful in geologic mapping, mineral exploration, and
archaeology.

• Classification — a radar scene can be merged with visible/infrared data as an


additional layer(s) in vegetation classification for timber mapping, crop
monitoring, etc.

• Glaciology — the ability to provide imagery of ocean and ice phenomena makes
radar an important tool for monitoring climatic change through polar ice variation.

• Oceanography — radar is used for wind and wave measurement, sea-state and
weather forecasting, and monitoring ocean circulation, tides, and polar oceans.

• Hydrology — radar data are proving useful for measuring soil moisture content
and mapping snow distribution and water content.

• Ship monitoring — the ability to provide day/night all-weather imaging, as well as


detect ships and associated wakes, makes radar a tool which can be used for ship
navigation through frozen ocean areas such as the Arctic or North Atlantic Passage.
(The ERS-1 satellite provides excellent coverage of these specific target areas,
revisiting every 35 days.)

• Offshore Oil Activities — radar data are used to provide ice updates for offshore
drilling rigs, determining weather and sea conditions for drilling and installation
operations, and detecting oil spills.

• Pollution monitoring — radar can detect oil on the surface of water and can be used
to track the spread of an oil spill.


Current Radar Sensors
Table 5 gives a brief description of currently available radar sensors. This is not a
complete list of such sensors, but it does represent the ones most useful for GIS appli-
cations.

Table 5: Current Radar Sensors

                 ERS-1, 2      JERS-1        SIR-A, B     SIR-C            RADARSAT               Almaz-1
Availability     operational   operational   1981, 1984   1994             operational            1991-1992
Resolution       12.5 m        18 m          25 m         25 m             10-100 m               15 m
Revisit Time     35 days       44 days       NA           NA               3 days                 NA
Scene Area       100X100 km    75X100 km     30X60 km     variable swath   50X50 to 500X500 km    40X100 km
Bands            C band        L band        L band       L,C,X bands      C band                 C band

Future Radar Sensors
Several radar satellites are planned for launch within the next several years, but only a
few programs will be successful. Following are two scheduled programs which are
known to be highly achievable.

Almaz 1-b
NPO Mashinostroenia plans to launch and operate Almaz-1b as a commercial program
in 1998. The Almaz-1b system will include a unique, complex multisensor payload
consisting of eight high resolution sensors which can operate in various sensor combi-
nations, including high resolution, two-pass radar stereo and single- pass stereo
coverage in the optical and multispectral bandwidths. Almaz-1b will feature three
synthetic aperture radars (SAR) that can collect multipolar, multifrequency (X, P, S
band) imagery in high resolution (5-7m spatial; 20-30 km swath), intermediate (5-15m
spatial; 60-70km swath), or survey (20-40m spatial; 120-170km swath) modes.

Light SAR
NASA/JPL is currently designing a radar satellite called Light SAR. Present plans are
for this to be a multi-polar sensor operating at L-band.

Image Data from Aircraft
Image data can also be acquired from multispectral scanners or radar sensors aboard aircraft, as well as satellites. This is useful if there isn’t time to wait for the next satellite to pass over a particular area, or if it is necessary to achieve a specific spatial or spectral resolution that cannot be attained with satellite sensors.

For example, this type of data can be beneficial in the event of a natural or man-made
disaster, because there is more control over when and where the data are gathered.

Two common types of airborne image data are:

• AIRSAR

• AVIRIS

AIRSAR
AIRSAR (Airborne Synthetic Aperture Radar) is an experimental airborne radar sensor
developed by Jet Propulsion Laboratories (JPL), Pasadena, California, under a contract
with NASA. AIRSAR data have been available since 1983.

This sensor collects data at three frequencies:

• C-band

• L-band

• P-band

Because this sensor measures at three different wavelengths, different scales of surface
roughness are obtained. The AIRSAR sensor has an IFOV of 10 m and a swath width of
12 km.

AIRSAR data have been used in many applications such as measuring snow wetness,
classifying vegetation, and estimating soil moisture.

NOTE: These data are distributed in a compressed format. They must be decompressed before
loading with an algorithm available from JPL. See "Addresses to Contact" on page 85 for contact
information.

AVIRIS
The AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) was also developed by
JPL (Pasadena, California) under a contract with NASA. AVIRIS data have been
available since 1987.

This sensor produces multispectral data that have 224 narrow bands. These bands are
10 nm wide and cover the spectral range of 0.4 - 2.4 µm. The swath width is 11 km and
the spatial resolution is 20 m. This sensor is flown at an altitude of approximately 20 km.
The data are recorded at 10-bit radiometric resolution.


Image Data from Scanning
Hardcopy maps and photographs can be incorporated into the ERDAS IMAGINE system through the use of a scanning camera. Scanning is remote sensing in a manner of speaking, but the term “remote sensing” is usually reserved for satellite or aerial data collection. In GIS, scanning refers to the transfer of analog data, such as photographs, maps, or other viewable images, into a digital (raster) format.

In scanning, the map, photograph, transparency, or other object to be scanned is


typically placed on a flat surface, and the camera scans across the object to record the
image, transferring it from analog to digital data. Different scanning systems have
different setups for scanning.

There are many commonly used scanning cameras for GIS and other desktop applica-
tions, such as Eikonix (Eikonix Corp., Huntsville, Alabama) or Vexcel (Vexcel Imaging
Corp., Boulder, Colorado). Many scanners produce a TIFF file, which can be read
directly by ERDAS IMAGINE.

Use the Import/Export function to import scanned data.

Eikonix data can be obtained in the ERDAS IMAGINE .img format using the XSCAN™ Tool
by Ektron and then imported directly into ERDAS IMAGINE.

ADRG Data
ADRG (ARC Digitized Raster Graphic) data, from the Defense Mapping Agency
(DMA), are primarily used for military purposes by defense contractors. The data are
in 128 × 128 pixel tiled, 8-bit format stored on CD-ROM. ADRG data provide large
amounts of hardcopy graphic data without having to store and maintain the actual
hardcopy graphics.

ADRG data consist of digital copies of DMA hardcopy graphics transformed into the
ARC system and accompanied by ASCII encoded support files. These digital copies are
produced by scanning each hardcopy graphic into three images: red, green, and blue.
The data are scanned at a nominal collection interval of 100 microns (254 lines per inch).
When these images are combined, they provide a 3-band digital representation of the
original hardcopy graphic.

ARC System
The ARC system (Equal Arc-Second Raster Chart/Map) provides a rectangular
coordinate and projection system at any scale for the earth’s ellipsoid, based on the
World Geodetic System 1984 (WGS 84). The ARC System divides the surface of the
ellipsoid into 18 latitudinal bands called zones. Zones 1 - 9 cover the Northern
hemisphere and zones 10 - 18 cover the Southern hemisphere. Zone 9 is the North Polar
region. Zone 18 is the South Polar region.

Distribution Rectangles
For distribution, ADRG are divided into geographic data sets called Distribution
Rectangles (DRs). A DR may include data from one or more source charts or maps. The
boundary of a DR is a geographic rectangle which typically coincides with chart and
map neatlines.

Zone Distribution Rectangles (ZDRs)


Each DR is divided into Zone Distribution Rectangles (ZDRs). There is one ZDR for
each ARC System zone covering any part of the DR. The ZDR contains all the DR data
that fall within that zone’s limits. ZDRs typically overlap by 1,024 rows of pixels, which
allows for easier mosaicking. Each ZDR is stored on the CD-ROM as a single raster
image file (.IMG). Included in each image file are all raster data for a DR from a single
ARC System zone, and padding pixels needed to fulfill format requirements. The
padding pixels are black and have a zero value.

The padding pixels are not imported by ERDAS IMAGINE, nor are they counted when figuring
the pixel height and width of each image.

ADRG File Format
Each CD-ROM contains up to eight different file types which make up the ADRG
format. ERDAS IMAGINE imports three types of ADRG data files:

• .OVR (Overview)

• .IMG (Image)

• .Lxx (Legend or marginalia data)

NOTE: Compressed ADRG (CADRG) is a different format, with its own importer.


The ADRG .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr
file formats.

.OVR (overview)
The overview file contains a 16:1 reduced resolution image of the whole DR. There is an
overview file for each DR on a CD.

Importing ADRG Subsets


Since DRs can be rather large, it may be beneficial to import a subset of the DR data for
the application. ERDAS IMAGINE enables the user to define a subset of the data from
the preview image (see Figure 30).

You can import from only one ZDR at a time. If a subset covers multiple ZDRs, they must be
imported separately and mosaicked with the ERDAS IMAGINE Mosaic option.

Figure 29: ADRG Overview File Displayed in ERDAS IMAGINE Viewer

The white rectangle in Figure 30 represents the DR. The subset area in this illustration
would have to be imported as three files, one for each zone in the DR.

Notice how the ZDRs overlap. Therefore, the .IMG files for Zones 2 and 4 would also
be included in the subset area.

Figure 30: Subset Area with Overlapping ZDRs (the subset area spans overlapping Zones 2, 3, and 4 within the DR)

.IMG (scanned image data)
The .IMG files are the data files containing the actual scanned hardcopy graphic(s). Each .IMG file contains one ZDR plus padding pixels. The ERDAS IMAGINE Import function converts the .IMG data files on the CD-ROM to the IMAGINE file format (.img). The .img file can then be displayed in a Viewer.

.Lxx (legend data)
Legend files contain a variety of diagrams and accompanying information. This is infor-
mation which typically appears in the margin or legend of the source graphic.

This information can be imported into ERDAS IMAGINE and viewed. It can also be added to a
map composition with the ERDAS IMAGINE Map Composer.


Each legend file contains information based on one of these diagram types:

• Index (IN) — shows the approximate geographical position of the graphic and its
relationship to other graphics in the region.

• Elevation/Depth Tint (EL) — a multicolored graphic depicting the colors or tints


used to represent different elevations or depth bands on the printed map or chart.

• Slope (SL) — represents the percent and degree of slope appearing in slope bands.

• Boundary (BN) — depicts the geopolitical boundaries included on the map or chart.

• Accuracy (HA, VA, AC) — depicts the horizontal and vertical accuracies of selected
map or chart areas. AC represents a combined horizontal and vertical accuracy
diagram.

• Geographic Reference (GE) — depicts the positioning information as referenced to


the World Geographic Reference System.

• Grid Reference (GR) — depicts specific information needed for positional


determination with reference to a particular grid system.

• Glossary (GL) — gives brief lists of foreign geographical names appearing on the
map or chart with their English-language equivalents.

• Landmark Feature Symbols (LS) — landmark feature symbols are used to depict
navigationally-prominent entities.

ARC System Charts


The ADRG data on each CD-ROM are based on one of these chart types from the ARC
system:

Table 6: ARC System Chart Types

ARC System Chart Type Scale


GNC (Global Navigation Chart) 1:5,000,000
JNC-A (Jet Navigation Chart - Air) 1:3,000,000
JNC (Jet Navigation Chart) 1:2,000,000
ONC (Operational Navigation Chart) 1:1,000,000
TPC (Tactical Pilot Chart) 1:500,000
JOG-A (Joint Operations Graphic - Air) 1:250,000
JOG-G (Joint Operations Graphic - Ground) 1:250,000
JOG-C (Joint Operations Graphic - Combined) 1:250,000
JOG-R (Joint Operations Graphic - Radar) 1:250,000
ATC (Series 200 Air Target Chart) 1:200,000
TLM (Topographic Line Map) 1:50,000

Each ARC System chart type has certain legend files associated with the image(s) on the
CD-ROM. The legend files associated with each chart type are checked in Table 7.

Table 7: Legend Files for the ARC System Chart Types

ARC System Chart IN EL SL BN VA HA AC GE GR GL LS


GNC ✓ ✓
JNC / JNC-A ✓ ✓ ✓ ✓ ✓
ONC ✓ ✓ ✓ ✓ ✓
TPC ✓ ✓ ✓ ✓ ✓ ✓
JOG-A ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
JOG-G / JOG-C ✓ ✓ ✓ ✓ ✓ ✓ ✓
JOG-R ✓ ✓ ✓ ✓ ✓ ✓
ATC ✓ ✓ ✓ ✓ ✓
TLM ✓ ✓ ✓ ✓ ✓ ✓

ADRG File Naming Convention
The ADRG file naming convention is based on a series of codes: ssccddzz
• ss = the chart series code (see the table of ARC System charts)

• cc = the country code

• dd = the DR number on the CD-ROM (01-99). DRs are numbered beginning with
01 for the northwesternmost DR and increasing sequentially west to east, then
north to south.

• zz = the zone rectangle number (01-18)

For example, in the ADRG filename JNUR0101.IMG:

• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.

• UR = Europe. The data is coverage of a European continent.

• 01 = This is the first Distribution Rectangle on the CD-ROM, providing coverage of


the northwestern edge of the image area.

• 01 = This is the first zone rectangle of the Distribution Rectangle.

• .IMG = This file contains the actual scanned image data for a ZDR.

You may change this name when the file is imported into ERDAS IMAGINE. If you do not
specify a file name, IMAGINE will use the ADRG file name for the image.
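
Because the convention is fixed-width, the codes can be separated mechanically. The following Python sketch (not an IMAGINE function) does so for the example name above:

def parse_adrg_name(filename):
    """Split an ADRG image file name of the form ssccddzz.IMG into its codes."""
    stem = filename.split(".")[0]
    return {
        "series": stem[0:2],         # chart series code, e.g. JN
        "country": stem[2:4],        # country code, e.g. UR
        "dr_number": stem[4:6],      # Distribution Rectangle number, 01-99
        "zone_rectangle": stem[6:8], # zone rectangle number, 01-18
    }

print(parse_adrg_name("JNUR0101.IMG"))
# {'series': 'JN', 'country': 'UR', 'dr_number': '01', 'zone_rectangle': '01'}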


Legend File Names


Legend file names will include a code to designate the type of diagram information
contained in the file (see the previous legend file description). For example, the file
JNUR01IN.L01 means:

• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.

• UR = Europe. The data is coverage of a European continent.

• 01 = This is the first Distribution Rectangle on the CD-ROM, providing coverage of


the northwestern edge of the image area.

• IN = This indicates that this file is an index diagram from the original hardcopy
graphic.

• .L01 = This legend file contains information for the source graphic 01. The source
graphics in each DR are numbered beginning with 01 for the northwesternmost
source graphic, increasing sequentially west to east, then north to south. Source
directories and their files include this number code within their names.

For more detailed information on ADRG file naming conventions, see the Defense Mapping
Agency Product Specifications for ARC Digitized Raster Graphics (ADRG), published by
DMA Aerospace Center.

ADRI Data
ADRI (ARC Digital Raster Imagery), like ADRG data, are also from the DMA and are
currently available only to Department of Defense contractors. The data are in 128 × 128
tiled, 8-bit format, stored on 8 mm tape in band sequential format.

ADRI consists of SPOT panchromatic satellite imagery transformed into the ARC
system and accompanied by ASCII encoded support files.

Like ADRG, ADRI data are stored in the ARC system in DRs. Each DR consists of all or
part of one or more images mosaicked to meet the ARC bounding rectangle, which
encloses a 1 degree by 1 degree geographic area. (See Figure 31.) Source images are
orthorectified to mean sea level using DMA Level I Digital Terrain Elevation Data
(DTED) or equivalent data (Air Force Intelligence Support Agency 1991).

See the previous section on ADRG data for more information on the ARC system. See more
about DTED data on page 83.

Figure 31: Seamless Nine Image DR (nine source images mosaicked into a single Distribution Rectangle)

In ADRI data, each DR contains only one ZDR. Each ZDR is stored as a single raster
image file, with no overlapping areas.


There are six different file types that make up the ADRI format: two types of data files,
three types of header files, and a color test patch file. ERDAS IMAGINE imports the two
types of ADRI data files:

• .OVR (Overview)

• .IMG (Image)

The ADRI .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr
file formats.

.OVR (overview)
The overview file (.OVR) contains a 16:1 reduced resolution image of the whole DR.
There is an overview file for each DR on a tape. The .OVR images show the mosaicking
from the source images and the dates when the source images were collected. (See
Figure 32.) This does not appear on the ZDR image.

Figure 32: ADRI Overview File Displayed in ERDAS IMAGINE Viewer

.IMG (scanned image data)
The .IMG files contain the actual mosaicked images. Each .IMG file contains one ZDR plus any padding pixels needed to fit the ARC boundaries. Padding pixels are black and have a zero data value. The ERDAS IMAGINE Import function converts the .IMG data files to the IMAGINE file format (.img). The .img file can then be displayed in a Viewer. Padding pixels are not imported, nor are they counted in image height or width.

ADRI File Naming Convention
The ADRI file naming convention is based on a series of codes: ssccddzz
• ss = the image source code:

- SP (SPOT panchromatic)

- SX (SPOT multispectral) (not currently available)

- TM (Landsat Thematic Mapper) (not currently available)

• cc = the country code

• dd = the DR number on the tape (01-99). DRs are numbered beginning with 01 for
the northwesternmost DR and increasing sequentially west to east, then north to
south.

• zz = the zone rectangle number (01-18)

For example, in the ADRI filename SPUR0101.IMG:

• SP = SPOT 10 m panchromatic image.

• UR = Europe. The data is coverage of a European continent.

• 01 = This is the first Distribution Rectangle on the tape, providing coverage of


the northwestern edge of the image area.

• 01 = This is the first zone rectangle of the Distribution Rectangle.

• .IMG = This file contains the actual scanned image data for a ZDR.

You may change this name when the file is imported into ERDAS IMAGINE. If you do not
specify a file name, IMAGINE will use the ADRI file name for the image.


Topographic Data
Satellite data can also be used to create elevation, or topographic, data through the use of stereoscopic pairs, as discussed above under SPOT. Radar sensor data can also be a source of topographic information, as discussed in "CHAPTER 9: Terrain Analysis."
However, most available elevation data are created with stereo photography and
topographic maps.

ERDAS IMAGINE software can load and use:

• USGS Digital Elevation Models (DEMs)

• Digital Terrain Elevation Data (DTED)

Arc/Second Format
Most elevation data are in arc/second format. Arc/second refers to data in the
Latitude/Longitude (Lat/Lon) coordinate system. The data are not rectangular, but
follow the arc of the earth’s latitudinal and longitudinal lines.

Each degree of latitude and longitude is made up of 60 minutes. Each minute is made
up of 60 seconds. Arc/second data are often referred to by the number of seconds in
each pixel. For example, 3 arc/second data have pixels which are 3 × 3 seconds in size.
The actual area represented by each pixel is a function of its latitude. Figure 33 illus-
trates a 1° × 1° area of the earth.

A row of data file values from a DEM or DTED file is called a profile. The profiles of
DEM and DTED run south to north, that is, the first pixel of the record is the southern-
most pixel.
Figure 33: ARC/Second Format (a 1° × 1° cell bounded by lines of longitude and latitude, with 1201-pixel profiles)

In Figure 33 there are 1201 pixels in the first row and 1201 pixels in the last row, but the
area represented by each pixel increases in size from the top of the file to the bottom of
the file. The extracted section in the example above has been exaggerated to illustrate
this point.

Arc/second data used in conjunction with other image data, such as TM or SPOT, must
be rectified or projected onto a planar coordinate system such as UTM.
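
The dependence of pixel size on latitude is easy to quantify. The following Python sketch approximates the ground dimensions of an arc/second pixel on a spherical earth of radius 6,371 km, which is enough to show why the east-west width shrinks toward the poles.

import math

def pixel_ground_size_m(arc_seconds, latitude_deg):
    """Approximate north-south and east-west ground size of an arc/second pixel."""
    meters_per_arcsec = 2 * math.pi * 6_371_000 / (360 * 3600)  # about 30.9 m
    north_south = arc_seconds * meters_per_arcsec
    east_west = north_south * math.cos(math.radians(latitude_deg))
    return north_south, east_west

print(pixel_ground_size_m(3, 0))    # about (92.7, 92.7) m at the equator
print(pixel_ground_size_m(3, 60))   # about (92.7, 46.3) m at 60 degrees latitude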

DEM
DEMs are digital elevation model data. DEM was originally a term reserved for
elevation data provided by the United States Geological Survey (USGS), but it is now
used to describe any digital elevation data.

DEMs can be:

• purchased from USGS (for US areas only)

• created from stereopairs (derived from satellite data or aerial photographs)

See "CHAPTER 9: Terrain Analysis" for more information on using DEMs. See "Ordering
Raster Data" on page 84 for information on ordering DEMs.

USGS DEMs
There are two types of DEMs that are most commonly available from USGS:

• 1:24,000 scale, also called 7.5-minute DEM, is usually referenced to the UTM
coordinate system. It has a spatial resolution of 30 × 30 m.

• 1:250,000 scale available only in arc/second format.

Both types have a 16-bit range of elevation values, meaning each pixel can have a
possible elevation of -32,768 to 32,767.

DEM data are stored in ASCII format. The data file values in ASCII format are stored as
ASCII characters rather than as zeros and ones like the data file values in binary data.

DEM data files from USGS are initially oriented so that North is on the right side of the
image instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as
part of the Import process so that coordinates read with any IMAGINE program will be
correct.
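
In array terms, that reorientation is a 90° counterclockwise rotation, as the following minimal NumPy sketch shows; the small array stands in for real DEM values.

import numpy as np

# Stand-in for DEM values as delivered, with North along the right-hand edge
dem_as_delivered = np.arange(12, dtype=np.int16).reshape(3, 4)

# Rotate 90 degrees counterclockwise so that North is at the top
dem_north_up = np.rot90(dem_as_delivered)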


DTED
DTED data are produced by the Defense Mapping Agency (DMA) and are available
only to US government agencies and their contractors. DTED data are distributed on 9-
track tapes and on CD-ROM.

There are two types of DTED data available:

• DTED 1—a 1° × 1° area of coverage

• DTED 2—a 1° × 1° or less area of coverage

Both are in arc/second format and are distributed in cells. A cell is a 1° × 1° area of
coverage. Both have a 16-bit range of elevation values.

Like DEMs, DTED data files are also oriented so that North is on the right side of the
image instead of at the top. IMAGINE rotates the data 90° counterclockwise as part of
the Import process so that coordinates read with any ERDAS IMAGINE program will
be correct.

Using Topographic Data
Topographic data have many uses in a GIS. For example, topographic data can be used
in conjunction with other data to:

• calculate the shortest and most navigable path over a mountain range

• assess the visibility from various lookout points or along roads

• simulate travel through a landscape

• determine rates of snow melt

• orthocorrect satellite or airborne images

• create aspect and slope layers

• provide ancillary data for image classification

See "CHAPTER 9: Terrain Analysis" for more information about using topographic and
elevation data.

Ordering Raster Data
Table 8 describes the different Landsat, SPOT, AVHRR, and DEM products that can be ordered. Information in this chart does not reflect all the products that are available, but only the most common types that can be imported into ERDAS IMAGINE.

Table 8: Common Raster Data Products

Data Type                   Ground Covered     Pixel Size       # of Bands   Format                       Available Geocoded

Landsat TM Full Scene       185 × 170 km       28.5 m           7            Fast (BSQ)                   ✓
Landsat TM Quarter Scene    92.5 × 80 km       28.5 m           7            Fast (BSQ)                   ✓
Landsat MSS Full Scene      185 × 170 km       79 × 56 m        4            BSQ, BIL
SPOT                        60 × 60 km         10 m and 20 m    1-3          BIL                          ✓
NOAA AVHRR (LAC)            2700 × 2700 km     1.1 km           1-5          10-bit packed or unpacked
NOAA AVHRR (GAC)            4000 × 4000 km     4 km             1-5          10-bit packed or unpacked
USGS DEM 1:24,000           7.5’ × 7.5’        30 m             1            ASCII                        ✓ (UTM)
USGS DEM 1:250,000          1° × 1°            3” × 3”          1            ASCII


Addresses to Contact
For more information about these and related products, contact the agencies below:

• Landsat MSS, TM, and ETM data:


EOSAT
International Headquarters
4300 Forbes Blvd.
Lanham, MD 20706 USA
Telephone: 1-800-344-9933
Fax: 301-552-0507
Internet: www.eosat.com

• SPOT data:
SPOT Image Corporation
1897 Preston White Dr.
Reston, VA 22091-4368 USA
Telephone: 703-620-2200
Fax: 703-648-1813
Internet: www.spot.com

• NOAA AVHRR data:


Satellite Data Service Branch
NOAA/National Environment Satellite, Data, and Information Service
World Weather Building, Room 100
Washington, DC 20233 USA

• AVHRR Dundee Format


NERC Satellite Station
University of Dundee
Dundee, Scotland DD1 4HN

• Cartographic data, including maps, airphotos, space images, DEMs, planimetric data, and related information from federal, state, and private agencies:
National Cartographic Information Center
U.S. Geological Survey
507 National Center
Reston, VA 22092 USA

• ADRG data (available only to defense contractors):


DMA (Defense Mapping Agency)
ATTN: PMSC
Combat Support Center
Washington, DC 20315-0010 USA

• ADRI data (available only to defense contractors):


Rome Laboratory/IRRP
Image Products Branch
Griffiss AFB, NY 13440-5700 USA

• Landsat data:
EROS Data Center
Sioux Falls, SD 57198 USA

• ERS-1 radar data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-6413
Fax: 613-238-5425
Internet: www.rsi.ca

• JERS-1 (Fuyo 1) radar data:


National Space Development Agency of Japan (NASDA)
Earth Observation Program Office
Tokyo 105, Japan
Telephone: 81-3-5470-4254
Fax: 81-3-3432-3969

• SIR-A, B, C radar data:


Jet Propulsion Laboratories
California Institute of Technology
4800 Oak Grove Dr.
Pasadena, CA 91109-8099 USA
Telephone: 818-354-2386
Internet: www.jpl.nasa.gov

• RADARSAT data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-5424
Fax: 613-238-5425
Internet: www.rsi.ca

• Almaz radar data:


NPO Mashinostroenia
Scientific Engineering Center “Almaz”
Gagarin st., 33,
Moscow Region
143952, Reutov, Russia
Telephone: 7.095.307-9194
Fax: 7.095.302-2001
Email: [email protected]

• U.S. Government RADARSAT sales:


Joel Porter
Lockheed Martin Astronautics
M/S: DC4001
12999 Deer Creek Canyon Rd.
Littleton, CO 80127
Telephone: 303-977-3233
Fax: 303-971-9827
email: [email protected]


Raster Data from Other Software Vendors
ERDAS IMAGINE also enables the user to import data created by other software vendors. This way, if another type of digital data system is currently in use, or if data are received from another system, the data can easily be converted to the ERDAS IMAGINE file format for use in ERDAS IMAGINE. The Import function directly imports these raster data types from other software systems:

• ERDAS Ver. 7.X

• GRID

• Sun Raster

• TIFF

Other data types might be imported using the Generic import option.

Vector to Raster Conversion


Vector data can also be a source of raster data by converting it to raster format.

Convert a vector layer to a raster layer, or vice versa, by using ERDAS IMAGINE Vector.

ERDAS Ver. 7.X
The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE software. The two
basic types of ERDAS Ver. 7.X data files are indicated by the file name extensions:

• .LAN — a multiband continuous image file (the name is derived from the Landsat
satellite)

• .GIS — a single-band thematic data file in which pixels are divided into discrete
categories (the name is derived from geographic information system)

.LAN and .GIS image files are stored in the same format. The image data are arranged
in a BIL format and can be 4-bit, 8-bit, or 16-bit. The ERDAS Ver. 7.X file structure
includes:

• a header record at the beginning of the file

• the data file values

• a statistics or trailer file

When you import a .GIS file, it becomes an .img file with one thematic raster layer. When you
import a .LAN file, each band becomes a continuous raster layer within the .img file.

GRID
GRID is a raster geoprocessing program distributed by Environmental Systems
Research Institute, Inc. (Redlands, California). GRID is a spatial analysis and modeling
language that enables the user to perform per-cell, per-neighborhood, per-zone, and
per-layer analyses. It was designed to function as a complement to the vector data
model system, ARC/INFO, a well-known vector GIS which is also distributed by ESRI.

GRID files are in a compressed tiled raster data structure. The name is taken from the
raster data format of presenting information in a grid of cells.

Sun Raster
A Sun raster file is an image captured from a monitor display. In addition to GIS, Sun
Raster files can be used in desktop publishing applications or any application where a
screen capture would be useful.

There are two basic ways to create a Sun raster file on a Sun workstation:

• use the OpenWindows Snapshot application

• use the UNIX screendump command

Both methods read the contents of a frame buffer and write the display data to a user-
specified file. Depending on the display hardware and options chosen, screendump can
create any of the file types listed in Table 9.

Table 9: File Types Created by Screendump

File Type                           Available Compression

1-bit black and white               None, RLE (run-length encoded)
8-bit color paletted (256 colors)   None, RLE
24-bit RGB true color               None, RLE
32-bit RGB true color               None, RLE

The data are stored in BIP format.

TIFF
The Tagged Image File Format (TIFF) was developed by Aldus Corp. (Seattle,
Washington) in 1986 in conjunction with major scanner vendors who needed an easily
portable file format for raster image data. Today, the TIFF format is a widely supported
format used in video, fax transmission, medical imaging, satellite imaging, document
storage and retrieval, and desktop publishing applications. In addition, the GEOTIFF
extensions permit TIFF files to be geocoded.

The TIFF format’s main appeal is its flexibility. It handles black and white line images,
as well as gray scale and color images, which can be easily transported between
different operating systems and computers.

TIFF File Formats


TIFF’s great flexibility can also cause occasional problems in compatibility. This is
because TIFF is really a family of file formats that are comprised of a variety of elements
within the format.

Table 10 shows the most common TIFF format elements. The elements supported in
ERDAS IMAGINE are checked.


Any TIFF format that contains an unsupported element may not be compatible with ERDAS
IMAGINE.

NOTE: The checked items in Table 10 are supported in IMAGINE.

Table 10: The Most Common TIFF Format Elements

Element              Option                                   Supported in IMAGINE

Byte Order           Intel (LSB/MSB)                          ✓
                     Motorola (MSB/LSB)                       ✓
Image Type           Black and white                          ✓
                     Gray scale                               ✓
                     Inverted gray scale                      ✓
                     Color palette                            ✓
                     RGB (3-band)                             ✓
Configuration        BIP                                      ✓
                     BSQ
Bits Per Plane**     1*, 2*, 4, 8                             ✓
                     3, 5, 6, 7
Compression***       None                                     ✓
                     CCITT G3 (B&W only)                      ✓
                     CCITT G4 (B&W only)                      ✓
                     Packbits                                 ✓
                     LZW****
                     LZW with horizontal differencing****

*Must be imported and exported as 4-bit data.

**All bands must contain the same number of bits (i.e., 4,4,4 or 8,8,8). Multi-band data assigned to different
bits cannot be imported into IMAGINE.

***Compression supported on import only.

****LZW is governed by patents and is not supported by the basic version of IMAGINE.

Vector Data from Other Software Vendors
It is possible to directly import several common vector formats into ERDAS IMAGINE. These files become vector layers when imported. These data can then be used for analyses and, in most cases, exported back to their original format (if desired).

Although data can be converted from one type to another by importing a file into
IMAGINE and then exporting the IMAGINE file into another format, the import and
export routines were designed to work together. For example, if a user has information
in AutoCAD that they would like to use in the GIS, they can import a DXF file into
ERDAS IMAGINE, do the analysis, and then export the data back to DXF format.

In most cases, attribute data are also imported into ERDAS IMAGINE. Each section
below lists the types of attribute data that are imported.

Use Import/Export to import vector data from other software vendors into ERDAS IMAGINE
vector layers. These routines are based on ARC/INFO data conversion routines.

See "CHAPTER 2: Vector Layers" for more information on ERDAS IMAGINE vector layers.
See"CHAPTER 10: Geographic Information Systems" for more information about using vector
data in a GIS.

ARCGEN
ARCGEN files are ASCII files created with the ARC/INFO UNGENERATE command.
The import ARCGEN program is used to import features to a new layer. Topology is
not created or maintained, therefore the coverage must be built or cleaned after it is
imported into ERDAS IMAGINE.

ARCGEN files must be properly prepared before they are imported into ERDAS IMAGINE. If
there is a syntax error in the data file, the import process may not work. If this happens, you must
kill the process, correct the data file, and then try importing again.

See the ARC/INFO documentation for more information about these files.

AutoCAD (DXF)
AutoCAD is a vector software package distributed by Autodesk, Inc. (Sausalito,
California). AutoCAD is a computer-aided design program that enables the user to
draw two-and three-dimensional models. This software is frequently used in archi-
tecture, engineering, urban planning, and many other applications.

The AutoCAD Drawing Interchange File (DXF) is the standard interchange format used
by most CAD systems. The AutoCAD program DXFOUT will create a DXF file that can
be converted to an ERDAS IMAGINE vector layer. AutoCAD files can also be output to
IGES format using the AutoCAD program IGESOUT.

See "IGES" on page 94 for more information about IGES files.


DXF files can be converted in the ASCII or binary format. The binary format is an
optional format for AutoCAD Releases 10 and 11. It is structured just like the ASCII
format, only the data are in binary format.

DXF files are composed of a series of related layers. Each layer contains one or more
drawing elements or entities. An entity is a drawing element that can be placed into an
AutoCAD drawing with a single command. When converted to an ERDAS IMAGINE
vector layer, each entity becomes a single feature. Table 11 describes how various DXF
entities are converted to IMAGINE.

Table 11: Conversion of DXF Entities

DXF Entity              IMAGINE Feature   Comments
Line, 3DLine            Line              These entities become two point lines. The initial Z value of 3D entities is stored.
Trace, Solid, 3DFace    Line              These entities become four or five point lines. The initial Z value of 3D entities is stored.
Circle, Arc             Line              These entities form lines. Circles are composed of 361 points—one vertex for each degree. The first and last points are at the same location.
Polyline                Line              These entities can be grouped to form a single line having many vertices.
Point, Shape            Point             These entities become point features in a layer.

The ERDAS IMAGINE import process also imports line and point attribute data (if they
exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and
XCODE (point attributes) files. If an imported DXF file is exported back to DXF format,
this information will also be exported.

Refer to an AutoCAD manual for more information about the format of DXF files.

DLG
Digital Line Graphs (DLG) are furnished by the U.S. Geological Survey and provide
planimetric base map information, such as transportation, hydrography, contours, and
public land survey boundaries. DLG files are available for the following USGS map
series:

• 7.5- and 15-minute topographic quadrangles

• 1:100,000-scale quadrangles

• 1:2,000,000-scale national atlas maps

DLGs are topological files that contain nodes, lines, and areas (similar to the points,
lines, and polygons in an ERDAS IMAGINE vector layer). DLGs also store attribute
information in the form of major and minor code pairs. Code pairs are encoded in two
integer fields, each containing six digits. The major code describes the class of the
feature (road, stream, etc.) and the minor code stores more specific information about
the feature.

DLGs can be imported in standard format (144 bytes per record) and optional format
(80 bytes per record). The user can export to DLG-3 optional format. Most DLGs are in
the Universal Transverse Mercator map projection. However, the 1:2,000,000 scale
series are in geographic coordinates.

The ERDAS IMAGINE import process also imports point, line, and polygon attribute
data (if they exist) and creates an INFO directory with the appropriate ACODE (arc
attributes), PCODE (polygon attributes) and XCODE (point attributes) files. If an
imported DLG file is exported back to DLG format, this information will also be
exported.

To maintain the topology of a vector layer created from a DLG file, you must Build or Clean it.
See “CHAPTER 10: Geographic Information Systems” for information on this process.


ETAK
ETAK’s MapBase is an ASCII digital street centerline map product available from
ETAK, Inc. (Menlo Park, California). ETAK files are similar in content to the Dual
Independent Map Encoding (DIME) format used by the U.S. Census Bureau. Each
record represents a single linear feature with address and political, census, and ZIP
code boundary information. ETAK has also included road class designations and, in
some areas, major landmark features.

There are four possible types of ETAK features:

• DIME or D types — if the feature type is D, a line is created along with a


corresponding ACODE (arc attribute) record. The coordinates are stored in
Lat/Lon decimal degrees.

• Alternate address or A types — each record contains an alternate address record for
a line. These records are written to the attribute file, and are useful for building
address coverages.

• Shape features or S types — shape records are used to add vertices to the lines. The
coordinates for these features are in Lat/Lon decimal degrees.

• Landmark or L types — if the feature type is L and the user opts to output a
landmark layer, then a point feature is created along with an associated PCODE
record.

ERDAS IMAGINE vector data cannot be exported to ETAK format.

IGES
Initial Graphics Exchange Standard (IGES) files are often used to transfer CAD data
between systems. IGES Version 3.0 format, published by the U.S. Department of
Commerce, is in uncompressed ASCII format only.

IGES files can be produced in AutoCAD using the IGESOUT command. The following
IGES entities can be converted:

Table 12: Conversion of IGES Entities

IGES Entity                                IMAGINE Feature

IGES Entity 100 (Circular Arc Entities)    Lines
IGES Entity 106 (Copious Data Entities)    Lines
IGES Entity 106 (Line Entities)            Lines
IGES Entity 116 (Point Entities)           Points

The ERDAS IMAGINE import process also imports line and point attribute data (if they
exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and
XCODE (point attributes) files. If an imported IGES file is exported back to IGES format,
this information will also be exported.


TIGER
Topologically Integrated Geographic Encoding and Referencing System (TIGER) files
are line network products of the U.S. Census Bureau. The Census Bureau is using the
TIGER system to create and maintain a digital cartographic database that covers the
United States, Puerto Rico, Guam, the Virgin Islands, American Samoa, and the Trust
Territories of the Pacific.

TIGER/Line is the line network product of the TIGER system. The cartographic base is
taken from Geographic Base File/Dual Independent Map Encoding (GBF/DIME),
where available, and from the USGS 1:100,000-scale national map series, SPOT imagery,
and a variety of other sources in all other areas, in order to have continuous coverage
for the entire United States. In addition to line segments, TIGER files contain census
geographic codes and, in metropolitan areas, address ranges for the left and right sides
of each segment. TIGER files are available in ASCII format on both CD-ROM and tape
media. All released versions after April 1989 are supported.

There is a great deal of attribute information provided with TIGER/Line files. Line and
point attribute information can be converted into ERDAS IMAGINE format. The
ERDAS IMAGINE import process creates an INFO directory with the appropriate
ACODE (arc attributes) and XCODE (point attributes) files. If an imported TIGER file
is exported back to TIGER format, this information will also be exported.

TIGER attributes include the following:

• Version numbers—TIGER/Line file version number

• Permanent record numbers—each line segment is assigned a permanent record


number that is maintained throughout all versions of TIGER/Line files

• Source codes—each line and landmark point feature is assigned a code to specify
the original source

• Census feature class codes—line segments representing physical features are coded
based on the USGS classification codes in DLG-3 files

• Street attributes—includes street address information for selected urban areas

• Legal and statistical area attributes—legal areas include states, counties, townships,
towns, incorporated cities, Indian reservations, and national parks. Statistical areas
are areas used during the census-taking, where legal areas are not adequate for
reporting statistics.

• Political boundaries—the election precincts or voting districts may contain a


variety of areas, including wards, legislative districts, and election districts.

• Landmarks—landmark area and point features include schools, military


installations, airports, hospitals, mountain peaks, campgrounds, rivers, and lakes

TIGER files for major metropolitan areas outside of the United States (e.g., Puerto Rico, Guam)
do not have address ranges.

Disk Space Requirements
TIGER/Line files are partitioned into counties ranging in size from less than a
megabyte to almost 120 megabytes. The average size is approximately 10 megabytes.
To determine the amount of disk space required to convert a set of TIGER/Line files,
use this rule: the size of the converted layers is approximately the same size as the files
used in the conversion. The amount of additional scratch space needed depends on the
largest file and whether it will need to be sorted. The amount usually required is about
double the size of the file being sorted.
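
Applied literally, that rule of thumb could be sketched as follows (illustrative only; the function name and file sizes are hypothetical):

    def tiger_conversion_space_mb(file_sizes_mb, needs_sort=True):
        """Rough disk budget for converting a set of TIGER/Line files, using the
        rule of thumb above: output is about the size of the input, plus scratch
        space of roughly twice the largest file if it must be sorted."""
        output = sum(file_sizes_mb)
        scratch = 2 * max(file_sizes_mb) if needs_sort else 0
        return output + scratch

    print(tiger_conversion_space_mb([10, 45, 120]))   # 175 + 240 = 415 MB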

The information presented in this section, "Vector Data from Other Software Vendors", was
obtained from the Data Conversion and the 6.0 ARC Command References manuals, both
published by ESRI, Inc., 1992.


CHAPTER 4
Image Display

Introduction
This section defines some important terms that are relevant to image display. Most of
the terminology and definitions used in this chapter are based on the X Window System
(Massachusetts Institute of Technology) terminology. This may differ from other
systems, such as Microsoft Windows NT.

A seat is a combination of an X-server and a host workstation.

• A host workstation consists of a CPU, keyboard, mouse, and a display.

• A display may consist of multiple screens. These screens work together, making it
possible to move the mouse from one screen to the next.

• The display hardware contains the memory that is used to produce the image. This
hardware determines which types of displays are available (e.g., true color or
pseudo color) and the pixel depth (e.g., 8-bit or 24-bit).

Figure 34: Example of One Seat with One Display and Two Screens

Display Memory Size
The size of memory varies for different displays. It is expressed in terms of:

• display resolution, which is expressed as the horizontal and vertical dimensions of


memory—the number of pixels that can be viewed on the display screen. Some
typical display resolutions are 1152 × 900, 1280 × 1024, and 1024 × 780. For the PC,
typical resolutions are 640 × 480, 800 × 600,
1024 × 768, and 1280 × 1024.

• the number of bits for each pixel or pixel depth, as explained below.

Bits for Image Plane
A bit is a binary digit, meaning a number that can have two possible values—0 and 1,
or “off” and “on.” A set of bits, however, can have many more values, depending upon
the number of bits used. The number of values that can be expressed by a set of bits is
2 to the power of the number of bits used. For example, the number of values that can
be expressed by 3 bits is 8 (2³ = 8).

Displays are referred to in terms of a number of bits, such as 8-bit or 24-bit. These bits
are used to determine the number of possible brightness values. For example, in a 24-
bit display, 24 bits per pixel breaks down to eight bits for each of the three color guns
per pixel. The number of possible values that can be expressed by eight bits is 2⁸, or 256.
Therefore, on a 24-bit display, each color gun of a pixel can have any one of 256 possible
brightness values, expressed by the range of values 0 to 255.

The combination of the three color guns, each with 256 possible brightness values,
yields 256³ (or 2²⁴ for the 24-bit image display), or 16,777,216 possible colors for each pixel on a 24-bit display. If the display being used is not 24-bit, the example above can be used to calculate the number of possible brightness values and colors that can be displayed.
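
The arithmetic can be sketched in a few lines of Python (purely illustrative):

    # Number of gray levels or colors expressible with a given pixel depth.
    def levels(bits):
        return 2 ** bits

    print(levels(3))            # 8 values per pixel
    print(levels(8))            # 256 brightness values per color gun
    print(levels(8) ** 3)       # 16,777,216 colors on a 24-bit display
    print(levels(24))           # the same figure computed directly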

Pixel
The term pixel is abbreviated from picture element. As an element, a pixel is the
smallest part of a digital picture (image). Raster image data are divided by a grid, in
which each cell of the grid is represented by a pixel. A pixel is also called a grid cell.

Pixel is a broad term that is used for both:

• the data file value(s) for one data unit in an image (file pixels), or

• one grid location on a display or printout (display pixels).

Usually, one pixel in a file corresponds to one pixel in a display or printout. However,
an image can be magnified or reduced so that one file pixel no longer corresponds to
one pixel in the display or printout. For example, if an image is displayed with a magni-
fication factor of 2, then one file pixel will take up 4 (2 × 2) grid cells on the display
screen.

To display an image, a file pixel that consists of one or more numbers must be trans-
formed into a display pixel with properties that can be seen, such as brightness and
color. Whereas the file pixel has values that are relevant to data (such as wavelength of
reflected light), the displayed pixel must have a particular color or gray level that repre-
sents these data file values.

Colors
Human perception of color comes from the relative amounts of red, green, and blue
light that are measured by the cones (sensors) in the eye. Red, green, and blue light can
be added together to produce a wide variety of colors—a wider variety than can be
formed from the combinations of any three other colors. Red, green, and blue are
therefore the additive primary colors.

A nearly infinite number of shades can be produced when red, green, and blue light are
combined. On a display, different colors (combinations of red, green, and blue) allow
the user to perceive changes across an image. Color displays that are available today
yield 2²⁴, or 16,777,216 colors. Each color has a possible 256 different values (2⁸).


Color Guns
On a display, color guns direct electron beams that fall on red, green, and blue
phosphors. The phosphors glow at certain frequencies to produce different colors.
Color monitors are often called RGB monitors, referring to the primary colors.

The red, green, and blue phosphors on the picture tube appear as tiny colored dots on
the display screen. The human eye integrates these dots together, and combinations of
red, green, and blue are perceived. Each pixel is represented by an equal number of red,
green, and blue phosphors.

Brightness Values
Brightness values (or intensity values) are the quantities of each primary color to be
output to each displayed pixel. When an image is displayed, brightness values are
calculated for all three color guns, for every pixel.

All of the colors that can be output to a display can be expressed with three brightness
values—one for each color gun.

Colormap and Colorcells
A color on the screen is created by a combination of red, green, and blue values, where
each of these components is represented as an 8-bit value. Therefore, 24 bits are needed
to represent a color. Since many systems have only an 8-bit display, a colormap is used
to translate the 8-bit value into a color. A colormap is an ordered set of colorcells, which
is used to perform a function on a set of input values. To display or print an image, the
colormap translates data file values in memory into brightness values for each color
gun. Colormaps are not limited to 8-bit displays.

Colormap vs. Lookup Table


The colormap is a function of the display hardware, whereas a lookup table is a
function of ERDAS IMAGINE. When a contrast adjustment is performed on an image
in IMAGINE, lookup tables are used. However, if the auto-update function is being
used to view the adjustments in near real-time, then the colormap is being used to map
the image through the lookup table. This process allows the colors on the screen to be
updated in near real-time. This chapter explains how the colormap is used to display
imagery.

Colorcells
There is a colorcell in the colormap for each data file value. The red, green, and blue
values assigned to the colorcell control the brightness of the color guns for the displayed
pixel (Nye 1990). The number of colorcells in a colormap is determined by the number
of bits in the display (e.g., 8-bit, 24-bit).

For example, if a pixel with a data file value of 40 was assigned a display value (colorcell
value) of 24, then this pixel would use the brightness values for the 24th colorcell in the
colormap. In the colormap below (Table 13), this pixel would be displayed as blue.

Table 13: Colorcell Example

Colorcell Index    Red    Green    Blue
1                  255    0        0
2                  0      170      90
3                  0      0        255
24                 0      0        255
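
A toy Python sketch of this two-step translation (the lookup table and colormap values are hypothetical, not the actual IMAGINE data structures) might look like this:

    # A toy colormap: colorcell index -> (red, green, blue) brightness values,
    # mirroring Table 13. Indices and colors are illustrative only.
    colormap = {
        1: (255, 0, 0),
        2: (0, 170, 90),
        3: (0, 0, 255),
        24: (0, 0, 255),
    }

    def display_pixel(data_file_value, lookup_table, colormap):
        """Translate a data file value to a colorcell value, then to RGB."""
        colorcell_value = lookup_table[data_file_value]
        return colormap[colorcell_value]

    # A pixel whose data file value 40 is assigned colorcell 24 displays as blue.
    lookup_table = {40: 24}
    print(display_pixel(40, lookup_table, colormap))   # (0, 0, 255)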

The colormap is controlled by the Windows system. There are 256 colorcells in a
colormap with an 8-bit display. This means that 256 colors can be displayed simulta-
neously on the display. With a 24-bit display, there are 256 colorcells for each color: red,
green, and blue. This offers 256 × 256 × 256 or 16,777,216 different colors.

When an application requests a color, the server will specify which colorcell contains
that color and will return the color. Colorcells can be read-only or read/write.

Read-Only Colorcells
The color assigned to a read-only colorcell can be shared by other application windows,
but it cannot be changed once it is set. To change the color of a pixel on the display, it
would not be possible to change the color for the corresponding colorcell. Instead, the
pixel value would have to be changed and the image redisplayed. For this reason, it is
not possible to use auto update operations in ERDAS IMAGINE with read-only
colorcells.

Read/Write Colorcells
The color assigned to a read/write colorcell can be changed, but it cannot be shared by
other application windows. An application can easily change the color of displayed
pixels by changing the color for the colorcell that corresponds to the pixel value. This
allows applications to use auto update operations. However, this colorcell cannot be
shared by other application windows, and all of the colorcells in the colormap could
quickly be utilized.

Changeable Colormaps
Some colormaps can have both read-only and read/write colorcells. This type of
colormap allows applications to utilize the type of colorcell that would be most
preferred.


Display Types
The possible range of different colors is determined by the display type. ERDAS
IMAGINE supports the following types of displays:

• 8-bit PseudoColor

• 15-bit HiColor (for Windows NT)

• 24-bit DirectColor

• 24-bit TrueColor

The above display types are explained in more detail below.

A display may offer more than one visual type and pixel depth. See “ERDAS IMAGINE 8.3
Installing and Configuring” for more information on specific display hardware.

32-bit Displays
A 32-bit display is a combination of an 8-bit PseudoColor and 24-bit DirectColor or
TrueColor display. Whether or not it is DirectColor or TrueColor depends on the
display hardware.

8-bit PseudoColor
An 8-bit PseudoColor display has a colormap with 256 colorcells. Each cell has a red,
green, and blue brightness value, giving 256 combinations of red, green, and blue. The
data file value for the pixel is transformed into a colorcell value. The brightness values
for the colorcell that is specified by this colorcell value are used to define the color to be
displayed.

Figure 35: Transforming Data File Values to a Colorcell Value

In Figure 35, data file values for a pixel of three continuous raster layers (bands) is trans-
formed to a colorcell value. Since the colorcell value is four, the pixel is displayed with
the brightness values of the fourth colorcell (blue).

This display grants a small number of colors to ERDAS IMAGINE. It works well with
thematic raster layers containing less than 200 colors and with gray scale continuous
raster layers. For image files with three continuous raster layers (bands), the colors will
be severely limited because, under ideal conditions, 256 colors are available on an 8-bit
display, while 8-bit, 3-band image files can contain over 16,000,000 different colors.

Auto Update
An 8-bit PseudoColor display has read-only and read/write colorcells, allowing
ERDAS IMAGINE to perform near real-time color modifications using Auto Update
and Auto Apply options.


24-bit DirectColor
A 24-bit DirectColor display enables the user to view up to three bands of data at one
time, creating displayed pixels that represent the relationships between the bands by
their colors. Since this is a 24-bit display, it offers up to 256 shades of red, 256 shades of
green, and 256 shades of blue, which is approximately 16 million different colors (256³).
The data file values for each band are transformed into colorcell values. The colorcell
that is specified by these values is used to define the color to be displayed.

Figure 36: Transforming Data File Values to a Colorcell Value

In Figure 36, data file values for a pixel of three continuous raster layers (bands) are
transformed to separate colorcell values for each band. Since the colorcell value is 1 for
the red band, 2 for the green band, and 6 for the blue band, the RGB brightness values
are 0, 90, 200. This displays the pixel as a blue-green color.

This type of display grants a very large number of colors to ERDAS IMAGINE and it
works well with all types of data.

Auto Update
A 24-bit DirectColor display has read-only and read/write colorcells, allowing ERDAS
IMAGINE to perform real-time color modifications using the Auto Update and Auto
Apply options.

24-bit TrueColor
A 24-bit TrueColor display enables the user to view up to three continuous raster layers
(bands) of data at one time, creating displayed pixels that represent the relationships
between the bands by their colors. The data file values for the pixels are transformed
into screen values and the colors are based on these values. Therefore, the color for the
pixel is calculated without querying the server and the colormap. The colormap for a
24-bit TrueColor display is not available for ERDAS IMAGINE applications. Once a
color is assigned to a screen value, it cannot be changed, but the color can be shared by
other applications.

The screen values are used as the brightness values for the red, green, and blue color
guns. Since this is a 24-bit display, it offers 256 shades of red, 256 shades of green, and
256 shades of blue, which is approximately 16 million different colors (256³).

Figure 37: Transforming Data File Values to Screen Values

In Figure 37, data file values for a pixel of three continuous raster layers (bands) are
transformed to separate screen values for each band. Since the screen value is 0 for the
red band, 90 for the green band and 200 for the blue band, the RGB brightness values
are 0, 90, and 200. This displays the pixel as a blue-green color.

Auto Update
The 24-bit TrueColor display does not use the colormap in ERDAS IMAGINE, and thus
does not provide IMAGINE with any real-time color changing capability. Each time a
color is changed, the screen values must be calculated and the image must be re-drawn.

Color Quality
The 24-bit TrueColor visual provides the best color quality possible with standard
equipment. There is no color degradation under any circumstances with this display.


PC Displays
ERDAS IMAGINE for Microsoft Windows NT supports the following visual types and
pixel depths:

• 8-bit PseudoColor

• 15-bit HiColor

• 24-bit TrueColor

8-bit PseudoColor
An 8-bit PseudoColor display for the PC uses the same type of colormap as the X
Windows 8-bit PseudoColor display, except that each colorcell has a range of 0 to 63 on
most video display adapters, instead of 0 to 255. Therefore, each colorcell has a red,
green, and blue brightness value, giving 64 different combinations of red, green, and
blue. The colormap, however, is the same as the X Windows 8-bit PseudoColor display.
It has 256 colorcells allowing 256 different colors to be displayed simultaneously.

15-bit HiColor
A 15-bit HiColor display for the PC assigns colors the same way as the X Windows 24-
bit TrueColor display, except that it offers 32 shades of red, 32 shades of green, and 32
shades of blue, for a total of 32,768 possible color combinations. Some video display
adapters allocate 6 bits to the green color gun, allowing 64 thousand colors. These
adapters use a 16-bit color scheme.

24-bit TrueColor
A 24-bit TrueColor display for the PC assigns colors the same way as the X Windows
24-bit TrueColor display.

Displaying Raster Layers
Image files (.img) are raster files in the IMAGINE format. There are two types of raster layers:

• continuous

• thematic

Thematic raster layers require a different display process than continuous raster layers.
This section explains how each raster layer type is displayed.

Continuous Raster Layers
An image file (.img) can contain several continuous raster layers, and therefore, each pixel can have multiple data file values. When displaying an image file with continuous
raster layers, it is possible to assign which layers (bands) are to be displayed with each
of the three color guns. The data file values in each layer are input to the assigned color
gun. The most useful color assignments are those that allow for an easy interpretation
of the displayed image. For example:

• a natural-color image will approximate the colors that would appear to a human
observer of the scene.

• a color-infrared image shows the scene as it would appear on color-infrared film,


which is familiar to many analysts.

Band assignments are often expressed in R,G,B order. For example, the assignment 4,2,1
means that band 4 is assigned to red, band 2 to green, and band 1 to blue. Below are
some widely used band to color gun assignments (Faust 1989):

• Landsat TM - natural color: 3,2,1


This is natural color because band 3 = red and is assigned to the red color gun, band
2 = green and is assigned to the green color gun, and band 1 is blue and is assigned
to the blue color gun.

• Landsat TM - color-infrared: 4,3,2


This is infrared because band 4 = infrared.

• SPOT Multispectral - color-infrared: 3,2,1


This is infrared because band 3 = infrared.
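
A minimal sketch of stacking bands according to such an assignment (using NumPy and fabricated band arrays; this is not IMAGINE code, and the function name is hypothetical) could be:

    import numpy as np

    def compose_rgb(image_bands, assignment=(4, 2, 1)):
        """Stack the bands assigned to the red, green, and blue color guns.

        image_bands: dict mapping band number -> 2-D array of data file values.
        assignment:  (red_band, green_band, blue_band), e.g. 3,2,1 for Landsat TM
                     natural color or 4,3,2 for color-infrared.
        """
        r, g, b = (image_bands[band] for band in assignment)
        return np.dstack([r, g, b])

    # Fabricated 2 x 2 bands just to show the mechanics.
    bands = {band: np.full((2, 2), band * 10) for band in (1, 2, 3, 4)}
    print(compose_rgb(bands, assignment=(3, 2, 1)).shape)   # (2, 2, 3)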

Contrast Table
When an image is displayed, ERDAS IMAGINE automatically creates a contrast table
for continuous raster layers. The red, green, and blue brightness values for each band
are stored in this table.

Since the data file values in continuous raster layers are quantitative and related, the
brightness values in the colormap are also quantitative and related. The screen pixels
represent the relationships between the values of the file pixels by their colors. For
example, a screen pixel that is bright red has a high brightness value in the red color
gun, and a high data file value in the layer assigned to red, relative to other data file
values in that layer.


The brightness values often differ from the data file values, but they usually remain in
the same order of lowest to highest. Some meaningful relationships between the values
are usually maintained.

Contrast Stretch
Different displays have different ranges of possible brightness values. The range of
most displays is 0 to 255 for each color gun.

Since the data file values in a continuous raster layer often represent raw data (such as
elevation or an amount of reflected light), the range of data file values is often not the
same as the range of brightness values of the display. Therefore, a contrast stretch is
usually performed, which stretches the range of the values to fit the range of the
display.

For example, Figure 38 shows a layer that has data file values from 30 to 40. When these
values are used as brightness values, the contrast of the displayed image is poor. A
contrast stretch simply “stretches” the range between the lower and higher data file
values, so that the contrast of the displayed image is higher—that is, lower data file
values are displayed with the lowest brightness values, and higher data file values are
displayed with the highest brightness values.

The colormap stretches the range of colorcell values from 30 to 40 to the range 0 to 255.
Since the output values are incremented at regular intervals, this stretch is a linear
contrast stretch. (The numbers in Figure 38 are approximations and do not show an
exact linear relationship.)
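
A minimal linear stretch of the 30-40 example above could be sketched in Python as follows (illustrative only; the exact rounding behavior is an assumption):

    import numpy as np

    def linear_stretch(values, in_min=30, in_max=40, out_min=0, out_max=255):
        """Linearly map the data file range [in_min, in_max] onto the display
        range [out_min, out_max], as in the 30-40 example above."""
        values = np.clip(values, in_min, in_max)
        scaled = (values - in_min) / (in_max - in_min)
        return np.round(out_min + scaled * (out_max - out_min)).astype(np.uint8)

    print(linear_stretch(np.array([30, 35, 40])))   # [  0 128 255]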

(Input colorcell values 30 through 40 map to output brightness values 0, 25, 51, 76, 102, 127, 153, 178, 204, 229, and 255.)
Figure 38: Contrast Stretch and Colorcell Values

See "CHAPTER 5: Enhancement" for more information about contrast stretching. Contrast
stretching is performed the same way for display purposes as it is for permanent image
enhancement.

A two standard deviation linear contrast stretch is applied to stretch the pixel values of all .img files to the range 0 to 255 before they are displayed in the Viewer, unless a saved contrast stretch exists (the file is not changed). This often improves the initial appearance of the data in the Viewer.

Statistics Files
To perform a contrast stretch, certain statistics are necessary, such as the mean and the
standard deviation of the data file values in each layer.

Use the Image Information utility to create and view statistics for a raster layer.

Usually, not all of the data file values are used in the contrast stretch calculations. The
minimum and maximum data file values of each band are often too extreme to produce
good results. When the minimum and maximum are extreme in relation to the rest of
the data, then the majority of data file values are not stretched across a very wide range,
and the displayed image has low contrast.
(Panels: Original Histogram; Standard Deviation Stretch, in which values stretched below 0 or above 255 are not displayed; Min/Max Stretch. Axes: stored or stretched data file values versus frequency.)
Figure 39: Stretching by Min/Max vs. Standard Deviation

The mean and standard deviation of the data file values for each band are used to locate
the majority of the data file values. The number of standard deviations above and below
the mean can be entered, which determines the range of data used in the stretch.
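
A sketch of such a standard-deviation-based stretch, assuming simple mean ± n·σ clipping followed by a linear mapping to 0-255 (not the exact IMAGINE algorithm), might be:

    import numpy as np

    def stddev_stretch(band, n_std=2.0):
        """Stretch using mean +/- n_std standard deviations of the band,
        a sketch of the two-standard-deviation default described above."""
        mean, std = band.mean(), band.std()
        low, high = mean - n_std * std, mean + n_std * std
        clipped = np.clip(band, low, high)
        return np.round((clipped - low) / (high - low) * 255).astype(np.uint8)

    band = np.random.default_rng(0).normal(120, 15, size=(100, 100))
    print(stddev_stretch(band).min(), stddev_stretch(band).max())   # 0 255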

See "APPENDIX A: Math Topics" for more information on mean and standard deviation.


Use the Contrast Tools dialog, which is accessible from the Lookup Table Modification dialog, to
enter the number of standard deviations to be used in the contrast stretch.

24-bit DirectColor and TrueColor Displays


Figure 40 illustrates the general process of displaying three continuous raster layers on
a 24-bit DirectColor display. The process would be similar on a TrueColor display
except that the colormap would not be used.

(The figure traces bands 3, 2, and 1 assigned to the red, green, and blue color guns: the histogram of each band, the range of data file values to be displayed, the colormap translation to output brightness values, the color guns, and the resulting color display.)
Figure 40: Continuous Raster Layer Display Process

8-bit PseudoColor Display
When displaying continuous raster layers on an 8-bit PseudoColor display, the data file
values from the red, green, and blue bands are combined and transformed to a colorcell
value in the colormap. This colorcell then provides the red, green, and blue brightness
values. Since there are only 256 colors available, a continuous raster layer looks
different when it is displayed in an 8-bit display than a 24-bit display that offers 16
million different colors. However, the ERDAS IMAGINE Viewer performs dithering
with the available colors in the colormap to let a smaller set of colors appear to be a
larger set of colors.

See"Dithering" on page 116 for more information.

Thematic Raster Layers
A thematic raster layer generally contains pixels that have been classified, or put into
distinct categories. Each data file value is a class value, which is simply a number for a
particular category. A thematic raster layer is stored in an image (.img) file. Only one
data file value—the class value—is stored for each pixel.

Since these class values are not necessarily related, the gradations that are possible in
true color mode are not usually useful in pseudo color. The class system gives the
thematic layer a discrete look, in which each class can have its own color.

Color Table
When a thematic raster layer is displayed, ERDAS IMAGINE automatically creates a
color table. The red, green, and blue brightness values for each class are stored in this
table.

RGB Colors
Individual color schemes can be created by combining red, green, and blue in different
combinations, and assigning colors to the classes of a thematic layer.

Colors can be expressed numerically, as the brightness values for each color gun.
Brightness values of a display generally range from 0 to 255, however, IMAGINE trans-
lates the values from 0 to 1. The maximum brightness value for the display device is
scaled to 1. The colors listed in Table 14 are based on the range that would be used to
assign brightness values in ERDAS IMAGINE.

Table 14 contains only a partial listing of commonly used colors. Over 16 million colors
are possible on a 24-bit display.

Table 14: Commonly Used RGB Colors

Color Red Green Blue


Red 1 0 0
Red-Orange 1 .392 0
Orange .608 .588 0
Yellow 1 1 0
Yellow-Green .490 1 0

Green 0 1 0
Cyan 0 1 1
Blue 0 0 1
Blue-Violet .392 0 .471
Violet .588 0 .588
Black 0 0 0
White 1 1 1
Gray .498 .498 .498
Brown .373 .227 0

NOTE: Black is the absence of all color (0,0,0) and white is created from the highest values of all
three colors (1,1,1). To lighten a color, increase all three brightness values. To darken a color,
decrease all three brightness values.

Use the Raster Attribute Editor to create your own color scheme.

24-bit DirectColor and TrueColor Displays


Figure 41 illustrates the general process of displaying thematic raster layers on a 24-bit
DirectColor display. The process would be similar on a TrueColor display except that
the colormap would not be used.

Display a thematic raster layer from the ERDAS IMAGINE Viewer.

(The figure shows a 3 × 3 image of class values 1 through 5, a color scheme assigning red, orange, yellow, violet, and green to classes 1 through 5, the colormap translation of class values to brightness values for each color gun, and the resulting display.)
Figure 41: Thematic Raster Layer Display Process

8-bit PseudoColor Display


The colormap is a limited resource which is shared among all of the applications that
are running concurrently. Due to the limited resources, ERDAS IMAGINE does not
typically have access to the entire colormap.


Using the IMAGINE Viewer
The ERDAS IMAGINE Viewer is a window for displaying raster, vector, and annotation layers. The user can open as many Viewer windows as their window manager supports.

NOTE: The more Viewers that are opened simultaneously, the more RAM memory is necessary.

The ERDAS IMAGINE Viewer not only makes digital images visible quickly, but it can
also be used as a tool for image processing and raster GIS modeling. The uses of the
Viewer are listed briefly in this section, and described in greater detail in other chapters
of the ERDAS Field Guide.

Colormap
ERDAS IMAGINE does not use the entire colormap because there are other applica-
tions that also need to use it, including the window manager, terminal windows, ARC
View, or a clock. Therefore, there are some limitations to the number of colors that the
Viewer can display simultaneously, and flickering may occur as well.

Color Flickering
If an application requests a new color that does not exist in the colormap, the server will
assign that color to an empty colorcell. However, if there are not any available colorcells
and the application requires a private colorcell, then a private colormap will be created
for the application window. Since this is a private colormap, when the cursor is moved
out of the window, the server will use the main colormap and the brightness values
assigned to the colorcells. Therefore, the colors in the private colormap will not be
applied and the screen will flicker. Once the cursor is moved into the application
window, the correct colors will be applied for that window.

Resampling
When a raster layer(s) is displayed, the file pixels may be resampled for display on the
screen. Resampling is used to calculate pixel values when one raster grid must be fitted
to another. In this case, the raster grid defined by the file must be fit to the grid of screen
pixels in the Viewer.

All Viewer operations are file-based. So, any time an image is resampled in the Viewer,
the Viewer uses the file as its source. If the raster layer is magnified or reduced, the
Viewer re-fits the file grid to the new screen grid.

The resampling methods available are:

• Nearest Neighbor - uses the value of the closest pixel to assign to the output pixel
value.

• Bilinear Interpolation - uses the data file values of four pixels in a 2 × 2 window to
calculate an output value with a bilinear function.

• Cubic Convolution - uses the data file values of 16 pixels in a 4 × 4 window to


calculate an output value with a cubic function.

These are discussed in detail in "CHAPTER 8: Rectification".

The default resampling method is Nearest Neighbor.

Preference Editor
The ERDAS IMAGINE Preference Editor enables the user to set parameters for the
ERDAS IMAGINE Viewer that affect the way the Viewer operates.

See the ERDAS IMAGINE On-Line Help for the Preference Editor for information on how to
set preferences for the Viewer.

Pyramid Layers
Sometimes a large .img file may take a long time to display in the ERDAS IMAGINE
Viewer or to be resampled by an application. The Pyramid Layer option enables the
user to display large images faster and allows certain applications to rapidly access the
resampled data. Pyramid layers are image layers which are copies of the original layer
successively reduced by the power of 2 and then resampled. If the raster layer is
thematic, then it is resampled using the Nearest Neighbor method. If the raster layer is
continuous, it is resampled by a method that is similar to cubic convolution. The data
file values for sixteen pixels in a 4 × 4 window are used to calculate an output data file
value with a filter function.

See "CHAPTER 8: Rectification" for more information on Nearest Neighbor.

The number of pyramid layers created depends on the size of the original image. A
larger image will produce more pyramid layers. When the Create Pyramid Layer
option is selected, ERDAS IMAGINE automatically creates successively reduced layers
until the final pyramid layer can be contained in one block. The default block size is 64
× 64 pixels.

See "CHAPTER 1: Raster Data" for information on block size.

Pyramid layers are added as additional layers in the .img file. However, these layers
cannot be accessed for display. The file size is increased by approximately one-third
when pyramid layers are created. The actual increase in file size can be determined by
multiplying the layer size by this formula:

Σ (1 / 4^i), summed from i = 0 to n

where:

n = number of pyramid layers
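
The following Python sketch (illustrative only, not IMAGINE code) evaluates that sum and lists the successively halved layer sizes for a hypothetical 4K × 4K image. Note that the i = 0 term represents the full-resolution layer itself; the remaining terms total roughly one-third, which matches the file size increase described above.

    def pyramid_sum(n):
        """Sum of 1/4**i for i = 0..n, as in the formula above.  The i = 0 term
        is the full-resolution layer; the remaining terms (about 1/3 in total)
        correspond to the added pyramid layers."""
        return sum(1.0 / 4 ** i for i in range(n + 1))

    def pyramid_shapes(rows, cols, block=64):
        """Successively halve the image until a layer fits in one block."""
        shapes = []
        while rows > block or cols > block:
            rows, cols = max(1, rows // 2), max(1, cols // 2)
            shapes.append((rows, cols))
        return shapes

    print(pyramid_shapes(4096, 4096))   # 2048, 1024, 512, 256, 128, 64
    print(pyramid_sum(6))               # about 1.333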


Pyramid layers do not appear as layers which can be processed; they are for viewing
purposes only. Therefore, they will not appear as layers in other parts of the ERDAS
IMAGINE system (e.g., the Arrange Layers dialog).

Pyramid layers can be deleted through the Image Information utility. However, when pyramid
layers are deleted, they will not be deleted from the .img file - so the .img file size will not change,
but ERDAS IMAGINE will utilize this file space, if necessary. Pyramid layers are deleted from
viewing and resampling access only - that is, they can no longer be viewed or used in an appli-
cation.

(The figure shows a 4K × 4K original image in the .img file with successively reduced pyramid layers down to 64 × 64 pixels; IMAGINE selects the pyramid layer that will display the fastest in the Viewer window.)
Figure 42: Pyramid Layers

For example, a file which is 4K × 4K pixels could take a long time to display when the
image is fit to the Viewer. The Pyramid Layer option creates additional layers succes-
sively reduced from 4K × 4K, to 2K × 2K, 1K × 1K, 512 × 512, 256 × 256, 128 × 128, down to 64 × 64.
ERDAS IMAGINE then selects the pyramid layer size most appropriate for display in
the Viewer window when the image is displayed.

The pyramid layer option is available from Import and the Image Information utility.

For more information about the .img format, see "CHAPTER 1: Raster Data" and "APPENDIX
B: File Formats and Extensions".

Dithering
A display is capable of viewing only a limited number of colors simultaneously. For
example, an 8-bit display has a colormap with 256 colorcells, therefore, a maximum of
256 colors can be displayed at the same time. If some colors are being used for auto
update color adjustment while other colors are still being used for other imagery, the
color quality will degrade.

Dithering lets a smaller set of colors appear to be a larger set of colors. If the desired
display color is not available, a dithering algorithm mixes available colors to provide
something that looks like the desired color.

For a simple example, assume the system can display only two colors, black and white,
and the user wants to display gray. This can be accomplished by alternating the display
of black and white pixels.

(Panels: black, gray, white.)
Figure 43: Example of Dithering

In Figure 43, dithering is used between a black pixel and a white pixel to obtain a gray
pixel.

The colors that the ERDAS IMAGINE Viewer will dither between will be similar to each
other, and will be dithered on the pixel level. Using similar colors and dithering on the
pixel level makes the image appear smooth.

Dithering allows multiple images to be displayed in different Viewers without refreshing the
currently displayed image(s) each time a new image is displayed.


Color Patches
When the Viewer performs dithering, it uses patches of 2 × 2 pixels. If the desired color
has an exact match, then all of the values in the patch will match it. If the desired color
is halfway between two of the usable colors, the patch will contain two pixels of each of
the surrounding usable colors. If it is 3/4 of the way between two usable colors, the
patch will contain 3 pixels of the color it is closest to and 1 pixel of the color that is
second closest. Figure 44 shows what the color patches would look like if the usable
colors were black and white and the desired color was gray.

(Patches shown: exact match, 25% away, 50% away, 75% away, next color.)
Figure 44: Example of Color Patches

If the desired color is not an even multiple of 1/4 of the way between two allowable
colors, it is rounded to the nearest 1/4. The Viewer separately dithers the red, green,
and blue components of a desired color.
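
A simplified single-channel sketch of building such a patch, following the rounding-to-quarters rule described above (this is not the Viewer's actual implementation, and the function name is hypothetical):

    def dither_patch(desired, color_a, color_b):
        """Build a 2 x 2 patch approximating `desired`, which lies between two
        distinct usable colors, by mixing whole pixels of each (one channel only)."""
        # Fraction of the way from color_a to color_b, rounded to quarters.
        fraction = (desired - color_a) / (color_b - color_a)
        count_b = round(fraction * 4)           # pixels drawn in color_b
        pixels = [color_b] * count_b + [color_a] * (4 - count_b)
        return [pixels[:2], pixels[2:]]

    # Gray (128) between black (0) and white (255): two pixels of each.
    print(dither_patch(128, 0, 255))

In practice the red, green, and blue components would each be dithered this way independently, as noted above.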

Color Artifacts
Since the Viewer requires 2 × 2 pixel patches to represent a color, and actual images
typically have a different color for each pixel, artifacts may appear in an image that has
been dithered. Usually, the difference in color resolution is insignificant, because
adjacent pixels are normally similar to each other. Similarity between adjacent pixels
usually smooths out artifacts that would appear.

Viewing Layers
The ERDAS IMAGINE Viewer displays layers as one of the following types of view
layers:

• annotation

• vector

• pseudo color

• gray scale

• true color

Annotation View Layer
When an annotation layer (xxx.ovr) is displayed in the Viewer, it is displayed as an
annotation view layer.

Vector View Layer


Vector layers are displayed in the Viewer as a vector view layer.

Pseudo Color View Layer


When a raster layer is displayed as a pseudo color layer in the Viewer, the colormap
uses the RGB brightness values for the one layer in the RGB table. This is most appro-
priate for thematic layers. If the layer is a continuous raster layer, the layer would
initially appear gray, since there are not any values in the RGB table.

Gray Scale View Layer


When a raster layer is displayed as a gray scale layer in the Viewer, the colormap uses
the brightness values in the contrast table for one layer. This layer is then displayed in
all three color guns, producing a gray scale image. A continuous raster layer may be
displayed as a gray scale view layer.

True Color View Layer


Continuous raster layers should be displayed as true color layers in the Viewer. The
colormap uses the RGB brightness values for three layers in the contrast table, one for
each color gun to display the set of layers.


Viewing Multiple Layers It is possible to view as many layers of all types (in the exception of vector layers, which
have a limit of 10) at one time in a single Viewer.

To overlay multiple layers in one Viewer, they must all be referenced to the same map
coordinate system. The layers are positioned geographically within the window, and
resampled to the same scale as previously displayed layers. Therefore, raster layers in
one Viewer can have different cell sizes.

When multiple layers are magnified or reduced, raster layers are resampled from the
file to fit to the new scale.

Display multiple layers from the Viewer. Be sure to turn off the Clear Display check box when
you open subsequent layers.

Overlapping Layers
When layers overlap, the order in which the layers are opened is very important. The
last layer that is opened will always appear to be “on top” of the previously opened
layers.

In a raster layer, it is possible to make values of zero transparent in the Viewer, meaning
that they have no opacity. Thus, if a raster layer with zeros is displayed over other
layers, the areas with zero values will allow the underlying layers to show through.

Opacity is a measure of how opaque, or solid, a color is displayed in a raster layer.


Opacity is a component of the color scheme of categorical data displayed in pseudo
color.

• 100% opacity means that a color is completely opaque, and cannot be seen through.

• 50% opacity lets some color show, and lets some of the underlying layers show
through. The effect is like looking at the underlying layers through a colored fog.

• 0% opacity allows underlying layers to show completely.

By manipulating opacity, you can compare two or more layers of raster data that are displayed
in a Viewer. Opacity can be set at any value in the range of 0% to 100%. Use the Arrange Layers
dialog to re-stack layers in a Viewer so that they overlap in a different order, if needed.
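
The effect of opacity can be approximated with simple alpha blending, as in the sketch below (an illustrative Python example, not the Viewer's display code).

    import numpy as np

    def blend(top, underlying, opacity_percent):
        # 100% opacity shows only the top layer; 0% shows only the underlying layers.
        alpha = opacity_percent / 100.0
        return alpha * top + (1.0 - alpha) * underlying

    top = np.full((2, 2), 200.0)        # displayed raster layer
    under = np.full((2, 2), 50.0)       # underlying layers
    print(blend(top, under, 50))        # 50% opacity: halfway between the two (125)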

Non-Overlapping Layers
Multiple layers that are opened in the same Viewer do not have to overlap. Layers that
cover distinct geographic areas can be opened in the same Viewer. The layers will be
automatically positioned in the Viewer window according to their map coordinates,
and will be positioned relative to one another geographically. The map coordinate
systems for the layers must be the same.

Linking Viewers
Linking Viewers is appropriate when two Viewers cover the same geographic area (at least partially) and are referenced to the same map units. When two Viewers are linked:

• either the same geographic point is displayed in the centers of both Viewers, or a
box shows where one view fits inside the other

• scrolling one Viewer affects the other

• the user can manipulate the zoom ratio of one Viewer from another

• any inquire cursors in one Viewer appear in the other, for multiple-Viewer pixel
inquiry

• the auto-zoom is enabled, if the Viewers have the same zoom ratio and nearly the
same window size

It is often helpful to display a wide view of a scene in one Viewer, and then a close-up
of a particular area in another Viewer. When two such Viewers are linked, a box opens
in the wide view window to show where the close-up view lies.

Any image that is displayed at a magnification (higher zoom ratio) of another image in
a linked Viewer is represented in the other Viewer by a box. If several Viewers are
linked together, there may be multiple boxes in that Viewer.

Figure 45 shows how one view fits inside the other linked Viewer. The link box shows
the extent of the larger-scale view.

Figure 45: Linked Viewers

Zoom and Roam
Zooming enlarges an image on the display. When an image is zoomed, it can be roamed (scrolled) so that the desired portion of the image appears on the display screen. Any image that does not fit entirely in the Viewer can be roamed and/or zoomed. Roaming and zooming have no effect on how the image is stored in the file.

The zoom ratio describes the size of the image on the screen in terms of the number of file pixels used to store the image. It is the ratio of the number of screen pixels in the X or Y dimension to the number of file pixels that they display.

A zoom ratio greater than 1 is a magnification, which makes the image features appear
larger in the Viewer. A zoom ratio less than 1 is a reduction, which makes the image
features appear smaller in the Viewer.

Table 15: Overview of Zoom Ratio

A zoom ratio of 1 means...    each file pixel is displayed with 1 screen pixel in the Viewer.

A zoom ratio of 2 means...    each file pixel is displayed with a block of 2 × 2 screen pixels. Effectively, the image is displayed at 200%.

A zoom ratio of 0.5 means...  each block of 2 × 2 file pixels is displayed with 1 screen pixel. Effectively, the image is displayed at 50%.

NOTE: ERDAS IMAGINE allows floating point zoom ratios, so that images can be zoomed at
virtually any scale (e.g., continuous fractional zoom). Resampling is necessary whenever an
image is displayed with a new pixel grid. The resampling method used when an image is zoomed
is the same one used when the image is displayed, as specified in the Open Raster Layer dialog.
The default resampling method is Nearest Neighbor.
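
The relationship between file pixels and screen pixels under nearest neighbor resampling can be sketched as follows (Python, for illustration only; this is not the Viewer's resampling code).

    import numpy as np

    def zoom_nearest(data, zoom_ratio):
        # A zoom ratio of 2 displays each file pixel as a 2 x 2 block of screen
        # pixels; a ratio of 0.5 displays every other file pixel.
        out_rows = int(data.shape[0] * zoom_ratio)
        out_cols = int(data.shape[1] * zoom_ratio)
        rows = (np.arange(out_rows) / zoom_ratio).astype(int)   # nearest file row
        cols = (np.arange(out_cols) / zoom_ratio).astype(int)   # nearest file column
        return data[np.ix_(rows, cols)]

    image = np.arange(16).reshape(4, 4)
    print(zoom_nearest(image, 2).shape)     # (8, 8)  - displayed at 200%
    print(zoom_nearest(image, 0.5).shape)   # (2, 2)  - displayed at 50%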

Zoom the data in the Viewer via the Viewer menu bar, the Viewer tool bar, or the Quick View
right-button menu.

Geographic Information
To prepare to run many programs, it may be necessary to determine the data file coordinates, map coordinates, or data file values for a particular pixel or a group of pixels. By displaying the image in the Viewer and then selecting the pixel(s) of interest, important information about the pixel(s) can be viewed.

The Quick View right-button menu gives you options to view information about a specific pixel.
Use the Raster Attribute Editor to access information about classes in a thematic layer.

See "CHAPTER 10: Geographic Information Systems" for information about attribute data.

Enhancing Continuous Raster Layers
Working with the brightness values in the colormap is useful for image enhancement. Often, a trial-and-error approach is needed to produce an image that has the right contrast and highlights the right features. By using the tools in the Viewer, it is possible to quickly view the effects of different enhancement techniques, undo enhancements that aren’t helpful, and then save the best results to disk.

Use the Raster options from the Viewer to enhance continuous raster layers.

See "CHAPTER 5: Enhancement" for more information on enhancing continuous raster layers.

Creating New Image Files
It is easy to create a new image file (.img) from the layer(s) displayed in the Viewer. The new .img file will contain three continuous raster layers (RGB), regardless of how many layers are currently displayed. The IMAGINE Image Info utility must be used to create statistics for the new .img file before the file is enhanced.

Annotation layers can be converted to raster format, and written to an .img file. Or,
vector data can be gridded into an image, overwriting the values of the pixels in the
image plane, and incorporated into the same band as the image.

Use the Viewer to .img function to create a new .img file from the currently displayed raster
layers.


CHAPTER 5
Enhancement

Introduction
Image enhancement is the process of making an image more interpretable for a particular application (Faust 1989). Enhancement makes important features of raw, remotely sensed data more interpretable to the human eye. Enhancement techniques are often used instead of classification techniques for feature extraction—studying and locating areas and objects on the ground and deriving useful information from images.

The techniques to be used in image enhancement depend upon:

• The user’s data — the different bands of Landsat, SPOT, and other imaging sensors
were selected to detect certain features. The user must know the parameters of the
bands being used before performing any enhancement. (See "CHAPTER 1: Raster
Data" for more details.)

• The user’s objective — for example, sharpening an image to identify features that
can be used for training samples will require a different set of enhancement
techniques than reducing the number of bands in the study. The user must have a
clear idea of the final product desired before enhancement is performed.

• The user’s expectations — what the user thinks he or she will find.

• The user’s background — the experience of the person performing the


enhancement.

This chapter will briefly discuss the following enhancement techniques available with
ERDAS IMAGINE:

• Data correction — radiometric and geometric correction

• Radiometric enhancement — enhancing images based on the values of individual


pixels

• Spatial enhancement — enhancing images based on the values of individual and


neighboring pixels

• Spectral enhancement — enhancing images by transforming the values of each


pixel on a multiband basis



• Hyperspectral image processing — an extension of the techniques used for multi-
spectral datasets

• Fourier analysis — techniques for eliminating periodic noise in imagery

• Radar imagery enhancement— techniques specifically designed for enhancing


radar imagery

See "Bibliography" on page 635 to find current literature which will provide a more detailed
discussion of image processing enhancement techniques.

Display vs. File Enhancement
With ERDAS IMAGINE, image enhancement may be performed:
• temporarily, upon the image that is displayed in the Viewer (by manipulating the
function and display memories), or

• permanently, upon the image data in the data file.

Enhancing a displayed image is much faster than enhancing an image on disk. If one is
looking for certain visual effects, it may be beneficial to perform some trial-and-error
enhancement techniques on the display. Then, when the desired results are obtained,
the values that are stored in the display device memory can be used to make the same
changes to the data file.

For more information about displayed images and the memory of the display device, see
"CHAPTER 4: Image Display".

Spatial Modeling Enhancements
Two types of models for enhancement can be created in ERDAS IMAGINE:
• Graphical models — use Model Maker (Spatial Modeler) to easily, and with great
flexibility, construct models which can be used to enhance the data.

• Script models — for even greater flexibility, use the Spatial Modeler Language to
construct models in script form. The Spatial Modeler Language (SML) enables the
user to write scripts which can be edited and run from the Spatial Modeler
component or directly from the command line. The user can edit models created
with Model Maker using the Spatial Modeler Language or Model Maker.

Although a graphical model and a script model look different, they will produce the
same results when applied.

Image Interpreter
ERDAS IMAGINE supplies many algorithms constructed as models, ready to be
applied with user-input parameters at the touch of a button. These graphical models,
created with Model Maker, are listed as menu functions in the Image Interpreter. These
functions are mentioned throughout this chapter. Just remember, these are modeling
functions which can be edited and adapted as needed with Model Maker or the Spatial
Modeler Language.


See "CHAPTER 10: Geographic Information Systems"for more information on Raster


Modeling.

The modeling functions available for enhancement in Image Interpreter are briefly
described in Table 16.

Table 16: Description of Modeling Functions Available for Enhancement

SPATIAL ENHANCEMENT: These functions enhance the image using the values of individual and surrounding pixels.

• Convolution: Uses a matrix to average small sets of pixels across an image.

• Non-directional Edge: Averages the results from two orthogonal 1st derivative edge detectors.

• Focal Analysis: Enables the user to perform one of several analyses on class values in an .img file using a process similar to convolution filtering.

• Texture: Defines texture as a quantitative characteristic in an image.

• Adaptive Filter: Varies the contrast stretch for each pixel depending upon the DN values in the surrounding moving window.

• Statistical Filter: Produces the pixel output DN by averaging pixels within a moving window that fall within a statistically defined range.

• Resolution Merge: Merges imagery of differing spatial resolutions.

• Crisp: Sharpens the overall scene luminance without distorting the thematic content of the image.

RADIOMETRIC ENHANCEMENT: These functions enhance the image using the values of individual pixels within each band.

• LUT (Lookup Table) Stretch: Creates an output image that contains the data values as modified by a lookup table.

• Histogram Equalization: Redistributes pixel values with a nonlinear contrast stretch so that there are approximately the same number of pixels with each value within a range.

• Histogram Match: Mathematically determines a lookup table that will convert the histogram of one image to resemble the histogram of another.

• Brightness Inversion: Allows both linear and nonlinear reversal of the image intensity range.

• Haze Reduction*: De-hazes Landsat 4 and 5 Thematic Mapper data and panchromatic data.

• Noise Reduction*: Removes noise using an adaptive filter.

• Destripe TM Data: Removes striping from a raw TM4 or TM5 data file.

SPECTRAL ENHANCEMENT: These functions enhance the image by transforming the values of each pixel on a multiband basis.

• Principal Components: Compresses redundant data values into fewer bands which are often more interpretable than the source data.

• Inverse Principal Components: Performs an inverse principal components analysis.

• Decorrelation Stretch: Applies a contrast stretch to the principal components of an image.

• Tasseled Cap: Rotates the data structure axes to optimize data viewing for vegetation studies.

• RGB to IHS: Transforms red, green, blue values to intensity, hue, saturation values.

• IHS to RGB: Transforms intensity, hue, saturation values to red, green, blue values.

• Indices: Performs band ratios that are commonly used in mineral and vegetation studies.

• Natural Color: Simulates natural color for TM data.

FOURIER ANALYSIS: These functions enhance the image by applying a Fourier Transform to the data. NOTE: These functions are currently view only—no manipulation is allowed.

• Fourier Transform*: Enables the user to utilize a highly efficient version of the Discrete Fourier Transform (DFT).

• Fourier Transform Editor*: Enables the user to edit Fourier images using many interactive tools and filters.

• Inverse Fourier Transform*: Computes the inverse two-dimensional Fast Fourier Transform of the spectrum stored.

• Fourier Magnitude*: Converts the Fourier Transform image into the more familiar Fourier Magnitude image.

• Periodic Noise Removal*: Automatically removes striping and other periodic noise from images.

• Homomorphic Filter*: Enhances imagery using an illumination/reflectance model.

* Indicates functions that are not graphical models.

NOTE: There are other Image Interpreter functions that do not necessarily apply to image
enhancement.

Correcting Data
Each generation of sensors shows improved data acquisition and image quality over previous generations. However, some anomalies still exist that are inherent to certain sensors and can be corrected by applying mathematical formulas derived from the distortions (Lillesand and Kiefer 1979). In addition, the natural distortion that results from the curvature and rotation of the earth in relation to the sensor platform produces distortions in the image data, which can also be corrected.

Radiometric Correction
Generally, there are two types of data correction: radiometric and geometric.
Radiometric correction addresses variations in the pixel intensities (digital numbers, or
DNs) that are not caused by the object or scene being scanned. These variations include:

• differing sensitivities or malfunctioning of the detectors

• topographic effects

• atmospheric effects

Geometric Correction
Geometric correction addresses errors in the relative positions of pixels. These errors
are induced by:

• sensor viewing geometry

• terrain variations

Because of the differences in radiometric and geometric correction between traditional, passively
detected visible/infrared imagery and actively acquired radar imagery, the two will be discussed
separately. See "Radar Imagery Enhancement" on page 191.

Radiometric Correction - Visible/Infrared Imagery
Striping
Striping or banding will occur if a detector goes out of adjustment—that is, it provides
readings consistently greater than or less than the other detectors for the same band
over the same ground cover.

Some Landsat 1, 2, and 3 data have striping every sixth line, due to improper calibration
of some of the 24 detectors that were used by the MSS. The stripes are not constant data
values, nor is there a constant error factor or bias. The differing response of the errant
detector is a complex function of the data value sensed.

This problem has been largely eliminated in the newer sensors. Various algorithms
have been advanced in current literature to help correct this problem in the older data.
Among these algorithms are simple along-line convolution, high-pass filtering, and
forward and reverse principal component transformations (Crippen 1989).



Data from airborne multi- or hyperspectral imaging scanners will also show a
pronounced striping pattern due to varying offsets in the multi-element detectors. This
effect can be further exacerbated by unfavorable sun angle. These artifacts can be
minimized by correcting each scan line to a scene-derived average (Kruse 1988).

Use the Image Interpreter or the Spatial Modeler to implement algorithms to eliminate
striping. The Spatial Modeler editing capabilities allow you to adapt the algorithms to best
address the data. The Radar Adjust Brightness function will also correct some of these problems.

Line Dropout
Another common remote sensing device error is line dropout. Line dropout occurs
when a detector either completely fails to function, or becomes temporarily saturated
during a scan (like the effect of a camera flash on the retina). The result is a line or partial
line of data with higher data file values, creating a horizontal streak until the detector(s)
recovers, if it recovers.

Line dropout is usually corrected by replacing the bad line with a line of estimated data
file values, based on the lines above and below it.

Atmospheric Effects
The effects of the atmosphere upon remotely-sensed data are not considered “errors,” since they are part of the signal received by the sensing device (Bernstein 1983). However, it is often important to remove atmospheric effects, especially for scene matching and change detection analysis.

Over the past 20 years a number of algorithms have been developed to correct for varia-
tions in atmospheric transmission. Four categories will be mentioned here:

• dark pixel subtraction

• radiance to reflectance conversion

• linear regressions

• atmospheric modeling

Use the Spatial Modeler to construct the algorithms for these operations.

Dark Pixel Subtraction


The dark pixel subtraction technique assumes that the pixel of lowest DN in each band
should really be zero and hence its radiometric value (DN) is the result of atmosphere-
induced additive errors. These assumptions are very tenuous and recent work indicates
that this method may actually degrade rather than improve the data (Crane 1971,
Chavez et al 1977).
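
Where the technique is appropriate, the operation itself is simple, as the following sketch shows (Python, illustrative only): the minimum DN of each band is assumed to be atmosphere-induced and is subtracted from that band.

    import numpy as np

    def dark_pixel_subtraction(image):
        # 'image' has shape (bands, rows, columns); subtract each band's minimum DN.
        minimums = image.min(axis=(1, 2), keepdims=True)
        return image - minimums

    bands = np.array([[[12, 15], [13, 20]],
                      [[40, 44], [42, 50]]])
    print(dark_pixel_subtraction(bands))    # band minimums (12 and 40) removed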


Radiance to Reflectance Conversion


Radiance to reflectance conversion requires knowledge of the true ground reflectance
of at least two targets in the image. These can come from either at-site reflectance
measurements or they can be taken from a reflectance table for standard materials. The
latter approach involves assumptions about the targets in the image.

Linear Regressions
A number of methods using linear regressions have been tried. These techniques use
bispectral plots and assume that the position of any pixel along that plot is strictly a
result of illumination. The slope then equals the relative reflectivities for the two
spectral bands. At an illumination of zero, the regression plots should pass through the
bispectral origin. Offsets from this represent the additive extraneous components, due
to atmosphere effects (Crippen 1987).

Atmospheric Modeling
Atmospheric modeling is computationally complex and requires either assumptions or
inputs concerning the atmosphere at the time of imaging. The atmospheric model used
to define the computations is frequently Lowtran or Modtran (Kneizys et al 1988). This
model requires inputs such as atmospheric profile (pressure, temperature, water vapor,
ozone, etc.), aerosol type, elevation, solar zenith angle, and sensor viewing angle.

Accurate atmospheric modeling is essential in preprocessing hyperspectral data sets


where bandwidths are typically 10 nm or less. These narrow bandwidth corrections can
then be combined to simulate the much wider bandwidths of Landsat or SPOT sensors
(Richter 1990).

Geometric Correction
As previously noted, geometric correction is applied to raw sensor data to correct errors of perspective due to the earth’s curvature and sensor motion. Today, some of these errors are commonly removed at the sensor’s data processing center. But in the past, some data from Landsat MSS 1, 2, and 3 were not corrected before distribution.

Many visible/infrared sensors are not nadir-viewing; they look to the side. For some
applications, such as stereo viewing or DEM generation, this is an advantage. For other
applications, it is a complicating factor.

In addition, even a nadir-viewing sensor is viewing only the scene center at true nadir.
Other pixels, especially those on the view periphery, are viewed off-nadir. For scenes
covering very large geographic areas (such as AVHRR), this can be a significant
problem.

This and other factors, such as earth curvature, result in geometric imperfections in the
sensor image. Terrain variations have the same distorting effect but on a smaller (pixel-
by-pixel) scale. These factors can be addressed by rectifying the image to a map.

See "CHAPTER 8: Rectification" for more information on geometric correction using


rectification and "CHAPTER 7: Photogrammetric Concepts" for more information on
orthocorrection.



A more rigorous geometric correction utilizes a DEM and sensor position information
to correct these distortions. This is orthocorrection.

Radiometric Enhancement
Radiometric enhancement deals with the individual values of the pixels in the image. It differs from spatial enhancement (discussed on page 143), which takes into account the values of neighboring pixels.

Depending on the points and the bands in which they appear, radiometric enhance-
ments that are applied to one band may not be appropriate for other bands. Therefore,
the radiometric enhancement of a multiband image can usually be considered as a
series of independent, single-band enhancements (Faust 1989).

Radiometric enhancement usually does not bring out the contrast of every pixel in an
image. Contrast can be lost between some pixels, while gained on others.

Figure 46: Histograms of Radiometrically Enhanced Data (frequency plotted against data file values 0 to 255 for the original data and the enhanced data; j and k are reference points)

In Figure 46, the range between j and k in the histogram of the original data is about one
third of the total range of the data. When the same data are radiometrically enhanced,
the range between j and k can be widened. Therefore, the pixels between j and k gain
contrast—it is easier to distinguish different brightness values in these pixels.

However, the pixels outside the range between j and k are more grouped together than
in the original histogram, to compensate for the stretch between j and k. Contrast among
these pixels is lost.

Contrast Stretching
When radiometric enhancements are performed on the display device, the transformation of data file values into brightness values is illustrated by the graph of a lookup table.

For example, Figure 47 shows the graph of a lookup table that increases the contrast of
data file values in the middle range of the input data (the range within the brackets).
Note that the input range within the bracket is narrow, but the output brightness values
for the same pixels are stretched over a wider range. This process is called contrast
stretching.

Figure 47: Graph of a Lookup Table (output brightness values, 0 to 255, plotted against input data file values, 0 to 255)

Notice that the graph line with the steepest (highest) slope brings out the most contrast
by stretching output values farther apart.

Linear and Nonlinear


The terms linear and nonlinear, when describing types of spectral enhancement, refer
to the function that is applied to the data to perform the enhancement. A piecewise
linear stretch uses a polyline function to increase contrast to varying degrees over
different ranges of the data, as in Figure 48.

Figure 48: Enhancement with Lookup Tables (linear, nonlinear, and piecewise linear functions mapping input data file values to output brightness values, both 0 to 255)



Linear Contrast Stretch
A linear contrast stretch is a simple way to improve the visible contrast of an image. It
is often necessary to contrast-stretch raw image data, so that they can be seen on the
display.

In most raw data, the data file values fall within a narrow range—usually a range much
narrower than the display device is capable of displaying. That range can be expanded
to utilize the total range of the display device (usually 0 to 255).

A two standard deviation linear contrast stretch is automatically applied to images displayed in
the IMAGINE Viewer.

Nonlinear Contrast Stretch


A nonlinear spectral enhancement can be used to gradually increase or decrease
contrast over a range, instead of applying the same amount of contrast (slope) across
the entire image. Usually, nonlinear enhancements bring out the contrast in one range
while decreasing the contrast in other ranges. The graph of the function in Figure 49
shows one example.

Figure 49: Nonlinear Radiometric Enhancement (output brightness values plotted against input data file values, both 0 to 255)

Piecewise Linear Contrast Stretch


A piecewise linear contrast stretch allows for the enhancement of a specific portion of
data by dividing the lookup table into three sections: low, middle, and high. It enables
the user to create a number of straight line segments which can simulate a curve. The
user can enhance the contrast or brightness of any section in a single color gun at a time.
This technique is very useful for enhancing image areas in shadow or other areas of low
contrast.

In ERDAS IMAGINE, the Piecewise Linear Contrast function is set up so that there are always
pixels in each data file value from 0 to 255. You can manipulate the percentage of pixels in a
particular range but you cannot eliminate a range of data file values.


A piecewise linear contrast stretch normally follows two rules:

1) The data values are continuous; there can be no break in the values between
High, Middle, and Low. Range specifications will adjust in relation to any
changes to maintain the data value range.

2) The data values specified can go only in an upward, increasing direction,


as shown in Figure 50.

Figure 50: Piecewise Linear Contrast Stretch (LUT value, 0 to 100%, plotted against the data value range 0 to 255, which is divided into Low, Middle, and High sections)

The contrast value for each range represents the percent of the available output range
that particular range occupies. The brightness value for each range represents the
middle of the total range of brightness values occupied by that range. Since rules 1 and
2 above are enforced, as the contrast and brightness values are changed, they may affect
the contrast and brightness of other ranges. For example, if the contrast of the low range
increases, it forces the contrast of the middle to decrease.
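
A piecewise linear stretch can be thought of as a lookup table built from breakpoints. The sketch below (Python; the breakpoints are arbitrary examples, not IMAGINE defaults) gives the low range most of the output range while keeping the function continuous and increasing, as required by the two rules above.

    import numpy as np

    def piecewise_linear_lut(breakpoints_in, breakpoints_out):
        # Straight line segments are drawn between the breakpoints (rule 1:
        # continuous; rule 2: both breakpoint lists must be increasing).
        return np.interp(np.arange(256), breakpoints_in, breakpoints_out).astype(np.uint8)

    # Low range (0-50) receives 60% of the output range, middle (50-150) 30%,
    # and high (150-255) the remaining 10%.
    lut = piecewise_linear_lut([0, 50, 150, 255], [0, 153, 230, 255])
    image = np.random.randint(0, 256, (100, 100))
    stretched = lut[image]      # apply the lookup table to the data file values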

Contrast Stretch on the Display


Usually, a contrast stretch is performed on the display device only, so that the data file
values are not changed. Lookup tables are created that convert the range of data file
values to the maximum range of the display device. The user can then edit and save the
contrast stretch values and lookup tables as part of the raster data .img file. These values
will be loaded to the Viewer as the default display values the next time the image is
displayed.

In ERDAS IMAGINE, you can permanently change the data file values to the lookup table
values. Use the Image Interpreter LUT Stretch function to create an .img output file with the
same data values as the displayed contrast stretched image.

See "CHAPTER 1: Raster Data" for more information on the data contained in .img files.



The statistics in the .img file contain the mean, standard deviation, and other statistics
on each band of data. The mean and standard deviation are used to determine the range
of data file values to be translated into brightness values or new data file values. The
user can specify the number of standard deviations from the mean that are to be used
in the contrast stretch. Usually the data file values that are two standard deviations
above and below the mean are used. If the data have a normal distribution, then this
range represents approximately 95 percent of the data.

The mean and standard deviation are used instead of the minimum and maximum data
file values, because the minimum and maximum data file values are usually not repre-
sentative of most of the data. (A notable exception occurs when the feature being sought
is in shadow. The shadow pixels are usually at the low extreme of the data file values,
outside the range of two standard deviations from the mean.)
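
A minimal sketch of such a standard deviation stretch is shown below (Python, for illustration; IMAGINE takes the mean and standard deviation from the statistics stored in the .img file rather than recomputing them).

    import numpy as np

    def linear_stretch(band, n_std=2.0):
        # Map mean +/- n_std standard deviations onto the full 0-255 output
        # range; data file values outside that range are clipped.
        mean, std = band.mean(), band.std()
        low, high = mean - n_std * std, mean + n_std * std
        stretched = (band - low) / (high - low) * 255.0
        return np.clip(stretched, 0, 255).astype(np.uint8)

    band = np.random.normal(loc=90, scale=15, size=(100, 100))
    display = linear_stretch(band)   # about 95% of a normal band falls inside the stretch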

The use of these statistics in contrast stretching is discussed and illustrated in "CHAPTER 4:
Image Display". Statistics terms are discussed in "APPENDIX A: Math Topics".

Varying the Contrast Stretch


There are variations of the contrast stretch that can be used to change the contrast of
values over a specific range, or by a specific amount. By manipulating the lookup tables
as in Figure 51, the maximum contrast in the features of an image can be brought out.

Figure 51 shows how the contrast stretch manipulates the histogram of the data,
increasing contrast in some areas and decreasing it in others. This is also a good
example of a piecewise linear contrast stretch, created by adding breakpoints to the
histogram.

Figure 51: Contrast Stretch by Manipulating Lookup Tables and the Effect on the Output Histogram. Each panel plots output brightness values against input data file values, with the input and output histograms shown. Panels: (1) Linear stretch; values are clipped at 255. (2) A breakpoint is added to the linear function, redistributing the contrast. (3) Another breakpoint is added; contrast at the peak of the histogram continues to increase. (4) The breakpoint at the top of the function is moved so that values are not clipped.

Histogram Equalization
Histogram equalization is a nonlinear stretch that redistributes pixel values so that there are approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the “peaks” of the histogram and lessened at the “tails.”

Histogram equalization can also separate pixels into distinct groups, if there are few
output values over a wide range. This can have the visual effect of a crude classification.

Figure 52: Histogram Equalization (left: original histogram with a peak and a tail; right: after equalization, pixels at the peak are spread apart so contrast is gained, while pixels at the tail are grouped together so contrast is lost)

To perform a histogram equalization, the pixel values of an image (either data file
values or brightness values) are reassigned to a certain number of bins, which are
simply numbered sets of pixels. The pixels are then given new values, based upon the
bins to which they are assigned.

The following parameters are entered:

• N - the number of bins to which pixel values can be assigned. If there are many bins
or many pixels with the same value(s), some bins may be empty.

• M - the maximum of the range of the output values. The range of the output values
will be from 0 to M.

The total number of pixels is divided by the number of bins, equaling the number of
pixels per bin, as shown in the following equation:

A = T / N     (Equation 1)
where:

N = the number of bins


T = the total number of pixels in the image
A = the equalized number of pixels per bin


The pixels of each input value are assigned to bins, so that the number of pixels in each
bin is as close to A as possible. Consider Figure 53:

Figure 53: Histogram Equalization Example (histogram of number of pixels plotted against data file values 0 to 9, with the equalized number of pixels per bin, A = 24, marked)

There are 240 pixels represented by this histogram. To equalize this histogram to 10
bins, there would be:

240 pixels / 10 bins = 24 pixels per bin = A

To assign pixels to bins, the following equation is used:

Bi = int ( ( (sum of Hk for k = 1 to i-1) + Hi / 2 ) / A )     (Equation 2)

where:

A = equalized number of pixels per bin (see above)


Hi = the number of pixels with the value i (histogram)
int = integer function (truncating real numbers to integer)
Bi = bin number for pixels with value i

Source: Modified from Gonzalez and Wintz 1977
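
Equations 1 and 2 can be expressed compactly in code. The sketch below (Python, illustrative only) builds the bin assignment for a single band of integer data and rescales the bins to the output range 0 to M.

    import numpy as np

    def equalize(band, n_bins, out_max):
        # band: integer data file values; n_bins = N; out_max = M.
        hist = np.bincount(band.ravel())                          # H_i for each value i
        A = band.size / float(n_bins)                             # Equation 1
        below = np.concatenate(([0], np.cumsum(hist)[:-1]))       # sum of H_k for k < i
        bins = ((below + hist / 2.0) / A).astype(int)             # Equation 2
        lut = np.round(bins * out_max / float(n_bins - 1)).astype(int)   # rescale bins to 0..M
        return lut[band]

    # For the example above (240 pixels): equalize(band, n_bins=10, out_max=9)
    # returns output data file values in the range 0 to 9.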



The 10 bins are rescaled to the range 0 to M. In this example, M = 9, since the input values
ranged from 0 to 9, so that the equalized histogram can be compared to the original. The
output histogram of this equalized image looks like Figure 54:

Figure 54: Equalized Histogram (number of pixels plotted against output data file values 0 to 9; the numbers inside the bars are the input data file values; A = 24)

Effect on Contrast
By comparing the original histogram of the example data with the one above, one can
see that the enhanced image gains contrast in the “peaks” of the original histogram—
for example, the input range of 3 to 7 is stretched to the range 1 to 8. However, data
values at the “tails” of the original histogram are grouped together—input values 0
through 2 all have the output value of 0. So, contrast among the “tail” pixels, which
usually make up the darkest and brightest regions of the input image, is lost.

The resulting histogram is not exactly flat, since the pixels can rarely be grouped
together into bins with an equal number of pixels. Sets of pixels with the same value are
never split up to form equal bins.

Level Slice
A level slice is similar to a histogram equalization in that it divides the data into equal
amounts. A level slice on a true color display creates a “stair-stepped” lookup table. The
effect on the data is that input file values are grouped together at regular intervals into
a discrete number of levels, each with one output brightness value.

To perform a true color level slice, the user must specify a range for the output
brightness values and a number of output levels. The lookup table is then “stair-
stepped” so that there is an equal number of input pixels in each of the output levels.

Histogram Matching
Histogram matching is the process of determining a lookup table that will convert the histogram of one image to resemble the histogram of another. Histogram matching is useful for matching data of the same or adjacent scenes that were scanned on separate days, or are slightly different because of sun angle or atmospheric effects. This is especially useful for mosaicking or change detection.


To achieve good results with histogram matching, the two input images should have
similar characteristics:

• The general shape of the histogram curves should be similar.

• Relative dark and light features in the image should be the same.

• For some applications, the spatial resolution of the data should be the same.

• The relative distributions of land covers should be about the same, even when
matching scenes that are not of the same area. If one image has clouds and the other
does not, then the clouds should be “removed” before matching the histograms.
This can be done using the Area of Interest (AOI) function. The AOI function is
available from the Viewer menu bar.

In ERDAS IMAGINE, histogram matching is performed band to band (e.g., band 2 of one image
is matched to band 2 of the other image, etc.).

To match the histograms, a lookup table is mathematically derived, which serves as a


function for converting one histogram to the other, as illustrated in Figure 55.

Figure 55: Histogram Matching. The source histogram (a), mapped through the lookup table (b), approximates the model histogram (c). Each panel plots frequency against input values from 0 to 255.
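
One common way to derive such a lookup table is to match the cumulative histograms of the two images: each input value is mapped to the reference value with the nearest cumulative frequency. The following is a rough sketch of that idea (Python; it is not necessarily the exact IMAGINE algorithm), applied to one band at a time.

    import numpy as np

    def match_histogram(source_band, reference_band):
        # Cumulative distributions of both 8-bit bands, normalized to 0..1.
        src_cdf = np.cumsum(np.bincount(source_band.ravel(), minlength=256)) / float(source_band.size)
        ref_cdf = np.cumsum(np.bincount(reference_band.ravel(), minlength=256)) / float(reference_band.size)
        # For each source value, find the reference value with the nearest CDF.
        lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
        return lut[source_band]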

Brightness Inversion
The brightness inversion functions produce images that have the opposite contrast of the original image. Dark detail becomes light, and light detail becomes dark. This can also be used to invert a negative image that has been scanned and produce a positive image.

Brightness inversion has two options: inverse and reverse. Both options convert the
input data range (commonly 0 - 255) to 0 - 1.0. A min-max remapping is used to simul-
taneously stretch the image and handle any input bit format. The output image is in
floating point format, so a min-max stretch is used to convert the output image into 8-
bit format.

Inverse is useful for emphasizing detail that would otherwise be lost in the darkness of
the low DN pixels. This function applies the following algorithm:

DNout = 1.0            if 0.0 < DNin < 0.1

DNout = 0.1 / DNin     if 0.1 < DNin < 1

Reverse is a linear function that simply reverses the DN values:

DNout = 1.0 - DNin

Source: Pratt 1986
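
Both options translate directly from the formulas above, as in this sketch (Python, for illustration; the min-max remapping into and out of the 0 to 1.0 range is included).

    import numpy as np

    def brightness_inversion(band, mode="reverse"):
        dn = band.astype(float)
        dn = (dn - dn.min()) / (dn.max() - dn.min())          # remap input to 0 - 1.0
        if mode == "reverse":
            out = 1.0 - dn                                    # simple linear reversal
        else:                                                 # "inverse": emphasize dark detail
            out = np.where(dn < 0.1, 1.0, 0.1 / np.maximum(dn, 0.1))
        out = (out - out.min()) / (out.max() - out.min())     # min-max stretch of the result
        return (out * 255.0).astype(np.uint8)                 # back to 8-bit for display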

Spatial Enhancement
While radiometric enhancements operate on each pixel individually, spatial enhancement modifies pixel values based on the values of surrounding pixels. Spatial enhancement deals largely with spatial frequency, which is the difference between the highest and lowest values of a contiguous set of pixels. Jensen (1986) defines spatial frequency as “the number of changes in brightness value per unit distance for any particular part of an image.”

Consider the examples in Figure 56:

• zero spatial frequency - a flat image, in which every pixel has the same value

• low spatial frequency - an image consisting of a smoothly varying gray scale

• highest spatial frequency - an image consisting of a checkerboard of black and


white pixels

Figure 56: Spatial Frequencies (example images of zero, low, and high spatial frequency)

This section contains a brief description of the following:

• Convolution, Crisp, and Adaptive filtering

• Resolution merging

See "Radar Imagery Enhancement" on page 191 for a discussion of Edge Detection and Texture
Analysis. These spatial enhancement techniques can be applied to any type of data.

Convolution Filtering
Convolution filtering is the process of averaging small sets of pixels across an image. Convolution filtering is used to change the spatial frequency characteristics of an image (Jensen 1996).

A convolution kernel is a matrix of numbers that is used to average the value of each
pixel with the values of surrounding pixels in a particular way. The numbers in the
matrix serve to weight this average toward particular pixels. These numbers are often
called coefficients, because they are used as such in the mathematical equations.

In ERDAS IMAGINE, there are four ways you can apply convolution filtering to an image:

1) The kernel filtering option in the Viewer


2) The Convolution function in Image Interpreter
3) The Radar Edge Enhancement function
4) The Convolution function in Model Maker

Filtering is a broad term, referring to the altering of spatial or spectral features for
image enhancement (Jensen 1996). Convolution filtering is one method of spatial
filtering. Some texts may use the terms synonymously.

Convolution Example
To understand how one pixel is convolved, imagine that the convolution kernel is
overlaid on the data file values of the image (in one band), so that the pixel to be
convolved is in the center of the window.

Data:                  Kernel:

2  8  6  6  6          -1  -1  -1
2  8  6  6  6          -1  16  -1
2  2  8  6  6          -1  -1  -1
2  2  2  8  6
2  2  2  2  8

Figure 57: Applying a Convolution Kernel

Figure 57 shows a 3 × 3 convolution kernel being applied to the pixel in the third
column, third row of the sample data (the pixel that corresponds to the center of the
kernel).


To compute the output value for this pixel, each value in the convolution kernel is
multiplied by the image pixel value that corresponds to it. These products are summed,
and the total is divided by the sum of the values in the kernel, as shown here:

output = int ( [ (-1 x 8) + (-1 x 6) + (-1 x 6) +
                 (-1 x 2) + (16 x 8) + (-1 x 6) +
                 (-1 x 2) + (-1 x 2) + (-1 x 8) ]
               / (-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1) )

       = int ((128 - 40) / (16 - 8))

       = int (88 / 8) = int (11) = 11

When the 2 × 2 set of pixels in the center of this 5 x 5 image is convolved, the output
values are:

        column:  1   2   3   4   5
row 1:           2   8   6   6   6
row 2:           2  11   5   6   6
row 3:           2   0  11   6   6
row 4:           2   2   2   8   6
row 5:           2   2   2   2   8

Figure 58: Output Values for Convolution Kernel

The kernel used in this example is a high frequency kernel, as explained below. It is
important to note that the relatively lower values become lower, and the higher values
become higher, thus increasing the spatial frequency of the image.



Convolution Formula
The following formula is used to derive an output data file value for the pixel being
convolved (in the center):

V = [ Σ (for i = 1 to q) Σ (for j = 1 to q) ( fij × dij ) ] / F

where:

fij = the coefficient of a convolution kernel at position i,j (in the kernel)

dij = the data value of the pixel that corresponds to fij

q = the dimension of the kernel, assuming a square kernel (if q=3, the kernel is
3 × 3)

F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients
is zero

V = the output pixel value

In cases where V is less than 0, V is clipped to 0.

Source: Modified from Jensen 1996, Schowengerdt 1983

The sum of the coefficients (F) is used as the denominator of the equation above, so that
the output values will be in relatively the same range as the input values. Since F cannot
equal zero (division by zero is not defined), F is set to 1 if the sum is zero.
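
The formula translates almost directly into code. The following sketch (Python, illustrative only; the Image Interpreter Convolution function offers many more options) convolves one band with a square kernel, divides by the sum of the coefficients unless that sum is zero, and clips negative output values to 0.

    import numpy as np

    def convolve(band, kernel):
        q = kernel.shape[0]
        pad = q // 2
        F = kernel.sum()
        if F == 0:                       # zero-sum kernel: no division is performed
            F = 1
        # Edge pixels are handled here by repeating the border values (one possible convention).
        padded = np.pad(band.astype(float), pad, mode="edge")
        out = np.zeros(band.shape)
        for i in range(band.shape[0]):
            for j in range(band.shape[1]):
                window = padded[i:i + q, j:j + q]
                v = int((kernel * window).sum() / F)
                out[i, j] = max(v, 0)    # V is clipped to 0 when it is negative
        return out

    high_frequency = np.array([[-1, -1, -1],
                               [-1, 16, -1],
                               [-1, -1, -1]])
    data = np.array([[2, 8, 6, 6, 6],
                     [2, 8, 6, 6, 6],
                     [2, 2, 8, 6, 6],
                     [2, 2, 2, 8, 6],
                     [2, 2, 2, 2, 8]])
    print(convolve(data, high_frequency)[2, 2])   # 11, as in the worked example above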

Zero-Sum Kernels
Zero-sum kernels are kernels in which the sum of all coefficients in the kernel equals
zero. When a zero-sum kernel is used, then the sum of the coefficients is not used in the
convolution equation, as above. In this case, no division is performed (F = 1), since
division by zero is not defined.

This generally causes the output values to be:

• zero in areas where all input values are equal (no edges)

• low in areas of low spatial frequency

• extreme (high values become much higher, low values become much lower) in
areas of high spatial frequency


Therefore, a zero-sum kernel is an edge detector, which usually smooths out or zeros
out areas of low spatial frequency and creates a sharp contrast where spatial frequency
is high, which is at the edges between homogeneous groups of pixels. The resulting
image often consists of only edges and zeros.



Zero-sum kernels can be biased to detect edges in a particular direction. For example,
this 3 × 3 kernel is biased to the south (Jensen 1996).

-1 -1 -1

1 -2 1

1 1 1

See the section on "Edge Detection" on page 200 for more detailed information.

High-Frequency Kernels
A high-frequency kernel, or high-pass kernel, has the effect of increasing spatial
frequency.

High-frequency kernels serve as edge enhancers, since they bring out the edges
between homogeneous groups of pixels. Unlike edge detectors (such as zero-sum
kernels), they highlight edges and do not necessarily eliminate other features.

-1 -1 -1

-1 16 -1

-1 -1 -1

When this kernel is used on a set of pixels in which a relatively low value is surrounded
by higher values, like this...

BEFORE AFTER

204 200 197 204 200 197

201 100 209 201 9 209

198 200 210 198 200 210

...the low value gets lower. Inversely, when the kernel is used on a set of pixels in which
a relatively high value is surrounded by lower values...

BEFORE AFTER

64 60 57 64 60 57

61 125 69 61 187 69

58 60 70 58 60 70

...the high value becomes higher. In either case, spatial frequency is increased by this
kernel.


Low-Frequency Kernels
Below is an example of a low-frequency kernel, or low-pass kernel, which decreases
spatial frequency.

1 1 1

1 1 1

1 1 1

This kernel simply averages the values of the pixels, causing them to be more homoge-
neous (homogeneity is low spatial frequency). The resulting image looks either
smoother or more blurred.

For information on applying filters to thematic layers, see "CHAPTER 10: Geographic Information Systems".

Crisp
The Crisp filter sharpens the overall scene luminance without distorting the interband variance content of the image. This is a useful enhancement if the image is blurred due to atmospheric haze, rapid sensor motion, or a broad point spread function of the sensor.

The algorithm used for this function is:

1) Calculate principal components of multiband input image.

2) Convolve PC-1 with summary filter.

3) Retransform to RGB space.

Source: ERDAS (Faust 1993)

The logic of the algorithm is that the first principal component (PC-1) of an image is
assumed to contain the overall scene luminance. The other PC’s represent intra-scene
variance. Thus, the user can sharpen only PC-1 and then reverse the principal compo-
nents calculation to reconstruct the original image. Luminance is sharpened, but
variance is retained.

Resolution Merge
The resolution of a specific sensor can refer to radiometric, spatial, spectral, or temporal resolution.

See "CHAPTER 1: Raster Data" for a full description of resolution types.

Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT
panchromatic has one broad band with very good spatial resolution—10 m. Combining
these two images to yield a seven band data set with 10 m resolution would provide the
best characteristics of both sensors.

A number of models have been suggested to achieve this image merge. Welch and
Ehlers (1987) used forward-reverse RGB to IHS transforms, replacing I (from trans-
formed TM data) with the SPOT panchromatic image. However, this technique is
limited to three bands (R,G,B).

Chavez (1991), among others, uses the forward-reverse principal components trans-
forms with the SPOT image, replacing PC-1.

In the above two techniques, it is assumed that the intensity component (PC-1 or I) is
spectrally equivalent to the SPOT panchromatic image, and that all the spectral infor-
mation is contained in the other PC’s or in H and S. Since SPOT data do not cover the
full spectral range that TM data do, this assumption does not strictly hold. It is
unacceptable to resample the thermal band (TM6) based on the visible (SPOT panchro-
matic) image.

Another technique (Schowengerdt 1980) combines a high frequency image derived


from the high spatial resolution data (i.e., SPOT panchromatic) additively with the high
spectral resolution Landsat TM image.

The Resolution Merge function has two different options for resampling low spatial
resolution data to a higher spatial resolution while retaining spectral information:

• forward-reverse principal components transform

• multiplicative

Principal Components Merge


Because a major goal of this merge is to retain the spectral information of the six TM
bands (1-5, 7), this algorithm is mathematically rigorous. It is assumed that:

• PC-1 contains only overall scene luminance; all interband variation is contained in
the other 5 PCs, and

• Scene luminance in the SWIR bands is identical to visible scene luminance.

With the above assumptions, the forward transform into principal components is made.
PC-1 is removed and its numerical range (min to max) is determined. The high spatial
resolution image is then remapped so that its histogram shape is kept constant, but it is
in the same numerical range as PC-1. It is then substituted for PC-1 and the reverse
transform is applied. This remapping is done so that the mathematics of the reverse
transform do not distort the thematic information (Welch and Ehlers 1987).


Multiplicative
The second technique in the Image Interpreter uses a simple multiplicative algorithm:

(DNTM1) (DNSPOT) = DNnew TM1

The algorithm is derived from the four component technique of Crippen (Crippen
1989). In this paper, it is argued that of the four possible arithmetic methods to incor-
porate an intensity image into a chromatic image (addition, subtraction, division, and
multiplication), only multiplication is unlikely to distort the color.

However, in his study Crippen first removed the intensity component via band ratios,
spectral indices, or PC transform. The algorithm shown above operates on the original
image. The result is an increased presence of the intensity component. For many appli-
cations, this is desirable. Users involved in urban or suburban studies, city planning,
utilities routing, etc., often want roads and cultural features (which tend toward high
reflection) to be pronounced in the image.
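
A rough sketch of the multiplicative approach follows (Python; it assumes the panchromatic band has already been resampled and co-registered to the same grid as the TM band, and it rescales the product back to 8 bits for display; this is not the exact Image Interpreter implementation).

    import numpy as np

    def multiplicative_merge(tm_band, pan_band):
        # DN(TM1) x DN(SPOT) = DN(new TM1), then rescale to the 0-255 display range.
        product = tm_band.astype(float) * pan_band.astype(float)
        product = (product - product.min()) / (product.max() - product.min()) * 255.0
        return product.astype(np.uint8)

    tm1 = np.random.randint(20, 120, (100, 100))    # TM band resampled to the 10 m grid
    pan = np.random.randint(40, 220, (100, 100))    # SPOT panchromatic band, 10 m
    merged = multiplicative_merge(tm1, pan)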

Adaptive Filter
Contrast enhancement (image stretching) is a widely applicable standard image processing technique. However, even adjustable stretches like the piecewise linear stretch act on the scene globally. There are many circumstances where this is not the optimum approach. For example, coastal studies where much of the water detail is spread through a very low DN range and the land detail is spread through a much higher DN range would be such a circumstance. In these cases, a filter that “adapts” the stretch to the region of interest (the area within the moving window) would produce a better enhancement. Adaptive filters attempt to achieve this (Fahnestock and Schowengerdt 1983, Peli and Lim 1982, Schwartz 1977).

ERDAS IMAGINE supplies two adaptive filters with user-adjustable parameters. The Adaptive
Filter function in Image Interpreter can be applied to undegraded images, such as SPOT,
Landsat, and digitized photographs. The Image Enhancement function in Radar is better for
degraded or difficult images.



Scenes to be adaptively filtered can be divided into three broad and overlapping
categories:

• Undegraded — these scenes have good and uniform illumination overall. Given a
choice, these are the scenes one would prefer to obtain from imagery sources such
as EOSAT or SPOT.

• Low luminance — these scenes have an overall or regional less-than-optimum


intensity. An underexposed photograph (scanned) or shadowed areas would be in
this category. These scenes need an increase in both contrast and overall scene
luminance.

• High luminance — these scenes are characterized by overall excessively high DN


values. Examples of such circumstances would be an over-exposed (scanned)
photograph or a scene with a light cloud cover or haze. These scenes need a
decrease in luminance and an increase in contrast.

No one filter with fixed parameters can address this wide variety of conditions. In
addition, multiband images may require different parameters for each band. Without
the use of adaptive filters, the different bands would have to be separated into one-band
files, enhanced, and then recombined.

For this function, the image is separated into high and low frequency component
images. The low frequency image is considered to be overall scene luminance. These
two component parts are then recombined in various relative amounts using multi-
pliers derived from look-up tables. These LUTs are driven by the overall scene
luminance:

DNout = K(DNHi) + DNLL

where:

K = user-selected contrast multiplier

Hi = high luminance (derived from the LUT)

LL = local luminance (derived from the LUT)

Figure 59: Local Luminance Intercept (local luminance, 0 to 255, plotted against low frequency image DN, 0 to 255; the intercept I is marked on the local luminance axis)


Figure 59 shows the local luminance intercept, which is the output luminance value that
an input luminance value of 0 would be assigned.
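
The overall structure of the function can be sketched as follows (Python, with SciPy's uniform_filter standing in for the low frequency separation; the multiplier lookup tables in IMAGINE are more elaborate, so this is only a schematic of the DNout = K(DNHi) + DNLL idea).

    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_stretch(band, window=15, K=1.5, intercept=30):
        dn = band.astype(float)
        low = uniform_filter(dn, size=window)        # low frequency image (scene luminance)
        high = dn - low                              # high frequency detail
        # A simple stand-in for the local luminance LUT: a linear remap whose
        # output at an input of 0 is the intercept (I) shown in Figure 59.
        local_luminance = intercept + (255.0 - intercept) * low / 255.0
        out = K * high + local_luminance
        return np.clip(out, 0, 255).astype(np.uint8)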

Spectral Enhancement
The enhancement techniques that follow require more than one band of data. They can be used to:

• compress bands of data that are similar

• extract new bands of data that are more interpretable to the eye

• apply mathematical transforms and algorithms

• display a wider variety of information in the three available color guns (R,G,B)

In this documentation, some examples are illustrated with two-dimensional graphs. However,
you are not limited to two-dimensional (two-band) data. ERDAS IMAGINE programs allow an
unlimited number of bands to be used.

Keep in mind that processing such data sets can require a large amount of computer swap space.
In practice, the principles outlined below apply to any number of bands.

Some of these enhancements can be used to prepare data for classification. However, this is a
risky practice unless you are very familiar with your data, and the changes that you are making
to it. Anytime you alter values, you risk losing some information.

Principal Components Analysis
Principal components analysis (or PCA) is often used as a method of data compression. It allows redundant data to be compacted into fewer bands—that is, the dimensionality of the data is reduced. The bands of PCA data are non-correlated and independent, and are often more interpretable than the source data (Jensen 1996; Faust 1989).

The process is easily explained graphically with an example of data in two bands.
Below is an example of a two-band scatterplot, which shows the relationships of data
file values in two bands. The values of one band are plotted against those of the other.
If both bands have normal distributions, an ellipse shape results.

Scatterplots and normal distributions are discussed in "APPENDIX A: Math Topics."

Figure 60: Two Band Scatterplot (Band B data file values plotted against Band A data file values, 0 to 255, with the histograms of Band A and Band B shown along the axes)

Ellipse Diagram
In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid (3 dimensions) or
hyperellipsoid (more than 3) is formed if the distributions of each input band are
normal or near normal. (The term “ellipse” will be used for general purposes here.)

To perform principal components analysis, the axes of the spectral space are rotated,
changing the coordinates of each pixel in spectral space, and the data file values as well.
The new axes are parallel to the axes of the ellipse.

First Principal Component


The length and direction of the widest transect of the ellipse are calculated using matrix
algebra in a process explained below. The transect, which corresponds to the major
(longest) axis of the ellipse, is called the first principal component of the data. The
direction of the first principal component is the first eigenvector, and its length is the
first eigenvalue (Taylor 1977).

A new axis of the spectral space is defined by this first principal component. The points
in the scatterplot are now given new coordinates, which correspond to this new axis.
Since, in spectral space, the coordinates of the points are the data file values, new data
file values are derived from this process. These values are stored in the first principal
component band of a new data file.

Figure 61: First Principal Component (the first principal component forms a new axis through the 0 to 255 scatterplot)

The first principal component shows the direction and length of the widest transect of
the ellipse. Therefore, as an axis in spectral space, it measures the highest variation
within the data. In Figure 62 it is easy to see that the first eigenvalue will always be
greater than the ranges of the input bands, just as the hypotenuse of a right triangle
must always be longer than the legs.

Figure 62: Range of First Principal Component (the range of PC 1 along the ellipse exceeds the ranges of Band A and Band B in the 0 to 255 scatterplot)

Successive Principal Components


The second principal component is the widest transect of the ellipse that is orthogonal
(perpendicular) to the first principal component. As such, the second principal
component describes the largest amount of variance in the data that is not already
described by the first principal component (Taylor 1977). In a two-dimensional analysis,
the second principal component corresponds to the minor axis of the ellipse.

Figure 63: Second Principal Component (PC 2 lies at a 90° angle, orthogonal, to PC 1 in the 0 to 255 scatterplot)

In n dimensions, there are n principal components. Each successive principal component:

• is the widest transect of the ellipse that is orthogonal to the previous components
in the n-dimensional space of the scatterplot (Faust 1989)

• accounts for a decreasing amount of the variation in the data which is not already
accounted for by previous principal components (Taylor 1977)

Although there are n output bands in a principal components analysis, the first few
bands account for a high proportion of the variance in the data—in some cases, almost
100%. Therefore, principal components analysis is useful for compressing data into
fewer bands.

In other applications, useful information can be gathered from the principal component
bands with the least variance. These bands can show subtle details in the image that
were obscured by higher contrast in the original image. These bands may also show
regular noise in the data (for example, the striping in old MSS data) (Faust 1989).

Computing Principal Components


To compute a principal components transformation, a linear transformation is
performed on the data—meaning that the coordinates of each pixel in spectral space
(the original data file values) are recomputed using a linear equation. The result of the
transformation is that the axes in n-dimensional spectral space are shifted and rotated
to be relative to the axes of the ellipse.

To perform the linear transformation, the eigenvectors and eigenvalues of the n
principal components must be mathematically derived from the covariance matrix, as
shown in the following equation:

    | v1  0   0  ...  0  |
V = | 0   v2  0  ...  0  |
    | ...                |
    | 0   0   0  ...  vn |

E Cov E^T = V

where:

Cov = the covariance matrix
E = the matrix of eigenvectors
T = the transposition function
V = a diagonal matrix of eigenvalues, in which all non-diagonal elements are zeros

V is computed so that its non-zero elements are ordered from greatest to least, so that
v1 > v2 > v3 ... > vn.

Source: Faust 1989

A full explanation of this computation can be found in Gonzalez and Wintz 1977.

The matrix V is the covariance matrix of the output principal component file. The zeros
represent the covariance between bands (there is none), and the eigenvalues are the
variance values for each band. Because the eigenvalues are ordered from v1 to vn, the
first eigenvalue is the largest and represents the most variance in the data.

Each column of the resulting eigenvector matrix, E, describes a unit-length vector in
spectral space, which shows the direction of the principal component (the ellipse axis).
The numbers are used as coefficients in the following equation, to transform the
original data file values into the principal component values.

Pe = Σ (k = 1 to n) dk Eke

where:

e = the number of the principal component (first, second, etc.)

Pe = the output principal component value for principal component band e

k = a particular input band

n = the total number of bands

dk = an input data file value in band k

E = the eigenvector matrix, such that Eke = the element of that matrix at
row k, column e

Source: Modified from Gonzalez and Wintz 1977
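As a rough illustration of this computation (not part of the original Field Guide, which performs the transform internally), the following Python/NumPy sketch derives the eigenvectors and eigenvalues from the covariance matrix and applies the linear transformation to every pixel. The array layout and function name are assumptions made for the example.

import numpy as np

def principal_components(image):
    """Compute principal component bands for an (rows, cols, n_bands) array."""
    rows, cols, n_bands = image.shape
    pixels = image.reshape(-1, n_bands).astype(np.float64)

    # Covariance matrix of the band values (bands as variables).
    cov = np.cov(pixels, rowvar=False)

    # Eigen-decomposition: eigh returns eigenvalues in ascending order,
    # so reverse them to get v1 > v2 > ... > vn as in the text.
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

    # Pe = sum over k of dk * Eke (matrix form: pixel values times E).
    pc = pixels @ eigenvectors
    return pc.reshape(rows, cols, n_bands), eigenvalues, eigenvectors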

Decorrelation Stretch

The purpose of a contrast stretch is to:

• alter the distribution of the image DN values within the 0 - 255 range of the display
device and

• utilize the full range of values in a linear fashion.

The decorrelation stretch stretches the principal components of an image, not the
original image bands.

A principal components transform converts a multiband image into a set of mutually
orthogonal images portraying inter-band variance. Depending on the DN ranges and
the variance of the individual input bands, these new images (PCs) will occupy only a
portion of the possible 0 - 255 data range.

Each PC is separately stretched to fully utilize the data range. The new stretched PC
composite image is then retransformed to the original data areas.

Either the original PCs or the stretched PCs may be saved as a permanent image file for
viewing after the stretch.

NOTE: Storage of PCs as floating point-single precision is probably appropriate in this case.
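A minimal sketch of the decorrelation stretch flow described above, assuming a NumPy image cube and a simple linear stretch of each PC to the 0 - 255 range; the actual ERDAS IMAGINE implementation and its stretch options may differ.

import numpy as np

def decorrelation_stretch(image):
    """Stretch the principal components of an image, then retransform."""
    rows, cols, n = image.shape
    pixels = image.reshape(-1, n).astype(np.float64)

    # Forward principal components transform (see the PCA sketch above).
    cov = np.cov(pixels, rowvar=False)
    _, eigvec = np.linalg.eigh(cov)
    pcs = pixels @ eigvec

    # Stretch each PC separately so it fully utilizes the 0 - 255 data range.
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    scale = np.where(hi > lo, 255.0 / (hi - lo), 0.0)
    stretched = (pcs - lo) * scale

    # Retransform the stretched PC composite back to the original band space.
    return (stretched @ eigvec.T).reshape(rows, cols, n)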

Tasseled Cap

The different bands in a multispectral image can be visualized as defining an
N-dimensional space where N is the number of bands. Each pixel, positioned according
to its DN value in each band, lies within the N-dimensional space. This pixel distribution
is determined by the absorption/reflection spectra of the imaged material. This
clustering of the pixels is termed the data structure (Crist & Kauth 1986).

See "CHAPTER 1: Raster Data" for more information on absorption/reflection spectra. See the
discussion on "Principal Components Analysis" on page 153.

The data structure can be considered a multi-dimensional hyperellipsoid. The principal
axes of this data structure are not necessarily aligned with the axes of the data space
(defined as the bands of the input image). They are more directly related to the
absorption spectra. For viewing purposes, it is advantageous to rotate the N-dimen-
sional space, such that one or two of the data structure axes are aligned with the viewer
X and Y axes. In particular, the user could view the axes that are largest for the data
structure produced by the absorption peaks of special interest for the application.

For example, a geologist and a botanist are interested in different absorption features.
They would want to view different data structures and therefore, different data
structure axes. Both would benefit from viewing the data in a way that would maximize
visibility of the data structure of interest.

The Tasseled Cap transformation offers a way to optimize data viewing for vegetation
studies. Research has produced three data structure axes which define the vegetation
information content (Crist et al 1986, Crist & Kauth 1986):

• Brightness — a weighted sum of all bands, defined in the direction of the principal
variation in soil reflectance.

• Greenness — orthogonal to brightness, a contrast between the near-infrared and
visible bands. Strongly related to the amount of green vegetation in the scene.

• Wetness — relates to canopy and soil moisture (Lillesand and Kiefer 1987).

A simple calculation (linear combination) then rotates the data space to present any of
these axes to the user.

These rotations are sensor-dependent, but once defined for a particular sensor (say
Landsat 4 TM), the same rotation will work for any scene taken by that sensor. The
increased dimensionality (number of bands) of TM vs. MSS allowed Crist et al to define
three additional axes, termed Haze, Fifth, and Sixth. Laurin (1986) has used this haze
parameter to devise an algorithm to de-haze Landsat imagery.



The Tasseled Cap algorithm implemented in the Image Interpreter provides the correct
coefficients for MSS, TM4, and TM5 imagery. For TM4, the calculations are:

Brightness = .3037 (TM1) + .2793 (TM2) + .4743 (TM3) + .5585 (TM4) + .5082 (TM5) + .1863 (TM7)

Greenness = -.2848 (TM1) - .2435 (TM2) - .5436 (TM3) + .7243 (TM4) + .0840 (TM5) - .1800 (TM7)

Wetness = .1509 (TM1) + .1973 (TM2) + .3279 (TM3) + .3406 (TM4) - .7112 (TM5) - .4572 (TM7)

Haze = .8832 (TM1) - .0819 (TM2) - .4580 (TM3) - .0032 (TM4) - .0563 (TM5) + .0130 (TM7)

Source: Modified from Crist et al 1986, Jensen 1996
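The TM4 coefficients above can be applied as a simple matrix multiplication. The sketch below assumes a NumPy array whose last axis holds bands TM1, TM2, TM3, TM4, TM5, and TM7 in that order; it is illustrative only and not the Image Interpreter implementation.

import numpy as np

# Rows: Brightness, Greenness, Wetness, Haze; columns: TM1, TM2, TM3, TM4, TM5, TM7.
TM4_COEFFICIENTS = np.array([
    [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],
    [-0.2848, -0.2435, -0.5436,  0.7243,  0.0840, -0.1800],
    [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],
    [ 0.8832, -0.0819, -0.4580, -0.0032, -0.0563,  0.0130],
])

def tasseled_cap_tm4(image):
    """image: (rows, cols, 6) array ordered TM1, TM2, TM3, TM4, TM5, TM7."""
    # Each output band is a weighted sum of the six input bands.
    return image.astype(np.float64) @ TM4_COEFFICIENTS.T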

RGB to IHS

The color monitors used for image display on image processing systems have three
color guns. These correspond to red, green, and blue (R,G,B), the additive primary
colors. When displaying three bands of a multiband data set, the viewed image is said
to be in R,G,B space.

However, it is possible to define an alternate color space that uses Intensity (I), Hue (H),
and Saturation (S) as the three positioned parameters (in lieu of R,G, and B). This system
is advantageous in that it presents colors more nearly as perceived by the human eye.

• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black)
to 1 (white).

• Saturation represents the purity of color and also varies linearly from 0 to 1.

• Hue is representative of the color or dominant wavelength of the pixel. It varies
from 0 at the red midpoint through green and blue back to the red midpoint at 360.
It is a circular dimension (see Figure 64). In Figure 64, 0-255 is the selected range; it
could be defined as any data range. However, hue must vary from 0-360 to define
the entire sphere (Buchanan 1979).

Figure 64: Intensity, Hue, and Saturation Color Coordinate System (intensity as the vertical axis, saturation as the distance from that axis, and hue as the angle around the circle of red, green, and blue; values shown on a 0 to 255 range)

Source: Buchanan 1979

To use the RGB to IHS transform, use the RGB to IHS function from Image Interpreter.

The algorithm used in the Image Interpreter RGB to IHS transform is (Conrac 1980):

r = (M - R) / (M - m)

g = (M - G) / (M - m)

b = (M - B) / (M - m)

where:

R, G, B are each in the range of 0 to 1.0.
r, g, b are the transformed values used in the hue equations below.
M = largest value, R, G, or B
m = least value, R, G, or B

NOTE: At least one of the r, g, or b values is 0, corresponding to the color with the largest
value, and at least one of the r, g, or b values is 1, corresponding to the color with the least value.

The equation for calculating intensity in the range of 0 to 1.0 is:

I = (M + m) / 2

The equations for calculating saturation in the range of 0 to 1.0 are:

If M = m, S = 0
If I < 0.5, S = (M - m) / (M + m)
If I > 0.5, S = (M - m) / (2 - M - m)

The equations for calculating hue in the range of 0 to 360 are:

If M = m, H = 0
If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)

where:

R, G, B are each in the range of 0 to 1.0.
r, g, b are the transformed values defined above.
M = largest value, R, G, or B
m = least value, R, G, or B
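A scalar Python sketch of the equations above (intensity, saturation, and hue for a single pixel); it is not the Image Interpreter code, and the handling of the gray case (M = m) follows the conventions stated above.

def rgb_to_ihs(R, G, B):
    """R, G, B in 0-1; returns (I, H, S) with H in degrees (0-360)."""
    M, m = max(R, G, B), min(R, G, B)
    I = (M + m) / 2.0

    if M == m:                      # gray pixel: no hue or saturation
        return I, 0.0, 0.0
    S = (M - m) / (M + m) if I <= 0.5 else (M - m) / (2.0 - M - m)

    # Transformed values used by the hue equations.
    r, g, b = (M - R) / (M - m), (M - G) / (M - m), (M - B) / (M - m)
    if R == M:
        H = 60.0 * (2.0 + b - g)
    elif G == M:
        H = 60.0 * (4.0 + r - b)
    else:
        H = 60.0 * (6.0 + g - r)
    return I, H, S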

IHS to RGB

The family of IHS to RGB functions is intended as a complement to the standard RGB to
IHS transform.

In the IHS to RGB algorithm, the values for hue (H), a circular dimension, are 0 - 360,
while I and S range from 0 to 1. However, depending on the dynamic range of the DN
values of the input image, it is possible that I or S or both will occupy only a part of the
0 - 1 range. In this model, a min-max stretch is applied to either I, S, or both, so that they
more fully utilize the 0 - 1 value range. After stretching, the full IHS image is
retransformed back to the original RGB space. As the parameter Hue is not modified, it
largely defines what we perceive as color, and the resultant image looks very much like
the input image.

It is not essential that the input parameters (IHS) to this transform be derived from an
RGB to IHS transform. The user could define I and/or S as other parameters, set Hue at
0-360, and then transform to RGB space. This is a method of color coding other data sets.

In another approach (Daily 1983), H and I are replaced by low- and high-frequency
radar imagery. The user can also replace I with radar intensity before the IHS to RGB
transform (Holcomb 1993). Chavez evaluates the use of the IHS to RGB transform to
resolution merge Landsat TM with SPOT panchromatic imagery (Chavez 1991).

NOTE: Use the Spatial Modeler for this analysis.


See the previous section on RGB to IHS transform for more information.

The algorithm used by ERDAS IMAGINE for the IHS to RGB function is (Conrac 1980):

Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0

If I ≤ 0.5, M = I (1 + S)

If I > 0.5, M = I + S - I (S)

m = 2I - M

The equations for calculating R in the range of 0 to 1.0 are:

 -----
H
-
If H < 60, R = m + (M - m)
 60 
If 60 £ H < 180, R = M
 240
------------------- 
–H
If 180 £ H < 240, R = m + (M - m)  60 
If 240 £ H £ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m) ((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m) ((360 - H) / 60)

The equations for calculating B in the range of 0 to 1.0 are:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m) ((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m) ((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M
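A sketch of the IHS to RGB equations above for a single pixel, without the optional min-max stretch. The three channels share the same piecewise ramp, offset by 120 degrees, which reproduces the R, G, and B equations listed above; the helper function is an illustrative construction, not part of the ERDAS algorithm.

def ihs_to_rgb(I, H, S):
    """I, S in 0-1, H in degrees (0-360); returns (R, G, B) in 0-1."""
    M = I * (1.0 + S) if I <= 0.5 else I + S - I * S
    m = 2.0 * I - M

    def channel(h):
        # Piecewise ramp shared by R, G, and B, offset per channel.
        if h < 60:
            return m + (M - m) * (h / 60.0)
        if h < 180:
            return M
        if h < 240:
            return m + (M - m) * ((240.0 - h) / 60.0)
        return m

    R = channel(H)
    G = channel((H - 120.0) % 360.0)
    B = channel((H + 120.0) % 360.0)
    return R, G, B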



Indices

Indices are used to create output images by mathematically combining the DN values of
different bands. These may be simplistic:

(Band X - Band Y)

or more complex:

(Band X - Band Y) / (Band X + Band Y)

In many instances, these indices are ratios of band DN values:

Band X / Band Y

These ratio images are derived from the absorption/reflection spectra of the material
of interest. The absorption is based on the molecular bonds in the (surface) material.
Thus, the ratio often gives information on the chemical composition of the target.

See "CHAPTER 1: Raster Data" for more information on the absorption/reflection spectra.

Applications
• Indices are used extensively in mineral exploration and vegetation analyses to
bring out small differences between various rock types and vegetation classes. In
many cases, judiciously chosen indices can highlight and enhance differences
which cannot be observed in the display of the original color bands.

• Indices can also be used to minimize shadow effects in satellite and aircraft
multispectral images. Black and white images of individual indices or a color
combination of three ratios may be generated.

• Certain combinations of TM ratios are routinely used by geologists for
interpretation of Landsat imagery for mineral type. For example: Red 5/7, Green
5/4, Blue 3/1.

Integer Scaling Considerations


The output images obtained by applying indices are generally created in floating point
to preserve all numerical precision. If there are two bands, A and B, then:

ratio = A/B

If A>>B (much greater than), then a normal integer scaling would be sufficient. If A>B
and A is never much greater than B, scaling might be a problem in that the data range
might only go from 1 to 2 or 1 to 3. Integer scaling in this case would give very little
contrast.


For cases in which A<B or A<<B, integer scaling would always truncate to 0. All
fractional data would be lost. A multiplication constant factor would also not be very
effective in seeing the data contrast between 0 and 1, which may very well be a
substantial part of the data image. One approach to handling the entire ratio range is to
actually process the function:

ratio = atan(A/B)

This would give a better representation for A/B < 1 as well as for A/B > 1 (Faust 1992).
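A sketch of this approach, assuming NumPy bands and a simple linear mapping of the arctangent output onto the 8-bit range; the output scaling is a choice made for the example.

import numpy as np

def scaled_ratio(band_a, band_b):
    """Ratio image computed as atan(A/B), then linearly scaled to 0 - 255."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    ratio = np.arctan(np.divide(a, b, out=np.zeros_like(a), where=b != 0))
    # arctan maps 0..infinity onto 0..pi/2, preserving contrast for A/B < 1 and A/B > 1.
    return (ratio / (np.pi / 2.0) * 255.0).astype(np.uint8)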

Index Examples
The following are examples of indices which have been preprogrammed in the Image
Interpreter in ERDAS IMAGINE:

• IR/R (infrared/red)

• SQRT (IR/R)

• Vegetation Index = IR-R

• Normalized Difference Vegetation Index (NDVI) = (IR - R) / (IR + R)

• Transformed NDVI (TNDVI) = SQRT [ (IR - R) / (IR + R) + 0.5 ]

• Iron Oxide = TM 3/1

• Clay Minerals = TM 5/7

• Ferrous Minerals = TM 5/4

• Mineral Composite = TM 5/7, 5/4, 3/1

• Hydrothermal Composite = TM 5/7, 3/1, 4/3

Source: Modified from Sabins 1987, Jensen 1996, Tucker 1979



The following table shows the infrared (IR) and red (R) band for some common sensors
(Tucker 1979, Jensen 1996):

Sensor          IR Band    R Band
Landsat MSS     7          5
SPOT XS         3          2
Landsat TM      4          3
NOAA AVHRR      2          1

Image Algebra
Image algebra is a general term used to describe operations that combine the pixels of
two or more raster layers in mathematical combinations. For example, the calculation:

(infrared band) - (red band)

DNir - DNred

yields a simple, yet very useful, measure of the presence of vegetation. At the other
extreme is the Tasseled Cap calculation (described earlier in this section), which uses
a more complicated mathematical combination of as many as six bands to define
vegetation.

Band ratios, such as:

TM 5 / TM 7 = clay minerals

are also commonly used. These are derived from the absorption spectra of the material
of interest; the numerator is a baseline of background absorption and the denominator
is an absorption peak.

See "CHAPTER 1: Raster Data" for more information on absorption/reflection spectra.

The Normalized Difference Vegetation Index (NDVI) is a combination of addition,
subtraction, and division:

NDVI = (IR - R) / (IR + R)

Hyperspectral Image Processing

Hyperspectral image processing is in many respects simply an extension of the
techniques used for multi-spectral datasets; indeed, there is no set number of bands
beyond which a dataset is hyperspectral. Thus, many of the techniques or algorithms
currently used for multi-spectral datasets are logically applicable, regardless of the
number of bands in the dataset (see the discussion of Figure 7 on page 12 of this
manual). What is of relevance in evaluating these datasets is not the number of bands
per se, but the spectral band-width of the bands (channels). As the bandwidths get
smaller, it becomes possible to view the dataset as an absorption spectrum rather than
a collection of discontinuous bands. Analysis of the data in this fashion is termed
"imaging spectrometry".

A hyperspectral image data set is recognized as a three-dimensional pixel array. As in
a traditional raster image, the x-axis is the column indicator and the y-axis is the row
indicator. The z-axis is the band number or, more correctly, the wavelength of that band
(channel). A hyperspectral image can be visualized as shown in Figure 65.

Figure 65: Hyperspectral Data Axes (x and y are the column and row indicators; z is the band number or wavelength)

A dataset with narrow contiguous bands can be plotted as a continuous spectrum and
compared to a library of known spectra using full profile spectral pattern fitting
algorithms. A serious complication in using this approach is assuring that all spectra are
corrected to the same background.

At present, it is possible to obtain spectral libraries of common materials. The JPL and
USGS mineral spectra libraries are included in IMAGINE. These are laboratory
measured reflectance spectra of reference minerals, often of high purity and defined
particle size. The spectrometer is commonly purged with pure nitrogen to avoid absor-
bance by atmospheric gases. Conversely, the remote sensor records an image after the
sunlight has (twice) passed through the atmosphere with variable and unknown
amounts of water vapor, CO2, etc. (This atmospheric absorbance curve is shown in
Figure 4.) The unknown atmospheric absorbances superimposed upon the Earth
surface reflectances make comparison to laboratory spectra, or to spectra taken under a
different atmosphere, inexact. Indeed, it has been shown that atmospheric composition
can vary within a single scene. This complicates the use of spectral signatures even
within one scene. Atmospheric absorption and scattering are discussed on pages 6
through 10 of this manual.



A number of approaches have been advanced to help compensate for this atmospheric
contamination of the spectra. These are introduced briefly on page 130 of this manual
for the general case. Two specific techniques, Internal Average Relative Reflectance
(IARR) and Log Residuals, are implemented in IMAGINE 8.3. These have the
advantage of not requiring auxiliary input information; the correction parameters are
scene-derived. The disadvantage is that they produce relative reflectances (i.e., they can
be compared to reference spectra in a semi-quantitative manner only).

Normalize

Pixel albedo is affected by sensor look angle and local topographic effects. For airborne
sensors this look angle effect can be large across a scene; it is less pronounced for
satellite sensors. Some scanners look to both sides of the aircraft. For these datasets, the
difference in average scene luminance between the two half-scenes can be large.
these effects, an "equal area normalization" algorithm can be applied (Zamudio and
Atkinson 1990). This calculation shifts each (pixel) spectrum to the same overall average
brightness. This enhancement must be used with a consideration of whether this
assumption is valid for the scene. For an image which contains 2 (or more) distinctly
different regions (e.g., half ocean and half forest), this may not be a valid assumption.
Correctly applied, this normalization algorithm helps remove albedo variations and
topographic effects.
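A minimal sketch of an equal area normalization, assuming that each pixel spectrum is scaled so its average brightness matches the overall scene average; the exact ERDAS IMAGINE formulation may differ.

import numpy as np

def normalize_equal_area(cube):
    """cube: (rows, cols, bands). Scale each pixel spectrum to the scene average brightness."""
    cube = cube.astype(np.float64)
    pixel_mean = cube.mean(axis=2, keepdims=True)          # per-pixel average brightness
    scene_mean = cube.mean()                                # overall average brightness
    scale = np.divide(scene_mean, pixel_mean,
                      out=np.ones_like(pixel_mean), where=pixel_mean != 0)
    return cube * scale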

IAR Reflectance

As discussed above, it is desired to convert the spectra recorded by the sensor into a
form that can be compared to known reference spectra. This technique calculates a
relative reflectance by dividing each spectrum (pixel) by the scene average spectrum
(Kruse 1988). The algorithm is based on the assumption that this scene average
spectrum is largely composed of the atmospheric contribution and that the atmosphere
is uniform across the scene. However, these assumptions are not always valid. In
particular, the average spectrum could contain absorption features related to target
materials of interest. The algorithm could then overcompensate for (i.e., remove) these
absorbance features. The average spectrum should be visually inspected to check for
this possibility. Properly applied, this technique can remove the majority of
atmospheric effects.
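A sketch of the IAR Reflectance calculation, assuming a (rows, columns, bands) NumPy cube; function and variable names are illustrative.

import numpy as np

def iar_reflectance(cube):
    """Divide each pixel spectrum by the scene average spectrum (relative reflectance)."""
    cube = cube.astype(np.float64)
    average_spectrum = cube.mean(axis=(0, 1))               # one value per band
    return np.divide(cube, average_spectrum,
                     out=np.zeros_like(cube), where=average_spectrum != 0)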


Log Residuals

The Log Residuals technique was originally described by Green and Craig (1985), but
has been variously modified by researchers. The version implemented here is similar to
the approach of Lyon (1987). The algorithm can be conceptualized as:

Output Spectrum = (input spectrum) - (average spectrum) - (pixel brightness) + (image brightness)

All parameters in the above equation are in logarithmic space, hence the name.

This algorithm corrects the image for atmospheric absorption, systemic instrumental
variation, and illuminance differences between pixels.
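A sketch of the Log Residuals calculation, assuming that "average spectrum" is the per-band scene average, "pixel brightness" is the per-pixel average over bands, and "image brightness" is the overall average, all in log space; these interpretations, and the small clipping constant, are assumptions for the example.

import numpy as np

def log_residuals(cube):
    """Log Residuals as conceptualized above, computed in logarithmic space."""
    log_cube = np.log(np.clip(cube.astype(np.float64), 1e-6, None))   # guard against log(0)
    average_spectrum = log_cube.mean(axis=(0, 1))             # per-band scene average
    pixel_brightness = log_cube.mean(axis=2, keepdims=True)   # per-pixel average over bands
    image_brightness = log_cube.mean()                        # single overall average
    return log_cube - average_spectrum - pixel_brightness + image_brightness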

Rescale

Many hyperspectral scanners record the data in a format larger than 8-bit. In addition,
many of the calculations used to correct the data will be performed with a floating point
format to preserve precision. At some point, it will be advantageous to compress the
data back into an 8-bit range for effective storage and/or display. However, when
rescaling data to be used for imaging spectrometry analysis, it is necessary to consider
all data values within the data cube, not just within the layer of interest. This algorithm
is designed to maintain the 3-dimensional integrity of the data values. Any bit format
can be input. The output image will always be 8-bit.

When rescaling a data cube, a decision must be made as to which bands to include in
the rescaling. Clearly, a “bad” band (i.e., a low S/N layer) should be excluded. Some
sensors image in different regions of the electromagnetic (EM) spectrum (e.g., reflective
and thermal infra-red or long- and short-wave reflective infra-red). When rescaling
these data sets, it may be appropriate to rescale each EM region separately. These can
be input using the Select Layer option in the IMAGINE Viewer.
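A sketch of such a rescale, assuming a single global minimum and maximum taken over the included bands only; excluded bands are still rescaled but do not influence the calculation, matching the behavior noted for the Rescale dialog below.

import numpy as np

def rescale_cube(cube, include_bands=None):
    """Rescale a data cube to 8-bit using min/max taken over the included bands only."""
    cube = cube.astype(np.float64)
    selected = cube if include_bands is None else cube[..., include_bands]
    lo, hi = selected.min(), selected.max()               # one min/max for the whole cube
    out = (cube - lo) / (hi - lo) * 255.0                 # excluded bands are still rescaled
    return np.clip(out, 0, 255).astype(np.uint8)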

Figure 66: Rescale GUI (the dialog is used to rescale the image; the bands to be included in the calculation are entered in the dialog)

NOTE: Bands 26 through 28 and 46 through 55 have been deleted from the calculation. The
deleted bands will still be rescaled, but they will not be factored into the rescale calculation.

Processing Sequence

The above (and other) processing steps are utilized to convert the raw image into a form
that is easier to interpret. This interpretation often involves comparing the imagery,
either visually or automatically, to laboratory spectra or other known "end-member"
spectra. At present there is no widely accepted standard processing sequence to achieve
this, although some have been advanced in the scientific literature (Zamudio and
Atkinson 1990; Kruse 1988; Green and Craig 1985; Lyon 1987). Two common processing
sequences have been programmed as single automatic enhancements, as follows:

• Automatic Relative Reflectance — Implements the following algorithms: Normalize, IAR Reflectance, Rescale.

• Automatic Log Residuals — Implements the following algorithms: Normalize, Log Residuals, Rescale.


Spectrum Average

In some instances, it may be desirable to average together several pixels. This is
mentioned above under IAR Reflectance as a test for applicability. In preparing
reference spectra for classification, or to save in the Spectral Library, an average
spectrum may be more representative than a single pixel. Note that to implement this
function it is necessary to define which pixels to average using the IMAGINE AOI tools.
This enables the user to average any set of pixels which are defined; they do not need
to be contiguous and there is no limit on the number of pixels averaged. Note that the
output from this program is a single pixel with the same number of input bands as the
original image.

Figure 67: Spectrum Average GUI (an AOI polygon is defined in the Viewer, and the dialog is used to enter the Area of Interest and compute the average spectrum)


Signal to Noise

The signal-to-noise (S/N) ratio is commonly used to evaluate the usefulness or validity
of a particular band. In this implementation, S/N is defined as Mean/Std.Dev. in a 3X3
moving window. After running this function on a data set, each layer in the output
image should be visually inspected to evaluate suitability for inclusion into the analysis.
Layers deemed unacceptable can be excluded from the processing by using the Select
Layers option of the various Graphical User Interfaces (GUIs). This can be used as a
sensor evaluation tool.
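A sketch of this S/N calculation for one band, using SciPy's uniform filter to form the 3 × 3 local mean and standard deviation; the use of SciPy is an assumption of the example, as ERDAS IMAGINE computes this internally.

import numpy as np
from scipy import ndimage

def snr_band(band):
    """S/N defined as local mean / local standard deviation in a 3 x 3 moving window."""
    band = band.astype(np.float64)
    local_mean = ndimage.uniform_filter(band, size=3)
    local_sq_mean = ndimage.uniform_filter(band * band, size=3)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return np.divide(local_mean, local_std,
                     out=np.zeros_like(local_mean), where=local_std > 0)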

Mean per Pixel

This algorithm outputs a single band, regardless of the number of input bands. By
visually inspecting this output image, it is possible to see if particular pixels are "outside
the norm". While this does not mean that these pixels are incorrect, they should be
evaluated in this context. For example, a CCD detector could have several sites (pixels)
that are dead or have an anomalous response; these would be revealed in the Mean per
Pixel image. This can be used as a sensor evaluation tool.

Profile Tools

To aid in visualizing this three-dimensional data cube, three basic tools have been
designed:

• Spectral Profile — a display that plots the reflectance spectrum of a designated
pixel, as shown in Figure 68.

Figure 68: Spectral Profile

• Spatial Profile — a display that plots spectral information along a user-defined
polyline. The data can be displayed two-dimensionally for a single band, as in
Figure 69.

Figure 69: Two-Dimensional Spatial Profile

The data can also be displayed three-dimensionally for multiple bands, as in Figure 70.

Figure 70: Three-Dimensional Spatial Profile



• Surface Profile — a display that allows the operator to designate an x,y area and
view any selected layer, z.

Figure 71: Surface Profile

Wavelength Axis

Data tapes containing hyperspectral imagery commonly designate the bands as a
simple numerical sequence. When plotted using the profile tools, this yields an x-axis
labeled as 1,2,3,4, etc. Elsewhere on the tape or in the accompanying documentation is
a file which lists the center frequency and width of each band. This information should
be linked to the image intensity values for accurate analysis or comparison to other
spectra, such as the Spectra Libraries.

Spectral Library

As discussed on page 167, two spectral libraries are presently included in the software
package (JPL and USGS). In addition, it is possible to extract spectra (pixels) from a data
set or prepare average spectra from an image and save these in a user-derived spectral
library. This library can then be used for visual comparison with other image spectra,
or it can be used as input signatures in a classification.

Classification

The advent of datasets with very large numbers of bands has pressed the limits of the
"traditional classifiers" such as Isodata, Maximum Likelihood, and Minimum Distance,
but has not obviated their usefulness. Much research has been directed toward the use
of Artificial Neural Networks (ANN) to more fully utilize the information content of
hyperspectral images (Merenyi, Taranik, Monor, and Farrand 1996). To date, however,
these advanced techniques have proven to be only marginally better at a considerable
cost in complexity and computation. For certain applications, both Maximum
Likelihood (Benediktsson, Swain, Ersoy, and Hong 1990) and Minimum Distance
(Merenyi, Taranik, Monor, and Farrand 1996) have proven to be appropriate.
"CHAPTER 6: Classification" contains a detailed discussion of these classification
techniques.

A second category of classification techniques utilizes the imaging spectroscopy model
for approaching hyperspectral datasets. This approach requires a library of possible
end-member materials. These can be from laboratory measurements using a scanning
spectrometer and reference standards (Clark, Gallagher, and Swayze 1990). The JPL
and USGS libraries are compiled this way. Or the reference spectra (signatures) can be
scene-derived from either the scene under study or another similar scene (Adams,
Smith, and Gillespie 1989).

System Requirements

Because of the large number of bands, a hyperspectral dataset can be surprisingly large.
For example, an AVIRIS scene is only 512 × 614 pixels in dimension, which seems small.
However, when multiplied by 224 bands (channels) and 16 bits, it requires over 140
megabytes of data storage space. Processing this scene requires correspondingly large
swap and temp space. In practice, it has been found that a 48 Mb memory board and
100 Mb of swap space is a minimum requirement for efficient processing. Temporary
file space requirements will, of course, depend upon the process being run.



Fourier Analysis

Image enhancement techniques can be divided into two basic categories: point and
neighborhood. Point techniques enhance the pixel based only on its value, with no
concern for the values of neighboring pixels. These techniques include contrast
stretches (non-adaptive), classification, level slices, etc. Neighborhood techniques
enhance a pixel based on the values of surrounding pixels. As a result, these techniques
require the processing of a possibly large number of pixels for each output pixel. The
most common way of implementing these enhancements is via a moving window
convolution. However, as the size of the moving window increases, the number of
requisite calculations becomes enormous. An enhancement that requires a convolution
operation in the spatial domain can be implemented as a simple multiplication in
frequency space—a much faster calculation.

In ERDAS IMAGINE, the Fast Fourier Transform (FFT) is used to convert a raster image
from the spatial (normal) domain into a frequency domain image. The FFT calculation
converts the image into a series of two-dimensional sine waves of various frequencies.
The Fourier image itself cannot be easily viewed, but the magnitude of the image can
be calculated, which can then be displayed either in the IMAGINE Viewer or in the FFT
Editor. Analysts can edit the Fourier image to reduce noise or remove periodic features,
such as striping. Once the Fourier image is edited, it is then transformed back into the
spatial domain by using an inverse Fast Fourier Transform. The result is an enhanced
version of the original image.

This section focuses on the Fourier editing techniques available in the ERDAS
IMAGINE FFT Editor. Some rules and guidelines for using these tools are presented in
this document. Also included are some examples of techniques that will generally work
for specific applications, such as striping.

NOTE: You may also want to refer to the works cited at the end of this section for more
information.

The basic premise behind a Fourier transform is that any one-dimensional function, f(x)
(which might be a row of pixels), can be represented by a Fourier series consisting of
some combination of sine and cosine terms and their associated coefficients. For
example, a line of pixels with a high spatial frequency gray scale pattern might be repre-
sented in terms of a single coefficient multiplied by a sin(x) function. High spatial
frequencies are those that represent frequent gray scale changes in a short pixel
distance. Low spatial frequencies represent infrequent gray scale changes that occur
gradually over a relatively large number of pixel distances. A more complicated
function, f(x), might have to be represented by many sine and cosine terms with their
associated coefficients.

Figure 72: One-Dimensional Fourier Analysis (left: the original function f(x); center: the sine and cosine components of f(x); right: the Fourier transform of f(x), each plotted from 0 to 2π. These graphics are for illustration purposes only and are not mathematically accurate.)

Figure 72 shows how a function f(x) can be represented as a linear combination of sine
and cosine. The Fourier transform of that same function is also shown.

A Fourier transform is a linear transformation that allows calculation of the coefficients
necessary for the sine and cosine terms to adequately represent the image. This theory
is used extensively in electronics and signal processing, where electrical signals are
continuous and not discrete. Therefore, a discrete Fourier transform (DFT) has been
developed. Because of the computational load in calculating the values for all the sine
and cosine terms along with the coefficient multiplications, a highly efficient version of
the DFT was developed and called the Fast Fourier Transform (FFT).

To handle images which consist of many one-dimensional rows of pixels, a two-dimensional
FFT has been devised that incrementally uses one-dimensional FFTs in each
direction and then combines the result. These images are symmetrical about the origin.

Applications
Fourier transformations are typically used for the removal of noise such as striping,
spots, or vibration in imagery by identifying periodicities (areas of high spatial
frequency). Fourier editing can be used to remove regular errors in data such as those
caused by sensor anomalies (e.g., striping). This analysis technique can also be used
across bands as another form of pattern/feature recognition.

Fast Fourier Transform (FFT)

The Fast Fourier Transform (FFT) calculation is:

F(u,v) = Σ (x = 0 to M-1) Σ (y = 0 to N-1) [ f(x,y) e^(-j2πux/M - j2πvy/N) ]

for 0 ≤ u ≤ M - 1, 0 ≤ v ≤ N - 1
where:

M = the number of pixels horizontally


N= the number of pixels vertically
u,v= spatial frequency variables
e≈ 2.71828, the natural logarithm base
j = the imaginary component of a complex number

The number of pixels horizontally and vertically must each be a power of two. If the
dimensions of the input image are not a power of two, they are padded up to the next
highest power of two. There is more information about this later in this section.

Source: Modified from Oppenheim 1975 and Press 1988

Images computed by this algorithm are saved with an .fft file extension.

You should run a Fourier Magnitude transform on an .fft file before viewing it in the ERDAS
IMAGINE Viewer. The FFT Editor automatically displays the magnitude without further
processing.

Fourier Magnitude

The raster image generated by the FFT calculation is not an optimum image for viewing
or editing. Each pixel of a Fourier image is a complex number (i.e., it has two compo-
nents—real and imaginary). For display as a single image, these components are
combined in a root-sum of squares operation. Also, since the dynamic range of Fourier
spectra vastly exceeds the range of a typical display device, the Fourier Magnitude
calculation involves a logarithmic function.

Finally, a Fourier image is symmetric about the origin (u,v = 0,0). If the origin is plotted
at the upper left corner, the symmetry is more difficult to see than if the origin is at the
center of the image. Therefore, in the Fourier magnitude image, the origin is shifted to
the center of the raster array.


In this transformation, each .fft layer is processed twice. First, the maximum magnitude,
|X|max, is computed. Then, the following computation is performed for each FFT
element magnitude x:

y(x) = 255.0 ln [ ( |x| / |x|max ) (e - 1) + 1 ]

where:

x= input FFT element


y= the normalized log magnitude of the FFT element
|x|max= the maximum magnitude
e≈ 2.71828, the natural logarithm base
| |= the magnitude operator

This function was chosen so that y would be proportional to the logarithm of a linear
function of x, with y(0)=0 and y (|x|max) = 255.

Source: ERDAS
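A sketch of the magnitude computation for display, assuming NumPy's FFT with the origin shifted to the center as described above; it is illustrative, not the FFT Editor code.

import numpy as np

def fourier_magnitude(band):
    """Shifted, log-scaled magnitude of the 2-D FFT, as in the equation above."""
    fft = np.fft.fftshift(np.fft.fft2(band.astype(np.float64)))   # origin moved to the center
    x = np.abs(fft)
    x_max = x.max()
    # y(0) = 0 and y(|x|max) = 255, proportional to the log of a linear function of x.
    return 255.0 * np.log((x / x_max) * (np.e - 1.0) + 1.0)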

In Figure 73, Image A is one band of a badly striped Landsat TM scene. Image B is the
Fourier Magnitude image derived from the Landsat image.

Figure 73: Example of Fourier Magnitude (Image A, the spatial domain image with its origin at the upper left; Image B, the Fourier magnitude image with its origin at the center)


Note that although Image A has been transformed into Image B, these raster images are
organized very differently. The origin of Image A is at (x,y) = (0,0) in the upper left
corner. In Image B, the origin (u,v) = (0,0) is in the center of the raster. The low
frequencies are plotted near this origin while the higher frequencies are plotted further
out. Generally, the majority of the information in an image is in the low frequencies.
This is indicated by the bright area at the center (origin) of the Fourier image.

It is important to realize that a position in a Fourier image, designated as (u,v), does not
always represent the same frequency, because it depends on the size of the input raster
image. A large spatial domain image contains components of lower frequency than a
small spatial domain image. As mentioned, these lower frequencies are plotted nearer
to the center (u,v = 0,0) of the Fourier image. Note that the units of spatial frequency are
inverse length, e.g., m^-1.

The sampling increments in the spatial and frequency domain are related by:

Δu = 1 / (M Δx)

Δv = 1 / (N Δy)
where:

M= horizontal image size in pixels


N= vertical image size in pixels
∆ x= pixel size
∆ y= pixel size

For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5m) into a
Fourier image:

Δu = Δv = 1 / (512 × 28.5) = 6.85 × 10^-5 m^-1

u or v      Frequency
0           0
1           6.85 × 10^-5 m^-1
2           13.7 × 10^-5 m^-1

If the Landsat TM image was 1024 × 1024:

Δu = Δv = 1 / (1024 × 28.5) = 3.42 × 10^-5 m^-1

u or v      Frequency
0           0
1           3.42 × 10^-5 m^-1
2           6.85 × 10^-5 m^-1

So, as noted above, the frequency represented by a (u,v) position depends on the size of
the input image.

For the above calculation, the sample images are 512 × 512 and 1024 × 1024—powers of
two. These were selected because the FFT calculation requires that the height and width
of the input image be a power of two (although the image need not be square). In
practice, input images will usually not meet this criterion. Three possible solutions are
available in ERDAS IMAGINE:

• Subset the image.

• Pad the image — the input raster is increased in size to the next power of two by
imbedding it in a field of the mean value of the entire input image.

• Resample the image so that its height and width are powers of two.

For example:

Figure 74: The Padding Technique (a 400 × 300 input image embedded in a 512 × 512 array filled with the mean value of the input image)

The padding technique is automatically performed by the FFT program. It produces a
minimum of artifacts in the output Fourier image. If the image is subset using a power
of two (i.e., 64 × 64, 128 × 128, 64 × 128, etc.) no padding is used.
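A sketch of the padding step, assuming the input is embedded at the top-left of the enlarged array; the exact placement used by the FFT program is not specified here.

import numpy as np

def pad_to_power_of_two(band):
    """Embed the image in a power-of-two array filled with the image mean value."""
    rows, cols = band.shape
    new_rows = 1 << int(np.ceil(np.log2(rows)))
    new_cols = 1 << int(np.ceil(np.log2(cols)))
    padded = np.full((new_rows, new_cols), band.mean(), dtype=np.float64)
    padded[:rows, :cols] = band
    return padded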

Inverse Fast Fourier Transform (IFFT)

The Inverse Fast Fourier Transform (IFFT) computes the inverse two-dimensional Fast
Fourier Transform of the spectrum stored.

• The input file must be in the compressed .fft format described earlier (i.e., output
from the Fast Fourier Transform or FFT Editor).

• If the original image was padded by the FFT program, the padding will
automatically be removed by IFFT.

• This program creates (and deletes, upon normal termination) a temporary file large
enough to contain one entire band of .fft data.

The specific expression calculated by this program is:

f(x,y) = (1 / (M N)) Σ (u = 0 to M-1) Σ (v = 0 to N-1) [ F(u,v) e^(j2πux/M + j2πvy/N) ]

for 0 ≤ x ≤ M - 1, 0 ≤ y ≤ N - 1
where:

M= the number of pixels horizontally


N= the number of pixels vertically
u,v= spatial frequency variables
e≈ 2.71828, the natural logarithm base

Source: Modified from Oppenheim 1975 and Press 1988

Images computed by this algorithm are saved with an .ifft.img file extension by default.

Filtering

Operations performed in the frequency (Fourier) domain can be visualized in the
context of the familiar convolution function. The mathematical basis of this interrela-
tionship is the convolution theorem, which states that a convolution operation in the
spatial domain is equivalent to a multiplication operation in the frequency domain:

g(x,y) = h(x,y) ∗ f(x,y) ≡ G(u,v) = H(u,v) × F(u,v)

where:

f(x,y) = input image


h(x,y) = position invariant operation (convolution kernel)
g(x,y) = output image
G, F, H = Fourier transforms of g, f, h

The names high-pass, low-pass, high-frequency, etc., indicate that these convolution
functions derive from the frequency domain.


Low-Pass Filtering
The simplest example of this relationship is the low-pass kernel. The name, low-pass
kernel, is derived from a filter that would pass low frequencies and block (filter out)
high frequencies. In practice, this is easily achieved in the spatial domain by the
M = N = 3 kernel:

1 1 1
1 1 1
1 1 1

Obviously, as the size of the image and, particularly, the size of the low-pass kernel
increases, the calculation becomes more time-consuming. Depending on the size of the
input image and the size of the kernel, it can be faster to generate a low-pass image via
Fourier processing.

Figure 75 compares Direct and Fourier domain processing for finite area convolution.

Figure 75: Comparison of Direct and Fourier Domain Processing (size of the calculation neighborhood plotted against size of the input image; direct processing is more efficient for small neighborhoods, Fourier processing for large ones)

Source: Pratt, 1991

In the Fourier domain, the low-pass operation is implemented by attenuating the pixels
whose frequencies satisfy:

u² + v² > D0²

D0 is frequently called the “cutoff frequency.”

As mentioned, the low-pass information is concentrated toward the origin of the
Fourier image. Thus, a smaller radius (r) has the same effect as a larger N (where N is
the size of a kernel) in a spatial domain low-pass convolution.


As was pointed out earlier, the frequency represented by a particular u,v (or r) position
depends on the size of the input image. Thus, a low-pass operation of r = 20 will be
equivalent to a spatial low-pass of various kernel sizes, depending on the size of the
input image.

For example:

Image Size    Fourier Low-Pass (r)    Convolution Low-Pass (N)
64 × 64       50                      3
64 × 64       30                      3.5
64 × 64       20                      5
64 × 64       10                      9
64 × 64       5                       14
128 × 128     20                      13
128 × 128     10                      22
256 × 256     20                      25
256 × 256     10                      42

This table shows that using a window on a 64 × 64 Fourier image with a radius of 50 as
the cutoff is the same as using a 3 × 3 low-pass kernel on a 64 × 64 spatial domain image.

High-Pass Filtering
Just as images can be smoothed (blurred) by attenuating the high-frequency compo-
nents of an image using low-pass filters, images can be sharpened and edge-enhanced
by attenuating the low-frequency components using high-pass filters. In the Fourier
domain, the high-pass operation is implemented by attenuating pixels whose
frequencies satisfy:

u² + v² < D0²

Windows

The attenuation discussed above can be done in many different ways. In ERDAS
IMAGINE Fourier processing, five window functions are provided to achieve different
types of attenuation:

• Ideal

• Bartlett (triangular)

• Butterworth

• Gaussian

• Hanning (cosine)

Each of these windows must be defined when a frequency domain process is used. This
application is perhaps easiest understood in the context of the high-pass and low-pass
filter operations. Each window is discussed in more detail below.

Ideal
The simplest low-pass filtering is accomplished using the ideal window, so named
because its cutoff point is absolute. Note that in Figure 76 the cross section is “ideal.”

H(u,v) = 1 if D(u,v) ≤ D0
H(u,v) = 0 if D(u,v) > D0

Figure 76: An Ideal Cross Section (gain H(u,v) plotted against frequency D(u,v), dropping from 1 to 0 at the cutoff D0)

All frequencies inside a circle of a radius D0 are retained completely (passed), and all
frequencies outside the radius are completely attenuated. The point D0 is termed the
cutoff frequency.

High-pass filtering using the ideal window looks like the illustration below.

H(u,v) = 0 if D(u,v) ≤ D0
H(u,v) = 1 if D(u,v) > D0

Figure 77: High-Pass Filtering Using the Ideal Window (gain H(u,v) plotted against frequency D(u,v), rising from 0 to 1 at the cutoff D0)

All frequencies inside a circle of a radius D0 are completely attenuated, and all
frequencies outside the radius are retained completely (passed).

A major disadvantage of the ideal filter is that it can cause “ringing” artifacts, particu-
larly if the radius (r) is small. The smoother functions (i.e., Butterworth, Hanning, etc.)
minimize this effect.

Bartlett
Filtering using the Bartlett window is a triangular function, as shown in the low- and
high-pass cross-sections below.

Figure 78: Filtering Using the Bartlett Window (low-pass and high-pass cross sections; gain H(u,v) varies linearly with frequency D(u,v), reaching 0 or 1 at the cutoff D0)

Butterworth, Gaussian, and Hanning


The Butterworth, Gaussian, and Hanning windows are all “smooth” and greatly reduce
the effect of ringing. The differences between them are minor and are of interest mainly
to experts. For most “normal” types of Fourier image enhancement, they are essentially
interchangeable.

The Butterworth window reduces the ringing effect because it does not contain abrupt
changes in value or slope. The low- and high-pass cross sections below illustrate this.

Figure 79: Filtering Using the Butterworth Window (low-pass and high-pass cross sections; gain H(u,v) passes smoothly through 0.5 at the cutoff D0)

The equation for the low-pass Butterworth window is:


H(u,v) = 1 / ( 1 + [ D(u,v) / D0 ]^(2n) )

NOTE: The Butterworth window approaches its window center gain asymptotically.

The equation for the Gaussian low-pass window is:


H(u,v) = e^( -(x / D0)² )
The equation for the Hanning low-pass window is:

H(u,v) = (1/2) [ 1 + cos( πx / (2 D0) ) ]   for 0 ≤ x ≤ 2 D0
H(u,v) = 0   otherwise
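These window functions can be generated over a centered Fourier grid as in the sketch below; the function and parameter names are illustrative assumptions, and the corresponding high-pass window is simply one minus the low-pass window.

import numpy as np

def low_pass_window(shape, d0, kind="butterworth", n=1):
    """Low-pass H(u,v) for a Fourier image with the origin at the center."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)        # distance from the origin

    if kind == "ideal":
        return (d <= d0).astype(np.float64)
    if kind == "butterworth":
        return 1.0 / (1.0 + (d / d0) ** (2 * n))
    if kind == "gaussian":
        return np.exp(-(d / d0) ** 2)
    if kind == "hanning":
        return np.where(d <= 2 * d0, 0.5 * (1.0 + np.cos(np.pi * d / (2 * d0))), 0.0)
    raise ValueError(kind)

# The corresponding high-pass window is 1 - low_pass_window(...).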

Fourier Noise Removal

Occasionally, images are corrupted by “noise” that is periodic in nature. An example of
this is the scan lines that are present in some TM images. When these images are trans-
formed into Fourier space, the periodic line pattern becomes a radial line. The ERDAS
IMAGINE Fourier Analysis functions provide two main tools for reducing noise in
images:

• Editing

• Automatic removal of periodic noise

Editing
In practice, it has been found that radial lines centered at the Fourier origin (u,v = 0,0)
are best removed using back-to-back wedges centered at (0,0). It is possible to remove
these lines using very narrow wedges with the Ideal window. However, the sudden
transitions resulting from zeroing out sections of a Fourier image will cause a ringing
of the image when it is transformed back into the spatial domain. This effect can be
lessened by using a less abrupt window, such as Butterworth.

Other types of noise can produce artifacts, such as lines not centered at u,v = 0,0 or
circular spots in the Fourier image. These can be removed using the tools provided in
the IMAGINE FFT Editor. As these artifacts are always symmetrical in the Fourier
magnitude image, editing tools operate on both components simultaneously. The
Fourier Editor contains tools that enable the user to attenuate a circular or rectangular
region anywhere on the image.

Automatic Periodic Noise Removal


The use of the FFT Editor, as described above, enables the user to selectively and
accurately remove periodic noise from any image. However, operator interaction and a
bit of trial and error are required. The automatic periodic noise removal algorithm has
been devised to address images degraded uniformly by striping or other periodic
anomalies. Use of this algorithm requires a minimum of input from the user.

The image is first divided into 128 x 128 pixel blocks. The Fourier Transform of each
block is calculated and the log-magnitudes of each FFT block are averaged. The
averaging removes all frequency domain quantities except those which are present in
each block (i.e., some sort of periodic interference). The average power spectrum is then
used as a filter to adjust the FFT of the entire image. When the inverse Fourier
Transform is performed, the result is an image which should have any periodic noise
eliminated or significantly reduced. This method is partially based on the algorithms
outlined in Cannon et al. 1983 and Srinivasan et al. 1988.

Select the Periodic Noise Removal option from Image Interpreter to use this function.

Homomorphic Filtering

Homomorphic filtering is based upon the principle that an image may be modeled as
the product of illumination and reflectance components:

I(x,y) = i(x,y) × r(x,y)

where:

I(x,y) = image intensity (DN) at pixel x,y


i(x,y) = illumination of pixel x,y
r(x,y) = reflectance at pixel x,y

The illumination image is a function of lighting conditions, shadows, etc. The reflec-
tance image is a function of the object being imaged. A log function can be used to
separate the two components (i and r) of the image:

ln I(x,y) = ln i(x,y) + ln r(x,y)

This transforms the image from multiplicative to additive superposition. With the two
component images separated, any linear operation can be performed. In this appli-
cation, the image is now transformed into Fourier space. Because the illumination
component usually dominates the low frequencies, while the reflectance component
dominates the higher frequencies, the image may be effectively manipulated in the
Fourier domain.

By using a filter on the Fourier image, which increases the high-frequency components,
the reflectance image (related to the target material) may be enhanced, while the illumi-
nation image (related to the scene illumination) is de-emphasized.

Select the Homomorphic Filter option from Image Interpreter to use this function.

By applying an inverse fast Fourier transform followed by an exponential function, the
enhanced image is returned to the normal spatial domain. The flow chart in Figure 80
summarizes the homomorphic filtering process in ERDAS IMAGINE.

Figure 80: Homomorphic Filtering Process (Input Image (i × r) → Log Image (ln i + ln r) → FFT → Fourier Image → Butterworth Filter (i = low frequency, r = high frequency) → Filtered Fourier Image → IFFT → Exponential → Enhanced Image, with i decreased and r increased)

As mentioned earlier, if an input image is not a power of two, the ERDAS IMAGINE
Fourier analysis software will automatically pad the image to the next largest size to
make it a power of two. For manual editing, this causes no problems. However, in
automatic processing, such as the homomorphic filter, the artifacts induced by the
padding may have a deleterious effect on the output image. For this reason, it is recom-
mended that images that are not a power of two be subset before being used in an
automatic process.
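A sketch of the homomorphic flow, assuming a Butterworth-style gain that rises from a low value at low frequencies (illumination) to a high value at high frequencies (reflectance); the gain values, cutoff, and filter shape are assumptions for the example, not Image Interpreter defaults.

import numpy as np

def homomorphic_filter(band, d0=30.0, low_gain=0.5, high_gain=2.0, n=1):
    """Log -> FFT -> high-frequency emphasis -> IFFT -> exponential."""
    rows, cols = band.shape
    log_image = np.log1p(band.astype(np.float64))           # ln of the image (plus 1 to avoid ln 0)
    fft = np.fft.fftshift(np.fft.fft2(log_image))

    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    # Gain rises from low_gain (illumination) to high_gain (reflectance).
    h = low_gain + (high_gain - low_gain) * (1.0 - 1.0 / (1.0 + (d / d0) ** (2 * n)))

    filtered = np.fft.ifft2(np.fft.ifftshift(fft * h)).real
    return np.expm1(filtered)                                # exponential back to the spatial domain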

A detailed description of the theory behind Fourier series and Fourier transforms is given in
Gonzales and Wintz (1977). See also Oppenheim (1975) and Press (1988).

Radar Imagery Enhancement

The nature of the surface phenomena involved in radar imaging is inherently different
from that of VIS/IR images. When VIS/IR radiation strikes a surface it is either
absorbed, reflected, or transmitted. The absorption is based on the molecular bonds in
the (surface) material. Thus, this imagery provides information on the chemical compo-
sition of the target.

When radar microwaves strike a surface, they are reflected according to the physical
and electrical properties of the surface, rather than the chemical composition. The
strength of radar return is affected by slope, roughness, and vegetation cover. The
conductivity of a target area is related to the porosity of the soil and its water content.
Consequently, radar and VIS/IR data are complementary; they provide different infor-
mation about the target area. An image in which these two data types are intelligently
combined can present much more information than either image by itself.

See "CHAPTER 1: Raster Data" and "CHAPTER 3: Raster and Vector Data Sources" for more
information on radar data.

This section describes enhancement techniques that are particularly useful for radar
imagery. While these techniques can be applied to other types of image data, this
discussion will focus on the special requirements of radar imagery enhancement.
ERDAS IMAGINE Radar provides a sophisticated set of image processing tools
designed specifically for use with radar imagery. This section will describe the
functions of ERDAS IMAGINE Radar.

For information on the Radar Image Enhancement function, see the section on "Radiometric
Enhancement" on page 132.

Speckle Noise

Speckle noise is commonly observed in radar (microwave or millimeter wave) sensing
systems, although it may appear in any type of remotely sensed image utilizing
coherent radiation. An active radar sensor gives off a burst of coherent radiation that
reflects from the target, unlike a passive microwave sensor that simply receives the low-
level radiation naturally emitted by targets.

Like the light from a laser, the waves emitted by active sensors travel in phase and
interact minimally on their way to the target area. After interaction with the target area,
these waves are no longer in phase. This is due to the different distances they travel
from targets, or single versus multiple bounce scattering.

Once out of phase, radar waves can interact to produce light and dark pixels known as
speckle noise. Speckle noise must be reduced before the data can be effectively utilized.
However, the image processing programs used to reduce speckle noise produce
changes in the image.

Since any image processing done before removal of the speckle results in the noise being incor-
porated into and degrading the image, you should not rectify, correct to ground range, or in any
way resample, enhance, or classify the pixel values before removing speckle noise. Functions
using nearest neighbor are technically permissible, but not advisable.



Since different applications and different sensors necessitate different speckle removal
models, IMAGINE Radar includes several speckle reduction algorithms:

• Mean filter

• Median filter

• Lee-Sigma filter

• Local Region filter

• Lee filter

• Frost filter

• Gamma-MAP filter

These filters are described below.

NOTE: Speckle noise in radar images cannot be completely removed. However, it can be reduced
significantly.

Mean Filter
The Mean filter is a simple calculation. The pixel of interest (center of window) is
replaced by the arithmetic average of all values within the window. This filter does not
remove the aberrant (speckle) value—it averages it into the data.

In theory, a bright and a dark pixel within the same window would cancel each other
out. This consideration would argue in favor of a large window size (i.e.,
7 × 7). However, averaging results in a loss of detail, which argues for a small window
size.

In general, this is the least satisfactory method of speckle reduction. It is useful for
“quick and dirty” applications or those where loss of resolution is not a problem.

Median Filter
A better way to reduce speckle, but still simplistic, is the Median filter. This filter
operates by arranging all DN (digital number) values within the user-defined window
in sequential order. The pixel of interest is replaced by the value in the center of this
distribution. A Median filter is useful for removing pulse or spike noise. Pulse functions
of less than one-half of the moving window width are suppressed or eliminated. In
addition, step functions or ramp functions are retained.

The effect of Mean and Median filters on various signals is shown (for one dimension)
in Figure 81.


Figure 81: Effects of Mean and Median Filters (original, mean filtered, and median filtered responses for a step, a ramp, a single pulse, and a double pulse)

The Median filter is useful for noise suppression in any image. It does not affect step or
ramp functions—it is an edge preserving filter (Pratt 1991). It is also applicable in
removing pulse function noise which results from the inherent pulsing of microwaves.
An example of the application of the Median filter is the removal of dead-detector
striping, such as is found in Landsat 4 TM data (Crippen 1989).
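As an illustration of these two filters (not the ERDAS IMAGINE implementation itself), a minimal sketch using the NumPy and SciPy libraries might look like the following; the 3 × 3 window size and the simulated image are arbitrary choices.

import numpy as np
from scipy.ndimage import uniform_filter, median_filter

# Simulated single-band radar image; any 2-D array of DN values will do.
image = np.random.gamma(shape=4.0, scale=25.0, size=(256, 256))

# Mean filter: each pixel is replaced by the average of the values in the
# surrounding window. The speckle value is averaged in, not removed.
mean_filtered = uniform_filter(image, size=3)

# Median filter: each pixel is replaced by the middle value of the sorted
# window, which suppresses single-pixel spikes while preserving step edges.
median_filtered = median_filter(image, size=3)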

Local Region Filter


The Local Region filter divides the moving window into eight regions based on angular
position (North, South, East, West, NW, NE, SW, and SE). Figure 82 shows a 5 × 5
moving window and the regions of the Local Region filter.



Figure 82: Regions of Local Region Filter (a 5 × 5 moving window with the pixel of interest at the center and the surrounding pixels grouped into directional regions such as North, NE, and SW)

For each region, the variance is calculated as follows:

$$\text{Variance} = \frac{\sum (DN_{x,y} - \text{Mean})^2}{n - 1}$$
Source: Nagao 1979

The algorithm compares the variance values of the regions surrounding the pixel of
interest. The pixel of interest is replaced by the mean of all DN values within the region
with the lowest variance, i.e., the most uniform region. A region with low variance is
assumed to have pixels minimally affected by wave interference yet very similar to the
pixel of interest. A region of low variance will probably be such for several surrounding
pixels.

The result is that the output image is composed of numerous uniform areas, the size of
which is determined by the moving window size. In practice, this filter can be utilized
sequentially 2 or 3 times, increasing the window size. The resultant output image is an
appropriate input to a classification application.
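The following Python sketch illustrates the idea for a single pass. It is only an approximation of the approach described above, not the IMAGINE algorithm, and the way the eight angular regions are derived here is one possible choice.

import numpy as np

def local_region_filter(image, window=5):
    # Split the window into eight angular regions (N, NE, E, ...); replace
    # the pixel of interest with the mean of the region of lowest variance.
    half = window // 2
    padded = np.pad(image.astype(float), half, mode="reflect")

    # Offsets in the window, grouped into eight 45-degree sectors.
    offsets = [(dr, dc) for dr in range(-half, half + 1)
               for dc in range(-half, half + 1) if (dr, dc) != (0, 0)]
    angle = {o: np.degrees(np.arctan2(-o[0], o[1])) % 360 for o in offsets}
    regions = [[o for o in offsets if 45 * k <= angle[o] < 45 * (k + 1)]
               for k in range(8)]

    out = np.empty_like(image, dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            best_mean, best_var = 0.0, np.inf
            for region in regions:
                vals = [padded[r + half + dr, c + half + dc] for dr, dc in region]
                var = np.var(vals, ddof=1)
                if var < best_var:
                    best_var, best_mean = var, float(np.mean(vals))
            out[r, c] = best_mean
    return out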


Sigma and Lee Filters


The Sigma and Lee filters utilize the statistical distribution of the DN values within the
moving window to estimate what the pixel of interest should be.

Speckle in imaging radar can be mathematically modeled as multiplicative noise with a mean of 1. The standard deviation of the noise can be mathematically defined as:

$$\frac{\sqrt{\text{VARIANCE}}}{\text{MEAN}} = \text{Coefficient of Variation} = \text{sigma}\;(\sigma)$$

The coefficient of variation, as a scene-derived parameter, is used as an input parameter in the Sigma and Lee filters. It is also useful in evaluating and modifying visible/infrared (VIS/IR) data for input to a 4-band composite image or in preparing a 3-band ratio color composite (Crippen 1989).

It can be assumed that imaging radar data noise follows a Gaussian distribution. This
would yield a theoretical value for Standard Deviation (SD) of .52 for 1-look radar data
and SD = .26 for 4-look radar data.

Table 17 gives theoretical coefficient of variation values for various look-average radar
scenes:

Table 17: Theoretical Coefficient of Variation Values

# of Looks (scenes)    Coef. of Variation Value
1                      .52
2                      .37
3                      .30
4                      .26
6                      .21
8                      .18

The Lee filters are based on the assumption that the mean and variance of the pixel of
interest are equal to the local mean and variance of all pixels within the user-selected
moving window.



The actual calculation used for the Lee filter is:

$$DN_{out} = \text{Mean} + K\,(DN_{in} - \text{Mean})$$

where:

Mean = average of pixels in a moving window

$$K = \frac{\text{Var}(x)}{[\text{Mean}]^2 \sigma^2 + \text{Var}(x)}$$

The variance of x [Var(x)] is defined as:

$$\text{Var}(x) = \frac{[\text{Variance within window}] + [\text{Mean within window}]^2}{[\text{Sigma}]^2 + 1} - [\text{Mean within window}]^2$$

Source: Lee 1981
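As a rough sketch of this calculation (not the IMAGINE implementation), the local statistics can be computed with moving-window averages; the window size and the sigma value of 0.26 (4-look data, from Table 17) are example inputs.

import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, window=5, sigma=0.26):
    # DN_out = Mean + K * (DN_in - Mean), with K and Var(x) as defined above.
    image = image.astype(float)
    local_mean = uniform_filter(image, size=window)
    local_sq_mean = uniform_filter(image ** 2, size=window)
    local_var = local_sq_mean - local_mean ** 2        # variance within window

    var_x = (local_var + local_mean ** 2) / (sigma ** 2 + 1) - local_mean ** 2
    var_x = np.maximum(var_x, 0.0)                      # guard against round-off

    k = var_x / (local_mean ** 2 * sigma ** 2 + var_x + 1e-12)
    return local_mean + k * (image - local_mean)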

The Sigma filter is based on the probability of a Gaussian distribution. Briefly, it is assumed that 95.5% of random samples are within a 2 standard deviation (2 sigma) range. This noise suppression filter replaces the pixel of interest with the average of all DN values within the moving window that fall within the designated range.

As with all the Radar speckle filters, the user must specify a moving window size, the
center pixel of which is the pixel of interest.

As with the Statistics filter, a coefficient of variation specific to the data set must be
input. Finally, the user must specify how many standard deviations to use (2, 1, or 0.5)
to define the accepted range.

The statistical filters (Sigma and Statistics) are logically applicable to any data set for
preprocessing. Any sensor system has various sources of noise, resulting in a few erratic
pixels. In VIS/IR imagery, most natural scenes are found to follow a normal distri-
bution of DN values, thus filtering at 2 standard deviations should remove this noise.
This is particularly true of experimental sensor systems that frequently have significant
noise problems.


These speckle filters can be used iteratively. The user must view and evaluate the
resultant image after each pass (the data histogram is useful for this), and then decide
if another pass is appropriate and what parameters to use on the next pass. For example,
three passes of the Sigma filter with the following parameters is very effective when
used with any type of data:

Table 18: Parameters for Sigma Filter

Pass    Sigma Value    Sigma Multiplier    Window Size
1       0.26           0.5                 3 × 3
2       0.26           1                   5 × 5
3       0.26           2                   7 × 7

Similarly, there is no reason why successive passes must be of the same filter. The
following sequence is useful prior to a classification:

Table 19: Pre-Classification Sequence

Filter          Pass    Sigma Value    Sigma Multiplier    Window Size
Lee             1       0.26           NA                  3 × 3
Lee             2       0.26           NA                  5 × 5
Local Region    3       NA             NA                  5 × 5 or 7 × 7

With all speckle reduction filters there is a playoff between noise reduction and loss of resolution.
Each data set and each application will have a different acceptable balance between these two
factors. The ERDAS IMAGINE filters have been designed to be versatile and gentle in reducing
noise (and resolution).



Frost Filter
The Frost filter is a minimum mean square error algorithm which adapts to the local
statistics of the image. The local statistics serve as weighting parameters for the impulse
response of the filter (moving window). This algorithm assumes that noise is multipli-
cative with stationary statistics.

The formula used is:

$$DN = \sum_{n \times n} K\,\alpha\,e^{-\alpha |t|}$$

where:

K = normalization constant
Ī = local mean
σ = local variance
σ̄ = image coefficient of variation value
|t| = |X − X₀| + |Y − Y₀|
n = moving window size

and

$$\alpha = \left(\frac{4}{n\bar{\sigma}^2}\right)\left(\frac{\sigma^2}{\bar{I}^2}\right)$$

Source: Lopes, Nezry, Touzi, and Laur 1990
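A slow but direct sketch of this weighting scheme is shown below. It is an illustration only; it assumes that n in the formula is taken as the number of pixels in the window and that the image coefficient of variation (for example, 0.26 for 4-look data) is supplied by the user.

import numpy as np

def frost_filter(image, window=5, image_cv=0.26):
    # Each output pixel is a weighted average of the window values, with
    # weights alpha * exp(-alpha * |t|) that adapt to the local statistics.
    image = image.astype(float)
    half = window // 2
    padded = np.pad(image, half, mode="reflect")

    # |t| = |X - X0| + |Y - Y0| for every position in the window
    dr, dc = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.abs(dr) + np.abs(dc)

    out = np.empty_like(image)
    rows, cols = image.shape
    n = window * window                      # assumed meaning of n
    for r in range(rows):
        for c in range(cols):
            win = padded[r:r + window, c:c + window]
            local_mean = win.mean()
            local_var = win.var(ddof=1)
            alpha = (4.0 / (n * image_cv ** 2)) * (local_var / (local_mean ** 2 + 1e-12))
            weights = alpha * np.exp(-alpha * t) + 1e-12
            out[r, c] = (weights * win).sum() / weights.sum()
    return out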


Gamma-MAP Filter
The Maximum A Posteriori (MAP) filter attempts to estimate the original pixel DN,
which is assumed to lie between the local average and the degraded (actual) pixel DN.
MAP logic maximizes the a posteriori probability density function with respect to the
original image.

Many speckle reduction filters (e.g., Lee, Lee-Sigma, Frost) assume a Gaussian distri-
bution for the speckle noise. Recent work has shown this to be an invalid assumption.
Natural vegetated areas have been shown to be more properly modeled as having a
Gamma distributed cross section. This algorithm incorporates this assumption. The
exact formula used is the cubic equation:

$$\hat{I}^3 - \bar{I}\hat{I}^2 + \sigma(\hat{I} - DN) = 0$$

where:

Î = sought value
Ī = local mean
DN = input value
σ = original image variance

Source: Frost, Stiles, Shanmugan, Holtzman, 1982
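For a single pixel, the cubic can be solved directly; the sketch below (an illustration of the equation only, not the IMAGINE implementation) uses numpy.roots and keeps the real root closest to the interval between the local mean and the input DN. The example values are hypothetical.

import numpy as np

def gamma_map_pixel(dn, local_mean, sigma):
    # Solve I^3 - local_mean * I^2 + sigma * (I - dn) = 0 for the estimate.
    roots = np.roots([1.0, -local_mean, sigma, -sigma * dn])
    real_roots = roots[np.isreal(roots)].real
    if real_roots.size == 0:
        return local_mean
    # The estimate is assumed to lie between the local average and the
    # degraded (actual) pixel DN, so pick the root nearest that interval.
    target = 0.5 * (local_mean + dn)
    return real_roots[np.argmin(np.abs(real_roots - target))]

# Hypothetical pixel: DN 180 in a neighborhood with mean 120 and variance 900.
print(gamma_map_pixel(dn=180.0, local_mean=120.0, sigma=900.0))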



Edge Detection Edge and line detection are important operations in digital image processing. For
example, geologists are often interested in mapping lineaments, which may be fault
lines or bedding structures. For this purpose, edge and line detection is a major
enhancement technique.

In selecting an algorithm, it is first necessary to understand the nature of what is being enhanced. Edge detection could imply amplifying an edge, a line, or a spot (see Figure 83).

Figure 83: One-dimensional, Continuous Edge, and Line Models (DN value plotted against x or y for a ramp edge, a step edge, a line, and a roof edge, showing the DN change, slope, slope midpoint, and width of each)

• Ramp edge — an edge modeled as a ramp, increasing in DN value from a low to a high level, or vice versa. Distinguished by DN change, slope, and slope midpoint.

• Step edge — a ramp edge with a slope angle of 90 degrees.

• Line — a region bounded on each end by an edge; width must be less than the
moving window size.

• Roof edge — a line with a width near zero.

The models in Figure 83 represent ideal theoretical edges. However, real data values
will vary to produce a more distorted edge, due to sensor noise, vibration, etc. (see
Figure 84). There are no perfect edges in raster data, hence the need for edge detection
algorithms.


Figure 84: A Very Noisy Edge Superimposed on an Ideal Edge (actual data values scattered in intensity about an ideal model step edge)

Edge detection algorithms can be broken down into 1st-order derivative and 2nd-order
derivative operations. Figure 85 shows ideal one-dimensional edge and line intensity
curves with the associated 1st-order and 2nd-order derivatives.

Figure 85: Edge and Line Derivatives (the original feature g(x), its 1st derivative ∂g/∂x, and its 2nd derivative ∂²g/∂x², plotted against x for a step edge and for a line)



The 1st-order derivative kernels derive from the simple Prewitt kernel:

∂/∂y:                ∂/∂x:
 1  0 -1              1  1  1
 1  0 -1              0  0  0
 1  0 -1             -1 -1 -1

The 2nd-order derivative kernels derive from Laplacian operators:

∂²/∂x²:              ∂²/∂y²:
-1  2 -1             -1 -1 -1
-1  2 -1              2  2  2
-1  2 -1             -1 -1 -1

1st-Order Derivatives (Prewitt)


ERDAS IMAGINE Radar utilizes sets of template matching operators. These operators
approximate to the eight possible compass orientations (North, South, East, West,
Northeast, Northwest, Southeast, Southwest). The compass names indicate the slope
direction creating maximum response. (Gradient kernels with zero weighting, i.e., the
sum of the kernel coefficient is zero, have no output in uniform regions.) The detected
edge will be orthogonal to the gradient direction.

To avoid positional shift, all operating windows are odd number arrays, with the center
pixel being the pixel of interest. Extension of the 3 × 3 impulse response arrays to a
larger size is not clear cut— different authors suggest different lines of rationale. For
example, it may be advantageous to extend the 3-level (Prewitt 1970) to:

1 1 0 –1 –1
1 1 0 –1 –1
1 1 0 –1 –1
1 1 0 –1 –1
1 1 0 –1 –1

or the following might be beneficial:

2  1  0 -1 -2        4  2  0 -2 -4
2  1  0 -1 -2        4  2  0 -2 -4
2  1  0 -1 -2   or   4  2  0 -2 -4
2  1  0 -1 -2        4  2  0 -2 -4
2  1  0 -1 -2        4  2  0 -2 -4


Larger template arrays provide a greater noise immunity, but are computationally
more demanding.

Zero-Sum Filters
A common type of edge detection kernel is a zero-sum filter. For this type of filter, the
coefficients are designed to add up to zero. Examples of two zero-sum filters are given
below:

Sobel:

    horizontal           vertical
    -1  -2  -1            1   0  -1
     0   0   0            2   0  -2
     1   2   1            1   0  -1

Prewitt:

    horizontal           vertical
    -1  -1  -1            1   0  -1
     0   0   0            1   0  -1
     1   1   1            1   0  -1
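As a simple sketch of how such a kernel is applied (not the IMAGINE implementation), the horizontal and vertical responses can be convolved separately and combined into a gradient magnitude:

import numpy as np
from scipy.ndimage import convolve

sobel_horizontal = np.array([[-1, -2, -1],
                             [ 0,  0,  0],
                             [ 1,  2,  1]], dtype=float)
sobel_vertical = np.array([[ 1,  0, -1],
                           [ 2,  0, -2],
                           [ 1,  0, -1]], dtype=float)

image = np.random.rand(128, 128)          # any single-band image as a 2-D array

gh = convolve(image, sobel_horizontal)    # response to horizontal edges
gv = convolve(image, sobel_vertical)      # response to vertical edges
edge_magnitude = np.hypot(gh, gv)         # combined gradient magnitude

# Because the kernel coefficients sum to zero, uniform regions give no output.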

Prior to edge enhancement, you should reduce speckle noise by using the Radar Speckle
Suppression function.

2nd-Order Derivatives (Laplacian Operators)


The second category of edge enhancers is 2nd-order derivative or Laplacian operators.
These are best for line (or spot) detection as distinct from ramp edges. ERDAS
IMAGINE Radar offers two such arrays:

Unweighted line:

–1 2 –1
–1 2 –1
–1 2 –1

Weighted line:

–1 2 –1
–2 4 –2
–1 2 –1

Source: Pratt 1991



Some researchers have found that a combination of 1st- and 2nd-order derivative images
produces the best output. See Eberlein and Weszka (1975) for information about subtracting the
2nd-order derivative (Laplacian) image from the 1st-order derivative image (gradient).

Texture According to Pratt (1991), “Many portions of images of natural scenes are devoid of
sharp edges over large areas. In these areas the scene can often be characterized as
exhibiting a consistent structure analogous to the texture of cloth. Image texture
measurements can be used to segment an image and classify its segments.”

As an enhancement, texture is particularly applicable to radar data, although it may be


applied to any type of data with varying results. For example, it has been shown (Blom
et al 1982) that a three-layer variance image using 15 × 15, 31 × 31, and 61 × 61 windows
can be combined into a three-color RGB (red, green, blue) image that is useful for
geologic discrimination. The same could apply to a vegetation classification.

The user could also prepare a three-color image using three different functions
operating through the same (or different) size moving window(s). However, each data
set and application would need different moving window sizes and/or texture
measures to maximize the discrimination.

Radar Texture Analysis


While texture analysis has been useful in the enhancement of visible/infrared image
data (VIS/IR), it is showing even greater applicability to radar imagery. In part, this
stems from the nature of the imaging process itself.

The interaction of the radar waves with the surface of interest is dominated by reflection
involving the surface roughness at the wavelength scale. In VIS/IR imaging, the
phenomenon involved is absorption at the molecular level. Also, as we know from array-
type antennae, radar is especially sensitive to regularity that is a multiple of its
wavelength. This provides for a more precise method for quantifying the character of
texture in a radar return.

The ability to use radar data to detect texture and provide topographic information
about an image is a major advantage over other types of imagery where texture is not a
quantitative characteristic.

The texture transforms can be used in several ways to enhance the use of radar imagery.
Adding the radar intensity image as an additional layer in a (vegetation) classification
is fairly straightforward and may be useful. However, the proper texture image
(function and window size) can greatly increase the discrimination. Using known test
sites, one can experiment to discern which texture image best aids the classification. For
example, the texture image could then be added as an additional layer to the TM bands.

As radar data come into wider use, other mathematical texture definitions will prove useful and
will be added to ERDAS IMAGINE Radar. In practice, you will interactively decide which
algorithm and window size is best for your data and application.


Texture Analysis Algorithms


While texture has typically been a qualitative measure, it can be enhanced with mathe-
matical algorithms. Many algorithms appear in literature for specific applications
(Haralick 1979, Irons 1981).

The algorithms incorporated into ERDAS IMAGINE are those which are applicable in
a wide variety of situations and are not computationally over-demanding. This latter
point becomes critical as the moving window size increases. Research has shown that
very large moving windows are often needed for proper enhancement. For example,
Blom (Blom et al 1982) uses up to a 61 × 61 window.

Four algorithms are currently utilized for texture enhancement in ERDAS IMAGINE:

• mean Euclidean distance (1st-order)

• variance (2nd-order)

• skewness (3rd-order)

• kurtosis (4th order)

Mean Euclidean Distance


These algorithms are shown below (Irons and Petersen 1981):

$$\text{Mean Euclidean Distance} = \frac{\sum_{ij}\left[\sum_{\lambda}(x_{c\lambda} - x_{ij\lambda})^2\right]^{1/2}}{n - 1}$$

where:

x_ijλ = DN value for spectral band λ and pixel (i,j) of a multispectral image
x_cλ = DN value for spectral band λ of a window's center pixel
n = number of pixels in a window

Variance
$$\text{Variance} = \frac{\sum (x_{ij} - M)^2}{n - 1}$$

where:

x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window, where:

$$\text{Mean} = \frac{\sum x_{ij}}{n}$$



Skewness

$$\text{Skew} = \frac{\sum (x_{ij} - M)^3}{(n - 1)\,V^{3/2}}$$

where:

x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)

Kurtosis
$$\text{Kurtosis} = \frac{\sum (x_{ij} - M)^4}{(n - 1)\,V^2}$$

where:

x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)

Texture analysis is available from the Texture function in Image Interpreter and from the Radar
Texture Analysis function.
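A compact sketch of computing the three single-band measures above (variance, skewness, and kurtosis) from local sums is shown below; it follows the formulas above, is not the IMAGINE implementation, and the 15 × 15 window is an arbitrary example.

import numpy as np
from scipy.ndimage import uniform_filter

def texture_layers(image, window=15):
    # Moving-window variance, skewness, and kurtosis, using local sums of
    # x, x^2, x^3, and x^4 (n = pixels per window, M = mean, V = variance).
    x = image.astype(float)
    n = float(window * window)

    s1 = uniform_filter(x, size=window) * n
    s2 = uniform_filter(x ** 2, size=window) * n
    s3 = uniform_filter(x ** 3, size=window) * n
    s4 = uniform_filter(x ** 4, size=window) * n
    m = s1 / n

    # Sums of (x - M)^k, expanded in terms of the raw sums
    c2 = s2 - n * m ** 2
    c3 = s3 - 3 * m * s2 + 2 * n * m ** 3
    c4 = s4 - 4 * m * s3 + 6 * m ** 2 * s2 - 3 * n * m ** 4

    variance = c2 / (n - 1)
    skewness = c3 / ((n - 1) * variance ** 1.5 + 1e-12)
    kurtosis = c4 / ((n - 1) * variance ** 2 + 1e-12)
    return variance, skewness, kurtosis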


Radiometric Correction - Radar Imagery  The raw radar image frequently contains radiometric errors due to:
• imperfections in the transmit and receive pattern of the radar antenna

• errors due to the coherent pulse (i.e., speckle)

• the inherently stronger signal from a near range (closest to the sensor flight path)
than a far range (farthest from the sensor flight path) target

Many imaging radar systems use a single antenna that transmits the coherent radar
burst and receives the return echo. However, no antenna is perfect; it may have various
lobes, dead spots, and imperfections. This causes the received signal to be slightly
distorted radiometrically. In addition, range fall-off will cause far range targets to be
darker (less return signal).

These two problems can be addressed by adjusting the average brightness of each
range line to a constant— usually the average overall scene brightness (Chavez 1986).
This requires that each line of constant range be long enough to reasonably approx-
imate the overall scene brightness (see Figure 86). This approach is generic; it is not
specific to any particular radar sensor.

The Adjust Brightness function in ERDAS IMAGINE works by correcting each range line
average. For this to be a valid approach, the number of data values must be large enough to
provide good average values. Be careful not to use too small an image. This will depend upon the
character of the scene itself.

Each row of data (each line of constant range) has an average data value a. Adding the averages of all data rows gives the overall average:

(a1 + a2 + a3 + a4 + ... + ax) / x = overall average

and the calibration coefficient of line x is:

overall average / ax = calibration coefficient of line x

A small subset of the image would not give an accurate average for correcting the entire scene.

Figure 86: Adjust Brightness Function
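Under the assumption that each line of constant range occupies one row of the array (it may instead be a column, as discussed in the next section), a minimal sketch of this correction is:

import numpy as np

def adjust_brightness(image, lines_in_rows=True):
    # Scale each line of constant range so that its average matches the
    # overall scene average (the calibration coefficient of the line).
    data = image.astype(float)
    if not lines_in_rows:
        data = data.T                        # work on rows in either case

    line_averages = data.mean(axis=1)        # a1, a2, ..., ax
    overall_average = line_averages.mean()
    calibration = overall_average / (line_averages + 1e-12)
    corrected = data * calibration[:, np.newaxis]

    return corrected if lines_in_rows else corrected.T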



Range Lines/Lines of Constant Range
Lines of constant range are not the same thing as range lines:

• Range lines — lines that are perpendicular to the flight of the sensor

• Lines of constant range — lines that are parallel to the flight of the sensor

• Range direction — same as range lines

Because radiometric errors are a function of the imaging geometry, the image must be
correctly oriented during the correction process. For the algorithm to correctly address
the data set, the user must tell ERDAS IMAGINE whether the lines of constant range
are in columns or rows in the displayed image.

Figure 87 shows the lines of constant range in columns, parallel to the sides of the
display screen:

Figure 87: Range Lines vs. Lines of Constant Range (lines of constant range run down the display screen, parallel to the flight/azimuth direction, while range lines run across the image in the range direction)


Slant-to-Ground Range Correction  Radar images also require slant-to-ground range correction, which is similar in concept
to orthocorrecting a VIS/IR image. By design, an imaging radar is always side-looking.
In practice, the depression angle is usually 75° at most. In operation, the radar sensor
determines the range (distance to) each target, as shown in Figure 88.

Figure 88: Slant-to-Ground Range Correction (the antenna, the across-track arcs of constant range, the depression angle θ, and the right triangle ABC relating the slant range distance Dists to the ground range distance Distg)

Assuming that angle ACB is a right angle, the user can approximate:

$$Dist_s \approx (Dist_g)(\cos\theta)$$

where:

Dist_s = slant range distance
Dist_g = ground range distance

$$\cos\theta = \frac{Dist_s}{Dist_g}$$

Source: Leberl 1990
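A small sketch of this relationship is shown below; the depression angle and slant range cell size are hypothetical values, not defaults read from any header file.

import numpy as np

depression_angle = np.radians(30.0)    # theta (hypothetical)
slant_pixel_size = 12.5                # slant range cell size in meters (hypothetical)

# Dist_s = Dist_g * cos(theta), so a slant range distance maps to ground range as:
slant_distances = np.arange(10) * slant_pixel_size
ground_distances = slant_distances / np.cos(depression_angle)
print(ground_distances)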



This has the effect of compressing the near range areas more than the far range areas.
For many applications, this may not be important. However, to geocode the scene or to
register radar to infrared or visible imagery, the scene must be corrected to a ground
range format. To do this, the following parameters relating to the imaging geometry are
needed:

• Depression angle (θ) — angular distance between sensor horizon and scene center

• Sensor height (H) — elevation of sensor (in meters) above its nadir point

• Beam width— angular distance between near range and far range for entire scene

• Pixel size (in meters) — range input image cell size

This information is usually found in the header file of data. Use the Data View... option to view
this information. If it is not contained in the header file, you must obtain this information from
the data supplier.

Once the scene is range-format corrected, pixel size can be changed for coregistration
with other data sets.


Merging Radar with VIS/IR Imagery  As mentioned earlier, the phenomena involved in radar imaging are quite different from
those in VIS/IR imaging. Because these two sensor types give different information
about the same target (chemical vs. physical), they are complementary data sets. If the
two images are correctly combined, the resultant image will convey both chemical and
physical information and could prove more useful than either image alone.

The methods for merging radar and VIS/IR data are still experimental and open for
exploration. The following methods are suggested for experimentation:

• Co-displaying in a Viewer

• RGB to IHS transforms

• Principal components transform

• Multiplicative

The ultimate goal of enhancement is not mathematical or logical purity - it is feature extraction.
There are currently no rules to suggest which options will yield the best results for a particular
application; you must experiment. The option that proves to be most useful will depend upon the
data sets (both radar and VIS/IR), your experience, and your final objective.

Co-Displaying
The simplest and most frequently used method of combining radar with VIS/IR
imagery is co-displaying on an RGB color monitor. In this technique the radar image is
displayed with one (typically the red) gun, while the green and blue guns display
VIS/IR bands or band ratios. This technique follows from no logical model and does not
truly merge the two data sets.

Use the Viewer with the Clear Display option disabled for this type of merge. Select the color
guns to display the different layers.

RGB to IHS Transforms


Another common technique uses the RGB to IHS transforms. In this technique, an RGB
(red, green, blue) color composite of bands (or band derivatives, such as ratios) is trans-
formed into intensity, hue, saturation color space. The intensity component is replaced
by the radar image, and the scene is reverse transformed. This technique integrally
merges the two data types.

For more information, see "RGB to IHS" on page 160.
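A minimal sketch of this substitution is shown below. It uses the HSV transform from matplotlib as a stand-in for the IHS transform described earlier, with the V (value) component treated as the intensity, and it assumes both inputs are scaled to the range 0 to 1; it is not the IMAGINE implementation.

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def radar_ihs_merge(vis_rgb, radar):
    # vis_rgb: (rows, cols, 3) VIS/IR color composite; radar: (rows, cols);
    # both scaled to 0-1. Replace the intensity with the radar image, then
    # reverse transform back to RGB.
    hsv = rgb_to_hsv(vis_rgb)
    hsv[..., 2] = radar
    return hsv_to_rgb(hsv)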



Principal Components Transform
A similar image merge involves utilizing the principal components (PC) transformation
of the VIS/IR image. With this transform, more than three components can be used.
These are converted to a series of principal components. The first principal component,
PC-1, is generally accepted to correlate with overall scene brightness. This value is
replaced by the radar image and the reverse transform is applied.

For more information, see "Principal Components Analysis" on page 153.

Multiplicative
A final method to consider is the multiplicative technique. This requires several
chromatic components and a multiplicative component, which is assigned to the image
intensity. In practice, the chromatic components are usually band ratios or PC's; the
radar image is input multiplicatively as intensity (Holcomb 1993).

The two sensor merge models using transforms to integrate the two data sets (Principal
Components and RGB to IHS) are based on the assumption that the radar intensity
correlates with the intensity that the transform derives from the data inputs. However,
the logic of mathematically merging radar with VIS/IR data sets is inherently different
from the logic of the SPOT/TM merges (as discussed under the section in this chapter
on Resolution Merge). It cannot be assumed that the radar intensity is a surrogate for,
or equivalent to, the VIS/IR intensity. The acceptability of this assumption will depend
on the specific case.

For example, Landsat TM imagery is often used to aid in mineral exploration. A


common display for this purpose is RGB = TM5/TM7, TM5/TM4, TM3/TM1; the logic
being that if all three ratios are high, the sites suited for mineral exploration will be
bright overall. If the target area is accompanied by silicification, which results in an area
of dense angular rock, this should be the case. However, if the alteration zone was
basaltic rock to kaolinite/alunite, then the radar return could be weaker than the
surrounding rock. In this case, radar would not correlate with high 5/7, 5/4, 3/1
intensity and the substitution would not produce the desired results (Holcomb 1993).


CHAPTER 6
Classification

Introduction Multispectral classification is the process of sorting pixels into a finite number of
individual classes, or categories of data, based on their data file values. If a pixel
satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to those
criteria. This process is also referred to as image segmentation.

Depending on the type of information the user wants to extract from the original data,
classes may be associated with known features on the ground or may simply represent
areas that “look different” to the computer. An example of a classified image is a land
cover map, showing vegetation, bare land, pasture, urban, etc.

The Classification Process

Pattern Recognition
Pattern recognition is the science—and art—of finding meaningful patterns in data,
which can be extracted through classification. By spatially and spectrally enhancing an
image, pattern recognition can be performed with the human eye—the human brain
automatically sorts certain textures and colors into categories.

In a computer system, spectral pattern recognition can be more scientific. Statistics are
derived from the spectral characteristics of all pixels in an image. Then, the pixels are
sorted based on mathematical criteria. The classification process breaks down into two
parts—training and classifying (using a decision rule).

Training First, the computer system must be trained to recognize patterns in the data. Training
is the process of defining the criteria by which these patterns are recognized (Hord
1982). Training can be performed with either a supervised or an unsupervised method,
as explained below.

Supervised Training
Supervised training is closely controlled by the analyst. In this process, the user selects
pixels that represent patterns or landcover features that they recognize, or that they can
identify with help from other sources, such as aerial photos, ground truth data, or maps.
Knowledge of the data, and of the classes desired, is required before classification.

By identifying patterns, the user can “train” the computer system to identify pixels with
similar characteristics. If the classification is accurate, the resulting classes represent the
categories within the data that the user originally identified.



Unsupervised Training
Unsupervised training is more computer-automated. It enables the user to specify
some parameters that the computer uses to uncover statistical patterns that are inherent
in the data. These patterns do not necessarily correspond to directly meaningful charac-
teristics of the scene, such as contiguous, easily recognized areas of a particular soil type
or land use. They are simply clusters of pixels with similar spectral characteristics. In
some cases, it may be more important to identify groups of pixels with similar spectral
characteristics than it is to sort pixels into recognizable categories.

Unsupervised training is dependent upon the data itself for the definition of classes.
This method is usually used when less is known about the data before classification. It
is then the analyst’s responsibility, after classification, to attach meaning to the resulting
classes (Jensen 1996). Unsupervised classification is useful only if the classes can be
appropriately interpreted.

Signatures The result of training is a set of signatures that defines a training sample or cluster. Each
signature corresponds to a class, and is used with a decision rule (explained below) to
assign the pixels in the image file (.img) to a class. Signatures in ERDAS IMAGINE can
be parametric or non-parametric.

A parametric signature is based on statistical parameters (e.g., mean and covariance


matrix) of the pixels that are in the training sample or cluster. Supervised and unsuper-
vised training can generate parametric signatures. A set of parametric signatures can be
used to train a statistically-based classifier (e.g., maximum likelihood) to define the
classes.

A non-parametric signature is not based on statistics, but on discrete objects (polygons


or rectangles) in a feature space image. These feature space objects are used to define
the boundaries for the classes. A non-parametric classifier will use a set of non-
parametric signatures to assign pixels to a class based on their location either inside or
outside the area in the feature space image. Supervised training is used to generate non-
parametric signatures (Kloer 1994).

ERDAS IMAGINE enables the user to generate statistics for a non-parametric signature.
This function will allow a feature space object to be used to create a parametric
signature from the image being classified. However, since a parametric classifier
requires a normal distribution of data, the only feature space object for which this
would be mathematically valid would be an ellipse (Kloer 1994).

When both parametric and non-parametric signatures are used to classify an image, the
user is more able to analyze and visualize the class definitions than either type of
signature provides independently (Kloer 1994).

See "APPENDIX A: Math Topics" for information on feature space images and how they are
created.


Decision Rule After the signatures are defined, the pixels of the image are sorted into classes based on
the signatures, by use of a classification decision rule. The decision rule is a mathe-
matical algorithm that, using data contained in the signature, performs the actual
sorting of pixels into distinct class values.

Parametric Decision Rule


A parametric decision rule is trained by the parametric signatures. These signatures are
defined by the mean vector and covariance matrix for the data file values of the pixels
in the signatures. When a parametric decision rule is used, every pixel is assigned to a
class, since the parametric decision space is continuous (Kloer 1994).

Non-Parametric Decision Rule


A non-parametric decision rule is not based on statistics, therefore, it is independent
from the properties of the data. If a pixel is located within the boundary of a non-
parametric signature, then this decision rule will assign the pixel to the signature’s
class. Basically, a non-parametric decision rule determines whether or not the pixel is
located inside or outside of a non-parametric signature boundary.



Classification Tips
Classification Scheme Usually, classification is performed with a set of target classes in mind. Such a set is
called a classification scheme (or classification system). The purpose of such a scheme
is to provide a framework for organizing and categorizing the information that can be
extracted from the data (Jensen 1983). The proper classification scheme will include
classes that are both important to the study and discernible from the data on hand. Most
schemes have a hierarchical structure, which can describe a study area in several levels
of detail.

A number of classification schemes have been developed by specialists who have


inventoried a geographic region. Some references for professionally-developed
schemes are listed below:

• Anderson, J.R., et al. 1976. “A Land Use and Land Cover Classification System for
Use with Remote Sensor Data.” U.S. Geological Survey Professional Paper 964.

• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of
the United States. Washington, D.C.: U.S. Fish and Wildlife Service.

• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use,
Cover and Forms Classification System. Florida Department of Transportation,
Procedure No. 550-010-001-a.

• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land
Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land
Use.

Other states or government agencies may also have specialized land use/cover studies.

It is recommended that the user begin the classification process by defining a classification scheme for the application, using previously developed schemes, like those above, as a general framework.


Iterative Classification A process is iterative when it repeats an action. The objective of the ERDAS IMAGINE
system is to enable the user to iteratively create and refine signatures and classified .img
files to arrive at a desired final classification. The IMAGINE classification utilities are a
“tool box” to be used as needed, not a numbered list of steps that must always be
followed in order.

The total classification can be achieved with either the supervised or unsupervised
methods, or a combination of both. Some examples are below:

• Signatures created from both supervised and unsupervised training can be merged
and appended together.

• Signature evaluation tools can be used to indicate which signatures are spectrally
similar. This will help to determine which signatures should be merged or deleted.
These tools also help define optimum band combinations for classification. Using
the optimum band combination may reduce the time required to run a classification
process.

• Since classifications (supervised or unsupervised) can be based on a particular area


of interest (either defined in a raster layer or an .aoi layer), signatures and
classifications can be generated from previous classification results.

Supervised vs. Unsupervised Training  In supervised training, it is important to have a set of desired classes in mind, and then
create the appropriate signatures from the data. The user must also have some way of
recognizing pixels that represent the classes that he or she wants to extract.

Supervised classification is usually appropriate when the user wants to identify


relatively few classes, when the user has selected training sites that can be verified with
ground truth data, or when the user can identify distinct, homogeneous regions that
represent each class.

On the other hand, if the user wants the classes to be determined by spectral distinctions
that are inherent in the data, so that he or she can define the classes later, then the appli-
cation is better suited to unsupervised training. Unsupervised training enables the user
to define many classes easily, and identify classes that are not in contiguous, easily
recognized regions.

NOTE: Supervised classification also includes using a set of classes that was generated from an
unsupervised classification. Using a combination of supervised and unsupervised classification
may yield optimum results, especially with large data sets (e.g., multiple Landsat scenes). For
example, unsupervised classification may be useful for generating a basic set of classes, then
supervised classification could be used for further definition of the classes.

Classifying Enhanced Data  For many specialized applications, classifying data that have been merged, spectrally merged, or enhanced—with principal components, image algebra, or other transformations—can produce very specific and meaningful results. However, unless you understand the data and the enhancements used, it is recommended that only the original, remotely sensed data be classified.



Dimensionality Dimensionality refers to the number of layers being classified. For example, a data file
with 3 layers is said to be 3-dimensional, since 3-dimensional feature space is “plotted”
to analyze the data.

Feature space and dimensionality are discussed in "APPENDIX A: Math Topics".

Adding Dimensions
Using programs in ERDAS IMAGINE, the user can add layers to existing .img files.
Therefore, the user can incorporate data (called ancillary data) other than remotely-
sensed data into the classification. Using ancillary data enables the user to incorporate
variables into the classification from, for example, vector layers, previously classified
data, or elevation data. The data file values of the ancillary data become an additional
feature of each pixel, thus influencing the classification (Jensen 1996).

Limiting Dimensions
Although ERDAS IMAGINE allows an unlimited number of layers of data to be used
for one classification, it is usually wise to reduce the dimensionality of the data as much
as possible. Often, certain layers of data are redundant or extraneous to the task at hand.
Unnecessary data take up valuable disk space, and cause the computer system to
perform more arduous calculations, which slows down processing.

Use the Signature Editor to evaluate separability to calculate the best subset of layer combina-
tions. Use the Image Interpreter functions to merge or subset layers. Use the Image Information
tool (in the ERDAS IMAGINE icon panel) to delete a layer(s).


Supervised Training Supervised training requires a priori (already known) information about the data, such
as:

• What type of classes need to be extracted? Soil type? Land use? Vegetation?

• What classes are most likely to be present in the data? That is, which types of land
cover, soil, or vegetation (or whatever) are represented by the data?

In supervised training, the user relies on his or her own pattern recognition skills and a
priori knowledge of the data to help the system determine the statistical criteria (signa-
tures) for data classification.

To select reliable samples, the user should know some information—either spatial or
spectral—about the pixels that they want to classify.

The location of a specific characteristic, such as a land cover type, may be known
through ground truthing. Ground truthing refers to the acquisition of knowledge
about the study area from field work, analysis of aerial photography, personal
experience, etc. Ground truth data are considered to be the most accurate (true) data
available about the area of study. They should be collected at the same time as the
remotely sensed data, so that the data correspond as much as possible (Star and Estes
1990). However, some ground data may not be very accurate due to a number of errors,
inaccuracies, and human shortcomings.

Training Samples and Feature Space Objects  Training samples (also called samples) are sets of pixels that represent what is recog-
nized as a discernible pattern, or potential class. The system will calculate statistics from
the sample pixels to create a parametric signature for the class.

The following terms are sometimes used interchangeably in reference to training


samples. For clarity, they will be used in this documentation as follows:

• Training sample, or sample, is a set of pixels selected to represent a potential class.


The data file values for these pixels are used to generate a parametric signature.

• Training field, or training site, is the geographical area(s) of interest (AOI) in the
image represented by the pixels in a sample. Usually, it is previously identified
with the use of ground truth data.

Feature space objects are user-defined areas of interest (AOIs) in a feature space image.
The feature space signature is based on this objects(s).



Selecting Training Samples  It is important that training samples be representative of the class that the user is trying
to identify. This does not necessarily mean that they must contain a large number of
pixels or be dispersed across a wide region of the data. The selection of training samples
depends largely upon the user’s knowledge of the data, of the study area, and of the
classes that he or she wants to extract.

ERDAS IMAGINE enables the user to identify training samples via one or more of the
following methods:

• using a vector layer

• defining a polygon in the image

• identifying a training sample of contiguous pixels with similar spectral


characteristics

• identifying a training sample of contiguous pixels within a certain area, with or


without similar spectral characteristics

• using a class from a thematic raster layer from an image file of the same area (i.e.,
the result of an unsupervised classification)

Digitized Polygon
Training samples can be identified by their geographical location (training sites, using
maps, ground truth data). The locations of the training sites can be digitized from maps
with the ERDAS IMAGINE Vector or AOI tools. Polygons representing these areas are
then stored as vector layers. The vector layers can then be used as input to the AOI tools
and used as training samples to create signatures.

Use the Vector and AOI tools to digitize training samples from a map. Use the Signature Editor
to create signatures from training samples that are identified with digitized polygons.

User-Defined Polygon
Using his or her pattern recognition skills (with or without supplemental ground truth
information), the user can identify samples by examining a displayed image of the data
and drawing a polygon around the training site(s) of interest. For example, if it is
known that oak trees reflect certain frequencies of green and infrared light according to
ground truth data, the user may be able to base his or her sample selections on the data
(taking atmospheric conditions, sun angle, time, date, and other variations into
account). The area within the polygon(s) would be used to create a signature.

Use the AOI tools to define the polygon(s) to be used as the training sample. Use the Signature
Editor to create signatures from training samples that are identified with the polygons.


Identify Seed Pixel


With the Seed Properties dialog and AOI tools, the cursor (cross hair) can be used to
identify a single pixel (seed pixel) that is representative of the training sample. This seed
pixel will be used as a model pixel, against which the pixels that are contiguous to it are
compared based on parameters specified by the user.

When one or more of the contiguous pixels is accepted, the mean of the sample is calcu-
lated from the accepted pixels. Then, the pixels contiguous to the sample are compared
in the same way. This process repeats until no pixels that are contiguous to the sample
satisfy the spectral parameters. In effect, the sample “grows” outward from the model
pixel with each iteration. These homogenous pixels will be converted from individual
raster pixels to a polygon and used as an area of interest (AOI) layer.

Select the Seed Properties option in the Viewer to identify training samples with a seed pixel.

Seed Pixel Method with Spatial Limits


The training sample identified with the seed pixel method can be limited to a particular
region by defining the geographic distance and area.

Vector layers (polygons or lines) can be displayed as the top layer in the Viewer, and the
boundaries can then be used as an AOI for training samples defined under Seed Properties.

Thematic Raster Layer


A training sample can be defined by using class values from a thematic raster layer (see
Table 20). The data file values in the training sample will be used to create a signature.
The training sample can be defined by as many class values as desired.

NOTE: The thematic raster layer must have the same coordinate system as the image file being
classified.

Table 20: Training Sample Comparison

Method                   Advantages                                   Disadvantages
Digitized Polygon        precise map coordinates, represents          may overestimate class variance,
                         known ground information                     time-consuming
User-Defined Polygon     high degree of user control                  may overestimate class variance,
                                                                      time-consuming
Seed Pixel               auto-assisted, less time                     may underestimate class variance
Thematic Raster Layer    allows iterative classifying                 must have previously defined
                                                                      thematic layer



Evaluating Training Samples  Selecting training samples is often an iterative process. To generate signatures that
accurately represent the classes to be identified, the user may have to repeatedly select
training samples, evaluate the signatures that are generated from the samples, and then
either take new samples or manipulate the signatures as necessary. Signature manipu-
lation may involve merging, deleting, or appending from one file to another. It is also
possible to perform a classification using the known signatures, then mask out areas
that are not classified to use in gathering more signatures.

See "Evaluating Signatures" on page 236 for methods of determining the accuracy of the
signatures created from your training samples.

Selecting Feature Space Objects  The ERDAS IMAGINE Feature Space tools enable the user to interactively define
feature space objects (AOIs) in the feature space image(s). A feature space image is
simply a graph of the data file values of one band of data against the values of another
band (often called a scatterplot). In ERDAS IMAGINE, a feature space image has the
same data structure as a raster image; therefore, feature space images can be used with
other IMAGINE utilities, including zoom, color level slicing, virtual roam, Spatial
Modeler, and Map Composer.
Figure 89: Example of a Feature Space Image (band 1 plotted against band 2)

The transformation of a multilayer raster image into a feature space image is done by
mapping the input pixel values to a position in the feature space image. This transfor-
mation defines only the pixel position in the feature space image. It does not define the
pixel’s value. The pixel values in the feature space image can be the accumulated
frequency, which is calculated when the feature space image is defined. The pixel
values can also be provided by a thematic raster layer of the same geometry as the
source multilayer image. Mapping a thematic layer into a feature space image can be
useful for evaluating the validity of the parametric and non-parametric decision bound-
aries of a classification (Kloer 1994).

When you display a feature space image file (.fsp.img) in an ERDAS IMAGINE Viewer, the
colors reflect the density of points for both bands. The bright tones represent a high density and
the dark tones represent a low density.
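A sketch of this mapping for two 8-bit bands, using accumulated frequency as the feature space pixel value, might look like the following; it is an illustration only, not the IMAGINE .fsp.img format.

import numpy as np

def feature_space_image(band_a, band_b, bins=256):
    # Each input pixel's (band A, band B) data file values give a position in
    # the feature space image; the value there is the accumulated frequency.
    hist, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    return hist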


Create Non-parametric Signature


The user can define a feature space object (AOI) in the feature space image and use it
directly as a non-parametric signature. Since the IMAGINE Viewers for the feature
space image and the image being classified are both linked to the IMAGINE Signature
Editor, it is possible to mask AOIs from the image being classified to the feature space
image, and vice versa. The user can also directly link a cursor in the image Viewer to
the feature space Viewer. These functions will help determine a location for the AOI in
the feature space image.

A single feature space image, but multiple AOIs, can be used to define the signature.
This signature is taken within the feature space image, not the image being classified.
The pixels in the image that correspond to the data file values in the signature (i.e.,
feature space object) will be assigned to that class.

One fundamental difference between using the feature space image to define a training
sample and the other traditional methods is that it is a non-parametric signature. The
decisions made in the classification process have no dependency on the statistics of the
pixels. This helps improve classification accuracies for specific non-normal classes, such
as urban and exposed rock (Faust, et al 1991).

See "APPENDIX A: Math Topics" for information on feature space images.

1. Display the .img file to be classified in a Viewer (layers 3, 2, 1).

2. Create a feature space image from the .img file being classified (layer 1 vs. layer 2).

3. Draw an AOI (feature space object) around the desired area in the feature space image. Once the user has a desired AOI, it can be used as a signature.

4. A decision rule is used to analyze each pixel in the .img file being classified, and the pixels with the corresponding data file values are assigned to the feature space class.

Figure 90: Process for Defining a Feature Space Object



Evaluate Feature Space Signatures
Via the Feature Space tools, it is also possible to use a feature space signature to
generate a mask. Once it is defined as a mask, the pixels under the mask will be
identified in the image file and highlighted in the Viewer. The image displayed in the
Viewer must be the image from which the feature space image was created. This
process will help the user to visually analyze the correlations between various spectral
bands to determine which combination of bands brings out the desired features in the
image.

The user can have as many feature space images with different band combinations as
desired. Any polygon or rectangle in these feature space images can be used as a non-
parametric signature. However, only one feature space image can be used per
signature. The polygons in the feature space image can be easily modified and/or
masked until the desired regions of the image have been identified.

Use the Feature Space tools in the Signature Editor to create a feature space image and mask the
signature. Use the AOI tools to draw polygons.

Feature Space Signatures

Advantages:

• Provide an accurate way to classify a class with a non-normal distribution (e.g., residential and urban).

• Certain features may be more visually identifiable in a feature space image.

• The classification decision process is fast.

Disadvantages:

• The classification decision process allows overlap and unclassified pixels.

• The feature space image may be difficult to interpret.


Unsupervised Training  Unsupervised training requires only minimal initial input from the user. However, the
user will have the task of interpreting the classes that are created by the unsupervised
training algorithm.

Unsupervised training is also called clustering, because it is based on the natural


groupings of pixels in image data when they are plotted in feature space. According to
the specified parameters, these groups can later be merged, disregarded, otherwise
manipulated, or used as the basis of a signature.

Feature space is explained in "APPENDIX A: Math Topics".

Clusters
Clusters are defined with a clustering algorithm, which often uses all or many of the
pixels in the input data file for its analysis. The clustering algorithm has no regard for
the contiguity of the pixels that define each cluster.

• The ISODATA clustering method uses spectral distance as in the sequential


method, but iteratively classifies the pixels, redefines the criteria for each class, and
classifies again, so that the spectral distance patterns in the data gradually emerge.

• The RGB clustering method is more specialized than the ISODATA method. It
applies to three-band, 8-bit data. RGB clustering plots pixels in three-dimensional
feature space, and divides that space into sections that are used to define clusters.

Each of these methods is explained below, along with its advantages and disadvan-
tages.

Some of the statistics terms used in this section are explained in "APPENDIX A: Math Topics".



ISODATA Clustering ISODATA stands for Iterative Self-Organizing Data Analysis Technique (Tou and
Gonzalez 1974). It is iterative in that it repeatedly performs an entire classification
(outputting a thematic raster layer) and recalculates statistics. Self-Organizing refers to
the way in which it locates clusters with minimum user input.

The ISODATA method uses minimum spectral distance to assign a cluster for each
candidate pixel. The process begins with a specified number of arbitrary cluster means
or the means of existing signatures, and then it processes repetitively, so that those
means will shift to the means of the clusters in the data.

Because the ISODATA method is iterative, it is not biased to the top of the data file, as
are the one-pass clustering algorithms.

Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA
clustering.

ISODATA Clustering Parameters


To perform ISODATA clustering, the user specifies:

• N - the maximum number of clusters to be considered. Since each cluster is the basis
for a class, this number becomes the maximum number of classes to be formed. The
ISODATA process begins by determining N arbitrary cluster means. Some clusters
with too few pixels can be eliminated, leaving less than N clusters.

• T - a convergence threshold, which is the maximum percentage of pixels whose


class values are allowed to be unchanged between iterations.

• M - the maximum number of iterations to be performed.


Initial Cluster Means


On the first iteration of the ISODATA algorithm, the means of N clusters can be
arbitrarily determined. After each iteration, a new mean for each cluster is calculated,
based on the actual spectral locations of the pixels in the cluster, instead of the initial
arbitrary calculation. Then, these new means are used for defining clusters in the next
iteration. The process continues until there is little change between iterations (Swain
1973).

The initial cluster means are distributed in feature space along a vector that runs
between the point at spectral coordinates (µ1-s1, µ2-s2, µ3-s3, ... µn-sn) and the coordi-
nates (µ1+s1, µ2+s2, µ3+s3, ... µn+sn). Such a vector in two dimensions is illustrated in
Figure 91. The initial cluster means are evenly distributed between (µ1-s1, µn-sn) and
(µ1+s1, µn+sn).

Figure 91: ISODATA Arbitrary Clusters (five arbitrary cluster means spaced evenly along the vector from (µA − σA, µB − σB) to (µA + σA, µB + σB) in two-dimensional spectral space, with Band A and Band B data file values on the axes)

Pixel Analysis
Pixels are analyzed beginning with the upper-left corner of the image and going left to
right, block by block.

The spectral distance between the candidate pixel and each cluster mean is calculated.
The pixel is assigned to the cluster whose mean is the closest. The ISODATA function
creates an output .img file with a thematic raster layer and/or a signature file (.sig) as a
result of the clustering. At the end of each iteration, an .img file exists that shows the
assignments of the pixels to the clusters.



Considering the regular, arbitrary assignment of the initial cluster means, the first
iteration of the ISODATA algorithm will always give results similar to those in Figure
92.

Figure 92: ISODATA First Pass (Clusters 1 through 5 in Band A versus Band B data file value space, following the regular arrangement of the initial cluster means)

For the second iteration, the means of all clusters are recalculated, causing them to shift
in feature space. The entire process is repeated—each candidate pixel is compared to
the new cluster means and assigned to the closest cluster mean.
Figure 93: ISODATA Second Pass (the recalculated cluster means shift in Band A versus Band B data file value space)

Percentage Unchanged
After each iteration, the normalized percentage of pixels whose assignments are
unchanged since the last iteration is displayed in the dialog. When this number reaches
T (the convergence threshold), the program terminates.


It is possible for the percentage of unchanged pixels to never converge or reach T (the
convergence threshold). Therefore, it may be beneficial to monitor the percentage, or
specify a reasonable maximum number of iterations, M, so that the program will not run
indefinitely.
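
The assign-and-recompute loop described above can be sketched in a few lines of Python. This is an illustrative sketch of an ISODATA-style iteration only, not the ERDAS IMAGINE implementation; the array layout, the even spacing of the initial means, and the exact convergence test on the fraction of unchanged pixels are assumptions made for the example.

```python
import numpy as np

def isodata_sketch(pixels, n_clusters, max_iter, conv_threshold):
    """Simplified ISODATA-style clustering (illustration only).

    pixels         : (num_pixels, num_bands) array of data file values
    n_clusters     : N, the maximum number of clusters to consider
    max_iter       : M, the maximum number of iterations
    conv_threshold : T, e.g. 0.95 -- stop when this fraction of pixels keeps
                     the same cluster assignment between iterations
    """
    mu, sigma = pixels.mean(axis=0), pixels.std(axis=0)
    # Initial means are spaced evenly along the vector from (mu - sigma) to (mu + sigma)
    means = np.linspace(mu - sigma, mu + sigma, n_clusters)
    labels = np.full(len(pixels), -1)

    for _ in range(max_iter):
        # Assign each candidate pixel to the cluster with the minimum spectral distance
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        unchanged = np.mean(new_labels == labels)
        labels = new_labels
        if unchanged >= conv_threshold:      # convergence threshold T reached
            break
        # Recalculate each cluster mean from the pixels currently assigned to it
        for k in range(n_clusters):
            members = pixels[labels == k]
            if len(members) > 0:
                means[k] = members.mean(axis=0)
    return labels, means
```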

ISODATA Clustering

Advantages:

• Clustering is not geographically biased to the top or bottom pixels of the data file, because it is iterative.

• This algorithm is highly successful at finding the spectral clusters that are inherent in the data. It does not matter where the initial arbitrary cluster means are located, as long as enough iterations are allowed.

• A preliminary thematic raster layer is created, which gives results similar to using a minimum distance classifier (as explained below) on the signatures that are created. This thematic raster layer can be used for analyzing and manipulating the signatures before actual classification takes place.

Disadvantages:

• The clustering process is time-consuming, because it can repeat many times.

• Does not account for pixel spatial homogeneity.

Recommended Decision Rule


Although the ISODATA algorithm is the most similar to the minimum distance
decision rule, the signatures can produce good results with any type of classification.
Therefore, no particular decision rule is recommended over others.

In most cases, the signatures created by ISODATA will be merged, deleted, or appended to other signature sets. The .img file created by ISODATA is the same as the .img file that would be created by a minimum distance classification, except for the non-convergent pixels (100-T% of the pixels).

Use the Merge and Delete options in the Signature Editor to manipulate signatures.

Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA
clustering, generate signatures, and classify the resulting signatures.



RGB Clustering

The RGB Clustering and Advanced RGB Clustering functions in Image Interpreter create a
thematic raster layer. However, no signature file is created and no other classification decision
rule is used. In practice, RGB Clustering differs greatly from the other clustering methods, but
it does employ a clustering algorithm and, therefore, it is explained here.

RGB clustering is a simple classification and data compression technique for three
bands of data. It is a fast and simple algorithm that quickly compresses a 3-band image
into a single band pseudocolor image, without necessarily classifying any particular
features.

The algorithm plots all pixels in 3-dimensional feature space and then partitions this
space into clusters on a grid. In the more simplistic version of this function, each of these
clusters becomes a class in the output thematic raster layer.

The advanced version requires that a minimum threshold on the clusters be set, so that
only clusters at least as large as the threshold will become output classes. This allows
for more color variation in the output file. Pixels which do not fall into any of the
remaining clusters are assigned to the cluster with the smallest city-block distance from
the pixel. In this case, the city-block distance is calculated as the sum of the distances
in the red, green, and blue directions in 3-dimensional space.

Along each axis of the 3-dimensional scatterplot, each input histogram is scaled so that
the partitions divide the histograms between specified limits— either a specified
number of standard deviations above and below the mean, or between the minimum
and maximum data values for each band.

The default number of divisions per band is listed below:

• RED is divided into 7 sections (32 for advanced version)

• GREEN is divided into 6 sections (32 for advanced version)

• BLUE is divided into 6 sections (32 for advanced version)


Figure 94: RGB Clustering (the R, G, and B input histograms of frequency versus data file value are partitioned into sections; in the example shown, one cluster contains the pixels with values between 16 and 34 in RED, between 35 and 55 in GREEN, and between 0 and 16 in BLUE)

Partitioning Parameters
It is necessary to specify the number of R, G, and B sections in each dimension of the 3-
dimensional scatterplot. The number of sections should vary according to the histo-
grams of each band. Broad histograms should be divided into more sections, and
narrow histograms should be divided into fewer sections (see Figure 94).
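
As a rough illustration of the partitioning, the following Python sketch scales each band between its minimum and maximum data file values, divides it into the requested number of sections, and combines the three section indices into a single class value. It is a simplified stand-in for the RGB Clustering function, not the function itself, and the default 7 x 6 x 6 division is assumed.

```python
import numpy as np

def rgb_cluster_sketch(red, green, blue, sections=(7, 6, 6)):
    """Assign each pixel to an RGB grid cluster (simplified sketch).

    red, green, blue : arrays of 8-bit data file values (one per band)
    sections         : number of divisions along the R, G, and B axes
    Returns an array of class values from 1 to R*G*B (252 by default).
    """
    classes = np.zeros(red.shape, dtype=np.int32)
    n_r, n_g, n_b = sections
    for band, n_div, weight in ((red, n_r, n_g * n_b),
                                (green, n_g, n_b),
                                (blue, n_b, 1)):
        lo, hi = int(band.min()), int(band.max())
        span = max(hi - lo, 1)
        # Scale the band between its min and max, then find its section index
        idx = ((band.astype(np.int32) - lo) * n_div) // span
        idx = np.clip(idx, 0, n_div - 1)
        classes += idx * weight
    return classes + 1
```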

It is possible to interactively change these parameters in the RGB Clustering function in the
Image Interpreter. The number of classes is calculated based on the current parameters, and it
displays on the command screen.



RGB Clustering

Advantages:

• The fastest classification method. It is designed to provide a fast, simple classification for applications that do not require specific classes.

• Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.

• (Advanced version only) A highly interactive function, allowing an iterative adjustment of the parameters until the number of clusters and the thresholds are satisfactory for analysis.

Disadvantages:

• Exactly three bands must be input, which is not suitable for all applications.

• Does not always create thematic classes that can be analyzed for informational purposes.

Tips
Some starting values that usually produce good results with the simple RGB clustering
are:

R=7
G=6
B=6

which results in 7 X 6 X 6 = 252 classes.

To decrease the number of output colors/classes or to darken the output, decrease these
values.

For the Advanced RGB clustering function, start with higher values for R, G, and B.
Adjust by raising the threshold parameter and/or decreasing the R, G, and B parameter
values until the desired number of output classes is obtained.


Signature Files

A signature is a set of data that defines a training sample, feature space object (AOI), or
cluster. The signature is used in a classification process. Each classification decision rule
(algorithm) requires some signature attributes as input—these are stored in the
signature file (.sig). Signatures in ERDAS IMAGINE can be parametric or non-
parametric.

The following attributes are standard for all signatures (parametric and non-
parametric):

• name — identifies the signature and is used as the class name in the output
thematic raster layer. The default signature name is Class <number>.

• color — the color for the signature and is used as the color for the class in the output
thematic raster layer. This color is also used with other signature visualization
functions, such as alarms, masking, ellipses, etc.

• value — the output class value for the signature. The output class value does not
necessarily need to be the class number of the signature. This value should be a
positive integer.

• order — the order to process the signatures for order-dependent processes, such as
signature alarms and parallelepiped classifications.

• parallelepiped limits — the limits used in the parallelepiped classification.

Parametric Signature
A parametric signature is based on statistical parameters (e.g., mean and covariance
matrix) of the pixels that are in the training sample or cluster. A parametric signature
includes the following attributes in addition to the standard attributes for signatures:

• the number of bands in the input image (as processed by the training program)

• the minimum and maximum data file value in each band for each sample or cluster
(minimum vector and maximum vector)

• the mean data file value in each band for each sample or cluster (mean vector)

• the covariance matrix for each sample or cluster

• the number of pixels in the sample or cluster

Non-parametric Signature
A non-parametric signature is based on an AOI that the user defines in the feature
space image for the .img file being classified. A non-parametric classifier will use a set
of non-parametric signatures to assign pixels to a class based on their location, either
inside or outside the area in the feature space image.

The format of the .sig file is described in "APPENDIX B: File Formats and Extensions".
Information on these statistics can be found in "APPENDIX A: Math Topics".



Evaluating Signatures

Once signatures are created, they can be evaluated, deleted, renamed, and merged with signatures from other files. Merging signatures enables the user to perform complex classifications with signatures that are derived from more than one training method (supervised and/or unsupervised, parametric and/or non-parametric).

Use the Signature Editor to view the contents of each signature, manipulate signatures, and
perform your own mathematical tests on the statistics.

Using Signature Data


There are tests to perform that can help determine whether the signature data are a true
representation of the pixels to be classified for each class. The user can evaluate signa-
tures that were created either from supervised or unsupervised training. The evaluation
methods in ERDAS IMAGINE include:

• Alarm — using his or her own pattern recognition ability, the user views the
estimated classified area for a signature (using the parallelepiped decision rule)
against a display of the original image.

• Ellipse — view ellipse diagrams and scatterplots of data file values for every pair
of bands.

• Contingency matrix — do a quick classification of the pixels in a set of training samples to see what percentage of the sample pixels are actually classified as expected. These percentages are presented in a contingency matrix. This method is for supervised training only, for which polygons of the training samples exist.

• Divergence — measure the divergence (statistical distance) between signatures and determine band subsets that will maximize the classification.

• Statistics and histograms — analyze statistics and histograms of the signatures to make evaluations and comparisons.

NOTE: If the signature is non-parametric (i.e., a feature space signature), you can use only the
alarm evaluation method.

After analyzing the signatures, it would be beneficial to merge or delete them, eliminate
redundant bands from the data, add new bands of data, or perform any other opera-
tions to improve the classification.

Alarm

The alarm evaluation enables the user to compare an estimated classification of one or
more signatures against the original data, as it appears in the ERDAS IMAGINE
Viewer. According to the parallelepiped decision rule, the pixels that fit the classifi-
cation criteria are highlighted in the displayed image. The user also has the option to
indicate an overlap by having it appear in a different color.

With this test, the user can use his or her own pattern recognition skills, or some
ground-truth data, to determine the accuracy of a signature.


Use the Signature Alarm utility in the Signature Editor to perform n-dimensional alarms on the
image in the Viewer, using the parallelepiped decision rule. The alarm utility creates a functional
layer, and the IMAGINE Viewer allows you to toggle between the image layer and the functional
layer.

Ellipse

In this evaluation, ellipses of concentration are calculated with the means and standard
deviations stored in the signature file. It is also possible to generate parallelepiped
rectangles, means, and labels.

In this evaluation, the mean and the standard deviation of every signature are used to
represent the ellipse in 2-dimensional feature space. The ellipse is displayed in a feature
space image.

Ellipses are explained and illustrated in "APPENDIX A: Math Topics" under the discussion of
Scatterplots.

When the ellipses in the feature space image show extensive overlap, then the spectral
characteristics of the pixels represented by the signatures cannot be distinguished in the
two bands that are graphed. In the best case, there is no overlap. Some overlap,
however, is expected.

Figure 95 shows how ellipses are plotted and how they can overlap. The first graph
shows how the ellipses are plotted based on the range of 2 standard deviations from the
mean. This range can be altered, changing the ellipse plots. Analyzing the plots with
differing numbers of standard deviations is useful for determining the limits of a paral-
lelepiped classification.

Figure 95: Ellipse Evaluation of Signatures (left, Signature Overlap: the ellipses of signature 1 and signature 2 overlap in Band A versus Band B data file value space, plotted at ±2 standard deviations around the means; right, Distinct Signatures: the two ellipses are well separated in Band C versus Band D data file value space)



By analyzing the ellipse graphs for all band pairs, the user can determine which signa-
tures and which bands will provide accurate classification results.

Use the Signature Editor to create a feature space image and to view an ellipse(s) of signature
data.

Contingency Matrix

NOTE: This evaluation classifies all of the pixels in the selected AOIs and compares the results
to the pixels of a training sample.

The pixels of each training sample are not always so homogeneous that every pixel in a
sample will actually be classified to its corresponding class. Each sample pixel only
weights the statistics that determine the classes. However, if the signature statistics for
each sample are distinct from those of the other samples, then a high percentage of each
sample’s pixels will be classified as expected.

In this evaluation, a quick classification of the sample pixels is performed using the
minimum distance, maximum likelihood, or Mahalanobis distance decision rule. Then,
a contingency matrix is presented, which contains the number and percentages of
pixels that were classified as expected.

Use the Signature Editor to perform the contingency matrix evaluation.

Separability

Signature separability is a statistical measure of distance between two signatures. Separability can be calculated for any combination of bands that will be used in the classification, enabling the user to rule out any bands that are not useful in the results of the classification.

For the distance (Euclidean) evaluation, the spectral distance between the mean vectors
of each pair of signatures is computed. If the spectral distance between two samples is
not significant for any pair of bands, then they may not be distinct enough to produce
a successful classification.

The spectral distance is also the basis of the minimum distance classification (as
explained below). Therefore, computing the distances between signatures will help the
user predict the results of a minimum distance classification.

Use the Signature Editor to compute signature separability and distance and automatically
generate the report.

The formulas used to calculate separability are related to the maximum likelihood
decision rule. Therefore, evaluating signature separability helps the user predict the
results of a maximum likelihood classification. The maximum likelihood decision rule
is explained below.

There are three options for calculating the separability. All of these formulas take into
account the covariances of the signatures in the bands being compared, as well as the
mean vectors of the signatures.


Refer to "APPENDIX A: Math Topics" for information on the mean vector and covariance
matrix.

Divergence
The formula for computing Divergence (Dij) is as follows:

D_{ij} = \frac{1}{2}\,\mathrm{tr}\left((C_i - C_j)(C_j^{-1} - C_i^{-1})\right) + \frac{1}{2}\,\mathrm{tr}\left((C_i^{-1} + C_j^{-1})(\mu_i - \mu_j)(\mu_i - \mu_j)^T\right)

where:

i and j= the two signatures (classes) being compared


Ci= the covariance matrix of signature i
µi= the mean vector of signature i
tr= the trace function (matrix algebra)
T= the transposition function

Source: Swain and Davis 1978

Transformed Divergence
The formula for computing Transformed Divergence (TD) is as follows:

D_{ij} = \frac{1}{2}\,\mathrm{tr}\left((C_i - C_j)(C_j^{-1} - C_i^{-1})\right) + \frac{1}{2}\,\mathrm{tr}\left((C_i^{-1} + C_j^{-1})(\mu_i - \mu_j)(\mu_i - \mu_j)^T\right)

TD_{ij} = 2000\left(1 - \exp\left(\frac{-D_{ij}}{8}\right)\right)
where:

i and j= the two signatures (classes) being compared


Ci= the covariance matrix of signature i
µi= the mean vector of signature i
tr= the trace function (matrix algebra)
T= the transposition function

Source: Swain and Davis 1978
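
A small sketch of these calculations, assuming the signature statistics are already available as NumPy arrays (the mean vectors and covariance matrices for the chosen band subset), might look like the following; it illustrates the formulas above and is not the ERDAS IMAGINE separability report itself.

```python
import numpy as np

def divergence(mean_i, cov_i, mean_j, cov_j):
    """Divergence D_ij between two parametric signatures (after Swain and Davis 1978)."""
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    d_mu = (mean_i - mean_j).reshape(-1, 1)
    term1 = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv))
    term2 = 0.5 * np.trace((ci_inv + cj_inv) @ d_mu @ d_mu.T)
    return term1 + term2

def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
    """Transformed divergence, scaled between 0 and 2000."""
    d = divergence(mean_i, cov_i, mean_j, cov_j)
    return 2000.0 * (1.0 - np.exp(-d / 8.0))
```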



Jeffries-Matusita Distance
The formula for computing Jeffries-Matusita Distance (JM) is as follows:

\alpha = \frac{1}{8}(\mu_i - \mu_j)^T\left(\frac{C_i + C_j}{2}\right)^{-1}(\mu_i - \mu_j) + \frac{1}{2}\ln\left(\frac{\left|\,(C_i + C_j)/2\,\right|}{\sqrt{|C_i| \times |C_j|}}\right)

JM_{ij} = \sqrt{2\left(1 - e^{-\alpha}\right)}

where:

i and j = the two signatures (classes) being compared


Ci= the covariance matrix of signature i
µi = the mean vector of signature i
ln = the natural logarithm function
|Ci |= the determinant of Ci (matrix algebra)

Source: Swain and Davis 1978
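
The Jeffries-Matusita distance can be sketched the same way; the x1000 scaling used here is an assumption made only so that the result matches the 0 to 1414 range quoted below.

```python
import numpy as np

def jeffries_matusita(mean_i, cov_i, mean_j, cov_j, scale=1000.0):
    """Jeffries-Matusita distance between two signatures (0 to about 1414 when scaled)."""
    d_mu = (mean_i - mean_j).reshape(-1, 1)
    c_avg = 0.5 * (cov_i + cov_j)
    alpha = (0.125 * (d_mu.T @ np.linalg.inv(c_avg) @ d_mu).item()
             + 0.5 * np.log(np.linalg.det(c_avg)
                            / np.sqrt(np.linalg.det(cov_i) * np.linalg.det(cov_j))))
    return scale * np.sqrt(2.0 * (1.0 - np.exp(-alpha)))
```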

Separability
Both transformed divergence and Jeffries-Matusita distance have upper and lower
bounds. If the calculated divergence is equal to the appropriate upper bound, then the
signatures can be said to be totally separable in the bands being studied. A calculated
divergence of zero means that the signatures are inseparable.

• TD is between 0 and 2000.

• JM is between 0 and 1414.

A separability listing is a report of the computed divergence for every class pair and
one band combination. The listing contains every divergence value for the bands
studied for every possible pair of signatures.

The separability listing also contains the average divergence and the minimum diver-
gence for the band set. These numbers can be compared to other separability listings
(for other band combinations), to determine which set of bands is the most useful for
classification.

Weight Factors
As with the Bayesian classifier (explained below with maximum likelihood), weight
factors may be specified for each signature. These weight factors are based on a priori
(already known) probabilities that any given pixel will be assigned to each class. For
example, if the user knows that twice as many pixels should be assigned to Class A as
to Class B, then Class A should receive a weight factor that is twice that of Class B.

NOTE: The weight factors do not influence the divergence equations (for TD or JM), but they do
influence the report of the best average and best minimum separability.


The weight factors for each signature are used to compute a weighted divergence with
the following calculation:

W_{ij} = \frac{\displaystyle\sum_{i=1}^{c-1}\;\sum_{j=i+1}^{c} f_i\, f_j\, U_{ij}}{\displaystyle\frac{1}{2}\left[\left(\sum_{i=1}^{c} f_i\right)^{2} - \sum_{i=1}^{c} f_i^{2}\right]}

where:

i and j = the two signatures (classes) being compared


Uij = the unweighted divergence between i and j
Wij = the weighted divergence between i and j
c= the number of signatures (classes)
fi = the weight factor for signature i
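
Given a matrix of unweighted pairwise divergences and a vector of weight factors, the weighted divergence above can be sketched as follows; this is an illustration only, and the variable names are assumptions.

```python
import numpy as np

def weighted_divergence(U, f):
    """Weighted divergence W for one band subset.

    U : (c, c) array of unweighted pairwise divergences U_ij
    f : length-c vector of a priori weight factors f_i
    """
    f = np.asarray(f, dtype=float)
    c = len(f)
    numerator = sum(f[i] * f[j] * U[i, j]
                    for i in range(c - 1) for j in range(i + 1, c))
    denominator = 0.5 * (f.sum() ** 2 - (f ** 2).sum())
    return numerator / denominator
```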

Probability of Error
The Jeffries-Matusita distance is related to the pairwise probability of error, which is the
probability that a pixel assigned to class i is actually in class j. Within a range, this
probability can be estimated according to the expression below:

\frac{1}{16}\left(2 - JM_{ij}^{2}\right)^{2} \;\leq\; P_e \;\leq\; 1 - \frac{1}{2}\left(1 + \frac{1}{2}JM_{ij}^{2}\right)

where:

i and j = the signatures (classes) being compared


JMij = the Jeffries-Matusita distance between i and j
Pe = the probability that a pixel will be misclassified from i to j

Source: Swain and Davis 1978



Signature Manipulation

In many cases, training must be repeated several times before the desired signatures are
produced. Signatures can be gathered from different sources—different training
samples, feature space images, and different clustering programs— all using different
techniques. After each signature file is evaluated, the user may merge, delete, or create
new signatures. The desired signatures can finally be moved to one signature file to be
used in the classification.

The following operations upon signatures and signature files are possible with ERDAS
IMAGINE:

• View the contents of the signature statistics

• View histograms of the samples or clusters that were used to derive the signatures

• Delete unwanted signatures

• Merge signatures together, so that they form one larger class when classified

• Append signatures from other files. The user can combine signatures that are
derived from different training methods for use in one classification.

Use the Signature Editor to view statistics and histogram listings and to delete, merge, append,
and rename signatures within a signature file.


Classification Decision Rules

Once a set of reliable signatures has been created and evaluated, the next step is to
perform a classification of the data. Each pixel is analyzed independently. The
measurement vector for each pixel is compared to each signature, according to a
decision rule, or algorithm. Pixels that pass the criteria that are established by the
decision rule are then assigned to the class for that signature. ERDAS IMAGINE enables
the user to classify the data both parametrically with statistical representation, and non-
parametrically as objects in feature space. Figure 96 shows the flow of an image pixel
through the classification decision making process in ERDAS IMAGINE (Kloer 1994).

If a non-parametric rule is not set, then the pixel is classified using only the parametric
rule. All of the parametric signatures will be tested. If a non-parametric rule is set, the
pixel will be tested against all of the signatures with non-parametric definitions. This
rule results in the following conditions:

• If the non-parametric test results in one unique class, the pixel will be assigned to
that class.

• If the non-parametric test results in zero classes (i.e., the pixel lies outside all the
non-parametric decision boundaries), then the unclassified rule will be applied.
With this rule, the pixel will either be classified by the parametric rule or left
unclassified.

• If the pixel falls into more than one class as a result of the non-parametric test, the
overlap rule will be applied. With this rule, the pixel will either be classified by the
parametric rule, processing order, or left unclassified.



Non-parametric Rules

ERDAS IMAGINE provides these decision rules for non-parametric signatures:

• parallelepiped

• feature space

Unclassified Options
ERDAS IMAGINE provides these options if the pixel is not classified by the non-
parametric rule:

• parametric rule

• unclassified

Overlap Options
ERDAS IMAGINE provides these options if the pixel falls into more than one feature
space object:

• parametric rule

• by order

• unclassified

Parametric Rules

ERDAS IMAGINE provides these commonly-used decision rules for parametric signatures:

• minimum distance

• Mahalanobis distance

• maximum likelihood (with Bayesian variation)


Figure 96: Classification Flow Diagram (the candidate pixel is first tested against the non-parametric rule, if one is set; depending on whether the resulting number of classes is 0, 1, or more than 1, the unclassified options or overlap options (parametric rule, by order, or unclassified) determine whether the pixel receives a class assignment, an unclassified assignment, or is passed to the parametric rule)



Parallelepiped

In the parallelepiped decision rule, the data file values of the candidate pixel are
compared to upper and lower limits. These limits can be either:

• the minimum and maximum data file values of each band in the signature,

• the mean of each band, plus and minus a number of standard deviations, or

• any limits that the user specifies, based on his or her knowledge of the data and
signatures. This knowledge may come from the signature evaluation techniques
discussed above.

These limits can be set using the Parallelepiped Limits utility in the Signature Editor.

There are high and low limits for every signature in every band. When a pixel’s data file
values are between the limits for every band in a signature, then the pixel is assigned to
that signature’s class. Figure 97 is a two-dimensional example of a parallelepiped classi-
fication.

Figure 97: Parallelepiped Classification Using Plus or Minus Two Standard Deviations as Limits (pixels in classes 1, 2, and 3 and unclassified pixels plotted against Band A and Band B data file values; the class 2 parallelepiped extends from µA2 - 2s to µA2 + 2s and from µB2 - 2s to µB2 + 2s, where µA2 and µB2 are the class 2 means in Band A and Band B)

The large rectangles in Figure 97 are called parallelepipeds. They are the regions within
the limits for each signature.
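
The band-by-band limit test can be sketched in a few lines; the following Python fragment is an illustration only, under the assumption that the low and high limits for every signature have already been derived (from minimum/maximum values, mean plus or minus standard deviations, or user-specified limits).

```python
import numpy as np

def parallelepiped_candidates(pixel, low_limits, high_limits):
    """Return the signatures whose parallelepiped contains the candidate pixel.

    pixel       : length-n vector of data file values
    low_limits  : (num_signatures, n) array of low limits per band
    high_limits : (num_signatures, n) array of high limits per band
    """
    inside = np.all((pixel >= low_limits) & (pixel <= high_limits), axis=1)
    # Zero matches -> unclassified options apply; more than one -> overlap options apply
    return np.nonzero(inside)[0]
```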


Overlap Region
In cases where a pixel may fall into the overlap region of two or more parallelepipeds,
the user must define how the pixel will be classified.

• The pixel can be classified by the order of the signatures. If one of the signatures is
first and the other signature is fourth, the pixel will be assigned to the first
signature’s class. This order can be set in the ERDAS IMAGINE Signature Editor.

• The pixel can be classified by the defined parametric decision rule. The pixel will be
tested against the overlapping signatures only. If neither of these signatures is
parametric, then the pixel will be left unclassified. If only one of the signatures is
parametric, then the pixel will be assigned automatically to that signature’s class.

• The pixel can be left unclassified.

Regions Outside of the Boundaries


If the pixel does not fall into one of the parallelepipeds, then the user must define how
the pixel will be classified.

• The pixel can be classified by the defined parametric decision rule. The pixel will be
tested against all of the parametric signatures. If none of the signatures is
parametric, then the pixel will be left unclassified.

• The pixel can be left unclassified.

Use the Supervised Classification utility in the Signature Editor to perform a parallelepiped
classification.

Parallelepiped Decision Rule

Advantages:

• Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.

• Often useful for a first-pass, broad classification, this decision rule quickly narrows down the number of possible classes to which each pixel can be assigned before the more time-consuming calculations (e.g., minimum distance, Mahalanobis distance, or maximum likelihood) are made, thus cutting processing time.

• Not dependent on normal distributions.

Disadvantages:

• Since parallelepipeds have “corners,” pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 98.



Figure 98: Parallelepiped Corners Compared to the Signature Ellipse (a candidate pixel near a corner of the parallelepiped boundary falls inside the parallelepiped even though it lies well outside the signature ellipse; the axes are Band A and Band B data file values)

Feature Space

The feature space decision rule determines whether or not a candidate pixel lies within
the non-parametric signature in the feature space image. When a pixel’s data file values
are in the feature space signature, then the pixel is assigned to that signature’s class.
Figure 99 is a two-dimensional example of a feature space classification. The polygons
in this figure are AOIs used to define the feature space signatures.

Figure 99: Feature Space Classification (pixels in classes 1, 2, and 3 and unclassified pixels plotted against Band A and Band B data file values; the class regions are the AOI polygons drawn in the feature space image)


Overlap Region
In cases where a pixel may fall into the overlap region of two or more AOIs, the user
must define how the pixel will be classified.

• The pixel can be classified by the order of the feature space signatures. If one of the
signatures is first and the other signature is fourth, the pixel will be assigned to the
first signature’s class. This order can be set in the ERDAS IMAGINE Signature
Editor.

• The pixel can be classified by the defined parametric decision rule. The pixel will be
tested against the overlapping signatures only. If neither of these feature space
signatures is parametric, then the pixel will be left unclassified. If only one of the
signatures is parametric, then the pixel will be assigned automatically to that
signature’s class.

• The pixel can be left unclassified.

Regions Outside of the AOIs


If the pixel does not fall into one of the AOIs for the feature space signatures, then the
user must define how the pixel will be classified.

• The pixel can be classified by the defined parametric decision rule. The pixel will be
tested against all of the parametric signatures. If none of the signatures is
parametric, then the pixel will be left unclassified.

• The pixel can be left unclassified.

Feature Space Decision Rule

Advantages:

• Often useful for a first-pass, broad classification.

• Provides an accurate way to classify a class with a non-normal distribution (e.g., residential and urban).

• Certain features may be more visually identifiable, which can help discriminate between classes that are spectrally similar and hard to differentiate with parametric information.

• The feature space method is fast.

Disadvantages:

• The feature space decision rule allows overlap and unclassified pixels.

• The feature space image may be difficult to interpret.

Use the Decision Rules utility in the Signature Editor to perform a feature space classification.



Minimum Distance

The minimum distance decision rule (also called spectral distance) calculates the
spectral distance between the measurement vector for the candidate pixel and the mean
vector for each signature.

Figure 100: Minimum Spectral Distance (the spectral distances from the candidate pixel to the means µ1, µ2, and µ3 of three signatures, plotted against Band A and Band B data file values)

In Figure 100, spectral distance is illustrated by the lines from the candidate pixel to the
means of the three signatures. The candidate pixel is assigned to the class with the
closest mean.

The equation for classifying by spectral distance is based on the equation for Euclidean
distance:

SD_{xyc} = \sqrt{\sum_{i=1}^{n}\left(\mu_{ci} - X_{xyi}\right)^{2}}

where:

n= number of bands (dimensions)


i= a particular band
c= a particular class
Xxyi= data file value of pixel x,y in band i
µci= mean of data file values in band i for the sample for class c
SDxyc= spectral distance from pixel x,y to the mean of class c

Source: Swain and Davis 1978

When spectral distance is computed for all possible values of c (all possible classes), the
class of the candidate pixel is assigned to the class for which SD is the lowest.
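
A compact sketch of this rule, assuming the pixel measurement vectors and the signature mean vectors are held in NumPy arrays, is shown below; it also returns the distance to the winning class, which is the kind of value written to a distance image file (discussed under Thresholding).

```python
import numpy as np

def minimum_distance_sketch(pixels, class_means):
    """Assign each pixel to the class with the smallest Euclidean spectral distance.

    pixels      : (num_pixels, n_bands) array of measurement vectors
    class_means : (num_classes, n_bands) array of signature mean vectors
    Returns the class index and the spectral distance SD for every pixel.
    """
    diffs = pixels[:, None, :] - class_means[None, :, :]
    sd = np.sqrt((diffs ** 2).sum(axis=2))          # SD_xyc for each pixel and class
    best = sd.argmin(axis=1)                        # class with the lowest SD
    return best, sd[np.arange(len(pixels)), best]
```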


Minimum Distance Decision Rule

Advantages:

• Since every pixel is spectrally closer to either one sample mean or another, there are no unclassified pixels.

• The fastest decision rule to compute, except for parallelepiped.

Disadvantages:

• Pixels which should be unclassified (that is, they are not spectrally close to the mean of any sample, within limits that are reasonable to the user) will become classified. However, this problem is alleviated by thresholding out the pixels that are farthest from the means of their classes. (See the discussion of Thresholding below.)

• Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Inversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes to their means.

Mahalanobis Distance

The Mahalanobis distance algorithm assumes that the histograms of the bands have normal
distributions. If this is not the case, you may have better results with the parallelepiped or
minimum distance decision rule, or by performing a first-pass parallelepiped classification.

Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used in the equation. Variance and covariance are figured in so that clusters that are highly varied will lead to similarly varied classes, and vice-versa. For example, when classifying urban areas—typically a class whose pixels vary widely—correctly classified pixels may be farther from the mean than those of a class for water, which is usually not a highly varied class (Swain and Davis 1978).



The equation for the Mahalanobis distance classifier is as follows:

D = (X - M_c)^T \, (Cov_c)^{-1} \, (X - M_c)

where:

D= Mahalanobis distance
c= a particular class
X= the measurement vector of the candidate pixel
Mc= the mean vector of the signature of class c
Covc= the covariance matrix of the pixels in the signature of class c
Covc-1= inverse of Covc
T= transposition function

The pixel is assigned to the class, c, for which D is the lowest.
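
A minimal sketch of the Mahalanobis distance test for a single candidate pixel, assuming the class means and covariance matrices have been taken from the signature file, follows.

```python
import numpy as np

def mahalanobis_sketch(pixel, means, covariances):
    """Mahalanobis distance D to each class; the pixel goes to the class with the lowest D.

    pixel       : length-n measurement vector X
    means       : list of class mean vectors M_c
    covariances : list of class covariance matrices Cov_c
    """
    distances = []
    for m_c, cov_c in zip(means, covariances):
        diff = (pixel - m_c).reshape(-1, 1)
        distances.append((diff.T @ np.linalg.inv(cov_c) @ diff).item())
    return int(np.argmin(distances)), distances
```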

Mahalanobis Decision Rule

Advantages:

• Takes the variability of classes into account, unlike minimum distance or parallelepiped.

• May be more useful than minimum distance in cases where statistical criteria (as expressed in the covariance matrix) must be taken into account, but the weighting factors that are available with the maximum likelihood/Bayesian option are not needed.

Disadvantages:

• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature will contain large values.

• Slower to compute than parallelepiped or minimum distance.

• Mahalanobis distance is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.

Maximum Likelihood/Bayesian

The maximum likelihood algorithm assumes that the histograms of the bands of data have normal
distributions. If this is not the case, you may have better results with the parallelepiped or
minimum distance decision rule, or by performing a first-pass parallelepiped classification.

The maximum likelihood decision rule is based on the probability that a pixel belongs
to a particular class. The basic equation assumes that these probabilities are equal for all
classes, and that the input bands have normal distributions.

Bayesian Classifier
If the user has a priori knowledge that the probabilities are not equal for all classes, he
or she can specify weight factors for particular classes. This variation of the maximum
likelihood decision rule is known as the Bayesian decision rule (Hord 1982). Unless the
user has a priori knowledge of the probabilities, it is recommended that they not be
specified. In this case, these weights default to 1.0 in the equation.


The equation for the maximum likelihood/Bayesian classifier is as follows:

D = \ln(a_c) - \left[0.5\,\ln\left(|Cov_c|\right)\right] - \left[0.5\,(X - M_c)^T \, (Cov_c)^{-1} \, (X - M_c)\right]

where:

D = weighted distance (likelihood)


c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the sample of class c
ac = percent probability that any candidate pixel is a member of class c
(defaults to 1.0, or is entered from a priori knowledge)
Covc = the covariance matrix of the pixels in the sample of class c
|Covc| = determinant of Covc (matrix algebra)
Covc-1 = inverse of Covc (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)

The inverse and determinant of a matrix, along with the difference and transposition of
vectors, would be explained in a textbook of matrix algebra.

The pixel is assigned to the class, c, for which D is the lowest.
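
The weighted-distance equation can be sketched as below, assuming the class statistics and optional a priori weights are available; note that D, as written in the equation above, grows with the likelihood of class membership, so an implementation that treats it as a distance would work with its negative.

```python
import numpy as np

def maximum_likelihood_sketch(pixel, means, covariances, priors=None):
    """Evaluate the maximum likelihood/Bayesian measure D for every class.

    pixel       : length-n measurement vector X
    means       : list of class mean vectors M_c
    covariances : list of class covariance matrices Cov_c
    priors      : a priori probabilities a_c (all default to 1.0)
    """
    if priors is None:
        priors = [1.0] * len(means)
    scores = []
    for a_c, m_c, cov_c in zip(priors, means, covariances):
        diff = (pixel - m_c).reshape(-1, 1)
        d = (np.log(a_c)
             - 0.5 * np.log(np.linalg.det(cov_c))
             - 0.5 * (diff.T @ np.linalg.inv(cov_c) @ diff).item())
        scores.append(d)
    return scores
```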

Maximum Likelihood/Bayesian Decision Rule

Advantages:

• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.

• Takes the variability of classes into account by using the covariance matrix, as does Mahalanobis distance.

Disadvantages:

• An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.

• Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.

• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature will contain large values.



Evaluating Classification

After a classification is performed, these methods are available for testing the accuracy of the classification:

• Thresholding — Use a probability image file to screen out misclassified pixels.

• Accuracy Assessment — Compare the classification to ground truth or other data.

Thresholding

Thresholding is the process of identifying the pixels in a classified image that are the
most likely to be classified incorrectly. These pixels are put into another class (usually
class 0). These pixels are identified statistically, based upon the distance measures that
were used in the classification decision rule.

Distance File
When a minimum distance, Mahalanobis distance, or maximum likelihood classifi-
cation is performed, a distance image file can be produced in addition to the output
thematic raster layer. A distance image file is a one-band, 32-bit offset continuous
raster layer in which each data file value represents the result of a spectral distance
equation, depending upon the decision rule used.

• In a minimum distance classification, each distance value is the Euclidean spectral distance between the measurement vector of the pixel and the mean vector of the pixel’s class.

• In a Mahalanobis distance or maximum likelihood classification, the distance value is the Mahalanobis distance between the measurement vector of the pixel and the mean vector of the pixel’s class.

The brighter pixels (with the higher distance file values) are spectrally farther from the
signature means for the classes to which they were assigned. They are more likely to be
misclassified.

The darker pixels are spectrally nearer, and more likely to be classified correctly. If
supervised training was used, the darkest pixels are usually the training samples.

Distance Image Histogram


Figure 101: Histogram of a Distance Image (number of pixels plotted against distance value)


Figure 101 shows how the histogram of the distance image usually appears. This distri-
bution is called a chi-square distribution, as opposed to a normal distribution, which
is a symmetrical bell curve.

Threshold
The pixels that are the most likely to be misclassified have the higher distance file values
at the tail of this histogram. At some point that the user defines—either mathematically
or visually—the “tail” of this histogram is cut off. The cutoff point is the threshold.

To determine the threshold:

• interactively change the threshold with the mouse, when a distance histogram is
displayed while using the threshold function. This option enables the user to select
a chi-square value by selecting the cut-off value in the distance histogram, or

• input a chi-square parameter or distance measurement, so that the threshold can be calculated statistically.

In both cases, thresholding has the effect of cutting the tail off of the histogram of the
distance image file, representing the pixels with the highest distance values.



Figure 102: Interactive Thresholding Tips. The annotated example histograms in this figure illustrate:

• Smooth chi-square shape — try to find the “breakpoint” where the curve becomes more horizontal, and cut off the tail there.

• Minor mode(s) (peaks) in the curve probably indicate that the class picked up other features that were not intended.

• Not a good class — the signature for this class probably represented a polymodal (multi-peaked) distribution.

• Peak of the curve is shifted from 0 — indicates that the signature mean is off-center from the pixels it represents.

Figure 102 shows some example distance histograms. With each example is an expla-
nation of what the curve might mean, and how to threshold it.


Chi-square Statistics
If the minimum distance classifier was used, then the threshold is simply a certain
spectral distance. However, if Mahalanobis or maximum likelihood were used, then
chi-square statistics are used to compare probabilities (Swain and Davis 1978).

When statistics are used to calculate the threshold, the threshold is more clearly defined
as follows:

T is the distance value at which C% of the pixels in a class have a distance value greater
than or equal to T.

where:

T = the threshold for a class


C% = the percentage of pixels that are believed to be misclassified, known as the
confidence level

T is related to the distance values by means of chi-square statistics. The value χ² (chi-squared) is used in the equation. χ² is a function of:

• the number of bands of data used — known in chi-square statistics as the number of degrees of freedom

• the confidence level

When classifying an image in ERDAS IMAGINE, the classified image automatically has
the degrees of freedom (i.e., number of bands) used for the classification. The chi-square
table is built into the threshold application.

NOTE: In this application of chi-square statistics, the value of χ² is an approximation. Chi-square statistics are generally applied to independent variables (having no covariance), which is not usually true of image data.

A further discussion of chi-square statistics can be found in a statistics text.
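
For illustration, a chi-square threshold can be computed from the confidence level and the number of bands with a general statistics library; the sketch below assumes SciPy is available and simply mirrors the definition of T given above (the chi-square table built into ERDAS IMAGINE serves the same purpose).

```python
from scipy.stats import chi2

def distance_threshold(confidence, n_bands):
    """Distance value T at which `confidence` (e.g., 0.05 for 5%) of the pixels in a
    class are expected to have a distance value greater than or equal to T."""
    return chi2.ppf(1.0 - confidence, df=n_bands)

# Example: threshold for a 5% confidence level with 4 bands (4 degrees of freedom)
print(distance_threshold(0.05, 4))
```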

Use the Classification Threshold utility to perform the thresholding.



Accuracy Assessment

Accuracy assessment is a general term for comparing the classification to geographical
data that are assumed to be true, in order to determine the accuracy of the classification
process. Usually, the assumed-true data are derived from ground truth data.

It is usually not practical to ground truth or otherwise test every pixel of a classified
image. Therefore, a set of reference pixels is usually used. Reference pixels are points
on the classified image for which actual data are (or will be) known. The reference pixels
are randomly selected (Congalton 1991).

NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility to perform an
accuracy assessment for any thematic layer. This layer did not have to be classified by IMAGINE
(e.g., you can run an accuracy assessment on a thematic layer that was classified in ERDAS
Version 7.5 and imported into IMAGINE).

Random Reference Pixels


When reference pixels are selected by the analyst, it is often tempting to select the same
pixels for testing the classification as were used in the training samples. This biases the
test, since the training samples are the basis of the classification. By allowing the
reference pixels to be selected at random, the possibility of bias is lessened or eliminated
(Congalton 1991).

The number of reference pixels is an important factor in determining the accuracy of the
classification. It has been shown that more than 250 reference pixels are needed to
estimate the mean accuracy of a class to within plus or minus five percent (Congalton
1991).

ERDAS IMAGINE uses a square window to select the reference pixels. The size of the
window can be defined by the user. Three different types of distribution are offered for
selecting the random pixels:

• random — no rules will be used

• stratified random — the number of points will be stratified to the distribution of thematic layer classes

• equalized random — each class will have an equal number of random points

Use the Accuracy Assessment utility to generate random reference points.

Accuracy Assessment CellArray


An Accuracy Assessment CellArray is created to compare the classified image with
reference data. This CellArray is simply a list of class values for the pixels in the
classified .img file and the class values for the corresponding reference pixels. The class
values for the reference pixels are input by the user. The CellArray data reside in an
.img file.

Use the Accuracy Assessment CellArray to enter reference pixels for the class values.


Error Reports
From the Accuracy Assessment CellArray, two kinds of reports can be derived.

• The error matrix simply compares the reference points to the classified points in a
c × c matrix, where c is the number of classes (including class 0).

• The accuracy report calculates statistics of the percentages of accuracy, based upon
the results of the error matrix.

When interpreting the reports, it is important to observe the percentage of correctly classified pixels and to determine the nature of errors of the producer and the user.

Use the Accuracy Assessment utility to generate the error matrix and accuracy reports.

Kappa Coefficient
The Kappa coefficient expresses the proportionate reduction in error generated by a
classification process, compared with the error of a completely random classification.
For example, a value of .82 would imply that the classification process was avoiding 82
percent of the errors that a completely random classification would generate
(Congalton 1991).

For more information on the Kappa coefficient, see a statistics manual.
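
A simple sketch of building the error matrix and deriving the overall accuracy and the Kappa coefficient from reference and classified values follows; class values are assumed to run from 0 to the number of classes minus one, and the function name is illustrative only.

```python
import numpy as np

def error_matrix_and_kappa(reference, classified, num_classes):
    """Compare reference pixels with classified pixels.

    reference  : class values of the reference pixels (assumed true)
    classified : class values assigned by the classification
    Returns the c x c error matrix, the overall accuracy, and the Kappa coefficient.
    """
    matrix = np.zeros((num_classes, num_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        matrix[ref, cls] += 1
    n = matrix.sum()
    observed = np.trace(matrix) / n                                   # overall accuracy
    chance = (matrix.sum(axis=0) * matrix.sum(axis=1)).sum() / n**2  # random agreement
    kappa = (observed - chance) / (1.0 - chance)
    return matrix, observed, kappa
```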

Output File

When classifying an .img file, the output file is an .img file with a thematic raster layer.
This file will automatically contain the following data:

• class values

• class names

• color table

• statistics

• histogram

The .img file will also contain any signature attributes that were selected in the ERDAS
IMAGINE Supervised Classification utility.

The class names, values, and colors can be set with the Signature Editor or the Raster Attribute
Editor.



CHAPTER 7
Photogrammetric Concepts

Introduction

This chapter is an introduction to photogrammetric concepts, many of which are equally applicable to traditional and digital photogrammetry. However, the focus here is digital photogrammetry, so topics exclusive to traditional methods are omitted. In addition, some of the presented concepts, such as use of satellite data or automatic image correlation, exist only in the digital realm.

There are numerous sources of image data for both traditional and digital photogram-
metry. This document focuses on three main sources: aerial photographs (metric frame
cameras), SPOT satellite imagery, and Landsat satellite data. Many of the concepts
presented for aerial photographs also pertain to most imagery which has a single
perspective center. Likewise, the SPOT concepts have much in common with other
sensors that also use a linear Charged Coupled Device (CCD) in a pushbroom fashion.
Finally, a significantly different geometric model and approach is discussed for the
Landsat satellite, an across-track scanning device.



Definitions
Photogrammetry is the "art, science and technology of obtaining reliable information
about physical objects and the environment through the process of recording,
measuring and interpreting photographic images and patterns of electromagnetic
radiant imagery and other phenomena." (ASP, 1980)

Photogrammetry was invented in 1851 by Laussedat, and has continued to develop over the last 140 years. Over time, the development of photogrammetry has passed through the phases of Plane Table Photogrammetry, Analog Photogrammetry, Analytical Photogrammetry, and has now entered the phase of Digital Photogrammetry.

The traditional, and largest, application of photogrammetry is to extract topographic information (e.g., topographic maps) from aerial images. However, photogrammetric techniques have also been applied to process satellite images and close range images in order to acquire topographic or non-topographic information of photographed objects.

Prior to the invention of the airplane, photographs taken on the ground were used to
extract the relationships between objects using geometric principles. This was during
the phase of Plane Table Photogrammetry.

In Analog Photogrammetry, starting with stereomeasurement in 1901, optical or mechanical instruments were used to reconstruct three-dimensional geometry from two overlapping photographs. The main product during this phase was topographic maps.

In Analytical Photogrammetry, the computer replaces some expensive optical and mechanical components by substituting analog measurement and calculation with mathematical computation. The resulting devices were analog/digital hybrids. Analytical aerotriangulation, analytical plotters, and orthophoto projectors were the main developments during this phase. Outputs of analytical photogrammetry can be topographic maps, but can also be digital products, such as digital maps and digital elevation models (DEMs).

Digital Photogrammetry is photogrammetry as applied to digital images that are stored and processed on a computer. Digital images can be scanned from photographs or can be directly captured by digital cameras. Many photogrammetric tasks can be highly automated in digital photogrammetry (e.g., automatic DEM extraction and digital orthophoto generation). Digital Photogrammetry is sometimes called Softcopy Photogrammetry. The output products are in digital form, such as digital maps, DEMs, and digital orthophotos saved on computer storage media. Therefore, they can be easily stored, managed, and applied by the user. With the development of digital photogrammetry, photogrammetric techniques are more closely integrated into Remote Sensing and GIS.


Coordinate Systems

There are a variety of coordinate systems used in photogrammetry. This chapter will
reference these systems as described below.

Pixel Coordinates

The file coordinates of a digital image are defined in a pixel coordinate system. A pixel
coordinate system is usually a coordinate system with its origin in the upper-left corner
of the image, the x-axis pointing to the right, the y-axis pointing downward, and the
unit in pixels, as shown by axis c and r in Figure 103. These file coordinates (c,r) can also
be thought of as the pixel column and row number. This coordinate system is refer-
enced as pixel coordinates (c,r) in this chapter.

Image Coordinates

An image coordinate system is usually defined as a two-dimensional coordinate system occurring on the image plane with its origin at the image center, as illustrated by axis x and y in Figure 103. Image coordinates are used to describe positions on the film plane. Image coordinate units are usually millimeters or microns. This coordinate system is referenced as image coordinates (x,y) in this chapter.

An image space coordinate system is identical to image coordinates, except that it adds
a third axis (z). Image space coordinates are used to describe positions inside the camera
and usually use units in millimeters or microns. This coordinate system is referenced as
image space coordinates (x,y,z) in this chapter.

Figure 103: Pixel Coordinates and Image Coordinates (the pixel coordinate system axes c and r originate at the upper-left corner of the image, while the image coordinate system axes x and y originate at the image center)
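
A small sketch of converting between the two systems, assuming square pixels of a known size and an image center located at the center of the pixel grid, is shown below; the function name and arguments are illustrative only.

```python
def pixel_to_image(col, row, n_cols, n_rows, pixel_size_mm):
    """Convert pixel coordinates (c, r) to image coordinates (x, y) in millimeters.

    The origin moves from the upper-left corner to the image center and the
    y-axis is flipped so that it points upward instead of downward.
    """
    x = (col - (n_cols - 1) / 2.0) * pixel_size_mm
    y = ((n_rows - 1) / 2.0 - row) * pixel_size_mm
    return x, y

# Example: the center of a 9000 x 9000 pixel scan at 0.025 mm per pixel maps to (0.0, 0.0)
print(pixel_to_image(4499.5, 4499.5, 9000, 9000, 0.025))
```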



Ground Coordinates

A ground coordinate system is usually defined as a three-dimensional coordinate
system which utilizes a known map projection. Ground coordinates (X,Y,Z) are usually
expressed in feet or meters. The Z value is elevation above mean sea level for a given
vertical datum. This coordinate system is referenced as ground coordinates (X,Y,Z) in
this chapter.

Geocentric and Topocentric Coordinates

Most photogrammetric applications account for earth curvature in their calculations. This is done by adding a correction value or by computing geometry in a coordinate system which includes curvature. Two such systems are geocentric and topocentric coordinates.

A geocentric coordinate system has its origin at the center of the earth ellipsoid. The
ZG-axis equals the rotational axis of the earth, and the XG-axis passes through the
Greenwich meridian. The YG-axis is perpendicular to both the ZG-axis and XG-axis, so
as to create a three-dimensional coordinate system that follows the right hand rule.

A topocentric coordinate system has its origin at the center of the image projected on
the earth ellipsoid. The three perpendicular coordinate axis are defined on a tangential
plane at this center point. The plane is called the reference plane or the local datum. The
x-axis is oriented eastward, the y-axis northward, and the z-axis is vertical to the
reference plane (up).

For simplicity of presentation, the remainder of this chapter will not explicitly reference
geocentric or topocentric coordinates. Basic photogrammetric principles can be
presented without adding this additional level of complexity.

Work Flow

The work flow of photogrammetry can be summarized in three steps: image acquisition, photogrammetric processing, and product output.


Figure 104: Sample Photogrammetric Work Flow (image acquisition: aerial camera film or digital imagery from satellites; image preprocessing: scan aerial film or import digital imagery; photogrammetric processing: triangulation, stereopair creation, elevation model generation, orthorectification, and map feature collection; product output: orthoimages and orthomaps, and a topographic database and topographic maps)

The remainder of this chapter is presented in the same sequence as the items in Figure
104. For each section, the aerial model is presented first, followed by the SPOT model,
when appropriate. A Landsat satellite model is described at the end of the orthorectifi-
cation section.



Image Acquisition

Aerial Camera Film

Exposure Station

Each point in the flight path at which the camera exposes the film is called an exposure station.

Figure 105: Exposure Stations along a Flight Path (the flight path of the airplane is made up of parallel flight lines 1, 2, and 3, with exposure stations marked along each flight line)

Image Scale

The image scale expresses the average ratio between a distance in the image and the
same distance on the ground. It is computed as focal length divided by the flying height
above the mean ground elevation. For example, with an altitude of 1,000m and a focal
length of 15 cm, the image scale (SI) would be 1:6667.

NOTE: The flying height above ground is used, versus the altitude above sea level.

Strip of Photographs

A strip of photographs consists of images captured along a flight-line, normally with
an overlap of 60%. All photos in the strip are assumed to be taken at approximately the
same flying height and with a constant distance between exposure stations. Camera tilt
relative to the vertical is assumed to be minimal.


Block of Photographs

The photographs from the flight path can be combined to form a block. A block of
photographs consists of a number of parallel strips, normally with a sidelap of 20-30%.
Photogrammetric triangulation is performed on the whole block of photographs to
transform images and ground points into a homologous coordinate system.

A regular block of photos is a rectangular block in which the number of photos in each
strip is the same. The figure below shows a block of 5 X 2 photographs.

Figure 106: A Regular (Rectangular) Block of Aerial Photos (strips 1 and 2 are flown in the flying direction with 60% overlap between successive photos in a strip and 20-30% sidelap between the strips)



Digital Imagery from Satellites Digital image data from satellites are distributed on a variety of media, such as tapes
and CD-ROMs. The internal format varies, depending on the specific sensor and data
vendor. This section addresses photogrammetric operations on SPOT satellite images,
though most of the concepts are universal for any pushbroom sensor.

Correction Levels for SPOT Imagery SPOT scenes are delivered at different levels of correction. For example, SPOT Image
Corporation provides two correction levels that are of interest:

• Level 1A images correspond to raw camera data to which only radiometric
corrections have been applied.

• Level 1B images have been corrected for the earth’s rotation and viewing angle,
producing roughly the same ground pixel size throughout the scene. Pixels are
resampled from the level 1A camera data by cubic polynomials. This data is
internally transformed to level 1A before the triangulation calculations are applied.

Refer to "CHAPTER 3: Raster and Vector Data Sources" for more information on satellite
remote sensing and the characteristics of satellite data that can be read into ERDAS.

A SPOT scene covers an area of approximately 60 X 60 km, depending on the inclination
of the sensors. The resolution of one pixel corresponds to about 10 X 10 m on the ground
for panchromatic images. (Off-nadir scenes can cover up to 80 X 60 km.)

Image Preprocessing Images must be read into the computer before processing can begin. Usually the images
are not digitally enhanced prior to photogrammetric processing. Most digital
photogrammetric software packages have basic image enhancement tools. The common
practice is to perform more sophisticated enhancements on the end products (e.g.,
orthoimages or orthomosaics).

Scanning Aerial Film Aerial film must be scanned (digitized) to create a digital image. Once scanned, the
digital image can be imported into a digital photogrammetric system.

Scanning Resolution
The storage requirement for digital image data can be huge. Therefore, obtaining the
optimal pixel size (or scanning density) is often a trade-off between capturing
maximum image information and the digital storage burden. For example, a standard
panchromatic image is 9 by 9 inches (23 x 23 cm). Scanning at 25 microns (roughly 1000
pixels per inch) results in a file with 9000 rows and 9000 columns. Assuming 8 bits per
pixel and no image compression, this file occupies about 81 megabytes. Photogrammetric projects often have hundreds or even thousands of photographs.
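As a rough check of these numbers, the following sketch (Python; the function name and defaults are illustrative) estimates the uncompressed size of a scanned square frame.

def scan_file_size_mb(frame_cm=23.0, pixel_size_um=25.0, bits_per_pixel=8):
    """Estimate the uncompressed size (in megabytes) of a scanned square aerial frame."""
    pixels_per_side = (frame_cm * 10_000) / pixel_size_um   # centimeters -> micrometers
    total_bytes = pixels_per_side ** 2 * bits_per_pixel / 8
    return total_bytes / 1_000_000

# 23 x 23 cm frame at 25 microns: roughly 9,200 x 9,200 pixels, on the order of 85 MB
# (the text rounds to 9,000 x 9,000 pixels, or about 81 MB)
print(round(scan_file_size_mb(), 1))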


Photogrammetric Scanners
Photogrammetric quality scanners are special devices capable of high image quality
and excellent positional accuracy. Use of this type of scanner results in geometric
accuracies similar to traditional analog and analytical photogrammetric instruments.
These scanners are necessary for digital photogrammetric applications which have high
accuracy requirements. These units usually scan only film (either positive or negative),
because film is superior to paper, both in terms of image detail and geometry. These
units usually have an RMSE (Root Mean Square Error) positional accuracy of 4 microns
or less, and are capable of scanning at a maximum resolution of 5 to 10 microns (5
microns is equivalent to approximately 5,000 pixels per inch). The needed pixel
resolution varies depending on the application. Aerial triangulation and feature
collection applications often scan in the 10 to 15 micron range. Orthophoto applications
often use 15 to 30 micron pixels. Color film is less sharp than panchromatic film; therefore,
color ortho applications often use 20 to 40 micron pixels.

Desktop Scanners
Desktop scanners are general purpose devices. They lack the image detail and
geometric accuracy of photogrammetric quality units, but they are much less
expensive. When using a desktop scanner, the user should make sure that the active
area is at least 9 X 9 inches, enabling the entire photo frame to be captured. Desktop
scanners are appropriate for less rigorous uses, such as digital photogrammetry in
support of GIS or remote sensing applications. Calibrating these units improves
geometric accuracy, but the results are still inferior to photogrammetric units. The
image correlation techniques which are necessary for automatic elevation extraction are
often sensitive to scan quality. Therefore, elevation extraction can become problematic
if the scan quality is only marginal.



Photogrammetric Processing

Photogrammetric processing consists of triangulation, stereopair creation, elevation
model generation, orthorectification, and map feature collection.

Triangulation Triangulation establishes the geometry of the camera or sensor relative to objects on the
earth’s surface. It is the first and most critical step of photogrammetric processing.
Figure 107 illustrates the triangulation work flow. First, the interior orientation establishes the geometry inside the camera or sensor. For aerial photographs, fiducial marks
are measured on the digital imagery and camera calibration information is entered. The
interior orientation information for SPOT is already known (they are fixed values). The
final step is to calculate the exterior orientation, which establishes the location and
attitude (rotation angles) of the camera or sensor during the time of image acquisition.
Ground control points aid this process.

Once triangulation is completed, the next step is usually stereopair generation.
However, the user could proceed directly to generating elevation models or
orthoimages.

Figure 107: Triangulation Work Flow
[Flowchart: uncorrected digital imagery is used to calculate the interior orientation (using camera or sensor information) and then the exterior orientation (using ground control points), producing the triangulation results.]


Aerial Triangulation The following discussion assumes that a standard metric aerial camera is being used, in
which the fiducial marks are readily visible on the scanned images and the camera
calibration information is available from an external source.

Aerial triangulation determines the exterior orientation parameters of images and the
three-dimensional coordinates of unknown points, using ground control points or
other kinds of known information. It is an economical technique for measuring a large
number of object points with very high accuracy.

Aerial triangulation is normally carried out for a block of images containing a
minimum of two images. The strip, independent model, and bundle methods are the
common approaches for implementing triangulation, of which bundle block
adjustment is the most mathematically rigorous.

In bundle block adjustment, there are usually image coordinate observations, ground
coordinate point observations, and possibly observations from GPS and satellite orbit
information. The observation equation can be represented as follows:

V = AX – L

where

V = the matrix containing the image coordinate residuals


A = the matrix containing the partial derivatives with respect to the
unknown parameters (exterior orientation parameters and XYZ
ground coordinates)
X = the matrix containing the corrections to the unknown parameters
L = the matrix containing the observations (i.e., image coordinates and
control point coordinates)

The equations can be solved using the iterative least squares adjustment:

X = (AᵀPA)⁻¹ AᵀPL
where

X = the matrix containing the corrections to the unknown parameters


A = the matrix containing the partial derivatives with respect to the
unknown parameters
P = the matrix containing the weights of the observations
L = the matrix containing the observations

Before the triangulation can be computed, the user should acquire images that overlap
in the block, measure the tie points on the images and digitize some control points.
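To make the normal-equation step concrete, here is a minimal numerical sketch (Python with NumPy; the matrices below are placeholder values, not a full photogrammetric model) of one weighted least squares update X = (AᵀPA)⁻¹ AᵀPL.

import numpy as np

def least_squares_update(A, P, L):
    """One weighted least squares solution X = (A^T P A)^-1 A^T P L.

    A: partial derivatives with respect to the unknowns, P: observation weights,
    L: observation vector. In a real bundle adjustment this step is iterated,
    relinearizing A and L around the updated unknowns on each pass.
    """
    N = A.T @ P @ A                      # normal matrix
    return np.linalg.solve(N, A.T @ P @ L)

# Toy example (placeholder numbers only): 4 observations, 2 unknowns
A = np.array([[1.0, 0.2], [0.9, 1.1], [0.1, 1.0], [1.2, 0.8]])
P = np.diag([1.0, 1.0, 0.5, 2.0])        # weights of the observations
L = np.array([0.8, 2.1, 1.1, 1.9])
print(least_squares_update(A, P, L))     # corrections to the unknown parameters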



Interior Orientation of an Aerial Photo
The interior orientation of an image defines the geometry within the camera. During
triangulation, the interior orientation must be available in order to accurately define the
external geometry of the camera. Concepts pertaining to interior orientation are
described in this section.

To record an image, light rays reflected by an object on the ground are projected
through a lens. Ideally, all light rays are straight and intersect at the perspective center.
The light rays are then projected onto the film.

The plane of the film is called the focal plane. A virtual focal plane exists between the
perspective center and the terrain. The virtual focal plane is the same distance (focal
length) from the perspective center as is the plane of the film or scanner. The light rays
intersect both planes in the same manner. Virtual focal planes are often more convenient to diagram, and therefore are often used in place of focal planes in photogrammetric diagrams.

NOTE: In the discussion following, the virtual focal plane is called the “image plane,” and is
used to describe photogrammetric concepts.

Figure 108: Focal and Image Plane
[Diagram: light rays from the terrain intersect at the perspective center; the actual focal plane (of the film) lies on one side of it, and the virtual focal plane (camera image plane) lies between the perspective center and the terrain.]

For purposes of photogrammetric triangulation, a local coordinate system is defined in
each image. The location of each point in the image can be expressed in terms of this
image coordinate system.

The perspective center is projected onto a point in the image plane that lies directly
beneath it. This point is called the principal point. The orthogonal distance from the
perspective center to the image plane is the focal length of the lens.


Figure 109: Image Coordinates, Fiducials, and Principal Point
[Diagram: the file coordinate axes A-XF and A-YF originate at the upper-left corner A; fiducials F1, F2, F3, and F4 mark the frame; the principal point P is the origin of the image coordinate axes x and y.]

Fiducials are four or eight reference markers fixed on the frame of an aerial metric
camera and visible in each exposure as illustrated by points F1, F2, F3, and F4 in Figure
109. The image coordinates of the fiducials are provided in a camera calibration report.
Fiducials are used to compute the transformation from file coordinates to image coordinates.

The file coordinates of a digital image are defined in a pixel coordinate system. For
example, in digital photogrammetry, it is usually a coordinate system with its origin in
the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing
downward, and the unit in pixels, as shown by A-XF and A-YF in Figure 109. These file
coordinates (XF, YF) can also be thought of as the pixel column and row number.

Once the file coordinates of fiducials are measured, the transformation from file coordinates to image coordinates can be carried out. Usually the six-parameter affine transformation is used here:

x = a0 + a1·XF + a2·YF
y = b0 + b1·XF + b2·YF

Where

a0, a1, a2 = affine parameters
b0, b1, b2 = affine parameters
XF, YF = file coordinates
x,y = image coordinates



Once this transformation is in place, image coordinates are directly obtained and the
interior orientation is complete.
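As an illustration, the sketch below (Python with NumPy; the fiducial values are hypothetical, not taken from a real calibration report) fits the six affine parameters by least squares from measured file coordinates and calibrated image coordinates of the fiducials, then applies the transformation.

import numpy as np

def fit_affine(file_xy, image_xy):
    """Solve x = a0 + a1*XF + a2*YF and y = b0 + b1*XF + b2*YF by least squares."""
    XF, YF = file_xy[:, 0], file_xy[:, 1]
    design = np.column_stack([np.ones_like(XF), XF, YF])
    a, *_ = np.linalg.lstsq(design, image_xy[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(design, image_xy[:, 1], rcond=None)
    return a, b

def file_to_image(a, b, XF, YF):
    """Apply the affine transformation to file coordinates (column, row)."""
    return a[0] + a[1] * XF + a[2] * YF, b[0] + b[1] * XF + b[2] * YF

# Hypothetical fiducial measurements: file coordinates (column, row) and calibrated image coordinates (mm)
file_xy = np.array([[250.0, 240.0], [8760.0, 255.0], [8745.0, 8755.0], [240.0, 8740.0]])
image_xy = np.array([[-106.0, 106.0], [106.0, 106.0], [106.0, -106.0], [-106.0, -106.0]])
a, b = fit_affine(file_xy, image_xy)
print(file_to_image(a, b, 4500.0, 4500.0))   # image coordinates of a point near the frame center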

Exterior Orientation
The exterior orientation determines the relationship of an image to the ground
coordinate system. Each aerial camera image has six exterior orientation parameters,
the three coordinates of the perspective center (Xo,Yo,Zo) in the ground coordinate
system, and three rotation angles of (ω, ϕ, κ), as shown in Figure 110.

Figure 110: Exterior Orientation of an Aerial Photo
[Diagram: the perspective center O with the local axes X', Y', Z' and rotation angles ω, ϕ, κ; the image space axes x, y, z with the principal point PP and the image point PI (x, y, -f); the ground coordinate system X, Y, Z with the ground point PG (XG, YG, ZG) and the perspective center coordinates XO, YO, ZO.]


Where

PP = principal point
O = perspective center with ground coordinates (XO, YO, ZO)
O-x, O-y, O-z = image space coordinate system with origin at the
perspective center and the x- and y-axes parallel to the image
coordinate system axes
XG,YG,ZG = ground coordinates
O-X', O-Y', O-Z' = a local coordinate system which is parallel to the ground
coordinate system, but has its origin at the perspective
center. Used for expressing rotation angles (ω, ϕ, κ).
ω = omega rotation angle around the X'-axis
ϕ = phi rotation angle around the Y'-axis
κ = kappa rotation angle around the Z'-axis
PI = point in the image plane
PG = point on the ground

Collinearity Equations
The relationship among image coordinates, ground coordinates, and orientation
parameters is described by the following collinearity equations:

x = –f · [ r11(X – XO) + r21(Y – YO) + r31(Z – ZO) ] / [ r13(X – XO) + r23(Y – YO) + r33(Z – ZO) ]

y = –f · [ r12(X – XO) + r22(Y – YO) + r32(Z – ZO) ] / [ r13(X – XO) + r23(Y – YO) + r33(Z – ZO) ]

Where:

x, y = image coordinates
X, Y, Z = ground coordinates
f = focal length
XO,YO,ZO = ground coordinates of perspective center
r11 - r33 = coefficients of a 3 X 3 rotation matrix defined by angles ω, ϕ,κ, that
transforms the image system to the ground system

The collinearity equations are the most basic principle of photogrammetry.
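A minimal sketch of these equations (Python with NumPy; the rotation matrix convention and all sample values are assumptions for illustration only) projects a ground point into image coordinates.

import numpy as np

def rotation_matrix(omega, phi, kappa):
    """3 x 3 rotation matrix from omega, phi, kappa (radians); one common convention."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def collinearity(ground, perspective_center, R, f):
    """Project ground point (X, Y, Z) to image coordinates (x, y) via the collinearity equations."""
    dX = np.asarray(ground) - np.asarray(perspective_center)       # (X-XO, Y-YO, Z-ZO)
    num_x = R[0, 0]*dX[0] + R[1, 0]*dX[1] + R[2, 0]*dX[2]          # r11, r21, r31 terms
    num_y = R[0, 1]*dX[0] + R[1, 1]*dX[1] + R[2, 1]*dX[2]          # r12, r22, r32 terms
    den   = R[0, 2]*dX[0] + R[1, 2]*dX[1] + R[2, 2]*dX[2]          # r13, r23, r33 terms
    return -f * num_x / den, -f * num_y / den

# Hypothetical exterior orientation: nearly vertical camera 1,500 m above a ground point at 100 m elevation
R = rotation_matrix(0.001, -0.002, 0.05)
x, y = collinearity((1000.0, 2000.0, 100.0), (990.0, 2010.0, 1600.0), R, f=0.153)
print(x, y)   # image coordinates in meters (focal length 153 mm)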



Control for Aerial Triangulation
The distribution and density of ground control is a major factor in the accuracy of
photogrammetric triangulation. Control points for aerial triangulation are created by
identifying points with known ground coordinates in the aerial photos.

A control point is a point with known coordinates in the ground coordinate system,
expressed in the units of the specified map projection. Control points are used to
establish a reference frame for the photogrammetric triangulation of a block of images.

These ground coordinates are typically three-dimensional. They consist of X,Y coordinates in a map projection system and Z coordinates, which are elevation values
expressed in units above datum that are consistent with the map coordinate system.

The user selects these points based on their relation to clearly defined and visible
ground features. Ground control points serve as stable (known) values, so their
accuracy determines the accuracy of the triangulation.

Ground coordinates of control points can be acquired by digitizing existing maps or
from geodetic measurements, such as the global positioning system (GPS), a surveying
instrument, or Electronic Distance Measuring devices (EDMs). For optimal accuracy,
the coordinates should be accurate to within the distance on the ground that is represented by approximately 0.1 to 0.5 pixels in the image.

In triangulation, there can be several types of control points. A full control point
specifies map X,Y coordinates along with a Z (elevation of the point). Horizontal control
only specifies the X,Y, while vertical control only specifies the Z.

Optimizing control distribution is part art and part science, and goes beyond the scope
of this document. However, the example presented in Figure 111 illustrates a specific
case.

General rules for control distribution within a block are:

• Whenever possible, locate control points that lie on multiple images

• Control is needed around the outside of a block

• Control is needed at certain distances within the block


Figure 111: Control Points in Aerial Photographs (block of 8 X 4 photos)
[Diagram: control points (▲) are located along all edges of the block and after the 3rd photo of each strip.]

For optimal results, control points should be measured by geodetic techniques with an
accuracy that corresponds to about 0.1 to 0.5 pixels in the image. Digitization of existing
maps often does not yield this degree of accuracy.

For example, if a photograph was scanned with a resolution of 1000 dpi (9000 X 9000
pixels), the pixel size in the image is 25 microns (0.025mm). For an image scale of
1:40,000, each pixel covers approximately 1.0 X 1.0 meters on the ground. Applying the
above rule, the ground control points should be accurate to about 0.1 to 0.5 meters.
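The arithmetic in that example can be expressed as follows (a small Python sketch; the function names are illustrative).

def ground_pixel_size_m(scan_pixel_size_um, image_scale_denominator):
    """Ground distance covered by one scanned pixel, in meters."""
    return scan_pixel_size_um * 1e-6 * image_scale_denominator

def gcp_accuracy_range_m(ground_pixel_m):
    """Recommended ground control accuracy: roughly 0.1 to 0.5 pixels on the ground."""
    return 0.1 * ground_pixel_m, 0.5 * ground_pixel_m

pixel_m = ground_pixel_size_m(25.0, 40_000)   # 25 micron scan, 1:40,000 photo scale
print(pixel_m)                                # 1.0 m on the ground per pixel
print(gcp_accuracy_range_m(pixel_m))          # (0.1 m, 0.5 m) target control accuracy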

A greater number of known ground points should be available than will actually be
used in the triangulation. These additional points become check points, and can be
used to independently verify the degree of accuracy of the triangulation. This verification, called check point analysis, is discussed on page 287.



Ground control points need not be available in every image, but can be supplemented
by other points which are identified as tie points. A tie point is a point whose ground
coordinates are not known, but can be recognized visually in the overlap or sidelap area
between two or more images. Ground coordinates for tie points are computed during
the photogrammetric triangulation.

Tie points should be visually well-defined in all images. Ideally, they should show good
contrast in two directions, like the corner of a building or a road intersection. Tie points
should also be well distributed over the area of the block. Typically, nine tie points in
each image are adequate for photogrammetric triangulation of aerial photographs. If a
control point already exists in the candidate location of a tie point, the tie point can be
omitted.

Figure 112: Ideal Point Distribution Over a Photograph for Aerial Triangulation
[Diagram: nine tie points (marked x) distributed evenly over a single image.]


In a block of aerial photographs with 60% overlap and 25-30% sidelap, nine points are
sufficient to tie together the block as well as individual strips.

Figure 113: Tie Points in a Block of Photos
[Diagram: nine tie points in each image tie the block together.]

In summary:

• A control point must be visually identifiable in one or more images and have
known ground coordinates. If, in later processing, the ground coordinates for a
control point are found to have low reliability, the control point can be changed to
a tie point.

• If a control point is in the overlap area, it helps to control both images.

• If the ground coordinates of a control point are not used in the triangulation, they
can serve as a check point for independent analysis of the accuracy of the
triangulation.

• A tie point is a point that is visually identifiable in at least two images for which
ground coordinates are unknown.



SPOT Triangulation The SPOT satellite carries two HRV (High Resolution Visible) sensors, each of which
is a pushbroom scanner that takes a sequence of line images while the satellite circles
the earth.

The focal length of the camera optic is 1,084 mm, which is very large relative to the
length of the camera (78 mm). The field of view is 4.12°.

The satellite orbit is circular, north-south and south-north, about 830 km above the
earth, and sun-synchronous. A sun-synchronous orbit is one whose orbital plane precesses
at the same rate that the earth revolves around the sun, so the satellite crosses a given
latitude at the same local solar time on each pass.

For each line scanned, there is a unique perspective center and a unique set of rotation
angles. The location of the perspective center relative to the line scanner is constant for
each line (interior orientation and focal length). Since the motion of the satellite is
smooth and practically linear over the length of a scene, the perspective centers of all
scan lines of a scene are assumed to lie along a smooth line.

Figure 114: Perspective Centers of SPOT Scan Lines
[Diagram: as the satellite moves, the perspective centers of the scan lines lie along a smooth line above the ground; each scan line on the image has its own perspective center.]

The satellite exposure station is defined as the perspective center in ground coordinates
for the center scan line.

The image captured by the satellite is called a scene. A scene (SPOT Pan 1A) is
composed of 6,000 lines. Each of these lines consists of 6000 pixels. Each line is exposed
for 1.5 milliseconds, so it takes 9 seconds to scan the entire scene. (A scene from SPOT
XS 1A is composed of only 3000 lines and 3000 columns and has 20 meter pixels, while
Pan has 10 meter pixels.)

NOTE: This section will address only the 10 meter Pan scenario.


A pixel in the SPOT image records the light detected by one of the 6,000 light-sensitive
elements in the camera. Each pixel is defined by file coordinates (column and row
numbers).

The physical dimension of a single, light-sensitive element is 13 X 13 microns. This is
the pixel size in image coordinates.

The center of the scene is the center pixel of the center scan line. It is the origin of the
image coordinate system.

Figure 115: Image Coordinates in a Satellite Scene
[Diagram: the file coordinate axes A-XF and A-YF originate at the upper-left corner A of the scene (6000 lines/rows by 6000 pixels/columns); the image coordinate axes x and y originate at the scene center C.]

Where

A = origin of file coordinates


A-XF, A-YF = file coordinate axes
C = origin of image coordinates (center of scene)
C-x, C-y = image coordinate axes



SPOT Interior Orientation
Figure 116 shows the interior orientation of a satellite scene. The transformation
between file coordinates and image coordinates is constant.

Figure 116: Interior Orientation of a SPOT Scene
[Diagram: the perspective centers O1 ... Ok ... On are aligned along the orbiting direction (N to S), each at focal length f above its scan line in the image plane; each scan line k has a principal point PPk, an image point Pk with coordinate xk, and a bundle of light rays lk.]

For each scan line, a separate bundle of light-rays is defined, where

Pk = image point
xk = x value of image coordinates for scan line k
f = focal length of the camera
Ok = perspective center for scan line k, aligned along the orbit
PPk = principal point for scan line k
lk = light rays for scan line, bundled at perspective center Ok


SPOT Exterior Orientation

SPOT satellite geometry is stable and the sensor parameters (e.g., focal length) are well-known. However, the triangulation of SPOT scenes is somewhat unstable because of
the narrow, almost parallel bundles of light rays.

Ephemeris data for the orbit are available in the header file of SPOT scenes. They give
the satellite’s position in three-dimensional, geocentric coordinates at 60-second
increments. The velocity vector and some rotational velocities relating to the attitude of
the camera are given, as well as the exact time of the center scan line of the scene.

The header of the data file of a SPOT scene contains ephemeris data, which provides
information about the recording of the data and the satellite orbit.

Ephemeris data that are used in satellite triangulation are:

• the position of the satellite in geocentric coordinates (with the origin at the center
of the earth) to the nearest second,

• the velocity vector, which is the direction of the satellite’s travel,

• attitude changes of the camera, and

• the exact time of exposure of the center scan line of the scene.

The geocentric coordinates included with the ephemeris data are converted to a local
ground system for use in triangulation. The center of a satellite scene is interpolated
from the header data.

Light rays in a bundle defined by the SPOT sensor are almost parallel, lessening the
importance of the satellite’s position. Instead, the inclination angles of the cameras
become the critical data.

The scanner can produce a nadir view. Nadir is the point directly below the camera.
SPOT has off-nadir viewing capability. Off-nadir refers to any point that is not directly
beneath the satellite, but is off to an angle (i.e., east or west of the nadir).

A stereo-scene is achieved when two images of the same area are acquired on different
days from different orbits, one taken east of the other. For this to occur, there must be
significant differences in the inclination angles.

Inclination is the angle between a vertical on the ground at the center of the scene and
a light ray from the exposure station. This angle defines the degree of off-nadir viewing
when the scene was recorded. The cameras can be tilted in increments of 0.6° to a
maximum of 27° to the east (negative inclination) or west (positive inclination).



Figure 117: Inclination of a Satellite Stereo-Scene (View from North to South)
[Diagram: sensors at exposure stations O1 (orbit 1) and O2 (orbit 2) view the same scene coverage centered at C on the earth’s surface (ellipsoid); I- marks the eastward inclination and I+ the westward inclination from the vertical.]

Where

C = center of the scene


I- = Eastward inclination
I+ = Westward inclination
O1,O2 = exposure stations (perspective centers of imagery)

The orientation angle of a satellite scene is the angle between a perpendicular to the
center scan line and the North direction. The spatial motion of the satellite is described
by the velocity vector. The real motion of the satellite above the ground is further
distorted by earth rotation.

The velocity vector of a satellite is the satellite’s velocity if measured as a vector through
a point on the spheroid. It provides a technique to represent the satellite’s speed as if
the imaged area were flat instead of being a curved surface.


Figure 118: Velocity Vector and Orientation Angle of a Single Scene
[Diagram: at the scene center C, the orientation angle O is measured between North and a perpendicular to the center scan line; the velocity vector V points along the orbital path.]

Where

O = orientation angle
C = center of the scene
V = velocity vector

Satellite triangulation provides a model for calculating the spatial relationship between
the SPOT sensor and the ground coordinate system for each line of data. This
relationship is expressed as the exterior orientation, which consists of:

• the perspective center of the center scan line,

• the change of perspective centers along the orbit,

• the three rotations of the center scan line, and

• the changes of angles along the orbit.

In addition to fitting the bundle of light rays to the known points, satellite triangulation
also accounts for the motion of the satellite by determining the relationship of the
perspective centers and rotation angles of the scan lines. It is assumed that the satellite
travels in a smooth motion as a scene is being scanned. Therefore, once the exterior
orientation of the center scan line is determined, the exterior orientation of any other
scan line is calculated based on the distance of that scan line from the center and the
changes of the perspective center location and rotation angles.



Bundle adjustment for triangulating a satellite scene is similar to the bundle adjustment
used for aerial photos. Least squares adjustment is used to derive a set of parameters
that comes the closest to fitting the control points to their known ground coordinates,
and to intersecting tie points.

The resulting parameters of satellite bundle adjustment are:

• the ground coordinates of the perspective center of the center scan line,

• the rotation angles for the center scan line,

• the coefficients, from which the perspective center and rotation angles of all other
scan lines can be calculated, and

• the ground coordinates of all tie points.

Collinearity Equations
Modified collinearity equations are applied to analyze the exterior orientation of
satellite scenes. Each scan line has a unique perspective center and individual rotation
angles. When the satellite moves from one scan line to the next, these parameters
change. Due to the smooth motion of the satellite in orbit, the changes are small and can
be modeled by low order polynomial functions.
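One common way to express this (a sketch of the idea only, not the exact ERDAS formulation) is to model each exterior orientation parameter as a low order polynomial in the scan-line index measured from the center line.

def orientation_at_line(center_value, coeffs, line, center_line):
    """Evaluate one exterior orientation parameter for a given scan line.

    The parameter (e.g., omega, phi, kappa, or a perspective center coordinate)
    is modeled as its value at the center scan line plus a low order polynomial
    in the distance from the center line: p(t) = c + a1*t + a2*t^2 + ...
    """
    t = line - center_line
    value = center_value
    for power, a in enumerate(coeffs, start=1):
        value += a * t ** power
    return value

# Hypothetical second order model of the kappa angle across a 6,000-line scene
kappa_line_100 = orientation_at_line(0.12, [1.5e-6, -2.0e-10], line=100, center_line=3000)
print(kappa_line_100)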

Control for SPOT Triangulation


Both control and tie points can be used for satellite triangulation of a stereo scene. For
triangulating a single scene, only control points are used. It is then called space
resection. A minimum of six control points is necessary for good triangulation results.

The best locations for control points in the scene are shown below.

Figure 119: Ideal Point Distribution Over a Satellite Scene for Triangulation
[Diagram: control points (x) spread evenly across the scene, with the horizontal scan lines running across the image.]


In some cases, there are no reliable control points available in the area for which a DEM
or orthophoto is to be created. In this instance, a local coordinate system may be
defined. The coordinate center is the center of the scene expressed in the longitude and
latitude taken from the header. When a local coordinate system is defined, the satellite
positions, velocity vectors, and rotation angles from the image header are used to define
a datum.

The ground coordinates of tie points will be computed in such a case. The resulting
DEM would display relative elevations, and the coordinate system would approximately correspond to the real system of this area. However, the coordinate system is
limited by the accuracy of the ephemeris information.

This might be especially useful for remote islands, in which case points along the shoreline can be very easily detected as tie points.

Triangulation Accuracy Measures The triangulation solution usually provides the standard deviation, the covariance
matrix of unknowns, the residuals of observations, and check point analysis to aid in
determining the accuracy of triangulation.

Standard Deviation ( σ 0 )
Each time the triangulation program completes one iteration, the σ0 value (square root
of variance of unit weight) is calculated. It gives the mean error of the image coordinate
measurements used in the adjustment. This value decreases as the bundle fits better to
the control and tie points.

NOTE: The σ0 value usually should not be larger than 0.25 to 0.75 pixels.

Precision and Residual Values


After the adjustment, information describing the precision of the computed parameters,
as well as the residuals of the observations, can be computed. The variance-covariance
matrix expresses the theoretical precision of the unknown parameters, which depends
on the geometry of the photo block. Residuals describe how much the measured image
coordinates differ from their computed locations after the adjustment.

Check Point Analysis


An independent measure is needed to describe the accuracy of computed ground
coordinates of tie points.

A check point analysis compares photogrammetrically computed ground coordinates
of tie points with their known coordinates, measured independently by geodetic
methods or by digitizing existing maps. The result of this analysis is an RMS error,
which is split into a horizontal (X,Y) and a vertical (Z) component.

NOTE: In the case of aerial mapping, the vertical accuracy is usually lower than the horizontal
accuracy by a factor of 1.5. For satellite stereo-scenes, the vertical accuracy depends on the
dimension of the inclination angles (the separation of the two scenes).
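A minimal sketch of the check point computation (Python; all coordinate values are hypothetical) splits the RMS error into horizontal and vertical components.

import math

def check_point_rmse(computed, known):
    """RMS error of check points, split into horizontal (X,Y) and vertical (Z) components."""
    n = len(computed)
    sq_h = sum((cx - kx) ** 2 + (cy - ky) ** 2 for (cx, cy, _), (kx, ky, _) in zip(computed, known))
    sq_v = sum((cz - kz) ** 2 for (_, _, cz), (_, _, kz) in zip(computed, known))
    return math.sqrt(sq_h / n), math.sqrt(sq_v / n)

# Hypothetical check points: photogrammetrically computed vs. independently surveyed (X, Y, Z)
computed = [(1000.2, 2000.1, 150.4), (1500.0, 2500.3, 162.1), (1800.1, 2100.0, 140.8)]
known    = [(1000.0, 2000.0, 150.0), (1499.8, 2500.0, 161.5), (1800.3, 2099.8, 141.2)]
print(check_point_rmse(computed, known))   # (horizontal RMSE, vertical RMSE) in ground units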



Robust Estimation
Using robust estimation or data-snooping techniques, some gross errors in observations can be detected and flagged.

Editing Points and Repeating the Adjustment


To increase the accuracy of the triangulation, you may refine the input image coordinates to eliminate the gross errors in the observations. The residual of each image
coordinate measurement can be a good reference to help decide whether to remove the
observation from further computations. However, because of the different redundancy
of the different measurements, the residuals are not equivalent to their observation
errors. A better way is to consider a residual together with its redundancy. After editing
the input image coordinates, the triangulation can be repeated.

Stereo Imagery To perform photogrammetric stereo operations, two views of the same ground area
captured from different locations are required. A stereopair is a set of two images that
overlap, providing two views of the terrain in the overlap area. The relief displacement
in a stereopair is required to extract three-dimensional information about the terrain.

Though digital photogrammetric principles can be applied to any type of imagery, this
document focuses on two main sources: aerial photographs (metric frame cameras) and
SPOT satellite imagery. Many of the concepts presented for aerial photographs also
pertain to most imagery that has a single perspective center. Likewise, the SPOT
concepts have much in common with other sensors that also use a linear Charge-Coupled Device (CCD) in a pushbroom fashion.

Aerial Stereopairs For decades, aerial photographs have been used to create topographic maps in analog
and analytical stereoplotters. Aerial photographs are taken by specialized cameras,
mounted so that the lens is close to vertical, pointing out of a hole in the bottom of an
airplane. Photos are taken in sequence at regular intervals. Neighboring photos along
the flight line usually overlap by 60% or more. A stereopair can be constructed from any
two overlapping photographs that share a common area on the ground, most
commonly along the flight line.


Figure 120: Aerial Stereopair (60% Overlap)
[Diagram: two consecutive photos taken along the flight direction overlap by 60%.]

SPOT Stereopairs Satellite stereopairs are created by two scenes of the same terrain that are recorded from
different viewpoints. Because of its off-nadir viewing capability, it is easy to acquire
stereopairs from the SPOT satellite. The SPOT stereopairs are recorded from different
orbits on different days.

Figure 121: SPOT Stereopair (80% Overlap)
[Diagram: the satellite scans the same terrain during a first and a second visit from different orbits, producing two scenes with 80% overlap.]

Epipolar Stereopairs Epipolar stereopairs are created from triangulated, overlapping imagery using the
process in Figure 122. Digital photogrammetry creates a new set of digital images by
resampling the overlap region into a stereo orientation. This orientation, called epipolar
geometry, is characterized by relief displacement only occurring in one dimension
(along the flight line). A feature unique to digital photogrammetry is that there is no
need to create a relative stereo model before proceeding to absolute map coordinates.



Figure 122: Epipolar Stereopair Creation Work Flow
[Flowchart: uncorrected digital imagery and triangulation results are used to generate the epipolar stereopair.]


Generate Elevation Models Elevation models are generated from overlapping imagery (Figure 123). There are two
methods in digital photogrammetry. Method 1 uses the original images and triangulation results. Method 2 uses only the epipolar stereopairs (which are assumed to
include geometric information derived from the triangulation results).

Figure 123: Generate Elevation Models Work Flow (Method 1)
[Flowchart: uncorrected digital imagery and triangulation results are used to extract elevation information, producing the generated DTM.]

Figure 124: Generate Elevation Models Work Flow (Method 2)
[Flowchart: the epipolar stereopair is used to extract elevation information, producing the generated DTM.]

Traditional Methods The traditional method of deriving elevations was to visualize the stereopair in three
dimensions using an analog or analytical stereo plotter. The user would then place
points and breaklines at critical terrain locations. An alternative method was to set the
pointer to a fixed elevation and then proceed to trace contour lines.

Digital Methods Both of the traditional methods described above can also be used in digital photogrammetry utilizing specialized stereo viewing hardware. However, a powerful new
method is introduced with the advent of all digital systems - image correlation. The
general idea is to use pattern matching algorithms to locate the same ground features
on two overlapping photographs. The triangulation information is then used to
calculate ground (X,Y,Z) values for each correlated feature.



Elevation Model Definitions There are a variety of terms used to describe various digital elevation models. Their
usage in the industry is not always consistent. However, in this section the following
meanings will be assigned to specific terms, and this terminology will be used consistently.

DTMs
A digital terrain model (DTM) is a discrete expression of terrain surface in a data array,
consisting of a group of planimetric coordinates (X,Y) and the elevations of the ground
points and breaklines. A DTM can be in the regular grid form, or it can be represented
with irregular points. Consider DTM as being a general term for elevation models, with
DEMs and TINs (defined below) as specific representations.

A DTM can be extracted from stereo imagery based on the automatic matching of points
in the overlap areas of a stereo model. The stereo model can be a satellite stereo scene
or a pair of digitized aerial photographs.

The resulting DTM can be used as input to geoprocessing software. In particular, it can
be utilized to produce an orthophoto or used in an appropriate 3-D viewing package
(e.g., IMAGINE Virtual GIS).

DEMs
A digital elevation model (DEM) is a specific representation of DTMs in which the
elevation points consist of a regular grid. Often, DEMs are stored as raster files in which
each grid cell value contains an elevation value.

TINs
A triangulated irregular network (TIN) is a specific representation of DTMs in which
elevation points can occur at irregular intervals. In addition to elevation points, breaklines are often included in TINs. A breakline is an elevation polyline, in which each
vertex has its own X, Y, Z value.

DEM Interpolation The direct results from most image matching techniques are irregular and discrete
object surface points. In order to generate a DEM, the irregular set of object points needs
to be interpolated. For each grid point, the elevation is computed by a surface interpolation method.

There are many algorithms for DEM interpolation. Some of them have been introduced
in "CHAPTER 1: Raster Data." Other methods, including Least Square Collocation and
Finite Elements, are also used for DEM interpolation.

Often, to describe the terrain surface more accurately, breaklines should be added and
used in the DEM interpolation. TIN based interpolation methods can deal with breaklines more efficiently.
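As one simple example of a surface interpolation method (inverse distance weighting; this is an illustration only, not the specific algorithm used by any particular package), irregular object points can be resampled to a grid node as follows.

import numpy as np

def idw_elevation(points, x, y, power=2.0):
    """Inverse distance weighted elevation at grid location (x, y).

    points: sequence of (X, Y, Z) object points from image matching.
    Breaklines and more rigorous methods (e.g., TIN based interpolation)
    are ignored in this simple sketch.
    """
    pts = np.asarray(points, dtype=float)
    d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
    if np.any(d < 1e-9):                       # grid node coincides with a measured point
        return float(pts[np.argmin(d), 2])
    w = 1.0 / d ** power
    return float(np.sum(w * pts[:, 2]) / np.sum(w))

# Hypothetical matched object points (X, Y, Z) and one DEM grid node
points = [(10.0, 10.0, 102.0), (25.0, 12.0, 108.5), (15.0, 30.0, 99.0), (30.0, 28.0, 111.2)]
print(idw_elevation(points, 20.0, 20.0))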


Image Matching Image matching refers to the automatic acquisition of corresponding image points on
the overlapping area of two images.

For more information on image matching, see "Image Matching Techniques" on page 294.

Image Pyramid
Because of the large amounts of image data, the image pyramid is usually adopted in
the image matching techniques to reduce the computation time and to increase the
matching reliability. The pyramid is a data structure consisting of the same image
represented several times, at a decreasing spatial resolution each time. Each level of the
pyramid contains the image at a particular resolution.

The matching process is performed at each level of resolution. The search is first
performed at the lowest resolution level and subsequently at each higher level of
resolution. Figure 125 shows a four-level image pyramid.

Figure 125: Image Pyramid for Matching at Coarse to Full Resolution
[Diagram: Level 4 - 64 x 64 pixels, resolution 1:8 (matching begins on level 4); Level 3 - 128 x 128 pixels, 1:4; Level 2 - 256 x 256 pixels, 1:2; Level 1 - 512 x 512 pixels, full resolution 1:1 (matching finishes on level 1).]
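A minimal sketch of building such a pyramid (Python with NumPy, using simple 2 x 2 block averaging; production systems typically apply more careful filtering) is shown below.

import numpy as np

def build_pyramid(image, levels=4):
    """Return [level 1 (full resolution), level 2, ...], halving resolution at each level."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w]
        # average each 2 x 2 block of pixels to form the next (coarser) level
        coarser = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid

full = np.random.randint(0, 256, (512, 512)).astype(float)
for level, img in enumerate(build_pyramid(full), start=1):
    print(level, img.shape)        # (512, 512), (256, 256), (128, 128), (64, 64)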

Epipolar Image Pair


The epipolar image pair is a stereopair without y-parallax. It can be generated from the
original stereopair if the orientation parameters are known. When image matching
algorithms are applied to the epipolar stereopair, the search can be constrained to the
epipolar line to significantly reduce search times and false matching. However, some of
the image detail may be lost during the epipolar resampling process.



Image Matching Techniques The image matching methods can be divided into three categories:
• area based matching

• feature based matching

• relation based matching

Area Based Matching Area based matching can also be called signal based matching. This method determines the correspondence between two image areas according to the similarity of their
gray level values. The cross correlation and least squares correlation techniques are
well-known methods for area based matching.

Correlation Windows
Area based matching uses correlation windows. These windows consist of a local
neighborhood of pixels. One example of correlation windows is square neighborhoods
(e.g., 3 X 3, 5 X 5, 7 X 7 pixels). In practice, the windows vary in shape and dimension,
based on the matching technique. Area correlation uses the characteristics of these
windows to match ground feature locations in one image to ground features on the
other.

A reference window is the source window on the first image, which remains at a
constant location. Its dimensions are usually square in size (e.g., 3 X 3, 5 X 5, etc.). Search
windows are candidate windows on the second image that are evaluated relative to the
reference window. During correlation, many different search windows are examined
until a location is found that best matches the reference window.

Correlation Calculations
Two correlation calculations are described below: cross correlation and least squares
correlation. Most area based matching calculations, including these methods,
normalize the correlation windows. Therefore, it is not necessary to balance the contrast
or brightness prior to running correlation. Cross correlation is more robust in that it
requires a less accurate a priori position than least squares. However, its precision is
limited to 1.0 pixels. Least squares correlation can achieve precision levels of 0.1 pixels,
but requires an a priori position that is accurate to about 2 pixels. In practice, cross correlation is often followed by least squares.


Cross Correlation
Cross correlation computes the correlation coefficient of the gray values between the
template window and the search window, according to the following equation:

ρ = Σ [g1(c1,r1) – ḡ1]·[g2(c2,r2) – ḡ2] / √( Σ [g1(c1,r1) – ḡ1]² · Σ [g2(c2,r2) – ḡ2]² )

with

ḡ1 = (1/n) Σ g1(c1,r1)        ḡ2 = (1/n) Σ g2(c2,r2)

where ḡ1 and ḡ2 are the mean gray values of the two windows, and all sums are taken over the pixels i,j of the correlation window.

where

ρ = the correlation coefficient


g(c,r) = the gray value of the pixel (c,r)
c1,r1 = the pixel coordinates on the left image
c2,r2 = the pixel coordinates on the right image
n = the total number of pixels in the window
i, j = pixel index into the correlation window

When using the area based cross correlation, it is necessary to have a good initial
position for the two correlation windows. Also, if the contrast in the windows is very
poor, the correlation will fail.
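The sketch below (Python with NumPy; window sizes, placement, and the search strategy are illustrative) computes this correlation coefficient for a reference window against candidate search windows near an initial position.

import numpy as np

def correlation_coefficient(ref, candidate):
    """Normalized cross correlation of two equally sized gray value windows."""
    g1 = ref - ref.mean()
    g2 = candidate - candidate.mean()
    denom = np.sqrt(np.sum(g1 ** 2) * np.sum(g2 ** 2))
    return float(np.sum(g1 * g2) / denom) if denom > 0 else 0.0   # poor contrast -> no match

def best_match(reference, search_image, top_left_guess, search_radius=5):
    """Shift a search window around an initial position and keep the highest correlation."""
    h, w = reference.shape
    r0, c0 = top_left_guess
    best = (-2.0, top_left_guess)
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            window = search_image[r0 + dr:r0 + dr + h, c0 + dc:c0 + dc + w]
            if window.shape == reference.shape:
                rho = correlation_coefficient(reference, window)
                best = max(best, (rho, (r0 + dr, c0 + dc)))
    return best   # (correlation coefficient, matched window position)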

Least Squares Correlation


Least squares correlation uses the least squares estimation to derive parameters that
best fit a search window to a reference window. This technique accounts for both gray
scale and geometric differences, making it especially useful when ground features on
one image look somewhat different on the other image (differences which occur when
the surface terrain is quite steep or when the viewing angles are quite different).

Least squares correlation is iterative. The parameters calculated during the initial pass
are used in the calculation of the second pass and so on, until an optimum solution has
been determined. Least squares matching can result in high positional accuracy (about
0.1 pixels). However, it is sensitive to initial approximations. The initial coordinates for
the search window prior to correlation must be accurate to about 2 pixels or better.

When least squares correlation fits a search window to the reference window, both
radiometric (pixel gray values) and geometric (location, size, and shape of the search
window) transformations are calculated.



For example, suppose the change in gray values between two correlation windows can
be represented as a linear relationship. Also, assume that the change in the window’s
geometry can be represented by an affine transformation.

NOTE: The following formulas do not use the coordinate system nomenclature
established elsewhere in this chapter. The pixel coordinate values are presented as (x,y) instead
of (c,r).

g2(x2,y2) = h0 + h1·g1(x1,y1)
x2 = a0 + a1·x1 + a2·y1
y2 = b0 + b1·x1 + b2·y1

where

x1,y1 = the pixel coordinate in the reference window


x2,y2 = the pixel coordinate in the search window
g1(x1,y1) = the gray value of pixel (x1,y1)
g2(x2,y2) = the gray value of pixel (x2,y2)
h0, h1 = linear gray value transformation parameters
a0, a1, a2 = affine geometric transformation parameters
b0, b1, b2 = affine geometric transformation parameters
Based on this assumption, the error equation for each pixel can be derived, as is shown
in the following equation:

v = (a0 + a1·x1 + a2·y1)·gx + (b0 + b1·x1 + b2·y1)·gy – h0 – h1·g1(x1,y1) + ∆g

with ∆g = g2(x2,y2) – g1(x1,y1)

where gx and gy are the gradients of g2(x2,y2).


Feature Based Matching Feature based matching determines the correspondence between two image features.
Most feature based techniques match extracted point features (this is called feature
point matching), as opposed to other features, such as lines or complex objects. Poor
contrast areas can be avoided with feature based matching.

In order to implement feature based matching, the image features must initially be
extracted. There are several well-known operators for feature point extraction.
Examples include:

• Moravec Operator

• Dreschler Operator

• Förstner Operator

After the features are extracted, the attributes of the features are compared between two
images. The feature pair with the attributes which are the best fit will be recognized as
a match.
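As an illustration of feature point extraction, here is a simplified sketch of a Moravec-style interest operator (Python with NumPy; the window size and threshold are arbitrary choices, and the published operator also includes refinements such as local non-maximum suppression that are omitted here).

import numpy as np

def moravec_interest(image, window=3, threshold=500.0):
    """Return (row, col) of pixels whose minimum directional intensity variation is high.

    For each pixel, the sum of squared gray value differences is computed for
    windows shifted in four directions; the minimum of these sums is the
    interest value. A high minimum variation indicates a corner-like feature.
    """
    img = image.astype(float)
    half = window // 2
    rows, cols = img.shape
    points = []
    for r in range(half + 1, rows - half - 1):
        for c in range(half + 1, cols - half - 1):
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            variations = []
            for dr, dc in [(0, 1), (1, 0), (1, 1), (1, -1)]:
                shifted = img[r - half + dr:r + half + 1 + dr, c - half + dc:c + half + 1 + dc]
                variations.append(np.sum((patch - shifted) ** 2))
            if min(variations) > threshold:
                points.append((r, c))
    return points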

Relation Based Matching Relation based matching is also called structure based matching. This kind of matching
technique uses not only the image features, but also the relation among the features.
With relation based matching, the corresponding image structures can be recognized
automatically, without any a priori information. However, the process is time-
consuming, since it deals with varying types of information. Relation based matching
can also be applied for the automatic recognition of control points.



Orthorectification Orthorectification takes a raw digital image and applies an elevation model (DTM) and
triangulation results to create an orthoimage (digital orthophoto). The DTM can be
sourced from an externally derived product, or generated from the raw digital images
earlier in the photogrammetric process (see Figures 126 and 127).

Figure 126: Orthorectification Work Flow (Method 1)
[Flowchart: uncorrected digital imagery, the generated DTM, and triangulation results feed the orthorectify imagery step, producing the generated orthoimage.]

Figure 127: Orthorectification Work Flow (Method 2)
[Flowchart: uncorrected digital imagery, an external DTM, and triangulation results feed the orthorectify imagery step, producing the generated orthoimage.]

An image or photograph with an orthographic projection is one for which every point
looks as if an observer were looking straight down at it, along a line of sight that is
orthogonal (perpendicular) to the earth.


Figure 128 illustrates the relationship of a remotely-sensed image to the orthographic
projection. The digital orthophoto is a representation of the surface projected onto the
plane of zero elevation, which is called the datum, or reference plane.

Figure 128: Orthographic Projection
[Diagram: a satellite sensor or camera images the terrain through a perspective projection; the orthographic projection maps the terrain onto the reference plane (elevation zero).]

The resulting image is known as a digital orthophoto or orthoimage. An aerial photo
or satellite scene transformed by the orthographic projection yields a map that is free of
most significant geometric distortions.

Geometric Distortions When a remotely-sensed image or an aerial photograph is recorded, there is inherent
geometric distortion caused by terrain and by the angle of the sensor or camera to the
ground. In addition, there are distortions caused by earth curvature, atmospheric
diffraction, the camera or sensor itself (e.g., radial lens distortion), and the mechanics of
acquisition (e.g., for SPOT, earth rotation and change in orbital position during acquisition). In the following material, only the most significant of these distortions are
presented. These are terrain, sensor position, and rotation angles, as well as earth
curvature for small-scale images.



Aerial and SPOT Orthorectification
Creating Digital Orthophotos
The following are necessary to create a digital orthophoto:

• a satellite image or scanned aerial photograph,

• a digital terrain model (DTM) of the area covered by the image, and

• the orientation parameters of the sensor or camera.

In overlap regions of orthoimage mosaics, the digital orthophoto can be used, which
minimizes problems with contrast, cloud cover, occlusions, and reflections from water
and snow.

Relief displacement is corrected for by taking each pixel of a DTM and finding the equivalent position in the satellite or aerial image. A brightness value is determined for this
location based on resampling of the surrounding pixels. The brightness value, elevation,
and orientation are used to calculate the equivalent location in the orthoimage file.

Figure 129: Digital Orthophoto - Finding Gray Values
[Diagram: each DTM cell holding a ground point P with elevation Z is projected through the perspective center O (focal length f) to the image point P1; the gray value found there is assigned to the corresponding orthoimage cell.]

Where

P = ground point
P1 = image point
O = perspective center (origin)
X,Z = ground coordinates (in DTM file)
f = focal length
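A highly simplified sketch of this procedure (Python; the projection function could be the collinearity sketch shown earlier in the triangulation discussion, and nearest neighbor resampling is used here for brevity).

import numpy as np

def orthorectify(image, dtm, ortho_origin, cell_size, pixel_from_image_xy, project):
    """Fill an orthoimage with the same shape as the DTM grid.

    project(X, Y, Z) -> image coordinates (x, y), e.g., the collinearity equations;
    pixel_from_image_xy(x, y) -> (row, col) in the scanned image (interior orientation).
    """
    rows, cols = dtm.shape
    ortho = np.zeros_like(dtm)
    x0, y0 = ortho_origin                       # upper-left ground coordinates of the orthoimage
    for i in range(rows):
        for j in range(cols):
            X = x0 + j * cell_size
            Y = y0 - i * cell_size
            Z = dtm[i, j]
            x, y = project(X, Y, Z)             # ground -> image coordinates
            r, c = pixel_from_image_xy(x, y)    # image coordinates -> scanned pixel
            if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
                ortho[i, j] = image[r, c]       # nearest neighbor resampling
    return ortho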


The resulting orthoimages have similar basic characteristics to images created by other
means of rectification, such as polynomial warping or rubber sheeting. On any rectified
image, a map ground coordinate can be quickly calculated for a pixel position. The
orthorectification process almost always explicitly models the ground terrain and
sensor attitude (position and rotation angles), which makes it much more accurate for
off-nadir imagery, larger image scales, and mountainous regions. Also, orthorectification often requires fewer control points than other methods.

Resampling methods used are nearest neighbor, bilinear interpolation, and cubic
convolution.

See "CHAPTER 8: Rectification" for a complete explanation of rectification and resampling


methods.

Generally, when the cell sizes of orthoimage pixels are selected, they should be similar
or larger than the cell sizes of the original image. For example, if the image was scanned
9K X 9K, 1 pixel would represent 0.025mm on the image. Assuming that the image scale
(SI) of this photo is 1:40,000, then the cell size on the ground is about 1m. For the
orthoimage, it would be appropriate to choose a pixel spacing of 1m or larger. Choosing
a smaller pixel size would oversample the original image.

For SPOT Pan images, a cell size of 10 x 10 meters is appropriate. Any further
enlargement from the original scene to the orthophoto would not improve the image
detail.

Landsat Orthorectification Landsat TM or Landsat MSS sensor systems have a complex geometry which includes
factors such as a rotating mirror inside the sensor, changes in orbital position during
acquisition, and earth rotation. The resulting “zero level” imagery requires sophisticated rectification that is beyond the capabilities of many end users. For this reason,
almost all Landsat data formats have already been preprocessed to minimize these
distortions. Applying simple polynomial rectification techniques to these formats
usually fulfills most georeferencing needs when the terrain is relatively flat. However,
imagery of mountainous regions needs to account for relief displacement for effective
georeferencing. A solution to this problem is discussed below.

This section illustrates just one example of how to correct the relief distortion by using the
polynomial formulation.

Vertical imagery taken by a line scanner, such as Landsat TM or Landsat MSS, can be
used with an existing DTM to yield an orthoimage. No information about the sensor or
the orbit of the satellite is needed. Instead, a transformation is calculated using information about the ground region and image capture. The correction takes into account:

• elevation

• local earth curvature

• distance from nadir

• flying height above datum



The image coordinates are adjusted for the above factors, and a least squares regression
method similar to the one used for rectification is used to get a polynomial transformation between image and ground coordinates.

Vertical imagery is assumed to be captured by Landsat satellites.

Approximating the Nadir Line


The nadir line is a mathematically derived line based on the nadir. The nadir is a point
directly beneath the satellite. For vertically viewed imagery, the nadir point for a scan
line is assumed to be the center of the line; connecting these centers gives the nadir line.

[Diagram: within the image plane, the nadir line runs down the center of the image between the image edges; each scan line crosses the nadir line at its nadir point, and an image point is shown away from the nadir.]

The edges of a Landsat image can be determined by a search based on a gray level
threshold between the image and background fill values. Sample points are determined
along the edges of the image. Each edge line is then obtained by a least squares
regression. The nadir line is found by averaging the left and right edge lines.


For simplicity, each line (the four edges and the nadir line) can be approximated by a
straight line without losing generality.

NOTE: The following formulas do not use the coordinate system nomenclature
established elsewhere in this chapter. The pixel coordinate values are presented as (x,y) instead
of (c,r).

left edge: x = a0 + a1y


right edge: x = b0 + b1y
nadir line: x = c0 + c1y
where

x,y = image coordinates,


a0,b0,c0 = intercepts of straight lines, and
a1,b1,c1 = slopes of straight lines.
and

c0 = 0.5 * (a0 + b0)


c1 = 0.5 * ( a1 + b1)

Determining a Point’s Distance from Nadir


The distance (d) of a point from the nadir is calculated based on its position along the
scan line, as follows:

d = [ √(1 + g1²) / (1 – c1·g1) ] · (x – c0 – c1·y)          (1)
Where g1 is the slope of the scan line (y = g0 +g1x), which can be obtained based on the
top and bottom edges with a method similar to the one described for the left and right
edges.



Displacement
Displacement is the degree of geometric distortion for a point that is not on the nadir
line. A point on the nadir line represents an area in the image with zero distortion. The
displacement of a point is based on its relationship to the nadir point in the scan line in
which the point is located.

Figure 130: Image Displacement
[Diagram: the exposure station lies at flying height H above the datum; the displacement ∆d appears in the image plane; R is the radius of local earth curvature, and α and β are the angles at the earth’s center between the nadir and the image point.]

where

R = radius of local earth curvature at the nadir point


H = flying height of satellite above datum at the nadir point
∆d = displacement
d = distance of image point from nadir point
Z = elevation of the ground point
α = angle between nadir and image point before adjustment for terrain
displacement
β = angle between nadir and image point by vertical view


Solving for Displacement (∆d)


The displacement of an image point is expressed in the following equations:

(R + Z)·sin β / [ (R + H) – (R + Z)·cos β ] = R·sin α / [ R + H – R·cos α ]          (2)

∆d / d = 1 – (tan β / tan α)          (3)
Considering α and β are very tiny values, the following approximations can be used
with sufficient accuracy:

cos α ≈ 1
cos β ≈ 1
Then, an explicit approximate equation can be derived from equations (1), (2), and (3):

∆d = [ √(1 + g1²) / (1 – c1·g1) ] · (Z / H) · [ (R + H) / (R + Z) ] · (x – c0 – c1·y)          (4)

Solving for Transformations (F1, F2)


Before performing the polynomial transformation, the displacement is applied. Thus,
the polynomial equations become:

x – [ 1 / (1 – c1·g1) ] · (Z / H) · [ (R + H) / (R + Z) ] · (x – c0 – c1·y) = F1(X, Y)

y – [ g1 / (1 – c1·g1) ] · (Z / H) · [ (R + H) / (R + Z) ] · (x – c0 – c1·y) = F2(X, Y)

where

X,Y,Z = 3-D ground coordinates


x,y = image coordinates
F1, F2 = polynomial expressions
and where

F1 = A0 + A1·X + A2·Y + A3·X² + A4·X·Y + A5·Y² + ...

F2 = B0 + B1·X + B2·Y + B3·X² + B4·X·Y + B5·Y² + ...
and where

Ai, Bi = polynomial transformation coefficients



Solving for Image Coordinates
Once all the polynomial coefficients are derived using least squares regression, the
following pair of equations can easily be solved for image coordinates (x,y) during
orthocorrection:

(1.0 – p)·x + c1·p·y = F1(X, Y) – c0·p

g1·p·x + (1.0 – c1·g1·p)·y = F2(X, Y) – c0·g1·p

where

p = [ 1 / (1 – c1·g1) ] · (Z / H) · [ (R + H) / (R + Z) ]
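The following sketch (Python with NumPy; all numeric inputs and the F1, F2 values are illustrative only) shows how, during orthocorrection, the image coordinates (x, y) could be recovered for a ground point by evaluating the displacement factor p and solving the 2 x 2 linear system above.

import numpy as np

def p_factor(Z, H, R, c1, g1):
    """Displacement factor p = [1 / (1 - c1*g1)] * (Z/H) * [(R+H)/(R+Z)]."""
    return (1.0 / (1.0 - c1 * g1)) * (Z / H) * ((R + H) / (R + Z))

def solve_image_coordinates(F1, F2, p, c0, c1, g1):
    """Solve the pair of linear equations for image coordinates (x, y):

    (1 - p)x + c1*p*y         = F1 - c0*p
    g1*p*x + (1 - c1*g1*p)*y  = F2 - c0*g1*p
    """
    A = np.array([[1.0 - p, c1 * p],
                  [g1 * p, 1.0 - c1 * g1 * p]])
    b = np.array([F1 - c0 * p, F2 - c0 * g1 * p])
    x, y = np.linalg.solve(A, b)
    return x, y

# Illustrative values only: nearly vertical scan geometry over modest terrain
p = p_factor(Z=500.0, H=705_000.0, R=6_370_000.0, c1=0.001, g1=-0.002)
print(solve_image_coordinates(F1=3120.4, F2=2875.9, p=p, c0=3000.0, c1=0.001, g1=-0.002))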


Map Feature Collection Feature collection is the process of identifying, delineating, and labeling various types
of natural and man-made phenomena from remotely-sensed images. The features are
represented by attribute points, lines, and polygons. General categories of features
include elevation models, hydrology, infrastructure, and land cover. There can be many
different elements within a general category. For instance, infrastructure can be broken
down into roads, utilities, and buildings. To achieve high levels of positional accuracy,
photogrammetric processing is applied to the imagery prior to collecting features (see
Figures 131 and 132).

Figure 131: Feature Collection Work Flow (Method 1): Stereopair -> Collect Features Stereoscopically -> Collected Map Features

Figure 132: Feature Collection Work Flow (Method 2): Generated Orthoimage -> Collect Features Monoscopically -> Collected Map Features

Stereoscopic Collection

Method 1 (Figure 131), which uses a stereopair as the image backdrop, is the most
common approach. Viewing the stereopair in three dimensions provides greater image
content and the ability to obtain three-dimensional feature ground coordinates (X,Y,Z).

Monoscopic Collection

Method 2 (Figure 132), which uses an orthoimage as the image backdrop, works well
for non-urban areas and/or smaller image scales. The features are collected from
orthoimages while viewing them in mono. Therefore, only X and Y ground coordinates
can be obtained. Monoscopic collection from orthoimages has no special hardware
requirements, making orthoimages an ideal image source for many applications.



Product Output

Orthoimages

Orthoimages are the end product of orthorectification. Once created, these digital
images can be enhanced, merged with other data sources, and mosaicked with adjacent
orthoimages. The resulting digital file makes an ideal image backdrop for many appli-
cations, including feature collection, visualization, and input into GIS/Remote sensing
systems. Orthoimages have very good positional accuracy, making them an excellent
primary data source for all types of mapping.

Orthomaps

Orthomaps use orthoimages, or orthoimage mosaics, to produce an imagemap
product. This product is similar to a standard map in that it usually includes additional
information, such as map coordinate grids, scale bars, north arrows, etc. The images are
often clipped to represent the same ground area and map projection as standard map
series. In addition, other geographic data can be superimposed on the image. For
example, items from a topographic database (e.g., roads, contour lines, and feature
descriptions) can improve the interpretability of the orthomap.

Topographic Database

Features obtained from elevation extraction and feature collection can serve as primary
inputs into a topographic database. This database can then be utilized by GIS and map
publishing systems.

Topographic Maps

Topographic maps are the traditional end product of the photogrammetric process. In
the digital era, topographic maps are often produced by map publishing systems which
utilize a topographic database.


CHAPTER 8
Rectification

Introduction

Raw, remotely sensed image data gathered by a satellite or aircraft are representations
of the irregular surface of the earth. Even images of seemingly “flat” areas are distorted
by both the curvature of the earth and the sensor being used. This chapter covers the
processes of geometrically correcting an image so that it can be represented on a planar
surface, conform to other images, and have the integrity of a map.

A map projection system is any system designed to represent the surface of a sphere or
spheroid (such as the earth) on a plane. There are a number of different map projection
methods. Since flattening a sphere to a plane causes distortions to the surface, each map
projection system compromises accuracy between certain properties, such as conser-
vation of distance, angle, or area. For example, in equal area map projections, a circle of
a specified diameter drawn at any location on the map will represent the same total
area. This is useful for comparing land use area, density, and many other applications.
However, to maintain equal area, the shapes, angles, and scale in parts of the map may
be distorted (Jensen 1996).

There are a number of map coordinate systems for determining location on an image.
These coordinate systems conform to a grid, and are expressed as X,Y (column, row)
pairs of numbers. Each map projection system is associated with a map coordinate
system.

Rectification is the process of transforming the data from one grid system into another
grid system using an nth order polynomial. Since the pixels of the new grid may not
align with the pixels of the original grid, the pixels must be resampled. Resampling is
the process of extrapolating data values for the pixels on the new grid from the values
of the source pixels.

Registration
In many cases, images of one area that are collected from different sources must be used
together. To be able to compare separate images pixel by pixel, the pixel grids of each
image must conform to the other images in the data base. The tools for rectifying image
data are used to transform disparate images to the same coordinate system.
Registration is the process of making an image conform to another image. A map
coordinate system is not necessarily involved. For example, if image A is not rectified
and it is being used with image B, then image B must be registered to image A, so that
they conform to each other. In this example, image A is not rectified to a particular map
projection, so there is no need to rectify image B to a map projection.



Georeferencing
Georeferencing refers to the process of assigning map coordinates to image data. The
image data may already be projected onto the desired plane, but not yet referenced to
the proper coordinate system. Rectification, by definition, involves georeferencing,
since all map projection systems are associated with map coordinates. Image-to-image
registration involves georeferencing only if the reference image is already georefer-
enced. Georeferencing, by itself, involves changing only the map coordinate infor-
mation in the image file. The grid of the image does not change.

Geocoded data are images that have been rectified to a particular map projection and
pixel size, and usually have had radiometric corrections applied. It is possible to
purchase image data that is already geocoded. Geocoded data should be rectified only
if they must conform to a different projection system or be registered to other rectified
data.

Latitude/Longitude
Latitude/Longitude is a spherical coordinate system that is not associated with a map
projection. Lat/Lon expresses locations in the terms of a spheroid, not a plane.
Therefore, an image is not usually “rectified” to Lat/Lon, although it is possible to
convert images to Lat/Lon, and some tips for doing so are included in this chapter.

You can view map projection information for a particular file using the ERDAS IMAGINE
Image Information utility. Image Information allows you to modify map information that is
incorrect. However, you cannot rectify data using Image Information. You must use the Recti-
fication tools described in this chapter.

The properties of map projections and of particular map projection systems are discussed in
"CHAPTER 11: Cartography" and "APPENDIX C: Map Projections."

Orthorectification Orthorectification is a form of rectification that corrects for terrain displacement and
can be used if there is a digital elevation model (DEM) of the study area. In relatively
flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial
photographs of buildings), where a high degree of accuracy is required, orthorectifi-
cation is recommended.

See "CHAPTER 7: Photogrammetric Concepts" for more information on orthocorrection.

When to Rectify

Rectification is necessary in cases where the pixel grid of the image must be changed to
fit a map projection system or a reference image. There are several reasons for rectifying
image data:

• scene-to-scene comparisons of individual pixels in applications, such as change


detection or thermal inertia mapping (day and night comparison)

• developing GIS data bases for GIS modeling

• identifying training samples according to map coordinates prior to classification

• creating accurate scaled photomaps

• overlaying an image with vector data, such as ARC/INFO

• comparing images that are originally at different scales

• extracting accurate distance and area measurements

• mosaicking images

• performing any other analyses requiring precise geographic locations

Before rectifying the data, one must determine the appropriate coordinate system for
the data base. To select the optimum map projection and coordinate system, the
primary use for the data base must be considered.

If the user is doing a government project, the projection may be pre-determined. A


commonly used projection in the United States government is State Plane. Use an equal
area projection for thematic or distribution maps and conformal or equal area projec-
tions for presentation maps. Before selecting a map projection, consider the following:

• How large or small an area will be mapped? Different projections are intended for
different size areas.

• Where on the globe is the study area? Polar regions and equatorial regions require
different projections for maximum accuracy.

• What is the extent of the study area? Circular, north-south, east-west, and oblique
areas may all require different projection systems (ESRI 1992).



When to Georeference Only

Rectification is not necessary if there is no distortion in the image. For example, if an
image file is produced by scanning or digitizing a paper map that is in the desired
projection system, then that image is already planar and does not require rectification
unless there is some skew or rotation of the image. Scanning and digitizing produce
images that are planar, but do not contain any map coordinate information. These
images need only to be georeferenced, which is a much simpler process than rectifi-
cation. In many cases, the image header can simply be updated with new map
coordinate information. This involves redefining:

• the map coordinate of the upper left corner of the image, and

• the cell size (the area represented by each pixel).

This information is usually the same for each layer of an image (.img) file, although it
could be different. For example, the cell size of band 6 of Landsat TM data is different
than the cell size of the other bands.

Use the Image Information utility to modify image file header information that is incorrect.

Disadvantages of Rectification

During rectification, the data file values of rectified pixels must be resampled to fit into
a new grid of pixel rows and columns. Although some of the algorithms for calculating
these values are highly reliable, some spectral integrity of the data can be lost during
rectification. If map coordinates or map units are not needed in the application, then it
may be wiser not to rectify the image. An unrectified image is more spectrally correct
than a rectified image.

Classification
Some analysts recommend classification before rectification, since the classification will
then be based on the original data values. Another benefit is that a thematic file has only
one band to rectify instead of the multiple bands of a continuous file. On the other hand,
it may be beneficial to rectify the data first, especially when using Global Positioning
System (GPS) data for the ground control points. Since these data are very accurate, the
classification may be more accurate if the new coordinates help to locate better training
samples.

Thematic Files
Nearest neighbor is the only appropriate resampling method for thematic files, which
may be a drawback in some applications. The available resampling methods are
discussed in detail later in this chapter.

Rectification Steps

NOTE: Registration and rectification involve similar sets of procedures. Throughout this
documentation, many references to rectification also apply to image-to-image registration.

Usually, rectification is the conversion of data file coordinates to some other grid and
coordinate system, called a reference system. Rectifying or registering image data on
disk involves the following general steps, regardless of the application:

1. Locate ground control points.

2. Compute and test a transformation matrix.

3. Create an output image file with the new coordinate information in the header. The pix-
els must be resampled to conform to the new grid.

Images can be rectified on the display (in a Viewer) or on the disk. Display rectification
is temporary, but disk rectification is permanent, because a new file is created. Disk
rectification involves:

• rearranging the pixels of the image onto a new grid, which conforms to a plane in
the new map projection and coordinate system, and

• inserting new information to the header of the file, such as the upper left corner
map coordinates and the area represented by each pixel.

Ground Control Points

Ground control points (GCPs) are specific pixels in an image for which the output map
coordinates (or other output coordinates) are known. GCPs consist of two X,Y pairs of
coordinates:

• source coordinates — usually data file coordinates in the image being rectified

• reference coordinates — the coordinates of the map or reference image to which


the source image is being registered

The term “map coordinates” is sometimes used loosely to apply to reference coordi-
nates and rectified coordinates. These coordinates are not limited to map coordinates.
For example, in image-to-image registration, map coordinates are not necessary.

GCPs in ERDAS IMAGINE

Any ERDAS IMAGINE image can have one GCP set associated with it. The GCP set is
stored in the image file (.img) along with the raster layers. If a GCP set exists for the top
file that is displayed in the Viewer, then those GCPs can be displayed when the GCP
Tool is opened.

In the CellArray of GCP data that displays in the GCP Tool, one column shows the point
ID of each GCP. The point ID is a name given to GCPs in separate files that represent
the same geographic location. Such GCPs are called corresponding GCPs.

A default point ID string is provided (such as “GCP #1”), but the user can enter his or
her own unique ID strings to set up corresponding GCPs as needed. Even though only
one set of GCPs is associated with an image file, one GCP set can include GCPs for a
number of rectifications by changing the point IDs for different groups of corre-
sponding GCPs.

Entering GCPs

Accurate ground control points are essential for an accurate rectification. From the
ground control points, the rectified coordinates for all other points in the image are
extrapolated. Select many GCPs throughout the scene. The more dispersed the GCPs
are, the more reliable the rectification will be. GCPs for large-scale imagery might
include the intersection of two roads, airport runways, utility corridors, towers, or
buildings. For small-scale imagery, larger features such as urban areas or geologic
features may be used. Landmarks that can vary (e.g., the edges of lakes or other water
bodies, vegetation, etc.) should not be used.

The source and reference coordinates of the ground control points can be entered in the
following ways:

• They may be known a priori, and entered at the keyboard.

• Use the mouse to select a pixel from an image in the Viewer. With both the source
and destination Viewers open, enter source coordinates and reference coordinates
for image-to-image registration.

• Use a digitizing tablet to register an image to a hardcopy map.

Information on the use and setup of a digitizing tablet is discussed in "CHAPTER 2: Vector
Layers."


Digitizing Tablet Option


If GCPs will be digitized from a hardcopy map and a digitizing tablet, accurate base
maps must be collected. The user should try to match the resolution of the imagery with
the scale and projection of the source map. For example, 1:24,000 scale USGS
quadrangles make good base maps for rectifying Landsat TM and SPOT imagery.
Avoid using maps over 1:250,000, if possible. Coarser maps (i.e., 1:250,000) are more
suitable for imagery of lower resolution (i.e., AVHRR) and finer base maps (i.e.,
1:24,000) are more suitable for imagery of finer resolution (i.e., Landsat and SPOT).

Mouse Option
When entering GCPs with the mouse, the user should try to match coarser resolution
imagery to finer resolution imagery (i.e., Landsat TM to SPOT) and avoid stretching
resolution spans greater than a cubic convolution radius (a 4 × 4 area). In other words,
the user should not try to match Landsat MSS to SPOT or Landsat TM to an aerial
photograph.

How GCPs are Stored


GCPs entered with the mouse are stored in the .img file, and those entered at the
keyboard or digitized using a digitizing tablet are stored in a separate file with the
extension .gcc.

Refer to "APPENDIX B: File Formats and Extensions" for more information on the format of
.img and .gcc files.

Orders of Transformation

Polynomial equations are used to convert source file coordinates to rectified map
coordinates. Depending upon the distortion in the imagery, the number of GCPs used,
and their locations relative to one another, complex polynomial equations may be
required to express the needed transformation. The degree of complexity of the
polynomial is expressed as the order of the polynomial. The order is simply the highest
exponent used in the polynomial.

The order of transformation is the order of the polynomial used in the transformation.
ERDAS IMAGINE allows 1st- through nth-order transformations. Usually, 1st-order or
2nd-order transformations are used.

You can specify the order of the transformation you want to use in the Transform Editor.

A discussion of polynomials and order is included in "APPENDIX A: Math Topics".

Transformation Matrix
A transformation matrix is computed from the GCPs. The matrix consists of coeffi-
cients which are used in polynomial equations to convert the coordinates. The size of
the matrix depends upon the order of transformation. The goal in calculating the coeffi-
cients of the transformation matrix is to derive the polynomial equations for which
there is the least possible amount of error when they are used to transform the reference
coordinates of the GCPs into the source coordinates. It is not always possible to derive
coefficients that produce no error. For example, in Figure 133, GCPs are plotted on a
graph and compared to the curve that is expressed by a polynomial.
Figure 133: Polynomial Curve vs. GCPs (GCPs plotted as reference X coordinate against source X coordinate, compared to the fitted polynomial curve)

Every GCP influences the coefficients, even if there is not a perfect fit of each GCP to the
polynomial that the coefficients represent. The distance between the GCP reference
coordinate and the curve is called RMS error, which is discussed later in this chapter.
The least squares regression method is used to calculate the transformation matrix
from the GCPs. This common method is discussed in statistics textbooks.
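To make the least squares step concrete, the sketch below fits the six 1st-order coefficients from a set of corresponding GCPs. It is an illustration only (numpy's lstsq standing in for the regression described above), not the ERDAS IMAGINE implementation.

import numpy as np

def fit_first_order(source, reference):
    # source and reference are (n, 2) arrays of corresponding GCP coordinates.
    # Each output coordinate is modeled as: out = k0 + k1 * x_in + k2 * y_in.
    source = np.asarray(source, dtype=float)
    reference = np.asarray(reference, dtype=float)
    design = np.column_stack([np.ones(len(source)), source[:, 0], source[:, 1]])
    coeffs_x, *_ = np.linalg.lstsq(design, reference[:, 0], rcond=None)
    coeffs_y, *_ = np.linalg.lstsq(design, reference[:, 1], rcond=None)
    return coeffs_x, coeffs_y   # three coefficients each, matching Equation 3 below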

Linear Transformations

A 1st-order transformation is a linear transformation. A linear transformation can change:

• location in X and/or Y

• scale in X and/or Y

• skew in X and/or Y

• rotation

First-order transformations can be used to project raw imagery to a planar map


projection, convert a planar map projection to another planar map projection, and when
rectifying relatively small image areas. The user can perform simple linear transforma-
tions to an image displayed in a Viewer or to the transformation matrix itself. Linear
transformations may be required before collecting GCPs on the displayed image. The
user can reorient skewed Landsat TM data, rotate scanned quad sheets according to the
angle of declination stated in the legend, and rotate descending data so that north is up.

A 1st-order transformation can also be used for data that are already projected onto a
plane. For example, SPOT and Landsat Level 1B data are already transformed to a
plane, but may not be rectified to the desired map projection. When doing this type of
rectification, it is not advisable to increase the order of transformation if at first a high
RMS error occurs. Examine other factors first, such as the GCP source and distribution,
and look for systematic errors.

ERDAS IMAGINE provides the following options for 1st-order transformations:

• scale

• offset

• rotate

• reflect

Scale
Scale is the same as the zoom option in the Viewer, except that the user can specify
different scaling factors for X and Y.

If you are scaling an image in the Viewer, the zoom option will undo any changes to the scale
that you make, and vice versa.

Offset
Offset moves the image by a user-specified number of pixels in the X and Y directions.

Rotation
For rotation, the user can specify any positive or negative number of degrees for
clockwise and counterclockwise rotation. Rotation occurs around the center pixel of the
image.



Reflection
Reflection options enable the user to perform the following operations:

• left to right reflection

• top to bottom reflection

• left to right and top to bottom reflection (equal to a 180° rotation)

Linear adjustments are available from the Viewer or from the Transform Editor. You can
perform linear transformations in the Viewer and then load that transformation to the
Transform Editor, or you can perform the linear transformations directly on the transformation
matrix.

Figure 134 illustrates how the data are changed in linear transformations.

Figure 134: Linear Transformations (panels: original image, change of scale in X, change of scale in Y, change of skew in X, change of skew in Y, rotation)


The transformation matrix for a 1st-order transformation consists of six coefficients—


three for each coordinate (X and Y)...

a1 a2 a3

b1 b2 b3

...which are used in a 1st-order polynomial as follows (Equation 3):

xo = b1 + b2 xi + b3 yi
yo = a1 + a2 xi + a3 yi
where:

xi and yi are source coordinates (input)


xo and yo are rectified coordinates (output)
the coefficients of the transformation matrix are as above

The position of the coefficients in the matrix and the assignment of the coefficients in the
polynomial is an ERDAS IMAGINE convention. Other representations of a 1st-order transfor-
mation matrix may take a different form.
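Applying Equation 3 in code is a one-line evaluation per coordinate. The sketch below, with hypothetical coefficient tuples a and b in the convention above, simply evaluates the two polynomials; it could be fed the coefficients returned by the least squares sketch shown earlier.

def apply_first_order(a, b, xi, yi):
    # a = (a1, a2, a3) and b = (b1, b2, b3), following Equation 3.
    xo = b[0] + b[1] * xi + b[2] * yi
    yo = a[0] + a[1] * xi + a[2] * yi
    return xo, yo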

Nonlinear Transformations

Transformations of the 2nd-order or higher are nonlinear transformations. These
transformations can correct nonlinear distortions. The process of correcting nonlinear
distortions is also known as rubber sheeting. Figure 135 illustrates the effects of some
nonlinear transformations.

Figure 135: Nonlinear Transformations (an original image and some possible outputs)



Second-order transformations can be used to convert Lat/Lon data to a planar
projection, for data covering a large area (to account for the earth’s curvature), and with
distorted data (for example, due to camera lens distortion). Third-order transforma-
tions are used with distorted aerial photographs, on scans of warped maps and with
radar imagery. Fourth-order transformations can be used on very distorted aerial
photographs.

The transformation matrix for a transformation of order t contains this number of


coefficients:

2 × Σ i,   summed for i = 1 to t + 1     (Equation 4)

It is multiplied by two for the two sets of coefficients—one set for X, one for Y.

An easier way to arrive at the same number is:

(t + 1) × (t + 2) EQUATION 5

Clearly, the size of the transformation matrix increases with the order of the transfor-
mation.


Higher Order Polynomials


The polynomial equations for a t-order transformation take this form:

xo = A + Bx + Cy + Dx² + Exy + Fy² + ... + Qx^i y^j + ... + Ωy^t     (Equation 6)

where:

A, B, C, D, E, F ... Q ... Ω are coefficients

t is the order of the polynomial

i and j are exponents

All combinations of xi times yj are used in the polynomial expression, such that:

i + j ≤ t     (Equation 7)

The equation for yo takes the same format with different coefficients. An example of 3rd-
order transformation equations for X and Y, using numbers, is:

xo = 5 + 4x - 6y + 10x² - 5xy + 1y² + 3x³ + 7x²y - 11xy² + 4y³

yo = 13 + 12x + 4y + 1x² - 21xy + 11y² - 1x³ + 2x²y + 5xy² + 12y³

These equations use a total of 20 coefficients, or

(3 + 1) × (3 + 2) EQUATION 8
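The exponent pairs allowed by Equation 7 are easy to enumerate, which also confirms the coefficient counts quoted above. A small illustrative snippet (the function name is an assumption):

def polynomial_terms(t):
    # All (i, j) exponent pairs with i + j <= t, one term x**i * y**j per pair.
    return [(i, j) for i in range(t + 1) for j in range(t + 1 - i)]

# A 3rd-order transformation uses 10 terms per output coordinate,
# so 20 coefficients in total, matching Equation 8.
assert len(polynomial_terms(3)) == 10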

Effects of Order

The computation and output of a higher-order polynomial equation are more complex
than that of a lower-order polynomial equation. Therefore, higher-order polynomials
are used to perform more complicated image rectifications. To understand the effects of
different orders of transformation in image rectification, it is helpful to see the output
of various orders of polynomials.

The example below uses only one coordinate (X), instead of two (X,Y), which are used
in the polynomials for rectification. This enables the user to draw two-dimensional
graphs that illustrate the way that higher orders of transformation affect the output
image.

NOTE: Because only the X coordinate is used in these examples, the number of GCPs used is less
than the numbers required to actually perform the different orders of transformation.

Coefficients like those presented in this example would generally be calculated by the
least squares regression method. Suppose GCPs are entered with these X coordinates:

Source X Reference X
Coordinate Coordinate
(input) (output)

1 17
2 9
3 1

These GCPs allow a 1st-order transformation of the X coordinates, which is satisfied by


this equation (the coefficients are in parentheses):

x r = ( 25 ) + ( – 8 ) x i EQUATION 9

where:

xr= the reference X coordinate


xi= the source X coordinate

This equation takes on the same format as the equation of a line (y = mx + b). In mathe-
matical terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is
also known as a linear transformation. This equation is graphed in Figure 136.


Figure 136: Transformation Example—1st-Order (the line xr = (25) + (-8)xi plotted as reference X coordinate against source X coordinate)

However, what if the second GCP were changed as follows?

Source X Reference X
Coordinate Coordinate
(input) (output)

1 17
2 7
3 1

These points are plotted against each other in Figure 137.

Figure 137: Transformation Example—2nd GCP Changed (the three GCPs plotted as reference X coordinate against source X coordinate)

A line cannot connect these points, which illustrates that they cannot be expressed by
a 1st-order polynomial, like the one above. In this case, a 2nd-order polynomial equation
will express these points:

xr = (31) + (-16)xi + (2)xi²     (Equation 10)

Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn
in Figure 138.



Figure 138: Transformation Example—2nd-Order (the curve xr = (31) + (-16)xi + (2)xi² plotted as reference X coordinate against source X coordinate)

What if one more GCP were added to the list?

Source X Reference X
Coordinate Coordinate
(input) (output)

1 17
2 7
3 1
4 5

Figure 139: Transformation Example—4th GCP Added (the new GCP at (4,5) does not fall on the 2nd-order curve xr = (31) + (-16)xi + (2)xi²)

As illustrated in Figure 139, this fourth GCP does not fit on the curve of the 2nd-order
polynomial equation. So that all of the GCPs would fit, the order of the transformation
could be increased to 3rd-order. The equation and graph in Figure 140 would then
result.


Figure 140: Transformation Example—3rd-Order (the curve xr = (25) + (-5)xi + (-4)xi² + (1)xi³ passes through all four GCPs)

Figure 140 illustrates a 3rd-order transformation. However, this equation may be


unnecessarily complex. Performing a coordinate transformation with this equation may
cause unwanted distortions in the output image for the sake of a perfect fit for all the
GCPs. In this example, a 3rd-order transformation probably would be too high, because
the output pixels would be arranged in a different order than the input pixels, in the X
direction.

Source X Reference X
Coordinate Coordinate
(input) (output)

1 xo(1) = 17
2 xo(2) = 7
3 xo(3) = 1
4 xo(4) = 5

xo(1) > xo(2) > xo(4) > xo(3)     (Equation 11)

17 > 7 > 5 > 1     (Equation 12)

Figure 141: Transformation Example—Effect of a 3rd-Order Transformation (input pixels at X coordinates 1, 2, 3, 4 map to output X coordinates 17, 7, 1, 5, so their left-to-right order in the output image becomes 3, 4, 2, 1)



In this case, a higher order of transformation would probably not produce the desired
results.
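The coefficients quoted in this example can be reproduced with a quick least squares fit; the snippet below uses numpy purely for illustration and returns the 3rd-order coefficients of Figure 140, highest power first.

import numpy as np

source = np.array([1.0, 2.0, 3.0, 4.0])
reference = np.array([17.0, 7.0, 1.0, 5.0])

# Four points determine the cubic exactly: x**3 - 4x**2 - 5x + 25.
print(np.polyfit(source, reference, 3))   # approximately [ 1. -4. -5. 25.]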

Minimum Number of GCPs

Higher orders of transformation can be used to correct more complicated types of
distortion. However, to use a higher order of transformation, more GCPs are needed.
For instance, three points define a plane. Therefore, to perform a 1st-order transfor-
mation, which is expressed by the equation of a plane, at least three GCPs are needed.
Similarly, the equation used in a 2nd-order transformation is the equation of a parab-
oloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are
required to perform a 2nd-order transformation. The minimum number of points
required to perform a transformation of order t equals:

( (t + 1) (t + 2) ) / 2     (Equation 13)
Use more than the minimum number of GCPs whenever possible. Although it is
possible to get a perfect fit, it is rare, no matter how many GCPs are used.

For 1st- through 10th-order transformations, the minimum number of GCPs required
to perform a transformation is listed in the table below.

Order of Transformation    Minimum GCPs Required
1 3
2 6
3 10
4 15
5 21
6 28
7 36
8 45
9 55
10 66

For the best rectification results, you should always use more than the minimum number of
GCPs and they should be well distributed.
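Equation 13 and the table above are simple to reproduce; a short illustrative check in Python (the function name is an assumption):

def min_gcps(t):
    # Minimum GCPs for an order-t transformation, Equation 13.
    return (t + 1) * (t + 2) // 2

# [min_gcps(t) for t in range(1, 11)] gives [3, 6, 10, 15, 21, 28, 36, 45, 55, 66].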

GCP Prediction and Matching

Automated GCP prediction enables the user to pick a GCP in either coordinate system
and automatically locate that point in the other coordinate system based on the current
transformation parameters.

Automated GCP matching is a step beyond GCP prediction. For image to image recti-
fication, a GCP selected in one image is precisely matched to its counterpart in the other
image using the spectral characteristics of the data and the transformation matrix. GCP
matching enables the user to fine tune a rectification for highly accurate results.

Both of these methods require an existing transformation matrix. A transformation


matrix is a set of coefficients used to convert the coordinates to the new projection.
Transformation matrices are covered in more detail below.

GCP Prediction
GCP prediction is a useful technique to help determine if enough ground control points
have been gathered. After selecting several GCPs, select a point in either the source or
the destination image, then use GCP prediction to locate the corresponding GCP on the
other image (map). This point is determined based on the current transformation
matrix. Examine the automatically generated point and see how accurate it is. If it is
within an acceptable range of accuracy, then there may be enough GCPs to perform an
accurate rectification (depending upon how evenly dispersed the GCPs are). If the
automatically generated point is not accurate, then more GCPs should be gathered
before rectifying the image.

GCP prediction can also be used when applying an existing transformation matrix to
another image in a data set. This saves time in selecting another set of GCPs by hand.
Once the GCPs are automatically selected, those that do not meet an acceptable level of
error can be edited.

GCP Matching
In GCP matching the user can select which layers from the source and destination
images to use. Since the matching process is based on the reflectance values, select
layers that have similar spectral wavelengths, such as two visible bands or two infrared
bands. The user can perform histogram matching to ensure that there is no offset
between the images. The user can also select the radius from the predicted GCP from
which the matching operation will search for a spectrally similar pixel. The search
window can be any odd size between 5 × 5 and 21 × 21.

Histogram matching is discussed in "CHAPTER 5: Enhancement".

A correlation threshold is used to accept or discard points. The correlation ranges from
-1.000 to +1.000. The threshold is an absolute value threshold ranging from 0.000 to
1.000. A value of 0.000 indicates a bad match and a value of 1.000 indicates an exact
match. Values above 0.8000 or 0.9000 are recommended. If a match cannot be made
because the absolute value of the correlation is greater than the threshold, the user has
the option to discard points.

RMS Error

In most cases, a perfect fit for all GCPs would require an unnecessarily high order of
transformation. Instead of increasing the order, the user has the option to tolerate a
certain amount of error. When a transformation matrix is calculated, the inverse of the
transformation matrix is used to retransform the reference coordinates of the GCPs back
to the source coordinate system. Unless the order of transformation allows for a perfect
fit, there is some discrepancy between the source coordinates and the retransformed
reference coordinates.

RMS error (root mean square) is the distance between the input (source) location of a
GCP and the retransformed location for the same GCP. In other words, it is the
difference between the desired output coordinate for a GCP and the actual output
coordinate for the same point, when the point is transformed with the transformation
matrix.

RMS error is calculated with a distance equation:

RMS error = sqrt( (xr - xi)² + (yr - yi)² )     (Equation 14)

where:

xi and yi are the input source coordinates

xr and yr are the retransformed coordinates

RMS error is expressed as a distance in the source coordinate system. If data file coordi-
nates are the source coordinates, then the RMS error is a distance in pixel widths. For
example, an RMS error of 2 means that the retransformed pixel is 2 pixels away from the
source pixel.

Residuals and RMS Error Per GCP

The ERDAS IMAGINE GCP Tool contains columns for the X and Y residuals. Residuals
are the distances between the source and retransformed coordinates in one direction.
They are shown for each GCP. The X residual is the distance between the source X
coordinate and the retransformed X coordinate. The Y residual is the distance between
the source Y coordinate and the retransformed Y coordinate.

If the GCPs are consistently off in either the X or the Y direction, more points should be
added in that direction. This is a common problem in off-nadir data.


RMS Error Per GCP


The RMS error of each point is reported to help the user evaluate the GCPs. This is
calculated with a distance formula:

Ri = sqrt( XRi² + YRi² )     (Equation 15)

where:

Ri= the RMS error for GCPi


XRi= the X residual for GCPi
YRi= the Y residual for GCPi

Figure 142 illustrates the relationship between the residuals and the RMS error per
point.

Figure 142: Residuals and RMS Error Per Point (the X and Y residuals between a source GCP and its retransformed GCP form the legs of a right triangle whose hypotenuse is the RMS error for that point)



Total RMS Error

From the residuals, the following calculations are made to determine the total RMS
error, the X RMS error, and the Y RMS error:

Rx = sqrt( (1/n) Σ XRi² )

Ry = sqrt( (1/n) Σ YRi² )

T = sqrt( Rx² + Ry² )   or   T = sqrt( (1/n) Σ (XRi² + YRi²) )

(each sum taken over i = 1 to n)
where:

Rx= X RMS error


Ry= Y RMS error
T= total RMS error
n= the number of GCPs
i= GCP number
XRi= the X residual for GCPi
YRi= the Y residual for GCPi

Error Contribution by Point

A normalized value representing each point’s RMS error in relation to the total RMS
error is also reported. This value is listed in the Contribution column of the GCP Tool.

Ei = Ri / T     (Equation 16)
where:

Ei= error contribution of GCPi


Ri= the RMS error for GCPi
T = total RMS error
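All of the quantities in Equations 14 through 16 follow directly from the residuals, as the illustrative sketch below shows (array-based Python, not the GCP Tool's internal code; names are assumptions).

import numpy as np

def rms_report(source, retransformed):
    # source and retransformed are (n, 2) arrays: the input GCP coordinates and
    # the reference coordinates retransformed back into the source system.
    x_res = retransformed[:, 0] - source[:, 0]     # X residuals
    y_res = retransformed[:, 1] - source[:, 1]     # Y residuals
    r_i = np.sqrt(x_res**2 + y_res**2)             # RMS error per GCP (Equation 15)
    r_x = np.sqrt(np.mean(x_res**2))               # X RMS error
    r_y = np.sqrt(np.mean(y_res**2))               # Y RMS error
    total = np.sqrt(r_x**2 + r_y**2)               # total RMS error
    contribution = r_i / total                     # Ei = Ri / T (Equation 16)
    return r_i, r_x, r_y, total, contribution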

Tolerance of RMS Error

In most cases, it will be advantageous to tolerate a certain amount of error rather than
take a higher order of transformation. The amount of RMS error that is tolerated can be
thought of as a window around each source coordinate, inside which a retransformed
coordinate is considered to be correct (that is, close enough to use). For example, if the
RMS error tolerance is 2, then the retransformed pixel can be 2 pixels away from the
source pixel and still be considered accurate.

Figure 143: RMS Error Tolerance (a 2-pixel RMS error tolerance defines a radius around each source pixel; retransformed coordinates that fall within this range are considered correct)

Acceptable RMS error is determined by the end use of the data base, the type of data
being used, and the accuracy of the GCPs and ancillary data being used. For example,
GCPs acquired from GPS should have an accuracy of about 10 m, but GCPs from
1:24,000-scale maps should have an accuracy of about 20 m.

It is important to remember that RMS error is reported in pixels. Therefore, if the user
is rectifying Landsat TM data and wants the rectification to be accurate to within 30
meters, the RMS error should not exceed 0.50. If the user is rectifying AVHRR data, an
RMS error of 1.50 might be acceptable. Acceptable accuracy will depend on the image
area and the particular project.

Evaluating RMS Error

To determine the order of transformation, the user can assess the relative distortion in
going from image to map or map to map. One should start with a 1st-order transfor-
mation unless it is known that it will not work. It is possible to repeatedly compute
transformation matrices until an acceptable RMS error is reached.

Most rectifications are either 1st-order or 2nd-order. The danger of using higher order rectifica-
tions is that the more complicated the equation for the transformation, the less regular and
predictable the results will be. To fit all of the GCPs, there may be very high distortion in the
image.

After each computation of a transformation matrix and RMS error, there are four
options:

• Throw out the GCP with the highest RMS error, assuming that this GCP is the least
accurate. Another transformation matrix can then be computed from the remaining
GCPs. A closer fit should be possible. However, if this is the only GCP in a
particular region of the image, it may cause greater error to remove it.

• Tolerate a higher amount of RMS error.

• Increase the order of transformation, creating more complex geometric alterations


in the image. A transformation matrix can then be computed that can accommodate
the GCPs with less error.

• Select only the points for which you have the most confidence.

Resampling Methods

The next step in the rectification/registration process is to create the output file. Since
the grid of pixels in the source image rarely matches the grid for the reference image,
the pixels are resampled so that new data file values for the output file can be calcu-
lated.

Figure 144: Resampling
1. The input image with source GCPs.
2. The output grid, with reference GCPs shown.
3. To compare the two grids, the input image is laid over the output grid, so that the GCPs of the two grids fit together.
4. Using a resampling method, the pixel values of the input image are assigned to pixels in the output grid.

The following resampling methods are supported in ERDAS IMAGINE:

• Nearest neighbor — uses the value of the closest pixel to assign to the output pixel
value.

• Bilinear interpolation — uses the data file values of four pixels in a 2 × 2 window
to calculate an output value with a bilinear function.

• Cubic convolution — uses the data file values of sixteen pixels in a 4 × 4 window
to calculate an output value with a cubic function.

Additionally, IMAGINE Restoration, a deconvolution algorithm that models known


sensor-specific parameters to produce a better estimate of the original scene radiance,
is available as an add-on module to ERDAS IMAGINE. The
Restoration™ algorithm was developed by the Environmental Research Institute of
Michigan (ERIM). It produces sharper, crisper rectified images by preserving and
enhancing the high spatial frequency component of the image during the resampling
process. Restoration can also provide higher spatial resolution from oversampled
images and is well suited for data fusion and GIS data integration applications.

See the IMAGINE Restoration documentation for more information.



In all methods, the number of rows and columns of pixels in the output is calculated
from the dimensions of the output map, which is determined by the transformation
matrix and the cell size. The output corners (upper left and lower right) of the output
file can be specified. The default values are calculated so that the entire source file will
be resampled to the destination file.

If an image to image rectification is being performed, it may be beneficial to specify the


output corners relative to the reference file system, so that the images will be co-
registered. In this case, the upper left X and upper left Y coordinate will be 0,0 and not
the defaults.

If the output units are pixels, then the origin of the image is the upper left corner.
Otherwise, the origin is the lower left corner.

“Rectifying” to Lat/Lon

The user can specify the nominal cell size if the output coordinate system is Lat/Lon.
The output cell size for a geographic projection (i.e., Lat/Lon) is always in angular units
of decimal degrees. However, if the user wants the cell to be a specific size in meters, he
or she can enter meters and calculate the equivalent size in decimal degrees. For
example, if the user wants the output file cell size to be 30 × 30 meters, then the program
would calculate what this size would be in decimal degrees and automatically update
the output cell size. Since the transformation between angular (decimal degrees) and
nominal (meters) measurements varies across the image, the transformation is based on
the center of the output file.

Enter the nominal cell size in the Nominal Cell Size dialog.
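As a rough illustration of the meters-to-degrees conversion, the sketch below uses a simple spherical approximation evaluated at the output file center; it is an assumption-based example, not the exact calculation ERDAS IMAGINE performs.

import math

def meters_to_decimal_degrees(cell_meters, center_latitude_deg):
    # Approximate degree sizes on a sphere; adequate only as an illustration.
    meters_per_degree_lat = 111320.0
    meters_per_degree_lon = 111320.0 * math.cos(math.radians(center_latitude_deg))
    return cell_meters / meters_per_degree_lon, cell_meters / meters_per_degree_lat

# A 30 m cell centered near 34 degrees north is roughly 0.00033 x 0.00027 degrees.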

Nearest Neighbor

To determine an output pixel’s nearest neighbor, the rectified coordinates (xo,yo) of the
pixel are retransformed back to the source coordinate system using the inverse of the
transformation matrix. The retransformed coordinates (xr,yr) are used in bilinear inter-
polation and cubic convolution as well. The pixel that is closest to the retransformed
coordinates (xr,yr) is the nearest neighbor. The data file value(s) for that pixel become
the data file value(s) of the pixel in the output image.

Figure 145: Nearest Neighbor (the input pixel whose center is nearest to the retransformed coordinate (xr,yr) supplies the output data file value)

Nearest Neighbor Resampling

Advantages:

• Transfers original data values without averaging them, as the other methods do; therefore the extremes and subtleties of the data values are not lost. This is an important consideration when discriminating between vegetation types, locating an edge associated with a lineament, or determining different levels of turbidity or temperatures in a lake (Jensen 1996).

• Suitable for use before classification.

• The easiest of the three methods to compute and the fastest to use.

• Appropriate for thematic files, which can have data file values based on a qualitative (nominal or ordinal) system or a quantitative (interval or ratio) system. The averaging that is performed with bilinear interpolation and cubic convolution is not suited to a qualitative class value system.

Disadvantages:

• When this method is used to resample from a larger to a smaller grid size, there is usually a “stair stepped” effect around diagonal lines and curves.

• Data values may be dropped, while other values may be duplicated.

• Using on linear thematic data (e.g., roads, streams) may result in breaks or gaps in a network of linear data.
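In code, the nearest neighbor rule is simply a rounding operation on the retransformed coordinate. The sketch below assumes a numpy-style array indexed as image[row, column] and ignores edge handling; the function name is illustrative.

def nearest_neighbor(image, xr, yr):
    # (xr, yr) is the retransformed coordinate in data file (pixel) coordinates.
    column = int(round(xr))
    row = int(round(yr))
    return image[row, column]   # the value of the closest input pixel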

Bilinear Interpolation

In bilinear interpolation, the data file value of the rectified pixel is based upon the
distances between the retransformed coordinate location (xr,yr) and the four closest
pixels in the input (source) image (see Figure 146). In this example, the neighbor pixels
are numbered 1, 2, 3, and 4. Given the data file values of these four pixels on a grid, the
task is to calculate a data file value for r (Vr).

Figure 146: Bilinear Interpolation (the retransformed location r at (xr,yr) falls among the four closest input pixels 1, 2, 3, and 4; dx and dy are its offsets from pixel 1, m and n are the vertically interpolated points, and D is the distance between pixels)

To calculate Vr, first Vm and Vn are considered. By interpolating Vm and Vn, the user can
perform linear interpolation, which is a simple process to illustrate. If the data file
values are plotted in a graph relative to their distances from one another, then a visual
linear interpolation is apparent. The data file value of m (Vm) is a function of the change
in the data file value between pixels 3 and 1 (that is, V3 - V1).

Figure 147: Linear Interpolation (calculating a data file value as a function of spatial distance between two pixels: the data file values V1, Vm, and V3 plotted against the data file coordinates Y1, Ym, and Y3, with slope (V3 - V1) / D)


The equation for calculating Vm from V1 and V3 is:

Vm = [ (V3 - V1) / D ] × dy + V1     (Equation 17)
where:

Yi= the Y coordinate for pixel i


Vi= the data file value for pixel i
dy= the distance between Y1 and Ym in the source coordinate system
D= the distance between Y1 and Y3 in the source coordinate system

If one considers that (V3 - V1) / D is the slope of the line in the graph above, then this
equation translates to the equation of a line in y = mx + b form.

Similarly, the equation for calculating the data file value for n (Vn) in the pixel grid is:

Vn = [ (V4 - V2) / D ] × dy + V2     (Equation 18)
From Vn and Vm, the data file value for r, which is at the retransformed coordinate
location (xr,yr), can be calculated in the same manner:

Vr = [ (Vn - Vm) / D ] × dx + Vm     (Equation 19)



The following is attained by plugging in the equations for Vm and Vn to this final
equation for Vr:

Vr = { [ (V4 - V2)/D × dy + V2 ] - [ (V3 - V1)/D × dy + V1 ] } / D × dx + (V3 - V1)/D × dy + V1

Vr = [ V1 (D - dx)(D - dy) + V2 (dx)(D - dy) + V3 (D - dx)(dy) + V4 (dx)(dy) ] / D²

In most cases D = 1, since data file coordinates are used as the source coordinates and
data file coordinates increment by 1.

Some equations for bilinear interpolation express the output data file value as:

Vr = ∑ wi V i EQUATION 20

where:

wi is a weighting factor

The equation above could be expressed in a similar format, in which the calculation of
wi is apparent:

Vr = Σ (i = 1 to 4) [ (D - ∆xi)(D - ∆yi) / D² ] × Vi     (Equation 21)

where:

∆xi = the change in the X direction between (xr,yr) and the data file coordinate
of pixel i

∆yi = the change in the Y direction between (xr,yr) and the data file coordinate
of pixel i

Vi = the data file value for pixel i

D = the distance between pixels (in X or Y) in the source coordinate system

For each of the four pixels, the data file value is weighted more if the pixel is closer to
(xr,yr).
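A minimal sketch of this weighting for data file coordinates (so D = 1) is shown below; it assumes a numpy-style array indexed as image[row, column], and names and edge handling are illustrative only.

import math

def bilinear(image, xr, yr):
    x0, y0 = math.floor(xr), math.floor(yr)   # upper left pixel of the 2 x 2 window
    dx, dy = xr - x0, yr - y0                 # offsets of (xr, yr) within the window
    v1 = image[y0, x0]          # pixel 1 (upper left)
    v2 = image[y0, x0 + 1]      # pixel 2 (upper right)
    v3 = image[y0 + 1, x0]      # pixel 3 (lower left)
    v4 = image[y0 + 1, x0 + 1]  # pixel 4 (lower right)
    return (v1 * (1 - dx) * (1 - dy) + v2 * dx * (1 - dy)
            + v3 * (1 - dx) * dy + v4 * dx * dy)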


Bilinear Interpolation Resampling

Advantages:

• Results in output images that are smoother, without the “stair stepped” effect that is possible with nearest neighbor.

• More spatially accurate than nearest neighbor.

• This method is often used when changing the cell size of the data, such as in SPOT/TM merges within the 2 × 2 resampling matrix limit.

Disadvantages:

• Since pixels are averaged, bilinear interpolation has the effect of a low-frequency convolution. Edges are smoothed, and some extremes of the data file values are lost.

See "CHAPTER 5: Enhancement" for more about convolution filtering.

Cubic Convolution

Cubic convolution is similar to bilinear interpolation, except that:

• a set of 16 pixels, in a 4 × 4 array, are averaged to determine the output data file
value, and

• an approximation of a cubic function, rather than a linear function, is applied to


those 16 input values.

To identify the 16 pixels in relation to the retransformed coordinate (xr,yr), the pixel (i,j)
is used, such that...

i = int (xr)

j = int (yr)

...and assuming that (xr,yr) is expressed in data file coordinates (pixels). The pixels
around (i,j) make up a 4 × 4 grid of input pixels, as illustrated in Figure 148.



Figure 148: Cubic Convolution (the 4 × 4 grid of input pixels surrounding pixel (i,j) and the retransformed coordinate (xr,yr))

Since a cubic, rather than a linear, function is used to weight the 16 input pixels, the
pixels farther from (xr,yr) have exponentially less weight than those closer to (xr,yr).

Several versions of the cubic convolution equation are used in the field. Different
equations have different effects upon the output data file values. Some convolutions
may have more of the effect of a low-frequency filter (like bilinear interpolation),
serving to average and smooth the values. Others may tend to sharpen the image, like
a high-frequency filter. The cubic convolution used in ERDAS IMAGINE is a
compromise between low-frequency and high-frequency. The general effect of the
cubic convolution will depend upon the data.


The formula used in ERDAS IMAGINE is:


Vr = Σ (n = 1 to 4) [ V(i - 1, j + n - 2) × f( d(i - 1, j + n - 2) + 1 )
                     + V(i, j + n - 2) × f( d(i, j + n - 2) )
                     + V(i + 1, j + n - 2) × f( d(i + 1, j + n - 2) - 1 )
                     + V(i + 2, j + n - 2) × f( d(i + 2, j + n - 2) - 2 ) ]     (Equation 22)
where:

i = int (xr)

j = int (yr)

d(i,j) = the distance between a pixel with coordinates (i,j) and (xr,yr)

V(i,j) = the data file value of pixel (i,j)

Vr = the output data file value

a = -0.5 (a constant which differs in other applications of cubic convolution)

f(x) = the following function:

f(x) = (a + 2)|x|³ - (a + 3)|x|² + 1        if |x| < 1
f(x) = a|x|³ - 5a|x|² + 8a|x| - 4a          if 1 ≤ |x| < 2
f(x) = 0                                    otherwise
Source: Atkinson 1985



In most cases, a value for a of -0.5 tends to produce output layers with a mean and
standard deviation closer to that of the original data (Atkinson 1985).
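For illustration, the weighting function f(x) and a common separable way of applying it over the 4 × 4 window are sketched below. This is a simplified, assumption-based example, not a transcription of the ERDAS IMAGINE code.

import math

def cubic_kernel(x, a=-0.5):
    # The piecewise cubic weighting function f(x) defined above.
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def cubic_convolution(image, xr, yr, a=-0.5):
    # Weight the 4 x 4 neighborhood of (xr, yr); image is indexed as image[row, column].
    i, j = math.floor(xr), math.floor(yr)
    value = 0.0
    for m in range(-1, 3):          # columns i-1 .. i+2
        for n in range(-1, 3):      # rows j-1 .. j+2
            weight = cubic_kernel(xr - (i + m), a) * cubic_kernel(yr - (j + n), a)
            value += weight * image[j + n, i + m]
    return value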

Cubic Convolution Resampling

Advantages:

• Uses 4 × 4 resampling. In most cases, the mean and standard deviation of the output pixels match the mean and standard deviation of the input pixels more closely than any other resampling method.

• The effect of the cubic curve weighting can both sharpen the image and smooth out noise (Atkinson 1985). The actual effects will depend upon the data being used.

• This method is recommended when the user is dramatically changing the cell size of the data, such as in TM/aerial photo merges (i.e., matches the 4 × 4 window more closely than the 2 × 2 window).

Disadvantages:

• Data values may be altered.

• The most computationally intensive resampling method, and therefore the slowest.

Map to Map Coordinate Conversions

There are many instances when the user will need to change a map that is already
registered to a planar projection to another projection. Some examples of when this is
required are listed below (ESRI 1992).

• When combining two maps with different projection characteristics.

• When the projection used for the files in the data base does not produce the desired
properties of a map.

• When it is necessary to combine data from more than one zone of a projection, such
as UTM or State Plane.

A change in the projection is a geometric change—distances, areas, and scale are repre-
sented differently. Therefore, the conversion process requires that pixels be resampled.

Resampling causes some of the spectral integrity of the data to be lost (see the disadvan-
tages of the resampling methods explained previously). So, it is not usually wise to
resample data that have already been resampled if the accuracy of data file values is
important to the application. If the original unrectified data are available, it is usually
wiser to rectify that data to a second map projection system than to “lose a generation”
by converting rectified data and resampling it a second time.

Conversion Process
To convert the map coordinate system of any georeferenced image, ERDAS IMAGINE
provides a shortcut to the rectification process. In this procedure, GCPs are generated
automatically along the intersections of a grid that the user specifies. The program
calculates the reference coordinates for the GCPs with the appropriate conversion
formula and a transformation that can be used in the regular rectification process.

Vector Data
Converting the map coordinates of vector data is much easier than converting raster
data. Since vector data are stored by the coordinates of nodes, each coordinate is simply
converted using the appropriate conversion formula. There are no coordinates between
nodes to extrapolate.


CHAPTER 9
Terrain Analysis

Introduction

Terrain analysis involves the processing and graphic simulation of elevation data.
Terrain analysis software functions usually work with topographic data (also called
terrain data or elevation data), in which an elevation (or Z value) is recorded at each X,Y
location. Terrain analysis functions are not restricted to topographic data, however.
Any series of values, such as population densities, ground water pressure values,
magnetic and gravity measurements, and chemical concentrations, may be used.

Topographic data are essential for studies of trafficability, route design, non-point
source pollution, intervisibility, siting of recreation areas, etc. (Welch 1990). Especially
useful are products derived from topographic data. These include:

• slope images — illustrating changes in elevation over distance. Slope images are
usually color-coded according to the steepness of the terrain at each pixel.

• aspect images — illustrating the prevailing direction that the slope faces at each
pixel.

• shaded relief images — illustrating variations in terrain by differentiating areas


that would be illuminated or shadowed by a light source simulating the sun.

Topographic data and its derivative products have many applications, including:

• calculating the shortest and most navigable path over a mountain range for
constructing a road or routing a transmission line

• determining rates of snow melt based on variations in sun shadow, which is


influenced by slope, aspect, and elevation

Terrain data are often used as a component in complex GIS modeling or classification
routines. They can, for example, be a key to identifying wildlife habitats that are
associated with specific elevations. Slope and aspect images are often an important
factor in assessing the suitability of a site for a proposed use. Terrain data can also be
used for vegetation classification based on species that are terrain-sensitive (i.e., Alpine
vegetation).

Although this chapter mainly discusses the use of topographic data, the ERDAS IMAGINE
terrain analysis functions can be used on data types other than topographic data.



See "CHAPTER 10: Geographic Information Systems" for more information about GIS
modeling.

Topographic Data

Topographic data are usually expressed as a series of points with X,Y, and Z values.
When topographic data are collected in the field, they are surveyed at a series of points
including the extreme high and low points of the terrain, along features of interest that
define the topography such as streams and ridge lines, and at various points in
between.

DEM (digital elevation models) and DTED (Digital Terrain Elevation Data) are
expressed as regularly spaced points. To create DEM and DTED files, a regular grid is
overlaid on the topographic contours. Elevations are read at each grid intersection
point, as shown in Figure 149.

Figure 149: Regularly Spaced Terrain Data Points (a topographic contour image with a regular grid overlay, and the resulting DEM of regularly spaced terrain data points, or Z values, read at each grid intersection)

Elevation data are derived from ground surveys and through manual photogrammetric
methods. Elevation points can also be generated through digital orthographic methods.

See "CHAPTER 3: Raster and Vector Data Sources" for more details on DEM and DTED data.
See "CHAPTER 7: Photogrammetric Concepts" for more information on the digital
orthographic process.

To make topographic data usable in ERDAS IMAGINE, they must be represented as a surface,
or DEM. A DEM is a one band .img file where the value of each pixel is a specific elevation value.
A gray scale is used to differentiate variations in terrain.

DEMs can be edited with the Raster Editing capabilities of ERDAS IMAGINE. See “Chapter 1:
Raster Layers” for more information.


Slope Images Slope is expressed as the change in elevation over a certain distance. In this case the
certain distance is the size of the pixel. Slope is most often expressed as a percentage,
but can also be calculated in degrees.

Use the Slope function in Image Interpreter to generate a slope image.

In ERDAS IMAGINE, the relationship between percentage and degree expressions of slope is as follows:

• a 45° angle is considered a 100% slope

• a 90° angle is considered a 200% slope

• slopes less than 45° fall within the 1 - 100% range

• slopes between 45° and 90° are expressed as 100 - 200% slopes

Slope images are often used in road planning. For example, if the Department of Transportation specifies a maximum of 15% slope on any road, it would be possible to recode all slope values that are greater than 15% as unsuitable for road building.

A 3 × 3 pixel window is used to calculate the slope at each pixel. For a pixel at location
X,Y, the elevations around it are used to calculate the slope as shown below. A
hypothetical example is shown with the slope calculation formulas. In Figure 150, each
pixel is 30 × 30 meters.

Pixel X,Y has elevation e; a, b, c, d, f, g, h, and i are the elevations of the pixels around it in a 3 × 3 window:

    a  b  c        10 m  20 m  25 m
    d  e  f        22 m  30 m  25 m
    g  h  i        20 m  24 m  18 m

Figure 150: 3 × 3 Window Calculates the Slope at Each Pixel



First, the average elevation changes per unit of distance in the x and y direction (∆x and
∆y) are calculated as:

∆x1 = c – a        ∆y1 = a – g
∆x2 = f – d        ∆y2 = b – h
∆x3 = i – g        ∆y3 = c – i

∆x = (∆x1 + ∆x2 + ∆x3) / (3 × xs)

∆y = (∆y1 + ∆y2 + ∆y3) / (3 × ys)

where:

a...i = elevation values of pixels in a 3 × 3 window, as shown above
xs = x pixel size = 30 meters
ys = y pixel size = 30 meters

So, for the hypothetical example:

∆x1 = 25 – 10 = 15        ∆y1 = 10 – 20 = –10
∆x2 = 25 – 22 = 3         ∆y2 = 20 – 24 = –4
∆x3 = 18 – 20 = –2        ∆y3 = 25 – 18 = 7

∆x = (15 + 3 – 2) / (30 × 3) = 0.177        ∆y = (–10 – 4 + 7) / (30 × 3) = –0.078

The slope at pixel x,y is then calculated as:

s = √( (∆x)² + (∆y)² ) / 2        (for this example, s = 0.0967)

if s ≤ 1,      percent slope = s × 100
or else        percent slope = 200 – (100 / s)

slope in degrees = tan⁻¹(s) × (180 / π)

For the example, the slope is:

slope in degrees = tan⁻¹(0.0967) × 57.30 = 5.54

percent slope = 0.0967 × 100 = 9.67%
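The slope calculation is straightforward to prototype outside of ERDAS IMAGINE. The following Python sketch is an illustration only (not the IMAGINE implementation); it applies the 3 × 3 window formulas above to the hypothetical elevations of Figure 150, assuming 30-meter pixels.

```python
import numpy as np

# 3 x 3 window of elevations (meters) from Figure 150:
#   a b c
#   d e f
#   g h i
win = np.array([[10, 20, 25],
                [22, 30, 25],
                [20, 24, 18]], dtype=float)
xs = ys = 30.0                       # pixel size in meters

(a, b, c), (d, e, f), (g, h, i) = win

# average elevation change per unit of distance in x and y
dx = ((c - a) + (f - d) + (i - g)) / (3 * xs)
dy = ((a - g) + (b - h) + (c - i)) / (3 * ys)

s = np.sqrt(dx**2 + dy**2) / 2
percent_slope = s * 100 if s <= 1 else 200 - 100 / s
degrees_slope = np.degrees(np.arctan(s))

# prints roughly 9.7 and 5.5; the Field Guide example rounds the
# intermediate values and reports 9.67% and 5.54 degrees
print(round(percent_slope, 2), round(degrees_slope, 2))
```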



Aspect Images An aspect image is an .img file that is gray scale-coded according to the prevailing
direction of the slope at each pixel. Aspect is expressed in degrees from north,
clockwise, from 0 to 360. Due north is 0 degrees. A value of 90 degrees is due east, 180
degrees is due south, and 270 degrees is due west. A value of 361 degrees is used to
identify flat surfaces such as water bodies.

Aspect files are used in many of the same applications as slope files. In transportation
planning, for example, north facing slopes are often avoided. Especially in northern
climates, these would be exposed to the most severe weather and would hold snow and
ice the longest. It would be possible to recode all pixels with north facing aspects as
undesirable for road building.

Use the Aspect function in Image Interpreter to generate an aspect image.

As with slope calculations, aspect uses a 3 × 3 window around each pixel to calculate the prevailing direction it faces. For pixel x,y with the following elevation values around it, the average changes in elevation in both x and y directions are calculated first. Each pixel is 30 × 30 meters in the following example:

Pixel X,Y has elevation e; a, b, c, d, f, g, h, and i are the elevations of the pixels around it in a 3 × 3 window:

    a  b  c        10 m  20 m  25 m
    d  e  f        22 m  30 m  25 m
    g  h  i        20 m  24 m  18 m

Figure 151: 3 × 3 Window Calculates the Aspect at Each Pixel

∆x1 = c – a        ∆y1 = a – g
∆x2 = f – d        ∆y2 = b – h
∆x3 = i – g        ∆y3 = c – i

∆x = (∆x1 + ∆x2 + ∆x3) / 3

∆y = (∆y1 + ∆y2 + ∆y3) / 3


where:

a...i = elevation values of pixels in a 3 × 3 window as shown above

∆x = (15 + 3 – 2) / 3 = 5.33        ∆y = (–10 – 4 + 7) / 3 = –2.33

If ∆x = 0 and ∆y = 0, then the aspect is flat (coded to 361 degrees). Otherwise, θ is calculated as:

θ = tan⁻¹ (∆x / ∆y)

then aspect is 180 + θ (in degrees).

For the example above,

θ = tan⁻¹ (5.33 / –2.33) = 1.98

1.98 radians = 113.6 degrees

aspect = 180 + 113.6 = 293.6 degrees
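The same window can be used to prototype the aspect calculation. In this hypothetical Python sketch, atan2 is used to resolve the quadrant of θ, which reproduces the 293.6° result of the example above; whether this matches the exact quadrant handling inside IMAGINE is an assumption.

```python
import numpy as np

# Same 3 x 3 window of elevations (meters) as in Figure 151
win = np.array([[10, 20, 25],
                [22, 30, 25],
                [20, 24, 18]], dtype=float)
(a, b, c), (d, e, f), (g, h, i) = win

dx = ((c - a) + (f - d) + (i - g)) / 3      # 5.33
dy = ((a - g) + (b - h) + (c - i)) / 3      # -2.33

if dx == 0 and dy == 0:
    aspect = 361.0                          # flat surface code
else:
    theta = np.degrees(np.arctan2(dx, dy))  # atan2 picks the quadrant
    aspect = 180.0 + theta                  # degrees from north, 0 - 360

print(round(aspect, 1))                     # 293.6
```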



Shaded Relief A shaded relief image provides an illustration of variations in elevation. Based on a
user-specified position of the sun, areas that would be in sunlight are highlighted and
areas that would be in shadow are shaded. Shaded relief images are generated from an
elevation surface, alone or in combination with an .img file, draped over the terrain.

It is important to note that the relief program identifies shadowed areas—i.e., those that
are not in direct sun. It does not calculate the shadow that is cast by topographic
features onto the surrounding surface.

For example, a high mountain with sunlight coming from the northwest would be
symbolized as follows in shaded relief. Only the portions of the mountain that would
be in shadow from a northwest light would be shaded. The software would not
simulate a shadow that the mountain would cast on the southeast side.

[Figure: with the light source in the northwest, the northwest-facing side of the mountain is rendered in sun and the southeast-facing side is shaded; no cast shadow is drawn on the surrounding surface.]

Figure 152: Shaded Relief

Shaded relief images are an effective graphic tool. They can also be used in analysis,
e.g., of snow melt over an area spanned by an elevation surface. A series of relief images
can be generated to simulate the movement of the sun over the landscape. Snow melt
rates can then be estimated for each pixel based on the amount of time it spends in sun
or shadow. Shaded relief images can also be used to enhance subtle detail in gray scale
images such as aeromagnetic, radar, gravity maps, etc.

Use the Shaded Relief function in Image Interpreter to generate a relief image.


In calculating relief, the software compares the user-specified sun position and angle
with the angle each pixel faces. Each pixel is assigned a value between -1 and +1 to
indicate the amount of light reflectance at that pixel.

• Negative numbers and zero values represent shadowed areas.

• Positive numbers represent sunny areas, with +1 assigned to the areas of highest
reflectance.

The reflectance values are then applied to the original pixel values to get the final result.
All negative values are set to 0 or to the minimum light level specified by the user. These
indicate shadowed areas. Light reflectance in sunny areas falls within a range of values
depending on whether the pixel is directly facing the sun or not. (In the example above,
pixels facing northwest would be the brightest. Pixels facing north-northwest and west-
northwest would not be quite as bright.)

In a relief file that includes an .img file along with the elevation surface, the surface
reflectance values are multiplied by the color lookup values for the .img file.
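The per-pixel reflectance can be approximated with a standard Lambertian hillshade expression that compares the sun position with the slope and aspect of each pixel. This is a hedged sketch of that general approach, not necessarily the exact formula used by the IMAGINE Shaded Relief function; the input values are hypothetical.

```python
import numpy as np

def relief_reflectance(slope_deg, aspect_deg, sun_elev_deg, sun_azimuth_deg):
    """Approximate reflectance (-1 to +1) for one pixel.

    A standard hillshade-style formula, used here only as an
    illustration of how shaded relief values can be derived.
    """
    zenith = np.radians(90.0 - sun_elev_deg)        # sun angle from vertical
    slope = np.radians(slope_deg)
    azimuth_diff = np.radians(sun_azimuth_deg - aspect_deg)
    return (np.cos(zenith) * np.cos(slope) +
            np.sin(zenith) * np.sin(slope) * np.cos(azimuth_diff))

# Sun in the northwest (azimuth 315 degrees), 30 degrees above the horizon
print(relief_reflectance(60, 315, 30, 315))   # about +1.0: steep slope facing the sun
print(relief_reflectance(60, 135, 30, 315))   # about -0.5: facing away, shadowed
```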



Topographic Normalization Digital imagery from mountainous regions often contains a radiometric distortion known as topographic effect. Topographic effect results from the differences in illumination due to the angle of the sun and the angle of the terrain. This causes a variation in the image brightness values. Topographic effect is a combination of:

• incident illumination — the orientation of the surface with respect to the rays of the
sun

• exitance angle — the amount of reflected energy as a function of the slope angle

• surface cover characteristics — rugged terrain with high mountains or steep slopes
(Hodgson and Shelley 1993)

One way to reduce topographic effect in digital imagery is by applying transformations based on the Lambertian or Non-Lambertian reflectance models. These models normalize the imagery, making it appear as if it were a flat surface.

The Topographic Normalize function in Image Interpreter uses a Lambertian Reflectance model
to normalize topographic effect in VIS/IR imagery.

When using the Topographic Normalization model, the following information is needed:

• solar elevation and azimuth at time of image acquisition

• DEM file

• original imagery file (after atmospheric corrections)

Lambertian Reflectance Model The Lambertian Reflectance model assumes that the surface reflects incident solar energy uniformly in all directions, and that variations in reflectance are due to the amount of incident radiation.

The following equation produces normalized brightness values (Colby 1991, Smith et al
1980):

BVnormal λ= BV observed λ / cos i

where:

BVnormal λ = normalized brightness values


BVobserved λ= observed brightness values
cos i = cosine of the incidence angle


Incidence Angle
The incidence angle is defined from:

cos i = cos (90 - θs) cos θn + sin (90 - θs) sin θn cos (φs - φn)

where:

i= the angle between the solar rays and the normal to the surface
θs= the elevation of the sun
φs= the azimuth of the sun
θn= the slope of each surface element
φn= the aspect of each surface element
If the surface has a slope of 0 degrees, then aspect is undefined and i is simply
90 - θs.
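The two equations above translate directly into code. The following Python sketch uses hypothetical input values; the angle conventions (all angles in degrees, with sun and surface azimuths measured the same way) are assumptions.

```python
import numpy as np

def cos_incidence(sun_elev, sun_azimuth, slope, aspect):
    """cos i = cos(90 - θs) cos θn + sin(90 - θs) sin θn cos(φs - φn)."""
    zs = np.radians(90.0 - sun_elev)        # 90 - θs
    tn = np.radians(slope)                  # θn
    da = np.radians(sun_azimuth - aspect)   # φs - φn
    return np.cos(zs) * np.cos(tn) + np.sin(zs) * np.sin(tn) * np.cos(da)

def lambertian_normalize(bv_observed, sun_elev, sun_azimuth, slope, aspect):
    """BVnormal = BVobserved / cos i (Lambertian Reflectance model)."""
    return bv_observed / cos_incidence(sun_elev, sun_azimuth, slope, aspect)

# Hypothetical pixel: 20-degree slope facing east (90), sun at 40 degrees
# elevation and 135 degrees azimuth, observed brightness value of 86
print(round(lambertian_normalize(86, 40, 135, 20, 90), 1))   # about 109
```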

Non-Lambertian Model Minnaert (1961) proposed that the observed surface does not reflect incident solar
energy uniformly in all directions. Instead, he formulated the Non-Lambertian model,
which takes into account variations in the terrain. This model, although more compu-
tationally demanding than the Lambertian model, may present more accurate results.

In a Non-Lambertian Reflectance model, the following equation is used to normalize the brightness values in the image (Colby 1991, Smith et al 1980):

BVnormal λ = (BVobserved λ cos e) / (cos^k i cos^k e)

where:

BVnormal λ = normalized brightness values


BVobserved λ = observed brightness values
cos i = cosine of the incidence angle
cos e = cosine of the exitance angle, or slope angle
k = the empirically derived Minnaert constant

Minnaert Constant
The Minnaert constant (k) may be found by regressing a set of observed brightness
values from the remotely sensed imagery with known slope and aspect values,
provided that all the observations in this set are the same type of land cover. The k value
is the slope of the regression line (Hodgson and Shelley 1993):

log (BVobserved λ cos e) = log BVnormal λ+ k log (cos i cos e)
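The regression can be sketched with an ordinary least-squares fit. The sample arrays below are hypothetical, standing in for observed brightness values of a single land cover type with known slope and aspect (and therefore known cos i and cos e).

```python
import numpy as np

# Hypothetical observations for one land cover type
bv_observed = np.array([52.0, 61.0, 70.0, 80.0, 95.0])
cos_i = np.array([0.45, 0.55, 0.65, 0.75, 0.90])   # cosine of incidence angle
cos_e = np.array([0.97, 0.95, 0.93, 0.96, 0.98])   # cosine of exitance (slope) angle

# log(BVobserved cos e) = log(BVnormal) + k log(cos i cos e)
x = np.log(cos_i * cos_e)
y = np.log(bv_observed * cos_e)
k, log_bv_normal = np.polyfit(x, y, 1)     # slope of the regression line is k

# Apply the Non-Lambertian normalization with the fitted k
bv_normalized = (bv_observed * cos_e) / (cos_i**k * cos_e**k)
print(round(k, 3), np.round(bv_normalized, 1))
```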

Use the Spatial Modeler to create a model based on the Non-Lambertian Model.

NOTE: The Non-Lambertian model does not detect surfaces that are shadowed by intervening
topographic features between each pixel and the sun. For these areas, a line-of-sight algorithm
will identify such shadowed pixels.


CHAPTER 10
Geographic Information Systems

Introduction The beginnings of geographic information systems (GIS) can legitimately be traced
back to the beginnings of man. The earliest known map dates back to 2500 B.C., but there
were probably maps before that. Since then, man has been continually improving the
methods of conveying spatial information. The late eighteenth century brought the use of map overlays to show troop movements in the Revolutionary War. This could be considered an early GIS. The first British census in 1825 led to the science of demography, another application for GIS. During the 1800s, many different cartographers and scientists were discovering the power of overlays to convey multiple levels of information about an area (Star and Estes).

Frederick Law Olmsted has long been considered the father of Landscape Architecture
for his pioneering work in the early 20th century. Many of the methods Olmsted used
in Landscape Architecture also involved the use of hand-drawn overlays. This type of
analysis was beginning to be used for a much wider range of applications, such as
change detection, urban planning, and resource management (Rado 1992).

The first system to be called a GIS was the Canadian Geographic Information System,
developed in 1962 by Roger Tomlinson of the Canada Land Inventory. Unlike earlier
systems that were developed for a specific application, this system was designed to
store digitized map data and land-based attributes in an easily accessible format for all
of Canada. This system is still in operation today (Parent and Church 1987).

In 1969, Ian McHarg’s influential work, Design with Nature, was published. This work
on land suitability/capability analysis (SCA), a system designed to analyze many data
layers to produce a plan map, discussed the use of overlays of spatially referenced data
layers for resource planning and management (Star and Estes 1990).

The era of modern GIS really started in the 1970s, as analysts began to program
computers to automate some of the manual processes. Software companies like ESRI
(Redlands, CA) and ERDAS developed software packages that could input, display,
and manipulate geographic data to create new layers of information. The steady
advances in features and power of the hardware over the last ten years and the decrease
in hardware costs have made GIS technology accessible to a wide range of users. The
growth rate of the GIS industry in the last several years has exceeded even the most
optimistic projections.



Today, a geographic information system (or GIS) is a unique system designed to input,
store, retrieve, manipulate, and analyze layers of geographic data to produce inter-
pretable information. A GIS should also be able to create reports and maps (Marble
1990). The GIS data base may include computer images, hardcopy maps, statistical data,
or any other data that is needed in a study. Although the term GIS is commonly used to
describe software packages, a true GIS includes knowledgeable staff, a training
program, budgets, marketing, hardware, data, and software (Walker and Miller 1990).
GIS technology can be used in almost any geography-related discipline, from
Landscape Architecture to natural resource management to transportation routing.

The central purpose of a GIS is to turn geographic data into useful information—the
answers to real-life questions—questions such as:

• How will we monitor the influence of global climatic changes on the earth’s
resources?

• How should political districts be redrawn in a growing metropolitan area?

• Where is the best place for a shopping center that will be most convenient to
shoppers and least harmful to the local ecology?

• What areas should be protected to ensure the survival of endangered species?

• How can communities be better prepared to face natural disasters, such as earthquakes, tornadoes, hurricanes, and floods?

Information vs. Data


Information, as opposed to data, is independently meaningful. It is relevant to a
particular problem or question:

• “The land cover at coordinate N875250, E757261 has a data file value 8,” is data.

• “Land cover with a value of 8 is on slopes too steep for development,” is information.

The user can input data into a GIS and output information. The information the user
wishes to derive determines the type of data that must be input. For example, if one is
looking for a suitable refuge for bald eagles, zip code data is probably not needed, while
land cover data may be useful.

For this reason, the first step in any GIS project is usually an assessment of the scope
and goals of the study. Once the project is defined, the user can begin the process of
building the data base. Although software and data are commercially available, a
custom data base must be created for the particular project and study area. It must be
designed to meet the needs of the organization and objectives. ERDAS IMAGINE
provides all the tools required to build and manipulate a GIS data base.


Successful GIS implementation typically includes two major steps:

• data input

• analysis

Data input involves collecting the necessary data layers into the image data base. In the
analysis phase, these data layers will be combined and manipulated in order to create
new layers and to extract meaningful information from them. This chapter discusses
these steps in detail.

Data Input Acquiring the appropriate data for a project involves creating a data base of layers that
encompass the study area. A data base created with ERDAS IMAGINE can consist of:

• continuous layers (satellite imagery, aerial photographs, elevation data, etc.)

• thematic layers (land use, vegetation, hydrology, soils, slope, etc.)

• vector layers (streets, utility and communication lines, parcels, etc.)

• statistics (frequency of an occurrence, population demographics, etc.)

• attribute data (characteristics of roads, land, imagery, etc.)

The ERDAS IMAGINE software package employs a hierarchical, object-oriented architecture that utilizes both raster imagery and topological vector data. Raster images are
stored in .img files, and vector layers are coverages based on the ARC/INFO data
model. The seamless integration of these two data types enables the user to reap the
benefits of both data formats in one system.



Raster data input (with raster attributes): Landsat TM, SPOT panchromatic, aerial photographs, soils data, land cover.

Vector data input (with vector attributes): roads, census data, ownership parcels, political boundaries, landmarks.

Figure 153: Data Input

Raster data might be more appropriate in the following applications:

• site selection

• natural resource management

• petroleum exploration

• mission planning

• change detection

On the other hand, vector data may be better suited for these applications:

• urban planning

• tax assessment and planning

• traffic engineering

• facilities management

The advantage of an integrated raster and vector system such as ERDAS IMAGINE is
that one data structure does not have to be chosen over the other. Both data formats can
be used and the functions of both types of systems can be accessed. Depending upon
the project, only raster or vector data may be needed, but most applications benefit from
using both.


Themes and Layers


A data base usually consists of files with data of the same geographical area, with each
file containing different types of information. For example, a data base for the city recre-
ation department might include files of all the parks in the area. These files might depict
park boundaries, county and municipal boundaries, vegetation types, soil types,
drainage basins, slope, roads, etc. Each of these files contains different information—
each is a different theme. The concept of themes has evolved from early GISs, in which
transparent overlays were created for each theme and combined (overlaid) in different
ways to derive new information.

A single theme may require more than a simple raster or vector file to fully describe it.
In addition to the image, there may be attribute data that describe the information, a
color scheme, or meaningful annotation for the image. The full collection of data that
describe a certain theme is called a layer.

Depending upon the goals of a project, it may be helpful to combine several themes into
one layer. For example, if a user wanted to propose a new park site, he or she might
create one layer that shows roads, land cover, land ownership, slope, etc., and indicate
through the use of colors and/or annotation which areas would be best for the new site.
This one layer would then include many separate themes. Much of GIS analysis is
concerned with combining individual themes into one or more layers that answer the
questions driving the analysis. This chapter explores these analysis techniques.



Continuous Layers Continuous raster layers are quantitative (measuring a characteristic) and have related,
continuous values. Continuous raster layers can be multiband (e.g., Landsat TM) or
single band (e.g., SPOT panchromatic).

Satellite images, aerial photographs, elevation data, scanned maps, and other
continuous raster layers can be incorporated into a data base and provide a wealth of
information that is not available in thematic layers or vector layers. In fact, these layers
often form the foundation of the data base. Extremely accurate base maps can be created
from rectified satellite images or aerial photographs. Then, all other layers that are
added to the data base can be registered to this base map.

Once used only for image processing, continuous data are now being incorporated into
GIS data bases and used in combination with thematic data to influence processing
algorithms or as backdrop imagery on which to display the results of analyses. Current
satellite data and aerial photographs are also effective in updating outdated vector data.
The vectors can be overlaid on the raster backdrop and updated dynamically to reflect
new or changed features, such as roads, utility lines, or land use. This chapter will
explore the many uses of continuous data in a GIS.

See "CHAPTER 1: Raster Data" for more information on continuous data.

Thematic Layers Thematic data are typically represented as single layers of information stored as .img
files and containing discrete classes. Classes are simply categories of pixels which
represent the same condition. An example of a thematic layer is a vegetation classifi-
cation with discrete classes representing coniferous forest, deciduous forest, wetlands,
agriculture, urban, etc.

A thematic layer is sometimes called a variable, because it represents one of many characteristics about the study area. Since thematic layers usually have only one
“band,” they are usually displayed in pseudo color mode, where particular colors are
often assigned to help others visualize the information. For example, blues are usually
used for water features, greens for healthy vegetation, etc.

See "CHAPTER 4: Image Display" for information on pseudo color display.


Class Numbering Systems


As opposed to the data file values of continuous raster layers, which are generally
multiband and statistically related, the data file values of thematic raster layers can
have a nominal, ordinal, interval, or ratio relationship (Star and Estes 1990).

• Nominal classes represent categories with no particular order. Usually, these are
characteristics that are not associated with quantities (e.g., soil type or political
area).

• Ordinal classes are those that have a sequence, such as “poor,” “good,” “better,”
and “best.” An ordinal class numbering system is often created from a nominal
system, in which classes have been ranked by some criteria. In the case of the
recreation department data base used in the previous example, the final layer may
rank the proposed park sites according to their overall suitability.

• Interval classes also have a natural sequence, but the distance between each value
is meaningful as well. This numbering system might be used for temperature data.

• Ratio classes differ from interval classes only in that ratio classes have a natural
zero point, such as rainfall amounts.

The variable being analyzed and the way that it contributes to the final product deter-
mines the class numbering system used in the thematic layers. Layers that have one
numbering system can easily be recoded to a new system. This is discussed in detail
under "Recoding" on page 378.

Classification
Thematic layers can be generated from remotely sensed data (e.g., Landsat TM, SPOT)
by using the ERDAS IMAGINE Image Interpreter, Classification, and Spatial Modeler
tools. A frequent and popular application is the creation of land cover classification
schemes through the use of both supervised (user-assisted) and unsupervised
(automatic) pattern-recognition algorithms contained within ERDAS IMAGINE. The
output is a single thematic layer which represents specific classes based on the
approach selected.

See "CHAPTER 6: Classification" for more information.

Vector Data Converted to Raster Format


Vector layers can be converted to raster format if the raster format is more appropriate
for an application. Typical vector layers, such as communication lines, streams, bound-
aries, and other linear features, can easily be converted to raster format within ERDAS
IMAGINE for further analysis.

Use the Vector Utilities menu from the Vector icon in the IMAGINE icon panel to convert
vector layers to raster format.



Other sources of raster data are discussed in "CHAPTER 3: Raster and Vector Data Sources".

Statistics Both continuous and thematic layers include statistical information. Thematic layers
contain the following information:

• a histogram of the data values, which is the total number of pixels in each class

• a list of class names that correspond to class values

• a list of class values

• a color table, stored as brightness values in red, green, and blue, which make up the
colors of each class when the layer is displayed

For thematic data, these statistics are called attributes and may be accompanied by
many other types of information, as described below.

Use the Image Information option in the ERDAS IMAGINE icon panel to generate or update
statistics for .img files.

See "CHAPTER 1: Raster Data" for more information about the statistics stored with
continuous layers.


Vector Layers The vector layers used in ERDAS IMAGINE are based on the ARC/INFO data model
and consist of points, lines, and polygons. These layers are topologically complete,
meaning that the spatial relationships between features are maintained. Vector layers
can be used to represent transportation routes, utility corridors, communication lines,
tax parcels, school zones, voting districts, landmarks, population density, etc. Vector
layers can be analyzed independently or in combination with continuous and thematic
raster layers.

Vector data can be acquired from several private and governmental agencies. Vector
data can also be created in ERDAS IMAGINE by digitizing on the screen, using a
digitizing tablet, or converting other data types to vector format.

See "CHAPTER 2: Vector Layers" for more information on the characteristics of vector data.

Attributes Text and numerical data that are associated with the classes of a thematic layer or
the features in a vector layer are called attributes. This information can take the form of
character strings, integer numbers, or floating point numbers. Attributes work much
like the data that are handled by data base management software. The user may define
fields, which are categories of information about each class. A record is the set of all
attribute data for one class. Each record is like an index card, containing information
about one class or feature in a file of many index cards, which contain similar infor-
mation for the other classes or features.

Attribute information for raster layers is stored in the image (.img) file. Vector attribute
information is stored in an INFO file. In both cases, there are fields that are automati-
cally generated by the software, but more fields can be added as needed to fully
describe the data. Both are viewed in ERDAS IMAGINE CellArrays, which allow the
user to display and manipulate the information. However, raster and vector attributes
are handled slightly differently, so a separate section on each follows.

Raster Attributes In ERDAS IMAGINE, raster attributes for .img files are accessible from the Raster
Attribute Editor. The Raster Attribute Editor contains a CellArray, which is similar to a
table or spreadsheet that not only displays the information, but includes options for
importing, exporting, copying, editing, and other operations.

Figure 154 shows the attributes for a land cover classification layer.



Figure 154: Raster Attributes for lnlandc.img

Most thematic layers contain the following attribute fields:

• Class Name

• Class Value

• Color table (red, green, and blue values)

• Opacity percentage

• Histogram (number of pixels in the file that belong to the class)

As many additional attribute fields as needed can be defined for each class.

See "CHAPTER 6: Classification" for more information about the attribute information that is
automatically generated when new thematic layers are created in the classification process.

Viewing Raster Attributes


Simply viewing attribute information can be a valuable analysis tool. Depending on the
type of information associated with the layers of a data base, processing may be further
refined by comparing the attributes of several files. When both the raster layer and its
associated attribute information are displayed, the user can select features in one using
the other. For example, to locate the class name associated with a particular polygon in
a displayed image, simply click on that polygon with the mouse and that row is
highlighted in the Raster Attribute Editor.

Attribute information is accessible in several places throughout ERDAS IMAGINE. In some cases it is read-only and in other cases it is a fully functioning editor, allowing the
information to be modified.


Manipulating Raster Attributes


The applications for manipulating attributes are as varied as the applications for GIS.
The attribute information in a data base will depend on the goals of the project. Some
of the attribute editing capabilities in ERDAS IMAGINE include:

• import/export ASCII information to and from other software packages, such as spreadsheets and word processors

• cut, copy, and paste individual cells, rows, or columns to and from the same Raster
Attribute Editor or among several Raster Attribute Editors

• generate reports that include all or a subset of the information in the Raster
Attribute Editor

• use formulas to populate cells

• directly edit cells by entering in new information

The Raster Attribute Editor in ERDAS IMAGINE also includes a color cell column, so
that class (object) colors can be viewed or changed. In addition to direct user manipu-
lation, attributes can be changed by other programs. For example, some of the Image
Interpreter functions calculate statistics that are automatically added to the Raster
Attribute Editor. Models that read and/or modify attribute information can also be
written.

See "CHAPTER 5: Enhancement" for more information on the Image Interpreter. There is more
information on GIS modeling, starting on page 383.



Vector Attributes Vector attributes are stored in the Vector Attributes CellArrays. The user can simply
view attributes or use them to:

• select features in a vector layer for further processing

• determine how vectors are symbolized

• label features

Figure 155 shows the attributes for a roads layer.

Figure 155: Vector Attributes CellArray

See "CHAPTER 2: Vector Layers" for more information about vector attributes.


Analysis
ERDAS IMAGINE Analysis Tools In ERDAS IMAGINE, GIS analysis functions and algorithms are accessible through three main tools:

• script models created with the Spatial Modeler Language

• graphical models created with Model Maker

• pre-packaged functions in Image Interpreter

Spatial Modeler Language


The Spatial Modeler Language is the basis for all ERDAS IMAGINE GIS functions and
it is the most powerful. It is a modeling language that enables the user to create script
(text) models for a variety of applications. Models may be used to create custom
algorithms that best suit the user’s data and objectives.

Model Maker
Model Maker is essentially the Spatial Modeler Language linked to a graphical
interface. This enables the user to create graphical models using a palette of easy-to-use
tools. Graphical models can be run, edited, saved in libraries, or converted to script
form and edited further, using the Spatial Modeler Language.

NOTE: References to the Spatial Modeler in this chapter mean that the named procedure can be
accomplished using both Model Maker and the Spatial Modeler Language.

Image Interpreter
The Image Interpreter houses a set of common functions that were all created using
either Model Maker or the Spatial Modeler Language. They have been given a dialog
interface to match the other processes in ERDAS IMAGINE. In most cases, these
processes can be run from a single dialog. However, the actual models are also
provided with the software to enable customized processing.

Many of the functions described in the following sections can be accomplished using
any of these tools. Model Maker is also easy to use and requires many of the same steps
that would be performed when drawing a flow chart of an analysis. The Spatial
Modeler Language is intended for more advanced analyses, and has been designed
using natural language commands and simple syntax rules. Some applications may
require a combination of these tools.

Customizing ERDAS IMAGINE Tools


ERDAS Macro Language (EML) enables the user to create and add new and/or
customized dialogs. If new capabilities are needed, they can be created with the C
Programmers’ Toolkit. Using these tools, a GIS that is completely customized to a
specific application and its preferences can be created.

The ERDAS Macro Language and the C Programmers’ Toolkit are part of the ERDAS
IMAGINE Developers’ Toolkit.



See the ERDAS IMAGINE On-Line Help for more information about the Developers’ Toolkit.

Analysis Procedures Once the data base (layers and attribute data) is assembled, the layers can be analyzed
and new information extracted. Some information can be extracted simply by looking
at the layers and visually comparing them to other layers. However, new information
can be retrieved by combining and comparing layers using the procedures outlined
below:

• Proximity analysis — the process of categorizing and evaluating pixels based on their distances from other pixels in a specified class or classes.

• Contiguity analysis — enables the user to identify regions of pixels in the same
class and to filter out small regions.

• Neighborhood analysis — any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning. This is similar to the convolution filtering performed on continuous data. Several types of analyses can be performed, such as boundary, density, mean, sum, etc.

• Recoding — enables the user to assign new class values to all or a subset of the
classes in a layer.

• Overlaying — creates a new file with either the maximum or minimum value of the
input layers.

• Indexing — adds the values of the input layers.

• Matrix analysis — outputs the coincidence of values in the input layers.

• Graphical modeling — enables the user to combine data layers in an unlimited number of ways. For example, an output layer created from modeling can represent the desired combination of themes from many input layers.

• Script modeling — offers all of the capabilities of graphical modeling with the
ability to perform more complex functions, such as conditional looping.

Using an Area of Interest (AOI)


Any of these functions can be performed on a single layer or multiple layers. The user
can also select a particular area of interest (AOI) that is defined in a separate file (AOI
layer, thematic raster layer, or vector layer) or an area of interest that is selected
immediately preceding the operation by entering specific coordinates or by selecting
the area in a Viewer.


Proximity Analysis Many applications require some measurement of distance, or proximity. For example,
a real estate developer would be concerned with the distance between a potential site
for a shopping center and an interchange to a major highway.

A proximity analysis determines which pixels of a layer are located at specified distances from pixels in a certain class or classes. A new thematic layer (.img file) is
created, which is categorized by the distance of each pixel from specified classes of the
input layer. This new file then becomes a new layer of the data base and provides a
buffer zone around the specified class(es). In further analysis, it may be beneficial to
weight other factors, based on whether they fall in or outside the buffer zone.
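A minimal sketch of a buffer-zone style proximity analysis, written with NumPy and SciPy rather than the IMAGINE Search function. The class value for water (2), the pixel size, and the 60-meter buffer distance are all hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

landcover = np.array([[1, 1, 2, 1],
                      [1, 2, 2, 1],
                      [1, 1, 1, 1],
                      [3, 3, 1, 1]])
water_class = 2          # hypothetical class value for lakes and streams
pixel_size = 30.0        # meters

# Distance from every pixel to the nearest water pixel
dist = distance_transform_edt(landcover != water_class, sampling=pixel_size)

# Pixels within 60 m of water form the buffer zone (1); all others are 0
buffer_zone = (dist <= 60).astype(np.uint8)
print(buffer_zone)
```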

Figure 156 shows a layer containing lakes and streams and the resulting layer after a
proximity analysis is run to create a buffer zone around all of the water features.

[Figure: the original layer contains a lake and streams; after the proximity analysis is performed, buffer zones surround all of the water features.]

Figure 156: Proximity Analysis

Use the Search (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform a
proximity analysis.



Contiguity Analysis A contiguity analysis is concerned with the ways in which pixels of a class are grouped
together. Groups of contiguous pixels in the same class, called raster regions, or
“clumps,” can be identified by their sizes and manipulated. One application of this tool
would be an analysis for locating helicopter landing zones that require at least 250
contiguous pixels at 10 meter resolution.

Contiguity analysis can be used to:

• further divide a large class into separate raster regions, or

• eliminate raster regions that are too small to be considered for an application.

Filtering Clumps
In cases where very small clumps are not useful, they can be filtered out according to
their sizes. This is sometimes referred to as eliminating the “salt and pepper” effects, or
“sieving.” In Figure 157, all of the small clumps in the original (clumped) layer are
eliminated.
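A clump-and-sieve operation can be sketched with SciPy's connected-component labeling. The thematic array, the class of interest (1), and the three-pixel minimum clump size are hypothetical; the IMAGINE Clump and Sieve functions are the supported way to do this on .img files.

```python
import numpy as np
from scipy.ndimage import label

thematic = np.array([[1, 1, 0, 1],
                     [1, 0, 0, 1],
                     [0, 0, 1, 1],
                     [1, 0, 1, 1]])

# "Clump": label contiguous groups of class-1 pixels (raster regions)
clumps, n_clumps = label(thematic == 1)

# "Sieve": filter out clumps smaller than 3 pixels
sizes = np.bincount(clumps.ravel())
small_labels = [lab for lab in range(1, n_clumps + 1) if sizes[lab] < 3]
sieved = np.where(np.isin(clumps, small_labels), 0, thematic)
print(sieved)   # the single-pixel clump in the lower left is removed
```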

[Figure: a clumped layer and the resulting sieved layer with the small clumps removed.]

Figure 157: Contiguity Analysis

Use the Clump and Sieve (GIS Analysis) function in Image Interpreter or Spatial Modeler to
perform contiguity analysis.


Neighborhood Analysis With a process similar to the convolution filtering of continuous raster layers, thematic raster layers can also be filtered. The GIS filtering process is sometimes referred to as
“scanning,” but is not to be confused with data capture via a digital camera. Neigh-
borhood analysis is based on local or neighborhood characteristics of the data (Star and
Estes 1990).

Every pixel is analyzed spatially, according to the pixels that surround it. The number
and the location of the surrounding pixels is determined by a scanning window, which
is defined by the user. These operations are known as focal operations. The scanning
window can be:

• circular, with a maximum diameter of 512 pixels

• doughnut-shaped, with a maximum outer radius of 256 pixels

• rectangular, up to 512 × 512 pixels, with the option to mask out certain pixels

Use the Neighborhood (GIS Analysis) function in Image Interpreter or Spatial Modeler to
perform neighborhood analysis. The scanning window used in Image Interpreter can be
3 × 3, 5 × 5, or 7 × 7. The scanning window in Model Maker is user-defined and can be up to
512 × 512.

Defining Scan Area


The user may define the area of the file to be scanned. The scanning window will move
only through this area as the analysis is performed. The area may be defined in one or
all of the following ways:

• Specify a rectangular portion of the file to scan. The output layer will contain only
the specified area.

• Specify an area of interest that is defined by an existing AOI layer, an annotation overlay, or a vector layer. The area(s) within the polygon will be scanned, and the other areas will remain the same. The output layer will be the same size as the input layer or the selected rectangular portion.

• Specify a class or classes in another thematic layer to be used as a mask. The pixels
in the scanned layer that correspond to the pixels of the selected class or classes in
the mask layer will be scanned, while the other pixels will remain the same.



[Figure: a mask layer and a target layer; only the target pixels corresponding to class 2 in the mask layer are shaded for scanning.]

Figure 158: Using a Mask

In Figure 158, class 2 in the mask layer was selected for the mask. Only the corre-
sponding (shaded) pixels in the target layer will be scanned—the other values will
remain unchanged.

Neighborhood analysis creates a new thematic layer. There are several types of analysis
that can be performed upon each window of pixels, as described below:

• Boundary — detects boundaries between classes. The output layer contains only
boundary pixels. This is useful for creating boundary or edge lines from classes,
such as a land/water interface.

• Density — outputs the number of pixels that have the same class value as the center
(analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the
analyzed pixel. This is often useful in assessing vegetation crown closure.

• Diversity — outputs the number of class values that are present within the
window. Diversity is also a measure of heterogeneity (difference).

• Majority — outputs the class value that represents the majority of the class values
in the window. The value is user-defined. This option operates like a low-frequency
filter to clean up a “salt and pepper” layer.

• Maximum — outputs the greatest class value within the window. This can be used
to emphasize classes with the higher class values or to eliminate linear features or
boundaries.

• Mean — averages the class values. If class values represent quantitative data, then
this option can work like a convolution filter. This is mostly used on ordinal or
interval data.

• Median — outputs the statistical median of the class values in the window. This
option may be useful if class values represent quantitative data.

• Minimum — outputs the least or smallest class value within the window. The
value is user-defined. This can be used to emphasize classes with the low class
values.


• Minority — outputs the least common of the class values that are within the
window. This option can be used to identify the least common classes. It can also
be used to highlight disconnected linear features.

• Rank — outputs the number of pixels in the scan window whose value is less than
the center pixel.

• Standard deviation — outputs the standard deviation of class values in the window.

• Sum — totals the class values. In a file where class values are ranked, totaling
enables the user to further rank pixels based on their proximity to high-ranking
pixels.

[Figure: one iteration of the sum operation on a 3 × 3 window of the input layer; the analyzed (center) pixel receives the window total, 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48.]

Figure 159: Sum Option of Neighborhood Analysis (Image Interpreter)

In Figure 159, the Sum option of Neighborhood (Image Interpreter) is applied to a 3 × 3 window of pixels in the input layer. In the output layer, the analyzed pixel will be given a value based on the total of all of the pixels in the window.

The analyzed pixel is always the center pixel of the scanning window. In this example, only the
pixel in the third column and third row of the file is “summed.”
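Focal operations like these can be prototyped with SciPy's generic_filter. The sketch below reproduces the Sum example and adds a Majority filter for comparison; it is an illustration, not the IMAGINE Neighborhood function, and the 5 × 5 input array is a hypothetical layer consistent with the Figure 159 example.

```python
import numpy as np
from scipy.ndimage import generic_filter

layer = np.array([[2, 8, 6, 6, 6],
                  [2, 8, 6, 6, 6],
                  [2, 2, 8, 6, 6],
                  [2, 2, 2, 8, 6],
                  [2, 2, 2, 2, 8]])

# Focal Sum: each output pixel is the total of the 3 x 3 window around it
focal_sum = generic_filter(layer, np.sum, size=3, mode="nearest")
print(focal_sum[2, 2])     # 48, matching the sum shown for Figure 159

# Focal Majority: the most common class value in the window
def majority(values):
    return np.bincount(values.astype(int)).argmax()

focal_majority = generic_filter(layer, majority, size=3, mode="nearest")
print(focal_majority)
```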



Recoding Class values can be recoded to new values. Recoding involves the assignment of new
values to one or more classes. Recoding is used to:

• reduce the number of classes

• combine classes

• assign different class values to existing classes

When an ordinal, ratio, or interval class numbering system is used, recoding can be
used to assign classes to appropriate values. Recoding is often performed to make later
steps easier. For example, in creating a model that will output “good,” “better,” and
“best” areas, it may be beneficial to recode the input layers so all of the “best” classes
have the highest class values.

In the following example (Table 21), a land cover layer is recoded so that the most
environmentally sensitive areas (Riparian and Wetlands) have higher class values.

Table 21: Example of a Recoded Land Cover Layer

Value   New Value   Class Name
0       0           Background
1       4           Riparian
2       1           Grassland and Scrub
3       1           Chaparral
4       4           Wetlands
5       1           Emergent Vegetation
6       1           Water

Use the Recode (GIS Analysis) function in Image Interpreter or Spatial Modeler to recode layers.
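Recoding is essentially a table lookup from old class values to new ones. A minimal NumPy sketch of the Table 21 recode (the land cover array itself is hypothetical):

```python
import numpy as np

# New value for each original class value 0 - 6, following Table 21
recode_table = np.array([0, 4, 1, 1, 4, 1, 1])

land_cover = np.array([[1, 2, 2, 6],
                       [1, 4, 5, 6],
                       [0, 3, 3, 2]])

recoded = recode_table[land_cover]   # index lookup applies the recode per pixel
print(recoded)
```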


Overlaying Thematic data layers can be overlaid to create a composite layer. The output layer
contains either the minimum or the maximum class values of the input layers. For
example, if an area was in class 5 in one layer, and in class 3 in another, and the
maximum class value dominated, then the same area would be coded to class 5 in the
output layer, as shown in Figure 160.

Basic Overlay Application Example


[Figure: the original slope layer (1-5 = flat slopes, 6-9 = steep slopes) is recoded so that flat slopes become 0 and steep slopes become 9. The recoded slope layer is then overlaid with a land use layer (1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands). In the composite, land use values are retained except where steep slopes (9) mask them.]

Figure 160: Overlay

The application example in Figure 160 shows the result of combining two layers—slope
and land use. The slope layer is first recoded to combine all steep slopes into one value.
When overlaid with the land use layer, the highest data file values (the steep slopes)
dominate in the output layer.

Use the Overlay (GIS Analysis) function in Image Interpreter or Spatial Modeler to overlay
layers.
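A maximum-rule overlay reduces to an element-wise maximum of the two layers. This hypothetical NumPy sketch mirrors the Figure 160 example, with 0/9 for the recoded slope and classes 1-5 for land use:

```python
import numpy as np

recoded_slope = np.array([[9, 0, 9],
                          [9, 0, 0],
                          [0, 9, 0]])    # 0 = flat, 9 = steep
land_use = np.array([[2, 4, 1],
                     [3, 1, 2],
                     [5, 2, 3]])         # classes 1 - 5

# Maximum overlay: steep slopes (9) dominate wherever they occur
composite = np.maximum(recoded_slope, land_use)
print(composite)
```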



Indexing Thematic layers can be indexed (added) to create a composite layer. The output layer
contains the sums of the input layer values. For example, the intersection of class 3 in
one layer and class 5 in another would result in class 8 in the output layer, as shown in
Figure 161.

Basic Index Application Example


[Figure: soils, slope, and access layers, each rated 9 = good, 5 = fair, 1 = poor. Soils and access are weighted ×1 and slope is weighted ×2; the weighted layers are summed, and the highest output values mark the best combination of factors.]

Figure 161: Indexing

The application example in Figure 161 shows the result of indexing. In this example, the
user wants to develop a new subdivision, and the most likely sites are where there is
the best combination (highest value) of good soils, good slope, and good access. Since
good slope is a more critical factor to the user than good soils or good access, a
weighting factor is applied to the slope layer. A weighting factor has the effect of multi-
plying all input values by some constant. In this example, slope is given a weight of 2.
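Indexing with weights is simply a weighted sum of the input layers. The arrays below are hypothetical, but the weights follow the Figure 161 example (slope counted twice, soils and access once):

```python
import numpy as np

soils  = np.array([[9, 5], [1, 9]])   # 9 = good, 5 = fair, 1 = poor
slope  = np.array([[9, 9], [5, 9]])
access = np.array([[9, 1], [5, 9]])

# A weighting factor multiplies every value in a layer
index = soils * 1 + slope * 2 + access * 1
print(index)    # the highest totals mark the best combination of factors
```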

Use the Index (GIS Analysis) function in the Image Interpreter or Spatial Modeler to index
layers.


Matrix Analysis

Matrix analysis produces a thematic layer that contains a separate class for every coincidence of classes in two layers. The output is best described with a matrix diagram.

                         input layer 2 data values (columns)
                           0    1    2    3    4    5
                      0    0    0    0    0    0    0
input layer 1         1    0    1    2    3    4    5
data values (rows)    2    0    6    7    8    9   10
                      3    0   11   12   13   14   15

In this diagram, the classes of the two input layers represent the rows and columns of
the matrix. The output classes are assigned according to the coincidence of any two
input classes.

All combinations of 0 and any other class are coded to 0, because 0 is usually the background
class, representing an area that is not being studied.

Unlike overlaying or indexing, the resulting class values of a matrix operation are
unique for each coincidence of two input class values. In this example, the output class
value at column 1, row 3 is 11, and the output class at column 3, row 1 is 3. If these files
were indexed (summed) instead of matrixed, both combinations would be coded to
class 4.
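A matrix operation can be sketched as a simple numbering rule: with layer 1 classes 0-3 and layer 2 classes 0-5, each nonzero coincidence receives its own output class, and any coincidence with 0 stays 0. The arrays are hypothetical; the rule reproduces the matrix diagram above.

```python
import numpy as np

layer1 = np.array([[1, 2, 3],
                   [0, 2, 1]])     # classes 0 - 3
layer2 = np.array([[3, 1, 5],
                   [4, 0, 2]])     # classes 0 - 5

n_cols = 5   # number of nonzero classes in layer 2
out = np.where((layer1 == 0) | (layer2 == 0), 0,
               (layer1 - 1) * n_cols + layer2)
print(out)   # [[ 3  6 15] [ 0  0  2]], matching the matrix diagram
```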

Use the Matrix (GIS Analysis) function in Image Interpreter or Spatial Modeler to matrix
layers.



Modeling Modeling is a powerful and flexible analysis tool. Modeling is the process of creating
new layers from combining or operating upon existing layers. Modeling enables the
user to create a small set of layers—perhaps even a single layer—which, at a glance,
contains many types of information about the study area.

For example, if a user wants to find the best areas for a bird sanctuary, taking into
account vegetation, availability of water, climate, and distance from highly developed
areas, he or she would create a thematic layer for each of these criteria. Then, each of
these layers would be input to a model. The modeling process would create one
thematic layer, showing only the best areas for the sanctuary.

The set of procedures that define the criteria is called a model. In ERDAS IMAGINE,
models can be created graphically and resemble a flow chart of steps, or they can be
created using a script language. Although these two types of models look different, they
are essentially the same—input files are defined, functions and/or operators are
specified, and outputs are defined. The model is run and a new output layer(s) is
created. Models can utilize analysis functions that have been previously defined, or
new functions can be created by the user.

Use the Model Maker function in Spatial Modeler to create graphical models and the Spatial
Modeler Language to create script models.

Data Layers
In modeling, the concept of layers is especially important. Before computers were used
for modeling, the most widely used approach was to overlay registered maps on paper
or transparencies, with each map corresponding to a separate theme. Today, digital
files replace these hardcopy layers and allow much more flexibility for recoloring,
recoding, and reproducing geographical information (Steinitz, Parker, and Jordan
1976).

In a model, the corresponding pixels at the same coordinates in all input layers are
addressed as if they were physically overlaid like hardcopy maps.


Graphical Modeling Graphical modeling enables the user to “draw” models using a palette of tools that
defines inputs, functions, and outputs. This type of modeling is very similar to drawing
flowcharts, in that the user identifies a logical flow of steps needed to perform the
desired action. Through the extensive functions and operators available in the ERDAS
IMAGINE graphical modeling program, the user can analyze many layers of data in
very few steps, without creating intermediate files that occupy extra disk space.
Modeling is performed using a graphical editor that eliminates the need to learn a
programming language. Complex models can be developed easily and then quickly
edited and re-run on another data set.

Use the Model Maker function in Spatial Modeler to create graphical models.

Image Processing and GIS


In ERDAS IMAGINE, the traditional GIS functions (e.g., neighborhood analysis,
proximity analysis, recode, overlay, index, etc.) can be performed in models, as well as
image processing functions. Both thematic and continuous layers can be input into
models that accomplish many objectives at once.

For example, suppose there is a need to assess the environmental sensitivity of an area
for development. An output layer can be created that ranks most to least sensitive
regions based on several factors, such as slope, land cover, and floodplain. To visualize
the location of these areas, the output thematic layer can be overlaid onto a high
resolution, continuous raster layer (e.g., SPOT panchromatic) that has had a convo-
lution filter applied. All of this can be accomplished in a single model (as shown in
Figure 162).



Figure 162: Graphical Model for Sensitivity Analysis

See the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating the
environmental sensitivity model in Figure 162. Descriptions of all of the graphical models
delivered with ERDAS IMAGINE are available in the On-Line Help.

Model Structure
A model created with Model Maker is essentially a flow chart that defines:

• the input image(s), matrix(ces), table(s), and scalar(s) to be analyzed

• calculations, functions, or operations to be performed on the input data

• the output image(s) to be created

The graphical models created in Model Maker all have the same basic structure: input,
function, output. The number of inputs, functions, and outputs can vary, but the overall
form remains constant. All components must be connected to one another before the
model can be executed. The model on the left in Figure 163 is the most basic form. The
model on the right is more complex, but it retains the same input/function/output
flow.


[Figure: the basic model is a single Input connected to a Function and then an Output; the complex model has several inputs, functions, and outputs, but retains the same input/function/output flow.]

Figure 163: Graphical Model Structure

Graphical models are stored in ASCII files with the .gmd extension. There are several
sample graphical models delivered with ERDAS IMAGINE that can be used as is or
edited for more customized processing.

See the On-Line Help for instructions on editing existing models.



Model Maker Functions The functions available in Model Maker are divided into 19 categories:

Category         Description

Analysis         Includes convolution filtering, histogram matching, contrast stretch, principal components, and more.
Arithmetic       Perform basic arithmetic functions including addition, subtraction, multiplication, division, factorial, and modulus.
Bitwise          Use bitwise and, or, exclusive or, and not.
Boolean          Perform logical functions including and, or, and not.
Color            Manipulate colors to and from RGB (red, green, blue) and IHS (intensity, hue, saturation).
Conditional      Run logical tests using conditional statements and either...if...or...otherwise.
Data Generation  Create raster layers from map coordinates, column numbers, or row numbers. Create a matrix or table from a list of scalars.
Descriptor       Read attribute information and map a raster through an attribute column.
Distance         Perform distance functions, including proximity analysis.
Exponential      Use exponential operators, including natural and common logarithmic, power, and square root.
Focal (Scan)     Perform neighborhood analysis functions, including boundary, density, diversity, majority, mean, minority, rank, standard deviation, sum, and others.
Global           Analyze an entire layer and output one value, such as diversity, maximum, mean, minimum, standard deviation, sum, and more.
Matrix           Multiply, divide, and transpose matrices, as well as convert a matrix to a table and vice versa.
Other            Includes over 20 miscellaneous functions for data type conversion, various tests, and other utilities.
Relational       Includes equality, inequality, greater than, less than, greater than or equal, less than or equal, and others.
Statistical      Includes density, diversity, majority, mean, rank, standard deviation, and more.
String           Manipulate character strings.
Surface          Calculate aspect and degree/percent slope and produce shaded relief.
Trigonometric    Use common trigonometric functions, including sine/arcsine, cosine/arccosine, tangent/arctangent, and hyperbolic arcsine, arccosine, cosine, sine, and tangent.

These functions are also available for script modeling.


See the ERDAS IMAGINE Tour Guides manual and the on-line Spatial Modeler Language
manual for complete instructions on using Model Maker and more detailed information about
the available functions and operators.

Objects Within Model Maker, an object is an input to or output from a function. The four basic
object types used in Model Maker are:

• raster

• scalar

• matrix

• table

Raster
A raster object is a single layer or set of layers. Rasters are typically used to specify and
manipulate data from image (.img) files.

Scalar
A scalar object is a single numeric value. Scalars are often used as weighting factors.

Matrix
A matrix object is a set of numbers arranged in a two-dimensional array. A matrix has
a fixed number of rows and columns. Matrices may be used to store convolution kernels
or the neighborhood definition used in neighborhood functions. They can also be used
to store covariance matrices, eigenvector matrices, or matrices of linear combination
coefficients.

Table
A table object is a series of numeric values or character strings. A table has one column
and a fixed number of rows. Tables are typically used to store columns from the Raster
Attribute Editor or a list of values which pertains to the individual layers of a set of
layers. For example, a table with four rows could be used to store the maximum value
from each layer of a four layer image file. A table may consist of up to 32,767 rows.
Information in the table can be attributes, calculated (e.g., histograms), or user-defined.

The graphics used in Model Maker to represent each of these objects are shown in
Figure 164.



Figure 164: Modeling Objects (the graphics that represent the raster, scalar, matrix, and table objects)

Data Types
The four object types described above may be any of the following data types:

• Binary — either 0 (false) or 1 (true)

• Integer — integer values from -2,147,483,648 to 2,147,483,647 (signed 32-bit integer)

• Float — floating point data (double precision)

• String — a character string (for table objects only)

Input and output data types do not have to be the same. Using the Spatial Modeler
Language, the user can change the data type of input files before they are processed.

Output Parameters
Since it is possible to have several inputs in one model, one can optionally define the
working window and the pixel cell size of the output data.

Working Window
Raster layers of differing areas can be input into one model. However, the image area,
or working window, must be specified in order to be used in the model calculations. Either
of the following options can be selected:

• Union — the model will operate on the union of all input rasters. (This is the
default.)

• Intersection — the model will use only the area of the rasters that is common to all
input rasters.

Input layers must be referenced to the same coordinate system (i.e., Lat/Lon, UTM, State Plane,
etc.).


Pixel Cell Size


Input rasters may also be of differing resolution (pixel size), so the user must select the
output cell size as either:

• Minimum — the minimum cell size of the input layers will be used (this is the
default setting).

• Maximum — the maximum cell size of the input layers will be used.

• Other — specify a new cell size.
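In a script model these output parameters correspond to SET statements. The sample script later in this chapter uses SET CELLSIZE MIN and SET WINDOW UNION; the MAX and INTERSECTION forms below are assumed spellings given only to sketch the remaining options, and should be checked against the on-line Spatial Modeler Language manual.

SET WINDOW UNION;          # operate on the union of all input rasters (the default)
SET WINDOW INTERSECTION;   # assumed spelling: restrict the model to the common area
SET CELLSIZE MIN;          # use the minimum input cell size (the default)
SET CELLSIZE MAX;          # assumed spelling: use the maximum input cell size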

Using Attributes in Models
With the criteria function in Model Maker, attribute data can be used to determine output values. The criteria function simplifies the process of creating a conditional statement. The criteria function can be used to build a table of conditions that must be satisfied to output a particular row value for an attribute (or cell value) associated with the selected raster.

The inputs to a criteria function are rasters. The columns of the criteria table represent
either attributes associated with a raster layer or the layer itself, if the cell values are of
direct interest. Criteria which must be met for each output column are entered in a cell
in that column (e.g., >5). Multiple sets of criteria may be entered in multiple rows. The
output raster will contain the first row number of a set of criteria that were met for a
raster cell.

Example
For example, consider the sample thematic layer, parks.img, that contains the following
attribute information:

Table 22: Attribute Information for parks.img

Class Name Histogram Acres Path Condition Turf Condition Car Spaces
Grant Park 2456 403.45 Fair Good 127
Piedmont Park 5167 547.88 Good Fair 94
Candler Park 763 128.90 Excellent Excellent 65
Springdale Park 548 46.33 None Excellent 0

A simple model could create one output layer that showed only the parks in need of
repairs. The following logic would be coded into the model:

“If Turf Condition is not Good or Excellent, and if Path Condition is not Good or
Excellent, then the output class value is 1. Otherwise, the output class value is 2.”
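Expressed in the Spatial Modeler Language, the same logic might look like the following sketch. This is a hypothetical illustration only: it assumes that the Turf Condition and Path Condition attributes have already been mapped to numeric ranks (1 = None, 2 = Fair, 3 = Good, 4 = Excellent) in the rasters n2_turf and n3_path, for example with a Descriptor function, and the AND spelling of the Boolean operator is likewise assumed. Only the either...if...or...otherwise form itself comes from the function list earlier in this chapter.

# Hypothetical sketch: flag parks whose turf and path conditions are both below Good.
n4_repair = EITHER 1 IF ( $n2_turf < 3 AND $n3_path < 3 ) OR 2 OTHERWISE ;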

More than one input layer could also be used. For example, a model could be created,
using the input layers parks.img and soils.img, which would show the soil types for
parks with Fair or Poor turf condition. Attributes can be used from every input file.



The following is a slightly more complex example:

If a user had a land cover file and wanted to create a file of pine forests larger than 10
acres, the criteria function could be used to output values only for areas that satisfied
the conditions of being both pine forest and larger than 10 acres. The output file would
have two classes: pine forests larger than 10 acres and background. If the user wanted
the output file to show varying sizes of pine forest, he or she would simply add more
conditions to the criteria table.

Comparisons of attributes can also be combined with mathematical and logical functions on the class values of the input file(s). With these capabilities, highly complex models can be created.

See the ERDAS IMAGINE Tour Guides manual or the On-Line Help for specific instructions
on using the criteria function.

Script Modeling
The Spatial Modeler Language is a script language used internally by Model Maker to execute the operations specified in the graphical models that are created. The Spatial Modeler Language can also be used to write user-created models directly. It includes
all of the functions available in Model Maker, plus:

• conditional branching and looping

• the ability to use complex and color data types

• more flexibility in using raster objects and attributes

Graphical models created with Model Maker can be output to a script file (text only) in
the Spatial Modeler Language. These scripts can then be edited with a text editor using
the Spatial Modeler Language syntax and re-run or saved in a library. Script models can
also be written from scratch in the text editor. They are stored in ASCII .mdl files.

The Text Editor is available from the ERDAS IMAGINE icon panel and from the Script Library
(Spatial Modeler).

In Figure 165, both the graphical and script models are shown for a tasseled cap trans-
formation. Notice how even the annotation on the graphical model is included in the
automatically generated script model. Generating script models from graphical models
may aid in learning the Spatial Modeler Language.


Tasseled Cap
Transformation
Models

Graphical Model

Script Model

# TM Tasseled Cap Transformation


# of Lake Lanier, Georgia
#
# declarations
#
INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";
#
# set cell size for the model
#
SET CELLSIZE MIN;
#
# set window for the model
#
SET WINDOW UNION;
#
# load matrix n2_Custom_Matrix
#
n2_Custom_Matrix = MATRIX(3, 7:
0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
-0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);
#
# function definitions
#
n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;
QUIT;

Figure 165: Graphical and Script Models For Tasseled Cap Transformation

Convert graphical models to scripts using Model Maker. Open existing script models from the
Script Librarian (Spatial Modeler).



Statements
A script model consists primarily of one or more statements. Each statement falls into
one of the following categories:

• Declaration — defines objects to be manipulated within the model

• Assignment — assigns a value to an object

• Show and View — enables the user to see and interpret results from the model

• Set — defines the scope of the model or establishes default values used by the
Modeler

• Macro Definition — defines substitution text associated with a macro name

• Quit — ends execution of the model

The Spatial Modeler Language also includes flow control structures, so that the user can utilize conditional branching and looping in models, as well as statement block structures, which cause a set of statements to be executed as a group.

Declaration Example
In the script model in Figure 165, the following lines form the declaration portion of the
model:

INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";

FLOAT MATRIX n2_Custom_Matrix;

FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";

Set Example
The following set statements are used:

SET CELLSIZE MIN;

SET WINDOW UNION;


Assignment Example
The following assignment statements are used:

n2_Custom_Matrix = MATRIX(3, 7:
0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
-0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);

n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;

Data Types
In addition to the data types utilized by Graphical Modeling, script model objects can
store data in the following data types:

• Complex — complex data (double precision)

• Color — three floating point numbers in the range of 0.0 to 1.0, representing
intensity of red, green, and blue

Variables
Variables are objects in the Modeler which have been associated with a name using a
declaration statement. The declaration statement defines the data type and object type
of the variable. The declaration may also associate a raster variable with certain layers
of an image file or a table variable with an attribute table. Assignment statements are
used to set or change the value of a variable.

For script model syntax rules, descriptions of all available functions and operators, and sample
models, see the on-line Spatial Modeler Language manual.



Vector Analysis
Most of the operations discussed in the previous pages of this chapter focus on raster
data. However, in a complete GIS data base, both raster and vector layers will be
present. One of the most common applications involving the combination of raster and
vector data is the updating of vector layers using current raster imagery as a backdrop
for vector editing. For example, if a vector data base is more than one or two years old,
then there are probably errors due to changes in the area (new roads, moved roads, new
development, etc.). When displaying existing vector layers over a raster layer, the user
can dynamically update the vector layer by digitizing new or changed features on the
screen.

Vector layers can also be used to indicate an area of interest (AOI) for further
processing. Assume the user wants to run a site suitability model on only areas desig-
nated for commercial development in the zoning ordinances. By selecting these zones
in a vector polygon layer, the user could restrict the model to only those areas in the
raster input files.

Editing Vector Coverages
Editable features are polygons (as lines), lines, label points, and nodes. There can be multiple features selected with a mixture of any and all feature types. Editing operations and commands can be performed on multiple or single selections. In addition to the basic editing operations (e.g., cut, paste, copy, delete), the user can also perform the following operations on the line features in multiple or single selections:

• spline — smooths or generalizes all currently selected lines using a specified grain
tolerance

• generalize — weeds vertices from selected lines using a specified tolerance

• split/unsplit — makes two lines from one by adding a node or joins two lines by
removing a node

• densify — adds vertices to selected lines at a user-specified tolerance

• reshape (for single lines only) — enables the user to move the vertices of a line

Reshaping (adding, deleting, or moving a vertex or node) can be done on a single selected line. Below, in Table 23, are general editing operations and the feature types that will support each of those operations.

Table 23: General Editing Operations and Supporting Feature Types

Add Delete Move Reshape

Points yes yes yes no

Lines yes yes yes yes

Polygons yes yes yes no

Nodes yes yes yes no

The Undo utility may be applied to any edits. The software stores all edits in sequential
order, so that continually pressing Undo will reverse the editing.


Constructing Topology
Either the build or clean option can be used to construct topology. To create spatial relationships between features in a vector layer, it is necessary to create topology. After a vector layer is edited, the topology must be constructed to maintain the topological relationships between features. When topology is constructed, each feature is assigned an internal number. These numbers are then used to determine line connectivity and polygon contiguity. Once calculated, these values are recorded and stored in that layer's associated attribute table.

You must also reconstruct the topology of vector layers imported into ERDAS IMAGINE.

When topology is constructed, feature attribute tables are created with several automatically generated fields. Different fields are stored for the different types of layers. The
automatically generated fields for a line layer are:

• FNODE# — the internal node number for the beginning of a line (from-node)

• TNODE# — the internal number for the end of a line (to-node)

• LPOLY# — the internal number for the polygon to the left of the line (will be zero
for layers containing only lines and no polygons)

• RPOLY# — the internal number for the polygon to the right of the line (will be zero
for layers containing only lines and no polygons)

• LENGTH — length of each line, measured in layer units

• Cover# — internal line number (values assigned by ERDAS IMAGINE)

• Cover-ID — user-ID (values modified by the user)

The automatically generated fields for a point or polygon layer are:

• AREA — area of each polygon, measured in layer units (will be zero for layers
containing only points and no polygons)

• PERIMETER — length of each polygon boundary, measured in layer units (will be zero for layers containing only points and no polygons)

• Cover# — internal polygon number (values assigned by ERDAS IMAGINE)

• Cover-ID — user-ID (values modified by the user)

Building and Cleaning Coverages
Build processes points, lines, and polygons, but clean processes only lines and polygons. Build recognizes only existing intersections (nodes), whereas clean creates intersections (nodes) wherever lines cross one another. The differences in these two options are summarized in Table 24 (ESRI 1990).



Table 24: Comparison of Building and Cleaning Coverages

Capabilities Build Clean

Processes:
Polygons Yes Yes
Lines Yes Yes
Points Yes No

Numbers features Yes Yes

Calculates spatial measurements Yes Yes

Creates intersections No Yes

Processing speed Faster Slower

Errors
Constructing topology also helps to identify errors in the layer. Some of the common
errors found are:

• Lines with less than two nodes

• Polygons that are not closed

• Polygons that have no label point or too many label points

• User-IDs that are not unique

Constructing topology can identify the errors mentioned above. When topology is
constructed, line intersections are created, the lines that make up each polygon are
identified, and a label point is associated with each polygon. Until topology is
constructed, no polygons exist and lines that cross each other are not connected at a
node, since there is no intersection.

Construct topology using the Vector Utilities menu from the Vector icon in the IMAGINE icon
panel.

You should not build or clean a layer that is displayed in a Viewer, nor should you try to display
a layer that is being built or cleaned.


When the build or clean options are used to construct the topology of a vector layer,
potential node errors are marked with special symbols. These symbols are listed below
(ESRI 1990).

Pseudo nodes, drawn with a diamond symbol, occur where a single line connects
with itself (an island) or where only two lines intersect. Pseudo nodes do not neces-
sarily indicate an error or a problem. Acceptable pseudo nodes may represent an island
(a spatial pseudo node) or the point where a road changes from pavement to gravel (an
attribute pseudo node).

A dangling node, represented by a square symbol, refers to the unconstructed node of a dangling line. Every line begins and ends at a node point. So if a line does not
close properly, or was digitized past an intersection, it will register as a dangling node.
In some cases, a dangling node may be acceptable. For example, in a street centerline
map, cul-de-sacs are often represented by dangling nodes.

In polygon layers there may be label errors—usually no label point for a polygon, or
more than one label point for a polygon. In the latter case, two or more points may have
been mistakenly digitized for a polygon, or it may be that a line does not intersect
another line, resulting in an open polygon.

Figure 166: Layer Errors (illustrating a pseudo node on an island, a polygon with no label point, dangling nodes, and two label points in one polygon caused by a dangling node)

Errors detected in a layer can be corrected by changing the tolerances set for that layer
and building or cleaning again, or by editing the layer manually, then running build or
clean.

Refer to the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on editing
vector layers.



CHAPTER 11
Cartography

Introduction
Maps and mapping are the subject of the art and science known as cartography—
creating 2-dimensional representations of our 3-dimensional Earth. These representa-
tions were once hand-drawn with paper and pen. But now, map production is largely
automated—and the final output is not always paper. The capabilities of a computer
system are invaluable to map users, who often need to know much more about an area
than can be reproduced on paper, no matter how large that piece of paper is or how
small the annotation is. Maps stored on a computer can be queried, analyzed, and
updated quickly.

As the veteran GIS and image processing authority Roger F. Tomlinson said, “Mapped
and related statistical data do form the greatest storehouse of knowledge about the
condition of the living space of mankind.” With this thought in mind, it only makes
sense that maps be created as accurately as possible and be as accessible as possible.

In the past, map making was carried out by mapping agencies who took the analyst’s
(be they surveyors, photogrammetrists, or draftsmen) information and created a map
to illustrate that information. But today, in many cases, the analyst is the cartographer
and can design his maps to best suit the data and the end user.

This chapter defines some basic cartographic terms and explains how maps are created
within the ERDAS IMAGINE environment.

Use the ERDAS IMAGINE Map Composer to create hardcopy and softcopy maps and presen-
tation graphics.

This chapter concentrates on the production of digital maps. See "CHAPTER 12: Hardcopy
Output" for information about printing hardcopy maps.



Types of Maps
A map is a graphic representation of spatial relationships on the earth or other planets.
Maps can take on many forms and sizes, depending on the intended use of the map.
Maps no longer refer only to hardcopy output. In this manual, the maps discussed
begin as digital files and may be printed later as desired.

Some of the different types of maps are defined below.

Map Purpose

Aspect A map that shows the prevailing direction that a slope faces at each pixel.
Aspect maps are often color-coded to show the eight major compass
directions or any of 360 degrees.
Base A map portraying background reference information onto which other
information is placed. Base maps usually show the location and extent of
natural earth surface features and permanent man-made objects. Raster
imagery, orthophotos, and orthoimages are often used as base maps.
Bathymetric A map portraying the shape of a water body or reservoir using isobaths
(depth contours).
Cadastral A map showing the boundaries of the subdivisions of land for purposes
of describing and recording ownership or taxation.
Choropleth A map portraying properties of a surface using area symbols. Area sym-
bols usually represent categorized classes of the mapped phenomenon.
Composite A map on which the combined information from different thematic maps
is presented.
Contour A map in which lines are used to connect points of equal elevation. Lines
are often spaced in increments of ten or twenty feet or meters.
Derivative A map created by altering, combining, or through the analysis of other
maps.
Index A reference map that outlines the mapped area, identifies all of the com-
ponent maps for the area if several map sheets are required, and identi-
fies all adjacent map sheets.
Inset A map that is an enlargement of some congested area of a smaller scale
map, and that is usually placed on the same sheet with the smaller scale
main map.
Isarithmic A map that uses isarithms (lines connecting points of the same value for
any of the characteristics used in the representation of surfaces) to repre-
sent a statistical surface. Also called an isometric map.
Isopleth A map on which isopleths (lines representing quantities that cannot exist
at a point, such as population density) are used to represent some
selected quantity.
Morphometric A map representing morphological features of the earth’s surface.

Outline A map showing the limits of a specific set of mapping entities, such as
counties, NTS quads, etc. Outline maps usually contain a very small
number of details over the desired boundaries with their descriptive
codes.
Planimetric A map showing only the horizontal position of geographic objects, with-
out topographic features or elevation contours.


Map Purpose
Relief Any map that appears to be, or is, 3-dimensional. Also called a shaded
relief map.
Slope A map which shows changes in elevation over distance. Slope maps are
usually color-coded according to the steepness of the terrain at each
pixel.
Thematic A map illustrating the class characterizations of a particular spatial vari-
able such as soils, land cover, hydrology, etc.
Topographic A map depicting terrain relief.

Viewshed A map showing only those areas visible (or invisible) from a specified
point(s). Also called a line-of-sight map or a visibility map.

In ERDAS IMAGINE, maps are stored as a map file with a .map extension.

See "APPENDIX B: File Formats and Extensions" for information on the format of the .map file.



Thematic Maps
Thematic maps comprise a large portion of the maps that many organizations create.
For this reason, this map type will be explored in more detail.

Thematic maps may be subdivided into two groups:

• qualitative

• quantitative

A qualitative map shows the spatial distribution or location of a kind of nominal data.
For example, a map showing corn fields in the United States would be a qualitative
map. It would not show how much corn is produced in each location, or production
relative to the other areas.

A quantitative map displays the spatial aspects of numerical data. A map showing corn
production (volume) in each area would be a quantitative map. Quantitative maps
show ordinal (less than/greater than) and interval/ratio (how much different) scale
data (Dent 1985).

You can create thematic data layers from continuous data (aerial photography and satellite
images) using the ERDAS IMAGINE classification capabilities. See “Chapter 6: Classification”
for more information.

Base Information
Thematic maps should include a base of information so that the reader can easily relate
the thematic data to the real world. This base may be as simple as an outline of counties,
states, or countries, to something more complex, such as an aerial photograph or
satellite image. In the past, it was difficult and expensive to produce maps that included
both thematic and continuous data, but technological advances have made this easy.

For example, in a thematic map showing flood plains in the Mississippi River valley,
the user could overlay the thematic data onto a line coverage of state borders or a
satellite image of the area. The satellite image can provide more detail about the areas
bordering the flood plains. This may be valuable information when planning
emergency response and resource management efforts for the area. Satellite images can
also provide very current information about an area, and can assist the user in assessing
the accuracy of a thematic image.

In ERDAS IMAGINE, you can include multiple layers in a single map composition. See Map
Composition on page 432 for more information about creating maps.


Color Selection
The colors used in thematic maps may or may not have anything to do with the class or
category of information shown. Cartographers usually try to use a color scheme that
highlights the primary purpose of the map. The map reader’s perception of colors also
plays an important role. Most people are more sensitive to red, followed by green,
yellow, blue, and purple. Although color selection is left entirely up to the map
designer, some guidelines have been established (Robinson and Sale 1969).

• When mapping interval or ordinal data, the higher ranks and greater amounts are
generally represented by darker colors.

• Use blues for water.

• When mapping elevation data, start with blues for water, greens in the lowlands,
ranging up through yellows and browns to reds in the higher elevations. This
progression should not be used for series other than elevation.

• In temperature mapping, use red, orange, and yellow for warm temperatures and
blue, green, and gray for cool temperatures.

• In land cover mapping, use yellows and tans for dryness and sparse vegetation and
greens for lush vegetation.

• Use browns for land forms.

Use the Raster Attributes option in the Viewer to select and modify class colors.



Annotation
A map is more than just an image(s) on a background. Since a map is a form of commu-
nication, it must convey information that may not be obvious by looking at the image.
Therefore, maps usually contain several annotation elements to explain the map.
Annotation is any explanatory material that accompanies a map to denote graphical
features on the map. This annotation may take the form of:

• scale bars

• legends

• neatlines, tick marks, and grid lines

• symbols (north arrows, etc.)

• labels (rivers, mountains, cities, etc.)

• descriptive text (title, copyright, credits, production notes, etc.)

The annotation listed above is made up of single elements. The basic annotation
elements in ERDAS IMAGINE include:

• rectangles (including squares)

• ellipses (including circles)

• polygons and polylines

• text

These elements can be used to create more complex annotation, such as legends, scale
bars, etc. These annotation components are actually groups of the basic elements and
can be ungrouped and edited like any other graphic. The user can also create his or her
own groups to form symbols that are not in the IMAGINE symbol library. (Symbols are
discussed in more detail under "Symbols" on page 411.)

Create annotation using the Annotation tool palette in the Viewer or in a map composition.

How Annotation is Stored


An annotation layer is a set of annotation elements that is drawn in a Viewer or Map
Composer window and stored in a file. Annotation that is created in a Viewer window
is stored in a separate file from the other data in the Viewer. These annotation files are
called overlay files (.ovr extension). Map annotation that is created in a Map Composer
window is also stored in a .ovr file, which is named after the map composition. For
example, the annotation for a file called lanier.map would be lanier.map.ovr.

See "APPENDIX B: File Formats and Extensions" for information on the format of the .ovr file.


Scale
Map scale is a statement that relates distance on a map to distance on the earth’s
surface. It is perhaps the most important information on a map, since the level of detail
and map accuracy are both factors of the map scale. Scale is directly related to the map
extent, or the area of the earth’s surface to be mapped. If a relatively small area is to be
mapped, such as a neighborhood or subdivision, then the scale can be larger. If a large
area is to be mapped, such as an entire continent, the scale must be smaller. Generally,
the smaller the scale, the less detailed the map can be. As a rule, anything smaller than
1:250,000 is considered small-scale.

Scale can be reported in several ways, including:

• representative fraction

• verbal statement

• scale bar

Representative Fraction
Map scale is often noted as a simple ratio or fraction called a representative fraction. A
map in which one inch on the map equals 24,000 inches on the ground could be
described as having a scale of 1:24,000 or 1/24,000. The units on both sides of the ratio
must be the same.

Verbal Statement
A verbal statement of scale describes the distance on the map to the distance on the
ground. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to
16 miles. The units on the map and on the ground do not have to be the same in a verbal
statement. One-inch and 6-inch maps of the British Ordnance Survey are often referred
to by this method (1 inch to 1 mile, 6 inches to 1 mile) (Robinson and Sale 1969).
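The approximation follows directly from the representative fraction: at 1:1,000,000, one inch on the map represents 1,000,000 inches on the ground, and

1,000,000 in ÷ 63,360 in per mile ≈ 15.78 miles, or roughly 1 inch to 16 miles.

At 1:63,360, one inch represents exactly one mile, which is why the one-inch Ordnance Survey maps carry that scale.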

Scale Bars
A scale bar is a graphic annotation element that describes map scale. It shows the
distance on paper that represents a geographical distance on the map. Maps often
include more than one scale bar to indicate various measurement systems, such as
kilometers and miles.

Figure 167: Sample Scale Bars (one bar graduated in kilometers, another in miles)

Use the Scale Bar tool in the Annotation tool palette to automatically create representative
fractions and scale bars. Use the Text tool to create a verbal statement.



Common Map Scales
The user can create maps with an unlimited number of scales; however, there are some
commonly used scales. Table 25 lists these scales and their equivalents (Robinson and
Sale 1969).

Table 25: Common Map Scales

Map Scale | 1/40 inch represents | 1 inch represents | 1 centimeter represents | 1 mile is represented by | 1 kilometer is represented by
1:2,000 4.200 ft 56.000 yd 20.000 m 31.680 in 50.00 cm
1:5,000 10.425 ft 139.000 yd 50.000 m 12.670 in 20.00 cm
1:10,000 6.952 yd 0.158 mi 0.100 km 6.340 in 10.00 cm
1:15,840 11.000 yd 0.250 mi 0.156 km 4.000 in 6.25 cm
1:20,000 13.904 yd 0.316 mi 0.200 km 3.170 in 5.00 cm
1:24,000 16.676 yd 0.379 mi 0.240 km 2.640 in 4.17 cm
1:25,000 17.380 yd 0.395 mi 0.250 km 2.530 in 4.00 cm
1:31,680 22.000 yd 0.500 mi 0.317 km 2.000 in 3.16 cm
1:50,000 34.716 yd 0.789 mi 0.500 km 1.270 in 2.00 cm
1:62,500 43.384 yd 0.986 mi 0.625 km 1.014 in 1.60 cm
1:63,360 0.025 mi 1.000 mi 0.634 km 1.000 in 1.58 cm
1:75,000 0.030 mi 1.180 mi 0.750 km 0.845 in 1.33 cm
1:80,000 0.032 mi 1.260 mi 0.800 km 0.792 in 1.25 cm
1:100,000 0.040 mi 1.580 mi 1.000 km 0.634 in 1.00 cm
1:125,000 0.050 mi 1.970 mi 1.250 km 0.507 in 8.00 mm
1:250,000 0.099 mi 3.950 mi 2.500 km 0.253 in 4.00 mm
1:500,000 0.197 mi 7.890 mi 5.000 km 0.127 in 2.00 mm
1:1,000,000 0.395 mi 15.780 mi 10.000 km 0.063 in 1.00 mm


Table 26 shows the number of pixels per inch for selected scales and pixel sizes.

Table 26: Pixels per Inch

Pixel Size (m) | 1”=100’ (1:1200) | 1”=200’ (1:2400) | 1”=500’ (1:6000) | 1”=1000’ (1:12000) | 1”=1500’ (1:18000) | 1”=2000’ (1:24000) | 1”=4167’ (1:50000) | 1”=1 mile (1:63360)
1 30.49 60.96 152.40 304.80 457.20 609.60 1270.00 1609.35
2 15.24 30.48 76.20 152.40 228.60 304.80 635.00 804.67
2.5 12.13 24.38 60.96 121.92 182.88 243.84 508.00 643.74
5 6.10 12.19 30.48 60.96 91.44 121.92 254.00 321.87
10 3.05 6.10 15.24 30.48 45.72 60.96 127.00 160.93
15 2.03 4.06 10.16 20.32 30.48 40.64 84.67 107.29
20 1.52 3.05 7.62 15.24 22.86 30.48 63.50 80.47
25 1.22 2.44 6.10 12.19 18.29 24.38 50.80 64.37
30 1.02 2.03 5.08 10.16 15.24 20.32 42.33 53.64
35 .87 1.74 4.35 8.71 13.08 17.42 36.29 45.98
40 .76 1.52 3.81 7.62 11.43 15.24 31.75 40.23
45 .68 1.35 3.39 6.77 10.16 13.55 28.22 35.76
50 .61 1.22 3.05 6.10 9.14 12.19 25.40 32.19
75 .41 .81 2.03 4.06 6.10 8.13 16.93 21.46
100 .30 .61 1.52 3.05 4.57 6.10 12.70 16.09
150 .20 .41 1.02 2.03 3.05 4.06 8.47 10.73
200 .15 .30 .76 1.52 2.29 3.05 6.35 8.05
250 .12 .24 .61 1.22 1.83 2.44 5.08 6.44
300 .10 .30 .51 1.02 1.52 2.03 4.23 5.36
350 .09 .17 .44 .87 1.31 1.74 3.63 4.60
400 .08 .15 .38 .76 1.14 1.52 3.18 4.02
450 .07 .14 .34 .68 1.02 1.35 2.82 3.58
500 .06 .12 .30 .61 .91 1.22 2.54 3.22
600 .05 .10 .25 .51 .76 1.02 2.12 2.69
700 .04 .09 .22 .44 .65 .87 1.81 2.30
800 .04 .08 .19 .38 .57 .76 1.59 2.01
900 .03 .07 .17 .34 .51 .68 1.41 1.79
1000 .03 .06 .15 .30 .46 .61 1.27 1.61

Courtesy of D. Cunningham and D. Way, The Ohio State University.
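The values in Table 26 follow from the scale and the pixel size: one inch on the map represents (scale denominator × 0.0254) meters on the ground, so

pixels per inch = (scale denominator × 0.0254 m) ÷ pixel size (m)

For example, at 1:24,000 with 30-meter pixels, 24,000 × 0.0254 = 609.6 meters per map inch, and 609.6 ÷ 30 ≈ 20.32 pixels per inch, as listed in the table.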



Table 27 lists the number of acres and hectares per pixel for various pixel sizes.

Table 27: Acres and Hectares per Pixel

Pixel Size (m) Acres Hectares


1 0.0002 0.0001
2 0.0010 0.0004
2.5 0.0015 0.0006
5 0.0062 0.0025
10 0.0247 0.0100
15 0.0556 0.0225
20 0.0988 0.0400
25 0.1544 0.0625
30 0.2224 0.0900
35 0.3027 0.1225
40 0.3954 0.1600
45 0.5004 0.2025
50 0.6178 0.2500
75 1.3900 0.5625
100 2.4710 1.0000
150 5.5598 2.2500
200 9.8842 4.0000
250 15.4440 6.2500
300 22.2394 9.0000
350 30.2703 12.2500
400 39.5367 16.0000
450 50.0386 20.2500
500 61.7761 25.0000
600 88.9576 36.0000
700 121.0812 49.0000
800 158.1468 64.0000
900 200.1546 81.0000
1000 247.1044 100.0000

Courtesy of D. Cunningham and D. Way, The Ohio State University.
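The entries in Table 27 follow from the ground area covered by a single pixel: hectares per pixel = (pixel size in meters)² ÷ 10,000, and acres per pixel = hectares × 2.4710. For a 30-meter pixel, 30² = 900 m², which is 0.0900 hectare, or 0.0900 × 2.4710 ≈ 0.2224 acre, as listed in the table.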


Legends
A legend is a key to the colors, symbols, and line styles that are used in a map. Legends
are especially useful for maps of categorical data displayed in pseudo color, where each
color represents a different feature or category. A legend can also be created for a single
layer of continuous data, displayed in gray scale. Legends are likewise used to describe
all unknown or unique symbols utilized. Symbols in legends should appear exactly the
same size and color as they appear on the map (Robinson and Sale 1969).

Legend
pasture

forest

swamp

developed

Figure 168: Sample Legend

Use the Legend tool in the Annotation tool palette to automatically create color legends. Symbol
legends are not created automatically, but can be created manually.



Neatlines, Tick Marks, and Grid Lines
Neatlines, tick marks, and grid lines serve to provide a georeferencing system for map detail and are based on the map projection of the image shown.

• A neatline is a rectangular border around the image area of a map. It differs from the map border in that the border usually encloses the entire map, not just the image area.

• Tick marks are small lines along the edge of the image area or neatline that indicate
regular intervals of distance.

• Grid lines are intersecting lines that indicate regular intervals of distance, based on
a coordinate system. Usually, they are an extension of tick marks. It is often helpful
to place grid lines over the image area of a map. This is becoming less common on
thematic maps, but is really up to the map designer. If the grid lines will help
readers understand the content of the map, they should be used.

Figure 169: Sample Neatline, Tick Marks, and Grid Lines

Grid lines may also be referred to as a graticule.

Graticules are discussed in more detail in "Map Projections" on page 416.

Use the Grid/Tick tool in the Annotation tool palette to create neatlines, tick marks, and grid
lines. Tick marks and grid lines can also be created over images displayed in a Viewer. See the
On-Line Help for instructions.


Symbols
Since maps are a greatly reduced version of the real world, objects cannot be depicted
in their true shape or size. Therefore, a set of symbols is devised to represent real-world
objects. There are two major classes of symbols:

• replicative

• abstract

Replicative symbols are designed to look like their real-world counterparts; they
represent tangible objects, such as coastlines, trees, railroads, and houses. Abstract
symbols usually take the form of geometric shapes, such as circles, squares, and
triangles. They are traditionally used to represent amounts that vary from place to
place, such as population density, amount of rainfall, etc. (Dent 1985).

Both replicative and abstract symbols are composed of one or more of the following
annotation elements:

• point

• line

• area

Symbol Types
These basic elements can be combined to create three different types of replicative
symbols:

• plan — formed after the basic outline of the object it represents. For example, the
symbol for a house might be a square, since most houses are rectangular.

• profile — formed like the profile of an object. Profile symbols generally represent
vertical objects, such as trees, windmills, oil wells, etc.

• function — formed after the activity that a symbol represents. For example, on a
map of a state park, a symbol of a tent would indicate the location of a camping
area.

Plan Profile Function

Figure 170: Sample Symbols



Symbols can have different sizes, colors, and patterns to indicate different meanings
within a map. Size, color, and pattern are generally used to show qualitative
or quantitative differences among areas marked. For example, if a circle is used to show
cities and towns, larger circles would be used to show areas with higher population. A
specific color could be used to indicate county seats. Since symbols are not drawn to
scale, their placement is crucial to effective communication.

Use the Symbol tool in the Annotation tool palette and the symbol library to place symbols in
maps.

Labels and Descriptive Text
Place names and other labels convey important information to the reader about the features on the map. Any features that will help orient the reader or are important to the content of the map should be labeled. Descriptive text on a map can include the map title and subtitle, copyright information, captions, credits, production notes, or other explanatory material.

Title
The map title usually draws attention by virtue of its size. It focuses the reader’s
attention on the primary purpose of the map. The title may be omitted, however, if
captions are provided outside of the image area (Dent 1985).

Credits
Map credits (or source) can include the data source and acquisition date, accuracy
information, and other details that are required or helpful to readers. For example, if the
user includes data which they do not own in a map, they must give credit to the owner.

Use the Text tool in the Annotation tool palette to add labels and descriptive text to maps.

Typography and Lettering
The choice of type fonts and styles and how names are lettered can make the difference between a clear and attractive map and a jumble of imagery and text. As with many other aspects of map design, this is a very subjective area and many organizations already have guidelines to use. This section is intended as an introduction to the concepts involved and to convey traditional guidelines, where available.

If your organization does not have a set of guidelines for the appearance of maps and
you plan to produce many in the future, it would be beneficial to develop a style guide
specifically for mapping. This will ensure that all of the maps produced follow the same
conventions, regardless of who actually makes the map.

ERDAS IMAGINE enables you to make map templates to facilitate the development of map
standards within your organization.


Type Styles
Type style refers to the appearance of the text and may include font, size, and style
(bold, italic, underline, etc.). Although the type styles used in maps are purely a matter
of the designer’s taste, the following techniques help to make maps more legible
(Robinson and Sale 1969; Dent 1985).

• Do not use too many different typefaces in a single map. Generally, one or two
styles are enough when also using the variations of those type faces (e.g., bold,
italic, underline, etc.). When using two typefaces, use a serif and a sans serif, rather
than two different serif fonts or two different sans serif fonts [e.g., Sans (sans serif)
and Roman (serif) could be used together in one map].

• Avoid ornate text styles because they can be difficult to read.

• Exercise caution in using very thin letters that may not reproduce well. On the other
hand, using letters that are too bold may obscure important information in the
image.

• Use different sizes of type for showing varying levels of importance. For example,
on a map with city and town labels, city names will usually be in a larger type size
than the town names. Use no more than four to six different type sizes.

• Put the more important labels, titles, and names in all capital letters and less important text in lowercase with initial capitals. This is a matter of personal
preference, although names in which the letters must be spread out across a large
area are better in all capital letters. (Studies have found that capital letters are more
difficult to read, therefore lowercase letters might improve the legibility of the
map.)

• In the past, hydrology, landform, and other natural features were labeled in italic.
However, this is not strictly adhered to by map makers today, although water
features are still nearly always labeled in italic.

Sans Serif Serif


Sans 10 pt regular Roman 10 pt regular

Sans 10 pt italic Roman 10 pt italic

Sans 10 pt bold Roman 10 pt bold

Sans 10 pt bold italic Roman 10 pt bold italic

SANS 10 PT ALL CAPS ROMAN 10 PT ALL CAPS

Figure 171: Sample Sans Serif and Serif Typefaces with Various Styles Applied

Use the Styles dialog to adjust the style of text.



Lettering
Lettering refers to the way in which place names and other labels are added to a map.
Letter spacing, orientation, and position are the three most important factors in
lettering. Here again, there are no set rules for how lettering is to appear. Much is deter-
mined by the purpose of the map and the end user. Many organizations have
developed their own rules for lettering. Here is a list of guidelines that have been used
by cartographers in the past (Robinson and Sale 1969; Dent 1985).

• Names should be either entirely on land or water—not overlapping both.

• Lettering should generally be oriented to match the orientation structure of the map. In large-scale maps this means parallel with the upper and lower edges; in small-scale maps, in line with the parallels of latitude.

• Type should not be curved (i.e., different from preceding bullet) unless it is
necessary to do so.

• If lettering must be disoriented, it should never be set in a straight line, but should
always have a slight curve.

• Names should be letter spaced (space between individual letters - kerning) as little
as necessary.

• Where the continuity of names and other map data, such as lines and tones,
conflicts with the lettering, the data, not the names, should be interrupted.

• Lettering should never be upside-down in any respect.

• Lettering that refers to point locations should be placed above or below the point,
preferably above and to the right.

• The letters identifying linear features (roads, rivers, railroads, etc.) should not be
spaced. The word(s) should be repeated along the feature as often as necessary to
facilitate identification. These labels should be placed above the feature and river
names should slant in the direction of the river flow (if the label is italic).

• For geographical names, use the native language of the intended map user. For an
English-speaking audience, the name “Germany” should be used, rather than
“Deutschland.”


Figure 172: Good Lettering vs. Bad Lettering (the better column shows Atlanta, GEORGIA, and Savannah placed and spaced conventionally; the worse column shows the same names poorly placed and over-spaced)

Text Color
Many cartographers argue that all lettering on a map should be black. However, the
map may be well served by incorporating color into its design. In fact, studies have
shown that coding labels by color can improve a reader’s ability to find information
(Dent 1985).



Map Projections

This section is adapted from Map Projections for Use with the Geographic Information System
by Lee and Walsh, 1984.

A map projection is the manner in which the spherical surface of the earth is repre-
sented on a flat (two-dimensional) surface. This can be accomplished by direct
geometric projection or by a mathematically derived transformation. There are many
kinds of projections, but all involve transfer of the distinctive global patterns of parallels
of latitude and meridians of longitude onto an easily flattened surface, or developable
surface.

The three most common developable surfaces are the cylinder, cone, and plane (Figure
173). A plane is already flat, while a cylinder or cone may be cut and laid out flat,
without stretching. Thus, map projections may be classified into three general families:
cylindrical, conical, and azimuthal or planar.

Map projections are selected in the Projection Chooser. The Projection Chooser is accessible from
the ERDAS IMAGINE icon panel, and from several other locations.

Properties of Map Projections
Regardless of what type of projection is used, it is inevitable that some error or distortion will occur in transforming a spherical surface into a flat surface. Ideally, a distortion-free map has four valuable properties:

• conformality

• equivalence

• equidistance

• true direction

Each of these properties is explained below. No map projection can be true in all of these
properties. Therefore, each projection is devised to be true in selected properties, or
most often, a compromise among selected properties. Projections that compromise in
this manner are known as compromise projections.

Conformality is the characteristic of true shape, wherein a projection preserves the shape of any small geographical area. This is accomplished by exact transformation of
angles around points. One necessary condition is the perpendicular intersection of grid
lines as on the globe. The property of conformality is important in maps which are used
for analyzing, guiding, or recording motion, as in navigation. A conformal map or
projection is one that has the property of true shape.


Equivalence is the characteristic of equal area, meaning that areas on one portion of a
map are in scale with areas in any other portion. Preservation of equivalence involves
inexact transformation of angles around points and thus, is mutually exclusive with
conformality except along one or two selected lines. The property of equivalence is
important in maps that are used for comparing density and distribution data, as in
populations.

Equidistance is the characteristic of true distance measuring. The scale of distance is constant over the entire map. This property can be fulfilled on any given map from one,
or at most two, points in any direction or along certain lines. Equidistance is important
in maps that are used for analyzing measurements (i.e., road distances). Typically,
reference lines such as the equator or a meridian are chosen to have equidistance and
are termed standard parallels or standard meridians.

True direction is characterized by a direction line between two points that crosses
reference lines, for example, meridians, at a constant angle or azimuth. An azimuth is
an angle measured clockwise from a meridian, going north to east. The line of constant
or equal direction is termed a rhumb line.

The property of constant direction makes it comparatively easy to chart a navigational course. However, on a spherical surface, the shortest surface distance between two
points is not a rhumb line, but a great circle, being an arc of a circle whose center is the
center of the earth. Along a great circle, azimuths constantly change (unless the great
circle is the equator or a meridian).

Thus, a more desirable property than true direction may be where great circles are
represented by straight lines. This characteristic is most important in aviation. Note that
all meridians are great circles, but the only parallel that is a great circle is the equator.



Figure 173: Projection Types (regular cylindrical, transverse cylindrical, oblique cylindrical, regular conic, polar azimuthal (planar), and oblique azimuthal (planar))


Projection Types
Although a great number of projections have been devised, the majority of them are
geometric or mathematical variants of the basic direct geometric projection families
described below. Choice of the projection to be used will depend upon the true property
or combination of properties desired for effective cartographic analysis.

Azimuthal Projections
Azimuthal projections, also called planar projections, are accomplished by drawing
lines from a given perspective point through the globe onto a tangent plane. This is
conceptually equivalent to tracing a shadow of a figure cast by a light source. A tangent
plane intersects the global surface at only one point and is perpendicular to a line
passing through the center of the sphere. Thus, these projections are symmetrical
around a chosen center or central meridian. Choice of the projection center determines
the aspect, or orientation, of the projection surface.

Azimuthal projections may be centered:

• on the poles (polar aspect)

• at a point on the equator (equatorial aspect)

• at any other orientation (oblique aspect)

The origin of the projection lines—that is, the perspective point—may also assume
various positions. For example, it may be:

• the center of the earth (gnomonic)

• an infinite distance away (orthographic)

• on the earth’s surface, opposite the projection plane (stereographic)

Conical Projections
Conical projections are accomplished by intersecting, or touching, a cone with the
global surface and mathematically projecting lines onto this developable surface.

A tangent cone intersects the global surface to form a circle. Along this line of inter-
section, the map will be error-free and possess equidistance. Usually, this line is a
parallel, termed the standard parallel.

Cones may also be secant, and intersect the global surface, forming two circles that will
possess equidistance. In this case, the cone slices underneath the global surface,
between the standard parallels. Note that the use of the word “secant,” in this instance,
is only conceptual and not geometrically accurate. Conceptually, the conical aspect may
be polar, equatorial, or oblique. Only polar conical projections are supported in ERDAS
IMAGINE.



Tangent Secant
one standard parallel two standard parallels

Figure 174: Tangent and Secant Cones

Cylindrical Projections
Cylindrical projections are accomplished by intersecting, or touching, a cylinder with
the global surface. The surface is mathematically projected onto the cylinder, which is
then “cut” and “unrolled.”

A tangent cylinder will intersect the global surface on only one line to form a circle, as
with a tangent cone. This central line of the projection is commonly the equator and will
possess equidistance.

If the cylinder is rotated 90 degrees from the vertical (i.e., the long axis becomes
horizontal), then the aspect becomes transverse, wherein the central line of the
projection becomes a chosen standard meridian as opposed to a standard parallel. A
secant cylinder, one slightly less in diameter than the globe, will have two lines
possessing equidistance.

Tangent Secant
one standard parallel two standard parallels

Figure 175: Tangent and Secant Cylinders

Perhaps the most famous cylindrical projection is the Mercator, which became the
standard navigational map, possessing true direction and conformality.


Other Projections
The projections discussed so far are projections that are created by projecting from a
sphere (the earth) onto a plane, cone, or cylinder. Many other projections cannot be
created so easily.

Modified projections are modified versions of another projection. For example, the
Space Oblique Mercator projection is a modification of the Mercator projection. These
modifications are made to reduce distortion, often by including additional standard
lines or a different pattern of distortion.

Pseudo projections have only some of the characteristics of another class of projection.
For example, the Sinusoidal is called a pseudocylindrical projection because all lines of
latitude are straight and parallel, and all meridians are equally spaced. However, it
cannot truly be a cylindrical projection, because all meridians except the central
meridian are curved. This results in the Earth appearing oval instead of rectangular
(ESRI 1991).



Geographical and Planar Coordinates
Map projections require a point of reference on the earth’s surface. Most often this is the center, or origin, of the projection. This point is defined in two coordinate systems:

• geographical

• planar

Geographical
Geographical, or spherical, coordinates are based on the network of latitude and
longitude (Lat/Lon) lines that make up the graticule of the earth. Within the graticule,
lines of longitude are called meridians, which run north/south, with the prime
meridian at 0˚ (Greenwich, England). Meridians are designated as 0˚ to 180˚, east or
west of the prime meridian. The 180˚ meridian (opposite the prime meridian) is the
International Dateline.

Lines of latitude are called parallels, which run east/west. Parallels are designated as
0˚ at the equator to 90˚ at the poles. The equator is the largest parallel. Latitude and
longitude are defined with respect to an origin located at the intersection of the equator
and the prime meridian. Lat/Lon coordinates are reported in degrees, minutes, and
seconds. Map projections are various arrangements of the earth’s latitude and
longitude lines onto a plane.

Planar
Planar, or Cartesian, coordinates are defined by a column and row position on a planar
grid (X,Y). The origin of a planar coordinate system is typically located south and west
of the origin of the projection. Coordinates increase from 0,0 going east and north. The
origin of the projection, being a “false” origin, is defined by values of false easting and
false northing. Grid references always contain an even number of digits, and the first
half refers to the easting and the second half the northing.

In practice, this eliminates negative coordinate values and allows locations on a map
projection to be defined by positive coordinate pairs. Values of false easting are read
first and may be in meters or feet.
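A familiar example (cited here for illustration, not specific to any one projection in ERDAS IMAGINE) is the UTM system, which assigns the central meridian of each zone a false easting of 500,000 meters; a point 10,000 meters west of the central meridian therefore has an easting of 500,000 - 10,000 = 490,000 meters rather than a negative value.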


Available Map Projections
In ERDAS IMAGINE, map projection information appears in the Projection Chooser, which is used to georeference images and to convert map coordinates from one type of projection to another. The Projection Chooser provides the following projections:

USGS Projections
Albers Conical Equal Area
Azimuthal Equidistant
Equidistant Conic
Equirectangular
General Vertical Near-Side Perspective
Geographic (Lon/Lat)
Gnomonic
Lambert Azimuthal Equal Area
Lambert Conformal Conic
Mercator
Miller Cylindrical
Modified Transverse Mercator
Oblique Mercator (Hotine)
Orthographic
Polar Stereographic
Polyconic
Sinusoidal
Space Oblique Mercator
State Plane
Stereographic
Transverse Mercator
UTM
Van Der Grinten I

External Projections
Bipolar Oblique Conic Conformal
Cassini-Soldner
Laborde Oblique Mercator
Modified Polyconic
Modified Stereographic
Mollweide Equal Area
Plate Carrée
Rectified Skew Orthomorphic
Robinson Pseudocylindrical
Southern Orientated Gauss Conformal
Winkel’s Tripel



Choice of the projection to be used will depend upon the desired major property and
the region to be mapped (Table 28). After choosing the desired map projection, several parameters are required for its definition (Table 29). These parameters fall into three
general classes:

• definition of the spheroid

• definition of the surface viewing window

• definition of scale

For each map projection, a menu of spheroids displays, along with appropriate
prompts that enable the user to specify these parameters.

Units
Use the units of measure that are appropriate for the map projection type.

• Lat/Lon coordinates are expressed in decimal degrees. When prompted, the user
can use the DD function to convert coordinates in degrees, minutes, seconds format
to decimal. For example, for 30˚51’12’’:

dd(30,51,12) = 30.85333
-dd(30,51,12) = -30.85333

or

30:51:12 = 30.85333

The user can also enter Lat/Lon coordinates in radians.

• State Plane coordinates are expressed in feet.

• All other coordinates are expressed in meters.

Note also that values for longitude west of Greenwich, England, and values for latitude south of
the equator are to be entered as negatives.


Table 28: Map Projections

 #   Map projection                           Construction      Property                     Use
 0   Geographic                               N/A               N/A                          Data entry, spherical coordinates
 1   Universal Transverse Mercator (see #9)   Cylinder          Conformal                    Data entry, plane coordinates
 2   State Plane (see #4, 7, 9, 20)                             Conformal                    Data entry, plane coordinates
 3   Albers Conical Equal Area                Cone              Equivalent                   Middle latitudes, E-W expanses
 4   Lambert Conformal Conic                  Cone              Conformal, True Direction    Middle latitudes, E-W expanses, flight (straight great circles)
 5   Mercator                                 Cylinder          Conformal, True Direction    Non-polar regions, navigation (straight rhumb lines)
 6   Polar Stereographic                      Plane             Conformal                    Polar regions
 7   Polyconic                                Cone              Compromise                   N-S expanses
 8   Equidistant Conic                        Cone              Equidistant                  Middle latitudes, E-W expanses
 9   Transverse Mercator                      Cylinder          Conformal                    N-S expanses
 10  Stereographic                            Plane             Conformal                    Hemispheres, continents
 11  Lambert Azimuthal Equal Area             Plane             Equivalent                   Square or round expanses
 12  Azimuthal Equidistant                    Plane             Equidistant                  Polar regions, radio/seismic work (straight great circles)
 13  Gnomonic                                 Plane             Compromise                   Navigation, seismic work (straight great circles)
 14  Orthographic                             Plane             Compromise                   Globes, pictorial
 15  General Vertical Near-Side Perspective   Plane             Compromise                   Hemispheres or less
 16  Sinusoidal                               Pseudo-Cylinder   Equivalent                   N-S expanses or equatorial regions
 17  Equirectangular                          Cylinder          Compromise                   City maps, computer plotting (simplistic)
 18  Miller Cylindrical                       Cylinder          Compromise                   World maps
 19  Van der Grinten I                        N/A               Compromise                   World maps
 20  Oblique Mercator                         Cylinder          Conformal                    Oblique expanses (e.g., Hawaiian islands), satellite tracking
 21  Space Oblique Mercator                   Cylinder          Conformal                    Mapping of Landsat imagery
 22  Modified Transverse Mercator             Cylinder          Conformal                    Alaska



Table 29: Projection Parameters

                    Projection type (#) a
Parameter           3  4  5  6  7  8b  9  10  11  12  13  14  15  16  17  18  19  20  21  22
Definition of
Spheroid
Spheroid X X X X X X X X X X X X X X X X X X X
selections
Definition of
Surface Viewing
Window
False easting X X X X X X X X X X X X X X X X X X X
X
False northing X X X X X X X X X X X X X X X X X X X X

Longitude of X X X X X X X X
central meridian X X
Latitude of origin X X X X X X
of projection
Longitude of cen- X X X X X X
ter of projection
Latitude of center X X X X X X
of projection
Latitude of first X X X
standard parallel
Latitude of second X X X
standard parallel
Latitude of true X X
scale
Longitude below X
pole
Definition of
Scale
Scale factor at X
central meridian
Height of perspec- X
tive point above
sphere
Scale factor at X
center of projection

a. Numbers are used for reference only and correspond to the numbers used in Table 28. Parameters for definition of map projection types 0-2 are not applicable and are described in the text.

b. Additional parameters required for definition of the map projection are described in the text of Appendix C.

Choosing a Map Projection

Map Projection Uses in a GIS

Selecting a map projection for the GIS data base will enable the user to (Maling 1992):

• decide how to best display the area of interest or illustrate the results of analysis

• register all imagery to a single coordinate system for easier comparisons

• test the accuracy of the information and perform measurements on the data

Deciding Factors
Depending on the user’s applications and the uses for the maps created, one or several
map projections may be used. Many factors must be weighed when selecting a
projection, including:

• type of map

• special properties that must be preserved

• types of data to be mapped

• map accuracy

• scale

If the user is mapping a relatively small area, virtually any map projection will do. It is
in mapping large areas (entire countries, continents, and the world) that the choice of
map projection becomes more critical. In small areas, the amount of distortion in a
particular projection is barely, if at all, noticeable. In large areas, there may be little or
no distortion in the center of the map, but distortion will increase outward toward the
edges of the map.

Guidelines
Since the sixteenth century, there have been three fundamental rules regarding map
projection use (Maling 1992):

• if the country to be mapped lies in the tropics, use a cylindrical projection

• if the country to be mapped lies in the temperate latitudes, use a conical projection

• if the map is required to show one of the polar regions, use an azimuthal projection



These rules are no longer held so strongly. There are too many factors to consider in
map projection selection for broad generalizations to be effective today. The purpose of
a particular map and the merits of the individual projections must be examined before
an educated choice can be made. However, there are some guidelines that may help a
user select a projection (Pearson 1990):

• Statistical data should be displayed using an equal area projection to maintain proper proportions (although shape may be sacrificed)

• Equal area projections are well suited to thematic data

• Where shape is important, use a conformal projection

Spheroids

The previous discussion of direct geometric map projections assumes that the earth is a sphere, and for many maps this is satisfactory. However, due to rotation of the earth around its axis, the planet bulges slightly at the equator. This flattening of the sphere makes it an oblate spheroid, which is an ellipse rotated around its shorter axis.

Figure 176: Ellipse (major and minor axes; semi-major and semi-minor axes)

An ellipse is defined by its semi-major (long) and semi-minor (short) axes.

The amount of flattening of the earth is expressed as the ratio:

f = (a – b) / a

where:

a = the equatorial radius (semi-major axis)

b = the polar radius (semi-minor axis)

Most map projections use eccentricity (e²) rather than flattening. The relationship is:

e² = 2f – f²

The flattening of the earth is about 1 part in 300 and becomes significant in map
accuracy at a scale of 1:100,000 or larger.
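A small sketch (Python, using the Clarke 1866 axis values from Table 30 purely as an example) shows how flattening and eccentricity follow from the two axes:

    # Flattening and eccentricity squared from the spheroid axes (Clarke 1866).
    a = 6378206.4            # semi-major (equatorial) axis, in meters
    b = 6356583.8            # semi-minor (polar) axis, in meters

    f = (a - b) / a          # flattening, roughly 1/295
    e2 = 2 * f - f ** 2      # eccentricity squared, roughly 0.006769

    print(f, e2)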

Calculation of a map projection requires definition of the spheroid (or ellipsoid) in terms of axes lengths and eccentricity squared (or radius of the reference sphere).
Several principal spheroids are in use by one or more countries. Differences are due
primarily to calculation of the spheroid for a particular region of the earth’s surface.
Only recently have satellite tracking data provided spheroid determinations for the
entire earth. However, these spheroids may not give the “best fit” for a particular
region. In North America, the spheroid in use is the Clarke 1866 for NAD27 and GRS
1980 for NAD83 (State Plane).



If other regions are to be mapped, different projections should be used. Upon choosing
a desired projection type, the user has the option to choose from the following list of
spheroids:

Clarke 1866
Clarke 1880
Bessel
New International 1967
International 1909
WGS 72
Everest
WGS 66
GRS 1980
Airy
Modified Everest
Modified Airy
Walbeck
Southeast Asia
Australian National
Krasovsky
Hough
Mercury 1960
Modified Mercury 1968
Sphere of Radius 6370997m
WGS 84
Helmert
Sphere of Nominal Radius of Earth

The spheroids listed above are the most commonly used. There are many other spheroids
available, and they are listed in the Projection Chooser. These additional spheroids are not
documented in this manual. You can use the ERDAS IMAGINE Developers’ Toolkit to add
your own map projections and spheroids to IMAGINE.


The semi-major and semi-minor axes of all supported spheroids are listed in Table 30,
as well as the principal uses of these spheroids.

Table 30: Spheroids

Spheroid                            Semi-Major Axis     Semi-Minor Axis         Use
Clarke 1866 6378206.4 6356583.8 North America and the
Philippines
Clarke 1880 6378249.145 6356514.86955 France and Africa
Bessel (1841) 6377397.155 6356078.96284 Central Europe, Chile, and
Indonesia
New International 1967 6378157.5 6356772.2 As International 1909 below,
more recent calculation
International 1909 6378388.0 6356911.94613 Remaining parts of the world not
(= Hayford) listed here
WGS 72 (World 6378135.0 6356750.519915 NASA (satellite)
Geodetic System 1972)
Everest (1830) 6377276.3452 6356075.4133 India, Burma, and Pakistan
WGS 66 (World 6378145.0 6356759.769356 As WGS 72 above, older version
Geodetic System 1966)
GRS 1980 (Geodetic 6378137.0 6356752.31414 Expected to be adopted in North
Reference System) America for 1983 earth-centered
coordinate system (satellite)
Airy (1940) 6377563.0 6356256.91 England
Modified Everest 6377304.063 6356103.039 As Everest above, more recent
version
Modified Airy 6377341.89 6356036.143 As Airy above, more recent
version
Walbeck (1819) 6376896.0 6355834.8467 Soviet Union, up to 1910
Southeast Asia 6378155.0 6356773.3205 As named
Australian National (1965) 6378160.0 6356774.719 Australia
Krasovsky (1940) 6378245.0 6356863.0188 Former Soviet Union and some
East European countries
Hough 6378270.0 6356794.343479 As International 1909 above, with
modification of ellipse axes
Mercury 1960 6378166.0 6356794.283666 Early satellite, rarely used
Modified Mercury 1968 6378150.0 6356768.337303 As Mercury 1960 above, more
recent calculation
Sphere of Radius 6370997 6370997.0 6370997.0 A perfect sphere with the same
m surface area as the Clarke 1866
spheroid

WGS 84 6378137.0 6356752.31424517929 As WGS 72, more recent
calculation
Helmert 6378200.0 6356818.16962789092 Egypt
Sphere of Nominal Radius 6370997.0 6370997.0 A perfect sphere
of Earth

Map Composition

Learning Map Composition


Cartography and map composition may seem like an entirely new discipline to many
GIS and image processing analysts—and that is partly true. But, by learning the basics
of map design, the results of a user’s analyses can be communicated much more effec-
tively. Map composition is also much easier than in the past, when maps were hand
drawn. Many GIS analysts may already know more about cartography than they
realize, simply because they have access to map making software. Perhaps the first
maps you made were imitations of existing maps, but that is how we learn. This chapter
is certainly not a textbook on cartography; it is merely an overview of some of the issues
involved in creating cartographically-correct products.

Plan the Map


After the user’s analysis is complete, he or she can begin map composition. The first step
in creating a map is to plan its contents and layout. The following questions will aid in
the planning process:

• How will this map be used?

• Will the map have a single theme or many?

• Is this a single map, or is it part of a series of similar maps?

• Who is the intended audience? What is the level of their knowledge about the
subject matter?

• Will it remain in digital form and be viewed on the computer screen or will it be
printed?

• If it is going to be printed, how big will it be? Will it be printed in color or black and
white?

• Are there map guidelines already set up by your organization?


The answers to these questions will help to determine the type of information that must
go into the composition and the layout of that information. For example, suppose you
are going to do a series of maps about global deforestation for presentation to Congress,
and you are going to print these maps in color on an electrostatic printer. This scenario
might lead to the following conclusions:

• A format (layout) should be developed for the series, so that all the maps produced
have the same style.

• The colors used should be chosen carefully, since the maps will be printed in color.

• Political boundaries might need to be included, since they will influence the types
of actions that can be taken in each deforested area.

• The typeface size and style to be used for titles, captions, and labels will have to be
larger than for maps printed on 8.5” x 11.0” sheets. The type styles selected should
be the same for all maps.

• Select symbols that are widely recognized, and make sure they are all explained in
a legend.

• Cultural features (roads, urban centers, etc.) may be added for locational reference.

• Include a statement about the accuracy of each map, since these maps may be used
in very high-level decisions.

Once this information is in hand, the user can actually begin sketching the look of the
map on a sheet of paper. It is helpful for the user to know how they want the map to
look before starting the ERDAS IMAGINE Map Composer. Doing so will ensure that all
of the necessary data layers are available, and will make the composition phase go
quickly.

See the Map Composer section of the ERDAS IMAGINE Tour Guides manual for step-by-step
instructions on creating a map. Refer to the On-Line Help for details about how Map Composer
works.



Map Accuracy

Maps are often used to influence legislation, promote a cause, or enlighten a particular
group before decisions are made. In these cases, especially, map accuracy is of the
utmost importance. There are many factors that influence map accuracy: the projection
used, scale, base data, generalization, etc. The analyst/cartographer must be aware of
these factors before map production begins, because the accuracy of the map will, in a
large part, determine its usefulness. It is usually up to individual organizations to
perform accuracy assessment and decide how those findings are reflected in the
products they produce. However, several agencies have established guidelines for map
makers.

US National Map Accuracy Standard


The United States Bureau of the Budget has developed the US National Map Accuracy
Standard in an effort to standardize accuracy reporting on maps. These guidelines are
summarized below (Fisher 1991):

• On scales smaller than 1:20,000, not more than 10 percent of points tested should be
more than 1/50 inch in horizontal error, where points refer only to points that can
be well-defined on the ground.

• On maps with scales larger than 1:20,000, the corresponding error term is 1/30 inch.

• At no more than 10 percent of the elevations tested will contours be in error by more
than one half of the contour interval.

• Accuracy should be tested by comparison of actual map data with survey data of
higher accuracy (not necessarily with ground truth).

• If maps have been tested and do meet these standards, a statement should be made
to that effect in the legend.

• Maps that have been tested but fail to meet the requirements should omit all
mention of the standards on the legend.

USGS Land Use and Land Cover Map Guidelines


The United States Geological Survey (USGS) has set standards of their own for land use
and land cover maps (Fisher 1991):

• The minimum level of accuracy in identifying land use and land cover categories is
85%.

• The several categories shown should have about the same accuracy.

• Accuracy should be maintained between interpreters and times of sensing.


USDA SCS Soils Maps Guidelines


The United States Department of Agriculture (USDA) has set standards for Soil Conser-
vation Service (SCS) soils maps (Fisher 1991):

• Up to 25% of the pedons may be of other soil types than those named if they do not
present a major hindrance to land management.

• Up to only 10% of pedons may be of other soil types than those named if they do
present a major hindrance to land management.

• No single included soil type may occupy more than 10% of the area of the map unit.

Digitized Hardcopy Maps


Another method of expanding the data base is by digitizing existing hardcopy maps.
Although this may seem like an easy way to gather more information, care must be
taken in pursuing this avenue if it is necessary to maintain a particular level of accuracy.
If the hardcopy maps that are digitized are outdated, or were not produced using the
same accuracy standards that are currently in use, the digitized map may negatively
influence the overall accuracy of the data base.


CHAPTER 12
Hardcopy Output

Introduction

Hardcopy output refers to any output of image data to paper. These topics are covered
in this chapter:

• printing maps

• the mechanics of printing

Printing Maps

ERDAS IMAGINE enables the user to create and output a variety of types of hardcopy
maps, with several referencing features.

Scaled Maps
A scaled map is a georeferenced map that has been projected to a map projection, and
is accurately laid-out and referenced to represent distances and locations. A scaled map
usually has a legend, that includes a scale, such as “1 inch = 1000 feet”. The scale is often
expressed as a ratio, like 1:12,000, where 1 inch on the map represents 12,000 inches on
the ground.

See "CHAPTER 8: Rectification" for information on rectifying and georeferencing images and
"CHAPTER 11: Cartography" for information on creating maps.

Printing Large Maps


Some scaled maps will not fit on the paper that is used by the printer. These methods
are used to print and store large maps:

• A book map is laid out like the pages of a book. Each page fits on the paper used
by the printer. There is a border, but no tick marks on every page.

• A paneled map is designed to be spliced together into a large paper map. Therefore, borders and tick marks appear on the outer edges of the large map.



Figure 177: Layout for a Book Map and a Paneled Map (both show a neatline around the map composition; tick marks appear on the outer edges of the paneled map)

Scale and Resolution

The following scales and resolutions will be noticeable during the process of creating a map composition and sending the composition to a hardcopy device:

• spatial resolution of the image

• display scale of the map composition

• map scale of the image(s)

• map composition to paper scale

• device resolution

Spatial Resolution
Spatial resolution is the area on the ground represented by each raw image data pixel.

Display Scale
Display scale is the distance on the screen as related to one unit on paper. For example,
if the map composition is 24 inches by 36 inches, it would not be possible to view the
entire composition on the screen. Therefore, the scale could be set to 1:0.25 so that the
entire map composition would be in view.

Map Scale
The map scale is the distance on a map as related to the true distance on the ground; or
the area that one pixel represents, measured in map units. The map scale is defined
when the user creates an image area in the map composition. One map composition can
have multiple image areas set at different scales. These areas may need to be shown at
different scales for different applications.


Map Composition to Paper Scale


This scale is the original size of the map composition as related to the desired output
size on paper.

Device Resolution
The number of dots that are printed per unit—for example, 300 dots per inch (DPI).

Use the ERDAS IMAGINE Map Composer to define the above scales and resolutions.

Map Scaling Examples

The ERDAS IMAGINE Map Composer enables the user to define a map size, as well as
the size and scale for the image area within the map composition. The examples in this
section focus on the relationship between these factors and the output file created by
Map Composer for the specific hardcopy device or file format. Figure 178 is the map
composition that will be used in the examples. This composition was originally created
using IMAGINE Map Composer at a size of 22” × 34” and the hardcopy output must
be in two different formats.

• It must be output to a PostScript printer on an 8.5” × 11” piece of paper.

• A TIFF file must be created and sent to a film recorder having a 1,000 DPI
resolution.

Figure 178: Sample Map Composition



Output to PostScript Printer
Since the map was created at 22” × 34”, the map composition to paper scale will need
to be calculated so that the composition will fit on an 8.5” × 11” piece of paper. If this
scale is set for a 1 to 1 ratio, then the composition will be paneled.

To determine the map composition to paper scale factor, it is necessary to calculate the
most limiting direction. Since the printable area for the printer is approximately
8.1” × 8.6”, these numbers will be used in the calculation.

• 8.1” / 22” = 0.36 (horizontal direction)

• 8.6” / 34” = 0.25 (vertical direction)

The vertical direction is the most limiting, therefore the map composition to paper scale would be set for 0.25.
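The same arithmetic can be written as a short sketch (Python; the sizes are the example values above, in inches):

    # Map composition to paper scale: take the more limiting of the two ratios.
    comp_width, comp_height = 22.0, 34.0
    printable_width, printable_height = 8.1, 8.6

    scale = min(printable_width / comp_width, printable_height / comp_height)
    print(round(scale, 2))   # 0.25 -- the vertical direction limits the scale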

If the specified size of the map (width and height) is greater than the printable area for the printer,
the output hardcopy map will be paneled. See the hardware manual of the hardcopy device for
information about the printable area of the device.

Use the Print Map Composition dialog to output a map composition to a PostScript printer.

Output to TIFF
The limiting factor in this example is not page size, but disk space (600 MB total). A
three-band .img file must be created in order to convert the map composition to a .tif
file. Due to the three bands and the high resolution, the .img file could be very large.
The .tif file will be output to a film recorder with a 1,000 DPI device resolution.

To determine the number of megabytes for the map composition, the X and Y dimen-
sions need to be calculated:

• X = 22 inches * 1,000 dots/inch = 22,000

• Y = 34 * 1,000 = 34,000

• 22,000 * 34,000 * 3 = 2244 MB (multiplied by 3 since there are 3 bands)

Although this appears to be an unmanageable file size, it is possible to reduce the file
size with little image degradation. The .img file created from the map composition must
be less than half to accommodate the .tif file, since the total disk space is only 600
megabytes. Dividing the map composition by three in both X and Y directions (2,244
MB / 3 /3) results in approximately a 250 megabyte file. This file size is small enough
to process and leaves enough room for the .img to .tif conversion. This division is
accomplished by specifying a 1/3 or 0.333 map composition to paper scale when
outputting the map composition to an .img file.
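A sketch of the size estimate (Python, assuming one byte per band per pixel and 1 MB taken as 1,000,000 bytes):

    # Output size at 1,000 DPI for a 22 x 34 inch, three-band composition.
    width_in, height_in, dpi, bands = 22, 34, 1000, 3

    size_mb = (width_in * dpi) * (height_in * dpi) * bands / 1e6
    print(size_mb)              # 2244.0 MB

    # Scaling the composition by 1/3 in both X and Y reduces the size ninefold.
    print(round(size_mb / 9))   # about 249 MB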

Once the .img file is created and exported to TIFF format, it can be sent to a film recorder
that accepts .tif files. Remember, the file must be enlarged three times to compensate for
the reduction during the .img file creation.


See the hardware manual of the hardcopy device for information about the DPI device resolution.

Use the ERDAS IMAGINE Print Map Composition dialog to output a map composition to an
.img file.

Mechanics of Printing

This section describes the mechanics of transferring an image or map composition from a data file to a hardcopy map.

Halftone Printing

Halftoning is the process of converting a continuous tone image into a pattern of dots.
A newspaper photograph is a common example of halftoning.

To make a color illustration, halftones in the primary colors (cyan, magenta, and
yellow), plus black, are overlaid. The halftone dots of different colors, in close
proximity, create the effect of blended colors in much the same way that phospho-
rescent dots on a color computer monitor combine red, green, and blue to create other
colors. By using different patterns of dots, colors can have different intensities. The dots
for halftoning are a fixed density—either a dot is there or it is not there.

For scaled maps, each output pixel may contain one or more dot patterns. If a very large
image file is being printed onto a small piece of paper, data file pixels will be skipped
to accommodate the reduction.

Hardcopy Devices
The following hardcopy devices use halftoning to output an image or map composition:

• CalComp Electrostatic Plotters

• Canon PostScript Intelligent Processing Unit

• Linotronic Imagesetter

• Tektronix Inkjet Printer

• Tektronix Phaser Printer

• Versatec Electrostatic Plotter

See the user’s manual for the hardcopy device for more information about halftone printing.



Continuous Tone Printing

Continuous tone printing enables the user to output color imagery using the four process colors (cyan, magenta, yellow, and black). By using varying percentages of
these colors, it is possible to create a wide range of colors. The printer converts digital
data from the host computer into a continuous tone image. The quality of the output
picture is similar to a photograph. The output is smoother than halftoning because the
dots for continuous tone printing can vary in density.

Example
There are different processes by which continuous tone printers generate a map. One
example is a process called thermal dye transfer. The entire image or map composition
is loaded into the printer’s memory. While the paper moves through the printer, heat is
used to transfer the dye from a ribbon, which has the dyes for all of the four process
colors, to the paper. The density of the dot depends on the amount of heat applied by
the printer to transfer the dye. The amount of heat applied is determined by the
brightness values of the input image. This allows the printer to control the amount of
dye that is transferred to the paper to create a continuous tone image.

Hardcopy Devices
The following hardcopy devices use continuous toning to output an image or map
composition:

• IRIS Color Inkjet Printer

• Kodak XL7700 Continuous Tone Printer

• Tektronix Phaser II SD

NOTE: The above printers do not necessarily use the thermal dye transfer process to generate a
map.

See the user’s manual for the hardcopy device for more information about continuous tone
printing.

Contrast and Color Tables

ERDAS IMAGINE contrast and color tables are used for some printing processes, just as they are used in displaying an image. For continuous raster layers, they are loaded
from the IMAGINE contrast table. For thematic layers, they are loaded from the color
table. The translation of data file values to brightness values is performed entirely by
the software program.


RGB to CMY Conversion

Colors
Since a printer uses ink instead of light to create a visual image, the primary colors of
pigment (cyan, magenta, and yellow) are used in printing, instead of the primary colors
of light (red, green, and blue). Cyan, magenta, and yellow can be combined to make
black through a subtractive process, whereas the primary colors of light are additive—
red, green, and blue combine to make white (Gonzalez and Wintz 1977).

The data file values that are sent to the printer and the contrast and color tables that
accompany the data file are all in the RGB color scheme. The RGB brightness values in
the contrast and color tables must be converted to CMY values.

The RGB primary colors are the opposites of the CMY colors—meaning, for example,
that the presence of cyan in a color means an equal lack of red. To convert the values,
each RGB brightness value is subtracted from the maximum brightness value to
produce the brightness value for the opposite color.

C = MAX - R
M = MAX - G
Y = MAX - B

where:

MAX = the maximum brightness value


R = red value from lookup table
G = green value from lookup table
B = blue value from lookup table
C = calculated cyan value
M = calculated magenta value
Y = calculated yellow value
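A minimal sketch of this conversion (Python, assuming 8-bit brightness values so that MAX = 255):

    MAX = 255

    def rgb_to_cmy(r, g, b):
        # Each RGB brightness value is subtracted from MAX to get the opposite color.
        return MAX - r, MAX - g, MAX - b

    print(rgb_to_cmy(255, 0, 0))   # pure red -> (0, 255, 255): no cyan, full magenta and yellow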

Black Ink
Although, theoretically, cyan, magenta, and yellow combine to create black ink, the
color that results is often a dark, muddy brown. Many printers also use black ink for a
truer black.

NOTE: Black ink is not available on all printers. Consult the user’s manual for your printer.

Images often appear darker when printed than they do when displayed on the display
device. Therefore, it may be beneficial to improve the contrast and brightness of an
image before it is printed.

Use the programs discussed in "CHAPTER 5: Enhancement" to brighten or enhance an image


before it is printed.


APPENDIX A
Math Topics

Introduction

This appendix is a cursory overview of some of the basic mathematical concepts that are
applicable to image processing. Its purpose is to educate the novice reader, and to put
these formulas and concepts into the context of image processing and remote sensing
applications.

Summation

A commonly used notation throughout this and other discussions is the Sigma (Σ), used
to denote a summation of values.

For example, the notation

Σ i, for i = 1 to 10

is the sum of all values of i, ranging from 1 to 10, which equals:

1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55.



Similarly, the value i may be a subscript, which denotes an ordered set of values. For
example,

Σ Qi = 3 + 5 + 7 + 2 = 17, for i = 1 to 4

where:

Q1 = 3

Q2 = 5

Q3 = 7

Q4 = 2

Statistics
Histogram

In ERDAS IMAGINE image data files (.img), each data file value (defined by its row,
column, and band) is a variable. IMAGINE supports the following data types:

• 1, 2, and 4-bit

• 8, 16, and 32-bit signed

• 8, 16, and 32-bit unsigned

• 32 and 64-bit floating point

• 64 and 128-bit complex floating point

Distribution, as used in statistics, is the set of frequencies with which an event occurs, or with which a variable takes on a particular value.

A histogram is a graph of data frequency or distribution. For a single band of data, the
horizontal axis of a histogram is the range of all possible data file values. The vertical
axis is the number of pixels that have each data value.


Figure 179: Histogram (number of pixels plotted against data file values; the marked point shows 300 pixels at data file value 100)

Figure 179 shows the histogram for a band of data in which Y pixels have data value X.
For example, in this graph, 300 pixels (y) have the data file value of 100 (x).
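A sketch of the same idea (Python with numpy, which is an assumption of this example; "band" stands in for a 2D array of 8-bit data file values):

    import numpy as np

    band = np.random.randint(0, 256, size=(512, 512))          # hypothetical 8-bit band
    counts, _ = np.histogram(band, bins=256, range=(0, 256))   # one bin per data file value

    print(counts[100])   # the number of pixels whose data file value is 100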

Bin Functions

Bins are used to group ranges of data values together for better manageability. Histo-
grams and other descriptor columns for 1, 2, 4, and 8-bit data are easy to handle, since
they contain a maximum of 256 rows. However, to have a row in a descriptor table for
every possible data value in floating point, complex, and 32-bit integer data would yield
an enormous amount of information. Therefore, the bin function is provided to serve as
a data reduction tool.

Example of a Bin Function


Suppose a user has a floating point data layer with values ranging from 0.0 to 1.0. The
user could set up a descriptor table of 100 rows, with each row or bin corresponding to
a data range of .01 in the layer.

The bins would look like the following:

Bin Number Data Range


0 X < 0.01
1 0.01 ≤ X < 0.02
2 0.02 ≤ X < 0.03
.
.
.
98 0.98 ≤ X < 0.99
99 0.99 ≤ X

Then, for example, row 23 of the histogram table would contain the number of pixels in the layer whose value fell between 0.23 and 0.24.



Types of Bin Functions
The bin function establishes the relationship between data values and rows in the
descriptor table. There are four types of bin functions used in ERDAS IMAGINE image
layers:

• DIRECT — one bin per integer value. Used by default for 1, 2, 4, and 8-bit integer
data, but may be used for other data types as well. The direct bin function may
include an offset for negative data or data in which the minimum value is greater
than zero.

For example, a direct bin with 900 bins and an offset of -601 would look like the fol-
lowing:

Bin Number Data Range

0 X ≤ -600.5
1 -600.5 < X ≤ -599.5
.
.
.
599 -2.5 < X ≤ -1.5
600 -1.5 < X ≤ -0.5
601 -0.5 < X < 0.5
602 0.5 ≤ X < 1.5
603 1.5 ≤ X < 2.5
.
.
.
898 296.5 ≤ X < 297.5
899 297.5 ≤ X


• LINEAR — establishes a linear mapping between data values and bin numbers, as
in our first example, mapping the data range 0.0 to 1.0 to bin numbers 0 to 99.

The bin number is computed by:

bin = numbins * (x - min) / (max - min)

if (bin < 0) bin = 0

if (bin >= numbins) bin = numbins - 1

where:

bin = resulting bin number

numbins = number of bins

x = data value

min = lower limit (usually minimum data value)

max = upper limit (usually maximum data value)

• LOG — establishes a logarithmic mapping between data values and bin numbers.

The bin number is computed by:

bin = numbins * (ln (1.0 + ((x - min)/(max - min)))/ ln (2.0))

if (bin < 0) bin = 0

if (bin >= numbins) bin = numbins - 1

• EXPLICIT — explicitly defines mapping between each bin number and data range.
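The LINEAR and LOG formulas above can be sketched directly (Python; truncating to an integer bin number is an assumption, and the clamping mirrors the two if statements):

    import math

    def linear_bin(x, numbins, lo, hi):
        b = int(numbins * (x - lo) / (hi - lo))
        return min(max(b, 0), numbins - 1)

    def log_bin(x, numbins, lo, hi):
        b = int(numbins * math.log(1.0 + (x - lo) / (hi - lo)) / math.log(2.0))
        return min(max(b, 0), numbins - 1)

    print(linear_bin(0.235, 100, 0.0, 1.0))   # 23, as in the 100-bin example above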



Mean

The mean (µ) of a set of values is its statistical average, such that, if Qi represents a set of k values:

µ = (Q1 + Q2 + Q3 + ... + Qk) / k

or

µ = Σ (Qi / k), summed over i = 1 to k

The mean of data with a normal distribution is the value at the peak of the curve—the
point where the distribution balances.

Normal Distribution

Our general ideas about an average, whether it be average age, average test score, or the average amount of spectral reflectance from oak trees in the spring, are made visible in the graph of a normal distribution, or bell curve.

Figure 180: Normal Distribution (number of pixels plotted against data file values)

Average usually refers to a central value on a bell curve, although all distributions have
averages. In a normal distribution, most values are at or near the middle, as shown by
the peak of the bell curve. Values that are more extreme are more rare, as shown by the
tails at the ends of the curve.

The Normal Distributions are a family of bell shaped distributions that turn up
frequently under certain special circumstances. For example, a normal distribution
would occur if one were to compare the bands in a desert image. The bands would be
very similar, but would vary slightly.


Each Normal Distribution uses just two parameters, σ and µ, to control the shape and
location of the resulting probability graph through the equation:

f(x) = e^( –(x – µ)² / (2σ²) ) / ( σ √(2π) )

where

x = the quantity whose distribution is being approximated


π and e = famous mathematical constants

The parameter, µ, controls how much the bell is shifted horizontally so that its average
will match the average of the distribution of x, while σ adjusts the width of the bell to
try to encompass the spread of the given distribution. In choosing to approximate a
distribution by the nearest of the Normal Distributions, we describe the many values in
the bin function of its distribution with just two parameters. It is a significant simplifi-
cation that can greatly ease the computational burden of many operations, but like all
simplifications, it reduces the accuracy of the conclusions we can draw.

The normal distribution is the most widely encountered model for probability. Many
natural phenomena can be predicted or estimated according to “the law of averages”
that is implied by the bell curve (Larsen and Marx 1981).
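A sketch of the density function above (Python; the µ and σ values are only illustrative):

    import math

    def normal_pdf(x, mu, sigma):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    print(normal_pdf(127, 127, 30))   # the peak of the bell for a band with mu = 127, sigma = 30
    print(normal_pdf(187, 127, 30))   # two standard deviations away, a much smaller density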

A normal distribution in remotely sensed data is meaningful—it is a sign that some characteristic of an object can be measured by the average amount of electromagnetic
radiation that the object reflects. This relationship between the data and a physical
scene or object is what makes image processing applicable to various types of land
analysis.

The mean and standard deviation are often used by computer programs that process
and analyze image data.



Variance

The mean of a set of values locates only the average value—it does not adequately
describe the set of values by itself. It is helpful to know how much the data varies from
its mean. However, a simple average of the differences between each value and the
mean equals zero in every case, by definition of the mean. Therefore, the squares of
these differences are averaged so that a meaningful number results (Larsen and Marx
1981).

In theory, the variance is calculated as follows:

Var(Q) = E[ (Q – µQ)² ]

where:

E = expected value (weighted average)


2 = squared to make the distance a positive number

In practice, the use of this equation for variance does not usually reflect the exact nature
of the values that are used in the equation. These values are usually only samples of a
large data set, and therefore, the mean and variance of the entire data set are estimated,
not known.

The equation used in practice is shown below. This is called the “minimum variance
unbiased estimator” of the variance, or the sample variance (notated σ2).

σQ² ≈ Σ (Qi – µQ)² / (k – 1), summed over i = 1 to k

where:

i = a particular pixel
k = the number of pixels (the higher the number, the better the approximation)

The theory behind this equation is discussed in chapters on “Point Estimates” and
“Sufficient Statistics,” and covered in most statistics texts.

NOTE: The variance is expressed in units squared (e.g., square inches, square data values, etc.),
so it may result in a number that is much higher than any of the original values.


Standard Deviation

Since the variance is expressed in units squared, a more useful value is the square root
of the variance, which is expressed in units and can be related back to the original
values (Larsen and Marx 1981). The square root of the variance is the standard
deviation.

Based on the equation for sample variance (s2), the sample standard deviation (sQ) for
a set of values Q is computed as follows:

sQ = sqrt( Σ (Qi – µQ)² / (k – 1) ), summed over i = 1 to k

In a normal distribution:

• approximately 68% of the values are within one standard deviation of µ: that is,
between µ-s and µ+s

• more than 1/2 of the values are between µ-2s and µ+2s

• more than 3/4 of the values are between µ-3s and µ+3s

An example of a simple application of these rules is seen in the ERDAS IMAGINE Viewer. When 8-bit data are displayed in the Viewer, IMAGINE automatically applies
a 2 standard deviation stretch that remaps all data file values between
µ-2s and µ+2s (more than 1/2 of the data) to the range of possible brightness values on
the display device.

Standard deviations are used because the lowest and highest data file values may be
much farther from the mean than 2s.
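A sketch of that kind of two standard deviation stretch (Python with numpy; this illustrates the idea and is not the Viewer’s actual code):

    import numpy as np

    band = np.random.randint(0, 256, size=(512, 512)).astype(np.float64)

    mu = band.mean()
    s = band.std(ddof=1)                  # sample standard deviation (k - 1 divisor)

    low, high = mu - 2 * s, mu + 2 * s    # data file values outside this range saturate
    stretched = np.clip((band - low) / (high - low) * 255.0, 0, 255).astype(np.uint8)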

For more information on contrast stretch, see "CHAPTER 5: Enhancement."



Parameters

As described above, the standard deviation describes how a fixed percentage of the
data varies from the mean. The mean and standard deviation are known as parameters,
which are sufficient to describe a normal curve (Johnston 1980).

When the mean and standard deviation are known, they can be used to estimate other
calculations about the data. In computer programs, it is much more convenient to
estimate calculations with a mean and standard deviation than it is to repeatedly
sample the actual data.

Algorithms that use parameters are parametric. The closer that the distribution of the
data resembles a normal curve, the more accurate the parametric estimates of the data
will be. ERDAS IMAGINE classification algorithms that use signature files (.sig) are
parametric, since the mean and standard deviation of each sample or cluster are stored
in the file to represent the distribution of the values.

Covariance

In many image processing procedures, the relationships between two bands of data are important. Covariance measures the tendencies of data file values in the same pixel, but in different bands, to vary with each other, in relation to the means of their respective bands. Covariance captures only the linear component of this relationship.

Theoretically speaking, whereas variance is the average square of the differences between values and their mean in one band, covariance is the average product of the
differences of corresponding values in two different bands from their respective means.
Compare the following equation for covariance to the previous one for variance:

Cov(Q,R) = E[ (Q – µQ)(R – µR) ]

where:

Q and R = data file values in two bands

E = expected value

In practice, the sample covariance is computed with this equation:

CQR ≈ Σ (Qi – µQ)(Ri – µR) / (k – 1), summed over i = 1 to k

where:

i = a particular pixel
k = the number of pixels

Like variance, covariance is expressed in units squared.


Covariance Matrix

The covariance matrix is an n × n matrix that contains all of the variances and covari-
ances within n bands of data. Below is an example of a covariance matrix for 4 bands of
data:

band A band B band C band D

band A VarA CovBA CovCA CovDA

band B CovAB VarB CovCB CovDB

band C CovAC CovBC VarC CovDC

band D CovAD CovBD CovCD VarD

The covariance matrix is symmetrical—for example, CovAB = CovBA.

The covariance of one band of data with itself is the variance of that band:

CQQ = Σ (Qi – µQ)(Qi – µQ) / (k – 1) = Σ (Qi – µQ)² / (k – 1), summed over i = 1 to k

Therefore, the diagonal of the covariance matrix consists of the band variances.

The covariance matrix is an organized format for storing variance and covariance infor-
mation on a computer system, so that it needs to be computed only once. Also, the
matrix itself can be used in matrix equations, as in principal components analysis.
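A sketch of building the matrix (Python with numpy; the "image" array is an assumption of the example):

    import numpy as np

    image = np.random.rand(4, 100, 100)      # hypothetical 4-band image, shape (bands, rows, cols)
    pixels = image.reshape(4, -1)            # one row of data file values per band

    cov = np.cov(pixels, ddof=1)             # 4 x 4 matrix; band variances lie on the diagonal
    print(np.allclose(cov, cov.T))           # True -- the covariance matrix is symmetrical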

See "Matrix Algebra" on page 462 for more information on matrices.



Dimensionality of Data

Spectral dimensionality is determined by the number of sets of values being used in a process. In image processing, each band of data is a set of values. An image with four bands of data is said to be 4-dimensional (Jensen 1996).

NOTE: The letter n is used consistently in this documentation to stand for the number of
dimensions (bands) of image data.

Measurement Vector

The measurement vector of a pixel is the set of data file values for one pixel in all n
bands. Although image data files are stored band-by-band, it is often necessary to
extract the measurement vectors for individual pixels.

Figure 181: Measurement Vector (one pixel across n = 3 bands, with values V1, V2, and V3)

According to Figure 181:

i = particular band
Vi = the data file value of the pixel in band i, then the measurement vector
for this pixel is:

V1
V2
V3

See "Matrix Algebra" on page 462 for an explanation of vectors.


Mean Vector

When the measurement vectors of several pixels are analyzed, a mean vector is often
calculated. This is the vector of the means of the data file values in each band. It has n
elements.

Figure 182: Mean Vector (the means µ1, µ2, and µ3 of a training sample’s values in Bands 1, 2, and 3)

According to Figure 182:

i = a particular band
µi = the mean of the data file values of the pixels being studied, in band i,
then the mean vector for this training sample is:

µ1
µ2
µ3
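A sketch of both vectors (Python with numpy; the image array and sample window are assumptions of the example):

    import numpy as np

    image = np.random.randint(0, 256, size=(3, 100, 100))   # hypothetical 3-band image

    measurement = image[:, 40, 25]                       # data file values of one pixel in all 3 bands
    sample = image[:, 10:20, 10:20]                      # a small training sample
    mean_vector = sample.reshape(3, -1).mean(axis=1)     # one mean per band: (mu1, mu2, mu3)

    print(measurement, mean_vector)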



Feature Space

Many algorithms in image processing compare the values of two or more bands of data.
The programs that perform these functions abstractly plot the data file values of the
bands being studied against each other. An example of such a plot in two dimensions
(two bands) is illustrated in Figure 183.

Figure 183: Two Band Plot (Band A data file values plotted against Band B data file values; the plotted pixel lies at (180, 85))

NOTE: If the image is 2-dimensional, the plot doesn’t always have to be 2-dimensional.

In Figure 183, the pixel that is plotted has a measurement vector of:

180
85
The graph above implies physical dimensions for the sake of illustration. Actually,
these dimensions are based on spectral characteristics, represented by the digital image
data. As opposed to physical space, the pixel above is plotted in feature space. Feature
space is an abstract space that is defined by spectral units, such as an amount of electro-
magnetic radiation.


Feature Space Images

Several techniques for the processing of multiband data make use of a two-dimensional
histogram, or feature space image. This is simply a graph of the data file values of one
band of data against the values of another band.

Figure 184: Two Band Scatterplot (Band A data file values plotted against Band B data file values)

The scatterplot pictured in Figure 184 can be described as a simplification of a 2-dimensional histogram, where the data file values of one band have been plotted against the data file values of another band. This figure shows that when the values in
the bands being plotted have jointly normal distributions, the feature space forms an
ellipse.

This ellipse is used in several algorithms—specifically, for evaluating training samples for image classification. Also, two-dimensional feature space images with ellipses are
helpful to illustrate principal components analysis.

See "CHAPTER 5: Enhancement" for more information on principal components analysis,


"CHAPTER 6: Classification" for information on training sample evaluation, and
"CHAPTER 8: Rectification"for more information on orders of transformation.



n-Dimensional Histogram

If 2-dimensional data can be plotted on a 2-dimensional histogram, as above, then n-dimensional data can, abstractly, be plotted on an n-dimensional histogram, defining n-dimensional spectral space.

Each point on an n-dimensional scatterplot has n coordinates in that spectral space — a coordinate for each axis. The n coordinates are the elements of the measurement vector for the corresponding pixel.

In some image enhancement algorithms (most notably, principal components), the points in the scatterplot are replotted, or the spectral space is redefined in such a way that the coordinates are changed, thus transforming the measurement vector of the pixel.

When all data sets (bands) have jointly normal distributions, the scatterplot forms a
hyperellipsoid. The prefix “hyper” refers to an abstract geometrical shape, which is
defined in more than three dimensions.

NOTE: In this documentation, 2-dimensional examples are used to illustrate concepts that apply
to any number of dimensions of data. The 2-dimensional examples are best suited for creating
illustrations to be printed.

Spectral Distance

Euclidean spectral distance is distance in n-dimensional spectral space. It is a number
that allows two measurement vectors to be compared for similarity. The spectral
distance between two pixels can be calculated as follows:

D = sqrt( Σ (di – ei)² ), summed over i = 1 to n
where:

D = spectral distance
n = number of bands (dimensions)
i = a particular band
di = data file value of pixel d in band i
ei = data file value of pixel e in band i

This is the equation for Euclidean distance—in two dimensions (when n = 2), it can be
simplified to the Pythagorean Theorem (c2 = a2 + b2), or in this case:

D2 = (di - ei)2 + (dj - ej)2
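A sketch of the distance calculation (Python; the two measurement vectors are only illustrative):

    import math

    def spectral_distance(d, e):
        # d and e hold one data file value per band for the two pixels being compared.
        return math.sqrt(sum((di - ei) ** 2 for di, ei in zip(d, e)))

    print(spectral_distance([180, 85], [100, 40]))   # two bands: sqrt(80**2 + 45**2), about 91.8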


Polynomials

A polynomial is a mathematical expression consisting of variables and coefficients. A coefficient is a constant, which is multiplied by a variable in the expression.

Order

The variables in polynomial expressions can be raised to exponents. The highest
exponent in a polynomial determines the order of the polynomial.

A polynomial with one variable, x, takes this form:

A + Bx + Cx2 + Dx3 + .... + Ωxt

where:

A, B, C, D ... Ω = coefficients
t = the order of the polynomial

NOTE: If one or all of A, B, C, D ... are 0, then the nature, but not the complexity, of the
transformation is changed. Mathematically, Ω cannot be 0.

A polynomial with two variables, x and y, takes this form:

A + Bx + Cy + Dx2 + Exy + Fy2+ ... + Qxiyj + ... + Wyt

where:

A, B, C, D, E, F ... Q ... W = coefficients


t = the order of the polynomial
i and j = exponents

All combinations of xi times yj are used in the polynomial expression, such that:

i+j≤t

A numerical example of 3rd-order transformation equations for x and y is:

xo = 5 + 4x - 6y + 10x2 - 5xy + 1y2 + 3x3 + 7x2y - 11xy2 + 4y3

yo = 13 + 12x + 4y + 1x2 - 21xy + 1y2 - 1x3 + 2x2y + 5xy2 + 12y3
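These two equations can be evaluated directly; a sketch (Python, using the coefficients above with an arbitrary source coordinate):

    def xo(x, y):
        return 5 + 4*x - 6*y + 10*x**2 - 5*x*y + 1*y**2 + 3*x**3 + 7*x**2*y - 11*x*y**2 + 4*y**3

    def yo(x, y):
        return 13 + 12*x + 4*y + 1*x**2 - 21*x*y + 1*y**2 - 1*x**3 + 2*x**2*y + 5*x*y**2 + 12*y**3

    print(xo(1, 2), yo(1, 2))   # 6 115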

Polynomial equations are used in image rectification to transform the coordinates of an input file to the coordinates of another system. The order of the polynomial used in this process is the order of transformation.

Transformation Matrix

In the case of first order image rectification, the variables in the polynomials (x and y)
are the source coordinates of a ground control point (GCP). The coefficients are
computed from the GCPs and stored as a transformation matrix.

A detailed discussion of GCPs, orders of transformation, and transformation matrices is included in "CHAPTER 8: Rectification."



Matrix Algebra

A matrix is a set of numbers or values arranged in a rectangular array. If a matrix has i
rows and j columns, it is said to be an i by j matrix.

A one-dimensional matrix, having one column (i by 1), is one of many kinds of vectors. For example, the measurement vector of a pixel is an n-element vector of the data file values of the pixel, where n is equal to the number of bands.

See "CHAPTER 5: Enhancement" for information on eigenvectors.

Matrix Notation

Matrices and vectors are usually designated with a single capital letter, such as M. For
example:

2.2 4.6
M = 6.1 8.3
10.0 12.4

One value in the matrix M would be specified by its position, which is its row and
column (in that order) in the matrix. One element of the array (one value) is designated
with a lower case letter and its position:

m3,2 = 12.4

With column vectors, it is simpler to use only one number to designate the position:

2.8
G = 6.5
10.1

G2 = 6.5


Matrix Multiplication

A simple example of the application of matrix multiplication is a 1st-order transfor-
mation matrix. The coefficients are stored in a 2 by 3 matrix:

a1 a2 a3
C =
b1 b2 b3

Then, where:

xo = a1 + a2xi + a3yi

yo = b1 + b2xi + b3yi

xi and yi = source coordinates


xo and yo = rectified coordinates
the coefficients of the transformation matrix are as above

The above could be expressed by a matrix equation:

[ xo ]   [ a1  a2  a3 ]   [ 1  ]
[ yo ] = [ b1  b2  b3 ]   [ xi ]
                          [ yi ]

or R = CS,

where:

S = a matrix of the source coordinates (3 by 1)


C = the transformation matrix (2 by 3)
R = the matrix of rectified coordinates (2 by 1)

The sizes of the matrices are shown above to demonstrate a rule of matrix multipli-
cation. To multiply two matrices, the first matrix must have the same number of
columns as the second matrix has rows. For example, if the first matrix is a by b, and the
second matrix is m by n, then b must equal m, and the product matrix will have the size
a by n.



The formula for multiplying two matrices is:

(fg)ij = Σ fik gkj, summed over k = 1 to m

for every i from 1 to a


for every j from 1 to n

where:

i = a row in the product matrix


j = a column in the product matrix
f = an (a by b) matrix
g = an (m by n) matrix (b must equal m)

fg is an a by n matrix.
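A sketch of the 1st-order case above, R = CS, with hypothetical coefficient values (Python with numpy):

    import numpy as np

    C = np.array([[10.0, 0.5, 0.0],     # a1, a2, a3
                  [20.0, 0.0, 0.5]])    # b1, b2, b3

    xi, yi = 100.0, 200.0
    S = np.array([1.0, xi, yi])         # source coordinates as a 3-element column vector

    xo, yo = C @ S                      # rectified coordinates
    print(xo, yo)                       # 60.0 120.0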

Transposition

The transposition of a matrix is derived by interchanging its rows and columns. Trans-
position is denoted by T, as in the example below (Cullen 1972).

2 3
G = 6 4
10 12

T
G = 2 6 10
3 4 12

For more information on transposition, see "Computing Principal Components" in "CHAPTER 5: Enhancement" and "Classification Decision Rules" in "CHAPTER 6: Classification."

APPENDIX B
File Formats and Extensions

Introduction

This appendix describes all of the file formats and extensions that are used within
ERDAS IMAGINE software. However, this does not include files that are introduced
into IMAGINE by third party products. Please refer to the product‘s documentation for
information on those files.

Topics include:

• IMAGINE file extensions

• .img Files

• Hierarchical File Architecture (HFA) System

• IMAGINE Machine Independent Format (MIF)

• MIF Data Dictionary

ERDAS IMAGINE File Extensions

A file name extension is a suffix, usually preceded by a period, that often identifies the type of data in a file. ERDAS IMAGINE automatically assigns the default extension
when the user is prompted to enter a file name. The part of the file name before the
extension can be used in a manner that is helpful to the user and others. The files used
within the ERDAS IMAGINE system, their extensions, and their formats are conven-
tions of ERDAS, Inc.

All of the types of files used within IMAGINE are listed in Table 31 by their extensions.
Files with an ASCII format are simply text files which can be viewed with the IMAGINE
Text Editor utility. IMAGINE HFA (hierarchical file architecture) files can be viewed with
the IMAGINE HfaView utility. The list in Table 31 does not include files that are used
by third party products. Please refer to the product‘s documentation for information on
those files.



Table 31: ERDAS IMAGINE File Extensions

Extension Format Description


.aoi HFA Area of Interest file— stores a user-defined area of an .img file. It
includes everything that an .img file contains.
.aux HFA Auxiliary information (Projection, Attributes) — used to augment
a directly readable format (e.g., SGI FIT) when the format does
not handle such information.
.atx ASCII Text form of .aux file — used for input purposes only.
.cff HFA Coefficient file — stores transformation matrices created by
rectifying a file.

Note: This format is now obsolete. Use .gms instead.


.chp HFA Image chip — a greatly reduced preview of a raster image.
.clb ASCII Color Library file
.eml ASCII ERDAS Macro Language file — stores scripts which control the
operation of the IMAGINE graphical user interface. New .eml
files can be created with the ERDAS Macro Language and
incorporated into the IMAGINE interface.
.fft HFA Fast Fourier Transform file — stores raster layers in a compressed
format, created by performing a Fast Fourier Transformation on
an .img file.
.flb HFA Fill styles Library file
.fls ASCII File List file — stores the list of files that is used for mosaicking
images.
.fsp.img HFA Feature Space Image file — stores the same information as an .img
file plus the information required to create the feature space
image (e.g., transformation).
.gcc HFA Ground Control Coordinates file — stores ground control points.
.gmd ASCII Graphical Model file — stores scripts that draw the graphical
model (i.e., flow chart), created with the Spatial Modeler Model
Maker.
.gms HFA Geometric Model file — contains the parameters of a transforma-
tion and, optionally, the transformation itself for any geometric
model.
.ifft.img HFA Inverse Fast Fourier Transform file — stores raster layers created
by performing an inverse Fast Fourier Transformation on an .img
file.
.img HFA Image file — stores single or multiple raster layers, contrast and
color tables, descriptor tables, pyramid layers, and file
information. See the .img Files section in this chapter for more
information on .img files.
.klb ASCII Kernel Library file — stores convolution kernels.
.llb HFA Line style Library file



.mag.img HFA Magnitude Image file — an .img file that stores the magnitude of
a Fourier Transform image file.
.map HFA (ver. 8.3) Map file — stores map frames created with Map Composer.
ASCII (pre-8.3)
.map.ovr HFA Map/Overlay file — stores annotation layers created in Map
Composer outside of the map frame (e.g., legends, grids, lines,
scales)
.mdl ASCII Model file — stores Spatial Modeler scripts. It does not store any
graphical model (i.e., flow chart) information. This file is
necessary for running a model. If only a .gmd file exists, then a
temporary .mdl file is created when a model is run.
.olh FrameMaker On-Line Help file — stores the IMAGINE On-Line Help
documentation.
.ovr HFA Overlay file — stores an annotation layer that was created in a
map frame, in a blank Viewer, or on an image in a Viewer.
.pdf ASCII Preference Definition file — stores information that is used by the
Preference Editor.
.plb ASCII Projection Library file
.plt ASCII Plot file — stores the names of the panel files produced by
MapMaker. MapMaker processes the .map file to produce one or
more map panels. Each panel consists of two files. One is the
name file with the extension .plt.panel_xx.name, which names the
various fonts used in the panel along with name of the actual file
that contains the panel output. The other is the panel file itself
with the .plt.panel_xx extension. The .plt file contains the
complete path names (one per line) of the panel name files.
.plt.panel_xx.name ASCII Panel Name file — stores the name of the panel data file and any
fonts used by the panel (the font names are present only for
PostScript output)
.plt.panel_xx ASCII/HFA Panel Data file — stores actual processed data output by
MapMaker. If the destination device was a PostScript device, then
this is an ASCII file that contains PostScript commands. If the
output device was a non-PostScript raster device, then this file is
an HFA file that contains one or three layers of raster imagery. It
can be viewed with the Viewer.
.pmdl ASCII Permanent Model files — stores the permanent version of the
.mdl files that are provided by ERDAS.
.sig HFA Signature file — stores a signature set, which was created by the
Classification Signature Editor or imported from ERDAS Version
7.5.
.sml HFA Symbol Library file — stores annotation symbols for the symbol
library.
.tlb HFA Text style Library file



ERDAS IMAGINE .img Files

ERDAS IMAGINE uses .img files to store raster data. These files use the HFA structure.
Figure 185 shows the different objects of data stored in an .img file. The user's file may
not have all of this information present, because these objects are not definitive; that is,
data may be added or removed when a process is run (e.g., add ground control points).
Also, other sources, such as programs incorporated by third party vendors, may add
objects to the .img file.

The information stored in an .img file can be used to help the user visualize how
different processes change the data. For example, if the user runs a filter over the file
and creates a new file, the statistics for the two files can be compared to see how the
filter changed the data.

Figure 185: Examples of Objects Stored in an .img File. The figure shows an .img file
containing Ground Control, Covariance Matrix, and Sensor Info. objects, plus one Info.
object per layer (Layer_1 through Layer_n); each layer holds Attribute Data, Statistics,
Map Info., Projection Info., Pyramid Layers, and Data File Values objects.

The objects of an .img file are described in more detail on the following pages.

Use the IMAGINE Image Information utility or the HfaView utility to view the information
that is stored in an .img file.

The information in the Image Info and HfaView utilities should be modified with caution because
IMAGINE programs use this information for data input. If it is incorrect, there will be errors in
the output data for these programs.


Sensor Information

When importing satellite imagery, there is usually a header file on the tape or CD-ROM
that is separate from the data. This object contains ephemeris information about the
sensor, such as:

• date and time scene was scanned

• calibration information of the sensor

• orientation of the sensor

• original dimensions for data

• data storage format

• number of bands

The data presented are dependent upon the sensor. Each sensor provides different
types of information. The sensor object is named:

<format type>_Header

Some examples of the various sensor types are listed in the chart below.

Sensor Sensor Object


ADRG ADRG_Header
Landsat TM TM_Header
NOAA AVHRR AVHRR_Header
RADARSAT RADARSAT_Header
SPOT SPOT_Header

Use the HfaView utility to view the header file.



Raster Layer Information

Each raster layer within an .img file has its own ancillary data, including the following
parameters:

• height and width (rows and columns)

• layer type (continuous or thematic)

• data type (signed 8-bit, floating point, etc.)

• compression (see below)

• block size (see below)

This information is usually the same for each layer.

These parameters are defined when the raster layer is created or imported into IMAGINE. Use
the Image Information utility to view the parameters.

Compression
When importing a file into IMAGINE, the user has the option to compress the data.
Currently, IMAGINE uses the run-length compression method. The amount that the
data are compressed depends on the data in the layer. For example, if the layer contains
large, homogeneous areas (e.g., blocks of water), then compressing the layer would save
disk space. However, if the layer is very heterogeneous, run-length compression
would not save much disk space.

Data will be compressed only when it is stored. IMAGINE automatically uncompresses
data before the layer is run through a process. The time that it takes to uncompress the
data is minimal.

Use the Import function to compress data when it is imported into IMAGINE.

Block Size
IMAGINE software uses a tiled format to store raster layers. The tiled format allows
raster layers to be displayed and resampled quickly. The raster layer is divided into tiles
(i.e., blocks) when IMAGINE creates or imports an .img file. The size of this block can
be defined when the user either creates the file or imports it. The default block size is 64
pixels by 64 pixels.

NOTE: The default block size is acceptable for most applications and should not need to be
changed.


Figure 186: Example of a 512 x 512 Layer with a Block Size of 64 x 64 Pixels. The layer
is divided into an 8 x 8 grid of blocks, each 64 pixels wide and 64 pixels high.
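To make the tiling concrete, the following sketch shows how a pixel coordinate maps to a block
(tile) number and to an offset within that block. The pixel_to_block function is hypothetical and
illustrative only; it is not part of the IMAGINE toolkit.

#include <stdio.h>

/* Hypothetical helper: compute which block (tile) a pixel falls in and its
 * offset within that block, for a layer stored in blockWidth x blockHeight
 * tiles. Block numbering is assumed to be row-major across the block grid. */
static void pixel_to_block(long col, long row,
                           long layerWidth, long blockWidth, long blockHeight,
                           long *blockIndex, long *offsetInBlock)
{
    long blocksPerRow = (layerWidth + blockWidth - 1) / blockWidth; /* round up */
    long blockCol = col / blockWidth;
    long blockRow = row / blockHeight;

    *blockIndex = blockRow * blocksPerRow + blockCol;
    *offsetInBlock = (row % blockHeight) * blockWidth + (col % blockWidth);
}

int main(void)
{
    long blockIndex, offset;

    /* A 512 x 512 layer with 64 x 64 blocks is an 8 x 8 grid of 64 blocks. */
    pixel_to_block(200, 300, 512, 64, 64, &blockIndex, &offset);
    printf("pixel (200,300) -> block %ld, offset %ld\n", blockIndex, offset);
    return 0;
}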



Attribute Data

Continuous Raster Layer
The attribute table object for a continuous raster layer, by default, includes the
following information:

• histogram

• contrast table

Thematic Raster Layer


For a thematic raster layer, the attribute table object, by default, includes the following
information:

• histogram

• class names

• class values

• color table (red, green, and blue values).

Attribute data can also include additional information for thematic raster layers, such
as the area, opacity, and attributes for each class.

Use the Raster Attribute Editor to view or modify the contents of these attribute tables.

Statistics

The following statistics are calculated for each raster layer:

• minimum and maximum data file values

• mean of the data file values

• median of the data file values

• mode of the data file values

• standard deviation of the data file values

See "APPENDIX A: Math Topics" for more information on these statistics.

These statistics are based on the data file values of the pixels in the layer. Knowing the
statistics for a layer will aid the user in determining the process to be used in extracting
the features that are of the most interest. For example, if a user is planning to use the
ISODATA classifier, the statistics could be used to see if the layer has a normal
distribution of data, which is preferred.

If they do not exist, statistics should be created for a layer. Certain Viewer functions
(e.g., contrast tools) will not run without layer statistics. Rebuilding statistics for a raster
layer may be necessary. For example, if the user does not want to include zero file
values in the statistics calculation (and they are currently included), the statistics could
be rebuilt without zero file values.


Use the Image Information utility to view, create, or rebuild statistics for a raster layer. If the
statistics do not exist, the information in the Image Information utility will be inactive and
shaded.

Map Information

Map information for a raster layer will be created only when the layer has been
georeferenced. If the layer has been georeferenced, the following information will be
stored in the raster layer:

• upper left X,Y coordinates

• pixel size

• map unit used for measurement (e.g., meters, inches, feet)

See "CHAPTER 11: Cartography" for information on map data.

The user should add or change the map information only when he or she has valid map
information to enter. If incorrect information is entered, then the data for this file will
no longer be valid. Since IMAGINE programs use these data, they must be correct.

When you import a file, the map information may not have imported correctly. If this occurs, use
the Image Info utility to update the information.

Use the Image Information utility to view, add, or change map information for a raster layer in
an .img file.



Map Projection Information

If the raster layer has been georeferenced, the following projection information will be
generated for the layer:

• map projection

• spheroid

• zone number

See "APPENDIX C: Map Projections" for information on map projections.

Do not add or change the map projection unless the projection listed in the Image Info utility is
incorrect or missing. This may occur when you import a file.

Changing the map projection with the Projections Editor dialog will not rectify the
layer. If the user enters incorrect information, then the data for this layer will no longer
be valid. Since IMAGINE programs use these data, they need to be correct.

Use the Image Information utility to view, add, or change the map projection for a raster layer
in an .img file. If the layer has not been georeferenced, the information in the Image Information
utility will be inactive and shaded.

Pyramid Layers

IMAGINE gives the user the option to “pyramid” large raster layers for faster
processing and display in the Viewer. When the user generates pyramid layers,
reduced subsampled raster layers are created from the original raster layer. The
number of pyramid layers that are created depends on the size of the raster layer and
the block size.

For example, a raster layer that is 4k × 4k pixels could take a long time to display when
using the Fit To Window option in the Viewer. Using the Create Pyramid Layers
option, IMAGINE would create additional raster layers successively reduced from 4k
× 4k, to 2k × 2k, 1k × 1k, 512 × 512, 256 × 256, 128 × 128, down to 64 × 64. Then IMAGINE
would select the pyramid layer size most appropriate for display in the Viewer window.
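As a rough sketch of the example above (not the IMAGINE implementation), the number of reduced
levels can be estimated by repeatedly halving the larger dimension until it no longer exceeds the
block size:

#include <stdio.h>

/* Rough estimate of how many reduced-resolution (pyramid) levels would be
 * built by repeatedly halving the larger dimension until it no longer
 * exceeds the block size. This mirrors the 4k -> 2k -> ... -> 64 example. */
static int pyramid_levels(long width, long height, long blockSize)
{
    long dim = (width > height) ? width : height;
    int levels = 0;

    while (dim > blockSize) {
        dim /= 2;
        levels++;
    }
    return levels;
}

int main(void)
{
    /* A 4096 x 4096 layer with 64 x 64 blocks yields 6 reduced levels:
     * 2048, 1024, 512, 256, 128, 64. */
    printf("levels = %d\n", pyramid_levels(4096, 4096, 64));
    return 0;
}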

See "CHAPTER 4: Image Display" for more information on pyramid layers.

Pyramid layers can be created using the Image Information utility or when the raster layer is
imported. You can also use the Image Information utility to delete pyramid layers.


Machine Independent Format

MIF Data Elements

ERDAS IMAGINE uses the Machine Independent Format (MIF) to store data in a
fashion which can be read by a variety of machines. This format provides support for
converting data between the IMAGINE standard data format and that of the specific
host's architecture. Files created using this package on one machine will be readable
from another machine with no explicit data translation.

Each MIF file is made up of one or more of the data elements explained below.

EMIF_T_U1 (Unsigned 1-bit Integer)


U1 is for unsigned 1-bit integers (0 - 1). This data type can be used for bitmap images
with “yes/no” conditions. When the data are read from a MIF file, they are automati-
cally expanded to give one value per byte in memory. When they are written to the file,
they are automatically compressed to place eight values into one output byte.

Bit layout (byte 0): bit 7 = U1_7, bit 6 = U1_6, bit 5 = U1_5, bit 4 = U1_4,
bit 3 = U1_3, bit 2 = U1_2, bit 1 = U1_1, bit 0 = U1_0.

EMIF_T_U2 (Unsigned 2-bit Integer)


U2 is for unsigned 2-bit integers (0 - 3). This data type can be used for thematic data with
4 or fewer classes. When the data are read from a MIF file they are automatically
expanded to give one value per byte in memory. When they are written to the file, they
are automatically compressed to place four values into one output byte.

Bit layout (byte 0): bits 7-6 = U2_3, bits 5-4 = U2_2, bits 3-2 = U2_1, bits 1-0 = U2_0.

EMIF_T_U4 (Unsigned 4-bit Integer)


U4 is for unsigned 4-bit integers (0 - 15). This data type can be used for thematic data
with 16 or fewer classes. When these data are read from a MIF file, they are automati-
cally expanded to give one value per byte. When they are written to the file it is
automatically compressed to place two values into one output byte.

Bit layout (byte 0): bits 7-4 = U4_1, bits 3-0 = U4_0.
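For illustration only, the following sketch expands packed 1-, 2-, or 4-bit MIF values into one
value per byte, following the bit layouts shown above; it is not the MIF library code.

#include <stdio.h>

/* Illustrative unpacking of sub-byte MIF values (1-, 2-, or 4-bit) into one
 * value per byte, matching the packing shown above where the value with the
 * highest index occupies the most significant bits of each byte. Sketch only. */
static void unpack_subbyte(const unsigned char *packed, int bitsPerValue,
                           unsigned char *out, int count)
{
    int valuesPerByte = 8 / bitsPerValue;
    unsigned char mask = (unsigned char)((1 << bitsPerValue) - 1);

    for (int i = 0; i < count; i++) {
        int byteIndex = i / valuesPerByte;
        int slot = i % valuesPerByte;        /* 0 = lowest bits of the byte */
        int shift = slot * bitsPerValue;
        out[i] = (packed[byteIndex] >> shift) & mask;
    }
}

int main(void)
{
    unsigned char packed[1] = { 0xB4 };  /* 1011 0100 as four 2-bit values */
    unsigned char values[4];

    unpack_subbyte(packed, 2, values, 4);
    /* Expect U2_0=0, U2_1=1, U2_2=3, U2_3=2 for 0xB4. */
    for (int i = 0; i < 4; i++)
        printf("U2_%d = %u\n", i, values[i]);
    return 0;
}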



EMIF_T_UCHAR (8-bit Unsigned Integer)
This stores an 8-bit unsigned integer. It is most typically used to store characters and
raster imagery.

7
integer

byte 0

EMIF_T_CHAR (8-bit Signed Integer)


This stores an 8-bit signed integer.

7
integer

byte 0

EMIF_T_USHORT (16-bit Unsigned Integer)


This stores a 16-bit unsigned integer, stored in Intel byte order. The least significant byte
is stored first.

15
integer

byte 1 byte 0

EMIF_T_SHORT (16-bit Signed Integer)


This stores a 16-bit two's-complement signed integer, stored in Intel byte order. The least
significant byte is stored first.

15
integer

byte 1 byte 0


EMIF_T_ENUM (Enumerated Data Types)


This stores an enumerated data type as a 16-bit unsigned integer, stored in Intel byte
order. The least significant byte is stored first. The list of strings associated with the type
are defined in the data dictionary which is defined below. The first item in the list is
indicated by 0.

15
integer

byte 1 byte 0

EMIF_T_ULONG (32-bit Unsigned Integer)


This stores a 32-bit unsigned integer, stored in Intel byte order. The least significant byte
is stored first.

31
integer

byte 3 byte 2 byte 1 byte 0

EMIF_T_LONG (32-bit Signed Integer)


This stores a 32-bit two's-complement signed integer value, stored in Intel byte order.
The least significant byte is stored first.

31
integer

byte 3 byte 2 byte 1 byte 0

EMIF_T_PTR (32-bit Unsigned Integer)


This stores a 32-bit unsigned integer, which is used to provide a byte address within the
file. Byte 0 is the first byte, byte 1 is the second, etc. This allows for indexing into a
4-Gigabyte file; however, most UNIX systems only allow 2-Gigabyte files.

NOTE: Currently, this element appears in the data dictionary as an EMIF_T_ULONG element.
In future versions of the file format, the EMIF_T_PTR will be expanded to an 8-byte format
which will allow indexing using 64 bits which allow addressing of 16 billion Gigabytes of file
space.

31
integer

byte 3 byte 2 byte 1 byte 0



EMIF_T_TIME (32-bit Unsigned Integer)
This stores a 32-bit unsigned integer, which represents the number of seconds since
00:00:00 1 JAN 1970. This is the standard used in UNIX time keeping. The least signif-
icant byte is stored first.

31
integer

byte 3 byte 2 byte 1 byte 0

EMIF_T_FLOAT (Single Precision Floating Point)


Single precision floating point values are IEEE floating point values.

s = sign (0 = positive, 1 = negative)


exp = 8 bit excess 127 exponent
fraction = 24 bits of precision (+1 hidden bit)

31 30 22
s exp fraction

byte 3 byte 2 byte 1 byte 0

EMIF_T_DOUBLE (Double Precision Floating Point)


Double precision floating point data are IEEE double precision.

s = sign (0 = positive, 1 = negative)


exp = 11 bit excess 1023 exponent
fraction = 53 bits of precision (+1 hidden bit)

63 52 51
s exp fraction

byte 7 byte 6 byte 5 byte 4 byte 3 byte 2 byte 1 byte 0


EMIF_T_COMPLEX (Single Precision Complex)


A complex data element has a real part and an imaginary part. Single precision floating
point values are IEEE floating point values.

s = sign (0 = positive, 1 = negative)


exp = 8 bit excess 127 exponent
fraction = 24 bits of precision (+1 hidden bit)

Real part: first single precision

31 30 22
s exp fraction

byte 3 byte 2 byte 1 byte 0

Imaginary part: second single precision

31 30 22
s exp fraction

byte 7 byte 6 byte 5 byte 4

EMIF_T_DCOMPLEX (Double Precision Complex)


A complex data element has a real part and an imaginary part. Double precision
floating point data are IEEE double precision.

s = sign (0 = positive, 1 = negative)


exp = 11 bit excess 1023 exponent
fraction = 53 bits of precision (+1 hidden bit)

Real part: first double precision

63 52 51
s exp fraction

byte 7 byte 6 byte 5 byte 4 byte 3 byte 2 byte 1 byte 0

Imaginary part: second double precision

63 52 51
s exp fraction

byte15 byte 14 byte 13 byte 12 byte 11 byte 10 byte 9 byte 8



EMIF_T_BASEDATA (Matrix of Numbers)
A Basedata is a generic two dimensional array of values. It can store any of the types of
data used by IMAGINE. It is a variable length object whose size is determined by the
data type, the number of rows, and the number of columns.

numrows: This indicates the number of rows of data in this item.

31
integer

byte 3 byte 2 byte 1 byte 0

numcolumns: This indicates the number of columns of data in this item.

31
integer

byte 7 byte 6 byte 5 byte 4

datatype: This indicates the type of data stored here. The types are:

    DataType                 BytesPerObject
    0    EMIF_T_U1           1/8
    1    EMIF_T_U2           1/4
    3    EMIF_T_U4           1/2
    4    EMIF_T_UCHAR        1
    5    EMIF_T_CHAR         1
    6    EMIF_T_USHORT       2
    7    EMIF_T_SHORT        2
    8    EMIF_T_ULONG        4
    9    EMIF_T_LONG         4
    10   EMIF_T_FLOAT        4
    11   EMIF_T_DOUBLE       8
    12   EMIF_T_COMPLEX      8
    13   EMIF_T_DCOMPLEX     16

The datatype value is stored as a 16-bit integer occupying bytes 8 and 9 of the item.

objecttype: This indicates the object type of the data. This is used in the IMAGINE
Spatial Modeler. The valid values are:


0 SCALAR: This will not normally be the case, since a scalar has a single value.
1 TABLE: This indicates that the object is an array. The numcolumns should be 1.
2 MATRIX: This indicates that the number of rows and columns is greater than one.
  This is used for Coefficient matrices, etc.
3 RASTER: This indicates that the number of rows and columns is greater than one
  and the data are just a part of a larger raster object. This would be the case for
  blocks of images which are written to the file.

The objecttype value is stored as a 16-bit integer occupying bytes 10 and 11 of the item.

data: This is the actual data. The number of bytes is given as:

bytecount = numrows * numcolumns * BytesPerObject

EMIF_M_INDIRECT (Indication of Indirect Data)


This is used when the following data belongs to an indirect reference of data. For
example, when one object is defined by referring to another object.

The first four bytes provide the object repeat count.

31
integer

byte 3 byte 2 byte 1 byte 0

The next four bytes provide the file pointer, which points to the data comprising the
object.

31
integer

byte 7 byte 6 byte 5 byte 4

EMIF_M_PTR (Indication of Indirect Data)


This is used when the following data belong to an indirect reference of data of variable
length, for example, when one object is defined by referring to another object. This is
identical in file format to the EMIF_M_INDIRECT element; the difference is in the
memory-resident object which gets created. In the case of the EMIF_M_PTR, the count
and data pointer are placed into memory, whereas only the data are placed into
memory when the EMIF_M_INDIRECT element is read in. (The size of the object is
inherent in the data definitions.)



The first four bytes provide the object repeat count.

31
integer

byte 3 byte 2 byte 1 byte 0

The next four bytes provide the file pointer which points to the data comprising the
object.

31
integer

byte 7 byte 6 byte 5 byte 4


MIF Data Dictionary

IMAGINE HFA files have a data dictionary that describes the contents of each of the
different types of nodes. The dictionary is a compact ASCII string which is usually
placed at the end of the file, with a pointer to the start of the dictionary stored in the
header of the file.

Each object is defined like a structure in C, and consists of one or more items. Each item
is composed of an ItemType and a name. The ItemType indicates the type of data and
the name indicates the name by which the item will be known.

The syntax of the dictionary string is:

Dictionary        ObjectDefinition[ObjectDefinition...].
                  The dictionary is one or more ObjectDefinitions terminated by a period.
                  This is the complete collection of object type definitions.
ObjectDefinition  {ItemDefinition[ItemDefinition...]}name,
                  An ObjectDefinition is one or more ItemDefinitions enclosed in braces {},
                  followed by a name and terminated by a comma. This is a complete
                  definition of a single object.
ItemDefinition    number:[*|p]ItemType[EnumData]name,
                  An ItemDefinition is a number followed by a colon, followed optionally by
                  either an asterisk or a p, followed by an ItemType, followed optionally by
                  EnumData, followed by an item name, and terminated by a comma. This is
                  the complete definition of a single item. The * and the p both indicate
                  that when the data are read into memory, they will not be placed directly
                  into the structure being built, but that a new structure will be allocated
                  and filled with the data. The pointer to that structure is placed into the
                  initial structure. The asterisk indicates that the number of items in the
                  indirect object is given by the number in the item definition. The p
                  indicates that the number is variable. In both cases, the count precedes
                  the data in the input stream.
EnumData          number:name,[name,...]
                  EnumData is a number, followed by a colon, followed by one or more names,
                  each of which is terminated by a comma. The number defines the number of
                  names which follow. This is the complete set of names associated with an
                  individual enum type.
name              Any sequence of alphanumeric characters excluding the comma.
number            A positive integer, composed of any sequence of these digits:
                  0,1,2,3,4,5,6,7,8,9.
ItemType          1|2|4|c|C|s|S|l|L|f|d|t|m|M|b|e|o|x
                  This is used to indicate the type of an item. The following table
                  indicates how the characters correspond to one of the basic EMIF_T types.



The following table describes the single character codes used to identify the ItemType
in the MIF Dictionary Definition. The Interpretation column describes the type of data
indicated by the item type. The Number of Bytes column is the number of bytes that the
data type will occupy in the MIF file. If the number of bytes is not fixed, then it is given
as dynamic.

ItemType   Interpretation                                          Number of Bytes
1          EMIF_T_U1                                               1
2          EMIF_T_U2                                               1
4          EMIF_T_U4                                               1
c          EMIF_T_UCHAR                                            1
C          EMIF_T_CHAR                                             1
e          EMIF_T_ENUM                                             2
s          EMIF_T_USHORT                                           2
S          EMIF_T_SHORT                                            2
t          EMIF_T_TIME                                             4
l          EMIF_T_ULONG                                            4
L          EMIF_T_LONG                                             4
f          EMIF_T_FLOAT                                            4
d          EMIF_T_DOUBLE                                           8
m          EMIF_T_COMPLEX                                          8
M          EMIF_T_DCOMPLEX                                         16
b          EMIF_T_BASEDATA                                         dynamic
o          Previously defined object (dynamic number of bytes). This indicates
           that the description of the following data has been previously defined
           in the dictionary. This is like using a previously defined structure in
           a structure definition.
x          Defined object for this entry (dynamic number of bytes). This indicates
           that the description of the following data follows. This is like using
           a structure definition within a structure definition.
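As a worked example under these rules, the Ehfa_HeaderTag definition that appears later in this
chapter, {16:clabel,1:lheaderPtr,}Ehfa_HeaderTag, describes an object with sixteen EMIF_T_UCHAR
values named label followed by one EMIF_T_ULONG named headerPtr. A rough C equivalent, assuming
the format's 32-bit unsigned integer, is:

/* Rough C equivalent of "{16:clabel,1:lheaderPtr,}Ehfa_HeaderTag,":
 * 16 x 'c' (EMIF_T_UCHAR) named label, 1 x 'l' (EMIF_T_ULONG) named headerPtr.
 * Shown only to illustrate the field order and sizes; reading the structure
 * directly from disk would additionally require a packed, little-endian layout. */
struct Ehfa_HeaderTag {
    unsigned char label[16];    /* "EHFA_HEADER_TAG" signature          */
    unsigned int  headerPtr;    /* file offset of the Ehfa_File record  */
};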


ERDAS IMAGINE HFA File Format

Many of the files created and used by ERDAS IMAGINE are stored in a hierarchical file
architecture (HFA). This format allows any number of different types of data elements
to be stored in the file in a tree-structured fashion. This tree is built of nodes which
contain a variety of types of data. The contents of the nodes (as well as the structural
information) are saved in the file in a machine independent format (MIF), which allows
the files to be shared between computers of differing architectures.

Hierarchical File Architecture

The hierarchical file architecture maintains an object-oriented representation of data in
an IMAGINE disk file through use of a tree structure. Each object is called an entry and
occupies one node in the tree. Each object has a name and a type. The type refers to a
description of the data contained by that object. Additionally, each object may contain a
pointer to a subtree of more nodes. All entries are stored in MIF and can be accessed
directly by name.

Use the IMAGINE HfaView utility to view the objects of a file that uses the HFA format.

Figure 187: HFA File Structure. The file header points to the MIF dictionary and to the
root node of the object tree; each node (Node_1, Node_2, Node_3, ...) carries its own data
and may have child nodes (e.g., Node_4 and Node_5) of its own.



Nodes and Objects
Each node within the HFA tree structure contains an object and each object has its own
data. The types of objects in a file are dependent upon the type of file. For example, an
.img file will have different objects than an .ovr file because these files store different
types of data. The list of objects in a file is not fixed; that is, objects may be added or
removed depending on the data in the file (e.g., not all .img files with continuous raster
layers will have a node for ground control points).

Figure 188 is an example of an HFA file structure for a thematic raster layer in an .img
file. If there were more attributes in the IMAGINE Raster Attribute Editor, then they
would appear as objects under the Descriptor Table object.

Figure 188: HFA File Structure Example. A Layer_1 node of type Eimg_Layer has child
nodes Statistics (Esta_Statistics), Descriptor Table (Edsc_Table), and Projection
(Eprj_ProParameters); the descriptor table in turn has a #Bin Function# child
(Edsc_BinFunction) and Edsc_Column children for Red, Green, Blue, Class_Names, and
Histogram.


Pre-defined HFA File Object Types

There are three categories of pre-defined HFA File Object Types found in .img files:
• Basic HFA File Object Types

• .img Object Types

• External File Format Header Object Types

These sections list each object with two different detailed definitions. The first
definition shows how the object appears in the data dictionary in the HFA file. The
second definition is a table that shows the type, name, and description of each item in
the object. An item within an object can be an element or another object.

If an item is an element, then the item type is one of the basic types previously given
with the EMIF_T_ prefix omitted. For example, the item type for EMIF_T_CHAR would
be shown as CHAR.

If an item is a previously defined object type, then the type is simply the name of the
previously defined item.

If the item is an array, then the number of elements is given in square brackets [n] after
the type. For example, the type for an item with an array of 16 EMIF_T_CHAR would
appear as CHAR[16]. If the item is an indirect item of fixed size (it is a pointer to an
item), then the type is followed by an asterisk “*.” For example, a pointer to an item with
an array of 16 EMIF_T_CHAR would appear as CHAR[16] *. If the item is an indirect
item of variable size (it is a pointer to an item and the number of items), then the type
is followed by a “p.” For example, a pointer to an item with a variable sized array of
characters would look like CHAR p.

NOTE: If the item type is shown as PTR, then this item will be encoded in the data dictionary
as a ULONG element.



Basic Objects of an HFA File

This is a list of types of basic objects found in all HFA files:
• Ehfa_HeaderTag

• Ehfa_File

• Ehfa_Entry

Ehfa_HeaderTag
The Ehfa_HeaderTag is used as a unique signature at the beginning of an ERDAS
IMAGINE HFA file. It must always occupy the first 20 bytes of the file.

{16:clabel,1:lheaderPtr,}Ehfa_HeaderTag,

Type Name Description


CHAR[16] label This contains the string
“EHFA_HEADER_TAG”
PTR headerPtr The file pointer to the Ehfa_File header
record.
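A minimal sketch of checking this signature, assuming the 20-byte layout above and a 32-bit
little-endian headerPtr as described in the MIF section (illustrative only, not IMAGINE code):

#include <stdio.h>
#include <string.h>

/* Read the first 20 bytes of an HFA file and verify the Ehfa_HeaderTag:
 * a 16-byte label ("EHFA_HEADER_TAG") followed by a 4-byte little-endian
 * file pointer to the Ehfa_File record. Sketch only. */
int main(int argc, char **argv)
{
    unsigned char tag[20];
    FILE *fp;

    if (argc < 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: hfacheck <file.img>\n");
        return 1;
    }
    if (fread(tag, 1, 20, fp) != 20 || memcmp(tag, "EHFA_HEADER_TAG", 15) != 0) {
        fprintf(stderr, "not an HFA file\n");
        fclose(fp);
        return 1;
    }
    /* Assemble the little-endian headerPtr from bytes 16..19. */
    unsigned long headerPtr = (unsigned long)tag[16]
                            | ((unsigned long)tag[17] << 8)
                            | ((unsigned long)tag[18] << 16)
                            | ((unsigned long)tag[19] << 24);
    printf("Ehfa_File record at byte offset %lu\n", headerPtr);
    fclose(fp);
    return 0;
}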

Ehfa_File
The Ehfa_File is composed of several main parts, including the free list, the dictionary,
and the object tree. This entry is used to keep track of these items in the file, since they
may begin anywhere in the file.

{1:Lversion,1:lfreeList,1:lrootEntryPtr,1:SentryHeaderLength,1:ldictionaryPtr,}
Ehfa_File,

Type Name Description


LONG version This defines the version number of the ehfa file.
It is currently 1.
PTR freeList This points to list of freed blocks within the file.
This list is searched first whenever new space is
needed. As blocks of space are released in the
file, they are placed on the free list so that they
may be reused later.
PTR rootEntryPtr This points to the root node of the object tree.
SHORT entryHeaderLength This defines the length of the entry portion of
each node. Each node consists of two parts. The
first part is the entry which contains the node
name, node type, and parent/child informa-
tion. The second part is the data for the node.
PTR dictionaryPtr This points to the starting position of the file for
the MIF Dictionary. The dictionary must be
read and decoded before any of the other
objects in the file can be decoded.


Ehfa_Entry
The Ehfa_Entry contains the header information for each node in the object tree,
including the name and type of the node as well as the parent/child information.

{1:lnext,1:lprev,1:lparent,1:lchild,1:ldata,1:LdataSize,64:cname,32:ctype,
1:tmodTime,}Ehfa_Entry,

Type Name Description


PTR next This is a file pointer which gives the location of the
next node in the tree at the current level. If this is
the last node at this level, then this contains 0.
PTR prev This is a file pointer which gives the location of the
previous node in the tree at the current level. If this
is the first node at this level, then this contains 0.
PTR parent This is a file pointer which gives the location of the
parent for this node. This is 0 for the root node.
PTR child This is a file pointer which gives the location of the
first of the list of children for this node. If there are
no children, then this contains 0.
PTR data This points to the data for this node. If there is no
data for this node then it contains 0.
LONG dataSize This contains the number of bytes contained in the
data record associated with this node.
CHAR[64] name This contains a NULL terminated string that is the
name for this node. The string can be no longer than
64 bytes including the NULL terminator byte.
CHAR[32] type This contains a NULL terminated string which
names the type of data to be found at this node. The
type must match one of the types found in the data
dictionary. The type name can be no longer than 32
bytes including the NULL terminator byte.
TIME modTime This contains the time of the last modification to the
data in this node.
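The next, child, and parent pointers make the file a conventional tree. The following sketch,
using simplified in-memory nodes rather than file offsets, shows the depth-first walk implied by
those fields; it is illustrative only and not part of the IMAGINE API.

#include <stdio.h>

/* Simplified in-memory form of an Ehfa_Entry node: each node points to its
 * next sibling and its first child, mirroring the 'next' and 'child' file
 * pointers described above. Sketch only. */
struct Entry {
    const char   *name;
    const char   *type;
    struct Entry *next;   /* next node at this level, NULL if last */
    struct Entry *child;  /* first child node, NULL if none        */
};

/* Depth-first walk: visit a node, descend into its children, then move on
 * to its next sibling, exactly as the file pointers are meant to be chased. */
static void walk(const struct Entry *node, int depth)
{
    for (; node != NULL; node = node->next) {
        printf("%*s%s (%s)\n", depth * 2, "", node->name, node->type);
        walk(node->child, depth + 1);
    }
}

int main(void)
{
    struct Entry stats = { "Statistics", "Esta_Statistics", NULL, NULL };
    struct Entry layer = { "Layer_1", "Eimg_Layer", NULL, &stats };
    struct Entry root  = { "root", "root", NULL, &layer };

    walk(&root, 0);
    return 0;
}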



HFA Object Directory for .img Files

The following section defines the list of objects which comprise IMAGINE image files
(.img extension). This is not a complete list, because users and developers may create
new items and add them to any ERDAS IMAGINE file.

Eimg_Layer
An Eimg_Layer object is the base node for a single layer of imagery. This object
describes the basic information for the layer, including its width and height in pixels,
its data type, and the width and height of the blocks used to store the image. Other
information such as the actual pixel data, map information, projection information, etc.,
are stored as child objects under this node. The child objects that are usually found
under the Eimg_Layer include:

• RasterDMS (an Edms_State which actually contains the imagery)

• Descriptor_Table (an Edsc_Table object which contains the histogram and other
pixel value related data)

• Projection (an Eprj_ProParameters object which contains the projection information)

• Map_Info (an Eprj_MapInfo object which contains the map information)

• Ehfa_Layer (an Ehfa_Layer object which describes the type of data in the layer)


{1:lwidth,1:lheight,1:e3:thematic,athematic,fft of real valued data,layerType,
1:e13:u1,u2,u4,u8,s8,u16,s16,u32,s32,f32,f64,c64,c128,pixelType,
1:lblockWidth,1:lblockHeight,}Eimg_Layer,

Type Name Description


LONG width The width of the layer in pixels.
LONG height The height of the layer in pixels.
ENUM layerType The type of layer.
0=”thematic”
1=”athematic”
ENUM pixelType The type of the pixels.
0=”u1”
1=”u2”
2=”u4”
3=”u8”
4=”s8”
5=”u16”
6=”s16”
7=”u32”
8=”s32”
9=”f32”
10=”f64”
11=”c64”
12=”c128”
LONG blockWidth The width of each block in the layer.
LONG blockHeight The height of each block in the layer.

NOTE: In the following definitions, an Emif_String is of type CHAR p
(i.e., {0:pcstring,}Emif_String).



Eimg_DependentFile
The Eimg_DependentFile object contains the base name of the file for which the current
file is serving as an .aux. The object is written as a child of the root with the name Depen-
dentFile.

{1:oEmif_String,dependent,}Eimg_DependentFile,

Type Name Description


Emif_String dependent The dependent file name.

Eimg_DependentLayerName
The Eimg_DependentLayerName object normally exists as the child of an Eimg_Layer
in an .aux file. It contains the original name of the layer of which it is a child in the
original imagery file being served by this .aux. It only exists in .aux files serving
imagery files of a format supported by a RasterFormats DLL Instance which does not
define a FileLayerNamesSet interface function (because these DLL Instances are
obviously incapable of supporting layer name changes).

{1:oEmif_String,ImageLayerName,}Eimg_DependentLayerName,

Type Name Description


Emif_String ImageLayerName The original dependent layer name.


Eimg_Layer_SubSample
An Eimg_Layer_SubSample object is a node which contains a subsampled version of
the layer defined by the parent node. Nodes of this form are named _ss_2, _ss_4,
_ss_8, etc., which stands for SubSampled by 2, SubSampled by 4, etc. This node will have
an Edms_State node called RasterDMS and an Ehfa_Layer node called Ehfa_layer
under it. These nodes will be present if pyramid layers have been computed.

{1:lwidth,1:lheight,1:e3:thematic,athematic,fft of real valued data,layerType,
1:e13:u1,u2,u4,u8,s8,u16,s16,u32,s32,f32,f64,c64,c128,pixelType,
1:lblockWidth,1:lblockHeight,}Eimg_Layer_SubSample,

Type Name Description


LONG width The width of the layer in pixels.
LONG height The height of the layer in pixels.
ENUM layerType The type of layer.

0 =”thematic”

1 =”athematic”
ENUM pixelType The type of the pixels.

0=”u1”

1=”u2”

2=”u4”

3=”u8”

4=”s8”

5=”u16”

6=”s16”

7=”u32”

8=”s32”

9=”f32”

10=”f64”

11=”c64”

12=”c128”
LONG blockWidth The width of each block in the layer.
LONG blockHeight The height of each block in the layer.



Eimg_NonInitializedValue
The Eimg_NonInitializedValue object is used to record the value that is to be assigned
to any uninitialized blocks of raster data in a layer.

{1:*bvalueBD,}Eimg_NonInitializedValue,

Type Name Description


BASEDATA * valueBD A basedata structure containing the value assigned
to uninitialized blocks

Eimg_MapInformation
The Eimg_MapInformation object contains the map projection system and the map
units applicable to the MapToPixelXForm object that is its sibling. As a child of an
Eimg_Layer, it will have the name MapInformation.

{1:oEmif_String,projection,1:oEmif_String,units,}Eimg_MapInformation,

Type Name Description


Emif_String projection The name of the map projection system associ-
ated with the MapToPixelTransform sibling
object.
Emif_String units The name of the map units of the coordinates
returned by the transforming layer pixel coor-
dinates through the inverse of the MapToPixel-
Transform sibling object.

Eimg_RRDNamesList
The Eimg_RRDNamesList object contains a list of layers of a resolution different
(reduced) than the original. As a child of an Eimg_Layer, it will have the name
RRDNamesList.

{1:oEmif_String,algorithm,0:poEmif_String,nameList,}Eimg_RRDNamesList,

Type Name Description


Emif_String algorithm The name of the algorithm used to compute the
layers in nameList.
Emif_String p nameList A list of the reduced resolution layers associ-
ated with the parent layer. These are full layer
names.


Eimg_StatisticsParameters830
The Eimg_StatisticsParameters830 object contains statistics parameters that control the
computation of certain statistics. The parameters can apply to the computation of
Covariance, scalar Statistics of a layer, or the Histogram of a layer. In these cases, the
object will be named CovarianceParameters, StatisticsParameters, and HistogramPa-
rameters. The CovarianceParameters will exist as a sibling of the Covariance, and the
StatisticsParameters and HistogramParameters will be children of the Eimg_Layer to
which they apply.

{0:poEmif_String,LayerNames,1:*bExcludedValues,1:oEmif_String,AOIname,
1:lSkipFactorX,1:lSkipFactorY,1:*oEdsc_BinFunction,BinFunction,}
Eimg_StatisticsParameters830,

Type Name Description


Emif_String p LayerNames The list of (full) layer names that were involved
in this computation (covariance only).
BASEDATA * ExcludedValues The values excluded during this computation.
Emif_String AOIname The name of the AOI file used to limit the com-
putation.
LONG SkipFactorX The skip factor in X.
LONG SkipFactorY The skip factor in Y.
Edsc_BinFunction * BinFunction The bin function used for this computation
(statistics and histogram only).

Ehfa_Layer
The Ehfa_Layer is used to indicate the type of layer. The initial design for the IMAGINE
files allowed for both raster and vector layers. Currently, the vector layers have not
been implemented.

{1:e2:raster,vector,type,1:ldictionaryPtr,}Ehfa_Layer,

Type Name Description


ENUM type The type of layer.

0=”raster”

1=”vector”
ULONG dictionaryPtr This points to a dictionary entry which
describes the data. In the case of raster data, it
points to a dictionary pointer which describes
the contents of each block via the RasterDMS
definition given below.



RasterDMS
The RasterDMS object definition must be present in the EMIF dictionary pointed to by
an Ehfa_Layer object that is of type “raster”. It describes the logical make-up of a block
of raster data in the Ehfa_Layer. The physical representation of the raster data is
actually managed by the DMS system through objects of type Ehfa_Layer and
Edms_State. The RasterDMS definition should describe the raster data in terms of total
number of data values in a block and the type of data value.

{<n>:<t>data,}RasterDMS,

Type Name Description


<t>[<n>] data The data is described in terms of total number, <n>, of
data file values in a block of the raster layer (which is
simply <block width> * <block height>) and the data
value type, <t>, which can have any one of the follow-
ing values:

1 - Unsigned 1-bit

2 - Unsigned 2-bit

4 - Unsigned 4-bit

c - Unsigned 8-bit

C - Signed 8-bit

s - Unsigned 16-bit

S - Signed 16-bit

l - Unsigned 32-bit

L - Signed 32-bit

f - Single precision floating point

d - Double precision floating point

m - Single precision complex

M - Double precision complex


Edms_VirtualBlockInfo
An Edms_VirtualBlockInfo object describes a single raster data block of a layer. It
describes where to find the data in the file, how many bytes are in the data block, and
how to unpack the data from the block. For uncompressed data the unpacking is
straight forward. The scheme for compressed data is described below.

{1:SfileCode,1:loffset,1:lsize,1:e2:false,true,logvalid,1:e2:no compression,ESRI GRID
compression,compressionType,1:LblockHeight,}Edms_VirtualBlockInfo,

Type Name Description


SHORT fileCode This is included to allow expansion of the layer
into multiple files. The number indicates the
file in which the block is located. Currently this
is always 0, since the multiple file scheme has
not been implemented.
PTR offset This points to the byte location in the file where
the block data actually resides.
LONG size The number of bytes in the block.
ENUM logvalid This indicates whether the block actually con-
tains valid data. This allows blocks to exist in
the map, but not in the file.

0=”false”

1=”true”
ENUM compressionType This indicates the type of compression used for
this block.

0=”no compression”

1=”ESRI GRID compression”

No compression indicates that the data located


at offset are uncompressed data. The stream of
bytes is to be interpreted as a sequence of bytes
which defines the data as indicated by the data
type.

The ESRI GRID compression is a two stage run-


length encoding.

For uncompressed blocks, the data are simply packed into the block one pixel value at
a time. Each pixel is read from the block as indicated by its data type. All non-integer
data are uncompressed.



The compression scheme used by ERDAS IMAGINE is a two level run-length encoding
scheme. If the data are an integral type, then the following steps are performed:

• The minimum and maximum values for a block are determined.

• The byte size of the output pixels is determined by examining the difference
between the maximum and the minimum. If the difference is less than or equal to 256,
then 8-bit data are used. If the difference is less than 65,536, then 16-bit data are
used; otherwise 32-bit data are used.

• The minimum is subtracted from each of the values.

• A run-length encoding scheme is used to encode runs of the same pixel value. The
data minimum value occupies the first 4 bytes of the block. The number of run-
length segments occupies the next 4 bytes, and the next 4 bytes are an offset into the
block which indicates where the compressed pixel values begin. The next byte
indicates the number of bits per pixel (1,2,4,8,16,32). These four values are encoded
in the standard MIF format (unsigned long, or ULONG). Following this is the list
of segment counts, following the segment counts are the pixel values. There is one
segment count per pixel value.

NOTE: No compression scheme is used if the data are non-integral.

Block layout: minimum value, number of segments, data offset, number of bits per
value, segment counts, data values.

Each data count is encoded as follows:

Count layout: byte 0 holds a 2-bit byte count and the high 6 bits of the count and appears
first in the byte stream; bytes 1 through 3, if present, each hold the next 8 bits of the
count in decreasing significance.

There may be 1, 2, 3, or 4 bytes per count. The first two bits of the first count byte
contain 0, 1, 2, or 3, indicating that the count is contained in 1, 2, 3, or 4 bytes. The rest
of the first byte (6 bits) represents the six most significant bits of the count. The next
byte, if present, represents decreasing significance.

NOTE: This order is different than the rest of the package. This was done so that the high byte
with the encoded byte count would be first in the byte stream. This pattern is repeated as many
times as indicated by the numsegments field.

The data values are compressed into the remaining space packed into as many bits per
pixel as indicated by the numbitpervalue field.
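A minimal sketch of decoding one segment count under the layout just described (illustrative
only, not ERDAS code):

#include <stdio.h>

/* Decode one run-length segment count as described above: the top 2 bits of
 * the first byte give how many additional bytes the count occupies (0..3),
 * the remaining 6 bits are the most significant bits of the count, and each
 * following byte adds 8 less-significant bits. Advances *pos past the bytes
 * consumed. Sketch only. */
static unsigned long decode_count(const unsigned char *buf, size_t *pos)
{
    unsigned char first = buf[(*pos)++];
    int extraBytes = (first >> 6) & 0x3;          /* 0..3 extra bytes follow */
    unsigned long count = first & 0x3F;           /* high 6 bits of count    */

    for (int i = 0; i < extraBytes; i++)
        count = (count << 8) | buf[(*pos)++];
    return count;
}

int main(void)
{
    /* 0x41 0x2C: top bits 01 -> one extra byte; count = (0x01 << 8) | 0x2C = 300 */
    unsigned char stream[] = { 0x41, 0x2C };
    size_t pos = 0;

    printf("count = %lu\n", decode_count(stream, &pos));
    return 0;
}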


Edms_FreeIDList
An Edms_FreeIDList is used to track blocks which have been freed from the layer. The
freelist consists of an array of min/max pairs which indicate unused contiguous blocks
of data which lie within the allocated layer space. Currently this object is unused and
reserved for future expansion.

{1:Lmin,1:Lmax,}Edms_FreeIDList,

Type Name Description


LONG min The minimum block number in the group.
LONG max The maximum block number in the group.

Edms_State
The Edms_State describes the location of each of the blocks of a single layer of imagery.
Basically, this object is an index of all of the blocks in the layer.

{1:lnumvirtualblocks,1:lnumobjectsperblock,1:lnextobjectnum,
1:e2:no compression,RLC compression,compressionType,
0:poEdms_VirtualBlockInfo,blockinfo,0:poEdms_FreeIDList,freelist,
1:tmodTime,}Edms_State

Type Name Description


LONG numvirtualblocks The number of blocks in this
layer.
LONG numobjectsperblock The number of pixels represented
by one block.
LONG nextobjectnum Currently, this type is not being
used and is reserved for future
expansion.
ENUM compressionType This indicates the type of com-
pression used for this block.

0=”no compression”

1=”ESRI GRID compression”

No compression indicates that


the data located at offset are
uncompressed data. The stream
of bytes is to be interpreted as a
sequence of bytes which defines
the data as indicated by the data
type.

The ESRI GRID compression is a


two stage run-length encoding.

Edms_VirtualBlockInfo p blockinfo This is the table of entries which
describes the state and location of
each block in the layer.
Edms_FreeIDList p freelist Currently, this type is not being
used and is reserved for future
expansion.
TIME modTime This is the time of the last modifi-
cation to this layer.

Edsc_Table
An Edsc_Table is a base node used to store columns of information. This serves simply
as a parent node for each of the columns which are a part of the table.

{1:lnumRows,} Edsc_Table,

Type Name Description


LONG numRows This defines the number of rows in the table.

Edsc_BinFunction
The Edsc_BinFunction describes how pixel values from the associated layer are to be
mapped into an index for the columns.

{1:lnumBins,1:e4:direct,linear,logarithmic,explicit,binFunctionType,
1:dminLimit,1:dmaxLimit,1:*bbinLimits,}Edsc_BinFunction,

Type Name Description


LONG numBins The number of bins.
ENUM binFunction Type The type of bin function.

0=”direct”

1=”linear”

2=” exponential”

3=”explicit”
DOUBLE minLimit The lowest value defined by the bin function.
DOUBLE maxLimit The highest value defined by the bin func-
tion.
BASEDATA binLimits The limits used to define the bins.

Table 32 describes how the binning functions are used.


Table 32: Usage of Binning Functions

Bin Type Description


DIRECT Direct binning means that the pixel value minus the minimum
is used as is with no translation to index into the columns. For
example, if the minimum value is zero, then value 0 is
indexed into location 0, 1 is indexed into 1, etc.
LINEAR Linear binning means that the pixel value is first scaled by the
formula:
index = (value-minLimit)*numBins/(maxLimit-minLimit).

This allows a very large range of data, or even floating point


data, to be used to index into a table.
EXPONENTIAL Exponential binning is used to compress data with a large
dynamic range. The formula used is
index = numBins * (log(1 + (value - minLimit)) / (maxLimit - minLimit)).
EXPLICIT Explicit binning is used to map the data into indices using an
arbitrary set of boundaries. The data are compared against the
limits set in the binLimit table. If the pixel is less than or equal
to the first value, then the index is 0. If the pixel is less than or
equal to the next value, then the index is 1, etc.
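A small sketch of the direct and linear cases from Table 32 follows; the clamping of out-of-range
values is an assumption made for the sketch, not behavior documented here.

#include <stdio.h>

/* Map a pixel value to a descriptor-table bin index using the direct and
 * linear bin functions from Table 32. Clamping out-of-range values to the
 * first or last bin is an assumption for the sketch only. */
static long direct_bin(double value, double minLimit)
{
    return (long)(value - minLimit);
}

static long linear_bin(double value, double minLimit, double maxLimit, long numBins)
{
    long index = (long)((value - minLimit) * numBins / (maxLimit - minLimit));

    if (index < 0) index = 0;
    if (index >= numBins) index = numBins - 1;
    return index;
}

int main(void)
{
    /* 256 bins over floating point data in the range 0.0 .. 1.0 */
    printf("direct bin of 42 (min 0)    = %ld\n", direct_bin(42.0, 0.0));
    printf("linear bin of 0.73 in [0,1] = %ld\n", linear_bin(0.73, 0.0, 1.0, 256));
    return 0;
}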

Edsc_Column
The columns of information which are stored in a table are stored in this format.

{1:lnumRows,1:LcolumnDataPtr,1:e4:integer,real,complex,string,dataType,
1:lmaxNumChars,}Edsc_Column,

Type Name Description


LONG numRows The number of rows in this column.
PTR columnDataPtr Starting point of column data in the file. This
points to the location in the file which contains
the data.
ENUM dataType The data type of this column

0=”integer” (EMIF_T_LONG)
1=”real” (EMIF_T_DOUBLE)
2=”complex” (EMIF_T_DCOMPLEX)

3=”string” (EMIF_T_CHAR)
LONG maxNumChars The maximum string length (for string data
only). It is 0 if the type is not a String.

The types of information stored in columns are given in the following table.



Name Data Type Description
Histogram real This is found in the descriptor table of almost
every layer. It defines the number of pixels
which fall into each bin.
Class_Names string This is found in the descriptor table of almost
every thematic layer. It defines the name for
each class.
Red real This is found in the descriptor table of almost
every thematic layer. It defines the red compo-
nent of the color for each class. The range of the
value is from 0.0 to 1.0.
Green real This is found in the descriptor table of almost
every thematic layer. It defines the green com-
ponent of the color for each class. The range of
the value is from 0.0 to 1.0.
Blue real This is found in the descriptor table of almost
every thematic layer. It defines the blue compo-
nent of the color for each class. The range of the
value is from 0.0 to 1.0.
Opacity real This is found in the descriptor table of almost
every thematic layer. It defines the opacity
associated with the class. A value of 0 means
that the color will be solid. A value of 0.5
means that 50% of the underlying pixel would
show through, and 1.0 means that all of the
pixel value in the underlying layer would
show through.
Contrast real This is found in the descriptor table of most
continuous raster layers. It is used to define an
intensity stretch which is normally used to
improve contrast. The table is stored as normal-
ized values from 0.0 to 1.0.
GCP_Names string This is found in the GCP_Table in files which
have ground control points. This is the table of
names for the points.
GCP_xCoords real This is found in the GCP_Table in files which
have ground control points. This is the X coor-
dinate for the point.
GCP_yCoords real This is found in the GCP_Table in files which
have ground control points. This is the Y coor-
dinate for the point.
GCP_Color string This is found in the GCP_Table in files which
have ground control points. This is the name of
the color that is used to display this point.


Eded_ColumnAttributes_1
The Eded_ColumnAttributes_1 stores the descriptor column properties which are used
by the Raster Attribute Editor for the format and layout of the descriptor column
display in the Raster Attribute Editor CellArray. The properties include the position of
the descriptor column within the CellArray, the name, alignment, format, and width of
the column, whether the column is editable, the formula (if any) for the column, the
units (for numeric data), and whether the column is a component of a color column.
Each Eded_ColumnAttributes_1 is a child of the Edsc_Column containing the data for
the descriptor column. The properties for a color column are stored as a child of the
Eded_ColumnAttributes_1 for the red component of the color column.

{1:lposition,0:pcname,1:e2:FALSE,TRUE,editable,
1:e3:LEFT,CENTER,RIGHT,alignment,0:pcformat,
1:e3:DEFAULT,APPLY,AUTO-APPLY,formulamode,0:pcformula,1:dcolumnwidth,
0:pcunits,1:e5:NO_COLOR,RED,GREEN,BLUE,COLOR,colorflag,0:pcgreenname,
0:pcbluename,}Eded_ColumnAttributes_1,

Type Name Description


LONG position The position of this descriptor column in the
Raster Attribute Editor CellArray. The posi-
tions for all descriptor columns are sorted and
the columns are displayed in ascending order.
CHAR P name The name of the descriptor column. This is the
same as the name of the parent Edsc_Column
node, for all columns except color columns.
Color columns have no corresponding
Edsc_Column.
ENUM editable Specifies whether this column is editable.
0 = NO
1 = YES
ENUM alignment Alignment of this column in CellArray.
0 = LEFT
1 = CENTER
2 = RIGHT
CHAR P format The format for display of numeric data.
ENUM formulamode Mode for formula application.
0 = DEFAULT
1 = APPLY
2 = AUTO-APPLY
CHAR P formula The formula for the column.
DOUBLE columnwidth The width of the CellArray column
CHAR P units The name of the units for numeric data stored
in the column.

ENUM colorflag Indicates whether column is a color column, a
component of a color column, or a normal col-
umn.
0 = NO_COLOR
1 = RED
2 = GREEN
3 = BLUE
4 = COLOR
CHAR P greenname Name of green component column associated
with color column. Empty string for other col-
umn types.
CHAR P bluename Name of blue component column associated
with color column. Empty string for other col-
umn types.

Esta_Statistics
The Esta_Statistics is used to describe the statistics for a layer.

{1:dminimum,1:dmaximum,1:dmean,1:dmedian,1:dmode,1:dstddev,}
Esta_Statistics,

Type Name Description


DOUBLE minimum The minimum of all of the pixels in the image.
This may exclude values as defined by the user.
DOUBLE maximum The maximum of all of the pixels in the image.
This may exclude values as defined by the user.
DOUBLE mean The mean of all of the pixels in the image. This
may exclude values as defined by the user.
DOUBLE median The median of all of the pixels in the image.
This may exclude values as defined by the user.
DOUBLE mode The mode of all of the pixels in the image. This
may exclude values as defined by the user.
DOUBLE stddev The standard deviation of the pixels in the
image. This may exclude values as defined by
the user.


Esta_Covariance
The Esta_Covariance object is used to record the covariance matrix for the layers in an
.img file

{1:bcovariance,}Esta_Covariance,

Type Name Description


BASEDATA covariance A basedata structure containing the covariance
matrix

Esta_SkipFactors
The Esta_SkipFactors object is used to record the skip factors that were used when the
statistics or histogram was calculated for a raster layer or when the covariance was
calculated for an .img file.

{1:LskipFactorX,1:LskipFactorY,}Esta_SkipFactors,

Type Name Description


LONG skipFactorX The horizontal sampling interval used for sta-
tistics measured in image columns/sample
LONG skipFactorY The vertical sampling interval used for statis-
tics measured in image rows/sample

Esta_ExcludedValues
The Esta_ExcludedValues object is used to record the values that were excluded from
consideration when the statistics or histogram was calculated for a raster layer or when
the covariance was calculated for a .img file.

{1:*bvalueBD,}Esta_ExcludedValues,

Type Name Description


BASEDATA * valueBD A basedata structure containing the excluded
values



Eprj_Datum
The Eprj_Datum object is used to record the datum information which is part of the
projection information for an .img file.

{0:pcdatumname,1:e3:EPRJ_DATUM_PARAMETRIC,EPRJ_DATUM_GRID,
EPRJ_DATUM_REGRESSION,type,0:pdparams,0:pcgridname,}Eprj_Datum,

Type Name Description


CHAR datumname The datum name.
ENUM type The datum type which could be one of three
different types: parametric type, grid type and
regression type.
DOUBLE params The seven parameters of a parametric datum
which describe the translations, rotations and
scale change between the current datum and
the reference datum WGS84.
CHAR gridname The name of a grid datum file which stores the
coordinate shifts among North America
Datums NAD27, NAD83 and HARN.

Eprj_Spheroid
The Eprj_Spheroid is used to describe spheroid parameters used to describe the shape
of the earth.

{0:pcsphereName,1:da,1:db,1:deSquared,1:dradius,}Eprj_Spheroid,

Type Name Description


CHAR p sphereName The name of the spheroid/ellipsoid. This name
can be found in:
<$IMAGINE_HOME>/etc/spheroid.tab.
DOUBLE a The semi-major axis of the ellipsoid in meters.
DOUBLE b The semi-minor axis of the ellipsoid in meters.
DOUBLE eSquared The eccentricity of the ellipsoid, squared.
DOUBLE radius The radius of the spheroid in meters.


Eprj_ProParameters
The Eprj_ProParameters object is used to define the map projection for a layer.

{1:e2:EPRJ_INTERNAL,EPRJ_EXTERNAL,proType,1:lproNumber,
0:pcproExeName,0:pcproName,1:lproZone,0:pdproParams,
1:*oEprj_Spheroid,proSpheroid,}Eprj_ProParameters.

Type Name Description


ENUM proType This defines whether the projection is internal or
external.
0=”EPRJ_INTERNAL”
1=” EPRJ_EXTERNAL”
LONG proNumber The projection number for internal projections.
The current internal projections are:
0=”Geographic(Latitude/Longitude)”
1=”UTM”
2=”State Plane”
3=”Albers Conical Equal Area”
4=”Lambert Conformal Conic”
5=”Mercator”
6=”Polar Stereographic”
7=”Polyconic”
8=”Equidistant Conic”
9=”Transverse Mercator”
10=”Stereographic”
11=”Lambert Azimuthal Equal-area”
12=”Azimuthal Equidistant”
13=”Gnomonic”
14=”Orthographic”
15=”General Vertical Near-Side Perspective”
16=”Sinusoidal”
17=”Equirectangular”
18=”Miller Cylindrical”
19=”Van der Grinten I”
20=”Oblique Mercator (Hotine)”
21=”Space Oblique Mercator”
22=”Modified Transverse Mercator”
CHAR p proExeName The name of the executable to run for an external
projection.

CHAR p proName The name of the projection. This will be one of the
names given above in the description of proNum-
ber.
LONG proZone The zone number for internal State Plane or UTM
projections.
DOUBLE p proParams The array of parameters for the projection.
Eprj_Spheroid * proSpheroid The parameters of the spheroid used to approxi-
mate the earth. See the preceding description for
the Eprj_Spheroid object.

The following table defines the contents of the proParams array which is defined above.
The Parameters column defines the meaning of the various elements of the proParams
array for the different projections. Each one is described by one or more statements of
the form n: Description. n is the index into the array.

Name Parameters
0 ”Geographic(Latitude/Longitude)” None Used
1 ”UTM” 3: 1=North, -1=South
2 ”State Plane” 0: 0=NAD27, 1=NAD83
3 ”Albers Conical Equal Area” 2: Latitude of 1st standard parallel

3: Latitude of 2nd standard parallel

4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing
4 ”Lambert Conformal Conic” 2: Latitude of 1st standard parallel

3: Latitude of 2nd standard parallel

4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing
5 ”Mercator” 4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing

6 ”Polar Stereographic” 4: Longitude directed straight down below
pole of map.

5: Latitude of true scale.

6: False Easting

7: False Northing.
7 ”Polyconic” 4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing
8 ”Equidistant Conic” 2: Latitude of standard parallel (Case 0)

2: Latitude of 1st Standard Parallel (Case 1)

3: Latitude of 2nd standard Parallel (Case 1)

4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing

8: 0=Case 0, 1=Case 1.
9 ”Transverse Mercator” 2: Scale Factor at Central Meridian

4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing
10 ”Stereographic” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing
11 ”Lambert Azimuthal Equal-area” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing

12 ”Azimuthal Equidistant” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing
13 ”Gnomonic” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing
14 ”Orthographic” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing
15 ”General Vertical Near-Side Perspective” 2: Height of perspective point above sphere.
4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing
16 ”Sinusoidal” 4: Longitude of central meridian

6: False Easting

7: False Northing
17 ”Equirectangular” 4: Longitude of central meridian

5: Latitude of True Scale.

6: False Easting

7: False Northing
18 ”Miller Cylindrical” 4: Longitude of central meridian

6: False Easting

7: False Northing
19 ”Van der Grinten I” 4: Longitude of central meridian

6: False Easting

7: False Northing

20 ”Oblique Mercator (Hotine)” 2: Scale Factor at center of projection

3: Azimuth east of north for central line (Case 1)

4: Longitude of point of origin (Case 1)

5: Latitude of point of origin

6: False Easting

7: False Northing

8: Longitude of 1st point defining central line (Case 0)

9: Latitude of 1st point defining central line (Case 0)

10: Longitude of 2nd point defining central line (Case 0)

11: Latitude of 2nd point defining central line (Case 0)

12: 0=Case 0, 1=Case 1


21 ”Space Oblique Mercator” 4: Landsat Vehicle ID (1-5)

5: Orbital Path Number (1-251 or 1-233)

6: False Easting

7: False Northing
22 ”Modified Transverse Mercator” 6: False Easting

7: False Northing
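As a concrete illustration of the table, the following fragment fills a proParams array for projection number 3, Albers Conical Equal Area, using the indices listed above. This is a sketch only: the central-meridian and origin values are arbitrary examples, the array length is chosen generously, and the angular units stored must follow the convention of the software that writes the file.

/* Sketch: filling proParams for projection number 3, Albers Conical Equal
   Area, using the indices in the table above. */
void fill_albers_params(double proParams[15])
{
    proParams[2] = 29.5;     /* latitude of 1st standard parallel          */
    proParams[3] = 45.5;     /* latitude of 2nd standard parallel          */
    proParams[4] = -96.0;    /* longitude of central meridian (example)    */
    proParams[5] = 23.0;     /* latitude of origin of projection (example) */
    proParams[6] = 0.0;      /* false easting, meters                      */
    proParams[7] = 0.0;      /* false northing, meters                     */
}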



Eprj_Coordinate
An Eprj_Coordinate is a pair of doubles used to define an X and Y coordinate.

{1:dx,1:dy,}Eprj_Coordinate,

Type Name Description


DOUBLE x The X value of the coordinate.
DOUBLE y The Y value of the coordinate.

Eprj_Size
The Eprj_Size is a pair of doubles used to define a rectangular size.

{1:dx,1:dy,}Eprj_Size,

Type Name Description


DOUBLE width The width (X dimension) of the rectangle.
DOUBLE height The height (Y dimension) of the rectangle.

Eprj_MapInfo
The Eprj_MapInfo object is used to define the basic map information for a layer. It
defines the map coordinates for the center of the upper left and lower right pixels, as
well as the cell size and the name of the map projection.

{0:pcproName,1:*oEprj_Coordinate,upperLeftCenter,
1:*oEprj_Coordinate,lowerRightCenter,1:*oEprj_Size,pixelSize,
0:pcunits,}Eprj_MapInfo,

Type Name Description


CHAR *             proName           The name of the projection.
Eprj_Coordinate *  upperLeftCenter   The coordinates of the center of the upper left pixel.
Eprj_Coordinate *  lowerRightCenter  The coordinates of the center of the lower right pixel.
Eprj_Size *        pixelSize         The size of the pixel in the image.
CHAR *             units             The units of the above values.
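Given these fields, the map coordinate of the center of any pixel can be computed from upperLeftCenter and pixelSize. The sketch below assumes the common convention that columns increase eastward and rows increase southward (so map y decreases with increasing row); it is an illustration, not Developers' Toolkit code.

typedef struct { double x, y; } Coord;   /* stands in for Eprj_Coordinate */
typedef struct { double x, y; } Size;    /* stands in for Eprj_Size       */

/* Map coordinate of the center of pixel (row, col), measured from the
   upper left pixel center; assumes columns run east and rows run south. */
Coord pixel_center_to_map(Coord upperLeftCenter, Size pixelSize,
                          long row, long col)
{
    Coord map;
    map.x = upperLeftCenter.x + (double)col * pixelSize.x;
    map.y = upperLeftCenter.y - (double)row * pixelSize.y;
    return map;
}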


Efga_Polynomial
The Efga_Polynomial is used to store transformation coefficients created by the
IMAGINE GCP Tool.

{1:Lorder,1:Lnumdimtransforms,1:numdimpolynomial,1:Ltermcount,
1:*exponentList,1:bpolycoefmtx,1:bpolycoefvector,}Efga_Polynomial,

Type Name Description


LONG       order             The order of the polynomial.
LONG       numdimtransform   The number of dimensions of the transformation (always 2).
LONG       numdimpolynomial  The number of dimensions of the polynomial (always 2).
LONG       termcount         The number of terms in the polynomial.
LONG *     exponentlist      The ordered list of powers for the polynomial.
BASEDATA   polycoefmtx       The polynomial coefficients.
BASEDATA   polycoefvector    The polynomial vectors.
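The following sketch shows one way such a two-dimensional polynomial transformation could be evaluated. The storage order it assumes, two powers (x and y) per term in exponentlist and one coefficient per term for each output dimension, is an assumption made for illustration rather than the documented layout of polycoefmtx.

#include <math.h>

/* Evaluate a two-dimensional polynomial transform of the kind described by
   Efga_Polynomial.  Assumed storage (for illustration only): exps[2*t] and
   exps[2*t+1] are the x and y powers of term t, and coefx[t], coefy[t] are
   that term's coefficients for the two output dimensions. */
void eval_poly2d(long termcount, const long *exps,
                 const double *coefx, const double *coefy,
                 double x, double y, double *outx, double *outy)
{
    *outx = 0.0;
    *outy = 0.0;
    for (long t = 0; t < termcount; t++) {
        double term = pow(x, (double)exps[2 * t]) *
                      pow(y, (double)exps[2 * t + 1]);
        *outx += coefx[t] * term;
        *outy += coefy[t] * term;
    }
}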

Exfr_GenericXFormHeader
The Exfr_GenericXFormHeader contains a list of GeometricModels titles for the component XForms making up a composite Exfr_XForm. The components are written as children of the Exfr_GenericXFormHeader with the names XForm0, XForm1, ..., XFormi, where i is the number of components listed by the Exfr_GenericXFormHeader. The design of component XFormi is defined by the specific GeometricModels DLL instance that controls XForms of the title specified as the ith title string in the Exfr_GenericXFormHeader, unless XFormi is of type Exfr_ASCIIXForm (see below). As a child of an Eimg_Layer, it has the name MapToPixelXForm.

{0:poEmif_String,titleList,}Exfr_GenericXFormHeader,

Type Name Description


Emif_String titleList The list of titles of the component XForms that
are children of this node.



Exfr_ASCIIXForm
An Exfr_ASCIIXForm is an ASCII string representation of an Exfr_XForm component
controlled by a DLL that does not have an XFormConvertToMIF function defined but
does define an XFormSprintf function.

{0:pcxForm,}Exfr_ASCIIXForm,

Type Name Description


CHAR p xForm An ASCII string representation of an XForm
component.

Calibration_Node
An object of type Calibration_Node is an empty object — it contains no data. A node of
this type simply serves as the parent node of four related child objects. The children of
the Calibration_Node are used to provide information which converts pixel coordinates
to map coordinates and vice versa. There is no dictionary definition for this object type.
A node of this type will be a child of the root node and will be named “Calibration.” The
“Calibration” node will have the four children described below.

Node ObjectType Description


Projection          Eprj_ProParameters   The projection associated with the output coordinate system.
Map_Info            Eprj_MapInfo         The nominal map information associated with the transformation.
InversePolynomial   Efga_Polynomial      The nth order polynomial coefficients used to convert from map coordinates to pixel coordinates.
ForwardPolynomial   Efga_Polynomial      The nth order polynomial used to convert from pixel coordinates to map coordinates.


Vector Layers The vector data structure in ERDAS IMAGINE is based on the ARC/INFO data model
(developed by ESRI, Inc.).

See "CHAPTER 2: Vector Layers" for more information on vector layers. Refer to the
ARC/INFO users manuals for detailed information on the vector data structure.




APPENDIX C
Map Projections

Introduction This appendix is an alphabetical listing of the map projections supported in ERDAS
IMAGINE. It is divided into two sections:

• USGS projections, beginning on page 518

• External projections, beginning on page 585

The external projections were implemented outside of ERDAS IMAGINE so that users
could add to these using the Developers’ Toolkit. The projections in each section are
presented in alphabetical order.

The information in this appendix is adapted from:

• Map Projections for Use with the Geographic Information System (Lee and Walsh 1984)

• Map Projections—A Working Manual (Snyder 1987)

Other sources are noted in the text.

For general information about map projection types, refer to "CHAPTER 11: Cartography".

Rectify an image to a particular map projection using the ERDAS IMAGINE Rectification
tools. View, add, or change projection information using the Image Information option.

NOTE: You cannot rectify to a new map projection using the Image Information option. You
should change map projection information using Image Information only if you know the
information to be incorrect. Use the rectification tools to actually georeference an image to a new
map projection system.



USGS Projections The following USGS map projections are supported in ERDAS IMAGINE and are
described in this section:

Albers Conical Equal Area

Azimuthal Equidistant

Equidistant Conic

Equirectangular

General Vertical Near-side Perspective

Geographic (Lat/Lon)

Gnomonic

Lambert Azimuthal Equal Area

Lambert Conformal Conic

Mercator

Miller Cylindrical

Modified Transverse Mercator

Oblique Mercator (Hotine)

Orthographic

Polar Stereographic

Polyconic

Sinusoidal

Space Oblique Mercator

State Plane

Stereographic

Transverse Mercator

UTM

Van der Grinten I


Albers Conical Equal Area

Summary

Construction: Cone
Property: Equal area
Meridians: Meridians are straight lines converging on the polar axis, but not at the pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole.
Graticule spacing: Meridian spacing is equal on the standard parallels and decreases toward the poles. Parallel spacing decreases away from the standard parallels and increases between them. Meridians and parallels intersect each other at right angles. The graticule spacing preserves the property of equivalence of area. The graticule is symmetrical.
Linear scale: Linear scale is true on the standard parallels. Maximum scale error is 1.25% on a map of the United States (48 states) with standard parallels of 29.5˚N and 45.5˚N.
Uses: Used for thematic maps. Used for large countries with an east-west orientation. Maps based on the Albers Conical Equal Area for Alaska use standard parallels 55˚N and 65˚N; for Hawaii, the standard parallels are 8˚N and 18˚N. The National Atlas of the United States, United States Base Map (48 states), and the Geologic map of the United States are based on the standard parallels of 29.5˚N and 45.5˚N.

The Albers Conical Equal Area projection is mathematically based on a cone that is
conceptually secant on two parallels. There is no areal deformation. The North or South
Pole is represented by an arc. It retains its properties at various scales, and individual
sheets can be joined along their edges.

This projection produces very accurate area and distance measurements in the middle
latitudes (Figure 189). Thus, Albers Conical Equal Area is well-suited to countries or
continents where north-south depth is about 3/5 the breadth of east-west. When this
projection is used for the continental U.S., the two standard parallels are 29.5˚ and 45.5˚
North.

This projection possesses the property of equal area, and the standard parallels are
correct in scale and in every direction. Thus, there is no angular distortion (i.e.,
meridians intersect parallels at right angles) and conformality exists along the standard
parallels. Like other conics, Albers Conical Equal Area has concentric arcs for parallels
and equally spaced radii for meridians. Parallels are not equally spaced, but are farthest
apart between the standard parallels and closer together on the north and south edges.

Albers Conical Equal Area is the projection exclusively used by the USGS for sectional
maps of all 50 states of the U.S. in the National Atlas of 1970.
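For reference, the spherical (not ellipsoidal) forward equations for this projection, as given by Snyder (1987), are sketched below. ERDAS IMAGINE performs the projection with the selected spheroid, so this spherical form is only an approximation for illustration.

#include <math.h>

/* Spherical forward Albers Conical Equal Area (after Snyder 1987).
   All angles are in radians; R is the sphere radius in meters.
   phi1, phi2 = standard parallels; phi0, lam0 = origin; phi, lam = point. */
void albers_forward(double R, double phi1, double phi2,
                    double phi0, double lam0,
                    double phi, double lam, double *x, double *y)
{
    double n     = (sin(phi1) + sin(phi2)) / 2.0;
    double C     = cos(phi1) * cos(phi1) + 2.0 * n * sin(phi1);
    double rho   = R * sqrt(C - 2.0 * n * sin(phi))  / n;
    double rho0  = R * sqrt(C - 2.0 * n * sin(phi0)) / n;
    double theta = n * (lam - lam0);

    *x = rho * sin(theta);           /* add false easting as required  */
    *y = rho0 - rho * cos(theta);    /* add false northing as required */
}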



Prompts
The following prompts display in the Projection Chooser once Albers Conical Equal
Area is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Latitude of 1st standard parallel

Latitude of 2nd standard parallel

Enter two values for the desired control lines of the projection, i.e., the standard
parallels. Note that the first standard parallel is the southernmost.

Then, define the origin of the map projection in both spherical and rectangular coordi-
nates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of
projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing, corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough to prevent negative
coordinates from occurring within the region of the map projection. That is, the origin
of the rectangular coordinate system should fall outside of the map projection to the
south and west.


Figure 189: Albers Conical Equal Area Projection

In Figure 189, the standard parallels are 20˚N and 60˚N. Note the change in spacing of
the parallels.



Azimuthal Equidistant

Summary

Construction: Plane
Property: Equidistant
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are complex curves concave toward the point of tangency. Equatorial aspect: the meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.
Parallels: Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are complex curves. Equatorial aspect: the parallels are complex curves concave toward the nearest pole; the equator is straight.
Graticule spacing: Polar aspect: the meridian spacing is equal and increases away from the point of tangency. Parallel spacing is equidistant. Angular and area deformation increase away from the point of tangency.
Linear scale: Polar aspect: linear scale is true from the point of tangency along the meridians only. Oblique and equatorial aspects: linear scale is true from the point of tangency. In all aspects, the projection shows distances true to scale when measured between the point of tangency and any other point on the map.
Uses: The Azimuthal Equidistant projection is used for radio and seismic work, as every place in the world will be shown at its true distance and direction from the point of tangency. The U.S. Geological Survey uses the oblique aspect in the National Atlas and for large-scale mapping of Micronesia. The polar aspect is used as the emblem of the United Nations.


The Azimuthal Equidistant projection is mathematically based on a plane tangent to the earth. The entire earth can be represented, but generally less than one hemisphere is portrayed; the other hemisphere can be shown, but it is greatly distorted. The projection has true direction and true distance scaling from the point of tangency.

This projection is used mostly for polar projections because latitude rings divide
meridians at equal intervals with a polar aspect (Figure 190). Linear scale distortion is
moderate and increases toward the periphery. Meridians are equally spaced, and all
distances and directions are shown accurately from the central point.

This projection can also be used to center on any point on the earth—a city, for
example—and distance measurements will be true from that central point. Distances
are not correct or true along parallels, and the projection is neither equal area nor
conformal. Also, straight lines radiating from the center of this projection represent
great circles.
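The true-distance property can be seen directly in the spherical forward equations (Snyder 1987): the projected point lies at a distance R*c from the center, where c is the angular great-circle distance. A sketch of the spherical form, for illustration only:

#include <math.h>

/* Spherical forward Azimuthal Equidistant (after Snyder 1987).
   (phi1, lam0) is the center of projection; angles in radians, R in meters.
   The projected point lies at distance R*c from the center, the true
   great-circle distance (the antipodal point is not handled). */
void aeqd_forward(double R, double phi1, double lam0,
                  double phi, double lam, double *x, double *y)
{
    double cosc = sin(phi1) * sin(phi) +
                  cos(phi1) * cos(phi) * cos(lam - lam0);
    double c  = acos(cosc);
    double kp = (c == 0.0) ? 1.0 : c / sin(c);   /* k' tends to 1 at center */

    *x = R * kp * cos(phi) * sin(lam - lam0);
    *y = R * kp * (cos(phi1) * sin(phi) -
                   sin(phi1) * cos(phi) * cos(lam - lam0));
}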

Prompts
The following prompts display in the Projection Chooser if Azimuthal Equidistant is
selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.



Figure 190: Polar Aspect of the Azimuthal Equidistant Projection

This projection is commonly used in atlases for polar maps.


Equidistant Conic

Summary

Construction: Cone
Property: Equidistant
Meridians: Meridians are straight lines converging on a polar axis but not at the pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole.
Graticule spacing: Meridian spacing is true on the standard parallels and decreases toward the pole. Parallels are placed at true scale along the meridians. Meridians and parallels intersect each other at right angles. The graticule is symmetrical.
Linear scale: Linear scale is true along all meridians and along the standard parallel or parallels.
Uses: The Equidistant Conic projection is used in atlases for portraying mid-latitude areas. It is good for representing regions with a few degrees of latitude lying on one side of the equator. It was used in the former Soviet Union for mapping the entire country (ESRI 1992).

With Equidistant Conic (Simple Conic) projections, correct distance is achieved along
the line(s) of contact with the cone, and parallels are equidistantly spaced. It can be used
with either one (A) or two (B) standard parallels. This projection is neither conformal
nor equal area, but the north-south scale along meridians is correct. The North or South
Pole is represented by an arc. Because scale distortion increases with increasing
distance from the line(s) of contact, the Equidistant Conic is used mostly for mapping
regions predominantly east-west in extent. The USGS uses the Equidistant Conic in an
approximate form for a map of Alaska.

Prompts
The following prompts display in the Projection Chooser if Equidistant Conic is
selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."



Define the origin of the projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for the longitude of the desired central meridian and the latitude of the
origin of projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough so that no negative
coordinates will occur within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south
and west.

One or two standard parallels?

Latitude of standard parallel

Enter one or two values for the desired control line(s) of the projection, i.e., the standard
parallel(s). Note that if two standard parallels are used, the first is the southernmost.


Equirectangular (Plate Carrée)

Summary

Construction: Cylinder
Property: Compromise
Meridians: All meridians are straight lines.
Parallels: All parallels are straight lines.
Graticule spacing: Equally spaced parallel meridians and latitude lines cross at right angles.
Linear scale: The scale is correct along all meridians and along the standard parallels (ESRI 1992).
Uses: Best used for city maps, or other small areas with map scales small enough to reduce the obvious distortion. Used for simple portrayals of the world or regions with minimal geographic data, such as index maps (ESRI 1992).

Also called Simple Cylindrical, Equirectangular is composed of equally spaced, parallel meridians and latitude lines that cross at right angles on a rectangular map. Each rectangle formed by the grid is equal in area, shape, and size. Equirectangular is neither conformal nor equal area, but it does contain less distortion than the Mercator in polar regions. Scale is true on all meridians and on the central parallel. Directions due north, south, east, and west are true, but all other directions are distorted. The equator is the standard parallel, true to scale and free of distortion. However, this projection may be centered anywhere.

This projection is valuable for its ease in computer plotting. It is useful for mapping
small areas, such as city maps, because of its simplicity. The USGS uses Equirectangular
for index maps of the conterminous U.S. with insets of Alaska, Hawaii, and various
islands. However, neither scale nor projection is marked to avoid implying that the
maps are suitable for normal geographic information.

Prompts
The following prompts display in the Projection Chooser if Equirectangular is selected.
Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.



The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian

Latitude of true scale

Enter a value for longitude of the desired central meridian to center the projection and
the latitude of true scale.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.


General Vertical Near-side Perspective

Summary

Construction: Plane
Property: Compromise
Meridians: The central meridian is a straight line in all aspects. In the polar aspect all meridians are straight. In the equatorial aspect the equator is straight (ESRI 1992).
Parallels: Parallels on vertical polar aspects are concentric circles. Nearly all other parallels are elliptical arcs, except that certain angles of tilt may cause some parallels to be shown as parabolas or hyperbolas.
Graticule spacing: Polar aspect: parallels are concentric circles that are not evenly spaced. Meridians are evenly spaced and spacing increases from the center of the projection. Equatorial and oblique aspects: parallels are elliptical arcs that are not evenly spaced. Meridians are elliptical arcs that are not evenly spaced, except for the central meridian, which is a straight line.
Linear scale: Radial scale decreases from true scale at the center to zero on the projection edge. The scale perpendicular to the radii decreases, but not as rapidly (ESRI 1992).
Uses: Often used to show the earth or other planets and satellites as seen from space. Used as an aesthetic presentation, rather than for technical applications (ESRI 1992).

General Vertical Near-side Perspective presents a picture of the earth as if a photograph were taken at some distance less than infinity. The map user simply identifies area of coverage, distance of view, and angle of view. It is a variation of the General Perspective projection in which the “camera” precisely faces the center of the earth.

Central meridian and a particular parallel (if shown) are straight lines. Other meridians
and parallels are usually arcs of circles or ellipses, but some may be parabolas or hyper-
bolas. Like all perspective projections, General Vertical Near-side Perspective cannot
illustrate the entire globe on one map—it can represent only part of one hemisphere.

Prompts
The following prompts display in the Projection Chooser if General Vertical Near-side
Perspective is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.



The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Height of perspective point above sphere

Enter a value for desired height of the perspective point above the sphere in the same
units as the radius.

Then, define the center of the map projection in both spherical and rectangular coordi-
nates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.


Geographic (Lat/Lon) The Geographic is a spherical coordinate system composed of parallels of latitude (Lat)
and meridians of longitude (Lon) (Figure 191). Both divide the circumference of the
earth into 360 degrees, which are further subdivided into minutes and seconds (60 sec
= 1 minute, 60 min = 1 degree).
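Because a degree is subdivided into 60 minutes and a minute into 60 seconds, a coordinate quoted in degrees, minutes, and seconds converts to decimal degrees as shown in this small sketch (the sample value is arbitrary; the sign convention, negative for west longitudes and south latitudes, is noted at the end of this section).

#include <stdio.h>

/* Convert degrees, minutes, seconds to decimal degrees
   (60 sec = 1 minute, 60 min = 1 degree).  'sign' is +1 for N/E, -1 for S/W. */
double dms_to_decimal(int sign, int deg, int min, double sec)
{
    return sign * (deg + min / 60.0 + sec / 3600.0);
}

int main(void)
{
    /* 33 degrees 57 minutes 30 seconds North (an arbitrary example). */
    printf("%.6f\n", dms_to_decimal(+1, 33, 57, 30.0));   /* prints 33.958333 */
    return 0;
}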

Because the earth spins on an axis between the North and South Poles, concentric, parallel circles can be constructed, with a reference line exactly at the north-south center, termed the equator. The series of circles north of the equator are termed north latitudes and run from 0˚ latitude (the equator) to 90˚ North latitude (the North Pole), and similarly southward. Position in an east-west direction is determined from lines of longitude. These lines are not parallel, and they converge at the poles. However, they intersect lines of latitude perpendicularly.

Unlike the equator in the latitude system, there is no natural zero meridian. In 1884, it
was finally agreed that the meridian of the Royal Observatory in Greenwich, England,
would be the prime meridian. Thus, the origin of the geographic coordinate system is
the intersection of the equator and the prime meridian. Note that the 180˚ meridian is
the international date line.

If the user chooses Geographic from the Projection Chooser, the following prompts will
display:

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Note that in responding to prompts for other projections, values for longitude are negative west
of Greenwich and values for latitude are negative south of the equator.



Figure 191: Geographic

Figure 191 shows the graticule of meridians and parallels on the global surface.


Gnomonic

Summary

Construction: Plane
Property: Compromise
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are straight lines.
Parallels: Polar aspect: the parallels are concentric circles. Oblique and equatorial aspects: parallels are ellipses, parabolas, or hyperbolas concave toward the poles (except for the equator, which is straight).
Graticule spacing: Polar aspect: the meridian spacing is equal and increases away from the pole. The parallel spacing increases very rapidly from the pole. Oblique and equatorial aspects: the graticule spacing increases very rapidly away from the center of the projection.
Linear scale: Linear scale and angular and areal deformation are extreme, rapidly increasing away from the center of the projection.
Uses: The Gnomonic projection is used in seismic work because seismic waves travel in approximately great circles. It is used with the Mercator projection for navigation.

Gnomonic is a perspective projection that projects onto a tangent plane from a position
in the center of the earth. Because of the close perspective, this projection is limited to
less than a hemisphere. However, it is the only projection which shows all great circles
as straight lines. With a polar aspect, the latitude intervals increase rapidly from the
center outwards.

With an equatorial or oblique aspect, the equator is straight. Meridians are straight and
parallel, while intervals between parallels increase rapidly from the center and parallels
are convex to the equator.

Because great circles are straight, this projection is useful for air and sea navigation.
Rhumb lines are curved, which is the opposite of the Mercator projection.



Prompts
The following prompts display in the Projection Chooser if Gnomonic is selected.
Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.


Lambert Azimuthal Equal Area

Summary

Construction: Plane
Property: Equal Area
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.
Parallels: Polar aspect: parallels are concentric circles. Oblique and equatorial aspects: the parallels are complex curves. The equator on the equatorial aspect is a straight line.
Graticule spacing: Polar aspect: the meridian spacing is equal and increases, and the parallel spacing is unequal and decreases toward the periphery of the projection. The graticule spacing, in all aspects, retains the property of equivalence of area.
Linear scale: Linear scale is better than most azimuthals, but not as good as the equidistant. Angular deformation increases toward the periphery of the projection. Scale decreases radially toward the periphery of the map projection. Scale increases perpendicular to the radii toward the periphery.
Uses: The polar aspect is used by the U.S. Geological Survey in the National Atlas. The polar, oblique, and equatorial aspects are used by the U.S. Geological Survey for the Circum-Pacific Map.

The Lambert Azimuthal Equal Area projection is mathematically based on a plane tangent to the earth. It is the only projection that can accurately represent both area and true direction from the center of the projection (Figure 192). This central point can be located anywhere. Concentric circles are closer together toward the edge of the map, and the scale distorts accordingly. This projection is well-suited to square or round land masses and generally represents only one hemisphere.

In the polar aspect, latitude rings decrease their intervals from the center outwards. In
the equatorial aspect, parallels are curves flattened in the middle. Meridians are also
curved, except for the central meridian, and spacing decreases toward the edges.



Prompts
The following prompts display in the Projection Chooser if Lambert Azimuthal Equal
Area is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.

In Figure 192, three views of the Lambert Azimuthal Equal Area projection are shown:
A) Polar aspect, showing one hemisphere; B) Equatorial aspect, frequently used in old
atlases for maps of the eastern and western hemispheres; C) Oblique aspect, centered
on 40˚N.


Figure 192: Lambert Azimuthal Equal Area Projection



Lambert Conformal Conic

Summary

Construction: Cone
Property: Conformal
Meridians: Meridians are straight lines converging at a pole.
Parallels: Parallels are arcs of concentric circles concave toward a pole and centered at a pole.
Graticule spacing: Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule spacing retains the property of conformality. The graticule is symmetrical.
Linear scale: Linear scale is true on standard parallels. Maximum scale error is 2.5% on a map of the United States (48 states) with standard parallels at 33˚N and 45˚N.
Uses: Used for large countries in the mid-latitudes having an east-west orientation. The United States (50 states) Base Map uses standard parallels at 37˚N and 65˚N. Some of the National Topographic Map Series 7.5-minute and 15-minute quadrangles and the State Base Map series are constructed on this projection. The latter series uses standard parallels of 33˚N and 45˚N. Aeronautical charts for Alaska use standard parallels at 55˚N and 65˚N. The National Atlas of Canada uses standard parallels at 49˚N and 77˚N.

This projection is very similar to Albers Conical Equal Area, described previously. It is
mathematically based on a cone that is tangent at one parallel or, more often, that is
conceptually secant on two parallels (Figure 193). Areal distortion is minimal, but
increases away from the standard parallels. North or South Pole is represented by a
point—the other pole cannot be shown. Great circle lines are approximately straight. It
retains its properties at various scales, and sheets can be joined along their edges. This
projection, like Albers, is most valuable in middle latitudes, especially in a country
sprawling east to west like the U.S. The standard parallels for the U.S. are 33˚ and 45˚N.

The major property of this projection is its conformality. At all coordinates, meridians and parallels cross at right angles. The correct angles produce correct shapes. Also, great circles are approximately straight. The conformal property of Lambert Conformal Conic and the straightness of great circles make it valuable for landmark flying.

Lambert Conformal Conic is the State Plane coordinate system projection for states of
predominant east-west expanse. Since 1962, Lambert Conformal Conic has been used
for the International Map of the World between 84˚N and 80˚S.

In comparison with Albers Conical Equal Area, Lambert Conformal Conic possesses
true shape of small areas, whereas Albers possesses equal area. Unlike Albers, parallels
of Lambert Conformal Conic are spaced at increasing intervals the farther north or
south they are from the standard parallels.


Prompts
The following prompts display in the Projection Chooser if Lambert Conformal Conic
is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Latitude of 1st standard parallel

Latitude of 2nd standard parallel

Enter two values for the desired control lines of the projection, i.e., the standard
parallels. Note that the first standard parallel is the southernmost.

Then, define the origin of the map projection in both spherical and rectangular coordi-
nates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of
projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough to ensure that there will
be no negative coordinates within the region of the map projection. That is, the origin
of the rectangular coordinate system should fall outside of the map projection to the
south and west.



Figure 193: Lambert Conformal Conic Projection

In Figure 193, the standard parallels are 20˚N and 60˚N. Note the change in spacing of
the parallels.


Mercator

Summary

Construction: Cylinder
Property: Conformal
Meridians: Meridians are straight and parallel.
Parallels: Parallels are straight and parallel.
Graticule spacing: Meridian spacing is equal and the parallel spacing increases away from the equator. The graticule spacing retains the property of conformality. The graticule is symmetrical. Meridians intersect parallels at right angles.
Linear scale: Linear scale is true along the equator only (line of tangency), or along two parallels equidistant from the equator (the secant form). Scale can be determined by measuring one degree of latitude, which equals 60 nautical miles, 69 statute miles, or 111 kilometers.
Uses: An excellent projection for equatorial regions. Otherwise the Mercator is a special-purpose map projection best suited for navigation. Secant constructions are used for large-scale coastal charts. The use of the Mercator map projection as the base for nautical charts is universal. Examples are the charts published by the National Ocean Survey, U.S. Dept. of Commerce.

This famous cylindrical projection was originally designed by Flemish map maker
Gerhardus Mercator in 1569 to aid navigation (Figure 194). Meridians and parallels are
straight lines and cross at 90˚ angles. Angular relationships are preserved. However, to
preserve conformality, parallels are placed increasingly farther apart with increasing
distance from the equator. Due to extreme scale distortion in high latitudes, the
projection is rarely extended beyond 80˚N or S unless the latitude of true scale is other
than the equator. Distance scales are usually furnished for several latitudes.

This projection can be thought of as being mathematically based on a cylinder tangent at the equator. Any straight line is a constant-azimuth (rhumb) line. Areal enlargement is extreme away from the equator; poles cannot be represented. Shape is true only within any small area. It is a reasonably accurate projection within a 15˚ band along the line of tangency.

Rhumb lines, which show constant direction, are straight. For this reason a Mercator
map was very valuable to sea navigators. However, rhumb lines are not the shortest
path; great circles are the shortest path. Most great circles appear as long arcs when
drawn on a Mercator map.
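The increasing spacing of the parallels follows directly from the Mercator equations. The sketch below gives the forward formulas for a sphere (Snyder 1987); the ellipsoidal form used for actual mapping differs slightly, so this is an approximation for illustration.

#include <math.h>

/* Spherical forward Mercator (after Snyder 1987).  Angles in radians,
   R in meters, lam0 is the central meridian.  y grows without bound as
   phi approaches +/-90 degrees, which is why the poles cannot be shown. */
void mercator_forward(double R, double lam0,
                      double phi, double lam, double *x, double *y)
{
    const double PI = 3.14159265358979323846;
    *x = R * (lam - lam0);
    *y = R * log(tan(PI / 4.0 + phi / 2.0));
}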



Prompts
The following prompts display in the Projection Chooser if Mercator is selected.
Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of true scale

Enter values for longitude of the desired central meridian and latitude at which true scale is desired. Selection of a parallel other than the equator can be useful for making maps in extreme north or south latitudes.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of true scale. These values must be in meters. It is very
often convenient to make them large enough so that no negative coordinates will occur
within the region of the map projection. That is, the origin of the rectangular coordinate
system should fall outside of the map projection to the south and west.


Figure 194: Mercator Projection

In Figure 194, all angles are shown correctly; therefore, small shapes are true (i.e., the map is conformal). Rhumb lines are straight, which makes the projection useful for navigation.



Miller Cylindrical

Summary

Construction: Cylinder
Property: Compromise
Meridians: All meridians are straight lines.
Parallels: All parallels are straight lines.
Graticule spacing: Meridians are parallel and equally spaced, the lines of latitude are parallel, and the distance between them increases toward the poles. Both poles are represented as straight lines. Meridians and parallels intersect at right angles (ESRI 1992).
Linear scale: While the standard parallels, or lines true to scale and free of distortion, are at latitudes 45˚N and S, only the equator is standard.
Uses: Useful for world maps.

Miller Cylindrical is a modification of the Mercator projection (Figure 195). It is similar to the Mercator from the equator to 45˚, but latitude line intervals are modified so that the distance between them increases less rapidly. Thus, beyond 45˚, Miller Cylindrical lessens the extreme exaggeration of the Mercator. Miller Cylindrical also includes the poles as straight lines whereas the Mercator does not.

Meridians and parallels are straight lines intersecting at right angles. Meridians are
equidistant, while parallels are spaced farther apart the farther they are from the
equator. Miller Cylindrical is not equal-area, equidistant, or conformal. Miller Cylin-
drical is used for world maps and in several atlases.

Prompts
The following prompts display in the Projection Chooser if Miller Cylindrical is
selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.


False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.

Figure 195: Miller Cylindrical Projection

This projection resembles the Mercator, but has less distortion in polar regions. Miller
Cylindrical is neither conformal nor equal area.



Modified Transverse Mercator

Summary

Construction: Cone
Property: Equidistant
Meridians: On pre-1973 editions of the Alaska Map E, meridians are curved concave toward the center of the projection. On post-1973 editions, the meridians are straight.
Parallels: Parallels are arcs concave to the pole.
Graticule spacing: Meridian spacing is approximately equal and decreases toward the pole. Parallels are approximately equally spaced. The graticule is symmetrical on post-1973 editions of the Alaska Map E.
Linear scale: Linear scale is more nearly correct along the meridians than along the parallels.
Uses: The U.S. Geological Survey’s Alaska Map E at the scale of 1:2,500,000. The Bathymetric Maps Eastern Continental Margin U.S.A., published by the American Association of Petroleum Geologists, uses the straight meridians on its Modified Transverse Mercator and is more equivalent to the Equidistant Conic map projection.

In 1972, the USGS devised a projection specifically for the revision of a 1954 map of
Alaska which, like its predecessors, was based on the Polyconic projection. This
projection was drawn to a scale of 1:2,000,000 and published at 1:2,500,000 (map “E”)
and 1:1,584,000 (map “B”). Graphically prepared by adapting coordinates for the
Universal Transverse Mercator projection, it is identified as the Modified Transverse
Mercator projection. It resembles the Transverse Mercator in a very limited manner and
cannot be considered a cylindrical projection. It resembles the Equidistant Conic
projection for the ellipsoid in actual construction. The projection was also used in 1974
for a base map of the Aleutian-Bering Sea Region published at 1:2,500,000 scale.

It is found to be most closely equivalent to the Equidistant Conic for the Clarke 1866
ellipsoid, with the scale along the meridians reduced to 0.9992 of true scale and the
standard parallels at latitude 66.09˚ and 53.50˚N.


Prompts
The following prompts display in the Projection Chooser if Modified Transverse
Mercator is selected. Respond to the prompts as described.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.



Oblique Mercator (Hotine)

Summary

Construction: Cylinder
Property: Conformal
Meridians: Meridians are complex curves concave toward the line of tangency, except each 180th meridian is straight.
Parallels: Parallels are complex curves concave toward the nearest pole.
Graticule spacing: Graticule spacing increases away from the line of tangency and retains the property of conformality.
Linear scale: Linear scale is true along the line of tangency, or along two lines of equidistance from and parallel to the line of tangency.
Uses: Useful for plotting linear configurations that are situated along a line oblique to the earth’s equator. Examples are: NASA Surveyor Satellite tracking charts, ERTS flight indexes, strip charts for navigation, and the National Geographic Society’s maps “West Indies,” “Countries of the Caribbean,” “Hawaii,” and “New Zealand.”

Oblique Mercator is a cylindrical, conformal projection that intersects the global surface along a great circle. It is equivalent to a Mercator projection that has been altered by rotating the cylinder so that the central line of the projection is a great circle path instead of the equator. Shape is true only within any small area. Areal enlargement increases away from the line of tangency. The projection is reasonably accurate within a 15˚ band along the line of tangency.

The USGS uses the Hotine version of Oblique Mercator. The Hotine version is based on
a study of conformal projections published by British geodesist Martin Hotine in 1946-
47. Prior to the implementation of the Space Oblique Mercator, the Hotine version was
used for mapping Landsat satellite imagery.

Prompts
The following prompts display in the Projection Chooser if Oblique Mercator (Hotine)
is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."


Scale factor at center of projection

Designate the desired scale factor along the central line of the projection. This parameter
may be used to modify scale distortion away from this central line. A value of 1.0
indicates true scale only along the central line. A value of less than, but close to, one is
often used to lessen scale distortion away from the central line.

Latitude of point of origin of projection

False easting

False northing

The center of the projection is defined by rectangular coordinates of false easting and
false northing. The origin of rectangular coordinates on this projection occurs at the
nearest intersection of the central line with the earth’s equator. To shift the origin to the
intersection of the latitude of the origin entered above and the central line of the
projection, compute coordinates of the latter point with zero false eastings and
northings, reverse the signs of the coordinates obtained, and use these for false eastings
and northings. These values must be in meters. It is very often convenient to add
additional values so that no negative coordinates will occur within the region of the
map projection. That is, the origin of the rectangular coordinate system should fall
outside of the map projection to the south and west.

Do you want to enter either:

A) Azimuth East of North for central line and the longi-


tude of the point of origin

B) The latitude and longitude of the first and second


points defining the central line

These formats differ slightly in definition of the central line of the projection.

Format A
For format A the additional prompts are:

Azimuth east of north for central line

Longitude of point of origin

Format A defines the central line of the projection by the angle east of north to the
desired great circle path and by the latitude and longitude of the point along the great
circle path from which the angle is measured. Appropriate values should be entered.



Format B
For format B the additional prompts are:

Longitude of 1st point defining central line

Latitude of 1st point defining central line

Longitude of 2nd point defining central line

Latitude of 2nd point defining central line

Format B defines the central line of the projection by the latitude of a point on the central
line which has the desired scale factor entered previously and by the longitude and
latitude of two points along the desired great circle path. Appropriate values should be
entered.


Orthographic

Summary

Construction: Plane
Property: Compromise
Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are ellipses, concave toward the center of the projection. Equatorial aspect: the meridians are ellipses concave toward the straight central meridian.
Parallels: Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are ellipses concave toward the poles. Equatorial aspect: the parallels are straight and parallel.
Graticule spacing: Polar aspect: meridian spacing is equal and increases, and the parallel spacing decreases from the point of tangency. Oblique and equatorial aspects: the graticule spacing decreases away from the center of the projection.
Linear scale: Scale is true on the parallels in the polar aspect and on all circles centered at the pole of the projection in all aspects. Scale decreases along lines radiating from the center of the projection.
Uses: The U.S. Geological Survey uses the Orthographic map projection in the National Atlas.

The Orthographic projection is geometrically based on a plane tangent to the earth, and
the point of projection is at infinity (Figure 196). The earth appears as it would from
outer space. Light rays that cast the projection are parallel and intersect the tangent
plane at right angles. This projection is a truly graphic representation of the earth and
is a projection in which distortion becomes a visual aid. It is the most familiar of the
azimuthal map projections. Directions from the center of the projection are true.



This projection is limited to one hemisphere and shrinks those areas toward the
periphery. In the polar aspect, latitude ring intervals decrease from the center outwards
at a much greater rate than with Lambert Azimuthal. In the equatorial aspect, the
central meridian and parallels are straight, with spaces closing up toward the outer
edge.

The Orthographic projection seldom appears in atlases. Its utility is more pictorial than
technical. Orthographic has been used as a basis for artistic maps by Rand McNally and
the USGS.
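For illustration, the spherical forward equations for the Orthographic projection (Snyder 1987) are sketched below; points on the far hemisphere are simply not plotted, which is why the projection can show at most one hemisphere.

#include <math.h>

/* Spherical forward Orthographic (after Snyder 1987).
   (phi1, lam0) is the center; angles in radians, R in meters.
   Returns 0 if the point lies on the far (invisible) hemisphere. */
int ortho_forward(double R, double phi1, double lam0,
                  double phi, double lam, double *x, double *y)
{
    double cosc = sin(phi1) * sin(phi) +
                  cos(phi1) * cos(phi) * cos(lam - lam0);
    if (cosc < 0.0)
        return 0;

    *x = R * cos(phi) * sin(lam - lam0);
    *y = R * (cos(phi1) * sin(phi) -
              sin(phi1) * cos(phi) * cos(lam - lam0));
    return 1;
}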

Prompts
The following prompts display in the Projection Chooser if Orthographic is selected.
Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.

Three views of the Orthographic projection are shown in Figure 196: A) Polar aspect; B)
Equatorial aspect; C) Oblique aspect, centered at 40˚N and showing the classic globe-
like view.


Figure 196: Orthographic Projection



Polar Stereographic

Summary

Construction: Plane
Property: Conformal
Meridians: Meridians are straight.
Parallels: Parallels are concentric circles.
Graticule spacing: The distance between parallels increases with distance from the central pole.
Linear scale: The scale increases with distance from the center. If a standard parallel is chosen rather than one of the poles, this latitude represents the true scale, and the scale nearer the pole is reduced.
Uses: Polar regions (conformal). In the UPS system, the scale factor at the pole is made 0.994, thus making the standard parallel (latitude of true scale) approximately 81˚07’N or S.

The Polar Stereographic may be used to accommodate all regions not included in the UTM coordinate system, that is, regions north of 84˚N and south of 80˚S. This form is called Universal Polar Stereographic (UPS). The projection is equivalent to the polar aspect of the Stereographic projection on a spheroid. The central point is either the North Pole or the South Pole. Of all the polar aspect planar projections, this is the only one that is conformal.

The point of tangency is a single point—either the North Pole or the South Pole. If the
plane is secant instead of tangent, the point of global contact is a line of latitude (ESRI
1992).

Polar Stereographic is an azimuthal projection obtained by projecting from the opposite pole (Figure 197). All of either the northern or southern hemisphere can be shown, but not both. This projection produces a circular map with one of the poles at the center.

Polar Stereographic stretches areas toward the periphery, and scale increases for areas
farther from the central pole. Meridians are straight and radiating; parallels are
concentric circles. Even though scale and area are not constant with Polar Stereo-
graphic, this projection, like all stereographic projections, possesses the property of
conformality.

The Astrogeology Center of the Geological Survey at Flagstaff, Arizona, has been using
the Polar Stereographic projection for the mapping of polar areas of every planet and
satellite for which there is sufficient information.


Prompts
The following prompts display in the Projection Chooser if Polar Stereographic is
selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the origin of the map projection in both spherical and rectangular coordinates.
Ellipsoid projections of the polar regions normally use the International 1909 spheroid
(ESRI 1992).

Longitude directed straight down below pole of map

Enter a value for longitude directed straight down below the pole for a north polar
aspect, or straight up from the pole for a south polar aspect. This is equivalent to
centering the map with a desired meridian.

Latitude of true scale

Enter a value for latitude at which true scale is desired. For secant projections, specify
the latitude of true scale as any line of latitude other than 90˚N or S. For tangential
projections, specify the latitude of true scale as the North Pole, 90 00 00, or the South
Pole, -90 00 00 (ESRI 1992).

False easting

False northing

Enter values of false easting and false northing corresponding to the pole. These values
must be in meters. It is very often convenient to make them large enough to prevent
negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south
and west.




Figure 197: Polar Stereographic Projection and its Geometric Construction

This projection is conformal and is the most scientific projection for polar regions.


Polyconic

Summary

Construction: Cone

Property: Compromise

Meridians: The central meridian is a straight line, but all other meridians are complex curves.

Parallels: Parallels (except the equator) are nonconcentric circular arcs. The equator is a straight line.

Graticule spacing: All parallels are arcs of circles, but not concentric. All meridians, excepting the central meridian, are concave toward the central meridian. Parallels cross the central meridian at equal intervals but get farther apart at the east and west peripheries.

Linear scale: The scale along each parallel and along the central meridian of the projection is accurate. It increases along the meridians as the distance from the central meridian increases (ESRI 1992).

Uses: Used for 7.5-minute and 15-minute topographic USGS quad sheets, from 1886 to about 1957 (ESRI 1992). Used almost exclusively in slightly modified form for large-scale mapping in the United States until the 1950s.

Polyconic was developed in 1820 by Ferdinand Hassler specifically for mapping the
eastern coast of the U.S. (Figure 198). Polyconic projections are made up of an infinite
number of conic projections tangent to an infinite number of parallels. These conic
projections are placed in relation to a central meridian. Polyconic projections
compromise properties such as equal area and conformality, although the central
meridian is held true to scale.

This projection is used mostly for north-south oriented maps. Distortion increases
greatly the farther east and west an area is from the central meridian.
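For reference, the forward equations of the spherical form of the Polyconic can be sketched as follows, after the standard published formulas (e.g., Snyder); the ellipsoidal equations used for USGS quadrangle mapping are more involved. The function name and the sphere radius are placeholders only.

    import math

    def polyconic_sphere(lon_deg, lat_deg, lon0_deg=0.0, lat0_deg=0.0, R=6370997.0):
        """Forward spherical Polyconic (sketch). Returns x, y in meters."""
        lam = math.radians(lon_deg - lon0_deg)
        phi = math.radians(lat_deg)
        phi0 = math.radians(lat0_deg)
        if abs(phi) < 1e-12:
            # Along the equator the projection reduces to simple scaling.
            return R * lam, -R * phi0
        E = lam * math.sin(phi)
        cot = 1.0 / math.tan(phi)
        x = R * cot * math.sin(E)
        y = R * (phi - phi0 + cot * (1.0 - math.cos(E)))
        return x, y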

Prompts
The following prompts display in the Projection Chooser if Polyconic is selected.
Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."



Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of
projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the
central meridian and the latitude of the origin of projection. These values must be in
meters. It is very often convenient to make them large enough so that no negative
coordinates will occur within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south
and west.

Figure 198: Polyconic Projection of North America

In Figure 198, the central meridian is 100˚W. This projection is used by the U.S.
Geological Survey for topographic quadrangle maps.


Sinusoidal

Summary

Construction: Pseudo-cylinder

Property: Equal area

Meridians: Meridians are sinusoidal curves, curved toward a straight central meridian.

Parallels: All parallels are straight, parallel lines.

Graticule spacing: Meridian spacing is equal and decreases toward the poles. Parallel spacing is equal. The graticule spacing retains the property of equivalence of area.

Linear scale: Linear scale is true on the parallels and the central meridian.

Uses: Used as an equal area projection to portray areas that have a maximum extent in a north-south direction. Used as a world equal-area projection in atlases to show distribution patterns. Used by the U.S. Geological Survey as the base for maps showing prospective hydrocarbon provinces of the world and sedimentary basins of the world.

Sometimes called the Sanson-Flamsteed, Sinusoidal is a projection with some characteristics of a cylindrical projection—often called a pseudo-cylindrical type. The central meridian is the only straight meridian—all others become sinusoidal curves. All parallels are straight and the correct length. Parallels are also the correct distance from the equator, which, for a complete world map, is twice as long as the central meridian.

Sinusoidal maps achieve the property of equal area but not conformality. The equator
and central meridian are distortion free, but distortion becomes pronounced near outer
meridians, especially in polar regions.

Interrupting a Sinusoidal world or hemisphere map can lessen distortion. The inter-
rupted Sinusoidal contains less distortion because each interrupted area can be
constructed to contain a separate central meridian. Central meridians may be different
for the northern and southern hemispheres and may be selected to minimize distortion
of continents or oceans.

Sinusoidal is particularly suited for less than world areas, especially those bordering
the equator, such as South America or Africa. Sinusoidal is also used by the USGS as a
base map for showing prospective hydrocarbon provinces and sedimentary basins of
the world.
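The equal-area property follows directly from the simple spherical forward equations, sketched below: every parallel is drawn at its true distance from the equator and at its true length, which shrinks with the cosine of the latitude. The function name and sphere radius are placeholders only.

    import math

    def sinusoidal_sphere(lon_deg, lat_deg, lon0_deg=0.0, R=6370997.0):
        """Forward spherical Sinusoidal (sketch): x = R*(lon - lon0)*cos(lat), y = R*lat."""
        lam = math.radians(lon_deg - lon0_deg)
        phi = math.radians(lat_deg)
        return R * lam * math.cos(phi), R * phi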



Prompts
The following prompts display in the Projection Chooser if Sinusoidal is selected.
Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.


Space Oblique Mercator

Summary

Construction: Cylinder

Property: Conformal

Meridians: All meridians are curved lines except for the meridian crossed by the groundtrack at each polar approach.

Parallels: All parallels are curved lines.

Graticule spacing: There are no graticules.

Linear scale: Scale is true along the groundtrack, and varies approximately 0.01% within sensing range (ESRI 1992).

Uses: Used for georectification of, and continuous mapping from, satellite imagery. Standard format for data from Landsats 4 and 5 (ESRI 1992).

The Space Oblique Mercator (SOM) projection is nearly conformal and has little scale
distortion within the sensing range of an orbiting mapping satellite such as Landsat. It
is the first projection to incorporate the earth’s rotation with respect to the orbiting
satellite.

The method of projection used is the modified cylindrical, for which the central line is curved and defined by the groundtrack of the orbit of the satellite. The line of tangency is conceptual and there are no graticules.

The Space Oblique Mercator projection is defined by USGS. According to USGS, the X
axis passes through the descending node for each daytime scene. The Y axis is perpen-
dicular to the X axis, to form a Cartesian coordinate system. The direction of the X axis
in a daytime Landsat scene is in the direction of the satellite motion — south. The Y axis
is directed east. For SOM projections used by EOSAT, the axes are switched; the X axis
is directed east and the Y axis is directed south.

The Space Oblique Mercator projection is specifically designed to minimize distortion within sensing range of a mapping satellite as it orbits the Earth. It can be used for the rectification of, and continuous mapping from, satellite imagery. It is the standard format for data from Landsats 4 and 5. Plots for adjacent paths do not match without transformation (ESRI 1991).



Prompts
The following prompts display in the Projection Chooser if Space Oblique Mercator is
selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Landsat vehicle ID (1-5)

Specify whether the data are from Landsat 1, 2, 3, 4, or 5.

Orbital path number (1-251 or 1-233)

For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path
range is from 1 to 233.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of
the projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.


State Plane The State Plane is an X,Y coordinate system (not a map projection) whose zones divide
the U.S. into over 130 sections, each with its own projection surface and grid network
(Figure 199). With the exception of very narrow States, such as Delaware, New Jersey,
and New Hampshire, most States are divided into two to ten zones. The Lambert
Conformal projection is used for zones extending mostly in an east-west direction. The
Transverse Mercator projection is used for zones extending mostly in a north-south
direction. Alaska, Florida, and New York use either Transverse Mercator or Lambert
Conformal for different areas. The Aleutian panhandle of Alaska is prepared on the
Oblique Mercator projection.

Zone boundaries follow state and county lines, and, because each zone is small,
distortion is less than one in 10,000. Each zone has a centrally located origin and a
central meridian which passes through this origin. Two zone numbering systems are
currently in use—the U.S. Geological Survey (USGS) code system and the National
Ocean Service (NOS) code system (Table 33 and Table 34)—but other numbering systems exist.

Prompts
The following prompts will appear in the Projection Chooser if State Plane is selected.
Respond to the prompts as described.

State Plane Zone

Enter either the USGS zone code number as a positive value, or the NOS zone code
number as a negative value.

NAD27 or 83

Either North America Datum 1927 (NAD27) or North America Datum 1983 (NAD83)
may be used to perform the State Plane calculations.

• NAD27 is based on the Clarke 1866 spheroid.

• NAD83 is based on the GRS 1980 spheroid. Some zone numbers have been changed
or deleted from NAD27.

Tables for both NAD27 and NAD83 zone numbers follow (Table 33 and Table 34). These tables include both USGS and NOS code systems.



Figure 199: Zones of the State Plane Coordinate System

The following abbreviations are used in Table 33 and Table 34:

Tr Merc = Transverse Mercator
Lambert = Lambert Conformal Conic
Oblique = Oblique Mercator (Hotine)
Polycon = Polyconic


Table 33: NAD27 State Plane coordinate system zone numbers, projection types,
and zone code numbers for the United States

State    Zone Name    Type    USGS Code    NOS Code


Alabama East Tr Merc 3101 -101
West Tr Merc 3126 -102
Alaska 1 Oblique 6101 -5001
2 Tr Merc 6126 -5002
3 Tr Merc 6151 -5003
4 Tr Merc 6176 -5004
5 Tr Merc 6201 -5005
6 Tr Merc 6226 -5006
7 Tr Merc 6251 -5007
8 Tr Merc 6276 -5008
9 Tr Merc 6301 -5009
10 Lambert 6326 -5010
American Samoa ------- Lambert ------ -5302
Arizona East Tr Merc 3151 -201
Central Tr Merc 3176 -202
West Tr Merc 3201 -203
Arkansas North Lambert 3226 -301
South Lambert 3251 -302
California I Lambert 3276 -401
II Lambert 3301 -402
III Lambert 3326 -403
IV Lambert 3351 -404
V Lambert 3376 -405
VI Lambert 3401 -406
VII Lambert 3426 -407
Colorado North Lambert 3451 -501
Central Lambert 3476 -502
South Lambert 3501 -503
Connecticut -------- Lambert 3526 -600
Delaware -------- Tr Merc 3551 -700
District of Columbia Use Maryland or Virginia North



Florida East Tr Merc 3601 -901
West Tr Merc 3626 -902
North Lambert 3576 -903
Georgia East Tr Merc 3651 -1001
West Tr Merc 3676 -1002
Guam ------- Polycon ------- -5400
Hawaii 1 Tr Merc 5876 -5101
2 Tr Merc 5901 -5102
3 Tr Merc 5926 -5103
4 Tr Merc 5951 -5104
5 Tr Merc 5976 -5105
Idaho East Tr Merc 3701 -1101
Central Tr Merc 3726 -1102
West Tr Merc 3751 -1103
Illinois East Tr Merc 3776 -1201
West Tr Merc 3801 -1202
Indiana East Tr Merc 3826 -1301
West Tr Merc 3851 -1302
Iowa North Lambert 3876 -1401
South Lambert 3901 -1402
Kansas North Lambert 3926 -1501
South Lambert 3951 -1502
Kentucky North Lambert 3976 -1601
South Lambert 4001 -1602
Louisiana North Lambert 4026 -1701
South Lambert 4051 -1702
Offshore Lambert 6426 -1703
Maine East Tr Merc 4076 -1801
West Tr Merc 4101 -1802
Maryland ------- Lambert 4126 -1900
Massachusetts Mainland Lambert 4151 -2001
Island Lambert 4176 -2002



Michigan (Tr Merc) East Tr Merc 4201 -2101
Central Tr Merc 4226 -2102
West Tr Merc 4251 -2103
Michigan (Lambert) North Lambert 6351 -2111
Central Lambert 6376 -2112
South Lambert 6401 -2113
Minnesota North Lambert 4276 -2201
Central Lambert 4301 -2202
South Lambert 4326 -2203
Mississippi East Tr Merc 4351 -2301
West Tr Merc 4376 -2302
Missouri East Tr Merc 4401 -2401
Central Tr Merc 4426 -2402
West Tr Merc 4451 -2403
Montana North Lambert 4476 -2501
Central Lambert 4501 -2502
South Lambert 4526 -2503
Nebraska North Lambert 4551 -2601
South Lambert 4576 -2602
Nevada East Tr Merc 4601 -2701
Central Tr Merc 4626 -2702
West Tr Merc 4651 -2703
New Hampshire --------- Tr Merc 4676 -2800
New Jersey --------- Tr Merc 4701 -2900
New Mexico East Tr Merc 4726 -3001
Central Tr Merc 4751 -3002
West Tr Merc 4776 -3003
New York East Tr Merc 4801 -3101
Central Tr Merc 4826 -3102
West Tr Merc 4851 -3103
Long Island Lambert 4876 -3104
North Carolina -------- Lambert 4901 -3200



North Dakota North Lambert 4926 -3301
South Lambert 4951 -3302
Ohio North Lambert 4976 -3401
South Lambert 5001 -3402
Oklahoma North Lambert 5026 -3501
South Lambert 5051 -3502
Oregon North Lambert 5076 -3601
South Lambert 5101 -3602
Pennsylvania North Lambert 5126 -3701
South Lambert 5151 -3702
Puerto Rico -------- Lambert 6001 -5201
Rhode Island -------- Tr Merc 5176 -3800
South Carolina North Lambert 5201 -3901
South Lambert 5226 -3902
South Dakota North Lambert 5251 -4001
South Lambert 5276 -4002
St. Croix --------- Lambert 6051 -5202
Tennessee --------- Lambert 5301 -4100
Texas North Lambert 5326 -4201
North Central Lambert 5351 -4202
Central Lambert 5376 -4203
South Central Lambert 5401 -4204
South Lambert 5426 -4205
Utah North Lambert 5451 -4301
Central Lambert 5476 -4302
South Lambert 5501 -4303
Vermont -------- Tr Merc 5526 -4400
Virginia North Lambert 5551 -4501
South Lambert 5576 -4502
Virgin Islands -------- Lambert 6026 -5201
Washington North Lambert 5601 -4601
South Lambert 5626 -4602



West Virginia North Lambert 5651 -4701
South Lambert 5676 -4702
Wisconsin North Lambert 5701 -4801
Central Lambert 5726 -4802
South Lambert 5751 -4803
Wyoming East Tr Merc 5776 -4901
East Central Tr Merc 5801 -4902
West Central Tr Merc 5826 -4903
West Tr Merc 5851 -4904



Table 34: NAD83 State Plane coordinate system zone numbers, projection types,
and zone code numbers for the United States

State    Zone Name    Type    USGS Code    NOS Code


Alabama East Tr Merc 3101 -101
West Tr Merc 3126 -102
Alaska 1 Oblique 6101 -5001
2 Tr Merc 6126 -5002
3 Tr Merc 6151 -5003
4 Tr Merc 6176 -5004
5 Tr Merc 6201 -5005
6 Tr Merc 6226 -5006
7 Tr Merc 6251 -5007
8 Tr Merc 6276 -5008
9 Tr Merc 6301 -5009
10 Lambert 6326 -5010
Arizona East Tr Merc 3151 -201
Central Tr Merc 3176 -202
West Tr Merc 3201 -203
Arkansas North Lambert 3226 -301
South Lambert 3251 -302
California I Lambert 3276 -401
II Lambert 3301 -402
III Lambert 3326 -403
IV Lambert 3351 -404
V Lambert 3376 -405
VI Lambert 3401 -406
Colorado North Lambert 3451 -501
Central Lambert 3476 -502
South Lambert 3501 -503
Connecticut -------- Lambert 3526 -600
Delaware -------- Tr Merc 3551 -700
District of Columbia Use Maryland or Virginia North



Florida East Tr Merc 3601 -901
West Tr Merc 3626 -902
North Lambert 3576 -903
Georgia East Tr Merc 3651 -1001
West Tr Merc 3676 -1002
Hawaii 1 Tr Merc 5876 -5101
2 Tr Merc 5901 -5102
3 Tr Merc 5926 -5103
4 Tr Merc 5951 -5104
5 Tr Merc 5976 -5105
Idaho East Tr Merc 3701 -1101
Central Tr Merc 3726 -1102
West Tr Merc 3751 -1103
Illinois East Tr Merc 3776 -1201
West Tr Merc 3801 -1202
Indiana East Tr Merc 3826 -1301
West Tr Merc 3851 -1302
Iowa North Lambert 3876 -1401
South Lambert 3901 -1402
Kansas North Lambert 3926 -1501
South Lambert 3951 -1502
Kentucky North Lambert 3976 -1601
South Lambert 4001 -1602
Louisiana North Lambert 4026 -1701
South Lambert 4051 -1702
Offshore Lambert 6426 -1703
Maine East Tr Merc 4076 -1801
West Tr Merc 4101 -1802
Maryland ------- Lambert 4126 -1900
Massachusetts Mainland Lambert 4151 -2001
Island Lambert 4176 -2002



Michigan North Lambert 6351 -2111
Central Lambert 6376 -2112
South Lambert 6401 -2113
Minnesota North Lambert 4276 -2201
Central Lambert 4301 -2202
South Lambert 4326 -2203
Mississippi East Tr Merc 4351 -2301
West Tr Merc 4376 -2302
Missouri East Tr Merc 4401 -2401
Central Tr Merc 4426 -2402
West Tr Merc 4451 -2403
Montana --------- Lambert 4476 -2500
Nebraska --------- Lambert 4551 -2600
Nevada East Tr Merc 4601 -2701
Central Tr Merc 4626 -2702
West Tr Merc 4651 -2703
New Hampshire --------- Tr Merc 4676 -2800
New Jersey --------- Tr Merc 4701 -2900
New Mexico East Tr Merc 4726 -3001
Central Tr Merc 4751 -3002
West Tr Merc 4776 -3003
New York East Tr Merc 4801 -3101
Central Tr Merc 4826 -3102
West Tr Merc 4851 -3103
Long Island Lambert 4876 -3104
North Carolina --------- Lambert 4901 -3200
North Dakota North Lambert 4926 -3301
South Lambert 4951 -3302
Ohio North Lambert 4976 -3401
South Lambert 5001 -3402
Oklahoma North Lambert 5026 -3501
South Lambert 5051 -3502



Oregon North Lambert 5076 -3601
South Lambert 5101 -3602
Pennsylvania North Lambert 5126 -3701
South Lambert 5151 -3702
Puerto Rico --------- Lambert 6001 -5201
Rhode Island --------- Tr Merc 5176 -3800
South Carolina --------- Lambert 5201 -3900
South Dakota --------- Lambert 5251 -4001
South Lambert 5276 -4002
Tennessee --------- Lambert 5301 -4100
Texas North Lambert 5326 -4201
North Central Lambert 5351 -4202
Central Lambert 5376 -4203
South Central Lambert 5401 -4204
South Lambert 5426 -4205
Utah North Lambert 5451 -4301
Central Lambert 5476 -4302
South Lambert 5501 -4303
Vermont --------- Tr Merc 5526 -4400
Virginia North Lambert 5551 -4501
South Lambert 5576 -4502
Virgin Islands --------- Lambert 6026 -5201
Washington North Lambert 5601 -4601
South Lambert 5626 -4602
West Virginia North Lambert 5651 -4701
South Lambert 5676 -4702
Wisconsin North Lambert 5701 -4801
Central Lambert 5726 -4802
South Lambert 5751 -4803



Wyoming East Tr Merc 5776 -4901
East Central Tr Merc 5801 -4902
West Central Tr Merc 5826 -4903
West Tr Merc 5851 -4904


Stereographic

Summary

Construction: Plane

Property: Conformal

Meridians: Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are arcs of circles concave toward a straight central meridian. In the equatorial aspect, the outer meridian of the hemisphere is a circle centered at the projection center.

Parallels: Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are nonconcentric arcs of circles concave toward one of the poles, with one parallel being a straight line. Equatorial aspect: parallels are nonconcentric arcs of circles concave toward the poles; the equator is straight.

Graticule spacing: The graticule spacing increases away from the center of the projection in all aspects, and it retains the property of conformality.

Linear scale: Scale increases toward the periphery of the projection.

Uses: The Stereographic projection is the most widely used azimuthal projection, mainly used for portraying large, continent-size areas of similar extent in all directions. It is used in geophysics for solving problems in spherical geometry. The polar aspect is used for topographic maps and navigational charts. The American Geographical Society uses this projection as the basis for its “Map of the Arctic.” The U.S. Geological Survey uses it as the basis for maps of Antarctica.

Stereographic is a perspective projection in which points are projected from a position on the opposite side of the globe onto a plane tangent to the earth (Figure 200). All of one hemisphere can easily be shown, but it is impossible to show both hemispheres in their entirety from one center. It is the only azimuthal projection that preserves truth of angles and local shape. Scale increases and parallels become more widely spaced farther from the center.

In the equatorial aspect, all parallels except the equator are circular arcs. In the polar
aspect, latitude rings are spaced farther apart, with increasing distance from the pole.
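For the spherical form, the forward equations for an oblique or equatorial aspect centered at latitude phi1 and longitude lon0 can be sketched as follows, after the standard published formulas (e.g., Snyder). The local scale factor k grows toward the periphery, as noted above; the function name and sphere radius are placeholders only.

    import math

    def stereographic_sphere(lon_deg, lat_deg, lon0_deg, lat1_deg, R=6370997.0, k0=1.0):
        """Forward spherical Stereographic, oblique/equatorial aspect (sketch)."""
        lam = math.radians(lon_deg - lon0_deg)
        phi = math.radians(lat_deg)
        phi1 = math.radians(lat1_deg)
        # Scale factor relative to the projection center.
        k = 2.0 * k0 / (1.0 + math.sin(phi1) * math.sin(phi)
                        + math.cos(phi1) * math.cos(phi) * math.cos(lam))
        x = R * k * math.cos(phi) * math.sin(lam)
        y = R * k * (math.cos(phi1) * math.sin(phi)
                     - math.sin(phi1) * math.cos(phi) * math.cos(lam))
        return x, y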



Prompts
The following prompts display in the Projection Chooser if Stereographic is selected.
Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough so that no negative coordinates will occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.

The Stereographic is the only azimuthal projection which is conformal. Figure 200
shows two views: A) Equatorial aspect, often used in the 16th and 17th centuries for
maps of hemispheres; B) Oblique aspect, centered on 40˚N.


Figure 200: Stereographic Projection



Transverse Mercator

Summary

Construction: Cylinder

Property: Conformal

Meridians: Meridians are complex curves concave toward a straight central meridian that is tangent to the globe. The straight central meridian intersects the equator and one meridian at a 90˚ angle.

Parallels: Parallels are complex curves concave toward the nearest pole; the equator is straight.

Graticule spacing: Parallels are spaced at their true distances on the straight central meridian. Graticule spacing increases away from the tangent meridian. The graticule retains the property of conformality.

Linear scale: Linear scale is true along the line of tangency, or along two lines equidistant from, and parallel to, the line of tangency.

Uses: Used where the north-south dimension is greater than the east-west dimension. Used as the base for the U.S. Geological Survey’s 1:250,000-scale series, and for some of the 7.5-minute and 15-minute quadrangles of the National Topographic Map Series.

Transverse Mercator is similar to the Mercator projection except that the axis of the
projection cylinder is rotated 90˚ from the vertical (polar) axis. The contact line is then
a chosen meridian instead of the equator and this central meridian runs from pole to
pole. It loses the properties of straight meridians and straight parallels of the standard
Mercator projection (except for the central meridian, the two meridians 90˚ away, and
the equator).

Transverse Mercator also loses the straight rhumb lines of the Mercator map, but it is a
conformal projection. Scale is true along the central meridian or along two straight lines
equidistant from, and parallel to, the central meridian. It cannot be edge-joined in an
east-west direction if each sheet has its own central meridian.

In the United States, Transverse Mercator is the projection used in the State Plane
coordinate system for states with predominant north-south extent. The entire earth
from 84˚N to 80˚S is mapped with a system of projections called the Universal Trans-
verse Mercator.


Prompts
The following prompts display in the Projection Chooser if Transverse Mercator is
selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Scale factor at central meridian

Designate the desired scale factor at the central meridian. This parameter is used to
modify scale distortion. A value of one indicates true scale only along the central
meridian. It may be desirable to have true scale along two lines equidistant from and
parallel to the central meridian, or to lessen scale distortion away from the central
meridian. A factor of less than, but close to, one is often used.
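A very rough spherical approximation shows the effect. Near the central meridian the scale factor grows roughly as k = k0 * (1 + d^2 / (2*R^2)), where d is the distance from the central meridian; with a factor slightly below one, true scale is recovered along two lines either side of it. The sketch below uses the UTM value k0 = 0.9996 purely as an example, and its numbers are approximate only.

    import math

    R = 6371000.0      # mean earth radius in meters (spherical approximation)
    k0 = 0.9996        # scale factor on the central meridian (UTM value, as an example)

    def tm_scale(d_meters, k0=k0):
        """Approximate Transverse Mercator scale factor at distance d (meters)
        from the central meridian, spherical small-distance approximation."""
        return k0 * (1.0 + d_meters ** 2 / (2.0 * R ** 2))

    # Distance at which the scale returns to exactly 1 (the two lines of true scale):
    d_true = R * math.sqrt(2.0 * (1.0 - k0) / k0)
    print(round(d_true / 1000.0))   # roughly 180 km either side of the central meridian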

Finally, define the origin of the map projection in both spherical and rectangular coordi-
nates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of
projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough so that there will be no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.



UTM The Universal Transverse Mercator (UTM) is an international plane (rectangular)
coordinate system developed by the U.S. Army that extends around the world from
84˚N to 80˚S. The world is divided into 60 zones each covering six degrees longitude.
Each zone extends three degrees eastward and three degrees westward from its central
meridian. Zones are numbered consecutively west to east from the 180˚ meridian
(Figure 201, Table 35).
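Because the zones form a regular six-degree grid, the zone number and its central meridian can be computed directly from a longitude, as in the sketch below (longitudes west of Greenwich negative; function names are illustrative only). The results agree with Table 35 at the end of this section.

    def utm_zone(lon_deg):
        """UTM zone number (1-60) for a longitude in degrees, west negative."""
        # Zone 1 spans 180W-174W, zone 31 spans 0-6E, zone 60 spans 174E-180.
        return int((lon_deg + 180.0) // 6.0) % 60 + 1

    def utm_central_meridian(zone):
        """Central meridian of a UTM zone, in degrees (west negative)."""
        return zone * 6.0 - 183.0

    # Example: Atlanta, Georgia (about 84.4W) falls in zone 16, central meridian 87W.
    print(utm_zone(-84.4), utm_central_meridian(utm_zone(-84.4)))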

The Transverse Mercator projection is then applied to each UTM zone. Transverse
Mercator is a transverse form of the Mercator cylindrical projection. The projection
cylinder is rotated 90˚ from the vertical (polar) axis and can then be placed to intersect
at a chosen central meridian. The UTM system specifies the central meridian of each
zone. With a separate projection for each UTM zone, a high degree of accuracy is
possible (one part in 1000 maximum distortion within each zone).

If the map to be projected extends beyond the border of the UTM zone, the entire map
may be projected for any UTM zone specified by the user.

See "Transverse Mercator" on page 578 for more information.

Prompts
The following prompts display in the Projection Chooser if UTM is chosen.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

UTM Zone

Is the data North or South of the equator?

All values in Table 35 are in full degrees east (E) or west (W) of the Greenwich prime
meridian (0).



Figure 201: Zones of the Universal Transverse Mercator Grid in the United States



Table 35: UTM zones, central meridians, and longitude ranges

Zone    Central Meridian    Range            Zone    Central Meridian    Range
1 177W 180W-174W 31 3E 0-6E
2 171W 174W-168W 32 9E 6E-12E
3 165W 168W-162W 33 15E 12E-18E
4 159W 162W-156W 34 21E 18E-24E
5 153W 156W-150W 35 27E 24E-30E
6 147W 150W-144W 36 33E 30E-36E
7 141W 144W-138W 37 39E 36E-42E
8 135W 138W-132W 38 45E 42E-48E
9 129W 132W-126W 39 51E 48E-54E
10 123W 126W-120W 40 57E 54E-60E
11 117W 120W-114W 41 63E 60E-66E
12 111W 114W-108W 42 69E 66E-72E
13 105W 108W-102W 43 75E 72E-78E
14 99W 102W-96W 44 81E 78E-84E
15 93W 96W-90W 45 87E 84E-90E
16 87W 90W-84W 46 93E 90E-96E
17 81W 84W-78W 47 99E 96E-102E
18 75W 78W-72W 48 105E 102E-108E
19 69W 72W-66W 49 111E 108E-114E
20 63W 66W-60W 50 117E 114E-120E
21 57W 60W-54W 51 123E 120E-126E
22 51W 54W-48W 52 129E 126E-132E
23 45W 48W-42W 53 135E 132E-138E
24 39W 42W-36W 54 141E 138E-144E
25 33W 36W-30W 55 147E 144E-150E
26 27W 30W-24W 56 153E 150E-156E
27 21W 24W-18W 57 159E 156E-162E
28 15W 18W-12W 58 165E 162E-168E
29 9W 12W-6W 59 171E 168E-174E
30 3W 6W-0 60 177E 174E-180E


Van der Grinten I

Summary

Construction: Miscellaneous

Property: Compromise

Meridians: Meridians are circular arcs concave toward a straight central meridian.

Parallels: Parallels are circular arcs concave toward the poles, except for a straight equator.

Graticule spacing: Meridian spacing is equal at the equator. The parallels are spaced farther apart toward the poles. The central meridian and equator are straight lines. The poles commonly are not represented. The graticule spacing results in a compromise of all properties.

Linear scale: Linear scale is true along the equator. Scale increases rapidly toward the poles.

Uses: The Van der Grinten projection is used by the National Geographic Society for world maps. Used by the U.S. Geological Survey to show distribution of mineral resources on the sea floor.

The Van der Grinten I projection produces a map that is neither conformal nor equal
area (Figure 202). It compromises all properties, and represents the earth within a circle.

All lines are curved except the central meridian and the equator. Parallels are spaced
farther apart toward the poles. Meridian spacing is equal at the equator. Scale is true
along the equator, but increases rapidly toward the poles, which are usually not repre-
sented.

Van der Grinten I avoids the excessive stretching of the Mercator and the shape
distortion of many of the equal area projections. It has been used to show distribution
of mineral resources on the ocean floor.

Prompts
The following prompts display in the Projection Chooser if Van der Grinten I is
selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."



Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the
projection. These values must be in meters. It is very often convenient to make them
large enough to prevent negative coordinates within the region of the map projection.
That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.

Figure 202: Van der Grinten I Projection

The Van der Grinten I projection resembles the Mercator, but it is not conformal.


External Projections The following external projections are supported in ERDAS IMAGINE and are
described in this section. Some of these projections were discussed in the previous
section. Those descriptions are not repeated here. Simply refer to the page number in
parentheses for more information.

NOTE: ERDAS IMAGINE does not support datum shifts for these external projections.

• Albers Conical Equal Area (see page 519)

• Azimuthal Equidistant (see page 522)

• Bipolar Oblique Conic Conformal

• Cassini-Soldner

• Conic Equidistant (see page 525)

• Laborde Oblique Mercator

• Lambert Azimuthal Equal Area (see page 535)

• Lambert Conformal Conic (see page 538)

• Mercator (see page 541)

• Modified Polyconic

• Modified Stereographic

• Mollweide Equal Area

• Oblique Mercator (see page 548)

• Orthographic (see page 551)

• Plate Carrée (see page 527)

• Rectified Skew Orthomorphic

• Regular Polyconic (see page 557)

• Robinson Pseudocylindrical

• Sinusoidal (see page 559)

• Southern Orientated Gauss Conformal

• Stereographic (see page 575)

• Stereographic (Oblique) (see page 575)



• Transverse Mercator (see page 578)

• Universal Transverse Mercator (see page 580)

• Van der Grinten (see page 583)

• Winkel’s Tripel

Bipolar Oblique Conic Conformal

Summary

Construction: Cone

Property: Conformal

Meridians: Meridians are complex curves concave toward the center of the projection.

Parallels: Parallels are complex curves concave toward the nearest pole.

Graticule spacing: Graticule spacing increases away from the lines of true scale and retains the property of conformality.

Linear scale: Linear scale is true along two lines that do not lie along any meridian or parallel. Scale is compressed between these lines and expanded beyond them. Linear scale is generally good, but there is as much as a 10% error at the edge of the projection as used.

Uses: Used to represent one or both of the American continents. Examples are the Basement map of North America and the Tectonic map of North America.

The Bipolar Oblique Conic Conformal projection was developed by O.M. Miller and
William A. Briesemeister in 1941 specifically for mapping North and South America,
and maintains conformality for these regions. It is based upon the Lambert Conformal
Conic, using two oblique conic projections side-by-side. The two oblique conics are
joined with the poles 104˚ apart. A great circle arc 104˚ long begins at 20˚S and 110˚W,
cuts through Central America, and terminates at 45˚N and approximately 19˚59’36”W.
The scale of the map is then increased by approximately 3.5%. The origin of the coordi-
nates is made 17˚15’N, 73˚02’W.

Refer to "Lambert Conformal Conic" on page 538 for more information.

Prompts
The following prompts display in the Projection Chooser if Bipolar Oblique Conic
Conformal is selected.

Projection Name

Spheroid Type

Datum Name


Cassini-Soldner

Summary

Construction: Cylinder

Property: Compromise

Meridians: The central meridian, each meridian 90˚ from the central meridian, and the equator are straight lines. Other meridians are complex curves.

Parallels: Parallels are complex curves.

Graticule spacing: Complex curves for all meridians and parallels, except for the equator, the central meridian, and each meridian 90˚ away from the central meridian, all of which are straight.

Linear scale: Scale is true along the central meridian, and along lines perpendicular to the central meridian. Scale is constant but not true along lines parallel to the central meridian on the spherical form, and nearly so for the ellipsoid.

Uses: Used for topographic mapping, formerly in England and currently in a few other countries, such as Denmark, Germany, and Malaysia.

The Cassini projection was devised by C. F. Cassini de Thury in 1745 for the survey of
France. Mathematical analysis by J. G. von Soldner in the early 19th century led to more
accurate ellipsoidal formulas. Today, it has largely been replaced by the Transverse
Mercator projection, although it is still in limited use outside of the United States. It was
one of the major topographic mapping projections until the early 20th century.

The spherical form of the projection bears the same relation to the Equidistant Cylin-
drical or Plate Carrée projection that the spherical Transverse Mercator bears to the
regular Mercator. Instead of having the straight meridians and parallels of the
Equidistant Cylindrical, the Cassini has complex curves for each, except for the equator,
the central meridian, and each meridian 90˚ away from the central meridian, all of
which are straight.

There is no distortion along the central meridian if it is maintained at true scale, which is the usual case. If the central meridian is given a reduced scale factor instead, the lines of true scale become two straight lines on the map, parallel to, and equidistant from, the central meridian, and there is no distortion along those lines.

The scale is correct along the central meridian and also along any straight line perpen-
dicular to the central meridian. It gradually increases in a direction parallel to the
central meridian as the distance from that meridian increases, but the scale is constant
along any straight line on the map that is parallel to the central meridian. Therefore,
Cassini-Soldner is more suitable for regions that are predominantly north-south in
extent, such as Great Britain, than regions extending in other directions. The projection
is neither equal area nor conformal, but a compromise between the two.
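For reference, the spherical form of Cassini-Soldner can be sketched from the standard published formulas (e.g., Snyder); the ellipsoidal form used in actual surveys is more involved. Here x is the true-scale distance perpendicular to the central meridian and y is measured along it; the function name and sphere radius are placeholders only.

    import math

    def cassini_sphere(lon_deg, lat_deg, lon0_deg=0.0, lat0_deg=0.0, R=6370997.0):
        """Forward spherical Cassini-Soldner (sketch). Returns x, y in meters."""
        lam = math.radians(lon_deg - lon0_deg)
        phi = math.radians(lat_deg)
        phi0 = math.radians(lat0_deg)
        B = math.cos(phi) * math.sin(lam)
        x = R * math.asin(B)
        y = R * (math.atan2(math.tan(phi), math.cos(lam)) - phi0)
        return x, y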



The Cassini-Soldner projection was adopted by the Ordnance Survey for the official
survey of Great Britain during the second half of the 19th century. A system equivalent
to the oblique Cassini-Soldner projection was used in early coordinate transformations
for ERTS (now Landsat) satellite imagery, but it was changed to Oblique Mercator
(Hotine) in 1978 and to the Space Oblique Mercator in 1982.

Prompts
The following prompts display in the Projection Chooser if Cassini-Soldner is selected.

Projection Name

Spheroid Type

Datum Name

Laborde Oblique Mercator

In 1928, Laborde combined a conformal sphere with a complex-algebra transformation of the Oblique Mercator projection for the topographic mapping of Madagascar. This variation is now known as the Laborde Oblique Mercator. The central line is a great circle arc.

See "Oblique Mercator (Hotine)" on page 548 for more information.

Prompts
The following prompts display in the Projection Chooser if Laborde Oblique Mercator
is selected.

Projection Name

Spheroid Type

Datum Name


Modified Polyconic

Summary

Construction: Cone

Property: Compromise

Meridians: All meridians are straight.

Parallels: Parallels are circular arcs. The top and bottom parallels of each sheet are nonconcentric circular arcs.

Graticule spacing: The top and bottom parallels of each sheet are nonconcentric circular arcs. The two parallels are spaced from each other according to the true scale along the central meridian, which is slightly reduced.

Linear scale: Scale is true along each parallel and along two meridians, but no parallel is “standard.”

Uses: Used for the International Map of the World series until 1962.

The Modified Polyconic projection was devised by Lallemand of France, and in 1909 it
was adopted by the International Map Committee (IMC) in London as the basis for the
1:1,000,000-scale International Map of the World (IMW) series.

The projection differs from the ordinary Polyconic in two principal features: all
meridians are straight, and there are two meridians that are made true to scale.
Adjacent sheets fit together exactly not only north to south, but also east to west. There is still a gap when mosaicking in all directions, however, because a gap remains between each diagonally adjacent sheet and one of its two neighbors.

In 1962, a U.N. conference on the IMW adopted the Lambert Conformal Conic and the
Polar Stereographic projections to replace the Modified Polyconic.

See "Polyconic" on page 557 for more information.

Prompts
The following prompts display in the Projection Chooser if Modified Polyconic is
selected.

Projection Name

Spheroid Type

Datum Name



Modified Stereographic

Summary

Construction: Plane

Property: Conformal

Meridians: All meridians are normally complex curves, although some may be straight under certain conditions.

Parallels: All parallels are complex curves, although some may be straight under certain conditions.

Graticule spacing: The graticule is normally not symmetrical about any axis or point.

Linear scale: Scale is true along irregular lines, but the map is usually designed to minimize scale variation throughout a selected region.

Uses: Used for maps of continents in the Eastern Hemisphere, for the Pacific Ocean, and for maps of Alaska and the 50 United States.

The meridians and parallels of the Modified Stereographic projection are generally
curved, and there is usually no symmetry about any point or line. There are limitations
to these transformations. Most of them can only be used within a limited range. As the
distance from the projection center increases, the meridians, parallels, and shorelines
begin to exhibit loops, overlapping, and other undesirable curves. A world map using
the GS50 (50-State) projection is almost illegible with meridians and parallels inter-
twined like wild vines.

Prompts
The following prompts display in the Projection Chooser if Modified Stereographic is
selected.

Projection Name

Spheroid Type

Datum Name


Mollweide Equal Area

Summary

Construction: Pseudo-cylinder

Property: Equal area

Meridians: All of the meridians are ellipses. The central meridian is a straight line and 90˚ meridians are circular arcs (Pearson 1990).

Parallels: The equator and parallels are straight lines perpendicular to the central meridian, but they are not equally spaced.

Graticule spacing: Linear graticules include the central meridian and the equator (ESRI 1992). Meridians are equally spaced along the equator and along all other parallels. The parallels are straight parallel lines, but they are not equally spaced. The poles are points.

Linear scale: Scale is true along latitudes 40˚44’N and S. Distortion increases with distance from these lines and becomes severe at the edges of the projection (ESRI 1992).

Uses: Often used for world maps (Pearson 1990). Suitable for thematic or distribution mapping of the entire world, frequently in interrupted form (ESRI 1992).

The second oldest pseudo-cylindrical projection that is still in use (after the Sinusoidal)
was presented by Carl B. Mollweide (1774 - 1825) of Halle, Germany, in 1805. It is an
equal area projection of the earth within an ellipse. It has had a profound effect on
world map projections in the 20th century, especially as an inspiration for other
important projections, such as the Van der Grinten.

The Mollweide is normally used for world maps and occasionally for a very large
region, such as the Pacific Ocean. This is because only two points on the Mollweide are
completely free of distortion unless the projection is interrupted. These are the points at
latitudes 40˚44’12”N and S on the central meridian(s).

The world is shown in an ellipse with the equator, its major axis, twice as long as the
central meridian, its minor axis. The meridians 90˚ east and west of the central meridian
form a complete circle. All other meridians are elliptical arcs which, with their opposite
numbers on the other side of the central meridian, form complete ellipses that meet at
the poles.
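Because the parallels are not equally spaced, the spherical forward equations require an auxiliary angle that must be found by iteration. The sketch below follows the standard published formulas (e.g., Snyder), using Newton-Raphson for the auxiliary angle; the function name and sphere radius are placeholders only.

    import math

    def mollweide_sphere(lon_deg, lat_deg, lon0_deg=0.0, R=6370997.0):
        """Forward spherical Mollweide (sketch). Solves 2t + sin(2t) = pi*sin(lat)
        for the auxiliary angle t by Newton-Raphson, then maps to x, y."""
        lam = math.radians(lon_deg - lon0_deg)
        phi = math.radians(lat_deg)
        t = phi
        for _ in range(10):  # converges quickly except exactly at the poles
            f = 2.0 * t + math.sin(2.0 * t) - math.pi * math.sin(phi)
            df = 2.0 + 2.0 * math.cos(2.0 * t)
            if abs(df) < 1e-12:
                break
            t -= f / df
        x = (2.0 * math.sqrt(2.0) / math.pi) * R * lam * math.cos(t)
        y = math.sqrt(2.0) * R * math.sin(t)
        return x, y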

Prompts
The following prompts display in the Projection Chooser if Mollweide Equal Area is
selected.

Projection Name

Spheroid Type

Datum Name



Rectified Skew Orthomorphic

Martin Hotine (1898 - 1968) called the Oblique Mercator projection the Rectified Skew Orthomorphic projection.

See "Oblique Mercator (Hotine)" on page 548 for more information.

Prompts
The following prompts display in the Projection Chooser if Rectified Skew Ortho-
morphic is selected.

Projection Name

Spheroid Type

Datum Name

Robinson Pseudocylindrical

Summary

Construction: Pseudo-cylinder

Property: Compromise

Meridians: Meridians are elliptical arcs, equally spaced, and concave toward the central meridian.

Parallels: Parallels are straight lines.

Graticule spacing: Parallels are straight lines and are parallel. The individual parallels are evenly divided by the meridians (Pearson 1990).

Linear scale: Generally, scale is made true along latitudes 38˚N and S. Scale is constant along any given latitude, and for the latitude of opposite sign (ESRI 1992).

Uses: Developed for use in general and thematic world maps. Used by Rand McNally since the 1960s and by the National Geographic Society since 1988 for general and thematic world maps (ESRI 1992).

The Robinson Pseudocylindrical projection provides a means of showing the entire earth in an uninterrupted form. The continents appear as units and are in relatively correct size and location. Poles are represented as lines.

Meridians are equally spaced and resemble elliptical arcs, concave toward the central
meridian. The central meridian is a straight line 0.51 times the length of the equator.
Parallels are equally spaced straight lines between 38˚N and S, and then the spacing
decreases beyond these limits. The poles are 0.53 times the length of the equator. The
projection is based upon tabular coordinates instead of mathematical formulas (ESRI
1992).


Prompts
The following prompts display in the Projection Chooser if Robinson Pseudocylindrical
is selected.

Projection Name

Spheroid Type

Datum Name

Southern Orientated Gauss Conformal

Southern Orientated Gauss Conformal is another name for the Transverse Mercator projection, after mathematician Friedrich Gauss (1777 - 1855). It is also called the Gauss-Krüger projection.

See "Transverse Mercator" on page 578 for more information.

Prompts
The following prompts display in the Projection Chooser if Southern Orientated Gauss
Conformal is selected.

Projection Name

Spheroid Type

Datum Name



Winkel’s Tripel

Summary

Construction: Modified azimuthal

Property: Neither conformal nor equal area

Meridians: Central meridian is straight. Other meridians are curved and are equally spaced along the equator and concave toward the central meridian.

Parallels: Equidistant spacing of parallels. Equator and the poles are straight. Other parallels are curved and concave toward the nearest pole.

Graticule spacing: Symmetry is maintained along the central meridian or the Equator.

Linear scale: Scale is true along the central meridian and constant along the Equator.

Uses: Used for world maps.

Winkel’s Tripel was formulated in 1921 by Oswald Winkel of Germany. It is a combined projection that is the arithmetic mean of the Plate Carrée and Aitoff’s projection (Maling).

Prompts
The following prompts display in the Projection Chooser if Winkel’s Tripel is selected.

Projection Name

Spheroid Type

Datum Name


Glossary

A

absorption spectra - the electromagnetic radiation wavelengths that are absorbed by specific materials of interest.
abstract symbol - an annotation symbol that has a geometric shape, such as a circle,
square, or triangle. These symbols often represent amounts that vary from place
to place, such as population density, yearly rainfall, etc.
a priori - already or previously known.
accuracy assessment - the comparison of a classification to geographical data that is
assumed to be true. Usually, the assumed-true data are derived from ground
truthing.
accuracy report - in classification accuracy assessment, a list of the percentages of
accuracy, computed from the error matrix.
active sensors - imaging sensors, such as radar systems, that both emit and receive radiation.
ADRG - see ARC Digitized Raster Graphic.
ADRI - see ARC Digital Raster Imagery.
aerial stereopair - two photos taken at adjacent exposure stations.
Airborne Synthetic Aperture Radar (AIRSAR) - an experimental airborne radar sensor
developed by Jet Propulsion Laboratories (JPL), Pasadena, California, under a
contract with NASA. AIRSAR data have been available since 1983.
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) - a sensor developed by JPL (Pasadena, California) under a contract with NASA that produces multispectral data with 224 narrow bands. These bands are 10 nm wide and cover the spectral range of 0.4 - 2.4 µm. AVIRIS data have been available since 1987.
alarm - a test of a training sample, usually used before the signature statistics are calcu-
lated. An alarm highlights an area on the display which is an approximation of
the area that would be classified with a signature. The original data can then be
compared to the highlighted area.
Almaz - a Russian radar satellite that completed its mission in 1992.
Analog Photogrammetry - optical or mechanical instruments used to reconstruct three-
dimensional geometry from two overlapping photographs.
Analytical Photogrammetry - photogrammetry in which the computer replaces some expensive optical and mechanical components, substituting mathematical computation for analog measurement and calculation.

ancillary data - the data, other than remotely sensed data, that are used to aid in the
classification process.
annotation - the explanatory material accompanying an image or map. In ERDAS
IMAGINE, annotation consists of text, lines, polygons, ellipses, rectangles,
legends, scale bars, grid lines, tick marks, neatlines, and symbols which denote
geographical features.
annotation layer - a set of annotation elements that is drawn in a Viewer or Map
Composer window and stored in a file (.ovr extension).
arc - see line.
ARC system (Equal Arc-Second Raster Chart/Map) - a system that provides a rectan-
gular coordinate and projection system at any scale for the earth’s ellipsoid, based
on the World Geodetic System 1984 (WGS 84).
ARC Digital Raster Imagery (ADRI) - Defense Mapping Agency (DMA) data that
consist of SPOT panchromatic, SPOT multispectral, or Landsat TM satellite
imagery transformed into the ARC system and accompanied by ASCII encoded
support files. These data are available only to Department of Defense contractors.
ARC Digitized Raster Graphic (ADRG) - data from the Defense Mapping Agency
(DMA) that consist of digital copies of DMA hardcopy graphics transformed into
the ARC system and accompanied by ASCII encoded support files. These data are
primarily used for military purposes by defense contractors.
ARC GENERATE data - vector data created with the ARC/INFO UNGENERATE
command.
arc/second - a unit of measure that can be applied to data in the Lat/Lon coordinate
system. Each pixel represents the distance covered by one second of latitude or
longitude. For example, in “3 arc/second” data, each pixel represents an area
three seconds latitude by three seconds longitude.
area - a measurement of a surface.
area based matching - an image matching technique that determines the correspon-
dence between two image areas according to the similarity of their gray level
values.
area of interest (AOI) - a point, line, or polygon that is selected as a training sample or
as the image area to be used in an operation. AOIs can be stored in separate .aoi
files.
aspect - the orientation, or the direction that a surface faces, with respect to the direc-
tions of the compass: north, south, east, west.
aspect image - a thematic raster image which shows the prevailing direction that each
pixel faces.
aspect map - a map that is color-coded according to the prevailing direction of the slope
at each pixel.
attribute - the tabular information associated with a raster or vector layer.


average - the statistical mean; the sum of a set of values divided by the number of values
in the set.
AVHRR - Advanced Very High Resolution Radiometer data. Small-scale imagery
produced by an NOAA polar orbiting satellite. It has a spatial resolution of 1.1×
1.1 km or 4 × 4 km.
azimuth - an angle measured clockwise from a meridian, going north to east.
azimuthal projection - a map projection that is created from projecting the surface of
the earth to the surface of a plane.

B

band - a set of data file values for a specific portion of the electromagnetic spectrum of
reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal,
etc.) or some other user-defined information created by combining or enhancing
the original bands, or creating new bands from other sources. Sometimes called
“channel.”
banding - see striping.
base map - a map portraying background reference information onto which other infor-
mation is placed. Base maps usually show the location and extent of natural
surface features and permanent man-made features.
batch file - a file that is created in the batch mode of ERDAS IMAGINE. All steps are
recorded for a later run. This file can be edited.
batch mode - a mode of operating ERDAS IMAGINE in which steps are recorded for
later use.
bathymetric map - a map portraying the shape of a water body or reservoir using
isobaths (depth contours).
Bayesian - a variation of the maximum likelihood classifier, based on the Bayes Law of
probability. The Bayesian classifier allows the application of a priori weighting
factors, representing the probabilities that pixels will be assigned to each class.
BIL - band interleaved by line. A form of data storage in which each record in the file
contains a scan line (row) of data for one band. All bands of data for a given line
are stored consecutively within the file.
bilinear interpolation - a resampling method that uses the data file values of four pixels
in a 2 by 2 window to calculate an output data file value by computing a weighted
average of the input data file values with a bilinear function.
bin function - a mathematical function that establishes the relationship between data
file values and rows in a descriptor table.
bins - ordered sets of pixels. Pixels are sorted into a specified number of bins. The pixels
are then given new values based upon the bins to which they are assigned.
BIP - band interleaved by pixel. A form of data storage in which the values for each
band are ordered within a given pixel. The pixels are arranged sequentially on the
tape.

bit - a binary digit, meaning a number that can have two possible values 0 and 1, or
“off” and “on.” A set of bits, however, can have many more values, depending
upon the number of bits used. The number of values that can be expressed by a
set of bits is 2 to the power of the number of bits used. For example, the number
of values that can be expressed by 3 bits is 8 (2³ = 8).
block of photographs - formed by the combined exposures of a flight. The block
consists of a number of parallel strips with a sidelap of 20-30%.
blocked - a method of storing data on 9-track tapes so that there are more logical
records in each physical record.
blocking factor - the number of logical records in each physical record. For instance, a
record may contain 28,000 bytes, but only 4,000 columns due to a blocking factor
of 7.
book map - a map laid out like the pages of a book. Each page fits on the paper used by
the printer. There are neatlines and tick marks on all sides of every page.
Boolean - logical; based upon, or reducible to a true or false condition.
border - on a map, a line that usually encloses the entire map, not just the image area as
does a neatline.
boundary - a neighborhood analysis technique that is used to detect boundaries
between thematic classes.
bpi - bits per inch. A measure of data storage density for magnetic tapes.
breakline - an elevation polyline, in which each vertex has its own X, Y, Z value.
brightness value - the quantity of a primary color (red, green, blue) to be output to a
pixel on the display device. Also called “intensity value,” “function memory
value,” “pixel value,” “display value,” “screen value.”
BSQ - band sequential. A data storage format in which each band is contained in a
separate file.
buffer zone - a specific area around a feature that is isolated for or from further analysis.
For example, buffer zones are often generated around streams in site assessment
studies, so that further analyses will exclude these areas that are often unsuitable
for development.
build - the process of constructing the topology of a vector layer by processing points,
lines, and polygons. See clean.
bundle - the unit of photogrammetric triangulation after each point measured in an
image is connected with the perspective center by a straight light ray. There is one
bundle of light rays for each image.
bundle attitude - defined by a spatial rotation matrix consisting of three angles (κ, ω, ϕ).
bundle location - defined by the perspective center, expressed in units of the specified
map projection.
byte - 8 bits of data.

C

cadastral map - a map showing the boundaries of the subdivisions of land for purposes
of describing and recording ownership or taxation.
calibration certificate/report - in aerial photography, the manufacturer of the camera
specifies the interior orientation in the form of a certificate or report.
Cartesian - a coordinate system in which data are organized on a grid and points on the
grid are referenced by their X,Y coordinates.
cartography - the art and science of creating maps.
categorical data - see thematic data.
CCT - see computer compatible tape.
CD-ROM - a read-only storage device read by a CD-ROM player.
cell - 1. a 1° × 1° area of coverage. DTED (Digital Terrain Elevation Data) are distributed
in cells. 2. a pixel; grid cell.
cell size - the area that one pixel represents, measured in map units. For example, one
cell in the image may represent an area 30 feet by 30 feet on the ground.
Sometimes called “pixel size.”
center of the scene - the center pixel of the center scan line; the center of a satellite
image.
character - a number, letter, or punctuation symbol. One character usually occupies one
byte when stored on a computer.
check point - additional ground points used to independently verify the degree of
accuracy of a triangulation.
check point analysis - the act of using check points to independently verify the degree
of accuracy of a triangulation.
chi-square distribution - a non-symmetrical data distribution, whose curve is charac-
terized by a “tail” that represents the highest and least frequent data values. In
classification thresholding, the “tail” represents the pixels that are most likely to
be classified incorrectly.
choropleth map - a map portraying properties of a surface using area symbols. Area
symbols usually represent categorized classes of the mapped phenomenon.
city-block distance - the physical or spectral distance that is measured as the sum of
distances that are perpendicular to one another.
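For illustration, a minimal Python sketch (the two-band data file values are hypothetical):

    # City-block distance between two pixels in two-band spectral space:
    # the sum of the absolute differences along each band axis.
    pixel_a = (52, 110)
    pixel_b = (60, 98)
    city_block = sum(abs(x - y) for x, y in zip(pixel_a, pixel_b))
    print(city_block)   # 8 + 12 = 20
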
class - a set of pixels in a GIS file which represent areas that share some condition.
Classes are usually formed through classification of a continuous raster layer.
class value - a data file value of a thematic file which identifies a pixel as belonging to
a particular class.
classification - the process of assigning the pixels of a continuous raster image to
discrete categories.

classification accuracy table - for accuracy assessment, a list of known values of
reference pixels, supported by some ground truth or other a priori knowledge of
the true class, and a list of the classified values of the same pixels, from a classified
file to be tested.
classification scheme - (or classification system) a set of target classes. The purpose of
such a scheme is to provide a framework for organizing and categorizing the
information that can be extracted from the data.
clean - the process of constructing the topology of a vector layer by processing lines and
polygons. See build.
client - on a computer on a network, a program that accesses a server utility that is on
another machine on the network.
clump - a contiguous group of pixels in one class. Also called raster region.
clustering - unsupervised training; the process of generating signatures based on the
natural groupings of pixels in image data when they are plotted in spectral space.
clusters - the natural groupings of pixels when plotted in spectral space.
coefficient - one number in a matrix, or a constant in a polynomial expression.
coefficient of variation - a scene-derived parameter that is used as input to the Sigma
and Local Statistics radar enhancement filters.
collinearity - a non-linear mathematical model that photogrammetric triangulation is
based upon. Collinearity equations describe the relationship among image
coordinates, ground coordinates, and orientation parameters.
colorcell - the location where the data file values are stored in the colormap. The red,
green, and blue values assigned to the colorcell control the brightness of the color
guns for the displayed pixel.
color guns - on a display device, the red, green, and blue phosphors that are illuminated
on the picture tube in varying brightnesses to create different colors. On a color
printer, color guns are the devices that apply cyan, yellow, magenta, and
sometimes black ink to paper.
colormap - an ordered set of colorcells, which is used to perform a function on a set of
input values.
color printer - a printer that prints color or black-and-white imagery, as well as text.
ERDAS IMAGINE supports several color printers.
color scheme - a set of lookup tables that assigns red, green, and blue brightness values
to classes when a layer is displayed.
composite map - a map on which the combined information from different thematic
maps is presented.
compromise projection - a map projection that compromises among two or more of the
map projection properties of conformality, equivalence, equidistance, and true
direction.

computer compatible tape (CCT) - a magnetic tape used to transfer and store digital
data.
confidence level - the percentage of pixels that are believed to be misclassified.
conformal - a map or map projection that has the property of conformality, or true
shape.
conformality - the property of a map projection to represent true shape, wherein a
projection preserves the shape of any small geographical area. This is accom-
plished by exact transformation of angles around points.
conic projection - a map projection that is created from projecting the surface of the
earth to the surface of a cone.
connectivity radius - the distance (in pixels) that pixels can be from one another to be
considered contiguous. The connectivity radius is used in connectivity analysis.
contiguity analysis - a study of the ways in which pixels of a class are grouped together
spatially. Groups of contiguous pixels in the same class, called raster regions, or
“clumps,” can be identified by their sizes and manipulated.
contingency matrix - a matrix which contains the number and percentages of pixels
that were classified as expected.
continuous - a term used to describe raster data layers that contain quantitative and
related values. See continuous data.
continuous data - a type of raster data that are quantitative (measuring a characteristic)
and have related, continuous values, such as remotely sensed images (e.g.,
Landsat, SPOT, etc.).
contour map - a map in which a series of lines connects points of equal elevation.
contrast stretch - the process of reassigning a range of values to another range, usually
according to a linear function. Contrast stretching is often used in displaying
continuous raster layers, since the range of data file values is usually much
narrower than the range of brightness values on the display device.
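A minimal Python sketch of a linear stretch, assuming a hypothetical input range of 30-120 mapped to brightness values 0-255 (an illustration of the idea, not the IMAGINE implementation):

    def stretch(value, in_min=30, in_max=120, out_min=0, out_max=255):
        value = min(max(value, in_min), in_max)          # clip to the input range
        scale = (out_max - out_min) / (in_max - in_min)
        return round(out_min + (value - in_min) * scale)

    print([stretch(v) for v in (30, 75, 120)])   # [0, 128, 255]
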
control point - a point with known coordinates in the ground coordinate system,
expressed in the units of the specified map projection.
convolution filtering - the process of averaging small sets of pixels across an image.
Used to change the spatial frequency characteristics of an image.
convolution kernel - a matrix of numbers that is used to average the value of each pixel
with the values of surrounding pixels in a particular way. The numbers in the
matrix serve to weight this average toward particular pixels.
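For illustration, a Python sketch applying a hypothetical 3 by 3 averaging kernel to the center pixel of a 3 by 3 window (a sketch of the idea, not the IMAGINE convolution code):

    kernel = [[1, 1, 1],
              [1, 1, 1],
              [1, 1, 1]]
    window = [[8, 6, 6],        # data file values surrounding the pixel
              [2, 8, 6],
              [2, 2, 8]]
    weighted_sum = sum(kernel[i][j] * window[i][j]
                       for i in range(3) for j in range(3))
    output = weighted_sum / sum(sum(row) for row in kernel)   # divide by kernel sum
    print(round(output, 2))   # 48 / 9 = 5.33
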
coordinate system - a method for expressing location. In two-dimensional coordinate
systems, locations are expressed by a column and row, also called x and y.
correlation threshold - a value used in rectification to determine whether to accept or
discard ground control points. The threshold is an absolute value threshold
ranging from 0.000 to 1.000.

correlation windows - windows which consist of a local neighborhood of pixels. One
example is square neighborhoods (e.g., 3 × 3, 5 × 5, 7 × 7 pixels).
corresponding GCPs - the ground control points that are located in the same
geographic location as the selected GCPs, but were selected in different files.
covariance - measures the tendencies of data file values for the same pixel, but in
different bands, to vary with each other in relation to the means of their
respective bands (the relationship is assumed to be linear). Covariance is defined as the
average product of the differences between the data file values in each band and
the mean of each band.
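A short Python sketch of this definition, using hypothetical data file values from two bands for the same four pixels:

    band1 = [10, 12, 14, 16]
    band2 = [20, 19, 23, 26]
    mean1 = sum(band1) / len(band1)    # 13.0
    mean2 = sum(band2) / len(band2)    # 22.0
    # average product of the differences from each band's mean
    covariance = sum((x - mean1) * (y - mean2)
                     for x, y in zip(band1, band2)) / len(band1)
    print(covariance)   # 5.5
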
covariance matrix - a square matrix which contains all of the variances and covariances
within the bands in a data file.
credits - on maps, the text that can include the data source and acquisition date,
accuracy information, and other details that are required or helpful to readers.
crisp filter - a filter used to sharpen the overall scene luminance without distorting the
interband variance content of the image.
cross correlation - a calculation which computes the correlation coefficient of the gray
values between the template window and the search window.
cubic convolution - a method of resampling which uses the data file values of sixteen
pixels in a 4 by 4 window to calculate an output data file value with a cubic
function.
current directory - also called “default directory,” it is the directory that you are “in.”
It is the default path.
cylindrical projection - a map projection that is created from projecting the surface of
the earth to the surface of a cylinder.

D

dangling node - a line that does not close to form a polygon, or that extends past an
intersection.
data - 1. in the context of remote sensing, a computer file containing numbers which
represent a remotely sensed image, and can be processed to display that image.
2. a collection of numbers, strings, or facts that require some processing before
they are meaningful.
database (one word) - a relational data structure usually used to store tabular infor-
mation. Examples of popular databases include SYBASE, dBase, Oracle, INFO,
etc.
data base (two words) - in ERDAS IMAGINE, a set of continuous and thematic raster
layers, vector layers, attribute information, and other kinds of data which
represent one area of interest. A data base is usually part of a geographic infor-
mation system.
data file - a computer file that contains numbers which represent an image.

data file value - each number in an image file. Also called “file value,” “image file
value,” “digital number (DN),” “brightness value,” “pixel.”
datum - see reference plane.
decision rule - an equation or algorithm that is used to classify image data after signa-
tures have been created. The decision rule is used to process the data file values
based upon the signature statistics.
decorrelation stretch - a technique used to stretch the principal components of an
image, not the original image.
default directory - see current directory.
degrees of freedom - when chi-square statistics are used in thresholding, the number
of bands in the classified file.
DEM - see digital elevation model.
densify - the process of adding vertices to selected lines at a user-specified tolerance.
density - 1. the number of bits per inch on a magnetic tape. 9-track tapes are commonly
stored at 1600 and 6250 bpi. 2. a neighborhood analysis technique that outputs the
number of pixels that have the same value as the analyzed pixel in a user-
specified window.
derivative map - a map created by altering, combining, or analyzing other maps.
descriptor - see attribute.
desktop scanners - general purpose devices which lack the image detail and geometric
accuracy of photogrammetric quality units, but are much less expensive.
detector - the device in a sensor system that records electromagnetic radiation.
developable surface - a flat surface, or a surface that can be easily flattened by being cut
and unrolled, such as the surface of a cone or a cylinder.
digital elevation model (DEM) - continuous raster layers in which data file values
represent elevation. DEMs are available from the USGS at 1:24,000 and 1:250,000
scale, and can also be produced with terrain analysis programs and IMAGINE
OrthoMAX.
digital orthophoto - an aerial photo or satellite scene which has been transformed by
the orthogonal projection, yielding a map that is free of most significant
geometric distortions.
Digital Photogrammetry - photogrammetry as applied to digital images that are stored
and processed on a computer. Digital images can be scanned from photographs
or can be directly captured by digital cameras.
digital terrain model (DTM) - a discrete expression of topography in a data array,
consisting of a group of planimetric coordinates (X,Y) and the elevations of the
ground points and breaklines.
digitized raster graphic (DRG) - a digital replica of Defense Mapping Agency
hardcopy graphic products. See also ADRG.

digitizing - any process that converts non-digital data into numeric data, usually to be
stored on a computer. In ERDAS IMAGINE, digitizing refers to the creation of
vector data from hardcopy materials or raster images that are traced using a
digitizer keypad on a digitizing tablet, or a mouse on a display device.
dimensionality - a term referring to the number of bands being classified. For example,
a data file with 3 bands is said to be 3-dimensional, since 3-dimensional spectral
space is plotted to analyze the data.
directory - an area of a computer disk that is designated to hold a set of files. Usually,
directories are arranged in a tree structure, in which directories can also contain
many levels of subdirectories.
displacement - the degree of geometric distortion for a point that is not on the nadir
line.
display device - the computer hardware consisting of a memory board and a monitor.
It displays a visible image from a data file or from some user operation.
display driver - the ERDAS IMAGINE utility that interfaces between the computer
running IMAGINE software and the display device.
display memory - the subset of image memory that is actually viewed on the display
screen.
display pixel - one grid location on a display device or printout.
display resolution - the number of pixels that can be viewed on the display device
monitor, horizontally and vertically (i.e., 512 × 512 or 1024 × 1024).
distance - see Euclidean distance, spectral distance.
distance image file - a one-band, 16-bit file that can be created in the classification
process, in which each data file value represents the result of the distance
equation used in the program. Distance image files generally have a chi-square
distribution.
distribution - the set of frequencies with which an event occurs, or the set of probabil-
ities that a variable will have a particular value.
distribution rectangles (DRs) - the geographic data sets into which ADRG data are
divided.
dithering - a display technique that is used in ERDAS IMAGINE to allow a smaller set
of colors to appear to be a larger set of colors.
divergence - a statistical measure of distance between two or more signatures. Diver-
gence can be calculated for any combination of bands that will be used in the
classification; bands that diminish the results of the classification can be ruled
out.
diversity - a neighborhood analysis technique that outputs the number of different
values within a user-specified window.
DLG - Digital Line Graph. A vector data format created by the USGS.

dot patterns - the matrices of dots used to represent brightness values on hardcopy
maps and images.
double precision - a measure of accuracy in which 15 significant digits can be stored for
a coordinate.
downsampling - the skipping of pixels during the display or processing of scanned
data.
DTM - see digital terrain model.
DXF - Drawing Exchange Format. A format for storing vector data in ASCII files, used by
AutoCAD software.
dynamic range - see radiometric resolution.

E

edge detector - a convolution kernel, usually a zero-sum kernel, which smooths out or
zeros out areas of low spatial frequency and creates a sharp contrast where spatial
frequency is high, which is at the edges between homogeneous groups of pixels.
edge enhancer - a high-frequency convolution kernel that brings out the edges between
homogeneous groups of pixels. Unlike an edge detector, it only highlights edges;
it does not necessarily eliminate other features.
eigenvalue - the length of a principal component which measures the variance of a
principal component band. See also principal components.
eigenvector - the direction of a principal component represented as coefficients in an
eigenvector matrix which is computed from the eigenvalues. See also principal
components.
electromagnetic radiation - the energy transmitted through space in the form of electric
and magnetic waves.
electromagnetic spectrum - the range of electromagnetic radiation extending from
cosmic waves to radio waves, characterized by frequency or wavelength.
element - an entity of vector data, such as a point, a line, or a polygon.
elevation data - see terrain data, DEM.
ellipse - a two-dimensional figure that is formed in a two-dimensional scatterplot when
both bands plotted have normal distributions. The ellipse is defined by the
standard deviations of the input bands. Ellipse plots are often used to test signa-
tures before classification.
end-of-file mark (EOF) - usually a half-inch strip of blank tape which signifies the end
of a file that is stored on magnetic tape.
end-of-volume mark (EOV) - usually three EOFs marking the end of a tape.
enhancement - the process of making an image more interpretable for a particular
application. Enhancement can make important features of raw, remotely sensed
data more interpretable to the human eye.

entity - an AutoCAD drawing element that can be placed in an AutoCAD drawing with
a single command.
EOSAT - Earth Observation Satellite Company. A private company that directs the
Landsat satellites and distributes Landsat imagery.
ephemeris data - contained in the header of the data file of a SPOT scene, provides
information about the recording of the data and the satellite orbit.
epipolar stereopair - a stereopair without y-parallax.
equal area - see equivalence.
equatorial aspect - a map projection that is centered around the equator or a point on
the equator.
equidistance - the property of a map projection to represent true distances from an
identified point.
equivalence - the property of a map projection to represent all areas in true proportion
to one another.
error matrix - in classification accuracy assessment, a square matrix showing the
number of reference pixels that have the same values as the actual classified
points.
ERS-1 - the European Space Agency’s (ESA) radar satellite launched in July 1991,
currently provides the most comprehensive radar data available. ERS-2 was
launched in 1995.
ETAK MapBase - an ASCII digital street centerline map product available from ETAK,
Inc. (Menlo Park, California).
Euclidean distance - the distance, either in physical or abstract (e.g., spectral) space,
that is computed based on the equation of a straight line.
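For illustration, a Python sketch of the straight-line form in two-band spectral space (the pixel values are hypothetical):

    import math

    pixel_a = (52, 110)
    pixel_b = (60, 98)
    distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(pixel_a, pixel_b)))
    print(round(distance, 2))   # sqrt(8**2 + 12**2) = 14.42
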
exposure station - during image acquisition, each point in the flight path at which the
camera exposes the film.
extend - the process of moving selected dangling lines up a specified distance so that
they intersect existing lines.
extension - the three letters after the period in a file name that usually identify the type
of file.
extent - 1. the image area to be displayed in a Viewer. 2. the area of the earth’s surface
to be mapped.
exterior orientation - all images of a block of aerial photographs in the ground
coordinate system are computed during photogrammetric triangulation, using a
limited number of points with known coordinates. The exterior orientation of an
image consists of the exposure station and the camera attitude at this moment.
exterior orientation parameters - the perspective center’s ground coordinates in a
specified map projection and three rotation angles around the coordinate axes.
extract - selected bands of a complete set of NOAA AVHRR data.

F

false color - a color scheme in which features have “expected” colors. For instance,
vegetation is green, water is blue, etc. These are not necessarily the true colors of
these features.
false easting - an offset between the x-origin of a map projection and the x-origin of a
map. Usually used so that no x-coordinates are negative.
false northing - an offset between the y-origin of a map projection and the y-origin of a
map. Usually used so that no y-coordinates are negative.
fast format - a type of BSQ format used by EOSAT to store Landsat TM (Thematic
Mapper) data.
feature based matching - an image matching technique that determines the correspon-
dence between two image features.
feature collection - the process of identifying, delineating, and labeling various types
of natural and man-made phenomena from remotely-sensed images.
feature extraction - the process of studying and locating areas and objects on the
ground and deriving useful information from images.
feature space - an abstract space that is defined by spectral units (such as an amount of
electromagnetic radiation).
feature space area of interest - a user-selected area of interest (AOI) that is selected
from a feature space image.
feature space image - a graph of the data file values of one band of data against the
values of another band (often called a scatterplot).
fiducial center - the center of an aerial photo.
fiducials - four or eight reference markers fixed on the frame of an aerial metric camera
and visible in each exposure. Fiducials are used to compute the transformation
from data file to image coordinates.
field - in an attribute data base, a category of information about each class or feature,
such as “Class name” and “Histogram.”
field of view - in perspective views, an angle which defines how far the view will be
generated to each side of the line of sight.
file coordinates - the location of a pixel within the file in x,y coordinates. The upper left
file coordinate is usually 0,0.
file pixel - the data file value for one data unit in an image file.
file specification or filespec - the complete file name, including the drive and path, if
necessary. If a drive or path is not specified, the file is assumed to be in the current
drive and directory.
filled - referring to polygons; a filled polygon is solid or has a pattern, but is not trans-
parent. An unfilled polygon is simply a closed vector which outlines the area of
the polygon.

filtering - the removal of spatial or spectral features for data enhancement. Convolution
filtering is one method of spatial filtering. Some texts may use the terms
“filtering” and “spatial filtering” synonymously.
flip - the process of reversing the from-to direction of selected lines or links.
focal length - the orthogonal distance from the perspective center to the image plane.
focal operations - filters which use a moving window to calculate new values for each
pixel in the image based on the values of the surrounding pixels.
focal plane - the plane of the film or scanner used in obtaining an aerial photo.
Fourier analysis - an image enhancement technique that was derived from signal
processing.
from-node - the first vertex in a line.
full set - all bands of an NOAA AVHRR (Advanced Very High Resolution Radiometer)
data set.
function memories - areas of the display device memory that store the lookup tables,
which translate image memory values into brightness values.
function symbol - an annotation symbol that represents an activity. For example, on a
map of a state park, a symbol of a tent would indicate the location of a camping
area.
Fuyo 1 (JERS-1) - the Japanese radar satellite launched in February 1992.

G

GAC - see global area coverage.
GCP - see ground control point.
GCP matching - for image to image rectification, a ground control point (GCP) selected
in one image is precisely matched to its counterpart in the other image using the
spectral characteristics of the data and the transformation matrix.
GCP prediction - the process of picking a ground control point (GCP) in either
coordinate system and automatically locating that point in the other coordinate
system based on the current transformation parameters.
generalize - the process of weeding vertices from selected lines using a specified
tolerance.
geocentric coordinate system - a coordinate system which has its origin at the center of
the earth ellipsoid. The ZG-axis equals the rotational axis of the earth, and the XG-
axis passes through the Greenwich meridian. The YG-axis is perpendicular to
both the ZG-axis and XG-axis, so as to create a three-dimensional coordinate
system that follows the right hand rule.
geocoded data - an image(s) that has been rectified to a particular map projection and
cell size and has had radiometric corrections applied.

geographic information system (GIS) - a unique system designed for a particular
application that stores, enhances, combines, and analyzes layers of geographic
data to produce interpretable information. A GIS may include computer images,
hardcopy maps, statistical data, and any other data needed for a study, as well as
computer software and human knowledge. GISs are used for solving complex
geographic planning and management problems.
geographical coordinates - a coordinate system for explaining the surface of the earth.
Geographical coordinates are defined by latitude and by longitude (Lat/Lon),
with respect to an origin located at the intersection of the equator and the prime
(Greenwich) meridian.
geometric correction - the correction of errors of skew, rotation, and perspective in raw,
remotely sensed data.
georeferencing - the process of assigning map coordinates to image data and resam-
pling the pixels of the image to conform to the map projection grid.
gigabyte (Gb) - about one billion bytes.
GIS - see geographic information system.
GIS file - a single-band ERDAS Ver. 7.X data file in which pixels are divided into
discrete categories.
global area coverage (GAC) - a type of NOAA AVHRR (Advanced Very High
Resolution Radiometer) data with a spatial resolution of 4 × 4 km.
global operations - functions which calculate a single value for an entire area, rather
than for each pixel like focal functions.
.gmd file - the ERDAS IMAGINE graphical model file created with Model Maker
(Spatial Modeler).
gnomonic - an azimuthal projection obtained from a perspective at the center of the
earth.
graphical modeling - a technique used to combine data layers in an unlimited number
of ways using icons to represent input data, functions, and output data. For
example, an output layer created from modeling can represent the desired combi-
nation of themes from many input layers.
graphical model - a model created with Model Maker (Spatial Modeler). Graphical
models are put together like flow charts and are stored in .gmd files.
graticule - the network of parallels of latitude and meridians of longitude applied to the
global surface and projected onto maps.
gray scale - a “color” scheme with a gradation of gray tones ranging from black to
white.
great circle - an arc of a circle for which the center is the center of the earth. A great circle
is the shortest possible surface route between two points on the earth.
grid cell - a pixel.

grid lines - intersecting lines that indicate regular intervals of distance based on a
coordinate system. Sometimes called a graticule.
ground control point (GCP) - specific pixel in image data for which the output map
coordinates (or other output coordinates) are known. GCPs are used for
computing a transformation matrix, for use in rectifying an image.
ground coordinate system - a three-dimensional coordinate system which utilizes a
known map projection. Ground coordinates (X,Y,Z) are usually expressed in feet
or meters.
ground truth - data that are taken from the actual area being studied.
ground truthing - the acquisition of knowledge about the study area from field work,
analysis of aerial photography, personal experience, etc. Ground truth data are
considered to be the most accurate (true) data available about the area of study.

H halftoning - the process of using dots of varying size or arrangements (rather than
varying intensity) to form varying degrees of a color.
hardcopy output - any output of digital computer (softcopy) data to paper.
header file - a file usually found before the actual image data on tapes or CD-ROMs that
contains information about the data, such as number of bands, upper left coordi-
nates, map projection, etc.
header record - the first part of an image file that contains general information about
the data in the file, such as the number of columns and rows, number of bands,
data base coordinates of the upper left corner, and the pixel depth. The contents
of header records vary depending on the type of data.
high-frequency kernel - a convolution kernel that increases the spatial frequency of an
image. Also called “high-pass kernel.”
High Resolution Picture Transmission (HRPT) - the direct transmission of AVHRR
data in real-time with the same resolution as Local Area Coverage (LAC).
High Resolution Visible (HRV) sensor - a pushbroom scanner on a SPOT satellite that
takes a sequence of line images while the satellite circles the earth.

histogram - a graph of data distribution, or a chart of the number of pixels that have
each possible data file value. For a single band of data, the horizontal axis of a
histogram graph is the range of all possible data file values. The vertical axis is
the number of pixels that have each data value.
histogram equalization - the process of redistributing pixel values so that there are
approximately the same number of pixels with each value within a range. The
result is a nearly flat histogram.
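A minimal Python sketch of the idea, assuming a short, hypothetical list of 3-bit data file values (the actual IMAGINE algorithm is more involved):

    from collections import Counter

    values = [0, 0, 1, 1, 1, 2, 2, 3, 4, 4, 4, 4, 5, 6, 7, 7]
    histogram = Counter(values)
    total = len(values)
    levels = 8                    # output values 0-7
    cumulative = 0
    lut = {}
    for value in sorted(histogram):
        cumulative += histogram[value]
        # map each input value through the cumulative histogram
        lut[value] = round((cumulative / total) * (levels - 1))
    print([lut[v] for v in values])
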
histogram matching - the process of determining a lookup table that will convert the
histogram of one band of an image or one color gun to resemble another
histogram.

horizontal control - the horizontal distribution of control points in aerial triangulation
(x,y - planimetry).
host workstation - a CPU, keyboard, mouse, and a display.
hue - a component of IHS (intensity, hue, saturation) which is representative of the
color or dominant wavelength of the pixel. It varies from 0 to 360. Blue = 0 (and
360), magenta = 60, red = 120, yellow = 180, green = 240, and cyan = 300.
hyperspectral sensors - the imaging sensors that record multiple bands of data, such as
the AVIRIS with 224 bands.

I

IGES - Initial Graphics Exchange Standard files are often used to transfer CAD data
between systems. IGES Version 3.0 format, published by the U.S. Department of
Commerce, is in uncompressed ASCII format only.
IHS - intensity, hue, saturation. An alternate color space from RGB (red, green, blue).
This system is advantageous in that it presents colors more nearly as perceived
by the human eye. See intensity, hue, and saturation.
image - a picture or representation of an object or scene on paper or a display screen.
Remotely sensed images are digital representations of the earth.
image algebra - any type of algebraic function that is applied to the data file values in
one or more bands.
image center - the center of the aerial photo or satellite scene.
image coordinate system - the location of each point in the image is expressed for
purposes of photogrammetric triangulation.
image data - digital representations of the earth that can be used in computer image
processing and geographic information system (GIS) analyses.
image file - a file containing raster image data. Image files in ERDAS IMAGINE have
the extension .img. Image files from the ERDAS Ver. 7.X series software have the
extension .LAN or .GIS.
image matching - the automatic acquisition of corresponding image points on the
overlapping area of two images.
image memory - the portion of the display device memory that stores data file values
(which may be transformed or processed by the software that accesses the display
device).
image pair - see stereopair.
image processing - the manipulation of digital image data, including (but not limited
to) enhancement, classification, and rectification operations.
image pyramid - a data structure consisting of the same image represented several
times, at a decreasing spatial resolution each time. Each level of the pyramid
contains the image at a particular resolution.

image scale - expresses the average ratio between a distance in the image and the same
distance on the ground. It is computed as focal length divided by the flying height
above the mean ground elevation.
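For example (hypothetical values), a 152 mm focal length and a flying height of 7,600 m above the mean ground elevation give an image scale of 0.152 m ÷ 7,600 m, or approximately 1:50,000.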
image space coordinate system - identical to image coordinates, except that it adds a
third axis (z) which is used to describe positions inside the camera. The units are
usually in millimeters or microns.
.img file - an ERDAS IMAGINE file that stores continuous or thematic raster layers.
inclination - the angle between a vertical on the ground at the center of the scene and a
light ray from the exposure station, which defines the degree of off-nadir viewing
when the scene was recorded.
indexing - a function applied to thematic layers that adds the data file values of two or
more layers together, creating a new output layer. Weighting factors can be
applied to one or more layers to add more importance to those layers in the final
sum.
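For illustration, a Python sketch of a weighted index over one row of pixels from two hypothetical thematic layers:

    slope_class = [1, 2, 3]              # class values from a slope layer
    soils_class = [3, 1, 2]              # class values from a soils layer
    weights = {"slope": 2, "soils": 1}   # slope is weighted more heavily
    index = [weights["slope"] * s + weights["soils"] * t
             for s, t in zip(slope_class, soils_class)]
    print(index)   # [5, 5, 8]
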
index map - a reference map that outlines the mapped area, identifies all of the
component maps for the area if several map sheets are required, and identifies all
adjacent map sheets.
indices - the process used to create output images by mathematically combining the
DN (digital number) values of different bands.
information - something that is independently meaningful, as opposed to data, which
are not independently meaningful.
initialization - a process that insures that all values in a file or in computer memory are
equal, until additional information is added or processed to overwrite these
values. Usually the initialization value is 0. If initialization is not performed on a
data file, there could be random data values in the file.
inset map - a map that is an enlargement of some congested area of a smaller scale map,
and that is usually placed on the same sheet with the smaller scale main map.
instantaneous field of view (IFOV) - a measure of the area viewed by a single detector
on a scanning system in a given instant in time.
intensity - a component of IHS (intensity, hue, saturation) which is the overall
brightness of the scene and varies from 0 (black) to 1 (white).
interior orientation - defines the geometry of an image’s sensor.
intersection - the area or set that is common to two or more input areas or sets.
interval data - a type of data in which thematic class values have a natural sequence,
and in which the distances between values are meaningful.
isarithmic map - a map that uses isarithms (lines connecting points of the same value
for any of the characteristics used in the representation of surfaces) to represent a
statistical surface. (Also called an isometric map.)

ISODATA clustering - Iterative Self-Organizing Data Analysis Technique; a method of
clustering that uses spectral distance as in the sequential method, but iteratively
classifies the pixels, redefines the criteria for each class, and classifies again, so
that the spectral distance patterns in the data gradually emerge.
island - a single line that connects with itself.
isopleth map - a map on which isopleths (lines representing quantities that cannot exist
at a point, such as population density) are used to represent some selected
quantity.
iterative - a term used to describe a process in which some operation is performed
repeatedly.

J

JERS-1 (Fuyo 1) - the Japanese radar satellite launched in February 1992.
join - the process of interactively entering the side lot lines when the front and rear lines
have already been established.

K

Kappa coefficient - a number that expresses the proportionate reduction in error
generated by a classification process compared with the error of a completely
random classification.
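For illustration, a Python sketch computing Kappa from a small, hypothetical two-class error matrix (rows are reference classes, columns are classified classes):

    matrix = [[35, 5],
              [10, 50]]
    n = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(2)) / n      # overall accuracy
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(matrix[i][j] for i in range(2)) for j in range(2)]
    # agreement expected from a completely random classification
    chance = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    kappa = (observed - chance) / (1 - chance)
    print(round(kappa, 3))   # 0.694
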
kernel - see convolution kernel.

L

label - in annotation, the text that conveys important information to the reader about
map features.
label point - a point within a polygon that defines that polygon.
LAC - see local area coverage.
.LAN files - multiband ERDAS Ver. 7.X image files (the name originally derived from
the Landsat satellite). LAN files usually contain raw or enhanced remotely sensed
data.
land cover map - a map of the visible ground features of a scene, such as vegetation,
bare land, pasture, urban areas, etc.
Landsat - a series of earth-orbiting satellites that gather Multispectral Scanner (MSS) and
Thematic Mapper (TM) imagery, operated by EOSAT.
large scale - a description used to represent a map or data file having a large ratio
between the area on the map (such as inches or pixels) and the area that is repre-
sented (such as feet). In large-scale image data, each pixel represents a small area
on the ground, such as SPOT data, with a spatial resolution of 10 or 20 meters.

layer - 1. a band or channel of data. 2. a single band or set of three bands displayed using
the red, green, and blue color guns of the ERDAS IMAGINE Viewer. A layer
could be a remotely sensed image, an aerial photograph, an annotation layer, a
vector layer, an area of interest layer, etc. 3. a component of a GIS data base that
contains all of the data for one theme. A layer consists of a thematic .img file and
may also include attributes.
least squares correlation - uses the least squares estimation to derive parameters that
best fit a search window to a reference window.
least squares regression - the method used to calculate the transformation matrix from
the GCPs (ground control points). This method is discussed in statistics
textbooks.
legend - the reference that lists the colors, symbols, line patterns, shadings, and other
annotation that is used on a map, and their meanings. The legend often includes
the map’s title, scale, origin, and other information.
lettering - the manner in which place names and other labels are added to a map,
including letter spacing, orientation, and position.
level 1A (SPOT) - an image which corresponds to raw sensor data to which only radio-
metric corrections have been applied.
level 1B (SPOT) - an image that has been corrected for the earth’s rotation and to make
all pixels 10 × 10 meters on the ground. Pixels are resampled from the level 1A sensor
data by cubic polynomials.
level slice - the process of applying a color scheme by equally dividing the input values
(image memory values) into a certain number of bins, and applying the same
color to all pixels in each bin. Usually, a ROYGBIV (red, orange, yellow, green,
blue, indigo, violet) color scheme is used.
line - 1. a vector data element consisting of a line (the set of pixels directly between two
points), or an unclosed set of lines. 2. a row of pixels in a data file.
line dropout - a data error that occurs when a detector in a satellite either completely
fails to function or becomes temporarily overloaded during a scan. The result is
a line, or partial line, of data with incorrect data file values creating a horizontal
streak until the detector(s) recovers, if it recovers.
linear - a description of a function that can be graphed as a straight line or a series of
lines. Linear equations (transformations) can generally be expressed in the form
of the equation of a line or plane. Also called “1st-order.”
linear contrast stretch - an enhancement technique that outputs new values at regular
intervals.
linear transformation - a 1st-order rectification. A linear transformation can change
location in X and/or Y, scale in X and/or Y, skew in X and/or Y, and rotation.
line of sight - in perspective views, the point(s) and direction from which the viewer is
looking into the image.

local area coverage (LAC) - a type of NOAA AVHRR data with a spatial resolution of
1.1 × 1.1 km.
logical record - a series of bytes that form a unit on a 9-track tape. For example, all the
data for one line of an image may form a logical record. One or more logical
records make up a physical record on a tape.
long wave infrared region (LWIR) - the thermal or far-infrared region of the electro-
magnetic spectrum.
lookup table (LUT) - an ordered set of numbers which is used to perform a function on
a set of input values. To display or print an image, lookup tables translate data
file values into brightness values.
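A minimal Python sketch (hypothetical 3-bit example) of translating data file values into brightness values through a lookup table:

    lut = [0, 36, 73, 109, 146, 182, 219, 255]   # one entry per input value 0-7
    data_file_values = [0, 3, 7, 5]
    brightness_values = [lut[v] for v in data_file_values]
    print(brightness_values)   # [0, 109, 255, 182]
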
low-frequency kernel - a convolution kernel that decreases spatial frequency. Also
called “low-pass kernel.”
LUT - see lookup table.

M

magnify - the process of displaying one file pixel over a block of display pixels. For
example, if the magnification factor is 3, then each file pixel will take up a block
of 3 × 3 display pixels. Magnification differs from zooming in that the magnified
image is loaded directly to image memory.
Mahalanobis distance - a classification decision rule that is similar to the minimum
distance decision rule, except that a covariance matrix is used in the equation.
majority - a neighborhood analysis technique that outputs the most common value of
the data file values in a user-specified window.
map - a graphic representation of spatial relationships on the earth or other planets.
map coordinates - a system of expressing locations on the earth’s surface using a
particular map projection, such as Universal Transverse Mercator (UTM), State
Plane, or Polyconic.
map frame - an annotation element that indicates where an image will be placed in a
map composition.
map projection - a method of representing the three-dimensional spherical surface of a
planet on a two-dimensional map surface. All map projections involve the
transfer of latitude and longitude onto an easily flattened surface.
matrix - a set of numbers arranged in a rectangular array. If a matrix has i rows and j
columns, it is said to be an i by j matrix.
matrix analysis - a method of combining two thematic layers in which the output layer
contains a separate class for every combination of two input classes.
matrix object - in Model Maker (Spatial Modeler), a set of numbers in a two-dimen-
sional array.
maximum - a neighborhood analysis technique that outputs the greatest value of the
data file values in a user-specified window.

maximum likelihood - a classification decision rule based on the probability that a
pixel belongs to a particular class. The basic equation assumes that these proba-
bilities are equal for all classes, and that the input bands have normal distribu-
tions.
.mdl file - an ERDAS IMAGINE script model created with the Spatial Modeler
Language.
mean - 1. the statistical average; the sum of a set of values divided by the number of
values in the set. 2. a neighborhood analysis technique that outputs the mean
value of the data file values in a user-specified window.
mean vector - an ordered set of means for a set of variables (bands). For a data file, the
mean vector is the set of means for all bands in the file.
measurement vector - the set of data file values for one pixel in all bands of a data file.
median - 1. the central value in a set of data such that an equal number of values are
greater than and less than the median. 2. a neighborhood analysis technique that
outputs the median value of the data file values in a user-specified window.
megabyte (Mb) - about one million bytes.
memory resident - a term referring to the occupation of a part of a computer’s RAM
(random access memory), so that a program is available for use without being
loaded into memory from disk.
mensuration - the measurement of linear or areal distance.
meridian - a line of longitude, going north and south. See geographical coordinates.
minimum - a neighborhood analysis technique that outputs the least value of the data
file values in a user-specified window.
minimum distance - a classification decision rule that calculates the spectral distance
between the measurement vector for each candidate pixel and the mean vector
for each signature. Also called spectral distance.
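For illustration, a Python sketch of the rule with hypothetical two-band class means and one candidate pixel:

    import math

    means = {"water": (20, 10), "forest": (40, 60), "urban": (90, 85)}
    pixel = (45, 55)

    def spectral_distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # assign the pixel to the class whose mean vector is closest
    assigned = min(means, key=lambda name: spectral_distance(pixel, means[name]))
    print(assigned)   # forest
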
minority - a neighborhood analysis technique that outputs the least common value of
the data file values in a user-specified window.
mode - the most commonly-occurring value in a set of data. In a histogram, the mode
is the peak of the curve.
model - in a GIS, the set of expressions, or steps, that define your criteria and create an
output layer.
modeling - the process of creating new layers from combining or operating upon
existing layers. Modeling allows the creation of new classes from existing classes
and the creation of a small set of images - perhaps even a single image - which, at
a glance, contains many types of information about a scene.
modified projection - a map projection that is a modified version of another projection.
For example, the Space Oblique Mercator projection is a modification of the
Mercator projection.

monochrome image - an image produced from one band or layer, or contained in one
color gun of the display device.
morphometric map - a map representing morphological features of the earth’s surface.
mosaicking - the process of piecing together images side by side, to create a larger
image.
multispectral classification - the process of sorting pixels into a finite number of
individual classes, or categories of data, based on data file values in multiple
bands. See also classification.
multispectral imagery - satellite imagery with data recorded in two or more bands.
multispectral scanner (MSS) - Landsat satellite data acquired in 4 bands with a spatial
resolution of 57 × 79 meters.
multitemporal - data from two or more different dates.

N

nadir - the area on the ground directly beneath a scanner’s detectors.
nadir line - the average of the left and right edge lines of a Landsat image.
nadir point - the center of the nadir line in vertically viewed imagery.
nearest neighbor - a resampling method in which the output data file value is equal to
the input pixel whose coordinates are closest to the retransformed coordinates of
the output pixel.
neatline - a rectangular border printed around a map. On scaled maps, neatlines
usually have tick marks which indicate intervals of map coordinates or distance.
negative inclination - the sensors are tilted in increments of 0.6° to a maximum of 27°
to the east.
neighborhood analysis - any image processing technique that takes surrounding pixels
into consideration, such as convolution filtering and scanning.
9-track - computer compatible tapes (CCTs) that hold digital data.
node - the ending points of a line. See from-node and to-node.
nominal data - a type of data in which classes have no inherent order and are therefore
qualitative.
nonlinear - describing a function that cannot be expressed as the graph of a line or in
the form of the equation of a line or plane. Nonlinear equations usually contain
expressions with exponents. “2nd-order” or higher-order equations and transfor-
mations are nonlinear.
nonlinear transformation - a 2nd-order or higher rectification.
non-parametric signature - a signature for classification that is based on polygons or
rectangles that are defined in the feature space image for the .img file. There is no
statistical basis for a non-parametric signature; it is simply an area in a feature
space image.

normal - the state of having a normal distribution.
normal distribution - a symmetrical data distribution that can be expressed in terms of
the mean and standard deviation of the data. The normal distribution is the most
widely encountered model for probability and is characterized by the “bell
curve.” Also called “Gaussian distribution.”
normalize - a process that makes an image appear as if it were a flat surface. This
technique is used to reduce topographic effect.
number maps - maps that output actual data file values or brightness values, allowing
the analysis of the values of every pixel in a file or on the display screen.
numeric keypad - the set of numeric and/or mathematical operator keys (“+”, “-”, etc.)
that is usually on the right side of the keyboard.

O

object - in models, an input to or output from a function. See matrix object, raster
object, scalar object, table object.
oblique aspect - a map projection that is not oriented around a pole or the equator.
observation - in photogrammetric triangulation, a grouping of the image coordinates
for a control point.
off-nadir - any point that is not directly beneath a scanner’s detectors, but off to an
angle. The SPOT scanner allows off-nadir viewing.
1:24,000 - 1:24,000 scale data, also called “7.5-minute DEM” (Digital Elevation Model),
available from USGS. It is usually referenced to the UTM coordinate system and
has a spatial resolution of 30 × 30 meters.
1:250,000 - 1:250,000 scale DEM (Digital Elevation Model) data available from USGS.
Available only in arc/second format.
opacity - a measure of how opaque, or solid, a color is displayed in a raster layer.
operating system - the most basic means of communicating with the computer. It
manages the storage of information in files and directories, input from devices
such as the keyboard and mouse, and output to devices such as the monitor.
orbit - a circular, north-south and south-north path that a satellite travels above the
earth.
order - the complexity of a function, polynomial expression, or curve. In a polynomial
expression, the order is simply the highest exponent used in the polynomial. See
also linear, nonlinear.
ordinal data - a type of data that includes discrete lists of classes with an inherent order,
such as classes of streams—first order, second order, third order, etc.
orientation angle - the angle between a perpendicular to the center scan line and the
North direction in a satellite scene.
orthographic - an azimuthal projection with an infinite perspective.
orthocorrection - see orthorectification.

orthoimage - see digital orthophoto.


orthomap - an imagemap product produced from orthoimages, or orthoimage mosaics,
that is similar to a standard map in that it usually includes additional infor-
mation, such as map coordinate grids, scale bars, north arrows, and other
marginalia.
orthorectification - a form of rectification that corrects for terrain displacement and can
be used if a digital elevation model (DEM) of the study area is available.
outline map - a map showing the limits of a specific set of mapping entities such as
counties. Outline maps usually contain a very small number of details over the
desired boundaries with their descriptive codes.
overlay - 1. a function that creates a composite file containing either the minimum or
the maximum class values of the input files. “Overlay” sometimes refers generi-
cally to a combination of layers. 2. the process of displaying a classified file over
the original image to inspect the classification.
overlay file - an ERDAS IMAGINE annotation file (.ovr extension).
.ovr file - an ERDAS IMAGINE annotation file.

P

pack - to store data in a way that conserves tape or disk space.
panchromatic imagery - single-band or monochrome satellite imagery.
paneled map - a map designed to be spliced together into a large paper map. Therefore,
neatlines and tick marks appear on the outer edges of the large map.
pairwise mode - an operation mode in rectification that allows the registration of one
image to an image in another Viewer, a map on a digitizing tablet, or coordinates
entered at the keyboard.
parallel - a line of latitude, going east and west.
parallelepiped - 1. a classification decision rule, in which the data file values of the
candidate pixel are compared to upper and lower limits. 2. the limits of a paral-
lelepiped classification, especially when graphed as rectangles.
parameter - 1. any variable that determines the outcome of a function or operation. 2.
the mean and standard deviation of data, which are sufficient to describe a
normal curve.
parametric signature - a signature that is based on statistical parameters (e.g., mean and
covariance matrix) of the pixels that are in the training sample or cluster.
passive sensors - solar imaging sensors that can only receive radiation waves and
cannot transmit radiation.
path - the drive, directories, and subdirectories that specify the location of a file.
pattern recognition - the science and art of finding meaningful patterns in data, which
can be extracted through classification.

perspective center - 1. a point in the image coordinate system defined by the x and y
coordinates of the principal point and the focal length of the sensor. 2. after trian-
gulation, a point in the ground coordinate system that defines the sensor’s
position relative to the ground.
perspective projection - the projection of points by straight lines from a given
perspective point to an intersection with the plane of projection.
photogrammetric quality scanners - special devices capable of high image quality and
excellent positional accuracy. Use of this type of scanner results in geometric
accuracies similar to traditional analog and analytical photogrammetric instru-
ments.
photogrammetry - the "art, science and technology of obtaining reliable information
about physical objects and the environment through the process of recording,
measuring and interpreting photographic images and patterns of electromag-
netic radiant imagery and other phenomena." (ASP, 1980)
physical record - a consecutive series of bytes on a 9-track tape, followed by a gap, or
blank space, on the tape.
piecewise linear contrast stretch - a spectral enhancement technique used to enhance a
specific portion of data by dividing the lookup table into three sections: low,
middle, and high.
pixel - abbreviated from “picture element;” the smallest part of a picture (image).
pixel coordinate system - a coordinate system with its origin in the upper-left corner of
the image, the x-axis pointing to the right, the y-axis pointing downward, and the
unit in pixels.
pixel depth - the number of bits required to store all of the data file values in a file. For
      example, data with a pixel depth of 8, or 8-bit data, have 256 values (2⁸ = 256),
      ranging from 0 to 255.
pixel size - the physical dimension of a single light-sensitive element (e.g., 13 × 13 microns).
planar coordinates - coordinates that are defined by a column and row position on a
grid (x,y).
planar projection - see azimuthal projection.
Plane Table Photogrammetry - Prior to the invention of the airplane, photographs
taken on the ground were used to extract the geometric relationships between
objects using the principles of Descriptive Geometry.
planimetric map - a map that correctly represents horizontal distances between objects.
plan symbol - an annotation symbol that is formed after the basic outline of the object
it represents. For example, the symbol for a house might be a square, since most
houses are rectangular.
point - 1. an element consisting of a single (x,y) coordinate pair. Also called “grid cell.”
2. a vertex of an element. Also called “node.”


point ID - in rectification, a name given to GCPs in separate files that represent the same
geographic location.
point mode - a digitizing mode in which one vertex is generated each time a keypad
button is pressed.
polar aspect - a map projection that is centered around a pole.
polygon - a set of closed line segments defining an area.
polynomial - a mathematical expression consisting of variables and coefficients. A
coefficient is a constant, which is multiplied by a variable in the expression.
positive inclination - the sensors are tilted in increments of 0.6° to a maximum of 27°
      to the west.
primary colors - colors from which all other available colors are derived. On a display
monitor, the primary colors red, green, and blue are combined to produce all
other colors. On a color printer, cyan, yellow, and magenta inks are combined.
principal components - the transects of a scatterplot of two or more bands of data,
which represent the widest variance and successively smaller amounts of
variance that are not already represented. Principal components are orthogonal
(perpendicular) to one another. In principal components analysis, the data are
transformed so that the principal components become the axes of the scatterplot
of the output data.
principal component band - a band of data that is output by principal components
analysis. Principal component bands are uncorrelated and non-redundant, since
each principal component describes different variance within the original data.
principal components analysis - the process of calculating principal components and
      outputting principal component bands. It allows redundant data to be compacted
      into fewer bands; that is, the dimensionality of the data is reduced.
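As a sketch of the computation (a minimal NumPy illustration under the assumption of a bands-by-pixels array, not the ERDAS IMAGINE implementation), the principal component bands are obtained by projecting the mean-centered bands onto the eigenvectors of their covariance matrix:

```python
import numpy as np

def principal_components(bands):
    """bands: array of shape (n_bands, n_pixels) of data file values.
    Returns the principal component bands, ordered by decreasing variance."""
    centered = bands - bands.mean(axis=1, keepdims=True)  # remove each band mean
    cov = np.cov(centered)                                 # band covariance matrix
    eigenvalues, eigenvectors = np.linalg.eigh(cov)        # orthogonal axes
    order = np.argsort(eigenvalues)[::-1]                  # widest variance first
    return eigenvectors[:, order].T @ centered             # project onto new axes
```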
principal point (Xp,Yp) - the point in the image plane onto which the perspective center
      is projected, located directly beneath the perspective center.
printer - a device that prints text, full color imagery, and/or graphics. See color printer,
text printer.
profile - a row of data file values from a DEM (Digital Elevation Model) or DTED
(Digital Terrain Elevation Data) file. The profiles of DEM and DTED run south to
north, that is, the first pixel of the record is the southernmost pixel.
profile symbol - an annotation symbol that is formed like the profile of an object. Profile
symbols generally represent vertical objects such as trees, windmills, oil wells,
etc.
proximity analysis - a technique used to determine which pixels of a thematic layer are
located at specified distances from pixels in a class or classes. A new layer is
created which is classified by the distance of each pixel from specified classes of
the input layer.
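A minimal sketch of the idea, assuming SciPy is available (the function name and distance bins are illustrative, not ERDAS IMAGINE syntax):

```python
import numpy as np
from scipy import ndimage

def proximity_layer(thematic, target_classes, bins):
    """Classify each pixel of a thematic layer by its distance (in pixels)
    from the nearest pixel belonging to one of the target classes."""
    away_from_targets = ~np.isin(thematic, target_classes)
    distance = ndimage.distance_transform_edt(away_from_targets)
    return np.digitize(distance, bins)  # 0 = closest bin, rising with distance
```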

pseudo color - a method of displaying an image (usually a thematic layer) that allows
the classes to have distinct colors. The class values of the single band file are trans-
lated through all three function memories which store a color scheme for the
image.
pseudo node - a node at which a single line connects with itself (an island), or at which
      only two lines intersect.
pseudo projection - a map projection that has only some of the characteristics of
another projection.
pushbroom - a scanner in which all scanning parts are fixed and scanning is accom-
plished by the forward motion of the scanner, such as the SPOT scanner.
pyramid layers - image layers which are successively reduced by the power of 2 and
resampled. Pyramid layers enable large images to be displayed faster.

Q quadrangle - 1. any of the hardcopy maps distributed by USGS such as the 7.5-minute
quadrangle or the 15-minute quadrangle. 2. one quarter of a full Landsat TM
scene. Commonly called a “quad.”
qualitative map - a map that shows the spatial distribution or location of a kind of
nominal data. For example, a map showing corn fields in the United States would
be a qualitative map. It would not show how much corn is produced in each
location, or production relative to the other areas.
quantitative map - a map that displays the spatial aspects of numerical data. A map
showing corn production (volume) in each area would be a quantitative map.

R radar data - the remotely sensed data that are produced when a radar transmitter emits
a beam of micro or millimeter waves, the waves reflect from the surfaces they
strike, and the backscattered radiation is detected by the radar system’s receiving
antenna which is tuned to the frequency of the transmitted waves.
RADARSAT - a Canadian radar satellite launched in November 1995.
radiative transfer equations - the mathematical models that attempt to quantify the
total atmospheric effect of solar illumination.
radiometric correction - the correction of variations in data that are not caused by the
object or scene being scanned, such as scanner malfunction and atmospheric
interference.
radiometric enhancement - an enhancement technique that deals with the individual
values of pixels in an image.
radiometric resolution - the dynamic range, or number of possible data file values, in
each band. This is referred to by the number of bits into which the recorded
energy is divided. See pixel depth.
rank - a neighborhood analysis technique that outputs the number of values in a user-
specified window that are less than the analyzed value.


raster data - data that are organized in a grid of columns and rows. Raster data usually
represent a planar graph or geographical area. Raster data in ERDAS IMAGINE
are stored in .img files.
raster object - in Model Maker (Spatial Modeler), a single raster layer or set of layers.
raster region - a contiguous group of pixels in one GIS class. Also called clump.
ratio data - a data type in which thematic class values have the same properties as
interval values, except that ratio values have a natural zero or starting point.
Real-Aperture Radar (RAR) - a radar sensor that uses its side-looking, fixed antenna to
transmit and receive the radar impulse. For a given position in space, the
resolution of the resultant image is a function of the antenna size. The signal is
processed independently of subsequent return signals.
recoding - the assignment of new values to one or more classes.
record - 1. the set of all attribute data for one class of feature. 2. the basic storage unit on
a 9-track tape.
rectification - the process of making image data conform to a map projection system. In
many cases, the image must also be oriented so that the north direction corre-
sponds to the top of the image.
rectified coordinates - the coordinates of a pixel in a file that has been rectified, which
are extrapolated from the ground control points. Ideally, the rectified coordinates
for the ground control points are exactly equal to the reference coordinates. Since
there is often some error tolerated in the rectification, this is not always the case.
reduce - the process of skipping file pixels when displaying an image, so that a larger
area can be represented on the display screen. For example, a reduction factor of
3 would cause only the pixel at every third row and column to be displayed, so
that each displayed pixel represents a 3 × 3 block of file pixels.
reference coordinates - the coordinates of the map or reference image to which a source
(input) image is being registered. Ground control points consist of both input
coordinates and reference coordinates for each point.
reference pixels - in classification accuracy assessment, pixels for which the correct GIS
class is known from ground truth or other data. The reference pixels can be
selected by you, or randomly selected.
reference plane - in a topocentric coordinate system, the tangential plane at the center
      of the image on the earth ellipsoid, on which the three perpendicular coordinate
      axes are defined.
reference system - the map coordinate system to which an image is registered.
reference window - the source window on the first image of an image pair, which
remains at a constant location. See also correlation windows and search
windows.
reflection spectra - the electromagnetic radiation wavelengths that are reflected by
specific materials of interest.

registration - the process of making image data conform to another image. A map
coordinate system is not necessarily involved.
regular block of photos - a rectangular block in which the number of photos in each
strip is the same; this includes a single strip or a single stereopair.
relation based matching - an image matching technique that uses the image features
and the relation among the features to automatically recognize the corresponding
image structures without any a priori information.
relief map - a map that appears to be or is 3-dimensional.
remote sensing - the measurement or acquisition of data about an object or scene by a
satellite or other instrument above or far from the object. Aerial photography,
satellite imagery, and radar are all forms of remote sensing.
replicative symbol - an annotation symbol that is designed to look like its real-world
counterpart. These symbols are often used to represent trees, railroads, houses,
etc.
representative fraction - the ratio or fraction used to denote map scale.
resampling - the process of extrapolating data file values for the pixels in a new grid,
when data have been rectified or registered to another image.
rescaling - the process of compressing data from one format to another. In ERDAS
IMAGINE this typically means compressing a 16-bit file to an 8-bit file.
reshape - the process of redigitizing a portion of a line.
residuals - in rectification, the distances between the source and retransformed coordi-
nates in one direction. In ERDAS IMAGINE they are shown for each GCP. The X
residual is the distance between the source X coordinate and the retransformed X
coordinate. The Y residual is the distance between the source Y coordinate and
the retransformed Y coordinate.
resolution - a level of precision in data. For specific types of resolution see display
resolution, radiometric resolution, spatial resolution, spectral resolution, and
temporal resolution.
resolution merging - the process of sharpening a lower-resolution multiband image by
merging it with a higher-resolution monochrome image.
retransformed - in the rectification process, a coordinate in the reference (output)
      coordinate system that has been transformed back into the input coordinate system.
The amount of error in the transformation can be determined by computing the
difference between the original coordinates and the retransformed coordinates.
See RMS error.
RGB - red, green, blue. The primary additive colors which are used on most display
hardware to display imagery.
RGB clustering - a clustering method for 24-bit data (three 8-bit bands) which plots
pixels in 3-dimensional spectral space and divides that space into sections that
are used to define clusters. The output color scheme of an RGB-clustered image
resembles that of the input file.


rhumb line - a line of true direction, which crosses meridians at a constant angle.
right hand rule - a convention in three-dimensional coordinate systems (X,Y,Z) which
determines the location of the positive Z axis. If you place your right hand fingers
on the positive X axis and curl your fingers toward the positive Y axis, the
direction your thumb is pointing is the positive Z axis direction.
RMS error - the distance between the input (source) location of a GCP and the retrans-
formed location for the same GCP. RMS error is calculated with a distance
equation.
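Using the X residual (XR) and Y residual (YR) defined elsewhere in this glossary, the distance equation takes the familiar Euclidean form:

$$RMS\ error = \sqrt{XR^{2} + YR^{2}}$$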
RMSE (Root Mean Square Error) - used to measure how well a specific calculated
      solution fits the original data. For each observation of a phenomenon, a variation
      can be computed between the actual observation and a calculated value. (The
      method of obtaining a calculated value is application-specific.) Each variation is
      then squared. The sum of these squared values is divided by the number of obser-
      vations and then the square root is taken. This is the RMSE value.
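Written as a formula, with $x_i$ the observed values, $\hat{x}_i$ the corresponding calculated values, and $n$ the number of observations, the procedure above is:

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \hat{x}_i\right)^{2}}$$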
roam - the process of moving across a display so that different areas of the image appear
on the display screen.
root - the first part of a file name, which usually identifies the file’s specific contents.
ROYGBIV - a color scheme ranging through red, orange, yellow, green, blue, indigo,
and violet at regular intervals.
rubber sheeting - the application of a nonlinear rectification (2nd-order or higher).

S sample - see training sample.


saturation - a component of IHS which represents the purity of color and also varies
linearly from 0 to 1.
scale - 1. the ratio of distance on a map as related to the true distance on the ground. 2.
cell size. 3. the processing of values through a lookup table.
scale bar - a graphic annotation element that describes map scale. It shows the distance
on paper that represents a geographical distance on the map.
scalar object - in Model Maker (Spatial Modeler), a single numeric value.
scaled map - a georeferenced map that is accurately laid-out and referenced to
represent distances and locations. A scaled map usually has a legend which
includes a scale, such as “1 inch = 1000 feet.” The scale is often expressed as a ratio
like 1:12,000 where 1 inch on the map equals 12,000 inches on the ground.
scanner - the entire data acquisition system, such as the Landsat Thematic Mapper
scanner or the SPOT panchromatic scanner.
scanning - 1. the transfer of analog data, such as photographs, maps, or another
viewable image, into a digital (raster) format. 2. a process similar to convolution
filtering which uses a kernel for specialized neighborhood analyses, such as total,
average, minimum, maximum, boundary, and majority.

scatterplot - a graph, usually in two dimensions, in which the data file values of one
band are plotted against the data file values of another band.
scene - the image captured by a satellite.
screen coordinates - the location of a pixel on the display screen, beginning with 0,0 in
the upper left corner.
screen digitizing - the process of drawing vector graphics on the display screen with a
mouse. A displayed image can be used as a reference.
script modeling - the technique of combining data layers in an unlimited number of
ways. Script modeling offers all of the capabilities of graphical modeling with the
ability to perform more complex functions, such as conditional looping.
script model - a model that is comprised of text only and is created with the Spatial
Modeler Language. Script models are stored in .mdl files.
search radius - in surfacing routines, the distance around each pixel within which the
software will search for terrain data points.
search windows - candidate windows on the second image of an image pair that are
evaluated relative to the reference window.
seat - a combination of an X-server and a host workstation.
secant - an intersection at two points or lines. In the case of conic or cylindrical map
      projections, a secant cone or cylinder intersects the surface of a globe at two
      circles.
sensor - a device that gathers energy, converts it to a digital value, and presents it in a
form suitable for obtaining information about the environment.
separability - a statistical measure of distance between two signatures.
separability listing - a report of signature divergence which lists the computed diver-
gence for every class pair and one band combination. The listing contains every
divergence value for the bands studied for every possible pair of signatures.
sequential clustering - a method of clustering that analyzes pixels of an image line by
line and groups them by spectral distance. Clusters are determined based on
relative spectral distance and the number of pixels per cluster.
server - on a computer in a network, a utility that makes some resource or service
available to the other machines on the network (such as access to a tape drive).
shaded relief image - a thematic raster image which shows variations in elevation
based on a user-specified position of the sun. Areas that would be in sunlight are
highlighted and areas that would be in shadow are shaded.
shaded relief map - a map of variations in elevation based on a user-specified position
of the sun. Areas that would be in sunlight are highlighted and areas that would
be in shadow are shaded.
short wave infrared region (SWIR) - the near-infrared and middle-infrared regions of
the electromagnetic spectrum.


Shuttle Imaging Radar (SIR-A, SIR-B, and SIR-C) - the radar sensors that fly aboard
NASA space shuttles. SIR-A flew aboard the 1981 NASA Space Shuttle Columbia.
That data and SIR-B data from a later Space Shuttle mission are still valuable
sources of radar data. A future shuttle mission is scheduled to carry the SIR-C
sensor.
Side-looking Airborne Radar (SLAR) - a radar sensor that uses an antenna which is
fixed below an aircraft and pointed to the side to transmit and receive the radar
signal.
signal based matching - see area based matching.
signature - a set of statistics that defines a training sample or cluster. The signature is
used in a classification process. Each signature corresponds to a GIS class that is
created from the signatures with a classification decision rule.
skew - a condition in satellite data, caused by the rotation of the earth eastward, which
causes the position of the satellite relative to the earth to move westward.
Therefore, each line of data represents terrain that is slightly west of the data in
the previous line.
slope - the change in elevation over a certain distance. Slope can be reported as a
percentage or in degrees.
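As a worked illustration (not from the original text), for a rise in elevation $\Delta z$ over a horizontal distance $d$:

$$slope\ (\%) = \frac{\Delta z}{d}\times 100 \qquad slope\ (degrees) = \arctan\!\left(\frac{\Delta z}{d}\right)$$

so a 10 m rise over a 100 m run is a 10% slope, or about 5.7 degrees.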
slope image - a thematic raster image which shows changes in elevation over distance.
Slope images are usually color-coded to show the steepness of the terrain at each
pixel.
slope map - a map that is color-coded to show changes in elevation over distance.
small scale - for a map or data file, having a small ratio between the area of the imagery
(such as inches or pixels) and the area that is represented (such as feet). In small-
scale image data, each pixel represents a large area on the ground, such as NOAA
AVHRR (Advanced Very High Resolution Radiometer) data, with a spatial
resolution of 1.1 km.
Softcopy Photogrammetry - see Digital Photogrammetry.
source coordinates - in the rectification process, the input coordinates.
spatial enhancement - the process of modifying the values of pixels in an image relative
to the pixels that surround them.
spatial frequency - the difference between the highest and lowest values of a
contiguous set of pixels.
Spatial Modeler Language - a script language used internally by Model Maker (Spatial
Modeler) to execute the operations specified in the graphical models you create.
The Spatial Modeler Language can also be used to write application-specific
models.
spatial resolution - a measure of the smallest object that can be resolved by the sensor,
or the area on the ground represented by each pixel.
speckle noise - the light and dark pixel noise that appears in radar data.

spectral distance - the distance in spectral space computed as Euclidean distance in n-
dimensions, where n is the number of bands.
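In symbols, for two pixels (or a pixel and a signature mean vector) with values $d_i$ and $e_i$ in band $i$, and $n$ bands:

$$D = \sqrt{\sum_{i=1}^{n}\left(d_i - e_i\right)^{2}}$$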
spectral enhancement - the process of modifying the pixels of an image based on the
original values of each pixel, independent of the values of surrounding pixels.
spectral resolution - the specific wavelength intervals in the electromagnetic spectrum
that a sensor can record.
spectral space - an abstract space that is defined by spectral units (such as an amount
of electromagnetic radiation). The notion of spectral space is used to describe
enhancement and classification techniques that compute the spectral distance
between n-dimensional vectors, where n is the number of bands in the data.
spectroscopy - the study of the absorption and reflection of electromagnetic radiation
(EMR) waves.
spliced map - a map that is printed on separate pages, but intended to be joined
together into one large map. Neatlines and tick marks appear only on the pages
which make up the outer edges of the whole map.
spline - the process of smoothing or generalizing all currently selected lines using a
specified grain tolerance during vector editing.
split - the process of making two lines from one by adding a node.
SPOT - a series of earth-orbiting satellites operated by the Centre National d’Etudes
Spatiales (CNES) of France.
standard deviation - 1. the square root of the variance of a set of values which is used
as a measurement of the spread of the values. 2. a neighborhood analysis
technique that outputs the standard deviation of the data file values of a user-
specified window.
standard meridian - see standard parallel.
standard parallel - the line of latitude where the surface of a globe conceptually inter-
sects with the surface of the projection cylinder or cone.
statement - in script models, properly formatted lines that perform a specific task in a
model. Statements fall into the following categories: declaration, assignment,
show, view, set, macro definition, and quit.
statistical clustering - a clustering method that tests 3 × 3 sets of pixels for homogeneity
and builds clusters only from the statistics of the homogeneous sets of pixels.
statistics (STA) file - an ERDAS Ver. 7.X trailer file for LAN data that contains statistics
about the data.
stereographic - 1. the process of projecting onto a tangent plane from the opposite side
of the earth. 2. the process of acquiring images at angles on either side of the
vertical.
stereopair - a set of two remotely-sensed images that overlap, providing two views of
the terrain in the overlap area.


stereo-scene - achieved when two images of the same area are acquired on different
      days from different orbits, one taken east of nadir and the other taken west of
      nadir.
stream mode - a digitizing mode in which vertices are generated continuously while the
digitizer keypad is in proximity to the surface of the digitizing tablet.
string - a line of text. A string usually has a fixed length (number of characters).
strip of photographs - consists of images captured along a flight-line, normally with an
overlap of 60% for stereo coverage. All photos in the strip are assumed to be taken
at approximately the same flying height and with a constant distance between
exposure stations. Camera tilt relative to the vertical is assumed to be minimal.
striping - a data error that occurs if a detector on a scanning system goes out of
adjustment - that is, it provides readings consistently greater than or less than the
other detectors for the same band over the same ground cover. Also called
“banding.”
structure based matching - see relation based matching.
subsetting - the process of breaking out a portion of a large image file into one or more
smaller files.
sum - a neighborhood analysis technique that outputs the total of the data file values in
a user-specified window.
Sun raster data - imagery captured from a Sun monitor display.
sun-synchronous - a term used to describe earth-orbiting satellites whose orbits precess
      so that they cross a given latitude at approximately the same local solar time on each pass.
supervised training - any method of generating signatures for classification, in which
the analyst is directly involved in the pattern recognition process. Usually, super-
vised training requires the analyst to select training samples from the data, which
represent patterns to be classified.
surface - a one band file in which the value of each pixel is a specific elevation value.
swath width - in a satellite system, the total width of the area on the ground covered by
the scanner.
symbol - an annotation element that consists of other elements (sub-elements). See plan
symbol, profile symbol, and function symbol.
symbolization - a method of displaying vector data in which attribute information is
used to determine how features are rendered. For example, points indicating
cities and towns can appear differently based on the population field stored in the
attribute database for each of those areas.
Synthetic Aperture Radar (SAR) - a radar sensor that uses its side-looking, fixed
antenna to create a synthetic aperture. SAR sensors are mounted on satellites,
aircraft, and the NASA Space Shuttle. The sensor transmits and receives as it is
moving. The signals received over a time interval are combined to create the
image.


T table object - in Model Maker (Spatial Modeler), a series of numeric values or character
strings.
tablet digitizing - the process of using a digitizing tablet to transfer non-digital data
such as maps or photographs to vector format.
Tagged Image File Format - see TIFF data.
tangent - an intersection at one point or line. In the case of conic or cylindrical map
projections, a tangent cone or cylinder intersects the surface of a globe in a circle.
Tasseled Cap transformation - an image enhancement technique that optimizes data
viewing for vegetation studies.
temporal resolution - the frequency with which a sensor obtains imagery of a particular
area.
terrain analysis - the processing and graphic simulation of elevation data.
terrain data - elevation data expressed as a series of x, y, and z values that are either
regularly or irregularly spaced.
text printer - a device used to print characters onto paper, usually used for lists,
documents, and reports. If a color printer is not necessary or is unavailable,
images can be printed using a text printer. Also called a “line printer.”
thematic data - raster data that are qualitative and categorical. Thematic layers often
contain classes of related information, such as land cover, soil type, slope, etc. In
ERDAS IMAGINE, thematic data are stored in .img files.
thematic layer - see thematic data.
thematic map - a map illustrating the class characterizations of a particular spatial
variable such as soils, land cover, hydrology, etc.
Thematic Mapper (TM) - Landsat data acquired in 7 bands with a spatial resolution of
30 × 30 meters.
theme - a particular type of information, such as soil type or land use, that is repre-
sented in a layer.
3D perspective view - a simulated three-dimensional view of terrain.
threshold - a limit, or “cutoff point,” usually a maximum allowable amount of error in
an analysis. In classification, thresholding is the process of identifying a
maximum distance between a pixel and the mean of the signature to which it was
classified.
tick marks - small lines along the edge of the image area or neatline that indicate regular
intervals of distance.
tie point - a point whose ground coordinates are not known, but can be recognized
visually in the overlap or sidelap area between two images.
TIFF data - Tagged Image File Format data is a raster file format developed by Aldus
      Corp. (Seattle, Washington) in 1986 for the easy transportation of data.


TIGER - Topologically Integrated Geographic Encoding and Referencing System files
      are line network products of the U.S. Census Bureau.
tiled data - the storage format of ERDAS IMAGINE .img files.
TIN - see triangulated irregular network.
to-node - the last vertex in a line.
topocentric coordinate system - a coordinate system which has its origin at the center
      of the image on the earth ellipsoid. The three perpendicular coordinate axes are
      defined on a tangential plane at this center point. The x-axis is oriented eastward,
      the y-axis northward, and the z-axis is vertical to the reference plane (up).
topographic - a term indicating elevation.
topographic data - a type of raster data in which pixel values represent elevation.
topographic effect - a distortion found in imagery from mountainous regions that
results from the differences in illumination due to the angle of the sun and the
angle of the terrain.
topographic map - a map depicting terrain relief.
topology - a term that defines the spatial relationships between features in a vector
layer.
total RMS error - the total root mean square (RMS) error for an entire image. Total RMS
error takes into account the RMS error of each ground control point (GCP).
trailer file - 1. an ERDAS Ver. 7.X file with a .TRL extension that accompanies a GIS file
      and contains information about the GIS classes. 2. a file following the image data
      on a 9-track tape.
training - the process of defining the criteria by which patterns in image data are recog-
nized for the purpose of classification.
training field - the geographical area represented by the pixels in a training sample.
Usually, it is previously identified with the use of ground truth data or aerial
photography. Also called “training site.”
training sample - a set of pixels selected to represent a potential class. Also called
“sample.”
transformation matrix - a set of coefficients which are computed from ground control
points, and used in polynomial equations to convert coordinates from one system
to another. The size of the matrix depends upon the order of the transformation.
transposition - the interchanging of the rows and columns of a matrix, denoted with T.
transverse aspect - the orientation of a map in which the central line of the projection,
which is normally the equator, is rotated 90 degrees so that it follows a meridian.
triangulated irregular network (TIN) - a specific representation of DTMs in which
elevation points can occur at irregular intervals.
triangulation - establishes the geometry of the camera or sensor relative to objects on
the earth’s surface.

true color - a method of displaying an image (usually from a continuous raster layer)
which retains the relationships between data file values and represents multiple
bands with separate color guns. The image memory values from each displayed
band are translated through the function memory of the corresponding color
gun.
true direction - the property of a map projection to represent the direction between two
points with a straight rhumb line, which crosses meridians at a constant angle.

U union - the area or set that is the combination of two or more input areas or sets, without
repetition.
unscaled map - a hardcopy map that is not referenced to any particular scale, in which
one file pixel is equal to one printed pixel.
unsplit - the process of joining two lines by removing a node.
unsupervised training - a computer-automated method of pattern recognition in which
some parameters are specified by the user and are used to uncover statistical
patterns that are inherent in the data.

V variable - 1. a numeric value that is changeable, usually represented with a letter. 2. a
      thematic layer. 3. one band of a multiband image. 4. in models, objects which
      have been associated with a name using a declaration statement.
variance - a measure of the dispersion of a set of values about their mean; the average
      of the squared deviations from the mean. See also standard deviation.
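For a set of $n$ values $x_i$ with mean $\mu$, the population variance and its square root, the standard deviation, are:

$$\sigma^{2} = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \mu\right)^{2} \qquad \sigma = \sqrt{\sigma^{2}}$$

(a sample estimate divides by $n - 1$ instead of $n$).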
vector - 1. a line element. 2. a one-dimensional matrix, having either one row (1 by j), or
one column (i by 1). See also mean vector, measurement vector.
vector data - data that represent physical forms (elements) such as points, lines, and
polygons. Only the vertices of vector data are stored, instead of every point that
makes up the element. ERDAS IMAGINE vector data are based on the
ARC/INFO data model and are stored in directories, rather than individual files.
See workspace.
vector layer - a set of vector features and their associated attributes.
velocity vector - the satellite’s velocity expressed as a vector through a point on the
      spheroid.
verbal statement - a statement that describes the distance on the map to the distance on
the ground. A verbal statement describing a scale of 1:1,000,000 is approximately
1 inch to 16 miles. The units on the map and on the ground do not have to be the
same in a verbal statement.
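The approximation follows from unit conversion (63,360 inches per mile):

$$\frac{1{,}000{,}000\ \text{in}}{63{,}360\ \text{in/mi}} \approx 15.8\ \text{mi} \approx 16\ \text{mi}$$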
vertex - a point that defines an element, such as a point where a line changes direction.
vertical control - the vertical distribution of control points in aerial triangulation
(z - elevation).
vertices - plural of vertex.


viewshed analysis - the calculation of all areas that can be seen from a particular
viewing point or path.
viewshed map - a map showing only those areas visible (or invisible) from a specified
point(s).
volume - a medium for data storage, such as a magnetic disk or a tape.
volume set - the complete set of tapes that contains one image.

W weight - the number of values in a set; particularly, in clustering algorithms, the weight
of a cluster is the number of pixels that have been averaged into it.
weighting factor - a parameter that increases the importance of an input variable. For
example, in GIS indexing, one input layer can be assigned a weighting factor
which multiplies the class values in that layer by that factor, causing that layer to
have more importance in the output file.
weighting function - in surfacing routines, a function applied to elevation values for
determining new output values.
working window - the image area to be used in a model. This can be set to either the
union or intersection of the input layers.
workspace - a location which contains one or more vector layers. A workspace is made
up of several directories.
write ring - a protection device that allows data to be written to a 9-track tape when the
ring is in place, but not when it is removed.

X X residual - in RMS error reports, the distance between the source X coordinate and the
retransformed X coordinate.
X RMS error - the root mean square error (RMS) in the X direction.

Y Y residual - in RMS error reports, the distance between the source Y coordinate and the
retransformed Y coordinate.
Y RMS error - the root mean square error (RMS) in the Y direction.

Z zero-sum kernel - a convolution kernel in which the sum of all the coefficients is zero.
Zero-sum kernels are usually edge detectors.
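For example, the following commonly used 3 × 3 kernel sums to zero, so flat (constant) areas convolve to zero and only edges produce a response:

$$\begin{bmatrix} -1 & -1 & -1 \\ -1 & \phantom{-}8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$$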
zone distribution rectangles (ZDRs) - the images into which each distribution
      rectangle (DR) is divided in ADRG data.
zoom - the process of expanding displayed pixels on an image so that they can be more
closely studied. Zooming is similar to magnification, except that it changes the
display only temporarily, leaving image memory the same.


Bibliography

Adams, J.B., Smith, M.O., and Gillespie, A.R. 1989. “Simple Models for Complex
Natural Surfaces: A Strategy for the Hyperspectral Era of Remote Sensing.”
Proceedings IEEE Intl. Geosciences and Remote Sensing Symposium.
1:16-21.

American Society of Photogrammetry. 1980. Photogrammetric Engineering and Remote
      Sensing XLVI:10:1249.

Atkinson, Paula. 1985. “Preliminary Results of the Effect of Resampling on Thematic
      Mapper Imagery.” 1985 ACSM-ASPRS Fall Convention Technical Papers. Falls
      Church, Virginia: American Society for Photogrammetry and Remote Sensing
      and American Congress on Surveying and Mapping.

Battrick, Bruce, and Lois Proud, eds. May 1992. ERS-1 User Handbook. Noordwijk, The
Netherlands: European Space Agency, ESA Publications Division, c/o ESTEC.

Benediktsson, J.A., Swain, P.H., Ersoy, O.K., and Hong, D. 1990. “Neural Network
Approaches Versus Statistical Methods in Classification of Multisource Remote
Sensing Data.” IEEE Transactions on Geoscience and Remote Sensing 28:4:540-51.

Berk, A., et al. 1989. MODTRAN: A Moderate Resolution Model for LOWTRAN 7.
Hanscom Air Force Base, Massachusetts: U.S. Air Force Geophysical Laboratory
(AFGL).

Bernstein, Ralph, et al. 1983. “Image Geometry and Rectification.” Chapter 21 in Manual
of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American
Society of Photogrammetry.

Billingsley, Fred C., et al. 1983. “Data Processing and Reprocessing.” Chapter 17 in
Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia:
American Society of Photogrammetry.

Blom, Ronald G., and Michael Daily. July 1982. “Radar Image Processing for Rock-Type
Discrimination.” IEEE Transactions on Geoscience and Remote Sensing, Vol. GE-20,
No. 3.

Buchanan, M. D. 1979. “Effective Utilization of Color in Multidimensional Data
      Presentation.” Proceedings of the Society of Photo-Optical Engineers, Vol. 199: 9-19.

Burrus, C. S., and T. W. Parks. 1985. DFT/FFT and Convolution Algorithms: Theory and
Implementation. New York: John Wiley & Sons, Inc.


Cannon, Michael, Alex Lehar, and Fred Preston, 1983. “Background Pattern Removal
by Power Spectral Filtering.” Applied Optics, Vol. 22, No. 6: 777-779.

Carter, James R. 1989. “On Defining the Geographic Information System.” Fundamentals
of Geographic Information Systems: A Compendium, edited by William J. Ripple.
Bethesda, Maryland: American Society for Photogrammetric Engineering and
Remote Sensing and the American Congress on Surveying and Mapping.

Chahine, Moustafa T., et al. 1983. “Interaction Mechanisms within the Atmosphere.”
Chapter 5 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church,
Virginia: American Society of Photogrammetry.

Chavez, Pat S., Jr., et al. 1991. “Comparison of Three Different Methods to Merge Multi-
resolution and Multispectral Data: Landsat TM and SPOT Panchromatic.” Photo-
grammetric Engineering & Remote Sensing, Vol. 57, No. 3: 295-303.

Chavez, Pat S., Jr., and Graydon L. Berlin. 1986. “Restoration Techniques for SIR-B
Digital Radar Images.” Paper presented at the Fifth Thematic Conference:
Remote Sensing for Exploration Geology, Reno, Nevada.

Clark, Roger N., and Ted L. Roush. 1984. “Reflectance Spectroscopy: Quantitative
Analysis Techniques for Remote Sensing Applications.” Journal of Geophysical
Research, Vol. 89, No. B7: 6329-6340.

Clark, R.N., Gallagher, A.J., and Swayze, G.A. 1990. “Material Absorption Band Depth
Mapping of Imaging Spectrometer Data using a Complete Band Shape Least-
Square Fit with Library Reference Spectra.” Proceedings of the Second AVIRIS
Conference. JPL Pub. 90-54.

Colby, J. D. 1991. “Topographic Normalization in Rugged Terrain.” Photogrammetric
      Engineering & Remote Sensing, Vol. 57, No. 5: 531-537.

Colwell, Robert N., ed. 1983. Manual of Remote Sensing. Falls Church, Virginia: American
Society of Photogrammetry.

Congalton, R. 1991. “A Review of Assessing the Accuracy of Classifications of Remotely
      Sensed Data.” Remote Sensing of Environment, Vol. 37: 35-46.

Conrac Corp., Conrac Division. 1980. Raster Graphics Handbook. Covina, California:
Conrac Corp.

Crippen, Robert E. July 1989. “Development of Remote Sensing Techniques for the
Investigation of Neotectonic Activity, Eastern Transverse Ranges and Vicinity,
Southern California.” Ph.D. Diss., University of California, Santa Barbara.

Crippen, Robert E. 1989. “A Simple Spatial Filtering Routine for the Cosmetic Removal
of Scan-Line Noise from Landsat TM P-Tape Imagery.” Photogrammetric
Engineering & Remote Sensing, Vol. 55, No. 3: 327-331.

Crippen, Robert E. 1987. “The Regression Intersection Method of Adjusting Image Data
for Band Ratioing.” International Journal of Remote Sensing, Vol. 8, No. 2: 137-155.

Crist, E. P., et al. 1986. “Vegetation and Soils Information Contained in Transformed
Thematic Mapper Data.” Proceedings of IGARSS’ 86 Symposium, ESA Publications
Division, ESA SP-254.

Crist, E. P., and R. J. Kauth. 1986. “The Tasseled Cap De-Mystified.” Photogrammetric
Engineering & Remote Sensing, Vol. 52, No. 1: 81-86.

Cullen, Charles G. 1972. Matrices and Linear Transformations. Reading, Massachusetts:
      Addison-Wesley Publishing Company.

Daily, Mike. 1983. “Hue-Saturation-Intensity Split-Spectrum Processing of Seasat
      Radar Imagery.” Photogrammetric Engineering & Remote Sensing, Vol. 49, No. 3:
      349-355.

Dangermond, Jack. 1988. “A Review of Digital Data Commonly Available and Some of
the Practical Problems of Entering Them into a GIS.” Fundamentals of Geographic
Information Systems: A Compendium, edited by William J. Ripple. Bethesda,
Maryland: American Society for Photogrammetric Engineering and Remote
Sensing and the American Congress on Surveying and Mapping.

Defense Mapping Agency Aerospace Center. 1989. Defense Mapping Agency Product
Specifications for ARC Digitized Raster Graphics (ADRG). St. Louis, Missouri: DMA
Aerospace Center.

Dent, Borden D. 1985. Principles of Thematic Map Design. Reading, Massachusetts:
      Addison-Wesley Publishing Company.

Duda, Richard O., and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. New
York: John Wiley & Sons, Inc.

Eberlein, R. B., and J. S. Weszka. 1975. “Mixtures of Derivative Operators as Edge
      Detectors.” Computer Graphics and Image Processing, Vol. 4: 180-183.

Elachi, Charles. 1987. Introduction to the Physics and Techniques of Remote Sensing. New
York: John Wiley & Sons.

Elachi, Charles. 1992. “Radar Images of the Earth from Space.” Exploring Space.

Elachi, Charles. 1987. Spaceborne Radar Remote Sensing: Applications and Techniques. New
York: IEEE Press.

Elassal, Atef A., and Vincent M. Caruso. 1983. USGS Digital Cartographic Data Standards:
Digital Elevation Models. Circular 895-B. Reston, Virginia: U.S. Geological Survey.

ESRI. 1992. ARC Command References 6.0. Redlands, California: ESRI, Inc.

ESRI. 1992. Data Conversion: Supported Data Translators. Redlands, California: ESRI, Inc.

ESRI. 1992. Managing Tabular Data. Redlands, California: ESRI, Inc.


ESRI. 1992. Map Projections & Coordinate Management: Concepts and Procedures. Redlands,
California: ESRI, Inc.

ESRI. 1990. Understanding GIS: The ARC/INFO Method. Redlands, California: ESRI, Inc.

Fahnestock, James D., and Robert A. Schowengerdt. 1983. “Spatially Variant Contrast
Enhancement Using Local Range Modification.” Optical Engineering, Vol. 22, No.
3.

Faust, Nickolas L. 1989. “Image Enhancement.” Volume 20, Supplement 5 of Encyclopedia
      of Computer Science and Technology, edited by Allen Kent and James G.
      Williams. New York: Marcel Dekker, Inc.

Fisher, P. F. 1991. “Spatial Data Sources and Data Problems.” Geographical Information
Systems: Principles and Applications, edited by David J. Maguire, Michael F.
Goodchild, and David W. Rhind. New York: Longman Scientific & Technical.

Flaschka, H. A. 1969. Quantitative Analytical Chemistry: Vol 1. New York: Barnes &
Noble, Inc.

Fraser, S. J., et al. 1986. “Targeting Epithermal Alteration and Gossans in Weathered
and Vegetated Terrains Using Aircraft Scanners: Successful Australian Case
Histories.” Paper presented at the fifth Thematic Conference: Remote Sensing for
Exploration Geology, Reno, Nevada.

Freden, Stanley C., and Frederick Gordon, Jr. 1983. “Landsat Satellites.” Chapter 12 in
Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia:
American Society of Photogrammetry.

Frost, Victor S., Stiles, Josephine A., Shanmugan, K. S., and Holtzman, Julian C. 1982.
“A Model for Radar Images and Its Application to Adaptive Digital Filtering of
Multiplicative Noise.” IEEE Transactions on Pattern Analysis and Machine Intelli-
gence, Vol. PAMI-4, No. 2, March 1982.

Geological Remote Sensing Group Newsletter. May 1992. No. 5. Institute of Hydrology,
Wallingford, OX10, United Kingdom.

Gonzalez, Rafael C., and Paul Wintz. 1977. Digital Image Processing. Reading, Massachu-
setts: Addison-Wesley Publishing Company.

Gonzalez, Rafael C., and Richard E. Woods. 1992. Digital Image Processing. Reading,
Massachusetts: Addison-Wesley Publishing Company.

Green, A.A. and Craig, M.D. 1985. “Analysis of Aircraft Spectrometer Data with
Logarithmic Residuals.” Proceedings of the AIS Data Analysis Workshop. JPL Pub.
85-41:111-119.

Guptill, Stephen C., ed. 1988. A Process for Evaluating Geographic Information Systems.
U.S. Geological Survey Open-File Report 88-105.

Haralick, Robert M. 1979. “Statistical and Structural Approaches to Texture.”
Proceedings of the IEEE, Vol. 67, No. 5: 786-804. Seattle, Washington.

Hodgson, Michael E., and Bill M. Shelley. 1993. “Removing the Topographic Effect in
Remotely Sensed Imagery.” ERDAS Monitor, Fall 1993. Contact Dr. Hodgson,
Dept. of Geography, University of Colorado, Boulder, CO 80309-0260.

Holcomb, Derrold W. 1993. “Merging Radar and VIS/IR Imagery.” Paper submitted to
the 1993 ERIM Conference, Pasadena, California.

Hord, R. Michael. 1982. Digital Image Processing of Remotely Sensed Data. New York:
Academic Press.

Irons, James R., and Gary W. Petersen. 1981. “Texture Transforms of Remote Sensing
Data,” Remote Sensing of Environment, Vol. 11: 359-370.

Jensen, John R., et al. 1983. “Urban/Suburban Land Use Analysis.” Chapter 30 in
Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia:
American Society of Photogrammetry.

Jensen, John R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective.
Englewood Cliffs, New Jersey: Prentice-Hall.

Johnston, R. J. 1980. Multivariate Statistical Analysis in Geography. Essex, England:
      Longman Group Ltd.

Jordan, III, Lawrie E., Bruce Q. Rado, and Stephen L. Sperry. 1992. “Meeting the Needs
of the GIS and Image Processing Industry in the 1990s.” Photogrammetric
Engineering & Remote Sensing, Vol. 58, No. 8: 1249-1251.

Keates, J. S. 1973. Cartographic Design and Production. London: Longman Group Ltd.

Kidwell, Katherine B., ed. 1988. NOAA Polar Orbiter Data (TIROS-N, NOAA-6, NOAA-
7, NOAA-8, NOAA-9, and NOAA-10) Users Guide. Washington, DC: National
Oceanic and Atmospheric Administration.

Kloer, Brian R. 1994. “Hybrid Parametric/Non-parametric Image Classification.” Paper
      presented at the ACSM-ASPRS Annual Convention, April 1994, Reno, Nevada.

Kneizys, F. X., et al. 1988. Users Guide to LOWTRAN 7. Hanscom AFB, Massachusetts:
Air Force Geophysics Laboratory.

Knuth, Donald E. 1987. “Digital Halftones by Dot Diffusion.” ACM Transactions on
      Graphics, Vol. 6: 245-273.

Kruse, Fred A. 1988. “Use of Airborne Imaging Spectrometer Data to Map Minerals
      Associated with Hydrothermally Altered Rocks in the Northern Grapevine
      Mountains, Nevada.” Remote Sensing of the Environment, Vol. 24: 31-51.


Larsen, Richard J., and Morris L. Marx. 1981. An Introduction to Mathematical Statistics
and Its Applications. Englewood Cliffs, New Jersey: Prentice-Hall.

Lavreau, J. 1991. “De-Hazing Landsat Thematic Mapper Images.” Photogrammetric
      Engineering & Remote Sensing, Vol. 57, No. 10: 1297-1302.

Leberl, Franz W. 1990. Radargrammetric Image Processing. Norwood, Massachusetts:
      Artech House, Inc.

Lee, J. E., and J. M. Walsh. 1984. Map Projections for Use with the Geographic Information
System. U.S. Fish and Wildlife Service, FWS/OBS-84/17.

Lee, Jong-Sen. 1981. “Speckle Analysis and Smoothing of Synthetic Aperture Radar
Images.” Computer Graphics and Image Processing, Vol. 17:24-32.

Lillesand, Thomas M., and Ralph W. Kiefer. 1987. Remote Sensing and Image Interpre-
tation. New York: John Wiley & Sons, Inc.

Lopes, A., Nezry, E., Touzi, R., and Laur, H. 1990. “Maximum A Posteriori Speckle
Filtering and First Order Texture Models in SAR Images.” International Geoscience
and Remote Sensing Symposium (IGARSS).

Lue, Yan and Kurt Novak. 1991. “Recursive Grid - Dynamic Window Matching for
Automatic DEM Generation.” 1991 ACSM-ASPRS Fall Convention Technical
Papers.

Lyon, R.J.P. 1987. “Evaluation of AIS-2 Data over Hydrothermally Altered Granitoid
Rocks.” Proceedings of the Third AIS Data Analysis Workshop. JPL Pub. 87-30:107-
119.

Maling, D. H. 1992. Coordinate Systems and Map Projections. 2nd ed. New York:
Pergamon Press.

Marble, Duane F. 1990. “Geographic Information Systems: An Overview.” Introductory
      Readings in Geographic Information Systems, edited by Donna J. Peuquet and Duane
      F. Marble. Bristol, Pennsylvania: Taylor & Francis, Inc.

Mendenhall, William, and Richard L. Scheaffer. 1973. Mathematical Statistics with Appli-
cations. North Scituate, Massachusetts: Duxbury Press.

Menon, Sudhakar, Peng Gao, and CiXiang Zhan. 1991. “GRID: A Data Model and
Functional Map Algebra for Raster Geo-processing.” GIS/LIS ‘91 Proceedings, Vol.
2: 551-561. Bethesda, Maryland: American Society for Photogrammetry and
Remote Sensing.

Merenyi, E., Taranik, J.V., Monor, Tim, and Farrand, W. March 1996. “Quantitative
Comparison of Neural Network and Conventional Classifiers for Hyperspectral
Imagery.” Proceedings of the Sixth AVIRIS Conference. JPL Pub.

Minnaert, J. L., and G. Szeicz. 1961. “The Reciprocity Principle in Lunar Photometry.”
Astrophysics Journal, Vol. 93: 403-410.

Nagao, Makoto, and Takashi Matsuyama. 1978. “Edge Preserving Smoothing.”
Computer Graphics and Image Processing, Vol. 9: 394-407.

Needham, Bruce H. 1986. “Availability of Remotely Sensed Data and Information from
the U.S. National Oceanic and Atmospheric Administration’s Satellite Data
Services Division.” Chapter 9 in Satellite Remote Sensing for Resources Development,
edited by Karl-Heinz Szekielda. Gaithersburg, Maryland: Graham & Trotman,
Inc.

Nichols, David, et al. 1983. “Digital Hardware.” Chapter 20 in Manual of Remote Sensing,
edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photo-
grammetry.

Oppenheim, Alan V., and Ronald W. Schafer. 1975. Digital Signal Processing. Englewood
Cliffs, New Jersey: Prentice-Hall, Inc.

Parent, Phillip, and Richard Church. 1987. “Evolution of Geographic Information
      Systems as Decision Making Tools.” Fundamentals of Geographic Information
      Systems: A Compendium, edited by William J. Ripple. Bethesda, Maryland:
      American Society for Photogrammetry and Remote Sensing and American
      Congress on Surveying and Mapping.

Pearson, Frederick. 1990. Map Projections: Theory and Applications. Boca Raton, Florida:
CRC Press, Inc.

Peli, Tamar, and Jae S. Lim. 1982. “Adaptive Filtering for Image Enhancement.” Optical
Engineering, Vol. 21, No. 1.

Pratt, William K. 1991. Digital Image Processing. New York: John Wiley & Sons, Inc.

Press, William H., et al. 1988. Numerical Recipes in C. New York, New York: Cambridge
University Press.

Prewitt, J. M. S. 1970. “Object Enhancement and Extraction.” Picture Processing and
      Psychopictorics, edited by B. S. Lipkin and A. Rosenfeld. New York: Academic
      Press.

Rado, Bruce Q. 1992. “An Historical Analysis of GIS.” Mapping Tomorrow’s Resources.
Logan, Utah: Utah State University.

Richter, Rudolf. 1990. “A Fast Atmospheric Correction Algorithm Applied to Landsat
      TM Images.” International Journal of Remote Sensing, Vol. 11, No. 1: 159-166.

Robinson, Arthur H., and Randall D. Sale. 1969. Elements of Cartography. 3rd ed. New
York: John Wiley & Sons, Inc.

Sabins, Floyd F., Jr. 1987. Remote Sensing Principles and Interpretation. New York: W. H.
Freeman and Co.


Sader, S. A., and J. C. Winne. 1992. “RGB-NDVI Colour Composites For Visualizing
Forest Change Dynamics.” International Journal of Remote Sensing, Vol. 13, No. 16:
3055-3067.

Schowengerdt, Robert A. 1983. Techniques for Image Processing and Classification in Remote
Sensing. New York: Academic Press.

Schowengerdt, Robert A. 1980. “Reconstruction of Multispatial, Multispectral Image
      Data Using Spatial Frequency Content.” Photogrammetric Engineering & Remote
      Sensing, Vol. 46, No. 10: 1325-1334.

Schwartz, A. A., and J. M. Soha. 1977. “Variable Threshold Zonal Filtering.” Applied
Optics, Vol. 16, No. 7.

Short, Nicholas M. 1982. The Landsat Tutorial Workbook: Basics of Satellite Remote Sensing.
Washington, DC: National Aeronautics and Space Administration.

Simonett, David S., et al. 1983. “The Development and Principles of Remote Sensing.”
Chapter 1 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church,
Virginia: American Society of Photogrammetry.

Slater, P. N. 1980. Remote Sensing: Optics and Optical Systems. Reading, Massachusetts:
Addison-Wesley Publishing Company, Inc.

Smith, J., T. Lin, and K. Ranson. 1980. “The Lambertian Assumption and Landsat Data.”
Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 9: 1183-1189.

Snyder, John P. 1987. Map Projections--A Working Manual. Geological Survey Profes-
sional Paper 1532. Washington, DC: United States Government Printing Office.

Snyder, John P., and Philip M. Voxland. 1989. An Album of Map Projections. U.S.
Geological Survey Professional Paper 1453. Washington, DC: United States
Government Printing Office.

Srinivasan, Ram, Michael Cannon, and James White, 1988. “Landsat Destriping Using
Power Spectral Filtering.” Optical Engineering, Vol. 27, No. 11: 939-943.

Star, Jeffrey, and John Estes. 1990. Geographic Information Systems: An Introduction.
Englewood Cliffs, New Jersey: Prentice-Hall.

Steinitz, Carl, Paul Parker, and Lawrie E. Jordan, III. 1976. “Hand Drawn Overlays:
Their History and Perspective Uses.” Landscape Architecture, Vol. 66.

Stimson, George W. 1983. Introduction to Airborne Radar. El Segundo, California: Hughes
      Aircraft Company.

Suits, Gwynn H. 1983. “The Nature of Electromagnetic Radiation.” Chapter 2 in Manual
      of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American
      Society of Photogrammetry.

Swain, Philip H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis
(LARS Information Note 111572). West Lafayette, Indiana: The Laboratory for
Applications of Remote Sensing, Purdue University.

Swain, Philip H., and Shirley M. Davis. 1978. Remote Sensing: The Quantitative Approach.
New York: McGraw Hill Book Company.

Taylor, Peter J. 1977. Quantitative Methods in Geography: An Introduction to Spatial
      Analysis. Boston, Massachusetts: Houghton Mifflin Company.

TIFF Developer’s Toolkit. 1990. Seattle, Washington: Aldus Corp.

Tou, Julius T., and Rafael C. Gonzalez. 1974. Pattern Recognition Principles. Reading,
Massachusetts: Addison-Wesley Publishing Company.

Tucker, Compton J. 1979. “Red and Photographic Infrared Linear Combinations for
Monitoring Vegetation.” Remote Sensing of Environment, Vol. 8: 127-150.

Walker, Terri C., and Richard K. Miller. 1990. Geographic Information Systems: An
Assessment of Technology, Applications, and Products. Madison, Georgia: SEAI
Technical Publications.

Wang, Zhizhuo. 1990. Principles of Photogrammetry. Wuhan, China: Wuhan Technological
      University of Surveying and Mapping.

Welch, Roy. 1990. “3-D Terrain Modeling for GIS Applications.” GIS World, Vol. 3, No.
5.

Welch, R., and W. Ehlers. 1987. “Merging Multiresolution SPOT HRV and Landsat TM
Data.” Photogrammetric Engineering & Remote Sensing, Vol. 53, No. 3: 301-303.

Wolberg, George. 1990. Digital Image Warping. IEEE Computer Society Press
Monograph.

Wolf, Paul R. 1980. “Definitions of Terms and Symbols used in Photogrammetry.”
      Manual of Photogrammetry. Ed. Chester C. Slama. Falls Church, Virginia:
      American Society of Photogrammetry.

Wong, K.W. 1980. “Basic Mathematics of Photogrammetry.” Manual of Photogrammetry.
      Ed. Chester C. Slama. Falls Church, Virginia: American Society of
      Photogrammetry.

Yang, Xinghe, R. Robinson, H. Lin, and A. Zusmanis. 1993. “Digital Ortho Corrections
Using Pre-transformation Distortion Adjustment.” 1993 ASPRS Technical Papers.
New Orleans. Vol. 3: 425-434.

Zamudio, J.A. and Atkinson, W.W. 1990. “Analysis of AVIRIS data for Spectral
Discrimination of Geologic Materials in the Dolly Varden Mountains.”
Proceedings of the Second AVIRIS Conference. JPL Pub. 90-54:162-66.


Index

A
a priori 221, 294, 316
absorption 7, 10
    spectra 6, 11
accuracy assessment 254, 258
accuracy report 259
adaptive filter 151
ADRG 25, 52, 72
    file naming convention 76
    ordering 85
ADRI 52, 78
    file naming convention 80
    ordering 85
aerial photos 5, 28
    interior orientation 272
airborne imagery 51
Airborne Imaging Spectrometer 11
Airborne Multispectral Scanner Mk2 11
AIRSAR 66, 70
Aitoff 594
Albers Conical Equal Area 538
Almaz 66
Almaz 1-B 69
Almaz 1-b 69
Almaz-1 69
Analog Photogrammetry 262
Analytical Photogrammetry 262
annotation 113, 118, 404
    element 404
    in script models 390
    layer 404
ARC system 72
ARC/INFO 51, 88, 90, 313, 361, 367, 515
    coverages 39
    data model 39, 42
    UNGENERATE 90
ARC/INFO GENERATE 49
ARC/INFO INTERCHANGE 49
arc/second format 81
ARCGEN 90
ARCVIEW 49
area based matching 294
area of interest 35, 141, 372
ASCII 52, 82, 369
aspect 347, 352, 419
    calculating 352
    equatorial 419
    oblique 419
    polar 419
    transverse 420
atmospheric correction 140
atmospheric effect 130
atmospheric modeling 131
attribute
    imported 44
    in models 389
    information 39, 41, 44
    raster 367
    thematic 366, 367
    vector 367, 370
    viewing 368
auto update 102, 103, 104
AutoCAD 49, 51, 90
average 450
AVHRR 15, 52, 55, 62, 131, 317
    extract 63
    full set 63
    ordering 84
AVIRIS 11, 70
azimuth 417
Azimuthal Equidistant 522

B
band 2, 57, 58, 60, 62
    displaying 106
banding 19, 129
    see also striping
C

Bartlett window 186 analysis 277, 287


Bayesian classifier 240 chi-square
BIL 20, 52 distribution 255
bin 138 statistics 257
binary format 20 class 215, 364
BIP 20, 23, 52 name 366, 368
Bipolar Oblique Conic Conformal 586 value 110, 368
bit 20 numbering systems 365, 378
display device 98 classification 34, 68, 194, 215, 365
in files (depth) 17, 20, 24 and enhanced data 153, 219
block of photographs 267, 279 and rectified data 314
blocking factor 22, 24 and terrain analysis 347
border 410 evaluating 254
bpi 25 flow chart 245
breakline 292 iterative 219, 224, 242
brightness inversion 142 scheme 218
brightness value 17, 98, 99, 107, 110, 133 clump 374
BSQ 20, 21, 52 clustering 227
buffer zone 373 ISODATA 227, 228
bundle adjustment (SPOT) 286 RGB 227, 232
Butterworth window 187 coefficient 461
byte 20 in convolution 144
of variation 195
C color gun 98, 99, 107
C Programmers’ Toolkit 371 color scheme 45, 366
CalComp Electrostatic Plotter 441 color table 110, 368
Canadian Geographic Information System for printing 442
359 colorcell 99
Canon PostScript Intelligent Processing read-only 100
Unit 441 color-infrared 106
Cartesian coordinate 41 colormap 99, 113
cartography 399 display 600
cartridge tape 23, 25 complex image 53
Cassini 587 confidence level 257
Cassini-Soldner 587 conformality 416
CD-ROM 20, 23, 25 contiguity analysis 372, 374
cell 83 contingency matrix 236, 238
center of the scene 281 continuous data
change detection 18, 33, 54, 313 see data
check point 277, 279 contrast stretch 34, 133

D

for display 107, 135, 453 covariance matrix 157, 235, 240, 253, 455
linear 133, 134 cross correlation 295
min/max vs. standard deviation 108, 136
nonlinear 134 D
piecewise linear 134 data 360
contrast table 106 airborne sensor 51
control point 276 ancillary 220
convolution 19 categorical 3
cubic 341 complex 53, 479
filtering 144, 341, 342, 375 compression 153, 232
kernel continuous 3, 27, 106, 364, 472
crisp 149 displaying 109
edge detector 147 creating 122
edge enhancer 148 elevation 220, 347
gradient 202 enhancement 126
high frequency 145, 148 floating point 478
low frequency 149, 341 from aircraft 70
Prewitt 202 geocoded 22, 32, 312
zero-sum 146 gray scale 118
convolution kernel 144 hyperspectral 11
high frequency 342 interval 3
low frequency 342 nominal 3
Prewitt 202 ordering 84
coordinate ordinal 3
Cartesian 41, 422 packed 62
conversion 345 pseudo color 118
file 4, 41, 122 radar 51, 64
geographic 422, 531 applications 68
map 4, 41, 311, 314, 316 bands 66
planar 422 merging 211
reference 315, 316 raster 4, 113
retransformed 330, 338 converting to vector 87
source 316 editing 35
spherical 422 formats (BIL, etc.) 24
coordinate system 4 importing and exporting 51
correlation calculations 294 in GIS 362
correlation threshold 329 sources 51
correlation windows 294 ratio 3
covariance 238, 251, 454 satellite 51
sample 454 structure 159

D

thematic 3, 27, 110, 223, 366, 472 minimum distance 250, 252, 254, 257
displaying 112 non-parametric 244
tiled 29, 53 parallelepiped 246
topographic 81, 347 parametric 244
using 83 decorrelation stretch 158
true color 118 degrees of freedom 257
vector 113, 118, 313, 345, 367 DEM 2, 28, 52, 81, 82, 131, 292, 312, 348
converting to raster 87, 365 editing 36
copying 43 interpolation 37, 292
displaying 45 ordering 84
editing 394 density 25
densify 394 descriptive information
generalize 394 see attribute information 44
reshape 394 Design with Nature (by Ian McHarg) 359
spline 394 desktop scanners 269
split 394 detector 54, 129
unsplit 394 Developers’ Toolkit 371, 430
from raster data 47 DGN 49
importing 47, 53 digital elevation model (DEM) 292
in GIS 362 digital image 47
renaming 43 digital orthophoto 299
sources 47, 49, 51 cell sizes 301
structure 42 creation 300
viewing 117 digital orthophotography 348
multiple layers 119 Digital Photogrammetry 262
overlapping layers 119 digital picture
data correction 19, 35, 125, 129 see image 98
geometric 129, 131, 311 digital terrain model (DTM) 51, 292
radiometric 129, 207, 312 digitizing 47, 314
data file value 1, 34 GCPs 316
display 107, 122, 134 operation modes 48
in classification 215 point mode 48
data storage 20 screen 47, 49
database stream mode 48
image 31 tablet 47
decision rule 217, 243 DIME 93
Bayesian 252 dimensionality 153, 220, 460
feature space 248 disk space 26
Mahalanobis distance 251 diskette 20, 24
maximum likelihood 252, 254 displacement 304

E

display CalComp 441


32-bit 101 Versatec 441
DirectColor 101, 103, 109, 111 elevation model 291
HiColor 101, 105 generating
PC 105 digital method 291
PseudoColor 101, 102, 105, 110, 112 traditional method 291
TrueColor 101, 104, 105, 109, 111 ellipse 153, 236
display device 97, 107, 133, 135, 160 enhancement 34, 60, 125, 215
display memory 126 linear 107, 133
display resolution 97 nonlinear 133
distance image file 254, 255 on display 113, 122, 126
distortion radar data 126
geometric 299 radiometric 125, 132, 143
distribution 446 spatial 125, 132, 143
Distribution Rectangle 72 spectral 125, 153
dithering 116 entity (AutoCAD) 91
color artifacts 117 EOF (end of file) 22
color patch 117 EOSAT 19, 32, 57, 85
divergence 236 EOV (end of volume) 22
signature 239 ephemeris data 283
transformed 239 ephemeris information 469
DLG 25, 49, 51, 53, 92 epipolar geometry 289
Dreschler Operator 297 epipolar image pair 293
DTED 52, 78, 81, 348 epipolar stereopair 289
DTM 51, 292, 300 equal area
DXF 49, 51, 53, 90 see equivalence
dynamic range 17 equidistance 417
Equidistant Conic 525, 546
E Equidistant Cylindrical 587
edge detection 147, 200 Equirectangular 527
edge enhancement 148 equivalence 417
eigenvalue 154 ERDAS macro language (EML) 371
eigenvector 154, 157 ERDAS Version 7.X 27, 52, 87
8 mm tape 23, 25 EROS Data Center 85
Eikonix 71 error matrix 259
electromagnetic radiation 5, 54 ERS-1 66, 68, 69
electromagnetic spectrum 2, 5 ordering 86
long wave infrared region 6 ERS-2 69
short wave infrared region 6 ESRI 39, 359
electrostatic plotter ETAK 49, 51, 53, 93

F

Euclidean distance 250, 254, 460 pixel 98


expected value 452 tic 41
exposure station 266, 280 file format 465
extent 405 HFA 485
IMAGINE 468
F MIF 475
.fsp.img file 224 vector layers 515
false color 59 file name 31
false easting 422 extensions 465
false northing 422 film recorder 440
fast format 22 filter
Fast Fourier Transform 176 adaptive 151
feature based matching 297 Frost 192, 198
feature collection 307 Gamma-MAP 192, 199
monoscopic 307 homomorphic 189
stereoscopic 307 Lee 192, 195
work flow 307 Lee-Sigma 192
feature extraction 125 local region 192, 193
feature point matching 297 mean 192
feature space 458 median 19, 192, 193
area of interest 221, 225 periodic noise removal 188
image 224, 459 Sigma 195
fiducials 273 zero-sum 203
field 367 filtering 144
file see also convolution filtering
.fsp.img 224 focal analysis 19
.gcc 317 focal length 272
.GIS 27, 87 focal operation 35, 375
.gmd 385 focal plane 272
.img 27, 106, 468 Förstner Operator 297
.LAN 27, 87 4 mm tape 23, 25
.mdl 390 Fourier analysis 126
.ovr 404 Fourier magnitude 176
.sig 454 calculating 178
.tif 440 Fourier Transform
archiving 32 calculation 178
header 24 Editor
output 26, 31, 335, 336, 390 window functions 185
.img 135 inverse 176
classification 259 neighborhood techniques 176

G

noise removal 188 gradient kernel 202


point techniques 176 graphical model 126, 371
frequency convert to script 390
statistical 446 create 384
Frost filter 192, 198 graphical modeling 372, 383
function memory 126 graticule 410, 422
Fuyo 1 66 gray scale 348
ordering 86 great circle 417
GRID 52, 87, 88
G grid cell 1, 98
.gcc file 317 grid line 410
.GIS file 27, 87 ground control point
.gmd file 385 see GCP
GAC (Global Area Coverage) 62 ground coordinate system 264
Gamma-MAP filter 192, 199 ground truth 215, 221, 222, 236
Gaussian distribution 196
Gauss-Krüger 593 H
GCP 316, 461 halftone 441
corresponding 316 hardcopy 437
digitizing 316, 317 hardware 97
matching 329 header
minimum required 328 file 22, 24
prediction 329 record 22
selecting 316 hierarchical file architecture (HFA) 485
General Vertical Near-side Perspective 529 High Resolution Visible (HRV) sensors 280
geocentric coordinate system 264 histogram 132, 232, 366, 368, 446, 459, 460
geocoded data 22 breakpoint 136
geographic information system signature 236, 242
see GIS histogram equalization
geology 68 formula 139
geometric distortion 299 histogram match 140
georeference 312, 437 homogeneity
gigabyte 20 spatial frequency 149
GIS 1 homomorphic filtering 189
data base 361 host workstation 97
defined 360 Hotine 548
history 359 HRPT (High Resolution Picture Transmis-
glaciology 68 sion) 62
global operation 35 HRV sensors 280
Gnomonic 533 hydrology 68

I

hyperspectral data 11 generic 52


hyperspectral image processing 126 inclination 283
index 372, 380
I vegetation 10
.img file 2, 27, 468 INFO 44, 91, 92, 94, 95, 367
file path 44
.img 2 see also ARC/INFO
ideal window 185 information (vs. data) 360
IFOV (instantaneous field of view) 16 interior orientation 272
IGES 49, 51, 53, 94 International Dateline 422
image 1, 98, 125 interval
airborne 51 classes 337, 365
complex 53 data 3
digital 47 inverse Fast Fourier Transform 176
microscopic 51 IRIS Color Inkjet Printer 442
pseudo color 45 ISODATA 472
radar 51
raster 47 J
ungeoreferenced 41 Jeffries-Matusita distance 240
image algebra 166, 219 JERS-1 69
Image Catalog 31, 32 ordering 86
image coordinate system 263
image data 1 K
image display 2, 97 Kappa coefficient 259
image file 1, 106, 348 Kodak XL7700 Continuous Tone Printer
statistics 446 442
Image Information 108, 115, 312, 314, 366 kurtosis 205
Image Interpreter 10, 19, 126, 135, 151, 160, 206,
232, 349, 352, 354, 356, 365, 369, 371 L
functions 127 .LAN file 27, 87
image matching 293 Laborde Oblique Mercator 588
area based 294 LAC (Local Area Coverage) 62
feature based 297 Lambert Azimuthal Equal Area 535
feature point 297 Lambert Conformal 563
image processing 1 Lambert Conformal Conic 538, 586, 589
image pyramid 293 Lambertian reflectance model 356
image scale 266 Landsat 10, 18, 28, 47, 52, 55, 125, 131, 151, 561
image space coordinate system 263 description 55, 57
import history 57
direct 51 MSS 16, 57, 58, 129, 131

M

ordering 84 choropleth 400


TM 9, 10, 15, 22, 58, 150, 193, 212, 314, 317 colors in 403
displaying 106 composite 400
Laplacian operator 202, 203 composition 432
Latitude/Longitude 81, 312, 422, 531 contour 400
rectifying 336 credit 412
layer 2, 363, 382 derivative 400
least squares correlation 295 hardcopy 435
least squares regression 318, 324 index 400
Lee filter 192, 195 information 473
Lee-Sigma filter 192 inset 400
legend 409 isarithmic 400
level 1A data 268 isopleth 400
level 1B data 268, 319 label 412
level slice 140 land cover 215
Light SAR 69 lettering 414
line 40, 45, 53, 92 morphometric 400
line detection 200 outline 400
line dropout 19, 130 output to TIFF 440
linear regression 131 paneled 437
linear transformation 319, 324 planimetric 61, 400
lines 280 printing 437
Linotronic Imagesetter 441 continuous tone 442
local region filter 192, 193 with black ink 443
lookup table 99, 133 qualitative 402
display 136 quantitative 402
Lowtran 9, 131 relief 401
scale 427, 438
M scaled 437, 441
.mdl file 390 shaded relief 401
Machine Independent Format (MIF) 475 slope 401
magnification 98, 120, 121 thematic 401, 402
Mahalanobis distance 254 title 412
map 400 topographic 61, 401
accuracy 427, 434 typography 412
aspect 400 viewshed 401
base 400 Map Composer 399, 439
bathymetric 400 map coordinate 314, 316
book 437 conversion 345
cadastral 400 map feature collection 307

M

map projection 311, 314, 416, 474 measurement 48


azimuthal 416, 419, 427 measurement vector 456, 458, 462
compromise 416 median filter 192, 193
conformal 428 megabyte 20
conical 416, 419, 427 Mercator 421, 541, 544, 548, 578, 583, 587
cylindrical 416, 420, 427 meridian 422
equal area 428 microscopic imagery 51
external 423, 585 Microsoft Windows NT 97, 105
gnomonic 419 MIF data dictionary 483
modified 421 Miller Cylindrical 544
orthographic 419 minimum distance
planar 416, 419 classification decision rule 231, 238
pseudo 421 Minnaert constant 357
selecting 427 model 382, 384
stereographic 419 Model Maker 126, 371, 382
types 419 criteria function 389
units 424 functions 386
USGS 423, 518 object 387
MapBase data type 388
see ETAK matrix 387
mapping 399 raster 387
mask 375 scalar 387
matrix 462 table 387
analysis 372, 381 working window 388
contingency 236, 238 modeling 35, 382
covariance 157, 235, 240, 253, 455 and image processing 383
error 259 and terrain analysis 347
transformation 315, 329, 334, 461 using conditional statements 389
matrix algebra Modified Polyconic 589
and transformation matrix 463 Modified Stereographic 590
multiplication 463 Modified Transverse Mercator 546
notation 462 Modtran 9, 131
transposition 464 Mollweide Equal Area 591
maximum likelihood monoscopic collection 307
classification decision rule 238 Moravec Operator 297
mean 108, 136, 237, 450, 451, 454 mosaic 33, 313
of ISODATA clusters 229 multiplicative algorithm 151
vector 240, 457 multispectral imagery 55, 60
mean Euclidean distance 205 multitemporal imagery 33
mean filter 192

N

N NPO Mashinostroenia 69
nadir 55, 283, 302
nadir line 302 O
nadir point 302 .ovr file 404
NASA 57, 64, 70 Oblique Mercator 548, 563, 588, 592
NASA/JPL 69 oceanography 68
natural-color 106 off-nadir 60, 283
nearest neighbor offset 319
see resample oil exploration 68
neatline 410 1:24,000 scale 82
neighborhood analysis 372, 375 1:250,000 scale 82
boundary 376 opacity 119, 368
density 376 optical disk 23
diversity 376 orbit 280
majority 376 order
maximum 376 of polynomial 461
mean 376 of transformation 461
median 376 ordinal
minimum 376 classes 337, 365
minority 377 data 3
rank 377 orientation angle 284
standard deviation 377 orthocorrection 83, 132, 209, 298, 306, 308, 312
sum 377 orthogonal 298
9-track tape 23, 25 orthogonal distance 272
NOAA 62 Orthographic 551
node 40 orthographic projection 298, 299
dangling 397 orthoimage 299
from-node 40 orthomap 308
pseudo 397 orthorectification 298, 308, 312
to-node 40 output file 26, 335, 336, 390
noise removal 188 .img 135
nominal classification 259
classes 337, 365 overlay 372, 379
data 3 overlay file 404
Non-Lambertian reflectance model 356, 357
nonlinear transformation 321, 325, 327 P
normal distribution 153, 251, 252, 253, 450, 454, panchromatic imagery 55, 60
459 parallel 422
Normalized Difference Vegetation Index parallelepiped
(NDVI) 11, 166 alarm 236

R

parameter 454 computing 156


parametric 252, 253, 454 principal point 272
pattern recognition 215, 236 printer
periodic noise removal 188 Canon PostScript Intelligent Processing
perspective center 272, 280 Unit 441
photogrammetric processing 270 IRIS Color Inkjet 442
photogrammetric quality scanners 269 Kodak XL7700 Continuous Tone 442
photogrammetry 262 Linotronic Imagesetter 441
work flow 265 PostScript 439
photograph 51, 54, 151 Tektronix Inkjet 441
aerial 70 Tektronix Phaser 441
ordering 85 Tektronix Phaser II SD 442
pixel 1, 98, 281 probability 241, 252
depth 97 profile 81
display 98 projection
file vs. display 98 perspective 575
size 281 Projection Chooser 423
pixel coordinate system 263 proximity analysis 372, 373
Plane Table Photogrammetry 262 pseudo color 59
Plate Carrée 527, 587, 594 display 364
plotter pseudo color image 45
CalComp Electrostatic 441 pushbroom scanner 60, 261, 268
Versatec Electrostatic 441 pyramid layer 30, 114, 474
point 40, 45, 53, 92 Pythagorean Theorem 460
label 40
point ID 316 R
Polar Stereographic 554, 589 Radar 5, 19, 67, 151, 202, 203, 204
pollution monitoring 68 radar imagery 51
Polyconic 546, 557, 589 RADARSAT 52, 69
polygon 40, 45, 53, 92, 171, 222, 375, 394 ordering 86
polynomial 323, 461 radiative transfer equation 9
PostScript 439 RAM 113
precision 287 range line 208
Preference Editor 114 Raster Attribute Editor 111, 368, 486
Prewitt kernel 202 raster editing 35
primary color 98, 443 raster image 47
RGB vs. CMY 443 raster region 374
principal component band 154 ratio
principal components 34, 149, 150, 153, 158, classes 337, 365
212, 219, 455, 459, 460 data 3

S

Rayleigh scattering 8 rhumb line 417


recode 35, 372, 378, 382 right hand rule 264
record 24, 367 RMS error 318, 329, 330, 334
logical 24 tolerance 333
physical 24 total 332
rectification 32, 311 roam 121
process 315 Robinson Pseudocylindrical 592
Rectified Skew Orthomorphic 592 robust estimation 288
reduction 121 Root Mean Square Error (RMSE) 269
reference coordinate 315, 316 rotate 319
reference pixel 258 rubber sheeting 321
reference plane 264, 299
reference windows 294 S
reflect 319, 320 .sig file 454
reflection spectra 6 Sanson-Flamsteed 559
registration 311 SAR 64
vs. rectification 315 satellite 54
regular block of photos 267 imagery 5
relation based matching 297 system 54
remote sensing 311 scale 15, 319, 405, 437
vs. scanning 71 display 438
report equivalents 406
generate 369 large 15
resample 311, 314, 335 map 438
Bilinear Interpolation 113, 335, 338 paper 439
Cubic Convolution 113, 335, 341 determining 440
for display 113 pixels per inch 407
Nearest Neighbor 113, 335, 337, 341 representative fraction 405
residuals 287, 330 small 15
resolution 15 verbal statement 405
display 97 scale bar 405
merge 150 scaled map 441
radiometric 15, 17, 18, 57 scanner 54
spatial 15, 18, 141, 438 scanning 71, 314
spectral 15, 18 scanning window 375
temporal 15, 18 scattering 7
Restoration 335 Rayleigh 8
retransformed coordinate 330, 338 scatterplot 153, 232, 459
RGB monitor 99 feature space 224
RGB to IHS 211 scene 280

S

screendump command 88 non-parametric 216, 225, 235


Script Librarian 391 parametric 216, 235
script model 126, 371 separability 238, 240
data type 393 statistics 236, 242
library 390 transformed divergence 239
statement 392 Simple Conic 525
script modeling 372 Simple Cylindrical 527
SDTS 49 Sinusoidal 421, 559
search windows 294 SIR-A 66, 69
Seasat-1 64 ordering 86
seat 97 SIR-B 66, 69
secant 419 ordering 86
seed properties 223 SIR-C 69
sensor 54 ordering 86
active 6, 66 skew 319
passive 6, 66 skewness 205
radar 69 slant-to-ground range correction 209
separability SLAR 64, 66
listing 240 slope 347, 349
signature 238 calculating 349
7.5-minute DEM 82 Softcopy Photogrammetry 262
shaded relief 347, 354 source coordinate 316
calculating 355 Southern Orientated Gauss Conformal 593
shadow Space Oblique Mercator 421, 548, 561
enhancing 134 spatial frequency 143
ship monitoring 68 Spatial Modeler 19, 53, 126, 357, 365, 382
Sigma filter 195 Spatial Modeler Language 126, 371, 382, 390
Sigma notation 445 speckle noise 67
signature 216, 217, 221, 224, 235 removing 192
alarm 236 speckle suppression 19
append 242 local region filter 193
contingency matrix 238 mean filter 192
delete 242 median filter 192, 193
divergence 236, 239 Sigma filter 195
ellipse 236, 237 spectral dimensionality 456
evaluating 236 spectral distance 238, 250, 254, 257, 460
file 454 in ISODATA clustering 228
histogram 236, 242 spectral space 154, 156, 458
manipulating 224, 242 spectroscopy 6
merge 242 spheroid 429

T

SPOT 10, 15, 18, 19, 28, 32, 47, 52, 55, 60, 78, 95, 125, T
131, 151, 317 .tif file 440
ordering 84 tangent 419
panchromatic 15, 150 tape 20
XS 60 Tasseled Cap transformation 159, 166
displaying 106 Tektronix
SPOT bundle adjustment 286 Inkjet Printer 441
standard deviation 108, 136, 237, 287, 451 Phaser II SD 442
sample 453 Phaser Printer 441
standard meridian 417, 420 texture analysis 204
standard parallel 417, 419 thematic data
State Plane 424, 429, 538, 563, 578 see data
statistics 30, 446, 472 theme 363
signature 236 threshold 255
Stereographic 554, 575 thresholding 254
stereopair 288 thresholding (classification) 251
aerial 288 tick mark 410
epipolar 289 tie point 278, 287
SPOT 289 TIFF 52, 71, 87, 88, 439, 440
stereo-scene 283 TIGER 49, 51, 53, 95
stereoscopic collection 307 disk space requirement 96
stereoscopic imagery 61 tiled format 470
strip of photographs 266 TIN 292
striping 19, 156, 193 topocentric coordinate system 264
subset 33 topographic database 308
summation 445 topographic effect 356
sun angle 354 topographic map 308
Sun Raster 52, 87, 88 topology 41, 395
sun-synchronous orbit 280 build 395
surface generation clean 395
weighting function 38 constructing 395
swath width 54 total field of view 54
symbol 411 total RMS error 332
abstract 411 training 215
function 411 supervised 215, 219
plan 411 supervised vs. unsupervised 219
profile 411 unsupervised 216, 219, 227
replicative 411 training field 221
symbolization 39 training sample 221, 224, 258, 313, 459
symbology 45 defining 222

U

evaluating 224 vector layer 41


training site velocity vector 284
see training field Versatec Electrostatic Plotter 441
transformation vertex 40
1st-order 319 Viewer 113, 114, 126, 372
linear 319, 324 dithering 116
matrix 318, 322, 330 linking 120
nonlinear 321, 325, 327 volume 25
order 318 set 25
transformation matrix 315, 318, 322, 329, 330, VPF 49
334, 461
transposition 253, 464 W
transposition function 239, 252, 253 weight factor 380
Transverse Mercator 546, 563, 578, 580, 587, 593 classification
triangulated irregular network (TIN) 292 separability 240
triangulation 270 weighting function (surfacing) 38
accuracy measures 287 windows
aerial 269, 271 correlation 294
SPOT 280 reference 294
true color 59 search 294
true direction 417 Winkel’s Tripel 594
type style workspace 42
on maps 413
X
U X residual 330
ungeoreferenced image 41 X RMS error 332
Universal Polar Stereographic 554 X Window 97
Universal Transverse Mercator 546, 578, 580 XSCAN 71
UTM
see Universal Transverse Mercator Y
Y residual 330
V Y RMS error 332
Van der Grinten I 583, 591
variable 364 Z
in models 393 zero-sum filter 146, 203
variance 205, 251, 452, 453, 454, 455 zone 72
sample 452 zone distribution rectangle (ZDR) 72
vector 462 zoom 120, 121
vector data
see data
