Geography Unit 1
REMOTE SENSING AND
IMAGE INTERPRETATION
Seventh Edition
Jonathan W. Chipman
Dartmouth College
Vice President and Publisher Petra Recter
Executive Editor Ryan Flahive
Sponsoring Editor Marian Provenzano
Editorial Assistant Kathryn Hancox
Associate Editor Christina Volpe
Assistant Editor Julia Nollen
Senior Production Manager Janis Soo
Production Editor Bharathy Surya Prakash
Marketing Manager Suzanne Bochet
Photo Editor James Russiello
Cover Design Kenji Ngieng
Cover Photo Quantum Spatial and Washington State DOT
This book was set in 10/12 New Aster by Laserwords and printed and bound by Courier Westford.
Founded in 1807, John Wiley & Sons, Inc. has been a valued source of knowledge and understanding for more than
200 years, helping people around the world meet their needs and fulfill their aspirations. Our company is built on a
foundation of principles that include responsibility to the communities we serve and where we live and work.
In 2008, we launched a Corporate Citizenship Initiative, a global effort to address the environmental, social,
economic, and ethical challenges we face in our business. Among the issues we are addressing are carbon
impact, paper specifications and procurement, ethical conduct within our business and among our vendors, and
community and charitable support. For more information, please visit our website: www.wiley.com/go/
citizenship.
Copyright © 2015, 2008 John Wiley & Sons, Inc. All rights reserved. No part of this publication may be
reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976
United States Copyright Act, without either the prior written permission of the Publisher, or authorization
through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc. 222 Rosewood Drive,
Danvers, MA 01923, website www.copyright.com. Requests to the Publisher for permission should be addressed
to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, (201) 748-
6011, fax (201) 748-6008, website www.wiley.com/go/permissions.
Evaluation copies are provided to qualified academics and professionals for review purposes only, for use in their
courses during the next academic year. These copies are licensed and may not be sold or transferred to a third
party. Upon completion of the review period, please return the evaluation copy to Wiley. Return instructions and
a free-of-charge return mailing label are available at www.wiley.com/go/returnlabel. If you have chosen to adopt
this textbook for use in your course, please accept this book as your complimentary desk copy. Outside of the
United States, please contact your local sales representative.
Lillesand, Thomas M.
Remote sensing and image interpretation / Thomas M. Lillesand, Ralph W. Kiefer,
Jonathan W. Chipman. — Seventh edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-118-34328-9 (paperback)
1. Remote sensing. I. Kiefer, Ralph W. II. Chipman, Jonathan W. III. Title.
G70.4.L54 2015
621.36'78—dc23
2014046641
use of computers in remote sensing and image interpretation in the 1960s and
early 1970s. The book’s readers have diversified as the field of remote sensing has
become a truly international activity, with countries in Asia, Africa, and Latin
America contributing at all levels from training new remote sensing analysts, to
using geospatial technology in managing their natural resources, to launching and
operating new earth observation satellites. At the same time, the proliferation
of high‐resolution image‐based visualization platforms—from Google Earth to
Microsoft’s Bing Maps—is in a sense turning everyone with access to the Internet
into an “armchair remote‐sensing aficionado.” Acquiring the expertise to produce
informed, reliable interpretations of all this newly available imagery, however,
takes time and effort. To paraphrase the words attributed to Euclid, there is no
royal road to image analysis—developing these skills still requires a solid ground-
ing in the principles of electromagnetic radiation, sensor design, digital image pro-
cessing, and applications.
This edition of the book strongly emphasizes digital image acquisition and
analysis, while retaining basic information about earlier analog sensors and meth-
ods (from which a vast amount of archival data exist, increasingly valuable as a
source for studies of long‐term change). We have expanded our coverage of lidar
systems and of 3D remote sensing more generally, including digital photogram-
metric methods such as structure‐from‐motion (SFM). In keeping with the chan-
ges sweeping the field today, images acquired from uninhabited aerial system
(UAS) platforms are now included among the figures and color plates, along with
images from many of the new optical and radar satellites that have been launched
since the previous edition was published. On the image analysis side, the continu-
ing improvement in computational power has led to an increased emphasis on
techniques that take advantage of high‐volume data sets, such as those dealing
with neural network classification, object‐based image analysis, change detection,
and image time‐series analysis.
While adding in new material (including many new images and color plates)
and updating our coverage of topics from previous editions, we have also made
some improvements to the organization of the book. Most notably, what was
formerly Chapter 4—on visual image interpretation—has been split. The first sec-
tions, dealing with methods for visual image interpretation, have been brought
into Chapter 1, in recognition of the importance of visual interpretation through-
out the book (and the field). The remainder of the former Chapter 4 has been
moved to the end of the book and expanded into a new, broader review of applica-
tions of remote sensing not limited to visual methods alone. In addition, our cover-
age of radar and lidar systems has been moved ahead of the chapters on digital
image analysis methods and applications of remote sensing.
Despite these changes, we have also endeavored to retain the traditional
strengths of this book, which date back to the very ûrst edition. As noted above,
the book is deliberately “discipline neutral” and can serve as an introduction to the
principles, methods, and applications of remote sensing across many different
subject areas. There is enough material in this book for it to be used in many
different ways. Some courses may omit certain chapters and use the book in
a one‐semester or one‐quarter course; the book may also be used in a two‐course
sequence. Others may use this discussion in a series of modular courses, or in a
shortcourse/workshop format. Beyond the classroom, the remote sensing practi-
tioner will find this book an enduring reference guide—technology changes con-
stantly, but the fundamental principles of remote sensing remain the same. We
have designed the book with these different potential uses in mind.
As always, this edition stands upon the shoulders of those that preceded it.
Many individuals contributed to the first six editions of this book, and we thank
them again, collectively, for their generosity in sharing their time and expertise.
In addition, we would like to acknowledge the efforts of all the expert reviewers
who have helped guide changes in this edition and previous editions. We thank the
reviewers for their comments and suggestions.
Illustration materials for this edition were provided by: Dr. Sam Batzli, USGS
WisconsinView program, University of Wisconsin—Madison Space Science and
Engineering Center; Ruediger Wagner, Vice President of Imaging, Geospatial
Solutions Division and Jennifer Bumford, Marketing and Communications, Leica
Geosystems; Philipp Grimm, Marketing and Sales Manager, ILI GmbH; Jan
Schoderer, Sales Director UltraCam Business Unit and Alexander Wiechert, Busi-
ness Director, Microsoft Photogrammetry; Roz Brown, Media Relations Manager,
Ball Aerospace; Rick Holasek, NovaSol; Stephen Lich and Jason Howse, ITRES,
Inc.; Qinghua Guo and Jacob Flanagan, UC‐Merced; Dr. Thomas Morrison, Wake
Forest University; Dr. Andrea Laliberte, Earthmetrics, Inc.; Dr. Christoph
Borel‐Donohue, Research Associate Professor of Engineering Physics, U.S. Air
Force Institute of Technology; Elsevier Limited, the German Aerospace Center
(DLR), Airbus Defence & Space, the Canadian Space Agency, Leica Geosystems,
and the U.S. Library of Congress. Dr. Douglas Bolger, Dartmouth College, and
Dr. Julian Fennessy, Giraffe Conservation Foundation, generously contributed to
the discussion of wildlife monitoring in Chapter 8, including the giraffe telemetry
data used in Figure 8.24. Our particular thanks go to those who kindly shared
imagery and information about the Oso landslide in Washington State, including
images that ultimately appeared in a figure, a color plate, and the front and back
covers of this book; these sources include Rochelle Higgins and Susan Jackson at
Quantum Spatial, Scott Campbell at the Washington State Department of Trans-
portation, and Dr. Ralph Haugerud of the U.S. Geological Survey.
Numerous suggestions relative to the photogrammetric material contained in
this edition were provided by Thomas Asbeck, CP, PE, PLS; Dr. Terry Keating, CP,
PE, PLS; and Michael Renslow, CP, RPP.
We also thank the many faculty, academic staff, and graduate and under-
graduate students at Dartmouth College and the University of Wisconsin—
Madison who made valuable contributions to this edition, both directly and
indirectly.
Special recognition is due our families for their patient understanding and
encouragement while this edition was in preparation.
Finally, we want to encourage you, the reader, to use the knowledge of remote
sensing that you might gain from this book to literally make the world a better
place. Remote sensing technology has proven to provide numerous scientific, com-
mercial, and social benefits. Among these is not only the efficiency it brings to the
day‐to‐day decision‐making process in an ever‐increasing range of applications,
but also the potential this field holds for improving the stewardship of earth’s
resources and the global environment. This book is intended to provide a technical
foundation for you to aid in making this tremendous potential a reality.
Thomas M. Lillesand
Ralph W. Kiefer
Jonathan W. Chipman
This book is dedicated to the peaceful application of remote sensing in order to maximize
the scientific, social, and commercial benefits of this technology for all humankind.
CONTENTS

6 Microwave and Lidar Sensing 385
6.1 Introduction 385
6.2 Radar Development 386
6.3 Imaging Radar System Operation 389
6.4 Synthetic Aperture Radar 399
6.5 Geometric Characteristics of Radar Imagery 402
6.6 Transmission Characteristics of Radar Signals 409
6.7 Other Radar Image Characteristics 413
6.8 Radar Image Interpretation 417
6.9 Interferometric Radar 435
6.10 Radar Remote Sensing from Space 441
6.11 Seasat-1 and the Shuttle Imaging Radar Missions 443
6.12 Almaz-1 448
6.13 ERS, Envisat, and Sentinel-1 448
6.14 JERS-1, ALOS, and ALOS-2 450

7 Digital Image Analysis 485
7.1 Introduction 485
7.2 Preprocessing of Images 488
7.3 Image Enhancement 500
7.4 Contrast Manipulation 501
7.5 Spatial Feature Manipulation 507
7.6 Multi-Image Manipulation 517
7.7 Image Classification 537
7.8 Supervised Classification 538
7.9 The Classification Stage 540
7.10 The Training Stage 546
7.11 Unsupervised Classification 556
7.12 Hybrid Classification 560
7.13 Classification of Mixed Pixels 562
7.14 The Output Stage and Postclassification Smoothing 568
7.15 Object-Based Classification 570
7.16 Neural Network Classification 573

8 Applications of Remote Sensing 609

Index 709
1.1 INTRODUCTION
Remote sensing is the science and art of obtaining information about an object,
area, or phenomenon through the analysis of data acquired by a device that is not
in contact with the object, area, or phenomenon under investigation. As you read
these words, you are employing remote sensing. Your eyes are acting as sensors
that respond to the light reflected from this page. The “data” your eyes acquire are
impulses corresponding to the amount of light reflected from the dark and light
areas on the page. These data are analyzed, or interpreted, in your mental compu-
ter to enable you to explain the dark areas on the page as a collection of letters
forming words. Beyond this, you recognize that the words form sentences, and
you interpret the information that the sentences convey.
In many respects, remote sensing can be thought of as a reading process.
Using various sensors, we remotely collect data that may be analyzed to obtain
information about the objects, areas, or phenomena being investigated. The remo-
tely collected data can be of many forms, including variations in force distribu-
tions, acoustic wave distributions, or electromagnetic energy distributions. For
example, a gravity meter acquires data on variations in the distribution of the
force of gravity. Sonar, like a bat’s navigation system, obtains data on variations
in acoustic wave distributions. Our eyes acquire data on variations in electro-
magnetic energy distributions.
This book is about electromagnetic energy sensors that are operated from airborne
and spaceborne platforms to assist in inventorying, mapping, and monitoring
earth resources. These sensors acquire data on the way various earth surface
features emit and reflect electromagnetic energy, and these data are analyzed to
provide information about the resources under investigation.
Figure 1.1 schematically illustrates the generalized processes and elements
involved in electromagnetic remote sensing of earth resources. The two basic pro-
cesses involved are data acquisition and data analysis. The elements of the data
acquisition process are energy sources (a), propagation of energy through the
atmosphere (b), energy interactions with earth surface features (c), retransmission
of energy through the atmosphere (d), airborne and/or spaceborne sensors (e),
resulting in the generation of sensor data in pictorial and/or digital form ( f ). In
short, we use sensors to record variations in the way earth surface features reflect
and emit electromagnetic energy. The data analysis process (g) involves examining
the data using various viewing and interpretation devices to analyze pictorial data
and/or a computer to analyze digital sensor data. Reference data about the resour-
ces being studied (such as soil maps, crop statistics, or field-check data) are used
when and where available to assist in the data analysis. With the aid of the refer-
ence data, the analyst extracts information about the type, extent, location, and
condition of the various resources over which the sensor data were collected. This
information is then compiled (h), generally in the form of maps, tables, or digital
spatial data that can be merged with other “layers” of information in a geographic
information system (GIS). Finally, the information is presented to users (i), who
apply it to their decision-making process.
In the remainder of this chapter, we discuss the basic principles underlying the
remote sensing process. We begin with the fundamentals of electromagnetic
energy and then consider how the energy interacts with the atmosphere and with
earth surface features. Next, we summarize the process of acquiring remotely
sensed data and introduce the concepts underlying digital imagery formats. We
also discuss the role that reference data play in the data analysis procedure and
describe how the spatial location of reference data observed in the field is often
determined using Global Positioning System (GPS) methods. These basics will
permit us to conceptualize the strengths and limitations of “real” remote sensing
systems and to examine the ways in which they depart from an “ideal” remote
sensing system. We then discuss briefly the rudiments of GIS technology and the
spatial frameworks (coordinate systems and datums) used to represent the posi-
tions of geographic features in space. Because visual examination of imagery will
play an important role in every subsequent chapter of this book, this first chapter
concludes with an overview of the concepts and processes involved in visual inter-
pretation of remotely sensed images. By the end of this chapter, the reader should
have a grasp of the foundations of remote sensing and an appreciation for the
close relationship among remote sensing, GPS methods, and GIS operations.
Chapters 2 and 3 deal primarily with photographic remote sensing. Chapter 2
describes the basic tools used in acquiring aerial photographs, including both
analog and digital camera systems. Digital videography is also treated in Chapter 2.
Chapter 3 describes the photogrammetric procedures by which precise spatial
measurements, maps, digital elevation models (DEMs), orthophotos, and other
derived products are made from airphotos.
Discussion of nonphotographic systems begins in Chapter 4, which describes
the acquisition of airborne multispectral, thermal, and hyperspectral data. In
Chapter 5 we discuss the characteristics of spaceborne remote sensing systems
and examine the principal satellite systems used to collect imagery from reflected
and emitted radiance on a global basis. These satellite systems range from the
Landsat and SPOT series of moderate-resolution instruments, to the latest gen-
eration of high-resolution commercially operated systems, to various meteor-
ological and global monitoring systems.
Chapter 6 is concerned with the collection and analysis of radar and lidar
data. Both airborne and spaceborne systems are discussed. Included in this latter
category are such systems as the ALOS, Envisat, ERS, JERS, Radarsat, and
ICESat satellite systems.
In essence, from Chapter 2 through Chapter 6, this book progresses from the
simplest sensing systems to the more complex. There is also a progression from
short to long wavelengths along the electromagnetic spectrum (see Section 1.2).
That is, discussion centers on photography in the ultraviolet, visible, and near-
infrared regions, multispectral sensing (including thermal sensing using emitted
long-wavelength infrared radiation), and radar sensing in the microwave region.
The final two chapters of the book deal with the manipulation, interpretation,
and analysis of images. Chapter 7 treats the subject of digital image processing and
describes the most commonly employed procedures through which computer-
assisted image interpretation is accomplished. Chapter 8 presents a broad range of
applications of remote sensing, including both visual interpretation and computer-
aided analysis of image data.
Throughout this book, the International System of Units (SI) is used. Tables
are included to assist the reader in converting between SI and units of other mea-
surement systems.
Finally, a Works Cited section provides a list of references cited in the text. It
is not intended to be a compendium of general sources of additional information.
Three appendices provided on the publisher’s website (http://www.wiley.com/
college/lillesand) offer further information about particular topics at a level of
detail beyond what could be included in the text itself. Appendix A summarizes
the various concepts, terms, and units commonly used in radiation measurement
in remote sensing. Appendix B includes sample coordinate transformation and
resampling procedures used in digital image processing. Appendix C discusses
some of the concepts, terminology, and units used to describe radar signals.
Visible light is only one of many forms of electromagnetic energy. Radio waves,
ultraviolet rays, radiant heat, and X-rays are other familiar forms. All this energy
is inherently similar and propagates in accordance with basic wave theory. As
shown in Figure 1.2, this theory describes electromagnetic energy as traveling in a
harmonic, sinusoidal fashion at the “velocity of light” c. The distance from one
wave peak to the next is the wavelength λ, and the number of peaks passing a fixed
point in space per unit time is the wave frequency ν.
From basic physics, waves obey the general equation

c = νλ    (1.1)

Because c is essentially a constant (3 × 10⁸ m/sec), frequency ν and wavelength λ for
any given wave are related inversely, and either term can be used to characterize a wave.
Figure 1.2 Electromagnetic wave. Components include a sinusoidal electric wave (E) and a similar magnetic
wave (M) at right angles, both being perpendicular to the direction of propagation.
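To make Eq. 1.1 concrete, the short sketch below converts between wavelength and frequency; this is a minimal illustration, and the choice of green light at 0.55 μm is an arbitrary example rather than a value from the text.

```python
# Sketch of Eq. 1.1 (c = nu * lambda): converting between wavelength and
# frequency for an electromagnetic wave. The example wavelength is illustrative.
C = 3.0e8  # speed of light, m/sec (treated as a constant, per the text)

def frequency_hz(wavelength_m):
    """Wave frequency (Hz) for a given wavelength (m)."""
    return C / wavelength_m

def wavelength_from_frequency(frequency_hz_value):
    """Wavelength (m) for a given frequency (Hz)."""
    return C / frequency_hz_value

# Green light near 0.55 micrometers (0.55e-6 m):
print(frequency_hz(0.55e-6))  # about 5.5e14 Hz
```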
from a blackbody per 1-μm spectral interval. Hence, the area under these curves
equals the total radiant exitance, M, and the curves illustrate graphically what the
Stefan–Boltzmann law expresses mathematically: The higher the temperature of
the radiator, the greater the total amount of radiation it emits. The curves also
show that there is a shift toward shorter wavelengths in the peak of a blackbody
radiation distribution as temperature increases. The dominant wavelength, or
wavelength at which a blackbody radiation curve reaches a maximum, is related
to its temperature by Wien’s displacement law,
λm = A/T    (1.5)

where

λm = wavelength of maximum spectral radiant exitance, μm
A = 2898 μm K
T = temperature, K
Thus, for a blackbody, the wavelength at which the maximum spectral radiant
exitance occurs varies inversely with the blackbody’s absolute temperature.
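A minimal worked example of Wien’s displacement law (Eq. 1.5) is sketched below, using the approximate sun (6000 K) and earth (300 K) temperatures shown in Figure 1.4; the code itself is illustrative and not part of the original text.

```python
# Sketch of Wien's displacement law (Eq. 1.5): peak wavelength of blackbody
# emission, lambda_m = A / T, in micrometers.
A = 2898.0  # micrometer-kelvins

def peak_wavelength_um(temperature_k):
    """Wavelength of maximum spectral radiant exitance, in micrometers."""
    return A / temperature_k

print(peak_wavelength_um(6000))  # sun, ~6000 K: about 0.48 um (visible light)
print(peak_wavelength_um(300))   # earth, ~300 K: about 9.7 um (thermal infrared)
```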
We observe this phenomenon when a metal body such as a piece of iron is
heated. As the object becomes progressively hotter, it begins to glow and its color
shifts toward progressively shorter wavelengths, from dull red through orange and yellow toward white.
[Figure 1.4 plots spectral radiant exitance Mλ (W m⁻² μm⁻¹) against wavelength (0.1 to 100 μm) for blackbodies at temperatures from 200 K to 6000 K, including curves at the sun’s temperature (6000 K), at incandescent lamp temperature (3000 K), and at the earth’s temperature (300 K).]
Figure 1.4 Spectral distribution of energy radiated from blackbodies of various temperatures.
(Note that spectral radiant exitance Mλ is the energy emitted per unit wavelength interval.
Total radiant exitance M is given by the area under the spectral radiant exitance curves.)
Irrespective of its source, all radiation detected by remote sensors passes through
some distance, or path length, of atmosphere. The path length involved can vary
widely. For example, space photography results from sunlight that passes through
the full thickness of the earth’s atmosphere twice on its journey from source to
sensor. On the other hand, an airborne thermal sensor detects energy emitted
directly from objects on the earth, so a single, relatively short atmospheric path
length is involved. The net effect of the atmosphere varies with these differences
in path length and also varies with the magnitude of the energy signal being
sensed, the atmospheric conditions present, and the wavelengths involved.
Because of the varied nature of atmospheric effects, we treat this subject on a
sensor-by-sensor basis in other chapters. Here, we merely wish to introduce the
notion that the atmosphere can have a profound effect on, among other things,
the intensity and spectral composition of radiation available to any sensing sys-
tem. These effects are caused principally through the mechanisms of atmospheric
scattering and absorption.
Scattering
it scatters the shorter (blue) wavelengths more dominantly than the other visible
wavelengths. Consequently, we see a blue sky. At sunrise and sunset, however, the
sun’s rays travel through a longer atmospheric path length than during midday.
With the longer path, the scatter (and absorption) of short wavelengths is so com-
plete that we see only the less scattered, longer wavelengths of orange and red.
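The preference for scattering blue light reflects the inverse fourth-power wavelength dependence of Rayleigh scattering (a standard relationship that the excerpt above does not restate); the sketch below simply compares the relative effect at two illustrative wavelengths.

```python
# Sketch: relative strength of Rayleigh scattering, which varies inversely with
# the fourth power of wavelength. Wavelengths below are illustrative.
def relative_rayleigh(wavelength_um, reference_um=0.4):
    """Scattering strength relative to blue light at 0.4 um."""
    return (reference_um / wavelength_um) ** 4

print(relative_rayleigh(0.4))  # blue: 1.0 (reference)
print(relative_rayleigh(0.7))  # red: ~0.11, roughly a tenth of the blue value
```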
Rayleigh scatter is one of the primary causes of “haze” in imagery. Visually,
haze diminishes the “crispness,” or “contrast,” of an image. In color photography,
it results in a bluish-gray cast to an image, particularly when taken from high alti-
tude. As we see in Chapter 2, haze can often be eliminated or at least minimized
by introducing, in front of the camera lens, a filter that does not transmit short
wavelengths.
Another type of scatter is Mie scatter, which exists when atmospheric particle
diameters essentially equal the wavelengths of the energy being sensed. Water
vapor and dust are major causes of Mie scatter. This type of scatter tends to influ-
ence longer wavelengths compared to Rayleigh scatter. Although Rayleigh scatter
tends to dominate under most atmospheric conditions, Mie scatter is significant
in slightly overcast ones.
A more bothersome phenomenon is nonselective scatter, which comes about
when the diameters of the particles causing scatter are much larger than the
wavelengths of the energy being sensed. Water droplets, for example, cause such
scatter. They commonly have a diameter in the range 5 to 100 μm and scatter
all visible and near- to mid-IR wavelengths about equally. Consequently, this scat-
tering is “nonselective” with respect to wavelength. In the visible wavelengths,
equal quantities of blue, green, and red light are scattered; hence fog and clouds
appear white.
Absorption
Figure 1.5 Spectral characteristics of (a) energy sources, (b) atmospheric transmittance, and
(c) common remote sensing systems. (Note that wavelength scale is logarithmic.)
“visible” range) coincides with both an atmospheric window and the peak level of
energy from the sun. Emitted “heat” energy from the earth, shown by the small
curve in (a), is sensed through the windows at 3 to 5 μm and 8 to 14 μm using
such devices as thermal sensors. Multispectral sensors observe simultaneously
through multiple, narrow wavelength ranges that can be located at various points
in the visible through the thermal spectral region. Radar and passive microwave
systems operate through a window in the region 1 mm to 1 m.
The important point to note from Figure 1.5 is the interaction and the inter-
dependence between the primary sources of electromagnetic energy, the atmospheric
windows through which source energy may be transmitted to and from earth surface
features, and the spectral sensitivity of the sensors available to detect and record the
energy. One cannot select the sensor to be used in any given remote sensing task
arbitrarily; one must instead consider (1) the spectral sensitivity of the sensors
available, (2) the presence or absence of atmospheric windows in the spectral
range(s) in which one wishes to sense, and (3) the source, magnitude, and
When electromagnetic energy is incident on any given earth surface feature, three
fundamental energy interactions with the feature are possible. These are illu-
strated in Figure 1.6 for an element of the volume of a water body. Various frac-
tions of the energy incident on the element are reflected, absorbed, and/or
transmitted. Applying the principle of conservation of energy, we can state the
interrelationship among these three energy interactions as
EI(λ) = ER(λ) + EA(λ) + ET(λ)    (1.6)

where

EI = incident energy
ER = reflected energy
EA = absorbed energy
ET = transmitted energy

with all energy components being a function of wavelength λ.
Equation 1.6 is an energy balance equation expressing the interrelationship
among the mechanisms of reflection, absorption, and transmission. Two points
concerning this relationship should be noted. First, the proportions of energy
reflected, absorbed, and transmitted will vary for different earth features, depend-
ing on their material type and condition. These differences permit us to distin-
guish different features on an image. Second, the wavelength dependency means
that, even within a given feature type, the proportion of reflected, absorbed, and
Figure 1.6 Basic interactions between electromagnetic energy and an earth surface
feature.
transmitted energy will vary at different wavelengths. Thus, two features may be
indistinguishable in one spectral range and be very different in another wave-
length band. Within the visible portion of the spectrum, these spectral variations
result in the visual effect called color. For example, we call objects “blue” when
they reflect more highly in the blue portion of the spectrum, “green” when they
reflect more highly in the green spectral region, and so on. Thus, the eye utilizes
spectral variations in the magnitude of reflected energy to discriminate between
various objects. Color terminology and color mixing principles are discussed fur-
ther in Section 1.12.
Because many remote sensing systems operate in the wavelength regions in
which reflected energy predominates, the reflectance properties of earth features
are very important. Hence, it is often useful to think of the energy balance rela-
tionship expressed by Eq. 1.6 in the form
ER(λ) = EI(λ) − [EA(λ) + ET(λ)]    (1.7)

That is, the reflected energy is equal to the energy incident on a given feature
reduced by the energy that is either absorbed or transmitted by that feature.
The reflectance characteristics of earth surface features may be quantified by
measuring the portion of incident energy that is reflected. This is measured as a
function of wavelength and is called spectral reflectance, ρλ. It is mathematically
defined as

ρλ = ER(λ) / EI(λ)    (1.8)
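As a minimal sketch of Eq. 1.8, the following computes spectral reflectance from incident and reflected energy measured at a few wavelengths; the numbers are invented for illustration, not measurements from the text.

```python
# Sketch of Eq. 1.8: spectral reflectance as the ratio of reflected to incident
# energy, wavelength by wavelength. The values below are made up for illustration.
incident  = [100.0, 100.0, 100.0]   # E_I(lambda) at three wavelengths
reflected = [  8.0,  12.0,  45.0]   # E_R(lambda) at the same wavelengths

reflectance = [er / ei for er, ei in zip(reflected, incident)]
print(reflectance)  # [0.08, 0.12, 0.45], often reported as 8%, 12%, 45%
```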
type overlap in most of the visible portion of the spectrum and are very close where
they do not overlap. Hence, the eye might see both tree types as being essentially
the same shade of “green” and might confuse the identity of the deciduous and con-
iferous trees. Certainly one could improve things somewhat by using spatial clues
to each tree type’s identity, such as size, shape, site, and so forth. However, this is
often difficult to do from the air, particularly when tree types are intermixed. How
might we discriminate the two types on the basis of their spectral characteristics
alone? We could do this by using a sensor that records near-IR energy. A specia-
lized digital camera whose detectors are sensitive to near-IR wavelengths is just
such a system, as is an analog camera loaded with black and white IR film. On
near-IR images, deciduous trees (having higher IR reflectance than conifers) gen-
erally appear much lighter in tone than do conifers. This is illustrated in Figure 1.8,
which shows stands of coniferous trees surrounded by deciduous trees. In Figure
1.8a (visible spectrum), it is virtually impossible to distinguish between tree types,
even though the conifers have a distinctive conical shape whereas the deciduous
trees have rounded crowns. In Figure 1.8b (near IR), the coniferous trees have a
Figure 1.8 Low altitude oblique aerial photographs illustrating deciduous versus coniferous trees.
(a) Panchromatic photograph recording reflected sunlight over the wavelength band 0.4 to 0.7 μm.
(b) Black-and-white infrared photograph recording reflected sunlight over the 0.7 to 0.9 μm wavelength band.
(Author-prepared figure.)
distinctly darker tone. On such an image, the task of delineating deciduous versus
coniferous trees becomes almost trivial. In fact, if we were to use a computer to
analyze digital data collected from this type of sensor, we might “automate” our
entire mapping task. Many remote sensing data analysis schemes attempt to do just
that. For these schemes to be successful, the materials to be differentiated must be
spectrally separable.
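As a toy illustration of what such an “automated” scheme might look like when classes are spectrally separable, the sketch below labels pixels with a single near-IR threshold; the band values and threshold are hypothetical, and actual classification methods are treated in Chapter 7.

```python
# Toy sketch of spectrally separable classes: label pixels as deciduous or
# coniferous from a single near-IR reflectance threshold. The threshold and
# pixel values are hypothetical; practical methods appear in Chapter 7.
def label_tree_pixel(nir_reflectance, threshold=0.35):
    """Classify one pixel from its near-IR reflectance (0-1 scale)."""
    return "deciduous" if nir_reflectance > threshold else "coniferous"

pixels = [0.48, 0.22, 0.41, 0.19]   # made-up near-IR reflectances
print([label_tree_pixel(p) for p in pixels])
# ['deciduous', 'coniferous', 'deciduous', 'coniferous']
```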
Experience has shown that many earth surface features of interest can be iden-
tified, mapped, and studied on the basis of their spectral characteristics. Experience
has also shown that some features of interest cannot be spectrally separated. Thus,
to utilize remote sensing data effectively, one must know and understand the spec-
tral characteristics of the particular features under investigation in any given appli-
cation. Likewise, one must know what factors influence these characteristics.
Figure 1.9 shows typical spectral reflectance curves for many different types of
features: healthy green grass, dry (non-photosynthetically active) grass, bare soil
(brown to dark-brown sandy loam), pure gypsum dune sand, asphalt, construc-
tion concrete (Portland cement concrete), fine-grained snow, clouds, and clear
lake water. The lines in this figure represent average reflectance curves compiled
by measuring a large sample of features, or in some cases representative reflec-
tance measurements from a single typical example of the feature class. Note how
distinctive the curves are for each feature. In general, the configuration of these
curves is an indicator of the type and condition of the features to which they
apply. Although the reflectance of individual features can vary considerably above
and below the lines shown here, these curves demonstrate some fundamental
points concerning spectral reflectance.
For example, spectral reflectance curves for healthy green vegetation almost
always manifest the “peak-and-valley” configuration illustrated by green grass in
Figure 1.9. The valleys in the visible portion of the spectrum are dictated by the
pigments in plant leaves. Chlorophyll, for example, strongly absorbs energy in the
wavelength bands centered at about 0.45 and 0.67 μm (often called the “chlor-
ophyll absorption bands”). Hence, our eyes perceive healthy vegetation as green
in color because of the very high absorption of blue and red energy by plant
leaves and the relatively high reflection of green energy. If a plant is subject to
some form of stress that interrupts its normal growth and productivity, it may
decrease or cease chlorophyll production. The result is less chlorophyll absorp-
tion in the blue and red bands. Often, the red reflectance increases to the point
that we see the plant turn yellow (combination of green and red). This can be
seen in the spectral curve for dried grass in Figure 1.9.
As we go from the visible to the near-IR portion of the spectrum, the reflec-
tance of healthy vegetation increases dramatically. This spectral feature, known
as the red edge, typically occurs between 0.68 and 0.75 μm, with the exact
position depending on the species and condition. Beyond this edge, from about
Figure 1.9 Spectral reflectance curves for various feature types. (Original data courtesy USGS
Spectroscopy Lab, Johns Hopkins University Spectral Library, and Jet Propulsion Laboratory [JPL]; cloud
spectrum from Bowker et al., after Avery and Berlin, 1992. JPL spectra © 1999, California Institute of
Technology.)
0.75 to 1.3 μm (representing most of the near-IR range), a plant leaf typically
reflects 40 to 50% of the energy incident upon it. Most of the remaining energy is
transmitted, because absorption in this spectral region is minimal (less than 5%).
Plant reflectance from 0.75 to 1.3 μm results primarily from the internal structure of
plant leaves. Because the position of the red edge and the magnitude of the near-IR
reflectance beyond the red edge are highly variable among plant species, reflectance
measurements in these ranges often permit us to discriminate between species,
even if they look the same in visible wavelengths. Likewise, many plant stresses
alter the reflectance in the red edge and the near-IR region, and sensors operating
in these ranges are often used for vegetation stress detection. Also, multiple layers of
leaves in a plant canopy provide the opportunity for multiple transmissions and
reflections. Hence, the near-IR reflectance increases with the number of layers of
leaves in a canopy, with the maximum reflectance achieved at about eight leaf
layers (Bauer et al., 1986).
Beyond 1.3 μm, energy incident upon vegetation is essentially absorbed or
reflected, with little to no transmittance of energy. Dips in reflectance occur at
1.4, 1.9, and 2.7 μm because water in the leaf absorbs strongly at these wave-
lengths. Accordingly, wavelengths in these spectral regions are referred to as
water absorption bands. Reflectance peaks occur at about 1.6 and 2.2 μm,
between the absorption bands. Throughout the range beyond 1.3 μm, leaf
reflectance is approximately inversely related to the total water present in a
leaf. This total is a function of both the moisture content and the thickness of
a leaf.
The soil curve in Figure 1.9 shows considerably less peak-and-valley variation
in reflectance. That is, the factors that influence soil reflectance act over less spe-
cific spectral bands. Some of the factors affecting soil reflectance are moisture
content, organic matter content, soil texture (proportion of sand, silt, and clay),
surface roughness, and presence of iron oxide. These factors are complex, vari-
able, and interrelated. For example, the presence of moisture in soil will decrease
its reflectance. As with vegetation, this effect is greatest in the water absorption
bands at about 1.4, 1.9, and 2.7 μm (clay soils also have hydroxyl absorption
bands at about 1.4 and 2.2 μm). Soil moisture content is strongly related to the
soil texture: Coarse, sandy soils are usually well drained, resulting in low moisture
content and relatively high reflectance; poorly drained fine-textured soils will
generally have lower reflectance. Thus, the reflectance properties of a soil are con-
sistent only within particular ranges of conditions. Two other factors that reduce
soil reflectance are surface roughness and content of organic matter. The pre-
sence of iron oxide in a soil will also significantly decrease reflectance, at least in
the visible wavelengths. In any case, it is essential that the analyst be familiar
with the conditions at hand. Finally, because soils are essentially opaque to visi-
ble and infrared radiation, it should be noted that soil reflectance comes from the
uppermost layer of the soil and may not be indicative of the properties of the bulk
of the soil.
Sand can have wide variation in its spectral reflectance pattern. The curve
shown in Figure 1.9 is from a dune in New Mexico and consists of roughly 99%
gypsum with trace amounts of quartz (Jet Propulsion Laboratory, 1999). Its
absorption and reflectance features are essentially identical to those of its parent
material, gypsum. Sand derived from other sources, with differing mineral com-
positions, would have a spectral reflectance curve indicative of its parent mate-
rial. Other factors affecting the spectral response from sand include the presence
or absence of water and of organic matter. Sandy soil is subject to the same con-
siderations listed in the discussion of soil reüectance.
As shown in Figure 1.9, the spectral reflectance curves for asphalt and Port-
land cement concrete are much flatter than those of the materials discussed thus
far. Overall, Portland cement concrete tends to be relatively brighter than
asphalt, both in the visible spectrum and at longer wavelengths. It is important to
note that the reflectance of these materials may be modified by the presence of
paint, soot, water, or other substances. Also, as materials age, their spectral
reflectance patterns may change. For example, the reflectance of many types of
asphaltic concrete may increase, particularly in the visible spectrum, as their sur-
face ages.
In general, snow reflects strongly in the visible and near infrared, and absorbs
more energy at mid-infrared wavelengths. However, the reflectance of snow is
affected by its grain size, liquid water content, and presence or absence of other
materials in or on the snow surface (Dozier and Painter, 2004). Larger grains of
snow absorb more energy, particularly at wavelengths longer than 0.8 μm. At tem-
peratures near 0°C, liquid water within the snowpack can cause grains to stick
together in clusters, thus increasing the effective grain size and decreasing the
reflectance at near-infrared and longer wavelengths. When particles of con-
taminants such as dust or soot are deposited on snow, they can significantly reduce
the surface’s reflectance in the visible spectrum.
The aforementioned absorption of mid-infrared wavelengths by snow can per-
mit the differentiation between snow and clouds. While both feature types appear
bright in the visible and near infrared, clouds have significantly higher reflectance
than snow at wavelengths longer than 1.4 μm. Meteorologists can also use both
spectral and bidirectional reflectance patterns (discussed later in this section) to
identify a variety of cloud properties, including ice/water composition and
particle size.
Considering the spectral reflectance of water, probably the most distinctive
characteristic is the energy absorption at near-IR wavelengths and beyond.
In short, water absorbs energy in these wavelengths whether we are talking
about water features per se (such as lakes and streams) or water contained in
vegetation or soil. Locating and delineating water bodies with remote sensing
data are done most easily in near-IR wavelengths because of this absorption
property. However, various conditions of water bodies manifest themselves pri-
marily in visible wavelengths. The energy–matter interactions at these wave-
lengths are very complex and depend on a number of interrelated factors. For
example, the reflectance from a water body can stem from an interaction with
the water’s surface (specular reflection), with material suspended in the water,
or with the bottom of the depression containing the water body. Even with
deep water where bottom effects are negligible, the reflectance properties of a
water body are a function of not only the water per se but also the material in
the water.
Clear water absorbs relatively little energy having wavelengths less than
about 0.6 μm. High transmittance typifies these wavelengths with a maximum in
the blue-green portion of the spectrum. However, as the turbidity of water chan-
ges (because of the presence of organic or inorganic materials), transmittance—
and therefore reflectance—changes dramatically. For example, waters containing
large quantities of suspended sediments resulting from soil erosion normally
have much higher visible reflectance than other “clear” waters in the same geo-
graphic area. Likewise, the reflectance of water changes with the chlorophyll
concentration involved. Increases in chlorophyll concentration tend to decrease
water reflectance in blue wavelengths and increase it in green wavelengths.
These changes have been used to monitor the presence and estimate the con-
centration of algae via remote sensing data. Reflectance data have also been used
to determine the presence or absence of tannin dyes from bog vegetation in
lowland areas and to detect a number of pollutants, such as oil and certain
industrial wastes.
Figure 1.10 illustrates some of these effects, using spectra from three lakes
with different bio-optical properties. The first spectrum is from a clear, oligo-
trophic lake with a chlorophyll level of 1.2 μg/l and only 2.4 mg/l of dissolved
organic carbon (DOC). Its spectral reflectance is relatively high in the blue-green
portion of the spectrum and decreases in the red and near infrared. In contrast,
Figure 1.10 Spectral reflectance curves for lakes with clear water, high levels of
chlorophyll, and high levels of dissolved organic carbon (DOC).
the spectrum from a lake experiencing an algae bloom, with much higher chlor-
ophyll concentration (12.3 μg/l), shows a reflectance peak in the green spectrum
and absorption in the blue and red regions. These reflectance and absorption fea-
tures are associated with several pigments present in algae. Finally, the third
spectrum in Figure 1.10 was acquired on an ombrotrophic bog lake, with very
high levels of DOC (20.7 mg/l). These naturally occurring tannins and other com-
plex organic molecules give the lake a very dark appearance, with its reflectance
curve nearly flat across the visible spectrum.
Many important water characteristics, such as dissolved oxygen concentra-
tion, pH, and salt concentration, cannot be observed directly through changes
in water reflectance. However, such parameters sometimes correlate with
observed reflectance. In short, there are many complex interrelationships
between the spectral reflectance of water and particular characteristics. One
must use appropriate reference data to correctly interpret reflectance measure-
ments made over water.
Our discussion of the spectral characteristics of vegetation, soil, and water
has been very general. The student interested in pursuing details on this subject,
as well as factors influencing these characteristics, is encouraged to consult the
various references contained in the Works Cited section located at the end of
this book.
quantitative, but they are not absolute. They may be distinctive, but they are not
necessarily unique.
We have already looked at some characteristics of objects that influence their
spectral response patterns. Temporal effects and spatial effects can also enter into
any given analysis. Temporal effects are any factors that change the spectral char-
acteristics of a feature over time. For example, the spectral characteristics of
many species of vegetation are in a nearly continual state of change throughout a
growing season. These changes often influence when we might collect sensor data
for a particular application.
Spatial effects refer to factors that cause the same types of features (e.g., corn
plants) at a given point in time to have different characteristics at different geo-
graphic locations. In small-area analysis the geographic locations may be meters
apart and spatial effects may be negligible. When analyzing satellite data, the
locations may be hundreds of kilometers apart where entirely different soils, cli-
mates, and cultivation practices might exist.
Temporal and spatial effects influence virtually all remote sensing operations.
These effects normally complicate the issue of analyzing spectral reüectance
properties of earth resources. Again, however, temporal and spatial effects might
be the keys to gleaning the information sought in an analysis. For example, the
process of change detection is premised on the ability to measure temporal effects.
An example of this process is detecting the change in suburban development near
a metropolitan area by using data obtained on two different dates.
An example of a useful spatial effect is the change in the leaf morphology of
trees when they are subjected to some form of stress. For example, when a tree
becomes infected with Dutch elm disease, its leaves might begin to cup and curl,
changing the reflectance of the tree relative to healthy trees that surround it. So,
even though a spatial effect might cause differences in the spectral reflectances of
the same type of feature, this effect may be just what is important in a particular
application.
Finally, it should be noted that the apparent spectral response from surface
features can be influenced by shadows. While an object’s spectral reflectance
(a ratio of reflected to incident energy, see Eq. 1.8) is not affected by changes in
illumination, the absolute amount of energy reflected does depend on illumina-
tion conditions. Within a shadow, the total reflected energy is reduced, and the
spectral response is shifted toward shorter wavelengths. This occurs because the
incident energy within a shadow comes primarily from Rayleigh atmospheric
scattering, and as discussed in Section 1.3, such scattering primarily affects short
wavelengths. Thus, in visible-wavelength imagery, objects inside shadows will
tend to appear both darker and bluer than if they were fully illuminated. This
effect can cause problems for automated image classification algorithms; for
example, dark shadows of trees on pavement may be misclassified as water. The
effects of illumination geometry on reüectance are discussed in more detail later
in this section, while the impacts of shadows on the image interpretation process
are discussed in Section 1.12.
Figure 1.11 Atmospheric effects influencing the measurement of reflected solar energy.
Attenuated sunlight and skylight (E) is reflected from a terrain element having reflectance ρ.
The attenuated radiance reflected from the terrain element (ρET/π) combines with the path
radiance (Lp) to form the total radiance (Ltot) recorded by the sensor.
As illustrated in Figure 1.11, these quantities combine to give the total radiance reaching the sensor:

Ltot = ρET/π + Lp

where

Ltot = total spectral radiance measured by sensor
ρ = reflectance of object
E = irradiance on object (incoming energy)
T = transmission of atmosphere
Lp = path radiance, from the atmosphere and not from the object
It should be noted that all of the above factors depend on wavelength. Also, as
shown in Figure 1.11, the irradiance ðEÞ stems from two sources: (1) directly reflec-
ted “sunlight” and (2) diffuse “skylight,” which is sunlight that has been previously
scattered by the atmosphere. The relative dominance of sunlight versus skylight in
any given image is strongly dependent on weather conditions (e.g., sunny vs. hazy
vs. cloudy). Likewise, irradiance varies with the seasonal changes in solar elevation
angle (Figure 7.4) and the changing distance between the earth and sun.
For a sensor positioned close to the earth’s surface, the path radiance Lp will
generally be small or negligible, because the atmospheric path length from the
surface to the sensor is too short for much scattering to occur. In contrast, ima-
gery from satellite systems will be more strongly affected by path radiance, due
to the longer atmospheric path between the earth’s surface and the spacecraft.
This can be seen in Figure 1.12, which compares two spectral response patterns
from the same area. One “signature” in this figure was collected using a handheld
field spectroradiometer (see Section 1.6 for discussion), from a distance of only a
few cm above the surface. The second curve shown in Figure 1.12 was collected
by the Hyperion hyperspectral sensor on the EO-1 satellite (hyperspectral systems
are discussed in Chapter 4, and the Hyperion instrument is covered in Chapter 5).
Due to the thickness of the atmosphere between the earth’s surface and the satel-
lite’s position above the atmosphere, this second spectral response pattern shows
an elevated signal at short wavelengths, due to the extraneous path radiance.
In its raw form, this near-surface measurement from the field spectro-
radiometer could not be directly compared to the measurement from the satellite,
because one is observing surface reflectance while the other is observing the so-
called top of atmosphere (TOA) reflectance. Before such a comparison could be
performed, the satellite image would need to go through a process of atmospheric
correction, in which the raw spectral data are modified to compensate for the
expected effects of atmospheric scattering and absorption. This process, discussed
in Chapter 7, generally does not produce a perfect representation of the spectral
response curve that would actually be observed at the surface itself, but it can pro-
duce a sufficiently close approximation to be suitable for many types of analysis.
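To make the idea of compensating for path radiance concrete, here is a minimal sketch of dark-object subtraction, one simple correction strategy among several discussed in Chapter 7; the pixel values, and the assumption that the darkest pixel should be near zero, are illustrative rather than the book’s prescribed method.

```python
# Sketch of dark-object subtraction, one simple first-order compensation for
# path radiance: assume the darkest pixel in a band should be near zero and
# treat its value as an additive atmospheric offset. Values are made-up
# digital numbers for a single band.
band = [52, 61, 120, 75, 58, 210, 64]    # raw at-sensor values

dark_object = min(band)                   # estimated path-radiance offset
corrected = [value - dark_object for value in band]
print(corrected)                          # [0, 9, 68, 23, 6, 158, 12]
```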
Readers who might be interested in obtaining additional details about the
concepts, terminology, and units used in radiation measurement may wish to
consult Appendix A.
Figure 1.13 Specular versus diffuse reflectance. (We are most often interested in measuring the diffuse
reflectance of objects.)
low solar angles. The effect is also compounded by differences in slope and aspect
(slope orientation) over terrain of varied relief.
Figure 1.15b illustrates the effect of differential atmospheric scattering. As dis-
cussed earlier, backscatter from atmospheric molecules and particles adds light
(path radiance) to that reflected from ground features. The sensor records more
atmospheric backscatter from area D than from area C due to this geometric
effect. In some analyses, the variation in this path radiance component is small
and can be ignored, particularly at long wavelengths. However, under hazy condi-
tions, differential quantities of path radiance often result in varied illumination
across an image.
As mentioned earlier, specular reflections represent the extreme in directional
reflectance. When such reflections appear, they can hinder analysis of the ima-
gery. This can often be seen in imagery taken over water bodies. Figure 1.15c
illustrates the geometric nature of this problem. Immediately surrounding point E
on the image, a considerable increase in brightness would result from specular
reflection. A photographic example of this is shown in Figure 1.16, which
includes areas of specular reflection from the right half of the large lake in the
center of the image. These mirrorlike reflections normally contribute little infor-
mation about the true character of the objects involved. For example, the small
water bodies just below the larger lake have a tone similar to that of some of the
fields in the area. Because of the low information content of specular reflections,
they are avoided in most analyses.
The most complete representation of an object’s geometric reflectance proper-
ties is the bidirectional reflectance distribution function (BRDF). This is a mathe-
matical description of how reflectance varies for all combinations of illumination
and viewing angles at a given wavelength (Schott, 2007). The BRDF for any given
feature can approximate that of a Lambertian surface at some angles and be non-
Lambertian at other angles. Similarly, the BRDF can vary considerably with
wavelength. A variety of mathematical models (including a provision for wave-
length dependence) have been proposed to represent the BRDF (Jupp and
Strahler, 1991).
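To make the notion of a BRDF concrete, the sketch below evaluates two idealized cases at a fixed wavelength, a perfectly diffuse (Lambertian) surface and a crude specular lobe; the functional forms and parameter values are assumptions for illustration, not models given in the text.

```python
import math

# Sketch: evaluating two idealized bidirectional reflectance behaviors at one
# wavelength. Angles are in radians; the functional forms and parameter values
# are illustrative, not models from the text.
def lambertian_brdf(albedo=0.3):
    """Perfectly diffuse surface: the same reflectance for every view angle."""
    return albedo / math.pi

def simple_specular_brdf(view_zenith, mirror_zenith=0.5, width=0.1, peak=5.0):
    """Crude specular lobe: strong reflectance only near the mirror direction."""
    return peak * math.exp(-((view_zenith - mirror_zenith) / width) ** 2)

for view in (0.0, 0.5, 1.0):
    print(view, lambertian_brdf(), simple_specular_brdf(view))
# The Lambertian value is constant; the specular value peaks at view = 0.5.
```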
Figure 1.16 Aerial photograph containing areas of specular reflection from water bodies. This image is
a portion of a summertime photograph taken over Green Lake, Green Lake County, WI. Scale 1:95,000.
Cloud shadows indicate direction of sunlight at time of exposure. Reproduced from color IR original.
(NASA image.)
Figure 1.17a shows graphic representations of the BRDF for three objects,
each of which can be visualized as being located at the point directly beneath the
center of one of the hemispheres. In each case, illumination is from the south
(located at the back right, in these perspective views). The brightness at any point
on the hemisphere indicates the relative reflectance of the object at a given view-
ing angle. A perfectly diffuse reflector (Figure 1.17a, top) has uniform reflectance
in all directions. At the other extreme, a specular reflector (bottom) has very high
reflectance in the direction directly opposite the source of illumination, and very
low reflectance in all other directions. An intermediate surface (middle) has
somewhat elevated reflectance at the specular angle but also shows some reflec-
tance in other directions.
Figure 1.17b shows a geometric reflectance pattern dominated by back-
scattering, in which reflectance is highest when viewed from the same direction
as the source of illumination. (This is in contrast to the intermediate and specular
examples from Figure 1.17a, in which forward scattering predominates.) Many
natural surfaces display this pattern of backscattering as a result of the differ-
ential shading (Figure 1.15a). In an image of a relatively uniform surface, there
may be a localized area of increased brightness (known as a “hotspot”), located
where the azimuth and zenith angles of the sensor are the same as those of the
Figure 1.17 (a) Visual representation of bidirectional reflectance patterns, for surfaces with Lambertian
(top), intermediate (middle), and specular (bottom) characteristics (after Campbell, 2002). (b) Simulated
bidirectional reflectance from an agricultural field, showing a “hotspot” when viewed from the
direction of solar illumination. (c) Differences in apparent reflectance in a field, when photographed
from the north (top) and the south (bottom). (Author-prepared figure.)
The existence of the hotspot is due to the fact that the sensor is then viewing only the sunlit portion of all objects in the area, without any shadowing.
An example of this type of hotspot is shown in Figure 1.17c. The two aerial
photographs shown were taken mere seconds apart, along a single north–south
flight line. The field delineated by the white box has a great difference in its apparent reflectance in the two images, despite the fact that no actual changes occurred on the ground during the short interval between the exposures. In the top photograph, the field was being viewed from the north, opposite the direction of solar illumination. Roughness of the field’s surface results in differential shading, with the camera viewing the shadowed side of each small variation in the field’s surface. In contrast, the bottom photograph was acquired from a point to the south of the field, from the same direction as the solar illumination (the hotspot), and thus appears quite bright.
To summarize, variations in bidirectional reflectance—such as specular reflection from a lake, or the hotspot in an agricultural field—can significantly affect the appearance of objects in remotely sensed images. These effects cause objects to appear brighter or darker solely as a result of the angular relationships among the sun, the object, and the sensor, without regard to any actual reflectance differences on the ground. Often, the impact of directional reflectance effects can be minimized by advance planning. For example, when photographing a lake when the sun is to the south and the lake’s surface is calm, it may be preferable to take the photographs from the east or west, rather than from the north, to avoid the sun’s specular reflection angle. However, the impact of varying bidirectional reflectance usually cannot be completely eliminated, and it is important for image analysts to be aware of this effect.
Regardless of the type of detector, the resulting data are generally recorded onto some magnetic or optical computer storage medium, such as a hard drive, memory card, solid-state storage unit, or optical disk. Although sometimes more complex and expensive than film-based systems, electronic sensors offer the advantages of a broader spectral range of sensitivity, improved calibration potential, and the ability to electronically store and transmit data.
In remote sensing, the term photograph historically was reserved exclusively for images that were detected as well as recorded on film. The more generic term image was adopted for any pictorial representation of image data. Thus, a pictorial record from a thermal scanner (an electronic sensor) would be called a “thermal image,” not a “thermal photograph,” because film would not be the original detection mechanism for the image. Because the term image relates to any pictorial product, all photographs are images. Not all images, however, are photographs.
A common exception to the above terminology is use of the term digital photography. As we describe in Section 2.5, digital cameras use electronic detectors rather than film for image detection. While this process is not “photography” in the traditional sense, “digital photography” is now the common way to refer to this technique of digital data collection.
We can see that the data interpretation aspects of remote sensing can involve ana-
lysis of pictorial (image) and/or digital data. Visual interpretation of pictorial image
data has long been the most common form of remote sensing. Visual techniques
make use of the excellent ability of the human mind to qualitatively evaluate spatial
patterns in an image. The ability to make subjective judgments based on selected
image elements is essential in many interpretation efforts. Later in this chapter, in
Section 1.12, we discuss the process of visual image interpretation in detail.
Visual interpretation techniques have certain disadvantages, however, in that
they may require extensive training and are labor intensive. In addition, spectral
characteristics are not always fully evaluated in visual interpretation efforts. This
is partly because of the limited ability of the eye to discern tonal values on an
image and the difficulty of simultaneously analyzing numerous spectral images.
In applications where spectral patterns are highly informative, it is therefore pre-
ferable to analyze digital, rather than pictorial, image data.
The basic character of digital image data is illustrated in Figure 1.18.
Although the image shown in (a) appears to be a continuous-tone photograph,
it is actually composed of a two-dimensional array of discrete picture elements,
or pixels. The intensity of each pixel corresponds to the average brightness, or
radiance, measured electronically over the ground area corresponding to each
pixel. A total of 500 rows and 400 columns of pixels are shown in Figure 1.18a.
Whereas the individual pixels are virtually impossible to discern in (a), they are
readily observable in the enlargements shown in (b) and (c). These enlargements
correspond to sub-areas located near the center of (a). A 100-row × 80-column enlargement is shown in (b), and a 10-row × 8-column enlargement is included in (c). Part (d) shows the individual digital number (DN)—also referred to as the brightness value—for each pixel shown in (c).
Figure 1.18 Basic character of digital image data. (a) Original 500-row × 400-column digital image. Scale 1:200,000. (b) Enlargement showing 100-row × 80-column area of pixels near center of (a). Scale 1:40,000. (c) 10-row × 8-column enlargement. Scale 1:4,000. (d) Digital numbers corresponding to the radiance of each pixel shown in (c). (Author-prepared figure.)
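To make the idea of a pixel grid concrete, the short sketch below (illustrative only; the array values are randomly generated rather than read from a real sensor file) treats a single-band image as a two-dimensional array of DNs and reads out the value at a given row and column:

import numpy as np

# Hypothetical 500-row x 400-column single-band image filled with random 8-bit DNs.
rows, cols = 500, 400
image = np.random.randint(0, 256, size=(rows, cols), dtype=np.uint8)

# The DN of any pixel is addressed by its (row, column) position.
r, c = 250, 200                     # a pixel near the center of the image
print("DN at row", r, "column", c, "is", image[r, c])

# A small sub-area enlargement, analogous to Figure 1.18c (10 rows x 8 columns).
subset = image[245:255, 196:204]
print(subset)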
Figure 1.19 Basic character of multi-band digital image data. (a) Each band is represented by a grid of
cells or pixels; any given pixel has a set of DNs representing its value in each band. (b) The spectral
signature for the pixel highlighted in (a), showing band number and wavelength on the X axis and pixel
DN on the Y axis. Values between the wavelengths of each spectral band, indicated by the dashed line
in (b), are not measured by this sensor and would thus be unknown.
of data within a single file. This format is referred to as band sequential (BSQ)
format. It has the advantage of simplicity, but it is often not the optimal choice
for efficient display and visualization of data, because viewing even a small por-
tion of the image requires reading multiple blocks of data from different “places”
on the computer disk. For example, to view a true-color digital image in BSQ for-
mat, with separate files used to store the red, green, and blue spectral bands, it
would be necessary for the computer to read blocks of data from three locations
on the storage medium.
An alternate method for storing multi-band data utilizes the band interleaved
by line (BIL) format. In this case, the image data file contains first a line of data
from band 1, then the same line of data from band 2, and so on for each subsequent band.
This block of data consisting of the first line from each band is then followed by
the second line of data from bands 1, 2, 3, and so forth.
The third common data storage format is band interleaved by pixel (BIP). This
is perhaps the most widely used format for three-band images, such as those from
most consumer-grade digital cameras. In this format, the file contains each band’s measurement for the first pixel, then each band’s measurement for the next pixel, and so on. The advantage of both BIL and BIP formats is that a computer can read and process the data for small portions of the image much more rapidly, because the data from all spectral bands are stored in closer proximity than in the BSQ format.
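The three interleaving schemes differ only in the order in which band, line, and pixel values are written to the file. The sketch below (a simplified illustration, assuming a tiny three-band image held entirely in memory; the dimensions are hypothetical) shows how the same data would be laid out in BSQ, BIL, and BIP order:

import numpy as np

# A tiny hypothetical image: 3 bands, 2 lines (rows), 4 pixels per line.
bands, lines, pixels = 3, 2, 4
data = np.arange(bands * lines * pixels).reshape(bands, lines, pixels)

# Band sequential (BSQ): all of band 1, then all of band 2, then band 3.
bsq = data.reshape(-1)                        # order: band, line, pixel

# Band interleaved by line (BIL): line 1 of every band, then line 2 of every band.
bil = data.transpose(1, 0, 2).reshape(-1)     # order: line, band, pixel

# Band interleaved by pixel (BIP): all bands for pixel 1, then all bands for pixel 2, ...
bip = data.transpose(1, 2, 0).reshape(-1)     # order: line, pixel, band

print(bsq, bil, bip, sep="\n")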
Typically, the DNs constituting a digital image are recorded over such numerical ranges as 0 to 255, 0 to 511, 0 to 1023, 0 to 2047, 0 to 4095 or higher. These ranges represent the set of integers that can be recorded using 8-, 9-, 10-, 11-, and 12-bit binary computer coding scales, respectively. (That is, 2⁸ = 256, 2⁹ = 512, 2¹⁰ = 1024, 2¹¹ = 2048, and 2¹² = 4096.) The technical term for the number of bits used to store digital image data is quantization level (or color depth, when used to describe the number of bits used to display a color image). As discussed in Chapter 7, with the appropriate calibration coefficients these integer DNs can be converted to more meaningful physical units such as spectral reflectance, radiance, or normalized radar cross section.
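As a numerical illustration, the sketch below computes the DN range implied by a given quantization level and applies a generic linear calibration of the form radiance = gain × DN + offset. The gain and offset values shown are hypothetical; actual coefficients are supplied with each sensor’s metadata, as discussed in Chapter 7.

# DN range for a given quantization level (number of bits).
for bits in (8, 9, 10, 11, 12):
    print(f"{bits}-bit data: DNs range from 0 to {2**bits - 1}")

# Generic linear conversion from DN to spectral radiance.
gain, offset = 0.037, -0.15      # hypothetical calibration coefficients
dn = 120
radiance = gain * dn + offset    # units depend on the sensor's calibration
print("DN", dn, "->", round(radiance, 3), "radiance units")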
Elevation Data
Figure 1.20 Representations of topographic data. (a) Portion of USGS 7.5-minute quadrangle map, showing elevation contours. Scale 1:45,000. (b) Digital elevation model, with brightness proportional to elevation. Scale 1:280,000. (c) Shaded-relief map derived from (b), with simulated illumination from the north. Scale 1:280,000. (d) Three-dimensional perspective view, with shading derived from (c). Scale varies in this projection. White rectangles in (b), (c), and (d) indicate area enlarged in (a). (Author-prepared figure.)
from the upper right portion of (b) to the lower center, with many tributary val-
leys branching off from each side.
Figure 1.20c shows another way of visualizing topographic data using shaded
relief. This is a simulation of the pattern of shading that would be expected from a
three-dimensional surface under a given set of illumination conditions. In this
case, the simulation includes a primary source of illumination located to the
north, with a moderate degree of diffuse illumination from other directions to
soften the intensity of the shadows. Flat areas will have uniform tone in a shaded
relief map. Slopes facing toward the simulated light source will appear bright,
while slopes facing away from the light will appear darker.
To aid in visual interpretation, it is often preferable to create shaded relief
maps with illumination from the top of the image, regardless of whether that is a
direction from which solar illumination could actually come in the real world.
When the illumination is from other directions, particularly from the bottom of the image, an untrained analyst may have difficulty correctly perceiving the landscape; in fact, the topography may appear inverted. (This effect is illustrated in Figure 1.29.)
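Shaded relief can be computed directly from a DEM. The sketch below is a minimal implementation of the common slope/aspect hillshade formulation, assuming a DEM supplied as a NumPy array with square cells; the DEM itself, the cell size, and the sign conventions (row 0 taken as the northern edge, azimuth measured clockwise from north) are illustrative assumptions, not values from the text.

import numpy as np

def hillshade(dem, cellsize=10.0, azimuth_deg=0.0, altitude_deg=45.0):
    # Shaded relief (0-255) from a DEM using slope, aspect, and a chosen sun position.
    az = np.radians(360.0 - azimuth_deg + 90.0)     # convert to math convention
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)       # elevation gradients along rows, columns
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0) * 255.0

# Hypothetical DEM: a smooth east-west ridge, 100 x 100 cells, illuminated from the north.
y, x = np.mgrid[0:100, 0:100]
dem = 50.0 * np.exp(-(((y - 50.0) / 20.0) ** 2))
relief = hillshade(dem, cellsize=10.0, azimuth_deg=0.0, altitude_deg=45.0)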
Figure 1.20d shows yet another method for visualizing elevation data, a three-dimensional perspective view. In this example, the shaded relief map shown in (c) has been “draped” over the DEM, and a simulated view has been created based on a viewpoint located at a specified position in space (in this case, above and to the south of the area shown). This technique can be used to visualize the appearance of a landscape as seen from some point of interest. It is possible to “drape” other types of imagery over a DEM; perspective views created using an aerial photograph or high-resolution satellite image may appear quite realistic. Animation of successive perspective views created along a user-defined flight line permits the development of simulated “fly-throughs” over an area.
The term “digital elevation model” or DEM can be used to describe any image where the pixel values represent elevation (Z) coordinates. Two common subcategories of DEMs are a digital terrain model (DTM) and a digital surface model (DSM). A DTM (sometimes referred to as a “bald-earth DEM”) records the elevation of the bare land surface, without any vegetation, buildings, or other features above the ground. In contrast, a DSM records the elevation of whatever the uppermost surface is at every location; this could be a tree crown, the roof of a building, or the ground surface (where no vegetation or structures are present). Each of these models has its appropriate uses. For example, a DTM would be useful for predicting runoff in a watershed after a rainstorm, because streams will flow over the ground surface rather than across the top of the forest canopy. In contrast, a DSM could be used to measure the size and shape of objects on the terrain, and to calculate intervisibility (whether a given point B can be seen from a reference point A).
Figure 1.21 compares a DSM and DTM for the same site, using airborne lidar
data from the Capitol Forest area in Washington State (Andersen, McGaughey,
and Reutebuch, 2005). In Figure 1.21a, the uppermost lidar points have been used
Figure 1.21 Airborne lidar data of the Capitol Forest site, Washington State. (a) Digital surface model
(DSM) showing tops of tree crowns and canopy gaps. (b) Digital terrain model (DTM) showing
hypothetical bare earth surface. (From Andersen et al., 2006; courtesy Ward Carlson, USDA Forest
Service PNW Research Station.)
to create a DSM showing the elevation of the upper surface of the forest canopy,
the presence of canopy gaps, and, in many cases, the shape of individual tree
crowns. In Figure 1.21b, the lowermost points have been used to create a DTM,
showing the underlying ground surface if all vegetation and structures were
removed. Note the ability to detect fine-scale topographic features, such as small gullies and roadcuts, even underneath a dense forest canopy (Andersen et al., 2006).
Plate 1 shows a comparison of a DSM (a) and DTM (b) for a wooded area in
New Hampshire. The models were derived from airborne lidar data acquired in
early December. This site is dominated by a mix of evergreen and deciduous tree
species, with the tallest (pines and hemlocks) exceeding 40 m in height. Scattered
clearings in the center and right side are athletic fields, parkland, and former ski
slopes now being taken over by shrubs and small trees. With obscuring vegetation
removed, the DTM in (b) shows a variety of glacial and post-glacial landforms, as
well as small roads, trails, and other constructed features. Also, by subtracting
the elevations in (b) from those in (a), it is possible to calculate the height of the
forest canopy above ground level at each point. The result, shown in (c), is refer-
red to as a canopy height model (CHM). In this model, the ground surface has
been flattened, so that all remaining variation represents differences in height of
the trees relative to the ground. Lidar and other high-resolution 3D data are
widely used for this type of canopy height analysis (Clark et al., 2004). (See
Sections 6.23 and 6.24 for more discussion.)
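Conceptually, the canopy height model is simply a cell-by-cell difference of the two surfaces. A minimal sketch (assuming co-registered DSM and DTM arrays of equal size; the elevation values are hypothetical):

import numpy as np

# Hypothetical co-registered 3 x 3 elevation grids, in meters above sea level.
dsm = np.array([[212.0, 231.5, 230.0],
                [210.5, 228.0, 233.5],
                [209.0, 209.5, 226.0]])   # uppermost surface (tree crowns, roofs)
dtm = np.array([[211.0, 210.5, 210.0],
                [209.5, 209.0, 208.5],
                [208.5, 208.0, 207.5]])   # bare-earth surface

chm = dsm - dtm          # canopy height model: feature height above ground
print(chm)               # near 0 m in clearings, roughly 20-25 m under tall trees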
Increasingly, elevation data are being used for analysis not just in the form of a highly processed DEM, but in the more basic form of a point cloud. A point cloud is simply a data set containing many three-dimensional point locations, each representing a single measurement of the (X, Y, Z) coordinates of an object or
surface. The positions, spacing, intensity, and other characteristics of the points
in this cloud can be analyzed using sophisticated 3D processing algorithms to
extract information about features (Rutzinger et al., 2008).
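A point cloud can be represented very simply as a table of (X, Y, Z) coordinates, often carrying per-point attributes such as return intensity. The sketch below uses a handful of hypothetical lidar points (real clouds contain millions of points and are normally read from LAS/LAZ files) and picks out the lowest point as a candidate ground return:

import numpy as np

# Hypothetical lidar point cloud: columns are X, Y, Z (meters) and return intensity.
points = np.array([
    [512003.1, 4276811.7, 223.4, 41],
    [512003.6, 4276812.0, 224.1, 38],
    [512004.2, 4276811.2, 201.9, 55],   # much lower Z: likely a ground return
    [512004.9, 4276812.4, 222.7, 36],
])

xyz = points[:, :3]
print("Number of points:", len(points))
print("Elevation range:", xyz[:, 2].min(), "to", xyz[:, 2].max(), "m")
print("Lowest point (candidate ground return):", xyz[xyz[:, 2].argmin()])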
Further discussion of the acquisition, visualization, and analysis of elevation
data, including DEMs and point clouds, can be found in Chapters 3 and 6, under
the discussion of photogrammetry, interferometric radar, and lidar systems.
Figure 1.22 ASD, Inc. FieldSpec Spectroradiometer: (a) the instrument; (b) instrument shown in field operation. (Courtesy ASD, Inc.)
Measured spectra can be displayed in real time, as can computed reflectance values within the wavelength bands of various satellite systems. In-field calculation of band ratios and other computed values is also possible. One such calculation might be the normalized difference vegetation index (NDVI), which relates the near-IR and visible reflectance of earth surface features (Chapter 7). Another option is matching measured spectra to a library of previously measured samples. The overall system is compatible with a number of post-processing software packages and also affords Ethernet, wireless, and GPS compatibility.
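As a simple numerical example of such an in-field computation, NDVI is the normalized difference of the near-infrared and red (visible) reflectances; the reflectance values used below are hypothetical.

def ndvi(nir, red):
    """Normalized difference vegetation index (see Chapter 7)."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances measured over a healthy crop canopy and over bare soil.
print(ndvi(nir=0.45, red=0.06))   # vigorous vegetation -> NDVI near +0.76
print(ndvi(nir=0.25, red=0.18))   # bare soil -> NDVI near +0.16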
Figure 1.23 shows a versatile all-terrain instrument platform designed
primarily for collecting spectral measurements in agricultural cropland envir-
onments. The system provides the high clearance necessary for making
measurements over mature row crops, and the tracked wheels allow access to
difficult landscape positions. Several measurement instruments can be sus-
pended from the system’s telescopic boom. Typically, these include a spectro-
radiometer, a remotely operated digital camera system, and a GPS receiver
(Section 1.7). While designed primarily for data collection in agricultural
fields, the long reach of the boom makes this device a useful tool for collecting
spectral data over such targets as emergent vegetation found in wetlands as
well as small trees and shrubs.
Using a spectroradiometer to obtain spectral reflectance measurements is normally a three-step process. First, the instrument is aimed at a calibration panel of known, stable reflectance. The purpose of this step is to quantify the
Figure 1.23 All-terrain instrument platform designed for collecting spectral measurements in agricultural cropland
environments. (Courtesy of the University of Nebraska-Lincoln Center for Advanced Land Management Information
Technologies.)
Currently, the U.S. Global Positioning System has only one operational counter-
part, the Russian GLONASS system. The full GLONASS constellation consists of
24 operational satellites, a number that was reached in October 2011. In addition, a
fully comprehensive European GNSS constellation, Galileo, is scheduled for com-
pletion by 2020 and will include 30 satellites. The data signals provided by Galileo
will be compatible with those from the U.S. GPS satellites, resulting in a greatly
increased range of options for GNSS receivers and significantly improved accuracy.
Finally, China has announced plans for the development of its own Compass GNSS
constellation, to include 30 satellites in operational use by 2020. The future for
these and similar systems is an extremely bright and rapidly progressing one.
The means by which GNSS signals are used to determine ground positions
is called satellite ranging. Conceptually, the process simply involves measuring
the time required for signals transmitted by at least four satellites to reach the
ground receiver. Knowing that the signals travel at the speed of light (3 × 10⁸ m/sec in a vacuum), the distance from each satellite to the receiver can be computed using a form of three-dimensional triangulation. In principle, the signals
from only four satellites are needed to identify the receiver’s location, but in prac-
tice it is usually desirable to obtain measurements from as many satellites as
practical.
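The core of satellite ranging is converting signal travel time to distance. A minimal sketch follows, using hypothetical travel times; in practice the receiver must also solve for its own clock bias, which is one reason a fourth satellite is required.

C = 299_792_458.0            # speed of light in a vacuum, m/s

# Hypothetical one-way signal travel times from four satellites, in seconds.
travel_times = [0.0718, 0.0753, 0.0691, 0.0742]

# Each range is simply travel time multiplied by the signal velocity.
ranges_km = [C * t / 1000.0 for t in travel_times]
print([round(r) for r in ranges_km])   # distances on the order of 20,000+ km

# With four or more such ranges and the known satellite positions, the receiver's
# (X, Y, Z) coordinates and its clock bias can be solved for simultaneously.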
GNSS measurements are potentially subject to numerous sources of error.
These include clock bias (caused by imperfect synchronization between the high-
precision atomic clocks present on the satellites and the lower-precision clocks
used in GNSS receivers), uncertainties in the satellite orbits (known as satellite
ephemeris errors), errors due to atmospheric conditions (signal velocity depends
on time of day, season, and angular direction through the atmosphere), receiver
errors (due to such influences as electrical noise and signal-matching errors), and multipath errors (reflection of a portion of the transmitted signal from objects not
in the straight-line path between the satellite and receiver).
Such errors can be compensated for (in great part) using differential GNSS
measurement methods. In this approach, simultaneous measurements are made
by a stationary base station receiver (located over a point of precisely known
position) and one (or more) roving receivers moving from point to point. The
positional errors measured at the base station are used to refine the position measured by the rover(s) at the same instant in time. This can be done either by bringing the data from the base and rover together in a post-processing mode after the field observations are completed or by instantaneously broadcasting
the base station corrections to the rovers. The latter approach is termed real-time
differential GNSS positioning.
In recent years, there have been efforts to improve the accuracy of GNSS
positioning through the development of regional networks of high-precision base
stations, generally referred to as satellite-based augmentation systems (SBAS). The
data from these stations are used to derive spatially explicit correction factors
that are then broadcast in real time, allowing advanced receiver units to deter-
mine their positions with a higher degree of accuracy. One such SBAS network,
the Wide Area Augmentation System (WAAS), consists of approximately 25 ground
reference stations distributed across the United States that continuously monitor
GPS satellite transmissions. Two main stations, located on the U.S. east and west
coasts, collect the data from the reference stations and create a composited cor-
rection message that is location speciûc. This message is then broadcast through
one of two geostationary satellites, satellites occupying a ûxed position over the
equator. Any WAAS-enabled GPS unit can receive these correction signals. The
GPS receiver then determines which correction data are appropriate at the cur-
rent location.
The WAAS signal reception is ideal for open land, aircraft, and marine appli-
cations, but the position of the relay satellites over the equator makes it difficult
to receive the signals at high latitudes or when features such as trees and
mountains obstruct the view of the horizon. In such situations, GPS positions can
sometimes actually contain more error with WAAS correction than without. How-
ever, in unobstructed operating conditions where a strong WAAS signal is avail-
able, positions are normally accurate to within 3 m or better.
Paralleling the deployment of the WAAS system in North America are the
Japanese Multi-functional Satellite Augmentation System (MSAS) in Asia, the
European Geostationary Navigation Overlay Service (EGNOS) in Europe, and pro-
posed future SBAS networks such as India’s GPS Aided Geo-Augmented Naviga-
tion (GAGAN) system. Like WAAS, these SBAS systems use geostationary
satellites to transmit data for real-time differential correction.
In addition to the regional SBAS real-time correction systems such as WAAS,
some nations have developed additional networks of base stations that can be
used for post-processing GNSS data for differential correction (i.e., high-accuracy
corrections made after data collection, rather than in real time). One such system
is the U.S. National Geodetic Survey’s Continuously Operating Reference Stations
(CORS) network. More than 1800 sites in the cooperative CORS network provide
GNSS reference data that can be accessed via the Internet and used in post-
processing for differential correction.
With the development of new satellite constellations, and new resources for
real-time and post-processed differential correction, GNSS-based location ser-
vices are expected to become even more widespread in industry, resource man-
agement, and consumer technology applications in the coming years.
Having introduced some basic concepts, we now have the elements necessary to
characterize a remote sensing system. In so doing, we can begin to appreciate
some of the problems encountered in the design and application of the various
sensing systems examined in subsequent chapters. In particular, the design and
operation of every real-world sensing system represents a series of compromises,
often in response to the limitations imposed by physics and by the current state of
technological development. When we consider the process from start to finish, users of remote sensing systems need to keep in mind the following factors:
1. The energy source. All passive remote sensing systems rely on energy that originates from sources other than the sensor itself, typically in the form of either reflected radiation from the sun or emitted radiation from earth surface features. As already discussed, the spectral distribution of reflected sunlight and self-emitted energy is far from uniform. Solar energy levels obviously vary with respect to time and location, and different earth surface materials emit energy with varying degrees of efficiency. While we have some control over the sources of energy for active systems such as radar and lidar, those sources have their own particular
Figure 1.24 Uninhabited aerial vehicles (UAVs) used for environmental applications of remote
sensing. (a) NASA’s Ikhana UAV, with imaging sensor in pod under left wing. (Photo courtesy
NASA Dryden Flight Research Center and Jim Ross.) (b) A vertical takeoff UAV mapping seagrass
and coral reef environments in Florida. (Photo courtesy Rick Holasek and NovaSol.)
1.9 SUCCESSFUL APPLICATION OF REMOTE SENSING
The student should now begin to appreciate that successful use of remote sensing
is premised on the integration of multiple, interrelated data sources and analysis
procedures. No single combination of sensor and interpretation procedure is
appropriate to all applications. The key to designing a successful remote sensing
effort involves, at a minimum, (1) clear definition of the problem at hand, (2) evaluation of the potential for addressing the problem with remote sensing techniques, (3) identification of the remote sensing data acquisition procedures appropriate to the task, (4) determination of the data interpretation procedures to be employed and the reference data needed, and (5) identification of the criteria by which the quality of information collected can be judged.
All too often, one (or more) of the above components of a remote sensing
application is overlooked. The result may be disastrous. Many programs exist
with little or no means of evaluating the performance of remote sensing systems
in terms of information quality. Many people have acquired burgeoning quan-
tities of remote sensing data with inadequate capability to interpret them. In
some cases an inappropriate decision to use (or not to use) remote sensing has
been made, because the problem was not clearly defined and the constraints or
opportunities associated with remote sensing methods were not clearly under-
stood. A clear articulation of the information requirements of a particular pro-
blem and the extent to which remote sensing might meet these requirements in a
timely manner is paramount to any successful application.
The success of many applications of remote sensing is improved considerably
by taking a multiple-view approach to data collection. This may involve multistage
sensing, wherein data about a site are collected from multiple altitudes. It may
involve multispectral sensing, whereby data are acquired simultaneously in sev-
eral spectral bands. Or, it may entail multitemporal sensing, where data about a
site are collected on more than one occasion.
In the multistage approach, satellite data may be analyzed in conjunction
with high altitude data, low altitude data, and ground observations (Figure 1.25).
Each successive data source might provide more detailed information over smal-
ler geographic areas. Information extracted at any lower level of observation may
then be extrapolated to higher levels of observation.
A commonplace example of the application of multistage sensing techniques is the detection, identification, and analysis of forest disease and insect problems. From space images, the image analyst could obtain an overall view of the major vegetation categories involved in a study area. Using this information, the areal extent and position of a particular species of interest could be determined and representative subareas could be studied more closely at a more refined stage of imaging. Areas exhibiting stress on the second-stage imagery could be delineated. Representative samples of these areas could then be field checked to document the presence and particular cause of the stress.
After analyzing the problem in detail by ground observation, the analyst
would use the remotely sensed data to extrapolate assessments beyond the small
study areas. By analyzing the large-area remotely sensed data, the analyst can
determine the severity and geographic extent of the disease problem. Thus, while
the question of specifically what the problem is can generally be evaluated only by
detailed ground observation, the equally important questions of where, how
much, and how severe can often be best handled by remote sensing analysis.
In short, more information is obtained by analyzing multiple views of the ter-
rain than by analysis of any single view. In a similar vein, multispectral imagery
provides more information than data collected in any single spectral band. When
the signals recorded in the multiple bands are analyzed in conjunction with each
other, more information becomes available than if only a single band were used
or if the multiple bands were analyzed independently. The multispectral approach
being used extensively in computer-based GISs (Section 1.10). The GIS environment permits the synthesis, analysis, and communication of virtually unlimited sources and types of biophysical and socioeconomic data—as long as they can be geographically referenced. Remote sensing can be thought of as the “eyes” of such systems, providing repeated, synoptic (even global) visions of earth resources from an aerial or space vantage point.
Remote sensing affords us the capability to literally see the invisible. We can begin to see components of the environment on an ecosystem basis, in that remote sensing data can transcend the cultural boundaries within which much of our current resource data are collected. Remote sensing also transcends disciplinary boundaries. It is so broad in its application that nobody “owns” the field. Important contributions are made to—and benefits derived from—remote sensing by both the “hard” scientist interested in basic research and the “soft” scientist interested in its operational application.
There is little question that remote sensing will continue to play an increas-
ingly broad and important role in the scientific, governmental, and commercial
sectors alike. The technical capabilities of sensors, space platforms, data commu-
nication and distribution systems, GPSs, digital image processing systems, and
GISs are improving on almost a daily basis. At the same time, we are witnessing
the evolution of a spatially enabled world society. Most importantly, we are
becoming increasingly aware of how interrelated and fragile the elements of our
global resource base really are and of the role that remote sensing can play in
inventorying, monitoring, and managing earth resources and in modeling and
helping us to better understand the global ecosystem and its dynamics.
TABLE 1.1 Example Point, Line, and Area Features and Typical Attributes Contained in a GIS
The data in a GIS may be kept in individual standalone files (e.g., “shapefiles”), but increasingly a geodatabase is used to store and manage spatial data. This is a type of relational database, consisting of tables with attributes in columns and data records in rows (Table 1.2), and explicitly including locational information for each record. While database implementations vary, there are certain desirable characteristics that will improve the utility of a database in a GIS. These characteristics include flexibility, to allow a wide range of database queries and operations; reliability, to avoid accidental loss of data; security, to limit access to authorized users; and ease of use, to insulate the end user from the details of the database implementation.
One of the most important benefits of a GIS is the ability to spatially inter-
relate multiple types of information stemming from a range of sources. This con-
cept is illustrated in Figure 1.26, where we have assumed that a hydrologist
wishes to use a GIS to study soil erosion in a watershed. As shown, the system
contains data from a range of source maps (a) that have been geocoded on a cell-
by-cell basis to form a series of layers (b), all in geographic registration. The ana-
lyst can then manipulate and overlay the information contained in, or derived
from, the various data layers. In this example, we assume that assessing the
potential for soil erosion throughout the watershed involves the simultaneous
cell-by-cell consideration of three types of data derived from the original data lay-
ers: slope, soil erodibility, and surface runoff potential. The slope information can
be computed from the elevations in the topography layer. The erodibility, which
is an attribute associated with each soil type, can be extracted from a relational
database management system incorporated in the GIS. Similarly, the runoff
potential is an attribute associated with each land cover type (land cover data can
Figure 1.26 GIS analysis procedure for studying potential soil erosion.
map of the entire nation. Another common constraint is that the compilation
dates of different source maps must be reasonably close in time. For example, a
GIS analysis of wildlife habitat might yield incorrect conclusions if it were based
on land cover data that are many years out of date. On the other hand, since other
types of spatial data are less changeable over time, the map compilation date
might not be as important for a layer such as topography or bedrock geology.
Most GISs use two primary approaches to represent the locational compo-
nent of geographic information: a raster (grid-based) or vector (point-based) for-
mat. The raster data model that was used in our soil erosion example is illustrated
in Figure 1.27b. In this approach, the location of geographic objects or conditions
is defined by the row and column position of the cells they occupy. The value stored for each cell indicates the type of object or condition that is found at that location over the entire cell. Note that the finer the grid cell size used, the more geographic specificity there is in the data file. A coarse grid requires less data sto-
rage space but will provide a less precise geographic description of the original
data. Also, when using a very coarse grid, several data types and/or attributes may
occur in each cell, but the cell is still usually treated as a single homogeneous unit
during analysis.
The vector data model is illustrated in Figure 1.27c. Using this format, feature
boundaries are converted to straight-sided polygons that approximate the original
regions. These polygons are encoded by determining the coordinates of their
vertices, called points or nodes, which can be connected to form lines or arcs.
Topological coding includes “intelligence” in the data structure relative to the
spatial relationship (connectivity and adjacency) among features. For example,
topological coding keeps track of which arcs share common nodes and what
polygons are to the left and right of a given arc. This information facilitates such
spatial operations as overlay analysis, buffering, and network analysis.
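The contrast between the two data models can be made concrete with a small sketch (the coordinates, codes, and feature name below are hypothetical): the same lake is stored once as a grid of cell codes and once as a polygon defined by the coordinates of its vertices.

import numpy as np

# Raster model: each cell holds a code for the cover type occupying it
# (0 = upland, 1 = water). Location is implicit in row/column position.
lake_raster = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
])

# Vector model: the lake boundary is an ordered list of (X, Y) vertices forming
# a closed polygon; attributes are stored with the feature rather than per cell.
lake_polygon = {
    "attributes": {"cover": "water", "name": "hypothetical lake"},
    "vertices": [(10.0, 40.0), (30.0, 45.0), (38.0, 30.0),
                 (25.0, 18.0), (12.0, 25.0), (10.0, 40.0)],
}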
Raster and vector data models each have their advantages and disadvantages.
Raster systems tend to have simpler data structures; they afford greater computa-
tional efficiency in such operations as overlay analysis; and they represent features having high spatial variability and/or “blurred boundaries” (e.g., between
pure and mixed vegetation zones) more effectively. On the other hand, raster data
volumes are relatively greater; the spatial resolution of the data is limited to the
size of the cells comprising the raster; and the topological relationships among
spatial features are more difficult to represent. Vector data formats have the
advantages of relatively lower data volumes, better spatial resolution, and the pre-
servation of topological data relationships (making such operations as network
analysis more efficient). However, certain operations (e.g., overlay analysis) are
more complex computationally in a vector data format than in a raster format.
As we discuss frequently throughout this book, digital remote sensing images
are collected in a raster format. Accordingly, digital images are inherently compa-
tible spatially with other sources of information in a raster domain. Because
of this, “raw” images can be easily included directly as layers in a raster-based GIS. Likewise, such image processing procedures as automated land cover classification (Chapter 7) result in the creation of interpreted or derived data files in a
raster format. These derived data are again inherently compatible with the other
sources of data represented in a raster format. This concept is illustrated in
Plate 2, in which we return to our earlier example of using overlay analysis to
assist in soil erosion potential mapping for an area in western Dane County, Wis-
consin. Shown in (a) is an automated land cover classification that was produced
by processing Landsat Thematic Mapper (TM) data of the area. (See Chapter 7 for
additional information on computer-based land cover classiûcation.) To assess
the soil erosion potential in this area, the land cover data were merged with infor-
mation on the intrinsic erodibility of the soil present (b) and with land surface
slope information (c). These latter forms of information were already resident in a
GIS covering the area. Hence, all data could be combined for analysis in a mathe-
matical model, producing the soil erosion potential map shown in (d). To assist
the viewer in interpreting the landscape patterns shown in Plate 2, the GIS was
also used to visually enhance the four data sets with topographic shading based
on a DEM, providing a three-dimensional appearance.
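In raster form, such an overlay reduces to cell-by-cell arithmetic or rule evaluation across co-registered layers. The sketch below uses hypothetical class codes and a deliberately simplified scoring rule (not the actual model used to produce Plate 2) to combine slope, soil erodibility, and land cover runoff potential into a single erosion-potential score.

import numpy as np

# Three hypothetical co-registered layers, each 3 x 3 cells.
slope_pct   = np.array([[ 2, 12, 25],
                        [ 5, 18, 30],
                        [ 1,  8, 22]])          # land-surface slope, percent
erodibility = np.array([[1, 2, 3],
                        [1, 2, 3],
                        [1, 1, 2]])             # soil erodibility class (1 low to 3 high)
runoff      = np.array([[1, 2, 3],
                        [1, 3, 3],
                        [1, 1, 2]])             # runoff potential from land cover class

# Simplified rule: rescale slope to a 1-3 class, then sum the three factors.
slope_class = np.digitize(slope_pct, bins=[10, 20]) + 1   # <10 -> 1, 10-20 -> 2, >20 -> 3
erosion_potential = slope_class + erodibility + runoff    # 3 (lowest) to 9 (highest)
print(erosion_potential)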
For the land cover classification in Plate 2a, water is shown as dark blue, non-
forested wetlands as light blue, forested wetlands as pink, corn as orange, other
row crops as pale yellow, forage crops as olive, meadows and grasslands as
yellow-green, deciduous forest as green, evergreen forest as dark green, low-
intensity urban areas as light gray, and high-intensity urban areas as dark gray. In
(b), areas of low soil erodibility are shown in dark brown, with increasing soil
erodibility indicated by colors ranging from orange to tan. In (c), areas of increas-
ing steepness of slope are shown as green, yellow, orange, and red. The soil
erosion potential map (d) shows seven colors depicting seven levels of potential
soil erosion. Areas having the highest erosion potential are shown in dark red.
These areas tend to have row crops growing on inherently erodible soils with
sloping terrain. Decreasing erosion potential is shown in a spectrum of colors
from orange through yellow to green. Areas with the lowest erosion potential are
1.11 SPATIAL DATA FRAMEWORKS FOR GIS AND REMOTE SENSING
If one is examining an image purely on its own, with no reference to any outside
source of spatial information, there may be no need to consider the type of coor-
dinate system used to represent locations within the image. In many cases, how-
ever, analysts will be comparing points in the image to GPS-located reference
data, looking for differences between two images of the same area, or importing
an image into a GIS for quantitative analysis. In all these cases, it is necessary
to know how the column and row coordinates of the image relate to some real-
world map coordinate system.
Because the shape of the earth is approximately spherical, locations on the
earth’s surface are often described in an angular coordinate or geographical sys-
tem, with latitude and longitude specified in degrees (°), minutes (′), and seconds (″). This system originated in ancient Greece, and it is familiar to many people today. Unfortunately, the calculation of distances and areas in an angular coordinate system is complex. More significantly, it is impossible to accurately represent the three-dimensional surface of the earth on the two-dimensional planar surface of a map or image without introducing distortion in one or more of the following elements: shape, size, distance, and direction. Thus, for many purposes we wish to mathematically transform angular geographical coordinates into a planar, or Cartesian (X–Y) coordinate system. The result of this transformation process is
referred to as a map projection.
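As a simple worked example of such a transformation, the spherical Mercator projection (a cylindrical, conformal projection) maps latitude and longitude to planar X–Y coordinates with two short formulas. The sketch below assumes a purely spherical earth of radius 6,378,137 m, so the numbers are illustrative rather than survey-grade; the example coordinates are approximate.

import math

R = 6378137.0                     # earth radius assumed by this spherical sketch, m

def mercator(lat_deg, lon_deg):
    """Project geographic coordinates (degrees) to planar X, Y (meters)."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = R * lam
    y = R * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# Example: a point near Madison, Wisconsin (approximate coordinates).
print(mercator(43.07, -89.40))    # roughly (-9.95e6 m, 5.32e6 m)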
While many types of map projections have been defined, they can be grouped
into several broad categories based either on the geometric models used or on
the spatial properties that are preserved or distorted by the transformation.
Geometric models for map projection include cylindrical, conic, and azimuthal
or planar surfaces. From a map user’s perspective, the spatial properties of
map projections may be more important than the geometric model used. A con-
formal map projection preserves angular relationships, or shapes, within local
areas; over large areas, angles and shapes become distorted. An azimuthal (or
zenithal) projection preserves absolute directions relative to the central point of
projection. An equidistant projection preserves equal distances, for some but not
all points—scale is constant either for all distances along meridians or for all dis-
tances from one or two points. An equal-area (or equivalent) projection preserves
equal areas. Because a detailed explanation of the relationships among these
properties is beyond the scope of this discussion, suffice it to say that no two-
dimensional map projection can accurately preserve all of these properties, but
certain subsets of these characteristics can be preserved in a single projection.
For example, the azimuthal equidistant projection preserves both direction and
distance—but only relative to the central point of the projection; directions and
distances between other points are not preserved.
In addition to the map projection associated with a given image, GIS data
layer, or other spatial data set, it is also often necessary to consider the datum
used with that map projection. A datum is a mathematical definition of the three-dimensional solid (generally a slightly flattened ellipsoid) used to represent the surface of the earth. The actual planet itself has an irregular shape that does not correspond perfectly to any ellipsoid. As a result, a variety of different datums have been described; some designed to fit the surface well in one particular region (such as the North American Datum of 1983, or NAD83) and others designed to best approximate the planet as a whole. Most GISs require that both a map projection and a datum be specified before performing any coordinate
transformations.
To apply these concepts to the process of collecting and working with remotely sensed images, most such images are initially acquired with rows and columns of pixels aligned with the flight path (or orbit track) of the imaging platform, be it a satellite, an aircraft, or a UAV. Before the images can be mapped, or used in combination with other spatial data, they need to be georeferenced. Historically, this process typically involved identification of visible control points whose true geographic coordinates were known. A mathematical model would then be used to transform the row and column coordinates of the raw image into a defined map coordinate system. In recent years, remote sensing platforms have been outfitted with sophisticated systems to record their exact position and angular orientation. These systems, incorporating an inertial measurement unit (IMU) and/or multiple onboard GPS units, enable highly precise modeling of the viewing geometry of the sensor, which in turn is used for direct georeferencing of the sensor data—relating them to a defined map projection without the necessity of additional ground control points.
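In the control-point approach, the transformation from image (column, row) coordinates to map (X, Y) coordinates is often modeled, to first order, as an affine transformation whose six coefficients are fit to the control points. A minimal sketch follows; the coefficients shown are hypothetical values for a north-up image with 30 m pixels, not parameters of any particular data set.

def image_to_map(col, row, a=30.0, b=0.0, c=0.0, d=-30.0, x0=500000.0, y0=4800000.0):
    """First-order (affine) georeferencing: map X, Y from image column, row.
    In practice the six coefficients are estimated by least squares from ground
    control points, or derived from direct georeferencing of the platform."""
    x = x0 + a * col + b * row
    y = y0 + c * col + d * row
    return x, y

print(image_to_map(0, 0))        # upper-left pixel -> (500000.0, 4800000.0)
print(image_to_map(400, 500))    # lower-right area -> (512000.0, 4785000.0)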
Once an image has been georeferenced, it may be ready for use with other
spatial information. On the other hand, some images may have further geometric
distortions, perhaps caused by varying terrain, or other factors. To remove these
distortions, it may be necessary to orthorectify the imagery, a process discussed
in Chapters 3 and 7.
1.12 VISUAL IMAGE INTERPRETATION
When we look at aerial and space images, we see various objects of different sizes,
shapes, and colors. Some of these objects may be readily identifiable while others
may not, depending on our own individual perceptions and experience. When we can
identify what we see on the images and communicate this information to others, we
are practicing visual image interpretation. The images contain raw image data. These
data, when processed by a human interpreter’s brain, become usable information.
Image interpretation is best learned through the experience of viewing hun-
dreds of remotely sensed images, supplemented by a close familiarity with the
environment and processes being observed. Given this fact, no textbook alone can
fully train its readers in image interpretation. Nonetheless, Chapters 2 through 8
of this book contain many examples of remote sensing images, examples that we
hope our readers will peruse and interpret. To aid in that process, the remainder
of this chapter presents an overview of the principles and methods typically
employed in image interpretation.
Aerial and space images contain a detailed record of features on the ground at
the time of data acquisition. An image interpreter systematically examines the ima-
ges and, frequently, other supporting materials such as maps and reports of field
observations. Based on this study, an interpretation is made as to the physical nat-
ure of objects and phenomena appearing in the images. Interpretations may take
place at a number of levels of complexity, from the simple recognition of objects on
the earth’s surface to the derivation of detailed information regarding the complex
interactions among earth surface and subsurface features. Success in image inter-
pretation varies with the training and experience of the interpreter, the nature of
the objects or phenomena being interpreted, and the quality of the images being
utilized. Generally, the most capable image interpreters have keen powers of obser-
vation coupled with imagination and a great deal of patience. In addition, it is
important that the interpreter have a thorough understanding of the phenomenon
being studied as well as knowledge of the geographic region under study.
this challenge continues to be mitigated by the extensive use of aerial and space
imagery in such day-to-day activities as navigation, GIS applications, and weather
forecasting.
A systematic study of aerial and space images usually involves several basic
characteristics of features shown on an image. The exact characteristics useful
for any speciûc task and the manner in which they are considered depend on the
ûeld of application. However, most applications consider the following basic
characteristics, or variations of them: shape, size, pattern, tone (or hue), texture,
shadows, site, association, and spatial resolution (Olson, 1960).
Shape refers to the general form, configuration, or outline of individual objects. In the case of stereoscopic images, the object’s height also defines its shape. The shape of some objects is so distinctive that their images may be identified solely from this criterion. The Pentagon building near Washington, DC, is a classic example. Not all shapes are this diagnostic, of course, but every shape is of some significance to the image interpreter.
Size of objects on images must be considered in the context of the image
scale. A small storage shed, for example, might be misinterpreted as a barn if size
were not considered. Relative sizes among objects on images of the same scale
must also be considered.
Pattern relates to the spatial arrangement of objects. The repetition of certain
general forms or relationships is characteristic of many objects, both natural
and constructed, and gives objects a pattern that aids the image interpreter in
recognizing them. For example, the ordered spatial arrangement of trees in an
orchard is in distinct contrast to that of natural forest tree stands.
Tone (or hue) refers to the relative brightness or color of objects on an image.
Figure 1.8 showed how relative photo tones could be used to distinguish between
deciduous and coniferous trees on black and white infrared photographs. Without
differences in tone or hue, the shapes, patterns, and textures of objects could not
be discerned.
Texture is the frequency of tonal change on an image. Texture is produced by
an aggregation of unit features that may be too small to be discerned individually
on the image, such as tree leaves and leaf shadows. It is a product of their indivi-
dual shape, size, pattern, shadow, and tone. It determines the overall visual “smoothness” or “coarseness” of image features. As the scale of the image is reduced, the texture of any given object or area becomes progressively finer and ultimately disappears. An interpreter can often distinguish between features with similar reflectances based on their texture differences. An example would be
tree crowns on medium-scale airphotos.
Shadows are important to interpreters in two opposing respects: (1) The
shape or outline of a shadow affords an impression of the profile view of objects (which aids interpretation) and (2) objects within shadows reflect little light and are difficult to discern on an image (which hinders interpretation). For example,
the shadows cast by various tree species or cultural features (bridges, silos,
towers, poles, etc.) can definitely aid in their identification on airphotos. In some cases, the shadows cast by large animals can aid in their identification. Figure 1.28 is a large-scale aerial photograph taken under low sun angle conditions that shows camels and their shadows in Saudi Arabia. Note that the camels themselves can be seen at the “base” of their shadows. Without the shadows, the animals could be counted, but identifying them specifically as camels could be difficult. Also, the
shadows resulting from subtle variations in terrain elevations, especially in the
case of low sun angle images, can aid in assessing natural topographic variations
that may be diagnostic of various geologic landforms.
As a general rule, the shape of the terrain is more easily interpreted when sha-
dows fall toward the observer. This is especially true when images are examined
monoscopically, where relief cannot be seen directly, as it can be in stereoscopic
images. In Figure 1.29a, a large ridge with numerous side valleys can be seen in
the center of the image. When this image is inverted (i.e., turned such that the
shadows fall away from the observer), as in (b), the result is a confusing image
that almost seems to have a valley of sorts running through the center of the
image (from bottom to top). This arises because one “expects” light sources to generally be above objects (ASPRS, 1997, p. 73). The orientation of shadows with
respect to the observer is less important for interpreting images of buildings,
trees, or animals (as in Figure 1.28) than for interpreting the terrain.
Site refers to topographic or geographic location and is a particularly impor-
tant aid in the identiûcation of vegetation types. For example, certain tree species
would be expected to occur on well-drained upland sites, whereas other tree spe-
cies would be expected to occur on poorly drained lowland sites. Also, various
tree species occur only in certain geographic areas (e.g., redwoods occur in
California, but not in Indiana).
Figure 1.28 Vertical aerial photograph showing camels that cast long
shadows under a low sun angle in Saudi Arabia. Black-and-white
rendition of color original. (© George Steinmetz/Corbis.)
Figure 1.29 Photograph illustrating the effect of shadow direction on the interpretability of terrain. Island
of Kauai, Hawaii, mid-January. Scale 1:48,000. (a) Shadows falling toward observer, (b) same image turned
such that shadows are falling away from observer. (Courtesy USDA–ASCS panchromatic photograph.)
Figure 1.30 Aerial photographic subscenes illustrating the elements of image interpretation, Detroit
Lakes area, Minnesota, mid-October. (a) Portion of original photograph of scale 1:32,000; (b) and (c)
enlarged to a scale of 1:4,600; (d) enlarged to a scale of 1:16,500; (e) enlarged to a scale of 1:25,500.
North is to the bottom of the page. (Courtesy KBM, Inc.)
the highway and the parking spaces of the theater). In addition to the curved rows
of the parking area, the pattern also shows the projection building and the screen.
The identiûcation of the screen is aided by its shadow. It is located in association
with a divided highway, which is accessed by a short roadway.
Many different land cover types can be seen in Figure 1.30c. Immediately
noticeable in this photograph, at upper left, is a feature with a superficially similar
appearance to the drive-in theater. Careful examination of this feature, and the sur-
rounding grassy area, leads to the conclusion that this is a baseball diamond. The
trees that can be seen in numerous places in the photograph are casting shadows of
their trunks and branches because the mid-October date of this photograph is a time
when deciduous trees are in a leaf-off condition. Seen in the right one-third of the
photograph is a residential area. Running top to bottom through the center of the
image is a commercial area with buildings that have a larger size than the houses in
the residential area and large parking areas surrounding these larger buildings.
Figure 1.30d shows two major linear features. Near the bottom of the photo-
graph is a divided highway. Running diagonally from upper left to lower right is
an airport runway 1390 m long (the scale of this figure is 1:16,500, and the length
of this linear feature is 8.42 cm at this scale). The terminal area for this airport is
near the bottom center of Figure 1.30d.
Figure 1.30e illustrates natural versus constructed features. The water body at
a is a natural feature, with an irregular shoreline and some surrounding wetland
areas (especially visible at the narrow end of the lake). The water body at b is part
of a sewage treatment plant; the “shoreline” of this feature has unnaturally
straight sections in comparison with the water body shown at a.
The image interpretation process can often be facilitated through the use of image
interpretation keys. Keys can be valuable training aids for novice interpreters and
provide useful reference or refresher materials for more experienced interpreters.
An image interpretation key helps the interpreter evaluate the information pre-
sented on aerial and space images in an organized and consistent manner. It pro-
vides guidance about the correct identification of features or conditions on the
images. Ideally, a key consists of two basic parts: (1) a collection of annotated or
captioned images (preferably stereopairs) illustrative of the features or conditions
to be identiûed and (2) a graphic or word description that sets forth in some sys-
tematic fashion the image recognition characteristics of those features or condi-
tions. Two general types of image interpretation keys exist, differentiated by the
method of presentation of diagnostic features. A selective key contains numerous
example images with supporting text. The interpreter selects the example that
most nearly resembles the feature or condition found on the image under study.
An elimination key is arranged so that the interpretation proceeds step by step
from the general to the specific and leads to the elimination of all features or condi-
tions except the one being identiûed. Elimination keys often take the form of dichot-
omous keys where the interpreter makes a series of choices between two alternatives
and progressively eliminates all but one possible answer. Figure 1.31 shows a
dichotomous key prepared for the identiûcation of fruit and nut crops in the Sacra-
mento Valley, California. The use of elimination keys can lead to more positive
Figure 1.31 Dichotomous airphoto interpretation key to fruit and nut crops in the Sacramento Valley,
CA, designed for use with 1:6,000 scale panchromatic aerial photographs. (Adapted from American
Society of Photogrammetry, 1983. Copyright © 1975, American Society of Photogrammetry.
Reproduced with permission.)
68 CHAPTER 1 CONCEPTS AND FOUNDATIONS OF REMOTE SENSING
answers than selective keys but may result in erroneous answers if the interpreter is
forced to make an uncertain choice between two unfamiliar image characteristics.
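To make the structure of an elimination key more concrete, the following short Python sketch encodes a dichotomous key as a set of nested two-way choices and walks it with an interpreter's answers. The questions and crop names are hypothetical placeholders for illustration only; they are not the actual criteria or classes of Figure 1.31.

# Minimal sketch of a dichotomous elimination key: each node poses a two-way
# choice about an image characteristic, and each branch leads either to another
# question or to a final identification. Questions and classes are hypothetical.
key = {
    "crowns planted in regular rows?": {
        "yes": {"crown texture fine?": {"yes": "orchard crop A",
                                        "no": "orchard crop B"}},
        "no": "natural woodland",
    }
}

def identify(node, answers):
    """Walk the key, consuming one yes/no answer per question."""
    if isinstance(node, str):            # reached a leaf: the identification
        return node
    question, branches = next(iter(node.items()))
    return identify(branches[answers.pop(0)], answers)

print(identify(key, ["yes", "no"]))      # -> 'orchard crop B'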
As a generalization, keys are more easily constructed and more reliably uti-
lized for cultural feature identification (e.g., houses, bridges, roads, water towers) than for vegetation or landform identification. However, a number of keys have been successfully employed for agricultural crop identification and tree species identification. Such keys are normally developed and used on a region-by-region and
season-by-season basis because the appearance of vegetation can vary widely with
location and season.
Wavelengths of Sensing
The band(s) of the electromagnetic energy spectrum selected for aerial and space
imaging affects the amount of information that can be interpreted from the ima-
ges. Numerous examples of this are scattered throughout this book. The general
concepts of multiband imagery were discussed in Section 1.5. To explain how the
combinations of colors shown in an image relate to the various bands of data
recorded by a sensor, we next turn our attention to the principles of color, and
how combinations of colors are perceived.
Color is among the most important elements of image interpretation. Many fea-
tures and phenomena in an image can best be identified and interpreted through
examination of subtle differences in color. As discussed in Section 1.5, multi-
wavelength remote sensing images may be displayed in either true- or false-color
combinations. Particularly for interpreting false-color imagery, an understanding
of the principles of color perception and color mixing is essential.
Light falling on the retina of the human eye is sensed by rod and cone cells.
There are about 130 million rod cells, and they are 1000 times more light sensi-
tive than the cone cells. When light levels are low, human vision relies on the rod
cells to form images. All rod cells have the same wavelength sensitivity, which
peaks at about 0.55 μm. Therefore, human vision at low light levels is monochro-
matic. It is the cone cells that determine the colors the eye sees. There are about
7 million cone cells: some sensitive to blue energy, some to green energy, and
some to red energy. The trichromatic theory of color vision explains that when the
blue-sensitive, green-sensitive, and red-sensitive cone cells are stimulated by dif-
ferent amounts of light, we perceive color. When all three types of cone cells are
stimulated equally, we perceive white light. Other theories of color vision have
been proposed. The opponent process theory of color vision hypothesizes that color vision
involves three mechanisms, each responding to a pair of so-called opposites:
white–black, red–green, and blue–yellow. This theory is based on many psycho-
physical observations and states that colors are formed by a hue cancellation
method. The hue cancellation method is based on the observation that when certain
colors are mixed together, the resulting colors are not what would be intuitively
expected. For example, when red and green are mixed together, they produce yel-
low, not reddish green. (For further information, see Robinson et al., 1995.)
In the remainder of this discussion, we focus on the trichromatic theory of
color vision. Again, this theory is based on the concept that we perceive all colors
by synthesizing various amounts of just three (blue, green, and red).
Blue, green, and red are termed additive primaries. Plate 3a shows the effect
of projecting blue, green, and red light in partial superimposition. Where all three
beams overlap, the visual effect is white because all three of the eyes’ receptor sys-
tems are stimulated equally. Hence, white light can be thought of as the mixture
of blue, green, and red light. Various combinations of the three additive primaries
can be used to produce other colors. As illustrated, when red light and green light
are mixed, yellow light is produced. Mixture of blue and red light results in the
production of magenta light (bluish-red). Mixing blue and green results in cyan
light (bluish-green).
Yellow, magenta, and cyan are known as the complementary colors, or comple-
ments, of blue, green, and red light. Note that the complementary color for any
given primary color results from mixing the remaining two primaries.
Like the eye, color television and computer monitors operate on the principle
of additive color mixing through use of blue, green, and red elements on the
screen. When viewed at a distance, the light from the closely spaced screen ele-
ments forms a continuous color image.
Whereas color television and computer monitors simulate different colors
through additive mixture of blue, green, and red lights, color film photography is
based on the principle of subtractive color mixture using superimposed yellow,
magenta, and cyan dyes. These three dye colors are termed the subtractive primaries,
and each results from subtracting one of the additive primaries from white light.
That is, yellow dye absorbs the blue component of white light. Magenta dye absorbs
the green component of white light. Cyan dye absorbs the red component of white
light.
The subtractive color-mixing process is illustrated in Plate 3b. This plate
shows three circular filters being held in front of a source of white light. The filters contain yellow, magenta, and cyan dye. The yellow dye absorbs blue light
from the white background and transmits green and red. The magenta dye
absorbs green light and transmits blue and red. The cyan dye absorbs red light
and transmits blue and green. The superimposition of magenta and cyan dyes
results in the passage of only blue light from the background. This comes about
because the magenta dye absorbs the green component of the white background,
and the cyan dye absorbs the red component. Superimposition of the yellow and
cyan dyes results in the perception of green. Likewise, superimposition of yellow
and magenta dyes results in the perception of red. Where all three dyes overlap,
all light from the white background is absorbed and black results.
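The additive and subtractive processes described above can be summarized numerically. The short Python sketch below treats colors as (red, green, blue) triples between 0 and 1; it is a simplified illustration of the color-mixing logic, not a radiometrically exact model.

# Simplified sketch of additive vs. subtractive color mixing.
def additive_mix(*lights):
    """Additive mixing: projected lights sum (clipped to 1) in each primary."""
    return tuple(min(1.0, sum(light[i] for light in lights)) for i in range(3))

def subtractive_mix(white, *dyes):
    """Subtractive mixing: each dye transmits only a fraction of each primary."""
    out = list(white)
    for dye in dyes:                       # dye = fraction transmitted per primary
        out = [o * d for o, d in zip(out, dye)]
    return tuple(out)

red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(additive_mix(red, green))            # (1.0, 1.0, 0): yellow
print(additive_mix(red, green, blue))      # (1.0, 1.0, 1.0): white

white = (1, 1, 1)
yellow_dye  = (1, 1, 0)                    # absorbs blue
magenta_dye = (1, 0, 1)                    # absorbs green
cyan_dye    = (0, 1, 1)                    # absorbs red
print(subtractive_mix(white, magenta_dye, cyan_dye))               # (0, 0, 1): only blue passes
print(subtractive_mix(white, yellow_dye, magenta_dye, cyan_dye))   # (0, 0, 0): black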
In color film photography, and in color printing, various proportions of yel-
low, magenta, and cyan dye are superimposed to control the proportionate
amount of blue, green, and red light that reaches the eye. Hence, the subtractive
mixture of yellow, magenta, and cyan dyes on a photograph is used to control
the additive mixture of blue, green, and red light reaching the eye of the observer.
To accomplish this, color film is manufactured with three emulsion layers that
are sensitive to blue, green, and red light but contain yellow, magenta, and cyan
dye after processing (see Section 2.4).
In digital color photography, the photosites in the detector array are typically
covered with a blue, green, or red filter, resulting in independent recording of the
three additive primary colors (see Section 2.5).
When interpreting color images, the analyst should keep in mind the relation-
ship between the color of a feature in the imagery, the color mixing process that
would produce that color, and the sensor’s wavelength ranges that are assigned to
the three primary colors used in that mixing process. It is then possible to work
backwards to infer the spectral properties of the feature on the landscape. For
example, if a feature in a false-color image has a yellow hue when displayed on a
computer monitor, that feature can be assumed to have a relatively high reflectance in the wavelengths that are being displayed in the monitor's red and green color planes, and a relatively low reflectance in the wavelength displayed in blue (because yellow results from the additive combination of red plus green light). Knowing the spectral sensitivity of each of the sensor's spectral bands, the analyst can interpret the color as a spectral response pattern (see Section 1.4) that is characterized by high reflectance at two wavelength ranges and low reflectance at
a third. The analyst can then use this information to draw inferences about the
nature and condition of the feature shown in the false-color image.
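A minimal sketch of this backwards reasoning, assuming for illustration a common false-color assignment in which near-infrared, red, and green bands are displayed in the red, green, and blue color planes (the band assignment and the 0.5 threshold are our assumptions, not a fixed convention):

# Sketch: map the brightness of each display color plane back to the sensor
# band assigned to that plane. Band names are illustrative assumptions.
band_to_plane = {"red": "near-IR band", "green": "red band", "blue": "green band"}

def infer_spectral_response(displayed_rgb):
    """Label each assigned band 'high' or 'low' from its display-plane value (0-1)."""
    inference = {}
    for plane, value in zip(("red", "green", "blue"), displayed_rgb):
        inference[band_to_plane[plane]] = "high" if value > 0.5 else "low"
    return inference

# A yellow feature on the monitor: high red and green planes, low blue plane.
print(infer_spectral_response((0.9, 0.85, 0.1)))
# -> {'near-IR band': 'high', 'red band': 'high', 'green band': 'low'}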
The temporal aspects of natural phenomena are important for image interpreta-
tion because such factors as vegetative growth and soil moisture vary during the
year. For crop identification, more positive results can be achieved by obtaining
images at several times during the annual growing cycle. Observations of local
vegetation emergence and recession can aid in the timing of image acquisition for
natural vegetation mapping. In addition to seasonal variations, weather can cause
significant short-term changes. Because soil moisture conditions may change dra-
matically during the day or two immediately following a rainstorm, the timing of
image acquisition for soil studies is very critical.
Another temporal aspect of importance is the comparison of leaf-off photo-
graphy with leaf-on photography. Leaf-off conditions are preferable for applica-
tions in which it is important to be able to see as much detail as possible
underneath trees. Such applications include topographic mapping and urban feature identification. Leaf-on conditions are preferred for vegetation
mapping. Figure 1.32a illustrates leaf-on photography. Here considerable detail
of the ground surface underneath the tree crowns is obscured. Figure 1.32b
Figure 1.32 Comparison of leaf-on photography with leaf-off photography. Gladstone, OR. (a) Leaf-on
photograph exposed in summer. (b) Leaf-off photograph exposed in spring. Scale 1:1,500. (Courtesy Oregon
Metro.)
illustrates leaf-off photography. Here, the ground surface underneath the tree
crowns is much more visible than in the leaf-on photograph. Because leaf-off
photographs are typically exposed in spring or fall, there are longer shadows in
the image than with leaf-on photography, which is typically exposed in summer.
(Shadow length also varies with time of day.) Also, these photographs illustrate
leaf-on and leaf-off conditions in an urban area where most of the trees are decid-
uous and drop their leaves in fall (leaf-off conditions). The evergreen trees in the
images (e.g., lower right) maintain their needles throughout the year and cast
dark shadows. Hence, there would not be leaf-off conditions for such trees.
Every remote sensing system has a limit on how small an object on the earth’s sur-
face can be and still be "seen" by a sensor as being separate from its surroundings.
This limit, called the spatial resolution of a sensor, is an indication of how well a
sensor can record spatial detail. In some cases, the ground sample distance (GSD),
or ground area represented by a single pixel in a digital image, may correspond
closely to the spatial resolution of that sensor. In other cases, the ground sample
distance may be larger or smaller than the sensor’s spatial resolution, perhaps as
a result of the A-to-D conversion process or of digital image manipulation such
as resampling (see Chapter 7 and Appendix B). This distinction between spatial
resolution and ground sample distance is subtle but important. For the sake of
simplicity, the following discussion treats the GSD in a digital image as being
equivalent to the spatial resolution of the sensor that produced the image, but
note that in actual images the sampling distance may be larger or smaller than
the spatial resolution.
Figure 1.33 illustrates, in the context of a digital image, the interplay between
the spatial resolution of a sensor and the spatial variability present in a ground
scene. In (a), a single pixel covers only a small area of the ground (on the order of
the width of the rows of the crop shown). In (b), a coarser ground resolution is
depicted and a single pixel integrates the radiance from both the crop rows and the
soil between them. In (c), an even coarser resolution results in a pixel measuring
the average radiance over portions of two fields. Thus, depending on the spatial resolution of the sensor and the spatial structure of the ground area being sensed, digital images comprise a range of "pure" and "mixed" pixels. In general, the larger
the percentage of mixed pixels, the more limited is the ability to record and extract
spatial detail in an image. This is illustrated in Figure 1.34, in which the same area
has been imaged over a range of different ground resolution cell sizes.
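The mixing of pure signals into coarser pixels can be mimicked by block-averaging a fine-resolution array, as in the Python sketch below. This is a simplified stand-in for a sensor's true spatial response, assuming NumPy is available; the radiance values are invented for illustration.

import numpy as np

def coarsen(image, factor):
    """Average non-overlapping factor-by-factor blocks into single pixels."""
    rows, cols = image.shape
    trimmed = image[: rows - rows % factor, : cols - cols % factor]
    blocks = trimmed.reshape(trimmed.shape[0] // factor, factor,
                             trimmed.shape[1] // factor, factor)
    return blocks.mean(axis=(1, 3))

# Alternating bright crop rows (0.6) and darker soil (0.2) at the finest GSD:
scene = np.tile(np.array([[0.6], [0.2]]), (4, 8))     # an 8 x 8 pixel scene
print(coarsen(scene, 2))    # every coarser pixel becomes a "mixed" value of 0.4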
Further discussion of the spatial resolution of remote sensing systems—
including the factors that determine spatial resolution and the methods used for
measuring or calculating a system’s resolution—can be found in Chapter 2 (for
camera systems), Chapters 4 and 5 (for airborne and spaceborne multispectral
and thermal sensors), and Chapter 6 (for radar systems).
It should be noted that there are other forms of resolution that are important
characteristics of remote sensing images. These include the following:
Spectral resolution, referring to a sensor’s ability to distinguish among different
ground features based on their spectral properties. Spectral resolution depends
upon the number, wavelength location, and narrowness of the spectral bands in
which a sensor collects image data. The bands in which any sensor collects data
can range from a single broad band (for panchromatic images) to a few broad bands (for multispectral images) or many very narrow bands (for hyperspectral images).
Radiometric resolution, referring to the sensor’s ability to differentiate among
subtle variations in brightness. Does the sensor divide the range from the "brightest" pixel to the "darkest" pixel that can be recorded in an image (the dynamic range) into 256, 512, or 1,024 gray-level values? The finer the radiometric resolution is, the greater the quality and interpretability of an image. (See also the discussion of quantization and digital numbers in Section 1.5, and the brief numerical sketch following this list.)
Temporal resolution, referring to the ability to detect changes over shorter or
longer periods of time. Most often, this term is used in reference to a sensor
that produces a time-series of multiple images. This could be a satellite system
with a defined 16-day or 26-day orbital repeat cycle, or a tripod-mounted camera
with a timer that collects one image every hour to serve as reference data. The
importance of rapid and/or repeated coverage of an area varies dramatically with
the application at hand. For example, in disaster response applications, temporal
resolution might outweigh the importance of some, or all, of the other types of
resolution we have summarized above.
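The brief numerical sketch below illustrates the radiometric-resolution point made above: the same three nearly identical brightness values are quantized into 8-bit and 10-bit gray levels. The sample values and function are ours, chosen only for illustration, and assume NumPy is available.

import numpy as np

def quantize(radiance, n_levels):
    """Scale radiance values in [0, 1] to integer digital numbers 0 .. n_levels - 1."""
    radiance = np.clip(radiance, 0.0, 1.0)
    return np.round(radiance * (n_levels - 1)).astype(int)

samples = np.array([0.100, 0.102, 0.104])   # three nearly identical brightnesses
print(quantize(samples, 256))    # [ 26  26  27] -> two of the three merge at 8 bits
print(quantize(samples, 1024))   # [102 104 106] -> all three remain distinct at 10 bits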
In subsequent chapters, as new remote sensing systems are introduced and
discussed, the reader should keep in mind these multiple resolutions that deter-
mine whether a given system would be suitable for a particular application.
Figure 1.34 Ground resolution cell size effect on ability to extract detail from a digital image. Shown
is a portion of the University of Wisconsin-Madison campus, including Camp Randall Stadium and
vicinity, at a ground resolution cell size (per pixel) of: (a) 1 m, (b) 2.5 m, (c) 5 m, (d) 10 m, (e) 20 m, and
(f) 30 m, and an enlarged portion of the image at (g) 0.5 m, (h) 1 m, and (i) 2.5 m. (Courtesy University
of Wisconsin-Madison, Environmental Remote Sensing Center, and NASA Affiliated Research Center
Program.)
Image Scale
Image scale, discussed in detail in Section 3.3, affects the level of useful informa-
tion that can be extracted from aerial and space images. The scale of an image
can be thought of as the relationship between a distance measured on the image
and the corresponding distance on the ground. Although terminology with regard
to image scale has not been standardized, we can consider that small-scale images
have a scale of 1:50,000 or smaller, medium-scale images have a scale between
1:12,000 and 1:50,000, and large-scale airphotos have a scale of 1:12,000 or larger.
In the case of digital data, images do not have a fixed scale per se; rather, they have a specific ground sample distance, as discussed previously (and illustrated in
Figures 1.33 and 1.34), and can be reproduced at various scales. Thus, one could
refer to the display scale of a digital image as it is displayed on a computer moni-
tor or as printed in hardcopy.
In the figure captions of this book, we have stated the hardcopy display scale
of many images—including photographic, multispectral, and radar images—so
that the reader can develop a feel for the degree of detail that can be extracted
from images of varying scales.
As generalizations, the following statements can be made about the appro-
priateness of various image scales for resource studies. Small-scale images are
used for regional surveys, large-area resource assessment, general resource man-
agement planning, and large-area disaster assessment. Medium-scale images are
used for the identiûcation, classiûcation, and mapping of such features as tree
species, agricultural crop type, vegetation community, and soil type. Large-scale
images are used for the intensive monitoring of speciûc items such as surveys of
the damage caused by plant disease, insects, or tree blowdown. Large-scale ima-
ges are also used for emergency response to such events as hazardous waste spills
and planning search and rescue operations in association with tornadoes, floods,
and hurricanes.
In the United States, the National High-Altitude Photography (NHAP) pro-
gram, later renamed the National Aerial Photography Program (NAPP), was a fed-
eral multiagency activity coordinated by the U.S. Geological Survey (USGS). It
provided nationwide photographic coverage at nominal scales of 1:58,000 and 1:80,000 (for NHAP) and 1:40,000 (for NAPP). The archive of NHAP
and NAPP photos has proven to be an extremely valuable ongoing source of med-
ium-scale images supporting a wide range of applications.
The National Agriculture Imagery Program (NAIP) acquires peak growing
season leaf-on imagery in the continental United States and delivers this imagery
to U.S. Department of Agriculture (USDA) County Service Centers in order to
assist with crop compliance and a multitude of other farm programs. NAIP ima-
gery is typically acquired with GSDs of one to two meters. The one-meter GSD
imagery is intended to provide updated digital orthophotography. The two-meter
GSD imagery is intended to support USDA programs that require current imagery
acquired during the agricultural growing season but do not require high hor-
izontal accuracy. NAIP photographs are also useful in many non-USDA applica-
tions, including real estate, recreation, and land use planning.
There is no single "right" way to approach the image interpretation process. The specific image products and interpretation equipment available will, in part, influence how a particular interpretation task is undertaken. Beyond these factors, the specific goals of the task will determine the image interpretation process
employed. Many applications simply require the image analyst to identify and
count various discrete objects occurring in a study area. For example, counts may
be made of such items as motor vehicles, residential dwellings, recreational water-
craft, or animals. Other applications of the interpretation process often involve the
identification of anomalous conditions. For example, the image analyst might
survey large areas looking for such features as failing septic systems, sources of
water pollution entering a stream, areas of a forest stressed by an insect or disease
problem, or evidence of sites having potential archaeological significance.
Many applications of image interpretation involve the delineation of discrete
areal units throughout images. For example, the mapping of land use, soil types, or
forest types requires the interpreter to outline the boundaries between areas of one
type versus another. Such tasks can be problematic when the boundary is not a dis-
crete edge, but rather a "fuzzy edge" or gradation from one type of area to another,
as is common with natural phenomena such as soils and natural vegetation.
Two extremely important issues must be addressed before an interpreter
undertakes the task of delineating separate areal units on remotely sensed images.
The first is the definition of the classification system or criteria to be used to separate the various categories of features occurring in the images. For example, in mapping land use, the interpreter must fix firmly in mind what specific characteristics determine if an area is "residential," "commercial," or "industrial." Similarly, the forest type mapping process must involve clear definition of what constitutes an area to be delineated in a particular species, height, or crown density class.
The second important issue in delineation of discrete areal units on images is
the selection of the minimum mapping unit (MMU) to be employed in the pro-
cess. This refers to the smallest size areal entity to be mapped as a discrete area.
Selection of the MMU will determine the extent of detail conveyed by an interpreta-
tion. This is illustrated in Figure 1.35. In (a), a small MMU results in a much more
detailed interpretation than does the use of a large MMU, as illustrated in (b).
Once the classiûcation system and MMU have been determined, the inter-
preter can begin the process of delineating boundaries between feature types.
Experience suggests that it is advisable to delineate the most highly contrasting
feature types first and to work from the general to the specific. For example, in a
Figure 1.35 Influence of minimum mapping unit size on interpretation detail. (a) Forest
types mapped using a small MMU: O, oak; M, maple; W, white pine; J, jack pine; R, red
pine; S, spruce. (b) Forest types mapped using a large MMU: D, deciduous; E, evergreen.
land use mapping effort, it would be better to separate "urban" from "water" and "agriculture" before separating more detailed categories of each of these feature
types based on subtle differences.
In certain applications, the interpreter might choose to delineate photo-
morphic regions as part of the delineation process. These are regions of reason-
ably uniform tone, texture, and other image characteristics. When initially
delineated, the feature type identity of these regions may not be known. Field
observations or other ground truth can then be used to verify the identity of each
region. Regrettably, there is not always a one-to-one correspondence between the
appearance of a photomorphic region and a mapping category of interest. How-
ever, the delineation of such regions often serves as a stratification tool in the
interpretation process and can be valuable in applications such as vegetation
mapping (where photomorphic regions often correspond directly to vegetation
classes of interest).
Because our two eyes view a scene from slightly separated positions, when objects lie at different distances in a scene, each eye sees a slightly different view
of the objects. The differences between the two views are synthesized by the mind
to provide depth perception. Thus, the two views provided by our separated eyes
enable us to see in three dimensions.
Vertical aerial photographs are often taken along flight lines such that succes-
sive images overlap by at least 50% (see Figure 3.2). This overlap also provides
two views taken from separated positions. By viewing the left image of a pair with
the left eye and the right image with the right eye, we obtain a three-dimensional
view of the terrain surface. The process of stereoscopic viewing can be done using
a traditional stereoscope, or using various methods for stereoscopic viewing on
computer monitors. This book contains many stereopairs, or stereograms, which
can be viewed in three dimensions using a lens stereoscope such as shown in
Figure 1.36. An average separation of about 58 mm between common points has
been used in the stereograms in this book. The exact spacing varies somewhat
because of the different elevations of the points.
Typically, images viewed in stereo manifest vertical exaggeration. This is
caused by an apparent difference between the vertical scale and the horizontal
scale of the stereomodel. Because the vertical scale appears to be larger than the
horizontal scale, objects in the stereomodel appear to be too tall. A related con-
sideration is that slopes in the stereomodel will appear to be steeper than they
actually are. (The geometric terms and concepts used in this discussion of vertical
exaggeration are explained in more detail in Chapter 3.)
Many factors contribute to vertical exaggeration, but the primary cause is the lack of equivalence between the original, in-flight, photographic base–height ratio, $B/H'$ (Figure 3.24a), and the corresponding, in-office, stereoviewing base–height ratio, $b_e/h_e$ (Figure 1.37). The perceived vertical exaggeration in the stereomodel is approximately the ratio of these two ratios. The photographic base–height ratio is the ratio of the air base distance between the two exposure stations to the flying height above the average terrain elevation. The stereoviewing base–height ratio is the ratio of the viewer's eye base ($b_e$) to the distance from the eyes at which the stereomodel is perceived ($h_e$). The perceived vertical exaggeration, VE, is approximately the ratio of the photographic base–height ratio to the stereoviewing base–height ratio,

$$\mathrm{VE} \cong \frac{B/H'}{b_e/h_e} \qquad (1.10)$$

where $b_e/h_e \approx 0.15$ for most observers.
In short, vertical exaggeration varies directly with the photographic base–
height ratio.
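A minimal numerical sketch of Equation 1.10, using a hypothetical air base and flying height (the specific values are ours, chosen only for illustration):

# Sketch of Equation 1.10: VE ~ (B / H') / (be / he), with be/he taken as 0.15.
def vertical_exaggeration(air_base_m, flying_height_m, be_over_he=0.15):
    """Perceived vertical exaggeration from the two base-height ratios."""
    return (air_base_m / flying_height_m) / be_over_he

# Hypothetical example: a 1,380 m air base flown 3,000 m above the terrain.
print(round(vertical_exaggeration(1_380, 3_000), 1))   # about 3.1x exaggeration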
[Stereoscopic depth-perception test and answer key. Part I asks the viewer to rank, by apparent elevation, the designs within each of eight rings, using "1" for the highest and assigning the same number to designs at the same elevation. Part II asks for the relative elevations of rings 1 through 8, and Part III asks the viewer to draw profiles of the relative elevations of the letters in the words "prüfungstafel" and "stereoskopisches sehen." In the answer key, the ring order from highest to lowest is 7, 6, 5, 1, 4, 2, 3, 8, with rings 2 and 3 at the same elevation.]
raising the edge of one of the photographs. More advanced devices for viewing
hardcopy stereoscopic image pairs include mirror stereoscopes (larger stereo-
scopes that use a combination of prisms and mirrors to separate the lines of sight
from each of the viewer’s eyes) and zoom stereoscopes, expensive precision instru-
ments used for viewing stereopairs under variable magnification.
With the proliferation of digital imagery and software for viewing and analyz-
ing digital images, analog stereoscopes have been replaced in the laboratory (if
not in the field) by various computer hardware configurations for stereoviewing.
These devices are discussed in Section 3.10.
In recent years, there has been increasing emphasis on the development of quan-
titative, computer-based processing methods for analyzing remotely sensed data.
As will be discussed in Chapter 7, those methods have become increasingly
sophisticated and powerful. Despite these advances, computers are still somewhat
limited in their ability to evaluate many of the visual "clues" that are readily
apparent to the human interpreter, particularly those referred to as image texture.
Therefore, visual and numerical techniques should be seen as complementary in
nature, and consideration must be given to which approach (or combination of
approaches) best fits a particular application.
The discussion of visual image interpretation in this section has of necessity
been brief. As mentioned at the start of this section, the skill of image interpreta-
tion is best learned interactively, through the experience of interpreting many
images. In the ensuing chapters of this book, we provide many examples of remo-
tely sensed images, from aerial photographs to synthetic aperture radar images.
We hope that the reader will apply the principles and concepts discussed in this
section to help interpret the features and phenomena illustrated in those images.
2 ELEMENTS OF
PHOTOGRAPHIC SYSTEMS
2.1 INTRODUCTION
One of the most common, versatile, and economical forms of remote sensing is
aerial photography. Historically, most aerial photography has been film-based. In recent years, however, digital photography has become the dominant form of newly collected photographic imagery. In this chapter, unless otherwise specified, when we use the term "aerial photography," we are referring to both film and
digital aerial photography. The basic advantages that aerial photography affords
over on-the-ground observation include: