of detector, the resulting data are generally recorded onto some magnetic or optical
computer storage medium, such as a hard drive, memory card, solid-state storage
unit or optical disk. Although sometimes more complex and expensive than film-
based systems, electronic sensors offer the advantages of a broader spectral range of
sensitivity, improved calibration potential, and the ability to electronically store and
transmit data.
In remote sensing, the term photograph historically was reserved exclusively for
images that were detected as well as recorded on film. The more generic term image
was adopted for any pictorial representation of image data. Thus, a pictorial record
from a thermal scanner (an electronic sensor) would be called a "thermal image,"
not a "thermal photograph," because film would not be the original detection
mechanism for the image. Because the term image relates to any pictorial product,
all photographs are images. Not all images, however, are photographs.
A common exception to the above terminology is use of the term digital
photography. As we describe in Section 2.5, digital cameras use electronic detectors
rather than film for image detection. While this process is not "photography" in
the traditional sense, "digital photography" is now the common way to refer to
this technique of digital data collection.
We can see that the data interpretation aspects of remote sensing can involve ana-
lysis of pictorial (image) and/or digital data. Visual interpretation of pictorial image
data has long been the most common form of remote sensing. Visual techniques
make use of the excellent ability of the human mind to qualitatively evaluate spatial
patterns in an image. The ability to make subjective judgments based on selected
image elements is essential in many interpretation efforts. Later in this chapter, in
Section 1.12, we discuss the process of visual image interpretation in detail.
Visual interpretation techniques have certain disadvantages, however, in that
they may require extensive training and are labor intensive. In addition, spectral
characteristics are not always fully evaluated in visual interpretation efforts. This
is partly because of the limited ability of the eye to discern tonal values on an
image and the difficulty of simultaneously analyzing numerous spectral images.
In applications where spectral patterns are highly informative, it is therefore pre-
ferable to analyze digital, rather than pictorial, image data.
The basic character of digital image data is illustrated in Figure 1.18.
Although the image shown in (a) appears to be a continuous-tone photograph,
it is actually composed of a two-dimensional array of discrete picture elements,
or pixels. The intensity of each pixel corresponds to the average brightness, or
radiance, measured electronically over the ground area corresponding to each
pixel. A total of 500 rows and 400 columns of pixels are shown in Figure 1.18a.
Whereas the individual pixels are virtually impossible to discern in (a), they are
readily observable in the enlargements shown in (b) and (c). These enlargements
correspond to sub-areas located near the center of (a). A 100-row × 80-column
enlargement is shown in (b) and a 10-row × 8-column enlargement is included in
(c). Part (d) shows the individual digital number (DN)—also referred to as the
brightness value—for each pixel shown in (c).
32 CHAPTER 1 CONCEPTS AND FOUNDATIONS OF REMOTE SENSING
Figure 1.18 Basic character of digital image data. (a) Original 500-row × 400-column digital image. Scale
1:200,000. (b) Enlargement showing 100-row × 80-column area of pixels near center of (a). Scale 1:40,000.
(c) 10-row × 8-column enlargement. Scale 1:4,000. (d) Digital numbers corresponding to the radiance of
each pixel shown in (c). (Author-prepared figure.)
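The pixel-grid structure described above and in Figure 1.18 can be sketched with a small array; the DN values below are hypothetical, not taken from the figure:

```python
import numpy as np

# A digital image is a 2D array of discrete pixels; each value is a
# digital number (DN) recording average brightness over that ground area.
# Hypothetical 6-row x 4-column image (values are illustrative only).
image = np.array([
    [ 34,  40,  51,  47],
    [ 38,  52,  88,  60],
    [ 41,  95, 120,  72],
    [ 39,  63, 105,  66],
    [ 36,  44,  58,  49],
    [ 33,  37,  42,  40],
], dtype=np.uint8)

rows, cols = image.shape      # analogous to the 500 x 400 image in Figure 1.18a
subarea = image[1:4, 1:3]     # an "enlargement": a sub-array near the center
print(rows, cols)             # 6 4
print(subarea.max())          # 120
```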
1.5 DATA ACQUISITION AND DIGITAL IMAGE CONCEPTS 33
Figure 1.19 Basic character of multi-band digital image data. (a) Each band is represented by a grid of
cells or pixels; any given pixel has a set of DNs representing its value in each band. (b) The spectral
signature for the pixel highlighted in (a), showing band number and wavelength on the X axis and pixel
DN on the Y axis. Values between the wavelengths of each spectral band, indicated by the dashed line
in (b), are not measured by this sensor and would thus be unknown.
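The band/pixel arrangement in Figure 1.19a can be sketched as a small array cube; the dimensions and DN values below are illustrative only (loosely patterned on the figure, not exact):

```python
import numpy as np

# Hypothetical 6-band image stored as (bands, rows, cols); the spectral
# signature of one pixel is its vector of DNs across the bands, as in
# Figure 1.19b. All values here are made up for illustration.
bands, rows, cols = 6, 10, 8
cube = np.zeros((bands, rows, cols), dtype=np.uint16)

r, c = 4, 3                                    # one highlighted pixel
cube[:, r, c] = [150, 105, 88, 54, 27, 120]    # bands 1-6 (illustrative DNs)

signature = cube[:, r, c]    # spectral signature: one DN per band
print(signature.tolist())    # [150, 105, 88, 54, 27, 120]
```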
of data within a single file. This format is referred to as band sequential (BSQ)
format. It has the advantage of simplicity, but it is often not the optimal choice
for efficient display and visualization of data, because viewing even a small por-
tion of the image requires reading multiple blocks of data from different “places”
on the computer disk. For example, to view a true-color digital image in BSQ for-
mat, with separate files used to store the red, green, and blue spectral bands, it
would be necessary for the computer to read blocks of data from three locations
on the storage medium.
An alternate method for storing multi-band data utilizes the band interleaved
by line (BIL) format. In this case, the image data file contains first a line of data
from band 1, then the same line of data from band 2, and each subsequent band.
This block of data consisting of the first line from each band is then followed by
the second line of data from bands 1, 2, 3, and so forth.
The third common data storage format is band interleaved by pixel (BIP). This
is perhaps the most widely used format for three-band images, such as those from
most consumer-grade digital cameras. In this format, the file contains each band’s
measurement for the first pixel, then each band’s measurement for the next pixel,
and so on. The advantage of both BIL and BIP formats is that a computer can
read and process the data for small portions of the image much more rapidly,
because the data from all spectral bands are stored in closer proximity than in the
BSQ format.
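The three interleaving schemes can be illustrated by serializing the same small hypothetical image three different ways:

```python
import numpy as np

# Sketch of BSQ, BIL, and BIP layouts for a multi-band image with
# `nb` bands, `nr` rows (lines), and `nc` columns. Values hypothetical.
nb, nr, nc = 3, 2, 4
cube = np.arange(nb * nr * nc, dtype=np.uint8).reshape(nb, nr, nc)  # (band, row, col)

bsq = cube.tobytes()                     # band 1 in full, then band 2, ...
bil = cube.transpose(1, 0, 2).tobytes()  # line 1 of every band, then line 2, ...
bip = cube.transpose(1, 2, 0).tobytes()  # all bands for pixel 1, then pixel 2, ...

# Reading the data back requires knowing which layout was used:
from_bip = np.frombuffer(bip, dtype=np.uint8).reshape(nr, nc, nb).transpose(2, 0, 1)
assert np.array_equal(from_bip, cube)
```

Note that in the BIP byte stream all three bands of a given pixel are adjacent, which is why small spatial subsets can be read with fewer, more localized disk accesses than in BSQ.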
Typically, the DNs constituting a digital image are recorded over such
numerical ranges as 0 to 255, 0 to 511, 0 to 1023, 0 to 2047, 0 to 4095 or higher.
These ranges represent the set of integers that can be recorded using 8-, 9-,
10-, 11-, and 12-bit binary computer coding scales, respectively. (That is,
2⁸ = 256, 2⁹ = 512, 2¹⁰ = 1024, 2¹¹ = 2048, and 2¹² = 4096.) The technical term
for the number of bits used to store digital image data is quantization level (or
color depth, when used to describe the number of bits used to display a color
image). As discussed in Chapter 7, with the appropriate calibration coefficients
these integer DNs can be converted to more meaningful physical units such as
spectral reflectance, radiance, or normalized radar cross section.
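A minimal sketch of the bit-depth arithmetic, plus a DN-to-radiance conversion using made-up gain/offset coefficients (real coefficients are sensor-specific, as Chapter 7 discusses):

```python
# Relationship between quantization level (bits) and the available DN range.
for bits in (8, 9, 10, 11, 12):
    print(bits, "bits ->", 2 ** bits, "values, DNs 0 to", 2 ** bits - 1)

# Hedged sketch of a linear DN-to-radiance conversion. The gain and offset
# below are hypothetical; actual values come from sensor calibration.
gain, offset = 0.05, -1.2
dn = 88
radiance = gain * dn + offset   # units are sensor-dependent, e.g. W/(m^2 sr um)
print(round(radiance, 2))       # 3.2
```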
Elevation Data
Figure 1.20 Representations of topographic data. (a) Portion of USGS 7.5-minute quadrangle
map, showing elevation contours. Scale 1:45,000. (b) Digital elevation model, with brightness
proportional to elevation. Scale 1:280,000. (c) Shaded-relief map derived from (b), with
simulated illumination from the north. Scale 1:280,000. (d) Three-dimensional perspective view,
with shading derived from (c). Scale varies in this projection. White rectangles in (b), (c), and (d)
indicate area enlarged in (a). (Author-prepared figure.)
from the upper right portion of (b) to the lower center, with many tributary val-
leys branching off from each side.
Figure 1.20c shows another way of visualizing topographic data using shaded
relief. This is a simulation of the pattern of shading that would be expected from a
three-dimensional surface under a given set of illumination conditions. In this
case, the simulation includes a primary source of illumination located to the
north, with a moderate degree of diffuse illumination from other directions to
soften the intensity of the shadows. Flat areas will have uniform tone in a shaded
relief map. Slopes facing toward the simulated light source will appear bright,
while slopes facing away from the light will appear darker.
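A minimal shaded-relief sketch, assuming a tiny hypothetical DEM and one common hillshade formulation (aspect and azimuth conventions vary between implementations, so treat this as illustrative rather than definitive):

```python
import numpy as np

# Hillshade sketch: simulated illumination from the north at 45 degrees
# above the horizon, applied to a small hypothetical DEM (elevations in m).
dem = np.array([[10., 10., 10., 10.],
                [12., 14., 14., 12.],
                [15., 20., 20., 15.],
                [12., 14., 14., 12.]])

dzdy, dzdx = np.gradient(dem)    # elevation change per cell, row- and col-wise
azimuth = np.radians(0.0)        # light source to the north (one convention)
altitude = np.radians(45.0)      # sun angle above the horizon

slope = np.arctan(np.hypot(dzdx, dzdy))
aspect = np.arctan2(-dzdx, dzdy)
shade = (np.sin(altitude) * np.cos(slope)
         + np.cos(altitude) * np.sin(slope) * np.cos(azimuth - aspect))
shade = np.clip(shade, 0.0, 1.0)  # flat cells -> uniform tone; slopes facing
print(shade.round(2))             # the light render brighter
```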
To aid in visual interpretation, it is often preferable to create shaded relief
maps with illumination from the top of the image, regardless of whether that is a
direction from which solar illumination could actually come in the real world.
When the illumination is from other directions, particularly from the bottom of
the image, an untrained analyst may have difficulty correctly perceiving the land-
scape; in fact, the topography may appear inverted. (This effect is illustrated in
Figure 1.29.)
Figure 1.20d shows yet another method for visualizing elevation data, a three-
dimensional perspective view. In this example, the shaded relief map shown in (c)
has been "draped" over the DEM, and a simulated view has been created based on
a viewpoint located at a speciûed position in space (in this case, above and to the
south of the area shown). This technique can be used to visualize the appearance
of a landscape as seen from some point of interest. It is possible to "drape" other
types of imagery over a DEM; perspective views created using an aerial photo-
graph or high-resolution satellite image may appear quite realistic. Animation of
successive perspective views created along a user-defined flight line permits the
development of simulated "fly-throughs" over an area.
The term "digital elevation model" or DEM can be used to describe any image
where the pixel values represent elevation (Z) coordinates. Two common sub-
categories of DEMs are a digital terrain model (DTM) and a digital surface model
(DSM). A DTM (sometimes referred to as a "bald-earth DEM") records the eleva-
tion of the bare land surface, without any vegetation, buildings, or other features
above the ground. In contrast, a DSM records the elevation of whatever the
uppermost surface is at every location; this could be a tree crown, the roof of a
building, or the ground surface (where no vegetation or structures are present).
Each of these models has its appropriate uses. For example, a DTM would be use-
ful for predicting runoff in a watershed after a rainstorm, because streams will
flow over the ground surface rather than across the top of the forest canopy. In
contrast, a DSM could be used to measure the size and shape of objects on the
terrain, and to calculate intervisibility (whether a given point B can be seen from
a reference point A).
Figure 1.21 compares a DSM and DTM for the same site, using airborne lidar
data from the Capitol Forest area in Washington State (Andersen, McGaughey,
and Reutebuch, 2005). In Figure 1.21a, the uppermost lidar points have been used
Figure 1.21 Airborne lidar data of the Capitol Forest site, Washington State. (a) Digital surface model
(DSM) showing tops of tree crowns and canopy gaps. (b) Digital terrain model (DTM) showing
hypothetical bare earth surface. (From Andersen et al., 2006; courtesy Ward Carlson, USDA Forest
Service PNW Research Station.)
to create a DSM showing the elevation of the upper surface of the forest canopy,
the presence of canopy gaps, and, in many cases, the shape of individual tree
crowns. In Figure 1.21b, the lowermost points have been used to create a DTM,
showing the underlying ground surface if all vegetation and structures were
removed. Note the ability to detect fine-scale topographic features, such as small
gullies and roadcuts, even underneath a dense forest canopy (Andersen et al.,
2006).
Plate 1 shows a comparison of a DSM (a) and DTM (b) for a wooded area in
New Hampshire. The models were derived from airborne lidar data acquired in
early December. This site is dominated by a mix of evergreen and deciduous tree
species, with the tallest (pines and hemlocks) exceeding 40 m in height. Scattered
clearings in the center and right side are athletic fields, parkland, and former ski
slopes now being taken over by shrubs and small trees. With obscuring vegetation
removed, the DTM in (b) shows a variety of glacial and post-glacial landforms, as
well as small roads, trails, and other constructed features. Also, by subtracting
the elevations in (b) from those in (a), it is possible to calculate the height of the
forest canopy above ground level at each point. The result, shown in (c), is
referred to as a canopy height model (CHM). In this model, the ground surface has
been flattened, so that all remaining variation represents differences in height of
the trees relative to the ground. Lidar and other high-resolution 3D data are
widely used for this type of canopy height analysis (Clark et al., 2004). (See
Sections 6.23 and 6.24 for more discussion.)
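The DSM-minus-DTM subtraction that produces a CHM can be sketched directly; all elevations below are hypothetical (meters):

```python
import numpy as np

# A canopy height model (CHM) is the cell-by-cell difference between the
# uppermost surface (DSM) and the bare earth (DTM). Values hypothetical.
dsm = np.array([[252.0, 255.5, 264.0],
                [250.5, 248.0, 259.5]])   # tree crowns, roofs, or ground
dtm = np.array([[221.0, 222.5, 223.0],
                [221.5, 222.0, 223.5]])   # bare ground

chm = dsm - dtm      # height above ground; ~0 where no vegetation or structures
print(chm)
print(chm.max())     # tallest canopy element in this sketch: 41.0 m
```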
Increasingly, elevation data are being used for analysis not just in the form of
highly processed DEMs, but in the more basic form of a point cloud. A point cloud
is simply a data set containing many three-dimensional point locations, each
representing a single measurement of the (X, Y, Z) coordinates of an object or
surface. The positions, spacing, intensity, and other characteristics of the points
in this cloud can be analyzed using sophisticated 3D processing algorithms to
extract information about features (Rutzinger et al., 2008).
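A point cloud is naturally represented as an (N, 3) array; the coordinates below are hypothetical, and the "lowest points" filter is a deliberately crude stand-in for the sophisticated ground-classification algorithms used in practice:

```python
import numpy as np

# A point cloud as an (N, 3) array of (X, Y, Z) coordinates.
# Coordinates are hypothetical (e.g., UTM easting, northing, elevation).
points = np.array([
    [500010.2, 4420001.5, 231.7],
    [500011.0, 4420002.1, 252.3],   # high point: perhaps a tree crown
    [500012.4, 4420000.8, 230.9],
    [500013.1, 4420003.3, 249.8],
])

z = points[:, 2]
ground_guess = points[z < z.min() + 2.0]   # crude lowest-points filter
print(len(points), len(ground_guess))      # 4 2
```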
Further discussion of the acquisition, visualization, and analysis of elevation
data, including DEMs and point clouds, can be found in Chapters 3 and 6, under
the discussion of photogrammetry, interferometric radar, and lidar systems.
Figure 1.22 ASD, Inc. FieldSpec Spectroradiometer: (a) the instrument; (b) instrument shown in field
operation. (Courtesy ASD, Inc.)
time, as can computed reflectance values within the wavelength bands of var-
ious satellite systems. In-field calculation of band ratios and other computed
values is also possible. One such calculation might be the normalized differ-
ence vegetation index (NDVI), which relates the near-IR and visible reflectance
of earth surface features (Chapter 7). Another option is matching measured
spectra to a library of previously measured samples. The overall system is
compatible with a number of post-processing software packages and also
affords Ethernet, wireless, and GPS compatibility.
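The NDVI computation mentioned above reduces to a simple band ratio (treated fully in Chapter 7); the reflectance values below are hypothetical:

```python
# NDVI from near-IR and red (visible) reflectance. Healthy vegetation
# reflects strongly in the near-IR and weakly in the red, pushing NDVI
# toward +1; the input reflectances below are hypothetical.

def ndvi(nir: float, red: float) -> float:
    return (nir - red) / (nir + red)

print(round(ndvi(0.45, 0.05), 2))   # dense vegetation: 0.8
print(round(ndvi(0.30, 0.25), 2))   # sparse cover (illustrative): 0.09
```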
Figure 1.23 shows a versatile all-terrain instrument platform designed
primarily for collecting spectral measurements in agricultural cropland envir-
onments. The system provides the high clearance necessary for making
measurements over mature row crops, and the tracked wheels allow access to
difficult landscape positions. Several measurement instruments can be sus-
pended from the system’s telescopic boom. Typically, these include a spectro-
radiometer, a remotely operated digital camera system, and a GPS receiver
(Section 1.7). While designed primarily for data collection in agricultural
fields, the long reach of the boom makes this device a useful tool for collecting
spectral data over such targets as emergent vegetation found in wetlands as
well as small trees and shrubs.
Using a spectroradiometer to obtain spectral reflectance measurements is
normally a three-step process. First, the instrument is aimed at a calibration
panel of known, stable reflectance. The purpose of this step is to quantify the
incoming radiation.
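The reflectance estimate that this panel measurement enables can be sketched as a ratio; the panel reflectance and instrument readings below are hypothetical:

```python
# Field spectroradiometer reflectance sketch: target reflectance is
# estimated by ratioing the signal measured over the target to the signal
# measured over a calibration panel of known reflectance. All values
# hypothetical, for a single wavelength.
panel_reflectance = 0.99      # known property of the reference panel
panel_signal = 1840.0         # instrument reading over the panel
target_signal = 552.0         # instrument reading over the target

target_reflectance = (target_signal / panel_signal) * panel_reflectance
print(round(target_reflectance, 3))   # 0.297
```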
Figure 1.23 All-terrain instrument platform designed for collecting spectral measurements in agricultural cropland
environments. (Courtesy of the University of Nebraska-Lincoln Center for Advanced Land Management Information
Technologies.)
Currently, the U.S. Global Positioning System has only one operational counter-
part, the Russian GLONASS system. The full GLONASS constellation consists of
24 operational satellites, a number that was reached in October 2011. In addition, a
fully comprehensive European GNSS constellation, Galileo, is scheduled for com-
pletion by 2020 and will include 30 satellites. The data signals provided by Galileo
will be compatible with those from the U.S. GPS satellites, resulting in a greatly
increased range of options for GNSS receivers and significantly improved accuracy.
Finally, China has announced plans for the development of its own Compass GNSS
constellation, to include 30 satellites in operational use by 2020. The future for
these and similar systems is an extremely bright and rapidly progressing one.
The means by which GNSS signals are used to determine ground positions
is called satellite ranging. Conceptually, the process simply involves measuring
the time required for signals transmitted by at least four satellites to reach the
ground receiver. Knowing that the signals travel at the speed of light
(3 × 10⁸ m/sec in a vacuum), the distance from each satellite to the receiver can be
computed using a form of three-dimensional triangulation. In principle, the signals
from only four satellites are needed to identify the receiver’s location, but in prac-
tice it is usually desirable to obtain measurements from as many satellites as
practical.
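The core of satellite ranging is converting signal travel time to distance at the speed of light; the travel times below are hypothetical, though of realistic magnitude for GNSS orbits:

```python
# Satellite ranging sketch: distance follows from signal travel time at the
# speed of light. At least four satellites are needed in practice, since the
# receiver's clock bias is a fourth unknown alongside X, Y, and Z.
C = 3.0e8                                          # m/sec, speed of light in vacuum

travel_times = [0.0715, 0.0684, 0.0702, 0.0691]    # seconds, four satellites
ranges_m = [C * t for t in travel_times]
print([round(r / 1000) for r in ranges_m])         # km: [21450, 20520, 21060, 20730]
```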
GNSS measurements are potentially subject to numerous sources of error.
These include clock bias (caused by imperfect synchronization between the high-
precision atomic clocks present on the satellites and the lower-precision clocks
used in GNSS receivers), uncertainties in the satellite orbits (known as satellite
ephemeris errors), errors due to atmospheric conditions (signal velocity depends
on time of day, season, and angular direction through the atmosphere), receiver
errors (due to such influences as electrical noise and signal-matching errors), and
multipath errors (reflection of a portion of the transmitted signal from objects not
in the straight-line path between the satellite and receiver).
Such errors can be compensated for (in great part) using differential GNSS
measurement methods. In this approach, simultaneous measurements are made
by a stationary base station receiver (located over a point of precisely known
position) and one (or more) roving receivers moving from point to point. The
positional errors measured at the base station are used to refine the position mea-
sured by the rover(s) at the same instant in time. This can be done either by
bringing the data from the base and rover together in a post-processing mode
after the field observations are completed or by instantaneously broadcasting
the base station corrections to the rovers. The latter approach is termed real-time
differential GNSS positioning.
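The differential correction idea can be sketched in a few lines; all coordinates are hypothetical, and real systems apply corrections per satellite to the pseudoranges rather than to the final position, so this is only a conceptual simplification:

```python
# Differential GNSS sketch: the error observed at a base station of known
# position is used to correct a rover's simultaneous fix. Coordinates are
# hypothetical (meters, local grid).
base_true     = (1000.00, 2000.00, 50.00)   # surveyed base-station position
base_measured = (1001.80, 1998.90, 51.20)   # GNSS fix at the same instant

# Error vector at the base = measured - true; subtract it from the rover fix.
error = tuple(m - t for m, t in zip(base_measured, base_true))
rover_measured  = (1500.70, 2400.10, 48.90)
rover_corrected = tuple(r - e for r, e in zip(rover_measured, error))
print([round(v, 2) for v in rover_corrected])   # [1498.9, 2401.2, 47.7]
```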
In recent years, there have been efforts to improve the accuracy of GNSS
positioning through the development of regional networks of high-precision base
stations, generally referred to as satellite-based augmentation systems (SBAS). The
data from these stations are used to derive spatially explicit correction factors
that are then broadcast in real time, allowing advanced receiver units to deter-
mine their positions with a higher degree of accuracy. One such SBAS network,
the Wide Area Augmentation System (WAAS), consists of approximately 25 ground
reference stations distributed across the United States that continuously monitor
GPS satellite transmissions. Two main stations, located on the U.S. east and west
coasts, collect the data from the reference stations and create a composited cor-
rection message that is location specific. This message is then broadcast through
one of two geostationary satellites, satellites occupying a fixed position over the
equator. Any WAAS-enabled GPS unit can receive these correction signals. The
GPS receiver then determines which correction data are appropriate at the cur-
rent location.
The WAAS signal reception is ideal for open land, aircraft, and marine appli-
cations, but the position of the relay satellites over the equator makes it difficult
to receive the signals at high latitudes or when features such as trees and
mountains obstruct the view of the horizon. In such situations, GPS positions can
sometimes actually contain more error with WAAS correction than without. How-
ever, in unobstructed operating conditions where a strong WAAS signal is avail-
able, positions are normally accurate to within 3 m or better.
Paralleling the deployment of the WAAS system in North America are the
Japanese Multi-functional Satellite Augmentation System (MSAS) in Asia, the
European Geostationary Navigation Overlay Service (EGNOS) in Europe, and pro-
posed future SBAS networks such as India’s GPS Aided Geo-Augmented Naviga-
tion (GAGAN) system. Like WAAS, these SBAS systems use geostationary
satellites to transmit data for real-time differential correction.
In addition to the regional SBAS real-time correction systems such as WAAS,
some nations have developed additional networks of base stations that can be
used for post-processing GNSS data for differential correction (i.e., high-accuracy
corrections made after data collection, rather than in real time). One such system
is the U.S. National Geodetic Survey’s Continuously Operating Reference Stations
(CORS) network. More than 1800 sites in the cooperative CORS network provide
GNSS reference data that can be accessed via the Internet and used in post-
processing for differential correction.
With the development of new satellite constellations, and new resources for
real-time and post-processed differential correction, GNSS-based location ser-
vices are expected to become even more widespread in industry, resource man-
agement, and consumer technology applications in the coming years.
Having introduced some basic concepts, we now have the elements necessary to
characterize a remote sensing system. In so doing, we can begin to appreciate
some of the problems encountered in the design and application of the various
sensing systems examined in subsequent chapters. In particular, the design and
operation of every real-world sensing system represents a series of compromises,
often in response to the limitations imposed by physics and by the current state of
technological development. When we consider the process from start to finish,
users of remote sensing systems need to keep in mind the following factors:
1. The energy source. All passive remote sensing systems rely on energy
that originates from sources other than the sensor itself, typically in the
form of either reüected radiation from the sun or emitted radiation from
earth surface features. As already discussed, the spectral distribution of
reüected sunlight and self-emitted energy is far from uniform. Solar
energy levels obviously vary with respect to time and location, and dif-
ferent earth surface materials emit energy with varying degrees of efû-
ciency. While we have some control over the sources of energy for active
systems such as radar and lidar, those sources have their own particular