1.5 DATA ACQUISITION AND DIGITAL IMAGE CONCEPTS

of detector, the resulting data are generally recorded onto some magnetic or optical computer storage medium, such as a hard drive, memory card, solid-state storage unit, or optical disk. Although sometimes more complex and expensive than film-based systems, electronic sensors offer the advantages of a broader spectral range of sensitivity, improved calibration potential, and the ability to electronically store and transmit data.
In remote sensing, the term photograph historically was reserved exclusively for images that were detected as well as recorded on film. The more generic term image was adopted for any pictorial representation of image data. Thus, a pictorial record from a thermal scanner (an electronic sensor) would be called a “thermal image,” not a “thermal photograph,” because film would not be the original detection mechanism for the image. Because the term image relates to any pictorial product, all photographs are images. Not all images, however, are photographs.
A common exception to the above terminology is use of the term digital photography. As we describe in Section 2.5, digital cameras use electronic detectors rather than film for image detection. While this process is not “photography” in the traditional sense, “digital photography” is now the common way to refer to this technique of digital data collection.
We can see that the data interpretation aspects of remote sensing can involve ana-
lysis of pictorial (image) and/or digital data. Visual interpretation of pictorial image
data has long been the most common form of remote sensing. Visual techniques
make use of the excellent ability of the human mind to qualitatively evaluate spatial
patterns in an image. The ability to make subjective judgments based on selected
image elements is essential in many interpretation efforts. Later in this chapter, in
Section 1.12, we discuss the process of visual image interpretation in detail.
Visual interpretation techniques have certain disadvantages, however, in that they may require extensive training and are labor intensive. In addition, spectral characteristics are not always fully evaluated in visual interpretation efforts. This is partly because of the limited ability of the eye to discern tonal values on an image and the difficulty of simultaneously analyzing numerous spectral images. In applications where spectral patterns are highly informative, it is therefore preferable to analyze digital, rather than pictorial, image data.
The basic character of digital image data is illustrated in Figure 1.18.
Although the image shown in (a) appears to be a continuous-tone photograph,
it is actually composed of a two-dimensional array of discrete picture elements,
or pixels. The intensity of each pixel corresponds to the average brightness, or
radiance, measured electronically over the ground area corresponding to each
pixel. A total of 500 rows and 400 columns of pixels are shown in Figure 1.18a.
Whereas the individual pixels are virtually impossible to discern in (a), they are
readily observable in the enlargements shown in (b) and (c). These enlargements
correspond to sub-areas located near the center of (a). A 100 row × 80 column enlargement is shown in (b), and a 10 row × 8 column enlargement is included in (c).
Figure 1.18 Basic character of digital image data. (a) Original 500 row × 400 column digital image. Scale 1:200,000. (b) Enlargement showing 100 row × 80 column area of pixels near center of (a). Scale 1:40,000. (c) 10 row × 8 column enlargement. Scale 1:4,000. (d) Digital numbers corresponding to the radiance of each pixel shown in (c). (Author-prepared figure.)

Part (d) shows the individual digital number (DN), also referred to as the “brightness value” or “pixel value,” corresponding to the average radiance measured in each pixel shown in (c). These values result from quantizing the original electrical signal from the sensor into positive integer values using a process called analog-to-digital (A-to-D) signal conversion. (The A-to-D conversion process is discussed further in Chapter 4.)
Whether an image is acquired electronically or photographically, it may contain data from a single spectral band or from multiple spectral bands. The image shown in Figure 1.18 was acquired using a single broad spectral band, by integrating all energy measured across a range of wavelengths (a process analogous to photography using “black-and-white” film). Thus, in the digital image, there is a single DN for each pixel. It is also possible to collect “color” or multispectral imagery, whereby data are collected simultaneously in several spectral bands. In the case of a color photograph, three separate sets of detectors (or, for analog cameras, three layers within the film) each record radiance in a different range of wavelengths.
In the case of a digital multispectral image, each pixel includes multiple DNs, one for each spectral band. For example, as shown in Figure 1.19, one pixel in a digital image might have values of 88 in the first spectral band, perhaps representing blue wavelengths, 54 in the second band (green), 27 in the third (red), and so on, all associated with a single ground area.
When viewing this multi-band image, it is possible to view a single band at
a time, treating it as if it were a discrete image, with brightness values propor-
tional to DN as in Figure 1.18. Alternatively, and more commonly, three bands
from the image can be selected and displayed simultaneously in shades of red,
green, and blue, to create a color composite image, whether on a computer
monitor or in a hard copy print. If the three bands being displayed were origin-
ally detected by the sensor in the red, green, and blue wavelength ranges of the
visible spectrum, then this composite will be referred to as a true-color image,
because it will approximate the natural combination of colors that would be
seen by the human eye. Any other combination of bands—perhaps involving
bands acquired in wavelengths outside the visible spectrum—will be referred to
as a false-color image. One common false-color combination of spectral bands
involves displaying near-IR, red, and green bands (from the sensor) in red,
green, and blue, respectively, on the display device. Note that, in all cases, these
three-band composite images involve displaying some combination of bands
from the sensor in red, green, and blue on the display device because the
human eye perceives color as a mixture of these three primary colors. (The
principles of color perception and color mixing are described in more detail in
Section 1.12.)
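To make the band-display idea concrete, the following minimal sketch builds true-color and false-color composites in Python. It assumes (these are illustrative assumptions, not part of the original text) that the image is a NumPy array of shape (bands, rows, columns) with 8-bit DNs, and that the first four bands were sensed in blue, green, red, and near-IR wavelengths:

    import numpy as np

    def color_composite(image, display_bands):
        """Stack three bands as the (red, green, blue) display channels
        and rescale to the 0-1 range for rendering on a monitor."""
        rgb = np.dstack([image[b].astype(float) for b in display_bands])
        return (rgb - rgb.min()) / (rgb.max() - rgb.min())

    # Hypothetical 6-band image filled with random DNs, for illustration only.
    image = np.random.randint(0, 256, size=(6, 500, 400))

    true_color = color_composite(image, (2, 1, 0))   # red, green, blue bands
    false_color = color_composite(image, (3, 2, 1))  # near-IR, red, green bands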
Figure 1.19 Basic character of multi-band digital image data. (a) Each band is represented by a grid of cells or pixels; any given pixel has a set of DNs representing its value in each band (for the highlighted pixel, 88, 54, 27, 120, 105, and 63 in bands 1 through 6). (b) The spectral signature for the pixel highlighted in (a), showing band number and wavelength on the X axis and pixel DN on the Y axis. Values between the wavelengths of each spectral band, indicated by the dashed line in (b), are not measured by this sensor and would thus be unknown.

With multi-band digital data, the question arises of how to organize the data. In many cases, each band of data is stored as a separate file or as a separate block
of data within a single file. This format is referred to as band sequential (BSQ)
format. It has the advantage of simplicity, but it is often not the optimal choice
for efficient display and visualization of data, because viewing even a small por-
tion of the image requires reading multiple blocks of data from different “places”
on the computer disk. For example, to view a true-color digital image in BSQ for-
mat, with separate files used to store the red, green, and blue spectral bands, it
would be necessary for the computer to read blocks of data from three locations
on the storage medium.
An alternate method for storing multi-band data utilizes the band interleaved by line (BIL) format. In this case, the image data file contains first a line of data from band 1, then the same line of data from band 2, and so on through each subsequent band. This block of data, consisting of the first line from each band, is then followed by the second line of data from bands 1, 2, 3, and so forth.
The third common data storage format is band interleaved by pixel (BIP). This
is perhaps the most widely used format for three-band images, such as those from most consumer-grade digital cameras. In this format, the file contains each band’s measurement for the first pixel, then each band’s measurement for the next pixel, and so on. The advantage of both BIL and BIP formats is that a computer can read and process the data for small portions of the image much more rapidly, because the data from all spectral bands are stored in closer proximity than in the BSQ format.
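The three interleaving schemes differ only in the order in which the same DNs are written to disk. The following sketch (an illustration using NumPy axis reordering, rather than any particular file library) shows how a three-band, two-row, two-column image would be serialized under each format:

    import numpy as np

    bands, rows, cols = 3, 2, 2
    # image[b, r, c] holds the DN of band b at row r, column c.
    image = np.arange(bands * rows * cols).reshape(bands, rows, cols)

    bsq = image.flatten()                     # all of band 1, then band 2, then band 3
    bil = image.transpose(1, 0, 2).flatten()  # row 1 of each band, then row 2 of each band
    bip = image.transpose(1, 2, 0).flatten()  # all bands for pixel 1, then pixel 2, ...

In the BIP ordering, every band’s DN for a given pixel is adjacent on disk, which is why small windows of a multi-band image can be read with few, contiguous accesses.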
Typically, the DNs constituting a digital image are recorded over such numerical ranges as 0 to 255, 0 to 511, 0 to 1023, 0 to 2047, 0 to 4095, or higher. These ranges represent the set of integers that can be recorded using 8-, 9-, 10-, 11-, and 12-bit binary computer coding scales, respectively. (That is, 2⁸ = 256, 2⁹ = 512, 2¹⁰ = 1024, 2¹¹ = 2048, and 2¹² = 4096.) The technical term for the number of bits used to store digital image data is quantization level (or color depth, when used to describe the number of bits used to display a color image). As discussed in Chapter 7, with the appropriate calibration coefficients these integer DNs can be converted to more meaningful physical units such as spectral reflectance, radiance, or normalized radar cross section.
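As a simple illustration of such a conversion, the sketch below applies a linear calibration of the general form L = gain × DN + offset; the gain and offset values here are hypothetical, not those of any actual sensor (sensor-specific coefficients are discussed in Chapter 7):

    import numpy as np

    def dn_to_radiance(dn, gain, offset):
        """Convert integer DNs to spectral radiance with a linear calibration."""
        return gain * dn.astype(float) + offset

    dn = np.random.randint(0, 4096, size=(500, 400))       # 12-bit DNs (0 to 4095)
    radiance = dn_to_radiance(dn, gain=0.05, offset=-1.2)  # assumed units: W m^-2 sr^-1 um^-1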

Elevation Data

Increasingly, remote sensing instruments are used to collect three-dimensional spatial data, in which each observation has a Z coordinate representing elevation, along with the X and Y coordinates used to represent the horizontal position of the pixel’s column and row. Particularly when collected over broad areas, these elevation data may represent the topography, the three-dimensional shape of the land surface. In other cases (usually, at finer spatial scales), these elevation data may represent the three-dimensional shapes of objects on or above the ground surface, such as tree crowns in a forest, or buildings in a city. Elevation data may be derived from the analysis of raw measurements from many types of remote sensing instruments, including photographic systems, multispectral sensors, radar systems, and lidar systems.
Elevation data may be represented in many different formats. Figure 1.20a shows a small portion of a traditional contour map, from the U.S. Geological Survey’s 7.5-minute (1:24,000-scale) quadrangle map series. In this map, topographic elevations are indicated by contour lines. Closely spaced lines indicate steep terrain, while flat areas like river floodplains have more widely spaced contours.
Figure 1.20b shows a digital elevation model (DEM). Note that the white rec-
tangle in (b) represents the much smaller area shown in (a). The DEM is similar
to a digital image, with the DN at each pixel representing a surface elevation
rather than a radiance value. In (b), the brightness of each pixel is represented as
being proportional to its elevation, so light-toned areas are topographically higher
and dark-toned areas are lower. The region shown in this map consists of highly
dissected terrain, with a complex network of river valleys; one major valley runs from the upper right portion of (b) to the lower center, with many tributary valleys branching off from each side.

Figure 1.20 Representations of topographic data. (a) Portion of USGS 7.5-minute quadrangle map, showing elevation contours. Scale 1:45,000. (b) Digital elevation model, with brightness proportional to elevation. Scale 1:280,000. (c) Shaded-relief map derived from (b), with simulated illumination from the north. Scale 1:280,000. (d) Three-dimensional perspective view, with shading derived from (c). Scale varies in this projection. White rectangles in (b), (c), and (d) indicate area enlarged in (a). (Author-prepared figure.)
Figure 1.20c shows another way of visualizing topographic data using shaded
relief. This is a simulation of the pattern of shading that would be expected from a
three-dimensional surface under a given set of illumination conditions. In this
case, the simulation includes a primary source of illumination located to the
north, with a moderate degree of diffuse illumination from other directions to
soften the intensity of the shadows. Flat areas will have uniform tone in a shaded
relief map. Slopes facing toward the simulated light source will appear bright,
while slopes facing away from the light will appear darker.
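A shaded-relief surface can be computed directly from a DEM. The sketch below uses the standard Lambertian hillshade formula with an assumed light source; it illustrates the principle rather than the exact algorithm used to produce Figure 1.20c:

    import numpy as np

    def hillshade(dem, azimuth_deg=0.0, altitude_deg=45.0, cellsize=30.0):
        """Simulate illumination of an elevation grid from a source at the
        given azimuth (0 = north) and altitude above the horizon."""
        az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
        dz_dy, dz_dx = np.gradient(dem.astype(float), cellsize)
        slope = np.arctan(np.hypot(dz_dx, dz_dy))
        aspect = np.arctan2(-dz_dx, dz_dy)
        shade = (np.sin(alt) * np.cos(slope) +
                 np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
        return np.clip(shade, 0.0, 1.0)  # flat areas receive a uniform tone

Slopes facing the simulated source approach a value of 1.0 (bright), while slopes facing away approach 0.0 (dark), matching the behavior described above.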
To aid in visual interpretation, it is often preferable to create shaded relief maps with illumination from the top of the image, regardless of whether that is a direction from which solar illumination could actually come in the real world. When the illumination is from other directions, particularly from the bottom of the image, an untrained analyst may have difficulty correctly perceiving the landscape; in fact, the topography may appear inverted. (This effect is illustrated in Figure 1.29.)
Figure 1.20d shows yet another method for visualizing elevation data, a three-dimensional perspective view. In this example, the shaded relief map shown in (c) has been “draped” over the DEM, and a simulated view has been created based on a viewpoint located at a specified position in space (in this case, above and to the south of the area shown). This technique can be used to visualize the appearance of a landscape as seen from some point of interest. It is possible to “drape” other types of imagery over a DEM; perspective views created using an aerial photograph or high-resolution satellite image may appear quite realistic. Animation of successive perspective views created along a user-defined flight line permits the development of simulated “fly-throughs” over an area.
The term “digital elevation model” or DEM can be used to describe any image where the pixel values represent elevation (Z) coordinates. Two common subcategories of DEMs are a digital terrain model (DTM) and a digital surface model (DSM). A DTM (sometimes referred to as a “bald-earth DEM”) records the elevation of the bare land surface, without any vegetation, buildings, or other features above the ground. In contrast, a DSM records the elevation of whatever the uppermost surface is at every location; this could be a tree crown, the roof of a building, or the ground surface (where no vegetation or structures are present). Each of these models has its appropriate uses. For example, a DTM would be useful for predicting runoff in a watershed after a rainstorm, because streams will flow over the ground surface rather than across the top of the forest canopy. In contrast, a DSM could be used to measure the size and shape of objects on the terrain, and to calculate intervisibility (whether a given point B can be seen from a reference point A).
Figure 1.21 compares a DSM and DTM for the same site, using airborne lidar
data from the Capitol Forest area in Washington State (Andersen, McGaughey,
and Reutebuch, 2005).

Figure 1.21 Airborne lidar data of the Capitol Forest site, Washington State. (a) Digital surface model (DSM) showing tops of tree crowns and canopy gaps. (b) Digital terrain model (DTM) showing hypothetical bare earth surface. (From Andersen et al., 2006; courtesy Ward Carlson, USDA Forest Service PNW Research Station.)

In Figure 1.21a, the uppermost lidar points have been used

to create a DSM showing the elevation of the upper surface of the forest canopy, the presence of canopy gaps, and, in many cases, the shape of individual tree crowns. In Figure 1.21b, the lowermost points have been used to create a DTM, showing the underlying ground surface if all vegetation and structures were removed. Note the ability to detect fine-scale topographic features, such as small gullies and roadcuts, even underneath a dense forest canopy (Andersen et al., 2006).
Plate 1 shows a comparison of a DSM (a) and DTM (b) for a wooded area in New Hampshire. The models were derived from airborne lidar data acquired in early December. This site is dominated by a mix of evergreen and deciduous tree species, with the tallest (pines and hemlocks) exceeding 40 m in height. Scattered clearings in the center and right side are athletic fields, parkland, and former ski slopes now being taken over by shrubs and small trees. With obscuring vegetation removed, the DTM in (b) shows a variety of glacial and post-glacial landforms, as well as small roads, trails, and other constructed features. Also, by subtracting the elevations in (b) from those in (a), it is possible to calculate the height of the forest canopy above ground level at each point. The result, shown in (c), is referred to as a canopy height model (CHM). In this model, the ground surface has been flattened, so that all remaining variation represents differences in height of the trees relative to the ground. Lidar and other high-resolution 3D data are widely used for this type of canopy height analysis (Clark et al., 2004). (See Sections 6.23 and 6.24 for more discussion.)
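The CHM computation itself is a simple grid difference, as sketched below; the two models are assumed to be co-registered elevation grids in meters:

    import numpy as np

    def canopy_height_model(dsm, dtm):
        """CHM = DSM - DTM; small negative residuals from measurement
        noise are clipped to zero (bare ground)."""
        return np.clip(np.asarray(dsm, float) - np.asarray(dtm, float), 0.0, None)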
Increasingly, elevation data are being used for analysis not just in the form of a highly processed DEM, but in the more basic form of a point cloud. A point cloud is simply a data set containing many three-dimensional point locations, each representing a single measurement of the (X, Y, Z) coordinates of an object or surface. The positions, spacing, intensity, and other characteristics of the points in this cloud can be analyzed using sophisticated 3D processing algorithms to extract information about features (Rutzinger et al., 2008).
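As a minimal illustration of point-cloud structure, assuming an (N, 3) array of (X, Y, Z) coordinates in meters, a few common summary metrics might be computed as follows (the plot size and point count are hypothetical):

    import numpy as np

    # Hypothetical cloud: 10,000 points over a 100 m x 100 m plot, heights to 40 m.
    points = np.random.rand(10000, 3) * np.array([100.0, 100.0, 40.0])

    z = points[:, 2]
    mean_spacing = np.sqrt((100.0 * 100.0) / len(points))  # average horizontal point spacing
    p50, p95 = np.percentile(z, [50, 95])                  # height percentiles of the cloud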
Further discussion of the acquisition, visualization, and analysis of elevation
data, including DEMs and point clouds, can be found in Chapters 3 and 6, under
the discussion of photogrammetry, interferometric radar, and lidar systems.

1.6 REFERENCE DATA

As we have indicated in the previous discussion, rarely, if ever, is remote sensing employed without the use of some form of reference data. The acquisition of reference data involves collecting measurements or observations about the objects, areas, or phenomena that are being sensed remotely. These data can take on any of a number of different forms and may be derived from a number of sources. For example, the data needed for a particular analysis might be derived from a soil survey map, a water quality laboratory report, or an aerial photograph. They may also stem from a “field check” on the identity, extent, and condition of agricultural crops, land uses, tree species, or water pollution problems. Reference data may also involve field measurements of temperature and other physical and/or chemical properties of various features. The geographic positions at which such field measurements are made are often noted on a map base to facilitate their location in a corresponding remote sensing image. Usually, GPS receivers are used to determine the precise geographic position of field observations and measurements (as described in Section 1.7).
Reference data are often referred to by the term ground truth. This term is not meant literally, because many forms of reference data are not collected on the ground and can only approximate the truth of actual ground conditions. For example, “ground” truth may be collected in the air, in the form of detailed aerial photographs used as reference data when analyzing less detailed high altitude or satellite imagery. Similarly, the “ground” truth will actually be “water” truth if we are studying water features. In spite of these inaccuracies, ground truth is a widely used term for reference data.
Reference data might be used to serve any or all of the following purposes:
1. To aid in the analysis and interpretation of remotely sensed data.
2. To calibrate a sensor.
3. To verify information extracted from remote sensing data.
Hence, reference data must be collected in accordance with the principles of
statistical sampling design appropriate to the particular application.
Reference data can be very expensive and time consuming to collect properly. They can consist of either time-critical and/or time-stable measurements. Time-critical measurements are those made in cases where ground conditions change rapidly with time, such as in the analysis of vegetation condition or water pollution events. Time-stable measurements are involved when the materials under observation do not change appreciably with time. For example, geologic applications often entail field observations that can be conducted at any time and that would not change appreciably from mission to mission.
One form of reference data collection is the ground-based measurement of the reflectance and/or emittance of surface materials to determine their spectral response patterns. This might be done in the laboratory or in the field using the principles of spectroscopy. Spectroscopic measurement procedures can involve the use of a variety of instruments. Often, a spectroradiometer is used in such measurement procedures. This device measures, as a function of wavelength, the energy coming from an object within its view. It is used primarily to prepare spectral reflectance curves for various objects.
In laboratory spectroscopy, artificial sources of energy might be used to illuminate objects under study. In the laboratory, other field parameters, such as the viewing geometry between object and sensor, are also simulated. More often, therefore, in situ field measurements are preferred, because the many variables of the natural environment that influence remote sensor data are difficult, if not impossible, to duplicate in the laboratory.
In the acquisition of field measurements, spectroradiometers may be operated in a number of modes, ranging from handheld to helicopter or aircraft mounted. Figures 1.10 and 1.12, in Section 1.4 of this chapter, both contain examples of measurements acquired using a handheld field spectroradiometer. Figure 1.22 illustrates a highly portable instrument that is well suited for handheld operation. Through a fiber-optic input, this particular system acquires a continuous spectrum by recording data in over 1000 narrow bands simultaneously (over the range 0.35 to 2.5 μm). The unit is typically transported in a backpack carrier with provision for integrating the spectrometer with a notebook computer. The computer provides for flexibility in data acquisition, display, and storage.
Figure 1.22 ASD, Inc. FieldSpec Spectroradiometer: (a) the instrument; (b) instrument shown in field operation. (Courtesy ASD, Inc.)

For example, reflectance spectra can be displayed in real time, as can computed reflectance values within the wavelength bands of various satellite systems. In-field calculation of band ratios and other computed values is also possible. One such calculation might be the normalized difference vegetation index (NDVI), which relates the near-IR and visible reflectance of earth surface features (Chapter 7). Another option is matching measured spectra to a library of previously measured samples. The overall system is compatible with a number of post-processing software packages and also affords Ethernet, wireless, and GPS compatibility.
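For instance, NDVI could be computed from a measured reflectance spectrum by averaging the narrow bands that fall within red and near-IR windows. The window limits below are illustrative assumptions, and the formula itself is covered in Chapter 7:

    import numpy as np

    def ndvi(wavelengths_um, reflectance, red=(0.63, 0.69), nir=(0.76, 0.90)):
        """NDVI = (NIR - red) / (NIR + red), using the mean reflectance of
        the narrow spectroradiometer bands within each wavelength window."""
        wl, refl = np.asarray(wavelengths_um), np.asarray(reflectance)
        red_mean = refl[(wl >= red[0]) & (wl <= red[1])].mean()
        nir_mean = refl[(wl >= nir[0]) & (wl <= nir[1])].mean()
        return (nir_mean - red_mean) / (nir_mean + red_mean)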
Figure 1.23 shows a versatile all-terrain instrument platform designed primarily for collecting spectral measurements in agricultural cropland environments. The system provides the high clearance necessary for making measurements over mature row crops, and the tracked wheels allow access to difficult landscape positions. Several measurement instruments can be suspended from the system’s telescopic boom. Typically, these include a spectroradiometer, a remotely operated digital camera system, and a GPS receiver (Section 1.7). While designed primarily for data collection in agricultural fields, the long reach of the boom makes this device a useful tool for collecting spectral data over such targets as emergent vegetation found in wetlands as well as small trees and shrubs.
Figure 1.23 All-terrain instrument platform designed for collecting spectral measurements in agricultural cropland environments. (Courtesy of the University of Nebraska-Lincoln Center for Advanced Land Management Information Technologies.)

Using a spectroradiometer to obtain spectral reflectance measurements is normally a three-step process. First, the instrument is aimed at a calibration panel of known, stable reflectance. The purpose of this step is to quantify the
incoming radiation, or irradiance, incident upon the measurement site. Next, the instrument is suspended over the target of interest and the radiation reflected by the object is measured. Finally, the spectral reflectance of the object is computed by ratioing the reflected energy measurement in each band of observation to the incoming radiation measured in each band. Normally, the term reflectance factor is used to refer to the result of such computations. A reflectance factor is defined formally as the ratio of the radiant flux actually reflected by a sample surface to that which would be reflected into the same sensor geometry by an ideal, perfectly diffuse (Lambertian) surface irradiated in exactly the same way as the sample.
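In code form, the three-step procedure reduces to a band-by-band ratio, as in the sketch below; the panel reflectance value is a placeholder for the calibration data supplied with an actual reference panel:

    import numpy as np

    def reflectance_factor(target_signal, panel_signal, panel_reflectance=0.99):
        """Ratio the energy reflected by the target to that reflected by the
        near-Lambertian reference panel, per spectral band, scaled by the
        panel's own known reflectance."""
        target = np.asarray(target_signal, float)  # step 2: target measurement
        panel = np.asarray(panel_signal, float)    # step 1: panel measurement
        return (target / panel) * panel_reflectance  # step 3: compute the ratio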
Another term frequently used to describe the above type of measurement is bidirectional reflectance factor: one direction being associated with the sample viewing angle (usually 0° from normal) and the other direction being that of the sun’s illumination (defined by the solar zenith and azimuth angles; see Section 1.4). In the bidirectional reflectance measurement procedure described above, the sample and the reflectance standard are measured sequentially. Other approaches exist in which the incident spectral irradiance and reflected spectral radiance are measured simultaneously.
1.7 THE GLOBAL POSITIONING SYSTEM AND OTHER GLOBAL NAVIGATION SATELLITE SYSTEMS

As mentioned previously, the location of field-observed reference data is usually determined using a global navigation satellite system (GNSS). GNSS technology is also used extensively in such other remote sensing activities as navigating aircraft during sensor data acquisition and geometrically correcting and referencing raw image data. The first such system, the U.S. Global Positioning System (GPS), was originally developed for military purposes but has subsequently become ubiquitous in many civil applications worldwide, from vehicle navigation to surveying and location-based services on cellular phones and other personal electronic devices. Other GNSS “constellations” have been or are being developed as well, a trend that will greatly increase the accuracy and reliability of GNSS for end users over the next decade.
The U.S. Global Positioning System includes at least 24 satellites rotating around the earth in precisely known orbits, with subgroups of four or more satellites operating in each of six different orbit planes. Typically, these satellites revolve around the earth approximately once every 12 hours, at an altitude of approximately 20,200 km. With their positions in space precisely known at all times, the satellites transmit time-encoded radio signals that are recorded by ground-based receivers and can be used to aid in positioning and navigation. The nearly circular orbital planes of the satellites are inclined about 55° from the equator and are spaced every 60° in longitude. This means that, in the absence of obstructions from the terrain or nearby buildings, an observer at any point on the earth’s surface can receive the signal from at least four GPS satellites at any given time (day or night).

International Status of GNSS Development

Currently, the U.S. Global Positioning System has only one operational counterpart, the Russian GLONASS system. The full GLONASS constellation consists of 24 operational satellites, a number that was reached in October 2011. In addition, a fully comprehensive European GNSS constellation, Galileo, is scheduled for completion by 2020 and will include 30 satellites. The data signals provided by Galileo will be compatible with those from the U.S. GPS satellites, resulting in a greatly increased range of options for GNSS receivers and significantly improved accuracy. Finally, China has announced plans for the development of its own Compass GNSS constellation, to include 30 satellites in operational use by 2020. The future for these and similar systems is an extremely bright and rapidly progressing one.

GNSS Data Processing and Corrections

The means by which GNSS signals are used to determine ground positions is called satellite ranging. Conceptually, the process simply involves measuring the time required for signals transmitted by at least four satellites to reach the ground receiver. Knowing that the signals travel at the speed of light (3 × 10⁸ m/sec in a vacuum), the distance from each satellite to the receiver can be computed from the signal travel time, and the receiver’s position can then be determined through a form of three-dimensional trilateration. In principle, the signals from only four satellites are needed to identify the receiver’s location, but in practice it is usually desirable to obtain measurements from as many satellites as practical.
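Conceptually, the receiver solves for four unknowns, its three position coordinates plus its clock bias, from the measured pseudoranges. The sketch below is a simplification that ignores atmospheric, ephemeris, and multipath errors, and assumes the satellite coordinates and pseudoranges are given; it solves the system by iterative least squares:

    import numpy as np

    C = 3.0e8  # speed of light, m/sec

    def solve_position(sat_positions, pseudoranges, iterations=10):
        """sat_positions: (N, 3) satellite coordinates in meters (N >= 4);
        pseudoranges: (N,) measured distances, each equal to the true
        range plus a common receiver clock-bias term."""
        x = np.zeros(4)  # unknowns: X, Y, Z, and c * (clock bias)
        for _ in range(iterations):
            diffs = x[:3] - sat_positions
            ranges = np.linalg.norm(diffs, axis=1)
            residuals = pseudoranges - (ranges + x[3])
            # Jacobian: unit line-of-sight vectors, plus a column of ones
            # for the clock-bias term.
            J = np.hstack([diffs / ranges[:, None], np.ones((len(ranges), 1))])
            x += np.linalg.lstsq(J, residuals, rcond=None)[0]
        return x[:3], x[3] / C  # position (m) and receiver clock bias (sec)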
GNSS measurements are potentially subject to numerous sources of error. These include clock bias (caused by imperfect synchronization between the high-precision atomic clocks present on the satellites and the lower-precision clocks used in GNSS receivers), uncertainties in the satellite orbits (known as satellite ephemeris errors), errors due to atmospheric conditions (signal velocity depends on time of day, season, and angular direction through the atmosphere), receiver errors (due to such influences as electrical noise and signal-matching errors), and multipath errors (reflection of a portion of the transmitted signal from objects not in the straight-line path between the satellite and receiver).
Such errors can be compensated for (in great part) using differential GNSS measurement methods. In this approach, simultaneous measurements are made by a stationary base station receiver (located over a point of precisely known position) and one (or more) roving receivers moving from point to point. The positional errors measured at the base station are used to refine the positions measured by the rover(s) at the same instant in time. This can be done either by bringing the data from the base and rover together in a post-processing mode after the field observations are completed or by instantaneously broadcasting the base station corrections to the rovers. The latter approach is termed real-time differential GNSS positioning.
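At its simplest, the post-processing form of differential correction can be sketched as below. Real systems apply per-satellite range corrections rather than shifting final coordinates, so this is a conceptual simplification only:

    import numpy as np

    def differential_correct(rover_measured, base_measured, base_known):
        """Shift each rover fix by the error the base station observed at
        the same instant (known position minus measured position)."""
        correction = np.asarray(base_known, float) - np.asarray(base_measured, float)
        return np.asarray(rover_measured, float) + correction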
In recent years, there have been efforts to improve the accuracy of GNSS positioning through the development of regional networks of high-precision base stations, generally referred to as satellite-based augmentation systems (SBAS). The data from these stations are used to derive spatially explicit correction factors that are then broadcast in real time, allowing advanced receiver units to determine their positions with a higher degree of accuracy. One such SBAS network, the Wide Area Augmentation System (WAAS), consists of approximately 25 ground reference stations distributed across the United States that continuously monitor GPS satellite transmissions. Two main stations, located on the U.S. east and west coasts, collect the data from the reference stations and create a composited correction message that is location specific. This message is then broadcast through one of two geostationary satellites, satellites occupying a fixed position over the equator. Any WAAS-enabled GPS unit can receive these correction signals. The GPS receiver then determines which correction data are appropriate at the current location.
The WAAS signal reception is ideal for open land, aircraft, and marine applications, but the position of the relay satellites over the equator makes it difficult to receive the signals at high latitudes or when features such as trees and mountains obstruct the view of the horizon. In such situations, GPS positions can sometimes actually contain more error with WAAS correction than without. However, in unobstructed operating conditions where a strong WAAS signal is available, positions are normally accurate to within 3 m or better.
Paralleling the deployment of the WAAS system in North America are the
Japanese Multi-functional Satellite Augmentation System (MSAS) in Asia, the
European Geostationary Navigation Overlay Service (EGNOS) in Europe, and pro-
posed future SBAS networks such as India’s GPS Aided Geo-Augmented Naviga-
tion (GAGAN) system. Like WAAS, these SBAS systems use geostationary
satellites to transmit data for real-time differential correction.
In addition to the regional SBAS real-time correction systems such as WAAS,
some nations have developed additional networks of base stations that can be
used for post-processing GNSS data for differential correction (i.e., high-accuracy
corrections made after data collection, rather than in real time). One such system
is the U.S. National Geodetic Survey’s Continuously Operating Reference Stations
(CORS) network. More than 1800 sites in the cooperative CORS network provide
GNSS reference data that can be accessed via the Internet and used in post-
processing for differential correction.
With the development of new satellite constellations, and new resources for
real-time and post-processed differential correction, GNSS-based location ser-
vices are expected to become even more widespread in industry, resource man-
agement, and consumer technology applications in the coming years.

1.8 CHARACTERISTICS OF REMOTE SENSING SYSTEMS

Having introduced some basic concepts, we now have the elements necessary to characterize a remote sensing system. In so doing, we can begin to appreciate some of the problems encountered in the design and application of the various sensing systems examined in subsequent chapters. In particular, the design and operation of every real-world sensing system represents a series of compromises, often in response to the limitations imposed by physics and by the current state of technological development. When we consider the process from start to finish, users of remote sensing systems need to keep in mind the following factors:
1. The energy source. All passive remote sensing systems rely on energy that originates from sources other than the sensor itself, typically in the form of either reflected radiation from the sun or emitted radiation from earth surface features. As already discussed, the spectral distribution of reflected sunlight and self-emitted energy is far from uniform. Solar energy levels obviously vary with respect to time and location, and different earth surface materials emit energy with varying degrees of efficiency. While we have some control over the sources of energy for active systems such as radar and lidar, those sources have their own particular