Basic Principles of Photogrammetry

This document provides an introduction to basic photogrammetry principles: 1. Photogrammetry is the science of obtaining spatial measurements and derived products from photographs. It involves analyzing both hardcopy and digital photographs. 2. The document outlines eight basic photogrammetric activities: determining photograph scale and estimating distances; computing ground areas; quantifying relief displacement; determining object heights from displacement; determining heights and elevations from parallax; determining exterior orientation parameters; producing maps, DEMs, and orthophotos; and planning aerial photographic missions. 3. Both vertical photographs and stereopairs are used, with stereopairs allowing parallax measurements for more detailed products like DEMs and orthophotos.


3 BASIC PRINCIPLES
OF PHOTOGRAMMETRY

3.1 INTRODUCTION

Photogrammetry is the science and technology of obtaining spatial measurements
and other geometrically reliable derived products from photographs. Historically,
photogrammetric analyses involved the use of hardcopy photographic products
such as paper prints or film transparencies. Today, most photogrammetric proce-
dures involve the use of digital, or softcopy, photographic data products. In certain
cases, these digital products might result from high resolution scanning of hard-
copy photographs (e.g., historical photographs). However, the overwhelming
majority of digital photographs used in modern photogrammetric applications
come directly from digital cameras. In fact, photogrammetric processing of digital
camera data is often accomplished in-flight such that digital orthophotos, digital
elevation models (DEMs), and other GIS data products are available immediately
after, or even during, an aerial photographic mission.
While the physical character of hardcopy and digital photographs is quite dif-
ferent, the basic geometric principles used to analyze them photogrammetrically
are identical. In fact, it is often easier to visualize and understand these principles
in a hardcopy context and then extend them to the softcopy environment. This is

the approach we adopt in this chapter. Hence, our objective in this discussion is
to not only prepare the reader to be able to make basic measurements from hard-
copy photographic images, but also to understand the underlying principles of
modern digital (softcopy) photogrammetry. We stress aerial photogrammetric
techniques and procedures in this discussion, but the same general principles
hold for terrestrial (ground-based) and space-based operations as well.
In this chapter, we introduce only the most basic aspects of the broad subject
of photogrammetry. (More comprehensive and detailed treatment of the subject of
photogrammetry is available in such references as ASPRS, 2004; Mikhail et al.,
2001; and Wolf et al., 2013.) We limit our discussion to the following photogram-
metric activities.

1. Determining the scale of a vertical photograph and estimating horizontal
ground distances from measurements made on a vertical
photograph. The scale of a photograph expresses the mathematical rela-
tionship between a distance measured on the photo and the corresponding
horizontal distance measured in a ground coordinate system. Unlike maps,
which have a single constant scale, aerial photographs have a range of
scales that vary in proportion to the elevation of the terrain involved. Once
the scale of a photograph is known at any particular elevation, ground dis-
tances at that elevation can be readily estimated from corresponding photo
distance measurements.
2. Using area measurements made on a vertical photograph to determine
the equivalent areas in a ground coordinate system. Computing
ground areas from corresponding photo area measurement is simply an
extension of the above concept of scale. The only difference is that whereas
ground distances and photo distances vary linearly, ground areas and photo
areas vary as the square of the scale.
3. Quantifying the effects of relief displacement on vertical aerial pho-
tographs. Again unlike maps, aerial photographs in general do not show
the true plan or top view of objects. The images of the tops of objects
appearing in a photograph are displaced from the images of their bases.
This is known as relief displacement and causes any object standing above
the terrain to “lean away” from the principal point of a photograph radially.
Relief displacement, like scale variation, precludes the use of aerial photo-
graphs directly as maps. However, reliable ground measurements and maps
can be obtained from vertical photographs if photo measurements are ana-
lyzed with due regard for scale variations and relief displacement.
4. Determining object heights from relief displacement measure-
ments. While relief displacement is usually thought of as an image dis-
tortion that must be dealt with, it can also be used to estimate the heights of
objects appearing on a photograph. As we later illustrate, the magnitude of
relief displacement depends on the flying height, the distance from the
photo principal point to the feature, and the height of the feature. Because
these factors are geometrically related, we can measure an object’s relief
displacement and radial position on a photograph and thereby determine
the height of the object. This technique provides limited accuracy but is
useful in applications where only approximate object heights are needed.
5. Determining object heights and terrain elevations by measuring image
parallax. The previous operations are performed using vertical photos
individually. Many photogrammetric operations involve analyzing images
in the area of overlap of a stereopair. Within this area, we have two views of
the same terrain, taken from different vantage points. Between these two
views, the relative positions of features lying closer to the camera (at higher
elevation) will change more from photo to photo than the positions of fea-
tures farther from the camera (at lower elevation). This change in relative
position is called parallax. It can be measured on overlapping photographs
and used to determine object heights and terrain elevations.
6. Determining the elements of exterior orientation of aerial photo-
graphs. In order to use aerial photographs for photogrammetric map-
ping purposes, it is necessary to determine six independent parameters
that describe the position and angular orientation of each photograph at
the instant of its exposure relative to the origin and orientation of the
ground coordinate system used for mapping. These six variables are
called the elements of exterior orientation. Three of these are the 3D posi-
tion (X, Y, Z) of the center of the photocoordinate axis system at the
instant of exposure. The remaining three are the 3D rotation angles (ω, φ, κ)
related to the amount and direction of tilt in each photo at the instant of
exposure. These rotations are a function of the orientation of the platform
and camera mount when the photograph is taken. For example, the wings
of a fixed‐wing aircraft might be tilted up or down, relative to horizontal.
Simultaneously, the camera might be tilted up or down toward the front
or rear of the aircraft. At the same time, the aircraft might be rotated into
a headwind in order to maintain a constant heading.
We will see that there are two major approaches that can be taken to
determine the elements of exterior orientation. The first involves the use
of ground control (photo‐identifiable points of known ground coordinates)
together with a mathematical procedure called aerotriangulation. The sec-
ond approach entails direct georeferencing, which involves the integration
of GPS and IMU observations to determine the position and angular
orientation of each photograph (Sections 1.11 and 2.6). We treat both of
these approaches at a conceptual level, with a minimum of mathematical
detail.
7. Production of maps, DEMs, and orthophotos. “Mapping” from aerial
photographs can take on many forms. Historically, topographic maps
were produced using hardcopy stereopairs placed in a device called a ste-
reoplotter. With this type of instrument the photographs are mounted in
special projectors that can be mutually oriented to precisely correspond
to the angular tilts (ω, φ, κ) present when the photographs were taken.
Each of the projectors can also be translated in x, y, and z such that a
reduced‐size model is created that exactly replicates the exterior orienta-
tion of each of the photographs comprising the stereopair. (The scale of
the resulting stereomodel is determined by the “air base” distance
between the projectors chosen by the instrument operator.) When
viewed stereoscopically, the model can be used to prepare an analog or
digital planimetric map having no tilt or relief distortions. In addition,
topographic contours can be integrated with the planimetric data and
the height of individual features appearing in the model can be deter-
mined.
Whereas a stereoplotter is designed to transfer map information,
without distortions, from stereo photographs, a similar device can be
used to transfer image information, with distortions removed. The resul-
ting undistorted image is called an orthophotograph (or orthophoto).
Orthophotos combine the geometric utility of a map with the extra “real-
world image” information provided by a photograph. The process of
creating an orthophoto depends on the existence of a reliable DEM for
the area being mapped. The DEMs are usually prepared photo-
grammetrically as well. In fact, photogrammetric workstations generally
provide the integrated functionality for such tasks as generating DEMs,
digital orthophotos, topographic maps, perspective views, and “fly-
throughs,” as well as the extraction of spatially referenced GIS data in
two or three dimensions.
8. Preparing a flight plan to acquire vertical aerial photography. When-
ever new photographic coverage of a project area is to be obtained, a photo-
graphic flight mission must be planned. As we will discuss, mission planning
software highly automates this process. However, most readers of this
book are, or will become, consumers of image data rather than providers.
Such individuals likely will not have direct access to flight planning software
and will appropriately rely on professional data suppliers to jointly design a
mission to meet their needs. There are also cases where cost and logistics
might dictate that the data consumer and the data provider are one and the
same!
Given the above, it is important for image analysts to understand at least
the basic rudiments of mission planning in order to facilitate such activ-
ities as preliminary estimation of data volume (as it influences both data
collection and analysis), choosing among alternative mission parameters,
and ensuring a reasonable fit between the information needs of a given
project and the data collected to meet those needs. Decisions have to be made
relative to such mission elements as image scale or ground sample distance
(GSD), camera format size and focal length, and desired image overlap.
The analyst can then determine such geometric factors as the appropriate fly-
ing height, the distance between image centers, the direction and spacing of
flight lines, and the total number of images required to cover the project area.
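As a rough sketch of how these mission factors interact, consider the following simplified calculation for a square-format frame camera over flat terrain. The function and its parameter names are illustrative only, not drawn from any flight-planning package:

```python
import math

def mission_sketch(scale_den, focal_m, format_m, endlap, sidelap,
                   block_len_m, block_width_m, terrain_elev_m):
    """Rough mission geometry for a square-format frame camera (a sketch)."""
    # Eq. 3.4 rearranged: flying height above datum H = f * scale_den + h
    flying_height = focal_m * scale_den + terrain_elev_m
    ground_side = format_m * scale_den            # ground covered by one photo
    air_base = ground_side * (1 - endlap)         # spacing between exposures
    line_spacing = ground_side * (1 - sidelap)    # spacing between flight lines
    photos_per_line = math.ceil(block_len_m / air_base) + 1
    num_lines = math.ceil(block_width_m / line_spacing)
    return flying_height, air_base, line_spacing, photos_per_line * num_lines

# 1:20,000 photos, 152-mm lens, 230-mm format, 60% endlap, 30% sidelap,
# over a 10 km x 5 km block of terrain at 300 m elevation:
H, B, W, n = mission_sketch(20_000, 0.152, 0.230, 0.60, 0.30,
                            10_000, 5_000, 300)
# H = 3340 m above datum, B = 1840 m, W = 3220 m, n = 14 photos
```

Production flight planning adds refinements that this sketch omits, such as extra exposures beyond the block boundaries and terrain-dependent flying heights.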

Each of these photogrammetric operations is covered in separate sections in
this chapter. We first discuss some general geometric concepts that are basic to
these techniques.

3.2 BASIC GEOMETRIC CHARACTERISTICS OF AERIAL PHOTOGRAPHS

Geometric Types of Aerial Photographs

Aerial photographs are generally classified as either vertical or oblique. Vertical
photographs are those made with the camera axis directed as vertically as possible.
However, a “truly” vertical aerial photograph is rarely obtainable because of the
previously described unavoidable angular rotations, or tilts, caused by the angular
attitude of the aircraft at the instant of exposure. These unavoidable tilts cause
slight (1° to 3°) unintentional inclination of the camera optical axis, resulting in the
acquisition of tilted photographs.
Virtually all photographs are tilted. When tilted unintentionally and slightly, til-
ted photographs can be analyzed using the simplified models and methods appro-
priate to the analysis of truly vertical photographs without the introduction
of serious error. This is done in many practical applications where approximate
measurements suffice (e.g., determining the ground dimensions of a flat agricultural
field based on photo measurements made with a scale or digitizing tablet). How-
ever, as we will discuss, precise digital photogrammetric procedures employ meth-
ods and models that rigorously account for even very small angles of tilt with no
loss of accuracy.
When aerial photographs are taken with an intentional inclination of the
camera axis, oblique photographs result. High oblique photographs include an
image of the horizon, and low oblique photographs do not. In this chapter, we
emphasize the geometric aspects of acquiring and analyzing vertical aerial
photographs given their extensive historical and continuing use for large-area
photogrammetric mapping. However, the use of oblique aerial photographs has
increased rapidly in such applications as urban mapping and disaster assessment.
With their “side-view” character, oblique photographs afford a more natural
perspective in comparison to the top view perspective of vertical aerial photo-
graphs. This can greatly facilitate the image interpretation process (particularly
for those individuals having limited training or experience in image inter-
pretation). Various examples of oblique aerial photographs are interspersed
throughout this book.
Photogrammetric measurements can also be made from oblique aerial photo-
graphs if they are acquired with this purpose in mind. This is often accomplished
using multiple cameras that are shuttered simultaneously. Figure 3.1 illustrates
such a configuration, the IGI Penta-DigiCAM system. This five-camera system
employs one nadir-pointing camera, two cameras viewing obliquely in opposite
Figure 3.1 IGI Penta-DigiCAM system: (a) early version of system
with camera back removed to illustrate the angular orientation of the
nadir and oblique cameras (lower oblique camera partially obscured
by IMU mounted to camera at center-bottom); (b) Maltese Cross
ground coverage pattern resulting from the system; (c) later version of
system installed in a gyro-stabilized mount with data storage units
and GNSS/IMU system mounted on top. (Courtesy of IGI GmbH.)

(c)
Figure 3.1 (Continued)

directions in the cross-track direction, and two viewing obliquely in opposite
directions along-track. (We earlier illustrated a single DigiCam system in Section
2.6.) Figure 3.1 also depicts the “Maltese Cross” ground coverage pattern that
results with each simultaneous shutter release of the Penta-DigiCAM. Successive,
overlapping composites of this nature can be acquired to depict the top, front,
back, and sides of all ground features in a study area. In addition, the oblique
orientation angles from vertical can be varied to meet the needs of various
applications of the photography.

Taking Vertical Aerial Photographs

Most vertical aerial photographs are taken with frame cameras along flight lines,
or flight strips. The line traced on the ground directly beneath the aircraft during
acquisition of photography is called the nadir line. This line connects the image cen-
ters of the vertical photographs. Figure 3.2 illustrates the typical character of the
photographic coverage along a flight line. Successive photographs are generally
taken with some degree of endlap. Not only does this lapping ensure total coverage
along a flight line, but an endlap of at least 50% is essential for total stereoscopic
coverage of a project area. Stereoscopic coverage consists of adjacent pairs of

Figure 3.2 Photographic coverage along a flight strip: (a) conditions during exposure; (b) resulting
photography.

overlapping vertical photographs called stereopairs. Stereopairs provide two
different perspectives of the ground area in their region of endlap. When images forming
a stereopair are viewed through a stereoscope, each eye psychologically occupies
the vantage point from which the respective image of the stereopair was taken in
flight. The result is the perception of a three-dimensional stereomodel. As pointed
out in Chapter 1, most applications of aerial photographic interpretation entail the
use of stereoscopic coverage and stereoviewing.
Successive photographs along a flight strip are taken at intervals that are
controlled by the camera intervalometer or software-based sensor control system.
The area included in the overlap of successive photographs is called the stereo-
scopic overlap area. Typically, successive photographs contain 55 to 65% overlap
to ensure at least 50% endlap over varying terrain, in spite of unintentional tilt.
Figure 3.3 illustrates the ground coverage relationship of successive photographs
forming a stereopair having approximately a 60% stereoscopic overlap area.
The ground distance between the photo centers at the times of exposure is
called the air base. The ratio between the air base and the flying height above
ground determines the vertical exaggeration perceived by photo interpreters. The
larger the base–height ratio, the greater the vertical exaggeration.
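Under the simplified geometry above, the base–height ratio follows directly from the format size, focal length, and endlap. A brief sketch, assuming a square film format and flat terrain (the function name is ours, purely illustrative):

```python
def base_height_ratio(format_m, scale_den, endlap, focal_m):
    """Base-height ratio B/H' for a square-format frame camera.

    Air base B = ground coverage per photo x (1 - endlap); flying height
    above ground H' = focal length x scale denominator (Eq. 3.4, h = 0).
    """
    ground_side = format_m * scale_den
    air_base = ground_side * (1 - endlap)
    h_prime = focal_m * scale_den
    return air_base / h_prime

# 230-mm format, 152-mm lens, 60% endlap (the scale denominator cancels):
r = base_height_ratio(0.230, 20_000, 0.60, 0.152)   # ~0.61
```

Note that the scale denominator cancels out: for a given camera, the base–height ratio, and hence the perceived vertical exaggeration, is governed by the format, focal length, and endlap.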

Figure 3.3 Acquisition of successive photographs yielding a stereopair. (Courtesy Leica Geosystems.)

Figure 3.4 shows Large Format Camera photographs of Mt. Washington and
vicinity, New Hampshire. These stereopairs illustrate the effect of varying the per-
centage of photo overlap and thus the base–height ratio of the photographs. These
photographs were taken from the Space Shuttle at an orbital altitude of 364 km.
The stereopair in (a) has a base–height ratio of 0.30. The stereopair in (b) has a
base–height ratio of 1.2 and shows much greater apparent relief (greater vertical
exaggeration) than (a).
This greater apparent relief often aids in visual image interpretation. Also, as
we will discuss later, many photogrammetric mapping operations depend upon
accurate determination of the position at which rays from two or more photo-
graphs intersect in space. Rays associated with larger base–height ratios intersect
at larger (closer to being perpendicular) angles than do those associated with the
smaller (closer to being parallel) angles associated with smaller base–height
ratios. Thus larger base–height ratios result in more accurate determination of
ray intersection positions than do smaller base–height ratios.
Most project sites are large enough for multiple-flight-line passes to be made
over the area to obtain complete stereoscopic coverage. Figure 3.5 illustrates how
adjacent strips are photographed. On successive flights over the area, adjacent strips
have a sidelap of approximately 30%. Multiple strips comprise what is called a block
of photographs.

Figure 3.4 Large Format Camera stereopairs, Mt. Washington and vicinity, New Hampshire;
scale 1:800,000 (1.5 times enlargement from original image scale): (a) 0.30 base–height ratio;
(b) 1.2 base–height ratio. (Courtesy NASA and ITEK Optical Systems.)

As discussed in Section 3.1, planning an aerial photographic mission is
usually accomplished with the aid of flight-planning software. Such software is
guided by user input of such factors as the area to be covered by the mission; the
required photo scale or GSD, overlap, and endlap; camera-specific parameters
such as format size and focal length; and the ground speed of the aircraft to be
used. A DEM and other background maps of the area to be covered are often inte-
grated into such systems as well, providing a 2D or 3D planning environment.
Flight-planning software is also usually closely coupled to, or integrated with, the
flight navigation and guidance software, display systems, and sensor control and
monitoring systems used during image acquisition. In this manner, in-flight dis-
plays visualize the approach to the mission area, the individual flight lines and
photo centers, and the turns at the ends of flight lines. If a portion of a flight line
(b)

Figure 3.4 (Continued)

is missed (e.g., due to cloud cover), guidance is provided back to the area to be
reflown. Such robust flight management and control systems provide a high
degree of automation to the navigation and guidance of the aircraft and the
operation of the camera during a typical mission.

Geometric Elements of a Vertical Photograph

The basic geometric elements of a hardcopy vertical aerial photograph taken with a
single-lens frame camera are depicted in Figure 3.6. Light rays from terrain objects
are imaged in the plane of the film negative after intersecting at the camera lens
exposure station, L. The negative is located behind the lens at a distance equal to
the lens focal length, f. Assuming the size of a paper print positive (or film positive)
is equal to that of the negative, positive image positions can be depicted dia-
grammatically in front of the lens in a plane located at a distance f. This rendition is
Figure 3.5 Adjacent flight lines over a project area.

appropriate in that most photo positives used for measurement purposes are con-
tact printed, resulting in the geometric relationships shown.
The x and y coordinate positions of image points are referenced with respect
to axes formed by straight lines joining the opposite fiducial marks (see Figure
2.24) recorded on the positive. The x axis is arbitrarily assigned to the fiducial
axis most nearly coincident with the line of flight and is taken as positive in the
forward direction of flight. The positive y axis is located 90° counterclockwise
from the positive x axis. Because of the precision with which the fiducial marks
and the lens are placed in a metric camera, the photocoordinate origin, o, can
be assumed to coincide exactly with the principal point, the intersection of the
lens optical axis and the film plane. The point where the prolongation of the
optical axis of the camera intersects the terrain is referred to as the ground prin-
cipal point, O. Images for terrain points A, B, C, D, and E appear geometrically
reversed on the negative at a′, b′, c′, d′, and e′ and in proper geometric
relationship on the positive at a, b, c, d, and e. (Throughout this chapter we refer to
points on the image with lowercase letters and corresponding points on the ter-
rain with uppercase letters.)
The xy photocoordinates of a point are the perpendicular distances from the xy
coordinate axes. Points to the right of the y axis have positive x coordinates and
points to the left have negative x coordinates. Similarly, points above the x axis have
positive y coordinates and those below have negative y coordinates.

Figure 3.6 Basic geometric elements of a vertical photograph.

Photocoordinate Measurement

Measurements of photocoordinates may be obtained using any one of many
measurement devices. These devices vary in their accuracy, cost, and availability. For
rudimentary photogrammetric problems—where low orders of measurement accu-
racy are acceptable—a triangular engineer’s scale or metric scale may be used. When
using these scales, measurement accuracy is generally improved by taking the aver-
age of several repeated measurements. Measurements are also generally more accu-
rate when made with the aid of a magnifying lens.
In a softcopy environment, photocoordinates are measured using a raster image
display with a cursor to collect “raw” image coordinates in terms of row and col-
umn values within the image file. The relationship between the row and column
coordinate system and the camera’s fiducial axis coordinate system is determined
through the development of a mathematical coordinate transformation between the
two systems. This process requires that some points have their coordinates known
in both systems. The fiducial marks are used for this purpose in that their positions
in the focal plane are determined during the calibration of the camera, and they can
be readily measured in the row and column coordinate system. (Appendix B con-
tains a description of the mathematical form of the affine coordinate transformation,
which is often used to interrelate the fiducial and row and column coordinate
systems.)
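As a concrete sketch of such a transformation, the six-parameter affine model can be fitted by least squares to the fiducial marks' calibrated (x, y) positions and their measured row/column positions. The numeric values below are invented for illustration (a 20-micrometer pixel with the origin at the image center), not taken from any real calibration report:

```python
import numpy as np

# Calibrated fiducial coordinates (mm) and measured row/column positions
# (illustrative values only).
xy = np.array([[-105.0, -105.0], [105.0, -105.0],
               [105.0,  105.0], [-105.0,  105.0]])
rc = np.array([[10750.0,   250.0], [10750.0, 10750.0],
               [  250.0, 10750.0], [  250.0,   250.0]])

# Affine model: x = a0 + a1*row + a2*col,  y = b0 + b1*row + b2*col.
A = np.column_stack([np.ones(len(rc)), rc])      # design matrix
ax, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)
ay, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)

def to_fiducial(row, col):
    """Transform a measured (row, col) into fiducial-axis (x, y), in mm."""
    v = np.array([1.0, row, col])
    return float(v @ ax), float(v @ ay)

# The image center maps to the photocoordinate origin:
# to_fiducial(5500.0, 5500.0) -> (0.0, 0.0)
```

With four fiducials and six unknowns, the least-squares fit also yields residuals that serve as a check on the measurement quality; rigorous workflows use eight fiducials when available.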
Irrespective of what approach is used to measure photocoordinates, these
measurements contain errors of varying sources and magnitudes. These errors
stem from factors such as camera lens distortions, atmospheric refraction, earth
curvature, failure of the fiducial axes to intersect at the principal point, and
shrinkage or expansion of the photographic material on which measurements are
made. Sophisticated photogrammetric analyses include corrections for all these
errors. For simple measurements made on paper prints, such corrections are
usually not employed because errors introduced by slight tilt in the photography
will outweigh the effect of the other distortions.

3.3 PHOTOGRAPHIC SCALE

One of the most fundamental and frequently used geometric characteristics of hard-
copy aerial photographs is that of photographic scale. A photograph “scale,” like a
map scale, is an expression that states that one unit (any unit) of distance on a pho-
tograph represents a specific number of units of actual ground distance. Scales may
be expressed as unit equivalents, representative fractions, or ratios. For example, if
1 mm on a photograph represents 25 m on the ground, the scale of the photograph
can be expressed as 1 mm = 25 m (unit equivalents), 1/25,000 (representative
fraction), or 1:25,000 (ratio).
Quite often the terms “large scale” and “small scale” are confused by those not
working with expressions of scale on a routine basis. For example, which photo-
graph would have the “larger” scale—a 1:10,000 scale photo covering several city
blocks or a 1:50,000 photo that covers an entire city? The intuitive answer is often
that the photo covering the larger “area” (the entire city) is the larger scale product.
This is not the case. The larger scale product is the 1:10,000 image because it shows
ground features at a larger, more detailed, size. The 1:50,000 scale photo of the
entire city would render ground features at a much smaller, less detailed size.
Hence, in spite of its larger ground coverage, the 1:50,000 photo would be termed
the smaller scale product.
A convenient way to make scale comparisons is to remember that the
same objects are smaller on a “smaller” scale photograph than on a “larger” scale
photo. Scale comparisons can also be made by comparing the magnitudes of the
representative fractions involved. (That is, 1/50,000 is smaller than 1/10,000.)

The most straightforward method for determining photo scale is to measure the
corresponding photo and ground distances between any two points. This requires
that the points be mutually identifiable on both the photo and a map. The scale S is
then computed as the ratio of the photo distance d to the ground distance D:

    S = photo scale = photo distance / ground distance = d/D     (3.1)

EXAMPLE 3.1

Assume that two road intersections shown on a photograph can be located on a 1:25,000
scale topographic map. The measured distance between the intersections is 47.2 mm on the
map and 94.3 mm on the photograph. (a) What is the scale of the photograph? (b) At that
scale, what is the length of a fence line that measures 42.9 mm on the photograph?
Solution
(a) The ground distance between the intersections is determined from the map scale as

    0.0472 m × 25,000 = 1180 m

By direct ratio, the photo scale is

    S = 0.0943 m / 1180 m = 1/12,513    or approximately 1:12,500

(Note that because only three significant, or meaningful, figures were present in the
original measurements, only three significant figures are indicated in the final result.)
(b) The ground length of the 42.9-mm fence line is

    D = d/S = 0.0429 m × 12,500 = 536.25 m    or 536 m
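The arithmetic of Example 3.1 can be captured in a small helper function. This is a sketch with names of our own choosing, not taken from any photogrammetric library:

```python
def scale_from_map(photo_dist_mm, map_dist_mm, map_scale_den):
    """Photo scale denominator via Eq. 3.1, using a map of known scale."""
    ground_m = (map_dist_mm / 1000.0) * map_scale_den  # map -> ground distance
    return ground_m / (photo_dist_mm / 1000.0)         # D / d

den = scale_from_map(94.3, 47.2, 25_000)   # ~12,513, i.e. about 1:12,500
fence_m = (42.9 / 1000.0) * 12_500         # ground length: 536.25 -> ~536 m
```

As in the worked example, the result should be rounded to the three significant figures supported by the measurements.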

For a vertical photograph taken over flat terrain, scale is a function of the
focal length f of the camera used to acquire the image and the flying height above
the ground, H′, from which the image was taken. In general,

    Scale = camera focal length / flying height above terrain = f/H′     (3.2)
Figure 3.7 illustrates how we arrive at Eq. 3.2. Shown in this figure is the side view
of a vertical photograph taken over flat terrain. Exposure station L is at an aircraft
Figure 3.7 Scale of a vertical photograph taken over flat terrain.

flying height H above some datum, or arbitrary base elevation. The datum most
frequently used is mean sea level. If flying height H and the elevation of the terrain h
are known, we can determine H′ by subtraction (H′ = H − h). If we now consider
terrain points A, O, and B, they are imaged at points a′, o′, and b′ on the negative film
and at a, o, and b on the positive print. We can derive an expression for photo scale
by observing similar triangles Lao and LAO, and the corresponding photo (ao) and
ground (AO) distances. That is,

    S = ao/AO = f/H′     (3.3)
Equation 3.3 is identical to our scale expression of Eq. 3.2. Yet another way of
expressing these equations is

    S = f/(H − h)     (3.4)

Equation 3.4 is the most commonly used form of the scale equation.

EXAMPLE 3.2

A camera equipped with a 152-mm-focal-length lens is used to take a vertical
photograph from a flying height of 2780 m above mean sea level. If the terrain is flat
and located at an elevation of 500 m, what is the scale of the photograph?
Solution

    Scale = f/(H − h) = 0.152 m / (2780 m − 500 m) = 1/15,000    or 1:15,000

The most important principle expressed by Eq. 3.4 is that photo scale is a func-
tion of terrain elevation h. Because of the level terrain, the photograph depicted in
Figure 3.7 has a constant scale. However, photographs taken over terrain of varying
elevation will exhibit a continuous range of scales associated with the variations in ter-
rain elevation. Likewise, tilted and oblique photographs have nonuniform scales.

EXAMPLE 3.3

Assume that a vertical photograph was taken at a flying height of 5000 m above sea level
using a camera with a 152-mm-focal-length lens. (a) Determine the photo scale at points A
and B, which lie at elevations of 1200 and 1960 m. (b) What ground distance corresponds
to a 20.1-mm photo distance measured at each of these elevations?
Solution

(a) By Eq. 3.4,

SA = f/(H − hA) = 0.152 m/(5000 m − 1200 m) = 1/25,000, or 1:25,000
SB = f/(H − hB) = 0.152 m/(5000 m − 1960 m) = 1/20,000, or 1:20,000

(b) The ground distance corresponding to a 20.1-mm photo distance is

DA = d/SA = 0.0201 m × 25,000 = 502.5 m, or 502 m
DB = d/SB = 0.0201 m × 20,000 = 402 m
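The computations in Examples 3.2 and 3.3 follow directly from Eq. 3.4 and are easy to script. The Python sketch below (function names are illustrative, not from the text) reproduces the Example 3.3 results:

```python
def photo_scale(f_m, flying_height_m, terrain_elev_m):
    """Photo scale from Eq. 3.4: S = f / (H - h)."""
    return f_m / (flying_height_m - terrain_elev_m)

def ground_distance(photo_dist_m, scale):
    """Ground distance from a measured photo distance: D = d / S."""
    return photo_dist_m / scale

# Example 3.3: f = 152 mm, H = 5000 m above sea level
s_a = photo_scale(0.152, 5000, 1200)   # 1/25,000
s_b = photo_scale(0.152, 5000, 1960)   # 1/20,000
d_a = ground_distance(0.0201, s_a)     # 502.5 m
d_b = ground_distance(0.0201, s_b)     # 402.0 m
```

Note that the same photo distance (20.1 mm) maps to different ground distances at the two elevations, which is the scale-variation effect discussed below.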
Often it is convenient to compute an average scale for an entire photograph. This scale is calculated using the average terrain elevation for the area imaged. Consequently, it is exact for distances occurring at the average elevation and is approximate at all other elevations. Average scale may be expressed as

Savg = f/(H − havg)    (3.5)

where havg is the average elevation of the terrain shown in the photograph.
The result of photo scale variation is geometric distortion. All points on a map
are depicted in their true relative horizontal (planimetric) positions, but points on
a photo taken over varying terrain are displaced from their true “map positions.”
This difference results because a map is a scaled orthographic projection of the
ground surface, whereas a vertical photograph yields a perspective projection. The differing nature of these two forms of projection is illustrated in Figure 3.8.

Figure 3.8 Comparative geometry of (a) a map and (b) a vertical aerial photograph. Note differences in size, shape, and location of the two trees.

As shown, a map results from projecting vertical rays from ground points to the map sheet (at a particular scale). A photograph results from projecting converging rays through a common point within the camera lens. Because of the nature of this projection, any variations in terrain elevation will result in scale variation and displaced image positions.
On a map we see a top view of objects in their true relative horizontal positions.
On a photograph, areas of terrain at the higher elevations lie closer to the camera at
the time of exposure and therefore appear larger than corresponding areas lying at
lower elevations. Furthermore, the tops of objects are always displaced from their
bases (Figure 3.8). This distortion is called relief displacement and causes any object standing above the terrain to “lean” radially away from the principal point of a photograph. We treat the subject of relief displacement in Section 3.6.
By now the reader should see that the only circumstance wherein an aerial
photograph can be treated as if it were a map directly is in the case of a vertical
photograph imaging uniformly flat terrain. This is rarely the case in practice, and
the image analyst must always be aware of the potential geometric distortions
introduced by such influences as tilt, scale variation, and relief displacement. Failure to deal with these distortions will often lead, among other things, to a lack of geometric “fit” between image-derived and nonimage data sources in a GIS. However, if these factors are properly addressed photogrammetrically, extremely reliable measurements, maps, and GIS products can be derived from aerial photography.

3.4 GROUND COVERAGE OF AERIAL PHOTOGRAPHS

The ground coverage of a photograph is, among other things, a function of camera format size. For example, an image taken with a camera having a 230 × 230-mm format (on 240-mm film) has about 17.5 times the ground area coverage of an image of equal scale taken with a camera having a 55 × 55-mm format (on 70-mm film) and about 61 times the ground area coverage of an image of equal scale taken with a camera having a 24 × 36-mm format (on 35-mm film). As with photo scale, the ground coverage of photography obtained with any given format is a function of focal length and flying height above ground, H′. For a constant flying height, the
width of the ground area covered by a photo varies inversely with focal length. Con-
sequently, photos taken with shorter focal length lenses have larger areas of cover-
age (and smaller scales) than do those taken with longer focal length lenses. For any
given focal length lens, the width of the ground area covered by a photo varies
directly with flying height above terrain, with image scale varying inversely with fly-
ing height.
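These format and flying-height relationships are easy to verify numerically. A brief sketch (the function name is ours, and a square ground footprint is assumed) checks the area ratios quoted above and the proportionality of footprint width to flying height:

```python
def ground_footprint_m(format_side_m, scale):
    """Side length on the ground covered by one photo side (format / scale)."""
    return format_side_m / scale

# Equal-scale ground-area ratios depend only on the format areas:
a_230 = 0.230 * 0.230            # 230 x 230-mm format
a_55 = 0.055 * 0.055             # 55 x 55-mm format
a_35 = 0.024 * 0.036             # 24 x 36-mm format
print(round(a_230 / a_55, 1))    # ~17.5
print(round(a_230 / a_35))       # ~61

# Doubling flying height doubles the footprint width (and halves the scale):
s1 = 0.1524 / 1000               # f = 152.4 mm at H' = 1000 m
s2 = 0.1524 / 2000               # same lens at H' = 2000 m
print(ground_footprint_m(0.230, s2) / ground_footprint_m(0.230, s1))   # 2.0
```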
The effect that flying height has on ground coverage and image scale is illu-
strated in Figures 3.9a, b, and c. These images were all taken over Chattanooga,
Figure 3.9 (a) Scale 1:210,000 vertical aerial photograph showing Chattanooga, TN. This figure is a 1.75× reduction of an original photograph taken with f = 152.4 mm from 18,300 m flying height. (NASA photograph.) (b) Scale 1:35,000 vertical aerial photograph providing coverage of area outlined in (a). This figure is a 1.75× reduction of an original photograph taken with f = 152.4 mm from 3050 m flying height. (c) Scale 1:10,500 vertical aerial photograph providing coverage of area outlined in (b). This figure is a 1.75× reduction of an original photograph taken with f = 152.4 mm from 915 m flying height. (Courtesy Mapping Services Branch, Tennessee Valley Authority.)
Figure 3.9 (Continued)

Tennessee, with the same camera type equipped with the same focal length lens
but from three different altitudes. Figure 3.9a is a high-altitude, small-scale image
showing virtually the entire Chattanooga metropolitan area. Figure 3.9b is a
lower altitude, larger scale image showing the ground area outlined in Figure 3.9a.
Figure 3.9c is a yet lower altitude, larger scale image of the area outlined in
Figure 3.9b. Note the trade-offs between the ground area covered by an image and
the object detail available in each of the photographs.
Figure 3.9 (Continued)

3.5 AREA MEASUREMENT

The process of measuring areas using aerial photographs can take on many forms.
The accuracy of area measurement is a function of not only the measuring device
used, but also the degree of image scale variation due to relief in the terrain and tilt
in the photography. Although large errors in area determinations can result even with vertical photographs in regions of moderate to high relief, accurate measurements may be made on vertical photos of areas of low relief.
Simple scales may be used to measure the area of simply shaped features. For
example, the area of a rectangular field can be determined by simply measuring its
length and width. Similarly, the area of a circular feature can be computed after
measuring its radius or diameter.

EXAMPLE 3.4

A rectangular agricultural field measures 8.65 cm long and 5.13 cm wide on a vertical photograph having a scale of 1:20,000. Find the area of the field at ground level.

Solution

Ground length = photo length × (1/S) = 0.0865 m × 20,000 = 1730 m
Ground width = photo width × (1/S) = 0.0513 m × 20,000 = 1026 m
Ground area = 1730 m × 1026 m = 1,774,980 m² = 177 ha

The ground area of an irregularly shaped feature is usually determined by measuring the area of the feature on the photograph. The photo area is then converted to a ground area from the following relationship:

Ground area = photo area × (1/S)²

EXAMPLE 3.5

The area of a lake is 52.2 cm² on a 1:7500 vertical photograph. Find the ground area of the lake.

Solution

Ground area = photo area × (1/S)² = 0.00522 m² × 7500² = 293,625 m² = 29.4 ha
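Both of these area conversions reduce to multiplying by the squared scale denominator. A sketch under the same assumptions as Examples 3.4 and 3.5 (the function name is hypothetical):

```python
def ground_area_m2(photo_area_m2, scale_denom):
    """Ground area = photo area x (scale denominator)^2."""
    return photo_area_m2 * scale_denom ** 2

# Example 3.5: a 52.2-cm^2 lake on a 1:7500 photo
lake_m2 = ground_area_m2(52.2e-4, 7500)   # 293,625 m^2
lake_ha = lake_m2 / 10_000                # ~29.4 ha

# Example 3.4: rectangular field, 8.65 cm x 5.13 cm at 1:20,000
field_m2 = ground_area_m2(0.0865 * 0.0513, 20_000)   # 1,774,980 m^2
```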
Numerous methods can be used to measure the area of irregularly shaped fea-
tures on a photograph. One of the simplest techniques employs a transparent grid
overlay consisting of lines forming rectangles or squares of known area. The grid is
placed over the photograph and the area of a ground unit is estimated by counting
grid units that fall within the unit to be measured. Perhaps the most widely used grid
overlay is a dot grid (Figure 3.10). This grid, composed of uniformly spaced dots, is
superimposed over the photo, and the dots falling within the region to be measured
are counted. From knowledge of the dot density of the grid, the photo area of the
region can be computed.

Figure 3.10 Transparent dot grid overlay. (Author-prepared figure.)



EXAMPLE 3.6

A flooded area is covered by 129 dots on a 25-dot/cm² grid on a 1:20,000 vertical aerial photograph. Find the ground area flooded.

Solution

Dot density = (1 cm²/25 dots) × 20,000² = 16,000,000 cm²/dot = 0.16 ha/dot
Ground area = 129 dots × 0.16 ha/dot = 20.6 ha
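The dot-count arithmetic in Example 3.6 generalizes to any grid density and photo scale. A sketch (hypothetical helper name):

```python
def dot_density_ha(dots_per_cm2, scale_denom):
    """Ground area represented by one grid dot, in hectares (1 ha = 10^8 cm^2)."""
    return (1.0 / dots_per_cm2) * scale_denom ** 2 / 1e8

# Example 3.6: 25-dot/cm^2 grid on a 1:20,000 photo, 129 dots counted
per_dot_ha = dot_density_ha(25, 20_000)   # 0.16 ha/dot
flooded_ha = 129 * per_dot_ha             # ~20.6 ha
```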

The dot grid is an inexpensive tool and its use requires little training. When
numerous regions are to be measured, however, the counting procedure becomes
quite tedious. An alternative technique is to use a digitizing tablet. These devices are
interfaced with a computer such that area determination simply involves tracing
around the boundary of the region of interest and the area can be read out directly.
When photographs are available in softcopy format, area measurement often
involves digitizing from a computer monitor using a mouse or other form of cursor
control. The process of digitizing directly from a computer screen is called heads-up
digitizing because the image analyst can view the original image and the digitized
features being compiled simultaneously in one place. The heads-up, or on-screen, approach is not only more comfortable but also affords the ability to zoom in digitally on the features being digitized, making it much easier to detect pointing mistakes and to perform any necessary remeasurement.

3.6 RELIEF DISPLACEMENT OF VERTICAL FEATURES

Characteristics of Relief Displacement

In Figure 3.8, we illustrated the effect of relief displacement on a photograph taken over varied terrain. In essence, an increase in the elevation of a feature causes its position on the photograph to be displaced radially outward from the principal point. Hence, when a vertical feature is photographed, relief displacement causes the top of the feature to lie farther from the photo center than its base. As a result, vertical features appear to lean away from the center of the photograph.
The pictorial effect of relief displacement is illustrated by the aerial photo-
graphs shown in Figure 3.11. These photographs depict the construction site of the
Watts Bar Nuclear Plant adjacent to the Tennessee River. An operating coal-fired
steam plant with its fan-shaped coal stockyard is shown in the upper right of
Figure 3.11a; the nuclear plant is shown in the center. Note particularly the two
Figure 3.11 Vertical photographs of the Watts Bar Nuclear Power Plant Site, near Kingston, TN. In (a) the two plant cooling towers appear near the principal point and exhibit only slight relief displacement. The towers manifest severe relief displacement in (b). (Courtesy Mapping Services Branch, Tennessee Valley Authority.)

large cooling towers adjacent to the plant. In (a) these towers appear nearly in top
view because they are located very close to the principal point of this photograph.
However, the towers manifest some relief displacement because the top tower
appears to lean somewhat toward the upper right and the bottom tower toward the
lower right. In (b) the towers are shown at a greater distance from the principal
point. Note the increased relief displacement of the towers. We now see more of a
Figure 3.11 (Continued)

“side view” of the objects because the images of their tops are displaced farther than
the images of their bases. These photographs illustrate the radial nature of relief
displacement and the increase in relief displacement with an increase in the radial
distance from the principal point of a photograph.
The geometric components of relief displacement are illustrated in Figure 3.12,
which shows a vertical photograph imaging a tower. The photograph is taken from
flying height H above datum. When considering the relief displacement of a vertical
feature, it is convenient to arbitrarily assume a datum plane placed at the base of the feature. If this is done, the flying height H must be correctly referenced to this same datum, not mean sea level. Thus, in Figure 3.12 the height of the tower (whose base is at datum) is h. Note that the top of the tower, A, is imaged at a in the photograph whereas the base of the tower, A′, is imaged at a′. That is, the image of the top of the tower is radially displaced by the distance d from that of the bottom. The distance d is the relief displacement of the tower. The equivalent distance projected to datum is D. The distance from the photo principal point to the top of the tower is r. The equivalent distance projected to datum is R.
We can express d as a function of the dimensions shown in Figure 3.12. From similar triangles AA′A″ and LOA″,

D/h = R/H

Figure 3.12 Geometric components of relief displacement.



Expressing distances D and R at the scale of the photograph, we obtain

d/h = r/H

Rearranging the above equation yields

d = rh/H    (3.6)

where

d = relief displacement
r = radial distance on the photograph from the principal point to the displaced image point
h = height above datum of the object point
H = flying height above the same datum chosen to reference h

An analysis of Eq. 3.6 indicates mathematically the nature of relief displacement seen pictorially. That is, relief displacement of any given point increases as the distance from the principal point increases (this can be seen in Figure 3.11), and it increases as the elevation of the point increases. Other things being equal, it decreases with an increase in flying height. Hence, under similar conditions high altitude photography of an area manifests less relief displacement than low altitude photography. Also, there is no relief displacement at the principal point (since r = 0).

Object Height Determination from Relief Displacement Measurement

Equation 3.6 also indicates that relief displacement increases with the feature height h. This relationship makes it possible to indirectly measure heights of objects appearing on aerial photographs. By rearranging Eq. 3.6, we obtain

h = dH/r    (3.7)

To use Eq. 3.7, both the top and base of the object to be measured must be
clearly identifiable on the photograph and the flying height H must be known.
If this is the case, d and r can be measured on the photograph and used to calculate
the object height h. (When using Eq. 3.7, it is important to remember that H must
be referenced to the elevation of the base of the feature, not to mean sea level.)
EXAMPLE 3.7

For the photo shown in Figure 3.12, assume that the relief displacement for the tower at A
is 2.01 mm, and the radial distance from the center of the photo to the top of the tower is
56.43 mm. If the flying height is 1220 m above the base of the tower, find the height of the
tower.
Solution

By Eq. 3.7,

h = dH/r = (2.01 mm × 1220 m)/56.43 mm = 43.4 m
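Equation 3.7 is simple enough to script. The sketch below (an illustrative function, not from the text) reproduces Example 3.7; note that the flying height passed in must be referenced to the base of the feature:

```python
def height_from_displacement(d_mm, r_mm, flying_height_m):
    """Eq. 3.7: h = d*H / r, with H referenced to the feature's base."""
    return d_mm * flying_height_m / r_mm

# Example 3.7: d = 2.01 mm, r = 56.43 mm, H = 1220 m above the tower base
tower_h = height_from_displacement(2.01, 56.43, 1220)   # ~43.4 m
```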

While measuring relief displacement is a very convenient means of calculating heights of objects from aerial photographs, the reader is reminded of the assumptions implicit in the use of the method. We have assumed use of truly vertical photography, accurate knowledge of the flying height, clearly visible objects, precise location of the principal point, and a measurement technique whose accuracy is consistent with the degree of relief displacement involved. If these assumptions are reasonably met, quite reliable height determinations may be made using single prints and relatively unsophisticated measuring equipment.

Correcting for Relief Displacement

In addition to calculating object heights, quantification of relief displacement can be used to correct the image positions of terrain points appearing in a photograph. Keep in mind that terrain points in areas of varied relief exhibit relief displacements as do vertical objects. This is illustrated in Figure 3.13. In this figure, the datum plane has been set at the average terrain elevation (not at mean sea level). If all terrain points were to lie at this common elevation, terrain points A and B would be located at A′ and B′ and would be imaged at points a′ and b′ on the photograph. Due to the varied relief, however, the position of point A is shifted radially outward on the photograph (to a), and the position of point B is shifted radially inward (to b). These changes in image position are the relief displacements of points A and B. Figure 3.13b illustrates the effect they have on the geometry of the photo. Because A′ and B′ lie at the same terrain elevation, the image line a′b′ accurately represents the scaled horizontal length and directional orientation of the ground line AB.
Figure 3.13 Relief displacement on a photograph taken over varied terrain: (a) displacement of terrain points; (b) distortion of horizontal angles measured on photograph.

When the relief displacements are introduced, the resulting line ab has a considerably altered length and orientation.
Angles are also distorted by relief displacements. In Figure 3.13b, the horizontal ground angle ACB is accurately expressed by a′cb′ on the photo. Due to the displacements, the distorted angle acb will appear on the photograph. Note that, because of the radial nature of relief displacements, angles about the origin of the photo (such as aob) will not be distorted.
Relief displacement can be corrected for by using Eq. 3.6 to compute its magnitude on a point-by-point basis and then laying off the computed displacement distances radially (in reverse) on the photograph. This procedure establishes the datum-level image positions of the points and removes the relief distortions, resulting in planimetrically correct image positions at datum scale. This scale can be determined from the flying height above datum (S = f/H). Ground lengths, directions, angles, and areas may then be directly determined from these corrected image positions.
EXAMPLE 3.8

Referring to the vertical photograph depicted in Figure 3.13, assume that the radial distance ra to point A is 63.84 mm and the radial distance rb to point B is 62.65 mm. Flying height H is 1220 m above datum, point A is 152 m above datum, and point B is 168 m below datum. Find the radial distance and direction one must lay off from points a and b to plot a′ and b′.

Solution

By Eq. 3.6,

da = ra·ha/H = (63.84 mm × 152 m)/1220 m = 7.95 mm (plot inward)
db = rb·hb/H = (62.65 mm × (−168 m))/1220 m = −8.63 mm (plot outward)
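The same correction can be computed for any number of points. A sketch following the sign convention of Example 3.8 (positive h above datum gives a positive d, plotted inward; the function name is ours):

```python
def relief_displacement_mm(r_mm, h_m, flying_height_m):
    """Eq. 3.6: d = r*h / H. Positive d is plotted inward, negative outward."""
    return r_mm * h_m / flying_height_m

# Example 3.8: H = 1220 m above datum
d_a = relief_displacement_mm(63.84, 152, 1220)    # +7.95 mm (plot inward)
d_b = relief_displacement_mm(62.65, -168, 1220)   # -8.63 mm (plot outward)
```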

3.7 IMAGE PARALLAX

Characteristics of Image Parallax

Thus far we have limited our discussion to photogrammetric operations involving only single vertical photographs. Numerous applications of photogrammetry incorporate the analysis of stereopairs and use of the principle of parallax. The term parallax refers to the apparent change in relative positions of stationary objects caused by
a change in viewing position. This phenomenon is observable when one looks at
objects through a side window of a moving vehicle. With the moving window as a
frame of reference, objects such as mountains at a relatively great distance from the
window appear to move very little within the frame of reference. In contrast, objects
close to the window, such as roadside trees, appear to move through a much greater
distance.
In the same way that the close trees move relative to the distant mountains,
terrain features close to an aircraft (i.e., at higher elevation) will appear to move
relative to the lower elevation features when the point of view changes between
successive exposures. These relative displacements form the basis for three-
dimensional viewing of overlapping photographs. In addition, they can be mea-
sured and used to compute the elevations of terrain points.
Figure 3.14 illustrates the nature of parallax on overlapping vertical photo-
graphs taken over varied terrain. Note that the relative positions of points A and B
change with the change in viewing position (in this case, the exposure station). Note
also that the parallax displacements occur only parallel to the line of flight. In theory,
Figure 3.14 Parallax displacements on overlapping vertical photographs.

the direction of flight should correspond precisely to the fiducial x axis. In reality,
however, unavoidable changes in the aircraft orientation will usually slightly off-
set the fiducial axis from the flight axis. The true flight line axis may be found by
first locating on a photograph the points that correspond to the image centers of
the preceding and succeeding photographs. These points are called the conjugate
principal points. A line drawn through the principal points and the conjugate
principal points defines the flight axis. As shown in Figure 3.15, all photographs
except those on the ends of a flight strip normally have two sets of flight axes.
This happens because the aircraft’s path between exposures is usually slightly
curved. In Figure 3.15, the flight axis for the stereopair formed by photos 1 and 2
is flight axis 12. The flight axis for the stereopair formed by photos 2 and 3 is
flight axis 23.
The line of flight for any given stereopair defines a photocoordinate x axis for
use in parallax measurement. Lines drawn perpendicular to the flight line and pas-
sing through the principal point of each photo form the photographic y axes for par-
allax measurement. The parallax of any point, such as A in Figure 3.15, is expressed
Figure 3.15 Flight line axes for successive stereopairs along a flight strip. (Curvature of
aircraft path is exaggerated.)

in terms of the flight line coordinate system as

pa = xa − x′a    (3.8)

where

pa = parallax of point A
xa = measured x coordinate of image a on the left photograph of the stereopair
x′a = x coordinate of image a′ on the right photograph

The x axis for each photo is considered positive to the right of each photo principal point. This makes x′a a negative quantity in Figure 3.14.

Object Height and Ground Coordinate Location from Parallax Measurement

Figure 3.16 shows overlapping vertical photographs of a terrain point, A. Using parallax measurements, we may determine the elevation at A and its ground coordinate location. Referring to Figure 3.16a, the horizontal distance between exposure stations L and L′ is called B, the air base. The triangle in Figure 3.16b results from superimposition of the triangles at L and L′ in order to graphically depict the nature of parallax pa as computed from Eq. 3.8 algebraically. From similar triangles La′ₓaₓ (Figure 3.16b) and LAₓL′ (Figure 3.16a),

pa/f = B/(H − hA)

from which

H − hA = Bf/pa    (3.9)
Figure 3.16 Parallax relationships on overlapping vertical photographs: (a) adjacent photographs forming a stereopair; (b) superimposition of right photograph onto left.

Rearranging yields

hA = H − Bf/pa    (3.10)

Also, from similar triangles LOAₓ and Loaₓ,

XA/(H − hA) = xa/f

from which

XA = xa(H − hA)/f

and substituting Eq. 3.9 into the above equation yields

XA = B·xa/pa    (3.11)
A similar derivation using y coordinates yields

YA = B·ya/pa    (3.12)

Equations 3.10 to 3.12 are commonly known as the parallax equations. In these equa-
tions, X and Y are ground coordinates of a point with respect to an arbitrary coordi-
nate system whose origin is vertically below the left exposure station and with
positive X in the direction of flight; p is the parallax of the point in question; and x
and y are the photocoordinates of the point on the left-hand photo. The major
assumptions made in the derivation of these equations are that the photos are truly
vertical and that they are taken from the same flying height. If these assumptions are
sufficiently met, a complete survey of the ground region contained in the photo over-
lap area of a stereopair can be made.

EXAMPLE 3.9

The length of line AB and the elevation of its endpoints, A and B, are to be determined from a stereopair containing images a and b. The camera used to take the photographs has a 152.4-mm lens. The flying height was 1200 m (average for the two photos) and the air base was 600 m. The measured photographic coordinates of points A and B in the “flight line” coordinate system are xa = 54.61 mm, xb = 98.67 mm, ya = 50.80 mm, yb = −25.40 mm, x′a = −59.45 mm, and x′b = −27.39 mm. Find the length of line AB and the elevations of A and B.
Solution

From Eq. 3.8,

pa = xa − x′a = 54.61 − (−59.45) = 114.06 mm
pb = xb − x′b = 98.67 − (−27.39) = 126.06 mm

From Eqs. 3.11 and 3.12,

XA = B·xa/pa = 600 × 54.61/114.06 = 287.27 m
XB = B·xb/pb = 600 × 98.67/126.06 = 469.63 m
YA = B·ya/pa = 600 × 50.80/114.06 = 267.23 m
YB = B·yb/pb = 600 × (−25.40)/126.06 = −120.89 m
Applying the Pythagorean theorem yields

AB = [(469.63 − 287.27)² + (−120.89 − 267.23)²]^1/2 = 428.8 m

From Eq. 3.10, the elevations of A and B are

hA = H − Bf/pa = 1200 − (600 × 152.4)/114.06 = 398 m
hB = H − Bf/pb = 1200 − (600 × 152.4)/126.06 = 475 m
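The parallax equations lend themselves to a small routine. The sketch below (illustrative names; it assumes truly vertical photos and equal flying heights, as in the derivation) reproduces Example 3.9:

```python
def parallax_mm(x_left_mm, x_right_mm):
    """Eq. 3.8: p = x - x'."""
    return x_left_mm - x_right_mm

def ground_coords(x_mm, y_mm, p_mm, base_m, flying_height_m, f_mm):
    """Eqs. 3.10-3.12: ground X, Y, and elevation h from parallax."""
    X = base_m * x_mm / p_mm
    Y = base_m * y_mm / p_mm
    h = flying_height_m - base_m * f_mm / p_mm
    return X, Y, h

# Example 3.9: B = 600 m, H = 1200 m, f = 152.4 mm
pa = parallax_mm(54.61, -59.45)   # 114.06 mm
pb = parallax_mm(98.67, -27.39)   # 126.06 mm
XA, YA, hA = ground_coords(54.61, 50.80, pa, 600, 1200, 152.4)
XB, YB, hB = ground_coords(98.67, -25.40, pb, 600, 1200, 152.4)
line_ab = ((XB - XA) ** 2 + (YB - YA) ** 2) ** 0.5   # ~428.8 m
```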

In many applications, the difference in elevation between two points is of more immediate interest than is the actual value of the elevation of either point. In such cases, the change in elevation between two points can be found from

Δh = Δp·H′/pa    (3.13)

where

Δh = difference in elevation between two points whose parallax difference is Δp
H′ = flying height above the lower point
pa = parallax of the higher point

Using this approach in our previous example yields

Δh = (12.00 mm × 802 m)/126.06 mm = 77 m

Note this answer agrees with the value computed above.

Parallax Measurement

To this point in our discussion, we have said little about how parallax measurements are made. In Example 3.9 we assumed that x and x′ for points of interest were measured directly on the left and right photos, respectively. Parallaxes were then calculated from the algebraic differences of x and x′, in accordance with Eq. 3.8. This procedure becomes cumbersome when many points are analyzed, because two measurements are required for each point.
Figure 3.17 illustrates the principle behind methods of parallax measurement that require only a single measurement for each point of interest. If the two photographs constituting a stereopair are fastened to a base with their flight lines aligned, the distance D remains constant for the setup, and the parallax of a point can be derived from measurement of the single distance d. That is, p = D − d. Distance d can be measured with a simple scale, assuming a and a′ are identifiable. In areas of uniform photo tone, individual features may not be identifiable, making the measurement of d very difficult.

Figure 3.17 Alignment of a stereopair for parallax measurement.
Employing the principle illustrated in Figure 3.17, a number of devices have
been developed to increase the speed and accuracy of parallax measurement.
These devices also permit parallax to be easily measured in areas of uniform
photo tone. All employ stereoscopic viewing and the principle of the floating
mark. This principle is illustrated in Figure 3.18. While viewing through a stereo-
scope, the image analyst uses a device that places small identical marks over each
photograph. These marks are normally dots or crosses etched on transparent
material. The marks—called half marks—are positioned over similar areas on the
left-hand photo and the right-hand photo. The left mark is seen only by the left
eye of the analyst and the right mark is seen only by the right eye. The relative
positions of the half marks can be shifted along the direction of flight until they
visually fuse together, forming a single mark that appears to “float” at a specific
level in the stereomodel. The apparent elevation of the floating mark varies with
the spacing between the half marks. Figure 3.18 illustrates how the fused marks
can be made to float and can actually be set on the terrain at particular points in
the stereomodel. Half-mark positions (a, b), (a, c), and (a, d) result in floating-
mark positions in the model at B, C, and D.
Figure 3.18 Floating-mark principle. (Note that only the right half mark is moved to
change the apparent height of the floating mark in the stereomodel.)

A very simple device for measuring parallax is the parallax wedge. It consists
of a transparent sheet of plastic on which are printed two converging lines or
rows of dots (or graduated lines). Next to one of the converging lines is a scale
that shows the horizontal distance between the two lines at each point. Conse-
quently, these graduations can be thought of as a series of distance d measure-
ments as shown in Figure 3.17.
Figure 3.19 shows a parallax wedge set up for use. The wedge is positioned so
that one of the converging lines lies over the left photo in a stereopair and one
over the right photo. When viewed in stereo, the two lines fuse together over a
portion of their length, forming a single line that appears to float in the stereo-
model. Because the lines on the wedge converge, the floating line appears to slope
through the stereoscopic image.
Figure 3.20 illustrates how a parallax wedge might be used to determine the
height of a tree. In Figure 3.20a, the position of the wedge has been adjusted until
the sloping line appears to intersect the top of the tree. A reading is taken from the
scale at this point (58.55 mm). The wedge is then positioned such that the line
Figure 3.19 Parallax wedge oriented under lens stereoscope. (Author-prepared figure.)

Figure 3.20 Parallax wedge oriented for taking a reading on (a) the top and (b) the base of a tree.
intersects the base of the tree, and a reading is taken (59.75 mm). The difference
between the readings (1.20 mm) is used to determine the tree height.

EXAMPLE 3.10

The flying height for an overlapping pair of photos is 1600 m above the ground and pa is 75.60 mm. Find the height of the tree illustrated in Figure 3.20.

Solution

From Eq. 3.13,

Δh = Δp·H′/pa = (1.20 mm × 1600 m)/75.60 mm = 25 m
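The parallax-difference calculation in Example 3.10 can likewise be scripted (the function name is hypothetical):

```python
def elevation_difference_m(dp_mm, height_above_lower_m, p_higher_mm):
    """Eq. 3.13: dh = dp * H' / pa."""
    return dp_mm * height_above_lower_m / p_higher_mm

# Example 3.10: wedge readings 59.75 mm and 58.55 mm, H' = 1600 m, pa = 75.60 mm
tree_h = elevation_difference_m(59.75 - 58.55, 1600, 75.60)   # ~25 m
```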

Parallax measurement in softcopy photogrammetric systems usually involves some form of numerical image correlation to match points on the left photo of a stereopair to their conjugate images on the right photo. Figure 3.21 illustrates the general concept of digital image matching. Shown is a stereopair of photographs with the pixels contained in the overlap area depicted (greatly exaggerated in size). A reference window on the left photograph comprises a local neighborhood of pixels around a fixed location. In this case the reference window is square and 5 × 5 pixels in size. (Windows vary in size and shape based on the particular matching technique.)
A search window is established on the right-hand photo of sufficient size and
general location to encompass the conjugate image of the central pixel of the refer-
ence window. The initial location of the search window can be determined based on
the location of the reference window, the camera focal length, and the size of the
area of overlap. A subsearch “moving window” of pixels (Chapter 7) is then system-
atically moved pixel by pixel about the rows and columns of the search window,
and the numerical correlation between the digital numbers within the reference and
subsearch windows at each location of the moving subsearch window is computed.
The conjugate image is assumed to be at the location where the correlation is a
maximum.
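As a concrete (if unoptimized) illustration of this idea, the sketch below slides a subsearch window across every row and column position of a search array and scores each position against the reference window with normalized cross-correlation; the function names and the exhaustive search strategy are illustrative only.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size pixel windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match(reference, search, half=2):
    """Slide a (2*half+1)-pixel square subsearch window over `search` and
    return the center (row, col) of the best match plus its correlation."""
    n = 2 * half + 1
    best_score, best_rc = -2.0, None
    for r in range(search.shape[0] - n + 1):
        for c in range(search.shape[1] - n + 1):
            score = ncc(reference, search[r:r + n, c:c + n])
            if score > best_score:
                best_score, best_rc = score, (r + half, c + half)
    return best_rc, best_score

# A planted example: the 5 x 5 reference window is lifted from the search
# area, so the best match is found at its original center (row 7, col 9).
rng = np.random.default_rng(0)
search = rng.random((20, 20))
reference = search[5:10, 7:12].copy()
print(match(reference, search))
```

Restricting the loop to a single row of the search window would mimic the epipolar strategy described next.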
There are various types of algorithms that can be used to perform image match-
ing. (One approach employs the simple principle of epipolar geometry to minimize
the number of unnecessary computations made in the search process. This involves
using a small search window that is moved only along the straight line direction in
which the parallax of any point occurs.) The details of these procedures are not as

Figure 3.21 Principle of image matching. (After Wolf et al., 2013.)

important as the general concept of locating the conjugate image point for all
points on a reference image. The resulting photocoordinates can then be used in the
various parallax equations earlier described (Eqs. 3.10 to 3.13). However, the paral-
lax equations assume perfectly vertical photography and equal flying heights for all
images. This simplifies the geometry and hence the mathematics of computing
ground positions from the photo measurements. However, softcopy systems are not
constrained to the above assumptions. Such systems employ mathematical models
of the imaging process that readily handle variations in the flying height and atti-
tude of each photograph. As we discuss in Section 3.9, the relationship among
image coordinates, ground coordinates, the exposure station position, and angular
orientation of each photograph is normally described by a series of collinearity
equations. They are used in the process of analytical aerotriangulation, which
involves determining the X, Y, Z, ground coordinates of individual points based on
photocoordinate measurements.

3.8 GROUND CONTROL FOR AERIAL PHOTOGRAPHY

Many photogrammetric activities involve the use of a form of ground reference
data known as ground control. Ground control consists of physical points on the
ground whose positions are known in terms of mapping system coordinates. As
we discuss in the following section, one important role of ground control is to aid
in determining the exterior orientation parameters (position and angular orienta-
tion) of a photograph at the instant of exposure.
Ground control points must be mutually identifiable on the ground and on a
photograph. Such points may be horizontal control points, vertical control points,
or both. Horizontal control point positions are known planimetrically in some X Y
coordinate systems (e.g., a state plane coordinate system). Vertical control points
have known elevations with respect to a level datum (e.g., mean sea level). A sin-
gle point with known planimetric position and known elevation can serve as both
a horizontal and vertical control point.
Historically, ground control has been established through ground-surveying
techniques in the form of triangulation, trilateration, traversing, and leveling. Cur-
rently, the establishment of ground control is aided by the use of GPS procedures.
The details of these and other more sophisticated surveying techniques used for
establishing ground control are not important for the reader of this book to under-
stand. What is important to realize is that the positions of ground control points,
irrespective of how they are determined, must be highly accurate, because photogrammetric measurements can only be as reliable as the ground control on which they
are based. Depending on the number and measurement accuracy of the control
points required in a particular project, the costs of obtaining ground control mea-
surements can be substantial.
As mentioned, ground control points must be clearly identifiable both on the
ground and on the photography being used. Ideally, they should be located in
locally flat areas and free of obstructions such as buildings or overhanging trees.
When locating a control point in the field, potential obstructions can usually be
identified simply by occupying the prospective control point location, looking up
45° from horizontal and rotating one’s view horizontally through 360°. This entire
field of view should be free of obstructions. Often, control points are selected and
surveyed after photography has been taken, thereby ensuring that the points are
identifiable on the image. Cultural features, such as road intersections, are often
used as control points in such cases. If a ground survey is made prior to a photo
mission, control points may be premarked with artificial targets to aid in their iden-
tification on the photography. Crosses that contrast with the background land cover
make ideal control point markers. Their size is selected in accordance with the scale
of the photography to be flown and their material form can be quite variable. In
many cases, markers are made by simply painting white crosses on roadways.

Alternatively, markers can be painted on contrasting sheets of Masonite, plywood, or heavy cloth.

3.9 DETERMINING THE ELEMENTS OF EXTERIOR ORIENTATION OF AERIAL PHOTOGRAPHS

As previously stated (in Section 3.1), in order to use aerial photographs for
any precise photogrammetric mapping purposes, it is first necessary to deter-
mine the six independent parameters that describe the position and angular
orientation of the photocoordinate axis system of each photograph (at the instant
the photograph was taken) relative to the origin and angular orientation of the
ground coordinate system used for mapping. The process of determining the
exterior orientation parameters for an aerial photograph is called georeferencing.
Georeferenced images are those for which 2D photo coordinates can be projected
to the 3D ground coordinate reference system used for mapping and vice versa.
For a frame sensor, such as a frame camera, a single exterior orientation
applies to an entire image. With line scanning or other dynamic imaging systems,
the exposure station position and orientation change with each image line. The
process of georeferencing is equally important in establishing the geometric rela-
tionship between image and ground coordinates for such sensors as lidar, hyper-
spectral scanners, and radar as it is in aerial photography.
In the remainder of this section, we discuss the two basic approaches taken
to georeference frame camera images. The first is indirect georeferencing, which
makes use of ground control and a procedure called aerotriangulation to “back
out” computed values for the six exterior orientation parameters of all photo-
graphs in a flight strip or block. The second approach is direct georeferencing,
wherein these parameters are measured directly through the integration of air-
borne GPS and inertial measurement unit (IMU) observations.

Indirect Georeferencing

Figure 3.22 illustrates the relationship between the 2D (x, y) photocoordinate sys-
tem and the 3D (X, Y, Z) ground coordinate system for a typical photograph. This
figure also shows the six elements of exterior orientation: the 3D ground coordi-
nates of the exposure station (L) and the 3D rotations of the tilted photo plane (ω,
φ, and κ) relative to an equivalent perfectly vertical photograph. Figure 3.22 also
shows what is termed the collinearity condition: the fact that the exposure station
of any photograph, any object point in the ground coordinate system, and its pho-
tographic image all lie on a straight line. This condition holds irrespective of the
angular tilt of a photograph. The condition can also be expressed mathematically
in terms of collinearity equations. These equations describe the relationships
among image coordinates, ground coordinates, the exposure station position, and

Figure 3.22 The collinearity condition.

the angular orientation of a photograph as follows:


"      #
m11 Xp  XL þ m12 Yp  YL þ m13 Zp  ZL
xp ¼ f       (3:14)
m31 Xp  XL þ m32 Yp  YL þ m33 Zp  ZL
"     #
m21 Xp  XL þ m22 Yp  YL þ m23 Zp  ZL
yp ¼ f       (3:15)
m31 Xp  XL þ m32 Yp  YL þ m33 Zp  ZL

where
xp, yp = image coordinates of any point p
f = focal length
XP, YP, ZP = ground coordinates of point P
XL, YL, ZL = ground coordinates of exposure station L
m11, … , m33 = coefficients of a 3 × 3 rotation matrix defined by the angles ω, φ, and κ that transforms the ground coordinate system to the image coordinate system
The above equations are nonlinear and contain nine unknowns: the exposure sta-
tion position (XL, YL, ZL), the three rotation angles (ω, φ, and κ, which are embedded in the m coefficients), and the three object point coordinates (XP, YP, ZP) for
points other than control points.
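A direct transcription of Eqs. 3.14 and 3.15 is sketched below, with the rotation matrix built from ω, φ, and κ by the sequential ω–φ–κ convention of Wolf et al. (2013). The function names and the sign convention are ours, so treat this as a sketch rather than a definitive implementation:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Ground-to-image rotation matrix M for sequential omega-phi-kappa
    angles (radians): M = M_kappa @ M_phi @ M_omega."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    m_omega = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    m_phi = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    m_kappa = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return m_kappa @ m_phi @ m_omega

def collinearity(f, m, ground_pt, station):
    """Eqs. 3.14-3.15: photo coordinates (xp, yp) of a ground point."""
    d = m @ (np.asarray(ground_pt, float) - np.asarray(station, float))
    return -f * d[0] / d[2], -f * d[1] / d[2]

# Perfectly vertical photo (all angles zero), f = 152 mm, exposure station
# 1000 m above the datum, ground point at (100, 50, 0):
m = rotation_matrix(0.0, 0.0, 0.0)
print(collinearity(0.152, m, (100.0, 50.0, 0.0), (0.0, 0.0, 1000.0)))
# ≈ (0.0152, 0.0076), i.e., 15.2 mm and 7.6 mm from the principal point
```

For a vertical photo this reduces to the familiar scale relation x = f·X/H, which is a useful sanity check on the signs.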
A detailed description of how the collinearity equations are solved for the
above unknowns is beyond the scope of this discussion. Suffice it to say that the
process involves linearizing the equations using Taylor’s theorem and then using
ground control points to solve for the six unknown elements of exterior orienta-
tion. In this process, photocoordinates for at least three XYZ ground control
points are measured. Since Eqs. 3.14 and 3.15 yield two equations for each control point, three such points yield six equations that can be solved simultaneously
for the six unknowns. If more than three control points are available, then more
than six equations can be formed and a least squares solution for the unknowns
is performed. The aim of using more than three control points is to increase the
accuracy of the solution through redundancy and to prevent incorrect data from
going undetected.
At this juncture, we have described how ground control is applied to georefer-
encing of only a single photograph. However, most projects involve long strips or
blocks of aerial photos; thus, it would be cost prohibitive to use field measure-
ments to obtain the ground coordinates of a minimum of three ground control
points in each image. Dealing with this more general situation (in the absence of
using a GPS and IMU to accomplish data acquisition) typically involves the pro-
cess of analytical aerotriangulation (Graham and Koh, 2002). This procedure per-
mits “bridging” between overlapping photographs along a strip and between
sidelapping photographs within a block. This is done by establishing pass points
between photographs along a strip and tie points between images in overlapping
strips. Pass points and tie points are simply well‐defined points in overlapping
image areas that can be readily and reliably found in multiple images through
automated image matching. Their photocoordinate values are measured in each
image in which they appear.
Figure 3.23 illustrates a small block of photographs consisting of two flight
lines with five photographs included in each flight line. This block contains ima-
ges of 20 pass points, five tie points, and six ground control points, for a total of
31 object points. As shown, pass points are typically established near the center,
left‐center edge, and right‐center edge of each image along the direction of flight.
If the forward overlap between images is at least 60%, pass points selected in
these locations will appear in two or three images. Tie points are located near the
top and bottom of sidelapping images and will appear on four to six images. The

table included in the lower portion of Figure 3.23 indicates the number of photo-
graphs in which images of each object point appear. As can be seen from this
table, the grand total of object point images to be measured in this block is 94,
each yielding an x and y photocoordinate value, for a total of 188 observations.
There are also 18 direct observations for the 3D ground coordinates of the six

[Figure 3.23 diagram: a block of two five-photo flight strips covering object points 1–25 and A–F. Pass points occupy the upper and lower rows of each strip, tie points (3, 8, 13, 18, and 23) lie along the sidelap between the strips, and the six 3D ground control points (A–F) are distributed through the block.]

Point ID   No. of photos          Point ID   No. of photos
           containing point                  containing point
1          2                      14         3
A          2                      15         3
2          2                      16         3
3          4                      17         3
B          4                      18         6
4          2                      19         3
5          2                      20         3
C          2                      21         2
6          3                      D          2
7          3                      22         2
8          6                      E          4
9          3                      23         4
10         3                      24         2
11         3                      F          2
12         3                      25         2
13         6                      Total no. of image points = 94
Figure 3.23 A small (10-photo) block of photographs illustrating the typical
location of pass points, tie points, and ground control points whose ground
coordinates are computed through the aerotriangulation process.

control points. However, many systems allow for errors in the ground control
values, so they are adjusted along with the photocoordinate measurements.
The nature of analytical aerotriangulation has evolved significantly over time,
and many variations of how it is accomplished exist. However, all methods
involve writing equations (typically collinearity equations) that express the ele-
ments of exterior orientation of each photo in a block in terms of camera con-
stants (e.g., focal length, principal point location, lens distortion), measured
photo coordinates, and ground control coordinates. The equations are then solved
simultaneously to compute the unknown exterior orientation parameters for all
photos in a block and the ground coordinates of the pass points and tie points
(thus increasing the spatial density of the control available to accomplish sub-
sequent mapping tasks). For the small photo block considered here, the number
of unknowns in the solution consists of the X, Y, and Z object space coordinates
of all object points (3 × 31 = 93) and the six exterior orientation parameters for
each of the photographs in the block (6 × 10 = 60). Thus, the total number of
unknowns in this relatively small block is 93 + 60 = 153.
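The bookkeeping for this block can be verified in a few lines (the variable names are ours):

```python
n_photos = 10
n_object_points = 31       # 20 pass points + 5 tie points + 6 control points
n_image_points = 94        # total image-point measurements (Figure 3.23)
n_control = 6

# x,y photocoordinates per image point, plus XYZ per ground control point
observations = 2 * n_image_points + 3 * n_control
# XYZ per object point, plus six exterior orientation parameters per photo
unknowns = 3 * n_object_points + 6 * n_photos

print(observations, unknowns, observations - unknowns)  # 206 153 53
```

The 53 redundant observations are what allow a least squares adjustment to detect and filter measurement errors.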
The above process by which all photogrammetric measurements in a block
are related to ground control values in one massive solution is often referred to as
bundle adjustment. The “bundle” implied by this terminology is the conical bundle of light rays that pass through the camera lens at each exposure station. In
essence, the bundles from all photographs are adjusted simultaneously to best fit
all the ground control points, pass points, and tie points in a block of photographs
of virtually any size.
With the advent of airborne GPS over a decade ago, the aerotriangulation
process was greatly streamlined and improved. By including the GPS coordinates
of the exposure station for each photo in a block in a bundle adjustment, the need
for ground control was greatly reduced. Each exposure station becomes an addi-
tional control point. It is not unusual to employ only 10 to 12 control points
to control a block of hundreds of images when airborne GPS is employed
(D.F. Maune, 2007).

Direct Georeferencing

The currently preferred method for determining the elements of exterior orienta-
tion is direct georeferencing. As stated earlier, this approach involves processing
the raw measurements made by an airborne GPS together with IMU data to cal-
culate both the position and angular orientation of each image. The GPS data
afford high absolute accuracy information on position and velocity. At the same
time, IMU data provide very high relative accuracy information on position, velo-
city, and angular orientation. However, the absolute accuracy of IMUs tends to
degrade with time when operated in a stand-alone mode. This is where the
integration of the GPS and IMU data takes on importance. The high accuracy
GPS position information is used to control the IMU position error, which in turn
controls the IMU’s orientation error.

Another advantage to GPS/IMU integration is that GPS readings are collected
at discrete intervals, whereas an IMU provides continuous positioning. Hence
the IMU data can be used to smooth out random errors in the positioning data.
Finally, the IMU can serve as a backup system for the GPS if it loses its “lock” on
satellite signals for a short period of time as the aircraft is maneuvered during the
photographic mission.
To say that direct positioning and orientation systems have had a major influ-
ence on how modern photogrammetric operations are performed is a severe under-
statement. The direct georeferencing approach in many circumstances eliminates
the need for ground control surveys (except to establish a base station for the air-
borne GPS operations) and the related process of aerotriangulation. The need to
manually select pass and tie points is eliminated by using autocorrelation methods
to select hundreds of points per stereomodel. This translates to improving the effi-
ciency, cost, and timeliness of delivery of geospatial data products. Not having to
perform extensive ground surveys can also improve personnel safety in circum-
stances such as working in treacherous terrain or the aftermath of a natural disaster.
Finally, direct georeferencing still has its own limitations and considerations.
The spatial relationships among the GPS antenna, IMU, and the mapping camera
have to be well calibrated and controlled. Also, a small mapping project could
actually cost more to accomplish using this approach due to the fixed costs of
acquiring and operating such a system. Current systems are very accurate at
small mapping scales. Depending on photo acquisition parameters and accuracy
requirements, the procedure might not be appropriate for mapping at very large
scales. Some limited ground control is also recommended for quality control pur-
poses. Likewise, flight timing should be optimized to ensure strong GPS signals
from a number of satellites, which may narrow the acquisition time window for
flight planning purposes.

3.10 PRODUCTION OF MAPPING PRODUCTS FROM AERIAL


PHOTOGRAPHS

Elementary Planimetric Mapping

Photogrammetric mapping can take on many forms, depending upon the nature of
the photographic data available, the instrumentation and/or software used, and the
form and accuracy required in any particular mapping application. Many applica-
tions only require the production of planimetric maps. Such maps portray the plan
view (X and Y locations) of natural and cultural features of interest. They do not
represent the contour or relief (Z elevations) of the terrain, as do topographic maps.
Planimetric mapping with hardcopy images can often be accomplished with
relatively simple and inexpensive methods and equipment, particularly when relief
effects are minimal and the ultimate in positional accuracy is not required. In such

cases, an analyst might use such equipment as an optical transfer scope to transfer
the locations of image features to a map base. This is done by pre-plotting the posi-
tion of several photo control points on a map sheet at the desired scale. Then the
image is scaled, stretched, rotated, and translated to optically fit (as closely as possi-
ble) the plotted positions of the control points on the map base. Once the orienta-
tion of the image to the map base is accomplished, the locations of other features of
interest in the image are transferred to the map.
Planimetric features can also be mapped from hardcopy images with the aid of
a table digitizer. In this approach, control points are again identified whose X Y
coordinates are known in the ground coordinate system and whose xy coordinates
are then measured in the digitizer axis system. This permits the formulation of a
two-dimensional coordinate transformation (Section 3.2 and Appendix B) to relate
the digitizer xy coordinates to the ground X Y coordinate system. This transforma-
tion is then used to relate the digitizer coordinates of features other than the
ground control points to the ground coordinate mapping system.
The above control point measurement and coordinate transformation
approach can also be applied when digital, or softcopy, image data are used in the
mapping process. In this case, the row and column x y coordinate of a pixel in
the image file is related to the X Y ground coordinate system via control point
measurement. Heads-up digitizing is then used to obtain the xy coordinates of the
planimetric features to be mapped from the image, and these are transformed
into the ground coordinate mapping system.
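A minimal version of this control-point fit, using a six-parameter affine model solved by least squares (one of the transformation forms discussed in Appendix B; the function names and sample coordinates are hypothetical), might look like:

```python
import numpy as np

def fit_affine(digitizer_xy, ground_XY):
    """Fit X = a*x + b*y + c and Y = d*x + e*y + f by least squares.
    Needs at least three non-collinear control points."""
    xy = np.asarray(digitizer_xy, float)
    A = np.column_stack([xy, np.ones(len(xy))])        # rows of [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, np.asarray(ground_XY, float), rcond=None)
    return coef                                        # 3 x 2 coefficient array

def apply_affine(coef, xy):
    """Transform digitizer (x, y) points to ground (X, Y)."""
    xy = np.asarray(xy, float)
    return np.column_stack([xy, np.ones(len(xy))]) @ coef

# Four hypothetical control points measured in both coordinate systems:
ctrl_xy = [(0, 0), (10, 0), (0, 10), (7, 3)]
ctrl_XY = [(100, 50), (120, 49), (101, 70), (114.3, 55.3)]
coef = fit_affine(ctrl_xy, ctrl_XY)
print(apply_affine(coef, [(5, 5)]))  # ≈ [[110.5  59.5]]
```

With four or more control points the system is redundant, and the least squares residuals at the control points give a first check on digitizing quality.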
We stress that the accuracy of the ground coordinates resulting from either
tablet or heads-up digitizing can be highly variable. Among the many factors that
can influence this accuracy are the number and spatial distribution of the control
points, the accuracy of the ground control, the accuracy of the digitizer (in tablet
digitizing), the accuracy of the digitizing process, and the mathematical form of
coordinate transformation used. Compounding all of these factors are the poten-
tial effects of terrain relief and image tilt. For many applications, the accuracy of
these approaches may suffice and the cost of implementing more sophisticated
photogrammetric procedures can be avoided. However, when higher-order accu-
racy is required, it may only be achievable through softcopy mapping procedures
employing stereopairs (or larger strips or blocks) of georeferenced photographs.

Evolution from Hardcopy to Softcopy Viewing and Mapping Systems

Stereopairs represent a fundamental unit from which mapping products are
derived from aerial photographs. Historically, planimetric and topographic maps
were generated from hardcopy stereopairs using a device called a stereoplotter.
While very few hardcopy stereoplotters are in use today, they serve as a convenient
graphical means to illustrate the functionality of current softcopy‐based systems.
Figure 3.24 illustrates the principle underlying the design and operation of a
direct optical projection stereoplotter. Shown in (a) are the conditions under which

(a)

(b)
Figure 3.24 Fundamental concept of stereoplotter instrument design: (a) exposure of
stereopair in flight; (b) projection in stereoplotter. (From P. R. Wolf, B. Dewitt, and
B. Wilkinson, 2013, Elements of Photogrammetry with Applications in GIS, 4th ed.,
McGraw-Hill. Reproduced with the permission of The McGraw-Hill Companies.)

a stereopair is exposed in‐flight. Note that the flying height for each exposure sta-
tion is slightly different and that the camera’s optical axis is not perfectly vertical
when the photos are exposed. Also note the angular relationships between the
light rays coming from point A on the terrain surface and recorded on each of the
two negatives.
As shown in (b), the negatives are used to produce diapositives (transpar-
encies printed on glass or film transparencies “sandwiched” between glass plates),
which are placed in two stereoplotter projectors. Light rays are then projected
through both the left and right diapositives. When the rays from the left and right
images intersect below the projectors, they form a stereomodel, which can be
viewed and measured in stereo. To aid in creating the stereomodel, the projectors
can be rotated about, and translated along, their x, y, and z axes. In this way, the
diapositives can be positioned and rotated such that they bear the exact relative
angular orientation to each other in the projectors as the negatives did when they
were exposed in the camera at the two exposure stations. The process of estab-
lishing this angular relationship in the stereoplotter is called relative orientation,
and it results in the creation of a miniature 3D stereomodel of the overlap area.
Relative orientation of a stereomodel is followed by absolute orientation,
which involves scaling and leveling the model. The desired scale of the model is
produced by varying the base distance, b, between the projectors. The scale of the
resulting model is equal to the ratio b/B. Leveling of the model can be accom-
plished by rotating both projectors together about the X direction and Y direction
of the mapping coordinate system.
Once the model is oriented, the X, Y, and Z ground coordinates of any point
in the overlap area can be obtained by bringing a reference floating mark in con-
tact with the model at that point. This reference mark can be translated in the
X and Y directions throughout the model, and it can be raised and lowered in the
Z direction. In preparing a topographic map, natural or cultural features are map-
ped planimetrically by tracing them with the floating mark, while continuously
raising and lowering the mark to maintain contact with the terrain. Contours are
compiled by setting the floating mark at the desired elevation of a contour and
moving the floating mark along the terrain so that it just maintains contact with
the surface of the model. Typically, the three‐dimensional coordinates of all
points involved in the map compilation process are recorded digitally to facilitate
subsequent automated mapping, GIS data extraction, and analysis.
It should be noted that stereoplotters recreate the elements of exterior orien-
tation in the original images forming the stereomodel, and the stereoplotting
operation focuses on the intersections of rays from conjugate points (rather than
the distorted positions of these points themselves on the individual photos). In
this manner, the effects of tilt, relief displacement, and scale variation inherent in
the original photographs are all negated in the stereoplotter map compilation
process.
Direct optical projection stereoplotters employed various techniques to pro-
ject and view the stereomodel. In order to see stereo, the operator’s eyes had to

view each image of the stereopair separately. Anaglyphic systems involved project-
ing one photo through a cyan filter and the other through a red filter. By viewing
the model through eyeglasses having corresponding color filters, the operator’s
left eye would see only the left photo and the right eye would see only the right
photo. Other approaches to stereo viewing included the use of polarizing filters in
place of colored filters, or placing shutters over the projectors to alternate display
of the left and right images as the operator viewed the stereomodel through a syn-
chronized shutter system.
Again, direct optical projection plotters represented the first generation of
such systems. Performing the relative and absolute orientation of these instru-
ments was an iterative and sometimes trial‐and‐error process, and mapping plani-
metric features and contours with such systems was very tedious. As time went
by, stereoplotter designs evolved from being direct optical devices to optical‐
mechanical, analytical, and now softcopy systems.
Softcopy‐based systems entered the commercial marketplace in the early
1990s. Early in their development, the dominant data source used by these sys-
tems was aerial photography that had been scanned by precision photogram-
metric scanners. Today, these systems primarily process digital camera data.
They incorporate high quality displays affording 3D viewing. Like their pre-
decessors, softcopy systems employ various means to enforce the stereoscopic
viewing condition that the left eye of the image analyst only sees the left image of
a stereopair and the right eye only sees the right image. These include, but are not
limited to, anaglyphic and polarization systems, as well as split‐screen and rapid
flicker approaches. The split screen technique involves displaying the left image
on the left side of a monitor and the right image on the right side. The analyst
then views the images through a stereoscope. The rapid flicker approach entails
high frequency (120 Hz) alternation between displaying the left image alone and
then the right alone on the monitor. The analyst views the display with a pair of
electronically shuttered glasses that are synchronized to be alternately clear or
opaque on the left or right side as the corresponding images are displayed.
In addition to affording a 3D viewing capability, softcopy photogrammetric
workstations must have very robust computational power and large data storage
capabilities. However, these hardware requirements are not unique to photo-
grammetric workstations. What is unique about photogrammetric workstations is
the diversity, modularity, and integration of the suite of software these systems
typically incorporate to generate photogrammetric mapping products.
The collinearity condition is frequently the basis for many softcopy analysis
procedures. For example, in the previous section of this discussion we illustrated
the use of the collinearity equations to georeference individual photographs. Col-
linearity is also frequently used to accomplish relative and absolute orientation of
stereopairs. Another very important application of the collinearity equations is
their incorporation in the process of space intersection.

Space Intersection and DEM Production

Space intersection is a procedure by which the X, Y, and Z coordinates of any
point in the overlap of a stereopair of tilted photographs can be determined. As
shown in Figure 3.25, space intersection is premised on the fact that correspond-
ing rays from overlapping photos intersect at a unique point. Image correlation
(Section 3.7) can be used to match any given point with its conjugate image to
determine the point of intersection.
Space intersection makes use of the fact that a total of four collinearity equa-
tions can be written for each point in the overlap area. Two of these equations
relate to the point’s x and y coordinates on the left‐hand photo; two result from
the x’ and y’ coordinates measured on the right‐hand photo. If the exterior

[Figure 3.25 diagram: rays from exposure stations L1 and L2, passing through image points p1 and p2 on the two photos, intersect at the ground point P; the station coordinates (XL1, YL1, ZL1) and (XL2, YL2, ZL2) and the point coordinates (XP, YP, ZP) are expressed in the ground X, Y, Z system.]

Figure 3.25 Space intersection.



orientation of both photos is known, then the only unknowns in each equation
are X, Y, and Z for the point under analysis. Given four equations for three
unknowns, a least squares solution for the ground coordinates of each point can
be performed. This means that an image analyst can stereoscopically view, and
extract the planimetric positions and elevations of, any point in the stereomodel.
These data can serve as direct input to a GIS or CAD system.
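Although the full solution linearizes the four collinearity equations, the geometric core — finding the point nearest to two projection rays — can be sketched as a small linear least squares problem. Here each ray is given by a station position L and a direction vector u (under the collinearity model, u = Mᵀ[x, y, −f] for the photo coordinates measured on each image); the formulation and names are ours:

```python
import numpy as np

def intersect_rays(L1, u1, L2, u2):
    """Least squares point nearest the two rays X = L1 + t1*u1 and
    X = L2 + t2*u2; the unknowns are (X, Y, Z, t1, t2)."""
    A = np.zeros((6, 5))
    b = np.zeros(6)
    A[:3, :3] = np.eye(3); A[:3, 3] = -np.asarray(u1, float); b[:3] = L1
    A[3:, :3] = np.eye(3); A[3:, 4] = -np.asarray(u2, float); b[3:] = L2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]

# Two stations 600 m apart, both 1000 m up, imaging ground point (100, 50, 0);
# for illustration the ray directions are taken from station-to-point vectors.
P = np.array([100.0, 50.0, 0.0])
L1, L2 = np.array([0.0, 0.0, 1000.0]), np.array([600.0, 0.0, 1000.0])
print(intersect_rays(L1, P - L1, L2, P - L2))  # ≈ [100.  50.   0.]
```

With real (noisy) measurements the two rays rarely meet exactly, and the least squares solution returns the point that best reconciles them — the same role the redundant fourth collinearity equation plays in the rigorous solution.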
Systematic sampling of elevation values throughout the overlap area can form
the basis for DEM production with softcopy systems. While this process is highly
automated, the image correlation involved is often far from perfect. This normally
leads to the need to edit the resulting DEM, but this hybrid DEM compilation
process is still a very useful one. The process of editing and reviewing a DEM is
greatly facilitated in a softcopy‐based system in that all elevation points (and even
contours if they are produced) can be superimposed on the 3D view of the origi-
nal stereomodel to aid in inspection of the quality of elevation data.

Digital Orthophoto Production

As implied by their name, orthophotos are orthographic photographs. They do not
contain the scale, tilt, and relief distortions characterizing normal aerial photo-
graphs. In essence, orthophotos are “photomaps.” Like maps, they have one scale
(even in varying terrain), and like photographs, they show the terrain in actual
detail (not by lines and symbols). Hence, orthophotos give the resource analyst the
“best of both worlds”—a product that can be readily interpreted like a photograph
but one on which true distances, angles, and areas may be measured directly.
Because of these characteristics, orthophotos make excellent base maps for compil-
ing data to be input to a GIS or overlaying and updating data already incorporated
in a GIS. Orthophotos also enhance the communication of spatial data, since data
users can often relate better to an orthophoto than a conventional line and symbol
map or display.
The primary inputs to the production of a digital orthophoto are a conven-
tional, perspective digital photograph and a DEM. The objective of the production
process is to compensate for the effects of tilt, relief displacement, and scale var-
iation by reprojecting the original image orthographically. One way to think
about this reprojection process is envisioning that the original image is projected
over the DEM, resulting in a “draped” image in which elevation effects are
minimized.
There are various means that can be used to create orthophotos. Here we
illustrate the process of backward projection. In this process, we start with point
locations in the DEM and find their corresponding locations on the original pho-
tograph. Figure 3.26 illustrates the process. At each ground position in the DEM
(XP, YP, ZP) the associated image point can be computed through the collinearity
equations as xp, yp. The brightness value of the image at that point is then inserted
into an output array, and the process is repeated for every line and column
position in the DEM to form the entire digital orthophoto.

Figure 3.26 Digital orthophoto production process.

A minor complication in
this whole process is the fact that rarely will the photocoordinate value (xp, yp)
computed for a given DEM cell be exactly centered over a pixel in the original digi-
tal input image. Accordingly, the process of resampling (Chapter 7 and Appendix B)
is employed to determine the best brightness value to assign to each pixel in the
orthophoto based on a consideration of the brightness values of a neighborhood of
pixels surrounding each computed photocoordinate position (xp, yp).
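The backward projection and resampling steps can be sketched for a single DEM cell as follows. This is a simplified illustration assuming metric photo coordinates, a known rotation matrix R and exposure station C, bilinear resampling, and the same sign convention used above; the function and variable names are ours.

```python
import numpy as np

def bilinear(img, r, c):
    """Bilinear resampling of an image at fractional (row, col)."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0] + (1 - dr) * dc * img[r0, c0 + 1]
            + dr * (1 - dc) * img[r0 + 1, c0] + dr * dc * img[r0 + 1, c0 + 1])

def ortho_pixel(X, Y, Z, f, R, C, img, pixel_size, pp_row, pp_col):
    """Backward-project one DEM cell (X, Y, Z) into the photograph and
    return the resampled brightness value for the orthophoto."""
    d = R @ (np.array([X, Y, Z]) - C)   # collinearity equations
    x = -f * d[0] / d[2]                # photo coordinates (m)
    y = -f * d[1] / d[2]
    col = pp_col + x / pixel_size       # photo coordinates -> pixel position
    row = pp_row - y / pixel_size
    return bilinear(img, row, col)

# Vertical photo 1000 m above flat terrain: img[r, c] = c, so the value
# recovered for a DEM cell tells us which image column it projected into.
img = np.tile(np.arange(100.0), (100, 1))
v = ortho_pixel(100.0, 0.0, 0.0, 0.1, np.eye(3), np.array([0.0, 0.0, 1000.0]),
                img, 0.001, 50.0, 50.0)
print(v)  # 60.0: ground X = 100 m maps to image column 60
```

A full orthophoto simply repeats this computation for every DEM cell, writing each resampled value into the corresponding cell of the output array.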
Figure 3.27 illustrates the influence of the above reprojection process.
Figure 3.27a is a conventional (perspective) photograph of a power line clearing
traversing a hilly forested area. The excessively crooked appearance of the linear
clearing is due to relief displacement. Figure 3.27b is a portion of an orthophoto
covering the same area. The relief effects have been removed and the true path of
the power line is shown.


Figure 3.27 Portion of (a) a perspective photograph and (b) an orthophoto showing a
power line clearing traversing hilly terrain. (Note the excessive crookedness of the power
line clearing in the perspective photo that is eliminated in the orthophoto.) (Courtesy
USGS.)

Orthophotos alone do not convey topographic information. However, they
can be used as base maps for contour line overlays prepared in a separate stereo-
plotting operation. The result of overprinting contour information on an ortho-
photo is a topographic orthophotomap. Much time is saved in the preparation of
such maps because the instrument operator need not map the planimetric data in
the map compilation process. Figure 3.28 illustrates a portion of a topographic
orthophotomap.
Orthophotos may be viewed stereoscopically when they are paired with
stereomates. These products are photographs made by introducing image parallax as a
function of known terrain elevations obtained during the production of their corre-
sponding orthophoto. Figure 3.29 illustrates an orthophoto and a corresponding
stereomate that may be viewed stereoscopically. These products were generated as
a part of a stereoscopic orthophotomapping program undertaken by the Canadian
Forest Management Institute. The advantage of such products is that they combine
the attributes of an orthophoto with the benefits of stereo observation. (Note that
Figure 3.27 can also be viewed in stereo. This figure consists of an orthophoto and

Figure 3.28 Portion of a 1:4800 topographic orthophotomap. Photography taken over the Fox Chain of Lakes, IL.
(Courtesy Alster and Associates, Inc.)


Figure 3.29 Stereo orthophotograph showing a portion of Gatineau Park, Canada: (a) An orthophoto
and (b) a stereomate provide for three-dimensional viewing of the terrain. Measurements made from,
or plots made on, the orthophoto have map accuracy. Forest-type information is overprinted on this
scene along with a Universal Transverse Mercator (UTM) grid. Note that the UTM grid is square on the
orthophoto but is distorted by the introduction of parallax on the stereomate. Scale 1:38,000.
(Courtesy Forest Management Institute, Canadian Forestry Service.)

one of the two photos comprising the stereopair from which the orthophoto was
produced.)
One caveat we wish to note here is that tall objects such as buildings will still
appear to lean in an orthophoto if these features are not included in the DEM
used in the orthophoto production process. This effect can be particularly trouble-
some in urban areas. The effect can be overcome by including building outline ele-
vations in the DEM, or minimized by using only the central portion of a
photograph, where relief displacement of vertical features is at a minimum.
Plate 7 is yet another example of the need for, and influence of, the distortion
correction provided through the orthophoto production process. Shown in (a) is an
original uncorrected color photograph taken over an area of high relief in Glacier
National Park. The digital orthophoto corresponding to the uncorrected photo-
graph in (a) is shown in (b). Note the locational errors that would be introduced if
GIS data were developed from the uncorrected image. GIS analysts are encouraged
to use digital orthophotos in their work whenever possible. Two major federal sour-
ces of such data in the United States are the U.S. Geological Survey (USGS)
National Digital Orthophoto Program (NDOP) and the USDA National Agriculture
Imagery Program (NAIP).
Figures 3.30 and 3.31 illustrate the visualization capability afforded by mer-
ging digital orthophoto data with DEM data. Figure 3.30 shows a perspective

Figure 3.30 Perspective view of a rural area generated digitally by draping orthophoto image data over a digital
elevation model of the same area. (Courtesy University of Wisconsin-Madison, Environmental Remote Sensing Center,
and NASA Affiliated Research Center Program.)

Figure 3.31 Vertical stereopair (a) covering the ground area depicted in the perspective view shown in (b). The
image of each building face shown in (b) was extracted automatically from the photograph in which that face was
shown with the maximum relief displacement in the original block of aerial photographs covering the area.
(Courtesy University of Wisconsin-Madison, Campus Mapping Project.)

view of a rural area located near Madison, Wisconsin. This image was created by
draping digital orthophoto data over a DEM of the same area. Figure 3.31 shows
a stereopair (a) and a perspective view (b) of the Clinical Science Center, located
on the University of Wisconsin-Madison campus. The image of each building face
shown in (b) was extracted from the original digitized aerial photograph in which
that face was displayed with the greatest relief displacement, consistent with the
direction of the perspective view. This process is done
automatically from among all of the relevant photographs of the original block of
aerial photographs covering the area of interest.

Multi-Ray Photogrammetry

The term multi‐ray photogrammetry refers to any photogrammetric mapping
operation that exploits the redundancy (increased robustness and accuracy) afforded by
analyzing a large number of mutually overlapping photographs simultaneously.
This terminology is relatively new, but multi‐ray photogrammetry is not inher-
ently a new technology. Rather, it is an extension of existing principles and tech-
niques that has come about with the widespread adoption of digital cameras and
fully digital mapping work flows.
Most multi‐ray operations are based on aerial photographs that are acquired
with very high overlap (80–90%) and sidelap (as much as 60%). Such flight
patterns typically provide 12 to 15 rays imaging each point on the ground.
The frame interval rates and storage capacity of modern digital cameras facilitate
the collection of such coverage with relatively little marginal cost in comparison
to the traditional approach of obtaining stereo coverage with 60% overlap and
20–30% sidelap. The benefit afforded by the multi‐ray approach is not only increased
accuracy, but also more complete (near total) automation of many procedures.
One of the most important applications of multi‐ray imagery is in the process
of dense image matching. This involves using multiple rays from multiple images
to find conjugate (matching) locations in a stereomodel. Usually, on the order of
50% of all pixels in a model can be matched completely automatically in this
manner. In relatively uniform, low‐texture terrain this proportion drops below 50%,
but in highly structured areas, such as cities, match rates well above 50% are often
realized. This leads to very high point densities in the production of DEMs and point
clouds. For example, if a GSD of 10 cm is used to acquire the multi‐ray photo-
graphy, this equates to 100 pixels per square meter on the ground. Thus, 50%
accuracy in the image matching process would result in a point density of
50 points per square meter on the ground.
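The point density arithmetic above can be checked in a few lines (the 10-cm GSD and 50% match rate are the figures used in the example):

```python
gsd = 0.10                        # ground sample distance, m
pixels_per_m2 = (1.0 / gsd) ** 2  # 10 cm GSD -> 100 pixels per square meter
match_rate = 0.50                 # fraction of pixels successfully matched
points_per_m2 = pixels_per_m2 * match_rate
print(points_per_m2)              # 50 matched points per square meter
```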
Another important application of multi‐ray photogrammetry is the produc-
tion of true orthophotos. Earlier in this section we discussed the use of the back-
ward projection process to create a digital orthophoto using a single photograph
and a DEM. This process works very well for ground features that are at the
ground elevation of the DEM used. However, we also noted that elevated features

(e.g., buildings, trees, overpasses) that are not included in the DEM still manifest
relief displacement as a function of their elevation above ground and their image
distance from the principal point of the original photograph. Tall features located
at a distance from the principal point can also completely block or occlude the
appearance of ground areas that are in the “shadow” of these features during ray
projection. Figure 3.32 illustrates the nature of this problem and its solution

through the use of more than one original photograph to produce a true digital
orthophoto composite image.

Figure 3.32 Comparison among approaches taken to extract pixel brightness
numbers for a conventional orthophoto produced using a single photograph (a), a
true orthophoto using multiple photographs acquired with 60% overlap (b), and
a true orthophoto using multiple photographs acquired with 80% overlap (c). (After
Jensen, 2007.)
Shown in Figure 3.32a is the use of a single photograph to produce a con-
ventional orthophoto. The brightness value to be used to portray pixel a in the
orthophoto is obtained by starting with the ground (X, Y, Z) position of point a
known from the DEM, then using the collinearity equations to project up to the
photograph to determine the x, y photocoordinates of point a, and then using
resampling to interpolate an appropriate brightness value to use in the ortho-
photo to depict point a. This procedure works well for point a because there is no
vertical feature obstructing the ray between point a on the ground and the photo
exposure station. This is not the case for point b, which is in a ground area that is
obscured by the relief displacement of the nearby building. In this situation, the
brightness value that would be placed in the orthophoto for the ground position
of point b would be that of the roof of the building, and the side of the building
would be shown where the roof should be. The ground area near point b would
not be shown at all in the orthophoto. Clearly, the severity of such relief displace-
ment effects increases both with the height of the vertical feature involved and
the feature’s distance from the ground principal point.
Figure 3.32b illustrates the use of three successive aerial photographs
obtained along a flight line to mitigate the effects of relief displacement in the
digital orthophoto production process. In this case, the nominal overlap between
the successive photos is the traditional value of 60%. The outlines of ground‐
obstructing features are identified and recorded using traditional stereoscopic
feature extraction tools. The brightness values used to portray all other pixel posi-
tions in the orthophoto are automatically interpolated from the photograph
having the best view of each position. For point a the best view is from the closest
exposure station to that point, Exposure Station #3. For point b, the best view
is obtained from Exposure Station #1. The software used for the orthophoto com-
pilation process analyzes the DEM and feature data available for the model to
determine that the view of the ground for pixel b is obscured from Exposure Sta-
tion #2. In this manner, each pixel in the orthophoto composite is assigned the
brightness value from the corresponding position in the photograph acquired
from the closest exposure station affording an unobstructed view of the ground.
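The occlusion reasoning described above can be illustrated with a one‐dimensional toy model along a single flight line: a ground cell is occluded from an exposure station if the ray between them passes below any intervening surface‐model cell, and the pixel is assigned to the nearest station with a clear view. This is only a sketch with invented names, not the algorithm of any particular orthophoto package.

```python
def is_occluded(xs, zs, cell, x_exp, z_exp):
    """True if the ray from exposure station (x_exp, z_exp) to surface cell
    `cell` passes below an intervening cell of the 1-D surface model."""
    xg, zg = xs[cell], zs[cell]
    lo, hi = sorted((x_exp, xg))
    for i, (x, z) in enumerate(zip(xs, zs)):
        if i == cell or not (lo < x < hi):
            continue
        t = (x - x_exp) / (xg - x_exp)       # interpolate ray height at x
        if z > z_exp + t * (zg - z_exp):     # surface pokes above the ray
            return True
    return False

def best_station(xs, zs, cell, stations, z_exp):
    """Nearest exposure station with an unobstructed view of the cell."""
    clear = [x for x in stations if not is_occluded(xs, zs, cell, x, z_exp)]
    return min(clear, key=lambda x: abs(x - xs[cell])) if clear else None

# Flat terrain with a 20-m building occupying x = 4..5 m; ground cell at x = 6.
xs = list(range(11))
zs = [0.0] * 11
zs[4] = zs[5] = 20.0
print(best_station(xs, zs, 6, stations=[0.0, 10.0], z_exp=100.0))  # 10.0
```

In two dimensions the same test is run against the DSM grid along the line from each candidate exposure station to each orthophoto pixel.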
Figure 3.32c illustrates a multi‐ray solution to the true orthophoto production
process. In this case, the overlap along the flight strip of images is increased to
80% (or more). This results in several more closely spaced exposure stations
being available to cover a given study area. In this way the ray projections to each
pixel in the orthophoto become much more vertical and parallel, as if all rays are
projected nearly orthographically (directly straight downward) at every point in
the orthophoto. An automated DSM is used to create the true orthophoto. In such
images, building rooftops are shown in their correct planimetric location, directly
above the associated building foundation (with no lean). None of the sides of
buildings are shown, and the ground areas around all sides of buildings are
shown in their correct location.

3.11 FLIGHT PLANNING

Frequently, the objectives of a photographic remote sensing project can only be met
through procurement of new photography of a study area. These occasions can
arise for many reasons. For example, photography available for a particular area
could be outdated for applications such as land use mapping. In addition, available
photography may have been taken in the wrong season. For example, photography
acquired for topographic mapping is usually flown in the fall or spring to minimize
vegetative cover. This photography will likely be inappropriate for applications
involving vegetation analysis.
In planning the acquisition of new photography, there is always a trade-off
between cost and accuracy. At the same time, the availability, accuracy, and cost
of alternative data sources are continually changing as remote sensing technology
advances. This leads to such decisions as whether analog or digital photography
is appropriate. For many applications, high resolution satellite data may be an
acceptable and cost-effective alternative to aerial photography. Similarly, lidar
data might be used in lieu of, or in addition to, aerial photography. Key to making
such decisions is specifying the nature and accuracy of the end product(s)
required for the application at hand. For example, the required end products
might range from hardcopy prints to DEMs, planimetric and topographic maps,
thematic digital GIS datasets, and orthophotos, among many others.
The remainder of this discussion assumes that aerial photography has been
judged to best serve the needs of a given project, and the task at hand is to
develop a flight plan for acquiring the photography over the project’s study area.
As previously mentioned, flight planning software is generally used for this
purpose. Here we illustrate the basic computational considerations and proce-
dures embedded in such software by presenting two “manual” example solutions
to the flight planning process. We highlight the geometric aspects of preparing a
flight plan for both a film-based camera mission and a digital camera mission of
the same study area. Although we present two solutions using the same study
area, we do not mean to imply that the two mission designs yield photography of
identical quality and utility. They are simply presented as two representative
examples of the flight planning process.
Before we address the geometric aspects of photographic mission planning, we
stress that one of the most important parameters in an aerial mission is beyond
the control of even the best planner—the weather. In most areas, only a few days of
the year are ideal for aerial photography. In order to take advantage of clear
weather, commercial aerial photography firms will fly many jobs in a single day,
often at widely separated locations. Flights are usually scheduled between 10 a.m.
and 2 p.m. for maximum illumination and minimum shadow, although digital cam-
eras that provide high sensitivity under low light conditions can be used for mis-
sions conducted as late as sunset, or shortly thereafter, and under heavily overcast
conditions. However, as previously mentioned, mission timing is often optimized to
ensure strong GPS signals from a number of satellites, which may narrow the

acquisition time window. In addition, the mission planner may need to accom-
modate such mission-specific constraints as maximum allowable building lean in
orthophotos produced from the photography, occlusions in urban areas, specular
reflections over areas covered by water, vehicular traffic volumes at the time of ima-
ging, and civil and military air traffic control restrictions. Overall, a great deal of
time, effort, and expense go into the planning and execution of a photographic mis-
sion. In many respects, it is an art as well as a science.
The parameters needed for the geometric design of a film-based photographic
mission are (1) the focal length of the camera to be used, (2) the film format size,
(3) the photo scale desired, (4) the size of the area to be photographed, (5) the
average elevation of the area to be photographed, (6) the overlap desired, (7) the
sidelap desired, and (8) the ground speed of the aircraft to be used. When design-
ing a digital camera photographic mission, the required parameters are the same,
except the number and physical dimension of the pixels in the sensor array are
needed in lieu of the film format size, and the GSD for the mission is required
instead of a mission scale.
Based on the above parameters, the mission planner prepares computations
and a flight map that indicate to the flight crew (1) the flying height above datum
from which the photos are to be taken; (2) the location, direction, and number of
flight lines to be made over the area to be photographed; (3) the time interval
between exposures; (4) the number of exposures on each flight line; and (5) the
total number of exposures necessary for the mission.
When flight plans are computed manually, they are normally portrayed on a
map for the flight crew. However, old photography or even a satellite image may
be used for this purpose. The computations prerequisite to preparing flight plans
for a film-based and a digital camera mission are given in the following two
examples, respectively.

EXAMPLE 3.11

A study area is 10 km wide in the east–west direction and 16 km long in the north–south
direction (see Figure 3.33). A camera having a 152.4-mm-focal-length lens and a 230-mm
format is to be used. The desired photo scale is 1:25,000 and the nominal endlap and side-
lap are to be 60 and 30%. Beginning and ending flight lines are to be positioned along the
boundaries of the study area. The only map available for the area is at a scale of 1:62,500.
This map indicates that the average terrain elevation is 300 m above datum. Perform the
computations necessary to develop a flight plan and draw a flight map.
Solution
(a) Use north–south flight lines. Note that using north–south flight lines minimizes the
number of lines required and consequently the number of aircraft turns and realign-
ments necessary. (Also, flying in a cardinal direction often facilitates the identification
of roads, section lines, and other features that can be used for aligning the flight lines.)

Figure 3.33 A 10 × 16-km study area over which photographic coverage is to be obtained. (Author-
prepared figure.)

(b) Find the flying height above terrain (H′ = f/S) and add the mean site elevation to find
flying height above mean sea level:

    H = f/S + h_avg = (0.1524 m)/(1/25,000) + 300 m = 4110 m

(c) Determine ground coverage per image from film format size and photo scale:

    Coverage per photo = (0.23 m)/(1/25,000) = 5750 m on a side

(d) Determine ground separation between photos on a line for 40% advance per photo
(i.e., 60% endlap):

    0.40 × 5750 m = 2300 m between photo centers

(e) Assuming an aircraft speed of 160 km/hr, the time between exposures is

    (2300 m/photo)/(160 km/hr) × (3600 sec/hr)/(1000 m/km) = 51.75 sec (use 51 sec)

(f) Because the intervalometer can only be set in even seconds (this varies between
models), the number is rounded off. By rounding down, at least 60% coverage is
ensured. Recalculate the distance between photo centers using the reverse of the
above equation:

    51 sec/photo × 160 km/hr × (1000 m/km)/(3600 sec/hr) = 2267 m

(g) Compute the number of photos per 16-km line by dividing this length by the photo
advance. Add one photo to each end and round the number up to ensure coverage:

    (16,000 m/line)/(2267 m/photo) + 1 + 1 = 9.1 photos/line (use 10)

(h) If the flight lines are to have a sidelap of 30% of the coverage, they must be separated
by 70% of the coverage:

    0.70 × 5750 m coverage = 4025 m between flight lines

(i) Find the number of flight lines required to cover the 10-km study area width by divid-
ing this width by distance between flight lines (note: this division gives number of
spaces between flight lines; add 1 to arrive at the number of lines):

    (10,000 m width)/(4025 m/flight line) + 1 = 3.48 (use 4)

The adjusted spacing between lines for using four lines is

    (10,000 m)/(4 − 1 spaces) = 3333 m/space

Figure 3.34 Flight map for Example 3.11. (Lines indicate centers of each flight line to be
followed.) (Author-prepared figure.)

(j) Find the spacing of flight lines on the map (1:62,500 scale):

    3333 m × (1/62,500) = 53.3 mm

(k) Find the total number of photos needed:

    10 photos/line × 4 lines = 40 photos

(Note: The first and last flight lines in this example were positioned coincident with the
boundaries of the study area. This provision ensures complete coverage of the area under
the “better safe than sorry” philosophy. Often, a savings in film, flight time, and money is
realized by experienced flight crews by moving the first and last lines in toward the middle
of the study area.)
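For readers who prefer to check the arithmetic programmatically, the steps of Example 3.11 can be collected into a short script. The variable names and the simple floor/ceiling rounding are ours; the numbers reproduce those derived above.

```python
import math

# Mission parameters from Example 3.11
f, scale = 0.1524, 1 / 25000      # focal length (m); photo scale
h_avg = 300.0                     # mean terrain elevation (m)
fmt = 0.23                        # film format size (m)
speed = 160.0                     # aircraft ground speed (km/hr)
length, width = 16000.0, 10000.0  # study area dimensions (m)

H = f / scale + h_avg                     # flying height above datum (4110 m)
cover = fmt / scale                       # ground coverage per photo (5750 m)
advance = 0.40 * cover                    # 60% endlap (2300 m between centers)
t = math.floor(advance / (speed * 1000 / 3600))   # 51.75 s, rounded down to 51
advance = t * speed * 1000 / 3600         # recomputed advance (about 2267 m)
photos = math.ceil(length / advance + 2)  # photos per line (10)
spacing = 0.70 * cover                    # 30% sidelap (4025 m between lines)
lines = math.ceil(width / spacing + 1)    # flight lines (4)
print(H, t, photos, lines, photos * lines)
```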

The above computations would be summarized on a flight map as shown in
Figure 3.34. In addition, a set of detailed specifications outlining the material,
equipment, and procedures to be used for the mission would be agreed upon
prior to the mission. These specifications typically spell out the requirements and
tolerances for flying the mission, the form and quality of the products to be deliv-
ered, and the ownership rights to the original images. Among other things, mis-
sion specifications normally include such details as mission timing, ground
control and GPS/IMU requirements, camera calibration characteristics, film and
filter type, exposure conditions, scale tolerance, endlap, sidelap, tilt and image-to-
image flight line orientation (crab), photographic quality, product indexing, and
product delivery schedules.

EXAMPLE 3.12

Assume that it is desired to obtain panchromatic digital camera coverage of the same study
area described in the previous example. Also assume that a GSD of 25 cm is required given
the mapping accuracy requirements of the mission. The digital camera to be used for the
mission has a panchromatic CCD that includes 20,010 pixels in the across‐track direction,
and 13,080 pixels in the along‐track direction. The physical size of each pixel is 5.2 μm
(0.0052 mm). The camera is fitted with an 80‐mm‐focal‐length lens. As in the previous
example, stereoscopic coverage is required that has 60% overlap and 30% sidelap. The air-
craft to be used to conduct the mission will be operated at a nominal speed of 260 km/hr.
Perform the computations necessary to develop a preliminary flight plan for this mission in
order to estimate flight parameters.

Solution
(a) As in the previous example, use north–south flight lines.

(b) Find the flying height above terrain (H′ = (GSD × f)/pd) and add the mean elevation to
find the flying height above mean sea level:

    H = (GSD × f)/pd + h_avg = (0.25 m × 80 mm)/(0.0052 mm) + 300 m = 4146 m
(c) Determine the across‐track ground coverage of each image:
From Eq. 2.12, the across‐track sensor dimension is

    dxt = nxt × pd = 20,010 × 0.0052 mm = 104.05 mm

Dividing by the image scale, the across‐track ground coverage distance is

    dxt × H′/f = (104.05 mm × 3846 m)/(80 mm) = 5002 m

(d) Determine the along‐track ground coverage of each image:
From Eq. 2.13, the along‐track sensor dimension is

    dat = nat × pd = 13,080 × 0.0052 mm = 68.02 mm

Dividing by the image scale, the along‐track ground coverage distance is

    dat × H′/f = (68.02 mm × 3846 m)/(80 mm) = 3270 m
(e) Determine the ground separation between photos along‐track for 40% advance (i.e.,
60% endlap):

    0.40 × 3270 m = 1308 m

(f) Determine the interval between exposures for a flight speed of 260 km/hr:

    (1308 m/photo)/(260 km/hr) × (3600 sec/hr)/(1000 m/km) = 18.11 sec (use 18)
(g) Compute the number of photos per 16-km line by dividing the line length by the photo
advance. Add one photo to each end and round up to ensure coverage:

    (16,000 m/line)/(1308 m/photo) + 1 + 1 = 14.2 photos/line (use 15)

(h) If the flight lines are to have sidelap of 30% of the across‐track coverage, they must be
separated by 70% of the coverage:

    0.70 × 5002 m = 3501 m
(i) Find the number of flight lines required to cover the 10‐km study area width by divid-
ing this width by the distance between flight lines (Note: This division gives number
of spaces between flight lines; add 1 to arrive at the number of lines):

    (10,000 m)/(3501 m/line) + 1 = 3.86 flight lines

Use 4 flight lines.
(j) Find the total number of photos needed:

    15 photos/line × 4 lines = 60 photos
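Example 3.12 can be scripted the same way. Note that the ground coverage expressions simplify: the sensor dimension divided by the image scale equals the pixel count times the GSD, and that shortcut is used below. The variable names and rounding choices are ours.

```python
import math

gsd = 0.25                   # required ground sample distance (m)
pd_mm = 0.0052               # physical pixel size (mm)
f_mm = 80.0                  # focal length (mm)
n_xt, n_at = 20010, 13080    # across- and along-track pixel counts
h_avg, speed = 300.0, 260.0  # mean elevation (m); ground speed (km/hr)

H_prime = gsd * f_mm / pd_mm             # flying height above terrain (~3846 m)
H = H_prime + h_avg                      # flying height above datum (~4146 m)
cover_xt = n_xt * gsd                    # across-track coverage (~5002 m)
cover_at = n_at * gsd                    # along-track coverage (3270 m)
advance = 0.40 * cover_at                # 60% endlap (1308 m between centers)
t = advance / (speed * 1000 / 3600)      # ~18.1 s between exposures
photos = math.ceil(16000 / advance + 2)  # photos per line (15)
spacing = 0.70 * cover_xt                # 30% sidelap (~3502 m between lines)
lines = math.ceil(10000 / spacing + 1)   # flight lines (4)
print(H, t, photos, lines, photos * lines)
```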

As is the case with the acquisition of analog aerial photography, a flight plan for
acquiring digital photography is accompanied by a set of detailed specifications stating the
requirements and tolerances for flying the mission, preparing image products, ownership
rights, and other considerations. These specifications would generally parallel those used
for film‐based missions. However, they would also address those specific considerations
that are related to digital data capture. These include, but are not limited to, use of single
versus multiple camera heads, GSD tolerance, radiometric resolution of the imagery, geo-
metric and radiometric image pre‐processing requirements, and image compression and
storage formats. Overall, the goal of such specifications is to ensure not only that the digital
data resulting from the mission are of high quality, but also that they are compatible
with the hardware and software to be used to store, process, and supply derivative
products from the mission imagery.

3.12 CONCLUSION

As we have seen in this chapter, photogrammetry is a very large and rapidly
changing subject. Historically, most photogrammetric operations were analog in
nature, involving the physical projection and measurement of hardcopy images
with the aid of precise optical or mechanical equipment. Today, all-digital photo-
grammetric workflows are the norm. In addition, most softcopy photogrammetric
systems also readily provide some functionality for handling various forms of
non-photographic imagery (e.g., lidar, line scanner data, satellite imagery). With
links to GIS and image processing software, modern softcopy photogrammetric
systems represent highly integrated systems for spatial data capture, manipula-
tion, analysis, storage, display, and output.
