3 BASIC PRINCIPLES OF PHOTOGRAMMETRY
3.1 INTRODUCTION
the approach we adopt in this chapter. Hence, our objective in this discussion is
to not only prepare the reader to be able to make basic measurements from hard-
copy photographic images, but also to understand the underlying principles of
modern digital (softcopy) photogrammetry. We stress aerial photogrammetric
techniques and procedures in this discussion, but the same general principles
hold for terrestrial (ground-based) and space-based operations as well.
In this chapter, we introduce only the most basic aspects of the broad subject
of photogrammetry. (More comprehensive and detailed treatment of the subject of
photogrammetry is available in such references as ASPRS, 2004; Mikhail et al.,
2001; and Wolf et al., 2013.) We limit our discussion to the following photogram-
metric activities.
to the angular tilts (ω, φ, κ) present when the photographs were taken.
Each of the projectors can also be translated in x, y, and z such that a
reduced‐size model is created that exactly replicates the exterior orienta-
tion of each of the photographs comprising the stereopair. (The scale of
the resulting stereomodel is determined by the “air base” distance
between the projectors chosen by the instrument operator.) When
viewed stereoscopically, the model can be used to prepare an analog or
digital planimetric map having no tilt or relief distortions. In addition,
topographic contours can be integrated with the planimetric data and
the height of individual features appearing in the model can be deter-
mined.
Whereas a stereoplotter is designed to transfer map information,
without distortions, from stereo photographs, a similar device can be
used to transfer image information, with distortions removed. The resul-
ting undistorted image is called an orthophotograph (or orthophoto).
Orthophotos combine the geometric utility of a map with the extra “real-
world image” information provided by a photograph. The process of
creating an orthophoto depends on the existence of a reliable DEM for
the area being mapped. The DEMs are usually prepared photo-
grammetrically as well. In fact, photogrammetric workstations generally
provide the integrated functionality for such tasks as generating DEMs,
digital orthophotos, topographic maps, perspective views, and “fly-
throughs,” as well as the extraction of spatially referenced GIS data in
two or three dimensions.
8. Preparing a flight plan to acquire vertical aerial photography. When-
ever new photographic coverage of a project area is to be obtained, a photo-
graphic flight mission must be planned. As we will discuss, mission planning
software highly automates this process. However, most readers of this
book are, or will become, consumers of image data rather than providers.
Such individuals likely will not have direct access to flight planning software
and will appropriately rely on professional data suppliers to jointly design a
mission to meet their needs. There are also cases where cost and logistics
might dictate that the data consumer and the data provider are one and the same!
Given the above, it is important for image analysts to understand at least
the basic rudiments of mission planning in order to facilitate such activities as preliminary estimation of data volume (as it influences both data
collection and analysis), choosing among alternative mission parameters,
and ensuring a reasonable fit between the information needs of a given
project and the data collected to meet those needs. Decisions have to be made
relative to such mission elements as image scale or ground sample distance
(GSD), camera format size and focal length, and desired image overlap.
The analyst can then determine such geometric factors as the appropriate fly-
ing height, the distance between image centers, the direction and spacing of
flight lines, and the total number of images required to cover the project area.
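To make these trade-offs concrete, the basic arithmetic can be sketched in a few lines. Every parameter value below is an illustrative assumption (not from any particular mission), and the photo-count estimate is deliberately simplified; real planning software adds exposures beyond the block boundaries and handles irregular project areas.

```python
# Sketch of basic flight-planning arithmetic for vertical aerial photography.
# All parameter values are illustrative assumptions, not mission recommendations.

def flight_plan(scale_denom, format_mm, focal_mm, endlap, sidelap,
                area_len_m, area_wid_m):
    """Estimate mission geometry from photo scale, film format, and overlap."""
    photo_ground_m = format_mm / 1000.0 * scale_denom    # ground side of one photo
    flying_height_m = focal_mm / 1000.0 * scale_denom    # H' from f and scale
    air_base_m = photo_ground_m * (1.0 - endlap)         # spacing between exposures
    line_spacing_m = photo_ground_m * (1.0 - sidelap)    # spacing between flight lines
    photos_per_line = int(area_len_m / air_base_m) + 1   # simplified count
    num_lines = int(area_wid_m / line_spacing_m) + 1
    return flying_height_m, air_base_m, line_spacing_m, photos_per_line * num_lines

H, B, W, n = flight_plan(scale_denom=20000, format_mm=230, focal_mm=152.4,
                         endlap=0.60, sidelap=0.30,
                         area_len_m=10000, area_wid_m=8000)
print(H, B, W, n)   # → flying height ≈ 3048 m, base ≈ 1840 m, spacing ≈ 3220 m
```

Changing any one parameter (say, endlap from 60% to 80% for more robust stereoscopic coverage) immediately shows its effect on the number of exposures and hence on data volume.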
Figure 3.1 IGI Penta-DigiCAM system: (a) early version of system with camera back removed to illustrate the angular orientation of the nadir and oblique cameras (lower oblique camera partially obscured by IMU mounted to camera at center-bottom); (b) Maltese Cross ground coverage pattern resulting from the system; (c) later version of system installed in a gyro-stabilized mount with data storage units and GNSS/IMU system mounted on top. (Courtesy of IGI GmbH.)
3.2 BASIC GEOMETRIC CHARACTERISTICS OF AERIAL PHOTOGRAPHS

Most vertical aerial photographs are taken with frame cameras along flight lines,
or flight strips. The line traced on the ground directly beneath the aircraft during
acquisition of photography is called the nadir line. This line connects the image cen-
ters of the vertical photographs. Figure 3.2 illustrates the typical character of the
photographic coverage along a flight line. Successive photographs are generally
taken with some degree of endlap. Not only does this lapping ensure total coverage
along a flight line, but an endlap of at least 50% is essential for total stereoscopic
coverage of a project area. Stereoscopic coverage consists of adjacent pairs of
Figure 3.2 Photographic coverage along a flight strip: (a) conditions during exposure; (b) resulting photography.
Figure 3.4 shows Large Format Camera photographs of Mt. Washington and
vicinity, New Hampshire. These stereopairs illustrate the effect of varying the per-
centage of photo overlap and thus the base–height ratio of the photographs. These
photographs were taken from the Space Shuttle at an orbital altitude of 364 km.
The stereopair in (a) has a base–height ratio of 0.30. The stereopair in (b) has a
base–height ratio of 1.2 and shows much greater apparent relief (greater vertical
exaggeration) than (a).
This greater apparent relief often aids in visual image interpretation. Also, as
we will discuss later, many photogrammetric mapping operations depend upon
accurate determination of the position at which rays from two or more photo-
graphs intersect in space. Rays associated with larger base–height ratios intersect
at larger (closer to being perpendicular) angles than do those associated with the
smaller (closer to being parallel) angles associated with smaller base–height
ratios. Thus larger base–height ratios result in more accurate determination of
ray intersection positions than do smaller base–height ratios.
Most project sites are large enough for multiple-flight-line passes to be made
over the area to obtain complete stereoscopic coverage. Figure 3.5 illustrates how
adjacent strips are photographed. On successive flights over the area, adjacent strips
have a sidelap of approximately 30%. Multiple strips comprise what is called a block
of photographs.
Figure 3.4 Large Format Camera stereopairs, Mt. Washington and vicinity, New Hampshire; scale 1:800,000 (1.5 times enlargement from original image scale): (a) 0.30 base–height ratio; (b) 1.2 base–height ratio. (Courtesy NASA and ITEK Optical Systems.)
is missed (e.g., due to cloud cover), guidance is provided back to the area to be
reflown. Such robust flight management and control systems provide a high
degree of automation to the navigation and guidance of the aircraft and the
operation of the camera during a typical mission.
The basic geometric elements of a hardcopy vertical aerial photograph taken with a
single-lens frame camera are depicted in Figure 3.6. Light rays from terrain objects
are imaged in the plane of the film negative after intersecting at the camera lens
exposure station, L. The negative is located behind the lens at a distance equal to
the lens focal length, f. Assuming the size of a paper print positive (or film positive)
is equal to that of the negative, positive image positions can be depicted dia-
grammatically in front of the lens in a plane located at a distance f. This rendition is
appropriate in that most photo positives used for measurement purposes are con-
tact printed, resulting in the geometric relationships shown.
The x and y coordinate positions of image points are referenced with respect
to axes formed by straight lines joining the opposite fiducial marks (see Figure
2.24) recorded on the positive. The x axis is arbitrarily assigned to the fiducial
axis most nearly coincident with the line of flight and is taken as positive in the
forward direction of flight. The positive y axis is located 90° counterclockwise
from the positive x axis. Because of the precision with which the fiducial marks
and the lens are placed in a metric camera, the photocoordinate origin, o, can
be assumed to coincide exactly with the principal point, the intersection of the
lens optical axis and the film plane. The point where the prolongation of the
optical axis of the camera intersects the terrain is referred to as the ground prin-
cipal point, O. Images for terrain points A, B, C, D, and E appear geometrically
reversed on the negative at a′, b′, c′, d′, and e′ and in proper geometric relationship on the positive at a, b, c, d, and e. (Throughout this chapter we refer to
points on the image with lowercase letters and corresponding points on the ter-
rain with uppercase letters.)
The xy photocoordinates of a point are the perpendicular distances from the xy
coordinate axes. Points to the right of the y axis have positive x coordinates and
points to the left have negative x coordinates. Similarly, points above the x axis have
positive y coordinates and those below have negative y coordinates.
Photocoordinate Measurement
coordinate system and the camera’s fiducial axis coordinate system is determined
through the development of a mathematical coordinate transformation between the
two systems. This process requires that some points have their coordinates known
in both systems. The fiducial marks are used for this purpose in that their positions
in the focal plane are determined during the calibration of the camera, and they can
be readily measured in the row and column coordinate system. (Appendix B con-
tains a description of the mathematical form of the affine coordinate transformation,
which is often used to interrelate the fiducial and row and column coordinate
systems.)
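As a minimal numerical illustration of such a transformation (not the full development of Appendix B), the sketch below fits a six-parameter affine transformation by least squares. The four fiducial positions and their measured row/column coordinates are invented for illustration.

```python
import numpy as np

# Fit an affine transform x = a0 + a1*col + a2*row (and likewise for y) that maps
# measured row/column positions to calibrated fiducial coordinates (mm).
# All coordinate values here are invented for illustration.
fiducial_xy = np.array([[-110.0, 110.0], [110.0, 110.0],
                        [110.0, -110.0], [-110.0, -110.0]])   # mm, from calibration
pixel_rc = np.array([[100.0, 120.0], [100.0, 11120.0],
                     [11100.0, 11120.0], [11100.0, 120.0]])   # measured (row, col)

rows, cols = pixel_rc[:, 0], pixel_rc[:, 1]
A = np.column_stack([np.ones_like(rows), cols, rows])          # design matrix
coef_x, *_ = np.linalg.lstsq(A, fiducial_xy[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, fiducial_xy[:, 1], rcond=None)

def to_fiducial(row, col):
    """Transform a row/column measurement into fiducial-axis coordinates (mm)."""
    x = coef_x[0] + coef_x[1] * col + coef_x[2] * row
    y = coef_y[0] + coef_y[1] * col + coef_y[2] * row
    return x, y
```

With four fiducials and six unknowns, the least-squares fit is overdetermined, and its residuals serve as a useful check on the quality of the fiducial measurements.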
Irrespective of what approach is used to measure photocoordinates, these
measurements contain errors of varying sources and magnitudes. These errors
stem from factors such as camera lens distortions, atmospheric refraction, earth
curvature, failure of the fiducial axes to intersect at the principal point, and
shrinkage or expansion of the photographic material on which measurements are
made. Sophisticated photogrammetric analyses include corrections for all these
errors. For simple measurements made on paper prints, such corrections are
usually not employed because errors introduced by slight tilt in the photography
will outweigh the effect of the other distortions.
3.3 PHOTOGRAPHIC SCALE

One of the most fundamental and frequently used geometric characteristics of hardcopy aerial photographs is that of photographic scale. A photograph “scale,” like a
map scale, is an expression that states that one unit (any unit) of distance on a pho-
tograph represents a specific number of units of actual ground distance. Scales may
be expressed as unit equivalents, representative fractions, or ratios. For example, if
1 mm on a photograph represents 25 m on the ground, the scale of the photograph can be expressed as 1 mm = 25 m (unit equivalents), or 1/25,000 (representative fraction), or 1:25,000 (ratio).
Quite often the terms “large scale” and “small scale” are confused by those not
working with expressions of scale on a routine basis. For example, which photo-
graph would have the “larger” scale—a 1:10,000 scale photo covering several city
blocks or a 1:50,000 photo that covers an entire city? The intuitive answer is often
that the photo covering the larger “area” (the entire city) is the larger scale product.
This is not the case. The larger scale product is the 1:10,000 image because it shows
ground features at a larger, more detailed, size. The 1:50,000 scale photo of the
entire city would render ground features at a much smaller, less detailed size.
Hence, in spite of its larger ground coverage, the 1:50,000 photo would be termed
the smaller scale product.
A convenient way to make scale comparisons is to remember that the
same objects are smaller on a “smaller” scale photograph than on a “larger” scale
photo. Scale comparisons can also be made by comparing the magnitudes of the representative fractions involved. (That is, 1/50,000 is smaller than 1/10,000.)
The most straightforward method for determining photo scale is to measure the
corresponding photo and ground distances between any two points. This requires
that the points be mutually identifiable on both the photo and a map. The scale S is
then computed as the ratio of the photo distance d to the ground distance D,
S = photo scale = photo distance/ground distance = d/D    (3.1)
EXAMPLE 3.1
Assume that two road intersections shown on a photograph can be located on a 1:25,000
scale topographic map. The measured distance between the intersections is 47.2 mm on the
map and 94.3 mm on the photograph. (a) What is the scale of the photograph? (b) At that
scale, what is the length of a fence line that measures 42.9 mm on the photograph?
Solution

(a) The ground distance between the intersections is determined from the map scale as

0.0472 m × 25,000 = 1180 m

By direct ratio, the photo scale is

S = 0.0943 m/1180 m = 1/12,513, or 1:12,500

(Note that because only three significant, or meaningful, figures were present in the original measurements, only three significant figures are indicated in the final result.)

(b) The ground length of the 42.9-mm fence line is

D = d/S = 0.0429 m × 12,500 = 536.25 m, or 536 m
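The computation in Example 3.1 is easily scripted. In the sketch below the helper names are ours; Eq. 3.1 is applied exactly as in the example.

```python
def photo_scale_denominator(photo_dist_m, map_dist_m, map_scale_denom):
    """Eq. 3.1: S = d/D, returned as the scale denominator (ground/photo)."""
    ground_dist_m = map_dist_m * map_scale_denom   # true ground distance via the map
    return ground_dist_m / photo_dist_m

def ground_length_m(photo_dist_m, scale_denom):
    """Convert a photo measurement to a ground length at the given scale."""
    return photo_dist_m * scale_denom

S_denom = photo_scale_denominator(0.0943, 0.0472, 25000)
print(round(S_denom))                  # → 12513, i.e., approximately 1:12,500
print(ground_length_m(0.0429, 12500))  # fence line, ≈ 536 m
```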
For a vertical photograph taken over flat terrain, scale is a function of the
focal length f of the camera used to acquire the image and the flying height above
the ground, H′, from which the image was taken. In general,

Scale = camera focal length/flying height above terrain = f/H′    (3.2)
Figure 3.7 illustrates how we arrive at Eq. 3.2. Shown in this figure is the side view
of a vertical photograph taken over flat terrain. Exposure station L is at an aircraft
flying height H above some datum, or arbitrary base elevation. The datum most fre-
quently used is mean sea level. If flying height H and the elevation of the terrain h
are known, we can determine H′ by subtraction (H′ = H − h). If we now consider terrain points A, O, and B, they are imaged at points a′, o′, and b′ on the negative film and at a, o, and b on the positive print. We can derive an expression for photo scale by observing similar triangles Lao and LAO, and the corresponding photo (ao) and ground (AO) distances. That is,

S = ao/AO = f/H′    (3.3)
Equation 3.3 is identical to our scale expression of Eq. 3.2. Yet another way of
expressing these equations is
S = f/(H − h)    (3.4)
Equation 3.4 is the most commonly used form of the scale equation.
The most important principle expressed by Eq. 3.4 is that photo scale is a func-
tion of terrain elevation h. Because of the level terrain, the photograph depicted in
Figure 3.7 has a constant scale. However, photographs taken over terrain of varying
elevation will exhibit a continuous range of scales associated with the variations in ter-
rain elevation. Likewise, tilted and oblique photographs have nonuniform scales.
EXAMPLE 3.3
Assume that a vertical photograph was taken at a flying height of 5000 m above sea level
using a camera with a 152-mm-focal-length lens. (a) Determine the photo scale at points A
and B, which lie at elevations of 1200 and 1960 m. (b) What ground distance corresponds
to a 20.1-mm photo distance measured at each of these elevations?
Solution

(a) By Eq. 3.4

SA = f/(H − hA) = 0.152 m/(5000 m − 1200 m) = 1/25,000, or 1:25,000

SB = f/(H − hB) = 0.152 m/(5000 m − 1960 m) = 1/20,000, or 1:20,000
Figure 3.8 Comparative geometry of (a) a map and (b) a vertical aerial photograph. Note differences in size, shape, and location of the two trees.
map results from projecting vertical rays from ground points to the map sheet (at a
particular scale). A photograph results from projecting converging rays through a
common point within the camera lens. Because of the nature of this projection, any
variations in terrain elevation will result in scale variation and displaced image
positions.
On a map we see a top view of objects in their true relative horizontal positions.
On a photograph, areas of terrain at the higher elevations lie closer to the camera at
the time of exposure and therefore appear larger than corresponding areas lying at
lower elevations. Furthermore, the tops of objects are always displaced from their
bases (Figure 3.8). This distortion is called relief displacement and causes any object
standing above the terrain to “lean” away from the principal point of a photograph
radially. We treat the subject of relief displacement in Section 3.6.
By now the reader should see that the only circumstance wherein an aerial
photograph can be treated as if it were a map directly is in the case of a vertical
photograph imaging uniformly flat terrain. This is rarely the case in practice, and
the image analyst must always be aware of the potential geometric distortions
introduced by such influences as tilt, scale variation, and relief displacement. Fail-
ure to deal with these distortions will often lead, among other things, to a lack of
geometric “fit” among image-derived and nonimage data sources in a GIS. How-
ever, if these factors are properly addressed photogrammetrically, extremely reli-
able measurements, maps, and GIS products can be derived from aerial
photography.
3.4 GROUND COVERAGE OF AERIAL PHOTOGRAPHS

The ground coverage of a photograph is, among other things, a function of camera format size. For example, an image taken with a camera having a 230 × 230-mm format (on 240-mm film) has about 17.5 times the ground area coverage of an image of equal scale taken with a camera having a 55 × 55-mm format (on 70-mm film) and about 61 times the ground area coverage of an image of equal scale taken with a camera having a 24 × 36-mm format (on 35-mm film). As with photo scale, the ground coverage of photography obtained with any given format is a function of focal length and flying height above ground, H′. For a constant flying height, the
width of the ground area covered by a photo varies inversely with focal length. Con-
sequently, photos taken with shorter focal length lenses have larger areas of cover-
age (and smaller scales) than do those taken with longer focal length lenses. For any
given focal length lens, the width of the ground area covered by a photo varies
directly with flying height above terrain, with image scale varying inversely with fly-
ing height.
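The coverage ratios quoted above are quick to verify. Because the comparison is at equal photo scale, ground-area coverage is proportional to format area alone; a sketch (the 1:20,000 scale in the last line is an illustrative assumption):

```python
# Ground-area coverage at equal photo scale is proportional to format area.
large = 230 * 230        # mm^2, large-format frame
medium = 55 * 55         # mm^2, 70-mm film format
small = 24 * 36          # mm^2, 35-mm film format

print(large / medium)    # ≈ 17.5, as stated in the text
print(large / small)     # ≈ 61, as stated in the text

# Ground width of a single photo: format side (m) times the scale denominator.
scale_denom = 20000      # illustrative scale, 1:20,000
print(0.230 * scale_denom, "m across for the 230-mm format")
```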
The effect that flying height has on ground coverage and image scale is illu-
strated in Figures 3.9a, b, and c. These images were all taken over Chattanooga,
Figure 3.9 (a) Scale 1:210,000 vertical aerial photograph showing Chattanooga, TN. This figure is a 1.75× reduction of an original photograph taken with f = 152.4 mm from 18,300 m flying height. (NASA photograph.) (b) Scale 1:35,000 vertical aerial photograph providing coverage of area outlined in (a). This figure is a 1.75× reduction of an original photograph taken with f = 152.4 mm from 3050 m flying height. (c) Scale 1:10,500 vertical aerial photograph providing coverage of area outlined in (b). This figure is a 1.75× reduction of an original photograph taken with f = 152.4 mm from 915 m flying height. (Courtesy Mapping Services Branch, Tennessee Valley Authority.)
Tennessee, with the same camera type equipped with the same focal length lens
but from three different altitudes. Figure 3.9a is a high-altitude, small-scale image
showing virtually the entire Chattanooga metropolitan area. Figure 3.9b is a
lower altitude, larger scale image showing the ground area outlined in Figure 3.9a.
Figure 3.9c is a yet lower altitude, larger scale image of the area outlined in
Figure 3.9b. Note the trade-offs between the ground area covered by an image and
the object detail available in each of the photographs.
3.5 AREA MEASUREMENT
The process of measuring areas using aerial photographs can take on many forms.
The accuracy of area measurement is a function of not only the measuring device
used, but also the degree of image scale variation due to relief in the terrain and tilt
in the photography. Although large errors in area determinations can result even
EXAMPLE 3.4
A rectangular agricultural field measures 8.65 cm long and 5.13 cm wide on a vertical pho-
tograph having a scale of 1:20,000. Find the area of the field at ground level.
Solution

Ground length = photo length × 1/S = 0.0865 m × 20,000 = 1730 m

Ground width = photo width × 1/S = 0.0513 m × 20,000 = 1026 m

Ground area = 1730 m × 1026 m = 1,774,980 m² = 177 ha
EXAMPLE 3.5
The area of a lake is 52.2 cm2 on a 1:7500 vertical photograph. Find the ground area of the
lake.
Solution

Ground area = photo area × 1/S² = 0.00522 m² × 7500² = 293,625 m² = 29.4 ha
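Examples 3.4 and 3.5 rest on the same rule: lengths convert by the scale denominator, and areas by its square. A minimal sketch reproducing both:

```python
def ground_area_m2(photo_area_m2, scale_denom):
    """Areas convert by the square of the scale denominator (1/S^2)."""
    return photo_area_m2 * scale_denom ** 2

# Example 3.4: 8.65 cm x 5.13 cm field on a 1:20,000 photo
field = ground_area_m2(0.0865 * 0.0513, 20000)
print(field, field / 10000)     # ≈ 1,774,980 m², ≈ 177 ha

# Example 3.5: 52.2 cm² lake on a 1:7500 photo
lake = ground_area_m2(52.2e-4, 7500)
print(lake, lake / 10000)       # ≈ 293,625 m², ≈ 29.4 ha
```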
Numerous methods can be used to measure the area of irregularly shaped fea-
tures on a photograph. One of the simplest techniques employs a transparent grid
overlay consisting of lines forming rectangles or squares of known area. The grid is
placed over the photograph and the area of a ground unit is estimated by counting
grid units that fall within the unit to be measured. Perhaps the most widely used grid
overlay is a dot grid (Figure 3.10). This grid, composed of uniformly spaced dots, is
superimposed over the photo, and the dots falling within the region to be measured
are counted. From knowledge of the dot density of the grid, the photo area of the
region can be computed.
EXAMPLE 3.6
A flooded area is covered by 129 dots on a 25-dot/cm2 grid on a 1:20,000 vertical aerial pho-
tograph. Find the ground area flooded.
Solution

Dot density = (1 cm²/25 dots) × 20,000² = 16,000,000 cm²/dot = 0.16 ha/dot

Ground area = 129 dots × 0.16 ha/dot = 20.6 ha
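The dot-density conversion of Example 3.6 can be sketched as follows (function name ours):

```python
def dot_grid_area_ha(dot_count, dots_per_cm2, scale_denom):
    """Ground area (ha) from a dot-grid count on a vertical photo of known scale."""
    photo_cm2_per_dot = 1.0 / dots_per_cm2                     # photo area per dot
    ground_cm2_per_dot = photo_cm2_per_dot * scale_denom ** 2  # areas scale by 1/S^2
    ha_per_dot = ground_cm2_per_dot / 1e8                      # 1 ha = 10^8 cm^2
    return dot_count * ha_per_dot

print(dot_grid_area_ha(129, 25, 20000))   # ≈ 20.6 ha, matching Example 3.6
```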
The dot grid is an inexpensive tool and its use requires little training. When
numerous regions are to be measured, however, the counting procedure becomes
quite tedious. An alternative technique is to use a digitizing tablet. These devices are
interfaced with a computer such that area determination simply involves tracing
around the boundary of the region of interest and the area can be read out directly.
When photographs are available in softcopy format, area measurement often
involves digitizing from a computer monitor using a mouse or other form of cursor
control. The process of digitizing directly from a computer screen is called heads-up
digitizing because the image analyst can view the original image and the digitized
features being compiled simultaneously in one place. The heads-up, or on-screen,
approach is not only more comfortable, it also affords the ability to digitally zoom
in on features to be digitized, and it is much easier to detect mistakes made while
pointing at the digitized features and to perform any necessary remeasurement.
3.6 RELIEF DISPLACEMENT OF VERTICAL FEATURES

Figure 3.11 Vertical photographs of the Watts Bar Nuclear Power Plant Site, near Kingston, TN. In (a) the two plant cooling towers appear near the principal point and exhibit only slight relief displacement. The towers manifest severe relief displacement in (b). (Courtesy Mapping Services Branch, Tennessee Valley Authority.)
large cooling towers adjacent to the plant. In (a) these towers appear nearly in top
view because they are located very close to the principal point of this photograph.
However, the towers manifest some relief displacement because the top tower
appears to lean somewhat toward the upper right and the bottom tower toward the
lower right. In (b) the towers are shown at a greater distance from the principal
point. Note the increased relief displacement of the towers. We now see more of a
“side view” of the objects because the images of their tops are displaced farther than
the images of their bases. These photographs illustrate the radial nature of relief
displacement and the increase in relief displacement with an increase in the radial
distance from the principal point of a photograph.
The geometric components of relief displacement are illustrated in Figure 3.12,
which shows a vertical photograph imaging a tower. The photograph is taken from
flying height H above datum. When considering the relief displacement of a vertical
D/h = R/H

and, expressed in terms of measurements on the photograph,

d = rh/H    (3.6)

where

d = relief displacement
r = radial distance on the photograph from the principal point to the displaced image point
h = height above datum of the object point
H = flying height above the same datum chosen to reference h
Equation 3.6 also indicates that relief displacement increases with the feature
height h. This relationship makes it possible to indirectly measure heights of
objects appearing on aerial photographs. By rearranging Eq. 3.6, we obtain
h = dH/r    (3.7)
To use Eq. 3.7, both the top and base of the object to be measured must be
clearly identifiable on the photograph and the flying height H must be known.
If this is the case, d and r can be measured on the photograph and used to calculate
the object height h. (When using Eq. 3.7, it is important to remember that H must
be referenced to the elevation of the base of the feature, not to mean sea level.)
EXAMPLE 3.7
For the photo shown in Figure 3.12, assume that the relief displacement for the tower at A
is 2.01 mm, and the radial distance from the center of the photo to the top of the tower is
56.43 mm. If the flying height is 1220 m above the base of the tower, find the height of the
tower.
Solution

By Eq. 3.7

h = dH/r = (2.01 mm × 1220 m)/56.43 mm = 43.4 m
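Equation 3.7 is simple enough to script directly; this sketch checks Example 3.7. Note that H must be referenced to the base of the feature, and that d and r need only share the same units.

```python
def object_height_m(d_mm, r_mm, H_m):
    """Eq. 3.7: h = d*H/r. H is the flying height above the base of the object."""
    return d_mm * H_m / r_mm   # the mm units of d and r cancel

print(object_height_m(d_mm=2.01, r_mm=56.43, H_m=1220))   # ≈ 43.4 m
```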
Figure 3.13 Relief displacement on a photograph taken over varied terrain: (a) displacement of terrain points; (b) distortion of horizontal angles measured on photograph.
When the relief displacements are introduced, the resulting line ab has a con-
siderably altered length and orientation.
Angles are also distorted by relief displacements. In Figure 3.13b, the horizontal ground angle ACB is accurately expressed by a′cb′ on the photo. Due to the displacements, the distorted angle acb will appear on the photograph. Note that, because of the radial nature of relief displacements, angles about the origin of the photo (such as aob) will not be distorted.
Relief displacement can be corrected for by using Eq. 3.6 to compute its mag-
nitude on a point-by-point basis and then laying off the computed displacement
distances radially (in reverse) on the photograph. This procedure establishes the
datum-level image positions of the points and removes the relief distortions,
resulting in planimetrically correct image positions at datum scale. This scale can be determined from the flying height above datum (S = f/H). Ground lengths,
directions, angles, and areas may then be directly determined from these cor-
rected image positions.
EXAMPLE 3.8
Referring to the vertical photograph depicted in Figure 3.13, assume that the radial dis-
tance ra to point A is 63.84 mm and the radial distance rb to point B is 62.65 mm. Flying
height H is 1220 m above datum, point A is 152 m above datum, and point B is 168 m
below datum. Find the radial distance and direction one must lay off from points a and b to plot a′ and b′.

Solution

By Eq. 3.6

da = ra ha/H = (63.84 mm × 152 m)/1220 m = 7.95 mm (plot inward)

db = rb hb/H = (62.65 mm × (−168 m))/1220 m = −8.63 mm (plot outward)
3.7 IMAGE PARALLAX

the direction of flight should correspond precisely to the fiducial x axis. In reality,
however, unavoidable changes in the aircraft orientation will usually slightly off-
set the fiducial axis from the flight axis. The true flight line axis may be found by
first locating on a photograph the points that correspond to the image centers of
the preceding and succeeding photographs. These points are called the conjugate
principal points. A line drawn through the principal points and the conjugate
principal points defines the flight axis. As shown in Figure 3.15, all photographs
except those on the ends of a flight strip normally have two sets of flight axes.
This happens because the aircraft’s path between exposures is usually slightly
curved. In Figure 3.15, the flight axis for the stereopair formed by photos 1 and 2
is flight axis 12. The flight axis for the stereopair formed by photos 2 and 3 is
flight axis 23.
The line of flight for any given stereopair defines a photocoordinate x axis for
use in parallax measurement. Lines drawn perpendicular to the flight line and pas-
sing through the principal point of each photo form the photographic y axes for par-
allax measurement. The parallax of any point, such as A in Figure 3.15, is expressed
Figure 3.15 Flight line axes for successive stereopairs along a flight strip. (Curvature of
aircraft path is exaggerated.)
The x axis for each photo is considered positive to the right of each photo principal point. This makes x′a a negative quantity in Figure 3.14.
Figure 3.16 Parallax relationships on overlapping vertical photographs: (a) adjacent photographs forming a stereopair; (b) superimposition of right photograph onto left.
Rearranging yields

hA = H − Bf/pa    (3.10)

Also, from similar triangles LOAAx and Loax,

XA/(H − hA) = xa/f

from which

XA = xa(H − hA)/f

and substituting Eq. 3.9 into the above equation yields

XA = B(xa/pa)    (3.11)
Equations 3.10 to 3.12 are commonly known as the parallax equations. In these equa-
tions, X and Y are ground coordinates of a point with respect to an arbitrary coordi-
nate system whose origin is vertically below the left exposure station and with
positive X in the direction of flight; p is the parallax of the point in question; and x
and y are the photocoordinates of the point on the left-hand photo. The major
assumptions made in the derivation of these equations are that the photos are truly
vertical and that they are taken from the same flying height. If these assumptions are
sufficiently met, a complete survey of the ground region contained in the photo over-
lap area of a stereopair can be made.
EXAMPLE 3.9
The length of line AB and the elevation of its endpoints, A and B, are to be determined from a
stereopair containing images a and b. The camera used to take the photographs has a
152.4-mm lens. The flying height was 1200 m (average for the two photos) and the air base was
600 m. The measured photographic coordinates of points A and B in the “flight line” coordinate
system are xa = 54.61 mm, xb = 98.67 mm, ya = 50.80 mm, yb = −25.40 mm, x′a = −59.45 mm, and x′b = −27.39 mm. Find the length of line AB and the elevations of A and B.
Solution

From Eq. 3.8

pa = xa − x′a = 54.61 − (−59.45) = 114.06 mm

YB = B(yb/pb) = 600 × (−25.40)/126.06 = −120.89 m
hB = H − Bf/pb = 1200 − (600 × 152.4)/126.06 = 475 m
Δh = Δp H′/pa    (3.13)

where

Δh = difference in elevation between two points whose parallax difference is Δp
H′ = flying height above the lower point
pa = parallax of the higher point
Using this approach in our previous example yields
12:00 3 802
Dh ¼ ¼ 77 m
126:06
Note this answer agrees with the value computed above.
Parallax Measurement
To this point in our discussion, we have said little about how parallax measure-
ments are made. In Example 3.9 we assumed that x and x0 for points of interest
were measured directly on the left and right photos, respectively. Parallaxes were
then calculated from the algebraic differences of x and x0 , in accordance with
Eq. 3.8. This procedure becomes cumbersome when many points are analyzed,
because two measurements are required for each point.
Figure 3.17 illustrates the principle behind methods of parallax measurement
that require only a single measurement for each point of interest. If the two photo-
graphs constituting a stereopair are fastened to a base with their flight lines aligned,
the distance D remains constant for the setup, and the parallax of a point can be
derived from measurement of the single distance d. That is, p = D − d. Distance d
can be measured with a simple scale, assuming a and a0 are identifiable. In areas of
uniform photo tone, individual features may not be identifiable, making the mea-
surement of d very difficult.
Employing the principle illustrated in Figure 3.17, a number of devices have
been developed to increase the speed and accuracy of parallax measurement.
These devices also permit parallax to be easily measured in areas of uniform
photo tone. All employ stereoscopic viewing and the principle of the floating
mark. This principle is illustrated in Figure 3.18. While viewing through a stereo-
scope, the image analyst uses a device that places small identical marks over each
photograph. These marks are normally dots or crosses etched on transparent
material. The marks—called half marks—are positioned over similar areas on the
left-hand photo and the right-hand photo. The left mark is seen only by the left
eye of the analyst and the right mark is seen only by the right eye. The relative
positions of the half marks can be shifted along the direction of flight until they
visually fuse together, forming a single mark that appears to “float” at a specific
level in the stereomodel. The apparent elevation of the floating mark varies with
the spacing between the half marks. Figure 3.18 illustrates how the fused marks
can be made to float and can actually be set on the terrain at particular points in
the stereomodel. Half-mark positions (a, b), (a, c), and (a, d) result in floating-
mark positions in the model at B, C, and D.
Figure 3.18 Floating-mark principle. (Note that only the right half mark is moved to
change the apparent height of the floating mark in the stereomodel.)
A very simple device for measuring parallax is the parallax wedge. It consists
of a transparent sheet of plastic on which are printed two converging lines or
rows of dots (or graduated lines). Next to one of the converging lines is a scale
that shows the horizontal distance between the two lines at each point. Conse-
quently, these graduations can be thought of as a series of distance d measure-
ments as shown in Figure 3.17.
Figure 3.19 shows a parallax wedge set up for use. The wedge is positioned so
that one of the converging lines lies over the left photo in a stereopair and one
over the right photo. When viewed in stereo, the two lines fuse together over a
portion of their length, forming a single line that appears to float in the stereo-
model. Because the lines on the wedge converge, the floating line appears to slope
through the stereoscopic image.
Figure 3.20 illustrates how a parallax wedge might be used to determine the
height of a tree. In Figure 3.20a, the position of the wedge has been adjusted until
the sloping line appears to intersect the top of the tree. A reading is taken from the
scale at this point (58.55 mm). The wedge is then positioned such that the line
Figure 3.19 Parallax wedge oriented under lens stereoscope. (Author-prepared figure.)
Figure 3.20 Parallax wedge oriented for taking a reading on (a) the top and
(b) the base of a tree.
intersects the base of the tree, and a reading is taken (59.75 mm). The difference
between the readings (1.20 mm) is used to determine the tree height.
EXAMPLE 3.10
The flying height for an overlapping pair of photos is 1600 m above the ground and pa is
75.60 mm. Find the height of the tree illustrated in Figure 3.20.
Solution
From Eq. 3.13
Δh = ΔpH′/pa = (1.20 × 1600)/75.60 ≈ 25 m
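Equation 3.13 turns the two wedge readings into an object height directly; a small sketch (names are ours), using the readings of Figure 3.20:

```python
def height_from_parallax_difference(dp, H_prime, pa):
    """Eq. 3.13: dh = dp * H'/pa, with H' the flying height above the
    lower point and pa the parallax of the higher point."""
    return dp * H_prime / pa

# Example 3.10: wedge readings of 58.55 mm (treetop) and 59.75 mm (base);
# readings increase as parallax decreases, so dp is base minus top.
dp = 59.75 - 58.55                                              # 1.20 mm
tree_height = height_from_parallax_difference(dp, 1600, 75.60)  # about 25 m
```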
important as the general concept of locating the conjugate image point for all
points on a reference image. The resulting photocoordinates can then be used in the
various parallax equations described earlier (Eqs. 3.10 to 3.13). However, the parallax
equations assume perfectly vertical photography and equal flying heights for all
images. This simplifies the geometry and hence the mathematics of computing
ground positions from the photo measurements. However, softcopy systems are not
constrained to the above assumptions. Such systems employ mathematical models
of the imaging process that readily handle variations in the flying height and atti-
tude of each photograph. As we discuss in Section 3.9, the relationship among
image coordinates, ground coordinates, the exposure station position, and angular
orientation of each photograph is normally described by a series of collinearity
equations. They are used in the process of analytical aerotriangulation, which
involves determining the X, Y, Z ground coordinates of individual points based on
photocoordinate measurements.
3.9 DETERMINING THE ELEMENTS OF EXTERIOR ORIENTATION
As previously stated (in Section 3.1), in order to use aerial photographs for
any precise photogrammetric mapping purposes, it is first necessary to deter-
mine the six independent parameters that describe the position and angular
orientation of the photocoordinate axis system of each photograph (at the instant
the photograph was taken) relative to the origin and angular orientation of the
ground coordinate system used for mapping. The process of determining the
exterior orientation parameters for an aerial photograph is called georeferencing.
Georeferenced images are those for which 2D photo coordinates can be projected
to the 3D ground coordinate reference system used for mapping and vice versa.
For a frame sensor, such as a frame camera, a single exterior orientation
applies to an entire image. With line scanning or other dynamic imaging systems,
the exposure station position and orientation change with each image line. The
process of georeferencing is equally important in establishing the geometric rela-
tionship between image and ground coordinates for such sensors as lidar, hyper-
spectral scanners, and radar as it is in aerial photography.
In the remainder of this section, we discuss the two basic approaches taken
to georeference frame camera images. The first is indirect georeferencing, which
makes use of ground control and a procedure called aerotriangulation to “back
out” computed values for the six exterior orientation parameters of all photo-
graphs in a flight strip or block. The second approach is direct georeferencing,
wherein these parameters are measured directly through the integration of air-
borne GPS and inertial measurement unit (IMU) observations.
Indirect Georeferencing
Figure 3.22 illustrates the relationship between the 2D (x, y) photocoordinate sys-
tem and the 3D (X, Y, Z) ground coordinate system for a typical photograph. This
figure also shows the six elements of exterior orientation: the 3D ground coordi-
nates of the exposure station (L) and the 3D rotations of the tilted photo plane (o,
f, and k) relative to an equivalent perfectly vertical photograph. Figure 3.22 also
shows what is termed the collinearity condition: the fact that the exposure station
of any photograph, any object point in the ground coordinate system, and its pho-
tographic image all lie on a straight line. This condition holds irrespective of the
angular tilt of a photograph. The condition can also be expressed mathematically
in terms of collinearity equations. These equations describe the relationships
among image coordinates, ground coordinates, the exposure station position, and
the angular orientation of the photograph. In their standard form, the collinearity
equations are

xp = −f [m11(XP − XL) + m12(YP − YL) + m13(ZP − ZL)] / [m31(XP − XL) + m32(YP − YL) + m33(ZP − ZL)]

yp = −f [m21(XP − XL) + m22(YP − YL) + m23(ZP − ZL)] / [m31(XP − XL) + m32(YP − YL) + m33(ZP − ZL)]

where

xp, yp = image coordinates of any point p
f = focal length
XP, YP, ZP = ground coordinates of point P
XL, YL, ZL = ground coordinates of the exposure station L
m11, …, m33 = coefficients of the rotation matrix defined by the orientation angles ω, φ, and κ
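The collinearity equations translate directly into code. Below is a Python sketch (assuming the omega-phi-kappa sequential rotation convention used in most photogrammetry references; function names are ours) that projects a ground point into photocoordinates:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix M for sequential rotations omega, phi, kappa
    (radians) about the X, Y, and Z axes."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,       -so * cp,                co * cp],
    ]

def collinearity(f, angles, station, point):
    """Photocoordinates (x, y) of ground point (X, Y, Z) imaged by a
    camera of focal length f at exposure station (XL, YL, ZL) with
    tilts (omega, phi, kappa)."""
    m = rotation_matrix(*angles)
    d = [point[i] - station[i] for i in range(3)]
    denom = sum(m[2][i] * d[i] for i in range(3))
    x = -f * sum(m[0][i] * d[i] for i in range(3)) / denom
    y = -f * sum(m[1][i] * d[i] for i in range(3)) / denom
    return x, y

# A truly vertical photo from 1000 m with f = 152.4 mm: a point 100 m
# east and 50 m north of the exposure station images at (15.24, 7.62) mm.
x, y = collinearity(152.4, (0.0, 0.0, 0.0), (0.0, 0.0, 1000.0),
                    (100.0, 50.0, 0.0))
```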
The table included in the lower portion of Figure 3.23 indicates the number of photographs
in which images of each object point appear. As can be seen from this
table, the grand total of object point images to be measured in this block is 94,
each yielding an x and y photocoordinate value, for a total of 188 observations.
There are also 18 direct observations for the 3D ground coordinates of the six
control points.

[Figure 3.23: configuration of the photo block — 25 pass/tie points (numbered 1 to 25, shown as +) and six ground control points (labeled A through F).]

However, many systems allow for errors in the ground control
values, so they are adjusted along with the photocoordinate measurements.
The nature of analytical aerotriangulation has evolved significantly over time,
and many variations of how it is accomplished exist. However, all methods
involve writing equations (typically collinearity equations) that express the ele-
ments of exterior orientation of each photo in a block in terms of camera con-
stants (e.g., focal length, principal point location, lens distortion), measured
photo coordinates, and ground control coordinates. The equations are then solved
simultaneously to compute the unknown exterior orientation parameters for all
photos in a block and the ground coordinates of the pass points and tie points
(thus increasing the spatial density of the control available to accomplish sub-
sequent mapping tasks). For the small photo block considered here, the number
of unknowns in the solution consists of the X, Y, and Z object space coordinates
of all object points (3 × 31 = 93) and the six exterior orientation parameters for
each of the photographs in the block (6 × 10 = 60). Thus, the total number of
unknowns in this relatively small block is 93 + 60 = 153.
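This bookkeeping is easy to script; a sketch using the counts for the block of Figure 3.23 (the redundancy line is our addition — it follows from, but is not stated with, the counts above):

```python
# Counts for the small photo block discussed in the text.
n_object_points = 31   # 25 pass/tie points + 6 ground control points
n_photos = 10
n_control = 6
n_point_images = 94    # measured images of object points in the block

# Each object point contributes 3 ground-coordinate unknowns; each photo
# contributes 6 exterior orientation unknowns.
unknowns = 3 * n_object_points + 6 * n_photos        # 93 + 60 = 153

# Each measured image yields an x and a y observation; each control
# point contributes 3 direct coordinate observations.
observations = 2 * n_point_images + 3 * n_control    # 188 + 18 = 206

redundancy = observations - unknowns                 # degrees of freedom
```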
The above process by which all photogrammetric measurements in a block
are related to ground control values in one massive solution is often referred to as
bundle adjustment. The “bundle” implied by this terminology is the conical bundle
of light rays that pass through the camera lens at each exposure station. In
essence, the bundles from all photographs are adjusted simultaneously to best fit
all the ground control points, pass points, and tie points in a block of photographs
of virtually any size.
With the advent of airborne GPS over a decade ago, the aerotriangulation
process was greatly streamlined and improved. By including the GPS coordinates
of the exposure station for each photo in a block in a bundle adjustment, the need
for ground control was greatly reduced. Each exposure station becomes an addi-
tional control point. It is not unusual to employ only 10 to 12 control points
to control a block of hundreds of images when airborne GPS is employed
(D.F. Maune, 2007).
Direct Georeferencing
The currently preferred method for determining the elements of exterior orienta-
tion is direct georeferencing. As stated earlier, this approach involves processing
the raw measurements made by an airborne GPS together with IMU data to cal-
culate both the position and angular orientation of each image. The GPS data
afford high absolute accuracy information on position and velocity. At the same
time, IMU data provide very high relative accuracy information on position, velocity,
and angular orientation. However, the absolute accuracy of IMUs tends to
degrade with time when operated in a stand-alone mode. This is where the
integration of the GPS and IMU data takes on importance. The high accuracy
GPS position information is used to control the IMU position error, which in turn
controls the IMU’s orientation error.
3.10 PRODUCTION OF MAPPING PRODUCTS FROM AERIAL PHOTOGRAPHS
Photogrammetric mapping can take on many forms, depending upon the nature of
the photographic data available, the instrumentation and/or software used, and the
form and accuracy required in any particular mapping application. Many applica-
tions only require the production of planimetric maps. Such maps portray the plan
view (X and Y locations) of natural and cultural features of interest. They do not
represent the contour or relief (Z elevations) of the terrain, as do topographic maps.
Planimetric mapping with hardcopy images can often be accomplished with
relatively simple and inexpensive methods and equipment, particularly when relief
effects are minimal and the ultimate in positional accuracy is not required. In such
cases, an analyst might use such equipment as an optical transfer scope to transfer
the locations of image features to a map base. This is done by pre-plotting the posi-
tion of several photo control points on a map sheet at the desired scale. Then the
image is scaled, stretched, rotated, and translated to optically fit (as closely as possi-
ble) the plotted positions of the control points on the map base. Once the orienta-
tion of the image to the map base is accomplished, the locations of other features of
interest in the image are transferred to the map.
Planimetric features can also be mapped from hardcopy images with the aid of
a table digitizer. In this approach, control points are again identified whose XY
coordinates are known in the ground coordinate system and whose xy coordinates
are then measured in the digitizer axis system. This permits the formulation of a
two-dimensional coordinate transformation (Section 3.2 and Appendix B) to relate
the digitizer xy coordinates to the ground XY coordinate system. This transformation
is then used to relate the digitizer coordinates of features other than the
ground control points to the ground coordinate mapping system.
The above control point measurement and coordinate transformation
approach can also be applied when digital, or softcopy, image data are used in the
mapping process. In this case, the row and column xy coordinates of a pixel in
the image file are related to the XY ground coordinate system via control point
measurement. Heads-up digitizing is then used to obtain the xy coordinates of the
planimetric features to be mapped from the image, and these are transformed
into the ground coordinate mapping system.
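The transformation just described can be sketched in code. A six-parameter affine transformation is one common choice for this task (Section 3.2 and Appendix B discuss the options); the sketch below fits it by least squares via the normal equations, using only the standard library, and all names are ours:

```python
def fit_affine(xy, XY):
    """Fit X = a*x + b*y + c and Y = d*x + e*y + f by least squares from
    three or more control points with digitizer coords xy and ground
    coords XY."""
    rows = [(x, y, 1.0) for x, y in xy]

    def lstsq3(t):
        # Normal equations (A^T A) u = A^T t, solved by Gauss-Jordan
        # elimination with partial pivoting.
        ATA = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
               for i in range(3)]
        ATt = [sum(r[i] * v for r, v in zip(rows, t)) for i in range(3)]
        M = [ATA[i] + [ATt[i]] for i in range(3)]
        for i in range(3):
            p = max(range(i, 3), key=lambda k: abs(M[k][i]))
            M[i], M[p] = M[p], M[i]
            for k in range(3):
                if k != i:
                    s = M[k][i] / M[i][i]
                    M[k] = [u - s * v for u, v in zip(M[k], M[i])]
        return [M[i][3] / M[i][i] for i in range(3)]

    return lstsq3([X for X, _ in XY]), lstsq3([Y for _, Y in XY])

def apply_affine(params, x, y):
    """Transform a digitizer coordinate into the ground system."""
    (a, b, c), (d, e, f) = params
    return a * x + b * y + c, d * x + e * y + f
```

Once fitted from the control points, `apply_affine` is applied to every digitized feature coordinate.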
We stress that the accuracy of the ground coordinates resulting from either
tablet or heads-up digitizing can be highly variable. Among the many factors that
can influence this accuracy are the number and spatial distribution of the control
points, the accuracy of the ground control, the accuracy of the digitizer (in tablet
digitizing), the accuracy of the digitizing process, and the mathematical form of
coordinate transformation used. Compounding all of these factors are the poten-
tial effects of terrain relief and image tilt. For many applications, the accuracy of
these approaches may suffice and the cost of implementing more sophisticated
photogrammetric procedures can be avoided. However, when higher-order accu-
racy is required, it may only be achievable through softcopy mapping procedures
employing stereopairs (or larger strips or blocks) of georeferenced photographs.
Figure 3.24 Fundamental concept of stereoplotter instrument design: (a) exposure of
stereopair in flight; (b) projection in stereoplotter. (From P. R. Wolf, B. Dewitt, and
B. Wilkinson, 2013, Elements of Photogrammetry with Applications in GIS, 4th ed.,
McGraw-Hill. Reproduced with the permission of The McGraw-Hill Companies.)
As shown in Figure 3.24a, a stereopair is exposed in flight. Note that the flying height for each
exposure station is slightly different and that the camera’s optical axis is not perfectly vertical
when the photos are exposed. Also note the angular relationships between the
light rays coming from point A on the terrain surface and recorded on each of the
two negatives.
As shown in (b), the negatives are used to produce diapositives (transpar-
encies printed on glass or film transparencies “sandwiched” between glass plates),
which are placed in two stereoplotter projectors. Light rays are then projected
through both the left and right diapositives. When the rays from the left and right
images intersect below the projectors, they form a stereomodel, which can be
viewed and measured in stereo. To aid in creating the stereomodel, the projectors
can be rotated about, and translated along, their x, y, and z axes. In this way, the
diapositives can be positioned and rotated such that they bear the exact relative
angular orientation to each other in the projectors as the negatives did when they
were exposed in the camera at the two exposure stations. The process of estab-
lishing this angular relationship in the stereoplotter is called relative orientation,
and it results in the creation of a miniature 3D stereomodel of the overlap area.
Relative orientation of a stereomodel is followed by absolute orientation,
which involves scaling and leveling the model. The desired scale of the model is
produced by varying the base distance, b, between the projectors. The scale of the
resulting model is equal to the ratio b/B. Leveling of the model can be accom-
plished by rotating both projectors together about the X direction and Y direction
of the mapping coordinate system.
Once the model is oriented, the X, Y, and Z ground coordinates of any point
in the overlap area can be obtained by bringing a reference floating mark in con-
tact with the model at that point. This reference mark can be translated in the
X and Y directions throughout the model, and it can be raised and lowered in the
Z direction. In preparing a topographic map, natural or cultural features are map-
ped planimetrically by tracing them with the floating mark, while continuously
raising and lowering the mark to maintain contact with the terrain. Contours are
compiled by setting the floating mark at the desired elevation of a contour and
moving the floating mark along the terrain so that it just maintains contact with
the surface of the model. Typically, the three‐dimensional coordinates of all
points involved in the map compilation process are recorded digitally to facilitate
subsequent automated mapping, GIS data extraction, and analysis.
It should be noted that stereoplotters recreate the elements of exterior orien-
tation in the original images forming the stereomodel, and the stereoplotting
operation focuses on the intersections of rays from conjugate points (rather than
the distorted positions of these points themselves on the individual photos). In
this manner, the effects of tilt, relief displacement, and scale variation inherent in
the original photographs are all negated in the stereoplotter map compilation
process.
Direct optical projection stereoplotters employed various techniques to pro-
ject and view the stereomodel. In order to see stereo, the operator’s eyes had to
view each image of the stereopair separately. Anaglyphic systems involved project-
ing one photo through a cyan filter and the other through a red filter. By viewing
the model through eyeglasses having corresponding color filters, the operator’s
left eye would see only the left photo and the right eye would see only the right
photo. Other approaches to stereo viewing included the use of polarizing filters in
place of colored filters, or placing shutters over the projectors to alternate display
of the left and right images as the operator viewed the stereomodel through a syn-
chronized shutter system.
Again, direct optical projection plotters represented the first generation of
such systems. Performing the relative and absolute orientation of these instru-
ments was an iterative and sometimes trial‐and‐error process, and mapping plani-
metric features and contours with such systems was very tedious. As time went
by, stereoplotter designs evolved from being direct optical devices to optical‐
mechanical, analytical, and now softcopy systems.
Softcopy‐based systems entered the commercial marketplace in the early
1990s. Early in their development, the dominant data source used by these sys-
tems was aerial photography that had been scanned by precision photogram-
metric scanners. Today, these systems primarily process digital camera data.
They incorporate high quality displays affording 3D viewing. Like their pre-
decessors, softcopy systems employ various means to enforce the stereoscopic
viewing condition that the left eye of the image analyst only sees the left image of
a stereopair and the right eye only sees the right image. These include, but are not
limited to, anaglyphic and polarization systems, as well as split‐screen and rapid
flicker approaches. The split screen technique involves displaying the left image
on the left side of a monitor and the right image on the right side. The analyst
then views the images through a stereoscope. The rapid flicker approach entails
high frequency (120 Hz) alternation between displaying the left image alone and
then the right alone on the monitor. The analyst views the display with a pair of
electronically shuttered glasses that are synchronized to be alternately clear or
opaque on the left or right side as the corresponding images are displayed.
In addition to affording a 3D viewing capability, softcopy photogrammetric
workstations must have very robust computational power and large data storage
capabilities. However, these hardware requirements are not unique to photo-
grammetric workstations. What is unique about photogrammetric workstations is
the diversity, modularity, and integration of the suite of software these systems
typically incorporate to generate photogrammetric mapping products.
The collinearity condition is frequently the basis for many softcopy analysis
procedures. For example, in the previous section of this discussion we illustrated
the use of the collinearity equations to georeference individual photographs. Col-
linearity is also frequently used to accomplish relative and absolute orientation of
stereopairs. Another very important application of the collinearity equations is
their incorporation in the process of space intersection.
[Figure: space intersection geometry — rays from exposure stations L1 and L2 through image points p1 and p2 intersect at ground point P, with coordinates (XP, YP, ZP) in the XYZ ground system.]

Space intersection uses the collinearity equations to compute the ground coordinates
of a point appearing in the overlap area of a stereopair. Writing the two collinearity
equations for the point on each photo of the pair yields four equations. If the exterior
orientation of both photos is known, then the only unknowns in each equation
are X, Y, and Z for the point under analysis. Given four equations for three
unknowns, a least squares solution for the ground coordinates of each point can
be performed. This means that an image analyst can stereoscopically view, and
extract the planimetric positions and elevations of, any point in the stereomodel.
These data can serve as direct input to a GIS or CAD system.
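For the special case of truly vertical photographs, the collinearity equations become linear and the least squares solution can be written in a few lines. A sketch follows (names are ours; the general tilted-photo case instead linearizes the full collinearity equations and iterates):

```python
def space_intersection(f, stations, image_points):
    """Least-squares ground position (X, Y, Z) of a point measured on
    two (or more) truly vertical photos. stations holds (XL, YL, ZL);
    image_points holds the matching (x, y) photocoordinates in the same
    units as f."""
    A, l = [], []
    for (XL, YL, ZL), (x, y) in zip(stations, image_points):
        # Vertical-photo collinearity rearranged to linear form:
        #   f*X + x*Z = f*XL + x*ZL   and   f*Y + y*Z = f*YL + y*ZL
        A.append([f, 0.0, x]); l.append(f * XL + x * ZL)
        A.append([0.0, f, y]); l.append(f * YL + y * ZL)
    # Normal equations (A^T A) u = A^T l, solved by Gauss-Jordan.
    ATA = [[sum(r[i] * r[j] for r in A) for j in range(3)]
           for i in range(3)]
    ATl = [sum(r[i] * v for r, v in zip(A, l)) for i in range(3)]
    M = [ATA[i] + [ATl[i]] for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        for k in range(3):
            if k != i:
                s = M[k][i] / M[i][i]
                M[k] = [u - s * v for u, v in zip(M[k], M[i])]
    return tuple(M[i][3] / M[i][i] for i in range(3))
```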
Systematic sampling of elevation values throughout the overlap area can form
the basis for DEM production with softcopy systems. While this process is highly
automated, the image correlation involved is often far from perfect. This normally
leads to the need to edit the resulting DEM, but this hybrid DEM compilation
process is still a very useful one. The process of editing and reviewing a DEM is
greatly facilitated in a softcopy‐based system in that all elevation points (and even
contours if they are produced) can be superimposed on the 3D view of the origi-
nal stereomodel to aid in inspection of the quality of elevation data.
Digital Orthophoto Production
The reprojection process is repeated for each position in the DEM to form the entire digital orthophoto. A minor complication in
this whole process is the fact that rarely will the photocoordinate value (xp, yp)
computed for a given DEM cell be exactly centered over a pixel in the original digi-
tal input image. Accordingly, the process of resampling (Chapter 7 and Appendix B)
is employed to determine the best brightness value to assign to each pixel in the
orthophoto based on a consideration of the brightness values of a neighborhood of
pixels surrounding each computed photocoordinate position (xp, yp).
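Among the common resampling choices (nearest neighbor, bilinear, cubic convolution), bilinear interpolation is a frequent middle ground. A minimal sketch (names are ours), returning a brightness value for a fractional photocoordinate position expressed in pixel units:

```python
def bilinear(image, col, row):
    """Interpolate a brightness value at fractional (col, row) from the
    four surrounding pixels. image is a 2D list of brightness values;
    positions are clamped so the 2x2 neighborhood stays in bounds."""
    c0 = min(max(int(col), 0), len(image[0]) - 2)
    r0 = min(max(int(row), 0), len(image) - 2)
    dc, dr = col - c0, row - r0
    # Interpolate along the rows, then between the two row results.
    top = image[r0][c0] * (1 - dc) + image[r0][c0 + 1] * dc
    bot = image[r0 + 1][c0] * (1 - dc) + image[r0 + 1][c0 + 1] * dc
    return top * (1 - dr) + bot * dr
```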
Figure 3.27 illustrates the influence of the above reprojection process.
Figure 3.27a is a conventional (perspective) photograph of a power line clearing
traversing a hilly forested area. The excessively crooked appearance of the linear
clearing is due to relief displacement. Figure 3.27b is a portion of an orthophoto
covering the same area. The relief effects have been removed and the true path of
the power line is shown.
Figure 3.27 Portion of (a) a perspective photograph and (b) an orthophoto showing a
power line clearing traversing hilly terrain. (Note the excessive crookedness of the power
line clearing in the perspective photo that is eliminated in the orthophoto.) (Courtesy
USGS.)
Figure 3.28 Portion of a 1:4800 topographic orthophotomap. Photography taken over the Fox Chain of Lakes, IL.
(Courtesy Alster and Associates, Inc.)
Figure 3.29 Stereo orthophotograph showing a portion of Gatineau Park, Canada: (a) An orthophoto
and (b) a stereomate provide for three-dimensional viewing of the terrain. Measurements made from,
or plots made on, the orthophoto have map accuracy. Forest-type information is overprinted on this
scene along with a Universal Transverse Mercator (UTM) grid. Note that the UTM grid is square on the
orthophoto but is distorted by the introduction of parallax on the stereomate. Scale 1:38,000.
(Courtesy Forest Management Institute, Canadian Forestry Service.)
(A stereomate is created by introducing parallax, proportional to terrain elevation, into
one of the two photos comprising the stereopair from which the orthophoto was
produced.)
One caveat we wish to note here is that tall objects such as buildings will still
appear to lean in an orthophoto if these features are not included in the DEM
used in the orthophoto production process. This effect can be particularly trouble-
some in urban areas. The effect can be overcome by including building outline ele-
vations in the DEM, or minimized by using only the central portion of a
photograph, where relief displacement of vertical features is at a minimum.
Plate 7 is yet another example of the need for, and influence of, the distortion
correction provided through the orthophoto production process. Shown in (a) is an
original uncorrected color photograph taken over an area of high relief in Glacier
National Park. The digital orthophoto corresponding to the uncorrected photo-
graph in (a) is shown in (b). Note the locational errors that would be introduced if
GIS data were developed from the uncorrected image. GIS analysts are encouraged
to use digital orthophotos in their work whenever possible. Two major federal sour-
ces of such data in the United States are the U.S. Geological Survey (USGS)
National Digital Orthophoto Program (NDOP) and the USDA National Agriculture
Imagery Program (NAIP).
Figures 3.30 and 3.31 illustrate the visualization capability afforded by mer-
ging digital orthophoto data with DEM data. Figure 3.30 shows a perspective
Figure 3.30 Perspective view of a rural area generated digitally by draping orthophoto image data over a digital
elevation model of the same area. (Courtesy University of Wisconsin-Madison, Environmental Remote Sensing Center,
and NASA Affiliated Research Center Program.)
Figure 3.31 Vertical stereopair (a) covering the ground area depicted in the perspective view shown in (b). The
image of each building face shown in (b) was extracted automatically from the photograph in which that face was
shown with the maximum relief displacement in the original block of aerial photographs covering the area.
(Courtesy University of Wisconsin-Madison, Campus Mapping Project.)
view of a rural area located near Madison, Wisconsin. This image was created by
draping digital orthophoto data over a DEM of the same area. Figure 3.31 shows
a stereopair (a) and a perspective view (b) of the Clinical Science Center, located
on the University of Wisconsin-Madison campus. The images of the various faces
of the buildings shown in (b) were extracted from the original digitized aerial
photograph in which that face was displayed with the greatest relief displacement
in accordance with the direction of the perspective view. This process is done
automatically from among all of the relevant photographs of the original block of
aerial photographs covering the area of interest.
Multi-Ray Photogrammetry
In a conventional orthophoto, elevated features (e.g., buildings, trees, overpasses)
that are not included in the DEM still manifest relief displacement as a function of
their elevation above ground and their image distance from the principal point of
the original photograph. Tall features located
at a distance from the principal point can also completely block or occlude the
appearance of ground areas that are in the “shadow” of these features during ray
projection. Figure 3.32 illustrates the nature of this problem and its solution
Figure 3.32 Comparison among approaches taken to extract pixel brightness
numbers for a conventional orthophoto produced using a single photograph (a), a
true orthophoto using multiple photographs acquired with 60% overlap (b), and
a true orthophoto using multiple photographs acquired with 80% overlap (c). (After
Jensen, 2007).
through the use of more than one original photograph to produce a true digital
orthophoto composite image.
Shown in Figure 3.32a is the use of a single photograph to produce a con-
ventional orthophoto. The brightness value to be used to portray pixel a in the
orthophoto is obtained by starting with the ground (X, Y, Z) position of point a
known from the DEM, then using the collinearity equations to project up to the
photograph to determine the x, y photocoordinates of point a, and then using
resampling to interpolate an appropriate brightness value to use in the ortho-
photo to depict point a. This procedure works well for point a because there is no
vertical feature obstructing the ray between point a on the ground and the photo
exposure station. This is not the case for point b, which is in a ground area that is
obscured by the relief displacement of the nearby building. In this situation, the
brightness value that would be placed in the orthophoto for the ground position
of point b would be that of the roof of the building, and the side of the building
would be shown where the roof should be. The ground area near point b would
not be shown at all in the orthophoto. Clearly, the severity of such relief displace-
ment effects increases both with the height of the vertical feature involved and
the feature’s distance from the ground principal point.
Figure 3.32b illustrates the use of three successive aerial photographs
obtained along a flight line to mitigate the effects of relief displacement in the
digital orthophoto production process. In this case, the nominal overlap between
the successive photos is the traditional value of 60%. The outlines of ground‐
obstructing features are identified and recorded using traditional stereoscopic
feature extraction tools. The brightness values used to portray all other pixel posi-
tions in the orthophoto are automatically interpolated from the photograph
having the best view of each position. For point a the best view is from the closest
exposure station to that point, Exposure Station #3. For point b, the best view
is obtained from Exposure Station #1. The software used for the orthophoto com-
pilation process analyzes the DEM and feature data available for the model to
determine that the view of the ground for pixel b is obscured from Exposure Sta-
tion #2. In this manner, each pixel in the orthophoto composite is assigned the
brightness value from the corresponding position in the photograph acquired
from the closest exposure station affording an unobstructed view of the ground.
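The selection rule just described ("closest exposure station with an unobstructed view") amounts to a simple filter-then-minimize step. The sketch below is ours, not the software's actual logic; the function name and the occlusion-test callback (which would be driven by the DEM and feature outlines) are hypothetical.

```python
def pick_source_photo(pixel_xy, stations, is_obscured):
    """Choose the exposure station used to color one orthophoto pixel:
    the closest station (by horizontal distance) whose ray to the ground
    position is not blocked. `stations` is a list of (id, X, Y) tuples;
    `is_obscured(station_id)` is the hypothetical occlusion test."""
    px, py = pixel_xy
    visible = [s for s in stations if not is_obscured(s[0])]
    if not visible:
        return None  # pixel falls in an area no photograph can see
    return min(visible, key=lambda s: (s[1] - px) ** 2 + (s[2] - py) ** 2)[0]

# Analog of point b in Figure 3.32b: station 2 is nearest but its view
# is blocked by the building, so station 1 is selected instead.
stations = [(1, 0.0, 0.0), (2, 2267.0, 0.0), (3, 4534.0, 0.0)]
best = pick_source_photo((2000.0, 50.0), stations, lambda sid: sid == 2)
```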
Figure 3.32c illustrates a multi‐ray solution to the true orthophoto production
process. In this case, the overlap along the flight strip of images is increased to
80% (or more). This results in several more closely spaced exposure stations
being available to cover a given study area. In this way the ray projections to each
pixel in the orthophoto become much more vertical and parallel, as if all rays were
projected nearly orthographically (straight downward) at every point in
the orthophoto. An automatically generated DSM is used to create the true orthophoto. In such
images, building rooftops are shown in their correct planimetric location, directly
above the associated building foundation (with no lean). None of the sides of
buildings are shown, and the ground areas around all sides of buildings are
shown in their correct location.
3.11 FLIGHT PLANNING
Frequently, the objectives of a photographic remote sensing project can only be met
through procurement of new photography of a study area. These occasions can
arise for many reasons. For example, photography available for a particular area
could be outdated for applications such as land use mapping. In addition, available
photography may have been taken in the wrong season. For example, photography
acquired for topographic mapping is usually flown in the fall or spring to minimize
vegetative cover. This photography will likely be inappropriate for applications
involving vegetation analysis.
In planning the acquisition of new photography, there is always a trade-off
between cost and accuracy. At the same time, the availability, accuracy, and cost
of alternative data sources are continually changing as remote sensing technology
advances. This leads to such decisions as whether analog or digital photography
is appropriate. For many applications, high resolution satellite data may be an
acceptable and cost-effective alternative to aerial photography. Similarly, lidar
data might be used in lieu of, or in addition to, aerial photography. Key to making
such decisions is specifying the nature and accuracy of the end product(s)
required for the application at hand. For example, the required end products
might range from hardcopy prints to DEMs, planimetric and topographic maps,
thematic digital GIS datasets, and orthophotos, among many others.
The remainder of this discussion assumes that aerial photography has been
judged to best serve the needs of a given project, and the task at hand is to
develop a flight plan for acquiring the photography over the project’s study area.
As previously mentioned, flight planning software is generally used for this
purpose. Here we illustrate the basic computational considerations and proce-
dures embedded in such software by presenting two “manual” example solutions
to the flight planning process. We highlight the geometric aspects of preparing a
flight plan for both a film-based camera mission and a digital camera mission of
the same study area. Although we present two solutions using the same study
area, we do not mean to imply that the two mission designs yield photography of
identical quality and utility. They are simply presented as two representative
examples of the flight planning process.
Before we address the geometric aspects of photographic mission planning, we
stress that one of the most important parameters in an aerial mission is beyond
the control of even the best planner—the weather. In most areas, only a few days of
the year are ideal for aerial photography. In order to take advantage of clear
weather, commercial aerial photography firms will fly many jobs in a single day,
often at widely separated locations. Flights are usually scheduled between 10 a.m.
and 2 p.m. for maximum illumination and minimum shadow, although digital cam-
eras that provide high sensitivity under low light conditions can be used for mis-
sions conducted as late as sunset, or shortly thereafter, and under heavily overcast
conditions. However, as previously mentioned, mission timing is often optimized to
ensure strong GPS signals from a number of satellites, which may narrow the
acquisition time window. In addition, the mission planner may need to accom-
modate such mission-specific constraints as maximum allowable building lean in
orthophotos produced from the photography, occlusions in urban areas, specular
reflections over areas covered by water, vehicular traffic volumes at the time of ima-
ging, and civil and military air traffic control restrictions. Overall, a great deal of
time, effort, and expense go into the planning and execution of a photographic mis-
sion. In many respects, it is an art as well as a science.
The parameters needed for the geometric design of a film-based photographic
mission are (1) the focal length of the camera to be used, (2) the film format size,
(3) the photo scale desired, (4) the size of the area to be photographed, (5) the
average elevation of the area to be photographed, (6) the overlap desired, (7) the
sidelap desired, and (8) the ground speed of the aircraft to be used. When design-
ing a digital camera photographic mission, the required parameters are the same,
except the number and physical dimension of the pixels in the sensor array are
needed in lieu of the film format size, and the GSD for the mission is required
instead of a mission scale.
Based on the above parameters, the mission planner prepares computations
and a flight map that indicate to the flight crew (1) the flying height above datum
from which the photos are to be taken; (2) the location, direction, and number of
flight lines to be made over the area to be photographed; (3) the time interval
between exposures; (4) the number of exposures on each flight line; and (5) the
total number of exposures necessary for the mission.
When flight plans are computed manually, they are normally portrayed on a
map for the flight crew. However, old photography or even a satellite image may
be used for this purpose. The computations prerequisite to preparing flight plans
for a film-based and a digital camera mission are given in the following two
examples, respectively.
EXAMPLE 3.11
A study area is 10 km wide in the east–west direction and 16 km long in the north–south
direction (see Figure 3.33). A camera having a 152.4-mm-focal-length lens and a 230-mm
format is to be used. The desired photo scale is 1:25,000, and the nominal endlap and
sidelap are to be 60% and 30%, respectively. Beginning and ending flight lines are to be positioned along the
boundaries of the study area. The only map available for the area is at a scale of 1:62,500.
This map indicates that the average terrain elevation is 300 m above datum. Perform the
computations necessary to develop a flight plan and draw a flight map.
Solution
(a) Use north–south flight lines. Note that using north–south flight lines minimizes the
number of lines required and consequently the number of aircraft turns and realign-
ments necessary. (Also, flying in a cardinal direction often facilitates the identification
of roads, section lines, and other features that can be used for aligning the flight lines.)
Figure 3.33 A 10 × 16-km study area over which photographic coverage is to be obtained. (Author-
prepared figure.)
prepared figure.)
(b) Find the flying height above terrain (H′ = f/S) and add the mean site elevation to find
the flying height above mean sea level:

    H = f/S + h_avg = (0.1524 m)/(1/25,000) + 300 m = 4110 m
(c) Determine ground coverage per image from film format size and photo scale:

    Coverage per photo = (0.23 m)/(1/25,000) = 5750 m on a side
(d) Determine ground separation between photos on a line for 40% advance per photo
(i.e., 60% endlap):

    0.40 × 5750 m = 2300 m between photo centers
(e) Assuming an aircraft speed of 160 km/hr, the time between exposures is

    (2300 m/photo)/(160 km/hr) × (3600 sec/hr)/(1000 m/km) = 51.75 sec (use 51 sec)
(f) Because the intervalometer can only be set in even seconds (this varies between
models), the number is rounded off. By rounding down, at least 60% coverage is
ensured. Recalculate the distance between photo centers using the reverse of the
above equation:

    51 sec/photo × 160 km/hr × (1000 m/km)/(3600 sec/hr) = 2267 m
(g) Compute the number of photos per 16-km line by dividing this length by the photo
advance. Add one photo to each end and round the number up to ensure coverage:

    (16,000 m/line)/(2267 m/photo) + 1 + 1 = 9.1 photos/line (use 10)
(h) If the flight lines are to have a sidelap of 30% of the coverage, they must be separated
by 70% of the coverage:

    0.70 × 5750 m coverage = 4025 m between flight lines
(i) Find the number of flight lines required to cover the 10-km study area width by
dividing this width by the distance between flight lines (note: this division gives the
number of spaces between flight lines; add 1 to arrive at the number of lines):

    (10,000 m width)/(4025 m/flight line) + 1 = 3.48 (use 4)

The adjusted spacing between lines for using four lines is

    (10,000 m)/(4 − 1 spaces) = 3333 m/space
Figure 3.34 Flight map for Example 3.11. (Lines indicate centers of each flight line to be
followed.) (Author-prepared figure.)
(j) Find the spacing of flight lines on the map (1:62,500 scale):

    3333 m × 1/62,500 = 53.3 mm
(Note: The first and last flight lines in this example were positioned coincident with the
boundaries of the study area. This provision ensures complete coverage of the area under
the “better safe than sorry” philosophy. Often, a savings in film, flight time, and money is
realized by experienced flight crews by moving the first and last lines in toward the middle
of the study area.)
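The hand computations of Example 3.11 can be collected into a short script for checking or reuse. This is a sketch of the calculation above, not production flight-planning software; the variable names are ours, and the rounding choices (interval rounded down, photo and line counts rounded up) follow the example's text.

```python
from math import ceil

# Mission parameters from Example 3.11
f = 0.1524             # focal length, m
fmt = 0.23             # film format side, m
scale = 1 / 25000      # desired photo scale
h_avg = 300.0          # mean terrain elevation above datum, m
endlap, sidelap = 0.60, 0.30
speed_kmh = 160.0      # aircraft ground speed
length_m, width_m = 16000.0, 10000.0   # study area, N-S by E-W

H = f / scale + h_avg                    # flying height above MSL, m
coverage = fmt / scale                   # ground coverage per side, m
advance = (1 - endlap) * coverage        # separation between photo centers, m
speed_ms = speed_kmh * 1000 / 3600       # aircraft speed, m/sec
interval = int(advance / speed_ms)       # intervalometer setting: round DOWN
adv_adj = interval * speed_ms            # adjusted photo-center spacing, m
photos_per_line = ceil(length_m / adv_adj + 2)   # +1 photo at each end
line_spacing = (1 - sidelap) * coverage          # spacing between flight lines
n_lines = ceil(width_m / line_spacing + 1)       # spaces + 1 = lines
```

Running this reproduces the example's figures: H = 4110 m, 5750-m coverage, a 51-sec interval, 10 photos per line, and 4 flight lines.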
EXAMPLE 3.12
Assume that it is desired to obtain panchromatic digital camera coverage of the same study
area described in the previous example. Also assume that a GSD of 25 cm is required given
the mapping accuracy requirements of the mission. The digital camera to be used for the
mission has a panchromatic CCD that includes 20,010 pixels in the across‐track direction,
and 13,080 pixels in the along‐track direction. The physical size of each pixel is 5.2 μm
(0.0052 mm). The camera is fitted with an 80‐mm‐focal‐length lens. As in the previous
example, stereoscopic coverage is required that has 60% overlap and 30% sidelap. The air-
craft to be used to conduct the mission will be operated at a nominal speed of 260 km/hr.
Perform the computations necessary to develop a preliminary flight plan for this mission in
order to estimate flight parameters.
Solution
(a) As in the previous example, use north–south flight lines.
(b) Find the flying height above terrain, H′ = (GSD/pd) × f, and add the mean elevation to
find the flying height above mean sea level:

    H = (GSD × f)/pd + h_avg = (0.25 m × 80 mm)/(0.0052 mm) + 300 m = 4146 m
(c) Determine the across-track ground coverage of each image:

From Eq. 2.12, the across-track sensor dimension is

    dxt = nxt × pd = 20,010 × 0.0052 mm = 104.05 mm

Dividing by the image scale, the across-track ground coverage distance is

    dxt × H′/f = (104.05 mm × 3846 m)/(80 mm) = 5002 m
(d) Determine the along-track ground coverage of each image:

From Eq. 2.13, the along-track sensor dimension is

    dat = nat × pd = 13,080 × 0.0052 mm = 68.02 mm

Dividing by the image scale, the along-track ground coverage distance is

    dat × H′/f = (68.02 mm × 3846 m)/(80 mm) = 3270 m
(e) Determine the ground separation between photos along‐track for 40% advance (i.e.,
60% endlap):
    0.40 × 3270 m = 1308 m
(f) Determine the interval between exposures for a flight speed of 260 km/hr:

    (1308 m/photo)/(260 km/hr) × (3600 sec/hr)/(1000 m/km) = 18.11 sec (use 18 sec)
(g) Compute the number of photos per 16-km line by dividing the line length by the photo
advance. Add one photo to each end and round up to ensure coverage:

    (16,000 m/line)/(1308 m/photo) + 1 + 1 = 14.2 photos/line (use 15)
(h) If the flight lines are to have sidelap of 30% of the across-track coverage, they must be
separated by 70% of the coverage:

    0.70 × 5002 m = 3501 m
(i) Find the number of flight lines required to cover the 10-km study area width by
dividing this width by the distance between flight lines (note: this division gives the
number of spaces between flight lines; add 1 to arrive at the number of lines):

    (10,000 m width)/(3501 m/flight line) + 1 = 3.86 (use 4)
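The arithmetic of Example 3.12 follows the same pattern as the film-based example and can be checked with a similar sketch. Variable names are ours; note that computing coverage directly as pixels × GSD gives 5002.5 m across-track, a fraction of a meter from the example's 5002 m, which comes from the same product via the rounded sensor dimension.

```python
from math import ceil

# Mission parameters from Example 3.12
gsd = 0.25                    # required ground sample distance, m
pd = 0.0052                   # physical pixel size, mm
f_mm = 80.0                   # focal length, mm
n_xt, n_at = 20010, 13080     # pixels across-track and along-track
h_avg = 300.0                 # mean terrain elevation above datum, m
speed_kmh = 260.0
length_m, width_m = 16000.0, 10000.0

H = gsd * f_mm / pd + h_avg            # flying height above MSL, m
cover_xt = n_xt * gsd                  # across-track ground coverage, m
cover_at = n_at * gsd                  # along-track ground coverage, m
advance = 0.40 * cover_at              # 40% advance for 60% endlap, m
speed_ms = speed_kmh * 1000 / 3600
interval = int(advance / speed_ms)     # exposure interval, rounded down, sec
photos_per_line = ceil(length_m / advance + 2)   # +1 photo at each end
line_spacing = 0.70 * cover_xt                   # 30% sidelap
n_lines = ceil(width_m / line_spacing + 1)
```

This reproduces the example's results: H = 4146 m, an 18-sec exposure interval, 15 photos per line, and 4 flight lines.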
As is the case with the acquisition of analog aerial photography, a flight plan for
acquiring digital photography is accompanied by a set of detailed specifications stating the
requirements and tolerances for flying the mission, preparing image products, ownership
rights, and other considerations. These specifications would generally parallel those used
for film‐based missions. However, they would also address those specific considerations
that are related to digital data capture. These include, but are not limited to, use of single
versus multiple camera heads, GSD tolerance, radiometric resolution of the imagery, geo-
metric and radiometric image pre‐processing requirements, and image compression and
storage formats. Overall, the goal of such specifications is to ensure not only that the digital
data resulting from the mission are of high quality, but also that they are compatible
with the hardware and software to be used to store, process, and supply derivative
products from the mission imagery.
3.12 CONCLUSION