FOTOGRAMETRIA
Aerotriangulation
17-1 Introduction
Aerotriangulation is the term most frequently applied to the process of determining the X, Y, and Z
ground coordinates of individual points based on photo coordinate measurements. Phototriangulation
is perhaps a more general term, however, because the procedure can be applied to terrestrial photos as
well as aerial photos. The principles involved are extensions of the material presented in Chap. 11.
With improved photogrammetric equipment and techniques, accuracies to which ground coordinates
can be determined by these procedures have become very high.
Aerotriangulation is used extensively for many purposes. One of the principal applications lies in
extending or densifying ground control through strips and/or blocks of photos for use in subsequent
photogrammetric operations. When used for this purpose, it is often called bridging, because in
essence a “bridge” of intermediate control points is developed between field-surveyed control that
exists in only a limited number of photos in a strip or block. Establishment of the needed control for
compilation of topographic maps with stereoplotters is an excellent example to illustrate the value of
aerotriangulation. In this application, as described in Chap. 12, the practical minimum number of
control points necessary in each stereomodel is three horizontal and four vertical points. For large
mapping projects, therefore, the number of control points needed is extensive, and the cost of
establishing them can be extremely high if it is done exclusively by field survey methods. Much of
this needed control is now routinely being established by aerotriangulation from only a sparse network
of field-surveyed ground control and at a substantial cost savings. A more recent innovation involves
the use of kinematic GPS and INS in the aircraft to provide coordinates and angular attitude of the
camera at the instant each photograph is exposed. In theory, this method can eliminate the need for
ground control entirely, although in practice a small amount of ground control is still used to
strengthen the solution.
Besides having an economic advantage over field surveying, aerotriangulation has other benefits:
(1) most of the work is done under laboratory conditions, thus minimizing delays and hardships due to
adverse weather conditions; (2) access to much of the property within a project area is not required;
(3) field surveying in difficult areas, such as marshes, extreme slopes, and hazardous rock formations,
can be minimized; and (4) the accuracy of the field-surveyed control necessary for bridging is verified
during the aerotriangulation process, and as a consequence, chances of finding erroneous control
values after initiation of compilation are minimized and usually eliminated. This latter advantage is so
meaningful that some organizations perform bridging even though adequate field-surveyed control
exists for stereomodel control. It is for this reason also that some specifications for mapping projects
require that aerotriangulation be used to establish photo control.
Apart from bridging for subsequent photogrammetric operations, aerotriangulation can be used in
a variety of other applications in which precise ground coordinates are needed, although most of these
uses have been largely supplanted by GPS. In property surveying, aerotriangulation can be used to
locate section corners and property corners or to locate evidence that will assist in finding these
corners. In topographic mapping, aerotriangulation can be used to develop digital elevation models by
computing X, Y, and Z ground coordinates of a systematic network of points in an area, although
airborne laser scanning is commonly being used for this task. Aerotriangulation has been used
successfully for densifying geodetic control networks in areas surrounded by tall buildings where
problems due to multipath cause a loss of accuracy in GPS surveys. Special applications include the
precise determination of the relative positions of large machine parts during fabrication. It has been
found especially useful in such industries as shipbuilding and aircraft manufacture. Many other
applications of aerotriangulation are also being pursued.
Methods of performing aerotriangulation may be classified into one of three categories: analog,
semianalytical, and analytical. Early analog procedures involved manual interior, relative, and
absolute orientation of the successive models of long strips of photos using stereoscopic plotting
instruments having several projectors. This created long strip models from which coordinates of pass
points could be read directly. Later, universal stereoplotting instruments were developed which
enabled this process to be accomplished with only two projectors. These procedures are now
principally of historical interest, having given way to the other two methods.
Semianalytical aerotriangulation involves manual interior and relative orientation of
stereomodels within a stereoplotter, followed by measurement of model coordinates. Absolute
orientation is performed numerically—hence the term semianalytical aerotriangulation.
Analytical methods consist of photo coordinate measurement followed by numerical interior,
relative, and absolute orientation from which ground coordinates are computed. Various specialized
techniques have been developed within each of the three aerotriangulation categories. This chapter
briefly describes some of these techniques. It predominantly relates to bridging for subsequent
photogrammetric operations because this is the principal use of aerotriangulation. Extension of these
basic principles can readily be translated to the other areas of application, however.
A typical procedure for measuring a pass point begins by first manually digitizing the point in
one photograph. The pixels around this point serve as the template array. Next, the user defines a
search area in other photographs for automatic image matching. There are also automatic methods for
defining a search area by predicting the coordinates of the point in the subsequent photographs.
Finally, the pixel patch in the search area corresponding to the template array is automatically located.
Normalized cross-correlation followed by least squares matching is a common method for this step
(see Sec. 15-8). To avoid poor matches and blunders, well-defined unique objects with good contrast
and directionality should be selected as image-matching templates. Image-matching software usually
provides a measure of how well the point was matched, such as the correlation coefficient in
normalized cross-correlation. This number should serve as a guide for the user to decide whether or
not to accept the matching results. Care must be taken because it is not uncommon for incorrectly
matched points to have high correlation coefficients. The process is repeated for each pass point,
keeping in mind the optimal distribution illustrated in Fig. 17-1. Due to increased redundancy, the
most effective points are those that appear in the so-called tri-lap area, which is the area included on
three consecutive images along a strip. Once many pass points are located, more can be added in a
fully automated process by prediction of point locations based on a coordinate transformation.
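The matching step described above can be sketched in Python. This is an illustrative sketch only, not the algorithm of any particular software package; the function names and the brute-force search are assumptions for clarity.

```python
import numpy as np

def normalized_cross_correlation(template, patch):
    """Correlation coefficient between a template array and a candidate patch."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return (t * p).sum() / denom

def match_template(template, search_area):
    """Slide the template over the search area; return the offset with the
    highest correlation coefficient along with that coefficient."""
    th, tw = template.shape
    sh, sw = search_area.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            rho = normalized_cross_correlation(template, search_area[r:r + th, c:c + tw])
            if rho > best:
                best, best_rc = rho, (r, c)
    return best_rc, best
```

In practice the coarse NCC location found this way would be refined by least squares matching, and the returned coefficient would be compared against an acceptance threshold as discussed above.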
Example 17-1
Figure 17-3 illustrates a continuous strip of three stereomodels with pass points a through l and
ground control points A through E. Independent model coordinates for points and exposure stations of
each model are listed below along with ground coordinates of control points A through E. Compute the
ground coordinates of the pass points and exposure stations by the sequential method of
semianalytical aerotriangulation. Use a three-dimensional conformal coordinate transformation
program.
FIGURE 17-3 Configuration of pass points and control for semianalytical aerotriangulation of
Example 17-1.
Solution
1. With an ASCII text editor, create the following data file with a “.dat” extension (see Example
11-3 for a description of the data file format):
2. Run the “3dconf” program to produce the following results. (Only the portion of the output
which gives transformed points is shown.)
The above output gives the coordinates of points g, h, i, C, and O3 in the model 1-2 system.
3. With an ASCII text editor, create the following data file with a “.dat” extension:
4. Run the “3dconf” program to produce the following results. (Only the portion of the output
which gives transformed points is shown.)
The output gives the coordinates of points j, k, l, D, E, and O4 in the model 1-2 system.
5. With an ASCII text editor, create the following data file with a “.dat” extension:
6. Run the “3dconf” program to produce the following results (only the portion of the output
which gives transformed points is shown):
This completes the solution. Note that the output of step 6 contains the computed ground
coordinates in meters for pass points a through l as well as the exposure stations O1 through O4.
The computed standard deviations are also listed in meters.
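The transformation applied at each step of the example can be sketched as follows. This shows only the forward application of assumed seven-parameter values (scale, three rotations, three translations), not the least squares solution performed by the "3dconf" program; the function names are hypothetical.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation matrix M = M_kappa @ M_phi @ M_omega (angles in radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Mk @ Mp @ Mo

def conformal_3d(points, scale, omega, phi, kappa, T):
    """Apply X = s * M^T x + T to each model point (rows of `points`)."""
    M = rotation_matrix(omega, phi, kappa)
    return scale * points @ M + np.asarray(T)
```

With identity rotation, the transformation reduces to a uniform scale plus translation, which is a quick sanity check when debugging a sequential strip formation.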
Due to the nature of sequential strip formation, random errors will accumulate along the strip.
Often, this accumulated error will manifest itself in a systematic manner with the errors increasing in
a nonlinear fashion. This effect, illustrated in Fig. 17-4, can be significant, particularly in long strips.
Figure 17-4a shows a strip model composed of seven contiguous stereomodels from a single flight
line. Note from the figure that sufficient ground control exists in model 1-2 to absolutely orient it (and
thereby the entire strip) to the ground system. The remaining control points (in models 4-5 and 7-8)
can then be used as checkpoints to reveal accumulated errors along the strip. Figure 17-4b shows a
plot of the discrepancies between model and ground coordinates for the checkpoints as a function of X
coordinates along the strip. Except for the ground control in the first model, which was used to
absolutely orient the strip, discrepancies exist between model positions of horizontal and vertical
control points and their corresponding field-surveyed positions. Smooth curves are fit to the
discrepancies as shown in the figure.
FIGURE 17-4 (a) Plan view of control extension of a seven-model strip. (b) Smooth curves indicating
accumulation of errors in X, Y, and Z coordinates during control extension of a strip.
If sufficient control is distributed along the length of the strip, a three-dimensional polynomial
transformation can be used in lieu of a conformal transformation to perform absolute orientation and
thus obtain corrected coordinates for all pass points. This polynomial transformation yields higher
accuracy through modeling of systematic errors along the strip. Most of the polynomials in use for
adjusting strips formed by aerotriangulation are variations of the following third-order equations:
X′ = X + a1 + a2X + a3Y + a4X² + a5XY + a6Y² + a7X³ + a8X²Y + a9XY² + a10Y³
Y′ = Y + b1 + b2X + b3Y + b4X² + b5XY + b6Y² + b7X³ + b8X²Y + b9XY² + b10Y³     (17-1)
Z′ = Z + c1 + c2X + c3Y + c4X² + c5XY + c6Y² + c7X³ + c8X²Y + c9XY² + c10Y³

In Eqs. (17-1), X′, Y′, and Z′ are the transformed ground coordinates; X, Y, and Z are strip model
coordinates, with the correction polynomials functions of X and Y; and the a's, b's, and c's are
coefficients which define the shape of the polynomial error curves. The equations contain 30 unknown
coefficients (a's, b's, and c's). Each three-dimensional
control point enables the above three polynomial equations to be written, and thus 10 three-
dimensional control points are required in the strip for an exact solution. When dealing with
transformations involving polynomials, however, it is imperative to use redundant control which is
well distributed throughout the strip. It is important that the control points occur at the periphery as
well, since extrapolation from polynomials can result in excessive corrections. As illustrated by Fig.
17-4b, errors in X, Y, and Z are principally functions of the linear distance (X coordinate) of the point
along the strip. However, the nature of error propagation along strips formed by aerotriangulation is
such that discrepancies in X, Y, and Z coordinates are also each somewhat related to the Y positions of
the points in the strip. Depending on the complexity of the distortion, certain terms may be eliminated
from Eqs. (17-1) if they are found not to be significant. This serves to increase redundancy in the
transformation, which generally yields more accurate results. Further discussion of the
application of polynomials in adjusting strip models to ground can be found in references cited at the
end of this chapter. It is possible, however, to avoid the polynomial adjustment completely by using
the simultaneous approach as mentioned in Sec. 17-3.
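A least squares fit of error curves of the third-order form in Eqs. (17-1) can be sketched as follows. The coordinates and coefficient values in the test are made up for illustration; a real strip adjustment would fit one such curve per coordinate using the discrepancies at control points.

```python
import numpy as np

def cubic_design(X, Y):
    """Design matrix containing the 10 full third-order terms in X and Y."""
    X, Y = np.asarray(X), np.asarray(Y)
    return np.column_stack([np.ones_like(X), X, Y,
                            X**2, X * Y, Y**2,
                            X**3, X**2 * Y, X * Y**2, Y**3])

def fit_strip_correction(X, Y, discrepancies):
    """Least squares coefficients of one coordinate's polynomial error curve."""
    A = cubic_design(X, Y)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(discrepancies), rcond=None)
    return coeffs
```

With redundant, well-distributed control the least squares solution damps measurement noise; with exactly 10 control points the fit is exact and offers no check, which is why redundancy is stressed above.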
FIGURE 17-5 (a) Block of photos in overlapped position. (b) Separated photos showing image points.
The unknown quantities to be obtained in a bundle adjustment consist of (1) the X, Y, and Z object
space coordinates of all object points and (2) the exterior orientation parameters (ω, ϕ, κ, XL, YL, and
ZL) of all photographs. The first group of unknowns (object space coordinates) is the necessary result
of any aerotriangulation, analytical or otherwise. Exterior orientation parameters, however, are
generally not of interest to the photogrammetrist, but they must be included in the mathematical
model for consistency. In the photo block of Fig. 17-5a the number of unknown object coordinates is
26 × 3 = 78 (number of object points times the number of coordinates per point). The number of
unknown exterior orientation parameters is 8 × 6 = 48 (number of photos times the number of exterior
orientation parameters per photo). Therefore the total number of unknowns is 78 + 48 = 126.
The measurements (observed quantities) associated with a bundle adjustment are (1) x and y
photo coordinates of images of object points; (2) X, Y, and/or Z coordinates of ground control points;
and (3) direct observations of the exterior orientation parameters (ω, ϕ, κ, XL, YL, and ZL) of the
photographs. The first group of observations, photo coordinates, constitutes the fundamental
photogrammetric measurements. For a proper bundle adjustment they must be weighted according to the accuracy and
precision with which they were measured. The next group of observations is coordinates of control
points determined through field survey. Although ground control coordinates are indirectly
determined quantities, they can be included as observations provided that proper weights are assigned.
The final set of observations, exterior orientation parameters, has recently become important in
bundle adjustments with the use of airborne GPS control as well as inertial navigation systems (INSs)
which have the capability of measuring the angular attitude of a photograph.
Returning to the block of Fig. 17-5, the number of photo coordinate observations is 76 × 2 = 152
(number of imaged points times the number of photo coordinates per point), and the number of ground
control observations is 6 × 3 = 18 (number of three-dimensional control points times the number of
coordinates per point). If the exterior orientation parameters were measured, the number of additional
observations would be 8 × 6 = 48 (number of photos times the number of exterior orientation
parameters per photo). Thus, if all three types of observations are included, there will be a total of 152
+ 18 + 48 = 218 observations; but if only the first two types are included, there will be only 152 + 18 =
170 observations. Regardless of whether exterior orientation parameters were observed, a least
squares solution is possible since the number of observations is greater than the number of unknowns
(126) in either case.
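The bookkeeping just described can be captured in a small helper function (hypothetical, but useful for checking redundancy before running an adjustment):

```python
def bundle_counts(n_photos, n_points, n_imaged_points, n_ctrl_points, eo_observed=False):
    """Unknowns, observations, and redundancy for a bundle adjustment block."""
    unknowns = 6 * n_photos + 3 * n_points               # exterior orientation + object coordinates
    observations = 2 * n_imaged_points + 3 * n_ctrl_points   # photo coords + ground control
    if eo_observed:
        observations += 6 * n_photos                     # GPS/INS exterior orientation observations
    return unknowns, observations, observations - unknowns
```

For the block of Fig. 17-5 (8 photos, 26 object points, 76 imaged points, 6 control points), this reproduces the totals computed above: 126 unknowns against 170 or 218 observations.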
The observation equations which are the foundation of a bundle adjustment are the collinearity
equations (see Sec. D-3). These equations are given below in a slightly modified form as Eqs. (17-2)
and (17-3).
xij = xo − f [m11i(Xj − XLi) + m12i(Yj − YLi) + m13i(Zj − ZLi)] / [m31i(Xj − XLi) + m32i(Yj − YLi) + m33i(Zj − ZLi)]     (17-2)

yij = yo − f [m21i(Xj − XLi) + m22i(Yj − YLi) + m23i(Zj − ZLi)] / [m31i(Xj − XLi) + m32i(Yj − YLi) + m33i(Zj − ZLi)]     (17-3)

In these equations, xij and yij are the measured photo coordinates of the image of point j on photo i
related to the fiducial axis system; xo and yo are the coordinates of the principal point in the fiducial
axis system; f is the focal length (or more correctly, principal distance) of the camera; m11i through
m33i are the rotation matrix terms for photo i; Xj, Yj, and Zj are the coordinates of point j
in object space; and XLi, YLi, and ZLi are the coordinates of the incident nodal point of the camera lens
in object space. Since the collinearity equations are nonlinear, they are linearized by applying the
first-order terms of Taylor’s series at a set of initial approximations. After linearization (see Sec. D-5)
the equations can be expressed in the following matrix form:
B̈ij Δ̈i + Ḃij Δ̇j = εij + Vij     (17-4)

The above terms are defined as for Eqs. (D-15) and (D-16), except that subscripts i and j are used
for the photo designation and point designation, respectively. (In App. D, the subscript A is used for
the point designation, and no subscript is used for the photo designation.) Matrix B̈ij contains the
partial derivatives of the collinearity equations with respect to the exterior orientation parameters of
photo i, evaluated at the initial approximations. Matrix Ḃij contains the partial derivatives of the
collinearity equations with respect to the object space coordinates of point j, evaluated at the initial
approximations. Matrix Δ̈i contains corrections for the initial approximations of the exterior
orientation parameters for photo i, and matrix Δ̇j contains corrections for the initial approximations of
the object space coordinates of point j. Matrix εij contains measured minus computed x and y photo
coordinates for point j on photo i, and finally matrix Vij contains residuals for the x and y photo
coordinates.
Proper weights must be assigned to photo coordinate observations in order to be included in the
bundle adjustment. Expressed in matrix form, the weights for x and y photo coordinate observations of
point j on photo i are
Wxyij = σ0² ⎡ σ²xij   σxyij ⎤⁻¹     (17-5)
            ⎣ σxyij   σ²yij ⎦

where σ0² is the reference variance; σ²xij and σ²yij are variances in xij and yij, respectively; and
σxyij is the covariance of xij with yij. The reference variance is an arbitrary parameter which can
be set equal to 1, and in many cases, the covariance in photo coordinates is equal to zero. In this case,
the weight matrix for photo coordinates simplifies to

Wxyij = ⎡ σ0²/σ²xij      0      ⎤     (17-6)
        ⎣     0      σ0²/σ²yij ⎦
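Forming a photo-coordinate weight matrix of this kind might look like the following sketch (the function name and default values are assumptions; units are arbitrary):

```python
import numpy as np

def photo_weight_matrix(sigma_x, sigma_y, sigma_xy=0.0, ref_var=1.0):
    """Weight matrix for one point's (x, y) photo coordinates:
    W = ref_var * inverse of the 2x2 covariance matrix."""
    cov = np.array([[sigma_x**2, sigma_xy],
                    [sigma_xy,  sigma_y**2]])
    return ref_var * np.linalg.inv(cov)
```

When the covariance term is zero the result is diagonal, matching the simplified form above.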
The next type of observation to be considered is ground control. Observation equations for
ground control coordinates are
Xj = X̄j + vX̄j
Yj = Ȳj + vȲj     (17-7)
Zj = Z̄j + vZ̄j

where Xj, Yj, and Zj are unknown coordinates of point j; X̄j, Ȳj, and Z̄j are the measured coordinate
values for point j; and vX̄j, vȲj, and vZ̄j are the coordinate residuals for point j.
Even though ground control observation equations are linear, in order to be consistent with the
collinearity equations, they will also be approximated by the first-order terms of Taylor's series.

Xj0 + dXj = X̄j + vX̄j
Yj0 + dYj = Ȳj + vȲj     (17-8)
Zj0 + dZj = Z̄j + vZ̄j

In Eqs. (17-8), Xj0, Yj0, and Zj0 are initial approximations for the coordinates of point j; dXj, dYj, and dZj
are corrections to the approximations for the coordinates of point j; and the other terms are as
previously defined.
Rearranging the terms of Eqs. (17-8) and expressing the result in matrix form gives

Δ̇j = ε̄j + V̄j     (17-9)

where Δ̇j = [dXj dYj dZj]ᵀ, ε̄j = [X̄j − Xj0  Ȳj − Yj0  Z̄j − Zj0]ᵀ, and V̄j = [vX̄j vȲj vZ̄j]ᵀ.
As with photo coordinate measurements, proper weights must be assigned to ground control
coordinate observations in order to be included in the bundle adjustment. Expressed in matrix form,
the weights for X, Y, and Z ground control coordinate observations of point j are
Ẇj = σ0² ⎡ σ²X̄j    σX̄jȲj   σX̄jZ̄j ⎤⁻¹
         ⎢ σX̄jȲj   σ²Ȳj    σȲjZ̄j ⎥     (17-10)
         ⎣ σX̄jZ̄j   σȲjZ̄j   σ²Z̄j  ⎦

where σ0² is the reference variance; σ²X̄j, σ²Ȳj, and σ²Z̄j are the variances in X̄j, Ȳj, and Z̄j, respectively;
σX̄jȲj is the covariance of X̄j with Ȳj; σX̄jZ̄j is the covariance of X̄j with Z̄j; and σȲjZ̄j is
the covariance of Ȳj with Z̄j. As before, the reference variance can be arbitrarily set equal to 1;
however, in general, since ground control coordinates are indirectly determined quantities, their
covariances are not equal to zero.
The final type of observation consists of measurements of exterior orientation parameters. The
form of their observation equations is similar to that of ground control, shown in Eq. (17-11) for ωi;
analogous equations apply for ϕi, κi, XLi, YLi, and ZLi.

ωi0 + dωi = ω̄i + vω̄i     (17-11)

In matrix form, the six equations for photo i become

Δ̈i = ε̈i + V̈i     (17-12)

The weight matrix for exterior orientation parameters is the inverse of the 6 × 6 covariance matrix Σ̈i
of the observed parameters, scaled by the reference variance:

Ẅi = σ0² Σ̈i⁻¹     (17-13)
With the observation equations and weights defined as above, the full set of normal equations may be
formed directly. In matrix form, the full normal equations are
NΔ = K     (17-14)

where N is the coefficient matrix of the normal equations, Δ is the vector of corrections to the
unknowns, and K is the constant vector, each formed by summing the weighted contributions of the
individual observation equations.
In the above expressions, m is the number of photos, n is the number of points, i is the photo subscript,
and j is the point subscript. Note that if point j does not appear on photo i, the corresponding
submatrices will be zero matrices. Note also that the Ẅi contributions to the N matrix and the Ẅiε̈i
contributions to the K matrix are made only when observations for exterior orientation parameters
exist; and the Ẇj contributions to the N matrix and the Ẇjε̄j contributions to the K matrix are made only
for ground control point observations.
While the normal equations are being formed, it is recommended that the estimate for the
standard deviation of unit weight be calculated (see Sec. B-10). Assuming the initial approximations
are reasonable, matrices εij, ε̄j, and ε̈i are good estimates of the negatives of the residuals. Therefore,
the estimate of the standard deviation of unit weight can be computed by

S0 = √[ (ΣΣ εijᵀWxyij εij + Σ ε̄jᵀẆj ε̄j + Σ ε̈iᵀẄi ε̈i) / (n.o. − n.u.) ]     (17-15)

In Eq. (17-15), n.o. is the number of observations and n.u. is the number of unknowns in the solution.
If all observations have been properly weighted, S0 should be close to 1.
After the normal equations have been formed, they are solved for the unknowns Δ, which are
corrections to the initial approximations for exterior orientation parameters and object space
coordinates. The corrections are then added to the approximations, and the procedure is repeated until
the estimated standard deviation of unit weight converges. At that point, the covariance matrix for the
unknowns can be computed by
ΣΔΔ = S0² N⁻¹     (17-16)
Computed standard deviations for the unknowns can then be obtained by taking the square root of the
diagonal elements of the ΣΔΔ matrix.
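The post-convergence statistics just described can be sketched in a few lines. This assumes the weighted sum of squared residuals (VᵀWV) has already been accumulated during the final iteration; the function name is hypothetical.

```python
import numpy as np

def solution_statistics(N, vTWv, n_obs, n_unknowns):
    """Reference standard deviation S0 and standard deviations of the unknowns
    from the diagonal of S0^2 * N^-1 (the covariance matrix of the unknowns)."""
    s0 = np.sqrt(vTWv / (n_obs - n_unknowns))
    cov = s0**2 * np.linalg.inv(N)
    return s0, np.sqrt(np.diag(cov))
```

A value of S0 near 1 indicates the a priori weights were realistic; a markedly different value suggests the observation variances should be re-examined.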
Example 17-2
A strip model was constructed sequentially using the method described in Sec. 17-4, and then adjusted
to ground using a three-dimensional conformal coordinate transformation. Using the data provided in
the table, find the initial approximations of ω, φ, and κ for each photo in the strip using the chain
method.
Solution Use the rotation angles for the orientation of ground to strip to approximate ω, φ, and κ for
the first photo. Form the rotation matrix from the ground to strip system using the definitions in Sec.
C-7:
Next, form the rotation matrix representing the relative angular orientation from photo 1 to photo 2:
The product of these matrices yields the rotation matrix from ground to photo 2 which can be used to
find approximate values for ω, φ, and κ for photo 2 via the method described in Sec. D-10:
Multiplying the rotation matrix formed by the relative orientation angles from photo 2 to photo 3,
M2-3, by the above matrix yields the rotation matrix from ground to photo 3 which can be used to find
approximate values for ω, φ, and κ for photo 3:
Multiplying the rotation matrix formed by the relative orientation angles from photo 3 to photo 4,
M3-4, by the above matrix yields the rotation matrix from ground to photo 4 which can be used to find
approximate values for ω, φ, and κ for photo 4:
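The chain method of Example 17-2 amounts to repeated matrix multiplication followed by angle extraction. The sketch below assumes the sequential rotation convention M = Mκ Mφ Mω; the function names are illustrative.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation matrix M = M_kappa @ M_phi @ M_omega (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Mk @ Mp @ Mo

def extract_opk(M):
    """Recover (omega, phi, kappa) from a matrix built as above."""
    phi = np.arcsin(M[2, 0])
    omega = np.arctan2(-M[2, 1], M[2, 2])
    kappa = np.arctan2(-M[1, 0], M[0, 0])
    return omega, phi, kappa

def chain_orientations(M_ground_to_photo1, relative_Ms):
    """Propagate approximate photo orientations along a strip:
    M_photo(k+1) = M_rel(k -> k+1) @ M_photo(k)."""
    Ms = [M_ground_to_photo1]
    for M_rel in relative_Ms:
        Ms.append(M_rel @ Ms[-1])
    return [extract_opk(M) for M in Ms]
```

Each returned angle triple serves as an initial approximation for the corresponding photo in the subsequent bundle adjustment.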
17-8 Bundle Adjustment with Airborne GPS Control
As mentioned in Sec. 17-1, kinematic GPS and INS observations can be taken aboard the aircraft as
the photography is being acquired to determine coordinates and angular attitude for exposure stations.
Use of GPS and INS in the aircraft to control a bundle adjustment of a block of photographs is termed
airborne control. By including coordinates of the exposure stations and angular attitude of the camera
in the adjustment, the amount of ground control can be greatly reduced.
Figure 17-6 illustrates the geometric relationship between a camera, inertial measurement unit
(IMU), and GPS antenna on an aircraft. In this figure, x, y, and z represent the standard three-
dimensional coordinate system of a mapping camera; and xA, yA, and zA represent the coordinates of the
GPS antenna relative to the camera axes, often referred to as the lever arm. The x axis of the camera is
parallel to the longitudinal axis of the aircraft, the z axis is vertical, and the y axis is perpendicular to
the x and z axes. Since object space coordinates obtained by GPS pertain to the phase center of the
antenna but the exposure station is defined as the incident nodal point of the camera lens, the GPS
coordinates of the antenna must be translated to the camera lens. To properly compute the
translations, it is necessary to know the angular orientation of the camera with respect to the object
space coordinate system. Determining the correct angular orientation is complicated by the use of a
gimbaled camera mount which allows relative rotations between the camera and the aircraft frame.
FIGURE 17-6 Configuration of camera, IMU, and GPS antenna for airborne GPS control.
If the camera in its mount were fixed, the rotation matrix Mi, consisting of the angular orientation
parameters of the camera (ωi, ϕi, and κi), would translate directly to the angular orientation of the camera-
to-antenna vector. However, differential rotation from the airframe to the camera, represented by Mim
(the superscript m stands for mount), must also be taken into account in order to determine the angular
attitude of the camera-to-antenna vector in object space. Note that even in a so-called fixed mount
there will generally be a crab adjustment, rotation about the z axis of the fixed-mount coordinate
system, to ensure proper photographic coverage (see Sec. 3-6). Some camera mounts such as the Leica
PAV30 shown in Fig. 3-8 have the capability of measuring the differential rotations, and they can be
recorded by a computer. The following equation specifies the rotation of the camera-to-antenna vector
with respect to object space:

MiA = Mim Mi     (17-17)

In Eq. (17-17), Mi is the conventional rotation matrix consisting of angular exterior orientation
parameters of the camera with respect to the object space coordinate system (ωi, ϕi, and κi); Mim is the
rotation matrix of the camera with respect to the mount; and MiA is the rotation matrix of the camera-
to-antenna vector with respect to object space coordinates.
Once MiA has been determined, the rotation angles (ωA, ϕA, and κA) can be computed. [See Eqs. (C-
33) and Sec. D-10.] After MiA has been computed, the coordinates of the camera lens can be computed
by Eqs. (17-18). (Note: the subscript i has been dropped.)

XL = XA − (m11xA + m21yA + m31zA)
YL = YA − (m12xA + m22yA + m32zA)     (17-18)
ZL = ZA − (m13xA + m23yA + m33zA)

where the m's are the elements of MA, and XA, YA, and ZA are the GPS-derived object space coordinates of
the antenna phase center.
When a camera mount is used which does not provide for measurement of the differential
rotation from the airframe to the camera, it is assumed to be equal to zero, resulting in errors in the
computed position of the camera lens. This error can be minimized by mounting the GPS antenna
vertically above the camera in the aircraft, which effectively eliminates the error due to unaccounted
crab adjustment, rotation about the z axis of the fixed-mount coordinate system. As long as the
differential tilt rotations are small (less than a couple of degrees) and the antenna-to-camera vector is
short (less than 2 m), the lens positional error will be less than 10 cm. One last comment must be
made concerning the translation of GPS antenna coordinates to the lens. Since the values of ω, ϕ, and
κ are required to compute the translation, the antenna offset correction must be included within the
iterative loop of the analytical bundle adjustment.
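The antenna-to-lens translation of Eq. (17-18) can be sketched as follows, assuming M rotates object space into the camera system and the lever arm is expressed in camera axes; the function name is hypothetical.

```python
import numpy as np

def lens_position(antenna_xyz, lever_arm_xyz, M):
    """Translate GPS antenna phase-center coordinates to the camera lens:
    X_L = X_A - M^T @ (lever arm), with the lever arm in camera axes."""
    return np.asarray(antenna_xyz) - M.T @ np.asarray(lever_arm_xyz)
```

Because M depends on ω, ϕ, and κ, this computation must sit inside the iterative loop of the bundle adjustment, as noted above.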
In order to use airborne control, it is necessary to have accurate values for the lever arm between
the camera and GPS antenna. Perhaps the most common method for determining this vector is direct
measurement using conventional surveying techniques. However, it is possible to include their values
as unknowns in the bundle adjustment solution. Equations (17-19) show the collinearity equations for
an imaged point with observations from GPS and lever arm parameters included. Note that the lever
arm parameters are included under the assumption that the camera mount is fixed and should
otherwise reflect the change in angular attitude due to rotation of the mount.
(17-19)
The lever arm parameters are highly correlated with both the interior and exterior orientation
parameters. This can greatly affect the precision of their solution in the bundle adjustment, which is
why the lever arm parameters are normally measured using conventional surveying techniques.
The boresight angles that define the orientation of the IMU with respect to the camera can be
found by the difference between results from a bundle adjustment using ground control, and the values
obtained from using airborne control. Alternatively, as with the lever arm parameters, these values can
also be included in a bundle adjustment as unknowns. In this case, the m's in Eq. (17-19) correspond to
the entries of Mi, the product of the rotation matrix determined by the INS, MiIMU, and the boresight
rotation matrix, ΔM, as shown in Eq. (17-20):

Mi = ΔM MiIMU     (17-20)
Since two rotation matrices are included, there are six unknown rotation angles in Eq. (17-19). This
makes the linearization of the collinearity equations significantly more complex than with the
standard formulation.
Another consideration regarding airborne GPS positioning is the problem of loss of lock on the
GPS satellites, especially during banked turns. When a GPS receiver operating in the kinematic mode
loses lock on too many satellites, the integer ambiguities must be redetermined (see Sec. 16-8).
Since returning to a previously surveyed point is generally out of the question, on-the-fly (OTF)
techniques are used to calculate the correct integer ambiguities. With high-quality, dual-frequency, P-
code receivers, OTF techniques are often successful in correctly redetermining the integer
ambiguities. In some cases, however, an integer ambiguity solution may be obtained which is slightly
incorrect. This results in an approximately linear drift in position along the flight line, which causes
the accuracy of exposure station coordinates to deteriorate. This problem can be detected by using a small
number of ground control points at the edges of the photo block. Inclusion of additional parameters in
the adjustment corresponding to the linear drift enables a correction to be applied which eliminates
this source of error. Often, cross strips are flown at the ends of the regular block strips, as shown in
Fig. 17-7. The cross strips contain ground control points at each end which allow drift due to incorrect
OTF integer ambiguities to be detected and corrected. The corrected cross strips in turn serve to
provide endpoint coordinates for the remainder of the strips in the block, thus enabling drift
corrections to be made for those strips as well.
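The drift-correction idea can be sketched as a simple least squares line fit to the discrepancies observed at control points (illustrative values; a production adjustment would estimate the drift parameters inside the bundle adjustment itself):

```python
import numpy as np

def fit_linear_drift(x_along_strip, discrepancies):
    """Fit e(x) = a + b*x to coordinate discrepancies at control points."""
    A = np.column_stack([np.ones(len(x_along_strip)), np.asarray(x_along_strip)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(discrepancies), rcond=None)
    return a, b

def apply_drift_correction(x_along_strip, coords, a, b):
    """Remove the fitted drift from exposure station coordinates."""
    return np.asarray(coords) - (a + b * np.asarray(x_along_strip))
```

One such fit per coordinate per strip removes the approximately linear error introduced by a slightly incorrect integer ambiguity solution.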
Two additional precautions regarding airborne GPS should be noted. First, it is recommended
that a bundle adjustment with analytical self-calibration (see Sec. 19-4) be employed when airborne
GPS control is used. Often, due to inadequate modeling of atmospheric refraction distortion, strict
enforcement of the calibrated principal distance (focal length) of the camera will cause distortions and
excessive residuals in photo coordinates. Use of analytical self-calibration will essentially eliminate
that effect. Second, it is essential that appropriate object space coordinate systems be employed in
data reduction. GPS coordinates in a geocentric coordinate system should be converted to local
vertical coordinates for the adjustment (see Secs. 5-5 and F-4). After aerotriangulation is completed,
the local vertical coordinates can be converted to whatever system is desired. Elevations relative to
the ellipsoid can be converted to orthometric elevations by using an appropriate geoid model.
FIGURE 17-8 Three-line linear array sensor scans: forward, nadir, and backward.
Three-line scanners collect three raw image scenes synchronously along a strip. One scene
consists of the collection of scan lines from the backward-looking linear array, another is from the
nadir-looking linear array, and the third is from the forward-looking linear array. In their raw format,
Level 0, these scenes are distorted due to aircraft movement during collection. Correcting the data for
sensor tilt and aircraft movement using GPS-INS measurements yields nominally rectified imagery,
Level 1. Figure 17-9 shows Level 0 imagery and Level 1 imagery. In the ADS systems, the
transformations from Level 0 to Level 1 are done in real time. In order to increase the accuracy of the
imagery and to facilitate the calibration of boresight and lever arm parameters, the exterior orientation
parameters obtained by GPS-INS are adjusted using a unique method of aerotriangulation.
FIGURE 17-9 Raw (left) and processed (right) linear array imagery. Note that the edges of the
processed imagery correspond to the tilt of the sensor during acquisition. (Courtesy of the University
of Florida)
The first step in three-line scanner aerotriangulation is to obtain pass points between the scenes.
Although pass point generation is done in Level 1 scenes to facilitate automated matching, the
coordinates of the pass points refer to the Level 0 scenes. In order to apply the collinearity equations,
one must have exposure stations with multiple image observations. However, since the orientation
data comes from a continuous stream, the observations of the exterior orientation parameters are
continuous along the flight path and it is nearly impossible to have multiple points imaged in a single
scan line. Thus, orientation fixes are used. Orientation fixes can be considered simulated exposure
stations. They are defined at regular intervals along the flight path, and their spacing is chosen based
on the quality of the GPS-INS data. The poorer the GPS-INS, the shorter the allowable interval
between orientation fixes. Figure 17-10 illustrates the concept of orientation fixes along a flight path.
FIGURE 17-10 Orientation fixes along a flight path for a three-line linear sensor array.
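The way orientation fixes stand in for exposure stations can be illustrated by interpolating the six exterior orientation parameters at an arbitrary scan time from the bracketing fixes. A linear interpolation is assumed here purely for illustration; production software may use higher-order functions:

```python
import numpy as np

def interpolate_eo(fix_times, fix_params, t):
    """Linearly interpolate the six exterior orientation parameters
    (omega, phi, kappa, XL, YL, ZL) at scan time t from the bracketing
    orientation fixes.  fix_params is (n_fixes, 6)."""
    fix_times = np.asarray(fix_times, float)
    fix_params = np.asarray(fix_params, float)
    # Index of the fix at or before t, clamped to a valid interval
    i = np.searchsorted(fix_times, t, side="right") - 1
    i = min(max(i, 0), len(fix_times) - 2)
    s = (t - fix_times[i]) / (fix_times[i + 1] - fix_times[i])
    return (1 - s) * fix_params[i] + s * fix_params[i + 1]
```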
Once the orientation fixes have been established, the collinearity equations for each point on each
scene can be formed. The exterior orientation parameters associated with the imaging of these points
must be expressed as functions of the nearest orientation fixes before and after imaging. The
adjustment is similar to relative orientation in that each orientation fix for a scene is adjusted based on
the weighted exterior orientation parameters of the other orientation fixes corresponding to the other
scenes. Each point yields two equations for each of the three scenes. Boresight and lever arm
parameters can also be introduced into the equations using methods similar to those described in Sec.
17-8. Care must be taken when selecting the distance between the orientation fixes in order to ensure
that there will be enough redundancy from pass points to resolve the unknown parameters. In general,
the distance between orientation fixes should not exceed the instantaneous ground distance between
the nadir and backward scan lines. After the adjustment is completed, the solved orientation fixes are
used to update the GPS-INS data, which can then be used to rectify Level 0 imagery.
$$\begin{aligned}
\omega_x &= \omega_o + a_1 x &\qquad X_{L_x} &= X_{L_o} + a_4 x\\
\phi_x &= \phi_o + a_2 x &\qquad Y_{L_x} &= Y_{L_o} + a_5 x\\
\kappa_x &= \kappa_o + a_3 x &\qquad Z_{L_x} &= Z_{L_o} + a_6 x + a_7 x^2
\end{aligned}\tag{17-21}$$
In Eq. (17-21), x is the row number of some image position; ωx, ϕx, κx, XLx, YLx, and ZLx are the exterior
orientation parameters of the sensor when row x was acquired; ωo, ϕo, κo, XLo, YLo, and ZLo are the exterior
orientation parameters of the sensor at the start position; and a1 through a7 are coefficients which
describe the systematic variations of the exterior orientation parameters as the image is acquired. Note
that according to Eq. (17-21) the variation in ZL is second order, whereas the other variations are linear
(first order). This is due to the curved orbital path of the satellite and is based on an assumption that a
local vertical coordinate system (see Sec. 5-5) is being used. Depending upon the accuracy
requirements and measurement precision, the coefficient of the second-order term, a7, may often be
assumed to be equal to zero.
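Eq. (17-21) can be evaluated directly; the sketch below assumes the parameter packing shown (start values first, then the coefficients a1 through a7):

```python
def eo_at_row(x, start, a):
    """Exterior orientation at image row x per Eq. (17-21).
    start = (omega_o, phi_o, kappa_o, XLo, YLo, ZLo); a = (a1, ..., a7).
    All parameters except ZL vary linearly with x; ZL carries the
    second-order term a7*x**2 due to the curved orbital path."""
    om, ph, ka, xl, yl, zl = start
    a1, a2, a3, a4, a5, a6, a7 = a
    return (om + a1 * x, ph + a2 * x, ka + a3 * x,
            xl + a4 * x, yl + a5 * x, zl + a6 * x + a7 * x * x)
```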
Given the variation of exterior orientation parameters described above, the collinearity equations
which describe linear array sensor geometry for any image point a are
$$0 = -f\left[\frac{m_{11}(X_A - X_{L_{x_a}}) + m_{12}(Y_A - Y_{L_{x_a}}) + m_{13}(Z_A - Z_{L_{x_a}})}{m_{31}(X_A - X_{L_{x_a}}) + m_{32}(Y_A - Y_{L_{x_a}}) + m_{33}(Z_A - Z_{L_{x_a}})}\right]\tag{17-22}$$
$$y_a - y_o = -f\left[\frac{m_{21}(X_A - X_{L_{x_a}}) + m_{22}(Y_A - Y_{L_{x_a}}) + m_{23}(Z_A - Z_{L_{x_a}})}{m_{31}(X_A - X_{L_{x_a}}) + m_{32}(Y_A - Y_{L_{x_a}}) + m_{33}(Z_A - Z_{L_{x_a}})}\right]\tag{17-23}$$
In Eqs. (17-22) and (17-23), ya is the y coordinate (column number) of the image of point A; yo is the y
coordinate of the principal (middle) point of the row containing the image; f is the sensor focal length;
m11 through m33 are the rotation matrix terms [see Eqs. (C-33)] for the sensor attitude when row xa was
acquired; XLxa, YLxa, and ZLxa are the coordinates of the sensor when row xa was acquired; and XA, YA, and ZA
are the object space coordinates of point A. Note that the exterior orientation terms and hence the
rotation matrix terms are functions of the form of Eq. (17-21). It is also important to note that the
units of the image coordinates and the focal length must be the same. For example, the first three
SPOT sensor systems had focal lengths of 1082 mm and, when operating in the panchromatic mode,
pixel dimensions of 0.013 mm in their focal planes.2 Therefore, if standard row and column image
coordinates (in terms of pixels) are used, the focal length is expressed as 1082 mm/0.013 mm/pixel =
83,200 pixels.
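A numerical sketch of Eqs. (17-22) and (17-23) follows. The function name is illustrative; the x result should be zero whenever the ground point truly lies in the plane swept by the scan line:

```python
import numpy as np

def linear_array_image_coords(m, cam_xyz, ground_xyz, f, y_o=0.0):
    """Evaluate the linear-array collinearity equations (17-22)/(17-23).
    m is the 3x3 rotation matrix for the row, cam_xyz the sensor position
    for that row, and f the focal length (same units as the image
    coordinates).  Returns (x_residual, y): x_residual is the left side
    of Eq. (17-22) and should be 0 when the point lies on the scan line."""
    d = np.asarray(ground_xyz, float) - np.asarray(cam_xyz, float)
    num_x = m[0] @ d
    num_y = m[1] @ d
    den = m[2] @ d
    x_res = -f * num_x / den      # Eq. (17-22): zero on the scan line
    y = y_o - f * num_y / den     # Eq. (17-23)
    return x_res, y
```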
Rational polynomial coefficient (RPC) camera models (see Sec. C-10) are commonly used to
describe satellite imagery. RPCs are considered a replacement model for the actual physical
characteristics and orientation of the sensor with respect to image coordinates of ground points. They
are derived from the physical model of the satellite sensor using least squares techniques, and their
coefficients are delivered with the imagery. Much like the collinearity equations, RPCs are a
mathematical model for transforming three-dimensional ground points to two-dimensional image
coordinates. Thus, RPCs can be used in many of the same applications as the collinearity equations
such as DEM generation, orthorectification, and feature extraction. For example, IKONOS satellite
imagery uses the ratio of two cubic polynomial functions of three-dimensional ground coordinates to
describe x (line) and y (sample) coordinates of a point in the linear array sensor image as in Eq. (17-
24). The image and ground coordinates of the points are normalized to avoid ill-conditioning and
increase the numerical precision (see Example B-6).
$$x_a = \frac{Num_L(P_a, L_a, H_a)}{Den_L(P_a, L_a, H_a)} \qquad y_a = \frac{Num_S(P_a, L_a, H_a)}{Den_S(P_a, L_a, H_a)}\tag{17-24}$$
In Eq. (17-24), Pa, La, and Ha are the normalized latitude, longitude, and height of point a; xa and ya are
the normalized image coordinates of point a; and NumL, DenL, NumS, and DenS are cubic polynomial
functions of Pa, La, and Ha. Each of the two rational polynomials consists of 39 coefficients (20 in the
numerator and 19 in the denominator, since the constant term of each denominator is fixed at 1) for a
total of 78 coefficients used in the model. Note that if a
point is imaged on two stereo satellite images, the three-dimensional object space coordinates can be
found via least squares since there would be four equations and three unknowns, similar to space
intersection via collinearity described in Sec. 11-7.
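Evaluation of the rational polynomial ratios in Eq. (17-24) can be sketched as follows. The particular ordering of the 20 cubic terms is an assumption for illustration — delivered RPC metadata fixes its own coefficient ordering — and each denominator's constant term is taken as 1, which is why only 19 denominator coefficients are free:

```python
def cubic_poly(c, p, l, h):
    """Evaluate a 20-term cubic polynomial in (P, L, H).  The term
    ordering below is assumed for illustration only."""
    terms = [1, l, p, h, l * p, l * h, p * h, l * l, p * p, h * h,
             p * l * h, l ** 3, l * p * p, l * h * h, l * l * p,
             p ** 3, p * h * h, l * l * h, p * p * h, h ** 3]
    return sum(ci * t for ci, t in zip(c, terms))

def rpc_project(cl_num, cl_den, cs_num, cs_den, p, l, h):
    """Eq. (17-24): normalized image line/sample as ratios of cubic
    polynomials.  Each *_den holds the 19 free denominator coefficients;
    the constant denominator term is fixed at 1."""
    x = cubic_poly(cl_num, p, l, h) / cubic_poly([1.0] + list(cl_den), p, l, h)
    y = cubic_poly(cs_num, p, l, h) / cubic_poly([1.0] + list(cs_den), p, l, h)
    return x, y
```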
The RPC model on its own may be sufficient for some applications; however, accuracy can be
increased by determining bias parameters using a least squares block adjustment of stereo
satellite imagery. Equation (17-25) is referred to as the adjustable RPC model, where a0, a1, a2, b0, b1,
and b2 are affine transformation parameters that model biases in image space stemming from
systematic errors in the physical orientation of the sensor.
$$x_a = a_0 + a_1 x_a + a_2 y_a + \frac{Num_L(P_a, L_a, H_a)}{Den_L(P_a, L_a, H_a)} \qquad y_a = b_0 + b_1 x_a + b_2 y_a + \frac{Num_S(P_a, L_a, H_a)}{Den_S(P_a, L_a, H_a)}\tag{17-25}$$
The solution for the affine parameters can be found using a block adjustment of stereo satellite images
with Eq. (17-25) serving as the basis for the observation equations. Depending on the geometry of the
imagery and the type of sensor (e.g., IKONOS versus QuickBird) not all of the additional parameters
may be statistically significant, and care should be taken not to over-parameterize the adjustment (see
Sec. C-10).
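A least squares solution for the six affine bias parameters can be sketched from discrepancies between measured and RPC-predicted image coordinates at ground control points. Whether the affine terms are evaluated at measured or predicted coordinates is an implementation choice; measured coordinates are assumed here, and the function name is hypothetical:

```python
import numpy as np

def fit_rpc_bias(xy_measured, xy_rpc):
    """Fit the six affine bias parameters of the adjustable RPC model,
    Eq. (17-25), by linear least squares.  xy_measured are observed image
    coordinates of control points; xy_rpc are the coordinates predicted
    by the unadjusted RPC model.  Returns (a0, a1, a2, b0, b1, b2)."""
    xy_m = np.asarray(xy_measured, float)
    xy_r = np.asarray(xy_rpc, float)
    n = len(xy_m)
    # Each point gives one equation in (a0, a1, a2) for x and one in
    # (b0, b1, b2) for y; the bias is an affine function of image coords.
    A = np.column_stack([np.ones(n), xy_m[:, 0], xy_m[:, 1]])
    dx = xy_m[:, 0] - xy_r[:, 0]
    dy = xy_m[:, 1] - xy_r[:, 1]
    a, *_ = np.linalg.lstsq(A, dx, rcond=None)
    b, *_ = np.linalg.lstsq(A, dy, rcond=None)
    return np.concatenate([a, b])
```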
$$\begin{bmatrix} N_{11} & N_{12} \\ N_{12}^{T} & N_{22} \end{bmatrix}\begin{bmatrix} \Delta_1 \\ \Delta_2 \end{bmatrix} = \begin{bmatrix} K_1 \\ K_2 \end{bmatrix}\tag{17-26}$$
In Eq. (17-26), N11 is the block-diagonal submatrix from the upper left portion of N having dimensions
6m × 6m, where m is the number of photos in the block; N22 is the block-diagonal submatrix from the
lower right portion of N having dimensions 3n × 3n, where n is the number of object points in the
block; N12 is the submatrix from the upper right portion of N having dimensions 6m × 3n and N12T is its
transpose; Δ1 is the submatrix from the upper portion of Δ having dimensions of 6m × 1, consisting of
the correction terms for the exterior orientation parameters for all photos; Δ2 is the submatrix from the
lower portion of Δ having dimensions of 3n × 1, consisting of the correction terms for the object space
coordinates for all points; K1 is the submatrix from the upper portion of K having dimensions of 6m ×
1; and K2 is the submatrix from the lower portion of K having dimensions of 3n × 1.
A block-diagonal matrix consists of nonzero submatrices along the main diagonal and zeros
everywhere else. This kind of matrix has the property that its inverse is also block-diagonal, where the
submatrices are inverses of the corresponding submatrices of the original matrix. As such, the inverse
of a block-diagonal matrix is much easier to compute than the inverse of a general, nonzero matrix.
With this in mind, Eq. (17-26) can be rearranged to a form which can be solved more efficiently. First,
Eq. (17-26) is separated into two matrix equations:
$$N_{11}\Delta_1 + N_{12}\Delta_2 = K_1\tag{17-27}$$
$$N_{12}^{T}\Delta_1 + N_{22}\Delta_2 = K_2\tag{17-28}$$
Solving Eq. (17-28) for Δ2 gives
$$\Delta_2 = N_{22}^{-1}\left(K_2 - N_{12}^{T}\Delta_1\right)\tag{17-29}$$
Next the right side of Eq. (17-29) is substituted for Δ2 in Eq. (17-27):
$$N_{11}\Delta_1 + N_{12}N_{22}^{-1}\left(K_2 - N_{12}^{T}\Delta_1\right) = K_1\tag{17-30}$$
Collecting terms in Δ1 yields
$$\left(N_{11} - N_{12}N_{22}^{-1}N_{12}^{T}\right)\Delta_1 = K_1 - N_{12}N_{22}^{-1}K_2\tag{17-31}$$
Matrix Eq. (17-31) is referred to as the reduced normal equations. These equations are solved for
Δ1, which can then be substituted into Eq. (17-29) to compute Δ2. This approach is more efficient since
the largest system of equations which must be solved has only 6m unknowns, as opposed to 6m + 3n
unknowns in the full normal equations. This efficiency is made possible by the block-diagonal
structure of the N22 matrix.
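The reduction can be sketched with NumPy. Here `np.linalg.inv` stands in for the inexpensive blockwise inversion of the block-diagonal lower-right submatrix:

```python
import numpy as np

def solve_reduced(n11, n12, n22, k1, k2):
    """Solve the partitioned normal equations (17-26) via the reduced
    normal equations (17-31).  n22 is block-diagonal in a bundle
    adjustment, so its inverse is cheap; np.linalg.inv stands in for a
    blockwise inverse here.  Returns (delta1, delta2)."""
    n22_inv = np.linalg.inv(n22)
    # Eq. (17-31): (N11 - N12 N22^-1 N12^T) d1 = K1 - N12 N22^-1 K2
    reduced = n11 - n12 @ n22_inv @ n12.T
    rhs = k1 - n12 @ n22_inv @ k2
    delta1 = np.linalg.solve(reduced, rhs)
    # Eq. (17-29): back-substitute for the object point corrections
    delta2 = n22_inv @ (k2 - n12.T @ delta1)
    return delta1, delta2
```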
One can also use the partitioned N matrix to obtain the covariance matrix. The inverse of N can be
partitioned as shown in Eq. (17-32).
$$N^{-1} = C = \begin{bmatrix} C_1 & C_{12} \\ C_{12}^{T} & C_2 \end{bmatrix}\tag{17-32}$$
Using the relationship between a matrix and its inverse shown in Eq. (17-33), the matrix C = N−1 can
be formed using the definitions in Eqs. (17-34), (17-35), and (17-36).
$$NN^{-1} = NC = I\tag{17-33}$$
$$C_1 = \left(N_{11} - N_{12}N_{22}^{-1}N_{12}^{T}\right)^{-1}\tag{17-34}$$
$$C_{12} = -C_1 N_{12} N_{22}^{-1}\tag{17-35}$$
$$C_2 = N_{22}^{-1} + N_{22}^{-1}N_{12}^{T}C_1 N_{12}N_{22}^{-1}\tag{17-36}$$
While C2 can be used to form the full covariance matrix for point coordinates, the computations are
normally limited to determining covariance values for each point separately. This can be done by
using only the portions of the matrices on the right hand side of Eq. (17-36) corresponding to a
particular point j. The covariance matrix can then be formed using Eq. (17-16).
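The covariance submatrices can be checked numerically against a direct inverse — a sketch that assumes the blockwise definitions of Eqs. (17-34) through (17-36):

```python
import numpy as np

def block_covariance(n11, n12, n22):
    """Blockwise inverse of the partitioned N matrix:
    C1  = (N11 - N12 N22^-1 N12^T)^-1            Eq. (17-34)
    C12 = -C1 N12 N22^-1                          Eq. (17-35)
    C2  = N22^-1 + N22^-1 N12^T C1 N12 N22^-1     Eq. (17-36)
    """
    n22_inv = np.linalg.inv(n22)
    c1 = np.linalg.inv(n11 - n12 @ n22_inv @ n12.T)
    c12 = -c1 @ n12 @ n22_inv
    c2 = n22_inv + n22_inv @ n12.T @ c1 @ n12 @ n22_inv
    return c1, c12, c2
```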
An additional enhancement to the solution can be made to increase computational efficiency even
further. This enhancement exploits the fact that the coefficient matrix of the reduced normal equations
is sparse; i.e., it has a large number of elements that are zero. Special computational techniques and
data storage methods are available which take advantage of sparsity, reducing both computational
time and data storage requirements. Details concerning these special computational techniques may be
found in references listed at the end of this chapter.
Figure 17-12 shows a small block with three strips of nine photos each, having end lap and side
lap equal to 60 and 30 percent, respectively. The outlines of photo coverage for only the first three
photos in strips 1 and 2 are shown in the figure, and the remainder are represented as neat models (see
Sec. 18-7). In Fig. 17-12, the image of a representative pass point A exists on photos 1-1, 1-2, 1-3, 2-1,
2-2, and 2-3. This pass point causes “connections” between each possible pair of photos from the set
of six on which it is imaged. Connections for the entire block are illustrated in Fig. 17-13. This figure
shows a graph which indicates the connections (shown as lines or arcs) caused by shared pass points
over the entire block.
FIGURE 17-12 Configuration of a photo block having three strips of nine photos each.
FIGURE 17-13 Graph showing connections between photos caused by shared pass points.
These connections cause nonzero submatrices to appear at corresponding locations in the reduced
normal equations. The positions where these nonzero submatrices appear depend upon the order in
which the photo parameters appear in the reduced normal equation matrix. Two ordering strategies,
known as down-strip and cross-strip, are commonly employed. In the down-strip ordering, the photo
parameters are arranged by strips, so that the nine photos from strip 1 appear first, followed by the
nine photos of strip 2, and the nine photos from strip 3. With cross-strip ordering, the photo
parameters are arranged so that the first photo of strip 1 appears first, followed by the first photos of
strips 2 and 3; then the second photos of strips 1, 2, and 3; and so on. These two photo orders are listed
i n Table 17-1. As will be demonstrated, cross-strip ordering leads to a more efficient solution than
down-strip ordering in this case.
TABLE 17-1 Down-Strip and Cross-Strip Ordering for the Photos of Fig. 17-12
Figure 17-14 shows a schematic representation of the reduced normal equations when down-strip
ordering is employed. Notice from the figure that the nonzero elements tend to cluster in a band about
the main diagonal of the matrix. The width of the band from the diagonal to the farthest off-diagonal
nonzero element is the bandwidth of the matrix. The bandwidth of the matrix shown in Fig. 17-14 is 6
× 12 = 72. With cross-strip ordering of the photos, the reduced normal equation matrix shown in Fig.
17-15 results. Here, the bandwidth is 6 × 8 = 48, which is substantially smaller than that for down-
strip ordering. The narrower the bandwidth, the faster the solution and the less storage required.
FIGURE 17-14 Structure of the reduced normal equations using down-strip ordering.
FIGURE 17-15 Structure of the reduced normal equations using cross-strip ordering.
Solution time for nonbanded reduced normal equations is proportional to the number of
unknowns (6m) raised to the third power. For the example with 27 photos, the time is proportional to
(6 × 27)3 = 4.2 × 106. For banded equations, the solution time is proportional to the bandwidth squared,
times the number of unknowns. For the example with down-strip numbering, the time is proportional to
722 × (6 × 27) = 8.4 × 105, which is 5 times faster than the nonbanded case. With cross-strip
numbering, the time is proportional to 482 × (6 × 27) = 3.7 × 105, which is more than 11 times faster
than the nonbanded case!
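The proportionality figures quoted above can be reproduced in a few lines:

```python
def solution_time_ratios(m_photos, bandwidth):
    """Proportional solution times for a bundle adjustment with m photos:
    nonbanded ~ (6m)^3, banded ~ bandwidth^2 * 6m.
    Returns (nonbanded, banded, speedup)."""
    u = 6 * m_photos
    nonbanded = u ** 3
    banded = bandwidth ** 2 * u
    return nonbanded, banded, nonbanded / banded
```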
Down-strip and cross-strip ordering generally apply only to regular, rectangular photo blocks. In
cases where photo blocks cover irregular areas, other more complicated approaches should be used to
achieve a minimal bandwidth. Details of these other approaches can be found in references which
follow.
References
Ackermann, F., and H. Schade: “Application of GPS for Aerial Triangulation,” Photogrammetric
Engineering and Remote Sensing, vol. 59, no. 11, 1993, p. 1625.
American Society of Photogrammetry: Manual of Photogrammetry, 5th ed., Bethesda, MD, 2004.
Brown, D. C.: “New Developments in Photogeodesy,” Photogrammetric Engineering and Remote
Sensing, vol. 60, no. 7, 1994, p. 877.
Curry, S., and K. Schuckman: “Practical Considerations for the Use of Airborne GPS for
Photogrammetry,” Photogrammetric Engineering and Remote Sensing, vol. 59, no. 11, 1993, p.
1611.
Duff, I. S., A. M. Erisman, and J. K. Reid: Direct Methods for Sparse Matrices, Oxford University
Press, New York, 1990.
Ebadi, H., and M. A. Chapman: “GPS-Controlled Strip Triangulation Using Geometric Constraints of
Man-Made Structures,” Photogrammetric Engineering and Remote Sensing, vol. 64, no. 4, 1998,
p. 329.
El-Hakim, S. F., and H. Ziemann: “A Step-by-Step Strategy for Gross-Error Detection,”
Photogrammetric Engineering and Remote Sensing, vol. 50, no. 6, 1984, p. 713.
Erio, G.: “Three-Dimensional Transformations of Independent Models,” Photogrammetric
Engineering and Remote Sensing, vol. 41, no. 9, 1975, p. 1117.
Fraser, C. S., and H.T. Hanley: “Bias-Compensated RPCs for Sensor Orientation of High-Resolution
Satellite Imagery,” Photogrammetric Engineering and Remote Sensing, vol. 71, 2005, p. 909.
George, A., and J. W. H. Liu: Computer Solution of Large Sparse Positive-Definite Systems, Prentice-
Hall, Englewood Cliffs, NJ, 1981.
Goad, C. C., and M. Yang: “A New Approach to Precision Airborne GPS Positioning for
Photogrammetry,” Photogrammetric Engineering and Remote Sensing, vol. 63, no. 9, 1997, p.
1067.
Grodecki, J., and G. Dial: “Block Adjustment of High-Resolution Satellite Images Described by
Rational Polynomials,” Photogrammetric Engineering and Remote Sensing, vol. 69, no. 1, 2003,
p. 59.
Gruen, A., M. Cocard, and H. G. Kahle: “Photogrammetry and Kinematic GPS: Results of a High
Accuracy Test,” Photogrammetric Engineering and Remote Sensing, vol. 59, no. 11, 1993, p.
1643.
Hinsken, L., S. Miller, U. Tempelmann, R. Uebbing, and S. Walker: “Triangulation of LH Systems’
ADS40 Imagery Using ORIMA GPS/IMU,” International Archives of Photogrammetry and Remote
Sensing, vol. 34, 2001, p. 156.
Jacobsen, K.: “Experiences in GPS Photogrammetry,” Photogrammetric Engineering and Remote
Sensing, vol. 59, no. 11, 1993, p. 1651.
Kubik, K., D. Merchant, and T. Schenk: “Robust Estimation in Photogrammetry,” Photogrammetric
Engineering and Remote Sensing, vol. 53, no. 2, 1987, p. 167.
Novak, K.: “Rectification of Digital Imagery,” Photogrammetric Engineering and Remote Sensing,
vol. 58, no. 3, 1992, p. 339.
Schut, G. H.: “Development of Programs for Strip and Block Adjustment at the National Research
Council of Canada,” Photogrammetric Engineering, vol. 30, no. 2, 1964, p. 283.
Schwarz, K. P., M. A. Chapman, M. W. Cannon, and P. Gong: “An Integrated INS/GPS Approach to
the Georeferencing of Remotely Sensed Data,” Photogrammetric Engineering and Remote
Sensing, vol. 59, no. 11, 1993, p. 1667.
Theodossiou, E. I., and I. J. Dowman: “Heighting Accuracy of SPOT,” Photogrammetric Engineering
and Remote Sensing, vol. 56, no. 12, 1990, p. 1643.
Toth, C. K., and A. Krupnik: “Concept, Implementation, and Results of an Automatic
Aerotriangulation System,” Photogrammetric Engineering and Remote Sensing, vol. 62, no. 6,
1996, p. 711.
Triggs, B., P. McLauchlan, R. Hartley, and A. Fitzgibbon: “Bundle Adjustment: A Modern
Synthesis,” Lecture Notes in Computer Science, vol. 1883, 2000, p. 298.
Westin, T.: “Precision Rectification of SPOT Imagery,” Photogrammetric Engineering and Remote
Sensing, vol. 56, no. 2, 1990, p. 247.
Wolf, P. R.: “Independent Model Triangulation,” Photogrammetric Engineering, vol. 36, no. 12, 1970,
p. 1262.
Problems
17-1. Discuss the advantages of aerotriangulation over field surveys.
17-2. List the three categories of aerotriangulation. Which categories are currently used?
17-3. Describe the process of automatic pass point generation. What are its advantages over manual
point measurement?
17-6. A continuous strip of three stereomodels has pass points a through l and ground control points
A through D. Independent model coordinates for points and exposure stations of each model are listed
below along with ground coordinates of control points. Compute the ground coordinates of the pass
points and exposure stations by the sequential method of semianalytical aerotriangulation. Use the
three-dimensional conformal coordinate transformation program provided (see Example 17-1).
17-7. Repeat Prob. 17-6, except using the coordinates from the table below.
17-8. Briefly describe how the method of independent model aerotriangulation by simultaneous
transformations differs from the sequential approach.
17-10. Briefly describe the unknowns and measurements associated with a bundle adjustment of a
block of photographs.
17-11. Describe how coordinates of the GPS antenna are related to the exposure station when
airborne GPS control is used to control photography.
17-12. Describe how the boresight angular attitude parameters relate the camera and the IMU.
17-14. Briefly discuss the problem associated with the lack of synchronization of GPS fixes with
camera exposures in airborne GPS control.
17-15. What is the purpose of cross strips at the ends of photo blocks when airborne GPS is used to
control photography?
17-16. Compute initial approximations for the angular exterior orientation parameters using the
results from a sequentially constructed strip model with results in the following table using the
methods in Example 17-2.
17-17. Briefly explain how a line perspective image differs from a point perspective image.
17-18. Briefly discuss the characteristic of the N22 submatrix that makes the method of reduced
normal equations more efficient for a bundle adjustment than solving the full normal equations.
17-19. Discuss the difference between down-strip and cross-strip numbering as they apply to the
bandwidth of the reduced normal equations of a bundle adjustment.
_____________
2
SPOT 1, 2, and 3 sensors could be operated in either a panchromatic or multispectral mode. In panchromatic mode, pixel
dimensions were 0.013 mm, and ground resolution was 10 m. In multispectral mode, pixel dimensions were 0.026 mm, and ground
resolution was 20 m.