Adjustment Computation
CONTENTS
Preface
Acknowledgements
PART I
CHAPTER 1
INTRODUCTION
1.1
1.2
CHAPTER 2
MATRICES
2.1 Definitions
2.2
2.3 Matrix Multiplication
2.31 Laws of Multiplication
2.4
2.5 Derivative of a Matrix
2.6 Determinants
2.61 Definition
2.62 Properties of Determinants
2.63 Rank of a Matrix
2.7
2.8
2.9 Inverse of a Matrix
2.91 Singular and Non-singular Matrices
2.92 Theorems on Inverse of a Matrix
Definitions
2.102
2.122 Non-negative Matrix
2.123
2.124 Similar Matrices
2.125 Periodic Matrices
Exercise 2
CHAPTER 3
3.1 Introduction
3.11 Existence of a Solution
3.12 Matrix Inversion and Solution of a System of Linear Equations
3.2 Adjoint Method
3.3 Cramer's Rule
3.4 Elementary Operations
3.5 Gaussian Elimination
3.6 Gauss-Jordan Method
3.7 Matrix Partitioning
3.8 Bordering Method
3.9
CHAPTER 4
4.1 Introduction
4.2
4.21 Summarising Data
4.211 Frequency Distribution
4.212 Grouped Distribution
4.213 Continuous Distribution
4.22 Measures of Dispersion
4.221
4.222
4.23 Measures of Correlation
4.3 Propagation of Errors
4.31 Observational Errors
4.32 Laws of Expectation
4.33
4.331
4.332 Taylor's Series Expansion for Non-linear Function
4.333
PART II
CHAPTER 5
5.1 Introduction
5.2
5.3
5.4
5.5
5.6
5.7
CHAPTER 6
6.1 Introduction
6.2 Non-linear Model
6.3 Partitioned Model
6.4
6.5
CHAPTER 7
CONDITION EQUATIONS
7.1 Mixed Model
7.2
7.3
7.4
PART III
CHAPTER 8
ADJUSTMENT
8.1
8.2
8.3
8.4
CHAPTER 9
9.1 Introduction
9.2
PART IV
CHAPTER 10
10.12
10.13
CHAPTER 11
UNIVARIATE INTERVAL ESTIMATION AND HYPOTHESIS TESTING
11.1 Introduction................................................................................
11.2 Univariate Interval Estimation.....................................................
11.21
11.22
11.23
11.24
11.25
Testing of Hypothesis on the Mean: Single Population
11.32
11.33
11.34
11.35
Test of Hypothesis on the Variance: Single Population
11.36
11.37
11.38 Goodness of Fit
Exercise 11
CHAPTER 12
MULTIVARIATE INTERVAL ESTIMATION AND HYPOTHESIS TESTING
12.1 Multivariate Confidence Interval.................................................
VᵀPV
12.11
12.12
12.13
VᵀPVs
1.
2.
12.21 Hypothesis Testing on
12.22
3.
12.222 Mean Vectors
12.23
12.24
PART IV
CHAPTER 13
13.22
13.42
13.512
13.53 Discussion of Results
13.62
13.63 Condition Equations
13.82 Observation Equations
CHAPTER 14
APPLICATION IN PHOTOGRAMMETRY
14.1 Introduction................................................................................
14.2 Perspective Centre Determination...............................................
14.21 Affine Transformation
14.22
14.3 Relative Orientation
14.31
14.312 Numerical Relative Orientation using element
14.32 Collinearity Equations
14.322 Coplanarity Equations
14.4 Absolute Orientation
14.5
14.6
14.7
14.8
14.9
14.10
14.11
14.12 Simultaneous Adjustment of Photogrammetric and Geodetic Observation (SAPGO)
Exercise 14
Table A2:
Table A3:
Table A4:
PREFACE
This book is intended primarily to provide a text on adjustment
computations and statistical data analysis, suitable for geologists,
chemists, physicists, and biologists in their professional practice.
The derivations of important results in the method of least squares
presented in this book are based on matrix algebra. Chapter two on
matrices provides the necessary background to these derivations. It is the
author's belief that analytical methods are best handled by matrix algebra
if the advantages of digital computers are to be maximised. Another
advantage of the matrix approach is its conciseness, which enables the
student to understand what is happening at every stage of a derivation or
computational procedure.
A survey of both iterative and direct methods for solving linear
systems of equations is presented in chapter 3. One of the traditional
procedures of the least squares method is the formation of a system of
linear equations (normal equations) by minimizing the sum of squares of
residuals (or weighted residuals). If one is faced with a large system of
normal equations, their solution becomes a formidable task in terms of
speed, accuracy, cost and storage requirements on the computer. Chapter 3
provides a variety of alternative methods for solving normal systems of
equations.
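The normal-equations step described above can be sketched with a small invented example: fitting a line y = a + b·t to four fabricated data points, forming N = AᵀA and u = Aᵀy with unit weights, and solving the resulting 2×2 system by Cramer's rule (one of the methods surveyed in chapter 3). The data values and variable names here are illustrative only.

```python
# Illustrative sketch (all data invented): forming and solving the normal
# equations N x = u for a least-squares fit of a line y = a + b*t.
t = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 1.9, 3.2, 3.8]

# Design matrix A has rows [1, t_i]; with unit weights the normal
# equations are N = A^T A and u = A^T y.
n = len(t)
N = [[n, sum(t)],
     [sum(t), sum(ti * ti for ti in t)]]
u = [sum(y), sum(ti * yi for ti, yi in zip(t, y))]

# Solve the 2x2 normal system by Cramer's rule.
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
a = (u[0] * N[1][1] - N[0][1] * u[1]) / det
b = (N[0][0] * u[1] - u[0] * N[1][0]) / det
# a is the intercept and b the slope of the fitted line
```

For larger systems, the direct elimination and partitioning methods of chapter 3 replace the Cramer's-rule step, but the formation of N and u is the same.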
Chapter four briefly reviews four important aspects of adjustment
calculus - basic concepts, error propagation, Taylor's series expansion and
ACKNOWLEDGEMENTS
The author wishes to express his sincere appreciation to various
individuals and organizations who have contributed directly or indirectly to
this book. Special thanks are due to Dr. F.A. Fajemirokun and Dr. F.O.A.
Egberongbe of Lagos University who reviewed the manuscript and made
several suggestions. The author is particularly grateful to Prof. G. Obenson
also of Lagos University who first instructed him on the subject of least
squares and who gave him the initial encouragement when the idea of this
book was proposed. The author is equally indebted to Prof. U. Uotila of
the Ohio State University who taught him many profound concepts in least
squares theory. Special appreciation is also expressed for the inspiration
received from other professors at Ohio State who taught the author various
applications of least squares principles: Prof. R. Rapp, Prof. I. Mueller, Prof.
D.C. Merchant and Prof. S.K. Ghosh (now of Laval University). Grateful
thanks are also due to Prof. J.S. Rustagi, the author's statistics teacher at
Ohio State.
Some of the examples and exercises in this book are based on
laboratory assignments given to students at Ohio State and at Lagos
University. Sincere appreciation must therefore be expressed for the
knowledge gained from my laboratory instructors at Ohio State - Dr. (Major)
Spinski and Dr. Rampal. Credit must also be given to my students at Lagos
University, particularly Messrs Owolabi (P.G. Student), Oyinloye and Olaleye,
for their assistance. The author has also received
inspiration from the published work of several authors too numerous to
mention. Special mention must however be made of the following: Mr. G.W.
Schut of the National Research Council, Canada; Prof. P.R. Wolf of the
University of Wisconsin; Prof. E.M. Mikhail of Purdue University; Prof. F.
Ackermann of Stuttgart University; Profs. Karara and K.W. Wong of the
University of Illinois; the late Prof. H. Thompson of University College,
London; Prof. W. Faig of the University of New Brunswick; Prof. S.A. Veress
of the University of Washington, Seattle; Prof. K. Torlegård of the Royal
Institute of Technology, Stockholm; Mr. Duane Brown, Adjunct Professor at
the Ohio State University; Prof. A.J. Brandenberger of Laval University,
Quebec; and Prof.
CHAPTER 1
INTRODUCTION
1.1
2.
3.
* Sample mean is the average of a sample taken from a population (see section 4.2)
4.

The sample mean x̄, which has the sum of squares of residuals

    Σ_{i=1}^{n} V_i²,    where V_i = x_i − x̄,

at a minimum, may be found by setting the derivative with respect to x̄
equal to zero:

    d/dx̄ Σ_{i=1}^{n} (x_i − x̄)² = −2 Σ_{i=1}^{n} (x_i − x̄) = 0

therefore

    Σ_{i=1}^{n} x_i − n x̄ = 0

    x̄ = (1/n) Σ_{i=1}^{n} x_i        (1)
Equation (1) is the familiar formula for computing a sample mean. The
above derivation proves that the sample mean minimizes the sum of
squares of the residuals V_i, which is the minimum variance property. The
sample mean is therefore the best estimate of the population mean (μ).
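The minimum-variance property just derived can be checked numerically; the sketch below (with arbitrary invented observations) verifies that the sum of squared residuals evaluated at the sample mean is no larger than at nearby trial values.

```python
# Numerical check (observations invented) that the sample mean minimises
# the sum of squares of the residuals V_i = x_i - c over trial values c.
def sum_sq_residuals(xs, c):
    """Sum of squared residuals of the observations xs about a trial value c."""
    return sum((x - c) ** 2 for x in xs)

xs = [2.0, 3.5, 4.0, 5.5, 7.0]
mean = sum(xs) / len(xs)          # equation (1): x_bar = (1/n) * sum of x_i

# The sum of squares at the mean is no larger than at nearby trial values.
trials = [mean + d for d in (-1.0, -0.1, -0.01, 0.01, 0.1, 1.0)]
assert all(sum_sq_residuals(xs, mean) <= sum_sq_residuals(xs, c) for c in trials)
```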
The sample mean (x̄) is an unbiased estimate of the population
mean (μ) if

    E(x̄) = μ        (2)

This follows since

    E(x̄) = E[(1/n) Σ_{i=1}^{n} x_i] = (1/n) Σ_{i=1}^{n} E(x_i) = (1/n) Σ_{i=1}^{n} μ = μ
The minimum is unique, since there exists only one possible solution to the
partials leading to eqn. (1). It is important to note that no form of statistical
distribution is imposed on the observations in this derivation; a distributional
assumption is needed only for classical statistical tests, which in most cases
are based on the properties of the normal distribution, in order to obtain,
statistically speaking, valid results.
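The unbiasedness property E(x̄) = μ can likewise be illustrated by simulation; the sketch below (population parameters invented) averages many sample means drawn from a normal population and checks that the result lies close to μ. The normal distribution here is only a convenient choice for the simulation, not a requirement of the property.

```python
# Simulation sketch (population parameters invented): the average of many
# sample means lies close to the population mean mu, consistent with the
# unbiasedness property E(x_bar) = mu.
import random

random.seed(1)                      # fixed seed so the sketch is repeatable
mu, sigma, n = 10.0, 2.0, 5         # hypothetical population and sample size

def sample_mean(n):
    """Mean of one random sample of size n from the population."""
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

means = [sample_mean(n) for _ in range(5000)]
avg_of_means = sum(means) / len(means)
assert abs(avg_of_means - mu) < 0.1  # close to mu over many replicates
```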
1.2
squares which allows the "state vector" (vector of parameters) to vary with
time. Karup [1969] and Moritz [1972] developed the method of least
squares collocation, which permits the introduction of two random
components in the observations - the signal and the noise (residuals).
Bjerhammar [1973] has developed a completely generalised model for
least squares, of which the Kalman filter and collocation may be regarded
as special cases. Krakiwsky [1975] has published a synthesis of recent
advances in the method of least squares, which includes a review of Kalman
filtering, Bayes filtering, the Tienstra phase, the sequential approach and
collocation.
EXERCISE 1
1.
2.
3.
4.
5.