Salgado-Castro88v 2
by
Ruben O. Salgado-Castro
VOLUME TWO
**********
APPENDICES
November, 1988.
TABLE OF CONTENTS
VOLUME TWO
**********
List of Figures
List of Tables
List of main Symbols vii
APPENDIX A
APPENDIX B
APPENDIX C
APPENDIX D
FIGURE PAGE
Fig. A.1. Different linear systems of equations A-4
Fig. A.2. Arrow shaped linear system A-63
Fig. A.3. Cholesky factor of arrow shaped matrix
(without reordering) A-63
Fig. A.4. Reordered arrow shaped linear system A-63
Fig. A.5. Cholesky factor of reordered arrow-shaped
linear system A-64
Fig. A.6. Undirected graph corresponding to symmetric
matrix given by equation (98) A-65
Fig. A.7. Permutation matrix for interchanging rows 3
and 4 A-66
Fig. A.8. Reordered matrix [eq.(98)] and its
associated graph A-66
Fig. A.9. Diagonal storage scheme for banded matrices A-69
Fig. A.10. Minimum band ordering for arrow shaped
linear system A-70
Fig. A.11. Two examples of Cuthill-McKee orderings for
reducing bandwidth, from George (1981) A-71
Fig. A.12. Skyline representation A-72
Fig. A.13. Graph for level structure example, from
George and Liu (1981) A-74
Fig. A.14. Level structure rooted at x5, for graph
shown in Fig. A.13, from George and Liu
(1981) A-74
Fig. A.15. Algorithm for finding a pseudo-peripheral
node, from George and Liu (1981) A-76
Fig. A.16. Example of application of minimum degree
algorithm, from George (1981) A-81
Fig. A.17. Elimination graphs representing the process
of creation of fill-in A-82
Fig. A.18. The filled graph of F = L + L T A-83
Fig. A.19. Filled (reordered) matrix A-83
Fig. A.20. Sequence of quotient graphs r i used for
finding the reachable set of nodes in the
elimination process A-87
Fig. A.21. An example of a monotonely ordered tree
(rooted at node 8) A-96
Fig. A.22. Matrix corresponding to the monotonely
ordered tree shown in Fig. A.21 A-97
Fig. A.23. Network example for quotient tree methods A-98
Fig. A.24. Quotient tree corresponding to the tree of
Fig. A.23 A-100
Fig. A.25. Matrix corresponding to the quotient tree
of Fig. A.24 A-100
Fig. A.26. Level structure rooted at node "a" and its
corresponding quotient tree A-101
Fig. A.27. Matrix corresponding to refined quotient
tree of Fig. A.26. b) A-102
Fig. A.28. Level structure rooted at node "t" and its
corresponding quotient tree A-103
Fig. A.29. Matrix corresponding to refined quotient
tree of Fig. A.28. b) A-103
Fig. A.30. Rectangular grid partitioned with 2
dissectors A-105
iii
FIGURE PAGE
Fig. A.31. Example of a 40 nodes rectangular shaped
graph, partitioned using one-way dissection A-106
Fig. A.32. Matrix structure corresponding to graph of
Fig. A.31 A-107
Fig. A.33. Example of a 40 nodes rectangular-shaped
graph, partitioned using nested dissection A-109
Fig. A.34. Matrix structure corresponding to graph of
Fig. A.33 A-109
Fig. A.35. Finite element definition and ordering for a
frontal solution scheme, from Livesley
(1983) A-112
Fig. A.36. The elimination sequence in a frontal
solution scheme, from Livesley (1983) A-113
Fig. A.37. Geometric interpretation of the convergence
of the conjugate gradient algorithm for n=2
and n=3, from Gambolatti and Perdon (1984),
solving A h = b A-134
Fig. B.1. Estimated semi-variogram B-9
Fig. B.2. Nugget effect and corresponding true semi-
variogram B-13
Fig. B.3. Relationship between covariance and semi-
variogram B-17
Fig. C.1. Representation of a B-spline C-4
Fig. C.2. B-splines involved in the spline function
for the interval ( Xi_i,Xi) C-5
Fig. C.3. Extended set of B-splines: interior and
exterior knots C-6
Fig. C.4. Rectangular subspace R for the independent
variables x and y, divided into panels RiA
by knots Xi i=1,2,...h+1 and Pi j=1,2,...k+1 C-15
LIST OF TABLES
TABLE PAGE
Table A.1. Comparison of the algorithms used in the
direct solution of dense linear systems of
equations A-46
Table B.1. Computation of the experimental semi-
variogram B-8
Table B.2. Basic analytical models for the experimental
semi-variogram B-14
Table B.3. Possible polynomial models for generalised
covariances, from Delhomme (1978) B-37
Table D.1. Summary of the comparison between true,
initial and calibrated piezometric heads.
Example A D-1
Table D.2. Summary of the comparison between true,
initial and calibrated piezometric heads.
Example B D-2
Table D.3. Summary of the comparison between true,
initial and calibrated piezometric heads.
Example C D-3
Table D.4. Summary of the comparison between true,
initial and calibrated piezometric heads.
Example C D-4
Table D.5. Summary of the comparison between true,
initial and calibrated piezometric heads.
Example E D-5
Table D.6. Summary of the comparison between true,
initial and calibrated piezometric heads.
Example F D-6
Table D.7. Summary of the performance of the
calibrated heads. Example A D-7
Table D.8. Summary of the performance of the
calibrated heads. Example B D-8
Table D.9. Summary of the performance of the
calibrated heads. Example C D-9
Table D.10. Summary of the performance of the
calibrated heads. Example D D-10
Table D.11. Summary of the performance of the
calibrated heads. Example E D-11
Table D.12. Summary of the performance of the
calibrated heads. Example F D-12
Table D.13. Summary of the comparison between true,
initial and calibrated flows. Example A D-13
Table D.14. Summary of the comparison between true,
initial and calibrated flows. Example B .. D-14
Table D.15. Summary of the comparison between true,
initial and calibrated flows. Example C D-15
Table D.16. Summary of the comparison between true,
initial and calibrated flows. Example D D-16
Table D.17. Summary of the comparison between true,
initial and calibrated flows. Example E D-17
Table D.18. Summary of the comparison between true,
initial and calibrated flows. Example F D-18
Table D.19. Summary of the performance of the
calibrated flows. Example A D-19
TABLE PAGE
Table D.20. Summary of the performance of the
calibrated flows. Example B D-20
Table D.21. Summary of the performance of the
calibrated flows. Example C D-21
Table D.22. Summary of the performance of the
calibrated flows. Example D D-22
Table D.23. Summary of the performance of the
calibrated flows. Example E D-23
Table D.24. Summary of the performance of the
calibrated flows. Example F D-24
Table D.25. Summary of the comparison between true,
initial and calibrated C's. Example A D-25
Table D.26. Summary of the comparison between true,
initial and calibrated C's. Example B D-26
Table D.27. Summary of the comparison between true,
initial and calibrated C's. Example C D-27
Table D.28. Summary of the comparison between true,
initial and calibrated C's. Example D D-28
Table D.29. Summary of the comparison between true,
initial and calibrated C's. Example E D-29
Table D.30. Summary of the comparison between true,
initial and calibrated C's. Example F D-30
Table D.31. Summary of the performance of the
calibrated C's. Example A D-31
Table D.32. Summary of the performance of the
calibrated C's. Example B D-32
Table D.33. Summary of the performance of the
calibrated C's. Example C D-33
Table D.34. Summary of the performance of the
calibrated C's. Example D D-34
Table D.35. Summary of the performance of the
calibrated C's. Example E D-35
Table D.36. Summary of the performance of the
calibrated C's. Example F D-36
LIST OF MAIN SYMBOLS
APPENDIX A
Symbol Description
<x,y> = xT y = yT x = Σ_{i=1}^{n} xi yi
APPENDIX B
Symbol Description
Simplified notation:
Cov(Z(x1,H),Z(x2,H)) : Covariance

Y0* = Y*(x0) = Σ_{i=1}^{n} λ0i Yi
APPENDIX C
Symbol Description
s(x) = Σ_{i=1}^{h+4} ri Mi(x)
Bi-cubic splines:
Symbol Description
sub-divided.
s(x,y) = Σ_{i=1}^{h+4} Σ_{j=1}^{k+4} rij Mi(x) Nj(y)
(the corresponding observation matrix is of order m x (h+4)(k+4)).
A.1. Introduction.
network analysis.
features of the linear systems generated in network analysis are
needed. In the context of the gradient network analysis method,
the matrices produced are sparse (i.e. with very few non-zero
entries). Let us consider the unknown values xi's in
the following system of "m" linear equations:
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
....                                                        (1)
am1 x1 + am2 x2 + am3 x3 + ... + amn xn = bm
where the coefficients aij and the right hand side terms bi are
A-2
to represent vectors and greek letters to denote scalar numbers.
In some instances, when the use of lower and upper case letters
to differentiate vectors from matrices is not possible ( or when
letters will represent scalars. Thus, the linear system (1) can be written as:
A x = b (2)
where:
      | a11  a12  a13  ....  a1n |
A  =  | a21  a22  a23  ....  a2n |
      | ...                      |
      | am1  am2  am3  ....  amn |
A : is the matrix of coefficients and it has "m" rows and
"n" columns, i.e. it is an (mxn) matrix.
x = ( x1, x2, x3, ..., xn )T
less equations than unknowns and the system has, in general, more
A-3
than one "exact" solution, by which we mean that the equality in
(1) or (2) holds.
just one equation in the unknowns x1 and x2. The solution points
a) Homogeneous case: where the right hand side vector is null (i.e. b = 0, or bi = 0, i = 1, 2, ..., n).
[Fig. A.1. Different linear systems of equations; panels include: c) overdetermined system of linear equations, d) determined system of linear equations.]
b) Non-homogeneous case: where some bi ≠ 0 (at least one of
A21 = q (3)
where:
A-5
We shall not study in more detail the solution of this kind of
Li, L2 and L3, where it is clear that no single point can satisfy
solution that minimizes the difference between the RHS vector and
the product A x. In other words we can minimize the residual
vector:
r = A x - b                                                        (4)

min ||r||2² = rT r = Σ_{i=1}^{m} ri² = Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij xj - bi )²        (5)
this problem, but we shall not include them here; for details see
for example Duff and Reid (1976), Golub and van Loan (1983).
A-6
In practice, as before, it is not common to have to solve
because the matrices are not square matrices, their inverse does
inverse" is used instead; see Gelb (1974), Rao (1962) and Penrose
(1955) for details.
A-7
In the context of our small example in a 2-dimensional space,
parallel.
A-8
[ see Duff, (1980)].
is:
Solve : Ax = b (6)
where:
A-1 A x = A-1 b
or
x = A-1 b (7)
A-9
As we shall see, looking at the most computationally efficient
A-10
one.
such that:
A x (k) b (8)
In this iterative context, we can define a residual vector
such that:
r = b - A x (k) (9)
and the iterative procedure for solving the linear system has
norm for the residual vector, which can be any one of a variety
A-11
shall review most of these methods in some detail in the next
sections.
available. Thus, it has been accepted for a long time that, for
sparse matrices, iterative methods are the best ones, both from
A-12
A.2. Review of direct methods for the solution of general dense
linear systems of equations.
i) Introduction.
T x = b (14)
where:
T : is a triangular matrix of coefficients, i.e.: only the
T = U (15)
then, the problem (14) can be represented as:
U x = b (16)
The system (16) is then of the form:
u11 x1 + u12 x2 + u13 x3 + .... + u1n xn = b1
         u22 x2 + u23 x3 + .... + u2n xn = b2
                  u33 x3 + .... + u3n xn = b3        (17)
                          ....
                                 unn xn = bn
(17), the solution for the last component of "x" can be obtained
in a direct manner:
xn = bn / unn                                                      (18)
Following the same idea, the (n-1)th component of x (i.e. xn-1)
can be computed directly from the (n-1)th equation of (17),
xn-1 = ( bn-1 - un-1,n xn ) / un-1,n-1                             (19)
can eliminate it from the rest of the system; this is done simply
The same is valid for xn_i, after computing it via equation (19).
L x = b (21)
and, instead of back-substitution, forward substitution can be
xl, followed by x2, x3, .... etc. up to xn. The equation which
A-14
defines the forward substitution algorithm becomes:
xk = ( bk - Σ_{j=1}^{k-1} lkj xj ) / lkk                           (22)
where:
is the coefficient corresponding to row i and column j
of L.
equal to:
, (either ukk or lkk) are all non-zero, and that, in this case, the
solution is unique.
substitution process.
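As a simple illustration, a minimal sketch of the forward- and back-substitution formulas [equations (18), (19) and (22)] is given below in Python; the function and variable names are our own and the code is an outline only, not the implementation used in this work.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b, with L lower triangular [equation (22)]."""
    n = len(b)
    x = np.zeros(n)
    for k in range(n):
        # subtract the contributions of the components already computed
        x[k] = (b[k] - L[k, :k] @ x[:k]) / L[k, k]
    return x

def back_substitution(U, b):
    """Solve U x = b, with U upper triangular [equations (18)-(19)]."""
    n = len(b)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - U[k, k + 1:] @ x[k + 1:]) / U[k, k]
    return x
```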
Hence, to solve a linear system where A is a general matrix,
step.
(1983)]:
From now on, and for simplicity, we shall assume that we want
A-16
leaving a zero in) all the positions under the diagonal elements.
 2 x1 + 3 x2 -   x3 = 5
 4 x1 + 8 x2 - 3 x3 = 3                                            (24)
-2 x1 + 3 x2 -   x3 = 1
operations we get:
 2 x1 + 3 x2 -   x3 = 5
        2 x2 -   x3 = -7                                           (25)
        6 x2 - 2 x3 = 6
 2 x1 + 3 x2 -   x3 = 5
        2 x2 -   x3 = -7                                           (26)
                 x3 = 27
x3 = 27
x2 = 10
x1 = 1
This is the basic algorithm for Gaussian elimination, which can
be easily implemented in a computer. The only probable cause of
failure is a zero (or very small) pivot element (at any stage of the elimination); in that case the
when A is a general matrix, there are two main strategies for the
shall choose the j-th equation as the new pivotal equation if and
permutation between rows "k" and "j" and carry on with the
A-18
sequence:
Original system:
 2 x1 + 3 x2 -   x3 = 5
 4 x1 + 8 x2 - 3 x3 = 3                                            (24)
-2 x1 + 3 x2 -   x3 = 1
finding out that the second row has the biggest (absolute value)
element. Then, the first and second rows are permuted, leading
to:
 4 x1 + 8 x2 - 3 x3 = 3
 2 x1 + 3 x2 -   x3 = 5                                            (27)
-2 x1 + 3 x2 -   x3 = 1
 4 x1 + 8 x2 - 3 x3 = 3
      -   x2 + 1/2 x3 = 7/2                                        (28)
        7 x2 - 5/2 x3 = 5/2
the system (28); we have to choose between the second and third
rows, and we find that the third row is the new pivotal equation.
 4 x1 + 8 x2 - 3 x3 = 3
        7 x2 - 5/2 x3 = 5/2                                        (29)
      -   x2 + 1/2 x3 = 7/2
Eliminating the term under the pivot (7), we get:
 4 x1 + 8 x2 - 3 x3 = 3
        7 x2 - 5/2 x3 = 5/2                                        (30)
              2/14 x3 = 54/14
Hence, the solution obtained via back-substitution becomes:
x3 = 27
x2 = 10
x1 = 1
as before.
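The elimination with partial pivoting just carried out by hand can be sketched in a few lines of Python; the following outline (our own code, not part of the original text) reproduces the solution of example (24).

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting followed by back-substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # pivotal row: largest absolute value in column k, among rows k..n-1
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:                       # permute rows k and p
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):        # eliminate the terms under the pivot
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):       # back-substitution
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x

# the 3x3 example of equation (24)
A = np.array([[2., 3., -1.], [4., 8., -3.], [-2., 3., -1.]])
b = np.array([5., 3., 1.])
print(gauss_solve(A, b))   # -> [ 1. 10. 27.]
```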
Thus, we have now two degrees of freedom for choosing the pivot,
and the search of the pivot will be carried out looking both at
the columns and rows to the left and under the last pivot.
When choosing the first pivot, we find out that the maximum
(absolute value) element lies in the second row and second column (8); then, we permute both the first and second rows and columns, leading to:
 8 x2 + 4 x1 - 3 x3 = 3
 3 x2 + 2 x1 -   x3 = 5                                            (31)
 3 x2 - 2 x1 -   x3 = 1
 8 x2 + 4 x1 - 3 x3 = 3
      1/2 x1 + 1/8 x3 = 31/8                                       (32)
      -28 x1 +     x3 = -1
second and third rows and we find that the new pivot is -28
(third row, variable x1), then we have to permute the second with
 8 x2 + 4 x1 - 3 x3 = 3
      -28 x1 +     x3 = -1                                         (33)
      1/2 x1 + 1/8 x3 = 31/8
 8 x2 + 4 x1 - 3 x3 = 3
      -28 x1 +     x3 = -1                                         (34)
              8/56 x3 = 216/56
x3 = 27
x2 = 10
x1 = 1
as before.
- Number of additions : (n³ - n)/3
- Number of multiplications : (n³ - n)/3                           (35)
multiplications/divisions:
n³/3 + n²/2 - 5n/6                                                 (36)
of work:
Stage            Additions        Multiplications/divisions
Factorization    (n³ - n)/3       n³/3 + n²/2 - 5n/6
Substitution     n²/2 - n/2       n²/2 + n/2
Loan (1983)].
A-22
Since the triangular matrix produced by Gaussian elimination
required is:
n2 /2 locations.
the amount of work and the storage needed . Since most of the
A-23
required.
iv) The product form of Gaussian elimination.
eliminating the terms under the diagonal of the j-th column, the
Tj = identity matrix, except for column j below the diagonal          (38)
     (all diagonal elements = 1, zeros elsewhere, non-zero values
      only in the positions (j+1,j), (j+2,j), ..., (n,j))

where the non-zero entries of column j are the multipliers:

     -a(j+1,j)/a(j,j) , -a(j+2,j)/a(j,j) , ... , -a(n,j)/a(j,j)        (39)
Note that the values of the coefficients aij are those of the
1st step :  T1 A x = T1 b
       or   A1 x = T1 b
       with A1 = T1 A
2nd step :  T2 A1 x = T2 T1 b
       or   A2 x = T2 T1 b
       with A2 = T2 A1 = T2 T1 A
....
nth step :  Tn An-1 x = Tn Tn-1 .... T2 T1 b          (40)
       or   An x = Tn Tn-1 .... T2 T1 b               (41)
       with An = Tn Tn-1 .... T2 T1 A                 (42)
After the n-th step, An must be an upper triangular matrix,
and the system (41) can be solved by back-substitution.
 2 x1 + 3 x2 -   x3 = 5
 4 x1 + 8 x2 - 3 x3 = 3                                            (24)
-2 x1 + 3 x2 -   x3 = 1
The sequence of transformation matrices is:
     |  1  0  0 |                    | 2  3  -1 |
T1 = | -2  1  0 |  -->  A1 = T1 A =  | 0  2  -1 |
     |  1  0  1 |                    | 0  6  -2 |

     |  1  0  0 |                       | 2  3  -1 |
T2 = |  0  1  0 |  -->  A2 = T2 T1 A =  | 0  2  -1 |
     |  0 -3  1 |                       | 0  0   1 |
Note that the matrices Tj are highly sparse and that they can
be handled implicitly with a vector of length "n".
A x = b
A y = c
A z = d

An = Tn Tn-1 .... T2 T1 A

An x = b*
An y = c*
An z = d*
with:
b* = Tn Tn-1 .... T2 T1 b
c* = Tn Tn-1 .... T2 T1 c
and d* = Tn Tn-1 .... T2 T1 d
Hence, in this case, we can compute and store a transformation
matrix:
many times as different right hand side vectors are given to us.
As can be seen in our small example, the matrix T takes the form:
    |  1  0  0 |
T = | -2  1  0 |  = T2 T1     (note that T3 = I)
    |  7 -3  1 |

Hence, we need to solve "n" times the same linear system, with

            |  1  0  0 |
L = T⁻¹  =  |  2  1  0 |
            | -1  3  1 |
and
    | 2  3  -1 |
U = | 0  2  -1 |
    | 0  0   1 |
sections.
i) Introduction.
A-28
ii) The product form of the inverse.
Tj* = identity matrix, except for the j-th column                      (50)
      (all diagonal elements = 1; zeros everywhere except the j-th
       column and the diagonal)

where the j-th column of Tj* is:

      ( -a1,j/aj,j , -a2,j/aj,j , ... , 1 , ... , -an-1,j/aj,j , -an,j/aj,j )T     (51)
Equations (50) and (51) are for Gauss-Jordan, as equations (38)
and (39) are for Gaussian elimination.
Original system: A x = b
A-30
transformation matrices of the form of Tj*, which are very
sparse.
- Number of divisions : n
i) Introduction.
determined.
A-31
The common idea behind any factorization strategy is that
instead of having to solve the original system:
A x = b (57)
if we factorize A as:
A = L U (58)
where:
a) Forward substitution:
solve L y = b (61)
determining "y", and
A = L U (62)
A-32
or:
| a11  a12 .... a1n |   | 1                |   | u11  u12 .... u1n |
| a21  a22 .... a2n | = | l21   1          | . |      u22 .... u2n |     (63)
| ...               |   | ...       ...    |   |           ....    |
| an1  an2 .... ann |   | ln1  ln2  ...  1 |   |               unn |
where:
A : is a general unsymmetric matrix of coefficients.
needed, i.e.:
the first row of U and identifying the result with the first
column of A, we get:
u11 = a11                                                          (65)
then,
li1 = ai1 / u11          ∀ i = 2, 3, ..., n                        (66)
multiplying the second row of L by the columns of U and
identify ing the result with the second row of A, to obtain:
- Number of divisions : n
- Number of additions : n³/3 - n²/2 + n/6
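The L U factorization just described can be sketched as follows (a minimal Python outline in Doolittle form, with unit diagonal in L and no pivoting; as noted for Gaussian elimination, pivoting would be needed for stability on a general matrix):

```python
import numpy as np

def lu_doolittle(A):
    """L U factorization with unit diagonal in L, no pivoting [equation (63)]."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # k-th row of U, then k-th column of L
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U
```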
iii) L L T factorization.
A-34
decomposition:
A =LU=L L T (69)
before:
- Solve L y = b for y
- Solve L T x = y for x
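A corresponding sketch of the L Lᵀ (Cholesky) factorization and the two triangular solves follows; again this is an outline of our own, not the implementation used in this work.

```python
import numpy as np

def cholesky(A):
    """L L^T factorization of a symmetric positive definite matrix [equation (69)]."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            if i == j:
                L[i, j] = np.sqrt(s)   # fails if A is not positive definite
            else:
                L[i, j] = s / L[j, j]
    return L

# solving A x = b then amounts to the two triangular solves:
#   L y = b   (forward substitution),   L^T x = y   (back-substitution)
```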
iv) L D U factorization.
A = LDU (70)
or:
| a11  a12 .... a1n |   | 1               |   | d11          |   | 1  u12 .... u1n |
| a21  a22 .... a2n | = | l21  1          | . |     d22      | . |    1   .... u2n |   (71)
| ...               |   | ...      ...    |   |         ...  |   |        ....     |
| an1  an2 .... ann |   | ln1  ln2 ...  1 |   |          dnn |   |              1  |
where:
A : is a general non-symmetric matrix of coefficients.
L : is a lower triangular matrix whose diagonal elements are
all 1.
D : diagonal matrix
U : is an upper triangular matrix whose diagonal elements
are all 1.
A x = b (72)
we are solving:
LDUx= b (73)
and in this case the solution involves a three stage process:
a) Forward substitution:
solve L z = b (74)
for z, an auxiliary vector
b) Diagonal inversion:
solve D y = z (75)
for y, by computing directly
y = D -1 z (76)
this stage demands only n divisions, due to the fact that D
is diagonal.
c) Back-substitution:
solve U x = y (77)
for x, the original unknown vector.
A-36
More details can be found for example in Golub and Van Loan
(1983), algorithm 5.1.1.
v) L D L T factorization.
A-37
the matrix   A = | E  1 |
                 | 1  E |

has the factors

     L = |  1    0 |      D = | E      0    |      LT = | 1  1/E |
         | 1/E   1 |          | 0   E - 1/E |           | 0   1  |

hence, for E << 1 the elements of the factors can become very
large.
are strictly positive [see theorem 5.2.1, Golub and Van Loan
be developed:
A = L D L T (80)
that is to say:
a ll a 12 aln
a21 a22 a2n
where:
A-38
a ij = aji, since the matrix A is symmetric, and
product of L D as:
L* = L D (82)
then,
       | l11 d11                                     |
L*  =  | l21 d11   l22 d22                           |        (83)
       | l31 d11   l32 d22   l33 d33                 |
       | ....                                        |
       | ln1 d11   ln2 d22   ln3 d33  ....  lnn dnn  |

| a11 a12 .... a1n |   | l11 d11                            |   | l11  l21 .... ln1 |
| a21 a22 .... a2n | = | l21 d11  l22 d22                   | . |      l22 .... ln2 |   (84)
| ....             |   | ....                               |   |           ....    |
| an1 an2 .... ann |   | ln1 d11  ln2 d22  ....  lnn dnn    |   |               lnn |
lji = aij - Σ_{k=1}^{i-1} lik ljk dkk                              (86)
(at the beginning j = i = 1 and lji = aij, hence l11 = a11)
Step 2 : Compute
dii = (lii)⁻¹                                                      (87)
[this is due to equation (85)]
Step 3 : Compute
Solve   | 1  1  3 |   | x1 |   | 6 |
        | 1  1  0 | . | x2 | = | 3 |
        | 3  0  3 |   | x3 |   | 6 |

l31 = 3
l22 = a22 - l21² d11 = 1 - 1 = 0
d22 = 1/0 not defined !!, so the factorization is not possible.
Solve   | 2  2  1 |   | x1 |
        | 2  5  2 | . | x2 | = b
        | 1  2  1 |   | x3 |

| 2  2  1 |   | 2  0   0  |   | 1/2  0   0 |   | 2  2   1  |
| 2  5  2 | = | 2  3   0  | . |  0  1/3  0 | . | 0  3   1  |
| 1  2  1 |   | 1  1  1/6 |   |  0   0   6 |   | 0  0  1/6 |
     A            L D             D⁻¹            (L D)ᵀ
factorization).
A = L D LT
where:
D^(1/2) = diag( d11^(1/2), d22^(1/2), ..., dnn^(1/2) )             (89)
1.
A = R RT (90)
where:
R = L D^(1/2)
and Rᵀ = D^(1/2) Lᵀ
which is exactly the same L L T factorization we obtained
before.
The algorithm can be derived from the fact that equation (90)
is of the form:
following the same pattern; see for example Golub and van Loan
terms under the square root become negative and the algorithm
Solve
A-42
21 21 03 1 . Hx21 1 = [ 31
6
which implies that the terms rik cannot grow indefinitely (like
A-43
A = Q U (92)
where:
Q is orthogonal
imagined.
A-44
orthog onal factors get a great deal of fill-in. Because of these
reasons, orthogonal methods are not widely used, their use being
restricted to solve ill-conditioned problems.
same policy in this comparison. This is due to the fact that most
comparison.
A-45
Table A.1. Comparison of the algorithms used in the direct
solution of dense linear systems of equations.
Algorithm            Type of       Multiplications        Additions              Storage   Observations
                     matrix        and divisions
---------------------------------------------------------------------------------------------------------
Substitution         Triangular    n²/2 + n/2             n²/2 - n/2
(back or forward)

Gaussian             Unsymmetric   n³/3 + n²/2 - 5n/6     (n³ - n)/3             n²/2      Pivoting and permutations
elimination                                                                                needed for stability.

Gauss elimin.        Unsymmetric   n³/3 + n² - n/3        n³/3 + n²/2 - 5n/6     n²/2      Idem
+ substitution

Gauss-Jordan         Unsymmetric   n³/2 + n² - n/2        (n³ - n)/2             n²/2      Idem
elimination

L U                  Unsymmetric   (n³ - n)/3             n³/3 - n²/2 + n/6      n²
factorization

L U factoriz. +      Unsymmetric   n³/3 + n² + 2n/3       n³/3 + n²/2 - 5n/6     n²
back+forw. subst.

L Lᵀ                 Symmetric     n³/6 + n²/2            n³/6 + n²/4 - n/2      n²/2      Good from the stability
factorization        Pos. Defin.                                                           point of view.

L Lᵀ factoriz. +     Symmetric     n³/6 + 3n²/2 + n       n³/6 + 5n²/4 - n/2     n²/2
back+forw. subst.    Pos. Defin.

L D U                Unsymmetric   n³/3 + n²/2 + n/6      n³/3 - n²/2 + n/6      n²
factorization

L D U factoriz. +    Unsymmetric   n³/3 + 3n²/2 + 13n/6   n³/3 + n²/2 - 5n/6     n²
back + diag.inv. +
forw. substitut.

L D Lᵀ               Symmetric     n³/6 + n²/2 + n        n³/6 + 3n²/4 - n/2     n²/2
factorization        Pos. Defin.

L D Lᵀ factoriz. +   Symmetric     n³/6 + 2n² + 5n/2      n³/6 + 7n²/4 - 3n/2    n²/2
back + diag.inv. +   Pos. Defin.
forw. substitut.

Explicit             Unsymmetric   n³                     n³ - 2n² + n           n²
inversion

Notes:
(1) The storage figures do not consider the fact that most of the algorithms can overwrite the original matrix.
comparative point of view we can say that:
A x = b (95)
a) Both the coefficients of the matrix A and the right had side
( A + δA ) x = ( b + δb )                                          (96)
where:
same order of b.
A-48
Depending on the sensitivity of the solution for the unknowns
x, either with respect to small changes in the coefficients of A
being singular.
e = x - R                                                          (97)
If we premultiply our computed solution by the matrix A, we
r =b-AR (98)
known).
Then, the residual tells us, in an indirect way, how far is the
(iterative refinement).
r = b - A R = A x - A R                                            (99)
then, factoring out A:
r = A ( x - R ) (100)
which, by equation (97) becomes:
r = A e (101)
A-50
Hence, once we know the residual (r) produced by the computed
value (R), we can compute the error (e) by simply solving the
Rnew = R + e                                                       (103)
system.
precision arithmetic.
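A minimal sketch of this iterative improvement scheme [equations (98)-(103)], written in Python with names of our own choosing; in practice the residual would be accumulated in higher precision and the existing factorization of A reused for each correction.

```python
import numpy as np

def iterative_refinement(A, b, x_hat, n_steps=3):
    """Iterative improvement: r = b - A x, solve A e = r, x <- x + e  [eqs (98)-(103)]."""
    x = x_hat.astype(np.float64).copy()
    for _ in range(n_steps):
        r = b - A @ x                  # residual of the current computed solution
        e = np.linalg.solve(A, r)      # error estimate (here: a fresh solve, for brevity)
        x = x + e                      # corrected solution, equation (103)
    return x
```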
iv) Estimating limits for the error.
e = A -1 r (104)
applied to a matrix:
||A|| = max over x of ( ||A x|| / ||x|| )                          (105)
||A B|| ≤ ||A|| ||B||
and considering equation (104) we get:
||e|| ≤ ||A⁻¹|| ||r||                                              (106)
but, on applying the same to equation (101), we get:
||r|| ≤ ||A|| ||e||                                                (107)
which can be reordered as:
||r|| / ||A|| ≤ ||e||                                              (108)

||r|| / ||A|| ≤ ||e|| ≤ ||A⁻¹|| ||r||                              (109)
From equation (109) we can obtain lower and upper bounds for
residual ( ||r|| / ||b|| ):

( ||b|| / (||A|| ||x||) ) ( ||r|| / ||b|| )  ≤  ||e|| / ||x||  ≤  ||A⁻¹|| ( ||b|| / ||x|| ) ( ||r|| / ||b|| )     (110)
But, from equation (109), for the particular case when R = 0,

||b|| / ||A||  ≤  ||x||  ≤  ||A⁻¹|| ||b||                          (111)

i.e.
||x|| / ||b||  ≤  ||A⁻¹||                                          (112)
and, from the first part of (111):
||b|| / ||x||  ≤  ||A||                                            (113)
( 1 / (||A|| ||A⁻¹||) ) ( ||r|| / ||b|| )  ≤  ||e|| / ||x||  ≤  ||A|| ||A⁻¹|| ( ||r|| / ||b|| )     (114)
Equation (114) defines the lower and upper limits for the relative error in terms of the condition number
of the matrix A:
Cond(A) = ||A|| ||A⁻¹||                                            (115)
For the ideal case of a perfectly conditioned matrix, Cond(A) = 1, and equation (114)
becomes:
||r|| / ||b||  ≤  ||e|| / ||x||  ≤  ||r|| / ||b||                  (116)
which means that, in this case, the relative error is
A-54
v) Effect of the errors in the matrix of coefficients.
Let us assume that the right hand side vector has been
estimated accurately, then we are solving:
( A + δA ) x = b                                                   (117)
where:
Let:
Â = A + δA                                                         (118)
Â R = b                                                            (119)
From equation (95):
x = A⁻¹ b                                                          (120)
x = A⁻¹ Â R                                                        (121)
but
Â = A + Â - A                                                      (122)
x = A⁻¹ ( A + Â - A ) R                                            (123)
or
x = R + A⁻¹ ( Â - A ) R                                            (124)
recalling that δA = Â - A, from (118), and introducing it into
x - R = A⁻¹ ( δA ) R                                               (125)
Then, using the norm property ||A B|| ≤ ||A|| ||B||, we get:
and so, equation (126) becomes:
||x - R|| / ||R||  ≤  ||A⁻¹|| ||A|| ( ||δA|| / ||A|| )             (127)

that is to say:
||x - R|| / ||R||  ≤  Cond(A) ( ||δA|| / ||A|| )                   (128)
the s-th decimal, and if Cond(A) ≈ 10^t, then the solution can be expected to be accurate to only about (s - t) decimals, irrespective of the
algorithm.
An approximate solution R can be improved in its accuracy via
the iterative improvement algorithm, which is not too
systems of equations.
A-57
larger the size of the network, the bigger the relative savings
of computer resources. In addition to that, some sparse matrices
track of the position of all the non-zeros within the full matrix
A-58
symmetric positive definite matrices, we shall look carefully in
a) Coordinate scheme.
In this case we use two integer arrays to store the row and
non-zero value. The length of all these arrays is the same and
equal to the total number of non-zeros within the matrix, say NZ.
symmetric matrices.
     | l11                                |
     | l21  l22                           |
     |           l33                      |
L =  | l41  l42       l44                 |
     |           l53  l54  l55            |
     |           l63       l65  l66       |
     |                      l75  l76  l77 |
IROW = (1, 2, 2, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7)
Column index: ICOL
ICOL = (1, 1, 2, 3, 1, 2, 4, 3, 4, 5, 3, 5, 6, 5, 6, 7)
A-59
Non-zero values array: VALUE
VALUE = ( l11, l21, l22, l33, l41, l42, l44, l53, l54, l55, l63, l65, l66, l75, l76, l77 )
The length of each array is 16 ( = NZ ).
length N+1, where N is the order of the matrix; the new row index
IROWI= ( 1 , 2 , 4 , 5 , 8 , 11 , 14 , 17 )
ICOL = (1, 1, 2, 3, 1, 2, 4, 3, 4, 5, 3, 5, 6, 5, 6, 7)
176,177)
need the element of order N+1 for completeness. Also note that
A-60
The column-wise implicit scheme is straightforward. A further
improvement of this scheme can be obtained by storing the
of the column index array to NZ-N, and keeping the rest of the
storage unchanged.
c) Compressed scheme.
DIAG = ( l11, l22, l33, l44, l55, l66, l77 )
Non-diagonal non-zero vector: LNZ
XNZSUB = (1, 2, 3, 5, 6, 7, 8)
The compressed scheme splits the non-zero values into two real
arrays, one for the diagonal elements DIAG (neither row nor
column indices are needed for these elements) and another LNZ for
A-61
The array XLNZ gives the amount of non-zeros per column
(excludin g the diagonal) and NZSUB gives the corresponding row
index of these non-zeros using XNZSUB as a pointer to the first
the fact that in the array NZSUB some elements play a double (or
even greater !!) role, like the 4 and 7 in the second and seventh
position of NZSUB respectively.
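As an illustration of these storage schemes, the short Python sketch below builds the coordinate-scheme arrays of the 7x7 example above and converts them to a column-wise compressed form; note that it uses the generic compressed-sparse-column (CSC) layout rather than the exact DIAG/LNZ/NZSUB/XNZSUB arrangement described in the text, so the correspondence indicated in the comments is only approximate.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Coordinate scheme for the 7x7 lower-triangular example
# (1-based indices in the text, 0-based here).
IROW  = np.array([1, 2, 2, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7]) - 1
ICOL  = np.array([1, 1, 2, 3, 1, 2, 4, 3, 4, 5, 3, 5, 6, 5, 6, 7]) - 1
VALUE = np.arange(1.0, 17.0)          # placeholder numerical values l11, l21, ...

L = coo_matrix((VALUE, (IROW, ICOL)), shape=(7, 7))

# Column-wise compressed storage: one pointer array of length N+1 plus the
# row indices and values of the non-zeros, column by column.
Lc = L.tocsc()
print(Lc.indptr)     # column pointers (role analogous to XLNZ)
print(Lc.indices)    # row indices of the non-zeros (role analogous to NZSUB)
print(Lc.data)       # the non-zero values themselves
```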
equations.
A.3.2. Fill-in.
between the rows (or columns) of the original matrix, new non-
fill).
A-62
One of the worst cases of fill-in that can be expected is shown
in Figures A.2 and A.3. The original system has the arrow shaped
structure:

    | x  x  x  x  x |   | x |   | x |
    | x  x          |   | x |   | x |
    | x     x       | . | x | = | x |
    | x        x    |   | x |   | x |
    | x           x |   | x |   | x |
           A         .     x  =    b

Fig. A.2. Arrow shaped linear system.

     | x             |
     | x  x          |
L =  | x  x  x       |
     | x  x  x  x    |
     | x  x  x  x  x |

Fig. A.3. Cholesky factor of arrow shaped matrix (without reordering): the factor fills in completely.

    | x           x |   | x |   | x |
    |    x        x |   | x |   | x |
    |       x     x | . | x | = | x |
    |          x  x |   | x |   | x |
    | x  x  x  x  x |   | x |   | x |
           A*        .     x  =    b*

Fig. A.4. Reordered arrow shaped linear system.
The corresponding new Cholesky factor L* will have the sparsity
pattern shown in Figure A.5.
L* = | x             |
     |    x          |
     |       x       |
     |          x    |
     | x  x  x  x  x |
Comparing A* with its factor L*, we can see that no fill-in has
been produced.
fact that fill occurs only because of the way in which the
A x = b (95)
we solve:
( P A Pᵀ ) P x = P b                                               (96)
matrix, i.e.:
P Pᵀ = Pᵀ P = I   (identity matrix)                                (97)
matrix P A PT.
(98):
    | 1  7              8  |
    | 7  2   9  10         |
    |    9   3      11     |
A = |   10       4         |                                       (98)
    |       11       5  12 |
    | 8             12   6 |
the set of nodes (one node per row or column) and E is the set of
X = { 1, 2, 3, 4, 5, 6 }
GA :  E = { (6,1), (1,2), (2,4), (2,3), (3,5), (5,6) }
Note that for non-symmetric matrices, directed graphs (i.e.
    | 1                |
    |    1             |
P = |       0  1       |
    |       1  0       |
    |             1    |
    |                1 |
         | 1  7              8  |
         | 7  2  10   9         |
         |   10   4             |
P A Pᵀ = |    9       3  11     |
         |           11   5  12 |
         | 8             12   6 |

Fig. A.8. Reordered matrix [eq. (98)] and its associated graph.
Thus, in graph terms, we can see that reordering does not mean
A-66
From the previous example it is also evident that if A is
A-67
performance of the Gaussian elimination process, since it is
the data structures are dependent on the numerical values and not
consequences.
done.
A-68
A.3.3. Sparse methods for banded matrices.
linear system does not emerge in a banded form naturally, the aim
within the band; thus, all the zeros outside the band can be
and only the elements within the band are stored, irrespective of
The storage scheme used for dealing with banded matrices has
[Fig. A.9. Diagonal storage scheme for banded matrices: only the elements within the half band width on each side of the diagonal are stored.]
solution, in the sense that some fill-in can still be exploited
with very little extra effort. This is done in the next method.
         | x        x       |
         |    x     x       |
P A Pᵀ = | x  x     x  x  x |
         |          x  x    |
         |          x     x |

Fig. A.10. Minimum band ordering for arrow shaped linear system.
stage.
from "r" and renumbering first those nodes with the minimum
A-71
non-zero in each row and the diagonal of the matrix, that is to
-- •
symmetric
In this example, fill in maY only occur in the fourth and sixth
efficient from the storage point of view. One of the most widely
it does not make it worse), and the algorithm has been called
the reverse Cuthill-McKee (or RCM) algorithm. Hence, we need to find a root node useful for
our purposes.
It has been empirically found that good root nodes are those
nodes for which the distance (measured in number of edges) between them and the farthest
node is maximum.
as "pseudo-peripheral" nodes.
that:
Lo(x) ={x}
A-73
Figure A.14 is an example of a rooted level structure,
of x5 is 1(x5) = 3 and
[Fig. A.13. Graph for level structure example, from George and Liu (1981).]
a) Node labelling:  (see figure)
b) Level structure:
   L0(x5) = { x5 }
   ...
   L3(x5) = { x1 }
Fig. A.14. Level structure rooted at x5, for graph shown in Fig. A.13, from George and Liu (1981).
The algorithm proposed by George and Liu (1981) for finding
pseudo-peripheral nodes is reported to be based on a previous
"r", then assign r <-- x and repeat the process from step c),
otherwise:
A-75
(labelling), say:
other algorithm or, indeed, the root node can be specified by the
[Fig. A.15 (example): first root, eccentricity = 2; new root, eccentricity = 4; new root, eccentricity = 4, same as before, stop (x8 is a pseudo-peripheral node).]
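The rooted level structure and the search for a pseudo-peripheral node can be sketched as follows (a Python outline of our own following the idea of the George and Liu algorithm, not their published subroutines):

```python
from collections import deque

def rooted_level_structure(adj, root):
    """Level structure L0(root), L1(root), ... of an undirected graph,
    built by breadth-first search (adj: dict node -> set of neighbours)."""
    levels, seen = [{root}], {root}
    while True:
        nxt = {y for x in levels[-1] for y in adj[x]} - seen
        if not nxt:
            return levels
        seen |= nxt
        levels.append(nxt)

def pseudo_peripheral_node(adj, root):
    """Keep moving the root to a minimum-degree node of the deepest level
    until the eccentricity (number of levels - 1) stops increasing."""
    levels = rooted_level_structure(adj, root)
    while True:
        x = min(levels[-1], key=lambda v: len(adj[v]))   # candidate in the last level
        new_levels = rooted_level_structure(adj, x)
        if len(new_levels) <= len(levels):
            return x, len(new_levels) - 1                # node and its eccentricity
        root, levels = x, new_levels
```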
We have implemented the envelope method, with the RCM ordering,
using the subroutines published by George and Liu (1981). We used
this implementation for solving the linear systems generated in
one of the main drawbacks found, deals with the amount of storage
needed for the non-zeros out of the diagonal (array ENV). The
algorithm.
of the storage for the non-zeros enveloped within the skyline was
indeed occupied by zeros, and the idea is now try to exploit all
A-77
Walker (1970) and Ogbuobiri (1970), in power networks.
both for size (partial pivoting) and density (to reduce fill-
in).
stage does not include the numerical values of the non-zeros, but
from the row which had the biggest diagonal absolute value
alternative.
A-78
with the fewest amount of non-zero elements.
definite matrices.
completed. Thus, when using this implementation the user does not
know how much storage will be needed, and the only way to tackle
and was found to be one of the best available from the point of
view of the execution time; that comparison was carried out with
A-79
problems.
(a,c,g, i, f,e,b,h,d)
A-80
[Fig. A.16: sequence of partially eliminated matrices A0, A1, ..., A8 for the pivot order (a, c, g, i, f, e, b, h, d); "p" marks the pivot chosen at each step and circled entries denote fill-in.]
Notes:
*****
(1) "p" marks the new pivot.
(2) denotes fill-in.
(3): the pivots are always swapped with the first row (and
column), and then eliminated from the following graph.
G0 : next pivot = a, minimum degree of a = 2
     (note that c, g and i also have degree 2 and could be chosen as well)
G1 : next pivot = c, minimum degree of c = 2
     (again there are multiple choices)
G2 : next pivot = g, minimum degree of g = 2
G3 : next pivot = i, minimum degree of i = 2
G4 : next pivot = f, minimum degree of f = 3
G5 : next pivot = e, minimum degree of e = 3
G6 : next pivot = b, minimum degree of b = 2
G7 : next pivot = h, minimum degree of h = 1
G8 : next pivot = d, minimum degree of d = 0
Notes:(1) The darker lines are created by the elimination process
and represent the new non-zeros in the factorization.
(2) "p" marks the chosen pivot element.
[Figs. A.18/A.19: the filled graph G_F of F = L + Lᵀ and the corresponding filled (reordered) matrix, for the ordering (a, c, g, i, f, e, b, h, d); circled entries are the fill-in created by the elimination.]
Notice that G F ( Fig. A.18 ) has been found using only the
A-83
According to Fig. A.17, the whole elimination can be
represented as a succession of graphs: G0, G1, G2, ..., G8.
This is an explicit representation.
with k ≥ 0)
Note that a neighbouring node of y not in S is also reachable
from y ( with k=0 ).
Si = { x1, x2, ..., xi }
eliminated.
Reach(a,S0) = { b, d }       , S0 = ∅
Reach(c,S1) = { b, f }       , S1 = { a }
Reach(g,S2) = { h, d }       , S2 = { a, c }
Reach(i,S3) = { f, h }       , S3 = { a, c, g }
Reach(f,S4) = { e, b, h }    , S4 = { a, c, g, i }                 (99)
Reach(e,S5) = { b, h, d }    , S5 = { a, c, g, i, f }
Reach(b,S6) = { h, d }       , S6 = { a, c, g, i, f, e }
Reach(h,S7) = { d }          , S7 = { a, c, g, i, f, e, b }
Reach(d,S8) = ∅              , S8 = { a, c, g, i, f, e, b, h }
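The Reach operator itself is easy to sketch; the Python outline below (our own code, using our reading of the example graph as a 3 x 3 grid with nodes a...i) reproduces one of the sets in (99).

```python
def reach(adj, y, S):
    """Reach(y, S): nodes outside S reachable from y through paths whose
    intermediate nodes (if any) all belong to S (the eliminated set)."""
    found, stack, visited = set(), [y], {y}
    while stack:
        x = stack.pop()
        for z in adj[x]:
            if z in visited:
                continue
            visited.add(z)
            if z in S:
                stack.append(z)      # keep walking through eliminated nodes
            else:
                found.add(z)         # reachable node outside S
    return found

# 3 x 3 grid graph assumed for the minimum degree example:
#   a - b - c
#   |   |   |
#   d - e - f
#   |   |   |
#   g - h - i
adj = {'a': {'b', 'd'}, 'b': {'a', 'c', 'e'}, 'c': {'b', 'f'},
       'd': {'a', 'e', 'g'}, 'e': {'b', 'd', 'f', 'h'}, 'f': {'c', 'e', 'i'},
       'g': {'d', 'h'}, 'h': {'e', 'g', 'i'}, 'i': {'f', 'h'}}

print(reach(adj, 'f', {'a', 'c', 'g', 'i'}))   # -> {'e', 'b', 'h'}, as in (99)
```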
other hand, because we are interested only in finding the
reachable nodes from the set of uneliminated nodes, we do not
Fig. A.17) we have actually deleted the eliminated nodes from the
using the Reach operator, we can reduce the length of the path by
the reachable node is the neighbour of the pivot and it does not
Fig. A.20 [proved by a theorem in George and Liu (1981)] that the
storage locations used for storing the original matrix. Then, the
results of the ANALYSIS stage do not need more storage than that
A-86
[Fig. A.20. Sequence of quotient graphs used for finding the reachable set of nodes in the elimination process.]
original matrix. This means that we get the amount of fill-in and
A-87
takes place, which allows us to set up the data structures needed
for the following stages ( FACTORIZATION/SOLUTION). All this can
the time spent on searching for a minimum degree node (see Fig.
A.17). The idea is, because we usually find more than one node
A-88
successful either from the storage or from the execution time
original system.
A x = b (100)
where:
| A11  A12 |   | x1 |   | b1 |
|          | . |    | = |    |                                     (101)
| A21  A22 |   | x2 |   | b2 |
where:
All : a (p x p) submatrix.
A-89
b1, b2 : partitioned right hand side vector.
    | LB  :  0  |
L = | --------- |                                                  (102)
    | Wᵀ  :  LC |
where:
| LB LBᵀ   :      LB W        |   | A11 : A12 |
|                             | = |           |                   (104)
| Wᵀ LBᵀ   :  Wᵀ W + LC LCᵀ   |   | A21 : A22 |
A11 = LB LBᵀ                                                       (105)
which simply states that LB is the Cholesky factor of A11. In
factorization.
A-90
LB W(*,i) = A12(*,i)          ∀ i = 1, 2, ..., q                   (107)
where:
WT W + Lc Lc T = A22 (108)
which, on defining the auxiliary matrix C:
C = Lc Lc T (109)
becomes, after reordering
C = A22 - WT W (110)
| LB  :  0  |   | LBᵀ :  W   |   | x1 |   | b1 |
|           | . |            | . |    | = |    |                   (111)
| Wᵀ  :  LC |   | 0   :  LCᵀ |   | x2 |   | b2 |
and then, because we already know LB, Lc and W T , its solution
becomes just a series of substitution processes; indeed on
defining the auxiliary vector:

    | y1 |   | LBᵀ :  W   |   | x1 |
y = |    | = |            | . |    |                               (112)
    | y2 |   | 0   :  LCᵀ |   | x2 |
| LB  :  0  |   | y1 |   | b1 |
|           | . |    | = |    |                                    (113)
| Wᵀ  :  LC |   | y2 |   | b2 |
Wᵀ y1 + LC y2 = b2                                                 (115)
which reordered gives:
LC y2 = b2 - Wᵀ y1                                                 (116)
iii) Having yi from step i), y2 can again be computed by a
forward substitution from equation (116), which completes
the computation of the auxiliary vector "y".
| LBᵀ :  W   |   | x1 |   | y1 |
|            | . |    | = |    |                                   (117)
| 0   :  LCᵀ |   | x2 |   | y2 |
which can be done via the following steps:
Lc T x2 = y2 (118)
we obtain x2, by back substitution.
LBᵀ x1 + W x2 = y1                                                 (119)
which reordered gives:
LBᵀ x1 = y1 - W x2                                                 (120)
vi) Having x2 from step v), xi can be obtained by back
W = LB⁻¹ A12                                                       (121)
i.e.
Wᵀ = A12ᵀ LB⁻ᵀ                                                     (122)
LBᵀ x1 = y1 - ( LB⁻¹ A12 ) x2                                      (124)
Wᵀ W = A12ᵀ [ LB⁻ᵀ ( LB⁻¹ A12 ) ]                                  (125)
where the computations are carried out following the
general one of, say, (rxr) blocks; in that case George and Liu
Lij = Aij Ljj⁻ᵀ = ( Ljj⁻¹ Aji )ᵀ          i, j = 1, 2, ..., r
that is to say, we only need to store the diagonal block
Obviously the greater the number of blocks (r), the greater the
A-95
Loosely speaking, equation (127) simply says that a link
{Yi,Yj} exists if and only if two supernodes Yi and Yj are
adjacent.
the labelling system used, where each node is numbered before its
to a looped network (or graph) and the way of doing it is via the
A-97
quotient tree ("supertree"). Thus the origin of the name of
these methods has become clear.
L0 = { a }
L1 = { b, f }
L2 = { c, g, k }
L3 = { d, h, l, m }                                                (128)
L4 = { e, i, n, s }
L5 = { j, o, t }
L6 = { p, q }
L7 = { r }
Y1 = { a }
Y2 = { b, f }
Y3 = { c, g, k }
Y4 = { d, h, l, m }                                                (129)
Y5 = { e, i, n, s }
Y6 = { j, o, t }
Y7 = { p, q }
Y8 = { r }
Fig. A.23, nodes "o" and "t" are quite far from each other and a
presented in Fig. A.26 where, for the same network of Fig. A.23,
has been used; in this case, we no longer glue nodes like "o" and
Bj = G( ∪_{i=j}^{l} Li )                                           (130)
the union of all the nodes below the j-th level structure,
Y such that:
A-99
Fig. A.24. Quotient tree corresponding to the tree of Fig. A.23.
[Fig. A.25. Matrix corresponding to the quotient tree of Fig. A.24, for the ordering (r, p, q, j, o, t, e, i, n, s, d, h, l, m, c, g, k, b, f, a).]
[Fig. A.26: a) level structure rooted at node "a"; b) its refined quotient tree.]
non-zeros and now in the refined quotient tree rooted at node "a"
In the level structure used in Fig. A.26 we chose "a" as the root
A-101
node just by chance, but we have to ask ourselves what is the
best choice (if any) for such a root node. Since a partitioning
(131), see Fig. A.28, we get the matrix shown in Fig. A.29, which
[Fig. A.27. Matrix corresponding to the refined quotient tree of Fig. A.26 b), for the ordering (r, p, q, j, o, e, i, d, h, t, s, n, l, m, c, g, k, b, f, a).]
[Fig. A.29. Matrix corresponding to the refined quotient tree of Fig. A.28 b).]
Fig. A.30, a = 2 dissectors were used and 2 a + 1 members were
obtained (each dissector being a member itself).
[Fig. A.30. Rectangular grid partitioned with 2 dissectors (each dissector being a member itself).]
sequence (see Fig. A.30). The same result can be obtained with a
numbering.
From Fig. A.32 we can see that the labelling system introduced
blocks of size (mS x mS) and a blocks of size (m x m). All the
A-105
[Fig. A.31. Example of a 40-node rectangular-shaped graph (m = 5, l = 8), partitioned using one-way dissection.]
can be defined, in order to cope with some cases when 1 and the
envelope method, the refined quotient tree method and the nested
dissection method.
[Fig. A.32. Matrix structure corresponding to the graph of Fig. A.31.]
MA27 package of the Harwell Subroutine Library and we also found
that one-way dissection was still the best of the direct methods,
George and Liu and obtained the fastest execution time of all
Fig. A.33. Example of a 40 nodes rectangular-shaped graph,
partitioned using nested dissection.
[Fig. A.34. Matrix structure corresponding to the graph of Fig. A.33.]
element problems, for example), whereas some other
implementations are restricted to 2-D problems (planar meshes).
shaped graph.
full agreement between the results obtained with this method and
others.
behind the method had been used for a long time, the name and its
A-110
continuous domain, aiming at evaluating the unknowns only at
certain points (nodes) within the domain, rather than at all its
A = E B ( k ) (132)
and h = E Q (k) (133)
where:
finite element.
i.e. its shape, size and nodes, provides the adequate basis for
p roducing a desirable structure for the Gaussian elimination
A-111
process. In all the previous methods ( like quotient tree and
dissection algorithms), a great deal of effort was spent in
frontal method, deals with the fact that we do not have to wait
out, and because we no longer need it, the corresponding row can
sequence.
[Fig. A.36. The elimination sequence in a frontal solution scheme, from Livesley (1983): for each element introduced, the table lists the nodes eliminated and the current frontal nodes.]
From Fig. A.36 it is clear that a node cannot be eliminated
until the contributions from all its related elements have been
minimizing the front width. This can be done using some of the
for example:
A = { [ B(A) + B(B) ] + [ B(C) + B(D) ] + [ B(E) ] + ... }         (135)
where a sensible selection of the fronts allows extra savings in
systems of equations,
A.4.1. Introduction.
be most appropriate:
A-115
a) For example, if we are solving a non-linear system of
equations via an iterative scheme, which builds up a sequence of
work and storage point of views. This has been the main argument
A-116
Though this situation is changing very rapidly, it is still
Ortega and Poole (1981) and Stoer and Bulirsh (1980) for details
on these methods).
A-117
that is to say:
a11 x1 + a12 x2 + a13 x3 + .... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + .... + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + .... + a3n xn = b3                      (137)
....
an1 x1 + an2 x2 + an3 x3 + .... + ann xn = bn
a11 x1 = b1 - ( a12 x2 + a13 x3 + ... + a1n xn )

xi(k+1) = ( 1 / aii ) ( bi - Σ_{j≠i} aij xj(k) )     ∀ i = 1, ..., n          (139)
where (k+1) and (k) refer to the iteration index.
ε = || x(k+1) - x(k) || / || x(k) ||
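A minimal Python sketch of Jacobi's method, equation (139), with the relative change used as the stopping test (the code and names are our own, for illustration only):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=500):
    """Jacobi iteration: x_i^(k+1) = ( b_i - sum_{j != i} a_ij x_j^(k) ) / a_ii."""
    x = x0.astype(float).copy()
    D = np.diag(A)                       # diagonal coefficients a_ii
    R = A - np.diagflat(D)               # off-diagonal part of A
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x):
            return x_new, k + 1
        x = x_new
    return x, max_iter
```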
the value of x( 1 ), the right hand side uses x(k) from the
the last value (of the current iteration) for those xi whose
A-119
and the algorithm described for Jacobi's method remains the
same, replacing equation (139) by (140).
value as:
xi(k+1) = xi(k) + w { x̄i(k+1) - xi(k) }                            (141)
where x̄i(k+1) is the value given by the Gauss-Seidel formula (140) and "w"
is called a relaxation parameter; more precisely, we have under-relaxation for
0 < w < 1 and over-relaxation for w > 1.
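A short sketch of the relaxed update (141) follows; with w = 1 it reduces to the Gauss-Seidel method of equation (140). The code is an outline of our own, not the implementation used in this work.

```python
import numpy as np

def sor(A, b, x0, w=1.0, tol=1e-8, max_iter=500):
    """Successive over-relaxation, equation (141); w = 1 gives Gauss-Seidel."""
    x = x0.astype(float).copy()
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel value, using the components already updated this sweep
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = x[i] + w * (gs - x[i])        # relaxed update of equation (141)
        if np.linalg.norm(x - x_old) <= tol * np.linalg.norm(x_old):
            return x, k + 1
    return x, max_iter
```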
Using the same idea behind simple (Picard) iteration, we can
solve (143) via the following iterative scheme:
x (k+1) = B -1 [ b - (A - B) x( k) ] (145)
or
x(k+1) = B-1 b - B -1 A x (k) + B -1 B x(k) (146)
then, reordering and considering B⁻¹ B = I:
x(k+1) = x(k) - B⁻¹ [ A x(k) - b ]                                 (147)
or
x(k+1) = [ I - B -1 A] x (k) + B -1 b (148)
A-121
We shall look at the iterative methods from the perspective of
A=-E+D-F (150)
where:
      |  0                            |
      | a21    0                      |
E = - | ...        ...                |                            (151)
      | an1   an2   ...   an,n-1   0  |
coefficients of A.
D = diag( a11, a22, ..., ann )                                     (152)
      | 0   a12   ...    a1n   |
F = - |     0     ...    ...   |                                   (153)
      |           ...  an-1,n  |
      |                  0     |
coefficients of A.
B=D ; (154)
in so doing:
M = I - B -1 A = I - D -1 (- E + D - F)
= I - D -1 ( - E - F ) - D -1 D
M = D -1 ( E + F ) (155)
A-122
b) The Gauss-Seidel method can be obtained from equation (148)
by choosing B as:
B = D - E ; (156)
in that case:
M = I - B -1 A = I -(D - E) -1 (D - E - F)
= I - [ (D - E) -1 (D - E) - (D - E)-1F]
= I - [ I - (D - E) -1 F]
= (D - E) -1 F (157)
A ≅ L U                                                            (158)
Let us take:
B = L U (159)
then
B-1 = (L U) -1 = U -1 L-1
and
M = I - U -1 L -1 A (160)
B = (1/w) ( D - w E) (161)
"w". Then
M = I - B -1 A = I - w( D - w E ) -1 ( - E + D - F )
A-123
M = (D - wE) -1 [(D - wE) - w( - E + D - F )]
= (D - wE) -1 [ (1-w)D + wF] (162)
which, for the particular case w = 1, gives the Gauss-Seidel
convergence. So, the eigenvalues are the key to the study of the
(1980).
A-124
The eigenvalue problem is usually defined in terms of finding a
A x = λ x ,   x ≠ 0                                                (164)
the spectral radius and the norm of the matrix . On applying the
norm operator to equation (164), it is easy to prove that:
A-125
explicit computation of the spectral radius and eigenvalues, when
trying to determine the convergence properties of a method.
general formulation:
x(k+1) = [ I - B⁻¹ A ] x(k) + B⁻¹ b                                (148)
or
x(k+1) = M x(k) + d                                                (149)
is
ρ( I - B⁻¹ A ) < 1                                                 (150)
or simply:
by (148) is simply:
|| I - B⁻¹ A || < 1                                                (152)
* A is a positive definite matrix.
is such that:
0 < w < 2
matrices.
minimization problem.
The main idea behind this method comes from the fact that the
quadratic function:
Q(z) = (1/2) zᵀ A z - zᵀ h                                         (153)
we solve:
minimize (1/2) x T A x - x T h (154)
vectors:
<x3y> a xT y = yT X
•
If not, continue
b) Compute p(k+1) = r(k+1) - βk p(k)
to an iterative method.
algorithm:
x (k+1) following the direction p (k) . The algorithm finds the best
On differentiating equation (155) to determine the optimum
value of α, say αk, we get:
A ( x(k) + αk p(k) ) - b = 0
then,
A x(k) + αk A p(k) = b
or
αk A p(k) = b - A x(k) = r(k)                                      (156)
finally,
αk = <r(k), p(k)> / <p(k), A p(k)>                                 (157)
(159), we get:
r(k+1) = b - A { x(k) + αk p(k) }
or
r(k+1) = b - A x(k) - αk A p(k)
then
r(k+1) = r(k) - αk A p(k)                                          (160)
This means that we can overwrite the residual vector r (k) with
βk = <r(k+1), A p(k)> / <p(k), A p(k)>                             (162)
where again we can use the stored auxiliary vector E(k) instead
of A p(k).
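The complete iteration, following equations (156)-(162), can be sketched in a few lines; the Python outline below is our own and stores the product A p(k) once per step, as discussed above.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Conjugate gradient iteration following equations (156)-(162)."""
    x = x0.astype(float).copy()
    r = b - A @ x                    # initial residual
    p = r.copy()                     # initial search direction
    max_iter = max_iter or len(b)
    for k in range(max_iter):
        Ap = A @ p                   # the only product with A per iteration
        alpha = (r @ p) / (p @ Ap)               # equation (157)
        x = x + alpha * p
        r_new = r - alpha * Ap                   # equation (160)
        if np.linalg.norm(r_new) <= tol:
            return x, k + 1
        beta = (r_new @ Ap) / (p @ Ap)           # equation (162)
        p = r_new - beta * p                     # new A-conjugate direction
        r = r_new
    return x, max_iter
```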
A-131
termination property of the conjugate gradient method, since with
exact arithmetic the vectors p(k) and p( 101) are exactly A-
mainly due to the product A p (k) , which makes the method very
(non-sparse) matrices.
A-132
A.37. for 2 and 3-dimensional problems.
For more details on the Lanczos method see, for example, Scott
(1981).
A.4.8. Preconditioning.
known as "preconditioning".
A-133
[Fig. A.37. Geometric interpretation of the convergence of the conjugate gradient algorithm for n = 2 and n = 3, from Gambolatti and Perdon (1984), solving A h = b.]
numbers (say, 10 4 or even greater) are considered as "ill-
conditioned matrices.
iterative methods.
either
(MA)x= (M b) (163)
or
( N A N T )( N -T x) = ( N b ) (164)
where M and N are square preconditioning matrices.
A-135
transformed unknown variable, say y = N -T x. Apart from our main
conditioned than the original one, we look for a new system which
can be imagined. From an ideal point of view, the best choice for
a) Diagonal matrices:
For example M= diag( m ll , m22 , ..., mnn) with mii = 1/aii. This
b) Complete factorizations of A:
A-136
c) Incomplete factorizations of A:
A ≅ Ls Us
or
A ≅ Ls D Us
A = L D LT + E (165)
where A = symmetric and positive definite matrix.
D = diagonal matrix
A ≅ L D Lᵀ                                                         (166)
factorization:
A = L L T + E (167)
with the approximation:
A ≅ L Lᵀ                                                           (168)
coefficients of A:
a11 = l11 l11   -->   l11 = √a11                                   (170)
a1i = l11 li1   -->   li1 = a1i / l11          ∀ i                 (171)
Equations (170) and (171) give us the first row of the matrix
LT.
lii = √( aii - Σ_{k=1}^{i-1} lik² )                                (172)
Example a)
*********
      |  8.3  -3.2   0.0  -2.7 |
      | -3.2  13.7  -5.6   0.0 |
A  =  |  0.0  -5.6  10.5  -2.2 |
      | -2.7   0.0  -2.2   9.2 |
is factorized.
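A minimal sketch of this zero-fill incomplete Cholesky factorization [equations (167)-(172)] follows, applied to the 4 x 4 matrix of Example a); the code is an illustrative outline of our own, not the implementation of Kershaw (1978).

```python
import numpy as np

def incomplete_cholesky(A):
    """Incomplete Cholesky A ~ L L^T: the Cholesky formulas are applied, but an
    entry l_ij is kept only where the original a_ij is non-zero (no fill-in)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            if A[i, j] != 0.0:                        # keep the sparsity pattern of A
                L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# the 4x4 matrix of Example a)
A = np.array([[ 8.3, -3.2,  0.0, -2.7],
              [-3.2, 13.7, -5.6,  0.0],
              [ 0.0, -5.6, 10.5, -2.2],
              [-2.7,  0.0, -2.2,  9.2]])
L = incomplete_cholesky(A)
print(np.round(L @ L.T - A, 3))   # the error E = L L^T - A lives only in the discarded positions
```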
along this line, for symmetric M-matrices ( i.e. when all the
A-141
Mainly because of storage constraints we prefer the incomplete
factorization proposed by Kershaw (1978), though in some cases,
coefficient matrices.
A ≅ L Lᵀ ,                                                         (174)
the "standard" conjugate gradient algorithm is applied to the
A-142
which is equivalent to solving the system:
( L -1 A L -T ) y = b* (176)
where:
unknown x.
A-143
0. Compute the corrected right hand side vector h* = L -1 b
which is a simple premultiplication.
If not, continue
5).
A-144
Notes:
1. When solving iterativel y a system of non-linear equations,
via a sequence of linear systems, it appears logical to keep the
point, balancing the time saved starting with the vector y from
recommendation.
Compute αk = <r(k), p(k)> / <p(k), A p(k)>
....
p(k+1) = r(k+1) - βk p(k)
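A compact sketch of the preconditioned conjugate gradient iteration is given below in its usual untransformed form, where the preconditioner only enters through a solve z = M⁻¹ r, so the change of variables y = Lᵀ x of equation (176) is never formed explicitly; note that this textbook form differs slightly in sign conventions from the update written above, and the diagonal preconditioner in the example is only a stand-in for the incomplete Cholesky factor.

```python
import numpy as np

def preconditioned_cg(A, b, x0, M_inv, tol=1e-10, max_iter=None):
    """Preconditioned conjugate gradient; M_inv(r) applies the preconditioner."""
    x = x0.astype(float).copy()
    r = b - A @ x
    z = M_inv(r)                       # preconditioning solve, e.g. with L L^T from above
    p = z.copy()
    max_iter = max_iter or len(b)
    for k in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol:
            return x, k + 1
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, max_iter

# example with a simple diagonal (Jacobi) preconditioner
A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
b = np.array([1., 2., 3.])
print(preconditioned_cg(A, b, np.zeros(3), lambda r: r / np.diag(A)))
```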
We shall not repeat here the reasons for the computation of the
A-146
APPENDIX B
B.1. Introduction.
B-1
identification of parameters in groundwater flow [Hoeksema and
Kitanidis (1983), (1984) and (1985)] and other problems .
Journel and Huijbregts (1978), but they are reviewed here for
completeness.
stage procedure:
stage.
B-2
a great deal of judgement, even though the computer can be used
in an interactive way to ease the calculations involved. The
process and is usually carried out with the help of the computer,
We shall review each stage with some more detail, but first
B-3
Nevertheless, we restrict ourselves to the study of the
piezometric heads over a discrete set of points in the two-
dimensional space, these points are the nodes of the water
distribution network. Occasionally, we may also consider other
points, located in the pipes joining those nodes, but in either
case we are dealing with a discrete phenomenon rather than a
continuous one. As a result, head measurements are available at
some points of the space and head estimates (based on those
measurements) are required at unmeasured points.
( x, H ) = ( x1, x2, H )T
B-4
In order to simplify the notation of the random variables,
sometimes we may drop the state variable ( H ) or, even simpler,
distance d = x i - x2.
B-5
Hence, in theory, the semi-variogram depends on xi and x2, and
and higher order moments) associated with the random process. The
var[ Z(x1,H) - Z(x2,H) ] = 2 r(d)                                  (3)
where d = x1 - x2.
Equation (6) allows us to estimate the semi-variogram from the
data, as the arithmetic mean of the squared differences between
r*(d) = ( 1 / (2 N(d)) ) Σ_{i=1}^{N(d)} [ Z(x1,H) - Z(x2,H) ]²     (7)
C1 ( 0 , 100 ]
C2 ( 100, 200 ]
C3 ( 200, 300 ]
C4 ( 300, 400 ]
Class:                C1    C2    C3   . .   Ck
Average distance d:   d1    d2    d3   . .   dk
ak = ( 1 / (2 Nk) ) Σ_{i=1}^{Nk} [ Zi - Zj ]²                      (*)
The estimated semi-variogram r*(d) is the resulting plot with
computational purposes.
[Fig. B.1. Estimated semi-variogram: r*(d) plotted against the distance d (m), here for classes up to 400 m.]
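The computation of the experimental semi-variogram by distance classes [equation (7) and Table B.1] can be sketched as follows; the Python code, bin width and number of classes are ours, chosen only for illustration.

```python
import numpy as np

def experimental_semivariogram(coords, z, bin_width=100.0, n_bins=4):
    """Experimental semi-variogram: half the mean squared difference of the
    data pairs, grouped into distance classes C1, C2, ..."""
    n = len(z)
    d, sq = [], []
    for i in range(n):
        for j in range(i + 1, n):
            d.append(np.linalg.norm(coords[i] - coords[j]))
            sq.append((z[i] - z[j]) ** 2)
    d, sq = np.array(d), np.array(sq)
    gamma, avg_dist = [], []
    for k in range(n_bins):
        mask = (d > k * bin_width) & (d <= (k + 1) * bin_width)   # class Ck+1
        if mask.any():
            gamma.append(0.5 * sq[mask].mean())   # (1 / 2Nk) * sum of squared differences
            avg_dist.append(d[mask].mean())       # average distance of the class
    return np.array(avg_dist), np.array(gamma)
```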
Because the number of pairs decreases with the distance, it
convenient to discard the pairs too distant from each other; this
stressed:
B-10
the increments vanish at a zero distance, the semi-variogram
starts, in theory, at the origin [r(0)=0] and it increases with
implies that different ranges and sills can be obtained for the
there are some cases where the semi-variogram does not reach a
B-11
sill for any value of the distance; this implies that the
regionalized variable has an infinite variance, and only the
the average distance of each class, we may not get points close
is the typical case), the ore content will be very high, whereas
a very close sample, taken just outside the vein of high mineral
content, will show a very low ore content (if any at all). This
B-12
[Fig. B.2. Nugget effect and corresponding true semi-variogram: (a) the nugget effect, showing the points from data analysis; (b) the true semi-variogram.]
have to be aware of this in the event that the data show some
discontinuity near the origin.
B-13
addition, for reasons which will become apparent later on, only
All the previous basic models, except the monomial one, are
models with a sill, although only the spherical and cubic models
Monomial Bo dC ° C0<2
in d C ° A0=0
and having a set of available models on the other hand, the aim
any other index), the process can be automated to give the best
the model.
B-15
through equations (2) and (3)]. If a drift or trend is detected
can be dealt with and also how measurement errors can be handled
covariance depends only on its lag; in our case, this means that
E [ Z(x, H) ] = m (8)
and
Cov(x1,x2) = E [ {Z(xi,H)-m} .[Z(x2,H)-ml ] = C(d) (9)
B-16
In these conditions, we can expand the expectation in (9) to
give:
C(d) = E[ Z(x1,H) Z(x2,H) ] - m² - m² + m²
which reduces to
C(d) = E[ Z(x1,H) Z(x2,H) ] - m²                                   (10)
[Fig. B.3. Relationship between covariance and semi-variogram.]
Thus, because we already know how to estimate the semi-
that the mean "m" is also known and, under these conditions, we
mean process (dropping the state H in the notation from now on):
Y(X) = Z(X) - m (12)
Y0* = Y*(x0) = Σ_{i=1}^{n} λ0i Yi                                  (14)
estimated.
B-18
Yo* is then an estimate of the true value of the variable Yo,
which is not known. Kriging estimates an optimum Yo* in the sense
that it minimizes the square of the errors: Yo* - Yo; because Yo
is not known, we have to formalise the optimality condition
stating that the expected value of the square of the errors is
the quantity to be minimized:
E[ (Y0* - Y0)² ] = Σi Σj λ0i λ0j E[Yi Yj] - 2 Σi λ0i E[Yi Y0] + E[Y0²]          (16)
E[ Zi Zj ] = C( xi - xj ) + m²                                     (17)
but, because of the variable change (12), Y is a process with
zero mean, hence:
B-19
E[Yi Yj] = C(xi - xj) (18)
+ C(0) (20)
∂/∂λ0i  E[ (Y0* - Y0)² ] = 0          i = 1, ..., n                (21)
i.e.
∂/∂λ0i  E[ (Y0* - Y0)² ] = 2 Σj λ0j C[ xi - xj ] - 2 C[ xi - x0 ] = 0          (22)
                                                  i = 1, ..., n
hence:
B-20
A sufficient condition for a unique solution of (23) is that
point being estimated and the measurement points [xi-x 0 ] and the
B-21
iii) If we set xo = xi, i.e.: xo coincides with one measurement
the measurements.
out only once, stored and used repeatedly for each different
point ( xo ) to be estimated.
the most widely used linear solvers; all of them are applicable
B-22
to solve the linear systems generated by Kriging, due to the
symmetry and positive definiteness of the system.
Y0* = Y*(x0) = Σ_{i=1}^{n} λ0i Yi
var( Y0* - Y0 ) = E[ (Y0* - Y0)² ] - { E[ Y0* - Y0 ] }²            (24)
E[Y0 * - Yo] = 0 (25)
Note that the right hand side of (26) is simply the quadratic
function we had to minimize [equation (20)]. But we already know
B-23
the right hand side in equation (20) after determining the
Z = Y + m
Then,
Z0* = Z*(x0) = m + Σ_{i=1}^{n} λ0i ( Zi - m )                      (28)
and the Kriging variance (c 0 2 ) becomes:
estimates.
intrinsic hypothesis.
applications where the mean . may not be known and the variance may
hypothesis".
conditions:
B-25
E [Z(xl,H) - Z(x2,H)] = m
and
estimator:
Z0* = Z*(x0) = Σ_{i=1}^{n} λ0i Zi                                  (30)
that:
E[Zo* - Zo] = 0
where:
unknown.
E[ Σ_{i=1}^{n} λ0i Zi ] = E[Z0] = m
or
Σ_{i=1}^{n} λ0i E[ Zi ] = m
i.e.
Σ_{i=1}^{n} λ0i m = m

Σ_{i=1}^{n} λ0i = 1                                                (32)
minimum  E[ (Z0* - Z0)² ]                                          (33)
E[ (Z0* - Z0)² ] = E[ ( Σi λ0i Zi - Z0 )² ]                        (34)
Z0 = Σ_{i=1}^{n} λ0i Z0                                            (35)
and factorizing:
which leads to
B-27
Recalling the definition of the semi-vario g ram in equation (1):
or simply
(36), i.e.
E[ (Zi - Z0)(Zj - Z0) ] = - r(xi - xj) + r(xi - x0) + r(xj - x0)
Σ λ0j in the last two terms of (39), both factors being exactly 1, since
Σi λ0i r(xi - x0) = Σj λ0j r(xj - x0)
whence (39) reduces to
B-28
This is the quadratic form corresponding to equation (20) in
the second-order stationary case. Equation (40) has to be
minimized, subject to the equality constraint represented by
equation (32). The minimization can be carried out via the
Lagrange multipliers technique, minimizing (without restrictions)
the expanded objective function:
where the factor k in the first term and the negative sign in
the second have been added for convenience, but do not alter the
main objective. The coefficient "P" is the (unknown) Lagrange
multiplier, which has to be determined, together with the set of
weights X0 1 ; to do so we have to make the partial derivatives of
(41), with respect to P and Xo i (i=1,...,n), equal to zero, which
leads to a set of (n+1) linear equations in (n+1) unknowns:
Σj λ0j r(xi - xj) + μ = r(xi - x0)          i = 1, ..., n
                                                                   (42)
Σi λ0i = 1
|  0    r12   r13  . . .  r1n   1 |   | λ01 |   | r10 |
| r12    0    r23  . . .  r2n   1 |   | λ02 |   | r20 |
|  :     :                      : | . |  :  | = |  :  |            (43)
| rn1   rn2   rn3  . . .   0    1 |   | λ0n |   | rn0 |
|  1     1     1   . . .   1    0 |   |  μ  |   |  1  |
The solution of (43) exists and is unique provided that the
matrix of coefficients is positive definite. The matrix is
symmetric and it only needs to be solved once, since it is just
the right hand side vector that depends on the point (x 0 ) being
estimated.
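The assembly and solution of system (43) can be sketched in a few lines of Python; the code, and in particular the exponential semi-variogram model used in the example, are our own choices for illustration only.

```python
import numpy as np

def ordinary_kriging(coords, z, x0, gamma):
    """Ordinary Kriging weights, estimate and variance from system (43)."""
    n = len(z)
    K = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            K[i, j] = gamma(np.linalg.norm(coords[i] - coords[j]))
    K[:n, n] = 1.0                     # column of ones
    K[n, :n] = 1.0                     # row of ones (unbiasedness constraint (32))
    rhs = np.append([gamma(np.linalg.norm(xi - x0)) for xi in coords], 1.0)
    sol = np.linalg.solve(K, rhs)      # (lambda_01, ..., lambda_0n, mu)
    lam, mu = sol[:n], sol[n]
    z0 = lam @ z                       # Kriging estimate, equation (30)
    variance = lam @ rhs[:n] + mu      # Kriging variance, cf. equation (48)
    return z0, variance

# an assumed semi-variogram model, for illustration only
gamma = lambda d: 1.0 - np.exp(-d / 50.0)
```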
or
B-30
measurement Zi.
error-free measurement.
summary, we get:
Σj λ0j r(xi - xj) - λ0i σi² + μ = r(xi - x0)          i = 1, ..., n
                                                                   (46)
Σi λ0i = 1

where rij = r(xi - xj)
| -σ1²   r12    r13   . . .   r1n   1 |   | λ01 |   | r10 |
|  r12  -σ2²    r23   . . .   r2n   1 |   | λ02 |   | r20 |
|   :     :                         : | . |  :  | = |  :  |        (47)
|  rn1   rn2    rn3   . . .  -σn²   1 |   | λ0n |   | rn0 |
|   1     1      1    . . .    1    0 |   |  μ  |   |  1  |
σ0² = var( Z0* - Z0 ) = Σi λ0i r(xi - x0) + μ                      (48)
r_raw(x1 - x2) = r_real(x1 - x2) + (1/2) E{ [ ε(x1) - ε(x2) ]² }   (49)
Nevertheless, some authors [Delhomme, (1979)] accept the
that if the original process Z(x) was not stationary, then the
data.
Then, from now on, we are going to assume that Z(x) has been
the problem of how to infer the semi-variogram from the raw data
B-33
Followin g Delhomme (1978) and de Marsily (1986), both based on
previous work by Matheron, and for a random function Z, we shall
refer to Σ_{i=0}^{n} λi Zi as a generalised increment of order k if the
Σ_{i=0}^{n} λi fl(xi) = 0                                          (50)
x = ( x1, x2 )T, this implies that the monomials must be of the form x1^p x2^q
with 0 ≤ p + q ≤ k.
For k = 1 :
     Σ_{i=0}^{n} λi = 0
     Σ_{i=0}^{n} λi x1i = 0                                        (51)
     Σ_{i=0}^{n} λi x2i = 0

For k = 2, in addition:
     Σ_{i=0}^{n} λi (x1i)² = 0       Σ_{i=0}^{n} λi (x2i)² = 0     (52)
     Σ_{i=0}^{n} λi x1i x2i = 0
is fulfilled.
[Matheron, (1973)], if  Σ_{i=0}^{n} λi Zi
is a generalised increment of order k, its variance is given by:
var( Σ_{i=0}^{n} λi Zi ) = Σi Σj λi λj K( xi - xj )                (53)
must guarantee that the variances of generalised increments must
be always positive.
Drift
Drift order Model for the generalised covariance
k
m(x) = a1 + a2 x1 + a3 x2 + a4 x1² + a5 x1 x2 + a6 x2² + ...       (55)
or, in general
m(x) = Σ_{k=1}^{w} ak Pk(x)                                        (56)
where
dimensional space:
* linear drift:
m(x) = a1 + a2 x1 + a3 x2
* quadratic drift:
m(x) = a1 + a2 x1 + a3 x2 + a4 x1² + a5 x1 x2 + a6 x2²
The universal kriging estimator has the same form as before:

Z_0* = Σ_{i=1}^{n} λ_{0i} Z_i        (57)

The unbiasedness condition E[Z_0*] = E[Z_0] now requires

Σ_{i=1}^{n} λ_{0i} { Σ_{k=1}^{w} a_k P_k(x_i) } = Σ_{k=1}^{w} a_k P_k(x_0)

or, interchanging the order of summation,

Σ_{k=1}^{w} a_k { Σ_{i=1}^{n} λ_{0i} P_k(x_i) } = Σ_{k=1}^{w} a_k P_k(x_0)

and, since the coefficients a_k are unknown, the unbiasedness conditions become

Σ_{i=1}^{n} λ_{0i} P_k(x_i) = P_k(x_0)        k = 1, ..., w        (60)

which, in the particular case of a constant drift m(x) = a_1, reduce to the ordinary kriging condition Σ λ_{0i} = 1.

The rest of the kriging equations are obtained from the minimum variance condition. The estimation error can be written as

Z_0* - Z_0 = Σ_{i=1}^{n} λ_{0i} Z_i - Z_0 = Σ_{i=0}^{n} λ_{0i} Z_i

with λ_{00} = -1 and, provided that the mean (drift) is expressed as a function of the monomials P_k as in (56), conditions (60) make this error a generalised increment of order k, so that its variance is given by (53).
The expanded objective function is then:

L(λ_{0i}, µ_k) = Σ_{i=0}^{n} Σ_{j=0}^{n} λ_{0i} λ_{0j} K(x_i - x_j) + Σ_{k=1}^{w} µ_k [ Σ_{i=1}^{n} λ_{0i} P_k(x_i) - P_k(x_0) ]        (61)

and setting its partial derivatives to zero leads to the system:

Σ_{j=1}^{n} λ_{0j} K(x_i - x_j) + Σ_{k=1}^{w} µ_k P_k(x_i) = K(x_i - x_0)        i = 1, ..., n

and                                                                              (62)

Σ_{i=1}^{n} λ_{0i} P_k(x_i) = P_k(x_0)        k = 1, ..., w
In matrix form:

| K_11   K_12   ...   K_1n    1     P_21   ...   P_w1 |   | λ_01 |   | K_10 |
| K_21   K_22   ...   K_2n    1     P_22   ...   P_w2 |   | λ_02 |   | K_20 |
|  :      :            :      :      :            :   |   |  :   |   |  :   |
| K_n1   K_n2   ...   K_nn    1     P_2n   ...   P_wn | . | λ_0n | = | K_n0 |        (63)
|  1      1     ...    1      0      0     ...    0   |   | µ_1  |   |  1   |
| P_21   P_22   ...   P_2n    0      0     ...    0   |   | µ_2  |   | P_20 |
|  :      :            :      :      :            :   |   |  :   |   |  :   |
| P_w1   P_w2   ...   P_wn    0      0     ...    0   |   | µ_w  |   | P_w0 |

where K_{ij} = K(x_i - x_j), K_{i0} = K(x_i - x_0) and P_{ki} = P_k(x_i); the rows and columns of ones correspond to the constant monomial P_1(x) = 1.
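A minimal Python sketch of the assembly of system (63); the generalised covariance model K and the drift monomials P_k are user-supplied assumptions (here a linear drift, as in equation (56)), and the function names are illustrative only.

```python
import numpy as np

def universal_kriging(x, z, x0, K, drift_funcs):
    """Assemble and solve the universal kriging system (63).

    K           : generalised covariance model, K(h) for a distance array h
    drift_funcs : list of drift monomials P_k(x), with P_1(x) = 1
    (both are user-supplied assumptions, not fixed by the text)"""
    x = np.asarray(x, float)
    x0 = np.asarray(x0, float)
    n, w = len(z), len(drift_funcs)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    d0 = np.linalg.norm(x - x0, axis=-1)
    P = np.array([[pk(xi) for pk in drift_funcs] for xi in x])   # P[i, k] = P_k(x_i)
    # bordered, symmetric coefficient matrix of (63)
    A = np.zeros((n + w, n + w))
    A[:n, :n] = K(d)
    A[:n, n:] = P
    A[n:, :n] = P.T
    # right-hand side: K(x_i - x_0) and P_k(x_0)
    b = np.concatenate([K(d0), [pk(x0) for pk in drift_funcs]])
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n:]
    return lam @ np.asarray(z, float), lam, mu

# illustrative use with a linear drift (equation 56) and an assumed K(h) = -|h|
drift = [lambda p: 1.0, lambda p: p[0], lambda p: p[1]]
# z0, lam, mu = universal_kriging(x, z, x0, K=lambda h: -np.abs(h), drift_funcs=drift)
```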
In the particular case of the intrinsic random functions of order k, the system takes the same form,

Σ_{j=1}^{n} λ_{0j} K(x_i - x_j) + Σ_{k=1}^{w} µ_k P_k(x_i) = K(x_i - x_0)        i = 1, ..., n

and                                                                              (65)

Σ_{i=1}^{n} λ_{0i} P_k(x_i) = P_k(x_0)        k = 1, ..., w

with the P_k(x) being the monomials appearing in equation (52), obtaining equation (66).
Equations (62) and (66) are the equations for the universal kriging system.

ii) The function s(x) and its first and second derivatives [s'(x) and s''(x), respectively] are continuous. Higher-order derivatives do not need to comply with the continuity condition; in particular, a cubic spline will have third derivatives which are discontinuous at the knots [X_i, i = 1, 2, ..., h].
When knots are made to coincide, the differentiability of the spline is reduced by one order at the particular point where the coincident knots are located. Thus, by an adequate knot selection, we are allowed to specify discontinuities in the derivatives of the splines and, indeed, in the spline polynomial itself.
Also, cubic splines, rather than splines of any other order,
difference representations.
M_i(x) = Σ_{s=0}^{4} v_{is} (x - X_{i-s})₊³        (1)

with

v_{is} = 1 / { Π_{r=0, r≠s}^{4} (X_{i-s} - X_{i-r}) }        (2)
where the truncated power function is defined as:

(x - X_j)₊³ = (x - X_j)³   if x ≥ X_j
            = 0            if x < X_j        (3)
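The following Python sketch evaluates M_i(x) directly from equations (1)-(3); the knot array, the function name and the numerical check are illustrative assumptions (the check reproduces the 1/4 area of the cubic B-spline mentioned later).

```python
import numpy as np

def cubic_M_spline(x, knots, i):
    """Evaluate M_i(x) from equations (1)-(3).

    `knots` holds X_0, X_1, ...; M_i is supported on (X_{i-4}, X_i),
    so the five knots X_{i-4}, ..., X_i must exist (i >= 4 here) and be distinct."""
    x = np.asarray(x, dtype=float)
    X = np.asarray(knots, dtype=float)
    result = np.zeros_like(x)
    for s in range(5):                            # s = 0, ..., 4 as in equation (1)
        Xs = X[i - s]
        # v_is of equation (2): product over the other four knots
        denom = np.prod([Xs - X[i - r] for r in range(5) if r != s])
        # truncated power function of equation (3)
        result += np.where(x >= Xs, (x - Xs) ** 3, 0.0) / denom
    return result

# quick check on uniform knots 0,1,2,3,4: central value and area under the curve
X = np.arange(9.0)
xs = np.linspace(0.0, 4.0, 4001)
vals = cubic_M_spline(xs, X, 4)
print(vals[2000], vals.sum() * (xs[1] - xs[0]))   # ~0.1667 and ~0.25 (= 1/4)
```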
Since normally we shall be interested in defining a spline over a finite interval [a, b], an extended set of B-splines, involving additional knots outside the interval, is required (Fig. C.3).

Fig. C.2. B-splines involved in the spline function for the interval (X_{i-1}, X_i).
The evaluation of the B-splines using equation (4) starts with:

Equations (4) and (5) imply that the area under the curve of each B-spline is exactly 1/n; that is to say, for the cubic B-spline (n = 4) the area is exactly the same as before: 1/4.
C.3. Data fitting using one-dimensional cubic splines.
It has already been said that the idea is to combine the cubic B-splines linearly:

s(x) = Σ_{i=1}^{h+4} r_i M_i(x)        (6)

where the r_i are the coefficients ("weights") to be determined. Forcing the spline to reproduce the data values leads to the system of equations:

s(x_1) = Σ_{i=1}^{h+4} r_i M_i(x_1) = f(x_1) = f_1

s(x_2) = Σ_{i=1}^{h+4} r_i M_i(x_2) = f(x_2) = f_2
                                                        (7)
  ...

s(x_m) = Σ_{i=1}^{h+4} r_i M_i(x_m) = f(x_m) = f_m

or, in compact form,

Σ_{i=1}^{h+4} r_i M_i(x_r) = f(x_r) = f_r        r = 1, 2, ..., m        (8)
which is clearly a linear system of "m" equations in the (h+4) unknowns r_i. Because normally m > h+4, (7) is an overdetermined system of equations and an exact solution does not exist; a solution in the least-squares sense is sought. This system is known as the observation equations. In matrix form it reads:

A r = f        (9)

where:

A is a rectangular m × (h+4) matrix with coefficients A_{ri} = M_i(x_r);

r and f are column vectors of lengths (h+4) × 1 and m × 1, respectively.
i) Due to the fact that for every data point there are only 4 non-zero B-splines involved, and they are adjacent, namely M_{j+3}(x), M_{j+2}(x), M_{j+1}(x) and M_j(x) if the data point lies in the interval X_{j-1} < x < X_j, the matrix A has only four non-zero elements per row and they are adjacent. In addition, if the data points are sorted in ascending order according to their magnitudes, the first of the four non-zeros of each row of A will never lie to the left of the first non-zero of the previous row. Two or more rows will have their non-zeros in the same columns of A if and only if the corresponding data points lie in the same knot interval, say (X_{j-1}, X_j).
Example a: data points x_1, ..., x_7 distributed among the knots X_{-3}, ..., X_7 spanning the interval [a, b]; the corresponding structure of the observation matrix A follows the pattern described in i) above.

Example b: data points x_1, ..., x_9 with coincident boundary knots X_{-3} = X_{-2} = X_{-1} = X_0 = a = x_1 and X_6 = X_7 = X_8 = X_9 = b = x_9; the corresponding structure of the observation matrix A follows the same pattern.
Since normally m > h+4, i.e. there are more measurements than unknowns, the linear system (9) is overdetermined. Clearly, a unique solution is not possible, and the best we can expect is a least-squares solution, which is obtained from the normal equations:
Aᵀ A r = Aᵀ f        (10)

where:

[Aᵀ A] is a square (h+4) × (h+4) banded symmetric matrix, with bandwidth (2n - 1).
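A minimal sketch of the least-squares fit of equations (7)-(10), reusing the cubic_M_spline function sketched earlier; the choice of distinct exterior knots and the helper name fit_spline_1d are assumptions made for the illustration, since equations (1)-(2) require distinct knots.

```python
import numpy as np

def fit_spline_1d(x_data, f_data, interior_knots):
    """Least-squares cubic spline fit: observation equations (7)-(9) solved
    through the normal equations (10) with a Cholesky factorization."""
    x_data = np.asarray(x_data, float)
    f_data = np.asarray(f_data, float)
    a, b = float(x_data.min()), float(x_data.max())
    h = len(interior_knots)
    dx = (b - a) / (h + 1)
    # extended knot set with distinct exterior knots (cf. example a)
    X = np.concatenate([a - dx * np.arange(4, 0, -1.0),
                        np.asarray(interior_knots, float),
                        b + dx * np.arange(1.0, 5.0)])
    # observation matrix of (9): column j holds M_{j+1}(x_r), j = 0, ..., h+3
    A = np.column_stack([cubic_M_spline(x_data, X, j + 4) for j in range(h + 4)])
    N = A.T @ A                        # banded symmetric matrix of (10)
    L = np.linalg.cholesky(N)          # N = L L^T (see Appendix A)
    y = np.linalg.solve(L, A.T @ f_data)   # L y = A^T f
    r = np.linalg.solve(L.T, y)            # L^T r = y
    return r, X, L
```

If the knots violate the Schoenberg and Whitney type conditions discussed below, the matrix AᵀA becomes rank-deficient and the Cholesky factorization fails.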
The system (10) is not a very large linear system; it does not
Schoenberg and Whitney conditions; for interpolation problems,
these conditions establish a relationship between the location of
data points and knots, which is normally expressed as:
x_j < X_j < x_{j+4}        j = 1, 2, ..., h
The number and location of the knots influence the conditioning of the problem and the accuracy of the fitted spline [see Cox, Harris and Jones (1987)], since too many knots may produce overfitting of the data. The automatic selection of knots is a subject of ongoing research.
our case) at that particular point (x,y).
[Figure: rectangular panels R_{i,j} defined by the x-knots a = X_0, X_1, ..., X_h, X_{h+1} = b and the y-knots θ_j.]

Two sets of one-dimensional B-splines are involved: the first related to the independent variable "x" and to the set of knots X_i, say M_i(x), and the second related to the independent variable "y" and to the set of knots θ_j, say N_j(y). Both one-dimensional families are constructed exactly as those used in the previous section, and the bivariate spline is defined over the whole (x, y) space (or at each panel R_ij of the subspace R). For more details
s(x, y) = Σ_{i=1}^{h+4} Σ_{j=1}^{k+4} r_{ij} M_i(x) N_j(y)        (11)

where:

r_{ij} is the new vector of "weights" (to be determined).

Note that, because the original basis functions M_i(x) and N_j(y) are non-zero only over the knot spans (X_{i-4}, X_i) and (θ_{j-4}, θ_j) respectively, and because of the cross-product properties, the new basis function M_i(x)N_j(y) also has local support: for a data point lying in the panel R_{ij}, the non-zero products M_i(x)N_j(y) will be those within the surrounding 4 × 4 block, i.e. 16 in total.
Forcing the spline to reproduce the data then gives the observation equations:

Σ_{i=1}^{h+4} Σ_{j=1}^{k+4} r_{ij} M_i(x_r) N_j(y_r) = f(x_r, y_r) = f_r        r = 1, 2, ..., m        (12)

where, in matrix form, A is now an m × (h+4)(k+4) matrix and f is the vector of values of the function being approximated. If the data points are sorted according to the panel R_{ij} to which they belong, allowing the index "i" to run faster than "j" (i.e. following the panels in Fig. C.4 from left to right and from one row of panels to the next), A takes the block structure:
      | A_11  A_12  A_13  A_14                                              |
      |       A_22  A_23  A_24  A_25                                        |
A =   |                   .     .     .     .                               |
      |                         A_{k+1,k+1}  A_{k+1,k+2}  A_{k+1,k+3}  A_{k+1,k+4} |

where each block A_{lj} is a rectangular (r × (h+4)) sub-matrix, r being the number of data points lying in the corresponding row of panels. Note that if r_{ij} is associated with the panel R_{ij}, then the
[Aᵀ A] r = Aᵀ f        (14)

which is now a (h+4)(k+4) × (h+4)(k+4) linear system of equations.
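A minimal sketch of the tensor-product fit of equations (11)-(14), again reusing cubic_M_spline; the ordering of the coefficients (index "i" running faster than "j") follows the text, while the knot handling and the function name are illustrative assumptions.

```python
import numpy as np

def fit_spline_2d(xy, f, X, T):
    """Tensor-product cubic spline fit, equations (11)-(14).

    X, T are the extended knot sets in x and y (h+8 and k+8 distinct knots);
    the coefficient of the basis pair (i, j) sits at column (j-1)*(h+4)+(i-1),
    i.e. the index "i" runs faster than "j"."""
    xy = np.asarray(xy, float)
    h, k = len(X) - 8, len(T) - 8
    # one-dimensional B-spline values at the data points
    M = np.column_stack([cubic_M_spline(xy[:, 0], X, i + 4) for i in range(h + 4)])
    N = np.column_stack([cubic_M_spline(xy[:, 1], T, j + 4) for j in range(k + 4)])
    # observation matrix of (12): products M_i(x_r) N_j(y_r)
    A = np.einsum('rj,ri->rji', N, M).reshape(len(f), (h + 4) * (k + 4))
    # equivalent to solving the normal equations (14)
    r, *_ = np.linalg.lstsq(A, np.asarray(f, float), rcond=None)
    return r.reshape(k + 4, h + 4)   # r[j, i] multiplies M_{i+1}(x) N_{j+1}(y)
```

np.linalg.lstsq returns a minimum-norm solution when empty panels make the system rank-deficient, which is one pragmatic way around the non-uniqueness discussed next.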
Whitney conditions, to guarantee the existence and uniqueness of the solution. To make things worse, it often happens that some panels R_{ij} remain without data points within them, which implies that the least-squares solution is not unique.
This is particularly true when the measurements are subject to errors, in which case we want to know the impact of those errors on the spline estimates.
The solution can be almost direct, and very efficient from the
computational point of view, provided that the normal equations
have been solved using a direct Gaussian elimination method, with
a Cholesky factorization [see Appendix A for details]. This is
actually the main argument in favour of direct solutions when
solving the normal equations, since if an iterative scheme
(conjugate gradient) is used, the statistical information becomes
unavailable [see Cox (1982a)].
h_i = (0, 0, ..., 0, 1, 0, ..., 0)ᵀ        (19)

and

h_j = (0, 0, ..., 0, 0, 1, 0, ..., 0)ᵀ        (20)

with the unit entries in positions i and j, respectively. Since cov(r) = σ² [AᵀA]⁻¹ and AᵀA = RᵀR, R being the Cholesky factor, this can be re-written as:

cov(r_i, r_j) = σ² (R⁻ᵀ h_i)ᵀ (R⁻ᵀ h_j)        (23)

and, in particular,

var(r_i) = σ² uᵀ u,   with Rᵀ u = h_i        (30)
so that each variance requires only one forward substitution plus an inner product; the explicit inversion of [AᵀA] has been avoided. The computational effort grows with the number of weights, i.e. r = h+4 and r = (h+4)(k+4) for the one- and two-dimensional cases, respectively. Finally, the variance of the fitted spline is obtained as:

var( s(x) ) = var( Σ_{i=1}^{h+4} M_i(x) r_i ) = Σ_{i=1}^{h+4} { M_i(x) }² var(r_i)        (32)
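A minimal sketch of equations (19)-(32): the coefficient variances are obtained from the Cholesky factor L returned by the fit_spline_1d sketch above (so that AᵀA = L Lᵀ), and the variance of the fitted spline follows from (32); the residual variance σ² is assumed to have been estimated separately.

```python
import numpy as np

def coefficient_variances(L, sigma2):
    """var(r_i) of equation (30): solve L u = h_i and take sigma^2 * u^T u,
    avoiding an explicit inverse of A^T A."""
    p = L.shape[0]
    var_r = np.empty(p)
    for i in range(p):
        h = np.zeros(p)
        h[i] = 1.0                       # unit vector of equations (19)-(20)
        u = np.linalg.solve(L, h)        # L is lower triangular: a forward substitution in principle
        var_r[i] = sigma2 * (u @ u)
    return var_r

def spline_variance(x, X, L, sigma2):
    """var(s(x)) of equation (32), reusing cubic_M_spline from the earlier sketch."""
    var_r = coefficient_variances(L, sigma2)
    M = np.column_stack([cubic_M_spline(np.atleast_1d(x), X, j + 4)
                         for j in range(len(var_r))])
    return M ** 2 @ var_r
```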
Table D.2. Summary of the comparison between true, initial and
calibrated piezometric heads. Example B.
CASE  PROCEDURE  AVERAGE  VARIANCE  STAN.DEV.  MAXIMUM RES.[1]  AVERAGE RES.[2]  VARIANCE RES.[3]  RATIO AVERAGES EST/TRUE [4]
NOTES:
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.957 -1.370 0.307 -2.033
KRIGING -3.880 -38.340 -24.051 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 1.000 -5.502 0.833 0.966
I KRIGING -5.049 -267.833 -29.029 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 1.000 -5.209 0.833 0.969
II KRIGING -5.122 -271.631 -29.165 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 1.000 -5.592 0.833 0.966
III KRIGING -5.161 -273.701 -29.244 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.991 -88.752 -1.246 -8.383
IV KRIGING -5.065 -268.612 -29.057 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.441 -29.432 -3.819 -21.146
V KRIGING -2.960 -159.964 -24.767 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.886 -26.898 -0.513 -5.326
KRIGING -4.671 -248.348 -28.252 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 0.992 0.973 0.948 0.996
I KRIGING 0.924 0.956 0.634 0.660
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.993 0.973 0.949 0.997
II KRIGING 0.917 0.951 0.624 0.644
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.992 0.972 0.947 0.996
III KRIGING 0.913 0.949 0.619 0.638
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.999 0.974 0.967 0.998
IV KRIGING 0.856 0.946 0.479 0.612
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.993 0.998 0.939 0.996
V KRIGING 0.913 0.951 0.466 0.259
SPLINES -999.999 -999.999 -999.999 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.994 0.978 0.950 0.997
KRIGING 0.905 0.951 0.564 0.563
SPLINES -999.999 -999.999 -999.999 -999.999
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 0.990 -1.324 0.420 -14.333
I KRIGING -0.144 -999.999 -408.581 -999.999
SPLINES 0.951 -774.217 -133.729 -999.999
INTERPOL. 0.989 -1.273 0.418 -14.135
II KRIGING -0.195 -999.999 -408.782 -999.999
SPLINES 0.936 -762.062 -133.860 -999.999
INTERPOL. 0.992 -1.485 0.437 -13.308
III KRIGING -0.155 -999.999 -408.587 -999.999
SPLINES 0.910 -760.159 -133.739 -999.999
INTERPOL. 0.989 -2.944 -0.173 -34.845
IV KRIGING -0.183 -999.999 -408.550 -999.999
SPLINES 0.910 -760.038 -133.742 -999.999
INTERPOL. 0.878 -5.156 -1.448 -382.031
V KRIGING 0.940 -999.999 -385.586 -999.999
SPLINES 0.262 -845.138 -115.761 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.968 -2.436 -0.069 -91.730
KRIGING 0.053 -999.999 -404.017 -999.999
SPLINES 0.794 -780.323 -130.166 -999.999
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 0.929 0.886 0.145 0.014
I KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -13.299 -999.999 -999.999 -999.999
INTERPOL. 0.933 0.897 0.160 0.007
II KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -10.081 -999.999 -999.999 -999.999
INTERPOL. 0.938 0.881 0.184 0.022
III KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -52.001 -999.999 -999.999 -999.999
INTERPOL. 1.000 0.758 -0.692 -3.058
IV KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -37.897 -999.999 -999.999 -999.999
INTERPOL. -0.664 0.646 -0.508 -2.101
V KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -42.912 -999.999 -999.999 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.627 0.813 -0.142 -1.023
KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -31.238 -999.999 -999.999 -999.999
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 0.985 0.938 0.407 0.945
I KRIGING -77.643 -4.976 -450.616 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.960 0.903 0.338 0.943
II KRIGING -78.716 -5.225 -452.981 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.927 0.868 0.238 0.925
III KRIGING -80.021 -6.201 -459.157 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.866 0.821 -0.491 0.762
IV KRIGING -79.329 -6.041 -458.137 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.726 0.640 -0.413 0.291
V KRIGING -75.728 -5.181 -449.428 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.893 0.834 0.016 0.773
KRIGING -78.288 -5.525 -454.064 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.498 0.697 -1.898 -16.459
KRIGING -237.066 -295.720 -65.349 -504.082
SPLINES -243.633 -214.918 -96.763 -979.050
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 0.008 -0.050 0.005 -0.003
I KRIGING 1.000 0.920 -0.154 0.478
SPLINES -42.610 -14.722 -26.108 -551.471
INTERPOL. 0.866 0.894 0.022 0.306
II KRIGING 0.891 -0.129 -0.404 0.221
SPLINES -1.351 -117.440 -19.608 -379.195
INTERPOL. 0.994 0.940 0.022 0.081
III KRIGING 0.984 -0.104 0.376 0.655
SPLINES -0.861 -158.095 -22.371 -397.106
INTERPOL. 0.923 0.999 0.005 0.157
IV KRIGING 0.999 -1.145 -7.038 -64.831
SPLINES -7.648 -472.515 -69.540 -999.999
INTERPOL. 0.548 0.956 -16.042 -129.818
V KRIGING -999.999 -999.999 -91.807 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.668 0.748 -3.198 -25.855
KRIGING -199.225 -200.091 -19.805 -212.695
SPLINES -210.494 -352.554 -227.525 -665.554
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. -0.023 0.771 -2.638 -18.641
I KRIGING -31.332 -29.866 -6.784 -73.242
SPLINES -140.271 -148.540 -33.078 -999.999
INTERPOL. -12.538 -77.063 -2.586 -15.337
II KRIGING -0.931 -278.370 -3.860 -24.126
SPLINES -46.695 -533.205 -25.706 -999.999
INTERPOL. 0.900 -1.074 -2.870 -28.460
III KRIGING 0.996 -243.327 -6.084 -50.106
SPLINES -116.581 -535.163 -23.135 -999.999
INTERPOL. 0.967 1.000 0.798 0.974
IV KRIGING 0.991 -6.542 -0.007 0.109
SPLINES -5.318 -17.798 -2.484 -7.726
INTERPOL. 0.988 0.994 0.275 0.605
V KRIGING 0.342 0.033 0.201 -0.116
SPLINES 0.091 0.811 -2.654 -11.237
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. -1.941 -15.074 -1.404 -12.172
KRIGING -5.987 -111.615 -3.307 -29.496
SPLINES -61.755 -246.779 -17.411 -603.792
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. -19.250 -999.999 -52.814 -999.999
I KRIGING -0.563 -999.999 -936.003 -999.999
SPLINES 0.437 -999.999 -364.938 -999.999
INTERPOL. -999.999 -999.999 -66.950 -999.999
II KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -999.999 -999.999 -346.077 -999.999
INTERPOL. -999.999 -999.999 -39.519 -745.078
III KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -999.999 -999.999 -575.257 -999.999
INTERPOL. -999.999 -999.999 -54.663 -999.999
IV KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -999.999 -999.999 -582.368 -999.999
INTERPOL. -164.766 -999.999 -349.430 -999.999
V KRIGING -999.999 -999.999 -999.999 -999.999
SPLINES -999.999 -999.999 -575.688 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX. RES VAR. RES
INTERPOL. -636.803 -999.999 -112.675 -949.015
KRIGING -800.112 -999.999 -987.200 -999.999
SPLINES -799.912 -999.999 -488.865 -999.999
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 0.453 0.268 -1.701 -2.970
I KRIGING -999.999 -445.618 -271.042 -999.999
SPLINES -459.738 -247.806 -395.403 -999.999
INTERPOL. -4.690 0.694 -0.834 -2.373
II KRIGING -999.999 -371.842 -225.047 -999.999
SPLINES -333.037 -426.801 -999.999 -999.999
INTERPOL. 1.000 1.000 -1.019 -1.390
III KRIGING -35.633 -463.270 -999.999 -999.999
SPLINES -37.607 -343.325 -971.949 -999.999
INTERPOL. 0.998 0.998 -2.326 -10.028
IV KRIGING -34.361 -537.835 -999.999 -999.999
SPLINES -20.644 -999.999 -999.999 -999.999
INTERPOL. 0.909 0.965 -8.595 -59.700
V KRIGING -83.941 -242.848 -834.416 -999.999
SPLINES -82.577 -368.592 -919.051 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX. RES VAR. RES
INTERPOL. -0.266 0.785 -2.895 -15.292
KRIGING -430.787 -412.283 -666.101 -999.999
SPLINES -186.721 -477.304 -857.280 -999.999
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. -0.192 -0.127 -0.091 -2.424
I KRIGING 0.478 -999.999 -135.710 -999.999
SPLINES -999.999 -999.999 -999.999 -999.999
INTERPOL. 0.958 -35.067 0.037 -0.962
II KRIGING 0.995 -999.999 -178.260 -999.999
SPLINES -999.999 -999.999 -310.745 -999.999
INTERPOL. 0.968 -33.502 -0.054 -0.681
III KRIGING -0.804 -999.999 -84.489 -999.999
SPLINES -999.999 -999.999 -467.582 -999.999
INTERPOL. 0.980 -39.651 -0.067 0.014
IV KRIGING -0.858 -999.999 -84.659 -999.999
SPLINES -999.999 -999.999 -467.876 -999.999
INTERPOL. 0.876 -37.067 -3.983 -141.496
V KRIGING -0.079 -999.999 -87.983 -999.999
SPLINES -999.999 -999.999 -481.720 -999.999
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. 0.718 -29.083 -0.832 -29.110
KRIGING -0.054 -999.999 -114.220 -999.999
SPLINES -999.999 -999.999 -545.584 -999.999
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. -62.709 -13.645 -6.704 -273.295
I KRIGING -210.664 -14.007 -6.704 -90.406
SPLINES -773.546 -24.279 -6.704 -31.983
INTERPOL. -70.898 -12.253 -6.704 -261.133
II KRIGING -242.359 -8.062 -6.704 -47.609
SPLINES -773.546 -24.279 -6.704 -31.983
INTERPOL. -65.936 -12.957 -6.704 -268.537
III KRIGING -242.769 -7.990 -6.704 -47.169
SPLINES -773.546 -24.279 -6.704 -31.983
INTERPOL. 0.916 -1.648 0.119 -0.770
IV KRIGING -17.207 0.992 -0.293 -1.719
SPLINES -67.284 0.720 -0.293 -1.047
INTERPOL. -30.780 -145.424 -7.806 -282.156
V KRIGING -230.244 -10.255 -6.704 -64.730
SPLINES -773.546 -24.279 -6.704 -31.983
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. -45.881 -37.185 -5.560 -217.178
KRIGING -188.649 -7.865 -5.422 -50.327
SPLINES -632.293 -19.279 -5.422 -25.796
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. -51.879 -22.196 -6.704 -200.036
I KRIGING 0.902 -12.729 -7.806 -42.248
SPLINES -564.898 -10.740 -6.704 -34.296
INTERPOL. -112.730 -24.106 -6.704 -257.706
II KRIGING 0.902 -12.729 -7.806 -42.248
SPLINES -564.898 -10.740 -6.704 -34.296
INTERPOL. -85.120 -18.550 -6.704 -190.796
III KRIGING 0.902 -12.729 -7.806 -42.248
SPLINES -564.898 -10.740 -6.704 -34.296
INTERPOL. -8.867 -1.292 0.230 -0.801
IV KRIGING -12.650 -6.325 -1.753 -7.621
SPLINES -47.362 0.987 -0.293 -1.318
INTERPOL. -31.573 0.927 -7.227 -139.664
V KRIGING -19.121 -19.584 -7.806 -42.482
SPLINES -395.178 -5.757 -6.704 -35.257
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX. RES VAR. RES
INTERPOL. -58.034 -13.043 -5.422 -157.801
KRIGING -5.813 -12.819 -6.596 -35.369
SPLINES -427.447 -7.398 -5.422 -27.893
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 0.679 -0.984 -0.807 -14.342
I KRIGING 0.633 0.673 -0.066 -6.634
SPLINES 0.824 0.892 0.800 0.931
INTERPOL. 0.764 -0.099 -0.798 -5.482
II KRIGING 0.633 0.673 -0.073 -6.638
SPLINES 0.824 0.892 0.800 0.931
INTERPOL. 0.679 -0.986 -0.808 -14.355
III KRIGING 0.638 0.689 -0.075 -6.104
SPLINES 0.824 0.892 0.800 0.931
INTERPOL. 0.672 -0.134 -0.062 -0.278
IV KRIGING 0.634 0.727 0.288 -0.354
SPLINES 0.786 0.867 0.566 0.755
INTERPOL. 0.604 -3.776 -1.438 -27.710
V KRIGING 0.585 0.580 -0.457 -12.893
SPLINES 0.819 0.915 0.800 0.933
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX. RES VAR. RES
INTERPOL. 0.680 -1.196 -0.782 -12.433
KRIGING 0.625 0.668 -0.077 -6.525
SPLINES 0.815 0.892 0.753 0.896
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. -384.064 -131.976 -3.863 -148.606
I KRIGING -999.999 -585.184 -4.737 -28.590
SPLINES -999.999 -641.875 -4.737 -11.530
INTERPOL. -594.284 -161.234 -4.719 -174.161
II KRIGING -999.999 -601.478 -4.737 -20.929
SPLINES -999.999 -576.977 -4.737 -12.697
INTERPOL. -33.414 -100.423 -3.863 -150.180
III KRIGING -999.999 -604.710 -4.737 -17.321
SPLINES -999.999 -591.182 -5.466 -11.248
INTERPOL. -351.862 -85.504 -1.759 -31.279
IV KRIGING -999.999 -165.657 -2.663 -11.312
SPLINES -999.999 -158.133 -3.250 -9.002
INTERPOL. -2.518 -447.881 -5.466 -127.279
V KRIGING -999.999 -603.746 -4.737 -18.142
SPLINES -999.999 -595.019 -4.737 -14.938
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX.RES VAR.RES
INTERPOL. -273.228 -185.404 -3.934 -126.301
KRIGING -999.999 -512.155 -4.322 -19.259
SPLINES -999.999 -512.637 -4.585 -11.883
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. 0.014 -128.856 -4.500 -195.419
I KRIGING -999.999 -499.395 -5.466 -7.868
SPLINES -999.999 -494.328 -4.737 -14.659
INTERPOL. 0.264 -122.212 -4.500 -204.031
II KRIGING -999.999 -465.098 -5.466 -11.185
SPLINES -999.999 -472.896 -4.737 -8.501
INTERPOL. -0.543 -125.500 -4.500 -200.832
III KRIGING -999.999 -385.901 -5.466 -9.159
SPLINES -999.999 -498.385 -5.466 -17.220
INTERPOL. -39.720 -47.949 -1.750 -50.605
IV KRIGING -999.999 -107.576 -3.250 -8.416
SPLINES -999.999 -129.673 -3.250 -9.477
INTERPOL. -999.999 -249.762 -4.673 -218.845
V KRIGING -999.999 -413.693 -5.466 -12.487
SPLINES -999.999 -524.876 -5.466 -12.089
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX. RES VAR. RES
INTERPOL. -207.997 -134.856 -3.985 -173.947
KRIGING -999.999 -374.333 -5.023 -9.823
SPLINES -999.999 -424.031 -4.731 -12.389
PERFORMANCE INDEX
CASE PROCEDURE AVERAGE VARIANCE MAXIMUM VARIANCE
RESID. RESID.
INTERPOL. -18.400 -48.132 -4.737 -96.340
I KRIGING -999.999 -247.525 -5.466 -23.788
SPLINES -999.999 -484.876 -5.466 -7.782
INTERPOL. -239.267 -52.456 -4.737 -102.722
II KRIGING -999.999 -255.237 -5.466 -28.545
SPLINES -999.999 -468.641 -5.466 -8.346
INTERPOL. -306.269 -44.957 -4.737 -92.442
III KRIGING -999.999 -354.426 -5.466 -22.785
SPLINES -999.999 -497.128 -5.466 -10.806
INTERPOL. -28.531 -14.705 -1.552 -22.705
IV KRIGING -999.999 -91.712 -3.250 -12.585
SPLINES -999.999 -137.148 -3.250 -7.886
INTERPOL. -183.203 -159.499 -4.737 -174.175
V KRIGING -999.999 -350.050 -5.466 -21.099
SPLINES -999.999 -522.272 -5.466 -8.499
NOTE: A VALUE OF -999.999 INDICATES THAT THE INDEX
IS ACTUALLY LESS THAN OR EQUAL TO -999.999
AVERAGE INDEX
PROCEDURE AVERAGE VARIANCE MAX. RES VAR. RES
INTERPOL. -155.134 -63.950 -4.100 -97.677
KRIGING -999.999 -259.790 -5.023 -21.760
SPLINES -999.999 -422.013 -5.023 -8.664