
SIAM J. MATRIX ANAL. APPL.
Vol. 22, No. 3, pp. 666–681
© 2000 Society for Industrial and Applied Mathematics

AN ORTHOGONALLY BASED PIVOTING TRANSFORMATION OF MATRICES AND SOME APPLICATIONS∗

ENRIQUE CASTILLO†, ANGEL COBO†, FRANCISCO JUBETE†, ROSA EVA PRUNEDA†, AND CARMEN CASTILLO†

Abstract. In this paper we discuss the power of a pivoting transformation introduced by Castillo, Cobo, Jubete, and Pruneda [Orthogonal Sets and Polar Methods in Linear Algebra: Applications to Matrix Calculations, Systems of Equations and Inequalities, and Linear Programming, John Wiley, New York, 1999] and its multiple applications. The meaning of each sequential tableau appearing during the pivoting process is interpreted. It is shown that each tableau of the process corresponds to the inverse of a row-modified matrix and contains the generators of the linear subspace orthogonal to a set of vectors and of its complement. This transformation, which is based on the orthogonality concept, allows us to solve many problems of linear algebra, such as calculating the inverse and the determinant of a matrix, updating the inverse or the determinant of a matrix after changing a row (column), determining the rank of a matrix, determining whether or not a set of vectors is linearly independent, obtaining the intersection of two linear subspaces, solving systems of linear equations, etc. When the process is applied to inverting a matrix and calculating its determinant, not only is the inverse of the final matrix obtained, but also the inverses and the determinants of all its block main diagonal matrices, all without extra computations.

Key words. compatibility, determinant, intersection of linear subspaces, linear systems of equations, rank of a matrix, updating inverses

AMS subject classifications. 15A03, 15A06, 15A09, 15A15

PII. S0895479898349720

1. Introduction. Castillo, Cobo, Fernández-Canteli, Jubete, and Pruneda [2] and Castillo, Cobo, Jubete, and Pruneda [3] have recently introduced a pivoting transformation of a matrix that has important properties and has been shown to be very useful for solving a long list of problems in linear algebra. The aim of this paper is to show the power of this transformation, clarify the meaning of the partial results obtained during the computational process, and illustrate the wide range of applications of this transformation to common problems in linear algebra, such as calculating inverses of matrices, determinants, or ranks, solving systems of linear equations, etc.
The reader interested in a classical treatment of these problems can, for example,
consult the works of Burden and Faires [1], Golub and Van Loan [5], Gill et al. [6],
and Press et al. [8].
The new methods arising from this transformation have complexity identical to that associated with the Gauss elimination method (see Castillo, Cobo, Jubete, and Pruneda [3]). However, they are especially suitable for updating solutions when changes in rows, columns, or variables are made. In fact, when changing a row, column, or variable, a single step of the process allows one to obtain (update) the new solution without the need to start again from scratch. For example, updating the inverse of an n × n matrix when a row is changed requires one step instead of n, a drastic reduction in computational work.

∗Received by the editors December 22, 1998; accepted for publication (in revised form) by M. Chu May 30, 2000; published electronically October 25, 2000. This work was partially supported by Iberdrola, the Leonardo Torres Quevedo Foundation of the University of Cantabria, and Dirección General de Investigación Científica y Técnica (DGICYT) (project TIC96-0580).
http://www.siam.org/journals/simax/22-3/34972.html
†Department of Applied Mathematics and Computational Sciences, University of Cantabria, 39005 Santander, Spain ([email protected], [email protected], [email protected], [email protected]).

In this paper we introduce the pivoting transformation and its applications from the algebraic point of view only. Discussing the numerical properties and performance of this method with respect to stability, ill conditioning, etc., which must be done carefully, taking into account its applications (see Demmel [4] and Higham [7]), will be the aim of another paper.
The paper is structured as follows. In section 2 the pivoting transformation is
introduced. In section 3 its main properties are discussed. In section 4 an orthogonal-
ization algorithm is derived. In section 5 some applications are given and illustrated
with examples. Finally, some conclusions are given in section 6.

2. Pivoting transformation. The main tool to be used in this paper consists of the so-called pivoting transformation, which transforms a set of vectors $V_j = \{v_1^j, \ldots, v_n^j\}$ into another set of vectors $V_{j+1} = \{v_1^{j+1}, \ldots, v_n^{j+1}\}$ by

(2.1)   $v_k^{j+1} = \begin{cases} v_k^j / t_j^j & \text{if } k = j,\\[4pt] v_k^j - \dfrac{t_k^j}{t_j^j}\, v_j^j & \text{if } k \neq j, \end{cases}$

where $t_j^j \neq 0$ and the $t_k^j$, $k \neq j$, are arbitrary real numbers. In what follows we consider that the vectors above are the columns of a matrix $\mathbf{V}_j$.
This transformation can be formulated in matrix form as follows. Given a matrix $\mathbf{V}_j = [v_1^j, \ldots, v_n^j]$, where the $v_i^j$, $i = 1, \ldots, n$, are column vectors, a new matrix $\mathbf{V}_{j+1}$ is defined via

(2.2)   $\mathbf{V}_{j+1} = \mathbf{V}_j \mathbf{M}_j^{-1},$

where $\mathbf{M}_j^{-1}$ is the inverse of the matrix

(2.3)   $\mathbf{M}_j = (e_1, \ldots, e_{j-1}, t_j, e_{j+1}, \ldots, e_n)^T,$

where $e_i$ is the $i$th column of the identity matrix, the transpose of $t_j$ being defined by

(2.4)   $t_j^T = u_j^T \mathbf{V}_j$

for some predetermined vector $u_j$.


Since $t_j^j \neq 0$, the matrix $\mathbf{M}_j$ is invertible. It can be proved that $\mathbf{M}_j^{-1}$ is the identity matrix with its $j$th row replaced by

$t_j^* = \dfrac{1}{t_j^j}\left(-t_1^j, \ldots, -t_{j-1}^j,\ 1,\ -t_{j+1}^j, \ldots, -t_n^j\right).$

This transformation is used in well-known methods, such as the Gaussian elimination method. However, different selections of the t-values lead to completely different results. In this paper we base this selection on the concept of orthogonality and assume a sequence of m transformations associated with a set of vectors $\{u_1, \ldots, u_m\}$.
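To make the transformation concrete, here is a minimal sketch of one pivoting step, combining (2.1) with the orthogonal choice of t-values in (2.4); it uses exact rational arithmetic, and the function name and the column-list representation are our own illustrative conventions, not the authors' implementation.

```python
from fractions import Fraction

def pivot_step(V, u, j):
    """One pivoting transformation (2.1) with t defined by (2.4):
    V is a list of column vectors v_1..v_n, u defines t_k = u^T v_k,
    and j is the (0-based) pivot index; requires t_j != 0."""
    t = [sum(Fraction(ui) * vk for ui, vk in zip(u, col)) for col in V]
    assert t[j] != 0, "the pivot t_j^j must be nonzero"
    pivot = [x / t[j] for x in V[j]]                     # v_j / t_j
    return [pivot if k == j else
            [vk - t[k] * p for vk, p in zip(col, pivot)]  # v_k - (t_k/t_j) v_j
            for k, col in enumerate(V)]
```

One can check that $u^T$ applied to the transformed columns gives $e_j^T$, which is precisely the statement of Theorem 3.2 below.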

3. Main properties of the pivoting transformation. As we shall see, the pivoting transformation has very important and useful properties, which are illustrated in the following theorems.

The first theorem proves that, given a matrix $\mathbf V$, the pivoting transformation transforms its columns without changing the linear subspace they generate.

Theorem 3.1. Let $L(\mathbf V_j) = L\{v_1^j, \ldots, v_n^j\}$ be the linear subspace generated or spanned by the set of vectors $\{v_1^j, \ldots, v_n^j\}$. Consider the pivoting transformation (2.1) or (2.2) and let $L(\mathbf V_{j+1}) = L\{v_1^{j+1}, \ldots, v_n^{j+1}\}$; then $L(\mathbf V_j) = L(\mathbf V_{j+1})$.
Proof. The relationship (2.2) immediately implies that $L(\mathbf V_{j+1}) \subset L(\mathbf V_j)$. Conversely, the relationship

$\mathbf V_{j+1} \mathbf M_j = \mathbf V_j$

implies the reverse inclusion. In fact, the theorem follows by merely looking at (2.2).
The following theorem shows that the pivoting process (2.2) with the pivoting strategy (2.4) leads to the orthogonal decomposition of the linear subspace generated by the columns of $\mathbf V_j$ with respect to the vector $u_j$.

Theorem 3.2 (orthogonal decomposition with respect to a given vector). Assume now a vector $u_j \neq 0$ and let $t_k^j = u_j^T v_k^j$, $k = 1, \ldots, n$. If $t_j^j \neq 0$, then

(3.1)   $u_j^T \mathbf V_{j+1} = e_j^T.$

In addition, the linear subspace orthogonal to $u_j$ in $L(\mathbf V_j)$ is

$\{v \in L(\mathbf V_j) \mid u_j^T v = 0\} = L\left\{v_1^{j+1}, \ldots, v_{j-1}^{j+1}, v_{j+1}^{j+1}, \ldots, v_n^{j+1}\right\},$

and its complement is $L(v_j^{j+1})$.

In other words, the transformation (2.2) gives the generators of the linear subspace orthogonal to $u_j$ and the generators of its complement.

Proof. This theorem follows directly from (2.4) and (2.2), because

$u_j^T \mathbf V_{j+1} = u_j^T \mathbf V_j \mathbf M_j^{-1} = t_j^T \mathbf M_j^{-1} = e_j^T.$

Finally, Theorem 3.1 guarantees that $L(v_j^{j+1})$ is the complement.


Remark 1. Note that Theorem 3.2 allows us to obtain the linear subspace orthogonal to a given vector $u_j$ in any case. If $t_j^j = 0$, we can reorder the $v$ vectors until we satisfy the condition $t_j^j \neq 0$, or we find that $t_k^j = 0$ for all $k = 1, \ldots, n$, in which case the set orthogonal to $u_j$ in $L(\mathbf V_j)$ is all of $L(\mathbf V_j)$.
The following two theorems show that the pivoting transformation (2.2) allows one to obtain the linear subspace orthogonal to a given linear subspace, within another linear subspace.

Theorem 3.3. If we sequentially apply the transformation in Theorem 3.1 based on a set of linearly independent vectors $\{u_1, \ldots, u_j\}$, the orthogonalization and normalization properties in (3.1) are kept. In other words, we have

(3.2)   $u_r^T v_k^{j+1} = \delta_{rk} \qquad \forall r \le j,\ \forall k,$

where the $\delta_{rk}$ are the Kronecker deltas.


Proof. We prove this by induction over $j$.

Step 1. The theorem is true for $j = 1$, because from (3.1) we have $u_1^T v_k^2 = \delta_{1k}$.

Step $j$. We assume that the theorem is true for $j$, that is, $u_r^T v_k^{j+1} = \delta_{rk}$ for all $r \le j$.

Step $j+1$. We prove that it is true for $j + 1$. In fact, we have

$u_r^T \mathbf V_{j+2} = u_r^T \mathbf V_{j+1} \mathbf M_{j+1}^{-1} = \begin{cases} t_{j+1}^T \mathbf M_{j+1}^{-1} = e_{j+1}^T & \text{if } r = j+1,\\[2pt] e_r^T \mathbf M_{j+1}^{-1} = e_r^T & \text{if } r \le j. \end{cases}$

Theorem 3.4 (orthogonal decomposition with respect to a given linear subspace). Assume the linear subspace $L\{u_1, \ldots, u_n\}$. We can sequentially use Theorem 3.2 to obtain the set orthogonal to $L\{u_1, \ldots, u_n\}$ in a given subspace $L(\mathbf V_1)$. Let $t_i^j$ be the dot product of $u_j$ and $v_i^j$. Then, assuming without loss of generality that $t_q^{q-1} \neq 0$, we obtain

$L(\mathbf V_{q-1}) = L\left( v_1^{q-1} - \frac{t_1^{q-1}}{t_q^{q-1}}\, v_q^{q-1},\ \ldots,\ v_q^{q-1},\ \ldots,\ v_n^{q-1} - \frac{t_n^{q-1}}{t_q^{q-1}}\, v_q^{q-1} \right) = L\left(v_1^q, \ldots, v_n^q\right) = L(\mathbf V_q)$

and

$\{v \in L(\mathbf V_1) \mid u_1^T v = 0, \ldots, u_q^T v = 0\} = L\left(v_{q+1}^q, \ldots, v_n^q\right).$

In addition, we have

$u_1^T v_1^q = 1, \quad u_1^T v_i^q = 0\ \forall i \neq 1, \quad \ldots, \quad u_q^T v_q^q = 1, \quad u_q^T v_i^q = 0\ \forall i \neq q.$

The proof can easily be obtained using Theorem 3.3.


The following remarks point out the practical significance of the above four theorems.

Remark 2. The linear subspace orthogonal to the linear subspace generated by the vector $u_j$ is the linear subspace generated by the columns of $\mathbf V_k$ for any $k \ge j+1$, with the exception of its pivot column; its complement is the linear subspace generated by this pivot column of $\mathbf V_k$ for any $k \ge j+1$.

Remark 3. The linear subspace, in the linear subspace generated by the columns of $\mathbf V_1$, orthogonal to the linear subspace generated by any subset $W = \{u_k \mid k \in K\}$ is the linear subspace generated by the columns of $\mathbf V_\ell$, $\ell \ge \max_{k \in K} k + 1$, with the exception of all pivot columns associated with the vectors in $W$; its complement is the linear subspace generated by those columns of $\mathbf V_\ell$, $\ell \ge \max_{k \in K} k + 1$, which are their pivot columns.
4. The orthogonalization algorithm. In this section we describe an algorithm for obtaining orthogonal decompositions, which is based on Theorem 3.4.

Algorithm 1.
• Input: Two linear subspaces $L(\mathbf V_1) = L(v_1, \ldots, v_s) \subseteq \mathbb R^n$ and $L(\mathbf U) = L(u_1, \ldots, u_m) \subseteq \mathbb R^n$.
• Output: The linear subspace $L(\mathbf W_2)$ orthogonal to $L(\mathbf U)$ in $L(\mathbf V_1)$ and its complement $L(\mathbf W_1)$.
Step 1: Set $\mathbf W = \mathbf V_1$ (the matrix with $v_j$, $j = 1, \ldots, s$, as columns).
Step 2: Let $i = 1$ and $\ell = 0$.
Step 3: Calculate the dot products $t_j = u_i^T w_j$, $j = 1, \ldots, s$.

Table 4.1
Iterations for obtaining the orthogonal decomposition of L(V1) with respect to L(U). In each tableau the current generator of L(U) is shown in the first column and the dot products t in the last row; the pivot column is marked with an asterisk.

Iteration 1
  u1 | v1*  v2   v3   v4   v5
   1 |  1    0    0    0    0
  -1 |  0    1    0    0    0
   1 |  0    0    1    0    0
   0 |  0    0    0    1    0
   0 |  0    0    0    0    1
   t |  1   -1    1    0    0

Iteration 2
  u2 | v1   v2   v3*  v4   v5
   3 |  1    1   -1    0    0
  -3 |  0    1    0    0    0
   0 |  0    0    1    0    0
   1 |  0    0    0    1    0
   0 |  0    0    0    0    1
   t |  3    0   -3    1    0

Modified second table (columns 2 and 3 switched)
  u2 | v1   v2*  v3   v4   v5
   3 |  1   -1    1    0    0
  -3 |  0    0    1    0    0
   0 |  0    1    0    0    0
   1 |  0    0    0    1    0
   0 |  0    0    0    0    1
   t |  3   -3    0    1    0

Iteration 3
  u3 | v1   v2    v3   v4*   v5
   0 |  0   1/3    1  -1/3    0
   0 |  0    0     1    0     0
  -1 |  1  -1/3    0   1/3    0
   0 |  0    0     0    1     0
   1 |  0    0     0    0     1
   t | -1   1/3    0  -1/3    1

Output (after switching columns 3 and 4)
   1    0    1    1   -1
   0    0    0    1    0
   0    0   -1    0    1
  -3    1   -3    0    3
   0    0    0    0    1

The first three columns of the Output tableau (the pivot columns) generate the complementary subspace; the last two generate the orthogonal subspace.

Step 4: For $j = \ell + 1$ to $s$, locate the pivot column $r_\ell$ as the first column not orthogonal to $u_i$, that is, with $t_{r_\ell} \neq 0$. If there is no such column, go to Step 7. Otherwise, continue with Step 5.
Step 5: Increase $\ell$ by one unit, divide column $r_\ell$ by $t_{r_\ell}$, and if $r_\ell \neq \ell$, switch columns $\ell$ and $r_\ell$ and the associated dot products $t_{r_\ell}$ and $t_\ell$.
Step 6: For $j = 1$ to $s$, $j \neq \ell$, do the following: if $t_j \neq 0$, set $w_{kj} := w_{kj} - t_j w_{k\ell}$ for $k = 1, \ldots, n$.
Step 7: If $i = m$, go to Step 8. Otherwise, increase $i$ by one unit and go to Step 3.
Step 8: Return $L(\mathbf W_2) = L(w_{\ell+1}, \ldots, w_s)$ as the subspace orthogonal to $L(\mathbf U)$ in $L(\mathbf V_1)$, and $L(\mathbf W_1) = L(w_1, \ldots, w_\ell)$ as its complement.
Remark 4. If the pivoting process were used taking numerical considerations into account, Step 4 should be modified with an adequate pivot selection strategy (the maximum pivot strategy, for example). In this case, the corresponding permutation in Step 8 is required. Note that in this paper only algebraic considerations are used.
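To help the reader follow the tabular process described next, here is a direct, minimal transcription of Algorithm 1 into code; it is a sketch under our own conventions (0-based indices, columns stored as lists, exact arithmetic via Fraction), not the authors' implementation.

```python
from fractions import Fraction

def orthogonal_decomposition(U, V):
    """Algorithm 1: returns (W1, W2), where W2 generates the subspace of
    L(V) orthogonal to L(U) and W1 generates its complement.
    U is the list of vectors u_1..u_m; V is a list of s column vectors."""
    W = [[Fraction(x) for x in col] for col in V]   # Step 1
    s, ell = len(W), 0                              # Step 2
    for u in U:                                     # Steps 3 and 7
        t = [sum(a * b for a, b in zip(u, col)) for col in W]
        r = next((j for j in range(ell, s) if t[j] != 0), None)   # Step 4
        if r is None:
            continue                                # no pivot for this u
        W[ell], W[r] = W[r], W[ell]                 # Step 5: switch columns
        t[ell], t[r] = t[r], t[ell]
        W[ell] = [x / t[ell] for x in W[ell]]       # and normalize the pivot
        for j in range(s):                          # Step 6: elimination
            if j != ell and t[j] != 0:
                W[j] = [wj - t[j] * wp for wj, wp in zip(W[j], W[ell])]
        ell += 1
    return W[:ell], W[ell:]                         # Step 8

# Example 1 below: decompose R^5 with respect to L(U).
U = [(1, -1, 1, 0, 0), (3, -3, 0, 1, 0), (0, 0, -1, 0, 1)]
I5 = [[Fraction(1 if i == j else 0) for i in range(5)] for j in range(5)]
W1, W2 = orthogonal_decomposition(U, I5)
print(W1)   # complement: (1,0,0,-3,0), (0,0,0,1,0), (1,0,-1,-3,0)
print(W2)   # orthogonal subspace: (1,1,0,0,0), (-1,0,1,3,1)
```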
The process described in Algorithm 1 can be organized in a tabular form. A
detailed description is given in the following example.
Example 1 (orthogonal decomposition). Consider the following linear subspace of $L(\mathbf V_1) = \mathbb R^5$:

$L(\mathbf U) = L\left\{(1, -1, 1, 0, 0)^T,\ (3, -3, 0, 1, 0)^T,\ (0, 0, -1, 0, 1)^T\right\}.$

We organize the procedure in a tabular form (see Table 4.1).


First, to obtain the orthogonal decomposition of L(V1 ) with respect to L(U), we
construct the initial tableau (see Iteration 1 in Table 4.1), starting with the identity
matrix V1 . The first column of this table is the first generator of L(U) and the
generators of the subspace to be decomposed are in the other columns. The last row
contains the inner products of the vector in the first column by the corresponding
column vectors.
Next, the first nonnull element in the last row is identified and the corresponding
column is selected as the pivot column, which is boldfaced in Iteration 1.

Finally, it is necessary to perform the pivoting process and to update the first column and the last row of the table with the next generator of L(U) and the new inner products. We then get the second table (see Iteration 2 in Table 4.1). In order to select the pivot column, we have to look for the first nonnull element in the last row, starting with its second element because we are in the second iteration. The selected column is then the third one and, before performing the pivoting process, an interchange of the second and third columns must be done (see the Modified second table in Table 4.1).

We repeat the pivoting process, incorporate the last generator of L(U), and obtain the new dot products. We select the pivot column, starting at column three, and look for a nonnull dot product, obtaining the fourth column as the pivot. Next, we repeat the normalization and pivoting processes and finally get the Output tableau in Table 4.1, where the first three vectors are the generators of the complementary subspace and the last two are the generators of the orthogonal subspace.
Thus, the orthogonal decomposition becomes

$\mathbb R^5 = L\left\{(1, 0, 0, -3, 0)^T, (0, 0, 0, 1, 0)^T, (1, 0, -1, -3, 0)^T\right\} \oplus L\left\{(1, 1, 0, 0, 0)^T, (-1, 0, 1, 3, 1)^T\right\}.$

Note that, from the Output tableau, we can obtain the linear subspace orthogonal to the linear subspace generated by any subset of the initial set of vectors. For example, the orthogonal complement of the linear subspace generated by the set $\{(1, -1, 1, 0, 0)^T, (3, -3, 0, 1, 0)^T\}$ is (see Output in Table 4.1)

$L\{(1, -1, 1, 0, 0)^T, (3, -3, 0, 1, 0)^T\}^{\perp} = L\{(1, 0, -1, -3, 0)^T, (1, 1, 0, 0, 0)^T, (-1, 0, 1, 3, 1)^T\},$

which can also be written as (see Iteration 3 in Table 4.1)

$L\{(1, -1, 1, 0, 0)^T, (3, -3, 0, 1, 0)^T\}^{\perp} = L\{(1, 1, 0, 0, 0)^T, (-1/3, 0, 1/3, 1, 0)^T, (0, 0, 0, 0, 1)^T\}.$

Similarly,

$L\{(0, 0, -1, 0, 1)^T\}^{\perp} = L\{(1, 0, 0, -3, 0)^T, (0, 0, 0, 1, 0)^T, (1, 1, 0, 0, 0)^T, (-1, 0, 1, 3, 1)^T\}.$

5. Applications. In addition to obtaining the linear subspace orthogonal to a linear subspace generated by one or several vectors within a given linear subspace, the proposed orthogonal pivoting transformation allows solving the following problems:
1. calculating the inverse of a matrix,
2. updating the inverse of a matrix after changing a row,
3. determining the rank of a matrix,
4. calculating the determinant of a matrix,
5. updating the determinant of a matrix after changing a row,
6. determining whether or not a set of vectors is linearly independent,
7. obtaining the intersection of two linear subspaces,
8. solving a homogeneous system of linear equations,
9. solving a complete system of linear equations,
10. deciding whether or not a linear system of equations is compatible.
5.1. Calculating the inverse of a matrix. The following theorem shows that Algorithm 1 can be used for obtaining the inverse of a matrix.

Theorem 5.1. Assume that Algorithm 1 is applied to the rows of the matrix $\mathbf A = (a_1, \ldots, a_n)^T$ using a nonsingular initial matrix $\mathbf V_1$. Then the matrix whose columns are in the last tableau $\mathbf V_{n+1}$ is the inverse of matrix $\mathbf A$. In addition, if we start with $\mathbf V_1$ being the identity matrix, in the process we obtain the inverses of all block main diagonal matrices.
Proof. The matrices $\mathbf V_j$ for $j = 2, \ldots, n+1$ are obtained using the transformations

(5.1)   $\mathbf V_{j+1} = \mathbf V_j \mathbf M_j^{-1}, \qquad j = 1, \ldots, n,$

where $\mathbf M_j$ is defined in (2.3) with $t_j^T = a_j^T \mathbf V_j$. Then we have

(5.2)   $a_j^T \mathbf V_{n+1} = a_j^T \mathbf V_j \mathbf M_j^{-1} \cdots \mathbf M_n^{-1} = t_j^T \mathbf M_j^{-1} \cdots \mathbf M_n^{-1} = e_j^T \mathbf M_{j+1}^{-1} \cdots \mathbf M_n^{-1} = e_j^T.$

This proves that $\mathbf A^{-1} = \mathbf V_{n+1}$; that is, the inverse of $\mathbf A$ is the matrix whose columns form the final tableau obtained using Algorithm 1.

The second part of the theorem is obvious because the lower triangular part of the identity matrix is null and does not affect the dot products and the pivoting transformations involved in the process.
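As a concrete illustration of Theorem 5.1, the following sketch inverts the matrix of Example 2 by feeding its rows to the pivoting process, starting from the identity tableau. For brevity it pivots on the diagonal, which assumes every pivot $t_j^j$ is nonzero (true for this matrix, as Table 5.1 shows); a general implementation would include the column search of Algorithm 1.

```python
from fractions import Fraction

def inverse_by_pivoting(A):
    """Theorem 5.1 sketch: after processing all rows of A, the columns
    of the tableau form A^{-1} (diagonal pivots assumed nonzero)."""
    n = len(A)
    V = [[Fraction(1 if i == j else 0) for i in range(n)] for j in range(n)]
    for j, a in enumerate(A):                    # u_j = jth row of A
        t = [sum(ai * vk for ai, vk in zip(a, col)) for col in V]
        V[j] = [x / t[j] for x in V[j]]          # normalize pivot column j
        for k in range(n):
            if k != j and t[k] != 0:
                V[k] = [vk - t[k] * vj for vk, vj in zip(V[k], V[j])]
    return [[V[j][i] for j in range(n)] for i in range(n)]  # columns to rows

A = [[1, 1, 0, 1, 0], [-1, 1, -1, 0, 0], [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 2], [0, 1, 0, -1, 1]]
for row in inverse_by_pivoting(A):
    print([str(x) for x in row])    # reproduces A^{-1} of Example 2
```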
Example 2 (matrix inverses). Consider the following matrix A, whose block main diagonal matrices are referred to below, and its inverse A^{-1}:

(5.3)   $\mathbf A = \begin{pmatrix} 1 & 1 & 0 & 1 & 0\\ -1 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 2\\ 0 & 1 & 0 & -1 & 1 \end{pmatrix}; \qquad \mathbf A^{-1} = \begin{pmatrix} 2/7 & -5/7 & -5/7 & 1/7 & 3/7\\ 3/7 & 3/7 & 3/7 & -2/7 & 1/7\\ 1/7 & 1/7 & 8/7 & -3/7 & -2/7\\ 2/7 & 2/7 & 2/7 & 1/7 & -4/7\\ -1/7 & -1/7 & -1/7 & 3/7 & 2/7 \end{pmatrix}.$
Table 5.1 shows the iterations for inverting A using Algorithm 1. The inverse matrix A^{-1} is obtained in the last iteration (see Table 5.1). In addition, Table 5.1 also contains the inverses of all the block main diagonal matrices of A (see the upper-left blocks of Iterations 2 to 5 in Table 5.1). The important result is that these are obtained with no extra computation.

Finally, we mention that the 5 × 5 matrices we obtain in Iterations 2 to 5 of Table 5.1 are the inverses of the matrices that result from replacing rows of the unit matrix by the corresponding rows of matrix A. For example, the matrix in Iteration 4 is such that

(5.4)   $\mathbf H = \begin{pmatrix} 1 & 1 & 0 & 1 & 0\\ -1 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}; \qquad \mathbf H^{-1} = \begin{pmatrix} 1/2 & -1/2 & -1/2 & -1/2 & 1/2\\ 1/2 & 1/2 & 1/2 & -1/2 & -1/2\\ 0 & 0 & 1 & 0 & -1\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$

Example 3 (inversion of a matrix starting from a nonsingular matrix). The proposed pivoting process can be started with an arbitrary nonsingular matrix. For example, if we start with the matrix

(5.5)   $\mathbf B = \begin{pmatrix} 1 & 0 & 1 & 1 & 0\\ 2 & 1 & 0 & 0 & 0\\ -1 & -1 & 1 & 0 & -1\\ 0 & 1 & 1 & 2 & 2\\ 0 & 2 & 1 & 0 & 1 \end{pmatrix},$

we get the results in Table 5.2, i.e., the same inverse.


Table 5.1
Iterations for inverting the matrix in Example 2. In each tableau the current row of A is shown in the first column and the dot products t in the last row; the pivot column is marked with an asterisk. The inverses of the block main diagonal matrices appear as the upper-left blocks of Iterations 2 to 5.

Iteration 1
  u1 | v1*  v2   v3   v4   v5
   1 |  1    0    0    0    0
   1 |  0    1    0    0    0
   0 |  0    0    1    0    0
   1 |  0    0    0    1    0
   0 |  0    0    0    0    1
   t |  1    1    0    1    0

Iteration 2
  u2 | v1   v2*  v3   v4   v5
  -1 |  1   -1    0   -1    0
   1 |  0    1    0    0    0
  -1 |  0    0    1    0    0
   0 |  0    0    0    1    0
   0 |  0    0    0    0    1
   t | -1    2   -1    1    0

Iteration 3
  u3 | v1    v2    v3*   v4    v5
   0 | 1/2  -1/2  -1/2  -1/2   0
   0 | 1/2   1/2   1/2  -1/2   0
   1 |  0     0     1     0    0
   0 |  0     0     0     1    0
   1 |  0     0     0     0    1
   t |  0     0     1     0    1

Iteration 4
  u4 | v1    v2    v3    v4*   v5
   0 | 1/2  -1/2  -1/2  -1/2   1/2
   0 | 1/2   1/2   1/2  -1/2  -1/2
   0 |  0     0     1     0   -1
   1 |  0     0     0     1    0
   2 |  0     0     0     0    1
   t |  0     0     0     1    2

Iteration 5
  u5 | v1    v2    v3    v4    v5*
   0 | 1/2  -1/2  -1/2  -1/2   3/2
   1 | 1/2   1/2   1/2  -1/2   1/2
   0 |  0     0     1     0   -1
  -1 |  0     0     0     1   -2
   1 |  0     0     0     0    1
   t | 1/2   1/2   1/2  -3/2   7/2

Output
   2/7  -5/7  -5/7   1/7   3/7
   3/7   3/7   3/7  -2/7   1/7
   1/7   1/7   8/7  -3/7  -2/7
   2/7   2/7   2/7   1/7  -4/7
  -1/7  -1/7  -1/7   3/7   2/7

5.2. Updating the inverse of a matrix after changing a row. In this section we start by giving an interpretation to each tableau obtained in the inversion process of a matrix.

Since, according to Theorem 3.3, the pivoting transformation does not alter the orthogonality properties of previous vectors, we can update the inverse of a matrix after changing a row by performing an additional pivoting transformation in which the new row vector is used.
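The following sketch expresses the update: given the columns of A^{-1}, one pivoting step with the new row as u-vector, pivoting on the column whose index equals that of the changed row, yields the columns of the updated inverse. The function name is our own.

```python
from fractions import Fraction

def update_inverse(Ainv_cols, new_row, p):
    """One pivoting step updates A^{-1} after row p of A (0-based) is
    replaced by new_row; Ainv_cols is the list of columns of A^{-1}."""
    V = [[Fraction(x) for x in col] for col in Ainv_cols]
    t = [sum(u * v for u, v in zip(new_row, col)) for col in V]
    assert t[p] != 0, "the modified matrix must remain nonsingular"
    V[p] = [x / t[p] for x in V[p]]              # normalize the pivot column
    for k in range(len(V)):
        if k != p and t[k] != 0:
            V[k] = [vk - t[k] * vp for vk, vp in zip(V[k], V[p])]
    return V                                     # columns of the updated inverse
```

The check is immediate: for $i \neq p$ the old rows satisfy $a_i^T v_k = \delta_{ik}$, a property the pivoting step preserves (Theorem 3.3), while the step orthonormalizes all columns against the new row.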
To illustrate this, we use the results in Table 5.2. Note that the matrices in Iterations 2 to 5 correspond to the matrices obtained from B^{-1} after sequentially replacing the row whose number coincides with the number of the pivot column by the associated u-vectors. In other words, the matrices in Table 5.2, Iterations 2 to 5, are the inverses of the following matrices:

$\mathbf A_1 = \begin{pmatrix} 1 & 1 & 0 & 1 & 0\\ -8 & 7 & 6 & 4 & -2\\ 6 & -5 & -4 & -3 & 2\\ -9 & 8 & 7 & 5 & -3\\ 10 & -9 & -8 & -5 & 3 \end{pmatrix}; \qquad \mathbf A_2 = \begin{pmatrix} 1 & 1 & 0 & 1 & 0\\ -1 & 1 & -1 & 0 & 0\\ 6 & -5 & -4 & -3 & 2\\ -9 & 8 & 7 & 5 & -3\\ 10 & -9 & -8 & -5 & 3 \end{pmatrix};$

$\mathbf A_3 = \begin{pmatrix} 1 & 1 & 0 & 1 & 0\\ -1 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1\\ -9 & 8 & 7 & 5 & -3\\ 10 & -9 & -8 & -5 & 3 \end{pmatrix}; \qquad \mathbf A_4 = \begin{pmatrix} 1 & 1 & 0 & 1 & 0\\ -1 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 2\\ 10 & -9 & -8 & -5 & 3 \end{pmatrix}.$

5.3. Determining the rank of a matrix. In this section we see that Algorithm 1 also allows one to determine the rank of a matrix.

Table 5.2
Iterations for inverting the matrix in Example 2 when we start with matrix B in (5.5). In each tableau the current row of A is shown in the first column and the dot products t in the last row; the pivot column is marked with an asterisk.

Iteration 1
  u1 | v1*  v2   v3   v4   v5
   1 |  1    0    1    1    0
   1 |  2    1    0    0    0
   0 | -1   -1    1    0   -1
   1 |  0    1    1    2    2
   0 |  0    2    1    0    1
   t |  3    2    2    3    2

Iteration 2
  u2 | v1     v2*    v3     v4    v5
  -1 | 1/3   -2/3    1/3    0    -2/3
   1 | 2/3   -1/3   -4/3   -2    -4/3
  -1 | -1/3  -1/3    5/3    1    -1/3
   0 |  0     1      1      2     2
   0 |  0     2      1      0     1
   t | 2/3    2/3  -10/3   -3    -1/3

Iteration 3
  u3 | v1   v2     v3*   v4     v5
   0 |  1   -1     -3    -3     -1
   0 |  1   -1/2   -3    -7/2   -3/2
   1 |  0   -1/2    0    -1/2   -1/2
   0 | -1    3/2    6     13/2   5/2
   1 | -2    3      11    9      2
   t | -2    5/2    11    17/2   3/2

Iteration 4
  u4 | v1     v2      v3     v4*      v5
   0 | 5/11  -7/22   -3/11  -15/22   -13/22
   0 | 5/11   2/11   -3/11  -13/11   -12/11
   0 |  0    -1/2     0     -1/2     -1/2
   1 | 1/11   3/22    6/11   41/22    37/22
   2 |  0     1/2     1      1/2      1/2
   t | 1/11   25/22   28/11  63/22    59/22

Iteration 5
  u5 | v1      v2      v3    v4      v5*
   0 | 10/21  -1/21    1/3  -5/21    1/21
   1 | 31/63   41/63   7/9  -26/63   1/63
   0 | 1/63   -19/63   4/9  -11/63  -2/63
  -1 | 2/63   -38/63  -10/9  41/63  -4/63
   1 | -1/63   19/63   5/9   11/63   2/63
   t | 4/9     14/9    22/9  -8/9    1/9

Output
   2/7  -5/7  -5/7   1/7   3/7
   3/7   3/7   3/7  -2/7   1/7
   1/7   1/7   8/7  -3/7  -2/7
   2/7   2/7   2/7   1/7  -4/7
  -1/7  -1/7  -1/7   3/7   2/7

In an n-dimensional linear space, the rank of a matrix U coincides with n minus the dimension of the orthogonal complement of its column space. Thus, if during the orthogonalization process we start with a nonsingular matrix of columns and we can find a pivot in every iteration, then the corresponding matrix has full rank. Otherwise, the rank equals the number of pivot columns we can find.
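A sketch of this rank computation, under the same illustrative conventions as the earlier sketches:

```python
from fractions import Fraction

def rank_by_pivoting(A):
    """Rank of A = number of iterations of the pivoting process (run on
    the rows of A against the identity) in which a pivot is found."""
    n = len(A[0])
    V = [[Fraction(1 if i == j else 0) for i in range(n)] for j in range(n)]
    ell = 0
    for a in A:
        t = [sum(ai * vk for ai, vk in zip(a, col)) for col in V]
        r = next((j for j in range(ell, n) if t[j] != 0), None)
        if r is None:
            continue             # no pivot: row depends on the previous rows
        V[ell], V[r] = V[r], V[ell]
        t[ell], t[r] = t[r], t[ell]
        V[ell] = [x / t[ell] for x in V[ell]]
        for j in range(n):
            if j != ell and t[j] != 0:
                V[j] = [vj - t[j] * vp for vj, vp in zip(V[j], V[ell])]
        ell += 1
    return ell

A = [[1, 0, 1, 1, 1], [0, 1, 1, 0, 1], [1, 1, 2, 1, 2],
     [1, 1, 0, 0, 0], [1, -1, 0, 1, 0]]
print(rank_by_pivoting(A))       # 3, as in Example 4 below
```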
Example 4 (rank of a matrix). Assume that we are interested in calculating the rank of the matrix

$\mathbf A = \begin{pmatrix} 1 & 0 & 1 & 1 & 1\\ 0 & 1 & 1 & 0 & 1\\ 1 & 1 & 2 & 1 & 2\\ 1 & 1 & 0 & 0 & 0\\ 1 & -1 & 0 & 1 & 0 \end{pmatrix}.$

In Table 5.3 we show the iterations for obtaining its rank. We can see that the rank of A is 3, since the third and fifth iterations have no pivots.
5.4. Calculating the determinant of a matrix. The following theorem shows that the determinant of a matrix can be calculated by means of Algorithm 1.

Theorem 5.2 (determinant of a matrix). The determinant of a matrix $\mathbf A$ can be calculated by Algorithm 1 as the product of the normalizing constants $t_{r_\ell}$, $\ell = 1, \ldots, n$, used in Step 5, times $(-1)^p$, where $p$ is the number of column interchanges that occurred when executing the algorithm. If we start the algorithm with the identity matrix, the determinants of the block main diagonal matrices referred to in section 5.1 are the partial products.
Table 5.3
Iterations for calculating the rank of the matrix in Example 4. In each tableau the current row of A is shown in the first column and the dot products t in the last row; the pivot column, when one exists, is marked with an asterisk.

Iteration 1
  u1 | v1*  v2   v3   v4   v5
   1 |  1    0    0    0    0
   0 |  0    1    0    0    0
   1 |  0    0    1    0    0
   1 |  0    0    0    1    0
   1 |  0    0    0    0    1
   t |  1    0    1    1    1

Iteration 2
  u2 | v1   v2*  v3   v4   v5
   0 |  1    0   -1   -1   -1
   1 |  0    1    0    0    0
   1 |  0    0    1    0    0
   0 |  0    0    0    1    0
   1 |  0    0    0    0    1
   t |  0    1    1    0    1

Iteration 3 (no pivot)
  u3 | v1   v2   v3   v4   v5
   1 |  1    0   -1   -1   -1
   1 |  0    1   -1    0   -1
   2 |  0    0    1    0    0
   1 |  0    0    0    1    0
   2 |  0    0    0    0    1
   t |  1    1    0    0    0

Iteration 4
  u4 | v1   v2   v3*  v4   v5
   1 |  1    0   -1   -1   -1
   1 |  0    1   -1    0   -1
   0 |  0    0    1    0    0
   0 |  0    0    0    1    0
   0 |  0    0    0    0    1
   t |  1    1   -2   -1   -2

Iteration 5 (no pivot)
  u5 | v1    v2    v3    v4    v5
   1 | 1/2  -1/2   1/2  -1/2   0
  -1 | -1/2  1/2   1/2   1/2   0
   0 | 1/2   1/2  -1/2  -1/2  -1
   1 |  0     0     0     1    0
   0 |  0     0     0     0    1
   t |  1    -1     0     0    0

Proof. Assume that we start in Step 1 with an identity matrix, $\mathbf W = \mathbf I_n$, which has determinant one. In the inverting process we transform this matrix using two different transformations: the pivoting step, which does not alter the determinant of the matrix, and the normalization step, which divides the determinant by $t_{r_\ell}$ (see (2.1)). In addition, we multiply by $-1$ each time we switch columns. Since $|\mathbf A^{-1}| = |\mathbf A|^{-1}$, we have

(5.6)   $|\mathbf A| = (-1)^p \prod_{i=1}^{n} t_{r_i}.$

If we start with an identity matrix, the lower triangular part of the identity matrix is null and does not affect the dot products and the pivoting transformations involved in the process. Thus, the result holds.
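The determinant computation of Theorem 5.2 can be sketched as follows; it is the same pivoting loop, merely accumulating the pivots and the sign of the column interchanges (a nonsingular A is assumed, so a pivot always exists):

```python
from fractions import Fraction

def det_by_pivoting(A):
    """Theorem 5.2 sketch: |A| = (-1)^p times the product of the
    normalizing constants found while inverting A from the identity;
    the running value of det gives the block main diagonal determinants."""
    n = len(A)
    V = [[Fraction(1 if i == j else 0) for i in range(n)] for j in range(n)]
    det = Fraction(1)
    for ell, a in enumerate(A):
        t = [sum(ai * vk for ai, vk in zip(a, col)) for col in V]
        r = next(j for j in range(ell, n) if t[j] != 0)   # assumes |A| != 0
        if r != ell:
            V[ell], V[r] = V[r], V[ell]
            t[ell], t[r] = t[r], t[ell]
            det = -det                                    # one column interchange
        det *= t[ell]
        V[ell] = [x / t[ell] for x in V[ell]]
        for j in range(n):
            if j != ell and t[j] != 0:
                V[j] = [vj - t[j] * vp for vj, vp in zip(V[j], V[ell])]
    return det

A = [[1, 1, 0, 1, 0], [-1, 1, -1, 0, 0], [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 2], [0, 1, 0, -1, 1]]
print(det_by_pivoting(A))        # 7, as in Example 5
```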
Example 5 (determinant of a matrix). The determinant of the matrix in Example 2 is obtained by multiplying the normalizing constants, that is, the t-values of the pivot columns in Table 5.1. Thus, we have

$1 \times 2 \times 1 \times 1 \times 7/2 = 7.$

The determinants of the block main diagonal matrices in (5.3) are 1, 2, 2, 2, and 7, respectively.
Remark 5. If instead of starting with the identity matrix $\mathbf I_n$ we start with a nonsingular matrix $\mathbf B$ with determinant $|\mathbf B|$, expression (5.6) becomes

(5.7)   $|\mathbf A| = |\mathbf B|^{-1} (-1)^p \prod_{i=1}^{n} t_{r_i}.$

5.5. Updating the determinant of a matrix after changing a row. According to (5.6) or (5.7), the determinant is updated by multiplying the previous determinant by the dot product of the new row with the associated pivot column.
Example 6 (updating the determinant after changing a row). Consider the matrix A and its inverse A^{-1} from (5.3),

(5.8)   $\mathbf A = \begin{pmatrix} 1 & 1 & 0 & 1 & 0\\ -1 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 2\\ 0 & 1 & 0 & -1 & 1 \end{pmatrix}, \qquad \mathbf A^{-1} = \begin{pmatrix} 2/7 & -5/7 & -5/7 & 1/7 & 3/7\\ 3/7 & 3/7 & 3/7 & -2/7 & 1/7\\ 1/7 & 1/7 & 8/7 & -3/7 & -2/7\\ 2/7 & 2/7 & 2/7 & 1/7 & -4/7\\ -1/7 & -1/7 & -1/7 & 3/7 & 2/7 \end{pmatrix},$

Table 5.4
Pivoting process to determine whether or not a set of vectors is linearly dependent. In each tableau the current vector is shown in the first column and the dot products t in the last row; the pivot column, when one exists, is marked with an asterisk.

Iteration 1
  u1 | v1*  v2   v3   v4
   1 |  1    0    0    0
   0 |  0    1    0    0
   1 |  0    0    1    0
   1 |  0    0    0    1
   t |  1    0    1    1

Iteration 2
  u2 | v1   v2*  v3   v4
   2 |  1    0   -1   -1
  -1 |  0    1    0    0
  -1 |  0    0    1    0
   0 |  0    0    0    1
   t |  2   -1   -3   -2

Iteration 3
  u3 | v1   v2   v3*  v4
   1 |  1    0   -1   -1
   1 |  2   -1   -3   -2
   0 |  0    0    1    0
  -1 |  0    0    0    1
   t |  3   -1   -4   -4

Iteration 4 (no pivot)
  u4 | v1     v2     v3     v4
  -1 |  1/4   1/4    1/4    0
   1 | -1/4  -1/4    3/4    1
   2 |  3/4  -1/4   -1/4   -1
   1 |  0     0      0      1
   t |  1    -1      0      0

and assume that we want to calculate the determinant of the matrix

$\mathbf B = \begin{pmatrix} 1 & 1 & 0 & 1 & 0\\ -1 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1\\ a & b & c & d & e\\ 0 & 1 & 0 & -1 & 1 \end{pmatrix},$

that is, the matrix A with its fourth row replaced. Since $|\mathbf A| = 7$ and the pivot column associated with the fourth row is the fourth column of $\mathbf A^{-1}$, we have

$|\mathbf B| = 7 \times (a, b, c, d, e)\,(1/7, -2/7, -3/7, 1/7, 3/7)^T = a - 2b - 3c + d + 3e.$
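In code the update is a single dot product; here we evaluate the formula of Example 6 at an arbitrary illustrative choice of (a, b, c, d, e):

```python
from fractions import Fraction

# |B| = |A| * (new row) . (associated pivot column of A^{-1}); see section 5.5.
det_A = 7
ainv_col4 = [Fraction(1, 7), Fraction(-2, 7), Fraction(-3, 7),
             Fraction(1, 7), Fraction(3, 7)]    # 4th column of A^{-1} in (5.8)
new_row = [1, -2, -3, 1, 3]                     # illustrative (a, b, c, d, e)
det_B = det_A * sum(r * c for r, c in zip(new_row, ainv_col4))
print(det_B)    # 24, i.e., a - 2b - 3c + d + 3e evaluated at (1, -2, -3, 1, 3)
```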

5.6. Determining whether or not a set of vectors is linearly dependent. To know whether or not a set of vectors is linearly independent, we use the property

$\{u_1, \ldots, u_n\} \text{ are linearly dependent} \iff u_n \in L\{u_1, \ldots, u_{n-1}\},$

which can be written as

$\{u_1, \ldots, u_n\} \text{ are linearly dependent} \iff u_n \perp L\{u_1, \ldots, u_{n-1}\}^{\perp}.$

Thus, the problem reduces to obtaining a set of generators of the orthogonal complement of $L\{u_1, \ldots, u_{n-1}\}$ and checking that the dot products of each of its generators with $u_n$ are null.
Note that this problem is the same as determining whether or not a vector belongs
to a linear subspace.
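A sketch of this membership test; the function name in_span is our own illustrative choice:

```python
from fractions import Fraction

def in_span(vectors, u):
    """u depends linearly on `vectors` iff u is orthogonal to the
    orthogonal complement of L(vectors), computed by pivoting."""
    n = len(u)
    V = [[Fraction(1 if i == j else 0) for i in range(n)] for j in range(n)]
    ell = 0
    for a in vectors:
        t = [sum(ai * vk for ai, vk in zip(a, col)) for col in V]
        r = next((j for j in range(ell, n) if t[j] != 0), None)
        if r is None:
            continue
        V[ell], V[r] = V[r], V[ell]
        t[ell], t[r] = t[r], t[ell]
        V[ell] = [x / t[ell] for x in V[ell]]
        for j in range(n):
            if j != ell and t[j] != 0:
                V[j] = [vj - t[j] * vp for vj, vp in zip(V[j], V[ell])]
        ell += 1
    # the columns ell..n-1 generate the orthogonal complement
    return all(sum(ui * wk for ui, wk in zip(u, w)) == 0 for w in V[ell:])

vs = [[1, 0, 1, 1], [2, -1, -1, 0], [1, 1, 0, -1]]
print(in_span(vs, [-1, 1, 2, 1]))   # True: see Example 7 and Table 5.4
```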
Example 7 (linear dependence of a set of vectors). Consider the set of vectors

{(1, 0, 1, 1), (2, −1, −1, 0), (1, 1, 0, −1), (−1, 1, 2, 1)}.

If we use the pivoting process (see Table 5.4), we have no problem finding a pivot
column for the first three vectors, but there is no pivot column for the fourth vector.
This means that the fourth vector is a linear combination of the first three.
5.7. Obtaining the intersection of two linear subspaces. Theorem 3.4 allows us to obtain the intersection of two linear subspaces $S_1$ and $S_2$ by noting that

(5.9)   $S_1 \cap S_2 = S_1 \cap (S_2^{\perp})^{\perp} = S_2 \cap (S_1^{\perp})^{\perp}.$

In fact, we can first obtain $S_2^{\perp}$, the orthogonal of $S_2$, by letting $L(\mathbf V_1) = \mathbb R^n$ in Theorem 3.4, and then find the orthogonal of $S_2^{\perp}$ in $S_1$, using $S_1$ as $L(\mathbf V_1)$. Alternatively, we can first obtain $S_1^{\perp}$, the orthogonal of $S_1$, by letting $L(\mathbf V_1) = \mathbb R^n$ in Theorem 3.4, and then find the orthogonal of $S_1^{\perp}$ in $S_2$, using $S_2$ as $L(\mathbf V_1)$.
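A compact sketch of this two-stage computation follows; the routine decompose is a condensed version of Algorithm 1, and all names are our own illustrative choices.

```python
from fractions import Fraction

def decompose(U, V):
    """Condensed Algorithm 1: returns (complement, orthogonal set) of
    L(U) inside L(V), where V is a list of column vectors."""
    W = [[Fraction(x) for x in col] for col in V]
    ell = 0
    for u in U:
        t = [sum(a * b for a, b in zip(u, col)) for col in W]
        r = next((j for j in range(ell, len(W)) if t[j] != 0), None)
        if r is None:
            continue
        W[ell], W[r] = W[r], W[ell]
        t[ell], t[r] = t[r], t[ell]
        W[ell] = [x / t[ell] for x in W[ell]]
        for j in range(len(W)):
            if j != ell and t[j] != 0:
                W[j] = [wj - t[j] * wp for wj, wp in zip(W[j], W[ell])]
        ell += 1
    return W[:ell], W[ell:]

def intersection(S1, S2, n):
    """S1 and S2 are lists of generators in R^n; per (5.9), S1 cap S2 is
    the set orthogonal to S2-perp inside S1."""
    I = [[Fraction(1 if i == j else 0) for i in range(n)] for j in range(n)]
    _, s2_perp = decompose(S2, I)       # S2-perp in R^n
    _, inter = decompose(s2_perp, S1)   # orthogonal to S2-perp within S1
    return inter
```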

Example 8 (intersection of moving subspaces). Consider the linear subspaces $S_1 = L\{v_1, v_2, v_3, v_4\}$ and $S_2^{\perp} = L\{v_1, v_4\}$, where

$v_1 = (1,\ \sin 2t,\ -1,\ \cos t)^T,$
$v_2 = (\cos t,\ 1,\ \sin 2t,\ -1)^T,$
$v_3 = (-1,\ \cos t,\ 1,\ \sin 2t)^T,$
$v_4 = (\sin 2t,\ -1,\ \cos t,\ 1)^T,$

and assume that we wish to
1. determine the intersection $Q_1 = S_1 \cap S_2$ for all values of the time parameter $0 \le t \le 2\pi$;
2. find the $t$-values for which we have $Q_1 = Q_2$, where $Q_2 = S_1 \cap S_3$ and $S_3^{\perp} = L\{v_1, v_2\}$.

Then we have the following:
1. By definition we can write

$Q_1 = \{v \in S_1 \mid v \in S_2\}; \qquad v \in S_2 \iff v^T v_1 = 0 \text{ and } v^T v_4 = 0.$

Using the procedure in Theorem 3.4 and starting with $v_1$, we get

(5.10)   $p = v_1^T v_1 = \sin^2 2t + \cos^2 t + 2, \qquad q = v_1^T v_3 = 2 \sin 2t \cos t - 2, \qquad v_1^T v_2 = v_1^T v_4 = 0.$

Since $p \neq 0$ for all $t$, using the orthogonalization procedure in Theorem 3.4 we obtain

$\{v \in S_1 \mid v^T v_1 = 0\} = L\{u_1 = p v_3 - q v_1,\ v_2,\ v_4\},$

and, proceeding with $v_4$ and taking into account that

$v_4^T v_2 = q, \qquad v_4^T v_4 = p, \qquad v_4^T u_1 = 0,$

we have

$Q_1 = L\{p v_3 - q v_1,\ p v_2 - q v_4\} = L\{u_1, u_2\}.$

2. By a similar process we get

$Q_2 = L\{p v_3 - q v_1,\ p v_4 - q v_2\} = L\{u_1, u_3\}.$

Since $Q_2$ is orthogonal to $v_2$ and $Q_1 = Q_2$, then $Q_1$ is orthogonal to $v_2$ and, in particular, $u_2$ is orthogonal to $v_2$. Similarly, since $Q_1$ is orthogonal to $v_4$ and $Q_1 = Q_2$, then $Q_2$ is orthogonal to $v_4$ and, in particular, $u_3$ is orthogonal to $v_4$; that is,

$Q_1 = Q_2 \;\Rightarrow\; u_2^T v_2 = 0,\ u_3^T v_4 = 0 \;\Rightarrow\; p^2 - q^2 = 0 \;\Rightarrow\; p = -q$

(note that $p \ge 2 > 0$ and $q \le 0$ for all $t$), whence

$p = -q \;\Rightarrow\; (\sin 2t + \cos t)^2 = 0 \;\Rightarrow\; \cos t\,(2 \sin t + 1) = 0 \;\Rightarrow\; t \in A = \left\{\frac{\pi}{2}, \frac{3\pi}{2}, \frac{7\pi}{6}, \frac{11\pi}{6}\right\};$

conversely,

$p = -q \;\Rightarrow\; u_2 = u_3 \;\Rightarrow\; Q_1 = Q_2.$

Thus, we get

$Q_1 = Q_2 \iff t \in A = \left\{\frac{\pi}{2}, \frac{3\pi}{2}, \frac{7\pi}{6}, \frac{11\pi}{6}\right\}.$

Table 5.5
Pivoting transformations corresponding to Example 9. In each tableau the coefficients of the current equation appear in the first column and the dot products t in the last row; the pivot column of each iteration (always the first remaining one here) is removed from the subsequent tableaus.

Iteration 1
  a1 | v1   v2   v3   v4   v5
   1 |  1    0    0    0    0
   1 |  0    1    0    0    0
  -1 |  0    0    1    0    0
   1 |  0    0    0    1    0
  -2 |  0    0    0    0    1
   t |  1    1   -1    1   -2

Iteration 2
  a2 | v1   v2   v3   v4
   0 | -1    1   -1    2
   1 |  1    0    0    0
   0 |  0    1    0    0
   1 |  0    0    1    0
  -2 |  0    0    0    1
   t |  1    0    1   -2

Iteration 3
  a3 | v1   v2   v3
   0 |  1    0    0
   0 |  0   -1    2
   1 |  1    0    0
  -1 |  0    1    0
   0 |  0    0    1
   t |  1   -1    0

Final
  v1   v2
   1    0
  -1    2
   1    0
   1    0
   0    1

5.8. Solving a homogeneous system of linear equations. Consider the homogeneous system of equations

(5.11)   $\begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0,\\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0,\\ &\;\;\vdots\\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0, \end{aligned}$

which can be written as

(5.12)   $\begin{aligned} (a_{11}, \ldots, a_{1n})(x_1, \ldots, x_n)^T &= 0,\\ (a_{21}, \ldots, a_{2n})(x_1, \ldots, x_n)^T &= 0,\\ &\;\;\vdots\\ (a_{m1}, \ldots, a_{mn})(x_1, \ldots, x_n)^T &= 0. \end{aligned}$

Expression (5.12) shows that $(x_1, \ldots, x_n)$ is orthogonal to the set of vectors

$\{(a_{11}, \ldots, a_{1n}), (a_{21}, \ldots, a_{2n}), \ldots, (a_{m1}, \ldots, a_{mn})\}.$

Then, obtaining the solution to system (5.11) reduces to determining the orthogonal complement of the linear subspace generated by the rows of the matrix A.
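For Example 9 below this amounts to the following self-contained sketch (the loop is the same compact pivoting process used in the earlier sketches):

```python
from fractions import Fraction

rows = [[1, 1, -1, 1, -2], [0, 1, 0, 1, -2], [0, 0, 1, -1, 0]]  # system matrix
n = 5
V = [[Fraction(1 if i == j else 0) for i in range(n)] for j in range(n)]
ell = 0
for a in rows:
    t = [sum(ai * vk for ai, vk in zip(a, col)) for col in V]
    r = next((j for j in range(ell, n) if t[j] != 0), None)
    if r is None:
        continue
    V[ell], V[r] = V[r], V[ell]
    t[ell], t[r] = t[r], t[ell]
    V[ell] = [x / t[ell] for x in V[ell]]
    for j in range(n):
        if j != ell and t[j] != 0:
            V[j] = [vj - t[j] * vp for vj, vp in zip(V[j], V[ell])]
    ell += 1
for w in V[ell:]:                # generators of the orthogonal complement:
    print([str(x) for x in w])   # (1,-1,1,1,0) and (0,2,0,0,1), as in (5.14)
```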
Example 9 (a homogeneous system of linear equations). Consider the system of equations

(5.13)   $\begin{aligned} x_1 + x_2 - x_3 + x_4 - 2x_5 &= 0,\\ x_2 + x_4 - 2x_5 &= 0,\\ x_3 - x_4 &= 0. \end{aligned}$

To solve this system, we obtain the orthogonal complement of the linear subspace generated by the rows of the system matrix, as shown in Table 5.5. Thus, the solution is

(5.14)   $(x_1, x_2, x_3, x_4, x_5) = \rho_1 (1, -1, 1, 1, 0) + \rho_2 (0, 2, 0, 0, 1),$

where $\rho_1$ and $\rho_2$ are arbitrary real numbers.


5.9. Solving a complete system of linear equations. Now consider the complete system of linear equations

(5.15)   $\begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1,\\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2,\\ &\;\;\vdots\\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m. \end{aligned}$

Adding the artificial variable $x_{n+1}$, it can be written as

(5.16)   $\begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n - b_1 x_{n+1} &= 0,\\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n - b_2 x_{n+1} &= 0,\\ &\;\;\vdots\\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n - b_m x_{n+1} &= 0, \end{aligned}$

(5.17)   $x_{n+1} = 1.$

System (5.16) can be written as

(5.18)   $\begin{aligned} (a_{11}, \ldots, a_{1n}, -b_1)(x_1, \ldots, x_n, x_{n+1})^T &= 0,\\ (a_{21}, \ldots, a_{2n}, -b_2)(x_1, \ldots, x_n, x_{n+1})^T &= 0,\\ &\;\;\vdots\\ (a_{m1}, \ldots, a_{mn}, -b_m)(x_1, \ldots, x_n, x_{n+1})^T &= 0. \end{aligned}$

Expression (5.18) shows that $(x_1, \ldots, x_n, x_{n+1})$ is orthogonal to the set of vectors

$\{(a_{11}, \ldots, a_{1n}, -b_1), (a_{21}, \ldots, a_{2n}, -b_2), \ldots, (a_{m1}, \ldots, a_{mn}, -b_m)\}.$

Then, it is clear that the solution of (5.16) is the orthogonal complement of the linear subspace generated by the rows of the extended matrix:

$L\{(a_{11}, \ldots, a_{1n}, -b_1), (a_{21}, \ldots, a_{2n}, -b_2), \ldots, (a_{m1}, \ldots, a_{mn}, -b_m)\}^{\perp}.$

Thus, the solution of (5.15) is the projection on $X_1 \times \cdots \times X_n$ of the intersection of the orthogonal complement of the linear subspace generated by

$\{(a_{11}, \ldots, a_{1n}, -b_1), (a_{21}, \ldots, a_{2n}, -b_2), \ldots, (a_{m1}, \ldots, a_{mn}, -b_m)\}$

and the set $\{x \mid x_{n+1} = 1\}.$
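The final projection step in Example 10 below can be sketched directly from the generators of the homogeneous extended system; here we take the generators (5.14) of Example 9 as given rather than recomputing them:

```python
from fractions import Fraction

gens = [[1, -1, 1, 1, 0], [0, 2, 0, 0, 1]]       # generators from (5.14)
piv = next(g for g in gens if g[-1] != 0)        # a generator carrying x5
part = [Fraction(x, piv[-1]) for x in piv]       # scaled so that x5 = 1
dirs = [[g[k] - g[-1] * part[k] for k in range(5)]   # make x5 vanish in the rest
        for g in gens if g is not piv]
print([str(x) for x in part[:-1]])               # particular solution (0, 2, 0, 0)
print([[str(x) for x in d[:-1]] for d in dirs])  # free direction (1, -1, 1, 1)
```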
Example 10 (a complete system of linear equations). Consider the system of equations

(5.19)   $\begin{aligned} x_1 + x_2 - x_3 + x_4 &= 2,\\ x_2 + x_4 &= 2,\\ x_3 - x_4 &= 0, \end{aligned}$

which, using the auxiliary variable $x_5$, can be written as (5.13). Since the solution of the homogeneous system (5.13) was already obtained, now we only need to force $x_5 = 1$ and return to the initial set of variables. Thus, the solution is

(5.20)   $(x_1, x_2, x_3, x_4) = (0, 2, 0, 0) + \rho_1 (1, -1, 1, 1),$

where $\rho_1$ is an arbitrary real number.

Assume that we now add to system (5.19) the equation

(5.21)   $x_2 - x_4 = 0.$

The new solution

(5.22)   $(x_1, x_2, x_3, x_4) = (1, 1, 1, 1)$

is obtained by an extra pivoting step using the new vector $(0, 1, 0, -1, 0)$ (see Table 5.6).

Finally, if in system (5.19) we eliminate the variable $x_4$, the new solution

(5.23)   $(x_1, x_2, x_3) = (0, 2, 0)$

can be obtained by introducing the new equation $x_4 = 0$, which is equivalent to this elimination. Using a new pivoting step with the vector $(0, 0, 0, 1, 0)$, we get the results in Table 5.6 and the solution in (5.23).
Table 5.6
New pivoting transformation after adding equation (5.21) (left) and after removing variable x4 from system (5.19) (right).

After adding (5.21)              After removing x4
  a4 | v1   v2  | Final             a4 | v1   v2  | Final
   0 |  1    0  |   1                0 |  1    0  |   0
   1 | -1    2  |   1                0 | -1    2  |   2
   0 |  1    0  |   1                0 |  1    0  |   0
  -1 |  1    0  |   1                1 |  1    0  |   0
   0 |  0    1  |   1                0 |  0    1  |   1
   t | -2    2  |                    t |  1    0  |

5.10. Deciding whether or not a linear system of equations is compatible. In this section we show how to apply the orthogonal methods to analyze the compatibility of a given system of equations.

System (5.15) can be written as

(5.24)   $x_1 \begin{pmatrix} a_{11}\\ a_{21}\\ \vdots\\ a_{m1} \end{pmatrix} + x_2 \begin{pmatrix} a_{12}\\ a_{22}\\ \vdots\\ a_{m2} \end{pmatrix} + \cdots + x_n \begin{pmatrix} a_{1n}\\ a_{2n}\\ \vdots\\ a_{mn} \end{pmatrix} = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_m \end{pmatrix}.$

Expression (5.24) shows that the vector $(b_1, \ldots, b_m)^T$ must be a linear combination of the set of column vectors

$\{(a_{11}, \ldots, a_{m1})^T, (a_{12}, \ldots, a_{m2})^T, \ldots, (a_{1n}, \ldots, a_{mn})^T\}$

of the system matrix A. Thus, the compatibility problem reduces to that of section 5.6.

In other words, analyzing the compatibility of the system of equations (5.15) reduces to finding the orthogonal complement $L\{w_1, \ldots, w_p\}$ of $L\{a_1, \ldots, a_n\}$, where the $a_i$ are the columns of the system matrix, and checking whether or not $\mathbf W^T b = \mathbf 0$, with $\mathbf W = (w_1, \ldots, w_p)$.
The computational process arising from this procedure has a complexity that coincides exactly with that of the Gauss elimination procedure. However, it has one important advantage: W is independent of b, so we can analyze the compatibility of the system for any right-hand side vector b, even a symbolic one, without extra computations.
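A sketch for Example 11 below: compute W with the compact pivoting loop and read off the compatibility condition for the right-hand side (a, 3a, b):

```python
from fractions import Fraction

cols = [[2, 1, 0], [-1, 0, 1], [1, -1, -3]]   # columns of the system matrix
n = 3
V = [[Fraction(1 if i == j else 0) for i in range(n)] for j in range(n)]
ell = 0
for u in cols:
    t = [sum(a * b for a, b in zip(u, col)) for col in V]
    r = next((j for j in range(ell, n) if t[j] != 0), None)
    if r is None:
        continue
    V[ell], V[r] = V[r], V[ell]
    t[ell], t[r] = t[r], t[ell]
    V[ell] = [x / t[ell] for x in V[ell]]
    for j in range(n):
        if j != ell and t[j] != 0:
            V[j] = [vj - t[j] * vp for vj, vp in zip(V[j], V[ell])]
    ell += 1
w = V[ell]                         # single generator of W: (1, -2, 1)
# right-hand side (a, 3a, b): coefficient of a is w1 + 3 w2, of b is w3
print([str(x) for x in w], w[0] + 3 * w[1], w[2])   # condition -5a + b = 0
```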
Example 11 (compatibility of a linear system of equations). Suppose that we are interested in determining the conditions under which the system of equations

(5.25)   $\begin{aligned} 2x_1 - x_2 + x_3 &= a,\\ x_1 - x_3 &= 3a,\\ x_2 - 3x_3 &= b \end{aligned}$

is compatible. Then, using Algorithm 1, we get (see Table 5.7)

(5.26)   $\mathbf W = L\left\{(1, -2, 1)^T\right\},$

which implies the following compatibility condition:

(5.27)   $w_1^T (a, 3a, b)^T = (1, -2, 1)(a, 3a, b)^T = 0 \;\Rightarrow\; b - 5a = 0.$


Table 5.7
Pivoting process to determine the orthogonal complement of the linear subspace generated by the columns of A. The pivot column, when one exists, is marked with an asterisk.

Iteration 1
  u1 | v1*  v2   v3
   2 |  1    0    0
   1 |  0    1    0
   0 |  0    0    1
   t |  2    1    0

Iteration 2
  u2 | v1    v2*   v3
  -1 | 1/2  -1/2    0
   0 |  0     1     0
   1 |  0     0     1
   t | -1/2   1/2   1

Iteration 3 (no pivot)
  u3 | v1   v2   v3
   1 |  0   -1    1
  -1 |  1    2   -2
  -3 |  0    0    1
   t | -1   -3    0

The third column of the final tableau, $(1, -2, 1)^T$, generates the orthogonal complement W.

6. Conclusions. A pivoting transformation, based on the orthogonality concept, has been discussed, and some of its applications to solving common linear algebra problems have been given. The main advantage of the suggested method with respect to the Gauss elimination method is that the intermediate results arising in the solution process are easily interpretable. This leads to immediate methods for updating the solutions of several problems, such as calculating the inverse or the determinant of a matrix, solving systems of linear equations, etc., when small changes are made (changes in rows, columns, and/or variables). When the method is applied to inverting a matrix and calculating its determinant, not only is the inverse of the final matrix obtained, but also the inverses and determinants of all its block main diagonal matrices, without extra computations.
Acknowledgment. We thank the referee for the constructive comments that
allowed an important improvement of the paper.

REFERENCES

[1] R. L. Burden and J. D. Faires, Numerical Analysis, PWS, Boston, 1985.
[2] E. Castillo, A. Cobo, A. Fernández-Canteli, F. Jubete, and R. E. Pruneda, Updating inverses in matrix analysis of structures, Internat. J. Numer. Methods Engrg., 43 (1998), pp. 1479–1504.
[3] E. Castillo, A. Cobo, F. Jubete, and R. E. Pruneda, Orthogonal Sets and Polar Methods in Linear Algebra: Applications to Matrix Calculations, Systems of Equations and Inequalities, and Linear Programming, John Wiley, New York, 1999.
[4] J. W. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.
[5] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, 1996.
[6] P. E. Gill, G. H. Golub, W. Murray, and M. A. Saunders, Methods for modifying matrix factorizations, Math. Comp., 28 (1974), pp. 505–535.
[7] N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, 1996.
[8] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, UK, 1985.
