Iterative Methods for Sparse Linear Systems, 2nd Edition
Author: Yousef Saad
ISBN: 9780898715347, 0898715342
Edition: 2
File Details: PDF, 3.16 MB
Year: 2003
Language: English
Iterative Methods
for Sparse
Linear Systems
Yousef Saad
PREFACE
   Acknowledgments
   Suggestions for Teaching

2  DISCRETIZATION OF PDES
   2.1  Partial Differential Equations
        2.1.1  Elliptic Operators
        2.1.2  The Convection Diffusion Equation

3  SPARSE MATRICES
   3.1  Introduction
   3.2  Graph Representations
        3.2.1  Graphs and Adjacency Graphs
        3.2.2  Graphs of PDE Matrices
   3.3  Permutations and Reorderings
        3.3.1  Basic Concepts
        3.3.2  Relations with the Adjacency Graph
        3.3.3  Common Reorderings
        3.3.4  Irreducibility
   3.4  Storage Schemes
   3.5  Basic Sparse Matrix Operations
   3.6  Sparse Direct Solution Methods
   3.7  Test Problems
   Exercises and Notes

REFERENCES

INDEX
Iterative methods for solving general, large sparse linear systems have been gaining
popularity in many areas of scientific computing. Until recently, direct solution methods
were often preferred to iterative methods in real applications because of their robustness
and predictable behavior. However, a number of efficient iterative solvers were discovered
and the increased need for solving very large linear systems triggered a noticeable and
rapid shift toward iterative techniques in many applications.
This trend can be traced back to the 1960s and 1970s when two important develop-
ments revolutionized solution methods for large linear systems. First was the realization
that one can take advantage of “sparsity” to design special direct methods that can be
quite economical. Initiated by electrical engineers, these “direct sparse solution methods”
led to the development of reliable and efficient general-purpose direct solution software
codes over the next three decades. Second was the emergence of preconditioned conjugate
gradient-like methods for solving linear systems. It was found that the combination of pre-
conditioning and Krylov subspace iterations could provide efficient and simple “general-
purpose” procedures that could compete with direct solvers. Preconditioning involves ex-
ploiting ideas from sparse direct solvers. Gradually, iterative methods started to approach
the quality of direct solvers. In earlier times, iterative methods were often special-purpose
in nature. They were developed with certain applications in mind, and their efficiency relied
on many problem-dependent parameters.
Now, three-dimensional models are commonplace and iterative methods are al-
most mandatory. The memory and the computational requirements for solving three-
dimensional Partial Differential Equations, or two-dimensional ones involving many
degrees of freedom per point, may seriously challenge the most efficient direct solvers
available today. Also, iterative methods are gaining ground because they are easier to
implement efficiently on high-performance computers than direct methods.
My intention in writing this volume is to provide up-to-date coverage of iterative meth-
ods for solving large sparse linear systems. I focused the book on practical methods that
work for general sparse matrices rather than for any specific class of problems. It is indeed
becoming important to embrace applications not necessarily governed by Partial Differ-
ential Equations, as these applications are on the rise. Apart from two recent volumes by
Axelsson [15] and Hackbusch [116], few books on iterative methods have appeared since
the excellent ones by Varga [213] and later Young [232]. Since then, researchers and prac-
titioners have achieved remarkable progress in the development and use of effective iter-
ative methods. Unfortunately, fewer elegant results have been discovered since the 1950s
and 1960s. The field has moved in other directions. Methods have gained not only in effi-
ciency but also in robustness and in generality. The traditional techniques, which required
the estimation of problem-dependent acceleration parameters, have gradually given way
to the more robust, parameter-free preconditioned Krylov subspace methods emphasized
in this book.
Yousef Saad
This book can be used as a text to teach a graduate-level course on iterative methods for
linear systems. Selecting topics to teach depends on whether the course is taught in a
mathematics department or a computer science (or engineering) department, and whether
the course is over a semester or a quarter. Here are a few comments on the relevance of the
topics in each chapter.
For a graduate course in a mathematics department, much of the material in Chapter 1
should be known already. For non-mathematics majors most of the chapter must be covered
or reviewed to acquire a good background for later chapters. The important topics for
the rest of the book are in Sections 1.8.1, 1.8.3, 1.8.4, 1.9, and 1.11. Section 1.12 is best
treated at the beginning of Chapter 5. Chapter 2 is essentially independent from the rest
and could be skipped altogether in a quarter course. One lecture on finite differences and
the resulting matrices would be enough for a non-math course. Chapter 3 should make
the student familiar with some implementation issues associated with iterative solution
procedures for general sparse matrices. In a computer science or engineering department,
this can be very relevant. For mathematicians, a mention of the graph theory aspects of
sparse matrices and a few storage schemes may be sufficient. Most students at this level
should be familiar with a few of the elementary relaxation techniques covered in Chapter
4. The convergence theory can be skipped for non-math majors. These methods are now
often used as preconditioners and this may be the only motive for covering them.
Chapter 5 introduces key concepts and presents projection techniques in general terms.
Non-mathematicians may wish to skip Section 5.2.3. Otherwise, it is recommended to
start the theory section by going back to Section 1.12 on general definitions on projectors.
Chapters 6 and 7 represent the heart of the matter. It is recommended to describe the first
algorithms carefully and put emphasis on the fact that they generalize the one-dimensional
methods covered in Chapter 5. It is also important to stress the optimality properties of
those methods in Chapter 6 and the fact that these follow immediately from the properties
of projectors seen in Section 1.12. When covering the algorithms in Chapter 7, it is crucial
to point out the main differences between them and those seen in Chapter 6. The variants
such as CGS, BICGSTAB, and TFQMR can be covered in a short time, omitting details of
the algebraic derivations or covering only one of the three. The class of methods based on
the normal equation approach, i.e., Chapter 8, can be skipped in a math-oriented course,
especially in the case of a quarter system. For a semester course, selected topics may be
Sections 8.1, 8.2, and 8.4.
Currently, preconditioning is known to be the critical ingredient in the success of it-
erative methods in solving real-life problems. Therefore, at least some parts of Chapter 9
and Chapter 10 should be covered. Section 9.2 and (very briefly) 9.3 are recommended.
From Chapter 10, discuss the basic ideas in Sections 10.1 through 10.3. The rest could be
skipped in a quarter course.
Chapter 11 may be useful to present to computer science majors, but may be skimmed
or skipped in a mathematics or an engineering course. Parts of Chapter 12 could be taught
primarily to make the students aware of the importance of “alternative” preconditioners.
Suggested selections are: 12.2, 12.4, and 12.7.2 (for engineers). Chapter 13 presents an
important research area and is primarily geared to mathematics majors. Computer scientists
or engineers may prefer to cover this material in less detail.
To make these suggestions more specific, the following two tables are offered as sam-
ple course outlines. Numbers refer to sections in the text. A semester course represents
approximately 30 lectures of 75 minutes each whereas a quarter course is approximately
20 lectures of 75 minutes each. Different topics are selected for a mathematics course and
a non-mathematics course.
Semester course

Weeks     Mathematics                            Computer Science / Eng.
1-3       1.9 - 1.13; 2.1 - 2.5;                 1.1 - 1.6 (Read); 1.7 - 1.13, 2.1 - 2.2;
          3.1 - 3.3, 3.7; 4.1 - 4.3              3.1 - 3.7; 4.1 - 4.2
4-6       5.1 - 5.4; 6.1 - 6.3;                  5.1 - 5.2.1; 6.1 - 6.3;
          6.4 - 6.7 (Except 6.5.2)               6.4 - 6.5 (Except 6.5.5)
7-9       6.9 - 6.11; 7.1 - 7.3;                 6.7.1, 6.8 - 6.9, 6.11.3; 7.1 - 7.3;
          7.4.1; 7.4.2 - 7.4.3 (Read)            7.4.1; 7.4.2 - 7.4.3 (Read)
10-12     8.1, 8.2, 8.4; 9.1 - 9.3;              8.1 - 8.3; 9.1 - 9.3;
          10.1 - 10.3; 10.5.1 - 10.5.6           10.1 - 10.4; 10.5.1 - 10.5.4
13-15     10.6; 12.2 - 12.4;                     11.1 - 11.4 (Read); 11.5 - 11.6;
          13.1 - 13.6                            12.1 - 12.2; 12.4 - 12.7
Quarter course

Weeks     Mathematics                            Computer Science / Eng.
1-2       1.9 - 1.13, 3.1 - 3.2; 4.1 - 4.3       1.1 - 1.6 (Read); 3.1 - 3.7; 4.1
3-4       5.1 - 5.4; 6.1 - 6.4                   5.1 - 5.2.1; 6.1 - 6.3
5-6       6.4 - 6.7 (Except 6.5.2);              6.4 - 6.5 (Except 6.5.5);
          6.11, 7.1 - 7.3                        6.7.1, 6.11.3, 7.1 - 7.3
7-8       7.4.1; 7.4.2 - 7.4.3 (Read);           7.4.1; 7.4.2 - 7.4.3 (Read);
          9.1 - 9.3; 10.1 - 10.3                 9.1 - 9.3; 10.1 - 10.3
9-10      10.6; 12.2 - 12.4; 13.1 - 13.4         11.1 - 11.4 (Read); 11.5 - 11.6;
                                                 12.1 - 12.2; 12.4 - 12.7
1  BACKGROUND IN LINEAR ALGEBRA

1.1  MATRICES
For the sake of generality, all vector spaces considered in this chapter are complex, unless
otherwise stated. A complex n × m matrix A is an n × m array of complex numbers

    a_{ij},  i = 1, ..., n,  j = 1, ..., m.

The set of all n × m matrices is a complex vector space denoted by C^{n×m}. The main
operations with matrices are the following:

Addition: C = A + B, where A, B, and C are matrices of size n × m and c_{ij} = a_{ij} + b_{ij}.

Multiplication by a scalar: C = \alpha A, where c_{ij} = \alpha a_{ij}.

Multiplication by another matrix: C = A B, where A \in C^{n×m}, B \in C^{m×p}, C \in C^{n×p}, and

    c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}.

Sometimes, a notation with column vectors and row vectors is used. The column vector a_{*j}
is the vector consisting of the j-th column of A,

    a_{*j} = (a_{1j}, a_{2j}, ..., a_{nj})^T.

Similarly, the notation a_{i*} denotes the i-th row of the matrix A.

The transpose of a matrix A in C^{n×m} is a matrix C in C^{m×n} whose elements are
defined by c_{ij} = a_{ji}, i = 1, ..., m, j = 1, ..., n. It is denoted by A^T. It is often more
relevant to use the transpose conjugate matrix denoted by A^H and defined by

    A^H = \bar{A}^T = \overline{A^T},

in which the bar denotes the (element-wise) complex conjugation.

Matrices are strongly related to linear mappings between vector spaces of finite di-
mension. This is because they represent these mappings with respect to two given bases:
one for the initial vector space and the other for the image vector space, or range of A.
1.2  SQUARE MATRICES AND EIGENVALUES

A matrix is square if it has the same number of columns and rows, i.e., if n = m. An
important square matrix is the identity matrix

    I = {\delta_{ij}}_{i,j=1,...,n},

where \delta_{ij} is the Kronecker symbol. The identity matrix satisfies the equality A I = I A = A
for every matrix A of size n. The inverse of a matrix, when it exists, is a matrix C such that
C A = A C = I. The inverse of A is denoted by A^{-1}.

A complex scalar \lambda is called an eigenvalue of the square matrix A if a nonzero vector
u of C^n exists such that A u = \lambda u. The vector u is called an eigenvector
of A associated with \lambda. The set of all the eigenvalues of A is called the spectrum of A and
is denoted by \sigma(A).

A scalar \lambda is an eigenvalue of A if and only if det(A - \lambda I) = 0. That is true
if and only if (iff thereafter) \lambda is a root of the characteristic polynomial. In particular, there
are at most n distinct eigenvalues.

It is clear that a matrix is singular if and only if it admits zero as an eigenvalue. A well
known result in linear algebra is stated in the following proposition.

PROPOSITION 1.1  A matrix A is nonsingular if and only if it admits an inverse.

Thus, the determinant of a matrix determines whether or not the matrix admits an inverse.

The maximum modulus of the eigenvalues is called the spectral radius and is denoted by \rho(A):

    \rho(A) = \max_{\lambda \in \sigma(A)} |\lambda|.

The trace of a matrix is equal to the sum of all its diagonal elements,

    tr(A) = \sum_{i=1}^{n} a_{ii}.

It can be easily shown that the trace of A is also equal to the sum of the eigenvalues of A
counted with their multiplicities as roots of the characteristic polynomial.

PROPOSITION 1.2  If \lambda is an eigenvalue of A, then \bar{\lambda} is an eigenvalue of A^H. An
eigenvector v of A^H associated with the eigenvalue \bar{\lambda} is called a left eigenvector of A.

When a distinction is necessary, an eigenvector of A is often called a right eigenvector.
Therefore, the eigenvalue \lambda as well as the right and left eigenvectors, u and v, satisfy the
relations

    A u = \lambda u,    v^H A = \lambda v^H,

or, equivalently,

    u^H A^H = \bar{\lambda} u^H,    A^H v = \bar{\lambda} v.
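The following small NumPy sketch illustrates the notions just defined (spectrum, spectral radius, and the fact that the trace equals the sum of the eigenvalues). It is not part of the book; the matrix is an arbitrary example chosen for illustration.

```python
# Spectrum, spectral radius, and trace of a small example matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals = np.linalg.eigvals(A)                  # the spectrum of A
rho = max(abs(eigvals))                         # spectral radius rho(A)
print(eigvals, rho)
print(np.isclose(np.trace(A), eigvals.sum()))   # tr(A) = sum of eigenvalues
```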
1.3  TYPES OF MATRICES

The choice of a method for solving linear systems will often depend on the structure of
the matrix A. One of the most important properties of matrices is symmetry, because of
its impact on the eigenstructure of A. A number of other classes of matrices also have
particular eigenstructures. The most important ones are listed below:

Symmetric matrices: A^T = A.

Hermitian matrices: A^H = A.

Skew-symmetric matrices: A^T = -A.

Skew-Hermitian matrices: A^H = -A.

Normal matrices: A^H A = A A^H.

Nonnegative matrices: a_{ij} \ge 0, i, j = 1, ..., n (similar definition for nonpositive,
positive, and negative matrices).

Unitary matrices: Q^H Q = I.

It is worth noting that a unitary matrix Q is a matrix whose inverse is its transpose conjugate
Q^H, since

    Q^H Q = I  implies  Q^{-1} = Q^H.

Some matrices have particular structures that are often convenient for computational
purposes; the most common of these are listed below:

Diagonal matrices: a_{ij} = 0 for j \ne i. Notation: A = diag(a_{11}, a_{22}, ..., a_{nn}).

Upper triangular matrices: a_{ij} = 0 for i > j.

Lower triangular matrices: a_{ij} = 0 for i < j.

Upper bidiagonal matrices: a_{ij} = 0 for j \ne i and j \ne i + 1.

Lower bidiagonal matrices: a_{ij} = 0 for j \ne i and j \ne i - 1.

Tridiagonal matrices: a_{ij} = 0 for any pair i, j such that |j - i| > 1.

Banded matrices: a_{ij} \ne 0 only if i - m_l \le j \le i + m_u, where m_l and m_u are two
nonnegative integers. The number m_l + m_u + 1 is called the bandwidth of A.

Upper Hessenberg matrices: a_{ij} = 0 for any pair i, j such that i > j + 1. Lower
Hessenberg matrices can be defined similarly.

Outer product matrices: A = u v^H, where both u and v are vectors.

Permutation matrices: the columns of A are a permutation of the columns of the
identity matrix.

Block diagonal matrices: generalizes the diagonal matrix by replacing each diagonal
entry by a matrix.
1.4  VECTOR INNER PRODUCTS AND NORMS

An inner product on a (complex) vector space X is any mapping s from X × X into C,

    x \in X, y \in X  \rightarrow  s(x, y) \in C,

which satisfies the following conditions:

1. s(x, y) is linear with respect to x, i.e.,

    s(\lambda_1 x_1 + \lambda_2 x_2, y) = \lambda_1 s(x_1, y) + \lambda_2 s(x_2, y),
    for all x_1, x_2 in X and all \lambda_1, \lambda_2 in C.

2. s(x, y) is Hermitian, i.e.,

    s(y, x) = \overline{s(x, y)},  for all x, y in X.

3. s(x, y) is positive definite, i.e.,

    s(x, x) > 0,  for all x \ne 0.

Note that (2) implies that s(x, x) is real and therefore, (3) adds the constraint that
s(x, x) must also be positive for any nonzero x. For any x and y,

    s(x, 0) = s(x, 0 \cdot y) = 0 \cdot s(x, y) = 0.

Similarly, s(0, y) = 0 for any y. Hence, s(0, y) = s(x, 0) = 0 for any x and y. In particular
the condition (3) can be rewritten as

    s(x, x) \ge 0,   and   s(x, x) = 0 iff x = 0,

as can be readily shown. A useful relation satisfied by any inner product is the so-called
Cauchy-Schwartz inequality:

    |s(x, y)|^2 \le s(x, x) \, s(y, y).

In the particular case X = C^n, a canonical inner product is the Euclidean inner product.
The Euclidean inner product of two vectors x = (x_i)_{i=1,...,n} and y = (y_i)_{i=1,...,n}
of C^n is defined by

    (x, y) = \sum_{i=1}^{n} x_i \bar{y}_i,                                       (1.3)

which is often rewritten in matrix notation as

    (x, y) = y^H x.

It is easy to verify that this mapping does indeed satisfy the three conditions required for
inner products, listed above. A fundamental property of the Euclidean inner product in
matrix computations is the simple relation

    (A x, y) = (x, A^H y),  for all x, y in C^n.

PROPOSITION 1.3  Unitary matrices preserve the Euclidean inner product, i.e.,

    (Q x, Q y) = (x, y)

for any unitary matrix Q and any vectors x and y.

Proof. Indeed, (Q x, Q y) = (x, Q^H Q y) = (x, y).

A vector norm on a vector space X is a real-valued function x \rightarrow \|x\| on X, which
satisfies the following three conditions:

1. \|x\| \ge 0, and \|x\| = 0 iff x = 0.
2. \|\alpha x\| = |\alpha| \|x\|, for all x in X and all \alpha in C.
3. \|x + y\| \le \|x\| + \|y\|, for all x, y in X.

For the particular case when X = C^n, we can associate with the inner product (1.3)
the Euclidean norm of a complex vector defined by

    \|x\|_2 = (x, x)^{1/2}.

It follows from Proposition 1.3 that a unitary matrix preserves the Euclidean norm metric,
i.e.,

    \|Q x\|_2 = \|x\|_2,  for all x.

The most commonly used vector norms in numerical linear algebra are special cases of the
Hölder norms

    \|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}.

The cases p = 1, p = 2, and p = \infty lead to the most important norms in practice,

    \|x\|_1 = |x_1| + |x_2| + ... + |x_n|,
    \|x\|_2 = (|x_1|^2 + |x_2|^2 + ... + |x_n|^2)^{1/2},
    \|x\|_\infty = \max_{i=1,...,n} |x_i|.
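As a quick numerical illustration of the definitions above, the following sketch checks the Cauchy-Schwartz inequality and the relation between the 2-norm and the Euclidean inner product. The vectors are arbitrary examples, not from the book.

```python
# Vector norms and the Cauchy-Schwartz inequality on two arbitrary complex vectors.
import numpy as np

x = np.array([1 + 2j, -3.0, 0.5j])
y = np.array([2.0, 1j, -1.0])

inner = np.vdot(y, x)                   # Euclidean inner product (x, y) = y^H x
print(abs(inner) ** 2 <= np.vdot(x, x).real * np.vdot(y, y).real)   # Cauchy-Schwartz

print(np.linalg.norm(x, 1), np.linalg.norm(x, 2), np.linalg.norm(x, np.inf))
print(np.isclose(np.linalg.norm(x, 2) ** 2, np.vdot(x, x).real))    # ||x||_2^2 = (x, x)
```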
1.5  MATRIX NORMS

For a general matrix A in C^{n×m}, we define the following special set of norms

    \|A\|_{pq} = \max_{x \in C^m, x \ne 0} \frac{\|A x\|_p}{\|x\|_q}.             (1.7)

The norm \|\cdot\|_{pq} is induced by the two norms \|\cdot\|_p and \|\cdot\|_q. These norms satisfy the usual
properties of norms, i.e.,

    \|A\| \ge 0, and \|A\| = 0 iff A = 0,
    \|\alpha A\| = |\alpha| \|A\|,
    \|A + B\| \le \|A\| + \|B\|.

The most important cases are again those associated with p, q = 1, 2, \infty. The case q = p
is of particular interest and the associated norm \|\cdot\|_{pq} is simply denoted by \|\cdot\|_p and
called a "p-norm." A fundamental property of a p-norm is that

    \|A B\|_p \le \|A\|_p \|B\|_p,

an immediate consequence of the definition (1.7). Matrix norms that satisfy the above
property are sometimes called consistent. A result of consistency is that for any square
matrix A,

    \|A^k\|_p \le \|A\|_p^k.

In particular the matrix A^k converges to zero if any of its p-norms is less than 1.

The Frobenius norm of a matrix is defined by

    \|A\|_F = \left( \sum_{j=1}^{m} \sum_{i=1}^{n} |a_{ij}|^2 \right)^{1/2}.

This can be viewed as the 2-norm of the column (or row) vector in C^{nm} consisting of all the
columns (respectively rows) of A listed from 1 to m (respectively 1 to n). It can be shown
that this norm is also consistent, in spite of the fact that it is not induced by a pair of vector
norms, i.e., it is not derived from a formula of the form (1.7); see Exercise 5. However, it
does not satisfy some of the other properties of the p-norms. For example, the Frobenius
norm of the identity matrix is not equal to one. To avoid these difficulties, we will only use
the term matrix norm for a norm that is induced by two norms as in the definition (1.7).
Thus, we will not consider the Frobenius norm to be a proper matrix norm, according to
our conventions, even though it is consistent.

The following equalities satisfied by the matrix norms defined above lead to alternative
definitions that are often easier to work with:

    \|A\|_1 = \max_{j} \sum_{i=1}^{n} |a_{ij}|,
    \|A\|_\infty = \max_{i} \sum_{j=1}^{m} |a_{ij}|,
    \|A\|_2 = [\rho(A^H A)]^{1/2} = [\rho(A A^H)]^{1/2},
    \|A\|_F = [tr(A^H A)]^{1/2} = [tr(A A^H)]^{1/2}.

It should be noted that the spectral radius \rho(A) is not a norm
in general. For example, the first property of norms is not satisfied, since for

    A = [ 0  1 ]
        [ 0  0 ],

we have \rho(A) = 0 while A \ne 0. Also, the triangle inequality is not satisfied for the pair A,
and B = A^T where A is defined above. Indeed,

    \rho(A + B) = 1   while   \rho(A) + \rho(B) = 0.
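A short numerical comparison of the induced matrix norms and the Frobenius norm may help fix ideas. This is an illustrative sketch on an arbitrary matrix, not the book's code; the tolerance added in the consistency check is just to absorb rounding.

```python
# Induced matrix norms, the Frobenius norm, and a consistency check.
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

n1   = np.linalg.norm(A, 1)            # max column sum
ninf = np.linalg.norm(A, np.inf)       # max row sum
n2   = np.linalg.norm(A, 2)            # largest singular value
nF   = np.linalg.norm(A, 'fro')
print(n1, ninf, n2, nF)

print(np.isclose(n2, np.sqrt(max(abs(np.linalg.eigvals(A.T @ A))))))  # ||A||_2^2 = rho(A^H A)
print(np.linalg.norm(A @ A, 2) <= n2 * n2 + 1e-12)                    # ||A B|| <= ||A|| ||B||
```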
1.6  SUBSPACES, RANGE, AND KERNEL

A subspace of C^n is a subset of C^n that is also a complex vector space. The set of all
linear combinations of a set of vectors G = {a_1, a_2, ..., a_q} of C^n is a vector subspace
called the linear span of G,

    span{G} = span{a_1, a_2, ..., a_q}
            = { z \in C^n | z = \sum_{i=1}^{q} \alpha_i a_i, \alpha_i \in C }.

If the a_i's are linearly independent, then each vector of span{G} admits a unique expres-
sion as a linear combination of the a_i's. The set G is then called a basis of the subspace
span{G}.

Given two vector subspaces S_1 and S_2, their sum S is a subspace defined as the set of
all vectors that are equal to the sum of a vector of S_1 and a vector of S_2. The intersection
of two subspaces is also a subspace. If the intersection of S_1 and S_2 is reduced to {0}, then
the sum of S_1 and S_2 is called their direct sum and is denoted by S = S_1 \oplus S_2. When S
is equal to C^n, then every vector x of C^n can be written in a unique way as the sum of
an element x_1 of S_1 and an element x_2 of S_2. The transformation P that maps x into x_1
is a linear transformation that is idempotent, i.e., such that P^2 = P. It is called a projector
onto S_1 along S_2.

Two important subspaces that are associated with a matrix A of C^{n×m} are its range,
defined by

    Ran(A) = { A x | x \in C^m },

and its kernel or null space

    Null(A) = { x \in C^m | A x = 0 }.

The range of A is clearly equal to the linear span of its columns. The rank of a matrix
is equal to the dimension of the range of A, i.e., to the number of linearly independent
columns. This column rank is equal to the row rank, the number of linearly independent
rows of A. A matrix in C^{n×m} is of full rank when its rank is equal to the smallest of m
and n.

A subspace S is said to be invariant under a (square) matrix A whenever A S \subseteq S. In
particular for any eigenvalue \lambda of A the subspace Null(A - \lambda I) is invariant under A. The
subspace Null(A - \lambda I) is called the eigenspace associated with \lambda and consists of all the
eigenvectors of A associated with \lambda, in addition to the zero-vector.
1.7  ORTHOGONAL VECTORS AND SUBSPACES

A set of vectors G = {a_1, a_2, ..., a_r} is said to be orthogonal if (a_i, a_j) = 0 when i \ne j.
It is orthonormal if, in addition, every vector of G has a 2-norm equal to unity. Every
subspace admits an orthonormal basis, which can be obtained from an arbitrary basis by
the Gram-Schmidt process which we now describe. Given a set of linearly independent
vectors {x_1, x_2, ..., x_r}, first normalize the vector x_1, which means divide it by its 2-
norm, to obtain the scaled vector q_1 of norm unity. Then x_2 is orthogonalized against the
vector q_1 by subtracting from x_2 a multiple of q_1 to make the resulting vector orthogonal
to q_1, i.e.,

    x_2 \leftarrow x_2 - (x_2, q_1) q_1.

The resulting vector is again normalized to yield the second vector q_2. The j-th step of
the Gram-Schmidt process consists of orthogonalizing the vector x_j against all previous
vectors q_i.

ALGORITHM 1.1  Gram-Schmidt
1.  Compute r_{11} := \|x_1\|_2. If r_{11} = 0 Stop, else compute q_1 := x_1 / r_{11}.
2.  For j = 2, ..., r, Do:
3.     Compute r_{ij} := (x_j, q_i), for i = 1, 2, ..., j-1
4.     \hat{q} := x_j - \sum_{i=1}^{j-1} r_{ij} q_i
5.     r_{jj} := \|\hat{q}\|_2
6.     If r_{jj} = 0 then Stop, else q_j := \hat{q} / r_{jj}
7.  EndDo

It is easy to prove that the above algorithm will not break down, i.e., all r steps will
be completed if and only if the set of vectors x_1, x_2, ..., x_r is linearly independent. From
lines 4 and 5, it is clear that at every step of the algorithm the following relation holds:

    x_j = \sum_{i=1}^{j} r_{ij} q_i.

If X = [x_1, x_2, ..., x_r], Q = [q_1, q_2, ..., q_r], and if R denotes the r × r upper triangular
matrix whose nonzero elements are the r_{ij} defined in the algorithm, then the above relation
can be written as

    X = Q R.

This is called the QR decomposition of the n × r matrix X. From what was said above, the
QR decomposition of a matrix exists whenever the column vectors of X form a linearly
independent set of vectors.

The above algorithm is the standard Gram-Schmidt process. There are alternative for-
mulations of the algorithm which have better numerical properties. The best known of
these is the Modified Gram-Schmidt (MGS) algorithm.

ALGORITHM 1.2  Modified Gram-Schmidt
1.  Define r_{11} := \|x_1\|_2. If r_{11} = 0 Stop, else q_1 := x_1 / r_{11}.
2.  For j = 2, ..., r, Do:
3.     Define \hat{q} := x_j
4.     For i = 1, ..., j-1, Do:
5.        r_{ij} := (\hat{q}, q_i)
6.        \hat{q} := \hat{q} - r_{ij} q_i
7.     EndDo
8.     Compute r_{jj} := \|\hat{q}\|_2
9.     If r_{jj} = 0 then Stop, else q_j := \hat{q} / r_{jj}
10. EndDo
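The following NumPy sketch implements the Modified Gram-Schmidt loop above. It is an illustrative translation, not the book's code; the breakdown test uses exact equality with zero just as in the algorithm, whereas production code would use a tolerance.

```python
# A minimal Modified Gram-Schmidt sketch: X = Q R with Q having orthonormal columns.
import numpy as np

def modified_gram_schmidt(X):
    X = np.array(X, dtype=complex)
    n, r = X.shape
    Q = np.zeros((n, r), dtype=complex)
    R = np.zeros((r, r), dtype=complex)
    for j in range(r):
        qhat = X[:, j].copy()
        for i in range(j):                       # orthogonalize against previous q_i
            R[i, j] = np.vdot(Q[:, i], qhat)     # r_ij := (q_hat, q_i) = q_i^H q_hat
            qhat = qhat - R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(qhat)
        if R[j, j] == 0:                         # breakdown: columns linearly dependent
            raise ValueError("linearly dependent columns")
        Q[:, j] = qhat / R[j, j]
    return Q, R

X = np.random.rand(6, 3)
Q, R = modified_gram_schmidt(X)
print(np.allclose(Q @ R, X), np.allclose(Q.conj().T @ Q, np.eye(3)))
```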
Yet another alternative for orthogonalizing a sequence of vectors is the Householder
algorithm. This technique uses Householder reflectors, i.e., matrices of the form

    P = I - 2 w w^T,                                                  (1.15)

in which w is a vector of 2-norm unity. Geometrically, the vector P x represents the mirror
image of x with respect to the hyperplane span{w}^\perp.

To describe the Householder orthogonalization process, the problem can be formulated
as that of finding a QR factorization of a given n × m matrix X. For any vector x, the vector w
for the Householder transformation (1.15) is selected in such a way that

    P x = \alpha e_1,                                                 (1.16)

where \alpha is a scalar. Writing (I - 2 w w^T) x = \alpha e_1 yields

    2 (w^T x) w = x - \alpha e_1,

which shows that w must be a multiple of the vector x - \alpha e_1. For (1.16) to be satisfied,
we must impose the condition

    2 (x - \alpha e_1)^T x = \|x - \alpha e_1\|_2^2,

which gives 2(\|x\|_2^2 - \alpha \xi_1) = \|x\|_2^2 - 2 \alpha \xi_1 + \alpha^2, where \xi_1 is the first component
of the vector x. Therefore, it is necessary that

    \alpha = \pm \|x\|_2.

In order to avoid that the resulting vector w be small, it is customary to take

    \alpha = - sign(\xi_1) \|x\|_2,

which yields

    w = \frac{x + sign(\xi_1) \|x\|_2 e_1}{\|x + sign(\xi_1) \|x\|_2 e_1\|_2}.

Given an n × m matrix, its first column can be transformed to a multiple of the column
e_1 by premultiplying it by a Householder matrix P_1,

    X_1 \equiv P_1 X,    X_1 e_1 = \alpha e_1.

Assume, inductively, that the matrix X has been transformed in k - 1 successive steps into
the partially upper triangular form

    X_k \equiv P_{k-1} \cdots P_1 X,

in which the first k - 1 columns have zeros below the diagonal.
This matrix is upper triangular up to column number k - 1. To advance by one step, it must
be transformed into one which is upper triangular up to the k-th column, leaving the previous
columns in the same form. To leave the first k - 1 columns unchanged, select a vector w
which has zeros in positions 1 through k - 1. So the next Householder reflector matrix is
defined as

    P_k = I - 2 w_k w_k^T,

where the components of the vector w_k are given by

    w_i = 0                          if i < k,
    w_k = \beta (x_{kk} + \alpha),
    w_i = \beta x_{ik}               if i > k,                        (1.19)

with

    \alpha = sign(x_{kk}) \left( \sum_{i=k}^{n} x_{ik}^2 \right)^{1/2},            (1.20)

and where \beta is the normalization factor

    \beta = [ 2 \alpha (\alpha + x_{kk}) ]^{-1/2},                                  (1.21)

which makes w_k a unit vector.

We note in passing that the premultiplication of a matrix X by a Householder transform
requires only a rank-one update since

    (I - 2 w w^T) X = X - w v^T,   where   v = 2 X^T w.

Therefore, the Householder matrices need not, and should not, be explicitly formed.

Assume now that m - 1 Householder transforms have been applied to a certain matrix X
of dimension n × m, to reduce it into the upper triangular form

    X_m \equiv P_{m-1} P_{m-2} \cdots P_1 X = [ R ; O ],              (1.22)

in which R is an m × m upper triangular matrix and O an (n - m) × m zero block.

Recall that our initial goal was to obtain a QR factorization of X. We now wish to recover
the Q and R matrices from the P_k's and the above matrix. If we denote by P the product
of the P_i on the left-side of (1.22), then (1.22) becomes

    P X = [ R ; O ],                                                  (1.23)

in which R is an m × m upper triangular matrix, and O is an (n - m) × m zero block.
Since P is unitary, its inverse is equal to its transpose and, as a result,

    X = P^T [ R ; O ] = P_1 P_2 \cdots P_{m-1} [ R ; O ].

If E_m is the matrix of size n × m which consists of the first m columns of the identity
matrix, then the above equality translates into

    X = P^T E_m R.

The matrix Q = P^T E_m represents the m first columns of P^T. Since

    Q^T Q = E_m^T P P^T E_m = I,

the columns of Q are orthonormal, and (1.22) and (1.23) show that

    X = Q R

is the desired QR factorization of X.

ALGORITHM 1.3  Householder Orthogonalization
1.  Define X = [x_1, ..., x_m]
2.  For k = 1, ..., m, Do:
3.     If k > 1 compute r_k := P_{k-1} P_{k-2} \cdots P_1 x_k
4.     Compute w_k using (1.19), (1.20), (1.21)
5.     Compute r_k := P_k r_k with P_k = I - 2 w_k w_k^T
6.     Compute q_k = P_1 P_2 \cdots P_k e_k
7.  EndDo

Note that line 6 can be omitted since the q_k are not needed in the execution of the
next steps. It must be executed only when the matrix Q is needed at the completion of
the algorithm. Also, the operation in line 5 consists only of zeroing the components
k + 1, ..., n and updating the k-th component of r_k. In practice, a work vector can be used for r_k
and its nonzero components after this step can be saved into an upper triangular matrix.
Since the components 1 through k - 1 of the vector w_k are zero, the upper triangular matrix R
can be saved in those zero locations which would otherwise be unused.
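The sketch below is one possible NumPy realization of Householder orthogonalization in the spirit of Algorithm 1.3. It is illustrative only: it accumulates Q explicitly (which, as noted above, is unnecessary in practice), handles only real full-rank input, and uses the rank-one update form of the reflector rather than forming P_k.

```python
# Householder QR sketch: X = Q R with orthonormal Q and upper triangular R.
import numpy as np

def householder_qr(X):
    X = np.array(X, dtype=float)
    n, m = X.shape
    R = X.copy()
    Q = np.eye(n)
    for k in range(m):
        x = R[k:, k]
        alpha = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else -np.linalg.norm(x)
        w = x.copy()
        w[0] -= alpha                    # w ~ x - alpha e_1, sign chosen to avoid cancellation
        nw = np.linalg.norm(w)
        if nw == 0:
            continue
        w /= nw
        # rank-one updates: apply (I - 2 w w^T) to the trailing block and accumulate Q
        R[k:, k:] -= 2.0 * np.outer(w, w @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ w, w)
    return Q[:, :m], np.triu(R[:m, :])

X = np.random.rand(6, 3)
Q, R = householder_qr(X)
print(np.allclose(Q @ R, X), np.allclose(Q.T @ Q, np.eye(3)))
```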
1.8  CANONICAL FORMS OF MATRICES

This section discusses the reduction of square matrices into matrices that have simpler
forms, such as diagonal, bidiagonal, or triangular. Reduction means a transformation that
preserves the eigenvalues of a matrix.

DEFINITION  Two matrices A and B are said to be similar if there is a nonsingular
matrix X such that

    A = X B X^{-1}.

The mapping B \rightarrow A is called a similarity transformation.

An eigenvalue of A is simple if it is a root of multiplicity one of the characteristic
polynomial; otherwise, the eigenvalue is multiple.

The geometric multiplicity \gamma of an eigenvalue \lambda of A is the maximum number of
independent eigenvectors associated with it. In other words, the geometric multi-
plicity is the dimension of the eigenspace Null(A - \lambda I).

A matrix is derogatory if the geometric multiplicity of at least one of its eigenvalues
is larger than one.

An eigenvalue is semisimple if its algebraic multiplicity is equal to its geometric
multiplicity. An eigenvalue that is not semisimple is called defective.

Often, \lambda_1, \lambda_2, ..., \lambda_p (p \le n) are used to denote the distinct eigenvalues of A. It is
easy to show that the characteristic polynomials of two similar matrices are identical; see
Exercise 9. Therefore, the eigenvalues of two similar matrices are equal and so are their
algebraic multiplicities. Moreover, if v is an eigenvector of B, then X v is an eigenvector
of A and, conversely, if y is an eigenvector of A then X^{-1} y is an eigenvector of B. As
a result the number of independent eigenvectors associated with a given eigenvalue is the
same for two similar matrices, i.e., their geometric multiplicity is also the same.

1.8.1  Reduction to the Diagonal Form

The simplest form in which a matrix can be reduced is undoubtedly the diagonal form.
Unfortunately, this reduction is not always possible. A matrix that can be reduced to the
diagonal form is called diagonalizable. The following theorem characterizes such matrices.

THEOREM 1.1  A matrix of dimension n is diagonalizable if and only if it has n line-
arly independent eigenvectors.

A matrix is diagonalizable if and only if there exists a nonsingular matrix X
and a diagonal matrix D such that A = X D X^{-1}, or equivalently X^{-1} A X = D, where D is
a diagonal matrix. This is equivalent to saying that n linearly independent vectors exist,
the n column-vectors of X, such that A x_i = d_i x_i. Each of these column-vectors is an
eigenvector of A.

A matrix that is diagonalizable has only semisimple eigenvalues. Conversely, if all the
eigenvalues of a matrix A are semisimple, then A has n eigenvectors. It can be easily
shown that these eigenvectors are linearly independent; see Exercise 2. As a result, we
have the following proposition.

PROPOSITION 1.4  A matrix is diagonalizable if and only if all its eigenvalues are
semisimple.

Since every simple eigenvalue is semisimple, an immediate corollary of the above result
is: When A has n distinct eigenvalues, then it is diagonalizable.

1.8.2  The Jordan Canonical Form

From the theoretical viewpoint, one of the most important canonical forms of matrices is
the well known Jordan form. A full development of the steps leading to the Jordan form
is beyond the scope of this book. Only the main theorem is stated. Details, including the
proof, can be found in standard books of linear algebra such as [117]. In the following, m_i
refers to the algebraic multiplicity of the individual eigenvalue \lambda_i and l_i is the index of the
eigenvalue, i.e., the smallest integer for which Null(A - \lambda_i I)^{l_i + 1} = Null(A - \lambda_i I)^{l_i}.

THEOREM 1.2  Any matrix A can be reduced to a block diagonal matrix consisting of p
diagonal blocks, each associated with a distinct eigenvalue \lambda_i. Each of these diagonal
blocks has itself a block diagonal structure consisting of \gamma_i sub-blocks, where \gamma_i is the
geometric multiplicity of the eigenvalue \lambda_i. Each of the sub-blocks, referred to as a Jordan
block, is an upper bidiagonal matrix of size not exceeding l_i \le m_i, with the constant \lambda_i
on the diagonal and the constant one on the super diagonal.

The i-th diagonal block, i = 1, ..., p, is known as the i-th Jordan submatrix. Thus,

    X^{-1} A X = J = diag(J_1, J_2, ..., J_p),

where each J_i is associated with \lambda_i and is of size m_i, the algebraic multiplicity of \lambda_i. It has
itself the following structure,

    J_i = diag(J_{i1}, J_{i2}, ..., J_{i \gamma_i}),

in which each J_{ik} is an upper bidiagonal block with \lambda_i on the diagonal and ones on the
super diagonal. Each of the blocks J_{ik} corresponds to a different eigenvector associated
with the eigenvalue \lambda_i.

1.8.3  The Schur Canonical Form

Here, it will be shown that any matrix is unitarily similar to an upper triangular matrix. The
only result needed to prove the following theorem is that any vector of 2-norm one can be
completed by n - 1 additional vectors to form an orthonormal basis of C^n.

THEOREM 1.3  For any square matrix A, there exists a unitary matrix Q such that

    Q^H A Q = R

is upper triangular.

Proof. The proof is by induction over the dimension n. The result is trivial for n = 1.
Assume that it is true for n - 1 and consider any matrix A of size n. The matrix admits
at least one eigenvector u that is associated with an eigenvalue \lambda. Also assume without
loss of generality that \|u\|_2 = 1. First, complete the vector u into an orthonormal set, i.e.,
find an n × (n - 1) matrix V such that the n × n matrix U = [u, V] is unitary. Then
A U = [\lambda u, A V] and hence,

    U^H A U = [ u^H ; V^H ] [\lambda u, A V] = [ \lambda   u^H A V ;  0   V^H A V ].

Now use the induction hypothesis for the (n - 1) × (n - 1) matrix B = V^H A V: There
exists an (n - 1) × (n - 1) unitary matrix Q_1 such that Q_1^H B Q_1 is upper triangular.
Define the n × n matrix

    \hat{Q}_1 = [ 1  0 ;  0  Q_1 ]

and multiply the above equality by \hat{Q}_1^H from the left and by \hat{Q}_1 from the right. The
resulting matrix is upper triangular, which proves the result for A with the unitary matrix
Q = U \hat{Q}_1.

A simpler proof that uses the Jordan canonical form and the QR decomposition is the sub-
ject of Exercise 7. Since the matrix R is triangular and similar to A, its diagonal elements
are equal to the eigenvalues of A ordered in a certain manner. In fact, it is easy to extend
the proof of the theorem to show that this factorization can be obtained with any order for
the eigenvalues. Despite its simplicity, the above theorem has far-reaching consequences,
some of which will be examined in the next section.

It is important to note that for any k \le n, the subspace spanned by the first k columns
of Q is invariant under A. Indeed, the relation A Q = Q R implies that for 1 \le j \le k, we
have

    A q_j = \sum_{i=1}^{j} r_{ij} q_i.

If we let Q_k = [q_1, q_2, ..., q_k] and if R_k is the principal leading submatrix of dimension k
of R, the above relation can be rewritten as

    A Q_k = Q_k R_k,

which is known as the partial Schur decomposition of A. The simplest case of this decom-
position is when k = 1, in which case q_1 is an eigenvector. The vectors q_i are usually called
Schur vectors. Schur vectors are not unique and depend, in particular, on the order chosen
for the eigenvalues.

A slight variation on the Schur canonical form is the quasi-Schur form, also called the
real Schur form. Here, diagonal blocks of size 2 × 2 are allowed in the upper triangular
matrix R. The reason for this is to avoid complex arithmetic when the original matrix is
real. A 2 × 2 block is associated with each complex conjugate pair of eigenvalues of the
matrix.
EXAMPLE  Consider a real 3 × 3 matrix that has one real eigenvalue and a pair of
complex conjugate eigenvalues. Its standard (complex) Schur form consists of a unitary
matrix Q and an upper triangular matrix R, both with complex entries. Its quasi-Schur
form, by contrast, consists of a pair of real matrices: an orthogonal Q and a block upper
triangular R in which a 2 × 2 diagonal block carries the complex conjugate pair of
eigenvalues, so that complex arithmetic is avoided entirely.

We conclude this section by pointing out that the Schur and the quasi-Schur forms
of a given matrix are in no way unique. In addition to the dependence on the ordering of
the eigenvalues, any column of Q can be multiplied by a complex sign and a new
corresponding R can be found. For the quasi-Schur form, there are infinitely many ways
to select the 2 × 2 blocks, corresponding to applying arbitrary rotations to the columns of Q
associated with these blocks.
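The following short SciPy sketch compares the complex Schur form with the real (quasi-) Schur form on a random real matrix. It is not the book's example; the matrix is generated at random, and the checks simply verify the factorizations returned by scipy.linalg.schur.

```python
# Complex Schur form versus real (quasi-) Schur form of a random real matrix.
import numpy as np
from scipy.linalg import schur

A = np.random.rand(3, 3)

T_c, Q_c = schur(A, output='complex')   # upper triangular T_c, unitary Q_c
T_r, Q_r = schur(A, output='real')      # quasi-triangular T_r (2x2 blocks), orthogonal Q_r

print(np.allclose(Q_c @ T_c @ Q_c.conj().T, A))   # A = Q T Q^H
print(np.allclose(Q_r @ T_r @ Q_r.T, A))          # A = Q T Q^T, all real
print(np.diag(T_c))                               # eigenvalues on the diagonal of T_c
```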
1.8.4  Application to Powers of Matrices

The analysis of many numerical techniques is based on understanding the behavior of the
successive powers A^k of a given matrix A. In this regard, the following theorem plays a
fundamental role in numerical linear algebra, more particularly in the analysis of iterative
methods.

THEOREM 1.4  The sequence A^k, k = 0, 1, ..., converges to zero if and only if \rho(A) < 1.

Proof. To prove the necessary condition, assume that A^k \rightarrow 0 and consider u_1 a unit
eigenvector associated with an eigenvalue \lambda_1 of maximum modulus. We have

    A^k u_1 = \lambda_1^k u_1,

which implies, by taking the 2-norms of both sides,

    |\lambda_1|^k = \|A^k u_1\|_2 \le \|A^k\|_2 \rightarrow 0,

and hence \rho(A) = |\lambda_1| < 1.

To prove the sufficient condition, assume that \rho(A) < 1 and write A^k = X J^k X^{-1},
where J is the Jordan canonical form of A. Since J^k preserves its block structure, it suffices
to show that each of the Jordan blocks converges to zero. Each block is of the form

    J_i = \lambda_i I + E_i,

where E_i is a nilpotent matrix of index l_i, i.e., E_i^{l_i} = 0. Therefore, for k \ge l_i,

    J_i^k = \sum_{j=0}^{l_i - 1} \binom{k}{j} \lambda_i^{k-j} E_i^j.

Using the triangle inequality for any norm and taking k \ge l_i yields

    \|J_i^k\| \le \sum_{j=0}^{l_i - 1} \binom{k}{j} |\lambda_i|^{k-j} \|E_i^j\|.

Since |\lambda_i| < 1, each of the terms in this finite sum converges to zero as k \rightarrow \infty. Therefore,
the matrix J_i^k converges to zero.

An equally important result is stated in the following theorem.

THEOREM 1.5  The series

    \sum_{k=0}^{\infty} A^k

converges if and only if \rho(A) < 1. Under this condition, I - A is nonsingular and the limit
of the series is equal to (I - A)^{-1}.

Proof. The first part of the theorem is an immediate consequence of Theorem 1.4. In-
deed, if the series converges, then \|A^k\| \rightarrow 0. By the previous theorem, this implies that
\rho(A) < 1. To show that the converse is also true, use the equality

    I - A^{k+1} = (I - A)(I + A + A^2 + ... + A^k)

and exploit the fact that since \rho(A) < 1, then I - A is nonsingular, and therefore,

    (I - A)^{-1}(I - A^{k+1}) = I + A + A^2 + ... + A^k.

This shows that the series converges since the left-hand side will converge to (I - A)^{-1}.
In addition, it also shows the second part of the theorem.

Another important consequence of the Jordan canonical form is a result that relates
the spectral radius of a matrix to its matrix norm.

THEOREM 1.6  For any matrix norm \|\cdot\|, we have

    \lim_{k \rightarrow \infty} \|A^k\|^{1/k} = \rho(A).

Proof. The proof is a direct application of the Jordan canonical form and is the subject
of Exercise 10.
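A small numerical illustration of Theorems 1.4 and 1.5 follows: when the spectral radius is below one, the powers of the matrix decay and the partial sums of the Neumann series approach the inverse of I - A. The matrix and the number of terms are arbitrary choices for this sketch.

```python
# Powers of a matrix with rho(A) < 1 and the Neumann series sum_k A^k = (I - A)^{-1}.
import numpy as np

A = np.array([[0.5, 0.3],
              [0.1, 0.4]])
rho = max(abs(np.linalg.eigvals(A)))
print("spectral radius:", rho)                          # < 1 for this example

print(np.linalg.norm(np.linalg.matrix_power(A, 50)))    # ||A^50|| is tiny

S = np.zeros_like(A)
P = np.eye(2)
for _ in range(200):                                    # partial sums of the series
    S += P
    P = P @ A
print(np.allclose(S, np.linalg.inv(np.eye(2) - A)))     # matches (I - A)^{-1}
```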
1.9  NORMAL AND HERMITIAN MATRICES

This section examines specific properties of normal matrices and Hermitian matrices, in-
cluding some optimality properties related to their spectra. The most common normal ma-
trices that arise in practice are Hermitian or skew-Hermitian.

1.9.1  Normal Matrices

By definition, a matrix is said to be normal if it commutes with its transpose conjugate,
i.e., if it satisfies the relation

    A^H A = A A^H.                                                    (1.25)

An immediate property of normal matrices is stated in the following lemma.

LEMMA  If a normal matrix is triangular, then it is a diagonal matrix.

Proof. Assume, for example, that A is upper triangular and normal. Compare the first
diagonal element of the left-hand side matrix of (1.25) with the corresponding element of
the matrix on the right-hand side. We obtain that

    |a_{11}|^2 = \sum_{j=1}^{n} |a_{1j}|^2,

which shows that the elements of the first row are zeros except for the diagonal one. The
same argument can then be applied to the second row, the third row, and so on, to conclude
that A is diagonal.

A consequence of this lemma is the following important result.

THEOREM 1.7  A matrix is normal if and only if it is unitarily similar to a diagonal
matrix.

Proof. It is straightforward to verify that a matrix which is unitarily similar to a diagonal
matrix is normal. To prove the converse, let A = Q R Q^H be the Schur canonical form of A,
where Q is unitary and R is upper triangular. By the normality of A,

    Q R^H Q^H Q R Q^H = Q R Q^H Q R^H Q^H,

or,

    Q R^H R Q^H = Q R R^H Q^H.

Upon multiplication by Q^H on the left and Q on the right, this leads to the equality R^H R =
R R^H which means that R is normal, and according to the previous lemma this is only
possible if R is diagonal.

Thus, any normal matrix is diagonalizable and admits an orthonormal basis of eigenvectors,
namely, the column vectors of Q.
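A quick numerical check of Theorem 1.7 for one particular normal matrix is given below: a real skew-symmetric matrix is normal, and its complex Schur factor comes out diagonal. The matrix is an arbitrary illustration, not taken from the book.

```python
# A skew-symmetric (hence normal) matrix has a diagonal complex Schur form.
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, 2.0],
              [-2.0, 0.0]])                           # skew-symmetric, hence normal

print(np.allclose(A.conj().T @ A, A @ A.conj().T))    # A^H A = A A^H

T, Q = schur(A.astype(complex), output='complex')
print(np.allclose(T, np.diag(np.diag(T))))            # Schur factor is diagonal
print(np.diag(T))                                     # purely imaginary eigenvalues
```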
The following result will be used in a later chapter. The question that is asked is:
Assuming that any eigenvector of a matrix A is also an eigenvector of A^H, is A normal?
If A had a full set of eigenvectors, then the result is true and easy to prove. Indeed, if V is
the n × n matrix of common eigenvectors, then A V = V D_1 and A^H V = V D_2, with D_1
and D_2 diagonal. Then, A A^H V = V D_1 D_2 and A^H A V = V D_2 D_1 and, therefore,
A A^H = A^H A. It turns out that the result is true in general, i.e., independently of the
number of eigenvectors that A admits.

LEMMA  A matrix A is normal if and only if each of its eigenvectors is also an eigenvector
of A^H.

Proof. If A is normal, the result follows from Theorem 1.7: the columns of the unitary
matrix Q are eigenvectors of both A and A^H. Assume now that every eigenvector v_i of A,
with A v_i = \lambda_i v_i and \|v_i\|_2 = 1, is also an eigenvector of A^H, so that A^H v_i = \mu_i v_i.
Observe that

    (A^H v_i, v_i) = (v_i, A v_i) = \overline{(A v_i, v_i)} = \bar{\lambda}_i,

and because (A^H v_i, v_i) = \mu_i, it follows that \mu_i = \bar{\lambda}_i. Next, it
is proved by contradiction that there are no elementary divisors. Assume that the contrary
is true for \lambda_i. Then, the first principal vector u_i associated with \lambda_i is defined by

    (A - \lambda_i I) u_i = v_i.

Taking the inner product of the above relation with v_i, we obtain

    (A u_i, v_i) = \lambda_i (u_i, v_i) + (v_i, v_i).                  (1.26)

On the other hand, it is also true that

    (A u_i, v_i) = (u_i, A^H v_i) = (u_i, \bar{\lambda}_i v_i) = \lambda_i (u_i, v_i).     (1.27)

A result of (1.26) and (1.27) is that (v_i, v_i) = 0 which is a contradiction. Therefore, A has
a full set of eigenvectors. This leads to the situation discussed just before the lemma, from
which it is concluded that A must be normal.
Clearly, Hermitian matrices are a particular case of normal matrices. Since a normal
matrix satisfies the relation A = Q D Q^H, with D diagonal and Q unitary, the eigenvalues
of A are the diagonal entries of D. Therefore, if these entries are real it is clear that
A^H = A. This is restated in the following corollary.

COROLLARY  A normal matrix whose eigenvalues are real is Hermitian.
As will be seen shortly, the converse is also true, i.e., a Hermitian matrix has real eigenval-
ues.
An eigenvalue \lambda of any matrix satisfies the relation

    \lambda = \frac{(A u, u)}{(u, u)},

where u is an associated eigenvector. Generally, one might consider the complex scalars

    \mu(x) = \frac{(A x, x)}{(x, x)},                                 (1.28)

defined for any nonzero vector x in C^n. These ratios are known as Rayleigh quotients and
are important both for theoretical and practical purposes. The set of all possible Rayleigh
quotients as x runs over C^n is called the field of values of A. This set is clearly bounded
since each |\mu(x)| is bounded by the 2-norm of A, i.e., |\mu(x)| \le \|A\|_2 for all x.

If a matrix A is normal, then any vector x in C^n can be expressed as

    x = \sum_{i=1}^{n} \xi_i q_i,

where the vectors q_i form an orthogonal basis of eigenvectors, and the expression for \mu(x)
becomes

    \mu(x) = \frac{(A x, x)}{(x, x)} = \frac{\sum_{k=1}^{n} \lambda_k |\xi_k|^2}{\sum_{k=1}^{n} |\xi_k|^2} = \sum_{k=1}^{n} \beta_k \lambda_k,

where

    0 \le \beta_k = \frac{|\xi_k|^2}{\sum_{j=1}^{n} |\xi_j|^2} \le 1,   and   \sum_{k=1}^{n} \beta_k = 1.

From a well known characterization of convex hulls (Hausdorff's
convex hull theorem), this means that the set of all possible Rayleigh quotients as x runs
over all of C^n is equal to the convex hull of the \lambda_k's. This leads to the following theorem
which is stated without proof.

THEOREM  The field of values of a normal matrix is equal to the convex hull of its
spectrum.

The next question is whether or not this is also true for nonnormal matrices and the
answer is no: The convex hull of the eigenvalues and the field of values of a nonnormal
matrix are different in general. As a generic example, one can take any nonsymmetric real
matrix which has real eigenvalues only. In this case, the convex hull of the spectrum is
a real interval but its field of values will contain imaginary values. See Exercise 12 for
another example. It has been shown (Hausdorff) that the field of values of a matrix is a
convex set. Since the eigenvalues are members of the field of values, their convex hull is
contained in the field of values. This is summarized in the following proposition.

PROPOSITION 1.5  The field of values of an arbitrary matrix is a convex set which
contains the convex hull of its spectrum. It is equal to the convex hull of the spectrum
when the matrix is normal.
1.9.2  Hermitian Matrices

A first result on Hermitian matrices is the following.

THEOREM 1.8  The eigenvalues of a Hermitian matrix are real, i.e., \sigma(A) \subset R.

Proof. Let \lambda be an eigenvalue of A and u an associated eigenvector of 2-norm unity.
Then

    \lambda = (A u, u) = (u, A u) = \overline{(A u, u)} = \bar{\lambda},

which is the stated result.
It is not difficult to see that if, in addition, the matrix is real, then the eigenvectors can be
chosen to be real; see Exercise 21. Since a Hermitian matrix is normal, the following is a
consequence of Theorem 1.7.
THEOREM  Any Hermitian matrix is unitarily similar to a real diagonal matrix.

In particular a Hermitian matrix admits a set of orthonormal eigenvectors that form a basis
of C^n.
In the proof of Theorem 1.8 we used the fact that the inner products (A u, u) are real.
Generally, it is clear that any Hermitian matrix is such that (A x, x) is real for any vector
x \in C^n. It turns out that the converse is also true, i.e., it can be shown that if (A z, z) is
real for all vectors z in C^n, then the matrix A is Hermitian; see Exercise 15.

Eigenvalues of Hermitian matrices can be characterized by optimality properties of
the Rayleigh quotients (1.28). The best known of these is the min-max principle. We now
label all the eigenvalues of A in descending order:

    \lambda_1 \ge \lambda_2 \ge ... \ge \lambda_n.

Here, the eigenvalues are not necessarily distinct and they are repeated, each according to
its multiplicity. In the following theorem, known as the Min-Max Theorem, S represents a
generic subspace of C^n.

THEOREM  The eigenvalues of a Hermitian matrix A are characterized by the relation

    \lambda_k = \min_{S, \dim(S) = n-k+1} \; \max_{x \in S, x \ne 0} \frac{(A x, x)}{(x, x)}.      (1.30)
Proof. Let {q_i}_{i=1,...,n} be an orthonormal basis of C^n consisting of eigenvectors of A
associated with \lambda_1, ..., \lambda_n respectively. Let S_k be the subspace
spanned by the first k of these vectors and denote by \mu(S) the maximum of (A x, x)/(x, x)
over all nonzero vectors of a subspace S. Since the dimension of S_k is k, a well known theorem of linear algebra
shows that its intersection with any subspace S of dimension n - k + 1 is not reduced to
{0}, i.e., there is a vector x in S \cap S_k. For this x = \sum_{i=1}^{k} \xi_i q_i, we have

    \frac{(A x, x)}{(x, x)} = \frac{\sum_{i=1}^{k} \lambda_i |\xi_i|^2}{\sum_{i=1}^{k} |\xi_i|^2} \ge \lambda_k,

so that \mu(S) \ge \lambda_k.

Consider, on the other hand, the particular subspace S_0 of dimension n - k + 1 which
is spanned by q_k, ..., q_n. For each vector x in this subspace, we have

    \frac{(A x, x)}{(x, x)} = \frac{\sum_{i=k}^{n} \lambda_i |\xi_i|^2}{\sum_{i=k}^{n} |\xi_i|^2} \le \lambda_k,

so that \mu(S_0) \le \lambda_k. In other words, as S runs over all the (n - k + 1)-dimensional
subspaces, \mu(S) is always \ge \lambda_k and there is at least one subspace S_0 for which \mu(S_0)
\le \lambda_k. This shows the desired result.

The above result is often called the Courant-Fischer min-max principle or theorem. As a
particular case, the largest eigenvalue of A satisfies

    \lambda_1 = \max_{x \ne 0} \frac{(A x, x)}{(x, x)}.               (1.31)

Actually, there are four different ways of rewriting the above characterization. The
second formulation is

    \lambda_k = \max_{S, \dim(S) = k} \; \min_{x \in S, x \ne 0} \frac{(A x, x)}{(x, x)},          (1.32)

and the two other ones can be obtained from (1.30) and (1.32) by simply relabeling the
eigenvalues increasingly instead of decreasingly. Thus, with our labeling of the eigenvalues
in descending order, (1.32) tells us that the smallest eigenvalue satisfies

    \lambda_n = \min_{x \ne 0} \frac{(A x, x)}{(x, x)},               (1.33)

with \lambda_n replaced by \lambda_1 if the eigenvalues are relabeled increasingly.
In order for all the eigenvalues of a Hermitian matrix to be positive, it is necessary and
sufficient that

    (A x, x) > 0,   for all x \in C^n,  x \ne 0.

Such a matrix is called positive definite. A matrix which satisfies (A x, x) \ge 0 for any x is
said to be positive semidefinite. In particular, the matrix A^H A is semipositive definite for
any rectangular matrix A, since

    (A^H A x, x) = (A x, A x) \ge 0,   for all x.

Similarly, A A^H is also a Hermitian semipositive definite matrix. The square roots of the
eigenvalues of A^H A for a general rectangular matrix A are called the singular values of A
and are denoted by \sigma_i. In Section 1.5, we have stated without proof that the 2-norm of
any matrix A is equal to the largest singular value \sigma_1 of A. This is now an obvious fact,
because

    \|A\|_2^2 = \max_{x \ne 0} \frac{\|A x\|_2^2}{\|x\|_2^2} = \max_{x \ne 0} \frac{(A^H A x, x)}{(x, x)} = \sigma_1^2,

which results from (1.31).
Another characterization of eigenvalues, known as the Courant characterization, is
stated in the next theorem. In contrast with the min-max theorem, this property is recursive
in nature.
THEOREM  The eigenvalue \lambda_k and the corresponding eigenvector q_k of a Hermi-
tian matrix are such that

    \lambda_1 = \frac{(A q_1, q_1)}{(q_1, q_1)} = \max_{x \in C^n, x \ne 0} \frac{(A x, x)}{(x, x)},

and for k > 1,

    \lambda_k = \frac{(A q_k, q_k)}{(q_k, q_k)} = \max_{x \ne 0, \; q_1^H x = ... = q_{k-1}^H x = 0} \frac{(A x, x)}{(x, x)}.    (1.34)

In other words, the maximum of the Rayleigh quotient over the subspace orthogonal to the
first k - 1 eigenvectors is equal to \lambda_k and is achieved for the eigenvector q_k.
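The sketch below checks (1.31) and (1.33) numerically: for a Hermitian matrix the Rayleigh quotient stays between the smallest and largest eigenvalues, and the bounds are attained at the corresponding eigenvectors. The matrix and the random sampling are arbitrary choices for this illustration.

```python
# Rayleigh-quotient bounds for a random real symmetric (hence Hermitian) matrix.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2

w, V = np.linalg.eigh(A)               # eigenvalues in ascending order

def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

samples = [rayleigh(A, rng.standard_normal(5)) for _ in range(1000)]
print(w[0] <= min(samples) and max(samples) <= w[-1])   # all quotients in [lambda_min, lambda_max]
print(np.isclose(rayleigh(A, V[:, -1]), w[-1]))          # maximum attained at the top eigenvector
print(np.isclose(rayleigh(A, V[:, 0]), w[0]))            # minimum attained at the bottom eigenvector
```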
1.10  NONNEGATIVE MATRICES, M-MATRICES

Nonnegative matrices play a crucial role in the theory of matrices. They are important in
the study of convergence of iterative methods and arise in many applications including
economics, queuing theory, and chemical engineering.
A nonnegative matrix is simply a matrix whose entries are nonnegative. More gener-
ally, a partial order relation can be defined on the set of matrices.
DEFINITION  Let A and B be two n × m matrices. Then

    A \le B

if by definition a_{ij} \le b_{ij} for 1 \le i \le n, 1 \le j \le m. If O denotes the n × m zero matrix,
then A is nonnegative if A \ge O, and positive if A > O. Similar definitions hold in which
"positive" is replaced by "negative".

The binary relation "\le" imposes only a partial order on R^{n×m} since two arbitrary matrices
in R^{n×m} are not necessarily comparable by this relation. For the remainder of this section,
we now assume that only square matrices are involved. The next proposition lists a number
of elementary properties of the partial order relation just defined.

PROPOSITION 1.6  The following properties hold.
1. The relation \le for matrices is reflexive and transitive.
2. If A is nonnegative, then so is A^k for every integer k \ge 0.
3. If A \le B, then A^T \le B^T.
4. If O \le A \le B, then \|A\|_1 \le \|B\|_1 and \|A\|_\infty \le \|B\|_\infty.
A fundamental result concerning nonnegative matrices is the following theorem known as
the Perron-Frobenius theorem.

THEOREM  Let A be a real n × n nonnegative irreducible matrix. Then \lambda = \rho(A),
the spectral radius of A, is a simple eigenvalue of A. Moreover, there exists an eigenvector
u with positive elements associated with this eigenvalue.
A relaxed version of this theorem allows the matrix to be reducible but the conclusion is
somewhat weakened in the sense that the elements of the eigenvectors are only guaranteed
to be nonnegative.
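An illustrative power-iteration sketch for the Perron-Frobenius theorem follows: for a nonnegative irreducible matrix the iteration settles on a positive eigenvector associated with the spectral radius. The matrix and the iteration count are arbitrary choices for this example.

```python
# Power iteration converging to the Perron vector of a nonnegative irreducible matrix.
import numpy as np

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])        # nonnegative and irreducible

x = np.ones(3)
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)

lam = x @ A @ x / (x @ x)              # Rayleigh-quotient estimate of rho(A)
print(np.all(x > 0))                   # Perron vector has positive entries
print(np.isclose(lam, max(abs(np.linalg.eigvals(A)))))
```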
Next, a useful property is established.

PROPOSITION 1.7  Let A, B, C, and D be nonnegative matrices satisfying A \le B and
C \le D. Then

    A + C \le B + D   and   A C \le B D.

Proof. Consider the first inequality only, since the proof for the second is identical. The
result follows from the componentwise inequalities a_{ij} + c_{ij} \le b_{ij} + d_{ij}. For the product,

    (A C)_{ij} = \sum_{k} a_{ik} c_{kj} \le \sum_{k} b_{ik} c_{kj} \le \sum_{k} b_{ik} d_{kj} = (B D)_{ij},

since all the entries involved are nonnegative.

A consequence of the proposition is the following corollary.

COROLLARY  Let A and B be two nonnegative matrices, with A \le B. Then

    A^k \le B^k,   for all k \ge 0.                                   (1.35)

Proof. The proof is by induction. The inequality is clearly true for k = 0. Assume that
(1.35) is true for k. According to the previous proposition, multiplying (1.35) from the left
by A results in

    A^{k+1} \le A B^k.                                               (1.36)

Now, multiply both sides of the inequality A \le B by B^k to the right, and obtain

    A B^k \le B^{k+1}.                                               (1.37)

The inequalities (1.36) and (1.37) show that A^{k+1} \le B^{k+1}, which completes the induction
proof.
A theorem which has important consequences on the analysis of iterative methods will
now be stated.

THEOREM 1.14  Let A and B be two square matrices that satisfy the inequalities

    O \le B \le A.                                                    (1.38)

Then

    \rho(B) \le \rho(A).                                              (1.39)

Proof. The proof is based on the following equality stated in Theorem 1.6,

    \rho(X) = \lim_{k \rightarrow \infty} \|X^k\|^{1/k},

for any matrix norm. Choosing the 1-norm, for example, we have from the last property
in Proposition 1.6

    \rho(B) = \lim_{k \rightarrow \infty} \|B^k\|_1^{1/k} \le \lim_{k \rightarrow \infty} \|A^k\|_1^{1/k} = \rho(A),

which completes the proof.

THEOREM 1.16  Let A be a nonnegative matrix. Then \rho(A) < 1 if and only if I - A
is nonsingular and (I - A)^{-1} is nonnegative.

Proof. If it is assumed that \rho(A) < 1, then by Theorem 1.5, I - A
is nonsingular and

    (I - A)^{-1} = \sum_{k=0}^{\infty} A^k.                           (1.40)

In addition, since A \ge 0, all the powers of A as well as their sum in (1.40) are also
nonnegative.

To prove the sufficient condition, assume that I - A is nonsingular and that its inverse
is nonnegative. By the Perron-Frobenius theorem, there is a nonnegative eigenvector u
associated with \rho(A), which is an eigenvalue, i.e.,

    A u = \rho(A) u

or, equivalently,

    (I - A)^{-1} u = \frac{1}{1 - \rho(A)} u.

Since u and (I - A)^{-1} are nonnegative, and I - A is nonsingular, this shows that 1 - \rho(A) > 0,
which is the desired result.
DEFINITION 1.4  A matrix A is said to be an M-matrix if it satisfies the following four
properties:

1. a_{ii} > 0 for i = 1, ..., n.
2. a_{ij} \le 0 for i \ne j, i, j = 1, ..., n.
3. A is nonsingular.
4. A^{-1} \ge 0.

In reality, the four conditions in the above definition are somewhat redundant and
equivalent conditions that are more rigorous will be given later. Let A be any matrix which
satisfies properties (1) and (2) in the above definition and let D be the diagonal of A. Since
D > 0,

    A = D - (D - A) = D ( I - (I - D^{-1} A) ).

Now define

    B \equiv I - D^{-1} A.

Using the previous theorem, I - B = D^{-1} A is nonsingular and (I - B)^{-1} = A^{-1} D is nonnegative
if and only if \rho(B) < 1. It is now easy to see that conditions (3) and (4) of Definition 1.4
can be replaced by the condition \rho(I - D^{-1} A) < 1.
The next theorem shows that the condition (1) in Definition 1.4 is implied by the other
three.

THEOREM  Let a matrix A be given such that

1. a_{ij} \le 0 for i \ne j, i, j = 1, ..., n;
2. A is nonsingular;
3. A^{-1} \ge 0.

Then

1. a_{ii} > 0 for i = 1, ..., n, i.e., A is an M-matrix;
2. \rho(I - D^{-1} A) < 1, where D is the diagonal of A.

Proof. Define C = A^{-1}. Writing that (A C)_{ii} = 1 yields

    \sum_{k=1}^{n} a_{ik} c_{ki} = 1,

which gives

    a_{ii} c_{ii} = 1 - \sum_{k \ne i} a_{ik} c_{ki}.

Since c_{ki} \ge 0 for all k, the right-hand side is \ge 1 and since c_{ii} \ge 0, then a_{ii} > 0.
The second part of the result now follows immediately from an application of the previous
theorem.
THEOREM  Let A and B be two matrices which satisfy

    A \le B,
    b_{ij} \le 0   for all i \ne j.

Then, if A is an M-matrix, so is the matrix B.

Proof. Assume that A is an M-matrix and let D_A and D_B denote the diagonals of A and B.
The matrix D_B is positive because

    0 < D_A \le D_B.

Consider now the matrix I - D_B^{-1} B. Since A \le B, then

    D_A - A \ge D_B - B \ge O,

which, upon multiplying through by D_A^{-1}, yields

    I - D_A^{-1} A \ge D_A^{-1} (D_B - B) \ge D_B^{-1} (D_B - B) = I - D_B^{-1} B \ge O.

Since the matrices I - D_B^{-1} B and I - D_A^{-1} A are nonnegative, Theorems 1.14 and 1.16
imply that

    \rho(I - D_B^{-1} B) \le \rho(I - D_A^{-1} A) < 1.

This establishes the result by using Theorem 1.16 once again.
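A small numerical check of the M-matrix characterization discussed above follows: for a matrix with positive diagonal and nonpositive off-diagonal entries, the condition \rho(I - D^{-1}A) < 1 goes together with a nonnegative inverse. The matrix used here, the one-dimensional finite-difference Laplacian, is a classical M-matrix example and is only an illustration.

```python
# M-matrix check on the tridiagonal (-1, 2, -1) matrix.
import numpy as np

n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

D = np.diag(np.diag(A))
B = np.eye(n) - np.linalg.solve(D, A)                    # B = I - D^{-1} A
rho = max(abs(np.linalg.eigvals(B)))
print(rho < 1)                                           # equivalent M-matrix condition

Ainv = np.linalg.inv(A)
print(np.all(Ainv >= -1e-12))                            # A^{-1} is (numerically) nonnegative
```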
1.11  POSITIVE-DEFINITE MATRICES

A real matrix is said to be positive definite or positive real if

    (A u, u) > 0,   for all u \in R^n,  u \ne 0.                      (1.41)

It must be emphasized that this definition is only useful when formulated entirely for real
variables. Indeed, if u were not restricted to be real, then assuming that (A u, u) is real
for all u complex would imply that A is Hermitian; see Exercise 15. If, in addition to
(1.41), A is symmetric (real), then A is said to be Symmetric Positive Definite
(SPD). Similarly, if A is Hermitian, then A is said to be Hermitian Positive Definite (HPD).

Some properties of HPD matrices were seen in Section 1.9, in particular with regards
to their eigenvalues. Now the more general case where A is non-Hermitian and positive
definite is considered.

We begin with the observation that any square matrix (real or complex) can be decom-
posed as

    A = H + S,                                                        (1.42)

in which

    H = \frac{1}{2} (A + A^H),                                        (1.43)
    S = \frac{1}{2} (A - A^H).                                        (1.44)

Note that both H and iS are Hermitian while the matrix S in the decomposition (1.42)
is skew-Hermitian. The matrix H in the decomposition is called the Hermitian part of
A, while the matrix S is the skew-Hermitian part of A. The above decomposition is the
analogue of the decomposition of a complex number into its real and imaginary parts.

When A is real and u is a real vector then (A u, u) is real and, as a result, the decom-
position (1.42) immediately gives the equality

    (A u, u) = (H u, u).                                              (1.45)
This results in the following theorem.
THEOREM  Let A be a real positive definite matrix. Then A is nonsingular. In
addition, there exists a scalar \alpha > 0 such that

    (A u, u) \ge \alpha \|u\|_2^2,                                    (1.46)

for any real vector u.

Proof. The first statement is an immediate consequence of the definition of positive defi-
niteness. Indeed, if A were singular, then there would be a nonzero vector u such that A u = 0
and as a result (A u, u) = 0 for this vector, which would contradict (1.41). We now prove
the second part of the theorem. From (1.45) and the fact that A is positive definite, we
conclude that H is HPD. Hence, from (1.33) based on the min-max theorem, we get

    \min_{u \ne 0} \frac{(A u, u)}{(u, u)} = \min_{u \ne 0} \frac{(H u, u)}{(u, u)} = \lambda_{min}(H) > 0.

Taking \alpha \equiv \lambda_{min}(H) yields the desired inequality (1.46).
A simple yet important result which locates the eigenvalues of A in terms of the spectra
of H and S can now be proved.

THEOREM  Let A be any square (possibly complex) matrix and let H = (A + A^H)/2
and S = (A - A^H)/2. Then any eigenvalue \lambda_j of A is such that

    \lambda_{min}(H) \le Re(\lambda_j) \le \lambda_{max}(H),
    \lambda_{min}(-iS) \le Im(\lambda_j) \le \lambda_{max}(-iS).

Proof. When the decomposition (1.42) is applied to the Rayleigh quotient of the eigen-
vector u_j associated with \lambda_j, we obtain

    \lambda_j = (A u_j, u_j) = (H u_j, u_j) + (S u_j, u_j),

assuming that \|u_j\|_2 = 1. This leads to

    Re(\lambda_j) = (H u_j, u_j),    Im(\lambda_j) = (-i S u_j, u_j).

The result follows using properties established in Section 1.9.
Thus, the eigenvalues of a matrix are contained in a rectangle defined by the eigenval-
ues of its Hermitian part and its non-Hermitian part. In the particular case where A is real,
then S is skew-symmetric and its eigenvalues form a set that is symmetric with respect to
the real axis in the complex plane. Indeed, in this case, S is real and its eigenvalues come
in conjugate pairs.
Note that all the arguments herein are based on the field of values and, therefore,
they provide ways to localize the eigenvalues of from knowledge of the field of values.
However, this approximation can be inaccurate in some cases.
EXAMPLE  A 2 × 2 real triangular matrix, one of whose eigenvalues is 101, illustrates
these bounds: the eigenvalues of its Hermitian part H and of its skew-Hermitian part S
bracket the real and imaginary parts of its spectrum as stated above, though the resulting
rectangle may considerably overestimate the region that actually contains the eigenvalues.
When a matrix is Symmetric Positive Definite, the mapping
* *
from jlj
to is a proper inner product on , in the sense defined in Section 1.4.
+ ;
+
The associated norm is often referred to as the energy norm. Sometimes, it is possible to
g x g
find an appropriate HPD matrix which makes a given matrix Hermitian, i.e., such that
+ *
* +
N
although is a non-Hermitian matrix with respect to the Euclidean inner product. The
simplest examples are and , where is Hermitian and is Hermitian
Positive Definite.
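The sketch below exercises the Hermitian/skew-Hermitian splitting A = H + S, the eigenvalue rectangle discussed above, and the B-inner-product self-adjointness of A = B^{-1}C. All matrices are random examples generated for illustration only.

```python
# Hermitian/skew-Hermitian splitting, eigenvalue rectangle, and B-inner-product check.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

H = (A + A.T) / 2                      # Hermitian part
S = (A - A.T) / 2                      # skew-Hermitian part
lam = np.linalg.eigvals(A)
hmin, hmax = np.linalg.eigvalsh(H)[[0, -1]]
smin, smax = np.linalg.eigvalsh(-1j * S)[[0, -1]]
print(np.all((hmin <= lam.real) & (lam.real <= hmax)))
print(np.all((smin <= lam.imag) & (lam.imag <= smax)))

# Self-adjointness of A2 = B^{-1} C with respect to (x, y)_B = y^H B x:
M = rng.standard_normal((4, 4))
B = M @ M.T + 4 * np.eye(4)            # SPD matrix B
C = (M + M.T) / 2                      # symmetric matrix C
A2 = np.linalg.solve(B, C)             # A2 = B^{-1} C
x, y = rng.standard_normal(4), rng.standard_normal(4)
print(np.isclose(y @ B @ (A2 @ x), (A2 @ y) @ B @ x))   # (A2 x, y)_B == (x, A2 y)_B
```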