
ITERATIVE METHODS FOR LINEAR AND NONLINEAR EQUATIONS

Of these costs, the application of the preconditioner is usually the larger. In
the remainder of this section we briefly mention some classes of preconditioners.
A more complete and detailed discussion of preconditioners is in [8] and a
concise survey with many pointers to the literature is in [12].
Some effective preconditioners are based on deep insight into the structure
of the problem. See [124] for an example in the context of partial differential
equations, where it is shown that certain discretized second-order elliptic
problems on simple geometries can be very well preconditioned with fast
Poisson solvers [99], [188], and [187]. Similar performance can be obtained from
multigrid [99], domain decomposition [38], [39], [40], and alternating direction
preconditioners [8], [149], [193], [194]. We use a Poisson solver preconditioner
in the examples in §2.7 and §3.7 as well as for nonlinear problems in §6.4.2
and §8.4.2.
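
To make the idea concrete, here is a minimal sketch (ours, not from the text)
of a fast Poisson solver for the standard 5-point Laplacian on a uniform grid
with homogeneous Dirichlet boundary conditions, written in Python with SciPy;
the function name poisson_solve and the grid setup are illustrative
assumptions. Applying such a preconditioner M to a residual amounts to one
call of this solver.

    import numpy as np
    from scipy.fft import dstn, idstn

    def poisson_solve(f, h):
        # Illustrative sketch: solve A u = f, where A is the 5-point
        # discrete Laplacian on an n x n interior grid with mesh width h
        # and homogeneous Dirichlet boundary conditions.  The discrete
        # sine transform (DST-I) diagonalizes A, so each solve costs
        # O(n^2 log n) operations.
        n = f.shape[0]
        k = np.arange(1, n + 1)
        # Eigenvalues of the 1-D second-difference operator.
        lam = (4.0 / h**2) * np.sin(k * np.pi / (2 * (n + 1))) ** 2
        fhat = dstn(f, type=1)                       # transform right side
        uhat = fhat / (lam[:, None] + lam[None, :])  # divide by eigenvalues
        return idstn(uhat, type=1)                   # transform back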
One commonly used and easily implemented preconditioner is Jacobi
preconditioning, where M is the inverse of the diagonal part of A. One can also
use other preconditioners based on the classical stationary iterative methods,
such as the symmetric Gauss-Seidel preconditioner (1.18). For applications to
partial differential equations, these preconditioners may be somewhat useful,
but should not be expected to have dramatic effects.
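
Since M is diagonal, each application of the Jacobi preconditioner is a single
entrywise vector divide. A minimal sketch in Python with SciPy (the wrapper
below and its use with SciPy's cg are our illustration, not part of the text):

    from scipy.sparse.linalg import LinearOperator, cg

    def jacobi_preconditioner(A):
        # M = inverse of the diagonal part of A, applied entrywise.
        d = A.diagonal()
        n = A.shape[0]
        return LinearOperator((n, n), matvec=lambda r: r / d)

    # Usage with a conjugate gradient iteration:
    #   x, info = cg(A, b, M=jacobi_preconditioner(A))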
Another approach is to apply a sparse Cholesky factorization to the
matrix A (thereby giving up a fully matrix-free formulation) and discard
small elements of the factors and/or allow only a fixed amount of storage
for the factors. Such preconditioners are called incomplete factorization
preconditioners. So if A = LL^T + E, where E is small, the preconditioner
is (LL^T)^{-1} and its action on a vector is done by two sparse triangular solves.
We refer the reader to [8], [127], and [44] for more detail.
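
SciPy does not ship an incomplete Cholesky routine, but its incomplete LU
factorization illustrates the same idea of discarding small entries and
capping the storage for the factors; the tolerances below are illustrative
assumptions, not values from the text:

    from scipy.sparse.linalg import spilu, LinearOperator

    def ilu_preconditioner(A, drop_tol=1e-4, fill_factor=10):
        # Factor A approximately, discarding entries below drop_tol and
        # allowing at most fill_factor times the storage of A for the
        # factors.  Each application M r is a pair of sparse triangular
        # solves, bundled inside ilu.solve.
        ilu = spilu(A.tocsc(), drop_tol=drop_tol, fill_factor=fill_factor)
        n = A.shape[0]
        return LinearOperator((n, n), matvec=ilu.solve)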
One could also attempt to estimate the spectrum of A, find a polynomial
p such that 1 - zp(z) is small on the approximate spectrum, and use p(A) as a
preconditioner. This is called polynomial preconditioning. The preconditioned
system is
p(A)Ax = p(A)b
and we would expect the spectrum of p(A)A to be more clustered near z = 1
than that of A. If an interval containing the spectrum can be found, the
residual polynomial q(z) = 1 - zp(z) of smallest L^∞ norm on that interval
can be expressed in terms of Chebyshev [161] polynomials. Alternatively
q can be selected to solve a least squares minimization problem [5], [163].
The preconditioner p can be directly recovered from q and convergence rate
estimates made. This technique is used to prove the estimate (2.15), for
example. The cost of such a preconditioner, if a polynomial of degree K is
used, is K matrix-vector products for each application of the preconditioner
[5]. The performance gains can be very significant and the implementation is
matrix-free.
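
As a concrete sketch (our illustration, and simpler than the Chebyshev and
least squares polynomials cited above), the truncated Neumann series
p(z) = omega * sum_{k=0}^{K} (1 - omega z)^k gives the residual polynomial
1 - zp(z) = (1 - omega z)^{K+1}, which is small wherever |1 - omega z| < 1.
The damping parameter omega and the degree K are tuning assumptions:

    def neumann_poly_preconditioner(A, K, omega=1.0):
        # Apply p(A) v for the truncated Neumann series
        # p(A) = omega * (I + B + ... + B^K) with B = I - omega*A,
        # evaluated in Horner form.  Requires the spectrum of omega*A
        # to lie in (0, 2).  Each application costs K matrix-vector
        # products and is matrix-free: only the product A @ w is needed.
        def apply(v):
            w = omega * v
            for _ in range(K):
                w = omega * v + w - omega * (A @ w)   # w <- omega*v + B w
            return w
        return apply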
