
Computers and Mathematics with Applications 44 (2002) 1109-1116
www.elsevier.com/locate/camwa

New Iterative Improvement of a Solution
for an Ill-Conditioned System of Linear
Equations Based on a Linear Dynamic System

XINYUAN WU, RONG SHAO AND YIRAN ZHU
Department of Mathematics
Nanjing University
Nanjing 210093, P.R. China

Abstract-In this paper, the analysis of the dynamic system for iterative improvement of a solution is discussed, and a new iterative improvement of a solution is proposed based on the dynamic system. We have proved that the new iterative improvement of the solution converges unconditionally. The numerical experiments illustrate that the new iterative improvement of the solution is more effective for an ill-conditioned system of linear equations. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords-Iterative improvement of solution, Ill-conditioned system of linear equations, Dynamic system, ODE recursion, Iteration method, Preconditioner.

1. INTRODUCTION

It is well known that if the matrix A of coefficients is ill conditioned, then the computed solution of the linear system

Ax = b        (1.1)

may not be sufficiently accurate. Therefore, the iterative improvement of the solution is employed in order to generate an iterative sequence {x_n} of approximate solutions that converges to the solution of (1.1). A popular procedure of the iterative improvement is the famous Wilkinson's iterative refinement [1,2] of the solution, in which the Cholesky decomposition is repeatedly used. It has been shown [3,4] that if A is not "too ill conditioned", then the x_n will converge to the solution "to working accuracy", provided the residual r_n = b - Ax_n is computed in double precision. But, from the point of view of numerical computation, this method may be impractical for an ill-conditioned system of linear equations. For example, consider an ill-conditioned system of equations with a 12 × 12 Hilbert matrix. If we use Wilkinson's iterative improvement, it will provide a computed solution without any significant figures, no matter how many iterative steps are performed.

This project was supported by the Natural Science Foundation of Jiangsu Province, P.R. China.
The authors would like to thank the referee for the careful reading of the manuscript and several valuable suggestions.


This is mainly because Wilkinson's iterative improvement of the solution can be viewed as an iterative procedure with starting vector x_0 = 0,

Ay_n = b - Ax_n,
x_{n+1} = x_n + y_n,        n = 0, 1, 2, ...,        (1.2)

or equivalently,

y_n = A^{-1}(b - Ax_n),
x_{n+1} = x_n + y_n,        n = 0, 1, 2, ...,        (1.3)

with x_0 = 0. This means that Wilkinson's iterative improvement of the solution can be viewed as an explicit Euler method with step size h = 1 for solving the following system of ordinary differential equations:

dx/dt = A^{-1}(b - Ax),
x(0) = x_0 = 0,        (1.4)

where x* = A^{-1}b is the unique stationary point of system (1.4).
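For concreteness, the following is a minimal Python sketch of iteration (1.3), using NumPy and a small Hilbert test matrix; the function name, the generic LU-based solve (in place of the Cholesky factorization used in Wilkinson's procedure), and the test problem are illustrative assumptions rather than part of the original presentation.

    import numpy as np
    from scipy.linalg import hilbert

    def wilkinson_refinement(A, b, n_steps=50):
        # Iteration (1.3): y_n = A^{-1}(b - A x_n), x_{n+1} = x_n + y_n, with x_0 = 0.
        # In exact arithmetic a single step would suffice; in floating point the
        # correction y_n is contaminated by cond(A) at every step.
        x = np.zeros(len(b))
        for _ in range(n_steps):
            y = np.linalg.solve(A, b - A @ x)   # residual correction step
            x = x + y
        return x

    A = hilbert(12)          # 12 x 12 Hilbert matrix, numerically cond(A) ~ 1e16
    b = A @ np.ones(12)      # right-hand side whose exact solution is [1, ..., 1]^T
    print(wilkinson_refinement(A, b))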


Since A is ill conditioned, neither the numerically computed solution of (1.2) nor that of (1.3) is sufficiently accurate, although both can be viewed as procedures that generate the numerical approximate sequence {x_n} along the orbit defined by (1.4). Especially if A is seriously ill conditioned, it may be unworkable to produce a sufficiently accurate A^{-1} numerically, or equivalently, to generate sufficiently accurate y_n step by step. Therefore, when A is ill conditioned, in order to deal with this problem, a transformation should be considered in advance, and the behaviour of the dynamic system relating to the continuation-like method (1.4) should also be analysed. Then some new iterative improvement of the solution can be expected to be explored further. Our principal strategy is to convert an ill-conditioned problem into a series of relatively "well-conditioned" problems that have the same solution, and then solve the "well-conditioned" problems numerically by the method of successive approximation. So it can also be viewed as a new preconditioner. The detailed discussion will be presented in the next sections.

2. THE BEHAVIOUR OF A LINEAR DYNAMIC SYSTEM
RELATING TO THE NEW ITERATIVE IMPROVEMENT
OF THE SOLUTION UNDER CONSIDERATION

Consider a system of linear equations with an m × m matrix of coefficients

Ax = b,        (2.1)

where we restrict our attention to a matrix of coefficients that is normal and positive definite. Wilkinson [1,2] offered the iterative improvement (1.2) of solutions in which the Cholesky decomposition is employed. However, in Wilkinson's iterative procedure, the improvement of the solution at every step still suffers from cond(A). In other words, Wilkinson's iteration is performed without improving the condition of the problem to be solved. This enlightens us to discuss the following iterative improvement of the solution with a parameter u ≥ 0:

(uI + A)x_{n+1} = ux_n + b,        (2.2)

or equivalently,

x_{n+1} = (uI + A)^{-1}(ux_n + b).        (2.3)

It is easy to see that (2.2) can be rewritten as

(uI + A)y_n = b - Ax_n,
x_{n+1} = x_n + y_n,        (2.2')

and accordingly (2.3) can be rewritten as

y_n = (uI + A)^{-1}(b - Ax_n),
x_{n+1} = x_n + y_n.        (2.3')

Explicitly, in the special case of u = 0, it leads to Wilkinson's iterative improvement of the solution, namely (1.2) or (1.3).
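A minimal Python sketch of iteration (2.3') may help fix ideas; the helper name, default parameter values, and stopping rule are illustrative assumptions, and setting u = 0 recovers the Wilkinson-type iteration sketched in Section 1.

    import numpy as np

    def improved_refinement(A, b, u=1e-5, n_steps=500_000, tol=0.5e-5):
        # Iteration (2.3'): y_n = (uI + A)^{-1}(b - A x_n), x_{n+1} = x_n + y_n, x_0 = 0.
        # Only the shifted matrix uI + A is ever solved against, so each step works
        # with a better-conditioned matrix than A itself.
        m = len(b)
        x = np.zeros(m)
        shifted = u * np.eye(m) + A
        for _ in range(n_steps):
            y = np.linalg.solve(shifted, b - A @ x)
            x = x + y
            if np.linalg.norm(y) < tol:
                break
        return x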
On the one hand, many integrators for linear constant coefficient ODEs can be identified as approximations of linear system (2.1), and there exists a connection between ODE recursions and iterative solvers (see [5]); on the other hand, (2.3) or (2.3') is just an ODE recursion. Thus, we first consider the dynamic systems associated with (2.3) or (2.3').
The dynamic systems relating to the iterative improvement (2.3) or (2.3') are

dx/dt = (uI + A)^{-1}(b - Ax),
x(0) = x_0,        (2.4)

where u > 0 and x_0 ∈ R^m.
Obviously, letting u = 0 and x_0 = 0 in (2.4), we obtain (1.4), that is, the dynamic system relating to Wilkinson's improvement of the solution. So, in the following discussion, we only consider u > 0 in (2.4).
Now, let us study the behaviour of dynamic systems (2.4).

THEOREM 2.1. The solution x* of linear system (2.1) is the unique globally asymptotically stable equilibrium point of the dynamic systems (2.4).

PROOF. Let

f(x) = b - Ax

and

v(x) = f^T(x)f(x).        (2.5)

Then we have
(i) v(x*) = 0 and v(x) > 0, provided x ≠ x*, and
(ii)

dv(x)/dt = 2f^T(x)(-A) dx/dt = -2f^T(x)A(uI + A)^{-1}f(x)
= -2f^T(x)[(uI + A)A^{-1}]^{-1}f(x) = -2f^T(x)(I + uA^{-1})^{-1}f(x) < 0,

provided x ≠ x*;
this is due to u > 0 and A being positive definite, so that A^{-1}, (I + uA^{-1}), and (I + uA^{-1})^{-1} are also positive definite.
Thus, we conclude that v(x) defined by (2.5) is a strict Lyapunov function of the equilibrium point x* of dynamic systems (2.4). Therefore, the unique solution x* of linear system (2.1) is the unique globally asymptotically stable equilibrium point of dynamic systems (2.4).
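Condition (ii) is easy to spot-check numerically; the sketch below evaluates dv/dt = 2f^T(x) df/dt along the vector field of (2.4) at a random point, with an arbitrarily chosen small positive definite test matrix (all test data are illustrative assumptions).

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[4.0, 1.0], [1.0, 3.0]])    # small symmetric positive definite example
    b = np.array([1.0, 2.0])
    u = 0.1

    x = rng.standard_normal(2)                       # an arbitrary point x != x*
    f = b - A @ x
    dxdt = np.linalg.solve(u * np.eye(2) + A, f)     # right-hand side of (2.4)
    dvdt = 2 * f @ (-A @ dxdt)                       # dv/dt = 2 f^T(x) (-A) dx/dt
    print(dvdt)                                      # negative, as condition (ii) asserts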
COROLLARY 2.1. Suppose that the solution of systems (2.4) can be expressed as

x = x(t, x_0);        (2.6)

then we have

lim_{t→∞} x(t, x_0) = x* = A^{-1}b.        (2.7)

PROOF. It follows immediately from Theorem 2.1 and the definition of the well-known Lyapunov asymptotic stability.
From (2.7), we can see that the dynamic systems (2.4) present a continuation method for linear system (2.1). However, generally speaking, it is not easy to obtain the solution of linear system (2.1) directly from (2.7), because it is difficult to yield the analytic expression of x(t, x_0) in (2.7) when m is large enough. Consequently, we will employ numerical integration for ODEs (2.4).
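The continuation point of view of Corollary 2.1 can be observed with an off-the-shelf integrator; the following sketch uses scipy.integrate.solve_ivp on the same small test problem as above (an illustrative assumption, not the integration scheme proposed in the next section).

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    u = 0.1
    shifted = u * np.eye(2) + A

    def rhs(t, x):
        # Right-hand side of (2.4): dx/dt = (uI + A)^{-1}(b - A x).
        return np.linalg.solve(shifted, b - A @ x)

    sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(2), rtol=1e-10, atol=1e-12)
    print(sol.y[:, -1])               # x(t) for large t
    print(np.linalg.solve(A, b))      # the equilibrium x* = A^{-1}b of Corollary 2.1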

3. THE NEW ITERATIVE IMPROVEMENT OF A
SOLUTION BASED ON DYNAMIC SYSTEMS (2.4)
AND THE ANALYSIS OF CONVERGENCE

One of the simplest numerical integrations is the Euler method. For (2.4), the Euler method gives

x_{n+1} = x_n + h(uI + A)^{-1}(b - Ax_n),        n = 0, 1, ...,
x_0 = 0,        (3.1)

where h is the step size. In the following, the step size h = 1 is chosen for simplicity and convenience. In this case, we have the ODE recursion

x_{n+1} = x_n + (uI + A)^{-1}(b - Ax_n),        n = 0, 1, ...,
x_0 = 0.        (3.2)

This is (2.3’).
In view of spurious solutions of numerical methods for initial value problems (G-91, we will
investigate the behaviour of the ODE recursion (3.2).
Equation (3.2) can be rewritten as

z-n+1 = (~1 + A)-‘(us, + b),


1-L
=O,l,..., (3.3)
20 = 0,

since (~1 + A)(z,+l - x,,) = b - AZ,.


Equation (3.3) can be regarded as a stationary iterative method with the iteration matrix

M = u(uI + A)^{-1}.        (3.4)

Because A is a positive definite normal matrix and u > 0, uI + A, (uI + A)^{-1}, and u(uI + A)^{-1} are all positive definite normal matrices.

THEOREM 3.1. Assume that σ_i (i = 1, 2, ..., m) are the eigenvalues of the m × m positive definite normal matrix A in linear system (2.1) and

0 < σ_1 ≤ σ_2 ≤ ... ≤ σ_m.        (3.5)

Then the spectral radius of the iterative method (3.3) is

ρ(M) = ρ[u(uI + A)^{-1}] = u/(u + σ_1),        (3.6)

and the asymptotic rate of convergence of (3.2) or (3.3) is R(M) = -ln ρ(M) = -ln(u/(u + σ_1)), for any u > 0.
PROOF. Because σ_i (i = 1, 2, ..., m) are the eigenvalues of A, from (3.5) and u > 0, we have that

0 < u + σ_1 ≤ u + σ_2 ≤ ... ≤ u + σ_m

are the eigenvalues of uI + A. Therefore,

1/(u + σ_m) ≤ 1/(u + σ_{m-1}) ≤ ... ≤ 1/(u + σ_1)

are the eigenvalues of (uI + A)^{-1}, and

u/(u + σ_m) ≤ u/(u + σ_{m-1}) ≤ ... ≤ u/(u + σ_1)

are the eigenvalues of u(uI + A)^{-1}. Thus,

ρ(M) = ρ[u(uI + A)^{-1}] = max_{1≤i≤m} u/(u + σ_i) = u/(u + σ_1),        (3.7)

and the asymptotic rate of convergence of (3.2) or (3.3) is R(M) = -ln ρ(M) = -ln(u/(u + σ_1)).

COROLLARY 3.1. The new iterative improvement (3.2) of the solution is convergent unconditionally for any u > 0.
The result of Corollary 3.1 follows straightforwardly from (3.7) of Theorem 3.1; in fact, we have ρ(M) < 1 for any u > 0.
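Formula (3.7), and with it Corollary 3.1, can be checked numerically in a few lines; the diagonal test matrix and the value of u below are arbitrary illustrative choices.

    import numpy as np

    A = np.diag([0.01, 1.0, 5.0])                # sigma_1 = 0.01 is the smallest eigenvalue
    u = 0.02
    M = u * np.linalg.inv(u * np.eye(3) + A)     # iteration matrix (3.4)

    rho = max(abs(np.linalg.eigvals(M)))
    print(rho, u / (u + 0.01))                   # both equal u/(u + sigma_1) = 2/3 < 1
    print(-np.log(rho))                          # asymptotic rate of convergence R(M)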

THEOREM 3.2. Suppose that 0 < σ_1 ≤ σ_2 ≤ ... ≤ σ_m are the eigenvalues of the positive definite normal matrix A in (2.1). Then for any u > 0 we always have

κ(uI + A) < κ(A),        (3.8)

where κ(A) and κ(uI + A) denote the spectral condition numbers of A and (uI + A), respectively.
PROOF. The spectral condition numbers of A and (uI + A) are

κ(A) = σ_m/σ_1,        κ(uI + A) = (u + σ_m)/(u + σ_1),        (3.9)

respectively. Since u > 0 and σ_m > σ_1 > 0, from the strict monotone decrease of the function φ(u) = (u + σ_m)/(u + σ_1) on [0, ∞), we deduce that (u + σ_m)/(u + σ_1) < σ_m/σ_1; i.e., κ(uI + A) < κ(A).
Inequality (3.8) of Theorem 3.2 means that for any u > 0 the spectral condition number associated with the new iterative improvement (3.2) is less than the spectral condition number of the original problem (2.1) to be solved, that is, the spectral condition number of Wilkinson's iterative improvement (1.2) or (1.3). To illustrate this point, let us take the Hilbert matrix H_m as an example. The spectral condition numbers κ(uI + H_m) change with the parameter u; see Tables 1 and 2. From the tables, we can see that the spectral condition numbers are improved efficiently.

Table 1. κ(uI + H_m) with u = 10^{-5}.

m               10            20            50            100
κ(H_m)          1.6 × 10^13   1.1 × 10^19   1.0 × 10^19   5.2 × 10^19
κ(uI + H_m)     1.8 × 10^5    1.9 × 10^5    2.1 × 10^5    2.2 × 10^5

Table 2. κ(uI + H_m) with u = 10^{-4}.

m               10            20            50            100
κ(H_m)          1.6 × 10^13   1.1 × 10^19   1.0 × 10^19   5.2 × 10^19
κ(uI + H_m)     1.8 × 10^4    1.9 × 10^4    2.1 × 10^4    2.2 × 10^4
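The trend shown in Tables 1 and 2 can be reproduced along the following lines (a sketch only; note that computed condition numbers of Hilbert matrices of this size are themselves limited by double precision arithmetic).

    import numpy as np
    from scipy.linalg import hilbert

    for u in (1e-5, 1e-4):
        for m in (10, 20, 50, 100):
            H = hilbert(m)
            # np.linalg.cond with the default 2-norm gives the spectral condition number
            print(u, m, np.linalg.cond(H), np.linalg.cond(u * np.eye(m) + H))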

As for consistency, it is easy to see that, if the iterative improvement (3.2) is convergent, then we have

x = x + (uI + A)^{-1}(b - Ax)        (3.10)

from (3.2). Consequently, linear system (3.10) and linear system (2.1) have the same solution x* = A^{-1}b. That is to say, the iterative improvement (3.2) is consistent with the system of linear equations (2.1).
The rate of convergence of the new iterative improvement (3.2) or (3.3) changes with u. From (3.7), we can see that if u is chosen as

u ≤ σ_1,

then we have

ρ(M) = u/(u + σ_1) ≤ 1/2,

and the asymptotic rate of convergence is

R(M) = -ln ρ(M) ≥ ln 2.

On the other hand, from (3.9), if u is chosen too small, then the spectral condition number of the iteration matrix uI + A may not be improved significantly compared with Wilkinson's iterative improvement, for which the spectral condition number is just κ(A).
With regard to the estimate of error, since M is a normal matrix with ||M||_2 = ρ(M) = u/(u + σ_1) < 1, we have the a priori error estimate

||x_n - x*||_2 ≤ (ρ(M))^n / (1 - ρ(M)) · ||x_1 - x_0||_2,

and the a posteriori error estimate

||x_n - x*||_2 ≤ ρ(M) / (1 - ρ(M)) · ||x_n - x_{n-1}||_2.

Remarks

REMARK 3.1. For a nonnormal matrix A and the corresponding system of linear equations

Ax = b,        (3.11)

we may consider

Ãz = b,        (3.12)

where Ã = AA^T, and

x = A^T z.        (3.13)

It is obvious that Ã is a positive definite normal matrix, provided det(A) ≠ 0.
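A minimal sketch of the transformation in Remark 3.1 (the helper name, the default parameters, and the reuse of iteration (2.3') are illustrative assumptions):

    import numpy as np

    def solve_via_normal_form(A, b, u=1e-8, n_steps=1000):
        # Remark 3.1: work with A_tilde = A A^T, which is symmetric positive definite
        # whenever det(A) != 0, iterate (2.3') for A_tilde z = b, then set x = A^T z.
        A_tilde = A @ A.T
        m = len(b)
        z = np.zeros(m)
        shifted = u * np.eye(m) + A_tilde
        for _ in range(n_steps):
            z = z + np.linalg.solve(shifted, b - A_tilde @ z)
        return A.T @ z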


REMARK 3.2. For a given problem (2.1), the choice of the parameter u depends on a compromise between accuracy and stability. An efficient strategy to achieve this can be performed by the discrepancy principle (see [10,11], etc.).

4. NUMERICAL ILLUSTRATIONS

PROBLEM 1. Let us consider a well-known ill-conditioned system of linear equations with the m × m Hilbert matrix H_m = (h_ij)_{m×m} of coefficients,

H_m x = b,        (4.1)

where b_i = Σ_{k=1}^m h_ik, i = 1, 2, ..., m. So the exact solution is [1, 1, ..., 1]^T, the stopping criterion is ||x_{n+1} - x_n||_2 < eps, with eps = 0.5 × 10^{-5}, and the maximum number of iterative steps is 500,000. The numerical results obtained by the new iterative improvement (3.2) are listed in Table 3.

Table 3. Number of significant digits of computed solutions for Problem 1.

m                           12    20    50    90
Wilkinson's Solver          No    No    No    No
Our Solver (u = 10^{-5})    6     6     5     5
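The entries of Table 3 could be produced along the following lines; the significant-digit count used here (correct leading decimal digits of the worst component) is an assumption, since the paper does not spell out this bookkeeping.

    import numpy as np
    from scipy.linalg import hilbert, lu_factor, lu_solve

    def significant_digits(x, x_exact):
        # Worst componentwise relative error converted into a count of correct
        # decimal digits; this bookkeeping is an assumption, not taken from the paper.
        err = np.max(np.abs(x - x_exact) / np.abs(x_exact))
        return 16 if err == 0 else int(-np.log10(err))

    m, u, eps = 12, 1e-5, 0.5e-5
    H = hilbert(m)
    b = H @ np.ones(m)                  # b_i = sum_k h_ik, so the exact solution is [1, ..., 1]^T
    x = np.zeros(m)
    lu = lu_factor(u * np.eye(m) + H)   # factorize the shifted matrix once
    for _ in range(500_000):
        y = lu_solve(lu, b - H @ x)     # iteration (3.2)
        x = x + y
        if np.linalg.norm(y) < eps:     # stopping criterion of Problem 1
            break
    print(significant_digits(x, np.ones(m)))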

PROBLEM 2. The matrix of coefficients is the same as in Problem 1. Now the difference is b, with b_i = Σ_{k=1}^m h_ik × k, i = 1, 2, ..., m. Thus, the corresponding exact solution is [1, 2, ..., m]^T. The computed results are listed in Table 4.

Table 4. Number of significant digits of computed solutions for Problem 2.


PROBLEM 3. As another example, we use the iterative improvement solver (3.2) with u = 10^{-6} to deal with the ill-conditioned system of linear equations as follows:

Ax = b,

where A = (a_ij)_{90×90}, b = [b_1, b_2, ..., b_90]^T,

a_ij = 1 for i ≠ j,    a_ii = 1 + p^2,    i, j = 1, 2, ..., 90,    and    b_i = Σ_{k=1}^{90} a_ik, i = 1, 2, ..., 90.

The theoretical solution of the problem is [1, 1, ..., 1]^T. Letting p = 0.5 × 10^{-5}, the spectral condition number of A is κ(A) = (90 + p^2)/p^2 ≈ 3.6 × 10^{12} (see [12]). After seven steps, the new iterative improvement yields the approximate solution with six significant figures. However, after 100,000 steps, Wilkinson's iterative method gives a computed solution with only two significant figures (see Table 5).

Table 5. Number of significant digits and iterative steps of computed solutions for Problem 3.

Method                 Iterative Steps    Number of Significant Digits
Wilkinson's Method     100,000            2
Our Method             7                  6
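The coefficient matrix of Problem 3 is the all-ones matrix with p^2 added to the diagonal, so its eigenvalues are 90 + p^2 (once) and p^2 (with multiplicity 89), which is where the quoted condition number comes from; a short construction sketch (variable names are illustrative):

    import numpy as np

    p, m = 0.5e-5, 90
    A = np.ones((m, m)) + p**2 * np.eye(m)   # a_ij = 1 for i != j, a_ii = 1 + p^2
    b = A @ np.ones(m)                       # b_i = sum_k a_ik, exact solution [1, ..., 1]^T

    # The eigenvalues of A are m + p^2 (once) and p^2 (m - 1 times), so
    # kappa(A) = (m + p^2) / p^2, about 3.6e12 for these values, as quoted above.
    print((m + p**2) / p**2)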

All the numerical results in this paper are obtained with double precision arithmetic.

5. CONCLUSION

Our intention in this paper is to establish a new iterative improvement of the solution for the ill-conditioned system of linear equations; to present our ideas in a particularly beneficial framework, we chose to concentrate on (3.2) in Section 3. The corresponding dynamic system and the convergence of (3.2) are analysed in Sections 2 and 3, respectively. Some numerical experiments are provided to illustrate the new method.

REFERENCES
1. R.S. Martin, G. Peters and J.H. Wilkinson, Symmetric decomposition of a positive definite matrix, Numer. Math. 7, 362-383, (1965).
2. R.S. Martin, G. Peters and J.H. Wilkinson, Iterative refinement of the solution of a positive definite system of equations, In Handbook for Automatic Computation, Volume II, Linear Algebra, (Edited by F.L. Bauer), Springer-Verlag, Berlin, (1971).
3. J.H. Wilkinson, Rounding Errors in Algebraic Processes, Prentice-Hall, Englewood Cliffs, NJ, (1963).
4. J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, London, (1965).
5. A.A. Lorber, G.F. Carey and W.D. Joubert, ODE recursions and iterative solvers for linear equations, SIAM J. Sci. Comput. 17 (1), 65-77, (1996).
6. A.R. Humphries, Spurious solutions of numerical methods for initial value problems, IMA J. Numer. Anal. 13, 263-290, (1993).
7. A.R. Humphries and A.M. Stuart, Runge-Kutta methods for dissipative and gradient dynamical systems, SIAM J. Numer. Anal. 31, 1452-1485, (1994).
8. A. Iserles, A.T. Peplow and A.M. Stuart, A unified approach to spurious solutions introduced by time discretisation, Part I: Basic theory, SIAM J. Numer. Anal. 28, 1723-1751, (1991).
9. A. Iserles and A.M. Stuart, A unified approach to spurious solutions introduced by time discretisation, Part II: BDF-like methods, IMA J. Numer. Anal. 12, 487-502, (1992).
10. G.H. Golub, Numerical methods for solving linear least squares problems, Numer. Math. 7, 206-216, (1965).
11. P.C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM, Philadelphia, PA, (1998).
12. J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, New York, (1980).
