
Numerische Mathematik 4, 368–376 (1962)

Handbook Series Linear Algebra

Calculation of the eigenvectors of a symmetric tridiagonal matrix by inverse iteration

Contributed by

J. H. WILKINSON

1. Theoretical background
The method used for computing the eigenvectors is based on inverse iteration (sometimes called WIELANDT iteration). Since we are given very accurate eigenvalues we would expect two iterations to be more than adequate, provided we can be sure that the initial vector v is not almost completely deficient in the eigenvector we wish to compute. The technique embodies a method of choosing the initial vector which is very effective in this respect.
Suppose v is taken as the initial vector; then to find the eigenvector corresponding to a given value of λ using two steps of inverse iteration we must solve

    (P − λI) x = v    (1)

followed by

    (P − λI) y = x    (2)

where P is the symmetric tridiagonal matrix having diagonal elements c_i and super-diagonal elements b_i.
Effectively this requires a knowledge of the inverse of (P − λI) or equivalent information. If we write

    (P − λI) = L U    (3)

where L is unit lower triangular and U is upper triangular, then a knowledge of L and U enables us to solve (1) or (2) by a forward and a backward substitution.
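For orientation, the two steps (1) and (2) can be written in a few lines of present-day code. The following NumPy sketch is an editorial illustration and not part of the original procedure: it uses a general dense solver and a function name of my own, whereas the method described here factorises (P − λI) once and re-uses the factors for both steps.

import numpy as np

def two_step_inverse_iteration(P, lam, v):
    # Solve (P - lam*I) x = v, then (P - lam*I) y = x, and normalise y;
    # this is exactly the pair of systems (1) and (2).
    A = P - lam * np.eye(P.shape[0])
    x = np.linalg.solve(A, v)
    y = np.linalg.solve(A, x)
    return y / np.linalg.norm(y)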
The matrices L and U may be determined by Gaussian elimination, but in order to ensure numerical stability some form of pivoting is essential. The most convenient technique is to eliminate the variables in their natural order but to select as pivotal row at the i-th stage the row having the maximum coefficient of x_i. At each stage there are only two such rows. One is the i-th reduced row and the other is the (i+1)-th row of (P − λI) x = v, which is as yet unchanged. We denote the former by

    u_i x_i + v_i x_{i+1}    (4)

and the latter is

    b_i x_i + (c_{i+1} − λ) x_{i+1} + b_{i+1} x_{i+2}.    (5)

When i = 1, (4) is the first row (c_1 − λ) x_1 + b_1 x_2.
If |u_i| > |b_i| then (4) is taken as the pivotal row; otherwise (5) is taken. In either case the pivotal row is denoted by

    p_i x_i + q_i x_{i+1} + r_i x_{i+2}.    (6)

Hence we have, when (4) is the pivotal row,

    p_i = u_i;   q_i = v_i;   r_i = 0;
    m_{i+1} = b_i / u_i;
    u_{i+1} = (c_{i+1} − λ) − m_{i+1} v_i;   v_{i+1} = b_{i+1},

and, when (5) is the pivotal row (rows i and i+1 interchanged),

    p_i = b_i;   q_i = c_{i+1} − λ;   r_i = b_{i+1};
    m_{i+1} = u_i / b_i;
    u_{i+1} = v_i − m_{i+1} (c_{i+1} − λ);   v_{i+1} = − m_{i+1} b_{i+1}.    (7)

The pivoting at the i-th stage is at most a simple interchange of the i-th and (i+1)-th rows. Provided we remember these interchanges, the elements m_i, p_i, q_i and r_i give us sufficient information to solve

    (P − λI) x = v    (8)

with any right-hand side v, by the appropriate forward and backward substitutions. We may write (1) and (2) in the form

    L U x = v;   L U y = x    (9)

provided we include the interchanges in L.
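For a present-day reader the elimination (7) is perhaps easiest to see in executable form. The following Python (NumPy) sketch records the multipliers m, the rows p, q, r of U and the interchanges; the names and 0-based indexing are mine, and the extra safeguards of the published procedure in section 4 (for instance the treatment of zero multipliers) are omitted.

def decompose(c, b, lam, eps):
    # c: diagonal (length n), b: super-diagonal (length n-1), lam: the shift,
    # eps: a suitably small quantity used in place of exact zeros.
    n = len(c)
    m = np.zeros(n)                        # m[i]: multiplier used at stage i
    p = np.zeros(n); q = np.zeros(n); r = np.zeros(n)
    flag = np.zeros(n, dtype=bool)         # flag[i]: rows i-1 and i were interchanged
    u = c[0] - lam
    v = b[0] if n > 1 and b[0] != 0.0 else eps
    for i in range(n - 1):
        bi = b[i] if b[i] != 0.0 else eps          # guard a zero sub-diagonal entry
        bi1 = b[i + 1] if i + 1 < n - 1 else 0.0   # beyond the matrix: treat as 0
        if abs(bi) >= abs(u):                      # row below has the larger pivot
            m[i + 1] = u / bi                      # interchange rows i and i+1
            p[i], q[i], r[i] = bi, c[i + 1] - lam, bi1
            u = v - m[i + 1] * q[i]
            v = -m[i + 1] * bi1
            flag[i + 1] = True
        else:                                      # keep the reduced row as pivot
            m[i + 1] = bi / u
            p[i], q[i], r[i] = u, v, 0.0
            u = c[i + 1] - lam - m[i + 1] * v
            v = bi1
    p[n - 1] = u                                   # last pivot; q[n-1], r[n-1] stay 0
    return m, p, q, r, flag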
Now if we take

    v = L e    (10)

where e = (1, 1, ..., 1), then L U x = L e and hence

    U x = e.    (11)

With this choice of v, x is determined by a back-substitution only; we have no need to determine L e explicitly. Having obtained x we can then find the second iterated vector y by a forward and a backward substitution.
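Continuing the sketch above, the back-substitution for U x = e and the forward substitution that accounts for the interchanges might read as follows; the function names are mine, and a zero pivot is simply replaced by a small quantity eps, as discussed at the end of this section.

def solve_U(p, q, r, eps, rhs):
    # Back-substitution for U x = rhs, where row i of U is
    # p[i] x[i] + q[i] x[i+1] + r[i] x[i+2]; a zero pivot is replaced by eps.
    n = len(p)
    x = np.zeros(n + 2)                      # two zero padding entries at the end
    for i in range(n - 1, -1, -1):
        num = rhs[i] - q[i] * x[i + 1] - r[i] * x[i + 2]
        x[i] = num / (p[i] if p[i] != 0.0 else eps)
    return x[:n]

def forward_with_interchanges(m, flag, x):
    # Forward substitution: solve L z = x, taking the recorded interchanges into account.
    x = x.copy()
    for i in range(1, len(x)):
        if flag[i]:                          # rows i-1 and i were interchanged
            x[i - 1], x[i] = x[i], x[i - 1] - m[i] * x[i]
        else:
            x[i] = x[i] - m[i] * x[i - 1]
    return x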
It should be appreciated that (P − λI) is almost singular for each of our values of λ. Hence if the elements of P are of order unity there is a tendency for the components of x to be much larger than those of v and for those of y to be larger than those of x. Indeed if this were not true the process would be unsuccessful.

As a simple example we take the matrix

        ( .2   .1    0 )
    P = ( .1   .2   .1 )    (12)
        (  0   .1   .2 )

of which an eigenvalue is 0.2. Suppose we take the approximation λ = 0.2001 and work to 4 figures. We have

    m_2 = −.0010  (interchange)    p_1 = .1000;  q_1 = −.0001;  r_1 = .1000
    m_3 =  1.0000  (interchange)   p_2 = .1000;  q_2 = −.0001;  r_2 = 0
                                   p_3 = .0002;  q_3 = 0;       r_3 = 0.

The first iterate x is therefore the solution of

    .1000 x_1 − .0001 x_2 + .1000 x_3 = 1                x_1 = −4990
               .1000 x_2 − .0001 x_3 = 1     giving      x_2 =    15
                            .0002 x_3 = 1                x_3 =  5000.

We now perform the forward substitution on x using the m_i and the interchanges to give the set

    .1000 y_1 − .0001 y_2 + .1000 y_3 =    15            y_1 = +4.995 × 10^7
               .1000 y_2 − .0001 y_3 =  5000   giving    y_2 =        50
                            .0002 y_3 = −9990            y_3 = −4.995 × 10^7.

The normalised y is therefore a correct eigenvector to 4 decimals, the working precision. We would obtain just as good a result after two iterations with an appreciably poorer λ.
If the value λ = .2000 is taken, then p_3 = 0 and the first step in the back-substitution involves a division by zero. Clearly it is adequate to replace p_3 by some suitably small quantity, and an appropriate choice is 2^−t ‖P‖∞ for computation with t binary digits.
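Running the sketches above on the matrix (12) with λ = 0.2001, but in double precision rather than 4-figure arithmetic, gives a normalised second iterate agreeing with (1, 0, −1)/√2 to working accuracy; the intermediate numbers of course differ from the 4-figure values shown. The value of eps below is my own choice of the small quantity just mentioned.

c = np.array([0.2, 0.2, 0.2]); b = np.array([0.1, 0.1])
lam = 0.2001
eps = np.finfo(float).eps * 0.4                   # roughly macheps times ||P||_inf
m, p, q, r, flag = decompose(c, b, lam, eps)
x = solve_U(p, q, r, eps, np.ones(3))             # first iterate: U x = e
y = solve_U(p, q, r, eps, forward_with_interchanges(m, flag, x))
print(y / np.linalg.norm(y))                      # approximately (1, 0, -1)/sqrt(2)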

2. Applicability

tridiinverse iteration may be used to determine the eigenvectors of a symmetric tridiagonal matrix, m1 eigenvalues of which are known.
Notice that the eigenvectors corresponding to multiple eigenvalues are not, in general, orthogonal if computed by this method.

3. Formal parameter list

The vector c is the diagonal and the vector b the subdiagonal of a symmetric matrix of order n. The vector w contains m1 eigenvalues in decreasing order. norm is the infinity norm of the tridiagonal matrix and macheps the relative machine precision. The m1 eigenvectors are stored as the matrix z, where z[i, j] is the i-th component of the eigenvector corresponding to w[j].

4. Algol program
procedure tridiinverse iteration (c, b, n, w, norm, m1, macheps) result: (z);
    value n, m1, norm, macheps;
    integer n, m1;
    real norm, macheps;
    array c, b, w, z;
    comment c is the diagonal and b the subdiagonal of a symmetric matrix of order n. The vector w of order m1 contains eigenvalues computed by some technique which ensures high accuracy. The eigenvalues are assumed to be in decreasing order. norm is the infinity norm of the tridiagonal matrix and macheps is the relative machine precision. The m1 eigenvectors corresponding to the eigenvalues are computed by inverse iteration and stored as the matrix z, z[i, j] being the i-th component of the eigenvector which belongs to w[j];
begin
    integer i, j;
    real bi, bi1, z1, lambda, u, s, v, h, eps, eta;
    array m, p, q, r, int[1:n], x[1:n+2];
    lambda := norm;  eps := macheps × norm;
    for j := 1 step 1 until m1 do
    begin
        lambda := lambda - eps;
        if w[j] < lambda then lambda := w[j];
        u := c[1] - lambda;  v := b[1];
        if v = 0 then v := eps;
        for i := 1 step 1 until n - 1 do
        begin
            bi := b[i];
            if bi = 0 then bi := eps;    comment In order to insure independency in degenerate cases;
            bi1 := b[i + 1];
            if bi1 = 0 then bi1 := eps;
            if abs(bi) ≥ abs(u) then
            begin
                m[i + 1] := u/bi;
                if m[i + 1] = 0 ∧ bi ≤ eps then m[i + 1] := 1;    comment In order to avoid the multiplicator zero in the decomposing case;
                p[i] := bi;  q[i] := c[i + 1] - lambda;  r[i] := bi1;
                u := v - m[i + 1] × q[i];
                v := - m[i + 1] × bi1;
                int[i + 1] := +1
            end
            else
            begin
                m[i + 1] := bi/u;
                p[i] := u;  q[i] := v;  r[i] := 0;
                u := c[i + 1] - lambda - m[i + 1] × v;
                v := bi1;  int[i + 1] := -1
            end
        end i;
        p[n] := u;  q[n] := r[n] := 0;
        x[n + 1] := x[n + 2] := 0;  h := 0;  eta := 1/n;
        for i := n step -1 until 1 do
        begin
            u := eta - q[i] × x[i + 1] - r[i] × x[i + 2];
            if p[i] = 0 then x[i] := u/eps
            else x[i] := u/p[i];
            h := h + abs(x[i])
        end i;
        h := 1/h;
        for i := 1 step 1 until n do
            x[i] := x[i] × h;
        for i := 2 step 1 until n do
        begin
            if int[i] > 0 then
            begin
                u := x[i - 1];  x[i - 1] := x[i];
                x[i] := u - m[i] × x[i]
            end
            else x[i] := x[i] - m[i] × x[i - 1]
        end i;
        h := 0;
        for i := n step -1 until 1 do
        begin
            u := x[i] - q[i] × x[i + 1] - r[i] × x[i + 2];
            if p[i] = 0 then x[i] := u/eps
            else x[i] := u/p[i];
            h := h + x[i] × x[i]
        end i;
        h := 1/sqrt(h);
        for i := 1 step 1 until n do
            z[i, j] := x[i] × h
    end j
end;
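For readers without an ALGOL compiler, the main loop of the procedure can be paraphrased in Python using the sketches from section 1. This is a simplified, hedged transcription and not the published code: the handling of coincident eigenvalues (shifting successive values of lambda apart by eps) follows the same idea, but several safeguards of the ALGOL text are omitted and all names are mine.

def tridi_inverse_iteration(c, b, w, norm, macheps):
    # c: diagonal, b: super-diagonal, w: eigenvalues in decreasing order,
    # norm: infinity norm of the matrix, macheps: relative machine precision.
    n = len(c)
    eps = macheps * norm
    z = np.zeros((n, len(w)))
    lam = norm
    for j, wj in enumerate(w):
        lam = min(wj, lam - eps)              # keep successive shifts distinct
        m, p, q, r, flag = decompose(c, b, lam, eps)
        x = solve_U(p, q, r, eps, np.full(n, 1.0 / n))   # U x = e (scaled)
        x /= np.sum(np.abs(x))                           # guard against overflow
        y = solve_U(p, q, r, eps, forward_with_interchanges(m, flag, x))
        z[:, j] = y / np.linalg.norm(y)
    return z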

5. Organisational and notational details

6. Discussion of numerical properties


Pathologically close zeros. The technique we have described gives one eigenvector corresponding to each distinct eigenvalue. Corresponding to a multiple eigenvalue, or eigenvalues which are coincident to working accuracy, some device is needed to produce additional eigenvectors which span the appropriate subspace. In theory exactly coincident eigenvalues can exist only if at least one b_i is zero, but in practice we can have eigenvalues which are equal to working accuracy without any b_i being pathologically small (cf. the example in section 6 of the article "Calculation of the eigenvalues of a symmetric tridiagonal matrix by the method of bisection").
Pathologically close or coincident eigenvalues always imply that the inverse iteration process is extremely sensitive to the value of λ which is used, since we can guarantee only that the computed vector will lie in the appropriate subspace. Consider for example the matrix

    ( 3  0  0 )
    ( 0  2  1 )
    ( 0  1  2 ),
which has 3 as an eigenvalue of multiplicity 2, having (1, 0, 0) and (0, 1, 1) as independent eigenvectors. If we replace the zero off-diagonal elements by 10^−t and then perform the above process with λ = 3, the vector obtained after the first iteration is

    (2 × 10^t,  10^t + 1,  10^t),

which is approximately (2, 1, 1)/√6 in normalised form.

On the other hand, if we take λ = 3 − 10^−t the vector obtained has its second and third components approximately equal to 10^t, so that in normalised form it is close to (0, 1, 1)/√2,

while λ = 3 − 2 × 10^−t gives

    (−(1/7) × 10^t − 2/7,  (2/7) × 10^t + 4/7,  (2/7) × 10^t),

which is approximately (−1, 2, 2)/3 in normalised form.

Each of these vectors lies in the correct subspace, though small variations in λ have produced very different vectors. The third root, 1, is an isolated eigenvalue and it will be found that any value of λ close to 1 gives the unique eigenvector (0, 1, −1).
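A small experiment along these lines can be run with the Python sketches from section 1. The figures it prints depend on the arithmetic used and need not reproduce those quoted above; the point is only that all the computed vectors lie, to working accuracy, in the subspace spanned by (1, 0, 0) and (0, 1, 1), yet point in markedly different directions. The choice delta = 10^−7 stands in for 10^−t and is my own.

delta = 1.0e-7                                    # plays the role of 10^(-t)
c = np.array([3.0, 2.0, 2.0]); b = np.array([delta, 1.0])
eps = np.finfo(float).eps * 3.0                   # roughly macheps times ||P||_inf
for lam in (3.0, 3.0 - delta, 3.0 - 2.0 * delta):
    m, p, q, r, flag = decompose(c, b, lam, eps)
    x = solve_U(p, q, r, eps, np.ones(3))         # first iterate only
    print(lam, x / np.linalg.norm(x))             # all lie in span{(1,0,0), (0,1,1)}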
Although slightly different values of λ in the neighbourhood of a multiple eigenvalue give very different eigenvectors lying in the appropriate subspace, there is a danger that different eigenvectors obtained in this way will not give full digital information on the subspace. Suppose we have obtained two vectors u_1 and u_2 with Euclidean norms equal to unity. It may happen that the vector v_2 defined by

    v_2 = u_2 − k u_1,    v_2^T u_1 = 0,

has a Euclidean norm which is appreciably smaller than unity. The direction orthogonal to u_1 in the subspace of eigenvectors will then be poorly defined. In practice it was found that, using values of λ differing by 2^−t ‖P‖∞, it was uncommon to lose more than one decimal when orthogonalisation was performed.
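The check described here is straightforward to carry out; a minimal sketch (the function name is mine) follows. A returned norm close to unity means that u_1 and u_2 between them give full digital information on the subspace.

def orthogonalise(u1, u2):
    # u1, u2: computed eigenvectors of unit Euclidean length.
    k = np.dot(u2, u1)
    v2 = u2 - k * u1                  # v2 is orthogonal to u1
    return v2, np.linalg.norm(v2)     # a norm much below 1 signals poor separation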
After computing one eigenvector u_1 in the subspace we could take as initial vector in the inverse iteration a vector which is orthogonal to u_1, and might reasonably expect that this would make it more probable that the second eigenvector would be completely independent of u_1. Similarly for eigenvalues of higher multiplicity. In practice this has been found to give a slight additional safeguard. However, it is not included in the present algorithm.

7. Examples of the use of the procedure

If the given eigenvalues were calculated by a bisection method (cf. "Calculation of the eigenvalues of a symmetric tridiagonal matrix by the method of bisection") or a Newton type method, no information about the eigenvectors will normally exist and it is appropriate to compute them by WIELANDT's iteration.
However, even when the eigenvalues have been computed by methods, such as the QD or QR algorithms, employing unitary transformations, so that the eigenvectors could be obtained from these transformations, it may still be more economical to use inverse iteration.

8. Test results

The procedure was tested on the Z 22 and Siemens 2002 in the Institute for Applied Mathematics at Mainz, and a second test was performed in the Institute for Applied Mathematics of the Eidgenössische Technische Hochschule at Zürich.
We give the eigenvectors of the tridiagonal matrices and of the original matrices for the examples of section 8, Test results, of "Householder's method" and "Calculation of the eigenvalues of a symmetric tridiagonal matrix by the method of bisection" respectively. They were calculated on the Siemens 2002 (macheps ≈ 10^−9). In the second example, the two eigenvectors corresponding to each double eigenvalue are linearly independent, but not exactly orthogonal.

Table: eigenvectors of the tridiagonal matrix A, of the matrix A, of the tridiagonal matrix B and of the matrix B (computed eigenvector components).

Acknowledgements. The work described here has been carried out as part of the research programme of the National Physical Laboratory and is published by permission of the Director of the Laboratory. I wish to thank Professors F. L. BAUER and H. RUTISHAUSER for numerous suggestions, the incorporation of which added materially to the effectiveness of these procedures. Particular thanks are also due to E. MANN of Gutenberg University, Mainz, who assisted in the preparation of the manuscript and supervised the incorporation of all the modifications suggested by Professors BAUER and RUTISHAUSER.

References
[1] WILKINSON, J. H.: Calculation of the eigenvectors of co-diagonal matrices. Computer J. 1, 90 (1958).
[2] WIELANDT, H.: Bericht d. Aerodynamischen Versuchsanstalt Göttingen, 44/J/37 (1944).
National Physical Laboratory
Teddington, Middlesex
