Numerical Analysis
Lecture 16
Chapter 4: Eigenvalue Problems
Let $[A]$ be an $n \times n$ square matrix. Suppose there exists a scalar $\lambda$ and a vector $X = (x_1 \; x_2 \; \cdots \; x_n)^T$ such that
$$[A](X) = \lambda (X).$$
Then $\lambda$ is an eigenvalue and $X$ is the corresponding eigenvector of the matrix $[A]$.
We can also write it as
$$[A - \lambda I](X) = (O).$$
This represents a set of $n$ homogeneous equations, which possesses a non-trivial solution provided
$$|A - \lambda I| = 0.$$
This determinant, on expansion, gives an $n$-th degree polynomial, called the characteristic polynomial of $[A]$, which has $n$ roots. Corresponding to each root, we can in principle solve these equations and determine the associated eigenvector.
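For example, for the real symmetric matrix
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},$$
the characteristic polynomial is
$$|A - \lambda I| = (2-\lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = (\lambda - 1)(\lambda - 3),$$
so the eigenvalues are $\lambda_1 = 3$ and $\lambda_2 = 1$, with corresponding eigenvectors $(1 \; 1)^T$ and $(1 \; {-1})^T$.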
Finding the roots of the characteristic equation is laborious, so we look for methods better suited to computation. Depending upon the type of matrix $[A]$ and on what one is looking for, various numerical methods are available, for example:
Power Method
Jacobi’s Method
Note! We shall consider only real and real-symmetric matrices, and discuss the Power and Jacobi's methods.
Power Method
To compute the numerically largest eigenvalue and the corresponding eigenvector of the system
$$[A](X) = \lambda (X),$$
where $[A]$ is a real, symmetric or unsymmetric matrix, the power method is widely used in practice.
Procedure
Step 1: Choose the initial vector $v^{(0)}$ such that its largest element is unity.
Step 2: The normalized vector $v^{(0)}$ is pre-multiplied by the matrix $[A]$.
Step 3: The resultant vector is again normalized. This process of pre-multiplication and normalization is repeated until the eigenvalue estimates converge.
Writing the initial vector $v$ as a linear combination of the eigenvectors $v_1, v_2, \ldots, v_n$ of $[A]$, $v = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$, we get
$$Av = \lambda_1 \left( c_1 v_1 + c_2 \frac{\lambda_2}{\lambda_1} v_2 + \cdots + c_n \frac{\lambda_n}{\lambda_1} v_n \right).$$
Again, pre-multiplying by $A$ and simplifying, we obtain
$$A^2 v = \lambda_1^2 \left( c_1 v_1 + c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^2 v_2 + \cdots + c_n \left(\frac{\lambda_n}{\lambda_1}\right)^2 v_n \right).$$
Similarly, we have
$$A^r v = \lambda_1^r \left( c_1 v_1 + c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^r v_2 + \cdots + c_n \left(\frac{\lambda_n}{\lambda_1}\right)^r v_n \right)$$
and
$$A^{r+1} v = \lambda_1^{r+1} \left( c_1 v_1 + c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^{r+1} v_2 + \cdots + c_n \left(\frac{\lambda_n}{\lambda_1}\right)^{r+1} v_n \right).$$
Now, assuming $|\lambda_1| > |\lambda_2| \geq \cdots \geq |\lambda_n|$, the ratios $(\lambda_p/\lambda_1)^r$ tend to zero as $r$ grows, and the dominant eigenvalue $\lambda_1$ can be computed as the limit of the ratio of the corresponding components of $A^{r+1}v$ and $A^r v$. That is,
$$\lambda_1 = \lim_{r \to \infty} \frac{\left(A^{r+1} v\right)_p}{\left(A^{r} v\right)_p}, \quad p = 1, 2, \ldots, n.$$
Here, the index $p$ stands for the $p$-th component in the corresponding vector.
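A minimal sketch of this procedure in Python with NumPy (the example matrix, iteration cap, and tolerance are illustrative assumptions, not part of the notes):

```python
import numpy as np

def power_method(A, num_iters=100, tol=1e-10):
    """Estimate the dominant eigenvalue and eigenvector of A.

    Follows the procedure above: start from a vector whose largest
    element is unity, pre-multiply by A, and re-normalize so that the
    largest element is again unity. The normalizing factor converges
    to the dominant eigenvalue lambda_1.
    """
    n = A.shape[0]
    v = np.ones(n)              # Step 1: initial vector, largest element unity
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v               # Step 2: pre-multiply by A
        lam_new = w[np.argmax(np.abs(w))]  # normalizing factor (largest element)
        v = w / lam_new         # Step 3: normalize so largest element is unity
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v

# Illustrative usage with an assumed 2 x 2 symmetric matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_method(A)
print(lam, v)   # expected: lambda ~ 3, v ~ (1, 1)
```

Normalizing by the numerically largest component at every step keeps the iterates bounded, and the normalizing factor itself converges to $\lambda_1$.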
Sometimes, we may be interested in finding the eigenvalue of least magnitude and the corresponding eigenvector. In that case, we proceed as follows.
We note that $[A](X) = \lambda (X)$. Pre-multiplying by $[A^{-1}]$, we get
$$[A^{-1}][A](X) = [A^{-1}] \lambda (X) = \lambda [A^{-1}](X),$$
which can be rewritten as
$$[A^{-1}](X) = \frac{1}{\lambda}(X),$$
which shows that the inverse matrix has a set of eigenvalues which are the reciprocals of the eigenvalues of $[A]$.
Thus, for finding the eigenvalue of least magnitude of the matrix $[A]$, we apply the power method to the inverse of $[A]$: the dominant eigenvalue of $[A^{-1}]$ is the reciprocal of the least-magnitude eigenvalue of $[A]$.
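A minimal sketch of this idea, reusing the power_method function above (forming the explicit inverse mirrors the text; in practice one would instead factor $A$ once and solve a linear system at each step):

```python
import numpy as np

def least_eigenvalue(A, num_iters=100):
    """Estimate the eigenvalue of A of least magnitude by applying
    the power method to the inverse of A, as described above."""
    A_inv = np.linalg.inv(A)    # eigenvalues of A_inv are the reciprocals 1/lambda
    mu, v = power_method(A_inv, num_iters)
    return 1.0 / mu, v          # invert back: least-magnitude eigenvalue of A

# Illustrative usage with an assumed 2 x 2 symmetric matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam_min, v = least_eigenvalue(A)
print(lam_min)   # expected: about 2.38, the smaller eigenvalue of A
```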
Jacobi's Method
Definition
An $n \times n$ matrix $[A]$ is said to be orthogonal if
$$[A]^T [A] = [I], \quad \text{i.e. } [A]^T = [A]^{-1}.$$
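A quick numerical check of this definition (the rotation matrix below is an assumed example; plane rotations of exactly this form reappear in the construction that follows):

```python
import numpy as np

# A plane rotation matrix is orthogonal: S^T S = I, i.e. S^T = S^{-1}
theta = 0.3
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(S.T @ S, np.eye(2)))     # True
print(np.allclose(S.T, np.linalg.inv(S)))  # True
```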
In order to compute all the eigenvalues and the corresponding eigenvectors of a real symmetric matrix, Jacobi's method is highly recommended. It is based on an important property from matrix theory, which states that if $[A]$ is an $n \times n$ real symmetric matrix, its eigenvalues are real, and there exists an orthogonal matrix $[S]$ such that
$$D = [S]^{-1}[A][S]$$
is diagonal.
This diagonalization can be carried out by applying a series of orthogonal transformations $S_1, S_2, S_3, \ldots$, as follows.
Let $A$ be an $n \times n$ real symmetric matrix. Suppose $a_{ij}$ is numerically the largest element amongst the off-diagonal elements of $A$. We construct an orthogonal matrix $S_1$ defined as
$$s_{ij} = -\sin\theta, \quad s_{ji} = \sin\theta, \quad s_{ii} = \cos\theta, \quad s_{jj} = \cos\theta,$$
while each of the remaining off-diagonal elements is zero and the remaining diagonal elements are taken to be unity. Thus, we construct $S_1$ as under:
$$S_1 = \begin{pmatrix}
1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\
\vdots & & \vdots & & \vdots & & \vdots \\
0 & \cdots & \cos\theta & \cdots & -\sin\theta & \cdots & 0 \\
\vdots & & \vdots & & \vdots & & \vdots \\
0 & \cdots & \sin\theta & \cdots & \cos\theta & \cdots & 0 \\
\vdots & & \vdots & & \vdots & & \vdots \\
0 & \cdots & 0 & \cdots & 0 & \cdots & 1
\end{pmatrix},$$
where $\cos\theta$, $-\sin\theta$, $\sin\theta$ and $\cos\theta$ are inserted in the $(i,i)$, $(i,j)$, $(j,i)$ and $(j,j)$-th positions respectively (the $i$-th and $j$-th rows and columns), and elsewhere it is identical with a unit matrix.
Now, we compute
$$D_1 = S_1^{-1} A S_1 = S_1^T A S_1,$$
since $S_1$ is an orthogonal matrix, so that $S_1^{-1} = S_1^T$.
After the transformation, the elements at the positions $(i, j)$ and $(j, i)$ get annihilated, that is, $d_{ij}$ and $d_{ji}$ reduce to zero, which is seen as follows:
$$\begin{pmatrix} d_{ii} & d_{ij} \\ d_{ji} & d_{jj} \end{pmatrix}
= \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} a_{ii} & a_{ij} \\ a_{ij} & a_{jj} \end{pmatrix}
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
$$= \begin{pmatrix}
a_{ii}\cos^2\theta + 2a_{ij}\sin\theta\cos\theta + a_{jj}\sin^2\theta & (a_{jj} - a_{ii})\sin\theta\cos\theta + a_{ij}\cos 2\theta \\
(a_{jj} - a_{ii})\sin\theta\cos\theta + a_{ij}\cos 2\theta & a_{ii}\sin^2\theta + a_{jj}\cos^2\theta - 2a_{ij}\sin\theta\cos\theta
\end{pmatrix}.$$
Therefore, $d_{ij} = 0$ only if
$$a_{ij}\cos 2\theta + \frac{a_{jj} - a_{ii}}{2}\sin 2\theta = 0.$$
That is, if
$$\tan 2\theta = \frac{2 a_{ij}}{a_{ii} - a_{jj}}.$$
Thus, we choose $\theta$ such that the above equation is satisfied; thereby, the pair of off-diagonal elements $d_{ij}$ and $d_{ji}$ reduces to zero.
However, though each rotation creates a new pair of zeros, it also introduces non-zero contributions at formerly zero positions.
Also, the above equation gives four values of $\theta$, but to get the least possible rotation, we choose
$$-\frac{\pi}{4} \leq \theta \leq \frac{\pi}{4}.$$
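A minimal sketch of a single such rotation in Python (the example matrix is assumed). The limiting case $a_{ii} = a_{jj}$, where $\tan 2\theta$ blows up, is handled by taking $\theta = \pm\pi/4$:

```python
import numpy as np

def jacobi_rotation(A, i, j):
    """Build the orthogonal rotation S1 that annihilates A[i, j] and A[j, i],
    and return it together with the rotated matrix D1 = S1^T A S1."""
    n = A.shape[0]
    if A[i, i] != A[j, j]:
        theta = 0.5 * np.arctan(2.0 * A[i, j] / (A[i, i] - A[j, j]))
    else:
        theta = np.pi / 4.0 * np.sign(A[i, j])   # limiting case a_ii = a_jj
    S = np.eye(n)                 # identity, except the four trig entries
    S[i, i] = S[j, j] = np.cos(theta)
    S[i, j] = -np.sin(theta)
    S[j, i] = np.sin(theta)
    return S, S.T @ A @ S

# Illustrative usage with an assumed 2 x 2 symmetric matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
S, D = jacobi_rotation(A, 0, 1)
print(np.round(D, 10))   # the (0, 1) and (1, 0) entries are annihilated
```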
As a next step, the numerically largest off-diagonal element in the newly obtained rotated matrix $D_1$ is identified and the above procedure is repeated using another orthogonal matrix $S_2$ to get $D_2$. That is, we obtain
$$D_2 = S_2^{-1} D_1 S_2 = S_2^T (S_1^T A S_1) S_2.$$
Similarly, we perform a series of such two-dimensional rotations or orthogonal transformations. After making $r$ transformations, we obtain
$$D_r = S_r^{-1} S_{r-1}^{-1} \cdots S_2^{-1} S_1^{-1} A S_1 S_2 \cdots S_{r-1} S_r$$
$$= (S_1 S_2 \cdots S_{r-1} S_r)^{-1} A (S_1 S_2 \cdots S_{r-1} S_r) = S^{-1} A S.$$
Now, as $r \to \infty$, $D_r$ approaches a diagonal matrix, with the eigenvalues on the main diagonal. The corresponding eigenvectors are the columns of $S$.
It is estimated that the minimum number of rotations required to transform the given $n \times n$ real symmetric matrix $[A]$ into diagonal form is $n(n-1)/2$.
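Putting the pieces together, a minimal sketch of the full iteration, reusing the jacobi_rotation function above (the tolerance, rotation cap, and example matrix are illustrative assumptions):

```python
import numpy as np

def jacobi_eigen(A, tol=1e-10, max_rotations=100):
    """Diagonalize a real symmetric matrix by repeated Jacobi rotations.

    Returns the approximate eigenvalues and the accumulated orthogonal
    matrix S whose columns are the corresponding eigenvectors."""
    D = A.copy()
    n = A.shape[0]
    S = np.eye(n)
    for _ in range(max_rotations):
        # Locate the numerically largest off-diagonal element of D
        off = np.abs(D - np.diag(np.diag(D)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] < tol:
            break                      # D is (nearly) diagonal
        S_k, D = jacobi_rotation(D, i, j)
        S = S @ S_k                    # accumulate S = S1 S2 ... Sr
    return np.diag(D), S

# Illustrative usage with an assumed 3 x 3 real symmetric matrix
A = np.array([[ 4.0,  1.0, -1.0],
              [ 1.0,  3.0,  2.0],
              [-1.0,  2.0,  5.0]])
eigvals, S = jacobi_eigen(A)
print(eigvals)               # eigenvalues from the main diagonal of D_r
print(np.round(S.T @ S, 8))  # S stays orthogonal
```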