Eigenvalues
INTRODUCTION
Consider a scalar matrix Z, obtained by multiplying an identity matrix by a scalar c; i.e., Z = cI.
Subtracting this from a square matrix A gives a new matrix A - cI:

    Equation 1:  A - Z = A - cI.

If c is chosen so that

    Equation 2:  |A - cI| = 0,

then A has been transformed into a singular matrix. The problem of finding the values of c that
transform a nonsingular (regular) matrix A into a singular one in this way is referred to as the
eigenvalue problem.
Subtracting cI from A is equivalent to subtracting the scalar c from each entry on the main
diagonal of A. The determinant of the new matrix vanishes only for certain specific values of c,
and the trace of A turns out to equal the sum of these values. For which values of c does the
determinant vanish?
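As a quick numerical illustration (a sketch in Python/NumPy, using the matrix worked out in
Example 1 below):

    import numpy as np

    # Matrix taken from Example 1 below.
    A = np.array([[13.0, -4.0],
                  [-4.0,  7.0]])

    # The values of c for which det(A - c*I) = 0 are exactly the eigenvalues of A.
    cs = np.linalg.eigvals(A)
    for c in cs:
        print(c, np.linalg.det(A - c * np.eye(2)))   # determinant vanishes (up to rounding)

    # The trace of A equals the sum of these values of c.
    print(np.trace(A), cs.sum())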
EXAMPLES
If, for a square matrix A, there is a nonzero vector x such that

    A x = λ x                                                          (1)

for some scalar λ, then λ is called an eigenvalue of A with corresponding (right) eigenvector x.
Letting A = (a_ij) be an n × n matrix with eigenvalue λ and writing x = (x_1, ..., x_n),
equation (1) in component form is

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = λ x_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = λ x_2                       (2)
        ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = λ x_n,

which is equivalent to the homogeneous system

    (a_11 - λ) x_1 + a_12 x_2 + ... + a_1n x_n = 0
    a_21 x_1 + (a_22 - λ) x_2 + ... + a_2n x_n = 0                     (3)
        ...
    a_n1 x_1 + a_n2 x_2 + ... + (a_nn - λ) x_n = 0.

In matrix form, (3) reads

    [ a_11 - λ   a_12      ...   a_1n     ] [ x_1 ]   [ 0 ]
    [ a_21       a_22 - λ  ...   a_2n     ] [ x_2 ] = [ 0 ]            (4)
    [   ...                       ...     ] [ ... ]   [ ... ]
    [ a_n1       a_n2      ...   a_nn - λ ] [ x_n ]   [ 0 ]

which can be written compactly as

    (A - λI) x = 0,                                                    (5)

where I is the identity matrix. As shown in Cramer's rule, a homogeneous linear system of
equations has nontrivial solutions iff the determinant of its coefficient matrix vanishes, so the
solutions of equation (5) are given by
    det(A - λI) = 0.                                                   (6)

This equation is known as the characteristic equation of A, and the left-hand side is known as the
characteristic polynomial. In the notation of the introduction, |A - cI| vanishes exactly when c is
one of these eigenvalues.
For example, for a 2 × 2 matrix the eigenvalues are

    λ± = (1/2) [ (a_11 + a_22) ± sqrt( (a_11 - a_22)^2 + 4 a_12 a_21 ) ],     (7)

which arise as the solutions of the characteristic equation

    λ^2 - (a_11 + a_22) λ + (a_11 a_22 - a_12 a_21) = 0.               (8)
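These formulas can be checked numerically; a small Python/NumPy sketch with arbitrarily chosen
entries a_11, a_12, a_21, a_22:

    import numpy as np

    # Entries of an illustrative 2 x 2 matrix (values chosen arbitrarily).
    a11, a12, a21, a22 = 3.0, 2.0, 1.0, 4.0
    A = np.array([[a11, a12],
                  [a21, a22]])

    # Eigenvalues from formula (7).
    disc = np.sqrt((a11 - a22) ** 2 + 4.0 * a12 * a21)
    lam = [0.5 * ((a11 + a22) - disc), 0.5 * ((a11 + a22) + disc)]

    print(lam)                               # [2.0, 5.0]
    print(np.sort(np.linalg.eigvals(A)))     # [2. 5.]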
If all n eigenvalues are different, then plugging these back in gives n - 1 independent equations
for the n components of each corresponding eigenvector, and the system is said to be
nondegenerate. If an eigenvalue is m-fold degenerate, then the system is said to be degenerate
and the eigenvectors are no longer uniquely determined by the system (3)-(4) alone. In such cases,
the additional constraint that the eigenvectors be orthogonal,

    x_i · x_j = |x_i| |x_j| δ_ij,                                      (9)

where δ_ij is the Kronecker delta, can be applied to yield additional constraints, thus allowing
solution for the eigenvectors.
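For instance, a real symmetric matrix with a repeated eigenvalue still admits an orthogonal set of
eigenvectors; a small NumPy sketch with an illustrative matrix:

    import numpy as np

    # A symmetric matrix with a doubly degenerate eigenvalue (chosen for illustration).
    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, 2.0, 1.0],
                  [1.0, 1.0, 2.0]])

    # For a symmetric matrix, eigh returns eigenvalues and orthonormal eigenvectors (columns).
    w, V = np.linalg.eigh(A)
    print(w)          # eigenvalue 1 appears twice, eigenvalue 4 once
    print(V.T @ V)    # close to the identity: the chosen eigenvectors are orthogonal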
Two further facts follow easily from equation (1). Suppose

    A x = λ x.                                                         (10)

Adding a constant multiple of the identity matrix to A gives

    (A + cI) x = (λ + c) x,                                            (11)

so the eigenvalues of A + cI are those of A shifted by c. Moreover, for any nonsingular matrix Z,

    |Z^-1 A Z - λI| = |Z^-1 (A - λI) Z|                                (13)
                    = |Z^-1| |A - λI| |Z|                              (14)
                    = |A - λI|,                                        (15)

so a similarity transformation leaves the eigenvalues unchanged.
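A quick NumPy check of the shift property (11), using the matrix of Example 1 below and an
arbitrarily chosen shift c:

    import numpy as np

    A = np.array([[13.0, -4.0],
                  [-4.0,  7.0]])
    c = 3.0

    old = np.sort(np.linalg.eigvals(A))                  # [ 5. 15.]
    new = np.sort(np.linalg.eigvals(A + c * np.eye(2)))  # [ 8. 18.]
    print(np.allclose(new, old + c))                     # True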
Example 1
Find the eigenvalues and eigenvectors of the matrix

    A = [ 13  -4 ]
        [ -4   7 ]

of the linear transformation T(x1, x2) = (13x1 - 4x2, -4x1 + 7x2) on R^2. The characteristic
polynomial is

    det(A - λI) = | 13 - λ    -4   |
                  |  -4     7 - λ  |
                = (13 - λ)(7 - λ) - (-4)(-4) = λ^2 - 20λ + 75 = (λ - 5)(λ - 15),

so the eigenvalues are λ1 = 5 and λ2 = 15.
For λ1 = 5, we have

    A - 5I = [ 13 - 5    -4   ]   [  8  -4 ]
             [  -4     7 - 5  ] = [ -4   2 ].

The solutions of (A - 5I)x = 0 are of the form x = c(1, 2), c arbitrary. We get an eigenvector
v1 = (1, 2), which is in fact a basis of the eigenspace nul(A - 5I).
For λ2 = 15, we have

    A - 15I = [ 13 - 15    -4    ]   [ -2  -4 ]
              [  -4      7 - 15  ] = [ -4  -8 ].

The solutions of (A - 15I)x = 0 are of the form x = c(-2, 1), c arbitrary. We get an eigenvector
v2 = (-2, 1), which is in fact a basis of the eigenspace nul(A - 15I).
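The result can be verified numerically, for example with NumPy (a sketch; eigenvectors are only
determined up to scalar multiples, so we check A v = λ v directly):

    import numpy as np

    A = np.array([[13.0, -4.0],
                  [-4.0,  7.0]])

    print(np.sort(np.linalg.eigvals(A)))   # [ 5. 15.]

    v1 = np.array([1.0, 2.0])
    v2 = np.array([-2.0, 1.0])
    print(np.allclose(A @ v1, 5.0 * v1))   # True
    print(np.allclose(A @ v2, 15.0 * v2))  # True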
Example 2
Find the eigenvalues and eigenvectors of

    A = [  1   3  -3 ]
        [ -3   7  -3 ]
        [ -6   6  -2 ].

A direct computation gives det(A - λI) = (4 - λ)^2(-2 - λ), so we have two eigenvalues λ1 = 4 and
λ2 = -2.
For λ1 = 4, we have

    A - 4I = [ 1 - 4     3      -3    ]   [ -3   3  -3 ]
             [  -3     7 - 4    -3    ] = [ -3   3  -3 ]
             [  -6       6    -2 - 4  ]   [ -6   6  -6 ].

The solutions of (A - 4I)x = 0 are of the form x = c1(1, 1, 0) + c2(1, 0, -1), with c1 and c2
arbitrary. We get two eigenvectors v1 = (1, 1, 0) and v2 = (1, 0, -1), which form a basis of the
eigenspace nul(A - 4I).
For λ2 = -2, we have

    A + 2I = [ 1 + 2     3      -3    ]   [  3   3  -3 ]
             [  -3     7 + 2    -3    ] = [ -3   9  -3 ]
             [  -6       6    -2 + 2  ]   [ -6   6   0 ].

The solutions of (A + 2I)x = 0 are of the form x = c(1, 1, 2), c arbitrary. We get an eigenvector
v3 = (1, 1, 2), which is in fact a basis of the eigenspace nul(A + 2I).
We may verify that v1, v2, v3 actually form a basis of R^3. Geometrically, this gives a complete
understanding of the linear transformation given by A.
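Again a NumPy sketch can confirm the computation and that v1, v2, v3 span R^3:

    import numpy as np

    A = np.array([[ 1.0, 3.0, -3.0],
                  [-3.0, 7.0, -3.0],
                  [-6.0, 6.0, -2.0]])

    print(np.sort(np.linalg.eigvals(A)))   # approximately -2, 4, 4

    # Check A v = lambda v for the eigenvectors found above, with v1, v2, v3 as columns.
    V = np.array([[1.0, 1.0,  0.0],    # v1
                  [1.0, 0.0, -1.0],    # v2
                  [1.0, 1.0,  2.0]]).T # v3
    lams = np.array([4.0, 4.0, -2.0])
    print(np.allclose(A @ V, V * lams))     # True
    print(np.linalg.det(V) != 0)            # True: v1, v2, v3 form a basis of R^3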
From the examples, we saw that for an n × n matrix A, det(A - λI) is a polynomial of degree n.
This holds in general, since in the expansion of the determinant the product of the diagonal
entries (a_11 - λ)(a_22 - λ)···(a_nn - λ) contributes the highest power of λ, namely (-λ)^n.
Furthermore, if B = PAP^-1 represents the same linear transformation T with respect to a
different basis (P invertible), then

    det(B - λI) = det(PAP^-1 - λI) = det(P(A - λI)P^-1) = det(P) det(A - λI) det(P^-1) = det(A - λI).

Thus the characteristic polynomial of a linear transformation does not depend on the choice of
the basis. We also call det(A - λI) the characteristic polynomial of T.
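A small NumPy sketch of this invariance, with an arbitrarily chosen invertible matrix P:

    import numpy as np

    A = np.array([[13.0, -4.0],
                  [-4.0,  7.0]])
    P = np.array([[1.0, 2.0],
                  [3.0, 4.0]])        # an arbitrary invertible change-of-basis matrix
    B = P @ A @ np.linalg.inv(P)

    # np.poly returns the coefficients of the monic characteristic polynomial.
    print(np.poly(A))   # [  1. -20.  75.]
    print(np.poly(B))   # same coefficients, up to rounding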
Example 3
Let A= 2 −1 −4 −1 . Then
(A− I)v=0
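Assuming the matrix is read row by row as written above, a quick NumPy check of the resulting
eigenvalues:

    import numpy as np

    A = np.array([[ 2.0, -1.0],
                  [-4.0, -1.0]])

    print(np.sort(np.linalg.eigvals(A)))   # [-2.  3.]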
Example 4
The characteristic equation

    det(A - λI) = 0

will give the eigenvalues of A; its left-hand side is the characteristic polynomial of A. It is a
polynomial function in λ of degree n, so we know that this equation will not have more than n
roots or solutions. Hence a square matrix A of order n will not have more than n eigenvalues.
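A NumPy sketch of this degree count, using the matrix of Example 2 (np.poly returns the
coefficients of the characteristic polynomial):

    import numpy as np

    A = np.array([[ 1.0, 3.0, -3.0],
                  [-3.0, 7.0, -3.0],
                  [-6.0, 6.0, -2.0]])

    coeffs = np.poly(A)          # n + 1 coefficients of the degree-n characteristic polynomial
    print(len(coeffs) - 1)       # 3
    print(np.roots(coeffs))      # at most n roots; here approximately 4, 4 and -2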
Example 5
The eigenvalues of a diagonal matrix are its diagonal entries. This result is valid for any
diagonal matrix of any size. So depending on the values you have on the diagonal, you may have
one eigenvalue, two eigenvalues, or more. Anything is possible.
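A minimal NumPy sketch, with diagonal entries chosen for illustration:

    import numpy as np

    # A diagonal matrix with entries chosen for illustration.
    D = np.diag([7.0, 7.0, -1.0, 3.0])

    # The eigenvalues are exactly the diagonal entries, repetitions included
    # (possibly listed in a different order).
    print(np.linalg.eigvals(D))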
Remark. It is quite amazing to see that any square matrix A has the same eigenvalues as its
transpose A^T, because

    det(A^T - λI) = det((A - λI)^T) = det(A - λI),

so A and A^T have the same characteristic polynomial.
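A quick NumPy check with an arbitrarily chosen non-symmetric matrix:

    import numpy as np

    A = np.array([[13.0, -4.0],
                  [ 2.0,  7.0]])

    print(np.sort(np.linalg.eigvals(A)))     # [ 9. 11.]
    print(np.sort(np.linalg.eigvals(A.T)))   # [ 9. 11.]  -- same eigenvalues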
For a 2 × 2 matrix

    A = [ a  b ]
        [ c  d ],

the characteristic polynomial is det(A - λI) = λ^2 - (a + d)λ + (ad - bc). The number (a + d) is
called the trace of A (denoted tr(A)), and clearly the number (ad - bc) is the determinant of A.
So the characteristic polynomial of A can be rewritten as

    p(λ) = λ^2 - tr(A) λ + det(A).
We have

    A^2 - tr(A) A + det(A) I = 0.

This equation is a special case of the Cayley-Hamilton theorem, which is true for any square
matrix A of any order:

    p(A) = 0,

where p(λ) is the characteristic polynomial of A.
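The 2 × 2 case can be verified numerically; a NumPy sketch using the matrix of Example 1:

    import numpy as np

    A = np.array([[13.0, -4.0],
                  [-4.0,  7.0]])

    # Cayley-Hamilton for a 2 x 2 matrix: A^2 - tr(A)*A + det(A)*I = 0.
    residual = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
    print(np.allclose(residual, np.zeros((2, 2))))   # True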
SIGNATURE OF STUDENT
DATE 28/09/2010