
UNIT 10 EIGENVALUES AND EIGENVECTORS
Structure
10.1 Introduction
Objectives
10.2 The Algebraic Eigenvalue Problem
10.3 Obtaining Eigenvalues and Eigenvectors
Characteristic Polynomial
Eigenvalues of Linear Transformations
10.4 Diagonalisation
10.5 Summary
10.6 Solutions/Answers

10.1 INTRODUCTION
In Unit 7 you have studied the matrix of a linear transformation. You have had
several opportunities, in earlier units, to observe that the matrix of a linear
transformation depends on the choice of the bases of the concerned vector spaces.
Let V be an n-dimensional vector space over F, and let T : V → V be a linear
transformation. In this unit we will consider the problem of finding a suitable basis B,
of the vector space V, such that the n × n matrix [T]_B is a diagonal matrix. This problem
can also be seen as: given an n × n matrix A, find a suitable n × n non-singular matrix P
such that P⁻¹AP is a diagonal matrix (see Unit 7, Cor. to Theorem 10). It is in this
context that the study of eigenvalues and eigenvectors plays a central role. This will be
seen in Section 10.4.
The eigenvalue problem involves the evaluation of all the eigenvalues and eigenvectors
of a linear transformation or a matrix. The solution of this problem has basic
applications in almost all branches of the sciences, technology and the social sciences,
besides its fundamental role in various branches of pure and applied mathematics. The
emergence of computers and the availability of modern computing facilities have further
strengthened this study, since they can handle very large systems of equations.
In Section 10.2 we define eigenvalues and eigenvectors. We go on to discuss a method
of obtaining them, in Section 10.3. In this section we will also define the characteristic
polynomial, of which you will study more in the next unit.

Objectives
After studying this unit, you should be able to
obtain the characteristic polynomial of a linear transformation or a matrix;
obtain the eigenvalues, eigenvectors and eigenspaces of a linear transformation or a
matrix;
obtain a basis of a vector space V with respect to which the matrix of a linear
transformation T : V → V is in diagonal form;
obtain a non-singular matrix P which diagonalises a given diagonalisable matrix A.

10.2 THE ALGEBRAIC EIGENVALUE PROBLEM


Consider the linear mapping T : R² → R² : T(x, y) = (2x, y). Then T(1, 0) = (2, 0) =
2(1, 0). Thus, T(x, y) = 2(x, y) for (x, y) = (1, 0) ≠ (0, 0). In this situation we say that 2 is an
eigenvalue of T. But what is an eigenvalue?
Definitions: An eigenvalue of a linear transformation T : V → V is a scalar λ ∈ F such
that there exists a non-zero vector x ∈ V satisfying the equation Tx = λx. (λ is the
Greek letter 'lambda'.)
This non-zero vector x ∈ V is called an eigenvector of T with respect to the eigenvalue
λ. (In our example above, (1, 0) is an eigenvector of T with respect to the eigenvalue 2.)
Thus, a vector x ∈ V is an eigenvector of the linear transformation T if
i) x is non-zero, and
ii) Tx = λx for some scalar λ ∈ F.
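The defining equation Tx = λx is easy to check numerically. Below is a small sketch using NumPy; writing T as a matrix and calling the library are our additions, not part of the unit:

```python
import numpy as np

# T(x, y) = (2x, y) from the example above, written as a matrix.
T = np.array([[2.0, 0.0],
              [0.0, 1.0]])

x = np.array([1.0, 0.0])          # the non-zero vector (1, 0)
assert np.allclose(T @ x, 2 * x)  # Tx = 2x, so 2 is an eigenvalue of T

# np.linalg.eig returns all eigenvalues (and eigenvectors) at once.
vals, _ = np.linalg.eig(T)
print(sorted(vals.real))          # the eigenvalues of T: 1 and 2
```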

The fundamental algebraic eigenvalue problem deals with the determination of all the
eigenvalues and eigenvectors of a linear transformation. Let us look at some examples
of how we can find them.

Example 1: Obtain an eigenvalue and a corresponding eigenvector for the linear
operator T : R³ → R³ : T(x, y, z) = (2x, 2y, 2z).
Solution: Clearly, T(x, y, z) = 2(x, y, z) for all (x, y, z) ∈ R³. Thus, 2 is an eigenvalue of T.
Any non-zero element of R³ will be an eigenvector of T corresponding to 2.
Example 2: Obtain an eigenvalue and a corresponding eigenvector of T : C³ → C³ :
T(x, y, z) = (ix, −iy, z).
Solution: Firstly note that T is a linear operator. Now, if λ ∈ C is an eigenvalue, then
there exists (x, y, z) ≠ (0, 0, 0) such that T(x, y, z) = λ(x, y, z), that is, (ix, −iy, z) = (λx, λy, λz).
⇒ ix = λx, −iy = λy, z = λz ...................... (1)
These equations are satisfied if λ = i, y = 0, z = 0.
∴ λ = i is an eigenvalue, with a corresponding eigenvector being (1, 0, 0) (or (x, 0, 0) for
any x ≠ 0).
(1) is also satisfied if λ = −i, x = 0, z = 0, or if λ = 1, x = 0, y = 0. Therefore, −i and 1
are also eigenvalues, with corresponding eigenvectors (0, y, 0) and (0, 0, z) respectively,
for any y ≠ 0, z ≠ 0.
Do try the following exercise now.
E1) Let T : R² → R² be defined by T(x, y) = (x, 0). Obtain an eigenvalue and a
corresponding eigenvector of T.

Warning: The zero vector can never be an eigenvector. But 0 ∈ F can be an eigenvalue.
For example, 0 is an eigenvalue of the linear operator in E1, a corresponding
eigenvector being (0, 1).
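The distinction between the zero vector and the scalar 0 can be seen numerically. A sketch, again using NumPy, with the matrix form of the operator in E1 as our assumption:

```python
import numpy as np

# T(x, y) = (x, 0) from E1, written as a matrix.
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])

v = np.array([0.0, 1.0])          # (0, 1) is a non-zero vector ...
assert np.allclose(T @ v, 0 * v)  # ... with Tv = 0*v, so the scalar 0
                                  # is a perfectly good eigenvalue of T
vals = np.linalg.eigvals(T)
print(sorted(vals.real))          # the eigenvalues of T: 0 and 1
```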

Now we define a vector space corresponding to an eigenvalue of T : V → V. Suppose
λ ∈ F is an eigenvalue of the linear transformation T. Define the set
W_λ = {x ∈ V | T(x) = λx}
    = {0} ∪ {eigenvectors of T corresponding to λ}.
So, a vector v ∈ W_λ if and only if v = 0 or v is an eigenvector of T corresponding to λ.
Now, x ∈ W_λ ⇔ Tx = λIx, I being the identity operator,
             ⇔ x ∈ Ker(T − λI).
∴ W_λ = Ker(T − λI), and hence, W_λ is a subspace of V (ref. Unit 5, Theorem 4).
Since λ is an eigenvalue of T, it has an eigenvector, which must be non-zero. Thus, W_λ
is non-zero.
Definition: For an eigenvalue λ of T, the non-zero subspace W_λ is called the eigenspace
of T associated with the eigenvalue λ.
Let us look at an example.
Example 3: Obtain W_2 for the linear operator given in Example 1.
Solution: W_2 = {(x, y, z) ∈ R³ | T(x, y, z) = 2(x, y, z)}
             = {(x, y, z) ∈ R³ | (2x, 2y, 2z) = 2(x, y, z)}
             = R³.
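Since W_λ = Ker(T − λI), an eigenspace can be computed numerically as a null space. The sketch below does this with the SVD; the helper name `eigenspace_basis` is ours, not from the unit:

```python
import numpy as np

def eigenspace_basis(A, lam, tol=1e-10):
    # W_lam = Ker(A - lam*I): the rows of Vt whose singular values are
    # (numerically) zero form an orthonormal basis of the null space.
    n = A.shape[0]
    _, s, vt = np.linalg.svd(A - lam * np.eye(n))
    return vt[s <= tol]

# Example 3: T(x, y, z) = (2x, 2y, 2z) is the matrix 2I, and W_2 = R^3.
A = 2 * np.eye(3)
print(eigenspace_basis(A, 2.0).shape[0])   # dim W_2 = 3
```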
Now, try the following exercise.
E2) For T in Example 2, obtain the complex vector spaces W_i, W_{−i} and W_1.

As with every other concept related to linear transformations, we can define
eigenvalues and eigenvectors for matrices also. Let us do so.
Let A be any n × n matrix over the field F.
As we have said in Unit 9 (Theorem 5), the matrix A becomes a linear transformation
from Vn(F) to Vn(F), if we define
A : Vn(F) → Vn(F) : A(X) = AX.
Also, you can see that [A]_Bn = A, where
Bn = {e1 = (1, 0, ..., 0)ᵗ, e2 = (0, 1, ..., 0)ᵗ, ..., en = (0, 0, ..., 1)ᵗ}
is the standard ordered basis of Vn(F). That is, the matrix of A, regarded as a linear
transformation from Vn(F) to Vn(F), with respect to the standard basis Bn, is A itself.
This is why we denote the linear transformation A by A itself.
Looking at matrices as linear transformations in the above manner will help you in the
understanding of eigenvalues and eigenvectors for matrices.
Definition: A scalar λ is an eigenvalue of an n × n matrix A over F if there exists X ∈ Vn(F),
X ≠ 0, such that AX = λX.
If λ is an eigenvalue of A, then all the non-zero vectors in Vn(F) which are solutions of
the matrix equation AX = λX are eigenvectors of the matrix A corresponding to the
eigenvalue λ.
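The matrix version of the definition is exactly what `np.linalg.eig` computes. A sketch with a hypothetical 2 × 2 matrix (our choice, used only to illustrate AX = λX):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # a hypothetical matrix; eigenvalues 3 and 1

vals, vecs = np.linalg.eig(A)
for k in range(len(vals)):
    X = vecs[:, k]                          # k-th eigenvector (a column)
    assert np.linalg.norm(X) > 0            # X != 0 ...
    assert np.allclose(A @ X, vals[k] * X)  # ... and AX = lambda*X
print(sorted(vals.real))                    # eigenvalues: approximately 1 and 3
```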

Let us look at a few examples.

Example 4: Let A = diag(1, 2, 3), that is,
A =
[1 0 0]
[0 2 0]
[0 0 3]
Obtain an eigenvalue and a corresponding eigenvector of A.
Solution: Now A(1, 0, 0)ᵗ = (1, 0, 0)ᵗ. This shows that 1 is an
eigenvalue and (1, 0, 0)ᵗ is an eigenvector corresponding to it.
Similarly, A(0, 1, 0)ᵗ = 2(0, 1, 0)ᵗ and A(0, 0, 1)ᵗ = 3(0, 0, 1)ᵗ.
Thus, 2 and 3 are eigenvalues of A, with corresponding
eigenvectors (0, 1, 0)ᵗ and (0, 0, 1)ᵗ, respectively. (In general, the eigenvalues of
diag(d1, ..., dn) are d1, ..., dn.)

Example 5: Obtain an eigenvalue and a corresponding eigenvector of
A =
[0 −1]
[1  2]
Solution: Suppose λ ∈ R is an eigenvalue of A. Then there exists X = (x, y)ᵗ ≠ (0, 0)ᵗ
such that AX = λX, that is,
(−y, x + 2y)ᵗ = (λx, λy)ᵗ.
So for what values of λ, x and y are the equations −y = λx and x + 2y = λy satisfied? Note
that x ≠ 0 and y ≠ 0, because if either is zero then the other will have to be zero. Now,
solving our equations we get λ = 1.
Then an eigenvector corresponding to it is (1, −1)ᵗ.
Now you can solve an eigenvalue problem yourself!
E3) Show that 3 is an eigenvalue of the given 2 × 2 matrix. Find two corresponding eigenvectors.

Just as we defined an eigenspace associated with a linear transformation, we define the
eigenspace W_λ, corresponding to an eigenvalue λ of an n × n matrix A, as follows:
W_λ = {X ∈ Vn(F) | AX = λX}.
For example, the eigenspace W_1, in the situation of Example 4, is {(x, 0, 0)ᵗ | x ∈ R}.
E4) Find W_3 for the matrix in E3.

The algebraic eigenvalue problem for matrices is to determine all the eigenvalues and
eigenvectors of a given matrix. In fact, the eigenvalues and eigenvectors of an n × n
matrix A are precisely the eigenvalues and eigenvectors of A regarded as a linear
transformation from Vn(F) to Vn(F).
We end this section with the following remark:
A scalar λ is an eigenvalue of the matrix A if and only if (A − λI)X = 0 has a non-zero
solution, i.e., if and only if det(A − λI) = 0 (by Unit 9, Theorem 5).
Similarly, λ is an eigenvalue of the linear transformation T if and only if det(T − λI) = 0
(ref. Section 9.4).
So far we have been obtaining eigenvalues by observation, or by some calculations that
may not give us all the eigenvalues of a given matrix or linear transformation. The
remark above suggests where to look for all the eigenvalues. In the next section we
determine eigenvalues and eigenvectors explicitly.

10.3 OBTAINING EIGENVALUES AND EIGENVECTORS


In the previous section we have seen that a scalar λ is an eigenvalue of a matrix A if and
only if det(A − λI) = 0. In this section we shall see how this equation helps us to solve the
eigenvalue problem.

10.3.1 Characteristic Polynomial

Once we know that λ is an eigenvalue of a matrix A, the eigenvectors can easily be
obtained by finding non-zero solutions of the system of equations given by AX = λX.
Now, if
A =
[a11 a12 ... a1n]
[a21 a22 ... a2n]
[ .    .  ...  . ]
[an1 an2 ... ann]
and X = (x1, x2, ..., xn)ᵗ,
the equation AX = λX, written out, gives the following system of equations:
a11x1 + a12x2 + ...... + a1nxn = λx1
a21x1 + a22x2 + ...... + a2nxn = λx2
......
an1x1 + an2x2 + ...... + annxn = λxn
This is equivalent to the following system:
(a11 − λ)x1 + a12x2 + ...... + a1nxn = 0
a21x1 + (a22 − λ)x2 + ...... + a2nxn = 0
......
an1x1 + an2x2 + ...... + (ann − λ)xn = 0
This homogeneous system of linear equations has a non-trivial solution if and only if
the determinant of the coefficient matrix is equal to 0 (by Unit 9, Theorem 5). Thus, λ
is an eigenvalue of A if and only if
det(A − λI) =
| a11−λ  a12    ...  a1n   |
| a21    a22−λ  ...  a2n   |
| ...                      |
| an1    an2    ...  ann−λ |
= 0.
Now, det(λI − A) = (−1)ⁿ det(A − λI) (multiplying each row by (−1)). Hence, det(λI − A)
= 0 if and only if det(A − λI) = 0.
This leads us to define the concept of the characteristic polynomial.
Definition: Let A = [aij] be any n × n matrix. Then the characteristic polynomial of the
matrix A is defined by
fA(t) = det(tI − A)
=
| t−a11  −a12   ...  −a1n  |
| −a21   t−a22  ...  −a2n  |
| ...                      |
| −an1   −an2   ...  t−ann |
= tⁿ + c1tⁿ⁻¹ + c2tⁿ⁻² + ...... + cn−1t + cn,
where the coefficients c1, c2, ...... cn depend on the entries aij of the matrix A.
The equation fA(t) = 0 is the characteristic equation of A.
When no confusion arises, we shall simply write f(t) in place of fA(t).
Consider the following example.

Example 6: Obtain the characteristic polynomial of the matrix
A =
[1  2]
[0 −1]
Solution: The required polynomial is
fA(t) = det(tI − A) =
| t−1  −2  |
| 0    t+1 |
= (t−1)(t+1) = t² − 1.
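As a numerical cross-check, `np.poly` computes the coefficients of det(tI − A) for a square matrix. This sketch is our addition, not part of the unit:

```python
import numpy as np

# Example 6's matrix.
A = np.array([[1.0, 2.0],
              [0.0, -1.0]])

coeffs = np.poly(A)         # coefficients of det(tI - A), highest power
print(coeffs)               # first: approximately [1, 0, -1], i.e. t^2 - 1
roots = np.roots(coeffs)    # its roots are the eigenvalues: -1 and 1
print(sorted(roots.real))
```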
Now try this exercise.

E5) Obtain the characteristic polynomial of the given 3 × 3 matrix.
Note that λ is an eigenvalue of A iff det(λI − A) = fA(λ) = 0, that is, iff λ is a root of the
characteristic polynomial fA(t), defined above. Due to this fact, eigenvalues are also
called characteristic roots, and eigenvectors are called characteristic vectors. (The roots
of the characteristic polynomial of a matrix A form the set of eigenvalues of A.)
For example, the eigenvalues of the matrix in Example 6 are the roots of the polynomial
t² − 1, namely, 1 and −1.

E6) Find the eigenvalues of the matrix in E5.

Now, the characteristic polynomial fA(t) is a polynomial of degree n. Hence, it can have
n roots at the most. Thus, an n × n matrix has n eigenvalues at the most.
For example, the matrix in Example 6 has two eigenvalues, 1 and −1, and the matrix in
E5 has 3 eigenvalues.
Now we will prove a theorem that will help us in Section 10.4.

Theorem 1: Similar matrices have the same eigenvalues.


Proof: Let an n × n matrix B be similar to an n × n matrix A.
Then, by definition, B = P⁻¹AP, for some invertible matrix P.
Now, the characteristic polynomial of B,
fB(t) = det(tI − B)
      = det(tI − P⁻¹AP)
      = det(P⁻¹(tI − A)P), since P⁻¹tIP = tI
      = det(P⁻¹) det(tI − A) det(P) (see Sec. 9.4)
      = det(tI − A) det(P⁻¹) det(P)
      = fA(t) det(P⁻¹P)
      = fA(t), since det(P⁻¹P) = det(I) = 1.
Thus, the roots of fB(t) and fA(t) coincide. Therefore, the eigenvalues of A and B are the
same.
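Theorem 1 is easy to test numerically: conjugate a matrix by a random invertible P and compare the two spectra. A randomly drawn P is invertible with probability 1; this is an illustration, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))

B = np.linalg.inv(P) @ A @ P            # B = P^(-1) A P is similar to A

eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eig_A, eig_B)        # similar matrices, same eigenvalues
```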
Let us consider some more examples so that the concepts mentioned in this section
become absolutely clear to you.
Example 7: Find the eigenvalues and eigenvectors of the matrix A of E5.
Solution: In solving E6 you found that the eigenvalues of A are λ1 = 1, λ2 = −1, λ3 = −2.
Now we obtain the eigenvectors of A.
The eigenvectors of A with respect to the eigenvalue λ1 = 1 are the non-trivial solutions
of (A − I)X = 0. Solving this system gives all the eigenvectors associated with the
eigenvalue λ1 = 1, as x1 takes all non-zero real values.
The eigenvectors of A with respect to λ2 = −1 are the non-trivial solutions of
(A + I)X = 0, and the eigenvectors of A with respect to λ3 = −2 are the non-trivial
solutions of (A + 2I)X = 0.
Thus, in this example, the eigenspaces W_1, W_{−1} and W_{−2} are 1-dimensional spaces,
each generated over R by a single eigenvector.

Example 8: Let A be the 4 × 4 real matrix
A =
[ 1  1  0  0]
[−1 −1  0  0]
[−2 −2  2  1]
[ 1  1 −1  0]
Obtain its eigenvalues and eigenvectors.
Solution: The characteristic polynomial of A is fA(t) = det(tI − A) = t²(t − 1)².
Thus, the eigenvalues are λ1 = 0 and λ2 = 1.
The eigenvectors corresponding to λ1 = 0 are given by AX = 0, which gives
x1 + x2 = 0
−x1 − x2 = 0
−2x1 − 2x2 + 2x3 + x4 = 0
x1 + x2 − x3 = 0
The first and last equations give x3 = 0. Then, the third equation gives x4 = 0. The first
equation gives x2 = −x1.
Thus, the eigenvectors are (x1, −x1, 0, 0)ᵗ, x1 ≠ 0.
The eigenvectors corresponding to λ2 = 1 are given by AX = X, which gives
x1 + x2 = x1
−x1 − x2 = x2
−2x1 − 2x2 + 2x3 + x4 = x3
x1 + x2 − x3 = x4
The first two equations give x2 = 0 and x1 = 0. Then the last equation gives x4 = −x3.
Thus, the eigenvectors are (0, 0, x3, −x3)ᵗ, x3 ≠ 0.
Example 9: Obtain the eigenvalues and eigenvectors of
A =
[0 1 0]
[1 0 0]
[0 0 1]
Solution: The characteristic polynomial of A is fA(t) = det(tI − A) = (t + 1)(t − 1)².
Therefore, the eigenvalues are λ1 = −1 and λ2 = 1.
The eigenvectors corresponding to λ1 = −1 are given by AX = −X, which is equivalent to
x2 = −x1
x1 = −x2
x3 = −x3
The last equation gives x3 = 0. Thus, the eigenvectors are (x1, −x1, 0)ᵗ, x1 ≠ 0.
The eigenvectors corresponding to λ2 = 1 are given by AX = X, which gives
x2 = x1
x1 = x2
x3 = x3
Thus, the eigenvectors are (x1, x1, x3)ᵗ,
where x1, x3 are real numbers, not simultaneously 0.
Note that, corresponding to λ2 = 1, there exist two linearly independent eigenvectors
(1, 1, 0)ᵗ and (0, 0, 1)ᵗ, which form a basis of the eigenspace W_1.
Thus, W_{−1} is 1-dimensional, while dim W_1 = 2.
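Eigenspace dimensions like these can be computed as n − rank(A − λI). A sketch with a matrix of the kind treated in Example 9 (one repeated eigenvalue; the matrix and the helper name below are our assumptions):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])   # swaps the first two coordinates

def eigenspace_dim(A, lam):
    # dim W_lam = dim Ker(A - lam*I) = n - rank(A - lam*I)
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

print(eigenspace_dim(A, -1.0))   # dim of the eigenspace for -1: 1
print(eigenspace_dim(A, 1.0))    # dim of the eigenspace for  1: 2
```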

Try the following exercises now.
E7) Find the eigenvalues and bases for the eigenspaces of the given matrix.
E8) Find the eigenvalues and eigenvectors of the diagonal matrix diag(a1, a2, ..., an),
where ai ≠ aj for i ≠ j.
We now turn to the eigenvalues and eigenvectors of linear transformations.

10.3.2 Eigenvalues of Linear Transformations

As in Section 10.2, let T : V → V be a linear transformation on a finite-dimensional
vector space V over the field F. We have seen that
λ ∈ F is an eigenvalue of T
⇔ det(T − λI) = 0
⇔ det(λI − T) = 0
⇔ det(λI − A) = 0, where A = [T]_B is the matrix of T with respect to a basis B of V.
Note that [λI − T]_B = λI − [T]_B.
This shows that λ is an eigenvalue of T if and only if λ is an eigenvalue of the matrix A
= [T]_B, where B is a basis of V. We define the characteristic polynomial of the linear
transformation T to be the same as the characteristic polynomial of the matrix A = [T]_B,
where B is a basis of V.
This definition does not depend on the basis B chosen, since similar matrices have the
same characteristic polynomial (Theorem 1), and the matrices of the same linear
transformation T with respect to two different ordered bases of V are similar.
Just as for matrices, the eigenvalues of T are precisely the roots of the characteristic
polynomial of T.
Example 10: Let T : R² → R² be the linear transformation which maps e1 = (1, 0) to
e2 = (0, 1) and e2 to −e1. Obtain the eigenvalues of T.
Solution: Let A = [T]_B =
[0 −1]
[1  0]
where B = {e1, e2}.
The characteristic polynomial of T = the characteristic polynomial of A = t² + 1,
which has no real roots.
Hence, the linear transformation T has no real eigenvalues. But, it has two complex
eigenvalues i and −i.
Try the following exercises now.

E9) Obtain the eigenvalues and eigenvectors of the differential operator D : P₂ → P₂ :
D(a0 + a1x + a2x²) = a1 + 2a2x, for a0, a1, a2 ∈ R.

E10) Show that the eigenvalues of a square matrix A coincide with those of Aᵗ.

E11) Let A be an invertible matrix. If λ is an eigenvalue of A, show that λ ≠ 0 and that
λ⁻¹ is an eigenvalue of A⁻¹.

Now that we have discussed a method of obtaining the eigenvalues and eigenvectors of
a matrix, let us see how they help in transforming any square matrix into a diagonal
matrix.
10.4 DIAGONALISATION
In this section we start with proving a theorem that discusses the linear independence
of eigenvectors corresponding to different eigenvalues.
Theorem 2: Let T : V → V be a linear transformation on a finite-dimensional vector
space V over the field F. Let λ1, λ2, ..., λm be the distinct eigenvalues of T and v1, v2, ...,
vm be eigenvectors of T corresponding to λ1, λ2, ..., λm respectively. Then v1, v2, ...,
vm are linearly independent over F.
Proof: We know that
Tvi = λivi, λi ∈ F, 0 ≠ vi ∈ V for i = 1, 2, ..., m, and λi ≠ λj for i ≠ j.
Suppose, if possible, that {v1, v2, ..., vm} is a linearly dependent set. Now, the single
non-zero vector v1 is linearly independent. We choose r (≤ m) such that {v1, v2, ..., vr−1}
is linearly independent and {v1, v2, ..., vr−1, vr} is linearly dependent. Then
vr = a1v1 + a2v2 + ..... + ar−1vr−1 .........(1)
for some a1, a2, ....... ar−1 in F.
Applying T, we get
Tvr = a1Tv1 + a2Tv2 + ... + ar−1Tvr−1. This gives
λrvr = a1λ1v1 + a2λ2v2 + ..... + ar−1λr−1vr−1 ....... (2)
Now, we multiply (1) by λr and subtract it from (2), to get
0 = a1(λ1 − λr)v1 + a2(λ2 − λr)v2 + ..... + ar−1(λr−1 − λr)vr−1.
Since the set {v1, v2, ..., vr−1} is linearly independent, each of the coefficients in the
above equation must be 0. Thus, we have ai(λi − λr) = 0 for i = 1, 2, ........ r−1.
But λi ≠ λr for i = 1, 2, ....... r−1. Hence (λi − λr) ≠ 0 for i = 1, 2, ....... r−1, and we must
have ai = 0 for i = 1, 2, ..... r−1. However, this is not possible since (1) would imply
that vr = 0, and, being an eigenvector, vr can never be 0. Thus, we reach a contradiction.
Hence, the assumption we started with must be wrong. Thus, {v1, v2, ..., vm} must be
linearly independent, and the theorem is proved.
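A quick numerical illustration of Theorem 2 (not a proof): for a matrix with distinct eigenvalues, the eigenvectors stacked as columns have full rank, i.e. they are linearly independent.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])      # distinct eigenvalues: 1 and 2

vals, vecs = np.linalg.eig(A)   # eigenvectors are the columns of vecs
assert len(set(np.round(vals.real, 8))) == 2   # eigenvalues are distinct
assert np.linalg.matrix_rank(vecs) == 2        # columns are independent
```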
We will use Theorem 2 to choose a basis for a vector space V so that the matrix [T]_B is
a diagonal matrix.

Definition: A linear transformation T : V → V on a finite-dimensional vector space V
is said to be diagonalisable if there exists a basis B = {v1, v2, ..., vn} of V such that the
matrix of T with respect to the basis B is diagonal. That is,
[T]_B = diag(λ1, λ2, ..., λn),
where λ1, λ2, ..., λn are scalars which need not be distinct.
The next theorem tells us under what conditions a linear transformation is
diagonalisable.
Theorem 3: A linear transformation T, on a finite-dimensional vector space V, is
diagonalisable if and only if there exists a basis of V consisting of eigenvectors of T.
Proof: Suppose that T is diagonalisable. By definition, there exists a basis B = {v1,
v2, ..., vn} of V such that [T]_B = diag(λ1, λ2, ..., λn).
By the definition of [T]_B, we must have
Tv1 = λ1v1, Tv2 = λ2v2, .........., Tvn = λnvn.
Since basis vectors are always non-zero, v1, v2, ..., vn are non-zero. Thus, we find that
v1, v2, ..., vn are eigenvectors of T.
Conversely, let B = {v1, v2, ..., vn} be a basis of V consisting of eigenvectors of T. Then,
there exist scalars a1, a2, ..., an, not necessarily distinct, such that Tv1 = a1v1, Tv2 =
a2v2, ......, Tvn = anvn.
But then we have
[T]_B = diag(a1, a2, ..., an),
which means that T is diagonalisable.
The next theorem combines Theorems 2 and 3.


Theorem 4: Let T : V → V be a linear transformation, where V is an n-dimensional vector
space. Assume that T has n distinct eigenvalues. Then T is diagonalisable.
Proof: Let λ1, λ2, ..., λn be the n distinct eigenvalues of T. Then there exist eigenvectors
v1, v2, ..., vn corresponding to the eigenvalues λ1, λ2, ..., λn, respectively. By Theorem 2,
the set {v1, v2, ..., vn} is linearly independent and has n vectors, where n = dim V. Thus,
from Unit 5 (corollary to Theorem 5), B = {v1, v2, ..., vn} is a basis of V consisting of
eigenvectors of T. Thus, by Theorem 3, T is diagonalisable.
Just as we have reached the conclusions of Theorem 4 for linear transformations, we
define diagonalisability of a matrix, and reach a similar conclusion for matrices.
Definition: An n × n matrix A is said to be diagonalisable if A is similar to a diagonal
matrix, that is, P⁻¹AP is diagonal for some non-singular n × n matrix P.
Note that the matrix A is diagonalisable if and only if the matrix A, regarded as a linear
transformation A : Vn(F) → Vn(F) : A(X) = AX, is diagonalisable.
Thus, Theorems 2, 3 and 4 are true for the matrix A regarded as a linear transformation
from Vn(F) to Vn(F). Therefore, given an n × n matrix A, we know that it is
diagonalisable if it has n distinct eigenvalues.
We now give a practical method of diagonalising a matrix.
Theorem 5: Let A be an n × n matrix having n distinct eigenvalues λ1, λ2, ..., λn. Let X1,
X2, ......., Xn ∈ Vn(F) be eigenvectors of A corresponding to λ1, λ2, ..., λn, respectively.
Let P = (X1, X2, ......., Xn) be the n × n matrix having X1, X2, ..., Xn as its column
vectors. Then
P⁻¹AP = diag(λ1, λ2, ..., λn).
Proof: By actual multiplication, you can see that
AP = A(X1, X2, ...., Xn)
   = (AX1, AX2, ...., AXn)
   = (λ1X1, λ2X2, ...., λnXn)
   = P diag(λ1, λ2, ..., λn).
Now, by Theorem 2, the column vectors of P are linearly independent. This means that
P is invertible (Unit 9, Theorem 6). Therefore, we can pre-multiply both sides of the
matrix equation AP = P diag(λ1, λ2, ..., λn) by P⁻¹ to get P⁻¹AP = diag(λ1, λ2, ..., λn).
Let us see how this theorem works in practice.

Example 11: Diagonalise the given 3 × 3 matrix A.
Solution: From the characteristic polynomial of A, the eigenvalues of A are λ1 = 5,
λ2 = 3, λ3 = −3. Since they are all distinct, A is diagonalisable (by Theorem 4). You
can find the eigenvectors by the method already explained to you, obtaining
eigenvectors corresponding to the distinct eigenvalues 5, 3 and −3, respectively. By
Theorem 5, the matrix P whose columns are these eigenvectors diagonalises A.
Check, by actual multiplication, that
P⁻¹AP = diag(5, 3, −3), which is in diagonal form.
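The whole procedure of Theorem 5 takes a few lines in NumPy. The matrix below is our own illustrative choice (triangular, so its distinct eigenvalues 2, 3 and 5 are visible on the diagonal), not the matrix of Example 11:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])   # distinct eigenvalues 2, 3 and 5

vals, P = np.linalg.eig(A)        # columns of P are eigenvectors X1, X2, X3
D = np.linalg.inv(P) @ A @ P      # P^(-1) A P
assert np.allclose(D, np.diag(vals))   # D = diag(lambda1, lambda2, lambda3)
print(sorted(np.round(np.diag(D).real, 6)))
```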
The following exercise will give you some practice in diagonalising matrices.
E12) Are the matrices in Examples 7, 8 and 9 diagonalisable? If so, diagonalise them.

We end this unit by summarising what has been done in it.


10.5 SUMMARY
As in previous units, in this unit also we have treated linear transformations along with
the analogous matrix version. We have covered the following points here.
1) The definition of eigenvalues, eigenvectors and eigenspaces of linear
transformations and matrices.
2) The definition of the characteristic polynomial and characteristic equation of a
linear transformation (or matrix).
3) A scalar λ is an eigenvalue of a linear transformation T (or matrix A) if and only if it
is a root of the characteristic polynomial of T (or A).
4) A method of obtaining all the eigenvalues and eigenvectors of a linear
transformation (or matrix).
5) Eigenvectors of a linear transformation (or matrix) corresponding to distinct
eigenvalues are linearly independent.
6) A linear transformation T : V → V is diagonalisable if and only if V has a basis
consisting of eigenvectors of T.
7) A linear transformation (or matrix) is diagonalisable if its eigenvalues are distinct.

10.6 SOLUTIONS/ANSWERS

E1) Suppose λ ∈ R is an eigenvalue. Then there exists (x, y) ≠ (0, 0) such that
T(x, y) = λ(x, y) ⇒ (x, 0) = (λx, λy) ⇒ λx = x, λy = 0. These equations are satisfied
if λ = 1, y = 0.
∴ 1 is an eigenvalue. A corresponding eigenvector is (1, 0). Note that there are
infinitely many eigenvectors corresponding to 1, namely, (x, 0) for all 0 ≠ x ∈ R.

E2) W_i = {(x, y, z) ∈ C³ | T(x, y, z) = i(x, y, z)}
        = {(x, y, z) ∈ C³ | (ix, −iy, z) = (ix, iy, iz)}
        = {(x, 0, 0) | x ∈ C}.
Similarly, you can show that W_{−i} = {(0, x, 0) | x ∈ C} and W_1 = {(0, 0, x) | x ∈ C}.

E3) If 3 is an eigenvalue, then there exists (x, y)ᵗ ≠ (0, 0)ᵗ such that AX = 3X.
The resulting equations are satisfied by x = 1, y = 1 and by x = 2, y = 2.
∴ 3 is an eigenvalue, and (1, 1)ᵗ and (2, 2)ᵗ are eigenvectors
corresponding to 3.

E4) W_3 = {(x, y)ᵗ ∈ V₂(R) | x = y} = {(x, x)ᵗ | x ∈ R}.
This is the 1-dimensional real subspace of V₂(R) whose basis is {(1, 1)ᵗ}.

E5) The characteristic polynomial is fA(t) = t³ + 2t² − t − 2.

E6) The eigenvalues are the roots of the polynomial t³ + 2t² − t − 2 = (t−1)(t+1)(t+2).
∴ they are 1, −1, −2.

E7) The eigenvalues are λ1 = 2, λ2 = 3.
The eigenvectors corresponding to λ1 are the non-trivial solutions of (A − 2I)X = 0,
and the eigenvectors corresponding to λ2 are the non-trivial solutions of (A − 3I)X = 0.

E8) fA(t) =
| t−a1   0    ...  0    |
| 0      t−a2 ...  0    |
| .      .    ...  .    |
| 0      0    ...  t−an |
= (t−a1)(t−a2) ... (t−an).
∴ its eigenvalues are a1, a2, ..., an.
The eigenvectors corresponding to a1 are given by (A − a1I)X = 0. This gives us the
equations (ai − a1)xi = 0 for i = 2, ..., n, so x2 = ... = xn = 0 (since ai ≠ aj for i ≠ j).
∴ the eigenvectors corresponding to a1 are (x1, 0, ..., 0)ᵗ, x1 ≠ 0, x1 ∈ R.
Similarly, the eigenvectors corresponding to ai are (0, ..., xi, ..., 0)ᵗ, xi ≠ 0, xi ∈ R,
with xi in the i-th place.

E9) B = {1, x, x²} is a basis of P₂.
Then [D]_B =
[0 1 0]
[0 0 2]
[0 0 0]
∴ the characteristic polynomial of D is
| t −1  0 |
| 0  t −2 |
| 0  0  t |
= t³.
∴ the only eigenvalue of D is λ = 0.
The eigenvectors corresponding to λ = 0 are a0 + a1x + a2x², where D(a0 + a1x + a2x²)
= 0, that is, a1 + 2a2x = 0.
This gives a1 = 0, a2 = 0. ∴ the set of eigenvectors corresponding to λ = 0 is
{a0 | a0 ∈ R, a0 ≠ 0} = R \ {0}.
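E9 can also be cross-checked numerically; the matrix of D below assumes the basis {1, x, x²} used in the solution above:

```python
import numpy as np

# Matrix of the differentiation operator D on P_2 w.r.t. {1, x, x^2}:
# D(1) = 0, D(x) = 1, D(x^2) = 2x give the columns below.
D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

coeffs = np.poly(D)              # characteristic polynomial: t^3
print(coeffs)
vals = np.linalg.eigvals(D)
assert np.allclose(vals, 0.0)    # the only eigenvalue is 0
```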
E10) |tI − A| = |(tI − A)ᵗ|, since |Aᵗ| = |A|,
             = |tI − Aᵗ|, since Iᵗ = I and (B − C)ᵗ = Bᵗ − Cᵗ.
∴ the eigenvalues of A are the same as those of Aᵗ.
E11) Let X be an eigenvector corresponding to λ. Then X ≠ 0 and AX = λX.
∴ A⁻¹(AX) = A⁻¹(λX)
⇒ (A⁻¹A)X = λ(A⁻¹X)
⇒ X = λ(A⁻¹X)
⇒ λ ≠ 0, since X ≠ 0.
Also, X = λ(A⁻¹X) ⇒ A⁻¹X = λ⁻¹X ⇒ λ⁻¹ is an eigenvalue of A⁻¹.

E12) Since the matrix in Example 7 has distinct eigenvalues 1, −1 and −2, it is
diagonalisable. If P is the matrix whose columns are eigenvectors corresponding to
these eigenvalues, then P⁻¹AP = diag(1, −1, −2).
The matrix in Example 8 is not diagonalisable. This is because it only has two distinct
eigenvalues and, corresponding to each, it has only one linearly independent
eigenvector. ∴ we cannot find a basis of V₄(R) consisting of eigenvectors. And now
apply Theorem 3.
The matrix in Example 9 is diagonalisable though it only has two distinct eigenvalues.
This is because corresponding to λ1 = −1 there is one linearly independent eigenvector,
but corresponding to λ2 = 1 there exist two linearly independent eigenvectors.
Therefore, we can form a basis of V₃(R) consisting of these eigenvectors.
