Unit 12


UNIT 12

EIGENVALUES AND
EIGENVECTORS
Structure
12.1 Introduction
     Objectives
12.2 The Algebraic Eigenvalue Problem
12.3 Obtaining Eigenvalues and Eigenvectors
     Characteristic Polynomial
     Eigenvalues of Linear Transformations
12.4 Diagonalisation
12.5 Summary
12.6 Solutions/Answers

12.1 INTRODUCTION
In Unit 10 you have studied about the matrix of a linear transformation. You
have had several opportunities, in earlier units, to observe that the matrix of a
linear transformation depends on the choice of the bases of the concerned
vector spaces.

Let V be an n-dimensional vector space over F, and let T ∶ V → V be a linear


transformation. In this unit we will consider the problem of finding a suitable
basis B, of the vector space V, such that the n × n matrix [T]B is a diagonal
matrix. This problem can also be seen as: Given an n × n matrix A, find a
suitable n × n non-singular matrix P such that P−1 AP is a diagonal matrix (see
Unit 10). It is in this context that the study of eigenvalues and eigenvectors
plays a central role. This will be seen in Section 12.4.

The eigenvalue problem involves the evaluation of all the eigenvalues and
eigenvectors of a linear transformation or a matrix. The solution of this problem
has basic applications in almost all branches of the sciences, technology and
the social sciences, besides its fundamental role in various branches of pure
and applied mathematics. The emergence of computers and the availability of
modern computing facilities have further strengthened this study, since they can
handle very large systems of equations.

In Section 12.2 we define eigenvalues and eigenvectors. We go on to discuss
a method of obtaining them in Section 12.3. In this section we will also define
the characteristic polynomial, of which you will study more in the next unit.

Objectives
After studying this unit, you should be able to:

• obtain the characteristic polynomial of a linear transformation or a matrix;

• obtain the eigenvalues, eigenvectors and eigenspaces of a linear transformation or a matrix;

• obtain a basis of a vector space V with respect to which the matrix of a linear transformation T ∶ V → V is in diagonal form;

• obtain a non-singular matrix P which diagonalises a given diagonalisable matrix A.

12.2 THE ALGEBRAIC EIGENVALUE PROBLEM
Consider the linear mapping T ∶ ℝ2 → ℝ2 ∶ T(x, y) = (2x, y). Then,

T(1, 0) = (2, 0) = 2(1, 0).

Now, here is a vector (1, 0) that changes by a scalar factor only when the linear
transformation T is applied to it. In this situation we say that 2 is an eigenvalue
of the linear mapping T and (1, 0) is an eigenvector of T with respect to the
eigenvalue 2. So, basically, if a linear mapping changes a particular vector by a
scalar factor only, then the scalar is called an eigenvalue and the vector is called an
eigenvector of the linear mapping. Now, let us define this formally. (𝜆 is the Greek
letter ‘lambda’.)
Definition 1: An eigenvalue of a linear transformation T ∶ V → V is a scalar
𝜆 ∈ F such that there exists a non-zero vector x ∈ V satisfying the equation
Tx = 𝜆x.

This non-zero vector x ∈ V is called an eigenvector of T with respect to the


eigenvalue 𝜆.

Thus, a vector x ∈ V is an eigenvector of the linear transformation T if

i) x is non-zero, and

ii) Tx = 𝜆x for some scalar 𝜆 ∈ F.

The fundamental algebraic eigenvalue problem deals with the determination of


all the eigenvalues of a linear transformation. Let us look at some examples of
how we can find eigenvalues.

Example 1: Let T(x, y, z) = (x + 2y, −3y, x + 2y + 2z). Check whether
(5, −10, 3) and (0, 0, 1) are eigenvectors for T. What are the corresponding
eigenvalues?
Solution:

T(5, −10, 3) = (5 − 20, 30, 5 − 20 + 6)


= (−15, 30, −9)
= −3(5, −10, 3)

∴ (5, −10, 3) is an eigenvector for T, corresponding to the eigenvalue −3.

T(0, 0, 1) = (0, 0, 2) = 2(0, 0, 1).

∴ (0, 0, 1) is an eigenvector for T, corresponding to the eigenvalue 2.

∗∗∗
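The check in Example 1 can also be carried out numerically. The sketch below (Python with NumPy, which this unit does not otherwise assume) applies T to the given vectors and compares the result with a scalar multiple:

```python
import numpy as np

def T(v):
    # T(x, y, z) = (x + 2y, -3y, x + 2y + 2z), as in Example 1
    x, y, z = v
    return np.array([x + 2*y, -3*y, x + 2*y + 2*z])

v = np.array([5.0, -10.0, 3.0])
print(T(v))                      # T scales v by the eigenvalue -3
print(np.allclose(T(v), -3 * v))

w = np.array([0.0, 0.0, 1.0])
print(np.allclose(T(w), 2 * w))  # w is an eigenvector for the eigenvalue 2
```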

Example 2: Let T(x, y, z) = (x − z, −2x + 4z, 2x + y + z). Check whether


(−1, 3, 1), (−1, 2, 0) and (1, −6, 2) are eigenvectors for T. What are the
corresponding eigenvalues?

Solution:

T(−1, 3, 1) = (−1 − 1, 2 + 4, −2 + 3 + 1)
= (−2, 6, 2)
= 2(−1, 3, 1)

∴ (−1, 3, 1) is an eigenvector for T, corresponding to the eigenvalue 2.

Again

T(−1, 2, 0) = (−1, 2, −2 + 2)
= (−1, 2, 0)

∴ (−1, 2, 0) is an eigenvector for T, corresponding to the eigenvalue 1.

Again,

T(1, −6, 2) = (1 − 2, −2 + 8, 2 − 6 + 2)
= (−1, 6, −2)
= −1(1, −6, 2)

∴ (1, −6, 2) is an eigenvector for T, corresponding to the eigenvalue −1.

∗∗∗

Example 3: Let T(x, y, z) = (x + y − z, −x − 2y + 4z, 2x + y + z). Check whether


(4, −11, 1), (0, 1, 1) and (−2, 3, 1) are eigenvectors for T. What are the
corresponding eigenvalues?

Solution:

T(4, −11, 1) = (4 − 11 − 1, −4 + 22 + 4, 8 − 11 + 1)
= (−8, 22, −2)
= −2(4, −11, 1)
Therefore, (4, −11, 1) is an eigenvector for T, corresponding to the eigenvalue
−2.

Again

T(0, 1, 1) = (0 + 1 − 1, −2 + 4, 0 + 1 + 1)
= (0, 2, 2)
= 2(0, 1, 1)

Therefore, (0, 1, 1) is an eigenvector for T, corresponding to the eigenvalue 2.

Again

T(−2, 3, 1) = (−2 + 3 − 1, 2 − 6 + 4, −4 + 3 + 1)
= (0, 0, 0)
= 0(−2, 3, 1)

Therefore, (−2, 3, 1) is an eigenvector for T, corresponding to the eigenvalue 0.

∗∗∗

Do try the following exercise now.

E1) Let T ∶ ℝ3 → ℝ3 be defined by T(x, y, z) = (x − z, y + 3z, 2x + y + z). Check


whether (−1, 3, 1), (−1, 2, 0) and (1, −3, 1) are eigenvectors for T. What
are the corresponding eigenvalues?

E2) Let T ∶ ℝ3 → ℝ3 be defined by T(x, y, z) = (y − z, 3x + y + z, 2x + y + z).


Check whether (0, 1, 1), (−1, 2, 3) and (−3, 4, 1) are eigenvectors for T.
What are the corresponding eigenvalues?

Note: The zero vector can never be an eigenvector. But, 0 ∈ F can be an
eigenvalue. For example, 0 is an eigenvalue of the linear operator in E1, a
corresponding eigenvector being (1, −3, 1).

Now we define a vector space corresponding to an eigenvalue of T ∶ V → V.


Suppose 𝜆 ∈ F is an eigenvalue of the linear transformation T. Define the set

W𝜆 = {x ∈ V ∣T(x) = 𝜆x }
= {0} ∪ {eigenvectors of T corresponding to 𝜆} .

So, a vector v ∈ W𝜆 if and only if v = 0 or v is an eigenvector of T


corresponding to 𝜆. Now,

x ∈ W𝜆 ⇔ Tx = 𝜆Ix, I being the identity operator.


⇔ (T − 𝜆I) x = 0
⇔ x ∈ ker (T − 𝜆I)

Thus, W𝜆 = ker (T − 𝜆I), and hence, W𝜆 is a subspace of V.

Therefore, dim(W𝜆 ) = dim(ker(T − 𝜆I)). We summarise our discussion in the
next theorem.
Theorem 1: Let V be a vector space over F and T ∶ V → V be a linear
transformation. Suppose 𝜆 ∈ F is an eigenvalue of the linear transformation T.
Then

W𝜆 = ker(T − 𝜆I) and dim(W𝜆 ) = dim(ker(T − 𝜆I)).

Since 𝜆 is an eigenvalue of T, it has an eigenvector, which must be non-zero.


Thus, W𝜆 is non-zero.

Definition 2: For an eigenvalue 𝜆 of T, the non-zero subspace W𝜆 is called


the eigenspace of T associated with the eigenvalue 𝜆.

Let us look at an example.

Example 4: Obtain W−3 and W2 for the linear operator in Example 1. What
are the corresponding dimensions of the subspaces?

Solution:

W−3 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = −3(x, y, z)}
    = { (x, y, z) ∈ ℝ3 ∣ (x + 2y, −3y, x + 2y + 2z) = (−3x, −3y, −3z)}
    = { (x, y, z) ∈ ℝ3 ∣ 4x + 2y = 0, x + 2y + 5z = 0}
    = { (x, y, z) ∈ ℝ3 ∣ y = −2x, x − 4x + 5z = 0}
    = { (x, y, z) ∈ ℝ3 ∣ y = −2x, z = (3/5)x}
    = { (x, −2x, (3/5)x) ∣ x ∈ ℝ}
    = { (1, −2, 3/5)x ∣ x ∈ ℝ}
    = { (5, −10, 3)x ∣ x ∈ ℝ}

Therefore, W−3 is a subspace of ℝ3 with dimension 1.

W2 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = 2(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x + 2y, −3y, x + 2y + 2z) = (2x, 2y, 2z)}
= { (x, y, z) ∈ ℝ3 ∣ x + 2y = 2x, −3y = 2y, x + 2y + 2z = 2z}
= { (x, y, z) ∈ ℝ3 ∣ − x + 2y = 0, y = 0, x + 2y = 0}
= { (x, y, z) ∈ ℝ3 ∣ x = 0, y = 0}

= { (0, 0, z)∣ z ∈ ℝ}
= { (0, 0, 1)z∣ z ∈ ℝ}

Therefore, W2 is a subspace of ℝ3 with dimension 1.

∗∗∗
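Since dim(W𝜆 ) = nullity(A − 𝜆I), the dimensions found in Example 4 can be cross-checked by a rank computation. In the sketch below (NumPy assumed, purely as an illustration), A is taken to be the matrix of the operator of Example 1 with respect to the standard basis:

```python
import numpy as np

# Matrix of T(x, y, z) = (x + 2y, -3y, x + 2y + 2z) w.r.t. the standard basis
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -3.0, 0.0],
              [1.0, 2.0, 2.0]])

def eigenspace_dim(A, lam):
    # dim(W_lam) = nullity(A - lam*I) = n - rank(A - lam*I)
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

print(eigenspace_dim(A, -3))  # dim(W_-3), found to be 1 by hand above
print(eigenspace_dim(A, 2))   # dim(W_2), also 1
```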

Example 5: Obtain W2 , W1 and W−1 for the linear operator in Example 2.
What are the corresponding dimensions of the subspaces?
Solution:

W2 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = 2(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x − z, −2x + 4z, 2x + y + z) = (2x, 2y, 2z)}
= { (x, y, z) ∈ ℝ3 ∣ x − z = 2x, −2x + 4z = 2y, 2x + y + z = 2z}

= { (x, y, z) ∈ ℝ3 ∣ x + z = 0, −2x − 2y + 4z = 0, 2x + y − z = 0}
= { (x, y, z) ∈ ℝ3 ∣ x = −z, 2z − 2y + 4z = 0, −2z + y − z = 0}

= { (x, y, z) ∈ ℝ3 ∣ x = −z, y = 3z}


= { (−z, 3z, z)∣ z ∈ ℝ}
= { (−1, 3, 1)z∣ z ∈ ℝ}

Therefore, W2 is a subspace of ℝ3 with dimension 1.

W1 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = (x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x − z, −2x + 4z, 2x + y + z) = (x, y, z)}

= { (x, y, z) ∈ ℝ3 ∣ x − z = x, −2x + 4z = y, 2x + y + z = z}
= { (x, y, z) ∈ ℝ3 ∣ z = 0, y = −2x}
= { (x, −2x, 0)∣ x ∈ ℝ}
= { (1, −2, 0)x ∣ x ∈ ℝ}

Therefore, W1 is a subspace of ℝ3 with dimension 1.

W−1 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = −1(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x − z, −2x + 4z, 2x + y + z) = (−x, −y, −z)}
= { (x, y, z) ∈ ℝ3 ∣ x − z = −x, −2x + 4z = −y, 2x + y + z = −z}
= { (x, y, z) ∈ ℝ3 ∣ z = 2x, y = −6x}
= { (x, −6x, 2x)∣ x ∈ ℝ}
= { (1, −6, 2)x∣ x ∈ ℝ}

Therefore, W−1 is a subspace of ℝ3 with dimension 1.

∗∗∗

Now, try the following exercise.

E3) Obtain W−2 , W2 and W0 for the linear operator in Example 3. What are the
corresponding dimensions of the subspaces?

E4) Obtain W2 , W1 and W0 for the linear operator in E1. What are the
corresponding dimensions of the subspaces?

E5) Obtain W2 , W1 and W−1 for the linear operator in E2. What are the
corresponding dimensions of the subspaces?

As with every other concept related to linear transformations, we can define
eigenvalues and eigenvectors for matrices also. Let us do so.
Let A be any n × n matrix over the field F.
As we have said in Unit 9, the matrix A becomes a linear transformation from
Vn (F) to Vn (F), if we define

A ∶ Vn (F) → Vn (F) ∶ A(X) = AX.

Also, you can see that [A]B∘ = A, where

B∘ = {e1 = (1, 0, 0, … , 0)t , e2 = (0, 1, 0, … , 0)t , … , en = (0, 0, … , 0, 1)t }

is the standard ordered basis of Vn (F). That is, the matrix of A, regarded as a
linear transformation from Vn (F) to Vn (F), with respect to the standard basis
B∘ , is A itself. This is why we denote the linear transformation A by A itself.

Looking at matrices as linear transformations in the above manner will help you
in the understanding of eigenvalues and eigenvectors for matrices.
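The statement [A]B∘ = A amounts to saying that Aei is precisely the i-th column of A. A quick numerical confirmation (NumPy assumed; any n × n matrix would do):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, -3.0, 0.0],
              [1.0, 2.0, 2.0]])   # any 3 x 3 matrix serves as an example

# The image of the standard basis vector e_i under X -> AX is the
# i-th column of A, so the matrix of this map w.r.t. B_0 is A itself.
for i in range(3):
    e = np.zeros(3)
    e[i] = 1.0
    print(np.allclose(A @ e, A[:, i]))
```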

Definition 3: A scalar 𝜆 is an eigenvalue of an n × n matrix A over F if there


exists X ∈ Vn (F), X ≠ 0, such that AX = 𝜆X.

If 𝜆 is an eigenvalue of A, then all the non-zero vectors in Vn (F) which are


solutions of the matrix equation AX = 𝜆X are eigenvectors of the matrix A
corresponding to the eigenvalue 𝜆.

Let us look at a few examples.

Example 6: Let

    ⎡1 0 0⎤
A = ⎢0 2 0⎥ .
    ⎣0 0 3⎦

Obtain an eigenvalue and a corresponding eigenvector of A.

Solution: Now A(1, 0, 0)t = (1, 0, 0)t . This shows that 1 is an eigenvalue and
(1, 0, 0)t is an eigenvector corresponding to it.

In fact, A(0, 1, 0)t = 2(0, 1, 0)t and A(0, 0, 1)t = 3(0, 0, 1)t . Thus, 2 and 3 are
eigenvalues of A, with corresponding eigenvectors (0, 1, 0)t and (0, 0, 1)t ,
respectively. (Note that the eigenvalues of diag (d1 , … , dn ) are d1 , … , dn .)

∗∗∗
Example 7: Obtain an eigenvalue and a corresponding eigenvector of

    ⎡0 −1⎤
A = ⎣1  2⎦ ∈ M2 (ℝ).

Solution: Suppose 𝜆 ∈ ℝ is an eigenvalue of A. Then there exists
X = (x, y)t ≠ (0, 0)t such that AX = 𝜆X, that is, (−y, x + 2y)t = (𝜆x, 𝜆y)t .

So, for what values of 𝜆, x and y are the equations −y = 𝜆x and x + 2y = 𝜆y
satisfied? Note that x ≠ 0 and y ≠ 0, because if either is zero then the other will
have to be zero too. Substituting y = −𝜆x in the second equation, we get

x + 2y = 𝜆y
x + 2(−𝜆x) = 𝜆(−𝜆x)
x − 2𝜆x + 𝜆2 x = 0
x(𝜆 − 1)2 = 0

Since x ≠ 0, this gives 𝜆 = 1, and then −y = 𝜆x becomes y = −x.

Thus, an eigenvector corresponding to the eigenvalue 1 is (1, −1)t .

∗∗∗
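Example 7 can be checked numerically: the matrix has 1 as a repeated root of its characteristic polynomial, and (1, −1)t is a corresponding eigenvector. A sketch (NumPy assumed as an illustration only):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0, 2.0]])

vals = np.linalg.eigvals(A)
print(vals)                      # both roots equal 1, within rounding

v = np.array([1.0, -1.0])        # the eigenvector found by hand
print(np.allclose(A @ v, 1.0 * v))
```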

Now you can solve an eigenvalue problem yourself!

E6) Show that 3 is an eigenvalue of

    ⎡1 2⎤
    ⎣0 3⎦ .

    Find two corresponding eigenvectors.

E7) Show that −1 is an eigenvalue of

    ⎡1 3⎤
    ⎣4 5⎦ .

    Find one corresponding eigenvector.

E8) Show that 1 and 2 are eigenvalues of

    ⎡1 0 1⎤
    ⎢0 2 2⎥
    ⎣0 0 3⎦ .

    Find the corresponding eigenvectors.

Just as we defined an eigenspace associated with a linear transformation, we
define the eigenspace W𝜆 , corresponding to an eigenvalue 𝜆 of an n × n matrix
A, as follows:

W𝜆 = {X ∈ Vn (F) ∣ AX = 𝜆X } = {X ∈ Vn (F) ∣ (A − 𝜆I)X = 0 }

and dim(W𝜆 ) = nullity of (A − 𝜆I).


For example, the eigenspace W1 , in the situation of Example 6, is

W1 = { (x, y, z)t ∈ V3 (ℝ) ∣ A(x, y, z)t = (x, y, z)t }
   = { (x, y, z)t ∈ V3 (ℝ) ∣ (x, 2y, 3z)t = (x, y, z)t }
   = { (x, 0, 0)t ∣ x ∈ ℝ} ,

which is the same as { (x, 0, 0) ∣ x ∈ ℝ}.

E9) Find W3 for the matrix in E8.

E10) Find W−1 for the matrix in E7.

E11) Find W1 and W2 for the matrix given in E8.

The algebraic eigenvalue problem for matrices is to determine all the


eigenvalues and eigenvectors of a given matrix. In fact, the eigenvalues and
eigenvectors of an n × n matrix A are precisely the eigenvalues and
eigenvectors of A regarded as a linear transformation from Vn (F) to Vn (F).

We end this section with the following remark:

A scalar 𝜆 is an eigenvalue of the matrix A if and only if (A − 𝜆I) X = 0 has a


non-zero solution, i.e., if and only if det (A − 𝜆I) = 0.

Similarly, 𝜆 is an eigenvalue of the linear transformation T if and only if


det (T − 𝜆I) = 0.

So far we have been obtaining eigenvalues by observation, or by some


calculations that may not give us all the eigenvalues of a given matrix or linear
transformation. The remark above suggests how to look for all the eigenvalues.
In the next section we determine eigenvalues and eigenvectors explicitly.

12.3 OBTAINING EIGENVALUES AND EIGENVECTORS
In the previous section we have seen that a scalar 𝜆 is an eigenvalue of a
matrix A if and only if det (A − 𝜆I) = 0. In this section we shall see how this
equation helps us to solve the eigenvalue problem.

12.3.1 Characteristic Polynomial

Once we know that 𝜆 is an eigenvalue of a matrix A, the eigenvectors can
easily be obtained by finding non-zero solutions of the system of equations
given by AX = 𝜆X.

Now, if

    ⎡a11 a12 … a1n ⎤          ⎡x1 ⎤
    ⎢a21 a22 … a2n ⎥          ⎢x2 ⎥
A = ⎢ ⋮   ⋮   …  ⋮  ⎥ and X = ⎢ ⋮ ⎥ ,
    ⎣an1 an2 … ann ⎦          ⎣xn ⎦

the equation AX = 𝜆X becomes

⎡a11 a12 … a1n ⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢a21 a22 … a2n ⎥ ⎢x2 ⎥     ⎢x2 ⎥
⎢ ⋮   ⋮   …  ⋮  ⎥ ⎢ ⋮ ⎥ = 𝜆 ⎢ ⋮ ⎥
⎣an1 an2 … ann ⎦ ⎣xn ⎦     ⎣xn ⎦

Writing it out, we get the following system of equations.

a11 x1 + a12 x2 + ⋯ + a1n xn = 𝜆x1
a21 x1 + a22 x2 + ⋯ + a2n xn = 𝜆x2
⋮
an1 x1 + an2 x2 + ⋯ + ann xn = 𝜆xn

This is equivalent to the following system.

(a11 − 𝜆)x1 + a12 x2 + ⋯ + a1n xn = 0
a21 x1 + (a22 − 𝜆)x2 + ⋯ + a2n xn = 0
⋮
an1 x1 + an2 x2 + ⋯ + (ann − 𝜆)xn = 0

This homogeneous system of linear equations has a non-trivial solution if and


only if the determinant of the coefficient matrix is equal to 0 (see Unit 9). Thus,
𝜆 is an eigenvalue of A if and only if

               ∣a11 − 𝜆    a12    …    a1n   ∣
               ∣ a21     a22 − 𝜆  …    a2n   ∣
det (A − 𝜆I) = ∣  ⋮         ⋮     …     ⋮    ∣ = 0
               ∣ an1       an2    …  ann − 𝜆 ∣

Now, det (𝜆I − A) = (−1)n det (A − 𝜆I) (multiplying each row by (−1)). Hence,
det (𝜆I − A) = 0 if and only if det (A − 𝜆I) = 0.
This leads us to define the concept of the characteristic polynomial.

Definition 4: Let A = [aij ] be any n × n matrix. Then the characteristic


polynomial of the matrix A is defined by

fA (t) = det (tI − A)

  ∣t − a11   −a12    …    −a1n  ∣
  ∣ −a21    t − a22  …    −a2n  ∣
= ∣   ⋮        ⋮     …      ⋮   ∣
  ∣ −an1     −an2    …  t − ann ∣
= tn + c1 tn−1 + c2 tn−2 + ⋯ + cn−1 t + cn

where the coefficients c1 , c2 , … , cn depend on the entries aij of the matrix A.


The equation fA (t) = 0 is the characteristic equation of A.
When no confusion arises, we shall simply write f(t) in place of fA (t).

Consider the following example.

Example 8: Obtain the characteristic polynomial of the matrix

⎡1  2⎤
⎣0 −1⎦ .

Solution: The required polynomial is

∣t − 1   −2  ∣
∣  0    t + 1∣ = (t − 1)(t + 1) = t2 − 1.
∗∗∗

Example 9: Obtain the characteristic polynomial of the matrix

⎡1 3⎤
⎣4 5⎦ .

Solution: The required polynomial is

∣t − 1   −3  ∣
∣ −4    t − 5∣ = (t − 1) (t − 5) − 12 = t2 − 6t − 7.

∗∗∗
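Hand computations such as Example 9 can be cross-checked with NumPy (an assumption for illustration), whose `np.poly` function returns the coefficients of det (tI − A), highest power first:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [4.0, 5.0]])   # the matrix of Example 9

# Coefficients of the characteristic polynomial det(tI - A)
coeffs = np.poly(A)
print(coeffs)   # t^2 - 6t - 7 corresponds to [1, -6, -7]
```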

Example 10: Obtain the characteristic polynomial for the matrix


⎡ 1  1  1⎤
⎢−1 −1 −1⎥ .
⎣ 0  0  1⎦

Solution: The required polynomial is

∣t − 1   −1     −1  ∣
∣  1    t + 1    1  ∣ = t2 (t − 1) .
∣  0      0    t − 1∣

∗∗∗

Now try this exercise.

E12) Obtain the characteristic polynomials of the following matrices:


a) ⎡0 0  2⎤        b) ⎡1 0 0⎤
   ⎢1 0  1⎥           ⎢1 2 1⎥
   ⎣0 1 −2⎦           ⎣2 3 2⎦

c) ⎡1 −1  0⎤       d) ⎡2 0 0⎤
   ⎢1  2  3⎥          ⎢0 4 0⎥
   ⎣3  1 −2⎦          ⎣0 0 6⎦

Note that 𝜆 is an eigenvalue of A iff det(𝜆I − A) = fA (𝜆) = 0, that is, iff 𝜆 is a
root of the characteristic polynomial fA (t), defined above. Due to this fact,
eigenvalues are also called characteristic roots, and eigenvectors are called
characteristic vectors.

For example, the eigenvalues of the matrix in Example 8 are the roots of the
polynomial t2 − 1, namely, 1 and −1. Let us look at some more examples.

Example 11: Find the eigenvalues of the matrix in Example 9.

Solution: We can see the characteristic polynomial of the matrix is t2 − 6t − 7.


So the eigenvalues, roots of the characteristic polynomial, are −1 and 7.

∗∗∗

Example 12: Find the eigenvalues of the matrix in Example 10.

Solution: We can see that the characteristic polynomial of the matrix in
Example 10 is t2 (t − 1) . So the eigenvalues, the roots of the characteristic
polynomial, are 0, 0 and 1.

∗∗∗

E13) Find the eigenvalues of the matrices in E12.

Now, the characteristic polynomial fA (t) is a polynomial of degree n. Hence, it
can have n roots, at the most. Thus, an n × n matrix has n eigenvalues, at
the most. (In other words, the roots of the characteristic polynomial of a matrix
A form the set of eigenvalues of A.)

For example, the matrix in Example 8 has two eigenvalues, 1 and −1, and the
matrix in E12(a) has 3 eigenvalues.

Now we will prove a theorem that will help us in Section 12.4.

Theorem 2: Similar matrices have the same eigenvalues.

Proof: Let an n × n matrix B be similar to an n × n matrix A.


Then, by definition, B = P−1 AP, for some invertible matrix P.
Now, the characteristic polynomial of B,

fB (t) = det (tI − B)


= det (tI − P−1 AP)

= det (P−1 (tI − A)P) , since P−1 tIP = tP−1 P = tI

= det(P−1 ) det (tI − A) det(P)


= det (tI − A) det(P−1 ) det(P)
= fA (t) det (P−1 P)

= fA (t), since, det(P−1 P) = det(I) = 1.

Thus, the roots of fB (t) and fA (t) coincide. Therefore, the eigenvalues of A and
B are the same. ■
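The proof of Theorem 2 actually shows that similar matrices share a characteristic polynomial, not just the eigenvalues. A quick numerical check (NumPy assumed; the particular A and P below are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [4.0, 5.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # any invertible matrix works

B = np.linalg.inv(P) @ A @ P    # B is similar to A

# Same characteristic polynomial, hence the same eigenvalues
print(np.poly(A))
print(np.poly(B))
print(np.allclose(np.poly(A), np.poly(B)))
```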
Let us consider some more examples so that the concepts mentioned in this
section become absolutely clear to you.

Example 13: Find the eigenvalues and eigenvectors of the matrix

⎡0 0 2 ⎤
A = ⎢⎢1 0 1 ⎥⎥
⎢ ⎥
⎣0 1 −2⎦

Solution: After solving E13 for the matrix in E12(a), you will find that the
eigenvalues of A are 𝜆1 = 1, 𝜆2 = −1, 𝜆3 = −2. Now we obtain the eigenvectors of A.

The eigenvectors of A with respect to the eigenvalue 𝜆1 = 1 are the non-trivial


solutions of

⎡0 0  2⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢1 0  1⎥ ⎢x2 ⎥ = 1 ⎢x2 ⎥ ,
⎣0 1 −2⎦ ⎣x3 ⎦     ⎣x3 ⎦

which gives the equations

2x3 = x1                 x1 = 2x3
x1 + x3 = x2      ⇒     x2 = x1 + x3 = 3x3
x2 − 2x3 = x3

Thus, (2x3 , 3x3 , x3 )t , that is, x3 (2, 3, 1)t , gives all the eigenvectors associated with the eigenvalue
𝜆1 = 1, as x3 takes all non-zero real values.
The eigenvectors of A with respect to 𝜆2 = −1 are the non-trivial solutions of

⎡0 0  2⎤ ⎡x1 ⎤        ⎡x1 ⎤
⎢1 0  1⎥ ⎢x2 ⎥ = (−1) ⎢x2 ⎥
⎣0 1 −2⎦ ⎣x3 ⎦        ⎣x3 ⎦

which gives

2x3 = −x1                 x1 = −2x3
x1 + x3 = −x2      ⇒     x2 = −x1 − x3 = 2x3 − x3 = x3
x2 − 2x3 = −x3

Thus, the eigenvectors associated with (−1) are

(−2x3 , x3 , x3 )t = x3 (−2, 1, 1)t ∀ x3 ≠ 0, x3 ∈ ℝ.

The eigenvectors of A with respect to 𝜆3 = −2 are given by

⎡0 0  2⎤ ⎡x1 ⎤        ⎡x1 ⎤
⎢1 0  1⎥ ⎢x2 ⎥ = (−2) ⎢x2 ⎥
⎣0 1 −2⎦ ⎣x3 ⎦        ⎣x3 ⎦
which gives

2x3 = −2x1                 x1 = −x3
x1 + x3 = −2x2      ⇒     x2 = 0
x2 − 2x3 = −2x3

Thus, the eigenvectors corresponding to −2 are

(−x3 , 0, x3 )t = x3 (−1, 0, 1)t , x3 ≠ 0, x3 ∈ ℝ.

Thus, in this example, the eigenspaces W1 , W−1 and W−2 are 1-dimensional
spaces, generated over ℝ by (2, 3, 1)t , (−2, 1, 1)t and (−1, 0, 1)t , respectively.

∗∗∗
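The full eigenvalue problem solved by hand in Example 13 can be reproduced in one call. The sketch below (NumPy assumed as an illustration) recovers the eigenvalues 1, −1, −2 and verifies that each returned column is an eigenvector:

```python
import numpy as np

A = np.array([[0.0, 0.0, 2.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, -2.0]])   # the matrix of Example 13

vals, vecs = np.linalg.eig(A)
print(np.sort(vals.real))          # the eigenvalues -2, -1, 1 found by hand

# Each returned column v satisfies A v = lambda v
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```

The columns of `vecs` are normalised to unit length, so they are scalar multiples of the eigenvectors (2, 3, 1)t, (−2, 1, 1)t and (−1, 0, 1)t obtained above.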

Example 14: Let A be the 4 × 4 real matrix

⎡ 1  1  0 0⎤
⎢−1 −1  0 0⎥
⎢−2 −2  2 1⎥ ,
⎣ 1  1 −1 0⎦

Obtain its eigenvalues and eigenvectors.

Solution: The characteristic polynomial of A is fA (t) = det (tI − A)

  ∣t − 1   −1     0    0 ∣
  ∣  1    t + 1   0    0 ∣
= ∣  2      2   t − 2  −1∣ = t2 (t − 1)2
  ∣ −1     −1     1    t ∣

∴ the eigenvalues are 0, 0, 1 and 1. Thus, the distinct eigenvalues are 𝜆1 = 0 and
𝜆2 = 1.

The eigenvectors corresponding to 𝜆1 = 0 are given by

⎡ 1  1  0 0⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢−1 −1  0 0⎥ ⎢x2 ⎥     ⎢x2 ⎥
⎢−2 −2  2 1⎥ ⎢x3 ⎥ = 0 ⎢x3 ⎥ ,
⎣ 1  1 −1 0⎦ ⎣x4 ⎦     ⎣x4 ⎦

which gives

x1 + x2 = 0
−x1 − x2 = 0
−2x1 − 2x2 + 2x3 + x4 = 0
x1 + x2 − x3 = 0
53
Block
. . . . . . .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Eigenvalues
. . . . . . . . . . . . . .and
. . . . Eigenvectors
..............
The first and last equations give x3 = 0. Then, the third equation gives x4 = 0.
The first equation gives x1 = −x2 .

Thus, the eigenvectors are

(−x2 , x2 , 0, 0)t = x2 (−1, 1, 0, 0)t , x2 ≠ 0, x2 ∈ ℝ.

The eigenvectors corresponding to 𝜆2 = 1 are given by

⎡ 1  1  0 0⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢−1 −1  0 0⎥ ⎢x2 ⎥     ⎢x2 ⎥
⎢−2 −2  2 1⎥ ⎢x3 ⎥ = 1 ⎢x3 ⎥ ,
⎣ 1  1 −1 0⎦ ⎣x4 ⎦     ⎣x4 ⎦

which gives

x1 + x2 = x1
−x1 − x2 = x2
−2x1 − 2x2 + 2x3 + x4 = x3
x1 + x2 − x3 = x4

The first two equations give x2 = 0 and x1 = 0. Then the last equation gives
x4 = −x3 . Thus, the eigenvectors are

(0, 0, x3 , −x3 )t = x3 (0, 0, 1, −1)t , x3 ≠ 0, x3 ∈ ℝ.

∗∗∗

Example 15: Obtain the eigenvalues and eigenvectors of

⎡0 1 0⎤
A = ⎢⎢1 0 0⎥⎥
⎢ ⎥
⎣0 0 1⎦

Solution: The characteristic polynomial of A is fA (t) = det (tI − A)

  ∣ t  −1    0  ∣
= ∣−1   t    0  ∣ = (t + 1) (t − 1)2
  ∣ 0   0  t − 1∣

Therefore, the eigenvalues are 𝜆1 = −1 and 𝜆2 = 1.

The eigenvectors corresponding to 𝜆1 = −1 are given by

⎡0 1 0⎤ ⎡x1 ⎤        ⎡x1 ⎤
⎢1 0 0⎥ ⎢x2 ⎥ = (−1) ⎢x2 ⎥ ,
⎣0 0 1⎦ ⎣x3 ⎦        ⎣x3 ⎦
which is equivalent to

x2 = −x1
x1 = −x2
x3 = −x3

The last equation gives x3 = 0. Thus, the eigenvectors are

(x1 , −x1 , 0)t = x1 (1, −1, 0)t , x1 ≠ 0, x1 ∈ ℝ.
The eigenvectors corresponding to 𝜆2 = 1 are given by

⎡0 1 0⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢1 0 0⎥ ⎢x2 ⎥ = 1 ⎢x2 ⎥
⎣0 0 1⎦ ⎣x3 ⎦     ⎣x3 ⎦
which gives

x2 = x1
x1 = x2
x3 = x3

Thus, the eigenvectors are

(x1 , x1 , x3 )t = x1 (1, 1, 0)t + x3 (0, 0, 1)t ,
where x1 , x3 are real numbers, not simultaneously 0.
Note that, corresponding to 𝜆2 = 1, there exist two linearly independent
eigenvectors (1, 1, 0)t and (0, 0, 1)t , which form a basis of the eigenspace W1 .
Thus, W−1 is 1-dimensional, while dim(W1 ) = 2.

∗∗∗
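The dimensions found in Example 15 can be confirmed by computing nullities, since dim(W𝜆 ) = nullity(A − 𝜆I). A sketch (NumPy assumed for illustration):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])   # the matrix of Example 15

def eigenspace_dim(A, lam):
    # dim(W_lam) = n - rank(A - lam*I)
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

print(eigenspace_dim(A, -1))  # W_-1 is 1-dimensional
print(eigenspace_dim(A, 1))   # W_1 is 2-dimensional
```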

Try the following exercises now.

E14) Find the eigenvalues and bases for the eigenspaces of the matrix

         ⎡2 1  0⎤
     A = ⎢0 1 −1⎥
         ⎣0 2  4⎦
E15) Find the eigenvalues and eigenvectors of the diagonal matrix

    ⎡a1  0   0  …  0 ⎤
    ⎢ 0  a2  0  …  0 ⎥
D = ⎢ 0  0   a3 …  0 ⎥ ,
    ⎢ ⋮   ⋮   ⋮  ⋱  ⋮ ⎥
    ⎣ 0  0   0  …  an⎦

where ai ≠ aj for i ≠ j.
We now turn to the eigenvalues and eigenvectors of linear transformations.

12.3.2 Eigenvalues of Linear Transformations
As in Section 12.2, let T ∶ V → V be a linear transformation on a
finite-dimensional vector space V over the field F. We have seen that 𝜆 ∈ F is
an eigenvalue of T

⇔ det (T − 𝜆I) = 0
⇔ det (𝜆I − T) = 0
⇔ det (𝜆I − A) = 0,

where A = [T]B is the matrix of T with respect to a basis B of V. Note that


[𝜆I − T]B = 𝜆I − [T]B .

This shows that 𝜆 is an eigenvalue of T if and only if 𝜆 is an eigenvalue of the


matrix A = [T]B , where B is a basis of V. We define the characteristic
polynomial of the linear transformation T to be the same as the
characteristic polynomial of the matrix A = [T]B , where B is a basis of V.

This definition does not depend on how the basis B is chosen, since similar
matrices have the same characteristic polynomial (see the proof of Theorem 2),
and the matrices of the same linear transformation T with respect to two
different ordered bases of V are similar.

Just as for matrices, the eigenvalues of T are precisely the roots of the
characteristic polynomial of T.

Example 16: Let T ∶ ℝ2 → ℝ2 be the linear transformation which maps


e1 = (1, 0) to e2 = (0, 1) and e2 to −e1 . Obtain the eigenvalues of T.

Solution: Let

           ⎡0 −1⎤
A = [T]B = ⎣1  0⎦ , where B = {e1 , e2 } .

The characteristic polynomial of T is the characteristic polynomial of A, namely

∣ t  1∣
∣−1  t∣ = t2 + 1,

which has no real roots.

Hence, the linear transformation T has no real eigenvalues. But, if T is seen as


a map T ∶ ℂ2 → ℂ2 then it has two eigenvalues i and −i.

∗∗∗
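The observation in Example 16 can be seen numerically: over the complex numbers the quarter-turn matrix has eigenvalues i and −i. A sketch (NumPy assumed for illustration; `eigvals` works over ℂ automatically):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0, 0.0]])      # [T]_B for the quarter-turn of Example 16

vals = np.linalg.eigvals(A)     # no real eigenvalues; NumPy returns complex ones
print(vals)                     # i and -i, in some order
```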

Try the following exercise now.

E16) Obtain the eigenvalue and eigenvectors of the differential operator

D ∶ P2 → P2 ∶ D (a0 + a1 x + a2 x2 ) = a1 + 2a2 x, for a0 , a1 , a2 ∈ ℝ

where P2 is the set of all polynomials of degree ≤ 2.

E17) Show that the eigenvalues of a square matrix A coincide with those of At .
E18) Let A be an invertible matrix. If 𝜆 is an eigenvalue of A, show that 𝜆 ≠ 0
and that 𝜆−1 is an eigenvalue of A−1 .

Now that we have discussed a method of obtaining the eigenvalues and


eigenvectors of a matrix, let us see how they help in transforming some square
matrix into a diagonal matrix.

12.4 DIAGONALISATION
In this section we start by proving a theorem that discusses the linear
independence of eigenvectors corresponding to distinct eigenvalues.

Theorem 3: Let T ∶ V → V be a linear transformation on a finite-dimensional


vector space V over the field F. Let 𝜆1 , 𝜆2 , … 𝜆m be the distinct eigenvalues of
T and v1 , v2 , … , vm be eigenvectors of T corresponding to 𝜆1 , 𝜆2 , … 𝜆m ,
respectively. Then v1 , v2 , … , vm are linearly independent over F.

Proof: We know that

Tvi = 𝜆i vi , 𝜆i ∈ F, 0 ≠ vi ∈ V for i = 1, 2, … , m, and 𝜆i ≠ 𝜆j for i ≠ j.

If possible, suppose that {v1 , v2 , … , vm } is a linearly dependent set. Now, the
single non-zero vector v1 is linearly independent. So we can choose r (≤ m) such
that {v1 , v2 , … , vr−1 } is linearly independent and {v1 , v2 , … , vr−1 , vr } is linearly
dependent. Then

vr = 𝛼1 v1 + 𝛼2 v2 + ⋯ + 𝛼r−1 vr−1 …(1)

for some 𝛼1 , 𝛼2 , … , 𝛼r−1 in F, at least one of which is non-zero (since vr ≠ 0).
Applying T, we get
Tvr = 𝛼1 Tv1 + 𝛼2 Tv2 + ⋯ + 𝛼r−1 Tvr−1 . This gives

𝜆r vr = 𝛼1 𝜆1 v1 + 𝛼2 𝜆2 v2 + ⋯ + 𝛼r−1 𝜆r−1 vr−1 …(2)

Now, we multiply Eqn. (1) by 𝜆r and subtract it from Eqn. (2), to get

0 = 𝛼1 (𝜆1 − 𝜆r ) v1 + 𝛼2 (𝜆2 − 𝜆r ) v2 + ⋯ + 𝛼r−1 (𝜆r−1 − 𝜆r ) vr−1

Since the set {v1 , v2 , … , vr−1 } is linearly independent, each of the coefficients in
the above equation must be 0. Thus, we have 𝛼i (𝜆i − 𝜆r ) = 0 for
i = 1, 2, … , r − 1. Since the eigenvalues are distinct, 𝜆i − 𝜆r ≠ 0 for each i < r,
and therefore 𝛼i = 0 for each i. But then Eqn. (1) gives vr = 0, which is
impossible, since vr , being an eigenvector, is non-zero.

Hence, the assumption we started with must be wrong. Thus, {v1 , v2 , … , vm }
must be linearly independent, and the theorem is proved. ■
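Theorem 3 can be illustrated on the matrix of Example 13, whose three distinct eigenvalues 1, −1, −2 have eigenvectors (2, 3, 1)t, (−2, 1, 1)t and (−1, 0, 1)t. The sketch below (NumPy assumed) checks that the matrix with these columns has full rank, i.e. the eigenvectors are linearly independent:

```python
import numpy as np

# Columns: eigenvectors of the Example 13 matrix, one per distinct eigenvalue
V = np.array([[2.0, -2.0, -1.0],
              [3.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

# Theorem 3 predicts linear independence, i.e. full rank / non-zero determinant
print(np.linalg.matrix_rank(V))
print(abs(np.linalg.det(V)) > 1e-12)
```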

We will use Theorem 3 to choose a basis B for a vector space V so that the
matrix [T]B is a diagonal matrix.
Definition 5: A linear transformation T ∶ V → V on a finite-dimensional vector
space V is said to be diagonalisable if there exists a basis B = {v1 , v2 , … , vn }
of V such that the matrix of T with respect to the basis B is diagonal. That is,

⎡𝜆1  0   0  …  0 ⎤
⎢ 0  𝜆2  0  …  0 ⎥
[T]B = ⎢ 0   0  𝜆3 …  0 ⎥ ,
⎢ ⋮   ⋮   ⋮  ⋱  ⋮ ⎥
⎣ 0   0   0  … 𝜆n ⎦

where 𝜆1 , 𝜆2 , … , 𝜆n are scalars which need not be distinct.

The next theorem tells us under what conditions a linear transformation is


diagonalisable.

Theorem 4: A linear transformation T, on a finite-dimensional vector space V,


is diagonalisable if and only if there exists a basis of V consisting of
eigenvectors of T.

Proof: Suppose that T is diagonalisable. By definition, there exists a basis


B = {v1 , v2 , … , vn } of V, such that

⎡𝜆1  0   0  …  0 ⎤
⎢ 0  𝜆2  0  …  0 ⎥
[T]B = ⎢ 0   0  𝜆3 …  0 ⎥ ,
⎢ ⋮   ⋮   ⋮  ⋱  ⋮ ⎥
⎣ 0   0   0  … 𝜆n ⎦

By definition of [T]B , we must have

Tv1 = 𝜆1 v1 , Tv2 = 𝜆2 v2 , … , Tvn = 𝜆n vn .

Since basis vectors are always non-zero, v1 , v2 , … , vn are non-zero. Thus, we


find that v1 , v2 , … , vn are eigenvectors of T.

Conversely, let B = {v1 , v2 , … , vn } be a basis of V consisting of eigenvectors of


T. Then, there exist scalars 𝛼1 , 𝛼2 , … , 𝛼n , not necessarily distinct, such that
Tv1 = 𝛼1 v1 , Tv2 = 𝛼2 v2 , … , Tvn = 𝛼n vn .

But then we have

⎡𝛼1 0 … 0⎤
⎢0 𝛼2 … 0 ⎥⎥

[T]B = ⎢ ⎥,
⎢ ⋮ ⋮ ⋱ ⋮ ⎥
⎢ ⎥
⎣0 0 … 𝛼n ⎦

which means that T is diagonalisable. ■

The next theorem combines Theorems 3 and 4.

Theorem 5: Let T ∶ V → V be a linear transformation, where V is an


n-dimensional vector space. Assume that T has n distinct eigenvalues. Then T
is diagonalisable.
Proof: Let 𝜆1 , 𝜆2 , … , 𝜆n be the n distinct eigenvalues of T. Then there exist
eigenvectors v1 , v2 , … , vn corresponding to the eigenvalues 𝜆1 , 𝜆2 , … , 𝜆n ,
respectively. By Theorem 3, the set {v1 , v2 , … , vn } is linearly independent and
has n vectors, where n = dim V. Hence, B = {v1 , v2 , … , vn } is a basis of V
consisting of eigenvectors of T. Thus, by Theorem 4, T is diagonalisable. ■

Just as we reached the conclusion of Theorem 4 for linear
transformations, we now define diagonalisability for a matrix, and reach a
similar conclusion for matrices.

Definition 6: An n × n matrix A is said to be diagonalisable if A is similar to a


diagonal matrix, that is, P−1 AP is diagonal for some non-singular n × n matrix P.

Note that the matrix A is diagonalisable if and only if the matrix A, regarded as
a linear transformation A ∶ Vn (F) → Vn (F) ∶ A(X) = AX, is diagonalisable.

Thus, Theorems 3, 4 and 5 are true for the matrix A regarded as a linear
transformation from Vn (F) to Vn (F). Therefore, given an n × n matrix A, we
know that it is diagonalisable if it has n distinct eigenvalues.
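The condition is only sufficient: a matrix with a repeated eigenvalue may still fail to be diagonalisable. A sketch in Python (the 2 × 2 matrix is our own illustrative choice, not from the unit): [[1, 1], [0, 1]] has characteristic polynomial (t − 1)², so 1 is its only eigenvalue, yet every eigenvector is a multiple of (1, 0).

```python
# A = [[1, 1], [0, 1]]: (A - I)(x, y) = (y, 0), so (A - I)v = 0 forces
# y = 0, and the eigenspace for the eigenvalue 1 is spanned by (1, 0).
A = [[1, 1], [0, 1]]

def is_eigenvector(v, lam):
    """Check A v = lam * v for a 2-vector v."""
    x, y = v
    return (A[0][0]*x + A[0][1]*y, A[1][0]*x + A[1][1]*y) == (lam*x, lam*y)

assert is_eigenvector((1, 0), 1)      # (1, 0) is an eigenvector
assert not is_eigenvector((0, 1), 1)  # A(0, 1) = (1, 1), not (0, 1)
assert not is_eigenvector((1, 1), 1)  # A(1, 1) = (2, 1), not (1, 1)
# With only one independent eigenvector there is no basis of eigenvectors,
# so A is not diagonalisable (Theorem 4, or Theorem 7 below).
```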

We now give a practical method of diagonalising a matrix.

Theorem 6: Let A be an n × n matrix having n distinct eigenvalues


𝜆1 , 𝜆2 , … , 𝜆n . Let X1 , X2 , … , Xn ∈ Vn (F) be eigenvectors of A corresponding to
𝜆1 , 𝜆2 , … , 𝜆n , respectively. Let P = (X1 , X2 , … , Xn ) be the n × n matrix having
X1 , X2 , … , Xn as its column vectors. Then

P−1 AP = diag (𝜆1 , 𝜆2 , … , 𝜆n ) .

Proof: By actual multiplication, you can see that

AP = A (X1 , X2 , … , Xn )
= (AX1 , AX2 , … , AXn )
= (𝜆1 X1 , 𝜆2 X2 , … , 𝜆n Xn )

⎡𝜆1  0  …  0 ⎤
⎢ 0  𝜆2 …  0 ⎥
= (X1 , X2 , … , Xn ) ⎢ ⋮   ⋮  ⋱  ⋮ ⎥
⎣ 0   0 … 𝜆n ⎦

= P diag (𝜆1 , 𝜆2 , … , 𝜆n ) .

Now, by Theorem 3, the column vectors of P are linearly independent. This


means that P is invertible (Unit 11, Theorem 6). Therefore, we can pre-multiply
both sides of the matrix equation AP = Pdiag (𝜆1 , 𝜆2 , … , 𝜆n ) by P−1 to get

P−1 AP = diag (𝜆1 , 𝜆2 , … , 𝜆n ) .

■
Let us see how this theorem works in practice.

Example 17: Diagonalise the matrix

⎡1 2 0⎤

A = ⎢2 1 −6⎥⎥ .
⎢ ⎥
⎣2 −2 3 ⎦

Solution: The characteristic polynomial of A is

∣t − 1 −2 0 ∣
∣ ∣

f(t) = ∣ −2 t − 1 6 ∣∣ = (t − 5) (t − 3) (t + 3) .
∣ −2 2 t − 3∣∣

Thus, the eigenvalues of A are 𝜆1 = 5, 𝜆2 = 3, 𝜆3 = −3. Since they are all


distinct, A is diagonalisable (by Theorem 5). You can find the eigenvectors by
the method already explained to you. Right now you can directly verify that

⎡ 1 ⎤     ⎡ 1 ⎤     ⎡1⎤     ⎡1⎤        ⎡−1⎤      ⎡−1⎤
A ⎢ 2 ⎥ = 5 ⎢ 2 ⎥ , A ⎢1⎥ = 3 ⎢1⎥ , and A ⎢ 2 ⎥ = −3 ⎢ 2 ⎥
⎣−1⎦     ⎣−1⎦     ⎣0⎦     ⎣0⎦        ⎣ 1 ⎦      ⎣ 1 ⎦

⎡ 1 ⎤   ⎡1⎤       ⎡−1⎤
Thus, ⎢ 2 ⎥ , ⎢1⎥  and  ⎢ 2 ⎥
⎣−1⎦   ⎣0⎦       ⎣ 1 ⎦
are eigenvectors corresponding to the distinct eigenvalues 5, 3 and −3,
respectively. By Theorem 6, the matrix which diagonalises A is given by

⎡ 1 1 −1⎤
P = ⎢ 2 1  2 ⎥ . Check, by actual multiplication, that
⎣−1 0  1 ⎦

⎡5 0  0 ⎤
P−1 AP = ⎢0 3  0 ⎥ , which is in diagonal form.
⎣0 0 −3⎦
∗∗∗
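The "actual multiplication" can be automated. A sketch in Python (pure standard library; the 3 × 3 inverse helper is our own, via the adjugate formula) verifying P⁻¹AP = diag(5, 3, −3) for the A and P of Example 17, exactly in rational arithmetic:

```python
from fractions import Fraction

A = [[1, 2, 0], [2, 1, -6], [2, -2, 3]]
P = [[1, 1, -1], [2, 1, 2], [-1, 0, 1]]   # columns are the eigenvectors

def matmul(X, Y):
    """Exact 3x3 matrix product, entries promoted to Fractions."""
    return [[sum(Fraction(X[i][k]) * Y[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def inverse3(M):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    def cof(i, j):
        r = [k for k in range(3) if k != i]
        c = [k for k in range(3) if k != j]
        minor = M[r[0]][c[0]] * M[r[1]][c[1]] - M[r[0]][c[1]] * M[r[1]][c[0]]
        return (-1) ** (i + j) * minor
    det = sum(M[0][j] * cof(0, j) for j in range(3))
    # adjugate transpose divided by the determinant
    return [[Fraction(cof(j, i), det) for j in range(3)] for i in range(3)]

D = matmul(inverse3(P), matmul(A, P))
assert D == [[5, 0, 0], [0, 3, 0], [0, 0, -3]]
```

Using `Fraction` (rather than floats) keeps the check exact, so the off-diagonal entries come out as true zeros, not rounding noise.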

⎡1 0 0⎤
Example 18: Diagonalise the matrix A = ⎢−1 1 1⎥⎥ .

⎢ ⎥
⎣−1 −2 4⎦

Solution: The characteristic polynomial of A is

∣t − 1 0 0 ∣
∣ ∣
f(t) = ∣∣ 1 t − 1 −1 ∣∣ = (t − 1) (t − 2) (t − 3) .
∣ 1 2 t − 4∣∣

Thus the eigenvalues of A are 𝜆1 = 1, 𝜆2 = 2, 𝜆3 = 3. Since they are distinct, A


is diagonalisable. You can find the eigenvectors by the method already
explained to you. Right now you can directly verify that

⎡1⎤     ⎡1⎤     ⎡0⎤     ⎡0⎤       ⎡0⎤     ⎡0⎤
A ⎢1⎥ = 1 ⎢1⎥ ; A ⎢1⎥ = 2 ⎢1⎥ and A ⎢1⎥ = 3 ⎢1⎥
⎣1⎦     ⎣1⎦     ⎣1⎦     ⎣1⎦       ⎣2⎦     ⎣2⎦

⎡1⎤   ⎡0⎤      ⎡0⎤
Thus ⎢1⎥ , ⎢1⎥ and ⎢1⎥ are eigenvectors corresponding to the distinct
⎣1⎦   ⎣1⎦      ⎣2⎦
eigenvalues 1, 2 and 3, respectively. By Theorem 6, the matrix which
diagonalises A is given by

⎡1 0 0⎤
P = ⎢1 1 1⎥ .
⎣1 1 2⎦

⎡1 0 0⎤
Check, by actual multiplication, that P−1 AP = ⎢0 2 0⎥ , which is in diagonal
⎣0 0 3⎦
form.

∗∗∗
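The three eigenvector relations used in Example 18 can be verified exactly with integer arithmetic; a small sketch:

```python
# A and its claimed eigenvector/eigenvalue pairs from Example 18.
A = [[1, 0, 0], [-1, 1, 1], [-1, -2, 4]]

def matvec(M, v):
    """Matrix-vector product for a 3x3 matrix and a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Check A v = lam * v for each pair.
for v, lam in [([1, 1, 1], 1), ([0, 1, 1], 2), ([0, 1, 2], 3)]:
    assert matvec(A, v) == [lam * x for x in v]
```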

We have seen that we can apply Theorem 6 only if a matrix of order n × n has
n distinct eigenvalues. Remember that this is a sufficient, but not a necessary,
condition for diagonalisation. Now we will see that even if a matrix does not
have n distinct eigenvalues, it can still be diagonalisable. We will also discuss a
necessary and sufficient condition for the diagonalisation of a matrix.

Theorem 7: An n × n matrix A over a field F is diagonalisable if and only if there


exist n eigenvectors of A which are linearly independent.

Proof: Let A be similar to a diagonal matrix D = diag (d1 , d2 , … , dn ). Then


there exists a non-singular matrix P = (X1 , X2 , … , Xn ) having X1 , X2 , … , Xn as
its column vectors such that D = P−1 AP, i.e., AP = PD. Now

PD = Pdiag (d1 , d2 , … , dn )

⎡d1  0  …  0 ⎤
⎢ 0  d2 …  0 ⎥
= (X1 , X2 , … , Xn ) ⎢ ⋮   ⋮  ⋱  ⋮ ⎥
⎣ 0   0 … dn ⎦
= (d1 X1 , d2 X2 , … , dn Xn ) .

and

AP = A(X1 , X2 , … , Xn )
= (AX1 , AX2 , … , AXn )

Therefore,

(AX1 , AX2 , … , AXn ) = (d1 X1 , d2 X2 , … , dn Xn ) .

i.e., AXi = di Xi for all i = 1, 2, … , n. Thus, each column vector of P is an
eigenvector of A. Since P is non-singular, the column vectors of P are linearly
independent. It follows that A has n linearly independent eigenvectors.
Conversely, let X1 , X2 , … , Xn ∈ Vn (F) be n linearly independent eigenvectors
corresponding to respective eigenvalues 𝜆1 , 𝜆2 , … , 𝜆n , which may not be all
distinct. Let P = (X1 , X2 , … , Xn ) be the n × n matrix having X1 , X2 , … , Xn as its
column vectors. By actual multiplication, you can see that

AP = A (X1 , X2 , … , Xn )
= (AX1 , AX2 , … , AXn )
= (𝜆1 X1 , 𝜆2 X2 , … , 𝜆n Xn )

⎡𝜆1  0  …  0 ⎤
⎢ 0  𝜆2 …  0 ⎥
= (X1 , X2 , … , Xn ) ⎢ ⋮   ⋮  ⋱  ⋮ ⎥
⎣ 0   0 … 𝜆n ⎦
= P diag (𝜆1 , 𝜆2 , … , 𝜆n ) .

As per our assumption, the column vectors of P are linearly independent. This
means that P is invertible (Unit 11, Theorem 6). Therefore, we can pre-multiply
both sides of the matrix equation AP = Pdiag (𝜆1 , 𝜆2 , … , 𝜆n ) by P−1 to get

P−1 AP = diag (𝜆1 , 𝜆2 , … , 𝜆n ) .

It follows that A is diagonalisable. ■

Let us see how this theorem works in practice.

⎡0 0 −2⎤
Example 19: Diagonalise the matrix A = ⎢⎢1 2 1 ⎥⎥ .
⎢ ⎥
⎣1 0 3 ⎦

Solution: The characteristic polynomial of A is

∣ t 0 2 ∣
∣ ∣
f(t) = ∣∣−1 t − 2 −1 ∣∣ = (t − 1) (t − 2)2 .
∣−1 0 t − 3∣∣

Thus the eigenvalues of A are 𝜆1 = 1, 𝜆2 = 2, 𝜆3 = 2. You can find the


eigenvectors by the method already explained to you. Right now you can
directly verify that

⎡−2⎤     ⎡−2⎤     ⎡−1⎤     ⎡−1⎤       ⎡0⎤     ⎡0⎤
A ⎢ 1 ⎥ = 1 ⎢ 1 ⎥ ; A ⎢ 0 ⎥ = 2 ⎢ 0 ⎥ and A ⎢1⎥ = 2 ⎢1⎥
⎣ 1 ⎦     ⎣ 1 ⎦     ⎣ 1 ⎦     ⎣ 1 ⎦       ⎣0⎦     ⎣0⎦

⎡−2⎤   ⎡−1⎤      ⎡0⎤
Thus ⎢ 1 ⎥ , ⎢ 0 ⎥ and ⎢1⎥ are linearly independent eigenvectors
⎣ 1 ⎦   ⎣ 1 ⎦      ⎣0⎦
corresponding to the eigenvalues 1, 2 and 2 respectively. By Theorem 7, the
matrix which diagonalises A is given by

⎡−2 −1 0⎤
P = ⎢⎢ 1 0 1⎥⎥ .
⎢ ⎥
⎣ 1 1 0⎦

⎡1 0 0⎤
Check, by actual multiplication, that P−1 AP = ⎢0 2 0⎥ , which is in diagonal
⎣0 0 2⎦
form.

∗∗∗
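One way to double-check the factorisation of the characteristic polynomial in Example 19 is to evaluate det(tI − A) at several integers and compare with (t − 1)(t − 2)². A sketch, with the 3 × 3 determinant written out by cofactor expansion along the first row:

```python
A = [[0, 0, -2], [1, 2, 1], [1, 0, 3]]

def char_poly_at(t):
    """Evaluate det(tI - A) at an integer t by cofactor expansion."""
    M = [[t * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Agreement at 7 points is enough: two cubics agreeing at more than
# 3 points are identical.
for t in range(-3, 4):
    assert char_poly_at(t) == (t - 1) * (t - 2) ** 2

# Roots: the simple eigenvalue 1 and the repeated eigenvalue 2.
assert char_poly_at(1) == 0 and char_poly_at(2) == 0
```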

In the above example we have seen that the three eigenvalues of A are not
distinct, yet A is diagonalisable. The following exercise will give you some
practice in diagonalising matrices.

E19) Are the matrices in Examples 13, 14 and 15 diagonalisable? If so,
diagonalise them.

⎡3 2 1⎤
E20) Diagonalise the matrix A = ⎢⎢2 3 1⎥⎥ .
⎢ ⎥
⎣0 0 1⎦

12.5 SUMMARY
As in previous units, in this unit also we have treated linear transformations
along with the analogous matrix versions. We have covered the following
points here.

1. The definition of eigenvalues, eigenvectors and eigenspaces of linear
transformations and matrices.

2. The definition of the characteristic polynomial and characteristic equation


of a linear transformation (or matrix).

3. A scalar 𝜆 is an eigenvalue of a linear transformation T (or matrix A) if and


only if it is a root of the characteristic polynomial of T (or A).

4. A method of obtaining all the eigenvalues and eigenvectors of a linear


transformation (or matrix).

5. Eigenvectors of a linear transformation (or matrix) corresponding to


distinct eigenvalues are linearly independent.

6. A linear transformation T ∶ V → V is diagonalisable if and only if V has a


basis consisting of eigenvectors of T.

7. A linear transformation (or matrix) is diagonalisable if all its eigenvalues
are distinct.


12.6 SOLUTIONS/ANSWERS

E1) T(−1, 3, 1) = (−1 − 1, 3 + 3, −2 + 3 + 1)


= (−2, 6, 2)
= 2(−1, 3, 1)
Therefore, (−1, 3, 1) is an eigenvector for T, corresponding to the
eigenvalue 2.
Again, T(−1, 2, 0) = (−1 − 0, 2 + 0, −2 + 2 + 0)
= 1(−1, 2, 0)
Therefore, (−1, 2, 0) is an eigenvector for T, corresponding to the
eigenvalue 1.
Again, T(1, −3, 1) = (1 − 1, −3 + 3, 2 − 3 + 1)
= (0, 0, 0)
= 0(1, −3, 1)
Therefore, (1, −3, 1) is an eigenvector for T, corresponding to the
eigenvalue 0.
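From the arithmetic shown in E1, T appears to be T(x, y, z) = (x − z, y + 3z, 2x + y + z) (the same T as in E4); treating that reading as an assumption, the three checks can be scripted:

```python
# T inferred from the computations in E1 (assumed, not stated explicitly).
def T(x, y, z):
    return (x - z, y + 3*z, 2*x + y + z)

assert T(-1, 3, 1) == (-2, 6, 2) == tuple(2*c for c in (-1, 3, 1))  # eigenvalue 2
assert T(-1, 2, 0) == (-1, 2, 0)                                    # eigenvalue 1
assert T(1, -3, 1) == (0, 0, 0)                                     # eigenvalue 0
```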

E2) T(0, 1, 1) =(1 − 1, 0 + 1 + 1, 0 + 1 + 1)


= 2(0, 1, 1)
Therefore, (0, 1, 1) is an eigenvector for T, corresponding to the
eigenvalue 2.
Again, T(−1, 2, 3) = (2 − 3, −3 + 2 + 3, −2 + 2 + 3)
= (−1, 2, 3)
Therefore, (−1, 2, 3) is an eigenvector for T, corresponding to the
eigenvalue 1.
Again, T(−3, 4, 1) = (4 − 1, −9 + 4 + 1, −6 + 4 + 1)
= (3, −4, −1)
= −1(−3, 4, 1)
Therefore, (−3, 4, 1) is an eigenvector for T, corresponding to the
eigenvalue −1.

E3) W−2 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = −2(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x + y − z, −x − 2y + 4z, 2x + y + z) = (−2x, −2y, −2z)}
= { (x, y, z) ∈ ℝ3 ∣ x + y − z = −2x, −x − 2y + 4z = −2y, 2x + y + z = −2z}
= { (x, y, z) ∈ ℝ3 ∣ 3x + y − z = 0, x = 4z, 2x + y + 3z = 0}
= { (x, y, z) ∈ ℝ3 ∣ 12z + y − z = 0, x = 4z, 8z + y + 3z = 0}
= { (x, y, z) ∈ ℝ3 ∣ x = 4z, y = −11z}
= { (4z, −11z, z)∣ z ∈ ℝ}
= { (4, −11, 1)z∣ z ∈ ℝ}
Therefore W−2 is a subspace of ℝ3 with dimension 1.

W2 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = 2(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x + y − z, −x − 2y + 4z, 2x + y + z) = (2x, 2y, 2z)}
= { (x, y, z) ∈ ℝ3 ∣ x + y − z = 2x, −x − 2y + 4z = 2y, 2x + y + z = 2z}
= { (x, y, z) ∈ ℝ3 ∣ − x + y − z = 0, −x − 4y + 4z = 0, 2x + y − z = 0}
= { (x, y, z) ∈ ℝ3 ∣ − x + y − z = 0, −5y + 5z = 0, 2x + y − z = 0}
(by subtracting the third equation from the second equation)
= { (x, y, z) ∈ ℝ3 ∣ − x + y − z = 0, y = z, 2x + y − z = 0}

= { (x, y, z) ∈ ℝ3 ∣ x = 0, y = z}
= { (0, z, z)∣ z ∈ ℝ}
= { (0, 1, 1)z∣ z ∈ ℝ}
Therefore W2 is a subspace of ℝ3 with dimension 1.

W0 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = 0(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x + y − z, −x − 2y + 4z, 2x + y + z) = (0, 0, 0)}
= { (x, y, z) ∈ ℝ3 ∣ x + y − z = 0, −x − 2y + 4z = 0, 2x + y + z = 0}
(by adding the first equation with the second equation)
= { (x, y, z) ∈ ℝ3 ∣ x + y − z = 0, −y + 3z = 0, 2x + y + z = 0}

= { (x, y, z) ∈ ℝ3 ∣ x + y − z = 0, y = 3z, 2x + y + z = 0}
= { (x, y, z) ∈ ℝ3 ∣ x + 3z − z = 0, y = 3z, 2x + 3z + z = 0}

= { (x, y, z) ∈ ℝ3 ∣ x = −2z, y = 3z}


= { (−2z, 3z, z)∣ z ∈ ℝ}
= { (−2, 3, 1)z∣ z ∈ ℝ}
Therefore, W0 is a subspace of R3 with dimension 1.

E4) W2 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = 2(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x − z, y + 3z, 2x + y + z) = (2x, 2y, 2z)}
= { (x, y, z) ∈ ℝ3 ∣ x − z = 2x, y + 3z = 2y, 2x + y + z = 2z}

= { (x, y, z) ∈ ℝ3 ∣ x = −z, y = 3z}


= { (−z, 3z, z)∣ z ∈ ℝ}
= { (−1, 3, 1)z∣ z ∈ ℝ}
Therefore, W2 is a subspace of R3 with dimension 1.

W1 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = (x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x − z, y + 3z, 2x + y + z) = (x, y, z)}

= { (x, y, z) ∈ ℝ3 ∣ x − z = x, y + 3z = y, 2x + y + z = z}
= { (x, y, z) ∈ ℝ3 ∣ z = 0, y = −2x}
= { (x, −2x, 0)∣ x ∈ ℝ}
= { (1, −2, 0)x∣ x ∈ ℝ}
Therefore, W1 is a subspace of R3 with dimension 1.

W0 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = 0(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (x − z, y + 3z, 2x + y + z) = (0, 0, 0)}
= { (x, y, z) ∈ ℝ3 ∣ x − z = 0, y + 3z = 0, 2x + y + z = 0}
= { (x, y, z) ∈ ℝ3 ∣ x = z, y = −3z}

= { (z, −3z, z)∣ z ∈ ℝ}


= { (1, −3, 1)z∣ z ∈ ℝ}
Therefore, W0 is a subspace of R3 with dimension 1.
E5) W2 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = 2(x, y, z)}
= { (x, y, z) ∈ ℝ3 ∣ (y − z, 3x + y + z, 2x + y + z) = (2x, 2y, 2z)}
= { (x, y, z) ∈ ℝ3 ∣ y − z = 2x, 3x + y + z = 2y, 2x + y + z = 2z}
= { (x, y, z) ∈ ℝ3 ∣ − 2x + y − z = 0, 3x − y + z = 0, 2x + y − z = 0}

(by adding the second equation with the third equation)


= { (x, y, z) ∈ ℝ3 ∣ − 2x + y − z = 0, 3x − y + z = 0, x = 0}
= { (x, y, z) ∈ ℝ3 ∣ y = z, x = 0}

= { (0, z, z)∣ z ∈ ℝ}
= { (0, 1, 1)z∣ z ∈ ℝ}
Therefore, W2 is a subspace of R3 with dimension 1.

W1 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = (x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (y − z, 3x + y + z, 2x + y + z) = (x, y, z)}
= { (x, y, z) ∈ ℝ3 ∣ y − z = x, 3x + y + z = y, 2x + y + z = z}
= { (x, y, z) ∈ ℝ3 ∣ − x + y − z = 0, 3x + z = 0, 2x + y + z = z}
= { (x, y, z) ∈ ℝ3 ∣ − x + y − z = 0, 3x + z = 0, 2x + y = 0}
= { (x, y, z) ∈ ℝ3 ∣ − x + y − z = 0, z = −3x, y = −2x}
= { (x, y, z) ∈ ℝ3 ∣ z = −3x, y = −2x}
= { (x, −2x, −3x)∣ x ∈ ℝ}
= { (1, −2, −3)x∣ x ∈ ℝ}
Therefore, W1 is a subspace of R3 with dimension 1.

W−1 = { (x, y, z) ∈ ℝ3 ∣ T(x, y, z) = −1(x, y, z)}


= { (x, y, z) ∈ ℝ3 ∣ (y − z, 3x + y + z, 2x + y + z) = (−x, −y, −z)}
= { (x, y, z) ∈ ℝ3 ∣ y − z = −x, 3x + y + z = −y, 2x + y + z = −z}
= { (x, y, z) ∈ ℝ3 ∣ x + y − z = 0, 3x + 2y + z = 0, 2x + y + 2z = 0}

(by adding the first equation with the second equation)


= { (x, y, z) ∈ ℝ3 ∣ x + y − z = 0, 4x + 3y = 0, 2x + y + 2z = 0}
= { (x, y, z) ∈ ℝ3 ∣ − (3/4)y + y − z = 0, x = −(3/4)y, −(3/2)y + y + 2z = 0}
= { (x, y, z) ∈ ℝ3 ∣ (1/4)y − z = 0, x = −(3/4)y, −(1/2)y + 2z = 0}
= { (x, y, z) ∈ ℝ3 ∣ x = −3z, y = 4z}
= { (−3z, 4z, z)∣ z ∈ ℝ}
= { (−3, 4, 1)z∣ z ∈ ℝ}
Therefore, W−1 is a subspace of R3 with dimension 1.
x 0
E6) If 3 is an eigenvalue, then ∃ ⎡⎢ ⎤⎥ ≠ ⎡⎢ ⎤⎥ such that
⎣y⎦ ⎣0⎦

⎡1 2⎤ ⎡x⎤ = 3 ⎡x⎤ ⇒ x + 2y = 3x and 3y = 3y.


⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣0 3⎦ ⎣y⎦ ⎣y⎦

These equations are satisfied by x = 1, y = 1 and x = 2, y = 2.


⎡1⎤   ⎡2⎤
∴3 is an eigenvalue, and ⎢ ⎥ and ⎢ ⎥ are eigenvectors corresponding to 3.
⎣1⎦   ⎣2⎦
⎣1⎦ ⎣2⎦

x 0
E7) If −1 is an eigenvalue then ∃ ⎡⎢ ⎤⎥ ≠ ⎡⎢ ⎤⎥ such that
⎣y⎦ ⎣0⎦

⎡1 3⎤ ⎡x⎤ = − ⎡x⎤ ⇒ x + 3y = −x and 4x + 5y = −y


⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣4 5⎦ ⎣y⎦ ⎣y⎦

These equations are satisfied by x = 3, y = −2


⎡ 3 ⎤
∴ − 1 is an eigenvalue and ⎢  ⎥ is an eigenvector corresponding to −1.
⎣−2⎦
⎣2⎦

⎡x⎤ ⎡0⎤
E8) If 1 is an eigenvalue then ∃ ⎢⎢y⎥⎥ ≠ ⎢⎢0⎥⎥ such that
⎢ ⎥ ⎢ ⎥
⎣z⎦ ⎣0⎦

⎡1 0 1⎤ ⎡x⎤ ⎡x⎤
⎢0 2 2⎥ ⎢y⎥ = ⎢y⎥ ⇒ x + z = x, 2y + 2z = y, 3z = z
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣0 0 3 ⎦ ⎣z⎦ ⎣z⎦

These equations are satisfied by x = 1, y = 0 and z = 0


⎡1⎤
∴ 1 is an eigenvalue and ⎢ ⎥
⎢0⎥ is an eigenvector corresponding to 1.
⎢ ⎥
⎣0⎦
Now, try a similar procedure for the eigenvalue 2.

{ x ∣ x + 2y⎤ ⎡3x⎤⎫}
E9) W3 = ⎨ ⎡⎢ ⎤⎥ ∈ V2 (ℝ)∣∣ ⎡⎢ ⎥ = ⎢ ⎥⎬
{
⎩ ⎣y⎦ ∣ ⎣ 3y ⎦ ⎣3y⎦} ⎭

{ x⎤ ∣ ⎫
} ⎧ { ⎡x⎤ ∣ ⎫
}
= ⎨⎡⎢ ⎥ ∈ V2 (ℝ)∣∣ x = y⎬ = ⎨ ⎢ ⎥ ∈ V2 (ℝ)∣∣ x ∈ ℝ⎬
{
⎩ ⎣y⎦ ∣ } {
⎭ ⎩ ⎣x⎦ ∣ }

{ 1 ⎫
⎧ }
This is the 1-dimensional real subspace of V2 (ℝ) whose basis is ⎨⎡⎢ ⎤⎥⎬
⎩⎣1⎦}
{ ⎭

{ x ∣ x + 3y ⎤ ⎡−x⎤⎫ }
E10) W− 1 = ⎨ ⎡⎢ ⎤⎥ ∈ V2 (ℝ)∣∣ ⎡⎢ ⎥ = ⎢ ⎥⎬
{
⎩ ⎣y⎦ ∣ ⎣4x + 5y⎦ ⎣−y⎦}⎭

{ x⎤ ∣ ⎫
}
= ⎨⎡⎢ ⎥ ∈ V2 (ℝ)∣∣ 2x + 3y = 0⎬
{ y
⎩⎣ ⎦ ∣ }


{ x ⎤ ∣ ⎫
}
= ⎨⎡⎢ 2 ⎥ ∈ V2 (ℝ)∣∣ x ∈ ℝ⎬
{ −
⎩⎣ 3 ⎦x ∣ }


{ 3x ⎤ ∣ ⎫
}
= ⎨⎡⎢ ⎥ ∈ V2 (ℝ)∣∣ x ∈ ℝ⎬
{
⎩⎣ − 2x⎦ ∣ }

⎡ 3 ⎤
This is a one-dimensional real subspace of V2 (ℝ) whose basis is ⎢  ⎥ .
⎣−2⎦
⎧ x ∣ ⎡ x + z ⎤ ⎡x⎤⎫
{
{⎡ ⎤ ∣⎢ }
}
⎢ ⎥ ∣ ⎥ ⎢ ⎥
E11) W1 = ⎨ ⎢y⎥ ∈ V3 (ℝ)∣ ⎢2y + 2z⎥ = ⎢y⎥⎬
{ ⎢ ⎥ ∣ ⎢ ⎥ ⎢ ⎥}
{ z }
⎩⎣ ⎦ ∣ ⎣ 3z ⎦ ⎣z⎦⎭
⎧ x ∣ ⎫
{
{⎡ ⎤ ∣ }
}
⎢ ⎥
= ⎨ ⎢y⎥ ∈ V3 (ℝ)∣∣ z = 0, y + 2z = 0, 2z = 0⎬
{ ⎢ ⎥ ∣ }
{ z }
⎩⎣ ⎦ ∣ ⎭
⎧ x ∣ ⎫
{
{⎡ ⎤ ∣ }
}
⎢ ⎥
= ⎨ ⎢y⎥ ∈ V3 (ℝ)∣∣ y = 0, z = 0⎬
{ ⎢ ⎥ ∣ }
{ z }
⎩⎣ ⎦ ∣ ⎭
⎧ x ∣ ⎫
{
{⎡ ⎤ ∣ }
}
⎢ ⎥
= ⎨ ⎢0⎥ ∈ V3 (ℝ)∣∣ x ∈ ℝ⎬
{ ⎢ ⎥ ∣ }
{ 0 }
⎩⎣ ⎦ ∣ ⎭
⎡1⎤
This is a one dimensional subspace of V3 (ℝ) whose basis is ⎢⎢0⎥⎥ .
⎢ ⎥
⎣0⎦
⎧ x ∣ ⎡ x + z ⎤ ⎡2x⎤⎫
{
{⎡ ⎤ ∣ }
}
W2 = ⎨ ⎢y⎥ ∈ V3 (ℝ)∣∣ ⎢⎢2y + 2z⎥⎥ = ⎢⎢2y⎥⎥⎬
⎢ ⎥
{ ⎢ ⎥ ∣ ⎢ 3z ⎥ ⎢2z⎥}
{ z }
⎩⎣ ⎦ ∣⎣ ⎦ ⎣ ⎦⎭
⎧ ⎡x⎤ ∣ ⎫
= ⎨ ⎢y⎥ ∈ V3 (ℝ) ∣ z = x, 2z = 0, z = 0 ⎬
⎩ ⎣z⎦ ∣ ⎭
⎧ x ∣ ⎫
{
{⎡ ⎤ ∣ }
}
⎢ ⎥
= ⎨ ⎢y⎥ ∈ V3(ℝ)∣∣ x = 0, z = 0⎬
{ ⎢ ⎥ ∣ }
{ z }
⎩⎣ ⎦ ∣ ⎭
⎧ 0 ∣ ⎫
{
{⎡ ⎤ ∣ }
}
⎢ ⎥
= ⎨ ⎢ y ⎥ ∈ V3 (ℝ)∣∣ y ∈ ℝ⎬
{ ⎢ ⎥ ∣ }
{ 0 }
⎩⎣ ⎦ ∣ ⎭

⎡0⎤
This is a one-dimensional subspace of V3 (ℝ) whose basis is ⎢⎢1⎥⎥ .
⎢ ⎥
⎣0⎦
E12) a) It is

∣ t    0    −2 ∣     ∣ t   −1  ∣   ∣ 0   −2  ∣
∣ −1   t    −1 ∣ = t ∣          ∣ + ∣          ∣
∣ 0   −1  t + 2∣     ∣−1  t + 2∣   ∣−1  t + 2∣

= {t2 (t + 2) − t} − 2 = t3 + 2t2 − t − 2.

b) The required polynomial is


∣t − 1    0     0  ∣
∣ −1    t − 2  −1  ∣ = t3 − 2t2 − 4t + 5.
∣ −2     3    t + 1∣

c) The required polynomial is

∣t − 1    1     0  ∣
∣ −1    t − 2   1  ∣ = t3 − t2 − t + 1.
∣ −3    −2    t + 2∣
E13) a) The eigenvalues are the roots of the polynomial

t3 + 2t2 − t − 2 = (t − 1)(t + 1)(t + 2)

∴, they are 1, −1, −2.


b) The characteristic polynomial of the matrix is

t3 − 2t2 − 4t + 5 = (t − 1)(t2 − t − 5).


√ √
So, the eigenvalues are 1, 1+2 21 , 1−2 21 .
c) The characteristic polynomial of the matrix is

t3 − t2 − t + 1 = (t − 1)2 (t + 1) .

So, the eigenvalues are 1, 1, −1.


∣t − 2 −1 0 ∣
∣ ∣
E14) fA (t) = ∣∣ 0 t−1 1 ∣∣ = (t − 2)2 (t − 3)
∣ 0 −2 t − 4∣∣

∴, the eigenvalues are 𝜆1 = 2, 𝜆2 = 3.
The eigenvectors corresponding to 𝜆1 are given by

⎡2 1 0 ⎤ ⎡x⎤ ⎡x⎤
⎢0 1 −1⎥ ⎢y⎥ = 2 ⎢y⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣0 2 4 z
⎦ ⎣ ⎦ ⎣z⎦
This leads us to the equations
2x + y = 2x ⎫
} x=x
}
}
y − z = 2y ⎬ ⇒y=0
}
}
2y + 4z = 2z } z=0

⎧ x ∣ ⎫ ⎧ 1 ⎫
{
{⎡ ⎤ ∣ }
} {
{⎡ ⎤}
∴W2 = ⎨ ⎢ ⎥∣ x ∈ ℝ . ∴, basis for W2 is ⎢ ⎥} .
⎢0⎥∣ ⎬ ⎢0⎥
⎨⎢ ⎥⎬
{ ⎢ ⎥ } {
{ 0 ∣ } { 0 }
}
⎩⎣ ⎦ ∣ ⎭ ⎩⎣ ⎦⎭
The eigenvectors corresponding to 𝜆2 are given by

⎡2 1 0 ⎤ ⎡x⎤ ⎡x⎤
⎢0 1 −1⎥ ⎢y⎥ = 3 ⎢y⎥ .
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣0 2 4 ⎦ ⎣z⎦ ⎣z⎦
This gives us the equations
2x + y = 3x ⎫
} x=x
}
}
y − z = 3y ⎬ ⇒y=x
}
}
2y + 4z = 3z } z = −2x

⎧ x ∣ ⎫ ⎧ 1 ⎫
{
{⎡ ⎤ ∣ }
} {
{⎡ ⎤ }
∴W3 = ⎨ ⎢ x ⎥ ∣ x ∈ ℝ . ∴, basis for W3 is ⎢ 1 ⎥} .
⎢ ⎥ ⎬ ⎨⎢ ⎥
{ ⎢ ⎥ ∣ } {⎢ ⎥⎬
{ −2x ∣ } { −2 }}
⎩⎣ ⎦ ∣ ⎭ ⎩⎣ ⎦⎭

∣t − a1    0    …     0   ∣
∣  0    t − a2  …     0   ∣
E15) fD (t) = ∣  ⋮       ⋮     ⋱     ⋮   ∣ = (t − a1 ) (t − a2 ) … (t − an )
∣  0       0    …  t − an ∣
∴, its eigenvalues are a1 , a2 , … , an .
The eigenvectors corresponding to a1 are given by

⎡a1 0 … 0⎤ ⎡x1 ⎤ ⎡x1 ⎤


⎢0 a2 … 0 ⎥⎥ ⎢x ⎥ ⎢x ⎥
⎢ ⎢ 2⎥ ⎢ 2⎥
⎢ ⎥ ⎢ ⎥ = a1 ⎢ ⎥
⎢ . . … . ⎥ ⎢⋮⎥ ⎢⋮⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣0 0 … an ⎦ ⎣xn ⎦ ⎣xn ⎦
This gives us the equations
⎧a x = a1 x1 ⎫ x1 = x1
{ 1 1 }
{ }
{a2 x2 = a1 x2 } x =0
⎨ ⎬ ⇒ 2
{ ⋮ } ⋮
{ }
{a x }
= a1 xn ⎭ xn = 0
⎩ n n
(since ai ≠ aj for i ≠ j).
⎡x1 ⎤
⎢ 0 ⎥
∴, the eigenvectors corresponding to a1 are ⎢ ⋮ ⎥ , x1 ≠ 0, x1 ∈ ℝ.
⎣ 0 ⎦
⎡0⎤
⎢⋮⎥
⎢ ⎥
⎢0⎥
⎢ ⎥
Similarly, the eigenvectors corresponding to ai are ⎢xi ⎥ , xi ≠ 0, xi ∈ ℝ.
⎢ ⎥
⎢0⎥
⎢ ⎥
⎢⋮⎥
⎢ ⎥
⎣0⎦
E16) B = {1, x, x2 } is a basis of P2 .
⎡0 1 0⎤
Then [D]B = ⎢⎢0 0 2⎥⎥ .
⎢ ⎥
⎣0 0 0⎦
∣ t −1 0 ∣
∣ ∣
∴, The characteristic polynomial of D is ∣∣0 t −2∣∣ = t3 .
∣0 0 t ∣∣

∴, the only eigenvalue of D is 𝜆 = 0.
The eigenvectors corresponding to 𝜆 = 0 are a0 + a1 x + a2 x2 , where
D (a0 + a1 x + a2 x2 ) = 0, that is, a1 + 2a2 x = 0.
This gives a1 = 0, a2 = 0. ∴, the set of eigenvectors corresponding to
𝜆 = 0 are { a0 ∣ a0 ∈ ℝ, a0 ≠ 0} = ℝ − {0} .

E17) |tI − A| = |(tI − A)t |, since |At | = |A|.
= |tI − At |, since It = I and (B − C)t = Bt − Ct .
∴, the eigenvalues of A are the same as those of At .

E18) Let X be an eigenvector corresponding to 𝜆. Then X ≠ 0 and AX = 𝜆X.


∴A−1 (AX) = A−1 (𝜆X).

⇒ (A−1 A) X = 𝜆 (A−1 X)

⇒X = 𝜆 (A−1 X)

⇒𝜆 ≠ 0, since X ≠ 0.
Also, X = 𝜆(A−1 X) ⇒ 𝜆−1 X = A−1 X ⇒ 𝜆−1 is an eigenvalue of A−1 .
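E18 can be illustrated numerically. A sketch with a hypothetical invertible matrix of our own choosing, A = [[1, 2], [0, 3]], whose inverse is [[1, −2/3], [0, 1/3]] (exact arithmetic via `Fraction`):

```python
from fractions import Fraction

A = [[Fraction(1), Fraction(2)], [Fraction(0), Fraction(3)]]
A_inv = [[Fraction(1), Fraction(-2, 3)], [Fraction(0), Fraction(1, 3)]]

# Check A_inv really is the inverse: A * A_inv = I.
identity = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
assert identity == [[1, 0], [0, 1]]

# X = (1, 1) is an eigenvector of A for the eigenvalue 3 ...
X = (Fraction(1), Fraction(1))
AX = tuple(A[i][0] * X[0] + A[i][1] * X[1] for i in range(2))
assert AX == (3, 3)
# ... and an eigenvector of A^{-1} for the eigenvalue 1/3, as E18 predicts.
invX = tuple(A_inv[i][0] * X[0] + A_inv[i][1] * X[1] for i in range(2))
assert invX == (Fraction(1, 3), Fraction(1, 3))
```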

E19) Since the matrix in Example 13 has distinct eigenvalues 1, −1 and −2, it
is diagonalisable. Eigenvectors corresponding to these eigenvalues are
⎡2⎤ ⎡−2⎤ ⎡−1⎤
⎢3⎥ ⎢ 1 ⎥ and ⎢ 0 ⎥ , respectively.
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣1⎦ ⎣ 1 ⎦ ⎣1⎦

⎡2 −2 −1⎤      ⎡0 0  2 ⎤     ⎡1  0  0 ⎤
∴, if P = ⎢3  1  0 ⎥ , then P−1 ⎢1 0  1 ⎥ P = ⎢0 −1  0 ⎥ .
⎣1  1  1 ⎦      ⎣0 1 −2⎦     ⎣0  0 −2⎦
The matrix in Example 14 is not diagonalisable. This is because it only
has two distinct eigenvalues and, corresponding to each, it has only one
linearly independent eigenvector. ∴, we cannot find a basis of V3 (ℝ)
consisting of eigenvectors. Now apply Theorem 7.
The matrix in Example 15 is diagonalisable though it only has two distinct
eigenvalues. This is because corresponding to 𝜆1 = −1 there is one
linearly independent eigenvector, but corresponding to 𝜆2 = 1 there exist
two linearly independent eigenvectors. Therefore, we can form a basis of
V3 (ℝ) consisting of the eigenvectors

⎡ 1 ⎤ ⎡1⎤ ⎡0⎤
⎢−1⎥ , ⎢1⎥ , ⎢0⎥ .
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ 0 ⎦ ⎣0⎦ ⎣1⎦

⎡ 1 1 0⎤
The matrix P = ⎢⎢−1 1 0⎥⎥ is invertible, and
⎢ ⎥
⎣ 0 0 1⎦

⎡0 1 0⎤ ⎡−1 0 0⎤
P−1 ⎢ ⎥ ⎢ ⎥
⎢1 0 0⎥ P = ⎢ 0 1 0⎥ .
⎢ ⎥ ⎢ ⎥
⎣0 0 1⎦ ⎣ 0 0 1⎦

E20) The characteristic polynomial of A is

∣t − 3 −2 −1 ∣
∣ ∣ 2
f(t) = ∣∣ −2 t − 3 −1 ∣∣ = (t − 1) (t − 5) .
∣ 0 0 t − 1∣∣

Thus the eigenvalues of A are 𝜆1 = 1, 𝜆2 = 1, 𝜆3 = 5. You can find the


eigenvectors by the method already explained to you. Right now you can
directly verify that

⎡ 1 ⎤      ⎡ 1 ⎤     ⎡ 1 ⎤      ⎡ 1 ⎤       ⎡1⎤     ⎡1⎤
A ⎢−1⎥ = 1 ⎢−1⎥ ; A ⎢ 0 ⎥ = 1 ⎢ 0 ⎥ and A ⎢1⎥ = 5 ⎢1⎥
⎣ 0 ⎦      ⎣ 0 ⎦     ⎣−2⎦      ⎣−2⎦       ⎣0⎦     ⎣0⎦

⎡ 1 ⎤   ⎡ 1 ⎤      ⎡1⎤
Thus ⎢−1⎥ , ⎢ 0 ⎥ and ⎢1⎥ are eigenvectors corresponding to the
⎣ 0 ⎦   ⎣−2⎦      ⎣0⎦
eigenvalues 1, 1 and 5, respectively. By Theorem 7, the matrix which
diagonalises A is given by

⎡1 1 1⎤

P = ⎢−1 0 1⎥⎥ .
⎢ ⎥
⎣ 0 −2 0⎦

⎡1 0 0⎤
Check, by actual multiplication, that P−1 AP = ⎢0 1 0⎥ , which is in
⎣0 0 5⎦
diagonal form.
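A sketch in Python of an inverse-free check for E20: since det P ≠ 0, the columns of P are linearly independent, and then the equality AP = PD (verified below) already forces P⁻¹AP = D.

```python
A = [[3, 2, 1], [2, 3, 1], [0, 0, 1]]
P = [[1, 1, 1], [-1, 0, 1], [0, -2, 0]]   # columns are the eigenvectors
D = [[1, 0, 0], [0, 1, 0], [0, 0, 5]]

def matmul(X, Y):
    """Integer 3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# det P by cofactor expansion along the first row; non-zero means the
# eigenvectors are independent, so P is invertible.
detP = (P[0][0] * (P[1][1]*P[2][2] - P[1][2]*P[2][1])
        - P[0][1] * (P[1][0]*P[2][2] - P[1][2]*P[2][0])
        + P[0][2] * (P[1][0]*P[2][1] - P[1][1]*P[2][0]))
assert detP != 0

# AP = PD, hence P^{-1} A P = D.
assert matmul(A, P) == matmul(P, D)
```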
