Unit 12
EIGENVALUES AND
EIGENVECTORS
Structure
12.1 Introduction
     Objectives
12.2 The Algebraic Eigenvalue Problem
12.3 Obtaining Eigenvalues and Eigenvectors
     Characteristic Polynomial
     Eigenvalues of a Linear Transformation
12.4 Diagonalisation
12.5 Summary
12.6 Solutions/Answers
12.1 INTRODUCTION
In Unit 10 you have studied about the matrix of a linear transformation. You
have had several opportunities, in earlier units, to observe that the matrix of a
linear transformation depends on the choice of the bases of the concerned
vector spaces.
The eigenvalue problem involves the evaluation of all the eigenvalues and
eigenvectors of a linear transformation or a matrix. The solution of this problem
has basic applications in almost all branches of the sciences, technology and
the social sciences, besides its fundamental role in various branches of pure
and applied mathematics. The emergence of computers and the availability of
modern computing facilities have further strengthened this study, since
computers can handle very large systems of equations.
Objectives
After studying this unit, you should be able to:
• obtain a basis of a vector space V with respect to which the matrix of a linear
transformation T ∶ V → V is in diagonal form;
Now, here is a vector (1, 0) that changes by a scalar factor only when the linear
transformation T is applied to it. In this situation we say that 2 is an eigenvalue
of the linear mapping T and (1, 0) is an eigenvector of T corresponding to the
eigenvalue 2. So, basically, if a linear mapping changes a particular vector by a
scalar factor only, then the scalar is called an eigenvalue and the vector an
eigenvector of the linear mapping. Now, let us define this formally. (𝜆 is the
Greek letter ‘lambda’.)
Definition 1: An eigenvalue of a linear transformation T ∶ V → V is a scalar
𝜆 ∈ F such that there exists a non-zero vector x ∈ V satisfying the equation
Tx = 𝜆x.
Note that, by definition, the eigenvector x must be non-zero.
∗∗∗
Solution:
T(−1, 3, 1) = (−1 − 1, 2 + 4, −2 + 3 + 1)
= (−2, 6, 2)
= 2(−1, 3, 1)
Again
T(−1, 2, 0) = (−1, 2, −2 + 2)
= (−1, 2, 0)
Again,
T(1, −6, 2) = (1 − 2, −2 + 8, 2 − 6 + 2)
= (−1, 6, −2)
= −1(1, −6, 2)
∗∗∗
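The arithmetic above can be checked numerically. A minimal NumPy sketch, assuming (reconstructed from the component-wise computations shown, since the operator's formula is stated earlier in the unit) that the operator of Example 1 is T(x, y, z) = (x − z, −2x + 4z, 2x + y + z):

```python
import numpy as np

# Matrix of T(x, y, z) = (x - z, -2x + 4z, 2x + y + z), reconstructed
# from the component-wise arithmetic in the solution above (an assumption).
T = np.array([[1, 0, -1],
              [-2, 0, 4],
              [2, 1, 1]])

# The three vectors from the solution and their claimed eigenvalues.
pairs = [((-1, 3, 1), 2), ((-1, 2, 0), 1), ((1, -6, 2), -1)]
for v, lam in pairs:
    v = np.array(v)
    assert np.array_equal(T @ v, lam * v)  # T(v) = lambda * v
```

Each assertion checks one of the three computations T(v) = 𝜆v carried out by hand above.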
Solution:
T(4, −11, 1) = (4 − 11 − 1, −4 + 22 + 4, 8 − 11 + 1)
= (−8, 22, −2)
= −2(4, −11, 1)
42
Unit
. . . . .12
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Eigenvalues
. . . . . . . . . . . . . .and
. . . . Eigenvectors
..............
Therefore, (4, −11, 1) is an eigenvector for T, corresponding to the eigenvalue
−2.
Again
T(0, 1, 1) = (0 + 1 − 1, −2 + 4, 0 + 1 + 1)
= (0, 2, 2)
= 2(0, 1, 1)
Again
T(−2, 3, 1) = (−2 + 3 − 1, 2 − 6 + 4, −4 + 3 + 1)
= (0, 0, 0)
= 0(−2, 3, 1)
∗∗∗
W𝜆 = {x ∈ V ∣T(x) = 𝜆x }
= {0} ∪ {eigenvectors of T corresponding to 𝜆} .
Example 4: Obtain W−3 and W2 for the linear operator in Example 1. What
are the corresponding dimensions of the subspaces?
Solution:
= { (0, 0, z)∣ z ∈ ℝ}
= { (0, 0, 1)z∣ z ∈ ℝ}
∗∗∗
= { (x, y, z) ∈ ℝ3 ∣ x + z = 0, −2x − 2y + 4z = 0, 2x + y − z = 0}
= { (x, y, z) ∈ ℝ3 ∣ x = −z, y = 3z}
= { (−z, 3z, z)∣ z ∈ ℝ}
= { (−1, 3, 1)z∣ z ∈ ℝ}
Therefore, W2 is a subspace of ℝ3 with dimension 1.
∗∗∗
E3) Obtain W−2 , W2 and W0 for the linear operator in Example 3. What are the
corresponding dimensions of the subspaces?
E4) Obtain W2 , W1 and W0 for the linear operator in E1. What are the
corresponding dimensions of the subspaces?
E5) Obtain W2 , W1 and W−1 for the linear operator in E2. What are the
corresponding dimensions of the subspaces?
B∘ = {e1 = (1, 0, 0, … , 0), e2 = (0, 1, 0, … , 0), … , en = (0, 0, … , 0, 1)},
written as column vectors,
is the standard ordered basis of Vn (F). That is, the matrix of A, regarded as a
linear transformation from Vn (F) to Vn (F), with respect to the standard basis
B∘ , is A itself. This is why we denote the linear transformation A by A itself.
Looking at matrices as linear transformations in this manner will help you
understand eigenvalues and eigenvectors for matrices.
Example 6: Let
A = ⎡1 0 0⎤
    ⎢0 2 0⎥
    ⎣0 0 3⎦ .
Obtain an eigenvalue and a corresponding eigenvector of A.
∗∗∗
Example 7: Obtain an eigenvalue and a corresponding eigenvector of
A = ⎡0 −1⎤
    ⎣1  2⎦ ∈ M2 (ℝ).
Solution: Suppose 𝜆 ∈ ℝ is an eigenvalue of A. Then there exists a non-zero
X = (x, y) such that AX = 𝜆X, that is, (−y, x + 2y) = (𝜆x, 𝜆y). So
−y = 𝜆x and x + 2y = 𝜆y.
Substituting y = −𝜆x in the second equation gives
x + 2(−𝜆x) = 𝜆(−𝜆x)
x − 2𝜆x + 𝜆²x = 0
x(𝜆 − 1)² = 0.
Since x ≠ 0 (otherwise y = −𝜆x = 0 as well), 𝜆 = 1 is the only eigenvalue.
Then an eigenvector corresponding to it is (1, −1).
∗∗∗
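A quick numerical check of Example 7, as a sketch using NumPy's `linalg.eig`:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0, 2.0]])

# numpy.linalg.eig returns the roots of the characteristic polynomial;
# here (lambda - 1)^2 = 0, so 1 appears as a repeated eigenvalue.
eigvals, _ = np.linalg.eig(A)
assert np.allclose(eigvals, [1.0, 1.0])

# The eigenvector (1, -1) found above satisfies AX = 1 * X.
x = np.array([1.0, -1.0])
assert np.allclose(A @ x, x)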
E6) Show that 3 is an eigenvalue of
⎡1 2⎤
⎣0 3⎦ .
Find two corresponding eigenvectors.
E7) Show that −1 is an eigenvalue of
⎡1 3⎤
⎣4 5⎦ .
Find one corresponding eigenvector.
E8) Show that 1 and 2 are eigenvalues of
⎡1 0 1⎤
⎢0 2 2⎥
⎣0 0 3⎦ .
Find the corresponding eigenvectors.
Now, det (𝜆I − A) = (−1)n det (A − 𝜆I) (multiplying each row by (−1)). Hence,
det (𝜆I − A) = 0 if and only if det (A − 𝜆I) = 0.
This leads us to define the concept of the characteristic polynomial.
Example 8: Obtain the characteristic polynomial of the matrix
⎡1 2 ⎤
⎣0 −1⎦ .
Solution: The required polynomial is
∣t − 1  −2  ∣
∣  0   t + 1∣ = (t − 1)(t + 1) = t² − 1.
∗∗∗
Example 9: Obtain the characteristic polynomial of the matrix
⎡1 3⎤
⎣4 5⎦ .
Solution: The required polynomial is
∣t − 1  −3  ∣
∣ −4   t − 5∣ = (t − 1)(t − 5) − 12 = t² − 6t − 7.
∗∗∗
Solution: The required polynomial is
∣t − 1  −1    −1 ∣
∣  1   t + 1   1 ∣ = t²(t − 1).
∣  0     0   t − 1∣
∗∗∗
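The characteristic polynomials of Examples 8 and 9 can be recovered numerically: given a square matrix, `numpy.poly` returns the coefficients of det(tI − A), from the highest degree down. A sketch:

```python
import numpy as np

A8 = np.array([[1, 2], [0, -1]])  # matrix of Example 8
A9 = np.array([[1, 3], [4, 5]])   # matrix of Example 9

# Coefficients of det(tI - A), highest degree first.
assert np.allclose(np.poly(A8), [1, 0, -1])   # t^2 - 1
assert np.allclose(np.poly(A9), [1, -6, -7])  # t^2 - 6t - 7
```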
Note that 𝜆 is an eigenvalue of A iff det(𝜆I − A) = fA (𝜆) = 0, that is, iff 𝜆 is a
root of the characteristic polynomial fA (t), defined above. Due to this fact,
eigenvalues are also called characteristic roots, and eigenvectors are called
characteristic vectors.
For example, the eigenvalues of the matrix in Example 8 are the roots of the
polynomial t² − 1, namely, 1 and −1. Let us look at some more examples.
∗∗∗
For example, the matrix in Example 8 has two eigenvalues, 1 and −1, and the
matrix in E12(a) has 3 eigenvalues.
Thus, the roots of fB (t) and fA (t) coincide. Therefore, the eigenvalues of A
and B are the same. ■
Let us consider some more examples so that the concepts mentioned in this
section become absolutely clear to you.
Example 13: Obtain the eigenvalues and eigenvectors of the matrix
A = ⎡0 0  2⎤
    ⎢1 0  1⎥
    ⎣0 1 −2⎦ .
Solution: The characteristic polynomial of A (computed in E12(a)) is
t³ + 2t² − t − 2 = (t − 1)(t + 1)(t + 2), so the eigenvalues of A are
𝜆1 = 1, 𝜆2 = −1, 𝜆3 = −2. Now we obtain the eigenvectors of A.
The eigenvectors of A corresponding to 𝜆1 = 1 are the non-trivial solutions of
⎡0 0  2⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢1 0  1⎥ ⎢x2 ⎥ = 1 ⎢x2 ⎥ ,
⎣0 1 −2⎦ ⎣x3 ⎦     ⎣x3 ⎦
which gives
2x3 = x1
x1 + x3 = x2        ⇒ x1 = 2x3 , x2 = x1 + x3 = 3x3 .
x2 − 2x3 = x3
Thus, (2x3 , 3x3 , x3 ), that is, x3 (2, 3, 1), gives all the eigenvectors
associated with the eigenvalue 𝜆1 = 1, as x3 takes all non-zero real values.
The eigenvectors of A with respect to 𝜆2 = −1 are the non-trivial solution of
⎡0 0  2⎤ ⎡x1 ⎤        ⎡x1 ⎤
⎢1 0  1⎥ ⎢x2 ⎥ = (−1) ⎢x2 ⎥
⎣0 1 −2⎦ ⎣x3 ⎦        ⎣x3 ⎦
which gives
2x3 = −x1
x1 + x3 = −x2        ⇒ x1 = −2x3 , x2 = −x1 − x3 = 2x3 − x3 = x3 .
x2 − 2x3 = −x3
Thus, the eigenvectors are (−2x3 , x3 , x3 ) = x3 (−2, 1, 1) ∀ x3 ≠ 0, x3 ∈ ℝ.
Finally, the eigenvectors of A corresponding to 𝜆3 = −2 are the non-trivial
solutions of
⎡0 0  2⎤ ⎡x1 ⎤        ⎡x1 ⎤
⎢1 0  1⎥ ⎢x2 ⎥ = (−2) ⎢x2 ⎥
⎣0 1 −2⎦ ⎣x3 ⎦        ⎣x3 ⎦
which gives
2x3 = −2x1
x1 + x3 = −2x2        ⇒ x1 = −x3 , x2 = 0.
x2 − 2x3 = −2x3
Thus, the eigenvectors are (−x3 , 0, x3 ) = x3 (−1, 0, 1), x3 ≠ 0, x3 ∈ ℝ.
Thus, in this example, the eigenspaces W1 , W−1 and W−2 are 1-dimensional
spaces, generated over ℝ by (2, 3, 1), (−2, 1, 1) and (−1, 0, 1), respectively.
∗∗∗
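The whole of Example 13 can be verified in a few lines. A NumPy sketch (note that `eig` returns unit eigenvectors, so we check the generators found above directly):

```python
import numpy as np

A = np.array([[0.0, 0.0, 2.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, -2.0]])

# The eigenvalues 1, -1, -2 computed above.
eigvals = np.linalg.eigvals(A)
assert np.allclose(sorted(eigvals), [-2.0, -1.0, 1.0])

# Each generator of W_1, W_-1, W_-2 satisfies A v = lambda v.
for lam, v in [(1, (2, 3, 1)), (-1, (-2, 1, 1)), (-2, (-1, 0, 1))]:
    v = np.array(v, dtype=float)
    assert np.allclose(A @ v, lam * v)
```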
Example 14: Obtain the eigenvalues and eigenvectors of the matrix
A = ⎡ 1   1  0 0⎤
    ⎢−1  −1  0 0⎥
    ⎢−2  −2  2 1⎥
    ⎣ 1   1 −1 0⎦ .
Solution: The characteristic polynomial of A is
∣t − 1   −1    0   0 ∣
∣  1    t + 1  0   0 ∣ = t²(t − 1)²
∣  2      2  t − 2 −1 ∣
∣ −1     −1    1   t  ∣
∴ the eigenvalues are 0, 0, 1 and 1. Thus, the distinct eigenvalues are 𝜆1 = 0 and
𝜆2 = 1.
The eigenvectors corresponding to 𝜆1 = 0 are the non-trivial solutions of
⎡ 1   1  0 0⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢−1  −1  0 0⎥ ⎢x2 ⎥ = 0 ⎢x2 ⎥ ,
⎢−2  −2  2 1⎥ ⎢x3 ⎥     ⎢x3 ⎥
⎣ 1   1 −1 0⎦ ⎣x4 ⎦     ⎣x4 ⎦
which gives
x1 + x2 = 0
−x1 − x2 = 0
−2x1 − 2x2 + 2x3 + x4 = 0
x1 + x2 − x3 = 0
The first and last equations give x3 = 0. Then, the third equation gives x4 = 0.
The first equation gives x1 = −x2 .
Thus, the eigenvectors corresponding to 𝜆1 = 0 are
(−x2 , x2 , 0, 0) = x2 (−1, 1, 0, 0), x2 ≠ 0, x2 ∈ ℝ.
The eigenvectors corresponding to 𝜆2 = 1 are the non-trivial solutions of
⎡ 1   1  0 0⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢−1  −1  0 0⎥ ⎢x2 ⎥ = 1 ⎢x2 ⎥ ,
⎢−2  −2  2 1⎥ ⎢x3 ⎥     ⎢x3 ⎥
⎣ 1   1 −1 0⎦ ⎣x4 ⎦     ⎣x4 ⎦
which gives
x1 + x2 = x1
−x1 − x2 = x2
−2x1 − 2x2 + 2x3 + x4 = x3
x1 + x2 − x3 = x4
The first two equations give x2 = 0 and x1 = 0. Then the last equation gives
x4 = −x3 . Thus, the eigenvectors are
(0, 0, x3 , −x3 ) = x3 (0, 0, 1, −1), x3 ≠ 0, x3 ∈ ℝ.
∗∗∗
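Example 14's matrix shows why multiplicities matter: each eigenvalue has algebraic multiplicity 2 but only a 1-dimensional eigenspace. The dimension of W𝜆 is the nullity of A − 𝜆I, which is computable via the rank. A NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0, 0.0],
              [-1.0, -1.0, 0.0, 0.0],
              [-2.0, -2.0, 2.0, 1.0],
              [1.0, 1.0, -1.0, 0.0]])

# dim W_lambda = n - rank(A - lambda I), by the rank-nullity theorem.
for lam in (0.0, 1.0):
    dim_eigenspace = 4 - np.linalg.matrix_rank(A - lam * np.eye(4))
    assert dim_eigenspace == 1  # only one independent eigenvector each
```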
Example 15: Obtain the eigenvalues and eigenvectors of the matrix
A = ⎡0 1 0⎤
    ⎢1 0 0⎥
    ⎣0 0 1⎦ .
Solution: The characteristic polynomial of A is
∣ t  −1   0  ∣
∣−1   t   0  ∣ = (t + 1)(t − 1)² ,
∣ 0   0 t − 1∣
so the distinct eigenvalues are 𝜆1 = −1 and 𝜆2 = 1.
The eigenvectors corresponding to 𝜆1 = −1 are given by
⎡0 1 0⎤ ⎡x1 ⎤        ⎡x1 ⎤
⎢1 0 0⎥ ⎢x2 ⎥ = (−1) ⎢x2 ⎥ ,
⎣0 0 1⎦ ⎣x3 ⎦        ⎣x3 ⎦
which is equivalent to
x2 = −x1
x1 = −x2
x3 = −x3
Thus, the eigenvectors are (x1 , −x1 , 0) = x1 (1, −1, 0), x1 ≠ 0, x1 ∈ ℝ.
The eigenvectors corresponding to 𝜆2 = 1 are given by
⎡0 1 0⎤ ⎡x1 ⎤     ⎡x1 ⎤
⎢1 0 0⎥ ⎢x2 ⎥ = 1 ⎢x2 ⎥
⎣0 0 1⎦ ⎣x3 ⎦     ⎣x3 ⎦
which gives
x2 = x1
x1 = x2
x3 = x3
Thus, x1 = x2 with x2 and x3 arbitrary, so the eigenvectors corresponding to
𝜆2 = 1 are the non-zero vectors of the form x1 (1, 1, 0) + x3 (0, 0, 1).
∗∗∗
E14) Find the eigenvalues and a basis for each eigenspace of the matrix
A = ⎡2 1  0⎤
    ⎢0 1 −1⎥
    ⎣0 2  4⎦
E15) Find the eigenvalues and eigenvectors of the diagonal matrix
D = diag (a1 , a2 , … , an ),
where ai ≠ aj for i ≠ j.
We now turn to the eigenvalues and eigenvectors of linear transformations. If
A = [T]B is the matrix of T ∶ V → V with respect to an ordered basis B of V,
then 𝜆 is an eigenvalue of T
⇔ det (T − 𝜆I) = 0
⇔ det (𝜆I − T) = 0
⇔ det (𝜆I − A) = 0.
This definition does not depend on how the basis B is chosen, since similar
matrices have the same characteristic polynomial (Theorem 1), and the
matrices of the same linear transformation T with respect to two different
ordered bases of V are similar.
Just as for matrices, the eigenvalues of T are precisely the roots of the
characteristic polynomial of T.
Solution: Let A = [T]B = ⎡0 −1⎤
                         ⎣1  0⎦ , where B = {e1 , e2 }.
The characteristic polynomial of T = the characteristic polynomial of A =
∣ t  1∣
∣−1  t∣ = t² + 1,
which has no real roots.
∗∗∗
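Numerically, the missing real eigenvalues of this T show up as complex roots: NumPy works over ℂ and returns ±i for the rotation matrix. A sketch:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0, 0.0]])  # [T]_B for the operator above

# t^2 + 1 has no real roots; over the complex numbers the roots are +/- i.
eigvals = np.linalg.eigvals(A)
assert np.allclose(sorted(eigvals, key=lambda z: z.imag), [-1j, 1j])
```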
E17) Show that the eigenvalues of a square matrix A coincide with those of At .
E18) Let A be an invertible matrix. If 𝜆 is an eigenvalue of A, show that 𝜆 ≠ 0
and that 𝜆−1 is an eigenvalue of A−1 .
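The claim in E18 is easy to test numerically before proving it. A sketch on a sample invertible matrix (the matrix here is just an illustrative choice, with eigenvalues 1 and 3):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])  # eigenvalues 1 and 3, so A is invertible

# E18: the eigenvalues of A^{-1} are the reciprocals, 1 and 1/3.
inv_eigs = np.linalg.eigvals(np.linalg.inv(A))
assert np.allclose(sorted(inv_eigs), [1.0 / 3.0, 1.0])
```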
12.4 DIAGONALISATION
In this section we begin by proving a theorem that discusses the linear
independence of eigenvectors corresponding to different eigenvalues.
Now, we multiply Eqn. (1) by 𝜆r and subtract it from Eqn. (2), to get
Since the set {v1 , v2 , … , vr−1 } is linearly independent, each of the coefficients in
the above equation must be 0. Thus, we have 𝛼i (𝜆i − 𝜆r ) = 0 for
i = 1, 2, … , r − 1.
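Theorem 3's conclusion can be seen concretely: stacking eigenvectors for distinct eigenvalues as columns gives a full-rank matrix. A NumPy sketch using the eigenvectors found in Example 13:

```python
import numpy as np

# Eigenvectors of Example 13's matrix for the distinct eigenvalues
# 1, -1 and -2, placed as the columns of V.
V = np.column_stack([(2.0, 3.0, 1.0),
                     (-2.0, 1.0, 1.0),
                     (-1.0, 0.0, 1.0)])

# Linear independence <=> full rank <=> nonzero determinant.
assert np.linalg.matrix_rank(V) == 3
assert not np.isclose(np.linalg.det(V), 0.0)
```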
We will use Theorem 3 to choose a basis B for a vector space V so that the
matrix [T]B is a diagonal matrix.
Definition 5: A linear transformation T ∶ V → V on a finite-dimensional vector
space V is said to be diagonalisable if there exists a basis B = {v1 , v2 , … , vn }
of V such that the matrix of T with respect to the basis B is diagonal. That is,
[T]B = diag (𝜆1 , 𝜆2 , … , 𝜆n ),
[T]B = diag (𝛼1 , 𝛼2 , … , 𝛼n ),
Note that the matrix A is diagonalisable if and only if the matrix A, regarded as
a linear transformation A ∶ Vn (F) → Vn (F) ∶ A(X) = AX, is diagonalisable.
Thus, Theorems 3, 4 and 5 are true for the matrix A regarded as a linear
transformation from Vn (F) to Vn (F). Therefore, given an n × n matrix A, we
know that it is diagonalisable if it has n distinct eigenvalues.
AP = A (X1 , X2 , … , Xn )
   = (AX1 , AX2 , … , AXn )
   = (𝜆1 X1 , 𝜆2 X2 , … , 𝜆n Xn )
   = (X1 , X2 , … , Xn ) diag (𝜆1 , 𝜆2 , … , 𝜆n )
   = P diag (𝜆1 , 𝜆2 , … , 𝜆n ). ■
Let us see how this theorem works in practice.
Example 17: Diagonalise the matrix
A = ⎡1  2  0⎤
    ⎢2  1 −6⎥
    ⎣2 −2  3⎦ .
Solution: The characteristic polynomial of A is
       ∣t − 1  −2    0 ∣
f(t) = ∣ −2   t − 1   6 ∣ = (t − 5)(t − 3)(t + 3).
       ∣ −2     2   t − 3∣
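Example 17's matrix has three distinct eigenvalues, so Theorem 6 applies. A NumPy check of the diagonalisation (a sketch; `eig`'s eigenvector matrix plays the role of P):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, -6.0],
              [2.0, -2.0, 3.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
assert np.allclose(sorted(eigvals), [-3.0, 3.0, 5.0])

# P^{-1} A P is the diagonal matrix of eigenvalues.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigvals))
```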
Example 18: Diagonalise the matrix
A = ⎡ 1  0 0⎤
    ⎢−1  1 1⎥
    ⎣−1 −2 4⎦ .
Solution: The characteristic polynomial of A is
       ∣t − 1   0    0 ∣
f(t) = ∣  1   t − 1  −1 ∣ = (t − 1)(t − 2)(t − 3).
       ∣  1     2   t − 4∣
Eigenvectors corresponding to the eigenvalues 1, 2 and 3 are (1, 1, 1),
(0, 1, 1) and (0, 1, 2), respectively. Therefore, take
P = ⎡1 0 0⎤
    ⎢1 1 1⎥
    ⎣1 1 2⎦ .
Check, by actual multiplication, that P−1 AP = diag (1, 2, 3), which is in
diagonal form.
∗∗∗
We have seen that we can apply Theorem 6 only if an n × n matrix has
n distinct eigenvalues. Remember that this condition is sufficient, but not
necessary, for diagonalisation. Now we will see that even if a matrix does not
have n distinct eigenvalues, it can be diagonalisable. We will also discuss a
necessary and sufficient condition for the diagonalisation of a matrix.
PD = P diag (d1 , d2 , … , dn )
   = (X1 , X2 , … , Xn ) diag (d1 , d2 , … , dn )
   = (d1 X1 , d2 X2 , … , dn Xn ),
and
AP = A (X1 , X2 , … , Xn )
   = (AX1 , AX2 , … , AXn ).
Therefore, if the Xi are eigenvectors of A with AXi = 𝜆i Xi , then
AP = (𝜆1 X1 , 𝜆2 X2 , … , 𝜆n Xn )
   = (X1 , X2 , … , Xn ) diag (𝜆1 , 𝜆2 , … , 𝜆n )
   = P diag (𝜆1 , 𝜆2 , … , 𝜆n ).
As per our assumption, the column vectors of P are linearly independent. This
means that P is invertible (Unit 11, Theorem 6). Therefore, we can pre-multiply
both sides of the matrix equation AP = Pdiag (𝜆1 , 𝜆2 , … , 𝜆n ) by P−1 to get
Example 19: Diagonalise the matrix
A = ⎡0 0 −2⎤
    ⎢1 2  1⎥
    ⎣1 0  3⎦ .
Solution: The characteristic polynomial of A is
       ∣ t    0     2 ∣
f(t) = ∣−1  t − 2  −1 ∣ = (t − 1)(t − 2)² .
       ∣−1    0   t − 3∣
Corresponding to the eigenvalue 1 we get the eigenvector (−2, 1, 1), and
corresponding to the repeated eigenvalue 2 we get two linearly independent
eigenvectors, (−1, 0, 1) and (0, 1, 0). Therefore, take
P = ⎡−2 −1 0⎤
    ⎢ 1  0 1⎥
    ⎣ 1  1 0⎦ .
Check, by actual multiplication, that P−1 AP = diag (1, 2, 2), which is in
diagonal form.
∗∗∗
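Example 19 can be confirmed numerically: the eigenvalue 2 is repeated, but its eigenspace is 2-dimensional, so the matrix P of the example still diagonalises A. A NumPy sketch:

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])

# The repeated eigenvalue 2 has a 2-dimensional eigenspace:
assert 3 - np.linalg.matrix_rank(A - 2.0 * np.eye(3)) == 2

# The eigenvector matrix P from the example diagonalises A.
P = np.array([[-2.0, -1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag([1.0, 2.0, 2.0]))
```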
In the above example we have seen that the three eigenvalues of A are not
distinct, yet A is diagonalisable. The following exercise will give you some
practice in diagonalising matrices.
E20) Diagonalise the matrix
A = ⎡3 2 1⎤
    ⎢2 3 1⎥
    ⎣0 0 1⎦ .
12.5 SUMMARY
As in previous units, in this unit also we have treated linear transformations
along with the analogous matrix versions. We have covered the following points
here.
12.6 SOLUTIONS/ANSWERS
= { (x, y, z) ∈ ℝ3 ∣ x = 0, y = z}
= { (0, z, z)∣ z ∈ ℝ}
= { (0, 1, 1)z∣ z ∈ ℝ}
Therefore W2 is a subspace of ℝ3 with dimension 1.
= { (x, y, z) ∈ ℝ3 ∣ x + y − z = 0, y = 3z, 2x + y + z = 0}
= { (x, y, z) ∈ ℝ3 ∣ x + 3z − z = 0, y = 3z, 2x + 3z + z = 0}
= { (x, y, z) ∈ ℝ3 ∣ x − z = x, y + 3z = y, 2x + y + z = z}
= { (x, y, z) ∈ ℝ3 ∣ z = 0, y = −2x}
= { (x, −2x, 0)∣ x ∈ ℝ}
= { (1, −2, 0)x∣ x ∈ ℝ}
Therefore, W1 is a subspace of R3 with dimension 1.
= { (0, z, z)∣ z ∈ ℝ}
= { (0, 1, 1)z∣ z ∈ ℝ}
Therefore, W2 is a subspace of R3 with dimension 1.
E7) If −1 is an eigenvalue then there exists (x, y) ≠ (0, 0) such that
E8) If 1 is an eigenvalue then there exists (x, y, z) ≠ (0, 0, 0) such that
⎡1 0 1⎤ ⎡x⎤   ⎡x⎤
⎢0 2 2⎥ ⎢y⎥ = ⎢y⎥ ⇒ x + z = x, 2y + 2z = y, 3z = z
⎣0 0 3⎦ ⎣z⎦   ⎣z⎦
⇒ z = 0, y = 0, with x arbitrary, so the eigenvectors corresponding to 1 are the
non-zero multiples of (1, 0, 0). Similarly, for the eigenvalue 2 we get
x + z = 2x, 2y + 2z = 2y, 3z = 2z, which give x = z = 0 with y arbitrary.
This is a one-dimensional subspace of V3 (ℝ) whose basis is {(0, 1, 0)}.
E12) a) It is
∣ t   0   −2 ∣
∣−1   t   −1 ∣ = t ∣ t   −1 ∣ + ∣ 0   −2 ∣
∣ 0  −1  t + 2∣    ∣−1  t + 2∣  ∣−1  t + 2∣
= {t² (t + 2) − t} − 2 = t³ + 2t² − t − 2.
b) t³ − t² − t + 1 = (t − 1)² (t + 1).
E14) The eigenvalues of A are 2, 2 and 3. The eigenvectors corresponding to
the eigenvalue 2 are given by
⎡2 1  0⎤ ⎡x⎤     ⎡x⎤
⎢0 1 −1⎥ ⎢y⎥ = 2 ⎢y⎥
⎣0 2  4⎦ ⎣z⎦     ⎣z⎦
This leads us to the equations
2x + y = 2x
y − z = 2y        ⇒ x = x, y = 0, z = 0.
2y + 4z = 2z
∴ W2 = { (x, 0, 0)∣ x ∈ ℝ}. ∴, a basis for W2 is {(1, 0, 0)}.
The eigenvectors corresponding to 𝜆2 = 3 are given by
⎡2 1  0⎤ ⎡x⎤     ⎡x⎤
⎢0 1 −1⎥ ⎢y⎥ = 3 ⎢y⎥ .
⎣0 2  4⎦ ⎣z⎦     ⎣z⎦
This gives us the equations
2x + y = 3x
y − z = 3y        ⇒ x = x, y = x, z = −2x.
2y + 4z = 3z
∴ W3 = { (x, x, −2x)∣ x ∈ ℝ}. ∴, a basis for W3 is {(1, 1, −2)}.
E15) fD (t) = det (tI − D) = (t − a1 )(t − a2 ) … (t − an )
∴, its eigenvalues are a1 , a2 , … , an .
The eigenvectors corresponding to ai are the non-zero scalar multiples of ei ,
the ith standard basis vector.
E18) Let AX = 𝜆X with X ≠ 0. Pre-multiplying both sides by A−1 , we get
(A−1 A) X = 𝜆 (A−1 X)
⇒ X = 𝜆 (A−1 X)
⇒ 𝜆 ≠ 0, since X ≠ 0.
Also, X = 𝜆(A−1 X) ⇒ 𝜆−1 X = A−1 X ⇒ 𝜆−1 is an eigenvalue of A−1 .
E19) Since the matrix in Example 13 has distinct eigenvalues 1, −1 and −2, it
is diagonalisable. Eigenvectors corresponding to these eigenvalues are
(2, 3, 1), (−2, 1, 1) and (−1, 0, 1), respectively.
∴, if P has columns (2, 3, 1), (−2, 1, 1) and (−1, 0, 1), then
P−1 AP = diag (1, −1, −2), where A is the matrix of Example 13.
The matrix in Example 14 is not diagonalisable. This is because it only
has two distinct eigenvalues and, corresponding to each, it has only one
linearly independent eigenvector. ∴, we cannot find a basis of V4 (F)
consisting of eigenvectors. Now apply Theorem 3.
The matrix in Example 15 is diagonalisable though it only has two distinct
eigenvalues. This is because corresponding to 𝜆1 = −1 there is one
linearly independent eigenvector, but corresponding to 𝜆2 = 1 there exist
two linearly independent eigenvectors. Therefore, we can form a basis of
V3 (ℝ) consisting of the eigenvectors
(1, −1, 0), (1, 1, 0) and (0, 0, 1).
The matrix P = ⎡ 1 1 0⎤
               ⎢−1 1 0⎥
               ⎣ 0 0 1⎦
is invertible, and P−1 AP = diag (−1, 1, 1), where A is the matrix of
Example 15.
E20) The characteristic polynomial is
       ∣t − 3  −2   −1 ∣
f(t) = ∣ −2   t − 3  −1 ∣ = (t − 1)² (t − 5).
       ∣  0     0   t − 1∣
P = ⎡ 1  1 1⎤
    ⎢−1  0 1⎥
    ⎣ 0 −2 0⎦ .
Check, by actual multiplication, that P−1 AP = diag (1, 1, 5), which is in
diagonal form.