Chapter V Eigenvalues and Eigenvectors

Definition If 𝐴 is an 𝑛 × 𝑛 matrix, then a non-zero vector 𝐱 in ℝⁿ is called an eigenvector of 𝐴 (or of the matrix operator 𝑇_𝐴) if 𝐴𝐱 is a scalar multiple of 𝐱; i.e. 𝐴𝐱 = 𝜆𝐱 for some scalar 𝜆.
The scalar 𝜆 is called an eigenvalue of 𝐴 (or of 𝑇_𝐴), and 𝐱 is said to be an eigenvector corresponding to 𝜆.
Example The vector $\mathbf{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is an eigenvector of $A = \begin{pmatrix} 3 & 0 \\ 8 & -1 \end{pmatrix}$ corresponding to the eigenvalue $\lambda = 3$, since
$A\mathbf{x} = \begin{pmatrix} 3 & 0 \\ 8 & -1 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = 3\mathbf{x}$
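The notes are pencil-and-paper, but this claim is easy to verify numerically. A minimal NumPy sketch (not part of the original chapter) checking that 𝐴𝐱 = 3𝐱:

```python
import numpy as np

A = np.array([[3, 0],
              [8, -1]])
x = np.array([1, 2])

# A @ x should equal 3 * x exactly, so x is an eigenvector for the eigenvalue 3.
print(A @ x)                          # [3 6]
print(3 * x)                          # [3 6]
print(np.array_equal(A @ x, 3 * x))   # True
```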
Example Let $A = \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix}$ be a diagonal matrix. Then every unit vector $E_i$ ($i = 1, 2, \dots, n$) is an eigenvector of $A$:
we have $A E_i = a_i E_i$. Hence $a_i$ is an eigenvalue and $E_i$ is a corresponding eigenvector.
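As an illustrative check (assuming NumPy, which the notes do not use, and sample diagonal entries chosen here arbitrarily), the diagonal entries act as eigenvalues on the standard unit vectors:

```python
import numpy as np

# Diagonal entries a_1, a_2, a_3 (arbitrary sample values; any numbers work).
diag_entries = [2.0, 5.0, -1.0]
A = np.diag(diag_entries)

# Each standard unit vector E_i satisfies A @ E_i == a_i * E_i.
for i, a_i in enumerate(diag_entries):
    E_i = np.zeros(len(diag_entries))
    E_i[i] = 1.0
    print(np.allclose(A @ E_i, a_i * E_i))   # True, True, True
```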
How to find eigenvalues and eigenvectors?
𝐴𝐱 = 𝜆𝐱 can be written as 𝐴𝐱 = 𝜆𝐼𝐱, i.e. (𝜆𝐼 − 𝐴)𝐱 = 𝟎.
For this equation to have a non-trivial solution, 𝜆𝐼 − 𝐴 must not be invertible; equivalently, |𝜆𝐼 − 𝐴| = 0.
Theorem If 𝐴 is an 𝑛 × 𝑛 matrix, then 𝜆 is an eigenvalue of 𝐴 if and only if |𝜆𝐼 − 𝐴| = 0.
When |𝜆𝐼 − 𝐴| is expanded, it takes the form $P(\lambda) = \lambda^n + c_1\lambda^{n-1} + \cdots + c_n$, a polynomial of degree 𝑛, called the characteristic polynomial of 𝐴.
|𝜆𝐼 − 𝐴| = 0 is the characteristic equation of 𝐴.
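The same computation can be done symbolically. A small SymPy sketch (my addition, not from the notes) that builds |𝜆𝐼 − 𝐴| for the matrix of the first example and solves the characteristic equation:

```python
import sympy as sp

lam = sp.symbols('lam')
A = sp.Matrix([[3, 0],
               [8, -1]])    # the matrix from the first example above

# Characteristic polynomial |lam*I - A| and its roots (the eigenvalues).
p = (lam * sp.eye(2) - A).det()
print(sp.expand(p))                 # lam**2 - 2*lam - 3
print(sp.factor(p))                 # (lam - 3)*(lam + 1)
print(sp.solve(sp.Eq(p, 0), lam))   # [-1, 3]
```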
Example Find the eigenvalues and eigenvectors of $A = \begin{pmatrix} 3 & 0 \\ 8 & -1 \end{pmatrix}$.
Solution $|\lambda I - A| = 0 \Rightarrow \begin{vmatrix} \lambda - 3 & 0 \\ -8 & \lambda + 1 \end{vmatrix} = 0$
$\Rightarrow (\lambda - 3)(\lambda + 1) = 0$
$\Rightarrow \lambda = 3$ and $\lambda = -1$ are the eigenvalues of $A$.
To compute the eigenvectors,
$(\lambda I - A)\mathbf{x} = \mathbf{0} \Rightarrow \begin{pmatrix} \lambda - 3 & 0 \\ -8 & \lambda + 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
If $\lambda = 3$:
$\begin{pmatrix} 0 & 0 \\ -8 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
$-8x_1 + 4x_2 = 0$, or $x_2 = 2x_1$.
Let $x_1 = 1$. Then $x_2 = 2$, so $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is an eigenvector.
Of course, every non-zero scalar multiple of $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is also an eigenvector.
Hence $\left\{ t \begin{pmatrix} 1 \\ 2 \end{pmatrix} \,\middle|\, t \in \mathbb{R} \setminus \{0\} \right\}$ is the set of eigenvectors corresponding to the eigenvalue $\lambda = 3$; together with the zero vector, this is the eigenspace of $\lambda = 3$.
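For comparison, a numerical eigendecomposition of the same matrix with NumPy (an added check, not part of the notes). np.linalg.eig returns unit-length eigenvectors, so the result is rescaled to match the hand computation:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [8.0, -1.0]])

vals, vecs = np.linalg.eig(A)       # eigenvalues and unit-length eigenvectors (columns)
print(np.sort(vals))                # [-1.  3.]

# Rescale the eigenvector belonging to lambda = 3 so its first entry is 1.
i = int(np.argmin(np.abs(vals - 3)))
v = vecs[:, i]
print(v / v[0])                     # approximately [1. 2.]
```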
Example Find the eigenvalues of $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 4 & -17 & 8 \end{pmatrix}$.
Solution The characteristic polynomial of $A$ is $|\lambda I - A| = \begin{vmatrix} \lambda & -1 & 0 \\ 0 & \lambda & -1 \\ -4 & 17 & \lambda - 8 \end{vmatrix} = \lambda^3 - 8\lambda^2 + 17\lambda - 4$
The eigenvalues of $A$ must satisfy the equation $\lambda^3 - 8\lambda^2 + 17\lambda - 4 = 0$.
Any integer root must be a divisor of $-4$, i.e. one of $\pm 1$, $\pm 2$, $\pm 4$. Checking these, $\lambda = 4$ is a solution, hence $\lambda - 4$ is a factor: $(\lambda - 4)(\lambda^2 - 4\lambda + 1) = 0$.
The other solutions satisfy $\lambda^2 - 4\lambda + 1 = 0$
$\Rightarrow \lambda = \dfrac{4 \pm \sqrt{16 - 4}}{2} = \dfrac{4 \pm 2\sqrt{3}}{2} = 2 \pm \sqrt{3}$
i.e. $\lambda = 4$, $\lambda = 2 + \sqrt{3}$, and $\lambda = 2 - \sqrt{3}$.
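As a cross-check (added here, not in the original), the characteristic polynomial and its roots can also be obtained numerically with NumPy:

```python
import numpy as np

A = np.array([[0.0,   1.0, 0.0],
              [0.0,   0.0, 1.0],
              [4.0, -17.0, 8.0]])

# np.poly(A) returns the coefficients of the characteristic polynomial, highest power first.
print(np.poly(A))                          # approx [ 1. -8. 17. -4.]
print(np.sort(np.roots([1, -8, 17, -4])))  # approx [0.2679 3.7321 4.]
print(2 - np.sqrt(3), 2 + np.sqrt(3))      # 0.2679...  3.7321...
```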
Theorem If 𝐴 is an 𝑛 × 𝑛 triangular matrix (upper triangular, lower triangular or diagonal), then the eigenvalues of 𝐴 are the entries on the main diagonal of 𝐴.
Example Find the eigenvalues of the lower triangular matrix $A = \begin{pmatrix} \frac{1}{2} & 0 & 0 \\ -1 & \frac{2}{3} & 0 \\ 5 & -8 & -\frac{1}{4} \end{pmatrix}$
Solution $|\lambda I - A| = \begin{vmatrix} \lambda - \frac{1}{2} & 0 & 0 \\ 1 & \lambda - \frac{2}{3} & 0 \\ -5 & 8 & \lambda + \frac{1}{4} \end{vmatrix} = \left(\lambda - \dfrac{1}{2}\right)\left(\lambda - \dfrac{2}{3}\right)\left(\lambda + \dfrac{1}{4}\right) = 0$
The eigenvalues are $\lambda = \dfrac{1}{2}$, $\lambda = \dfrac{2}{3}$, and $\lambda = -\dfrac{1}{4}$.
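A quick numerical confirmation of the theorem on triangular matrices (my addition, using NumPy on the matrix of this example):

```python
import numpy as np

A = np.array([[ 0.5,      0.0,   0.0 ],
              [-1.0,  2.0 / 3.0, 0.0 ],
              [ 5.0,     -8.0,  -0.25]])

# For a triangular matrix, the eigenvalues are exactly the diagonal entries.
print(np.sort(np.linalg.eigvals(A)))   # [-0.25  0.5  0.6667]
print(np.sort(np.diag(A)))             # [-0.25  0.5  0.6667]
```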

Theorem If 𝐴 is an 𝑛 × 𝑛 matrix, the following statements are equivalent:


a) 𝜆 is an eigenvalue of 𝐴
b) 𝜆 is a solution of the characteristic equation det(𝜆𝐼 − 𝐴) = 0
c) The system of equations (𝜆𝐼 − 𝐴)𝐱 = 𝟎 has nontrivial solutions
d) There is a nonzero vector 𝐱 such that 𝐴𝐱 = 𝜆𝐱
The eigenspace of 𝐴 corresponding to 𝜆 can also be viewed as:
• The null space of the matrix 𝜆𝐼 − 𝐴
• The kernel of the matrix operator $T_{\lambda I - A}: \mathbb{R}^n \to \mathbb{R}^n$
• The set of vectors 𝐱 for which 𝐴𝐱 = 𝜆𝐱
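The null-space description is easy to exercise directly. A short SymPy sketch (added, not from the notes) computing the eigenspace for 𝜆 = 3 of the earlier 2 × 2 matrix as the null space of 𝜆𝐼 − 𝐴:

```python
import sympy as sp

A = sp.Matrix([[3, 0],
               [8, -1]])   # matrix from the earlier example, eigenvalue lambda = 3

# The eigenspace for lambda = 3 is the null space of (3*I - A).
basis = (3 * sp.eye(2) - A).nullspace()
print(basis)   # [Matrix([[1/2], [1]])] -- a scalar multiple of (1, 2)
```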
Example (Bases for Eigenspaces)
Find bases for the eigenspaces of the matrix $A = \begin{pmatrix} -1 & 3 \\ 2 & 0 \end{pmatrix}$.
Solution The characteristic equation of $A$ is
$\begin{vmatrix} \lambda + 1 & -3 \\ -2 & \lambda \end{vmatrix} = \lambda(\lambda + 1) - 6 = (\lambda - 2)(\lambda + 3) = 0$
So the eigenvalues of $A$ are $\lambda = 2$ and $\lambda = -3$.
$(\lambda I - A)\mathbf{x} = \mathbf{0} \Rightarrow \begin{pmatrix} \lambda + 1 & -3 \\ -2 & \lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
If $\lambda = 2$,
$\begin{pmatrix} 3 & -3 \\ -2 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
$\Rightarrow \begin{cases} 3x_1 - 3x_2 = 0 \\ -2x_1 + 2x_2 = 0 \end{cases} \Rightarrow x_1 = t,\ x_2 = t$
$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} t \\ t \end{pmatrix} = t \begin{pmatrix} 1 \\ 1 \end{pmatrix}$
$\Rightarrow \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ is a basis for the eigenspace corresponding to $\lambda = 2$.
If $\lambda = -3$,
$\begin{pmatrix} -2 & -3 \\ -2 & -3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
$\Rightarrow -2x_1 - 3x_2 = 0 \Rightarrow x_1 = -\dfrac{3}{2}x_2$
Let $x_2 = t$, so $x_1 = -\dfrac{3}{2}t$
$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -\frac{3}{2}t \\ t \end{pmatrix} = t \begin{pmatrix} -\frac{3}{2} \\ 1 \end{pmatrix}$
$\Rightarrow \begin{pmatrix} -\frac{3}{2} \\ 1 \end{pmatrix}$ is a basis for the eigenspace corresponding to $\lambda = -3$.
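SymPy can reproduce these bases exactly (an added illustration, not part of the notes); eigenvects() returns each eigenvalue with its algebraic multiplicity and a basis of the eigenspace:

```python
import sympy as sp

A = sp.Matrix([[-1, 3],
               [ 2, 0]])

# Each entry is (eigenvalue, algebraic multiplicity, basis of the eigenspace).
for val, mult, basis in A.eigenvects():
    print(val, mult, [list(v) for v in basis])
# -3 1 [[-3/2, 1]]
#  2 1 [[1, 1]]
# (order and scaling of the basis vectors may differ)
```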
Example Find bases for the eigenspaces of $A = \begin{pmatrix} 0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{pmatrix}$
Solution The characteristic equation is $\lambda^3 - 5\lambda^2 + 8\lambda - 4 = 0$, or $(\lambda - 1)(\lambda - 2)^2 = 0$.
$(\lambda I - A)\mathbf{x} = \mathbf{0} \Rightarrow \begin{pmatrix} \lambda & 0 & 2 \\ -1 & \lambda - 2 & -1 \\ -1 & 0 & \lambda - 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$
If $\lambda = 2$:
$\begin{pmatrix} 2 & 0 & 2 \\ -1 & 0 & -1 \\ -1 & 0 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$
$\Rightarrow x_1 + x_3 = 0$
Let $x_1 = -s$, $x_2 = t$, $x_3 = s$:
$\mathbf{x} = \begin{pmatrix} -s \\ t \\ s \end{pmatrix} = \begin{pmatrix} -s \\ 0 \\ s \end{pmatrix} + \begin{pmatrix} 0 \\ t \\ 0 \end{pmatrix} = s \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} + t \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$
$\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$ are linearly independent. They form a basis for the eigenspace corresponding to $\lambda = 2$.
If $\lambda = 1$:
$\begin{pmatrix} 1 & 0 & 2 \\ -1 & -1 & -1 \\ -1 & 0 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$
$\Rightarrow \begin{cases} x_1 + 2x_3 = 0 \\ -x_1 - x_2 - x_3 = 0 \\ -x_1 - 2x_3 = 0 \end{cases} \Rightarrow \begin{cases} x_1 = -2x_3 \\ x_2 = -x_1 - x_3 = x_3 \end{cases}$
Let $x_3 = t$, so $x_1 = -2t$, $x_2 = t$:
$\mathbf{x} = \begin{pmatrix} -2t \\ t \\ t \end{pmatrix} = t \begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix}$
Hence $\begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix}$ is a basis for the eigenspace corresponding to $\lambda = 1$.
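The same SymPy check for this 3 × 3 example (added, not part of the notes) exhibits the two-dimensional eigenspace for 𝜆 = 2:

```python
import sympy as sp

A = sp.Matrix([[0, 0, -2],
               [1, 2,  1],
               [1, 0,  3]])

for val, mult, basis in A.eigenvects():
    print(val, mult, [list(v) for v in basis])
# 1 1 [[-2, 1, 1]]              -- one basis vector for lambda = 1
# 2 2 [[0, 1, 0], [-1, 0, 1]]   -- a two-dimensional eigenspace for lambda = 2
# (order and scaling of the basis vectors may differ)
```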
Theorem A square matrix 𝐴 is invertible if and only if 𝜆 = 0 is not an eigenvalue of 𝐴.
Proof The characteristic polynomial is $|\lambda I - A| = \lambda^n + c_1\lambda^{n-1} + \cdots + c_n$.
Setting $\lambda = 0$ gives $\det(-A) = (-1)^n \det(A) = c_n$, so $\det(A) = (-1)^n c_n$.
Now $\lambda = 0$ is a solution of the characteristic equation $\lambda^n + c_1\lambda^{n-1} + \cdots + c_n = 0$ if and only if $c_n = 0$.
Hence 𝐴 is invertible if and only if $\det(A) = (-1)^n c_n \neq 0$, i.e. if and only if 𝜆 = 0 is not an eigenvalue of 𝐴.
Example (Eigenvalues and Invertibility)
$A = \begin{pmatrix} 0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{pmatrix}$ has eigenvalues $\lambda = 1$ and $\lambda = 2$, neither of which is 0. Hence 𝐴 is invertible.
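A small numerical sketch of this connection (my addition, using NumPy): 0 is not among the eigenvalues, and the determinant is non-zero:

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])

print(np.sort(np.linalg.eigvals(A)))   # approx [1. 2. 2.] -- 0 is not an eigenvalue
print(np.linalg.det(A))                # approx 4.0, non-zero, so A is invertible
```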
Definition If 𝑇: 𝑽 → 𝑽 is a linear operator on a vector space 𝑽, then a nonzero vector 𝐱 in 𝑽 is called an eigenvector of 𝑇 if 𝑇(𝐱) is a scalar multiple of 𝐱; i.e. 𝑇(𝐱) = 𝜆𝐱 for some scalar 𝜆.
The scalar 𝜆 is called an eigenvalue of 𝑇, and 𝐱 is said to be an eigenvector corresponding to 𝜆.
Example If $D: C^\infty \to C^\infty$ is the differentiation operator on the vector space of functions with continuous derivatives of all orders on the interval $(-\infty, \infty)$, and if 𝜆 is a constant, then $D(e^{\lambda x}) = \lambda e^{\lambda x}$. Thus 𝜆 is an eigenvalue of 𝐷 and $e^{\lambda x}$ is an eigenvector corresponding to 𝜆.
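This can be confirmed symbolically; a short SymPy sketch (added here, not from the notes) differentiating $e^{\lambda x}$:

```python
import sympy as sp

x, lam = sp.symbols('x lam')

f = sp.exp(lam * x)
print(sp.diff(f, x))                          # lam*exp(lam*x)
print(sp.simplify(sp.diff(f, x) - lam * f))   # 0, i.e. D(e^{lam*x}) = lam * e^{lam*x}
```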

Theorem Let 𝑽 be a vector space and let 𝑇: 𝑽 → 𝑽 be a linear map. Let 𝑉𝜆 be the subspace of 𝑽 generated by all eigenvectors of 𝑇 having 𝜆 as eigenvalue. Then every non-zero element of 𝑉𝜆 is an eigenvector of 𝑇 having 𝜆 as eigenvalue.
Proof Let 𝑣1, 𝑣2 ∈ 𝑽 be such that 𝑇𝑣1 = 𝜆𝑣1 and 𝑇𝑣2 = 𝜆𝑣2. Then
𝑇(𝑣1 + 𝑣2) = 𝑇𝑣1 + 𝑇𝑣2 = 𝜆𝑣1 + 𝜆𝑣2 = 𝜆(𝑣1 + 𝑣2)
If 𝑐 ∈ 𝑲, the field of scalars, then 𝑇(𝑐𝑣1) = 𝑐𝑇(𝑣1) = 𝑐𝜆𝑣1 = 𝜆(𝑐𝑣1)
So sums and scalar multiples of eigenvectors with eigenvalue 𝜆 again satisfy 𝑇𝐱 = 𝜆𝐱, and hence every non-zero element of 𝑉𝜆 is an eigenvector with eigenvalue 𝜆. The subspace 𝑉𝜆 is called the eigenspace of 𝑇 belonging to 𝜆.
(Exercise: check that 𝑉𝜆 is indeed a subspace of the vector space 𝑽.)
Theorem Let 𝑽 be a vector space and let 𝑇: 𝑽 → 𝑽 be a linear map. Let 𝑣1, 𝑣2, …, 𝑣𝑚 be eigenvectors of 𝑇, with eigenvalues 𝜆1, 𝜆2, …, 𝜆𝑚 respectively. Assume that these eigenvalues are distinct, i.e. 𝜆𝑖 ≠ 𝜆𝑗 if 𝑖 ≠ 𝑗. Then 𝑣1, 𝑣2, …, 𝑣𝑚 are linearly independent.
Proof By induction on 𝑚. For 𝑚 = 1, an element 𝑣1 ∈ 𝑽, 𝑣1 ≠ 𝟎, is linearly independent.
Assume 𝑚 > 1. Suppose we have a relation
𝑐1𝑣1 + 𝑐2𝑣2 + ⋯ + 𝑐𝑚𝑣𝑚 = 𝟎 … (∗) with scalars 𝑐𝑖. We must prove that all 𝑐𝑖 = 0.
If we multiply (∗) by 𝜆1, we get
𝑐1𝜆1𝑣1 + 𝑐2𝜆1𝑣2 + ⋯ + 𝑐𝑚𝜆1𝑣𝑚 = 𝟎 … (∗∗)
If we apply 𝑇 to (∗), we get
𝑐1𝜆1𝑣1 + 𝑐2𝜆2𝑣2 + ⋯ + 𝑐𝑚𝜆𝑚𝑣𝑚 = 𝟎 … (∗∗∗)
Subtracting (∗∗) from (∗∗∗) gives
𝑐2(𝜆2 − 𝜆1)𝑣2 + ⋯ + 𝑐𝑚(𝜆𝑚 − 𝜆1)𝑣𝑚 = 𝟎
By the induction hypothesis, 𝑣2, …, 𝑣𝑚 are linearly independent, so 𝑐𝑗(𝜆𝑗 − 𝜆1) = 0 for 𝑗 = 2, …, 𝑚. Since 𝜆𝑗 − 𝜆1 ≠ 0 for 𝑗 = 2, …, 𝑚, we conclude that 𝑐2 = ⋯ = 𝑐𝑚 = 0.
Going back to (∗), we see that 𝑐1𝑣1 = 𝟎, and since 𝑣1 ≠ 𝟎, it follows that 𝑐1 = 0.
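A concrete numerical illustration (added, not in the notes): stacking eigenvectors for the distinct eigenvalues of the earlier 2 × 2 example into a matrix and checking its rank with NumPy:

```python
import numpy as np

# Eigenvectors of A = [[3, 0], [8, -1]] for the distinct eigenvalues 3 and -1.
v1 = np.array([1.0, 2.0])   # eigenvalue 3
v2 = np.array([0.0, 1.0])   # eigenvalue -1

# Rank 2 means the two columns are linearly independent, as the theorem asserts.
M = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(M))   # 2
```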
