
A Project Report

AN INTRODUCTION TO EIGENVALUES AND EIGENVECTORS

Submitted in Partial Fulfilment of the Requirements for the Degree of

BACHELOR OF SCIENCE

In Mathematics

By

Ujjal Kumar Nanda

Roll number: 2518031

Regd. Number: 10699/18

To the

DEPARTMENT OF MATHEMATICS

MPC Autonomous College, Takhatpur, Baripada

JULY, 2021
CONTENTS

1. DECLARATION

2. CERTIFICATE

3. ACKNOWLEDGEMENT

4. INTRODUCTION

5. EIGENVALUES AND EIGENVECTORS

5.1. SYMMETRIC MATRIX

5.2. IDEMPOTENT MATRIX

5.3. SKEW SYMMETRIC MATRIX

5.4. CAYLEY-HAMILTON THEOREM

6. APPLICATIONS

7. CONCLUSION
1. DECLARATION

I, Ujjal Kumar Nanda, of +3 3rd year Science, 6th Semester, Mathematics Honours, bearing the
examination roll no. 2518031 and the registration no. 10699/18, have prepared the project work
submitted in partial fulfilment of the requirements for the degree of Bachelor of Science in
Mathematics.

Place: -

Date: -

UJJAL KUMAR NANDA


Signature of the student
2. CERTIFICATE

This is to certify that the seminar topic “EIGENVALUES AND EIGENVECTORS”, prepared and
presented by UJJAL KUMAR NANDA, bearing the examination roll no. 2518031 and the
registration no. 10699/18, of +3 3rd year Science, Mathematics Honours, for the session 2018-21
of M.P.C. AUTONOMOUS COLLEGE, TAKHATPUR, BARIPADA, has been completed under my
supervision.

Date: -

Place: -

Signature of the guide


3. ACKNOWLEDGEMENT

I express my deep sense of gratitude to my supervisor Dr. ALOK KUMAR NAIK, Lecturer in
Mathematics, M.P.C. AUTONOMOUS COLLEGE, for his sincere guidance, valuable suggestions
and encouragement, which helped me prepare this seminar report in due time.
CHAPTER 1
1. INTRODUCTION:

The problem Ax = λx will be solved by simplifying a matrix, making it diagonal if possible.


The basic step is no longer to subtract a multiple of one row from another.

Determinants give a transition from Ax = b to Ax = λx. In both cases the determinant leads to a
formal solution: to Cramer's rule for x = A⁻¹b, and to the polynomial det(A − λI), whose roots
will be the eigenvalues. The determinant can actually be used if n = 2 or 3; for large n,
computing λ is more difficult than solving Ax = b.

The first step is to understand how eigenvalues can be useful. One of their applications is to
ordinary differential equations. We shall not assume that the reader is an expert on differential
equations; if we can differentiate x^n, sin x and e^x, we know enough.
As a specific example, consider the coupled pair of equations
$$\frac{dv}{dt} = 4v - 5w, \qquad v = 8 \text{ at } t = 0$$

$$\frac{dw}{dt} = 2v - 3w, \qquad w = 5 \text{ at } t = 0 \qquad\qquad (1)$$

This is an initial value problem. The unknown is specified at time 𝑡 = 0 by the given initial
values 8 and 5. The problem is to find 𝑣(𝑡) and 𝑤(𝑡) for later times 𝑡 > 0.

It is easy to write the system in matrix form. Let the unknown vector be u(t) with initial value
u(0). The coefficient matrix is A:

$$u(t) = \begin{pmatrix} v(t) \\ w(t) \end{pmatrix}, \qquad u(0) = \begin{pmatrix} 8 \\ 5 \end{pmatrix}, \qquad A = \begin{pmatrix} 4 & -5 \\ 2 & -3 \end{pmatrix}$$

The two coupled equations become the vector equation we want:

$$\frac{du}{dt} = Au \quad \text{with } u = u(0) \text{ at } t = 0. \qquad\qquad (2)$$

This is the basic statement of the problem. Note that it is a first-order equation, with no higher
derivatives, and it is linear in the unknowns. It also has constant coefficients: the matrix A is
independent of time.

How do we find u(t)? If there were only one unknown instead of two, that question would have
been easy to answer. We would have had a scalar instead of a vector equation:

$$\frac{du}{dt} = au \quad \text{with } u = u(0) \text{ at } t = 0 \qquad\qquad (3)$$

The solution to this equation is the one thing you need to know: the pure exponential

$$u(t) = e^{at}u(0) \qquad\qquad (4)$$

At the initial time t = 0, u equals u(0) because e^0 = 1. The derivative of e^{at} has the required
factor a, so that du/dt = au. Thus the initial condition and the equation are both satisfied.
Notice the behaviour of u for large times: the equation is unstable if a > 0, neutrally stable if
a = 0, and stable if a < 0; the factor e^{at} correspondingly approaches infinity, remains bounded,
or goes to zero. If a were a complex number, its imaginary part β would produce oscillations:

$$e^{i\beta t} = \cos \beta t + i \sin \beta t$$


Decay or growth is governed by the factor e^{at}. So much for a single equation. We shall take a
direct approach to systems and look for solutions with the same exponential dependence on t
just found in the scalar case:

$$v(t) = e^{\lambda t}y, \qquad w(t) = e^{\lambda t}z \qquad\qquad (5)$$

or in vector notation

$$u(t) = e^{\lambda t}x \qquad\qquad (6)$$


This is the whole key to the differential equation du/dt = Au: look for pure exponential
solutions. Substituting v = e^{λt}y and w = e^{λt}z into the equations, we find

$$\lambda e^{\lambda t}y = 4e^{\lambda t}y - 5e^{\lambda t}z$$

$$\lambda e^{\lambda t}z = 2e^{\lambda t}y - 3e^{\lambda t}z$$

The factor e^{λt} is common to every term and can be removed. This cancellation is the reason for
assuming the same exponent λ for both unknowns. It leaves the eigenvalue problem

$$4y - 5z = \lambda y$$

$$2y - 3z = \lambda z \qquad\qquad (7)$$

That is the eigenvalue equation. In matrix form it is Ax = λx. We can see it again if we use
u = e^{λt}x, a number e^{λt} that grows or decays in time, multiplying a fixed vector x. Substituting
into du/dt = Au gives

$$\lambda e^{\lambda t}x = Ae^{\lambda t}x$$

The cancellation of e^{λt} produces the eigenvalue equation

$$Ax = \lambda x$$
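For the specific system (1), this recipe can be cross-checked numerically. The short Python sketch below is my own illustration (it uses NumPy and a helper function name `u` that does not appear in the report): it computes the eigenvalues and eigenvectors of A, expands the initial vector u(0) = (8, 5) in the eigenvector basis, and assembles the pure-exponential solution u(t) = c₁e^{λ₁t}x₁ + c₂e^{λ₂t}x₂.

```python
import numpy as np

# Coefficient matrix of system (1): dv/dt = 4v - 5w, dw/dt = 2v - 3w
A = np.array([[4.0, -5.0],
              [2.0, -3.0]])
u0 = np.array([8.0, 5.0])      # initial values v(0) = 8, w(0) = 5

# Columns of X are eigenvectors: A @ X[:, i] = lam[i] * X[:, i]
lam, X = np.linalg.eig(A)

# Expand u(0) in the eigenvector basis: u(0) = X @ c
c = np.linalg.solve(X, u0)

def u(t):
    """Pure exponential solution u(t) = sum_i c_i * exp(lam_i * t) * x_i."""
    return X @ (c * np.exp(lam * t))

print(lam)        # eigenvalues of A, here 2 and -1 (order may vary)
print(u(0.0))     # reproduces the initial vector [8., 5.]
print(u(1.0))     # the state at t = 1
```

Each eigenvalue contributes one exponential mode: for this A the e^{2t} mode grows while the e^{-t} mode decays.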

CHAPTER 2
2. EIGENVALUES AND EIGENVECTORS:

Let A be a square matrix of order n. The values of λ for which the equation Ax = λx has a
non-trivial solution are called the eigenvalues of A. If λ is an eigenvalue, then the non-zero
vectors x for which the equation Ax = λx holds are called the eigenvectors of A.

Eigenvalues are also called characteristic values or proper values. Eigenvectors are also called
characteristic vectors or proper vectors.

Let A be a square matrix of order n. The polynomial det(A − λI), regarded as a polynomial in λ,
is called the characteristic polynomial of the matrix A. The algebraic equation det(A − λI) = 0 is
called the characteristic equation of the matrix A.

Example:
Determine the eigenvalues and the corresponding eigenspaces for the following matrix.
$$A = \begin{pmatrix} 3 & 1 \\ 6 & 2 \end{pmatrix}$$
Solution-
The characteristic equation of the matrix 𝐴 is

$$\det(A - \lambda I) = 0$$

$$\Rightarrow \det\left\{\begin{pmatrix} 3 & 1 \\ 6 & 2 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right\} = 0$$

$$\Rightarrow \det\begin{pmatrix} 3 - \lambda & 1 \\ 6 & 2 - \lambda \end{pmatrix} = 0$$

$$\Rightarrow (3 - \lambda)(2 - \lambda) - 6 = 0$$

$$\Rightarrow 6 - 5\lambda + \lambda^2 - 6 = 0$$

$$\Rightarrow \lambda^2 - 5\lambda = 0$$

$$\Rightarrow \lambda(\lambda - 5) = 0$$

$$\Rightarrow \lambda = 0 \text{ or } \lambda = 5$$

So, the eigenvalues of the matrix 𝐴 are 𝜆1 = 0, 𝜆2 = 5

For 𝜆1 = 0

$$Ax = \lambda_1 x$$

$$\Rightarrow \begin{pmatrix} 3 & 1 \\ 6 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

$$\Rightarrow \begin{pmatrix} 3x_1 + x_2 \\ 6x_1 + 2x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

$$\Rightarrow 3x_1 + x_2 = 0, \quad 6x_1 + 2x_2 = 0$$

$$\Rightarrow x_2 = -3x_1$$

The eigenvectors corresponding to the eigenvalue 0 are of the form (x₁, −3x₁) with x₁ ≠ 0. The
eigenspace corresponding to this eigenvalue is

$$\epsilon(0) = [(1, -3)] \setminus \{\vec{0}\}$$

For 𝜆2 = 5

$$Ax = \lambda_2 x$$

$$\Rightarrow \begin{pmatrix} 3 & 1 \\ 6 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 5\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

$$\Rightarrow \begin{pmatrix} 3x_1 + x_2 \\ 6x_1 + 2x_2 \end{pmatrix} = \begin{pmatrix} 5x_1 \\ 5x_2 \end{pmatrix}$$

$$\Rightarrow 2x_1 - x_2 = 0, \quad 6x_1 - 3x_2 = 0$$

$$\Rightarrow x_2 = 2x_1$$

The eigenvectors corresponding to the eigenvalue 5 are of the form (x₁, 2x₁) with x₁ ≠ 0. The
eigenspace corresponding to this eigenvalue is

$$\epsilon(5) = [(1, 2)] \setminus \{\vec{0}\}$$
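As a quick numerical cross-check of this example (my own addition, not part of the original working), NumPy's eigenvalue routine returns the same eigenvalues 0 and 5, with eigenvectors proportional to (1, −3) and (1, 2):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [6.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                 # 0 and 5 (order may vary)

# Each column of `eigenvectors` is a unit eigenvector; rescale so the first
# component is 1 to compare with (1, -3) and (1, 2) from the worked example.
for i in range(2):
    v = eigenvectors[:, i]
    print(eigenvalues[i], v / v[0])
```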

2.1. SYMMETRIC MATRIX:

▪ Recall that a matrix A ∈ R^{n×n} is symmetric if A^T = A.
▪ All eigenvalues of a real symmetric matrix are real.
▪ Eigenvectors corresponding to distinct eigenvalues are orthogonal.
Eigenvalues of a symmetric matrix are real:

Let λ ∈ C be an eigenvalue of a symmetric matrix A ∈ R^{n×n} and let u ∈ C^n be a
corresponding eigenvector. Now

$$Au = \lambda u \qquad\qquad (1)$$

Taking complex conjugates of both sides of (1), and noting that A has real entries, we obtain

$$A^* u^* = \lambda^* u^*, \quad \text{i.e.} \quad Au^* = \lambda^* u^* \qquad\qquad (2)$$

Now we pre-multiply (1) with (u^*)^T to obtain

$$\begin{aligned}
\lambda (u^*)^T u &= (u^*)^T (Au) \\
&= ((u^*)^T A)u \\
&= (A^T u^*)^T u && \text{since } (\beta v)^T = v^T \beta^T \\
&= (A u^*)^T u && \text{since } A^T = A \\
&= (\lambda^* u^*)^T u && \text{using (2)} \\
&= \lambda^* (u^*)^T u
\end{aligned}$$

Thus (λ − λ^*)(u^*)^T u = 0.

But u, being an eigenvector, is non-zero, and so

$$(u^*)^T u = \sum_{i=1}^{n} u_i^* u_i > 0,$$

since at least one of the components of u is non-zero and for any complex number z = a + ib we
have z^* z = a² + b² ≥ 0, with equality only when z = 0.

Hence λ^* = λ, i.e. λ is real.


Eigenvectors of distinct eigenvalues of a symmetric matrix are orthogonal:

Let A be a real symmetric matrix. Let Au₁ = λ₁u₁ and Au₂ = λ₂u₂ with u₁ and u₂ non-zero
vectors in R^n and λ₁, λ₂ ∈ R. Pre-multiplying both sides of the first equation with u₂^T, we get

$$\begin{aligned}
\lambda_1 u_2^T u_1 &= u_2^T (Au_1) \\
&= (u_2^T A)u_1 \\
&= (A^T u_2)^T u_1 \\
&= (Au_2)^T u_1 && \text{since } A^T = A \\
&= \lambda_2 u_2^T u_1
\end{aligned}$$

Thus (λ₁ − λ₂)u₂^T u₁ = 0. Therefore λ₁ ≠ λ₂ ⟹ u₂^T u₁ = 0, as required.

If an eigenvalue λ has multiplicity m (say), then we can always find a set of m orthogonal
eigenvectors for λ. We conclude that by normalizing the eigenvectors of A, we get an
orthonormal set of vectors u₁, u₂, …, u_n.

Example:
To find the eigenvalues of the matrix $A = \begin{pmatrix} 2 & 3/2 \\ 2 & 0 \end{pmatrix}$
Solution-
The characteristic equation of the given matrix is

$$\det(A - \lambda I) = 0$$

$$\Rightarrow \det\left\{\begin{pmatrix} 2 & 3/2 \\ 2 & 0 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right\} = 0$$

$$\Rightarrow \det\begin{pmatrix} 2 - \lambda & 3/2 \\ 2 & -\lambda \end{pmatrix} = 0$$

$$\Rightarrow -\lambda(2 - \lambda) - 3 = 0$$

$$\Rightarrow \lambda^2 - 2\lambda - 3 = 0$$

$$\Rightarrow \lambda^2 - 3\lambda + \lambda - 3 = 0$$

$$\Rightarrow \lambda(\lambda - 3) + 1(\lambda - 3) = 0$$

$$\Rightarrow (\lambda - 3)(\lambda + 1) = 0$$

$$\Rightarrow \lambda = 3 \text{ or } -1$$

Therefore, the eigenvalues are 3 and −1, which are real.

For 𝜆1 = 3,

$$Ax = \lambda_1 x$$

$$\Rightarrow \begin{pmatrix} 2 & 3/2 \\ 2 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 3\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

$$\Rightarrow \begin{pmatrix} 2x_1 + \tfrac{3}{2}x_2 \\ 2x_1 + 0x_2 \end{pmatrix} = \begin{pmatrix} 3x_1 \\ 3x_2 \end{pmatrix}$$

$$\Rightarrow 2x_1 + \tfrac{3}{2}x_2 = 3x_1, \quad 2x_1 = 3x_2$$

$$\Rightarrow x_1 - \tfrac{3}{2}x_2 = 0, \quad 2x_1 - 3x_2 = 0$$

$$\Rightarrow x_1 = \tfrac{3}{2}x_2$$

The eigenvectors corresponding to the eigenvalue 3 are of the form (3x₂/2, x₂) with x₂ ≠ 0. The
eigenspace corresponding to this eigenvalue is

$$\epsilon(3) = [(3/2, 1)] \setminus \{\vec{0}\}$$

For λ₂ = −1,

$$Ax = \lambda_2 x$$

$$\Rightarrow \begin{pmatrix} 2 & 3/2 \\ 2 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = -1\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

$$\Rightarrow \begin{pmatrix} 2x_1 + \tfrac{3}{2}x_2 \\ 2x_1 + 0x_2 \end{pmatrix} = \begin{pmatrix} -x_1 \\ -x_2 \end{pmatrix}$$

$$\Rightarrow 2x_1 + \tfrac{3}{2}x_2 = -x_1, \quad 2x_1 = -x_2$$

$$\Rightarrow 3x_1 + \tfrac{3}{2}x_2 = 0, \quad 2x_1 + x_2 = 0$$

$$\Rightarrow x_1 + \tfrac{1}{2}x_2 = 0$$

$$\Rightarrow x_2 = -2x_1$$

The eigenvectors corresponding to the eigenvalue −1 are of the form (x₁, −2x₁) with x₁ ≠ 0. The
eigenspace corresponding to this eigenvalue is

$$\epsilon(-1) = [(1, -2)] \setminus \{\vec{0}\}$$
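The two properties proved above, real eigenvalues and orthogonal eigenvectors for distinct eigenvalues, are also easy to check numerically. The following is a minimal sketch, assuming NumPy and an arbitrary symmetric matrix S chosen here only for illustration (it does not appear in the report):

```python
import numpy as np

# An arbitrary real symmetric matrix (illustrative choice; S.T equals S)
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is NumPy's routine for symmetric/Hermitian matrices; it returns
# real eigenvalues and orthonormal eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eigh(S)

print(eigenvalues)                    # [1., 3.] -- all real
print(eigenvectors.T @ eigenvectors)  # identity matrix: the columns are orthonormal
```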

2.2. IDEMPOTENT MATRIX:

A square matrix A is idempotent if A² = AA = A.


Theorem:

If a matrix 𝐴 is idempotent, then its eigenvalues are either 0 or 1.

Proof:

Let A be idempotent, so that A = AA = A². Let λ be an eigenvalue of A and let v ≠ 0 be a
corresponding eigenvector, so that Av = λv. Then

$$\lambda v = Av = AAv = A(\lambda v) = \lambda (Av) = \lambda^2 v$$

Since v ≠ 0, we find

$$\lambda - \lambda^2 = 0$$

$$\Rightarrow \lambda(1 - \lambda) = 0$$

$$\Rightarrow \lambda = 0 \text{ or } \lambda = 1$$
Example:

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad A^2 = AA = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = A$$

⟹ A is an idempotent matrix; its only eigenvalue is 1, consistent with the theorem.
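Beyond the identity matrix, any projection matrix is idempotent. The short check below is my own illustrative addition (the projection onto the line y = x is not an example from the report); its eigenvalues come out as 0 and 1, as the theorem predicts:

```python
import numpy as np

# Projection onto the line y = x: an idempotent matrix (P @ P equals P)
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

print(np.allclose(P @ P, P))    # True: P is idempotent
print(np.linalg.eigvals(P))     # the eigenvalues are 0 and 1
```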

2.3. SKEW SYMMETRIC MATRIX:

A skew symmetric matrix B has the property B^T = −B.

Eigenvalues of a skew symmetric matrix B with real entries are purely imaginary:

$$Bx = \lambda x$$

Taking complex conjugates of both sides (B has real entries),

$$\Rightarrow (Bx)^* = (\lambda x)^* \;\Rightarrow\; B^* x^* = \lambda^* x^* \;\Rightarrow\; Bx^* = \lambda^* x^*$$

Transposing both sides, we get

$$(Bx^*)^T = \lambda^* (x^*)^T$$

Now we multiply both sides from the right with x:

$$\begin{aligned}
(Bx^*)^T x &= \lambda^* (x^*)^T x \\
\Rightarrow (x^*)^T B^T x &= \lambda^* (x^*)^T x \\
\Rightarrow -(x^*)^T Bx &= \lambda^* (x^*)^T x && \text{since } B^T = -B \\
\Rightarrow -(x^*)^T \lambda x &= \lambda^* (x^*)^T x \\
\Rightarrow -\lambda (x^*)^T x &= \lambda^* (x^*)^T x \\
\Rightarrow -\lambda &= \lambda^* && \text{since } (x^*)^T x > 0
\end{aligned}$$

Therefore λ is purely imaginary. The only real eigenvalue that a skew symmetric matrix might
have is the zero eigenvalue.

Example:
Find the eigenvalues of the skew symmetric matrix $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$

Solution-

The characteristic equation of the given matrix is

$$\det(A - \lambda I) = 0$$

$$\Rightarrow \det\left\{\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right\} = 0$$

$$\Rightarrow \det\begin{pmatrix} -\lambda & 1 \\ -1 & -\lambda \end{pmatrix} = 0$$

$$\Rightarrow \lambda^2 + 1 = 0$$

$$\Rightarrow \lambda^2 = -1$$

$$\Rightarrow \lambda = \pm\sqrt{-1} = \pm i$$

Therefore, the eigenvalues are λ₁ = i, λ₂ = −i, which are purely imaginary.
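Numerically (a small check of my own, not part of the report's working), NumPy returns the same purely imaginary pair ±i for this matrix:

```python
import numpy as np

B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

print(np.allclose(B.T, -B))    # True: B is skew symmetric
print(np.linalg.eigvals(B))    # [0.+1.j, 0.-1.j]: the eigenvalues are +i and -i
```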

2.4. CAYLEY-HAMILTON THEOREM:

Let 𝑀(𝑛, 𝑛) be the set of all 𝑛 × 𝑛 matrices over a commutative ring with identity. Then the
Cayley-Hamilton theorem states,

Theorem 2.4.1:
Let A ∈ M(n, n) with

$$\det(I - tA) = c_0 + c_1 t + c_2 t^2 + \dots + c_n t^n$$

Then $c_0 A^n + c_1 A^{n-1} + c_2 A^{n-2} + \dots + c_n I = 0$.

Proof:
First observe that

$$\det(I - tA) = t^n \det\left(\frac{1}{t}I - A\right) = c_0 + c_1 t + c_2 t^2 + \dots + c_n t^n$$

Now the Laplace (adjugate) formula for the determinant gives the standard equation

$$\det(I - tA)\, I = (I - tA)\, \operatorname{adj}(I - tA)$$

where adj(A) denotes the adjugate of A.

If we consider formal power series in t with coefficients in M(n, n), then I − tA is invertible
with $\sum_{i=0}^{\infty} A^i t^i$ as inverse. So

$$\left(\sum_{i=0}^{\infty} A^i t^i\right)(c_0 + c_1 t + c_2 t^2 + \dots + c_n t^n)\, I = \operatorname{adj}(I - tA)$$

Writing adj(I − tA) as a formal power series in t with coefficients in M(n, n) gives

$$(c_0 + c_1 t + c_2 t^2 + \dots + c_n t^n)\left(\sum_{i=0}^{\infty} A^i t^i\right) = \sum_{i=0}^{\infty} B_i t^i$$

Observe that the entries of adj(I − tA) are polynomials in t of degree at most n − 1, so B_i is
the zero matrix for i ≥ n. Equating the coefficients of t^n on both sides gives

$$c_0 A^n + c_1 A^{n-1} + c_2 A^{n-2} + \dots + c_n I = 0$$
Example-1:
Verify the Cayley-Hamilton theorem for $A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}$. Also find A⁻¹.
Solution-
We know that

$$\det(A - \lambda I) = 0$$

$$\Rightarrow \det\left\{\begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right\} = 0$$

$$\Rightarrow \begin{vmatrix} 1 - \lambda & 4 \\ 2 & 3 - \lambda \end{vmatrix} = 0$$

$$\Rightarrow (1 - \lambda)(3 - \lambda) - 8 = 0$$

$$\Rightarrow 3 - \lambda - 3\lambda + \lambda^2 - 8 = 0$$

$$\Rightarrow \lambda^2 - 4\lambda - 5 = 0$$

The characteristic equation of A is p(λ) = λ² − 4λ − 5 = 0.

Replacing λ with A, we get p(A) = A² − 4A − 5I = 0 ________ (1)


$$A^2 = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}\begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix} = \begin{pmatrix} 1+8 & 4+12 \\ 2+6 & 8+9 \end{pmatrix} = \begin{pmatrix} 9 & 16 \\ 8 & 17 \end{pmatrix}$$

Putting all the required values in equation (1),

$$A^2 - 4A - 5I = \begin{pmatrix} 9 & 16 \\ 8 & 17 \end{pmatrix} - 4\begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix} - 5\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

$$= \begin{pmatrix} 9 & 16 \\ 8 & 17 \end{pmatrix} - \begin{pmatrix} 4 & 16 \\ 8 & 12 \end{pmatrix} - \begin{pmatrix} 5 & 0 \\ 0 & 5 \end{pmatrix}$$

$$= \begin{pmatrix} 9-4-5 & 16-16-0 \\ 8-8-0 & 17-12-5 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$

Hence the equation is satisfied.

Now we find A⁻¹. Multiplying equation (1) by A⁻¹,

$$A^{-1}A^2 - 4A^{-1}A - 5A^{-1} = 0$$

$$\Rightarrow A - 4I - 5A^{-1} = 0$$

$$\Rightarrow 5A^{-1} = A - 4I$$

$$\Rightarrow A^{-1} = \tfrac{1}{5}(A - 4I) \qquad\qquad (2)$$

Now

$$A^{-1} = \tfrac{1}{5}\left\{\begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix} - 4\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right\} = \tfrac{1}{5}\begin{pmatrix} 1-4 & 4-0 \\ 2-0 & 3-4 \end{pmatrix} = \tfrac{1}{5}\begin{pmatrix} -3 & 4 \\ 2 & -1 \end{pmatrix}$$
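The same verification can be done in a few lines of NumPy (my own check, using the matrices of Example-1): p(A) = A² − 4A − 5I should be the zero matrix, and (1/5)(A − 4I) should agree with the inverse.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
I = np.eye(2)

# Cayley-Hamilton: A satisfies its own characteristic equation lambda^2 - 4*lambda - 5 = 0
print(A @ A - 4 * A - 5 * I)                  # zero matrix

# Inverse from equation (2): A^{-1} = (A - 4I)/5
A_inv = (A - 4 * I) / 5
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```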
Example-2:
Find A⁻¹ and adj A if $A = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 1 \\ 3 & -1 & 1 \end{pmatrix}$ by using the Cayley-Hamilton theorem.

Solution-

The characteristic equation of A is

$$p(\lambda) = \lambda^3 - 3\lambda^2 + 7\lambda - 11 = 0$$

By the Cayley-Hamilton theorem, A³ − 3A² + 7A − 11I = 0. Multiplying this equation by A⁻¹
gives A² − 3A + 7I − 11A⁻¹ = 0, so

$$A^{-1} = \tfrac{1}{11}(A^2 - 3A + 7I) = \frac{1}{11}\begin{pmatrix} 2 & -1 & 3 \\ 3 & 4 & -1 \\ -3 & 7 & 1 \end{pmatrix}$$

Since $A^{-1} = \tfrac{1}{|A|}\,\operatorname{adj} A$ and |A| = 11,

$$\operatorname{adj} A = |A|\, A^{-1} = \begin{pmatrix} 2 & -1 & 3 \\ 3 & 4 & -1 \\ -3 & 7 & 1 \end{pmatrix}$$
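These values can be cross-checked numerically; the sketch below is my own addition (assuming NumPy) and confirms the characteristic polynomial coefficients, the determinant, the inverse, and the adjugate |A|·A⁻¹.

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  1.0],
              [3.0, -1.0, 1.0]])
I = np.eye(3)

print(np.poly(A))          # [1, -3, 7, -11]: lambda^3 - 3 lambda^2 + 7 lambda - 11
print(np.linalg.det(A))    # 11.0 (up to round-off)

A_inv = (A @ A - 3 * A + 7 * I) / 11          # from the Cayley-Hamilton theorem
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.round(11 * A_inv))                   # adj A = [[2,-1,3],[3,4,-1],[-3,7,1]]
```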
Example-3:
If $A = \begin{pmatrix} 6 & -2 \\ 6 & -1 \end{pmatrix}$, then find A⁶.
Solution-

$$A = \begin{pmatrix} 6 & -2 \\ 6 & -1 \end{pmatrix}$$

We know that |A − λI| = 0

$$\Rightarrow \left|\begin{pmatrix} 6 & -2 \\ 6 & -1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right| = 0$$

$$\Rightarrow \begin{vmatrix} 6 - \lambda & -2 \\ 6 & -1 - \lambda \end{vmatrix} = 0$$

$$\Rightarrow (6 - \lambda)(-1 - \lambda) + 12 = 0$$

$$\Rightarrow -6 - 6\lambda + \lambda + \lambda^2 + 12 = 0$$

$$\Rightarrow \lambda^2 - 5\lambda + 6 = 0$$

The characteristic equation of A is

$$p(\lambda) = \lambda^2 - 5\lambda + 6 = 0$$

Replacing λ with A,

$$p(A) = A^2 - 5A + 6I = 0$$

Solving for A²,

$$A^2 = 5A - 6I$$

Multiplying by A on both sides,

$$A^3 = 5A^2 - 6A = 5(5A - 6I) - 6A \;\Rightarrow\; A^3 = 19A - 30I$$

Multiplying by A again on both sides,

$$A^4 = 19A^2 - 30A = 19(5A - 6I) - 30A \;\Rightarrow\; A^4 = 65A - 114I$$

Multiplying by A on both sides once again,

$$A^5 = 65A^2 - 114A = 65(5A - 6I) - 114A \;\Rightarrow\; A^5 = 211A - 390I$$

Now again multiplying by A on both sides,

$$A^6 = 211A^2 - 390A = 211(5A - 6I) - 390A = 665A - 1266I$$

$$= 665\begin{pmatrix} 6 & -2 \\ 6 & -1 \end{pmatrix} - 1266\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

$$\Rightarrow A^6 = \begin{pmatrix} 2724 & -1330 \\ 3990 & -1931 \end{pmatrix}$$
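The result can be cross-checked with NumPy's matrix power routine (my own check, not part of the report's working):

```python
import numpy as np

A = np.array([[6.0, -2.0],
              [6.0, -1.0]])
I = np.eye(2)

A6 = np.linalg.matrix_power(A, 6)
print(A6)                                  # [[2724, -1330], [3990, -1931]]
print(np.allclose(A6, 665 * A - 1266 * I)) # True: A^6 = 665A - 1266I
```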

6. APPLICATIONS OF EIGENVALUES AND EIGENVECTORS:

• Machine learning (dimensionality reduction / PCA, facial recognition)


• Designing communication systems
• Designing bridges (vibration analysis, stability analysis)
• Quantum computing
• Electrical & mechanical engineering
• Determining oil reserves by oil companies
• Construction design
• Stability of the system

7. CONCLUSION:

Here are some learnings from this project:

• An eigenvector is a vector which, when multiplied by a transformation matrix, results in a
scalar multiple of itself, i.e. another vector in the same direction as the eigenvector. This scalar
multiple is known as the eigenvalue.
• Eigenvectors and eigenvalues are key concepts used in feature extraction techniques such as
Principal Component Analysis, an algorithm used to reduce dimensionality while training a
machine learning model.
• Eigenvalue and eigenvector concepts are used in several fields including machine learning,
quantum computing, communication system design, construction design, and electrical and
mechanical engineering.
