Eigenvalues and Eigenvectors (2 Lectures)

The document discusses eigenvalues and eigenvectors. It defines eigenvalues and eigenvectors for both linear operators and matrices. It provides examples of eigenvalues and eigenvectors for transformations including stretching, reflection, projection, and rotation. It discusses that a matrix may not have eigenvalues/eigenvectors over a particular field, like the real numbers, but may over the complex numbers. It also defines the characteristic polynomial of a matrix and states its relationship to the eigenvalues.


Eigenvalues and eigenvectors

Dr. Dipankar Ghosh

Department of Mathematics
Indian Institute of Technology Kharagpur

Dr. Dipankar Ghosh (IIT Kharagpur) Eigenvalues and eigenvectors


Eigenvalues and eigenvectors (of linear operators)
Let T : V → V be a linear transformation; since the domain and
codomain coincide, such a map is called a linear operator.

Definition
1 A nonzero vector v ∈ V is called an eigenvector or
characteristic vector of T if T(v) = λv for some scalar λ.
2 A scalar λ is called an eigenvalue or characteristic value of T if
there exists a non-zero vector v ∈ V such that T(v) = λv.
In this case, v is called an eigenvector of T corresponding to the
eigenvalue λ.
3 The set of all the eigenvalues of T is called the spectrum of T.

Geometrically, an eigenvector corresponding to an eigenvalue points in
a direction that is stretched by the transformation, and the eigenvalue
is the factor by which it is stretched. If the eigenvalue is negative,
the direction is reversed.
An eigenvalue can be positive, negative or zero.



Eigenvalues and eigenvectors (of square matrices)
Similarly, one defines eigenvalues and eigenvectors of n × n matrices.

Definition
Let A be an n × n matrix over a field F (e.g., F = R or C).
1 A nonzero column vector v ∈ Fⁿ is called an eigenvector or
characteristic vector of A if Av = λv for some λ ∈ F.
2 A scalar λ is called an eigenvalue or characteristic value of A if
there exists a nonzero column vector v ∈ Fⁿ such that Av = λv.
When F = R, there is a geometric interpretation of this notion.

[Figure: a vector v in R² together with λv for λ > 0 (same direction)
and λ < 0 (reversed direction).]
Example 1: eigenvalues and eigenvectors of stretching

Let c ∈ R. Define T : R² → R² by

    T(x, y) = c·(x, y) = (cx, cy)   for every (x, y) ∈ R².

The matrix representation: [T] = [ c 0 ; 0 c ].

[Figure: T stretches the point (x, y) to (cx, cy).]

Every v ≠ 0 in R² is an eigenvector of T with the eigenvalue c.



Example 2: eigenvalues and eigenvectors of reflection
   
T : R² → R² is defined by T(x, y) = (x, −y).

Matrix representation: [ 1 0 ; 0 −1 ] (x, y)ᵀ = (x, −y)ᵀ.

[Figure: reflection across the x-axis (the mirror), sending (x, y)
to (x, −y).]

For x ≠ 0, (x, 0)ᵀ is an eigenvector of T with eigenvalue 1.
For y ≠ 0, (0, y)ᵀ is an eigenvector of T with eigenvalue −1.
These are ALL the eigenvectors of T. (Verify it!)



Example 3: eigenvalues and eigenvectors of projection
   
Define T : R² → R² by T(x, y) = (x, 0).

Matrix representation: [ 1 0 ; 0 0 ] (x, y)ᵀ = (x, 0)ᵀ.

[Figure: projection onto the x-axis, sending (x, y) to (x, 0).]

For x ≠ 0, (x, 0)ᵀ is an eigenvector of T with eigenvalue 1.
For y ≠ 0, (0, y)ᵀ is an eigenvector of T with eigenvalue 0.
These are ALL the eigenvectors of T. (Verify it!)



Example 4: A may not have eigenvalues and
eigenvectors over a particular field
 
Consider the matrix A = [ 0 1 ; −1 0 ] over R.
Does A have eigenvalues and eigenvectors over R?
If yes, then there are (x, y)ᵀ ∈ R² ∖ {(0, 0)ᵀ} and λ ∈ R such that

    [ 0 1 ; −1 0 ] (x, y)ᵀ = λ (x, y)ᵀ
    ⟹ [ 0 1 ; −1 0 ] (x, y)ᵀ = [ λ 0 ; 0 λ ] (x, y)ᵀ
    ⟹ [ −λ 1 ; −1 −λ ] (x, y)ᵀ = (0, 0)ᵀ.

Since (x, y)ᵀ ≠ (0, 0)ᵀ, we must have det [ −λ 1 ; −1 −λ ] = 0.
Hence λ² + 1 = 0. But no such λ exists in R.
So A does not have eigenvalues and eigenvectors over R.



The existence of eigenvalues and eigenvectors

 
Consider the matrix A = [ 0 1 ; −1 0 ] over C, the set of complex
numbers.
Does A have eigenvalues and eigenvectors over C? Ans. Yes.
Note that λ² + 1 = 0 has solutions ±i ∈ C.
Then, for each λ = ±i, in view of the previous slide, one solves the
system

    [ −λ 1 ; −1 −λ ] (x, y)ᵀ = (0, 0)ᵀ

to get

    [ 0 1 ; −1 0 ] (i, −1)ᵀ = i (i, −1)ᵀ   and
    [ 0 1 ; −1 0 ] (i, 1)ᵀ = −i (i, 1)ᵀ.

Conclusion: The matrix A has eigenvalues and eigenvectors over C,
but not over R.
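As a quick numerical check (a sketch assuming NumPy; A is the matrix of Example 4, viewed over C), the eigenvalues come out as ±i, each computed eigenvector v satisfies Av = λv, and no eigenvalue is real:

```python
import numpy as np

# The matrix of Example 4, now viewed over C.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# np.linalg.eig works over the complex numbers, so it finds the
# eigenvalues +/- i even though A has real entries.
eigvals, eigvecs = np.linalg.eig(A)
assert set(np.round(eigvals, 8)) == {1j, -1j}

# Each column of eigvecs is an eigenvector: check A v = lambda v.
for k in range(2):
    assert np.allclose(A @ eigvecs[:, k], eigvals[k] * eigvecs[:, k])

# No eigenvalue is real, matching the conclusion over R.
assert all(abs(lam.imag) > 0.5 for lam in eigvals)
```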



Example 5: Rotation in Euclidean plane by an angle θ
The map T : R² → R² (given by a counterclockwise rotation by an
angle θ) can be represented by A = [ cos θ −sin θ ; sin θ cos θ ].
Let 0 < θ < π.
The case θ = −π/2 is considered in Example 4.

[Figure: the point (x, y) is rotated by the angle θ to T(x, y).]

1 A does NOT have eigenvectors in R².
2 But A has eigenvalues in C, and eigenvectors in C².
Characteristic polynomial of a (square) matrix
Let A be an n × n matrix over C. Denote the identity matrix by Iₙ.

Lemma
The following statements are equivalent:
1 λ ∈ C is an eigenvalue of A.
2 det(λIₙ − A) = 0.

Proof. Note that λ is an eigenvalue of A ⇔ there is a vector v ≠ 0 in
Cⁿ such that Av = λv, i.e., (A − λIₙ)v = 0 ⇔ the homogeneous system
(A − λIₙ)X = 0 has a non-trivial solution ⇔ det(A − λIₙ) = 0 ⇔
det(λIₙ − A) = 0.

Definition
The characteristic polynomial of A, denoted by pA(x), is the
polynomial defined by pA(x) := det(xIₙ − A).

Thus the eigenvalues of A are nothing but the roots of pA(x).
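This relationship can be checked numerically. The sketch below (assuming NumPy; the matrix is the 3 × 3 example from a later slide) uses `np.poly`, which returns the coefficients of det(xIₙ − A), and compares its roots with the eigenvalues:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

# np.poly(A) gives the coefficients of p_A(x) = det(xI - A),
# highest degree first: x^3 - 5x^2 + 7x - 3 = (x - 1)^2 (x - 3).
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -5.0, 7.0, -3.0])

# The roots of p_A are exactly the eigenvalues of A.
roots = np.roots(coeffs)
assert sorted(np.round(roots.real, 6)) == [1.0, 1.0, 3.0]
assert np.allclose(roots.imag, 0.0, atol=1e-6)
```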



Algebraic and geometric multiplicities, and Eigenspace
Definition (Algebraic multiplicity)
Let λ be an eigenvalue of A. The algebraic multiplicity AMA(λ) of the
eigenvalue λ is its multiplicity as a root of the characteristic
polynomial pA(x) = det(xIₙ − A), i.e., the largest integer k such that
(x − λ)ᵏ is a factor of pA(x).

Definition (Eigenspace associated to an eigenvalue)
Let λ be an eigenvalue of A. The set of all eigenvectors of A
corresponding to λ, together with the zero vector, is called the
eigenspace of A associated with λ.

It is denoted by Eλ. Note that Eλ = Null(A − λIₙ), since
Av = λv if and only if (A − λIₙ)v = 0.

Definition (Geometric multiplicity)
The dimension of Eλ = Null(A − λIₙ) is referred to as the geometric
multiplicity of λ, denoted by GMA(λ).
Some inequalities on algebraic/geometric multiplicities

Theorem
Let A be an n × n matrix over C. For every eigenvalue λ of A, we have
1 1 ≤ AMA(λ) ≤ n and 1 ≤ GMA(λ) ≤ n.
2 Σᵢ AMA(λᵢ) = n, where the sum runs over all the distinct
eigenvalues λ₁, …, λᵣ of A.
3 GMA(λ) ≤ AMA(λ).

Proof.
From the facts deg(pA(x)) = n and pA(x) = (x − λ)^AMA(λ) · f(x) for
some polynomial f, one concludes that 1 ≤ AMA(λ) ≤ n.
Since GMA(λ) = dim(Null(A − λIₙ)) and this null space is a nonzero
subspace of Cⁿ, we have 1 ≤ GMA(λ) ≤ n.
The statement (2) follows from pA(x) = Πᵢ₌₁ʳ (x − λᵢ)^AMA(λᵢ) and
deg(pA(x)) = n.
We will skip the proof of (3).



Example: Computing eigenvalues and eigenvectors
 
1 Let A = [ 1 2 0 ; 0 1 0 ; 0 0 3 ].
2 The characteristic polynomial pA(x) is given by

    det(xI₃ − A) = | x−1 −2 0 ; 0 x−1 0 ; 0 0 x−3 | = (x − 1)²(x − 3).

3 So the eigenvalues of A are λ₁ = 1 and λ₂ = 3 with algebraic
multiplicities AMA(λ₁) = 2 and AMA(λ₂) = 1.
4 The eigenspaces corresponding to λ₁ = 1 and λ₂ = 3 can be
obtained by solving the homogeneous systems (A − I₃)X = 0 and
(A − 3I₃)X = 0.
5 Eigenspace Eλ₁ = { (x₁, 0, 0)ᵀ : x₁ ∈ R }, hence GMA(λ₁) = 1.
6 Eigenspace Eλ₂ = { (0, 0, x₃)ᵀ : x₃ ∈ R }, hence GMA(λ₂) = 1.
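The geometric multiplicities above can be recovered numerically from the rank-nullity theorem, GM(λ) = n − rank(A − λIₙ); a minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
n = A.shape[0]

# GM(lambda) = dim Null(A - lambda*I) = n - rank(A - lambda*I).
gm = {lam: int(n - np.linalg.matrix_rank(A - lam * np.eye(n)))
      for lam in (1.0, 3.0)}

assert gm == {1.0: 1, 3.0: 1}   # GM(1) = 1 even though AM(1) = 2
```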


How to compute eigenvalues and eigenvectors along
with their algebraic and geometric multiplicities?

General process:
First compute the characteristic polynomial of A:
pA(x) = det(xIₙ − A).
Next compute the roots of pA(x) by factorizing it into linear factors.
This gives the eigenvalues (and their algebraic multiplicities).
Then, for each eigenvalue λ, solve the homogeneous system
(A − λIₙ)X = 0
to get the eigenspace of A associated to λ. (The dimension of
Null(A − λIₙ) gives the geometric multiplicity of λ.)
Recall that in order to solve a homogeneous linear system, you may
apply elementary row operations to bring the coefficient matrix into
row echelon form or reduced row echelon form, and then obtain the
set of solutions by back substitution.
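The steps above can be sketched as a small routine (an illustration assuming NumPy; the helper name `multiplicities` and its rounding-based clustering of eigenvalues are ad hoc choices, not a robust numerical method):

```python
import numpy as np

def multiplicities(A, decimals=6):
    """Estimate (AM, GM) for each eigenvalue of A by rounding the
    numerically computed eigenvalues (an ad-hoc clustering)."""
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A).round(decimals)
    out = {}
    for lam in np.unique(eigvals):
        am = int(np.sum(eigvals == lam))                        # algebraic
        gm = int(n - np.linalg.matrix_rank(A - lam * np.eye(n)))  # geometric
        out[complex(lam)] = (am, gm)
    return out

# The 3x3 example from the previous slide:
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
assert multiplicities(A) == {1.0: (2, 1), 3.0: (1, 1)}
```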



Example: How to compute eigenvalues and
eigenvectors, algebraic and geometric multiplicities?
 
Example. Verify the following for A = [ 3 10 5 ; −2 −3 −4 ; 3 5 7 ].
1 The characteristic polynomial of A is pA(x) = x³ − 7x² + 16x − 12.
2 Since pA(x) = (x − 2)²(x − 3), the eigenvalues of A are 2 and 3.
3 The algebraic multiplicities are AMA(2) = 2 and AMA(3) = 1.
4 Applying the Gaussian elimination method to the homogeneous
systems of linear equations (A − 2I₃)X = 0 and (A − 3I₃)X = 0,
find the eigenspaces E₂ and E₃ of A corresponding to the
eigenvalues 2 and 3, respectively.
5 Show that E₂ = span{ (−5, −2, 5)ᵀ } and E₃ = span{ (−1, −1, 2)ᵀ }.
6 The geometric multiplicities are GMA(2) = 1 and GMA(3) = 1.
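The claims above can be verified numerically; a sketch assuming NumPy, where `v2` and `v3` are the spanning vectors from item 5:

```python
import numpy as np

A = np.array([[3.0, 10.0, 5.0],
              [-2.0, -3.0, -4.0],
              [3.0, 5.0, 7.0]])

# 1-2: characteristic polynomial x^3 - 7x^2 + 16x - 12 = (x-2)^2 (x-3).
assert np.allclose(np.poly(A), [1.0, -7.0, 16.0, -12.0])

# 5: the claimed spanning vectors really are eigenvectors.
v2 = np.array([-5.0, -2.0, 5.0])
v3 = np.array([-1.0, -1.0, 2.0])
assert np.allclose(A @ v2, 2 * v2)
assert np.allclose(A @ v3, 3 * v3)

# 6: geometric multiplicities via rank-nullity: both equal 1.
gm2 = 3 - np.linalg.matrix_rank(A - 2 * np.eye(3))
gm3 = 3 - np.linalg.matrix_rank(A - 3 * np.eye(3))
assert (gm2, gm3) == (1, 1)
```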



Similarity of matrices

Definition
Two n × n matrices A and B are called similar if there exists an
invertible n × n matrix P such that B = P−1 AP.

Some statements (without proof) about the importance of similarity of
matrices:
Two matrices are similar if and only if they represent the same
linear operator with respect to (possibly) different bases. (???)
Two similar matrices A and B share many properties:
rank(A) = rank(B), as operators from Rⁿ to itself.
det(A) = det(B); tr(A) = tr(B) (the sum of all diagonal entries).
A and B have the same characteristic polynomial, det(xIₙ − A).
The minimal polynomials of A and B are the same. (A monic polynomial
p(X) ∈ R[X] is said to be a minimal polynomial of A if p(A) = 0, the
zero matrix, and p has the minimal possible degree.)
The Jordan canonical forms of A and B are the same. (???)
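A few of these shared invariants can be checked numerically (a sketch assuming NumPy; the particular invertible P below is an arbitrary choice for illustration):

```python
import numpy as np

A = np.array([[3.0, 10.0, 5.0],
              [-2.0, -3.0, -4.0],
              [3.0, 5.0, 7.0]])
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
assert abs(np.linalg.det(P)) > 1e-9        # P is invertible (det = 3)
B = np.linalg.inv(P) @ A @ P               # B is similar to A

# Shared invariants of similar matrices:
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.isclose(np.trace(B), np.trace(A))
assert np.allclose(np.poly(B), np.poly(A))  # same characteristic polynomial
```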



Diagonalizable matrices

Motivation: For a matrix, eigenvalues and eigenvectors can be used
to decompose the matrix, for example by diagonalizing it.

Definition
A matrix A is said to be diagonalizable if A is similar to a diagonal
matrix, i.e., if there is an invertible matrix P such that

    P⁻¹AP = diag(λ₁, λ₂, …, λₙ)   (a diagonal matrix).

The set of eigenvectors helps us to test whether a matrix is
diagonalizable or not.



Use of eigenvalues and eigenvectors in diagonalization
Theorem
Let A be an n × n matrix (over C). The following are equivalent:
1 A is diagonalizable.
2 The eigenvectors of A form a basis of Cⁿ; equivalently, A has n
linearly independent eigenvectors v₁, …, vₙ with associated
eigenvalues λ₁, …, λₙ (which need not be distinct).
3 GMA(λ) = AMA(λ) for every eigenvalue λ of A.
4 The minimal polynomial of A has distinct roots (equivalently, A
satisfies a polynomial p(x) ∈ C[x] having distinct roots).

Proof. (1) ⇒ (2): There is an n × n invertible matrix P such that

    P⁻¹AP = diag(λ₁, λ₂, …, λₙ)   for some λ₁, …, λₙ ∈ C.

Hence multiply by P from the left side.
Proof of the theorem contd...
 
Proof. (1) ⇒ (2) (contd.): Thus AP = P · diag(λ₁, λ₂, …, λₙ).
Write P = [ v₁ v₂ ⋯ vₙ ] for some v₁, …, vₙ ∈ Cⁿ.
Then AP = [ Av₁ Av₂ ⋯ Avₙ ] and

    P · diag(λ₁, λ₂, …, λₙ) = [ λ₁v₁ λ₂v₂ ⋯ λₙvₙ ].

Therefore Avᵢ = λᵢvᵢ for every 1 ≤ i ≤ n.
Note that v₁, …, vₙ are linearly independent, since P is invertible.



Proof of the theorem contd...

Proof. (2) ⇒ (1): A has n linearly independent eigenvectors
v₁, …, vₙ ∈ Cⁿ with associated eigenvalues λ₁, …, λₙ ∈ C.
Set P := [ v₁ v₂ ⋯ vₙ ]. Clearly P is an n × n matrix.
Since v₁, …, vₙ are linearly independent, P is invertible.
Moreover

    AP = [ Av₁ Av₂ ⋯ Avₙ ] = [ λ₁v₁ λ₂v₂ ⋯ λₙvₙ ] = P · diag(λ₁, λ₂, …, λₙ).

Therefore

    P⁻¹AP = diag(λ₁, λ₂, …, λₙ).

(1) ⇔ (3) ⇔ (4): We will skip it.



Example: Determine diagonalizability of matrices

 
Example 1. Show that A = [ 3 10 5 ; −2 −3 −4 ; 3 5 7 ] is not
diagonalizable.
Hint. First find the eigenvalues of A. Then find the algebraic and
geometric multiplicities of each eigenvalue of A. Observe that
GMA(λ) ≠ AMA(λ) for some eigenvalue λ of A. Using the above
theorem, conclude that A is not diagonalizable.

Example 2. Show that A = [ 2 −1 1 ; −1 2 −1 ; 1 −1 2 ] is
diagonalizable.
Hint. Same procedure as above. Verify that GMA(λ) = AMA(λ) for
each eigenvalue λ of A. Hence conclude that A is diagonalizable.
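Both hints can be carried out numerically. The sketch below (assuming NumPy; the helper `is_diagonalizable` and its tolerance handling are illustrative only, not numerically robust) applies the GM = AM criterion from the theorem:

```python
import numpy as np

def is_diagonalizable(A, tol=1e-6):
    """GM = AM test: for each eigenvalue, compare its algebraic
    multiplicity (number of eigenvalues within tol of it) with
    dim Null(A - lam*I). Illustrative, not numerically robust."""
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    for lam in eigvals:
        am = int(np.sum(np.abs(eigvals - lam) < tol))
        gm = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if gm != am:
            return False
    return True

A1 = np.array([[3.0, 10.0, 5.0], [-2.0, -3.0, -4.0], [3.0, 5.0, 7.0]])
A2 = np.array([[2.0, -1.0, 1.0], [-1.0, 2.0, -1.0], [1.0, -1.0, 2.0]])
assert not is_diagonalizable(A1)   # GM(2) = 1 < 2 = AM(2)
assert is_diagonalizable(A2)       # GM = AM for eigenvalues 1, 1, 4
```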



Cayley-Hamilton Theorem
Let A be an n × n matrix over C.
Write Aʳ for the product of r copies of A.
For c ∈ C, cA is the entrywise scalar multiple of A.
If f(x) = aᵣxʳ + ⋯ + a₂x² + a₁x + a₀ ∈ C[x], then
f(A) = aᵣAʳ + ⋯ + a₂A² + a₁A + a₀Iₙ is an n × n matrix over C.

Theorem (Cayley-Hamilton)
Consider the characteristic polynomial pA(x) := det(xIₙ − A). Then
pA(A) = 0 (the zero matrix of order n × n).

Warning: pA(A) ≠ det(A·Iₙ − A). The LHS is a matrix; the RHS is a scalar.

Example
If A = [ 1 2 ; 3 4 ], then pA(x) = x² − 5x − 2. The Cayley-Hamilton
Theorem says that A² − 5A − 2I₂ = [ 0 0 ; 0 0 ].
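A numerical check of this example (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# p_A(x) = x^2 - 5x - 2: trace(A) = 5, det(A) = -2.
assert np.allclose(np.poly(A), [1.0, -5.0, -2.0])

# Cayley-Hamilton: A^2 - 5A - 2I is the 2x2 zero matrix.
pA_of_A = A @ A - 5.0 * A - 2.0 * np.eye(2)
assert np.allclose(pA_of_A, 0.0)
```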



Some properties of eigenvalues
You may try to prove the following:
Theorem
Two similar matrices A and B = P⁻¹AP have the same characteristic
polynomial, i.e.,

    det(xIₙ − P⁻¹AP) = det(xIₙ − A),

hence they have the same set of eigenvalues.

Theorem
Let λ₁, λ₂, …, λₙ be the eigenvalues (not necessarily distinct) of an
n × n matrix A. Then
1 det(A) = λ₁λ₂ ⋯ λₙ.
2 trace(A) = λ₁ + λ₂ + ⋯ + λₙ, where trace(A) is the sum of the main
diagonal entries of A.

Hint. The characteristic polynomial can be written as
det(xIₙ − A) = Πᵢ₌₁ⁿ (x − λᵢ). Compare the constant terms and the
coefficients of xⁿ⁻¹ on both sides. See the next slide for the
detailed solutions.
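A quick numerical illustration of both statements (a sketch assuming NumPy; the matrix is the 3 × 3 example used earlier, with eigenvalues 2, 2, 3):

```python
import numpy as np

A = np.array([[3.0, 10.0, 5.0],
              [-2.0, -3.0, -4.0],
              [3.0, 5.0, 7.0]])
eigvals = np.linalg.eigvals(A)   # approximately 2, 2, 3

# det(A) equals the product of the eigenvalues ...
assert np.isclose(np.prod(eigvals).real, np.linalg.det(A))
# ... and trace(A) equals their sum.
assert np.isclose(np.sum(eigvals).real, np.trace(A))
```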
Proofs: det(A) = λ1 · · · λn and trace(A) = λ1 + · · · + λn
Consider the equality det(xIₙ − A) = Πᵢ₌₁ⁿ (x − λᵢ).

1 Substitute x = 0 on both sides to get det(−A) = Πᵢ₌₁ⁿ (−λᵢ), which
implies that (−1)ⁿ det(A) = (−1)ⁿ Πᵢ₌₁ⁿ λᵢ, and hence
det(A) = λ₁λ₂ ⋯ λₙ. For more clarity, see the solution of part (2)
below.

2 Write the equality det(xIₙ − A) = Πᵢ₌₁ⁿ (x − λᵢ) as

    det [ x−a₁₁ −a₁₂ ⋯ −a₁ₙ ; −a₂₁ x−a₂₂ ⋯ −a₂ₙ ; ⋯ ; −aₙ₁ −aₙ₂ ⋯ x−aₙₙ ]
        = (x − λ₁)(x − λ₂) ⋯ (x − λₙ)
        = xⁿ + xⁿ⁻¹(−λ₁ − λ₂ − ⋯ − λₙ) + ⋯ + (−1)ⁿ(λ₁λ₂ ⋯ λₙ).

From the determinant formula, the coefficient of xⁿ⁻¹ on the
left-hand side is −(a₁₁ + a₂₂ + ⋯ + aₙₙ), while the coefficient of
xⁿ⁻¹ on the right-hand side is −(λ₁ + λ₂ + ⋯ + λₙ). Comparing both
sides, we get a₁₁ + a₂₂ + ⋯ + aₙₙ = λ₁ + λ₂ + ⋯ + λₙ.
Thus trace(A) = λ₁ + λ₂ + ⋯ + λₙ.
Some properties of eigenvalues and eigenvectors

Let λ be an eigenvalue of an n × n matrix A with a corresponding
eigenvector v. Then the following hold true.
1 Show that λ is an eigenvalue of Aᵀ (the transpose of A).
2 Show that v is an eigenvector of B = A + cIₙ, where c is a fixed
scalar. What is the corresponding eigenvalue of B?
3 Let r be a positive integer. Show that λʳ is an eigenvalue of Aʳ
with the corresponding eigenvector v. Conclude that for every
polynomial f(x) ∈ R[x], f(λ) is an eigenvalue of f(A) with the
corresponding eigenvector v.
Hints. (1). det(λIₙ − A) = det(λIₙ − Aᵀ). For (2) and (3), use the
definitions of eigenvalues and eigenvectors.
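These three exercises can be spot-checked numerically; a sketch assuming NumPy, where A and v are the matrix and eigenvector from Example 2 above (eigenvalue 4), and c and r are arbitrary illustrative choices:

```python
import numpy as np

A = np.array([[2.0, -1.0, 1.0],
              [-1.0, 2.0, -1.0],
              [1.0, -1.0, 2.0]])
v = np.array([1.0, -1.0, 1.0])
lam = 4.0
assert np.allclose(A @ v, lam * v)   # v is an eigenvector, A v = 4 v

# (1) lambda is also an eigenvalue of A^T.
assert np.any(np.isclose(np.linalg.eigvals(A.T), lam))

# (2) v is an eigenvector of A + cI with eigenvalue lambda + c.
c = 2.5
assert np.allclose((A + c * np.eye(3)) @ v, (lam + c) * v)

# (3) v is an eigenvector of A^r with eigenvalue lambda^r.
r = 3
assert np.allclose(np.linalg.matrix_power(A, r) @ v, lam**r * v)
```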



Some properties of eigenvalues & eigenvectors contd.

Let λ be an eigenvalue of an n × n matrix A with a corresponding
eigenvector v. Then the following hold true.
1 Let P be an n × n invertible matrix. Show that λ is an eigenvalue
of P⁻¹AP with a corresponding eigenvector P⁻¹v. Conclude from
this statement that A and P⁻¹AP have the same set of
eigenvalues. Moreover, there is a one-to-one correspondence
between the eigenvectors of A and those of P⁻¹AP corresponding
to every fixed eigenvalue λ.
2 Suppose that A is invertible. Then λ ≠ 0. Show that v is also an
eigenvector of A⁻¹ with respect to the eigenvalue 1/λ.
Hints. (1). Let Eλ and E′λ be the eigenspaces of A and P⁻¹AP
corresponding to λ, respectively. Define the maps ϕ : Eλ → E′λ and
ψ : E′λ → Eλ by ϕ(v) = P⁻¹v and ψ(u) = Pu, respectively. (2). Apply
A⁻¹ to (A − λIₙ)v = 0.



Hermitian matrix
1 For a complex number z = a + ib, its conjugate is z̄ = a − ib.
2 For a complex matrix A, its conjugate is the matrix Ā whose
(i, j)th entry is āᵢⱼ, where aᵢⱼ is the (i, j)th entry of A.
3 Note that when all entries of A are real numbers, then Ā = A.
4 If A is an m × n matrix, then the transpose of A is defined to be
the n × m matrix Aᵀ whose ith column is the ith row of A for all
1 ≤ i ≤ m.
5 A complex square matrix A is called a Hermitian matrix if A is
equal to its conjugate transpose, i.e., A = (Ā)ᵀ.
6 A real Hermitian matrix A is nothing but a real symmetric matrix,
i.e., Aᵀ = A.

Theorem
All the eigenvalues of a Hermitian matrix are real. In particular, all the
eigenvalues of a real symmetric matrix are real.
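A numerical illustration of the theorem (a sketch assuming NumPy; H is a randomly generated Hermitian matrix, built as M plus its conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(1)

# H = M + conj(M)^T is Hermitian by construction.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = M + M.conj().T
assert np.allclose(H, H.conj().T)

# All eigenvalues of a Hermitian matrix are real.
eigvals = np.linalg.eigvals(H)
assert np.allclose(eigvals.imag, 0.0, atol=1e-10)
```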
Skew-Hermitian matrix
1 A complex square matrix A is called a skew-Hermitian matrix if
its conjugate transpose is equal to the negative of A, i.e.,
(Ā)ᵀ = −A.
Here −A is the matrix of the same order whose (i, j)th entry is
−aᵢⱼ, where aᵢⱼ is the (i, j)th entry of A.
2 A real skew-Hermitian matrix A is nothing but a real
skew-symmetric matrix, i.e., Aᵀ = −A.

Theorem
1 All the eigenvalues of a skew-Hermitian matrix are either purely
imaginary complex numbers or zero.
2 In particular, all the eigenvalues of a real skew-symmetric matrix are
either purely imaginary complex numbers or zero.

Recall that a complex number z = a + ib is called purely imaginary
if the real part of z is zero, i.e., a = 0.
Unitary matrix
1 A complex square matrix A is called a unitary matrix if A is
invertible and A⁻¹ is equal to the conjugate transpose of A, i.e.,
A⁻¹ = (Ā)ᵀ; equivalently, (Ā)ᵀA = A(Ā)ᵀ = Iₙ, the identity matrix.
2 A real unitary matrix A is nothing but an orthogonal matrix, i.e.,
A is a real square matrix satisfying AᵀA = AAᵀ = Iₙ.
Theorem
1 All the eigenvalues of a unitary matrix have absolute value 1.
2 In particular, all the eigenvalues of an orthogonal matrix have absolute
value 1.

Theorem
1 All Hermitian (hence real symmetric) matrices are diagonalizable.
2 All skew-Hermitian (hence real skew-symmetric) matrices are
diagonalizable.
3 All unitary (hence orthogonal) matrices are diagonalizable.
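The theorems on this and the previous slides can be illustrated numerically (a sketch assuming NumPy; Q is a rotation matrix, hence orthogonal, and S is a real skew-symmetric matrix, both arbitrary illustrative choices):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(Q.T @ Q, np.eye(2))          # Q is orthogonal

# All eigenvalues of a unitary/orthogonal matrix have absolute value 1
# (here they are e^{i*theta} and e^{-i*theta}).
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)

# A real skew-symmetric matrix has purely imaginary (or zero) eigenvalues.
S = np.array([[0.0, 2.0], [-2.0, 0.0]])
assert np.allclose(S.T, -S)
assert np.allclose(np.linalg.eigvals(S).real, 0.0)
```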
Applications of Eigenvalues and Eigenvectors

1 Some real-life applications of eigenvalues and eigenvectors in
science, engineering and computer science can be found here:
https://www.intmath.com/matrices-determinants/
8-applications-eigenvalues-eigenvectors.php
2 See Section 8.2 of the textbook written by Erwin Kreyszig for
Some Applications of Eigenvalue Problems.



Thank You!

