Introduction to Eigenvalues and Eigenvectors

Contents

1 Understanding Eigenvalues and Eigenvectors
  1.1 What are Eigenvalues and Eigenvectors?
  1.2 Properties of Eigenvalues and Eigenvectors
  1.3 Practical Applications
  1.4 Challenges and Complexities

2 Let’s dive into Prof. Gilbert Strang’s Lecture on Eigenvalues and Eigenvectors
  2.1 Example
  2.2 Example of a Rotation (Orthogonal) Matrix
  2.3 Trouble
  2.4 Conclusions

3 Quiz on Eigenvalues and Eigenvectors

LinkedIn - Dr. Lonny Thompson, Clemson University, Oct. 25, 2024.


1 Understanding Eigenvalues and Eigenvectors
• In Lecture 21 from MIT’s 18.06 Linear Algebra course,
Professor Gilbert Strang describes the meaning of eigen-
values and eigenvectors, an important concept in linear
algebra. These ideas have profound implications for ar-
eas ranging from physics to computer science.
• This lecture explores the fundamental concepts and meth-
ods for solving problems involving eigenvalues and the
challenges associated with them.



1.1 What are Eigenvalues and Eigenvectors?
• Eigenvalues and eigenvectors provide a way to under-
stand how a matrix behaves when it transforms a vector.
• An eigenvector of a matrix is a nonzero vector whose
direction is preserved (or exactly reversed) when the matrix
is applied to it; it changes only in magnitude. The factor by
which it is scaled is called the eigenvalue.
• Mathematically, this relationship is expressed as:

Ax = λx

where A is a square matrix, x is the eigenvector, and λ is
the eigenvalue.
• This can be viewed as follows: the transformation Ax
takes the input vector x and outputs a scaled vector parallel
to x.
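The relation Ax = λx is easy to check numerically. As a quick sketch (NumPy is an assumption of this example, not something the lecture uses), applied to the symmetric matrix worked out later in these notes:

```python
import numpy as np

# A symmetric 2x2 matrix (the same one used in the worked example below)
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# numpy.linalg.eig returns the eigenvalues and the eigenvectors as columns
eigenvalues, eigenvectors = np.linalg.eig(A)

# Check A x = lambda x for each eigenpair
for i in range(len(eigenvalues)):
    x = eigenvectors[:, i]
    lam = eigenvalues[i]
    assert np.allclose(A @ x, lam * x)
```

Note that `eig` normalizes the eigenvectors to unit length; any nonzero scalar multiple of each column is an equally valid eigenvector.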



Why Are Eigenvalues and Eigenvectors Important?
• Eigenvalues and eigenvectors are important because
they provide insight into the behavior of matrices, which
are used to represent systems of equations, transforma-
tions, 2nd-order tensors, and other phenomena.
• Understanding the eigenstructure of a matrix helps to:

1. Determine stability in dynamic systems.


2. Simplify computations, such as finding powers of ma-
trices and diagonalization of matrices for solving sys-
tems of ordinary differential equations.
3. Determine principal normal stresses and stretches
for stress and strain tensors in continuum mechan-
ics.
4. Determine matrix invariants under change of basis
transformation or coordinate rotation.
5. Analyze data, for example, in Principal Component
Analysis (PCA), for dimensionality reduction.



1. Matrix Transformation and Eigenvectors
When a matrix A is applied to a vector x, it usually changes
both its direction and magnitude. However, for certain
vectors, called eigenvectors, the output vector remains in
the same or exactly opposite direction as x. The scaling
factor for this change is the eigenvalue λ.
• A classic example involves the projection matrix, which
projects vectors onto a plane. In this case, the vectors
already on the plane remain unchanged, making them
eigenvectors with eigenvalue 1.



2. Characteristic Equation
To find the eigenvalues of a matrix, we rely on the char-
acteristic equation, obtained by rewriting the eigenproblem
Ax − λx = 0 as

(A − λI)x = 0

where I is the identity matrix.


• For nonzero x, the matrix (A − λI) must be singular,
and therefore the determinant must be zero.
• The goal is to solve for the λ’s by setting the determinant
of A − λI to zero, yielding a characteristic polynomial
equation in λ. The solutions to this equation are the
eigenvalues.
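As a numerical sketch of this route (NumPy assumed; note that production eigensolvers do not actually form the characteristic polynomial, since its roots are numerically sensitive), we can compute the coefficients of det(A − λI), find the roots, and compare against a direct eigensolver:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# np.poly of a square matrix returns the characteristic polynomial
# coefficients: here [1, -6, 8], i.e. lambda^2 - 6*lambda + 8
coeffs = np.poly(A)
lambdas = np.roots(coeffs)  # roots of the characteristic polynomial

# The roots agree with the eigenvalues from a direct eigensolver
assert np.allclose(sorted(lambdas), sorted(np.linalg.eigvals(A)))
```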



3. Examples of Special Matrices
Professor Strang uses several examples to illustrate the
behavior of eigenvalues and eigenvectors:
• Projection Matrix: A matrix that projects vectors onto
a subspace. Its eigenvalues are typically 1 (for vectors
in the subspace) and 0 (for vectors orthogonal to it).
• Permutation Matrix: A matrix that swaps elements. It
showcases how eigenvectors change with permutation
while maintaining certain fixed properties.



4. Complex Eigenvalues
• Not all matrices have real eigenvalues.
• For example, a rotation matrix that rotates vectors by
90 degrees has no real eigenvector that remains parallel
to the original vector. In such cases, imaginary numbers
are used to describe the eigenvalues.



1.2 Properties of Eigenvalues and Eigenvectors
1. Trace and Determinant
• The trace of a matrix (sum of diagonal elements) equals
the sum of its eigenvalues.
• The determinant is the product of the eigenvalues.

These properties provide a valuable check on calculations,
offer insight into the matrix’s behavior, and are invariant
under a change of basis.
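These two invariants are easy to verify numerically; a minimal NumPy check (NumPy assumed) on an arbitrary nonsymmetric matrix:

```python
import numpy as np

# An arbitrary (nonsymmetric) matrix
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
w = np.linalg.eigvals(A)

# Trace equals the sum of eigenvalues; determinant equals their product
assert np.isclose(w.sum(), np.trace(A))        # both equal 5
assert np.isclose(w.prod(), np.linalg.det(A))  # both equal -2
```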



2. Symmetric Matrices
Symmetric matrices have real eigenvalues, and their eigen-
vectors are orthogonal (perpendicular). This makes them
particularly well-behaved in many applications, such as
physics, mechanical vibration, and machine learning.
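A quick numerical illustration of both claims (NumPy assumed; `eigh` is NumPy’s solver specialized for symmetric/Hermitian matrices, and it guarantees real eigenvalues and orthonormal eigenvectors):

```python
import numpy as np

# A real symmetric matrix
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh returns real eigenvalues in ascending order and
# orthonormal eigenvectors as the columns of V
w, V = np.linalg.eigh(S)

assert np.all(np.isreal(w))             # real eigenvalues
assert np.allclose(V.T @ V, np.eye(3))  # orthonormal eigenvectors
```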



3. Repeated Eigenvalues and Deficiency
Matrices can have repeated eigenvalues. In some cases,
repeated eigenvalues may result in a shortage of inde-
pendent eigenvectors, making it difficult to describe the
system fully. This scenario is called deficiency and is a
complex aspect of linear algebra.



1.3 Practical Applications
Eigenvalues and eigenvectors have numerous applications:
• Mechanical Vibrations: Determine natural frequencies
and mode shapes of systems.
• Quantum Mechanics: Describe physical states and en-
ergies.
• Principal Component Analysis (PCA): Reduce dimensions
in data, identifying principal directions of variation.
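As a sketch of the PCA application (the synthetic data and NumPy usage here are illustrative assumptions, not from the lecture): the principal directions are the eigenvectors of the data’s covariance matrix, and projecting onto the top eigenvector reduces the dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples of correlated 2-D data (illustrative synthetic data)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0],
                                          [1.0, 0.5]])

# PCA: eigenvectors of the covariance matrix are the principal directions,
# eigenvalues are the variances along those directions
C = np.cov(X, rowvar=False)
w, V = np.linalg.eigh(C)         # eigenvalues in ascending order
principal_direction = V[:, -1]   # direction of largest variance

# Project onto the top principal component (dimensionality reduction 2 -> 1)
X_reduced = (X - X.mean(axis=0)) @ principal_direction
```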



1.4 Challenges and Complexities
While the theory of eigenvalues and eigenvectors can be
elegant, several complexities arise:
• Complex Numbers: Nonsymmetric real matrices can
have complex eigenvalues. Any real matrix can be decom-
posed into the sum of a symmetric and an anti-symmetric
part, and the anti-symmetric part leads to complex conju-
gate eigenvalue pairs with both real and imaginary parts.
• Deficient Systems: Some matrices might not have enough
independent eigenvectors to describe their behavior
fully.
• Sensitivity: Eigenvalues can be sensitive to slight changes
in the matrix, which might lead to challenges in practi-
cal applications.



2 Let’s dive into Prof. Gilbert Strang’s Lecture on Eigenvalues and Eigenvectors.
• Given an input value of x, the function f (x) outputs the
image of x in a codomain.
• Similarly, given an input vector x, a matrix A operates
as a linear transformation T (x) = Ax acting on x and
outputs the image of x in a codomain.

A question we can ask is:


• For an input vector x operated on by a square matrix A,
what vectors exist whose output is parallel to x, scaled
by some factor λ?
This problem can be posed as the eigenproblem:

Ax = λx (1)

Here the x’s are eigenvectors, parallel to Ax.

• How can we find the x’s and scaling factors λ’s?


• For a linear system Ax = b where vector b is given, we
know how to solve for x.
• However, for the eigenproblem in Eqn. (1), the right-
hand side b = λx is also unknown, as is the associated λ.



• How can we deal with this? Let’s rewrite the eigenprob-
lem and move the x to the same side:

Ax − λx = 0

This can be expressed with a collected x using the identity
matrix I:

(A − λI) x = 0 (2)

In this form, the matrix (A − λI) is the matrix A shifted
by λI, and can be viewed as a linear system with right-
hand-side b = 0. Here both λ and x are unknowns.
• We know from linear algebra that with b = 0, a nonzero
x in the null space requires the matrix (A − λI) to be
singular.
• We know a singular matrix must have the determinant
equal to zero:

det(A − λI) = 0 (3)

This gives a characteristic polynomial equation in λ, which
we can use to solve for the roots λ that make the matrix
singular.
• The roots of this characteristic equation are called
eigenvalues or characteristic values.
• Once the λ’s are found that make the matrix (A − λI)
singular, we can apply elimination methods to find the
eigenvectors x in the null space associated with these
eigenvalues λ’s.
• Since the x’s are in the null space, we can only solve for
them up to an arbitrary constant.
• If the square matrix is n × n, there will be n eigenvalues
(counting multiplicity) and associated eigenvectors.
• Since they go together, we can refer to the eigenpairs
(λi, xi), for i = 1, 2, . . . , n.



2.1 Example
Let’s do an example to see this solving process and gain
some insight into the eigenpairs (λi, xi):

A = [ 3  1 ]
    [ 1  3 ]

In this case, the 2×2 matrix has special properties of being
symmetric, Aᵀ = A, and the diagonals are the same.
• Let’s take the determinant for the eigenproblem of this
matrix:

det(A − λI) = | 3−λ   1  |
              |  1   3−λ |  = (3 − λ)² − 1 = λ² − 6λ + 8 = 0

• Notice that the coefficient 6 in front of λ is the trace of
the matrix, defined by the sum of the diagonals:

tr(A) = 3 + 3 = 6

• We also observe that the constant 8 is equal to the
determinant of A:

det(A) = (3)(3) − (1)(1) = 9 − 1 = 8



• Let’s factor this quadratic characteristic equation,

(λ − 2)(λ − 4) = 0

The two solutions are,

λ1 = 2, λ2 = 4

• Notice that the sum of the eigenvalues equals the trace
of A:

tr(A) = λ1 + λ2 = 2 + 4 = 6

• Also, the product of the eigenvalues equals the
determinant:

λ1λ2 = (2)(4) = 8



Now, let’s get the two eigenvectors.
For λ1 = 2:

(A − λ1 I) x1 = 0

(A − 2I) x1 = [ 1  1 ] x1 = [ 0 ]
              [ 1  1 ]      [ 0 ]

The matrix must be singular, and x1 is in the null space. A
solution is

x1 = [ −1 ]
     [  1 ]

For λ2 = 4:

(A − λ2 I) x2 = 0

(A − 4I) x2 = [ −1   1 ] x2 = [ 0 ]  →  x2 = [ 1 ]
              [  1  −1 ]      [ 0 ]          [ 1 ]
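Numerically, an eigenvector can be extracted as a null-space vector of A − λI. One common sketch (NumPy assumed) uses the SVD: when the smallest singular value is zero, the last right-singular vector spans the null space.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
lam = 2.0

# The eigenvector for lambda = 2 spans the null space of (A - 2I).
# The last right-singular vector of the SVD gives a basis for it.
_, s, Vt = np.linalg.svd(A - lam * np.eye(2))
x = Vt[-1]  # unit-length null-space direction

assert np.isclose(s[-1], 0.0)     # smallest singular value ~ 0: singular matrix
assert np.allclose(A @ x, lam * x)
# x is proportional to [-1, 1], up to an arbitrary scale factor
assert np.isclose(abs(x[0] / x[1]), 1.0)
```

This reflects the point above that eigenvectors are only determined up to an arbitrary constant: the SVD happens to return a unit-length representative.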


• What properties of the eigensolutions can we observe
here?

Since the matrix was real and symmetric, the eigenvalues
are real-valued, and the eigenvectors are perpendicular
(orthogonal) and linearly independent; each eigenvector’s
dot product with itself is nonzero, while the dot product
between distinct eigenvectors is zero:

x1 · x1 = (−1)(−1) + (1)(1) = 2

x2 · x2 = (1)(1) + (1)(1) = 2

x1 · x2 = (−1)(1) + (1)(1) = 0



Consider another matrix:

A = [ 0  1 ]
    [ 1  0 ]

det(A − λI) = | −λ   1 |
              |  1  −λ |  = λ² − 1 = 0,  →  λ1 = −1, λ2 = 1.

λ1 = −1 :  x1 = [ −1 ]
                [  1 ]

λ2 = 1 :   x2 = [ 1 ]
                [ 1 ]

Here

tr(A) = λ1 + λ2 = 0,  det(A) = λ1λ2 = −1

• This shows that if we add 3I to this matrix, we get the
previous example matrix with constant diagonals of 3, and
this shifts the eigenvalues by 3: λ1 = −1 + 3 = 2,
λ2 = 1 + 3 = 4, while the eigenvectors don’t change.
• We can see this by adding 3I to A, and with Ax = λx,
and the same eigenvectors,

(A + 3I)x = Ax + 3x = λx + 3x = (λ + 3)x
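This shift property is easy to confirm numerically (NumPy assumed):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

w_A = np.linalg.eigvals(A)                        # eigenvalues -1 and 1
w_shifted = np.linalg.eigvals(A + 3 * np.eye(2))  # eigenvalues 2 and 4

# Shifting the matrix by 3I shifts every eigenvalue by 3
assert np.allclose(sorted(w_shifted), sorted(w_A + 3))
```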



CAUTION
Be careful: adding 3I to the special symmetric matrix with
constant diagonals gave the same eigenvectors, just with
shifted eigenvalues; this is not true when adding an
arbitrary matrix, say B, to A.

• We could be tempted to write: if A has eigenvalues λ’s,
and B has eigenvalues α’s, then the eigenvalues of A + B
add, with the following (incorrect) argument: Ax = λx;
Bx = αx; adding gives (A + B) x = (λ + α)x. But this
is false!
• This argument assumed that the eigenvectors of A and
B were the same, which is not true.
• In general, the eigenvalues of the sum (A + B) or
product (AB) of matrices are not the sums or products of
the eigenvalues of A and B.
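A concrete counterexample (NumPy assumed): two matrices whose eigenvalues are all zero, yet whose sum has eigenvalues −1 and 1.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])  # eigenvalues: 0, 0
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])  # eigenvalues: 0, 0

# If eigenvalues simply added, A + B would have eigenvalues 0, 0.
# In fact A + B is the permutation matrix with eigenvalues -1 and 1.
w = np.linalg.eigvals(A + B)
assert np.allclose(sorted(w), [-1.0, 1.0])
```

The argument fails because A and B here do not share eigenvectors, which is exactly the hidden assumption identified above.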



2.2 Example of a Rotation (Orthogonal) Matrix
Consider a rotation (orthogonal) matrix that takes any vec-
tor and rotates it by 90◦ counter-clockwise.
• The rotated basis vectors are T(e1) = [0, 1]ᵀ and
T(e2) = [−1, 0]ᵀ; thus the rotation matrix is:

Q = [ T(e1)  T(e2) ] = [ 0  −1 ]
                       [ 1   0 ]

• What vector x gives an output Qx parallel to x after
rotating by 90°?
• Answer: there won’t be any (real) one, since the rotated
vector is perpendicular to the input vector.



• Let’s check:

det(Q − λI) = | −λ  −1 |
              |  1  −λ |  = λ² + 1 = 0

The roots are imaginary numbers (even though the matrix
is real):

λ1 = √−1 = i,  λ2 = −√−1 = −i

• We see the two roots are complex conjugate pairs!


• The trace and determinant are the sum and product of
the eigenvalues:

tr(Q) = λ1 + λ2 = i − i = 0,
det(Q) = λ1λ2 = i(−i) = −i² = −(−1) = 1

• The 90° rotation is as far from a symmetric matrix as
we can get (in fact, the opposite); it is an anti-symmetric
matrix, and the eigenvalues are imaginary, complex-conjugate
pairs.
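A numerical check of the rotation example (NumPy assumed) confirms the complex conjugate pair and the trace/determinant identities:

```python
import numpy as np

# 90-degree counter-clockwise rotation
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w = np.linalg.eigvals(Q)

# The eigenvalues are the complex conjugate pair +i, -i
assert np.allclose(sorted(w, key=lambda z: z.imag), [-1j, 1j])
assert np.isclose(w.sum(), 0.0)   # trace of Q
assert np.isclose(w.prod(), 1.0)  # determinant of Q
```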



• In general, a matrix can be decomposed into a symmetric
and an anti-symmetric part:

A = Asym + Aanti-sym

In this case, the eigenvalues λ = Re ± i Im are complex
conjugate pairs, with real parts due to the symmetric part
and imaginary parts due to the anti-symmetric part.



2.3 Trouble
Consider this triangular matrix (zeros on one side of the
diagonal):

A = [ 3  1 ]
    [ 0  3 ]

det(A − λI) = (3 − λ)(3 − λ) = 0,  →  λ1 = λ2 = 3
• Here, the eigenvalues (roots of the characteristic poly-
nomial) are the same, i.e., repeated.
• Note that the eigenvalues are the same as the diago-
nal elements of the triangular matrix (this will always be
true).
• With repeated eigenvalues, we will have trouble with
the eigenvectors.
• With λ1 = 3,

(A − 3I) x1 = [ 0  1 ] x1 = [ 0 ]  →  x1 = [ 1 ]
              [ 0  0 ]      [ 0 ]          [ 0 ]

The matrix (A − 3I) is singular (as it should be), and the
eigenvector x1 is in the null space. However, there is no
independent x2, and thus there is a shortage of independent
eigenvectors (we call this degenerate, or defective).
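Numerically (NumPy assumed), a defective matrix betrays itself through its eigenvector matrix: `eig` still returns two columns, but they are (numerically) parallel, so there is no second independent eigenvector.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])

w, V = np.linalg.eig(A)

# The eigenvalue 3 is repeated...
assert np.allclose(w, [3.0, 3.0])

# ...but the two returned eigenvector columns are parallel, so the
# matrix of eigenvectors is singular: A is defective (not diagonalizable)
assert abs(np.linalg.det(V)) < 1e-8
```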



2.4 Conclusions
• Eigenvalues and eigenvectors are foundational tools in
linear algebra. They allow us to analyze and understand
matrices more deeply. From simple projections to com-
plex rotations, they reveal the underlying structure of
transformations and provide insights across various dis-
ciplines.
• Mastering the methods for interpreting these values gives
us a powerful perspective on theoretical and applied
mathematics and physics, making them indispensable
for anyone working with mathematical models, engi-
neering, or data analysis.



References
• Lecture 21: Eigenvalues and Eigenvectors.
Prof. Gilbert Strang.
Lec 21 – MIT 18.06 Linear Algebra, Spring 2005.
YouTube Video:
https://youtu.be/lXNXrLcoerU?si=YEsra72FrzJztKT_

We Love Matrix Linear Algebra!



3 Quiz on Eigenvalues and Eigenvectors
1. What is an eigenvector?

A) A vector that changes its direction after matrix mul-


tiplication.
B) A vector that remains in the same direction after
matrix multiplication.
C) A vector that disappears after matrix multiplica-
tion.
D) A vector that rotates 90 degrees after matrix mul-
tiplication.

2. How is the eigenvalue of a matrix defined?

A) The sum of all elements in a matrix.


B) The trace of a matrix.
C) The scalar by which an eigenvector is scaled after
matrix multiplication.
D) The number of columns in the matrix.



3. What happens to the eigenvalues when you add a scalar
multiple of the identity, cI, to a matrix A?

A) Eigenvalues remain the same.


B) Eigenvalues are scaled by the scalar.
C) Eigenvalues increase by the value of the scalar.
D) Eigenvalues decrease by the value of the scalar.

4. What is a characteristic equation used for?

A) To find the inverse of a matrix.


B) To determine the null space of a matrix.
C) To calculate eigenvalues by setting the determi-
nant of A − λI to zero.
D) To identify the trace of the matrix.



5. What is a major challenge when dealing with repeated
eigenvalues?

A) They always produce complex numbers.


B) They often result in a lack of enough independent
eigenvectors.
C) They require the use of orthogonal matrices.
D) They always have real numbers.

Enjoy testing your knowledge.



Answers to Quiz
Here are the correct answers for the quiz:
1. B) A vector that remains in the same direction after ma-
trix multiplication.
2. C) The scalar by which an eigenvector is scaled after
matrix multiplication.
3. C) Eigenvalues increase by the value of the scalar.
4. C) To calculate eigenvalues by setting the determinant
of A − λI to zero.
5. B) They often result in a lack of enough independent
eigenvectors.

