Introduction to Eigenvalues and Eigenvectors
A. Havens
Department of Mathematics
University of Massachusetts, Amherst
April 2 - 6, 2018
Outline
1 Defining Eigenstuffs
Motives
Eigenvectors and Eigenvalues
2 The Characteristic Equation
3 Introduction to Applications
Classifying Endomorphisms of R2
Linear Recursion and Difference Equations
Linear Differential Equations
Motives
Motivating Applications
Formal Definitions
Definition
Let T : Rn → Rn be a linear transformation of Rn . Then a
nonzero vector x ∈ Rn − {0} is called an eigenvector of T if there
exists some number λ ∈ R such that
T (x) = λx.
The real number λ is called a real eigenvalue of the real linear
transformation T .
Let A be an n × n matrix representing the linear transformation T .
Then, x is an eigenvector of the matrix A if and only if it is an
eigenvector of T , if and only if
Ax = λx
for an eigenvalue λ.
Remark
We will prioritize the study of real eigenstuffs, primarily using 2 × 2
and 3 × 3 matrices, which give a good general sense of the theory.
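A minimal numerical sketch of the definition, assuming Python with NumPy (the matrix below is an arbitrary illustrative choice): each eigenpair (λ, x) returned by numpy.linalg.eig satisfies Ax = λx up to floating-point error.

    import numpy as np

    # Arbitrary 2x2 matrix chosen only to illustrate the definition.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
    eigenvalues, eigenvectors = np.linalg.eig(A)

    for i, lam in enumerate(eigenvalues):
        x = eigenvectors[:, i]
        # Check the defining equation A x = lambda x.
        print(lam, np.allclose(A @ x, lam * x))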
Examples in 2-Dimensions
Example
Let v ∈ R2 be a nonzero vector, and ℓ = Span{v}. Let
Ref_ℓ : R2 → R2 be the linear transformation of the plane given by
reflection through the line ℓ.
Then, since Ref_ℓ (v) = 1v, v is an eigenvector of Ref_ℓ with
eigenvalue 1, and ℓ = Span{v} is an eigenline or eigenspace of the
reflection. Note, any nonzero multiple of v is also an eigenvector
with eigenvalue 1, by linearity.
Can you describe another eigenvector of Ref_ℓ , with a different
associated eigenvalue? What is the associated eigenspace?
If u ∈ R2 is any nonzero vector perpendicular to v, then u is an
eigenvector of Ref_ℓ with eigenvalue −1. The line spanned by u is
also an eigenspace.
Examples in 2-Dimensions
Example
For v and ℓ as above, the orthogonal projection proj_ℓ (x) = ((v · x)/(v · v)) v
has the same eigenspaces as Ref_ℓ , but a different eigenvalue for
the line ℓ⊥ = Span{u}, for u ∈ R2 − {0} with u · v = 0.
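As a concrete numerical check of these two examples (a sketch assuming NumPy; the direction vector v = (1, 2) is an arbitrary choice), one can build the projection and reflection matrices for ℓ = Span{v} and confirm that their eigenvalues are {1, 0} and {1, −1}, respectively.

    import numpy as np

    v = np.array([1.0, 2.0])            # arbitrary direction vector spanning the line ell
    P = np.outer(v, v) / (v @ v)        # orthogonal projection onto ell: proj(x) = (v.x / v.v) v
    R = 2 * P - np.eye(2)               # reflection through ell

    print(np.round(np.linalg.eigvals(P), 6))   # 1 and 0, in some order
    print(np.round(np.linalg.eigvals(R), 6))   # 1 and -1, in some order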
Examples in 2-Dimensions
Example
Let A = [ 1  k ; 0  1 ] , for a nonzero real number k.
Examples in 2-Dimensions
Example
An eigenvector x of the shearing matrix A with eigenvalue 1 must
satisfy Ax = x, whence x is a solution of the homogeneous
equation Ax − I2 x = (A − I2 )x = 0.
Therefore the components x1 and x2 of x must satisfy
[ 0 ; 0 ] = [ 1 − 1  k ; 0  1 − 1 ] [ x1 ; x2 ] = [ 0x1 + kx2 ; 0x1 + 0x2 ]
=⇒ kx2 = 0 =⇒ x2 = 0 .
Examples in 2-Dimensions
Example
Thus, x = [ t ; 0 ] , t ∈ R − {0}, is an eigenvector of the shearing
matrix A, with eigenvalue 1, and the x1 -axis is the corresponding
eigenspace.
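A short sketch confirming this, assuming NumPy and taking k = 3 as an arbitrary nonzero value: the shear matrix has 1 as its only eigenvalue, and the computed eigenvectors all lie along e1.

    import numpy as np

    k = 3.0                              # arbitrary nonzero shear factor
    A = np.array([[1.0, k],
                  [0.0, 1.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)                   # [1. 1.]
    # Both returned columns are (numerically) multiples of e1 = (1, 0),
    # reflecting the one-dimensional eigenspace.
    print(np.round(eigenvectors, 6))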
Examples in 2-Dimensions
Example
The matrix J = [ 0  −1 ; 1  0 ] has no real eigenvectors.
All lines through 0 are rotated by π/2. We will later see that this
matrix has purely imaginary eigenvalues, as will be the case with
other rotation matrices.
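This claim can be previewed numerically (a sketch assuming NumPy, which computes eigenvalues over the complex numbers): the eigenvalues of J come out as ±i.

    import numpy as np

    J = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    print(np.linalg.eigvals(J))   # approximately [0.+1.j, 0.-1.j]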
Example
Consider the upper triangular matrix
A = [ a11  a12  a13 ; 0  a22  a23 ; 0  0  a33 ] .
Show that the eigenvalues are the entries a11 , a22 and a33 along
the main diagonal.
If λ is an eigenvalue of A, then there exists a nonzero vector
x ∈ R3 such that Ax = λx. But then Ax − λx = (A − λI3 )x = 0
must have a nontrivial solution.
But the homogeneous equation has a nontrivial solution if and only
if the square matrix A − λI3 has determinant equal to 0.
Example
Since
A − λI3 = [ a11 − λ  a12  a13 ; 0  a22 − λ  a23 ; 0  0  a33 − λ ]
is again upper triangular, its determinant is the product of its diagonal
entries: det(A − λI3 ) = (a11 − λ)(a22 − λ)(a33 − λ). This vanishes precisely
when λ equals a11 , a22 , or a33 , so the eigenvalues of A are the entries along
the main diagonal.
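A quick numerical illustration of this fact, assuming NumPy (the particular entries below are arbitrary): the eigenvalues of an upper triangular matrix are exactly its diagonal entries.

    import numpy as np

    # Arbitrary upper triangular matrix used only for illustration.
    A = np.array([[4.0, 2.0, -1.0],
                  [0.0, 3.0,  5.0],
                  [0.0, 0.0, -2.0]])

    print(np.round(np.linalg.eigvals(A), 6))   # 4, 3, -2 in some order
    print(np.diag(A))                          # the same numbers: [ 4.  3. -2.]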
Theorem
If v1 , . . . , vr are eigenvectors that correspond respectively to
distinct eigenvalues λ1 , . . . , λr of an n × n matrix A, then the set
{v1 , . . . , vr } is linearly independent.
Proof.
We proceed by contradiction. Suppose {v1 , . . . , vr } is a linearly
dependent set.
Observe that, being a set of eigenvectors, vi ≠ 0 for any
i = 1, . . . , r , and by linear dependence we can find an index p,
1 ≤ p < r , such that {v1 , . . . , vp } is linearly independent and
vp+1 ∈ Span{v1 , . . . , vp }.
A Proof
Proof (continued.)
So there exist constants c1 , . . . , cp not all zero such that
vp+1 = c1 v1 + . . . + cp vp .
Left-multiplying both sides of this relation by A, we obtain
Avp+1 = A(c1 v1 + . . . + cp vp ) =⇒ λp+1 vp+1 = c1 λ1 v1 + . . . + cp λp vp .
Multiplying the original relation by λp+1 instead and subtracting, we find
0 = c1 (λ1 − λp+1 )v1 + . . . + cp (λp − λp+1 )vp .
Proof (continued.)
This final relation is impossible:
since the set {v1 , . . . , vp } is linearly independent, this equation
requires that the scalar weights ci (λi − λp+1 ) all vanish, but we know at least
one ci must be nonzero since vp+1 is an eigenvector (and hence
nonzero), while since the eigenvalues are all distinct, λi − λp+1 ≠ 0
for any i = 1, . . . , p. This contradiction shows that {v1 , . . . , vr } must in
fact be linearly independent.
An Eigenvalue of 0
Note that 0 is an eigenvalue of a square matrix A if and only if Ax = 0x = 0
has a nontrivial solution, i.e., if and only if A is not invertible.
Determinant Review
Theorem
Let A, B ∈ Rn×n . Then
a. If A = (aij ) is triangular, then det A = a11 a22 · · · ann , the product
of the diagonal entries.
b. det(AB) = (det A)(det B).
c. det At = det A.
d. A is invertible if and only if det A ≠ 0.
e. A row replacement operation on A does not alter det A. A
row swap operation on A reverses the sign of det A. Scaling a
row of A by s scales det A by s.
Finding Eigenvalues
A real number λ is an eigenvalue of the n × n matrix A if and only if
(A − λIn )x = 0 has a nontrivial solution, which happens if and only if λ
satisfies the characteristic equation
det(A − λIn ) = 0 .
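In practice this means the eigenvalues are the roots of the characteristic polynomial. A small sketch, assuming NumPy and using the 2 × 2 matrix of the example that follows: numpy.poly returns the coefficients of det(λIn − A), numpy.roots finds its roots, and these agree with numpy.linalg.eigvals.

    import numpy as np

    # The 2x2 matrix from the example below.
    A = np.array([[ 6.0,  8.0],
                  [-2.0, -4.0]])

    coeffs = np.poly(A)             # coefficients of det(lambda*I - A): lambda^2 - 2*lambda - 8
    print(coeffs)                   # [ 1. -2. -8.]
    print(np.roots(coeffs))         # roots 4 and -2
    print(np.linalg.eigvals(A))     # the same values, computed directly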
A 2 × 2 Example
Example
Let A = [ 6  8 ; −2  −4 ] . Find the eigenvalues and eigenvectors of A.
The characteristic polynomial is
det(A − λI2 ) = (6 − λ)(−4 − λ) − (8)(−2) = λ² − 2λ − 8 = (λ + 2)(λ − 4) ,
so the eigenvalues are λ1 = −2 and λ2 = 4.
A 2 × 2 Example
Example
To obtain the eigenvectors, we must solve the systems associated to
each eigenvalue:
(A − (−2)I2 )x = 0 and (A − 4I2 )x = 0.
For λ1 = −2, this yields a homogeneous system with augmented
matrix
[ 8  8 | 0 ; −2  −2 | 0 ] ,
which is solved so long as the components x1 and x2 of x satisfy
x2 = −x1 .
Thus, e.g., [ 1 ; −1 ] spans the (−2)-eigenspace.
A 2 × 2 Example
Example
For λ2 = 4 the corresponding homogeneous system has augmented
matrix
[ 2  8 | 0 ; −2  −8 | 0 ] ,
which is solved whenever the components x1 and x2 satisfy
x1 = −4x2 .
Thus, e.g., [ 4 ; −1 ] spans the 4-eigenspace.
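A quick check of both computations, assuming NumPy: the vectors found above do satisfy Ax = λx for λ = −2 and λ = 4.

    import numpy as np

    A = np.array([[ 6.0,  8.0],
                  [-2.0, -4.0]])

    x1 = np.array([1.0, -1.0])      # claimed (-2)-eigenvector
    x2 = np.array([4.0, -1.0])      # claimed 4-eigenvector

    print(np.allclose(A @ x1, -2 * x1))   # True
    print(np.allclose(A @ x2,  4 * x2))   # True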
A 3 × 3 example
Example
Let A = [ 2  −2  −1 ; −1  1  −1 ; −1  −2  2 ] and let v = [ 1 ; 1 ; 1 ].
(a) Show that v is an eigenvector of A. What is its associated
eigenvalue?
(b) Find the characteristic equation of A.
(c) Find the remaining eigenvalue(s) of A, and describe the
associated eigenspace(s).
A 3 × 3 example
Example
Solution:
(a) It is easy to check that Av = −v, whence v is an eigenvector
with associated eigenvalue −1.
Observe that since v = e1 + e2 + e3 , this can only be the case
because the sum of the entries in each row of A is −1. More
generally, e1 + e2 + . . . + en is an eigenvector of an n × n
matrix if and only if the entries in each row of the matrix sum
to a common constant λ, which is then the eigenvalue for v.
(b) Let χA (λ) = det(A − λI3 ) be the characteristic polynomial.
To find χA (λ), we thus need to calculate the determinant of
the 3 × 3 matrix A − λI3 .
A 3 × 3 example
Example
(b) (continued.)
χA (λ) = det [ 2 − λ  −2  −1 ; −1  1 − λ  −1 ; −1  −2  2 − λ ]
= (2 − λ)(1 − λ)(2 − λ) − 2 − 2 − (1 − λ) − 2(2 − λ) − 2(2 − λ)
= −λ³ + 5λ² − 3λ − 9
A 3 × 3 example
Example
(c) Any eigenvalue λ of A satisfies the characteristic equation,
and thus is a root of the characteristic polynomial χA (λ).
Since we already know that λ = −1 is a root, we can factor
χA (λ) = −(λ + 1)(λ² − 6λ + 9).
Thus, we seek solutions of the polynomial equation
0 = λ² − 6λ + 9 = (λ − 3)² ,
so the remaining eigenvalue is λ = 3.
A 3 × 3 example
Example
(c) (continued.) To find the associated eigenvector(s), we need to
solve the homogeneous system (A − 3I3 )x = 0. Row reduction gives
[ −1  −2  −1 | 0 ; −1  −2  −1 | 0 ; −1  −2  −1 | 0 ]  ~  [ 1  2  1 | 0 ; 0  0  0 | 0 ; 0  0  0 | 0 ]  (RREF).
A 3 × 3 example
Example
(c) (continued.) Observe that this solution space is merely the
plane with equation x1 + 2x2 + x3 = 0, and it is spanned by
the vectors [ −2 ; 1 ; 0 ] and [ −1 ; 0 ; 1 ].
Thus the 3-eigenspace is two dimensional.
Remark
In the preceding example, the eigenvalue 3 appeared as a double
root of the characteristic polynomial. We say that 3 has algebraic
multiplicity 2.
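The whole example can be confirmed numerically (a sketch assuming NumPy): the eigenvalues of A are −1, 3, 3, and since A − 3I3 has rank 1, its null space, the 3-eigenspace, is two dimensional.

    import numpy as np

    A = np.array([[ 2.0, -2.0, -1.0],
                  [-1.0,  1.0, -1.0],
                  [-1.0, -2.0,  2.0]])

    print(np.round(np.linalg.eigvals(A), 6))           # -1, 3, 3 in some order
    rank = np.linalg.matrix_rank(A - 3 * np.eye(3))
    print(3 - rank)                                     # dimension of the 3-eigenspace: 2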
Multiplicities Defined
Definition
Let A ∈ Rn×n be a real square matrix with characteristic
polynomial χA (λ). Suppose ν ∈ R is an eigenvalue of A, so
χA (ν) = 0. Let Eν := {x ∈ Rn | Ax = νx} ⊆ Rn be the
ν-eigenspace.
The algebraic multiplicity m := m(ν) of the eigenvalue ν is
the largest integer m such that (λ − ν)^m divides χA (λ):
χA (λ) = (λ − ν)^m q(λ),
where q(λ) is a polynomial of degree n − m with q(ν) ≠ 0.
The geometric multiplicity µ := µ(ν) of the eigenvalue ν is
the dimension of Eν : µ(ν) = dim Eν .
Let k ∈ R be nonzero and recall that the matrix A = [ 1  k ; 0  1 ] has
eigenvalue λ = 1 with algebraic multiplicity 2, while its geometric multiplicity
is only 1, since the 1-eigenspace is the x1 -axis.
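Both multiplicities of this example can be computed directly (a sketch assuming NumPy, with k = 3 chosen arbitrarily): the characteristic polynomial has 1 as a double root, while the null space of A − I2 is only one dimensional.

    import numpy as np

    k = 3.0                                             # arbitrary nonzero shear factor
    A = np.array([[1.0, k],
                  [0.0, 1.0]])

    # Algebraic multiplicity: det(lambda*I - A) = lambda^2 - 2*lambda + 1 = (lambda - 1)^2.
    print(np.poly(A))                                   # [ 1. -2.  1.]
    # Geometric multiplicity: dimension of the null space of A - I.
    print(2 - np.linalg.matrix_rank(A - np.eye(2)))     # 1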
An Inequality
Proposition
An eigenvalue ν of an n × n matrix A has algebraic multiplicity at
least as large as its geometric multiplicity:
1 ≤ µ(ν) ≤ m(ν) ≤ n .
Similar Matrices
Definition
Given two n × n matrices A and B, the matrix A is said to be
similar to B if there exists an invertible matrix P such that
A = PBP−1 .
Similar Matrices
Theorem
If A and B are similar n × n matrices, then they have the same characteristic
polynomial, and hence the same eigenvalues, with the same algebraic and
geometric multiplicities.
A Proof
Proof.
Assume A = PBP−1 for some invertible matrix P ∈ Rn×n .
Observe that
A − λIn = PBP−1 − λPP−1 = PBP−1 − PλIn P−1 = P(B − λIn )P−1 ,
whence
det(A − λIn ) = det(P) det(B − λIn ) det(P−1 ) = det(B − λIn ) ,
so A and B have the same characteristic polynomial, and thus the same
eigenvalues with the same algebraic multiplicities.
Similar Matrices
Proof (continued.)
Now, suppose λ is an eigenvalue of both A and B, and suppose
the geometric multiplicity of λ for A is µ. Then there exist linearly
independent vectors v1 , . . . vµ spanning the λ-eigenspace of A, and
for any vi , i = 1, . . . , µ, Avi = λvi .
Then
λP−1 vi = P−1 Avi = B(P−1 vi ) ,
whence P−1 vi is an eigenvector for B with eigenvalue λ. Since P is
invertible, the map x 7→ P−1 x is an isomorphism, whence this
induces a one-to-one correspondence of eigenvectors of A and B
with eigenvalue λ. Thus, the geometric multiplicity of λ for B is
also µ.
Similar Matrices
Proof (of the Proposition).
Let ν be an eigenvalue of A with geometric multiplicity µ, and let
v1 , . . . , vµ be a basis of the ν-eigenspace Eν . Extend this to a basis
v1 , . . . , vµ , u1 , . . . , un−µ of Rn , and let
P = [ v1 . . . vµ  u1 . . . un−µ ] .
Consider the product AP.
Similar Matrices
Proof (continued.)
AP = [ Av1 . . . Avµ  Au1 . . . Aun−µ ]
   = [ νv1 . . . νvµ  Au1 . . . Aun−µ ] ,
so
P−1 AP = [ νe1 . . . νeµ  P−1 Au1 . . . P−1 Aun−µ ]
       = [ νIµ  ∗ ; 0(n−µ)×µ  ∗ ] .
Similar Matrices
Proof (continued.)
Since there is a diagonal block of νIµ in P−1 AP, we see that
P−1 AP has a factor of (ν − λ)^µ in its characteristic polynomial.
But since A and P−1 AP are similar, they share the same
characteristic polynomial.
Thus, the algebraic multiplicity m(ν) for the eigenvalue ν of A is
at least µ.
Observation
If µ(ν) = m(ν), then we get a maximal diagonal block νIm in
P−1 AP; if χA factors completely into a product of terms
(νi − λ)^µ(νi) with ∑i µ(νi ) = n for real numbers νi , then P−1 AP
can be taken to be a diagonal matrix, i.e., A is diagonalizable.
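For instance, the 3 × 3 matrix of the earlier example has µ(−1) = m(−1) = 1 and µ(3) = m(3) = 2, so it can be diagonalized; the sketch below (assuming NumPy) assembles P from the eigenvectors found above and checks that P−1AP is diagonal.

    import numpy as np

    A = np.array([[ 2.0, -2.0, -1.0],
                  [-1.0,  1.0, -1.0],
                  [-1.0, -2.0,  2.0]])

    # Columns: an eigenvector for -1, then two vectors spanning the 3-eigenspace.
    P = np.array([[ 1.0, -2.0, -1.0],
                  [ 1.0,  1.0,  0.0],
                  [ 1.0,  0.0,  1.0]])

    D = np.linalg.inv(P) @ A @ P
    print(np.round(D, 6))                               # diag(-1, 3, 3)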
Classifying Endomorphisms of R2
Linear transformations T : R2 → R2
Projections
Since χA is a degree two polynomial for any A ∈ R2×2 , there are
two possibilities for zero eigenvalues: a single zero eigenvalue and
one nonzero eigenvalue λ, or a zero eigenvalue with algebraic
multiplicity m = 2.
If the eigenvalues of A are 0 and λ ≠ 0, then A is similar to
[ λ  0 ; 0  0 ] , which represents a stretched projection map T (x) = Ax
projecting onto its nonzero eigenspace Eλ , with stretching factor λ:
if λ = 1 then T is an unstretched orthogonal or oblique
projection onto the eigenline Eλ ,
if |λ| < 1 then T is a contracted projection onto Eλ ,
if |λ| > 1 then T is a dilated projection onto Eλ ,
if λ < 0 then T additionally acts by reflection, reversing
the eigenline Eλ .
Classifying Endomorphisms of R2
Nilpotent maps
Classifying Endomorphisms of R2
If A = [ a11  a12 ; a21  a22 ] , then the characteristic polynomial satisfies
χA (λ) = (a11 − λ)(a22 − λ) − a12 a21 = λ² − (a11 + a22 )λ + (a11 a22 − a12 a21 )
       = λ² − tr(A) λ + det(A) ,
so the eigenvalues of A are determined by its trace and determinant.
Proposition
Let A be a matrix determining a linear map T (x) = Ax of R2 and
let ∆ = det(A), τ = tr (A). Then the eigenvalues of A are
λ+ := (1/2)(τ + √(τ² − 4∆)) ,  λ− := (1/2)(τ − √(τ² − 4∆)) .
A has a repeated eigenvalue if and only if τ = ±2√∆, and
otherwise has two distinct eigenvalues.
A has a zero eigenvalue if and only if ∆ = 0, and if in addition
τ = 0, then the matrix is either nilpotent or the zero matrix.
If τ² ≥ 4∆ then the eigenvalues λ± are real.
Otherwise, if τ² < 4∆ then the matrix has distinct complex
eigenvalues with strictly nonzero imaginary parts, occurring as
a conjugate pair λ = a + bi, λ̄ = a − bi. Moreover, the
determinant in this case is |λ|² = λλ̄ = a² + b² > 0.
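A numerical sanity check of these formulas (a sketch assuming NumPy; the matrix is an arbitrary choice): the values (τ ± √(τ² − 4∆))/2 agree with the eigenvalues computed directly.

    import numpy as np

    A = np.array([[ 1.0, 2.0],
                  [-1.0, 4.0]])               # arbitrary 2x2 example

    tau, delta = np.trace(A), np.linalg.det(A)
    disc = tau**2 - 4 * delta
    root = np.sqrt(complex(disc))             # complex sqrt also covers the case tau^2 < 4*delta
    lam_plus, lam_minus = (tau + root) / 2, (tau - root) / 2

    print(lam_plus, lam_minus)                # (3+0j) (2+0j) for this A
    print(np.linalg.eigvals(A))               # the same eigenvalues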
Classifying Endomorphisms of R2
Generalized Shearing
Linear Recursion and Difference Equations
Definition
An n-th order recurrence relation is a discrete relation of the form
xk = f (xk−n , xk−n+1 , . . . , xk−1 ) , k ≥ n ,
for some function f of n variables.
Definition
An n-th order recurrence is linear homogeneous if f is a
homogeneous linear function, i.e., if the recurrence relation is of
the form
xk = a0 xk−n + a1 xk−n+1 + . . . + an−1 xk−1 = ∑_{i=0}^{n−1} ai xk−n+i ,
for numbers a0 , . . . , an−1 .
In this formulation, one sees that xn+k = C^{k+1} xn−1 , for k ≥ −1. If
one can find an invertible matrix P such that C is similar to a
diagonal matrix D via P, then one can compute an explicit formula
for the terms of the sequence, since C^k = PD^k P−1 and powers of a
diagonal matrix are easy to compute.
Definition
The Fibonacci numbers Fk are the numbers defined by the simple
linear recurrence
Fk+1 = Fk + Fk−1 , F0 = 0 , F1 = 1 .
For this recurrence the associated matrix is C = [ 0  1 ; 1  1 ] , with
characteristic polynomial
χC (λ) = λ² − λ − 1 ,
whose roots are φ = (1 + √5)/2, the golden ratio, and −1/φ = (1 − √5)/2.
Exercise
Show that
Eφ = Span{ [ 1 ; φ ] } ,  E−1/φ = Span{ [ −φ ; 1 ] } .
Let P = [ 1  −φ ; φ  1 ] . Show that P−1 = (1/√5) [ 1/φ  1 ; −1  1/φ ] , and
check that
[ 0  1 ; 1  1 ] = (1/√5) [ 1  −φ ; φ  1 ] [ φ  0 ; 0  −1/φ ] [ 1/φ  1 ; −1  1/φ ] .
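Carrying out this diagonalization is what makes an explicit formula for Fk possible, since C^k = PD^kP−1 and the (0, 1) entry of C^k is Fk. A sketch assuming NumPy, checked against the recurrence itself:

    import numpy as np

    phi = (1 + np.sqrt(5)) / 2
    C = np.array([[0.0, 1.0],
                  [1.0, 1.0]])
    P = np.array([[1.0, -phi],
                  [phi,  1.0]])
    D = np.diag([phi, -1.0 / phi])

    print(np.allclose(C, P @ D @ np.linalg.inv(P)))      # the factorization C = P D P^{-1}: True

    def fib_eigen(k):
        # F_k from C^k = P D^k P^{-1}; rounding removes floating-point error.
        Ck = P @ np.diag([phi**k, (-1.0 / phi)**k]) @ np.linalg.inv(P)
        return int(round(Ck[0, 1]))

    fibs = [0, 1]
    for _ in range(10):
        fibs.append(fibs[-1] + fibs[-2])

    print([fib_eigen(k) for k in range(12)] == fibs)     # True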
Linear Differential Equations
Example
Consider the linear system of differential equations
dx1 /dt = −x1 + 2x2
dx2 /dt = 3x1 + 4x2 ,
that is, (d/dt) x(t) = Ax(t) with A = [ −1  2 ; 3  4 ] .
The matrix A has eigenvalues −2 and 5 with respective eigenvectors
v−2 = [ −2 ; 1 ]  and  v5 = [ 1 ; 3 ] ,
whence
x(t) = c1 e^{−2t} v−2 + c2 e^{5t} v5 ,  i.e.,
x1 (t) = −2c1 e^{−2t} + c2 e^{5t} ,
x2 (t) = c1 e^{−2t} + 3c2 e^{5t} ,
gives a general solution.
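A quick check of this general solution (a sketch assuming NumPy and SciPy; the constants c1, c2 and hence the initial condition are arbitrary choices): numerically integrating x′ = Ax reproduces the eigenvector formula at t = 1.

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[-1.0, 2.0],
                  [ 3.0, 4.0]])
    v_m2 = np.array([-2.0, 1.0])    # eigenvector for eigenvalue -2
    v_5  = np.array([ 1.0, 3.0])    # eigenvector for eigenvalue 5

    c1, c2 = 1.0, 0.5               # arbitrary constants picking out one particular solution
    x0 = c1 * v_m2 + c2 * v_5       # the corresponding initial condition x(0)

    def closed_form(t):
        return c1 * np.exp(-2 * t) * v_m2 + c2 * np.exp(5 * t) * v_5

    sol = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)
    print(np.allclose(sol.y[:, -1], closed_form(1.0), rtol=1e-6))   # True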
Example
Second order equations, such as the harmonic oscillator equation
x′′(t) + x(t) = 0 ,
can be studied in the same way by converting them to first order linear systems.