Five II Handout
II Similarity
Linear Algebra
Jim Hefferon
http://joshua.smcvt.edu/linearalgebra
Definition and Examples
We’ve defined two matrices H and Ĥ to be matrix equivalent if there are
nonsingular P and Q such that Ĥ = PHQ. We were motivated by this
diagram showing H and Ĥ both representing a map h, but with respect to
different pairs of bases, B, D and B̂, D̂.
             h
V_wrt B  -------->  W_wrt D
             H
    id |              | id
       v              v
             h
V_wrt B̂  -------->  W_wrt D̂
             Ĥ
The same idea applies when the codomain equals the domain, so that T and T̂ both represent the same transformation t.

             t
V_wrt B  -------->  V_wrt B
             T
    id |              | id
       v              v
             t
V_wrt D  -------->  V_wrt D
             T̂

In matrix terms, Rep_D,D(t) = Rep_B,D(id) · Rep_B,B(t) · Rep_B,D(id)^(−1).
Similar matrices
1.2 Definition The matrices T and T̂ are similar if there is a nonsingular P
such that T̂ = P T P^(−1).
Example Consider the derivative map d/dx : P2 → P2. Fix the basis
B = ⟨1, x, x²⟩ and the basis D = ⟨1, 1 + x, 1 + x + x²⟩. In this arrow diagram
we will first get T, and then calculate T̂ from it.
             t
V_wrt B  -------->  V_wrt B
             T
    id |              | id
       v              v
             t
V_wrt D  -------->  V_wrt D
             T̂
The matrix changing bases from B to D is Rep_B,D(id). We find its columns by eye

                   ( 1 )                    ( −1 )                     (  0 )
Rep_D(id(1)) =     ( 0 )   Rep_D(id(x)) =   (  1 )   Rep_D(id(x²)) =   ( −1 )
                   ( 0 )                    (  0 )                     (  1 )

to get this.

    ( 1  −1   0 )            ( 1  1  1 )
P = ( 0   1  −1 )   P^(−1) = ( 0  1  1 )
    ( 0   0   1 )            ( 0  0  1 )
Now, by following the arrow diagram we have T̂ = P T P^(−1), where
T = Rep_B,B(d/dx) comes from d/dx(1) = 0, d/dx(x) = 1, and d/dx(x²) = 2x.

    ( 0  1  0 )          ( 0  1  −1 )
T = ( 0  0  2 )     T̂ = ( 0  0   2 )
    ( 0  0  0 )          ( 0  0   0 )
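As a quick numerical check (a numpy sketch, not part of the handout), we can multiply out P T P^(−1) and compare with the T̂ above.

```python
import numpy as np

# T represents d/dx with respect to B = <1, x, x^2>:
# d/dx(1) = 0, d/dx(x) = 1, d/dx(x^2) = 2x.
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

# P changes bases from B to D = <1, 1+x, 1+x+x^2>.
P = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0, 1.0]])

T_hat = P @ T @ np.linalg.inv(P)
print(T_hat)   # matches the T-hat computed above
```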
To check that, and to underline what the arrow diagram says

             t
V_wrt B  -------->  V_wrt B
             T
    id |              | id
       v              v
             t
V_wrt D  -------->  V_wrt D
             T̂
The fact that N is not the zero matrix means that it cannot be similar to
the zero matrix, because the zero matrix is similar only to itself. Thus if N
were to be similar to a diagonal matrix D then D would have at least one
nonzero entry on its diagonal.
The crucial point is that a power of N is the zero matrix, specifically N²
is the zero matrix. This implies that for any map n represented by N with
respect to some B, B, the composition n ∘ n is the zero map. This in turn
implies that any matrix representing n with respect to some B̂, B̂ has a
square that is the zero matrix. But for a diagonal matrix D, the entries
of D² are the squares of the entries of D, so if D is not the zero matrix
then D² is not the zero matrix either. Thus N is not diagonalizable.
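To make the argument concrete, here is a numpy sketch with an assumed nilpotent matrix (the handout's N is defined on an earlier slide; this stand-in just has the same key property, N² = 0).

```python
import numpy as np

# An assumed example of a nonzero matrix whose square is the zero matrix.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(N @ N)                  # the zero matrix

# Any matrix similar to N also squares to zero:
# (P N P^-1)^2 = P N^2 P^-1 = 0.
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])    # an arbitrary nonsingular P
M = P @ N @ np.linalg.inv(P)
print(M @ M)                  # the zero matrix again (up to rounding)

# But squaring a diagonal matrix squares its entries, so a nonzero
# diagonal matrix never squares to zero.
D = np.diag([3.0, 0.0])
print(D @ D)                  # diag(9, 0), not the zero matrix
```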
2.4 Lemma A transformation t is diagonalizable if and only if there is a basis
B = ⟨β⃗1, …, β⃗n⟩ and scalars λ1, …, λn such that t(β⃗i) = λi · β⃗i for each i.
Proof Consider a diagonal representation matrix.

                                                 ( λ1  ⋯  0  )
Rep_B,B(t) = ( Rep_B(t(β⃗1))  ⋯  Rep_B(t(β⃗n)) ) = (  ⋮  ⋱  ⋮  )
                                                 (  0  ⋯  λn )

             t
V_wrt B  -------->  V_wrt B
             D
We want λ1 and λ2 making these true.

( 4   1 ) β⃗1 = λ1 · β⃗1        ( 4   1 ) β⃗2 = λ2 · β⃗2
( 0  −1 )                       ( 0  −1 )

That is, we want scalars x such that

( 4 − x     1    ) ( b1 )   ( 0 )
(   0    −1 − x  ) ( b2 ) = ( 0 )

has solutions b1, b2 ∈ C that are not both zero (the zero vector is not an
element of any basis). Rewrite that as a linear system.

(4 − x) · b1 +           b2 = 0
              (−1 − x) · b2 = 0
This system has a nontrivial solution exactly when x = −1 or x = 4, so the
matrix is diagonalizable to

D = ( −1  0 )
    (  0  4 )

where this is a basis.

B = ⟨ ( −1 ), ( 1 ) ⟩
      (  5 )  ( 0 )
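We can verify this diagonalization numerically (a numpy sketch): with the basis vectors of B as the columns of S, the representation S^(−1) T S should come out diagonal.

```python
import numpy as np

T = np.array([[4.0, 1.0],
              [0.0, -1.0]])

# Columns are the basis vectors of B.
S = np.array([[-1.0, 1.0],
              [ 5.0, 0.0]])

D = np.linalg.inv(S) @ T @ S
print(np.round(D))   # diag(-1, 4)
```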
Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors
3.1 Definition A transformation t : V → V has a scalar eigenvalue λ if there
is a nonzero eigenvector ζ⃗ ∈ V such that t(ζ⃗) = λ · ζ⃗.

3.5 Definition A square matrix T has a scalar eigenvalue λ associated with
the nonzero eigenvector ζ⃗ if T ζ⃗ = λ · ζ⃗.
Example The matrix

D = ( 4  0 )
    ( 0  2 )

has an eigenvalue λ1 = 4 and a second eigenvalue λ2 = 2. The first is true
because an associated eigenvector is e⃗1.

( 4  0 ) ( 1 ) = 4 · ( 1 )
( 0  2 ) ( 0 )       ( 0 )
We want to find scalars x such that T ζ⃗ = x ζ⃗ for some nonzero ζ⃗. Bring the
terms to the left side.

(  0  5  7 ) ( z1 )       ( z1 )   ( 0 )
( −2  7  7 ) ( z2 )  − x  ( z2 ) = ( 0 )        (∗)
( −1  1  4 ) ( z3 )       ( z3 )   ( 0 )
This homogeneous system has nonzero solutions if and only if the matrix is
singular, that is, has a determinant of zero.
Some computation gives the determinant and its factors.

    | 0 − x    5      7   |
0 = |  −2    7 − x    7   | = −x³ + 11x² − 38x + 40 = −(x − 5)(x − 4)(x − 2)
    |  −1      1    4 − x |

so the eigenvalues are 5, 4, and 2.
For the eigenvalue 5, specialize equation (∗) for x = 5. Gauss’s Method
gives this solution set; its nonzero elements are the eigenvectors.

V5 = { ( 1 ) z2 | z2 ∈ C }
       ( 1 )
       ( 0 )
Similarly, to find the eigenvectors associated with the eigenvalue of 4,
specialize equation (∗) for x = 4.

( −4  5  7 ) ( z1 )   ( 0 )
( −2  3  7 ) ( z2 ) = ( 0 )
( −1  1  0 ) ( z3 )   ( 0 )

Gauss’s Method gives

V4 = { ( −7 ) z3 | z3 ∈ C }
       ( −7 )
       (  1 )

and specializing for x = 2 gives this.

V2 = { (  1 ) z3 | z3 ∈ C }
       ( −1 )
       (  1 )
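As a check (a numpy sketch), the eigenvalues and representative eigenvectors from the solution sets can be verified directly against T ζ⃗ = λ · ζ⃗.

```python
import numpy as np

T = np.array([[ 0.0, 5.0, 7.0],
              [-2.0, 7.0, 7.0],
              [-1.0, 1.0, 4.0]])

# Eigenvalues found from the characteristic polynomial.
evals = sorted(np.round(np.linalg.eigvals(T).real, 6).tolist())
print(evals)   # [2.0, 4.0, 5.0]

# One representative eigenvector from each solution set.
for lam, v in [(5.0, [1.0, 1.0, 0.0]),
               (4.0, [-7.0, -7.0, 1.0]),
               (2.0, [1.0, -1.0, 1.0])]:
    assert np.allclose(T @ np.array(v), lam * np.array(v))
```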
Example To find the eigenvalues and associated eigenvectors for the matrix

T = ( 3  1 )
    ( 1  3 )

compute the determinant of T − xI and factor.

| 3 − x    1   |
|   1    3 − x | = x² − 6x + 8 = (x − 2)(x − 4)
Example For this upper triangular matrix the determinant is the product of
the diagonal entries.

    | 2 − x    1      0   |
0 = |   0    3 − x    1   | = (3 − x)(2 − x)²
    |   0      0    2 − x |
The v⃗(k+1) term vanishes. Then the induction hypothesis gives that
c1(λ(k+1) − λ1) = 0, …, ck(λ(k+1) − λk) = 0. The eigenvalues are distinct so
the coefficients c1, …, ck are all 0. With that we are left with the equation
0⃗ = c(k+1) v⃗(k+1) so c(k+1) is also 0. QED
Example This matrix from above has three eigenvalues, 5, 4, and 2.

T = (  0  5  7 )
    ( −2  7  7 )
    ( −1  1  4 )

Picking a nonzero vector from each eigenspace we get this linearly
independent set (which is a basis because it has three elements).

{ ( 1 ), ( −14 ), ( −1/2 ) }
  ( 1 )  ( −14 )  (  1/2 )
  ( 0 )  (   2 )  ( −1/2 )
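A numerical check of the independence claim (a numpy sketch): putting the three vectors in as columns gives a matrix with nonzero determinant.

```python
import numpy as np

# The three chosen eigenvectors, as columns.
M = np.array([[1.0, -14.0, -0.5],
              [1.0, -14.0,  0.5],
              [0.0,   2.0, -0.5]])

det = np.linalg.det(M)
print(det)   # nonzero, so the set is linearly independent
```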
{ ( x ) = ( cos(t) ) | 0 ≤ t < π }
  ( y )   ( sin(t) )
Angles
Example This plane transformation

( x ) ↦ (    2x    )
( y )   ( 2x + 2y )

is a skew.
As we move through the unit half circle on the left, the transformation has
varying effects on the vectors. The dilations vary, that is, different vectors
get their length multiplied by different factors, and they are turned through
varying angles. The next slide gives examples.
The prior slide’s vector from the left, shown in red, is dilated by a factor
of 2√2 and rotated counterclockwise by π/4 ≈ 0.78 radians.

( 1 ) ↦ ( 2 )
( 0 )   ( 2 )
The orange vector is dilated by a factor of
|(2 cos(π/6), 2 cos(π/6) + 2 sin(π/6))| = √(7 + 2√3) ≈ 3.23 and rotated by
about 0.48 radians.

( cos(π/6) ) ↦ (        2 cos(π/6)        )
( sin(π/6) )   ( 2 cos(π/6) + 2 sin(π/6) )
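These dilation factors and rotation angles can be computed directly (a numpy sketch, with the skew from the example above hard-coded).

```python
import numpy as np

def effect(v):
    """Dilation factor and rotation angle of the skew (x, y) |-> (2x, 2x + 2y)."""
    w = np.array([2 * v[0], 2 * v[0] + 2 * v[1]])
    dilation = np.linalg.norm(w) / np.linalg.norm(v)
    rotation = np.arctan2(w[1], w[0]) - np.arctan2(v[1], v[0])
    return dilation, rotation

d1, r1 = effect(np.array([1.0, 0.0]))          # red vector: 2*sqrt(2), pi/4
d2, r2 = effect(np.array([np.cos(np.pi / 6),   # orange vector: ~3.23, ~0.48
                          np.sin(np.pi / 6)]))
print(round(d1, 2), round(r1, 2))
print(round(d2, 2), round(r2, 2))
```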
On the graph below the horizontal axis is the angle of a vector from the
upper half unit circle, while the vertical axis is the angle through which
that vector is rotated.

( x ) ↦ (    2x    )
( y )   ( 2x + 2y )
This plots the angle of each vector in the upper half unit circle against
the angle through which it is rotated.

( x ) ↦ ( −x )
( y )   ( 2y )
Plotting the angle of each vector in the upper half unit circle against the
angle through which it is rotated

( x ) ↦ (  x + 2y )
( y )   ( 3x + 4y )

gives that one vector gets a rotation of 0 radians, while another gets a
rotation of π radians.
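Those two special vectors are exactly the eigenvectors: with a positive eigenvalue the image points the same way (rotation 0), and with a negative eigenvalue it points the opposite way (rotation π). A numpy sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
evals, evecs = np.linalg.eig(A)

rotations = []
for lam, v in zip(evals, evecs.T):
    w = A @ v
    r = abs(np.arctan2(w[1], w[0]) - np.arctan2(v[1], v[0])) % (2 * np.pi)
    rotations.append(float(r))

# One eigenvalue is positive, the other negative: (5 +/- sqrt(33))/2.
print(sorted(np.round(evals.real, 3).tolist()))   # approximately [-0.372, 5.372]
print(sorted(round(r, 3) for r in rotations))     # rotations 0 and pi
```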