
Theorem 0.1. Suppose V is a complex vector space and T ∈ L(V).

If the
distinct eigenvalues of T are λ1 , . . . , λk , then

V = G(λ1 ) ⊕ · · · ⊕ G(λk ).

Proof. The proof is by induction on M = dim(V). If M = 1, then every nonzero vector in V is an eigenvector of T , so V = G(λ1 ). Suppose the result holds whenever dim(V) < M . Since T has an eigenvalue λ1 , we have

V = null(T − λ1 I)^M ⊕ U, U = range(T − λ1 I)^M .

Suppose v ∈ U . Then v = (T − λ1 I)^M u for some u ∈ V, and

T v = T (T − λ1 I)^M u = (T − λ1 I)^M T u ∈ range(T − λ1 I)^M .

Thus U is an invariant subspace for T . Since dim null(T − λ1 I)^M ≥ 1, either
V = null(T − λ1 I)^M or we can apply the induction hypothesis to U .
In case the induction hypothesis applies, 1 ≤ dim(U ) < M , TU : U → U
has distinct eigenvalues µ1 , . . . , µJ , and

U = G(µ1 ) ⊕ · · · ⊕ G(µJ ).

Since each generalized eigenvector of TU in U is also a generalized eigenvector of T in V, each µj is one of the λi . Also, no generalized eigenvector in G(λ1 ) is in U , since G(λ1 ) = null(T − λ1 I)^M and null(T − λ1 I)^M ∩ U = {0}.
We need to show that if w ∈ G(λm ) for m > 1, then w ∈ G(λm , TU ).
Express w as
w = v1 + u, v1 ∈ G(λ1 ), u ∈ U,
and
u = u1 + · · · + uJ , uj ∈ G(µj , TU ).
Then
0 = v1 + u1 + · · · + uJ − w.
Since generalized eigenvectors with distinct eigenvalues are linearly indepen-
dent, and w ∈ G(λm , T ) for m > 1, it must be that v1 = 0, w = u ∈ U , so
w ∈ G(λm , TU ). This implies

U = G(λ2 ) ⊕ · · · ⊕ G(λk ),

and thus the conclusion of the theorem.
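As a concrete numerical illustration of Theorem 0.1 (a sketch using an arbitrarily chosen matrix, not an example from the text), one can compute dim G(λ) = M − rank((T − λI)^M ) for each eigenvalue and check that these dimensions add up to dim(V):

```python
import numpy as np

# Arbitrary example: eigenvalue 2 with algebraic multiplicity 2,
# eigenvalue 5 with algebraic multiplicity 1.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
M = T.shape[0]  # M = dim(V)

def gen_eigenspace_dim(T, lam, M):
    """dim G(lam) = dim null (T - lam*I)^M = M - rank((T - lam*I)^M)."""
    N = np.linalg.matrix_power(T - lam * np.eye(M), M)
    return M - np.linalg.matrix_rank(N)

dims = [gen_eigenspace_dim(T, lam, M) for lam in (2.0, 5.0)]
print(dims, sum(dims) == M)  # [2, 1] True: V = G(2) ⊕ G(5)
```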

1 The characteristic polynomial
Thanks to Theorem 0.1 we can find a basis for V by picking a basis (vk,1 , . . . , vk,m(k) ) for each G(λk ). If the distinct eigenvalues of T are λ1 , . . . , λK , construct a basis for V consisting of the basis for G(λ1 ), followed by the basis for G(λ2 ), and so on. Since each G(λk ) is an invariant subspace for T , the matrix M (T ) for T with respect to this basis has the following form. If v is a basis vector in G(λk ), then T v is a linear combination of G(λk ) basis vectors. That is, M (T ) is block diagonal,

M (T ) =
    [ A1  0   0   ···  0  ]
    [ 0   A2  0   ···  0  ]
    [ ⋮   ⋮        ⋱   ⋮  ]
    [ 0   0   0   ···  AK ]

with the blocks not sharing any rows or columns.
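Such a block diagonal matrix can be assembled in code with scipy.linalg.block_diag; the blocks below are hypothetical examples chosen for illustration, not taken from the text:

```python
import numpy as np
from scipy.linalg import block_diag

A1 = np.array([[2.0, 1.0],
               [0.0, 2.0]])   # hypothetical block for an eigenvalue λ1 = 2
A2 = np.array([[5.0]])        # hypothetical block for an eigenvalue λ2 = 5
MT = block_diag(A1, A2)       # the blocks share no rows or columns
print(MT)
```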


We also know that since T : G(λk ) → G(λk ), there is a basis of G(λk )
for which the matrix is in upper triangular form, with the eigenvalues of
T : G(λk ) → G(λk ) on the diagonal. With such a basis, each Ak will have
the form

Ak =
    [ λk  ∗   ···  ∗  ]
    [ 0   λk  ···  ∗  ]
    [ ⋮   ⋮   ⋱    ⋮  ]
    [ 0   0   ···  λk ]
Suppose dk = dim G(λk ). Then Ak is dk × dk , and G(λk ) = null(T − λk I)^dk . We say that dk is the algebraic multiplicity of the eigenvalue λk .
Define the characteristic polynomial of T to be

p(λ) = (λ − λ1 )^d1 · · · (λ − λK )^dK .

Notice that p(λ) has leading coefficient 1 and degree d1 + · · · + dK = dim(V).


Since each block Ak satisfies (Ak − λk I)^dk = 0, the factor (T − λk I)^dk of p(T ) annihilates G(λk ), and we have the following result.

Theorem 1.1. (Cayley-Hamilton) Each linear map T on a finite-dimensional
complex vector space satisfies its characteristic polynomial: p(T ) = 0.
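As a numerical sanity check of Theorem 1.1 (a sketch with an arbitrarily chosen matrix, assuming NumPy), np.poly returns the characteristic polynomial coefficients of a matrix, and evaluating that polynomial at T gives the zero matrix up to round-off:

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = T.shape[0]
# Coefficients of p(λ), highest degree first, leading coefficient 1.
coeffs = np.poly(T)
# Evaluate p(T) = sum_k coeffs[k] * T^(n-k).
pT = sum(c * np.linalg.matrix_power(T, n - k) for k, c in enumerate(coeffs))
print(np.allclose(pT, 0))  # True: p(T) = 0
```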

The Jordan form refines the block diagonal matrix structure.

Theorem 1.2. For every linear map T on a finite dimensional complex
vector space, there is a basis such that the matrix of T with respect to the
new basis is block diagonal,

M (T ) =
    [ J1  0   ···  0  ]
    [ 0   J2  ···  0  ]
    [ ⋮   ⋮   ⋱    ⋮  ]
    [ 0   0   ···  JK ]

with blocks all having an eigenvalue on the diagonal, ones on the first super-
diagonal, and zeros otherwise,

Jk =
    [ λm  1   0   ···  0  ]
    [ 0   λm  1   ···  0  ]
    [ ⋮        ⋱   ⋱   ⋮  ]
    [ 0   0   ···  λm  1  ]
    [ 0   0   ···  0   λm ]

(Several blocks may have the same eigenvalue.)
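A computer algebra system can produce the Jordan form of Theorem 1.2 directly. The sketch below uses SymPy's Matrix.jordan_form; the example matrix is an arbitrary choice, not taken from the text:

```python
import sympy as sp

# Arbitrary example: eigenvalue 2 with one Jordan block of size 2,
# eigenvalue 5 with one Jordan block of size 1.
T = sp.Matrix([[2, 1, 1],
               [0, 2, 0],
               [0, 0, 5]])
P, J = T.jordan_form()  # T = P * J * P**(-1), J block diagonal Jordan matrix
sp.pprint(J)
```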
