
MAA 201 2022-2023

Reduction of endomorphisms

Exercises - Week X - Nilpotent endomorphism and Jordan-Chevalley decomposition

Let E be a finite-dimensional vector space over a field K and let f ∈ L(E) be an endomorphism of E.

An application of the Jordan-Chevalley decomposition


Exercise 1.
Let A ∈ Mn(C). Then, the power series

e^A = Σ_{k=0}^{∞} (1/k!) A^k

converges and is called the exponential of A.


1. If A is diagonalisable or if A is nilpotent, explain how to compute the exponential of A.
2. Assume that A, B ∈ Mn(C) commute. Prove that e^{A+B} = e^A e^B.
3. By using the Jordan-Chevalley decomposition, describe how to compute the
exponential of a matrix in Mn (C).
Remark : You can assume that the power series appearing in this exercise are
convergent.

Solution:
1. If A is diagonalisable, we can write the change of basis formula A = P diag(λ1, . . . , λn) P^{-1}, with P ∈ GLn(C), and then A^k = P diag(λ1^k, . . . , λn^k) P^{-1}.
Thus

e^A = Σ_{k=0}^{∞} (1/k!) P diag(λ1^k, . . . , λn^k) P^{-1} = P ( Σ_{k=0}^{∞} (1/k!) diag(λ1^k, . . . , λn^k) ) P^{-1} = P diag(e^{λ1}, . . . , e^{λn}) P^{-1}.

If N is nilpotent, then N^n = 0, thus e^N = Σ_{0 ≤ k < n} (1/k!) N^k, a finite sum.

2. If AB = BA, then (A + B)^k = Σ_{0 ≤ l ≤ k} (k choose l) A^l B^{k−l}.
Thus

e^{A+B} = Σ_{k=0}^{∞} (1/k!) (A + B)^k = Σ_{k=0}^{∞} Σ_{0 ≤ l ≤ k} (1/l!) A^l · (1/(k−l)!) B^{k−l}.

By the Cauchy product formula, we get that

e^{A+B} = ( Σ_{l=0}^{∞} (1/l!) A^l ) ( Σ_{k=0}^{∞} (1/k!) B^k ) = e^A e^B.

3. By the Jordan-Chevalley decomposition, A ∈ Mn (C) is equal to the sum of a diagonalisable


matrix D and a nilpotent matrix N such that D and N commute. Then, by the result of
question 2, we have
e^A = e^D e^N
and the computation of eD and eN is the result of the first question.
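As an illustration (not part of the exercise), the following sympy sketch carries out this computation on a small example, reading D and N off the Jordan form; the example matrix and the helper names are our own choices.

```python
import sympy as sp

# Example matrix (our own choice): not diagonalisable, single eigenvalue 2.
A = sp.Matrix([[2, 1],
               [0, 2]])

# Jordan form A = P J P^{-1}; splitting J into its diagonal part and its strictly
# upper-triangular part gives the Jordan-Chevalley decomposition A = D + N.
P, J = A.jordan_form()
D = P * sp.diag(*J.diagonal()) * P.inv()     # diagonalisable part
N = A - D                                    # nilpotent part
assert D * N == N * D                        # the two parts commute

# e^D from the eigenvalues (question 1), e^N as a finite sum (N is nilpotent).
n = A.shape[0]
expD = P * sp.diag(*[sp.exp(l) for l in J.diagonal()]) * P.inv()
expN = sum((N**k / sp.factorial(k) for k in range(n)), sp.zeros(n))
expA = sp.simplify(expD * expN)

print(expA)
print(sp.simplify(A.exp() - expA))           # zero matrix: agrees with sympy's built-in exp
```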

Stable subspaces and some nilpotent endomorphisms


Exercise 2.
Recall that a subspace F of E is said to be stable with respect to f if f (F ) ⊂ F .
Let P ∈ K[X] be a polynomial. Prove that Ker(P (f )) and Image(P (f )) are stable
with respect to f .

Solution: Recall that we have f ◦ P (f ) = P (f ) ◦ f .


Let x be in KerP (f ). We have P (f )(f (x)) = f (P (f )(x)) = f (0) = 0, and hence f (x) ∈ KerP (f ).
This proves that KerP (f ) is stable with respect to f .
Let y = P (f )(z) be in Image(P (f )). We have f (y) = f (P (f )(z)) = P (f )(f (z)) ∈ Image(P (f )).
This proves that ImageP (f ) is stable with respect to f .

Exercise 3.
Suppose that Pf splits in K[X], that is, Pf = ∏_i (X − λi)^{mi} for some λi ∈ K (this is always the case for K = C).
1. Prove that there exist stable subspaces Uλi such that E = ⊕_i Uλi and f|Uλi = λi idUλi + ni, where ni is a nilpotent endomorphism of Uλi.
2. Choose an adapted basis with respect to the above decomposition. Prove that the matrix of f with respect to this basis is a block diagonal matrix with diagonal blocks of dimension dim Uλi.

Solution:
1. By the "lemme des noyaux", we have

E = Ker(Pf(f)) = ⊕_i Ker(f − λi idE)^{mi}.

Let Uλi be Ker(f − λi idE)^{mi}. As we have (f|Uλi − λi idUλi)^{mi} = 0 (by definition of the kernel), ni = f|Uλi − λi idUλi is nilpotent. Hence, we indeed get f|Uλi = λi idUλi + ni with ni nilpotent.
By exercise 2, the subspaces Uλi are stable with respect to f.

2. Let B = (B1, . . . , Bl) be an adapted basis for the above decomposition.
Then we have

MatB(f) = [ MatB1(f|Uλ1)        0         · · ·        0
                  0        MatB2(f|Uλ2)   · · ·        0
                  ⋮               ⋮          ⋱          ⋮
                  0               0        · · ·  MatBl(f|Uλl) ]
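To see this block-diagonal shape on a concrete matrix, here is a small sympy sketch (not part of the sheet; the example matrix is our own choice). It assembles an adapted basis from bases of the generalised eigenspaces Ker(A − λ·I)^m.

```python
import sympy as sp

# Example matrix (our own choice) with eigenvalues 1 and 2, neither of them diagonalisable.
A = sp.Matrix([[1, 1, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 2, 1],
               [0, 0, 0, 2]])
n = A.shape[0]

# Adapted basis: concatenate bases of the U_lambda = Ker(A - lambda*I)^m
# (m = algebraic multiplicity suffices; m = n would also work).
columns = []
for lam, mult in A.eigenvals().items():
    columns += ((A - lam * sp.eye(n)) ** mult).nullspace()

P = sp.Matrix.hstack(*columns)
print(P.inv() * A * P)     # block diagonal, one block per eigenvalue
```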

Exercise 4.
Suppose that f is a nilpotent endomorphism such that f^{n−1} ≠ 0_{End(E)}.
1. Prove that Ker(f^{n−1}) ≠ Ker(f^n).
2. Let u be in E − Ker(f^{n−1}). Prove that B = (f^{n−1}(u), f^{n−2}(u), . . . , f(u), u) is a basis of E.
3. Compute the matrix of f in the basis B.
4. In the standard basis, suppose that f : R^3 → R^3 is represented by

A = [ 0 1 2
      0 0 3
      0 0 0 ].

Use the basis B to find the desired matrix.
Solution:
1. By exercise 3 of sheet 7, we have f^n = 0. Hence Ker(f^n) = E, and as f^{n−1} ≠ 0, Ker(f^{n−1}) ≠ E.
2. Let u be in E − Ker(f^{n−1}). B is a family of n elements in an n-dimensional space, so we only need to prove that they are linearly independent.
Suppose that Σ_i λi f^i(u) = 0. By contradiction, suppose that there is an i0 such that i0 is the smallest integer such that λ_{i0} is non-zero.
Then, applying f^{n−i0−1} to the equality, we get Σ_i λi f^{n−i0+i−1}(u) = 0. As f^k = 0 for k ≥ n, and as λi = 0 for i < i0, we get λ_{i0} f^{n−1}(u) = 0. As u ∉ Ker(f^{n−1}), we get λ_{i0} = 0, which is a contradiction. This concludes.
3. In this basis, we get

MatB(f) = [ 0 1 0 · · · 0 0
            0 0 1 · · · 0 0
            ⋮             ⋮
            0 0 0 · · · 1 0
            0 0 0 · · · 0 1
            0 0 0 · · · 0 0 ]

i.e. the matrix with 1's on the superdiagonal and 0's elsewhere.

4. Let u = ᵗ(0, 0, 1). We have Au = ᵗ(2, 3, 0) and A²u = ᵗ(3, 0, 0). Thus u ∉ Ker A² = Ker f².
Let

P = [ 3 2 0
      0 3 0
      0 0 1 ].

Then we have

A = P [ 0 1 0
        0 0 1
        0 0 0 ] P^{-1}.
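A quick sympy check of this computation (the check itself is not part of the sheet):

```python
import sympy as sp

# With u = (0, 0, 1), the basis B = (A^2 u, A u, u) brings A to the 3x3 nilpotent Jordan block.
A = sp.Matrix([[0, 1, 2],
               [0, 0, 3],
               [0, 0, 0]])
u = sp.Matrix([0, 0, 1])

P = sp.Matrix.hstack(A**2 * u, A * u, u)    # columns (3,0,0), (2,3,0), (0,0,1)
print(P.inv() * A * P)                      # [[0,1,0],[0,0,1],[0,0,0]]
```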

Part I : Structure of Nilpotent Endomorphisms
Exercise 5.
For any basis (ei )i=1,··· ,n of E, recall that f is said to have upper-triangular matrix
with respect to (ei )i=1,··· ,n if for every k = 1, · · · , n we have
f(ek) ∈ ⊕_{i=1}^{k} K·ei = Vect(e1, · · · , ek).

In other words, the coefficients ai,j of the matrix of f in this basis are zero for
i > j.
Suppose that Pf splits in K[X] (e.g. K = C), let us prove that there is a basis of E
with respect to which f has upper-triangular matrix by induction on n = dimK E.
(a) Explain why f has an eigenvalue.
(b) Let λ ∈ K be an eigenvalue of f, and set U := Image(f − λ · IdE ); prove that U is f-invariant (i.e. is stable with respect to f).
(c) Conclude. Hint: apply the induction hypothesis to U.
Solution:
(a) By Exercise 1 of TD8, the eigenvalues of f coincide with the roots of the minimal polynomial Pf of f. Since Pf splits and deg Pf ≥ 1, Pf has a root. If K = C this is a consequence of the Fundamental Theorem of Algebra.
(b) f − λ · IdE is a polynomial in f: it is the evaluation of X − λ at f. Hence, the result is implied by exercise 2.
(c) Let us prove the result by induction on n. First notice that the result holds for n = 1,
hence suppose that n > 1. Let λ ∈ K be an eigenvalue of f , if f − λ · IdE = 0 then we
are done; otherwise consider U := Image(f − λ · IdE) ≠ 0, by (b) it is an f-invariant
subspace of E. Moreover, since f − λ · IdE is not injective, it is not surjective, hence
0 < dimK U < n, then by the induction hypothesis there is a basis (ui )i=1,··· ,h such that
the matrix of f |U w.r.t (ui )i=1,··· ,h is upper-triangular where h := dimK U . Now the ui ’s
are linearly independent, we complete (ui )i=1,··· ,h into a basis (ui )i=1,··· ,n of E. We claim
that f has upper-triangular matrix w.r.t (ui )i=1,··· ,n :
— For i = 1, · · · , h, by the induction hypothesis applied to U we have f(ui) ∈ Vect(u1, · · · , ui).
— For i = h + 1, · · · , n, let us write f(ui) = (f − λ · IdE)(ui) + λ · ui; then
(f − λ · IdE )(ui ) ∈ Image(f − λ · IdE ) = U = Vect(u1 , · · · , uh ),
in consequence f (ui ) ∈ Vect(u1 , · · · , uh , ui ) ⊂ Vect(u1 , · · · , ui )
This proves the result.
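The inductive construction of this proof can also be carried out symbolically. The sketch below (our own illustration, not part of the sheet) assumes a matrix with rational eigenvalues so that sympy works exactly, and uses the pseudo-inverse only to express the restriction of A to the invariant subspace U = Image(A − λ·I) in a chosen basis of U.

```python
import sympy as sp

def triangularise(A):
    """Return an invertible P such that P^{-1} A P is upper-triangular, following
    the induction of this exercise (sketch; assumes the characteristic polynomial
    of A splits over the field of its entries)."""
    n = A.shape[0]
    lam = list(A.eigenvals())[0]              # some eigenvalue of A
    if n == 1 or A == lam * sp.eye(n):
        return sp.eye(n)
    # U = Image(A - lam*I) is A-invariant, of dimension 0 < h < n.
    M = sp.Matrix.hstack(*(A - lam * sp.eye(n)).columnspace())
    B = M.pinv() * A * M                      # matrix of A restricted to U (columns of A*M lie in U)
    MQ = M * triangularise(B)                 # induction hypothesis applied to U
    cols = [MQ[:, i] for i in range(MQ.shape[1])]
    for j in range(n):                        # complete into a basis of K^n
        e = sp.eye(n)[:, j]
        if sp.Matrix.hstack(*cols, e).rank() == len(cols) + 1:
            cols.append(e)
    return sp.Matrix.hstack(*cols)

# Example matrix (our own choice), eigenvalues 2, 3, 3, not already upper-triangular:
A = sp.Matrix([[2, 1, 1],
               [0, 2, 0],
               [-1, 1, 4]])
P = triangularise(A)
print(sp.simplify(P.inv() * A * P))           # upper-triangular
```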
Remarque 1.
An important consequence of this exercise : suppose that Pf splits in K[X] (e.g. K = C) and
write the characteristic polynomial of f as
χf = det(f − X · IdE) = ∏_{i=1}^{s} (λi − X)^{ni}

with the λi's being the distinct eigenvalues of f. Then λi appears exactly ni times on the diagonal of any upper-triangular matrix representing f. In particular:

— The dimension of the generalised eigenspace Uλi := Ker(f − λi · IdE)^{mi}, where mi is the multiplicity of the factor X − λi in Pf (see below), is equal to ni. In fact, by the "lemme des noyaux" we get the following direct decomposition (which can be regarded as a special case of the more general structure theorem for finitely generated modules over PIDs):

E = ⊕_{i=1}^{s} Uλi,

with each Uλi being f-invariant by exercise 2.


Then apply the upper-triangularisation result to each Uλi (writing f|Uλi = λi · IdUλi + ni with ni nilpotent, as in Exercise 3), and we get a basis of E such that the matrix of f under this basis is a block diagonal matrix A = Diag(A1, · · · , As), where for each i the block Ai (representing the endomorphism f|Uλi) is an upper-triangular matrix whose diagonal entries all equal λi. Calculating the characteristic polynomial of A and comparing it with χf, we see that the size of Ai is exactly ni, i.e. ni = dimK Uλi.
— The trace of f is equal to

trace(f) = Σ_{i=1}^{s} ni λi.

Recall that the trace of a square matrix is the sum of its diagonal entries. A calculation shows that trace(AB) = trace(BA) for any two n × n matrices (both equal Σ_{i,j} Aij Bji), which implies that the trace is invariant under similarity transformations and thus can be well defined for any endomorphism (i.e. it does not depend on the choice of the basis).
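A quick sympy illustration of these two facts (the matrices are arbitrary examples of our own, not part of the sheet):

```python
import sympy as sp

# trace(A) equals the sum of the eigenvalues counted with multiplicity.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 5],
               [0, 0, 3]])
eigen_sum = sum(lam * mult for lam, mult in A.eigenvals().items())
print(A.trace(), eigen_sum)                  # 7 and 7

# trace(BC) = trace(CB) for arbitrary square matrices.
B, C = sp.randMatrix(3, 3), sp.randMatrix(3, 3)
print((B * C).trace() == (C * B).trace())    # True
```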

Exercise 6.
Another proof of upper-triangularisation :
Suppose that Pf splits in K[X] (e.g. K = C), let us prove that there is a basis of
E with respect to which f has upper-triangular matrix.
(a) Use exercise 3 to explain why we can assume that f is nilpotent.
(b) Assume that f^p = 0_{L(E)}. Prove that for 1 ≤ k ≤ p, there is a subspace Fk of E such that

Ker f^k = Ker f^{k−1} ⊕ Fk.
(c) Prove that E = F1 ⊕ · · · ⊕ Fp .
(d) Prove that taking an adapted basis with respect to the previous direct sum
concludes.
Remark: we actually proved that any nilpotent endomorphism f can be represented by a strictly upper-triangular matrix, i.e. the coefficients ai,j of the matrix of f in the corresponding basis are zero for i ≥ j.

Solution:
(a) Taking the notation of exercise 3, if we prove the result for f|Uλi, then the result is proven for f.
We can therefore assume f = λ·idE + n, with n nilpotent. If we prove the result for n, then it is also true for f, as the matrix of λ·idE in any basis is λIn.

(b) As Ker f^{k−1} ⊂ Ker f^k, this is just the incomplete basis theorem.

(c) By induction, it is easy to prove that Ker f^k = F1 ⊕ · · · ⊕ Fk, beginning with F1 = Ker f.

(d) If x ∈ Ker f^k, then f(x) ∈ Ker f^{k−1}. Thus f(Fk) ⊂ F1 ⊕ · · · ⊕ F_{k−1}. Hence taking an adapted basis with respect to this direct sum gives us a strictly upper-triangular matrix.
More precisely, if e is an element of the basis in Fk, then f(e) ∈ F1 ⊕ · · · ⊕ F_{k−1}, and hence has zero coordinate along the basis vectors in Fi, for i ≥ k.
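The construction of parts (b)-(d) can be turned into a small algorithm. The following sympy sketch (our own illustration, not part of the sheet) builds a basis adapted to the flag Ker f ⊂ Ker f² ⊂ · · · and checks that the resulting matrix is strictly upper-triangular.

```python
import sympy as sp

def strictly_triangularise(A):
    """Basis adapted to Ker A ⊂ Ker A^2 ⊂ ... for a nilpotent matrix A, as in
    Exercise 6.  Returns P such that P^{-1} A P is strictly upper-triangular."""
    n = A.shape[0]
    basis = []                                  # after step k, spans Ker A^k
    k = 1
    while len(basis) < n:
        for v in (A**k).nullspace():            # complete the family into a basis of Ker A^k
            if sp.Matrix.hstack(*basis, v).rank() == len(basis) + 1:
                basis.append(v)
        k += 1
    return sp.Matrix.hstack(*basis)

# Example (our own choice): a nilpotent matrix.
A = sp.Matrix([[0, 1, 2],
               [0, 0, 3],
               [0, 0, 0]])
P = strictly_triangularise(A)
print(P.inv() * A * P)                          # strictly upper-triangular
```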

Exercise 7.
A subspace V of E is called f-cyclic if there is a vector v ∈ V such that the family of vectors {f^i(v)}_{i∈N} spans V; then v is called an f-cyclic vector of V and (f^{h−1}(v), · · · , f(v), v) is called an f-cyclic basis of V, where h := dimK V.
(a) Fix a basis (ei )i=1,··· ,n of E, can we find an endomorphism g such that
(ei )i=1,··· ,n is g-cyclic ? Given a polynomial Q ∈ K[X] of degree n, can we
find g such that χg = Q and that (ei )i=1,··· ,n is g-cyclic ?
(b) Conversely, suppose that E is f -cyclic, what is the matrix of f w.r.t an
f -cyclic basis ?
(c) Suppose that f is nilpotent. Prove that there is a family of linearly independent vectors (vi)_{i=1,··· ,k} in E, with k := dimK Ker(f), such that E admits a direct decomposition

E = ⊕_{i=1}^{k} Vi,

where Vi denotes the f-cyclic subspace of E generated by vi (i.e. vi is an f-cyclic vector of Vi), and such that (f^{di−1}(vi))_{i=1,··· ,k} constitutes a basis of Ker(f), where di := dimK Vi.

Solution:
(a) If we do not require the characteristic polynomial of g to be equal to Q, this is quite easy: consider g : ei ↦ e_{i−1} for all i > 1 and e1 ↦ en, i.e. g is given in the basis (ei)_{i=1,··· ,n} by the matrix

C((−1)^n (X^n − 1)) := [ 0 1 0 · · · 0
                         0 0 1 · · · 0
                         ⋮           ⋮
                         0 0 0 · · · 1
                         1 0 0 · · · 0 ].
If we require in addition that χg = Q with

Q = (−1)^n (X^n + a_{n−1} X^{n−1} + · · · + a1 X + a0) ∈ K[X],

then by Cayley-Hamilton g^n = −a0 · IdE − a1 · g − · · · − a_{n−1} · g^{n−1}. In order that (ei)_{i=1,··· ,n} is g-cyclic, we must have (up to reordering) g : ei ↦ e_{i−1} for all i > 1. Then we have e1 = g^{n−1}(en) and thus

g(e1) = g^n(en) = −a0·en − a1·g(en) − · · · − a_{n−1}·g^{n−1}(en) = −a_{n−1}·e1 − · · · − a1·e_{n−1} − a0·en.

Hence the matrix associated to g under the basis (ei)_{i=1,··· ,n} is

C(Q) := [ −a_{n−1}  1  0  · · ·  0
          −a_{n−2}  0  1  · · ·  0
             ⋮                ⋱
          −a1       0  0  · · ·  1
          −a0       0  0  · · ·  0 ]

C(Q) is called the companion matrix of Q (in the literature it is also commonly called the companion matrix of (−1)^n Q, so that the companion matrix is only defined for monic polynomials).
At last, let us check that χg = Q, i.e. χ_{C(Q)} = Q. To this end, we argue by induction on n. Expanding det(C(Q) − X · In) along the last row, we get

χg = χ_{C(Q)} = det(C(Q) − X · In) = (−1)^{n+1}(−a0) · det(T) + (−1)^{2n}(−X) · det(C(Q̃) − X · I_{n−1}),

where T (obtained by deleting the last row and the first column) is triangular with 1's on the diagonal, so det(T) = 1, and where the minor obtained by deleting the last row and the last column is C(Q̃) − X · I_{n−1} with Q̃ = (−1)^{n−1}(X^{n−1} + a_{n−1} X^{n−2} + · · · + a2 X + a1). By the induction hypothesis this last determinant equals Q̃, hence

χg = (−1)^n a0 + (−X)(−1)^{n−1}(a1 + a2 X + · · · + a_{n−1} X^{n−2} + X^{n−1})
   = (−1)^n (a0 + a1 X + · · · + a_{n−1} X^{n−1} + X^n) = Q.
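The identity χ_{C(Q)} = Q can also be verified symbolically on an example; the sketch below (our own, with an arbitrary choice of coefficients a0, . . . , a3) builds the companion matrix exactly as in the solution above.

```python
import sympy as sp

X = sp.symbols('X')
a = [5, -1, 0, 2]                   # arbitrary coefficients a0, a1, a2, a3 (our choice)
n = len(a)

# Companion matrix C(Q): first column (-a_{n-1}, ..., -a_1, -a_0), ones on the superdiagonal.
C = sp.zeros(n)
for i in range(n):
    C[i, 0] = -a[n - 1 - i]
    if i + 1 < n:
        C[i, i + 1] = 1

Q = (-1)**n * (X**n + sum(a[i] * X**i for i in range(n)))
print(sp.expand((C - X * sp.eye(n)).det() - Q))    # 0, i.e. chi_{C(Q)} = Q
```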



(b) Let v ≠ 0 be an f-cyclic vector of E; then (f^{n−1}(v), · · · , f(v), v) is an f-cyclic basis of E. It is clear that f(f^i(v)) = f^{i+1}(v), hence in order to write the matrix of f it remains to determine f^n(v). By Cayley-Hamilton, if we write the characteristic polynomial of f as

χf = (−1)^n (X^n + c_{n−1} X^{n−1} + · · · + c1 X + c0),

then f(f^{n−1}(v)) = f^n(v) = −c_{n−1}·f^{n−1}(v) − · · · − c1·f(v) − c0·v. Hence the matrix associated to f under the basis (f^{n−1}(v), · · · , f(v), v) is

C(χf) := [ −c_{n−1}  1  0  · · ·  0
           −c_{n−2}  0  1  · · ·  0
              ⋮                ⋱
           −c1       0  0  · · ·  1
           −c0       0  0  · · ·  0 ]

In particular, if f is nilpotent, then the matrix of f is C(χf) = C((−X)^n) = J_{0,n}, the Jordan matrix.

(c) Let us prove the result by induction on n. If f = 0, then E = Ker(f) and the result clearly holds; hence we assume that f ≠ 0. Since f is nilpotent, it is not injective, hence not surjective, so let us consider U := Image(f) ≠ 0, whose dimension is rank(f) = n − k < n. Now apply the induction hypothesis to f|U ∈ L(U): there is a family of linearly independent vectors (ui)_{i=1,··· ,h} in U such that

U = ⊕_{i=1}^{h} Ui    (1)

with Ui the f-cyclic subspace of U generated by ui, and such that (f^{bi−1}(ui))_{i=1,··· ,h} constitutes a basis of Ker(f|U) = Ker(f) ∩ U, where bi := dimK Ui. Now since ui ∈ Image(f), there is vi ∈ E such that ui = f(vi), and then f^{bi+1}(vi) = f^{bi}(ui) = 0.
Since (f^{bi}(vi))_{i=1,··· ,h} is a family of linearly independent vectors in Ker(f), we can complete it into a basis of Ker(f): f^{b1}(v1), · · · , f^{bh}(vh), v_{h+1}, · · · , vk. We claim that

E = ⊕_{i=1}^{k} Vi    (2)

with Vi the f-cyclic subspace of E generated by vi. To prove this equality, we successively check:
— The subspaces Vi form a direct sum in E. By construction, we have

di := dimK Vi = 1 + bi = 1 + dimK Ui  if i = 1, · · · , h,   and   di = 1  if i = h + 1, · · · , k.

For i ≠ j we will prove that Vi ∩ Vj = {0}; to this end it suffices to show that the vectors vi, f(vi), · · · , f^{di−1}(vi), vj, f(vj), · · · , f^{dj−1}(vj) are linearly independent.
Let (ar)_{r=0,··· ,di−1} and (bs)_{s=0,··· ,dj−1} be elements of K such that

Σ_{r=0}^{di−1} ar · f^r(vi) + Σ_{s=0}^{dj−1} bs · f^s(vj) = 0.    (3)

Applying f to this equation, we get (noting that f^{di}(vi) = 0):

Σ_{r=1}^{di−1} a_{r−1} · f^r(vi) + Σ_{s=1}^{dj−1} b_{s−1} · f^s(vj) = 0.    (4)

Since f^r(vi), f^s(vj) ∈ U when r, s > 0, the decomposition (1) implies that ar = 0 for 0 ≤ r < di − 1 and bs = 0 for 0 ≤ s < dj − 1. Then (3) reduces to the following equation:

a_{di−1} · f^{di−1}(vi) + b_{dj−1} · f^{dj−1}(vj) = 0.

But by construction (f^{dl−1}(vl))_{l=1,··· ,k} is a basis of Ker(f), hence we must have a_{di−1} = b_{dj−1} = 0. This means that Vi ∩ Vj = {0}. Hence the Vi's form a direct sum in E.

— We prove the equality (2) by a dimension argument as follows: since the Vi's form a direct sum, we have

dimK( ⊕_{i=1}^{k} Vi ) = Σ_{i=1}^{k} dimK Vi = Σ_{i=1}^{h} dimK Vi + Σ_{i=h+1}^{k} dimK Vi = Σ_{i=1}^{h} (1 + bi) + 1·(k − h)
                       = h + Σ_{i=1}^{h} dimK Ui + (k − h) = dimK U + k = rank(f) + k = (n − k) + k = n = dimK E,

hence the equality (2).


By induction we thus prove the result.
(c) Repetita. Let us give a proof of this result along the lines of exercises 4 and 6.
We will prove the following result by induction: for any linearly independent vectors v1, . . . , vl such that Ker f^k = Ker f^{k−1} ⊕ Vect(v1, . . . , vl), we can complete this family into a family of linearly independent vectors (vi)_{i=1,··· ,m} in Ker f^k, with m := dimK Ker(f), such that Ker f^k admits a direct decomposition

Ker f^k = ⊕_{i=1}^{m} Vi

where Vi denotes the f-cyclic subspace of Ker f^k generated by vi (i.e. vi is an f-cyclic vector of Vi) and (f^{di−1}(vi))_{i=1,··· ,m} constitutes a basis of Ker(f), where di := dimK Vi.

For k = 1 : there is nothing to do other than taking a basis of Ker(f ).

For k > 1 : consider w1 = f (v1 ), . . . , wl = f (vl ).


As Vect(v1, . . . , vl) ∩ Ker f^{k−1} = {0}, we have Vect(v1, . . . , vl) ∩ Ker f = {0}, thus w1, . . . , wl are linearly independent. Furthermore, Vect(v1, . . . , vl) ∩ Ker f^{k−1} = {0} implies that Vect(f(v1), . . . , f(vl)) ∩ Ker f^{k−2} = {0}.

Complete w1, . . . , wl into a family of linearly independent vectors w1, . . . , w_{l′} such that Ker f^{k−1} = Ker f^{k−2} ⊕ Vect(w1, . . . , w_{l′}). Apply the induction hypothesis to get w1, . . . , wm and Ker f^{k−1} = ⊕_{i=1}^{m} Wi.
Notice that we have Ker f^k = ( ⊕_{i=1}^{m} Wi ) ⊕ Vect(v1, . . . , vl).
Now, v1, . . . , vl, w_{l+1}, . . . , wm is a completion of the family v1, . . . , vl satisfying the conditions. In particular, if Vi denotes the f-cyclic subspace of Ker f^k generated by vi, for 1 ≤ i ≤ l, we have Vi = Wi ⊕ Kvi. This concludes the induction.

To conclude the exercise, if Ker f^p = E, starting with any family of independent vectors v1, . . . , vl such that E = Ker f^p = Ker f^{p−1} ⊕ Vect(v1, . . . , vl), we get the desired decomposition of E = Ker f^p.
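As an illustration of (c) (not part of the sheet), the Jordan form of a nilpotent matrix exhibits exactly this decomposition: one block J_{0,di} per cyclic subspace Vi, and the number of blocks equals dim Ker(f). A small sympy check on an example matrix of our own:

```python
import sympy as sp

# Nilpotent example: two Jordan chains, of lengths 3 and 2.
N = sp.Matrix([[0, 1, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 0, 0, 0],
               [0, 0, 0, 0, 1],
               [0, 0, 0, 0, 0]])
P, J = N.jordan_form()
print(J)                       # block diagonal with blocks J_{0,3} and J_{0,2}
print(len(N.nullspace()))      # 2 = number of blocks = dim Ker(N)
```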

Part II : Jordan-Chevalley Decomposition and Jordan Normal
Form
Suppose that the minimal polynomial Pf of f splits in K[X] (this condition is
always satisfied when K = C), i.e.

Pf = (X − λ1)^{m1} (X − λ2)^{m2} · · · (X − λs)^{ms},

then by the "lemme des noyaux" we get the following decomposition into f-invariant subspaces:

E = ⊕_{i=1}^{s} Uλi,

where Uλi := Ker(f − λi · IdE)^{mi} is called the generalised eigenspace for the eigenvalue λi, whose elements are called generalised eigenvectors; and the minimal polynomial of f|Uλi is P_{f|Uλi} = (X − λi)^{mi}. In consequence f can be written as the sum of two endomorphisms:

f = fss + fn

with fss given by fss|Uλi := λi · IdUλi, thus diagonalisable (semi-simple), and fn = f − fss nilpotent since fn^m = 0 for m = max(m1, · · · , ms); they are called the semi-simple part and nilpotent part of f respectively. This decomposition is unique (by simultaneous diagonalisation) in the following sense: whenever we write f as the sum of a diagonalisable endomorphism and a nilpotent endomorphism that commute, the diagonalisable one must coincide with fss and the nilpotent one must coincide with fn; this is called the Jordan-Chevalley decomposition, or the Dunford decomposition.
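A small sympy sketch of this decomposition (the example matrix is our own choice, not part of the sheet): the semi-simple part is assembled from the generalised eigenspaces, the nilpotent part is the difference, and the two commute.

```python
import sympy as sp

A = sp.Matrix([[3, 1, 0],
               [0, 3, 0],
               [1, 0, 2]])
n = A.shape[0]

# Change of basis adapted to E = ⊕ U_{lambda_i}, with U_{lambda_i} = Ker(A - lambda_i I)^n.
cols, diag_entries = [], []
for lam, mult in A.eigenvals().items():
    gen_eigenspace = ((A - lam * sp.eye(n)) ** n).nullspace()
    cols += gen_eigenspace
    diag_entries += [lam] * len(gen_eigenspace)

P = sp.Matrix.hstack(*cols)
D = P * sp.diag(*diag_entries) * P.inv()    # semi-simple part f_ss
N = A - D                                   # nilpotent part f_n
print(sp.simplify(N**n))                    # zero matrix
print(sp.simplify(D*N - N*D))               # zero matrix: the two parts commute
```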

Exercise 8.
A Jordan matrix is an upper-triangular matrix of the form
 
Jλ,t = [ λ 1 0 · · · 0
         0 λ 1 · · · 0
         ⋮       ⋱   ⋮
         0 0 0 · · · 1
         0 0 0 · · · λ ]  ∈ Mat_{t×t}(K)

for some λ ∈ K and t ∈ N∗. A Jordan basis of E for f is a basis with respect to which f has a block diagonal matrix with the blocks being Jordan matrices.
Suppose that Pf splits in K[X]; then use Exercise 7 above and the Dunford decomposition to deduce that there exists a Jordan basis for f.

Solution: Since Pf splits in K[X], we can write

Pf = ∏_{i=1}^{s} (X − λi)^{mi},

where λ1, · · · , λs are the eigenvalues of f. Then E is decomposed into generalised eigenspaces of f:

E = ⊕_{i=1}^{s} Uλi,

with Uλi = Ker(f − λi · IdE)^{mi}. Then by the Dunford decomposition the semi-simple part fss of f is given by fss|Uλi = λi · IdUλi, hence fn|Uλi = f|Uλi − λi · IdUλi and thus fn^{mi}|Uλi = 0, i.e. fn is nilpotent on Uλi. Now for every i = 1, · · · , s, apply Exercise 7(c) to fn|Uλi: then Uλi can be decomposed into fn|Uλi-cyclic subspaces, and thus E can be decomposed into fn-cyclic subspaces (they are also f-invariant). For each one, we choose an fn|Uλi-cyclic basis so that the matrix of fn is a Jordan matrix; these bases come together to form a basis of E, and under this basis the matrix associated to f is a block diagonal matrix whose blocks are Jordan matrices, i.e. it is a Jordan basis for f.
