A Proof of The Jordan Normal Form Theorem
The Jordan normal form theorem states that any matrix is similar to a block-diagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it in the following way:
Jordan normal form theorem. For any finite-dimensional vector space V and any linear operator A : V → V, there exist
• a decomposition of V into a direct sum of invariant subspaces
V = V1 ⊕ V2 ⊕ . . . ⊕ Vk,
• a basis of each Vi with respect to which the matrix of the restriction of A to Vi is a single Jordan block J(λi)
for some λi (which may coincide or be different for different i). The dimensions of these subspaces and the coefficients λi are determined uniquely up to a permutation.
Indeed, it is clear that we can get a basis for the direct sum by joining together bases for the summands; in such a basis the matrix of A is exactly of the required block-diagonal form.
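Before proving the theorem, one can experiment with it computationally. Here is a minimal sketch, assuming the SymPy library (the sample matrix is an arbitrary example); SymPy's jordan_form returns a transition matrix P and the Jordan matrix J with A = P J P⁻¹.

    from sympy import Matrix

    A = Matrix([[ 5,  4,  2,  1],
                [ 0,  1, -1, -1],
                [-1, -1,  3,  0],
                [ 1,  1, -1,  2]])

    P, J = A.jordan_form()      # J is block-diagonal with Jordan blocks
    print(J)
    assert A == P * J * P.inv() # exact rational arithmetic, so == is safe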
Fix a basis for U1 and a basis for U2. Joining them together, we get a basis for V = U1 ⊕ U2. When we write down the matrix of any linear operator A with respect to this basis, we get a block matrix
( A11 A12 )
( A21 A22 );
its splitting into blocks corresponds to the way our basis is split into two parts.
Let us formulate two important facts which are immediate from the definition of the matrix of a linear operator.
1. A12 = 0 if and only if U2 is invariant;
2. A21 = 0 if and only if U1 is invariant.
Thus, this matrix is block-triangular if and only if one of the subspaces is
invariant, and is block-diagonal if and only if both subspaces are invariant.
This admits an obvious generalisation to the case of a larger number of summands in the direct sum.
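These facts are easy to check in coordinates. A small sketch, assuming SymPy, with an arbitrary block upper-triangular example (A21 = 0), verifying that U1 is invariant:

    from sympy import Matrix

    A = Matrix([[1, 2, 5, 6],
                [3, 4, 7, 8],
                [0, 0, 9, 1],
                [0, 0, 2, 3]])   # A21 = 0

    # U1 = span(e1, e2): the j-th column of A is the image of the j-th
    # basis vector, and its last two coordinates are its U2-components.
    for j in range(2):
        assert A[2, j] == 0 and A[3, j] == 0   # so A(U1) ⊂ U1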
Generalised eigenspaces
The first step of the proof is to decompose our vector space into the direct
sum of invariant subspaces where our operator has only one eigenvalue.
Definition 3. Let N1(λ) = Ker(A − λ Id), N2(λ) = Ker(A − λ Id)^2, . . . , Nm(λ) = Ker(A − λ Id)^m, . . . Clearly,
N1(λ) ⊂ N2(λ) ⊂ . . . ⊂ Nm(λ) ⊂ . . .
Lemma 1. If Nk(λ) = Nk+1(λ), then Nk+l(λ) = Nk(λ) for all l ≥ 1.
Indeed, assume the contrary, and let l be the smallest positive integer for which Nk+l(λ) ≠ Nk+l+1(λ). Take a vector v with
v ∈ Nk+l+1(λ), v ∉ Nk+l(λ),
that is
(A − λ Id)^{k+l+1}(v) = 0, (A − λ Id)^{k+l}(v) ≠ 0.
Put w = (A − λ Id)^l(v). Obviously, we have
(A − λ Id)^{k+1}(w) = 0, (A − λ Id)^k(w) ≠ 0,
so w ∈ Nk+1(λ) but w ∉ Nk(λ), which contradicts Nk(λ) = Nk+1(λ).
Lemma 2. If Nk(λ) = Nk+1(λ), then Ker(A − λ Id)^k ∩ Im(A − λ Id)^k = {0}.
Indeed, assume there is a vector v ∈ Ker(A − λ Id)^k ∩ Im(A − λ Id)^k. This means that (A − λ Id)^k(v) = 0 and that there exists a vector w such that v = (A − λ Id)^k(w). It follows that (A − λ Id)^{2k}(w) = 0, so w ∈ Ker(A − λ Id)^{2k} = N2k(λ). But from the previous lemma we know that N2k(λ) = Nk(λ), so w ∈ Ker(A − λ Id)^k. Thus, v = (A − λ Id)^k(w) = 0, which is what we need.
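One can watch the chain N1(λ) ⊂ N2(λ) ⊂ . . . stabilise in an example. A sketch assuming SymPy, with a single 3 × 3 Jordan block:

    from sympy import Matrix, eye

    A = Matrix([[2, 1, 0],
                [0, 2, 1],
                [0, 0, 2]])              # one eigenvalue, λ = 2
    B = A - 2 * eye(3)

    print([len((B**m).nullspace()) for m in range(1, 6)])
    # [1, 2, 3, 3, 3]: as soon as two consecutive dimensions agree,
    # the chain is constant, exactly as Lemma 1 predicts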
Lemma 3. V = Ker(A − λ Id)^k ⊕ Im(A − λ Id)^k.
Indeed, consider the direct sum of these two subspaces (the sum is direct, since by the previous lemma the intersection is {0}). It is a subspace of V of dimension dim Ker(A − λ Id)^k + dim Im(A − λ Id)^k. Let A′ = (A − λ Id)^k. Earlier we proved that for any linear operator its rank and the dimension of its kernel sum up to the dimension of the vector space where it acts. Since rk A′ = dim Im A′, we have
dim Ker(A − λ Id)^k + dim Im(A − λ Id)^k = dim Ker A′ + rk A′ = dim V,
so this direct sum is a subspace of V of full dimension, and therefore coincides with V.
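Lemma 3 is also easy to verify computationally. A sketch assuming SymPy, on an arbitrary matrix with eigenvalues 2 and 5:

    from sympy import Matrix, eye

    A = Matrix([[2, 1, 0],
                [0, 2, 0],
                [0, 0, 5]])
    Bk = (A - 2 * eye(3))**2          # here k = 2: N2(2) = N3(2) = ...

    ker, im = Bk.nullspace(), Bk.columnspace()
    M = Matrix.hstack(*(ker + im))
    assert M.rank() == 3              # the two subspaces together fill out V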
Jordan basis for a nilpotent operator
Iterating Lemma 3 over the eigenvalues of A, we decompose V into a direct sum of invariant subspaces on each of which A has a single eigenvalue λ, so that A − λ Id is nilpotent there. It therefore remains to construct a Jordan basis for a nilpotent operator B acting on a vector space V. Let us modify slightly the notation we used in the previous section; put N1 = Ker B, N2 = Ker B^2, . . . , Nm = Ker B^m, . . . We have Nk = Nk+1 = Nk+2 = . . . = V for some k.
To make our proof neater, we shall use the following definition.
Definition 4. For a vector space V and a subspace U ⊂ V, we say that
a sequence of vectors e1 , . . . , el is a basis of V relative to U if any vector
v ∈ V can be uniquely represented in the form c1 e1 + c2 e2 + . . . + cl el + u,
where c1 , . . . , cl are coefficients, and u ∈ U. In particular, the only linear
combination of e1 , . . . , el that belongs to U is the trivial combination (all
coefficients are equal to zero).
Example 1. The usual notion of a basis is contained in the new notion
of a relative basis: a usual basis of V is a basis relative to U = {0}.
Definition 5. We say that a sequence of vectors e1 , . . . , el is linearly
independent relative to U if the only linear combination of e1 , . . . , el that
belongs to U is the trivial combination (all coefficients are equal to zero).
Exercise 1. Any sequence of vectors that is linearly independent relative
to U can be extended to a basis relative to U.
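In computations, relative linear independence reduces to a rank condition: e1, . . . , el are linearly independent relative to U exactly when appending them to a basis of U raises the rank by l. A sketch assuming SymPy (the function name is ours):

    from sympy import Matrix

    def independent_relative_to(vectors, U_basis):
        U = Matrix.hstack(*U_basis)
        return Matrix.hstack(*vectors, U).rank() == U.rank() + len(vectors)

    e1, e2, e3 = Matrix([1, 0, 0]), Matrix([0, 1, 0]), Matrix([0, 0, 1])
    assert independent_relative_to([e1], [e2, e3])           # e1 ∉ span(e2, e3)
    assert not independent_relative_to([e2 + e3], [e2, e3])  # lies in U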
Now we are going to prove our statement, constructing the required basis in k steps. First, find a basis of V = Nk relative to Nk−1. Let e1, . . . , es be the vectors of this basis.
Lemma 5. The vectors e1, . . . , es, B(e1), . . . , B(es) are linearly independent relative to Nk−2.
Indeed, assume that
c1 e1 + . . . + cs es + d1 B(e1) + . . . + ds B(es) ∈ Nk−2.
Since Nk−2 ⊂ Nk−1 and B(e1), . . . , B(es) ∈ Nk−1 (because e1, . . . , es ∈ Nk), we get c1 e1 + . . . + cs es ∈ Nk−1, so c1 = . . . = cs = 0 (e1, . . . , es form a basis relative to Nk−1). Our combination thus becomes d1 B(e1) + . . . + ds B(es) ∈ Nk−2, that is,
B^{k−2}(d1 B(e1) + . . . + ds B(es)) = B^{k−1}(d1 e1 + . . . + ds es) = 0,
so
d1 e1 + . . . + ds es ∈ Nk−1,
and we deduce that d1 = . . . = ds = 0 (e1, . . . , es form a basis relative to Nk−1), so the lemma follows.
Now we extend this collection of vectors by vectors f1, . . . , ft which together with B(e1), . . . , B(es) form a basis of Nk−1 relative to Nk−2. In exactly the same way one can prove
Lemma 6. The vectors e1, . . . , es, B(e1), . . . , B(es), B^2(e1), . . . , B^2(es), f1, . . . , ft, B(f1), . . . , B(ft) are linearly independent relative to Nk−3.
We continue that extension process until we end up with a usual basis of V of the following form:
e, B(e), B^2(e), . . . , B^{k−1}(e)    (e = e1, . . . , es),
f, B(f), . . . , B^{k−2}(f)    (f = f1, . . . , ft),
. . .
g    (for each vector g added at the last step),
where the first line contains a vector from Nk, a vector from Nk−1, . . . , a vector from N1, the second one contains a vector from Nk−1, a vector from Nk−2, . . . , a vector from N1, . . . , and the last one contains just a vector from N1.
To get from this basis a Jordan basis, we just re-number the basis vectors. Note that the vectors
v1 = B^{k−1}(e), v2 = B^{k−2}(e), . . . , vk = e
(a line of the table above, read from right to left) form a “thread” of vectors for which B(v1) = 0, B(vi) = vi−1 for i > 1, which are precisely the formulas for the action of a Jordan block with eigenvalue 0. Arranging all vectors in threads like that, we obtain a Jordan basis.
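Here is how the construction plays out in a small example, as a sketch assuming SymPy: B is an arbitrary nilpotent matrix with one thread of length 2 and one of length 1, so one e-vector and one f-vector suffice.

    from sympy import Matrix

    B = Matrix([[1, -1, 0],
                [1, -1, 0],
                [0,  0, 0]])          # B**2 = 0, so k = 2

    e = Matrix([1, 0, 0])             # a basis of N2 = V relative to N1 = Ker B
    f = Matrix([0, 0, 1])             # together with B(e), a basis of N1

    # Threads (B(e), e) and (f), re-numbered into a Jordan basis:
    P = Matrix.hstack(B * e, e, f)
    print(P.inv() * B * P)            # Jordan blocks of sizes 2 and 1, eigenvalue 0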
Remark 1. Note that if we denote by md the number of Jordan blocks of size d, we have
m1 + m2 + . . . + mk = dim N1,
m2 + . . . + mk = dim N2 − dim N1,
. . .
mk = dim Nk − dim Nk−1,
so the numbers md are determined by the dimensions of the subspaces Nd. In particular, the number of Jordan blocks of each size does not depend on any choices made in the construction, which proves the uniqueness statement of the theorem.
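Subtracting consecutive equations gives md = 2 dim Nd − dim Nd−1 − dim Nd+1 (with dim N0 = 0), so the block sizes can be read off from kernel dimensions alone. A sketch assuming SymPy:

    from sympy import Matrix

    B = Matrix([[0, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])        # nilpotent: two Jordan blocks of size 2

    n = B.shape[0]
    dims = [0] + [len((B**m).nullspace()) for m in range(1, n + 2)]  # dims[m] = dim Nm
    m_d = [2 * dims[d] - dims[d - 1] - dims[d + 1] for d in range(1, n + 1)]
    print(m_d)                        # [0, 2, 0, 0]: two blocks of size 2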