

Primary Decomposition Theorem

Subject: Linear Algebra


Lesson: Primary Decomposition Theorem
Lesson Developer: Dr. Radha Mohan and Rajesh Singh
College/Department: Department of Mathematics,
St. Stephen's College,
University of Delhi
and
Department of Mathematics,
University of Delhi

Institute of Lifelong Learning, University of Delhi



Table of Contents

Chapter: PRIMARY DECOMPOSITION THEOREM

1. Learning Outcomes
2. Prerequisites
3. Preliminaries
3.1. Diagonalizable Operator
3.2. Minimal Polynomial
3.3. Invariant Subspaces
3.4. Direct Sums
4. Decomposition of an Operator
4.1. Primary Decomposition Theorem
4.2. Decomposition into Diagonalizable and Nilpotent
operator
5. Solved Problems
6. Applications
7. Summary
8. Exercises
9. References

" 𝑴𝒂𝒕𝒉𝒆𝒎𝒂𝒕𝒊𝒄𝒔 𝒊𝒔 𝒏𝒐𝒕 𝒋𝒖𝒔𝒕 𝒂 𝒓𝒆𝒂𝒅𝒊𝒏𝒈 𝒔𝒖𝒃𝒋𝒆𝒄𝒕. 𝒀𝒐𝒖 𝒎𝒖𝒔𝒕 𝒖𝒔𝒆 𝒚𝒐𝒖𝒓 𝒃𝒓𝒂𝒊𝒏𝒔

𝒂𝒔 𝒘𝒆𝒍𝒍 𝒂𝒔 𝒚𝒐𝒖𝒓 𝒉𝒂𝒏𝒅𝒔. 𝑻𝒂𝒄𝒌𝒍𝒆 𝒑𝒓𝒐𝒃𝒍𝒆𝒎𝒔 𝒐𝒏 𝒚𝒐𝒖𝒓 𝒐𝒘𝒏. "


1. LEARNING OUTCOMES
We believe that by the end of this chapter the reader will be well versed with the concept of decompositions and the motivation behind studying various types of decompositions, and will gain a good knowledge of primary decompositions, diagonalizable operators and nilpotent operators. We have also given an application of this concept to differential equations, to make the reader realize that by studying such abstract concepts we develop an effective, unifying approach to tackling problems from different fields.

2. PREREQUISITES
We expect that the reader knows the definition of all the terms listed below and is
familiar with all basic concepts associated with these terms.

• Vector space, basis of a vector space
• Standard ordered basis
• Linear operator
• Matrix representations of linear transformations
• Characteristic value of a linear operator
• Characteristic vector of a linear operator
• Characteristic space of a linear operator
• Characteristic polynomial of a linear operator
• Minimal polynomial of a linear operator
• Irreducible and monic polynomials
• Algebraically closed field

3. PRELIMINARIES
In this section we touch upon some concepts with which we feel the reader is already familiar, and hence we will only state definitions and results without going into the details associated with them. The idea is just to make the text self-contained. Assuming that the reader knows the above-mentioned concepts, we start with diagonalizable operators.

3.1 DIAGONALIZABLE OPERATOR


Let T be a linear operator on a finite-dimensional vector space V. We say that T is diagonalizable if there is an ordered basis β of V such that the matrix of T with respect to the basis β is a diagonal matrix, i.e.,
$$[T]_\beta = \begin{pmatrix} c_1 & 0 & \cdots & 0 \\ 0 & c_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & c_n \end{pmatrix}.$$


Note that if 𝔅′ is any other ordered basis of V, then there exists an invertible matrix P such that
$$[T]_{\mathfrak{B}'} = P^{-1}[T]_\beta P.$$
Thus, if T is diagonalizable, then the matrix of T with respect to any basis of V is similar to a diagonal matrix.

Note:
If T is a diagonalizable operator and β = {v_1, v_2, …, v_n} is an ordered basis of V such that the matrix of T with respect to β is the diagonal matrix displayed above, then c_1, c_2, …, c_n are characteristic values of T and each v_i (1 ≤ i ≤ n) is a characteristic vector of T corresponding to the characteristic value c_i, i.e.,
$$Tv_i = c_i v_i \quad \forall\, i = 1, 2, \ldots, n.$$

Theorem 3.1.1 Let T be a linear operator on a finite-dimensional vector space V over a field F. Let c_1, c_2, …, c_n be the distinct characteristic values of T and let W_i be the null space of (T − c_i I) (i.e., W_i is the characteristic space of T associated with the characteristic value c_i). The following are equivalent:
(a) T is diagonalizable.
(b) The characteristic polynomial for T is
$$f = (x - c_1)^{d_1}(x - c_2)^{d_2}\cdots(x - c_n)^{d_n},$$
where d_i = dim W_i, i = 1, 2, …, n.
(c) dim V = dim W_1 + dim W_2 + ⋯ + dim W_n.
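Criterion (c) is easy to test numerically. The following is a minimal sketch assuming numpy; the sample matrix is our own hypothetical example, not taken from the text.

```python
import numpy as np

# A quick numerical check of criterion (c), assuming numpy; the sample matrix is
# ours (hypothetical), not taken from the text.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 3.0]])
n = A.shape[0]

# distinct characteristic values (rounded to merge numerical duplicates)
c_values = np.unique(np.round(np.linalg.eigvals(A), 8))

# dim W_i = n - rank(A - c_i I), the dimension of the null space of (A - c_i I)
dims = [n - np.linalg.matrix_rank(A - c * np.eye(n)) for c in c_values]

# T is diagonalizable exactly when the dimensions of the W_i add up to dim V
print(dims, sum(dims) == n)        # [1, 2] True
```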

Theorem 3.1.2 Suppose that Tα = cα. If f is any polynomial, then f(T)α = f(c)α. In particular, if T is diagonalizable, then f(T) is diagonalizable for any polynomial f.

Definition 3.1.3 Two operators P and Q on a vector space V are said to be simultaneously diagonalizable if there exists an invertible operator S such that both S⁻¹PS and S⁻¹QS are diagonal operators.

Next we state a theorem without proof which we will require later.

Theorem 3.1.4 Let P and Q be two diagonalizable operators on a vector space V. Then P and Q commute if and only if they are simultaneously diagonalizable.

3.2 MINIMAL POLYNOMIAL


Let T be a linear operator on a finite-dimensional vector space V over the field F. The minimal polynomial for T is the (unique) monic generator of the ideal of polynomials over F which annihilate T.

The minimal polynomial p for the linear operator T is uniquely determined by these three properties:
1. p is a monic polynomial over the scalar field F.
2. p(T) = 0.
3. No polynomial over F which annihilates T has smaller degree than p.

Theorem 3.2.1 (Cayley-Hamilton) Let T be a linear operator on a finite-dimensional vector space V. If f is the characteristic polynomial for T, then f(T) = 0; in other words, the minimal polynomial divides the characteristic polynomial for T.

Theorem 3.2.2 Let T be a linear operator on an n-dimensional vector space V. Then the characteristic and minimal polynomials for T have the same zeros, except for multiplicities.
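The three defining properties above suggest a direct, if brute-force, way to compute a minimal polynomial: test the monic divisors of the characteristic polynomial that contain every irreducible factor (as Theorem 3.2.2 requires) and pick the one of least degree that annihilates the matrix. The sketch below assumes sympy and uses the matrix that appears later in Example 4.1; it is our illustration, not part of the original text.

```python
import sympy as sp
from itertools import product

x = sp.symbols('x')

def minimal_polynomial_of(A):
    """Monic divisor of the characteristic polynomial of least degree that annihilates A."""
    n = A.shape[0]
    char_factors = sp.factor_list(A.charpoly(x).as_expr())[1]   # [(irreducible factor, exponent), ...]
    best = None
    # by Theorem 3.2.2 every irreducible factor must appear at least once
    for exps in product(*[range(1, r + 1) for _, r in char_factors]):
        cand = sp.Integer(1)
        for (f, _), e in zip(char_factors, exps):
            cand *= f**e
        # evaluate cand at A by Horner's rule
        M = sp.zeros(n, n)
        for c in sp.Poly(cand, x).all_coeffs():
            M = M * A + c * sp.eye(n)
        if M == sp.zeros(n, n) and (best is None or sp.degree(cand, x) < sp.degree(best, x)):
            best = cand
    return sp.expand(best)

A = sp.Matrix([[3, 1, -1], [2, 2, -1], [2, 2, 0]])   # the matrix that appears later in Example 4.1
print(sp.factor(minimal_polynomial_of(A)))            # (x - 1)*(x - 2)**2
```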

3.3 INVARIANT SUBSPACES and DIRECT SUMS


Let V be a vector space and T a linear operator on V. If W is a subspace of V, we say that W is invariant under T if for each vector α in W the vector Tα is in W, i.e., if T(W) is contained in W.

Let W_1, …, W_k be subspaces of the vector space V. We say that W_1, …, W_k are independent if
$$\alpha_1 + \cdots + \alpha_k = 0, \qquad \alpha_i \in W_i,$$
implies that α_i = 0 for every i = 1, 2, …, k.

Lemma 3.3.1 Let V be a finite-dimensional vector space. Let W_1, …, W_k be subspaces of V and let W = W_1 + ⋯ + W_k. The following are equivalent.
(a) W_1, …, W_k are independent.
(b) For each j (1 ≤ j ≤ k), W_j ∩ Σ_{i≠j} W_i = {0}.
(c) For each j (2 ≤ j ≤ k), W_j ∩ (W_1 + ⋯ + W_{j−1}) = {0}.
(d) If 𝔅_i is an ordered basis for W_i, 1 ≤ i ≤ k, then the sequence 𝔅 = (𝔅_1, …, 𝔅_k) is an ordered basis for W.

If any (and hence all) of the conditions of the last lemma hold, then we say that the sum W = W_1 + ⋯ + W_k is direct, or that W is the direct sum of W_1, …, W_k, and we write
$$W = W_1 \oplus \cdots \oplus W_k.$$


If V is a vector space, a projection of V is a linear operator E on V such that E² = E. Suppose that E is a projection. Let R be the range of E and let N be the null space of E. Then V = R ⊕ N. The unique expression for a vector a as a sum of vectors in R and N is
$$a = Ea + (a - Ea).$$
The operator E is called the projection on R along N.

Theorem 3.3.2 If V = W_1 ⊕ ⋯ ⊕ W_k, then there exist k linear operators E_1, …, E_k on V such that
(i) each E_i is a projection (E_i² = E_i);
(ii) E_i E_j = 0 whenever i ≠ j;
(iii) I = E_1 + ⋯ + E_k;
(iv) the range of E_i is W_i.
Conversely, if E_1, …, E_k are k linear operators on V which satisfy conditions (i), (ii), and (iii), and if we let W_i be the range of E_i, then V = W_1 ⊕ ⋯ ⊕ W_k.

Theorem 3.3.3 Let T be a linear operator on a finite-dimensional vector space V. If T is diagonalizable and if c_1, …, c_k are the distinct characteristic values of T, then there exist linear operators E_1, …, E_k on V such that
(i) T = c_1 E_1 + ⋯ + c_k E_k;
(ii) I = E_1 + ⋯ + E_k;
(iii) E_i E_j = 0 whenever i ≠ j;
(iv) each E_i is a projection (E_i² = E_i);
(v) the range of E_i is the characteristic space for T associated with c_i.
Conversely, if there exist k distinct scalars c_1, …, c_k and k non-zero linear operators E_1, …, E_k which satisfy conditions (i), (ii), and (iii), then T is diagonalizable, c_1, …, c_k are the distinct characteristic values of T, and conditions (iv) and (v) are satisfied also.

Definition 3.3.4 (Algebraically Closed Field) A field F is said to be algebraically closed if every non-constant polynomial with coefficients in F has a root in F; equivalently, every non-constant f ∈ F[x] can be expressed as
$$f(x) = (x - c_1)^{r_1}(x - c_2)^{r_2}\cdots(x - c_k)^{r_k},$$
where c_1, …, c_k ∈ F and the r_i are positive integers.


4. Decomposition of an Operator
In this chapter we try to decompose a linear operator 𝑇 defined on a finite-dimensional
vector space 𝑉 into a direct sum of operators which are elementary in some sense.

We already know from the theory of invariant direct sums that if T is a diagonalizable linear operator on a finite-dimensional space V and c_1, …, c_k are the distinct characteristic values of T, then there exist linear operators E_1, …, E_k on V such that T = c_1 E_1 + ⋯ + c_k E_k, where E_i is the projection of the vector space V onto the space of characteristic vectors associated with the characteristic value c_i. Thus, if the minimal polynomial p of T can be factorized over the scalar field F into a product of distinct monic polynomials of degree 1, i.e.,
$$p = (x - c_1)(x - c_2)\cdots(x - c_k),$$
then
$$V = \mathrm{Ker}(T - c_1 I) \oplus \mathrm{Ker}(T - c_2 I) \oplus \cdots \oplus \mathrm{Ker}(T - c_k I) \quad \text{and} \quad T = c_1 E_1 + \cdots + c_k E_k,$$
where E_i is the projection of V onto the null space Ker(T − c_i I) of (T − c_i I).

But what if T is not diagonalizable? In other words, how do we decompose a linear operator T defined on a finite-dimensional vector space V whose minimal polynomial cannot be factorized over the field F into a product of distinct monic polynomials of degree 1? If T is not diagonalizable, then for the minimal polynomial p of T we have two possibilities:

(1) p = p_1^{r_1} ⋯ p_k^{r_k}, where the p_i are distinct irreducible monic polynomials over F and the r_i are positive integers;
(2) p is an irreducible monic polynomial over F.

In this chapter we will discuss the first case, where the minimal polynomial can be decomposed further over the scalar field F. In fact, we will show that if the minimal polynomial p of a linear operator is of the form
$$p = p_1^{r_1}\cdots p_k^{r_k},$$
where the p_i are distinct irreducible monic polynomials over F and the r_i are positive integers, then the vector space V is expressible as
$$V = \mathrm{Ker}\, p_1(T)^{r_1} \oplus \mathrm{Ker}\, p_2(T)^{r_2} \oplus \cdots \oplus \mathrm{Ker}\, p_k(T)^{r_k}$$
and the linear operator T is of the form T = T_1 + T_2 + ⋯ + T_k, where
$$T_i = T\big|_{\mathrm{Ker}\, p_i(T)^{r_i}} \quad \text{for each } i = 1, 2, \ldots, k.$$

Before proving the theorem, let us verify this fact by an example.


Example 4.1. Let T be the linear operator on ℝ³ given by the matrix
$$A = \begin{pmatrix} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{pmatrix}$$
with respect to the standard ordered basis of ℝ³. The characteristic polynomial of T is given by
$$\det(A - xI) = \begin{vmatrix} 3-x & 1 & -1 \\ 2 & 2-x & -1 \\ 2 & 2 & -x \end{vmatrix} = -(x^3 - 5x^2 + 8x - 4) = -(x-1)(x-2)^2.$$

Thus 1 and 2 are the two characteristic values of the operator T. Now let us find the characteristic vectors of T associated with the characteristic values 1 and 2. Consider
$$A - I = \begin{pmatrix} 2 & 1 & -1 \\ 2 & 1 & -1 \\ 2 & 2 & -1 \end{pmatrix}.$$
It is easy to observe that the rank of A − I is 2 and hence the dimension of the null space of T − I is 1. Thus the space of characteristic vectors associated with the characteristic value 1 has dimension 1. Now
$$(A - I)x = \begin{pmatrix} 2 & 1 & -1 \\ 2 & 1 & -1 \\ 2 & 2 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2x_1 + x_2 - x_3 \\ 2x_1 + x_2 - x_3 \\ 2x_1 + 2x_2 - x_3 \end{pmatrix}.$$
Clearly, the vector α = (1, 0, 2) satisfies the equation (A − I)x = 0 and hence is a characteristic vector of T associated with the characteristic value 1. Moreover, α = (1, 0, 2) spans the null space of T − I. Now consider

$$A - 2I = \begin{pmatrix} 1 & 1 & -1 \\ 2 & 0 & -1 \\ 2 & 2 & -2 \end{pmatrix}.$$
Evidently, the rank of A − 2I is 2 and hence the dimension of the null space of T − 2I is 1. Also, the vector β = (1, 1, 2) satisfies the equation (A − 2I)x = 0 and hence is a characteristic vector of T associated with the characteristic value 2.
Since (A − I)(A − 2I) ≠ 0, the minimal polynomial for T is (x − 1)(x − 2)². Also, the characteristic vectors of T do not span the vector space V, i.e., there does not exist a basis of V consisting of characteristic vectors of T. Thus T is not diagonalizable.
Now let us find the null space of (T − 2I)². Consider
$$(A - 2I)^2 = \begin{pmatrix} 1 & 1 & -1 \\ 2 & 0 & -1 \\ 2 & 2 & -2 \end{pmatrix}\begin{pmatrix} 1 & 1 & -1 \\ 2 & 0 & -1 \\ 2 & 2 & -2 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 2 & -2 & 0 \end{pmatrix}.$$
Clearly, the rank of (A − 2I)² is 1 and hence the nullity of (A − 2I)² is 2. Now
$$(A - 2I)^2 x = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 2 & -2 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_1 - x_2 \\ 0 \\ 2x_1 - 2x_2 \end{pmatrix}.$$
The vectors (0, 0, 1) and (1, 1, 0) are linearly independent and satisfy the equation (A − 2I)²x = 0. Hence the null space of (T − 2I)² is spanned by the vectors (0, 0, 1) and (1, 1, 0). The vectors α = (1, 0, 2), β = (0, 0, 1) and γ = (1, 1, 0) are linearly independent and hence form a basis of the vector space ℝ³. Thus we observe that the null space of (T − I) and the null space of (T − 2I)² together span the vector space V, i.e.,
$$V = \mathrm{Ker}(T - I) \oplus \mathrm{Ker}(T - 2I)^2. \qquad \blacksquare$$
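The computation in Example 4.1 is easy to verify numerically. The following is a minimal sketch assuming numpy; the library choice is ours, not part of the original text.

```python
import numpy as np

# Numerical verification of Example 4.1, assuming numpy: the null spaces of (A - I)
# and (A - 2I)^2 are as claimed, and together they span R^3.
A = np.array([[3.0, 1.0, -1.0],
              [2.0, 2.0, -1.0],
              [2.0, 2.0,  0.0]])
I3 = np.eye(3)

alpha = np.array([1.0, 0.0, 2.0])                                    # spans Ker(A - I)
beta, gamma = np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0])   # span Ker((A - 2I)^2)

assert np.allclose((A - I3) @ alpha, 0)
assert np.allclose(np.linalg.matrix_power(A - 2 * I3, 2) @ beta, 0)
assert np.allclose(np.linalg.matrix_power(A - 2 * I3, 2) @ gamma, 0)

# The three vectors are linearly independent, so R^3 = Ker(A - I) ⊕ Ker((A - 2I)^2).
P = np.column_stack([alpha, beta, gamma])
print(np.linalg.matrix_rank(P))     # 3
```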

4.1 PRIMARY DECOMPOSITION THEOREM


In this section we will obtain the decomposition of a linear operator defined on a finite-dimensional vector space when its minimal polynomial can be decomposed over the scalar field into a product of powers of irreducible monic polynomials.

Theorem 4.1.1 (Primary Decomposition Theorem) Let T be a linear operator on a finite-dimensional vector space V over the field F. Let p be the minimal polynomial for T,
$$p = p_1^{r_1}\cdots p_k^{r_k},$$
where the p_i are distinct irreducible monic polynomials over F and the r_i are positive integers. Let W_i = Ker p_i(T)^{r_i} be the null space of p_i(T)^{r_i}, i = 1, …, k. Then
(i) V = W_1 ⊕ ⋯ ⊕ W_k;
(ii) each W_i is invariant under T;
(iii) if T_i is the operator induced on W_i by T, then the minimal polynomial for T_i is p_i^{r_i}.

Proof: We will prove the theorem as follows:
1. For each i (1 ≤ i ≤ k), we will find a polynomial h_i such that E_i = h_i(T) is the projection of V onto the space W_i and E_1 + ⋯ + E_k = I.
2. V = W_1 ⊕ ⋯ ⊕ W_k.
For each i (1 ≤ i ≤ k), consider the polynomial
$$f_i = \prod_{j \neq i} p_j^{r_j}, \quad \text{i.e., } p = f_i\, p_i^{r_i}.$$
Then clearly the polynomials f_1, …, f_k are relatively prime (no p_i divides every f_j, since p_i does not divide f_i). Therefore there exist polynomials g_1, …, g_k such that
$$f_1 g_1 + \cdots + f_k g_k = 1.$$


Also, it is easy to observe that for i ≠ j we have f_i f_j = pq, where
$$q = \prod_{m \neq i, j} p_m^{r_m}.$$
Now let h_i = f_i g_i and E_i = h_i(T) = f_i(T) g_i(T), i = 1, …, k.
Since h_1 + ⋯ + h_k = 1, we have h_1(T) + ⋯ + h_k(T) = I. Hence
$$E_1 + \cdots + E_k = I.$$
Again, as p is the minimal polynomial of T, we have p(T) = 0. Thus for i ≠ j,
$$E_i E_j = h_i(T)h_j(T) = f_i(T)g_i(T)f_j(T)g_j(T) = p(T)q(T)g_i(T)g_j(T) = 0,$$
using f_i f_j = pq and p(T) = 0.
Now since E_1 + ⋯ + E_k = I and E_i E_j = 0 for all i ≠ j, we have E_i² = E_i for all i (1 ≤ i ≤ k).
Let W_i be the null space of p_i(T)^{r_i}, i = 1, …, k. We claim that Range E_i = W_i. Let v ∈ Range E_i. Then v = E_i(u) for some u ∈ V, and therefore
$$E_i(v) = E_i^2(u) = E_i(u) = v.$$
Consider
$$p_i(T)^{r_i} v = p_i(T)^{r_i} E_i v = p_i(T)^{r_i} f_i(T) g_i(T) v = p(T) g_i(T) v = 0.$$
Thus v ∈ W_i and hence Range E_i ⊆ W_i. Conversely, let v ∈ W_i. Then p_i(T)^{r_i} v = 0. Since p_i^{r_i} divides f_j (and hence f_j g_j) for every j ≠ i, we have f_j g_j = s · p_i^{r_i} for some polynomial s. Thus
$$E_j v = f_j(T) g_j(T) v = s(T)\, p_i(T)^{r_i} v = 0 \quad \text{for all } j \neq i.$$
Also, since E_1 + ⋯ + E_k = I and E_j(v) = 0 for all j ≠ i, we have E_i(v) = v. Hence v ∈ Range E_i and consequently W_i ⊆ Range E_i. Thus Range E_i = W_i.
Since Range E_i = W_i for each i (1 ≤ i ≤ k) and E_1 + ⋯ + E_k = I, we must have
$$V = W_1 + \cdots + W_k.$$
To see that the sum is direct, suppose that some v ∈ V is written in two ways,
$$v = E_1(u_1) + \cdots + E_k(u_k) = E_1(w_1) + \cdots + E_k(w_k).$$
Then for any i (1 ≤ i ≤ k), applying E_i and using E_i E_j = 0 for j ≠ i,
$$E_i(v) = E_i^2(u_i) = E_i(u_i) \quad \text{and} \quad E_i(v) = E_i^2(w_i) = E_i(w_i).$$
Thus E_i(u_i) = E_i(w_i) for all i (1 ≤ i ≤ k). Hence each element of V is uniquely expressible as a sum of elements of the W_i's. Therefore
$$V = W_1 \oplus \cdots \oplus W_k.$$
Now, since
$$TE_i = T f_i(T) g_i(T) = f_i(T) g_i(T)\, T = E_i T,$$
we have
$$T(W_i) = T(\mathrm{Range}\, E_i) \subseteq \mathrm{Range}\, E_i = W_i.$$
It follows that each W_i is invariant under T.
Finally, consider the operator T_i induced on W_i by T. Since
$$p_i(T)^{r_i} v = 0 \quad \text{for all } v \in W_i = \mathrm{Ker}\, p_i(T)^{r_i},$$
we have p_i(T_i)^{r_i} = 0 on W_i. Thus the minimal polynomial g of T_i divides the polynomial p_i^{r_i}.
Conversely, as g is the minimal polynomial of T_i, we have g(T_i) = 0, i.e., g(T) vanishes on W_i. Since f_i(T) vanishes on each W_j with j ≠ i (because p_j^{r_j} divides f_i) and V = W_1 ⊕ ⋯ ⊕ W_k, it follows that g(T) f_i(T) = 0. Thus the polynomial g f_i is divisible by the minimal polynomial p of T. In particular, p_i^{r_i} divides g f_i. But then p_i^{r_i} divides g, since p_i^{r_i} is relatively prime to f_i. Therefore the minimal polynomial g of T_i is p_i^{r_i}. ∎
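The construction in the proof is entirely computational, so it can be carried out explicitly. Below is a sympy sketch (our illustration, not part of the original text) for the operator of Example 4.1, whose minimal polynomial is p = (x − 1)(x − 2)², so that p_1 = x − 1 (r_1 = 1) and p_2 = x − 2 (r_2 = 2).

```python
import sympy as sp

# A sympy sketch of the construction in the proof (our illustration), for the
# operator of Example 4.1 with minimal polynomial p = (x - 1)(x - 2)^2.
x = sp.symbols('x')
A = sp.Matrix([[3, 1, -1], [2, 2, -1], [2, 2, 0]])
I3 = sp.eye(3)

f1 = (x - 2)**2      # f1 = p / p1^{r1}
f2 = (x - 1)         # f2 = p / p2^{r2}

# Bezout identity g1*f1 + g2*f2 = 1 (f1 and f2 are relatively prime)
g1, g2, h = sp.gcdex(f1, f2, x)
assert sp.simplify(h) == 1

def poly_at(expr, M):
    """Evaluate a scalar polynomial at a square matrix via Horner's rule."""
    result = sp.zeros(*M.shape)
    for c in sp.Poly(expr, x).all_coeffs():
        result = result * M + c * sp.eye(M.shape[0])
    return result

E1 = poly_at(f1 * g1, A)     # projection onto W1 = Ker(A - I)
E2 = poly_at(f2 * g2, A)     # projection onto W2 = Ker((A - 2I)^2)

assert E1 + E2 == I3                             # E1 + E2 = I
assert E1 * E2 == sp.zeros(3, 3)                 # E1 E2 = 0
assert E1 * E1 == E1 and E2 * E2 == E2           # each E_i is a projection
```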

Corollary 4.1.2 If E_1, …, E_k are the projections associated with the primary decomposition of T, then each E_i is a polynomial in T; accordingly, if a linear operator U commutes with T, then U commutes with each of the E_i, i.e., each subspace W_i is invariant under U.

Proof: We already know that E_i = h_i(T) = f_i(T) g_i(T), so E_i is a polynomial in T. Now, for a linear operator U which commutes with T,
$$U E_i = U h_i(T) = h_i(T)\, U = E_i U,$$
since U commutes with every polynomial in T. Thus U commutes with each of the E_i, and therefore each subspace W_i = Range E_i is invariant under U. ∎

Do You Know?
We have not used the fact that V is finite-dimensional while constructing the proof. Thus it is not necessary for the vector space V to be finite-dimensional for the primary decomposition theorem to hold. Also, the proof of parts (i) and (ii) uses only that p(T) = 0, so we can drop the condition that p be the minimal polynomial and still obtain parts (i) and (ii).

Thus, we can rewrite the primary decomposition theorem as:


Theorem: Let T be any linear operator on an arbitrary vector space V over F, and suppose there is a monic polynomial p such that p(T) = 0, where
$$p = p_1^{r_1}\cdots p_k^{r_k}$$
with the p_i distinct irreducible monic polynomials over F and the r_i positive integers. Let W_i be the null space of p_i(T)^{r_i}, i = 1, …, k. Then
(i) V = W_1 ⊕ ⋯ ⊕ W_k;
(ii) each W_i is invariant under T.

Remark:
1. If T is a linear operator as in Theorem 4.1.1, then T can be decomposed as
$$T = T_1 + \cdots + T_k,$$
where T_i = T E_i, i = 1, …, k.

2. Since E_i is the projection of V onto W_i and W_i is the null space of p_i(T)^{r_i}, we have, for each i (1 ≤ i ≤ k),
$$p_i(T)^{r_i} E_i = 0.$$
Thus, in particular, if p_i = x − c_i, i = 1, …, k, then
$$(T - c_i I)^{r_i} E_i = 0.$$
Also, since p is the minimal polynomial of T, for each i (1 ≤ i ≤ k) the exponent r_i is the least such integer.

3. If T is a diagonalizable operator and c_1, …, c_k are the distinct characteristic values of T, then for each i (1 ≤ i ≤ k) we have r_i = 1 and p_i = x − c_i, so
$$(T - c_i I) E_i = 0, \quad \text{i.e., } T E_i = c_i E_i.$$
Thus the linear operator T is expressible as a linear combination of the projection operators E_i:
$$T = c_1 E_1 + \cdots + c_k E_k.$$

Example 4.1.3 Let T be the linear operator on ℝ³ which is represented in the standard ordered basis by the matrix
$$A = \begin{pmatrix} 6 & -3 & -2 \\ 4 & -1 & -2 \\ 10 & -5 & -3 \end{pmatrix}.$$
The characteristic polynomial of T is given by
$$\det(A - xI) = \begin{vmatrix} 6-x & -3 & -2 \\ 4 & -1-x & -2 \\ 10 & -5 & -3-x \end{vmatrix} = -(x^3 - 2x^2 + x - 2) = -(x^2+1)(x-2).$$
Since the characteristic polynomial is a product of distinct irreducible monic factors, the minimal polynomial for T is p(x) = (x² + 1)(x − 2). Here p_1(x) = x² + 1 and p_2(x) = x − 2. Let W_i be the null space of p_i(T), i = 1, 2, i.e.,
$$W_1 = \mathrm{Ker}(T^2 + I) \quad \text{and} \quad W_2 = \mathrm{Ker}(T - 2I).$$


Then by the primary decomposition theorem, V = W_1 ⊕ W_2. Now consider
$$A^2 + I = \begin{pmatrix} 4 & -5 & 0 \\ 0 & -1 & 0 \\ 10 & -10 & -1 \end{pmatrix} + \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 5 & -5 & 0 \\ 0 & 0 & 0 \\ 10 & -10 & 0 \end{pmatrix}.$$
Thus
$$(A^2 + I)x = 0 \;\Rightarrow\; \begin{pmatrix} 5 & -5 & 0 \\ 0 & 0 & 0 \\ 10 & -10 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0 \;\Rightarrow\; x_1 = x_2.$$
The vectors α = (0, 0, 1) and β = (1, 1, 0) are linearly independent and satisfy the equation (T² + I)x = 0. Hence the null space W_1 of T² + I is spanned by the vectors (0, 0, 1) and (1, 1, 0). Now let us calculate the null space W_2 of T − 2I. Consider
$$(T - 2I)x = 0 \;\Rightarrow\; \begin{pmatrix} 4 & -3 & -2 \\ 4 & -3 & -2 \\ 10 & -5 & -5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 4x_1 - 3x_2 - 2x_3 \\ 4x_1 - 3x_2 - 2x_3 \\ 10x_1 - 5x_2 - 5x_3 \end{pmatrix} = 0 \;\Rightarrow\; 2x_1 = x_3 \;\text{ and }\; x_2 = 0.$$
Clearly, the vector γ = (1, 0, 2) satisfies the equation (T − 2I)x = 0 and hence spans the null space W_2 of T − 2I. Thus 𝔅_1 = {(0, 0, 1), (1, 1, 0)} is a basis for W_1 and 𝔅_2 = {(1, 0, 2)} is a basis for W_2. Now,
$$A\alpha = A\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -2 \\ -2 \\ -3 \end{pmatrix} = -3\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} - 2\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + 0\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix},$$
$$A\beta = A\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \\ 5 \end{pmatrix} = 5\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + 3\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + 0\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix},$$
$$A\gamma = A\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 4 \end{pmatrix} = 0\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + 0\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + 2\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}.$$
Let T_i be the operator induced on W_i by T and let A_i be the matrix of T_i with respect to the basis 𝔅_i. Then
$$A_1 = \begin{pmatrix} -3 & 5 \\ -2 & 3 \end{pmatrix} \quad \text{and} \quad A_2 = \begin{pmatrix} 2 \end{pmatrix},$$
so that the matrix of T with respect to the ordered basis 𝔅 = (𝔅_1, 𝔅_2) is the block diagonal matrix with blocks A_1 and A_2. █
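This block-diagonal picture can be checked numerically. The following is a short sketch assuming numpy; the library choice is ours, not part of the original text.

```python
import numpy as np

# Numerical check of Example 4.1.3, assuming numpy: in the ordered basis
# B = (alpha, beta, gamma) adapted to W1 ⊕ W2, the matrix of T is block diagonal.
A = np.array([[6.0, -3.0, -2.0],
              [4.0, -1.0, -2.0],
              [10.0, -5.0, -3.0]])

alpha, beta = np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0])   # basis of W1 = Ker(T^2 + I)
gamma = np.array([1.0, 0.0, 2.0])                                    # basis of W2 = Ker(T - 2I)

P = np.column_stack([alpha, beta, gamma])        # change-of-basis matrix
print(np.round(np.linalg.inv(P) @ A @ P, 10))
# [[-3.  5.  0.]
#  [-2.  3.  0.]
#  [ 0.  0.  2.]]   -> blocks A1 = [[-3, 5], [-2, 3]] and A2 = [2]
```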


4.2 DECOMPOSITION INTO DIAGONALIZABLE AND NILPOTENT OPERATOR

Definition 4.2.1 A linear operator N defined on a vector space V is said to be nilpotent if there exists a positive integer k such that N^k = 0.

Example Consider the vector space
$$V(\mathbb{R}) = \{\, a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0 \;:\; a_i \in \mathbb{R} \text{ for each } i \,\}$$
of all real polynomials of degree less than or equal to n. The differential operator D = d/dx on the vector space V(ℝ) is a nilpotent operator. In fact, for every p(x) ∈ V(ℝ),
$$D^{n+1} p(x) = \frac{d^{\,n+1} p(x)}{dx^{\,n+1}} = 0, \quad \text{i.e., } D^{n+1} = 0.$$
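A concrete matrix picture of this example is easy to write down; the numpy sketch below is our illustration, not part of the original text.

```python
import numpy as np

# With respect to the basis {1, x, x^2, ..., x^n} of V(R), the differential operator
# D is represented by the matrix N with N[i, i+1] = i + 1, and N^(n+1) = 0.
n = 4
N = np.zeros((n + 1, n + 1))
for i in range(n):
    N[i, i + 1] = i + 1            # D(x^(i+1)) = (i + 1) x^i

print(np.linalg.matrix_power(N, n))       # not zero: D^n sends x^n to n!
print(np.linalg.matrix_power(N, n + 1))   # the zero matrix, i.e. D^(n+1) = 0
```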

FAQ
Question: Let V be a finite-dimensional vector space over a field F. Does there exist a natural number m such that N^m = 0 for all nilpotent operators N on V?
Answer: If V is an n-dimensional vector space and N is a nilpotent operator on V, then there exists a positive integer k ≤ n such that N^k = 0. In particular, we always have N^n = 0, so m = n works.

Theorem 4.2.2 Let N be a nilpotent operator on a finite-dimensional vector space V. Then N is a diagonalizable operator if and only if N = 0.

Proof: If N = 0, then obviously N is diagonalizable. Conversely, let N be diagonalizable and let p be the minimal polynomial of N. Since N is a nilpotent operator, there exists a least positive integer m such that N^m = 0. Then p divides x^m, so p = x^j for some j ≤ m, and by the minimality of m we must have p = x^m. Now, since N is diagonalizable, the minimal polynomial p of N has no repeated roots. Consequently, m = 1. It follows that N = 0. ∎

Definition 4.2.3 Let T be a linear operator on a finite-dimensional vector space V over the field F. Let p be the minimal polynomial for T,
$$p = (x - c_1)^{r_1}\cdots(x - c_k)^{r_k},$$
where c_1, …, c_k are the distinct characteristic values of T with multiplicities r_1, …, r_k respectively. Let W_i be the null space of (T − c_i I)^{r_i} and let E_i be the projection of V onto the space W_i. Consider the operator D on V given by
$$D = c_1 E_1 + \cdots + c_k E_k.$$
Then obviously D is a diagonalizable operator (cf. Theorem 3.3.3). We call the operator D the diagonalizable part of T.


Theorem 4.2.4 Let T be a linear operator on a finite-dimensional vector space V over the field F. Let p be the minimal polynomial for T,
$$p = (x - c_1)^{r_1}\cdots(x - c_k)^{r_k},$$
where c_1, …, c_k are the distinct characteristic values of T with multiplicities r_1, …, r_k respectively. Then we can express the linear operator T as
$$T = D + N,$$
where D is a diagonalizable operator on the vector space V and N is a nilpotent operator defined on V.

Proof: Let W_i be the null space of (T − c_i I)^{r_i} and let E_i be the projection of V onto the space W_i. Consider the operator D on V given by
$$D = c_1 E_1 + \cdots + c_k E_k.$$
Then obviously D is a diagonalizable operator (cf. the invariant direct sum Theorem 3.3.3). Now consider the operator N = T − D. Since
$$T = T E_1 + \cdots + T E_k \qquad [\text{since } I = E_1 + \cdots + E_k \text{ and } T = TI]$$
and
$$D = c_1 E_1 + \cdots + c_k E_k,$$
we have
$$N = (T - c_1 I)E_1 + \cdots + (T - c_k I)E_k.$$
Using the facts that each E_i commutes with T (each E_i is a polynomial in T), E_i² = E_i for i = 1, …, k, and E_i E_j = 0 whenever i ≠ j, we get
$$N^2 = (T - c_1 I)^2 E_1 + \cdots + (T - c_k I)^2 E_k,$$
and in general, for r ≥ 1,
$$N^r = (T - c_1 I)^r E_1 + \cdots + (T - c_k I)^r E_k.$$
Also, since (T − c_i I)^{r_i} E_i = 0, i = 1, …, k, we have
$$N^r = 0 \quad \text{whenever } r \geq \max_{1 \leq i \leq k} r_i.$$
Thus N is a nilpotent operator. Hence the theorem holds. ∎

Remark: Since the E_i's are polynomials in T, both the diagonalizable operator D and the nilpotent operator N obtained in Theorem 4.2.4 are polynomials in T. Also, since the E_i's commute with T, the operator N commutes with T. Hence, by Corollary 4.1.2, N commutes with each of the E_i. Again, since D is a linear combination of the E_i's, the operator D commutes with N.

Theorem 4.2.5 (Uniqueness) Let T be a linear operator on the finite-dimensional vector space V over the field F. Suppose that the minimal polynomial for T decomposes over F into a product of linear polynomials. Then there is a diagonalizable operator D on V and a nilpotent operator N on V such that
(i) T = D + N;
(ii) DN = ND.
The diagonalizable operator D and the nilpotent operator N are uniquely determined by (i) and (ii), and each of them is a polynomial in T.

Proof: As already observed in Theorem 4.2.4 and the remark following it, we can write T = D + N, where D is a diagonalizable operator and N is a nilpotent operator, and further, D and N are polynomials in T which commute with each other.

Now suppose there exist a diagonalizable operator D′ and a nilpotent operator N′ such that T = D′ + N′ and D′N′ = N′D′. We claim that D = D′ and N = N′.

Consider
$$D'T = D'(D' + N') = D'D' + D'N' = D'D' + N'D' = (D' + N')D' = TD'.$$
Thus D′ commutes with T. Similarly, we can show that N′ commutes with T. Consequently, D′ and N′ commute with any polynomial in T; in particular, D′ and N′ commute with the operators D and N. Thus the operators D, D′, N and N′ all commute with one another.

Since D and D′ are both diagonalizable and they commute, by Theorem 3.1.4 D and D′ are simultaneously diagonalizable. Hence D − D′ is a diagonalizable operator. Since T = D + N = D′ + N′, we have D − D′ = N′ − N. Thus (N′ − N) is a diagonalizable operator.

Since N and N′ are nilpotent operators, we can choose a positive integer m such that N^j = 0 and N′^j = 0 for all j ≥ m. Again, as N and N′ commute, we can expand (N′ − N)^{2m} binomially as
$$(N' - N)^{2m} = \sum_{j=0}^{2m} \binom{2m}{j} N'^{\,2m-j}(-N)^j = 0,$$
since in every term either 2m − j ≥ m or j ≥ m, so N′^{2m−j} = 0 or N^j = 0.
Thus (N′ − N) is a nilpotent operator. Since (N′ − N) is both nilpotent and diagonalizable, by Theorem 4.2.2 we have N′ − N = 0. Hence D = D′ and N = N′. ∎

Corollary 4.2.6 Let 𝑉 be a finite-dimensional vector space over an algebraically closed


field 𝐹. Then every linear operator 𝑇 on 𝑉 can be written as the sum of a diagonalizable
operator 𝐷 and a nilpotent operator 𝑁 which commute. These operators 𝐷 and 𝑁 are
unique and each is a polynomial in 𝑇.

Proof: For any linear operator T, the minimal polynomial for T decomposes over the algebraically closed field F into a product of linear polynomials. The result therefore follows from Theorem 4.2.5. ∎

Corollary 4.2.7 Let V be a finite-dimensional complex vector space. Then every linear operator T on V can be expressed as the sum of a diagonalizable operator D and a nilpotent operator N which commute. These operators D and N are unique and each is a polynomial in T.

Proof: The field of complex numbers is algebraically closed, so Corollary 4.2.6 applies. ∎

Example 4.2.8 Again consider the linear operator T of Example 4.1, given by the matrix
$$A = \begin{pmatrix} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{pmatrix}.$$
Its minimal polynomial is (x − 1)(x − 2)². Here
$$W_1 = \mathrm{Ker}(T - I) = \mathrm{span}\{(1, 0, 2)\} \quad \text{and} \quad W_2 = \mathrm{Ker}(T - 2I)^2 = \mathrm{span}\{(0, 0, 1), (1, 1, 0)\}.$$
Since
$$(1, 0, 0) = 1\,(1, 0, 2) - 2\,(0, 0, 1) + 0\,(1, 1, 0),$$
$$(0, 1, 0) = -1\,(1, 0, 2) + 2\,(0, 0, 1) + 1\,(1, 1, 0),$$
$$(0, 0, 1) = 0\,(1, 0, 2) + 1\,(0, 0, 1) + 0\,(1, 1, 0),$$
the projection E_1 of V onto the space W_1 acts on the standard ordered basis as follows:
$$E_1(1, 0, 0) = 1\,(1, 0, 2) = 1\,(1, 0, 0) + 0\,(0, 1, 0) + 2\,(0, 0, 1),$$
$$E_1(0, 1, 0) = -(1, 0, 2) = -(1, 0, 0) + 0\,(0, 1, 0) - 2\,(0, 0, 1),$$
$$E_1(0, 0, 1) = 0\,(1, 0, 2) = 0\,(1, 0, 0) + 0\,(0, 1, 0) + 0\,(0, 0, 1).$$
Therefore
$$E_1 = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 2 & -2 & 0 \end{pmatrix}, \qquad \text{and similarly } E_2 = I - E_1 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 0 \\ -2 & 2 & 1 \end{pmatrix}.$$
Thus the diagonalizable part D of T is given by
$$D = 1\,E_1 + 2\,E_2 = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 2 & -2 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 2 & 0 \\ 0 & 2 & 0 \\ -4 & 4 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ -2 & 2 & 2 \end{pmatrix}.$$
Hence the nilpotent part N of T is given by
$$N = A - D = \begin{pmatrix} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{pmatrix} - \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ -2 & 2 & 2 \end{pmatrix} = \begin{pmatrix} 2 & 0 & -1 \\ 2 & 0 & -1 \\ 4 & 0 & -2 \end{pmatrix}.$$
Observe that N² = 0. █
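The defining properties of the decomposition in Example 4.2.8 can be verified numerically; the following is a short sketch assuming numpy, not part of the original text.

```python
import numpy as np

# Numerical check of Example 4.2.8: T = D + N with D diagonalizable, N nilpotent, DN = ND.
A = np.array([[3.0, 1.0, -1.0],
              [2.0, 2.0, -1.0],
              [2.0, 2.0,  0.0]])
E1 = np.array([[1.0, -1.0, 0.0],
               [0.0,  0.0, 0.0],
               [2.0, -2.0, 0.0]])
E2 = np.eye(3) - E1                          # E1 + E2 = I

D = 1 * E1 + 2 * E2
N = A - D

assert np.allclose(D + N, A)
assert np.allclose(N @ N, 0)                 # N is nilpotent (N^2 = 0)
assert np.allclose(D @ N, N @ D)             # D and N commute
# D acts as the scalar c_i on Range(E_i), so D is diagonalizable
assert np.allclose(D @ E1, 1 * E1) and np.allclose(D @ E2, 2 * E2)
```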

Thus, from all the above results we see that to study linear operators whose minimal polynomial is of the first type (a product of powers of linear factors), it is enough to focus on nilpotent operators. But for linear operators with minimal polynomial of the second type, we need some approach other than characteristic values and vectors. For example, consider the linear operator T on ℝ² given by the matrix
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
with respect to the standard ordered basis of ℝ². The characteristic polynomial of T is given by
$$\det(A - xI) = \begin{vmatrix} -x & -1 \\ 1 & -x \end{vmatrix} = x^2 + 1.$$
Clearly, this polynomial has no real roots and hence the operator T has no characteristic value. How do we deal with such operators? In the next chapters we will show that these two problems can be handled simultaneously.

Now let us look at some of the problems based on the discussions above. These
problems will take us deeper into this concept of decomposition.

5. SOLVED PROBLEMS

Problem 5.1. Let V be a finite-dimensional vector space over an algebraically closed field F. Let T be a linear operator defined on V and let D be the diagonalizable part of T. Prove that if g is any polynomial with coefficients in F, then the diagonalizable part of g(T) is g(D).

Proof: Let N be the nilpotent operator such that
$$T = D + N,$$
where D is the diagonalizable part of T and D and N commute. Since D and N commute, by a binomial expansion of each power of T = D + N we can write
$$g(T) = g(D) + N\,h(D, N)$$
for some polynomial expression h(D, N) in D and N.

Obviously, g(D) is a diagonalizable operator, as D is diagonalizable. Also, since N is nilpotent, there exists a positive integer k such that N^k = 0. But then we have
$$\big(N\,h(D, N)\big)^k = N^k\, h(D, N)^k = 0 \qquad (\text{since } D \text{ and } N \text{ commute}).$$
Therefore N h(D, N) is a nilpotent operator. Thus
$$g(T) = g(D) + N\,h(D, N),$$
where g(D) is a diagonalizable operator and N h(D, N) is a nilpotent operator, and these two operators commute (both being polynomials in the commuting operators D and N). Hence, by the uniqueness in Theorem 4.2.5, the diagonalizable part of g(T) is g(D). ∎

Problem 5.2. Let 𝑉 be a finite-dimensional vector space over the field 𝐹, and let 𝑇 be a
linear operator on 𝑉 such that 𝑟𝑎𝑛𝑘 𝑇 = 1. Prove that either 𝑇 is diagonalizable or 𝑇 is
nilpotent, not both.

Proof: Since rank T = 1, there exists x (≠ 0) ∈ V such that
$$\mathrm{Range}\, T = \{\alpha x : \alpha \in F\} = \mathrm{span}\{x\}.$$
Now if dim V = 1, then T = cI for some non-zero scalar c ∈ F (because rank T = 1), and hence T is diagonalizable and not nilpotent. Therefore let dim V > 1. Then dim Ker T ≥ 1 and hence 0 is a characteristic value of T.

From Theorem 4.2.2 it is clear that T cannot be both diagonalizable and nilpotent. If T is nilpotent, there is nothing to prove. So suppose T is not nilpotent; we show that T is diagonalizable.

Claim: T(x) ≠ 0.
Suppose, on the contrary, that T(x) = 0, and let y ∈ V be arbitrary. Since T(y) ∈ Range T, we have T(y) = αx for some α ∈ F. But then
$$T^2(y) = T(Ty) = T(\alpha x) = \alpha T(x) = 0.$$
Since y ∈ V is arbitrary, we have T² = 0, which contradicts the fact that T is not nilpotent. Hence our assumption is wrong and T(x) ≠ 0.

Now, as T(x) ∈ Range T = span{x}, there exists β (≠ 0) ∈ F such that T(x) = βx. Then β is a characteristic value of T. We will show that β is the only non-zero characteristic value of T. Suppose, on the contrary, that γ ≠ β is a non-zero characteristic value of T. Then there exists z (≠ 0) ∈ V such that T(z) = γz. Since T(z) ∈ Range T = span{x} and γ ≠ 0, the set {z, x} is linearly dependent, a contradiction to the fact that z and x are characteristic vectors associated with distinct non-zero characteristic values. Thus β is the only non-zero characteristic value of T. Let W_1 be the null space of T − βI and W_2 be the null space of T − 0I = T. Since
$$V = \mathrm{Range}\, T \oplus \mathrm{Ker}\, T = W_1 \oplus W_2,$$
it follows that T = βE_1 + 0·E_2, where E_i is the projection of V onto W_i, i = 1, 2. Hence T is a diagonalizable operator. ∎

Problem 5.3. Let V be a finite-dimensional vector space over the field F, and let T be a linear operator on V. Suppose that T commutes with every diagonalizable linear operator on V. Prove that T is a scalar multiple of the identity operator.


Proof: Suppose T is not a scalar multiple of the identity operator. Then there exist linearly independent vectors x and y in V such that T(x) = y. First we will construct a linear operator S on V which is diagonalizable and hence, by hypothesis, commutes with T. Let W_1 = {αx : α ∈ F} and W_2 = {βy : β ∈ F}. Then obviously W_1 ∩ W_2 = {0}, as x and y are linearly independent vectors. Now let W_0 be a subspace of V complementary to W_1 + W_2 = W_1 ⊕ W_2, so that
$$V = W_0 \oplus W_1 \oplus W_2.$$
Consider the linear operator S on V defined by
$$S(v) = S(v_0 + v_1 + v_2) = v_1 + 2v_2$$
for all v ∈ V, where v = v_0 + v_1 + v_2 is the unique expression of v with v_i ∈ W_i, i = 0, 1, 2.

Then it is easy to observe that
$$S|_{W_0} = 0, \qquad S(x) = x \qquad \text{and} \qquad S(y) = 2y.$$
Thus 0, 1 and 2 are characteristic values of S with multiplicities dim W_0, dim W_1 = 1 and dim W_2 = 1 respectively. By Theorem 3.1.1, S is a diagonalizable operator. Therefore, by the given hypothesis, S commutes with T, i.e., TS = ST. In particular,
$$TS(x) = ST(x), \quad \text{i.e., } T(S(x)) = S(T(x)), \quad \text{so } T(x) = S(y), \quad \text{i.e., } y = 2y,$$
which implies that y = 0, a contradiction to the fact that y ≠ 0. Hence our assumption is wrong, and T is a scalar multiple of the identity operator. ∎

Problem 5.4. If N is a nilpotent operator on an n-dimensional vector space V, then the characteristic polynomial for N is x^n.

Proof: Let f be the characteristic polynomial of N and g be the minimal polynomial of N. Then deg f = n, and by Theorem 3.2.2 the polynomials f and g have the same roots, except for multiplicities.

Now, since N is a nilpotent operator, there exists a positive integer k such that N^k = 0. Thus N satisfies the polynomial h(x) = x^k, and hence g must divide h. Therefore 0 is the only root of g. Consequently, 0 is the only root of f, repeated n times. Thus the characteristic polynomial for N is x^n. ∎

In the next section we discuss one application of the primary decomposition theorem.
We will see how we use the primary decomposition theorem to decompose a complicated
differential equation into much simpler differential equations.


6. APPLICATION

Let V denote the space of all n times continuously differentiable complex-valued functions f (of a real variable t) which satisfy the differential equation
$$D^n f + a_{n-1} D^{n-1} f + \cdots + a_1 Df + a_0 f = 0, \tag{A}$$
where a_0, …, a_{n−1} are fixed complex constants, n is a positive integer and
$$D^k = \frac{d^k}{dt^k}, \qquad k = 1, 2, 3, \ldots$$
Thus V is the space of solutions of the differential equation (A). If Cⁿ denotes the space of all n times continuously differentiable functions, then it is easy to show that the space V is a subspace of Cⁿ. If p is the polynomial
$$p(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0,$$
then clearly
$$V = \{\, f \in C^n : p(D)f = 0 \,\} = \text{the null space of } p(D).$$

Now, since ℂ is an algebraically closed field, the polynomial p can be decomposed as
$$p = (x - c_1)^{r_1}\cdots(x - c_k)^{r_k},$$
where c_1, …, c_k are distinct complex numbers. Let W_i be the null space of (D − c_i I)^{r_i} in V. Since the primary decomposition theorem holds for an arbitrary vector space and for any monic polynomial annihilating the given linear operator (here the operator D on V, which is annihilated by p), we have
$$V = W_1 \oplus \cdots \oplus W_k.$$
Thus, if f satisfies the differential equation (A), then we can express f uniquely as
$$f = f_1 + \cdots + f_k,$$
where f_i satisfies the differential equation (D − c_i I)^{r_i} f_i = 0. The problem of finding the solutions of the differential equation (A) is therefore reduced to the problem of finding the solutions of the much simpler differential equations
$$(D - c_i I)^{r_i} f = 0.$$
Hence, by the primary decomposition theorem, we have reduced our original differential equation to simpler and easily solvable differential equations.

Now, to solve differential equations of the form
$$(D - cI)^k f = 0, \tag{B}$$
we need to know something about differential equations, i.e., we should know a little more than the fact that D is a linear operator. It is very easy to show by induction on k that if f ∈ C^k, then
$$(D - cI)^k f = e^{ct} D^k (e^{-ct} f).$$
Thus (D − cI)^k f = 0 if and only if D^k(e^{−ct} f) = 0. Consequently, f satisfies equation (B) if and only if f = e^{ct} g, where g satisfies D^k g = 0. Now we already know from the theory of differential equations that D^k g = 0 if and only if g is a polynomial of degree less than or equal to (k − 1):
$$g(t) = b_0 + b_1 t + \cdots + b_{k-1} t^{k-1}.$$
Thus f satisfies equation (B) if and only if f is of the form
$$f = e^{ct}\big(b_0 + b_1 t + \cdots + b_{k-1} t^{k-1}\big).$$
Hence the functions e^{ct}, te^{ct}, …, t^{k−1}e^{ct} span the space of solutions of (B). Since 1, t, …, t^{k−1} are linearly independent and the exponential function has no zeros, the functions e^{ct}, te^{ct}, …, t^{k−1}e^{ct} are linearly independent and hence form a basis for the space of solutions of (B).

Returning to the differential equation (A), from the above discussion we see that the set
$$\mathfrak{B} = \bigcup_{i=1}^{k} \{\, e^{c_i t},\; t e^{c_i t},\; \ldots,\; t^{r_i - 1} e^{c_i t} \,\}$$
of n functions forms a basis for the space V of solutions of (A). In particular, V is finite-dimensional and its dimension equals the degree of the polynomial p.
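As a concrete illustration (a hedged sympy sketch, not from the original text), take p(x) = (x − 1)(x − 2)², the polynomial of Example 4.1; equation (A) then reads f''' − 5f'' + 8f' − 4f = 0, and the basis predicted above is {e^t, e^{2t}, t e^{2t}}.

```python
import sympy as sp

# p(x) = (x - 1)(x - 2)^2 = x^3 - 5x^2 + 8x - 4, so (A) reads f''' - 5f'' + 8f' - 4f = 0.
t = sp.symbols('t')
f = sp.Function('f')

ode = sp.Eq(f(t).diff(t, 3) - 5*f(t).diff(t, 2) + 8*f(t).diff(t) - 4*f(t), 0)
print(sp.dsolve(ode, f(t)))
# Expected (up to the labelling of constants): f(t) = C1*exp(t) + (C2 + C3*t)*exp(2*t),
# i.e. the basis {e^t} from Ker(D - I) together with {e^{2t}, t e^{2t}} from Ker((D - 2I)^2).

# Direct check that each predicted basis function solves (A):
for sol in (sp.exp(t), sp.exp(2*t), t*sp.exp(2*t)):
    assert sp.simplify(sol.diff(t, 3) - 5*sol.diff(t, 2) + 8*sol.diff(t) - 4*sol) == 0
```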

7. SUMMARY

We already knew that if a linear operator on a finite-dimensional vector space is diagonalizable, then we can decompose the linear operator into elementary operators (projections, in this case). In this chapter we moved a step ahead by showing that every linear operator on a finite-dimensional vector space can be decomposed according to the factorization of its minimal polynomial into powers of distinct irreducible polynomials (the primary decomposition theorem). Further, we showed that every linear operator on a finite-dimensional vector space over an algebraically closed field is uniquely expressible as the sum of a diagonalizable operator and a nilpotent operator which commute with each other. In the end we discussed how the primary decomposition theorem is used to break a complicated differential equation into much simpler equations and thus helps us in finding its solutions.


8. EXERCISES

1. Let T be the linear operator on ℝ³ which is represented in the standard ordered basis by the matrix
$$\begin{pmatrix} 6 & -3 & -2 \\ 4 & -1 & -2 \\ 10 & -5 & -3 \end{pmatrix}.$$
Express the minimal polynomial p for T in the form p = p_1 p_2, where p_1 and p_2 are monic and irreducible over the field of real numbers. Let W_i be the null space of p_i(T). Find bases 𝔅_i for the spaces W_1 and W_2. If T_i is the operator induced on W_i by T, find the matrix of T_i in the basis 𝔅_i (found above).

2. Let T be the linear operator on ℝ³ which is represented in the standard ordered basis by the matrix
$$\begin{pmatrix} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{pmatrix}.$$
Show that there is a diagonalizable operator D on ℝ³ and a nilpotent operator N on ℝ³ such that T = D + N and DN = ND. Find the matrices of D and N in the standard basis. (Just repeat the proof of Theorem 4.2.4 for this special case.)

3. If V is the space of all polynomials of degree less than or equal to n over a field F, prove that the differentiation operator on V is nilpotent.

4. Let V be the space of n × n matrices over a field F, and let A be a fixed n × n matrix over F. Define a linear operator T on V by T(B) = AB − BA. Prove that if A is a nilpotent matrix, then T is a nilpotent operator.

9. REFERENCES
[1] Kenneth Hoffman and Ray Kunze, Linear Algebra (Second Edition), Pearson Education Inc., India, 2005.
[2] Roger A. Horn and Charles R. Johnson, Matrix Analysis (Second Edition), Cambridge University Press, 2013.
[3] Lawrence E. Spence, Arnold J. Insel and Stephen H. Friedberg, Linear Algebra, Pearson Education Inc., 2008.
