
M. Sc.

MATHEMATICS
MAL-521
(ADVANCED ABSTRACT ALGEBRA)

Lesson No.  Lesson Name                 Writer            Vetter

1           Linear Transformations      Dr. Pankaj Kumar  Dr. Nawneet Hooda

2           Canonical Transformations   Dr. Pankaj Kumar  Dr. Nawneet Hooda

3           Modules I                   Dr. Pankaj Kumar  Dr. Nawneet Hooda

4           Modules II                  Dr. Pankaj Kumar  Dr. Nawneet Hooda

DIRECTORATE OF DISTANCE EDUCATION


GURU JAMBHESHWAR UNIVERSITY OF SCIENCE & TECHNOLOGY
HISAR 125001
MAL-521: M. Sc. Mathematics (Algebra)
Lesson No. 1 Written by Dr. Pankaj Kumar
Lesson: Linear Transformations Vetted by Dr. Nawneet Hooda

STRUCTURE
1.0 OBJECTIVE
1.1 INTRODUCTION
1.2 LINEAR TRANSFORMATIONS
1.3 ALGEBRA OF LINEAR TRANSFORMATIONS
1.4 CHARACTERISTIC ROOTS
1.5 CHARACTERISTIC VECTORS
1.6 MATRIX OF TRANSFORMATION
1.7 SIMILAR TRANSFORMATIONS
1.8 CANONICAL FORM (TRIANGULAR FORM)
1.9 KEY WORDS
1.10 SUMMARY
1.11 SELF-ASSESSMENT QUESTIONS
1.12 SUGGESTED READINGS

1.0 OBJECTIVE
The objective of this chapter is to study linear transformations on a
finite dimensional vector space V over a field F.

1.1 INTRODUCTION
Let U and V be two given finite dimensional vector spaces over the
same field F. Our interest is to find the relations (generally called linear
transformations) between the elements of U and V which satisfy certain
conditions, and to see how the set of all such relations from U to V becomes a
vector space over the field F. The set of all transformations of U into itself is of
much interest. On a finite dimensional vector space V over F, for a given basis of
V there always exists a matrix, and for a given basis and a given matrix of order n
there always exists a linear transformation.
In this chapter, in Section 1.2, we study linear transformations.
In Section 1.3, the algebra of linear transformations is studied. In the next two
sections, characteristic roots and characteristic vectors of linear transformations
are studied. In Section 1.6, the matrix of a transformation is studied. In Section 1.7,
similar transformations are studied, and in the last section we come to know
about the canonical form (triangular form).

1.2 LINEAR TRANSFORMATIONS


1.2.1 Definition. Vector Space. Let F be a field. A non-empty set V with two
binary operations, addition (+) and scalar multiplication (.), is called a vector
space over F if V is an abelian group under + and α.v ∈ V for all α ∈ F and v ∈ V.
The following conditions are also satisfied:
(1) α.(v+w) = α.v + α.w for all α ∈ F and v, w in V,
(2) (α+β).v = α.v + β.v,
(3) (αβ).v = α.(β.v),
(4) 1.v = v
for all α, β ∈ F and v, w belonging to V. Here v and w are called vectors and
α, β are called scalars.

1.2.2 Definition. Homomorphism. Let V and W be two vector spaces over the
same field F. A mapping T from V into W is called a homomorphism if
(i) (v1+v2)T = v1T + v2T
(ii) (αv1)T = α(v1T)
for all v1, v2 belonging to V and α belonging to F.
The above two conditions are equivalent to (αv1+βv2)T = α(v1T) + β(v2T).
If T is a one-one and onto mapping from V to W, then T is called an
isomorphism and the two spaces are isomorphic. The set of all homomorphisms
from V to W is denoted by Hom(V, W) or HomF(V, W).

1.2.3 Definition. Let S and T ∈ Hom(V, W). Then S+T and λS are defined by
(i) v(S+T) = vS + vT and
(ii) v(λS) = λ(vS) for all v∈V and λ∈F.

1.2.4 Problem. S+T and λS are elements of Hom(V, W), i.e. S+T and λS are
homomorphisms from V to W.
Proof. For (i) we have to show that
(αu+βv)(S+T) = α(u(S+T)) + β(v(S+T)).
By Definition 1.2.3, (αu+βv)(S+T) = (αu+βv)S + (αu+βv)T. Since S and T are
linear transformations, therefore,
(αu+βv)(S+T) = α(uS) + β(vS) + α(uT) + β(vT)
= α((uS)+(uT)) + β((vS)+(vT)).
Again by Definition 1.2.3, we get that (αu+βv)(S+T) = α(u(S+T)) + β(v(S+T)). It
proves the result.
(ii) Similarly we can show that (αu+βv)(λS) = α(u(λS)) + β(v(λS)), i.e. λS is also a
linear transformation.

1.2.5 Theorem. Prove that Hom(V, W) becomes a vector space under the two
operations v(S+T) = vS + vT and v(λS) = λ(vS) for all v∈V, λ∈F
and S, T ∈ Hom(V, W).
Proof. It is clear that both operations are binary operations on Hom(V, W).
We will show that under +, Hom(V, W) becomes an abelian group. As
0∈Hom(V, W) such that v0 = 0 ∀ v∈V (it is called the zero transformation),
v(S+0) = vS + v0 = vS = v0 + vS = v(0+S) ∀ v∈V, i.e. the identity element
exists in Hom(V, W). Further, for S∈Hom(V, W) there exists -S∈Hom(V, W)
such that v(S+(-S)) = vS + v(-S) = vS - vS = 0 = v0 ∀ v∈V, i.e. S+(-S) = 0. Hence
the inverse of every element exists in Hom(V, W). It is easy to see that
T1+(T2+T3) = (T1+T2)+T3 and T1+T2 = T2+T1 ∀ T1, T2, T3 ∈ Hom(V, W). Hence
Hom(V, W) is an abelian group under +.
Further it is easy to see that for all S, T ∈ Hom(V, W) and α, β∈F, we
have α(S+T) = αS + αT, (α+β)S = αS + βS, (αβ)S = α(βS) and 1.S = S. It proves
that Hom(V, W) is a vector space over F.

1.2.6 Theorem. If V and W are vector spaces over F of dimensions m and n
respectively, then Hom(V, W) is of dimension mn over F.
Proof. Since V and W are vector spaces over F of dimensions m and n
respectively, let v1, v2,…, vm be a basis of V over F and w1, w2,…, wn be a basis
of W over F. Each v∈V can be written as v = δ1v1 + δ2v2 + ... + δmvm, where the
δi∈F are uniquely determined by v. For 1≤ i ≤m and 1≤ j ≤n define Tij from V to
W by
vTij = δiwj for v = δ1v1 + ... + δmvm; equivalently, vkTij = wj if k = i and vkTij = 0 if k ≠ i.
It is easy to see that each Tij ∈ Hom(V, W). Now we will show that the mn
elements Tij, 1≤ i ≤m, 1≤ j ≤n, form a basis of Hom(V, W). Take
β11T11 + β12T12 + ... + β1nT1n + ... + βi1Ti1 + βi2Ti2 + ... + βinTin + ... + βm1Tm1 + βm2Tm2 + ... + βmnTmn = 0.
(Recall that a linear transformation on V is determined completely once the image
of every basis element is determined.) Applying both sides to vi,
vi(β11T11 + β12T12 + ... + βmnTmn) = vi0 = 0
⇒ βi1w1 + βi2w2 + ... + βinwn = 0    (since vkTij = wj if k = i and 0 if k ≠ i).
But w1, w2,…, wn are linearly independent over F, therefore
βi1 = βi2 = ... = βin = 0. Ranging i over 1≤ i ≤m, we get each βij = 0. Hence the Tij
are linearly independent over F. Now we claim that every element of
Hom(V, W) is a linear combination of the Tij over F. Let S ∈ Hom(V, W) with
v1S = α11w1 + α12w2 + ... + α1nwn,
… … …
viS = αi1w1 + αi2w2 + ... + αinwn,
… … …
vmS = αm1w1 + αm2w2 + ... + αmnwn.
Take S0 = α11T11 + α12T12 + ... + α1nT1n + ... + αi1Ti1 + αi2Ti2 + ... + αinTin + ... + αm1Tm1 + αm2Tm2 + ... + αmnTmn. Then
viS0 = vi(α11T11 + ... + α1nT1n + ... + αi1Ti1 + ... + αinTin + ... + αm1Tm1 + ... + αmnTmn)
= αi1w1 + αi2w2 + ... + αinwn = viS.
Thus viS0 = viS for every i, 1≤ i ≤m, and therefore vS0 = vS ∀ v∈V. Hence S0 = S.
It shows that every element of Hom(V, W) is a linear combination of the Tij over
F. It proves the result.


1.2.7 Corollary. If the dimension of V over F is n, then the dimension of Hom(V, V)
over F is n^2 and the dimension of Hom(V, F) over F is n.

1.2.8 Note. Hom(V, F) is called the dual space and its elements are called linear
functionals on V into F. Let v1, v2,…, vn be a basis of V over F. Then v̂1, v̂2,…, v̂n
defined by v̂i(vj) = 1 if i = j and v̂i(vj) = 0 if i ≠ j are linear functionals on V which
act as basis elements for Hom(V, F). If v is a non-zero element of V, then choose
v1 = v and extend to a basis v1, v2,…, vn of V. Then v̂1(v) = v̂1(v1) = 1 ≠ 0. In other
words we have shown that for a given non-zero vector v in V we have a linear
functional f (say) such that f(v) ≠ 0.

1.3 ALGEBRA OF LINEAR TRANSFORMATIONS


1.3.1 Definition. Algebra. An associative ring A which is a vector space over F
such that α(ab)= (αa)b= a(αb) for all a, b∈A and α∈F is called an algebra
over F.

1.3.2 Note. It is easy to see that Hom(V, V) becomes an algebra under the
multiplication of S and T ∈ Hom(V, V) defined by
v(ST) = (vS)T for all v∈V.
We will denote Hom(V, V) by A(V). If the dimension of V over F is n, i.e.
dimFV = n, then dimFA(V) = n^2.

1.3.3 Theorem. Let A be an algebra with unit element over F and dimFA = n. Then
every element of A satisfies some polynomial of degree at most n. In particular, if
dimFV = n, then every element of A(V) satisfies some polynomial of degree at
most n^2.
Proof. Let e be the unit element of A. As dimFA = n, the n+1 elements
e, a, a^2,…, a^n all lie in A and are linearly dependent over F, i.e.
there exist β0, β1,…, βn in F, not all zero, such that β0e + β1a +…+ βna^n = 0. But
then a satisfies the polynomial β0 + β1x +…+ βnx^n over F. It proves the result.
Since dimFA(V) = n^2, every element of A(V) satisfies some polynomial of
degree at most n^2.
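The dependence argument above is easy to check numerically. The following
sketch (assuming Python with numpy; the matrix A is an arbitrary illustrative
choice) finds coefficients β0,…, β9 with β0I + β1A +…+ β9A^9 = 0 for a 3×3
matrix, exhibiting a polynomial of degree at most n^2 = 9 satisfied by A:

    import numpy as np

    np.random.seed(0)
    n = 3
    A = np.random.rand(n, n)                  # an arbitrary element of A(V), dim V = 3
    # Flatten I, A, A^2, ..., A^(n^2): n^2 + 1 vectors in an n^2-dimensional space
    powers = np.array([np.linalg.matrix_power(A, k).flatten()
                       for k in range(n*n + 1)])
    # They must be linearly dependent; a null vector of powers^T gives the betas
    _, _, Vt = np.linalg.svd(powers.T)
    beta = Vt[-1]                             # coefficients beta_0, ..., beta_{n^2}
    P = sum(b * np.linalg.matrix_power(A, k) for k, b in enumerate(beta))
    print(np.allclose(P, 0))                  # True: A satisfies this polynomial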
1.3.4 Definition. An element T∈A(V) is called right invertible if there exists
S∈A(V) such that TS = I (here I is the identity mapping). Similarly, ST = I implies
that T is left invertible. An element T is called invertible or regular if it is both
right and left invertible. If T is not regular, it is called a singular
transformation. It may happen that an element of A(V) is right invertible but not
left invertible. For example, let F be the field of real numbers and V the space of
all polynomials in x over F. Define T on V by f(x)T = df(x)/dx and S by
f(x)S = ∫ f(t)dt, integrated from 1 to x. Both S and T are linear transformations.
Here f(x)(ST) = (f(x)S)T = f(x), i.e. ST = I, while f(x)(TS) = (f(x)T)S = f(x) − f(1),
which in general is not f(x), i.e. TS ≠ I. Hence S is right invertible (T is a right
inverse of S) while it is not left invertible.
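A quick symbolic check of this computation (a sketch assuming sympy; the
polynomial f below is an arbitrary illustrative choice):

    import sympy as sp

    x = sp.symbols('x')
    f = 3*x**2 + 2*x + 5                           # a sample element of V

    fS = sp.integrate(f, (x, 1, x))                # f(x)S: integrate from 1 to x
    fT = sp.diff(f, x)                             # f(x)T: differentiate

    print(sp.expand(sp.diff(fS, x)) == f)          # True: f(ST) = f, so ST = I
    print(sp.expand(sp.integrate(fT, (x, 1, x))))  # 3x^2 + 2x - 5 = f(x) - f(1), so TS != I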

1.3.5 Note. Since every T∈A(V) satisfies some polynomial over F, the polynomial of
minimum degree satisfied by T is called the minimal polynomial of T over F.

1.3.6 Theorem. If V is finite dimensional over F, then T∈A(V) is invertible if and
only if the constant term of the minimal polynomial for T is non-zero.
Proof. Let p(x) = β0 + β1x +…+ βnx^n, βn ≠ 0, be the minimal polynomial for T
over F. First suppose that β0 ≠ 0. Then 0 = p(T) = β0 + β1T +…+ βnT^n implies
that −β0I = T(β1I + β2T +…+ βnT^{n-1}), or
I = T(−(β1/β0)I − (β2/β0)T − ... − (βn/β0)T^{n-1}) = (−(β1/β0)I − (β2/β0)T − ... − (βn/β0)T^{n-1})T.
Therefore S = −(β1/β0)I − (β2/β0)T − ... − (βn/β0)T^{n-1} is the inverse of T.
Conversely, suppose that T is invertible, yet β0 = 0. Then
β1T +…+ βnT^n = 0, i.e. (β1I + β2T +…+ βnT^{n-1})T = 0. As T is invertible, on
operating by T^{-1} on both sides of the above equation we get
β1I + β2T +…+ βnT^{n-1} = 0, i.e. T satisfies a polynomial of degree less than the
degree of the minimal polynomial of T, a contradiction. Hence β0 ≠ 0. It proves
the result.
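The proof is constructive: the inverse is a polynomial in T. A numerical sketch
(assuming numpy; here the characteristic polynomial is used in place of the
minimal polynomial, since T satisfies it too by the Cayley-Hamilton theorem, and
its constant term is non-zero exactly when T is invertible):

    import numpy as np

    A = np.array([[2., 1., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])
    n = A.shape[0]
    c = np.poly(A)             # coefficients of x^n + c[1]x^(n-1) + ... + c[n]
    assert abs(c[n]) > 1e-12   # non-zero constant term: A is invertible

    # Horner evaluation of A^(n-1) + c[1]A^(n-2) + ... + c[n-1]I
    B = np.eye(n)
    for k in range(1, n):
        B = B @ A + c[k] * np.eye(n)
    A_inv = -B / c[n]          # since A(B) = -c[n]I, exactly as in the proof above
    print(np.allclose(A_inv @ A, np.eye(n)))   # True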
1.3.7 Corollary. If V is finite dimensional over F and T∈A(V) is singular, then
there exists a non-zero element S of A(V) such that ST = TS = 0.
Proof. Let p(x) = β0 + β1x +…+ βnx^n, βn ≠ 0, be the minimal polynomial for T
over F. Since T is singular, the constant term of p(x) is zero by Theorem 1.3.6.
Hence (β1I + β2T +…+ βnT^{n-1})T = T(β1I + β2T +…+ βnT^{n-1}) = 0. Choose
S = β1I + β2T +…+ βnT^{n-1}; then S ≠ 0 (if S = 0, then T satisfies a polynomial of
degree less than the degree of its minimal polynomial), and S fulfils the
requirement of the result.

1.3.8 Corollary. If V is finite dimensional over F and T belonging to A(V) is
right invertible, then it is left invertible also. In other words, if T is right
invertible then it is invertible.
Proof. Let U∈A(V) be the right inverse of T, i.e. TU = I. If possible, suppose T
is singular; then there exists a non-zero transformation S such that ST = TS = 0.
But then
S = SI = S(TU) = (ST)U = 0U = 0,
a contradiction to S being non-zero. This contradiction proves that T is invertible.

1.3.9 Theorem. For a finite dimensional vector space V over F, T∈A(V) is singular
if and only if there exists a v ≠ 0 in V such that vT = 0.
Proof. By Corollary 1.3.7, if T is singular then there exists a non-zero element
S∈A(V) such that ST = TS = 0. As S is non-zero, there exists an element u∈V
such that uS ≠ 0. Moreover 0 = u0 = u(ST) = (uS)T. Choose v = uS; then v ≠ 0
and vT = 0. Conversely, if vT = 0 for some v ≠ 0 and T were invertible, then
v = v(TT^{-1}) = (vT)T^{-1} = 0, a contradiction; hence T is singular. It proves the
result.

1.4 CHARACTERISTIC ROOTS


In the rest of the results, V is always a finite dimensional vector space over F.
1.4.1 Definition. For T∈A(V), λ∈F is called a characteristic root of T if λI−T is
singular, where I is the identity transformation in A(V).
If T is singular, then clearly 0 is a characteristic root of T.

1.4.2 Theorem. The element λ∈F is a characteristic root of T if and only if there
exists an element v ≠ 0 in V such that vT = λv.
Proof. λ is a characteristic root of T if and only if, by definition, the mapping
λI−T is singular. By Theorem 1.3.9, λI−T is singular if and only if
v(λI−T) = 0 for some v ≠ 0 in V. As v(λI−T) = 0 ⇔ vλ − vT = 0 ⇔ vT = λv,
λ∈F is a characteristic root of T if and only if there exists an element v ≠ 0 in V
such that vT = λv.
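Since the text writes vectors as rows and lets T act on the right, vT = λv says that
v is a left eigenvector of the matrix m(T), i.e. an ordinary eigenvector of its
transpose. A small numpy sketch (the matrix is an arbitrary illustrative choice):

    import numpy as np

    A = np.array([[2., 1.],
                  [0., 3.]])                  # m(T); rows are the images of the basis
    lams, vecs = np.linalg.eig(A.T)           # eigenvectors of A^T = left eigenvectors of A
    for lam, v in zip(lams, vecs.T):
        print(lam, np.allclose(v @ A, lam * v))   # vT = λv holds for each pair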

1.4.3 Theorem. If λ∈F is a characteristic root of T, then for any polynomial q(x)
in F[x], q(λ) is a characteristic root of q(T).
Proof. By Theorem 1.4.2, if λ∈F is a characteristic root of T then there exists an
element v ≠ 0 in V such that vT = λv. But then vT^2 = (vT)T = (λv)T = λ(λv) = λ^2v,
i.e. vT^2 = λ^2v. Continuing in this way we get vT^k = λ^k v. Let
q(x) = β0 + β1x +…+ βnx^n; then q(T) = β0 + β1T +…+ βnT^n. Now by the above
discussion,
vq(T) = v(β0 + β1T +…+ βnT^n) = β0v + β1(vT) +…+ βn(vT^n) = β0v + β1λv +…+
βnλ^n v = (β0 + β1λ +…+ βnλ^n)v = q(λ)v. Hence q(λ) is a characteristic root of q(T).
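Numerically (a sketch assuming numpy; q and the matrix are arbitrary illustrative
choices), the characteristic roots of q(T) are exactly the values q(λ):

    import numpy as np

    A = np.array([[2., 1.],
                  [0., 3.]])

    def q(M):                                  # q(x) = 5x^2 - 4x + 7 applied to a matrix
        return 5*np.linalg.matrix_power(M, 2) - 4*M + 7*np.eye(len(M))

    lam = np.linalg.eigvals(A)                 # characteristic roots of T: 2, 3
    print(np.sort(5*lam**2 - 4*lam + 7))       # q(λ): [19, 40]
    print(np.sort(np.linalg.eigvals(q(A))))    # characteristic roots of q(T): [19, 40]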

1.4.4 Theorem. If λ is a characteristic root of T, then λ is a root of the minimal
polynomial of T. In particular, T has a finite number of characteristic roots in
F.
Proof. As we know, if λ is a characteristic root of T, then for any polynomial q(x)
over F there exists a non-zero vector v such that vq(T) = q(λ)v.
If we take q(x) as the minimal polynomial of T, then q(T) = 0. But then
vq(T) = q(λ)v ⇒ q(λ)v = 0. As v is non-zero, q(λ) = 0, i.e. λ is a root of the
minimal polynomial of T. Since the minimal polynomial has only finitely many
roots in F, T has a finite number of characteristic roots in F.

1.5 CHARACTERISTIC VECTORS


1.5.1 Definition. The non-zero vector v∈V is called a characteristic vector of T
belonging to the characteristic root λ∈F if vT = λv.

1.5.2 Theorem. If v1, v2,…, vn are different characteristic vectors belonging to
distinct characteristic roots λ1, λ2,…, λn respectively, then v1, v2,…, vn are
linearly independent over F.
Proof. If possible, let v1, v2,…, vn be linearly dependent over F; then there
exists a relation β1v1 +…+ βnvn = 0, where β1,…, βn are all in F and not all of
them are zero. Among all such relations there is one having as few non-zero
coefficients as possible. By suitably renumbering the vectors, let us assume that
this shortest relation is
β1v1 +…+ βkvk = 0, where β1 ≠ 0,…, βk ≠ 0.    (i)
(Here k > 1, since a characteristic vector is non-zero.) Applying T and using
viT = λivi in (i) we get
λ1β1v1 +…+ λkβkvk = 0.    (ii)
Multiplying (i) by λ1 and subtracting from (ii), we obtain
(λ2−λ1)β2v2 +…+ (λk−λ1)βkvk = 0.
Now (λi−λ1) ≠ 0 for i > 1 and βi ≠ 0, therefore (λi−λ1)βi ≠ 0. But then we obtain a
shorter relation than that in (i) between v1, v2,…, vn. This contradiction proves
the theorem.

1.5.3 Corollary. If dimFV = n, then T∈A(V) can have at most n distinct
characteristic roots in F.
Proof. If possible, let T have more than n distinct characteristic roots in F; then
there will be more than n characteristic vectors belonging to these distinct
characteristic roots. By Theorem 1.5.2, these vectors will be linearly independent
over F. Since dimFV = n, any n+1 of them will be linearly dependent, a
contradiction. This contradiction proves that T can have at most n distinct
characteristic roots in F.

1.5.4 Corollary. If dimFV = n and T∈A(V) has n distinct characteristic roots in F,
then there is a basis of V over F which consists of characteristic vectors of T.
Proof. As T has n distinct characteristic roots in F, the n characteristic vectors
belonging to these characteristic roots will be linearly independent over F. As we
know, if dimFV = n then every set of n linearly independent vectors acts as a
basis of V (prove it). Hence the set of these characteristic vectors acts as a basis
of V over F. It proves the result.
Example. If T∈A(V) and if q(x)∈F[x] is such that q(T) = 0, is it true that
every root of q(x) in F is a characteristic root of T? Either prove that this is
true or give an example to show that it is false.
Solution. It is not always true. For it, take V, a vector space over F with
dimFV = 2 and with v1 and v2 as basis elements. It is clear that for v∈V we have
unique α, β in F such that v = αv1 + βv2. Define a transformation T∈A(V) by
v1T = v2 and v2T = 0. Let λ be a characteristic root of T in F; then there exists a
vector v(≠0) in V such that
vT = λv ⇒ (αv1+βv2)T = λαv1 + λβv2 ⇒ α(v1T) + β(v2T) = λαv1 + λβv2 ⇒
αv2 + β.0 = λαv1 + λβv2. Comparing coefficients, λα = 0 and α = λβ. If λ ≠ 0,
then α = 0, and then β = 0 also, so v = 0, a contradiction. Hence zero is the only
characteristic root of T in F. Now take the polynomial q(x) = x^2(x−1); then
q(T) = T^2(T−I). Now v1q(T) = ((v1T)T)(T−I) = (v2T)(T−I) = 0(T−I) = 0 and
v2q(T) = ((v2T)T)(T−I) = (0T)(T−I) = 0, therefore vq(T) = 0 ∀ v∈V. Hence
q(T) = 0. Thus every root of q(x) lies in F, yet the root 1 of q(x) is not a
characteristic root of T.
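In matrix form (a numpy sketch of the example, with the rows of m(T) holding
the coordinates of viT):

    import numpy as np

    A = np.array([[0., 1.],      # v1T = v2
                  [0., 0.]])     # v2T = 0
    I = np.eye(2)

    qA = np.linalg.matrix_power(A, 2) @ (A - I)   # q(T) = T^2(T - I)
    print(np.allclose(qA, 0))                     # True: q(T) = 0
    print(np.linalg.eigvals(A))                   # [0. 0.]: 1 is a root of q(x), not of T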

Example. If T∈A(V) and if p(x)∈F[x] is the minimal polynomial for T over
F, suppose that p(x) has all its roots in F. Prove that every root of p(x) is a
characteristic root of T.
Solution. Let p(x) = x^n + β1x^{n-1} +…+ βn be the minimal polynomial for T and λ
one of its roots. Then p(x) = (x−λ)(x^{n-1} + γ1x^{n-2} +…+ γ0). Since p(T) = 0,
(T−λ)(T^{n-1} + γ1T^{n-2} +…+ γ0) = 0. If (T−λ) were regular, then
T^{n-1} + γ1T^{n-2} +…+ γ0 = 0, contradicting the fact that the minimal polynomial
of T is of degree n over F. Hence (T−λ) is not regular, i.e. (T−λ) is singular, and
hence there exists a non-zero vector v in V such that v(T−λ) = 0, i.e. vT = λv.
Consequently λ is a characteristic root of T.

1.6 MATRIX OF TRANSFORMATIONS


1.6.1 Notation. The matrix of T under a given basis of V is denoted by m(T).
We know that for determining a transformation T∈A(V) it is sufficient to find
out the image of every basis element of V. Let v1, v2,…, vn be a basis of V
over F and let
v1T = α11v1 + α12v2 + ... + α1nvn
… … … … …
viT = αi1v1 + αi2v2 + ... + αinvn
… … … … …
vnT = αn1v1 + αn2v2 + ... + αnnvn.
Then the matrix of T under this basis is the n×n matrix

            [ α11  α12  ...  α1n ]
            [ ...  ...  ...  ... ]
    m(T) =  [ αi1  αi2  ...  αin ]
            [ ...  ...  ...  ... ]
            [ αn1  αn2  ...  αnn ]

Example. Let F be a field and V the set of all polynomials in x of degree
n−1 or less over F. It is clear that V is a vector space over F of dimension n. Let
{1, x, x^2,…, x^{n-1}} be its basis. For β0 + β1x +…+ β_{n-1}x^{n-1} ∈ V, define
(β0 + β1x +…+ β_{n-1}x^{n-1})D = β1 + 2β2x +…+ (n−1)β_{n-1}x^{n-2}. Then D
(differentiation) is a linear transformation on V. Now we calculate the matrix of D
under the basis v1(=1), v2(=x), v3(=x^2),…, vn(=x^{n-1}) as:
v1D = 1D = 0 = 0.v1 + 0.v2 + ... + 0.vn
v2D = xD = 1 = 1.v1 + 0.v2 + ... + 0.vn
v3D = x^2D = 2x = 0.v1 + 2.v2 + ... + 0.vn
… … … …
viD = x^{i-1}D = (i−1)x^{i-2} = 0.v1 + ... + (i−1)v_{i-1} + ... + 0.vn
… … … … …
vnD = x^{n-1}D = (n−1)x^{n-2} = 0.v1 + 0.v2 + ... + (n−1)v_{n-1} + 0.vn.
Then the matrix of D is

           [ 0  0  0  ...  0    0 ]
           [ 1  0  0  ...  0    0 ]
    m(D) = [ 0  2  0  ...  0    0 ]
           [ .  .  .  ...  .    . ]
           [ 0  0  0  ...  n-1  0 ]   (n×n)

Similarly, if we take another basis v1(=x^{n-1}), v2(=x^{n-2}),…, vn(=1), then the
matrix of D under this basis is

            [ 0  n-1  0    0    ...  0  0 ]
            [ 0  0    n-2  0    ...  0  0 ]
            [ 0  0    0    n-3  ...  0  0 ]
    m1(D) = [ .  .    .    .    ...  .  . ]
            [ 0  0    0    0    ...  0  1 ]
            [ 0  0    0    0    ...  0  0 ]   (n×n)

If we take the basis v1(=1), v2(=1+x), v3(=1+x^2),…, vn(=1+x^{n-1}), then the
matrix of D under this basis is obtained as:
v1D = 1D = 0 = 0.v1 + 0.v2 + ... + 0.vn
v2D = (1+x)D = 1 = 1.v1 + 0.v2 + ... + 0.vn
v3D = (1+x^2)D = 2x = -2 + 2(1+x) = -2.v1 + 2.v2 + ... + 0.vn
… … … …
vnD = (1+x^{n-1})D = (n−1)x^{n-2} = -(n−1) + (n−1)(1+x^{n-2}) = -(n−1).v1 + ... + (n−1)v_{n-1} + 0.vn.
Then the matrix of D is

            [ 0       0  0  0  ...  0    0 ]
            [ 1       0  0  0  ...  0    0 ]
            [ -2      2  0  0  ...  0    0 ]
    m2(D) = [ -3      0  3  0  ...  0    0 ]
            [ .       .  .  .  ...  .    . ]
            [ -(n-1)  0  0  0  ...  n-1  0 ]   (n×n)
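With the row conventions used here, the coordinate row vector of v times m(D)
gives the coordinate row vector of vD. A short numpy sketch for n = 4 in the
basis {1, x, x^2, x^3}:

    import numpy as np

    n = 4
    mD = np.zeros((n, n))
    for i in range(1, n):
        mD[i, i-1] = i               # row i+1: x^i D = i x^(i-1)

    v = np.array([5., 4., 3., 2.])   # 5 + 4x + 3x^2 + 2x^3
    print(v @ mD)                    # [4. 6. 6. 0.] = 4 + 6x + 6x^2, its derivative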

1.6.3 Theorem. If V is n dimensional over F and T∈A(V) has the matrix m1(T) in
the basis v1, v2,…, vn and the matrix m2(T) in the basis w1, w2,…, wn of V
over F, then there is a matrix C∈Fn such that m2(T) = Cm1(T)C^{-1}. In fact C
is the matrix of the transformation S∈A(V) defined by viS = wi, 1≤ i ≤n.
Proof. Let m1(T) = (αij); then for 1≤ i ≤n,
viT = αi1v1 + αi2v2 +…+ αinvn.    (1)
Similarly, if m2(T) = (βij), then for 1≤ i ≤n,
wiT = βi1w1 + βi2w2 +…+ βinwn.    (2)
Since viS = wi and S takes a basis to a basis, S is one-one and onto, hence
invertible. Using viS = wi in (2) we get
vi(ST) = βi1(v1S) + βi2(v2S) +…+ βin(vnS)
= (βi1v1 + βi2v2 +…+ βinvn)S.
As S is invertible, on applying S^{-1} on both sides of the above equation we get
vi(STS^{-1}) = βi1v1 + βi2v2 +…+ βinvn. Then by the definition of matrix we get
m1(STS^{-1}) = (βij) = m2(T). As the mapping T→m1(T) is an isomorphism from
A(V) to Fn, m1(STS^{-1}) = m1(S)m1(T)m1(S^{-1}) = m1(S)m1(T)m1(S)^{-1} =
m2(T). Choose C = m1(S); then the result follows.

Example. Let V be the vector space of all polynomials of degree 3 or less over
the field of reals. Let T∈A(V) be defined by (β0 + β1x + β2x^2 + β3x^3)T
= β1 + 2β2x + 3β3x^2. Then T is a linear transformation on V. The matrix of T in
the basis v1(=1), v2(=x), v3(=x^2), v4(=x^3) is obtained as:
v1T = 1T = 0 = 0.v1 + 0.v2 + 0.v3 + 0.v4
v2T = xT = 1 = 1.v1 + 0.v2 + 0.v3 + 0.v4
v3T = x^2T = 2x = 0.v1 + 2.v2 + 0.v3 + 0.v4
v4T = x^3T = 3x^2 = 0.v1 + 0.v2 + 3.v3 + 0.v4.
Then the matrix of T is

            [ 0  0  0  0 ]
    m1(T) = [ 1  0  0  0 ]
            [ 0  2  0  0 ]
            [ 0  0  3  0 ]

Similarly the matrix of T in the basis w1(=1), w2(=1+x), w3(=1+x^2), w4(=1+x^3) is

            [  0  0  0  0 ]
    m2(T) = [  1  0  0  0 ]
            [ -2  2  0  0 ]
            [ -3  0  3  0 ]

If we set viS = wi, then
v1S = w1 = 1 = 1.v1 + 0.v2 + 0.v3 + 0.v4
v2S = w2 = 1+x = 1.v1 + 1.v2 + 0.v3 + 0.v4
v3S = w3 = 1+x^2 = 1.v1 + 0.v2 + 1.v3 + 0.v4
v4S = w4 = 1+x^3 = 1.v1 + 0.v2 + 0.v3 + 1.v4.
Thus

               [ 1  0  0  0 ]                 [  1  0  0  0 ]
    C = m(S) = [ 1  1  0  0 ]   and  C^{-1} = [ -1  1  0  0 ]
               [ 1  0  1  0 ]                 [ -1  0  1  0 ]
               [ 1  0  0  1 ]                 [ -1  0  0  1 ]

and

                   [ 1  0  0  0 ] [ 0  0  0  0 ] [  1  0  0  0 ]   [  0  0  0  0 ]
    Cm1(T)C^{-1} = [ 1  1  0  0 ] [ 1  0  0  0 ] [ -1  1  0  0 ] = [  1  0  0  0 ]
                   [ 1  0  1  0 ] [ 0  2  0  0 ] [ -1  0  1  0 ]   [ -2  2  0  0 ]
                   [ 1  0  0  1 ] [ 0  0  3  0 ] [ -1  0  0  1 ]   [ -3  0  3  0 ]

= m2(T), as required.
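The matrix identity claimed above is easy to verify numerically (a numpy sketch
of the computation just carried out):

    import numpy as np

    m1 = np.array([[0., 0., 0., 0.],
                   [1., 0., 0., 0.],
                   [0., 2., 0., 0.],
                   [0., 0., 3., 0.]])
    m2 = np.array([[ 0., 0., 0., 0.],
                   [ 1., 0., 0., 0.],
                   [-2., 2., 0., 0.],
                   [-3., 0., 3., 0.]])
    C = np.array([[1., 0., 0., 0.],
                  [1., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [1., 0., 0., 1.]])

    print(np.allclose(C @ m1 @ np.linalg.inv(C), m2))   # True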

1.6.4 Note. In the above example we saw that for a given basis of V there always
exists a square matrix of order equal to dimFV. The converse is also true, i.e. for a
given basis and a given matrix there always exists a linear transformation. Let V
be the vector space of all n-tuples over the field F; then Fn, the set of all n×n
matrices, is an algebra over F. In fact if v1=(1,0,0,…,0), v2=(0,1,0,…,0),…,
vn=(0,0,0,…,1), then (αij)∈Fn acts as: v1(αij) = first row of (αij),…, vi(αij) = ith
row of (αij). We denote by Mt the square matrix of order t whose every super-
diagonal entry is one and whose remaining entries are zero. For example

         [ 0  1  0 ]              [ 0  1  0  0 ]
    M3 = [ 0  0  1 ]   and   M4 = [ 0  0  1  0 ]
         [ 0  0  0 ]              [ 0  0  0  1 ]
                                  [ 0  0  0  0 ]

1.7 SIMILAR TRANSFORMATIONS.


1.7.1 Definition (Similar transformations). Transformations S and T belonging to
A(V) are said to be similar if there exists a regular R∈A(V) such that RSR^{-1} = T.

1.7.2 Definition. A subspace W of a vector space V is invariant under T∈A(V) if
WT ⊆ W; in other words, wT∈W ∀ w∈W.

1.7.3 Theorem. If a subspace W of a vector space V is invariant under T, then T
induces a linear transformation T̄ on V/W, defined by (v+W)T̄ = vT + W.
Further, if T satisfies the polynomial q(x) over F, then so does T̄.
Proof. Since the elements of V/W are the cosets of W in V, T̄ defined by
(v+W)T̄ = vT + W is a mapping on V/W. The mapping is well defined: if
v1 + W = v2 + W then v1 − v2 ∈ W, and since W is invariant under T,
(v1 − v2)T ∈ W, which further implies that v1T + W = v2T + W, i.e.
(v1+W)T̄ = (v2+W)T̄. Further,
(α(v1+W) + β(v2+W))T̄ = ((αv1+βv2) + W)T̄ = (αv1+βv2)T + W. Since T is a
linear transformation,
(αv1+βv2)T + W = α(v1T) + β(v2T) + W = α(v1T + W) + β(v2T + W)
= α(v1+W)T̄ + β(v2+W)T̄,
i.e. T̄ is a linear transformation on V/W.
Now we will show that T̄ satisfies every polynomial q(x) satisfied by T.
For a given element v+W of V/W, (v+W)T̄^2 = ((v+W)T̄)T̄ = (vT+W)T̄ =
vT^2 + W, ∀ v+W ∈ V/W, i.e. T̄^2 is the transformation induced by T^2.
Similarly T̄^i is the transformation induced by T^i, ∀ i. If
q(x) = α0 + α1x + ... + αnx^n, then q(T) = α0 + α1T + ... + αnT^n and
(v+W)q(T̄) = (v+W)(α0 + α1T̄ + ... + αnT̄^n)
= α0(v+W) + α1(v+W)T̄ + ... + αn(v+W)T̄^n
= α0v + W + α1(vT + W) + ... + αn(vT^n + W)
= v(α0 + α1T + ... + αnT^n) + W = vq(T) + W.
Since by the given condition q(T) = 0, (v+W)q(T̄) = W, the zero element of V/W,
for every v+W; hence q(T̄) = 0. Thus T̄ satisfies the same polynomial as satisfied
by T.

1.7.4 Corollary. If a subspace W of a vector space V is invariant under T, then T
induces a linear transformation T̄ on V/W, defined by (v+W)T̄ = vT + W, and the
minimal polynomial p1(x) (say) of T̄ divides the minimal polynomial p(x) of T.
Proof. Since p(x) is the minimal polynomial of T, p(T) = 0. But then by
Theorem 1.7.3, p(T̄) = 0. Since p1(x) is the minimal polynomial of T̄, p1(x)
divides p(x).

1.8 CANONICAL FORM (TRIANGULAR FORM)


1.8.1 Definition. Let T be a linear transformation on V over F. The matrix of T in
the basis v1, v2,…, vn is called triangular if
v1T = α11v1
v2T = α21v1 + α22v2
… … … …
viT = αi1v1 + αi2v2 + ... + αiivi
… … … …
vnT = αn1v1 + αn2v2 + ... + αnnvn.

1.8.2 Theorem. If T∈A(V) has all its characteristic roots in F, then there exists a
basis of V in which the matrix of T is triangular.
Proof. We prove the result by induction on dimFV = n.
Let n = 1. By Corollary 1.5.3, T has exactly one characteristic root λ (say) in F.
Let v(≠0) be a corresponding characteristic vector in V; then vT = λv. Since
n = 1, take {v} as a basis of V. The matrix of T in this basis is [λ]. Hence the
result is true for n = 1.
Now let n > 1 and suppose that the result holds for all linear
transformations having all their characteristic roots in F and defined on vector
spaces of dimension less than n.
Since T has all its characteristic roots in F, let λ1∈F be a characteristic
root of T and v1 a corresponding characteristic vector, so that v1T = λ1v1.
Choose W = {αv1 | α∈F}. Then W is a one dimensional subspace of V. Since
(αv1)T = α(v1T) = αλ1v1 ∈ W, W is invariant under T. Let V̂ = V/W. Then V̂ is
a vector space over F with dimFV̂ = dimFV − dimFW = n−1. By Corollary
1.7.4, all the roots of the minimal polynomial of the induced transformation T̄,
being roots of the minimal polynomial of T, lie in F. Hence the linear
transformation T̄ in its action on V̂ satisfies the hypothesis of the theorem;
further dimFV̂ < n, therefore by the induction hypothesis there is a basis
v̄2(= v2 + W), v̄3(= v3 + W),…, v̄n(= vn + W) of V̂ over F such that
v̄2T̄ = α22v̄2
v̄3T̄ = α32v̄2 + α33v̄3
… … … …
v̄iT̄ = αi2v̄2 + αi3v̄3 + ... + αiiv̄i
… … … …
v̄nT̄ = αn2v̄2 + αn3v̄3 + ... + αnnv̄n,
i.e. the matrix of T̄ is triangular.
Take the set B = {v1, v2,…, vn}. We will show that B is a basis of V
which fulfils the requirement of the theorem. The mapping V→V̂ defined by
v→v̄(= v+W) ∀ v∈V is an onto homomorphism under which v̄2, v̄3,…, v̄n are
the images of v2, v3,…, vn respectively. Since v̄2, v̄3,…, v̄n are linearly
independent over F, their pre-image vectors v2, v3,…, vn are also linearly
independent over F. Moreover v1 cannot be a linear combination of the vectors
v2, v3,…, vn, because the image of v1 is the zero coset W, so such a relation
would make v̄2, v̄3,…, v̄n linearly dependent over F. Hence v1, v2,…, vn are n
linearly independent vectors over F, and we choose this set as a basis of V.
Now v1T = λ1v1 = α11v1 for α11 = λ1.
Since v̄2T̄ = α22v̄2, i.e. (v2+W)T̄ = α22v2 + W, we get
v2T + W = α22v2 + W. But then v2T − α22v2 ∈ W and hence
v2T − α22v2 = α21v1 for some α21∈F. Equivalently,
v2T = α21v1 + α22v2.
Similarly
v̄3T̄ = α32v̄2 + α33v̄3 ⇒ v3T = α31v1 + α32v2 + α33v3.
Continuing in this way we get
v̄iT̄ = αi2v̄2 + αi3v̄3 + ... + αiiv̄i ⇒ viT = αi1v1 + αi2v2 + ... + αiivi for all i, 1≤ i ≤n.
Hence B = {v1, v2,…, vn} is the required basis in which the matrix of T is
triangular.
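A numerical analogue of this theorem is the Schur decomposition (a sketch
assuming scipy is available; Schur works over the complex numbers, where all
characteristic roots automatically lie, and uses a unitary change of basis rather
than merely an invertible one):

    import numpy as np
    from scipy.linalg import schur

    A = np.array([[4., 1., -2.],
                  [1., 2.,  0.],
                  [1., 1.,  3.]])
    T, Z = schur(A, output='complex')           # A = Z T Z*, with T upper triangular
    print(np.allclose(Z @ T @ Z.conj().T, A))   # True
    print(np.round(np.diag(T), 3))              # the characteristic roots, on the diagonal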

1.8.3 Theorem. If the matrix A∈Fn (= the set of all square matrices of order n over
F) has all its characteristic roots in F, then there is a matrix C∈Fn such that
CAC^{-1} is a triangular matrix.
Proof. Let A = [aij]∈Fn. Further, let V = {(α1, α2,…, αn) | αi∈F} be the vector
space of n-tuples over F and e1, e2,…, en a basis of V over F. Define T: V→V by
eiT = ai1e1 + ai2e2 + ... + ainen.
Then T is a linear transformation on V and the matrix of T in this basis is
m1(T) = [aij] = A. Since the mapping A(V)→Fn defined by T→m1(T) is an
algebra isomorphism and all the characteristic roots of A are in F, all the
characteristic roots of T are in F. Therefore, by Theorem 1.8.2, there exists a
basis of V in which the matrix of T is triangular; let it be m2(T). By Theorem
1.6.3, there exists an invertible matrix C in Fn such that
m2(T) = Cm1(T)C^{-1} = CAC^{-1}. Hence CAC^{-1} is triangular.

1.8.4 Theorem. If V is an n dimensional vector space over F and the matrix A∈Fn
has n distinct characteristic roots in F, then there is a matrix C∈Fn such that
CAC^{-1} is a diagonal matrix.
Proof. Since all the characteristic roots of the matrix A are distinct, the linear
transformation T corresponding to this matrix under a given basis also has n
distinct characteristic roots, say λ1, λ2,…, λn, in F. Let v1, v2,…, vn be the
corresponding characteristic vectors in V. But then
viT = λivi ∀ 1≤ i ≤n.    (1)
We know that vectors corresponding to distinct characteristic roots are linearly
independent over F. Since these are n linearly independent vectors over F and
the dimension of V over F is n, the set B = {v1, v2,…, vn} can be taken as a basis
of V over F. The matrix of T in this basis is diag(λ1, λ2,…, λn). Now by Theorem
1.6.3, there exists C in Fn such that

                [ λ1  0   ...  0  ]
    CAC^{-1} =  [ 0   λ2  ...  0  ]
                [ 0   ... ...  0  ]
                [ 0   0   ...  λn ]

is a diagonal matrix.
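In numpy terms (a sketch; the matrix is an arbitrary choice with distinct
characteristic roots), stacking the characteristic row vectors as the rows of C
diagonalizes A:

    import numpy as np

    A = np.array([[1., 2.],
                  [2., 1.]])                # distinct characteristic roots 3 and -1

    lams, vecs = np.linalg.eig(A.T)         # left eigenvectors: vi A = λi vi
    C = vecs.T                              # rows of C are the characteristic vectors
    D = C @ A @ np.linalg.inv(C)
    print(np.round(D, 10))                  # diag(λ1, λ2)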

1.8.5 Theorem. If V is an n dimensional vector space over F and T∈A(V) has all
its characteristic roots in F, then T satisfies a polynomial of degree n over F.
Proof. By Theorem 1.8.2, we can find a basis of V in which the matrix of T is
triangular, i.e. we have a basis v1, v2,…, vn of V over F such that
v1T = λ1v1
v2T = α21v1 + λ2v2
… … … …
viT = αi1v1 + αi2v2 + ... + α_{i,i-1}v_{i-1} + λivi
… … … …
vnT = αn1v1 + αn2v2 + ... + α_{n,n-1}v_{n-1} + λnvn.
Equivalently,
v1(T − λ1) = 0
v2(T − λ2) = α21v1
… … …
vi(T − λi) = αi1v1 + αi2v2 + ... + α_{i,i-1}v_{i-1}
… … …
vn(T − λn) = αn1v1 + αn2v2 + ... + α_{n,n-1}v_{n-1}.
Take the transformation
S = (T − λ1)(T − λ2)...(T − λn).
Then v1S = v1(T − λ1)(T − λ2)...(T − λn) = 0(T − λ2)...(T − λn) = 0 and, since the
factors commute,
v2S = v2(T − λ1)(T − λ2)...(T − λn) = v2(T − λ2)(T − λ1)...(T − λn)
= α21v1(T − λ1)...(T − λn) = 0.
Similarly we can see that viS = 0 for 1≤ i ≤n. Equivalently, vS = 0 ∀ v∈V. Hence
S = (T − λ1)(T − λ2)...(T − λn) = 0, i.e. S is the zero transformation on V.
Consequently T satisfies the polynomial (x − λ1)(x − λ2)...(x − λn) of degree
n over F.
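A numerical check (assuming numpy, and working over the complex numbers so
that all n characteristic roots are available):

    import numpy as np

    np.random.seed(1)
    A = np.random.rand(4, 4)
    lams = np.linalg.eigvals(A)        # the n characteristic roots, with multiplicity

    S = np.eye(4, dtype=complex)
    for lam in lams:
        S = S @ (A - lam*np.eye(4))    # S = (T - λ1)(T - λ2)...(T - λn)
    print(np.allclose(S, 0))           # True: T satisfies a polynomial of degree n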
1.9 KEY WORDS
Transformations, similar transformations, characteristic roots, canonical
forms.
1.10 SUMMARY
In this chapter, we have studied linear transformations, the algebra of linear
transformations, characteristic roots and characteristic vectors of linear
transformations, the matrix of a transformation and the canonical form
(triangular form).

1.11 SELF-ASSESSMENT QUESTIONS


(1) Let V be a finite dimensional vector space over the field of real numbers with
basis v1 and v2. Find the characteristic roots and corresponding characteristic
vectors for T defined by
(i) v1T = v1 + v2, v2T = v1 - v2
(ii) v1T = 5v1 + 6v2, v2T = -7v2
(iii) v1T = v1 + 2v2, v2T = 3v1 + 6v2
(2) If V is a two-dimensional vector space over F, prove that every element of
A(V) satisfies a polynomial of degree 2 over F.

1.12 SUGGESTED READINGS:


(1) Topics in Algebra; I. N. Herstein, John Wiley and Sons, New York.
(2) Modern Algebra; Surjeet Singh and Qazi Zameeruddin, Vikas
Publications.
(3) Basic Abstract Algebra; P. B. Bhattacharya, S. K. Jain, S. R.
Nagpaul, Cambridge University Press, Second Edition.
MAL-521: M. Sc. Mathematics (Advanced Abstract Algebra)
Lesson No. 2 Written by Dr. Pankaj Kumar
Lesson: Canonical forms Vetted by Dr. Nawneet Hooda

STRUCTURE
2.0 OBJECTIVE
2.1 INTRODUCTION
2.2 NILPOTENT TRANSFORMATION
2.3 CANONICAL FORM (JORDAN FORM)
2.4 CANONICAL FORM (RATIONAL FORM)
2.5 KEY WORDS
2.6 SUMMARY
2.7 SELF-ASSESSMENT QUESTIONS
2.8 SUGGESTED READINGS

2.0 OBJECTIVE
The objective of this chapter is to study nilpotent transformations and
canonical forms of some transformations on a finite dimensional vector
space V over a field F.

2.1 INTRODUCTION
Let T∈A(V), where V is a finite dimensional vector space over F. In the first
chapter, we saw that every T satisfies some minimal polynomial over F. If T is a
nilpotent transformation on V, then all the characteristic roots of T lie in F.
Therefore, there exists a basis of V under which the matrix of T has a nice form.
Sometimes not all the roots of the minimal polynomial of T lie in F. In that case
we study the rational canonical form of T.
In this chapter, in Section 2.2, we study nilpotent transformations. In the
next section, Jordan forms of a transformation are studied. At the end of this
chapter, we study rational canonical forms.

2.2 NILPOTENT TRANSFORMATION


2.2.1 Definition. Nilpotent transformation. A transformation T∈A(V) is called
nilpotent if T^n = 0 for some positive integer n. Further, if T^r = 0 and T^k ≠ 0
for k < r, then T is a nilpotent transformation with index of nilpotence r.

2.2.2 Theorem. Prove that all the characteristic roots of a nilpotent transformation
T∈A(V) lie in F.
Proof. Since T is nilpotent, let r be the index of nilpotence of T; then T^r = 0.
Let λ be a characteristic root of T; then there exists v(≠0) in V such that
vT = λv. As vT^2 = (vT)T = (λv)T = λ(vT) = λ^2v, continuing in this way we get
vT^3 = λ^3v,…, vT^r = λ^r v. Since T^r = 0, vT^r = v0 = 0 and hence λ^r v = 0. But
v ≠ 0, therefore λ^r = 0 and hence λ = 0, which lies in F. Thus 0 is the only
characteristic root of T.

2.2.3 Theorem. If T∈A(V) is nilpotent and β0 ≠ 0, then β0 + β1T +…+ βmT^m,
βi∈F, is invertible.
Proof. First, if S is nilpotent, say S^r = 0, and β0 ≠ 0, then
(β0 + S)(I/β0 − S/β0^2 + S^2/β0^3 − ... + (-1)^{r-1}S^{r-1}/β0^r) = I + (-1)^{r-1}S^r/β0^r = I,
since the intermediate terms cancel in pairs and S^r = 0. Hence β0 + S is
invertible.
Now if T^k = 0, then for the transformation
S = β1T +…+ βmT^m,
vS^k = v(β1T +…+ βmT^m)^k = vT^k(β1 + β2T +…+ βmT^{m-1})^k ∀ v∈V,
since the powers of T commute. Since T^k = 0, vT^k = 0 and hence vS^k = 0
∀ v∈V, i.e. S^k = 0. Equivalently, S is a nilpotent transformation. But then by the
above discussion β0 + S = β0 + β1T +…+ βmT^m is invertible if β0 ≠ 0. It proves
the result.
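The finite geometric series in the proof can be checked directly (a numpy sketch;
N and the coefficients are arbitrary illustrative choices):

    import numpy as np

    N = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [0., 0., 0.]])          # nilpotent: N^3 = 0
    b0 = 2.0
    S = 3*N + 5*(N @ N)                   # S = β1 T + β2 T^2 is again nilpotent

    inv = np.eye(3)/b0 - S/b0**2 + (S @ S)/b0**3   # I/β0 - S/β0^2 + S^2/β0^3
    print(np.allclose((b0*np.eye(3) + S) @ inv, np.eye(3)))   # True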

2.2.4 Theorem. If V = V1⊕V2⊕…⊕Vk, where each subspace Vi of V is of
dimension ni and is invariant under T∈A(V), then a basis of V can be found so
that the matrix of T in this basis is of the form

    [ A1  0   0   ...  0  ]
    [ 0   A2  0   ...  0  ]
    [ 0   0   A3  ...  0  ]
    [ .   .   .   ...  .  ]
    [ 0   0   0   ...  Ak ]

where each Ai is an ni×ni matrix and is the matrix of the linear transformation Ti
induced by T on Vi.
Proof. Since each Vi is of dimension ni, let {v1^(1), v2^(1),…, v_{n1}^(1)},
{v1^(2), v2^(2),…, v_{n2}^(2)},…, {v1^(k), v2^(k),…, v_{nk}^(k)} be bases of
V1, V2,…, Vk respectively over F. We will show that
{v1^(1),…, v_{n1}^(1), v1^(2),…, v_{n2}^(2),…, v1^(k),…, v_{nk}^(k)}
is a basis of V. First we show that these vectors are linearly independent over F.
Let
(α1^(1)v1^(1) + α2^(1)v2^(1) + ... + α_{n1}^(1)v_{n1}^(1)) + (α1^(2)v1^(2) + ... + α_{n2}^(2)v_{n2}^(2)) + ... + (α1^(k)v1^(k) + ... + α_{nk}^(k)v_{nk}^(k)) = 0.
Each bracketed sum lies in the corresponding Vi. But V is the direct sum of the
Vi, so zero has the unique representation 0 = 0 + 0 +…+ 0. Hence
α1^(i)v1^(i) + α2^(i)v2^(i) + ... + α_{ni}^(i)v_{ni}^(i) = 0 for 1≤ i ≤k. But for
1≤ i ≤k, v1^(i), v2^(i),…, v_{ni}^(i) are linearly independent over F. Hence every
α_j^(i) = 0, and so the listed vectors are linearly independent over F. Moreover,
for v∈V there exist vi∈Vi such that v = v1 + v2 +…+ vk, and for 1≤ i ≤k,
vi = α1^(i)v1^(i) + α2^(i)v2^(i) + ... + α_{ni}^(i)v_{ni}^(i), α_j^(i)∈F. Hence
v = α1^(1)v1^(1) + ... + α_{n1}^(1)v_{n1}^(1) + ... + α1^(k)v1^(k) + ... + α_{nk}^(k)v_{nk}^(k).
In other words, every element of V is a linear combination of the listed vectors
over F, and therefore they form a basis of V over F.
Define Ti on Vi by setting viTi = viT ∀ vi∈Vi. Then Ti is a linear
transformation on Vi, and in the above basis of Vi its matrix is some ni×ni
matrix Ai = [α_{jl}^(i)]. Since each Vi is invariant under T, the image of a basis
vector of Vi under T involves only the basis vectors of Vi; for instance, for V1,
v1^(1)T = α11^(1)v1^(1) + α12^(1)v2^(1) + ... + α_{1n1}^(1)v_{n1}^(1) + 0.v1^(2) + ... + 0.v_{n2}^(2) + ... + 0.v1^(k) + ... + 0.v_{nk}^(k)
… … … …
v_{n1}^(1)T = α_{n1 1}^(1)v1^(1) + α_{n1 2}^(1)v2^(1) + ... + α_{n1 n1}^(1)v_{n1}^(1) + 0.v1^(2) + ... + 0.v_{nk}^(k).
Hence the rows of m(T) coming from the basis of V1 form the block [A1 0 … 0],
where the zero block has order n1×(n−n1). Similarly the rows coming from V2
form [0 A2 0 … 0], where the first zero block has order n2×n1, and so on.
Continuing in this way we get

           [ A1  0   0   ...  0  ]
           [ 0   A2  0   ...  0  ]
    m(T) = [ 0   0   A3  ...  0  ]
           [ .   .   .   ...  .  ]
           [ 0   0   0   ...  Ak ]

as required.

2.2.5 Theorem. If T∈A(V) is nilpotent with index of nilpotence n1, then there
always exist subspaces V1 and W, invariant under T, such that V = V1⊕W.
Proof. For proving the theorem, we first prove two lemmas.
Lemma 1. If T∈A(V) is nilpotent with index of nilpotence n1, then there always
exists a subspace V1 of V of dimension n1 which is invariant under T.
Proof. Since the index of nilpotence of T is n1, we have T^{n1} = 0 and T^k ≠ 0
for 1≤ k ≤ n1−1. Since T^{n1-1} ≠ 0, there exists v(≠0)∈V such that
vT^{n1-1} ≠ 0. Consider the elements v, vT, vT^2,…, vT^{n1-1} of V.
Take α1v + α2vT + ... + αs vT^{s-1} + ... + α_{n1}vT^{n1-1} = 0, αi∈F, and
suppose, if possible, that not all αi are zero; let αs be the first non-zero
coefficient. Then αs vT^{s-1} + ... + α_{n1}vT^{n1-1} = 0, i.e.
vT^{s-1}(αs + α_{s+1}T + ... + α_{n1}T^{n1-s}) = 0. As αs ≠ 0 and T is nilpotent,
(αs + α_{s+1}T + ... + α_{n1}T^{n1-s}) is invertible (Theorem 2.2.3), and hence
vT^{s-1} = 0. But then vT^{n1-1} = (vT^{s-1})T^{n1-s} = 0, a contradiction. Hence
each αi = 0, i.e. the elements v, vT, vT^2,…, vT^{n1-1} are linearly independent
over F. Let V1 be the subspace generated by v, vT, vT^2,…, vT^{n1-1}; then the
dimension of V1 over F is n1. Let u∈V1; then
u = β1v + ... + β_{n1-1}vT^{n1-2} + β_{n1}vT^{n1-1} and
uT = β1vT + ... + β_{n1-1}vT^{n1-1} + β_{n1}vT^{n1} = β1vT + ... + β_{n1-1}vT^{n1-1},
i.e. uT is also a linear combination of v, vT, vT^2,…, vT^{n1-1} over F. Hence
uT∈V1, i.e. V1 is invariant under T.
Lemma 2. Let V1 be the subspace of V spanned by v, vT, vT^2,…, vT^{n1-1},
where T∈A(V) is nilpotent with index of nilpotence n1. If u∈V1 is such that
uT^{n1-k} = 0, 0 < k ≤ n1, then u = u0T^k for some u0∈V1.
Proof. For u∈V1, u = α1v + ... + αk vT^{k-1} + α_{k+1}vT^k + ... + α_{n1}vT^{n1-1}, αi∈F,
and 0 = uT^{n1-k} = (α1v + ... + αk vT^{k-1} + α_{k+1}vT^k + ... + α_{n1}vT^{n1-1})T^{n1-k}
= α1vT^{n1-k} + ... + αk vT^{n1-1} + α_{k+1}vT^{n1} + ... + α_{n1}vT^{2n1-k-1}
= α1vT^{n1-k} + ... + αk vT^{n1-1},
the later terms vanishing since T^{n1} = 0. Since vT^{n1-k},…, vT^{n1-1} are
linearly independent over F, α1 = ... = αk = 0. But then
u = α_{k+1}vT^k + ... + α_{n1}vT^{n1-1} = (α_{k+1}v + ... + α_{n1}vT^{n1-1-k})T^k. Put
α_{k+1}v + ... + α_{n1}vT^{n1-1-k} = u0. Then u = u0T^k. It proves the lemma.
Proof of Theorem. Since T is nilpotent with index of nilpotence n1, by Lemma 1
there exists an invariant subspace V1 of V generated by v, vT, vT^2,…,
vT^{n1-1}. Let W be a subspace of V of maximal dimension such that
(i) V1∩W = (0) and (ii) W is invariant under T.
We will show that V = V1 + W. If possible, let V ≠ V1 + W; then there exists
z∈V such that z∉V1+W. Since T^{n1} = 0, zT^{n1} = 0 ∈ V1+W. Hence there
exists an integer 0 < k ≤ n1 such that zT^k ∈ V1+W and zT^i ∉ V1+W for
0 < i < k. Let
zT^k = u + w, u∈V1, w∈W. Since 0 = zT^{n1} = (zT^k)T^{n1-k} = (u+w)T^{n1-k} =
uT^{n1-k} + wT^{n1-k}, we get uT^{n1-k} = −wT^{n1-k}. But then uT^{n1-k} lies in
both V1 and W, so uT^{n1-k} = 0. By Lemma 2, u = u0T^k for some u0∈V1.
Hence zT^k = u0T^k + w, or (z − u0)T^k ∈ W. Take z1 = z − u0; then z1T^k ∈ W.
Further, for i < k, z1T^i ∉ W, because if z1T^i ∈ W then
zT^i = z1T^i + u0T^i ∈ V1 + W, a contradiction to our earlier choice of k.
Let W1 be the subspace generated by W and z1, z1T, z1T^2,…, z1T^{k-1}.
Since z1 does not belong to W, W is properly contained in W1 and hence
dimFW1 > dimFW. Since W is invariant under T and z1T^k ∈ W, W1 is also
invariant under T. By the maximality of W, V1∩W1 ≠ (0). Let
w + α1z1 + α2z1T + ... + αk z1T^{k-1} be a non-zero element of V1∩W1, w∈W.
Here not all the αi are zero, because otherwise 0 ≠ w ∈ V1∩W = (0). Let αs be
the first non-zero αi. Then
w + αs z1T^{s-1} + ... + αk z1T^{k-1} = w + z1T^{s-1}(αs + α_{s+1}T + ... + αk T^{k-s}) ∈ V1.
Since αs ≠ 0, R = αs + α_{s+1}T + ... + αk T^{k-s} is invertible, and R^{-1} is a
polynomial in T, so that V1 and W are invariant under R^{-1}. Applying R^{-1},
wR^{-1} + z1T^{s-1} ∈ V1R^{-1} ⊆ V1. Equivalently, z1T^{s-1} ∈ V1 + W with
s−1 < k, a contradiction. This contradiction proves that V = V1 + W. Hence
V = V1⊕W.

2.2.6 Theorem. If T∈A(V) is nilpotent with index of nilpotence n1, then there exist
subspaces V1, V2,…, Vr of dimensions n1, n2,…, nr respectively, each Vi
invariant under T, such that V = V1⊕V2⊕…⊕Vr, n1≥ n2 ≥…≥ nr and
dim V = n1+n2+…+nr. Moreover, we can find a basis of V over F in which the
matrix of T is of the form

    [ M_{n1}  0       0       ...  0      ]
    [ 0       M_{n2}  0       ...  0      ]
    [ 0       0       M_{n3}  ...  0      ]
    [ .       .       .       ...  .      ]
    [ 0       0       0       ...  M_{nr} ]

Proof. First we prove a lemma. Let T∈A(V) be nilpotent with index of
nilpotence n1 and let V1 be the subspace of V spanned by v, vT, vT^2,…,
vT^{n1-1}, as in Lemma 1 of Theorem 2.2.5. Then M_{n1} is the matrix of T on V1
under the basis v1 = v, v2 = vT,…, v_{n1} = vT^{n1-1}.
Proof. Since
v1T = vT = v2 = 0.v1 + 1.v2 + ... + 0.v_{n1}
v2T = (vT)T = vT^2 = v3 = 0.v1 + 0.v2 + 1.v3 + ... + 0.v_{n1}
… … … …
v_{n1-1}T = (vT^{n1-2})T = vT^{n1-1} = v_{n1} = 0.v1 + 0.v2 + ... + 1.v_{n1} and
v_{n1}T = (vT^{n1-1})T = vT^{n1} = 0 = 0.v1 + 0.v2 + ... + 0.v_{n1},
the matrix of T under the basis v, vT, vT^2,…, vT^{n1-1} is

             [ 0  1  0  ...  0 ]
             [ 0  0  1  ...  0 ]
    M_{n1} = [ 0  0  0  ...  0 ]
             [ .  .  .  ...  1 ]
             [ 0  0  0  ...  0 ]   (n1×n1)

Proof of the main theorem. By Theorem 2.2.5, if T∈A(V) is nilpotent with index
of nilpotence n1, then there exist subspaces V1 and W, invariant under T, such
that V = V1⊕W. Now let T2 be the transformation induced by T on W. Then
T2^{n1} = 0 on W, so there exists an integer n2 ≤ n1 which is the index of
nilpotence of T2. By the same argument we can write W = V2⊕W1, where V2 is
the subspace of W spanned by u, uT2, uT2^2,…, uT2^{n2-1} for some u∈W and
W1 is an invariant subspace of W. Continuing in this way we get
V = V1⊕V2⊕…⊕Vr,
where each Vi is an ni dimensional invariant subspace of V on which the matrix
of T (i.e. the matrix of T obtained by using the basis of Vi) is M_{ni}, where
n1≥ n2 ≥…≥ nr and n1+ n2 +…+ nr = n = dim V. Since V = V1⊕V2⊕…⊕Vr, by
Theorem 2.2.4 the matrix of T is

           [ M_{n1}  0       ...  0      ]
    m(T) = [ 0       M_{n2}  ...  0      ]
           [ .       .       ...  .      ]
           [ 0       0       ...  M_{nr} ]

It proves the theorem.
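The block matrices M_t are easy to build, and the index of nilpotence of the block
sum is the largest invariant n1 (a sketch assuming numpy and scipy; the
invariants 3 ≥ 2 ≥ 1 are an arbitrary illustrative choice):

    import numpy as np
    from scipy.linalg import block_diag

    def M(t):
        return np.eye(t, k=1)     # t x t: ones on the superdiagonal, zeros elsewhere

    A = block_diag(M(3), M(2), M(1))    # a nilpotent T with invariants 3, 2, 1
    B, k = A.copy(), 1
    while np.any(B):
        B, k = B @ A, k + 1
    print(k)                            # 3: index of nilpotence = n1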

2.2.8 Definition. Let T∈A(V) be a nilpotent transformation with index of nilpotence
n1. Then there exist subspaces V1, V2,…, Vk of dimensions n1, n2,…, nk
respectively, each Vi invariant under T, such that V = V1⊕V2⊕…⊕Vk, n1≥
n2 ≥…≥ nk and dim V = n1+n2+…+nk. The integers n1, n2,…, nk are called the
invariants of T.

2.2.9 Definition. Cyclic subspace. A subspace M of dimension m is called cyclic
with respect to T∈A(V) if
(i) MT^m = 0, MT^{m-1} ≠ 0; (ii) there exists x in M such that x, xT,…, xT^{m-1}
form a basis of M.

2.2.10 Theorem. If M is a cyclic subspace with respect to T, then the dimension of
MT^k is m−k for all k ≤ m.
Proof. Since M is cyclic with respect to T, there exists x in M such that
x, xT,…, xT^{m-1} is a basis of M. For z∈M,
z = a1x + a2xT +…+ am xT^{m-1}, ai∈F.
Then zT^k = a1xT^k + a2xT^{k+1} +…+ a_{m-k}xT^{m-1} + a_{m-k+1}xT^m +…+ am xT^{m+k-1}
= a1xT^k + a2xT^{k+1} +…+ a_{m-k}xT^{m-1}, since MT^m = 0. Hence every element
z of MT^k is a linear combination of the m−k elements xT^k, xT^{k+1},…, xT^{m-1}.
Being a subset of a linearly independent set, these are also linearly independent.
Hence the dimension of MT^k is m−k for all k ≤ m.

2.2.11 Theorem. Prove that the invariants of a nilpotent transformation are unique.
Proof. If possible, let there be two sets of invariants n1 ≥ n2 ≥…≥ nr and
m1 ≥ m2 ≥…≥ ms of T. Then V = V1⊕V2⊕…⊕Vr and V = W1⊕W2⊕…⊕Ws,
where the Vi and Wi are cyclic subspaces of V of dimensions ni and mi
respectively. We will show that r = s and ni = mi. Suppose k is the first integer
such that nk ≠ mk, i.e. n1 = m1, n2 = m2,…, n_{k-1} = m_{k-1}. Without loss of
generality suppose that nk > mk. Consider VT^{mk}. Then
VT^{mk} = V1T^{mk} ⊕ V2T^{mk} ⊕ ... ⊕ VrT^{mk} and
dim(VT^{mk}) = dim(V1T^{mk}) + dim(V2T^{mk}) + ... + dim(VrT^{mk}). As by
Theorem 2.2.10, dim(ViT^{mk}) = ni − mk whenever ni ≥ mk, and nk > mk, we get
dim(VT^{mk}) ≥ (n1 − mk) + ... + (n_{k-1} − mk) + (nk − mk) > (n1 − mk) + ... + (n_{k-1} − mk).    (1)
Similarly dim(VT^{mk}) = dim(W1T^{mk}) + dim(W2T^{mk}) + ... + dim(WsT^{mk}).
As mj ≤ mk for j ≥ k, WjT^{mk} = {0} and dim(WjT^{mk}) = 0 for j ≥ k. Hence
dim(VT^{mk}) = (m1 − mk) + ... + (m_{k-1} − mk). Since n1 = m1, n2 = m2,…,
n_{k-1} = m_{k-1},
dim(VT^{mk}) = (n1 − mk) + ... + (n_{k-1} − mk), contradicting (1). Hence ni = mi
for all i. Further n1+n2+…+nr = dim V = m1+m2+…+ms and ni = mi for all i
imply that r = s. It proves the theorem.

2.2.12 Theorem. Prove that transformations S and T∈A(V) are similar if and only if
they have the same invariants.
Proof. First suppose that S and T are similar, i.e. there exists a regular mapping
R such that RTR^{-1} = S. Let n1, n2,…, nr be the invariants of S and m1, m2,…,
ms those of T. Then V = V1⊕V2⊕…⊕Vr and V = W1⊕W2⊕…⊕Ws, where the
Vi and Wi are cyclic, invariant subspaces of V of dimensions ni and mi
respectively. We will show that r = s and ni = mi.
As ViS ⊆ Vi, we have Vi(RTR^{-1}) ⊆ Vi ⇒ (ViR)(TR^{-1}) ⊆ Vi. Put ViR = Ui.
Since R is regular, dim Ui = dim Vi = ni. Further UiT = ViRT = ViSR.
As ViS ⊆ Vi, UiT ⊆ ViR = Ui. Equivalently, we have shown that Ui is invariant
under T. Moreover
V = VR = V1R⊕V2R⊕…⊕VrR = U1⊕U2⊕…⊕Ur.
Now we will show that each Ui is cyclic with respect to T. Since each Vi is
cyclic with respect to S and is of dimension ni, there is v∈Vi such that
v, vS,…, vS^{ni-1} is a basis of Vi over F. As R is a regular transformation on V,
vR, vSR,…, vS^{ni-1}R is a basis of ViR = Ui. Further S = RTR^{-1} ⇒
SR = RT ⇒ S^2R = S(SR) = S(RT) = (SR)T = RTT = RT^2. Similarly we have
S^tR = RT^t. Hence {vR, vSR,…, vS^{ni-1}R} = {vR, (vR)T,…, (vR)T^{ni-1}}. Now
vR lies in Ui, whose dimension is ni, and vR, (vR)T,…, (vR)T^{ni-1} are ni
linearly independent elements of Ui, so the set {vR, (vR)T,…, (vR)T^{ni-1}} is a
basis of Ui. Hence Ui is cyclic with respect to T, and the invariants of T are
n1, n2,…, nr. As by Theorem 2.2.11 the invariants of a nilpotent transformation
are unique, ni = mi and r = s.
Conversely, suppose that the two nilpotent transformations S and T have the
same invariants. We will show that they are similar. As they have the same
invariants, there exist two bases, say X = {x1, x2,…, xn} and Y = {y1, y2,…, yn},
of V such that the matrix of S under X and the matrix of T under Y are the same,
say A = [aij]n×n. Define a regular mapping R: V→V by xiR = yi.
As xi(RTR^{-1}) = xiR(TR^{-1}) = yiTR^{-1} = (yiT)R^{-1} = (Σ_{j=1}^{n} aij yj)R^{-1}
= Σ_{j=1}^{n} aij(yjR^{-1}) = Σ_{j=1}^{n} aij xj = xiS,
we get RTR^{-1} = S, i.e. S and T are similar.
2.3 CANONICAL FORM (JORDAN FORM)
2.3.1 Definition. Let W be a subspace of V invariant under T∈A(V); then the
mapping T1 defined by wT1 = wT is called the transformation induced by T on
W.

2.3.2 Note. (i) Since W is invariant under T and wT = wT1, we have
wT^2 = (wT)T = (wT)T1 = (wT1)T1 = wT1^2 ∀ w∈W. Hence on W, T^2 = T1^2,
and continuing in this way we get T^k = T1^k. Hence on W, q(T) = q(T1) for all
q(x)∈F[x].
(ii) Further, it is easy to see that if p(x) is the minimal polynomial of T and
r(T) = 0, then p(x) always divides r(x).

2.3.3 Lemma. Let V1 and V2 be two invariant subspaces of a finite dimensional
vector space V over F such that V = V1⊕V2. Further, let T1 and T2 be the linear
transformations induced by T on V1 and V2 respectively. If p(x) and q(x) are the
minimal polynomials of T1 and T2 respectively, then the minimal polynomial for
T over F is the least common multiple of p(x) and q(x).
Proof. Let h(x) = lcm(p(x), q(x)) and let r(x) be the minimal polynomial of T.
Then r(T) = 0. By Note 2.3.2(i), r(T1) = 0 and r(T2) = 0. By Note 2.3.2(ii),
p(x)|r(x) and q(x)|r(x). Hence h(x)|r(x). Now we will show that r(x)|h(x). By the
assumptions made in the statement of the lemma, p(T1) = 0 and q(T2) = 0.
Since h(x) = lcm(p(x), q(x)), h(x) = p(x)t1(x) and h(x) = q(x)t2(x),
where t1(x) and t2(x) belong to F[x].
As V = V1⊕V2, for v∈V we have unique v1∈V1 and v2∈V2 such that
v = v1 + v2. Now vh(T) = v1h(T) + v2h(T) = v1h(T1) + v2h(T2) =
v1p(T1)t1(T1) + v2q(T2)t2(T2) = 0 + 0 = 0. Since this holds for all v∈V,
h(T) = 0 on V. But then by Note 2.3.2(ii), r(x)|h(x). Now h(x)|r(x) and r(x)|h(x)
imply that h(x) = r(x). It proves the lemma.

2.3.4 Corollary. Let V1, V2,…, Vk be invariant subspaces of a finite dimensional
vector space V over F such that V = V1⊕V2⊕…⊕Vk. Further, let T1, T2,…, Tk
be the linear transformations induced by T on V1, V2,…, Vk respectively, and let
p1(x), p2(x),…, pk(x) be their respective minimal polynomials. Then the minimal
polynomial for T over F is the least common multiple of p1(x), p2(x),…, pk(x).
Proof. It follows from Lemma 2.3.3 by induction on k.

2.3.5 Theorem. If p(x) = p1(x)^{t1} p2(x)^{t2} ... pk(x)^{tk}, where the pi(x) are the
distinct irreducible factors of p(x) over F, is the minimal polynomial of T, then
for 1≤ i ≤k the set
Vi = {v∈V | vpi(T)^{ti} = 0} is a non-zero subspace of V invariant under T.
Proof. We first show that Vi is a subspace of V. Let v1 and v2 be two elements
of Vi. Then by definition v1pi(T)^{ti} = 0 and v2pi(T)^{ti} = 0. Using the linearity
of pi(T)^{ti} we get (v1 − v2)pi(T)^{ti} = v1pi(T)^{ti} − v2pi(T)^{ti} = 0.
Hence v1 − v2 ∈Vi. Since the minimal polynomial of T over F is p(x),
hi(T) = p1(T)^{t1}...p_{i-1}(T)^{t_{i-1}} p_{i+1}(T)^{t_{i+1}}...pk(T)^{tk} ≠ 0. Hence
there exists u in V such that uhi(T) ≠ 0. But uhi(T)pi(T)^{ti} = up(T) = 0,
therefore uhi(T)∈Vi. Hence Vi ≠ (0).
Moreover, for v∈Vi, (vT)pi(T)^{ti} = vpi(T)^{ti}T = 0T = 0. Hence vT∈Vi for all
v∈Vi, i.e. Vi is invariant under T. It proves the result.

2.3.6 Theorem. If p(x) = p1(x)^{t1} p2(x)^{t2} ... pk(x)^{tk}, where the pi(x) are the
distinct irreducible factors of p(x) over F, is the minimal polynomial of T, then
for 1≤ i ≤k,
Vi = {v∈V | vpi(T)^{ti} = 0} ≠ (0), V = V1⊕V2⊕…⊕Vk, and the minimal
polynomial of Ti (the transformation induced by T on Vi) is pi(x)^{ti}.
Proof. If k = 1, i.e. the number of irreducible factors of p(x) is one, then V = V1
and the minimal polynomial of T is p1(x)^{t1}, i.e. the result holds trivially.
Therefore suppose k > 1. By Theorem 2.3.5, each Vi is a non-zero subspace of V
invariant under T. Define
h1(x) = p2(x)^{t2} p3(x)^{t3} ... pk(x)^{tk},
h2(x) = p1(x)^{t1} p3(x)^{t3} ... pk(x)^{tk},
… … … …
hi(x) = Π_{j≠i} pj(x)^{tj}.
The polynomials h1(x), h2(x),…, hk(x) are relatively prime. Hence we can find
polynomials a1(x), a2(x),…, ak(x) in F[x] such that
a1(x)h1(x) + a2(x)h2(x) +…+ ak(x)hk(x) = 1. Equivalently, we get
a1(T)h1(T) + a2(T)h2(T) +…+ ak(T)hk(T) = I (the identity transformation).
Now for v∈V,
v = vI = v(a1(T)h1(T) + a2(T)h2(T) +…+ ak(T)hk(T))
= va1(T)h1(T) + va2(T)h2(T) +…+ vak(T)hk(T).
Since vai(T)hi(T)pi(T)^{ti} = vai(T)p(T) = 0, we have vai(T)hi(T)∈Vi. Let
vai(T)hi(T) = vi. Then v = v1 + v2 +…+ vk. Thus V = V1 + V2 +…+ Vk. Now we
will show that if u1 + u2 +…+ uk = 0, ui∈Vi, then each ui = 0.
As u1 + u2 +…+ uk = 0 ⇒ u1h1(T) + u2h1(T) +…+ ukh1(T) = 0h1(T) = 0. Since
h1(T) = p2(T)^{t2} p3(T)^{t3} ... pk(T)^{tk}, we get ujh1(T) = 0 for all j = 2, 3,…, k.
But then u1h1(T) + u2h1(T) +…+ ukh1(T) = 0 ⇒ u1h1(T) = 0. Also
u1p1(T)^{t1} = 0. Since gcd(h1(x), p1(x)^{t1}) = 1, we can find polynomials r(x)
and g(x) such that h1(x)r(x) + p1(x)^{t1}g(x) = 1. Equivalently,
h1(T)r(T) + p1(T)^{t1}g(T) = I. Hence u1 = u1I = u1(h1(T)r(T) + p1(T)^{t1}g(T))
= u1h1(T)r(T) + u1p1(T)^{t1}g(T) = 0. Similarly we can show that each of
u2,…, uk is 0. It proves that V = V1⊕V2⊕…⊕Vk.
Now we will prove that pi(x)^{ti} is the minimal polynomial of Ti on Vi.
Since Vi pi(T)^{ti} = (0), pi(Ti)^{ti} = 0 on Vi. Hence the minimal polynomial of
Ti divides pi(x)^{ti} and is therefore pi(x)^{ri}, ri ≤ ti, for each i = 1, 2,…, k. By
Corollary 2.3.4, the minimal polynomial of T on V is the least common multiple
of p1(x)^{r1}, p2(x)^{r2},…, pk(x)^{rk}, which is p1(x)^{r1} p2(x)^{r2} … pk(x)^{rk}.
But the minimal polynomial is in fact p1(x)^{t1} p2(x)^{t2} ... pk(x)^{tk}, therefore
ti ≤ ri for each i = 1, 2,…, k. Combined with ri ≤ ti, we get ri = ti, i.e. the minimal
polynomial of Ti on Vi is pi(x)^{ti}. It proves the result.


2.3.7 Corollary. If all the distinct characteristic roots λ1, λ2,…, λk of T lie in F,
then V can be written as V = V1⊕V2⊕…⊕Vk, where
Vi = {v∈V | v(T − λi)^{ti} = 0} and where Ti has only one characteristic root, λi,
on Vi.
Proof. As we know, if all the distinct characteristic roots of T lie in F, then every
characteristic root of T is a root of its minimal polynomial and vice versa. Since
the distinct characteristic roots λ1, λ2,…, λk of T lie in F, let ti be the multiplicity
of λi as a root of the minimal polynomial. Then the minimal polynomial of T
over F is (x − λ1)^{t1}(x − λ2)^{t2}...(x − λk)^{tk}. If we define
Vi = {v∈V | v(T − λi)^{ti} = 0}, then by Theorem 2.3.6 the corollary follows.

2.3.8 Definition. The t×t matrix

    [ λ  1  0  ...  0 ]
    [ 0  λ  1  ...  0 ]
    [ 0  0  λ  ...  0 ]
    [ .  .  .  ...  1 ]
    [ 0  0  0  ...  λ ]

with λ on the diagonal and 1 on the superdiagonal is called the (basic) Jordan
block of order t belonging to λ. For example,

    [ λ  1 ]
    [ 0  λ ]

is the Jordan block of order 2 belonging to λ.
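Computer algebra systems compute this form directly. A sympy sketch (sympy
works with column vectors, so the relation reads A = PJP^{-1} rather than the
row convention used here; the matrix is an arbitrary illustrative choice):

    import sympy as sp

    A = sp.Matrix([[2, 1, 0],
                   [0, 2, 0],
                   [0, 0, 3]])
    P, J = A.jordan_form()     # A = P * J * P**-1
    sp.pprint(J)               # a Jordan block of order 2 for λ = 2, one of order 1 for λ = 3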

2.3.9 Theorem. If all the distinct characteristic roots λ1, λ2, …,λk of T∈A(V) lies
in F, then a basis of V can be found in which the matrix of T is of the form

⎡J1 0 0 0 ⎤ ⎡Bi1 0 0 0 ⎤
⎢0 J ⎥ ⎢ 0 Bi 2 0 0 ⎥⎥
⎢ 2 0 0⎥ ⎢
where each Ji= and where Bi1, Bi2,…,
⎢ ... ... ... ... ⎥ ⎢ ... ... ... ... ⎥
⎢ ⎥ ⎢ ⎥
⎣ 0 0 0 Jk ⎦ ⎢⎣ 0 0 0 Biri ⎥⎦

Biri are basic Jordan block belonging to λ.

Proof. Since all the characteristic roots of T lie in F, the minimal polynomial
of T over F is of the form (x − λ1)^t1 (x − λ2)^t2 … (x − λk)^tk. If we define
Vi = {v∈V | v(T − λi)^ti = 0}, then each Vi ≠ (0) is a subspace of V which is
invariant under T, V = V1⊕V2⊕…⊕Vk, and (x − λi)^ti is the minimal
polynomial of Ti. As we know, if V is a direct sum of subspaces invariant
under T, then we can find a basis of V in which the matrix of T is of the form

⎡J1 0 … 0 ⎤
⎢0 J2 … 0 ⎥
⎢… … … … ⎥
⎣0 0 … Jk⎦

where each Ji is the ni×ni matrix of Ti (the transformation induced by T on
Vi) under the chosen basis of Vi. Since the minimal polynomial of Ti on Vi is
(x − λi)^ti, the transformation (Ti − λiI) is nilpotent on Vi with index of
nilpotence ti. But then we can obtain a basis Xi of Vi in which the matrix of
(Ti − λiI) is of the form

⎡Mi1 0 … 0 ⎤
⎢0 Mi2 … 0 ⎥
⎢… … … … ⎥
⎣0 0 … Miri⎦ (ni×ni)

where Mij denotes the elementary nilpotent block of order ij, with
i1 ≥ i2 ≥ … ≥ iri and i1 + i2 + … + iri = ni = dim Vi. Since
Ti = λiI + (Ti − λiI), the matrix of Ti in the basis Xi of Vi is Ji = (matrix of
λiI under Xi) + (matrix of Ti − λiI under Xi). Hence

Ji = ⎡λi 0 … 0⎤   ⎡Mi1 0 … 0 ⎤   ⎡Bi1 0 … 0 ⎤
     ⎢0 λi … 0⎥ + ⎢0 Mi2 … 0 ⎥ = ⎢0 Bi2 … 0 ⎥
     ⎢… … … …⎥   ⎢… … … … ⎥   ⎢… … … … ⎥
     ⎣0 0 … λi⎦   ⎣0 0 … Miri⎦   ⎣0 0 … Biri⎦

where the Bij are Jordan blocks belonging to λi. It proves the result.
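As a computational illustration (our own example, not from the text, and it assumes the sympy library is available), jordan_form exhibits the basis change of Theorem 2.3.9 for a small matrix whose only characteristic root 2 lies in Q:

    from sympy import Matrix

    T = Matrix([[3, 1],
                [-1, 1]])    # characteristic polynomial (x - 2)^2; T != 2I, so
                             # the minimal polynomial is also (x - 2)^2
    P, J = T.jordan_form()   # T = P * J * P**(-1)
    print(J)                 # Matrix([[2, 1], [0, 2]]): one Jordan block for λ = 2

Here the whole space is a single subspace V1 with t1 = 2, and J consists of one Jordan block belonging to λ = 2.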

2.4 CANONICAL FORM(RATIONAL FORM)


2.4.1 Definition. An abelian group M is called a module over a ring R (or an R-module) if
rm∈M for all r∈R and m∈M and
(i) (r + s)m = rm + sm
(ii) r(m1 + m2) = rm1 + rm2
(iii) (rs)m = r(sm) for all r, s∈R and m, m1, m2∈M.
2.4.2 Definition. Let V be a vector space over the field F and T∈A(V). For f(x)∈F[x]
and v∈V, define f(x)v = vf(T). Under this multiplication V becomes an
F[x]-module.

2.4.3 Definition. An R-module M is called a cyclic module if M = {rm0 | r∈R} for
some m0∈M.

2.4.4 Result. If M is a finitely generated module over a principal ideal domain R,
then M can be written as a direct sum of a finite number of cyclic R-modules,
i.e. there exist x1, x2, …, xn in M such that
M = Rx1 ⊕ Rx2 ⊕ … ⊕ Rxn.

2.4.5 Definition. Let f(x) = a0 + a1x + … + am−1x^(m−1) + x^m be a monic
polynomial over the field F. The companion matrix of f(x) is the square
matrix [bij] of order m with bi,i+1 = 1 for 1 ≤ i ≤ m−1, bm,j = −aj−1 for
1 ≤ j ≤ m, and bij = 0 for the remaining entries:

⎡ 0    1    0   …   0   ⎤
⎢ 0    0    1   …   0   ⎥
⎢ …    …    …   …   1   ⎥
⎣−a0  −a1  −a2  …  −am−1⎦ (m×m)

It is denoted by C(f(x)). For example, the companion matrix of
1 + 2x − 5x² + 4x³ + x⁴ is

⎡ 0  1  0  0⎤
⎢ 0  0  1  0⎥
⎢ 0  0  0  1⎥
⎣−1 −2  5 −4⎦ (4×4)
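The construction is easy to check by machine. The following sketch (our own illustration, assuming the sympy library) builds the companion matrix of 1 + 2x − 5x² + 4x³ + x⁴ in the row convention used above and verifies that its characteristic polynomial recovers the polynomial:

    from sympy import Matrix, symbols, zeros

    x = symbols('x')
    a = [1, 2, -5, 4]               # a0, a1, a2, a3 of the monic quartic
    m = len(a)
    C = zeros(m, m)
    for i in range(m - 1):
        C[i, i + 1] = 1             # b(i, i+1) = 1 on the superdiagonal
    for j in range(m):
        C[m - 1, j] = -a[j]         # last row: -a0, -a1, ..., -a(m-1)

    print(C.charpoly(x).as_expr())  # x**4 + 4*x**3 - 5*x**2 + 2*x + 1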

2.4.6 Note. Every F[x]-module M becomes a vector space over F (restrict the
scalars to the constant polynomials). In particular, under the multiplication
f(x)v = vf(T), T∈A(V) and v∈V, V is an F[x]-module and a vector space
over F.

2.4.7 Theorem. Let V be a vector space over F and T∈A(V). If f(x) = a0 + a1x +
… + am−1x^(m−1) + x^m is the minimal polynomial of T over F and V is a
cyclic F[x]-module, then there exists a basis of V under which the matrix of T
is the companion matrix of f(x).
Proof. Clearly V becomes an F[x]-module under the multiplication defined by
f(x)v = vf(T) for all v∈V, T∈A(V). As V is a cyclic F[x]-module, there
exists v0∈V such that V = F[x]v0 = {g(x)v0 | g(x)∈F[x]} = {v0g(T) |
g(x)∈F[x]}. Now we show that if v0s(T) = 0 for some s(x)∈F[x], then s(T)
is the zero transformation on V. Any v∈V is of the form v = g(x)v0, so
vs(T) = (g(x)v0)s(T) = (v0g(T))s(T) = (v0s(T))g(T) = 0·g(T) = 0, i.e. every
element of V is taken to 0 by s(T). Hence s(T) is the zero transformation on
V; in other words, T satisfies s(x). But then the minimal polynomial f(x)
divides s(x). Hence we have shown that for a polynomial s(x)∈F[x], if
v0s(T) = 0, then f(x) | s(x).
Now consider the set A = {v0, v0T, …, v0T^(m−1)} of elements of V.
We will show that it is the required basis of V. Take r0v0 + r1(v0T) + … +
rm−1(v0T^(m−1)) = 0, ri∈F, and suppose that at least one ri is nonzero. Then
v0(r0 + r1T + … + rm−1T^(m−1)) = 0, so by the above discussion
f(x) | (r0 + r1x + … + rm−1x^(m−1)), which is impossible, since the latter is
a nonzero polynomial of degree less than m = deg f(x). Hence if
r0v0 + r1(v0T) + … + rm−1(v0T^(m−1)) = 0, then each ri = 0, i.e. the set
A is linearly independent over F.
Take v∈V. Then v = t(x)v0 for some t(x)∈F[x]. We can write
t(x) = f(x)q(x) + r(x) with r(x) = r0 + r1x + … + rm−1x^(m−1); therefore
t(T) = f(T)q(T) + r(T), where r(T) = r0 + r1T + … + rm−1T^(m−1). Hence
v = t(x)v0 = v0t(T) = v0(f(T)q(T) + r(T)) = v0f(T)q(T) + v0r(T) = v0r(T)
= v0(r0 + r1T + … + rm−1T^(m−1)) = r0v0 + r1(v0T) + … + rm−1(v0T^(m−1)).
Hence every element of V is a linear combination of the elements of the set A
over F. Therefore A is a basis of V over F.
Let v1 = v0, v2 = v0T, v3 = v0T², …, vm−1 = v0T^(m−2), vm = v0T^(m−1).
Then
v1T = v2 = 0·v1 + 1·v2 + 0·v3 + … + 0·vm−1 + 0·vm,
v2T = v3 = 0·v1 + 0·v2 + 1·v3 + … + 0·vm−1 + 0·vm,
… … … …,
vm−1T = vm = 0·v1 + 0·v2 + 0·v3 + … + 0·vm−1 + 1·vm.
Since f(T) = 0 ⇒ v0f(T) = 0 ⇒ v0(a0 + a1T + … + am−1T^(m−1) + T^m) = 0
⇒ a0v0 + a1v0T + … + am−1v0T^(m−1) + v0T^m = 0
⇒ v0T^m = −a0v0 − a1v0T − … − am−1v0T^(m−1).
Therefore vmT = (v0T^(m−1))T = v0T^m = −a0v0 − a1v0T − … − am−1v0T^(m−1)
= −a0v1 − a1v2 − … − am−1vm.
Hence, under the basis v1 = v0, v2 = v0T, v3 = v0T², …, vm = v0T^(m−1), the
matrix of T is

⎡ 0    1    0   …   0   ⎤
⎢ 0    0    1   …   0   ⎥
⎢ …    …    …   …   1   ⎥
⎣−a0  −a1  −a2  …  −am−1⎦ (m×m)

= C(f(x)). It proves the result.
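For illustration (an example of ours, not from the text): let V = Q² and let T∈A(V) satisfy T² = −I, so that the minimal polynomial of T is f(x) = x² + 1. V is a cyclic F[x]-module generated by any nonzero v0, and in the basis v1 = v0, v2 = v0T we get v1T = v2 and v2T = v0T² = −v0 = −1·v1 + 0·v2. Hence the matrix of T in this basis is C(x² + 1), with rows (0, 1) and (−1, 0).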

2.4.8 Theorem. Let V be a finite dimensional vector space over F and T∈A(V).
Suppose q(x)^t is the minimal polynomial of T over F, where q(x) is an
irreducible monic polynomial over F. Then there exists a basis of V such that
the matrix of T under this basis is of the form

⎡C(q(x)^t1)  0          …  0         ⎤
⎢0           C(q(x)^t2) …  0         ⎥
⎢…           …          …  …         ⎥
⎣0           0          …  C(q(x)^tk)⎦

where t = t1 ≥ t2 ≥ … ≥ tk.

Proof. We know that if M is a finitely generated module over a principal
ideal domain R, then M can be written as a direct sum of a finite number of
cyclic R-submodules. V is an F[x]-module with the scalar multiplication
defined by f(x)v = vf(T). As V is a finite dimensional vector space over F, it
is a finitely generated F[x]-module (any F-basis of V generates V over F[x]).
Since F[x] is a principal ideal domain, we can obtain cyclic submodules of V,
say F[x]v1, F[x]v2, …, F[x]vk, such that V = F[x]v1 ⊕ F[x]v2 ⊕ … ⊕ F[x]vk,
vi∈V. Each F[x]vi is invariant under T: for g(x)∈F[x],
(g(x)vi)T = (vig(T))T = vi(g(T)T) = vi(Tg(T)) = (xg(x))vi ∈ F[x]vi.
But then we can find a basis of V in which the matrix of T is

⎡A1 0 … 0 ⎤
⎢0 A2 … 0 ⎥
⎢… … … … ⎥
⎣0 0 … Ak⎦

where Ai is the matrix of Ti, the transformation induced by T on Vi = F[x]vi,
under the chosen basis of Vi. Now we claim that Ai = C(q(x)^ti). Let pi(x)
be the minimal polynomial of Ti (i.e. of T on Vi). Since wiq(T)^t = 0 for all
wi∈F[x]vi, pi(x) divides q(x)^t; thus pi(x) = q(x)^ti with 1 ≤ ti ≤ t.
Re-indexing the Vi, we may arrange t1 ≥ t2 ≥ … ≥ tk. Since V = F[x]v1 ⊕
F[x]v2 ⊕ … ⊕ F[x]vk, the minimal polynomial of T on V is
lcm(q(x)^t1, q(x)^t2, …, q(x)^tk) = q(x)^t1. Then q(x)^t = q(x)^t1, hence
t = t1. By Theorem 2.4.7, since each Vi is a cyclic F[x]-module, the matrix of
T on Vi is the companion matrix of the monic minimal polynomial of T on Vi.
Hence Ai = C(q(x)^ti). It proves the result.

2.4.9 Theorem. Let V be a finite dimensional vector space over F and T∈A(V).
Suppose q1(x)^t1 q2(x)^t2 … qk(x)^tk is the minimal polynomial of T over F,
where the qi(x) are distinct irreducible monic polynomials over F. Then there
exists a basis of V such that the matrix of T under this basis is of the form

⎡A1 0 … 0 ⎤
⎢0 A2 … 0 ⎥
⎢… … … … ⎥
⎣0 0 … Ak⎦ (n×n)

where

Ai = ⎡C(qi(x)^ti1)  0            …  0            ⎤
     ⎢0             C(qi(x)^ti2) …  0            ⎥
     ⎢…             …            …  …            ⎥
     ⎣0             0            …  C(qi(x)^tiri)⎦

with ti = ti1 ≥ ti2 ≥ … ≥ tiri for each i, 1 ≤ i ≤ k,
Σ_{j=1}^{ri} tij·deg qi(x) = ni = dim Vi, and Σ_{i=1}^{k} ni = n.

Proof. Let Vi = {v∈V | vqi(T)^ti = 0}. Then each Vi is a nonzero invariant
(under T) subspace of V and V = V1 ⊕ V2 ⊕ … ⊕ Vk. Also the minimal
polynomial of T on Vi is qi(x)^ti. For such a V we can find a basis under
which the matrix of T is of the form diag(A1, A2, …, Ak) above, in which
each Ai is a square matrix, namely the matrix of T on Vi. As Ti has qi(x)^ti
as its minimal polynomial, by Theorem 2.4.8,

Ai = diag(C(qi(x)^ti1), C(qi(x)^ti2), …, C(qi(x)^tiri)).

The rest of the result is easy to prove.

2.4.10 Definition. The polynomials q1(x)^t11, …, q1(x)^t1r1, …, qk(x)^tk1, …,
qk(x)^tkrk are called the elementary divisors of T.
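For instance (our own illustration): if dim V = 4 and V decomposes into cyclic submodules on which the minimal polynomials of T are (x − 1)², (x − 1) and (x − 2), then the elementary divisors of T are (x − 1)², (x − 1), (x − 2), the minimal polynomial of T is (x − 1)²(x − 2), and the rational canonical form of T is diag(C((x − 1)²), C(x − 1), C(x − 2)).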
2.4.11 Theorem. Prove that the elementary divisors of T are unique.

Proof. Let q(x) = q1(x)^l1 q2(x)^l2 … qk(x)^lk be the minimal polynomial of
T, where each qi(x) is irreducible and li ≥ 1. Let Vi = {v∈V | vqi(T)^li = 0}.
Then Vi is a nonzero invariant subspace of V, V = V1⊕V2⊕…⊕Vk, and the
minimal polynomial of T on Vi, i.e. of Ti, is qi(x)^li. Moreover, we can find a
basis of V in which the matrix of T is diag(R1, …, Rk), where Ri is the matrix
of T on Vi.
Since V becomes an F[x]-module under the operation f(x)v = vf(T),
each Vi is also an F[x]-module. Hence there exist v1, v2, …, vri∈Vi such that
Vi = F[x]v1 ⊕ … ⊕ F[x]vri = Vi1 ⊕ Vi2 ⊕ … ⊕ Viri, where each Vij is a
subspace of Vi (and hence of V) and is a cyclic F[x]-module. Let qi(x)^lij be
the minimal polynomial of T on Vij. Then the qi(x)^lij are the elementary
divisors of T, 1 ≤ i ≤ k and 1 ≤ j ≤ ri. Thus, to prove that the elementary
divisors of T are unique, it is sufficient to prove that for each i, 1 ≤ i ≤ k, the
polynomials qi(x)^li1, qi(x)^li2, …, qi(x)^liri are unique. Equivalently, we
may assume that T∈A(V) has minimal polynomial q(x)^l with q(x)
irreducible, and show that T has unique elementary divisors.
Suppose V = V1⊕V2⊕…⊕Vr and V = W1⊕W2⊕…⊕Ws, where each
Vi and Wi is a cyclic F[x]-module, the minimal polynomial of T on Vi is
q(x)^li with l = l1 ≥ l2 ≥ … ≥ lr, and the minimal polynomial of T on Wi is
q(x)^l*i with l = l*1 ≥ l*2 ≥ … ≥ l*s. Since a cyclic module on which the
minimal polynomial of T is q(x)^li has dimension d·li over F, where d is the
degree of q(x), we get Σ_{i=1}^r li·d = n = dim V = Σ_{i=1}^s l*i·d.
We will show that li = l*i for all i, and r = s. Suppose t is the first
integer such that l1 = l*1, l2 = l*2, …, lt−1 = l*t−1 and lt ≠ l*t; without loss
of generality suppose lt > l*t. Since each Vj is invariant under T,
Vq(T)^l*t = V1q(T)^l*t ⊕ … ⊕ Vrq(T)^l*t, and dim Vjq(T)^l*t = d(lj − l*t)
whenever lj ≥ l*t. Hence

dim Vq(T)^l*t ≥ Σ_{j=1}^{t} d(lj − l*t) > Σ_{j=1}^{t−1} d(l*j − l*t),

the last inequality because lj = l*j for j < t and lt > l*t. On the other hand,
computing the same dimension with the Wj, we get
dim Vq(T)^l*t = Σ_j dim Wjq(T)^l*t = Σ_{j=1}^{t−1} d(l*j − l*t), since
Wjq(T)^l*t = (0) for j ≥ t. The two computations contradict each other. Thus
lt ≤ l*t. Similarly, we can show that lt ≥ l*t. Hence lt = l*t. This holds for
every t, and then Σ li·d = Σ l*i·d forces r = s.

2.5 KEY WORDS


Nilpotent Transformations, similar transformations, characteristic roots,
canonical forms.

2.6 SUMMARY
For T∈A(V), where V is a finite dimensional vector space over F, we study
nilpotent transformations, Jordan forms and rational canonical forms.

2.7 SELF ASSESMENT QUESTIONS


(1) Show that all the characteristic roots of a nilpotent transformation are zero.
(2) If S and T are nilpotent transformations, then show that S+T and ST are
also nilpotent.
(3) Show that S and T are similar if and only if they have the same elementary
divisors.

2.8 SUGGESTED READINGS:


(1) Modern Algebra; SURJEET SINGH and QAZI ZAMEERUDDIN, Vikas
Publications.
(2) Basic Abstract Algebra; P.B. BHATTARAYA, S.K.JAIN, S.R.
NAGPAUL, Cambridge University Press, Second Edition.
MAL-521: M. Sc. Mathematics (Advance Abstract Algebra)
Lesson No. 3 Written by Dr. Pankaj Kumar
Lesson: Modules I Vetted by Dr. Nawneet Hooda
STRUCTURE
3.0 OBJECTIVE
3.1 INTRODUCTION
3.2 MODULES (CYCLIC MODULES)
3.3 SIMPLE MODULES
3.4 SEMI-SIMPLE MODULES
3.5 FREE MODULES
3.6 NOETHERIAN AND ARTINIAN MODULES
3.7 NOETHERIAN AND ARTINIAN RINGS
3.8 KEY WORDS
3.9 SUMMARY
3.10 SELF ASSESMENT QUESTIONS
3.11 SUGGESTED READINGS

3.0 OBJECTIVE
Objective of this chapter is to study another algebraic system
(modules over an arbitrary ring R) which is a generalization of vector spaces
over a field F.

3.1 INTRODUCTION
A vector space is an algebraic system with two binary operations over
a field F which satisfies certain conditions. If the field F is replaced by an
arbitrary ring R, the vector space V becomes an R-module, or a module over
the ring R.
In the first section of this chapter we study definitions and examples of
modules. In Section 3.3, we study simple modules (i.e. modules having
no proper submodule). In the next section, semi-simple modules are studied. Free
modules are studied in Section 3.5. We also study ascending and descending
chain conditions for submodules of a given module. There are certain modules
which satisfy the ascending chain condition (called Noetherian modules) and
others which satisfy the descending chain condition (called Artinian modules).
Such modules are studied in Section 3.6. At last we study Noetherian and
Artinian rings.

3.2 MODULES(CYCLIC MODULES)


3.2.1 Definition. Let R be a ring. An additive abelian group M together with a
scalar multiplication μ: R×M → M is called a left R-module if for all r, s∈R
and x, y∈M
(i) μ(r, x + y) = μ(r, x) + μ(r, y)
(ii) μ(r + s, x) = μ(r, x) + μ(s, x)
(iii) μ(r, sx) = μ(rs, x).
If we denote μ(r, x) = rx, then the above conditions are equivalent to
(i) r(x + y) = rx + ry
(ii) (r + s)x = rx + sx
(iii) r(sx) = (rs)x.
If R has an identity element 1 and
(iv) 1x = x for all x∈M, then M is called a unitary (left) R-module.

Note. If R is a division ring, then a unitary (left) R-module is called a left
vector space over R.

Example (i) Let Z be the ring of integers and G any abelian group, with nx
defined by
nx = x + x + … + x (n times) for positive n,
nx = −x − x − … − x (|n| times) for negative n, and 0x = 0.
Then G is a Z-module.
(ii) Every extension K of a field F is also an F-module.
(iii) R[x], the ring of polynomials over the ring R, is an R-module

3.2.2 Definition. Submodule. Let M be an R-module. A subset N of M is
called an R-submodule of M if N itself is a module under the scalar
multiplication of M restricted to N. Equivalently, N is an R-submodule if
(i) x − y∈N
(ii) rx∈N for all x, y∈N and r∈R.
Example (i) {0} and M are submodules of the R-module M. These are called
trivial submodules.
(ii) 2Z (the set of all even integers) is a Z-module, and 4Z and 8Z are its
Z-submodules.
(iii) Each left ideal of a ring R is an R-submodule of left R-module and vice
versa.

3.2.3 Theorem. If M is a left R-module and x∈M, then the set Rx = {rx | r∈R} is an
R-submodule of M.
Proof. As Rx = {rx | r∈R}, for r1 and r2 belonging to R, r1x and r2x
belong to Rx. Since r1 − r2∈R, r1x − r2x = (r1 − r2)x∈Rx. Moreover, for
r and s∈R, s(rx) = (sr)x∈Rx. Hence Rx is an R-submodule of M.
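For instance (an illustration of ours): with R = Z and M = Z², the element x = (1, 2) gives Rx = {(n, 2n) | n∈Z}, a Z-submodule of Z².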

3.2.4 Theorem. If M is an R-module and x∈M, then K = {rx + nx | r∈R, n∈Z}
is the smallest R-submodule of M containing x. Further, if M is a unitary
R-module, then K = Rx.
Proof. Since for r1, r2∈R and n1, n2∈Z we have r1 − r2∈R and n1 − n2∈Z,
therefore r1x + n1x − (r2x + n2x) = (r1 − r2)x + (n1 − n2)x∈K. Moreover, for
s∈R, s(rx + nx) = s(rx) + s(nx) = (sr)x + (ns)x = ((sr) + (ns))x + 0·x ∈ K,
since (sr) + (ns)∈R. Hence K is an R-submodule. As x = 0·x + 1x∈K,
K is an R-submodule containing x. Let S be another R-submodule
containing x; then rx and nx∈S for all r∈R, n∈Z. Hence K ⊆ S. Therefore K
is the smallest R-submodule containing x.
If M is a unitary R-module, then there is 1∈R such that 1·m = m for all
m∈M. Hence for x∈M, x = 1·x∈Rx. By Theorem 3.2.3, Rx is an
R-submodule; but K is the smallest R-submodule of M containing x, hence
K ⊆ Rx. Now for rx∈Rx, rx = rx + 0x∈K, so Rx ⊆ K. Hence K = Rx. It
proves the theorem.

3.2.5 Definition. Let S be a subset of an R-module M. The submodule generated by
S, denoted by <S>, is the smallest submodule of M containing S.
3.2.6 Theorem. Let S be a subset of an R-module M. Then <S> = {0} if S = φ, and
<S> = C(S) = {r1x1 + r2x2 + … + rnxn | ri∈R} if S = {x1, x2, …, xn}.
Proof. Since <S> is the smallest submodule containing S, for the case
S = φ we get <S> = {0}. Suppose S = {x1, x2, …, xn}. Let x and y∈C(S).
Then x = r1x1 + r2x2 + … + rnxn and y = t1x1 + t2x2 + … + tnxn with ri,
ti∈R, and x − y = (r1−t1)x1 + (r2−t2)x2 + … + (rn−tn)xn∈C(S). Similarly
rx∈C(S) for all r∈R and x∈C(S). Therefore C(S) is a submodule of M.
Further, if N is another submodule containing S, then x1, x2, …, xn∈N and
hence r1x1 + r2x2 + … + rnxn∈N, i.e. C(S) ⊆ N. It shows that C(S) = <S> is
the smallest such submodule.

3.2.7 Definition. Cyclic module. An R-module M is called a cyclic module if it is
generated by a single element of M. The cyclic module generated by x is
<x> = {rx + nx | r∈R, n∈Z}. Further, if M is a unitary R-module, then
<x> = {rx | r∈R}.
Example. (i) Every finite cyclic group, written additively, is a cyclic Z-module.
(ii) Every field F, as an F-module, is a cyclic module.

3.3 SIMPLE MODULES


3.3.1 Definition. A module M is said to be a simple R-module if RM ≠ {0} and the
only submodules of M are {0} and M.

3.3.2 Theorem. Let M be a unitary R-module. Then M is simple if and
only if M = Rx for every nonzero x∈M. In other words, M is simple if and
only if it is generated by every nonzero element x∈M.
Proof. First suppose that M is simple. Consider Rx = {rx | r∈R}. By Theorem
3.2.3, it is an R-submodule of M. As M is a unitary R-module, there
exists 1∈R such that 1·m = m for all m∈M. Hence x(≠0) = 1·x∈Rx, therefore
Rx is a nonzero submodule of M. Since M is simple, M = Rx. It proves the
result.
Conversely, suppose that M = Rx for every nonzero x in M; then in
particular RM ≠ {0}. Let A be any nonzero submodule of M. Then A ⊆ M. Let
y be a nonzero element in A. Then y∈M, hence by our assumption M = Ry.
Since A is a submodule containing y, Ry ⊆ A, hence M ⊆ A. Now
A ⊆ M and M ⊆ A imply that M = A, i.e. M has no nonzero proper
submodule. Hence M is simple.

3.3.3 Corollary. Let R be a unitary ring. Then R is a simple R-module if and only if
R is a division ring.
Proof. First suppose that R is a simple R-module. We will show that R is a
division ring. Let x be a nonzero element in R. As R is a unitary simple
R-module, by Theorem 3.3.2, R = Rx. As 1∈R = Rx, there exists a nonzero y
in R such that 1 = yx, i.e. every nonzero element of R has a left inverse. It
follows that R is a division ring.
Conversely, suppose that R is a division ring. The left ideals of a
ring R are exactly the R-submodules of the R-module R, and a division ring
has only the two left ideals {0} and R itself. Hence R has only the trivial
submodules, and R·1 = R ≠ {0}. Therefore R is a simple R-module.

3.3.4 Definition. A mapping f from an R-module M to an R-module N is called a
homomorphism if
(i) f(x + y) = f(x) + f(y) (ii) f(rx) = rf(x) for all x, y∈M and r∈R.
It is easy to see that f(0) = 0, f(−x) = −f(x) and f(x−y) = f(x) − f(y).

3.3.5 Theorem (Fundamental theorem on homomorphisms). If f is a
homomorphism from an R-module M into an R-module N, then
M/ker f ≅ f(M).

3.3.6 Problem. Let R be a ring with unity and M an R-module. Show that M is
cyclic if and only if M ≅ R/I, where I is a left ideal of R.
Solution. First let M be cyclic, i.e. M = Rx for some x∈M. Define a mapping
φ: R → M by φ(r) = rx, r∈R. Since φ(r1 + r2) = (r1 + r2)x = r1x + r2x =
φ(r1) + φ(r2) and φ(sr) = (sr)x = s(rx) = sφ(r) for all r1, r2, s, r∈R, φ is a
homomorphism from R to M. As M = Rx, for rx∈M there exists r∈R such
that φ(r) = rx, i.e. the mapping is onto also. Hence by the fundamental
theorem on homomorphisms, R/Ker φ ≅ M. But Ker φ is a left ideal of R,
therefore, taking Ker φ = I, we get M ≅ R/I.
Conversely, suppose that M ≅ R/I. Let f: R/I → M be an isomorphism
and let f(1+I) = x. Then for r∈R, f(r+I) = f(r(1+I)) = rf(1+I) = rx, i.e. we
have shown that img f = {rx | r∈R} = Rx. Since the image of f is M, M = Rx
for some x∈M. Thus M is cyclic. It proves the result.
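For instance (an illustration of ours): the Z-module Z/nZ is cyclic, generated by 1 + nZ, and Z/nZ ≅ Z/I with I = <n> = nZ, in agreement with the problem above.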

3.3.7 Theorem. Let N be a submodule of M. Prove that the submodules of the
quotient module M/N are of the form U/N, where U is a submodule of M
containing N.
Proof. Define a mapping f: M → M/N by f(m) = m + N for all m∈M. Let X be
a submodule of M/N. Define U = {x∈M | f(x)∈X} = {x∈M | x + N∈X}. Let
x, y∈U. Then f(x), f(y)∈X. But then f(x−y) = f(x) − f(y)∈X and, for r∈R,
f(rx) = rf(x)∈X. Hence, by the definition of U, x−y and rx∈U, i.e. U is an
R-submodule of M. Also N ⊆ U, because for all x∈N, f(x) = x + N = N, the
zero of M/N, which lies in X; therefore f(x)∈X. Because f is an onto mapping,
for x∈X there always exists y∈M such that f(y) = x, and by the definition of
U, y∈U. Hence X ⊆ f(U). Clearly f(U) ⊆ X. Thus X = f(U). But f(U) = U/N.
Hence X = U/N. It proves the result.

3.3.8 Theorem. Let M be a unitary R-module. Then the following are equivalent:
(i) M is a simple R-module
(ii) every nonzero element of M generates M
(iii) M ≅ R/I, where I is a maximal left ideal of R.
Proof. (i)⇒(ii) follows from Theorem 3.3.2.
(ii)⇒(iii). As every nonzero element of M generates M, M is cyclic,
and by Problem 3.3.6, M ≅ R/I for a left ideal I of R. Now we have to show
that I is maximal. Since M is simple, R/I is also simple. But then, by Theorem
3.3.7, I is a maximal left ideal of R. It proves (iii).
(iii)⇒(i). By (iii), M ≅ R/I, I a maximal left ideal of R. Since I is a
maximal left ideal of R, I ≠ R. Further, 1+I ∈ R/I and R(R/I) ≠ {I} imply that
RM ≠ {0}. Let N be a submodule of M and f an isomorphism from M to R/I.
Since f(N) is a submodule of R/I, by Theorem 3.3.7, f(N) = J/I for some left
ideal J of R containing I. But I is a maximal left ideal of R, therefore J = I or
J = R. If J = I, then f(N) = {I} implies that N = {0}. If J = R, then f(N) = R/I
implies that N = M. Hence M has no nontrivial submodule, i.e. M is simple.

3.3.9 Theorem (Schur's lemma). For a simple R-module M, HomR(M, M) is a
division ring.
Proof. The set of all homomorphisms from M to M forms a ring under the
operations defined by (f + g)(x) = f(x) + g(x) and (f·g)(x) = f(g(x)) for all f
and g in this set and all x∈M. In order to show that HomR(M, M) is a division
ring, we have to show that every nonzero homomorphism f has an inverse in
HomR(M, M), i.e. that f is one-one and onto. As f: M → M, consider ker f and
img f. Both are submodules of M. But M is simple, therefore ker f = {0} or M.
If ker f = M, then f is the zero homomorphism; but f is a nonzero
homomorphism, hence ker f = {0}, i.e. f is one-one.
Similarly img f = {0} or M. If img f = {0}, then f is the zero
mapping, which is not true. Hence img f = M, i.e. the mapping is onto also.
Hence f is invertible. Therefore we have shown that every nonzero element of
HomR(M, M) is invertible. It means HomR(M, M) is a division ring.
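For example (an illustration of ours): for a prime p, the Z-module Z/pZ is simple, and HomZ(Z/pZ, Z/pZ) ≅ Z/pZ (every homomorphism is multiplication by a fixed residue), which is a field, in agreement with Schur's lemma.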
3.4 SEMI-SIMPLE MODULES
3.4.1 Definition. Let M be an R-module and (Ni), 1 ≤ i ≤ t, a family of submodules
of M. The submodule generated by ∪_{i=1}^t Ni is the smallest submodule
containing all the submodules Ni. It is also called the sum of the submodules
Ni and is denoted by Σ_{i=1}^t Ni.

3.4.2 Theorem. Let M be an R-module and (Ni), 1 ≤ i ≤ t, a family of submodules
of M. Show that Σ_{i=1}^t Ni = {x1 + x2 + … + xt | xi∈Ni}.
Proof. Let S = {x1 + x2 + … + xt | xi∈Ni}. Further, let x and y∈S. Then
x = x1 + x2 + … + xt and y = y1 + y2 + … + yt with xi, yi∈Ni. Then
x − y = (x1 + x2 + … + xt) − (y1 + y2 + … + yt) = (x1−y1) + (x2−y2) + … +
(xt−yt)∈S. Similarly rx∈S for all r∈R and x∈S. Therefore S is a submodule
of M.
Further, if N is another submodule containing each Ni, then x1,
x2, …, xt∈N and hence x1 + x2 + … + xt∈N, i.e. S ⊆ N. It shows that S is the
smallest submodule containing each Ni. Therefore, by Definition 3.4.1,
Σ_{i=1}^t Ni = S = {x1 + x2 + … + xt | xi∈Ni}.

3.4.3 Note. If (Ni)_{i∈Λ} is a family of submodules of M, then
Σ_{i∈Λ} Ni = {Σ xi (finite sum) | xi∈Ni}.

3.4.4 Definition. Let (Ni)_{i∈Λ} be a family of submodules of M. The sum
Σ_{i∈Λ} Ni is called a direct sum if each element x of Σ_{i∈Λ} Ni can be
uniquely written as x = Σ xi, where xi∈Ni and xi = 0 for almost all i in the
index set Λ; in other words, only a finite number of the xi are nonzero in
Σ xi. It is denoted by ⊕Σ_{i∈Λ} Ni. Each Ni in ⊕Σ_{i∈Λ} Ni is called a direct
summand of the direct sum ⊕Σ_{i∈Λ} Ni.
3.4.5 Theorem. Let (Ni)_{i∈Λ} be a family of submodules of M. Then the following
are equivalent:
(i) Σ_{i∈Λ} Ni is direct
(ii) Ni ∩ Σ_{j∈Λ, j≠i} Nj = {0} for all i
(iii) 0 = Σ_{i∈Λ} xi ∈ Σ_{i∈Λ} Ni ⇒ xi = 0 for all i.
Proof. These results are easy to prove.

3.4.6 Definition. (Semi-simple module). An R-module M is called semi-simple or
completely reducible if M = Σ_{i∈Λ} Ni, where the Ni are simple
R-submodules of M.

Example. R³ is a semi-simple R-module when R is a field: it is the sum of the
three simple submodules R×{0}×{0}, {0}×R×{0} and {0}×{0}×R.

3.4.7 Theorem. Let M = Σ_{α∈Λ} Mα be a sum of simple R-submodules Mα and let
K be a submodule of M. Then there exists a subset Λ* ⊆ Λ such that
Σ_{α∈Λ*} Mα is a direct sum and M = K ⊕ (⊕Σ_{α∈Λ*} Mα).
Proof. Let S = {Λ** ⊆ Λ | Σ_{α∈Λ**} Mα is a direct sum and
K ∩ Σ_{α∈Λ**} Mα = {0}}. Since φ ⊆ Λ and Σ_{α∈φ} Mα = {0}, we have
K ∩ Σ_{α∈φ} Mα = K∩{0} = {0}. Hence φ∈S, and S is nonempty. Further, S
is a partially ordered set under inclusion, and every chain (Ai) in S has an
upper bound ∪Ai in S. Thus by Zorn's lemma S has a maximal element, say
Λ*. Let N = K ⊕ (⊕Σ_{α∈Λ*} Mα). We will show that N = M. Let ω∈Λ.
Since Mω is simple, either N∩Mω = {0} or N∩Mω = Mω. If N∩Mω = {0},
then Mω ∩ (K ⊕ (⊕Σ_{α∈Λ*} Mα)) = {0}, so Σ_{α∈Λ*∪{ω}} Mα is a direct
sum whose intersection with K is {0}, i.e. Λ*∪{ω}∈S. But this contradicts
the maximality of Λ*. Thus N∩Mω = Mω, i.e. Mω ⊆ N for every ω∈Λ,
proving that N = M.
3.4.8 Note. If we take K = {0} in Theorem 3.4.7, then we get the result:
“If M = Σ_{α∈Λ} Mα is the sum of simple R-submodules Mα, then there exists
a subset Λ* ⊆ Λ such that Σ_{α∈Λ*} Mα is a direct sum and
M = ⊕Σ_{α∈Λ*} Mα.”

3.4.9 Theorem. Let M be an R-module. Then the following conditions are
equivalent:
(i) M is semi-simple
(ii) M is a direct sum of simple modules
(iii) every submodule of M is a direct summand of M.
Proof. (i)⇒(ii). Since M is semi-simple, by definition M = Σ_{α∈Λ} Mα,
where the Mα are simple submodules. By Theorem 3.4.7 with K = {0}
(Note 3.4.8), there exists a subset Λ* ⊆ Λ such that Σ_{α∈Λ*} Mα is a direct
sum and M = ⊕Σ_{α∈Λ*} Mα, i.e. M is a direct sum of simple submodules.
(ii)⇒(iii). Let M = ⊕Σ_{α∈Λ} Mα, where each Mα is simple. Then M
is a sum of simple R-submodules, so by Theorem 3.4.7, for a given
submodule K of M we can find a subfamily Λ* of the given family Λ such
that M = K ⊕ (⊕Σ_{α∈Λ*} Mα). Take ⊕Σ_{α∈Λ*} Mα = M*. Then
M = K ⊕ M*. Therefore K is a direct summand of M.
(iii)⇒(i). First we show that every nonzero submodule of M contains a
simple submodule. Let N = Rx be a nonzero cyclic submodule of M. Since N
is finitely generated, N has a maximal submodule N* (every nonzero finitely
generated module has a maximal submodule). Since N* is a maximal
submodule of N, the quotient module N/N* is simple. Being a submodule of
N, N* is a submodule of M also; hence by (iii) N* is a direct summand of M,
so there exists a submodule M1 of M such that M = N*⊕M1. But then
N ⊆ N*⊕M1. If y∈N, then y = x + z where x∈N* and z∈M1. Since
z = y − x∈N (because y∈N and x∈N* ⊆ N), z∈N∩M1. Equivalently,
y∈N* + (N∩M1). Hence N ⊆ N* + (N∩M1). Since N* and N∩M1 are both
contained in N, N* + (N∩M1) ⊆ N. By the above discussion we conclude
that N* + (N∩M1) = N. Since M = N*⊕M1, N*∩M1 = {0}, therefore
N*∩(N∩M1) = (N*∩M1)∩N = {0}. Hence N = N* ⊕ (N∩M1).
Now

N/N* = (N* + (N∩M1))/N* ≅ (N∩M1)/(N*∩(N∩M1)) = (N∩M1)/{0} ≅ N∩M1.

Since N/N* is simple, N∩M1 is a simple submodule of N and hence of M
also. By the above discussion we conclude that M always has a simple
submodule. Take ƒ = {Mω}_{ω∈Λ} as the family of all simple submodules of
M. Then by the above discussion ƒ ≠ φ. Let X = Σ_{ω∈Λ} Mω. Then X is a
submodule of M. By (iii), X is a direct summand of M, therefore there exists
M* such that M = X⊕M*. We will show that M* = {0}. If M* is nonzero,
then M* contains a simple submodule, say Y. Then Y∈ƒ, hence Y ⊆ X. But
then Y ⊆ X∩M* = {0}, a contradiction. Hence M* = {0} and
M = X = Σ_{ω∈Λ} Mω, i.e. M is semi-simple, and (i) follows.

3.4.10 Theorem. Prove that submodules and factor modules of a semi-simple module
are again semi-simple.
Proof. Let M be a semi-simple R-module and N a submodule of M. Let X be a
submodule of N. As M is semi-simple, every submodule of M is a direct
summand of M; hence there exists M* such that M = X⊕M*. But then, since
X ⊆ N, N = M∩N = (X⊕M*)∩N = X⊕(M*∩N). Hence X is a direct
summand of N. Since every submodule of N is a direct summand of N, N is
semi-simple by Theorem 3.4.9.
Now we will show that M/N is also semi-simple. Since M is semi-
simple and N is a submodule of M, N is a direct summand of M, i.e.
M = N⊕M*. Since N∩M* = {0},

M/N = (N⊕M*)/N ≅ M*/(N∩M*) = M*/{0} ≅ M*.

Being a submodule of the semi-simple module M, M* is semi-simple, and
hence M/N is semi-simple. It proves the result.

3.5 FREE MODULES


3.5.1 Definition. Let M be an R module. A subset S of M is said to be linearly
dependent over R if and only if there exist distinct elements x1, x2, …, xn in S
and elements r1, r2, …,rn in R, not all zero such that r1x1+r2x2+…+rnxn=0.

3.5.2 Definition. If the elements x1, x2, …, xn of M are not linearly dependent over
R, then we say that x1, x2, …, xn are linearly independent over R. A subset S=
{x1, x2, …, xt}of M is called linearly independent over ring R if elements x1,
x2, …, xt are linearly independent over R.

3.5.3 Definition. Let M be an R-module. A subset S of M is called basis of M over


R if
(i) S is linearly independent over R,
(ii) <S> = M. i.e. S generates M over R.

3.5.4 Definition. An R-module M is said to be a free module if and only if it has a
basis over R.

Example (i) Every vector space V over a field F is a free F-module.
(ii) Every ring R with unity, regarded as an R-module, is a free R-module
(with basis {1}).
(iii) Zⁿ, the Z-module of n-tuples of integers, is a free Z-module.

Example of an R-module M which is not a free module. Show that Q (the
field of rational numbers) is not a free Z-module. (Here Z is the ring of
integers.)
Solution. Take two nonzero rational numbers p/q and r/s. Then there exist
two nonzero integers qr and −ps such that qr(p/q) + (−ps)(r/s) = 0, i.e. every
subset of Q having two elements is linearly dependent over Z. Hence every
superset of such a set, i.e. every subset of Q having at least two elements, is
linearly dependent over Z. Therefore a basis of Q over Z has at most one
element. We will show that a set containing a single element cannot be a basis
of Q over Z. Let p/q be the proposed basis element. Then by the definition of
basis, Q = {n(p/q) | n∈Z}. But p/(2q) belongs to Q and
p/(2q) = (1/2)(p/q) ≠ n(p/q) for any n∈Z. Hence Q ≠ {n(p/q) | n∈Z}. In
other words Q has no basis over Z. Hence Q is not a free module over Z.

3.5.5 Theorem. Prove that every free R-module M with basis {x1, x2, …, xt} is
isomorphic to R(t). (Here R(t) is the R-module of t-tuples over R).
Proof. Since {x1, x2, …, xt} is the basis of M over R, therefore, M={r1x1+ r2x2
+… + rtxt | r1, r2 ,…, rt∈R}. As R(t)={(r1, r2, …, rt)| r1, r2 ,…, rt∈R }. Define a
mapping f : M→R(t) by setting f(r1x1+ r2x2 +… + rtxt)=(r1, r2, …, rt). We will
show that f is an isomorphism.
Let x and y ∈M, then x= r1x1+ r2x2 +… + rtxt and y= s1x1+ s2x2 +… +
stxt where for each i, si and ri ∈R. Then
f(x+y)= f((r1+s1) x1+ (r2 + s2 )x2 +… + (rt + st )xt )
=((r1+s1), (r2+s2 ), … , (rt+st )) = (r1 , r2, … , rt ) + (s1, s2 , … , st )
= f(x)+f(y)
and f(rx) = f(r(r1x1 + r2x2 + … + rtxt)) = f(rr1x1 + rr2x2 + … + rrtxt) =
(rr1, rr2, …, rrt) = r(r1, r2, …, rt) = rf(x). Therefore f is an R-homomorphism.
The mapping f is onto also, as for (r1, r2, …, rt)∈R(t) there exists x = r1x1 +
r2x2 + … + rtxt∈M such that f(x) = (r1, r2, …, rt). Further, f(x) = f(y) ⇒
(r1, r2, …, rt) = (s1, s2, …, st) ⇒ ri = si for each i. Hence x = y, i.e. the
mapping f is one-one also, and hence f is an isomorphism from M to R(t).
3.6 NOETHERIAN AND ARTINIAN MODULES
3.6.1 Definition. Let M be a left R-module and {Mi}i≥1 be a family of submodules
of M . The family {Mi}i≥1 is called ascending chain if M1⊆ M2⊆… ⊆Mn⊆…
Similarly if M1⊇ M2⊇… ⊇Mn⊇…, then family {Mi}i≥1 is called descending
chain.

3.6.2 Definition. An R-module M is called Noetherian if for every ascending chain
of submodules of M there exists an integer k such that Mk = Mk+t for all
t ≥ 0. In other words Mk = Mk+1 = Mk+2 = … . Equivalently, an R-module M
is called Noetherian if every ascending chain of its submodules becomes
stationary, i.e. terminates after a finite number of terms.
If the left R-module M is Noetherian, then M is called left Noetherian
and if right R-module M is Noetherian, then M is called right Noetherian.

Example. Show that Z as a Z-module is Noetherian.

Solution. Z is a principal ideal ring, and every ideal of the ring Z is a
submodule of the Z-module Z (and vice versa). Thus every submodule of Z is
of the form <n>, n∈Z. Further, <n> ⊆ <m> iff m|n. As the number of
divisors of n is finite, the number of distinct members in any ascending chain
of submodules beginning at <n> is finite. Hence Z is a Noetherian Z-module.
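For instance (a concrete chain of ours): starting at <12>, an ascending chain can only pass through the submodules <m> with m | 12; e.g. <12> ⊆ <6> ⊆ <3> ⊆ <3> ⊆ … becomes stationary at <3>.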

3.6.3 Theorem. Prove that for a left R-module M the following conditions are
equivalent:
(i) M is Noetherian (ii) every nonempty family of submodules of M has a
maximal element (iii) every submodule of M is finitely generated.
Proof. (i)⇒(ii). Let ƒ be a nonempty family of submodules of M. If ƒ had no
maximal element, then for M1∈ƒ there exists M2∈ƒ such that M1 ⊊ M2;
again, there exists M3∈ƒ such that M1 ⊊ M2 ⊊ M3. Continuing in this way we
get a non-terminating ascending chain M1 ⊊ M2 ⊊ M3 ⊊ … of submodules of
M, a contradiction to the fact that M is Noetherian. Hence ƒ always has a
maximal element.
(ii)⇒(iii). Consider a submodule N of M. Let ƒ be the family of all
finitely generated submodules of N; ƒ is nonempty, since {0}∈ƒ. By (ii), ƒ
has a maximal element, say Mk = <x1, x2, …, xk>. Clearly Mk is finitely
generated, so in order to show that N is finitely generated it is sufficient to
show that Mk = N. Trivially Mk ⊆ N. If Mk ≠ N, choose x∈N with x∉Mk;
then <x1, …, xk, x> is a finitely generated submodule of N containing Mk
properly, contradicting the maximality of Mk. Hence N = Mk. It proves (iii).
(iii)⇒(i). Let M1 ⊆ M2 ⊆ M3 ⊆ … be an ascending chain of
submodules of M. Consider N = ∪_{i≥1} Mi. Then N is a submodule of M. By
(iii), N is finitely generated, say N = <x1, x2, …, xk>. Let Mt be a
submodule in the ascending chain M1 ⊆ M2 ⊆ M3 ⊆ … such that each xi is
contained in Mt (each xi lies in some member of the chain, and the chain is
ascending). Then N ⊆ Mt, and for all r ≥ t, N ⊆ Mt ⊆ Mr ⊆ N, so Mr = N.
Hence Mt = Mt+1 = Mt+2 = …, and M is Noetherian. It proves (i).

3.6.4 Definition. Let M be a left R-module. M is called finitely co-generated if
every nonempty family ζ = {Mλ}_{λ∈Λ} of submodules of M with
∩_{λ∈Λ} Mλ = {0} has a finite subfamily with intersection {0}.

3.6.5 Definition. A left R-module M is called a left Artinian module if every
descending chain M1 ⊇ M2 ⊇ M3 ⊇ … of submodules of M becomes
stationary after a finite number of steps, i.e. there exists k such that
Mk = Mk+t for all t ≥ 0.

3.6.6 Theorem. Prove that for a left R-module M the following conditions are
equivalent:
(i) M is Artinian (ii) every nonempty family of submodules of M has a
minimal element (iii) every quotient module of M is finitely co-generated.
Proof. (i)⇒(ii). Let ƒ be a nonempty family of submodules of M. If ƒ had no
minimal element, then for M1∈ƒ there exists M2∈ƒ such that M1 ⊋ M2, and
then M3∈ƒ such that M1 ⊋ M2 ⊋ M3. Continuing in this way we get a
non-terminating descending chain M1 ⊋ M2 ⊋ M3 ⊋ … of submodules of M,
a contradiction to the fact that M is Artinian. Hence ƒ always has a minimal
element.
(ii)⇒(iii). For a submodule N, consider the quotient module M/N. Let
{Mλ/N}_{λ∈Λ} be a family of submodules of M/N such that
∩_{λ∈Λ} (Mλ/N) = {N} (the zero of M/N). Since
∩_{λ∈Λ} (Mλ/N) = (∩_{λ∈Λ} Mλ)/N, we get ∩_{λ∈Λ} Mλ = N. Let
ζ = {Mλ}_{λ∈Λ} and, for every finite subset Λ* ⊆ Λ, let
f = {A = ∩_{λ∈Λ*} Mλ}. As each Mλ∈f, ζ ⊆ f, so f ≠ φ. By the given
condition, f has a minimal element, say A = Mλ1 ∩ Mλ2 ∩ … ∩ Mλn. Let
λ∈Λ. Then A∩Mλ∈f and A∩Mλ ⊆ A. By the minimality of A in f,
A∩Mλ = A for all λ∈Λ. But then A ⊆ ∩_{λ∈Λ} Mλ = N. Since N is
contained in each Mλ, N ⊆ Mλ1 ∩ Mλ2 ∩ … ∩ Mλn = A. Hence
N = A = ∩_{i=1}^n Mλi. Now

∩_{i=1}^n (Mλi/N) = (∩_{i=1}^n Mλi)/N = N/N = {N}.

Hence there exists a finite subfamily {Mλi/N}_{1≤i≤n} of the family
{Mλ/N}_{λ∈Λ} with intersection {N}. It shows that every quotient module is
finitely co-generated. It proves (iii).
(iii)⇒(i). Let M1 ⊇ M2 ⊇ … ⊇ Mn ⊇ Mn+1 ⊇ … be a descending
chain of submodules of M. Let N = ∩_{i≥1} Mi. Then N is a submodule of M.
Consider the family {Mi/N}_{i≥1} of submodules of M/N. Since
∩_{i≥1} (Mi/N) = (∩_{i≥1} Mi)/N = N/N = {N} and M/N is finitely
co-generated, there exists a finite subfamily {Mλi/N}_{1≤i≤n} of the family
{Mi/N}_{i≥1} such that ∩_{i=1}^n (Mλi/N) = {N}. Let k = max{λ1, λ2, …, λn}.
Then, since the chain is descending,

{N} = ∩_{i=1}^n (Mλi/N) = Mk/N,

so Mk = N. Now N = ∩_{i≥1} Mi ⊆ Mk+i ⊆ Mk = N for all i ≥ 0, hence
Mk+i = Mk for all i ≥ 0. Hence M is Artinian.


3.6.7 Theorem. Let M be a Noetherian left R-module. Show that every submodule
and every factor module of M is also Noetherian.
Proof. Let N be a submodule of M. Every ascending chain of submodules of
N is also an ascending chain of submodules of M and hence becomes
stationary, since M is Noetherian. Therefore N is Noetherian.
Consider a factor module M/N. Let A/N be a submodule of M/N,
where A is a submodule of M containing N. Since M is Noetherian, A is
finitely generated. Suppose A is generated by x1, x2, …, xn. Take an arbitrary
element x + N of A/N. Then x∈A, so x = r1x1 + r2x2 + … + rnxn, ri∈R. But
then x + N = (r1x1 + r2x2 + … + rnxn) + N = r1(x1+N) + r2(x2+N) + … +
rn(xn+N), i.e. x + N is a linear combination of (x1+N), (x2+N), …, (xn+N)
over R. Equivalently, we have shown that A/N is finitely generated. Hence
every submodule of M/N is finitely generated and, by Theorem 3.6.3, M/N is
Noetherian. It proves the result.

3.6.8 Theorem. Let M be a left R-module. If N is a submodule of M such that N
and M/N are both Noetherian, then M is also Noetherian.
Proof. Let A be a submodule of M. In order to show M is Noetherian we will
show that A is finitely generated. Since A+N is a submodule of M containing
N, (A+N)/N is a submodule of M/N. Being a submodule of a Noetherian
module, (A+N)/N is finitely generated. As (A+N)/N ≅ A/(A∩N), the module
A/(A∩N) is also finitely generated. Let A/(A∩N) = <y1 + (A∩N),
y2 + (A∩N), …, yk + (A∩N)>. Further, A∩N is a submodule of the
Noetherian module N, therefore it is also finitely generated. Let
A∩N = <x1, x2, …, xt>. Let x∈A. Then x + (A∩N)∈A/(A∩N). Hence
x + (A∩N) = r1(y1 + (A∩N)) + r2(y2 + (A∩N)) + … + rk(yk + (A∩N)),
ri∈R. Then x + (A∩N) = (r1y1 + r2y2 + … + rkyk) + (A∩N), i.e.
x − (r1y1 + r2y2 + … + rkyk)∈(A∩N). Since A∩N = <x1, x2, …, xt>,
x − (r1y1 + r2y2 + … + rkyk) = s1x1 + s2x2 + … + stxt, si∈R. Equivalently,
x = (r1y1 + r2y2 + … + rkyk) + s1x1 + s2x2 + … + stxt.
Now we have shown that every element of A is a linear combination of the
elements of the set {y1, y2, …, yk, x1, x2, …, xt}, i.e. A is finitely generated.
It proves the result.

3.6.9 Theorem. Let M be a left R-module and N a submodule of M. Then M is
Artinian iff both N and M/N are Artinian.
Proof. Suppose that M is Artinian. We will show that every submodule and
quotient module of M is Artinian.
Let N be a submodule of M. Consider a descending chain N1 ⊇ N2
⊇ … ⊇ Nk ⊇ Nk+1 ⊇ … of submodules of N. It is also a descending chain of
submodules of M. Since M is Artinian, there exists a positive integer k such
that Nk = Nk+i for all i ≥ 0. Hence N is Artinian.
Let M/N be a factor module of M. Consider a descending chain
M1/N ⊇ M2/N ⊇ … ⊇ Mk/N ⊇ Mk+1/N ⊇ …, where the Mi are submodules
of M containing N with Mi ⊆ Mi−1. Thus we have a descending chain
M1 ⊇ M2 ⊇ … ⊇ Mk ⊇ Mk+1 ⊇ … of submodules of M. Since M is
Artinian, there exists a positive integer k such that Mk = Mk+i for all i ≥ 0.
But then Mk/N = Mk+i/N for all i ≥ 0. Hence M/N is Artinian.
Conversely, suppose that both N and M/N are Artinian. We will show
that M is Artinian. Let N1 ⊇ N2 ⊇ … ⊇ Nk ⊇ Nk+1 ⊇ … be a descending
chain of submodules of M. Since Ni + N is a submodule of M containing N,
for each i, (Ni + N)/N is a submodule of M/N such that
(Ni + N)/N ⊇ (Ni+1 + N)/N. Consider the descending chain
(N1 + N)/N ⊇ (N2 + N)/N ⊇ … ⊇ (Nk + N)/N ⊇ (Nk+1 + N)/N ⊇ … of
submodules of M/N. As M/N is Artinian, there exists a positive integer k1
such that (Nk1 + N)/N = (Nk1+i + N)/N for all i ≥ 0. But then
Nk1 + N = Nk1+i + N for all i ≥ 0.
Since Ni∩N is a submodule of the Artinian module N and
Ni∩N ⊇ Ni+1∩N for all i, for the descending chain
N1∩N ⊇ N2∩N ⊇ … ⊇ Nk∩N ⊇ … of submodules of N there exists a
positive integer k2 such that Nk2∩N = Nk2+i∩N for all i ≥ 0. Let
k = max{k1, k2}. Then Nk + N = Nk+i + N and Nk∩N = Nk+i∩N for all
i ≥ 0. Now we will show that if Nk + N = Nk+i + N and Nk∩N = Nk+i∩N,
then Nk = Nk+i for all i ≥ 0. Let x∈Nk; then x∈Nk + N = Nk+i + N. Thus
x = y + z where y∈Nk+i and z∈N. Equivalently, x − y = z∈N. Since
y∈Nk+i ⊆ Nk, y∈Nk also, and then x − y = z also belongs to Nk. Hence
z∈Nk∩N = Nk+i∩N, and hence z = x − y∈Nk+i. Now x − y∈Nk+i and
y∈Nk+i imply that x∈Nk+i. In other words we have shown that Nk ⊆ Nk+i.
But then Nk = Nk+i for all i ≥ 0. It proves the result.

3.6.10 Theorem. Prove that an R-homomorphic image of a Noetherian (Artinian) left
R-module is again Noetherian (Artinian).
Proof. The homomorphic image of a Noetherian (Artinian) module M under a
homomorphism f from M to an R-module N is f(M). Being a factor module of
M, M/Ker f is Noetherian (Artinian). As f(M) ≅ M/Ker f, f(M) is also
Noetherian (Artinian).

3.7 NOETHERIAN AND ARTINIAN RINGS


3.7.1 Definition. A ring R is said to satisfy the ascending (descending) chain
condition, denoted acc (dcc), for ideals if and only if, given any sequence of
ideals I1, I2, I3, … of R with I1 ⊆ I2 ⊆ … ⊆ In ⊆ …
(respectively I1 ⊇ I2 ⊇ … ⊇ In ⊇ …), there exists a positive integer n such
that In = Im for all m ≥ n.
Similarly, a ring R is said to satisfy the ascending (descending) chain
condition for left (right) ideals if and only if, given any sequence of left
(right) ideals I1, I2, I3, … of R with I1 ⊆ I2 ⊆ … ⊆ In ⊆ …
(respectively I1 ⊇ I2 ⊇ … ⊇ In ⊇ …), there exists a positive integer n such
that In = Im for all m ≥ n.

3.7.2 Definition. A ring R is said to be a Noetherian (Artinian) ring if and only if it
satisfies the ascending (descending) chain condition for ideals of R. Similarly,
for a non-commutative ring, R is said to be a left-Noetherian (right-
Noetherian) ring if and only if it satisfies the ascending chain condition for
left ideals (right ideals) of R.

3.7.3 Definition. A ring R is said to satisfy the maximum condition if every
nonempty set of ideals of R, partially ordered by inclusion, has a maximal
element.

3.7.4 Theorem. Let R be a ring. Then the following conditions are equivalent:
(i) R is Noetherian (ii) the maximal condition (for ideals) holds in R (iii) every
ideal of R is finitely generated.
Proof. (i)⇒(ii). Let f be a nonempty collection of ideals of R and I1∈f. If I1 is
not a maximal element in f, then there exists I2∈f such that I1 ⊊ I2. Again, if
I2 is not maximal, then there exists I3∈f such that I1 ⊊ I2 ⊊ I3. If f had no
maximal element, then continuing in this way we would get a non-terminating
ascending chain of ideals of R, a contradiction to (i), that R is Noetherian.
Hence f has a maximal element.
(ii)⇒(iii). Let I be an ideal of R and f = {A | A is a finitely generated
ideal of R with A ⊆ I}. As {0} ⊆ I is a finitely generated ideal of R, {0}∈f.
By (ii), f has a maximal element, say M. We will show that M = I. Suppose
M ≠ I; then there exists an element a∈I such that a∉M. Since M is finitely
generated, M = <a1, a2, …, ak>. But then M* = <a1, a2, …, ak, a> is also a
finitely generated ideal contained in I and containing M properly. By
definition M*∈f, a contradiction to the maximality of M in f. Hence M = I,
and I is finitely generated. It proves (iii).
(iii)⇒(i). Let I1 ⊆ I2 ⊆ I3 ⊆ … ⊆ In ⊆ … be an ascending chain of
ideals of R. Then ∪_{i≥1} Ii is an ideal of R. By (iii) it is finitely generated;
let ∪_{i≥1} Ii = <a1, a2, …, ak>. Now each ai belongs to some Iλi of the given
chain. Let n = max{λ1, λ2, …, λk}. Then each ai∈In. Consequently, for
m ≥ n, ∪_{i≥1} Ii = <a1, a2, …, ak> ⊆ In ⊆ Im ⊆ ∪_{i≥1} Ii. Hence In = Im for
m ≥ n, i.e. the given chain of ideals becomes stationary at some point and R is
Noetherian.

3.8 KEY WORDS


Modules, simple modules, semi-simple modules, Noetherian, Artinian.

3.9 SUMMARY
In this chapter, we study modules, simple modules (i.e. modules having no
proper submodules), semi-simple modules, free modules, and Noetherian and
Artinian rings and modules.

3.10 SELF ASSESMENT QUESTIONS


(1) Let R be a Noetherian ring. Show that the ring of square matrices over R
is also Noetherian.
(2) Show that if Ri, i = 1, 2, 3, …, is an infinite family of nonzero rings and R
is the direct sum of the members of this family, then R cannot be Noetherian.
(3) Let M be a completely reducible module and let K be a nonzero
submodule of M. Show that K is completely reducible. Also show that K is a
direct summand of M.

3.11 SUGGESTED READINGS


(1) Modern Algebra; SURJEET SINGH and QAZI ZAMEERUDDIN, Vikas
Publications.
(2) Basic Abstract Algebra; P.B. BHATTARAYA, S.K.JAIN, S.R.
NAGPAUL, Cambridge University Press, Second Edition.
MAL-521: M. Sc. Mathematics (Advance Abstract Algebra)
Lesson No. 4 Written by Dr. Pankaj Kumar
Lesson: Modules II Vetted by Dr. Nawneet Hooda
STRUCTURE
4.0 OBJECTIVE
4.1 INTRODUCTION
4.2 MORE RESULTS ON NOETHERIAN AND ARTINIAN MODULES
AND RINGS
4.3 RESULTS ON HomR(M, M) AND WEDDERBURN-ARTIN THEOREM
4.4 UNIFORM MODULES, PRIMARY MODULES AND NOETHER-
LASKER THEOREM
4.5 SMITH NORMAL FORM
4.6 FINITELY GENERATED ABELIAN GROUPS
4.7 KEY WORDS
4.8 SUMMARY
4.9 SELF ASSESMENT QUESTIONS
4.10 SUGGESTED READINGS

4.0 OBJECTIVE
Objective of this chapter is to study some more properties of modules.

4.1 INTRODUCTION
In the last chapter, we studied modules and some results on modules and
rings. In Section 4.2, we study more results on Noetherian and Artinian
modules and rings. In the next section, the Wedderburn-Artin theorem is
studied. Uniform modules, primary modules, the Noether-Lasker theorem and
the Smith normal form are studied in the following two sections. The last
section deals with finitely generated abelian groups.

4.2 MORE RESULTS ON NOETHERIAN AND ARTINIAN MODULES


AND RINGS
4.2.1 Theorem. Every principal ideal domain is Noetherian.
Proof. Let D be a principal ideal domain and I1 ⊆ I2 ⊆ I3 ⊆ … ⊆ In ⊆ … an
ascending chain of ideals of D. Let I = ∪_{i≥1} Ii. Then I is an ideal of D. Since
D is a principal ideal domain, there exists b∈D such that I = <b>. Since b∈I,
b∈In for some n. Consequently, for m ≥ n, I ⊆ In ⊆ Im ⊆ I. Hence In = Im
for m ≥ n, i.e. the given chain of ideals becomes stationary at some point and
D is Noetherian.
(2) (Z, +, ·) is a Noetherian ring.
(3) Every field is a Noetherian ring.
(4) Every finite ring is a Noetherian ring.

4.2.2 Theorem (Hilbert basis theorem). If R is a Noetherian ring with identity, then
R[x] is also a Noetherian ring.
Proof. Let I be an arbitrary ideal of R[x]. To prove the theorem, it is sufficient
to show that I is finitely generated. For each integer t ≥ 0, define
It = {r∈R : r is the leading coefficient of some polynomial
a0 + a1x + … + rx^t of degree t in I} ∪ {0}.
Then It is an ideal of R such that It ⊆ It+1 for all t (if r is the leading
coefficient of f∈I of degree t, it is also the leading coefficient of xf∈I of
degree t+1). So I0 ⊆ I1 ⊆ I2 ⊆ … is an ascending chain of ideals of R. But R
is Noetherian, therefore there exists an integer n such that It = In for all t ≥ n.
Also each ideal Ii of R is finitely generated. Suppose that
Ii = <ai1, ai2, …, aimi> for i = 0, 1, 2, …, n, where aij is the leading
coefficient of a polynomial fij∈I of degree i. We will show that the
m0 + m1 + … + mn polynomials f01, f02, …, f0m0, f11, f12, …, f1m1, …, fn1,
fn2, …, fnmn generate I. Let J = <f01, f02, …, f0m0, f11, f12, …, f1m1, …, fn1,
fn2, …, fnmn>. Trivially J ⊆ I. Let f(≠0)∈I be of degree t (say):
f = b0 + b1x + … + bt−1x^(t−1) + bx^t. We now apply induction on t. For
t = 0, f = b0∈I0 ⊆ J. Further suppose that every polynomial of I of degree
less than t belongs to J. Consider the following cases.
Case 1. t > n. As t > n, the leading coefficient b (of f) lies in It = In
(because It = In for all t ≥ n). But then b = r1an1 + r2an2 + … + rmn anmn,
ri∈R. Now g = f − (r1fn1 + r2fn2 + … + rmn fnmn)x^(t−n) ∈ I has degree less
than t (because the coefficient of x^t in g is
b − (r1an1 + r2an2 + … + rmn anmn) = 0); therefore, by induction, g∈J and
hence f∈J.
Case 2. t ≤ n. As b∈It, b = s1at1 + s2at2 + … + smt atmt, si∈R. Then
h = f − (s1ft1 + s2ft2 + … + smt ftmt) ∈ I has degree less than t. Now by the
induction hypothesis, h∈J ⇒ f∈J. Consequently, in either case I ⊆ J and
hence I = J. Thus I is finitely generated and hence R[x] is Noetherian. It
proves the theorem.
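For example (our own remarks): since Z is Noetherian, Z[x] is Noetherian; and applying the theorem repeatedly, since R[x1, …, xn] = (R[x1, …, xn−1])[xn], the polynomial rings Z[x1, …, xn] and F[x1, …, xn] (F a field) are Noetherian for every n.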

4.2.3 Definition. A ring R is said to be an Artinian ring iff it satisfies the descending
chain condition for ideals of R.

4.2.4 Definition. A ring R is said to satisfy the minimum condition (for ideals) iff
every non empty set of ideals of R, partially ordered by inclusion, has a
minimal element.

4.2.5 Theorem. Let R be a ring. Then R is Artinian iff R satisfies the minimum
condition (for ideals).
Proof. Let R be Artinian and F a nonempty set of ideals of R. If I1∈F is not a
minimal element in F, then we can find another ideal I2 in F such that
I1 ⊋ I2. If F had no minimal element, then repeating this process we would
get a non-terminating descending chain of ideals of R, contradicting the fact
that R is Artinian. Hence F has a minimal element.
Conversely, suppose that R satisfies the minimum condition. Let
I1 ⊇ I2 ⊇ I3 ⊇ … be a descending chain of ideals of R. Consider
F = {It : t = 1, 2, 3, …}. As I1∈F, F is nonempty. Then by hypothesis, F has
a minimal element In for some positive integer n. For m ≥ n, Im ⊆ In and
Im∈F, so by the minimality of In, Im = In. Hence Im = In for all m ≥ n, i.e.
R is Artinian.

4.2.6 Theorem. Prove that a homomorphic image of a Noetherian (Artinian) ring is
also Noetherian (Artinian).
Proof. Let S be the homomorphic image of a Noetherian ring R under the
onto ring homomorphism f: R → S. Consider an ascending chain of ideals
of S:
J1 ⊆ J2 ⊆ … ⊆ … (1)
Suppose Ir = f⁻¹(Jr) for r = 1, 2, 3, …. Then
I1 ⊆ I2 ⊆ … ⊆ … (2)
The relation shown in (2) is an ascending chain of ideals of R. Since R is
Noetherian, there exists a positive integer n such that Im = In for all m ≥ n.
As f is onto, Jr = f(Ir), and this shows that Jm = Jn for all m ≥ n. But then S
is Noetherian and the result follows.

4.2.7 Corollary. If I is an ideal of a Noetherian (Artinian) ring R, then the factor
ring R/I is also Noetherian (Artinian).
Proof. Since R/I is a homomorphic image of R, by Theorem 4.2.6, R/I is
Noetherian (Artinian).
I

4.2.8 Theorem. Let I be an ideal of a ring R. If I and R/I are both Noetherian rings,
then R is also Noetherian.
Proof. Let I1 ⊆ I2 ⊆ … ⊆ … be an ascending chain of ideals of R. Let
f: R → R/I be the natural homomorphism. Then f(I1) ⊆ f(I2) ⊆ … is an
ascending chain of ideals in R/I. Since R/I is Noetherian, there exists a
positive integer n such that f(In) = f(In+i) for all i ≥ 0. Also
(I1∩I) ⊆ (I2∩I) ⊆ … ⊆ … is an ascending chain of ideals of I. As I is
Noetherian, there exists a positive integer m such that (Im∩I) = (Im+i∩I) for
all i ≥ 0. Let r = max{m, n}. Then f(Ir) = f(Ir+i) and (Ir∩I) = (Ir+i∩I) for all
i ≥ 0. Let a∈Ir+i; then there exists x∈Ir such that f(a) = f(x), i.e. a+I = x+I.
Then a−x∈I and also a−x∈Ir+i (as Ir ⊆ Ir+i). This shows that
a−x∈(Ir+i∩I) = (Ir∩I). Hence a−x∈Ir ⇒ a∈Ir, i.e. Ir+i ⊆ Ir. But then
Ir+i = Ir for all i ≥ 0. Now we have shown that every ascending chain of ideals
of R terminates after a finite number of steps. It shows that R is Noetherian.
4.2.9 Definition. An Artinian domain R is an integral domain which is also an
Artinian ring.

4.2.10 Theorem. Any left Artinian domain with unity is a division ring.

Proof. Let a be a nonzero element of R. Consider the descending chain of left
ideals of R: <a> ⊇ <a²> ⊇ <a³> ⊇ … Since R is an Artinian ring, there exists
n such that <aⁿ> = <a^(n+i)> for all i ≥ 0. Now <aⁿ> = <a^(n+1)> ⇒
aⁿ = ra^(n+1) = (ra)aⁿ for some r∈R ⇒ (1 − ra)aⁿ = 0. Since R is a domain
and aⁿ ≠ 0, ra = 1, i.e. every nonzero element of R has a left inverse. Hence
R is a division ring.
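By contrast (our own remark), Z shows that Noetherian does not imply Artinian: Z is a domain with unity but not a division ring, and indeed the descending chain <2> ⊋ <4> ⊋ <8> ⊋ … of ideals of Z never terminates, so Z is not Artinian, consistent with Theorem 4.2.10.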

4.2.11 Theorem. Let M be a finitely generated free module over a commutative ring
R. Then every basis of M is finite.
Proof. Let {ei}_{i∈Λ} be a basis and {x1, x2, …, xn} a generating set of M.
Then each xj can be written as xj = Σi βij ei, where all except a finite number
of the βij are zero. Thus the set E of all the ei that occur (with nonzero
coefficient) in the expressions of x1, …, xn is finite, and E generates M. If e
were a basis element not in E, then e would be a linear combination of the
elements of E, contradicting the linear independence of the basis. Hence the
basis {ei}_{i∈Λ} = E is finite.

4.2.12 Theorem. Let M be a finitely generated free module over a commutative ring
R. Then all bases of M have the same number of elements.
Proof. Let M have two bases X and Y containing m and n elements
respectively. Then M ≅ Rᵐ and M ≅ Rⁿ, so Rᵐ ≅ Rⁿ. Now we will show
that m = n. Suppose m < n; let f be an isomorphism from Rᵐ to Rⁿ and
g = f⁻¹. Let {x1, x2, …, xm} and {y1, y2, …, yn} be bases of Rᵐ and Rⁿ
respectively. Define
f(xi) = a1i y1 + a2i y2 + … + ani yn and g(yj) = b1j x1 + b2j x2 + … + bmj xm.
Let A = (aji) and B = (bkj) be the resulting n×m and m×n matrices over R.
Then

gf(xi) = g(Σ_{j=1}^n aji yj) = Σ_{j=1}^n aji g(yj) = Σ_{k=1}^m (Σ_{j=1}^n bkj aji) xk, 1 ≤ i ≤ m.

Since gf = I, xi = Σ_{k=1}^m (Σ_{j=1}^n bkj aji) xk. As the xk are linearly
independent, Σ_{j=1}^n bkj aji = δki. Thus BA = Im, and similarly AB = In.
Let A* = [A 0] and B* = [B; 0] be the n×n matrices obtained by padding A
with n−m zero columns and B with n−m zero rows. Then A*B* = AB = In
and

B*A* = ⎡Im 0⎤
       ⎣0  0⎦

so det(A*B*) = 1 and det(B*A*) = 0. Since A* and B* are square matrices
over the commutative ring R, det(A*B*) = det(A*)det(B*) = det(B*A*),
which yields a contradiction. Hence m ≥ n. By symmetry n ≥ m, i.e. m = n.

4.3 RESULTS ON HomR(M, M) AND WEDDERBURN-ARTIN THEOREM


4.3.1 Theorem. Let M = ⊕Σ_{i=1}^k Mi be a direct sum of R-modules Mi. Then

HomR(M, M) ≅ ⎡HomR(M1, M1) HomR(M2, M1) … HomR(Mk, M1)⎤
             ⎢HomR(M1, M2) HomR(M2, M2) … HomR(Mk, M2)⎥
             ⎢     …              …       …        …      ⎥
             ⎣HomR(M1, Mk) HomR(M2, Mk) … HomR(Mk, Mk)⎦

as rings. (Here the right-hand side is the ring T of k×k matrices f = (fij)
under the usual matrix addition and multiplication, where fij is an element of
HomR(Mj, Mi).)
Proof. We know that HomR(M, M), the set of all homomorphisms from M to
M, becomes a ring under the operations (f + g)(x) = f(x) + g(x) and
(fg)(x) = f(g(x)) for f, g∈HomR(M, M) and x∈M. Further, let λj: Mj → M
and πi: M → Mi be the two mappings defined by
λj(xj) = (0, …, xj, …, 0) and πi(x1, …, xi, …, xk) = xi (these are called the
inclusion and projection mappings). Both are homomorphisms. Clearly
πiφλj: Mj → Mi is a homomorphism, therefore πiφλj∈HomR(Mj, Mi). Define
a mapping σ: HomR(M, M) → T by σ(φ) = (πiφλj), φ∈HomR(M, M), where
(πiφλj) is the k×k matrix whose (i, j)th entry is πiφλj. We will show that σ is
an isomorphism. Let φ1, φ2∈HomR(M, M). Then

σ(φ1 + φ2) = (πi(φ1 + φ2)λj) = (πiφ1λj + πiφ2λj) = (πiφ1λj) + (πiφ2λj)
= σ(φ1) + σ(φ2),

and the (i, j)th entry of σ(φ1)σ(φ2) is

Σ_{l=1}^k πiφ1λl πlφ2λj = πiφ1(λ1π1 + λ2π2 + … + λkπk)φ2λj.

Since for x = (x1, …, xi, …, xk)∈M, λiπi(x) = λi(xi) = (0, …, xi, …, 0), we
get (λ1π1 + λ2π2 + … + λkπk)(x) = (x1, 0, …, 0) + (0, x2, …, 0) + … +
(0, …, xk) = (x1, x2, …, xk) = x. Hence λ1π1 + λ2π2 + … + λkπk = I on M.
Thus the (i, j)th entry of σ(φ1)σ(φ2) is πiφ1φ2λj, i.e.
σ(φ1)σ(φ2) = σ(φ1φ2). Hence σ is a homomorphism.
Now we show that σ is one-one. Let σ(φ) = (πiφλj) = 0. Then
πiφλj = 0 for all i, j; 1 ≤ i, j ≤ k. Since Σ_{i=1}^k λiπi = I, we get
φλj = (Σ_{i=1}^k λiπi)φλj = Σ_{i=1}^k λi(πiφλj) = 0 for each j, and then
φ = φ(Σ_{j=1}^k λjπj) = Σ_{j=1}^k (φλj)πj = 0. Therefore the mapping is
one-one. Let f = (fij)∈T, where fij: Mj → Mi is an R-homomorphism. Set
ψ = Σ_{i,j} λi fij πj. Since each λi fij πj is a homomorphism from M to M,
ψ∈HomR(M, M). As πpλq = δpq (the identity on Mp if p = q, and zero
otherwise), the (s, t)th entry of σ(ψ) is πs(Σ_{i,j} λi fij πj)λt = fst. Hence
σ(ψ) = (fij) = f, i.e. the mapping is onto also. Thus σ is an
isomorphism. It proves the result.
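For example (our own illustration): take M1 = M2 = … = Mk = R, where R is a commutative ring with unity, so that M = R^k. Since HomR(R, R) ≅ R (every homomorphism of R into itself is multiplication by a fixed element), Theorem 4.3.1 gives HomR(R^k, R^k) ≅ Mk(R), the ring of k×k matrices over R.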

4.3.2 Definition. Nil ideal. A left ideal A of R is called a nil ideal if each element
of it is nilpotent.
Example. Every nilpotent ideal is a nil ideal.

4.3.3 Theorem. If J is a nil left ideal in an Artinian ring R, then J is nilpotent.

Proof. Suppose, on the contrary, that Jᵏ ≠ (0) for every positive integer k.
Consider the family {J, J², …}. Because R is an Artinian ring, this family has
a minimal element, say B = Jᵐ. Then B² = J²ᵐ ⊆ Jᵐ = B, and by the
minimality of B, B² = B. Now consider another family f = {A | A is a left
ideal contained in B with BA ≠ (0)}. As BB = B ≠ (0), f is nonempty. Since it
is a family of left ideals of an Artinian ring R, it has a minimal element; let A
be that minimal element in f. Then BA ≠ (0), i.e. there exists a in A such that
Ba ≠ (0). Because A is a left ideal, Ba ⊆ A, and B(Ba) = B²a = Ba ≠ (0).
Hence Ba∈f. Now the minimality of A implies that Ba = A. Thus ba = a for
some b∈B, and then bⁱa = a for all i ≥ 1. Since b∈B ⊆ J is a nilpotent
element, bⁱ = 0 for some i, so a = 0, a contradiction. Hence Jᵏ = (0) for some
positive integer k.

Theorem. Let R be a Noetherian ring. Then the sum of the nilpotent ideals in
R is a nilpotent ideal.
Proof. Let B = Σ_{i∈Λ} Ai be the sum of the nilpotent ideals in R. Since R is
Noetherian, every ideal of R is finitely generated; hence B is also finitely
generated. Let B = <x1, x2, …, xt>. Then each xi lies in the sum of a finite
number of the Ai, say A1, A2, …, An; thus B = A1 + A2 + … + An. But we
know that a finite sum of nilpotent ideals is nilpotent. Hence B is nilpotent.

4.3.4 Lemma. Let A be a minimal left ideal in R. Then either A^2 = (0) or A = Re for some idempotent e.
Proof. Suppose that A^2 ≠ (0). Then there exists a ∈ A such that Aa ≠ (0). But Aa ⊆ A, and the minimality of A shows that Aa = A. From this it follows that there exists e in A such that ea = a. As a is non-zero, ea ≠ 0 and hence e ≠ 0. Let B = {c ∈ A | ca = 0}; then B is a left ideal contained in A. Since ea ≠ 0, e ∉ B, so B is a proper subideal of A. Again the minimality of A implies that B = (0). Since e^2a = e(ea) = ea, we get (e^2 − e)a = 0, so e^2 − e ∈ B = (0). Hence e^2 = e, i.e. e is an idempotent in R. As 0 ≠ e = e^2 = e·e ∈ Re, Re is a non-zero left ideal contained in A (since e ∈ A and A is a left ideal). But then Re = A. It proves the result.
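For a concrete instance of the second alternative in the lemma (an illustration, not from the original text): in R = M2(F), F a field, the set A of all matrices whose second column is zero is a minimal left ideal; here A^2 ≠ (0) and A = Re for the idempotent

\[ e = \begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix}. \]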

4.3.5 Theorem. (Wedderburn-Artin). Let R be a left (or right) Artinian ring with unity and no non-zero nilpotent ideals. Then R is isomorphic to a finite direct sum of matrix rings over division rings.
Proof. First we will show that each non-zero left ideal in R is of the form Re for some idempotent e. Let A be a non-zero left ideal in R. Since R is Artinian, A is also Artinian, and hence every family of left ideals contained in A has a minimal element; in particular A contains a minimal left ideal M, say. By Lemma 4.3.4, either M^2 = (0) or M = Re for some idempotent e of R. If M^2 = (0), then (MR)^2 = (MR)(MR) = M(RM)R ⊆ MMR = M^2R = (0), so MR is nilpotent, and by the given hypothesis MR = (0). But M = M·1 ⊆ MR, so M = (0), a contradiction. Hence M = Re. This yields that each non-zero left ideal contains
a non-zero idempotent. Let f = {R(1−e) ∩ A | e is a non-zero idempotent in A}. Then f is non-empty. Because R is Artinian, f has a minimal member, say R(1−e) ∩ A. We will show that R(1−e) ∩ A = (0). If R(1−e) ∩ A ≠ (0), then, being a non-zero left ideal, it contains a non-zero idempotent e1. Since e1 = r(1−e) for some r ∈ R, we get e1e = r(1−e)e = r(e − e^2) = 0. Take e* = e + e1 − ee1. Then
(e*)^2 = (e + e1 − ee1)(e + e1 − ee1) = ee + ee1 − eee1 + e1e + e1e1 − e1ee1 − ee1e − ee1e1 + ee1ee1 = e + ee1 − ee1 + 0 + e1 − 0 − 0 − ee1 + 0 = e + e1 − ee1 = e*,
i.e. e* is an idempotent; note that e* ∈ A and e*e = ee + e1e − ee1e = e. But e1e* = e1e + e1e1 − e1ee1 = e1 ≠ 0 implies that e1 ∉ R(1−e*) ∩ A (because if e1 ∈ R(1−e*) ∩ A, then e1 = r(1−e*) for some r ∈ R, and then e1e* = r(1−e*)e* = r(e* − e*e*) = 0). Moreover, for any r(1−e*) ∈ R(1−e*) we have r(1−e*)e = r(e − e*e) = r(e − e) = 0, so r(1−e*) = r(1−e*)(1−e) ∈ R(1−e). Hence R(1−e*) ∩ A is a proper subset of R(1−e) ∩ A, a contradiction to the minimality of R(1−e) ∩ A in f. Hence R(1−e) ∩ A = (0). Since for a ∈ A, a(1−e) ∈ R(1−e) ∩ A, we get a(1−e) = 0, i.e. a = ae. Then A ⊇ Re ⊇ Ae ⊇ A, whence A = Re.
For an idempotent e of R, Re ∩ R(1−e) = (0): if x ∈ Re ∩ R(1−e), then x = re and x = s(1−e) for some r, s ∈ R; but then x = xe = s(1−e)e = s(e − e^2) = 0. Hence Re ∩ R(1−e) = (0). Now let S be the sum of all minimal left ideals in R. Then S = Re for some idempotent e in R. If R(1−e) ≠ (0), then there exists a minimal left ideal A contained in R(1−e). But then A ⊆ Re ∩ R(1−e) = (0), a contradiction. Hence R(1−e) = (0), i.e. R = Re = S = ∑_{i∈Λ} Ai, where (Ai)_{i∈Λ} is the family of minimal left ideals in R.
But then there exists a subfamily (Ai)_{i∈Λ*} of the family (Ai)_{i∈Λ} such that R = ⊕∑_{i∈Λ*} Ai. Let 1 = e_{i1} + e_{i2} + … + e_{in}, e_{ij} ∈ A_{ij}. Then R = Re_{i1} ⊕ … ⊕ Re_{in} (because for r ∈ R, r = re_{i1} + re_{i2} + … + re_{in}, and each non-zero Re_{ij} equals the minimal left ideal A_{ij}). After reindexing if necessary, we may write R = Re1 ⊕ Re2 ⊕ … ⊕ Ren, a direct sum of minimal left ideals. In this family of minimal left ideals Re1, Re2, …, Ren, choose a largest subfamily consisting of minimal left ideals that are pairwise non-isomorphic as left R-modules. After renumbering if necessary, let this subfamily be Re1, Re2, …, Rek. Suppose the number of left ideals in the family (Rei), 1 ≤ i ≤ n, that are isomorphic to Rei is ni. Then
R = [Re1 ⊕ … (n1 summands)] ⊕ [Re2 ⊕ … (n2 summands)] ⊕ … ⊕ [Rek ⊕ … (nk summands)],
where each set of brackets
contains pairwise isomorphic minimal left ideals, and no minimal left ideal in any pair of brackets is isomorphic to a minimal left ideal in another pair. Now HomR(Rei, Rej) = (0) for i ≠ j, 1 ≤ i, j ≤ k, and HomR(Rei, Rei) = Di is a division ring (by Schur's lemma). Thus, by the theorem proved at the beginning of this section, we get HomR(R, R) ≅
the ring of block-diagonal matrices having an n1×n1 block with entries in D1, an n2×n2 block with entries in D2, …, an nk×nk block with entries in Dk, and zeros elsewhere; that is,
HomR(R, R) ≅ (D1)_{n1} ⊕ (D2)_{n2} ⊕ … ⊕ (Dk)_{nk},
where (Di)_{ni} denotes the ring of ni×ni matrices over Di. But HomR(R, R) ≅ R^op as rings (under the mapping f: R^op → HomR(R, R) given by f(a) = a*, where a*(x) = a∘x = xa). Since the opposite ring of a division ring is a division ring and ((D)_n)^op ≅ (D^op)_n, we get R ≅ (HomR(R, R))^op ≅ (D1^op)_{n1} ⊕ … ⊕ (Dk^op)_{nk}. Therefore R is a finite direct sum of matrix rings over division rings.
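A small worked instance (added for illustration): R = Z/6Z is Artinian with unity and has no non-zero nilpotent ideals, and the theorem's decomposition is

\[ \mathbb{Z}/6\mathbb{Z} \;\cong\; \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z} \;\cong\; (D_1)_1 \oplus (D_2)_1, \]

a direct sum of 1×1 matrix rings over the fields D1 = Z/2Z and D2 = Z/3Z.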

4.4 UNIFORM MODULES, PRIMARY MODULES AND NOETHER-LASKAR THEOREM
4.4.1 Definition. Uniform module. A non-zero module M is called uniform if any two non-zero submodules of M have non-zero intersection.
Example. Z as a Z-module is uniform: since Z is a principal ideal domain, any two non-zero submodules of it are of the form <a> and <b> for some non-zero a, b ∈ Z, and then <ab> is a non-zero submodule contained in both <a> and <b>. Hence the intersection of any two non-zero submodules is non-zero, and Z is a uniform module over Z.

4.4.2 Definition. If U and V are uniform modules, we say U is sub-isomorphic to V provided that U and V contain non-zero isomorphic sub-modules.
4.4.3 Definition. A module M is called primary if each non-zero sub-module of M has a uniform sub-module and any two uniform sub-modules of M are sub-isomorphic.
Example. Z is a primary module over Z.

4.4.4 Theorem. Let M be a Noetherian module or any module over a Noetherian ring. Then each non-zero submodule of M contains a uniform submodule.
Proof. Let N be a non-zero submodule of M. Then there exists x (≠ 0) ∈ N. Consider the submodule xR of N; it is enough to prove that xR contains a uniform submodule. If M is Noetherian, then every submodule of M is Noetherian and hence xR is Noetherian; and if R is Noetherian then, being a homomorphic image of R under r → xr, xR is again Noetherian. Thus, in both cases, xR is Noetherian.
Consider the family f of submodules of xR given by f = {K | K has zero intersection with at least one non-zero submodule of xR}. Then {0} ∈ f. Since xR is Noetherian, f has a maximal element, K say. Then there exists a non-zero submodule U of xR such that K ∩ U = {0}. We claim U is uniform. Otherwise, there exist non-zero submodules A, B of U such that A ∩ B = {0}. Since K ∩ U = {0}, we can form K ⊕ A as a submodule of xR, and (K ⊕ A) ∩ B = {0} (if k + a = b ∈ B, then k = b − a ∈ K ∩ U = {0}, so a = b ∈ A ∩ B = {0}). But then K ⊕ A ∈ f, a contradiction to the maximality of K. This contradiction shows that U is uniform. Since U ⊆ xR ⊆ N, every non-zero submodule N contains a uniform submodule.

4.4.5 Definition. If R is a commutative Noetherian ring and P is a prime ideal of R, then P is said to be associated with a module M if R/P embeds in M or, equivalently, P = r(x) for some x ∈ M, where r(x) = {a ∈ R | xa = 0}.

4.4.6 Definition. A module M is called P-primary for some prime ideal P if P is the only prime associated with M.
4.4.7 Theorem. Let U be a uniform module over a commutative Noetherian ring R. Then U contains a submodule isomorphic to R/P for precisely one prime ideal P. In other words, U is subisomorphic to R/P for precisely one prime ideal P.
Proof. Consider the family f of annihilator ideals r(x) for non-zero x ∈ U. Being a family of ideals of the Noetherian ring R, f has a maximal element r(x), say. We will show that P = r(x) is a prime ideal of R. For it, let ab ∈ r(x) with a ∉ r(x). Then x(ab) = (xa)b = 0 and xa ≠ 0, so b ∈ r(xa). Since xa is a non-zero element of U, r(xa) ∈ f; clearly r(x) ⊆ r(xa). Thus the maximality of r(x) in f implies that r(xa) = r(x), i.e. b ∈ r(x). Hence r(x) is a prime ideal of R. Define a mapping θ from R to xR by θ(r) = xr. Then θ is a homomorphism with kernel {r ∈ R | xr = 0} = r(x). Hence, by the fundamental theorem of homomorphisms, R/P = R/r(x) ≅ xR, so R/P is embeddable in U. Now suppose R/Q also embeds in U for a prime ideal Q. Since U is uniform, the images of R/P and R/Q have non-zero intersection, so there exist non-zero cyclic submodules xR and yR of the images of R/P and R/Q respectively such that xR ≅ yR. Since every non-zero cyclic submodule of R/P has annihilator P (P being prime), and likewise for Q, this yields P = Q. It proves the theorem.
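For example (an illustration consistent with the example after Definition 4.4.1): for the uniform Z-module U = Z, every non-zero x ∈ Z has r(x) = (0), so the unique prime ideal associated with U is P = (0), and R/P = Z itself embeds in U.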

4.4.8 Note. The ideal in the above theorem is called the prime ideal associated with
the uniform module U.

4.4.9 Theorem. Let M be a finitely generated module over a commutative Noetherian ring R. Then there are only a finite number of primes associated with M.
Proof. Take the family f consisting of direct sums of cyclic uniform submodules of M. Since every non-zero submodule of a module over a Noetherian ring contains a uniform submodule, f is non-empty. Define a relation ≤ on the elements of f by ⊕∑_{i∈I} xiR ≤ ⊕∑_{j∈J} yjR iff I ⊆ J and xiR ⊆ yiR for each i ∈ I. This relation is a partial order on f. By Zorn's lemma, f has a maximal member K = ⊕∑_{i∈I} xiR. Since M is Noetherian, K is finitely generated; thus K = ⊕∑_{i=1}^{t} xiR. By the proof of Theorem 4.4.7, there exist xiai ∈ xiR such that r(xiai) = Pi, the prime ideal associated with xiR. Set xi* = xiai and K* = ⊕∑_{i=1}^{t} xi*R. Let Q = r(x) be a prime ideal associated with M. We shall show that Q = Pi for some i, 1 ≤ i ≤ t.
Since K is a maximal member of f, K, and hence also K*, has non-zero intersection with each non-zero submodule L of M. Now let 0 ≠ y ∈ xR ∩ K*. Write y = ∑_{i=1}^{t} xi*bi = xb. We will show that r(xi*bi) = r(xi*) whenever xi*bi ≠ 0. Clearly, r(xi*) ⊆ r(xi*bi). Conversely, let xi*bic = 0. Then bic ∈ r(xi*) = Pi, and so c ∈ Pi since bi ∉ Pi (as xi*bi ≠ 0). Hence c ∈ r(xi*).
Further, since Q = r(x) is prime and y = xb ≠ 0, we have r(y) = r(x) = Q. Hence
Q = r(y) = ∩_{i∈Λ} r(xi*bi) = ∩_{i∈Λ} Pi,
omitting those terms with xi*bi = 0, where Λ ⊆ {1, 2, …, t}. Therefore Q ⊆ Pi for all i ∈ Λ. Also ∏_{i∈Λ} Pi ⊆ ∩_{i∈Λ} Pi = Q. Since Q is a prime ideal, at least one Pi appearing in the product ∏_{i∈Λ} Pi must be contained in Q. Hence Q = Pi for some i.

4.4.10 Theorem. (Noether-Laskar theorem). Let M be a finitely generated module over a commutative Noetherian ring R. Then there exists a finite family N1, N2, …, Nt of submodules of M such that
(a) ∩_{i=1}^{t} Ni = (0) and ∩_{i≠i0} Ni ≠ (0) for 1 ≤ i0 ≤ t.
(b) Each quotient module M/Ni is a Pi-primary module for some prime ideal Pi.
(c) The Pi are all distinct, 1 ≤ i ≤ t.
(d) The primary component Ni is unique iff Pi does not contain Pj for any j ≠ i.
Proof. Let Ui, 1 ≤ i ≤ t, be the uniform submodules obtained as in the proof of Theorem 4.4.9. Consider the family {K | K is a submodule of M and K contains no submodule subisomorphic to Ui}. Let Ni be a maximal member of this family; with this choice of Ni, (a), (b) and (c) follow directly.
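A worked instance of the theorem (added for illustration): for the Z-module M = Z/12Z, take N1 = 4M = {0, 4, 8} and N2 = 3M = {0, 3, 6, 9}. Then N1 ∩ N2 = (0), M/N1 ≅ Z/4Z is (2)-primary, M/N2 ≅ Z/3Z is (3)-primary, and the associated primes (2) and (3) are distinct.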

4.5 SMITH NORMAL FORM

4.5.1 Theorem. (Smith normal form). Let A be an m×n matrix over a principal ideal domain R. Then A is equivalent to a matrix in the diagonal form
diag(a1, a2, …, ar, 0, …, 0),
where ai ≠ 0 and a1 | a2 | a3 | … | ar.

Proof. For non-zero a ∈ R, define the length l(a) to be the number of prime factors appearing in a factorization a = p1p2…pr (the pi need not be distinct primes); we set l(a) = 0 if a is a unit in R. If A = 0, the result is trivial. Otherwise, let aij be a non-zero entry with minimum l(aij). Apply elementary row and column interchanges to bring it to the (1, 1) position, so that a11 is a non-zero entry of smallest l-value in the matrix.
Suppose a11 does not divide some entry a1k of the first row. Interchanging the second and kth columns, we may suppose that a11 does not divide a12. Let d = (a11, a12) be the greatest common divisor of a11 and a12; then a11 = du, a12 = dv and l(d) < l(a11). As d = (a11, a12), we can find s, t ∈ R such that d = sa11 + ta12 = d(su + tv), so su + tv = 1. Then the n×n matrix which agrees with the identity except for the top-left 2×2 block with rows (s, −v) and (t, u) is invertible (its determinant is su + tv = 1), and multiplying A on the right by it gives a matrix whose first row is (d, 0, b13, b14, …, b1n) with l(d) < l(a11). If instead a11 | a12, say a12 = ka11, then the single column operation C2 − kC1 produces a zero in the (1, 2) position without changing a11. Treating the remaining entries of the first row, and the first column, in the same way (each failure of divisibility strictly lowers the l-value of the corner entry, so the process terminates), we reach a matrix whose first row and first column have all entries zero except the (1, 1) entry. This matrix is
P1AQ1 = [[a1, 0],[0, A1]] (written in block form),
where A1 is an (m−1)×(n−1) matrix and P1 and Q1 are m×m and n×n invertible matrices respectively. Now applying the same process to A1, we get P2′A1Q2′ = [[a2, 0],[0, A2]], where A2 is an (m−2)×(n−2) matrix and P2′ and Q2′ are (m−1)×(m−1) and (n−1)×(n−1) invertible matrices respectively. Let P2 = [[1, 0],[0, P2′]] and Q2 = [[1, 0],[0, Q2′]]. Then P2P1AQ1Q2 has a1 and a2 in the first two diagonal positions and A2 in the remaining block. Continuing in this way we get invertible matrices P and Q such that PAQ = diag(a1, a2, …, ar, 0, …, 0). Finally we show that we can reduce PAQ further so that a1 | a2 | a3 | …. If a1 does not divide a2, add the second row to the first row to obtain a matrix whose first row is (a1, a2, 0, …, 0); multiplying on the right by a matrix of the form used above replaces a1 by the greatest common divisor of a1 and a2, and we then clear the first row and column again. Since each such step lowers l(a1), after finitely many steps a1 | a2, and similarly for the later ai. Hence we can always obtain a matrix of the required form.

4.5.2 Example. Obtain the Smith normal form of the matrix
⎡1 2 3⎤
⎣4 5 0⎦ .
Solution. Applying R2 − 4R1 gives
⎡1 2 3⎤
⎣0 −3 −12⎦ ;
applying C2 − 2C1 and C3 − 3C1 gives
⎡1 0 0⎤
⎣0 −3 −12⎦ ;
applying C3 − 4C2 gives
⎡1 0 0⎤
⎣0 −3 0⎦ ;
and applying −R2 gives
⎡1 0 0⎤
⎣0 3 0⎦ .
This last matrix is the Smith normal form: a1 = 1, a2 = 3 and a1 | a2.
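The steps above can be checked mechanically. The following sketch uses SymPy's smith_normal_form helper (an assumption on the environment: a recent SymPy version that ships the sympy.matrices.normalforms module):

# A sketch verifying Example 4.5.2 with SymPy (assumes a recent SymPy).
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[1, 2, 3],
            [4, 5, 0]])

# Smith normal form over the integers; successive diagonal entries divide.
S = smith_normal_form(A, domain=ZZ)
print(S)  # expected: Matrix([[1, 0, 0], [0, 3, 0]])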

4.6 FINITELY GENERATED ABELIAN GROUPS


4.6.1 Note. Let G1, G2, …, Gn be a family of subgroups of G and let G* = G1…Gn. Then the following are equivalent.
(i) G1×…×Gn ≅ G* under the mapping (g1, g2, …, gn) → g1g2…gn.
(ii) Gi is normal in G* and every element x belonging to G* can be uniquely expressed as x = g1g2…gn, gi ∈ Gi.
(iii) Gi is normal in G* and if e = g1g2…gn with gi ∈ Gi, then each gi = e.
(iv) Gi is normal in G* and Gi ∩ G1…Gi−1Gi+1…Gn = {e}, 1 ≤ i ≤ n.

4.6.2 Theorem. (Fundamental theorem of finitely generated abelian groups). Let G be a finitely generated abelian group. Then G can be decomposed as a direct sum of a finite number of cyclic groups Ci, i.e. G = C1 ⊕ C2 ⊕ … ⊕ Ct, where either all the Ci are infinite or, for some j ≤ t, C1, C2, …, Cj are of orders m1, m2, …, mj respectively with m1 | m2 | … | mj, and the rest of the Ci are infinite.
Proof. Let {a1, a2, …, at} be a smallest generating set for G. If t = 1, then G is itself a cyclic group and the theorem is trivially true. Let t > 1 and suppose that the result holds for all abelian groups having a generating set of fewer than t elements. First suppose that G has a generating set {a1, a2, …, at} with the property that, for all integers x1, x2, …, xt, the equation
property that , for all integers x1, x2, …, xt , the equation
x1 a1 + x2 a2 + … + xt at = 0
implies that
x1 = 0, x2 = 0, . . ., xt = 0.
But this condition implies that every element in G has a unique representation of
the form
g = x1 a1 + x2 a2 + … + xt at, xi ∈Z.
Thus by Note 4.6.1,
G = C1⊕ C2⊕…⊕ Ct
where Ci = <ai> is the cyclic group generated by ai, 1 ≤ i ≤ t. By our choice of generating set each Ci is infinite (because if Ci were of finite order ri, then riai = 0 with ri > 0, contradicting the assumed property). Hence in this case G is a direct sum of a finite number of infinite cyclic groups.
Now suppose that G has no generating set of t elements with the property that x1a1 + x2a2 + … + xtat = 0 ⇒ x1 = 0, x2 = 0, …, xt = 0. Then, given any generating set {a1, a2, …, at} of G, there exist integers x1, x2, …, xt, not all zero, such that
x1 a1 + x2 a2 + … + xt at = 0.
As x1a1 + x2a2 + … + xtat = 0 implies −x1a1 − x2a2 − … − xtat = 0, we may assume without loss of generality that xi > 0 for at least one i. Consider all possible generating sets of G containing t elements, and let X be the set of all t-tuples (x1, x2, …, xt) of coefficients, with at least one positive entry, arising from relations x1a1 + x2a2 + … + xtat = 0 for such generating sets. Further, let m1 be the least positive integer occurring in the tuples of X. Without loss of generality we may take m1 to be the first component of such a tuple, for a suitable generating set {a1, a2, …, at}, i.e.
m1 a1 + x2 a2 + … + xt at = 0 (1)
By the division algorithm we can write xi = qim1 + si, where 0 ≤ si < m1. Hence (1) becomes
m1 b1 + s2 a2 + … + st at = 0, where b1 = a1 + q2 a2 + … + qt at.
Now if b1 = 0, then a1 = −q2a2 − … − qtat. But then G has a generating set containing fewer than t elements, a contradiction to the assumption that the smallest generating set of G contains t elements. Hence b1 ≠ 0. Since a1 = b1 − q2a2 − … − qtat, {b1, a2, …, at} is also a generating set of G. But then, by the minimality of m1, the relation m1b1 + s2a2 + … + stat = 0 forces si = 0 for all i, 2 ≤ i ≤ t. Hence m1b1 = 0. Let C1 = <b1>. Since m1 is the least positive integer such that m1b1 = 0, the order of C1 is m1.
Let G1 be the subgroup generated by {a2, a3, …, at}. We claim
that G = C1⊕G1. For it, it is sufficient to show that C1∩G1 ={0}. Let
d∈C1∩G1. Then d=x1b1 , 0 ≤ x1 < m1 and d = x2 a2 + … + xt at . Equivalently,
x1b1 +(-x2)a2 +… + (-xt)at =0. Again by the minimal property of m1, x1=0.
Hence C1∩G1 ={0}.
Now G1 is generated by the set {a2, a3, …, at} of t−1 elements. It is a smallest generating set of G1 (because if G1 were generated by fewer than t−1 elements, then G could be generated by fewer than t elements, a contradiction to the assumption that the smallest generating set of G contains t elements). Hence, by the induction hypothesis,
G1 = C2 ⊕ … ⊕ Ct
where C2, …, Ct are cyclic subgroups of G1 that either are all infinite or, for some j ≤ t, C2, …, Cj are finite cyclic groups of orders m2, …, mj respectively such that m2 | m3 | … | mj, and Ci is infinite for i > j.
Let Ci = <bi>, i = 2, 3, …, t, and suppose that C2 is of order m2. Then {b1, b2, …, bt} is a generating set of G and m1b1 + m2b2 + 0·b3 + … + 0·bt = 0. By repeating the argument given for (1), we conclude that m1 | m2. This completes the proof of the theorem.
4.6.3 Theorem. Let G be a finite abelian group. Then there exists a unique list of integers m1, m2, …, mt (all mi > 1) such that the order of G is m1m2…mt and G = C1 ⊕ C2 ⊕ … ⊕ Ct, where C1, C2, …, Ct are cyclic groups of orders m1, m2, …, mt respectively. Consequently, G ≅ Zm1 ⊕ Zm2 ⊕ … ⊕ Zmt.

Proof. By Theorem 4.6.2, G = C1 ⊕ C2 ⊕ … ⊕ Ct where C1, C2, …, Ct are cyclic groups of orders m1, m2, …, mt respectively, such that m1 | m2 | … | mt. As the order of S×T equals the order of S times the order of T, the order of G is m1m2…mt. Since a cyclic group of order m is isomorphic to Zm, the group of integers under addition mod m, therefore
G ≅ Zm1 ⊕ Zm2 ⊕ … ⊕ Zmt.

We claim that m1, m2, …, mt are unique. For it, suppose there exist n1, n2, …, nr such that n1 | n2 | … | nr and G = D1 ⊕ D2 ⊕ … ⊕ Dr, where the Dj are cyclic groups of order nj. Since Dr has an element of order nr and the largest order of an element of G is mt, we get nr ≤ mt. By the same argument, mt ≤ nr. Hence mt = nr.
Now consider mt−1G = {mt−1g | g ∈ G}. Then by the two decompositions of G we get
mt−1G = (mt−1C1) ⊕ (mt−1C2) ⊕ … ⊕ (mt−1Ct) = (mt−1D1) ⊕ (mt−1D2) ⊕ … ⊕ (mt−1Dr).
As mi | mt−1 (that is, mi divides mt−1) for all i, 1 ≤ i ≤ t−1, for all such i we have mt−1Ci = {0}. Hence |mt−1G| = |mt−1Ct| = |mt−1Dr|. Thus |mt−1Dj| = 1 for j = 1, 2, …, r−1; in particular nr−1 | mt−1. Repeating the process with nr−1G, we get that mt−1 | nr−1. Hence mt−1 = nr−1. Continuing this process we get mt−i = nr−i for i = 0, 1, 2, …. But m1m2…mt = |G| = n1n2…nr, therefore r = t and mi = ni for all i, 1 ≤ i ≤ t.
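As a quick worked instance of this uniqueness statement (added for illustration), consider abelian groups of order 12. The only chains m1 | m2 | … with every mi > 1 and product 12 are

\[ (m_1) = (12)\ \Rightarrow\ G \cong \mathbb{Z}_{12}, \qquad (m_1, m_2) = (2, 6)\ \Rightarrow\ G \cong \mathbb{Z}_2 \oplus \mathbb{Z}_6, \]

so there are exactly two abelian groups of order 12 up to isomorphism.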

4.6.3 Corollary. Let A be a finitely generated abelian group. Then
A ≅ Z^s ⊕ Z/a1Z ⊕ … ⊕ Z/arZ,
where s is a non-negative integer and the ai are non-zero non-units in Z such that a1 | a2 | … | ar. Further, the decomposition of A shown above is unique in the sense that the ai are unique.
4.6.4 Example. The abelian group generated by x1 and x2 subject to the conditions 2x1 = 0, 3x2 = 0 is isomorphic to Z/<6>, because the matrix of these equations,
⎡2 0⎤
⎣0 3⎦ ,
has the Smith normal form
⎡1 0⎤
⎣0 6⎦ .
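This too can be checked with the same SymPy helper used after Example 4.5.2 (same assumptions on the environment):

# Relation matrix of the presentation 2*x1 = 0, 3*x2 = 0 (assumes a recent SymPy).
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[2, 0],
            [0, 3]])

print(smith_normal_form(M, domain=ZZ))  # expected: Matrix([[1, 0], [0, 6]])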

4.7 KEY WORDS

Uniform modules, Noether-Laskar theorem, Wedderburn-Artin theorem, finitely generated abelian groups.

4.8 SUMMARY
In this chapter, we study the Wedderburn-Artin theorem, uniform modules, primary modules, the Noether-Laskar theorem, the Smith normal form and finitely generated abelian groups. Some more results on Noetherian and Artinian modules and rings are also studied.

4.9 SELF ASSESSMENT QUESTIONS

(1) Let R be an Artinian ring. Then show that the following sets are ideals and are equal:
(i) N = sum of nil ideals, (ii) U = sum of nilpotent ideals, (iii) sum of all nilpotent right ideals.
(2) Show that every uniform module is a primary module but that the converse may not be true.
(3) Obtain the Smith normal form of the matrix
⎡ −x    4     −2  ⎤
⎢ −3   8−x    3   ⎥
⎣  4   −8   −2−x ⎦
over the ring Q[x].
(4) Find the abelian group generated by {x1, x2, x3} subject to the conditions
5x1 + 9x2 + 5x3 = 0, 2x1 + 4x2 + 2x3 = 0, x1 + x2 − 3x3 = 0.

4.10 SUGGESTED READINGS


(1) Modern Algebra; SURJEET SINGH and QAZI ZAMEERUDDIN, Vikas Publications.
(2) Basic Abstract Algebra; P.B. BHATTACHARYA, S.K. JAIN, S.R. NAGPAUL, Cambridge University Press, Second Edition.
