Vector Spaces Project-F
Bachelor of Honors
in
Mathematics
Submitted by
Priya Parihar
Supervisor Coordinator
Certificate
This is to certify that Mr. Priya Parihar, who was registered under Roll No./Regn. No. 18532040011 for the degree of Bachelor of Honors in Mathematics under my supervision, has completed the work pertaining to his thesis. The exact title of the thesis is "Vector Space and Modules". I have gone through the draft of the thesis and found it worthy of consideration in partial fulfilment of the requirements for the award of the degree of Bachelor of Honors. I further certify that:
ii. he has put in the required attendance and has also delivered and attended seminars/group discussion sessions.
Dated: 23-08-2021
Acknowledgement
Firstly, I thank God for the completion of this project work. I would like to express my sincere gratitude to my project advisor, Dr. Rohit Verma. I feel proud to state that his candid attitude, worthy suggestions, guidance, valuable comments and incessant support during the completion of this project work were deeply appreciated. This project has strengthened my passion for mathematics and boosted my confidence for the future. I have learned a lot from this project, besides having a chance to sharpen my computer skills.
Priya Parihar
Contents
Certificate
Acknowledgement
1 Introduction
2 Vector Spaces
3 Linear Transformation
4 Modules
4.1 Modules
4.2 Submodules
5 Module Homomorphisms
Chapter 1
Introduction
The project thesis entitled "Vector Space and Modules" is a study of the foundations of that branch of mathematics called linear algebra. A vector space, also called a linear space, is a collection of objects called vectors. The notion of a vector space originates from the notion of a vector that we are familiar with in mechanics or geometry. We recall that a vector is defined as a directed line segment, which in algebraic terms is described as an ordered pair (a, b), the coordinates of its terminal point relative to a fixed coordinate system. Addition of vectors is given by the rule
(a, b) + (c, d) = (a + c, b + d).
One can easily verify that the set of vectors under this addition forms an abelian group. Scalar multiplication is defined by the rule α(a, b) = (αa, αb), which satisfies certain natural properties. This concept extends similarly to three dimensions. We can generalise the whole idea through the definition of a vector space and let the scalars vary not only over the set of reals but over any field F. A vector space thus differs from groups and rings inasmuch as it also involves elements from outside of itself.
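As a small illustrative aside (not part of the original development), the following Python sketch checks a few of these abelian-group and scalar-multiplication properties numerically for ordered pairs in R2; the helper names add and scale are chosen here for illustration only.

# Illustrative sketch: componentwise operations on ordered pairs (a, b) in R^2.
def add(u, v):
    """Vector addition: (a, b) + (c, d) = (a + c, b + d)."""
    return (u[0] + v[0], u[1] + v[1])

def scale(alpha, u):
    """Scalar multiplication: alpha(a, b) = (alpha*a, alpha*b)."""
    return (alpha * u[0], alpha * u[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
alpha, beta = 2.0, -3.0

# A few of the vector space axioms, checked on sample vectors.
assert add(u, v) == add(v, u)                                   # commutativity
assert add(add(u, v), w) == add(u, add(v, w))                   # associativity
assert add(u, (0.0, 0.0)) == u                                  # additive identity
assert scale(alpha, add(u, v)) == add(scale(alpha, u), scale(alpha, v))
assert scale(alpha + beta, u) == add(scale(alpha, u), scale(beta, u))
assert scale(alpha * beta, u) == scale(alpha, scale(beta, u))
assert scale(1.0, u) == u
print("sample axioms hold for these vectors")

Such a check on sample vectors is of course not a proof; the general verification is carried out in Chapter 2.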
From various elementary courses we know that the notions of vector spaces and linear transformations have applications in several different areas of mathematics. Linear operators defined on infinite dimensional complex vector spaces are frequently used to study the essence of quantum mechanics. In general relativity, the equations are represented by operators between certain vector spaces. In modelling problems of economics, vector spaces play an important role in describing the production of items and their exchange in the markets.
Further, vector spaces over some finite fields play a significant part in constructing error-correcting codes, which can be employed in sending messages along a telephone line, or for allotting phone numbers which automatically recognise and rectify errors.
One of the most elementary introductions to linear algebra is the notion of a determinant, which is defined for square matrices whose elements are usually taken from a field F. But when one comes to consider eigenvalues (or latent roots), the matrix whose determinant has to be found is of the form λI − A and therefore has its entries in a polynomial ring. This prompts the question of whether the various properties of determinants should not really be developed in a more general setting, and leads to the wider question of whether the scalars in the definition of a vector space should not be restricted to lie in a field but should more generally belong to a ring. It turns out that the modest generalisation so suggested is of enormous importance and leads to the most important structure in the whole of algebra, namely that of a module. Many branches of algebra are linked by the theory of modules. Since the notion of a module is obtained essentially by a modest generalisation of that of a vector space, it is not surprising that it plays an important role in the theory of linear algebra.
Chapter 2
Vector Spaces
Definition 2.1.1. Internal Composition. Let A be a non-empty set, then a mapping
f : A × A → A
is called an internal composition in A. It is also called a binary composition or a binary operation.
Example 2.1.2. Consider R, the set of real numbers, and let f : R × R → R be defined as
f(a, b) = ab ∀ (a, b) ∈ R × R;
then f is an internal composition in R.
Definition 2.1.3. External Composition. Let V and F be any two non-empty sets, then a mapping
f : V × F → V
is called an external composition in V over F.
Example 2.1.4. Let V be the set of all n × n matrices over F, where F is the set of real numbers. If f : V × F → V is defined as
f(A, α) = αA ∀ A ∈ V, α ∈ F,
then f is an external composition in V over F.
2.2 Vector Space and its Subspaces
Definition 2.2.1. Let (F, +, ·) be a field and let V be a non-empty set, then V is called a vector space or linear space over the field F, denoted by V(F), if
(V, +) is an abelian group, i.e. for all x, y, z ∈ V:
(i) x + y ∈ V.
(ii) (x + y) + z = x + (y + z).
(iii) there exists 0 ∈ V such that x + 0 = 0 + x = x.
(iv) for each x ∈ V there exists −x ∈ V such that x + (−x) = (−x) + x = 0.
(v) x + y = y + x.
and the two compositions, i.e. scalar multiplication and addition of vectors, satisfy the following conditions:
(i) ∀ α ∈ F and ∀ x, y ∈ V, we have α(x + y) = αx + αy.
(ii) ∀ α, β ∈ F and ∀ x ∈ V, we have (α + β)x = αx + βx.
(iii) ∀ α, β ∈ F and ∀ x ∈ V, we have (αβ)x = α(βx).
(iv) ∀ x ∈ V, we have 1·x = x, where 1 is the unity of F.
Note. If V is a vector space over the field F, then the members of F are called scalars and the members of V are called vectors.
Example. Let V = Rn = {(x1, x2, ..., xn) | xi ∈ R} with coordinate-wise addition and scalar multiplication; then V is a vector space over R. Indeed, for x = (x1, ..., xn), y = (y1, ..., yn) ∈ V,
x + y = (x1 + y1, x2 + y2, ..., xn + yn)
∴ x + y ∈ V [∵ xi, yi ∈ R ⇒ xi + yi ∈ R for i = 1, 2, ..., n]
Further, for z = (z1, z2, ..., zn) ∈ V,
(x + y) + z = (x1 + y1, x2 + y2, ..., xn + yn) + (z1, z2, ..., zn)
= (x1 + (y1 + z1), x2 + (y2 + z2), ..., xn + (yn + zn))
= (x1, x2, ..., xn) + (y1 + z1, y2 + z2, ..., yn + zn)
= x + (y + z)
x + 0 = (x1 , x2 , . . . , xn ) + (0, 0, . . . , 0)
= (x1 + 0, x2 + 0, . . . , xn + 0)
= (x1 , x2 , . . . , xn )
=x
Similarly, 0 + x = x
∴ x + 0 = x = 0 + x ∀ x = (x1 , x2 , . . . , xn ) ∈ V
Thus, 0 = (0, 0, ..., 0) is the additive identity in V.
Further, for x = (x1, x2, ..., xn) ∈ V, take −x = (−x1, −x2, ..., −xn) ∈ V, then
x + (−x) = (x1 − x1, x2 − x2, ..., xn − xn)
= (0, 0, ..., 0)
= 0
Similarly, (−x) + x = 0
∴ x + (−x) = (−x) + x = 0, so every element of V has an additive inverse in V.
Again, let x = (x1, x2, ..., xn), y = (y1, y2, ..., yn) ∈ V; xi, yi ∈ R for 1 ≤ i ≤ n, then
x + y = (x1, x2, ..., xn) + (y1, y2, ..., yn)
= (x1 + y1, x2 + y2, ..., xn + yn)
= (y1 + x1, y2 + x2, ..., yn + xn) [∵ xi, yi ∈ R ⇒ xi + yi = yi + xi for 1 ≤ i ≤ n]
= (y1, y2, ..., yn) + (x1, x2, ..., xn)
= y + x
Hence, (V, +) is an abelian group.
Let α ∈ R and x = (x1, x2, ..., xn) ∈ V, then
αx = α(x1, x2, ..., xn) = (αx1, αx2, ..., αxn) ∈ V,
so V is closed under scalar multiplication. Moreover, for y = (y1, ..., yn) ∈ V,
α(x + y) = α(x1 + y1, x2 + y2, ..., xn + yn)
= (α(x1 + y1), α(x2 + y2), ..., α(xn + yn))
= (αx1 + αy1, αx2 + αy2, ..., αxn + αyn)
= α(x1, x2, ..., xn) + α(y1, y2, ..., yn)
= αx + αy
(α + β)x = (α + β)(x1 , x2 , . . . , xn )
= α(x1 , x2 , . . . , xn ) + β(x1 , x2 , . . . , xn )
= αx + βx
(αβ)x = (αβ)(x1 , x2 , . . . , xn )
= α(β(x1 , x2 , . . . , xn ))
= α(βx)
1.x = 1.(x1 , x2 , . . . , xn )
= (x1 , x2 , . . . , xn )
=x
Similarly, if F is any field, then F n is a vector space over F. The following theorem gives some elementary properties of a vector space.
Theorem. Let V be a vector space over a field F, then
(i) α0 = 0 ∀ α ∈ F
(ii) 0x = 0 ∀ x ∈ V
(iii) (−α)x = α(−x) = −(αx) ∀ α ∈ F, x ∈ V
(iv) αx = 0 ⇒ α = 0 or x = 0, ∀ α ∈ F, x ∈ V
Definition 2.2.4. Let V be a vector space over a field F, then a non-empty subset W of V is called a subspace of V if W is itself a vector space over F with the same operations of addition and scalar multiplication defined for V.
Note. If V is any vector space over a field F, then V itself is a subspace of V, and the subset {0} of V, which contains the zero vector alone, is also a subspace of V, called the zero subspace of V. These two subspaces {0} and V are called improper subspaces, and subspaces other than these two are called proper subspaces.
Theorem. A non-empty subset W of a vector space V over a field F is a subspace of V if and only if
(i) x + y ∈ W ∀ x, y ∈ W
(ii) αx ∈ W ∀ α ∈ F, x ∈ W
Proof. First suppose that W is a subspace of V, then W is itself a vector space over F with the operations of addition and scalar multiplication in V. Thus, W is closed under vector addition and scalar multiplication, i.e. x + y ∈ W and αx ∈ W ∀ x, y ∈ W, α ∈ F.
Conversely, suppose that W is a non-empty subset of V and W is closed under addition and scalar multiplication, i.e. conditions (i) and (ii) hold. Let x ∈ W. Now
−1 ∈ F [∵ F is a field]
−1 ∈ F, x ∈ W ⇒ (−1)x ∈ W
i.e. −(1·x) ∈ W
⇒ −x ∈ W
Also x, −x ∈ W ⇒ x + (−x) ∈ W [by (i)], and x + (−x) = 0 = (−x) + x, so 0 ∈ W.
Thus, the additive identity exists in W and the additive inverse of each element of W
also exists in W .
Further, the elements of W are also the elements of V , therefore, vector addition will
be commutative and associative in W . Hence, W is an abelian group with respect to
vector addition.
Moreover, it is given that W is closed under scalar multiplication, and W, being a subset of the vector space V, satisfies all the remaining properties of a vector space. Therefore, we have
α(x + y) = αx + αy ∀ α ∈ F and x, y ∈ W
(α + β)x = αx + βx ∀ α, β ∈ F and x ∈ W
(αβ)x = α(βx) ∀ α, β ∈ F and x ∈ W
1·x = x ∀ x ∈ W, where 1 ∈ F.
Hence, W is itself a vector space over F, i.e. W is a subspace of V.
Example. Let a, b, c be fixed scalars in F, then
W = {(x, y, z) | ax + by + cz = 0; x, y, z ∈ F}
is a subspace of V3(F).
Clearly W ≠ ϕ, since (0, 0, 0) ∈ W. Let u = (x1, y1, z1), v = (x2, y2, z2) ∈ W and α, β ∈ F, then ax1 + by1 + cz1 = 0 and ax2 + by2 + cz2 = 0. Now
αu + βv = α(x1, y1, z1) + β(x2, y2, z2) = (αx1 + βx2, αy1 + βy2, αz1 + βz2),
and a(αx1 + βx2) + b(αy1 + βy2) + c(αz1 + βz2)
= α(ax1 + by1 + cz1) + β(ax2 + by2 + cz2)
= α(0) + β(0)
= 0
∴ αu + βv ∈ W. Hence W is a subspace of V3(F).
Note. The above example implies that any plane passing through (0, 0, 0) is a subspace of R3.
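The closure property verified above is easy to spot-check numerically. The sketch below is an illustrative aside assuming, purely for the sample, a = 1, b = 2, c = 3 and a couple of vectors lying in the plane; the helper name in_W is an invention of this example.

# Illustrative check: W = {(x, y, z) : a*x + b*y + c*z = 0} is closed under
# linear combinations, for sample coefficients a, b, c (assumed here).
a, b, c = 1.0, 2.0, 3.0

def in_W(v, tol=1e-12):
    x, y, z = v
    return abs(a * x + b * y + c * z) < tol

# Two vectors lying in the plane a*x + b*y + c*z = 0.
u = (2.0, -1.0, 0.0)          # 1*2 + 2*(-1) + 3*0 = 0
v = (3.0, 0.0, -1.0)          # 1*3 + 2*0  + 3*(-1) = 0
assert in_W(u) and in_W(v)

alpha, beta = 5.0, -2.0
w = tuple(alpha * ui + beta * vi for ui, vi in zip(u, v))
assert in_W(w)                # alpha*u + beta*v stays in W
print("closure holds for this sample")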
Definition 2.2.8. Let W1 and W2 be two subspaces of a vector space V over a field F, then the sum of W1 and W2, denoted by W1 + W2, is defined as
W1 + W2 = {x + y | x ∈ W1 and y ∈ W2 }.
Theorem 2.2.9. Let W1 and W2 be two subspaces of a vector space V over a field F, then
Note. The arbitrary intersection of subspaces of a vector space V over a field F is also a subspace of V.
2.3 Linear Combination and Linear Span
Definition. Let V be a vector space over a field F, then a vector v ∈ V is called a linear combination of the vectors v1, v2, ..., vn ∈ V if there exist scalars α1, α2, ..., αn ∈ F such that
v = α1 v1 + α2 v2 + ... + αn vn,
or v = Σ_{i=1}^{n} αi vi, αi ∈ F.
Example 2.3.1. Let V be a vector space over a field F such that V = R2 = {(a, b) | a, b ∈ R} and F = R, then write v = (−1, 7/3) as a linear combination of (1, 2) and (3, −1).
Solution. Let v = αx + βy for some α, β ∈ R, where x = (1, 2) and y = (3, −1),
i.e. (−1, 7/3) = α(1, 2) + β(3, −1)
= (α, 2α) + (3β, −β)
⇒ (−1, 7/3) = (α + 3β, 2α − β)
⇒ α + 3β = −1 and 2α − β = 7/3
Solving these equations, we get α = 6/7 and β = −13/21.
Thus, v = (6/7)x + (−13/21)y = (6/7)(1, 2) + (−13/21)(3, −1).
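The coefficients in Example 2.3.1 can also be found numerically by solving the 2 × 2 system α + 3β = −1, 2α − β = 7/3. The following sketch uses numpy (this library choice is an assumption of the illustration, not part of the text):

import numpy as np

# Columns are the vectors x = (1, 2) and y = (3, -1); solve A @ [alpha, beta] = v.
A = np.array([[1.0, 3.0],
              [2.0, -1.0]])
v = np.array([-1.0, 7.0 / 3.0])

alpha, beta = np.linalg.solve(A, v)
print(alpha, beta)                      # approximately 6/7 and -13/21
assert np.allclose(alpha * np.array([1.0, 2.0]) + beta * np.array([3.0, -1.0]), v)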
Theorem 2.3.2. Let V be a vector space over a field F, then the linear span of any subset S of V is the smallest subspace of V containing S.
Theorem 2.3.3. If S and T are subsets of a vector space V over a field F, then
⇒ vi ∈ L(T) for 1 ≤ i ≤ n [∵ S ⊂ L(T)]
⇒ αi vi ∈ L(T) for 1 ≤ i ≤ n
⇒ Σ_{i=1}^{n} αi vi ∈ L(T) [∵ L(T) is a subspace of V(F)]
⇒ v ∈ L(T)
∴ L(S) ⊂ L(T)
(ii) Given S ⊂ T
Suppose S is a subspace of V and let v ∈ L(S), then
v = Σ_{i=1}^{n} αi vi, αi ∈ F, vi ∈ S
⇒ v ∈ S [∵ S is a subspace of V]
∴ L(S) ⊂ S
Also S ⊂ L(S) [∵ if vk ∈ S, then vk = 1·vk ∈ L(S)]
∴ S = L(S).
Conversely, S = L(S) ⇒ S is a subspace of V [∵ L(S) is a subspace of V].
Theorem 2.3.4. The linear sum of two subspaces W1 and W2 of a vector space V over a field F is a subspace of V generated by W1 and W2, i.e.
W1 + W2 = ⟨W1 ∪ W2 ⟩
2.4 Linear Dependence and Independence
Linear Dependence. If V is a vector space over a field F, then the vectors v1, v2, ..., vn in V are called linearly dependent over F if there exist scalars α1, α2, ..., αn in F, not all of them zero, such that
α1 v1 + α2 v2 + ... + αn vn = 0.
For Example. The vectors v1 = (2, 3, 4), v2 = (1, 0, 0), v3 = (0, 1, 0) and v4 = (0, 0, 1) are linearly dependent in R3(R). Here we have
v1 = 2v2 + 3v3 + 4v4, i.e. v1 − 2v2 − 3v3 − 4v4 = 0,
so we have α1 v1 + α2 v2 + α3 v3 + α4 v4 = 0
where α1 = 1 ≠ 0, α2 = −2 ≠ 0, α3 = −3 ≠ 0, α4 = −4 ≠ 0.
Linear Independence. If V is a vector space over a field F, then the vectors v1, v2, ..., vn in V are called linearly independent over F if the only scalars α1, α2, ..., αn in F satisfying
α1 v1 + α2 v2 + ... + αn vn = 0
are α1 = α2 = ... = αn = 0.
For Example. The vectors v1 = (2, 0, 0), v2 = (0, 3, 0) and v3 = (0, 0, 4) are linearly independent in R3(R). Consider α1 v1 + α2 v2 + α3 v3 = 0 for some scalars α1, α2, α3 ∈ R.
⇒ (2α1, 3α2, 4α3) = (0, 0, 0)
⇒ α1 = 0, α2 = 0 and α3 = 0.
Hence v1, v2, v3 are linearly independent.
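Linear dependence or independence of finitely many vectors in Rn can be tested numerically by comparing the rank of the matrix whose rows are the given vectors with the number of vectors. The following numpy sketch is an illustrative aside (the helper name independent is an assumption of the example) and reproduces both examples above.

import numpy as np

def independent(vectors):
    """Vectors are linearly independent iff the matrix they form has full row rank."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

# Dependent example: v1 = 2*v2 + 3*v3 + 4*v4.
dep = [(2, 3, 4), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
# Independent example.
ind = [(2, 0, 0), (0, 3, 0), (0, 0, 4)]

print(independent(dep))   # False
print(independent(ind))   # True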
Let v ∈ V with v ≠ 0 and suppose αv = 0 for some α ∈ F
⇒ either α = 0 or v = 0
⇒ α = 0 [∵ v ≠ 0, given]
∴ αv = 0 ⇒ α = 0, i.e. a single non-zero vector is linearly independent.
Let S = {0}, the set containing only the zero vector. Then α·0 = 0 for α ≠ 0 ∈ F
⇒ S is a L.D. set.
(iii) Let S = {v1, v2, ..., vn} be a subset of a vector space V(F) such that one of the vectors of S is the zero vector, say vk = 0. Choose αi = 0 for i ≠ k
and αk = 1.
Now α1 v1 + α2 v2 + ... + αn vn
= 0 + 0 + ... + 0 + 1·0 + 0 + ... + 0 = 0,
i.e. α1 v1 + α2 v2 + ... + αn vn = 0 with αk = 1 ≠ 0
⇒ S is L.D.
If S contains the zero vector, then S is L.D., which contradicts the given hypothesis that S is L.I.
Now let T = {v1, v2, ..., vm}, m ≤ n, be a subset of S and suppose
α1 v1 + α2 v2 + ... + αm vm = 0 ...(A)
Since S is L.I., (A) gives
⇒ α1 = α2 = ... = αm = 0
∴ T is L.I., and clearly this holds for every subset T of S.
Consider α1 v1 + ... + αp vp = 0.
(ii) The set {v1, v2} is L.D. iff v1 and v2 are collinear, i.e., iff one is a scalar multiple of the other.
(iii) The set {v1, v2, v3} is L.D. iff v1, v2 and v3 are coplanar, i.e., iff one is a linear combination of the other two.
2.5 Basis and Dimension
Definition 2.5.1. Let V be a vector space over a field F, then a subset B of V is called a basis of V if
(i) B is linearly independent, and
(ii) L(B) = V, i.e. B spans V.
Note. (i) A set of vectors containing the zero vector is always a L.D. set, so it cannot be a basis of any vector space. Thus, the zero vector cannot be an element of a basis of a vector space.
(ii) Since L(ϕ) = {0} and ϕ is L.I.,
∴ ϕ is a basis of {0}.
Example 2.5.2. (a) The set B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis of V3(R).
B is L.I.: let a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = (0, 0, 0) for a, b, c ∈ R
⇒ (a, b, c) = (0, 0, 0)
⇒ a = 0, b = 0 and c = 0
L(B) = V3(R): any (a, b, c) ∈ V3(R) can be written as (a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1), i.e. as a linear combination of elements of B.
(b) The set B1 = {(1, 0, 0), (1, 1, 0), (1, 2, 3), (0, 1, 0)} is not a basis of V3(R).
For, let a, b, c, d ∈ R such that
a(1, 0, 0) + b(1, 1, 0) + c(1, 2, 3) + d(0, 1, 0) = 0 ...(A)
⇒ (a + b + c, b + 2c + d, 3c) = (0, 0, 0)
∴ a + b + c = 0 ...(1)
b + 2c + d = 0 ...(2)
3c = 0 ...(3)
From (3), c = 0, so (1) and (2) give
a + b = 0 and b + d = 0
⇒ a = −b, d = −b
Let b = −k ≠ 0, k real
∴ a = k, d = k, c = 0
Thus (A) holds with not all of a, b, c, d zero, so B1 is L.D. and hence not a basis of V3(R).
Note. In general, any subset of Vm(F), where F is any field, having more than m elements is linearly dependent and hence cannot be a basis of Vm(F).
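Whether a given finite set forms a basis of V3(R) can also be checked numerically: three vectors form a basis iff the matrix they build has rank 3, while any four vectors in R3 are automatically dependent, as the note above records. The sketch below is illustrative only (the helper name is_basis_of_R3 is an assumption of the example).

import numpy as np

def is_basis_of_R3(vectors):
    """A set is a basis of R^3 iff it consists of exactly 3 vectors of rank 3."""
    A = np.array(vectors, dtype=float)
    return len(vectors) == 3 and np.linalg.matrix_rank(A) == 3

B  = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
B1 = [(1, 0, 0), (1, 1, 0), (1, 2, 3), (0, 1, 0)]

print(is_basis_of_R3(B))    # True
print(is_basis_of_R3(B1))   # False: four vectors in R^3 are linearly dependent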
Theorem 2.5.5. [Existence Theorem] There exists a basis for each finite dimensional vector space, i.e. every finite dimensional vector space has a basis.
Proof. Let V be a finite dimensional vector space over a field F, then there exists a finite subset S = {v1, v2, ..., vn} of V such that
L(S) = V ...(A)
Without any loss of generality we may suppose here that all vectors in the subset S are non-zero, because the contribution of the zero vector in any linear combination of the vectors of S is zero.
Since S ⊂ V, either S is L.I. or L.D.
If S is L.I., then S is a basis of V(F) [∵ by (A), L(S) = V].
Hence the result.
If S is L.D., then there exists m, 2 ≤ m ≤ n, such that vm ∈ S is a linear combination of v1, v2, ..., vm−1, i.e.
vm = Σ_{i=1}^{m−1} ai vi, ai ∈ F ...(B)
Now let v ∈ V
⇒ v is a linear combination of the elements of S [∵ L(S) = V]
∴ v = β1 v1 + β2 v2 + ... + βn vn for βj's ∈ F
= β1 v1 + β2 v2 + ... + βm−1 vm−1 + βm vm + βm+1 vm+1 + ... + βn vn
= β1 v1 + β2 v2 + ... + βm−1 vm−1 + βm (Σ_{i=1}^{m−1} ai vi) + βm+1 vm+1 + ... + βn vn [using (B)]
Thus v is a linear combination of the vectors of S \ {vm}, i.e. L(S \ {vm}) = V. Repeating this process a finite number of times, we arrive at a L.I. subset of S which still spans V, and this subset is a basis of V.
Theorem 2.5.6. [Extension Theorem] Any linearly independent set in a finite dimensional vector space is either a basis or can be extended to form a basis.
Proof. Let V be a finite dimensional vector space over a field F
⇒ ∃ a set B = {v1, v2, ..., vn} ⊂ V which is a basis of V(F), i.e. B is L.I. and L(B) = V.
Also we are given that the set S1 = {w1, w2, ..., wp} is a L.I. set in V.
Consider the set S2 = {w1, w2, ..., wp, v1, v2, ..., vn}.
Let wj ∈ S1 ⇒ wj ∈ V for 1 ≤ j ≤ p, as S1 ⊂ V
⇒ wj can be expressed as a L.C. of the elements of B (∵ L(B) = V)
⇒ wj = Σ_{i=1}^{n} aij vi ...(1)
for scalars ci ∈ F
⇒ y = Σ_{i=1}^{n} ci vi
= Σ_{i≠k} ci vi + ck vk
= Σ_{i≠k} ci vi + ck (Σ_{j=1}^{p} βj wj + Σ_{i=1}^{k−1} αi vi) [Using (4)]
= Σ_{i=1}^{k−1} ci vi + Σ_{i=k+1}^{n} ci vi + Σ_{i=1}^{k−1} (ck αi) vi + Σ_{j=1}^{p} (ck βj) wj
Remark 2.5.7. Any two bases of a finite dimensional vector space have the same number of elements.
Definition. (Dimension) The number of elements in a basis of a finite dimensional vector space V is called the dimension of the vector space, denoted by dim(V).
For example, if V = Rn and F = R, then
dim(V) = n,
as B = {e1, e2, ..., en}, where ei = (0, 0, ..., 1, ..., 0) with 1 at the ith place, is a basis of V having n elements.
Thus, Rn is an n-dimensional vector space over R.
Note. (i) dim{0} = 0, as the basis of the zero space is the empty set, which contains no elements.
(ii) If a field F is taken as a vector space over the same field F, then dim(F) = 1 and {1} is a basis of F, where 1 is the unity of the field F.
Dimension of Subspace.
Theorem 2.5.9. Let W be a subspace of a finite dimensional vector space V over a field F, then W is finite dimensional and dim(W) ≤ dim(V). Moreover, if dim(W) = dim(V), then V = W.
Theorem 2.5.10. If a basis of a vector space V over a field F contains n elements, then
(i) A subset W of V having more than n elements is L.D.
(ii) A L.I. subset of V cannot have more than n elements.
(iii) A subset W of V which generates V must have at least n elements.
Theorem 2.5.11. If W1 and W2 are two finite dimensional subspaces of a vector space V over a field F, then W1 + W2 is a finite dimensional subspace and dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).
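Theorem 2.5.11 can be illustrated numerically. In the sketch below (an illustrative aside), W1 and W2 are taken to be the column spans of two sample matrices in R4; the matrices, the helper names, and the assumption that both matrices have full column rank are all choices made only for this example. The intersection dimension is computed from the null space of [A | −B], since solutions of Ax = By describe exactly the vectors common to both spans.

import numpy as np

def dim_span(M):
    """Dimension of the column span of M."""
    return np.linalg.matrix_rank(M)

def dim_intersection(A, B):
    """dim(col A ∩ col B), assuming A and B have full column rank:
    solutions of A x = B y correspond to the null space of [A | -B]."""
    C = np.hstack([A, -B])
    return C.shape[1] - np.linalg.matrix_rank(C)

# Two subspaces of R^4 given by column spans (sample data, assumed for illustration).
A = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)   # W1 = span{e1, e2}
B = np.array([[0, 0], [1, 0], [0, 1], [0, 0]], dtype=float)   # W2 = span{e2, e3}

lhs = dim_span(np.hstack([A, B]))                       # dim(W1 + W2)
rhs = dim_span(A) + dim_span(B) - dim_intersection(A, B)
print(lhs, rhs)                                          # 3 3
assert lhs == rhs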
Definition 2.5.12. (Quotient space) Let V be a vector space over a field F and W be a subspace of V, then the set
V/W = {x + W | x ∈ V}
of all cosets of W in V is a vector space over the field F with vector addition and scalar multiplication defined as:
(x + W ) + (y + W ) = (x + y) + W ∀ x, y ∈ V and
α(x + W ) = αx + W ∀ α ∈ F, x ∈ V .
This vector space V/W over the field F is called the quotient space of V by W.
Chapter 3
Linear Transformation
3.1 Linear Transformation and Linear Operator
Definition 3.1.1. If V and W are two vector spaces over the same field F, then a mapping
T : V → W
is called a linear transformation (L.T.) if
(i) T(x + y) = T(x) + T(y) ∀ x, y ∈ V, and
(ii) T(αx) = αT(x) ∀ α ∈ F, x ∈ V.
If W = V, then T is called a linear operator on V.
Remark 3.1.2. If V and W are two vector spaces over the same field F, then a mapping T : V → W is a linear transformation iff T(αv + βw) = αT(v) + βT(w) ∀ v, w ∈ V and α, β ∈ F.
Example. The following mappings are linear transformations:
(i) T : R2 → R2 defined by T(a1, a2) = (2a1 + a2, a1), a1, a2 ∈ R,
(ii) T : R3 → R2 defined by T(a, b, c) = (a − b, a + c), a, b, c ∈ R.
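Both maps above are given by matrices, and their linearity can be spot-checked numerically against the condition of Remark 3.1.2 on random vectors. The following sketch is an illustrative aside (the use of numpy and the function names T1, T2 are assumptions of the example).

import numpy as np

def T1(a):
    """T(a1, a2) = (2*a1 + a2, a1)."""
    return np.array([2 * a[0] + a[1], a[0]])

def T2(a):
    """T(a, b, c) = (a - b, a + c)."""
    return np.array([a[0] - a[1], a[0] + a[2]])

rng = np.random.default_rng(0)
for T, n in [(T1, 2), (T2, 3)]:
    v, w = rng.standard_normal(n), rng.standard_normal(n)
    alpha, beta = rng.standard_normal(2)
    # T(alpha*v + beta*w) = alpha*T(v) + beta*T(w), cf. Remark 3.1.2.
    assert np.allclose(T(alpha * v + beta * w), alpha * T(v) + beta * T(w))
print("linearity condition holds on random samples")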
Since T is onto, for any v ∈ V there exists v′ ∈ V such that
T(v′) = v
Again, v′ ∈ V means v′ = Σ αi vi, αi ∈ F
∴ v = T(v′) = T(Σ αi vi) = Σ αi T(vi)
⇒ T(v1), T(v2), ..., T(vn) span V
and as dim V = n, {T(v1), ..., T(vn)} forms a basis of V.
Now if v ∈ ker T is any element, then T(v) = 0
⇒ T(Σ αi vi) = 0
⇒ Σ αi T(vi) = 0
⇒ αi = 0 for all i, as T(v1), ..., T(vn) are L.I.
⇒ v = Σ αi vi = 0
⇒ ker T = {0} ⇒ T is 1-1.
Theorem 3.1.7. Let V and W be two vector spaces over a field F. Let {v1, v2, ..., vn} be a basis of V and w1, w2, ..., wn be any vectors in W, then there exists a unique L.T. T : V → W such that T(vi) = wi for i = 1, 2, ..., n.
Proof. Let v ∈ V, then v = Σ αi vi for unique scalars αi ∈ F, as {v1, v2, ..., vn} is a basis of V.
Define T : V → W such that
T(v) = Σ αi wi
Then T is a L.T. with T(vi) = wi for each i. For uniqueness, let T′ : V → W be any L.T. with T′(vi) = wi for each i, then
T′(v) = T′(Σ αi vi) = Σ αi T′(vi) = Σ αi wi = T(v)
Hence T ′ = T
Thus, we notice that a linear transformation is completely determined by its values on
the elements of a basis.
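This observation is exactly how a linear transformation is represented concretely: once the images wi = T(vi) of a basis are prescribed, T(Σ αi vi) = Σ αi wi determines T everywhere. The sketch below builds such a map T : R2 → R3 from prescribed basis images; the basis, the images and the variable names are sample choices made only for this illustration.

import numpy as np

# A basis {v1, v2} of R^2 and prescribed images {w1, w2} in R^3 (sample data).
V = np.array([[1.0, 1.0],
              [1.0, -1.0]]).T           # columns v1 = (1, 1), v2 = (1, -1)
W = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]]).T       # columns w1, w2 = T(v1), T(v2)

def T(x):
    """Apply the unique linear map with T(v_i) = w_i:
    write x = alpha1*v1 + alpha2*v2, then T(x) = alpha1*w1 + alpha2*w2."""
    alphas = np.linalg.solve(V, x)
    return W @ alphas

print(T(np.array([1.0, 1.0])))          # equals w1 = (1, 0, 2)
print(T(np.array([2.0, 0.0])))          # 2*(1,0) = v1 + v2, so T gives w1 + w2 = (1, 3, 3)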
3.2 Rank-Nullity Theorem
Definition 3.2.1. Let T : V → W be a L.T., where V and W are vector spaces over a field F, then the range of T is the set of all vectors y in W such that y = T(x) for some x ∈ V, i.e.
Range(T) = {y ∈ W | y = T(x) for some x ∈ V}.
Definition 3.2.2. Let T : V → W be a L.T., where V and W are vector spaces over a field F, then the set of all vectors x in V such that T(x) = 0 is called the kernel or null space of T. It is denoted by N(T), i.e.
N(T) = {x ∈ V | T(x) = 0W}.
Theorem 3.2.3. Let T : V → W be a L.T., where V and W are vector spaces over a field F, then Range(T) is a subspace of W and N(T) is a subspace of V.
Definition 3.2.4. Let T : V → W be a L.T., where V and W are vector spaces over a field F, then the dimension of the range of T is called the rank of T, denoted by rank(T), and the dimension of the kernel or null space of T is called the nullity of T, denoted by nullity(T).
Theorem. (Rank-Nullity Theorem) Let T : V → W be a L.T., where V and W are vector spaces over a field F. If V is finite dimensional, then rank(T) + nullity(T) = dim(V).
(ii) ⇒ (i)
Let x ∈ Range T ∩ Ker T
⇒ x ∈ Range T and x ∈ Ker T
⇒ x = T (v) for some v ∈ V and
T (x) = 0
Now, x = T (v)
⇒ T (x) = T (T (v))
⇒ 0 = T (T (v))
⇒ T (v) = 0
⇒x=0
3.3 Algebra of Linear Transformation
Definition 3.3.1. Let V and W be two vector spaces over the same field F. Let T : V → W and S : V → W be two linear transformations, then the function T + S : V → W defined by
(T + S)(x) = T(x) + S(x) ∀ x ∈ V
is a linear transformation, called the sum of T and S.
Similarly, for α ∈ F, the function αT : V → W defined by (αT)(x) = αT(x) ∀ x ∈ V is a linear transformation, called the product of a linear transformation with a scalar.
Definition 3.3.2. Let U, V and W be three vector spaces over the same field F. Let T : U → V and S : V → W be two linear transformations, then the function ST : U → W defined by
(ST)(x) = S(T(x)) ∀ x ∈ U
is a linear transformation, called the product (composition) of S and T.
Theorem 3.3.3. Let T, T1, T2, T3 be linear operators on a vector space V over a field F, and let I : V → V be the identity operator on V, then
(i) IT = TI = T
(ii) T(T1 + T2) = TT1 + TT2 and (T1 + T2)T = T1T + T2T
(iii) α(T1T2) = (αT1)T2 = T1(αT2) ∀ α ∈ F
(iv) T1(T2T3) = (T1T2)T3
= (T T1 + T T2 )x
⇒ T (T1 + T2 ) = T T1 + T T2
Example 3.3.4. Find the range, rank, kernel and nullity of the linear transformation
T : R3 → R3
such that
T(x, y, z) = (x + z, x + y + 2z, 2x + y + 3z).
For Ker T, let
T(x, y, z) = (0, 0, 0)
⇒ x + z = 0
x + y + 2z = 0
2x + y + 3z = 0
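Since T is given by the matrix [[1, 0, 1], [1, 1, 2], [2, 1, 3]] acting on column vectors, its rank, nullity and kernel can also be computed numerically. The sketch below (illustrative only; it uses numpy's SVD to extract the null space) should report rank 2 and nullity 1, with the kernel spanned by a multiple of (1, 1, −1), in agreement with the Rank-Nullity theorem.

import numpy as np

# Matrix of T(x, y, z) = (x + z, x + y + 2z, 2x + y + 3z) w.r.t. the standard basis.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print("rank =", rank, "nullity =", nullity)      # rank = 2, nullity = 1

# Kernel: right singular vectors belonging to (near-)zero singular values.
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[s.size - nullity:]             # here: one vector, a multiple of (1, 1, -1)
print("kernel spanned by", kernel_basis)

assert rank + nullity == A.shape[1]              # Rank-Nullity: dim V = 3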
Problem. Give an example of two distinct linear maps T and U such that Ker T = Ker U and Range T = Range U.
⇒ y = 0
⇒ Ker U = {(x, 0) | x ∈ R}
so Ker T = Ker U.
Also, (x, y) ∈ Range T ⇒ (x, y) = T(α, β) = (β, β)
⇒ x = y
⇒ Range T = {(x, x) | x ∈ R}
3.4 Singular and Non-Singular Transformations
Definition 3.4.1. Let V and W be two vector spaces over the same field F. A linear transformation T : V → W is called invertible if T is one-one and onto.
Theorem 3.4.2. Let V and W be two vector spaces over the same field F. Let T : V → W be an invertible linear transformation, then T−1 : W → V is a linear transformation.
Proof. A map T : V → W is invertible iff it is 1-1 and onto, and the inverse of T is the map T−1 : W → V such that
T−1(y) = x ⇔ T(x) = y.
We show that the inverse of a L.T. is also a L.T.
Let T : V → W be a 1-1, onto L.T. and T−1 : W → V be its inverse.
We have to prove
T−1(αw1 + βw2) = αT−1(w1) + βT−1(w2) ∀ α, β ∈ F, w1, w2 ∈ W.
Since T is onto, for w1, w2 ∈ W, ∃ v1, v2 ∈ V such that T(v1) = w1, T(v2) = w2
⇔ v1 = T−1(w1), v2 = T−1(w2)
Now, T−1(αw1 + βw2) = T−1(αT(v1) + βT(v2))
= T−1(T(αv1) + T(βv2))
= T−1(T(αv1 + βv2)) [∵ T is linear]
= αv1 + βv2
= αT−1(w1) + βT−1(w2).
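For finite dimensional spaces this theorem has a concrete matrix counterpart: if T is represented by an invertible matrix A, then T−1 is represented by A−1, which is again a linear map. The following sketch is illustrative only, with an arbitrarily chosen invertible matrix.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])              # invertible: det = 1
A_inv = np.linalg.inv(A)

T     = lambda x: A @ x                 # the linear transformation
T_inv = lambda y: A_inv @ y             # its inverse

rng = np.random.default_rng(1)
w1, w2 = rng.standard_normal(2), rng.standard_normal(2)
alpha, beta = 3.0, -1.5

# T_inv is again linear: T_inv(alpha*w1 + beta*w2) = alpha*T_inv(w1) + beta*T_inv(w2).
assert np.allclose(T_inv(alpha * w1 + beta * w2),
                   alpha * T_inv(w1) + beta * T_inv(w2))
# And T_inv really inverts T.
assert np.allclose(T_inv(T(w1)), w1)
print("inverse map is linear and inverts T on these samples")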
Theorem 3.4.5. Let T : V → W be a L.T., where V and W are two finite dimensional vector spaces over a field F with dim V = dim W, then the following are equivalent:
(i) T is invertible
(ii) T is non-singular
(iii) Range T = W
(iv) T maps any basis of V onto a basis of W.
Proof. (ii) ⇒ (iii): T is non-singular
⇒ Ker T = {0}
⇒ dim Ker T = 0
⇒ dim Range T = dim V − dim Ker T = dim V = dim W [by the Rank-Nullity theorem]
⇒ Range T = W
dim Ker T = 0
⇒ Ker T = {0}
i.e., T is an isomorphism.
(iv) ⇒ (i):
Let {T(v1), ..., T(vn)} be a basis of W, where {v1, ..., vn} is a basis of V. Any w ∈ W can be written as
Chapter 4
Modules
4.1 Modules
Definition 4.1.1. Let R be a ring with unity and let (M, +) be an abelian group, then M together with a mapping
f : R × M → M
such that f(a, x) = ax for all a ∈ R and x ∈ M is said to be a left R-module if the following properties hold:
(i) a(x + y) = ax + ay ∀ a ∈ R and x, y ∈ M
(ii) (a + b)x = ax + bx ∀ a, b ∈ R and x ∈ M
(iii) (ab)x = a(bx) ∀ a, b ∈ R and x ∈ M
(iv) 1·x = x ∀ x ∈ M, where 1 is the unity of R.
Definition 4.1.2. Let R be a ring with unity and let (M, +) be an abelian group, then M together with a mapping
f : M × R → M
such that f(x, a) = xa for all x ∈ M and a ∈ R is said to be a right R-module if the analogous properties hold with the scalars written on the right.
Note. The mapping f : R × M → M defined as
f(a, x) = ax ∀ a ∈ R and x ∈ M,
or the mapping f : M × R → M defined as
f(x, a) = xa ∀ x ∈ M and a ∈ R,
is known as scalar multiplication (or external composition).
Example 4.1.3. 1. Every vector space over a field F is an F-module, i.e. a module over the field F.
2. Every abelian group G is a Z-module.
Exercise. Let R be a ring with unity and let S be a non-empty set. Let M = {f | f : S → R} be the set of all mappings from S to R. We define addition on M as
(f + g)(x) = f(x) + g(x) ∀ x ∈ S, and scalar multiplication on M as
(αf)(x) = α f(x) ∀ x ∈ S and α ∈ R; then show that M is a left R-module.
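The pointwise operations in this exercise are easy to model concretely. The sketch below is an illustrative aside under the assumptions that S is a small finite set and R = Z, so that elements of M are just dictionaries of integers; the names add and smul are inventions of the example. It checks a few of the left R-module identities on sample data.

# Illustrative model of M = {f : S -> R} with S a finite set and R = Z (assumptions).
S = ["s1", "s2", "s3"]

def add(f, g):
    """(f + g)(x) = f(x) + g(x)."""
    return {x: f[x] + g[x] for x in S}

def smul(a, f):
    """(a f)(x) = a * f(x), the left R-action."""
    return {x: a * f[x] for x in S}

f = {"s1": 1, "s2": -2, "s3": 0}
g = {"s1": 4, "s2": 5, "s3": -3}
a, b = 2, -7

# A few left R-module axioms, checked pointwise.
assert smul(a, add(f, g)) == add(smul(a, f), smul(a, g))   # a(f + g) = af + ag
assert smul(a + b, f) == add(smul(a, f), smul(b, f))       # (a + b)f = af + bf
assert smul(a * b, f) == smul(a, smul(b, f))               # (ab)f = a(bf)
assert smul(1, f) == f                                     # 1·f = f
print("module identities hold for these samples")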
(ii) Associativity: Let f, g, h ∈ M and x ∈ S, then
[f + (g + h)](x) = f(x) + (g + h)(x) = f(x) + [g(x) + h(x)]
= [f(x) + g(x)] + h(x)
= (f + g)(x) + h(x)
= [(f + g) + h](x)
∴ f + (g + h) = (f + g) + h
(iii) Existence of identity: Let O : S → R defined by O(x) = 0 ∀ x ∈ S be the zero map, then O ∈ M. Let x ∈ S, then
(O + f)(x) = O(x) + f(x) = 0 + f(x) = f(x)
⇒ f + O = f = O + f
∴ O is the identity of M.
(iv) Existence of inverse: Let f ∈ M and define −f : S → R by (−f)(x) = −f(x) ∀ x ∈ S,
then −f ∈ M and
[f + (−f)](x) = f(x) + (−f)(x) = f(x) − f(x) = 0 = O(x)
⇒ f + (−f) = O = (−f) + f
⇒ −f is the inverse of f in M.
(v) Commutative property: Let f, g ∈ M and let x ∈ S, then
(f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x)
i.e. (f + g)(x) = (g + f)(x) ∀ x ∈ S
∴ f + g = g + f
4.2 Submodules
i.e. (f − g)(x) = 0 ∀ x ∈ S
⇒ f − g ∈ N
∴ N is a subgroup of M.
Similarly, (αf)(x) = 0 ∀ x ∈ S
⇒ αf ∈ N
Hence, N is a submodule of M.
Exercise. Show that the union of two submodules of an R-module need not be a submodule.
For example, take M = Z as a module over itself, with submodules N1 = 2Z and N2 = 3Z. Then
2, 3 ∈ N1 ∪ N2 but 2 − 3 = −1 ∉ N1 ∪ N2,
∴ N1 ∪ N2 is not a subgroup of M, and hence not a submodule of M.
i.e. x ∈ N
∴ N ≠ ϕ
(ii) N is a subgroup of M:
Let y = α1 x, z = α2 x ∈ N for some α1, α2 ∈ R.
Now, y − z = α1 x − α2 x = (α1 − α2)x ∈ N
∴ N is a subgroup of M.
(iii) Let β ∈ R and y = α1 x ∈ N, then βy = β(α1 x) = (βα1)x ∈ N,
i.e. βy ∈ N ∀ y ∈ N, β ∈ R.
Hence, N is a submodule of M.
4.3 Linear Combination and Linear Span
Definition 4.3.2. Sum of submodules. Let M be an R-module and let N1, N2 be two submodules of M, then the sum of N1 and N2, denoted by N1 + N2, is defined as
N1 + N2 = {x + y | x ∈ N1 and y ∈ N2 }
i.e., the set of all linear combinations of elements of S is called the linear span or spanning set of S, denoted by L(S).
Theorem. Let M be an R-module and let S be a non-empty subset of M, then L(S) is the smallest submodule of M containing S.
Proof. Since L(S) = { Σ_{i=1}^{n} αi xi | αi ∈ R and xi ∈ S }.
(i) L(S) ≠ ϕ:
Since S ≠ ϕ, we let x ∈ S, then 0 = 0·x ∈ L(S),
i.e. 0 ∈ L(S)
∴ L(S) ≠ ϕ
(ii) Let y = Σ_{i=1}^{n} αi xi and z = Σ_{i=1}^{n} βi xi be any two elements of L(S).
Now y − z = Σ_{i=1}^{n} αi xi − Σ_{i=1}^{n} βi xi
= Σ_{i=1}^{n} (αi − βi) xi ∈ L(S) (∵ αi − βi ∈ R ∀ αi, βi ∈ R)
∴ L(S) is a subgroup of M.
(iii) Let α ∈ R and y = Σ_{i=1}^{n} αi xi ∈ L(S).
Now αy = α Σ_{i=1}^{n} (αi xi) = Σ_{i=1}^{n} (ααi) xi ∈ L(S) (∵ α, αi ∈ R ⇒ ααi ∈ R)
Thus, L(S) is a submodule of M.
(iv) S ⊆ L(S): Let x ∈ S, then
x = 1·x ∈ L(S) (∵ 1 ∈ R)
i.e., x ∈ L(S).
(v) Let N be any submodule of M containing S and let y = Σ_{i=1}^{n} αi xi ∈ L(S), xi ∈ S.
Now, xi ∈ S
∴ xi ∈ N (∵ S ⊆ N)
⇒ αi xi ∈ N ∀ i = 1, 2, ..., n
⇒ Σ_{i=1}^{n} αi xi ∈ N
⇒ y ∈ N
∴ L(S) ⊆ N.
Hence L(S) is the smallest submodule of M containing S.
4.4 Direct Sum of Submodules
Definition 4.4.2. Direct Sum. Let M be an R-module and let M1, M2, ..., Mn be submodules of M, then M is said to be a direct sum of M1, M2, ..., Mn if
(i) M = M1 + M2 + ... + Mn; and
(ii) Mi ∩ (Σ_{j≠i} Mj) = {0} for each i = 1, 2, ..., n.
In this case we write M = M1 ⊕ M2 ⊕ ... ⊕ Mn or M = ⊕_{i=1}^{n} Mi.
Theorem 4.4.3. Let M1, M2, ..., Mn be submodules of an R-module M, then M = ⊕_{i=1}^{n} Mi iff every x ∈ M can be uniquely written as x = x1 + x2 + ... + xn, where xi ∈ Mi for i = 1, 2, ..., n.
Proof. First suppose M = ⊕_{i=1}^{n} Mi and let x ∈ M have two representations with xi, yi ∈ Mi. Now, we have
x1 + x2 + ... + xn = y1 + y2 + ... + yn
i.e. xi − yi = (y1 − x1) + ... + (yi−1 − xi−1) + (yi+1 − xi+1) + ... + (yn − xn) ∈ Mi ∩ (Σ_{j≠i} Mj) = {0}
⇒ xi − yi = 0 ∀ i = 1, 2, ..., n
Hence every x ∈ M can be uniquely written as
x = x1 + x2 + ... + xn, xi ∈ Mi ∀ i = 1, 2, ..., n.
Conversely, suppose every x ∈ M can be uniquely written as
x = x1 + x2 + ... + xn; xi ∈ Mi ∀ i = 1, 2, ..., n.
Then clearly
M = M1 + M2 + ... + Mn.
To show that M = ⊕_{i=1}^{n} Mi, it is enough to show that
Mi ∩ (Σ_{j≠i} Mj) = {0} ∀ i = 1, 2, ..., n.
For, let x ∈ Mi ∩ (Σ_{j≠i} Mj), then x ∈ Mi and also x = Σ_{j≠i} xj with xj ∈ Mj; by the uniqueness of the representation of x we get
x = 0
∴ Mi ∩ (Σ_{j≠i} Mj) = {0}
Hence, M = ⊕_{i=1}^{n} Mi.
Chapter 5
Module Homomorphisms
5.1 Module Homomorphism
Definition. Monomorphism. Let M and N be two R-modules, then the mapping
f : M → N is said to be a monomorphism if
(i) f is a homomorphism
(ii) f is one-one.
Definition. Epimorphism. Let M and N be two R-modules, then the mapping
f : M → N is said to be an epimorphism if
(i) f is a homomorphism
(ii) f is onto.
Definition 5.1.4. Isomorphism. Let M and N be two R-modules, then the mapping
f : M → N is said to be an isomorphism if
(i) f is a homomorphism
(ii) f is one-one
(iii) f is onto.
Note. Let M and N be two R-modules, then we say that M is isomorphic to N if there exists an isomorphism f : M → N. We write M ≈ N if M is isomorphic to N.
Theorem. If f : M → N and g : N → L are R-module homomorphisms, then
g∘f : M → L is a homomorphism.
Proof. Let x, y ∈ M and α ∈ R, then
(g∘f)(x + y) = g[f(x + y)] = g[f(x) + f(y)] [∵ f is a homomorphism]
= g[f(x)] + g[f(y)] [∵ g is a homomorphism]
= (g∘f)(x) + (g∘f)(y).
(g∘f)(αx) = g[f(αx)]
= g[αf(x)] as f is a homomorphism
= α[g(f(x))] as g is a homomorphism
= α(g∘f)(x).
Hence g∘f is an R-module homomorphism.
Example. For a fixed x ∈ M, define ϕx : R → M by ϕx(a) = ax ∀ a ∈ R; then ϕx is an R-module homomorphism, for
ϕx(a + b) = (a + b)x = ax + bx = ϕx(a) + ϕx(b), and
ϕx(αa) = (αa)x = α(ax) = αϕx(a).
Definition. Kernel. Let f : M → N be an R-module homomorphism, then the kernel of f is defined as
ker f = {x ∈ M | f(x) = ON},
where ON is the zero element of N.
Let x1, x2 ∈ ker f and α ∈ R, then f(x1) = ON = f(x2). Now
f(x1 − x2) = f(x1) − f(x2) [∵ f is a module homomorphism]
= ON − ON
= ON
⇒ x1 − x2 ∈ ker f
Also, f(αx1) = αf(x1) = α·ON = ON,
i.e., f(αx1) = ON
∴ αx1 ∈ ker f
Hence, ker f is a submodule of M.
5.2 Quotient Modules
Definition. Quotient Module. Let M be an R-module and let N be a submodule of M, then the set
M/N = {x + N | x ∈ M}
of all cosets of N in M is an R-module, called the quotient module of M by N, with the operations
(x + N) + (y + N) = (x + y) + N and
α(x + N) = αx + N ∀ α ∈ R and x + N, y + N ∈ M/N.
Theorem 5.2.3. If M/N is a quotient module of an R-module M, then the natural map p : M → M/N is an R-module epimorphism.
Proof. Define p : M → M/N by
p(x) = x + N ∀ x ∈ M.
p is a homomorphism:
Let α ∈ R and let x, y ∈ M, then
p(x + y) = (x + y) + N = (x + N) + (y + N) = p(x) + p(y), and
p(αx) = αx + N = α(x + N) = αp(x)
∴ p is an R-module homomorphism.
p is onto:
Let Z ∈ M/N, then
Z = x + N for some x ∈ M
and p ( x ) = x + N = Z
p(x)=Z
∴ p is onto
⇒ x + N, y + N ∈ K/N,
and since K/N is a submodule of M/N, we have
(x + N) − (y + N) ∈ K/N and α(x + N) ∈ K/N
⇒ (x − y) + N ∈ K/N and αx + N ∈ K/N
⇒ x − y ∈ K and αx ∈ K
∴ K is a submodule of M.
⇒ x ∈ K, so N is contained in K.
Conversely, suppose K is a submodule of M containing N. Let x + N, y + N ∈ K/N and α ∈ R, then x, y ∈ K.
Since K is a submodule of M,
⇒ x − y ∈ K and αx ∈ K
⇒ (x − y) + N ∈ K/N and αx + N ∈ K/N
∴ K/N is a submodule of M/N.