TEXT BOOK on
Vector Spaces and Matrices
(For B.A. and B.Sc. IVth Semester students of Kumaun University)

Dedicated to Lord Krishna
Preface
This book on Vector Spaces and Matrices has been specially written according to
the latest Syllabus to meet the requirements of B.A. and B.Sc. Semester-IV Students
of all colleges affiliated to Kumaun University.
The subject matter has been discussed in such a simple way that the students will find
no difficulty in understanding it. The proofs of various theorems and examples have been
given with minute details. Each chapter of this book contains complete theory and a fairly
large number of solved examples. Sufficient problems have also been selected from various
university examination papers. At the end of each chapter an exercise containing objective
questions has been given.
We have tried our best to keep the book free from misprints. The authors shall be
grateful to the readers who point out errors and omissions which, in spite of all care,
might have crept in.
The authors hope that the present book will be warmly received by the students and
teachers. We shall indeed be very thankful to our colleagues for recommending this
book to their students.
The authors wish to express their thanks to Mr. S.K. Rastogi, M.D., Mr. Sugam Rastogi,
Executive Director, Mrs. Kanupriya Rastogi, Director and entire team of KRISHNA
Prakashan Media (P) Ltd., Meerut for bringing out this book in the present nice form.
The authors will feel amply rewarded if the book serves the purpose for which it is
meant. Suggestions for the improvement of the book are always welcome.
— Authors
Syllabus
Vector spaces: Vector space, subspaces, linear combinations, linear spans, sums and
direct sums.
Bases and Dimensions: Linear dependence and independence, Bases and dimensions,
Dimensions and subspaces, Coordinates and change of bases.
Matrices: Idempotent, nilpotent, involutory, orthogonal and unitary matrices, singular and
nonsingular matrices, negative integral powers of a nonsingular matrix; Trace of a matrix.
Rank of a matrix: Rank of a matrix, linear dependence of rows and columns of a matrix,
row rank, column rank, equivalence of row rank and column rank, elementary
transformations of a matrix and invariance of rank through elementary transformations,
normal form of a matrix, elementary matrices, rank of the sum and product of two matrices,
inverse of a non-singular matrix through elementary row transformations; equivalence of
matrices.
Contents
Chapter 1: Vector Spaces
Chapter 2: Linear Dependence of Vectors
Chapter 3: Basis and Dimensions
Chapter 5: Matrices
Chapter 6: Rank of a Matrix
Chapter 7: Applications of Matrices
1
Vector Spaces
So far we have studied groups and rings. Now we shall study another important
algebraic structure known as vector space. Before giving the definition of a vector
space we shall make a distinction between internal and external compositions.
Let A be any set. If a * b ∈ A ∀ a, b ∈ A, and a * b is unique, then * is said to be an
internal composition in the set A. Here a and b are both elements of the set A.
Let V and F be any two sets. If a ∘ α ∈ V for all a ∈ F and for all α ∈ V and a ∘ α is
unique, then ∘ is said to be an external composition in V over F. Here a is an element
of the set F and α is an element of the set V and the resulting element a ∘ α is an
element of the set V.
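These two notions translate directly into code. The following Python sketch is only an illustration (the sets and operations chosen here are our own assumptions, not from the text): an internal composition maps A × A into A, while an external composition maps F × V into V.

# Illustrative sketch: an internal composition maps A x A -> A,
# while an external composition maps F x V -> V.

# Internal composition on A = the integers: addition stays inside A.
def internal(a, b):
    return a + b                        # a, b in A  =>  internal(a, b) in A

# External composition of F = the reals acting on V = pairs of reals.
def external(scalar, vec):
    return (scalar * vec[0], scalar * vec[1])   # the result lies in V again

print(internal(2, 3))                   # 5, an element of A
print(external(2.5, (1.0, -4.0)))       # (2.5, -10.0), an element of V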
1 Vector Space
(Kumaun 2002; Garhwal 10)
Definition: Let ( F , + , .) be a field. The elements of F will be called scalars. Let V be a
non-empty set whose elements will be called vectors. Then V is a vector space over the field F, if
(i) there is an internal composition + in V, called addition of vectors, for which (V , +) is an abelian group;
(ii) there is an external composition in V over F, called scalar multiplication and written aα, so that aα ∈ V ∀ a ∈ F and ∀ α ∈ V ;
(iii) a (α + β) = aα + aβ ∀ a ∈ F and ∀ α, β ∈ V ;
(iv) (a + b) α = aα + bα ∀ a, b ∈ F and ∀ α ∈ V ;
(v) (ab) α = a (bα) ∀ a, b ∈ F and ∀ α ∈ V ;
(vi) 1α = α ∀ α ∈ V , where 1 is the unity element of the field F.
Note 3: There should also be no confusion about the use of the word vector. Here by
vector we do not mean the vector quantity which we have defined in vector algebra as
a directed line segment. Here we shall call the elements of the set V as vectors.
Note 4: In a vector space we shall be dealing with two types of zero elements. One is
the zero vector and the other is the zero element of the field F i.e., the 0 scalar. To
distinguish between the two, we shall write the zero vector as 0 in bold type. However
the students may use the same symbol 0 to denote the zero vector as well as the zero
scalar. There will be no confusion in this use. The context will itself tell whether 0
stands for the zero vector or for the zero scalar.
Note 5: We shall usually use the lower case Greek letters α, β, γ etc. to denote vectors
i.e., the elements of V and the lower case Latin letters, a, b, c etc. to denote the scalars
i.e., the elements of the field F.
Example 2: The set V of all m × n matrices with their elements as real numbers is a vector space
over the field F of real numbers with respect to addition of matrices as addition of vectors and
multiplication of a matrix by a scalar as scalar multiplication. (Garhwal 2007)
Solution: As in groups, we can easily prove that V is an abelian group with respect to
addition of matrices. The null matrix O of the type m × n is the additive identity of this
abelian group.
If a ∈ F and α ∈V (i. e., α is a matrix of the type m × n with elements as real numbers),
then aα ∈ V because aα is also a matrix of the type m × n with elements as real
numbers. Therefore V is closed with respect to scalar multiplication. Also from our
study of matrices we observe that
(i) a (α + β) = aα + aβ ∀ a ∈ F and ∀ α, β ∈ V .
(ii) (a + b) α = aα + bα ∀ a, b ∈ F and ∀ α ∈ V .
(iii) (ab) α = a (bα) ∀ a, b ∈ F and ∀ α ∈ V .
(iv) 1α = α ∀ α ∈ V where 1 is the unity element of the field F of real numbers.
Hence V ( F ) is a vector space.
Note: If V is the set of all m × n matrices with their elements as rational numbers and
F is the field of real numbers, then V will not be closed with respect to scalar
multiplication. For √ 7 ∈ F and if α ∈V, then √ 7α ∉V because the elements of the
matrix √ 7α will not be rational numbers. Therefore V ( F ) will not be a vector space.
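The failure of closure described in this note can be seen numerically. The sketch below is illustrative (the particular matrix is our own assumption): scaling a matrix of rationals by the real scalar √7 produces entries that are no longer rational.

# Illustrative sketch: scaling a rational matrix by an irrational real
# scalar leaves the set of matrices with rational entries.
from fractions import Fraction
import math

alpha = [[Fraction(1, 2), Fraction(3, 1)]]   # a 1 x 2 matrix over Q
scalar = math.sqrt(7)                        # sqrt(7) is in R but not in Q

scaled = [[scalar * entry for entry in row] for row in alpha]
print(scaled)   # float entries approximating irrational numbers:
                # the product has left the set of rational matrices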
Example 3: The vector space of all ordered n-tuples over a field F.
Solution: Let F be a field. An ordered set α = (a1, a2 , a3 , …, an) of n elements of F is
called an n-tuple over F. Let V be the totality of all ordered n-tuples over F i.e., let
V = {(a1, a2 , …, an) : a1, a2 , a3 , … , an ∈ F }.
Now we shall give a vector space structure to V over the field F. For this we define
equality of two n-tuples, addition of two n-tuples and multiplication of an n-tuple by a
scalar as follows :
Equality of two n-tuples. Two elements
α = (a1, a2 , …, an)
and β = (b1, b2 , …, bn)
of V are said to be equal if and only if ai = bi for each i = 1, 2 , …, n.
Addition composition in V. We define
α + β = (a1 + b1, a2 + b2 , … , an + bn)
∀ α = (a1, a2 , … , an), β = (b1, b2 , …, bn) ∈ V .
Since a1 + b1, a2 + b2 , … , an + bn are all elements of F, therefore α + β ∈V and thus V is
closed with respect to addition of n-tuples.
Scalar multiplication in V over F. We define
a α = (aa1, aa2 , …, aan) ∀ a ∈ F , ∀ α = (a1, a2 , …, an) ∈ V .
Since aa1, aa2 , … , aan are all elements of F, therefore a α ∈ V and thus V is closed with
respect to scalar multiplication.
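Before verifying the remaining axioms, note that these two compositions translate directly into code. The sketch below is illustrative, taking F to be the field of rational numbers:

# Sketch of the two compositions on V = F^n, with F = Q (Python Fractions).
from fractions import Fraction

def add(alpha, beta):
    # componentwise addition: (a1 + b1, a2 + b2, ..., an + bn)
    return tuple(a + b for a, b in zip(alpha, beta))

def scalar_mul(a, alpha):
    # scalar multiplication: (a*a1, a*a2, ..., a*an)
    return tuple(a * x for x in alpha)

alpha = (Fraction(1), Fraction(2), Fraction(3))
beta = (Fraction(4), Fraction(5), Fraction(6))
print(add(alpha, beta))                   # (5, 7, 9)
print(scalar_mul(Fraction(1, 2), alpha))  # (1/2, 1, 3/2)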
Now we shall see that V is a vector space for these two compositions.
Associativity of addition in V. We have
(a1, a2 , … , an) + [(b1, b2 , … , bn) + (c1, c2 , … , c n)]
= (a1, a2 , … , an) + (b1 + c1, b2 + c2 , … , bn + c n)
= (a1 + [b1 + c1], a2 + [b2 + c2 ], …, an + [bn + c n])
= f ( x) + [ g ( x) + h ( x)].
Existence of additive identity in F [ x]. Let 0 denote the zero polynomial over the
field F i.e., 0 = 0 + 0 x + 0 x² + 0 x³ + …
Then 0 ∈ F [ x] and 0 + f ( x) = f ( x).
∴ the zero polynomial 0 is the additive identity.
1. (V , +) is an abelian group.
(i) We have α + β = < α n > + < β n > = < α n + β n > which is also a convergent
sequence. Therefore V is closed for addition of sequences.
(ii) Commutativity of addition. We have
α + β = < αn > + < βn > = < αn + βn > = < βn + αn >
= < βn > + < αn > = β + α
(iii) Associativity of addition. We have
α + ( β + γ ) = < α n > + [< β n > + < γ n >] = < α n > + < β n + γ n >
= < α n + ( β n + γ n) > = < (α n + β n) + γ n > = < α n + β n > + < γ n >
= [< α n > + < β n >] + < γ n > = (α + β) + γ .
(iv) Existence of additive identity. The zero sequence < 0 > = < 0 , 0 , 0 , …> is the
additive identity.
(v) Existence of additive inverse. For every sequence < α n > there exists a
sequence < − α n > such that
< α n > + < − α n > = < α n − α n > = < 0 > = the additive identity.
∴ (V , +) is an abelian group.
2. V is closed for scalar multiplication.
Let a be any scalar i. e., a be any real number. Then
aα = a < α n > = < aα n > which is also a convergent sequence because
lim aα n = a lim α n, the limits being taken as n → ∞.
Solution: If any of the postulates of a vector space is not satisfied, then V will not be
a vector space. We shall show that for the operation of addition of vectors as defined
in this problem the identity element does not exist. Suppose the ordered pair ( x1, y1) is
to be the identity element for the operation of addition of vectors. Then we must have
( x, y) + ( x1, y1) = ( x, y) ∀ x, y ∈ R
⇒ ( x + x1, 0 ) = ( x, y) ∀ x, y ∈ R.
But if y ≠ 0, then we cannot have ( x + x1, 0 ) = ( x, y). Thus there exists no element
( x1, y1) of V such that
( x, y) + ( x1, y1) = ( x, y) ∀ ( x, y) ∈ V .
Therefore the identity element does not exist and V is not a vector space over the
field R.
Example 7: Let V be the set of all pairs ( x, y) of real numbers, and let F be the field of real
numbers. Define
( x, y) + ( x1, y1) = (3 y + 3 y1, − x − x1)
c ( x, y) = (3 cy, − cx).
Verify that V, with these operations, is not a vector space over the field of real numbers.
Example 8: How many elements are there in the vector space of polynomials of degree at most n
in which the coefficients are the elements of the field I ( p) over the field I ( p), p being a prime
number ?
Solution: The field I ( p) is the field ({0 , 1, 2 , … , p − 1}, +_p , ×_p ). A polynomial of
degree at most n over I ( p) is determined by its n + 1 coefficients a0 , a1, … , an , and each
coefficient can be chosen in p ways. Hence the vector space contains p^(n + 1) elements.
Comprehensive Exercise 1
Show that with these operations V is not a vector space over the field of real
numbers.
11. Let V be the set of ordered pairs ( x, y) of real numbers. Show that V is not a vector
space over R with addition in V and scalar multiplication on V defined by :
(i) ( x, y) + ( x1, y1) = ( x + x1, y + y1) and c ( x, y) = ( x, y) ;
(ii) ( x, y) + ( x1, y1) = ( x, y) and c ( x, y) = (cx, cy) ;
(iii) ( x, y) + ( x1, y1) = (0 , 0 ) and c ( x, y) = (cx, cy) ;
(iv) ( x, y) + ( x1, y1) = ( xx1, yy1) and c ( x, y) = (cx, cy).
12. Let V be the set of all pairs ( x, y) of real numbers and let F be the field of real
numbers. Define
( x, y) + ( x1, y1) = ( x + x1, y + y1)
c ( x, y) = (c x, 0 ).
Is V a vector space over the field of real numbers with these operations ?
13. Let R be the field of real numbers and let Pn be the set of all polynomials (of degree
at most n) over the field R. Prove that Pn is a vector space over the field R.
14. Let U and W be vector spaces over a field F. Let V be the set of ordered pairs i. e.,
V = {(u, w) : u ∈ U , w ∈ W }.
Show that V is a vector space over F with addition in V and scalar multiplication
on V defined by
(u, w) + (u1, w1) = (u + u1, w + w1)
and k (u, w) = (ku, kw) where u, u1 ∈ U and w, w1 ∈ W and k ∈ F .
(Kumaun 2003)
Answers 1
4. p^n
8. (i) not a vector space (ii) not a vector space
(iii) not a vector space (iv) not a vector space
12. not a vector space
3 Vector Subspaces
Definition: Let V be a vector space over the field F and W ⊆ V . Then W is called a subspace
of V if W itself is a vector space over F with respect to the operations of vector addition and scalar
multiplication in V. (Garhwal 2010)
Remark: Let V ( F ) be any vector space. Then V itself and the subset of V consisting
of zero vector only are always subspaces of V. These two are called improper
subspaces. If V has any other subspace, then it is called a proper subspace. The
subspace of V consisting of zero vector only is called the zero subspace.
Theorem 1: The necessary and sufficient condition for a non-empty subset W of a vector
space V (F ) to be a subspace of V is that W is closed under vector addition and scalar
multiplication in V. (Kumaun 2003, 10, 13)
Proof: If W itself is a vector space over F with respect to vector addition and scalar
multiplication in V, then W must be closed with respect to these two compositions.
Hence the condition is necessary.
The condition is sufficient. Now suppose that W is a non-empty subset of V and
W is closed under vector addition and scalar multiplication in V.
Let α ∈W. If 1 is the unity element of F, then − 1 ∈ F. Now W is closed under scalar
multiplication. Therefore
− 1 ∈ F, α ∈ W ⇒ (− 1) α ∈ W ⇒ − (1α) ∈ W ⇒ − α ∈ W .
[∵ α ∈ W ⇒ α ∈ V and 1 α = α in V .]
Thus the additive inverse of each element of W is also in W.
Now W is closed under vector addition.
Therefore α ∈W, − α ∈ W ⇒ α + (− α) ∈ W ⇒ 0 ∈W ,
where 0 is the zero vector of V.
Hence the zero vector of V is also the zero vector of W . Since the elements of W are
also the elements of V, therefore vector addition will be commutative as well as
associative in W. Hence W is an abelian group with respect to vector addition. Also it
is given that W is closed under scalar multiplication. The remaining postulates of a
vector space will hold in W since they hold in V of which W is a subset.
Hence W itself is a vector space for the two compositions.
∴ W is a subspace of V.
Theorem 2: The necessary and sufficient conditions for a non-empty subset W of a vector
space V ( F ) to be a subspace of V are
(i) α ∈W, β ∈ W ⇒ α − β ∈ W. (Garhwal 2013)
(ii) a ∈ F , α ∈ W ⇒ aα ∈ W.
Proof: The conditions are necessary. If W is a subspace of V, then W is an
abelian group with respect to vector addition. Therefore α ∈W, β ∈ W ⇒ α − β ∈ W.
Also W must be closed under scalar multiplication. Therefore the condition (ii) is also
necessary.
The conditions are sufficient: Now suppose W is a non-empty subset of V
satisfying the two given conditions. From condition (i), we have α ∈W,
α ∈W ⇒ α − α ∈W ⇒ 0 ∈W.
Thus the zero vector of V belongs to W and it will also be the zero vector of W.
Now 0 ∈W, α ∈ W ⇒ 0 − α ∈ W ⇒ − α ∈ W .
Thus the additive inverse of each element of W is also in W.
Again α ∈W, β ∈ W ⇒ α ∈ W, − β ∈ W
⇒ α − (− β) ∈ W ⇒ α + β ∈ W.
Thus W is closed with respect to vector addition.
Since the elements of W are also the elements of V, therefore vector addition will be
commutative as well as associative in W. Hence W is an abelian group under vector
addition. Also from condition (ii), W is closed under scalar multiplication. The
remaining postulates of a vector space will hold in W since they hold in V which is a
superset of W. Hence W is a subspace of V.
Theorem 3: The necessary and sufficient condition for a non-empty subset W of a vector
space V (F) to be a subspace of V is
a, b ∈ F and α, β ∈W ⇒ aα + b β ∈ W .
(Kumaun 2007,14 ; Garhwal 10 ; Gorakhpur 10)
Proof: The condition is necessary. If W is a subspace of V, then W must be
closed under scalar multiplication and vector addition.
Therefore a ∈ F , α ∈ W ⇒ aα ∈ W
and b ∈ F , β ∈ W ⇒ bβ ∈ W.
Now aα ∈ W , bβ ∈ W ⇒ aα + bβ ∈ W . Hence the condition is necessary.
The condition is sufficient. Now suppose W is a non-empty subset of V satisfying
the given condition i.e.,
a, b ∈ F and α, β ∈ W ⇒ aα + bβ ∈ W.
Taking a = 1, b = 1, we see that if α, β ∈W then 1 α + 1 β ∈W
⇒ α + β ∈W. [∵ α ∈ W ⇒ α ∈ V and 1 α = α in V ]
Thus W is closed under vector addition.
Now taking a = − 1, b = 0, we see that if α ∈W, then
(− 1) α + 0 α ∈ W [in place of β we have taken α]
⇒ − (1α) + 0 ∈ W ⇒ − α ∈ W.
Thus the additive inverse of each element of W is also in W.
Taking a = 0, b = 0, we see that if α ∈W, then
0 α + 0 α ∈W ⇒ 0 + 0 ∈W ⇒ 0 ∈W.
Thus the zero vector of V belongs to W. It will also be the zero vector of W.
Since the elements of W are also the elements of V, therefore vector addition will be
associative as well as commutative in W. Thus W is an abelian group with respect to
vector addition.
Now taking β = 0, we see that if a, b ∈ F
and α ∈W, then
aα + b 0 ∈ W i.e., aα + 0 ∈ W i.e., a α ∈ W .
Thus W is closed under scalar multiplication.
The remaining postulates of a vector space will hold in W since they hold in V of
which W is a subset. Hence W ( F ) is a subspace of V ( F ).
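Theorem 3 reduces the subspace test to the single condition aα + bβ ∈ W, which lends itself to a numerical spot-check. The helper below is only a sketch under assumed names (in_W is any membership predicate supplied by the caller); random sampling can refute the subspace property but never prove it.

# Sketch: randomized spot-check of the criterion a*alpha + b*beta in W.
# A failed check disproves the subspace property; passing checks merely
# fail to refute it.
import random

def looks_like_subspace(samples, in_W, trials=1000):
    # samples: vectors known to lie in W; in_W: membership predicate
    for _ in range(trials):
        a, b = random.uniform(-5, 5), random.uniform(-5, 5)
        alpha, beta = random.choice(samples), random.choice(samples)
        combo = tuple(a * x + b * y for x, y in zip(alpha, beta))
        if not in_W(combo):
            return False        # closure fails: certainly not a subspace
    return True                 # never refuted (not a proof)

# W1 = {(x, y, z) : x + y = 0} is a subspace; W2 = {(x, y, z) : x >= 0} is not.
in_W1 = lambda v: abs(v[0] + v[1]) < 1e-9
in_W2 = lambda v: v[0] >= 0
print(looks_like_subspace([(1.0, -1.0, 2.0), (-2.0, 2.0, 0.5)], in_W1))  # True
print(looks_like_subspace([(1.0, 2.0, 3.0), (0.5, 0.0, -1.0)], in_W2))   # False (almost surely)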
Theorem 4: A non-empty subset W of a vector space V ( F ) is a subspace of V if and only if
for each pair of vectors α, β in W and each scalar a in F the vector aα + β is again in W.
Example 10: Let V be the vector space of all polynomials in an indeterminate x over a field F.
Let W be a subset of V consisting of all polynomials of degree ≤ n. Then W is a subspace of V.
Solution: Let α and β be any two elements of W. Then α, β are polynomials over F of
degree ≤ n. If a, b are any two elements of F, then aα + bβ will also be a polynomial of
degree ≤ n.
Therefore aα + bβ ∈ W . Hence W is a subspace of V.
Example 11: If a1, a2 , a3 are fixed elements of a field F, then the set W of all ordered triads
( x1, x2 , x3 ) of elements of F, such that a1 x1 + a2 x2 + a3 x3 = 0 , is a subspace of V3 ( F ).
(Kumaun 2007)
Solution: W is non-empty since (0 , 0 , 0 ) ∈W . Let α = ( x1, x2 , x3 ) and
β = ( y1, y2 , y3 ) be any two elements of W. Then x1, x2 , x3 , y1, y2 , y3 are elements of
F and are such that
a1 x1 + a2 x2 + a3 x3 = 0 …(1)
and a1 y1 + a2 y2 + a3 y3 = 0 . …(2)
If a, b be any two elements of F, we have
aα + bβ = a ( x1, x2 , x3 ) + b ( y1, y2 , y3 )
= (ax1, ax2 , ax3 ) + (by1, by2 , by3 )
= (ax1 + by1, ax2 + by2 , ax3 + by3 ).
Now a1 (ax1 + by1) + a2 (ax2 + by2 ) + a3 (ax3 + by3 )
= a (a1 x1 + a2 x2 + a3 x3 ) + b (a1 y1 + a2 y2 + a3 y3 )
= a0 + b0 [by (1) and (2)]
= 0.
∴ a α + bβ = (ax1 + by1, ax2 + by2 , ax3 + by3 ) ∈ W .
Hence W is a subspace of V3 ( F ).
Example 12: Which of the following sets of vectors α = (a1, a2 , … , an) in R n are subspaces of
R n (n ≥ 3) ?
(i) all α such that a1 ≤ 0 ;
(ii) all α such that a3 is an integer ;
(iii) all α such that a2 + 4 a3 = 0 ;
(iv) all α such that a1 + a2 + … + an = k (a given constant).
Solution: (i) Let W = {α : α ∈ R n and a1 ≤ 0}.
If we take a1 = − 3, then a1 < 0 and so
α = (− 3, a2 , … , an) ∈ W .
Now if we take a = − 2, then
aα = (6, − 2 a2 , … , − 2 an).
Since the first coordinate of aα is 6 which is > 0, therefore aα ∉W .
Thus α ∈ W , a ∈ R but aα ∉W . Therefore W is not closed for scalar multiplication and
so W is not a subspace of R n .
(ii) Let W = {α : α ∈ R n and a3 is an integer}.
If we take a3 = 5, then a3 is an integer and so
α = (a1, a2 , 5, … , an) ∈ W .
Now if we take a = 1/2, then
aα = ( (1/2) a1, (1/2) a2 , 5/2 , … , (1/2) an ).
Since the third coordinate of aα is 5/2, which is not an integer, therefore aα ∉W .
Thus α ∈ W , a ∈ R but aα ∉W . Therefore W is not closed for scalar multiplication and
so W is not a subspace of R n .
(iii) Let W = {α : α ∈ R n and a2 + 4 a3 = 0}.
W is non-empty since 0 = (0 , 0 , … , 0 ) ∈ W .
Let α = (a1, … , an) and β = (b1, … , bn) be any two members of W.
Then a2 + 4 a3 = 0 and b2 + 4 b3 = 0 .
If a, b ∈R, then aα + bβ = (aa1 + bb1, … , aan + bbn).
We have (aa2 + bb2 ) + 4 (aa3 + bb3 ) = a (a2 + 4 a3 ) + b (b2 + 4 b3 ) = a . 0 + b . 0 = 0 .
Thus according to the definition of W , aα + bβ ∈ W .
In this way α, β ∈W and a, b ∈ R ⇒ aα + bβ ∈ W . Hence W is a subspace of R n .
(iv) W is a subspace if k = 0 ; if k ≠ 0 , then W is not a subspace, since for α, β ∈ W the
coordinate sum of α + β is 2 k ≠ k, so W is not closed under addition.
Example 13: Let R be the field of real numbers. Which of the following are subspaces of
V3 (R) ?
(i ) {( x , 2 y , 3 z ) : x , y , z ∈R}.
(ii ) {( x , x , x) : x ∈R}.
(iii) {( x , y , z ) : x , y , z are rational numbers}.
Solution: (i) Let W = {( x , 2 y, 3 z ) : x, y, z ∈ R}.
Let α = ( x1, 2 y1, 3 z1) and β = ( x2 , 2 y2 , 3 z2 ) be any two elements of W. Then
x1, y1, z1, x2 , y2 , z2 are real numbers. If a, b are any two real numbers, then
aα + bβ = a ( x1, 2 y1 , 3 z1) + b ( x2 , 2 y2 , 3 z2 )
= (ax1 + bx2 , 2 ay1 + 2 by2 , 3 az1 + 3 bz2 )
= (ax1 + bx2 , 2 [ay1 + by2 ], 3 [az1 + bz2 ])
∈W, since ax1 + bx2 , ay1 + by2 , az1 + bz2
are real numbers.
Thus a, b ∈R and α, β ∈ W ⇒ aα + bβ ∈ W.
∴ W is a subspace of V3 (R).
(ii) Let W = {( x, x, x) : x ∈ R}.
Let α = ( x1, x1, x1) and β = ( x2 , x2 , x2 ) be any two elements of W. Then x1, x2 are real
numbers. If a, b are any real numbers, then
aα + bβ = a ( x1, x1, x1) + b ( x2 , x2 , x2 )
= (ax1 + bx2 , ax1 + bx2 , ax1 + bx2 ) ∈ W , since ax1 + bx2 ∈R.
Thus W is a subspace of V3 (R).
(iii) Let W = {( x , y , z ) : x , y , z are rational numbers}.
Now α = (3, 4, 5) is an element of W. Also a = √ 7 is an element of R . But
aα = √ 7 (3, 4, 5) = (3 √ 7, 4 √ 7, 5 √ 7) ∉W since 3 √ 7, 4 √ 7, 5 √ 7
are not rational numbers.
Therefore W is not closed under scalar multiplication. Hence W is not a subspace of
V3 (R).
Example 14: Let V be the vector space of all 2 × 2 matrices over the field R. Show that W is not
a subspace of V, where W contains all 2 × 2 matrices with zero determinants.
Solution: Let A = [ a 0 ; 0 0 ] and B = [ 0 0 ; 0 b ], the semicolon separating the two
rows of each matrix, where a, b ∈R and a ≠ 0 , b ≠ 0 .
We have | A| = 0 ,| B| = 0 so A, B ∈ W .
Now A + B = [ a 0 ; 0 b ]. We have | A + B| = ab ≠ 0 .
∴ A + B ∉W .
Thus A ∈ W , B ∈ W but A + B ∉W .
Hence W is not a subspace of V.
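This counterexample is easy to reproduce numerically. The following sketch is illustrative (assuming numpy is available) and takes a = 2, b = 3:

# Sketch of Example 14 with a = 2, b = 3.
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 3.0]])

print(np.linalg.det(A))       # 0.0 -> A is in W
print(np.linalg.det(B))       # 0.0 -> B is in W
print(np.linalg.det(A + B))   # 6.0, non-zero -> A + B is not in W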
Example 15: If a vector space V is the set of all real valued continuous functions over the field of
real numbers R , then show that the set W of solutions of the differential equation
2 d²y/dx² − 9 dy/dx + 2 y = 0
is a subspace of V.
Solution: We have W = { y : 2 d²y/dx² − 9 dy/dx + 2 y = 0 }, where y = f ( x).
Obviously y = 0 satisfies the given differential equation and as such it belongs to W
and thus W ≠ ∅.
Now let y1, y2 ∈ W . Then
2 d²y1/dx² − 9 dy1/dx + 2 y1 = 0 …(1)
and 2 d²y2/dx² − 9 dy2/dx + 2 y2 = 0 . …(2)
Let a, b ∈R. If W is to be a subspace then we should show that ay1 + by2 also belongs to
W i. e., it is a solution of the given differential equation.
We have 2 d²/dx² (ay1 + by2 ) − 9 d/dx (ay1 + by2 ) + 2 (ay1 + by2 )
= 2 a d²y1/dx² + 2 b d²y2/dx² − 9 a dy1/dx − 9 b dy2/dx + 2 ay1 + 2 by2
= a (2 d²y1/dx² − 9 dy1/dx + 2 y1) + b (2 d²y2/dx² − 9 dy2/dx + 2 y2 )
= a ⋅ 0 + b ⋅ 0, by (1) and (2)
= 0.
Thus ay1 + by2 is a solution of the given differential equation and so it belongs to W.
Hence W is a subspace of V.
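The linearity computation above can also be checked symbolically. The sketch below is illustrative (assuming sympy): it takes the two exponential solutions e^(r1 x) and e^(r2 x), where r1, r2 are the roots of the auxiliary equation 2r² − 9r + 2 = 0, and verifies that an arbitrary combination ay1 + by2 again satisfies the equation.

# Illustrative sympy sketch: any combination a*y1 + b*y2 of two solutions
# of 2y'' - 9y' + 2y = 0 satisfies the equation again.
import sympy as sp

x, a, b, r = sp.symbols('x a b r')
r1, r2 = sp.solve(2*r**2 - 9*r + 2, r)     # roots of the auxiliary equation
y = a*sp.exp(r1*x) + b*sp.exp(r2*x)        # combination of two solutions

lhs = 2*sp.diff(y, x, 2) - 9*sp.diff(y, x) + 2*y
print(sp.simplify(lhs))                    # 0, so a*y1 + b*y2 lies in W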
Example 16: (Solution space of a system of homogeneous linear equations.) Let V ( F ) be the
vector space of all n × 1 matrices over the field F. Let A be an m × n matrix over F. Then the set W of
all n × 1 matrices X over F such that AX = O is a subspace of V. Here O is a null matrix of the type
m × 1.
Solution: W is non-empty since On× 1 ∈ W . Let X , Y ∈ W . Then X and Y are n × 1
matrices over F such that AX = O, AY = O.
Let a ∈ F . Then aX + Y is also an n × 1 matrix over F.
We have A (aX + Y ) = A (aX ) + AY = a ( AX ) + AY = aO + O = O + O = O
Therefore aX + Y ∈ W . Thus
a ∈ F , X , Y ∈ W ⇒ aX + Y ∈ W .
Hence W is a subspace of V.
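A quick numeric illustration of this example (a sketch, with A, X and Y chosen by us for the purpose):

# Sketch of Example 16: solutions X of AX = O are closed under aX + Y.
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0]])       # an assumed 2 x 3 matrix over R

X = np.array([[1.0], [0.0], [1.0]])    # A @ X = O
Y = np.array([[2.0], [1.0], [4.0]])    # A @ Y = O
a = 3.5

print(A @ X)             # the 2 x 1 zero matrix
print(A @ Y)             # the 2 x 1 zero matrix
print(A @ (a * X + Y))   # again the 2 x 1 zero matrix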
Comprehensive Exercise 2
8. Let V be a vector space of all real n × n matrices. Prove that the set W consisting of
all n × n real matrices which commute with a given matrix T of V forms a subspace
of V.
9. Let V be the vector space of all 2 × 2 matrices over the real field R . Show that the
subset of V consisting of all matrices A for which A2 = A is not a subspace of V.
10. Determine whether or not W is a subspace of R 3 if W consists of those vectors
(a, b, c ) ∈R 3 for which :
(i) a = 2 b (ii) ab = 0
(iii) a ≤ b ≤ c (iv) a = b2
(v) a2 + b2 + c 2 ≤ 1 (vi) k1 a + k2 b + k3 c = 0 , k i ∈ R .
11. Let V be the (real) vector space of all functions f from R into R . Which of the
following sets of functions are sub-spaces of V ?
(i) all f such that f ( x²) = [ f ( x)]² ;
(ii) all f such that f (0 ) = f (1) ;
(iii) all f such that f (3) = 1 + f (− 5) ;
(iv) all f such that f (− 1) = 0;
(v) all f which are continuous.
12. Let V be the vector space of all functions from the real field R into R . Check
whether W is a subspace of V where
(i) W = { f : f (7) = 2 + f (1) };
(ii) W consists of all bounded functions;
(iii) W consists of all integrable functions in the interval [0 , 1].
13. Let V be the vector space of infinite sequences < a1, a2 , … an, … > in a field F. Show
that W is a subspace of V, if
(i) W consists of all sequences with 0 as the first component ;
(ii) W consists of all sequences with only a finite number of non-zero
components.
14. Let AX = B be a non-homogeneous system of linear equations in n unknowns
over a field F. Show that the solution set W of the system is not a subspace of F n.
A nswers 2
4. (i) not a subspace (ii) a subspace
5. (i) not a subspace (ii) a subspace (iii) not a subspace;
(iv) not a subspace (v) not a subspace
10. (i) a subspace (ii) not a subspace
(iii) not a subspace (iv) not a subspace
(v) not a subspace (vi) a subspace
11. (i) not a subspace (ii) a subspace (iii) not a subspace
(iv) a subspace (v) a subspace
12. (i) not a subspace (ii) a subspace (iii) a subspace
4 Algebra of Subspaces
Theorem 1: The intersection of any two subspaces W1 and W2 of a vector space V ( F ) is also a
subspace of V ( F ).
(Kumaun 2002, 07, 09; Garhwal 07, 12; Gorakhpur 10, 13, 15)
Proof: Since 0 belongs to both W1 and W2 , therefore W1 ∩ W2 is not empty.
Let α , β ∈ W1 ∩ W2 and a , b ∈ F .
Now α ∈ W1 ∩ W2 ⇒ α ∈ W1 and α ∈W2
and β ∈ W1 ∩ W2 ⇒ β ∈ W1 and β ∈W2 .
Since W1 is a subspace, therefore a , b ∈ F and α , β ∈ W1 ⇒ aα + bβ ∈ W1.
Similarly a , b ∈ F and α , β ∈ W2 ⇒ aα + bβ ∈ W2 .
Now aα + bβ ∈ W1, aα + bβ ∈ W2 ⇒ aα + bβ ∈ W1 ∩ W2 .
Thus a , b ∈ F and α , β ∈ W1 ∩ W2 ⇒ aα + bβ ∈ W1 ∩ W2 .
Hence W1 ∩ W2 is a subspace of V ( F ).
Note: The union of two subspaces of V ( F ) may not be a subspace of V ( F ). For
example if R be the field of real numbers, then W1 = {(0 , 0 , z ) : z ∈ R} and
W2 = {(0 , y, 0 ) : y ∈ R} are two subspaces of V3 (R).
We have (0 , 0 , 3) ∈W1 and (0, 5, 0) ∈W2 .
But (0 , 0 , 3) + (0 , 5, 0 ) = (0 , 5, 3) ∉W1 ∪ W2 since neither (0 , 5, 3) ∈W1 nor
(0 , 5, 3) ∈W2 . Thus W1 ∪ W2 is not closed under vector addition. Hence W1 ∪ W2 is
not a subspace of V3 (R). (Gorakhpur 2015)
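The note's counterexample can be spelled out in a few lines (an illustrative sketch):

# Sketch of the note: W1 = z-axis, W2 = y-axis in R^3; their union is not
# closed under vector addition.
in_W1 = lambda v: v[0] == 0 and v[1] == 0    # vectors (0, 0, z)
in_W2 = lambda v: v[0] == 0 and v[2] == 0    # vectors (0, y, 0)

u, w = (0, 0, 3), (0, 5, 0)
s = tuple(x + y for x, y in zip(u, w))       # (0, 5, 3)
print(in_W1(s) or in_W2(s))                  # False: s lies outside W1 U W2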
Theorem 2: The union of two subspaces is a subspace if and only if one is contained in the
other. (Kumaun 2003, 07, 11; Garhwal 06, 07, 13)
Proof: Suppose W1 and W2 are two subspaces of a vector space V.
Let W1 ⊆ W2 or W2 ⊆ W1. Then W1 ∪ W2 = W2 or W1. But W1, W2 are subspaces and
therefore, W1 ∪ W2 is also a subspace.
Conversely, suppose W1 ∪ W2 is a subspace.
To prove that W1 ⊆ W2 or W2 ⊆ W1 .
Let us assume that W1 is not a subset of W2 and W2 is also not a subset of W1.
Now W1 is not a subset of W2 ⇒ ∃ α ∈ W1 and α ∉W2 …(1)
and W2 is not a subset of W1 ⇒ ∃ β ∈ W2 and β ∉W1. …(2)
From (1) and (2), we have α ∈ W1 ∪ W2 and β ∈ W1 ∪ W2 .
Since W1 ∪ W2 is a subspace, therefore α + β is also in W1 ∪ W2 .
But α + β ∈ W1 ∪ W2 ⇒ α + β ∈ W1 or W2 .
Suppose α + β ∈W1 . Since α ∈W1 and W1 is a subspace, therefore (α + β) − α = β is in
W1.
But from (2), we have β ∉W1. Thus we get a contradiction. Again suppose that
α + β ∈W2 . Since β ∈W2 and W2 is a subspace, therefore (α + β) − β = α is in W2 . But
from (1), we have α ∉W2 . Thus here also we get a contradiction. Hence either
W1 ⊆ W2 or W2 ⊆ W1 .
Thus a , b ∈ F and α , β ∈ ∩_{t ∈ T} Wt ⇒ aα + bβ ∈ ∩_{t ∈ T} Wt .
Hence ∩_{t ∈ T} Wt is a subspace of V ( F ).
Theorem 1: The linear span L (S) of any subset S of a vector space V ( F ) is a subspace of V
generated by S i.e., L (S) = { S }. (Kumaun 2009)
Proof: Let α , β be any two elements of L (S).
Then α = a1 α1 + a2 α2 + … + am α m
and β = b1 β1 + b2 β2 + … + bn β n
where the a’s and b’s are elements of F and the α’s and β’s are elements of S.
If a , b be any two elements of F, then
aα + bβ = a (a1 α1 + a2 α2 + … + am α m )
+ b (b1 β1 + b2 β2 + … + bn β n)
= a (a1 α1) + a (a2 α2 ) + … + a (am α m ) + b (b1 β1) + b (b2 β2 ) + …
…+ b (bn β n)
= (aa1) α1 + (aa2 ) α2 + …+ (aam ) α m + (bb1) β1 + (bb2 ) β2 + …
… + (bbn) β n.
Thus aα + bβ has been expressed as a linear combination of a finite set
α1 , α2 ,…, α m , β1, β2 , …, β n of the elements of S. Consequently aα + bβ ∈ L (S ).
Thus a , b ∈ F and α , β ∈ L (S) ⇒ aα + bβ ∈ L (S).
Hence L (S) is a subspace of V ( F ).
Also each element of S belongs to L (S) because if α r ∈ S, then α r = 1 α r and this
implies that α r ∈ L (S). Thus L (S) is a subspace of V and S is contained in L (S).
Now if W is any subspace of V containing S, then each element of L (S) must be in W
because W is to be closed under vector addition and scalar multiplication. Therefore
L (S) will be contained in W.
Hence L (S) = { S} i.e., L (S) is the smallest subspace of V containing S.
Note 1: Important. Suppose S is a non-empty subset of a vector space V ( F ). Then a
vector α ∈V will be in the subspace of V generated by S if it can be expressed as a
linear combination over F of a finite number of vectors belonging to S.
Note 2: If in any case we are to prove that L (S) = V , then we should prove that
V ⊆ L (S) because L (S) ⊆ V since L (S) is a subspace of V. In order to prove that
V ⊆ L (S), we should prove that each element of V can be expressed as a linear
combination of a finite number of elements of S. Then each element of V will also be
an element of L (S) and we shall have V ⊆ L (S).
Finally V ⊆ L (S) and L (S) ⊆ V ⇒ L (S) = V .
Illustration 1: The subset containing a single element (1, 0, 0) of the vector space V3 ( F )
generates the subspace which is the totality of elements of the form (a , 0 , 0 ).
Illustration 2: The subset {(1, 0 , 0 ), (0 , 1, 0 )} of V3 ( F ) generates the subspace which is the
totality of the elements of the form (a, b, 0 ).
Illustration 3: The subset S = {(1, 0 , 0 ), (0 , 1, 0 ), (0 , 0 , 1)} of V3 ( F ) generates or spans the
entire vector space V3 ( F ) i.e., L (S) = V .
Then V = L (S ).
Now a1α1 + a2 α2 + … + am α m ∈ L (S )
and b1 β1 + b2 β2 + … + bp β p ∈ L (T ).
Therefore α ∈ L (S ) + L (T ).
Consequently L (S ∪ T ) ⊆ L (S ) + L (T ).
Now let γ be any element of L (S ) + L (T ). Then γ = β + δ where β ∈ L (S ) and
δ ∈ L (T ). Now β will be a linear combination of a finite number of elements of S and δ
will be a linear combination of a finite number of elements of T . Therefore β + δ will be
a linear combination of a finite number of elements of S ∪ T .
Thus β + δ ∈ L (S ∪ T ). Consequently
L (S ) + L (T ) ⊆ L (S ∪ T ).
Hence L (S ∪ T ) = L (S ) + L (T ).
(iii) Suppose S is a subspace of V. Then we are to prove that L (S ) = S.
Let α ∈ L (S ). Then α = a1α1 + a2α2 + … + an α n where a1, … , an ∈ F and α1, … , α n ∈ S.
But S is a subspace of V. Therefore it is closed with respect to scalar multiplication and
vector addition. Hence α = a1α1 + … + an α n ∈ S. Thus
α ∈ L (S ) ⇒ α ∈ S.
Therefore L (S ) ⊆ S. Also S ⊆ L (S ). Therefore L (S ) = S.
Converse. Suppose L (S ) = S. Then to prove that S is a subspace of V. We know
that L (S ) is a subspace of V. Since S = L (S ), therefore S is also a subspace of V.
(iv) L ( L (S )) is the smallest subspace of V containing L (S ). But L (S ) is a subspace
of V. Therefore the smallest subspace of V containing L (S ) is L (S ) itself.
Hence L ( L (S )) = L (S ).
Example 18: In the vector space R3 express the vector (1, − 2, 5) as a linear combination of the
vectors (1, 1, 1), (1, 2, 3) and (2, − 1, 1). (Kumaun 2008)
Solution: Let α = (1, − 2, 5), α1 = (1, 1, 1), α2 = (1, 2, 3), α3 = (2, − 1, 1).
We wish to express α as a linear combination of α1, α2 , α3 .
Let α = a1α1 + a2α2 + a3 α3 where a1, a2 , a3 ∈R.
Then (1, − 2, 5) = a1 (1, 1, 1) + a2 (1, 2, 3) + a3 (2, − 1, 1)
= (a1, a1, a1) + (a2 , 2 a2 , 3 a2 ) + (2 a3 , − a3 , a3 )
= (a1 + a2 + 2 a3 , a1 + 2 a2 − a3 , a1 + 3 a2 + a3 )
∴ a1 + a2 + 2 a3 = 1 …(1)
a1 + 2 a2 − a3 = − 2 …(2)
a1 + 3 a2 + a3 = 5 …(3)
Method 1. The augmented matrix of the system of equations (1), (2) and (3) is
[ A : B] = [ 1 1 2 ⋮ 1 ; 1 2 −1 ⋮ −2 ; 1 3 1 ⋮ 5 ].
We shall reduce the matrix [ A : B ] to Echelon form by applying the row
transformations only.
We have [ A : B] ~ [ 1 1 2 ⋮ 1 ; 0 1 −3 ⋮ −3 ; 0 2 −1 ⋮ 4 ], by R2 → R2 − R1, R3 → R3 − R1
~ [ 1 1 2 ⋮ 1 ; 0 1 −3 ⋮ −3 ; 0 0 5 ⋮ 10 ], by R3 → R3 − 2 R2 .
Thus we have rank A = rank [ A : B ] i. e., the system of equations (1), (2), (3) is
consistent and so has a solution i. e., α can be expressed as a linear combination of
α1, α2 , α3 . The equations (1), (2) and (3) are equivalent to equations
a1 + a2 + 2 a3 = 1, a2 − 3 a3 = − 3, 5 a3 = 10 .
Solving for unknowns, we get a3 = 2 , a2 = 3, a1 = − 6.
Hence (1, − 2 , 5) = − 6 (1, 1, 1) + 3 (1, 2 , 3) + 2 (2 , − 1, 1).
Method 2. Eliminating a3 between (1) and (2), we get
3 a1 + 5 a2 = − 3 …(4)
Eliminating a3 between (2) and (3), we get
2 a1 + 5 a2 = 3 …(5)
Solving (4) and (5), we get a1 = − 6, a2 = 3.
Putting a1 = − 6, a2 = 3 in (1), we get a3 = 2.
Hence (1, − 2, 5) = − 6 (1, 1, 1) + 3 (1, 2 , 3) + 2 (2 , − 1, 1).
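The same coefficients can be obtained numerically: writing the three given vectors as the columns of a matrix turns the problem into a 3 × 3 linear system. An illustrative numpy sketch:

# Sketch of Example 18: solve a1*(1,1,1) + a2*(1,2,3) + a3*(2,-1,1) = (1,-2,5).
import numpy as np

M = np.array([[1.0, 1.0, 2.0],     # columns are alpha1, alpha2, alpha3
              [1.0, 2.0, -1.0],
              [1.0, 3.0, 1.0]])
alpha = np.array([1.0, -2.0, 5.0])

coeffs = np.linalg.solve(M, alpha)
print(coeffs)    # [-6.  3.  2.], i.e. a1 = -6, a2 = 3, a3 = 2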
Example 19: For what value of m, the vector (m, 3, 1) is a linear combination of the vectors
(3, 2, 1) and (2, 1, 0 ) ?
Solution: Let α1 = (3, 2, 1) and α2 = (2, 1, 0 ).
Let (m, 3, 1) = aα1 + bα2 where a, b ∈R.
Then (m, 3, 1) = a (3, 2, 1) + b (2, 1, 0 ) = (3 a, 2 a, a) + (2 b, b, 0 )
= (3 a + 2 b, 2 a + b, a)
∴ 3 a + 2 b = m, 2 a + b = 3, a = 1.
These give a = 1, b = 1, m = 5.
Hence the required value of m is 5.
Example 20: In the vector space R4 determine whether or not the vector (3, 9, − 4, − 2) is a
linear combination of the vectors (1, − 2, 0 , 3), (2, 3, 0 , − 1) and (2 , − 1, 2 , 1).
Solution: Let α = (3, 9, − 4, − 2), α1 = (1, − 2 , 0 , 3),
α2 = (2 , 3, 0 , − 1), α3 = (2 , − 1, 2 , 1).
Let α = aα1 + bα2 + cα3 where a, b, c ∈R.
Then (3, 9, − 4, − 2) = a (1, − 2, 0 , 3) + b (2 , 3, 0 , − 1) + c (2, − 1, 2, 1)
or (3, 9, − 4, − 2) = (a + 2 b + 2 c , − 2 a + 3 b − c , 2 c , 3 a − b + c )
∴ a + 2b + 2c = 3 …(1)
− 2a + 3b − c = 9 …(2)
2c = − 4 …(3)
3a − b + c = − 2 …(4)
Solving (1), (2) and (3), we get a = 1, b = 3, c = − 2. These values satisfy (4). Thus the
system of equations (1), (2), (3) and (4) has a solution i. e., α can be written as a linear
combination of α1, α2 and α3 .
Also we have α = α1 + 3α2 − 2α3 .
Example 21: In the vector space R3 , let α = (1, 2 , 1), β = (3 , 1, 5), γ = (3 , − 4 , 7). Show that
the subspaces spanned by S = {α , β} and T = {α , β, γ } are the same.(Kumaun 2008, 11)
Solution: First we shall show that the vector γ can be expressed as a linear
combination of the vectors α and β. Let
(3, − 4, 7) = a (1, 2 , 1) + b (3, 1, 5).
Then a + 3 b = 3, 2 a + b = − 4, a + 5 b = 7. Solving the first two equations, we get
a = − 3, b = 2 and these values satisfy the third equation also. Therefore we can write
γ = − 3α + 2β.
Now S ⊆ T ⇒ L (S ) ⊆ L (T ).
Further let δ ∈ L (T ). Then δ can be expressed as a linear combination of the vectors
α , β and γ. In this linear combination the vector γ can be replaced by − 3α + 2β. Thus δ
can be expressed as a linear combination of the vectors α and β. Therefore δ ∈ L (S ).
Thus δ ∈ L (T ) ⇒ δ ∈ L (S ). Therefore L (T ) ⊆ L (S ).
Hence L (T ) = L (S ).
Example 22: Show that (1, 1, 1), (0 , 1, 1) and (0 , 1, − 1) generate R3 .
Solution: We shall show that any vector (a, b, c ) ∈R3 is a linear combination of the
given vectors.
Let (a, b, c ) = x (1, 1, 1) + y (0 , 1, 1) + z (0 , 1, − 1), where x, y, z ∈R.
Then (a, b, c ) = ( x , x + y + z , x + y − z ).
∴ x = a, x + y + z = b, x + y − z = c .
These give x = a, y = (b + c − 2 a) / 2, z = (b − c ) / 2, i. e., the above system has a
solution. Thus
the given vectors generate R3 .
Example 23: Let V be the vector space of all functions from R into R ; let Ve be the subset of even
functions, f (− x) = f ( x) ; let Vo be the subset of odd functions, f (− x) = − f ( x).
(i) Prove that Ve and Vo are subspaces of V.
(ii) Prove that Ve + Vo = V .
(iii) Prove that Ve ∩ Vo = {0}.
Solution: (i) Suppose f e and ge ∈ Ve and a is any scalar i. e., a ∈R. Then
(af e + ge ) (− x) = af e (− x) + ge (− x)
= af e ( x) + ge ( x) = (af e + ge) ( x).
Therefore af e + ge is an even function.
Thus a ∈R and f e , ge ∈ Ve ⇒ af e + ge ∈ Ve .
Hence Ve is a subspace of V.
(ii) Any f ∈ V can be written as f = f e + f o , where f e ( x) = ½ [ f ( x) + f (− x)] is an
even function and f o ( x) = ½ [ f ( x) − f (− x)] is an odd function. Thus every f ∈ V
belongs to Ve + Vo , and hence Ve + Vo = V .
(iii) If f ∈ Ve ∩ Vo , then f ( x) = f (− x) = − f ( x) for all x, so that 2 f ( x) = 0 i. e., f = 0.
Hence Ve ∩ Vo = {0}.
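A numerical illustration of the decomposition used in part (ii), with an arbitrary test function of our own choosing:

# Sketch of part (ii): every f splits as f = f_e + f_o, where
# f_e(x) = (f(x) + f(-x))/2 is even and f_o(x) = (f(x) - f(-x))/2 is odd.
import math

def f(x):
    return x**3 + 2*x + 7                # an assumed test function

f_e = lambda x: (f(x) + f(-x)) / 2       # even part (here the constant 7)
f_o = lambda x: (f(x) - f(-x)) / 2       # odd part (here x**3 + 2*x)

x = 1.7
print(math.isclose(f_e(x), f_e(-x)))           # True: f_e is even
print(math.isclose(f_o(-x), -f_o(x)))          # True: f_o is odd
print(math.isclose(f_e(x) + f_o(x), f(x)))     # True: f = f_e + f_o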
Comprehensive Exercise 3
1. For which value of k will the vector (1, − 2, k ) in R 3 be a linear combination of the
vectors (3, 0 , − 2) and (2, − 1, − 5) ?
2. In the vector space R 4 determine whether or not the vector (3, 9, − 4, 2) is a linear
combination of the vectors (1, − 2, 0 , 3), (2, 3, 0 , − 1) and (2, − 1, 2, 1).
3. Is the vector (3, − 1, 0 , − 1) in the subspace of R 4 spanned by the vectors
(2, − 1, 3, 2), (− 1, 1, 1, − 3) and (1, 1, 9, − 5) ?
4. Is the vector (2, − 5, 3) in the subspace of R 3 spanned by the vectors
(1, − 3, 2),(2, − 4, − 1),(1, − 5, 7)? (Kumaun 2009; Garhwal 13)
5. Express the matrix E = [ 3 1 ; 1 −1 ] as a linear combination of the matrices
A = [ 1 1 ; 1 0 ], B = [ 0 0 ; 1 1 ] and C = [ 0 2 ; 0 −1 ].
6. Write E as a linear combination of
A = [ 1 1 ; 0 −1 ], B = [ 1 1 ; −1 0 ] and C = [ 1 −1 ; 0 0 ]
where (i) E = [ 3 −1 ; 1 −2 ] (ii) E = [ 2 1 ; −1 −2 ].
7. Express the polynomial f = x² + 4 x − 3 over R as a linear combination of the
polynomials
f1 = x² − 2 x + 5, f2 = 2 x² − 3 x and f3 = x + 3.
8. Express f as a linear combination of the polynomials
f1 = 2 x² + 3 x − 4, f2 = x² − 2 x − 3
where (i) f = 3 x² + 8 x − 5 (ii) f = 4 x² − 6 x − 1.
9. Find the condition on a, b and c so that (a, b, c ) ∈R 3 is in the space generated by
(2, 1, 0 ), (1, − 1, 2) and (0 , 3, − 4).
10. Find one vector in R 3 which generates the intersection of W1 and W2 where
W1 = {(a, b, 0 ) : a, b ∈ R } and W2 is the space generated by the vectors (1, 2, 3) and
(1, − 1, 1).
11. If α, β and γ are vectors such that α + β + γ = 0, then show that α and β span the
same subspace as β and γ.
12. Give an example of a subset W of a vector space V which is not a subspace of V but
for which
(i) W + W = W . (ii) W + W ⊂ W .
13. Let V be the vector space of n × n matrices over a field F. Let W1 and W2 be the
subspaces of upper triangular matrices and lower triangular matrices
respectively. Find
(i) W1 + W2 (ii) W1 ∩ W2 .
14. Show that the space U generated by the vectors u1 = (1, 2, − 1, 3),
u2 = (2, 4,1, − 2) and u3 = (3, 6, 3, − 7) and the space V generated by the vectors
v1 = (1, 2, − 4, 11) and v2 = (2, 4, − 5, 14) are equal.
A nswers 3
1. k = −8 2. No 3. No
4. No 5. E = 3 A − 2B − C
6. (i) E = 2 A − B + 2 C (ii) Not possible
7. f = − 3 f1 + 2 f2 + 4 f3 8. (i) f = 2 f1 − f2 (ii) Not possible
9. 2 a − 4 b − 3 c = 0 10. (− 2, 5, 0 )
13. (i) W1 + W2 = V (ii) space of diagonal matrices
True or False
Write ‘T’ for true and ‘F’ for false statement.
1. A subspace of V3 (R), where R is the real field, must always contain the origin.
2. The set of vectors α = ( x, y) ∈ V2 (R) for which x² = y² is a subspace of V2 ( R ).
3. The set of ordered triads ( x, y, z ) of real numbers with x > 0 is a subspace of V3 (R).
4. The set of ordered triads ( x, y, z ) of real numbers with x + y = 0 is a subspace of
V3 (R).
5. The union of any two subspaces W1 and W2 of a vector space V ( F ) is always a
subspace of V ( F ).
6. The intersection of two subspaces of a vector space V is also a subspace of V.
7. If A and B are subsets of a vector space, then A ≠ B ⇒ L ( A) ≠ L ( B).
Answers
2
Linear Dependence of Vectors
Example 1: Prove that if two vectors are linearly dependent, one of them is a scalar multiple of
the other. (Gorakhpur 2011; Kumaun 13)
Solution: Let α , β be two linearly dependent vectors of the vector space V. Then ∃
scalars a, b not both zero, such that
aα + bβ = 0.
If a ≠ 0, then we get aα = − bβ
⇒ α = (− b / a) β ⇒ α is a scalar multiple of β.
If b ≠ 0, then we get bβ = − aα
⇒ β = (− a / b) α ⇒ β is a scalar multiple of α.
Thus one of the vectors α and β is a scalar multiple of the other.
Example 2: In the vector space Vn ( F ) , the system of n vectors
e1 = (1, 0 , 0 , …, 0 ), e2 = (0 , 1, 0 , …, 0 , 0 ),…, en = (0 , 0 , …, 0 , 1)
is linearly independent where 1 denotes the unity of the field F.
Solution: If a1, a2 , a3 , …, an be any scalars, then
a1 e1 + a2 e2 + … + an en = 0
⇒ a1 (1, 0 , 0 , …, 0 ) + a2 (0 , 1, 0 , … , 0 ) + … + an (0 , 0 , …, 0 , 1) = 0
⇒ (a1, a2 , …, an) = (0 , 0 , …, 0 )
⇒ a1 = 0 , a2 = 0 , …, an = 0 .
Therefore the given set of n vectors is linearly independent.
In particular {(1, 0 , 0 ), (0 , 1, 0 ), (0 , 0 , 1)} is a linearly independent subset of V3 ( F ).
Example 3: If the set S = {α1, α2 , …, α n} of vectors of V ( F ) is linearly independent, then
none of the vectors α1, α2 , …, α n can be zero vector.
Example 13: In the vector space F [ x] of all polynomials over the field F the infinite set
S = {1, x, x², x³, …} is linearly independent.
Solution: Let S ′ = { x^m1 , x^m2 , … , x^mn } be any finite subset of S having n vectors. Here
m1, m2 ,…, mn are distinct non-negative integers. Let a1, a2 , …, an be scalars such that
a1 x^m1 + a2 x^m2 + … + an x^mn = 0 (i. e., the zero polynomial). …(1)
By the definition of equality of two polynomials we have from (1)
a1 = 0 , a2 = 0 , … , an = 0.
Thus every finite subset of S is linearly independent.
Therefore S is linearly independent.
Example 14: Show that the three vectors (1, 1, − 1), (2 , − 3, 5) and (−2 , 1, 4) of R3 are linearly
independent. (Kumaun 2010, 13)
Solution: Let a, b, c be scalars i. e., real numbers such that
a (1, 1, − 1) + b (2 , − 3, 5) + c (−2 , 1, 4) = (0 , 0 , 0 )
i. e., (a + 2 b − 2 c , a − 3 b + c , − a + 5 b + 4 c ) = (0 , 0 , 0 )
i. e., a + 2b − 2c = 0 …(1)
a − 3b + c = 0 …(2)
− a + 5b + 4c = 0 …(3)
Now we shall solve the simultaneous equations (1), (2) and (3).
Multiplying (2) by 2 and adding to (1), we get
3a − 4b = 0. …(4)
Again multiplying (1) by 2 and adding to (3), we get
a + 9 b = 0. …(5)
Multiplying (5) by 3 and subtracting from (4), we get
−31b = 0 or b = 0 .
Putting b = 0 in (5), we get a = 0.
Now putting a = 0 , b = 0 in (1), we get c = 0.
Thus a = 0 , b = 0 , c = 0 is the only solution of the equations (1), (2) and (3).
∴ a (1, 1, − 1) + b (2 , − 3, 5) + c (−2 , 1, 4) = (0 , 0 , 0 )
⇒ a = 0, b = 0, c = 0.
Hence the vectors (1, 1, − 1), (2 , − 3, 5), (−2 , 1, 4) of R3 are linearly independent.
Example 15: Find a maximal linearly independent subsystem of the system of vectors
α1 = (2, − 2, − 4), α2 = (1, 9, 3), α3 = (− 2, − 4, 1)
and α4 = (3, 7, − 1).
Solution: Let A denote the matrix whose rows consist of the vectors α1, α2 , α3 and α4 ,
i. e., A = [ 2 −2 −4 ; 1 9 3 ; −2 −4 1 ; 3 7 −1 ].
We shall reduce the matrix A to Echelon form by applying the row transformations.
We have
A ~ [ 1 −1 −2 ; 1 9 3 ; −2 −4 1 ; 3 7 −1 ], applying R1 → (1/2) R1
~ [ 1 −1 −2 ; 0 10 5 ; 0 −6 −3 ; 0 10 5 ], by R2 → R2 − R1, R3 → R3 + 2 R1, R4 → R4 − 3 R1
~ [ 1 −1 −2 ; 0 1 1/2 ; 0 1 1/2 ; 0 1 1/2 ], by R2 → (1/10) R2 , R3 → − (1/6) R3 , R4 → (1/10) R4
~ [ 1 −1 −2 ; 0 1 1/2 ; 0 0 0 ; 0 0 0 ], by R3 → R3 − R2 , R4 → R4 − R2 ,
which is in Echelon form.
We have rank A = the number of non-zero rows in its Echelon form = 2.
∴ the maximum number of linearly independent row vectors in the matrix A
= the rank A = 2.
The vectors α1 and α2 are linearly independent and so {α1, α2} is a maximal linearly
independent subsystem of the given system of vectors. We observe that none of the
given four vectors is a scalar multiple of any of the remaining three vectors. So any two
of the given four vectors form a maximal linearly independent subsystem of the given
system of vectors.
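The rank computation in Example 15 can be reproduced with a computer algebra system. An illustrative sketch using sympy:

# Sketch of Example 15: the matrix of row vectors has rank 2, so a maximal
# linearly independent subsystem contains two of the four vectors.
import sympy as sp

A = sp.Matrix([[2, -2, -4],
               [1, 9, 3],
               [-2, -4, 1],
               [3, 7, -1]])

print(A.rank())       # 2
print(A.rref()[0])    # reduced row echelon form: two non-zero rows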
Comprehensive Exercise 1
1. Show that the vectors (1, 1, 2 , 4), (2 , − 1, − 5, 2), (1, − 1, − 4 , 0 ) and (2 , 1, 1, 6) are
linearly dependent in R 4 .
2. Show that the vectors (1, 1, 0 , 0 ), (0 , 1, − 1, 0 ), (0 , 0 , 0 , 3) in R 4 are linearly
independent.
3. Determine whether the following set of vectors in V3 (Q ) is linearly dependent or
independent, Q being the field of rational numbers :
{(− 1, 2, 1), (3, 1, − 2)}.
4. Find a linearly independent subset T of the set S = {α1, α2 , α3 , α4 }
where α1 = (1, 2, − 1), α2 = (− 3, − 6, 3), α3 = (2, 1, 3), α4 = (8, 7, 7) ∈ R 3
which spans the same space as S.
5. If the vectors (0 , 1, x),( x, 1, 0 ),(1, x, 1) of the vector space R 3 are linearly dependent,
then find the value of x. (Kumaun 2012, 14)
6. Show that the vectors (0 , 2, − 4) , (1, − 2, − 1), (1, − 4, 3) in R 3 are linearly
dependent. Also express (0 , 2, − 4) as a linear combination of (1, − 2, − 1) and
(1, − 4, 3).
7. Determine if the vectors (1, − 2, 1), (2, 1, − 1), (7, − 4, 1) in R 3 are linearly
independent or linearly dependent.
8. Let α1, α2 , α3 be the vectors of V ( F ), a, b ∈ F . Show that the set {α1, α2 , α3 } is
linearly dependent if the set {α1 + aα2 + bα3 , α2 , α3 } is linearly dependent.
9. If α, β, γ are linearly independent vectors of V ( F ) where F is the field of complex
numbers, then so also are α + β, α − β, α − 2β + γ .
10. Show that the set {1, x, x (1 − x)} is a linearly independent set of vectors in the
space of all polynomials over the real number field.
11. Find whether the vectors
2 x³ + x² + x + 1, x³ + 3 x² + x − 2 and x³ + 2 x² − x + 3 of R [ x] ,
the vector space of all polynomials over the real number field, are linearly
independent or not.
12. Prove that a set of vectors which contains the zero vector is linearly dependent.
13. Let V be the vector space of 2 × 2 matrices over R. Determine whether the
matrices A, B, C ∈ V are linearly dependent, where
A = [ 1 2 ; 3 1 ], B = [ 3 −1 ; 2 2 ], C = [ 1 −5 ; −4 0 ]. (Garhwal 2013)
14. Prove that the three non-zero non-coplanar vectors are linearly independent.
15. Show that any three non-zero coplanar vectors are linearly dependent in R 3 .
16. Show that the set of functions { x, | x |} is L.I. in vector space of continuous
functions defined in (− 1, 1).
17. Show that the vectors α = (1 + i, 2 i), β = (1, 1 + i) in C2 (C) are L.D. but in C2 (R)
are L.I.
18. If V ( F ) is a vector space and α1, α2 , … , α n ∈ V , λ 2 , λ 3 , … , λ n ∈ F
such that {α2 + λ 2 α1, α3 + λ 3 α1, … , α n + λ n α1}
is L.D. then show that {α1, α2 , … , α n} is L.D.
19. Let V be the vector space of functions from R into R. Show that f , g, h ∈ V are
linearly independent where
(i) f ( x) = e^(2 x), g ( x) = x², h ( x) = x
(ii) f ( x) = sin x, g ( x) = cos x, h ( x) = x.
20. Test the vectors (0 , 1, 0 , 1, 1), (1, 0 , 1, 0 , 1), (0 , 1, 0 , 1, 1),(1, 1, 1, 1, 1) in V5 over the field
of rational numbers for linear independence.
21. Prove that the four vectors (1, 0 , 0 ), (0 , 1, 0 ), (0 , 0 , 1), (1, 1, 1) in V3 (C) form a
linearly dependent set but any three of them are linearly independent.
Answers 1
True or False
Write ‘T’ for true and ‘F’ for false statement.
1. A set containing a linearly independent set of vectors is itself linearly
independent.
2. Union of two linearly independent subsets of a vector space is linearly
independent.
3. The vectors (2, − 3, 7), (0 , 0 , 0 ), (3, − 1, − 4) are linearly independent in R 3.
4. The vectors (1, − 3, 7), (2, 0 , − 6), (3, − 1, − 1), (2, 4, − 5) are linearly dependent in R3 .
Answers
True or False
1. F 2. F 3. F 4. T
3
Basis and Dimensions
If we omit the vector α i from S3 then V is also generated by the remaining set
S4 = { β1, α1, α2 , … , α i − 1, α i + 1, … , α m }.
Since V = L (S4 ) and β2 ∈V, therefore β2 can be expressed as a linear combination of
the vectors belonging to S4 . Consequently the set
S5 = { β2 , β1, α1, α2 , … , α i − 1, α i + 1, … , α m }
is linearly dependent. Therefore there exists a member α j of this set S5 such that α j is a
linear combination of the preceding vectors. Obviously α j will be different from β1 and
β2 since { β1, β2} is a linearly independent set. If we exclude the vector α j from S5 , then
the remaining set will generate V ( F ).
We may continue to proceed in this manner. Here each step consists in the exclusion of
an α and the inclusion of a β in the set S1.
Obviously the set S1 of α’s cannot be exhausted before the set S2 of β’s otherwise V ( F )
will be a linear span of a proper subset of S2 and thus S2 will become linearly
dependent. Therefore we must have
m ≮ n, i. e., m ≥ n.
Interchanging the roles of S1 and S2 , we shall get that
n ≮ m, i. e., n ≥ m.
Hence n = m.
Illustration: For the vector space V3 ( F ), the sets
S1 = {(1, 0 , 0 ), (0 , 1, 0 ), (0 , 0 , 1)}
and S2 = {(1, 0 , 0 ), (1, 1, 0 ), (1, 1, 1)}
are bases as can easily be seen. Both these bases contain the same number of elements
i. e., 3.
Obviously L (S1) = V . Since the α’s can be expressed as linear combinations of the β’s
therefore the set S1 is linearly dependent.
Therefore there is some vector of S1 which is a linear combination of its preceding
vectors. This vector cannot be any of the α’s since the α’s are linearly independent.
Therefore this vector must be some β, say β i . Now omit the vector β i from S1 and
consider the set
S2 = {α1, α2 , … , α m , β1, β2 , … , β i − 1, β i + 1, … , β n}.
Obviously L (S2 ) = V . If S2 is linearly independent, then S2 will be a basis of V and it is
the required extended set which is a basis of V. If S2 is not linearly independent, then
repeating the above process a finite number of times, we shall get a linearly
independent set containing α1, α2 , … , α m and spanning V. This set will be a basis of V
and it will contain S. Since each basis of V contains the same number of elements,
therefore exactly n − m elements of the set of β’s will be adjoined to S so as to form a basis
of V.
Theorem 2: Each set of (n + 1) or more vectors of a finite dimensional vector space V ( F ) of
dimension n is linearly dependent.
Proof: Let V ( F ) be a finite dimensional vector space of dimension n. Let S be a
linearly independent subset of V containing (n + 1) or more vectors. Then S will form a
part of basis of V. Thus we shall get a basis of V containing more than n vectors. But
every basis of V will contain exactly n vectors. Hence our assumption is wrong.
Therefore if S contains (n + 1) or more vectors, then S must be linearly dependent.
Theorem 3: Let V be a vector space which is spanned by a finite set of vectors β1, β2 , … , β m .
Then any linearly independent set of vectors in V is finite and contains no more than m vectors.
(Garhwal 2009; Kumaun 14)
Proof: Let S = { β1, β2 , … , β m}.
Since L (S ) = V , therefore V has a finite basis and dim V ≤ m. Hence every subset S ′ of
V which contains more than m vectors is linearly dependent. This proves the theorem.
Theorem 4: If V ( F ) is a finite dimensional vector space of dimension n, then any set of n
linearly independent vectors in V forms a basis of V.
Proof: Let {α1, α2 , … , α n} be a linearly independent subset of a finite dimensional
vector space V ( F ) of dimension n. If S is not a basis of V, then it can be extended to
form a basis of V. Thus we shall get a basis of V containing more than n vectors.
But every basis of V must contain exactly n vectors. Therefore our
assumption is wrong and S must be a basis of V.
Theorem 5: If a set S of n vectors of a finite dimensional vector space V ( F ) of dimension n
generates V ( F ), then S is a basis of V.
Proof: Let V ( F ) be a finite dimensional vector space of dimension n. Let
S = {α1, α2 , … , α n} be a subset of V such that L (S ) = V . If S is linearly independent,
then S will form a basis of V. If S is not linearly independent, then there will exist a
proper subset of S which will form a basis of V. Thus we shall get a basis of V
containing less than n elements. But every basis of V must contain exactly n elements.
Hence S cannot be linearly dependent and so S must be a basis of V.
Note: If V is a finite dimensional vector space of dimension n, then V cannot be generated by
fewer than n vectors.
Dimension of a Subspace:
Theorem 6: Each subspace W of a finite dimensional vector space V ( F ) of dimension n is a
finite dimensional space with dim W ≤ n.
Also V = W iff dim V = dim W . (Garhwal 2008; Kumaun 10, 12)
Proof: Let V ( F ) be a finite dimensional vector space of dim n. Let W be a subspace
of V. Any subset of W containing (n + 1) or more vectors is also a subset of V and any
(n + 1) vectors in V are linearly dependent. Therefore any linearly independent set of
vectors in W can contain at the most n vectors. Let
S = {α1, α2 , … , α m}
be a linearly independent subset of W with maximum number of elements. We claim
that S is a basis of W. The proof is as follows :
(i) S is a linearly independent subset of W.
(ii) L (S ) = W . Let α be any element of W.
Then the (m + 1) vectors α, α1, α2 , … , α m belonging to W are linearly dependent
because we have supposed that the largest independent subset of W contains m
vectors.
Now {α1, α2 , … , α m , α} is a linearly dependent set. Therefore there exists a vector
belonging to it which can be expressed as a linear combination of the preceding
vectors. Since α1, α2 , … , α m are linearly independent, therefore this vector cannot be
any of these m vectors. So it must be α itself. Thus α can be expressed as a linear
combination of α1, α2 , … , α m . Hence L (S ) = W .
∴ S is a basis of W.
∴ dim W = m and m ≤ n.
Now if V = W , then every basis of V is also a basis of W.
Hence dim V = dim W = n.
Conversely let dim W = dim V = n. Then to prove that W = V .
Let S be a basis of W. Then L (S ) = W and S contains n vectors. Since S is also a subset
of V and S contains n linearly independent vectors, therefore S will also be a basis of V.
Therefore L (S ) = V . Hence W = V . We thus conclude :
If W is a proper subspace of a finite-dimensional vector space V, then W is finite dimensional and
dim W < dim V. (Garhwal 2008)
Hence L (S ) = W .
Thus S is a finite basis of W and S0 ⊆ S.
Theorem 8: Let S = {α1, α2 , … , α n} be a basis of a finite dimensional vector space V ( F ) of
dimension n. Then every element α of V can be uniquely expressed as
α = a1α1 + a2 α2 + … + an α n where a1, a2 , … , an ∈ F .
Proof: Since S is a basis of V, therefore L (S ) = V . Therefore any vector α ∈V can be
expressed as
α = a1α1 + a2 α2 + … + an α n .
To show uniqueness let us suppose that
α = b1α1 + b2 α2 + … + bn α n .
Then we must show that a1 = b1, a2 = b2 , … , an = bn .
We have a1α1 + a2 α2 + … + an α n = b1α1 + b2 α2 + … + bn α n
⇒ (a1 − b1) α1 + (a2 − b2 ) α2 + … + (an − bn) α n = 0
⇒ a1 − b1 = 0 , a2 − b2 = 0 , … , an − bn = 0 ,
since α1, α2 , … , α n are linearly independent
⇒ a1 = b1, a2 = b2 , … , an = bn .
Hence the theorem.
Theorem 9: If W1, W2 are two subspaces of a finite dimensional vector space V ( F ), then
dim (W1 + W2 ) = dim W1 + dim W2 − dim (W1 ∩ W2 ).
(Kumaun 2004, 13; Garhwal 08, 13; Gorakhpur 15)
Proof: Let dim (W1 ∩ W2 ) = k and let the set
S = { γ1, γ 2 , γ 3 , … , γ k}
be a basis of W1 ∩ W2 . Then S ⊆ W1 and S ⊆ W2 .
Since S is linearly independent and S ⊆ W1, therefore S can be extended to form a
basis of W1. Let
{ γ1, γ 2 , … , γ k , α1, α2 , … , α m}
be a basis of W1. Then dim W1 = k + m. Similarly let
{ γ1, γ 2 , … , γ k , β1, β2 , … , β t}
be a basis of W2 . Then dim W2 = k + t.
∴ dim W1 + dim W2 − dim (W1 ∩ W2 ) = (m + k ) + (k + t) − k
= k + m + t.
∴ to prove the theorem we must show that
dim (W1 + W2 ) = k + m + t.
We claim that the set
S1 = { γ1, γ 2 , … , γ k , α1, α2 , … , α m , β1, β2 , … , β t}
is a basis of W1 + W2 .
First we show that S1 is linearly independent. Let
c1γ1 + c2 γ 2 + … + c k γ k + a1α1 + a2 α2 + … + am α m
+ b1 β1 + b2 β2 + … + bt β t = 0 …(1)
Example 3: Let V be the vector space of all 2 × 2 matrices over the field F. Prove that V has
dimension 4 by exhibiting a basis for V which has 4 elements.
(Garhwal 2006; Kumaun 11)
Solution: Let α = [ 1 0 ; 0 0 ], β = [ 0 1 ; 0 0 ], γ = [ 0 0 ; 1 0 ] and δ = [ 0 0 ; 0 1 ]
be four elements of V.
The subset S = {α, β, γ , δ } of V is linearly independent because
aα + bβ + c γ + d δ = 0
⇒ a [ 1 0 ; 0 0 ] + b [ 0 1 ; 0 0 ] + c [ 0 0 ; 1 0 ] + d [ 0 0 ; 0 1 ] = [ 0 0 ; 0 0 ]
⇒ [ a b ; c d ] = [ 0 0 ; 0 0 ]
⇒ a = 0, b = 0, c = 0, d = 0.
Also L (S) = V because if [ a b ; c d ] is any vector in V, then we can write
[ a b ; c d ] = aα + bβ + cγ + dδ.
Therefore S is a basis of V. Since the number of elements in S is 4, therefore
dim V = 4.
Example 4: Show that if S = {α , β, γ } is a basis of C 3 (C), then the set
S ′ = {α + β, β + γ , γ + α}
is also a basis of C 3 (C). (Kumaun 2011)
Solution: The vector space C 3 (C) is of dimension 3. Any subset of C 3 having three
linearly independent vectors will form a basis of C 3 . We have shown in one of the
previous examples that if S is a linearly independent subset of C 3 , then S ′ is also a
linearly independent subset of C 3 .
Therefore S ′ is also a basis of C 3 .
Example 5: Let V be the vector space of ordered pairs of complex numbers over the real field R
i. e., let V be the vector space C 2 (R). Show that the set
S = {(1, 0 ), (i, 0 ), (0 , 1), (0 , i)}
is a basis for V.
Solution: S is linearly independent. We have
a (1, 0 ) + b (i, 0 ) + c (0 , 1) + d (0 , i) = (0 , 0 ) where a, b, c , d ∈R
⇒ (a + ib, c + id ) = (0 , 0 )
⇒ a + ib = 0 , c + id = 0
⇒ a = 0, b = 0, c = 0, d = 0.
Therefore S is linearly independent.
Now we shall show that L (S ) = V . Let any ordered pair (a + ib, c + id ) ∈ V where
a, b, c , d ∈R. Then as shown above we can write
(a + ib, c + id ) = a (1, 0 ) + b (i, 0 ) + c (0 , 1) + d (0 , i).
Thus any vector in V is expressible as a linear combination of elements of S. Therefore
L (S ) = V and so S is a basis for V.
Example 6: Show that the vectors (1, 2, 1), (2, 1, 0 ), (1, − 1, 2) form a basis of R3 .
(Garhwal 2010; Kumaun 14)
Solution: We know that the set {(1, 0 , 0 ), (0 , 1, 0 ), (0 , 0 , 1)} forms a basis for R3 .
Therefore dim R3 = 3. If we show that the set S = {(1, 2, 1), (2, 1, 0 ), (1, − 1, 2)} is linearly
independent, then this set will also form a basis for R3 .
[See theorem 4 of article 3]
We have a1 (1, 2, 1) + a2 (2, 1, 0 ) + a3 (1, − 1, 2) = (0 , 0 , 0 )
⇒ (a1 + 2 a2 + a3 , 2 a1 + a2 − a3 , a1 + 2 a3 ) = (0 , 0 , 0 ).
∴ a1 + 2 a2 + a3 = 0 …(1)
2 a1 + a2 − a3 = 0 …(2)
a1 + 2 a3 = 0 . …(3)
Now we shall solve these equations to get the values of a1, a2 , a3 . Multiplying the
equation (2) by 2, we get
4 a1 + 2 a2 − 2 a3 = 0 . …(4)
Subtracting (4) from (1), we get
− 3 a1 + 3 a3 = 0
or − a1 + a3 = 0. …(5)
Adding (3) and (5), we get 3 a3 = 0 or a3 = 0 . Putting a3 = 0 in (3), we get a1 = 0 . Now
putting a3 = 0 and a1 = 0 in (1), we get a2 = 0 .
Thus solving the equations (1), (2) and (3), we get a1 = 0 , a2 = 0 , a3 = 0 . Therefore the
set S is linearly independent. Hence it forms a basis for R3 .
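Example 6 admits a quick numerical check (an illustrative sketch with numpy, not part of the book's method): three vectors in R3 form a basis exactly when the matrix having them as rows is non-singular.

```python
# Illustrative check of Example 6: the determinant of the matrix whose rows
# are the given vectors is non-zero, so the vectors form a basis of R^3.
import numpy as np

A = np.array([[1, 2, 1],
              [2, 1, 0],
              [1, -1, 2]], dtype=float)
print(np.linalg.det(A))   # -9.0, non-zero, so the set is a basis
```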
Example 7: Determine whether or not the following vectors form a basis of R3 :
(1, 1, 2), (1, 2, 5), (5, 3, 4). (Kumaun 2013)
Solution: We know that dim R3 = 3. If the given set of vectors is linearly
independent, it will form a basis of R3 otherwise not. We have
a1 (1, 1, 2) + a2 (1, 2, 5) + a3 (5, 3, 4) = (0 , 0 , 0 )
⇒ (a1 + a2 + 5 a3 , a1 + 2 a2 + 3 a3 , 2 a1 + 5 a2 + 4 a3 ) = (0 , 0 , 0 ).
∴ a1 + a2 + 5 a3 = 0 …(1)
  a1 + 2 a2 + 3 a3 = 0 …(2)
  2 a1 + 5 a2 + 4 a3 = 0. …(3)
Now we shall solve these equations to get the values of a1, a2 , a3 . Subtracting (2) from
(1), we get
− a2 + 2 a3 = 0. …(4)
Multiplying (1) by 2, we get
2 a1 + 2 a2 + 10 a3 = 0 . …(5)
Subtracting (5) from (3), we get
3 a2 − 6 a3 = 0
or a2 − 2 a3 = 0 . …(6)
We see that the equations (4) and (6) are the same and give a2 = 2 a3 . Putting
a2 = 2 a3 in (1), we get a1 = − 7 a3 . If we put a3 = 1, we get a2 = 2 and a1 = − 7. Thus
a1 = − 7, a2 = 2, a3 = 1 is a non-zero solution of the equations (1), (2) and (3). Hence
the given set is linearly dependent so it does not form a basis of R3 .
Example 8: Show that the vectors α1 = (1, 0 , − 1), α2 = (1, 2 , 1), α3 = (0 , − 3, 2) form a basis
for R3 . Express each of the standard basis vectors as a linear combination of α1, α2 , α3 .
(Gorakhpur 2013)
Solution: Let S = {α1, α2 , α3}.
First we shall show that the set S is linearly independent. Let a, b, c be scalars i. e., real
numbers such that
aα1 + bα2 + cα3 = 0
i. e., a (1, 0 , − 1) + b (1, 2 , 1) + c (0 , − 3, 2) = (0 , 0 , 0 )
i. e., (a + b + 0 c , 0 a + 2 b − 3 c , − a + b + 2 c ) = (0 , 0 , 0 )
i. e., a+ b=0 …(1)
2b − 3c = 0 …(2)
− a + b + 2 c = 0. …(3)
Adding both sides of the equations (1) and (3), we get
2b + 2c = 0 …(4)
Subtracting (2) from (4), we get
5 c = 0 or c = 0.
Putting c = 0 in (2), we get b = 0 and then putting b = 0 in (1), we get a = 0.
Thus a = 0 , b = 0 , c = 0 is the only solution of the equations (1), (2) and (3) and so
aα1 + bα2 + cα3 = 0
⇒ a = 0, b = 0, c = 0.
∴ The vectors α1, α2 , α3 are linearly independent.
We have dim R3 = 3. Therefore the vectors α1, α2 , α3 form a basis of R3 .
Now let γ = ( p, q, r) be any vector in R3 and let
γ = ( p, q, r) = xα1 + yα2 + zα3 , x, y, z ∈ R
= x (1, 0 , − 1) + y (1, 2 , 1) + z (0 , − 3, 2). …(5)
Then x+ y=p …(6)
2 y − 3z = q …(7)
− x + y + 2z = r …(8)
Adding both sides of equations (6) and (8), we get
2 y + 2 z = p + r. …(9)
Subtracting (7) from (9), we get
5z = p + r − q
or z = (1/5) p − (1/5) q + (1/5) r.
Then from (9), we get
y = − z + (1/2) p + (1/2) r
  = (3/10) p + (1/5) q + (3/10) r
and from (6), we get
x = p− y
  = p − (3/10) p − (1/5) q − (3/10) r
  = (7/10) p − (1/5) q − (3/10) r.
Thus, the relation (5) expresses the vector γ = ( p, q, r) as a linear combination of α1, α2
and α3 where x , y, z ∈R are as found above.
The standard basis vectors are
e1 = (1, 0 , 0 ), e2 = (0 , 1, 0 ) and e3 = (0 , 0 , 1).
If γ = e1, then p = 1, q = 0 , r = 0 and so
x = 7/10 , y = 3/10 and z = 1/5.
∴ e1 = (7/10) α1 + (3/10) α2 + (1/5) α3 .
If γ = e2 , then p = 0 , q = 1, r = 0 and so
x = − 1/5 , y = 1/5 and z = − 1/5.
∴ e2 = − (1/5) α1 + (1/5) α2 − (1/5) α3 .
Finally if γ = e3 , then p = 0 , q = 0 , r = 1 and so
x = − 3/10 , y = 3/10 and z = 1/5.
∴ e3 = − (3/10) α1 + (3/10) α2 + (1/5) α3 .
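The coordinates found in Example 8 can be confirmed numerically (a sketch using numpy, added for checking and not part of the original text): the coordinates of a vector γ relative to the basis {α1, α2, α3} solve the linear system whose coefficient matrix has the αi as columns.

```python
# Illustrative check of Example 8: solve  x*alpha1 + y*alpha2 + z*alpha3 = e_i
# for each standard basis vector e_i.
import numpy as np

B = np.array([[1, 0, -1],
              [1, 2, 1],
              [0, -3, 2]], dtype=float).T   # columns are alpha1, alpha2, alpha3

for e in np.eye(3):                          # e1, e2, e3 in turn
    print(np.linalg.solve(B, e))
# prints (0.7, 0.3, 0.2), (-0.2, 0.2, -0.2), (-0.3, 0.3, 0.2)
```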
Example 10: Show that the set S = {1, x, x2 , … , x n} of n + 1 polynomials in x is a basis of the
vector space Pn (R), of all polynomials in x (of degree at most n) over the field of real numbers.
Solution: Pn (R) is the vector space of all polynomials in x (of degree at most n) over
the field R of real numbers.
S = {1, x, x2 , … , x n} is a subset of Pn consisting of n + 1 polynomials. We have to prove
that S is a basis of the vector space Pn (R).
First we show that the vectors in the set S are linearly independent over the field R.
The zero vector of the vector space Pn (R) is the zero polynomial. Let
a0 , a1, a2 , … , an ∈R be such that
a0 (1) + a1 x + a2 x2 + … + an x n = 0 i. e., the zero polynomial.
Now by the definition of the equality of two polynomials, we have
a0 + a1 x + a2 x2 + … + an x n = 0
⇒ a0 = 0 , a1 = 0 , a2 = 0 , … , an = 0 .
∴ the vectors 1, x, x2 , … , x n of the vector space Pn (R) are linearly independent.
Now we shall show that the set S generates the whole vector space Pn (R).
Let α = a0 + a1 x + a2 x2 + … + an x n be any arbitrary member of Pn , where
a0 , a1, … , an ∈R. Then α is a linear combination of the polynomials 1, x, x2 , … , x n over
the field R. Therefore S generates Pn (R).
Since S is a linearly independent subset of Pn (R) and it also generates Pn (R), therefore
it is a basis of Pn (R).
Example 11: Show that a finite subset W of a vector space V ( F ) is linearly dependent if and
only if some element of W can be expressed as a linear combination of the others.
Solution: Let W = {α1, α2 , … , α n} be a finite subset of a vector space V ( F ).
First suppose that W is linearly dependent. Then to show that some vector of W can
be expressed as a linear combination of the others.
Since the vectors α1, α2 , … , α n are linearly dependent, therefore there exist scalars
a1, a2 , … , an not all zero such that
a1α1 + a2 α2 + … + an α n = 0. …(1)
Suppose ar ≠ 0 , 1 ≤ r ≤ n.
Then from (1), we have
ar α r = − a1 α1 − a2 α2 − … − ar − 1 α r − 1 − ar + 1 α r + 1 − … − an α n
or ar −1 (ar α r ) = ar −1 (− a1α1 − a2 α2 − … − ar − 1 α r − 1 − ar + 1 α r + 1 − … − an α n)
or α r = (− ar −1 a1) α1 + (− ar −1 a2 ) α2 + … + (− ar −1 ar − 1) α r − 1
         + (− ar −1 ar + 1) α r + 1 + … + (− ar −1 an) α n .
Thus α r is a linear combination of the other vectors of the set W.
Conversely suppose that some vector of W is a linear combination of the others. Then
to prove that W is linearly dependent. Let
α r = c1α1 + … + c r − 1 α r − 1 + c r + 1 α r + 1 + … + c n α n .
Then c1α1 + … + c r − 1 α r − 1 + (− 1) α r + c r + 1 α r + 1 + … + c n α n = 0,
in which the coefficient of α r is − 1 ≠ 0 . Hence the set W is linearly dependent.
Example 12: If n vectors span a vector space V containing r linearly independent vectors, then
show that n ≥ r. (Kumaun 2013)
Solution: Suppose a subset S of V containing n vectors spans V. Then there exists a
linearly independent subset T of S which also spans V. This subset T of S will form a
basis of V. Suppose the number of vectors in T is m. Then dim V = m ≤ n.
Since dim V = m, therefore any subset of V containing more than m vectors will be
linearly dependent.
Hence if S1 is a linearly independent subset of V and S1 contains r vectors, we must
have
r≤m ⇒ r ≤ n. [ ∵ m ≤ n]
Example 13: Let W1 and W2 be distinct subspaces of V such that dim W1 = 4,
dim W2 = 4 and dim V = 6. Find the possible dimensions of W1 ∩ W2 .
Solution: Since W1 and W2 are distinct, so W1 + W2 properly contains W1 and W2 i. e.,
dim (W1 + W2 ) > 4. Also dim V = 6 ⇒ dim (W1 + W2 ) cannot be greater than 6. Thus
we have two possibilities i. e., dimension of (W1 + W2 ) must be either 5 or 6.
Using theorem dim (W1 + W2 ) = dim W1 + dim W2 − dim (W1 ∩ W2 ), we have
(i) 5 = 4 + 4 − dim (W1 ∩ W2 ) or dim W1 ∩ W2 = 3
(ii) 6 = 4 + 4 − dim (W1 ∩ W2 ) or dim W1 ∩ W2 = 2.
Hence the possible dimensions of W1 ∩ W2 are 2, 3.
Comprehensive Exercise 1
1. Give a basis for each of the following vector spaces over the indicated fields.
(i) Q (√ 2) over Q (ii) C over R (iii) Q (21 /4 ) over Q
where Q, R, C are fields of rational, real and complex numbers respectively.
2. Find the dimension of Q (√ 2, √ 3) over Q.
3. Find bases for the subspaces
W1 = {(a, b, 0 ): a, b ∈ R }, W2 = {(0 , b, c ): b, c ∈ R } of R 3 .
Find the dimensions of W1, W2 and W1 ∩ W2 .
4. Show that the real field R is a vector space of infinite dimension over the rational
field Q.
5. Let V be the vector space of all 2 × 2 symmetric matrices over F. Show that
dimV = 3.
6. Prove that the space of all m × n matrices over the field F has dimension mn, by
exhibiting a basis for this space.
7. If {α1, α2 , α3 } is a basis of V3 (R), show that {α1 + α2 , α2 + α3 , α3 + α1} is also a
basis of V3 (R).
8. If W1 and W2 are finite-dimensional subspaces with the same dimension, and if
W1 ⊆ W2 , then show that W1 = W2 .
9. In the vector space R 3 , let α = (1, 2, 1), β = (3, 1, 5), γ = (3, − 4, 7). Show that there
exists more than one basis for the subspace spanned by the set S = {α, β, γ }.
10. For the 3-dimensional space R 3 over the field of real numbers R, determine if the
set {(2, − 1, 0 ), (3, 5, 1), (1, 1, 2)} is a basis.
11. Show that the set {(1, i, 0 ), (2 i, 1, 1), (0 , 1 + i, 1 − i)} is a basis for V3 (C).
(Kumaun 2011)
12. Find three vectors in R 3 which are linearly dependent, and are such that any two
of them are linearly independent.
13. Select a basis, if any, of R 3 (R) from the set {α1, α2 , α3 , α4 }, where
α1 = (1, − 3, 2), α2 = (2, 4, 1), α3 = (3, 1, 3), α4 = (1, 1, 1).
14. If W is a subspace of a finite dimensional vector space V, prove that any basis of
W can be extended to form a basis of V.
15. Prove that any finite set S of vectors, not all the zero vectors, contains a linearly
independent subset T which spans the same space as S.
16. Let V be a vector space. Let W be a subspace of V generated by the vectors
α1, …, α s . Prove that W is spanned by a linearly independent subset of α1, …, α s .
17. If a vector space V is spanned by a finite set of m vectors, then show that any
linearly independent set of vectors in V has at most m elements.
18. In a vector space V over the field F, let B = {α1, α2 , …, α n} span V. Prove that the
following two statements are equivalent :
(i) B is linearly independent
(ii) If α ∈V, then the expression α = a1α1 + a2 α2 + … + an α n with ai ∈ F is unique.
20. Let W be the subspace of R 4 generated by the vectors (1, − 2, 5, − 3), (2, 3, 1, − 4)
and (3, 8, − 3, − 5). Find a basis and the dimension of W. Extend the basis of W to a
basis of the whole space R 4 .
21. Let E be a subfield of a field F and F a subfield of a field K i. e., E ⊂ F ⊂ K. Let K be
of dimension n over F and F of dimension m over E. Show that K is of dimension
mn over E.
22. Let W1 and W2 be the subspaces of R 4 :
W1 = {(a, b, c , d) : b + c + d = 0 }, W2 = {(a, b, c , d): a + b = 0 , c = 2 d}.
Find the dimension of W1 ∩ W2 .
23. Let W1 and W2 be subspaces of V and that dim W1 = 4, dim W2 = 5 and dim
V = 7. Find the possible dimensions of W1 ∩ W2 .
24. Let W1 and W2 be the subspaces of R4 generated by the sets
B1 = {(1, 1, 0, − 1), (1, 2, 3, 0 ), (2, 3, 3, − 1)}
and B2 = {(1, 2, 2, − 2),(2, 3, 2, − 3),(1, 3, 4, − 3)} respectively.
Find (i) dim (W1 + W2 ) (ii) dim (W1 ∩ W2 ).
A nswers 1
1. (i) {1, √ 2} (ii) {1, i} (iii) {1, 21 /4 , 21 /2 , 23 /4 }
2. 4
3. Basis for W1 = {(1, 0 , 0 ), (0 , 1, 0 )}; dim W1 = 2
Basis for W2 = {(0 , 1, 0 ), (0 , 0 , 1)}; dim W2 = 2 ; dim (W1 ∩ W2 ) = 1
10. Yes 12. (1, 0 , 0 ), (0 , 1, 0 ), (1, 1, 0 )
13. {α1, α2 , α4 } 19. {α1, α2 }
20. A basis of W = {(1, − 2, 5, − 3), (0 , 7, − 9, 2)} ; dim W = 2
A basis of R 4 is {(1, − 2, 5, − 3), (0 , 7, − 9, 2), (0 , 0 , 1, 0 ), (0 , 0 , 0 , 1)}
22. 1 23. 2, 3 or 4
24. (i) 3 (ii) 1
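The printed answers can also be checked numerically; the sketch below (using numpy, an aid added for verification and not part of the original text) confirms the answer to exercise 24 by computing subspace dimensions as matrix ranks and applying Theorem 9.

```python
# Illustrative check of exercise 24: dim W = rank of the matrix whose rows
# generate W; dim(W1 ∩ W2) then follows from Theorem 9.
import numpy as np

B1 = np.array([[1, 1, 0, -1], [1, 2, 3, 0], [2, 3, 3, -1]], dtype=float)
B2 = np.array([[1, 2, 2, -2], [2, 3, 2, -3], [1, 3, 4, -3]], dtype=float)

d1 = np.linalg.matrix_rank(B1)                    # dim W1 = 2
d2 = np.linalg.matrix_rank(B2)                    # dim W2 = 2
ds = np.linalg.matrix_rank(np.vstack((B1, B2)))   # dim (W1 + W2) = 3
print(ds, d1 + d2 - ds)                           # 3 and 1, as printed above
```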
True or False
Write ‘T’ for true and ‘F’ for false statement.
1. If a subset of an n-dimensional vector space V consists of n non-zero vectors, then
it will be a basis of V.
2. If A and B are subspaces of a vector space, then dim A < dim B ⇒ A ⊂ B.
3. If A and B are subspaces of a vector space, then
A ≠ B ⇒ dim A ≠ dim B.
4. If M and N are finite dimensional subspaces with the same dimension, and if
M ⊂ N , then M = N .
5. In an n-dimensional space any subset consisting of n linearly independent vectors
will form a basis.
A nswers
True or False
1. F 2. F 3. F 4. T
5. T
4
H omomorphism of
V ector S paces
Theorem: Let f be a homomorphism (linear transformation) of a vector space U ( F ) into a
vector space V ( F ). If 0 is the zero vector of U and 0′ is the zero vector of V, then
(i) f (0) = 0 ′ (ii) f (− α) = − f (α) V α ∈ U.
Proof: (i) Let α ∈U. Then f (α) ∈ V . Since 0′ is the zero vector of V, therefore
f (α) + 0 ′ = f (α) = f (α + 0) = f (α) + f (0).
Now V is an abelian group with respect to addition of vectors.
∴ f (α) + 0 ′ = f (α) + f (0)
⇒ 0 ′ = f (0) by left cancellation law.
(ii) If α ∈U, then − α ∈ U. Also we have
0 ′ = f (0) = f [α + (− α)] = f (α) + f (− α).
Now f (α) + f (− α) = 0 ′ ⇒ f (− α) = the additive inverse of f (α)
⇒ f (− α) = − f (α).
2 Quotient Space
(Garhwal 2007, 10, 12; Gorakhpur 10)
Right and Left Cosets.
Let W be any subspace of a vector space V ( F ). Let α be any element of V. Then the set
W + α = {γ + α : γ ∈ W }
is called a right coset of W in V generated by α. Similarly the set
α + W = {α + γ : γ ∈ W }
is called a left coset of W in V generated by α.
Obviously W + α and α + W are both subsets of V. Since addition in V is
commutative, therefore we have W + α = α + W . Hence we shall call W + α as simply
a coset of W in V generated by α.
The following results about cosets are to be remembered :
(i) We have 0 ∈V and W + 0 = W . Therefore W itself is a coset of W in V.
(ii) α ∈ W ⇒ W + α = W.
Proof: First we shall prove that W + α ⊆ W .
Let γ + α be any arbitrary element of W + α. Then γ ∈W.
Now W is a subspace of V. Therefore
γ ∈W, α ∈W
⇒ γ + α ∈W.
Thus every element of W + α is also an element of W. Hence
W + α ⊆ W.
Now we shall prove that W ⊆ W + α.
Let β ∈W. Since W is a subspace, therefore
α ∈ W ⇒ − α ∈ W.
Consequently β ∈ W , − α ∈ W ⇒ β − α ∈ W . Now we can write
β = ( β − α) + α ∈ W + α since β − α ∈W.
Thus β ∈ W ⇒ β ∈ W + α. Therefore W ⊆ W + α.
Hence W = W + α.
(iii) If W + α and W + β are two cosets of W in V, then
W + α = W + β ⇔ α − β ∈W.
Proof: Since 0 ∈W, therefore 0 + α ∈ W + α. Thus
α ∈ W + α.
Now W +α =W +β
⇒ α ∈W + β
⇒ α − β ∈ W + ( β − β)
⇒ α − β ∈W + 0
⇒ α − β ∈W.
Conversely,
α − β ∈W
⇒ W + (α − β) = W
V-68
⇒ W + [(α − β) + β] = W + β
⇒ W + α = W + β.
Let V / W denote the set of all cosets of W in V i. e., let
V / W = {W + α : α ∈ V }.
We have just seen that if α − β ∈W, then W + α = W + β. Thus a coset of W in V can
have more than one representation.
Now if V ( F ) is a vector space, then we shall give a vector space structure to the set
V / W over the same field F. For this we shall have to define addition in V / W i. e.,
addition of cosets of W in V and multiplication of a coset by an element of F i. e., scalar
multiplication.
Theorem: If W is any subspace of a vector space V ( F ), then the set V / W of all cosets
W + α where α is any arbitrary element of V, is a vector space over F for the addition and scalar
multiplication compositions defined as follows :
(W + α) + (W + β) = W + (α + β) V α, β ∈ V
and a (W + α) = W + aα ; a ∈ F , α ∈ V .
Proof: We have α, β ∈ V ⇒ α + β ∈ V .
Also a ∈ F , α ∈ V ⇒ aα ∈ V .
Therefore W + (α + β) ∈ V / W and also W + aα ∈ V / W . Thus V / W is closed with
respect to addition of cosets and scalar multiplication as defined above. Now first of
all we shall show that these two compositions are well defined i. e., are independent of
the particular representation chosen to denote a coset.
Let W + α = W + α ′ , α, α ′ ∈ V
and W + β = W + β ′ , β, β ′ ∈ V .
We have W + α = W + α′ ⇒ α − α′ ∈W
and W + β = W + β′ ⇒ β − β′ ∈W.
Now W is a subspace, therefore
α − α ′ ∈W, β − β′ ∈W
⇒ (α − α ′ ) + ( β − β ′ ) ∈W
⇒ (α + β) − (α ′ + β ′ ) ∈W
⇒ W + (α + β) = W + (α ′ + β ′ )
⇒ (W + α) + (W + β) = (W + α ′ ) + (W + β ′ ).
Therefore addition in V /W is well defined.
Again a ∈ F, α − α ′ ∈ W
⇒ a (α − α ′ ) ∈ W
⇒ aα − aα ′ ∈ W ⇒ W + aα = W + aα ′ .
∴ scalar multiplication in V /W is also well defined.
Commutativity of addition. Let W + α , W + β be any two elements of V / W .
Then
(W + α) + (W + β) = W + (α + β)
= W + ( β + α)
= (W + β) + (W + α).
5 Disjoint Subspaces
Definition: Two subspaces W1 and W2 of the vector space V ( F ) are said to be disjoint if
their intersection is the zero subspace i. e., if W1 ∩ W2 = { 0 }.
Theorem: The necessary and sufficient conditions for a vector space V ( F ) to be a direct sum
of its two subspaces W1 and W2 are that
(i) V = W1 + W2
(ii) W1 ∩ W2 = { 0 } i. e., W1 and W2 are disjoint. (Garhwal 2010, 14; Gorakhpur 11)
In order to prove that dim V = dim W1 + dim W2 , we should therefore prove that dim
V = m + l. We claim that the set
S = S1 ∪ S2 = {α1, α2 , … , α m , β1, β2 , … , β l }
is a basis of V.
First we show that the set S is linearly independent. Let
a1α1 + a2 α2 + … + am α m + b1 β1 + b2 β2 + … + bl β l = 0.
⇒ a1α1 + a2 α2 + … + am α m = − (b1 β1 + b2 β2 + … + bl β l).
Now a1α1 + a2 α2 + … + am α m ∈ W1
and − (b1 β1 + b2 β2 + … + bl β l) ∈ W2 .
Therefore a1α1 + a2 α2 + … + am α m ∈ W1 ∩ W2
and − (b1 β1 + b2 β2 + … + bl β l) ∈ W1 ∩ W2 .
But V is the direct sum of W1 and W2 . Therefore 0 is the only vector belonging to
W1 ∩ W2 . Then we have
a1α1 + a2 α2 + … + am α m = 0, b1 β1 + b2 β2 + … + bl β l = 0.
Since both the sets {α1, α2 , … , α m} and { β1, β2 , … , β l} are linearly independent,
therefore we have
a1 = 0 , a2 = 0 , … , am = 0 , b1 = 0 , b2 = 0 , … , bl = 0 .
Therefore S is linearly independent.
Now we shall show that L (S ) = V . Let α be any element of V. Then
α = an element of W1 + an element of W2 .
= a linear combination of S1 + a linear combination of S2
= a linear combination of elements of S.
∴ L (S ) = V
∴ S is a basis of V. Therefore dim V = m + l.
Hence the theorem.
A sort of converse of this theorem is true. It has been proved in the following theorem.
Theorem 2: Let V be a finite dimensional vector space and let W1, W2 be subspaces of V such
that V = W1 + W2 and dim V = dim W1 + dim W2 .
Then V = W1 ⊕ W2 .
Proof: Let dim W1 = l and dim W2 = m. Then
dim V = l + m.
Let S1 = {α1, α2 , … , α l }
be a basis of W1 and
S2 = { β1, β2 , … , β m}
be a basis of W2 . We shall show that S1 ∪ S2 is a basis of V.
Let α ∈V. Since V = W1 + W2 , therefore we can write
α = γ + δ where γ ∈ W1, δ ∈ W2 .
Now γ ∈W1 can be expressed as a linear combination of the elements of S1 and δ ∈W2
can be expressed as a linear combination of the elements of S2 . Therefore α ∈V can be
expressed as a linear combination of the elements of S1 ∪ S2 . Therefore
V = L (S1 ∪ S2 ). Since dim V = l + m and L (S1 ∪ S2 ) = V , therefore the number of
distinct elements in S1 ∪ S2 cannot be less than l + m. Thus S1 ∪ S2 has l + m distinct
elements and therefore S1 ∪ S2 is a basis of V. Therefore the set
{α1, α2 , … , α l , β1, β2 , … , β m}
is linearly independent.
Now we shall show that
W1 ∩ W2 = { 0 }.
Let α ∈ W1 ∩ W2 .
Then α ∈ W1, α ∈ W2 .
Therefore α = a1α1 + a2 α2 + … + al α l
and α = b1 β1 + b2 β2 + … + bm β m
for some a’s and b’s ∈ F.
∴ a1α1 + a2 α2 + … + al α l = b1 β1 + b2 β2 + … + bm β m
⇒ a1α1 + a2 α2 + … + al α l − b1 β1 − b2 β2 − … − bm β m = 0
⇒ a1 = 0 , a2 = 0 , … , al = 0 , b1 = 0 , b2 = 0 , … , bm = 0
⇒ α = 0.
∴ W1 ∩ W2 = { 0 }.
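A concrete instance of these two theorems, anticipated in problem 1 of the exercise below, is the splitting of every square matrix into a symmetric and an antisymmetric part. The following sketch (illustrative only, not from the book) exhibits this direct-sum decomposition numerically:

```python
# Illustrative sketch: the space of n x n real matrices is the direct sum of
# the symmetric matrices (W1) and the antisymmetric matrices (W2), and the
# decomposition A = S + K is unique.
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

S = (A + A.T) / 2          # symmetric part, lies in W1
K = (A - A.T) / 2          # antisymmetric part, lies in W2
print(np.allclose(A, S + K))                        # True: V = W1 + W2
print(np.allclose(S, S.T), np.allclose(K, -K.T))    # True True
```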
7 Complementary Subspaces
Definition: Let V ( F ) be a vector space and W1, W2 be two subspaces of V. Then the
subspace W2 is called the complement of W1 in V if V is the direct sum of W1 and W2 .
Now a1i α1i + … + ani i α ni i ∈ Wi . Therefore from (1) which is an expression for 0 ∈V as
a sum of vectors one in each Wi , we get
a1i α1i + … + ani i α ni i = 0, i = 1, … , k
⇒ a1i = … = ani i = 0 ,
since {α1i, … , α ni i} is linearly independent being a basis for Wi .
Therefore B = B1 ∪ B2 ∪ … ∪ Bk is linearly independent. Therefore B is a basis of V.
(ii) ⇒ (i). It is given that B = B1 ∪ B2 ∪ … ∪ Bk is a basis of V.
Let α ∈V. Then α can be expressed as a linear combination of the elements of B. Grouping
together the terms belonging to each Wi , we can write
α = α1 + α2 + … + α k …(2)
where α i = a1i α1i + … + ani i α ni i ∈ Wi .
Thus each vector in V can be expressed as a sum of vectors one in each Wi .
Now V will be the direct sum of W1, … , Wk if the expression (2) for α is unique. Let
α = β1 + β2 + … + β k …(3)
where β i = b1i α1i + … + bni i α ni i ∈ Wi .
From (2) and (3), we get
α1 + … + α k = β1 + …+ β k
⇒ (α1 − β1) + … + (α i − β i) + … + (α k − β k ) = 0
⇒ Σ (α i − β i) = 0 , the sum running over i = 1, … , k
⇒ Σ [(a1i − b1i) α1i + … + (ani i − bni i) α ni i ] = 0
⇒ a1i = b1i , … , ani i = bni i for each i, since B is linearly independent
⇒ α i = β i , i = 1, … , k.
Thus the expression (2) for α is unique, and hence V = W1 ⊕ W2 ⊕ … ⊕ Wk .
Note: While proving this theorem we have proved that if a finite dimensional vector
space V ( F ) is the direct sum of its subspaces W1, … , Wk , then
dim V = dim W1 + … + dim Wk .
Also we have ( x, y, z ) = ( x, 0 , (1/4) z ) + (0 , y, (3/4) z ) where ( x, 0 , (1/4) z ) ∈ W1
and (0 , y, (3/4) z ) ∈ W2 .
It follows that the representation of ( x, y, z ) as a sum of a vector of W1 and a vector of W2
is not unique.
Hence R3 ≠ W1 ⊕ W2 .
Example 5: Construct three subspaces W1, W2 , W3 of a vector space V so that
V = W1 ⊕ W2 = W1 ⊕ W3 but W2 ≠ W3 .
Comprehensive Exercise 2
1. Let V be the vector space of square matrices of order n over the field R. Let W1 and
W2 be the subspaces of symmetric and antisymmetric matrices respectively.
Show that V = W1 ⊕ W2 .
2. Let V be the vector space of all functions from the real field R into R. Let U be the
subspace of even functions and W the subspace of odd functions. Show that
V = U ⊕ W.
3. Let W1, W2 and W3 be the following subspaces of R 3 :
W1 = {(a, b, c ) : a + b + c = 0 }, W2 = {(a, b, c ) : a = c },
W3 = {(0 , 0 , c ) : c ∈ R }.
Show that (i) R 3 = W1 + W2 ; (ii) R 3 = W1 + W3 ; (iii) R 3 = W2 + W3 .
When is the sum direct ?
4. Let W be a subspace of a vector space V over a field F. Show that α ∈ β + W iff
α − β ∈W.
5. Let W1 and W2 be two subspaces of a finite dimensional vector space V. If dim
V = dim W1 + dim W2 and W1 ∩ W2 = {0}, prove that V = W1 ⊕ W2 .
A nswers 2
3. The sum is direct in (ii) and (iii)
10 Co-ordinates
Let V ( F ) be a finite dimensional vector space. Let
B = {α1, α2 , … , α n}
be an ordered basis for V. By an ordered basis we mean that the vectors of B have been
enumerated in some well-defined way i. e., the vectors occupying the first, second,
… , nth places in the set B are fixed.
Let α ∈V. Then there exists a unique n-tuple ( x1, x2 , … , xn) of scalars such that
α = x1α1 + x2 α2 + … + xn α n = Σ i xi α i .
The n-tuple ( x1, x2 , … , xn) is called the n-tuple of co-ordinates of α relative to the
ordered basis B. The scalar xi is called the i th coordinate of α relative to the
ordered basis B. The n × 1 matrix
X = [ x1 ]
    [ x2 ]
    [ ... ]
    [ xn ]
is called the coordinate matrix of α relative to the ordered basis B. We shall use the
symbol [α]B for the coordinate matrix of the vector α relative to the ordered basis B.
(Garhwal 2008)
It should be noted that for the same basis set B, the coordinates of the vector α are
unique only with respect to a particular ordering of B. The basis set B can be ordered in
several ways. The coordinates of α may change with a change in the ordering of B.
Note: If V ( F ) is a vector space of dimension n, then coordinate vector of 0 ∈V
relative to any basis of V is always the zero n-tuple (0 , 0 , … , 0 ).
11 Change of Basis
Suppose V is an n-dimensional vector space over the field F. Let B and B′ be two
ordered bases for V. If α is any vector in V, then we are now interested to know what is
the relation between its coordinates with respect to B and its coordinates with respect
to B′ . (Garhwal 2010)
Theorem 1: Let V ( F ) be an n-dimensional vector space and let B and B ′ be two ordered
bases for V. Then there is a unique, necessarily invertible, n × n matrix A with entries in F such
that
(1) [α]B = A [α]B ′ (Garhwal 2008)
(2) [α]B ′ = A−1 [α]B for every vector α in V.
Proof: Let B = {α1, α2 , … , α n} and B′ = { β1, β2 , … , β n}.
Then there exists a unique linear transformation T from V into V such that
T (α j) = β j , j = 1, 2, … , n. …(1)
Since T maps a basis B onto a basis B ′ , therefore T is necessarily invertible. The
matrix of T relative to B i. e., [T ]B will be a unique n × n matrix with elements in F.
Also this matrix will be invertible because T is invertible.
Let [T ]B = A = [aij]n × n . Then
T (α j) = Σ i aij α i , j = 1, 2, … , n. …(2)
Let x1, x2 , … , xn be the coordinates of α with respect to B and y1, y2 , … , yn be the
coordinates of α with respect to B′. Then
α = y1 β1 + y2 β2 + … + yn β n = Σ j y j β j
  = Σ j y j T (α j) [From (1)]
  = Σ j y j ( Σ i aij α i ) [From (2)]
  = Σ i ( Σ j aij y j ) α i , the sums running over i, j = 1, 2, … , n.
Also α = Σ i xi α i .
∴ xi = Σ j aij y j , i = 1, 2, … , n.
Now suppose α is any vector in V. If [α]B is the coordinate matrix of α relative to the
basis B and [α]B ′ its coordinate matrix relative to the basis B ′ then
[α]B = A [α]B ′
and [α]B ′ = A−1 [α]B .
Theorem 2: Let B = {α1, … , α n} and B′ = { β1, … , β n} be two ordered bases for an
n-dimensional vector space V ( F ). If ( x1, … , xn) is an ordered set of n scalars, let
α = Σ i xi α i and β = Σ i xi β i .
Example 7: Find the coordinates of the vector (2, 1, − 6) of R3 relative to the basis
α1 = (1, 1, 2), α2 = (3, − 1, 0 ), α3 = (2, 0 , − 1).
Solution: To find the coordinates of the vector (2 , 1, − 6) relative to the ordered basis
{α1, α2 , α3}, we shall express the vector (2 , 1, − 6) as a linear combination of the vectors
α1, α2 , α3 . Let p, q, r be scalars in R such that
(2 , 1, − 6) = pα1 + qα2 + rα3
⇒ (2 , 1, − 6) = p (1, 1, 2) + q (3, − 1, 0 ) + r (2 , 0 , − 1)
⇒ (2 , 1, − 6) = ( p + 3 q + 2 r, p − q, 2 p − r).
∴ p + 3 q + 2 r = 2 , p − q = 1, …(1)
and 2 p − r = − 6.
Solving the equations (1), we have
p = − 7 / 8, q = − 15 / 8
and r = 17 / 4.
Hence the required coordinates of the vector (2 , 1, − 6) relative to the ordered basis
{α1, α2 , α3} are ( p, q, r) i. e., (− 7 / 8, − 15 / 8, 17 / 4).
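The arithmetic of Example 7 is easy to confirm with a linear solver (an illustrative numpy sketch, not part of the original solution); the coefficient matrix has the basis vectors as its columns.

```python
# Illustrative check of Example 7: solve  p*alpha1 + q*alpha2 + r*alpha3 = v.
import numpy as np

M = np.array([[1, 3, 2],
              [1, -1, 0],
              [2, 0, -1]], dtype=float)   # columns: alpha1, alpha2, alpha3
v = np.array([2, 1, -6], dtype=float)
print(np.linalg.solve(M, v))   # [-0.875 -1.875  4.25], i.e. (-7/8, -15/8, 17/4)
```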
Comprehensive Exercise 3
1. Show that the set S = {(1, 0 , 0 ), (1, 1, 0 ), (1, 1, 1)} is a basis of C3 (C) where C is the
field of complex numbers. Hence find the coordinates of the vector
(3 + 4 i, 6 i, 3 + 7 i) in C3 with respect to the above basis.
2. Find the co-ordinate vector of (3, 5, − 2) relative to the basis { e1, e2 , e3 } where
e1 = (1, 1, 1), e2 = (0 , 2, 3), e3 = (0 , 2, − 1).
3. Let B = {α1, α2 , α3 } be an ordered basis for R 3 , where
α1 = (1, 0 , − 1), α2 = (1, 1, 1), α3 = (1, 0 , 0 ).
Obtain the coordinates of the vector (a, b, c ) in the ordered basis B.
4. Let V be the vector space of all polynomial functions of degree less than or equal
to two from the field of real numbers R into itself. For a fixed t ∈R, let
g1 ( x) = 1, g2 ( x) = x + t, g3 ( x) = ( x + t)2 .
Prove that { g1, g2 , g3 } is a basis for V and obtain the coordinates of
c0 + c1 x + c2 x2 in this ordered basis.
A nswers 3
1. (3 − 2 i, − 3 − i, 3 + 7 i) 2. (3, − 1, 2)
3. (b − c , b, a − 2 b + c ) 4. (c0 − c1 t + c2 t2 , c1 − 2 c2 t, c2 )
Fill in the Blank(s)
1. Let V be the vector space of all functions from the real field R into R. Let W1 be the
subspace of even functions and W2 be the subspace of odd functions. Then
V = …… .
2. If W is an m-dimensional subspace of an n-dimensional vector space V, then the
dimension of the quotient space V / W is …… . (Garhwal 2006)
3. Two subspaces W1 and W2 of the vector space V ( F ) are said to be disjoint if
W1 ∩ W2 = …… .
4. If a finite dimensional vector space V ( F ) is a direct sum of two subspaces W1
and W2 , then dim V = …… .
True or False
Write ‘T’ for true and ‘F’ for false statement.
1. If a finite dimensional vector space V ( F ) is a direct sum of two subspaces W1 and
W2 , then dim V = dim W1 + dim W2 .
2. The dimension of V / W , where V = R 3 , W = { (a, 0 , 0 ) : a ∈ R }, is 3.
3. If W1 = {( x, 0 , z ) : x, z ∈ R }, W2 = {(0 , y, z ) : y, z ∈ R } be two subspaces
of R 3 then R 3 ≠ W1 + W2 .
A nswers
True or False
1. T 2. F 3. F
5
Matrices
Illustration: The matrix
A = [ 2  −2  −4 ]
    [ −1   3   4 ]
    [ 1  −2  −3 ]
is idempotent, i. e., A 2 = A . The matrix
A = [ ab     b2   ]
    [ − a2   − ab ]
is nilpotent of index 2, and the matrix
A = [ 1  −3  −4 ]
    [ −1   3   4 ]
    [ 1  −3  −4 ]
is also nilpotent of index 2.
Now let A = diag [d1, d2 , … , dn] be a diagonal matrix of order n satisfying A 2 = A .
We have A 2 = diag [d1, d2 , … , dn] ⋅ diag [d1, d2 , … , dn] = diag [d1², d2², … , dn²].
∴ A 2 = A gives
d1² = d1, d2² = d2 , … , dn² = dn
i. e., d1 = 0 or 1 ; d2 = 0 or 1 ; … ; dn = 0 or 1.
Hence diag [d1, d2 , … , dn], where each of d1, d2 , … , dn is either 0 or 1, is the required
idempotent diagonal matrix of order n.
Let B be an idempotent matrix and let A = I − B. Then
A 2 = (I − B) (I − B) = I2 − IB − BI + B 2
= I − B − B + B2 [∵ IB = B = BI]
=I− B− B+ B [∵ B2 = B]
=I− B [ ∵ − B + B = O]
= A.
Example: Show that the matrix
A = [ 1   1   3 ]
    [ 5   2   6 ]
    [ −2  −1  −3 ]
is nilpotent and find its index.
Solution: We have
A 2 = AA = [ 1   1   3 ] [ 1   1   3 ]   [ 0   0   0 ]
           [ 5   2   6 ] [ 5   2   6 ] = [ 3   3   9 ] .
           [ −2  −1  −3 ] [ −2  −1  −3 ]   [ −1  −1  −3 ]
Again
A 3 = A A 2 = [ 1   1   3 ] [ 0   0   0 ]   [ 0  0  0 ]
              [ 5   2   6 ] [ 3   3   9 ] = [ 0  0  0 ] = O.
              [ −2  −1  −3 ] [ −1  −1  −3 ]   [ 0  0  0 ]
Thus 3 is the least positive integer such that A 3 = O. Hence the matrix A is nilpotent of
index 3.
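The same conclusion can be reached mechanically (an illustrative numpy sketch, added for checking): the square of A is non-zero while the cube vanishes.

```python
# Illustrative check: A^2 != O but A^3 = O, so A is nilpotent of index 3.
import numpy as np

A = np.array([[1, 1, 3],
              [5, 2, 6],
              [-2, -1, -3]])
A2 = A @ A
A3 = A2 @ A
print(A2.any(), A3.any())   # True False
```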
(ii) The number of rows may or may not be equal to the number of columns in a matrix,
while in a determinant the number of rows is equal to the number of columns.
(iii) Interchanging the rows and columns, a different matrix is formed while in a
determinant, an interchange of rows and columns does not change the value of
the determinant.
Definition: Let A be a non-singular matrix. If p is any positive integer, then we define
A − p = (A p )−1.
If A is non-singular and its negative integral powers are considered, then for any
integers r, s, we have
(i) A r A s = A r +s (ii) (A r )s = A rs .
For two non-singular matrices A and B of the same order, the index law
(AB)m = A mB m fails. However it becomes valid if A and B commute with respect to
multiplication.
A − k = (A k )−1 = (A ⋅ A ⋅ … k times)−1
= (A −1 ⋅ A −1 ⋅ … k times) [by the reversal law for inverses]
= (A −1)k .
(i) (A −1)−1 = A
Proof: (i) Since A is an invertible matrix, there exists a matrix X = A −1 such
that AX = XA = I. But these two relations equally assert that A is the inverse of X.
Hence, (A −1)−1 = A .
Again, if k ≠ 0 and X = k −1A −1, then
(kA ) X = (kA ) (k −1A −1) = kk −1AA −1 = I.
Similarly, X (kA ) = I.
∴ (kA ) X = X (kA ) = I , so that (kA )−1 = k −1A −1.
Again, let X = B −1 A −1. Then
(AB) X = (AB) (B −1 A −1)
= A [B (B −1 A −1)]
= A [(BB −1) A −1]
= A (IA −1)
= AA −1
= I.
Similarly, X (AB) = I.
∴ (AB) X = X (AB) = I , so that (AB)−1 = B −1 A −1.
(v) The proof is similar to that of part (iv). Take transposed conjugate instead of
transpose.
5 Trace of a Matrix
Definition: Let A be a square matrix of order n. The sum of the elements of A lying along the
principal diagonal is called the trace of A . We shall write the trace of A as tr A . Thus if
A = [aij] n × n , then tr A = Σ i aii = a11 + a22 + … + ann.
In the following theorem we have given some fundamental properties of the trace
function.
Theorem: Let A and B be two square matrices of order n and λ be a scalar. Then
(1) tr ( λ A ) = λ tr A ;
(2) tr ( A + B) = tr A + tr B;
(3) tr ( AB) = tr (BA).
(1) We have tr ( λ A ) = Σ i λ aii = λ Σ i aii = λ tr A .
(2) We have tr ( A + B) = Σ i (aii + bii )
= Σ i aii + Σ i bii
= tr A + tr B.
(3) We have AB = [cij ] n × n , where cij = Σ k aik bkj .
Also BA = [d ij ] n × n , where d ij = Σ k bik akj .
Now tr ( AB) = Σ i cii = Σ i Σ k aik bki
= Σ k Σ i bki aik , interchanging the order of summation in the last sum
= Σ k d kk = tr (BA).
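All three trace properties can be spot-checked numerically; the sketch below (illustrative, using numpy with two arbitrarily chosen integer matrices) prints True for each property.

```python
# Illustrative check of the trace properties with two random integer matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (3, 3))
B = rng.integers(-5, 5, (3, 3))

print(np.trace(2 * A) == 2 * np.trace(A))             # tr(kA) = k tr A
print(np.trace(A + B) == np.trace(A) + np.trace(B))   # tr(A+B) = tr A + tr B
print(np.trace(A @ B) == np.trace(B @ A))             # tr(AB) = tr(BA)
```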
6 Transpose of a Matrix
Definition: Let A = [aij]m × n. Then the n × m matrix obtained from A by changing its rows
into columns and its columns into rows is called the transpose of A and is denoted by the symbol A′
or AT .
Thus if A = [aij]m × n , then A′ = [a ji]n × m .
Illustration: The transpose of the 3 × 4 matrix
A = [ 1  2  3  4 ]
    [ 2  3  4  1 ]
    [ 3  4  2  1 ]
is the 4 × 3 matrix
A′ = [ 1  2  3 ]
     [ 2  3  4 ]
     [ 3  4  2 ]
     [ 4  1  1 ] .
The first row of A is the first column of A′. The second row of A is the second column of
A′ . The third row of A is the third column of A′.
(i) (A′ )′ = A ;
(ii) (A + B)′ = A′ + B ′ ;
(iii) (kA)′ = kA′ , k being any number ;
(iv) (AB)′ = B ′ A′ .
The above law (iv) is called the reversal law for transposes i. e., the transpose of the product
is the product of the transposes taken in the reverse order.
7 Orthogonal Matrix
Definition: A square matrix A is said to be orthogonal if A ′ A = I . (Kanpur 2009)
If A is orthogonal, then A ′ A = I
⇒ | A ′ A| = | I| = 1 ⇒ | A ′| ⋅| A| = 1
⇒ | A| ⋅| A| = 1 [∵| A ′| = | A|]
⇒ | A|2 = 1 ⇒ | A| = ± 1 ⇒ | A| ≠ 0
⇒ A is invertible.
Further A ′ A = I then gives A ′ = A −1, so that
A ′ A = I = AA ′ .
Theorem: The product of two orthogonal matrices of the same order is orthogonal.
Proof: Let A and B be two orthogonal matrices of order n. Since A and B are both
n-rowed square matrices, therefore AB is also an n-rowed square matrix.
Now (AB)′ (AB) = (B ′ A ′ ) (AB)
= B ′ (A ′ A) B
= B ′ IB [∵ A ′ A = I]
= B′ B
= I. [∵ B ′ B = I]
∴ AB is orthogonal.
Example 5: Determine the values of α, β, γ when
A = [ 0    2β    γ  ]
    [ α    β    −γ  ]
    [ α   −β     γ  ]
is orthogonal.
Solution: We have
A′ = [ 0    α    α  ]
     [ 2β   β   −β  ]
     [ γ   −γ    γ  ] .
If A is orthogonal, then AA ′ = I
i. e., [ 0    2β    γ  ] [ 0    α    α  ]   [ 1  0  0 ]
       [ α    β    −γ  ] [ 2β   β   −β  ] = [ 0  1  0 ] .
       [ α   −β     γ  ] [ γ   −γ    γ  ]   [ 0  0  1 ]
Equating the corresponding elements of the two sides, we get
4β2 + γ 2 = 1 …(1)
2β2 − γ 2 = 0 …(2)
α2 + β2 + γ 2 = 1. …(3)
From (1) and (2), we get β = ± 1 / √6 , γ = ± 1 / √3 .
From (3), we get α = ± 1 / √2 .
Hence α = ± 1 / √2 , β = ± 1 / √6 , γ = ± 1 / √3 .
Example: Show that the matrix
A = [ cos θ    sin θ ]
    [ − sin θ  cos θ ]
is orthogonal.
Solution: We have
A′ = [ cos θ   − sin θ ]
     [ sin θ    cos θ  ] .
∴ AA′ = [ cos θ    sin θ ] [ cos θ   − sin θ ]
        [ − sin θ  cos θ ] [ sin θ    cos θ  ]
      = [ cos2 θ + sin2 θ                  − cos θ sin θ + sin θ cos θ ]
        [ − sin θ cos θ + cos θ sin θ      sin2 θ + cos2 θ             ]
      = [ 1  0 ]
        [ 0  1 ] = I2 .
Hence A is orthogonal.
Example: Show that the matrix
A = (1/3) [ 1   2   2 ]
          [ 2   1  −2 ]
          [ −2  2  −1 ]
is orthogonal.
Solution: We have
A′ = (1/3) [ 1   2  −2 ]
           [ 2   1   2 ]
           [ 2  −2  −1 ] .
∴ AA′ = (1/9) [ 1   2   2 ] [ 1   2  −2 ]
              [ 2   1  −2 ] [ 2   1   2 ]
              [ −2  2  −1 ] [ 2  −2  −1 ]
      = (1/9) [ 9  0  0 ]
              [ 0  9  0 ]
              [ 0  0  9 ]
      = [ 1  0  0 ]
        [ 0  1  0 ]
        [ 0  0  1 ] = I3 .
Hence A is orthogonal.
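Orthogonality is also easy to verify by machine (an illustrative numpy sketch, not the book's method):

```python
# Illustrative check that A'A = I for the matrix of the last example.
import numpy as np

A = np.array([[1, 2, 2],
              [2, 1, -2],
              [-2, 2, -1]]) / 3.0
print(np.allclose(A.T @ A, np.eye(3)))   # True, so A is orthogonal
```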
8 Conjugate of a Matrix
If i = √ (−1) , then z = x + iy is called a complex number where x and y are any real
numbers. If z = x + iy, then z̄ = x − iy is called the conjugate of the complex number z.
If z1 and z2 are two complex numbers, then it can be easily seen that the conjugate of
their sum is the sum of their conjugates and the conjugate of their product is the
product of their conjugates.
Definition: The matrix obtained from any given matrix A on replacing its elements by the
corresponding conjugate complex numbers is called the conjugate of A and is denoted by Ā.
Thus if A = [aij]m × n , then Ā = [āij]m × n where āij denotes the conjugate complex of aij.
If A be a matrix over the field of real numbers, then obviously Ā coincides with A.
Illustration: If A = [ 2 + 3 i   4 − 7i   8     ]
                    [ −i        6        9 + i ] ,
then Ā = [ 2 − 3 i   4 + 7i   8     ]
         [ i         6        9 − i ] .
(i) The conjugate of Ā is the matrix A itself.
Obviously the conjugate of the transpose of A is the same as the transpose of the
conjugate of A. This common matrix is called the transposed conjugate of A and is
denoted by A θ . Thus A θ = (Ā)′ = [b ji] ,
where b ji = āij i. e., the ( j, i)th element of A θ = the conjugate complex of the (i, j)th
element of A.
Illustration: If A = [ 1 + 2 i   2 − 3i   3 + 4 i ]
                     [ 4 − 5 i   5 + 6i   6 − 7 i ]
                     [ 8         7 + 8i   7       ] ,
then A′ = [ 1 + 2 i   4 − 5i   8       ]
          [ 2 − 3 i   5 + 6i   7 + 8 i ]
          [ 3 + 4 i   6 − 7i   7       ]
and A θ = [ 1 − 2 i   4 + 5i   8       ]
          [ 2 + 3 i   5 − 6i   7 − 8 i ]
          [ 3 − 4 i   6 + 7i   7       ] .
(i) (A θ )θ = A
10 Invertible Matrices
Note: For the products AB, BA to be both defined and equal, it is necessary that A
and B are both square matrices of the same order. Thus a non-square matrix cannot
possess an inverse.
Existence of the Inverse. Theorem. The necessary and sufficient condition for a square
matrix A to possess the inverse is that | A| ≠ 0 .
Important: If A be an invertible matrix, then the inverse of A is (1 /| A|) Adj. A. It is
usual to denote the inverse of A by A −1 .
Thus, A −1 = (1 /| A|) Adj. A , provided | A| ≠ 0 .
Thus the necessary and sufficient condition for a matrix to be invertible is that it is
non-singular.
Example: Find the transpose, the adjoint and the inverse of the matrix
A = [ 2  −1   3 ]
    [ −5   3   1 ]
    [ −3   2   3 ] .
Solution: (i) We have the transpose of A ,
A T = [ 2  −5  −3 ]
      [ −1   3   2 ]
      [ 3    1   3 ] .
(ii) Expanding | A| along the first row, we have
| A| = 2 (9 − 2) − (− 1) (− 15 + 3) + 3 (− 10 + 9)
= 14 − 12 − 3 = − 1.
Then A11 = | 3  1 | = 7 ,      A12 = − | −5  1 | = 12 ,     A13 = | −5  3 | = − 1,
           | 2  3 |                    | −3  3 |                  | −3  2 |
     A21 = − | −1  3 | = 9 ,   A22 = | 2   3 | = 15 ,       A23 = − | 2  −1 | = − 1,
             | 2   3 |               | −3  3 |                     | −3  2 |
     A31 = | −1  3 | = − 10 ,  A32 = − | 2   3 | = − 17 ,   A33 = | 2  −1 | = 1.
           | 3   1 |                   | −5  1 |                  | −5  3 |
The matrix of cofactors is
B = [ 7     12   −1 ]
    [ 9     15   −1 ]
    [ −10  −17    1 ] .
Now adj. A (= the transpose of the matrix B) = [ 7     9   −10 ]
                                               [ 12    15  −17 ]
                                               [ −1   −1     1 ] .
(iii) We have A −1 = (1 /| A|) adj. A = (1 / (− 1)) [ 7     9   −10 ]
                                                   [ 12    15  −17 ]
                                                   [ −1   −1     1 ]
= [ −7   −9    10 ]
  [ −12  −15   17 ]
  [ 1     1   −1 ] .
Note: To check that the answer is correct the students should verify that AA −1 = I.
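Such a verification can also be done by machine (an illustrative numpy sketch, added for checking):

```python
# Illustrative verification: numpy computes the inverse directly, and the
# product A @ A_inv should be the unit matrix.
import numpy as np

A = np.array([[2, -1, 3],
              [-5, 3, 1],
              [-3, 2, 3]], dtype=float)
A_inv = np.linalg.inv(A)
print(np.round(A_inv))                     # [[-7 -9 10] [-12 -15 17] [1 1 -1]]
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```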
12 Unitary Matrix
Definition: A square matrix A is said to be unitary if A θ A = I .
If A is unitary, then A θ A = I shows that A is non-singular and A θ = A −1.
Hence A θ A = I implies AA θ = I .
Theorem: The product of two unitary matrices of the same order is unitary.
Proof: Let A and B be two unitary matrices of order n. Then AB is also an n-rowed
square matrix.
Now (AB)θ (AB) = (B θ A θ ) (AB)
= B θ (A θ A) B
= B θ IB [∵ A θ A = I]
= Bθ B
= I. [∵ B θ B = I]
∴ AB is unitary.
Example 9: Show that the matrix
B = (1 / √3) [ 1       1 + i ]
             [ 1 − i   −1    ]
is unitary. (Bundelkhand 2006; Garhwal 11)
Solution: We have
B̄ = (1 / √3) [ 1       1 − i ]
             [ 1 + i   −1    ] .
∴ B θ = (B̄)′ = (1 / √3) [ 1       1 + i ]
                        [ 1 − i   −1    ] .
Now B θ B = (1 / 3) [ 1       1 + i ] [ 1       1 + i ]
                    [ 1 − i   −1    ] [ 1 − i   −1    ]
= (1 / 3) [ 1 ⋅ 1 + (1 + i) (1 − i)        1 ⋅ (1 + i) + (1 + i) (−1)   ]
          [ (1 − i) ⋅ 1 + (−1) (1 − i)     (1 − i) (1 + i) + (−1) (−1)  ]
= (1 / 3) [ 3  0 ]
          [ 0  3 ]
= [ 1  0 ]
  [ 0  1 ] = I.
∴ B is unitary.
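The unitary property can also be checked numerically (an illustrative numpy sketch): the transposed conjugate of B times B should be the unit matrix.

```python
# Illustrative check that B is unitary: B^theta B = I.
import numpy as np

B = np.array([[1, 1 + 1j],
              [1 - 1j, -1]]) / np.sqrt(3)
print(np.allclose(B.conj().T @ B, np.eye(2)))   # True, so B is unitary
```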
Example 10: Show that the matrix
A = [ α + iγ    −β + iδ ]
    [ β + iδ    α − iγ  ]
is a unitary matrix, if α2 + β2 + γ 2 + δ2 = 1.
Solution: We have
A′ = [ α + iγ     β + iδ ]
     [ − β + iδ   α − iγ ] ,
so that A θ = [ α − iγ     β − iδ ]
              [ − β − iδ   α + iγ ] .
∴ AA θ = I
i. e., [ α + iγ    −β + iδ ] [ α − iγ     β − iδ ]   [ 1  0 ]
       [ β + iδ    α − iγ  ] [ − β − iδ   α + iγ ] = [ 0  1 ]
or [ α2 + β2 + γ 2 + δ2    0                     ]   [ 1  0 ]
   [ 0                     α2 + β2 + γ 2 + δ2    ] = [ 0  1 ] ,
the off-diagonal entries, e. g., αβ − iβγ + iαδ + γδ − αβ − iαδ + iβγ − δγ , vanishing
identically. This holds if α2 + β2 + γ 2 + δ2 = 1 .
Hence A is unitary if α2 + β2 + γ 2 + δ2 = 1 .
Rot (θ) rotates column vectors by means of the following matrix multiplication
[ x ′ ]   [ cos θ   − sin θ ] [ x ]
[ y ′ ] = [ sin θ    cos θ  ] [ y ] ,
so that
x ′ = x cos θ − y sin θ,
y ′ = x sin θ + y cos θ.
(iii) Rot (θ) Ref (φ) = Ref (φ + (1/2) θ)
(iv) Ref (φ) Rot (θ) = Ref (φ − (1/2) θ) .
The set of all reflections in lines through the origin and rotations about the origin,
together with the operation of composition of reflections and rotations, forms a group.
Rot (0) is the identity of the group. There exists an inverse Rot (− θ) for every rotation
Rot (θ). Every reflection Ref (θ) is its own inverse. Matrix multiplication being
associative, the composition has closure and is associative. Ref (θ) and Rot (θ), both
being represented by orthogonal matrices, have a determinant whose absolute value is
unity. Rotation matrices have a determinant of value +1 and reflection matrices have
a determinant of value −1 .
Note : The set of all 2 × 2 orthogonal matrices together with matrix multiplication
forms the orthogonal group O(2).
[ 1  0 ]   Identity matrix : ( x, y) → ( x, y).
[ 0  1 ]   Right remains right, up remains up.
[ −1   0 ]  Rotation by 180° : ( x, y) → (− x, − y).
[ 0   −1 ]  Right has become left, up has become down.
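Relation (iii) can be checked numerically. The sketch below (illustrative only) takes the usual form Ref (φ) = [cos 2φ, sin 2φ; sin 2φ, −cos 2φ] for reflection in the line through the origin at angle φ; this form is an assumption of the sketch, chosen to be consistent with the identities above.

```python
# Illustrative numpy check of relations (iii) and (iv). The definition of
# ref(phi) below is an assumed convention, not quoted from the book.
import numpy as np

def rot(t):
    # rotation about the origin through angle t
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def ref(p):
    # reflection in the line through the origin at angle p
    return np.array([[np.cos(2*p), np.sin(2*p)], [np.sin(2*p), -np.cos(2*p)]])

theta, phi = 0.7, 0.3
print(np.allclose(rot(theta) @ ref(phi), ref(phi + theta / 2)))   # True
print(np.allclose(ref(phi) @ rot(theta), ref(phi - theta / 2)))   # True
```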
Comprehensive Exercise 1
1. Show that the matrix A = [ 2  −2  −4 ]
                            [ −1   3   4 ]
                            [ 1  −2  −3 ]  is idempotent.
4. Show that the matrix A = [ ab     b2   ]
                            [ − a2   − ab ]  is nilpotent such that A 2 = O .
5. Show that the matrix (1/3) [ −1  2  2 ]
                              [ 2  −1  2 ]
                              [ 2   2 −1 ]  is orthogonal.
6. Verify that the matrix [ 1/√3    1/√6   −1/√2 ]
                          [ −1/√3   2/√6    0    ]
                          [ 1/√3    1/√6    1/√2 ]  is orthogonal.
7. Show that the matrix (1/√2) [ 1    i  ]
                               [ −i  −1  ]  is unitary.
8. Prove that the matrix [ (1 + i)/2   (− 1 + i)/2 ]
                         [ (1 + i)/2   (1 − i)/2   ]  is unitary. (Rohilkhand 2010)
9. If (l1, m1, n1), (l2 , m2 , n2 ), (l3 , m3 , n3 ) are the direction cosines of three mutually
perpendicular lines, show that
[ l1  m1  n1 ]
[ l2  m2  n2 ]
[ l3  m3  n3 ]  is an orthogonal matrix.
10. Show that if A is an orthogonal matrix, then A ′ and A −1 are also orthogonal.
11. If N = [ 0           1 + 2 i ]
           [ − 1 + 2 i   0       ]  is a matrix, then show that (I − N) (I + N)−1 is a
unitary matrix, where I is an identity matrix.
True or False
Write ‘T’ for true and ‘F’ for false statement.
3. The matrix A = [ cos θ    sin θ ]
                  [ − sin θ  cos θ ]  is a unitary matrix.
A nswers
True or False
1. T 2. T 3. F
6
Rank of a Matrix
1 Submatrix of a Matrix
Suppose A is any matrix of the type m × n. Then a matrix obtained by leaving some
rows and columns from A is called a submatrix of A. In particular the matrix A
itself is a sub-matrix of A because it is obtained from A by leaving no rows or columns.
For example, let
A = [ 2   4   1   9   1 ]
    [ 0   5   2   5   2 ]
    [ 1   9   7   3   4 ]
    [ 3  −2   8   1   8 ] 4 × 5 .
If we leave one column from A, we shall get a square submatrix of A of order 4.
Thus [ 2   4   1   9 ]   [ 2   4   9   1 ]
     [ 0   5   2   5 ]   [ 0   5   5   2 ]
     [ 1   9   7   3 ] , [ 1   9   3   4 ] , etc. are square submatrices of A of order 4.
     [ 3  −2   8   1 ]   [ 3  −2   1   8 ]
If we leave two columns and one row from A, we shall get a square submatrix of A of
order 3.
Thus [ 2  4  1 ]   [ 4  1  9 ]   [ 5   2  5 ]
     [ 0  5  2 ] , [ 5  2  5 ] , [ 9   7  3 ] , etc.
     [ 1  9  7 ]   [ 9  7  3 ]   [ −2  8  1 ]
If we leave three columns and two rows from A , we shall get a square submatrix of A of
order 2.
Thus [ 2  4 ]   [ 4  1 ]   [ 5  2 ]
     [ 0  5 ] , [ 5  2 ] , [ 9  7 ] , etc. are 2-rowed minors of A.
2 Rank of a Matrix
(Lucknow 2006)
Definition: A number r is said to be the rank of a matrix A if it possesses the following two
properties :
(i) There is at least one square submatrix of A of order r whose determinant is not equal to zero.
(ii) If the matrix A contains any square submatrix of order r + 1, then the determinant of every
square submatrix of A of order r + 1 should be zero.
In short the rank of a matrix is the order of any highest order non-vanishing minor of the matrix.
Thus the rank of a matrix A is the order of any highest order square submatrix of A whose
determinant is not equal to zero.
It is obvious that the rank r of an (m × n) matrix can at most be equal to the smaller of
the numbers m and n , but it may be less.
If there is a matrix A which has at least one non-zero minor of order n and there is no
minor of A of order n + 1, then the rank of A is n. Thus the rank of every non-singular
matrix of order n is n. The rank of a square matrix A of order n can be less than n if and
only if A is singular i. e., |A| = 0.
Note 1: Since the rank of every non-zero matrix is ≥ 1, we agree to assign the rank zero to
every null matrix.
Important : The following two simple results will help us very much in finding the
rank of a matrix:
(i) The rank of a matrix is ≤ r, if all (r + 1) - rowed minors of the matrix vanish.
(ii) The rank of a matrix is ≥ r, if there is at least one r-rowed minor of the matrix which is not
equal to zero.
Illustration:
(a) Let A = I3 = [ 1  0  0 ]
                 [ 0  1  0 ]
                 [ 0  0  1 ]  be a unit matrix of order 3.
We have | A| = 1 ≠ 0 . Hence rank A = 3.
(b) Let A = [ 0  0  0 ]
            [ 0  0  0 ]
            [ 0  0  0 ] .
Since A is a null matrix, rank A = 0.
(c) Let A = [ 1  2  3 ]
            [ 2  3  4 ]
            [ 0  2  2 ] .
Here | A| = 1 (6 − 8) − 2 (4 − 0 ) + 3 (4 − 0 ) = 2 ≠ 0 . Hence rank A = 3.
(d) Let A = [ 1  2  3 ]
            [ 3  4  5 ]
            [ 4  5  6 ] .
Here | A| = 1 (24 − 25) − 2 (18 − 20 ) + 3 (15 − 16) = 0 .
Therefore the rank of A is less than 3. Now there is at least one minor of A of order 2,
namely [ 1  2 ]
       [ 3  4 ] , which is not equal to zero. Hence rank A = 2.
(e) Let A = [ 3  1  2 ]
            [ 6  2  4 ]
            [ 3  1  2 ] .
Here | A| = 0 , since the first and third rows are identical.
Also each 2-rowed minor of A is equal to zero. But A is not a null matrix. Hence rank
A = 1.
(f) Let A = [ 2  4  3  2 ]
            [ 3  5  1  4 ] .
Here we see that there is at least one minor of A of order 2, namely [ 2  4 ]
                                                                    [ 3  5 ] , which is not
equal to zero. Also there is no minor of A of order greater than 2. Hence the rank of
A = 2.
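All six illustrations can be confirmed at once with numpy's rank routine (an illustrative check, not part of the original text):

```python
# Illustrative check of the ranks in illustrations (a)-(f).
import numpy as np

mats = {
    "(a)": np.eye(3),
    "(b)": np.zeros((3, 3)),
    "(c)": np.array([[1, 2, 3], [2, 3, 4], [0, 2, 2]]),
    "(d)": np.array([[1, 2, 3], [3, 4, 5], [4, 5, 6]]),
    "(e)": np.array([[3, 1, 2], [6, 2, 4], [3, 1, 2]]),
    "(f)": np.array([[2, 4, 3, 2], [3, 5, 1, 4]]),
}
for name, M in mats.items():
    print(name, np.linalg.matrix_rank(M))   # 3, 0, 3, 2, 1, 2
```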
3 Echelon Form of a Matrix
Definition: A matrix A is said to be in Echelon form if :
(i) Every row of A which has all its entries 0 occurs below every row which has a non-zero entry.
(ii) The number of zeros before the first non-zero element in a row is less than the number of such
zeros in the next row.
Important result: The rank of a matrix in Echelon form is equal to the number of non-zero
rows of the matrix.
Illustration: Find the rank of the matrix A = [ 0  1  2   3 ]
                                              [ 0  0  1  −1 ]
                                              [ 0  0  0   0 ] .
The matrix A has one zero row. We see that it occurs below every non-zero row.
Further the number of zeros before the first non-zero element in the first row is one.
The number of zeros before the first non-zero element in the second row is two. Thus
the number of zeros before the first non-zero element in any row is less than the
number of such zeros in the next row. Hence the matrix A is in Echelon form, and so
rank A = the number of non-zero rows of A = 2.
4 Theorem
The rank of the transpose of a matrix is the same as that of the original matrix.
Proof: Let A be any matrix and A′ be the transpose of the matrix A. Let rank A = r and
rank A′ = s. Then to prove that r = s.
We have rank A = r
⇒ there exists at least one r-rowed square submatrix, say R, of A such that | R| ≠ 0 .
Now R ′ is an r-rowed square submatrix of A′ , and
| R ′| = | R| ≠ 0
⇒ rank A′ ≥ r ⇒ s ≥ r. ...(1)
Since (A ′ )′ = A, therefore interchanging the roles of A and A′ in the above result (1), we
have
r ≥ s. …(2)
From (1) and (2), we get r = s.
Example 1: Find the ranks of the matrices
(i) [ 1  2  3 ]      (ii) [ 1  2  3 ]
    [ 2  1  0 ] ,         [ 2  4  5 ] .
    [ 0  1  2 ]                          (Bundelkhand 2009)
Solution : (i) Let A = [ 1  2  3 ]
                       [ 2  1  0 ]
                       [ 0  1  2 ] .
We have | A| = 1 (2 − 0 ) − 2 (4 − 0 ) + 3 (2 − 0 )
= 2 − 8 + 6 = 0.
But there is at least one minor of order 2 of the matrix A, namely [ 1  2 ]
                                                                   [ 2  1 ] , which is not
equal to zero. Hence rank A = 2.
(ii) Let A = [ 1  2  3 ]
             [ 2  4  5 ] .
Here there is at least one minor of order 2 of the matrix A, namely [ 1  3 ]
                                                                    [ 2  5 ] , which is not
equal to 0. Also there is no minor of the matrix A of order greater than 2. Hence rank
A = 2.
Example 2: For what value of x will the rank of the matrix
A = [ 2  4  2 ]
    [ 2  1  2 ]
    [ 1  0  x ]
be equal to 3 ?
Solution: The rank of A will be 3 if and only if | A| ≠ 0 ,
i. e., if | 2  4  2 |
          | 2  1  2 | ≠ 0
          | 1  0  x |
or 2 ( x − 0 ) − 4 (2 x − 2) + 2 (0 − 1) ≠ 0
or 2x − 8x + 8 − 2 ≠ 0
or − 6x + 6 ≠ 0
or x ≠ 1.
Example 3: For which value of ‘b’ is the rank of the matrix A = [ 1   5   4 ]
                                                                [ 0   3   2 ]
                                                                [ b  13  10 ]  equal to 2 ?
Solution: The rank of A will be 2 if | A| = 0 ,
i. e., if | 1   5   4 |
          | 0   3   2 | = 0
          | b  13  10 |
or 1 (30 − 26) − 5 (0 − 2 b) + 4 (0 − 3 b) = 0
or 4 + 10 b − 12 b = 0
or 4 − 2b = 0
or b = 2.
Example 4: Find the values of a so that rank (A) < 3, where A is the matrix
A = [ 3 a − 8   3         3       ]
    [ 3         3 a − 8   3       ]
    [ 3         3         3 a − 8 ] .
Solution: We shall have rank (A) < 3 if and only if | A| = 0 ,
i. e., if | 3 a − 8   3         3       |
          | 3         3 a − 8   3       | = 0 .
          | 3         3         3 a − 8 |
Applying R1 → R1 + R2 + R3 , we get
| 3 a − 2   3 a − 2   3 a − 2 |
| 3         3 a − 8   3       | = 0
| 3         3         3 a − 8 |
or (3 a − 2) | 1   1         1       |
             | 3   3 a − 8   3       | = 0 .
             | 3   3         3 a − 8 |
Applying C2 → C2 − C1 and C3 → C3 − C1 , we get
(3 a − 2) | 1   0          0        |
          | 3   3 a − 11   0        | = 0
          | 3   0          3 a − 11 |
or (3 a − 2) (3 a − 11) (3 a − 11) = 0
or a = 2/3 or a = 11/3 .
Example 5: Prove that the points ( x1, y1), ( x2 , y2 ) and ( x3 , y3 ) are collinear if and only if the
rank of the matrix
A = [ x1  y1  1 ]
    [ x2  y2  1 ]
    [ x3  y3  1 ]  is less than three. (Kanpur 2001; Bundelkhand 07)
Solution: First suppose that the points ( x1, y1), ( x2 , y2 ), ( x3 , y3 ) are collinear. Then the
area of the triangle formed by them is zero
⇒ (1/2) | x1  y1  1 |
        | x2  y2  1 | = 0
        | x3  y3  1 |
⇒ | x1  y1  1 |
  | x2  y2  1 | = 0
  | x3  y3  1 |
⇒ the rank of the matrix A is less than 3.
Conversely, suppose that the rank of the given matrix A is less than 3. Then
rank A < 3
⇒ | x1  y1  1 |
  | x2  y2  1 | = 0
  | x3  y3  1 |
⇒ (1/2) | x1  y1  1 |
        | x2  y2  1 | = 0
        | x3  y3  1 |
⇒ the area of the triangle formed by the three points is zero
⇒ the points are collinear.
Example 6: If A is a non-zero column matrix and B a non-zero row matrix, show that
rank (AB) = 1 .
Solution : Let A = [ a11 ]
                   [ a21 ]
                   [ a31 ]   and B = [b11 b12 b13 … b1n] .
                   [ …   ]
                   [ am1 ]
Since A and B are non-zero matrices, therefore the matrix AB will also be non-zero. The
matrix AB will have at least one non-zero element obtained by multiplying
corresponding non-zero elements of A and B .
All the two-rowed minors of AB obviously vanish, since any two rows of AB are
proportional. But AB is a non-zero matrix. Hence rank (AB) = 1.
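The result of Example 6 is easy to test numerically (an illustrative sketch): the product of a non-zero column and a non-zero row is an outer product, and its rank is 1.

```python
# Illustrative check: an outer product of non-zero vectors has rank 1.
import numpy as np

a = np.array([2, -1, 3])        # a non-zero column matrix
b = np.array([1, 4, 0, 2])      # a non-zero row matrix
print(np.linalg.matrix_rank(np.outer(a, b)))   # 1
```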
Comprehensive E xercise 1
1. Find the ranks of the following matrices :
(v) [ 1  2  3   4 ]      (vi) [ 1  2  −7   5 ]
    [ 2  4  6   8 ]           [ 0  5   0   8 ]
    [ 3  6  9  12 ]           [ 0  0   0  −3 ]
(vii) [ 1  2  3 ]
      [ 2  4  5 ] .
2. Show that the rank of a matrix is ≥ the rank of every sub-matrix thereof.
3. Show that the rank of a matrix does not alter on affixing any number of
additional rows or columns of zeros.
4. If A = [ 0  1  0  0 ]
          [ 0  0  1  0 ]
          [ 0  0  0  1 ]
          [ 0  0  0  0 ] , find the ranks of A and A2 .
5. Under what conditions is the rank of the following matrix 3 ? Is it possible for the
rank to be 1 ? Why ?
A = [ 2  4  2 ]
    [ 3  1  4 ]
    [ 1  0  x ] .
(Kanpur 2010)
6. A is an n-rowed square matrix of rank (n − 1) , show that Adj. A is not a null matrix.
A nswers 1
4. Rank A = 3, rank A2 = 2
5. x ≠ 7/5 ; No, because one minor of order 2 of A is non-zero.
2. The multiplication of the elements of any row (or column) by any non-zero number.
3. The addition to the elements of any row (or column) of the corresponding elements of any
other row (or column) multiplied by any number.
Notation: The addition of k times the j th row to the ith row will be denoted by Ri → Ri + kRj.
For example, let A = [ 1  4  2  9 ]
                     [ 2  5  1  3 ]
                     [ 3  7  8  4 ] 3 × 4 .
Applying the elementary transformation R2 → R2 + 2 R3 to A, we get the matrix
B = [ 1  4   2   9  ]
    [ 8  19  17  11 ]
    [ 3  7   8   4  ] 3 × 4 .
Again, applying the elementary transformation R2 → 3 R2 to A, we get the matrix
C = [ 1  4   2  9 ]
    [ 6  15  3  9 ]
    [ 3  7   8  4 ] .
Now if we apply the elementary transformation R2 → (1/3) R2 to the matrix C , we see that
the matrix C transforms back to the matrix A.
7 Elementary Matrices
Definition: A matrix obtained from a unit matrix by a single elementary transformation is
called an elementary matrix (or E-matrix). For example,
[ 0  0  1 ]   [ 1  0  0 ]   [ 1  2  0 ]
[ 0  1  0 ] , [ 0  4  0 ] , [ 0  1  0 ]
[ 1  0  0 ]   [ 0  0  1 ]   [ 0  0  1 ]
are elementary matrices obtained from I3 .
(i) Eij will denote the E-matrix obtained by interchanging the ith and j th rows of a unit
matrix. The students can easily see that the matrices obtained by interchanging
the ith and j th rows or the ith and j th columns of a unit matrix are the same.
Therefore Eij will also denote the elementary matrix obtained by interchanging
the ith and j th columns of a unit matrix.
(ii) Ei (k ) will denote the E-matrix obtained by multiplying the ith row of a unit matrix
by a non-zero number k. It can be easily seen that the matrices obtained by
multiplying the ith row or the ith column of a unit matrix by k are the same.
Therefore Ei (k ) will also denote the elementary matrix obtained by multiplying
the ith column of a unit matrix by a non-zero number k.
(iii) Eij (m) will denote the elementary matrix, obtained by adding to the elements of
the ith row of a unit matrix, the products by any number m of the corresponding
elements of the j th row. It may be easily seen that the E-matrix, Eij (m) can also be
obtained by adding to the elements of the j th column of a unit matrix, the
products by m of the corresponding elements of the ith column.
Now it is very interesting to note that the elementary transformations of a matrix can
also be obtained by algebraic operations on the same by the corresponding elementary
matrices. In this connection we have the following theorem.
8 Theorem
Every elementary row (column) transformation of a matrix can be obtained by pre-multiplication
(post-multiplication) with the corresponding elementary matrix.
We shall first prove that every elementary row transformation of a product AB of two
matrices A and B can be obtained by subjecting the pre-factor A to the same
elementary row transformation. Similarly every elementary column transformation
of a product AB of two matrices A and B can be obtained by subjecting the post-factor B
to the same elementary column transformation.
Let R1, R2 , R3 ,…, R m denote the row vectors of the matrix A and C1, C2 , …, Cp denote
the column vectors of the matrix B.
We can then write
A = [ R1 ]
    [ R2 ]
    [ R3 ]   and B = [C1 C2 C3 … Cp ] .
    [ …  ]
    [ R m ]
Now if σ denotes any elementary row transformation, it is quite obvious from the
above representation that (σ A) B = σ (AB); for example, this is immediate when σ is
the interchange R1 ↔ R2 .
Similarly it is quite obvious that if the columns C1, C2 ,…, Cp of B be subjected to any
elementary column transformation, the columns of AB are also subjected to the same
elementary column transformation. Hence the result.
Now let A be any m × n matrix. We can write
A = Im A.
If σ is an elementary row transformation and E = σ Im is the corresponding elementary
matrix, then
σ A = σ (I m A) = (σ Im ) A = EA,
so that σ A is the same as the matrix obtained by pre-multiplying A by E.
Illustration : Let A = [ 1  4  2 ]
                       [ 2  7  1 ]
                       [ 3  8  4 ] .
Applying the elementary row transformation R1 → R1 + 2 R3 to A, we get the matrix
B = [ 7  20  10 ]
    [ 2  7   1  ]
    [ 3  8   4  ] .
Applying the same row transformation R1 → R1 + 2 R3 to the unit matrix
I3 = [ 1  0  0 ]
     [ 0  1  0 ]
     [ 0  0  1 ] ,
the E-matrix E thus obtained is E = [ 1  0  2 ]
                                    [ 0  1  0 ]
                                    [ 0  0  1 ] .
Now EA = [ 1  0  2 ] [ 1  4  2 ]
         [ 0  1  0 ] [ 2  7  1 ]
         [ 0  0  1 ] [ 3  8  4 ]
= [ 1⋅1 + 0 ⋅2 + 2 ⋅3   1⋅4 + 0 ⋅7 + 2 ⋅8   1⋅2 + 0 ⋅1 + 2 ⋅4 ]
  [ 0 ⋅1 + 1⋅2 + 0 ⋅3   0 ⋅4 + 1⋅7 + 0 ⋅8   0 ⋅2 + 1⋅1 + 0 ⋅4 ]
  [ 0 ⋅1 + 0 ⋅2 + 1⋅3   0 ⋅4 + 0 ⋅7 + 1⋅8   0 ⋅2 + 0 ⋅1 + 1⋅4 ]
= [ 7  20  10 ]
  [ 2  7   1  ] = B.
  [ 3  8   4  ]
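The same computation can be reproduced by machine (an illustrative numpy sketch, not part of the original text):

```python
# Illustrative check: pre-multiplying A by the elementary matrix E performs
# the row operation R1 -> R1 + 2*R3.
import numpy as np

A = np.array([[1, 4, 2],
              [2, 7, 1],
              [3, 8, 4]])
E = np.eye(3)
E[0, 2] = 2          # I3 after R1 -> R1 + 2*R3
print(E @ A)         # first row becomes [7 20 10], i.e. E @ A = B
```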
(i) The E-matrix corresponding to the E-operation Ri ↔ Rj is its own inverse.
Let Eij denote the elementary matrix obtained by interchanging the ith and j th rows of a
unit matrix.
The interchange of the ith and j th rows of Eij will transform Eij to the unit matrix. But
every elementary row transformation of a matrix can be brought about by
pre-multiplication with the corresponding elementary matrix. Therefore the row
transformation which changes Eij to I can be effected on pre-multiplication by Eij ,
i. e., Eij Eij = I. Hence Eij is its own inverse.
Similarly, we can show that the elementary matrix corresponding to the E-operation Ci ↔ C j is
its own inverse.
(ii) The inverse of the E-matrix corresponding to the E-operation Ri → kRi, (k ≠ 0 ), is the
E-matrix corresponding to the E-operation Ri → k −1 Ri.
Let Ei (k ) denote the elementary matrix obtained by multiplying the elements of the ith
row of a unit matrix I by a non-zero number k. The row transformation Ri → k −1 Ri
transforms Ei (k ) to I, and it can be effected on pre-multiplication by Ei (k −1).
Thus Ei (k −1) Ei (k ) = I , so that [Ei (k )]−1 = Ei (k −1).
Similarly, we can show that the inverse of the E-matrix corresponding to the E-operation
Ci → kCi, k ≠ 0 , is the E-matrix corresponding to the E-operation Ci → k −1 Ci.
(iii) The inverse of the E-matrix corresponding to the E-operation Ri → Ri + kRj is the E-matrix
corresponding to the E-operation Ri → Ri − kRj.
Let Eij (k ) denote the elementary matrix obtained by adding to the elements of the ith
row of a unit matrix I, the products by any number k of the corresponding elements of
the j th row of I.
If we add to elements of the ith row of Eij (k ), the products by −k of the corresponding
elements of its j th row, then this row operation will transform Eij(k ) to the unit matrix I.
Now this row transformation of Eij(k ) can be effected on pre-multiplication by the
corresponding elementary matrix Eij (− k ). Thus Eij (− k ) Eij (k ) = I , so that
[Eij (k )]−1 = Eij (− k ).
Similarly, we can show that the inverse of the E-matrix corresponding to the E-operation
Ci → Ci + kC j is the E-matrix corresponding to the E-operation Ci → Ci − kC j.
From the above theorem, we thus conclude that the inverse of an elementary matrix
is also an elementary matrix of the same type.
Theorem: The rank of a matrix remains unaltered under elementary transformations.
Proof: Let A = [aij] be an m × n matrix of rank r. We shall prove the theorem in three
stages.
Case I. The interchange of two rows does not change the rank.
Let B be the matrix obtained from the matrix A by the E-transformation Rp ↔ Rq . Let r
be the rank of A and s be the rank of B. Then to prove that r = s.
We have rank A = r ⇒ there exists at least one r-rowed square submatrix, say R, of A
such that | R| ≠ 0 .
Let S be the r-rowed square submatrix of B which has the same rows as are in R though
they may be in different relative positions.
∴ | R | ≠ 0 ⇒ |S| ≠ 0 [∵ |S| = ± | R| ].
∴ rank B ≥ r i. e., s ≥ r.
Similarly, interchanging the roles of A and B, we get r ≥ s.
Hence r = s.
Case II. Multiplication of the elements of a row by a non-zero number does not change the rank.
Let B be the matrix obtained from A by the E-transformation Rp → kRp , k ≠ 0 , and let s
be the rank of B.
Now if | B0| be any (r + 1) -rowed minor of B, there exists a uniquely determined minor
| A0| of A such that
| B0| = | A0| (this happens if the pth row of B is not among the rows retained in B0 )
or | B0| = k | A0|
(this happens when the pth row is retained while obtaining B0 from B).
Since the matrix A is of rank r, therefore every (r + 1)-rowed minor of A vanishes i. e.,
| A0| = 0 . Hence | B0 | = 0 . Thus we see that every (r + 1)-rowed minor of B also vanishes.
Therefore, s (the rank of B) cannot exceed r ( the rank of A).
∴ s ≤ r.
Also, since A can be obtained from B by E-transformation of the same type i. e.,
Rp → (1 / k ) Rp , therefore, by interchanging the roles of A and B we find that
r ≤ s.
Thus r = s.
Case III. Addition to the elements of a row of the products by any number k of the corresponding
elements of any other row does not change the rank.
Let B be the matrix obtained from A by the E-transformation Rp → Rp + kRq , and let s
be the rank of B. Let B0 be any (r + 1)-rowed square submatrix of B and A0 the
correspondingly placed submatrix of A.
The transformation Rp → Rp + kRq has changed only the pth row of the matrix A. Also
the value of a determinant does not change if we add to the elements of any row the
corresponding elements of any other row multiplied by some number k.
Therefore, if no row of the sub-matrix A0 is part of the pth row of A, or if two rows of A0
are parts of the pth and q th rows of A, then | B0| = | A0|.
Again, if a row of A0 is a part of the pth row of A, but no row is a part of the q th row, then
| B0| = | A0| + k |C0| , where C0 is an (r + 1) -rowed square matrix which can be obtained
from A0 by replacing the elements of A0 in the row which corresponds to the pth row of
A by the corresponding elements in the q th row of A. Obviously all the r + 1 rows of the
matrix C0 are exactly the same as the rows of some (r + 1)-rowed square sub-matrix of A,
though arranged in some different order. Therefore |C0| is ± 1 times some (r + 1)-rowed
minor of A. Since the rank of A is r, therefore, every (r + 1)-rowed minor of A is zero, so
that | A0| = 0 ,|C0| = 0 , and consequently | B0| = 0 .
Thus we see that every (r + 1)-rowed minor of B also vanishes. Hence, s (the rank of B)
cannot exceed r ( the rank of A).
∴ s ≤ r.
Also, since A can be obtained from B by an E-transformation of the same type i. e.,
Rp → Rp − kRq , therefore, interchanging the roles of A and B, we have r ≤ s.
Thus r = s.
We have thus shown that rank of a matrix remains unaltered by any E-row
transformation. Therefore we can also say that the rank of a matrix remains unaltered
by a series of elementary row transformations.
Similarly we can show that the rank of a matrix remains unaltered by a series of
elementary column transformations.
Finally, we conclude that the rank of a matrix remains unaltered by a finite chain of elementary
operations.
Theorem: Every non-zero m × n matrix of rank r can be reduced, by a finite chain of elementary
operations, to the form [ Ir  O ]
                        [ O   O ] , where Ir is the r-rowed unit matrix.
Proof: Since A is a non-zero matrix, therefore A has at least one element different from
zero, say apq = k ≠ 0.
By interchanging the pth row with the first row and the q th column with the first
column respectively, we obtain a matrix B whose leading element is equal to k which is
not equal to zero.
Multiplying the elements of the first row of the matrix B by 1/k, we obtain a matrix C
whose leading element is equal to unity.
Subtracting suitable multiples of the first column of C from the remaining columns,
and suitable multiples of the first row from the remaining rows, we obtain a matrix D in
which all elements of the first row and first column except the leading element are
equal to zero.
Let D = [ 1   O  ]
        [ O   A1 ] ,
where A1 is an (m − 1) × (n − 1) matrix and O denotes a zero block.
If now, A1 be a non-zero matrix, we can deal with it as we did with A. If the elementary
operations applied to A1 for this purpose be applied to D, they will not affect the first
row and the first column of D . Continuing this process, we shall finally obtain a matrix
M, such that
M = [ Ik  O ]
    [ O   O ] .
The matrix M has the rank k. Since the matrix M has been obtained from the matrix A
by elementary transformations and elementary transformations do not alter the rank,
therefore we must have k = r.
Hence every m × n matrix of rank r can be reduced to the form [ Ir  O ]
                                                              [ O   O ]
by a finite chain of elementary transformations.
Note: The above form is usually called the first canonical form or normal form of a
matrix.
Corollary 1: The rank of an m × n matrix A is r if and only if (iff) it can be reduced to the form
[ Ir  O ]
[ O   O ]  by a finite chain of E-operations.
The condition is necessary. The proof has been given in the above theorem.
The condition is also sufficient. Suppose the matrix A has been transformed into the form
[ Ir  O ]
[ O   O ]  by elementary transformations, which do not alter the rank of the matrix.
Since the rank of the matrix [ Ir  O ]
                             [ O   O ]  is r, the rank of the matrix A must also be r.
Corollary 2: If A is an m × n matrix of rank r, then there exist non-singular matrices P and Q
such that
PAQ = [ Ir  O ]
      [ O   O ] .
Proof: If A be an m × n matrix of rank r, it can be transformed into the form
[ Ir  O ]
[ O   O ]  by
elementary operations. Since E-row (column) operations are equivalent to
pre-(post)-multiplication by the corresponding elementary matrices, we have the
following result :
Ir O
Ps Ps − 1 … P1A Q1Q2 …Qt = .
O O
Now each elementary matrix is non-singular and the product of non-singular matrices
is also non-singular. Therefore if P = Ps Ps − 1 … P2 P1 and Q = Q1 Q2 , …Qt , then P and Q
are non-singular matrices. Hence
PAQ = [ Ir  O ]
      [ O   O ] .
11 Equivalence of Matrices
Definition: If B be an m × n matrix obtained from an m × n matrix A by finite number of
elementary transformations of A then A is called equivalent to B. Symbolically, we write
A ~ B, which is read as ‘A is equivalent to B.’
The following three properties of the relation ‘~’ in the set of all m × n matrices are quite obvious :
(i) Reflexivity : A ~ A.
(ii) Symmetry : A ~ B implies B ~ A.
(iii) Transitivity : A ~ B and B ~ C imply A ~ C.
Therefore the relation ‘~’ in the set of all m × n matrices is an equivalence relation.
Example 7: If A and B be two equivalent matrices, then show that rank A = rank B.
Solution: Since A ~ B, the matrix B can be obtained from A by a finite number of elementary transformations, and elementary transformations do not alter the rank of a matrix. Hence rank A = rank B.
Example 8: Show that if two matrices A and B have the same size and the same rank, they are
equivalent.
Solution: Let A and B be two m × n matrices of the same rank r. Then by article 10, we
have
A ~ N and also B ~ N, where
N = [ Ir  O ]
    [ O   O ] .
Now B ~ N implies N ~ B (by symmetry); and A ~ N together with N ~ B implies A ~ B (by transitivity). Hence A ~ B.
Example 9: (i) Use elementary transformations to reduce the following matrix A to triangular
form and hence find rank A :
    [ 5   3  14  4 ]
A = [ 0   1   2  1 ] .
    [ 1  −1   2  0 ]
(Bundelkhand 2010)
(ii) Find the rank of the matrix
[  8   1   3  6 ]
[  0   3   2  2 ] .
[ −8  −1  −3  4 ]
Solution : (i) We have
A ~ [ 1  −1   2  0 ]
    [ 0   1   2  1 ]   by R1 ↔ R3
    [ 5   3  14  4 ]

  ~ [ 1  −1   2  0 ]
    [ 0   1   2  1 ]   by R3 → R3 − 5R1
    [ 0   8   4  4 ]

  ~ [ 1  −1    2   0 ]
    [ 0   1    2   1 ]   by R3 → R3 − 8R2 .
    [ 0   0  −12  −4 ]
The last equivalent matrix is in Echelon form (or in triangular form). The number of
non-zero rows in this matrix is 3. Therefore its rank is 3. Hence rank A = 3.
(ii) Let us denote the given matrix by A. To find the rank of A, we shall reduce it to Echelon form. Performing the column operation C1 → (1/8) C1, we get
A ~ [  1   1   3  6 ]     [ 1  1  3   6 ]
    [  0   3   2  2 ]  ~  [ 0  3  2   2 ]   by R3 → R3 + R1 .
    [ −1  −1  −3  4 ]     [ 0  0  0  10 ]
The last equivalent matrix is in Echelon form. The number of non-zero rows in this
matrix is 3. Therefore its rank is 3. Hence rank A = 3.
Example 10: Is the matrix
[  1  2   1 ]
[ −1  0   2 ]
[  2  1  −3 ]
equivalent to I3 ?
(Meerut 2008)
Solution: Let
A = [  1  2   1 ]
    [ −1  0   2 ] .
    [  2  1  −3 ]
We have
        | 1  2   1 |   | 1   2   1 |
| A | = | −1 0   2 | = | 0   2   3 |   by R2 → R2 + R1 , R3 → R3 − 2R1
        | 2  1  −3 |   | 0  −3  −5 |
      = 1 (−10 + 9) = −1, i. e., ≠ 0.
Thus the matrix A is non-singular. Hence it is of rank 3. The rank of I3 is also 3. Since A
and I3 are matrices of the same size and the same rank, therefore A ~ I3 .
Example 11: Reduce the matrix
    [ 1  −1  2  −3 ]
A = [ 4   1  0   2 ]
    [ 0   3  0   4 ]
    [ 0   1  0   2 ]
to the normal form [ Ir O ; O O ] and hence determine its rank.   (Meerut 2001, 09B, 10)
Solution : We have
A ~ [ 1  0   0   0 ]
    [ 4  5  −8  14 ]   by C2 → C2 + C1, C3 → C3 − 2C1, C4 → C4 + 3C1
    [ 0  3   0   4 ]
    [ 0  1   0   2 ]

  ~ [ 1  0   0   0 ]
    [ 0  5  −8  14 ]   by R2 → R2 − 4R1
    [ 0  3   0   4 ]
    [ 0  1   0   2 ]

  ~ [ 1  0   0   0 ]
    [ 0  1   0   2 ]   by R2 ↔ R4
    [ 0  3   0   4 ]
    [ 0  5  −8  14 ]

  ~ [ 1  0   0   0 ]
    [ 0  1   0   0 ]   by C4 → C4 − 2C2
    [ 0  3   0  −2 ]
    [ 0  5  −8   4 ]

  ~ [ 1  0   0   0 ]
    [ 0  1   0   0 ]   by R3 → R3 − 3R2 , R4 → R4 − 5R2
    [ 0  0   0  −2 ]
    [ 0  0  −8   4 ]

  ~ [ 1  0   0   0 ]
    [ 0  1   0   0 ]   by C3 ↔ C4
    [ 0  0  −2   0 ]
    [ 0  0   4  −8 ]

  ~ [ 1  0   0  0 ]
    [ 0  1   0  0 ]   by C3 → −(1/2) C3 , C4 → −(1/8) C4
    [ 0  0   1  0 ]
    [ 0  0  −2  1 ]

  ~ [ 1  0  0  0 ]
    [ 0  1  0  0 ]   by R4 → R4 + 2R3 ,
    [ 0  0  1  0 ]
    [ 0  0  0  1 ]

which is the normal form I4 . Hence rank A = 4.
Example 12: Find the ranks of the following matrices :
     [ 2  −1   3  4 ]
(i)  [ 0   3   4  1 ]
     [ 2   3   7  5 ]                                   (Kanpur 2010)
     [ 2   5  11  6 ]

     [ −2  −1  −3  −1 ]
(ii) [  1   2   3  −1 ]
     [  1   0   1   1 ]        (Meerut 2003, 09B, 10B; Garhwal 14)
     [  0   1   1  −1 ]
Solution : (i) Let us denote the given matrix by A. Performing the elementary
operations R3 → R3 − R1, R4 → R4 − R1, we see that
A ~ [ 2  −1  3  4 ]     [ 2  −1  3  4 ]
    [ 0   3  4  1 ]  ~  [ 0   3  4  1 ]   by R4 → R4 − 2R2
    [ 0   4  4  1 ]     [ 0   4  4  1 ]
    [ 0   6  8  2 ]     [ 0   0  0  0 ]

  ~ [ 2  −1   3  4 ]
    [ 0  12  16  4 ]   by R2 → 4R2 , R3 → 3R3
    [ 0  12  12  3 ]
    [ 0   0   0  0 ]

  ~ [ 2  −1   3   4 ]
    [ 0  12  16   4 ]   by R3 → R3 − R2 .
    [ 0   0  −4  −1 ]
    [ 0   0   0   0 ]
The last equivalent matrix is in Echelon form. The number of non-zero rows in this
matrix is 3. Therefore its rank is 3. Hence rank A = 3.
(ii) Let us denote the given matrix by A. Performing the elementary operation
R1 ↔ R2 , we see that
A ~ [  1   2   3  −1 ]
    [ −2  −1  −3  −1 ]
    [  1   0   1   1 ]
    [  0   1   1  −1 ]

  ~ [ 1   2   3  −1 ]
    [ 0   3   3  −3 ]   by R2 → R2 + 2R1 , R3 → R3 − R1
    [ 0  −2  −2   2 ]
    [ 0   1   1  −1 ]

  ~ [ 1  2  3  −1 ]
    [ 0  1  1  −1 ]   by R2 → (1/3) R2 , R3 → −(1/2) R3
    [ 0  1  1  −1 ]
    [ 0  1  1  −1 ]

  ~ [ 1  2  3  −1 ]
    [ 0  1  1  −1 ]   by R3 → R3 − R2 , R4 → R4 − R2 .
    [ 0  0  0   0 ]
    [ 0  0  0   0 ]
The last equivalent matrix is in Echelon form. The number of non-zero rows in this
matrix is 2. Therefore rank A = 2.
Example 13: Find the rank of the matrix
    [ 2  −2  0  6 ]
A = [ 4   2  0  2 ]
    [ 1  −1  0  3 ]
    [ 1  −2  1  2 ]
by reducing it to normal form.
(Meerut 2008; Avadh 08; Rohilkhand 09)
Solution : Performing the operations R1 → (1/2) R1 , R2 → (1/2) R2 , we see that
A ~ [ 1  −1  0  3 ]     [ 1   −1  0   3 ]
    [ 2   1  0  1 ]  ~  [ 0    3  0  −5 ]   by R2 → R2 − 2R1, R3 → R3 − R1, R4 → R4 − R1
    [ 1  −1  0  3 ]     [ 0    0  0   0 ]
    [ 1  −2  1  2 ]     [ 0   −1  1  −1 ]

  ~ [ 1   0  0   0 ]
    [ 0   3  0  −5 ]   by C2 → C2 + C1 , C4 → C4 − 3C1
    [ 0   0  0   0 ]
    [ 0  −1  1  −1 ]

  ~ [ 1   0  0   0 ]
    [ 0  −1  1  −1 ]   by R2 ↔ R4
    [ 0   0  0   0 ]
    [ 0   3  0  −5 ]

  ~ [ 1  0   0   0 ]
    [ 0  1  −1   1 ]   by R2 → (−1) R2
    [ 0  0   0   0 ]
    [ 0  3   0  −5 ]

  ~ [ 1  0   0   0 ]
    [ 0  1  −1   1 ]   by R4 → R4 − 3R2
    [ 0  0   0   0 ]
    [ 0  0   3  −8 ]

  ~ [ 1  0  0   0 ]
    [ 0  1  0   0 ]   by C3 → C3 + C2 , C4 → C4 − C2
    [ 0  0  0   0 ]
    [ 0  0  3  −8 ]

  ~ [ 1  0  0   0 ]
    [ 0  1  0   0 ]   by R4 ↔ R3
    [ 0  0  3  −8 ]
    [ 0  0  0   0 ]

  ~ [ 1  0  0  0 ]
    [ 0  1  0  0 ]   by C3 → (1/3) C3 , C4 → −(1/8) C4
    [ 0  0  1  1 ]
    [ 0  0  0  0 ]

  ~ [ 1  0  0  0 ]
    [ 0  1  0  0 ]   by C4 → C4 − C3 ,
    [ 0  0  1  0 ]
    [ 0  0  0  0 ]
which is the normal form
[ I3  O ]
[ O   O ] .
Hence rank A = 3.
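As a quick numerical cross-check (an illustration, not part of the text; numpy is an assumed dependency), a library rank computation agrees with the ranks found in Examples 11 and 13:

```python
# Illustrative check of the worked examples with numpy's matrix_rank.
import numpy as np

A11 = np.array([[1, -1, 2, -3], [4, 1, 0, 2], [0, 3, 0, 4], [0, 1, 0, 2]])
A13 = np.array([[2, -2, 0, 6], [4, 2, 0, 2], [1, -1, 0, 3], [1, -2, 1, 2]])
print(np.linalg.matrix_rank(A11))  # 4, as in Example 11
print(np.linalg.matrix_rank(A13))  # 3, as in Example 13
```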
Example 14: Find two non-singular matrices P and Q such that PAQ is in the normal form where
1 1 1
A = 1 −1 −1 .
3 1 1
Also find the rank of the matrix A. (Garhwal 2006, 10; Kumaun 10)
Solution : We write A = I3 A I3 , i. e.,
[ 1   1   1 ]   [ 1  0  0 ]     [ 1  0  0 ]
[ 1  −1  −1 ] = [ 0  1  0 ]  A  [ 0  1  0 ] .
[ 3   1   1 ]   [ 0  0  1 ]     [ 0  0  1 ]
Now we go on applying E-operations on the matrix A (the left hand member of the
above equation) until it is reduced to the normal form. Every E-row operation will also
be applied to the pre-factor I3 (or its transform) of the product on the right hand
member of the above equation and every E-column operation to the post-factor I3 (or
its transform).
Performing R2 → R2 − R1 , R3 → R3 − 3R1 , we get
[ 1   1   1 ]   [  1  0  0 ]     [ 1  0  0 ]
[ 0  −2  −2 ] = [ −1  1  0 ]  A  [ 0  1  0 ] .
[ 0  −2  −2 ]   [ −3  0  1 ]     [ 0  0  1 ]
Performing C2 → C2 − C1 , C3 → C3 − C1 , we get
[ 1   0   0 ]   [  1  0  0 ]     [ 1  −1  −1 ]
[ 0  −2  −2 ] = [ −1  1  0 ]  A  [ 0   1   0 ] .
[ 0  −2  −2 ]   [ −3  0  1 ]     [ 0   0   1 ]
Performing R2 → −(1/2) R2 , we get
[ 1   0   0 ]   [  1     0    0 ]     [ 1  −1  −1 ]
[ 0   1   1 ] = [ 1/2  −1/2   0 ]  A  [ 0   1   0 ] .
[ 0  −2  −2 ]   [ −3     0    1 ]     [ 0   0   1 ]
Performing R3 → R3 + 2R2 , we get
[ 1  0  0 ]   [  1     0    0 ]     [ 1  −1  −1 ]
[ 0  1  1 ] = [ 1/2  −1/2   0 ]  A  [ 0   1   0 ] .
[ 0  0  0 ]   [ −2    −1    1 ]     [ 0   0   1 ]
Performing C3 → C3 − C2 , we get
[ 1  0  0 ]   [  1     0    0 ]     [ 1  −1   0 ]
[ 0  1  0 ] = [ 1/2  −1/2   0 ]  A  [ 0   1  −1 ] .
[ 0  0  0 ]   [ −2    −1    1 ]     [ 0   0   1 ]
∴ PAQ = [ I2  O ]
        [ O   O ] ,
where
P = [  1     0    0 ]        [ 1  −1   0 ]
    [ 1/2  −1/2   0 ] ,  Q = [ 0   1  −1 ] .
    [ −2    −1    1 ]        [ 0   0   1 ]
Since A ~ [ I2 O ; O O ], therefore rank A = 2.
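The result can be verified numerically. The snippet below is an illustrative check only (numpy is an assumed dependency): multiplying out P A Q with the matrices just found recovers the normal form with r = 2.

```python
# Illustrative verification of Example 14: P A Q should be [I2 O; O O].
import numpy as np

A = np.array([[1, 1, 1], [1, -1, -1], [3, 1, 1]])
P = np.array([[1, 0, 0], [0.5, -0.5, 0], [-2, -1, 1]])
Q = np.array([[1, -1, 0], [0, 1, -1], [0, 0, 1]])
print(P @ A @ Q)   # [[1. 0. 0.], [0. 1. 0.], [0. 0. 0.]]
```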
O O
Comprehensive Exercise 2
Find the ranks of the following matrices :

     [ 1  −1   3   6 ]        [ 8  0  0  1 ]
10. [ 1   3  −3  −4 ]    11. [ 1  0  8  1 ]   (Kumaun 2008)
     [ 5   3   3  11 ]        [ 0  0  1  8 ]
                              [ 0  8  1  8 ]

     [ 0  1  −3  −1 ]
12. [ 1  0   1   1 ]
     [ 3  1   0   2 ]    (Garhwal 2006, 13; Avadh 10)
     [ 1  1  −2   0 ]
     [ 1  −3  4  7 ]                      [ 1  1  1 ]
13. [ 9   1  2  0 ]   (Kumaun 2008)  14. [ 2  2  2 ]
                                          [ 3  3  3 ]

     [ 2   1   3 ]                        [ 4  5  6 ]
15. [ 4   7  13 ]   (Kumaun 2012)    16. [ 5  6  7 ]
     [ 4  −3  −1 ]                        [ 7  8  9 ]

     [  1   0   2  1 ]                    [ 1   1   1  ]
17. [  0   1  −2  1 ]                18. [ a   b   c  ]
     [  1  −1   4  0 ]                    [ a3  b3  c3 ]
     [ −2   2   8  0 ]        (Meerut 2011; Kumaun 12)
19. With the help of elementary transformations find the rank of the following
matrix:
[ 1   1   2   3 ]
[ 1   3   0   3 ]
[ 1  −2  −3  −3 ]          (Rohilkhand 2010)
[ 1   1   2   3 ]
20. Reduce the following matrix to its Echelon form and find its rank :
[  1   3   4   5 ]
[  3   9  12   9 ]
[ −1  −3  −4  −3 ]
(Meerut 2004B; Rohilkhand 06, 10)
21. Find the rank of the matrix
    [ 1  2  3 ]
A = [ 2  3  4 ]
    [ 3  5  7 ]
after reducing it to normal form.
(Garhwal 2015)
22. Reduce the matrix
[ 0  1   3  −1 ]
[ 1  0   1   1 ]
[ 3  1   0   2 ]
[ 1  1  −2   0 ]
to normal form and find its rank.
     [ 4  0  2 ]   [ 3   9  0  2 ]
(ii) [ 3  1  0 ] , [ 7  −2  0  1 ]
     [ 5  0  0 ]   [ 8   1  1  5 ]
25. Find the ranks of A, B, A + B, AB and BA where
    [ 1   1  −1 ]        [ −1  −2  −1 ]
A = [ 2  −3   4 ] ,  B = [  6  12   6 ] .
    [ 3  −2   3 ]        [  5  10   5 ]
26. Show that if A and B are equivalent matrices, then there exist non-singular
matrices P and Q such that B = PAQ.
27. Show that the rank of a matrix is not altered if a column of it is multiplied by a
non-zero scalar.
28. (i) What is the rank of a non-singular matrix of order n?
(ii) What is the rank of an elementary matrix?
Answers 2
1. 3 2. 2 3. 3 4. 2 5. 2 6. 2 7. 3 8. 4 9. 3
10. 3 11. 4 12. 2 13. 2 14. 1 15. 2 16. 2 17. 3
18. rank (A) = 3 if a ≠ b ≠ c and a + b + c ≠ 0 ; rank (A) = 2 if a ≠ b ≠ c and a + b + c = 0 ; also rank (A) = 2 if a = b ≠ c ; and rank (A) = 1 if a = b = c
19. 3 20. 2 21. 2 22. 3 23. (i) 3 (ii) 2
24. (i) No, since the rank of the first matrix is 4 and that of the second matrix is 2.
(ii) No, since the matrices are not of the same type
25. Rank A = 2; Rank B = 1; Rank A + B = 2;
Rank AB = 0; Rank BA = 1
28. (i) n. (ii) Equal to the order of the matrix
Theorem: If A is an m × n matrix of rank r, then there exists a non-singular matrix P such that
PA = [ G ]
     [ O ] ,
where G is an r × n matrix of rank r.
Proof: Since A is of rank r, there exist non-singular matrices P and Q such that
PAQ = [ Ir  O ]
      [ O   O ] .                                  …(1)
The matrix Q is a product of elementary matrices, say Q = Q1 Q2 … Qt , so that
PAQ1 Q2 … Qt = [ Ir  O ]
               [ O   O ] .                         …(2)
Post-multiplying (2) by Qt−1, Qt−1−1, …, Q1−1 in turn (i. e., undoing the corresponding E-column operations, which leave the zero rows zero), we obtain
PA = [ G ]
     [ O ] ,
where G consists of the first r rows of the resulting matrix.
Since elementary transformations do not alter the rank, the rank of the matrix PA is the same as that of the matrix A, which is r. Thus the rank of the matrix
[ G ]
[ O ]
is r, and therefore the rank of the matrix G is also r, since G has r rows and the last m − r rows of the matrix consist of zeros only.
Similarly, if A is an m × n matrix of rank r, there exists a non-singular matrix Q such that AQ = [ H  O ], where H is an m × r matrix of rank r. For, there exist non-singular matrices P and Q such that
PAQ = [ Ir  O ]
      [ O   O ] .                                  …(1)
The matrix P is a product of elementary matrices, say P = P1 P2 … Ps , so that
P1 P2 … Ps AQ = [ Ir  O ]
                [ O   O ] .                        …(2)
Pre-multiplying (2) by Ps−1, …, P1−1 in turn, we get AQ = [ H  O ].
Now elementary transformations do not alter the rank. Therefore the rank of the matrix AQ is the same as that of A, which is r. Thus the rank of the matrix [ H  O ] is r, and therefore the rank of the matrix H is also r, as the matrix H has r columns and the last n − r columns of the matrix [ H  O ] consist of zeros only.
Theorem: The rank of the product of two matrices cannot exceed the rank of either matrix.
Proof: Let A and B be two m × n and n × p matrices respectively. Let r1, r2 be the ranks of A and B respectively and let r be the rank of the product AB.
To prove r ≤ r1 and r ≤ r2 .
Since A is an m × n matrix of rank r1, therefore there exists a non-singular matrix P such that
PA = [ G ]
     [ O ] ,
where G is an r1 × n matrix of rank r1 and O is (m − r1) × n.
∴ PAB = [ G ] B = [ GB ]
        [ O ]     [ O  ] .
Since the rank of a matrix does not alter on multiplying it by a non-singular matrix, the rank of the matrix PAB is the same as the rank of AB, which is r.
Since the matrix G has only r1 rows, the matrix PAB cannot have more than r1 non-zero rows, these rows arising on multiplying the r1 rows of G with the columns of B.
∴ rank (AB) = rank (PAB) ≤ r1 , i. e., r ≤ r1 .
Similarly, using a non-singular matrix Q such that BQ = [ H  O ], where H is an n × r2 matrix of rank r2 , we can show that the matrix ABQ = [ AH  O ] has at most r2 non-zero columns, so that r ≤ r2 .
16 Theorem
Every non-singular matrix is row equivalent to a unit matrix.
Proof: We shall prove the theorem by induction on n, the order of the matrix. If the matrix be of order 1, i. e., if A = [a11] with a11 ≠ 0, the theorem obviously holds, since A can be reduced to [1] by the E-row operation R1 → (1/a11) R1 .
Let us assume that the theorem holds for all non-singular matrices of order n − 1.
Let A = [aij] be an n × n non-singular matrix. The first column of the matrix A has at
least one element different from zero, for otherwise we shall have |A | = 0 and the
matrix A will not be non-singular.
Let ap1 = k ≠ 0 .
By interchanging the p th row with the first row (if necessary), we obtain a matrix B
whose leading element is equal to k which is not equal to zero.
Multiplying the elements of the first row of the matrix B by 1 / k, we obtain a matrix C
whose leading element is equal to unity.
Subtracting suitable multiples of the first row of C from the remaining rows, we obtain a matrix D in which all elements of the first column except the leading element are equal to zero. Thus
D = [ 1  b  ]
    [ O  A1 ] ,
where A1 is a square matrix of order n − 1. Since D has been obtained from the non-singular matrix A by E-row operations, D is non-singular, and |D| = |A1| shows that A1 is non-singular. By the induction hypothesis, A1 can be reduced to In−1 by E-row operations; applying these same operations to D (they do not affect the first row or the first column), we obtain a matrix M whose first column is that of In and whose last n − 1 rows are those of In .
By adding suitable multiples of the second, third, …, nth rows to the first row of M, we obtain the matrix In .
Thus the matrix A has been reduced to In by E-row operations only.
The proof is now complete by induction.
Corollary 1: If A be an n-rowed non-singular matrix, there exist E-matrices E1, E2 , …, Et such that
Et Et−1 … E2 E1 A = In .                           …(1)
Pre-multiplying both sides of the relation (1) by (Et Et−1 … E2 E1)−1, we get
A = (Et Et−1 … E2 E1)−1 = E1−1 E2−1 … Et−1−1 Et−1 .
Since the inverse of an elementary matrix is also an elementary matrix of the same type, we conclude that every non-singular matrix can be expressed as a product of elementary matrices.
We write A = In A. Now we go on applying E-row transformations only to the matrix A and the pre-factor In of the product In A till we reach the result In = BA. Then B = A−1.
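This procedure translates directly into code. The sketch below is illustrative only (the function name is hypothetical, and the use of Python's fractions module for exact arithmetic is an assumption): it applies the same E-row operations simultaneously to A and to In, returning the transformed In as the inverse. It assumes A is non-singular.

```python
# Illustrative sketch: inverse by E-row transformations (Gauss-Jordan style).
from fractions import Fraction

def inverse_by_row_ops(rows):
    n = len(rows)
    a = [[Fraction(x) for x in row] for row in rows]
    b = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # I_n
    for col in range(n):
        # bring a non-zero pivot into place (assumes A non-singular)
        p = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[p] = a[p], a[col]
        b[col], b[p] = b[p], b[col]
        k = a[col][col]
        a[col] = [x / k for x in a[col]]          # R -> (1/k) R
        b[col] = [x / k for x in b[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:       # clear the rest of the column
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
                b[r] = [x - f * y for x, y in zip(b[r], b[col])]
    return b

# Example 15's matrix; expect [-1/4 3/4 -1; 3/4 -1/4 0; -1/4 -1/4 1].
print(inverse_by_row_ops([[1, 2, 1], [3, 2, 3], [1, 1, 2]]))
```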
Example 15: Find the inverse of the matrix
    [ 1  2  1 ]
A = [ 3  2  3 ]
    [ 1  1  2 ]
by using E-transformations.
(Avadh 2006, 08; Purvanchal 09; Rohilkhand 09; Lucknow 09; Garhwal 10, 12)
Solution : We write A = I3 A, i. e.,
[ 1  2  1 ]   [ 1  0  0 ]
[ 3  2  3 ] = [ 0  1  0 ] A.
[ 1  1  2 ]   [ 0  0  1 ]
Now we go on applying E-row transformations to the matrix A (the left hand member
of the above equation) until it is reduced to the form I3 . Every E-row transformation
will also be applied to the prefactor I3 (or its transform) of the product on the right
hand side of the above equation.
Performing R2 → R2 − 3R1 , R3 → R3 − R1 , we get
[ 1   2  1 ]   [  1  0  0 ]
[ 0  −4  0 ] = [ −3  1  0 ] A.
[ 0  −1  1 ]   [ −1  0  1 ]
Now we should try to make 1 in the place of the second element of the second row of the matrix on the left hand side. So applying R2 → −(1/4) R2 , we get
[ 1   2  1 ]   [  1     0    0 ]
[ 0   1  0 ] = [ 3/4  −1/4   0 ] A.
[ 0  −1  1 ]   [ −1     0    1 ]
Now we shall make zeros in the place of the second elements of the first and third rows with the help of the second row. So applying R1 → R1 − 2R2 , R3 → R3 + R2 , we get
[ 1  0  1 ]   [ −1/2    1/2  0 ]
[ 0  1  0 ] = [  3/4   −1/4  0 ] A.
[ 0  0  1 ]   [ −1/4   −1/4  1 ]
Now the third element of the third row is already 1. So to make the third element of the first row zero, we apply R1 → R1 − R3 , and we get
[ 1  0  0 ]   [ −1/4    3/4  −1 ]
[ 0  1  0 ] = [  3/4   −1/4   0 ] A.
[ 0  0  1 ]   [ −1/4   −1/4   1 ]

Thus I3 = BA, where
B = [ −1/4    3/4  −1 ]
    [  3/4   −1/4   0 ] .
    [ −1/4   −1/4   1 ]

∴ A−1 = B = [ −1/4    3/4  −1 ]
            [  3/4   −1/4   0 ] .
            [ −1/4   −1/4   1 ]
Example 16: Find the inverse of the matrix
    [ 0  1  2  2 ]
A = [ 1  1  2  3 ]
    [ 2  2  2  3 ]
    [ 2  3  3  3 ]
by using E-transformations.
Solution : We write A = I4 A, i. e.,
[ 0  1  2  2 ]   [ 1  0  0  0 ]
[ 1  1  2  3 ] = [ 0  1  0  0 ] A.
[ 2  2  2  3 ]   [ 0  0  1  0 ]
[ 2  3  3  3 ]   [ 0  0  0  1 ]
Performing R1 ↔ R2 , we get
[ 1  1  2  3 ]   [ 0  1  0  0 ]
[ 0  1  2  2 ] = [ 1  0  0  0 ] A.
[ 2  2  2  3 ]   [ 0  0  1  0 ]
[ 2  3  3  3 ]   [ 0  0  0  1 ]
Performing R3 → R3 − 2R1 , R4 → R4 − 2R1 , we get
[ 1  1   2   3 ]   [ 0   1  0  0 ]
[ 0  1   2   2 ] = [ 1   0  0  0 ] A.
[ 0  0  −2  −3 ]   [ 0  −2  1  0 ]
[ 0  1  −1  −3 ]   [ 0  −2  0  1 ]
Performing R4 → R4 − R2 , R1 → R1 − R2 , we get
[ 1  0   0   1 ]   [ −1   1  0  0 ]
[ 0  1   2   2 ] = [  1   0  0  0 ] A.
[ 0  0  −2  −3 ]   [  0  −2  1  0 ]
[ 0  0  −3  −5 ]   [ −1  −2  0  1 ]
Performing R3 → −(1/2) R3 , we get
[ 1  0   0    1  ]   [ −1   1    0    0 ]
[ 0  1   2    2  ] = [  1   0    0    0 ] A.
[ 0  0   1   3/2 ]   [  0   1  −1/2   0 ]
[ 0  0  −3   −5  ]   [ −1  −2    0    1 ]
Performing R2 → R2 − 2R3 , R4 → R4 + 3R3 , we get
[ 1  0  0    1  ]   [ −1   1    0    0 ]
[ 0  1  0   −1  ] = [  1  −2    1    0 ] A.
[ 0  0  1   3/2 ]   [  0   1  −1/2   0 ]
[ 0  0  0  −1/2 ]   [ −1   1  −3/2   1 ]
Performing R4 → −2R4 , we get
[ 1  0  0   1  ]   [ −1   1    0    0 ]
[ 0  1  0  −1  ] = [  1  −2    1    0 ] A.
[ 0  0  1  3/2 ]   [  0   1  −1/2   0 ]
[ 0  0  0   1  ]   [  2  −2    3   −2 ]
Performing R3 → R3 − (3/2) R4 , R2 → R2 + R4 , R1 → R1 − R4 , we get
[ 1  0  0  0 ]   [ −3   3  −3   2 ]
[ 0  1  0  0 ] = [  3  −4   4  −2 ] A.
[ 0  0  1  0 ]   [ −3   4  −5   3 ]
[ 0  0  0  1 ]   [  2  −2   3  −2 ]
∴ A−1 = [ −3   3  −3   2 ]
        [  3  −4   4  −2 ]
        [ −3   4  −5   3 ] .
        [  2  −2   3  −2 ]
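As an illustrative check (not part of the text; numpy is an assumed dependency), one can verify that the matrix found above is indeed the inverse of A:

```python
# Illustrative verification: A times its computed inverse gives I4.
import numpy as np

A = np.array([[0, 1, 2, 2], [1, 1, 2, 3], [2, 2, 2, 3], [2, 3, 3, 3]])
Ainv = np.array([[-3, 3, -3, 2], [3, -4, 4, -2], [-3, 4, -5, 3], [2, -2, 3, -2]])
print(np.allclose(A @ Ainv, np.eye(4)))  # True
```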
Comprehensive Exercise 3
1. Reduce the matrix
    [  1  2  1 ]
A = [ −1  0  2 ]
    [  2  1  3 ]
to I3 by E-row transformations only.
2. Find the inverses of the following matrices by using elementary row transformations :

      [ −1  −3   3  −1 ]          [ 1  3  3 ]
(iii) [  1   1  −1   0 ]     (iv) [ 1  4  3 ]
      [  2  −5   2  −3 ]          [ 1  3  4 ]
      [ −1   1   0   1 ]
                         (Avadh 2007; Garhwal 15)

     [ 0  1  2 ]
(v)  [ 1  2  3 ]
     [ 3  1  1 ]
                  (Rohilkhand 2011; Bundelkhand 08)
Answers 3

        [  1  −3   2 ]            [ 0    1/4   −1/2 ]
2. (i)  [ −3   3  −1 ]      (ii)  [ −1  3i/4    i/2 ]
        [ −2  −1   0 ]            [ 0    1/4    1/2 ]

       [  0  2   1   3 ]          [  7  −3  −3 ]
(iii)  [  1  1  −1  −2 ]    (iv)  [ −1   1   0 ]
       [  1  2   0   1 ]          [ −1   0   1 ]
       [ −1  1   2   6 ]

      [ 1/2  −1/2  1/2 ]
(v)   [ −4     3   −1  ]
      [ 5/2  −3/2  1/2 ]
(a) rank A = r + 1
(b) rank A = r
(d) rank A ≥ r + 1
(b) rank A = 1
(c) rank A = m
(d) rank A = n
3. The rank of the matrix
[ 1  2  3  0 ]
[ 2  4  3  2 ]
[ 3  2  1  3 ]
[ 6  8  7  5 ]
is
(a) 1     (b) 2
(c) 3     (d) 4
4. If A and B are two matrices such that rank of A = m and rank of B = n, then
(a) rank (A B) = m n
10. If two matrices A and B have the same size and the same rank, they are ……… .

11. The rank of the matrix [ 1  0 ] is ……… .
                           [ 0  1 ]                    (Agra 2008)
True or False
Write ‘T’ for true and ‘F’ for false statement.
1. The rank of the transpose of a matrix is the same as that of the original matrix.
Answers
True or False
1. T 2. F 3. F
4. T 5. T
7
Applications of Matrices
1 Vectors
Definition: Any ordered n-tuple of numbers is called an n-vector. By an ordered n-tuple we mean a set consisting of n numbers in which the place of each number is fixed.
If x1, x2 , ...., xn be any n numbers, then the ordered n -tuple X = ( x1, x2 , ..., xn) is called an
n-vector. The ordered triad ( x1, x2 , x3 ) is called a 3-vector. Similarly (1, 0 , 1, − 1) and
(1, 8, − 5, 7) are 4-vectors. The n numbers x1, x2 , ..., xn are called components of the
n-vector X = ( x1, x2 , ..., xn). A vector may be written either as a row vector or as a column
vector. If A be a matrix of the type m × n, then each row of A will be an n-vector and each
column of A will be an m-vector. A vector whose components are all zero is called a zero
vector and will be denoted by O.
If k be any number and X be any vector, then relative to the vector X, k is called a scalar.
Equality of two vectors. Two n-vectors X and Y where X = ( x1, x2 ,..., xn) and
Y = ( y1, y2 , ..., yn) are said to be equal if and only if their corresponding components
are equal, i. e., if xi = yi for all i = 1, 2, …, n.
For example, if X = (1, 2, 3) and Y = (1, 2, 3), then X = Y.
Multiplication of a vector by a scalar. If k be any scalar and
X = ( x1, x2 , …, xn),
then by definition
k X = (k x1, k x2 , …, k xn).
The vector k X is called the scalar multiple of the vector X by the scalar k. Obviously 1 X = X and 0 X = (0 , 0 , …, 0 ) = O.
(i) X + Y = Y + X. (ii) X + (Y + Z) = (X + Y) + Z.
A set of r n-vectors X1, X2 , …, X r is said to be linearly dependent if there exist r scalars (numbers)
k1, k2 , ..., k r , not all zero, such that
k1X1 + k2 X2 + .. + k r X r = O,
A set of r n-vectors X1, X2 , ..., X r is said to be linearly independent if every relation of the type
k1X1 + k2 X2 + … + k r X r = O
implies k1 = k2 = k3 … = k r = 0 .
Example 1: Show that the vectors X1 = (1, 2, 4), X2 = (3, 6, 12) are linearly dependent.
Solution : We have 3 X1 − X2 = 3 (1, 2, 4) − (3, 6, 12) = (0 , 0 , 0 ) = O.
Thus there exist numbers k1 = 3, k2 = − 1 which are not all zero such that
k1X1 + k2 X2 = O.
Hence the vectors X1, X2 are linearly dependent.
Example 2: Show that the set consisting only of the zero vector, O, is linearly dependent.
Solution : Let X = (0 , 0 , 0 , ...., 0 ) be an n-vector whose components are all zero. Then
the relation kX = O is true for some non-zero value of the number k. For example,1X = O
and 1 ≠ 0.
Example 3: Show that the vectors X1 = (1, 2, 3) and X2 = (4, − 2, 7) are linearly independent.
Solution : Let k1, k2 be two numbers such that k1X1 + k2 X2 = O,
i.e., (k1 + 4 k2 , 2 k1 − 2 k2 , 3 k1 + 7 k2 ) = (0 , 0 , 0 ).
Equating the corresponding components, we get
k1 + 4 k2 = 0 , 2 k1 − 2 k2 = 0 , 3 k1 + 7 k2 = 0 .
The only common values of k1 and k2 which satisfy these equations are k1 = 0 , k2 = 0 . Hence the vectors X1 and X2 are linearly independent.
Example 4: Show that the set of vectors
X1 = (1, 0 , 0 ), X2 = (0 , 1, 0 ), X3 = (0 , 0 , 1)
is linearly independent.
Solution : Let k1, k2 , k3 be three numbers such that
k1X1 + k2 X2 + k3 X3 = O,
i. e., k1(1, 0 , 0 ) + k2 (0 , 1, 0 ) + k3 (0 , 0 , 1) = (0 , 0 , 0 ),
i. e., (k1, 0 , 0 ) + (0 , k2 , 0 ) + (0 , 0 , k3 ) = (0 , 0 , 0 ),
i. e., (k1, k2 , k3 ) = (0 , 0 , 0 ), so that k1 = k2 = k3 = 0. Hence the given set of vectors is linearly independent.
Linear combination. Definition: A vector X is said to be a linear combination of the vectors X1, X2 , …, X r if it can be expressed in the form
X = k1X1 + k2 X2 + … + k r X r ,
where k1, k2 , …, k r are scalars.
(i) If a set of vectors is linearly dependent, then at least one member of the set can be expressed as
a linear combination of the remaining members.
(ii) If a set of vectors is linearly independent then no member of the set can be expressed as a linear
combination of the remaining members.
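These remarks suggest a simple computational test (an illustration, not part of the text; the helper name is hypothetical and numpy is an assumed dependency): a set of n-vectors is linearly independent exactly when the rank of the matrix having the vectors as its rows equals the number of vectors.

```python
# Illustrative independence test via rank.
import numpy as np

def is_independent(vectors):
    m = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(m) == len(vectors)

print(is_independent([(1, 2, 4), (3, 6, 12)]))   # False (Example 1)
print(is_independent([(1, 2, 3), (4, -2, 7)]))   # True  (Example 3)
```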
It is important to note that every sub-space of Vn contains the zero vector, this being the scalar multiple of any vector of the sub-space by the scalar zero.
For example, let S be the set of all scalar multiples k1a of a fixed vector a of Vn. The sum of two members k1a and k2 a of S is the vector k1a + k2 a, i. e., (k1 + k2 ) a, which is also a member of S. Also the scalar multiple by any scalar x of any vector k1a of S is the vector ( xk1)a which is again a member of S. Hence S is a subspace of Vn.
Similarly, if a, b, c are fixed vectors of Vn, the sum of two vectors of the form p1a + p2 b + p3 c, and every scalar multiple of such a vector, is again a vector of that form, which is again a member of S. Thus S is a vector subspace and we say that S is spanned by the vectors a, b and c. More generally, if a1, a2 , …, a r be a set of r fixed vectors of Vn,
then the set S of all n-vectors of the form p1a1 + p2 a2 + … pr a r where p1, p2 , ...., pr are
any scalars is a vector subspace of Vn.
Consider the n vectors
e1 = (1, 0 , 0 , …, 0 ), e2 = (0 , 1, 0 , …, 0 ), …, en = (0 , 0 , 0 , …, 1).
We have already shown that these vectors are linearly independent. Moreover any
vector a = (a1, a2 , ..., an) of Vn is expressible as a = a1e1 + a2 e2 + a3 e3 + … + anen. Hence
the vectors e1, e2 , e3 , ..., en constitute a basis of Vn.
Theorem: A basis of a subspace, S, can always be selected out of a set of vectors which span S.
Proof: Let a1, a2 , …, a r be a set of vectors which spans a subspace S. If these vectors are linearly independent, they already constitute a basis of S as they span S. In case they are linearly dependent, some member of the set is a linear combination of the remaining members. Deleting this member we obtain a smaller set which also spans S. Continuing this process of deletion, we ultimately arrive at a linearly independent subset which still spans S and which therefore constitutes a basis of S.
A subspace will in general have many different bases. But it is important to note that the number of members in any one basis of a subspace is the same as in any other basis. This number is called the dimension of the subspace.
We have already shown that one basis of Vn possesses n members. Therefore every basis of Vn must possess n members. Thus Vn is of dimension n. In particular, the dimension of V3 is 3.
Also it can be easily shown that if r be the dimension of a subspace S and if a1, a2 , …, a k be a linearly independent set of vectors belonging to S, then we can always find vectors a k+1, a k+2 , …, a r such that the vectors a1, a2 , …, a k , a k+1, …, a r constitute a basis of S.
In other words we can say that every linearly independent set of vectors belonging to a
subspace S can always be extended so as to constitute a basis of S.
Moreover if r be the dimension of a subspace S, then every set of more than r members
of S will be linearly dependent.
Row rank of a matrix. Definition: The maximum number of linearly independent rows of a matrix A is said to be the row rank of the matrix A.
Row nullity. If A is an m × n matrix, the set of all m-vectors X such that XA = O is a subspace of Vm ; its dimension s is called the row nullity of A.
We shall now prove that the sum of the row rank and the row nullity of a matrix is equal to the number of rows, i. e.,
r + s = m,
where r is the row rank of A.
Proof: Since the row space R of A is spanned by the row vectors of A, it is the set of all vectors of the form x1R1 + x2 R2 + … + xm R m , where R1, R2 , …, R m are the rows of A and x1, x2 , …, xm are scalars,
i. e., of the form ( x1a11 + x2 a21 + … + xm am1, x1a12 + x2 a22 + … + xm am2 , …, x1a1n + x2 a2n + … + xm amn),
i. e., of the form XA, where X = ( x1, x2 , …, xm ) is an arbitrary vector of Vm .
Let u1, u2 , ..., u s be a basis of the subspace S of Vm generated by all vectors X such that
XA = O. Then, we have
u1A = u2 A = … = u s A = O.
Since the vectors u1, u2 , ..., u s belong to Vm and form a linearly independent set,
therefore we can find vectors u s + 1, u s + 2 , …, u m in Vm such that the vectors
u1, u2 , ..., u s , u s + 1, ..., u m constitute a basis of Vm . Then every vector X belonging to Vm
can be expressed in the form
X = h1u1 + h2 u2 + … + hm u m .
Therefore every member XA of the row space R can be expressed
as (h1u1 + h2 u2 + … + hm u m ) A,
i. e., as h1u1A + h2 u2 A + … + hs u s A + hs+1u s+1A + hs+2 u s+2 A + … + hm u m A,
i. e., as hs+1u s+1A + hs+2 u s+2 A + … + hm u m A, since u1A = u2 A = … = u s A = O.
Thus the vectors u s+1A, u s+2 A, …, u m A span R. They are also linearly independent. For, a relation
k s+1u s+1A + k s+2 u s+2 A + … + k m u m A = O
implies (k s+1u s+1 + k s+2 u s+2 + … + k m u m )A = O,
i. e., the vector k s+1u s+1 + k s+2 u s+2 + … + k m u m belongs to S and is therefore expressible as a linear combination p1u1 + p2 u2 + … + ps u s of the basis vectors of S.
But the vectors u1, u2 , …, u m are linearly independent. Therefore a relation of the form k s+1u s+1 + k s+2 u s+2 + … + k m u m = p1u1 + p2 u2 + … + ps u s will exist if and only if k s+1 = k s+2 = … = k m = 0 . Hence the vectors u s+1A, u s+2 A , …, u m A are linearly independent and form a basis of R. Thus the dimension of R is m − s.
Hence r = m − s, or r + s = m.
Theorem: Row equivalent matrices have the same row space, and hence the same row rank.
Proof: Let A be any given m × n matrix. Let B be a matrix row equivalent to A. Since B is obtainable from A by a finite chain of E-row operations and every E-row operation is equivalent to pre-multiplication by the corresponding E-matrix, there exist E-matrices E1, E2 , …, Ek each of the type m × m such that
B = Ek Ek−1 … E2 E1A, i. e., B = PA ,              …(1)
where P = Ek Ek−1 … E2 E1 = [ pij ] is a non-singular m × m matrix. Let us write
    [ R1 ]
A = [ R2 ]
    [ ⋮  ]
    [ R m ] ,
where the matrix A has been expressed as a matrix of its row sub-matrices R1, R2 , …, R m .
From the product of the matrices on the R.H.S. of (1), we observe that the rows of the matrix B are
p11R1 + p12 R2 + … + p1m R m ,
p21R1 + p22 R2 + … + p2 m R m ,
… … … … …
pm1R1 + pm2 R2 + … + pmm R m .
Thus we see that the rows of B are all linear combinations of the rows R1, R2 , ...., R m of
A. Therefore every member of the row space of B is also a member of the row space of A.
Similarly by writing A = P−1B and giving the same reasoning we can prove that every
member of the row space of A is also a member of the row space of B. Therefore the row
spaces of A and B are identical.
Thus we see that elementary row operations do not alter the row space of a matrix.
Hence the row rank of a matrix remains invariant under E-row transformations.
Theorem: Column equivalent matrices have the same column space, and hence the same column rank.
Or
Post-multiplication by a non-singular matrix does not alter the column rank of a matrix.
Proof: Proceeding in the same way as in article 10, we can show that
post-multiplication with a non-singular matrix does not alter the column space and
therefore the column rank of a matrix.
Note: Since every n-rowed E-matrix is obtained from In by a single E-operation (row or column operation as may be desired), therefore the row rank and the column rank of an E-matrix are each equal to n.
Theorem: Row equivalent matrices have the same column rank.
Proof: Let A be any given m × n matrix and let B be a matrix row equivalent to A. Then there exists a non-singular matrix P such that B = PA.
If X is any n-vector such that AX = O, then
BX = (PA) X = P (AX) = PO = O.
Conversely, if BX = O, then AX = (P−1B) X = P−1(BX) = P−1O = O.
Thus we see that the matrices A and B have the same right nullities and consequently their column ranks are equal.
Similarly we can prove that column equivalent matrices have the same row rank.
13 Theorem
If r be the row rank of an m × n matrix A, then there exists a non-singular matrix P such that
PA = [ K ]
     [ O ] ,
where K is an r × n matrix of row rank r and O is the (m − r) × n zero matrix.
Proof: If the row rank r of A is zero, we have nothing to prove. Therefore let us assume that r > 0. The matrix A has then r linearly independent rows. By elementary row operations on A we can bring these linearly independent rows in the first r places. Since the last m − r rows are now linear combinations of the first r rows, they can be made zero by E-row operations without altering the first r rows.
Thus we see that the matrix A is row equivalent to a matrix B such that
B = [ K ]
    [ O ] ,
where K consists of the first r rows of B. Since B is obtainable from A by E-row operations, there exists a non-singular matrix P such that
PA = [ K ]
     [ O ] .
Similarly, if the column rank of an m × n matrix A is r, there exists a non-singular matrix R such that
AR = [ L  O ],
where L is an m × r matrix of column rank r.
Theorem 1: The rank of a matrix is equal to its row rank.
Proof: Let s be the row rank and r the rank of an m × n matrix A. Since the matrix A is of row rank s, therefore by article 13, there exists a non-singular matrix P such that
PA = [ K ]
     [ O ] ,
where K is an s × n matrix.
Now we know that pre-multiplication by a non-singular matrix does not alter the rank of a matrix.
∴ rank (PA) = rank A = r.
But each minor of order s + 1 of the matrix PA involves at least one row of zeros and is therefore zero.
∴ rank (PA) ≤ s.
∴ r ≤ s.
Again, since the rank of the matrix A is r, therefore by article 13 of chapter 6 there exists a non-singular matrix R such that
RA = [ G ]
     [ O ] ,
where G is an r × n matrix.
Now we know that pre-multiplication by a non-singular matrix does not alter the row rank of a matrix.
∴ row rank (RA) = row rank A = s.
But the matrix RA has only r non-zero rows. Therefore the row rank of RA can, at the most, be equal to r.
∴ s ≤ r.
Hence r = s.
Theorem 2: The column rank of a matrix is equal to its rank.
Proof: Let the matrix A′ be the transpose of the matrix A. Then the columns of A are the rows of A′ .
∴ the column rank of A = the row rank of A′ = the rank of A′ = the rank of A.
Thus from theorems 1 and 2, we conclude that the rank, row rank and column rank of a
matrix are all equal. In other words we can say that the maximum number of linearly
independent rows of a matrix is equal to the maximum number of its linearly
independent columns and is equal to the rank of the matrix.
Thus we have proved that the row rank, the column rank and the rank of a matrix are
all equal. Therefore sometimes the rank of a matrix is also defined as the maximum number of
linearly independent row vectors or column vectors.
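A small numerical illustration of this conclusion (not part of the text; numpy is an assumed dependency): since the row rank of A is the column rank of A′ and both equal rank A, a matrix and its transpose always have the same rank.

```python
# Illustrative check: rank A equals rank of the transpose of A.
import numpy as np

A = np.array([[1, 2, 3], [2, 4, 6], [1, 0, 1]])   # second row = 2 x first row
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))  # 2 2
```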
We shall now first consider systems of linear homogeneous equations and then proceed to discuss systems of non-homogeneous linear equations.
Consider the system of m homogeneous linear equations in n unknowns
a11x1 + a12 x2 + … + a1n xn = 0 ,
a21x1 + a22 x2 + … + a2n xn = 0 ,
… … … … …
am1x1 + am2 x2 + … + amn xn = 0 ,                  …(1)
which can be written as the single matrix equation
AX = O,                                            …(2)
where A = [aij] is the m × n matrix of the coefficients and X is the column of the unknowns. The matrix A is called the coefficient matrix of the system of equations (1).
Obviously X = O, i. e., x1 = x2 = … = xn = 0 , is always a solution of (2); it is called the zero (or trivial) solution.
Again suppose X1 and X2 are two solutions of (2). Then their linear combination k1X1 + k2 X2 , where k1 and k2 are any arbitrary numbers, is also a solution of (2).
Therefore the collection of all the solutions of the system of equations AX = O forms a sub-space of the n-vector space Vn.
Theorem: If r is the rank of the m × n coefficient matrix A, then the number of linearly independent solutions of the system of equations AX = O is n − r.
Proof:
Since the rank of the coefficient matrix A is r, therefore it has r linearly independent columns. Without loss of generality we can suppose that the first r columns from the left of the matrix A are linearly independent, because it amounts only to renaming the components of X.
The matrix A can be written as A = [C1, C2 , …, Cr , …, Cn], where C1, C2 , …, Cn are the column vectors of the matrix A, each of them being an m-vector.
Each of the last n − r columns is a linear combination of the first r columns, say
Cr+ j = s1 j C1 + s2 j C2 + … + s r j Cr ,   j = 1, 2, …, n − r.
This relation shows that the vector
X j = (s1 j , s2 j , …, s r j , 0 , …, 0 , −1, 0 , …, 0 ),
which has −1 in the (r + j)th place and zeros in all the other places after the first r, satisfies AX j = O. In this way we obtain n − r solutions X1, X2 , …, X n−r of the equation AX = O.
The vectors X1, X2 , …, X n−r form a linearly independent set. For, if we have a relation of type
l1X1 + l2 X2 + … + ln−r X n−r = O,                  …(4)
then comparing the (r + 1)th, (r + 2)th, …, nth components on both sides of (4), we get
− l1 = 0 , − l2 = 0 , …, − ln−r = 0 ,
so that such a relation is possible only when all the scalars l1, l2 , …, ln−r are zero.
It can now be easily seen that every solution of the equation AX = O is some suitable linear combination of these n − r solutions X1, X2 , …, X n−r .
Suppose the vector X, with components x1, x2 , …, xn , is any solution of the equation AX = O. Then the vector
Y = X + xr+1X1 + xr+2 X2 + … + xnX n−r
also satisfies AY = O; and, by the construction of the X j , the last n − r components of Y are all zero. If y1, y2 , …, y r are the first r components of Y, then AY = O gives y1C1 + y2C2 + … + y rCr = O; since the columns C1, C2 , …, Cr are linearly independent, y1 = y2 = … = y r = 0 , so that Y = O.
Therefore
X = − xr+1X1 − xr+2 X2 − … − xnX n−r .
Hence the set { X1, X2 , …, X n−r } forms a basis of the vector space of all the solutions of the system of equations AX = O.
Case I. If r = n, no vectors X j arise in the above construction, and the zero solution is the only solution of AX = O.
Case II. If r < n, we shall have n − r linearly independent solutions. Any linear combination of these n − r solutions will also be a solution of AX = O. Thus in this case the equation AX = O will have an infinite number of solutions.
Case III. Suppose m < n, i. e., the number of equations is less than the number of unknowns. Since r ≤ m, therefore r is definitely less than n. Hence in this case the given system of equations must possess a non-zero solution. The number of solutions of the equation AX = O will be infinite.
To sum up, if k = n − r and X1, X2 , …, X k is a basis of the solution space, every solution is of the form
X = c1X1 + c2 X2 + … + c k X k ,
where c1, c2 , …, c k are arbitrary numbers. If r = n, the zero solution (trivial solution) will be the only solution. If r < n, there will be an infinity of solutions.
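In computational practice the n − r independent solutions of AX = O can be read off from a singular value decomposition of A. The sketch below is illustrative only (numpy is an assumed dependency and the tolerance 1e-10 is an arbitrary choice); it is applied to the coefficient matrix of Example 8 below, for which n − r = 2.

```python
# Illustrative sketch: a basis of {X : AX = O} via the SVD.
import numpy as np

def null_space_basis(A, tol=1e-10):
    A = np.asarray(A, dtype=float)
    _, s, vh = np.linalg.svd(A)
    r = int((s > tol).sum())          # rank of A
    return vh[r:].T                   # n - r columns spanning the solution space

A = np.array([[3, 4, -1, -6], [2, 3, 2, -3], [2, 1, -14, -9], [1, 3, 13, 3]])
B = null_space_basis(A)
print(B.shape[1])                     # 2 = n - r, as found in Example 8 below
print(np.allclose(A @ B, 0))          # True
```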
Example 5: Does the following system of equations possess a common non-zero solution?
x + 2 y + 3 z = 0, 3 x + 4 y + 4 z = 0 , 7 x + 10 y + 12 z = 0 . (Lucknow 2005)
Solution : The given system of equations can be written in the form of the single matrix
equation
     [ 1   2   3 ] [ x ]   [ 0 ]
AX = [ 3   4   4 ] [ y ] = [ 0 ] = O.
     [ 7  10  12 ] [ z ]   [ 0 ]
We shall start reducing the coefficient matrix A to triangular form by applying only
E-row transformations on it. Applying R2 → R2 − 3 R1, R3 → R3 − 7 R1, the given
system of equations is equivalent to
[ 1   2   3 ] [ x ]
[ 0  −2  −5 ] [ y ] = O.
[ 0  −4  −9 ] [ z ]
Here we find that the determinant of the matrix on the left hand side of this equation is
not equal to zero. Therefore the rank of this matrix is 3. So there is no need of further
applying E-row transformations on the coefficient matrix. The rank of the coefficient
matrix A is 3, i. e., equal to the number of unknowns. Therefore the given system of
equations does not possess any linearly independent solution. The zero solution, i. e.,
x = y = z = 0 is the only solution of the given system of equations.
Example 6: Find all the solutions of the system of equations
x + 3 y − 2 z = 0 , 2 x − y + 4 z = 0 , x − 11 y + 14 z = 0 .     (Meerut 2005B)
Solution : The given system of equations is equivalent to the single matrix equation
     [ 1    3  −2 ] [ x ]
AX = [ 2   −1   4 ] [ y ] = O.
     [ 1  −11  14 ] [ z ]
We shall reduce the coefficient matrix A to Echelon form by applying only E-row
operations on it. Performing R2 → R2 − 2 R1, R3 → R3 − R1, we have
[ 1    3  −2 ] [ x ]
[ 0   −7   8 ] [ y ] = O.
[ 0  −14  16 ] [ z ]
Performing R3 → R3 − 2 R2 , we have
[ 1   3  −2 ] [ x ]
[ 0  −7   8 ] [ y ] = O.
[ 0   0   0 ] [ z ]
The coefficient matrix is now triangular. The coefficient matrix being of rank 2, the given system of equations possesses 3 − 2 = 1 linearly independent solution. We shall assign an arbitrary value to n − r = 3 − 2 = 1 variable and the remaining r = 2 variables shall be found in terms of it. The given system of equations is equivalent to
x + 3 y − 2 z = 0 , − 7 y + 8 z = 0 .
Thus y = (8/7) z , x = − (10/7) z .
Choose z = c.
Then y = (8/7) c , x = − (10/7) c.
Hence x = − (10/7) c , y = (8/7) c , z = c
constitute the general solution of the given system, where c is an arbitrary parameter.
In matrix form,
[ x ]       [ −10/7 ]
[ y ] = c   [  8/7  ] ,  where c is an arbitrary number.
[ z ]       [   1   ]
Example 7: Does the following system of equations possess a common non-zero solution?
x + y + z = 0 , 2 x − y − 3 z = 0 , 3 x − 5 y + 4 z = 0 , x + 17 y + 4 z = 0 .
Solution: The given system of equations is equivalent to the single matrix equation
     [ 1   1   1 ]   [ x ]
AX = [ 2  −1  −3 ]   [ y ] = O.
     [ 3  −5   4 ]   [ z ]
     [ 1  17   4 ]
We shall first find the rank of the coefficient matrix A by reducing it to Echelon form
by applying elementary row transformations only.
A ~ [ 1   1   1 ]     [ 1    1   1 ]
    [ 0  −3  −5 ]  ~  [ 0   −3  −5 ]   by R3 → 3R3 , R4 → 3R4
    [ 0  −8   1 ]     [ 0  −24   3 ]
    [ 0  16   3 ]     [ 0   48   9 ]
(the first step being R2 → R2 − 2R1 , R3 → R3 − 3R1 , R4 → R4 − R1)

  ~ [ 1   1    1 ]
    [ 0  −3   −5 ]   by R3 → R3 − 8R2 , R4 → R4 + 16R2
    [ 0   0   43 ]
    [ 0   0  −71 ]

  ~ [ 1   1    1 ]
    [ 0  −3   −5 ]   by R4 → R4 + (71/43) R3 .
    [ 0   0   43 ]
    [ 0   0    0 ]
Above is the Echelon form of the coefficient matrix A. We have rank A = the number
of non-zero rows in this Echelon form = 3. The number of unknowns is also 3. Since
rank A is equal to the number of unknowns, therefore the given system of equations
does not possess any linearly independent solution. Thus the given system of
equations possesses no non-zero solution. Hence the zero solution i. e., x = y = z = 0 is
the only solution of the given system of equations.
Example 8: Find all the solutions of the equations
3 x + 4 y − z − 6w = 0 , 2 x + 3 y + 2 z − 3w = 0 ,
2 x + y − 14 z − 9 w = 0 , x + 3 y + 13 z + 3 w = 0 .
Solution: The given system of equations is equivalent to the single matrix equation
     [ 3  4   −1  −6 ] [ x ]
AX = [ 2  3    2  −3 ] [ y ] = O.
     [ 2  1  −14  −9 ] [ z ]
     [ 1  3   13   3 ] [ w ]
We shall first find the rank of the coefficient matrix A by reducing it to Echelon form
by applying E-row transformations only.
A ~ [ 1  3   13   3 ]   by R1 ↔ R4
    [ 2  3    2  −3 ]
    [ 2  1  −14  −9 ]
    [ 3  4   −1  −6 ]

  ~ [ 1   3   13    3 ]
    [ 0  −3  −24   −9 ]   by R2 → R2 − 2R1 , R3 → R3 − 2R1 , R4 → R4 − 3R1
    [ 0  −5  −40  −15 ]
    [ 0  −5  −40  −15 ]

  ~ [ 1  3  13  3 ]
    [ 0  1   8  3 ]   by R2 → −(1/3) R2 , R3 → −(1/5) R3 , R4 → −(1/5) R4
    [ 0  1   8  3 ]
    [ 0  1   8  3 ]

  ~ [ 1  3  13  3 ]
    [ 0  1   8  3 ]   by R3 → R3 − R2 , R4 → R4 − R2 .
    [ 0  0   0  0 ]
    [ 0  0   0  0 ]
The rank of A is obviously 2 which is less than the number of unknowns 4. Therefore
the given system of equations possesses 4 − 2, i. e., 2 linearly independent solutions.
The given system of equations is equivalent to the equation
[ 1  3  13  3 ] [ x ]   [ 0 ]
[ 0  1   8  3 ] [ y ] = [ 0 ] .
[ 0  0   0  0 ] [ z ]   [ 0 ]
[ 0  0   0  0 ] [ w ]   [ 0 ]
Thus the given system of four equations is equivalent to the system of two equations
x + 3 y + 13 z + 3 w = 0 ,
y + 8 z + 3 w = 0 .
Solving for x and y in terms of z and w, we get
y = − 8 z − 3 w, x = − 3 (− 8 z − 3 w) − 13 z − 3 w ,
i. e., y = − 8 z − 3 w, x = 11 z + 6 w.
Remark: In matrix form, the general solution of the given system of equations can
be expressed as
Putting z = c1, w = c2 , we have
[ x ]   [ 11 c1 + 6 c2 ]      [ 11 ]      [  6 ]
[ y ] = [ −8 c1 − 3 c2 ] = c1 [ −8 ] + c2 [ −3 ] .
[ z ]   [ 1 c1 + 0 c2  ]      [  1 ]      [  0 ]
[ w ]   [ 0 c1 + 1 c2  ]      [  0 ]      [  1 ]
Here
[ 11 ]        [  6 ]
[ −8 ]  and   [ −3 ]
[  1 ]        [  0 ]
[  0 ]        [  1 ]
are two linearly independent solutions of the given system of equations and all their
linear combinations will also be the solutions of the given system of equations.
Example 9: Show that the only real value of λ for which the following equations have non-zero
solutions is 6 :
x + 2 y + 3 z = λx, 3 x + y + 2 z = λy, 2 x + 3 y + z = λz .
Solution: The given system of equations is equivalent to the single matrix equation
     [ 1 − λ    2      3   ] [ x ]
AX = [   3    1 − λ    2   ] [ y ] = O.
     [   2      3    1 − λ ] [ z ]
The system will possess a non-zero solution if and only if the coefficient matrix is singular, i. e., if and only if
| 1 − λ    2      3   |
|   3    1 − λ    2   | = 0.
|   2      3    1 − λ |
Applying R1 → R1 + R2 + R3 , this gives
| 6 − λ  6 − λ  6 − λ |
|   3    1 − λ    2   | = 0 ,
|   2      3    1 − λ |
or
          | 1    1      1   |
(6 − λ )  | 3  1 − λ    2   | = 0 ,
          | 2    3    1 − λ |
or
          | 1      0        0     |
(6 − λ )  | 3  − λ − 2     −1     | = 0 ,   by C2 → C2 − C1 , C3 → C3 − C1
          | 2      1    − λ − 1   |
or (6 − λ ) [(λ + 2) (λ + 1) + 1] = 0
or (6 − λ ) [λ 2 + 3 λ + 3] = 0 .
The roots of the equation λ2 + 3 λ + 3 = 0 are λ = [− 3 ± √(9 − 12)] / 2 , i. e., are imaginary.
Hence the only real value of λ for which the given system of equations has a non-zero solution is 6.
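The system of this example is M X = λ X with M as below, so the admissible values of λ are exactly the eigenvalues of M. An illustrative numerical check (not part of the text; numpy is an assumed dependency, and the order of the computed eigenvalues may vary):

```python
# Illustrative check: eigenvalues of the coefficient matrix of Example 9.
import numpy as np

M = np.array([[1, 2, 3], [3, 1, 2], [2, 3, 1]])
print(np.round(np.linalg.eigvals(M), 4))
# approximately [ 6.+0.j  -1.5+0.866j  -1.5-0.866j ]  -- only lambda = 6 is real
```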
Comprehensive Exercise 1
Find all the solutions of the following system of linear homogeneous equations :
1. 2 x − 3 y + z = 0 , x + 2 y − 3 z = 0 , 4 x − y − 2 z = 0 .
2. x + y − 3 z + 2 w = 0 , 2 x − y + 2 z − 3 w = 0 , 3 x − 2 y + z − 4 w = 0 ,
− 4 x + y − 3z + w = 0.
3. x + y + z = 0 , 2 x + 5 y + 7 z = 0 , 2 x − 5 y + 3 z = 0 .
4. x + 2 y + 3 z = 0 , 2 x + 3 y + 4 z = 0 , 7 x + 13 y + 19 z = 0 .
5. 4 x + 2 y + z + 3 u = 0 , 6 x + 3 y + 4 z + 7 u = 0 , 2 x + y + u = 0 .
6. 2 x − 2 y + 5 z + 3 w = 0 , 4 x − y + z + w = 0 , 3 x − 2 y + 3 z + 4 w = 0 ,
x − 3 y + 7 z + 6 w = 0.
7. x − 2 y + z − w = 0 , x + y − 2 z + 3 w = 0 , 4 x + y − 5 z + 8 w = 0 ,
5 x − 7 y + 2z − w = 0
Answers 1
1. x = 0 , y = 0 , z = 0 2. x = 0 , y = 0 , z = 0 , w = 0
3. x = 0 , y = 0 , z = 0 4. x = 2 c , y = − 2 c , z = c
5. x = c1, u = c2 , y = − 2 c1 − c2 , z = − c2
6. x = (5/9) c , y = 4 c , z = (7/9) c , w = c
7. x = c1 − (5/3) c2 , y = c1 − (4/3) c2 , z = c1, w = c2
There is no set of values of x and y which satisfies both these equations. Such
equations are said to be inconsistent.
On the other hand, consider the equations
3 x + 4 y = 5 ,
6 x + 8 y = 10 .
These equations are consistent since there exist values of x and y which satisfy both of these equations. We see that x = − (4/3) c + (5/3), y = c constitute a solution of these equations, where c is arbitrary. Thus these equations possess an infinite number of solutions.
If we write the general system of m non-homogeneous linear equations in n unknowns as
a11x1 + a12 x2 + … + a1n xn = b1,
a21x1 + a22 x2 + … + a2n xn = b2 ,
… … … … …
am1x1 + am2 x2 + … + amn xn = bm ,                 …(1)
then any set of values of x1, x2 , …, xn which simultaneously satisfy all these equations is called a solution of the system (1). When the system of equations has one or more solutions, the equations are said to be consistent, otherwise they are said to be inconsistent.
The system (1) is equivalent to the single matrix equation AX = B, where A = [aij] is the m × n coefficient matrix, X the column of the unknowns and B the column of the constants b1, b2 , …, bm . The matrix [A B], obtained by adjoining the column B to the coefficient matrix A, is called the augmented matrix of the system.
Theorem: The system of equations AX = B is consistent if and only if the coefficient matrix A and the augmented matrix [A B] are of the same rank.
(Lucknow 2005; Meerut 07B; Kumaun 10, 12, 13; Kanpur 11)
Proof: Let C1, C2 , ..., C n denote the column vectors of the matrix A. The equation
AX = B is then equivalent to
                   [ x1 ]
[C1, C2 , …, Cn]   [ x2 ] = B ,  i. e.,  x1C1 + x2C2 + … + xnCn = B.    …(1)
                   [ ⋮  ]
                   [ xn ]
Now let r be the rank of the matrix A. The matrix A has then r linearly independent columns and, without loss of generality, we can suppose that the first r columns C1, C2 , …, Cr form a linearly independent set, so that each of the remaining n − r columns is a linear combination of these r columns.
The condition is necessary. Suppose the system is consistent, so that there exist values x1, x2 , …, xn satisfying (1). Then B = x1C1 + x2C2 + … + xnCn is a linear combination of the columns of A. Hence adjoining the column B to A introduces no new linearly independent column, and the matrices A and [A B] have the same rank r.
The condition is also sufficient. Now suppose that the matrices A and [A B] are of the same rank r. The maximum number of linearly independent columns of the matrix [A B] is then r. But the first r columns C1, C2 , …, Cr of the matrix [A B] already form a linearly independent set. Therefore the column B can be expressed as a linear combination of the columns C1, C2 , …, Cr , and hence of all the columns of A (taking the remaining coefficients zero). Thus the equation (1), and so the system AX = B, has a solution; the equations are consistent.
If A is a non-singular square matrix, the system AX = B has the unique solution X = A−1B. To show that the solution is unique, let us suppose that X1 and X2 are two solutions of AX = B. Then AX 1 = B = AX 2 . Pre-multiplying both sides by A−1, we get
A−1(AX 1) = A−1(AX 2 ) ⇒ IX 1 = IX 2 ⇒ X 1 = X 2 .
Case 1. rank A ≠ rank [A B]. In this case the equations AX = B are inconsistent, i. e., they have no solution.
Case 2. rank A = rank [A B] = r. In this case the equations AX = B are consistent, i. e., they possess a solution. If r < m, then in the process of reducing the matrix [A B] to Echelon form, (m − r) equations will be eliminated. The given system of m equations will then be replaced by an equivalent system of r equations. From these r equations we shall be able to express the values of some r unknowns in terms of the remaining n − r unknowns which can be given any arbitrarily chosen values.
If r = n, the solution is unique. If r < n, then n − r variables can be assigned arbitrary values, so in this case there will be an infinite number of solutions. Only n − r + 1 solutions will be linearly independent and the rest of the solutions will be linear combinations of them.
If m < n, then r ≤ m < n. Thus in this case n − r > 0. Therefore when the number of equations is less than the number of unknowns, the equations will always have an infinite number of solutions, provided they are consistent.
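The rank test just described is easy to mechanise. The following sketch is illustrative only (the function name is hypothetical and numpy is an assumed dependency): it classifies a system AX = B by comparing rank A, rank [A B] and the number of unknowns, and is applied here to the inconsistent system of Example 10 below.

```python
# Illustrative sketch of the rank test for AX = B.
import numpy as np

def classify_system(A, B):
    A = np.asarray(A, dtype=float)
    aug = np.hstack([A, np.asarray(B, dtype=float).reshape(-1, 1)])  # [A B]
    rA, rAB, n = np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug), A.shape[1]
    if rA != rAB:
        return "inconsistent (no solution)"
    return "unique solution" if rA == n else "infinite number of solutions"

# Example 10 below: rank A = 2, rank [A B] = 3, hence inconsistent.
print(classify_system([[1, 1, 1], [3, 1, -2], [2, 4, 7]], [-3, -2, 7]))
```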
Example 10: Show that the following system of equations is inconsistent :
x + y + z = − 3, 3 x + y − 2 z = − 2, 2 x + 4 y + 7 z = 7 .
Solution : The given system of equations is equivalent to the single matrix equation
     [ 1  1   1 ] [ x ]   [ −3 ]
AX = [ 3  1  −2 ] [ y ] = [ −2 ] = B.
     [ 2  4   7 ] [ z ]   [  7 ]
The augmented matrix
        [ 1  1   1  :  −3 ]
[A B] = [ 3  1  −2  :  −2 ] .
        [ 2  4   7  :   7 ]
        [ 1   1   1  :  −3 ]
[A B] ~ [ 0  −2  −5  :   7 ]   applying R2 → R2 − 3R1 , R3 → R3 − 2R1
        [ 0   2   5  :  13 ]

      ~ [ 1   1   1  :  −3 ]
        [ 0  −2  −5  :   7 ] ,  applying R3 → R3 + R2 .
        [ 0   0   0  :  20 ]
Above is the Echelon form of the matrix [A B]. We have rank [A B] = the number of
non-zero rows in this Echelon form = 3
By the same elementary transformations, we get
A ~ [ 1   1   1 ]
    [ 0  −2  −5 ] .
    [ 0   0   0 ]
Obviously rank A = 2.
Since Rank A ≠ Rank [A B], therefore the given equations are inconsistent i. e., they
have no solution.
Remark: The inconsistency of the given equations can also be shown as below :
The given system of equations is equivalent to the matrix equation
[ 1   1   1 ] [ x ]   [ −3 ]
[ 0  −2  −5 ] [ y ] = [  7 ] ,
[ 0   0   0 ] [ z ]   [ 20 ]
i. e., to the equations
x + y + z = −3 ,
0 x − 2 y − 5 z = 7 ,
0 x + 0 y + 0 z = 20 .
The last equation shows that 0 = 20 , which is not possible. Hence the given equations
are inconsistent.
Example 11: Solve the system of equations
x + y + z = 6, x + 2 y + 3 z = 14, x + 4 y + 7 z = 30 .
Solution : The given system of equations is equivalent to the single matrix equation
     [ 1  1  1 ] [ x ]   [  6 ]
AX = [ 1  2  3 ] [ y ] = [ 14 ] = B.
     [ 1  4  7 ] [ z ]   [ 30 ]
The augmented matrix
        [ 1  1  1  ⋮   6 ]
[A B] = [ 1  2  3  ⋮  14 ] .
        [ 1  4  7  ⋮  30 ]
        [ 1  1  1  ⋮   6 ]     [ 1  1  1  ⋮  6 ]
[A B] ~ [ 0  1  2  ⋮   8 ]  ~  [ 0  1  2  ⋮  8 ] ,  by R3 → R3 − 3R2
        [ 0  3  6  ⋮  24 ]     [ 0  0  0  ⋮  0 ]
(the first step being R2 → R2 − R1 , R3 → R3 − R1).
Above is the Echelon form of the matrix [A B]. We have rank [A B] = the number of
non-zero rows in this Echelon form = 2 .
By the same elementary transformations, we get
A ~ [ 1  1  1 ]
    [ 0  1  2 ] .
    [ 0  0  0 ]
Obviously rank A = 2. Since rank A = rank [A B], therefore the given equations are
consistent. Here the number of unknowns is 3. Since rank A is less than the number of
unknowns therefore the given system will have an infinite number of solutions. We
see that the given system of equations is equivalent to the matrix equation
[ 1  1  1 ] [ x ]   [ 6 ]
[ 0  1  2 ] [ y ] = [ 8 ] ,
[ 0  0  0 ] [ z ]   [ 0 ]
i. e., to the equations
x + y + z = 6 ,
y + 2 z = 8 .
∴ y = 8 − 2 z , x = 6 − y − z = 6 − (8 − 2 z ) − z = z − 2.
Giving z an arbitrary value c, we get x = c − 2, y = 8 − 2 c , z = c as the general solution of the given system.
Example 12: Apply the test of rank to examine if the following equations are consistent :
2 x − y + 3 z = 8, − x + 2 y + z = 4, 3 x + y − 4 z = 0
Solution : The given system of equations is equivalent to the single matrix equation
     [  2  −1   3 ] [ x ]   [ 8 ]
AX = [ −1   2   1 ] [ y ] = [ 4 ] = B.
     [  3   1  −4 ] [ z ]   [ 0 ]
The augmented matrix
        [  2  −1   3  ⋮  8 ]
[A B] = [ −1   2   1  ⋮  4 ] .
        [  3   1  −4  ⋮  0 ]
We shall reduce the augmented matrix to Echelon form by applying elementary row
transformations only. Applying R1 ↔ R2 , we get
        [ −1  2   1  ⋮  4 ]
[A B] ~ [  2 −1   3  ⋮  8 ]
        [  3  1  −4  ⋮  0 ]

      ~ [ −1  2   1  ⋮   4 ]
        [  0  3   5  ⋮  16 ] ,  by R2 → R2 + 2R1 , R3 → R3 + 3R1
        [  0  7  −1  ⋮  12 ]

      ~ [ −1   2   1  ⋮   4 ]
        [  0   3   5  ⋮  16 ] ,  by R3 → 3R3
        [  0  21  −3  ⋮  36 ]

      ~ [ −1  2    1  ⋮    4 ]
        [  0  3    5  ⋮   16 ] ,  by R3 → R3 − 7R2
        [  0  0  −38  ⋮  −76 ]

      ~ [ −1  2  1  ⋮   4 ]
        [  0  3  5  ⋮  16 ] ,  by R3 → −(1/38) R3 .
        [  0  0  1  ⋮   2 ]
Above is the Echelon form of the matrix [A B]. We have the rank [A B] = the number
of non-zero rows in this Echelon form = 3.
By the same transformations, we get
A ~ [ −1  2  1 ]
    [  0  3  5 ] .
    [  0  0  1 ]
Obviously rank A = 3. Since rank A = rank [A B], therefore the given equations are
consistent.Here the number of unknowns is 3. Since rank A is equal to the number of
unknowns, therefore the given equations have a unique solution. We see that the
given equations are equivalent to the matrix equation
[ −1  2  1 ] [ x ]   [  4 ]
[  0  3  5 ] [ y ] = [ 16 ] ,
[  0  0  1 ] [ z ]   [  2 ]
i. e., − x + 2 y + z = 4, 3 y + 5 z = 16, z = 2.
These give z = 2, y = 2, x = 2.
Example 13: Show that the equations
x + 2 y − z = 3, 3 x − y + 2 z = 1, 2 x − 2 y + 3 z = 2, x − y + z = − 1
are consistent and solve them.     (Meerut 2006B, 09; Rohilkhand 06; Garhwal 15)
Solution: The given system of equations is equivalent to the single matrix equation
     [ 1   2  −1 ]   [ x ]   [  3 ]
AX = [ 3  −1   2 ]   [ y ] = [  1 ] = B.
     [ 2  −2   3 ]   [ z ]   [  2 ]
     [ 1  −1   1 ]           [ −1 ]
The augmented matrix
        [ 1   2  −1  ⋮   3 ]
[A B] = [ 3  −1   2  ⋮   1 ] .
        [ 2  −2   3  ⋮   2 ]
        [ 1  −1   1  ⋮  −1 ]
        [ 1   2  −1  ⋮   3 ]
[A B] ~ [ 0  −7   5  ⋮  −8 ]   by R2 → R2 − 3R1 , R3 → R3 − 2R1 , R4 → R4 − R1
        [ 0  −6   5  ⋮  −4 ]
        [ 0  −3   2  ⋮  −4 ]

      ~ [ 1   2  −1  ⋮   3 ]
        [ 0  −1   0  ⋮  −4 ] ,  by R2 → R2 − R3
        [ 0  −6   5  ⋮  −4 ]
        [ 0  −3   2  ⋮  −4 ]

      ~ [ 1   2  −1  ⋮   3 ]
        [ 0  −1   0  ⋮  −4 ] ,  by R3 → R3 − 6R2 , R4 → R4 − 3R2
        [ 0   0   5  ⋮  20 ]
        [ 0   0   2  ⋮   8 ]

      ~ [ 1   2  −1  ⋮   3 ]
        [ 0  −1   0  ⋮  −4 ] ,  by R3 → (1/5) R3 , R4 → (1/2) R4
        [ 0   0   1  ⋮   4 ]
        [ 0   0   1  ⋮   4 ]

      ~ [ 1   2  −1  ⋮   3 ]
        [ 0  −1   0  ⋮  −4 ] ,  by R4 → R4 − R3 .
        [ 0   0   1  ⋮   4 ]
        [ 0   0   0  ⋮   0 ]
Thus the matrix [A B] has been reduced to Echelon form. We have rank [A B] = the
number of non-zero rows in this Echelon form = 3. Also
A ~ [ 1   2  −1 ]
    [ 0  −1   0 ]
    [ 0   0   1 ] .
    [ 0   0   0 ]
We have rank A = 3. Since rank [A B] = rank A, therefore the given equations are
consistent. Since rank A = 3 = the number of unknowns, therefore the given equations
have unique solution. The given equations are equivalent to the equations
x + 2 y − z = 3, − y = − 4, z = 4.
These give z = 4, y = 4, x = − 1.
Example 14: State the conditions under which a system of non-homogeneous equations will have
(i) no solution (ii) a unique solution (iii) infinity of solutions. (Lucknow 2008)
Solution : Let the given system of equations be AX = B.
(i) These equations will have no solution if the coefficient matrix A and the
augmented matrix [A B] are not of the same rank.
(ii) These equations will possess a unique solution if the matrices A and [A B] are of
the same rank and the rank is equal to the number of variables. In particular if A is
a square matrix, these equations will possess a unique solution if and only if the
matrix A is non-singular.
(iii) These equations will have infinity of solutions if the matrices A and [A B] are of
the same rank and the rank is less than the number of variables.
Example 15: Solve the equations
x + y + z = 9, 2 x + 5 y + 7 z = 52, 2 x + y − z = 0 .
Solution : The given system of equations is equivalent to the single matrix equation
     [ 1  1   1 ] [ x ]   [  9 ]
AX = [ 2  5   7 ] [ y ] = [ 52 ] = B.
     [ 2  1  −1 ] [ z ]   [  0 ]
The augmented matrix
        [ 1  1   1  ⋮   9 ]
[A B] = [ 2  5   7  ⋮  52 ]
        [ 2  1  −1  ⋮   0 ]

      ~ [ 1   1   1  ⋮    9 ]
        [ 0   3   5  ⋮   34 ] ,  by R2 → R2 − 2R1 , R3 → R3 − 2R1
        [ 0  −1  −3  ⋮  −18 ]

      ~ [ 1   1   1  ⋮    9 ]
        [ 0  −1  −3  ⋮  −18 ] ,  by R2 ↔ R3
        [ 0   3   5  ⋮   34 ]
      ~ [ 1   1   1  ⋮    9 ]
        [ 0  −1  −3  ⋮  −18 ] ,  by R3 → R3 + 3R2 .
        [ 0   0  −4  ⋮  −20 ]
Above is the Echelon form of the matrix [A B]. We have rank [A B] = the number of
non-zero rows in this Echelon form = 3.
A ~ [ 1   1   1 ]
    [ 0  −1  −3 ] .
    [ 0   0  −4 ]
∴ rank A = 3.
Since rank A = rank [A B], therefore the given equations are consistent. Also rank A = 3 and the number of unknowns is also 3. Hence the given equations will have a unique solution. To find the solution we see that the given system of equations is equivalent to the matrix equation
[ 1   1   1 ] [ x ]   [   9 ]
[ 0  −1  −3 ] [ y ] = [ −18 ] ,
[ 0   0  −4 ] [ z ]   [ −20 ]
i. e., x + y + z = 9, − y − 3 z = − 18, − 4 z = − 20 .
These give z = 5, y = 18 − 3 z = 3, x = 9 − y − z = 1. Hence x = 1, y = 3, z = 5.
Example 16: Investigate for what values of λ and µ the equations
x + y + z = 6, x + 2 y + 3 z = 10 , x + 2 y + λz = µ
have (i) no solution, (ii) a unique solution, (iii) an infinite number of solutions.
(Meerut 2006, 09B; Garhwal 06, 11; Bundelkhand 09; Rohilkhand 07; Kanpur 09)
Solution : The given system of equations is equivalent to the single matrix equation
     [ 1  1  1 ] [ x ]   [  6 ]
AX = [ 1  2  3 ] [ y ] = [ 10 ] = B.
     [ 1  2  λ ] [ z ]   [  µ ]
The augmented matrix
        [ 1  1  1  ⋮   6 ]
[A B] = [ 1  2  3  ⋮  10 ]
        [ 1  2  λ  ⋮   µ ]

      ~ [ 1  1    1    ⋮    6   ]
        [ 0  1    2    ⋮    4   ] ,  by R2 → R2 − R1 , R3 → R3 − R1
        [ 0  1  λ − 1  ⋮  µ − 6 ]

      ~ [ 1  1    1    ⋮     6    ]
        [ 0  1    2    ⋮     4    ] ,  by R3 → R3 − R2 .
        [ 0  0  λ − 3  ⋮  µ − 10  ]
If λ ≠ 3, we have rank A = 3 = rank [A B], whatever the value of µ. So in this case the given system of equations is consistent. Since rank A = the number of unknowns, therefore the given system of equations possesses a unique solution. Thus if λ ≠ 3, the given system of equations possesses a unique solution for any value of µ.
If λ = 3 and µ ≠ 10, we have rank [A B] = 3 and rank A = 2. Thus in this case rank
[A B] ≠ rank A and so the given system of equations is inconsistent i. e., possesses no
solution.
If λ = 3 and µ = 10, we have rank [A B] = 2 = rank A. So in this case the given system of equations is again consistent. Since rank A < the number of unknowns, therefore in this case the given system of equations possesses an infinite number of solutions.
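The three cases of this example can also be checked symbolically. The snippet below is an illustration only (sympy is an assumed dependency): it computes rank A and rank [A B] for sample values of λ and µ falling in each case.

```python
# Illustrative symbolic check of Example 16's three cases.
from sympy import Matrix

for lam, mu in [(2, 7), (3, 7), (3, 10)]:
    A = Matrix([[1, 1, 1], [1, 2, 3], [1, 2, lam]])
    AB = A.row_join(Matrix([6, 10, mu]))       # augmented matrix [A B]
    print(lam, mu, A.rank(), AB.rank())
# lam=2 : ranks 3, 3 -> unique solution
# lam=3, mu=7  : ranks 2, 3 -> no solution
# lam=3, mu=10 : ranks 2, 2 -> infinite number of solutions
```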
Example 17: Show that the system of equations
x + y + z = 1, x + 2 y + 4 z = η, x + 4 y + 10 z = η2
is consistent only if η = 1 or η = 2, and solve the system in these two cases.
(Meerut 2011; Agra 07; Kanpur 06, 08, 10; Kumaun 10, 13)
Solution : The given system of equations is equivalent to the single matrix equation
[ 1  1   1 ] [ x ]   [ 1  ]
[ 1  2   4 ] [ y ] = [ η  ] .
[ 1  4  10 ] [ z ]   [ η2 ]
Performing R2 → R2 − R1 , R3 → R3 − R1 , we have
[ 1  1  1 ] [ x ]   [   1    ]
[ 0  1  3 ] [ y ] = [ η − 1  ] .
[ 0  3  9 ] [ z ]   [ η2 − 1 ]
Performing R3 → R3 − 3 R2 , we have
[ 1  1  1 ] [ x ]   [      1       ]
[ 0  1  3 ] [ y ] = [    η − 1     ]               …(1)
[ 0  0  0 ] [ z ]   [ η2 − 3 η + 2 ]
The equations are consistent if and only if rank A = rank [A B], i. e., if and only if η2 − 3 η + 2 = 0, i. e., (η − 1)(η − 2) = 0 , i. e., η = 1 or η = 2.
When η = 2, the equation (1) becomes
[ 1  1  1 ] [ x ]   [ 1 ]
[ 0  1  3 ] [ y ] = [ 1 ] ,
[ 0  0  0 ] [ z ]   [ 0 ]
i. e., y + 3 z = 1, x + y + z = 1.
∴ y = 1 − 3 z , x = 2 z , where z is arbitrary.
When η = 1, the equation (1) becomes
[ 1  1  1 ] [ x ]   [ 1 ]
[ 0  1  3 ] [ y ] = [ 0 ] ,
[ 0  0  0 ] [ z ]   [ 0 ]
i. e., y + 3 z = 0 , x + y + z = 1.
∴ y = − 3 z , x = 1 + 2 z , where z is arbitrary.
Comprehensive Exercise 2
1. Use the test of rank to show that the following equations are not consistent :
2 x − y + z = 4, 3 x − y + z = 6, 4 x − y + 2 z = 7, − x + y − z = 9.
(Rohilkhand 2009; Kumaun 11)
13. For what values of the parameter λ will the following equations fail to have a
unique solution
3 x − y + λz = 1, 2 x + y + z = 2, x + 2 y − λz = − 1?
Will the equations have any solutions for these values of λ?
(Garhwal 2008)
14. Solve the equations
λx + 2 y − 2 z − 1 = 0 , 4 x + 2 λy − z − 2 = 0 , 6 x + 6 y + λz − 3 = 0 ,
considering specially the case when λ = 2.
Answers 2
3. x = − 7, y = 22, z = − 9
4. x = 1, y = 2, z = 3
5. x = 1, y = 1, z = 1
6. x = 1/2 , y = 3/2 , z = 5/2
7. x = − 1 + 2 c , y = 3 − 2 c , z = c , where c is arbitrary
8. x = 35/18 , y = 29/18 , z = 5/18
9. x = c − 2, y = 3 − 2 c , z = c
10. Consistent; x = − 1, y = − 2, z = 4
11. Consistent ; x = 2, y = 2, z = 1/2
13. λ ≠ − 7/2 , the solution is unique; λ = − 7/2 , no solution
14. In case λ = 2, the general solution of the given system of equations is given by
x = 1/2 − c , y = c , z = 0
15. λ ≠ − 7/10 , the solution is unique; λ = − 7/10 , no solution
16. x = 7/11 − (16/11) c , y = 3/11 + (1/11) c , z = c
17. x = 3, y = 4, z = 6
18. x = 5/2 − (3/2) c , y = − 3/2 + (1/2) c , z = c
19. x = − 8/7 + (5/7) c , y = − 25/7 + (13/7) c , z = c
(c) k1 + k2 + ......... + k r = 1
(c) no solution
(d) none of these
3. The system of linear equations x + y + z = 2, 2 x + y − z = 3, 3 x + 2 y + kz = 4 has
a unique solution if
(a) k ≠ 0
(c) no solution
4. When the system of equations has one or more solutions, the equations are said
to be ....., otherwise they are said to be .......
7. If the number of equations is less than the number of unknowns, the equations will always have an ……… number of solutions, provided they are consistent.
True or False
Write ‘T’ for true and ‘F’ for false statement.
2. If a set of vectors is linearly independent, then at least one member of the set can
be expressed as a linear combination of the remaining members.
3. The zero solution i. e., x = y = z = 0 is the only solution of the system of equations
x + 2 y + 3 z = 0, 3 x + 4 y + 4 z = 0 , 7 x + 10 y + 12 z = 0 .
Answers
True or False
1. T 2. F 3. T
4. T 5. F