
UNIT 5

BASES AND DIMENSION


Structure

5.1 Introduction
    Objectives
5.2 Linear Independence
    Some Elementary Results on Linear Independence
5.3 Basis and Dimension
    Dimension
    Completion of a Linearly Independent Set to a Basis
5.4 Dimensions of Some Subspaces
5.5 Dimension of a Quotient Space
5.6 Summary
5.7 Solutions/Answers

5.1 INTRODUCTION

In this block, we continue our study of Linear Algebra with a discussion of the
solution of simultaneous linear equations. Solving systems of linear equations
efficiently is one of the fundamental problems of Mathematics, and efficient
solution methods have many applications, including in newer areas like Data Science.

In Sec. 5.2, we discuss the concept of linear independence of vectors. In
Sec. 5.3, we introduce you to two of the most important concepts in Linear
Algebra, the concepts of basis and dimension of a vector space. In Sec. 5.4, we
will determine the dimensions of some subspaces. In Sec. 5.5, we will determine
the dimension of a quotient space.
Objectives
After studying this unit, you should be able to:

• define a linearly independent set and give examples of linearly independent
and linearly dependent sets of vectors;

• determine whether a given set of vectors is linearly independent or not;

• define a basis for a vector space and give examples;

• determine whether a set forms a basis for a vector space or not;


• determine bases for well-known vector spaces.

5.2 LINEAR INDEPENDENCE


Consider the two vectors (1, 0) and (0, 1). You can see that
𝛼 (1, 0) + 𝛽 (0, 1) = (0, 0) implies that 𝛼 = 0 and 𝛽 = 0 (where 𝛼, 𝛽 ∈ ℝ). We say
that {(1, 0) , (0, 1)} is a linearly independent subset of ℝ2 . Let us now define the
concept of a linearly independent set formally.

Definition 1: Let V be a vector space over a field F and let S = {v1 , v2 , … , vn } be
a subset of V. Then we say that S is a linearly independent set if

𝛼1 v1 + 𝛼2 v2 + ⋯ + 𝛼n vn = 0, with 𝛼i ∈ F, 1 ≤ i ≤ n,

implies that 𝛼i = 0 for every i.

If S is a subset of V, not necessarily finite, we say that S is a linearly
independent set if every finite, non-empty subset of S is linearly independent.

A set is linearly dependent if it is not linearly independent.

Remark 1: Another way of defining a finite linearly independent set is as


follows: A finite set {v1 , v2 , … , vn } is linearly independent if and only if the only
way of writing the zero vector as a linear combination of {v1 , v2 , … , vn } is the
trivial way:

0v1 + 0v2 + ⋯ + 0vn = 0


An alternative way of defining a linearly dependent set of vectors is as follows:
A finite set {v1 , v2 , … , vn } is linearly dependent if and only if we can write the
zero vector as a linear combination of {v1 , v2 , … , vn } in a non-trivial way; in other
words, we can find 𝛼1 , 𝛼2 , …, 𝛼n , not all of them zero, such that

𝛼1 v1 + 𝛼2 v2 + ⋯ + 𝛼n vn = 0

Note that in ℝ2 , if two vectors v1 and v2 are linearly dependent,
then there are 𝛼1 , 𝛼2 , not both zero, such that 𝛼1 v1 + 𝛼2 v2 = 0. Without loss of
generality, we may assume that 𝛼1 ≠ 0. We then have v1 = −(𝛼2 /𝛼1 ) v2 . So, v1 is a
scalar multiple of v2 and hence v1 and v2 are collinear. Thus, in ℝ2 , two vectors
are linearly dependent iff they are collinear.

Similarly, if v1 , v2 and v3 are three vectors in ℝ3 which are linearly dependent,
we can find 𝛼1 , 𝛼2 and 𝛼3 , not all of them zero, such that 𝛼1 v1 + 𝛼2 v2 + 𝛼3 v3 = 0.
Again, assuming that 𝛼1 ≠ 0, we get v1 = −(𝛼2 /𝛼1 ) v2 − (𝛼3 /𝛼1 ) v3 . Thus, in ℝ3 , three
vectors are linearly dependent iff they are coplanar.

A standard way of checking the linear independence of vectors {v1 , v2 , … , vn } is
to write down the equation 𝛼1 v1 + 𝛼2 v2 + ⋯ + 𝛼n vn = 0 and check whether the
equation has a non-trivial solution or not. If the equation has a non-trivial
solution, the set of vectors is linearly dependent; otherwise the set of vectors
is linearly independent. Let us now look at some examples to understand this
process.
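This check can also be mechanised. The short Python sketch below is our own illustration, not part of the unit: it encodes the vectors as the rows of a matrix and row-reduces exactly, using the standard-library Fraction type to avoid rounding; a set of vectors is linearly independent precisely when the rank of that matrix equals the number of vectors. The helper names `rank` and `linearly_independent` are ours.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (a list of rows) via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # number of pivots found so far
    for c in range(len(m[0]) if m else 0):
        # find a pivot in column c, at or below row r
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linearly_independent(vectors):
    """Vectors are linearly independent iff rank == number of vectors."""
    return rank(list(vectors)) == len(vectors)

# Example 1 a): the three vectors are independent.
print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))            # True
# Example 1 c): these vectors turn out to be dependent.
print(linearly_independent([(1, 0, 2, 1), (-1, 1, 1, 2), (1, 2, 8, 7)]))  # False
```

This is exactly the row-reduction procedure used by hand in the examples that follow, only carried out over exact rational numbers.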

Example 1: Check whether the following set of vectors are linearly


independent:
a) {(1, 0, 0), (0, 1, 0), (0, 0, 1)} b) {(1, 0, 0), (1, 1, 0), (1, 1, 1)}
c) {(1, 0, 2, 1), (−1, 1, 1, 2), (1, 2, 8, 7)}

Solution:

a) Writing as column vectors, suppose


1 0 0 0
𝛼1 [0] + 𝛼2 [1] + 𝛼3 [0] = [0]
0 0 1 0
or
𝛼1 0
[𝛼2 ] = [0]
𝛼3 0
Therefore, 𝛼1 = 0, 𝛼2 = 0 and 𝛼3 = 0. So, the set is linearly independent.

b) Writing as column vectors, suppose


1 1 1 0
𝛼1 [0] + 𝛼2 [1] + 𝛼3 [1] = [0]
0 0 1 0
or
𝛼1 + 𝛼2 + 𝛼3 0
[ 𝛼2 + 𝛼3 ] = [0]
𝛼3 0
So, 𝛼1 , 𝛼2 and 𝛼3 are solutions to the system of equations

𝛼1 + 𝛼2 + 𝛼3 = 0
𝛼2 + 𝛼3 = 0
𝛼3 = 0

This is a triangular system and we can easily solve this by back


substitution. We get 𝛼1 = 0, 𝛼2 = 0 and 𝛼3 = 0.

c) Writing as column vectors, suppose


1 −1 1 0
0 1 2 0
𝛼1 [ ] + 𝛼2 [ ] + 𝛼3 [ ] = [ ]
2 1 8 0
1 2 7 0
or
𝛼1 − 𝛼2 + 𝛼3 0
𝛼2 + 2𝛼3 0
[ ]=[ ] …(1)
2𝛼1 + 𝛼2 + 8𝛼3 0
𝛼1 + 2𝛼2 + 7𝛼3 0
We can write the last vector equation as the following system of equations:

𝛼1 − 𝛼2 + 𝛼3 = 0
𝛼2 + 2𝛼3 = 0
2𝛼1 + 𝛼2 + 8𝛼3 = 0
𝛼1 + 2𝛼2 + 7𝛼3 = 0

The augmented matrix of the system is


1 −1 1 0
0 1 2 0
[ ]
2 1 8 0
1 2 7 0

Carrying out the operations R3 → R3 − 2R1 , R4 → R4 − R1 ,
R3 → R3 − 3R2 , R4 → R4 − 3R2 , R1 → R1 + R2 , we get

1 0 3 0
0 1 2 0
[ ]
0 0 0 0
0 0 0 0

As before, we take 𝛼3 , the variable corresponding to the only non-pivot
column, as the free variable and set 𝛼3 = 𝜆. The solution set is (−3𝜆, −2𝜆, 𝜆).
Taking 𝜆 = 1, we see that 𝛼1 = −3, 𝛼2 = −2, and 𝛼3 = 1 is a non-trivial
solution to Eqn. (1). So, the vectors are linearly dependent.

∗∗∗

Here are some exercises for you to try.

E1) Check whether the following vectors are linearly independent:


a) {(−1, 1, 1), (1, −1, 1), (1, 1, 0)} b) {(1, 3, 2), (2, 1, −1), (1, 2, 1)}.

We will now look at some more examples.

Example 2: In the real vector space of all functions from ℝ to ℝ, determine
whether the set {sin x, ex } is linearly independent.

Solution: The zero element of this vector space is the zero function, i.e., it is
the function 0 such that 0 (x) = 0 ∀ x ∈ ℝ. So, suppose a, b ∈ ℝ are such that
a sin x + b ex = 0 ∀ x ∈ ℝ.

In particular, putting x = 0, we get a.0 + b.1 = 0, i.e., b = 0. So our equation
reduces to a sin x = 0. Then, putting x = 𝜋/2, we have a = 0. Thus, a = 0, b = 0.
So, {sin x, ex } is linearly independent.

∗∗∗

5.2.1 Some Elementary Results on Linear Independence
We shall study some simple consequences of the definition of linear
independence. An immediate consequence is the following theorem.

Theorem 1: If 0 ∈ {v1 , v2 , … , vn }, a subset of the vector space V, then the set


{v1 , v2 , … , vn } is linearly dependent.

Proof: 0 is one of the vi 's. We may assume that v1 = 0. Then
1.v1 + 0.v2 + 0.v3 + ⋯ + 0.vn = 0 + 0 + ⋯ + 0 = 0. That is, 0 is a linear combination of
v1 , v2 , … , vn in which not all the scalars are zero. Thus, the set is linearly
dependent. ■

Try to prove the following result yourself.

E2) Show that, if v is a non-zero element of a vector space V over a field F,


then {v} is linearly independent.

The next result is also very elementary.

Theorem 2: a) If S is a linearly dependent subset of a vector space V over F,


then any subset of V containing S is linearly dependent.

b) A subset of a linearly independent set is linearly independent.

Proof: a) Suppose S = {u1 , u2 , … , uk } and S ⊆ T ⊆ V. We want to show that


T is linearly dependent.
If S = T there is nothing to prove. Otherwise, let

T = S ∪ {v1 , … ., vm } = {u1 , u2 , … .., uk , v1 , … ., vm }

where m > 0.
Now S is linearly dependent. Therefore, for some scalars 𝛼1 , 𝛼2 , … , 𝛼k , not
all zero, we have
k
∑ 𝛼i ui = 0
i=1

But then, 𝛼1 u1 + 𝛼2 u2 + … . + 𝛼k uk + 0.v1 + 0.v2 + … .. + 0.vm = 0, with some


𝛼i ≠ 0. Thus, T is linearly dependent.

b) Suppose T ⊆ V is linearly independent, and S ⊆ T. If possible, suppose S
is not linearly independent. Then S is linearly dependent; but then, by (a),
T is also linearly dependent, since S ⊆ T. This is a contradiction. Hence,
our supposition is wrong. That is, S is linearly independent. ■

Now, what happens if one of the vectors in a set can be written as a linear
combination of the other vectors in the set? The next theorem states that such
a set is linearly dependent.
Theorem 3: Let S = {v1 , v2 , … , vn } be a subset of a vector space V over field F.
Then S is linearly dependent if and only if some vector of S is a linear
combination of the rest of the vectors of S.

Proof: We have to prove two statements here:

i) If some vi , say v1 , is a linear combination of v2 , … , vn , then S is linearly
dependent.

ii) If S is linearly dependent, then some vi is a linear combination of the other
vj 's.

Let us prove (i) first. Suppose v1 is a linear combination of v2 , … , vn , i.e.,


n
v1 = 𝛼2 v2 + ⋯ + 𝛼n vn = ∑ 𝛼i vi
i=2

where 𝛼i ∈ F ∀i . Then v1 − 𝛼2 v2 − 𝛼3 v3 − ⋯ − 𝛼n vn = 0, which shows that S is


linearly dependent.

We now prove (ii), which is the converse of (i). Since S is linearly dependent,
there exist 𝛼i ∈ F, not all zero, such that

𝛼1 v1 + 𝛼2 v2 + … + 𝛼n vn = 0.

Since some 𝛼i ≠ 0, suppose 𝛼k ≠ 0. Then we have

𝛼k vk = −𝛼1 v1 − ⋯ − 𝛼k−1 vk−1 − 𝛼k+1 vk+1 − ⋯ − 𝛼n vn

Since 𝛼k ≠ 0, we divide throughout by 𝛼k and get

vk = (−𝛼1 /𝛼k ) v1 + ⋯ + (−𝛼n /𝛼k ) vn = ∑i≠k 𝛽i vi , where 𝛽i = −𝛼i /𝛼k .

Thus, vk is a linear combination of v1 , v2 , … , vk−1 , vk+1 , … , vn . ■

We can also state Theorem 3 as: S is linearly dependent if and only if some
vector in S is in the linear span of the rest of the vectors of S.

Now, let us look at the situation in ℝ3 , where we know that i and j are linearly
independent. Can you immediately prove whether the set {i, j, (3, 4, 5)} is
linearly independent or not? The following theorem will help you to do this.

Theorem 4: If S is linearly independent and v ∉ [S], then S ∪ {v} is linearly


independent.

Proof: Let S = {v1 , v2 , ⋯ , vn } and T = S ∪ {v}. If possible, suppose T is linearly


dependent, then there exist scalars 𝛼, 𝛼1 , 𝛼2 , ⋯ , 𝛼n , not all zero, such that

𝛼v + 𝛼1 v1 + ⋯ + 𝛼n vn = 0.
Now, if 𝛼 = 0, this implies that there exist scalars 𝛼1 , 𝛼2 , … , 𝛼n , not all zero, such
that

𝛼1 v1 + ⋯ + 𝛼n vn = 0.

But that is impossible as S is linearly independent. Hence 𝛼 ≠ 0. But then,

v = (−𝛼1 /𝛼) v1 + ⋯ + (−𝛼n /𝛼) vn ,
i.e., v is a linear combination of v1 , v2 , … , vn , i.e., v ∈ [S], which contradicts our
assumption.

Therefore, T = S ∪ {v} must be linearly independent. ■

Using this theorem we can immediately see that the set {i, j, (3, 4, 5)} is linearly
independent, since (3, 4, 5) is not a linear combination of i and j.

Now try the following exercises.

E3) Given a linearly independent subset S of a vector space V, can we always

find a strictly larger linearly independent set T with S ⊊ T? (Hint: Consider
the real space ℝ2 and the set S = {(1, 0) , (0, 1)}.)

If you've done E3), you will have found that, on adding a vector to a
linearly independent set, the set may not remain linearly independent. Theorem 4
tells us that if, to a linearly independent set, we add a vector which is not in
the linear span of the set, then the augmented set will remain linearly
independent. Thus, the way of generating larger and larger linearly
independent subsets of a non-zero vector space V is as follows:

1. Start with any linearly independent set S1 of V, for example, S1 = {v1 },


where 0 ≠ v1 ∈ V.

2. If S1 generates the whole vector space V, i.e., if [S1 ] = V, then every

v ∈ V is a linear combination of vectors of S1 . So S1 ∪ {v} is linearly dependent
for every v ∈ V ⧵ S1 . In this case, S1 is a maximal linearly independent set, that is,
no set larger than S1 is linearly independent.

3. If [S1 ] ≠ V, then there must be a v2 ∈ V such that v2 ∉ [S1 ]. Then, by

Theorem 4, S1 ∪ {v2 } = {v1 , v2 } = S2 (say) is linearly independent. In this case, we
have found a set larger than S1 which is linearly independent, namely, S2 .

4. If [S2 ] = V, the process ends. Otherwise, we can find a still larger set S3
which is linearly independent. It is clear that, in this way, we either reach a
set which generates V or we go on getting larger and larger linearly
independent subsets of V.
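The four steps above can be sketched in code for the case V = ℝ2 , using the collinearity test of this section for pairs of vectors. This is our own illustration with made-up helper names; the fact that no three vectors of ℝ2 are linearly independent anticipates Theorem 7, proved later in this unit.

```python
def independent_in_R2(vectors):
    """Independence test special to R^2: one nonzero vector is independent;
    two vectors are independent iff they are not collinear; three or more
    vectors in R^2 are never independent (Theorem 7, proved later)."""
    if len(vectors) > 2:
        return False
    if len(vectors) == 1:
        return vectors[0] != (0, 0)
    (a, b), (c, d) = vectors
    return a * d - b * c != 0  # nonzero iff the two vectors are not collinear

# Grow a maximal linearly independent subset of R^2, following steps 1-4.
S = [(1, 1)]                        # step 1: start with a nonzero vector
for v in [(2, 2), (3, 4), (0, 1)]:  # candidate vectors to adjoin
    if independent_in_R2(S + [v]):  # adjoin v only if the set stays independent
        S.append(v)
print(S)  # [(1, 1), (3, 4)] -- a maximal linearly independent set
```

Note that (2, 2) is rejected because it is collinear with (1, 1), and (0, 1) is rejected because S already spans ℝ2 by then, exactly as in step 2.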

In the next example, we will give an infinite set that is linearly independent.
Example 3: Prove that the infinite subset S = {1, x, x2 , … … }, of the vector space
P of all real polynomials in x, is linearly independent.

Solution: Take any finite subset T of S. Then ∃ distinct, non-negative integers


a1 , a2 , … , ak , such that

T = {xa1 , xa2 , … , xak }

Now, suppose

𝛼1 xa1 + 𝛼2 xa2 + ⋯ + 𝛼k xak = 0, where 𝛼i ∈ ℝ ∀ i.

In P, 0 is the zero polynomial, all of whose coefficients are zero. ∴𝛼i = 0 ∀ i.


Hence T is linearly independent. As every finite subset of S is linearly
independent, so is S.

∗∗∗

E4) Prove that {1, x + 1, x2 + 1, x3 + 1, …} is a linearly independent subset of the


vector space P.

We have seen in E6, Unit 2 of Block 1, that any vector in ℝ2 is a linear

combination of the vectors e1 = (1, 0) and e2 = (0, 1). Further, in ℝ3 , we can
write every vector as a linear combination of i = (1, 0, 0), j = (0, 1, 0) and
k = (0, 0, 1). So, a natural question is whether, in every vector space, there is a
subset such that any vector in the vector space can be written as a linear
combination of vectors from this subset. The next question is whether we can find
such a subset of the "smallest size". This leads us to the twin concepts of basis
and dimension. We will discuss them in the following sections.

5.3 BASIS AND DIMENSION


We begin this section with the definition of a basis.

Definition 2: A subset B, of a vector space V, is called a basis of V, if

a) B is linearly independent, and

b) B generates V, i.e., [B] = V.

Note that b) implies that every vector in V is a linear combination of a finite


number of vectors from B.

Thus, B ⊆ V is a basis of V if B is linearly independent and every vector of V is
a linear combination of a finite number of vectors of B.
You have already seen that we can write every element of R3 as a linear
combination of i = (1, 0, 0), j = (0, 1, 0) and k = (0, 0, 1). We proved in Example
1, that this set is linearly independent. So, {i, j, k} is a basis for ℝ3 . The
following example shows that ℝ2 has more than one basis.

Example 4: Prove that B = {v1 , v2 } is a basis of ℝ2 , where v1 = (1, 1),


v2 = (−1, 1).

Solution: Firstly, for 𝛼, 𝛽 ∈ ℝ, 𝛼v1 + 𝛽v2 = 0

⟹ (𝛼, 𝛼) + (−𝛽, 𝛽) = (0, 0) ⟹ 𝛼 − 𝛽 = 0, 𝛼 + 𝛽 = 0

⟹𝛼=𝛽=0

Hence, B is linearly independent.

Secondly, given (a, b) ∈ ℝ2 , we can write

(a, b) = ((b + a)/2) v1 + ((b − a)/2) v2
Thus, every vector in ℝ2 is a linear combination of v1 and v2 . Hence, B is also a
basis of ℝ2 .

∗∗∗

Another important characteristic of a basis is that no proper subset of a basis


can generate the whole vector space. This is brought out in the following
example.

Example 5: Prove that {i = (1, 0)} is not a basis of ℝ2 .

Solution: By E2), since i ≠ 0, {i} is linearly independent.

Now, [{i}] = {𝛼i | 𝛼 ∈ ℝ} = {(𝛼, 0) | 𝛼 ∈ ℝ}. So, the second component of every

vector in [{i}] is zero. Therefore, (1, 1) ∉ [{i}]; so [{i}] ≠ ℝ2 .

Thus, {i} is not a basis of ℝ2 .

Note that {i} is a proper subset of the basis {i, j} of ℝ2 .

∗∗∗

E5) Prove that B = {u, v, w} is a basis of ℝ3 , where

u = (1, 2, 0) , v = (2, 1, 0) , w = (0, 0, 1).

E6) Prove that {1, x, x2 , x3 , … … .} is a basis of the vector space, P, of all


polynomials over a field F.

E7) Prove that {1, x + 1, x2 + 2x} is a basis of the vector space, P2 , of all
polynomials of degree less than or equal to 2.

E8) Prove that {1, x + 1, 3x − 1, x2 } is not a basis of the vector space P2 .



We have already mentioned that no proper subset of a basis can generate the
whole vector space. We will now prove another important characteristic of a
basis, namely, that no linearly independent subset of a vector space can contain
more vectors than a basis of the vector space. In other words, a basis
contains the maximum possible number of linearly independent vectors.

Theorem 5: If B = {v1 , v2 , ⋯ , vn } is a basis of a vector space V over a field F, and

S = {w1 , w2 , … , wm } is a linearly independent subset of V, then m ≤ n.

Proof: Since B is a basis of V and w1 ∈ V, w1 is a linear combination of v1 , v2 ,
… , vn . Hence, by Theorem 3, S′1 = {w1 , v1 , v2 , … , vn } is linearly dependent.
Since [B] = V and B ⊆ S′1 , we have [S′1 ] = V. Since S′1 is a linearly dependent
set, we have

𝛽w1 + ∑ni=1 𝛼i vi = 0,

where not all of 𝛽, 𝛼1 , … , 𝛼n are zero. If 𝛽 = 0, we have ∑ni=1 𝛼i vi = 0 with not all
of the 𝛼i zero. This contradicts the assumption that B is a linearly
independent set. So 𝛽 ≠ 0 and, writing 𝛼i′ = −𝛼i /𝛽, we have

w1 = ∑ni=1 𝛼i′ vi , 𝛼i′ ∈ F ∀ i = 1, … , n.

Now, 𝛼i′ ≠ 0 for some i. (Because, otherwise, w1 = 0. But, as w1 belongs to a
linearly independent set, w1 ≠ 0.)

Suppose 𝛼k′ ≠ 0. Then we can just reorder the elements of B so that vk
becomes v1 . This does not change any characteristic of B. It only makes the
proof easier to deal with, since we can now assume that 𝛼1′ ≠ 0. Then

v1 = (1/𝛼1′ ) w1 − ∑ni=2 (𝛼i′ /𝛼1′ ) vi ,

that is, v1 is a linear combination of w1 , v2 , v3 , … , vn . So any linear combination


of v1 , v2 , … , vn can also be written as a linear combination of w1 , v2 , … , vn . Thus,
if S1 = {w1 , v2 , v3 , … , vn } , then [S1 ] = V.

Note that we have been able to replace v1 by w1 in B in such a way that the
new set still generates V. Next, let

S′2 = {w2 , w1 , v2 , v3 , … , vn } .

Then, as above, S′2 is linearly dependent and [S′2 ] = V.

As we argued in the case of w1 we have

w2 = 𝛽1 w1 + 𝛽2 v2 + ⋯ + 𝛽n vn , 𝛽i ∈ F ∀ i = 1 , … , n

Again, 𝛽i ≠ 0 for some i, since w2 ≠ 0. Also, it cannot happen that 𝛽1 ≠ 0 and


𝛽i = 0 ∀ i ≥ 2, since {w1 , w2 } is a linearly independent set (by Theorem 2(b)). So
𝛽i ≠ 0 for some i ≥ 2.
Again, for convenience, we may assume that 𝛽2 ≠ 0. Then

v2 = (1/𝛽2 ) w2 − (𝛽1 /𝛽2 ) w1 − (𝛽3 /𝛽2 ) v3 − ⋯ − (𝛽n /𝛽2 ) vn .

This shows that v2 is a linear combination of w1 , w2 , v3 , ⋯ , vn . Hence, if

S2 = {w2 , w1 , v3 , v4 , ⋯ , vn }, then [S2 ] = V.

So we have replaced v1 , v2 in B by w1 , w2 , and the new set generates V. It is
clear that we can continue in the same way, replacing vi by wi at the ith step.

Now, suppose n < m. Then, after n steps, we will have replaced all the vi 's by
corresponding wi 's, and we shall have a set Sn = {wn , wn−1 , … , w2 , w1 } with
[Sn ] = V. But then, this means that wn+1 ∈ V = [Sn ], i.e., wn+1 is a linear
combination of w1 , w2 , … , wn . This implies that the set {w1 , … , wn , wn+1 } is
linearly dependent. This contradicts the fact that {w1 , w2 , … , wm } is linearly
independent. Hence, m ≤ n. ■

An immediate corollary of Theorem 5 gives us a very quick way of determining


whether a given set is basis of a given vector space or not.

Corollary 1: If B = {v1 , v2 , … , vn } is a basis of V, then any set of n linearly


independent vectors is a basis of V.

Proof: If S = {w1 , w2 , … , wn } is a linearly independent subset of V, then, as

shown in the proof of Theorem 5, [S] = V. As S is linearly independent and
[S] = V, S is a basis of V. ■

The following example shows how the corollary is useful.

Example 6: Show that (1,4) and (0,1) form a basis of ℝ2 over ℝ.

Solution: You know that (1,0) and (0,1) form a basis of ℝ2 over ℝ. Thus, to
show that the given set forms a basis, we only have to show that the 2 vectors
in it are linearly independent. For this, consider the equation
𝛼 (1, 4) + 𝛽 (0, 1) = 0, where 𝛼, 𝛽 ∈ ℝ. Then (𝛼, 4𝛼 + 𝛽) = (0, 0) ⟹ 𝛼 = 0, 𝛽 = 0.

Thus, the set is linearly independent. Hence, it forms a basis of ℝ2 .

∗∗∗
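Corollary 1 is what makes quick checks like Example 6 possible: in ℝ2 , the basis question reduces to the non-collinearity test of Sec. 5.2, which for two vectors (a, b) and (c, d) is just ad − bc ≠ 0. A minimal sketch (the function name is ours, not the unit's):

```python
def is_basis_of_R2(v1, v2):
    """By Corollary 1, any 2 linearly independent vectors form a basis of R^2;
    two vectors are independent iff not collinear, i.e. iff ad - bc != 0."""
    return v1[0] * v2[1] - v1[1] * v2[0] != 0

print(is_basis_of_R2((1, 4), (0, 1)))  # True, as in Example 6
print(is_basis_of_R2((1, 2), (2, 4)))  # False: the vectors are collinear
```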

E9) Let V be a vector space over F, with {u, v, w, t} as a basis.

a) Is {u, v + w, w + t, t + u} a basis of V?
b) Is {u, t} a basis of V?

We now give two results that you must always keep in mind when dealing with
vector spaces. They depend on Theorem 5.
Theorem 6: If one basis of a vector space contains n vectors, then all its bases
contain n vectors.

Proof: Suppose B1 = {v1 , v2 , … , vn } and B2 = {w1 , w2 , … , wm } are both bases of


V. As B1 is a basis and B2 is linearly independent, we have m ≤ n, by
Theorem 5. On the other hand, since B2 is a basis and B1 is linearly
independent, n ≤ m. Thus, m = n. ■
Theorem 7: If a basis of a vector space contains n vectors, then any set
containing more than n vectors is linearly dependent.

Proof: Let B1 = {v1 , … , vn } be a basis of V and B2 = {w1 , … , wn+1 } be a subset of

V containing n + 1 vectors. Suppose B2 is linearly independent. Then {w1 , … , wn }
is also linearly independent (Theorem 2) and so, by Corollary 1, it is a basis of
V. Hence wn+1 is a linear combination of w1 , … , wn . This contradicts our
assumption that B2 is linearly independent. Thus, B2 must be linearly dependent.
Finally, any set with more than n vectors contains a subset of n + 1 vectors, and
so is linearly dependent by Theorem 2(a). ■

Try the next exercise, which illustrates the use of Theorem 7.

E10) Using Theorem 7, prove that the subset S = {1, x + 1, x2 , x3 + 1, x3 , x2 + 6} of


P3 , the vector space of all real polynomials of degree ≤ 3, is linearly
dependent.

So far we have been saying that “if a vector space has a basis, then …”. Now
we state the following theorem (without proof).

Theorem 8: Every non-zero vector space has a basis.

Note: The space {0} has no basis.

Let us look at the scalars in any linear combination of basis vectors.

Coordinates of a vector: You have seen that if B = {v1 , … , vn } is a basis of a


vector space V, then every vector of V is a linear combination of the elements
of B. We now show this linear combination is unique.

Theorem 9: If B = {v1 , v2 , … , vn } is a basis of the vector space V over a field F,


then every v ∈ V can be expressed uniquely as a linear combination of
v1 , v2 , … , vn .

Proof: Since [B] = V and v ∈ V, v is a linear combination of {v1 , v2 , … , vn }. To


prove uniqueness, suppose there exist scalars 𝛼1 , … , 𝛼n , 𝛽1 , … , 𝛽n such that

v = 𝛼1 v1 + ⋯ + 𝛼n vn = 𝛽1 v1 + ⋯ + 𝛽n vn .

Then (𝛼1 − 𝛽1 ) v1 + (𝛼2 − 𝛽2 ) v2 + ⋯ + (𝛼n − 𝛽n ) vn = 0.

But {v1 , v2 , … , vn } is linearly independent. Therefore, 𝛼i − 𝛽i = 0 ∀ i, i.e., 𝛼i = 𝛽i ∀ i.

This establishes the uniqueness of the linear combination. ■


This theorem implies that given a basis B of V, for every v ∈ V, there is one and
n
only one way of writing v = ∑ 𝛼i vi with 𝛼i ∈ F ∀ i.
i=1

Definition 3: Let B = {v1 , v2 , … , vn } be a basis of a vector space V, and let

v ∈ V. If the unique expression of v as a linear combination of
v1 , v2 , … , vn is v = 𝛼1 v1 + ⋯ + 𝛼n vn , then (𝛼1 , 𝛼2 , … , 𝛼n ) are called the coordinates
of v relative to the basis B, and 𝛼i is called the ith coordinate of v.

The coordinates of a vector will depend on the particular basis chosen, as can
be seen in the following example.

Example 7: For ℝ2 , consider the two bases B1 = {(1, 0) , (0, 1)} and

B2 = {(1, 1) , (−1, 1)} (see Example 4). Find the coordinates of the following
vectors in ℝ2 relative to both B1 and B2 .
a) (1,2) b) (0,0) c) (p,q)

Solution:

a) Now, (1, 2) = 1 (1, 0) + 2 (0, 1). Therefore, the coordinates of (1, 2) relative

to B1 are (1, 2).
Also, (1, 2) = (3/2) (1, 1) + (1/2) (−1, 1). Therefore, the coordinates of (1, 2)
relative to B2 are (3/2, 1/2).

b) (0, 0) = 0 (1, 0) + 0(0, 1) and (0, 0) = 0 (1, 1) + 0 (−1, 1).


In this case, the coordinates of (0,0) relative to both B1 and B2 are (0, 0).

c) (p, q) = p (1, 0) + q (0, 1) and

(p, q) = ((q + p)/2) (1, 1) + ((q − p)/2) (−1, 1)

Therefore, the coordinates of (p, q) relative to B1 are (p, q) and the
coordinates of (p, q) relative to B2 are ((q + p)/2, (q − p)/2).

∗∗∗
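Finding coordinates relative to a basis of ℝ2 , as in Example 7, amounts to solving a 2 × 2 system of linear equations. Below is a sketch using Cramer's rule (a solution method not covered in this unit) with exact rational arithmetic; the function name is our own.

```python
from fractions import Fraction

def coordinates(basis, v):
    """Coordinates (a1, a2) of v relative to a basis (b1, b2) of R^2,
    obtained by solving a1*b1 + a2*b2 = v with Cramer's rule."""
    (x1, y1), (x2, y2) = basis
    det = Fraction(x1 * y2 - y1 * x2)  # nonzero precisely because (b1, b2) is a basis
    a1 = Fraction(v[0] * y2 - v[1] * x2) / det
    a2 = Fraction(x1 * v[1] - y1 * v[0]) / det
    return a1, a2

B2 = ((1, 1), (-1, 1))
a1, a2 = coordinates(B2, (1, 2))  # a1 = 3/2, a2 = 1/2, as found in part a)
```

For the standard basis, the function returns (p, q) itself, matching part c) of the example.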

Note: The basis B1 = {i, j} has the pleasing property that, for every vector
(p, q), the coordinates of (p, q) relative to B1 are (p, q) itself. For this reason,
B1 is called the standard basis of ℝ2 , and the coordinates of a vector relative
to the standard basis are called the standard coordinates of the vector. In fact,
this is the basis we normally use for plotting points in 2-dimensional space.

In general, the basis

B = {(1, 0, … .., 0) , (0, 1, 0, … ., 0) , … … , (0, 0, … … , 0, 1)}

of ℝn over ℝ is called the standard basis of ℝn .
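This pattern is easy to generate for any n; the small helper below (our own naming) builds the standard basis of ℝn as tuples, with the ith vector having a 1 in position i and 0 elsewhere.

```python
def standard_basis(n):
    """Standard basis of R^n: the ith vector has 1 in position i, 0 elsewhere."""
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

print(standard_basis(3))  # [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
```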

Example 8: Let V be the vector space of all real polynomials of degree at


most 1 in the variable x. Consider the basis B = {5, 3x} of V. Find the
coordinates relative to B of the following vectors:
a) 2x + 1 b) 3x − 5 c) 11 d) 7x.

Solution:
a) Let 2x + 1 = 𝛼 (5) + 𝛽 (3x) = 3𝛽x + 5𝛼.
Then 3𝛽 = 2, 5𝛼 = 1. So, the coordinates of 2x + 1 relative to B are
(1/5, 2/3).

b) 3x − 5 = 𝛼 (5) + 𝛽 (3x) ⟹ 𝛼 = −1, 𝛽 = 1. Hence, the answer is (−1, 1).

c) 11 = 𝛼 (5) + 𝛽 (3x) ⟹ 𝛼 = 11/5, 𝛽 = 0. Thus, the answer is (11/5, 0).

d) 7x = 𝛼 (5) + 𝛽 (3x) ⟹ 𝛼 = 0, 𝛽 = 7/3. Thus, the answer is (0, 7/3).


∗∗∗

E11) Find a standard basis of ℝ3 and for the vector space P2 of all polynomials
of degree ≤ 2.

E12) For the basis B = {(1, 2, 0) , (2, 1, 0) , (0, 0, 1)} of ℝ3 , find the coordinates of
(−3, 5, 2).

E13) Prove that, for any basis B = {v1 , v2 , … , vn } of a vector space V, the
coordinates of 0 are (0, 0, … , 0).

E14) For the basis B = {3, 2x + 1, x2 − 2} of the vector space P2 of all polynomials
of degree ≤ 2, find the coordinates of
a) 6x + 6 b) (x + 1)2 c) x2

E15) For the basis B = {u, v} of ℝ2 , the coordinates of (1, 0) are (1/2, 1/2) and
the coordinates of (2, 4) are (3, −1). Find u, v.

We now continue the study of vector spaces by looking into their ‘dimension’, a
concept directly related to the basis of a vector space.

5.3.1 Dimension
So far we have seen that, if a vector space has a basis of n vectors, then every
basis has n vectors in it. Thus, given a vector space, the number of elements in
its different bases remains constant.

Definition 4: If a vector space V over the field F has a basis containing n


vectors, we say that the dimension of V is n. We write dimF V = n or, if the
underlying field is understood, we write dim V = n.

If V = {0}, it has no basis. We define dim {0} = 0.

If a vector space does not have a finite basis, we say that it is


infinite-dimensional.

In E6), you have seen that P is infinite-dimensional. Also, E7)

says that dimℝ P2 = 3. Earlier, you have seen that dimℝ ℝ2 = 2 and dimℝ ℝ3 = 3.
In Theorem 8, you saw that every non-zero vector space has a basis. The next
theorem gives us a helpful criterion for obtaining a basis of a finite-dimensional
vector space.
Theorem 10: If there is a finite subset S = {v1 , … , vn } of a non-zero vector space V
such that [S] = V, then V is finite-dimensional and S contains a basis of V.

Proof: We may assume that 0 ∉ S because, if 0 ∈ S, then S ⧵ {0} will still


satisfy the conditions of the theorem. If S is linearly independent then, since
[S] = V, S itself is a basis of V. Therefore, V is finite-dimensional (dim V = n). If
S is linearly dependent, then some vector of S is a linear combination of the rest
(Theorem 3). We may assume that this vector is vn . Let S1 = {v1 , v2 , … ., vn−1 }.

Since [S] = V and vn is a linear combination of v1 , … , vn−1 , we have [S1 ] = V.

If S1 is linearly dependent, we drop, from S1 , that vector which is a linear


combination of the rest, and proceed as before. Eventually, we get a linearly
independent subset Sr = {v1 , v2 , … , vn−r } of S, such that [Sr ] = V (This must
happen because {v1 } is certainly linearly independent.) So Sr ⊆ S is a basis of
V and dim V = n − r . ■

Example 9: Show that the dimension of ℝn is n.

Solution: The set of n vectors

{(1, 0, 0, … , 0) , (0, 1, 0, … , 0) , … , (0, 0, 0, … , 0, 1)}

is linearly independent (as in Example 1) and spans ℝn , so it is a basis of ℝn .
Hence, dim ℝn = n.

∗∗∗

E16) Let V be a vector space of dimension n. If S ⊆ V, [S] = V and |S| = n, show

that S is a linearly independent set and forms a basis for V.

E17) Prove that the real vector space ℂ of all complex numbers has dimension
two.

E18) Prove that the vector space Pn , of all polynomials of degree at most n,
has dimension n + 1.

In the next subsection, we will see how to complete a linearly independent set
in a finite-dimensional vector space to a basis of the vector space.

5.3.2 Completion of a Linearly Independent Set to a Basis
We have seen that, in an n-dimensional vector space, a linearly independent
subset cannot have more than n vectors (Theorem 7). We now ask: suppose
we have a linearly independent subset S of an n-dimensional vector space V.
Further, suppose S has m (< n) vectors. Can we add some vectors to S, so that
the enlarged set will be a basis of V? In other words, can we extend a linearly
independent subset to get a basis? The answer is yes. But, how many vectors
would we have to add? Do you remember Corollary 1 of Theorem 5? That
gives the answer: n − m. Of course, any (n − m) vectors won’t do the job. The
vectors have to be carefully chosen. That is what the next theorem is about.

Theorem 11: Let W = {w1 , w2 , … , wm } be a linearly independent subset of an
n-dimensional vector space V. Suppose m < n. Then there exist vectors
v1 , v2 , … , vn−m ∈ V such that B = {w1 , w2 , … , wm , v1 , v2 , … , vn−m } is a basis of V.

Proof: Since m < n, W is not a basis of V (Theorem 6). Hence, [W] ≠ V. Thus,
we can find a vector v1 ∈ V such that v1 ∉ [W]. Therefore, by Theorem 4,
W1 = W ∪ {v1 } is a linearly independent set containing m + 1 vectors. If
m + 1 = n, W1 is a linearly independent set with n vectors in the n-dimensional
space V, so W1 is a basis of V (Theorem 5, Corollary 1). That is,
{w1 , … , wm , v1 } is a basis of V. If m + 1 < n, then [W1 ] ≠ V, so there is a
v2 ∈ V such that v2 ∉ [W1 ]. Then W2 = W1 ∪ {v2 } is linearly independent and
contains m + 2 vectors. So, if m + 2 = n, then

W2 = W1 ∪ {v2 } = W ∪ {v1 , v2 } = {w1 , w2 , … , wm , v1 , v2 }

is a basis of V. If m + 2 < n, we continue in this fashion. Eventually, when we


have adjoined n − m vectors v1 , v2 , … , vn−m to W, we shall get a linearly
independent set B = {w1 , w2 , … , wm , v1 , v2 , … , vn−m } containing n vectors, and
hence B will be a basis of V. ■

Let us see how Theorem 11 actually works.

Example 10: Complete the linearly independent subset S = {(2, 3, 1)} of ℝ3 to


a basis of ℝ3 .

Solution: Since S = {(2, 3, 1)},

[S] = {𝛼(2, 3, 1) |𝛼 ∈ ℝ}

= {(2𝛼, 3𝛼, 𝛼) |𝛼 ∈ ℝ }

Now we have to find v1 ∈ ℝ3 such that v1 ∉ [S], i.e., such that v1 ≠ (2𝛼, 3𝛼, 𝛼)
for any 𝛼 ∈ ℝ. We can take v1 = (1, 1, 1). Then

S1 = S ∪ {(1, 1, 1)} = {(2, 3, 1) , (1, 1, 1)}

is a linearly independent subset of ℝ3 containing 2 vectors.

Now [S1 ] = {𝛼 (2, 3, 1) + 𝛽(1, 1, 1) |𝛼, 𝛽 ∈ ℝ}

= {(2𝛼 + 𝛽, 3𝛼 + 𝛽, 𝛼 + 𝛽) |𝛼, 𝛽 ∈ ℝ}

Now select v2 ∈ ℝ3 such that v2 ∉ [S1 ]. We can take v2 = (3, 4, 0). How do we
‘hit upon’ this v2 ? There are many ways. What we have done here is to take
𝛼 = 1 = 𝛽; then 2𝛼 + 𝛽 = 3, 3𝛼 + 𝛽 = 4, 𝛼 + 𝛽 = 2. So (3, 4, 2) belongs to [S1 ]. Then,
by changing the third component from 2 to 0, we get (3, 4, 0), which is not in
[S1 ]. Since v2 ∉ [S1 ] , S1 ∪ {v2 } is linearly independent. That is,

S2 = {(2, 3, 1) , (1, 1, 1) , (3, 4, 0)}


is a linearly independent subset of ℝ3 . Since S2 contains 3 vectors and
dimℝ ℝ3 = 3 , S2 is a basis of ℝ3 .

∗∗∗

Note: Since we had a large number of choices for both v1 and v2 , it is obvious
that we could have extended S to get a basis of ℝ3 in many ways.
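A check like the one in Example 10 can also be done numerically: three vectors form a basis of ℝ3 exactly when the matrix having them as rows has rank 3 (equivalently, a nonzero determinant). A quick sketch, assuming NumPy is available:

```python
import numpy as np

# Rows are the vectors of S2 from Example 10.
S2 = np.array([[2, 3, 1],
               [1, 1, 1],
               [3, 4, 0]], dtype=float)

# Rank 3 (equivalently, det != 0) means the rows are linearly
# independent, and hence a basis of R^3.
rank = np.linalg.matrix_rank(S2)
det = np.linalg.det(S2)
print(rank)   # 3
print(det)    # approximately 2, nonzero
```

Any other choice of v1 and v2 that keeps the determinant nonzero would give another valid extension of S.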

Example 11: For the vector space P2 of all polynomials of degree ≤ 2,


complete the linearly independent subset S = {x + 1, 3x + 2} to form a basis of P2 .

Solution: We note that P2 has dimension 3, a basis being {1, x, x2 } (see E18).
So we have to add only one polynomial to S to get a basis of P2 .

Now [S] = {a (x + 1) + b(3x + 2) |a, b ∈ ℝ}

= {(a + 3b) x + (a + 2b) |a, b ∈ ℝ}

This shows that [S] does not contain any polynomials of degree 2. So we can
choose x2 ∈ P2 because x2 ∉ [S]. So S can be extended to {x + 1, 3x + 2, x2 },
which is a basis of P2 . Have you wondered why there is no constant term in
this basis? A constant term is not necessary. Observe that 1 is a linear
combination of x + 1 and 3x + 2, namely, 1 = 3 (x + 1) − 1(3x + 2). So, 1 ∈ [S] and
hence, ∀ 𝛼 ∈ ℝ, 𝛼.1 = 𝛼 ∈ [S].

∗∗∗
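The same numerical check works for polynomials once we pass to coordinates: writing each polynomial with respect to the standard basis {1, x, x2 }, the set {x + 1, 3x + 2, x2 } is a basis of P2 exactly when its coefficient matrix has rank 3. A sketch, assuming NumPy is available:

```python
import numpy as np

# Coordinate vectors with respect to {1, x, x^2}:
#   x + 1   -> (1, 1, 0)
#   3x + 2  -> (2, 3, 0)
#   x^2     -> (0, 0, 1)
M = np.array([[1, 1, 0],
              [2, 3, 0],
              [0, 0, 1]], dtype=float)

rank = np.linalg.matrix_rank(M)
print(rank)   # 3, so the three polynomials form a basis of P2
```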

E19) Complete S = {(−3, 1/3)} to a basis of ℝ2 .

E20) Complete S = {(1, 0, 1) , (2, 3, −1)} in two different ways to get two distinct
bases of ℝ3 .

E21) For the vector space P3 of all polynomials of degree ≤ 3, complete

a) S = {2, x2 + x, 3x3 }
b) S = {x2 + 2, x2 − 3x}

to get a basis of P3 .

Let us now look at some properties of the dimensions of some subspaces.

5.4 DIMENSIONS OF SOME SUBSPACES


In Unit 3 you learnt what a subspace of a space is. Since it is a vector space
itself, it must have a dimension. We have the following theorem.

Theorem 12: Let V be a vector space over a field F such that dim V = n. Let W
be a subspace of V. Then dim W ≤ n.
Proof: Since W is a vector space over F in its own right, it has a basis.
Suppose dim W = m . Then the number of elements in W’s basis is m. These
elements form a linearly independent subset of W, and hence, of V. Therefore,
by Theorem 7, m ≤ n. ■
Remark 2: If W is a subspace of V such that dim W = dim V = n then
W = V, since the basis of W is a set of linearly independent elements in V, and
we can appeal to Corollary 1.

Example 12: Let V be a subspace of ℝ2 . What are the possibilities for V?

Solution: By Theorem 12, since dim ℝ2 = 2, the only possibilities for dim V
are 0,1 and 2.

If dim V = 2, then, by the remark above, V = ℝ2 .

If dim V = 1 , then {(𝛽1 , 𝛽2 )} is a basis of V, where (𝛽1 , 𝛽2 ) ∈ ℝ2 ⧵ {0}. Then

V = {𝛼 (𝛽1 , 𝛽2 ) |𝛼 ∈ ℝ} .

The set of points on the line ax + by = 0, through the origin, is a subspace of ℝ2 .


In particular S = { (x, y) ∈ ℝ2 | 𝛽2 x − 𝛽1 y = 0} is a subspace of ℝ2 and V ⊆ S. So,
dim S ≥ dim V = 1.

To show that S = V, we need to show that S ⊆ V. Note that, since (𝛽1 , 𝛽2 ) ≠ 0,


at most one of 𝛽1 or 𝛽2 can be zero. We will assume that 𝛽1 ≠ 0, 𝛽2 ≠ 0. If
(x, y) ∈ S, we have 𝛽2 x = 𝛽1 y, i.e., x = (𝛽1 /𝛽2 )y. Therefore,
(x, y) = ((y/𝛽2 )𝛽1 , (y/𝛽2 )𝛽2 ) = 𝛼(𝛽1 , 𝛽2 ), where 𝛼 = y/𝛽2 . The same argument
works if 𝛽1 = 0, 𝛽2 ≠ 0. If 𝛽2 = 0, 𝛽1 ≠ 0, we have 𝛽1 y = 0, so y = 0. Then,
(x, y) = (x, 0) = (x/𝛽1 ) (𝛽1 , 0) = 𝛼 (𝛽1 , 𝛽2 ), where 𝛼 = x/𝛽1 . Since we can
write the equation ax + by = 0 as ax − (−b)y = 0, it follows that we can
characterise the one dimensional subspaces of ℝ2 as lines through the origin.

If dim V = 0 , then V = {0}.

∗∗∗

Let us see what happens in three dimensions.

Example 13: Let V be a subspace of ℝ3 . What are the possibilities for its
structure?

Solution: dim V can be 0, 1, 2 or 3. dim V = 0 ⟹ V = {0}.

dim V = 1 ⟹ V = { 𝛼(𝛽1 , 𝛽2 , 𝛽3 )| 𝛼 ∈ ℝ}

for some (𝛽1 , 𝛽2 , 𝛽3 ) ∈ ℝ3 .

As we did in the previous example, let us first consider the case 𝛽1 ≠ 0, 𝛽2 ≠ 0


and 𝛽3 ≠ 0. Consider the set

S = { (x, y, z) ∈ ℝ3 | x/𝛽1 = y/𝛽2 = z/𝛽3 }
The set S is a line through the origin. Let (x1 , y1 , z1 ), (x2 , y2 , z2 ) ∈ S.

Further,

x1 /𝛽1 = y1 /𝛽2 = z1 /𝛽3 and x2 /𝛽1 = y2 /𝛽2 = z2 /𝛽3

Check that

(𝛼x1 + 𝛽x2 )/𝛽1 = (𝛼y1 + 𝛽y2 )/𝛽2 = (𝛼z1 + 𝛽z2 )/𝛽3

This shows that S is a subspace of ℝ3 . Further V ⊆ S.


Also, if (x, y, z) ∈ S, writing x/𝛽1 = y/𝛽2 = z/𝛽3 = 𝛼, we have
(x, y, z) = (𝛼𝛽1 , 𝛼𝛽2 , 𝛼𝛽3 ) = 𝛼 (𝛽1 , 𝛽2 , 𝛽3 ). So, V = S. Suppose one of 𝛽1 , 𝛽2 or 𝛽3
is zero, say, 𝛽1 = 0. Then, we let

S = {(x, y, z) ∈ ℝ3 |x = 0, y/𝛽2 = z/𝛽3 }

Then, S is a line through the origin that lies on the yz-plane. Again, we can
show that S is a subspace and

S = V = { 𝛼 (0, 𝛽2 , 𝛽3 )| 𝛼 ∈ ℝ}

Similarly, if 𝛽2 = 0 and 𝛽1 ≠ 0, 𝛽3 ≠ 0, we let

S = {(x, y, z) ∈ ℝ3 |y = 0, x/𝛽1 = z/𝛽3 }

Then, S is a line in the xz-plane. Again, we can show that S is a subspace and

S = V = { 𝛼 (𝛽1 , 0, 𝛽3 )| 𝛼 ∈ ℝ}

If 𝛽3 = 0 and 𝛽1 ≠ 0, 𝛽2 ≠ 0, we let

S = {(x, y, z) ∈ ℝ3 |z = 0, x/𝛽1 = y/𝛽2 }

Then, S is a line in the xy-plane.

Again, we can show that S is a subspace and

S = V = { 𝛼 (𝛽1 , 𝛽2 , 0)| 𝛼 ∈ ℝ}

Similarly, you can work out the cases where two of 𝛽1 , 𝛽2 , 𝛽3 are zero.

dim V = 2 ⟹ V is generated by two linearly independent vectors.


Suppose u1 = (𝛼1 , 𝛼2 , 𝛼3 ), u2 = (𝛽1 , 𝛽2 , 𝛽3 ) are linearly independent vectors and

V = [u1 , u2 ]

i.e. {u1 , u2 } is a basis of V. Consider the equations

𝛼1 x + 𝛼2 y + 𝛼3 z = 0
𝛽1 x + 𝛽2 y + 𝛽3 z = 0

The system has two homogeneous equations in three unknowns, so it has a
nonzero solution (a, b, c). Let

S = { (x, y, z) ∈ ℝ3 | ax + by + cz = 0}
Then, S is a subspace of ℝ3 and u1 , u2 ∈ S. Since S contains a basis of V, it
contains V.

Consider the equation ax + by + cz = 0, where at least one of a, b or c is
nonzero. Let us suppose a ≠ 0. The same argument works when b or c is
nonzero. Then, x = −(b/a) y − (c/a) z. So, if (x, y, z) ∈ S, we have

(x, y, z) = y (−b/a, 1, 0) + z (−c/a, 0, 1)

So, S is spanned by v1 = (−b/a, 1, 0) and v2 = (−c/a, 0, 1). Check that {v1 , v2 }
is linearly independent. Therefore, dim S = 2. Since V ⊆ S and
dim V = dim S, it follows that S = V.

dim V = 3 ⟹ V = ℝ3 .

∗∗∗
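In the two-dimensional case, the coefficients (a, b, c) can be computed directly: for linearly independent u1 , u2 ∈ ℝ3 , the cross product u1 × u2 is orthogonal to both, so it solves the two homogeneous equations above. A sketch, assuming NumPy is available and using the illustrative vectors u1 = (1, 2, 3) and u2 = (0, 1, 1) (these are not taken from the text):

```python
import numpy as np

u1 = np.array([1.0, 2.0, 3.0])   # illustrative choice, not from the example
u2 = np.array([0.0, 1.0, 1.0])

# The normal (a, b, c) = u1 x u2 satisfies both homogeneous equations.
n = np.cross(u1, u2)
print(n)                # [-1. -1.  1.]
print(np.dot(u1, n))    # 0.0
print(np.dot(u2, n))    # 0.0

# u1, u2 are linearly independent, so they span the plane
# S = {(x, y, z) : -x - y + z = 0}.
print(np.linalg.matrix_rank(np.vstack([u1, u2])))   # 2
```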

Now let us go further and discuss the dimensions of the sum of subspaces (see
Unit 3). If U and W are subspaces of a vector space V, then so are U + W and
U ∩ W. Thus, all these subspaces have dimensions. We relate these
dimensions in the following theorem.

Theorem 13: If U and W are two subspaces of a finite-dimensional vector


space V over a field F, then

dim(U + W) = dim U + dim W − dim(U ∩ W)

Proof: We recall that U + W = {u + w |u ∈ U, w ∈ W }.

Let dim(U ∩ W) = r, dim U = m, dim W = n. We have to prove that


dim (U + W) = m + n − r.

Let {v1 , v2 , … , vr } be a basis of U ∩ W. Then {v1 , v2 , … , vr } is a linearly


independent subset of U and also of W. Hence, by Theorem 11, it can be extended to form a
basis

A = {v1 , v2 , … , vr , ur+1 , ur+2 , … , um }

of U and a basis

B = {v1 , v2 , … , vr , wr+1 , wr+2 , … , wn }

of W.

Now, note that none of the u′ s can be a w. For, if us = wt then us ∈ U, wt ∈ W,


so that us ∈ U ∩ W. But then us must be a linear combination of the basis
{v1 , … , vr } of U ∩ W. This contradicts the fact that A is linearly independent.
Thus,

A ∪ B = {v1 , v2 , … , vr , ur+1 , … , um , wr+1 , … , wn }

contains r + (m − r) + (n − r) = m + n − r vectors. We need to prove that A ∪ B is a basis of


U + W. For this, we first show that A ∪ B is linearly independent. So let
r m n
∑ 𝛼i vi + ∑ 𝛽j uj + ∑ 𝜏k wk = 0
i=1 j=r+1 k=r+1

where 𝛼i , 𝛽j , 𝜏k ∈ F ∀ i, j, k.

Then
r m n
∑ 𝛼i vi + ∑ 𝛽j uj = − ∑ 𝜏k wk …(2)
i=1 j=r+1 k=r+1

The vector on the left hand side of Eqn. (2) is a linear combination of
{v1 , … , vr , ur+1 , … , um }. So it is in U. The vector on the right hand side is in W.
Hence, the vectors on both sides of the equation are in U ∩ W. But {v1 , … , vr }
is a basis of U ∩ W. So the vectors on both sides of Eqn. (2) are a linear
combination of the basis {v1 , … , vr } of U ∩ W.

That is,
r m r
∑ 𝛼i vi + ∑ 𝛽j uj = ∑ 𝛿i vi …(3)
i=1 j=r+1 i=1

and

− (𝜏r+1 wr+1 + ⋯ + 𝜏n wn ) = 𝛿1 v1 + ⋯ + 𝛿r vr                                …(4)

where 𝛿i ∈ F ∀ i = 1, … , r.

Eqn. (3) gives ∑ (𝛼i − 𝛿i ) vi + ∑ 𝛽j uj = 0.

But {v1 , … , vr , ur+1 , … , um } is linearly independent, so 𝛼i = 𝛿i and 𝛽j = 0 ∀ i, j.

Similarly, Eqn. (4) gives

∑ 𝛿i vi + ∑ 𝜏k wk = 0.

But B is linearly independent, so 𝛿i = 0 and 𝜏k = 0 ∀ i, k. Since we have
already obtained 𝛼i = 𝛿i ∀ i, we get 𝛼i = 0 ∀ i.

Thus, ∑ 𝛼i vi + ∑ 𝛽j uj + ∑ 𝜏k wk = 0

⟹ 𝛼i = 0, 𝛽j = 0, 𝜏k = 0 ∀ i, j, k.

So A ∪ B is linearly independent.

Next, let u + w ∈ U + W. Then u = ∑ 𝛼i vi + ∑ 𝛽j uj and w = ∑ 𝛾i vi + ∑ 𝜏k wk , i.e., u + w


is a linear combination of A ∪ B.

∴, A ∪ B is a basis of U + W, and

dim (U + W) = m + n − r = dim U + dim W − dim(U ∩ W)

■
We give a corollary to Theorem 13 now.

Corollary 2: dim(U ⨁ W) = dim U + dim W.

Proof: The direct sum U ⨁ W indicates that U ∩ W = {0}. Therefore, dim


(U ∩ W) = 0. Hence, dim (U + W) = dim U + dim W . ■

Let us use Theorem 13 now.

Example 14: Suppose U and W are subspaces of V,
dim U = 4 , dim W = 5, dim V = 7 . Find the possible values of dim(U ∩ W).

Solution: Since W is a subspace of U + W, we must have


dim (U + W) ≥ dim W = 5 .i.e.,
dim U + dim W − dim(U ∩ W) ≥ 5 ⟹ 4 + 5 − dim(U ∩ W) ≥ 5 ⟹ dim(U ∩ W) ≤ 4 .
On the other hand, U + W is a subspace of V, so dim (U + W) ≤ 7.

⟹ 5 + 4 − dim(U ∩ W) ≤ 7

⟹ dim(U ∩ W) ≥ 2

Thus, dim (U ∩ W) = 2, 3 or 4.

∗∗∗

Example 15: Let V and W be the following subspaces of ℝ4 :

V = {(a, b, c, d) |b − 2c + d = 0} , W = {(a, b, c, d) |a = d, b = 2c}

Find bases and the dimensions of V, W and V ∩ W. Hence prove that
ℝ4 = V + W.

Solution: We observe that

(a, b, c, d) ∈ V ⟺ b − 2c + d = 0.

⟺ (a, b, c, d) = (a, b, c, 2c − b)

= (a, 0, 0, 0) + (0, b, 0, −b) + (0, 0, c, 2c)

= a (1, 0, 0 , 0) + b (0, 1, 0, −1) + c(0, 0, 1, 2)

This shows that every vector in V is a linear combination of the three linearly
independent vectors (1, 0, 0, 0) , (0, 1, 0, −1) , (0, 0, 1, 2). Thus, a basis of V is

A = {(1, 0, 0, 0) , (0, 1, 0, −1) , (0, 0, 1, 2)}

Hence, dim V = 3 .

Next, (a, b, c, d) ∈ W ⟺ a = d, b = 2c

⟺ (a, b, c, d) = (a, 2c, c, a) = (a, 0, 0, a) + (0, 2c, c, 0)

= a (1, 0, 0, 1) + c (0, 2, 1, 0) ,

This shows that W is generated by the linearly independent set


{(1, 0, 0, 1) , (0, 2, 1, 0)}.
∴, a basis for W is

B = {(1, 0, 0, 1) , (0, 2, 1, 0)},

and dim W = 2 .

Next, (a, b, c, d) ∈ V ∩ W ⟺ (a, b, c, d) ∈ V and (a, b, c, d) ∈ W

⟺ b − 2c + d = 0, a = d, b = 2c

⟺ (a, b, c, d) = (0, 2c, c, 0) = c(0, 2, 1, 0)

Hence, a basis of V ∩ W is {(0, 2, 1, 0)} and dim (V ∩ W) = 1 .

Finally, dim (V + W) = dim V + dim W − dim(V ∩ W)

= 3 + 2 − 1 = 4.

Since V + W is a subspace of ℝ4 and both have the same dimension,
ℝ4 = V + W.

∗∗∗
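The dimension count in Example 15 can be verified numerically: stacking the basis vectors of V and W as rows, the rank of the stacked matrix is dim(V + W), and Theorem 13 then gives dim(V ∩ W). A sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[1, 0, 0, 0],      # basis of V, as rows
              [0, 1, 0, -1],
              [0, 0, 1, 2]], dtype=float)
B = np.array([[1, 0, 0, 1],      # basis of W, as rows
              [0, 2, 1, 0]], dtype=float)

dim_V = np.linalg.matrix_rank(A)                    # 3
dim_W = np.linalg.matrix_rank(B)                    # 2
dim_sum = np.linalg.matrix_rank(np.vstack([A, B]))  # dim(V + W) = 4
dim_int = dim_V + dim_W - dim_sum                   # Theorem 13: dim(V ∩ W) = 1
print(dim_V, dim_W, dim_sum, dim_int)   # 3 2 4 1
```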

E22) If U and W are 2-dimensional subspaces of ℝ3 , show that U ∩ W ≠ {0}.

E23) If U and W are distinct 4-dimensional subspaces of a 6-dimensional
vector space V, find the possible dimensions of U ∩ W.

E24) Suppose V and W are subspaces of ℝ4 such that dim V = 3, dim W = 2.


Prove that dim(V ∩ W) = 1 or 2.

E25) Let V and W be subspaces of ℝ3 defined as follows :

V = {(a, b, c) |b + 2c = 0}

W = {(a, b, c) |a + b + c = 0}

a) Find bases and dimensions of V, W, V ∩ W


b) Find dim (V + W).

Let us now look at the dimension of a quotient space. Before going further it
may help to revise Sec. 3.5.

5.5 DIMENSION OF A QUOTIENT SPACE


In Unit 3 we defined the quotient space V/W for any vector space V and
subspace W. Recall that V/W = {v + W |v ∈ V }.

We also showed that it is a vector space. Hence, it must have a basis and a
dimension. The following theorem tells us what dim V/W should be.
Theorem 14: If W is a subspace of a finite-dimensional space V, then
dim (V/W) = dim V − dim W.

Proof: Suppose dim V = n and dim W = m. Let {w1 , w2 , … , wm } be a basis of


W. Then there exist vectors v1 , v2 , … , vk such that {w1 , w2 , … , wm , v1 , v2 , … , vk } is
a basis of V, where m + k = n. (Theorem 11.)

We claim that B = {v1 + W, v2 + W, … , vk + W} is a basis of V/W. First, let us


show that B is linearly independent. For this, suppose
𝛼1 (v1 + W) + ⋯ + 𝛼k (vk + W) = W, where 𝛼1 , … , 𝛼k are scalars

(note that the zero vector of V/W is W).

Then (𝛼1 v1 + ⋯ + 𝛼k vk ) + W = W

⟹ 𝛼1 v1 + ⋯ + 𝛼k vk ∈ W

But W = [{w1 , w2 , … , wm }], so 𝛼1 v1 + ⋯ + 𝛼k vk = 𝛽1 w1 + ⋯ + 𝛽m wm for some
scalars 𝛽1 , … , 𝛽m .

⟹ ∑ 𝛼i vi − ∑ 𝛽j wj = 0

But {w1 , … , wm , v1 , … .., vk } is a basis of V, so it is linearly independent. Hence


we must have

𝛽j = 0, 𝛼i = 0 ∀ j, i.

Thus,

∑ 𝛼i (vi + W) = W ⟹ 𝛼i = 0 ∀ i.

So B is linearly independent.

Next, to show that B generates V/W, let v + W ∈ V/W. Since v ∈ V and
{w1 , … , wm , v1 , … , vk } is a basis of V, v = ∑i 𝛼i wi + ∑j 𝛽j vj , where the 𝛼i s and
𝛽j s are scalars.

Therefore,

v + W = (∑i 𝛼i wi + ∑j 𝛽j vj ) + W

= {(∑i 𝛼i wi ) + W} + {(∑j 𝛽j vj ) + W}

= W + ∑j 𝛽j (vj + W), since ∑i 𝛼i wi ∈ W

= ∑j 𝛽j (vj + W)

since W is the zero element of V/W.


Thus, v + W is a linear combination of {vj + W | j = 1, 2, … , k}.

So, v + W ∈ [B].

Thus, B is a basis of V/W.

Hence, dim V/W = k = n − m = dim V − dim W . ■

Let us use this theorem to evaluate the dimensions of some familiar quotient
spaces.

Example 16: If Pn denotes the vector space of all polynomials of degree ≤ n,


exhibit a basis of P4 /P2 and verify that dim P4 /P2 = dim P4 − dim P2 .

Solution: Now P4 = {ax4 + bx3 + cx2 + dx + e |a, b, c, d, e ∈ ℝ} and


P2 = {ax2 + bx + c |a, b, c ∈ ℝ} .

Therefore, P4 /P2 = {(ax4 + bx3 ) + P2 |a, b ∈ ℝ },

Now,

(ax4 + bx3 ) + P2 = (ax4 + P2 ) + (bx3 + P2 ) = a (x4 + P2 ) + b(x3 + P2 )

This shows that every element of P4 /P2 is a linear combination of the two
elements (x4 + P2 ) and (x3 + P2 ).

These two elements of P4 /P2 are also linearly independent because if
𝛼 (x4 + P2 ) + 𝛽 (x3 + P2 ) = P2 , then 𝛼x4 + 𝛽x3 ∈ P2 (𝛼, 𝛽 ∈ ℝ).

∴, 𝛼x4 + 𝛽x3 = ax2 + bx + c for some a, b, c ∈ ℝ

⟹ 𝛼 = 0, 𝛽 = 0, a = 0, b = 0, c = 0.

Hence a basis of P4 /P2 is {x4 + P2 , x3 + P2 }.

Thus, dim (P4 /P2 ) = 2 . Also dim (P4 ) = 5 , dim (P2 ) = 3 (see E18). Hence
dim (P4 /P2 ) = dim (P4 ) − dim (P2 ) is verified.

∗∗∗

Try the following exercise now.

E26) Let V be an n-dimensional real vector space. Find dim (V/V) and
dim V/{0}.

We end this unit by summarizing what we have covered in it.

5.6 SUMMARY
In this unit, we
1. defined a linearly independent set and gave examples of linearly
independent and linearly dependent sets of vectors;

2. saw how to determine whether a given set of vectors is linearly
independent or not;

3. defined a basis for a vector space and gave examples;

4. saw how to determine whether a set forms a basis for a vector space or not;

5. determined bases for well known vector spaces;

6. determined the dimension of the sum of two subspaces, the intersection of
two subspaces and quotient spaces.

5.7 SOLUTIONS/ANSWERS

E1) a) Writing as column vectors, suppose

−1 1 1 0
𝛼1 [ 1 ] + 𝛼2 [−1] + 𝛼3 [1] = [0]
1 1 0 0
or
−𝛼1 + 𝛼2 + 𝛼3 0
[ 𝛼1 − 𝛼2 + 𝛼3 ] = [0]
𝛼1 + 𝛼2 0

Writing in the form of equations

−𝛼1 + 𝛼2 + 𝛼3 = 0
𝛼1 − 𝛼2 + 𝛼3 = 0
𝛼1 + 𝛼2 =0

The matrix form is


−1 1 1 0
[1 −1 1 0]
1 1 0 0

Carrying out row operations R1 → −R1 , R2 → R2 − R1 , R3 → R3 − R1 ,
R2 ↔ R3 , R2 → (1/2)R2 , R1 → R1 + R2 , R3 → (1/2)R3 , R1 → R1 + (1/2)R3 and
R2 → R2 − (1/2)R3 gives the matrix (Check this!)

1 0 0 0
[0 1 0 0]
0 0 1 0

There are no non-pivot columns except the last column which doesn’t
correspond to a variable. The solution to the system is 𝛼1 = 0, 𝛼2 = 0
and 𝛼3 = 0. So, the vectors are linearly independent.
b) Writing as column vectors, suppose

1 2 1 0
𝛼1 [3] + 𝛼2 [ 1 ] + 𝛼3 [2] = [0]
2 −1 1 0
or

𝛼1 + 2𝛼2 + 𝛼3 0
[3𝛼1 + 𝛼2 + 2𝛼3 ] = [0] …(5)
2𝛼1 − 𝛼2 + 𝛼3 0

Writing as linear equations, we get

𝛼1 + 2𝛼2 + 𝛼3 = 0
3𝛼1 + 𝛼2 + 2𝛼3 = 0
2𝛼1 − 𝛼2 + 𝛼3 = 0

The matrix form is

1 2 1 0
[3 1 2 0]
2 −1 1 0

Carrying out the row operations R2 → R2 − 3R1 , R3 → R3 − 2R1 ,
R2 → −(1/5)R2 , R1 → R1 − 2R2 and R3 → R3 + 5R2 , we get

1 0 3/5 0
0 1 1/5 0
0 0 0 0

The column corresponding to 𝛼3 is the non-pivot column. Setting
𝛼3 = 𝜆, the solution is 𝛼1 = −(3/5)𝜆, 𝛼2 = −(1/5)𝜆, 𝛼3 = 𝜆. In particular, setting
𝜆 = 5, we get a non-trivial solution 𝛼1 = −3, 𝛼2 = −1 and 𝛼3 = 5. So, the
vectors are linearly dependent.
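Both parts of E1 can be cross-checked with a rank computation: the vectors are linearly independent exactly when the matrix having them as columns has full column rank. A sketch, assuming NumPy is available:

```python
import numpy as np

# Part a): columns are the three vectors.
A = np.array([[-1, 1, 1],
              [ 1, -1, 1],
              [ 1, 1, 0]], dtype=float)
print(np.linalg.matrix_rank(A))   # 3 -> linearly independent

# Part b): rank 2 < 3 -> linearly dependent.
B = np.array([[1, 2, 1],
              [3, 1, 2],
              [2, -1, 1]], dtype=float)
print(np.linalg.matrix_rank(B))   # 2

# One dependence relation: 3*v1 + v2 - 5*v3 = 0.
v1, v2, v3 = B[:, 0], B[:, 1], B[:, 2]
print(3*v1 + v2 - 5*v3)           # [0. 0. 0.]
```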

E2) Suppose 𝛼 ∈ F such that 𝛼v = 0. Then, from Unit 3 you know that 𝛼 = 0
or v = 0. But v ≠ 0. ∴, 𝛼 = 0, and {v} is linearly independent.

E3) The set S = {(1, 0) , (0, 1)} is a linearly independent subset of ℝ2 . Now,
suppose ∃ T such that S ⊊ T ⊆ ℝ2 . Let (x, y) ∈ T be such that (x, y) ∉ S. Then
we can always find a, b, c ∈ ℝ, not all zero, such that
a (1, 0) + b (0, 1) + c (x, y) = (0, 0). (Take a = −x, b = −y, c = 1, for example.)
∴, S ∪ {(x, y)} is linearly dependent. Since this is contained in T, T is
linearly dependent. The answer to the question in this exercise is ‘No’.

E4) Let T be a finite subset of P. Suppose 1 ∉ T. Then, as in Example 3, ∃


non zero a1 , … … , ak such that

T = {xa1 + 1, … .., xak + 1}.

Suppose 𝛼1 (xa1 + 1) + ⋯ + 𝛼k (xak + 1) = 0, where 𝛼i ∈ ℝ ∀ i.

Then 𝛼1 xa1 + ⋯ + 𝛼k xak + (𝛼1 + 𝛼2 + ⋯ + 𝛼k ) = 0


⟹ 𝛼1 = 0 = 𝛼2 = ⋯ = 𝛼k , so that T is linearly independent.
If 1 ∈ T, then T = {1, xa1 + 1, … , xak + 1} for some non-zero a1 , … , ak .
Suppose 𝛽0 + 𝛽1 (xa1 + 1) + ⋯ + 𝛽k (xak + 1) = 0, where 𝛽0 , 𝛽1 , … , 𝛽k ∈ ℝ.
Then 𝛽0 + 𝛽1 + ⋯ + 𝛽k = 0, 𝛽1 xa1 + ⋯ + 𝛽k xak = 0.

⟹ 𝛽0 + 𝛽1 + ⋯ + 𝛽k = 0, 𝛽1 = 0 = 𝛽2 = ⋯ = 𝛽k

⟹ 𝛽0 = 0 = 𝛽1 = ⋯ = 𝛽k

⟹ T is linearly independent.
Thus, every finite subset of {1, x + 1, x2 + 1, …} is linearly independent.
Therefore, {1, x + 1, x2 + 1, …} is a linearly independent set.

E5) B is linearly independent. For any (a, b, c) ∈ ℝ3 , we have
(a, b, c) = ((2b − a)/3) (1, 2, 0) + ((2a − b)/3) (2, 1, 0) + c(0, 0, 1). Thus, B also
spans ℝ3 .

E6) Firstly, any element of P is of the form a0 + a1 x + a2 x2 + … . + an xn ,ai ∈ ℝ ∀ i.


This is a linear combination of {1, x, … ., xn }, a finite subset of the given set.
Therefore, the given set spans P. Secondly, Example 3 says that the
given set is linearly independent. Therefore, it is a basis of P.
E7) The set {1, x + 1, x2 + 2x} is linearly independent. It also spans P2 , since
any element a0 + a1 x + a2 x2 ∈ P2 can be written as
(a0 − a1 + 2a2 ) + (a1 − 2a2 ) (x + 1) + a2 (x2 + 2x). Thus, the set is a basis of
P2 .
E8) The set is linearly dependent, since 4 − 3 (x + 1) + (3x − 1) + 0.x2 = 0. ∴, it
can’t form a basis of P2 .

E9) a) We have to show that the given set is linearly independent.


Now au + b (v + w) + c (w + t) + d (t + u) = 0, for a, b, c, d ∈ F.

⟹ (a + d) u + bv + (b + c) w + (c + d) t = 0

⟹ a + d = 0, b = 0, b + c = 0 and c + d = 0. Solving for a, b, c and d,


we get a = 0, b = 0, c = 0 and d = 0. Since it has 4 vectors, it is a
basis of V.
b) No, since [{u, t}] ≠ V. For example, w ∉ [{u, t}] as {u, w, t} is a linearly
independent set by Theorem 2.

E10) You know that {1, x, x2 , x3 } is a basis of P3 , and contains 4 vectors. The
given set contains 6 vectors, and hence, by Theorem 7, it must be linearly
dependent.

E11) A standard basis of ℝ3 is {(1, 0, 0) , (0, 1, 0) , (0, 0, 1)} .{1, x, x2 } is a standard


basis for P2 , because the coordinates of any vector a0 + a1 x + a2 x2 , in P2 ,
is (a0 , a1 , a2 ).
E12) (13/3, −11/3, 2)
E13) Since 0 = 0.v1 + 0.v2 + ⋯ + 0.vn , the coordinates are (0, 0, … , 0).
E14) a) 6x + 6 = 1.3 + 3 (2x + 1) + 0.(x2 − 2). ∴, the coordinates are (1,3,0).
b) (2/3, 1, 1).
c) (2/3, 0, 1).
E15) Let u = (a, b) , v = (c, d). We know that
a+c b+d
(1, 0) = 1/2 (a, b) + 1/2 (c, d) = ( , ),
2 2
and (2, 4) = 3 (a, b) − (c, d) = (3a − c, 3b − d)
∴, a + c = 2, b + d = 0, 3a − c = 2, 3b − d = 4. Solving these equations gives
us a = 1, b = 1, c = 1, d = −1

∴, u = (1, 1) , v = (1, −1).

E16) By Theorem 10 it follows that there is a set S1 ⊆ S such that S1 is a basis
for V. If S1 ⊊ S, then |S1 | < |S| = n. But this contradicts Theorem 6. So,
S1 = S and S is a basis for V.
E17) ℂ = {x + iy |x, y ∈ ℝ }. Consider the set S = {1 + i0, 0 + i1}. This spans ℂ and
is linearly independent. Therefore, it is a basis of ℂ. ∴, dimℝ C = 2.
E18) Check that the set {1, x, … ..xn } is linearly independent and spans Pn .

E19) [S] = {𝛼 (−3, 1/3) |𝛼 ∈ ℝ}. Since (1, 0) ∉ [S], S ∪ {(1, 0)} is linearly
independent and, having two vectors, is a basis of ℝ2 .

E20) To obtain a basis we need to add one element. Now,

[S] = {𝛼(1, 0, 1) + 𝛽(2, 3, −1) |𝛼, 𝛽 ∈ ℝ }

= {(𝛼 + 2𝛽, 3𝛽, 𝛼 − 𝛽) |𝛼, 𝛽 ∈ ℝ}

Then (1, 0, 0) ∉ [S] and (0, 1, 0) ∉ [S]. ∴, {(1, 0, 1) , (2, 3, −1) , (1, 0, 0)} and
{(1, 0, 1) , (2, 3, −1) , (0, 1, 0)} are two distinct bases of ℝ3 .

E21) a) Check that x ∉ [S]. ∴, S ∪ {x} is a basis.

b) 1 ∉ [S]. Let S1 = S ∪ {1}. Then x3 ∉ [S1 ]. Thus, a basis is
{1, x2 + 2, x2 − 3x, x3 }.

E22) dim U = 2 = dim W. Now U + W is a subspace of ℝ3 .
∴, dim(U + W) ≤ 3, i.e., dim U + dim W − dim(U ∩ W) ≤ 3 ,
i.e., dim(U ∩ W) ≥ 1. ∴, U ∩ W ≠ {0}.
E23) dim V = 6 , dim U = 4 = dim W and U ≠ W. Then dim(U + W) ≤ 6
⟹ 4 + 4 − dim(U ∩ W) ≤ 6 ⟹ dim(U ∩ W) ≥ 2 . Also,
dim(U ∩ W) ≤ dim U = 4 ; but dim(U ∩ W) = 4 would force U = U ∩ W = W,
contradicting U ≠ W. ∴, the possible dimensions of U ∩ W are 2 and 3.
E24) Since V + W is a subspace of ℝ4 , dim(V + W) ≤ 4 .
That is, dim V + dim W − dim(V ∩ W) ≤ 4 .

∴, dim(V ∩ W) ≥ 1.

Also V ∩ W is a subspace of W. ∴, dim(V ∩ W) ≤ dim W = 2 .

∴, 1 ≤ dim(V ∩ W) ≤ 2 .
E25) a) Any element of V is v = (a, b, c) with b + 2c = 0.

∴,v = (a, −2c, c) = a (1, 0, 0) + c(0, −2, 1).

∴, a basis of V is {(1, 0, 0) , (0, −2, 1)}.∴,dim V = 2 .


Any element of W is w = (a, b, c) with a + b + c = 0.

∴, w = (a, b, −a − b) = a (1, 0, −1) + b(0, 1, −1)

∴, a basis of W is {(1, 0, −1) , (0, 1, −1)}.

∴,dim W = 2 .

Any element of V ∩ W is x = (a, b, c) with b + 2c = 0 and a + b + c = 0.


∴,x = (a, −2c, c) with a − 2c + c = 0, that is, a = c.
∴, x = (c, −2c, c) = c(1, −2, 1). ∴, a basis of V ∩ W is {(1, −2, 1)}.

∴,dim (V ∩ W) = 1 .

b) dim (V + W) = dim V + dim W − dim (V ∩ W) = 3 . ∴, V + W = ℝ3 .

E26) 0, n.

Block 2

MISCELLANEOUS EXAMPLES AND EXERCISES


The examples and exercises given below cover the concepts and processes
you have studied in this block. Doing them will give you a better understanding
of the concepts concerned, as well as practice in solving such problems.

As we saw in Unit 4, Gaussian elimination is equivalent to reducing the


associated augmented matrix to REF. We look at one more example of
Gaussian elimination.

Example 1: Solve the following system of equations by reducing the matrix


corresponding to the system of equations using Gaussian elimination.

x1 + 2x2 + x3 = 8
x1 − x2 + x3 = 2
x1 + x2 + 3x3 = 8

Solution: The corresponding matrix

1 2 1 8 
1 −1 1 2
1 1 3 8 
 

We saw in Unit 4, Example 4 that the REF of this matrix is

 1 2 1 8
 0 1 0 2
 0 0 1 1
 

Writing down the corresponding equations, we have

x1 + 2x2 + x3 = 8
x2 = 2
x3 = 1

Back substitution gives x3 = 1, x2 = 2, x1 = 3.

***
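The back substitution can be confirmed by solving the square system directly; a sketch, assuming NumPy is available:

```python
import numpy as np

# Coefficient matrix and right-hand side of Example 1.
A = np.array([[1, 2, 1],
              [1, -1, 1],
              [1, 1, 3]], dtype=float)
b = np.array([8, 2, 8], dtype=float)

x = np.linalg.solve(A, b)
print(x)   # [3. 2. 1.]
```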

E1) Solve the following system of equations by reducing the matrix


corresponding to the system of equations to REF.
x1 + x2 + x3 = 3
−x1 + x2 + 3x3 = 4
2x1 + 2x2 + x3 = 5

Example 2: In this example, we will see how to determine the current flow in
an electrical circuit using Kirchhoff’s laws. We begin with the simple circuit in
Fig. 1.


Fig. 1
In the figure, the symbol denotes a voltage source, usually a battery. The
bigger symbol on the right is the positive terminal and the smaller symbol on
the left is the negative terminal. The current always flows from the positive
terminal of the battery to the negative terminal of the battery and this is taken

as the positive direction. The symbol denotes a resistor. There is a
voltage source in the circuit that supplies 30 volts.

There are two resistors of resistance 2 ohms and 3 ohms. When the current
passes through a resistor, there is a drop in voltage. The voltage drop is
governed by Ohm’s law V = IR where V is the voltage, R is the resistance
and I is the current. We also have Kirchhoff’s voltage law, which states the
following:

Kirchhoff’s Voltage Law: In a closed path within an electrical circuit, the


sum of the voltage drop in any direction is equal to the sum of the
voltage sources in the same direction.

In the figure above, there are two voltage drops, one is of 2I and the other is of
3I. Their sum equals the voltage supplied by the battery. The drop in voltage is
2I + 3I = 5I and the voltage supplied is 30 volts. So, 5I = 30 and I, the current
flow, is 30/5 = 6 amperes.

Let us now look at a slightly more complicated situation. Consider the figure
below:

Fig. 2
In this circuit, there are junctions at A and B, indicated by dots. There
are three branches, each with its current flow. The current flow at the
junctions follows another law, called Kirchhoff’s Current Law, which is
as follows:

Kirchhoff’s Current Law: The current flow into any junction is equal
to the current flow out of the junction.

We consider the closed path ABCD. Total voltage supply in this path is
zero. I2 is in a direction opposite to I1. We have

(2 + 2)I1 − 2I2 = 0 or 4I1 − 2I2 = 0. In the closed path AEFB, the voltage


supply is 42 volts. The voltage drop is 2I2 + I3 . Therefore, 2I2 + I3 = 42 .

Finally, at the junction B, the inflow (towards B) is I3 and the outflow


(away from B) is I1 + I2 . By Kirchhoff’s Current Law I1 + I2 = I3 or
I1 + I2 − I3 = 0 . Therefore, we get the equations
4I1 − 2I2 = 0
2I2 + I3 = 42
I1 + I2 − I3 = 0
The corresponding augmented matrix is
 4 −2 0 0 
 0 2 1 42 
 1 1 −1 0 
 
The RREF of this matrix is
 1 0 0 6
0 1 0 12 
0 0 1 18 
 
Therefore I1 = 6, I2 = 12, I3 = 18.
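The three Kirchhoff equations form a square linear system, so the currents can also be obtained with a direct solve; a sketch, assuming NumPy is available:

```python
import numpy as np

# 4*I1 - 2*I2        = 0
#        2*I2 +  I3  = 42
#   I1 +   I2 -  I3  = 0
A = np.array([[4, -2, 0],
              [0, 2, 1],
              [1, 1, -1]], dtype=float)
b = np.array([0, 42, 0], dtype=float)

I = np.linalg.solve(A, b)
print(I)   # [ 6. 12. 18.]
```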

E2) Find the current flow in each branch of the following circuit:

Fig. 3

E3) Find the current flow in each branch of the following circuit:

Fig. 4

Example 3: Solve the following system of homogeneous linear equations:


x1 − x2 − x3 + 2x4 = 0
2x2 + 4x3 − 4x4 = 0
x2 + 2x3 − 2x4 = 0
2x1 + x3 = 0
Solution: The corresponding matrix is
1 −1 −1 2 0
0 2 4 −4 0
0 1 2 −2 0
2 0 
 0 1 0
R4 → R4 − 2R1 gives
1 −1 −1 2 0
0 2 4 −4 0
0 1 2 −2 0
0 2 3 −4 0 

R2 → (1/2)R2 gives
1 −1 −1 2 0
0 1 2 −2 0
0 1 2 −2 0
0 2 3 −4 0 

R1 → R1 + R 2 gives

1 0 1 0 0
0 1 2 −2 0
0 1 2 −2 0
0 2 3 −4 0 

R3 → R3 − R 2 gives
1 0 1 0 0
0 1 2 −2 0
0 0 0 0 0
0 2 3 −4 0 

R4 → R4 − 2R2 gives

1 0 1 0 0
0 1 2 −2 0
0 0 0 0 0
0 0 −1 0 0

The third row is an all zeros row and this is followed by a non-zero row. We
interchange the third and fourth rows to get
1 0 1 0 0
0 1 2 −2 0 
0 0 −1 0 0 
0 0 0 0 0 

R3 → ( −1)R3 gives

1 0 1 0 0
0 1 2 −2 0 
0 0 1 0 0
0 0 0 0 0 

R2 → R 2 − 2R3 gives


1 0 1 0 0
0 1 0 −2 0 
0 0 1 0 0
0 0 0 0 0 

R1 → R1 − R3 gives

 1   0   0   0   0
 0   1   0  -2   0
 0   0   1   0   0
 0   0   0   0   0

The rank is 3 and there are 4 variables. So, there is 4 − 3 = 1 free variable,
namely x4, since the fourth column has no pivot. The corresponding equations
are x1 = 0, x2 − 2x4 = 0 and x3 = 0. Setting x4 = α, the solution set is
{(0, 2α, 0, α) | α ∈ ℝ}.

***
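The whole reduction can also be automated. Here is a small Python sketch (our own, not from the text) of the RREF algorithm over the rationals; applied to the augmented matrix of Example 3 it computes the RREF directly:

```python
from fractions import Fraction

def rref(mat):
    # Reduce a matrix to reduced row echelon form over the rationals.
    m = [[Fraction(x) for x in row] for row in mat]
    pivot_row = 0
    for col in range(len(m[0])):
        # Look for a nonzero entry at or below pivot_row in this column.
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]       # row interchange
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]  # scale the pivot row
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:       # clear the column
                f = m[r][col]
                m[r] = [a - f * p for a, p in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

# Augmented matrix of the homogeneous system in Example 3.
A = [[1, -1, -1, 2, 0],
     [0, 2, 4, -4, 0],
     [0, 1, 2, -2, 0],
     [2, 0, 1, 0, 0]]
for row in rref(A):
    print([str(x) for x in row])
```

The rows printed are the rows of the RREF; the columns without a leading 1 mark the free variables.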
Try the next exercise to check your understanding of the above example.

E 4) Solve the following system of equations:

x1 + x2 + x4 = 0
x1 + x2 + 2x3 + x4 = 0
x1 + x2 + x3 + x4 = 0
x1 + x2 + 4x3 + x4 = 0

In the next example, we will look at one more application of homogeneous
systems of linear equations: balancing chemical equations.

Example 4: You would have studied various chemical reactions in school. For
each reaction, there is a chemical equation. For example, when hydrogen and
oxygen react, we get water. The chemical formula for the hydrogen molecule
is H2 and the chemical formula for oxygen is O2. We represent the reaction by
the equation

2H2 + O2 → 2H2O

Note the factor 2 in H2 and H2O. Without these factors, the number of
hydrogen atoms and oxygen atoms in the reactants and the products would not
be equal. We have added these factors to balance the equation. More often
than not, we can balance a chemical equation by trial and error. However,
some reactions are so complicated that we need to use Linear Algebra
to balance the equations. Consider the reaction of sodium sulfite (Na2SO3)
with nitric acid (HNO3). The products are sodium nitrate (NaNO3), sulphur
dioxide (SO2) and water (H2O). Let us write the reaction in the form

x1Na2SO3 + x2HNO3 → x3NaNO3 + x4SO2 + x5H2O
We now write down the number of atoms of each element in the reactants and
products.

            Reactants      Products
Sodium      2x1            x3
Hydrogen    x2             2x5
Sulphur     x1             x4
Oxygen      3x1 + 3x2      3x3 + 2x4 + x5

Since the number of atoms in products and reactants are the same, we get the
following system of homogeneous equations:

2x1 − x3 = 0
x2 − 2x 5 = 0
x1 − x4 = 0
3x1 + 3x 2 − 3x 3 − 2x 4 − x5 = 0

The associated matrix is

    [ 2   0  -1   0   0 ]
A = [ 0   1   0   0  -2 ]
    [ 1   0   0  -1   0 ]
    [ 3   3  -3  -2  -1 ]

The RREF of the matrix is

     [ 1   0   0   0  -1 ]
A' = [ 0   1   0   0  -2 ]
     [ 0   0   1   0  -2 ]
     [ 0   0   0   1  -1 ]
The column corresponding to x5 is the non-pivot column. So, we take x5 as the
free variable and set x5 = α. The solution is x1 = α, x2 = 2α, x3 = 2α, x4 = α
and x5 = α. So, the solution set is

{α(1,2,2,1,1) | α ∈ ℝ}

Since we want an integer solution, we set α = 1 to get the solution (1,2,2,1,1).
So, the balanced equation is

Na2SO3 + 2HNO3 → 2NaNO3 + SO2 + H2O

We can multiply both sides of the equation by any integer to get infinitely many
solutions, but they are essentially the same.
***
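As a quick sanity check (ours, not the text's), we can count atoms on each side of the balanced equation programmatically. The per-molecule atom dictionaries below are just our bookkeeping:

```python
# Atom counts per molecule (our own bookkeeping for this check).
Na2SO3 = {"Na": 2, "S": 1, "O": 3}
HNO3   = {"H": 1, "N": 1, "O": 3}
NaNO3  = {"Na": 1, "N": 1, "O": 3}
SO2    = {"S": 1, "O": 2}
H2O    = {"H": 2, "O": 1}

def total(side):
    # Sum atom counts over (coefficient, molecule) pairs on one side.
    counts = {}
    for coeff, mol in side:
        for atom, n in mol.items():
            counts[atom] = counts.get(atom, 0) + coeff * n
    return counts

reactants = [(1, Na2SO3), (2, HNO3)]
products  = [(2, NaNO3), (1, SO2), (1, H2O)]
print(total(reactants) == total(products))  # True
```

Every element balances, confirming the coefficients (1, 2, 2, 1, 1) found from the RREF.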
Here is an exercise for you to try.

E 5) Balance the following chemical equation:


C6H12O6 → CO2 + C2H5OH

We saw how to check the consistency of a system of equations in Unit 4. One
application of this is to check whether a vector b ∈ ℝⁿ is in the linear span
of a set of vectors {v1, v2, …, vk} ⊆ ℝⁿ. Let us write

vi = (a1i, a2i, …, ani)ᵗ,  b = (b1, b2, …, bn)ᵗ

Then, b is in the linear span of {v1, v2, …, vk} if and only if there are
α1, α2, …, αk ∈ ℝ such that α1v1 + α2v2 + ⋯ + αkvk = b. This is equivalent to
saying that (α1, α2, …, αk) ∈ ℝᵏ is a solution to the system of equations

α1a11 + α2a12 + ⋯ + αka1k = b1
α1a21 + α2a22 + ⋯ + αka2k = b2
⋮
α1an1 + α2an2 + ⋯ + αkank = bn

We can check whether this system of equations is consistent using the RREF.
If the system is consistent, b is in the linear span of {v1, v2, …, vk}, and
in that case we can also solve for α1, α2, …, αk. Let us now look at some
examples.

Example 5: Check whether the vector b = (0,0,3) is in the linear span of
v1 = (1,1,1), v2 = (1,1,−1) and v3 = (5,3,0).

Solution: The vector b will be in the linear span of {v1, v2, v3} if there are
α1, α2 and α3 such that α1v1 + α2v2 + α3v3 = b. Writing the vectors as
column vectors, b ∈ [{v1, v2, v3}] if and only if the system of equations

α1 + α2 + 5α3 = 0
α1 + α2 + 3α3 = 0
α1 − α2 = 3

is consistent. The associated matrix is

    [ 1   1   5   0 ]
A = [ 1   1   3   0 ]
    [ 1  -1   0   3 ]

The RREF of A is

     [ 1   0   0    3/2 ]
A' = [ 0   1   0   -3/2 ]
     [ 0   0   1    0   ]

There is no row in the RREF which has zeros in all but the last column and a
nonzero entry in the last column. So, the system is consistent. From the
RREF, we get the solution α1 = 3/2, α2 = −3/2 and α3 = 0.
***
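The coefficients found in Example 5 can be verified mechanically. A minimal Python sketch (our own check, using exact fractions):

```python
from fractions import Fraction

v1 = (1, 1, 1)
v2 = (1, 1, -1)
v3 = (5, 3, 0)
b  = (0, 0, 3)

# Coefficients read off from the RREF in Example 5.
alphas = (Fraction(3, 2), Fraction(-3, 2), Fraction(0))

# Form alpha1*v1 + alpha2*v2 + alpha3*v3 component by component.
combo = tuple(sum(a * v[i] for a, v in zip(alphas, (v1, v2, v3)))
              for i in range(3))
print(combo == b)  # True
```

This confirms that (3/2)v1 − (3/2)v2 + 0·v3 = b, so b does lie in the span.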
Here is another example.
Example 6: Check whether the vector b = (1,5,4) is in [S] where
S = {(1,2,1), (−1,1,1), (2,1,0)}.

Solution: Since this is similar to the previous example, we sketch the solution.
The associated matrix is

    [ 1  -1   2   1 ]
A = [ 2   1   1   5 ]
    [ 1   1   0   4 ]

Carrying out the row operations R2 → R2 − 2R1, R3 → R3 − R1, R2 → (1/3)R2
and R3 → R3 − 2R2, we get the matrix

[ 1  -1   2   1 ]
[ 0   1  -1   1 ]
[ 0   0   0   1 ]

In the last row, the entries in all but the last column are zero and
the entry in the last column is 1. So, the associated system of equations is not
consistent. Therefore b is not in [S].
***
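A convenient way to phrase this test: the system is consistent if and only if the coefficient matrix and the augmented matrix have the same rank. Here is a Python sketch for Example 6 (the `rank` helper, computing rank by forward elimination over the rationals, is our own):

```python
from fractions import Fraction

def rank(mat):
    # Rank via forward elimination over the rationals.
    m = [[Fraction(x) for x in row] for row in mat]
    rank_count, pivot_row = 0, 0
    for col in range(len(m[0])):
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        for r in range(pivot_row + 1, len(m)):
            f = m[r][col] / m[pivot_row][col]
            m[r] = [a - f * p for a, p in zip(m[r], m[pivot_row])]
        pivot_row += 1
        rank_count += 1
    return rank_count

# Coefficient columns are the vectors of S; the last column of Ab is b.
A  = [[1, -1, 2], [2, 1, 1], [1, 1, 0]]
Ab = [[1, -1, 2, 1], [2, 1, 1, 5], [1, 1, 0, 4]]
print(rank(A), rank(Ab))  # prints: 2 3
```

Since the ranks differ (2 versus 3), the system is inconsistent and b is not in [S], agreeing with the row reduction above.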
Here is an exercise for you.

E 6) In each of the following cases, check whether b is in [S]. If b is in
[S], write b as a linear combination of elements of S.

a) b = (3,1,0), S = {(1,0,1), (0,1,−1), (0,−1,1)}
b) b = (1,1,5), S = {(1,1,2), (−1,−1,1), (2,2,1)}
c) b = (0,3,1), S = {(1,1,1), (2,1,−1), (−1,0,2)}

Example 7: Show that, for any m × n matrix A, ρ(A) = ρ(Aᵗ).

Solution: Note that RS(A) = CS(Aᵗ). Therefore, we have

ρ(A) = ρr(A) = dim(RS(A)) = dim(CS(Aᵗ)) = ρc(Aᵗ) = ρ(Aᵗ)
***

Next, we consider the problem of checking whether a given set of vectors is
linearly independent or not. If we are working in ℝⁿ, we can use the concept
of the rank of a matrix.

Example 8: Check whether the following subsets of ℝ³ or ℝ⁴ (as the case
may be) are linearly independent or not.

a) {u = (1,0,0), v = (0,0,−5)}
b) {u = (−1,6,−12), v = (1/2,−3,6)}
c) {u = (1,2,3,4), v = (4,3,2,1)}

Solution: a) Let au + bv = 0, a, b ∈ ℝ.

Then, a(1,0,0) + b(0,0,−5) = (0,0,0)
i.e., (a,0,0) + (0,0,−5b) = (0,0,0)
i.e., (a,0,−5b) = (0,0,0)
i.e., a = 0, −5b = 0, i.e., a = 0, b = 0.

Therefore, {u,v} is linearly independent.

b) Let au + bv = 0, a, b ∈ ℝ.

Then (−a, 6a, −12a) + (b/2, −3b, 6b) = (0,0,0)

i.e., −a + b/2 = 0, 6a − 3b = 0, −12a + 6b = 0. Each of these equations is
equivalent to 2a − b = 0, which is satisfied by many non-zero values of a and
b (e.g., a = 1, b = 2).

Hence, {u,v} is linearly dependent.

c) Suppose au + bv = 0, a, b ∈ ℝ. Then

(a + 4b, 2a + 3b, 3a + 2b, 4a + b) = (0,0,0,0)

i.e., a + 4b = 0 …(1)
2a + 3b = 0 …(2)
3a + 2b = 0 …(3)
4a + b = 0 …(4)

Subtracting (2) from (3) we get a − b = 0, i.e., a = b. Putting this in (1), we
have 5b = 0. Therefore b = 0, and so a = b = 0. Hence, {u,v} is linearly
independent.

***
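For two vectors, linear dependence simply means that one vector is a scalar multiple of the other, and this is easy to test in code. A small Python sketch (the helper name is our own), applied to parts (a) and (b) of Example 8:

```python
from fractions import Fraction

def is_multiple(u, v):
    # True iff v = c*u for some scalar c (u assumed nonzero), so that
    # the pair {u, v} is linearly dependent.
    i = next(j for j, x in enumerate(u) if x != 0)
    c = Fraction(v[i]) / Fraction(u[i])
    return all(Fraction(x) == c * Fraction(y) for x, y in zip(v, u))

u_a, v_a = (1, 0, 0), (0, 0, -5)                  # part (a)
u_b, v_b = (-1, 6, -12), (Fraction(1, 2), -3, 6)  # part (b)
print(is_multiple(u_a, v_a))  # False: the pair is independent
print(is_multiple(u_b, v_b))  # True: v = (-1/2)u, so the pair is dependent
```

For three or more vectors this shortcut no longer works, and one falls back on solving the homogeneous system (or computing a rank), as in the examples.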
You know that the set {1, x, x², …, xⁿ} ⊆ P is linearly independent. For
larger and larger n, this set becomes a larger and larger linearly independent
subset of P. This example shows that in the vector space P, we can have as
large a linearly independent set as we wish. In contrast to this situation,
look at the following example, in which more than two vectors are never
linearly independent.

Example 9: Prove that in ℝ², any three vectors form a linearly dependent set.

Solution: Let u = (a1,a2), v = (b1,b2), w = (c1,c2) ∈ ℝ². If any of these is the
zero vector, say u = (0,0), then the linear combination 1.u + 0.v + 0.w of
u, v, w is the zero vector, showing that the set {u,v,w} is linearly dependent.
Therefore, we may suppose that u, v, w are all non-zero.

We wish to prove that there are real numbers α, β, γ, not all zero, such that
αu + βv + γw = 0. That is, αu + βv = −γw. This reduces to the pair of
equations

αa1 + βb1 = −γc1
αa2 + βb2 = −γc2

We can solve this pair of equations for α and β in terms of
a1, a2, b1, b2, c1, c2 and γ iff a1b2 − a2b1 ≠ 0. So, if a1b2 − a2b1 ≠ 0,
we get

α = γ(b1c2 − b2c1)/(a1b2 − a2b1)
β = γ(c1a2 − a1c2)/(a1b2 − a2b1)

Then, we can give γ a non-zero value and get the corresponding values of α
and β. Thus, if a1b2 − a2b1 ≠ 0, we see that {u,v,w} is a linearly dependent
set.

Suppose a1b2 − a2b1 = 0. Then one of a1 and a2 is non-zero, since u ≠ 0.
Similarly, one of b1 and b2 is non-zero. Let us suppose that a1 ≠ 0 and
b1 ≠ 0. Then, since b1a2 = a1b2, observe that

b1(a1,a2) − a1(b1,b2) = (b1a1 − a1b1, b1a2 − a1b2) = (0,0)

i.e., b1u − a1v + 0.w = 0 with a1 ≠ 0, b1 ≠ 0.
Hence, in this case also, {u,v,w} is a linearly dependent set.
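The formulas for α and β in the case a1b2 − a2b1 ≠ 0 can be checked mechanically. In the Python sketch below, the sample vectors u, v, w are our own choice, and we verify that the computed coefficients really give αu + βv + γw = 0:

```python
from fractions import Fraction

def dependency(u, v, w):
    # Coefficients (alpha, beta, gamma) with alpha*u + beta*v + gamma*w = 0,
    # using the formulas derived above (valid when a1*b2 - a2*b1 != 0).
    (a1, a2), (b1, b2), (c1, c2) = u, v, w
    d = a1 * b2 - a2 * b1
    gamma = 1
    alpha = Fraction(gamma * (b1 * c2 - b2 * c1), d)
    beta = Fraction(gamma * (c1 * a2 - a1 * c2), d)
    return alpha, beta, gamma

u, v, w = (1, 2), (3, 1), (4, 7)   # any three nonzero vectors in R^2
alpha, beta, gamma = dependency(u, v, w)
check = (alpha * u[0] + beta * v[0] + gamma * w[0],
         alpha * u[1] + beta * v[1] + gamma * w[1])
print(check)  # (Fraction(0, 1), Fraction(0, 1))
```

The zero result confirms the non-trivial dependency, in line with the argument above.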

Try the following exercises now.


E 7) Check whether each of the following subsets of ℝ³ is linearly
independent.

a) {(1,2,3), (2,3,1), (3,1,2)}
b) {(1,2,3), (2,3,1), (−3,−4,1)}
c) {(−2,7,0), (4,17,2), (5,−2,1)}
d) {(−2,7,0), (4,17,2)}

E 8) Prove that in the vector space of all functions from ℝ to ℝ, the set
{sin x, cos x} is linearly independent, and the set
{sin x, cos x, sin(x + π/6)} is linearly dependent.

E 9) Determine whether each of the following subsets of P is linearly
independent or not.

a) {x², x² + 2}
b) {x² + 1, x² + 11, 2x² − 3}
c) {3, x + 1, x², x² + 2x + 5}
d) {1, x², x³ + x² + 1}

SOLUTIONS/ANSWERS

E 1) The matrix corresponding to the system of equations is

  1   1   1   3
 -1   1   3   4
  2   2   1   5

R2 → R2 + R1 and R3 → R3 − 2R1 give

 1   1   1    3
 0   2   4    7
 0   0  -1   -1

The operations R2 → (1/2)R2 and R3 → (−1)R3 give

 1   1   1   3
 0   1   2   7/2
 0   0   1   1

The corresponding system of equations is

x1 + x2 + x3 = 3
x2 + 2x3 = 7/2
x3 = 1

Back substitution gives x3 = 1, x2 = 3/2 and x1 = 1/2.
E 2) Looking at the closed path ABCD, we get 5I1 − 5I2 = 0 since I1 and I2 are
in opposite directions. In AEFB, the supply is 30 volts, so we get
5I2 + 5I3 = 30. Looking at the junction B, we get I1 + I2 − I3 = 0. To find
I1, I2 and I3, we need to solve the following system of equations:

5I1 − 5I2 = 0
5I2 + 5I3 = 30
I1 + I2 − I3 = 0

Solving, we get I1 = 2, I2 = 2 and I3 = 4.
E 3) One new feature in this problem is that there are voltage sources in both
of the closed paths. Considering AEFB, we get −I2 + 2I3 = 24. Considering the
closed path ABCD, we get 2I1 + I2 = 48. Considering the junction A, we get
I1 − I2 − I3 = 0. So, the equations are

I1 − I2 − I3 = 0
−I2 + 2I3 = 24
2I1 + I2 = 48

Solving, we get I1 = 21, I2 = 6 and I3 = 15.
E 4) The matrix form of the system of equations is

 1   1   0   1   0
 1   1   2   1   0
 1   1   1   1   0
 1   1   4   1   0

R2 → R2 − R1, R3 → R3 − R1 and R4 → R4 − R1 give

 1   1   0   1   0
 0   0   2   0   0
 0   0   1   0   0
 0   0   4   0   0

R2 → (1/2)R2 gives

 1   1   0   1   0
 0   0   1   0   0
 0   0   1   0   0
 0   0   4   0   0

R3 → R3 − R2 and R4 → R4 − 4R2 give

 1   1   0   1   0
 0   0   1   0   0
 0   0   0   0   0
 0   0   0   0   0

x2 and x4 correspond to the columns without pivot elements, so they are the
free variables. The solution is

x1 = −x2 − x4
x3 = 0

Writing α1 = x2 and α2 = x4, we have x1 = −α1 − α2 and x3 = 0. So, the
solution set is {(−α1 − α2, α1, 0, α2) | α1, α2 ∈ ℝ}.
E 5) We have the equation

x1C6H12O6 → x2CO2 + x3C2H5OH

Comparing the number of carbon, hydrogen and oxygen atoms in
the reactants and the products, we get

            Reactants    Products
Carbon      6x1          x2 + 2x3
Hydrogen    12x1         6x3
Oxygen      6x1          2x2 + x3

So, the system of equations is

6x1 − x2 − 2x3 = 0
12x1 − 6x3 = 0
6x1 − 2x2 − x3 = 0

The associated matrix is

    [  6  -1  -2 ]
A = [ 12   0  -6 ]
    [  6  -2  -1 ]

The RREF of A is

     [ 1   0  -1/2 ]
A' = [ 0   1  -1   ]
     [ 0   0   0   ]

The non-pivot column corresponds to x3. Taking x3 = α, the solution is
x1 = α/2, x2 = α and x3 = α, i.e. the solution set is

{α(1/2, 1, 1) | α ∈ ℝ}

Since we need a solution in natural numbers, we take α = 2 and
get the solution x1 = 1, x2 = 2 and x3 = 2. The balanced equation
is

C6H12O6 → 2CO2 + 2C2H5OH
E 6) a) The associated matrix is

 1   0   0   3
 0   1  -1   1
 1  -1   1   0

The row operations R3 → R3 − R1 and R3 → R3 + R2 give the matrix

 1   0   0   3
 0   1  -1   1
 0   0   0  -2

In the last row, the entries in all the columns, except the
last column, are zero and the entry in the last column is −2.
Therefore, the system of equations is inconsistent
and b is not in [S].
b) The associated matrix is

 1  -1   2   1
 1  -1   2   1
 2   1   1   5

The RREF is

 1   0   1   2
 0   1  -1   1
 0   0   0   0

The equations are consistent. The third column is the non-pivot
column, so we take the third variable as the free variable. We get
α1 = 2 − α, α2 = 1 + α and α3 = α. Taking α = 0, we get α1 = 2,
α2 = 1 and α3 = 0. So,

2(1,1,2) + (−1,−1,1) + 0(2,2,1) = (1,1,5)
c) The associated matrix is

 1   2  -1   0
 1   1   0   3
 1  -1   2   1

The row operations R2 → R2 − R1, R3 → R3 − R1 and R3 → R3 − 3R2
give the matrix

 1   2  -1   0
 0  -1   1   3
 0   0   0  -8

In the last row, the entries in all the columns, except the one in
the last column, are zero and the entry in the last column is −8.
So, the associated system of equations is inconsistent.
Therefore b is not in [S].

E 7) a) a(1,2,3) + b(2,3,1) + c(3,1,2) = (0,0,0)

⟹ (a,2a,3a) + (2b,3b,b) + (3c,c,2c) = (0,0,0)
⟹ (a + 2b + 3c, 2a + 3b + c, 3a + b + 2c) = (0,0,0)
⟹ a + 2b + 3c = 0 …(1)
   2a + 3b + c = 0 …(2)
   3a + b + 2c = 0 …(3)

Then (1) + (2) − (3) gives 4b + 2c = 0, i.e., c = −2b. Putting this
value in (1) we get a + 2b − 6b = 0, i.e., a = 4b. Then (2) gives
8b + 3b − 2b = 0, i.e., b = 0. Therefore, a = b = c = 0, and the
given set is linearly independent.

b) a(1,2,3) + b(2,3,1) + c(−3,−4,1) = (0,0,0)

⟹ (a + 2b − 3c, 2a + 3b − 4c, 3a + b + c) = (0,0,0)
⟹ a + 2b − 3c = 0
   2a + 3b − 4c = 0
   3a + b + c = 0

On solving these equations simultaneously you will find that a, b, c
can have many non-zero values, one of them being
a = −1, b = 2, c = 1. Therefore, the given set is linearly dependent.

c) Linearly dependent.

d) Linearly independent.
E 8) To show that {sin x, cos x} is linearly independent, suppose a, b ∈ ℝ are
such that a sin x + b cos x = 0 for all x.

Putting x = 0 in this equation, we get b = 0. Now, take x = π/2.
We get a = 0. Therefore, the set is linearly independent.

Now, consider the equation

a sin x + b cos x + c sin(x + π/6) = 0.

Since sin(x + π/6) = sin x cos(π/6) + cos x sin(π/6)
= (√3/2) sin x + (1/2) cos x, taking a = −√3/2, b = −1/2, c = 1, we get a
non-trivial linear combination of the set {sin x, cos x, sin(x + π/6)}
that equals zero, which shows that this set is linearly dependent.
E 9) a) ax² + b(x² + 2) = 0 ⟹ (a + b)x² + 2b = 0 ⟹ a + b = 0, 2b = 0
⟹ a = 0, b = 0. Therefore, the given set is linearly independent.
b) Linearly dependent because, for example,
−5(x² + 1) + (x² + 11) + 2(2x² − 3) = 0.
c) Linearly dependent.
d) Linearly independent, since a·1 + bx² + c(x³ + x² + 1) = 0 forces
c = 0 (the coefficient of x³), and then b = 0 and a = 0.
