Block 2
SYSTEMS OF LINEAR
EQUATIONS
Structure
4.1 Introduction
    Objectives
4.2 Gaussian Elimination
4.3 Row Operations
4.4 Systems of Linear Equations
4.5 Summary
4.6 Solutions/Answers
4.1 INTRODUCTION
In this block, we continue our study of Linear Algebra with a discussion of the solution of simultaneous linear equations. Solving linear equations efficiently is one of the fundamental problems of Mathematics. Efficient methods for solving systems of linear equations have many applications, including in newer areas like Data Science.
Objectives
After studying this unit, you should be able to:
• explain the Gaussian Elimination procedure;
• reduce a matrix to Row Echelon form or Row Reduced Echelon form using
row operations;
x1 + x2 + x3 = 3 …(1)
x2 − x3 = 0 …(2)
x3 = 1 …(3)
As you may already be aware, not all systems of simultaneous linear equations are triangular. However, the good news is that we can reduce any system of linear equations to a triangular system. This method of reducing a system of simultaneous linear equations to a triangular system and solving it is due to Gauss, hence the name Gaussian elimination. In the next example, we will see how to reduce an arbitrary system of linear equations to a triangular system and solve it.
x1 + x2 + x3 = 4 …(4)
2x1 + 2x2 − x3 = 2 …(5)
2x1 + 4x2 − 2x3 = 2 …(6)
x1 + x2 + x3 = 4 …(7)
−3x3 = −6 …(8)
2x1 + 4x2 − 2x3 = 2 …(9)
By equivalent equations, we mean that both sets of equations have the same set of solutions. We will see later why these two systems of equations are equivalent. So, instead of Eqn. (4), Eqn. (5) and Eqn. (6), we can solve Eqn. (7), Eqn. (8) and Eqn. (9). Note that the second set of equations is simpler than the first set because the second equation in the second set has two fewer variables; x1 and x2 have coefficients 0.
x1 + x2 + x3 = 4 …(10)
2x1 + 4x2 − 2x3 = 2 …(11)
−3x3 = −6 …(12)
We eliminate the variable x1 from the second equation; we multiply Eqn. (10) by 2 and subtract it from the second equation to get
x1 + x2 + x3 = 4 …(13)
2x2 − 4x3 = −6 …(14)
−3x3 = −6 …(15)
Next, we multiply the second equation by 1/2 to get
x1 + x2 + x3 = 4 …(16)
x2 − 2x3 = −3 …(17)
−3x3 = −6 …(18)
Next, we multiply the third equation by −1/3 to get
x1 + x2 + x3 = 4 …(19)
x2 − 2x3 = −3 …(20)
x3 = 2 …(21)
∗∗∗
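The triangular-reduction-plus-back-substitution procedure we just carried out by hand can be sketched in code. The following Python function is an illustrative sketch and not part of the original text (the name gaussian_solve is our own); it applies the same row operations to an augmented copy of the system and then back-substitutes. Here it is applied to the system in Eqns. (4)-(6).

```python
def gaussian_solve(A, b):
    """Solve Ax = b by reduction to triangular form + back substitution."""
    n = len(A)
    # Work on an augmented copy [A | b] so the inputs are not modified.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Choose a row with a (large) nonzero entry in this column and
        # swap it into place -- the row interchange R_i <-> R_j.
        pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot_row] = M[pivot_row], M[col]
        # Eliminate the entries below the pivot: R_j -> R_j - c * R_col.
        for r in range(col + 1, n):
            c = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= c * M[col][k]
    # Back substitution on the triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = gaussian_solve([[1, 1, 1], [2, 2, -1], [2, 4, -2]], [4, 2, 2])
print(x)  # x1 = 1, x2 = 1, x3 = 2
```

The pivot choice here (largest entry in absolute value) is a standard numerical refinement; the hand computation above simply used the first available nonzero entry.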
Before we proceed further, solve the following exercise to see if you have
understood the Gaussian elimination procedure.
Block 2 Vector Spaces-II
E1) Solve the following set of simultaneous equations using Gaussian
elimination:
x1 + 2x2 + x3 = 7
2x1 + x2 − x3 = 5
3x1 − x2 − x3 = 3
In the next section, we will look at the operations we performed in the Gaussian
elimination procedure as matrix operations by writing a system of equations in
matrix form.
x1 + x2 + x3 = 4
2x1 + 2x2 − x3 = 2
2x1 + 4x2 − 2x3 = 2
[1 1 1 ] [x1]   [4]
[2 2 −1] [x2] = [2]
[2 4 −2] [x3]   [2]

[1 1 1 4]
[2 2 −1 2] …(22)
[2 4 −2 2]
The first operation we carried out was to multiply the first equation by 2 and
subtract it from the second equation. Equivalently, we replace the second row
in Eqn. (22) by
[1 1 1 4]
[0 0 −3 −6]
[2 4 −2 2]
Next operation we performed was to interchange the second and third rows.
We denote this operation by R2 ↔ R3 . We get
[1 1 1 4]
[2 4 −2 2]
[0 0 −3 −6]
Next, we multiply the first equation by 2 and subtract it from the second
equation. Again, we denote the operation by R2 → R2 − 2R1 . The matrix form
of the equations is
[1 1 1 4]
[0 2 −4 −6]
[0 0 −3 −6]
Next, we multiply the second row by 1/2 to get
[1 1 1 4]
[0 1 −2 −3]
[0 0 −3 −6]
Next, we multiply the last row by −1/3. We get
[1 1 1 4]
[0 1 −2 −3] …(23)
[0 0 1 2]
Bearing in mind that the first three columns correspond to first three variables
and the last column corresponds to right hand side, we can reconstruct the
equations.
x1 + x2 + x3 = 4
x2 − 2x3 = −3
x3 = 2
As before we can use back substitution to solve these equations.
You may have one doubt: how can we be sure that the row operations don't alter the solutions of the system of equations? One way could be to check that the solutions we get for the modified set of equations are also solutions of the original equations. (Do this for the examples and exercises we have done so far!) Don't you think it would be better to prove that, in general, the row operations do not change the solution set of a system of equations? We will show this in the next proposition.
Before that, we will set up some definitions that will help us in better formulation
as well as the proof of this result.
Definition 1: Let
a11 x1 + a12 x2 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + ⋯ + a2n xn = b2
⋮                                …(∗)
am1 x1 + am2 x2 + ⋯ + amn xn = bm
be a system of linear equations. The set of all (x1, x2, …, xn) that satisfy every equation in (∗) is called the solution set of the system.
In the next proposition we prove that the row operations we carry out for
solving a system of linear equations do not change the solution set of the
system of linear equations.
a11 x1 + ⋯ + a1n xn = b1
a21 x1 + ⋯ + a2n xn = b2
⋮                        …(25)
am1 x1 + ⋯ + amn xn = bm
1. Multiplying the ith equation by a constant a and adding it to the jth equation.
Then, Eqn. (25) and Eqn. (24) have the same set of solutions.
You can skip this proof in a first reading and come back to it later, after going through the entire course once.

Proof: 1. It is clear that interchanging equations doesn't affect the solution set. For example, if (a1, a2, …, an) is a solution to the second equation, it will still be a solution to that equation even if it becomes the third equation.
Unit 4 Systems of Linear Equations
2. Suppose we multiply the ith equation in Eqn. (25) by 𝜆, 𝜆 ≠ 0. The new set of equations will be
a11 x1 + ⋯ + a1n xn = b1
a21 x1 + ⋯ + a2n xn = b2
⋮
𝜆ai1 x1 + ⋯ + 𝜆ain xn = 𝜆bi        …(26)
⋮
am1 x1 + ⋯ + amn xn = bm
Only the ith equations of Eqn. (25) and Eqn. (26) are different. To check
that Eqn. (25) and Eqn. (26) have the same solution, we need to show
that any solution to the ith equation of Eqn. (25) is a solution to Eqn. (26)
and vice versa.
Suppose (a′1, …, a′n) is a solution to the ith equation of Eqn. (26), where 𝜆 ≠ 0. We have
𝜆ai1 a′1 + 𝜆ai2 a′2 + ⋯ + 𝜆ain a′n = 𝜆bi
or
𝜆 (ai1 a′1 + ai2 a′2 + ⋯ + ain a′n − bi) = 0
Since 𝜆 ≠ 0, we have
ai1 a′1 + ai2 a′2 + ⋯ + ain a′n = bi
so (a′1, …, a′n) is also a solution to the ith equation of Eqn. (25). The converse follows by reversing these steps.
3. Suppose we multiply the jth equation by c and add it to the ith equation.
We can always assume that i > j since we saw that reordering the
equations does not alter the solution set. So, the new set of equations is
a11 x1 + ⋯ + a1n xn = b1
a21 x1 + ⋯ + a2n xn = b2
⋮
(ai1 + caj1) x1 + ⋯ + (ain + cajn) xn = bi + cbj        …(27)
⋮
am1 x1 + ⋯ + amn xn = bm
ai1 x1 + ⋯ + ain xn = bi
and
aj1 x1 + ⋯ + ajn xn = bj
which are, respectively, the ith and jth equations of Eqn. (25).
Let us prove part (b). Suppose that (a′1 , … , a′n ) is a solution to the system
in Eqn. (27). In particular, (a′1 , … , a′n ) is a solution to the ith equation in the
system Eqn. (27). In other words,
(ai1 a′1 + ai2 a′2 + ⋯ + ain a′n − bi ) + c (aj1 a′1 + aj2 a′2 + ⋯ + ajn a′n − bj ) = 0 …(28)
Since (a′1, …, a′n) is a solution to the jth equation of the system in Eqn. (27), that is, aj1 a′1 + aj2 a′2 + ⋯ + ajn a′n = bj, the second bracket in Eqn. (28) vanishes. So the first bracket must also vanish, which says that (a′1, …, a′n) satisfies the ith equation of Eqn. (25).
From Proposition 1, it follows that the row operations we carry out on a matrix representing a system of equations don't alter the solution set of the system.
Another question that you may have is the following: when we solve a system of equations using Gaussian elimination, we know when we have reached our goal of reducing the set of equations to a triangular system. When we write the equations in matrix form, how can we tell that we have reduced the equations to triangular form?
To understand this, let us take a close look at the matrix in Eqn. (23). See
Fig. 1.
[1 1 1 4]
[0 1 −2 −3]
[0 0 1 2]
1. The first nonzero entry in each row, from the left, is one and all the entries below this nonzero entry in the column containing it are zero. We have circled the first nonzero entry in the first row.
2. The first nonzero entry in each row, from the left, is to the right of the first
nonzero entry in the previous row.
So, we can stop our row operations and solve the system of equations using back substitution once the matrix representing the system of equations is of the above form. Matrices of the above form are said to be in Row Echelon Form (REF). Echelon means 'steps'. You can see in Fig. 1 that the first nonzero elements in each row form a step-like structure.
1. The first non zero entry in each row is to the right of the first non zero
entry in the previous row. We call the first non zero entry in each row the
leading entry of that row.
2. All entries below the leading entry in the column containing the leading entry are
zero. The position occupied by the leading entry is called the pivot
position. The value of the leading entry is called the pivot and the pivot
should be one. The column containing the leading entry is called the
pivot column.
3. A row with all entries zero is called a zero row. All the zero rows occur after all the nonzero rows.
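The conditions above translate directly into a small mechanical check. The following Python function is an illustrative sketch and not part of the original text (the name is_ref is our own); a matrix is given as a list of rows. Note that the staircase condition already forces the entries below each leading entry to be zero, so it need not be checked separately.

```python
def is_ref(M):
    """Check the Row Echelon Form conditions: zero rows last, each
    leading entry equal to 1 and strictly to the right of the leading
    entry in the previous row."""
    last_pivot_col = -1
    seen_zero_row = False
    for row in M:
        nonzero = [j for j, v in enumerate(row) if v != 0]
        if not nonzero:
            seen_zero_row = True       # zero rows must come last
            continue
        if seen_zero_row:
            return False               # nonzero row after a zero row
        j = nonzero[0]                 # position of the leading entry
        if row[j] != 1:
            return False               # pivot must be 1
        if j <= last_pivot_col:
            return False               # pivot not strictly to the right
        last_pivot_col = j
    return True

print(is_ref([[1, 1, 1, 4], [0, 1, -2, -3], [0, 0, 1, 2]]))  # True
```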
For example, in Fig. 1, the first, second and third columns are pivot columns. The pivots in the first, second and third columns are all 1.
Example 3: Which of the following are in REF? Justify your answer. For the matrices which are in REF, identify the pivots.

i)
[1 2 1 0 0]
[0 0 0 0 0]
[0 2 1 0 0]
[0 0 1 0 0]

ii)
[1 −1 2 1 0]
[0 1 0 0 1]
[0 0 0 1 1]
[0 0 1 0 0]

iii)
[1 2 1 2]
[0 1 1 0]
[0 −1 1 0]
[0 0 1 0]

iv)
[0 1 0 0 2]
[0 0 1 0 3]
[0 0 0 1 0]
[0 0 0 0 0]
Solution:
i) The pivot position is in the first row, first column of the matrix. The pivot is 1. All the entries below the pivot position are zero.
The second row is a zero row and there are nonzero rows after this row. This is not in REF because a row of all zeros appears before nonzero rows.
ii) First row, first column is the pivot position of the first row and the pivot is 1. All the entries below the pivot position are zero.
The pivot position in the second row should be in the second, third, fourth or fifth column. The pivot position is in the second row, second column and the pivot is 1. All the entries below the pivot position are zero.
The pivot position in the third row is in the fourth column. So, the pivot
position in the fourth row must be in the fifth column, but it is in the third
column.
This matrix is not in REF.
iii) The first nonzero entry in the first row is in the first column and all the
other entries below this entry in the first column are 0.
The pivot position in the second row is in second column and the pivot is
1. The entry below the pivot position is −1 and not zero. This is not in REF.
iv) The pivot position in the first row is in the second column and the pivot is 1. All the entries below the pivot position are zero.
The pivot position in the second row has to be in the third, fourth or fifth column. The pivot position is in the third column and the pivot is 1. All the entries below the pivot position are 0.
The pivot position in the third row, which should appear in the fourth or fifth column, is in the fourth column. The pivot is one. Further, all the entries below the pivot position are zero.
The zero row appears below all the nonzero rows. This is in REF.
0 1 0 0 2
0 0 1 0 3
[ ]
0 0 0 1 0
0 0 0 0 0
∗∗∗
E2) Which of the following matrices are in REF? Justify your answer.
i)
[0 1 3 1 0]
[0 0 1 1 0]
[0 0 0 1 0]
[0 0 0 0 1]

ii)
[1 1 0 0 1]
[0 1 2 1 0]
[0 0 0 0 0]
[0 0 0 1 0]

iii)
[1 0 1 2 0]
[0 1 1 −2 1]
[0 0 0 0 1]
[0 0 0 1 0]

iv)
[1 1 0 0 2]
[0 0 1 0 0]
[0 0 0 0 0]
[0 0 1 0 0]
Example 4: Reduce the following matrix to Row Echelon Form:
[1 2 1 8]
[1 −1 1 2]
[1 1 3 8]
Solution: The leftmost nonzero column is the first column. The pivot position in the first row is in the first column. We now carry out the row operations R2 → R2 − R1, R3 → R3 − R1, to get
[1 2 1 8]
[0 −3 0 −6]
[0 −1 2 0]
We now look for the first nonzero entry in the second row. This is in the second row, second column, which is to the right of the previous pivot column, the first column. (We have already found a pivot in the first row.) The second column has the nonzero entry −3. Carrying out the row operation R2 → −(1/3)R2 gives
[1 2 1 8]
[0 1 0 2]
[0 −1 2 0]
Carrying out the row operations R3 → R3 + R2 , gives
1 2 1 8
[0 1 0 2]
0 0 2 2
We now look for the first nonzero column which is to the right of the previous
pivot column, the second column, and has a nonzero entry in the third row. The
third column has the nonzero entry 2 in the third row. We divide the row by 2 to
get
1 2 1 8
[0 1 0 2]
0 0 1 1
There are no entries below the pivot entry. So, there is nothing more to do in
this row and there are no more rows left. The matrix is in Row Echelon Form.
∗∗∗
Example 5: Reduce the following matrix to Row Echelon Form:
[1 1 0 1 2]
[2 2 −1 1 4]
[1 1 0 1 1]
Solution: The first nonzero entry in the first row is in the first column. The pivot is 1. The row operation R2 → R2 + (−2)R1 gives
1 1 0 1 2
[0 0 −1 −1 0]
1 1 0 1 1
The row operation R3 → R3 − R1 gives
[1 1 0 1 2]
[0 0 −1 −1 0]
[0 0 0 0 −1]
Now, all the entries below the pivot in the first column are zero. We move on to the second row. The first nonzero entry in the second row is in the third column and it is −1. This is to the right of the first column, which contains the pivot for the first row. We carry out the row operation R2 → (−1)R2. We get
1 1 0 1 2
[0 0 1 1 0]
0 0 0 0 −1
We now look for a nonzero column to the right of the third column which has a nonzero entry in the third row. The fourth column is nonzero, but the entry in the third row, fourth column is zero. However, the entry in the third row, fifth column is −1, so we choose this column as the next pivot column. Carrying out the row operation R3 → (−1)R3, we get
1 1 0 1 2
[0 0 1 1 0]
0 0 0 0 1
∗∗∗
Try the following exercise to check whether you have understood the method
for reducing a matrix to REF.
E3) Reduce the following matrix to Row Echelon Form:
[1 1 1 3]
[−1 1 3 4]
[2 2 1 5]
x1 + x2 + x3 = 4
x2 − 2x3 = −3
x3 = 2
Our plan now is to reduce these equations to three equations in which the first
equation involves only x1 , the second equation involves only x2 and the third
equation involves only x3 .
x1 + x2 + x3 = 4 …(29)
x2 =1 …(30)
x3 = 2 …(31)
x1 + x2 =2 …(32)
x2 =1 …(33)
x3 = 2 …(34)
x1 = 1
x2 = 1
x3 = 2
Again the matrix form of these equations is
1 0 0 1
[0 1 0 1]
0 0 1 2
Again, the matrix is in a special form. Along with the conditions for REF, the matrix satisfies the following additional condition: in each pivot column, all the entries other than the pivot are zero.
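This extra condition suggests the standard reduction algorithm: pick a pivot column, scale its pivot to 1, and clear every other entry in that column. The following Python function is an illustrative sketch, not part of the original text (the name rref is our own); it uses exact Fraction arithmetic and is applied here to the matrix from the earlier REF example.

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix to Row Reduced Echelon Form using the three
    row operations discussed above."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # Find a nonzero entry in this column at or below pivot_row.
        r = next((i for i in range(pivot_row, rows) if M[i][col] != 0), None)
        if r is None:
            continue                                   # no pivot here
        M[pivot_row], M[r] = M[r], M[pivot_row]        # R_i <-> R_j
        piv = M[pivot_row][col]
        M[pivot_row] = [v / piv for v in M[pivot_row]]  # make pivot 1
        for i in range(rows):
            # Clear every other entry in the pivot column.
            if i != pivot_row and M[i][col] != 0:
                c = M[i][col]
                M[i] = [a - c * b for a, b in zip(M[i], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return [[float(v) for v in row] for row in M]

print(rref([[1, 2, 1, 8], [1, -1, 1, 2], [1, 1, 3, 8]]))
# rows: [1, 0, 0, 3], [0, 1, 0, 2], [0, 0, 1, 1]
```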
The next proposition says that the Row Reduced Echelon Form of a matrix is unique.
Solution:
i) The pivot in the first row is 1 and it is in the first column. The first nonzero entry in the second row is 1. This should be in the second, third, fourth or fifth column, and it is in the second column. Also, all the entries in this column other than the pivot are zero.
However, the pivot in the third row is −1 and not 1. So, this matrix is not in RREF.
ii) The pivot in the first row is one and it is in the first column and all the
remaining entries in the column are zero.
The pivot in the second row should be in the second, third, fourth or fifth
column. It is one and is in the third column. All the other entries in the third
column, the pivot column, are zero.
The pivot in the third row must be in the fourth or fifth column, and it is in the fourth column. However, the entry in the first row of the fourth column is not zero. So this matrix is not in RREF.
iii) The pivot in the first row is 1 and it is in the second column, and all the other entries in that column are zero.
The pivot in the second row should be in the third, fourth, or fifth column, and it is in the third column. However, the pivot is not 1. So the matrix is not in RREF.
iv) The pivot in the first row is 1 and it is in the first column, and all the other
entries in that column are zero.
The pivot in the second row should be in second, third, fourth or fifth
column. It is in the second column and it is one. All the other entries in the
second column are zero.
The pivot in the third row should be one and it should be in the third, fourth
or fifth column. It is one and it is in the fourth column.
All the entries in the fourth row are zero and it appears after all nonzero
rows. Since all the conditions are satisfied the matrix is in RREF.
∗∗∗
Here is an exercise for you to check your understanding of the above example.
E4) Check whether the following matrix is in RREF:
[1 1 0 0 0 2]
[0 0 1 0 0 1]
[0 0 0 0 1 0]
[0 0 0 0 0 0]
Example 7: Solve the following homogeneous system of linear equations:
x1 + x2 + x3 + x4 = 0
x1 + 2x2 + x3 + x4 + x5 = 0
2x1 + x2 + 2x3 + 2x4 − x5 = 0
2x1 + 6x2 + 2x3 + 2x4 + 4x5 = 0
Solution: The augmented matrix form is
[1 1 1 1 0 0]
[1 2 1 1 1 0]
[2 1 2 2 −1 0]
[2 6 2 2 4 0]
The first nonzero entry is in the first column and we choose this as the pivot entry. R2 → R2 − R1 gives
[1 1 1 1 0 0]
[0 1 0 0 1 0]
[2 1 2 2 −1 0]
[2 6 2 2 4 0]
R3 → R3 − 2R1 gives
[1 1 1 1 0 0]
[0 1 0 0 1 0]
[0 −1 0 0 −1 0]
[2 6 2 2 4 0]
R4 → R4 − 2R1 gives
[1 1 1 1 0 0]
[0 1 0 0 1 0]
[0 −1 0 0 −1 0]
[0 4 0 0 4 0]
We look for the pivot entry in the second column in the second, third and fourth
rows. The entry in the second row is nonzero and we choose this as the pivot.
R1 → R1 − R2 gives
[1 0 1 1 −1 0]
[0 1 0 0 1 0]
[0 −1 0 0 −1 0]
[0 4 0 0 4 0]
R3 → R3 + R2 gives
[1 0 1 1 −1 0]
[0 1 0 0 1 0]
[0 0 0 0 0 0]
[0 4 0 0 4 0]
R4 → R4 − R2 gives
[1 0 1 1 −1 0]
[0 1 0 0 1 0]
[0 0 0 0 0 0]
[0 0 0 0 0 0]
This matrix is in Row Reduced Echelon Form. We see that the first and second columns have pivots and there are no pivots in the other columns. We choose
the variables corresponding to the columns without pivots to be free variables.
Writing as equations, we get
x1 = −x3 − x4 + x5
x2 = −x5
[−𝜆1 − 𝜆2 + 𝜆3]      [−1]      [−1]      [ 1]
[     −𝜆3     ]      [ 0]      [ 0]      [−1]
[      𝜆1     ] = 𝜆1 [ 1] + 𝜆2 [ 0] + 𝜆3 [ 0]
[      𝜆2     ]      [ 0]      [ 1]      [ 0]
[      𝜆3     ]      [ 0]      [ 0]      [ 1]
S = { [−1]   [−1]   [ 1]
      [ 0]   [ 0]   [−1]
      [ 1] , [ 0] , [ 0]
      [ 0]   [ 1]   [ 0]
      [ 0]   [ 0]   [ 1] }
∗∗∗
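As a quick sanity check, every vector spanning the solution set should satisfy each equation of the homogeneous system. The following short Python sketch is our own addition, with the coefficient rows read consistently with the row operations shown above:

```python
# Coefficient matrix of the homogeneous system of Example 7.
A = [[1, 1, 1, 1, 0],
     [1, 2, 1, 1, 1],
     [2, 1, 2, 2, -1],
     [2, 6, 2, 2, 4]]

# The three spanning vectors found above.
S = [(-1, 0, 1, 0, 0), (-1, 0, 0, 1, 0), (1, -1, 0, 0, 1)]

# For each vector, compute the left-hand side of every equation.
residuals = [[sum(a * x for a, x in zip(row, v)) for row in A] for v in S]
print(residuals)  # every entry is 0
```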
x1 + x2 + x3 + 3x4 = 0
x1 + x2 + 2x3 + 5x4 = 0
x1 + x2 + 3x3 + 7x4 = 0
[1 1 1 3 0]
[1 1 2 5 0]
[1 1 3 7 0]
In the first column, the entry in the first row is 1, which is nonzero, and we choose this as the pivot entry. R2 → R2 − R1 gives
1 1 1 3 0
[0 0 1 2 0]
1 1 3 7 0
R3 → R3 − R1 gives
1 1 1 3 0
[0 0 1 2 0]
0 0 2 4 0
We now look for a pivot in the second column. We cannot choose the entry in
the first row, second column as the pivot and there are no other nonzero entries
in the second column. So, we look for the pivot in the third column. The entry in
the second row, third column is nonzero and we choose this as the pivot entry.
R3 → R3 − 2R2 gives
1 1 1 3 0
[0 0 1 2 0]
0 0 0 0 0
R1 → R1 − R2 gives
[1 1 0 1 0]
[0 0 1 2 0]
[0 0 0 0 0]
The matrix is in RREF. The second and fourth columns don’t have pivots, so
the free variables are x2 and x4 . We get the equations
x1 + x2 + x4 = 0
x3 + 2x4 = 0
or
x1 = −x2 − x4
x3 = −2x4
{ (−𝜆1 − 𝜆2 , 𝜆1 , −2𝜆2 , 𝜆2 )| 𝜆1 , 𝜆2 ∈ ℝ}
So, writing in the form of a column vector, we can write every solution in the form

   [−1]      [−1]
𝜆1 [ 1] + 𝜆2 [ 0]
   [ 0]      [−2]
   [ 0]      [ 1]
As before, the solution set is S where
S = { [−1]   [−1]
      [ 1]   [ 0]
      [ 0] , [−2]
      [ 0]   [ 1] }
∗∗∗
Proof: The RREF of the matrix associated to the system can have only as
many columns with pivots as the number of rows in the matrix form of the
equations. This is because every row has at most one pivot position. So, the
number of pivot positions cannot be greater than the number of rows. But, the
number of rows in the matrix of the homogeneous system is the number of
equations in the system and the number of columns is the number of variables
in the system. If the number of equations is less than the number of variables,
the matrix associated with the system will have more columns than rows. So,
there will be at least one non-pivot column. This means there is at least one
free variable in the solution. Giving different values to the free variable, we get
infinitely many solutions to the system. ■
x1 + x2 + 2x4 = 0
2x1 + x2 + x4 = 0
x1 + x2 + x3 − x4 = 0
x1 + 2x2 + x3 + 2x4 = 2
x1 + 3x2 + x3 + 3x4 = 3
−x1 + x2 − x3 + x4 = 1
The augmented matrix form is
[1 2 1 2 2]
[1 3 1 3 3]
[−1 1 −1 1 1]
The operations R2 → R2 − R1, R3 → R3 + R1, R1 → R1 − 2R2 and R3 → R3 − 3R2 reduce the matrix to the following RREF
1 0 1 0 0
[0 1 0 1 1]
0 0 0 0 0
(Check this!) There are two non-pivot columns (other than the column
corresponding to the RHS) corresponding to variables x3 and x4 . We choose
them as free variables. Reverting to equations, the system is
x1 + x3 =0
x2 + x4 = 1
(x1, x2, x3, x4) = (−𝜆1, 1 − 𝜆2, 𝜆1, 𝜆2)

                           [−1]      [ 0]   [0]
                           [ 0]      [−1]   [1]
(−𝜆1, 1 − 𝜆2, 𝜆1, 𝜆2) = 𝜆1 [ 1] + 𝜆2 [ 0] + [0]
                           [ 0]      [ 1]   [0]

Let
     [−1]        [ 0]
     [ 0]        [−1]
u1 = [ 1] , u2 = [ 0] , S = {u1, u2} , W = [S]
     [ 0]        [ 1]
x1 + 2x2 + x3 + 2x4 = 0
x1 + 3x2 + x3 + 3x4 = 0
−x1 + x2 − x3 + x4 = 0
We saw in the previous example that the solution set of a system of linear equations is of the form w + v where w is a solution of the corresponding homogeneous system of equations and v is a particular solution of the system. This is true in general. The solution of a linear system of equations (if it exists) is of the form
w + v where w is a solution to the corresponding homogeneous system and v is any solution to the given system. In particular, if the solution space W of the corresponding homogeneous system is not the zero space and the system has one solution, then it has infinitely many solutions. If W = {0}, the system has a unique solution. Try the following exercise now.
x1 − x2 + x3 − x4 = −1
−x1 + 2x2 − x3 + 2x4 = 2
2x1 − x2 + 2x3 − x4 = 0
1 −1 1 −1 −1
[−1 2 −1 2 2 ]
2 −1 2 −1 0
1 0 1 0 0
[0 1 0 1 1]
0 0 0 0 1
x1 + x3 =0 …(35)
x2 + x4 = 1 …(36)
0x1 + 0x2 + 0x3 + 0x4 = 1 …(37)
As you can easily see, this system has no solutions because the LHS of Eqn. (37) will always be zero for any values of x1, x2, x3 and x4, while the RHS is nonzero.
∗∗∗
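The inconsistency we just met, a row reading 0 = nonzero, is easy to detect mechanically once the augmented matrix is reduced. A minimal Python sketch (the function name is our own, not from the original text):

```python
def is_inconsistent(aug):
    """True if some row of the augmented matrix reads 0 = nonzero,
    i.e. the system has no solutions."""
    return any(all(v == 0 for v in row[:-1]) and row[-1] != 0
               for row in aug)

# The reduced matrix from the example above.
print(is_inconsistent([[1, 0, 1, 0, 0],
                       [0, 1, 0, 1, 1],
                       [0, 0, 0, 0, 1]]))  # True
```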
[ 1          r1,k+1 ⋯ r1,n   b1 ]
[    ⋱         ⋮        ⋮    ⋮  ]
[        1   rk,k+1 ⋯ rk,n   bk ]
By giving arbitrary values to the free variables xk+1, xk+2, …, xn, if there is at least one free variable, we can get infinitely many solutions to the system. If there are no free variables, i.e. k = n, the system has a unique solution x1 = b1, x2 = b2, …, xk = bk. In either case there is always a solution and the system is consistent. ■
x1 + x3 = −1
x1 + x2 + x3 + x4 = 2
2x1 + x2 + 2x3 + x4 = 0
x1 − x2 + 2x3 = 1
2x1 + x2 + x3 = 5
x1 + x2 =4
[1 −1 2 1]
[2 1 1 5]
[1 1 0 4]
Carrying out the row operations R2 → R2 − 2R1, R3 → R3 − R1, R2 → (1/3)R2, R1 → R1 + R2 and R3 → R3 − 2R2, we get
[1 0 1 2]
[0 1 −1 1]
[0 0 0 1]
Note that the matrix is not in RREF because the entries above the pivot entry in the fourth column are not zero. However, we can stop here because the last row has zeros in the first three columns and one in the last column. If we translate this to an equation, we get
0x1 + 0x2 + 0x3 = 1
and this equation has no solutions. So, the system of equations is inconsistent.
∗∗∗
We saw that the solution set of the system in Example 7 is spanned by three vectors. Can we span the solution set with just two vectors? How about just a single vector? The answer is 'no'. In fact, the spanning set of the solution set is minimal in the sense that there can't be a smaller spanning set of vectors. In the next unit, we will develop the concepts that will let us prove this fact: the basis of a vector space and the dimension of a vector space.
4.5 SUMMARY
In this unit, we
6. how to find the solution set of a system of linear equations using row
reduction;
4.6 SOLUTIONS/ANSWERS
E1) We number the equations for convenience.
x1 + 2x2 + x3 = 7 …(38)
2x1 + x2 − x3 = 5 …(39)
3x1 − x2 − x3 = 3 …(40)
Multiplying Eqn. (38) by 2 and subtracting from Eqn. (39) we get the
following new set of equations:
x1 + 2x2 + x3 = 7 …(41)
− 3x2 − 3x3 = −9 …(42)
3x1 − x2 − x3 = 3 …(43)
Multiplying Eqn. (41) by three and subtracting from Eqn. (43), we get the
following new set of equations:
x1 + 2x2 + x3 = 7 …(44)
− 3x2 − 3x3 = −9 …(45)
− 7x2 − 4x3 = −18 …(46)
Dividing Eqn. (45) by −3, we get the following new set of equations:
x1 + 2x2 + x3 = 7 …(47)
x2 + x3 = 3 …(48)
− 7x2 − 4x3 = −18 …(49)
Multiplying Eqn. (48) by 7 and adding it to Eqn. (49), we get the following new set of equations:
x1 + 2x2 + x3 = 7 …(50)
x2 + x3 = 3 …(51)
3x3 = 3 …(52)
1
Multiplying Eqn. (52) by , we get
3
x1 + 2x2 + x3 = 7 …(53)
x2 + x3 = 3 …(54)
x3 = 1 …(55)
By back substitution, x3 = 1, x2 = 3 − x3 = 2 and x1 = 7 − 2x2 − x3 = 2.
E2) i) The pivot in the first row appears in the second column and the pivot
is 1. All the entries in the column below the pivot position are zero.
The pivot in the second row, which should be in third, fourth or fifth
column, appears in the third column and the pivot is 1. All the other
entries in the third column below the pivot position are zero.
The pivot in the third row should appear in the fourth or fifth column, and it appears in the fourth column and the pivot is 1. All the other entries in the fourth column below the pivot position are zero.
The pivot in the fourth row should be in the fifth column and it appears
in the fifth column and the pivot is 1.
0 1 3 1 0
0 0 1 1 0
[ ]
0 0 0 1 0
0 0 0 0 1
This matrix is in REF.
ii) The pivot in the first row appears in the first column and all the other
entries in that column are zero.
The pivot in the second row must be in second, third, fourth or fifth
column and it appears in the second column. All the entries in the
second column below the pivot position are zero.
However, the third row is a zero row and it appears before the fourth
row which is not a zero row.
This is not in REF.
iii) The pivot is 1 and it is in the first row, first column, and all the other entries below the pivot position in the first column are zero.
The pivot in the second row should be in the second, third, fourth
column or fifth column and it is the second column. The pivot is one
and all the entries below the pivot are zero.
The pivot in the third row should be in the third, fourth or the fifth column, and it is in the fifth column. The pivot is 1 and all the entries below the pivot are zero.
The fourth row should be a row of zeros since the leading pivot, if it
exists, should be in the sixth column and the matrix has only five
columns. However there is a nonzero entry in the fourth column.
So, the matrix is not in REF.
iv) The leading entry in the first row is in first column, and all the entries
below the leading entry are zero.
In the second row the first nonzero entry is in the third column.
However, the entry in the fifth row, third column is not zero.
So, the matrix is not in REF.
E3) The first column is a nonzero column with a nonzero entry, 1, in the first row. We choose the first column as the pivot column and the entry in the first row as the pivot. Carrying out the row operation R2 → R2 + R1 gives
1 1 1 3
[0 2 4 7]
2 2 1 5
Carrying out the row operation R3 → R3 − 2R1 gives
[1 1 1 3]
[0 2 4 7]
[0 0 −1 −1]
Finally, R2 → (1/2)R2 and R3 → (−1)R3 give
[1 1 1 3]
[0 1 2 7/2]
[0 0 1 1]
which is in REF.
E4) The pivot in the first row is 1 and it is in the first column, and all the other entries in that column are zero.
The pivot in the second row should be in the second, third, fourth or fifth column. It is in the third column and it is one. All the other entries in the third column are zero.
The pivot in the third row should be one and it should be in the third,
fourth or fifth column. It is one and it is in the fifth column.
All the entries in the fourth row are zero and it appears after all nonzero
rows. Since all the conditions are satisfied the matrix is in RREF.
1 1 0 2
[2 1 0 1 ]
1 1 1 −1
R2 → R2 − 2R1 gives
1 1 0 2
[0 −1 0 −3]
1 1 1 −1
R3 → R3 − R1 gives
1 1 0 2
[0 −1 0 −3]
0 0 1 −3
R2 → (−1)R2 gives
1 1 0 2
[0 1 0 3 ]
0 0 1 −3
R1 → R1 − R2 gives
[1 0 0 −1]
[0 1 0 3 ]
[0 0 1 −3]
x1 − x4 = 0
x2 + 3x4 = 0
x3 − 3x4 = 0
We set x4 = 𝜆. Then we get x1 = 𝜆, x2 = −3𝜆 and x3 = 3𝜆, so the solution set is {𝜆(1, −3, 3, 1) ∣ 𝜆 ∈ ℝ}.
[2 2 2 −1 3]
[2 1 2 0 3]
[1 2 1 −1 1]
The operations R1 → R1 /2, R2 → R2 − 2R1 , R3 → R3 − R1 , R2 → −R2 , R1 → R1 − R2 ,
R3 → R3 − R2 , R3 → 2R3 , R1 → R1 − R3 /2, R2 → R2 + R3 reduce the matrix
to the following RREF
1 0 1 0 2
[0 1 0 0 −1]
0 0 0 1 −1
x1 + x3 = 2
x2 = −1
x4 = −1
. Let
−1 2
0 −1
u = [ ],v = [ ],
1 0
0 −1
W = {𝜆u ∣ 𝜆 ∈ ℝ}
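As a quick numerical check of this general solution, here is a Python sketch; the arrays simply restate the coefficient matrix, the right-hand side and the vectors u and v above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above.
A = np.array([[2, 2, 2, -1],
              [2, 1, 2, 0],
              [1, 2, 1, -1]], dtype=float)
b = np.array([3, 3, 1], dtype=float)

# Particular solution v and homogeneous direction u read off from the RREF.
v = np.array([2, -1, 0, -1], dtype=float)
u = np.array([-1, 0, 1, 0], dtype=float)

# Every vector v + lam*u satisfies the system, for any lam.
for lam in (-2.0, 0.0, 1.0, 3.5):
    assert np.allclose(A @ (v + lam * u), b)
```

Since A u = 0 and A v = b, linearity gives A(v + λu) = b for every λ, which is exactly what the loop confirms.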
1 0 1 0 −1
[1 1 1 1 2 ]
2 1 2 1 0
1 −1 1 −1 −1
[0 1 0 1 1]
2 −1 2 −1 0
1 0 1 0 0
[ 0 1 0 1 1]
0 0 0 0 1
In the last row, all the entries except the rightmost entry are zero and the
rightmost entry is 1. So, the system doesn’t have a solution.
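The inconsistency test can be automated. A small sympy sketch, taking the augmented matrix written just above as the system (our reading of which matrix is meant), looks for a pivot in the right-hand-side column:

```python
from sympy import Matrix

# Augmented matrix [A | b] of the system above.
M = Matrix([[1, -1, 1, -1, -1],
            [0,  1, 0,  1,  1],
            [2, -1, 2, -1,  0]])

R, pivots = M.rref()
# A pivot in the last (right-hand-side) column means some row reads
# 0 = 1, so the system is inconsistent.
assert 4 in pivots
```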
UNIT 5
BASES AND DIMENSION
5.1 INTRODUCTION
In Sec. 5.2 we discuss the concept of linear independence of vectors and the
dimension of a vector space. In Sec. 5.3, we introduce you to two of the
important concepts in Linear Algebra, the concepts of basis and dimension of a
vector space. In Sec. 5.4, we will determine the dimensions of some
subspaces. In Sec. 5.5, we will determine the dimension of quotient space.
Objectives
After studying this unit, you should be able to:
𝛼1 v1 + 𝛼2 v2 + ⋯ + 𝛼n vn = 0
for 𝛼i ∈ F, 1 ≤ i ≤ n, then 𝛼i = 0.
𝛼1 v1 + 𝛼2 v2 + ⋯ + 𝛼n vn = 0
Note that in two dimensions, if two vectors v1 and v2 are linearly dependent,
then there are 𝛼1 , 𝛼2 , not both zero, such that 𝛼1 v1 + 𝛼2 v2 = 0. Without loss of
generality, we may assume that 𝛼1 ≠ 0. We then have v1 = −(𝛼2 /𝛼1 )v2 . So, v1 is a
scalar multiple of v2 and hence v1 and v2 are collinear. Thus, in ℝ2 , two vectors
are linearly dependent iff they are collinear.
Solution:
𝛼1 + 𝛼2 + 𝛼3 = 0
𝛼2 + 𝛼3 = 0
𝛼3 = 0
𝛼1 − 𝛼2 + 𝛼3 = 0
𝛼2 + 2𝛼3 = 0
2𝛼1 + 2𝛼2 + 8𝛼3 = 0
𝛼1 + 2𝛼2 + 7𝛼3 = 0
1 0 3 0
0 1 2 0
[ ]
0 0 0 0
0 0 0 0
∗∗∗
Solution: The zero element of this vector space is the zero function, i.e., it is
the function 0 such that 0 (x) = 0 ∀ x ∈ ℝ. So we have to determine a, b ∈ ℝ
such that, ∀ x ∈ ℝ, a sin x + bex = 0.
∗∗∗
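The determination of a and b can be carried out concretely: if a sin x + b eˣ = 0 for every x, it holds in particular at two chosen points, which gives a linear system for (a, b). A short numerical sketch (the evaluation points x = 0 and x = π/2 are our choice):

```python
import numpy as np

# Evaluate a*sin(x) + b*e**x = 0 at x = 0 and x = pi/2.
M = np.array([[np.sin(0.0),       np.exp(0.0)],
              [np.sin(np.pi / 2), np.exp(np.pi / 2)]])

# The matrix is invertible, so a = b = 0 is the only solution:
# sin x and e^x are linearly independent.
assert abs(np.linalg.det(M)) > 1e-9
a, b = np.linalg.solve(M, np.zeros(2))
assert np.allclose((a, b), 0.0)
```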
where m > 0.
Now S is linearly dependent. Therefore, for some scalars 𝛼1 , 𝛼2 , … , 𝛼k , not
all zero, we have
k
∑ 𝛼i ui = 0
i=1
Now, what happens if one of the vectors in a set can be written as a linear
combination of the other vectors in the set? The next theorem states that such
a set is linearly dependent.
Unit 5 Bases and Dimension
Theorem 3: Let S = {v1 , v2 , … , vn } be a subset of a vector space V over field F.
Then S is linearly dependent if and only if some vector of S is a linear
combination of the rest of the vectors of S.
We now prove (ii), which is the converse of (i). Since S is linearly dependent,
there exist 𝛼i ∈ F, not all zero, such that
𝛼1 v1 + 𝛼2 v2 + … + 𝛼n vn = 0.
Now, let us look at the situation in ℝ3 where we know that i and j are linearly
independent. Can you immediately prove whether the set {i, j, (3, 4, 5)} is
linearly independent or not? The following theorem will help you to do this.
𝛼v + 𝛼1 v1 + ⋯ + 𝛼n vn = 0.
Now, if 𝛼 = 0, this implies that there exist scalars 𝛼1 , 𝛼2 , … , 𝛼n , not all zero, such
that
𝛼1 v1 + ⋯ + 𝛼n vn = 0.
v = (−𝛼1 /𝛼)v1 + ⋯ + (−𝛼n /𝛼)vn ,
i.e., v is a linear combination of v1 , v2 , … , vn , i.e., v ∈ [S], which contradicts our
assumption.
Using this theorem we can immediately see that the set {i, j, (3, 4, 5)} is linearly
independent, since (3, 4, 5) is not a linear combination of i and j.
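This criterion is easy to check by computer: a set of n vectors in ℝn is linearly independent iff the matrix having those vectors as rows has rank n. A sympy sketch for {i, j, (3, 4, 5)}:

```python
from sympy import Matrix

# Rows: i, j and (3, 4, 5).  Rank 3 means the set is linearly
# independent; (3, 4, 5) has a nonzero third component, so it
# cannot be a linear combination of i and j.
M = Matrix([[1, 0, 0],
            [0, 1, 0],
            [3, 4, 5]])
assert M.rank() == 3
```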
If you’ve done Exercise 3) you will have found that, by adding a vector to a
linearly independent set, it may not remain linearly independent. Theorem 4
tells us that if, to a linearly independent set, we add a vector which is not in
the linear span of the set, then the augmented set will remain linearly
independent. Thus, the way of generating larger and larger linearly
independent subsets of a non-zero vector space V is as follows:
4. If [S2 ] = V, the process ends. Otherwise, we can find a still larger set S3
which is linearly independent. It is clear that, in this way, we either reach a
set which generates V or we go on getting larger and larger linearly
independent subsets of V.
In the next example, we will give an infinite set that is linearly independent.
Example 3: Prove that the infinite subset S = {1, x, x2 , …}, of the vector space
P of all real polynomials in x, is linearly independent.
Now, suppose
∑ki=1 𝛼i x^(ai) = 0, where 𝛼i ∈ ℝ ∀ i
∗∗∗
Thus, B ⊆ V is a basis of V if B is linearly independent and every vector of V is
a linear combination of a finite number of vectors of B.
(Margin note: lecture by Prof. Strang on basis and dimension.)
You have already seen that we can write every element of ℝ3 as a linear
combination of i = (1, 0, 0), j = (0, 1, 0) and k = (0, 0, 1). We proved in Example
1, that this set is linearly independent. So, {i, j, k} is a basis for ℝ3 . The
following example shows that ℝ2 has more than one basis.
⟹𝛼=𝛽=0
∗∗∗
∗∗∗
E7) Prove that {1, x + 1, x2 + 2x} is a basis of the vector space, P2 , of all
polynomials of degree less than or equal to 2.
We have already mentioned that no proper subset of a basis can generate the
whole vector space. We will now prove another important characteristic of a
basis, namely, no linearly independent subset of a vector space can contain
more vectors than a basis of the vector space. In other words, a basis
contains the maximum possible number of linearly independent vectors. In the
next section, we will discuss the dimensions of some subspaces.
n
w1 = ∑ 𝛼i vi , 𝛼i ∈ F ∀ i = 1, … , n.
i=1
Note that we have been able to replace v1 by w1 in B in such a way that the
new set still generates V. Next, let
S′2 = {w2 , w1 , v2 , v3 , … , vn } .
w2 = 𝛽1 w1 + 𝛽2 v2 + ⋯ + 𝛽n vn , 𝛽i ∈ F ∀ i = 1 , … , n
v2 = (1/𝛽2 )w2 − (𝛽1 /𝛽2 )w1 − (𝛽3 /𝛽2 )v3 − ⋯ − (𝛽n /𝛽2 )vn ,
Now, suppose n < m. Then, after n steps, we will have replaced all the vi 's by
corresponding wi 's and we shall have a set Sn = {wn , wn−1 , … , w2 , w1 } with
[Sn ] = V. But then, this means that wn+1 ∈ V = [Sn ], i.e., wn+1 is a linear
combination of w1 , w2 , … , wn . This implies that the set {w1 , … , wn , wn+1 } is
linearly dependent. This contradicts the fact that {w1 , w2 , … , wm } is linearly
independent. Hence, m ≤ n. ■
Solution: You know that (1,0) and (0,1) form a basis of ℝ2 over ℝ. Thus, to
show that the given set forms a basis, we only have to show that the 2 vectors
in it are linearly independent. For this, consider the equation
𝛼 (1, 4) + 𝛽 (0, 1) = 0, where 𝛼, 𝛽 ∈ ℝ. Then (𝛼, 4𝛼 + 𝛽) = (0, 0) ⟹ 𝛼 = 0, 𝛽 = 0.
∗∗∗
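A quick way to verify such a basis computationally: two vectors form a basis of ℝ2 iff the matrix having them as columns has nonzero determinant. A sketch for {(1, 4), (0, 1)}:

```python
from sympy import Matrix

# Columns are the candidate basis vectors (1, 4) and (0, 1).
# A nonzero determinant means they are linearly independent,
# and 2 independent vectors in R^2 form a basis.
M = Matrix([[1, 0],
            [4, 1]])
assert M.det() != 0
```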
a) Is {u, v + w, w + t, t + u} a basis of V?
b) Is {u, t} a basis of V?
We now give two results that you must always keep in mind when dealing with
vector spaces. They depend on Theorem 5.
Theorem 6: If one basis of a vector space contains n vectors, then all its bases
contain n vectors.
So far we have been saying that “if a vector space has a basis, then …”. Now
we state the following theorem (without proof).
v = 𝛼1 v1 + ⋯ + 𝛼n vn = 𝛽1 v1 + ⋯ + 𝛽n vn .
The coordinates of a vector will depend on the particular basis chosen, as can
be seen in the following example.
Solution:
∗∗∗
Note: The basis B1 = {i, j} has the pleasing property that, for every vector (p, q),
the coordinates of (p, q) relative to B1 are (p, q) itself. For this reason B1 is
called the standard basis of ℝ2 , and the coordinates of a vector relative to the
standard basis are called the standard coordinates of the vector. In fact, this is
the basis we normally use for plotting points in 2-dimensional space.
Solution:
a) Let 2x + 1 = 𝛼 (5) + 𝛽 (3x) = 3𝛽x + 5𝛼.
Then 3𝛽 = 2, 5𝛼 = 1. So, the coordinates of 2x + 1 relative to B are
(1/5, 2/3).
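Finding coordinates relative to a basis is just solving a linear system. A sympy sketch for part a); the matrix columns are the coefficient vectors of 5 and 3x in the basis {1, x}:

```python
from sympy import Rational, Matrix

# 2x + 1 = alpha*5 + beta*(3x): match the constant and x terms.
M = Matrix([[5, 0],
            [0, 3]])
target = Matrix([1, 2])          # 2x + 1 as (constant, x-coefficient)
coords = M.solve(target)
assert coords == Matrix([Rational(1, 5), Rational(2, 3)])
```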
E11) Find a standard basis of ℝ3 and for the vector space P2 of all polynomials
of degree ≤ 2.
E12) For the basis B = {(1, 2, 0) , (2, 1, 0) , (0, 0, 1)} of ℝ3 , find the coordinates of
(−3, 5, 2).
E13) Prove that, for any basis B = {v1 , v2 , … , vn } of a vector space V, the
coordinates of 0 are (0, 0, … , 0).
E14) For the basis B = {3, 2x + 1, x2 − 2} of the vector space P2 of all polynomials
of degree ≤ 2, find the coordinates of
a) 6x + 6 b) (x + 1)2 c) x2
E15) For the basis B = {u, v} of ℝ2 , the coordinates of (1, 0) are (1/2, 1/2) and
the coordinates of (2, 4) are (3, −1). Find u, v.
We now continue the study of vector space by looking into their ‘dimension’, a
concept directly related to the basis of a vector space.
5.3.1 Dimension
So far we have seen that, if a vector space has a basis of n vectors, then every
basis has n vectors in it. Thus, given a vector space, the number of elements in
its different bases remains constant.
∗∗∗
E17) Prove that the real vector space ℂ of all complex numbers has dimension
two.
E18) Prove that the vector space Pn , of all polynomials of degree at most n,
has dimension n + 1.
In the next subsection, we will see how to complete a linearly independent set
in a vector space to a basis of vector space when the dimension of the vector
space is finite.
Proof: Since m < n, W is not a basis of V (Theorem 6). Hence, [W] ≠ V. Thus,
we can find a vector v1 ∈ V such that v1 ∉ [W]. Therefore, by Theorem 4,
W1 = W ∪ {v1 } is a linearly independent set, and it contains m + 1 vectors. If
m + 1 = n, W1 is a linearly independent set with n vectors in the n-dimensional
space V, so W1 is a basis of V (Theorem 5, Corollary 1). That is,
{w1 , … , wm , v1 } is a basis of V. If m + 1 < n, then [W1 ] ≠ V, so there is a
v2 ∈ V such that v2 ∉ [W1 ]. Then W2 = W1 ∪ {v2 } is linearly independent and
contains m + 2 vectors. So, if m + 2 = n, then
[S] = {𝛼(2, 3, 1) |𝛼 ∈ ℝ}
= {(2𝛼, 3𝛼, 𝛼) |𝛼 ∈ ℝ }
Now we have to find v1 ∈ ℝ3 such that v1 ∉ [S], i.e., such that v1 ≠ (2𝛼, 3𝛼, 𝛼)
for any 𝛼 ∈ ℝ. We can take v1 = (1, 1, 1). Then
= {(2𝛼 + 𝛽, 3𝛼 + 𝛽, 𝛼 + 𝛽) |𝛼, 𝛽 ∈ ℝ}
Now select v2 ∈ ℝ3 such that v2 ∉ [S1 ]. We can take v2 = (3, 4, 0). How do we
‘hit upon’ this v2 ? There are many ways. What we have done here is to take
𝛼 = 1 = 𝛽, then 2𝛼 + 𝛽 = 3, 3𝛼 + 𝛽 = 4, 𝛼 + 𝛽 = 2. So (3, 4, 2) belongs to [S1 ]. Then,
by changing the third component from 2 to 0, we get (3, 4, 0), which is not in
[S1 ]. Since v2 ∉ [S1 ] , S1 ∪ {v2 } is linearly independent. That is,
∗∗∗
Note: Since we had a large number of choices for both v1 and v2 , it is obvious
that we could have extended S to get a basis of ℝ3 in many ways.
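The extension procedure can be mechanised: repeatedly append a standard basis vector whenever it increases the rank, i.e. whenever it lies outside the current span (Theorem 4). A sketch, starting from S = {(2, 3, 1)}:

```python
from sympy import Matrix, eye

# Grow S: keep a standard basis vector only if it raises the rank,
# i.e. only if it is outside the span of the current set.
rows = [[2, 3, 1]]
for e in eye(3).tolist():
    if Matrix(rows + [e]).rank() > Matrix(rows).rank():
        rows.append(e)

# We end with 3 independent vectors, hence a basis of R^3.
assert Matrix(rows).rank() == 3 and len(rows) == 3
```

This particular run extends S with standard basis vectors rather than the (1, 1, 1) and (3, 4, 0) chosen in the example, illustrating that many different extensions work.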
Solution: We note that P2 has dimension 3, a basis being {1, x, x2 } (see E19).
So we have to add only one polynomial to S to get a basis of P2 .
This shows that [S] does not contain any polynomials of degree 2. So we can
choose x2 ∈ P2 because x2 ∉ [S]. So S can be extended to {x + 1, 3x + 2, x2 },
which is a basis of P2 . Have you wondered why there is no constant term in
this basis? A constant term is not necessary. Observe that 1 is a linear
combination of x + 1 and 3x + 2, namely, 1 = 3 (x + 1) − 1(3x + 2). So, 1 ∈ [S] and
hence , ∀ 𝛼 ∈ ℝ, 𝛼.1 = 𝛼 ∈ [S].
∗∗∗
E20) Complete S = {(1, 0, 1) , (2, 3, −1)} in two different ways to get two distinct
bases of ℝ3 .
a) S = {2, x2 + x, 3x3 }
b) S = {x2 + 2, x2 − 3x}
to get a basis of P3 .
Theorem 12: Let V be a vector space over a field F such that dim V = n . Let W
be a subspace of V. Then dim W ≤ n .
Proof: Since W is a vector space over F in its own right, it has a basis.
Suppose dim W = m . Then the number of elements in W’s basis is m. These
elements form a linearly independent subset of W, and hence, of V. Therefore,
by Theorem 7, m ≤ n. ■
Remark 2: If W is a subspace of V such that dim W = dim V = n then
W = V, since the basis of W is a set of linearly independent elements in V, and
we can appeal to Corollary 1.
Solution: By Theorem 12, since dim ℝ2 = 2, the only possibilities for dim V
are 0,1 and 2.
V = {𝛼 (𝛽1 , 𝛽2 ) |𝛼 ∈ ℝ} .
∗∗∗
dim V = 1 ⟹ V = { 𝛼(𝛽1 , 𝛽2 , 𝛽3 )| 𝛼 ∈ ℝ}
If 𝛽1 , 𝛽2 , 𝛽3 are all nonzero, let
S = {(x, y, z) ∈ ℝ3 | x/𝛽1 = y/𝛽2 = z/𝛽3 }
The set S is a line through the origin. Let (x1 , y1 , z1 ), (x2 , y2 , z2 ) ∈ S.
Then,
x1 /𝛽1 = y1 /𝛽2 = z1 /𝛽3 and x2 /𝛽1 = y2 /𝛽2 = z2 /𝛽3
Check that
(𝛼x1 + 𝛽x2 )/𝛽1 = (𝛼y1 + 𝛽y2 )/𝛽2 = (𝛼z1 + 𝛽z2 )/𝛽3
If 𝛽1 = 0 and 𝛽2 ≠ 0, 𝛽3 ≠ 0, we let
S = {(x, y, z) ∈ ℝ3 | x = 0, y/𝛽2 = z/𝛽3 }
Then, S is a line through the origin that lies on the yz-plane. Again, we can
show that S is a subspace and
S = V = { 𝛼 (0, 𝛽2 , 𝛽3 )| 𝛼 ∈ ℝ}
If 𝛽2 = 0 and 𝛽1 ≠ 0, 𝛽3 ≠ 0, we let
S = {(x, y, z) ∈ ℝ3 | y = 0, x/𝛽1 = z/𝛽3 }
S = V = { 𝛼 (𝛽1 , 0, 𝛽3 )| 𝛼 ∈ ℝ}
If 𝛽3 = 0 and 𝛽1 ≠ 0, 𝛽2 ≠ 0, we let
S = {(x, y, z) ∈ ℝ3 | z = 0, x/𝛽1 = y/𝛽2 }
S = V = { 𝛼 (𝛽1 , 𝛽2 , 0)| 𝛼 ∈ ℝ}
V = [u1 , u2 ]
𝛼1 x + 𝛼2 y + 𝛼3 z = 0
𝛽1 x + 𝛽2 y + 𝛽3 z = 0
The system has two equations in three unknowns. So, by ??, this has a nonzero
solution (a, b, c). Let
S = { (x, y, z) ∈ ℝ3 | ax + by + cz = 0}
Then, S is a subspace of ℝ3 and u1 , u2 ∈ S. Since S contains a basis of V, it
contains V.
Assuming a ≠ 0, we can write
(x, y, z) = y (−b/a, 1, 0) + z (−c/a, 0, 1)
So, S is spanned by v1 = (−b/a, 1, 0) and v2 = (−c/a, 0, 1). Check that {v1 , v2 } is linearly
independent. Therefore, dim S = 2. Since V ⊆ S and dim V = dim S, it follows
that S = V.
independent. Therefore, dim S = 2. Since V ⊆ S and dim V = dim S, it follows
that S = V.
dim V = 3 ⟹ V = ℝ3 .
∗∗∗
Now let us go further and discuss the dimensions of the sum of subspaces (see
Unit 3). If U and W are subspaces of a vector space V, then so are U + W and
U ∩ W. Thus, all these subspaces have dimensions. We relate these
dimensions in the following theorem.
of U and a basis
of W.
where 𝛼i , 𝛽j , 𝜏k ∈ F ∀ i, j, k.
Then
r m n
∑ 𝛼i vi + ∑ 𝛽j uj = − ∑ 𝜏k wk …(2)
i=1 j=r+1 k=r+1
That is,
r m r
∑ 𝛼i vi + ∑ 𝛽j uj = ∑ 𝛿i vi …(3)
i=1 j=r+1 i=1
and
n r
∑ 𝜏k wk = ∑ 𝛿i vi …(4)
k=r+1 i=1
where 𝛿i ∈ F ∀ i = 1, … , r
∑ 𝛿i vi + ∑ 𝜏k wk = 0,
Thus, ∑ 𝛼i vi + ∑ 𝛽j uj + ∑ 𝜏k wk = 0
⟹ 𝛼i = 0, 𝛽j = 0, 𝜏k = 0 ∀ i, j, k.
So A ∪ B is linearly independent.
∴, A ∪ B is a basis of U + W, and
■
We give a corollary to Theorem 13 now.
⟹ 5 + 4 − dim(U ∩ W) ≤ 7
⟹ dim(U ∩ W) ≥ 2
Thus, dim (U ∩ W) = 2, 3 or 4.
∗∗∗
(a, b, c, d) ∈ V ⟺ b − 2c + d = 0.
⟺ (a, b, c, d) = (a, b, c, 2c − b)
This shows that every vector in V is a linear combination of the three linearly
independent vectors (1, 0, 0, 0) , (0, 1, 0, −1) , (0, 0, 1, 2). Thus, a basis of V is
Hence, dim V = 3 .
Next, (a, b, c, d) ∈ W ⟺ a = d, b = 2c
= a (1, 0, 0, 1) + c (0, 2, 1, 0) ,
and dim W = 2 .
⟺ b − 2c + d = 0, a = d, b = 2c ⟺ (a, b, c, d) = c (0, 2, 1, 0), so dim (V ∩ W) = 1 .
Hence, dim (V + W) = dim V + dim W − dim (V ∩ W) = 3 + 2 − 1 = 4.
∗∗∗
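Theorem 13 can be verified numerically for this example. A sympy sketch; the bases are the ones found above, and dim(V + W) is the rank of the stacked basis vectors:

```python
from sympy import Matrix

# Bases read off above: V = {b - 2c + d = 0}, W = {a = d, b = 2c}.
V = Matrix([[1, 0, 0, 0], [0, 1, 0, -1], [0, 0, 1, 2]])
W = Matrix([[1, 0, 0, 1], [0, 2, 1, 0]])

dim_V, dim_W = V.rank(), W.rank()
# dim(V + W) is the rank of the stacked bases.
dim_sum = Matrix.vstack(V, W).rank()
dim_cap = dim_V + dim_W - dim_sum          # Theorem 13, rearranged
assert (dim_V, dim_W, dim_sum, dim_cap) == (3, 2, 4, 1)
```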
V = {(a, b, c) |b + 2c = 0}
W = {(a, b, c) |a + b + c = 0}
Let us now look at the dimension of a quotient space. Before going further it
may help to revise Sec. 3.5.
We also showed that it is a vector space. Hence, it must have a basis and a
dimension. The following theorem tells us what dim V/W should be.
Theorem 14: If W is a subspace of a finite-dimensional space V, then
dim (V/W) = dim V − dim W.
Then ∑ki=1 𝛼i (vi + W) = W
k
⟹ (∑ 𝛼i vi ) + W = W
i=1
k
⟹ ∑ 𝛼i vi ∈ W
i=1
⟹ ∑ 𝛼i vi − ∑ 𝛽j wj = 0
𝛽j = 0, 𝛼i = 0 ∀ j, i.
Thus,
∑ 𝛼i (vi + W) = W ⟹ 𝛼i = 0 ∀ i.
So B is linearly independent.
Therefore,
v + W = (∑i 𝛼i wi + ∑j 𝛽j vj ) + W
= {(∑i 𝛼i wi ) + W} + {(∑j 𝛽j vj ) + W}
= W + ∑kj=1 𝛽j (vj + W) since ∑i 𝛼i wi ∈ W
= ∑kj=1 𝛽j (vj + W)
So, v + W ∈ [B].
Let us use this theorem to evaluate the dimensions of some familiar quotient
spaces.
Now,
This shows that every element of P4 /P2 is a linear combination of the two
elements (x4 + P2 ) and (x3 + P2 ).
∴, 𝛼x⁴ + 𝛽x³ = ax² + bx + c for some a, b, c ∈ ℝ
⟹ 𝛼 = 0, 𝛽 = 0, a = 0, b = 0, c = 0.
Thus, dim (P4 /P2 ) = 2 . Also dim (P4 ) = 5 , dim (P2 ) = 3, (see E19). Hence
dim (P4 /P2 ) = dim (P4 ) − dim (P2 ) is verified.
∗∗∗
E26) Let V be an n – dimensional real vector space. Find dim (V/V) and
dim V/{0}.
5.6 SUMMARY
In this unit, we
1. defined a linearly independent set and gave examples of linearly
independent and linearly dependent sets of vectors;
4. explained how to determine whether a set forms a basis for a vector space or not;
5.7 SOLUTIONS/ANSWERS
E1) a) Writing the vectors as columns, suppose
−1        1        1     0
𝛼1 [ 1 ] + 𝛼2 [−1] + 𝛼3 [1] = [0]
1         1        0     0
or
−𝛼1 + 𝛼2 + 𝛼3      0
[ 𝛼1 − 𝛼2 + 𝛼3 ] = [0]
𝛼1 + 𝛼2            0
This gives the equations
−𝛼1 + 𝛼2 + 𝛼3 = 0
𝛼1 − 𝛼2 + 𝛼3 = 0
𝛼1 + 𝛼2 = 0
1 0 0 0
[0 1 0 0]
0 0 1 0
There are no non-pivot columns except the last column which doesn’t
correspond to a variable. The solution to the system is 𝛼1 = 0, 𝛼2 = 0
and 𝛼3 = 0. So, the vectors are linearly independent.
b) Writing as column vectors, suppose
1 2 1 0
𝛼1 [3] + 𝛼2 [ 1 ] + 𝛼3 [2] = [0]
2 −1 1 0
or
𝛼1 + 2𝛼2 + 𝛼3 0
[3𝛼1 + 𝛼2 + 2𝛼3 ] = [0] …(5)
2𝛼1 − 𝛼2 + 𝛼3 0
𝛼1 + 2𝛼2 + 𝛼3 = 0
3𝛼1 + 𝛼2 + 2𝛼3 = 0
2𝛼1 − 𝛼2 + 𝛼3 = 0
1 2 1 0
[3 1 2 0]
2 −1 1 0
This reduces to the RREF
1 0 3/5 0
[0 1 1/5 0]
0 0 0 0
The third column has no pivot, so 𝛼3 is a free variable and there are nonzero
solutions. So, the vectors are linearly dependent.
E2) Suppose 𝛼 ∈ F such that 𝛼v = 0. Then, from Unit 3 you know that 𝛼 = 0
or v = 0. But v ≠ 0, so 𝛼 = 0, and {v} is linearly independent.
E3) The set S = {(1, 0) , (0, 1)} is a linearly independent subset of ℝ2 . Now,
suppose ∃ T such that S ⊊ T ⊆ ℝ2 . Let (x, y) ∈ T such that (x, y) ∉ S. Then
we can always find a, b, c ∈ ℝ, not all zero, such that
a (1, 0) + b (0, 1) + c (x, y) = (0, 0). (Take a = −x, b = −y, c = 1, for example.)
∴S ∪ {(x, y)} is linearly dependent. Since this set is contained in T, T is
linearly dependent. The answer to the question in this exercise is ‘No’.
⟹ 𝛽0 + 𝛽1 + ⋯ + 𝛽k = 0, 𝛽1 = 0 = 𝛽2 = ⋯ = 𝛽k
⟹ 𝛽0 = 0 = 𝛽1 = ⋯ = 𝛽k
⟹ T is linearly independent.
Thus, every finite subset of {1, x + 1, x2 + 1, …} is linearly independent.
Therefore, {1, x + 1, x2 + 1, …} is linearly independent.
⟹ (a + d) u + bv + (b + c) w + (c + d) t = 0
E10) You know that {1, x, x2 , x3 } is a basis of P3 , and contains 4 vectors. The
given set contains 6 vectors, and hence, by Theorem 7, it must be linearly
dependent.
Then (1, 0, 0) ∉ [S] and (0, 1, 0) ∉ [S] . ∴{(1, 0, 1) , (2, 3, −1) , (1, 0, 0)} and
{(1, 0, 1) , (2, 3, −1) , (0, 1, 0)} are two distinct bases of ℝ3 .
∴, dim(V ∩ W) ≥ 1.
∴, 1 ≤ dim(V ∩ W) ≤ 2 .
E24) a) Any element of V is v = (a, b, c) with b + 2c = 0.
∴,dim W = 2 .
∴,dim (V ∩ W) = 1 .
E25) 0, n.
UNIT 6
ELEMENTARY MATRICES AND ROW
REDUCTION
6.1 Introduction
Objectives
6.2 Elementary Matrices
6.3 Row Space, Column Space and Null Space of a Matrix
6.4 Summary
6.5 Solutions/Answers
6.1 INTRODUCTION
In the previous units, you have learned about several (mathematical!) objects
like vector spaces, subspaces, linear independence, bases and dimension. In
this unit, the emphasis is on computing some of these objects. The main
technique that we are going to use is row reduction. In Sec. 6.2 we take a
closer look at row reduction. As we will see, row operations on a matrix are
equivalent to multiplying the matrix by certain special matrices called
elementary matrices. We will study the properties of these matrices in
Sec. 6.2. We will also see how to find the inverse of a matrix through row
operations. In Sec. 6.3, we will see how to calculate bases and dimensions of
the subspaces spanned by the columns or rows of a matrix. We introduce you
to an important concept, the rank of a matrix, in this section. Later in this
section, we will see how to find a basis for a subspace of ℝn if we are given a
spanning set of the subspace. We will also see how to find bases for the sum
and intersection of two subspaces of ℝn .
Objectives
After studying this Unit, you will be able to:
• define elementary matrices and relate row operations to them;
• prove that elementary matrices are invertible and give the inverse of an
elementary matrix;
• define the row rank and the column rank of a matrix;
• compute the rank of a matrix;
• state and apply the rank-nullity theorem;
• find the dimension and a basis for a subspace of ℝn given a spanning
set of the subspace;
• find bases for the sum and intersection of two subspaces of ℝn .
Consider the operation of interchanging the ith row and jth row of an m × n
matrix. We can do this by multiplying the matrix on the left by the matrix
obtained by interchanging the ith and jth rows of an m × m identity matrix. For
example, to interchange the second and third rows of the matrix
1 1 1 4
0 0 −3 −6 (1)
2 4 −2 2
we multiply this matrix by the matrix
1 0 0
0 0 1
0 1 0
which is the matrix we get by interchanging the 2nd and 3rd rows of the 3 × 3
identity matrix. Check that
1 0 0 1 1 1 4 1 1 1 4
0 0 1 0 0 −3 −6 = 2 4 −2 2
0 1 0 2 4 −2 2 0 0 −3 −6
We can also look at this from the point of view of looking at Eqn. (1) as the
matrix equation
1 1 1    x1     4
0 0 −3   x2  = −6    (2)
2 4 −2   x3     2
1 0 0
Multiplying both sides of Eqn. (2) by the matrix 0 0 1 , we have
0 1 0
1 0 0   1 1 1    x1     1 0 0    4
0 0 1   0 0 −3   x2  =  0 0 1   −6
0 1 0   2 4 −2   x3     0 1 0    2
or
1 1 1    x1     4
2 4 −2   x2  =  2
0 0 −3   x3    −6
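Here is the same row interchange as a numpy sketch (the matrix names are ours):

```python
import numpy as np

A = np.array([[1, 1,  1,  4],
              [0, 0, -3, -6],
              [2, 4, -2,  2]])

# Identity matrix with rows 2 and 3 interchanged.
E = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])

# Left-multiplying by E swaps rows 2 and 3 of A.
assert np.array_equal(E @ A, A[[0, 2, 1]])
```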
More generally, multiplying the augmented matrix [A | b] on the left by the
m × m matrix obtained from the identity matrix by interchanging its ith and
jth rows gives the matrix obtained from [A | b] by interchanging its ith and
jth rows; all the other rows are unchanged.
Consider the operation of multiplying the ith row of a matrix by a constant
c, Ri → cRi . For this we multiply the matrix corresponding to the set of
equations on the left by the matrix we get by replacing the 1 in the ith row, ith
column of the identity matrix by c. For example, to multiply the last row of the matrix
1 −1 1 2
0 2 −4 −6    (6)
0 0 −3 −6
by −1/3 we multiply this matrix on the left by the matrix
1 0 0
0 1 0
0 0 −1/3
Check that
1 0 0       1 −1 1  2     1 −1  1  2
0 1 0       0 2 −4 −6  =  0  2 −4 −6
0 0 −1/3    0 0 −3 −6     0  0  1  2
If we write Eqn. (6) in the form of a matrix equation, we have
1 −1 1    x1     2
0 2 −4    x2  = −6    (7)
0 0 −3    x3    −6
Multiplying both sides of Eqn. (7) by
1 0 0
0 1 0
0 0 −1/3
we get
1 −1 1    x1     2
0 2 −4    x2  = −6
0 0 1     x3     2
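A numpy sketch of this scaling operation (names are ours):

```python
import numpy as np

A = np.array([[1., -1.,  1.,  2.],
              [0.,  2., -4., -6.],
              [0.,  0., -3., -6.]])

# Identity matrix with the (3, 3) entry replaced by -1/3.
E = np.diag([1.0, 1.0, -1.0 / 3.0])

# Left-multiplying by E multiplies the third row of A by -1/3.
expected = A.copy()
expected[2] *= -1.0 / 3.0
assert np.allclose(E @ A, expected)
assert np.allclose((E @ A)[2], [0., 0., 1., 2.])
```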
More generally, multiplying [A | b] on the left by the matrix obtained from the
identity matrix by replacing the 1 in the ith row, ith column by c multiplies the
ith row of [A | b] by c: the ith row becomes (cai1 , cai2 , … , cain , cbi ) and all the
other rows are unchanged. The same holds for the corresponding matrix
equation Ax = b: the ith equation is multiplied by c.
Consider the operation of multiplying the ith row by a constant c and adding it to
the jth row. For this we multiply on the left by the matrix we get by replacing the zero
entry in the jth row, ith column of the n × n identity matrix by c. For
example, to multiply the first row of the matrix
1 −1 1 2
2 4 −2 2 (11)
0 0 −3 −6
by −2 and add it to the second row, we multiply the matrix by
1 0 0
−2 1 0
0 0 1
Check that
1 0 0 1 −1 1 2 1 −1 1 2
−2 1 0 2 4 −2 2 = 0 6 −4 −2
0 0 1 0 0 −3 −6 0 0 −3 −6
Again, writing Eqn. (11) in the form of a matrix equation, we get
1 −1 1    x1    2
2 4 −2    x2 =  2    (12)
0 0 −3    x3   −6
Multiplying both sides of Eqn. (12) by
1 0 0
−2 1 0
0 0 1
we get
1 −1 1    x1     2
0 6 −4    x2  = −2
0 0 −3    x3    −6
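And a numpy sketch of this row-addition operation:

```python
import numpy as np

A = np.array([[1, -1,  1,  2],
              [2,  4, -2,  2],
              [0,  0, -3, -6]])

# Identity matrix with the (2, 1) entry replaced by -2: on
# left-multiplication this adds -2 times row 1 to row 2.
E = np.array([[ 1, 0, 0],
              [-2, 1, 0],
              [ 0, 0, 1]])

B = E @ A
assert np.array_equal(B[1], [0, 6, -4, -2])    # row 2 changed
assert np.array_equal(B[[0, 2]], A[[0, 2]])    # rows 1 and 3 unchanged
```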
More generally, multiplying [A | b] on the left by the matrix obtained from the
identity matrix by replacing the zero in the jth row, ith column by c replaces the
jth row of [A | b] by
(aj1 + cai1 , aj2 + cai2 , … , ajn + cain , bj + cbi )
and leaves all the other rows unchanged. In terms of the matrix equation
Ax = b, c times the ith equation is added to the jth equation:
a11 x1 + ⋯ + a1n xn = b1
⋮
(aj1 + cai1 )x1 + ⋯ + (ajn + cain )xn = bj + cbi
⋮
am1 x1 + ⋯ + amn xn = bm
Example 1: The first column in the following table gives the order of the matrix
on which one of the row operations is to be performed, the second column
gives the row operation to be performed. Write down the elementary matrix
you will use to perform the row operation.
Table 1
Solution:
1. The matrix is a 4 × 3 matrix. The elementary matrix should have four
columns so that we can multiply the matrix on the left by the elementary
matrix. Since elementary matrices are square matrices, the required
elementary matrix will be a 4 × 4 matrix. Since we want to interchange
the first and third rows of the matrix, the elementary matrix will be the
matrix we get by interchanging the first and third rows of the 4 × 4
identity matrix. So, the matrix is
0 0 1 0
0 1 0 0
1 0 0 0
0 0 0 1
3. The elementary matrix will have order 5 × 5. Since we are multiplying
the fourth row by 1/3, we replace the 1 in the fourth row, fourth column of
the 5 × 5 identity matrix by 1/3. So, the elementary matrix is
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1/3 0
0 0 0 0 1
4. The elementary matrix will have order 4 × 4. We are changing the first
row in the row operation and we want to add 1/5 times the third row to the
first row. So, we change the element in the third column of the first row
of the 4 × 4 identity matrix from 0 to 1/5. So, the required matrix is
1 0 1/5 0
0 1 0 0
0 0 1 0
0 0 0 1
***
Try the next exercise to check your understanding of Example 1.
E 1) The first column in the following table gives the order of the matrix on
which one of the row operations is to be performed, the second column gives
the row operation to be performed. Write down the elementary matrix you will
use to perform the row operation.
Table 2
Eij is an n × n matrix with 1 as the entry in the ith row, jth column and all the
other entries zero. For example, the 3 × 3 matrix unit
0 0 0
E23 = [0 0 1]
0 0 0
We have the following rule for matrix units:
Eij Ekl = 𝛿jk Eil    (13)
In particular,
Eij Eij = 0 if i ≠ j, and Eii Eii = Eii    (14)
2) The n × n elementary matrix that interchanges the ith and jth rows is
given by I − Eii − Ejj + Eij + Eji . For example, the matrix
1 0 0
0 0 1
0 1 0
interchanges the second and third rows.
We have
1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 = 0 1 0 − 0 1 0 − 0 0 0 + 0 0 1 + 0 0 0
0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0
= I − E22 − E33 + E23 + E32 .
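Rule (13) and this decomposition are easy to verify in code. A numpy sketch with a small helper for matrix units (the helper name is ours):

```python
import numpy as np

def unit(i, j, n=3):
    """Matrix unit E_ij: 1 in row i, column j (1-based), zeros elsewhere."""
    E = np.zeros((n, n), dtype=int)
    E[i - 1, j - 1] = 1
    return E

# Rule (13): E_ij E_kl = delta_jk E_il.
assert np.array_equal(unit(1, 2) @ unit(2, 3), unit(1, 3))              # j == k
assert np.array_equal(unit(1, 2) @ unit(3, 1), np.zeros((3, 3), int))   # j != k

# The swap matrix as I - E22 - E33 + E23 + E32.
S = np.eye(3, dtype=int) - unit(2, 2) - unit(3, 3) + unit(2, 3) + unit(3, 2)
assert np.array_equal(S, [[1, 0, 0], [0, 0, 1], [0, 1, 0]])
```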
Recall that an n × n matrix A is invertible if there is an n × n matrix B such
that AB = In , the n × n identity matrix. We denote the inverse matrix of A by
A−1 and write A−1 = B. Note that, if AB = I, we also have BA = I.
The next proposition shows that all the elementary matrices are invertible and
their inverses are also elementary.
c) (I + (c − 1)Eii ) (I + (1/c − 1)Eii ) = I + (1/c − 1)Eii + (c − 1)Eii
+ (c − 1)(1/c − 1)Eii Eii
= I + (1/c − 1 + c − 1 + 1 − c − 1/c + 1) Eii = I.
d) ⟹ a). Let A′ be the RREF of A. We claim that A′ = I. Since row operations
do not change the solution set of a system of homogeneous equations, the only
solution of A′x = 0 is x = 0. So, A′ does not have any zero row; otherwise the
number of variables would be greater than the number of nonzero equations and the
system would have a solution different from the zero vector. This means that
each of the m rows of A′ has a pivot, so there are m pivots. So, each
column has exactly one pivot, which is 1. Because the column containing the
pivot in a row is to the right of the column containing the pivot in the previous
row, the columns of A′ form an m × m identity matrix. So, there are elementary
matrices E1 , E2 , … , Ek such that Ek Ek−1 ⋯ E1 A = A′ = I, or A = E1−1 E2−1 ⋯ Ek−1 , a
product of elementary matrices.
Using Theorem 1, we can find the inverse of a square matrix by row reduction.
Let us see how.
1 −1 0
Example 2: Find the inverse of the matrix 1 1 1 by row reduction.
−1 0 1
1 −1 0 1 0 0
1 1 1 0 1 0
−1 0 1 0 0 1
Row reducing, we get
1 0 0   1/3  1/3  −1/3
0 1 0  −2/3  1/3  −1/3
0 0 1   1/3  1/3   2/3
So, the inverse of the given matrix is
1/3  1/3  −1/3
−2/3  1/3  −1/3
1/3  1/3   2/3
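The whole procedure can be replayed with sympy: row reduce the augmented matrix [A | I] and read off the right block (a sketch, not part of the text):

```python
from sympy import Matrix, Rational, eye

A = Matrix([[ 1, -1, 0],
            [ 1,  1, 1],
            [-1,  0, 1]])

# Row reduce the augmented matrix [A | I]; the right block of the
# RREF is A^(-1).
aug, _ = Matrix.hstack(A, eye(3)).rref()
A_inv = aug[:, 3:]

assert aug[:, :3] == eye(3)          # left block reduced to I
assert A * A_inv == eye(3)           # it really is the inverse
assert A_inv[0, 0] == Rational(1, 3)
```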
1 1 2
E1) Find the inverse of the matrix 0 −1 0 using row reduction.
1 0 1
In the next section, we will consider the vector spaces spanned by the row
vectors of a matrix and the column vectors of a matrix. We will see how to find
their dimensions.
We call the dimension of RS(A) the row rank of A; we denote the row rank of
A by 𝜌r (A). We call the dimension of CS(A) the column rank of A; we denote
the column rank of A by 𝜌c (A). If A is an m × n matrix, then 𝜌r (A) ≤ m and
𝜌c (A) ≤ n.
You may have a question now. How do the row operations affect the row rank
and column rank of a matrix? The answer is that they are unaffected. First, we
investigate what is the effect of a row operation on the row rank of a matrix.
Proof: Let B = (bij )m×m and let u1 , u2 , … , um be the rows of A. Consider BA.
Then, the rows of BA are
u′1 = b11 u1 + b12 u2 + ⋯ + b1m um , u′2 = b21 u1 + b22 u2 + ⋯ + b2m um , … ,
u′m = bm1 u1 + bm2 u2 + ⋯ + bmm um .
Corollary 1: Elementary row operations do not affect the row rank of a matrix.
In particular, if A is an m × n matrix and A′ is its RREF, RS(A) = RS(A′ ) and
𝜌r (A) = 𝜌r (A′ ).
The next proposition tells us how we can calculate the row rank of a matrix by
row reduction.
Proposition 3: Let A be an m × n matrix and suppose A′ = (a′ij ) is the REF of
A. The nonzero rows of A′ form a basis for RS(A). Also, the row rank of A
is the number of nonzero rows in the REF of A.
Proof: We know that RS(A) = RS(A′ ). RS(A′ ) is spanned by its nonzero
rows, since we can always omit zero rows from the spanning set. It remains to
show that the nonzero rows are linearly independent.
Each nonzero row contains a pivot, namely one. Suppose there are k nonzero
rows and the pivots are in columns j1 < j2 < ⋯ < jk . Let u1 , u2 , … , uk be the
nonzero rows and suppose that ∑ki=1 𝛼i ui = 0.
Example 3: Find a basis for the vector space V which is spanned by the
vectors {(−1, 1, 0, 1), (0, 1, 1, 0), (2, 3, 1, 2), (1, −1, 1, 0)}.
Solution: Writing the vectors as the rows of a matrix, the RREF is
1 0 0 −2
0 1 0 −1
[0 0 1 1]
0 0 0 0
The first three nonzero rows (1, 0, 0, −2), (0, 1, 0, −1), (0, 0, 1, 1) form a basis for
V. The dimension of V is three.
***
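E2 can be attacked the same way. A sympy sketch, reading the last vector as (0, 0, 0, 1) (an assumption on our part):

```python
from sympy import Matrix

# The vectors of E2 as rows of a matrix.
M = Matrix([[1, 1,  0, 1],
            [1, 1,  1, 1],
            [2, 2, -2, 2],
            [0, 0,  0, 1]])

R, pivots = M.rref()
# The nonzero rows of the RREF form a basis of the row space.
basis = [R.row(i) for i in range(len(pivots))]
assert M.rank() == 3 and len(basis) == 3
```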
E2) Find a basis for the vector space V spanned by the vectors (1, 1, 0, 1), (1, 1, 1, 1),
(2, 2, −2, 2), (0, 0, 0, 1). What is the dimension of V?
We now discuss the relationship between the column rank of a matrix and the
column rank of its row reduced echelon matrix. We convert a matrix into
RREF by multiplying it on the left by an invertible matrix. What is the effect on
the columns of an m × n matrix A when we multiply A on the left by an
m × m matrix B?
In particular, Σ_{i=1}^{k} αivi = 0 implies Σ_{i=1}^{k} αiBvi = 0.

Proof: We have B(Σ_{i=1}^{k} αivi) = Σ_{i=1}^{k} αiBvi. (See Block 1, Miscellaneous
Exercises, Example 4.) Therefore, if Σ_{i=1}^{k} αivi = w, then
Σ_{i=1}^{k} αiBvi = B(Σ_{i=1}^{k} αivi) = Bw.

Conversely, if B is invertible and Σ_{i=1}^{k} αiBvi = Bw, we have
B(Σ_{i=1}^{k} αivi) = Bw. Multiplying both sides of this equation by B⁻¹, we get
Σ_{i=1}^{k} αivi = w.
Proposition 4: Let A be an m × n matrix, let A′ be its RREF, and suppose A′ has
k non-zero rows, with pivot columns j1 < j2 < … < jk. Then columns j1, j2, …, jk
of A form a basis for CS(A), and ρc(A) = ρc(A′) = k.

Proof: The matrix A′ has the form

a11 a12 … a1n
a21 a22 … a2n
…
ak1 ak2 … akn
0   0   …  0
…
0   0   …  0

That is, in each column vector the last m − k entries are zeros. Therefore, we can
write each column vector as a linear combination of e1, e2, …, ek, where ei is a
column vector of length m with a one in the ith row and 0 in the other rows:

a1i        1         0               0
a2i        0         1               0
…    = a1i … + a2i   …   + … + aki   …
aki        0         0               1
0          0         0               0
…          …         …               …
0          0         0               0
On the other hand, since there are k non-zero rows, there are k pivot entries.
Since no two pivot entries are in the same row or the same column, it follows that
there are k distinct pivot columns. Each pivot column is ej for some j, 1 ≤ j ≤ k,
since each pivot column has one in a particular row and zero in the other rows.
If vj1′, vj2′, …, vjk′, j1 < j2 < … < jk, are the pivot columns in A′, then vj1′ has one in the
first row and zeros in the other rows, vj2′ has 1 in the second row and zeros in
the other rows, etc. So, vji′ = ei for 1 ≤ i ≤ k. Therefore,

{vj1′, vj2′, …, vjk′} = {e1, e2, …, ek}.

Therefore {e1, e2, …, ek} ⊆ CS(A′). So, S[{e1, e2, …, ek}] ⊆ CS(A′) and
therefore CS(A′) = S[{e1, e2, …, ek}] = S[{vj1′, …, vjk′}]. In particular, ρc(A′) = k.
We claim that vj1, …, vjk, the corresponding columns of A, form a basis for
CS(A). Since {vj1′, …, vjk′} is a linearly independent set, by Lemma 2,
{vj1, …, vjk} is also a linearly independent set. We need to show that this set
spans CS(A). Let w ∈ CS(A). Then Bw ∈ CS(A′), therefore there are
α1, α2, …, αk such that Σ_{i=1}^{k} αivji′ = Bw, i.e. Σ_{i=1}^{k} αiBvji = Bw.
Since B is invertible, Σ_{i=1}^{k} αivji = w.

So, CS(A) and CS(A′) have the same dimension, but they are in general different
subspaces of ℝᵐ.
In the next example we will see how to compute a basis for the column space
of a matrix.
Example 4: Find a basis for the column space of the matrix

     −1  1  0  1
A =   0  1  1  0
     −2  3  1  2
      1 −1  1  0

Solution: The RREF of A is

1 0 0 −2
0 1 0 −1
0 0 1  1
0 0 0  0

The first three columns are the pivot columns of the RREF of A. So, by
Proposition 4, the first three columns of A, namely

−1    1    0
 0    1    1
−2 ,  3 ,  1
 1   −1    1

form a basis for CS(A).
***
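The column-space procedure can be checked mechanically too: row reduce A, note the pivot columns, and keep the corresponding columns of the original matrix. A minimal sketch in plain Python with exact `Fraction` arithmetic, taking the matrix here to be the one of the example above (as reconstructed from its displayed RREF):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        if r == nrows:
            break
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[-1, 1, 0, 1],
     [0, 1, 1, 0],
     [-2, 3, 1, 2],
     [1, -1, 1, 0]]
R, pivots = rref(A)
# pick the columns of the ORIGINAL matrix at the pivot positions
col_basis = [[A[i][j] for i in range(len(A))] for j in pivots]
print(pivots)     # → [0, 1, 2]
print(col_basis)  # → [[-1, 0, -2, 1], [1, 1, 3, -1], [0, 1, 1, 1]]
```

Note that the basis vectors come from A itself, not from its RREF; the RREF only tells us which columns to keep.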
E3) Find a basis for the column space of

1 1  0 1
1 1  1 1
2 2 −2 2
0 0  0 1
The important thing to note is that the basis we get is a subset of the spanning
set we were given. This is not true if we write the vectors in the spanning set of
a subspace V as the row vectors of a matrix, row reduce the matrix and
take the non-zero rows of the REF as the basis for V. The next
example illustrates this procedure.
Example 5: Find a basis, which is a subset of S, for the vector space V spanned
by the set S = {(1,−2,1,0), (1,−1,2,0), (0,1,1,0), (0,0,−2,1)}.

Solution: Writing the vectors as the columns of a matrix, we get

      1  1 0  0
A =  −2 −1 1  0
      1  2 1 −2
      0  0 0  1

The RREF of A is

1 0 −1 0
0 1  1 0
0 0  0 1
0 0  0 0

The first, second and the fourth columns are pivot columns. So, writing the
corresponding columns in A as row vectors, we see that
{(1,−2,1,0), (1,−1,2,0), (0,0,−2,1)} is a basis for V.
***
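The procedure of Example 5 — write the spanning vectors as columns, row reduce, keep the vectors at the pivot positions — can be sketched as follows (plain Python, exact `Fraction` arithmetic):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        if r == nrows:
            break
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

S = [(1, -2, 1, 0), (1, -1, 2, 0), (0, 1, 1, 0), (0, 0, -2, 1)]
A = [[v[i] for v in S] for i in range(4)]   # vectors as the COLUMNS of A
_, pivots = rref(A)
basis = [S[j] for j in pivots]              # subset of the original spanning set
print(basis)  # → [(1, -2, 1, 0), (1, -1, 2, 0), (0, 0, -2, 1)]
```

Because the basis is read off from S itself, it is automatically a subset of the given spanning set, which is the whole point of the columns method.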
E4) Find a basis, which is a subset of S, for the vector space spanned by
S = {(2,−2,−1,−2), (3,−1,−1,−2), (1,1,0,0), (−2,1,1,2)}.
We now turn our attention to the third space associated with a matrix: the null
space of a matrix. We begin with the definition of the null space of a matrix.

Note that N(A) is the solution set of the system of homogeneous linear
equations Ax = 0. We have already seen that the solution set of a system of
homogeneous linear equations in n unknowns is a vector subspace of ℝⁿ.
From our definition of N(A) also, if Ab1 = 0 and Ab2 = 0, we have
A(αb1 + βb2) = αAb1 + βAb2 = 0. So, it follows that N(A) is a subspace of
ℝⁿ.
In the next lemma we prove that the null space of a matrix and the null space
of the RREF of the matrix are the same.
The next theorem gives the relationship between the rank and nullity of a
matrix.
Rank-Nullity Theorem: Let A be an m × n matrix. Then rank(A) + nullity(A) = n.

Proof: Let f1, …, fr be a basis of N(A). This is a linearly independent set, which can
be extended to a basis f1, f2, …, fn of ℝⁿ. Then, for any v ∈ ℝⁿ,

v = Σ_{i=1}^{n} αifi  implies  Av = A(Σ_{i=1}^{n} αifi) = Σ_{i=1}^{n} αiAfi.

We have Afi = 0 for 1 ≤ i ≤ r since f1, f2, …, fr ∈ N(A). Therefore

Av = Σ_{i=r+1}^{n} αiAfi.

In other words, Afr+1, Afr+2, …, Afn span CS(A). We claim
that {Afr+1, Afr+2, …, Afn} is a linearly independent set. Suppose there are
αr+1, αr+2, …, αn such that Σ_{i=r+1}^{n} αiAfi = 0. Then A(Σ_{i=r+1}^{n} αifi) = 0,
so Σ_{i=r+1}^{n} αifi ∈ N(A).

Therefore, we have Σ_{i=r+1}^{n} αifi = Σ_{i=1}^{r} βifi for some β1, β2, …, βr.
Therefore Σ_{i=1}^{r} (−βi)fi + Σ_{i=r+1}^{n} αifi = 0. Since f1, f2, …, fn is a linearly
independent set, all the coefficients are zero. So {Afr+1, …, Afn} is a basis of
CS(A), and rank(A) + nullity(A) = (n − r) + r = n.
Example 6: Consider the following system of homogeneous linear equations:

x1 + 2x2 + x3 + x4 + x5 = 0
2x1 + x2 + 2x3 + 2x4 − x5 = 0
2x1 + 6x2 + 2x3 + 2x4 + 4x5 = 0.

The coefficient matrix and the vector of unknowns are

     1 2 1 1  1
A =  2 1 2 2 −1 ,   x = (x1, x2, x3, x4, x5)ᵗ.
     2 6 2 2  4

Since the last column of the augmented matrix associated with the system of
equations is the zero column, it doesn't play any role in the computation of the
RREF. So, the RREF of A is

1 0 1 1 −1
0 1 0 0  1
0 0 0 0  0
The number of non-zero rows in this matrix is 2, so the rank of the matrix is
two. The number of columns is five. Therefore the nullity is 5 − 2 = 3.
The solution set of the system of equations is the null space of A. We have
already seen that the set
     −1    −1     1
      0     0    −1
S =   1  ,  0  ,  0
      0     1     0
      0     0     1
spans the solution set and therefore spans N(A). Since
| S | = dim(N(A)) = nullity of A = 3, it follows from E16), Unit 5, that S is
actually a basis for N(A).
***
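The rank, nullity, and null-space basis in this example can be verified mechanically. The sketch below builds one null-space basis vector per free column, exactly as in the example; the coefficient matrix is taken to be the one that reproduces the displayed RREF and the basis S (third equation read as 2x1 + 6x2 + 2x3 + 2x4 + 4x5 = 0):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        if r == nrows:
            break
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

def null_space_basis(A):
    """One basis vector of N(A) per non-pivot (free) column."""
    R, pivots = rref(A)
    n = len(A[0])
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for r, c in enumerate(pivots):
            v[c] = -R[r][f]          # back-substitute the pivot variables
        basis.append(v)
    return basis

A = [[1, 2, 1, 1, 1], [2, 1, 2, 2, -1], [2, 6, 2, 2, 4]]
N = null_space_basis(A)
rank = len(rref(A)[1])
print(rank, len(N))   # → 2 3   (rank + nullity = 5, the number of columns)
```

Each of the three returned vectors satisfies Av = 0, and together they are the basis S of the example.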
We will now see how to compute a basis for the sum and intersection of two
subspaces of ℝⁿ, if we are given spanning sets of the subspaces.
Suppose v ∈ V1 ∩ V2, say v = p1u1 + … + pkuk = q1v1 + … + qrvr. Then

p1u1 + … + pkuk + (−q1)v1 + … + (−qr)vr = 0.

Therefore, x0 = (p1, …, pk, −q1, …, −qr)ᵗ ∈ N(A). So, writing X1, X2, …, Xm for a
basis of N(A), with Xj = (cj1, …, cjk, −dj1, …, −djr)ᵗ, we have

x0 = Σ_{j=1}^{m} γjXj

for some γ1, …, γm. It follows that pi = Σ_{j=1}^{m} γjcji. Therefore,

v = Σ_{i=1}^{k} piui = Σ_{i=1}^{k} (Σ_{j=1}^{m} γjcji)ui = Σ_{j=1}^{m} γj(Σ_{i=1}^{k} cjiui) = Σ_{j=1}^{m} γjwj.

Therefore, V1 ∩ V2 ⊆ [{w1, w2, …, wm}]. On the other hand, each
wj = Σ_{i=1}^{k} cjiui = Σ_{i=1}^{r} djivi lies in V1 ∩ V2, since Xj ∈ N(A). So,
V1 ∩ V2 = [{w1, w2, …, wm}].
We now summarise the method for finding the sum and intersection of the
subspaces V1 and V2, given a spanning set {u1, u2, …, uk} for V1 and a
spanning set {v1, v2, …, vr} for V2. We form the matrix A whose columns are
u1, …, uk, v1, …, vr. The columns of A corresponding to the pivot columns of
the RREF of A form a basis for V1 + V2. We then find a basis X1, X2, …, Xm
for N(A) and write each basis vector as

Xi = (xi1, xi2, …, xik, −yi1, −yi2, …, −yir)ᵗ.

We then compute wi as

wi = Σ_{j=1}^{k} xijuj  or  wi = Σ_{j=1}^{r} yijvj,

and {w1, w2, …, wm} spans V1 ∩ V2.
Example 7: Let

      0   1          1   0
      0   0         −1   1
V1 =  1 , 0  , V2 = −1 , 0  .
      1   1         −1   1

Find a basis for V1 + V2. Also find a spanning set for V1 ∩ V2.

Solution: We form the matrix

     0 1  1 0
     0 0 −1 1
A =  1 0 −1 0
     1 1 −1 1
The RREF of A is

      1 0 0 −1
A′ =  0 1 0  1
      0 0 1 −1
      0 0 0  0
The first, second and the third columns of A′ are the pivot columns of A′.
So, the first, second and third columns of A form a basis for V1 + V2. Therefore,

0    1    1
0    0   −1
1 ,  0 , −1
1    1   −1

form a basis for V1 + V2.
The fourth column is the only non-pivot column and so we set x4 = α. Then
N(A) is given by x1 − α = 0, x2 + α = 0, x3 − α = 0, x4 = α. So,

N(A) = [{(1, −1, 1, 1)ᵗ}].

Therefore, u1 − u2 = −v1 − v2 = (−1, 0, 1, 0)ᵗ spans V1 ∩ V2.
***
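The method summarised above can be automated directly: put the spanning vectors of V1 and V2 side by side as columns, take pivot columns for V1 + V2, and convert each null-space basis vector into an intersection vector. A sketch (plain Python, exact `Fraction` arithmetic) using the data of Example 7:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        if r == nrows:
            break
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

def null_space_basis(A):
    R, pivots = rref(A)
    n = len(A[0])
    basis = []
    for fcol in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[fcol] = Fraction(1)
        for r, c in enumerate(pivots):
            v[c] = -R[r][fcol]
        basis.append(v)
    return basis

def sum_and_intersection(U, V):
    """U, V: spanning sets (tuples) of V1 and V2."""
    n = len(U[0])
    cols = list(U) + list(V)
    A = [[col[i] for col in cols] for i in range(n)]   # vectors as columns
    _, pivots = rref(A)
    sum_basis = [cols[j] for j in pivots]
    inter_span = []
    for X in null_space_basis(A):       # X = (p1..pk, -q1..-qr)
        w = [sum(X[j] * U[j][i] for j in range(len(U))) for i in range(n)]
        inter_span.append(w)
    return sum_basis, inter_span

U = [(0, 0, 1, 1), (1, 0, 0, 1)]        # spanning set of V1 (Example 7)
V = [(1, -1, -1, -1), (0, 1, 0, 1)]     # spanning set of V2
s, w = sum_and_intersection(U, V)
w_int = [[int(x) for x in vec] for vec in w]   # entries are integral here
print(len(s), w_int)   # → 3 [[-1, 0, 1, 0]]
```

This reproduces the example: dim(V1 + V2) = 3 and (−1, 0, 1, 0) spans the intersection.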
Example 8: Let

      1   1   1           1   1    2
      0   1   0           1   0    0
V1 =  0 , 0 , 1  ,  V2 =  1 , 1 , −1  .
      1   1   1           1   0    0

Find a basis for V1 + V2 and a spanning set for V1 ∩ V2. Is the spanning set a
basis for V1 ∩ V2?
Solution: We form the matrix

     1 1 1 1 1  2
     0 1 0 1 0  0
A =  0 0 1 1 1 −1
     1 1 1 1 0  0

The RREF is

      1 0 0 −1 0  3
      0 1 0  1 0  0
A′ =  0 0 1  1 0 −3
      0 0 0  0 1  2
The pivot columns are the first, second, third and the fifth columns. So, the
corresponding columns in A,

1   1   1   1
0   1   0   0
0 , 0 , 1 , 1
1   1   1   0
form a basis for V1 + V2 .
We compute a basis for N(A). The non-pivot columns are the fourth and sixth
columns. So, we set x4 = α, x6 = β. N(A) is given by

x1 − α + 3β = 0, x2 + α = 0, x3 + α − 3β = 0, x4 = α, x5 + 2β = 0, x6 = β.

So x1 = α − 3β, x2 = −α, x3 = −α + 3β, x4 = α, x5 = −2β, x6 = β, i.e.

x1     α − 3β          1      −3
x2      −α            −1       0
x3    −α + 3β   = α   −1  + β  3
x4       α             1       0
x5     −2β             0      −2
x6       β             0       1
Corresponding to

 1
−1
−1
 1
 0
 0

(since it is convenient to compute in terms of v1 here), we have

w1 = (−1)v1 + 0·v2 + 0·v3 = −v1 = (−1, −1, −1, −1)ᵗ (= u1 − u2 − u3).
Corresponding to

−3
 0
 3
 0
−2
 1

we have

w2 = −3u1 + 3u3 = 0·v1 + 2v2 + (−1)v3 = 2v2 − v3
   = 2(1, 0, 1, 0)ᵗ − (2, 0, −1, 0)ᵗ = (0, 0, 3, 0)ᵗ.
Therefore, (−1, −1, −1, −1)ᵗ and (0, 0, 3, 0)ᵗ span V1 ∩ V2. Check that w1 and w2
are linearly independent and hence form a basis for V1 ∩ V2.
***
E5) Let

      1    0    1         1    2
      1   −1    0         0    0
V1 =  0 , −1 ,  1 ,  V2 = 0 , −1 .
      1    1    0         0    1

Find a basis for V1 + V2 and a generating set for V1 ∩ V2. Is the
spanning set for V1 ∩ V2 a basis for V1 ∩ V2?
The bases can be read off from the row echelon form.
The algorithm is simple to describe. The proof is not difficult, but involves
concepts from the material in the second volume.
We now describe the algorithm. Although the algorithm works for general
vector spaces, we restrict our attention to subspaces of ℝⁿ. Let V1 and V2 be
subspaces of ℝⁿ and suppose the rows of
      a11 a12 … a1n
M1 =  a21 a22 … a2n
      …
      ar1 ar2 … arn
span V1 and the rows of the matrix
      b11 b12 … b1n
M2 =  b21 b22 … b2n
      …
      bk1 bk2 … bkn

span V2. We form the block matrix

     M1  M1
A =
     M2  0

that is, the first r rows of A consist of the rows of M1 written twice side by
side, and the last k rows consist of the rows of M2 followed by n zeros.

We use row operations to get the row echelon form of the matrix. If the row
echelon form is

C  D
0  E

where C has no zero rows, then the rows of C form a basis for V1 + V2 and the
non-zero rows of E form a basis for V1 ∩ V2.
Example 9: Let

      1   1   1          1    0    0
      1   0   0          0    1    0
V1 =  0 , 1 , 0 ,  V2 =  0 , −1 , −1 .
      0   0   1         −1    0    1

Find a basis for V1 + V2 and V1 ∩ V2 using Zassenhaus' algorithm.
Solution: We form the matrices

      1 1 0 0        1 0  0 −1
M1 =  1 0 1 0 , M2 = 0 1 −1  0
      1 0 0 1        0 0 −1  1

and the matrix

                1 1  0  0   1 1 0 0
                1 0  1  0   1 0 1 0
     M1 M1      1 0  0  1   1 0 0 1
A =         =
     M2 0       1 0  0 −1   0 0 0 0
                0 1 −1  0   0 0 0 0
                0 0 −1  1   0 0 0 0
The REF is

1 1  0  0    1   1 0  0
0 1 −1  0    0   1 −1 0
0 0  1 −1    0   0 1 −1
0 0  0  1   1/2  0 0  1
0 0  0  0    0  −1 1  0
0 0  0  0    0   0 1 −1
The set of rows of the 4 × 4 matrix at the top left, {(1, 1, 0, 0), (0, 1, −1, 0),
(0, 0, 1, −1), (0, 0, 0, 1)}, forms a basis for V1 + V2. The set of rows of the 2 × 4
matrix at the bottom right, {(0, −1, 1, 0), (0, 0, 1, −1)}, forms a basis for V1 ∩ V2.
***
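Zassenhaus' algorithm as used in Example 9 can be sketched as follows. Any echelon form works; the Gauss-Jordan reduction used here is one such form, so the exact rows obtained may differ from the hand computation, while the two bases it yields span the same spaces:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        if r == nrows:
            break
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

def zassenhaus(M1, M2):
    """Bases for V1+V2 and V1∩V2 from spanning rows M1, M2."""
    n = len(M1[0])
    A = [list(u) + list(u) for u in M1] + [list(v) + [0] * n for v in M2]
    R, _ = rref(A)
    sum_basis, inter_basis = [], []
    for row in R:
        left, right = row[:n], row[n:]
        if any(left):                  # top-left block: basis of the sum
            sum_basis.append(left)
        elif any(right):               # bottom-right block: basis of the intersection
            inter_basis.append(right)
    return sum_basis, inter_basis

M1 = [(1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)]
M2 = [(1, 0, 0, -1), (0, 1, -1, 0), (0, 0, -1, 1)]
s, i = zassenhaus(M1, M2)
print(len(s), len(i))   # → 4 2
```

The dimensions agree with Example 9: dim(V1 + V2) = 4 and dim(V1 ∩ V2) = 2.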
E6) Let

      1    0         1   1
V1 =  1 , −1 ,  V2 = 0 , 1 .
      1    0         1   0

Find bases for V1 + V2 and V1 ∩ V2 using Zassenhaus' algorithm.
We have reached the end of this unit. In the next section, we will summarise
what we have learned in this unit.
6.4 SUMMARY
In this Unit we
• defined elementary matrices and explained the row operations
corresponding to them;
• proved that elementary matrices are invertible and gave the inverse of
an elementary matrix;
• defined the row rank and the column rank of a matrix;
• saw how to compute the rank of a matrix using row reduction;
• stated the rank-nullity theorem and saw some of its applications;
• saw how to find the dimension and a basis for a subspace of ℝⁿ given a
spanning set of the subspace;
• saw how to find bases for the sum and intersection of two subspaces
of ℝⁿ.
6.5 SOLUTIONS/ANSWERS
E1) The RREF of the augmented matrix [A | I] is

1 0 0   −1 −1  2
0 1 0    0 −1  0
0 0 1    1  1 −1

Therefore,

       −1 −1  2
A⁻¹ =   0 −1  0
        1  1 −1
E2) Writing the vectors as the rows of the matrix

1 1  0 1
1 1  1 1
2 2 −2 2
0 0  0 1

and carrying out the row operations
R2 → R2 − R1, R3 → R3 − 2R1, R3 → R3 + 2R2, R3 ↔ R4, R1 → R1 − R3 gives

1 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0

The first three rows, (1,1,0,0), (0,0,1,0) and (0,0,0,1), form a basis for V.
Since the basis has three elements, dim V = 3.
E3) As we saw in E2), the RREF of this matrix is

1 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0

The pivot columns are the first, third and the fourth columns. So, the
corresponding columns of the given matrix,

1    0    1
1 ,  1 ,  1
2   −2    2
0    0    1

form a basis for the column space of the matrix.
E4) Arranging the vectors as the columns of a matrix, we get

 2  3  1 −2
−2 −1  1  1
−1 −1  0  1
 0 −1 −1  3

Carrying out row operations, we get the RREF

1 0 −1 0
0 1  1 0
0 0  0 1
0 0  0 0

The pivot columns are the first, second and fourth columns. Writing the
corresponding columns as row vectors, the set
{(2, −2, −1, 0), (3, −1, −1, −1), (−2, 1, 1, 3)} forms a basis for V.
E6) We form the matrices

      1  1 1        1 0 1
M1 =  0 −1 0 , M2 = 1 1 0

and the matrix

                1  1 1   1  1 1
     M1 M1      0 −1 0   0 −1 0
A =         =
     M2 0       1  0 1   0  0 0
                1  1 0   0  0 0
The row echelon form is

      1 1 1    1 1  1
      0 1 0    0 1  0
A′ =  0 0 1    1 1  1
      0 0 0   −1 0 −1

The set of rows of the 3 × 3 matrix at the top left, {(1, 1, 1), (0, 1, 0), (0, 0, 1)},
forms a basis for V1 + V2. The set containing the row at the bottom right,
(−1, 0, −1), forms a basis for V1 ∩ V2.
MISCELLANEOUS EXAMPLES AND EXERCISES
The examples and exercises given below cover the concepts and processes
you have studied in this block. Doing them will give you a better understanding
of the concepts concerned, as well as practice in solving such problems.
Example 1: Solve the following system of linear equations by reducing it to a
triangular system:

x1 + 2x2 + x3 = 8
x1 − x2 + x3 = 2
x1 + x2 + 3x3 = 8

Solution: The augmented matrix is

1  2 1 8
1 −1 1 2
1  1 3 8

Row reduction gives

1 2 1 8
0 1 0 2
0 0 1 1

which corresponds to the triangular system

x1 + 2x2 + x3 = 8
x2 = 2
x3 = 1

Back substitution gives x3 = 1, x2 = 2 and x1 = 8 − 2(2) − 1 = 3.

***
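The two stages above — forward elimination to a triangular system, then back substitution — can be sketched in a few lines of Python. The helper below uses exact `Fraction` arithmetic and assumes the system has a unique solution:

```python
from fractions import Fraction

def solve(aug):
    """Solve a square system from its augmented matrix [A | b].
    Assumes A is nonsingular (unique solution)."""
    M = [[Fraction(x) for x in row] for row in aug]
    n = len(M)
    for c in range(n):                           # forward elimination
        pr = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[pr] = M[pr], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):                 # back substitution
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

aug = [[1, 2, 1, 8], [1, -1, 1, 2], [1, 1, 3, 8]]
print([int(v) for v in solve(aug)])   # → [3, 2, 1]
```

This reproduces the solution of Example 1: x1 = 3, x2 = 2, x3 = 1.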
Example 2: In this example, we will see how to determine the current flow in
an electrical circuit using Kirchhoff's laws. We begin with the simple circuit in
Fig. 1.
Fig. 1
In the figure, the battery symbol denotes a voltage source. The
bigger bar on the right is the positive terminal and the smaller bar on
the left is the negative terminal. The current always flows from the positive
terminal of the battery to the negative terminal of the battery, and this is taken
as the direction of the current flow in the circuit.
There are two resistors, of resistance 2 ohms and 3 ohms. When the current
passes through a resistor, there is a drop in voltage. The voltage drop is
governed by Ohm's law V = IR, where V is the voltage, R is the resistance
and I is the current. We also have Kirchhoff's Voltage Law, which states the
following:

Kirchhoff's Voltage Law: In any closed loop of a circuit, the sum of the
voltage drops equals the total voltage supplied in that loop.

In the figure above, there are two voltage drops, one of 2I and the other of
3I. Their sum equals the voltage supplied by the battery. The drop in voltage is
2I + 3I = 5I and the voltage supplied is 30 volts. So, 5I = 30 and I, the current flow,
is 30/5 = 6 amperes.
5
Let us now look at a slightly more complicated situation. Consider the figure
below:
Fig. 2
In this circuit, there are junctions at A and B, indicated by dots. There
are three branches, each with its own current flow. The currents at the
junctions follow another law, called Kirchhoff's Current Law, which is
as follows:

Kirchhoff's Current Law: The current flow into any junction is equal
to the current flow out of the junction.
We consider the closed path ABCD. The total voltage supply in this path is
zero. I2 is in a direction opposite to I1. We have
(2 + 2)I1 − 2I2 = 0, or 4I1 − 2I2 = 0. In the closed path AEFB, the voltage
supply is 42 volts. The voltage drop is 2I2 + I3. Therefore, 2I2 + I3 = 42.
Fig. 3
Fig. 4
1 −1 −1  2 0
0  2  4 −4 0
0  1  2 −2 0
2  0  1  0 0

R4 → R4 − 2R1 gives

1 −1 −1  2 0
0  2  4 −4 0
0  1  2 −2 0
0  2  3 −4 0
R2 → (1/2)R2 gives

1 −1 −1  2 0
0  1  2 −2 0
0  1  2 −2 0
0  2  3 −4 0

R1 → R1 + R2 gives

1 0 1  0 0
0 1 2 −2 0
0 1 2 −2 0
0 2 3 −4 0

R3 → R3 − R2 gives

1 0 1  0 0
0 1 2 −2 0
0 0 0  0 0
0 2 3 −4 0
R4 → R4 − 2R2 gives

1 0  1  0 0
0 1  2 −2 0
0 0  0  0 0
0 0 −1  0 0
The third row is an all-zeros row and it is followed by a non-zero row. We
interchange the third and fourth rows to get

1 0  1  0 0
0 1  2 −2 0
0 0 −1  0 0
0 0  0  0 0

R3 → (−1)R3 gives

1 0 1  0 0
0 1 2 −2 0
0 0 1  0 0
0 0 0  0 0
R2 → R2 − 2R3 gives

1 0 1  0 0
0 1 0 −2 0
0 0 1  0 0
0 0 0  0 0

R1 → R1 − R3 gives

1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
The rank is 3 and there are 4 variables. So, there is 4 − 3 = 1 free variable.
However, the column corresponding to the free variable is zero. So, we get the
unique solution x1 = 0, x2 = 0, x3 = 0, x4 = 0 and x5 = 0.
***
Try the next exercise to check your understanding of the above example.
x1 + x 2 + 2x 3 + x 4 = 0
x1 + x 2 + x 3 + x 4 = 0
x1 + x 2 + 4x 3 + x 4 = 0
Example 4: You would have studied various chemical reactions in school. For
each reaction, there is a chemical equation. For example, when hydrogen and
oxygen react, we get water. The chemical formula for the hydrogen molecule
is H2 and the chemical formula for oxygen is O2. We represent the reaction by
the equation

2H2 + O2 → 2H2O

Note the factor 2 in H2 and H2O. Without these factors, the number of
hydrogen atoms and oxygen atoms in the reactants and the products will not
be equal. We have added these factors to balance the equation. More often
than not, we can balance a chemical equation by trial and error. However,
some reactions are so complicated that we need to use Linear Algebra
to balance the equations. Consider the reaction of sodium sulfite (Na2SO3)
with nitric acid (HNO3). The products are sodium nitrate (NaNO3), sulphur
dioxide (SO2) and water (H2O). Let us write the reaction in the form

x1Na2SO3 + x2HNO3 → x3NaNO3 + x4SO2 + x5H2O
We now write down the number of atoms of each element in the reactants and
products.
Reactants Products
Sodium 2x1 x3
Hydrogen x2 2x5
Sulphur x1 x4
Oxygen 3x1+3x2 3x3+2x4+x5
95
Block 2
Since the number of atoms in products and reactants are the same, we get the
following system of homogeneous equations:
2x1 − x3 = 0
x2 − 2x 5 = 0
x1 − x4 = 0
3x1 + 3x 2 − 3x 3 − 2x 4 − x5 = 0
Solving this system, the solution set is {α(1,2,2,1,1) | α ∈ ℝ}.
Since we want an integer solution, we set α = 1 to get the solution (1,2,2,1,1).
So, the balanced equation is
Na2SO3 + 2HNO3 → 2NaNO3 + SO2 + H2O
We can multiply both sides of the equation by any integer to get infinitely many
solutions, but they are essentially the same.
***
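Balancing an equation this way is exactly a null-space computation. The sketch below sets up the atom-balance matrix of this example and scales the one-dimensional null space to the smallest integer solution; the helper functions mirror the row reduction used throughout this block:

```python
from fractions import Fraction
from math import lcm

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        if r == nrows:
            break
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

def null_space_basis(A):
    R, pivots = rref(A)
    n = len(A[0])
    basis = []
    for fcol in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[fcol] = Fraction(1)
        for r, c in enumerate(pivots):
            v[c] = -R[r][fcol]
        basis.append(v)
    return basis

# atom-balance matrix for x1 Na2SO3 + x2 HNO3 -> x3 NaNO3 + x4 SO2 + x5 H2O
A = [[2, 0, -1, 0, 0],    # sodium:   2x1 = x3
     [0, 1, 0, 0, -2],    # hydrogen: x2 = 2x5
     [1, 0, 0, -1, 0],    # sulphur:  x1 = x4
     [3, 3, -3, -2, -1]]  # oxygen:   3x1 + 3x2 = 3x3 + 2x4 + x5
v = null_space_basis(A)[0]            # the null space is one-dimensional here
m = lcm(*(x.denominator for x in v))  # clear denominators
coeffs = [int(x * m) for x in v]
print(coeffs)   # → [1, 2, 2, 1, 1]
```

The result is the balanced equation of the example: Na2SO3 + 2HNO3 → 2NaNO3 + SO2 + H2O.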
Here is an exercise for you to try.
ρc(Aᵗ) = dim(CS(Aᵗ)) = dim(RS(A)) = ρr(A).

***
Next, we consider the problem of checking whether a given set of vectors is
linearly independent or not. If we are working in ℝⁿ, we can use the concept
of rank of a matrix.
Example 8: Check whether the following subsets of ℝ³ or ℝ⁴ (as the case
may be) are linearly independent or not.

b) Let au + bv = 0, a, b ∈ ℝ. Then

(−a, 6a, −12a) + (b/2, −3b, 6b) = (0, 0, 0),

i.e., −a + b/2 = 0, 6a − 3b = 0, −12a + 6b = 0. Each of these equations is
equivalent to 2a − b = 0, which is satisfied by many non-zero values of a and
b (e.g., a = 1, b = 2). So, the set is linearly dependent.
c) Suppose au + bv = 0, a, b . Then
***
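Checks like the one in part b) amount to computing the rank of the matrix whose rows are the given vectors: the set is linearly independent exactly when the rank equals the number of vectors. A sketch for the vectors of part b), using exact `Fraction` arithmetic:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        if r == nrows:
            break
        pr = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

u = (-1, 6, -12)
v = (Fraction(1, 2), -3, 6)
rank = len(rref([u, v])[1])
print(rank)   # → 1, so {u, v} is linearly dependent (rank < 2 vectors)
```

Here v = −u/2, so the rank collapses to 1, confirming the dependence found by hand.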
You know that the set {1, x, x², …, xⁿ} ⊆ P is linearly independent. For larger
and larger n, this set becomes a larger and larger linearly independent subset of
P. This example shows that in the vector space P, we can have as large a
linearly independent set as we wish. In contrast to this situation, look at the
following example, in which no set of more than two vectors is linearly
independent.
Example 9: Prove that in ℝ², any three vectors form a linearly dependent set.

Solution: Let u = (a1, a2), v = (b1, b2) and w = (c1, c2) be any three vectors in ℝ².
We wish to prove that there are real numbers α, β, γ, not all zero, such that
αu + βv + γw = 0. That is, αu + βv = −γw. This reduces to the pair of
equations

a1α + b1β = −c1γ
a2α + b2β = −c2γ

If a1b2 − a2b1 ≠ 0, then we can give γ a non-zero value and get the corresponding
values of α and β. Thus, if a1b2 − a2b1 ≠ 0, we see that {u, v, w} is a linearly
dependent set.
On the other hand, if a1b2 − a2b1 = 0, then

b1u − a1v = (b1a1, b1a2) − (a1b1, a1b2) = (0, 0),

i.e., b1u − a1v + 0·w = 0, with a1 ≠ 0 or b1 ≠ 0.
Hence, in this case also, {u, v, w} is a linearly dependent set.
E7) Check whether each of the following subsets of ℝ³ is linearly
independent.
a) (1,2,3),(2,3,1),(3,1,2)
b) (1,2,3),(2,3,1),( −3, − 4,1)
c) ( −2,7,0),(4,17,2),(5, − 2,1)
d) ( −2,7,0),(4,17,2)
E8) Prove that in the vector space of all functions from ℝ to ℝ, the set
{sin x, cos x} is linearly independent, and the set
{sin x, cos x, sin(x + π/6)} is linearly dependent.
E9) Determine whether each of the following subsets of P is linearly
independent or not.
a) {x², x² + 2}
b) {x² + 1, x² + 11, 2x² − 3}
c) {3, x + 1, x², x² + 2x + 5}
d) {1, x², x³ + x² + 1}
SOLUTIONS/ANSWERS
The RREF of the coefficient matrix is

1 1 0 1 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0

x2 and x4 are the columns without pivot elements. The solution is

x1 = −x2 − x4
x3 = 0.

Writing α1 = x2, α2 = x4, we have x1 = −α1 − α2, x3 = 0. So, the solution set
is {(−α1 − α2, α1, 0, α2) | α1, α2 ∈ ℝ}.
E5) We have the equation

x1C6H12O6 → x2CO2 + x3C2H5OH

Comparing the number of carbon, hydrogen and oxygen atoms in
the reactants and the product, we get

          Reactants    Products
Carbon    6x1          x2 + 2x3
Hydrogen  12x1         6x3
Oxygen    6x1          2x2 + x3

So, the system of equations is

6x1 − x2 − 2x3 = 0
12x1 − 6x3 = 0
6x1 − 2x2 − x3 = 0

The associated matrix is

      6 −1 −2
A =  12  0 −6
      6 −2 −1

The RREF of A is

      1 0 −1/2
A′ =  0 1 −1
      0 0  0

The non-pivot column corresponds to x3. Taking x3 = α, the
solution set is x1 = α/2, x2 = α and x3 = α, i.e.

{α(1/2, 1, 1) | α ∈ ℝ}.

Since we need a solution in natural numbers, we take α = 2 and
get the solution x1 = 1, x2 = 2 and x3 = 2. The balanced equation
is

C6H12O6 → 2CO2 + 2C2H5OH
E6) a) The associated matrix is

1 1  0 3
0 1 −1 1
1 0  1 0

The row operations R3 → R3 − R1, R3 → R3 + R2 give the matrix

1 1  0  3
0 1 −1  1
0 0  0 −2

In the last row, the entries in all the columns, except the
last column, are zero and the last entry is −2.
Therefore, the system of equations is inconsistent
and b is not in [S].
b) The associated matrix is

1 −1 2 1
1 −1 2 1
2  1 1 5

The RREF is

1 0  1 2
0 1 −1 1
0 0  0 0

The equations are consistent. The third column is
the non-pivot column, so we take the third
variable as the free variable. We get α1 = 2 − α,
α2 = 1 + α and α3 = α. Taking α = 0, we get α1 = 2,
α2 = 1 and α3 = 0. So,

2(1,1,2) + (−1,−1,1) + 0(2,2,1) = (1,1,5)
c) The associated matrix is

1  2 −1 0
1  1  0 3
1 −1  2 1

The row operations R2 → R2 − R1, R3 → R3 − R1, R3 → R3 − 3R2
give the matrix

1  2 −1  0
0 −1  1  3
0  0  0 −8

In the last row, the entries in all the columns, except
the one in the last column, are zero and the last entry
is −8. So, the associated system of equations is
inconsistent. Therefore b is not in [S].
a + 2b − 3c = 0
2a + 3b − 4c = 0
3a + b + c = 0
c) Linearly dependent.
d) Linearly independent.
E8) To show that {sin x, cos x} is linearly independent, suppose a, b ∈ ℝ are such
that a sin x + b cos x = 0.