Lecture – 04
Systems of Linear Equations I
Okay, so let us begin with today's lecture. In the previous lecture, we had started looking at why
linear algebra is important to study.
(Refer Slide Time: 00:35)
Then we started looking at how linearity arises in various setups. Basically, there are 2 ways linearity can arise: one is through linear equations and the other is by studying geometry in an algebraic setup. We started looking at systems of linear equations, first in 2 variables and then in 3 variables.
In both dimension 2 and dimension 3, that is, 2 variables and 3 variables, we analyzed geometrically how the solutions can be obtained, and that led us to an algebraic method of solving those equations, namely eliminating the variables one at a time and checking whether the system has a solution or not. So today we will start by looking at a general system of linear equations and generalize the Gauss elimination method to a system of 𝑚 linear equations in 𝑛 variables.
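For concreteness, a general system of 𝑚 linear equations in 𝑛 variables can be written as follows; this is standard notation and may differ slightly from the notation on the slide.
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{aligned}
\]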
(Refer Slide Time: 01:43)
(Refer Slide Time: 02:49)
So what have we done? We have reduced the number of equations required to find the solution to 𝑟 equations, and the remaining 𝑚 − 𝑟 equations are all of the form 0 = 0. This is the method suggested by Gauss: perform elementary row operations on the system and transform it into this form. But what is the advantage of this? The remaining equations are always true, so let us look at the last non-trivial equation, the 𝑟th one.
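Schematically, and assuming for simplicity that the pivots fall on the leading variables (the indexing on the slide may differ), the reduced system looks like this, with 𝑟 non-trivial equations followed by equations that read 0 = 0:
\[
\begin{aligned}
c_{11}x_1 + c_{12}x_2 + \cdots + c_{1n}x_n &= d_1\\
c_{22}x_2 + \cdots + c_{2n}x_n &= d_2\\
&\;\;\vdots\\
c_{rr}x_r + \cdots + c_{rn}x_n &= d_r\\
0 &= 0\\
&\;\;\vdots\\
0 &= 0
\end{aligned}
\]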
Suppose it so happens that in the 𝑟th equation all the coefficients are 0 but 𝑑𝑟 is not equal to 0. Then you get 0 equal to a non-zero number, and that means there is some inconsistency in the system, so the system will not have any solution. Such a system is called inconsistent.
(Refer Slide Time: 07:21)
So the conclusion is: if for some 𝑟 between 1 and 𝑛, where 𝑛 is the number of variables, all the coefficients 𝑐𝑟𝑗 = 0 for every 𝑗, so that the left hand side of that equation is 0, but the right hand side 𝑑𝑟 is not equal to 0, then the system is inconsistent. The second possibility is that the system is consistent, meaning this does not happen. Then there are solutions, and 𝑛 − 𝑟 of the variables get arbitrary values.
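As a tiny illustration of inconsistency (my own example, not the one on the slide): the system
\[
\begin{aligned}
x_1 + x_2 &= 1\\
x_1 + x_2 &= 2
\end{aligned}
\]
reduces, on subtracting the first equation from the second, to the equation 0 = 1, so it has no solution.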
So let us just look at the previous equation, supposing the system is consistent. The 𝑟th equation involves the variables 𝑥𝑟, 𝑥𝑟+1, …, 𝑥𝑛. Using this equation, we can solve for one of these variables in terms of the others: give the remaining variables arbitrary values and find the value of that one variable from them. That is what it means to say that 𝑛 − 𝑟 variables get arbitrary values, and the remaining variables can then be determined in terms of these by backward substitution. So let us summarize this.
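For example (again a small system of my own), take
\[
\begin{aligned}
x_1 + x_2 + x_3 &= 3\\
x_2 - x_3 &= 1
\end{aligned}
\]
Here 𝑟 = 2 and 𝑛 = 3, so 𝑛 − 𝑟 = 1 variable is free. Give 𝑥3 an arbitrary value 𝑡; then the second equation gives 𝑥2 = 1 + 𝑡, and substituting back into the first gives 𝑥1 = 3 − 𝑥2 − 𝑥3 = 2 − 2𝑡.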
The system being inconsistent means there is no solution, and that happens when one of the equations, reduced to this specific form by the row operations, reads 0 equal to a non-zero number. Consistent means there is at least 1 solution, and when some variables are free you get infinitely many solutions by putting various values for those variables and calculating the others in terms of them. So this is the method suggested by Gauss to solve a system of linear equations.
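A minimal computational sketch of this procedure, written in plain Python for illustration (this is not code from the lecture, and it assumes a square system with a unique solution whenever it is consistent):

```python
# Gauss elimination with back substitution, using exact fractions.
from fractions import Fraction

def gauss_solve(A, b):
    """Solve A x = b for a square system, or report inconsistency."""
    m, n = len(A), len(A[0])
    # augmented matrix [A | b]
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(m)]

    for col in range(n):                      # eliminate below each pivot
        pivot = next((r for r in range(col, m) if M[r][col] != 0), None)
        if pivot is None:
            continue                          # no pivot in this column
        M[col], M[pivot] = M[pivot], M[col]   # row interchange
        for r in range(col + 1, m):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]

    # a row reading 0 = non-zero means the system is inconsistent
    for row in M:
        if all(v == 0 for v in row[:n]) and row[n] != 0:
            raise ValueError("inconsistent system")

    x = [Fraction(0)] * n                     # back substitution
    for i in range(n - 1, -1, -1):
        if M[i][i] == 0:
            continue                          # free variable, left at 0
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# small usage example: x = 5, y = 3, z = -2
print(gauss_solve([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27]))
```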
To make the method more systematic, we realize that it is the coefficients that are important and not the variables. So we look at the array of numbers formed by the coefficients, and it becomes important to study such arrays of numbers. That gave us the concept of a matrix.
(Refer Slide Time: 10:00)
So such an array of numbers is called a matrix. We saw in the previous lecture that this system of m linear equations in n variables can be written in terms of matrices. The matrix 𝐴 consists of the coefficients of these m equations, forgetting the variables and forgetting the arithmetic symbols; 𝑋 is the unknown vector of variables. 𝑋 has 𝑛 rows and 1 column, and 𝐴 has 𝑚 rows and 𝑛 columns, so when you multiply 𝐴 with 𝑋, you get an 𝑚 × 1 column, the same shape as 𝑏.
So in the matrix form you get the equation 𝐴𝑋 = 𝑏. The idea of writing it in the matrix form is that we are not really concerned with 𝑋; we are only concerned with 𝐴 and 𝑏 when we apply the elementary transformations.
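In standard notation, the matrix form of the system is
\[
A=\begin{pmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ \vdots & & & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn}\end{pmatrix},\qquad
X=\begin{pmatrix}x_1\\ \vdots\\ x_n\end{pmatrix},\qquad
b=\begin{pmatrix}b_1\\ \vdots\\ b_m\end{pmatrix},\qquad AX=b,
\]
where 𝐴 is 𝑚 × 𝑛, 𝑋 is 𝑛 × 1 and 𝑏 is 𝑚 × 1.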
So the original matrix 𝐴 is transformed to 𝐴̃, which is of this form, and the vector 𝑏 gets transformed to 𝑏̃, because whatever operations you do on the left hand side, you have to do on the right hand side also. So when 𝐴 changes, 𝑏 automatically changes according to those operations. This 𝐴̃ is the coefficient matrix of the new system, which is equivalent to the earlier one.
And it has a special form. Look at the bottom: there are some rows which are entirely 0, and in the non-zero rows the column where the first possibly non-zero coefficient occurs keeps moving to the right. This is a special form of a matrix. We will spend some time on this form because it is going to be important for all future calculations.
(Refer Slide Time: 12:43)
So to keep track of 𝐴 and the vector 𝑏 together, we form a new matrix: 𝐴 is 𝑚 × 𝑛, 𝑏 is 𝑚 × 1, and we place 𝑏 as one more column next to 𝐴. The row operations, whatever we do on 𝐴, are also going to be done on 𝑏, so this is the matrix on which we are going to operate. The idea is to do the elementary row operations and change the 𝐴 part of this matrix to that special form.
(Refer Slide Time: 13:31)
So this matrix is called the augmented matrix, because we have added 1 more column to 𝐴. It is the augmented matrix of the system of linear equations, and the Gauss elimination method is summarized as reducing this matrix to a special form by elementary row operations. So what is that special form? We will discuss that now.
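For instance (a small example of my own), for the system 𝑥1 + 𝑥2 = 3, 2𝑥1 − 𝑥2 = 0, the augmented matrix is
\[
[A \mid b]=\left(\begin{array}{cc|c}1 & 1 & 3\\ 2 & -1 & 0\end{array}\right).
\]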
(Refer Slide Time: 14:00)
Here is the special form. It is called the row echelon form of a matrix. Take any non-zero 𝑚 × 𝑛 matrix 𝑀; of course, the 0 matrix has nothing to do with this, it will not give us anything, so we assume that there is at least 1 non-zero entry in the matrix. We say this matrix is in row echelon form (REF is short for row echelon form) if there is a number 𝑟 between 1 and the minimum of 𝑚 and 𝑛, and there are natural numbers 𝑝1 < 𝑝2 < ⋯ < 𝑝𝑟 with the following properties. The first property says that only the first 𝑟 rows of the matrix are possibly non-zero; the remaining rows are all 0 rows. So that is the characteristic property of this number 𝑟.
(Refer Slide Time: 15:44)
What does that mean? It means that in the first r rows, the positions of the first non-0 entries cannot go back, so to speak. If in some row the first non-0 entry comes at a certain column, then in the next row it has to come to the right of that column; all the previous entries of that row are 0. So the first non-0 entry of the 𝑖th row comes in the 𝑝𝑖th column: if the first row's first non-0 entry comes somewhere in column 𝑝1, then the second row's first non-0 entry should not come in that column or on the left side of it; it should come only in a column to the right of it. That is the property of 𝑝1, 𝑝2, …, 𝑝𝑟.
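As an illustration (not the matrix on the slide), here is a 4 × 5 matrix in row echelon form with 𝑟 = 3 and pivot columns 𝑝1 = 1, 𝑝2 = 3, 𝑝3 = 4, where the stars stand for arbitrary entries:
\[
\begin{pmatrix}
2 & * & * & * & *\\
0 & 0 & 5 & * & *\\
0 & 0 & 0 & 1 & *\\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\]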
(Refer Slide Time: 17:32)
For a non-zero 𝑚 × 𝑛 matrix, there is such a number 𝑟. Why does 𝑟 have to be at most the minimum of 𝑚 and 𝑛? Because the first 𝑟 rows are non-0, 𝑟 has to be less than or equal to the number of rows 𝑚; and because the first non-0 entries of these rows sit in 𝑟 distinct columns, 𝑟 also has to be less than or equal to the number of columns 𝑛.
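A small Python sketch of this definition (my own helper, not from the lecture): it finds the first non-zero column of each row and checks that the non-zero rows come first and that their pivot columns strictly increase.

```python
def is_row_echelon(M):
    """Check whether the matrix M (a list of rows) is in row echelon form."""
    def pivot_col(row):
        # column index of the first non-zero entry, or None for a zero row
        return next((j for j, v in enumerate(row) if v != 0), None)

    pivots = [pivot_col(row) for row in M]
    r = sum(p is not None for p in pivots)      # number of non-zero rows

    # all zero rows must come after the non-zero rows
    if any(p is not None for p in pivots[r:]) or any(p is None for p in pivots[:r]):
        return False
    # pivot columns p_1 < p_2 < ... < p_r must strictly increase
    return all(pivots[i] < pivots[i + 1] for i in range(r - 1))

print(is_row_echelon([[2, 1, 0], [0, 0, 5], [0, 0, 0]]))  # True
print(is_row_echelon([[0, 1, 0], [3, 0, 0]]))             # False
```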